Abstract

This paper proposes a new meta-heuristic algorithm, named the wild geese migration optimization (GMO) algorithm. It is inspired by the social behavior of wild geese swarms in nature, which maintain special formations during long-distance migration in small groups for survival and reproduction. A mathematical model is established based on these social behaviors to solve optimization problems. The performance of the GMO algorithm is tested on the stable benchmark functions of CEC2017, and its potential for practical problems is studied on five engineering design problems and the inverse kinematics of a robot. The test results show that the GMO algorithm has excellent computational performance compared to other algorithms. The practical application results show that, compared with well-known algorithms in the literature, the GMO algorithm has strong applicability, more accurate optimization results, and greater competitiveness on challenging problems with unknown search spaces. The GMO algorithm enriches the family of swarm intelligence optimization algorithms and provides a new approach for solving engineering design problems and robot inverse kinematics.

1. Introduction

The rapid development of information and intelligent technologies has spawned many new intelligent application requirements. It has also led to many new optimization problems with nonlinearity, complexity, and constraints in engineering, science, economics, management, and other fields. Traditional optimization methods can no longer meet these computational needs, and the search for efficient optimization algorithms has become a research hotspot in related disciplines [1–3]. Meta-heuristic algorithms are widely used to solve optimization problems due to their simplicity, flexibility, and derivation-free mechanism [4–6]. These algorithms are mathematically grounded and find the best possible solution from all candidate solutions through an iterative calculation mechanism [7, 8].

Most meta-heuristic algorithms are inspired by the social nature of biological swarms, the laws of natural phenomena, and human intelligence. In general, the algorithms are mainly divided into three categories, and those based on the laws of natural phenomena can be further divided into algorithms based on evolutionary laws and algorithms based on physical laws. The evolution-based algorithms mainly include the genetic algorithm (GA) [9], differential evolution algorithm (DE) [10], black hole algorithm (BH) [11], natural aggregation algorithm (NAA) [12], barnacles mating optimizer (BMO) [13], biogeography-based optimization (BBO) [14], bird mating optimizer (BMO) [15], and so on. Among them, the GA algorithm is inspired by Darwin’s theory of evolution. Each individual in the algorithm is assigned a specific gene, and the iterative optimization process is achieved by the genetic evolution of individual genes. The NAA algorithm is inspired by the collective decision-making intelligence of group-living animals. Individuals decide to enter or leave a subpopulation based on its quality and crowding, achieving localized and generalized search of the problem space. The physics-based algorithms mainly include the simulated annealing algorithm (SA) [16], central force optimization algorithm (CFO) [17], electromagnetic field optimization algorithm (EFO) [18], water evaporation optimization algorithm (WEO) [19], gravitational search algorithm (GSA) [20], and so on. The algorithms based on human social behavior mainly include the teaching-learning-based optimization algorithm (TLBO) [21], student psychology-based optimization algorithm (SPBO) [22], social-based algorithm (SBA) [23], and so on. The kho-kho optimization (KKO) algorithm [24] and battle royale algorithm (BRO) [25] are inspired by the rules players follow in games.

At present, the most studied class of algorithms is based on biological swarm behavior, also called swarm intelligence optimization algorithms. These mainly include the particle swarm optimization algorithm (PSO) [26], bat-inspired algorithm (BA) [27], artificial bee colony algorithm (ABC) [28], fruit fly optimization algorithm (FOA) [29], migrating birds optimization (MBO) [30], cuckoo search algorithm (CS) [31], cuttlefish algorithm (CFA) [32], ant colony optimization algorithm (ACO) [33], moth-flame optimization algorithm (MFO) [34], mayfly optimization algorithm (MA) [35], chicken swarm optimization algorithm (CSO) [36], naked mole-rat algorithm (NMR) [37], and so on. Among them, the PSO algorithm is inspired by the social behavior of bird swarms. Each particle continuously explores the solution space to find the global optimum, with a position update strategy based on each particle’s historical best position and the global best position. The inspiration for the MBO algorithm comes from the V-shaped flight formation of migrating birds. Position updates are applied sequentially starting from the optimal value, and the position of the current individual is compared with its neighbors; if a neighbor’s fitness is better, the current individual is replaced. The CS algorithm is a meta-heuristic based on the cuckoo’s brood parasitic behavior and birds’ Lévy flight behavior; it searches for the global optimal solution through Lévy flights and random walks. The CFA algorithm is inspired by the colour-changing behavior of cuttlefish. The population is divided into four independent groups, and an independent search strategy is designed for each group by simulating the two processes of reflection and visibility.

Meta-heuristic algorithms are proposed not only for theoretical research in the laboratory but, more importantly, in the hope of achieving satisfactory results in different practical application fields. Much algorithm research is therefore grounded in specific practical applications that demonstrate computational performance. For instance, Taymaz proposed the BRO algorithm [25] and applied it to the inverse kinematics problem of the PUMA560 robot; the research shows that the BRO algorithm achieves excellent results in the position solution. Amir et al. proposed the CS algorithm [31] and verified its excellent performance on 13 engineering design problems. Seyedali proposed the ant lion optimizer (ALO) [38] and applied it to the design of ship propellers, finding a smooth blade shape that improves propeller efficiency. Mirjalili et al. proposed the grey wolf optimizer (GWO) [39] and applied it to optimize the BSPCW structure in the optical buffer design problem; the optimized structure has good bandwidth and does not require any frequency mixing. Seyedali proposed the sine cosine algorithm (SCA) [40] and applied it to the two-dimensional design of aircraft wings, with minimal drag as the goal of the structural optimization. The optimization results show that the drag is reduced from 0.009 to 0.0061, a pronounced effect. Li et al. proposed the slime mold algorithm (SMA) [41] and verified its performance on multiple benchmark functions and five practical engineering design problems, where it exhibits satisfactory computational performance. Kaur et al. proposed the tunicate swarm algorithm (TSA) [42] and applied it to constrained and unconstrained engineering problems, verifying its applicability.

In order to mimic nature more effectively and improve the search performance of algorithms [43], the fitness-distance balance (FDB) method proposed by Kahraman et al. [44] has made significant contributions; it combines FDB with the symbiotic organisms search algorithm (FDB-SOS). Compared with 13 meta-heuristic search (MHS) techniques, the excellent performance of the FDB-SOS algorithm is verified on 90 benchmark functions. Aras et al. [45] proposed the FDBSFS algorithm, which uses the FDB mechanism to optimize the stochastic fractal search algorithm. Compared with 39 MHS algorithms on 89 unconstrained benchmark functions and 5 constrained engineering problems, the powerful search performance and competitiveness of the FDBSFS algorithm are verified. Ozkaya et al. [46] redesigned the mutation operator of the improved adaptive differential evolution (LSHADE) algorithm with the FDB mechanism, defining the FDB-LSHADE algorithm. Compared with 8 other MHS algorithms, the FDB-LSHADE algorithm shows excellent performance on CEC14, CEC17, and energy hub economic dispatch problems. To achieve higher performance and broader applicability, researchers have also considered combining swarm intelligence algorithms with deep learning methods. For instance, Ghasemi-Darehnaei et al. [47] proposed a swarm intelligence ensemble deep transfer learning method (SI-EDTL) and used the whale optimization algorithm (WOA) to select the optimal hyperparameters of SI-EDTL, which is applied to multiple-vehicle detection in unmanned aerial vehicle (UAV) images. Basha et al. [48] proposed an improved Harris hawks optimization algorithm to optimize the convolutional neural network (CNN) architecture; compared with similar methods, the network achieves superior performance in classifying various grades of brain tumors. Singh et al. [49] proposed a multistage particle swarm optimization (MPSO) algorithm to explore the CNN architecture and its hyperparameters (MPSO-CNN), which achieved better performance on 5 benchmark datasets. Hilal et al. [50] studied a remote sensing image classification model (FCMBS-RSIC) based on fuzzy logic and the bird swarm algorithm and performed performance verification on benchmark open-access datasets; the FCMBS-RSIC model achieves enhanced results compared to other state-of-the-art methods. Zivkovic et al. [51] proposed a framework to improve the prediction accuracy of COVID-19 cases: an adaptive neuro-fuzzy inference system trained by an improved beetle antenna search algorithm. Kumar and Jaiswal [52] proposed a cognitive-driven analytics model (CNN-WSADT) for real-time data classification, which combines three methods: CNN, the wolf search algorithm, and decision trees.

With the efforts of researchers, new meta-heuristic algorithms are proposed every year and applied to complex optimization problems in different fields. Each algorithm balances its exploitation and exploration processes through a unique search mechanism, which may be intrinsic to its success [53–55]. However, no single meta-heuristic algorithm suits all optimization problems, as explained by the no-free-lunch theorem [56]. In other words, the same algorithm may achieve satisfactory results on one optimization problem but exhibit poor computational performance on another. Meanwhile, with the continuous innovation of science and technology, the complexity and challenge of optimization problems keep increasing. While improving traditional algorithms, researchers therefore also need to propose new algorithms and theories. This motivates us to propose a new meta-heuristic algorithm inspired by wild geese migration. To the authors’ knowledge, there is no prior study on this topic in the optimization algorithm literature.

This paper describes a new meta-heuristic optimization algorithm (GMO). The algorithm simulates the social behavior of wild geese migration and designs multiple migration groups. The iterative process of the GMO algorithm mainly comprises randomly establishing migration groups, synchronous migration, and free foraging. Randomly establishing a migration group means that its members are randomly generated with the head goose (the best individual in the migration group) as the center. Synchronous migration means that individuals in each migration group update their positions with equal steps. Free foraging refers to individuals moving within a small random range. To evaluate the performance of the GMO algorithm, simulation experiments are carried out on 29 stable benchmark functions from CEC2017. The algorithm is also applied to five engineering design problems and the inverse kinematics problem of a 7R 6DOF robot and is compared with other algorithms reported in the literature. The results show that the computational performance of the GMO algorithm is competitive and that it effectively solves practical engineering problems.

The main contributions of this paper are as follows:
(1) The development and latest research results of meta-heuristic algorithms are analyzed through the literature, providing a theoretical basis and reference for the new algorithm proposed in this paper.
(2) This paper proposes a new swarm intelligence algorithm, named the GMO algorithm, inspired by the social behavior of the long-distance migration of wild geese. The algorithm’s search mechanism of randomly establishing migration groups, synchronous migration, and free foraging effectively balances the exploitation and exploration processes in the search space.
(3) Simulation experiments are carried out on the 29 stable benchmark functions of CEC2017, with each function tested in 10, 30, 50, and 100 dimensions. The experimental results of the GMO algorithm and 5 other algorithms are compared in detail, showing that the GMO algorithm has good convergence accuracy and speed, strong stability, and a short running time.
(4) The GMO algorithm is applied to five engineering design problems. Compared with results reported in other studies, the GMO algorithm performs well on practical problems in different search spaces, verifying its applicability and feasibility for engineering optimization.
(5) The GMO algorithm is used to solve the inverse kinematics problem of the 7R 6DOF robot. The results show that the GMO algorithm outperforms the comparison algorithms in solving the inverse kinematic pose problem, with higher solution accuracy, providing a new method for solving the inverse kinematics problem of robots.

The rest of this paper is organized as follows. Section 2 presents the GMO algorithm and introduces its primary sources of inspiration and design principles. Section 3 gives the simulation experiment of the GMO algorithm by benchmark functions, comparing it with other algorithms to verify its computational performance. Section 4 is devoted to solving five engineering optimization problems using the GMO algorithm and proving the algorithm’s applicability. Section 5 successfully solves the inverse kinematics problem of the 7R 6DOF robot through the GMO algorithm. Finally, the conclusion of this paper and directions for possible future research are given in Section 6.

2. GMO Algorithm

In this section, the inspiration for the GMO algorithm is first introduced to better understand the proposed methodology. Then, the mathematical model of the algorithm is provided, and its implementation flow and pseudocode are described. Finally, the time complexity analysis of the GMO algorithm is carried out.

2.1. Inspiration

The wild goose is a general term for birds of the goose genus, and it is also an excellent air traveller. Every autumn, wild geese fly in droves from Siberia to the south for the winter; the following spring, after a long journey, they return to Siberia to lay eggs and breed. During migration, each migration group consists of many geese, and experienced head geese lead them to fly in a line- or V-shaped formation, as shown in Figure 1. This is a miraculous natural phenomenon.

During flight, wild geese generate vortices and updrafts by constantly flapping their wings. The geese that follow closely fly within these air currents, saving a great deal of energy. The head geese, however, have no updraft to exploit, so their energy is consumed the fastest. Therefore, to sustain long-distance flight, each migration group needs to change its formation and head goose frequently. Group migration is also conducive to exchanging information and avoiding natural enemies [57].

2.2. Algorithm Principles and Mathematical Models

The GMO algorithm’s initial population is randomly generated in the solution space, and a certain number of wild geese are selected as the initial head geese. The wild geese swarm migrates under the leadership of the head geese. In the GMO algorithm, the population size is N, the number of head geese is M, and the initial radius of each migration group is set to L.

2.2.1. Formation of Migration Groups

In each iteration, the migration groups are reestablished according to the positions of the head geese. The members of each group are randomly distributed within the radius L with the head goose as the center. The purpose is to realize the replacement of the head geese and the transformation of the formation. The mathematical model is given in equation (1), where X_i(t) represents the position of the i-th individual at the t-th iteration (i = 1, 2, …, N), T is the maximum number of iterations (t = 1, 2, …, T), H_j(t) represents the position of the j-th head goose at the t-th iteration (j = 1, 2, …, M), and b represents the number of members in each migration group (b = N/M).
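Since equation (1) is not reproduced in this text, the following Python sketch illustrates one plausible reading of the group-formation step: b members are scattered uniformly within radius L around each head goose. The function name, the uniform-offset form, and the bounds handling are assumptions, not the paper's exact model.

```python
import numpy as np

def form_groups(heads, b, L, bounds, rng):
    # Scatter b members uniformly within radius L of each head goose,
    # clipped to the feasible region. The uniform-offset form is an
    # assumption standing in for equation (1), which is not shown here.
    lo, hi = bounds
    groups = [np.clip(h + rng.uniform(-L, L, size=(b, h.size)), lo, hi)
              for h in heads]
    return np.vstack(groups)          # shape (M*b, dim): the rebuilt population

rng = np.random.default_rng(42)
heads = np.array([[0.0, 0.0], [5.0, 5.0]])   # M = 2 head geese in a 2-D space
pop = form_groups(heads, b=3, L=0.5, bounds=(-10.0, 10.0), rng=rng)
print(pop.shape)   # (6, 2)
```

Every member stays within L of its head goose (in each coordinate), which is what allows the radius L to control group density later in the algorithm.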

2.2.2. Synchronized Flight

During migration, head geese in nature rely mainly on environmental information, historical memory, and flight experience to guide the flock, while each group member maintains a relatively fixed position when flying with the head goose. The GMO algorithm uses a synchronous flight strategy to simulate this behavior: the flight steps of all members of a migration group are set to be equal. The position update information of the individuals in a migration group is derived from the head goose; it is mainly based on the global optimal position and also refers to the position information of other head geese. A schematic diagram of the flight process of a migration group is shown in Figure 2, and the mathematical model is given in equations (2) and (3), where G(t) represents the global optimal individual and H_r(t) is a randomly selected head goose. X_i(t) and H_j(t) represent a member and the head goose of a migration group, respectively. The flight step size is calculated from a weight w, where f_j is the fitness value of the j-th head goose; f_worst, f_avg, and f_best represent the worst, average, and best fitness values of the head geese, respectively; and w is mainly used to control the proportion of other head geese’s experience information. If f_j < f_avg, the value of w is small, meaning that the j-th head goose is already excellent and does not need to learn much from the other head geese. The exact opposite holds when f_j > f_avg.
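A minimal sketch of the synchronous flight step, under stated assumptions: the shared step combines attraction toward the global best with a weighted reference to a randomly chosen head goose, and the weight w is derived from the head goose's fitness relative to the flock. Both the step form and the weight expression are reconstructions from the prose, not the paper's exact equations (2) and (3).

```python
import numpy as np

def synchronous_flight(group, head, g_best, h_rand, f_head, f_worst, f_best):
    # w grows toward 1 as the head goose's fitness approaches the worst,
    # so weaker head geese borrow more experience from other head geese.
    w = (f_head - f_best) / (f_worst - f_best + 1e-12)
    step = (np.random.rand() * (g_best - head)
            + w * np.random.rand() * (h_rand - head))
    return group + step, head + step   # equal steps: the formation is kept

rng = np.random.default_rng(1)
group = rng.uniform(-1.0, 1.0, size=(3, 2))   # three members of one group
head = np.zeros(2)
new_group, new_head = synchronous_flight(
    group, head, g_best=np.ones(2), h_rand=-np.ones(2),
    f_head=2.0, f_worst=5.0, f_best=1.0)
print(np.allclose(new_group - group, new_head - head))   # True
```

The single shared `step` is the point of the strategy: every member of the group moves by exactly the same vector, so relative positions (the formation) are preserved.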

2.2.3. Free Foraging

Resting and foraging are inevitable for migration groups during long-distance flights; in nature, wild geese often choose lakes or larger bodies of water as foraging areas. During free foraging, the migration group members explore randomly according to the information of the head goose and maintain a certain connection within a small area. At the same time, the migration group maintains its movement trend through the optimal location information. After foraging, the wild geese regroup and continue migrating. A schematic diagram of the free foraging process is shown in Figure 3, and the mathematical model is given in equation (4), where r1 and r2 are random numbers in [0, 1] used to control the movement step size of individuals during foraging, and L is the radius of the group range, which controls the distance between the migration group members and the head goose.
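Equation (4) is likewise not reproduced here, so the sketch below is an illustrative reading of the prose: each member wanders within radius L of its head goose while keeping a random drift toward the global best position.

```python
import numpy as np

def free_foraging(group, head, g_best, L):
    # Random wander within [-L, L] of the head goose, plus a random pull
    # toward the global best. This particular combination is an assumption
    # standing in for equation (4), which is not shown in this text.
    r1 = np.random.rand(*group.shape)
    r2 = np.random.rand(*group.shape)
    return head + L * (2.0 * r1 - 1.0) + r2 * (g_best - head)

head = np.array([1.0, 1.0])
group = np.tile(head, (4, 1))              # four members, all at the head
out = free_foraging(group, head, g_best=head, L=0.2)
print(np.all(np.abs(out - head) <= 0.2))   # True: members stay near the head
```

When the global best coincides with the head goose (as in the example), members stay within L of the head, which matches the "small area" behavior described above.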

2.2.4. Selection of the Head Geese

During the long-distance migration of wild geese, the head geese are the most crucial individuals, and they are the leaders of the entire wild geese swarm. The head geese must be replaced frequently to achieve high flight durability. Therefore, the optimal individuals in each migration group will be selected as the head geese of the new generation after each location update of the GMO algorithm. This selection strategy not only allows the head geese to carry excellent location information but also ensures the dispersion of the head geese’s positions, so that the algorithm has an excellent ability to balance exploitation and exploration.

After all head geese have been replaced, the migration group radius L is reduced according to equation (5), where T is the maximum number of iterations and t is the current iteration number. The purpose is to increase the density of members in each group and improve the exploration accuracy of the algorithm.
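Equation (5) is not shown in this text; a simple linear decay of the radius from its initial value to zero over the T iterations is one common choice and is used below purely for illustration.

```python
def shrink_radius(L0, t, T):
    # Assumed linear decay standing in for equation (5): the radius falls
    # from L0 at t = 0 to 0 at t = T, densifying each migration group.
    return L0 * (1.0 - t / T)

print(shrink_radius(2.0, 0, 100))    # 2.0 at the start
print(shrink_radius(2.0, 100, 100))  # 0.0 at the end
```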

2.3. Implementation of GMO Algorithm

The GMO algorithm is a new stochastic optimization algorithm. Multiple random positions within the solution space are chosen as initial solutions, and then all solutions are iterated and optimized continuously to find the optimal solution. The flowchart and pseudocode of the GMO algorithm are presented in Figure 4 and Algorithm 1, respectively.

Initialize the positions of all individuals and the related parameters. Select the initial head geese.
For t = 1 : T
  For j = 1 : M
    For i = (b(j − 1) + 1) : (b·j)
      Rebuild the migration group with head goose j as the center by equation (1).
    End for
  End for
  If rand > 0.5
    For j = 1 : M
      For i = (b(j − 1) + 1) : (b·j)
        The members of the migration group fly synchronously by equations (2) and (3).
      End for
    End for
  Else
    For j = 1 : M
      For i = (b(j − 1) + 1) : (b·j)
        The migration group forages freely by equation (4).
      End for
    End for
  End if
  Update the migration group range radius by equation (5). Recalculate the fitness values and select the optimal individual in each migration group as the new head goose.
End for
Record the optimal fitness value and the corresponding individual position information.

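The pseudocode above can be sketched end to end in Python as follows. All concrete update formulas stand in for equations (1)–(5), which are not reproduced in this text, so the uniform scatter, linear radius decay, and step/weight expressions are assumptions; only the control flow follows Algorithm 1.

```python
import numpy as np

def gmo(fitness, dim, bounds, N=40, M=4, T=200, L0=None, seed=None):
    """Minimal GMO sketch following Algorithm 1; N is assumed divisible by M."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    if L0 is None:
        L0 = 0.1 * (hi - lo)                 # assumed initial group radius
    b = N // M                               # members per migration group
    X = rng.uniform(lo, hi, size=(N, dim))   # initial population
    f = np.apply_along_axis(fitness, 1, X)
    order = np.argsort(f)
    heads = X[order[:M]].copy()              # best M individuals lead
    g_best, g_val = X[order[0]].copy(), f[order[0]]

    for t in range(T):
        L = L0 * (1.0 - t / T)               # assumed form of equation (5)
        for j in range(M):                   # rebuild groups (equation (1))
            X[j*b:(j+1)*b] = np.clip(
                heads[j] + rng.uniform(-L, L, size=(b, dim)), lo, hi)
        if rng.random() > 0.5:               # synchronous flight
            for j in range(M):
                h_rand = heads[rng.integers(M)]
                # stale fitness array reused as a rough scale for the weight
                w = (fitness(heads[j]) - g_val) / (f.max() - g_val + 1e-12)
                step = (rng.random() * (g_best - heads[j])
                        + w * rng.random() * (h_rand - heads[j]))
                X[j*b:(j+1)*b] = np.clip(X[j*b:(j+1)*b] + step, lo, hi)
        else:                                # free foraging
            for j in range(M):
                r1 = rng.random((b, dim))
                r2 = rng.random((b, dim))
                X[j*b:(j+1)*b] = np.clip(
                    heads[j] + L*(2*r1 - 1) + r2*(g_best - heads[j]), lo, hi)
        f = np.apply_along_axis(fitness, 1, X)
        for j in range(M):                   # best of each group leads next
            k = j*b + np.argmin(f[j*b:(j+1)*b])
            heads[j] = X[k]
            if f[k] < g_val:
                g_best, g_val = X[k].copy(), f[k]
    return g_best, g_val

best, val = gmo(lambda x: float(np.sum(x**2)), dim=5,
                bounds=(-10.0, 10.0), seed=1)
print(f"GMO best value on the sphere function: {val:.2e}")
```

On the 5-dimensional sphere function the sketch converges toward the origin, illustrating the interplay of group rebuilding, the two alternating movement phases, and head-goose replacement.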
2.4. Time Complexity

In practical engineering applications, the computational efficiency of an algorithm is as important as its computational performance, and time complexity analysis is one of the essential means of evaluating efficiency. It analyzes the algorithm’s complexity under the condition that the population size N and the number of iterations T remain fixed. The calculation process of the GMO algorithm mainly includes three parts: population initialization O(N), the establishment of migration groups O(NT), and synchronized flight or free foraging O(NT). Therefore, the time complexity of the GMO algorithm is O(GMO) = O(N) + O(NT) + O(NT) = O(NT). The complexity contains no exponential terms and grows linearly in the product NT. From the above analysis, the GMO algorithm has low time complexity.

3. Experimental Results and Analyses

3.1. Benchmark Functions and Parameter Setting

For a new meta-heuristic algorithm, its exploitation and exploration abilities must be tested with a large amount of quantitative data. In this work, the performance of the GMO algorithm is tested on 29 stable benchmark functions from the CEC2017 technical report (the F2 function is excluded because of its instability) [58]. The specific function names, feasible regions of the variables, and minimum values are recorded in Table 1, and the detailed function models can be obtained from [58]. The table covers 4 types of benchmark functions: unimodal, multimodal, hybrid, and composition functions. The test results on these benchmark functions indicate the potential of the GMO algorithm to solve practical problems.

In order to clearly illustrate the computing performance of the GMO algorithm, five optimization algorithms are selected for comparison: the PSO, BRO, CSO, ABC, and WOA algorithms. The common parameters of all algorithms are set as follows: population size N = 100, maximum number of iterations T = 500, and dimension D = 10, 30, 50, and 100; other related parameters are shown in Table 2. The experiments are run on the Windows 10 operating system with an Intel(R) Core(TM) i5-3470M CPU @ 3.20 GHz.

3.2. Experimental Results

In the calculation process of a meta-heuristic algorithm, random numbers in the solution space are generally used as initial values, so the results may differ depending on the initial values. Therefore, to avoid the influence of particular data on the overall results, 50 independent experiments are performed for each benchmark function, with the same initial values used in each independent experiment. This section gives the test results of the 6 algorithms on the 29 benchmark functions in dimensions D = 10, D = 30, D = 50, and D = 100. The specific experimental results are shown in Tables 3–14. The results for the unimodal and multimodal benchmark functions in the 4 dimensions are recorded in Tables 3, 6, 9, and 12, respectively; the results for the hybrid functions in Tables 4, 7, 10, and 13; and the results for the composition functions in Tables 5, 8, 11, and 14.

In order to verify the performance of the GMO algorithm, the mean, standard deviation, and running time over the 50 independent experiments on each benchmark function are selected as evaluation indicators. The mean evaluates the computing power and accuracy of the algorithm, the standard deviation evaluates its computational stability, and the running time reflects its complexity. In addition, to display the experimental results more clearly and intuitively, each table also records the ranking of the mean values and the results of the significance test. The rank is determined by the numerical value of the test results: the algorithm with the smallest mean is ranked 1st and the algorithm with the largest mean 6th; algorithms with identical means share the same rank, which then occupies two positions. The final overall ranking of each algorithm is the average of its rankings over all functions.
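The competition-style ranking described above (ties share a rank, which then occupies two positions) can be computed as follows. The mean values used here are purely illustrative, not taken from the paper's tables.

```python
import numpy as np

# Hypothetical mean errors of 6 algorithms on 3 benchmark functions
# (rows: functions, columns: algorithms); illustrative numbers only.
means = np.array([
    [3.2e3, 1.1e2, 4.5e5, 1.1e2, 7.8e3, 9.9e1],
    [2.0e1, 5.0e1, 5.0e1, 9.0e1, 1.0e0, 3.0e2],
    [4.0e0, 2.0e0, 8.0e0, 1.0e0, 6.0e0, 9.0e0],
])

# Competition ranking per function: rank = 1 + number of strictly smaller
# means, so tied values share a rank and the following rank is skipped.
ranks = 1 + (means[:, None, :] < means[:, :, None]).sum(axis=2)
overall = ranks.mean(axis=0)     # final ranking: average rank over functions
print(ranks[0])                  # [4 2 6 2 5 1]
print(overall)
```

In the first row, the two tied means (1.1e2) both receive rank 2 and rank 3 is skipped, matching the convention used in the tables.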

The significance test uses statistical methods to explore whether there are significant differences in data distributions. In this paper, significance tests are performed between the 50 results of the GMO algorithm and those of each comparison algorithm. The Wilcoxon rank-sum test or the independent-sample t-test (T-test) is used depending on the type of data: according to the results of the normality and variance homogeneity tests, the T-test is applied to normally distributed data and the Wilcoxon rank-sum test otherwise. The level of statistical significance is set at p = 0.05. p < 0.05 means that the results of the GMO algorithm are significantly different from those of the comparison algorithm, recorded as “1” in the tables; p ≥ 0.05 means there is no significant difference, recorded as “0”.
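The test-selection procedure described above can be sketched with SciPy's standard statistical tests. The use of the Shapiro-Wilk test for the normality check is an assumption, since the paper does not name the specific normality test applied.

```python
import numpy as np
from scipy import stats

def significance_flag(a, b, alpha=0.05):
    # T-test when both samples look normal (with a Levene check deciding
    # the equal-variance variant), Wilcoxon rank-sum test otherwise.
    # Returns 1 for a significant difference at level alpha, else 0.
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    equal_var = stats.levene(a, b).pvalue > alpha
    if normal:
        p = stats.ttest_ind(a, b, equal_var=equal_var).pvalue
    else:
        p = stats.ranksums(a, b).pvalue
    return int(p < alpha)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=50)   # 50 runs of one algorithm (synthetic)
b = rng.normal(2.0, 1.0, size=50)   # 50 runs of another, clearly shifted
print(significance_flag(a, b))      # 1: the two result sets differ
```

With two clearly shifted samples, either branch reports a significant difference, so the flag is 1 regardless of which test is selected.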

3.3. Evaluation of Exploitation and Exploration Capabilities

The unimodal functions (F1, F3) are often used to verify the exploitation ability of an algorithm because they have only one global optimum. The multimodal functions (F4–F10), with their many local optima, are well suited to testing the exploration ability of an algorithm.

The following conclusions can be drawn from the data in Tables 3, 6, 9, and 12. For D = 10, the results of the GMO algorithm are better than those of the comparison algorithms. For D = 30, the GMO algorithm obtains the optimal values on 6 functions; its results on the F3, F6, and F9 functions are not optimal but remain competitive. For D = 50 and D = 100, only the result of the GMO algorithm on the F3 function is not optimal. In addition, the comprehensive ranking of the means in Tables 3, 6, 9, and 12 is shown in Figure 5, from which it can be seen that the GMO algorithm achieves the best computational results. Meanwhile, the box plot of the convergence results obtained over 50 experiments on the F1–F10 functions (taking D = 50 as an example) is shown in Figure 6. The figure shows that the GMO algorithm maintains a lead in convergence accuracy and stability.

Based on the analysis results of the above data, it can be seen that the GMO algorithm proposed in this paper has good exploitation ability, exploration ability, and computational stability. This may be attributed to two points. One is that the migration group members move randomly in a small area near the head geese during the free foraging process. The other is that the individuals in each migration group keep moving synchronously during the migration process, which effectively expands the scope of exploration.

3.4. Ability to Avoid Local Minima

F11–F20 are hybrid functions, and F21–F30 are composition functions. These complex functions are obtained by combining, rotating, and shifting basic functions. Their common feature is a large number of local extrema in the solution space, which makes them closer to practical problems. These functions can verify the comprehensive ability of an algorithm to balance exploitation and exploration.

Based on the experimental data provided in Tables 4, 5, 7, 8, 10, 11, 13, and 14, the following conclusions can be drawn.
(1) For D = 10, the GMO algorithm achieves the best results on 17 benchmark functions; only its results on the F17, F21, and F24 functions are not optimal. For D = 30, the GMO algorithm does not achieve the best result on the F15 function but achieves the best results on all other benchmark functions. For D = 50, the GMO algorithm achieves the best results on 18 hybrid and composition functions; the best results on the other two functions (F17, F22) are obtained by the ABC algorithm. For D = 100, the GMO algorithm obtains the best results on all hybrid and composition functions.
(2) According to the means of the experimental results, a comprehensive ranking diagram of all algorithms is drawn. The comprehensive rankings on the hybrid and composition functions are shown in Figures 7 and 8, respectively. The GMO algorithm ranks first on both, proving that it can balance the contradictory demands of exploitation and exploration and that its computing power is more competitive than that of the other algorithms.
(3) The box plots of the convergence results of all algorithms on the F11–F30 functions are shown in Figure 9 (taking D = 50 as an example). The figure shows that the GMO algorithm has good stability on hybrid and composition functions.

Based on the above data analysis, the GMO algorithm has the comprehensive ability to solve complex problems of different dimensions. It can well balance the contradiction between exploitation and exploration in the complex solution space, and the algorithm shows good stability. This may be attributed to alternating between synchronous migration and free foraging processes in the GMO algorithm.

3.5. Convergence Analysis

The average convergence curve fully displays the convergence information during the solution process, which is very important for analyzing the computational power of an algorithm. Taking D = 50 as an example, this paper gives the average convergence curves of the GMO algorithm and the 5 comparison algorithms on the 29 functions, as shown in Figure 10. From the overall results, the convergence results of the GMO algorithm are the best on 27 functions and rank second on two functions (F3, F22), which powerfully illustrates the advantage of the GMO algorithm in terms of convergence ability. From the convergence behavior on individual functions, the convergence speed of the GMO algorithm is slow in the early stage. However, the GMO algorithm converges fast in the middle stage and quickly reaches the global optimum. This may be attributed to the large radius of the migration group in the early stage of the GMO algorithm: the wild geese fully explore the solution space during the synchronous migration process and store the exploration results. With the continuous iteration of the algorithm, the radius of the migration group is reduced and the position of the head geese is continuously optimized, so that the algorithm converges quickly until the best convergence effect is achieved.

3.6. Analysis of Significance Test and Running Time

In this section, the experimental results are further analyzed by statistical methods. The significance test (Wilcoxon rank-sum test or T-test) results for all data tables in Section 3.2 are counted, as shown in Table 15. In the table, “1” indicates a significant difference between the two samples, and “0” means no significant difference. “+” indicates that the performance of the GMO algorithm is better than other algorithms, and “−” indicates that the performance of the GMO algorithm is worse than other algorithms. Therefore, the number of “1+” in the results is counted, which can strongly demonstrate the advantages of the GMO algorithm.

From the statistical results in Table 15, it can be seen that comparing the GMO algorithm with the WOA, PSO, BRO, and CSO algorithms, there are at least 26 calculation results of “1+,” and comparing the GMO algorithm with the ABC algorithm, there are at least 20 calculation results of “1+.” Overall, the significance test results of the GMO algorithm compared with the other 5 algorithms can reach “1+” more than 96% of the time, which further illustrates the advantages of the GMO algorithm.
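The "1+"/"1−"/"0" counting procedure described above can be sketched in code. The following is an illustrative reconstruction (not the paper's implementation), using a normal-approximation Wilcoxon rank-sum test on two hypothetical samples of 50 runs; the sample data, function names, and significance level α = 0.05 are assumptions for the sketch.

```python
import math
import numpy as np

def ranksum_p(a, b):
    """Two-sided p-value of the Wilcoxon rank-sum test (normal approximation, no ties)."""
    n1, n2 = len(a), len(b)
    ranks = np.concatenate([a, b]).argsort().argsort() + 1  # ranks 1..n1+n2
    w = ranks[:n1].sum()                                    # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def significance_label(gmo, other, alpha=0.05):
    """Return '1+', '1-', or '0' as in Table 15 (minimization: lower mean is better)."""
    if ranksum_p(gmo, other) >= alpha:
        return "0"
    return "1+" if np.mean(gmo) < np.mean(other) else "1-"

# Hypothetical convergence results of 50 independent runs on one benchmark function.
rng = np.random.default_rng(0)
gmo_runs = rng.normal(1.0, 0.1, 50)     # clearly better (lower) fitness
other_runs = rng.normal(2.0, 0.1, 50)
print(significance_label(gmo_runs, other_runs))  # → 1+
```

Counting how often `significance_label` returns "1+" across all functions and dimensions yields the per-algorithm totals reported in Table 15.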

According to the data tables in Section 3.2, the running times of all algorithms are further summarized, as shown in Table 16. The statistical results show that the average running time of the GMO algorithm is similar to that of the PSO algorithm and lower than those of the WOA, BRO, and CSO algorithms. In addition, the benchmark functions corresponding to the minimum, median, and maximum running times of all algorithms are almost the same. This shows that the GMO algorithm has low time complexity and reliable stability.

3.7. Comparative Analysis

In this paper, D = 30 is taken as an example, and the experimental results of GMO are compared with the data in the literature [44, 45, 59, 60], as shown in Table 17. It can be seen from the table that the calculation results of the GMO algorithm are significantly better than those of the FSA and KABC algorithms. The performance of the GMO algorithm is similar to that of the FDB-SOS algorithm on the unimodal and composition functions, but the GMO algorithm performs better on the multimodal functions. Compared with the FDB-SFS algorithm, the calculation results of the GMO algorithm are of the same order of magnitude on most functions. This shows that the GMO algorithm is competitive even with these improved algorithms.

4. GMO Algorithm for Engineering Design Problems

In order to verify the applicability of the GMO algorithm to engineering design problems, this section selects five classical structural design problems and uses the GMO algorithm to solve them. In the experiments, the design variables serve as the individuals' location information in the optimization algorithm, and the calculation model of each problem serves as the objective function. First, the structural design problems are introduced in detail. The problems include the three-bar truss design problem, pressure vessel design problem, tension/compression spring design problem, gear train design problem, and cantilever beam design problem. Then, to demonstrate the superiority of the GMO algorithm in solving engineering design problems, the experimental results of the GMO algorithm are compared with the corresponding results of several other algorithms. The results of the other algorithms come from literature reports, including the KABC [60], DMMFO [61], GOA [62], LSA [63], ALO [38], CS [31], GSA [20], IAPSO [64], CPSO [65], MABGA [66], MBA [67], SOS [68], and CBO [69] algorithms. Finally, all experimental results are analyzed and discussed.

4.1. Three-Bar Truss Design Problem

Three-bar truss design is a classical optimization problem in mechanics [3, 38], and its mechanism schematic is shown in Figure 11. The problem aims to minimize the volume of a three-bar truss structure while satisfying the constraints of stress and loading force. The cross-sectional areas (x1, x2) of the connecting rods are used as the optimization variables, and the optimization objective function is as follows, where l is the spacing between the connecting rods, l = 100 cm, and 0 ≤ x1, x2 ≤ 1.

In the process of optimization, the design variables need to meet the constraints of structural stress, material deflection, and buckling. The three constraint formulas are as follows, where P = 2 kN/cm² and σ = 2 kN/cm².
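The standard literature formulation of this benchmark (the displayed equations (6) and (7) are not reproduced in this text) can be sketched as follows. This is an illustrative reconstruction, not the paper's code; in particular, the static-penalty constraint handling and the variable names are assumptions.

```python
import math

L_SPACING, P, SIGMA = 100.0, 2.0, 2.0   # l = 100 cm, P = sigma = 2 kN/cm^2

def truss_volume(x1, x2):
    """Objective: volume of the three-bar truss, f = (2*sqrt(2)*x1 + x2) * l."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_SPACING

def truss_constraints(x1, x2):
    """Stress, deflection, and buckling constraints g1, g2, g3 <= 0."""
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (x1 + math.sqrt(2.0) * x2) * P - SIGMA
    return (g1, g2, g3)

def penalized_fitness(x1, x2, rho=1e6):
    """Static penalty function (an assumed, illustrative constraint-handling choice)."""
    return truss_volume(x1, x2) + rho * sum(max(0.0, g) ** 2
                                            for g in truss_constraints(x1, x2))

# Near-optimal design widely reported for this problem: f ~= 263.896 cm^3.
x = (0.78867513, 0.40824828)
print(round(truss_volume(*x), 3))  # → 263.896
```

A swarm optimizer such as GMO then minimizes `penalized_fitness` over the box 0 ≤ x1, x2 ≤ 1.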

According to equations (6) and (7), the GMO algorithm is used to solve the three-bar truss problem, and the results are shown in Table 18. Compared with the results of other algorithms, the fitness values of GMO, ALO, and GSA algorithms are optimal, and the solution results satisfy the constraints. It shows that the GMO algorithm is feasible to solve the three-bar truss design problem.

4.2. Pressure Vessel Design Problem

Kannan and Kramer [70] proposed the pressure vessel design problem, which is to minimize the manufacturing cost under the constraints. The structure schematic is shown in Figure 12. This problem mainly consists of 4 design variables: x1 is the shell thickness of the pressure vessel, x2 is the thickness of the head, x3 is the inner radius of the pressure vessel, and x4 is the length of the cylindrical section. The calculation model is as follows, where each variable is bounded within its design range and x1 and x2 are restricted to integer multiples of 0.0625. According to the design specification, the constraint formulas are as follows.
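The cost model and constraints of this problem, as standardly stated in the literature (the displayed equations are not reproduced in this text), can be sketched as follows; this is an illustrative reconstruction, not the paper's code.

```python
import math

def vessel_cost(x):
    """Manufacturing cost of the pressure vessel (standard literature model).
    x = (x1, x2, x3, x4): shell thickness, head thickness, inner radius, length."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x):
    """g1..g4 <= 0: thickness codes, required volume, and length limit."""
    x1, x2, x3, x4 = x
    return (-x1 + 0.0193 * x3,
            -x2 + 0.00954 * x3,
            -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,
            x4 - 240.0)

# A well-known near-optimal design (x1 and x2 are integer multiples of 0.0625):
x_best = (0.8125, 0.4375, 42.098446, 176.636596)
print(round(vessel_cost(x_best), 4))
```

The cost at `x_best` is approximately 6059.71, the value commonly reported as the best feasible design for this problem.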

The calculation results of the GMO algorithm and the other 9 algorithms for the pressure vessel design problem are shown in Table 19. The table shows that the results of the KABC, DMMFO, MABGA, and MBA algorithms do not meet the constraints of the variables, which is not desirable. However, the proposed GMO algorithm finds a design whose optimal value is identical to those of the LSA, CS, GSA, IAPSO, and CPSO algorithms and satisfies the variable constraints. Therefore, the algorithm is also applicable to the pressure vessel design problem.

4.3. Tension/Compression Spring Design Problem

It is an interesting problem to achieve tension/compression spring weight minimization, while satisfying specification and theoretical constraints. This problem was described by Belegundu and Arora [71]. The structure is shown in Figure 13.

The calculation model of the tension/compression spring weight is as follows, where x1, x2, and x3 are the design variables: the wire diameter, coil diameter, and number of coils, respectively. The value ranges of the design variables are 0.05 ≤ x1 ≤ 2.00, 0.25 ≤ x2 ≤ 1.30, and 2 ≤ x3 ≤ 15, respectively. At the same time, the problem also needs to meet design theories such as minimum deflection and shear stress. The specific constraint formulas are as follows.
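The weight model and constraints, as standardly stated in the literature (the displayed equations are not reproduced in this text), can be sketched as follows; this is an illustrative reconstruction, not the paper's code.

```python
def spring_weight(x):
    """Weight of the tension/compression spring: f = (x3 + 2) * x2 * x1^2."""
    d, D, N = x      # wire diameter, coil diameter, number of coils
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Standard literature constraints g1..g4 <= 0 (deflection, shear stress,
    surge frequency, and outer-diameter limit)."""
    d, D, N = x
    g1 = 1.0 - D ** 3 * N / (71785.0 * d ** 4)
    g2 = ((4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)
    g4 = (d + D) / 1.5 - 1.0
    return (g1, g2, g3, g4)

# A near-optimal design frequently reported in the literature: f ~= 0.012665.
x_best = (0.051689, 0.356718, 11.288966)
print(round(spring_weight(x_best), 6))
```

As with the truss problem, an optimizer minimizes the weight subject to these inequality constraints over the stated variable ranges.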

The calculated results of the GMO algorithm for solving the tension/compression spring design problem are shown in Table 20 and compared with the results of 7 other algorithms. It can be seen that the calculated results of all variables meet the requirements of the constraints, and the calculation results of the GMO algorithm are very competitive.

4.4. Gear Train Design Problem

The gear train design is a significant engineering design problem in mechanical transmission [72, 73]. The process designs the number of teeth on each gear in the transmission system according to a reasonable transmission ratio. The gear train is shown in Figure 14. The design variables for this problem are the numbers of teeth of the 4 gears (x1, x2, x3, x4), each an integer. The mathematical model is as follows.
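The standard literature model minimizes the squared error between the required transmission ratio 1/6.931 and the realized ratio. The sketch below is an illustrative reconstruction, not the paper's code, and the assignment of x1..x4 to particular gears is an assumption (several equivalent orderings appear in the literature).

```python
def gear_error(x):
    """Squared error between the target ratio 1/6.931 and the realized
    gear ratio (x3 * x2) / (x1 * x4); standard literature model."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x3 * x2) / (x1 * x4)) ** 2

# All teeth counts are integers in [12, 60]; a known optimal solution:
x_best = (49, 16, 19, 43)
print(gear_error(x_best))   # about 2.7e-12
```

Because the variables must be integers, candidate solutions are typically rounded before evaluation.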

The gear train design problem has a unique solution, and the elements of the solution vector must be integers. The optimization results of the GMO algorithm for the gear train design problem are the same as those of the ALO, IAPSO, and MBA algorithms, as shown in Table 21. It can be seen that the result of the GMO algorithm is optimal and feasible for the problem.

4.5. Cantilever Beam Design Problem

The cantilever beam design problem is a common engineering problem [74], and its structural diagram is shown in Figure 15. The cantilever beam is mainly composed of 5 sections of square steel with equal wall thickness, and the design variables are the section side lengths of the 5 sections of square steel (x1, x2, x3, x4, x5). The design objective is the minimum weight of the cantilever beam. The calculation model is established in equation (13), and equation (14) is the constraint formula.
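The standard literature form of equations (13) and (14) (not reproduced in this text) can be sketched as follows; this is an illustrative reconstruction, not the paper's code.

```python
def beam_weight(x):
    """Weight of the cantilever beam, f = 0.0624 * (x1 + x2 + x3 + x4 + x5)."""
    return 0.0624 * sum(x)

def beam_constraint(x):
    """Single deflection constraint g <= 0 from the standard literature model."""
    x1, x2, x3, x4, x5 = x
    return (61.0 / x1 ** 3 + 37.0 / x2 ** 3 + 19.0 / x3 ** 3
            + 7.0 / x4 ** 3 + 1.0 / x5 ** 3 - 1.0)

# A near-optimal design commonly reported in the literature: f ~= 1.3400.
x_best = (6.016, 5.309, 4.494, 3.502, 2.153)
print(round(beam_weight(x_best), 4))
```

At this design the deflection constraint is essentially active (g is approximately zero), which is why the optimal fitness values of the compared algorithms in Table 22 are so close.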

The experimental results of the GMO algorithm to optimize the cantilever beam design problem are shown in Table 22. The table shows that the calculation results of all algorithms satisfy the constraints and the optimal fitness values are very close. It is proven that the GMO algorithm obtains satisfactory results.

The comparison results of the above five engineering design problems show that the GMO algorithm has good applicability in practical engineering problems in complex unknown spaces and has achieved satisfactory calculation results. It proves that the GMO algorithm is a promising meta-heuristic optimization algorithm.

5. GMO Algorithm for Inverse Kinematics Solution

This paper takes the 7R 6DOF robot as an example to study the inverse kinematics solution of robots by the GMO algorithm. The 7R 6DOF robot is composed of 7 rotary joints, which are driven by 6 motors. The robot structure is shown in Figure 16. It has the characteristics of a hollow wrist and flexible movement and can be used for work in narrow spaces and along complex paths. However, the lack of an analytical solution for its inverse kinematics limits its field application. Therefore, numerical methods may be the only feasible way to solve the inverse kinematics of this robot.

5.1. Kinematic Modeling of the 7R 6DOF Robot

In this paper, the D-H parameter method is used to establish the kinematic model of the 7R 6DOF robot. The forward kinematics model is as follows, where T is the pose matrix of the end effector and each factor is the coordinate transformation matrix between adjacent links of the robot. The specific transformation matrix is as follows, where ai, di, αi, and θi represent the link length, link offset, link twist angle, and joint angle, respectively. Among them, ai, di, and αi are the fixed parameters of the rotary-joint robot, and θi is the control parameter. This paper takes the IRB5400 robot with a 7R 6DOF structure as an example, and its D-H parameters are shown in Table 23 [75].
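The forward kinematics chain described above can be sketched with the classical D-H convention. This is an illustrative reconstruction (not the paper's code), and the 2-link example values are hypothetical, not the IRB5400 parameters from Table 23.

```python
import numpy as np

def dh_transform(a, d, alpha, theta):
    """Homogeneous transform between adjacent links from the classical D-H
    parameters: link length a, link offset d, twist alpha, joint angle theta."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table, thetas):
    """Chain the per-link transforms to obtain the end-effector pose matrix T."""
    T = np.eye(4)
    for (a, d, alpha), theta in zip(dh_table, thetas):
        T = T @ dh_transform(a, d, alpha, theta)
    return T

# Hypothetical 2-link planar example: two links of length 1 at zero joint angles.
T = forward_kinematics([(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [0.0, 0.0])
print(T[:3, 3])   # end position [2. 0. 0.]
```

For the 7R 6DOF robot, `dh_table` would hold the fixed (ai, di, αi) rows of Table 23 and `thetas` the six controlled joint angles.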

According to the input robot joint angles, the pose matrix of the robot end position is solved through the forward kinematics formula and the D-H parameters. The pose matrix of the robot end position is as follows, where nx, ny, nz, ox, oy, oz, ax, ay, and az represent the rotational elements of the pose matrix and px, py, and pz represent the elements of the position vector.

In order to realize the step-by-step optimization of the GMO algorithm in the inverse kinematics solution process, the objective function is designed in equation (17) as the difference between the expected value and the actual value of the pose matrix, where the four vectors in the formula are the rotational and position vectors of the expected pose matrix and γ is the adjustment factor.
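One common form of such a pose-error objective can be sketched as follows. The exact norm used in the paper's equation (17) may differ; this sketch only illustrates the idea of a scalar deviation between the expected and actual pose matrices, weighted by the adjustment factor γ.

```python
import numpy as np

def pose_error(T_desired, T_actual, gamma=1.0):
    """Scalar deviation between desired and actual 4x4 end poses: position
    error plus rotation error weighted by the adjustment factor gamma
    (gamma = 1 in the paper's experiments). An assumed, illustrative norm."""
    rot_err = np.abs(T_desired[:3, :3] - T_actual[:3, :3]).sum()
    pos_err = np.abs(T_desired[:3, 3] - T_actual[:3, 3]).sum()
    return gamma * rot_err + pos_err

# A perfect match gives zero error; an optimizer such as GMO minimizes this
# function over the joint angles, with T_actual computed by forward kinematics.
T = np.eye(4)
print(pose_error(T, T))   # 0.0
```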

5.2. Experiment and Result Analysis

According to the forward kinematics model and objective function of the 7R 6DOF robot, the inverse kinematics experiment of the GMO algorithm takes the joint angles of the robot as the optimization variables and the desired end pose as the optimization goal. Then, to prove the GMO algorithm's computational performance in solving the inverse kinematics of the robot, the experimental results of the GMO algorithm are compared with those of the WOA, PSO, BRO, CSO, and ABC algorithms. In the experiment, two pose matrices of the robot end position are randomly selected as the test points, as shown in Table 24. The population size is N = 100, the maximum number of iterations is T = 500, and the adjustment factor is γ = 1.

During the experiment, in order to avoid the influence of accidental results, 50 independent experiments are conducted at each test point, and the best, worst, mean, and standard deviation of each algorithm's convergence results are recorded. The results are shown in Table 25. It can be seen from the table that the average value of the GMO algorithm reaches 1.0E − 11 on the two test points, which is at least 5 orders of magnitude better than the other algorithms. The best, worst, and standard deviation values are also better than those of the other 5 algorithms. The average convergence curve is shown in Figure 17. It shows that the GMO algorithm has fast convergence speed and high accuracy.

The effectiveness of solving the inverse kinematics problem can be verified more directly by the independent errors of the elements in the pose matrix. As shown in Table 26, the independent error of each element in the pose matrix is calculated. It can be seen that the error of each element in the solution of the GMO algorithm is less than 1.0E − 15, which is lower than even the minimum error of the other algorithms. The experimental results verify the feasibility of the GMO algorithm for solving the inverse kinematics problem.

In recent years, scholars have made many valuable explorations in solving the inverse kinematics of robots through intelligent methods. This paper collects the experimental results reported in the literature and compares them with the solution results of the GMO algorithm, as shown in Table 27. It can be seen that scholars have achieved substantial research results on solving for the robot end position. However, there are fewer studies on the more complex pose problem, and the results are less accurate. The GMO algorithm is applied to solve the inverse kinematic pose problem of the complex 7R 6DOF robot. The average solution result over 50 experiments is 1.68E − 11, which shows that the GMO algorithm has high solution accuracy and excellent applicability.

6. Conclusion

In this paper, the wild geese migration optimization (GMO) algorithm is proposed, inspired by the migration behavior of wild geese. The mathematical optimization model of the GMO algorithm is designed by simulating the special migration process of wild geese and has the advantages of a simple structure and few parameters. In order to verify the optimization ability of the GMO algorithm, 50 independent experiments are run on each of the 29 stable benchmark functions from CEC2017. The primary performance evaluation indicators are the mean, standard deviation, significance test results, and the algorithm's running time. The test results of the GMO algorithm and the WOA, PSO, BRO, CSO, and ABC algorithms are statistically analyzed. It can be seen that the GMO algorithm has apparent advantages in computing performance and can better balance exploitation and exploration. It is a sufficiently competitive optimization algorithm.

In addition, the GMO algorithm is used to solve five engineering optimization problems, and the solution results are compared with the results provided in other studies. The comparison results show that the GMO algorithm obtains excellent solution results, and the experimental results meet the constraints of engineering optimization problems. This shows that the GMO algorithm has satisfactory computing performance and universality in the face of unknown space and complex practical problems. Finally, the GMO algorithm is applied to the inverse kinematic pose problem of the 7R 6DOF robot. The experimental results show that the average solution accuracy of the end pose of the GMO algorithm reaches 1.0E − 11, which is at least 5 orders of magnitude higher than that of the comparison algorithm. The GMO algorithm provides a new solution for the inverse kinematics of the complex 7R 6DOF robot, showing that the algorithm has strong practicability and good development prospects.

In future work, we will study the independent optimization mechanism of the migration groups in the GMO algorithm and the multiobjective optimization problem of the GMO algorithm and explore more valuable practical application cases.

Data Availability

The data used to support the findings of this study are included within the article, and the datasets and codes are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All authors contributed to the study conception and design. Xinming Zhang and Linsen Song were responsible for conceptualization and supervision. Honggang Wu and Yufei Zhang were responsible for the algorithm simulation experiment and the first draft of the manuscript. All authors commented on previous versions of the manuscript. Lidong Gu and Xiaonan Zhao were responsible for project management and funding. All authors read and approved the final manuscript.

Acknowledgments

This study was supported by the Key Research and Development Project of Jilin Province Science and Technology Development Plan (20200401098GX) and Jilin Provincial Department of Education Science and Technology Project (JJKH20220778KJ).