Abstract

The sparrow search algorithm (SSA) has significant optimization performance but still suffers from large randomness and a tendency to fall into local optima. For this reason, this paper proposes a learning sparrow search algorithm (LSSA). A lens opposition-based learning strategy is introduced in the discoverer stage, and a random opposition-based learning strategy increases the diversity of the population and makes the search method more flexible. In the follower stage, an improved sine-cosine guidance mechanism is introduced to make the search more detailed. Finally, a differential-based local search strategy is proposed to update the optimal solution obtained in each iteration and prevent high-quality solutions from being missed during the search. LSSA is compared with CSSA, ISSA, SSA, BSO, GWO, and PSO on 12 benchmark functions to verify the feasibility of the algorithm. To further verify its effectiveness and practicability, and to avoid relying only on test functions whose optimum lies at the origin, LSSA is also compared with MSSCS, CSsin, and FA-CL on the CEC 2017 test functions. The simulation results show that LSSA has good universality. Finally, the practicability of LSSA is verified by robot path planning, where LSSA shows good stability and safety.

1. Introduction

In recent decades, swarm intelligence optimization algorithms have been favored by many scholars due to their simple structure and high solving efficiency, and plenty of intelligent optimization algorithms have appeared, such as monarch butterfly optimization (MBO) [1], the slime mould algorithm (SMA) [2], the moth search algorithm (MSA) [3], hunger games search (HGS) [4], the naked mole-rat algorithm (NMRA) [5], and Harris hawks optimization (HHO) [6]. Xue and Shen [7] proposed the sparrow search algorithm (SSA), which simulates the foraging behavior of sparrows and has the advantages of a simple principle, few adjustment parameters, and low programming difficulty. Compared with the grey wolf optimizer (GWO) [8], particle swarm optimization (PSO) [9], and the genetic algorithm (GA) [10] in function optimization, it achieves better search results. Although its ability has been strongly confirmed, the SSA also has its own shortcomings: it depends heavily on a single role in the group, lacks learning ability, and still falls easily into local optima on high-dimensional complex problems.

At present, many scholars have carried out a series of studies to mitigate the shortcomings of the SSA and further improve its optimization ability. For example, Lu et al. [11] proposed a chaotic sparrow search algorithm (CSSA). The algorithm first used tent mapping based on random variables to generate a better initial sparrow sequence and introduced tent perturbation as well as Gaussian mutation during optimization to update the solutions found by the sparrows, preventing premature convergence; the effectiveness of the algorithm was verified on test functions and image segmentation problems. They also presented an improved sparrow search algorithm (ISSA) for the multithreshold image segmentation problem [12] and achieved meaningful results; the optimization process incorporates ideas from the bird swarm algorithm, and benchmark functions together with multithreshold image segmentation based on between-class variance and Kapur entropy verify that the improved algorithm has strong exploration and exploitation capabilities. Mao and Zhang [13] presented an improved sparrow search algorithm that combines Cauchy mutation and opposition-based learning. A sin chaotic mechanism with an unlimited number of folds was first used to initialize the population, and the previous generation's global optimal solution was incorporated into the position update formula to generate new solutions of higher quality and speed up information exchange; an adaptive weight strategy was introduced to coordinate local and global search, and a fused Cauchy mutation and opposition-based learning strategy was used to perturb the optimal position, improving the algorithm's ability to jump out of local optima.
Eight test functions verify that the algorithm is greatly improved in global optimization. Liu et al. [14], in order to better apply the SSA to 3D path planning, also adopted chaos strategies to enhance population diversity and used adaptive inertia weights to balance the convergence speed and exploration ability of the algorithm. Finally, they adopted a Cauchy–Gaussian mutation strategy to overcome stagnation in the later stage of the algorithm, so that the improved SSA, with its strong search ability, can plan safer routes. Wang et al. [15] proposed a chaotic-map sparrow search algorithm, which uses a dynamic adaptive weight mechanism to control the search range of the sparrows and applies opposition-based learning and Gaussian mutation to prevent the algorithm from falling into local optima while balancing exploration and exploitation. Twelve test functions proved that the improved algorithm has strong optimization capability. Zhang and Ding [16] also proposed a chaotic sparrow search algorithm, which strengthens the global search ability by introducing logistic mapping, adaptive hyperparameters, and a mutation operation. The effectiveness of the algorithm is verified on test functions and then applied to stochastic configuration networks; the results show that the proposed model has good regression accuracy.

The abovementioned authors have carried out many experiments to verify the advantages of their proposed algorithms, but some shortcomings remain:
(1) Most papers rely on chaos theory, which is itself uncertain and cannot reduce the randomness of the algorithm; a more flexible search mechanism is needed.
(2) The existing literature does not fundamentally change the optimization mechanism of the algorithm itself, which lacks learning ability, so there is still a risk of falling into local optima on high-dimensional complex problems.
(3) At present, the improved algorithms are only tested on functions whose optimal value lies at the origin, which lacks rigor and cannot fully demonstrate the effectiveness of the algorithms.
(4) Researchers consider only the update of the best location and ignore the update of the worst location in SSA, even though the search method of SSA is closely related to the worst position.

Based on the above shortcomings, this paper proposes a learning sparrow search algorithm (LSSA), which uses lens opposition-based learning and random opposition-based learning to improve the learning ability of the algorithm and adapt to various complex models. An improved sine-cosine algorithm guides the followers to update their positions and improves the search precision, and a difference-based local search updates the optimal solution to improve the quality of the solution. LSSA is compared with CSSA, ISSA, SSA, beetle swarm optimization (BSO) [17], GWO, and PSO on 12 benchmark functions to verify the feasibility of the algorithm. To further verify its effectiveness and practicability, LSSA is compared with the multistrategy serial cuckoo search algorithm (MSSCS) [18], CSsin [19], and the firefly algorithm with courtship learning (FA-CL) [20], all of which have been validated on the CEC test functions, on the CEC 2017 test functions; the results show that LSSA has strong universality. The contributions of this paper are as follows:
(i) A fusion of two opposition-based learning strategies is proposed to improve the algorithm's global search capability.
(ii) An improved sine-cosine algorithm is proposed to improve the flexibility of the algorithm.
(iii) A difference-based local search is proposed to improve the quality of the solution in each iteration.
(iv) The algorithm performs well on both the benchmark functions and the CEC 2017 functions and, at the same time, mitigates the defect of algorithms that only perform well near the origin.
(v) LSSA is applied to robot path planning, and good results are achieved.

The remainder of this paper is organized as follows. Section 2 introduces and analyzes the basic sparrow search algorithm. Section 3 describes the design and validation of LSSA. Section 4 presents the experiments on the benchmark functions, and Section 5 reports the CEC 2017 tests. Section 6 applies LSSA to robot path planning, and Section 7 concludes and discusses future research directions.

2. Sparrow Search Algorithm

The SSA is divided into three roles: discoverer, follower, and scout. As the name implies, the discoverer discovers and searches for food and provides direction for the other individuals in the population. Accordingly, discoverers search over a wide range and account for about 20% of the population. The location update formula for the discoverer is

X_{i,j}^{t+1} = { X_{i,j}^{t} · exp(-i / (α · T)),   R_2 < ST
               { X_{i,j}^{t} + Q · L,               R_2 ≥ ST        (1)

In formula (1), t represents the current number of iterations, T is the maximum number of iterations, X_{i,j}^{t} denotes the current position of the ith sparrow in the jth dimension, α ∈ (0, 1] is a random number, R_2 ∈ [0, 1] and ST ∈ [0.5, 1] represent the warning and safety values, respectively, Q is a normally distributed random number, and L is a 1 × D matrix with all elements equal to 1. When R_2 < ST, the community environment is safe, no predators have been found nearby, and the discoverer can perform a wide search. When R_2 ≥ ST, an individual within the group has discovered a predator and issued an alert, all individuals in the group take antipredatory actions, and the discoverer leads the followers to a safe location.
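As a compact illustration, the discoverer update described above can be sketched in Python. This is an illustrative sketch of the standard SSA formulation, not the authors' reference implementation; the function name and defaults (e.g., ST = 0.8) are chosen here for demonstration.

```python
import numpy as np

def discoverer_update(X, T, ST=0.8, rng=None):
    """One discoverer update step of SSA (formula (1)).

    X  : (n, D) array of discoverer positions
    T  : maximum number of iterations
    ST : safety value; the warning value R2 is drawn each call.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, D = X.shape
    R2 = rng.random()                       # warning value in [0, 1]
    X_new = np.empty_like(X)
    for i in range(n):
        if R2 < ST:                         # safe: wide exponential search
            alpha = rng.uniform(1e-8, 1.0)  # random number in (0, 1]
            X_new[i] = X[i] * np.exp(-(i + 1) / (alpha * T))
        else:                               # alarm: move by a normal step
            Q = rng.standard_normal()       # normally distributed number
            X_new[i] = X[i] + Q * np.ones(D)  # L is a 1 x D all-ones vector
    return X_new
```

Note that when R_2 < ST every coordinate shrinks multiplicatively toward zero, which is exactly the near-origin bias analyzed later in the paper.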

Followers perform food searches after the discoverer and neighborhood searches around the discoverer's location. The followers' location update formula is as follows:

X_{i,j}^{t+1} = { Q · exp((X_worst^{t} - X_{i,j}^{t}) / i^2),        i > n/2
               { X_P^{t+1} + |X_{i,j}^{t} - X_P^{t+1}| · A^+ · L,    otherwise       (2)

In formula (2), X_P^{t+1} is the optimal position currently occupied by the discoverer, X_worst^{t} represents the current worst position, and A is a 1 × D matrix in which each element is randomly assigned the value 1 or −1, with A^+ = A^T (A A^T)^{-1}. When i > n/2, the ith follower with a poor fitness value is most likely starving and must fly elsewhere to forage for more energy.
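The follower update can likewise be sketched in Python. The pseudoinverse term simplifies nicely: for a ±1 row vector A, A·A^T = D, so |X − X_P| · A^+ is a scalar that L then spreads across all dimensions. This is a sketch of the standard SSA follower step, with positions assumed sorted so that a larger index means worse fitness.

```python
import numpy as np

def follower_update(X, X_best, X_worst, rng=None):
    """One follower update step of SSA (formula (2)).

    X       : (n, D) follower positions, sorted from best to worst fitness
    X_best  : best discoverer position (X_P)
    X_worst : current worst position in the population
    """
    rng = np.random.default_rng() if rng is None else rng
    n, D = X.shape
    X_new = np.empty_like(X)
    for i in range(n):
        if (i + 1) > n / 2:                # starving follower: fly elsewhere
            Q = rng.standard_normal()
            X_new[i] = Q * np.exp((X_worst - X[i]) / (i + 1) ** 2)
        else:                              # forage around the best position
            A = rng.choice([-1.0, 1.0], size=D)   # 1 x D row of +/-1
            # A+ = A^T (A A^T)^{-1}; since A A^T = D for a +/-1 row,
            # |X - X_P| . A+ reduces to a scalar step, spread by L.
            step = np.dot(np.abs(X[i] - X_best), A) / D
            X_new[i] = X_best + step * np.ones(D)
    return X_new
```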

Scouts are randomly selected individuals within the population. When predators invade, they send out signals that make the sparrows escape to a safe position. The behavior formula of the scouts is as follows:

X_{i,j}^{t+1} = { X_best^{t} + β · |X_{i,j}^{t} - X_best^{t}|,                          f_i > f_g
               { X_{i,j}^{t} + K · (|X_{i,j}^{t} - X_worst^{t}| / ((f_i - f_w) + ε)),   f_i = f_g     (3)

In formula (3), X_best^{t} is the current global optimal location, β is the step-control parameter, a random number drawn from the normal distribution with mean 0 and variance 1, K ∈ [−1, 1] is a random number, f_i is the fitness value of the current sparrow, f_g and f_w are the best and worst fitness values in the current search range, respectively, and ε is the smallest real constant, used to prevent the denominator from being 0. When f_i > f_g, the sparrow is at the boundary of the population and vulnerable to predators, so it needs to adjust its position. When f_i = f_g, the sparrow is aware of the danger and needs to move closer to other sparrows to avoid it. K represents the direction of the sparrow's movement and also controls the step size.
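A sketch of the scout behavior follows; the function operates on one randomly chosen scout index. The parameter names mirror the standard SSA formulation and are illustrative, not the authors' code.

```python
import numpy as np

def scout_update(X, fit, X_best, f_best, X_worst, f_worst, idx,
                 eps=1e-50, rng=None):
    """Update one danger-aware scout sparrow (formula (3)).

    fit[idx] is the scout's fitness; f_best / f_worst are the current
    best and worst fitness values in the population.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = X[idx]
    if fit[idx] > f_best:              # at the edge: move toward the best
        beta = rng.standard_normal()   # step-control parameter ~ N(0, 1)
        return X_best + beta * np.abs(x - X_best)
    # in the middle of the group: move relative to the worst position,
    # with eps preventing a zero denominator
    K = rng.uniform(-1.0, 1.0)         # direction and step control
    return x + K * np.abs(x - X_worst) / ((fit[idx] - f_worst) + eps)
```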

2.1. Performance Analysis

SSA can be divided into three stages: discoverer, follower, and scout. From the three formulas, it can be seen that the individual sparrows depend on the search of the discoverer stage. Formula (1) introduces the idea of adaptive weighting, but this weighting still has defects on high-dimensional complex functions and cannot open up a global view. It is therefore worthwhile to use lens opposition-based learning and random opposition-based learning to uncover more hidden positions, increase the diversity of the population, and fully prepare for later optimization. Formula (2) has a defect near the zero point; hence, nonlinear sine-cosine guidance is used to balance local and global search. Overall, the update distance between successive SSA positions is large, so the blind area between them grows. The difference-based local search can improve the search precision and reduce the scope of these blind areas.

3. Learning Sparrow Search Algorithm

3.1. Opposition-Based Learning Strategy Based on Lens Principle

The discoverer leads other individuals to search for food, and its search method directly affects the overall search performance, so the discoverer must have a wide-ranging and flexible search mechanism. To address these problems, researchers have proposed corresponding learning mechanisms [21–23]. The general opposition-based learning strategy only solves the problem within a fixed space [24–26], so it remains monotonic and still risks local optima. To counter this, this paper fuses two opposition-based learning strategies to jointly improve the searchability of the discoverer. The opposition-based learning strategy based on the lens principle [27] is used to update the location of the discoverer effectively; the schematic diagram is shown in Figure 1. The opposition-based solution derived from the lens principle is flexible and diverse, which is conducive to mining new solutions in unexplored areas and increasing the diversity of the population. The principle is as follows:

In a certain space, suppose an individual P of height h whose projection onto the x-axis is x. A lens of focal length f is placed at the base point O, where O is the midpoint of [a_j, b_j], and a_j and b_j represent the lower and upper limits of the jth dimension of the current solution. Through the lens imaging process, an image P* of height h* is obtained, and its projection on the coordinate axis is x* (the reverse point). At this point, x* is the new individual generated from x by the opposition-based learning strategy based on the principle of lens imaging. The schematic diagram is shown in Figure 1.

As shown in Figure 1, the reverse point x* corresponding to the individual x is obtained by taking O as the base point, which follows from the lens imaging principle:

((a_j + b_j)/2 - x) / (x* - (a_j + b_j)/2) = h / h*        (4)

Let h / h* = k, where k is the scaling factor. After transformation, the reverse point can be obtained as

x* = (a_j + b_j)/2 + (a_j + b_j)/(2k) - x/k        (5)

Thus, when k = 1,

x* = a_j + b_j - x        (6)

Formula (6) is the general opposition-based learning strategy. From the above formulas, the general strategy is only a special case of the lens imaging opposition-based learning strategy, and the new individuals obtained by the general strategy are fixed each time. On high-dimensional complex functions, new individuals confined to a fixed range can also fall into local optima, and they are monotonic. By adjusting the parameter k, the new individuals produced by the lens imaging learning strategy become dynamic, which improves the diversity of the population.

In this paper, we generalize the formula to the d-dimensional space:

X*_{i,j} = (a_j + b_j)/2 + (a_j + b_j)/(2k) - X_{i,j}/k        (7)

In formula (7), X_{i,j} and X*_{i,j} are the jth-dimensional components of the individual and its reverse point, respectively, and a_j and b_j represent the jth-dimensional components of the lower and upper bounds of the decision variables, respectively.
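The lens-imaging opposition operator vectorizes directly. The sketch below implements formula (7) for a whole population at once; note that k = 1 recovers the general opposition rule a_j + b_j − x of formula (6).

```python
import numpy as np

def lens_opposition(X, lb, ub, k):
    """Lens-imaging opposition-based learning (formula (7)).

    X       : (n, D) positions
    lb, ub  : per-dimension lower and upper bounds
    k       : scaling factor; k = 1 gives ordinary opposition lb + ub - X.
    """
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    return (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - X / k
```

Varying k per iteration (or per individual) is what makes the generated reverse points dynamic rather than fixed.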

3.2. Opposition-Based Learning of the Worst Position

After the discoverers have searched, the worst position they obtain is not necessarily reliable. From formulas (2) and (3), the worst solution affects the later stage of optimization, and a better worst value gives the followers a better search range. This means that updating the worst location is extremely important, a point scholars tend to ignore by pursuing only the optimal location and overlooking the integrity of the algorithm. This paper uses a random opposition-based mechanism to update the worst position; the specific formula is as follows:
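A sketch of this step follows, assuming the standard random opposition-based learning form x' = lb + ub − r·x with r drawn uniformly per dimension; the exact variant used in the paper may differ, so treat this as an assumption rather than the authors' formula.

```python
import numpy as np

def random_opposition_worst(X_worst, lb, ub, rng=None):
    """Random opposition-based update of the worst position (sketch).

    Assumes the common random-opposition form x' = lb + ub - r * x with
    r ~ U(0, 1) drawn per dimension -- an assumption, since the paper's
    exact formula (8) is not reproduced here.
    """
    rng = np.random.default_rng() if rng is None else rng
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    r = rng.random(np.shape(X_worst))
    return lb + ub - r * X_worst
```

The random factor keeps the reflected worst position from being deterministic, which is what distinguishes this from formula (6).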

3.3. Guidance Strategy Based on Improved Sine-Cosine Algorithm

In the follower location update formula of SSA, the follower searches immediately after the discoverer, around the discoverer's location, and there are few dynamic parameters, so the search range of the sparrow population is easily limited and blind, which restricts the searchability of the algorithm. To solve these problems, a sine-cosine guidance strategy [28–30] is used in the follower stage to dynamically update each sparrow's position and expand the search scope by exploiting the characteristics of the sine and cosine functions. The formula for updating follower positions with the sine-cosine strategy is

In the above formula, r_1 is a parameter determined by the number of iterations, and it is the key to determining the individual search range. As the number of iterations increases, r_1 becomes smaller and smaller, and the sparrow search range shrinks accordingly. a is a constant, set to 2 in this paper. r_2 is a random number in the range [0, 2π], which determines the individual movement distance; r_3 and r_4 are random numbers in [0, 2] and [0, 1], respectively.

It can be seen from the formula that r_1 declines linearly to balance the search scope, but this approach is easily trapped in local optima [31, 32] when facing high-dimensional complex functions. Therefore, this paper adopts a nonlinear decline for r_1 to balance local and global search [33]. The specific formula is as follows:

Among them, b is a fixed value of 0.1 and c is a regulating factor; after many experiments, c = 0.9 achieves the best effect. The introduction of the improved sine-cosine guidance strategy reduces the blindness of the sparrows' search, accelerates information exchange between individuals in the population and those in the best and worst positions, and makes the followers' search more purposeful. From the characteristics of the above formulas, the nonlinearly decreasing parameter makes the search more detailed and improves the convergence accuracy of the algorithm.
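The sine-cosine guided follower step can be sketched as below. The update form follows the standard sine-cosine algorithm, and the nonlinear decline of r_1 is assumed to be r_1 = (a − b)·(1 − (t/T)^c) + b so that r_1 falls from a = 2 toward b = 0.1 as iterations progress; the paper's exact expression for r_1 may differ, so this schedule is a hypothetical stand-in.

```python
import numpy as np

def sine_cosine_follower(X, X_best, t, T, a=2.0, b=0.1, c=0.9, rng=None):
    """Sine-cosine guided follower update (illustrative sketch).

    Assumes r1 = (a - b) * (1 - (t / T) ** c) + b, a nonlinear decline
    from a to b -- an assumption, not the paper's verified formula.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, D = X.shape
    r1 = (a - b) * (1.0 - (t / T) ** c) + b
    X_new = np.empty_like(X)
    for i in range(n):
        r2 = rng.uniform(0.0, 2.0 * np.pi)   # movement angle
        r3 = rng.uniform(0.0, 2.0)           # weight on the guide position
        r4 = rng.random()                    # sine/cosine switch
        if r4 < 0.5:
            X_new[i] = X[i] + r1 * np.sin(r2) * np.abs(r3 * X_best - X[i])
        else:
            X_new[i] = X[i] + r1 * np.cos(r2) * np.abs(r3 * X_best - X[i])
    return X_new
```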

3.4. Local Search Based on Difference

Sparrows do not always obtain reliable optimal solutions in each search. Premature convergence occurs when local extremes are encountered, which paralyzes the algorithm. To overcome this limitation, a difference-based local search is proposed to escape the attraction of local extremes and improve the quality of the solution. X_best and X_second are the historical optimal solution and the historical suboptimal solution of the population, respectively, during SSA optimization. In this paper, the difference between the two solutions is used to guide X_best to search for a more reliable optimal solution in its neighborhood. Differential guidance is accurate; this strategy enables the sparrows to search between solutions, avoiding both the omission of high-quality solutions and blind search. The implementation is as follows, where X'_best is the updated historical optimal solution, r is a uniform random number in [−1, 1], and λ is the local scaling factor. In the early stage of the algorithm, X_best is far from the optimal solution, so a larger search range is needed to speed up the search; in the later stage, the distance is relatively short, and a smaller search range achieves higher mining accuracy. Therefore, the idea of inertia weight is introduced here with a linearly declining strategy, and the range gradually shrinks as the number of iterations increases:

The greedy strategy is adopted to preserve the solution obtained by the local search, as follows, where f(X) is the fitness value of X:
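The difference-guided refinement plus greedy selection can be sketched together. The linearly shrinking scale 1 − t/T stands in for the paper's inertia-weight schedule and is an assumption; the greedy acceptance mirrors the text exactly, so the returned solution is never worse than the incumbent.

```python
import numpy as np

def differential_local_search(X_best, X_second, f, t, T, rng=None):
    """Difference-guided refinement of the best solution (sketch).

    Perturbs X_best along the difference (X_best - X_second) with a
    linearly shrinking scale (an assumed schedule), then keeps the
    better of candidate and incumbent (greedy selection).
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = 1.0 - t / T                       # linearly decreasing weight
    r = rng.uniform(-1.0, 1.0, size=np.shape(X_best))
    candidate = X_best + scale * r * (X_best - X_second)
    return candidate if f(candidate) < f(X_best) else X_best
```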

3.5. Learning Sparrow Search Algorithm

Compared with other algorithms, the SSA has better performance, but it has more random parameters, which increases randomness and the probability of falling into local extremes. Therefore, a learning sparrow search algorithm is proposed in this paper. Two learning mechanisms are introduced in the discoverer stage: opposition-based learning based on the lens principle enlarges the discoverer's search range, improves the diversity of the population, and makes the search method more flexible. In the follower stage, an improved sine-cosine strategy adjusts the sine and cosine terms with a nonlinearly decreasing method, reducing blind search in the follower stage and making the search more detailed. Finally, a difference-based local search updates the optimal solution and improves the quality of the solution during each iteration. The algorithm flowchart is shown in Figure 2, and the specific pseudocode is shown in Algorithm 1.

Input
M: Maximum number of iterations
PD: Discoverer
SD: Individuals who are aware of the danger
R2: Alert value
N: Population sparrows
Output: Xbest,
Initialize population
t = 1;
While (t < M)
Find the position of the best and worst sparrow individuals according to fitness values.
R2 = rand(1)
For i = 1 : PD
Update the location of the discoverers according to formulas (1) and (7);
End for
Update the worst location found by the discoverer according to formula (8);
For i = (PD + 1) : N
Update the location of the followers according to formulas (2) and (9);
End for
For l = 1 : SD
Get the individual position of a sparrow that is aware of danger according to formula (3);
End for
Get the Location of the New Optimal Individual;
Update the individual location according to formulas (12)–(14);
t = t + 1
End while
Return: Xbest,
3.6. Algorithm Validity Test

To show clearly that LSSA improves the optimization mechanism of SSA and to verify the scientific soundness and effectiveness of the LSSA algorithm, this paper takes the Schwefel function as an example and gives the individual distribution maps of the two algorithms during optimization. The maximum number of iterations is set to 20 and the population size to 50. The function model diagram is shown in Figure 3, and the individual distribution diagrams of the two algorithms are shown in Figures 4 and 5.

From Figures 4 and 5, the LSSA algorithm converges quickly: from the initial to the final distribution, most individuals move close to the optimal value, while the SSA individuals remain scattered and converge slowly. Therefore, the search mechanism of LSSA is both broad and detailed, and it can quickly locate the optimum during the optimization process.

3.7. Time Complexity Analysis

Time complexity is an important index for judging an algorithm and determining its rationality. Let the population size of the algorithm in this paper be P, the maximum number of iterations be M, the dimension be D, and the proportion coefficients of discoverers and followers be p_d and p_f, respectively. The time complexity of this paper is analyzed as follows:

From a macro point of view, the time complexity of an intelligent optimization algorithm is O(M × P × D), and so is that of the sparrow search algorithm. The improved sparrow search algorithm neither changes the structure of the algorithm nor increases the number of loops; in this way, its time complexity is O(M × P × D), the same as the basic sparrow search algorithm.

From a microscopic point of view, the improved sparrow search algorithm adds a certain amount of computation: the opposition-based learning of the lens, the opposition-based learning of the worst position, the sine-cosine guidance mechanism, and the difference-based local search each add extra per-iteration work. However, none of these strategies raises the order of magnitude of the sparrow search algorithm, and the time complexity is still O(M × P × D).

4. Benchmark Function Test

To better verify the optimization ability of the LSSA algorithm, this paper first selects 12 standard test functions for verification and compares LSSA with the six algorithms PSO, GWO, SSA, ISSA, CSSA, and BSO. BSO is a new fusion algorithm that has attracted research interest in recent years; its specific parameters are given in the literature. The test function information is shown in Table 1. F1–F6 are complex unimodal functions, F7–F9 are high-dimensional complex functions, and the rest are fixed-dimensional functions; F1–F9 are tested in 30 and 100 dimensions. The population size and maximum iteration number of each algorithm are 100 and 500, respectively. The two learning factors in particle swarm optimization are c1 = c2 = 1.429, and the inertia weight is w = 0.729. Each algorithm runs 30 times independently, and the best value (Best), average (Ave), and standard deviation (Std) of the results are calculated; these three indicators comprehensively evaluate the optimization ability of each algorithm on each function. For performance evaluation, simulations are performed on Windows 10 with Matlab 2019a, on an Intel(R) Core(TM) i5-10200H CPU @ 2.40 GHz with 16 GB RAM.

From Tables 2 and 3, the LSSA algorithm exhibits good optimization effects in both 30 and 100 dimensions. On the unimodal functions, both LSSA and the basic SSA can find the optimal value, indicating that LSSA does not reduce the optimization ability of the original algorithm, which shows its rationality. On the multimodal functions, LSSA shows strong optimization ability and better convergence accuracy than the other algorithms, and its optimization ability is not significantly weakened in 100 dimensions. On the fixed-dimension functions F10–F12, the LSSA algorithm is stable and finds nearly the same value every time, close to the theoretical optimum.

In order to better describe the optimization process and convergence speed of each algorithm, the 30-dimensional convergence graph of each algorithm on each function is given, as shown in Figure 6.

As shown in Figure 6, LSSA has great advantages in the optimization speed and convergence accuracy of each function. It converges quickly on the unimodal function and has better antilocal attraction ability on the multimodal function. It can be seen that the LSSA algorithm gets rid of the constraints of the original algorithm’s search mechanism and develops a better search space.

5. CEC 2017 Function Test

5.1. Algorithm Complexity

According to the requirements of the CEC 2017 test standard, the complexity of the proposed algorithm needs to be calculated. Therefore, the following code is used to calculate the running time T0 of LSSA:

for i = 1 : 1000000
    x = 0.55 + double(i);
    x = x + x;
    x = x / 2;
    x = x * x;
    x = sqrt(x);
    x = log(x);
    x = exp(x);
    x = x / (x + 2);
end

T1 is the calculation time of the F18 function under 200,000 evaluation times, and T2 is the average calculation time of the F18 function for 5 times under the same condition. The specific algorithm complexity is shown in Table 4.

5.2. Function Test

Experiments on benchmark functions alone cannot show the universality and effectiveness of the algorithm. To better illustrate the effectiveness of the LSSA algorithm and avoid any dependence on an optimal value of 0, the algorithm is also tested on the CEC 2017 test functions [34]; the number of evaluations is 10,000 × dim, the dimension is 30 or 50, and the population size is 50. The test results of LSSA are compared with those of SSA, CSSA, MSSCS, CSsin, and FA-CL, and the specific parameters of each algorithm are shown in Table 5. Each algorithm runs 30 times independently, and five indexes of the results on each function are calculated: the best, the worst, the median, the mean, and the standard deviation. Each index clearly reflects the optimization ability of the algorithms, and the best value of each indicator is shown in bold. At the same time, the Wilcoxon rank test is used to show whether there are significant differences between the algorithms; the test is carried out at the 5% significance level. "+" means that LSSA outperforms the other algorithm, "−" means the opposite, and "=" means comparable performance. Finally, the comparisons are tallied. The test results are shown in Tables 6 and 7.

From Tables 6 and 7, it is apparent that LSSA achieves a better optimization effect in both 30 and 50 dimensions, with each function close to the theoretical optimal value, although its effect on the F4 function is poor. Although MSSCS and CSsin achieve good optimization effects overall, they perform very poorly on F1, F3, and F12–F13 in 50 dimensions, so these two algorithms have limitations on this kind of problem. The other algorithms perform poorly and rarely approach the theoretical optimum. The statistical tests show that LSSA differs significantly from the other algorithms and holds a clear advantage, while on some functions it is comparable to MSSCS and CSsin. Generally speaking, the LSSA algorithm has high universality and is better suited to complex optimization problems than the other algorithms.

6. Robot Path Planning

This paper explores the classic robot path planning problem. In path planning, each sparrow represents a feasible path; suppose there are n feasible paths, and the dimension D is determined by the number of columns from the starting point to the target point. The grid method is used for environment modeling: the environment is divided into 1 × 1 cells, and obstacles are placed in the corresponding cells according to the grid values. A grid value of 0 defines a feasible area, and 1 defines an obstacle area. The robot can then plan a path over the cells assigned 0, and the dimension D equals the number of columns in the grid map. The cost function for the path length of the ith sparrow is shown in the following equation:

In equation (12), j is the jth dimension of a sparrow.
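Under this encoding, a natural path cost is the summed Euclidean length of the segments between consecutive columns: with one waypoint per column and 1 × 1 cells, each segment spans one column horizontally, so its length is sqrt(1 + Δrow²). The sketch below computes this cost; it mirrors the usual grid-path length rather than reproducing the paper's exact equation (12).

```python
import numpy as np

def path_length(rows):
    """Length of a grid path with one waypoint per column (sketch).

    rows : sequence of row indices, one per grid column. Consecutive
    waypoints are one column apart, so each segment contributes
    sqrt(1 + (row difference)^2). This mirrors the common grid-path
    cost; the paper's exact cost expression is not reproduced here.
    """
    rows = np.asarray(rows, dtype=float)
    drow = np.diff(rows)                      # vertical change per column step
    return float(np.sum(np.sqrt(1.0 + drow ** 2)))
```

For example, a straight three-column path has cost 2, while a single diagonal step costs sqrt(2).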

In order to better verify the practicability of the improved algorithm, LSSA is applied to robot path planning, and SSA is used for comparative experiments. The population size is 50, and the number of iterations is 20; other environmental parameters are consistent with the CEC 2017 tests above. Each algorithm works in a 15 × 15 model, and the optimal routes are shown in Figure 7. To eliminate chance, each algorithm is tested 10 times, and the optimal, average, and worst routes of each algorithm are recorded. These three indicators measure the stability and feasibility of each algorithm in this experiment. The optimization statistics of each algorithm are shown in Table 8.

It can be seen from Table 8 and Figure 7 that the minimum cost of the LSSA-planned route is 19.7990, while that of SSA is 25.4558, so the route planning ability of LSSA is stronger; the average and worst values further show that the routes planned by LSSA are stable. Therefore, LSSA performs well in robot path planning and can plan a more stable and safe route.

7. Conclusions

A learning sparrow search algorithm is proposed in this paper, overcoming the shortcomings of the SSA. Lens opposition-based learning and random opposition-based learning are introduced at the discoverer's position and the worst position, respectively, making the discoverer's search method more flexible. The improved sine-cosine guidance then makes the follower search more detailed. Finally, the difference-based local search updates the optimal solution, improving the quality of each solution.

Through 12 standard test functions, LSSA is shown to outperform SSA, BSO, CSSA, ISSA, GWO, and PSO. At the same time, to avoid LSSA depending only on functions whose optimum lies at the origin, this paper compares LSSA with MSSCS, CSsin, and FA-CL, which have been verified on the CEC functions in recent years. The results show that LSSA has good universality, while the other algorithms have limitations on some functions. Finally, the practicability of LSSA is verified by robot path planning. The LSSA results are satisfying, but some shortcomings remain; for instance, the optimization process is time-consuming, and the algorithm does not achieve the best effect on some CEC 2017 functions, showing unstable performance there. Of course, some increase in time is inevitable because workload is added. In the future, three lines of work are needed: first, how to balance the running time and optimization ability of the algorithm; second, how to further improve the stability of the algorithm; and third, how to apply it to practical complex engineering problems. In addition, we will also try to analyze and optimize MBO, SMA, MSA, HGS, and HHO and better apply them to practical problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was financially supported by Regional Foundation of the National Natural Science Foundation of China (no. 61561024).