Abstract

The butterfly optimization algorithm (BOA) is a swarm-based metaheuristic algorithm inspired by the foraging behaviour and information sharing of butterflies. BOA has been applied to optimization problems in various fields owing to its performance. However, BOA also suffers from drawbacks such as diminished population diversity and a tendency to become trapped in local optima. In this paper, a hybrid butterfly optimization algorithm based on a Gaussian distribution estimation strategy, called GDEBOA, is proposed. A Gaussian distribution estimation strategy is used to sample dominant population information and thus modify the evolutionary direction of the butterfly population, improving the exploitation and exploration capabilities of the algorithm. To evaluate the superiority of the proposed algorithm, GDEBOA was compared with seven state-of-the-art algorithms on the CEC2017 test suite. In addition, GDEBOA was employed to solve the UAV path planning problem. The simulation results show that GDEBOA is highly competitive.

1. Introduction

Optimization problems exist in all aspects of our society, including business, engineering, and science. An optimization problem is the process of finding the values of decision variables that maximize or minimize an objective function without violating the constraints. With the continuing development of science and technology, the optimization problems we encounter have become increasingly complex. These real-world optimization problems often involve many decision variables, complex nonlinear constraints and nonconvexity, dynamic objective functions, and expensive computational costs [1, 2]. Therefore, when we solve these problems using traditional gradient-based methods, we encounter many difficulties in achieving a satisfactory solution [3]. As the field of optimization has developed, metaheuristic algorithms have become increasingly popular. They can obtain an optimal or near-optimal solution in a reasonable time and do not rely on problem-specific gradient information [4]. They are therefore widely used to solve various types of optimization problems, such as task planning [5–7], feature selection [8, 9], parameter optimization [10, 11], and image segmentation [12, 13].

In recent decades, many metaheuristic algorithms have been proposed and successfully applied in different domains. These algorithms can be divided into three categories: evolution-based algorithms, physics-based algorithms, and swarm-based algorithms. Evolution-based algorithms simulate the laws of evolution in nature. The genetic algorithm (GA) [14], proposed by Holland, is a widely used evolution-based algorithm that updates populations by simulating the natural law of survival of the fittest. With the popularity of GA and its variants, more evolution-based algorithms have been proposed, including differential evolution (DE) [15], genetic programming (GP) [16], evolution strategies (ES) [17], and evolutionary programming (EP) [18]. Besides these, some novel evolution-based algorithms have been proposed recently, such as the artificial algae algorithm (AAA) [19], biogeography-based optimization (BBO) [20], and monkey king evolution (MKE) [21]. Physics-based algorithms simulate the laws of physics in nature or the universe. Simulated annealing (SA) [22], inspired by the annealing process in metallurgy, is one of the best-known physics-based algorithms; others include the gravitational search algorithm (GSA) [23], nuclear reaction optimization (NRO) [24], the water cycle algorithm (WCA) [25], and the sine cosine algorithm (SCA) [26]. Swarm-based algorithms simulate social behaviours such as self-organization and division of labour within species. Particle swarm optimization (PSO) [27] and ant colony optimization (ACO) [28] are two classic swarm-based algorithms, and their popularity has prompted many researchers to propose new swarm-based metaheuristics. Mirjalili et al. proposed the grey wolf optimizer based on the cooperative foraging of grey wolves [29]. Wang et al. proposed monarch butterfly optimization, inspired by the migratory activity of monarch butterflies [30]. Inspired by the spiral and parabolic foraging of tuna, Xie et al. proposed tuna swarm optimization [31]. In addition to these three categories, a class of human-based metaheuristics has begun to emerge, inspired by the characteristics of human activity. Teaching-learning-based optimization (TLBO) [32], inspired by traditional teaching methods, is a typical example of this category; other human-based metaheuristics include social evolution and learning optimization (SELO) [33], the heap-based optimizer (HBO) [34], and the political optimizer (PO) [35], among many others. The butterfly optimization algorithm (BOA) is a swarm-based metaheuristic algorithm proposed by Arora and Singh [36]. BOA builds its exploitation and exploration processes on the foraging behaviour and information-sharing strategies of butterflies. Although BOA can perform both exploitation and exploration, the basic algorithm suffers from diminished population diversity and a tendency to fall into local optima. Meanwhile, the no free lunch (NFL) theorem [37] states that no single algorithm can solve all optimization problems perfectly. These factors motivate us to further enhance and improve the performance of BOA.

Metaheuristics share a common property: they find optimal solutions by exploiting and exploring the search space. When exploitation dominates, exploration is weakened; conversely, when exploration is enhanced, exploitation is weakened. Improving an algorithm's performance therefore requires balancing exploitation and exploration, and improvements to existing algorithms focus on three main areas. The first is optimization of the algorithm's parameter settings. Fan et al. [38] adjusted the fragrance factor of BOA and proposed an adaptive fragrance-factor update method to enhance the convergence of BOA. Tang et al. [39] used chaotic mapping operators to replace the alert value of the sparrow search algorithm in order to balance exploitation and exploration. Fan et al. [32] presented a new nonlinear step-factor control parameter strategy to further enhance the global search capability of the marine predator algorithm. The second is the use of techniques from other fields to improve performance. Fractional-order calculus is an effective tool that has been used in other areas [33, 40]. Yousri et al. [34] proposed an enhanced Harris hawks optimization based on fractional-order calculus memory. Elaziz et al. [35] improved the initial population of the Harris hawks optimizer using fractional-order Gaussian and 2 × mod 1 chaotic mappings. The third is the use of other operators to improve the original algorithm. Wang et al. [41] proposed a hybrid butterfly optimization and flower pollination metaheuristic based on a reciprocal mechanism. Houssein et al. [42] proposed a variant of the slime mould algorithm hybridized with adaptive guided differential evolution to overcome the disadvantage of unbalanced exploitation and exploration. Inspired by these hybrid variants, this paper proposes a BOA variant with a hybrid distribution estimation strategy, GDEBOA. GDEBOA uses a Gaussian probability model to describe the distribution of the dominant population and to guide the direction of evolution, improving the performance of the basic BOA. The performance of GDEBOA was evaluated on the CEC2017 test suite and compared with seven state-of-the-art algorithms, and the superiority of the proposed algorithm is verified by numerical analysis, convergence analysis, stability analysis, and statistical analysis. In addition, GDEBOA is applied to the UAV route planning problem to further validate the algorithm's ability to solve real-world optimization problems.

The remainder of this paper is organized as follows. A review of the basic BOA is presented in Section 2. Section 3 provides a detailed description of the proposed GDEBOA. In Section 4, the effectiveness of the proposed improvement strategy is verified using the CEC2017 test suite. Furthermore, GDEBOA is applied to solve the UAV route planning problem in Section 5. Finally, we summarize this work in Section 6 and offer directions for future research.

2. Butterfly Optimization Algorithm

The butterfly optimization algorithm is a swarm-based metaheuristic algorithm proposed by Arora et al. that models the foraging and mating behaviour of butterflies. BOA makes three assumptions: (1) all butterflies emit fragrance and are attracted to each other; (2) each butterfly moves randomly or towards the butterfly emitting the strongest fragrance; and (3) the stimulus intensity of a butterfly is determined by the landscape of the fitness function. As the butterflies move, their fragrance changes with them, and together the butterflies form a fragrance network. When a butterfly perceives the fragrance of the best butterfly and moves towards it, the algorithm is in the global search phase; when a butterfly cannot perceive any fragrance from the network, it flies randomly, which is called the local search phase. BOA solves the optimization problem by alternating global and local search according to the following mathematical model.

The fragrance of a butterfly is expressed as a function of the physical intensity of the stimulus, described as follows:

$f_i = c I^{a}, \quad i = 1, 2, \ldots, NP, \qquad (1)$

where $f_i$ represents the fragrance of the $i$-th butterfly, $c$ represents the sensory modality, $I$ represents the stimulus intensity, $a$ is a power exponent with a value from 0 to 1, and $NP$ denotes the number of butterflies. The mathematical model of the global and local search phases of BOA is represented as follows:

$X_i^{t+1} = X_i^{t} + \left(r^{2} \times g^{*} - X_i^{t}\right) \times f_i, \qquad (2)$

$X_i^{t+1} = X_i^{t} + \left(r^{2} \times X_j^{t} - X_k^{t}\right) \times f_i, \qquad (3)$

where $X_i^{t}$ denotes the position of the $i$-th butterfly in the $t$-th iteration, $g^{*}$ denotes the global optimal individual, $r$ is a random number in $[0, 1]$, and $X_j^{t}$ and $X_k^{t}$ are the $j$-th and $k$-th individuals selected randomly from the population. BOA constantly switches between the two search strategies during the search process; therefore, a switch probability $p$ is introduced to control which strategy is executed. The pseudocode for BOA is given in Algorithm 1.

Generate initial population of NP butterflies
Initialize parameters c, a, p
While t < tmax
 For each butterfly Xi do
  Calculate the fragrance using equation (1)
 End for
 Find the best butterfly
 For each butterfly Xi do
  Generate a random number r from [0,1]
  If r < p
   Calculate the position using equation (2)
  Else
   Calculate the position using equation (3)
  End if
  Calculate the fitness and select the better one from [Xi(t+1), Xi(t)]
 End for
 Update parameter c
End while
Output the best position and fitness
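
To make the search mechanism concrete, the following Python sketch (our illustration, not the authors' implementation) runs Algorithm 1 on a simple sphere objective. The default parameter values c = 0.01, a = 0.1, and p = 0.8 and the update rule for the sensory modality c are those commonly reported for the basic BOA and are assumptions here.

import numpy as np

def sphere(x):
    # Simple test objective used only for this illustration.
    return float(np.sum(x ** 2))

def boa(obj, dim=10, n_pop=30, t_max=500, c=0.01, a=0.1, p=0.8,
        lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_pop, dim))            # initial butterfly positions
    fit = np.array([obj(x) for x in X])
    best = X[fit.argmin()].copy()                    # best butterfly g*
    for t in range(t_max):
        frag = c * fit ** a                          # equation (1): fragrance
        for i in range(n_pop):
            r = rng.random()
            if r < p:                                # equation (2): global search towards g*
                cand = X[i] + (r ** 2 * best - X[i]) * frag[i]
            else:                                    # equation (3): local search with random mates
                j, k = rng.choice(n_pop, 2, replace=False)
                cand = X[i] + (r ** 2 * X[j] - X[k]) * frag[i]
            cand = np.clip(cand, lb, ub)
            f_cand = obj(cand)
            if f_cand < fit[i]:                      # keep the better of old and new positions
                X[i], fit[i] = cand, f_cand
        best = X[fit.argmin()].copy()
        c += 0.025 / (c * t_max)                     # assumed sensory-modality update rule
    return best, fit.min()

best_x, best_f = boa(sphere)
print(best_f)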

3. Proposed GDEBOA

To overcome the shortcomings of the basic butterfly optimization algorithm, a modified butterfly optimization algorithm, called GDEBOA, is proposed in this paper. Combining a Gaussian distribution estimation (GDE) strategy with BOA addresses the imbalance between the exploitation and exploration capabilities of BOA. Specifically, this paper employs the Gaussian distribution estimation strategy as an alternative to the global search strategy of BOA. The GDE strategy samples the dominant population to guide the evolutionary direction of the algorithm while increasing population diversity. The improved strategy and GDEBOA are described in detail below.

3.1. Gaussian Distribution Estimation

The Gaussian distribution estimation strategy represents inter-individual relationships through a probabilistic model. The strategy uses the current dominant population to build a probability distribution model, generates new offspring by sampling from this model, and eventually obtains the optimal solution through continued iteration. In this paper, the distribution model is estimated using a weighted maximum likelihood estimation method, and the better-performing top half of the population is taken as the dominant population. The mathematical model of this strategy is described as follows:

$X_i^{t+1} = X_{\mathrm{mean}}^{t} + y, \quad y \sim N\left(0, \mathrm{Cov}^{t}\right), \qquad (4)$

$X_{\mathrm{mean}}^{t} = \sum_{i=1}^{NP/2} w_i X_i^{t}, \qquad (5)$

$\mathrm{Cov}^{t} = \sum_{i=1}^{NP/2} w_i \left(X_i^{t} - X_{\mathrm{mean}}^{t}\right)\left(X_i^{t} - X_{\mathrm{mean}}^{t}\right)^{T}, \qquad (6)$

where $X_{\mathrm{mean}}^{t}$ denotes the weighted position of the dominant population, $w_i$ denotes the weight coefficient of the $i$-th individual in the dominant population sorted in descending order of fitness values, and $\mathrm{Cov}^{t}$ is the weighted covariance matrix of the dominant population. The pseudocode and flowchart of the proposed GDEBOA are shown in Algorithm 2 and Figure 1.

Generate initial population of NP butterflies
Initialize parameters c, a, p
While t < tmax
 Calculate the mean and Cov using equations (5) and (6)
 For each butterfly Xi do
  Calculate the fragrance using equation (1)
 End for
 Find the best butterfly
 For each butterfly Xi do
  Generate a random number r from [0,1]
  If r < p
   Calculate the position using equation (4)
  Else
   Calculate the position using equation (3)
  End if
  Calculate the fitness and select the better one from [Xi(t+1), Xi(t)]
 End for
 Update parameter c
End while
Output the best position and fitness
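
As an illustration of equations (4)–(6), the following Python sketch fits the Gaussian model to the dominant half of the population and samples candidate positions from it; in GDEBOA these samples replace the global search step of Algorithm 1. The log-decreasing weight scheme and the small diagonal regularizer are assumptions made for this sketch, not details taken from the paper.

import numpy as np

def gde_sample(X, fit, rng):
    # X: (n_pop, dim) current positions; fit: (n_pop,) fitness values (minimization).
    n_pop, dim = X.shape
    n_dom = n_pop // 2
    dominant = X[np.argsort(fit)[:n_dom]]            # better-performing half of the population

    ranks = np.arange(1, n_dom + 1)
    w = np.log(n_dom + 0.5) - np.log(ranks)          # assumed rank-based weights, best weighted most
    w /= w.sum()

    mean = w @ dominant                              # equation (5): weighted mean position
    diff = dominant - mean
    cov = (w[:, None] * diff).T @ diff               # equation (6): weighted covariance matrix
    cov += 1e-12 * np.eye(dim)                       # numerical safeguard (assumption)

    # Equation (4): sample new positions around the weighted mean.
    return rng.multivariate_normal(mean, cov, size=n_pop)

rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, (30, 10))
fit = np.sum(X ** 2, axis=1)
offspring = gde_sample(X, fit, rng)
print(offspring.shape)                               # (30, 10)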

4. Numerical Experiment and Analysis

To fully validate the performance of the proposed GDEBOA, the algorithm was tested using the IEEE CEC2017 single-objective test functions. The test suite used here consists of 28 functions. F1 is a unimodal function with only one global optimum and is used to verify the local search (exploitation) capability of an algorithm. F2–F8 are multimodal functions and are primarily used to test an algorithm's ability to escape local optima. F9–F17 and F18–F28 are hybrid and composite functions, respectively, and can be used to test an algorithm's potential for solving complex real-world optimization problems. The function definitions and optima are given in Table 1.

Seven state-of-the-art metaheuristics were used for comparison with GDEBOA: artificial ecosystem-based optimization (AEO) [43], the grey wolf optimizer (GWO) [29], Harris hawks optimization (HHO) [44], the arithmetic optimization algorithm (AOA) [45], the slime mould algorithm (SMA) [46], manta ray foraging optimization (MRFO) [47], and the pathfinder algorithm (PFA) [48]. In the CEC2017 tests, the maximum number of iterations was set to 600 and the population size to 500. All algorithm parameters were set as in the original literature, as listed in Table 2. Each algorithm was run 51 times independently, and the experimental results are recorded in Table 3. The performance of GDEBOA was evaluated comprehensively through numerical analysis, convergence analysis, stability analysis, the Wilcoxon test, and the Friedman test. The experiments were conducted on a computer with an AMD R7 4800U processor and 16 GB RAM, and all programs were implemented in MATLAB R2016b.

The results in Table 3 show that GDEBOA performs best on the unimodal function F1. Although GDEBOA does not always reach the optimal solution, it provides the best solution among the nine algorithms. Compared with BOA, the performance of GDEBOA is significantly improved, indicating that the improvement strategy effectively enhances the exploitation capability. On the multimodal functions F2–F8, the algorithms perform differently: GDEBOA performs best on F2, F4, and F7; GWO provides the best solution on F3, F6, and F8; and SMA achieves the optimal solution on F5. Notably, GDEBOA outperforms BOA on all multimodal functions, which indicates that GDEBOA has a strong global search capability and that the improved strategy effectively enhances BOA's ability to explore the solution space. GDEBOA also performs best on most hybrid and composite functions: it achieved satisfactory results on 7 of the 10 hybrid functions and optimal solutions on 5 of the 10 composite functions. Compared with BOA, GDEBOA performed worse only on F19. The analysis of the hybrid and composite function results shows that GDEBOA balances exploitation and exploration well and can solve complex optimization problems effectively.

Convergence speed and convergence accuracy are important indicators of an algorithm's performance, and convergence analysis shows how the algorithm behaves over the course of the iterations. Figure 2 shows the average fitness convergence curves for F1–F28, based on the results of all algorithms solving the test suite 51 times. The results in Figure 2 show that GDEBOA converges faster and more accurately than the other algorithms. Specifically, GDEBOA outperformed all comparison algorithms in terms of convergence accuracy and convergence speed on 16 functions. Notably, although GDEBOA did not perform best on every function, it outperformed BOA on 27 of the 28 functions, indicating that the improvement strategy proposed in this paper better balances exploitation and exploration.

In addition, to analyse the distribution characteristics of the solutions obtained by GDEBOA, box plots based on the results of the nine algorithms solving the CEC2017 test suite 51 times independently are shown in Figure 3. For most of the test functions, the minimum, maximum, and median values obtained by GDEBOA coincide with the optimal solutions, which indicates that the solutions obtained by GDEBOA are more concentrated and more stable. Compared with BOA, GDEBOA produces fewer poor values and a more concentrated distribution of solutions, indicating that GDEBOA achieves a balance between exploitation and exploration.

Although the superiority of GDEBOA was demonstrated by comparisons of the mean and standard deviation, the literature [49, 50] shows that such comparisons alone are not adequate. To further verify the differences between GDEBOA and the other algorithms, the Wilcoxon signed-rank test was employed. Table 4 shows the results of the Wilcoxon signed-rank test at significance level α. The term R+ denotes the sum of ranks for the cases in which GDEBOA outperforms the comparison algorithm, while R− denotes the opposite. The symbol "+/=/−" indicates the number of functions on which GDEBOA performs better than, similarly to, or worse than the comparison algorithm, respectively.
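
For reference, a comparison of this kind can be reproduced from paired per-function results with SciPy's implementation of the Wilcoxon signed-rank test. The sketch below uses synthetic numbers in place of the recorded errors of two algorithms over 51 runs and assumes the conventional 5% significance level; it only illustrates the procedure, not the paper's data.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Placeholder paired samples: errors of GDEBOA and a rival on one function over 51 runs.
gdeboa_runs = rng.normal(loc=1.0, scale=0.1, size=51)
rival_runs = rng.normal(loc=1.2, scale=0.1, size=51)

stat, p_value = wilcoxon(gdeboa_runs, rival_runs)    # paired, non-parametric test
if p_value < 0.05:                                   # assumed significance level
    verdict = "+" if np.median(gdeboa_runs - rival_runs) < 0 else "-"   # lower error is better
else:
    verdict = "="                                    # no significant difference
print(stat, p_value, verdict)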

Table 4 shows that GDEBOA outperforms the comparison algorithms on most functions. Numerically, GDEBOA is superior to MRFO on 14 functions and outperforms GWO and PFA on 16 functions. Compared with HHO and AOA, GDEBOA performs better on 27 and 28 functions, respectively. In particular, GDEBOA provides better solutions than BOA on all functions except F9. In general, GDEBOA shows superior performance.

5. UAV Route Planning

In this section, the application of GDEBOA to UAV route planning is discussed in detail. The UAV route planning problem aims to minimize the cost of carrying out the mission and can be formulated as a multiconstraint optimization problem. The route planning model is described in detail below.

5.1. Cost Function

Considering the actual UAV flight scenario, the UAV should reach the target as quickly as possible while remaining free from threats during the flight. The cost function is therefore described as the total of two cost terms and two constraints:

$J = \omega_{1} J_{L} + \omega_{2} J_{H} + \lambda N_{c}, \qquad (7)$

where $J$ is the total cost function, $J_{L}$ denotes the flight distance cost, $J_{H}$ denotes the flight height cost, $\omega_{1}$ and $\omega_{2}$ are the weighting factors of the two costs and satisfy $\omega_{1} + \omega_{2} = 1$, $N_{c}$ denotes the number of constraint violations, and $\lambda$ is the penalty factor used to convert the constrained optimization problem into an unconstrained one.

The faster the UAV reaches the target, the better it is for the mission, so the distance cost is represented by the sum of the lengths of the route segments:

$J_{L} = \sum_{i=1}^{n-1} l_{i}, \qquad (8)$

where $l_{i}$ is the length of the $i$-th route segment and $n$ is the number of route points.

In addition, flying too high is not beneficial for the UAV in avoiding threats, so the UAV needs to maintain low-altitude flight. The corresponding height cost function is described as follows:

$J_{H} = \sum_{i=1}^{n} h_{i}, \qquad (9)$

where $h_{i}$ is the height corresponding to the $i$-th route point.

Due to the limits of the UAV's maneuverability, the turn angle and climb angle of the actual flight should be less than their theoretical maxima:

$\varphi_{i} \leq \varphi_{\max}, \quad \psi_{i} \leq \psi_{\max}, \qquad (10)$

where $\varphi_{i}$ and $\psi_{i}$ are the turn angle and climb angle between adjacent route segments, respectively, and $\varphi_{\max}$ and $\psi_{\max}$ are the corresponding theoretical maximum values.
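
To show how the cost model can be evaluated for a candidate route, the following sketch implements the reconstructed equations (7)–(10) for a list of waypoints. The weights, angle limits, penalty factor, and the way the turn and climb angles are computed from consecutive segments are placeholder assumptions for illustration only.

import numpy as np

def route_cost(points, w1=0.5, w2=0.5, lam=1e3,
               phi_max=np.deg2rad(60), psi_max=np.deg2rad(45)):
    # points: (n, 3) waypoints (x, y, z); all parameter values are placeholders.
    seg = np.diff(points, axis=0)                    # route segments between waypoints
    j_length = np.linalg.norm(seg, axis=1).sum()     # equation (8): flight distance cost
    j_height = points[:, 2].sum()                    # equation (9): flight height cost

    violations = 0                                   # equation (10): angle constraints
    for a, b in zip(seg[:-1], seg[1:]):
        cos_phi = np.dot(a[:2], b[:2]) / (np.linalg.norm(a[:2]) * np.linalg.norm(b[:2]) + 1e-12)
        phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))                # turn angle between segments
        psi = abs(np.arctan2(b[2], np.linalg.norm(b[:2]) + 1e-12))  # climb angle of next segment
        violations += int(phi > phi_max) + int(psi > psi_max)

    return w1 * j_length + w2 * j_height + lam * violations        # equation (7): total cost

waypoints = np.array([[0, 0, 50], [100, 50, 60], [200, 80, 55], [300, 100, 50]], dtype=float)
print(route_cost(waypoints))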

5.2. Simulation Experiment

In this subsection, the route planning model is solved using GDEBOA, and the results are compared with those of BOA, MRFO, and SMA. All programs were written in MATLAB R2016b and run on a Windows 10 platform with an AMD R7 4700U processor and 16 GB RAM. For a fair comparison, the parameters of each algorithm were set according to the original literature. The maximum number of iterations is 300, the population size is 50, and each algorithm was executed 30 times independently. The locations of the threats are shown in Table 5. The best paths generated by MRFO, HHO, BOA, and GDEBOA are shown in Figure 4, and the best values, means, standard deviations, and success rates over the 30 independent runs are shown in Table 6.

Analysis of the test results shows that all algorithms can produce a safe flight route, which indicates that the route planning model proposed in this paper is feasible. As can be seen in Figure 4, the paths found by the algorithms differ, and GDEBOA clearly finds a safe path at lower cost. Moreover, GDEBOA found safe flight paths consistently in all 30 tests, whereas the other algorithms occasionally failed, indicating that GDEBOA is superior to the other three algorithms. The average cost convergence curves plotted from the statistical results are shown in Figure 5; GDEBOA has faster convergence speed and better convergence accuracy.

6. Conclusions

In this paper, we propose a variant of BOA using a distribution estimation strategy, called GDEBOA, to solve global optimization problems. The performance of BOA is enhanced by using the distribution estimation strategy to sample the evolutionary information of the dominant population and to guide the direction of individual evolution. To evaluate the effectiveness of the improved strategy and the superiority of GDEBOA, the algorithm was validated on the CEC2017 test suite and compared with seven advanced algorithms through numerical analysis, convergence analysis, stability analysis, and statistical tests. The simulation results show that GDEBOA balances exploitation and exploration and is competitive with the other algorithms. In addition, GDEBOA was applied to the UAV route planning problem, and the simulation results show that it can stably find higher-quality paths, demonstrating its ability to solve real-world optimization problems. There is, of course, still potential for enhancement: the calculation of the covariance matrix increases the computational cost of the algorithm, and how to reduce this cost while maintaining performance needs further investigation.

In future work, we plan to further apply GDEBOA to cooperative multi-UAV route planning and real-time route planning. Moreover, we plan to develop the multi-objective and binary versions of GDEBOA to solve optimization problems in other fields.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

The authors acknowledge funding received from the following science foundations: National Natural Science Foundation of China (nos. 62101590 and 62176214) and the Science Foundation of the Shaanxi Province, China (2020JQ-481 and 2019JQ-014).