Research Article  Open Access
Zhiming Li, Yongquan Zhou, Sen Zhang, Junmin Song, "Lévy-Flight Moth-Flame Algorithm for Function Optimization and Engineering Design Problems", Mathematical Problems in Engineering, vol. 2016, Article ID 1423930, 22 pages, 2016. https://doi.org/10.1155/2016/1423930
Lévy-Flight Moth-Flame Algorithm for Function Optimization and Engineering Design Problems
Abstract
The moth-flame optimization (MFO) algorithm is a novel nature-inspired heuristic paradigm. The main inspiration of this algorithm is the navigation method of moths in nature called transverse orientation. Moths fly at night by maintaining a fixed angle with respect to the moon, a very effective mechanism for travelling in a straight line over long distances. However, these insects become trapped in a spiral path around artificial lights. To address the slow convergence and low precision of the MFO algorithm, an improved version based on a Lévy-flight strategy, named LMFO, is proposed. Lévy-flight can increase the diversity of the population against premature convergence and make the algorithm jump out of local optima more effectively. This approach helps obtain a better trade-off between the exploration and exploitation ability of MFO, which makes LMFO faster and more robust than MFO. A comparison with ABC, BA, GGSA, DA, PSOGSA, and MFO on 19 unconstrained benchmark functions and 2 constrained engineering design problems is carried out. These results demonstrate the superior performance of LMFO.
1. Introduction
Optimization is the process of finding the best possible solution(s) for a given problem. In the real world, many problems can be viewed as optimization problems. As the complexity of problems increases, the need for new optimization techniques becomes more evident than before. Over the past several decades, various methods have been proposed to solve optimization problems and have made great progress. For example, mathematical optimization techniques used to be the only tools for optimizing problems before the advent of heuristic optimization techniques. However, these methods need to know properties of the optimization problem, such as continuity or differentiability. In recent years, metaheuristic optimization algorithms have become more and more popular. Some well-known algorithms in this field are Genetic Algorithms (GA) [1, 2], Particle Swarm Optimization (PSO) [3], Ant Colony Optimization (ACO) [4], Evolutionary Strategy (ES) [5], Differential Evolution (DE) [6], and Evolutionary Programming (EP) [7]. Applications of these algorithms can be found in different branches of science and industry as well. Despite the merits of these optimizers, a fundamental question is whether any single optimizer can solve all optimization problems. According to the No-Free-Lunch (NFL) theorem [8] for optimization, no such universal optimizer exists, which motivates researchers to develop new algorithms that solve particular optimization problems more effectively. Some of the latest algorithms are the Artificial Bee Colony (ABC) algorithm [9], Bat Algorithm (BA) [10], Cuckoo Search (CS) algorithm [11], Cuckoo Optimization Algorithm (COA) [12], Gravitational Search Algorithm (GSA) [13], Charged System Search (CSS) [14], Firefly Algorithm (FA) [15], Ray Optimization (RO) [16], and Dragonfly Algorithm (DA) [17].
The moth-flame optimization (MFO) [18] algorithm is a new metaheuristic optimization method that imitates the navigation method of moths in nature called transverse orientation. In this algorithm, moths and flames are both solutions. The inventor of this algorithm, Seyedali Mirjalili, showed that it achieves very competitive results compared with other state-of-the-art metaheuristic optimization algorithms. However, the MFO algorithm is still at an early research stage, and its convergence speed and calculation accuracy can be further improved. To improve the performance of MFO, a Lévy-flight moth-flame optimization (LMFO) algorithm is proposed.
Lévy-flight [11, 19] is known to strengthen global search and to help escape local minima. To exploit this behavior, we propose a Lévy-flight moth-flame optimization. MFO and Lévy-flight have complementary advantages, so the proposed algorithm yields a faster and more robust method. The proposed algorithm is verified on nineteen benchmark functions and two engineering design problems.
The rest of the paper is organized as follows: Section 2 presents a brief introduction to MFO and Lévy-flight. An improved version of the MFO algorithm, LMFO, is proposed in Section 3. The experimental results on test functions and engineering design problems are shown in Sections 4 and 5, respectively. Results and discussion are provided in Section 6. Finally, Section 7 concludes the work.
2. Related Works
In this section, background on the moth-flame optimization algorithm and Lévy-flight is briefly provided.
2.1. MFO Algorithm
The moth-flame optimization [18] algorithm is a new metaheuristic optimization method proposed by Seyedali Mirjalili and based on the simulation of the special navigation method of moths at night. Moths utilize a mechanism called transverse orientation for navigation: a moth flies by maintaining a fixed angle with respect to the moon, which is a very effective mechanism for travelling a long distance in a straight path because the moon is far away from the moth. This mechanism guarantees that moths fly in straight lines at night. However, we usually observe moths flying spirally around lights. In fact, moths are tricked by artificial lights into this behavior: since such a light is extremely close compared with the moon, maintaining a similar angle to the light source causes a spiral fly path for the moth.
In the MFO algorithm, the set of n moths is represented as a matrix M, where each row is a candidate solution of dimension d. For all the moths, there is an array OM storing the corresponding fitness values. The second key components in the algorithm are flames: a matrix F similar to the moth matrix is considered, and it is likewise assumed that there is an array OF storing the corresponding fitness values.
The MFO algorithm is a three-tuple that approximates the global optimum of an optimization problem and is defined as follows:

MFO = (I, P, T).

I is a function that creates a random population of moths and the corresponding fitness values. The methodical model of this function is

I : ∅ → {M, OM}.

The P function, which is the main function, moves the moths around the search space. This function receives the matrix M and eventually returns its updated version:

P : M → M.

The T function returns true if the termination criterion is satisfied and false if it is not:

T : M → {true, false}.

With I, P, and T, the general framework of the MFO algorithm is defined as follows:

M = I();
while T(M) is equal to false
    M = P(M);
end
After the initialization, the P function is iteratively run until the T function returns true. To simulate the behavior of moths mathematically, the position of each moth is updated with respect to a flame using the following equation:

M_i = S(M_i, F_j),  (5)

where M_i indicates the ith moth, F_j indicates the jth flame, and S is the spiral function.
Any type of spiral can be utilized here subject to the following conditions:
(1) The spiral's initial point should start from the moth.
(2) The spiral's final point should be the position of the flame.
(3) The fluctuation range of the spiral should not exceed the search space.
Considering these points, a logarithmic spiral is defined for the MFO algorithm as follows:

S(M_i, F_j) = D_i · e^(bt) · cos(2πt) + F_j,  (6)

where D_i indicates the distance of the ith moth from the jth flame, b is a constant defining the shape of the logarithmic spiral, and t is a random number in [−1, 1].
D_i is calculated as follows:

D_i = |F_j − M_i|,  (7)

where M_i indicates the ith moth, F_j indicates the jth flame, and D_i indicates the distance of the ith moth from the jth flame.
Equation (6) describes the spiral flying path of moths: the next position of a moth is defined with respect to a flame. The parameter t in the spiral equation defines how close the next position of the moth is to the flame (t = −1 is the closest position to the flame, while t = 1 is the farthest).
A question that may arise here is that the position updating in (6) only requires the moths to move towards a flame, which can cause the MFO algorithm to be trapped in local optima quickly. To prevent this, each moth is obliged to update its position using only one of the flames in (6). Another concern is that position updating with respect to n different locations in the search space may degrade the exploitation of the best promising solutions. To resolve this concern, an adaptive mechanism for the number of flames is used:

flame_no = round(N − l · (N − 1) / T),  (8)

where l is the current iteration number, N is the maximum number of flames, and T indicates the maximum number of iterations.
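As a concrete illustration, the spiral update (6)-(7) and the flame-count decrement (8) can be sketched in Python. This is a minimal sketch with NumPy used for the vector operations; the function names are ours, not from the original MFO code:

```python
import numpy as np

def spiral_update(moth, flame, b=1.0):
    """Move a moth toward a flame along the logarithmic spiral of Eq. (6)."""
    D = np.abs(flame - moth)                            # distance D_i, Eq. (7)
    t = np.random.uniform(-1.0, 1.0, size=moth.shape)   # t = -1: closest, t = 1: farthest
    return D * np.exp(b * t) * np.cos(2 * np.pi * t) + flame

def flame_count(l, T, N):
    """Adaptively decreasing number of flames, Eq. (8)."""
    return round(N - l * (N - 1) / T)
```

With N = 30 flames and T = 1000 iterations, flame_count decreases from 30 at the first iteration to 1 at the last, which is exactly the exploration-to-exploitation schedule described above.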
The gradual decrement in the number of flames balances exploration and exploitation of the search space. The general steps of the P function are described in Algorithm 1.

As described in Algorithm 1, the P function is executed until the T function returns true. After termination of the P function, the best moth is returned as the best obtained approximation of the optimum.
Note that the Quicksort method is utilized in MFO; its computational complexity is O(n log n) in the best case and O(n²) in the worst case (where n is the number of moths).
2.2. LévyFlight
Lévy-flight was originally introduced by the French mathematician Paul Lévy in 1937. It is a statistical description of motion that extends beyond the more traditional Brownian motion discovered over one hundred years earlier. A diverse range of both natural and artificial phenomena are now described in terms of Lévy statistics [19].
Generally speaking, the way animals search for food is random, moving from one place to another. A large number of studies have shown that the flight behavior of many animals and insects exhibits typical characteristics of randomness, and the choice of direction can be modeled mathematically [20] as a Lévy-flight. For instance, many studies have shown that the flight behavior of many animals and insects reveals the typical characteristics of Lévy-flight [21–24]. According to [24], fruit flies (Drosophila melanogaster) explore their landscape using a series of straight flight paths punctuated by sudden turns, resulting in a Lévy-flight-style intermittent scale-free pattern. Studies on human behavior, such as the Ju/'hoansi hunter-gatherer foraging patterns [21], also show the typical features of Lévy-flight. Pavlyukevich used Lévy-flight to present and theoretically justify a new stochastic algorithm for global optimization. Even light can be related to Lévy-flight [20]. Subsequently, Lévy-flight has been applied to optimization and optimal search, and preliminary results show its promising capability [22, 25].
3. The Proposed LMFO Approach
In order to increase the diversity of the population against premature convergence and accelerate the convergence speed, this paper proposes an improved Lévy-flight moth-flame optimization (LMFO) algorithm. Lévy-flight has the prominent property of increasing the diversity of the population, which helps the algorithm effectively jump out of local optima. In other words, this approach is beneficial to obtaining a better trade-off between the exploration and exploitation ability of MFO. So, we let each moth perform one Lévy-flight step using (9) after the position updating, formulated as follows [11, 26]:

M_i^(t+1) = M_i^t + α · sign(rand − 1/2) ⊕ Lévy(λ),  (9)

where M_i^t is the ith moth or solution vector at iteration t, α is a random parameter drawn from a uniform distribution, ⊕ is the dot product (entrywise multiplication), and rand is a random number in [0, 1]. It should be noted that sign(rand − 1/2) takes only three values: 1, 0, and −1. In (9) the combination of sign(rand − 1/2) and Lévy-flight makes the moth's walk more random; that is, this combination helps the basic MFO get rid of local minima and improves its global search capability. Lévy-flight is a kind of random walk in which the steps are determined by the step lengths, and the jumps conform to a Lévy distribution as follows [11, 27]:

Lévy(λ) ∼ u = t^(−λ),  1 < λ ≤ 3.  (10)
Formula (11) generates the Lévy random numbers:

Lévy(λ) ∼ μ / |ν|^(1/β),  (11)

where μ ∼ N(0, σ_μ²) and ν ∼ N(0, σ_ν²) are normally distributed with σ_ν = 1, Γ is the standard Gamma function, β = 1.5, and σ_μ is defined as follows:

σ_μ = [Γ(1 + β) · sin(πβ/2) / (Γ((1 + β)/2) · β · 2^((β−1)/2))]^(1/β).  (12)
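Equations (11)-(12) correspond to Mantegna's algorithm for generating Lévy-distributed step lengths. A minimal Python sketch (the function name and default β = 1.5 follow the text; everything else is our illustrative choice):

```python
import math
import numpy as np

def levy_step(dim, beta=1.5):
    """Lévy-distributed step via Mantegna's algorithm, Eqs. (11)-(12)."""
    # sigma_mu from Eq. (12); sigma_nu = 1
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = np.random.normal(0.0, sigma, size=dim)   # mu ~ N(0, sigma_mu^2)
    nu = np.random.normal(0.0, 1.0, size=dim)     # nu ~ N(0, 1)
    return mu / np.abs(nu) ** (1 / beta)          # Eq. (11)
```

The heavy tail of the resulting distribution is what produces the occasional long jumps that let a moth escape a local optimum.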
To sum up, the global search ability of the proposed algorithm is strengthened by the random walk with Lévy-flight, which eliminates the main weakness of MFO: being trapped in local minima is prevented, and the method gives more successful results, particularly on unimodal and multimodal benchmark functions. Because of these features, the proposed algorithm has the potential to outperform MFO. In the following section, various benchmark functions are employed to verify the effectiveness of the proposed algorithm. The main steps of Lévy-flight moth-flame optimization are presented in Algorithm 2.
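Putting the pieces together, the main loop described in Algorithm 2 (spiral update followed by a Lévy-flight perturbation of each moth) can be sketched as follows. This is a simplified reading, not the authors' code: the flame set is taken as the sorted current population, and boundary handling is a plain clip.

```python
import math
import numpy as np

def lmfo(obj, dim, lb, ub, n=30, max_iter=200, b=1.0, beta=1.5):
    """Simplified LMFO: MFO spiral move plus a Lévy-flight step per moth."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    moths = np.random.uniform(lb, ub, (n, dim))
    best_pos, best_val = None, np.inf
    for l in range(1, max_iter + 1):
        fitness = np.array([obj(m) for m in moths])
        order = np.argsort(fitness)                    # rank moths to obtain flames
        flames = moths[order].copy()
        if fitness[order[0]] < best_val:
            best_val, best_pos = fitness[order[0]], flames[0].copy()
        n_flames = round(n - l * (n - 1) / max_iter)   # Eq. (8)
        for i in range(n):
            j = min(i, n_flames - 1)                   # surplus moths share the last flame
            D = np.abs(flames[j] - moths[i])           # Eq. (7)
            t = np.random.uniform(-1, 1, dim)
            moths[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + flames[j]  # Eq. (6)
            # Lévy-flight step, Eqs. (9)-(12)
            step = (np.random.normal(0, sigma, dim)
                    / np.abs(np.random.normal(0, 1, dim)) ** (1 / beta))
            moths[i] += np.random.rand() * np.sign(np.random.rand() - 0.5) * step
            moths[i] = np.clip(moths[i], lb, ub)
    return best_pos, best_val
```

For example, lmfo(lambda x: float(np.sum(x**2)), dim=5, lb=-100.0, ub=100.0) returns an approximation of the sphere-function minimum at the origin.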

4. Simulation Experiments
4.1. Simulation Platform
All the algorithms are tested in MATLAB R2012a (7.14), and the numerical experiments are run on an Intel Core(TM) i5-4590 processor, 3.30 GHz, with 4 GB RAM, running Windows 7.
4.2. Benchmark Functions
It is common in this field to benchmark the performance of an algorithm on a set of mathematical functions with known global optima. The same process is followed here: nineteen standard benchmark functions from the literature [27, 28] are employed as test beds for comparison. Three groups of benchmark functions with different characteristics are selected to benchmark the performance of the LMFO algorithm from different perspectives. As shown in Tables 1–3, these benchmark functions are divided into three groups: unimodal functions, multimodal functions, and fixed-dimension multimodal functions. As their names imply, unimodal functions are suitable for benchmarking the exploitation and convergence of an algorithm, since they have one global optimum and no local optima. In contrast, multimodal functions have more than one optimum, which makes them more challenging than unimodal functions. One of the optima is the global optimum, and the rest are local optima; an algorithm should avoid all the local optima to approach and approximate the global optimum. Therefore, the exploration and local-optima avoidance of algorithms can be benchmarked by multimodal functions. The mathematical formulations of the employed benchmark functions are presented in Tables 1, 2, and 3, respectively. In these three tables, Range represents the boundary of the function's search space, Dim denotes the dimension of the function, and f_min is the theoretical minimum of the function.
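To make the unimodal/multimodal distinction concrete, two representative benchmarks of the kind listed in Tables 1 and 2 can be written as follows (these are the standard textbook definitions, not copied from the tables):

```python
import numpy as np

def sphere(x):
    """Unimodal: one global minimum f(0) = 0 and no local optima."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal: global minimum f(0) = 0 plus many local optima,
    whose number grows exponentially with the dimension."""
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))
```

On the sphere function an optimizer is rewarded purely for convergence speed; on Rastrigin it must also escape the grid of local minima surrounding the global one.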



Heuristic algorithms are stochastic optimization techniques, and therefore they have to be run more than 10 times to generate meaningful statistical results. The best solution obtained in the last iteration is taken as the performance metric of each run, and the results are generated and reported over 30 independent runs. Note, however, that the average and standard deviation only compare the overall performance of the algorithms.
To explore the performance of the proposed LMFO algorithm, some recent and well-known algorithms from the literature are chosen for comparison: ABC [9], BA [10], GGSA [29], DA [17], PSOGSA [30], and MFO [18]. Note that 30 search agents and 1000 iterations are used for each algorithm. The number of moths (or other candidate solutions in the other algorithms) should be selected experimentally.
In this paper, Best, Mean, Worst, and Std represent the optimal fitness value, mean fitness value, worst fitness value, and standard deviation, respectively. Experimental results are listed in Tables 4, 5, and 6. The best results are denoted in bold type.



Due to the stochastic nature of the algorithms, statistical tests should be conducted to confirm the significance of the results [31]. The averages and standard deviations only compare the overall performance of the algorithms, while a statistical test considers each run's results and establishes whether the differences are statistically significant. In order to determine whether the results of LMFO differ statistically from the best results of ABC, BA, GGSA, DA, PSOGSA, and MFO, a nonparametric test known as Wilcoxon's rank-sum test [32, 33] is performed at the 5% significance level. Tables 7, 8, and 9 report the p values produced by Wilcoxon's test for the pairwise comparison of the best values of six groups: ABC versus LMFO, BA versus LMFO, GGSA versus LMFO, DA versus LMFO, PSOGSA versus LMFO, and MFO versus LMFO. In general, p values < 0.05 can be considered sufficient evidence against the null hypothesis; this statistical test ensures that the results are not generated by chance.
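For reference, the two-sided rank-sum p value can be computed with the large-sample normal approximation as follows. This is a hedged sketch: the helper name is ours, ties are given the first-occurrence rank, and for serious use a library routine such as scipy.stats.ranksums would be preferable:

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation,
    adequate for samples of about 30 independent runs each."""
    n1, n2 = len(a), len(b)
    pooled = sorted(list(a) + list(b))
    r1 = sum(pooled.index(v) + 1 for v in a)        # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                   # mean of r1 under H0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # std of r1 under H0
    z = (r1 - mu) / sd
    return math.erfc(abs(z) / math.sqrt(2.0))       # two-sided p value
```

A p value below 0.05 rejects the null hypothesis that the two sets of runs come from the same distribution.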



4.3. Unimodal Benchmark Functions
The unimodal benchmark functions have only one global minimum and no local minima. Therefore, these kinds of functions are very suitable for benchmarking the convergence capability of algorithms. According to the results of Table 4, LMFO provides very competitive results: it outperforms all the other algorithms on the unimodal benchmark functions. Therefore, the proposed algorithm performs well in finding the global minimum of unimodal benchmark functions. According to the p values in Table 7, LMFO achieves significant improvement on all the unimodal benchmark functions compared to the other algorithms. Hence, LMFO has better performance than the other algorithms in searching for the global optimum of unimodal benchmark functions.
Figures 1–8 illustrate the averaged convergence curves of all algorithms on the unimodal benchmark functions over 30 independent runs; all the convergence curves in the following subsections are also averaged curves. As may be seen from these curves, LMFO has the fastest convergence speed among all algorithms. From Table 4 and Figures 20–27, the Std of LMFO is much smaller than that of the other algorithms, which shows that LMFO is more stable and robust than the other algorithms.
4.4. Multimodal Benchmark Functions
In contrast to the unimodal benchmark functions, multimodal benchmark functions have many local minima, whose number increases exponentially with the dimension. This makes them suitable for benchmarking the exploration ability of an algorithm. The final results are therefore more important, because these benchmark functions reflect the ability of an algorithm to escape from poor local optima and obtain the global optimum. The statistical results of the algorithms on the multimodal benchmark functions are presented in Table 5. As the Best, Worst, Mean, and Std values show, LMFO also provides very competitive results on the multimodal benchmark functions, which shows that the LMFO algorithm has merit in terms of exploration. According to the p values reported in Table 8, LMFO achieves significant improvement on the 200-dimensional functions compared to the other algorithms. Comparing LMFO with the other algorithms, we can conclude that LMFO performs significantly better in all six groups of comparisons. The p values reported in Table 8 are less than 0.05, which is strong evidence against the null hypothesis and demonstrates that the results of LMFO are statistically significant and do not occur by coincidence.
As seen from Table 5 and Figures 9–15, the convergence rate of LMFO on the multimodal benchmark functions is in the majority of cases better than that of the other algorithms, so we can conclude that LMFO is able to avoid local minima on multimodal benchmark functions with a good convergence speed. From Table 5 and Figures 28–37, the Std of LMFO is much smaller than that of the other algorithms, which again shows that LMFO is more stable and robust than the other algorithms.
4.5. FixedDimension Multimodal Benchmark Functions
Fixed-dimension multimodal benchmark functions have only a few local minima, and their dimensions are also small. Under such circumstances, it is difficult to judge the performance of an individual algorithm. The major difference compared with the multimodal functions is that fixed-dimension multimodal functions appear simpler because of their low dimensions and smaller number of local minima. In this experiment, the Best, Worst, Mean, and Std values on the fixed-dimension multimodal benchmark functions are summarized in Table 6. For all fixed-dimension multimodal functions, LMFO gives the best solution in terms of Best. As Table 6 shows, the LMFO algorithm provides the best results on two of the fixed-dimension multimodal benchmark functions, followed by the MFO, PSOGSA, and ABC algorithms. In addition, the Wilcoxon's rank-sum p values in Table 9 show that on one of the functions the result of LMFO is not significantly better than the DA and GGSA algorithms (at the 5% significance level), but it is significantly different from ABC, MFO, PSOGSA, and BA. On the remaining three functions, however, the results of LMFO are significantly better than those of the other algorithms. So, it can be concluded that the results of LMFO on these benchmark functions are better than those of ABC, BA, GGSA, DA, PSOGSA, and MFO.
In addition, the convergence curves of LMFO on the two-dimensional fixed-dimension benchmark functions are shown in Figures 16–19. As can be seen from these figures, LMFO has a faster convergence rate on several of these functions. From Figures 35–38, we find that all of the algorithms are quite stable on the fixed-dimension functions, except BA.
Overall, the results from Tables 4–6, Tables 7–9, Figures 1–19, and Figures 20–38 show that the proposed method is effective in not only optimizing unimodal and multimodal functions but also optimizing fixeddimension multimodal functions.
Since constraints are one of the major challenges in solving real problems, and the main objective of designing the LMFO algorithm is to solve real problems, two constrained real engineering problems are employed in the next section to further investigate the performance of the LMFO algorithm and provide a comprehensive study.
5. LMFO for Engineering Optimization Problems
In this section, a set of two engineering problems (welded beam design and speed reducer design) is solved to further verify the performance of the proposed algorithm. Real problems involve inequality constraints, so the LMFO algorithm should be capable of dealing with them during optimization. Several methods have been applied to handle constraints in the literature: penalty functions, special operators, repair algorithms, separation of objectives and constraints, and hybrid methods [34]. In this paper, the penalty method is employed to handle the constraints of the welded beam and speed reducer problems.
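The penalty method mentioned above can be sketched as follows: each violated inequality constraint g(x) ≤ 0 adds a large penalty to the objective, so the unconstrained LMFO search is steered toward the feasible region. This is an illustrative static-penalty sketch; the weight 10^6 is our choice, not a value stated in the paper:

```python
def penalized(obj, constraints, weight=1e6):
    """Wrap an objective with a static penalty for inequality
    constraints of the form g(x) <= 0."""
    def f(x):
        # total amount by which the inequality constraints are violated
        violation = sum(max(0.0, g(x)) for g in constraints)
        return obj(x) + weight * violation
    return f
```

For instance, minimizing x² subject to x ≥ 1 becomes minimizing penalized(lambda x: x**2, [lambda x: 1 - x]) with any unconstrained optimizer such as LMFO.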
5.1. Welded Beam Design
The objective is to minimize the fabrication cost of a welded beam, as shown in Figure 39 [35]. The constraints on the beam are the shear stress (τ), the bending stress in the beam (θ), the buckling load on the bar (P_c), the end deflection of the beam (δ), and side constraints.
This problem has four variables: the thickness of the weld (h), the length of the attached part of the bar (l), the height of the bar (t), and the thickness of the bar (b). The problem is formulated as follows:
Mirjalili tried to solve this problem using MFO [18] and GGSA [29, 36]. Coello Coello [37] and Deb [38, 39] employed GA, whereas Lee and Geem [40] used HS to solve this problem. Richardson’s random method, simplex method, DavidonFletcherPowell, and Griffith and Stewart’s successive linear approximation are the mathematical approaches that have been adopted by Ragsdell and Philips [41] for this problem. The comparison results of the welded beam design problem are shown in Table 10.

The results of Table 10 show that the LMFO algorithm is able to find the best optimal design compared to other algorithms. The results of LMFO are closely followed by the MFO and GGSA algorithms.
5.2. Speed Reducer Design
The objective of this problem is to minimize the total weight of the speed reducer illustrated in Figure 40 [42]. The variables x1–x7 denote the face width (b), module of teeth (m), number of teeth in the pinion (z), length of the first shaft between bearings (l1), length of the second shaft between bearings (l2), diameter of the first shaft (d1), and diameter of the second shaft (d2), respectively. The mathematical formulation of this problem can be summarized as follows:
This problem has also been popular among researchers and has been optimized in many studies. Heuristic approaches that have been employed to optimize this problem include those of Akhtar et al. [43] and Mezura-Montes et al. [44], as well as CS [11, 45], HCPS [46], SCA [47], ES [5, 48], and ABC [9, 49]. The results for this problem are provided in Table 11. According to this table, the LMFO and HCPS algorithms find the design with the minimum weight for this problem.
