Abstract

The moth-flame optimization (MFO) algorithm is a recent nature-inspired heuristic paradigm. The main inspiration of this algorithm is the navigation method of moths in nature called transverse orientation. Moths fly at night by maintaining a fixed angle with respect to the moon, a very effective mechanism for travelling in a straight line over long distances. However, these insects become trapped in a spiral path around artificial lights. To address the slow convergence and low precision of the MFO algorithm, an improved version based on a Lévy-flight strategy, named LMFO, is proposed. Lévy-flight increases the diversity of the population against premature convergence and helps the algorithm escape from local optima more effectively. This approach yields a better trade-off between the exploration and exploitation abilities of MFO, which makes LMFO faster and more robust than MFO. The proposed algorithm is compared with ABC, BA, GGSA, DA, PSOGSA, and MFO on 19 unconstrained benchmark functions and 2 constrained engineering design problems. The results demonstrate the superior performance of LMFO.

1. Introduction

Optimization is the process of finding the best possible solution(s) for a given problem. In the real world, many problems can be viewed as optimization problems. As the complexity of problems increases, the need for new optimization techniques becomes more evident than before. Over the past several decades, many methods have been proposed to solve optimization problems and have made great progress. For example, mathematical optimization techniques used to be the only tools for optimizing problems before heuristic optimization techniques were proposed. However, these methods require knowledge of the properties of the optimization problem, such as continuity or differentiability. In recent years, metaheuristic optimization algorithms have become more and more popular. Some well-known algorithms in this field are Genetic Algorithms (GA) [1, 2], Particle Swarm Optimization (PSO) [3], Ant Colony Optimization (ACO) [4], Evolutionary Strategy (ES) [5], Differential Evolution (DE) [6], and Evolutionary Programming (EP) [7]. Applications of these algorithms can be found in many branches of science and industry. Despite the merits of these optimizers, a fundamental question remains: is there any single optimizer that can solve all optimization problems? The No-Free-Lunch (NFL) theorem [8] for optimization states that there is not, which motivates researchers to develop new algorithms that solve particular classes of optimization problems more effectively. Some of the latest algorithms are the Artificial Bee Colony (ABC) algorithm [9], Bat Algorithm (BA) [10], Cuckoo Search (CS) algorithm [11], Cuckoo Optimization Algorithm (COA) [12], Gravitational Search Algorithm (GSA) [13], Charged System Search (CSS) [14], Firefly Algorithm (FA) [15], Ray Optimization (RO) [16], and Dragonfly Algorithm (DA) [17].

The moth-flame optimization (MFO) [18] algorithm is a recent metaheuristic optimization method that imitates the navigation method of moths in nature called transverse orientation. In this algorithm, moths and flames are both solutions. The inventor of the algorithm, Seyedali Mirjalili, showed that it provides very competitive results compared with other state-of-the-art metaheuristic optimization algorithms. However, MFO is still at an early research stage, and its convergence speed and calculation accuracy can be improved further. To improve the performance of MFO, a Lévy-flight moth-flame optimization (LMFO) algorithm is proposed.

Lévy-flight [11, 19] is known to strengthen global search and to help an algorithm escape from local minima. To exploit this behavior, we propose a Lévy-flight moth-flame optimization. MFO and Lévy-flight have complementary advantages, so the proposed algorithm leads to a faster and more robust method. The proposed algorithm is verified on nineteen benchmark functions and two engineering problems.

The rest of the paper is organized as follows: Section 2 presents a brief introduction to MFO and Lévy-flight. An improved version of the MFO algorithm, LMFO, is proposed in Section 3. The experimental results on the test functions and the engineering design problems are presented in Sections 4 and 5, respectively. Results and discussion are provided in Section 6. Finally, Section 7 concludes the work.

2. MFO and Lévy-Flight

In this section, a brief background on the moth-flame optimization algorithm and Lévy-flight is provided.

2.1. MFO Algorithm

The moth-flame optimization [18] algorithm is a recent metaheuristic optimization method proposed by Seyedali Mirjalili, based on a simulation of the special navigation method that moths use at night. Moths utilize a mechanism called transverse orientation for navigation: a moth flies by maintaining a fixed angle with respect to the moon, which is a very effective mechanism for travelling long distances in a straight path because the moon is very far away from the moth. This mechanism guarantees that moths fly along a straight line at night. However, moths are usually observed to fly spirally around artificial lights. In fact, moths are tricked by artificial lights into showing such behavior: because such a light is extremely close, in contrast to the moon, maintaining a similar angle to the light source causes a spiral flight path.

In the MFO algorithm, the set of moths is represented in a matrix

\[M = \begin{bmatrix} m_{1,1} & m_{1,2} & \cdots & m_{1,d} \\ m_{2,1} & m_{2,2} & \cdots & m_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n,1} & m_{n,2} & \cdots & m_{n,d} \end{bmatrix}, \tag{1}\]

where \(n\) is the number of moths and \(d\) is the number of variables (dimension). For all the moths, there is an array for storing the corresponding fitness values:

\[OM = \begin{bmatrix} OM_1 & OM_2 & \cdots & OM_n \end{bmatrix}^{T}. \tag{2}\]

The second key components in the algorithm are flames. A matrix similar to the moth matrix is considered:

\[F = \begin{bmatrix} F_{1,1} & F_{1,2} & \cdots & F_{1,d} \\ F_{2,1} & F_{2,2} & \cdots & F_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ F_{n,1} & F_{n,2} & \cdots & F_{n,d} \end{bmatrix}. \tag{3}\]

For the flames, it is also assumed that there is an array for storing the corresponding fitness values:

\[OF = \begin{bmatrix} OF_1 & OF_2 & \cdots & OF_n \end{bmatrix}^{T}. \tag{4}\]

The MFO algorithm is a three-tuple that approximates the global optimum of optimization problems and is defined as follows:

\[\text{MFO} = (I, P, T).\]

\(I\) is a function that creates a random population of moths and the corresponding fitness values. The mathematical model of this function is as follows:

\[I: \phi \rightarrow \{M, OM\}.\]

The \(P\) function, which is the main function, moves the moths around the search space. This function receives the matrix \(M\) and eventually returns its updated version:

\[P: M \rightarrow M.\]

The \(T\) function returns true if the termination criterion is satisfied and false otherwise:

\[T: M \rightarrow \{\text{true}, \text{false}\}.\]

With \(I\), \(P\), and \(T\), the general framework of the MFO algorithm is defined as follows:

    M = I();
    while T(M) is equal to false
        M = P(M);
    end

After the initialization, the \(P\) function is run iteratively until the \(T\) function returns true. To simulate the behavior of moths mathematically, the position of each moth is updated with respect to a flame using the following equation:

\[M_i = S(M_i, F_j), \tag{5}\]

where \(M_i\) indicates the \(i\)th moth, \(F_j\) indicates the \(j\)th flame, and \(S\) is the spiral function.

Any type of spiral can be utilized here subject to the following conditions: (1) the spiral's initial point should start from the moth; (2) the spiral's final point should be the position of the flame; (3) the fluctuation range of the spiral should not exceed the search space.

Considering these points, a logarithmic spiral is defined for the MFO algorithm as follows:

\[S(M_i, F_j) = D_i \cdot e^{bt} \cdot \cos(2\pi t) + F_j, \tag{6}\]

where \(D_i\) indicates the distance of the \(i\)th moth from the \(j\)th flame, \(b\) is a constant for defining the shape of the logarithmic spiral, and \(t\) is a random number in \([r, 1]\), where the convergence constant \(r\) decreases linearly from \(-1\) to \(-2\) over the course of iterations.

\(D_i\) is calculated as follows:

\[D_i = \left| F_j - M_i \right|, \tag{7}\]

where \(M_i\) indicates the \(i\)th moth, \(F_j\) indicates the \(j\)th flame, and \(D_i\) indicates the distance of the \(i\)th moth from the \(j\)th flame.

Equation (6) describes the spiral flying path of moths. From this equation, the next position of a moth is defined with respect to a flame. The parameter \(t\) in the spiral equation defines how close the next position of the moth should be to the flame (\(t = -1\) is the closest position to the flame, while \(t = 1\) shows the farthest).
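To make the position update concrete, the following minimal Python sketch (not taken from the original paper; the function and variable names are our own) applies (6) and (7) to move one moth toward an assigned flame. For simplicity, \(t\) is drawn from \([-1, 1]\); the adaptive lower bound \(r\) described above is omitted.

import numpy as np

def spiral_update(moth, flame, b=1.0, rng=np.random):
    # Move one moth toward one flame along a logarithmic spiral, per (6) and (7).
    t = rng.uniform(-1.0, 1.0, size=moth.shape)  # random t in [-1, 1], one value per dimension
    D = np.abs(flame - moth)                     # distance to the flame, Eq. (7)
    return D * np.exp(b * t) * np.cos(2 * np.pi * t) + flame  # Eq. (6)

# Example: a 5-dimensional moth spiralling toward a flame at the origin
moth = np.array([0.5, -1.2, 3.0, 0.0, 2.2])
flame = np.zeros(5)
print(spiral_update(moth, flame))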

A question that may arise here is that the position updating in (6) only requires a moth to move towards a flame, which can cause the MFO algorithm to be trapped in local optima quickly. To prevent this, each moth is obliged to update its position using only one of the flames in (6). Another concern is that updating the positions of moths with respect to different locations in the search space may degrade the exploitation of the best promising solutions. To resolve this concern, an adaptive mechanism is used for the number of flames:

\[\text{flame no} = \text{round}\left(N - l \cdot \frac{N - 1}{T}\right), \tag{8}\]

where \(l\) is the current iteration number, \(N\) is the maximum number of flames, and \(T\) indicates the maximum number of iterations.
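As a quick illustration (our own code, not from the paper), Equation (8) can be evaluated directly; with 30 flames and 1000 iterations, the flame count shrinks roughly linearly from 30 to 1:

def flame_number(l, N, T):
    # Adaptive number of flames, Eq. (8)
    return int(round(N - l * (N - 1) / T))

print([flame_number(l, N=30, T=1000) for l in (1, 250, 500, 750, 1000)])
# -> [30, 23, 16, 8, 1]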

The gradual decrement in the number of flames balances exploration and exploitation of the search space. Finally, the general steps of the \(P\) function are described in Algorithm 1.

Initialize the position of moths
while (Iteration <= Max_iteration)
    Update flame no using (8)
    OM = FitnessFunction(M);
    if Iteration == 1
        F = sort(M);
        OF = sort(OM);
    else
        F = sort(M_{t-1}, M_t);
        OF = sort(OM_{t-1}, OM_t);
    end
    for i = 1 : n
        for j = 1 : d
            Update r and t
            Calculate D using (7) with respect to the corresponding moth
            Update M(i, j) using (5) and (6) with respect to the corresponding moth
        end
    end
end

As described in Algorithm 1, the \(P\) function is executed iteratively until the \(T\) function returns true. After termination of the \(P\) function, the best moth is returned as the best obtained approximation of the optimum.

Note that the Quicksort method is utilized in MFO; its computational complexity is \(O(n \log n)\) in the best case and \(O(n^2)\) in the worst case, where \(n\) is the number of moths.
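Putting these pieces together, the following Python sketch shows one MFO generation in the spirit of Algorithm 1. It is a simplified illustration, not the authors' implementation: the function and variable names are ours, boundary handling is omitted, and on the first call the flames can simply be a copy of the moths with their fitness set to infinity so that the merge-and-sort step selects the evaluated moths.

import numpy as np

def mfo_step(moths, flames, flame_fit, fitness, l, T, b=1.0, rng=np.random):
    # One generation of MFO (cf. Algorithm 1), simplified sketch for minimization.
    n, d = moths.shape
    # Evaluate the moths, merge them with the previous flames, and keep the best n as new flames.
    moth_fit = np.array([fitness(m) for m in moths])
    pool = np.vstack([flames, moths])
    pool_fit = np.concatenate([flame_fit, moth_fit])
    order = np.argsort(pool_fit)[:n]
    flames, flame_fit = pool[order], pool_fit[order]

    flame_no = int(round(n - l * (n - 1) / T))   # Eq. (8)
    r = -1.0 - l / T                             # convergence constant, decreases from -1 to -2
    for i in range(n):
        j = min(i, flame_no - 1)                 # moths beyond flame_no share the last flame
        t = rng.uniform(r, 1.0, size=d)
        D = np.abs(flames[j] - moths[i])         # Eq. (7)
        moths[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + flames[j]   # Eqs. (5)-(6)
    return moths, flames, flame_fit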

2.2. Lévy-Flight

Lévy-flight was originally introduced in 1937 by the French mathematician Paul Lévy. It is a statistical description of motion that extends beyond the more traditional Brownian motion discovered over one hundred years earlier. A diverse range of both natural and artificial phenomena are now described in terms of Lévy statistics [19].

Generally speaking, animals search for food in a random manner, moving from one place to another. A large number of studies have shown that the flight behavior of many animals and insects exhibits typical characteristics of randomness, and the choice of direction can be described by a mathematical model [20] called Lévy-flight. For instance, many studies have shown that the flight behavior of many animals and insects displays the typical characteristics of Lévy-flight [21–24]. According to [24], fruit flies (Drosophila melanogaster) explore their landscape using a series of straight flight paths punctuated by sudden turns, resulting in a Lévy-flight-style intermittent scale-free pattern. Studies on human behavior, such as the Ju/'hoansi hunter-gatherer foraging patterns [21], also show the typical features of Lévy-flight. Pavlyukevich used Lévy-flight to present and theoretically justify a new stochastic algorithm for global optimization. Even light can be related to Lévy-flight [20]. Subsequently, Lévy-flight has been applied to optimization and optimal search, and preliminary results show its promising capability [22, 25].

3. The Proposed LMFO Approach

In order to increase the diversity of the population against premature convergence and to accelerate convergence, this paper proposes an improved Lévy-flight moth-flame optimization (LMFO) algorithm. Lévy-flight has the prominent property of increasing population diversity, which lets the algorithm jump out of local optima more effectively. In other words, this approach helps to obtain a better trade-off between the exploration and exploitation abilities of MFO. Therefore, after the position updating, each moth performs one Lévy-flight using (9), which is formulated as follows [11, 26]:

\[M_i^{t+1} = M_i^{t} + \alpha \, \text{sign}\!\left[\text{rand} - \tfrac{1}{2}\right] \oplus \text{Lévy}(\lambda), \tag{9}\]

where \(M_i^{t}\) is the \(i\)th moth (solution vector) at iteration \(t\), \(\alpha\) is a random parameter that conforms to a uniform distribution, \(\oplus\) denotes the dot product (entrywise multiplication), and rand is a random number in \([0, 1]\). It should be noted that \(\text{sign}[\text{rand} - 1/2]\) takes only three values: 1, 0, and \(-1\). In (9), the combination of \(\text{sign}[\text{rand} - 1/2]\) and Lévy-flight makes the walk of the moths more random; this combination is what enables the basic MFO to get rid of local minima and improves its global search capability. Lévy-flight is a kind of random walk in which the steps are determined by the step lengths, and the jumps conform to a Lévy distribution as follows [11, 27]:

\[\text{Lévy} \sim u = t^{-\lambda}, \quad 1 < \lambda \le 3. \tag{10}\]

The Lévy random numbers are calculated with formula (11):

\[\text{Lévy}(\lambda) \sim \frac{\mu \, \sigma}{|v|^{1/\beta}}, \tag{11}\]

where \(\mu\) and \(v\) are both drawn from standard normal distributions, \(\Gamma\) is the standard Gamma function, \(\beta = \lambda - 1\) (typically \(\beta = 1.5\)), and \(\sigma\) is defined as follows:

\[\sigma = \left[\frac{\Gamma(1 + \beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\beta\, 2^{(\beta - 1)/2}}\right]^{1/\beta}. \tag{12}\]
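A minimal Python sketch of this sampling scheme (Mantegna's algorithm) is given below; it is our own illustration, with β = 1.5 assumed as the usual default, and is not the authors' code.

import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=np.random):
    # Draw a Lévy-distributed step with Mantegna's algorithm, per (11)-(12).
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)  # Eq. (12)
    mu = rng.normal(0.0, 1.0, dim) * sigma   # mu ~ N(0, sigma^2)
    v = rng.normal(0.0, 1.0, dim)            # v  ~ N(0, 1)
    return mu / np.abs(v) ** (1 / beta)      # Eq. (11)

# Most steps are small, but occasional long jumps occur -- the Lévy heavy tail.
print(np.round(levy_step(5), 3))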

To sum up, the global search ability of the proposed algorithm is strengthened by the random walk with Lévy-flight, which eliminates the weakness of MFO: being trapped in local minima is prevented, and the algorithm gives more successful results, particularly on unimodal and multimodal benchmark functions. Because of these features, the proposed algorithm has the potential to outperform MFO. In the following section, various benchmark functions are employed to verify the effectiveness of the proposed algorithm. The main steps of Lévy-flight moth-flame optimization are presented in Algorithm 2.

Initialize the position of moths
while (Iteration <= Max_iteration)
    Update flame no using (8)
    OM = FitnessFunction(M);
    if Iteration == 1
        F = sort(M);
        OF = sort(OM);
    else
        F = sort(M_{t-1}, M_t);
        OF = sort(OM_{t-1}, OM_t);
    end
    for i = 1 : n
        for j = 1 : d
            Update r and t
            Calculate D using (7) with respect to the corresponding moth
            Update M(i, j) using (5) and (6) with respect to the corresponding moth
        end
    end
    for each search agent
        Update the position of the current search agent using Lévy-flight
    end
    Iteration = Iteration + 1;
end
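The step that distinguishes Algorithm 2 from Algorithm 1 is the extra Lévy-flight loop. The Python sketch below illustrates one way to implement it according to (9); the names, the uniform draw of α, and the clipping of moths back into the search bounds are our own assumptions, not the authors' implementation.

import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=np.random):
    # Mantegna's algorithm for Lévy-distributed steps, per (11)-(12).
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.normal(0.0, 1.0, dim) * sigma
    v = rng.normal(0.0, 1.0, dim)
    return mu / np.abs(v) ** (1 / beta)

def levy_flight_update(moths, lb, ub, rng=np.random):
    # Lévy-flight perturbation of every moth, per Eq. (9).
    n, d = moths.shape
    for i in range(n):
        alpha = rng.rand()                       # random step-size factor, uniform in [0, 1]
        direction = np.sign(rng.rand(d) - 0.5)   # sign[rand - 1/2]: takes the values -1, 0, or +1
        moths[i] = moths[i] + alpha * direction * levy_step(d, rng=rng)
    return np.clip(moths, lb, ub)                # clip back into [lb, ub]; a simplification

In a full LMFO loop this update would be called once per generation, right after the spiral update sketched in Section 2.1.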

4. Simulation Experiments

4.1. Simulation Platform

All the algorithms are tested in MATLAB R2012a (7.14), and the numerical experiments are run on an Intel Core(TM) i5-4590 processor at 3.30 GHz with 4 GB RAM under Windows 7.

4.2. Benchmark Functions

It is common in this field to benchmark the performance of an algorithm on a set of mathematical functions with known global optima. The same process is followed here: nineteen standard benchmark functions are employed from the literature [27, 28] as test beds for comparison. Three groups of benchmark functions with different characteristics are selected to benchmark the performance of the LMFO algorithm from different perspectives. As shown in Tables 1–3, these benchmark functions are divided into three groups: unimodal functions, multimodal functions, and fixed-dimension multimodal functions. As their names imply, unimodal functions are suitable for benchmarking the exploitation and convergence of an algorithm since they have one global optimum and no local optima. In contrast, multimodal functions have more than one optimum, which makes them more challenging than unimodal functions. One of the optima is the global optimum, and the rest are local optima. An algorithm should avoid all the local optima to approach and approximate the global optimum. Therefore, the exploration and local-optima avoidance of algorithms can be benchmarked by multimodal functions. The mathematical formulations of the employed benchmark functions are presented in Tables 1, 2, and 3, respectively. In these three tables, Range represents the boundary of the function's search space, Dim denotes the dimension of the function, and \(f_{\min}\) is the theoretical minimum of the function.
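To illustrate the distinction, two classic test beds of this kind are sketched below in Python (the sphere function is a typical unimodal case and Rastrigin a typical multimodal one); whether these exact functions are among the nineteen used here is not restated in this excerpt, so they serve only as representative examples.

import numpy as np

def sphere(x):
    # Unimodal: a single global minimum f(0) = 0, typical range [-100, 100]^d.
    return np.sum(x ** 2)

def rastrigin(x):
    # Multimodal: global minimum f(0) = 0 surrounded by many local minima, range [-5.12, 5.12]^d.
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

x = np.zeros(30)
print(sphere(x), rastrigin(x))   # both evaluate to 0.0 at the global optimum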

Heuristic algorithms are stochastic optimization techniques, so they have to be run more than 10 times to generate meaningful statistical results. The best solution obtained in the last iteration is used as the metric of performance, and the results are generated and reported over 30 independent runs. However, the average and standard deviation only compare the overall performance of the algorithms.

To evaluate the performance of the proposed LMFO algorithm, some recent and well-known algorithms in the literature are chosen for comparison: ABC [9], BA [10], GGSA [29], DA [17], PSOGSA [30], and MFO [18]. Note that 30 search agents and 1000 iterations are used for each of the algorithms. It should be noted that the number of moths (or other candidate solutions in the other algorithms) should be selected experimentally.

In this paper, Best, Mean, Worst, and Std represent the optimal fitness value, mean fitness value, worst fitness value, and standard deviation, respectively. Experimental results are listed in Tables 4, 5, and 6. The best results are denoted in bold type.
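As a small illustration (hypothetical data, our own code), these four statistics are simply computed over the final fitness values of the 30 runs:

import numpy as np

final_fitness = np.random.rand(30) * 1e-3   # hypothetical final fitness values of 30 runs
best, worst = final_fitness.min(), final_fitness.max()
mean, std = final_fitness.mean(), final_fitness.std(ddof=1)
print(f"Best={best:.2e}  Worst={worst:.2e}  Mean={mean:.2e}  Std={std:.2e}")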

Due to the stochastic nature of the algorithms, statistical tests should be conducted to confirm the significance of the results [31]. The averages and standard deviations only compare the overall performance of the algorithms, while a statistical test considers each run's results and establishes whether the differences are statistically significant. In order to determine whether the results of LMFO differ from those of ABC, BA, GGSA, DA, PSOGSA, and MFO in a statistical sense, a nonparametric test known as Wilcoxon's rank-sum test [32, 33] is performed at the 5% significance level. Tables 7, 8, and 9 report the p values produced by Wilcoxon's test for the pairwise comparison of six groups: ABC versus LMFO, BA versus LMFO, GGSA versus LMFO, DA versus LMFO, PSOGSA versus LMFO, and MFO versus LMFO. In general, p values < 0.05 can be considered sufficient evidence against the null hypothesis; with this test, we can verify that the results are not generated by chance. The resulting p values of the rank-sum test are listed in Tables 7, 8, and 9.
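In Python, such a pairwise test can be run with SciPy's rank-sum implementation; the sketch below uses hypothetical run data and is only meant to show how a p value like those in Tables 7–9 would be obtained.

import numpy as np
from scipy.stats import ranksums

lmfo_runs = np.random.rand(30) * 1e-5   # hypothetical final fitness values over 30 runs
mfo_runs = np.random.rand(30) * 1e-3
stat, p_value = ranksums(lmfo_runs, mfo_runs)
print(p_value, p_value < 0.05)          # True -> reject the null hypothesis at the 5% level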

4.3. Unimodal Benchmark Functions

The unimodal benchmark functions have only one global minimum and no local minima. Therefore, these functions are very suitable for benchmarking the convergence capability of algorithms. According to the results in Table 4, LMFO is able to provide very competitive results: it outperforms all other algorithms on the unimodal benchmark functions. Therefore, the proposed algorithm has high capability to find the global minimum of unimodal benchmark functions. According to the p values in Table 7, LMFO achieves significant improvement on all the unimodal benchmark functions compared to the other algorithms. Hence, this shows that LMFO performs better than the other algorithms in searching for the global optimum of unimodal benchmark functions.

Figures 1–8 illustrate the averaged convergence curves of all algorithms on the unimodal benchmark functions over 30 independent runs. It can be noted here that all the convergence curves in the following subsections are also averaged curves. As may be seen from these curves, LMFO has the fastest convergence speed among all algorithms. From Table 4 and Figures 20–27, the Std of LMFO is much smaller than that of the other algorithms. These results show that LMFO is more stable and robust than the other algorithms.

4.4. Multimodal Benchmark Functions

In contrast to the unimodal benchmark functions, multimodal benchmark functions have many local minima, whose number increases exponentially with dimension. This makes them suitable for benchmarking the exploration ability of an algorithm. The final results are therefore particularly important because these benchmark functions reflect the ability of the algorithm to escape from poor local optima and obtain the global optimum. The statistical results of the algorithms on the multimodal benchmark functions are presented in Table 5. As the Best, Worst, Mean, and Std values show, LMFO is also able to provide very competitive results on the multimodal benchmark functions. These results show that the LMFO algorithm has merit in terms of exploration. According to the p values reported in Table 8, LMFO achieves significant improvement at dimension 200 compared to the other algorithms. When comparing LMFO and the other algorithms, we can conclude that LMFO performs significantly better across the six groups of comparisons. The p values reported in Table 8 are less than 0.05, which is strong evidence against the null hypothesis. Hence, this demonstrates that the results of LMFO are statistically significant and do not occur by coincidence.

As seen from Table 5 and Figures 9–15, the convergence rate of LMFO on the multimodal benchmark functions is, in the majority of cases, better than that of the other algorithms. On the basis of Table 5 and Figures 9–15, we can conclude that LMFO is able to avoid local minima on multimodal benchmark functions with a good convergence speed. From Table 5 and Figures 28–37, the Std of LMFO is much smaller than that of the other algorithms. These results again show that LMFO is more stable and robust than the other algorithms.

4.5. Fixed-Dimension Multimodal Benchmark Functions

The fixed-dimension multimodal benchmark functions have only a few local minima, and their dimensions are also small. Under such circumstances, it is difficult to judge the performance of individual algorithms. The major difference compared with the multimodal functions is that the fixed-dimension multimodal functions appear simpler because of their low dimensions and smaller number of local minima. In this experiment, the Best, Worst, Mean, and Std values on the fixed-dimension multimodal benchmark functions are summarized in Table 6. For all fixed-dimension multimodal functions, LMFO can give the best solution in terms of Best. As Table 6 shows, the LMFO algorithm provides the best results on two of the fixed-dimension multimodal benchmark functions, followed by the MFO, PSOGSA, and ABC algorithms. In addition, the p values of Wilcoxon's rank-sum test in Table 9 show that on one of these functions the result of LMFO is not significantly better than the DA and GGSA algorithms (at the 5% significance level), although it is significantly different from ABC, MFO, PSOGSA, and BA. On the remaining functions, however, the results of LMFO are significantly better than those of the other algorithms. It can therefore be concluded that the results of LMFO on these benchmark functions are better than those of ABC, BA, GGSA, DA, PSOGSA, and MFO.

In addition, the convergence behavior of LMFO on the 2-dimensional fixed-dimension benchmark functions is shown in Figures 16–19. As can be seen from these figures, LMFO has a faster convergence rate on two of these functions. From Figures 35–38, we can find that all of the algorithms except BA show strong stability on the fixed-dimension functions.

Overall, the results in Tables 4–6, Tables 7–9, Figures 1–19, and Figures 20–38 show that the proposed method is effective not only in optimizing unimodal and multimodal functions but also in optimizing fixed-dimension multimodal functions.

Since constraints are one of the major challenges in solving real problems and the main objective of designing the LMFO algorithm is to solve real problems, two constrained real engineering problems are employed in the next section to further investigate the performance of the LMFO algorithm and provide a comprehensive study.

5. LMFO for Engineering Optimization Problems

In this section, a set of two engineering problems (welded beam design and speed reducer design) is solved so as to further verify the performance of the proposed algorithm. Real problems involve inequality constraints, so the LMFO algorithm should be capable of dealing with them during optimization. Several methods have been applied to handle constraints in the literature: penalty functions, special operators, repair algorithms, separation of objectives and constraints, and hybrid methods [34]. In this paper, a penalty method is employed to handle the constraints of the welded beam and speed reducer problems.
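A minimal Python sketch of a static penalty method of this kind is shown below on a toy problem; the penalty coefficient and the exact penalty form used in the paper are not stated here, so both are assumptions of this illustration.

def penalized(objective, constraints, penalty=1e6):
    # Static penalty: add penalty * (total constraint violation) to the objective.
    # Each constraint g is written so that g(x) <= 0 means feasible.
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + penalty * violation
    return wrapped

# Toy example: minimize x0 + x1 subject to x0 * x1 >= 1 (i.e., 1 - x0*x1 <= 0).
f = penalized(lambda x: x[0] + x[1], [lambda x: 1.0 - x[0] * x[1]])
print(f([2.0, 2.0]), f([0.1, 0.1]))   # feasible point vs. heavily penalized infeasible point

The penalized objective can then be handed to the optimizer unchanged, since the algorithm itself only ever sees an unconstrained fitness function.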

5.1. Welded Beam Design

The objective is to find the minimal fabrication cost of a welded beam, as shown in Figure 39 [35]. The constraints of the beam are the shear stress (\(\tau\)), the bending stress in the beam (\(\sigma\)), the buckling load on the bar (\(P_c\)), the end deflection of the beam (\(\delta\)), and side constraints.

This problem has four design variables: the thickness of the weld (\(h\)), the length of the attached part of the bar (\(l\)), the height of the bar (\(t\)), and the thickness of the bar (\(b\)). The cost function to be minimized is \(f(h, l, t, b) = 1.10471\,h^{2}l + 0.04811\,t b\,(14.0 + l)\), subject to the stress, deflection, buckling, and side constraints listed in the standard formulation of the problem [35].

Mirjalili solved this problem using MFO [18] and GGSA [29, 36]. Coello Coello [37] and Deb [38, 39] employed GA, whereas Lee and Geem [40] used HS to solve this problem. Richardson's random method, the simplex method, Davidon-Fletcher-Powell, and Griffith and Stewart's successive linear approximation are the mathematical approaches that were adopted by Ragsdell and Phillips [41] for this problem. The comparison results for the welded beam design problem are shown in Table 10.

The results of Table 10 show that the LMFO algorithm is able to find the best optimal design compared to other algorithms. The results of LMFO are closely followed by the MFO and GGSA algorithms.

5.2. Speed Reducer Design

The objective of this problem is to minimize the total weight of the speed reducer illustrated in Figure 40 [42]. The variables \(x_1\)–\(x_7\) denote the face width (\(b\)), module of teeth (\(m\)), number of teeth in the pinion (\(z\)), length of the first shaft between bearings (\(l_1\)), length of the second shaft between bearings (\(l_2\)), and the diameters of the first (\(d_1\)) and second (\(d_2\)) shafts, respectively. The complete mathematical formulation of this problem follows the standard statement given in [42].

This problem has also been popular among researchers and has been optimized in many studies. The approaches that have been employed to optimize it include those of Akhtar et al. [43] and Mezura-Montes et al. [44], as well as CS [11, 45], HCPS [46], SCA [47], (\(\mu + \lambda\)) ES [5, 48], and ABC [9, 49]. The results for this problem are provided in Table 11. According to this table, the LMFO and HCPS algorithms can find a design with the minimum weight for this problem.

6. Results and Discussion

In this paper, an improved version of the MFO algorithm based on a Lévy-flight strategy, named LMFO, is proposed. In order to benchmark the performance of LMFO, experiments were conducted on nineteen unconstrained benchmark functions and two constrained engineering design problems.

According to the Best, Worst, Mean, and Std values and the p values in Section 4, the LMFO algorithm significantly outperforms the others in terms of numerical optimization. There are several reasons why the LMFO algorithm performed well on most of the test cases. First, the Lévy-flight strategy: Lévy-flight increases the diversity of the population and makes the algorithm jump out of local optima more effectively. This approach helps to make LMFO faster and more robust than MFO. Second, the update mechanism of moths: in this mechanism, moths are required to update their positions with respect to the best recent feasible flames. This approach promotes exploration of promising feasible regions and is a main reason for the superiority of the LMFO algorithm. Third, the Quicksort method is utilized in the LMFO algorithm. These are the reasons why LMFO performs better than the other algorithms in the results reported above. Another finding in the results is the poor performance of ABC, BA, and DA. These three algorithms belong to the class of swarm-based algorithms. In contrast to evolutionary algorithms, they have no mechanism for significant abrupt movements in the search space, and this is likely to be the reason for their poor performance.

As shown in Section 4, LMFO performs better than, or is highly competitive with, the other algorithms. The advantages of LMFO include its simplicity and the small number of parameters to tune. The work here shows LMFO to be robust, powerful, and effective over all types of benchmark functions. Benchmark evaluation is a good way of testing the performance of metaheuristic algorithms, but it also has some limitations. For example, different tuning parameter values in the optimization methods might lead to significant differences in their performance. A benchmark test may also arrive at entirely different conclusions if the termination criterion changes; if we change the population size or the number of iterations, we might draw a different conclusion.

In Section 5, the results show that LMFO outperforms the other algorithms in the majority of the real case studies. Since the search spaces of these problems are unknown, these results are strong evidence for the applicability of LMFO to real problems. Due to the constrained nature of the case studies, in addition, it can be stated that the LMFO algorithm is able to optimize search spaces with infeasible regions as well. This is due to the update mechanism of moths, in which they are required to update their positions with respect to the best recent feasible flames. This approach is therefore a main reason for the superiority of the LMFO algorithm.

In our study, nineteen benchmark functions have been applied to evaluate the performance of LMFO, and the proposed method has also been tested on real-world engineering problems. Moreover, LMFO has been compared with other optimization algorithms throughout.

7. Conclusion and Future Works

Due to the limited performance of MFO, a Lévy-flight strategy has been introduced into the standard MFO to develop a novel Lévy-flight moth-flame optimization algorithm for optimization problems. As shown in Section 4, LMFO is very efficient, with an almost exponential convergence rate, and the results were compared with a wide range of algorithms for verification. The proposed algorithm demonstrated superior performance on nineteen benchmark functions in terms of enhanced convergence speed and improved avoidance of local minima. This paper also identified and discussed the reasons for the poor performance of other algorithms; it was observed that the swarm-based algorithms suffer from low exploration, whereas LMFO does not.

Furthermore, this paper also solved two classical engineering problems using the LMFO algorithm. The high level of exploration and exploitation of this algorithm was the motivation for this study. The comparative results in Section 5 show that the LMFO algorithm has high performance on challenging constrained problems with unknown search spaces. In this work, LMFO combines the merits of MFO and Lévy-flight in order to avoid local optima. With both techniques combined, LMFO can balance exploration and exploitation and effectively solve complex problems and real-world engineering problems.

For future work, two research directions can be recommended. Firstly, we are going to apply LMFO to solve more real-world engineering problems. Secondly, it is recommended to develop binary and multiobjective versions of the LMFO algorithm.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work is supported by National Science Foundation of China under Grants no. 61463007 and 6153008.