Mathematical Problems in Engineering

Research Article | Open Access

Volume 2016 | Article ID 1423930 | 22 pages | https://doi.org/10.1155/2016/1423930

Lévy-Flight Moth-Flame Algorithm for Function Optimization and Engineering Design Problems

Academic Editor: Jose J. Muñoz
Received: 18 Apr 2016
Accepted: 12 Jul 2016
Published: 09 Aug 2016

Abstract

The moth-flame optimization (MFO) algorithm is a novel nature-inspired heuristic paradigm. Its main inspiration is the navigation method of moths in nature called transverse orientation: moths fly at night by maintaining a fixed angle with respect to the moon, a very effective mechanism for travelling in a straight line over long distances. However, these insects become trapped in a spiral path around artificial lights. To remedy the slow convergence and low precision of the MFO algorithm, an improved version based on a Lévy-flight strategy, named LMFO, is proposed. Lévy-flight increases the diversity of the population, counteracts premature convergence, and helps the algorithm escape local optima more effectively. This yields a better trade-off between the exploration and exploitation abilities of MFO, making LMFO faster and more robust than MFO. LMFO is compared with ABC, BA, GGSA, DA, PSOGSA, and MFO on 19 unconstrained benchmark functions and 2 constrained engineering design problems. The results demonstrate the superior performance of LMFO.

1. Introduction

Optimization is the process of finding the best possible solution(s) for a given problem. In the real world, many problems can be viewed as optimization problems. As the complexity of problems increases, the need for new optimization techniques becomes more evident than before. Over the past several decades, many methods have been proposed to solve optimization problems and have made great progress. For example, mathematical optimization techniques used to be the only tool for optimizing problems before heuristic optimization techniques were proposed. However, these methods require knowledge of properties of the optimization problem, such as continuity or differentiability. In recent years, metaheuristic optimization algorithms have become more and more popular. Some well-known algorithms in this field are Genetic Algorithms (GA) [1, 2], Particle Swarm Optimization (PSO) [3], Ant Colony Optimization (ACO) [4], Evolutionary Strategy (ES) [5], Differential Evolution (DE) [6], and Evolutionary Programming (EP) [7]. Applications of these algorithms can be found in many branches of science and industry as well. Despite the merits of these optimizers, a fundamental question is whether any single optimizer can solve all optimization problems. According to the No-Free-Lunch (NFL) theorem [8] for optimization, no such universal optimizer exists, which motivates researchers to develop new algorithms that solve particular optimization problems more effectively. Some of the latest algorithms are the Artificial Bee Colony (ABC) algorithm [9], Bat Algorithm (BA) [10], Cuckoo Search (CS) algorithm [11], Cuckoo Optimization Algorithm (COA) [12], Gravitational Search Algorithm (GSA) [13], Charged System Search (CSS) [14], Firefly Algorithm (FA) [15], Ray Optimization (RO) [16], and Dragonfly Algorithm (DA) [17].

The moth-flame optimization (MFO) [18] algorithm is a recent metaheuristic optimization method that imitates the navigation method of moths in nature called transverse orientation. In this algorithm, both moths and flames are solutions. The inventor of this algorithm, Seyedali Mirjalili, showed that it produces very competitive results compared with other state-of-the-art metaheuristic optimization algorithms. However, the MFO algorithm is still at an early research stage, and its convergence speed and calculation accuracy can be further improved. To improve the performance of MFO, a Lévy-flight moth-flame optimization (LMFO) algorithm is proposed.

Lévy-flight [11, 19] has a strong ability to strengthen global search and to overcome entrapment in local minima. To exploit this good behavior, we combine Lévy-flight with MFO; their advantages are complementary, so the proposed algorithm is faster and more robust. The proposed algorithm is verified on nineteen benchmark functions and two engineering problems.

The rest of the paper is organized as follows: Section 2 presents a brief introduction to MFO and Lévy-flight. An improved version of the MFO algorithm, LMFO, is proposed in Section 3. The experimental results on test functions and engineering design problems are shown in Sections 4 and 5, respectively. Results and discussion are provided in Section 6. Finally, Section 7 concludes the work.

2. MFO and Lévy-Flight

In this section, a brief background on the moth-flame optimization algorithm and Lévy-flight is provided.

2.1. MFO Algorithm

The moth-flame optimization [18] algorithm is a recent metaheuristic proposed by Seyedali Mirjalili, based on a simulation of the special navigation method of moths at night. Moths utilize a mechanism called transverse orientation for navigation: a moth flies by maintaining a fixed angle with respect to the moon, which is a very effective mechanism for travelling long distances in a straight path because the moon is very far away. This mechanism guarantees that moths fly in straight lines at night. However, we usually observe moths flying spirally around lights. In fact, moths are tricked by artificial lights and show this behavior because such a light is extremely close compared to the moon; maintaining a similar angle to the light source therefore causes a spiral flight path.

In the MFO algorithm, the set of moths is represented in a matrix M. For all the moths, there is an array OM that stores the corresponding fitness values. The second key components of the algorithm are the flames: a matrix F similar to the moth matrix is considered, together with an array OF that stores the corresponding fitness values.

The MFO algorithm is a three-tuple that approximates the global optimum of optimization problems, defined as follows:

MFO = (I, P, T)

I is a function that creates a random population of moths and the corresponding fitness values. The methodical model of this function is as follows:

I : ∅ → {M, OM}

The P function, which is the main function, moves the moths around the search space. This function receives the matrix M and eventually returns an updated version of it:

P : M → M

The T function returns true if the termination criterion is satisfied and false if it is not:

T : M → {true, false}

With I, P, and T, the general framework of the MFO algorithm is defined as follows:

M = I();
while T(M) is equal to false
    M = P(M);
end
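A minimal sketch of the initialization step I in Python (the function names and the quadratic test function are illustrative, not from the paper):

```python
import numpy as np

def initialize(n_moths, dim, lb, ub, fitness):
    """I: create a random moth matrix M and its fitness array OM."""
    M = lb + np.random.rand(n_moths, dim) * (ub - lb)  # one row per moth
    OM = np.array([fitness(m) for m in M])
    return M, OM

# Illustrative run on a simple quadratic (not one of the paper's benchmarks).
M, OM = initialize(30, 200, -100.0, 100.0, lambda x: float(np.sum(x ** 2)))
```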

After the initialization, the P function is run iteratively until the T function returns true. To simulate the behavior of moths mathematically, the position of each moth is updated with respect to a flame using the following equation:

M_i = S(M_i, F_j)     (5)

where M_i indicates the i-th moth, F_j indicates the j-th flame, and S is the spiral function.

Any type of spiral can be utilized here subject to the following conditions:
(1) The spiral's initial point should start from the moth.
(2) The spiral's final point should be the position of the flame.
(3) The fluctuation range of the spiral should not exceed the search space.

Considering these points, a logarithmic spiral is defined for the MFO algorithm as follows:

S(M_i, F_j) = D_i · e^(bt) · cos(2πt) + F_j     (6)

where D_i indicates the distance of the i-th moth from the j-th flame, b is a constant defining the shape of the logarithmic spiral, and t is a random number in [−1, 1].

D_i is calculated as follows:

D_i = |F_j − M_i|     (7)

where M_i indicates the i-th moth, F_j indicates the j-th flame, and D_i indicates the distance of the i-th moth from the j-th flame.

Equation (6) describes the spiral flying path of moths: the next position of a moth is defined with respect to a flame. The parameter t in the spiral equation defines how close the next position of the moth is to the flame (t = −1 is the closest position to the flame, while t = 1 shows the farthest).
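The spiral update can be sketched as follows (a hedged illustration; b = 1 is an assumed default, and the optional t argument merely makes the update deterministic for testing):

```python
import numpy as np

def spiral_update(moth, flame, b=1.0, t=None):
    """One application of S(M_i, F_j) = D_i * exp(b*t) * cos(2*pi*t) + F_j."""
    if t is None:
        t = np.random.uniform(-1.0, 1.0, np.shape(moth))  # t in [-1, 1]
    D = np.abs(flame - moth)  # distance of the moth from the flame
    return D * np.exp(b * t) * np.cos(2 * np.pi * t) + flame
```

At t = −1 the moth lands closest to the flame (offset D·e^(−b)), while t = 1 gives the farthest position, mirroring the description above.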

A question that may arise here is that the position updating in (6) only requires the moths to move towards a flame, which can cause the MFO algorithm to be trapped in local optima quickly. To prevent this, each moth is obliged to update its position using only one of the flames in (6). Another concern is that position updating with respect to different locations in the search space may degrade exploitation of the best promising solutions. To resolve this concern, an adaptive mechanism for the number of flames is provided. The following formula is utilized in this regard:

flame no = round(N − l · (N − 1) / T)     (8)

where l is the current iteration number, N is the maximum number of flames, and T indicates the maximum number of iterations.
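The adaptive flame-count rule can be sketched directly (variable names are assumed for illustration):

```python
def flame_count(l, T, N):
    """Flames shrink from N at the start of the run to 1 at the end."""
    return round(N - l * (N - 1) / T)
```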

The gradual decrease in the number of flames balances exploration and exploitation of the search space. After all, the general steps of the P function are described in Algorithm 1.

(1) Initialize the position of moths
(2) while (Iteration <= Max_iteration)
(3)     Update flame no using (8)
(4)     OM = FitnessFunction(M);
(5)     if Iteration == 1
(6)         F = sort(M);
(7)         OF = sort(OM);
(8)     else
(9)         F = sort(M_{t−1}, M_t);
(10)        OF = sort(OM_{t−1}, OM_t);
(11)    end
(12)    for i = 1 : n
(13)        for j = 1 : d
(14)            Update r and t
(15)            Calculate D using (7) with respect to the corresponding moth
(16)            Update M(i, j) using (5) and (6) with respect to the corresponding moth
(17)        end
(18)    end

As described in Algorithm 1, the P function is executed until the T function returns true. After termination of the P function, the best moth is returned as the best obtained approximation of the optimum.

Note that the Quicksort method is utilized in MFO; its computational complexity is O(n log n) in the best case and O(n²) in the worst case (where n is the number of moths).

2.2. Lévy-Flight

Lévy-flight was originally introduced in 1937 by the French mathematician Paul Lévy. It is a statistical description of motion that extends beyond the more traditional Brownian motion discovered over one hundred years earlier. A diverse range of both natural and artificial phenomena are now described in terms of Lévy statistics [19].

Generally speaking, animals search for food in a random manner, moving from one place to another. A large number of studies have shown that the flight behavior of many animals and insects exhibits typical characteristics of randomness, and that the choice of direction can be described by a mathematical model [20] called Lévy-flight. For instance, many studies have shown that the flight behavior of many animals and insects reveals the typical characteristics of Lévy-flight [21–24]. According to [24], fruit flies (Drosophila melanogaster) explore their landscape using a series of straight flight paths punctuated by sudden turns, resulting in a Lévy-flight-style intermittent scale-free pattern. Studies of human behavior, such as the Ju/'hoansi hunter-gatherer foraging patterns [21], also show typical features of Lévy-flight. Pavlyukevich used Lévy-flight to present and theoretically justify a new stochastic algorithm for global optimization. Even light can be related to Lévy-flight [20]. Subsequently, Lévy-flight has been applied to optimization and optimal search, and preliminary results show its promising capability [22, 25].

3. The Proposed LMFO Approach

In order to increase the diversity of the population, counteract premature convergence, and accelerate convergence speed, this paper proposes an improved Lévy-flight moth-flame optimization (LMFO) algorithm. Lévy-flight has the prominent property of increasing population diversity, which helps the algorithm jump out of local optima effectively. In other words, this approach is beneficial for obtaining a better trade-off between the exploration and exploitation abilities of MFO. Therefore, after the position updating, we let each moth perform one Lévy-flight using (9), which is formulated as follows [11, 26]:

M_i^{t+1} = M_i^t + α · sign[rand − 1/2] ⊕ Lévy(λ)     (9)

where M_i^t is the i-th moth (solution vector) at iteration t, α is a random parameter drawn from a uniform distribution, ⊕ is the dot product (entrywise multiplication), and rand is a random number in [0, 1]. It should be noted here that sign[rand − 1/2] takes only three values: 1, 0, and −1. In (9), the combination of sign[rand − 1/2] and Lévy-flight makes the moth's walk more random; it is this combination that rids the basic MFO of local minima and improves its global search capability. Lévy-flights are a kind of random walk in which the steps are determined by the step lengths, and the jumps conform to a Lévy distribution as follows [11, 27]:

Lévy ~ u = t^{−λ},  1 < λ ≤ 3     (10)

Lévy random numbers are calculated via formula (11):

Lévy(λ) ~ (φ × μ) / |ν|^{1/β}     (11)

where μ and ν are both drawn from standard normal distributions, Γ is the standard Gamma function, β = 1.5, and φ is defined as follows:

φ = [ Γ(1 + β) × sin(πβ/2) / ( Γ((1 + β)/2) × β × 2^{(β−1)/2} ) ]^{1/β}     (12)
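Mantegna's algorithm for generating Lévy step lengths, as described by (11) and (12), can be sketched in Python (β = 1.5 as in the text):

```python
import math
import numpy as np

def levy_step(dim, beta=1.5):
    """Draw a dim-dimensional Levy(beta) step via Mantegna's algorithm."""
    phi = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
           / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = np.random.normal(0.0, phi, dim)   # numerator: N(0, phi^2)
    nu = np.random.normal(0.0, 1.0, dim)   # denominator: standard normal
    return mu / np.abs(nu) ** (1 / beta)
```

Occasional very large steps (the heavy tail) are exactly what lets a moth escape a poor local optimum.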

To sum up, the global search ability of the proposed algorithm is strengthened by a random walk with Lévy-flight, which eliminates the main weakness of MFO: entrapment in local minima is prevented, and more successful results are observed, particularly for unimodal and multimodal benchmark functions. Because of these features, the proposed algorithm has the potential to outperform MFO. In the following section, a variety of benchmark functions is employed to verify the effectiveness of the proposed algorithm. The main steps of Lévy-flight moth-flame optimization are presented in Algorithm 2.

(1) Initialize the position of moths
(2) while (Iteration <= Max_iteration)
(3)     Update flame no using (8)
(4)     OM = FitnessFunction(M);
(5)     if Iteration == 1
(6)         F = sort(M);
(7)         OF = sort(OM);
(8)     else
(9)         F = sort(M_{t−1}, M_t);
(10)        OF = sort(OM_{t−1}, OM_t);
(11)    end
(12)    for i = 1 : n
(13)        for j = 1 : d
(14)            Update r and t
(15)            Calculate D using (7) with respect to the corresponding moth
(16)            Update M(i, j) using (5) and (6) with respect to the corresponding moth
(17)        end
(18)    end
(19)    for each search agent
(20)        Update the position of the current search agent using Lévy-flight (9)
(21)    end
(22)    Iteration = Iteration + 1;
(23) end
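Putting the pieces together, the main LMFO loop can be sketched as follows. This is a simplified single-file illustration, not the authors' code: flames here are just the current moths sorted by fitness (the full algorithm merges previous flames with current moths), and the Lévy step scale 0.01 is an assumed value.

```python
import math
import numpy as np

def lmfo(fitness, n=30, dim=10, lb=-100.0, ub=100.0, max_iter=200, b=1.0, beta=1.5):
    """Simplified LMFO: MFO spiral update plus a Levy-flight perturbation."""
    phi = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
           / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    M = lb + np.random.rand(n, dim) * (ub - lb)
    best_pos, best_val = None, math.inf
    for it in range(1, max_iter + 1):
        OM = np.array([fitness(m) for m in M])
        order = np.argsort(OM)
        F, OF = M[order].copy(), OM[order]  # flames: sorted current moths (simplified)
        if OF[0] < best_val:
            best_val, best_pos = float(OF[0]), F[0].copy()
        flame_no = round(n - it * (n - 1) / max_iter)  # adaptive flame count, eq. (8)
        for i in range(n):
            j = min(i, flame_no - 1)  # surplus moths all spiral around the last flame
            t = np.random.uniform(-1.0, 1.0, dim)
            D = np.abs(F[j] - M[i])
            M[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + F[j]  # logarithmic spiral
            # Levy-flight perturbation in the spirit of eq. (9); 0.01 is assumed
            step = (np.random.normal(0.0, phi, dim)
                    / np.abs(np.random.normal(0.0, 1.0, dim)) ** (1 / beta))
            M[i] += 0.01 * np.sign(np.random.rand(dim) - 0.5) * step
            M[i] = np.clip(M[i], lb, ub)
    return best_pos, best_val
```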

4. Simulation Experiments

4.1. Simulation Platform

All algorithms are tested in MATLAB R2012a (7.14); the numerical experiments are run on an Intel Core(TM) i5-4590 processor at 3.30 GHz with 4 GB RAM, under Windows 7.

4.2. Benchmark Functions

It is common in this field to benchmark the performance of an algorithm on a set of mathematical functions with known global optima. The same process is followed here: nineteen standard benchmark functions are employed from the literature [27, 28] as test beds for comparison. Three groups of benchmark functions with different characteristics are selected to benchmark the performance of the LMFO algorithm from different perspectives. As shown in Tables 1–3, these benchmark functions are divided into three groups: unimodal functions, multimodal functions, and fixed-dimension multimodal functions. As their names imply, unimodal functions are suitable for benchmarking the exploitation and convergence of an algorithm since they have one global optimum and no local optima. In contrast, multimodal functions have more than one optimum, which makes them more challenging: one of the optima is the global optimum, and the rest are local optima, all of which an algorithm must avoid in order to approach and approximate the global optimum. Therefore, the exploration and local-optima avoidance of algorithms can be benchmarked by multimodal functions. The employed benchmark functions are listed in Tables 1, 2, and 3, respectively. In these tables, Range represents the boundary of the function's search space, Dim denotes the dimension of the function, and f_min is the theoretical minimum of the function.


Table 1: Unimodal benchmark functions (definitions as in [27, 28]).

Name            | Range         | Dim | f_min
Sphere          | [−100, 100]   | 200 | 0
Schwefel's 2.22 | [−10, 10]     | 200 | 0
Schwefel's 1.2  | [−100, 100]   | 200 | 0
Schwefel's 2.21 | [−100, 100]   | 200 | 0
Rosenbrock      | [−30, 30]     | 200 | 0
Step            | [−100, 100]   | 200 | 0
Quartic         | [−1.28, 1.28] | 200 | 0
X.S. Yang-7     | [−5, 5]       | 200 | 0


Table 2: Multimodal benchmark functions (definitions as in [27, 28]).

Name        | Range         | Dim | f_min
Rastrigin   | [−5.12, 5.12] | 200 | 0
Ackley      | [−32, 32]     | 200 | 0
Griewank    | [−600, 600]   | 200 | 0
Penalized 1 | [−50, 50]     | 200 | 0
Penalized 2 | [−50, 50]     | 200 | 0
Alpine      | [−10, 10]     | 200 | 0
Zakharov    | [−5, 10]      | 200 | 0


Table 3: Fixed-dimension multimodal benchmark functions (definitions as in [27, 28]).

Name            | Range         | Dim | f_min
Goldstein-Price | [−2, 2]       | 2   | 3
Drop Wave       | [−5.12, 5.12] | 2   | −1
Schaffer's F6   | [−100, 100]   | 2   | −1
Easom           | [−2π, 2π]     | 2   | −1
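For concreteness, two of the benchmark functions above can be written out using their standard definitions from the benchmark literature:

```python
import numpy as np

def sphere(x):
    """Table 1: unimodal; f_min = 0 at x = 0, range [-100, 100]."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Table 2: multimodal; f_min = 0 at x = 0, range [-5.12, 5.12]."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```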

Heuristic algorithms are stochastic optimization techniques, so they must be run many times to generate meaningful statistical results. Here, the best solution obtained in the last iteration of each run is taken as the performance metric, and results are generated and reported over 30 independent runs. Note, however, that averages and standard deviations only compare the overall performance of the algorithms.

To explore the performance of the proposed LMFO algorithm, some recent and well-known algorithms from the literature are chosen for comparison: ABC [9], BA [10], GGSA [29], DA [17], PSOGSA [30], and MFO [18]. Note that 30 search agents and 1000 iterations are used for each algorithm. The number of moths (or other candidate solutions in the other algorithms) should be selected experimentally.

In this paper, Best, Mean, Worst, and Std represent the optimal fitness value, mean fitness value, worst fitness value, and standard deviation, respectively. Experimental results are listed in Tables 4, 5, and 6. The best results are denoted in bold type.


Table 4: Results on the unimodal benchmark functions F1–F8 over 30 runs. For each function, the Best, Worst, Mean, and Std values of ABC, BA, GGSA, DA, PSOGSA, MFO, and LMFO appear left to right.

(F1) Best: 19896.78365717.316278.781513.43534835.45140219.41.2921E − 234
(F1) Worst: 36049.2345628424262.8452511.62142227.4240000.73.3309E − 183
(F1) Mean: 26677.04414807.819224.0424716.8294735.12185588.41.1103E − 184
(F1) Std: 4640.38320515.241852.35511046.5525051.9224177.670

(F2) Best: 108.07351.83E + 84149.726326.27986637.56420.55128.5E − 129
(F2) Worst: 186.46723.9E + 99189.8577255.799238.06E + 40722.49557.3E − 99
(F2) Mean: 140.28111.45E + 98165.8829137.885942.69E + 39560.11732.4E − 100
(F2) Std: 14.52527.06E + 989.59901856.6742921.47E + 4061.667491.3E − 99

(F3) Best: 582404823783.5160277.369110.62344077.3462517.71.6E − 215
(F3) Worst: 920006.246176251020727906356.8891332.510943666.4E − 167
(F3) Mean: 757278.31572330434895.9342508.6541491.3772538.33.3E − 168
(F3) Std: 72837.78837152.1190865.2203490.9142131.3170946.50

(F4) Best: 94.26587.4899720.4427327.048676.2540195.260146.1E − 117
(F4) Worst: 97.4010292.1228231.3011559.6961798.7430398.159996.18E − 82
(F4) Mean: 95.865590.0986527.1232841.9791495.6983197.036532.06E − 83
(F4) Std: 0.7782291.3301542.5512747.809095.908580.6876211.13E − 82

(F5) Best: 16111660883283392399953471714125656823.61E + 08196.9121
(F5) Worst: 940417611.59E + 085103632479766034.82E + 088.92E + 08198.6685
(F5) Mean: 413448781.33E + 083740872182579951.35E + 086.14E + 08198.2609
(F5) Std: 1756795218028179643192.1109334331.24E + 081.41E + 080.459128

(F6) Best: 17165.43384155.3148994422.43733665.58136678.539.65709
(F6) Worst: 30743.79453905.52487363473.4109123.6228511.542.92694
(F6) Mean: 23922.9941790319220.4724167.1676561.32180669.841.49763
(F6) Std: 3074.40617001.782314.72112441.119018.5524671.190.632867

(F7) Best: 51.880920.4002526.4924299.2380111.892781181.9912.3E − 06
(F7) Worst: 299.22070.74465614.77883181.735171.665142686.2550.000427
(F7) Mean: 157.16760.5517469.09019952.0648819.836661908.7888.56E − 05
(F7) Std: 58.735750.0728091.74247939.3458710.47516360.77080.000108

(F8) Best: 181.8735172.164528.4597814.59629130.6204168.68431.62807
(F8) Worst: 198.3101200.190540.2105480.16251169.9275202.24271.840634
(F8) Mean: 190.2888188.03633.6719543.38706153.398187.91611.752264
(F8) Std: 4.1686616.8679592.23283513.649810.913747.6016350.06096


Table 5: Results on the multimodal benchmark functions F9–F15 over 30 runs. For each function, the Best, Worst, Mean, and Std values of ABC, BA, GGSA, DA, PSOGSA, MFO, and LMFO appear left to right.

(F9) Best: 575.16081363.3691563.422730.92416922.18321769.4410
(F9) Worst: 709.0191810.4651867.6161890.95451476.2722125.690
(F9) Mean: 653.93161630.7681752.8381367.38991231.5781951.4260
(F9) Std: 36.46425100.720381.02372280.04565117.190379.018270

(F10) Best: 12.5425819.2052710.529078.66661219.2330919.921318.88E − 16
(F10) Worst: 14.804519.956411.8904214.7543619.9667720.018978.88E − 16
(F10) Mean: 13.8211319.7361411.2102411.8890919.6971919.954338.88E − 16
(F10) Std: 0.5541620.2601540.3836541.4537040.311070.0193960

(F11) Best: 142.25484290.404137.628561.43179654.93881229.640
(F11) Worst: 316.0365283.435221.9276600.1511631.5472048.4540
(F11) Mean: 225.31034997.492171.5404224.07441251.6951543.040
(F11) Std: 51.75854184.813516.55978104.4042204.6207197.60010

(F12) Best: 73933983.86E + 0828.177741662.38470433987.03E + 088.88E − 16
(F12) Worst: 1.67E + 086.85E + 0877685.77231720332.05E + 091.92E + 098.88E − 16
(F12) Mean: 587310935.59E + 0811043.7625160127.28E + 081.3E + 098.88E − 16
(F12) Std: 371775918191407920355.0346412045.15E + 083.01E + 080

(F13) Best: 194649681.25E + 09646700.52620391266665171.68E + 0919.4713
(F13) Worst: 3.4E + 082.09E + 0933190031.83E + 082.47E + 094.04E + 0919.79025
(F13) Mean: 1.53E + 081.72E + 091684844271830539.73E + 082.62E + 0919.62771
(F13) Std: 822965651.89E + 08724732.9347599747.07E + 085.46E + 080.072549

(F14) Best: 47.2870690.66079103.328315.3869458.10706129.05951.1E − 125
(F14) Worst: 62.0743168.7346139.0044197.7987107.5327216.95781.1E − 103
(F14) Mean: 53.55967119.4603118.7804116.361980.96392171.30743.9E − 105
(F14) Std: 3.43808219.186618.5258943.541112.9369523.186542E − 104

(F15) Best: 5373.3544315.808461.66981142.9244411.2445741.4175E − 168
(F15) Worst: 6101.20410167.763585.3454914.8449429.56811241.252.6E − 117
(F15) Mean: 5810.78952831327.2343451.426873.2228566.5258.6E − 119
(F15) Std: 207.77041065.437640.87661139.3561451.2941527.9924.7E − 118


Table 6: Results on the fixed-dimension multimodal benchmark functions F16–F19 over 30 runs. For each function, the Best, Worst, Mean, and Std values of ABC, BA, GGSA, DA, PSOGSA, MFO, and LMFO appear left to right.

(F16) Best: 3.000547333333
(F16) Worst: 3.05284884.00001303.1797158433.000357
(F16) Mean: 3.01598915.67.1442153.0185848.433.000061
(F16) Std: 0.01590925.301779.2506190.04937620.550361.83E − 157.19E − 05

(F17) Best: −1−1−1−1−1−1−1
(F17) Worst: −0.98844−0.36913−0.93625−0.78575−0.93625−0.93625−1
(F17) Mean: −0.99731−0.71344−0.949−0.94932−0.9915−0.96175−1
(F17) Std: 0.0031960.1755630.0259380.0536410.0220430.0317670

(F18) Best: −0.99909−0.99028−1−1−1−1−1
(F18) Worst: −0.98981−0.54822−0.99028−0.92181−0.99028−0.99028−1
(F18) Mean: −0.99095−0.71504−0.99286−0.98153−0.99093−0.99061−1
(F18) Std: 0.0020230.1268560.0043490.020470.0024650.0017740

(F19) Best: −1−1−1−1−1−1−1
(F19) Worst: −0.99991−8.1E − 05−8.1E − 05−0.95227−1−1−0.99918
(F19) Mean: −0.99999−0.80002−0.92221−0.99656−1−1−0.99973
(F19) Std: 2.03E − 050.4068050.2579440.010494000.000229

Due to the stochastic nature of the algorithms, statistical tests should be conducted to confirm the significance of the results [31]. Averages and standard deviations only compare the overall performance of the algorithms, while a statistical test considers each run's results and shows whether the differences are statistically significant. To determine whether the results of LMFO differ statistically from the best results of ABC, BA, GGSA, DA, PSOGSA, and MFO, a nonparametric test known as Wilcoxon's rank-sum test [32, 33] is performed at the 5% significance level. Tables 7, 8, and 9 report the p values produced by Wilcoxon's test for the pairwise comparisons of six groups: ABC versus LMFO, BA versus LMFO, GGSA versus LMFO, DA versus LMFO, PSOGSA versus LMFO, and MFO versus LMFO. In general, p values < 0.05 can be considered sufficient evidence against the null hypothesis, confirming that the results are not generated by chance.
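The rank-sum comparison can be reproduced with a small normal-approximation implementation (an illustration; the paper does not state its exact tooling, and this sketch omits tie handling):

```python
import math
import numpy as np

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation."""
    n1, n2 = len(x), len(y)
    ranks = np.empty(n1 + n2)
    order = np.argsort(np.concatenate([x, y]))
    ranks[order] = np.arange(1, n1 + n2 + 1)   # rank the pooled sample
    W = ranks[:n1].sum()                       # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean of W under H0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sd
    return math.erfc(abs(z) / math.sqrt(2.0))  # equals 2 * (1 - Phi(|z|))
```

Two fully separated samples of 30 runs each yield a p value on the order of 1E−11, consistent with the magnitudes reported in the tables below.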



Table 7: p values of Wilcoxon's rank-sum test on the unimodal benchmark functions (identical for all of F1–F8).

Comparison         | F1–F8
ABC versus LMFO    | 3.02E − 11
BA versus LMFO     | 3.02E − 11
GGSA versus LMFO   | 3.02E − 11
DA versus LMFO     | 3.02E − 11
PSOGSA versus LMFO | 3.02E − 11
MFO versus LMFO    | 3.02E − 11



Table 8: p values of Wilcoxon's rank-sum test on the multimodal benchmark functions (F9–F15).

Comparison         | F9         | F10        | F11        | F12        | F13        | F14        | F15
ABC versus LMFO    | 1.21E − 12 | 1.21E − 12 | 1.21E − 12 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11
BA versus LMFO     | 1.21E − 12 | 1.21E − 12 | 1.21E − 12 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11
GGSA versus LMFO   | 1.21E − 12 | 1.21E − 12 | 1.21E − 12 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11
DA versus LMFO     | 1.21E − 12 | 1.21E − 12 | 1.21E − 12 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11
PSOGSA versus LMFO | 1.21E − 12 | 1.21E − 12 | 1.21E − 12 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11
MFO versus LMFO    | 1.21E − 12 | 1.21E − 12 | 1.21E − 12 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11 | 3.02E − 11



Table 9: p values of Wilcoxon's rank-sum test on the fixed-dimension multimodal benchmark functions (F16–F19).

Comparison         | F16        | F17        | F18        | F19
ABC versus LMFO    | 3.02E − 11 | 1.21E − 12 | 1.21E − 12 | 5.07E − 10
BA versus LMFO     | 7.29E − 03 | 1.21E − 12 | 1.21E − 12 | 6.77E − 05
GGSA versus LMFO   | 7.70E − 02 | 8.81E − 10 | 5.13E − 11 | 2.48E − 08
DA versus LMFO     | 5.19E − 02 | 4.56E − 10 | 1.45E − 11 | 2.96E − 06
PSOGSA versus LMFO | 7.91E − 09 | 4.18E − 02 | 7.15E − 13 | 1.21E − 12
MFO versus LMFO    | 2.56E − 11 | 6.89E − 07 | 1.17E − 13 | 1.21E − 12

4.3. Unimodal Benchmark Functions

The unimodal benchmark functions have only one global minimum and no local minima, so they are very suitable for benchmarking the convergence capability of algorithms. According to the results in Table 4, LMFO provides very competitive results: it outperforms all other algorithms on F1–F8. Therefore, the proposed algorithm performs strongly in finding the global minimum of unimodal benchmark functions. According to the p values for F1–F8 in Table 7, LMFO achieves a significant improvement on all the unimodal benchmark functions compared to the other algorithms, which shows that LMFO outperforms them in searching for the global optimum of unimodal benchmark functions.

Figures 1–8 illustrate the averaged convergence curves of all algorithms on the unimodal benchmark functions over 30 independent runs (all convergence curves in the following subsections are likewise averaged). As may be seen from these curves, LMFO has the fastest convergence speed of all the algorithms. From Table 4 and Figures 20–27, LMFO's Std is much smaller than that of the other algorithms, showing that LMFO is more stable and robust than the other algorithms.

4.4. Multimodal Benchmark Functions

In contrast to the unimodal benchmark functions, multimodal benchmark functions have many local minima, their number increasing exponentially with dimension. This makes them suitable for benchmarking the exploration ability of an algorithm, and the final results are more important because these functions reflect the ability of an algorithm to escape poor local optima and obtain the global optimum. The statistical results on the multimodal benchmark functions are presented in Table 5. As the Best, Worst, Mean, and Std values show, LMFO also provides very competitive results on the multimodal benchmark functions, indicating that the LMFO algorithm has merit in terms of exploration. According to the p values for F9–F15 reported in Table 8, LMFO achieves a significant improvement on the 200-dimensional functions compared to the other algorithms, performing significantly better in all six groups of comparisons. These p values are less than 0.05, which is strong evidence against the null hypothesis and demonstrates that the results of LMFO are statistically significant and do not occur by coincidence.

As seen from Table 5 and Figures 9–15, the convergence rate of LMFO on the multimodal benchmark functions is better than that of the other algorithms in the majority of cases; we can conclude that LMFO is able to avoid local minima in multimodal benchmark functions with a good convergence speed. From Table 5 and Figures 28–37, LMFO's Std is much smaller than that of the other algorithms, again showing that LMFO is more stable and robust than the other algorithms.

4.5. Fixed-Dimension Multimodal Benchmark Functions

Fixed-dimension multimodal benchmark functions have only a few local minima, and their dimensions are also small. Under such circumstances it is harder to separate the performance of individual algorithms; the major difference from the multimodal functions is that fixed-dimension multimodal functions appear simpler because of their low dimensions and smaller number of local minima. The Best, Worst, Mean, and Std values for these functions are summarized in Table 6. For all fixed-dimension multimodal functions, LMFO gives the best solution in terms of Best. As Table 6 shows, the LMFO algorithm provides the best overall results on two of the fixed-dimension multimodal benchmark functions, followed by the MFO, PSOGSA, and ABC algorithms. In addition, the Wilcoxon rank-sum p values in Table 9 show that the result of LMFO on F16 is not significantly better than the DA and GGSA algorithms (at the 5% significance level), but it is significantly different from ABC, MFO, PSOGSA, and BA. On the remaining functions (F17, F18, and F19), the results of LMFO are significantly better than those of the other algorithms. It can therefore be concluded that the results of LMFO on these benchmark functions are better than those of ABC, BA, GGSA, DA, PSOGSA, and MFO.

In addition, the convergence behavior of LMFO on the 2-dimensional fixed-dimension benchmark functions is shown in Figures 16–19; from these figures, LMFO has the faster convergence rate on two of these functions. From Figures 35–38, we can find that all of the algorithms are quite stable on the fixed-dimension functions except BA.

Overall, the results in Tables 4–6, Tables 7–9, Figures 1–19, and Figures 20–38 show that the proposed method is effective in optimizing not only unimodal and multimodal functions but also fixed-dimension multimodal functions.

Since constraints are one of the major challenges in solving real problems, and the main objective of designing the LMFO algorithm is to solve real problems, two constrained real engineering problems are employed in the next section to further investigate the performance of the LMFO algorithm and provide a comprehensive study.

5. LMFO for Engineering Optimization Problems

In this section, a set of two engineering problems (welded beam design and speed reducer design) is solved to further test the performance of the proposed algorithm. Real problems involve inequality constraints, so the LMFO algorithm must be able to handle them during optimization. Several constraint-handling methods exist in the literature: penalty functions, special operators, repair algorithms, separation of objectives and constraints, and hybrid methods [34]. In this paper, the penalty method is employed to handle the constraints of the welded beam and speed reducer problems.
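A static penalty scheme of the kind described above can be sketched as follows (the quadratic penalty form and the coefficient 1e6 are assumed values, not taken from the paper):

```python
def penalized(objective, constraints, x, coeff=1.0e6):
    """Static penalty: add coeff * sum of squared violations of g_i(x) <= 0."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + coeff * violation
```

Feasible points keep their original objective value, while infeasible ones are pushed up sharply, so an unconstrained optimizer such as LMFO can be applied directly.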

5.1. Welded Beam Design

The objective is to find the minimal fabrication cost of a welded beam, as shown in Figure 39 [35]. The constraints of the beam are shear stress (τ), bending stress in the beam (σ), buckling load on the bar (P_c), end deflection of the beam (δ), and side constraints.

This problem has four variables: thickness of the weld (h), length of the attached part of the bar (l), height of the bar (t), and thickness of the bar (b). With x = (x1, x2, x3, x4) = (h, l, t, b), the cost objective is

min f(x) = 1.10471 x1² x2 + 0.04811 x3 x4 (14 + x2)

subject to constraints on τ(x), σ(x), δ(x), and P_c(x) as given in [35].

Mirjalili tried to solve this problem using MFO [18] and GGSA [29, 36]. Coello Coello [37] and Deb [38, 39] employed GA, whereas Lee and Geem [40] used HS to solve this problem. Richardson’s random method, simplex method, Davidon-Fletcher-Powell, and Griffith and Stewart’s successive linear approximation are the mathematical approaches that have been adopted by Ragsdell and Philips [41] for this problem. The comparison results of the welded beam design problem are shown in Table 10.


Table 10: Comparison of results for the welded beam design problem.

Algorithm               | h        | l        | t        | b        | Optimal cost
LMFO                    | 0.2020   | 3.3575   | 9.0938   | 0.2061   | 1.7165
MFO [18]                | 0.2057   | 3.4703   | 9.0364   | 0.2057   | 1.72452
GGSA [29, 36]           | 0.215917 | 3.314955 | 8.896195 | 0.215917 | 1.770829
GA (Coello Coello) [37] | N/A      | N/A      | N/A      | N/A      | 1.8245
GA (Deb) [38]           | N/A      | N/A      | N/A      | N/A      | 2.3800
GA (Deb) [39]           | 0.2489   | 6.1730   | 8.1789   | 0.2533   | 2.4331
HS (Lee and Geem) [40]  | 0.2442   | 6.2231   | 8.2915   | 0.2443   | 2.3807
Random [41]             | 0.4575   | 4.7313   | 5.0853   | 0.6600   | 4.1185
Simplex [41]            | 0.2792   | 5.6256   | 7.7512   | 0.2796   | 2.5307
David [41]              | 0.2434   | 6.2552   | 8.2915   | 0.2444   | 2.3841
APPROX [41]             | 0.2444   | 6.2189   | 8.2915   | 0.2444   | 2.3815

The results of Table 10 show that the LMFO algorithm is able to find the best optimal design compared to other algorithms. The results of LMFO are closely followed by the MFO and GGSA algorithms.
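As a sanity check, the costs in Table 10 can be reproduced from the standard welded-beam cost objective used in this literature, f(h, l, t, b) = 1.10471 h² l + 0.04811 t b (14 + l):

```python
def weld_cost(h, l, t, b):
    """Fabrication cost of the welded beam (standard objective from [35])."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

lmfo_cost = weld_cost(0.2020, 3.3575, 9.0938, 0.2061)  # LMFO design from Table 10
mfo_cost = weld_cost(0.2057, 3.4703, 9.0364, 0.2057)   # MFO design from Table 10
```

Evaluating the tabulated LMFO and MFO designs recovers the reported costs of 1.7165 and 1.72452, respectively.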

5.2. Speed Reducer Design

The objective of this problem is to minimize the total weight of the speed reducer illustrated in Figure 40 [42]. The variables x1–x7 denote the face width (b), module of teeth (m), number of teeth in the pinion (z), length of the first shaft between bearings (l1), length of the second shaft between bearings (l2), diameter of the first shaft (d1), and diameter of the second shaft (d2), respectively. The mathematical formulation of this problem can be summarized as follows: