Abstract
To improve the performance of the krill herd (KH) algorithm, in this paper a Lévy-flight krill herd (LKH) algorithm is proposed for solving optimization tasks within limited computing time. The improvement is the addition of a new local Lévy-flight (LLF) operator to the krill updating process, which improves the algorithm's efficiency and reliability on global numerical optimization problems. The LLF operator encourages exploitation and makes the krill individuals search the space carefully at the end of the search. An elitism scheme is also applied to preserve the best krill during the updating process. Fourteen standard benchmark functions are used to verify the effects of these improvements, and the results illustrate that, in most cases, the performance of this novel metaheuristic LKH method is superior to, or at least highly competitive with, the standard KH and other population-based optimization methods. In particular, the new method accelerates global convergence to the true global optimum while preserving the main features of the basic KH.
1. Introduction
In today's competitive world, human beings attempt to extract the maximum output or profit from a restricted amount of available resources. In the case of engineering optimization, such as design optimization of tall steel buildings [1], optimum design of gravity retaining walls [2], water, geotechnical and transport engineering [3], and structural optimization and design [4, 5], engineers attempt to design structures that satisfy all design requirements at the minimum possible cost. Most real-world engineering optimization problems can be converted into general global optimization problems; therefore, the study of global optimization is of vital importance for engineering optimization. To this end, many biologically inspired intelligent techniques [6] have been developed as optimization tools and applied to solve engineering optimization problems. A common way to classify these techniques is by their nature, into two main groups: classical methods and modern intelligent algorithms. Classical methods such as hill climbing follow a rigorous procedure and will generate the same set of solutions if the iterations start from the same initial point. Modern intelligent algorithms, on the other hand, often generate different solutions even from the same initial value; in general, however, the final solutions, though slightly different, converge to the same optimal values within a given accuracy. The emergence of metaheuristic optimization algorithms, a product of artificial intelligence and mathematical theory, has opened up a new facet for carrying out the optimization of a function. Recently, nature-inspired metaheuristic algorithms have performed powerfully and efficiently in solving modern nonlinear numerical global optimization problems.
To some extent, all metaheuristic algorithms attempt to resolve the conflict between diversification/exploration/randomization (global search) and intensification/exploitation (local search) [7, 8].
Inspired by nature, these powerful metaheuristic algorithms have been proposed to solve NP-hard tasks, such as UCAV path planning [9, 10], test-sheet composition [11], and parameter estimation [12]. These metaheuristic methods operate on a population of solutions and always find optimal or suboptimal solutions. During the 1960s and 1970s, computer scientists studied the possibility of formulating evolution as an optimization method, and this eventually generated a subset of gradient-free methods, namely, genetic algorithms (GAs) [13, 14]. In the last two decades, a large number of techniques for function optimization have been developed, such as the bat algorithm (BA) [15, 16], differential evolution (DE) [17, 18], genetic programming (GP) [19], harmony search (HS) [20, 21], particle swarm optimization (PSO) [22–24], cuckoo search (CS) [25, 26], and, more recently, the krill herd (KH) algorithm [27], which imitates the herding behavior of krill in nature.
First proposed by Gandomi and Alavi in 2012 and inspired by the herding behavior of krill individuals, the KH algorithm is a novel swarm intelligence method for optimizing possibly nondifferentiable and nonlinear complex functions in continuous space [27]. In KH, the time-dependent position of a krill individual is driven by three main components: (i) movement induced by other individuals, (ii) foraging motion, and (iii) random physical diffusion. One notable advantage of the KH algorithm is that derivative information is unnecessary, because it uses a random search instead of the gradient search used in classical methods. Moreover, compared with other population-based metaheuristic methods, this new method needs few control variables, in principle only a single parameter Δt (the time interval) to tune, which makes KH easy to implement, more robust, and well suited to parallel computation.
KH is an effective and powerful algorithm in exploration, but at times it may become trapped in local optima and fail to perform a global search well. Because the search in KH depends completely on random moves, there is no guarantee of fast convergence. To improve KH on optimization problems, a method has been proposed [28] that introduces a more focused mutation strategy into KH to increase the diversity of the population.
On the other hand, many researchers have concentrated on the theory and applications of statistical techniques, especially the Lévy distribution, and great advances have recently been made in many fields. One of these is the application of Lévy flights in optimization methods. Previously, Lévy flights have been combined with metaheuristic optimization methods such as the firefly algorithm [29], cuckoo search [30], the krill herd algorithm [31], and particle swarm optimization [32].
In this paper, an effective Lévy-flight KH (LKH) method is proposed in order to accelerate convergence, thus making the approach more feasible for a wider range of real-world engineering applications while keeping the desirable characteristics of the original KH. In LKH, first of all, the standard KH algorithm is run to shrink the search space and select a good candidate solution set. Then, for more precise modeling of the krill behavior, a local Lévy-flight (LLF) operator is added to the algorithm. This operator exploits the limited promising area intensively to obtain better solutions, so as to improve efficiency and reliability on global numerical optimization problems. The proposed method is evaluated on fourteen standard benchmark functions that have previously been used to verify optimization methods on continuous optimization problems. Experimental results show that LKH performs more efficiently and effectively than the basic KH, ABC, ACO, BA, CS, DE, ES, GA, HS, PBIL, and PSO.
The structure of this paper is organized as follows. Section 2 briefly describes the basic KH algorithm and Lévy flights. Our proposed LKH method is described in detail in Section 3. Subsequently, the method is evaluated on fourteen benchmark functions in Section 4, where LKH is also compared with ABC, ACO, BA, CS, DE, ES, GA, HS, KH, PBIL, and PSO. Finally, Section 5 presents the conclusion and proposals for future work.
2. Preliminary
In this section, a brief background on the krill herd algorithm and Lévy flights is provided.
2.1. Krill Herd Algorithm
Krill herd (KH) [27] is a new metaheuristic optimization method [4] for solving optimization tasks, based on simulating the herding of krill swarms in response to specific biological and environmental processes. The time-dependent position of an individual krill in 2D space is determined by three main actions: (i) movement induced by other krill individuals, (ii) foraging action, and (iii) random diffusion.
The KH algorithm adopts the following Lagrangian model in a d-dimensional decision space:

dX_i/dt = N_i + F_i + D_i, (1)

where N_i, F_i, and D_i are the motion induced by other krill individuals, the foraging motion, and the physical diffusion of the ith krill individual, respectively.
In the movement induced by other krill individuals, the direction of motion, α_i, is approximately computed from the target effect (target swarm density), the local effect (a local swarm density), and a repulsive effect (repulsive swarm density). For a krill individual, this movement can be defined as

N_i^new = N^max α_i + ω_n N_i^old, (2)

where N^max is the maximum induced speed, ω_n is the inertia weight of the induced motion in [0, 1], and N_i^old is the last induced motion.
The foraging motion is estimated from two main components: the current food location and prior knowledge about the food location. For the ith krill individual, this motion can be approximately formulated as

F_i = V_f β_i + ω_f F_i^old, (3)

where β_i = β_i^food + β_i^best, V_f is the foraging speed, ω_f is the inertia weight of the foraging motion in [0, 1], and F_i^old is the last foraging motion.
The random diffusion of the krill individuals is essentially a random process. This motion can be described in terms of a maximum diffusion speed and a random directional vector:

D_i = D^max δ, (4)

where D^max is the maximum diffusion speed and δ is a random directional vector whose entries are random values in [-1, 1].
Based on the three above-mentioned movements, using different parameters of the motion over time, the position vector of a krill individual during the interval from t to t + Δt is given by

X_i(t + Δt) = X_i(t) + Δt dX_i/dt. (5)
It should be noted that Δt is one of the most important parameters and should be fine-tuned for the specific real-world engineering optimization problem, because it acts as a scale factor on the speed vector. More details about the three main motions and the KH algorithm can be found in [27].
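As a minimal sketch, the three motions and the position update can be combined as follows. The default speed magnitudes and inertia weights are illustrative placeholders rather than the tuned settings of [27], and the computation of the direction terms α_i and β_i (from neighbour, target, and food effects) is omitted and treated as given input.

```python
import numpy as np

def kh_step(X, N_old, F_old, alpha, beta, dt,
            N_max=0.01, V_f=0.02, D_max=0.005,
            omega_n=0.5, omega_f=0.5):
    """One update of the Lagrangian KH model for a population X of
    shape (NP, d). alpha and beta are direction terms of shape (NP, d),
    assumed to be computed elsewhere as in the original KH paper."""
    NP, d = X.shape
    # motion induced by other krill individuals
    N_new = N_max * alpha + omega_n * N_old
    # foraging motion
    F_new = V_f * beta + omega_f * F_old
    # physical diffusion with a random directional vector in [-1, 1]
    D = D_max * np.random.uniform(-1.0, 1.0, (NP, d))
    # position update over the time interval dt
    X_new = X + dt * (N_new + F_new + D)
    return X_new, N_new, F_new
```

Keeping N_old and F_old as explicit state mirrors the inertia terms of the model: each motion is a weighted blend of the newly induced component and the motion from the previous iteration.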
2.2. Lévy Flights
Usually, animals hunt for food in a random or quasi-random fashion; that is, they forage along a path from one location to another chosen at random. However, the direction selected can be described by a mathematical model [33]. One remarkable such model is the Lévy flight.
Lévy flights are a class of random walk in which the step lengths are distributed according to a heavy-tailed Lévy distribution. More recently, Lévy flights have been applied to improve and optimize search. In the case of CS, the random-walk step of a cuckoo is determined by a Lévy flight [34]:

x_i^(t+1) = x_i^(t) + α ⊕ Lévy(λ), (6)

where ⊕ denotes entrywise multiplication.
Here, α > 0 is the step-size scaling factor, which should be related to the scale of the problem of interest. The random walk via Lévy flight is more efficient at exploring the search space because its step length is much longer in the long run. Some of the new solutions should be generated by a Lévy walk around the best solution obtained so far; this speeds up the local search.
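A common way to draw such heavy-tailed steps in practice is Mantegna's algorithm, sketched below; the coupling of the step with (x - best), and the default values of α and the Lévy exponent, follow widely used CS implementations rather than any specific formula in this paper.

```python
import math
import numpy as np

def levy_step(d, beta=1.5):
    """Draw a d-dimensional Lévy-flight step via Mantegna's algorithm,
    a standard way to simulate Lévy-stable step lengths with exponent beta."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, d)   # numerator: N(0, sigma^2)
    v = np.random.normal(0.0, 1.0, d)     # denominator: N(0, 1)
    return u / np.abs(v) ** (1 / beta)

def cs_walk(x, best, alpha0=0.01):
    """One CS-style random walk around the best solution found so far."""
    return x + alpha0 * levy_step(x.size) * (x - best)
```

Because the ratio u / |v|^(1/β) occasionally produces very large values, the walk mixes many small local moves with rare long jumps, which is exactly the exploration behavior the text describes.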
3. Our Approach: LKH
In general, the standard KH algorithm is adept at exploring the search space and locating the promising region of the global optimum, but it is relatively poor at exploiting solutions. To improve the exploitation of KH, a Lévy-flight-based local search, called the local Lévy-flight (LLF) operator, is introduced to form a novel Lévy-flight krill herd (LKH) algorithm. In LKH, the standard KH algorithm, with its high convergence speed, is first used to shrink the search region to a more promising area. Then, the LLF operator, with its good exploitation ability, is applied to exploit the limited area intensively and obtain better solutions. In this way, the strong exploration ability of the original KH and the exploitation ability of the LLF operator can both be fully utilized. The difference between LKH and KH is that the LLF operator performs the local search and fine-tunes the solution the original KH generates for each krill, instead of the random walks originally used in KH. In fact, under this configuration of LKH, the original KH focuses on exploration/diversification at the beginning of the search to avoid getting trapped in local optima in a multimodal landscape, while the LLF operator later encourages exploitation/intensification and makes the krill individuals search the space carefully at the end of the search. Therefore, our proposed LKH method can fully exploit the merits of different search techniques, overcome the lack of exploitation in KH, and resolve the conflict between exploration and exploitation effectively. The method is described in detail as follows.
To start with, the standard KH algorithm uses its three main actions to search the promising areas of the solution space and to guide the generation of candidate solutions for the next generation. It has been demonstrated [27] that KH performs well in both convergence speed and final accuracy on unimodal problems and many simple multimodal problems. Therefore, in LKH, we employ the fast convergence of KH to carry out the global search; KH is able to shrink the search region towards the promising area within a few generations. However, KH's performance on complex multimodal problems is sometimes unsatisfying; accordingly, another search technique with good exploitation ability is needed to exploit the limited area carefully and obtain optimal solutions.
To improve the exploitation ability of the KH algorithm, genetic reproduction mechanisms have previously been incorporated into the standard KH algorithm, and Gandomi and Alavi have shown that KH II (KH with the crossover operator only) performs the best among a series of KH variants [27]. In the present work, we use a more focused local search technique, the local Lévy-flight (LLF) operator, in the local search part of the LKH algorithm; it increases the diversity of the population to avoid premature convergence and exploits a small region in the later phase of the run to refine the final solutions. The main steps of the LLF operator used in the LKH algorithm are presented in Algorithm 1.

Here, t is the current generation and Maxgen is the maximum number of generations, d is the number of decision variables, NP is the size of the parent population, and A is the maximum Lévy-flight step size. x_ij is the jth variable of the solution x_i, and u_i is the offspring. r is a random integer between 1 and d drawn from an exponential distribution; exprnd(μ) returns an array of random numbers chosen from the exponential distribution with mean parameter μ. Similarly, j is a random integer between 1 and d drawn from a uniform distribution, and rand is a random real number in the interval (0, 1) drawn from a uniform distribution.
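Since Algorithm 1 is given only in outline here, the following is a hedged sketch of one plausible reading of the LLF operator: a single, uniformly chosen variable of a krill is perturbed by a Lévy-distributed step bounded by the maximum step size A. The linearly shrinking step-size schedule and the exact perturbation form are illustrative assumptions, not the authors' exact operator.

```python
import math
import numpy as np

def llf_operator(x, t, maxgen, A=1.0, beta=1.5):
    """Hypothetical LLF sketch: perturb the j-th variable of krill x
    with a Mantegna Lévy step clipped to the maximum step size A.
    The linearly shrinking scale is an assumed schedule."""
    d = x.size
    j = np.random.randint(d)              # uniform random dimension index
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma)
    v = np.random.normal(0.0, 1.0)
    step = u / abs(v) ** (1 / beta)       # one Lévy-distributed step length
    scale = A * (1.0 - t / maxgen)        # assumed decreasing step size
    offspring = x.copy()
    offspring[j] = x[j] + np.clip(scale * step, -A, A)
    return offspring
```

Perturbing only one coordinate per call keeps the operator strictly local, matching the text's intent of carefully exploiting a small region late in the run.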
In addition, another important improvement is the addition of an elitism strategy to LKH. KH has some inherent elitism, but it can be further improved. As with other population-based optimization algorithms, we incorporate a form of elitism to preserve the best solutions in the population. Here we use a more focused elitism on the best solutions, which prevents them from being ruined by the three motions and the LLF operator in LKH. In the main cycle of LKH, the KEEP best solutions are first stored in a variable KEEPKRILL; the KEEP worst solutions are then replaced by these KEEP best solutions at the end of every iteration. This elitism strategy guarantees that the population never deteriorates to one with worse fitness than before. Note that we use the elitism strategy to save the properties of the krill with the best fitness during the LKH process, so that even if the three motions and the LLF operator corrupt the corresponding krill, we have retained it and can restore it to its previous good status if needed.
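The elitism step just described can be sketched as follows for a minimisation problem; the array-based bookkeeping (separate fitness vector, in-place replacement) is an implementation choice of this sketch, not prescribed by the paper.

```python
import numpy as np

def apply_elitism(pop, fitness, elite_pop, elite_fit):
    """Replace the KEEP worst krill with the KEEP best ones saved at the
    start of the iteration (minimisation). pop: (NP, d), fitness: (NP,);
    elite_pop/elite_fit hold the KEEP elites recorded earlier."""
    keep = elite_fit.size
    worst = np.argsort(fitness)[-keep:]   # indices of the KEEP worst krill
    pop[worst] = elite_pop
    fitness[worst] = elite_fit
    return pop, fitness
```

Because the elites are copied back unconditionally, the best fitness in the population can never get worse from one iteration to the next, which is exactly the monotonicity guarantee claimed in the text.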
Based on the above analyses, the main steps of the Lévy-flight krill herd method are presented in Algorithm 2.

4. Simulation Experiments
In this section, the performance of our proposed LKH method on global numerical optimization is tested through a series of experiments on benchmark functions.
To allow an unprejudiced comparison of CPU time, all the experiments were carried out on a PC with a Pentium IV processor running at 2.0 GHz, 512 MB of RAM, and a 160 GB hard drive. Our implementation was compiled using MATLAB R2012b (8.0) running under Windows XP SP3. No commercial KH or other optimization tools were used in our simulation experiments.
Well-defined problem sets are beneficial for testing the performance of the optimization algorithms proposed in this paper. Benchmark functions based on numerical functions can serve as objective functions for such tests. In the present study, fourteen different benchmark functions are applied to test our proposed metaheuristic LKH method. The formulations of these benchmark functions are given in Table 1 and their properties are presented in Table 2. More details of all the benchmark functions can be found in [35, 36]. We must point out that, in [35], Yao et al. used 23 benchmarks to test optimization algorithms. However, on the remaining low-dimensional benchmark functions, all the methods perform almost identically [37], because these low-dimensional benchmarks are too simple to clarify the performance differences among the methods. Therefore, in the present work, only fourteen high-dimensional complex benchmarks are applied to verify our proposed LKH algorithm.
4.1. General Performance of LKH
In order to explore the merits of LKH, in this section we compare its performance on global numerical optimization problems with eleven population-based optimization methods: ABC, ACO, BA, CS, DE, ES, GA, HS, KH, PBIL, and PSO. ABC (artificial bee colony) [38] is an intelligent optimization algorithm based on the smart behavior of honey bee swarms. ACO (ant colony optimization) [39] is a swarm intelligence algorithm for solving optimization problems that is based on the pheromone deposition of ants. BA (bat algorithm) [16] is a new, powerful metaheuristic optimization method inspired by the echolocation behavior of bats with varying pulse rates of emission and loudness. CS (cuckoo search) [40] is a metaheuristic optimization algorithm inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of other host birds. DE (differential evolution) [17] is a simple but excellent optimization method that uses the difference between two solutions to probabilistically adapt a third solution. An ES (evolution strategy) [41] is an algorithm that generally assigns equal importance to mutation and recombination and allows two or more parents to reproduce an offspring. A GA (genetic algorithm) [13] is a search heuristic that mimics the process of natural evolution. HS (harmony search) [20] is a metaheuristic approach inspired by the improvisation process of musicians. PBIL (population-based incremental learning) [42] is a type of genetic algorithm in which the genotype of an entire population (a probability vector) is evolved rather than individual members. PSO (particle swarm optimization) [22] is also a swarm intelligence algorithm, based on the swarming behavior of fish and bird schools in nature.
In addition, it should be noted that, in [27], Gandomi and Alavi showed that, among all the compared algorithms, KH II (KH with the crossover operator) performed the best, which confirms the robustness of the KH algorithm. Therefore, in our work, we use KH II as the standard KH algorithm.
In our experiments, we use the same parameters for KH and LKH: the foraging speed V_f, the maximum diffusion speed D^max, the maximum induced speed N^max, and the maximum Lévy-flight step size A (for LKH only). For ACO, DE, ES, GA, PBIL, and PSO, we set the same parameters as in [36, 43]. For ABC, we set the colony size (the number of employed plus onlooker bees), the number of food sources, and the maximum search count "limit" (a food source that cannot be improved within "limit" trials is abandoned by its employed bee). For BA, we set the loudness A, the pulse rate r, and the scaling factor; for CS, the discovery rate p_a. For HS, we set the harmony memory accepting rate and the pitch adjusting rate.
We set the population size NP = 50 and the maximum number of generations Maxgen = 50 for each method, and ran 100 Monte Carlo simulations of each method on each benchmark function to obtain representative performance. Tables 3 and 4 illustrate the results of the simulations. Table 3 shows the minima found by each method, averaged over the 100 Monte Carlo runs, and Table 4 shows the absolute best minima found by each method over those runs; that is, Table 3 shows the average performance of each method, while Table 4 shows its best performance. The best value achieved for each test problem is marked in bold. Note that the normalizations in the two tables are based on different scales, so values cannot be compared between them. Each of the functions in this study has 20 independent variables (i.e., d = 20).
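This experimental protocol (NP = 50, Maxgen = 50, 100 Monte Carlo runs, reporting the average and the absolute best minimum) can be sketched as follows. Here `random_search` is a deliberately naive stand-in optimizer used only to make the harness runnable; it is not any of the algorithms compared in the paper, and the sphere objective stands in for the benchmark set.

```python
import numpy as np

def monte_carlo_eval(optimizer, objective, runs=100, seed=0):
    """Run `optimizer` repeatedly and report the average minimum
    (Table 3 style) and the absolute best minimum (Table 4 style)."""
    minima = np.asarray([optimizer(objective, np.random.default_rng(seed + r))
                         for r in range(runs)])
    return minima.mean(), minima.min()

def random_search(objective, rng, pop_size=50, maxgen=50, d=20):
    """Stand-in optimizer: pure random sampling within [-5.12, 5.12]^d."""
    best = np.inf
    for _ in range(maxgen):
        pop = rng.uniform(-5.12, 5.12, (pop_size, d))
        best = min(best, min(float(objective(x)) for x in pop))
    return best

sphere = lambda x: float(np.dot(x, x))   # a sphere-type objective
avg, best = monte_carlo_eval(random_search, sphere, runs=10)
```

Seeding each run independently keeps the Monte Carlo trials reproducible while still statistically independent, which is what makes the mean and best columns of the two tables meaningful.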
From Table 3, we see that, on average, LKH is the most effective at finding the objective function minima on twelve of the fourteen benchmarks (F01–F08, F10, and F12–F14). ABC and GA are the second most effective, performing the best on benchmarks F11 and F09, respectively, when multiple runs are made. Table 4 shows that LKH performs the best on twelve of the fourteen benchmarks, namely F01–F04, F06–F08, and F10–F14. ACO and GA are the second most effective, performing the best on benchmarks F05 and F09, respectively, when multiple runs are made.
Moreover, the computational times of the twelve optimization methods were similar. We collected the average computational time of each method as applied to the 14 benchmarks considered in this section; the results are given in Table 3. From Table 3, PBIL was the quickest optimization method, and LKH was the eleventh fastest of the twelve algorithms. This is because the evaluation of the step size by Lévy flight is time consuming. However, we must point out that in the vast majority of real-world engineering applications, the fitness function evaluation is by far the most expensive part of a population-based optimization algorithm.
In addition, to further demonstrate the superiority of the proposed LKH method, convergence plots of ABC, ACO, BA, CS, DE, ES, GA, HS, KH, LKH, PBIL, and PSO, which depict the optimization process, are shown in Figures 1–14. The values shown are the average objective function optima obtained from 100 Monte Carlo simulations; they are the true objective function values, not normalized ones. Note that the best global solutions of benchmarks F04, F05, F11, and F14 are illustrated in semilogarithmic convergence plots, and that KH is short for KH II in the figure legends.
Figure 1 shows the results obtained by the twelve methods on the F01 Ackley function. From Figure 1, we can clearly conclude that LKH is significantly superior to all the other algorithms during the optimization process. Among the other algorithms, although slower, KH II eventually finds a global minimum close to LKH's, while ABC, ACO, BA, CS, DE, ES, GA, HS, PBIL, and PSO fail to find the global minimum within the limited number of generations. Here, all the algorithms start from almost the same point; however, LKH outperforms them with a fast and stable convergence rate.
Figure 2 illustrates the optimization results for the F02 Fletcher-Powell function. On this multimodal benchmark problem, it is clear that LKH outperforms all other methods throughout the optimization. The other algorithms do not succeed on this benchmark function within the maximum number of generations; in the end, ABC and KH II converge to values significantly inferior to LKH's.
Figure 3 shows the optimization results for the F03 Griewank function. The figure shows little difference between the performance of LKH and KH II; however, from Table 3 and Figure 3, we can conclude that LKH performs better than KH II on this multimodal function. Looking carefully at Figure 3, ACO converges quickly towards the known minimum at first, but as the procedure proceeds LKH gets closer and closer to the minimum, while ACO converges prematurely and becomes trapped in a local minimum.
Figure 4 shows the results for the F04 Penalty #1 function. From Figure 4, LKH clearly outperforms all other methods throughout the optimization on this multimodal function. KH II eventually performs the second best at finding the global minimum, and DE, although slower in later generations, performs the third best.
Figure 5 shows the performance achieved on the F05 Penalty #2 function. For this multimodal function, similar to the F04 Penalty #1 function shown in Figure 4, LKH is significantly superior to all the other algorithms during the optimization process. Here, KH II shows a stable convergence rate throughout the optimization and eventually performs the second best at finding the global minimum, significantly superior to the other algorithms.
Figure 6 shows the results achieved by the twelve methods on the F06 Quartic (with noise) function. In this case, the figure shows little difference among the performance of DE, GA, KH II, and LKH. From Table 3 and Figure 6, we can conclude that LKH performs the best on this multimodal function; KH II, DE, and GA also perform well and rank 2, 3, and 4, respectively. Looking carefully at Figure 6, PSO converges quickly towards the known minimum at first, but as the procedure proceeds LKH gets closer and closer to the minimum, while PSO converges prematurely and becomes trapped in a local minimum.
Figure 7 shows the optimization results for the F07 Rastrigin function. On this multimodal benchmark problem, LKH obviously outperforms all other methods throughout the optimization. Among the other algorithms, the figure shows little difference between the performance of ABC and KH II; from Table 3 and Figure 7, we can conclude that KH II performs slightly better than ABC on this multimodal function. The remaining algorithms do not succeed on this benchmark function within the maximum number of generations.
Figure 8 shows the results for the F08 Rosenbrock function. From Figure 8, we can conclude that LKH performs the best on this unimodal function. In addition, KH II, DE, and ACO perform very well and rank 2, 3, and 4, respectively. Looking carefully at Figure 8, PSO converges quickly towards the known minimum at first, but is outperformed by LKH after 10 generations. The other algorithms do not succeed on this benchmark function within the maximum number of generations.
Figure 9 shows the corresponding results for the F09 Schwefel 2.26 function. From Figure 9, GA is clearly superior to the other algorithms, including LKH, during the optimization process, while ACO and ABC perform the second and third best on this multimodal benchmark function, respectively. Unfortunately, LKH only ranks fourth on this multimodal benchmark function.
Figure 10 shows the results for the F10 Schwefel 1.2 function. In this case, LKH, CS, KH II, and ACO perform the best and rank 1, 2, 3, and 4, respectively. Looking carefully at Figure 10, LKH has the fastest and most stable convergence rate towards the global minimum and significantly outperforms all other approaches.
Figure 11 shows the results for the F11 Schwefel 2.22 function. From Figure 11, similar to the F09 Schwefel 2.26 function shown in Figure 9, it is clear that ABC is significantly superior to the other algorithms, including LKH, during the optimization process. Among the other algorithms, DE and KH II perform very well and rank 2 and 3, respectively. Unfortunately, LKH only performs tenth best on this unimodal benchmark function among the twelve methods.
Figure 12 shows the results for the F12 Schwefel 2.21 function. Very clearly, LKH has the fastest convergence rate towards the global minimum and significantly outperforms all other methods. Among the other algorithms, KH II and ACO, inferior only to LKH, perform very well and rank 2 and 3, respectively.
Figure 13 shows the results for F13 Sphere function. From Figure 13, LKH shows the fastest convergence rate at finding the global minimum and significantly outperforms all other methods. In addition, KH II, DE, and ACO perform very well and have ranks of 2, 3, and 4, respectively.
Figure 14 shows the results for the F14 Step function. Clearly, LKH shows the fastest convergence rate towards the global minimum and significantly outperforms all other approaches. Though slower, KH II performs the second best at finding the global minimum, inferior only to LKH.
From the above analyses of Figures 1–14, we can conclude that our proposed hybrid metaheuristic LKH algorithm significantly outperforms the other eleven algorithms. In general, KH II is inferior only to LKH and performs the second best among the twelve methods. ABC, ACO, DE, and GA perform the third best, behind only LKH and KH II; in particular, ABC and GA perform better than LKH on benchmark functions F11 and F09, respectively. Furthermore, the plots for benchmarks F04, F05, F06, F08, and F10 show that PSO has a faster convergence rate initially but later converges more and more slowly towards the true objective function value.
4.2. Discussion
For all of the standard benchmark functions considered in this section, the LKH method has been demonstrated to perform better than, or at least be highly competitive with, the standard KH and eleven other acclaimed state-of-the-art population-based methods. The advantages of LKH include its simplicity, ease of implementation, and few parameters to tune. The work here shows LKH to be robust, powerful, and effective over all types of benchmark functions.
Benchmark evaluation is a good way to test the performance of metaheuristic methods, but it is not flawless and has some limitations. First, we did not painstakingly tune the optimization methods in this section; in general, different tuning parameter values might lead to significant differences in their performance. Second, real-world optimization problems may bear little relationship to benchmark functions. Third, benchmark tests may arrive at entirely different conclusions if the grading criteria or problem setup changes. In the present work, we examined the mean and best values obtained with a given population size after a given number of iterations; we might reach different conclusions if, for example, we changed the population size, examined how large a population each method needs to reach a certain function value, or changed the number of iterations. Despite these caveats, the benchmark results presented here are promising for LKH and show that this novel method might be capable of finding a niche among the plethora of population-based optimization methods.
Note that running time is a bottleneck in the implementation of many population-based optimization algorithms. If an algorithm converges too slowly, it will be impractical and infeasible, since it would take too long to find an optimal or suboptimal solution. LKH does not seem to require an unreasonable amount of computational time; of the twelve optimization methods compared in this paper, LKH was the eleventh fastest. How to speed up LKH's convergence is worthy of further study.
In our study, 14 benchmark functions were applied to evaluate the performance of the LKH method; we will test the proposed method on more optimization problems, such as the high-dimensional (d ≥ 20) CEC 2010 test suite [44] and real-world engineering problems, and compare LKH with other optimization algorithms. In addition, we considered only unconstrained function optimization in this study; our future work includes incorporating other techniques into LKH for constrained optimization problems, such as the CEC 2010 constrained real-parameter optimization test suite [45].
5. Conclusion and Future Work
Due to the limited performance of KH on complex problems, the LLF operator has been introduced into the standard KH to develop a novel Lévy-flight krill herd (LKH) algorithm for optimization problems. In LKH, the original KH algorithm is first applied to shrink the search region to a more promising area; thereafter, the LLF operator is implemented as a critical complement performing the local search, to exploit the limited area intensively and obtain better solutions. In principle, KH takes full advantage of the three motions within the population and has experimentally demonstrated very good performance on multimodal problems, but in a rugged region of the fitness landscape KH may fail to proceed to better solutions [27]; the LLF operator is then launched adaptively to re-boost the search. LKH attempts to combine the merits of KH and Lévy flights in order to prevent all krill from getting trapped in inferior local optimal regions. It gives the krill more diverse exemplars to learn from as they are updated each generation and also forms new krill to search a larger space. With both techniques combined, LKH can balance exploration and exploitation and effectively solve complex multimodal problems.
Furthermore, this new method can speed up the global convergence rate without losing the strong robustness of the basic KH. From the analysis of the experimental results, we can see that the Lévy-flight KH clearly improves the reliability of finding the global optimum and also enhances the quality of the solutions. Based on the results of the twelve methods on the test problems, we can conclude that LKH significantly improves the performance of KH on most multimodal and unimodal problems. In addition, LKH is simple and easy to implement.
In the field of numerical optimization, there are considerable issues that deserve further study, and more efficient optimization methods should be developed based on the analysis of specific engineering problems. Our future work will focus on two issues. On the one hand, we will apply the proposed LKH method to solve real-world civil engineering optimization problems [46], for which LKH can be a promising method. On the other hand, we will develop new metaheuristic methods that solve optimization problems more efficiently and effectively.
Acknowledgments
This work was supported by the State Key Laboratory of Laser Interaction with Material Research Fund under Grant no. SKLLIM090201 and by Key Research Technology of Electric-discharge Non-chain Pulsed DF Laser under Grant no. LXJJ11Q80.