Discrete Dynamics in Nature and Society
Volume 2017, Article ID 8342694, 23 pages
https://doi.org/10.1155/2017/8342694
Research Article

A Hybrid Lightning Search Algorithm-Simplex Method for Global Optimization

1School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
2College of Information Science and Engineering, Guangxi University for Nationalities, Nanning 530006, China
3Key Laboratory of Guangxi High Schools Complex System and Intelligent Computing, Nanning 530006, China

Correspondence should be addressed to Yongquan Zhou; yongquanzhou@126.com

Received 11 March 2017; Accepted 1 June 2017; Published 13 July 2017

Academic Editor: Pasquale Candito

Copyright © 2017 Yuting Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, a novel hybrid lightning search algorithm-simplex method (LSA-SM) is proposed to overcome the premature convergence and low computational accuracy of the lightning search algorithm (LSA), and it is applied to function optimization and constrained engineering design optimization problems. The improvement adds two major optimization strategies. The simplex method (SM) iteratively optimizes the current worst step leaders to keep the population from searching only at the edge of the space, thus improving the convergence accuracy and rate of the algorithm. Elite opposition-based learning (EOBL) increases the diversity of the population to keep the algorithm from falling into local optima. LSA-SM is tested on 18 benchmark functions and five constrained engineering design problems. The results show that LSA-SM has higher computational accuracy, a faster convergence rate, and stronger stability than the other algorithms compared and can effectively solve real-world constrained nonlinear optimization problems.

1. Introduction

Optimization is concerned with finding the best solution to a given problem. It is ubiquitous and important in many applications, such as production line scheduling, power system optimization, 3D path planning, and the traveling salesman problem (TSP). The goal of optimization is to minimize costs while maximizing profitability and efficiency. In the broadest sense, techniques for dealing with different types of optimization problems are classified into exact and stochastic algorithms. When the problems are large and complex, especially if they are either NP-complete or NP-hard, the use of stochastic algorithms becomes almost mandatory. These stochastic algorithms do not guarantee an optimal solution, but they are able to find quasi-optimal solutions within a reasonable amount of time [1].

Metaheuristic optimization algorithms are stochastic algorithms that have become the most popular approach to solving optimization problems in the last few decades; they are characterized by simplicity, flexibility, and strong robustness. Some of the most famous of these algorithms are the genetic algorithm (GA) [2], ant colony optimization (ACO) [3], particle swarm optimization (PSO) [4], harmony search (HS) [5], artificial bee colony (ABC) [6], cuckoo search (CS) [7], bat algorithm (BA) [8], bacterial foraging optimization (BFO) [9], black hole (BH) [10], and so forth. The GA simulates the Darwinian principle of biological evolution. In this algorithm, individuals are expressed as genotypes; each individual is selected, crossed, and mutated in the evolutionary process in accordance with the "survival of the fittest" principle, and finally the optimal solution to the problem is obtained. The ACO algorithm is based on the phenomenon that ants communicate with each other during foraging by releasing pheromones to find the best path between the nest and the food source. The PSO algorithm is inspired by bird predation. This algorithm regards birds as massless particles whose changes in position in the search space depend not only on their own experience but also on the best individuals of the society. The HS algorithm simulates the process by which musicians repeatedly adjust the tones of different instruments to achieve the most beautiful harmony. The ABC algorithm is a bionic intelligent method that simulates how bee populations find excellent nectar. In short, inspiration and wisdom from nature spawned the development of heuristic algorithms to solve complex and large-scale optimization problems. In this vein, inspired by the natural phenomenon of lightning, a new metaheuristic optimization algorithm called the lightning search algorithm (LSA) [11] was proposed.

The lightning search algorithm was first proposed by Shareef et al. in 2015; it is inspired by the phenomenon of lightning discharging to the ground during a thunderstorm and is used to solve constrained optimization problems. The algorithm utilizes the mechanism of step leader propagation and considers the involvement of fast particles, known as projectiles, in the formation of the binary tree structure of a step leader. It has been shown that LSA has a high convergence rate and has successfully solved small-scale TSP problems. Although LSA was introduced only recently, it has already been widely studied. In 2015, Shareef et al. used LSA to solve fuzzy logic PV inverter controller optimization problems [12]. In the same year, Ali et al. proposed a novel quantum-behaved lightning search algorithm (QLSA) to improve the fuzzy logic speed controller for an induction motor drive [13]. In 2016, Islam et al. proposed a variant of LSA for binary optimization problems called the binary lightning search algorithm (BLSA) [14]. Sirjani and Shareef applied LSA to parameter extraction of solar cell models under different weather conditions [15]. Mutlag et al. applied LSA to design an optimal fuzzy logic controller for a PV inverter [16]. LSA was used to improve an artificial neural network (LSA-ANN) for the home energy management scheduling controller in a residential demand response strategy in 2016 [17]. LSA was also used to optimize the learning process of feedforward neural networks [18]. Because the LSA algorithm simulates the fast propagation characteristics of lightning, it has a high convergence rate and strong robustness. However, LSA itself has some shortcomings, such as insufficient stability, a tendency to fall into local optima, and low convergence accuracy. Therefore, the goal of this paper is to improve LSA and expand its applications.

In this study, a novel optimization method called the hybrid lightning search algorithm-simplex method (LSA-SM) is applied to function optimization and constrained engineering design optimization. The simplex method (SM) is characterized by fast search speed, a small amount of computation, and strong local optimization ability. By introducing the reflection, expansion, and contraction operations of the simplex method into the LSA algorithm to improve the step leaders in the worst positions, the accuracy of the algorithm is improved. LSA-SM adds two strategies, SM and elite opposition-based learning (EOBL). SM iteratively optimizes the worst step leaders, moving the population faster and closer to the global optimal solution. EOBL increases the diversity of the population and expands the search space to keep the algorithm from falling into local optima. The performance of the proposed LSA-SM is tested on 18 well-known benchmark functions and five constrained engineering design problems. The test results show that LSA-SM has a faster convergence rate, higher optimization accuracy, and stronger stability. The remainder of the paper is organized as follows.

Section 2 briefly introduces the original lightning search algorithm. Section 3 presents the novel hybrid lightning search algorithm-simplex method. Simulation experiment setup, results, and analysis are provided in Section 4. The proposed LSA-SM is applied to solve five constrained engineering design optimization problems in Section 5. Finally, Section 6 gives conclusions and future studies.

2. Lightning Search Algorithm (LSA)

Lightning search algorithm (LSA) is based on the natural phenomenon of lightning, and it is inspired by the probabilistic nature and sinuous characteristics of lightning discharges during a thunderstorm (Figure 1). This optimization algorithm is generalized from the mechanism of step leader propagation using the theory of fast particles known as projectiles. The projectile represents the initial population size, similar to the term “particle” used in the PSO. In the LSA, the solution refers to the tip of the current step leader’s energy [11, 12].

Figure 1: Lightning discharge and stepped leader propagation.
2.1. Projectile Model

LSA consists of three types of projectiles: transition, space, and lead projectiles. The transition projectiles create the first step leader population for solutions, the space projectiles engage in exploration and attempt to become the leader, and the lead projectiles attempt to find and exploit the optimal solution [15].

2.1.1. Transition Projectile

In the early stage of step leader formation, the transition projectile is ejected from the thunder cell in a random direction. Therefore, it can be modeled as a random number drawn from the standard uniform probability distribution as follows:

$$f(x^T) = \begin{cases} \dfrac{1}{ub - lb}, & lb \le x^T \le ub, \\ 0, & \text{otherwise}, \end{cases}$$

where $x^T$ is a random number that may provide a solution and $lb$ and $ub$ are the lower and upper bounds, respectively, of the solution space. For a population of $N$ stepped leaders, $N$ random projectiles need to meet the solution dimension.
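As an illustration, this initialization can be sketched in a few lines (a sketch only; the function and variable names are illustrative, not from the original paper):

```python
import numpy as np

def init_step_leaders(pop_size, dim, lb, ub, seed=None):
    """Transition projectiles: eject each initial step leader in a random
    direction by sampling uniformly inside the solution bounds [lb, ub]."""
    rng = np.random.default_rng(seed)
    return lb + (ub - lb) * rng.random((pop_size, dim))
```

Each row of the returned array is one step leader; the algorithm then evaluates the objective function on every row to form the initial population.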

2.1.2. Space Projectile

The position of the space projectile at step $t+1$ can be modeled as a random number generated from the exponential distribution with shaping parameter $\mu$ as follows:

$$f(x^S) = \begin{cases} \dfrac{1}{\mu} e^{-x^S/\mu}, & x^S \ge 0, \\ 0, & x^S < 0. \end{cases}$$

Thus, the position and direction of the space projectile $p^S_i$ at step $t+1$ can be written as follows:

$$p^S_{i,\mathrm{new}} = p^S_i \pm \mathrm{exprand}(\mu_i),$$

where $\mathrm{exprand}(\mu_i)$ is an exponential random number and $\mu_i$ is taken as the distance between the lead projectile and the space projectile under consideration. If $p^S_{i,\mathrm{new}}$ provides a good solution at step $t+1$ and its projectile energy is greater than that of the step leader, then the leader is updated to the new position. Otherwise, they remain unchanged until the next step.
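A minimal sketch of this update, assuming the exponential step is drawn per dimension and applied with a random sign (the $\pm$ in the update rule); all names are illustrative:

```python
import numpy as np

def move_space_projectile(p_space, p_lead, seed=None):
    """Space projectile update: an exponential random step whose scale mu
    is the per-dimension distance to the lead projectile, applied toward
    or away from the lead at random."""
    rng = np.random.default_rng(seed)
    mu = np.abs(p_lead - p_space)                   # distance to the lead
    step = rng.exponential(np.maximum(mu, 1e-12))   # exprand(mu), guarded at 0
    sign = rng.choice([-1.0, 1.0], size=p_space.shape)
    return p_space + sign * step
```

In LSA proper, the new position is kept only if it yields a better solution (greedy acceptance), as the text above describes.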

2.1.3. Lead Projectile

The lead projectile moves closer to the ground, which can be modeled as a random number drawn from the standard normal distribution as follows:

$$f(x^L) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-(x^L - \mu)^2 / (2\sigma^2)}.$$

The randomly generated lead projectile can search in all directions from the current position defined by the shape parameter $\mu$. This projectile also has an exploitation ability defined by the scale parameter $\sigma$. The scale parameter exponentially decreases as the projectile progresses towards the Earth or as it finds the best solution. Thus, the position of the lead projectile $p^L$ at step $t+1$ can be written as follows:

$$p^L_{\mathrm{new}} = p^L + \mathrm{normrand}(\mu_L, \sigma_L),$$

where $\mathrm{normrand}(\mu_L, \sigma_L)$ is a random number generated by the normal distribution function. Similarly, if $p^L_{\mathrm{new}}$ provides a good solution at step $t+1$ and its projectile energy is greater than that of the step leader, then the leader is updated to the new position. Otherwise, they remain unchanged until the next step.
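A corresponding sketch of the lead-projectile move (again with illustrative names; the caller would decay sigma over the iterations):

```python
import numpy as np

def move_lead_projectile(p_lead, sigma, seed=None):
    """Lead projectile update: a normally distributed step around the
    current best position; the scale parameter sigma shrinks as the
    search progresses, turning exploration into exploitation."""
    rng = np.random.default_rng(seed)
    return p_lead + rng.normal(0.0, sigma, size=p_lead.shape)
```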

2.2. Forking Procedure

Forking is an important property of a stepped leader, in which two simultaneous and symmetrical branches emerge. In the proposed algorithm, forking is realized in two ways. First, symmetrical channels are created, because the nuclei collision of the projectile is realized by using the opposite number as follows:

$$\bar{p}_i = a + b - p_i,$$

where $\bar{p}_i$ and $p_i$ are the opposite and original projectiles, respectively, in a one-dimensional system and $a$ and $b$ are the boundary limits. In order to maintain the population size, the forking leader keeps whichever of $p_i$ and $\bar{p}_i$ has the better fitness value.
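The opposite-number fork is a one-liner; a sketch:

```python
def fork_opposite(p, a, b):
    """Opposite projectile of p within the boundary limits [a, b]:
    the symmetric channel created by the fork."""
    return a + b - p
```

Whichever of the original projectile and its opposite has the better fitness is kept, so the population size stays constant.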

In the second type of forking, a channel is assumed to appear at a successful step leader tip because of the energy redistribution of the most unsuccessful leader after several propagation trials. The unsuccessful leader can be redistributed by defining the maximum allowable number of trials as channel time.

The flowchart of the LSA procedure is shown in Figure 2.

Figure 2: Flowchart of the LSA.

3. Hybrid Lightning Search Algorithm-Simplex Method (LSA-SM)

Standard LSA has a fast convergence rate, but there are still some shortcomings, such as premature convergence, easy fall into local optimum, poor solution accuracy, and low ability to solve multimodal optimization problems. In order to improve the search performance of LSA, a hybrid lightning search algorithm-simplex method (LSA-SM) is adopted. In view of the shortcomings of LSA, two optimization strategies, namely, simplex method (SM) and elite opposition-based learning (EOBL), are added to the standard LSA.

3.1. Simplex Method (SM)

The simplex method has exceptional advantages in local search and often attains high precision. The method was proposed by Nelder and Mead as a simple, derivative-free search method for finding a local minimum of a function. It is based on the idea of comparing the values of the objective function at the vertices of a polytope (simplex) in $n$-dimensional space and moving the polytope towards the minimum point as the optimization progresses [44, 45]. In this paper, we choose the step leaders with the worst objective function values and use SM to optimize their positions, as shown in Figure 3. This allows the algorithm to move closer to the optimal solution, improves its exploitation ability, and speeds up finding the optimal solution. The procedures of the SM in this study are as follows.

Figure 3: Simplex method to search for different points.

Step 1. According to the objective function values of all searched step leaders, find the optimal point (with the minimum function value) $X_g$ and the suboptimal point $X_b$, and identify the worst points; take one of them as $X_s$.

Step 2. Calculate the center $X_c$ of the optimal point $X_g$ and the suboptimal point $X_b$: $X_c = (X_g + X_b)/2$.

Step 3. Reflect $X_s$ through the center $X_c$ to a new point $X_r$ and calculate the objective function value at this point: $X_r = X_c + \alpha (X_c - X_s)$, where $\alpha$ is the reflection coefficient.

Step 4. If $f(X_r) < f(X_g)$, then perform expansion to generate a new point $X_e = X_c + \gamma (X_r - X_c)$; otherwise go to Step 5. If $f(X_e) < f(X_g)$, then replace $X_s$ by $X_e$; otherwise replace $X_s$ by $X_r$. Here $\gamma$ is the expansion coefficient.

Step 5. If $f(X_r) > f(X_s)$, the reflection has failed, and contraction is performed on point $X_s$ to produce a new point $X_t = X_c + \beta (X_s - X_c)$; otherwise go to Step 6. If $f(X_t) < f(X_s)$, then replace $X_s$ by $X_t$. Here $\beta$ is the contraction coefficient.

Step 6. If $f(X_g) < f(X_r) < f(X_s)$, then shrink to generate a new point $X_w = X_c - \beta (X_s - X_c)$. The shrinkage coefficient is the same as the contraction coefficient $\beta$.
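The steps above can be sketched as a single update applied to each chosen worst step leader. This is a simplified sketch; the coefficient values alpha = 1, gamma = 2, beta = 0.5 are common Nelder-Mead defaults and are assumptions here, not values taken from the paper:

```python
import numpy as np

def simplex_update_worst(f, x_best, x_second, x_worst,
                         alpha=1.0, gamma=2.0, beta=0.5):
    """One reflection/expansion/contraction/shrink pass on a worst point."""
    x_c = (x_best + x_second) / 2.0              # Step 2: center of best two
    x_r = x_c + alpha * (x_c - x_worst)          # Step 3: reflection
    if f(x_r) < f(x_best):                       # Step 4: try expansion
        x_e = x_c + gamma * (x_r - x_c)
        return x_e if f(x_e) < f(x_best) else x_r
    if f(x_r) < f(x_worst):                      # reflection improved the point
        return x_r
    x_t = x_c + beta * (x_worst - x_c)           # Step 5: contraction
    if f(x_t) < f(x_worst):
        return x_t
    return x_c - beta * (x_worst - x_c)          # Step 6: shrink
```

In the hybrid algorithm, the returned point replaces the corresponding worst step leader before the next LSA iteration.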

The SM allows the current worst step leaders to find a better position at each iteration, which may even be better than the current optimal point. This keeps the population from searching only at the edge, guides it towards the global optimum, and improves the convergence accuracy and rate of the algorithm.

3.2. Elite Opposition-Based Learning (EOBL)

Although SM improves the convergence accuracy of the LSA, the algorithm can still fall into local optima, so the EOBL strategy is introduced to increase the diversity of the population and expand the search space.

Opposition-based learning (OBL), formally introduced by Tizhoosh [46], is a model of machine intelligence that considers a current estimate and its opposite estimate at the same time in order to reach a better solution. It has been proved that an opposite candidate solution has a higher chance of being closer to the global optimal solution than a random candidate solution [47]. Elite opposition-based learning (EOBL) [48] applies the OBL principle to the elite step leader to generate an elite opposition-based population that participates in competitive evolution, thereby improving the population diversity of LSA. The step leader with the best fitness value is defined as the elite step leader $X_e = (x_{e,1}, x_{e,2}, \ldots, x_{e,D})$; the elite opposition-based solution $\bar{X}_i = (\bar{x}_{i,1}, \bar{x}_{i,2}, \ldots, \bar{x}_{i,D})$ of step leader $X_i$ can be defined as

$$\bar{x}_{i,j} = \eta \, (da_j + db_j) - x_{e,j}, \quad i = 1, 2, \ldots, N; \ j = 1, 2, \ldots, D,$$

where $N$ is the population size, $D$ is the dimension, $\eta \in (0, 1)$ is a random number, and $[da_j, db_j]$ is the dynamic search bound of the $j$th dimension, with $da_j = \min_i (x_{i,j})$ and $db_j = \max_i (x_{i,j})$. If an elite opposition-based step leader exceeds the search boundary, it is reset with the following processing formula:

$$\bar{x}_{i,j} = \mathrm{rand}(da_j, db_j).$$

According to the above principles, the EOBL procedures are as follows.

Step 1. Define an elite step leader with the current best fitness value, and then use (12) to generate an elite opposition-based population with size .

Step 2. If the elite opposition-based step leader exceeds the search boundary, then (13) is used.

Step 3. All original and elite opposition-based step leaders are involved in evolutionary competition, and the step leaders with better fitness values are selected to enter the next generation.
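The steps above can be sketched as follows (a sketch under the usual EOBL conventions; `da` and `db` are the dynamic bounds of the current population, and all names are illustrative):

```python
import numpy as np

def elite_opposition(pop, elite, seed=None):
    """Generate an elite opposition-based population of the same size:
    reflect the elite step leader through the dynamic bounds, then redraw
    any component that leaves those bounds (Steps 1-2)."""
    rng = np.random.default_rng(seed)
    da, db = pop.min(axis=0), pop.max(axis=0)   # dynamic bounds per dimension
    eta = rng.random(pop.shape)                 # eta in (0, 1), one per entry
    opp = eta * (da + db) - elite               # elite opposition-based points
    out = (opp < da) | (opp > db)               # components outside the bounds
    redraw = da + rng.random(pop.shape) * (db - da)
    return np.where(out, redraw, opp)
```

Step 3 then pools the original population and the returned array and keeps the fitter half.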

The two strategies of SM and EOBL are added to the LSA to balance the exploitation and exploration of the algorithm; the flowchart of LSA-SM is shown in Figure 4. From the above analysis, it can be concluded that the time complexity of LSA-SM is $O(T \cdot N \cdot (D + f))$ and the space complexity is $O(N \cdot D)$, where $T$ is the maximum number of iterations, $N$ is the population size, $D$ is the dimension of the problem, and $f$ is the cost of evaluating the objective function.

Figure 4: Flowchart of the LSA-SM.

4. Simulation Experiment and Result Analysis

To verify the performance of LSA-SM, a variety of numerical comparison experiments follow, using benchmark functions to test LSA-SM and comparing the results with those of several well-known algorithms. In this section, 18 benchmark functions are applied to test LSA-SM; these are classic test functions used by many researchers [11, 28, 49-51]. The benchmark functions are listed in Tables 1-3, where Dim indicates the dimension of the function, Range is the boundary of the function search space, and the global optimum is also given. Generally speaking, the 18 benchmark functions used are minimization functions and can be divided into three groups: group 1 contains high-dimensional unimodal functions, group 2 contains high-dimensional multimodal functions, and group 3 contains fixed-dimension multimodal functions. Unimodal functions have only one optimum, so they are suitable for testing the exploitation of algorithms. In contrast, multimodal functions have many local optima, with the number of local optima increasing exponentially with dimension, which makes them suitable for testing exploration.

Table 1: Unimodal benchmark functions.
Table 2: Multimodal benchmark functions.
Table 3: Fixed-dimension multimodal benchmark functions.
4.1. Experimental Setup

All numerical comparison experiments were run under MATLAB R2012a using an Intel(R) Xeon(R) CPU E5-1620 v3 @ 3.50 GHz processor and 8.00 GB memory.

4.2. Performance Comparison of Algorithms

In this section, seven algorithms are chosen for comparison with LSA-SM: LSA [11], bat algorithm (BA) [8], flower pollination algorithm (FPA) [52], gravitational search algorithm (GSA) [51], grey wolf optimizer (GWO) [28], whale optimization algorithm (WOA) [53], and grasshopper optimization algorithm (GOA) [54]. The population size and maximum iteration count of the eight algorithms are set to 50 and 1000, respectively. The number of the current worst step leaders of LSA-SM is . The control parameters associated with the other algorithms are given below.

LSA parameter setting: channel time ; forking rate . The source code of LSA is publicly available at http://cn.mathworks.com/matlabcentral/fileexchange/54181-lightning-search-algorithm--lsa-.

BA parameter setting: loudness , pulse rate , frequency minimum , and frequency maximum . The source code of BA is publicly available at http://cn.mathworks.com/matlabcentral/fileexchange/37582-bat-algorithm--demo-.

FPA parameter setting: switch probability . The source code of FPA is publicly available at http://cn.mathworks.com/matlabcentral/fileexchange/45112-flower-pollination-algorithm.

GSA parameter setting: , , , and . The source code of GSA is publicly available at http://cn.mathworks.com/matlabcentral/fileexchange/27756-gravitational-search-algorithm--gsa-.

GWO parameter setting: linearly decreases from 2 to 0. The source code of GWO is publicly available at http://cn.mathworks.com/matlabcentral/fileexchange/44974-grey-wolf-optimizer--gwo-.

WOA parameter setting: linearly decreases from 2 to 0. The source code of WOA is publicly available at http://cn.mathworks.com/matlabcentral/fileexchange/55667-the-whale-optimization-algorithm.

GOA parameter setting: ; . The source code of GOA is publicly available at http://cn.mathworks.com/matlabcentral/fileexchange/61421-grasshopper-optimisation-algorithm--goa-.

The results of 20 independent runs of the algorithms are summarized in Tables 4-6 using four evaluation indices: Best, Worst, Mean, and Std., which represent the optimal fitness value, worst fitness value, mean fitness value, and standard deviation, respectively. For each benchmark function, the eight algorithms are ranked by standard deviation, with smaller values ranking higher. The last column in Tables 4-6 is the rank of LSA-SM. For each benchmark function, the minimum Best, Mean, and Std. values among the eight algorithms are shown in bold.

Table 4: Results of unimodal benchmark functions.
Table 5: Results of multimodal benchmark functions.
Table 6: Results of fixed-dimension multimodal benchmark functions.

In order to verify the validity of the above values, nonparametric Wilcoxon rank tests were performed on the results of the 20 runs of LSA-SM and each comparison algorithm over the 18 benchmark functions. The test results are shown in Table 7, where a value of 1 indicates that the two methods are statistically different with 95% confidence and 0 implies there is no statistical difference; that is, the value is 1 when $p < 0.05$. As shown in Table 7, "1" accounts for the vast majority of the results, which implies that the proposed LSA-SM is statistically significantly different from the other algorithms.

Table 7: Statistical comparison between LSA-SM and the other seven algorithms.

According to the results in Table 4, LSA-SM can find the exact optimal value for the unimodal benchmark functions, indicating that LSA-SM has strong robustness and high convergence accuracy. For functions , the optimal fitness value, mean fitness value, and standard deviation are better than those of the other algorithms. For function , the results of LSA-SM are the same as those of GSA, GWO, and WOA and better than those of LSA, BA, FPA, and GOA. This analysis shows that LSA-SM has good stability and strong exploitation ability when solving unimodal benchmark functions.

In the results for the multimodal functions in Tables 5 and 6, LSA-SM can find the global optimal solution or locate a near-global optimal value, indicating that LSA-SM has the ability to escape from poor local optima. Table 5 presents the experimental results for the high-dimensional multimodal functions. For functions , LSA-SM is ranked first among the eight algorithms, and its optimal fitness value, mean fitness value, and standard deviation have the highest accuracy. For function , LSA-SM and WOA reach the best global minimum and the standard deviation is zero; GWO also obtains the best global minimum, but its result is not stable. For function , LSA-SM and WOA obtain the same optimal fitness value, but the mean and standard deviation of LSA-SM are better than those of WOA. For function , LSA-SM, GWO, and WOA can find the exact optimal value, but LSA-SM has the strongest stability. For function , the LSA-SM results are much better than those of the other algorithms. For function , although the standard deviation of LSA-SM is worse than that of FPA and GSA, its optimal fitness value and mean fitness value are better than those of the seven comparison algorithms. Therefore, LSA-SM has strong global exploration ability and high computational accuracy for solving high-dimensional multimodal functions.

Table 6 shows the experimental results for the fixed-dimension multimodal functions (also called low-dimensional multimodal functions). It can be seen that LSA-SM and some of the other algorithms are able to find the exact optimal value for functions , indicating that LSA-SM has high convergence accuracy on the low-dimensional multimodal functions. For functions , , and , LSA-SM has the smallest standard deviation, which indicates that LSA-SM is more stable than the other algorithms. For function , the four evaluation indices of LSA-SM are the same as those of LSA and GSA, and the standard deviation of all three algorithms is zero. For function , the standard deviation of LSA-SM is worse only than that of GSA. For function , LSA-SM is ranked third, and its mean fitness value and standard deviation are worse than those of GSA and FPA. In short, LSA-SM has clear advantages in optimizing the low-dimensional multimodal functions.

In summary, on the unimodal benchmark functions LSA-SM converges to the exact solution, which shows that LSA-SM has high exploitation ability. On the multimodal benchmark functions, LSA-SM avoids local optimal solutions and shows high global exploration ability. Moreover, the "Rank" column in Tables 4-6 shows that LSA-SM has strong stability.

For the three groups of benchmark functions, Figures 5-22 show the convergence curves drawn from the logarithm of the mean minimum, which makes the performance comparison more convincing, and Figures 23-40 show the ANOVA tests of the global minimum over the 20 independent runs. As can be seen from Figures 5-22, the convergence accuracy of LSA-SM is closer to the global optimal solution than that of LSA and the other comparison algorithms on functions . For functions , the convergence curves of the eight algorithms finally reach the global minimum. For function , FPA and GSA have better convergence values than LSA-SM, but the values are not very different. For functions and , LSA-SM and FPA eventually converge to the global optimum. For function , LSA-SM and GSA converge to the global optimal value. In addition, LSA-SM converges faster than the other algorithms in all convergence graphs. As shown in Figures 23-40, the standard deviation of LSA-SM is much smaller except for function , and the number of outliers is less than that of the other algorithms. Moreover, the median of LSA-SM (indicated by a red line) is closer to the best solution. This proves that LSA-SM is highly stable. In conclusion, the proposed LSA-SM has the characteristics of a fast convergence rate, high convergence accuracy, and strong stability.

Figures 5-22: Convergence curves for the 18 benchmark functions.
Figures 23-40: ANOVA tests of the global minimum for the 18 benchmark functions.

In this section, in order to verify the optimization performance of the LSA-SM algorithm, 18 benchmark functions are used in simulation experiments, and the results obtained by LSA-SM are compared with those of LSA, BA, FPA, GSA, GWO, WOA, and GOA. These benchmark functions include high-dimensional unimodal, high-dimensional multimodal, and fixed-dimension multimodal functions, which together test the convergence, exploration, exploitation, and local-optimum avoidance of the algorithms. The nonparametric Wilcoxon rank test is performed on the results of LSA-SM and the comparison algorithms to determine whether the proposed LSA-SM is statistically significantly different from the other algorithms. The convergence curves and ANOVA test graphs of the algorithms are also given.

From the results of the unimodal test functions, it is known that LSA-SM obtains the exact solution with zero standard deviation, which indicates that LSA-SM has high exploitation ability and strong stability. LSA-SM achieves the global minimum or a near-global minimum on the multimodal test functions, which proves that it has high exploration ability and avoids local optima. From the convergence curves of the test functions, it is concluded that LSA-SM has a faster convergence rate than LSA and the other algorithms. Moreover, in the standard deviation ranking, LSA-SM ranks first in the majority of cases, indicating that the stability of LSA-SM is very strong. This result is also clearly observed in the ANOVA tests, where LSA-SM has a narrow interquartile range and fewer outliers than the other algorithms. In addition, the nonparametric Wilcoxon rank test results show that LSA-SM is statistically significantly different from the other algorithms, which confirms the validity of the LSA-SM results.

LSA-SM achieves better test results mainly because SM and EOBL improve the exploitation and exploration of the hybrid algorithm. SM iteratively optimizes the step leaders in the worst positions and promotes the local search of the algorithm, which moves the algorithm faster and closer to the global optimal solution. EOBL extends the search space to enhance the global search of the algorithm and keep it from falling into local optima. In addition, according to the results in Tables 4-6, LSA, BA, FPA, and GOA perform poorly on the high-dimensional unimodal and high-dimensional multimodal test functions but can obtain the best value or an approximately optimal value on the low-dimensional multimodal test functions, indicating that LSA, BA, FPA, and GOA have low exploitation ability and poor performance when optimizing high-dimensional functions. GSA, GWO, and WOA achieve good results on most test functions, but their convergence accuracy needs to be further improved by strengthening local search. Moreover, from the convergence curve graphs, the early convergence rate of these seven algorithms is not very fast, mainly because their early search range is not large. In the ANOVA test graphs, the seven algorithms have a wider interquartile range and more outliers on some functions, indicating that the stability of these algorithms is not strong, mainly because they do not maintain a good balance between exploration and exploitation.

In conclusion, the proposed LSA-SM in this paper has the characteristics of fast convergence rate, high convergence accuracy, and strong stability.

5. Constrained Engineering Design Optimization

Structural engineering optimization problems are complex; sometimes even the optimal solutions of interest do not exist [31]. These problems are usually nonlinear and have several equality and inequality constraints. In order to evaluate the performance of LSA-SM to solve practical application problems, we employed five classic constrained engineering design problems: tension/compression spring, cantilever beam, pressure vessel, welded beam, and three-bar truss.

5.1. Tension/Compression Spring Design

This problem is from Arora and Belegundu; the aim is to minimize the weight of a tension/compression spring, as shown in Figure 41. The minimum weight is subject to four constraints: minimum deflection ($g_1$), shear stress ($g_2$), surge frequency ($g_3$), and a limit on the outside diameter ($g_4$). There are three design variables: the wire diameter $x_1$ ($d$), the mean coil diameter $x_2$ ($D$), and the number of active coils $x_3$ ($N$) [23]. The mathematical formulation is as follows:

$$\min f(\mathbf{x}) = (x_3 + 2) \, x_2 x_1^2$$

subject to

$$g_1(\mathbf{x}) = 1 - \frac{x_2^3 x_3}{71785 \, x_1^4} \le 0,$$
$$g_2(\mathbf{x}) = \frac{4 x_2^2 - x_1 x_2}{12566 \, (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 \, x_1^2} - 1 \le 0,$$
$$g_3(\mathbf{x}) = 1 - \frac{140.45 \, x_1}{x_2^2 x_3} \le 0,$$
$$g_4(\mathbf{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0,$$

where the variable ranges are $0.05 \le x_1 \le 2$, $0.25 \le x_2 \le 1.3$, and $2 \le x_3 \le 15$. Table 8 compares the best optimization results obtained using LSA-SM with those reported in the literature using different methods. The minimum weight obtained by LSA-SM for the tension/compression spring design problem is better than that of every method except ABC: the minimum weight of LSA-SM is 0.01266524 and that of ABC is 0.012665.
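For concreteness, the spring objective and constraints can be written as plain functions that any penalty-based optimizer could call (a sketch; the constraint constants follow the standard formulation of this benchmark, and the function names are my own):

```python
def spring_cost(x):
    """Weight of the spring: x = (d, D, N) = (wire diameter,
    mean coil diameter, number of active coils)."""
    d, D, N = x
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    """Inequality constraints g1..g4; the design is feasible
    when every value is <= 0."""
    d, D, N = x
    return [
        1 - D ** 3 * N / (71785 * d ** 4),                 # g1: deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                         # g2: shear stress
        1 - 140.45 * d / (D ** 2 * N),                     # g3: surge frequency
        (d + D) / 1.5 - 1,                                 # g4: outside diameter
    ]
```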

Table 8: Best results of the spring design example by different methods.
Figure 41: The tension/compression spring design problem.
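For reference, the standard mathematical statement of this benchmark (as used in [23] and elsewhere) can be transcribed directly; the Python below is our sketch of that formulation, not the paper's code, with x = [d, D, N]:

```python
def spring_weight(x):
    """Objective: minimize (N + 2) * D * d^2, the spring weight."""
    d, D, N = x
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    """The four constraints g_i(x) <= 0: minimum deflection, shear
    stress, surge frequency, and outside diameter limit."""
    d, D, N = x
    return [
        1 - (D ** 3 * N) / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,
        1 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1,
    ]
```

At the best-known design (d, D, N) ≈ (0.051689, 0.356718, 11.288966), the weight evaluates to about 0.012665 and all four constraints are satisfied to within numerical tolerance.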
5.2. Cantilever Beam Design

The cantilever beam consists of five square hollow blocks, as shown in Figure 42. The beam is rigidly supported at node 1, and a given vertical force acts at node 6. The problem concerns weight minimization; the design variables are the heights (or widths) of the five beam elements, while the thickness is held constant [31]. In the standard formulation the variables are bounded by 0.01 ≤ x_i ≤ 100. Comparison results for the cantilever beam design problem are listed in Table 9. The best result of LSA-SM, 1.339958, outperforms the other methods, although the result of SOS, 1.33996, is very close to that of LSA-SM.

Table 9: Best results of the cantilever beam design by different methods.
Figure 42: The cantilever beam design problem.
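The standard statement of this benchmark is compact enough to transcribe (our sketch of the common formulation, with x_i being the five element heights):

```python
def cantilever_weight(x):
    """Objective: minimize 0.0624 * (x1 + x2 + x3 + x4 + x5)."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Single constraint g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3
    + 7/x4^3 + 1/x5^3 - 1 <= 0 (vertical displacement limit)."""
    c = [61.0, 37.0, 19.0, 7.0, 1.0]
    return sum(ci / xi ** 3 for ci, xi in zip(c, x)) - 1.0
```

At the best-known design x ≈ (6.0160, 5.3092, 4.4943, 3.5015, 2.1527) the weight is about 1.33996 and the displacement constraint is active (g ≈ 0), which is why the reported optima of the competing methods agree to four or five digits.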
5.3. Pressure Vessel Design

The pressure vessel is a cylindrical vessel whose ends are capped by hemispherical heads, as shown in Figure 43. The objective is to minimize the total cost. There are four design variables, the thickness of the shell, the thickness of the head, the inner radius, and the length of the cylindrical section of the vessel [25], and four constraints. The two thicknesses are discrete values that must be integer multiples of 0.0625 in, while the inner radius and length are continuous. A comparison of the results obtained by different methods for the pressure vessel design is shown in Table 10. The minimum total cost found by LSA-SM is 5942.6966, which among the listed methods is second only to IHS, indicating that LSA-SM obtains a highly competitive solution.

Table 10: Best results of the pressure vessel design by different methods.
Figure 43: The pressure vessel design problem.
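The cost function and constraints in the common statement of this benchmark (e.g., [25]) are, in our transcription, with x = [Ts, Th, R, L]:

```python
import math

def vessel_cost(x):
    """Total cost: material, forming, and welding terms."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def vessel_constraints(x):
    """Four constraints g_i(x) <= 0: shell thickness, head thickness,
    minimum volume (1,296,000 in^3), and maximum length (240 in)."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,
        -th + 0.00954 * r,
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000.0,
        l - 240.0,
    ]
```

At the widely cited design (0.8125, 0.4375, 42.0984, 176.6366) this cost function evaluates to about 6059.7; reported optima vary across the literature because several variants of the bounds and of the discreteness handling for the thicknesses are in use.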
5.4. Welded Beam Design

The welded beam design problem is illustrated in Figure 44. The objective is minimum fabrication cost subject to constraints on the shear stress, the bending stress in the beam, the buckling load on the bar, the end deflection of the beam, and side constraints. There are four design variables: the weld thickness h, the weld length l, the bar height t, and the bar thickness b [24]. In the standard formulation the constants are P = 6000 lb, L = 14 in, E = 30 × 10^6 psi, G = 12 × 10^6 psi, τ_max = 13,600 psi, σ_max = 30,000 psi, and δ_max = 0.25 in.

Figure 44: The welded beam design problem.

Table 11 compares the optimization results of LSA-SM with other optimization methods reported in the literature; LSA-SM achieves a solution superior to all the other methods, with a minimum cost of 1.695247 for the welded beam design problem.

Table 11: Best results of the welded beam design by different methods.
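The cost objective and the weld shear stress, the most involved of the constraint quantities, can be written out in the common statement of this problem (our transcription, with x = [h, l, t, b] and the standard constants):

```python
import math

P, L = 6000.0, 14.0   # load (lb) and overhang length (in)

def beam_cost(x):
    """Fabrication cost: weld material plus bar material."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def weld_shear_stress(x):
    """Resultant shear stress tau(x) in the weld, combining the
    primary stress tau1 and the torsional stress tau2."""
    h, l, t, b = x
    tau1 = P / (math.sqrt(2.0) * h * l)
    m = P * (L + l / 2.0)                       # bending moment at the weld
    r = math.sqrt(l ** 2 / 4.0 + ((h + t) / 2.0) ** 2)
    j = 2.0 * (math.sqrt(2.0) * h * l * (l ** 2 / 12.0 + ((h + t) / 2.0) ** 2))
    tau2 = m * r / j
    return math.sqrt(tau1 ** 2 + 2.0 * tau1 * tau2 * l / (2.0 * r) + tau2 ** 2)
```

The remaining constraints (bending stress, buckling load, deflection) follow the same pattern; at good designs the shear stress constraint tau(x) ≤ τ_max is typically active.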
5.5. Three-Bar Truss Design

Figure 45 shows a three-bar truss structure; the aim is to achieve the minimum weight subject to stress, deflection, and buckling constraints by finding the optimal cross-sectional areas [27, 31]. The problem has two variables and three constraints; in the standard formulation, 0 ≤ x1, x2 ≤ 1, l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm². Table 12 lists the results of solving the three-bar truss design problem with the proposed LSA-SM and eight other methods. The optimal objective value of LSA-SM is 263.8958, close to the results of Ray and Liew, DEDS, PSO-DE, and MBA (263.8958466, 263.8958434, 263.8958433, and 263.8958522, resp.), and LSA-SM performs best among the compared algorithms on this problem.

Table 12: Best results of the three-bar truss design by different methods.
Figure 45: The three-bar truss design problem.
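The standard statement with l = 100 cm and P = σ = 2 kN/cm² is short enough to transcribe (our sketch; x = [A1, A2] are the cross-sectional areas):

```python
import math

L_TRUSS, P, SIGMA = 100.0, 2.0, 2.0   # cm, kN/cm^2, kN/cm^2

def truss_weight(x):
    """Objective: minimize (2*sqrt(2)*A1 + A2) * l."""
    a1, a2 = x
    return (2.0 * math.sqrt(2.0) * a1 + a2) * L_TRUSS

def truss_constraints(x):
    """Three stress constraints g_i(x) <= 0."""
    a1, a2 = x
    d = math.sqrt(2.0) * a1 ** 2 + 2.0 * a1 * a2
    return [
        (math.sqrt(2.0) * a1 + a2) / d * P - SIGMA,
        a2 / d * P - SIGMA,
        1.0 / (math.sqrt(2.0) * a2 + a1) * P - SIGMA,
    ]
```

At the best-known design (A1, A2) ≈ (0.788675, 0.408248) the weight is about 263.8958 with the first stress constraint active, which explains why the methods in Table 12 agree to seven digits.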

In summary, LSA-SM effectively solves five classical constrained engineering design problems, and its optimization results are superior to most of the methods reported in the literature. These results show that LSA-SM has a strong ability to solve real-world constrained nonlinear optimization problems.

6. Conclusions

In the current study, a hybrid LSA-SM is proposed to solve global optimization problems. The standard LSA is based on the natural phenomenon of lightning discharge to the ground; it converges quickly and simulates the bifurcation of lightning to explore better solutions. In view of the poor convergence accuracy of the original LSA, the simplex method, a prominent local search method, is added: the reflection, expansion, and compression operations of SM improve the step leaders with the worst fitness values, thereby improving the accuracy of the algorithm. In addition, the elite opposition-based learning strategy is introduced to increase the diversity of the population and keep LSA-SM from falling into local optima. Together, SM and EOBL balance the exploitation and exploration of the algorithm. The performance of LSA-SM is tested on 18 benchmark functions and compared with LSA, BA, FPA, GSA, GWO, WOA, and GOA. The experimental results show that LSA-SM finds the exact solution of most tested functions and offers higher computational accuracy, faster convergence, and stronger stability. Finally, the hybrid LSA-SM algorithm is used to solve five real engineering problems: the tension/compression spring, cantilever beam, pressure vessel, welded beam, and three-bar truss design problems. The results obtained by LSA-SM on these engineering design problems are better than those of most algorithms in the literature, which shows that LSA-SM is suitable for solving constrained nonlinear optimization problems.

For future work, LSA-SM needs further tuning of its parameters or additional improvement strategies, because it does not find an exact solution for some test functions. The proposed LSA-SM will also be applied to multiobjective optimization problems. In addition, Shareef et al. have used LSA to solve a TSP instance with 20 cities [11]; to verify the performance of LSA-SM on higher-dimensional problems, large-scale TSP instances will be studied next. Finally, LSA-SM will be applied to the optimal scheduling of tasks on cloud resources, and its results will be compared with those of existing algorithms for that problem. Optimal task scheduling in the cloud computing environment is a hot issue: a number of tasks may need to be scheduled on different virtual machines in order to minimize makespan and increase system utilization. The task scheduling problem is NP-complete, so finding an exact solution is intractable, especially for large task sizes [55].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grants nos. 61563008 and 61463007 and the Project of Guangxi University for Nationalities Science Foundation under Grant no. 2016GXNSFAA380264.

References

  1. T. Weise, M. Zapf, R. Chiong, and A. J. Nebro, “Why is optimization difficult,” in Nature-Inspired Algorithms for Optimisation, Studies in Computational Intelligence, vol. 193, pp. 1–50, Springer, Berlin, Germany, 2009.
  2. J. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, USA, 1975.
  3. M. Dorigo, Optimization, Learning and Natural Algorithms [Ph.D. thesis], Politecnico di Milano, Milan, Italy, 1992.
  4. J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Piscataway, NJ, USA, 1995.
  5. Z. W. Geem, J. H. Kim, and G. V. Loganathan, “A new heuristic optimization algorithm: harmony search,” Simulation, vol. 76, no. 2, pp. 60–68, 2001.
  6. D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
  7. X. S. Yang and S. Deb, “Cuckoo search via Lévy flights,” in Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), pp. 210–214, IEEE Publications, 2009.
  8. X.-S. Yang, “A new metaheuristic bat-inspired algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), J. R. Gonzalez, D. A. Pelta, C. Cruz, G. Terrazas, and N. Krasnogor, Eds., vol. 284 of Studies in Computational Intelligence, pp. 65–74, Springer, Berlin, Germany, 2010.
  9. K. M. Passino, “Biomimicry of bacterial foraging for distributed optimization and control,” IEEE Control Systems Magazine, vol. 22, no. 3, pp. 52–67, 2002.
  10. A. Hatamlou, “Black hole: a new heuristic optimization approach for data clustering,” Information Sciences, vol. 222, pp. 175–184, 2013.
  11. H. Shareef, A. A. Ibrahim, and A. H. Mutlag, “Lightning search algorithm,” Applied Soft Computing, vol. 36, pp. 315–333, 2015.
  12. H. Shareef, A. H. Mutlag, and A. Mohamed, “A novel approach for fuzzy logic PV inverter controller optimization using lightning search algorithm,” Neurocomputing, vol. 168, pp. 435–453, 2015.
  13. J. A. Ali, M. A. Hannan, and A. Mohamed, “A novel quantum-behaved lightning search algorithm approach to improve the fuzzy logic speed controller for an induction motor drive,” Energies, vol. 8, no. 11, pp. 13112–13136, 2015.
  14. M. M. Islam, H. Shareef, A. Mohamed, and A. Wahyudie, “A binary variant of lightning search algorithm: BLSA,” Soft Computing, pp. 1–20, 2016.
  15. R. Sirjani and H. Shareef, “Parameter extraction of solar cell models using the lightning search algorithm in different weather conditions,” Journal of Solar Energy Engineering, vol. 138, no. 4, 2016.
  16. A. H. Mutlag, A. Mohamed, and H. Shareef, “A nature-inspired optimization-based optimum fuzzy logic photovoltaic inverter controller utilizing an eZdsp F28335 board,” Energies, vol. 9, no. 3, article 120, 2016.
  17. M. S. Ahmed, A. Mohamed, R. Z. Homod, and H. Shareef, “Hybrid LSA-ANN based home energy management scheduling controller for residential demand response strategy,” Energies, vol. 9, no. 9, article 716, 2016.
  18. H. Faris, I. Aljarah, N. Al-Madi, and S. Mirjalili, “Optimizing the learning process of feedforward neural networks using lightning search algorithm,” International Journal on Artificial Intelligence Tools, vol. 25, no. 6, article 1650033, 2016.
  19. C. A. C. Coello, “Use of a self-adaptive penalty approach for engineering optimization problems,” Computers in Industry, vol. 41, no. 2, pp. 113–127, 2000.
  20. C. A. C. Coello and E. M. Montes, “Constraint-handling in genetic algorithms through the use of dominance-based tournament selection,” Advanced Engineering Informatics, vol. 16, no. 3, pp. 193–203, 2002.
  21. E. Mezura-Montes and C. A. Coello Coello, “An empirical study about the usefulness of evolution strategies to solve constrained optimization problems,” International Journal of General Systems, vol. 37, no. 4, pp. 443–473, 2008.
  22. K. E. Parsopoulos and M. N. Vrahatis, “Unified particle swarm optimization for solving constrained engineering optimization problems,” in Advances in Natural Computation, vol. 3612 of Lecture Notes in Computer Science, pp. 582–591, Springer, Berlin, Germany, 2005.
  23. Q. He and L. Wang, “An effective co-evolutionary particle swarm optimization for constrained engineering design problems,” Engineering Applications of Artificial Intelligence, vol. 20, no. 1, pp. 89–99, 2007.
  24. F. Z. Huang, L. Wang, and Q. He, “An effective co-evolutionary differential evolution for constrained optimization,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 340–356, 2007.
  25. M. Mahdavi, M. Fesanghary, and E. Damangir, “An improved harmony search algorithm for solving optimization problems,” Applied Mathematics and Computation, vol. 188, no. 2, pp. 1567–1579, 2007.
  26. B. Akay and D. Karaboga, “Artificial bee colony algorithm for large-scale problems and engineering design optimization,” Journal of Intelligent Manufacturing, vol. 23, no. 4, pp. 1001–1014, 2012.
  27. S. Mirjalili, “Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm,” Knowledge-Based Systems, vol. 89, pp. 228–249, 2015.
  28. S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46–61, 2014.
  29. A. Baykasoğlu and F. B. Ozsoydan, “Adaptive firefly algorithm with chaos for mechanical design optimization problems,” Applied Soft Computing, vol. 36, pp. 152–164, 2015.
  30. A. H. Gandomi, X. S. Yang, A. H. Alavi, and S. Talatahari, “Bat algorithm for constrained optimization tasks,” Neural Computing and Applications, vol. 22, no. 6, pp. 1239–1255, 2013.
  31. A. H. Gandomi, X. S. Yang, and A. H. Alavi, “Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems,” Engineering with Computers, vol. 29, no. 1, pp. 17–35, 2013.
  32. H. Chickermane and H. C. Gea, “Structural optimization using a new local approximation method,” International Journal for Numerical Methods in Engineering, vol. 39, no. 5, pp. 829–846, 1996.
  33. M.-Y. Cheng and D. Prayogo, “Symbiotic organisms search: a new metaheuristic optimization algorithm,” Computers & Structures, vol. 139, pp. 98–112, 2014.
  34. A. Kaveh and S. Talatahari, “An improved ant colony optimization for constrained engineering design problems,” Engineering Computations, vol. 27, no. 1, pp. 155–182, 2010.
  35. R. V. Rao, V. J. Savsani, and D. P. Vakharia, “Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems,” Computer-Aided Design, vol. 43, no. 3, pp. 303–315, 2011.
  36. K. Deb, “Optimal design of a welded beam via genetic algorithms,” AIAA Journal, vol. 29, no. 11, pp. 2013–2015, 1991.
  37. A. Kaveh and S. Talatahari, “A novel heuristic optimization method: charged system search,” Acta Mechanica, vol. 213, no. 3-4, pp. 267–289, 2010.
  38. J.-F. Tsai, “Global optimization of nonlinear fractional programming problems in engineering design,” Engineering Optimization, vol. 37, no. 4, pp. 399–409, 2005.
  39. T. Ray and P. Saini, “Engineering design optimization using a swarm with an intelligent information sharing among individuals,” Engineering Optimization, vol. 33, no. 6, pp. 735–748, 2001.
  40. T. Ray and K. M. Liew, “Society and civilization: an optimization algorithm based on the simulation of social behavior,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 4, pp. 386–396, 2003.
  41. M. Zhang, W. Luo, and X. Wang, “Differential evolution with dynamic stochastic selection for constrained optimization,” Information Sciences, vol. 178, no. 15, pp. 3043–3074, 2008.
  42. H. Liu, Z. Cai, and Y. Wang, “Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization,” Applied Soft Computing, vol. 10, no. 2, pp. 629–640, 2010.
  43. A. Sadollah, A. Bahreininejad, H. Eskandar, and M. Hamdi, “Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems,” Applied Soft Computing, vol. 13, no. 5, pp. 2592–2612, 2013.
  44. E. Davoodi, M. T. Hagh, and S. G. Zadeh, “A hybrid improved quantum-behaved particle swarm optimization-simplex method (IQPSOS) to solve power system load flow problems,” Applied Soft Computing, vol. 21, pp. 171–179, 2014.
  45. A. S. El-Wakeel, “Design optimization of PM couplings using hybrid particle swarm optimization-simplex method (PSO-SM) algorithm,” Electric Power Systems Research, vol. 116, pp. 29–35, 2014.
  46. H. R. Tizhoosh, “Opposition-based learning: a new scheme for machine intelligence,” in Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation, International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA/IAWTIC '05), vol. 1, pp. 695–701, Vienna, Austria, 2005.
  47. H. Wang, Z. Wu, and S. Rahnamayan, “Enhanced opposition-based differential evolution for solving high-dimensional continuous optimization problems,” Soft Computing, vol. 15, no. 11, pp. 2127–2140, 2011.
  48. Y. Zhou, R. Wang, and Q. Luo, “Elite opposition-based flower pollination algorithm,” Neurocomputing, vol. 188, pp. 294–310, 2016.
  49. X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
  50. X.-S. Yang, “Test problems in optimization,” in Engineering Optimization: An Introduction with Metaheuristic Applications, X.-S. Yang, Ed., John Wiley & Sons, 2010.
  51. E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, “GSA: a gravitational search algorithm,” Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009.
  52. X. S. Yang, “Flower pollination algorithm for global optimization,” in Unconventional Computation and Natural Computation, vol. 7445 of Lecture Notes in Computer Science, pp. 240–249, Springer, Berlin, Germany, 2012.
  53. S. Mirjalili and A. Lewis, “The whale optimization algorithm,” Advances in Engineering Software, vol. 95, pp. 51–67, 2016.
  54. S. Saremi, S. Mirjalili, and A. Lewis, “Grasshopper optimisation algorithm: theory and application,” Advances in Engineering Software, vol. 105, pp. 30–47, 2017.
  55. M. Abdullahi, M. A. Ngadi, and S. M. Abdulhamid, “Symbiotic Organism Search optimization based task scheduling in cloud computing environment,” Future Generation Computer Systems, vol. 56, pp. 640–650, 2016.