Research Article  Open Access
Tongyi Zheng, Weili Luo, "An Enhanced Lightning Attachment Procedure Optimization with Quasi-Opposition-Based Learning and Dimensional Search Strategies", Computational Intelligence and Neuroscience, vol. 2019, Article ID 1589303, 24 pages, 2019. https://doi.org/10.1155/2019/1589303
An Enhanced Lightning Attachment Procedure Optimization with Quasi-Opposition-Based Learning and Dimensional Search Strategies
Abstract
Lightning attachment procedure optimization (LAPO) is a new global optimization algorithm inspired by the attachment procedure of lightning in nature. However, like other metaheuristic algorithms, LAPO has its own disadvantages. To obtain better global searching ability, an enhanced version of LAPO, called ELAPO, is proposed in this paper. A quasi-opposition-based learning strategy is incorporated to improve both exploration and exploitation abilities by considering an estimate and its opposite simultaneously. Moreover, a dimensional search enhancement strategy is proposed to intensify the exploitation ability of the algorithm. 32 benchmark functions, including unimodal, multimodal, and CEC 2014 functions, are used to test the effectiveness of the proposed algorithm. Numerical results indicate that ELAPO provides better or competitive performance compared with the basic LAPO and five other state-of-the-art optimization algorithms.
1. Introduction
Optimization problems with a complex and nonlinear nature arise in many engineering application domains and scientific fields. These problems are usually difficult to solve with classical mathematical methods, since such methods are often inefficient and rest on strong mathematical assumptions. Owing to these limitations, many nature-inspired stochastic optimization algorithms have been proposed for global optimization over the last two decades. Such algorithms are typically simple and easy to implement, which makes it possible to tackle highly complex optimization problems. These metaheuristics can be roughly classified into three categories: evolutionary algorithms, swarm intelligence, and physics-based algorithms.
Evolutionary algorithms are generic population-based metaheuristics, which imitate the evolutionary behavior of biological populations in nature, such as reproduction, mutation, recombination, and selection. The first generation starts with randomly initialized solutions, which further evolve over successive generations. The best individual of the whole population in the final generation is taken as the optimal solution. Some of the popular evolutionary algorithms are the genetic algorithm (GA) [1], genetic programming (GP) [2], evolution strategy (ES) [3], differential evolution (DE) algorithm [4], and biogeography-based optimizer (BBO) [5].
Swarm intelligence algorithms mimic the collective behavior of swarms, herds, schools, or flocks of creatures in nature, whose members interact with each other and exploit information about their environment as the algorithm progresses. For example, honey bees are capable of guaranteeing the survival of a colony without any external guidance. In other words, no one tells honey bees how and where to find food sources; instead, they cooperatively seek food sources even when these are located far away from their nests. In this category, particle swarm optimization (PSO) [6], ant colony optimization (ACO) [7], and the artificial bee colony algorithm (ABC) [8] can be regarded as representative algorithms. Other popular swarm intelligence algorithms are the firefly mating algorithm (FMA) [9], shuffled frog leaping algorithm (SFLA) [10], bee collecting pollen algorithm (BCPA) [11], cuckoo search (CS) algorithm [12], dolphin partner optimization (DPO) [13], bat-inspired algorithm (BA) [14], firefly algorithm (FA) [15], and hunting search (HUS) algorithm [16]. Some of the more recent swarm intelligence algorithms are the fruit fly optimization algorithm (FOA) [17], dragonfly algorithm (DA) [18], artificial algae algorithm (AAA) [19], ant lion optimizer (ALO) [20], shark smell optimization algorithm (DSOA) [21], whale optimization algorithm (WOA) [22], crow search algorithm (CSA) [23], grasshopper optimization algorithm (GOA) [24], mouth brooding fish algorithm (MBFA) [25], spotted hyena optimizer (SHO) [26], butterfly-inspired algorithm (BFA) [27], squirrel search algorithm (SSA) [28], Andean condor algorithm (ACA) [29], and pity beetle algorithm (PBA) [30].
The third category comprises physics-based algorithms, which build on basic physical laws such as gravitational, electromagnetic, and inertia forces. Some of the prevailing algorithms of this category are simulated annealing (SA) [31], the gravitational search algorithm (GSA) [32], big bang-big crunch (BBBC) algorithm [33], charged system search (CSS) [34], black hole (BH) algorithm [35], central force optimization (CFO) [36], small-world optimization algorithm (SWOA) [37], artificial chemical reaction optimization algorithm (ACROA) [38], ray optimization (RO) algorithm [39], galaxy-based search algorithm (GbSA) [40], curved space optimization (CSO) [41], and multiverse optimizer (MVO) [42].
Regardless of the differences among the three categories, a common point is that, besides the tuning of common control parameters such as population size and number of generations, metaheuristic algorithms require tuning of algorithm-specific parameters during the course of optimization. For instance, GA requires tuning of the crossover probability, mutation probability, and selection operator [43]; SA requires tuning of the initial temperature and cooling rate [31]; PSO requires tuning of the inertia weight and learning factors [6]. Improper tuning of these parameters either increases the computational cost or leads to a local optimal solution.
Recently, a new physics-based metaheuristic algorithm named lightning attachment procedure optimization (LAPO) [44] was proposed, which does not require tuning of any algorithm-specific parameters. Instead, the average value of all solutions is employed to adjust, in a self-adaptive manner, the lightning jump behavior of moving towards or away from a jumping point (or position). This is an important reason why LAPO does not easily get stuck in local optimal solutions and has good exploration and exploitation abilities. LAPO has already proved its superiority in solving a number of constrained numerical optimization problems [44].
In this paper, an enhanced lightning attachment procedure optimization, namely ELAPO, is developed to increase the convergence speed of LAPO during the search process while retaining its key feature of being free from algorithm-specific parameter tuning. In ELAPO, the concept of opposition-based learning (OBL) is incorporated to enhance the searching ability of the algorithm. The motivation is that the current estimates and their corresponding opposites are considered simultaneously to find better solutions, thereby enabling the algorithm to explore a large region of the search space in every generation. This concept has been found effective in improving the performance of well-known optimization algorithms such as genetic algorithms (GA) [45], differential evolution (DE) [46, 47], particle swarm optimization (PSO) [48, 49], biogeography-based optimization (BBO) [50, 51], the harmony search (HS) algorithm [52, 53], gravitational search optimization (GSO) [54, 55], the group search algorithm (GSA) [56, 57], and artificial bee colony (ABC) [58]. Meanwhile, a dimensional search strategy is proposed that intensively exploits a local search for each variable of the best solution in each iteration, thus yielding a higher-quality solution at the end of each iteration and strengthening the exploitation ability of the algorithm. To evaluate the effectiveness of the proposed algorithm, ELAPO is applied to 32 benchmark functions and compared with the basic LAPO and five representative metaheuristic algorithms (SSA [28], Jaya [59], IBBBC [60], ODE1 [61], and ALO [20]). The effectiveness of the two strategies is also discussed.
The rest of this paper is organized as follows: Section 2 briefly recapitulates the basic LAPO. Next, the proposed ELAPO is presented in a detailed way in Section 3. Numerical comparisons are illustrated in Section 4. Finally, Section 5 gives the concluding remarks.
2. Basic Algorithm
LAPO is a new nature-inspired global optimization algorithm that mimics the lightning attachment procedure, including the downward leader movement and the upward leader propagation. Lightning is a sudden electrostatic discharge occurring between electrically charged regions of a cloud, which moves toward or away from the ground in a stepwise manner. After each step, the downward leader stops and then moves to a randomly selected potential point that may have a higher value of the electric field. The upward leader starts from sharp points and moves towards the downward leader. The branch fading feature of lightning takes effect when the charge of a branch is lower than a critical value. When the two leaders join together, a final strike occurs and the charge of the cloud is neutralized.
2.1. Parameters and Initialization of Test Points
Main parameters of the LAPO consist of the maximum number of iterations T_max, the number of test points N, the number of decision variables n, and the upper and lower bounds X_max and X_min of the decision variables. These parameters are given at the beginning of the algorithm. Similar to other nature-inspired optimization algorithms, an initial population is required. Each individual of the population is regarded as a test point in the feasible search space, which could be an emitting point of the downward or upward leader. The test points are randomly initialized as follows:

X_i = X_min + rand × (X_max − X_min), i = 1, 2, …, N,

where rand is a uniformly distributed random number in the range [0, 1]. The electric field (i.e., fitness value) E_i of each test point is calculated based on the objective function f:

E_i = f(X_i).
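The initialization step can be sketched as follows. This is a minimal Python/NumPy illustration, not the authors' code: the function name, the box bounds, and the sphere objective standing in for the electric field are all our own choices.

```python
import numpy as np

def initialize_population(n_points, n_vars, x_min, x_max, seed=None):
    """Scatter test points uniformly at random in the box [x_min, x_max]^n_vars:
    X_ij = x_min_j + rand * (x_max_j - x_min_j), rand ~ U[0, 1)."""
    rng = np.random.default_rng(seed)
    return x_min + rng.random((n_points, n_vars)) * (x_max - x_min)

# Example: 50 test points for a 30-dimensional sphere function
# (the sphere objective is a stand-in for an arbitrary fitness function).
sphere = lambda x: np.sum(x**2, axis=-1)   # "electric field" of each point
X = initialize_population(50, 30, np.full(30, -100.0), np.full(30, 100.0), seed=0)
E = sphere(X)                              # fitness value of each test point
```

Each row of `X` is one test point; `E` holds the corresponding fitness values used throughout the leader-movement phases.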
2.2. Downward Leader Movement toward the Ground
In this phase, all the test points are considered as the downward leader and move down towards the ground. The average value X_ave of all test points and its corresponding fitness value F_ave are calculated as follows:

X_ave = (1/N) Σ_{i=1}^{N} X_i,  F_ave = f(X_ave).
Given that lightning has a random behavior, for test point i, a random point k is selected among the population (i ≠ k), and the new test point is updated based on the following rules: (i) if the electric field of point k is higher than the average electric field, then

X_i^new = X_i + rand × (X_ave + rand × X_k),

and (ii) if the electric field of point k is lower than the average electric field, then

X_i^new = X_i − rand × (X_ave + rand × X_k).
If the electric field of the new test point is better than that of the old one, the branch sustains; otherwise, it fades. This feature is mathematically formulated as

X_i = X_i^new if f(X_i^new) is better than f(X_i); otherwise, X_i remains unchanged.
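One downward-leader sweep with branch fading can be sketched as below (for a minimization problem). This is a non-authoritative Python illustration: the placement of the random factors reflects our reading of the update rules described above, and the helper names and sphere objective are our own.

```python
import numpy as np

def downward_leader_step(X, fitness, rng):
    """One downward-leader sweep with branch fading (minimization).

    For each point i, a random partner k != i is compared with the average
    point; the candidate moves toward the average/partner combination when
    the partner is better than average, and away from it otherwise.  The
    candidate replaces the old point only if it improves the fitness
    (branch fading), so the population never gets worse.
    """
    N, n = X.shape
    x_ave = X.mean(axis=0)                 # average test point
    f_ave = fitness(x_ave)
    for i in range(N):
        k = int(rng.choice([j for j in range(N) if j != i]))
        if fitness(X[k]) < f_ave:          # partner has a "higher field"
            cand = X[i] + rng.random(n) * (x_ave + rng.random(n) * X[k])
        else:
            cand = X[i] - rng.random(n) * (x_ave + rng.random(n) * X[k])
        if fitness(cand) < fitness(X[i]):  # branch sustains; else it fades
            X[i] = cand
    return X

# Usage sketch on a sphere objective (our stand-in fitness).
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(1)
X = rng.uniform(-10.0, 10.0, size=(20, 5))
best_before = min(sphere(x) for x in X)
X = downward_leader_step(X, sphere, rng)
best_after = min(sphere(x) for x in X)
```

Because of branch fading, the best fitness in the population is non-increasing across sweeps.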
2.3. Upward Leader Movement
In the upward movement phase, all the test points are considered as the upward leader moving towards the cloud. The new test points are generated as follows:

X_i^new = X_i + rand × S × (X_best − X_worst),

where X_best and X_worst are the best and the worst solutions of the population and S is an exponent factor that is a function of the iteration number t and the maximum number of iterations T_max:

S = 1 − (t/T_max) × exp(−t/T_max).
From a computational point of view, this iterationdependent exponent factor is important for the balance of exploration and exploitation capabilities of the algorithm. Similar to the downward movement, the branch fading feature also occurs in this phase.
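The upward-leader phase can be sketched as follows, again for minimization. The decaying form of the exponent factor S = 1 − (t/T_max)·exp(−t/T_max) is our assumption based on the description above (the equation is missing from this copy); the helper names and sphere objective are likewise our own.

```python
import numpy as np

def exponent_factor(t, t_max):
    """Iteration-dependent exponent factor S (assumed form).  It equals 1 at
    t = 0 and shrinks as iterations proceed, shifting the search from
    exploration to exploitation."""
    return 1.0 - (t / t_max) * np.exp(-t / t_max)

def upward_leader_step(X, fitness, t, t_max, rng):
    """Move each test point along the best-worst direction; keep improvements
    only (branch fading applies in this phase as well)."""
    f_vals = np.array([fitness(x) for x in X])
    x_best = X[f_vals.argmin()].copy()     # best solution of the population
    x_worst = X[f_vals.argmax()].copy()    # worst solution of the population
    S = exponent_factor(t, t_max)
    for i in range(X.shape[0]):
        cand = X[i] + rng.random(X.shape[1]) * S * (x_best - x_worst)
        if fitness(cand) < fitness(X[i]):  # branch sustains; else it fades
            X[i] = cand
    return X

# Usage sketch on a sphere objective (our stand-in fitness).
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(2)
X = rng.uniform(-10.0, 10.0, size=(20, 5))
best_before = min(sphere(x) for x in X)
X = upward_leader_step(X, sphere, t=10, t_max=100, rng=rng)
best_after = min(sphere(x) for x in X)
```

Under the assumed form, S starts at 1 and decreases with t, so late iterations take smaller steps.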
2.4. Enhancement of the Performance
In order to enhance the performance of LAPO, in each iteration, the worst test point X_worst is replaced by the average test point X_ave if the fitness of the former is worse than that of the latter:

X_worst = X_ave if f(X_ave) is better than f(X_worst).
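This replacement step is straightforward; a minimal Python sketch for a minimization problem follows (function name and sphere objective are ours, not the paper's):

```python
import numpy as np

def replace_worst_with_average(X, fitness):
    """Replace the worst test point by the average test point whenever the
    average point has the better (smaller) fitness."""
    f_vals = np.array([fitness(x) for x in X])
    x_ave = X.mean(axis=0)
    worst = int(f_vals.argmax())           # minimization: largest value is worst
    if fitness(x_ave) < f_vals[worst]:
        X[worst] = x_ave
    return X

# Usage sketch: the average of the three points below is [1, 1], which beats
# the worst point [3, 3] on the sphere objective, so [3, 3] is replaced.
sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
X = np.array([[2.0, 2.0], [-2.0, -2.0], [3.0, 3.0]])
X = replace_worst_with_average(X, sphere)
```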
2.5. Stopping Criterion
The algorithm terminates when the maximum number of iterations is reached. Otherwise, the downward and upward leader movements and the performance enhancement step are repeated.
2.6. Procedure of the Basic LAPO
The complete computational procedure of the basic LAPO is provided in Algorithm 1.

3. The Enhanced Lightning Attachment Procedure Optimization
The enhanced lightning attachment procedure optimization (ELAPO) is presented in this section. Two main strategies are employed in ELAPO. First, a quasi-opposition-based learning strategy is developed and applied randomly to diversify the population. Second, a dimensional search strategy is proposed to improve the quality of the best solution in each iteration. The key ideas behind ELAPO are described as follows.
3.1. Quasi-Opposition-Based Learning
In order to prevent the proposed algorithm from being trapped in local optimal solutions, a monitoring condition is introduced and checked in each iteration. The following steps are involved. First, a distance constant d between the average test point and the best test point is calculated.
Second, the minimum admissible value d_min of the distance constant is computed, and the monitoring condition is then checked. If d < d_min, the concept of opposition-based learning is employed to further diversify the population and improve the convergence rate of the algorithm. In this strategy, a portion of the test points is randomly selected, the corresponding quasi-opposite test points are generated, and both sets are considered at the same time. The fitness values of the original test points and the quasi-opposite test points are then calculated and ranked, from which the best solutions are selected to proceed with the downward and upward leader movements. In order to maintain the stochastic nature of ELAPO, a quasi-opposite solution is randomly generated between the center of the search space CS and the mirror point MP of the corresponding test point:

X_i^qo = CS + rand × (MP − CS), with CS = (X_min + X_max)/2 and MP = X_min + X_max − X_i,

where Nq is the number of randomly chosen test points for the generation of quasi-opposite test points; it is set to 5 in this paper.
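The generation of a quasi-opposite point can be sketched as below, using the standard quasi-opposition definitions (center of the box and mirror point); the selection of the Nq points and the ranking step are omitted, and the function name is ours.

```python
import numpy as np

def quasi_opposite(x, x_min, x_max, rng):
    """Quasi-opposite point: a uniform random sample between the center of
    the search space CS = (x_min + x_max) / 2 and the mirror point
    MP = x_min + x_max - x of the test point x."""
    cs = (x_min + x_max) / 2.0
    mp = x_min + x_max - x
    lo, hi = np.minimum(cs, mp), np.maximum(cs, mp)   # per-dimension interval
    return lo + rng.random(x.shape) * (hi - lo)

# Usage sketch: in the box [-100, 100]^4 the center is 0; for x = 80 the
# mirror point is -80, so the quasi-opposite sample lies in [-80, 0].
rng = np.random.default_rng(3)
x_min, x_max = np.full(4, -100.0), np.full(4, 100.0)
x = np.full(4, 80.0)
xq = quasi_opposite(x, x_min, x_max, rng)
```

Sampling between CS and MP (rather than taking MP itself) keeps the opposite population stochastic, as the text above requires.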
3.2. Enhancing Dimensional Search
During the search process of the basic LAPO, all dimensions of each test point are updated simultaneously in each iteration. In other words, the updates of the different dimensional variables are coupled. This procedure has one obvious drawback: the change in one dimensional variable may negatively affect the other dimensional variables, leading to poor convergence performance in each dimension. In order to enhance the dimensional search for each variable, the following four steps are carried out in each iteration: (a) find the best test point, (b) generate one new solution based on the best test point in such a way that the value of one variable is revised while the remaining variables are preserved, (c) compare the fitness values of the newly generated solution and the old solution and keep the better one, and (d) repeat steps (b) and (c) for the other dimensional variables. The newly generated solution is thus produced by revising a single variable of the best test point at a time.
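Steps (a)-(d) can be sketched as a greedy one-dimension-at-a-time refinement. The paper's exact rule for revising the selected variable is not reproduced in this copy, so a hypothetical bounded random perturbation (`step`) stands in for it; the function name and sphere objective are also ours.

```python
import numpy as np

def dimensional_search(x_best, fitness, step, rng):
    """Greedy per-dimension refinement of the best test point.

    For each variable j, a candidate changes dimension j only (here by a
    hypothetical random perturbation of size <= step, standing in for the
    paper's update rule); the candidate is kept only if it improves the
    fitness, so the other dimensions are never disturbed.
    """
    x = x_best.copy()
    for j in range(x.size):
        cand = x.copy()
        cand[j] += step * (2.0 * rng.random() - 1.0)  # revise variable j only
        if fitness(cand) < fitness(x):                # keep the better solution
            x = cand
    return x

# Usage sketch on a sphere objective (our stand-in fitness).
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(4)
x0 = rng.uniform(-5.0, 5.0, size=8)
x1 = dimensional_search(x0, sphere, step=1.0, rng=rng)
```

Since every per-dimension move is accepted only on improvement, the refined point is never worse than the input best point.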
3.3. Procedure of ELAPO
The complete computational procedure of ELAPO is provided in Algorithm 2.

4. Experimental Results and Analysis
In this section, the performance of ELAPO is evaluated on 32 different benchmark functions, and the results are compared with those of several state-of-the-art metaheuristic optimization algorithms. The benchmark functions are listed in Tables 1–3, among which F1–F11 are unimodal functions, F12–F25 are multimodal functions, and F26–F32 are composite functions provided by the IEEE CEC 2014 special session [62]. In these tables, n refers to the dimension of the function, Range denotes the search space, and Fmin is the true optimal value of the test function. Two dimensions (n = 30 and 100) are chosen in order to evaluate the capability of the proposed algorithm on test functions of different scales.



Six metaheuristic optimization algorithms are used in this section as a comparison with the proposed algorithm: the basic LAPO, squirrel search algorithm (SSA) [28], Jaya [59], improved big bang-big crunch algorithm (IBBBC) [60], opposition-based differential evolution algorithm (ODE1) [61], and ant lion optimizer (ALO) [20]. The population size and the maximum number of iterations are set to 50 and 1000, respectively. The same set of initial random populations is used for all algorithms. The error value, defined as f(x) − Fmin, is recorded for the solution x, where f(x) is the optimal fitness value of the function found by the algorithm. The widely used parameter settings of all algorithms are listed in Table 4. Each algorithm is applied to the test functions in 10 independent runs, and the average and standard deviation of the error values over all independent runs are calculated. Meanwhile, all algorithms are compared in terms of convergence behavior using convergence curves (Figures 1–6). In addition, the effectiveness of each strategy is tested.
