
An Enhanced Lightning Attachment Procedure Optimization with Quasi-Opposition-Based Learning and Dimensional Search Strategies

Tongyi Zheng, Weili Luo

Computational Intelligence and Neuroscience, vol. 2019, Article ID 1589303, 24 pages, 2019. https://doi.org/10.1155/2019/1589303

Academic Editor: Bruce J. MacLennan
Received: 15 Feb 2019
Revised: 15 Jun 2019
Accepted: 17 Jul 2019
Published: 01 Aug 2019

Abstract

Lightning attachment procedure optimization (LAPO) is a new global optimization algorithm inspired by the attachment procedure of lightning in nature. However, like other metaheuristic algorithms, LAPO has its own disadvantages. To obtain better global search ability, an enhanced version of LAPO, called ELAPO, is proposed in this paper. A quasi-opposition-based learning strategy is incorporated to improve both exploration and exploitation by considering an estimate and its opposite simultaneously. Moreover, a dimensional search enhancement strategy is proposed to intensify the exploitation ability of the algorithm. 32 benchmark functions, including unimodal, multimodal, and CEC 2014 functions, are used to test the effectiveness of the proposed algorithm. Numerical results indicate that ELAPO provides better or competitive performance compared with the basic LAPO and five other state-of-the-art optimization algorithms.

1. Introduction

Optimization problems with a complex and nonlinear nature arise in many engineering application domains and scientific fields. They are usually difficult to solve with classical mathematical methods, since such methods are often inefficient and rely on strong mathematical assumptions. Owing to these limitations, many nature-inspired stochastic optimization algorithms have been proposed for global optimization over the last two decades. Such algorithms are typically simple and easy to implement, which makes it possible to tackle highly complex optimization problems. These metaheuristics can be roughly classified into three categories: evolutionary algorithms, swarm intelligence algorithms, and physics-based algorithms.

Evolutionary algorithms are generic population-based metaheuristics that imitate evolutionary mechanisms observed in nature, such as reproduction, mutation, recombination, and selection. The first generation starts with randomly initialized solutions, which evolve over successive generations. The best individual of the whole population in the final generation is taken as the optimized solution. Some of the popular evolutionary algorithms are genetic algorithm (GA) [1], genetic programming (GP) [2], evolution strategy (ES) [3], differential evolution (DE) algorithm [4], and biogeography-based optimizer (BBO) [5].

Swarm intelligence algorithms mimic the collective behavior of swarms, herds, schools, or flocks of creatures in nature, which interact with each other and make full use of information about their environment as the algorithm progresses. For example, honey bees are capable of guaranteeing the survival of a colony without any external guidance. In other words, no one tells honey bees how and where to find food sources; instead, they cooperatively seek food sources, even those located far away from their nests. In this category, particle swarm optimization (PSO) [6], ant colony optimization (ACO) [7], and artificial bee colony algorithm (ABC) [8] can be regarded as representative algorithms. Some other popular swarm intelligence algorithms are firefly mating algorithm (FMA) [9], shuffled frog leaping algorithm (SFLA) [10], bee collecting pollen algorithm (BCPA) [11], cuckoo search (CS) algorithm [12], dolphin partner optimization (DPO) [13], bat-inspired algorithm (BA) [14], firefly algorithm (FA) [15], and hunting search (HUS) algorithm [16]. Some of the recent swarm intelligence algorithms are fruit fly optimization algorithm (FOA) [17], dragonfly algorithm (DA) [18], artificial algae algorithm (AAA) [19], ant lion optimizer (ALO) [20], shark smell optimization algorithm (DSOA) [21], whale optimization algorithm (WOA) [22], crow search algorithm (CSA) [23], grasshopper optimization algorithm (GOA) [24], mouth brooding fish algorithm (MBFA) [25], spotted hyena optimizer (SHO) [26], butterfly-inspired algorithm (BFA) [27], squirrel search algorithm (SSA) [28], Andean condor algorithm (ACA) [29], and pity beetle algorithm (PBA) [30].

The third category comprises physics-based algorithms, which are built on basic physical laws such as gravitational force, electromagnetic force, and inertia force. Some prevailing algorithms in this category are simulated annealing (SA) [31], gravitational search algorithm (GSA) [32], big-bang big-crunch (BBBC) algorithm [33], charged system search (CSS) [34], black hole (BH) algorithm [35], central force optimization (CFO) [36], small-world optimization algorithm (SWOA) [37], artificial chemical reaction optimization algorithm (ACROA) [38], ray optimization (RO) algorithm [39], galaxy-based search algorithm (GbSA) [40], curved space optimization (CSO) [41], and multiverse optimizer (MVO) [42].

Regardless of the differences among the three categories, these algorithms share a common point: besides common control parameters such as population size and number of generations, most metaheuristic algorithms require the tuning of algorithm-specific parameters during the course of optimization. For instance, GA requires tuning of the cross-over probability, mutation probability, and selection operator [43]; SA requires tuning of the initial temperature and cooling rate [31]; PSO requires tuning of the inertia weight and learning factors [6]. Improper tuning of these parameters either increases the computational cost or leads to a local optimal solution.

Recently, a new physics-based metaheuristic algorithm named lightning attachment procedure optimization (LAPO) [44] was proposed, which does not require tuning of any algorithm-specific parameters. Instead, the average value of all solutions is employed to adjust, in a self-adaptive manner, the lightning jump behavior of moving towards or away from a jumping point (or position). This is an important reason why LAPO does not easily become stuck in local optima and has good exploration and exploitation abilities. LAPO has already proved its superiority in solving a number of constrained numerical optimization problems [44].

In this paper, an enhanced lightning attachment procedure optimization, namely ELAPO, is developed to increase the convergence speed of LAPO during the search process while maintaining its key feature of being free from algorithm-specific parameter tuning. In ELAPO, the concept of opposition-based learning (OBL) is incorporated to enhance the search ability of the algorithm. The motivation is that the current estimates and their corresponding opposites are considered simultaneously to find better solutions, thereby enabling the algorithm to explore a large region of the search space in every generation. This concept has been found effective in improving the performance of well-known optimization algorithms such as genetic algorithms (GA) [45], differential evolution (DE) [46, 47], particle swarm optimization (PSO) [48, 49], biogeography-based optimization (BBO) [50, 51], harmony search (HS) algorithm [52, 53], gravitational search optimization (GSO) [54, 55], group search algorithm (GSA) [56, 57], and artificial bee colony (ABC) [58]. Meanwhile, a dimensional search strategy is proposed to intensively exploit a local search for each variable of the best solution in each iteration, thus yielding a higher-quality solution at the end of each iteration and strengthening the exploitation of the algorithm. To evaluate the effectiveness of the proposed algorithm, ELAPO is applied to 32 benchmark functions and compared with the basic LAPO and five representative metaheuristic algorithms (SSA [28], Jaya [59], IBB-BC [60], ODE1 [61], and ALO [20]). The effectiveness of the two strategies is also discussed.

The rest of this paper is organized as follows: Section 2 briefly recapitulates the basic LAPO. Next, the proposed ELAPO is presented in detail in Section 3. Numerical comparisons are illustrated in Section 4. Finally, Section 5 gives the concluding remarks.

2. Basic Algorithm

LAPO is a new nature-inspired global optimization algorithm, which mimics the lightning attachment procedure, including the downward leader movement and the upward leader propagation. Lightning is a sudden electrostatic discharge occurring between electrically charged regions of a cloud, which moves toward or away from the ground in a stepwise manner. After each step, the downward leader stops and then moves to a randomly selected potential point that may have a higher value of the electric field. The upward leader starts from sharp points and moves towards the downward leader. The branch fading feature of lightning takes effect when the charge of a branch is lower than a critical value. When the two leaders join together, a final strike occurs and the charge of the cloud is neutralized.

2.1. Parameters and Initialization of Test Points

The main parameters of LAPO are the maximum number of iterations, the number of test points, the number of decision variables n, and the lower and upper bounds of the decision variables. These parameters are given at the beginning of the algorithm. As in other nature-inspired optimization algorithms, an initial population is required. Each member of the population is regarded as a test point in the feasible search space, which can be an emitting point of the downward or upward leader. The test points are randomly initialized within the bounds using uniformly distributed random numbers in the range [0, 1], and the electric field (i.e., fitness value) of each test point is then calculated from the objective function.
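For concreteness, the initialization step can be sketched in NumPy as follows. The bounds, population size, and the sphere objective used here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sphere(x):
    """Illustrative objective function (not from the paper)."""
    return float(np.sum(x ** 2))

n_points, n_dims = 50, 30            # number of test points and decision variables
x_min, x_max = -10.0, 10.0           # lower and upper bounds of each variable

rng = np.random.default_rng(0)
# Each test point is drawn uniformly within the bounds:
# a random number in [0, 1] per dimension scales the width of the search range.
points = x_min + rng.random((n_points, n_dims)) * (x_max - x_min)
# The "electric field" of a test point is simply its objective value.
fitness = np.array([sphere(p) for p in points])
print(points.shape, fitness.min())
```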

2.2. Downward Leader Movement toward the Ground

In this phase, all the test points are considered as downward leaders and move down towards the ground. The average of all test points and its corresponding fitness value are calculated first.

Given that lightning has a random behavior, for each test point i a random point k is selected from the population (k ≠ i), and the new test point is generated according to the following rules: (i) if the electric field of point k is higher than the average electric field, the test point moves toward a combination of the average test point and point k; and (ii) if it is lower, the test point moves away from that combination.

If the electric field of the new test point is better than that of the old one, the branch sustains; otherwise, it fades. In other words, the new test point replaces the old one only if it improves the fitness value.
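The sketch below gives one plausible reading of this phase, consistent with the description above: a jump relative to the population average and a randomly chosen partner, kept only if it improves the fitness (branch fading). The exact update expression is an assumption here and should be checked against the original LAPO paper [44]. Minimization is assumed throughout.

```python
import numpy as np

def downward_leader(points, fitness, objective, rng):
    """One downward-leader sweep over all test points (minimization assumed)."""
    n_points, n_dims = points.shape
    x_avg = points.mean(axis=0)              # average test point
    f_avg = objective(x_avg)                 # its fitness ("electric field")
    for i in range(n_points):
        k = rng.choice([j for j in range(n_points) if j != i])   # random partner k != i
        r1, r2 = rng.random(n_dims), rng.random(n_dims)
        if fitness[k] < f_avg:               # partner's field is better than the average
            candidate = points[i] + r1 * (x_avg + r2 * points[k])
        else:                                # otherwise move in the opposite direction
            candidate = points[i] - r1 * (x_avg + r2 * points[k])
        f_new = objective(candidate)
        if f_new < fitness[i]:               # the branch sustains only if it improves
            points[i], fitness[i] = candidate, f_new
    return points, fitness
```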

2.3. Upward Leader Movement

In the upward movement phase, all the test points are considered as upward leaders moving towards the cloud. Each new test point is generated by moving the current test point relative to the best and the worst solutions of the population, scaled by an exponent factor S that depends on the current iteration number and the maximum number of iterations.

From a computational point of view, this iteration-dependent exponent factor is important for balancing the exploration and exploitation capabilities of the algorithm. As in the downward movement, the branch fading feature also applies in this phase.
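A corresponding sketch of the upward-leader phase is given below. The particular form of S used here, decaying with the iteration counter, is an assumption for illustration; see [44] for the exact definition.

```python
import numpy as np

def upward_leader(points, fitness, objective, t, t_max, rng):
    """One upward-leader sweep: each point moves relative to the best and worst
    solutions, scaled by an iteration-dependent exponent factor S."""
    n_points, n_dims = points.shape
    best = points[np.argmin(fitness)]
    worst = points[np.argmax(fitness)]
    # Assumed exponent factor: close to 1 early on (exploration) and decreasing
    # as t approaches t_max (exploitation).
    s = 1.0 - (t / t_max) * np.exp(-t / t_max)
    for i in range(n_points):
        r = rng.random(n_dims)
        candidate = points[i] + r * s * (best - worst)
        f_new = objective(candidate)
        if f_new < fitness[i]:               # branch fading, as in the downward phase
            points[i], fitness[i] = candidate, f_new
    return points, fitness
```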

2.4. Enhancement of the Performance

In order to enhance the performance of LAPO, in each iteration the worst test point is replaced by the average test point if the fitness of the former is worse than that of the latter.
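A minimal sketch of this replacement step, assuming minimization:

```python
import numpy as np

def replace_worst_with_average(points, fitness, objective):
    """If the average test point is fitter than the worst test point,
    overwrite the worst one with the average (minimization assumed)."""
    x_avg = points.mean(axis=0)
    f_avg = objective(x_avg)
    worst = int(np.argmax(fitness))          # largest objective value = worst point
    if f_avg < fitness[worst]:
        points[worst], fitness[worst] = x_avg, f_avg
    return points, fitness
```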

2.5. Stopping Criterion

The algorithm terminates when the maximum number of iterations is reached. Otherwise, the downward leader movement, the upward leader movement, and the performance enhancement are repeated.

2.6. Procedure of the Basic LAPO

The complete computational procedure of the basic LAPO is provided in Algorithm 1.

(1) Set the maximum number of iterations, the number of test points, the number of decision variables n, and the lower and upper bounds of the decision variables
(2) Randomly initialize the test points within the bounds
(3) Calculate the fitness value of each test point
(4) while the maximum number of iterations is not reached
(5)  Calculate the average value of all test points and its fitness value
(6)  Determine the best and the worst test points
(7)  if the worst test point is worse than the average test point
(8)   replace the worst test point by the average test point
(9)  end
(10)  Downward leader movement toward the ground
(11)  for i = 1 : number of test points
(12)   randomly select a test point k ≠ i
(13)   if the electric field of point k is higher than the average electric field
(14)    move test point i toward the average test point and point k
(15)   else
(16)    move test point i away from the average test point and point k
(17)   end
(18)   Calculate the fitness value of the new test point
(19)   if the new test point is better than the old one
(20)    replace the old test point with the new one
(21)   end
(22)  end
(23)  Upward leader movement
(24)  for i = 1 : number of test points
(25)   compute the exponent factor S from the current iteration number
(26)   move test point i using the best and the worst solutions scaled by S
(27)   if the new test point is better than the old one
(28)    replace the old test point with the new one
(29)   end
(30)  end
(31)  increase the iteration counter
(32) end

3. The Enhanced Lightning Attachment Procedure Optimization

The enhanced lightning attachment procedure optimization (ELAPO) is presented in this section. Two main strategies are used in ELAPO. First, a quasi-opposition-based learning strategy is developed and applied to randomly selected test points to diversify the population. Second, a dimensional search strategy is proposed to improve the quality of the best solution in each iteration. The key ideas behind ELAPO are described as follows.

3.1. Quasi-Opposition-Based Learning

In order to prevent the proposed algorithm from being trapped in local optimal solutions, a monitoring condition is introduced and checked in each iteration. The following steps are involved. First, the distance between the average test point and the best test point is calculated.

Second, a minimum threshold for this distance is computed, and the monitoring condition is then checked. If the distance falls below the threshold, the concept of opposition-based learning is employed to further diversify the population and improve the convergence rate of the algorithm. In this strategy, a portion of the test points is randomly selected, the corresponding quasi-opposite test points are generated, and both sets are considered at the same time. The fitness values of the original test points and the quasi-opposite test points are then calculated and ranked, and the best solutions are selected to proceed with the downward and upward leader movements. In order to maintain the stochastic nature of ELAPO, each quasi-opposite solution is randomly generated between the center of the search space CS and the mirror point MP of the corresponding test point. Here, Nq denotes the number of randomly chosen test points used to generate the quasi-opposite points, and it is set to 5 in this paper.
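The quasi-opposite points themselves are straightforward to generate: each one is sampled uniformly between the centre of the search space and the mirror (opposite) point of the selected solution. The sketch below follows that description; the monitoring threshold and the truncation of the combined set back to the population size are handled elsewhere in the algorithm and are not shown here. The bounds and population used in the usage lines are illustrative assumptions.

```python
import numpy as np

def quasi_opposite(selected, x_min, x_max, rng):
    """Generate quasi-opposite points for the selected test points: each point is
    drawn uniformly between the centre of the search space (CS) and the mirror
    point (MP) of the original solution."""
    centre = (x_min + x_max) / 2.0            # CS, per dimension
    mirror = x_min + x_max - selected         # MP = opposite of each selected point
    lo = np.minimum(centre, mirror)
    hi = np.maximum(centre, mirror)
    return lo + rng.random(selected.shape) * (hi - lo)

# Usage sketch: pick Nq = 5 test points at random (as in the paper) and build
# their quasi-opposites.
rng = np.random.default_rng(1)
x_min, x_max = -10.0, 10.0
points = x_min + rng.random((50, 30)) * (x_max - x_min)
idx = rng.choice(len(points), size=5, replace=False)
q_points = quasi_opposite(points[idx], x_min, x_max, rng)
print(q_points.shape)
```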

3.2. Enhancing Dimensional Search

During the search process of the basic LAPO, all dimensions of each test point are updated simultaneously in each iteration; in other words, the updates of the individual variables are coupled. This procedure has an obvious drawback: a change in one dimension may negatively affect other dimensions, leading to poor convergence in each dimension. In order to enhance the dimensional search for each variable, the following four steps are carried out in each iteration: (a) find the best test point; (b) generate one new solution based on the best test point, in which the value of one variable is revised while the remaining variables are preserved; (c) compare the fitness value of the newly generated solution with that of the old solution and keep the better one; and (d) repeat steps (b) and (c) for the other dimensions. A sketch of this procedure is given below.
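The per-dimension update rule used in the sketch (a small uniform step relative to the search range) is an assumption made for illustration only; what matters here is the one-variable-at-a-time accept-if-better structure described in the four steps above.

```python
import numpy as np

def dimensional_search(best, f_best, objective, x_min, x_max, rng, scale=0.1):
    """Refine the best test point one dimension at a time: perturb variable j,
    keep the change only if the fitness improves, then move on to j + 1."""
    best = best.copy()
    for j in range(len(best)):
        candidate = best.copy()
        # Assumed perturbation: a small uniform step relative to the range.
        candidate[j] += scale * (x_max - x_min) * (rng.random() - 0.5)
        candidate[j] = np.clip(candidate[j], x_min, x_max)
        f_new = objective(candidate)
        if f_new < f_best:                    # keep the revised variable only if better
            best, f_best = candidate, f_new
    return best, f_best

# Usage sketch on an illustrative quadratic objective:
rng = np.random.default_rng(2)
obj = lambda x: float(np.sum(x ** 2))
x0 = rng.uniform(-10.0, 10.0, size=30)
x1, f1 = dimensional_search(x0, obj(x0), obj, -10.0, 10.0, rng)
print(obj(x0), f1)
```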

3.3. Procedure of ELAPO

The complete computational procedure of ELAPO is provided in Algorithm 2.

(1) Set the maximum number of iterations, the number of test points, the number Nq of test points used for quasi-opposition, the number of decision variables n, and the lower and upper bounds of the decision variables
(2) Randomly initialize the test points within the bounds
(3) Calculate the fitness value of each test point
(4) while the maximum number of iterations is not reached
(5)  Calculate the average value of all test points and its fitness value
(6)  Determine the best and the worst test points
(7)  if the worst test point is worse than the average test point
(8)   replace the worst test point by the average test point
(9)  end
(10)  Generate quasi-opposite test points
(11)  Calculate the distance between the average and the best test points and its minimum threshold
(12)  if the distance is smaller than the threshold
(13)   randomly select Nq test points and generate their quasi-opposite points
(14)   Calculate the fitness values of the quasi-opposite test points
(15)  end
(16)  Select good solutions from the original test points and the quasi-opposite test points
(17)  Downward leader movement toward the ground
(18)  for i = 1 : number of test points
(19)   randomly select a test point k ≠ i
(20)   if the electric field of point k is higher than the average electric field
(21)    move test point i toward the average test point and point k
(22)   else
(23)    move test point i away from the average test point and point k
(24)   end
(25)   Calculate the fitness value of the new test point
(26)   if the new test point is better than the old one
(27)    replace the old test point with the new one
(28)   end
(29)  end
(30)  Upward leader movement
(31)  for i = 1 : number of test points
(32)   compute the exponent factor S from the current iteration number
(33)   move test point i using the best and the worst solutions scaled by S
(34)   if the new test point is better than the old one
(35)    replace the old test point with the new one
(36)   end
(37)  end
(38)  Enhance intensive dimensional search
(39)  Find the best test point
(40)  for j = 1 : n
(41)   generate a new solution by revising the j-th variable of the best test point while keeping the other variables
(42)   Calculate the fitness value of the new solution
(43)   if the new solution is better than the best test point
(44)    replace the best test point with the new solution
(45)   end
(46)  end
(47)  increase the iteration counter
(48) end

4. Experimental Results and Analysis

In this section, the performance of ELAPO is evaluated on 32 different benchmark functions, and the results are compared with those of several state-of-the-art metaheuristic optimization algorithms. The benchmark functions are listed in Tables 1–3, among which F1–F11 are unimodal functions, F12–F25 are multimodal functions, and F26–F32 are composite functions provided by the IEEE CEC 2014 special session [62]. In these tables, n refers to the dimension of the function, Range denotes the search space, and Fmin is the true optimal value of the test function. Two dimensions (n = 30 and 100) are chosen in order to evaluate the capability of the proposed algorithm on test functions of different scales.


Table 1. Unimodal benchmark functions (F1–F11).

Function    n         Range            Fmin
F1          30, 100   [−10, 10]        0
F2          30, 100   [−10, 10]        0
F3          30, 100   [−1, 1]          −1
F4          30, 100   [−100, 100]      0
F5          30, 100   [−1.28, 1.28]    0
F6          30, 100   [−30, 30]        0
F7          30, 100   [−100, 100]      0
F8          30, 100   [−100, 100]      0
F9          30, 100   [−10, 10]        0
F10         30, 100   [−100, 100]      0
F11         30, 100   [−1, 1]          0


Table 2. Multimodal benchmark functions (F12–F25).

Function    n         Range            Fmin
F12         30, 100   [−32, 32]        0
F13         30, 100   [−10, 10]        0
F14         30, 100   [−100, 100]      0
F15         30, 100   [−100, 100]      0
F16         30, 100   [−50, 50]        0
F17         30, 100   [−100, 100]      0
F18         30, 100   [−5, 5]
F19         30, 100   [−n², n²]
F20         30, 100   [−100, 100]      0
F21         30, 100   [−5.12, 5.12]    0
F22         30, 100   [−5.12, 5.12]    0
F23         30, 100   [−100, 100]      0
F24         30, 100   [−0.5, 0.5]      0
F25         30, 100   [−100, 100]      0


Table 3. Composite benchmark functions from the IEEE CEC 2014 test suite (F26–F32).

Function                                                      n         Range          Fmin
F26 (CEC1: rotated high-conditioned elliptic function)        30, 100   [−100, 100]    100
F27 (CEC2: rotated bent cigar function)                       30, 100   [−100, 100]    200
F28 (CEC4: shifted and rotated Rosenbrock’s function)         30, 100   [−100, 100]    400
F29 (CEC17: hybrid function 1)                                30, 100   [−100, 100]    1700
F30 (CEC23: composition function 1)                           30, 100   [−100, 100]    2300
F31 (CEC24: composition function 2)                           30, 100   [−100, 100]    2400
F32 (CEC25: composition function 3)                           30, 100   [−100, 100]    2500

Six metaheuristic optimization algorithms are used in this section for comparison with the proposed algorithm: the basic LAPO, squirrel search algorithm (SSA) [28], Jaya [59], improved big bang-big crunch algorithm (IBB-BC) [60], opposition-based differential evolution algorithm (ODE1) [61], and ant lion optimizer (ALO) [20]. The population size and the maximum number of iterations are set to 50 and 1000, respectively. The same set of initial random populations is used to evaluate the different algorithms. The error value, defined as f(x) − Fmin, is recorded for the solution x, where f(x) is the best fitness value found by the algorithm. The widely used parameter settings of all algorithms are listed in Table 4. Each algorithm is applied to the test functions in 10 independent runs, and the average and standard deviation of the error values over all independent runs are calculated. Meanwhile, all algorithms are compared in terms of convergence behavior (Figures 1–6). In addition, the effectiveness of each strategy is tested.
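The reported statistics follow directly from this protocol. The snippet below only illustrates the bookkeeping; the numbers are placeholders, not results from the paper.

```python
import numpy as np

f_min = 0.0                                   # true optimum of the test function
# Hypothetical best fitness values from 10 independent runs (placeholders only).
best_values = np.array([1.2e-8, 3.4e-9, 7.7e-9, 2.1e-8, 5.5e-9,
                        9.0e-9, 1.1e-8, 4.2e-9, 6.3e-9, 8.8e-9])
errors = best_values - f_min                  # error value f(x) - Fmin per run
print(f"mean error = {errors.mean():.3e}, std = {errors.std(ddof=1):.3e}")
```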


Table 4. Parameter settings of the compared algorithms.

Algorithm   Parameters
ELAPO       —
LAPO        —
SSA         sf = 18, Nfs = 4
Jaya        —
IBB-BC      γ = 0.2, α = 3
ALO         —
ODE1        F = 0.5, Cr = 0.9, JR = 0.3