Networked Systems with Complexities and Their Applications to Engineering
Xu Wang, Shuguang Zhao, "Differential Evolution Algorithm with Self-Adaptive Population Resizing Mechanism", Mathematical Problems in Engineering, vol. 2013, Article ID 419372, 14 pages, 2013. https://doi.org/10.1155/2013/419372
Differential Evolution Algorithm with Self-Adaptive Population Resizing Mechanism
Abstract
A differential evolution (DE) algorithm with a self-adaptive population resizing mechanism, SapsDE, is proposed to enhance the performance of DE by dynamically choosing one of two mutation strategies and tuning the control parameters in a self-adaptive manner. More specifically, the more appropriate mutation strategy, along with its parameter settings, is determined adaptively according to the search status at different stages of the evolution process. To verify the performance of SapsDE, 17 benchmark functions with a wide range of dimensions and diverse complexities are used. Nonparametric statistical procedures were performed for multiple comparisons between the proposed algorithm and five well-known DE variants from the literature. Simulation results show that SapsDE is effective and efficient, and that it yields superior results compared with the other five algorithms in most cases.
1. Introduction
Evolutionary algorithms (EAs), inspired by the biological evolutionary mechanisms found in nature, have achieved great success on many numerical and combinatorial optimization problems in diverse fields [1–3]. During the past two decades, EAs have become a hot topic. When implementing EAs, users must address several issues, for example, appropriate encoding schemes, evolutionary operators, and suitable parameter settings, to ensure the success of the algorithms. Earlier EAs suffered from disadvantages such as complex procedures, stagnation, and poor search ability. To overcome these disadvantages, on the one hand, researchers proposed related methods with better global search ability, such as particle swarm optimization (PSO) [4, 5] and differential evolution (DE) [6]. On the other hand, the effect of parameter settings on EAs has been the subject of extensive research [7] by the EA community, and recently a substantial number of self-adaptive EAs have appeared, which adjust their parameters along with the iterations (see, e.g., [2, 8] for a review).
DE was proposed by Storn and Price [6]. Like other EAs, DE is a population-based stochastic search technique, but it is simpler and can be implemented more easily than other EAs. Besides that, DE [9, 10] is an effective and versatile function optimizer. Owing to the simplicity of its code, practitioners from other fields can readily apply it to their domain-specific problems even if they are not skilled programmers. Moreover, traditional DE has only three crucial control parameters, namely the scaling factor F, the crossover rate CR, and the population size NP, which is fewer than other EAs (e.g., [8, 11]). It is clear that appropriate settings of these three control parameters ensure the successful functioning of DE [12]. However, the results in [6, 13] may confuse scientists and engineers who try to utilize DE to solve scientific and practical problems. Further, since some objective functions are very sensitive to parameter settings, it can be difficult to set the parameter values according to prior experience. Hence, a great number of publications [14–21] have been devoted to the adjustment of the parameters of the variation operators. Brest et al. [15] proposed a self-adaptive differential evolution algorithm (called jDE) based on a self-adapting control parameter scheme, which encodes the control parameters F and CR into each parent vector and adjusts them with certain probabilities. Qin et al. [20, 21] proposed the SaDE algorithm, which maintains a pool of candidate mutation strategies. Specifically, in SaDE, the trial vector generation strategy and the associated parameter settings (F and CR) are adjusted according to their previous experience of generating promising solutions. Moreover, researchers have improved the performance of DE by applying opposition-based learning [22] or local search [23].
In most existing DEs, the population size remains constant over the run. However, there are biological and experimental reasons to expect that a variable population size would work better. In a natural environment, the population size of a species changes and tends toward a steady state due to natural resources and ecological factors. Technically, the population size is the most flexible element of a biological system, and it can be calibrated more easily than recombination. Bäck et al. [24] have indicated that calibrating the population size during the iterative process can be more rewarding than changing the operator parameters in genetic algorithms. Unfortunately, DEs with variable population size (e.g., [25, 26]) have not received much attention despite their various real-world applications, and there is still much room for research. Hence, in this paper, we focus on a DE with a variable population size scheme in which the population size is adjusted dynamically based on the online solution-search status. In this algorithm, we introduce three population adjustment mechanisms to obtain an appropriate value of NP according to the desired population distribution. Specifically, when the best fitness improves, the population size may be increased to encourage exploration; a short-term lack of improvement shrinks the population; but if stagnation persists over a longer period, the population grows again. Along with this, two trial vector generation strategies are adopted adaptively during the evolution process.
The remainder of this paper is organized as follows. Section 2 gives a brief review of the traditional DE and JADE algorithms. Section 3 introduces the DE with a self-adaptive population size scheme, SapsDE; our mutation and adaptive resizing strategies are also described. Section 4 compares our approach with the traditional DE and several state-of-the-art adaptive DE variants and presents experimental results on a diverse set of test functions with up to 100 dimensions. Finally, Section 5 concludes this paper with some remarks and future research directions.
2. Differential Evolution and JADE Algorithm
In this section, we present an overview of the basic concepts of DE and JADE algorithm necessary for a better understanding of our proposed algorithm.
2.1. Differential Evolution
DE is a population-based algorithm, and a reliable and versatile function optimizer, which evolves a population of NP D-dimensional individuals towards the global optimum by exploiting the differences between individuals. In brief, after initialization, DE repeats mutation, crossover, and selection operations to produce a trial vector for each target vector and selects the better of the two, until some specific termination criterion is satisfied. Conveniently, the generation counter in DE is denoted by G = 0, 1, ..., G_max, and the i-th vector of the population at the current generation is written as X_i^G = (x_{i,1}^G, x_{i,2}^G, ..., x_{i,D}^G).
Initialization. First of all, uniformly randomize NP individuals within the D-dimensional real parameter search space. The initial population should cover the entire search space constrained by the prescribed minimum and maximum bounds X_min = (x_{min,1}, ..., x_{min,D}) and X_max = (x_{max,1}, ..., x_{max,D}). Hence, the initialization of the j-th element of the i-th vector is taken as x_{i,j}^0 = x_{min,j} + rand(0,1) · (x_{max,j} − x_{min,j}), where rand(0,1) represents a uniformly distributed random variable within [0, 1], instantiated independently for each component of the i-th vector.
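This initialization step can be sketched in Python as follows (a minimal illustration; the function name and bounds are ours, not from the paper):

```python
import random

def initialize_population(np_size, dim, x_min, x_max, seed=None):
    """Uniformly initialize NP individuals within [x_min, x_max]^D.

    Each component x_{i,j}^0 is drawn independently as
    x_min[j] + rand(0,1) * (x_max[j] - x_min[j]).
    """
    rng = random.Random(seed)
    return [[x_min[j] + rng.random() * (x_max[j] - x_min[j]) for j in range(dim)]
            for _ in range(np_size)]

# Example: 50 individuals in a 30-dimensional box [-100, 100]^30.
pop = initialize_population(np_size=50, dim=30,
                            x_min=[-100.0] * 30, x_max=[100.0] * 30, seed=1)
```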
Mutation Operation. In the existing literature on DE, the mutant vector V_i^G, called the donor vector, is obtained through the differential mutation operation with respect to each individual X_i^G, known as the target vector, in the current population. For each target vector, the donor vector is created via a certain mutation strategy. Several mutation strategies have been proposed; one of the most popular and simplest forms, DE/rand/1, is V_i^G = X_{r1}^G + F · (X_{r2}^G − X_{r3}^G). The indices r1, r2, and r3 are mutually exclusive integers randomly generated within [1, NP], all different from the base vector index i, and regenerated for each mutant vector. The difference of two of these three vectors is scaled by a mutation weighting factor F, which typically lies in the interval [0.4, 1] in the existing DE literature, and the scaled difference is added to the third vector to obtain the donor vector V_i^G.
Crossover Operation. After the mutation operation, a trial vector U_i^G is produced from the target vector X_i^G and its corresponding donor vector V_i^G by crossover. Traditional DE applies binomial crossover, defined componentwise as u_{i,j}^G = v_{i,j}^G if rand(0,1) ≤ CR or j = j_rand, and u_{i,j}^G = x_{i,j}^G otherwise, where CR is a user-defined constant crossover rate within [0, 1] which controls the probability of copying parameter values from the donor vector, and j_rand is a randomly chosen integer within [1, D] introduced to ensure that the trial vector contains at least one parameter from the donor vector.
Selection Operation. The classic DE algorithm adopts a greedy selection. The fitness of every trial vector is evaluated and compared with that of its corresponding target vector in the current population. For a minimization problem, if the fitness value of the trial vector is not larger than that of the target vector, the target vector is replaced by the trial vector in the next generation; otherwise, the target vector is retained. The selection operation is expressed as X_i^{G+1} = U_i^G if f(U_i^G) ≤ f(X_i^G), and X_i^{G+1} = X_i^G otherwise. The algorithmic process of DE is depicted in Algorithm 1.
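Putting the four steps together, a minimal DE/rand/1/bin loop for minimization might look as follows. This is a sketch, not the paper's implementation; the sphere objective and all parameter values are illustrative:

```python
import random

def de_rand_1_bin(objective, dim, bounds, np_size=30, F=0.5, CR=0.9,
                  max_gens=200, seed=0):
    """Classic DE with DE/rand/1 mutation and binomial crossover."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialization: uniform random population within the bounds.
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_size)]
    fit = [objective(x) for x in pop]
    for _ in range(max_gens):
        for i in range(np_size):
            # Mutation: r1, r2, r3 mutually distinct and different from i.
            r1, r2, r3 = rng.sample([k for k in range(np_size) if k != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(dim)]
            # Binomial crossover: at least one component comes from the donor.
            j_rand = rng.randrange(dim)
            u = [v[j] if (rng.random() <= CR or j == j_rand) else pop[i][j]
                 for j in range(dim)]
            # Greedy selection for minimization.
            fu = objective(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(np_size), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda x: sum(c * c for c in x)
x_best, f_best = de_rand_1_bin(sphere, dim=5, bounds=(-5.0, 5.0))
```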

2.2. JADE Algorithm
Zhang and Sanderson [27] introduced adaptive differential evolution with an optional external archive, named JADE, in which a new mutation strategy and an optional external archive were employed to improve the performance of DE. It is possible to balance exploitation and exploration by using multiple best solutions; this DE/current-to-pbest/1 strategy is expressed as V_i^G = X_i^G + F_i · (X_pbest^G − X_i^G) + F_i · (X_{r1}^G − X̃_{r2}^G), where X_pbest^G is randomly selected as one of the top 100p% individuals of the current population with p ∈ (0, 1]. Meanwhile, X_i^G and X_{r1}^G are distinct random individuals in the current population P, and X̃_{r2}^G is randomly selected from the union of P and the archive A. In particular, A is a set of archived inferior solutions from recent generations, and its size is not allowed to exceed the population size. At each generation, the mutation factor F_i and the crossover rate CR_i of each individual are updated dynamically according to a Cauchy distribution with location parameter μ_F and a normal distribution with mean μ_CR, respectively: F_i = rand_c(μ_F, 0.1) and CR_i = rand_n(μ_CR, 0.1). The two location parameters are initialized to 0.5 and regenerated at the end of each generation as μ_F = (1 − c) · μ_F + c · mean_L(S_F) and μ_CR = (1 − c) · μ_CR + c · mean_A(S_CR), where c ∈ [0, 1] is a positive constant, S_F/S_CR denotes the set of all successful mutation/crossover factors in the current generation, mean_A(·) denotes the usual arithmetic mean, and mean_L(·) is the Lehmer mean, defined as mean_L(S_F) = (Σ_{F∈S_F} F²) / (Σ_{F∈S_F} F).
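The parameter-adaptation step of JADE can be sketched as follows. This is a simplified illustration of the Cauchy/normal sampling and the mean updates; the helper names are ours, and the truncation rules follow the common JADE convention:

```python
import math
import random

def lehmer_mean(values):
    """Lehmer mean used for mu_F: sum(F^2) / sum(F)."""
    return sum(v * v for v in values) / sum(values)

def sample_F(rng, mu_F):
    """Cauchy(mu_F, 0.1): truncate to 1 if F > 1, regenerate if F <= 0."""
    while True:
        f = mu_F + 0.1 * math.tan(math.pi * (rng.random() - 0.5))
        if f > 1.0:
            return 1.0
        if f > 0.0:
            return f

def sample_CR(rng, mu_CR):
    """Normal(mu_CR, 0.1), clipped to [0, 1]."""
    return min(1.0, max(0.0, rng.gauss(mu_CR, 0.1)))

def update_means(mu_F, mu_CR, S_F, S_CR, c=0.1):
    """Blend the successful factors of this generation into the means."""
    if S_F:
        mu_F = (1 - c) * mu_F + c * lehmer_mean(S_F)
    if S_CR:
        mu_CR = (1 - c) * mu_CR + c * (sum(S_CR) / len(S_CR))
    return mu_F, mu_CR
```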
3. SapsDE Algorithm
Here we develop a new SapsDE algorithm by introducing a self-adaptive population resizing scheme into JADE. Technically, this scheme gradually self-adapts the population size according to the previous experience of generating promising solutions. In addition, we use two DE mutation strategies in SapsDE; in each iteration, only one of them is activated. The structure of SapsDE is shown in Algorithm 2.

3.1. Generation Strategies Chooser
DE performs better if it adopts different trial vector generation strategies during different stages of optimization [21]. Hence, in SapsDE, we utilize two mutation strategies, namely DE/rand-to-best/1 and DE/current-to-best/1.
The DE/rand-to-best/1 strategy benefits from its fast convergence speed but may result in premature convergence due to the resultant reduced population diversity. On the other hand, the DE/current-to-best/1 strategy balances the greediness of the mutation and the diversity of the population. Considering these two strategies, we introduce a parameter to choose one of them in each iteration of the evolutionary process. At an earlier stage of the evolutionary process, the DE/rand-to-best/1 strategy is adopted more often to achieve fast convergence. To avoid becoming trapped in a local optimum as the generations proceed, the DE/current-to-best/1 strategy is then used more often, searching a relatively large region biased toward promising progress directions.
As presented in lines 8-15 of Algorithm 2, the DE/rand-to-best/1 strategy is used when a random number, generated from the continuous uniform distribution on [0, 1], is smaller than a threshold; otherwise, the DE/current-to-best/1 strategy is picked. Notice that this threshold is a time-varying variable which diminishes as the generation counter increases. Hence, the DE/rand-to-best/1 strategy is frequently activated at earlier stages, since the random number easily falls below the threshold, while the DE/current-to-best/1 strategy takes over more easily as the generation count increases.
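The chooser described above can be sketched as follows. Since the exact expression for the decaying threshold is not reproduced here, a linear decay between two assumed limits is used purely for illustration:

```python
import random

def choose_strategy(gen, max_gens, rng, ps_max=0.9, ps_min=0.1):
    """Return the mutation strategy for this generation.

    The threshold decreases with the generation counter, so
    DE/rand-to-best/1 dominates early and DE/current-to-best/1 takes
    over later. The linear decay and the limits ps_max/ps_min are
    illustrative assumptions, not the paper's exact formula.
    """
    ps = ps_max - (ps_max - ps_min) * gen / max_gens
    return "rand-to-best/1" if rng.random() < ps else "current-to-best/1"
```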
3.2. Population Resizing Mechanism
The population resizing mechanism aims at dynamically increasing or decreasing the population size according to the instantaneous solution-searching status. In SapsDE, the dynamic adjustment depends on a population resizing trigger, which activates a population-reducing or population-augmenting strategy according to the improvement of the best fitness in the population. In short, if a better solution is found in an iteration, the algorithm becomes more biased towards exploration and augments the population size; a short-term lack of improvement shrinks the population for exploitation; but stagnation over a longer period causes the population to grow again. Furthermore, the population size is monitored to avoid breaking its lower bound. As described in lines 23-34 of Algorithm 2, Lb is the lower bound indicator, and Lbound is the lower bound of the population size. Three trigger variables are used to activate the corresponding dynamic population size adjustment strategy. More specifically, the first trigger variable is set to 1 when the best fitness does not improve, which may lead to the population-reducing strategy deleting poor individuals from the current population. Similarly, when the best fitness does improve, the corresponding trigger variable is set to 1 and population-augmenting strategy 1 is applied. Besides, if the population size stays no bigger than the lower bound (Lbound) over consecutive generations, the lower bound monitor variable is increased each generation; population-augmenting strategy 2 is applied once this monitor, or the stagnation counter, reaches a user-defined threshold.
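The trigger logic can be summarized in a small dispatcher; the returned labels are stand-ins for Algorithms 3-5, and the variable names and the threshold parameter are ours:

```python
def resize_population(improved, stagnation_count, below_bound_count, T=4):
    """Decide which resizing action to take this generation.

    improved          -- did the best fitness improve this generation?
    stagnation_count  -- consecutive generations without improvement
    below_bound_count -- consecutive generations at/below Lbound
    T                 -- user-defined threshold (4 in the paper's experiments)
    Returns one of "augment1", "augment2", "reduce", or "none".
    """
    if below_bound_count >= T or stagnation_count >= T:
        return "augment2"   # long-term stagnation: renewed exploration
    if improved:
        return "augment1"   # improvement: grow to keep exploring
    if stagnation_count > 0:
        return "reduce"     # short-term stagnation: shrink to exploit
    return "none"
```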
Technically, this paper applies three population resizing strategies as follows.
Population Reducing Strategy. The purpose of the population-reducing strategy is to make the search concentrate more on exploitation by removing redundant inferior individuals when there is no short-term improvement of the best fitness. More specifically, the first step is to evaluate the fitness function values of the individuals. The second step is to sort the population by fitness in ascending order; for minimization problems, the smaller the fitness value, the better the individual. Consequently, the third step is to remove some individuals with large fitness values from the current population. The scheme of population reduction is presented in Algorithm 3, where the number of deleted individuals is given by the population-size adjustment factor.
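The reducing strategy amounts to sorting by fitness and dropping the worst individuals. A sketch for minimization follows; `delta` (the number removed) and `lbound` correspond to the paper's adjustment factor and lower bound, while the function name is ours:

```python
def reduce_population(pop, fitness, delta, lbound):
    """Remove the delta worst individuals, never going below lbound."""
    order = sorted(range(len(pop)), key=fitness.__getitem__)  # best first
    keep = max(lbound, len(pop) - delta)
    survivors = order[:keep]
    return [pop[i] for i in survivors], [fitness[i] for i in survivors]

# Example: drop the single worst of three individuals.
pop, fit = reduce_population([[0.0], [2.0], [1.0]], [0.0, 4.0, 1.0],
                             delta=1, lbound=2)
```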

Population Augmenting Strategy 1. Population-augmenting strategy 1 is intended to bias the search more towards exploration in addition to exploitation. It applies the DE/best/2 mutation strategy to generate a new individual, increasing the population size upon fitness improvement. The pseudocode of this population-increasing scheme is shown in Algorithm 4.
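Growing the population with a DE/best/2 mutant can be sketched as follows; the indices and weighting follow the standard DE/best/2 form, and the function name is ours, not the paper's pseudocode:

```python
import random

def augment_best2(pop, fitness, F, rng):
    """Append one new individual generated by DE/best/2 mutation:
    new = best + F*(r1 - r2) + F*(r3 - r4), for minimization."""
    best = min(range(len(pop)), key=fitness.__getitem__)
    r1, r2, r3, r4 = rng.sample([k for k in range(len(pop)) if k != best], 4)
    new = [pop[best][j]
           + F * (pop[r1][j] - pop[r2][j])
           + F * (pop[r3][j] - pop[r4][j])
           for j in range(len(pop[0]))]
    pop.append(new)
    return pop
```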

Population Augmenting Strategy 2. The population size is increased by population-augmenting strategy 2, shown in Algorithm 5, if there is no improvement during the last several generations. This second growing strategy is supposed to initiate renewed exploration when the population is stuck in local optima, so it applies the DE/rand/1 mutation scheme. In theory, the number of individuals added in this step could be defined independently, but in practice we use the same growth rate as that of the population-reducing strategy.
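A corresponding sketch that grows the population with DE/rand/1 mutants; the `growth` parameter mirrors the shared growth rate mentioned above, and the function name is an assumption:

```python
import random

def augment_rand1(pop, F, rng, growth=1):
    """Append `growth` new individuals generated by DE/rand/1 mutation."""
    for _ in range(growth):
        r1, r2, r3 = rng.sample(range(len(pop)), 3)
        pop.append([pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    for j in range(len(pop[0]))])
    return pop
```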

4. Experiments and Results
4.1. Test Functions
The SapsDE algorithm was tested on a benchmark suite consisting of 17 unconstrained single-objective benchmark functions, all of which are minimization problems. The first 8 benchmark functions are chosen from the literature, and the rest are selected from the CEC 2005 Special Session on real-parameter optimization [28]. A detailed description of the functions can be found in [29, 30]. Table 1 presents our test suite, which contains both unimodal and multimodal functions.

4.2. Parameter Settings
SapsDE is compared with three state-of-the-art DE variants (JADE, jDE, and SaDE) and the classic DE with the DE/rand/1/bin strategy. To evaluate the performance of the algorithms, experiments were conducted on the test suite. We adopt the solution error measure f(x_best) − f(x*), where x_best is the best solution obtained by an algorithm in one run and x* is the known global optimum of the benchmark function. The dimensions D of the functions are 30, 50, and 100, respectively. The maximum number of function evaluations (FEs) serves as the termination criterion, and all experiments for each function and each algorithm were run 30 times independently.
In our experiments, we follow the same parameter settings as the original papers of JADE, jDE, and SaDE. For DE/rand/1/bin, the parameters were also studied in [31]. The details are as follows: (1) the original DE algorithm with the DE/rand/1/bin strategy, F = 0.9, CR = 0.9, and a population size proportional to the dimension; (2) JADE, with the settings of [27]; (3) jDE, with the settings of [15]; (4) SaDE, with the settings of [20, 21].
For the SapsDE algorithm, the configuration is as follows: the lower bound Lbound of the population size is set to 50, the initial population size is set to 50, the adjustment factor of the population size is fixed to 1, and the threshold for the boundary and stagnation counters is set to 4.
All experiments were performed on a computer with a Core 2 2.26 GHz CPU, 2 GB of memory, and the Windows XP operating system.
4.3. Comparison with Different DEs
To show how well SapsDE performs, we compared it with the conventional DE and three adaptive DE variants. We first evaluate the performance of the different DEs on the 30-dimensional numerical functions. Table 2 reports the mean and standard deviation of the function error values over 30 independent runs with 300,000 FEs. The best results are typed in bold. For a thorough comparison, a two-tailed t-test with a significance level of 0.05 has been carried out between SapsDE and the other DE variants. Rows "+ (Better)," "= (Same)," and "− (Worse)" give the numbers of functions on which SapsDE performs significantly better than, statistically the same as, and significantly worse than the compared algorithm over 30 runs, respectively. The row "total score" records the numbers of +'s, ='s, and −'s to give an overall comparison between two algorithms. Table 2 presents the total score for every function.
