Abstract

The differential evolution (DE) algorithm is a population-based heuristic global optimization technique that is easy to understand, simple to implement, reliable, and fast. The evolutionary parameters directly influence the performance of the differential evolution algorithm. At present, however, there is no general theory guiding how the control parameters should be adjusted during the evolution process. In this paper, we propose an adaptive parameter adjustment method that dynamically adjusts the control parameters according to the evolution stage. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and faster convergence speed.

1. Introduction

In recent years, intelligent optimization algorithms [1] have come to be considered practical tools for nonlinear optimization problems. The differential evolution algorithm [2, 3] is an evolutionary algorithm related to genetic algorithms, first introduced by Storn and Price in 1997. It is a bio-inspired intelligent algorithm that simulates the mechanism of natural biological evolution. Its main idea is to generate a temporary individual based on the differences between individuals within the population and then recombine the population randomly to drive the evolution. The algorithm has good global convergence and robustness and is well suited to solving a variety of numerical optimization problems, which has quickly made it a hot topic in the optimization field.

Because it is simple in principle and robust, DE has been applied successfully to many kinds of optimization problems such as constrained global optimization [4], image classification [5], neural networks [6], linear antenna arrays [7], monopole antennas [8], image segmentation [9], and other areas [10–14].

However, the DE algorithm can easily fall into a local optimum when dealing with multimodal functions and optimization problems with a large search space. To improve the optimization performance of DE, many scholars have proposed control parameter methods [15, 16]. Although these methods can improve the performance of the standard DE to some extent, they still cannot obtain satisfactory results on some functions. In this paper, we propose an adaptive parameter adjustment method that follows the evolution stage.

This paper is organized as follows. Related work is described in Section 2. The background of DE is presented in Section 3. The improved algorithm is presented in Section 4. Experimental tests and results are given in Section 5. Section 6 concludes the paper.

2. Related Work

The DE algorithm has a few control parameters. These parameters have a great impact on the performance of the algorithm, such as the quality of the optimal value and the convergence rate, and there is still no good way to determine them. To deal with this problem, researchers have made several attempts. Gamperle et al. [17] reported that choosing the control parameters of DE is more difficult than expected. Liu and Lampinen [18] reported that the performance of the DE algorithm is sensitive to the parameter values and that different test functions require different parameter settings.

At present, there are mainly three ways to set the parameters:
(1) deterministic parameter setting: the parameters are mainly set by experience, for example, kept at fixed values throughout the entire evolutionary process;
(2) adaptive parameter setting: some heuristic rules are used to modify the parameter values according to the current state of the search;
(3) self-adaptive parameter setting: the idea of "evolution of the evolution" is used to implement the self-adaptive parameter setting.

Liu and Lampinen [19] proposed a fuzzy adaptive parameter setting method that can change the parameters dynamically. Their experiments show that convergence is much faster than with the traditional DE algorithm when F and CR are adapted. In [20], self-adapting control parameters for DE are proposed; the results show that the improved algorithm is better than, or at least comparable to, the standard DE algorithm.

3. Introduction to DE

Compared with other evolutionary algorithms, DE retains a population-based global search strategy and uses a simple differential mutation operation together with one-to-one greedy competition, which reduces the complexity of the genetic operations. At the same time, the memory capacity of DE enables it to track the current search dynamically and adjust its search strategy, giving it strong global convergence and robustness. It is therefore suitable for solving complex optimization problems. Basic operations such as selection, crossover, and mutation are the basis of the differential evolution algorithm.

In an iterative process, the population of each generation contains NP individuals. Suppose that the $i$-th individual of generation $G$ is represented as
$$x_{i,G} = \left(x_{i,G}^{1}, x_{i,G}^{2}, \ldots, x_{i,G}^{D}\right), \quad i = 1, 2, \ldots, NP, \tag{1}$$
where $D$ is the dimension of the solution space.
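As an illustration of this representation, the following minimal NumPy sketch initializes a population of NP individuals of dimension D uniformly at random inside box bounds; the function name and box-constrained setup are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def init_population(NP, D, lower, upper, rng=None):
    """Initialize NP individuals of dimension D uniformly inside [lower, upper]."""
    rng = np.random.default_rng() if rng is None else rng
    # Each row is one individual x_{i,G} = (x_{i,G}^1, ..., x_{i,G}^D).
    return rng.uniform(lower, upper, size=(NP, D))
```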

3.1. Mutation Operation

For each target individual $x_{i,G}$, a mutant individual can be generated by the following formula:
$$v_{i,G+1} = x_{r_{1},G} + F \cdot \left(x_{r_{2},G} - x_{r_{3},G}\right). \tag{2}$$

Here $r_{1}$, $r_{2}$, and $r_{3}$ are mutually different random indices generated within the interval $[1, NP]$ and also different from $i$, and the variation (scale) factor $F$ is a real number in the interval $[0, 2]$; it controls the amplification degree of the differential variable $\left(x_{r_{2},G} - x_{r_{3},G}\right)$.
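A minimal sketch of the DE/rand/1 mutation in formula (2), assuming a NumPy population array with one individual per row; the helper name and arguments are illustrative.

```python
import numpy as np

def mutate(pop, i, F, rng):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 != i and mutually distinct."""
    NP = pop.shape[0]
    candidates = [r for r in range(NP) if r != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])
```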

3.2. Crossover Operation

In the differential evolution algorithm, the crossover operation is introduced to increase the diversity of the new population. According to the crossover strategy, the old and new individuals exchange part of their components to form a trial individual. The new individual can be represented as follows:
$$u_{i,G+1} = \left(u_{i,G+1}^{1}, u_{i,G+1}^{2}, \ldots, u_{i,G+1}^{D}\right), \tag{3}$$
where
$$u_{i,G+1}^{j} = \begin{cases} v_{i,G+1}^{j}, & \text{if } \operatorname{rand}(j) \le CR \text{ or } j = \operatorname{rnbr}(i), \\ x_{i,G}^{j}, & \text{otherwise}, \end{cases} \quad j = 1, 2, \ldots, D, \tag{4}$$
where $\operatorname{rand}(j)$ is uniformly distributed in the interval $[0, 1]$, CR is the crossover probability in the interval $[0, 1]$, and $\operatorname{rnbr}(i)$ is a random integer between $1$ and $D$, which guarantees that at least one component of the trial individual comes from the mutant individual.
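The binomial crossover of formula (4) can be sketched as follows; the forced position plays the role of $\operatorname{rnbr}(i)$ and guarantees at least one component is taken from the mutant individual.

```python
import numpy as np

def crossover(x, v, CR, rng):
    """Binomial crossover of target x and mutant v with crossover probability CR."""
    D = x.shape[0]
    rnbr = rng.integers(D)          # forced crossover position
    mask = rng.random(D) <= CR      # rand(j) <= CR, component by component
    mask[rnbr] = True               # at least one component comes from v
    return np.where(mask, v, x)
```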

3.3. Selection Operation

The selection operation is a greedy strategy; the candidate individual generated by the mutation and crossover operations competes with the target individual:
$$x_{i,G+1} = \begin{cases} u_{i,G+1}, & \text{if } f\left(u_{i,G+1}\right) \le f\left(x_{i,G}\right), \\ x_{i,G}, & \text{otherwise}, \end{cases} \tag{5}$$
where $f$ is the fitness function (minimization is assumed).
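The greedy one-to-one selection of formula (5) reduces to a short helper (again assuming minimization); for example, `select(x, u, lambda z: sum(z**2))` keeps whichever of the two vectors has the smaller Sphere value.

```python
def select(x, u, f):
    """Keep the trial individual u if it is at least as good as the target x."""
    return u if f(u) <= f(x) else x
```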

The basic differential evolution (DE) algorithm is shown as Algorithm 1.

Algorithm 1 (the differential evolution algorithm). Initialize the population size NP, the maximum number of generations MAXITER, the scale factor $F$, and the crossover factor CR.
Initialize the population pop.
Repeat: following the DE/rand/1/bin strategy, produce a new generation of individuals by (a) the mutation operation; (b) the crossover operation; (c) the selection operation.
Until the termination criterion is met.
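A compact, self-contained sketch of the DE/rand/1/bin loop of Algorithm 1, assuming minimization over a box-constrained search space; the function name, default parameter values, and clipping of mutants to the bounds are illustrative choices, not the paper's implementation.

```python
import numpy as np

def de_rand_1_bin(f, D, lower, upper, NP=50, F=0.5, CR=0.9, max_iter=1000, seed=0):
    """Basic DE/rand/1/bin for minimizing f over the box [lower, upper]^D."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(NP, D))
    fit = np.array([f(x) for x in pop])
    for _ in range(max_iter):
        for i in range(NP):
            # Mutation: DE/rand/1 (clipped to the box as a simple bound handling choice).
            r1, r2, r3 = rng.choice([r for r in range(NP) if r != i], size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lower, upper)
            # Crossover: binomial, with one forced component from the mutant.
            mask = rng.random(D) <= CR
            mask[rng.integers(D)] = True
            u = np.where(mask, v, pop[i])
            # Selection: greedy one-to-one competition.
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

For example, `de_rand_1_bin(lambda x: float(np.sum(x**2)), D=30, lower=-100, upper=100)` minimizes the 30-dimensional Sphere function.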

The flow chart of the differential evolution algorithm is shown in Figure 1.

4. The Adaptive Control Parameter Adjustment Method (ADE)

From the standard DE algorithm, it is known that the scale factor $F$ and the crossover factor CR not only affect the convergence speed of the algorithm but may also lead to premature convergence. In this paper, we propose an adaptive adjustment method that follows the evolution stage.

We use a sine function (1/4 cycle) to set the value of the scale factor $F$ and a cosine function (1/4 cycle) to set the value of the crossover factor CR. The curves of the two functions change slowly at the beginning and at the end and change rapidly, with a gradual increase, in the middle. This behavior is very suitable for setting the $F$ value and the CR value: in the early stage and the late stage, the scale factor $F$ and the crossover factor CR are relatively small, while they increase relatively fast in the middle stage, which meets the needs of the global search of DE. In the resulting update formulas (6) and (7), $a$ and $b$ are constants whose values are set empirically in the experiments, MAXITER is the maximum number of iterations, and $t$ is the current iteration number.
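Since the exact forms and constants of formulas (6) and (7) are not reproduced here, the following is only a hedged sketch of one possible quarter-cycle sine/cosine schedule of the kind described above; the function name, the constants `a` and `b`, and the precise functional form are assumptions for illustration.

```python
import math

def adaptive_F_CR(t, max_iter, a=0.4, b=0.5):
    """Illustrative quarter-cycle schedules for F and CR (assumed form, not the paper's exact formulas).

    t grows from 0 to max_iter; both parameters start at a and increase toward a + b,
    the sine term changing fastest early and the cosine-based term fastest late.
    """
    ratio = t / max_iter
    F = a + b * math.sin(0.5 * math.pi * ratio)            # quarter-cycle sine schedule
    CR = a + b * (1.0 - math.cos(0.5 * math.pi * ratio))   # quarter-cycle cosine schedule
    return F, CR
```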

The procedure for implementing the ADE is given by the following steps.

Algorithm 2 (the improved differential evolution algorithm). Initialize the population size NP, the maximum number of generations MAXITER, the scale factor $F$, and the crossover factor CR.
Initialize the population pop.
Repeat: update the scale factor $F$ of each individual according to formula (6).
Update the crossover factor CR of each individual according to formula (7).
Perform the mutation, crossover, and selection operations to produce a new generation of individuals.
Until the termination criterion is met.
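A sketch of how Algorithm 2 differs from Algorithm 1: $F$ and CR are recomputed from the current generation number before the usual mutation, crossover, and selection steps. The schedule inside the loop reuses the assumed quarter-cycle form sketched above and is illustrative only, not the paper's exact implementation.

```python
import math
import numpy as np

def ade(f, D, lower, upper, NP=50, max_iter=1000, a=0.4, b=0.5, seed=0):
    """Illustrative ADE loop: F and CR follow assumed quarter-cycle schedules of the iteration ratio."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(NP, D))
    fit = np.array([f(x) for x in pop])
    for t in range(max_iter):
        ratio = t / max_iter
        F = a + b * math.sin(0.5 * math.pi * ratio)            # stand-in for formula (6)
        CR = a + b * (1.0 - math.cos(0.5 * math.pi * ratio))   # stand-in for formula (7)
        for i in range(NP):
            r1, r2, r3 = rng.choice([r for r in range(NP) if r != i], size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lower, upper)
            mask = rng.random(D) <= CR
            mask[rng.integers(D)] = True
            u = np.where(mask, v, pop[i])
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```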

5. Experimental Results

A set of unconstrained real-valued benchmark functions, shown in Table 1, was used to investigate the effect of the improved algorithm.
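For reference, the commonly used definitions of the five benchmark functions named in the results (Sphere, Rastrigin, Griewank, Ackley, and Shaffer) are sketched below; the exact dimensions, bounds, and variants of Table 1 are not reproduced here, so these standard forms are an assumption.

```python
import numpy as np

def sphere(x):
    return np.sum(x**2)

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def griewank(x):
    i = np.arange(1, x.size + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

def ackley(x):
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def schaffer_f6(x):
    # Commonly defined on two variables, with global minimum 0 at the origin.
    s = x[0]**2 + x[1]**2
    return 0.5 + (np.sin(np.sqrt(s))**2 - 0.5) / (1 + 0.001 * s)**2
```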

The results are shown in Table 2. Each reported value is the average over 10 repetitions. We set the scale factor $F$ and the crossover factor CR to fixed values for the standard DE algorithm and dynamically adjusted $F$ and CR according to the evolution stage for the ADE algorithm.

From Table 2, we can see that neither algorithm performs better than the other on all five functions, but on average, ADE is better than the DE algorithm.

For the Sphere, Rastrigin, and Griewank functions, the ADE algorithm effectively improves the accuracy: the optimal value obtained is much closer to the theoretical one than that of the standard DE algorithm. The Ackley function is a multimodal function; from the iteration results, the accuracy of the improved algorithm is not as good as that of the standard DE algorithm, but the difference is small and acceptable. For the Shaffer function, neither algorithm is clearly superior.

For all five functions, there is a significant improvement in convergence time, as expected. These experimental results show that the improved algorithm can effectively increase the convergence speed while maintaining a good convergence effect.

The convergence curves of the two methods are compared in Figures 2, 3, 4, 5, and 6. The experimental results show that the ADE algorithm obtains better results: compared with DE, the ADE algorithm has both strong global search ability and fast convergence speed.

6. Conclusion

The scale factor $F$ and the crossover factor CR have a great impact on the performance of the algorithm, such as the quality of the optimal value and the convergence rate, and there is still no good way to determine these parameters. In this paper, we proposed an adaptive parameter adjustment method based on the evolution stage. From the experiments described above, we can see that the improved algorithm has more powerful global exploration ability and faster convergence speed and can be applied to other optimization tasks.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 61005052), the Fundamental Research Funds for the Central Universities (Grant no. 2010121068), and the Science and Technology Project of Quanzhou (Grant no. 2012Z91).