Abstract

Particle swarm optimization (PSO) and differential evolution (DE) are efficient and powerful population-based stochastic search techniques for solving optimization problems, and both have been widely applied in many scientific and engineering fields. Unfortunately, both of them can easily become trapped in local optima and lack the ability to escape from them. In this paper, a novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by developing a balanced parameter between PSO and DE. Adaptive mutation is carried out on the current population when the population clusters around a local optimum. HPSO-DE enjoys the advantages of both PSO and DE and maintains the diversity of the population. Compared with PSO, DE, and their variants, the performance of HPSO-DE is competitive. The sensitivity of the balanced parameter is discussed in detail.

1. Introduction

Evolutionary algorithms (EAs), inspired by the natural evolution of species, have been successfully applied to solve numerous optimization problems in diverse fields [1]. Particle swarm optimization (PSO) and differential evolution (DE) are two stochastic, population-based EAs for optimization [2].

PSO was introduced by Kennedy and Eberhart in 1995 [3, 4]. PSO uses a simple mechanism that mimics the swarm behavior of bird flocking and fish schooling to guide the particles toward globally optimal solutions. Because PSO is easy to implement, it has progressed rapidly in recent years, with many successful applications to real-world optimization problems. During the last decade, the PSO algorithm has attracted much attention in various areas [5–19] due to its effectiveness in handling optimization problems. Unfortunately, the PSO algorithm suffers from premature convergence, which is especially evident on complex optimization problems. In [17–19], methods for tuning parameters, including inertia weights and acceleration coefficients, have been proposed to enhance PSO's search performance. A comprehensive learning PSO (CLPSO) algorithm was proposed in [14], which shows its superiority in dealing with multimodal functions.

DE is a simple yet powerful EA for global optimization introduced by Storn and Price [20]. The DE algorithm has gradually become more popular and has been used in many practical cases, mainly because it has demonstrated good convergence properties and is easy to understand [21]. DE has been successfully applied in diverse fields of engineering [22–26]. The performance of the conventional DE algorithm depends highly on the chosen trial vector generation strategy and the associated parameter values. An inappropriate choice of strategies and parameters may lead to premature convergence, as has been extensively demonstrated in [27]. In the past decade, DE researchers have suggested many empirical guidelines for choosing trial vector generation strategies and their associated control parameter settings [1, 28–31].

Although DE and PSO have been successfully applied to a wide range of problems, including test and real-life problems, both have certain shortcomings that sometimes deteriorate their performance. The major problem is a lack of diversity, resulting in suboptimal solutions or a slow convergence rate [32]. One class of modifications designed to improve the performance of these algorithms is hybridization, where the two algorithms are combined to form a new one. In [33], DE is applied to each particle for a finite number of iterations to determine the best particle, which is then included in the population. In [34], DE is applied to the best particle obtained by PSO. A hybrid version of PSO and DE named Barebones DE is proposed in [35]. In [36], each candidate solution is evolved either by DE or by PSO according to a fixed probability distribution. In [32], a hybrid metaheuristic is designed so as to preserve the strengths of both algorithms.

However, it is worth mentioning that, in almost all the hybrid works mentioned above, either the convergence rate is not fast enough or the global search performance is not satisfactory. To address this, a novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by developing a balanced parameter between PSO and DE. Adaptive mutation is carried out on the current population when the population clusters around a local optimum. HPSO-DE enjoys the advantages of both PSO and DE and maintains the diversity of the population. With these mechanisms, HPSO-DE has the ability to jump out of local optima.

2. PSO and DE

2.1. PSO

In the standard PSO, a swarm consists of NP individuals (called particles) that fly around in a D-dimensional search space. The position of the i-th particle at the t-th iteration is used to evaluate the particle and represents a candidate solution for the optimization problem. It can be written as X_i^t = (x_{i1}^t, x_{i2}^t, ..., x_{iD}^t), where x_{id}^t is the position value of the i-th particle with respect to the d-th dimension (d = 1, 2, ..., D). During the search process, the position of a particle is guided by two factors: the best position visited by itself (pbest), denoted by P_i = (p_{i1}, p_{i2}, ..., p_{iD}), and the position of the best particle found so far in the swarm (gbest), denoted by P_g = (p_{g1}, p_{g2}, ..., p_{gD}). The new velocity v_{id}^{t+1} and position of particle i at the next iteration are calculated according to

    v_{id}^{t+1} = ω v_{id}^t + c_1 r_1 (p_{id} - x_{id}^t) + c_2 r_2 (p_{gd} - x_{id}^t),    (1)
    x_{id}^{t+1} = x_{id}^t + v_{id}^{t+1},    (2)

where ω is the inertia weight, c_1 and c_2 are, respectively, the cognitive and social learning parameters, and r_1 and r_2 are random numbers uniformly distributed in (0, 1). Based on the above equations, the particle can fly through the search space toward pbest and gbest in a navigated way [16, 17].
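As an illustration, the following minimal NumPy sketch implements the velocity update (1) and position update (2); the function name pso_step and the default parameter values are our assumptions, not settings from this paper.

    import numpy as np

    def pso_step(X, V, P, Pg, w=0.7, c1=1.5, c2=1.5, rng=None):
        # One PSO iteration over the whole swarm.
        # X : (NP, D) current positions      V : (NP, D) velocities
        # P : (NP, D) personal bests (pbest) Pg: (D,) global best (gbest)
        rng = np.random.default_rng() if rng is None else rng
        r1 = rng.random(X.shape)   # cognitive random factors, cf. (1)
        r2 = rng.random(X.shape)   # social random factors, cf. (1)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (Pg - X)   # (1)
        X = X + V                                            # (2)
        return X, V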

2.2. PSO Variants
2.2.1. PSO-w

In the PSO algorithm, proper control of global exploration and local exploitation is an important issue. In general, higher values of the inertia weight help to explore the search space more thoroughly and benefit the global search, while lower values help the local search around the current search region. The major concern of this linear PSO is to avoid premature convergence in the early period of the search and to enhance convergence to the globally optimal solution during the latter period of the search. The concept of a linearly decreasing inertia weight was introduced in [17] and is given by

    ω = ω_max - (ω_max - ω_min) t / t_max,    (3)

where t is the current iteration number and t_max is the maximum number of iterations. Usually the value of ω decreases from 0.9 to 0.4. Therefore, the particle uses a larger inertia weight during the initial exploration and gradually reduces its value as the search proceeds in further iterations. According to the research in [37], the inertia weight can also be adjusted by (4); this nonlinear descent can achieve faster convergence than the linear inertia weight:

    ω = (ω_initial - ω_final) ((t_max - t) / t_max)^n + ω_final,    (4)

where ω_initial and ω_final are the initial and final inertia weights and n is a nonlinear modulation index.
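Both schedules can be sketched as follows; the exponent n of the nonlinear variant and its default value are our assumption, since (4) is reconstructed here from the surrounding text rather than taken verbatim from [37].

    def inertia_linear(t, t_max, w_max=0.9, w_min=0.4):
        # Linearly decreasing inertia weight, cf. (3).
        return w_max - (w_max - w_min) * t / t_max

    def inertia_nonlinear(t, t_max, w_init=0.9, w_final=0.4, n=1.2):
        # Nonlinearly descending inertia weight, cf. (4); the
        # modulation index n = 1.2 is an illustrative choice.
        return (w_init - w_final) * ((t_max - t) / t_max) ** n + w_final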

2.2.2. PSO-TVAC

Although linear PSO can locate a satisfactory solution at a markedly fast speed, its ability to fine-tune the optimal solution is limited, mainly due to the lack of diversity at the later stage of the evolution process. In population-based optimization methods, the guideline is to encourage the individuals to roam through the entire search space during the early period of the search, without clustering around local optima. During the later period, convergence toward the global optimum is encouraged [3]. With this in mind, a strategy in which time-varying acceleration coefficients change with time was proposed [16, 38]. With a large cognitive component and a small social component at the beginning, particles are allowed to move around the search space instead of moving toward the population best. On the other hand, a small cognitive component and a large social component allow the particles to converge to the global optimum in the latter part of the optimization. This approach is referred to as PSO-TVAC. The modification can be mathematically represented as follows:

    c_1 = (c_{1f} - c_{1i}) t / t_max + c_{1i},
    c_2 = (c_{2f} - c_{2i}) t / t_max + c_{2i},    (5)

where c_{1i}, c_{1f}, c_{2i}, and c_{2f} are the initial and final values of the cognitive and social acceleration factors, respectively; usually c_1 is decreased from 2.5 to 0.5 while c_2 is increased from 0.5 to 2.5 [38].
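A minimal sketch of the TVAC schedule (5); the boundary values follow the commonly cited settings from [38].

    def tvac_coefficients(t, t_max, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
        # Time-varying acceleration coefficients, cf. (5): the cognitive
        # term shrinks while the social term grows as t approaches t_max.
        c1 = (c1f - c1i) * t / t_max + c1i
        c2 = (c2f - c2i) * t / t_max + c2i
        return c1, c2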

2.3. DE

DE was proposed by Storn and Price [20]. It is an effective, robust, and simple global optimization algorithm. According to frequently reported experimental studies, DE has shown better performance than many other EAs in terms of convergence speed and robustness over several benchmark functions and real-world problems [39].

In DE, there are three operators: mutation, crossover, and selection. Initially, a population is generated randomly with uniform distribution; then the mutation, crossover, and selection operators are applied to generate a new population. Trial vector generation is a crucial step in the DE process. The two operators, mutation and crossover, are used to generate the trial vectors, while the selection operator is used to select the best trial vector for the next generation. The initialization and DE operators are briefly explained as follows [40].

DE starts with a population of NP D-dimensional candidate solutions, which may be represented as X_{i,G} = (x_{i,G}^1, x_{i,G}^2, ..., x_{i,G}^D), i = 1, 2, ..., NP, where the index i denotes the i-th individual of the population, G denotes the generation to which the population belongs, and D is the dimension of the problem.

The initial population should cover the entire search space as much as possible by uniformly randomizing individuals within the search space constrained by the minimum and maximum bounds X_min = (x_min^1, ..., x_min^D) and X_max = (x_max^1, ..., x_max^D). Thus, the initial population can be described as follows:

    x_{i,0}^j = x_min^j + rand(0, 1) (x_max^j - x_min^j),  j = 1, 2, ..., D,    (6)

where rand(0, 1) is a uniformly distributed random variable in [0, 1] [40, 44].

(1)  Mutation. After initialization, DE utilizes the mutation operation to generate a mutant vector V_{i,G} with respect to each individual X_{i,G} in the current population. V_{i,G} can be produced by a chosen mutation strategy. For example, the five most frequently used mutation strategies in DE are listed as follows [1]:

    DE/rand/1:          V_{i,G} = X_{r1,G} + F (X_{r2,G} - X_{r3,G}),    (7)
    DE/best/1:          V_{i,G} = X_{best,G} + F (X_{r1,G} - X_{r2,G}),    (8)
    DE/rand-to-best/1:  V_{i,G} = X_{i,G} + F (X_{best,G} - X_{i,G}) + F (X_{r1,G} - X_{r2,G}),    (9)
    DE/best/2:          V_{i,G} = X_{best,G} + F (X_{r1,G} - X_{r2,G}) + F (X_{r3,G} - X_{r4,G}),    (10)
    DE/rand/2:          V_{i,G} = X_{r1,G} + F (X_{r2,G} - X_{r3,G}) + F (X_{r4,G} - X_{r5,G}),    (11)

where X_{best,G} is the best individual of generation G.

The indices r1, r2, r3, r4, and r5 are mutually exclusive integers randomly generated within the range [1, NP], which are also different from the index i [1]. F is the mutation scale factor, which controls the amplification of the differential variation [40].
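A minimal sketch of the DE/rand/1 strategy (7), including the sampling of the mutually exclusive indices; the helper name de_rand_1 is ours.

    import numpy as np

    def de_rand_1(pop, i, F=0.5, rng=None):
        # DE/rand/1 mutation, cf. (7): V = X_r1 + F (X_r2 - X_r3),
        # with r1, r2, r3 mutually exclusive and different from i.
        rng = np.random.default_rng() if rng is None else rng
        candidates = [j for j in range(len(pop)) if j != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        return pop[r1] + F * (pop[r2] - pop[r3])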

(2)  Crossover. The crossover operation is introduced to increase the diversity of the target vectors. After the mutation phase, the crossover operation is applied to X_{i,G} and V_{i,G} to generate a trial vector U_{i,G} = (u_{i,G}^1, ..., u_{i,G}^D) as follows:

    u_{i,G}^j = v_{i,G}^j,  if rand_j(0, 1) ≤ CR or j = j_rand,
    u_{i,G}^j = x_{i,G}^j,  otherwise,    (12)

where CR ∈ [0, 1] is the crossover constant, which has to be determined by the user, and j_rand ∈ {1, 2, ..., D} is a randomly chosen index which ensures that the trial vector U_{i,G} differs from X_{i,G} by at least one parameter.
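A sketch of the binomial crossover in (12); the helper name is ours.

    import numpy as np

    def binomial_crossover(x, v, CR=0.9, rng=None):
        # Binomial crossover, cf. (12): take each component from the
        # mutant v with probability CR; the forced index j_rand ensures
        # at least one component of the trial vector comes from v.
        rng = np.random.default_rng() if rng is None else rng
        mask = rng.random(x.size) <= CR
        mask[rng.integers(x.size)] = True   # j_rand
        return np.where(mask, v, x)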

(3)  Selection. If the generated trial vector exceeds the corresponding upper or lower bound, we randomly and uniformly reinitialize it within the search range. Then the fitness values of all trial vectors are evaluated [45].

After that, a greedy selection scheme is employed:

    X_{i,G+1} = U_{i,G},  if f(U_{i,G}) ≤ f(X_{i,G}),
    X_{i,G+1} = X_{i,G},  otherwise.    (13)

If the trial vector U_{i,G} yields a better cost function value than X_{i,G}, then U_{i,G} will replace X_{i,G} and enter the population of the next generation; otherwise, the old vector is retained.
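The greedy selection (13) in a short sketch, assuming minimization; the helper name is ours.

    def greedy_select(x, u, f):
        # Keep the trial u only if it is at least as good as the target x.
        fx, fu = f(x), f(u)
        return (u, fu) if fu <= fx else (x, fx)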

2.4. DE Variants

In order to improve the performance of DE, several adaptive DE variants have been proposed. jDE was proposed based on the self-adaptation of the scale factor F and the crossover rate CR [29]. In jDE, each individual is represented by a (D+2)-dimensional vector (x_{i,G}^1, ..., x_{i,G}^D, F_{i,G}, CR_{i,G}). The new control parameters F_{i,G+1} and CR_{i,G+1} are calculated as

    F_{i,G+1}  = F_l + rand_1 F_u,  if rand_2 < τ_1;  F_{i,G} otherwise,
    CR_{i,G+1} = rand_3,            if rand_4 < τ_2;  CR_{i,G} otherwise,    (14)

and they produce the factors F and CR in the new parent vector. rand_j, j ∈ {1, 2, 3, 4}, are uniform random values within the range [0, 1]. τ_1 and τ_2 represent the probabilities to adjust the factors F and CR, respectively. Generally, τ_1 = τ_2 = 0.1, F_l = 0.1, and F_u = 0.9. SaDE made the first attempt to simultaneously adopt more than one mutation scheme in DE [1]. Its main purpose is to reduce the problem-solving risk by distributing the available computational resources to multiple search techniques with different biases. SaNSDE can be regarded as an improved version of SaDE. Its mutation is executed in the same way as in SaDE except that only two mutation schemes are used, and the scale factor in the adopted mutation schemes is generated according to either a Gaussian distribution or a Cauchy distribution [30]. JADE is another recent DE variant, in which a new mutation scheme named "DE/current-to-pbest" is adopted [31].
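A sketch of the jDE self-adaptation rule (14), with the usual settings τ_1 = τ_2 = 0.1, F_l = 0.1, F_u = 0.9 from [29]; the helper name is ours.

    import numpy as np

    def jde_update(F_i, CR_i, tau1=0.1, tau2=0.1, F_l=0.1, F_u=0.9, rng=None):
        # Self-adapt F and CR for one individual before mutation, cf. (14).
        rng = np.random.default_rng() if rng is None else rng
        F_new = F_l + rng.random() * F_u if rng.random() < tau1 else F_i
        CR_new = rng.random() if rng.random() < tau2 else CR_i
        return F_new, CR_new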

3. Proposed HPSO-DE

Similar to other EAs, both PSO and DE are population-based iterative algorithms. PSO and DE can easily get trapped in local optima when solving complex multimodal problems, and these weaknesses have restricted their wider application. Therefore, avoiding local optima has become one of the most important and appealing goals in algorithm research. To achieve this goal, an adaptive hybrid algorithm based on PSO and DE (HPSO-DE) for global optimization is proposed. The algorithm focuses on the convergence of the population. When a particle discovers a currently optimal position, the other particles draw together toward it. If that position is a local optimum, the algorithm will converge and cluster around it, and premature convergence may occur. Suppose that the population size of HPSO-DE is NP, the fitness value of the i-th particle is f_i, and the average fitness value is f_avg. The convergence degree d is defined as follows:

    d = (1/NP) Σ_{i=1}^{NP} (f_i - f_avg)^2.    (15)

The parameter d reflects the convergence degree. When d is large, the algorithm is still performing a random search; when d is small, the algorithm may fall into a local optimum and premature convergence may occur. Based on the parameter d, the mutation probability p_m is determined as follows:

    p_m = k,  if d < σ,
    p_m = 0,  otherwise,    (16)

where σ is the convergence evaluation threshold.

Generally, σ takes a small positive value. If the parameter d is less than σ, the mutation probability p_m is equal to k. The mutation operations for PSO and DE are, respectively,

    P_g' = P_g (1 + 0.5 η),    (17)
    X_{best,G}' = X_{best,G} (1 + 0.5 η),    (18)

where the parameter η obeys a Gaussian distribution.
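A sketch of the convergence check (15) and the adaptive mutation (17)/(18); note that the variance form of d and the 0.5-scaled Gaussian perturbation are our reading of the paper's elided formulas.

    import numpy as np

    def convergence_degree(fitness):
        # Convergence degree d, cf. (15): variance of population fitness;
        # a small d means the population has clustered.
        return float(np.mean((fitness - np.mean(fitness)) ** 2))

    def adaptive_mutation(best, rng=None):
        # Gaussian perturbation of the best position, cf. (17)/(18).
        rng = np.random.default_rng() if rng is None else rng
        return best * (1.0 + 0.5 * rng.standard_normal(best.shape))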

This strategy makes HPSO-DE enjoy the advantages of the two algorithms and maintain the diversity of the population. With these mechanisms, HPSO-DE has the ability to jump out of local optima. The main procedure of HPSO-DE is presented in Algorithm 1.

Step  1. Initialize parameters: PSO-Flag=false; DE-Flag=false; the mutation probability of PSO: p_m1;
    the mutation probability of DE: p_m2; the balanced parameter between PSO and DE: p.
    Convergence evaluation parameter σ.
Step  2. Set the generation counter G = 0 and randomly initialize a population of NP individuals
     {X_{1,0}, ..., X_{NP,0}} with X_{i,0} = (x_{i,0}^1, ..., x_{i,0}^D), uniformly distributed in the range
     [X_min, X_max], where X_min = (x_min^1, ..., x_min^D) and X_max = (x_max^1, ..., x_max^D).
Step  3. Evaluate the population and identify the best position X_{best,G}.
Step  4. While stopping criterion is not satisfied
    Generate Random number rand in (0, 1).
    If rand < p
           Set DE-Flag=true; %%Activate jDE for current population
           for i = 1 to the NP do
             Update F_i and CR_i according to (14);
             Generate the mutant vector V_{i,G} using (7)
             Generate the trial vector U_{i,G} by (12)
         End for
    Else
       Set PSO-Flag=true; %%Activate PSO for current population
       Update ω using (4)
       for i = 1 to the NP do
         Update particle Velocity and Position according to (1) and (2), respectively.
         Set the values of the position X_i to the trial vector U_{i,G}.
       End for
    End if
    Step  4.1. Randomly reinitialize the trial vector U_{i,G} within the search
        space if any variable is outside its boundaries.
    Step  4.2. Selection
        for i = 1 to NP
             Evaluate the trial vector U_{i,G}
                  If f(U_{i,G}) ≤ f(X_{i,G})
                    X_{i,G+1} = U_{i,G}
                    f(X_{i,G+1}) = f(U_{i,G})
                       If f(U_{i,G}) < f(X_{best,G})
                        X_{best,G} = U_{i,G}
                        f(X_{best,G}) = f(U_{i,G})
                      End if
                 End if
       End for
    Step  4.3. Calculate parameter d using (15)
        If parameter d meets the requirement of (16)
           Generate a random number rand in (0, 1).
           If rand is less than p_m1 and PSO-Flag is true
            Update P_g using (17)
           End if
           If rand is less than p_m2 and DE-Flag is true
              Update X_{best,G} using (18)
           End if
        End if
    Step  4.4. Increment the generation count G = G + 1;
       Reset PSO-Flag=false; DE-Flag=false;
Step  5. End while
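To make Algorithm 1 concrete, the following self-contained sketch wires the pieces together on a simple sphere function. All numeric settings here (p, p_m1, p_m2, σ, the inertia schedule, c_1 = c_2 = 1.5) are illustrative assumptions rather than the paper's tuned values, and a single best tracker Pg stands in for both gbest and X_{best,G}.

    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))

    def hpso_de(f=sphere, D=30, NP=40, G_max=1000, p=0.1,
                p_m1=0.3, p_m2=0.3, sigma=1e-12,
                x_min=-100.0, x_max=100.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(x_min, x_max, (NP, D))         # Step 2: initialize
        V = np.zeros((NP, D))                          # PSO velocities
        fit = np.array([f(x) for x in X])              # Step 3: evaluate
        P, pfit = X.copy(), fit.copy()                 # personal bests
        g = int(np.argmin(fit))
        Pg, gfit = X[g].copy(), fit[g]                 # global / population best
        F, CR = np.full(NP, 0.5), np.full(NP, 0.9)     # jDE parameters
        for G in range(G_max):                         # Step 4
            pso_used = de_used = False
            U = np.empty_like(X)
            if rng.random() < p:                       # jDE branch
                de_used = True
                for i in range(NP):
                    if rng.random() < 0.1:             # (14): self-adapt F
                        F[i] = 0.1 + 0.9 * rng.random()
                    if rng.random() < 0.1:             # (14): self-adapt CR
                        CR[i] = rng.random()
                    r = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
                    v = X[r[0]] + F[i] * (X[r[1]] - X[r[2]])     # (7)
                    mask = rng.random(D) <= CR[i]                # (12)
                    mask[rng.integers(D)] = True
                    U[i] = np.where(mask, v, X[i])
            else:                                      # PSO branch
                pso_used = True
                w = 0.9 - 0.5 * G / G_max              # inertia, cf. (3)
                r1, r2 = rng.random((NP, D)), rng.random((NP, D))
                V = w * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (Pg - X)   # (1)
                U = X + V                                              # (2)
            out = (U < x_min) | (U > x_max)            # Step 4.1: repair bounds
            U[out] = rng.uniform(x_min, x_max, int(np.count_nonzero(out)))
            for i in range(NP):                        # Step 4.2: selection (13)
                fu = f(U[i])
                if fu <= fit[i]:
                    X[i], fit[i] = U[i], fu
                    if fu < gfit:
                        Pg, gfit = U[i].copy(), fu
                if fit[i] < pfit[i]:
                    P[i], pfit[i] = X[i].copy(), fit[i]
            d = float(np.mean((fit - fit.mean()) ** 2))   # Step 4.3: (15)
            if d < sigma:                                 # requirement (16)
                if pso_used and rng.random() < p_m1:
                    Pg = Pg * (1 + 0.5 * rng.standard_normal(D))   # (17)
                    gfit = f(Pg)
                if de_used and rng.random() < p_m2:
                    Pg = Pg * (1 + 0.5 * rng.standard_normal(D))   # (18)
                    gfit = f(Pg)
        return Pg, gfit

For example, best, value = hpso_de() runs the sketch on the 30D sphere function.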

One of the most important parameters of the proposed HPSO-DE is the balanced parameter p. In the following, we analyze this key parameter by comparing the performance of HPSO-DE on several representative functions.

4. Numerical Experiments and Results

4.1. Test Functions

Sixteen benchmark functions are used to test the performance of HPSO-DE and to ensure a fair comparison. If the number of test problems were smaller, it would be very difficult to draw a general conclusion; a test set that is too small also carries the risk that the algorithm becomes biased (optimized) toward the chosen problems, and such bias might not carry over to other problems of interest. The benchmark functions are given in Table 1, which lists the ranges of the variables and the values of the global optima. All functions are high-dimensional problems. The set contains unimodal functions, a noisy quadratic function, multimodal functions in which the number of local minima increases exponentially with the problem dimension [29], and rotated functions. For a rotated function, an orthogonal matrix M is generated to rotate the function: the original variable x is left-multiplied by the orthogonal matrix to obtain the new rotated variable y = M x, which is used to compute the fitness value f(y). When one dimension of x is changed, all dimensions of y are influenced.
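One common way to construct such an orthogonal matrix M, sketched below, is to orthogonalize a random Gaussian matrix by QR decomposition; this particular construction is our assumption, since the paper does not state how M is generated.

    import numpy as np

    def random_rotation(D, rng=None):
        # Random orthogonal matrix via QR decomposition of a Gaussian matrix.
        rng = np.random.default_rng() if rng is None else rng
        Q, R = np.linalg.qr(rng.standard_normal((D, D)))
        return Q * np.sign(np.diag(R))   # sign fix for a uniform rotation

    def rotate(f, M):
        # Wrap a benchmark f so that it is evaluated on y = M x.
        return lambda x: f(M @ x)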

4.2. Algorithms for Comparison

Experiments are conducted on a suite of 16 numerical functions to evaluate seven algorithms, including the proposed HPSO-DE. Each function is tested in its 30-dimensional (30D) version. The maximum number of function evaluations (FEs) is set to 300,000, and the population size is 100. All experiments are run 25 times independently. The seven algorithms in comparison are listed in Table 2.

4.3. Comparisons on the Solution Accuracy

The mean and standard deviation (Std) of the solutions in 25 independent runs are listed in Table 3. The best result among these algorithms is indicated by boldface in the table. Figures 1, 2, and 3 show the comparisons in terms of convergence, mean solutions, and evolution processes in solving 16 benchmark functions.

From Table 3 and Figures 1–3, it is very clear that the proposed hybrid algorithm has a strong ability to jump out of local optima. It can effectively prevent premature convergence and significantly enhance the convergence rate and accuracy. It provides the best performance on the majority of the functions, reaching the highest accuracy on them. jDE ranks second on several of these functions and performs a little better than HPSO-DE on one of them, while PSO-FDR performs best on another.

One can observe that the proposed method can search the optimum and maintain a higher convergence speed. The capabilities of avoiding local optima and finding global optimum of these functions indicate the superiority of HPSO-DE.

4.4. Comparisons on Convergent Rate and Successful Percentage

The convergence rate for achieving the global optimum is another key point for testing algorithm performance. The success of an algorithm means that it reaches a function value no worse than the prespecified optimal value, for all problems, with fewer function evaluations than the prespecified maximum number. The success rate (SR) is calculated as the number of successful runs divided by the total number of runs.

In Table 4, we summarize the SR of each algorithm and the average number of function evaluations over successful runs (FESS). An experiment is considered successful if the best solution is found with the prespecified accuracy.
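For concreteness, the two statistics can be computed as follows; the function and argument names are ours, assuming one final error value and one evaluation count per run.

    import numpy as np

    def success_stats(final_errors, evals_used, tol):
        # SR: fraction of the runs that reach the accuracy tolerance.
        # FESS: mean function evaluations over the successful runs only.
        ok = np.asarray(final_errors) <= tol
        sr = ok.mean()
        fess = np.asarray(evals_used)[ok].mean() if ok.any() else float("nan")
        return sr, fess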

Table 4 shows that HPSO-DE needs the fewest FESS to achieve the acceptable solution on most of the functions, which reveals that the proposed algorithm has a higher convergence rate than the other algorithms. DE and jDE outperform HPSO-DE on two of the functions; SPSO, LPSO, and PSO-TVAC have much worse SR and accuracy than HPSO-DE on the test functions. In addition, HPSO-DE achieves the accepted value with good convergence speed and accuracy on most of the functions, as seen from Figures 1–3 and Table 3.

In summary, HPSO-DE performs best on most of the functions and has good search ability. Owing to the proposed techniques, HPSO-DE possesses fast convergence speed, the highest success rate, and the best search accuracy among these algorithms.

4.5. Parameter Study

The balanced parameter p needs to be chosen appropriately. In this section, we investigate the impact of this parameter on HPSO-DE. The HPSO-DE algorithm is run 25 times on each function with four different values of the balanced parameter: 0, 0.1, 0.2, and 0.3. The influence of the balanced parameter on the accuracy of HPSO-DE is investigated by comparing the optimal values that HPSO-DE obtains under these settings.

Figures 4, 5, and 6 show the box plots of minimal values that HPSO-DE obtains with four different balanced parameters. The box has lines at the lower quartile, median, and upper quartile values. The whiskers are lines extending from each end of the box to show the extent of the remaining data. Outliers are data with values beyond the ends of the whiskers.

From Figures 4–6, one can observe that the accuracy of HPSO-DE is not sensitive to the balanced parameter on most of the functions, with only a few exceptions, when the balanced parameter is between 0 and 0.3.

4.6. Comparison with JADE

The JADE algorithm was tested on a set of standard test functions in [31]. HPSO-DE is compared with JADE on the 30D test functions chosen from [31], with the same parameter settings as in [31]. The maximum numbers of generations are listed in Table 5. The median results of 50 independent runs are summarized in the table (results for JADE are taken from [31]), and they show that the proposed algorithm clearly performs better than the JADE algorithm.

5. Conclusions

In this paper, a novel algorithm, HPSO-DE, is proposed by developing a balanced parameter between PSO and DE. The population is evolved either by DE or by PSO according to the balanced parameter, and adaptive mutation is carried out on the current population when the population clusters around a local optimum. This strategy gives HPSO-DE the advantages of the two algorithms and maintains the diversity of the population. In comparison with PSO, DE, and their variants, the proposed algorithm obtains better quality solutions and finds them more frequently.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), Social Science Foundation of Chinese Ministry of Education (nos. 12YJC630271 and 13YJC630018), Natural Science Fund for Colleges and Universities in Jiangsu Province (no. 13KJB120008), China Natural Science Foundation (nos. 71273139, 71101073 and 71173116) and China Institute of Manufacturing Development (no. SK20130090-15).