Abstract

This paper proposes a hybrid scatter search (SS) algorithm for continuous global optimization problems by incorporating the evolution mechanism of differential evolution (DE) into the reference set update procedure of SS to act as the new solution generation method. This hybrid algorithm is called a DE-based SS (SSDE) algorithm. Since different kinds of DE mutation operators have been proposed in the literature and they have shown different search abilities on different kinds of problems, four traditional mutation operators are adopted in the hybrid SSDE algorithm. To adaptively select the mutation operator most appropriate to the current problem, an adaptive mechanism over the candidate mutation operators is developed. In addition, to enhance the exploration ability of SSDE, a reinitialization method is adopted to create a new population and subsequently construct a new reference set whenever the search process of SSDE is trapped in a local optimum. Computational experiments on benchmark problems show that the proposed SSDE is competitive with or superior to some state-of-the-art algorithms in the literature.

1. Introduction

In recent years, many research efforts have been devoted to the hybridization of different algorithms, because combining the best features of individual algorithms can often result in a powerful and efficient new hybrid algorithm. When designing such hybrid algorithms, one should examine and evaluate the features and search mechanisms of the individual algorithms, as well as their potential combinations.

Scatter search (SS) was first proposed by Glover [1] for integer programming. It is a population-based evolutionary algorithm that combines a set of solutions to construct new solutions. The scatter search template (Glover [2]) has provided the foundation for most implementations of SS to date. After population initialization, SS selects a subset of solutions from the population as the reference set and then operates on the reference set to generate new trial solutions by combining solutions in the reference set. Compared with other evolutionary algorithms such as the genetic algorithm, the memory embodied in the reference set captures the current state of the search and reduces the reliance on purely stochastic moves, which in turn greatly improves the global search ability. In addition, the diversity maintenance mechanism of SS preserves search diversity during evolution. Due to its flexibility and effectiveness, SS has been widely used in tackling combinatorial optimization problems such as graph-based permutation problems (Rego and Aleao [3]), the flow shop scheduling problem (Yamada and Nakano [4], Nowicki and Smutnicki [5]), the vehicle routing problem (Taillard [6], Greistorfer [8], Russell and Chiang [9]), and the quadratic assignment problem (Chung et al. [7]). Although SS has achieved good performance on combinatorial optimization problems, few studies have focused on applying SS to continuous optimization problems. Herrera et al. [10] investigated the performance of continuous SS with two combination methods, namely, BLX-α and the average combination. Ugray et al. [11] proposed a multistart framework for global optimization by embedding SS with a nonlinear programming solver.

Although differential evolution (DE) is a relatively new member of the population-based evolutionary algorithms, it has become one of the most powerful stochastic search algorithms in the literature (Das and Suganthan [12]). Due to its simplicity and ease of implementation, DE has performed well on many single-objective optimization problems (Zhang and Sanderson [13], Qin et al. [14]) and engineering optimization problems (Babu and Angira [15], Zhang and Rangaiah [16]). The main advantage of DE is that it enables solutions to learn from better solutions (e.g., through the DE/best/1 mutation operator) while at the same time maintaining search diversity (e.g., through the DE/rand/1 mutation operator).

Given the power of SS and DE, some researchers have started to investigate their hybridization to construct new, more powerful algorithms. Davendra and Onwubolu [17] and Davendra et al. [18] proposed a hybrid of DE and SS for discrete optimization problems such as the quadratic assignment problem (QAP), the traveling salesman problem (TSP), and the permutation flow shop scheduling problem (PFSP). However, this hybrid algorithm is in fact a two-phase method, in which the first phase obtains a good solution by DE and the second phase improves this solution by SS. Shi et al. [19] developed a true hybrid of SS and DE by embedding DE into SS to generate the new reference set; this hybrid algorithm was used to solve a discrete project scheduling problem.

In this paper, we propose a new hybrid algorithm based on SS and DE, named hybrid SS with DE (SSDE). Its main framework follows that of SS but uses DE to combine solutions in the reference set and thereby generate new solutions. Unlike the algorithm of Shi et al. [19], our hybrid algorithm is designed for continuous optimization problems. In addition, our hybrid SSDE algorithm adopts multiple mutation operators proposed for classical DE and uses an adaptive mechanism to select the most appropriate mutation operator during evolution. This strategy makes the hybrid SSDE very robust across different kinds of optimization problems.

The remainder of this paper is organized as follows. Section 2 gives background on SS and DE, and Section 3 describes the procedure and components of the hybrid SSDE in detail. Section 4 presents the computational experiments, in which our SSDE is compared with other state-of-the-art evolutionary algorithms on benchmark test problems. Finally, Section 5 concludes the paper.

2. Background

2.1. Canonical Scatter Search

The framework of SS generally includes five components (shown in Algorithm 1). SS first generates a population of solutions using a diversification generation method that considers both solution quality and diversity. Then the reference update method is used to build and maintain the reference set by selecting some solutions of good quality and other solutions of good diversity from the population (note that the size of the reference set is a preset fixed value). Based on the reference set, the subset generation method produces a large number of subsets of solutions as a basis for creating combined solutions (e.g., all two-solution and three-solution subsets of the reference set are typically used). For each subset, the solution combination method transforms the solutions in the subset into one combined solution. Finally, the improvement method is applied to further improve each combined solution. If no new solution is added to the reference set, the algorithm terminates; otherwise, the algorithm returns to the subset generation procedure and starts a new iteration.

(1)   Begin
(2)  Use the Diversification generation method to create a population $P$.
(3)  Use the Improvement method to enhance the population, and set $x^*$ to be the best solution found so far.
(4)  Use the Reference update method to build the initial reference set $RefSet$.
(5)   while (the termination criterion is not reached) do
(6)   Generate subsets from $RefSet$ using the Subset generation method.
(7)  Combine each subset into a new solution using the Solution combination method.
(8)  Use the Improvement method to enhance each combined solution, and update $x^*$.
(9)  Update the reference set based on the union of the current reference set and the new combined
   solutions using the Reference update method.
(10) end while
(11) Report the best solution $x^*$.
(12) End

2.2. Canonical Differential Evolution

The DE algorithm is a simple, fast, and robust evolutionary algorithm. It starts with a population of solutions, and at each generation the population is updated through three main consecutive steps: mutation, crossover, and selection. In the following, the $i$th solution in the population at generation $t$ is denoted as $X_i^t = (x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,D}^t)$, $i = 1, 2, \ldots, PS$, where $PS$ is the population size and $D$ is the number of dimensions.

The mutation step first randomly selects three solutions from the population of generation $t$ for the target solution $X_i^t$ and then generates the perturbed vector $V_i^t$ based on the three selected solutions as follows:
$$V_i^t = X_{r_1}^t + F \cdot (X_{r_2}^t - X_{r_3}^t),$$
where $r_1$, $r_2$, $r_3$ are three mutually different indices randomly selected from $\{1, 2, \ldots, PS\}$, all different from $i$, and $F$ is the control parameter (scaling factor).

The crossover step generates the trial solution $U_i^t$ based on the perturbed vector $V_i^t$ and the target solution $X_i^t$ as follows:
$$u_{i,j}^t = \begin{cases} v_{i,j}^t, & \text{if } rand_j \le CR \text{ or } j = j_{rand}, \\ x_{i,j}^t, & \text{otherwise}, \end{cases}$$
where $j = 1, 2, \ldots, D$, $rand_j$ is a uniform random number in $[0, 1]$, and $j_{rand}$ is a randomly chosen index from $\{1, 2, \ldots, D\}$ that ensures the trial solution $U_i^t$ does not duplicate the target solution $X_i^t$. $CR$ is the crossover probability set by the user.

Once the trial solution is generated, the selection step compares it with its target solution. If the trial solution $U_i^t$ has an equal or lower objective function value than its target solution $X_i^t$, it replaces the target solution in the next generation; otherwise, the target solution retains its place in the population for at least one more generation. That is, the solution in the next generation is selected according to the following rule:
$$X_i^{t+1} = \begin{cases} U_i^t, & \text{if } f(U_i^t) \le f(X_i^t), \\ X_i^t, & \text{otherwise}. \end{cases}$$
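
To make the three steps concrete, the following C++ sketch implements one generation of canonical DE (DE/rand/1 with binomial crossover) under the notation above. The population layout, random-number helpers, and the objective function pointer are illustrative assumptions rather than the implementation used in this paper.

#include <random>
#include <vector>

using Solution = std::vector<double>;

std::mt19937 rng(42); // fixed seed for reproducibility of the sketch
double randUniform() { return std::uniform_real_distribution<double>(0.0, 1.0)(rng); }
int randIndex(int n) { return std::uniform_int_distribution<int>(0, n - 1)(rng); }

// One generation of canonical DE/rand/1/bin: mutation, crossover, selection.
void deGeneration(std::vector<Solution>& pop, double F, double CR,
                  double (*f)(const Solution&)) {
    const int PS = static_cast<int>(pop.size());
    const int D = static_cast<int>(pop[0].size());
    for (int i = 0; i < PS; ++i) {
        // Mutation: three mutually distinct indices r1, r2, r3, all != i.
        int r1, r2, r3;
        do { r1 = randIndex(PS); } while (r1 == i);
        do { r2 = randIndex(PS); } while (r2 == i || r2 == r1);
        do { r3 = randIndex(PS); } while (r3 == i || r3 == r1 || r3 == r2);
        // Binomial crossover: dimension jrand always comes from the mutant,
        // so the trial never duplicates the target.
        Solution trial = pop[i];
        const int jrand = randIndex(D);
        for (int j = 0; j < D; ++j)
            if (randUniform() <= CR || j == jrand)
                trial[j] = pop[r1][j] + F * (pop[r2][j] - pop[r3][j]);
        // Greedy selection: keep the trial if it is no worse than the target.
        if (f(trial) <= f(pop[i])) pop[i] = trial;
    }
}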

3. Proposed SSDE Algorithm

In this section, we first present the overall framework of our hybrid SSDE algorithm in Algorithm 2 and then describe each of its components in detail in the following sections. In Algorithm 2, $PS$ denotes the size of the population, $b$ denotes the size of the reference set, and $limit$ denotes the maximum number of consecutive non-improving generations allowed before reinitialization.

(1)    Begin
(2)  Set the selection probability of each mutation operator to be the same, that is, $p_k = 1/K$,
   where $K$ is the number of mutation operators.
(3)  Set the iteration counter $t = 0$ and the non-improvement counter $fail = 0$.
(4)  Use the Diversification method to generate a population $P$, evaluate each solution in $P$, and then set the best solution
    found so far $x^*$ to be the best one in $P$.
(5)   while (the termination criterion is not reached) do
(6)    Use the Reference update method to update the reference set $RefSet$.
(7)    Set $t = t + 1$ and $i = 1$. Set the solution set $NewPop$ to be empty.
(8)    if ($fail \le limit$) do
(9)     while ($i \le b$) do // generate a new solution for each $X_i$ in $RefSet$
(10)    Set the success count $ns_k$ and failure count $nf_k$ of each mutation operator $k$ to zero.
(11)     (1) Mutation
(12)       Randomly select a mutation operator (namely, operator $k$) using the Adaptive selection method.
(13)       Randomly select solutions from $RefSet$ according to the requirement of mutation operator $k$.
(14)       Perform mutation operator $k$ on the selected solutions to generate a perturbed vector $V_i$.
(15)     (2) Crossover
(16)        Apply the crossover operation of Section 2.2 to obtain the trial solution $U_i$.
(17)     (3) Selection
(18)       Use the selection rule of Section 2.2 to obtain the new solution $X_i'$, and add $X_i'$ to $NewPop$.
      If $X_i' = U_i$, set $ns_k = ns_k + 1$; otherwise, set $nf_k = nf_k + 1$.
(19)     Add $ns_k$ and $nf_k$ of each mutation operator to the end of a list $L$ whose length is $LP$. If the list is full,
      then remove the first node of this list so that only a maximum of $LP$ nodes are stored in list $L$.
(20)     Update the selection probability of each operator according to the method described in Section 3.4.
(21)      Set $i = i + 1$.
(22)   end while
(23)   if (the best solution in $NewPop$ is better than $x^*$) do
(24)      Update $x^*$ and set $fail = 0$.
(25)   else do
(26)      Set $fail = fail + 1$.
(27)   end if
(28)  else do
(29)    Sort the solutions in the population $P$ in ascending order of objective value, and then reinitialize
      the latter half of the solutions in $P$ (see Section 3.5).
(30)    Set $fail = 0$.
(31)  end if
(32) end while
(33) Report the best solution $x^*$.
(34) End

3.1. Initial Population Generation Method

To obtain an initial population with good diversity, we develop a diversification method following the main ideas of [20]. The details of this method are as follows, in which $D$ is the number of dimensions, $PS$ is the population size, $m$ is the number of subranges, and $l_j$ and $u_j$ are the lower and upper bounds of dimension $j$. This method guarantees the diversity of the initial population.

Step 1. Divide the range $[l_j, u_j]$ of each dimension $j$ into $m$ equal subranges.

Step 2. For each dimension $j = 1, 2, \ldots, D$, do the following steps.

Step 2.1. Set $k = 1$ and set the selection probability of each subrange to $1/m$.

Step 2.2. Randomly select a subrange according to the current selection probabilities, namely, subrange $s$, and then generate a random value within this subrange for dimension $j$ of solution $k$.

Step 2.3. Decrease the selection probability of the selected subrange $s$ and increase the selection probabilities of the unselected subranges accordingly, so that less frequently chosen subranges become more likely to be selected.

Step 2.4. Set $k = k + 1$. If $k > PS$, then move on to the next dimension and go to Step 2.1; otherwise, go to Step 2.2.
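
For illustration, the following C++ sketch implements this diversification method under one plausible reading of the probability-update rule (the exact formulas were not recoverable): a subrange's selection probability is assumed to be inversely proportional to how often it has already been chosen, so that under-used subranges are favored. The function name and signature are illustrative.

#include <random>
#include <vector>

// Builds PS solutions dimension by dimension, steering each new value toward
// under-used subranges (assumed rule: probability of a subrange proportional
// to 1/(1 + number of times it was already chosen)).
std::vector<std::vector<double>> diversify(int PS, int D, int m,
                                           const std::vector<double>& lb,
                                           const std::vector<double>& ub) {
    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    std::vector<std::vector<double>> pop(PS, std::vector<double>(D));
    for (int j = 0; j < D; ++j) {                 // Step 2: one dimension at a time
        const double width = (ub[j] - lb[j]) / m; // Step 1: m equal subranges
        std::vector<int> count(m, 0);             // Step 2.1: reset usage counts
        for (int k = 0; k < PS; ++k) {            // Steps 2.2-2.4
            std::vector<double> w(m);
            for (int s = 0; s < m; ++s) w[s] = 1.0 / (1.0 + count[s]);
            std::discrete_distribution<int> pick(w.begin(), w.end());
            const int s = pick(rng);              // Step 2.2: select a subrange
            ++count[s];                           // Step 2.3: penalize it next time
            pop[k][j] = lb[j] + (s + u01(rng)) * width; // random value inside it
        }
    }
    return pop;
}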

3.2. Reference Update Method

In the canonical SS algorithm, the reference set generally consists of two subsets of solutions: one subset contains solutions of good quality in objective function value, and the other contains solutions of good diversity. This is a good choice for discrete problems; for continuous optimization problems, however, such a setting often deteriorates search efficiency: if two solutions are far from each other in the solution space, the offspring generated from them is often inferior to both. In our preliminary experiments, the adoption of this setting often resulted in a very slow convergence speed, which in turn made the algorithm unable to reach a good solution within a given number of function evaluations. Therefore, in our hybrid SSDE, the reference set consists of only the $b$ best solutions in the population, and at the end of each generation the reference set is updated with the $b$ best solutions in the new population.
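
Because the reference set keeps only the $b$ best solutions, the update reduces to a partial sort by objective value, as the short C++ sketch below shows; the Individual structure and field names are illustrative assumptions.

#include <algorithm>
#include <vector>

struct Individual { std::vector<double> x; double fitness; };

// Keep only the b best solutions of pop (smaller fitness is better).
std::vector<Individual> updateRefSet(std::vector<Individual> pop, int b) {
    std::partial_sort(pop.begin(), pop.begin() + b, pop.end(),
                      [](const Individual& a, const Individual& c) {
                          return a.fitness < c.fitness;
                      });
    pop.resize(b);
    return pop;
}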

3.3. Mutation Operators Used in Hybrid SSDE

As mentioned above, several mutation operators proposed for DE are adopted in our hybrid SSDE algorithm. Their definitions are as follows, where $X_{best}$ denotes the best solution found so far by the algorithm and $r_1$, $r_2$, $r_3$, $r_4$ are mutually different indices randomly selected from the reference set:
(1) DE/rand/1: $V_i = X_{r_1} + F \cdot (X_{r_2} - X_{r_3})$;
(2) DE/best/1: $V_i = X_{best} + F \cdot (X_{r_1} - X_{r_2})$;
(3) DE/rand-to-best/1: $V_i = X_i + F \cdot (X_{best} - X_i) + F \cdot (X_{r_1} - X_{r_2})$;
(4) DE/best/2: $V_i = X_{best} + F \cdot (X_{r_1} - X_{r_2}) + F \cdot (X_{r_3} - X_{r_4})$.
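
The following C++ sketch implements the four operators under the notation above; the function signature (passing the selected solutions explicitly) is an illustrative assumption.

#include <vector>

using Solution = std::vector<double>;

// op selects one of the four mutation operators; xr1..xr4 are distinct
// solutions drawn from the reference set, best is the best-so-far solution,
// xi is the target solution, and F is the DE control parameter.
Solution mutate(int op, double F, const Solution& best, const Solution& xi,
                const Solution& xr1, const Solution& xr2,
                const Solution& xr3, const Solution& xr4) {
    const size_t D = xi.size();
    Solution v(D);
    for (size_t j = 0; j < D; ++j) {
        switch (op) {
        case 0: v[j] = xr1[j] + F * (xr2[j] - xr3[j]); break;      // DE/rand/1
        case 1: v[j] = best[j] + F * (xr1[j] - xr2[j]); break;     // DE/best/1
        case 2: v[j] = xi[j] + F * (best[j] - xi[j])
                             + F * (xr1[j] - xr2[j]); break;       // DE/rand-to-best/1
        default: v[j] = best[j] + F * (xr1[j] - xr2[j])
                               + F * (xr3[j] - xr4[j]); break;     // DE/best/2
        }
    }
    return v;
}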

3.4. Adaptive Selection Method for Mutation Operators

To select the mutation operator, we present an adaptive mechanism based on the concept of solution survival used in JADE [13] and SaDE [14]; the success and failure memories of SaDE are also used in the hybrid SSDE. In our SSDE, the application of a selected mutation operator $k$ to solution $X_i$ is viewed as successful only if the new solution resulting from this operator ($X_i'$ in Algorithm 2) outperforms $X_i$. A list $L$ with a fixed length $LP$ (also called the learning period) is established to store the success counter $ns_k$ and failure counter $nf_k$ of each mutation operator $k$. During the first $LP$ generations, the selection probability of each mutation operator is equal to $1/K$ ($K$ is the number of mutation operators). This warm-up period lets the algorithm assess the performance and suitability of each mutation operator for the current optimization problem. At the end of each generation, the values of $ns_k$ and $nf_k$ for each mutation operator are counted and added to the end of the list $L$. Then, from generation $LP + 1$ onward, the head node of list $L$ is removed so that the new record of $ns_k$ and $nf_k$ can be added to the end of $L$.

The selection probability of each mutation operator $k$ at generation $t$ (denoted as $p_{k,t}$) is calculated as follows according to SaDE [14]:
$$p_{k,t} = \frac{S_{k,t}}{\sum_{k=1}^{K} S_{k,t}}, \qquad S_{k,t} = \frac{\sum_{g=t-LP}^{t-1} ns_{k,g}}{\sum_{g=t-LP}^{t-1} ns_{k,g} + \sum_{g=t-LP}^{t-1} nf_{k,g}} + \epsilon,$$
where $\epsilon$ is a small positive constant used to avoid a zero selection probability.
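
A C++ sketch of this update rule and of the roulette-wheel draw that uses it is given below; the container layout for the success/failure memories and the default value of $\epsilon$ are assumptions (SaDE [14] uses $\epsilon = 0.01$).

#include <numeric>
#include <random>
#include <vector>

// ns[k] and nf[k] hold the success/failure counts of operator k over the
// last LP generations (the list L in the text); eps avoids zero probabilities.
std::vector<double> updateProbabilities(const std::vector<std::vector<int>>& ns,
                                        const std::vector<std::vector<int>>& nf,
                                        double eps = 0.01) {
    const size_t K = ns.size();
    std::vector<double> S(K);
    for (size_t k = 0; k < K; ++k) {
        const double s = std::accumulate(ns[k].begin(), ns[k].end(), 0.0);
        const double f = std::accumulate(nf[k].begin(), nf[k].end(), 0.0);
        S[k] = (s + f > 0.0 ? s / (s + f) : 0.0) + eps; // success rate over LP
    }
    const double total = std::accumulate(S.begin(), S.end(), 0.0);
    for (double& v : S) v /= total; // normalize so the p_k sum to one
    return S;
}

// Roulette-wheel draw of the next mutation operator, proportional to p_k.
int selectOperator(const std::vector<double>& p, std::mt19937& rng) {
    std::discrete_distribution<int> d(p.begin(), p.end());
    return d(rng);
}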

3.5. Reinitialization of Population

During the evolution process, the hybrid SSDE algorithm may become trapped in local optimal regions, which in turn deteriorates its performance. To further improve its ability to escape from local optimal regions, a reinitialization method is adopted. During the evolution, we record the number of consecutive generations in which the best solution found so far has not been updated. Whenever this number exceeds the limit $limit$, we reinitialize the population as follows. First, sort the solutions in the population $P$ in ascending order of their objective function values. Second, delete the latter half of the population, in which the solutions have worse objective function values. Third, randomly select two solutions from the remaining half of $P$ and then perform the SBX operator [20] on them to generate two offspring solutions. The third step is repeated to generate a pool of new solutions, from which the best $PS/2$ solutions are selected and added to $P$.

The procedure of the SBX operator can be described as follows according to Deb and Agrawal [20]. For two solutions $X_1$ and $X_2$, the SBX operator produces two new solutions $C_1$ and $C_2$. First, generate a uniform random number $u \in [0, 1)$ and then compute $\beta$ based on $u$ and a distribution index $\eta$ according to
$$\beta = \begin{cases} (2u)^{1/(\eta+1)}, & \text{if } u \le 0.5, \\ \left( \dfrac{1}{2(1-u)} \right)^{1/(\eta+1)}, & \text{otherwise}. \end{cases}$$
Finally, each dimension $j$ of the two new solutions is generated using
$$c_{1,j} = 0.5\left[(1+\beta)x_{1,j} + (1-\beta)x_{2,j}\right], \qquad c_{2,j} = 0.5\left[(1-\beta)x_{1,j} + (1+\beta)x_{2,j}\right], \qquad j = 1, \ldots, D.$$
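
The following C++ sketch implements the SBX operator as described above (a single $\beta$ shared by all dimensions; many SBX implementations instead draw a fresh $\beta$ per dimension) together with the reinitialization step described above. The Individual layout and the size of the offspring pool are illustrative assumptions, since the exact repetition count was not recoverable from the text.

#include <algorithm>
#include <cmath>
#include <random>
#include <utility>
#include <vector>

struct Individual { std::vector<double> x; double fitness; };

// SBX (Deb and Agrawal [20]): two children from parents p1 and p2.
std::pair<std::vector<double>, std::vector<double>>
sbx(const std::vector<double>& p1, const std::vector<double>& p2,
    double eta, std::mt19937& rng) {
    const double u = std::uniform_real_distribution<double>(0.0, 1.0)(rng);
    const double beta = (u <= 0.5)
        ? std::pow(2.0 * u, 1.0 / (eta + 1.0))
        : std::pow(1.0 / (2.0 * (1.0 - u)), 1.0 / (eta + 1.0));
    std::vector<double> c1(p1.size()), c2(p1.size());
    for (size_t j = 0; j < p1.size(); ++j) {
        c1[j] = 0.5 * ((1.0 + beta) * p1[j] + (1.0 - beta) * p2[j]);
        c2[j] = 0.5 * ((1.0 - beta) * p1[j] + (1.0 + beta) * p2[j]);
    }
    return {c1, c2};
}

// Reinitialization: keep the better half, breed a pool of SBX children from
// it, and refill the population with the best of the pool.
void reinitialize(std::vector<Individual>& pop, double eta, std::mt19937& rng,
                  double (*f)(const std::vector<double>&)) {
    std::sort(pop.begin(), pop.end(),
              [](const Individual& a, const Individual& b) {
                  return a.fitness < b.fitness;
              });
    const size_t half = pop.size() / 2;
    pop.resize(half); // drop the worse half
    std::uniform_int_distribution<size_t> pick(0, half - 1);
    std::vector<Individual> pool;
    while (pool.size() < 2 * half) { // pool of PS children is an assumption
        auto [c1, c2] = sbx(pop[pick(rng)].x, pop[pick(rng)].x, eta, rng);
        pool.push_back({c1, f(c1)});
        pool.push_back({c2, f(c2)});
    }
    std::sort(pool.begin(), pool.end(),
              [](const Individual& a, const Individual& b) {
                  return a.fitness < b.fitness;
              });
    pool.resize(half); // keep the best PS/2 children
    pop.insert(pop.end(), pool.begin(), pool.end());
}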

4. Computational Experiments and Results

In this section, we first introduce the experimental settings in Section 4.1 and the test problems in Section 4.2. Sections 4.3 and 4.4 then analyze the efficiency of the adaptive selection method and of the hybrid architecture, respectively. Finally, Section 4.5 is devoted to comparative studies between our SSDE and other state-of-the-art evolutionary algorithms.

4.1. Experimental Settings

The proposed SSDE algorithm is implemented in C++, and all the experiments are carried out on a personal computer with an Intel Core 2 Duo 2.8 GHz CPU and 4 GB of memory.

The parameters are set as follows based on our preliminary experiments: the population size $PS$ is set to 100, the size of the reference set $b$ is set to 35, the control parameter $F$ and the crossover probability $CR$ are generated randomly for each new solution, and the limit on the number of consecutive non-improving generations is set to 50.

4.2. Test Problems

To test the performance of the proposed SSDE algorithm, 10 benchmark problems are selected from the literature. The definitions of these problems are given in Table 1, in which the last column lists the optimal objective function value of each test problem. In our experiments, the same fixed dimension $D$ is used for all test problems.

In the experiments, a total of 50 independent runs are performed for each test problem to collect the statistical performance of each algorithm. The independent $t$-test is used to assess the statistical difference between algorithms, and the result for each problem is given in the last column of Tables 2, 3, and 4. In this last column, the symbol "+" denotes that the performance difference between our SSDE and the best of the other algorithms in the table is significant at a confidence level of 95%, while the symbol "−" denotes that the difference is not significant.

4.3. Efficiency Analysis of Adaptive Selection Method

In this section, we first carry out an experiment to show the efficiency of the adaptive selection method. We compare our proposed SSDE algorithm with four SSDE variants, each of which adopts only one mutation operator. These four variants are denoted, according to their mutation operators, as SSDE-rand/1, SSDE-best/1, SSDE-rand-to-best/1, and SSDE-best/2.

The comparison results for the proposed SSDE algorithm and the four single-operator variants are presented in Table 2, in which the better results are shown in bold type. The second column, Gen, denotes the number of generations used as the stopping criterion, and Mean (Std Dev) represents the average value and the standard deviation obtained by each algorithm on each test problem.

From Table 2, it can be seen that the SSDE algorithm with the adaptive selection method obtains the best results and shows the best robustness on almost all the test problems; it is significantly better than the other four variants on 9 out of the 10 test problems. The reason behind this phenomenon is that the adaptive strategy can select the mutation operators most appropriate to the current problem. In addition, the adoption of multiple kinds of mutation operators also helps to improve the exploration ability of SSDE, because such a strategy enhances the algorithm's ability to escape from local optimal regions.

4.4. Efficiency Analysis of Hybrid Architecture of SSDE

To test the impact of adopting DE as the generation method for new solutions, in this section we implement another variant of SS in which the new solutions are generated by the simulated binary crossover (SBX). That is, lines 10 to 20 of Algorithm 2 are replaced by the following steps: first, randomly select two solutions from the reference set $RefSet$; second, apply the SBX operator to the two solutions to generate two new solutions; third, select the better of the two as the new solution $X_i'$. This variant is denoted as SS-SBX.

The comparison results for the two SS variants are presented in Table 3, in which the better results are shown in bold type. From this table, it can be seen that although SS-SBX obtains the best results on six of the test problems, its performance on the remaining problems is much worse. The SSDE algorithm obtains the best results on 6 out of the 10 problems, and its performance on the other four problems is also very good. Therefore, it can be concluded that the SSDE algorithm is superior to SS-SBX, especially with respect to robustness.

4.5. Comparisons with Other State-of-the-Art Algorithms

In the sections above, we have shown that the adaptive selection mechanism based on multiple mutation operators and the incorporation of DE into SS both help to improve the search performance and robustness of SS. In this section, we further compare our SSDE with other state-of-the-art evolutionary algorithms in the literature: the SaDE algorithm proposed in [14] and the JADE algorithm with archive proposed in [13]. Both algorithms employ adaptive strategies for parameter control or mutation strategy selection during the evolution process. The comparison results are given in Table 4.

From Table 4, it can be found that our SSDE obtains better results on 7 out of the 10 problems compared with the SaDE algorithm [14] (the performance differences are significant for all 7 problems) and on 4 out of the 10 problems compared with the JADE algorithm [13]. On two of the problems, our SSDE algorithm reaches the optimal solutions, while SaDE and JADE show much worse performance. In addition, our SSDE algorithm appears to be more robust across different kinds of problems. On the whole, it can be concluded that our SSDE is competitive with the JADE algorithm and superior to the SaDE algorithm on the 10 benchmark test problems.

To give a graphical illustration of this analysis, we reimplemented the SaDE algorithm and compared the evolution processes of our SSDE and SaDE on the first six problems. The comparison results are shown in Figure 1, in which the ordinate is $\log_{10} f$, where $f$ is the objective function value of the best solution found at each iteration. Note that whenever an algorithm reaches the optimal solution, its evolution curve terminates in the figures, because the optimal objective function value of each problem is zero. From this figure, it can be seen that the convergence speed of our SSDE is much faster than that of SaDE on all six test problems.

5. Conclusions

In this paper, we developed a hybrid scatter search algorithm that incorporates the differential evolution algorithm as the new solution generation method. To make the proposed hybrid algorithm more robust across different kinds of optimization problems, an adaptive selection mechanism is developed for multiple mutation strategies. A population reinitialization method is also used to help the algorithm escape from local optima. The computational results on benchmark test problems show that the proposed hybrid SSDE algorithm is efficient and robust and that its performance is competitive with or even superior to that of two state-of-the-art evolutionary algorithms in the literature.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was partly supported by the National Natural Science Foundation of China (Grant no. 61403277) and the Humanities and Social Science Research Projects for Colleges in Tianjin (Grant no. 20132151).