Abstract

Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement used by organisms to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search schemes, namely “DS/rand/1,” “DS/rand/2,” “DS/current-to-rand/1,” and “DS/current-to-rand/2,” to search new regions of the space and enhance the convergence rate on global optimization problems. To verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed schemes perform better than, or at least comparably to, the original algorithm in terms of the quality of the solutions obtained. However, these schemes still cannot achieve the best solution for all functions. To further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is also proposed in this paper. This new algorithm combines three of the proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current-to-rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results on the 23 benchmark functions show that CDS has a faster convergence rate and better search ability.

1. Introduction

Optimization problems play an important role in both engineering design and information theory. During the past decade, researchers have developed many kinds of optimization algorithms to handle such problems, such as simulated annealing (SA), the genetic algorithm (GA), the differential evolution algorithm (DE), particle swarm optimization (PSO), ant colony optimization (ACO), biogeography-based optimization (BBO), and the differential search algorithm (DS) [1–7]. These algorithms have been widely adopted by researchers and applied to many practical optimization problems such as pattern recognition, antenna design, and chaotic systems [8–13].

The differential search algorithm (DS), recently developed by Civicioglu [7], is a population-based heuristic evolutionary algorithm inspired by the Brownian-like random-walk movement used by organisms to migrate. The algorithm has been used to find optimal solutions of numerous practical navigational, geodetic, and astro-geodetic problems. In [7], the statistical tests carried out to compare performance indicate that the problem-solving success of the DS algorithm on global optimization problems is better than that of ABC [14], JDE [15], JADE [16], SADE [17], EPSDE [18], GSA [19], PSO2011 [20], and CMA-ES [21]. However, the algorithm still has some limitations. It is good at exploring the search space and locating the region of the global minimum, but it is slow at exploiting the solution, so its convergence rate is a problem in some cases. Accelerating the convergence rate and enhancing the exploitation ability of the algorithm have therefore become two important goals in algorithm research. However, this field of study is still in its early days, and a great deal of future research is necessary to develop effective algorithms for optimization problems. In particular, to our knowledge, there is almost no work concerning improved heuristic methods for the DS algorithm.

In this paper, inspired by the mutation operation of the DE algorithm, we propose four improved solution search schemes to search new regions of the space and enhance the convergence rate of the original algorithm. However, in some cases these four schemes are trapped in local optima and cannot find the best solutions. To balance the exploration and exploitation of the original algorithm, this paper therefore proposes a high-efficiency composite DS algorithm (CDS). The new algorithm combines three of the proposed search schemes with three control parameters in a random manner to generate the offspring. Experiments have been conducted on 23 benchmark functions chosen from the previous literature. The results indicate that our approach is effective and efficient: compared with the individual search schemes, CDS performs better, or at least comparably, in terms of both the quality of the final solutions and the convergence rate.

The rest of this paper is organized as follows. Section 2 reviews the basic DS. Section 3 describes the proposed methods. Benchmark problems and the corresponding experimental results are given in Section 4. The last section concludes the paper and points out some future research directions.

2. Differential Search Algorithm

The differential search algorithm (DS), developed by Civicioglu [7], is a recent and highly competitive evolutionary algorithm. It is inspired by the migration of living beings, which form superorganisms as the climate changes through the year. In the DS algorithm, the search space is treated as a set of food areas, and each point in the search space corresponds to a position that an artificial superorganism may visit during its migration. The goal of this migration is to find the global optimal solution of the problem. During this process, the artificial superorganism checks whether randomly selected positions can be retained temporarily. If a tested position is suitable to be retained for some time, the artificial superorganism settles at the discovered position and then continues its migration from this position on. The main steps of the DS algorithm are listed below.

The algorithm begins with a randomly initialized population of artificial organisms, each of which is a $D$-dimensional parameter vector constrained by the prescribed minimum and maximum bounds:

$$\mathrm{low}_j \le x_{i,j} \le \mathrm{up}_j, \quad j = 1, 2, \ldots, D.$$

Therefore, we may generate the $j$th component of the $i$th vector as

$$x_{i,j} = \mathrm{low}_j + \mathrm{rand} \cdot (\mathrm{up}_j - \mathrm{low}_j),$$

where rand is a uniformly distributed random number between 0 and 1, $i = 1, 2, \ldots, NP$, and $j = 1, 2, \ldots, D$.
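To make the notation concrete, the initialization step can be sketched in MATLAB (the language used for the experiments in Section 4). This is only an illustrative sketch; the variable names NP, D, low, and up are ours, not taken from the original code.

    % Illustrative sketch of the initialization step (names are ours).
    NP = 30;  D = 10;                       % population size and dimension
    low = -100 * ones(1, D);                % lower bounds, one per dimension
    up  =  100 * ones(1, D);                % upper bounds
    % x(i,j) = low(j) + rand * (up(j) - low(j)), rand uniform on [0, 1]
    x = repmat(low, NP, 1) + rand(NP, D) .* repmat(up - low, NP, 1);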

After initialization, stopover vectors at the food areas are generated between the artificial organisms, following a Brownian-like random walk model. The algorithm creates a stopover vector corresponding to each population individual, or target vector, in the current population. The method for producing the stopover vectors can be described as

$$s_i^G = x_i^G + \mathrm{scale} \cdot (x_{r_1}^G - x_i^G),$$

where $r_1 \in \{1, 2, \ldots, NP\}$ is a randomly chosen integer with $r_1 \neq i$. The factor scale controls the size of the change in the positions of the individuals of the artificial organisms. Note that the value of scale is generated by a gamma random number generator controlled by uniform random numbers between 0 and 1, $\mathrm{scale} = \mathrm{randg}(2 \cdot \mathrm{rand}) \cdot (\mathrm{rand} - \mathrm{rand})$, as in Procedure 1 below.
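Continuing the initialization sketch above, one generation of stopover-vector production might be sketched as follows.

    % Illustrative sketch of stopover-vector generation for one generation.
    % randg draws gamma-distributed random numbers, as in Procedure 1 below.
    scale = randg(2 * rand) * (rand - rand);
    s = zeros(NP, D);                       % stopover vectors
    for i = 1:NP
        r1 = randi(NP);
        while r1 == i                       % enforce r1 ~= i
            r1 = randi(NP);
        end
        s(i, :) = x(i, :) + scale * (x(r1, :) - x(i, :));
    end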

The search process at the stopover site is carried out by the individuals of the artificial organisms of the superorganism. This process can be described as

$$s_{i,j}'^{\,G} = \begin{cases} s_{i,j}^{G}, & \text{if } r_{i,j} = 0, \\ x_{i,j}^{G}, & \text{if } r_{i,j} = 1, \end{cases}$$

where $j = 1, 2, \ldots, D$; $r_{i,j}$ is an integer number, either 1 or 0; and $s_{i,j}'^{\,G}$ denotes the trial vector of the $i$th particle in the $j$th dimension at the $G$th iteration.
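Continuing the sketch, this mapping can be illustrated for the simplest branch of Procedure 1, in which each entry of the map r is thresholded to 0 or 1 (the other two branches of the procedure zero out selected entries instead).

    % Illustrative sketch of the stopover-site search (one branch only).
    r = zeros(NP, D);
    for i = 1:NP
        r(i, :) = rand(1, D) < rand;        % random 0/1 pattern per row
    end
    strial = s;                             % s'(i,j) = s(i,j) where r == 0
    strial(r == 1) = x(r == 1);             % s'(i,j) = x(i,j) where r == 1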

The selection operation is used to choose the next population (i.e., $P^{G+1}$) between the stopover-site population and the artificial-organism population:

$$x_i^{G+1} = \begin{cases} s_i'^{\,G}, & \text{if } f(s_i'^{\,G}) \le f(x_i^{G}), \\ x_i^{G}, & \text{otherwise}. \end{cases}$$

The standard differential search algorithm can be described as in Procedure 1.

   (1) begin
   (2) Set the generation counter G = 0; randomly initialize a population P of NP individuals x_i = (x_{i,1}, ..., x_{i,D}). Initialize the parameters p1, p2.
   (3) Evaluate the fitness for each individual in P.
   (4) while stopping criteria is not satisfied do
   (5)    scale = randg(2 * rand) * (rand - rand)
   (6)    for i = 1 to NP do
   (7)       select r1 in {1, ..., NP} randomly, r1 ≠ i
   (8)       s_i = x_i + scale * (x_{r1} - x_i)
   (9)    end
  (10)    r = rand(NP, D);
  (11)    if rand < rand then
  (12)       if rand < p1 then
  (13)          for i = 1 to NP do
  (14)             r(i,:) = r(i,:) < rand
  (15)          end
  (16)       else
  (17)          for i = 1 to NP do
  (18)             r(i, randi(D)) = 0
  (19)          end
  (20)       end
  (21)    else
  (22)       for i = 1 to NP do
  (23)          d = randi(D, 1, ceil(p2 * rand * D))
  (24)          for j = 1 to size(d, 2) do
  (25)             r(i, d(j)) = 0
  (26)          end
  (27)       end
  (28)    end
  (29)    s'_{i,j} = s_{i,j}, if r_{i,j} = 0;
  (30)    s'_{i,j} = x_{i,j}, otherwise;
  (31)    for i = 1 to NP do
  (32)       Evaluate the offspring s'_i
  (33)       if s'_i is better than x_i then
  (34)          x_i = s'_i
  (35)       end if
  (36)    end for
  (37)    Memorize the best solution achieved so far
  (38) end while
  (39) end
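For concreteness, the whole procedure can be condensed into a short runnable MATLAB sketch, shown here on the sphere function. This is our own simplified reading of Procedure 1, not the author's code: the settings of p1 and p2 and the bound-handling rule are placeholders.

    % Simplified runnable sketch of Procedure 1 on the sphere function.
    f = @(x) sum(x.^2, 2);                      % objective (sphere)
    NP = 30;  D = 10;  maxFE = 1e5;
    low = -100*ones(1, D);  up = 100*ones(1, D);
    p1 = 0.3;  p2 = 0.3;                        % placeholder settings
    x = repmat(low, NP, 1) + rand(NP, D).*repmat(up - low, NP, 1);
    fit = f(x);  FE = NP;
    while FE < maxFE
        scale = randg(2*rand) * (rand - rand);
        s = zeros(NP, D);
        for i = 1:NP
            r1 = randi(NP); while r1 == i, r1 = randi(NP); end
            s(i,:) = x(i,:) + scale * (x(r1,:) - x(i,:));
        end
        r = rand(NP, D);                        % map: zero entries take s
        if rand < rand
            if rand < p1
                for i = 1:NP, r(i,:) = rand(1, D) < rand; end
            else
                for i = 1:NP, r(i, randi(D)) = 0; end
            end
        else
            for i = 1:NP
                r(i, randi(D, 1, ceil(p2*rand*D))) = 0;
            end
        end
        strial = x;  strial(r == 0) = s(r == 0);
        strial = min(max(strial, repmat(low, NP, 1)), repmat(up, NP, 1));
        ftrial = f(strial);  FE = FE + NP;
        better = ftrial < fit;                  % selection
        x(better,:) = strial(better,:);  fit(better) = ftrial(better);
    end
    bestValue = min(fit)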

3. Improved Approach

3.1. Proposed IDS Algorithm

As is well known, differential evolution (DE) is a simple yet efficient evolutionary algorithm, first introduced by Storn and Price [22]. DE has attracted much attention and has been applied to many real-world problems. The crucial idea behind DE is its scheme for producing trial vectors by manipulating a target vector and difference vectors. The algorithm combines simple arithmetic operators with the classical operators of crossover, mutation, and selection to generate a new population. Among these operators, mutation produces a mutant vector with respect to each individual in the current population. Different DE strategies have been proposed, depending on which target vector is selected and how many difference vectors are used. In the standard DE algorithm, four differential mutation strategies can be used with one of two different crossover methods. They are listed in the following:

“DE/rand/1”:
$$v_i^G = x_{r_1}^G + F \cdot (x_{r_2}^G - x_{r_3}^G),$$

“DE/rand/2”:
$$v_i^G = x_{r_1}^G + F \cdot (x_{r_2}^G - x_{r_3}^G) + F \cdot (x_{r_4}^G - x_{r_5}^G),$$

“DE/current-to-rand/1”:
$$v_i^G = x_i^G + F \cdot (x_{r_1}^G - x_i^G) + F \cdot (x_{r_2}^G - x_{r_3}^G),$$

“DE/current-to-rand/2”:
$$v_i^G = x_i^G + F \cdot (x_{r_1}^G - x_i^G) + F \cdot (x_{r_2}^G - x_{r_3}^G) + F \cdot (x_{r_4}^G - x_{r_5}^G),$$

where $r_1, r_2, r_3, r_4, r_5 \in \{1, 2, \ldots, NP\}$ are randomly chosen, mutually distinct integers different from $i$. $F$ is the scaling factor controlling the amplification of the difference vectors, and $x_{\mathrm{best}}^G$ is the individual vector with the best fitness value in the population at generation $G$.

Based on the DE algorithm and the properties of DS, we propose the following four novel search mechanisms to improve DS:

“DS/rand/1”:
$$s_i^G = x_{r_1}^G + \mathrm{scale} \cdot (x_{r_2}^G - x_{r_3}^G),$$

“DS/rand/2”:
$$s_i^G = x_{r_1}^G + \mathrm{scale} \cdot (x_{r_2}^G - x_{r_3}^G) + \mathrm{scale} \cdot (x_{r_4}^G - x_{r_5}^G),$$

“DS/current-to-rand/1”:
$$s_i^G = x_i^G + \mathrm{scale} \cdot (x_{r_1}^G - x_i^G) + \mathrm{scale} \cdot (x_{r_2}^G - x_{r_3}^G),$$

“DS/current-to-rand/2”:
$$s_i^G = x_i^G + \mathrm{scale} \cdot (x_{r_1}^G - x_i^G) + \mathrm{scale} \cdot (x_{r_2}^G - x_{r_3}^G) + \mathrm{scale} \cdot (x_{r_4}^G - x_{r_5}^G),$$

where scale controls the size of the change in the positions of the individuals of the artificial organisms.
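As an illustration, the four schemes can be written in MATLAB as follows, continuing the sketches of Section 2. The index-selection code and names are ours; a full implementation would embed these lines inside the generation loop of Procedure 1.

    % Illustrative sketch of the four proposed schemes for individual i.
    NP = 30;  D = 10;  i = 1;
    x = rand(NP, D);                            % stand-in population
    scale = randg(2*rand) * (rand - rand);
    perm = randperm(NP);  perm(perm == i) = []; % distinct indices, all ~= i
    r = perm(1:5);                              % r(1) ... r(5)
    % "DS/rand/1"
    s1 = x(r(1),:) + scale*(x(r(2),:) - x(r(3),:));
    % "DS/rand/2"
    s2 = x(r(1),:) + scale*(x(r(2),:) - x(r(3),:)) + scale*(x(r(4),:) - x(r(5),:));
    % "DS/current-to-rand/1"
    s3 = x(i,:) + scale*(x(r(1),:) - x(i,:)) + scale*(x(r(2),:) - x(r(3),:));
    % "DS/current-to-rand/2"
    s4 = x(i,:) + scale*(x(r(1),:) - x(i,:)) + scale*(x(r(2),:) - x(r(3),:)) ...
         + scale*(x(r(4),:) - x(r(5),:));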

Similar to DE, four mutation schemes are proposed in this paper. The search schemes “DS/rand/1” and “DS/rand/2” have stronger exploration capabilities and can effectively maintain population diversity. Compared with the other strategies, “DS/current-to-rand/1” and “DS/current-to-rand/2” benefit from fast convergence by guiding the evolutionary search with a random target; however, these two strategies may lose diversity and global exploration ability. Compared with the original scheme (“DS/original/1”), the advantages of the four strategies are clear: “DS/rand/1” and “DS/rand/2” are random enough for exploration, while “DS/current-to-rand/1” and “DS/current-to-rand/2” can guide the search in a random direction. In the experimental section, we test all five schemes on different functions to show how effective and efficient these strategies are.

3.2. Composite DS

For successful application to optimization problems, a population-based optimizer should not only find the globally optimal solution but also converge quickly. Based on the experimental results for these five search schemes, we find that the effectiveness of the differential search algorithm in solving global numerical problems depends on the selected search scheme and its parameters, and that different problems call for different search schemes and different parameter values. From the experimental results in Section 4, the five search schemes show different advantages in various respects, such as diversity and convergence rate.

To meet these goals and combine the advantages of the different schemes, a composite differential search algorithm (CDS) is proposed in this paper, which randomly combines several search schemes and their associated control parameters to produce the new offspring. The flowchart of the CDS algorithm is shown in Figure 1. In this paper, three search schemes and three control parameters form a candidate pool used to improve the global search ability and enhance the convergence rate. In the algorithm, three candidate solutions are calculated independently, and the best of them is retained in the next generation. The three chosen search schemes are “DS/rand/1,” “DS/rand/2,” and “DS/current-to-rand/1”; three settings of the scale parameter complete the pool.
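A minimal sketch of the CDS offspring step for one individual is given below. The contents of the scale pool are placeholders, not the paper's actual control-parameter settings; only the structure, three candidates generated by the three schemes with randomly drawn scale settings and the best candidate retained, follows the description above.

    % Illustrative sketch of CDS offspring generation for individual i.
    NP = 30;  D = 10;  i = 1;
    x = rand(NP, D);                            % stand-in population
    f = @(x) sum(x.^2, 2);                      % example objective
    % Placeholder pool of three scale settings (NOT the paper's values).
    scalePool = {@() randg(2*rand)*(rand - rand), @() 0.5*rand, @() randn};
    cand = zeros(3, D);
    for k = 1:3
        sc = scalePool{randi(3)}();             % random scheme/parameter pairing
        perm = randperm(NP);  perm(perm == i) = [];
        r = perm(1:5);
        switch k
            case 1                              % "DS/rand/1"
                cand(k,:) = x(r(1),:) + sc*(x(r(2),:) - x(r(3),:));
            case 2                              % "DS/rand/2"
                cand(k,:) = x(r(1),:) + sc*(x(r(2),:) - x(r(3),:)) ...
                            + sc*(x(r(4),:) - x(r(5),:));
            case 3                              % "DS/current-to-rand/1"
                cand(k,:) = x(i,:) + sc*(x(r(1),:) - x(i,:)) ...
                            + sc*(x(r(2),:) - x(r(3),:));
        end
    end
    [~, kbest] = min(f(cand));                  % keep the best of the three
    offspring = cand(kbest, :);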

4. Experimental Results

To evaluate the performance of our algorithm, we applied it to the 23 standard benchmark functions of [23], which have been widely used in the literature. We use these functions without modification; their details are given in Tables 1 and 2. The first seven functions are unimodal. Among them, $f_6$ is the step function, which has one minimum and is discontinuous, and $f_7$ is a noisy quadratic function. The following six functions are multimodal test functions for which the number of local minima increases exponentially with the problem dimension. Finally, ten multimodal test functions with fixed dimension, which have only a few local minima, are used in our experimental study. These problems have served as benchmarks in studies of many different methods.
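For reference, a few of these benchmarks can be written as MATLAB one-liners. These follow the commonly used definitions in the literature; the authoritative definitions are those given in Tables 1 and 2.

    % Representative benchmark functions (x is a row vector).
    sphere    = @(x) sum(x.^2);                           % unimodal
    step      = @(x) sum(floor(x + 0.5).^2);              % discontinuous step
    noisy     = @(x) sum((1:numel(x)) .* x.^4) + rand;    % quartic with noise
    rastrigin = @(x) sum(x.^2 - 10*cos(2*pi*x) + 10);     % many local minima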

The algorithm is coded in MATLAB 7.9, and the experiments are run on a Pentium 3.0 GHz processor with 4.0 GB of memory.

In this experiment, the population size is set to 100, and the control parameters p1 and p2 are fixed. In this setting, all vectors used in the update rule are selected from the population at random, so the search has no bias toward any special direction and chooses new search directions in a random manner. The maximum number of fitness function evaluations is 100000, 300000, and 500000 for the functions with 10, 30, and 50 dimensions, respectively, and 10000 for the fixed-dimension functions. For all test functions, each algorithm carries out 30 independent runs, each starting from a random population with a different random seed.
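A sketch of this evaluation protocol is shown below; run_ds stands for any of the DS variants discussed here and is a hypothetical wrapper, not a function from the paper.

    % Illustrative sketch of the evaluation protocol: 30 independent runs,
    % each from a different random seed, reporting the mean and standard
    % deviation of the best values found. run_ds(fhandle, D, NP, maxFE)
    % is a hypothetical wrapper around one of the DS variants.
    nRuns = 30;
    best = zeros(nRuns, 1);
    for k = 1:nRuns
        rng(k);                         % per-run seed (newer MATLAB syntax)
        best(k) = run_ds(@(x) sum(x.^2, 2), 10, 100, 100000);
    end
    fprintf('mean = %g, std = %g\n', mean(best), std(best));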

4.1. Comparison of Different Search Schemes

To investigate how the different search schemes affect the effectiveness of the differential search algorithm, five schemes are compared: DS/original/1, DS/rand/1, DS/rand/2, DS/current-to-rand/1, and DS/current-to-rand/2. The functions were studied at dimensions 10, 30, and 50. Some representative convergence graphs are shown in Figures 2–13. As can be seen in Table 3, for the 10-dimensional problems it is interesting to note that DS/rand/1 outperforms DS/original/1 on thirteen functions. DS/rand/1 finds the global optimum on six functions and comes very close to it on three more; for the remaining problems it cannot find the best solutions within the maximum number of function evaluations. Compared with DS/original/1, DS/rand/2 gives a better solution on all functions and also finds the global optimum on six functions, but it does not beat DS/rand/1: on one function DS/rand/2 provides a better solution than DS/rand/1, while on five functions DS/rand/1 shows better search performance. On some functions, the DS/current-to-rand/1 and DS/current-to-rand/2 schemes outperform the other search schemes. The results for 30 dimensions are shown in Table 4: DS/rand/1 provides the highest accuracy on four functions, DS/current-to-rand/2 obtains better solutions on one function, and on one function all search schemes find the optimal solution, while DS/current-to-rand/1 performs better on one further function. For the multimodal functions, DS/rand/1 and DS/rand/2 can also find the optimal solutions on these complex functions; DS/current-to-rand/1 and DS/current-to-rand/2 come close to the optimal solutions on the multimodal functions but perform a little worse than DS/rand/1 and DS/rand/2. The results for the 50-dimensional problems are shown in Table 5. For the unimodal problems, DS/rand/1 gives better solutions than the other schemes on three functions; on two functions DS/current-to-rand/2 outperforms the other algorithms, although the results remain somewhat far from the global optima; and on one function DS/rand/2 has a better solution. For multimodal functions with many local minima, the final results are the more important, because they reflect an algorithm's ability to escape poor local optima and reach the near-global optimum; here DS/rand/1 and DS/rand/2 provide better solutions than the other algorithms on all but one function. Overall, as can be seen in Tables 3–5, DS/rand/1 and DS/rand/2 perform much better than the other schemes in most cases.

4.2. Sensitivities to Population Size

The performance of DS is always sensitive to the selected population size. If the population is too small, the diversity of possible movements is poor, and the algorithm may easily be trapped in a local optimum. On the other hand, if the population size is too large, DS exhausts the fitness evaluations very quickly without being able to locate the optimum. Therefore, the choice of population size for DS is always critical and problem-dependent.

To investigate the sensitivity of the proposed algorithm to variations of the population size, some experiments are repeated with two further population sizes; the experimental results for the five search schemes are given in Tables 6 and 7. For the smaller population size, the performances of DS/rand/1 and DS/rand/2 are significantly superior to those of the other algorithms according to the results in Table 6, since DS/rand/1 and DS/rand/2 are better than the other algorithms on all but two functions. On two of the functions, all algorithms locate the near-global optimum in every run. When the population size increases, DS/rand/1 and DS/rand/2 still reach higher accuracy, and we find that they are faster than DS/current-to-rand/1 and DS/current-to-rand/2 on these functions. For the larger population size in Table 7, DS/rand/1 and DS/rand/2 obtain significantly better performance than the other schemes on 11 functions.

4.3. Comparison of CDS with Enhanced Differential Search Algorithm

The performance of CDS is compared with those of the original DS and of DS with “DS/rand/1.” In CDS, the population size is 40, the maximum number of fitness function evaluations is 100000, 300000, and 500000 for the functions with 10, 30, and 50 dimensions, respectively, and the parameters p1 and p2 are kept the same as before. CDS inherits the strengths of the three search schemes it combines. The means and standard deviations of the results of CDS and the other algorithms are shown in Table 8. As can be seen there, CDS obtains much better results than the original DS and DS with “DS/rand/1” on all 10-dimensional benchmarks; there is no dispute that more precise exploitation can enhance the performance of the algorithms. For the 30-dimensional problems, CDS has a very fast convergence rate and gives a better solution than the original DS and DS with “DS/rand/1” on 12 functions; on the remaining function, “DS/rand/1” provides better solutions than the original DS and CDS. For the 50-dimensional problems, both CDS and “DS/rand/1” find the optimal solution on several functions. In addition, CDS performs much better than DS and DS with “DS/rand/1” on five functions, although on one function “DS/rand/1” obtains a better solution than the other algorithms. Therefore, we conclude that CDS is more effective than DS and “DS/rand/1” on the high-dimensional classical benchmark functions. In particular, CDS exhibits an overall higher convergence speed and better robustness than the two competitors under some conditions. We can also conclude that the composite operator has the ability to accelerate DS, especially at higher dimensionality. In addition, the graphs in Figures 2–12 show that CDS improves the convergence characteristics of the original algorithm regardless of dimension.

4.4. Comparison of CDS with Enhanced Differential Search Algorithm in Fixed Dimension

In this section, we compare our algorithm with the enhanced differential search schemes on the fixed-dimension functions; the experimental results are listed in Table 9. As can be seen there, several of these functions have only a few local minima and low dimension, so it is hard to judge the performance of individual algorithms on them: all algorithms were able to find the optimal solutions. On four of the functions, CDS provides better solutions than DS and “DS/rand/1,” and on one function CDS provides the best solution overall, performing clearly better than DS and DS with “DS/rand/1.” Figure 13 shows the convergence progress of the different search schemes and of CDS on one representative fixed-dimension function.

5. Conclusions

In this paper, we propose four different search schemes. Although there are a few functions on which these new schemes cannot find better solutions than the original algorithm, the new schemes achieve a faster convergence rate and better diversity. To further enhance the exploitation ability of the algorithm, we combined three of the new schemes with three control parameters in a random manner to construct a new algorithm, CDS. To verify the performance of CDS, 23 benchmark functions chosen from the literature were employed. The results show that the proposed CDS algorithm clearly outperforms the basic DS and the individual proposed schemes. In this paper we considered only unconstrained global optimization; the algorithm can be extended to other settings, such as constrained optimization problems.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.