Mathematical Problems in Engineering


Research Article | Open Access

Volume 2015 | Article ID 769245 | 16 pages | https://doi.org/10.1155/2015/769245

A Hybrid Backtracking Search Optimization Algorithm with Differential Evolution

Academic Editor: Antonio Ruiz-Cortes
Received: 14 Nov 2014
Revised: 10 Feb 2015
Accepted: 19 Feb 2015
Published: 08 Apr 2015

Abstract

The backtracking search optimization algorithm (BSA) is a new nature-inspired method which possesses a memory to take advantage of experiences gained from previous generations to guide the population to the global optimum. BSA is capable of solving multimodal problems, but it converges slowly and exploits solutions poorly. The differential evolution (DE) algorithm is a robust evolutionary algorithm with a fast convergence speed when exploitative mutation strategies, which utilize the information of the best solution found so far, are employed. In this paper, we propose a hybrid backtracking search optimization algorithm with differential evolution, called HBD. In HBD, DE with an exploitative strategy is used to accelerate convergence by optimizing one worse individual, selected according to its probability, at each iteration. A suite of 28 benchmark functions is employed to verify the performance of HBD, and the results show the improvement in effectiveness and efficiency of the hybridization of BSA and DE.

1. Introduction

Optimization plays an important role in many fields, for example, decision science and physical systems; mathematically, it can be abstracted as the minimization or maximization of objective functions subject to constraints on their variables. Generally speaking, optimization algorithms can be employed to find solutions to such problems. Stochastic optimization algorithms, such as the genetic algorithm (GA) [1], the particle swarm optimization algorithm (PSO) [2, 3], the ant colony optimization algorithm (ACO) [4], and differential evolution (DE) [5], are among the most effective of these methods, and almost all of them are nature-inspired techniques. For instance, DE, one of the most powerful stochastic optimization methods, employs the mutation, crossover, and selection operators at each generation to drive the population to the global optimum. In DE, the mutation operator is one of the core components and includes many differential mutation strategies which exhibit different characteristics. For example, the strategies which utilize the information of the best solution found so far have a fast convergence speed and favor exploitation. These strategies are classified as exploitative strategies [6].

Inspired by the success of GA, PSO, ACO, and DE in solving optimization problems, new nature-inspired algorithms have been a hot topic in the development of stochastic optimization techniques, such as artificial bee colony [7], cuckoo search [8], the bat algorithm [9], the firefly algorithm [10], social emotional optimization [11–13], harmony search [14], and biogeography-based optimization [15]. A survey has pointed out that there are about 40 different nature-inspired algorithms [16].

The backtracking search optimization algorithm (BSA) [17] is a new stochastic method for solving real-valued numerical optimization problems. Similar to other evolutionary algorithms, BSA uses the mutation, crossover, and selection operators to generate trial solutions. When generating trial solutions, BSA employs a memory to store experiences gained from previous generations. By taking advantage of this historical information to guide the population to the global optimum, BSA focuses on exploration and is capable of solving multimodal optimization problems. However, utilizing experiences in this way may make BSA converge slowly and may weaken exploitation in the later iteration stages.

On the other hand, researchers have paid more and more attention to combining different search optimization algorithms or machine learning methods to improve performance on real-world optimization problems. Some good surveys about hybrid metaheuristics or machine learning methods can be found in the literature [18–20]. In this paper, we also concentrate on a hybrid metaheuristic algorithm, called HBD, which combines BSA and DE. HBD employs DE with an exploitative mutation strategy to improve convergence speed and to favor exploitation. Furthermore, in HBD, DE is invoked to optimize only one worse individual, selected with the help of its probability, at each iteration. We use 28 benchmark functions to verify the performance of HBD, and the results show the improvement in effectiveness and efficiency of the hybridization of BSA and DE. The major advantages of our approach are as follows. (i) DE with exploitative strategies helps HBD converge fast and favor exploitation. (ii) Since DE optimizes only one individual, HBD expends only one extra function evaluation at each iteration and does not increase the overall complexity of BSA. (iii) DE is embedded behind BSA; therefore, HBD does not destroy the structure of BSA, and it remains very simple.

The remainder of this paper is organized as follows. Section 2 describes BSA and DE. Section 3 presents the HBD algorithm. Section 4 reports the experimental results. Section 5 concludes this paper.

2. Preliminary

2.1. BSA

The backtracking search optimization algorithm is a new stochastic search technique developed recently [17]. BSA has a single control parameter and a simple structure that is effective and capable of solving different optimization problems. Furthermore, BSA is a population-based method and possesses a memory in which it stores a population from a randomly chosen previous generation for generating the search-direction matrix. In addition, BSA is a nature-inspired method employing three basic genetic operators: mutation, crossover, and selection.

BSA employs a random mutation strategy that uses only one direction individual for each target individual, formulated as follows:

Mutant = P + F · (oldP − P), (1)

where P is the current population, oldP is the historical population, and F is a coefficient which controls the amplitude of the search-direction matrix (oldP − P).
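As a concrete sketch, the mutation above can be written in a few lines of NumPy. The coefficient F = 3 · N(0, 1) follows the original BSA paper; the array shapes and the helper name are illustrative assumptions:

```python
import numpy as np

def bsa_mutation(P, oldP, rng):
    """BSA mutation (1): Mutant = P + F * (oldP - P).
    F = 3 * N(0, 1) follows the original BSA; (oldP - P) is the
    search-direction matrix."""
    F = 3.0 * rng.standard_normal()
    return P + F * (oldP - P)

rng = np.random.default_rng(0)
P, oldP = rng.random((5, 3)), rng.random((5, 3))
Mutant = bsa_mutation(P, oldP, rng)
```

Note that a single scalar F is drawn per generation, so the whole population shares one search amplitude.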

BSA also uses a nonuniform and more complex crossover strategy. There are two steps in the crossover process. Firstly, a binary integer-valued matrix (map) of size N × D (N and D are the population size and the problem dimension, respectively) is generated to indicate the positions of the mutant individual to be manipulated by using the relevant target individual. Secondly, the relevant dimensions of the mutant individual are updated by using the relevant target individual. This crossover process is summarized in Algorithm 1.

Input: P and Mutant
Output: the trial population T
Step 1.
Initiate map(1:N, 1:D) = 1
Generate a and b drawn from the uniform distribution on (0, 1).
If a < b then
  For i from 1 to N do
   Generate a vector u containing a random permutation of the integers from 1 to D
   Generate c drawn from the uniform distribution on (0, 1).
   map(i, u(1:⌈mixrate · c · D⌉)) = 0
  End For
Else
  For i from 1 to N do
   Generate a random integer j from 1 to D
   map(i, j) = 0
  End For
End If
Step 2.
For i from 1 to N do
  For j from 1 to D do
   If map(i, j) = 1 then
    T(i, j) = P(i, j)
   Else
    T(i, j) = Mutant(i, j)
   End If
  End For
End For
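The two-step crossover can be sketched as follows. The map polarity (1 keeps the target value, 0 takes the mutant value) and the role of the mixrate parameter follow our reading of the original BSA; the helper name is ours:

```python
import numpy as np

def bsa_crossover(P, mutant, mixrate, rng):
    """Sketch of BSA's two-step crossover (Algorithm 1).
    cmap[i, j] == 1 -> trial keeps P[i, j]; 0 -> trial takes mutant[i, j]."""
    N, D = P.shape
    cmap = np.ones((N, D), dtype=int)
    if rng.random() < rng.random():          # Step 1: build the map
        for i in range(N):
            u = rng.permutation(D)
            k = int(np.ceil(mixrate * rng.random() * D))
            cmap[i, u[:k]] = 0               # a random subset mutates
    else:
        for i in range(N):
            cmap[i, rng.integers(D)] = 0     # one random dimension mutates
    return np.where(cmap == 1, P, mutant)    # Step 2: combine
```

With mixrate = 1, up to all D dimensions of an individual may take mutant values; the else-branch mutates exactly one dimension per individual.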

BSA has two types of selection operators. The first type is employed to select the historical population used for calculating the search direction: the historical population oldP is replaced with the current population P when one random number a is smaller than another random number b, both drawn from the uniform distribution on (0, 1). The second type of selection operator is greedy and determines the better individuals that go into the next generation.

According to the above descriptions, the pseudocode of BSA is summarized as shown in Algorithm 2.

Initiate the population P and the historical population oldP, randomly sampled from the search space.
While (the stop condition is not met)
 Perform the first type of selection: oldP = P in the case of a < b, where a and b are drawn from the uniform
 distribution on (0, 1).
 Permute arbitrarily the positions of the individuals in oldP.
 Generate the mutant population Mutant according to (1).
 Generate the trial population T based on Algorithm 1.
 Perform the second type of selection: select the individuals with better fitness from P and T.
 Update the best solution.
End While
Output the best solution.

2.2. DE

DE is a powerful evolutionary algorithm for global optimization over continuous spaces. When used to solve an optimization problem, it evolves a population of NP candidate solutions, each a D-dimensional parameter vector denoted as x_i, i = 1, 2, ..., NP. In DE, the population is initiated by uniform sampling within the prescribed minimum and maximum bounds.

After initialization, DE enters the iteration process, where the evolutionary operators, namely, mutation, crossover, and selection, are invoked in turn.

DE employs the mutation strategy to generate a mutant vector v_i for each target vector x_i. So far, there are several mutation strategies, and the most well-known and widely used ones are listed as follows [21, 22]:

"DE/best/1": v_i = x_best + F · (x_r1 − x_r2), (2)
"DE/current-to-best/1": v_i = x_i + F · (x_best − x_i) + F · (x_r1 − x_r2), (3)
"DE/best/2": v_i = x_best + F · (x_r1 − x_r2) + F · (x_r3 − x_r4), (4)
"DE/rand/1": v_i = x_r1 + F · (x_r2 − x_r3), (5)
"DE/current-to-rand/1": v_i = x_i + K · (x_r1 − x_i) + F · (x_r2 − x_r3), (6)
"DE/rand/2": v_i = x_r1 + F · (x_r2 − x_r3) + F · (x_r4 − x_r5), (7)

where the indices r1, r2, r3, r4, and r5 are mutually different integers uniformly drawn from 1 to NP and also different from i, F is the mutation factor, K is a combination coefficient, x_best denotes the best individual obtained so far, and v_i and x_i are the ith vectors of the mutant population and the current population, respectively.
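The six strategies can be collected in one hypothetical helper. The index handling (mutually different, excluding i) follows the text; drawing K uniformly in (0, 1) is one common convention and an assumption here:

```python
import numpy as np

def de_mutation(pop, best, F, i, strategy, rng):
    """Sketch of the six listed DE mutation strategies for target index i.
    r1..r5 are mutually different indices, all different from i."""
    NP = len(pop)
    r1, r2, r3, r4, r5 = rng.choice(
        [k for k in range(NP) if k != i], 5, replace=False)
    x = pop
    if strategy == "DE/best/1":
        return best + F * (x[r1] - x[r2])
    if strategy == "DE/current-to-best/1":
        return x[i] + F * (best - x[i]) + F * (x[r1] - x[r2])
    if strategy == "DE/best/2":
        return best + F * (x[r1] - x[r2]) + F * (x[r3] - x[r4])
    if strategy == "DE/rand/1":
        return x[r1] + F * (x[r2] - x[r3])
    if strategy == "DE/current-to-rand/1":
        K = rng.random()  # assumed uniform combination coefficient
        return x[i] + K * (x[r1] - x[i]) + F * (x[r2] - x[r3])
    if strategy == "DE/rand/2":
        return x[r1] + F * (x[r2] - x[r3]) + F * (x[r4] - x[r5])
    raise ValueError(f"unknown strategy: {strategy}")
```

The first three strategies pull mutants toward the current best individual, which is why they are the exploitative ones discussed in the text.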

The crossover operator is performed to generate a trial vector u_i from each pair of x_i and v_i after the mutant vector is generated. The most popular strategy is the binomial crossover, described as follows:

u_{i,j} = v_{i,j} if rand_j(0, 1) ≤ CR or j = j_rand; otherwise u_{i,j} = x_{i,j}, (8)

where CR is called the crossover rate, j_rand is an integer randomly sampled from 1 to D, and u_{i,j}, v_{i,j}, and x_{i,j} are the jth elements of u_i, v_i, and x_i, respectively.

Finally, DE uses a greedy mechanism to select the better vector from each pair of x_i and u_i. This can be described as follows:

x_i = u_i if f(u_i) ≤ f(x_i); otherwise x_i is retained, (9)

where f(·) is the objective function to be minimized.
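A minimal sketch of the binomial crossover and the greedy selection, assuming a minimization objective f; the function names are ours:

```python
import numpy as np

def binomial_crossover(x, v, CR, rng):
    """Binomial crossover: take v[j] when rand <= CR or j == jrand."""
    D = len(x)
    mask = rng.random(D) <= CR
    mask[rng.integers(D)] = True   # jrand guarantees one component from v
    return np.where(mask, v, x)

def select(x, u, f):
    """Greedy selection: keep the trial vector if it is not worse."""
    return u if f(u) <= f(x) else x
```

The forced jrand component ensures the trial vector always differs from the target when the mutant differs, even for CR = 0.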

3. HBD

In this section, we describe the HBD algorithm in detail. First, the motivations of this paper are given. Second, the framework of HBD is shown.

3.1. Motivations

BSA uses an external archive to store experiences gained from previous generations and makes use of them to guide the population to the global optimum. In BSA, permuting arbitrarily the positions of the historical population means that the individuals used in the mutation operator are chosen randomly; therefore, the algorithm focuses on exploration and is capable of solving multimodal optimization problems. However, precisely because of this random selection when utilizing experiences, BSA may converge slowly and may exploit poorly in the later iteration stages. This motivates our approach, which aims to accelerate the convergence speed and to enhance the exploitation of the search space while keeping the balance between the exploration and exploitation capabilities of BSA.

On the other hand, some studies have investigated the exploration and exploitation abilities of different DE mutation strategies and pointed out that the mutation operators incorporating the best individual (e.g., (2), (3), and (4)) favor exploitation because the mutant individuals are strongly attracted around the best individual [6, 23]. This motivates us to hybridize these exploitative mutation strategies to enhance the exploitation capability of BSA. In addition, this paper is also inspired by studies showing that combining other optimization methods is an effective way to improve performance on real-world optimization problems [24–27].

3.2. Framework of HBD

Generally speaking, there are many ways to hybridize BSA with DE. In this study, we propose a hybrid schema in which HBD employs DE with an exploitative strategy behind BSA at each iteration, so that information is shared between BSA and DE. However, if more individuals were optimized by DE, more function evaluations would be spent; in that case, HBD could converge prematurely, impairing exploration. Thus, to keep the exploration capability of HBD, DE is used to optimize only one worse individual, selected according to its probability. In addition, (2) is used as the default mutation strategy in HBD because (3) and (4) have stronger exploration capabilities, introducing more perturbation through a random individual [6] or through a modification combining "DE/best/1" and "DE/rand/1" [28]. The performance influenced by different exploitative strategies is discussed in Section 4.3.

In order to select one individual for DE, in this work, we assign a selection probability to each individual according to its fitness. It can be formulated as follows:

p_i = R_i / NP, (10)

where NP is the population size and R_i is the ranking value of the ith individual when the population is sorted from the worst fitness to the best one, assigned so that the worst individual receives the largest ranking value NP.
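A sketch of the ranking-based probability and an accept-reject pick, assuming minimization and assuming that the worst individual receives the largest ranking value (our reading of the probability model); the function names are ours:

```python
import numpy as np

def ranking_probabilities(fitness):
    """p_i = R_i / NP, with the worst individual ranked NP so that
    worse individuals are more likely to be handed to the DE step."""
    NP = len(fitness)
    ranks = np.empty(NP, dtype=int)
    ranks[np.argsort(fitness)] = np.arange(1, NP + 1)  # best -> 1, worst -> NP
    return ranks / NP

def pick_individual(fitness, rng):
    """Accept-reject selection: keep drawing until rand < p_i."""
    p = ranking_probabilities(fitness)
    while True:
        i = rng.integers(len(fitness))
        if rng.random() < p[i]:
            return i
```

The loop always terminates because the worst individual has p = 1 and is accepted whenever drawn.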

Note that this probability is similar to the selection probability in DE with ranking-based mutation operators [29]. In general, the worse individuals are farther away from the best individual than the better ones; thus, they should have higher probabilities of being driven toward the best one. This selection strategy can be defined as follows:

x_s = x_i, if rand(0, 1) < p_i, (11)

where x_s is the selected individual to be optimized by DE.

It is worth pointing out that our previous work [30], called BSADE, splits the whole iteration process into two parts: the first two-thirds and the final one-third. BSA is used in the first stage, and DE is employed in the second stage. In this case, DE does not share population information with BSA. Moreover, it is difficult to decide how to split the whole iteration process into two parts. Thus, the difference between HBD and BSADE is that HBD shares population information between BSA and DE, while BSADE does not. The comparison can be found in Section 4.4.

According to the above descriptions, the pseudocode of HBD is described in Algorithm 3.

Initiate the population P and the historical population oldP, randomly sampled from the search space.
While (the stop condition is not met)
 Perform the first type of selection: oldP = P in the case of a < b, where a and b are drawn from the uniform
 distribution on (0, 1).
 Permute arbitrarily the positions of the individuals in oldP.
 Generate the mutant population Mutant according to (1).
 Generate the trial population T based on Algorithm 1.
 Perform the second type of selection: select the individuals with better fitness from P and T.
 Update the best solution.
 //Invoke DE with exploitive strategy
 Select one individual x_s according to its probability p_s.
 Optimize x_s with the help of DE, and get the trial vector u_s.
 If (fitness(u_s) <= fitness(x_s))
  x_s = u_s
 End If
 Update the best solution.
End While
Output the best solution.
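Putting the pieces together, Algorithm 3 can be sketched end to end on a toy sphere function. The bounds, budget, parameter values, and clipping-based boundary handling are illustrative assumptions, not the paper's exact experimental setup:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def hbd_sketch(f, D=10, NP=20, iters=300, mixrate=1.0, F_de=0.8, CR=0.9, seed=1):
    """Minimal sketch of Algorithm 3 (HBD) on the box [-5, 5]^D."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    P = rng.uniform(lo, hi, (NP, D))
    oldP = rng.uniform(lo, hi, (NP, D))
    fit = np.array([f(x) for x in P])
    for _ in range(iters):
        # First type of selection: maybe refresh oldP, then permute it.
        if rng.random() < rng.random():
            oldP = P.copy()
        oldP = oldP[rng.permutation(NP)]
        # BSA mutation (1) and crossover (Algorithm 1).
        M = P + 3.0 * rng.standard_normal() * (oldP - P)
        keep = np.ones((NP, D), dtype=bool)      # True: keep P's value
        if rng.random() < rng.random():
            for i in range(NP):
                u = rng.permutation(D)
                keep[i, u[:int(np.ceil(mixrate * rng.random() * D))]] = False
        else:
            for i in range(NP):
                keep[i, rng.integers(D)] = False
        T = np.clip(np.where(keep, P, M), lo, hi)
        # Second (greedy) type of selection.
        fT = np.array([f(x) for x in T])
        better = fT < fit
        P[better], fit[better] = T[better], fT[better]
        # DE step: pick one worse individual, apply "DE/best/1" + crossover.
        ranks = np.empty(NP, dtype=int)
        ranks[np.argsort(fit)] = np.arange(1, NP + 1)  # worst gets rank NP
        while True:
            s = rng.integers(NP)
            if rng.random() < ranks[s] / NP:
                break
        best = P[np.argmin(fit)]
        r1, r2 = rng.choice([k for k in range(NP) if k != s], 2, replace=False)
        v = np.clip(best + F_de * (P[r1] - P[r2]), lo, hi)
        mask = rng.random(D) <= CR
        mask[rng.integers(D)] = True
        u_s = np.where(mask, v, P[s])
        if f(u_s) <= fit[s]:
            P[s], fit[s] = u_s, f(u_s)
    return float(fit.min())
```

Note that each iteration spends NP + 1 function evaluations: NP for the trial population and exactly one for the DE trial vector, which is the cost argument made in the introduction.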

4. Experimental Verifications

In this section, to verify the performance of HBD, we carry out comprehensive experimental tests on a suite of 28 benchmark functions proposed in the CEC-2013 competition [31]. These 28 benchmark functions include 5 unimodal functions (F1–F5), 15 basic multimodal functions (F6–F20), and 8 composition functions (F21–F28). More details about the 28 functions can be found in [31].

To make a fair comparison, we use the same parameters for BSA and HBD unless a change is mentioned. Each algorithm is performed 25 times for each function with the dimensions D = 10, 30, and 50, respectively. The population size of each algorithm is when and , while it is 30 in the case of . The maximum function evaluations are . The mutation factor F and the crossover rate CR are 0.8 and 0.9 for HBD, respectively. In addition, we use the boundary handling method given in [17].

To evaluate the performance of the algorithms, we first use Error as an evaluation indicator. Error, the function error value of the solution x obtained by an algorithm, is defined as Error = f(x) − f(x*), where x* is the global optimum of the function. In addition, the average and standard deviation of the best error values, presented as "AVGEr ± STDEr," are used in the different tables. Second, convergence graphs are employed to show the mean error values of the best solutions during the iteration process over all runs. Third, a Wilcoxon signed-rank test at the 5% significance level (α = 0.05) is used to show the significant differences between two algorithms. The "+" symbol shows that the null hypothesis is rejected at the 5% significance level and HBD outperforms BSA, the "−" symbol indicates that the null hypothesis is rejected at the 5% significance level and BSA exceeds HBD, and the "=" symbol reveals that the null hypothesis is accepted at the 5% significance level and HBD ties BSA. Additionally, we also give the total number of statistically significant cases at the bottom of each table.

4.1. The Effect of HBD

To show the effect of the proposed algorithm, Table 1 lists the average error values obtained by BSA and HBD for the 30-dimensional benchmark functions. For the unimodal functions F1–F5, HBD overall obtains better average error values than BSA does. For instance, HBD gains the global optimum on F5 and brings high-quality solutions to F2–F4 in terms of average error values. HBD exhibits a little inferiority to BSA on F1, but the difference between the two approaches is not significant. For the 15 basic multimodal functions F6–F20, in terms of average error values, HBD brings superior solutions on 10 out of 15 functions, equal ones on 2 out of 15 functions, and inferior ones on 3 out of 15 functions. However, according to the results of the Wilcoxon test, the differences between HBD and BSA are not significant on the 3 functions where HBD obtains lower solution quality. For the composition functions F21–F28, HBD and BSA draw a tie on F26 and F28 by the aid of average error values; however, HBD significantly outperforms BSA on them according to the results of the Wilcoxon test. Moreover, according to average error values, HBD performs better than BSA on F23, F24, F25, and F27 but worse than BSA on F21 and F22. Nevertheless, the two algorithms are mostly not significantly different on these 8 composition functions in terms of the results of the Wilcoxon test. In summary, according to "+/=/−," HBD wins and ties BSA on 12 and 16 out of 28 benchmark functions, respectively.


Table 1: Comparison of BSA and HBD on the 30-dimensional benchmark functions.

F   | BSA (AVGEr ± STDEr) | HBD (AVGEr ± STDEr) |   | p value
F1  | 4.17E−30 ± 1.40E−29 | 1.36E−29 ± 4.17E−29 | = | 0.359375
F2  | 1.22E+06 ± 5.54E+05 | 3.15E+05 ± 1.52E+05 | + | 0.000014
F3  | 7.52E+06 ± 8.54E+06 | 4.38E+06 ± 8.11E+06 | + | 0.045010
F4  | 1.25E+04 ± 3.47E+03 | 5.05E+03 ± 2.23E+03 | + | 0.000016
F5  | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | = | 1.000000
F6  | 3.04E+01 ± 2.54E+01 | 1.12E+01 ± 1.47E+01 | + | 0.001721
F7  | 7.39E+01 ± 1.03E+01 | 5.03E+01 ± 1.61E+01 | + | 0.000101
F8  | 2.09E+01 ± 5.95E−02 | 2.10E+01 ± 3.17E−02 | = | 0.142532
F9  | 2.70E+01 ± 2.29E+00 | 2.12E+01 ± 4.26E+00 | + | 0.000081
F10 | 1.78E−01 ± 1.34E−01 | 9.15E−02 ± 5.25E−02 | + | 0.009417
F11 | 7.96E−02 ± 2.75E−01 | 3.58E−01 ± 6.34E−01 | = | 0.062500
F12 | 8.41E+01 ± 1.51E+01 | 8.09E+01 ± 1.53E+01 | = | 0.396679
F13 | 1.44E+02 ± 2.37E+01 | 1.29E+02 ± 2.90E+01 | = | 0.061480
F14 | 3.70E+00 ± 1.68E+00 | 2.83E+00 ± 1.82E+00 | + | 0.034670
F15 | 3.73E+03 ± 4.27E+02 | 3.50E+03 ± 4.82E+02 | = | 0.097970
F16 | 1.31E+00 ± 2.14E−01 | 1.31E+00 ± 2.38E−01 | = | 0.903627
F17 | 3.09E+01 ± 1.97E−01 | 3.09E+01 ± 1.79E−01 | = | 0.840072
F18 | 1.20E+02 ± 1.80E+01 | 9.46E+01 ± 2.11E+01 | + | 0.000980
F19 | 1.16E+00 ± 1.79E−01 | 1.23E+00 ± 2.28E−01 | = | 0.312970
F20 | 1.15E+01 ± 4.55E−01 | 1.11E+01 ± 5.89E−01 | = | 0.057836
F21 | 2.78E+02 ± 6.63E+01 | 2.95E+02 ± 7.95E+01 | = | 0.431762
F22 | 4.45E+01 ± 1.91E+01 | 4.48E+01 ± 1.35E+01 | = | 0.443172
F23 | 4.46E+03 ± 5.40E+02 | 4.16E+03 ± 5.07E+02 | = | 0.103553
F24 | 2.32E+02 ± 1.17E+01 | 2.28E+02 ± 8.92E+00 | = | 0.157770
F25 | 2.87E+02 ± 1.41E+01 | 2.80E+02 ± 8.80E+00 | = | 0.087527
F26 | 2.00E+02 ± 2.17E−02 | 2.00E+02 ± 6.81E−03 | + | 0.000020
F27 | 8.80E+02 ± 1.49E+02 | 7.52E+02 ± 1.33E+02 | + | 0.003822
F28 | 3.00E+02 ± 1.65E−13 | 3.00E+02 ± 1.32E−13 | + | 0.016377
+/=/−: 12/16/0

In order to further show the convergence speed of HBD, the convergence curves of the two algorithms on six selected benchmark functions are given in Figure 1.

It is observed that the selected functions can be divided into four groups, and overall the convergence performance of HBD is better than that of BSA. For the first group of functions, in which HBD has significantly better average error values than BSA, HBD also converges faster than BSA in terms of the convergence curves seen in Figures 1(c) and 1(f). For the second group, where HBD cannot bring solutions of significantly higher quality, HBD still converges faster than BSA does. Third, for the function on which both algorithms reach the global optimum, the convergence performance of HBD is better than that of BSA. Additionally, HBD outperforms BSA according to the convergence curve seen in Figure 1(a), although the average error values obtained by HBD there are inferior, but not significantly so, to those of BSA.

All in all, HBD overall outperforms BSA in terms of solution quality and convergence speed. This is because DE with an exploitative mutation strategy enhances the exploitation capability of HBD without expending too many function evaluations.

4.2. Scalability of HBD

In this section, to analyze how the performance of HBD is affected by the problem dimensionality, a scalability study is carried out on the 28 functions with D = 10 and D = 50, since these functions are defined for up to 50 dimensions [31]. The results are tabulated in Table 2.


Table 2: Comparison of BSA and HBD on the 10- and 50-dimensional benchmark functions.

F   | D = 10: BSA         | D = 10: HBD         |   | p value  | D = 50: BSA         | D = 50: HBD         |   | p value
F1  | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | = | 1.000000 | 1.69E−29 ± 3.04E−29 | 1.48E−29 ± 4.26E−29 | = | 0.582031
F2  | 8.59E+04 ± 2.30E+05 | 4.75E+03 ± 5.65E+03 | + | 0.000065 | 2.77E+06 ± 8.09E+05 | 1.11E+06 ± 4.38E+05 | + | 0.000014
F3  | 5.46E+04 ± 1.36E+05 | 1.44E+02 ± 6.95E+02 | + | 0.000012 | 4.95E+07 ± 4.00E+07 | 1.25E+07 ± 1.20E+07 | + | 0.000140
F4  | 5.04E+02 ± 3.66E+02 | 5.67E+01 ± 1.10E+02 | + | 0.000014 | 3.22E+04 ± 5.16E+03 | 2.04E+04 ± 4.88E+03 | + | 0.000023
F5  | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | = | 1.000000 | 3.06E−38 ± 1.53E−37 | 0.00E+00 ± 0.00E+00 | = | 0.250000
F6  | 1.60E+00 ± 3.25E+00 | 2.75E+00 ± 4.50E+00 | = | 0.220852 | 5.42E+01 ± 1.87E+01 | 4.42E+01 ± 7.78E−01 | + | 0.000046
F7  | 6.96E+00 ± 6.85E+00 | 1.73E+00 ± 2.49E+00 | + | 0.045010 | 8.48E+01 ± 1.02E+01 | 8.38E+01 ± 1.06E+01 | = | 0.756995
F8  | 2.04E+01 ± 7.79E−02 | 2.04E+01 ± 5.55E−02 | = | 0.989266 | 2.11E+01 ± 3.33E−02 | 2.11E+01 ± 2.32E−02 | = | 0.562928
F9  | 3.54E+00 ± 8.43E−01 | 2.47E+00 ± 1.29E+00 | + | 0.006313 | 5.44E+01 ± 3.73E+00 | 5.50E+01 ± 3.85E+00 | = | 0.657069
F10 | 8.49E−02 ± 3.51E−02 | 7.75E−02 ± 5.50E−02 | = | 0.989266 | 3.99E−01 ± 2.11E−01 | 1.14E−01 ± 6.51E−02 | + | 0.000018
F11 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | = | 1.000000 | 7.96E−02 ± 2.75E−01 | 0.00E+00 ± 0.00E+00 | = | 0.062500
F12 | 1.10E+01 ± 3.31E+00 | 1.09E+01 ± 3.48E+00 | = | 0.599802 | 1.92E+02 ± 2.87E+01 | 1.68E+02 ± 2.65E+01 | + | 0.007423
F13 | 1.68E+01 ± 7.18E+00 | 1.60E+01 ± 7.27E+00 | = | 0.736617 | 3.25E+02 ± 3.81E+01 | 2.88E+02 ± 4.17E+01 | + | 0.001569
F14 | 1.31E−01 ± 5.44E−02 | 1.28E−01 ± 8.94E−02 | + | 0.000296 | 2.17E+01 ± 5.08E+00 | 2.12E+01 ± 4.56E+00 | = | 0.287862
F15 | 6.10E+02 ± 1.78E+02 | 4.67E+02 ± 1.74E+02 | + | 0.022988 | 8.25E+03 ± 4.91E+02 | 8.13E+03 ± 4.40E+02 | = | 0.142532
F16 | 6.54E−01 ± 1.92E−01 | 6.23E−01 ± 2.50E−01 | = | 0.989266 | 1.88E+00 ± 2.67E−01 | 1.82E+00 ± 2.86E−01 | = | 0.427339
F17 | 7.77E+00 ± 2.88E+00 | 8.37E+00 ± 2.87E+00 | = | 0.427339 | 5.44E+01 ± 6.11E−01 | 5.45E+01 ± 6.85E−01 | = | 0.924971
F18 | 2.67E+01 ± 4.85E+00 | 2.05E+01 ± 4.06E+00 | + | 0.000665 | 2.65E+02 ± 2.44E+01 | 2.45E+02 ± 3.23E+01 | + | 0.039554
F19 | 2.45E−01 ± 9.51E−02 | 2.15E−01 ± 1.19E−01 | = | 0.657069 | 2.67E+00 ± 3.30E−01 | 2.80E+00 ± 2.59E−01 | = | 0.312970
F20 | 2.92E+00 ± 3.60E−01 | 2.57E+00 ± 4.67E−01 | + | 0.019941 | 2.09E+01 ± 7.34E−01 | 2.08E+01 ± 4.44E−01 | = | 0.819095
F21 | 3.16E+02 ± 1.32E+02 | 3.04E+02 ± 1.21E+02 | = | 0.444824 | 6.94E+02 ± 4.53E+02 | 8.72E+02 ± 3.27E+02 | = | 0.675764
F22 | 1.49E+01 ± 4.29E+00 | 1.42E+01 ± 4.11E+00 | = | 0.544910 | 5.96E+01 ± 1.39E+01 | 6.10E+01 ± 9.87E+00 | = | 0.544910
F23 | 8.60E+02 ± 1.53E+02 | 8.06E+02 ± 2.04E+02 | = | 0.818641 | 9.53E+03 ± 6.62E+02 | 9.41E+03 ± 7.87E+02 | = | 0.599802
F24 | 1.50E+02 ± 3.69E+01 | 1.56E+02 ± 4.44E+01 | = | 0.716423 | 2.70E+02 ± 1.04E+01 | 2.58E+02 ± 1.09E+01 | + | 0.001569
F25 | 1.93E+02 ± 2.44E+01 | 1.84E+02 ± 3.39E+01 | = | 0.241820 | 3.79E+02 ± 1.40E+01 | 3.65E+02 ± 2.10E+01 | + | 0.013817
F26 | 1.13E+02 ± 5.23E+00 | 1.11E+02 ± 1.87E+01 | = | 0.109386 | 2.00E+02 ± 7.33E−02 | 2.00E+02 ± 4.44E−02 | + | 0.000012
F27 | 3.13E+02 ± 2.39E+01 | 3.32E+02 ± 4.73E+01 | = | 0.676637 | 1.53E+03 ± 1.89E+02 | 1.39E+03 ± 2.01E+02 | + | 0.028314
F28 | 2.20E+02 ± 1.00E+02 | 2.04E+02 ± 1.02E+02 | = | 0.802856 | 4.00E+02 ± 1.33E−13 | 4.00E+02 ± 1.43E−13 | + | 0.000001
+/=/−: 9/19/0 (D = 10); 13/15/0 (D = 50)

In the case of D = 10, according to the average error values shown in Table 2, HBD exhibits superiority on the majority of functions and inferiority on only a handful of them. Additionally, in terms of the total of "+/=/−," HBD wins and ties BSA on 9 and 19 out of 28 functions, respectively.

When D = 50, HBD can still bring solutions of higher quality than BSA does on most of the benchmark functions. Moreover, HBD outperforms and ties BSA on 13 and 15 out of 28 functions, respectively.

In summary, this suggests that the advantage of HBD over BSA is stable as the dimensionality of the problems increases.

4.3. The Effect of Mutation Strategy

In HBD, the "DE/best/1" mutation strategy is used by default to enhance the exploitation capability. To show how the performance of HBD is influenced by other exploitative mutation strategies, experiments are carried out on the benchmark functions, and the results are listed in Table 3, where cHBD and bHBD denote HBD using "DE/current-to-best/1" and "DE/best/2," respectively. The results obtained by cHBD and bHBD that are more accurate than those obtained by HBD are marked in bold.


Table 3: Comparison of cHBD, HBD, and bHBD on the 30-dimensional benchmark functions. The sign and p value following the cHBD column refer to HBD versus cHBD; those following the HBD column refer to HBD versus bHBD.

F   | cHBD (AVGEr ± STDEr) |   | p value  | HBD (AVGEr ± STDEr) |   | p value  | bHBD (AVGEr ± STDEr)
F1  | 1.01E−30 ± 3.49E−30 | = | 0.125000 | 1.36E−29 ± 4.17E−29 | = | 0.328125 | 6.31E−30 ± 1.67E−29
F2  | 2.97E+05 ± 1.51E+05 | = | 0.676637 | 3.15E+05 ± 1.52E+05 | + | 0.000240 | 7.07E+05 ± 3.84E+05
F3  | 3.77E+06 ± 7.51E+06 | = | 0.618641 | 4.38E+06 ± 8.11E+06 | = | 0.637733 | 4.25E+06 ± 7.27E+06
F4  | 4.72E+03 ± 1.90E+03 | = | 0.736617 | 5.05E+03 ± 2.23E+03 | + | 0.006848 | 6.55E+03 ± 2.23E+03
F5  | 0.00E+00 ± 0.00E+00 | = | 1.000000 | 0.00E+00 ± 0.00E+00 | = | 1.000000 | 0.00E+00 ± 0.00E+00
F6  | 1.12E+01 ± 1.34E+01 | = | 0.326049 | 1.12E+01 ± 1.47E+01 | = | 0.562928 | 8.98E+00 ± 5.93E+00
F7  | 5.69E+01 ± 1.50E+01 | = | 0.142532 | 5.03E+01 ± 1.61E+01 | + | 0.003822 | 6.58E+01 ± 1.69E+01
F8  | 2.09E+01 ± 4.71E−02 | = | 0.264150 | 2.10E+01 ± 3.17E−02 | = | 0.287862 | 2.09E+01 ± 4.67E−02
F9  | 2.61E+01 ± 3.12E+00 | + | 0.000126 | 2.12E+01 ± 4.26E+00 | + | 0.000446 | 2.65E+01 ± 3.29E+00
F10 | 8.18E−02 ± 5.50E−02 | = | 0.509755 | 9.15E−02 ± 5.25E−02 | = | 0.300241 | 8.06E−02 ± 4.77E−02
F11 | 0.00E+00 ± 0.00E+00 | − | 0.015625 | 3.58E−01 ± 6.34E−01 | − | 0.015625 | 0.00E+00 ± 0.00E+00
F12 | 8.19E+01 ± 1.61E+01 | = | 0.427339 | 8.09E+01 ± 1.53E+01 | = | 0.065311 | 9.01E+01 ± 1.78E+01
F13 | 1.26E+02 ± 2.58E+01 | = | 0.798248 | 1.29E+02 ± 2.90E+01 | = | 0.174210 | 1.39E+02 ± 2.67E+01
F14 | 4.01E+00 ± 1.94E+00 | + | 0.032428 | 2.83E+00 ± 1.82E+00 | = | 0.082653 | 3.88E+00 ± 2.08E+00
F15 | 3.80E+03 ± 4.02E+02 | + | 0.017253 | 3.50E+03 ± 4.82E+02 | + | 0.014889 | 3.81E+03 ± 2.84E+02
F16 | 1.29E+00 ± 2.45E−01 | = | 0.967806 | 1.31E+00 ± 2.38E−01 | = | 0.924971 | 1.30E+00 ± 2.09E−01
F17 | 3.10E+01 ± 1.75E−01 | = | 0.165837 | 3.09E+01 ± 1.79E−01 | = | 0.989266 | 3.09E+01 ± 1.55E−01
F18 | 1.17E+02 ± 1.58E+01 | + | 0.000665 | 9.46E+01 ± 2.11E+01 | + | 0.021418 | 1.10E+02 ± 1.99E+01
F19 | 1.27E+00 ± 2.22E−01 | = | 0.427339 | 1.23E+00 ± 2.28E−01 | = | 0.924971 | 1.22E+00 ± 2.55E−01
F20 | 1.12E+01 ± 6.80E−01 | = | 0.443172 | 1.11E+01 ± 5.89E−01 | = | 0.121828 | 1.14E+01 ± 4.65E−01
F21 | 3.18E+02 ± 8.04E+01 | = | 0.300009 | 2.95E+02 ± 7.95E+01 | + | 0.040267 | 3.49E+02 ± 8.29E+01
F22 | 5.28E+01 ± 2.67E+01 | = | 0.367385 | 4.48E+01 ± 1.35E+01 | = | 0.696425 | 4.58E+01 ± 2.05E+01
F23 | 4.37E+03 ± 4.03E+02 | = | 0.121828 | 4.16E+03 ± 5.07E+02 | + | 0.039554 | 4.48E+03 ± 4.85E+02
F24 | 2.28E+02 ± 7.11E+00 | = | 0.861162 | 2.28E+02 ± 8.92E+00 | = | 0.946369 | 2.28E+02 ± 8.83E+00
F25 | 2.85E+02 ± 2.02E+01 | = | 0.054374 | 2.80E+02 ± 8.80E+00 | = | 0.275832 | 2.85E+02 ± 1.03E+01
F26 | 2.00E+02 ± 8.66E−03 | = | 0.637733 | 2.00E+02 ± 6.81E−03 | + | 0.022988 | 2.00E+02 ± 9.50E−03
F27 | 7.68E+02 ± 1.45E+02 | = | 0.736617 | 7.52E+02 ± 1.33E+02 | = | 0.092631 | 8.36E+02 ± 1.67E+02
F28 | 2.94E+02 ± 2.96E+01 | = | 0.161513 | 3.00E+02 ± 1.32E−13 | = | 0.808365 | 3.00E+02 ± 1.36E−13
+/=/−: 4/23/1 (HBD versus cHBD); 9/18/1 (HBD versus bHBD)

From Table 3, in terms of the average error values, bHBD shows higher accuracy than HBD on a few functions, since "DE/best/2" usually exhibits better exploration than "DE/best/1" because of the additional difference of randomly selected individuals in the former [23]. cHBD also gains solutions of higher accuracy than HBD on a handful of functions because "DE/current-to-best/1," a modification combining "DE/best/1" and "DE/rand/1" [28], shows better exploration than "DE/best/1." In other words, for a few functions, "DE/best/2" and "DE/current-to-best/1" balance the exploration and exploitation capabilities of HBD better. For example, bHBD and cHBD bring solutions of higher quality on several functions; in particular, they reach the global optimum on F11. However, for most of the functions, HBD with "DE/best/1" performs better than cHBD and bHBD.

Additionally, Table 4 reports the results of the multiple-problem Wilcoxon test, performed similarly to [29, 32], between HBD and its variants over all functions. We can see from Table 4 that HBD is significantly better than bHBD, and HBD obtains a higher R+ value than R− value against cHBD, although the two are not significantly different. Therefore, HBD uses "DE/best/1" as a tradeoff.


Table 4: Results of the multiple-problem Wilcoxon test between HBD and its variants over all functions.

Algorithm       | R+  | R−  | p value  | α = 0.05 | α = 0.1
HBD versus cHBD | 235 | 143 | 0.269095 | =        | =
HBD versus bHBD | 276 | 75  | 0.010695 | +        | +

4.4. The Effect of Hybrid Schema

In this section, we analyze how the performance of HBD is affected by the hybrid schema. Firstly, to show the effect of more than one individual being optimized by DE, an algorithm called aHBD, which uses DE to optimize the whole population, is compared with HBD. Secondly, we add a probability to aHBD to control the use of DE and propose paHBD. In paHBD, if a random number drawn from the uniform distribution between 0 and 1 is less than p_d, then DE is invoked. The probability p_d is defined as follows:

p_d = FEs / MaxFEs, (12)

where FEs is the number of function evaluations already spent and MaxFEs is the maximum number of function evaluations. Additionally, BSADE is compared with HBD to show their differences.
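Under the natural reading of the definition above, p_d is simply the fraction of the evaluation budget already consumed, so DE is invoked more and more often as the search proceeds. A one-line sketch (the function name is ours):

```python
def de_invocation_probability(fes, max_fes):
    """paHBD's control probability (assumed form p_d = FEs / MaxFEs):
    DE is invoked with growing probability as evaluations accumulate."""
    return fes / max_fes
```

For example, at half the budget DE is invoked with probability 0.5, and near the end it is invoked almost every iteration.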

Table 5 lists the error values obtained by aHBD, paHBD, BSADE, and HBD for the 28 functions at D = 30. It can be observed that HBD wins, ties, and loses against aHBD on 10, 12, and 6 out of 28 functions in terms of "+/=/−," respectively. This indicates that optimizing more individuals with DE costs more function evaluations when DE is embedded directly behind BSA, reducing the number of iteration cycles and thus yielding poor performance on most functions. Regarding BSADE, since BSA and DE are invoked in different stages and cannot exchange population information, it is clear that this schema cannot balance exploitation and exploration well. Thus, compared with BSADE, HBD brings solutions of higher accuracy on most functions. Moreover, HBD wins, ties, and loses against BSADE on 8, 17, and 3 out of 28 functions with the help of "+/=/−," respectively. In contrast, paHBD uses the probability p_d to control the use of DE, which decreases the cost of function evaluations in the early evolution stage; thus, paHBD performs almost similarly to HBD according to "+/=/−."


F1–F28 denote the CEC-2013 benchmark functions. For each competitor, the columns give AVGEr ± STDEr, the outcome of the comparison with HBD ("+": HBD significantly better; "=": no significant difference; "−": HBD significantly worse), and the p value.

| Fn | aHBD (AVGEr ± STDEr) | | p value | paHBD (AVGEr ± STDEr) | | p value | BSADE (AVGEr ± STDEr) | | p value | HBD (AVGEr ± STDEr) |
| F1 | 1.29E−28 ± 1.18E−28 | + | 0.000292 | 5.53E−29 ± 8.39E−29 | + | 0.030946 | 1.11E−29 ± 4.11E−29 | = | 0.666016 | 1.36E−29 ± 4.17E−29 |
| F2 | 5.01E+04 ± 2.88E+04 | − | 0.000012 | 5.64E+04 ± 3.39E+04 | − | 0.000012 | 6.80E+04 ± 3.33E+04 | − | 0.000012 | 3.15E+05 ± 1.52E+05 |
| F3 | 1.13E+07 ± 3.20E+07 | = | 0.924971 | 1.74E+06 ± 2.61E+06 | = | 0.201222 | 5.21E+06 ± 1.24E+07 | = | 0.696425 | 4.38E+06 ± 8.11E+06 |
| F4 | 5.61E+01 ± 5.41E+01 | − | 0.000012 | 2.40E+02 ± 2.17E+02 | − | 0.000012 | 4.02E+02 ± 2.24E+02 | − | 0.000012 | 5.05E+03 ± 2.23E+03 |
| F5 | 4.26E−16 ± 1.48E−15 | + | 0.007813 | 2.84E−16 ± 1.42E−15 | = | 0.062500 | 0.00E+00 ± 0.00E+00 | = | 1.000000 | 0.00E+00 ± 0.00E+00 |
| F6 | 5.01E+00 ± 5.53E+00 | − | 0.017253 | 4.67E+00 ± 3.66E+00 | − | 0.003822 | 1.18E+01 ± 1.25E+01 | = | 0.396679 | 1.12E+01 ± 1.47E+01 |
| F7 | 4.31E+01 ± 1.62E+01 | = | 0.097970 | 4.74E+01 ± 1.50E+01 | = | 0.492633 | 5.87E+01 ± 1.63E+01 | = | 0.097970 | 5.03E+01 ± 1.61E+01 |
| F8 | 2.09E+01 ± 4.16E−02 | = | 0.300241 | 2.09E+01 ± 6.05E−02 | = | 0.509755 | 2.09E+01 ± 4.74E−02 | = | 0.210872 | 2.10E+01 ± 3.17E−02 |
| F9 | 2.18E+01 ± 3.56E+00 | = | 0.861162 | 2.18E+01 ± 3.42E+00 | = | 0.637733 | 2.60E+01 ± 2.82E+00 | + | 0.000296 | 2.12E+01 ± 4.26E+00 |
| F10 | 6.25E−02 ± 3.76E−02 | − | 0.014889 | 6.11E−02 ± 3.92E−02 | − | 0.022988 | 2.04E−01 ± 1.19E−01 | + | 0.000126 | 9.15E−02 ± 5.25E−02 |
| F11 | 3.03E+01 ± 1.53E+01 | + | 0.000012 | 3.06E+00 ± 2.05E+00 | + | 0.000083 | 7.96E−02 ± 2.75E−01 | = | 0.062500 | 3.58E−01 ± 6.34E−01 |
| F12 | 7.43E+01 ± 1.98E+01 | = | 0.264150 | 8.12E+01 ± 1.39E+01 | = | 0.967806 | 8.64E+01 ± 2.23E+01 | = | 0.241820 | 8.09E+01 ± 1.53E+01 |
| F13 | 1.38E+02 ± 2.88E+01 | = | 0.509755 | 1.35E+02 ± 2.66E+01 | = | 0.367385 | 1.43E+02 ± 1.76E+01 | = | 0.054374 | 1.29E+02 ± 2.90E+01 |
| F14 | 5.85E+01 ± 5.92E+01 | + | 0.000012 | 7.23E+00 ± 4.53E+00 | + | 0.000266 | 7.33E+00 ± 4.72E+00 | + | 0.000808 | 2.83E+00 ± 1.82E+00 |
| F15 | 3.45E+03 ± 4.72E+02 | = | 0.339479 | 3.49E+03 ± 5.12E+02 | = | 0.861162 | 3.39E+03 ± 3.92E+02 | = | 0.381860 | 3.50E+03 ± 4.82E+02 |
| F16 | 1.49E+00 ± 3.48E−01 | + | 0.008041 | 1.14E+00 ± 5.92E−01 | = | 0.676637 | 1.23E+00 ± 5.19E−01 | = | 0.882352 | 1.31E+00 ± 2.38E−01 |
| F17 | 3.92E+01 ± 9.55E+00 | + | 0.000012 | 3.17E+01 ± 7.18E−01 | + | 0.000023 | 3.17E+01 ± 5.55E−01 | + | 0.000025 | 3.09E+01 ± 1.79E−01 |
| F18 | 1.05E+02 ± 2.28E+01 | = | 0.121828 | 8.78E+01 ± 2.07E+01 | = | 0.300241 | 8.84E+01 ± 1.83E+01 | = | 0.411840 | 9.46E+01 ± 2.11E+01 |
| F19 | 2.10E+00 ± 1.09E+00 | + | 0.000036 | 1.44E+00 ± 2.94E−01 | + | 0.011000 | 1.47E+00 ± 2.13E−01 | + | 0.000602 | 1.23E+00 ± 2.28E−01 |
| F20 | 1.07E+01 ± 5.66E−01 | − | 0.004530 | 1.13E+01 ± 4.56E−01 | = | 0.562928 | 1.13E+01 ± 6.33E−01 | = | 0.427339 | 1.11E+01 ± 5.89E−01 |
| F21 | 3.26E+02 ± 9.08E+01 | = | 0.121477 | 3.41E+02 ± 9.20E+01 | = | 0.061407 | 2.81E+02 ± 6.67E+01 | = | 0.894867 | 2.95E+02 ± 7.95E+01 |
| F22 | 2.47E+02 ± 2.57E+02 | + | 0.000014 | 5.78E+01 ± 3.61E+01 | = | 0.492633 | 6.63E+01 ± 2.99E+01 | + | 0.002947 | 4.48E+01 ± 1.35E+01 |
| F23 | 3.86E+03 ± 5.21E+02 | = | 0.252813 | 4.21E+03 ± 5.58E+02 | = | 0.339479 | 4.19E+03 ± 4.20E+02 | = | 0.819095 | 4.16E+03 ± 5.07E+02 |
| F24 | 2.36E+02 ± 8.94E+00 | + | 0.008705 | 2.27E+02 ± 9.76E+00 | = | 0.562928 | 2.35E+02 ± 9.21E+00 | + | 0.009417 | 2.28E+02 ± 8.92E+00 |
| F25 | 2.75E+02 ± 1.32E+01 | = | 0.103553 | 2.80E+02 ± 1.02E+01 | = | 0.756995 | 2.86E+02 ± 1.34E+01 | = | 0.069337 | 2.80E+02 ± 8.80E+00 |
| F26 | 2.11E+02 ± 3.75E+01 | + | 0.002470 | 2.00E+02 ± 3.02E−03 | − | 0.000016 | 2.00E+02 ± 1.85E−03 | − | 0.000012 | 2.00E+02 ± 6.81E−03 |
| F27 | 7.42E+02 ± 1.07E+02 | = | 0.882352 | 7.58E+02 ± 1.52E+02 | = | 0.989266 | 9.19E+02 ± 1.49E+02 | + | 0.000891 | 7.52E+02 ± 1.33E+02 |
| F28 | 3.00E+02 ± 1.90E−13 | + | 0.016377 | 3.00E+02 ± 1.95E−13 | + | 0.038947 | 3.00E+02 ± 1.65E−13 | = | 1.000000 | 3.00E+02 ± 1.32E−13 |
| +/=/− | 10/12/6 | | | 6/17/5 | | | 8/17/3 | | | |

In addition, we also perform the multiple-problem Wilcoxon test among HBD, aHBD, paHBD, and BSADE over the 28 functions and list the results in Table 6.


| Algorithm | R+ | R− | p value | α = 0.05 | α = 0.1 |
| HBD versus aHBD | 226 | 180 | 0.600457 | = | = |
| HBD versus paHBD | 197 | 209 | 0.891321 | = | = |
| HBD versus BSADE | 262.5 | 143.5 | 0.175450 | = | = |

It can be found from Table 6 that HBD does not differ significantly from aHBD, paHBD, and BSADE. However, HBD obtains higher R+ than R− values compared with aHBD and BSADE, respectively, while it obtains a slightly lower R+ than R− value in comparison with paHBD. This is because HBD yields slightly less accurate solutions on a few functions, resulting in a higher ranking. Nevertheless, it indicates that the hybrid schema used in HBD is a reasonable choice.

4.5. The Effect of Probability Model

In HBD, the linear model shown in (10) is used to select one individual to optimize. It is worth pointing out that other models, for example, nonlinear ones, can also be adopted in our algorithm. In this section, we do not seek the optimal probability model but only analyze how different models influence performance. Thus, two models, similar to those used in [29, 33], are employed: a quadratic model and a sinusoidal model, formulated in (13) and (14), respectively. The average error values and the results of the multiple-problem Wilcoxon test are reported in Tables 7 and 8, respectively, where qHBD denotes HBD with the quadratic model and sHBD denotes HBD with the sinusoidal one.
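The selection mechanism can be sketched as follows. The paper's exact equations (10), (13), and (14) are not reproduced here, so the linear, quadratic, and sinusoidal shapes below are illustrative stand-ins; what matters is that each model maps an individual's fitness rank to a selection weight, biasing a roulette-wheel pick toward worse individuals.

```python
import numpy as np

# Illustrative stand-ins for the probability models: each maps an
# individual's fitness rank i (1 = best, NP = worst) to a selection weight.
def linear(i, NP):
    return i / NP

def quadratic(i, NP):
    return (i / NP) ** 2

def sinusoidal(i, NP):
    return np.sin(0.5 * np.pi * i / NP)

def select_index(model, NP, rng):
    """Roulette-wheel pick of one individual from the rank-sorted
    population (0-based index; a larger index means a worse rank)."""
    weights = np.array([model(i, NP) for i in range(1, NP + 1)])
    probs = weights / weights.sum()
    return rng.choice(NP, p=probs)

rng = np.random.default_rng(0)
picks = [select_index(sinusoidal, 30, rng) for _ in range(1000)]
print(np.mean(picks))  # clearly above 14.5, i.e. biased toward worse ranks
```

Because all three weight functions are increasing in the rank, each model concentrates the DE refinement on poorly performing individuals; the models differ only in how strongly they favor the worst ones.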


F1–F28 denote the CEC-2013 benchmark functions. The first comparison column tests qHBD against HBD and the second tests sHBD against HBD ("+": HBD significantly better; "=": no significant difference; "−": HBD significantly worse).

| Fn | qHBD (AVGEr ± STDEr) | | p value | HBD (AVGEr ± STDEr) | | p value | sHBD (AVGEr ± STDEr) |
| F1 | 5.05E−31 ± 2.52E−30 | = | 0.078125 | 1.36E−29 ± 4.17E−29 | = | 0.429688 | 6.18E−30 ± 2.22E−29 |
| F2 | 3.56E+05 ± 2.89E+05 | = | 0.798248 | 3.15E+05 ± 1.52E+05 | = | 0.736617 | 2.87E+05 ± 1.36E+05 |
| F3 | 4.94E+06 ± 7.32E+06 | = | 0.618641 | 4.38E+06 ± 8.11E+06 | = | 0.756995 | 3.57E+06 ± 6.05E+06 |
| F4 | 4.69E+03 ± 1.64E+03 | = | 0.736617 | 5.05E+03 ± 2.23E+03 | = | 0.903627 | 5.20E+03 ± 2.96E+03 |
| F5 | 2.84E−16 ± 1.42E−15 | = | 1.000000 | 0.00E+00 ± 0.00E+00 | = | 1.000000 | 0.00E+00 ± 0.00E+00 |
| F6 | 1.05E+01 ± 1.31E+01 | = | 0.381860 | 1.12E+01 ± 1.47E+01 | = | 0.861162 | 9.06E+00 ± 6.44E+00 |
| F7 | 5.68E+01 ± 1.66E+01 | = | 0.191898 | 5.03E+01 ± 1.61E+01 | = | 0.756995 | 5.19E+01 ± 1.93E+01 |
| F8 | 2.09E+01 ± 6.58E−02 | = | 0.509755 | 2.10E+01 ± 3.17E−02 | = | 0.165837 | 2.09E+01 ± 5.52E−02 |
| F9 | 2.38E+01 ± 3.78E+00 | + | 0.039554 | 2.12E+01 ± 4.26E+00 | = | 0.231167 | 2.24E+01 ± 3.57E+00 |
| F10 | 9.15E−02 ± 6.35E−02 | = | 0.264150 | 9.15E−02 ± 5.25E−02 | = | 0.527183 | 9.80E−02 ± 6.25E−02 |
| F11 | 4.38E−01 ± 9.56E−01 | = | 0.986328 | 3.58E−01 ± 6.34E−01 | = | 0.366699 | 1.99E−01 ± 4.06E−01 |
| F12 | 7.44E+01 ± 1.79E+01 | = | 0.073565 | 8.09E+01 ± 1.53E+01 | = | 0.374558 | 7.83E+01 ± 1.32E+01 |
| F13 | 1.26E+02 ± 2.53E+01 | = | 0.989266 | 1.29E+02 ± 2.90E+01 | = | 0.411840 | 1.26E+02 ± 2.94E+01 |
| F14 | 3.14E+00 ± 1.74E+00 | = | 0.618641 | 2.83E+00 ± 1.82E+00 | = | 0.051087 | 3.66E+00 ± 1.60E+00 |
| F15 | 3.60E+03 ± 4.07E+02 | = | 0.903627 | 3.50E+03 ± 4.82E+02 | = | 0.676637 | 3.49E+03 ± 3.98E+02 |
| F16 | 1.29E+00 ± 2.55E−01 | = | 0.967806 | 1.31E+00 ± 2.38E−01 | = | 0.696425 | 1.28E+00 ± 2.15E−01 |
| F17 | 3.10E+01 ± 1.86E−01 | = | 0.492633 | 3.09E+01 ± 1.79E−01 | = | 0.946369 | 3.09E+01 ± 2.11E−01 |
| F18 | 9.93E+01 ± 1.75E+01 | = | 0.396679 | 9.46E+01 ± 2.11E+01 | = | 0.840072 | 9.52E+01 ± 1.75E+01 |
| F19 | 1.20E+00 ± 2.23E−01 | = | 0.736617 | 1.23E+00 ± 2.28E−01 | = | 0.210872 | 1.26E+00 ± 2.04E−01 |
| F20 | 1.09E+01 ± 7.81E−01 | = | 0.165837 | 1.11E+01 ± 5.89E−01 | = | 0.191898 | 1.14E+01 ± 5.24E−01 |
| F21 | 3.10E+02 ± 8.68E+01 | = | 0.480701 | 2.95E+02 ± 7.95E+01 | = | 0.165492 | 3.28E+02 ± 8.02E+01 |
| F22 | 4.19E+01 ± 1.90E+01 | = | 0.443172 | 4.48E+01 ± 1.35E+01 | = | 0.287862 | 4.24E+01 ± 1.86E+01 |
| F23 | 4.16E+03 ± 4.19E+02 | = | 0.924971 | 4.16E+03 ± 5.07E+02 | = | 0.840072 | 4.12E+03 ± 3.91E+02 |
| F24 | 2.31E+02 ± 1.02E+01 | = | 0.637733 | 2.28E+02 ± 8.92E+00 | = | 0.777543 | 2.28E+02 ± 9.52E+00 |
| F25 | 2.76E+02 ± 9.42E+00 | − | 0.042207 | 2.80E+02 ± 8.80E+00 | = | 0.157770 | 2.75E+02 ± 1.43E+01 |
| F26 | 2.00E+02 ± 6.05E−03 | = | 0.777543 | 2.00E+02 ± 6.81E−03 | = | 0.840072 | 2.00E+02 ± 6.31E−03 |
| F27 | 7.82E+02 ± 1.48E+02 | = | 0.287862 | 7.52E+02 ± 1.33E+02 | = | 0.459336 | 7.90E+02 ± 1.21E+02 |
| F28 | 3.00E+02 ± 1.14E−13 | = | 1.000000 | 3.00E+02 ± 1.32E−13 | − | 0.025347 | 3.00E+02 ± 8.76E−14 |
| +/=/− | 1/26/1 | | | | 0/27/1 | | |


| Algorithm | R+ | R− | p value | α = 0.05 | α = 0.1 |
| HBD versus qHBD | 238 | 140 | 0.239106 | = | = |
| HBD versus sHBD | 143 | 208 | 0.409125 | = | = |

From Table 7, we can find that qHBD brings solutions with higher accuracy on 11 out of 28 functions compared with HBD, although the differences are not significant in terms of "+/=/−." In addition, qHBD obtains a lower rank sum (R− = 140) than HBD (R+ = 238), though the difference is not significant at the 5% or 10% significance level. This suggests that the linear model is a reasonable choice compared with the quadratic model. However, it is not the optimal one compared with the sinusoidal model. For instance, sHBD wins, ties with, and loses to HBD on 1, 27, and 0 out of 28 functions according to "+/=/−." Moreover, sHBD obtains a higher rank sum (R− = 208) than HBD (R+ = 143), though the difference is not significant at the 5% or 10% significance level.

4.6. Compared with Other Algorithms

Firstly, HBD is compared with 6 non-BSA approaches in [17], namely, PSO2011 [34], CMAES [35, 36], ABC [7], JDE [37], CLPSO [38], and SADE [39]. For a fair and convenient comparison, we use the 25 functions and the parameter settings employed and suggested in [17]. More details about these 25 functions can be found in the CEC-2005 competition [40]. Table 9 lists the mean and standard deviation of the fitness values obtained by the 7 approaches, where the results of the 6 non-BSA algorithms are taken from [17] directly. In addition, the results of the multiple-problem Wilcoxon test and the Friedman test, conducted as in [29], for the seven algorithms are listed in Tables 10 and 11, respectively.


F1–F25 denote the CEC-2005 benchmark functions; Mean and Std. are the mean and standard deviation of the fitness values.

| Fn | | PSO2011 | CMAES | ABC | JDE | CLPSO | SADE | HBD |
| F1 | Mean | −450.0000000000000000 | −450.0000000000000000 | −450.0000000000000000 | −450.0000000000000000 | −450.0000000000000000 | −450.0000000000000000 | −450.0000000000000000 |
| | Std. | 0.0000000000000000 | 0.0000000000000000 | 0.0000000000000000 | 0.0000000000000000 | 0.0000000000000000 | 0.0000000000000000 | 0.0000000000000000 |
| F2 | Mean | −450.0000000000000000 | −450.0000000000000000 | −449.9999999999220000 | −450.0000000000000000 | −418.8551838547760000 | −450.0000000000000000 | −450.0000000000000000 |
| | Std. | 0.0000000000000350 | 0.0000000000000000 | 0.0000000002052730 | 0.0000000000000615 | 51.0880511039985000 | 0.0000000000000000 | 0.0000000000000000 |
| F3 | Mean | −44.5873911956554000 | −450.0000000000000000 | 387131.2441213970000000 | −197.9999999999850000 | 62142.0000000000000000 | 245.0483283713550000 | −449.9999999999980000 |
| | Std. | 458.5794120016290000 | 0.0000000000000000 | 166951.7336592640000000 | 391.5169437474990000 | 34796.1785167236000000 | 790.6056596723160000 | 0.0000000000012208 |
| F4 | Mean | −450.0000000000000000 | 77982.4567046980000000 | 140.4509447125110000 | −414.0000000000000000 | −178.8320689185280000 | −450.0000000000000000 | −450.0000000000000000 |
| | Std. | 0.0000000000000460 | 131376.7365456010000000 | 217.2646715063190000 | 55.9309919639279000 | 394.8667499339530000 | 0.0000000000000000 | 0.0000000000000000 |
| F5 | Mean | −310.0000000000000000 | −310.0000000000000000 | −291.5327549384120000 | −271.0000000000000000 | 333.4108259915760000 | −309.9999999999960000 | −309.9999999999990000 |
| | Std. | 0.0000000000000000 | 0.0000000000000000 | 17.6942171217937000 | 60.5919079609218000 | 512.6920837704510000 | 0.0000000000013365 | 0.0000000000017057 |
| F6 | Mean | 393.4959999056240000 | 390.5315438816460000 | 391.2531452421960000 | 231.3986579112350000 | 405.5233436479650000 | 390.2657719408230000 | 390.2657719408230000 |
| | Std. | 16.0224965900462000 | 1.3783433976373800 | 3.7254660805238600 | 247.2968415284400000 | 10.7480096852869000 | 1.0114275384776600 | 1.0114275384776500 |
| F7 | Mean | 1091.0644335162500000 | 1087.2645466786700000 | 1087.0459486286000000 | 1141.0459486286000000 | 1087.0459486286000000 | 1087.0459486286000000 | −179.9480783793820000 |
| | Std. | 3.4976948942723200 | 0.5365230018001780 | 0.0000000000005585 | 83.8964879458918000 | 0.0000000000004264 | 0.0000000000004814 | 0.0353036104861447 |
| F8 | Mean | −119.8190232990920000 | −119.9261073509850000 | −119.7446063439080000 | −119.4459380180300000 | −119.9300269839980000 | −119.7727713703720000 | −119.8110293720060000 |
| | Std. | 0.0720107560874199 | 0.1554021446157740 | 0.0623866434489108 | 0.0927418223065644 | 0.0417913553101429 | 0.1248514853682450 | 0.0915374510176448 |
| F9 | Mean | −324.6046006320200000 | −306.5782069681560000 | −330.0000000000000000 | −329.8673387923880000 | −329.4361898676470000 | −329.9668346980970000 | −329.9668346980960000 |
| | Std. | 2.5082306041521000 | 21.9475396048756000 | 0.0000000000000000 | 0.3440030182812760 | 0.6229063711904190 | 0.1816538397880230 | 0.1816538397880230 |
| F10 | Mean | −324.3311322538170000 | −314.7871102989330000 | −306.7949047862760000 | −319.6763749798700000 | −321.7278926895280000 | −322.9689591871600000 | −320.3489045380250000 |
| | Std. | 3.0072222933667300 | 8.3115989308305500 | 5.1787864195870400 | 4.9173541245304800 | 1.8971778613701300 | 2.8254645254663600 | 4.1899978130687500 |
| F11 | Mean | 92.5640111212146000 | 90.7642785704506000 | 94.8428485804138000 | 93.2972315784963000 | 94.6109567642977000 | 91.6859083842723000 | 92.0330962077418000 |
| | Std. | 1.5827416781636900 | 26.4613831425879000 | 0.6869412813090850 | 1.8766951726453600 | 0.6689129174038950 | 0.9033073777915270 | 1.4570152623440700 |
| F12 | Mean | 18611.3142254809000000 | −70.0486708747625000 | −337.3273080760500000 | 400.3240208136310000 | −447.8870804905020000 | −394.5206365378250000 | −453.7580906206240000 |
| | Std. | 12508.7866126316000000 | 637.4585182420270000 | 56.5730759032367000 | 688.3344299264300000 | 11.8934815947019000 | 128.6353424718180000 | 7.9399805117226300 |
| F13 | Mean | −129.2373581503910000 | −128.7850616923410000 | −129.8343428775830000 | −129.6294851450880000 | −129.8382867796110000 | −129.7129164862680000 | −129.7795909188150000 |
| | Std. | 0.5986210944493790 | 0.6157633658946230 | 0.0408016481905455 | 0.1054759371085400 | 0.0372256921835666 | 0.0875456568200232 | 0.0974559125367515 |
| F14 | Mean | −298.2835926212850000 | −295.1290938304830000 | −296.9323391084610000 | −296.8839733969750000 | −297.5119726691150000 | −297.8403738182600000 | −297.1054029448800000 |
| | Std. | 0.5587676271753680 | 0.1634039984609270 | 0.2251930667702880 | 0.4330673614598290 | 0.3440115280624180 | 0.4536801689800720 | 0.3062178045203870 |
| F15 | Mean | 417.4613663019860000 | 492.5045364088000000 | 120.0000000000000000 | 326.6601114362900000 | 131.3550392249760000 | 234.2689845349590000 | 134.6767426915020000 |
| | Std. | 153.9215808771580000 | 181.5709657779580000 | 0.0000000000000188 | 174.6877238188330000 | 26.1407360548431000 | 150.7595974059750000 | 23.3648038768225000 |
| F16 | Mean | 221.4232628350220000 | 455.4454684594550000 | 258.8582688922670000 | 231.1806131539990000 | 231.5547154800990000 | 222.0256674919140000 | 231.8426524963750000 |
| | Std. | 12.2450207482898000 | 254.3583511786970000 | 11.8823213189685000 | 13.5473380962764000 | 11.5441451076421000 | 6.1841489800660300 | 10.3007095087283000 |
| F17 | Mean | 217.3338617866620000 | 681.0349114021570000 | 265.0370119084380000 | 228.7309024901770000 | 240.3635189964930000 | 221.1801916743850000 | 230.6398805937190000 |
| | Std. | 20.6685850658838000 | 488.0618274343640000 | 12.4033917090208000 | 12.3682716268631000 | 14.8435137485293000 | 5.7037006844690500 | 10.7176191104135000 |
| F18 | Mean | 668.9850326105730000 | 926.9488078829420000 | 513.8925774904480000 | 743.9859973770210000 | 892.4391527217660000 | 845.4504613493740000 | 626.6666666666660000 |
| | Std. | 275.8071370273340000 | 174.1027182659660000 | 31.0124861524005000 | 175.6497294240330000 | 79.1422224454971000 | 120.8505129523180000 | 245.0662589267800000 |
| F19 | Mean | 708.2979222913040000 | 831.2324139697050000 | 500.5478931040730000 | 776.5150806087790000 | 863.8929608090610000 | 809.7183195902260000 | 673.6943739862640000 |
| | Std. | 256.2419561521300000 | 289.7296413284470000 | 31.2240894705539000 | 160.7307526692470000 | 96.5618989087194000 | 147.3158109824600000 | 240.7952379798540000 |
| F20 | Mean | 711.2970397614200000 | 876.9306161887680000 | 483.2984167460740000 | 761.2954767038960000 | 844.6391674419360000 | 810.5227124472170000 | 568.3524718683710000 |
| | Std. | 258.9317052508320000 | 289.7296413284470000 | 99.3976740616107000 | 163.4084080635650000 | 113.6848457105400000 | 104.7139423525340000 | 264.1320553106420000 |
| F21 | Mean | 1117.8857079625100000 | 1258.1065536572400000 | 659.5351969346130000 | 959.3735119754180000 | 911.4640642691360000 | 990.8546718748010000 | 887.0165496528500000 |
| | Std. | 311.0011859260640000 | 359.7382897536570000 | 98.5410511961986000 | 240.5568407069990000 | 238.3180009803040000 | 235.1014092849970000 | 130.7951954796500000 |
| F22 | Mean | 1094.8305116977000000 | −7.159E+49 | 915.4958100611630000 | 1133.7536009808600000 | 1075.5292326436900000 | 1094.6823697304900000 | 1033.4073460688400000 |
| | Std. | 121.3539576317800000 | 4.387E+50 | 242.1993331983530000 | 42.1171260000361000 | 166.9355145236330000 | 87.9884000140656000 | 170.5780088721150000 |
| F23 | Mean | 1304.3661550124000000 | 1159.9280867973000000 | 830.2290165794410000 | 1167.9040488743800000 | 1070.4327462836400000 | 1105.2511774948600000 | 985.5892101103390000 |
| | Std. | 262.1065863453340000 | 742.1215416320490000 | 60.2286903507069000 | 236.7325108248320000 | 203.0676627074300000 | 190.6172874229610000 | 140.3217672954560000 |
| F24 | Mean | 500.0000000000000000 | 653.3355378428050000 | 460.0000000000000000 | 510.0000000000000000 | 493.3333333333400000 | 490.0000000000000000 | 460.0000000000000000 |
| | Std. | 103.7237710925280000 | 302.5312999719650000 | 0.0000000000016493 | 113.7147065368360000 | 137.2973951415090000 | 91.5385729888094000 | 0.0000000000000000 |
| F25 | Mean | 1107.9038127876700000 | 1401.6553278264300000 | 930.4565414149210000 | 1072.9924659809200000 | 1258.5157766524700000 | 1074.3695435628600000 | 632.0799172389040000 |
| | Std. | 127.9566489362040000 | 253.2428066220210000 | 87.9959072391079000 | 2.2606058314671500 | 241.4024507676890000 | 2.8314182838917800 | 3.5935258481745500 |


| Algorithm | R+ | R− | p value | α = 0.05 | α = 0.1 |
| HBD versus SPSO2011 | 261.95 | 63.05 | 0.007453 | = | + |
| HBD versus CMAES | 275.90 | 49.10 | 0.002279 | + | + |
| HBD versus ABC | 179.93 | 145.07 | 0.639106 | = | = |
| HBD versus JDE | 276.00 | 49.00 | 0.002259 | + | + |
| HBD versus CLPSO | 291.00 | 34.00 | 0.000545 | + | + |
| HBD versus SADE | 234.50 | 90.50 | 0.052709 | = | + |


| Algorithm | SPSO2011 | CMAES | ABC | JDE | CLPSO | SADE | HBD |
| Ranking | 4.06 | 5.24 | 3.58 | 4.44 | 4.60 | 3.42 | 2.66 |

From Table 9, we find that each algorithm does well on some functions according to its average error value. For instance, PSO2011, CMAES, ABC, JDE, CLPSO, SADE, and HBD perform best on 8, 5, 9, 3, 3, 3, and 7 out of 25 functions, respectively. However, Table 10 shows that HBD gets higher R+ than R− values in all cases, which suggests that HBD is better than the other 6 algorithms. Moreover, for the Wilcoxon test there are significant differences in three cases at the 5% significance level for the CEC-2005 functions. Furthermore, with respect to the average rankings of the different algorithms by the Friedman test, it can be seen clearly from Table 11 that HBD offers the best overall performance, while SADE is the second best, followed by ABC, PSO2011, JDE, CLPSO, and CMAES.
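The Friedman average rankings reported in Tables 11 and 13 can be reproduced by ranking the algorithms on each problem (1 = best mean error) and then averaging the ranks per algorithm. A minimal sketch with a synthetic error matrix:

```python
import numpy as np
from scipy.stats import rankdata

# Synthetic matrix: rows are problems, columns are algorithms, entries are
# mean errors (lower is better). Algorithm B is best on every problem here.
errors = np.array([
    [0.3, 0.1, 0.2],
    [0.5, 0.2, 0.4],
    [0.9, 0.3, 0.6],
])
ranks = np.apply_along_axis(rankdata, 1, errors)  # per-problem ranks, ties averaged
avg_rank = ranks.mean(axis=0)                     # Friedman average ranking
print(avg_rank)  # → [3. 1. 2.]
```

The algorithm with the smallest average rank (here, the middle column) is the best overall performer under this criterion.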

Secondly, to assess the actual performance of the proposed algorithm, HBD is compared with five other algorithms, namely, NBIPOP-aCMA [41], fk-PSO [42], SPSO2011 [43], SPSOABC [44], and PVADE [45], which were presented at the CEC-2013 Special Session & Competition on Real-Parameter Single Objective Optimization.

Table 12 lists the average error values, which are derived from [46], and the average rankings of the six algorithms by the Friedman test for the CEC-2013 functions are given in Table 13. Since NBIPOP-aCMA is one of the top three performing algorithms for the CEC-2013 functions [47], it shows, as seen from Table 12, promising performance on almost all functions. The other algorithms bring solutions with higher accuracy on a handful of functions. For example, fk-PSO, SPSO2011, SPSOABC, PVADE, and HBD yield the best performance on 3, 2, 6, 4, and 5 out of 28 functions in terms of the average error values. However, according to the average rankings of the different algorithms by the Friedman test in Table 13, we can find that NBIPOP-aCMA is the best, and HBD offers the second best overall performance, followed by SPSOABC, fk-PSO, PVADE, and SPSO2011.


F1–F28 denote the CEC-2013 benchmark functions; each entry is AVGEr ± STDEr.

| Fn | NBIPOP-aCMA | fk-PSO | SPSO2011 | SPSOABC | PVADE | HBD |
| F1 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 1.36E−29 ± 4.17E−29 |
| F2 | 0.00E+00 ± 0.00E+00 | 1.59E+06 ± 8.03E+05 | 3.38E+05 ± 1.67E+05 | 8.78E+05 ± 1.69E+06 | 2.12E+06 ± 1.56E+06 | 3.15E+05 ± 1.52E+05 |
| F3 | 0.00E+00 ± 0.00E+00 | 2.40E+08 ± 3.71E+08 | 2.88E+08 ± 5.24E+08 | 5.16E+07 ± 8.00E+07 | 1.65E+03 ± 2.83E+03 | 4.38E+06 ± 8.11E+06 |
| F4 | 0.00E+00 ± 0.00E+00 | 4.78E+02 ± 1.96E+02 | 3.86E+04 ± 6.70E+03 | 6.02E+03 ± 2.30E+03 | 1.70E+04 ± 2.85E+03 | 5.05E+03 ± 2.23E+03 |
| F5 | 0.00E+00 ± 0.00E+00 | 0.00E+00 ± 0.00E+00 | 5.42E−04 ± 4.91E−05 | 0.00E+00 ± 0.00E+00 | 1.40E−07 ± 1.86E−07 | 0.00E+00 ± 0.00E+00 |
| F6 | 0.00E+00 ± 0.00E+00 | 2.99E+01 ± 1.76E+01 | 3.79E+01 ± 2.83E+01 | 1.09E+01 ± 1.09E+01 | 8.29E+00 ± 5.82E+00 | 1.12E+01 ± 1.47E+01 |
| F7 | 2.31E+00 ± 6.05E+00 | 6.39E+01 ± 3.09E+01 | 8.79E+01 ± 2.11E+01 | 5.12E+01 ± 2.04E+01 | 1.29E+00 ± 1.22E+00 | 5.03E+01 ± 1.61E+01 |
| F8 | 2.09E+01 ± 4.80E−02 | 2.09E+01 ± 6.28E−02 | 2.09E+01 ± 5.89E−02 | 2.09E+01 ± 4.92E−02 | 2.09E+01 ± 4.82E−02 | 2.10E+01 ± 3.17E−02 |
| F9 | 3.30E+00 ± 1.38E+00 | 1.85E+01 ± 2.69E+00 | 2.88E+01 ± 4.43E+00 | 2.95E+01 ± 2.62E+00 | 6.30E+00 ± 3.27E+00 | 2.12E+01 ± 4.26E+00 |
| F10 | 0.00E+00 ± 0.00E+00 | 2.29E−01 ± 1.32E−01 | 3.40E−01 ± 1.48E−01 | 1.32E−01 ± 6.23E−02 | 2.16E−02 ± 1.36E−02 | 9.15E−02 ± 5.25E−02 |
| F11 | 3.04E+00 ± 1.41E+00 | 2.36E+01 ± 8.76E+00 | 1.05E+02 ± 2.74E+01 | 0.00E+00 ± 0.00E+00 | 5.84E+01 ± 1.11E+01 | 3.58E−01 ± 6.34E−01 |
| F12 | 2.91E+00 ± 1.38E+00 | 5.64E+01 ± 1.51E+01 | 1.04E+02 ± 3.54E+01 | 6.44E+01 ± 1.48E+01 | 1.15E+02 ± 1.14E+01 | 8.09E+01 ± 1.53E+01 |
| F13 | 2.78E+00 ± 1.45E+00 | 1.23E+02 ± 2.19E+01 | 1.94E+02 ± 3.86E+01 | 1.15E+02 ± 2.24E+01 | 1.31E+02 ± 1.24E+01 | 1.29E+02 ± 2.90E+01 |
| F14 | 8.10E+02 ± 3.60E+02 | 7.04E+02 ± 2.38E+02 | 3.99E+03 ± 6.19E+02 | 1.55E+01 ± 6.13E+00 | 3.20E+03 ± 4.38E+02 | 2.83E+00 ± 1.82E+00 |
| F15 | 7.65E+02 ± 2.95E+02 | 3.42E+03 ± 5.16E+02 | 3.81E+03 ± 6.94E+02 | 3.55E+03 ± 3.04E+02 | 5.16E+03 ± 3.19E+02 | 3.50E+03 ± 4.82E+02 |
| F16 | 4.40E−01 ± 9.26E−01 | 8.48E−01 ± 2.20E−01 | 1.31E+00 ± 3.59E−01 | 1.03E+00 ± 2.01E−01 | 2.39E+00 ± 2.66E−01 | 1.31E+00 ± 2.38E−01 |
| F17 | 3.44E+01 ± 1.87E+00 | 5.26E+01 ± 7.11E+00 | 1.16E+02 ± 2.02E+01 | 3.09E+01 ± 1.23E−01 | 1.02E+02 ± 1.17E+01 | 3.09E+01 ± 1.79E−01 |
| F18 | 6.23E+01 ± 4.56E+01 | 6.81E+01 ± 9.68E+00 | 1.21E+02 ± 2.46E+01 | 9.01E+01 ± 8.95E+00 | 1.82E+02 ± 1.20E+01 | 9.46E+01 ± 2.11E+01 |
| F19 | 2.23E+00 ± 3.41E−01 | 3.12E+00 ± 9.83E−01 | 9.51E+00 ± 4.42E+00 | 1.71E+00 ± 4.68E−01 | 5.40E+00 ± 8.10E−01 | 1.23E+00 ± 2.28E−01 |
| F20 | 1.29E+01 ± 5.98E−01 | 1.20E+01 ± 9.26E−01 | 1.35E+01 ± 1.11E+00 | 1.11E+01 ± 7.60E−01 | 1.13E+01 ± 3.28E−01 | 1.11E+01 ± 5.89E−01 |
| F21 | 1.92E+02 ± 2.72E+01 | 3.11E+02 ± 7.92E+01 | 3.09E+02 ± 6.80E+01 | 3.18E+02 ± 7.53E+01 | 3.19E+02 ± 6.26E+01 | 2.95E+02 ± 7.95E+01 |
| F22 | 8.38E+02 ± 4.60E+02 | 8.59E+02 ± 3.10E+02 | 4.30E+03 ± 7.67E+02 | 8.41E+01 ± 3.90E+01 | 2.50E+03 ± 3.86E+02 | 4.48E+01 ± 1.35E+01 |
| F23 | 6.67E+02 ± 2.90E+02 | 3.57E+03 ± 5.90E+02 | 4.83E+03 ± 8.23E+02 | 4.18E+03 ± 5.62E+02 | 5.81E+03 ± 5.04E+02 | 4.16E+03 ± 5.07E+02 |
| F24 | 1.62E+02 ± 3.00E+01 | 2.48E+02 ± 8.11E+00 | 2.67E+02 ± 1.25E+01 | 2.51E+02 ± 1.43E+01 | 2.02E+02 ± 1.40E+00 | 2.28E+02 ± 8.92E+00 |
| F25 | 2.20E+02 ± 1.11E+01 | 2.49E+02 ± 7.82E+00 | 2.99E+02 ± 1.05E+01 | 2.75E+02 ± 9.76E+00 | 2.30E+02 ± 2.08E+01 | 2.80E+02 ± 8.80E+00 |
| F26 | 1.58E+02 ± 3.00E+01 | 2.95E+02 ± 7.06E+01 | 2.86E+02 ± 8.24E+01 | 2.60E+02 ± 7.62E+01 | 2.18E+02 ± 4.01E+01 | 2.00E+02 ± 6.81E−03 |
| F27 | 4.69E+02 ± 7.38E+01 | 7.76E+02 ± 7.11E+01 | 1.00E+03 ± 1.12E+02 | 9.10E+02 ± 1.62E+02 | 3.26E+02 ± 1.14E+01 | 7.52E+02 ± 1.33E+02 |
| F28 | 2.69E+02 ± 7.35E+01 | 4.01E+02 ± 3.48E+02 | 4.01E+02 ± 4.76E+02 | 3.33E+02 ± 2.32E+02 | 3.00E+02 ± 2.24E−02 | 3.00E+02 ± 1.32E−13 |


| Algorithm | NBIPOP-aCMA | fk-PSO | SPSO2011 | SPSOABC | PVADE | HBD |
| Ranking | 1.80 | 3.61 | 5.29 | 3.34 | 3.95 | 3.02 |

5. Conclusion

In this paper, we presented a hybrid BSA, called HBD, which combines BSA and DE with an exploitive mutation strategy. At each iteration, DE is invoked after the BSA step to optimize one individual, selected according to its probability, in order to enhance the convergence of BSA and to bring solutions of higher quality.
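A hedged sketch of the hybrid step just described, with DE/best/1 used here as a representative exploitive mutation strategy. The sphere objective, the scale factor F = 0.5, and the choice of the worst individual are illustrative assumptions; the paper selects the individual via a probability model, and the BSA generation itself is omitted.

```python
import numpy as np

def de_best_1_step(pop, fit, idx, f_obj, F=0.5, rng=None):
    """Refine the individual at `idx` with a DE/best/1 mutation and greedy
    selection, as one plausible realization of the exploitive DE step."""
    rng = rng if rng is not None else np.random.default_rng()
    NP, _ = pop.shape
    best = pop[np.argmin(fit)]
    r1, r2 = rng.choice([i for i in range(NP) if i != idx], 2, replace=False)
    trial = best + F * (pop[r1] - pop[r2])   # v = x_best + F * (x_r1 - x_r2)
    f_trial = f_obj(trial)
    if f_trial < fit[idx]:                   # greedy selection: keep the better
        pop[idx], fit[idx] = trial, f_trial
    return pop, fit

sphere = lambda x: float(np.sum(x ** 2))     # toy objective for illustration
rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, (10, 4))
fit = np.array([sphere(x) for x in pop])
worst = int(np.argmax(fit))                  # stand-in for the probability pick
before = float(fit[worst])
pop, fit = de_best_1_step(pop, fit, worst, sphere, rng=rng)
assert fit[worst] <= before                  # the refined individual never worsens
```

Because only one individual is refined per generation, the extra cost is a single function evaluation per iteration, which is why this scheme accelerates convergence without consuming the evaluation budget the way refining the whole population would.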

Comprehensive experiments have been carried out on the 28 benchmark functions proposed in the CEC-2013 competition. The experimental results reveal that the hybridization of BSA and DE provides high effectiveness and efficiency on most functions, contributing to solutions with higher accuracy, faster convergence speed, and more stable scalability. HBD was also compared with other evolutionary algorithms and has shown promising performance.

There are several interesting directions for future work. Experimentally, the linear probability model used to select one individual to optimize is reasonable but not optimal; thus, firstly, comprehensive tests will be performed on various probability models in HBD. Secondly, although the experimental results have shown that HBD exhibits stable scalability, we plan to investigate HBD on large-scale optimization problems. Last but not least, we plan to apply HBD to some real-world optimization problems for further examination.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are very grateful to the editor and the anonymous reviewers for their constructive comments and suggestions to this paper. This work was supported by the NSFC Joint Fund with Guangdong of China under Key Project U1201258, the Shandong Natural Science Funds for Distinguished Young Scholar under Grant no. JQ201316, and the Natural Science Foundation of Fujian Province of China under Grant no. 2013J01216.

References

  1. J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
  2. R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, vol. 1, pp. 39–43, New York, NY, USA, October 1995.
  3. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, IEEE, Perth, Australia, December 1995.
  4. M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 26, no. 1, pp. 29–41, 1996.
  5. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  6. Y. Cai and J. Wang, “Differential evolution with neighborhood and direction information for numerical optimization,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 2202–2215, 2013.
  7. D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. TR-06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
  8. X.-S. Yang and S. Deb, “Cuckoo search via Lévy flights,” in Proceedings of the World Congress on Nature & Biologically Inspired Computing (NABIC '09), pp. 210–214, IEEE, December 2009.
  9. X. S. Yang, “A new metaheuristic bat-inspired algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), vol. 284 of Studies in Computational Intelligence, pp. 65–74, Springer, Berlin, Germany, 2010.
  10. X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Frome, UK, 2nd edition, 2010.
  11. Y. Chen, Z. Cui, and J. Zeng, “Structural optimization of Lennard-Jones clusters by hybrid social cognitive optimization algorithm,” in Proceedings of the 9th IEEE International Conference on Cognitive Informatics, pp. 204–208, IEEE, July 2010.
  12. Z. Cui and X. Cai, “Optimal coverage configuration with social emotional optimisation algorithm in wireless sensor networks,” International Journal of Wireless and Mobile Computing, vol. 5, no. 1, pp. 43–47, 2011.
  13. Z. Cui, Z. Shi, and J. Zeng, “Using social emotional optimization algorithm to direct orbits of chaotic systems,” in Swarm, Evolutionary, and Memetic Computing, pp. 389–395, Springer, 2010.
  14. Z. W. Geem, J. H. Kim, and G. V. Loganathan, “A new heuristic optimization algorithm: harmony search,” Simulation, vol. 76, no. 2, pp. 60–68, 2001.
  15. D. Simon, “Biogeography-based optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, pp. 702–713, 2008.
  16. I. J. Fister, X. S. Yang, J. Brest, and D. Fister, “A brief review of nature-inspired algorithms for optimization,” Electrotechnical Review, vol. 80, no. 3, 2013.
  17. P. Civicioglu, “Backtracking search optimization algorithm for numerical optimization problems,” Applied Mathematics and Computation, vol. 219, no. 15, pp. 8121–8144, 2013.
  18. C. Blum, J. Puchinger, G. R. Raidl, and A. Roli, “Hybrid metaheuristics in combinatorial optimization: a survey,” Applied Soft Computing Journal, vol. 11, no. 6, pp. 4135–4151, 2011.
  19. M. Lozano and C. García-Martínez, “Hybrid metaheuristics with evolutionary algorithms specializing in intensification and diversification: overview and progress report,” Computers & Operations Research, vol. 37, no. 3, pp. 481–497, 2010.
  20. J. Zhang, Z.-H. Zhang, Y. Lin et al., “Evolutionary computation meets machine learning: a survey,” IEEE Computational Intelligence Magazine, vol. 6, no. 4, pp. 68–75, 2011.
  21. K. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, 2006.
  22. R. Storn and K. Price, Differential Evolution, 2013, http://www1.icsi.berkeley.edu/~storn/code.html.
  23. M. G. Epitropakis, D. K. Tasoulis, N. G. Pavlidis, V. P. Plagianakos, and M. N. Vrahatis, “Enhancing differential evolution utilizing proximity-based mutation operators,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 99–119, 2011.
  24. Z. Beheshti, S. M. Shamsuddin, and S. Sulaiman, “Fusion global-local-topology particle swarm optimization for global optimization problems,” Mathematical Problems in Engineering, vol. 2014, Article ID 907386, 19 pages, 2014.
  25. W. Y. Gong, Z. H. Cai, and C. X. Ling, “DE/BBO: a hybrid differential evolution with biogeography-based optimization for global numerical optimization,” Soft Computing, vol. 15, no. 4, pp. 645–665, 2011.
  26. F. Wang, X.-S. He, L. G. Luo, and Y. Wang, “Hybrid optimization algorithm of PSO and Cuckoo Search,” in Proceedings of the 2nd International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC '11), pp. 1172–1175, August 2011.
  27. Z.-H. Zhan, J. Zhang, Y. Li, and Y.-H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832–847, 2011.
  28. M. G. Epitropakis, V. P. Plagianakos, and M. N. Vrahatis, “Balancing the exploration and exploitation capabilities of the differential evolution algorithm,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 2686–2693, June 2008.
  29. W. Y. Gong and Z. H. Cai, “Differential evolution with ranking-based mutation operators,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 2066–2081, 2013.
  30. W. T. Zhao, L. J. Wang, Y. L. Yin, B. Wang, Y. Wei, and Y. Yin, “An improved backtracking search algorithm for constrained optimization problems,” in Knowledge Science, Engineering and Management: 7th International Conference, KSEM 2014, Sibiu, Romania, October 16-18, 2014. Proceedings, vol. 8793 of Lecture Notes in Computer Science, pp. 222–233, Springer, 2014.
  31. J. J. Liang, B. Y. Qu, P. N. Suganthan et al., “Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization,” Tech. Rep., Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China; Nanyang Technological University, Singapore, 2013.
  32. S. García, D. Molina, M. Lozano, and F. Herrera, “A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization,” Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
  33. H. Ma, “An analysis of the equilibrium of migration models for biogeography-based optimization,” Information Sciences, vol. 180, no. 18, pp. 3444–3464, 2010.
  34. M. Clerc, 2014, http://clerc.maurice.free.fr/pso/.
  35. N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
  36. C. Igel, N. Hansen, and S. Roth, “Covariance matrix adaptation for multi-objective optimization,” Evolutionary Computation, vol. 15, no. 1, pp. 1–28, 2007.
  37. J. Brest, S. Greiner, B. Bošković, M. Mernik, and V. Zumer, “Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
  38. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
  39. A. K. Qin and P. N. Suganthan, “Self-adaptive differential evolution algorithm for numerical optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), vol. 2, pp. 1785–1791, IEEE, September 2005.
  40. P. N. Suganthan, N. Hansen, J. J. Liang et al., “Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization,” KanGAL Report 2005005, Nanyang Technological University, Singapore, 2005.
  41. I. Loshchilov, “CMA-ES with restarts for solving CEC 2013 benchmark problems,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 369–376, Cancún, Mexico, June 2013.
  42. F. V. Nepomuceno and A. P. Engelbrecht, “A self-adaptive heterogeneous PSO for real-parameter optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 361–368, June 2013.
  43. M. Zambrano-Bigiarini, M. Clerc, and R. Rojas, “Standard particle swarm optimization 2011 at CEC-2013: a baseline for future PSO improvements,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 2337–2344, June 2013.
  44. M. El-Abd, “Testing a particle swarm optimization and artificial bee colony hybrid algorithm on the CEC13 benchmarks,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 2215–2220, June 2013.
  45. L. Dos Santos Coelho, H. V. H. Ayala, and R. Z. Freire, “Population's variance-based adaptive differential evolution for real parameter optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '13), pp. 1672–1677, June 2013.
  46. P. N. Suganthan, 2015, http://www.ntu.edu.sg/home/EPNSugan/.
  47. I. Loshchilov, T. Stuetzle, and T. J. Liao, “Ranking results of CEC'13 special session & competition on real-parameter single objective optimization,” 2015, http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC2013/results_21.pdf.

Copyright © 2015 Lijin Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
