Abstract

Backtracking search algorithm (BSA) is a relatively new evolutionary algorithm with good optimization performance, just like other population-based algorithms. However, BSA suffers from slow convergence speed and limited convergence precision. To address these shortcomings, this article proposes an improved BSA named COBSA. Inspired by the particle swarm optimization (PSO) algorithm, a population control factor is added to the variation equation to improve the convergence speed of BSA and to strengthen its ability to escape local optima. In addition, inspired by the differential evolution (DE) algorithm, this article proposes a novel evolutionary equation in which the disadvantaged group searches only around the best individual of the previous iteration, thereby enhancing local search. Simulation experiments on a set of 18 benchmark functions show that, in general, COBSA displays obvious superiority in convergence speed and convergence precision when compared with BSA and the comparison algorithms.

1. Introduction

With the rapid development of science and technology today, high demands are placed on optimization algorithms, and evolutionary algorithms are popular owing to their simple structure and small number of control parameters; examples include the particle swarm optimization (PSO) algorithm [1], the ant colony optimization (ACO) algorithm [2], and the genetic algorithm (GA) [3]. However, even higher requirements arise when multidimensional, multimodal, and nonlinear problems appear in academic research. In recent decades, many scholars have therefore presented modified evolutionary algorithms aiming to find the best values of system parameters under different circumstances. By taking advantage of the optimum solution, literature [4] amends the search equation of the PSO algorithm to accelerate convergence. Inspired by biological evolution, literature [5] developed the differential evolution (DE) algorithm, and an adaptive evolutionary strategy based on DE is proposed in literature [6] to raise the efficiency of DE. The differential search algorithm (DSA), equipped with new crossover and mutation strategies, is revealed in literature [7]. Learning from natural systems, literatures [8-10] put forward the artificial bee colony (ABC) algorithm inspired by the foraging behavior of bees, the cuckoo search (CS) algorithm inspired by the brood parasitic behavior of cuckoos, and the bat algorithm (BA) inspired by the echolocation behavior of bats. Besides, these evolutionary algorithms have been successfully applied in many fields, such as blind source separation [11], hyperspectral unmixing [12], and image processing [13].

BSA is a novel computational method developed by Civicioglu in 2013 [14]. BSA employs fewer parameters and has a simpler process that is efficient, fast, and able to handle various conditions, which makes it easier to apply to different complex problems. Literature [14] demonstrates that BSA is competitive with other evolutionary algorithms and has a strong global-search ability. Owing to its simplicity and high efficiency, BSA has caught many scholars' attention [15, 16].

However, some challenging complications also arise in BSA. For example, the convergence speed of BSA is typically slower than that of typical evolutionary algorithms such as DE and PSO in the early stage, because BSA ignores the importance of the optimal individual; this makes it difficult for BSA to achieve satisfactory results on some complex problems. Therefore, accelerating convergence speed and improving convergence precision have become two valuable and significant targets in the study of BSA. A number of modified BSA variants aiming at these two targets have hence been put forward since its appearance. For example, literature [17] focuses on improving convergence precision by proposing an optimal evolution equation and an optimal search equation, but it brings extra function evaluations. Literatures [18, 19] focus on convergence precision and global convergence by employing a variation coefficient and a selection mechanism, but they sacrifice iteration numbers as a result. An improved BSA is proposed in this article with the purpose of accelerating convergence speed and improving convergence precision while keeping the number of iterations low. Firstly, to accelerate the convergence of the population in the early stage, a population control factor is added to the evolution equation. Furthermore, an evolutionary equation in which the disadvantaged group searches only around the best individual of the previous iteration is proposed to enhance local search. Simulation experiments based on 18 benchmark functions show that the improved algorithm improves the performance and efficiency of BSA effectively and is a feasible evolutionary algorithm.

The rest of this article is arranged as follows. The second section describes BSA briefly; the third section introduces the improved algorithm; the fourth section presents the simulation results; the fifth section discusses the choice of the variance appearing in the first strategy; finally, conclusions are drawn.

2. Backtracking Search Algorithm

Similar to other evolutionary algorithms, BSA is a population-based evolutionary algorithm designed to find a global optimum. Its solving process can be divided into five steps, as in other evolutionary algorithms: population initialization, historical population setting, population evolution, population crossover, and greedy selection. The specific steps of the basic BSA are described as follows.

2.1. Population Initialization

The initial solution of BSA consists of $N$ randomly generated $D$-dimensional vectors. The random initialization can be summarized as

$$\mathrm{Pop}_{i,j} = \mathrm{low}_j + \mathrm{rand}(0,1)\times(\mathrm{up}_j - \mathrm{low}_j), \tag{1}$$

where $i = 1, 2, \dots, N$ and $j = 1, 2, \dots, D$; low and up stand for the lower and upper bounds of the search space, respectively; $\mathrm{rand}(0,1)$ is the uniform distribution function on $[0, 1]$.
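For concreteness, the initialization in (1) can be written as a short NumPy sketch; the population size, dimension, and bounds below are illustrative values rather than the settings used in the experiments.

```python
import numpy as np

def init_population(n_pop, dim, low, up, rng):
    """Random initialization of equation (1): Pop[i, j] ~ U(low[j], up[j])."""
    low = np.asarray(low, dtype=float)
    up = np.asarray(up, dtype=float)
    return low + rng.random((n_pop, dim)) * (up - low)

rng = np.random.default_rng(0)
pop = init_population(n_pop=30, dim=10, low=[-100] * 10, up=[100] * 10, rng=rng)
oldpop = init_population(n_pop=30, dim=10, low=[-100] * 10, up=[100] * 10, rng=rng)
```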

2.2. Historical Population Setting

BSA controls the search space by appointing a historical population, Oldpop, chosen from a previous population, and it remembers that population until Oldpop is changed again. What gives BSA its memory function are two numbers $a$ and $b$ generated randomly at the start of each cycle. If $a < b$, then Oldpop = Pop; meanwhile, the order of the individuals in Oldpop is permuted randomly.
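A minimal sketch of this memory mechanism, reusing the `pop` and `oldpop` arrays from the previous sketch and assuming $a, b \sim U(0,1)$ as in the standard BSA:

```python
import numpy as np

def redefine_oldpop(pop, oldpop, rng):
    """Historical population setting: if a < b (a, b ~ U(0,1)), remember the
    current population; then randomly permute the individuals of Oldpop."""
    a, b = rng.random(2)
    if a < b:
        oldpop = pop.copy()
    return rng.permutation(oldpop, axis=0)
```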

2.3. Population Evolution

At this stage, BSA generates the new population by making full use of its previous experience based on Pop and Oldpop; the evolutionary equation is

$$\mathrm{Mutant} = \mathrm{Pop} + F\times(\mathrm{Oldpop} - \mathrm{Pop}), \tag{2}$$

where $F = 3\cdot\mathrm{rand}$ and rand is a random number in the range $[0, 1]$; $F$ is a parameter controlling the amplitude of the search-direction matrix $(\mathrm{Oldpop} - \mathrm{Pop})$.
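A sketch of this evolution step follows; the scaling $F = 3\cdot\mathrm{rand}$ mirrors the description above, and other BSA variants instead draw $F$ from a standard normal distribution, so treat the exact scaling as an assumption.

```python
import numpy as np

def mutate(pop, oldpop, rng):
    """Population evolution, equation (2): Mutant = Pop + F * (Oldpop - Pop),
    where F = 3 * rand scales the search-direction matrix (Oldpop - Pop)."""
    f = 3.0 * rng.random()
    return pop + f * (oldpop - pop)
```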

2.4. Population Crossover

The crossover strategy of BSA differs from that of DE in that a mixed proportion parameter, mixrate, is employed to control the number of crossed components. The trial population is generated as

$$T_{i,j} = \begin{cases} \mathrm{Pop}_{i,j}, & \mathrm{map}_{i,j} = 1 \\ \mathrm{Mutant}_{i,j}, & \mathrm{map}_{i,j} = 0, \end{cases} \tag{3}$$

where map is an $N\times D$ matrix which only contains 0 and 1, and its initial value is 1. Its specific calculation formula is

$$\mathrm{map}_{i,\,u(1:\lceil \mathrm{mixrate}\cdot\mathrm{rand}\cdot D\rceil)} = 0 \quad \text{if } a < b; \qquad \mathrm{map}_{i,\,\mathrm{randi}(D)} = 0 \quad \text{otherwise}, \tag{4}$$

where $\mathrm{randi}(D)$ is an integer selected randomly from 1 to $D$; mixrate is the mixed proportion parameter, and $\mathrm{mixrate} = 1$; rand is chosen randomly in the range $[0, 1]$, and so are $a$ and $b$; $u$ is an integer vector obtained by randomly permuting $\langle 1, 2, \dots, D\rangle$. The map matrix in the crossover stage controls the number of elements of each individual that will mutate, via mixrate and $\lceil \mathrm{mixrate}\cdot\mathrm{rand}\cdot D\rceil$. When $a < b$, several randomly chosen positions in each row of map are set to 0; otherwise, only a single randomly chosen element of each row is set to 0. Some individuals generated in the crossover stage can overflow the limited search space as a result of the crossover strategy. For this reason, a boundary control is applied to the individuals beyond the search space by using (1) again.
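The two map-generation strategies of (4), the mixing of (3), and the boundary control can be sketched together as follows; the row-wise structure is one straightforward reading of the description above.

```python
import numpy as np

def crossover(pop, mutant, low, up, mixrate, rng):
    """Equations (3)-(4): build the binary map, mix Pop and Mutant, then
    re-initialize any component that leaves the search space (equation (1))."""
    n_pop, dim = pop.shape
    map_ = np.ones((n_pop, dim), dtype=int)          # initial value of map is 1
    if rng.random() < rng.random():                  # strategy 1: several positions per row
        for i in range(n_pop):
            u = rng.permutation(dim)
            map_[i, u[: int(np.ceil(mixrate * rng.random() * dim))]] = 0
    else:                                            # strategy 2: one position per row
        for i in range(n_pop):
            map_[i, rng.integers(dim)] = 0
    trial = np.where(map_ == 1, pop, mutant)         # equation (3)
    out = (trial < low) | (trial > up)               # boundary control via (1)
    trial[out] = (low + rng.random(trial.shape) * (up - low))[out]
    return trial
```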

2.5. Greedy Selection

When BSA finishes the above steps, the individuals of the new population that have a better position than the corresponding individuals of the original population are used to update Pop through greedy selection and become the new candidate solutions. Meanwhile, if the best individual of the new population has a lower fitness than the optimum obtained from previous iterations, it replaces the previous optimum. Pop and the optimum remain unchanged when the conditions mentioned above are not satisfied.
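Greedy selection can be sketched as an element-wise comparison of fitness arrays, where lower fitness is better (the minimization convention used throughout this article).

```python
import numpy as np

def select(pop, trial, fit_pop, fit_trial):
    """Greedy selection: keep whichever of Pop / Trial has the better (lower)
    fitness, individual by individual."""
    better = fit_trial < fit_pop
    new_pop = np.where(better[:, None], trial, pop)
    new_fit = np.where(better, fit_trial, fit_pop)
    return new_pop, new_fit
```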

3. The Improved Backtracking Search Algorithm

In the above part, BSA is introduced briefly, and it is easy to see that BSA, equipped with a single control parameter, a historical population (Oldpop), and a simple structure, has a good ability to explore the search space in the early stage. However, it is common for evolutionary algorithms to fall into local optima when facing multimodal and nonlinear problems, and BSA is no exception. In this regard, an improved BSA based on a population control factor and an optimal learning strategy, inspired by PSO and DE, respectively, is presented in this article. On the one hand, a population control factor that regulates the search direction of the population is employed when BSA operates; on the other hand, the disadvantaged group is guided to learn from the optimal individual of the previous iteration. The specific improvements are as follows.

3.1. Improved BSA Based on Population Control Factor (CBSA)

A shortcoming of BSA is that the population Pop in the variation equation (2) evolves in a purely stochastic way, so BSA keeps no memory of the population's position while it expands the search space and explores new areas; in other words, Pop tends toward global exploration. It is therefore desirable to emphasize global search first and fine local search later. Inspired by literature [20], an inertia weight $w$ is introduced into the evolutionary equation (2), yielding the new evolutionary equation (5), in which $w$ is a variable parameter that changes linearly as the iteration number increases. The target just mentioned can then be reached, because the algorithm has a stronger global-search ability when $w$ is larger and tends toward local search when $w$ is smaller. Thus, a global coefficient employed adaptively can improve the search accuracy and efficiency on complex optimization problems. But every coin has two sides: because of the linearity of $w$, this strategy cannot reflect the actual search process when the proposed algorithm faces nonlinear problems, so this article presents a further search equation (6), in which $w$ is replaced by an $N\times D$ matrix $R$ whose elements follow a normal distribution with mean $w$ and variance $\sigma^2$. In this way, a disturbance is applied to the population to help the algorithm escape local optima when handling nonlinear problems. Finally, the two ideas are combined into the evolution equation (7), whose coefficient $\lambda$ is named the population control factor. This evolutionary equation can not only improve the convergence speed with the help of $w$ but also enhance the ability to escape the local optimum with the help of $R$. The simulation experiments show convincingly that the improved algorithm performs very well in terms of convergence speed.
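Since the displayed forms of equations (5)-(7) are not reproduced here, the sketch below shows one plausible reading of the population control factor: $\lambda$ is drawn element-wise from $N(w, \sigma^2)$, with $w$ decreasing linearly over the iterations, and it scales the search-direction term. The placement of $\lambda$ and the bounds $w_{\max}$, $w_{\min}$ are assumptions for illustration, not the paper's exact equation (7).

```python
import numpy as np

def cbsa_mutate(pop, oldpop, iteration, max_iter, rng,
                w_max=0.9, w_min=0.4, sigma2=0.3):
    """Population-control-factor mutation (one reading of equations (5)-(7)):
    w decreases linearly with the iteration number, and the factor matrix
    lam ~ N(w, sigma2) perturbs the search so that the population converges
    faster (via w) while the spread sigma2 helps it escape local optima.
    The placement of lam and the w bounds are illustrative assumptions."""
    n_pop, dim = pop.shape
    w = w_max - (w_max - w_min) * iteration / max_iter
    lam = rng.normal(loc=w, scale=np.sqrt(sigma2), size=(n_pop, dim))
    f = 3.0 * rng.random()
    return pop + lam * f * (oldpop - pop)
```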

3.2. Improved BSA Based on Optimal Learning Strategy (OBSA)

Another shortcoming of BSA is that, unlike DE, it does not take full advantage of the best current individual. Besides, it is important for evolutionary algorithms to strike a balance between global search and local search. BSA, focused on global optimization through (2), displays a poor ability of local search because it ignores the importance of the optimum individual; yet the optimum individual of the current population is a very meaningful source of information for improving local search. In this regard, inspired by literatures [5] and [21], this article proposes the search equation (8), in which a new individual is generated around $P_{\mathrm{best}}$, the optimum solution in the current population, with a perturbation drawn from the standard normal distribution $N(0,1)$. It can be observed from Figure 1 that a new individual generated by (8) is random enough for local search, since with high probability it appears in the neighborhood of the best individual. In other words, the evolutionary equation (8) is good at local search but poor at global search.
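Equation (8) is likewise not reproduced here; the sketch below only illustrates the stated idea, regenerating each individual of the disadvantaged group in the neighborhood of the current best individual with a standard-normal perturbation. The `step` scale and the exact perturbation form are illustrative assumptions rather than the paper's equation (8).

```python
import numpy as np

def optimal_learning(pop, fit, trial, fit_trial, rng, step=0.1):
    """Optimal learning strategy (idea of equation (8)): trial individuals that
    did not improve on their parents (the disadvantaged group) are moved close
    to the best individual of the current population with a small N(0,1)
    perturbation; 'step' is an assumed, illustrative perturbation scale."""
    best = pop[np.argmin(fit)]
    worse = fit_trial >= fit                 # disadvantaged group
    noise = rng.standard_normal(trial.shape)
    trial = trial.copy()
    trial[worse] = best + step * noise[worse]
    return trial
```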

3.3. Improved BSA Based on Population Control Factor and Optimal Learning Strategy (COBSA)

With the purpose of overcoming the slow convergence speed in the early stage and the low convergence precision in the late stage of BSA, an improved BSA based on the population control factor and the optimal learning strategy of Sections 3.1 and 3.2 is presented in this article. The improved BSA, named COBSA, produces the new population by employing (7), and the optimal learning strategy based on (8) is applied to the disadvantaged group, that is, the individuals that behave worse than in the previous population. The specific steps of COBSA are given as follows, and a compact sketch of the whole loop is given after the steps.

Step 1. Initialize the population Pop and Oldpop.

Step 2. Generate two numbers $a$ and $b$ randomly; if $a < b$, set Oldpop = Pop, and then permute Oldpop randomly.

Step 3. Produce a new population on the basis of the current population by employing (7).

Step 4. Generate cross matrix map by using (4).

Step 5. Apply the crossover strategy through (3).

Step 6. Reinitialize the individuals that are out of the boundary by using (1).

Step 7. Evaluate the fitness of the population, and guide the disadvantaged group to perform optimal learning by employing (8).

Step 8. Select the optimal population according to the greedy selection mechanism.

Step 9. Judge whether the stopping criterion is met; if not, return to Step 2.

Step 10. Output optimal solution.
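Putting the steps together, a compact driver loop might look like the following. It reuses the sketches given earlier (`init_population`, `redefine_oldpop`, `cbsa_mutate`, `crossover`, `optimal_learning`, `select`) together with a user-supplied objective `func`; all of these are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def cobsa(func, dim, low, up, n_pop=30, max_iter=1000, seed=0):
    """Minimal COBSA-style loop following Steps 1-10; the helper functions are
    the earlier sketches and stand in for equations (1), (3), (4), (7), (8)."""
    rng = np.random.default_rng(seed)
    pop = init_population(n_pop, dim, low, up, rng)            # Step 1
    oldpop = init_population(n_pop, dim, low, up, rng)
    fit = np.apply_along_axis(func, 1, pop)
    best, best_fit = pop[np.argmin(fit)].copy(), fit.min()
    for it in range(max_iter):
        oldpop = redefine_oldpop(pop, oldpop, rng)             # Step 2
        mutant = cbsa_mutate(pop, oldpop, it, max_iter, rng)   # Step 3
        trial = crossover(pop, mutant, np.asarray(low), np.asarray(up),
                          mixrate=1.0, rng=rng)                # Steps 4-6
        fit_trial = np.apply_along_axis(func, 1, trial)
        trial = optimal_learning(pop, fit, trial, fit_trial, rng)  # Step 7
        fit_trial = np.apply_along_axis(func, 1, trial)
        pop, fit = select(pop, trial, fit, fit_trial)          # Step 8
        if fit.min() < best_fit:                               # Steps 9-10
            best, best_fit = pop[np.argmin(fit)].copy(), fit.min()
    return best, best_fit
```

For example, `cobsa(lambda x: np.sum(x ** 2), dim=10, low=[-100] * 10, up=[100] * 10)` minimizes the sphere function under these assumptions.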

4. Experiment and Result Analysis

In order to verify the effectiveness of the improved algorithm proposed in this article, COBSA is applied to minimize a set of 18 benchmark functions commonly used for testing optimization algorithms. Table 1 presents in detail the function forms, search spaces, and dimensions used in the tests. The 18 benchmark functions have different characters and therefore pose different difficulties when COBSA operates. Three of the functions are unimodal and continuous. Five are multimodal, and their number of local minima increases obviously as the dimension gets larger. The Rosenbrock function is unimodal when $D = 2$ and $D = 3$, but numerous local minima arise when the problem dimension is high [22]. The three-hump camel function, defined in two dimensions, has three local minima. This article adjusts the form of some functions for convenience of comparison, since different versions of these functions appear in different literatures.

Inspired by literatures [23-25], the following methods are designed to evaluate the performance of COBSA: (a) convergence performance with a fixed number of iterations; (b) iteration performance at a fixed precision; (c) comparison with other algorithms; (d) a statistical test based on the Wilcoxon Signed-Rank Test used to analyse the properties of COBSA and the comparison algorithms.

The parameters of the algorithms used in evaluation methods (a), (b), (c), and (d) are as follows: the population size is 30, the maximum number of cycles for each benchmark function is 1000, and the final result is the average value over 20 independent runs.

4.1. Convergence Performance in Fixed Iteration

In this section, in order to analyse how much the population control factor and the optimal learning strategy contribute to improving the performance of BSA, this article compares the convergence speed and convergence precision of BSA, CBSA, OBSA, and COBSA on the 18 objective functions with a maximum cycle of 1000. The fitness values are plotted as base-10 logarithms to display the differences between the strategies more clearly.

Important observations on the convergence speed and convergence precision of the different algorithms can be obtained from the results displayed in Figure 2. CBSA converges faster than BSA under the same conditions, which powerfully implies the efficiency of evolution equation (7). It can also be seen from Figure 2 that OBSA achieves better convergence precision than BSA in the late stage, which shows the correctness of the search equation (8) proposed in Section 3.2. Meanwhile, COBSA, which combines CBSA and OBSA, displays both faster speed and higher precision in Figure 2 and thus improves the performance of BSA more markedly than either CBSA or OBSA alone.

4.2. Iteration Performance in Fixed Precision

In this section, the number of iterations that each algorithm needs to reach a limited evolution precision is used for comparison. All the algorithms are run 20 times independently on the 18 objective functions. For each function, the current run stops either when the precision becomes smaller than the limit given in Table 2 or when the maximum cycle is reached. The results for BSA, CBSA, OBSA, and COBSA are displayed in Table 2 in terms of min, max, mean, SR, and EI. These measures are defined as follows: the success rate (SR) = number of runs achieving the target precision ÷ total number of runs; the expected iterations (EI) = number of particles × average iteration number ÷ success rate. The symbol "∞" appearing in Table 2 indicates that the expected iterations are infinite (i.e., SR = 0).
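As a small worked example of these two measures (with made-up numbers, not the values reported in Table 2):

```python
# Illustrative only: 20 runs and population size 30 (the settings of Section 4),
# with a hypothetical number of successful runs and average iteration count.
n_runs, n_pop = 20, 30
successes = 17                      # runs that reached the target precision
avg_iterations = 250.0              # average iterations over the 20 runs
sr = successes / n_runs             # SR = 17 / 20 = 0.85
ei = n_pop * avg_iterations / sr    # EI = 30 * 250 / 0.85 ≈ 8823.5
print(sr, ei)
```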

An interesting result is that BSA does not meet the target precision on any function except Sumpower, and even on Sumpower the SR of BSA is only 0.15, whereas that of CBSA, OBSA, and COBSA is 1. Besides, on the other functions, CBSA and OBSA show a higher SR and a smaller EI than BSA, while COBSA reaches a 100% success rate on all functions with the smallest expected iterations. The iteration performance at fixed precision in this section also proves the superiority of COBSA.

4.3. Experiment Based on Comparison Algorithms

In this section, some relatively new evolutionary algorithms developed in recent decades are employed for comparison with COBSA: ABC (limit = 100), MABC (limit = 100), BA, DSA, and BSA are included. The results are judged in terms of the mean and standard deviation (std) of the solutions obtained from 20 independent tests.

As can be seen from Table 3, COBSA offers higher convergence precision on almost all functions, while the standard deviation of COBSA is also much smaller than that of the comparison algorithms. In particular, COBSA can find the optimal solution for Rastrigin, Griewank, NCRastrigin, and Weierstrass. However, in the case of Rosenbrock, the convergence precision of COBSA is worse than that of ABC and MABC; since ABC is only one order of magnitude better than COBSA on Rosenbrock, the superiority of ABC and MABC is not very obvious. Besides, COBSA displays a more stable search process than ABC and MABC owing to its lower standard deviation on Rosenbrock.

4.4. Experiment Based on Wilcoxon Signed-Rank Test

A comparative method based on statistical analysis using the Wilcoxon Signed-Rank Test is employed to evaluate the performance of COBSA against the comparison algorithms.

Table 4 displays the statistical results obtained by applying the test to the average values of 20 independent runs of COBSA and the comparison algorithms on the objective functions in Table 1. The results show that COBSA, displaying better statistical properties, outperforms all of the comparison algorithms under the Wilcoxon Signed-Rank Test at a statistical significance level of α = 0.05.
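For reference, such a pairwise test can be run with SciPy; the arrays below are placeholders, not the averages reported in Table 4.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder data: mean results of COBSA and one comparison algorithm
# on the 18 benchmark functions (hypothetical values).
cobsa_means = np.array([1e-30, 0.0, 2e-28, 5e-16, 0.0, 1e-25] * 3)
other_means = np.array([1e-10, 3e-04, 7e-09, 2e-03, 4e-05, 6e-08] * 3)

stat, p_value = wilcoxon(cobsa_means, other_means)
print(f"statistic = {stat:.3f}, p = {p_value:.4f}")
print("significant at alpha = 0.05" if p_value < 0.05 else "not significant")
```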

According to the above analyses in Sections 4.1, 4.2, 4.3, and 4.4, it can safely be concluded that COBSA improves the performance of BSA effectively and is a feasible computation method.

5. The Choice of Variance

In Section 3.1, the variance $\sigma^2$ of matrix $R$ plays an important role in the performance of CBSA. When $\sigma^2$ is 0, the matrix $R$ loses its role. As $\sigma^2$ increases from 0 to 1, the fluctuation applied to the population Pop also increases correspondingly. However, a lower convergence speed results when $\sigma^2$ is large, owing to the excessive fluctuation of the population Pop. Therefore, four objective functions, Sumsquare, Schwefel2.22, Alpine, and Ackley, are employed to analyse the effect of the variance $\sigma^2$. The fitness values are again plotted as base-10 logarithms so that the curves can be observed clearly.
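Before turning to the results in Figure 3, such a sensitivity study can be scripted by sweeping $\sigma^2$ and re-running the algorithm. The sketch below assumes a `run_cbsa(func, sigma2)` helper that performs one CBSA run and returns its best fitness; this helper is hypothetical and could, for instance, be built from the earlier sketches.

```python
import numpy as np

def sweep_variance(func, run_cbsa,
                   variances=(0.0, 0.1, 0.2, 0.3, 0.4, 0.5), n_runs=20):
    """Average the best fitness over n_runs independent runs for each candidate
    variance and report it on a log10 scale, as in Figure 3."""
    results = {}
    for s2 in variances:
        fits = [run_cbsa(func, sigma2=s2) for _ in range(n_runs)]
        results[s2] = float(np.log10(np.mean(fits)))
    return results
```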

Since these functions are minimized, a smaller fitness value indicates a better result. We can see from Figure 3 that the variance $\sigma^2$ affects the results obviously. CBSA performs better on Sumsquare and Alpine for an intermediate value of $\sigma^2$; when $\sigma^2$ takes 0.4, the fitness on Schwefel2.22 is better; and for Ackley, a lower fitness is obtained when $\sigma^2$ is around 0.3. Therefore, the variance $\sigma^2$ is set to 0.3 in this article.

Besides, we can also observe from Figure 3 that the influence of matrix $R$ cannot be ignored. When the variance $\sigma^2$ is 0, the elements in matrix $R$ degenerate to a constant, and as a result the matrix loses its role. As can be seen from Figure 3, a better performance is displayed when $\sigma^2$ is around 0.3 rather than 0, which proves the significance of matrix $R$.

6. Conclusion

In order to solve the problems of slow convergence speed and low convergence precision shown in BSA, this article proposes an improved algorithm named COBSA, which applies a population control factor to the variation equation and guides the disadvantaged group to learn from the optimal individual. Experiments based on 18 benchmark functions prove that COBSA is an effective evolutionary algorithm and a feasible way to improve the convergence speed and convergence precision of BSA.

However, COBSA displays a lower convergence precision on the Rosenbrock function when compared with ABC and MABC. Making the algorithm perform better on Rosenbrock-like functions is the next step of our work. Besides, it is worth applying the algorithm proposed in this article to blind source separation and hyperspectral image processing.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The work is supported by the National Natural Science Foundation of China (no. 61401307), the China Postdoctoral Science Foundation (no. 2014M561184), the Tianjin Research Program of Application Foundation and Advanced Technology of China (no. 15JCYBJC17100), and the Tianjin Research Program of Science and Technology Commissioner of China (no. 16JCTPJC48400).