Research Article | Open Access

Mathematical Problems in Engineering, Volume 2017, Article ID 3017608, 13 pages, https://doi.org/10.1155/2017/3017608

Improved Backtracking Search Algorithm Based on Population Control Factor and Optimal Learning Strategy

Academic Editor: Erik Cuevas
Received: 03 Jan 2017
Revised: 10 May 2017
Accepted: 12 Jun 2017
Published: 24 Jul 2017

Abstract

Backtracking search algorithm (BSA) is a relatively new evolutionary algorithm with good optimization performance, just like other population-based algorithms. However, BSA still suffers from limited convergence speed and convergence precision. To address this problem, this article proposes an improved BSA named COBSA. Inspired by the particle swarm optimization (PSO) algorithm, a population control factor is added to the mutation equation to improve the convergence speed of BSA and to strengthen its ability to escape local optima. In addition, inspired by the differential evolution (DE) algorithm, this article proposes a novel evolutionary equation in which the disadvantaged group searches only around the best individual of the previous iteration, which enhances local search. Simulation experiments based on a set of 18 benchmark functions show that, in general, COBSA displays obvious superiority in convergence speed and convergence precision over BSA and the comparison algorithms.

1. Introduction

With the rapid development of science and technology today, high demands are placed on optimization algorithms, and evolutionary algorithms are popular owing to their simple structure and small number of parameters, for example, the particle swarm optimization (PSO) algorithm [1], the ant colony optimization (ACO) algorithm [2], and the genetic algorithm (GA) [3]. However, even higher requirements arise when multidimensional, multimodal, and nonlinear problems appear in academic research. In recent decades, many scholars have therefore presented modified evolutionary algorithms aiming to find the best parameter values of a system under different circumstances. By taking advantage of the current optimum solution, literature [4] modifies the search equation of PSO in order to accelerate its convergence speed. Enlightened by biological evolution, literature [5] developed the differential evolution (DE) algorithm, and an adaptive evolutionary strategy based on DE is proposed in literature [6] to raise the efficiency of DE. The differential search algorithm (DSA), equipped with new crossover and mutation strategies, is presented in literature [7]. Learning from natural systems, literatures [8–10] put forward the artificial bee colony (ABC) algorithm inspired by the natural behavior of bees, the cuckoo search (CS) algorithm inspired by the flight behavior of cuckoos, and the bat algorithm (BA) inspired by the foraging behavior of bats. These evolutionary algorithms have also been applied successfully in many fields, such as blind source separation [11], hyperspectral unmixing [12], and image processing [13].

BSA is a relatively new computational method proposed by Civicioglu in 2013 [14]. BSA employs few parameters and has a simple process that is efficient, fast, and able to handle various conditions, which makes it readily applicable to different complex problems. Literature [14] demonstrates that BSA is competitive with other evolutionary algorithms and has a strong global search capability. Owing to its simplicity and high efficiency, BSA has caught many scholars' attention [15, 16].

However, some challenges remain in BSA. For example, the convergence speed of BSA in the early stage is typically slower than that of classical evolutionary algorithms such as DE and PSO, because BSA ignores the importance of the optimal individual; this makes it difficult for BSA to achieve satisfactory outcomes on some complex problems. Therefore, accelerating convergence speed and improving convergence precision have become two valuable and significant targets in the study of BSA. A number of modified BSA variants aiming at these two targets have hence been put forward since its appearance. For example, literature [17] focuses on improving convergence precision by proposing an optimal evolution equation and an optimal search equation, but it brings extra function evaluations. Literatures [18, 19] focus on convergence precision and global convergence by employing a variation coefficient and a selection mechanism but sacrifice iteration numbers as a result. An improved BSA is proposed in this article with the purpose of accelerating convergence speed and improving convergence precision within fewer iterations. Firstly, to accelerate the convergence of the population in the early stage, a population control factor is added to the evolution equation. Furthermore, an evolutionary equation in which the disadvantaged group searches only around the best individual of the previous iteration is proposed to enhance local search. Simulation experiments based on 18 benchmark functions show that the improved algorithm improves the performance and efficiency of BSA effectively and is a feasible evolutionary algorithm.

The rest of this article is arranged as follows. The second section describes BSA briefly; the third section introduces the improved algorithm; the fourth section presents the simulation results; the fifth section discusses the variance appearing in the first strategy; finally, conclusions are drawn.

2. Backtracking Search Algorithm

Similar to other evolutionary algorithms, BSA is a population-based evolutionary algorithm designed to find a global optimum. Its solving process can be divided into five steps, as in other evolutionary algorithms: population initialization, historical population setting, population evolution, population crossover, and greedy selection. The specific steps of the basic BSA are described as follows.

2.1. Population Initialization

N randomly generated D-dimensional vectors make up the initial population of BSA. The random initialization method can be summarized as

$\mathrm{Pop}_{i,j} = U(\mathrm{low}_j, \mathrm{up}_j), \quad i = 1, 2, \ldots, N, \; j = 1, 2, \ldots, D, \quad (1)$

where N is the population size and D is the problem dimension; low and up stand for the lower and upper bounds of the search space, respectively; U is the uniform distribution function.
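For illustration, a minimal NumPy sketch of this initialization step is given below; the function name, the random generator handling, and the array layout are illustrative choices, not taken from the article.

```python
import numpy as np

def initialize_population(n, d, low, up, rng):
    """Sample an (n, d) population uniformly inside [low, up] per dimension, as in (1)."""
    # low and up may be scalars or length-d arrays giving per-dimension bounds.
    return rng.uniform(low, up, size=(n, d))
```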

2.2. Historical Population Setting

BSA randomly appoints a previous population Oldpop as the historical population, employed with the intent of controlling the search direction, and it remembers these positions until Oldpop is changed. What gives BSA its memory function are two random numbers a and b generated when each cycle starts: if a < b, then Oldpop = Pop; meanwhile, the order of the individuals in Oldpop is arranged randomly.
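A small sketch of this memory mechanism, under the same illustrative NumPy conventions as above:

```python
import numpy as np

def update_historical_population(pop, oldpop, rng):
    """Redefine Oldpop from the current Pop when a < b, then shuffle its individuals."""
    a, b = rng.random(), rng.random()   # the two random numbers that give BSA its memory
    if a < b:
        oldpop = pop.copy()
    return rng.permutation(oldpop)      # randomly rearrange the order of the individuals
```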

2.3. Population Evolution

At this stage, BSA generates the new population by making full use of the previous experience stored in Pop and Oldpop; the evolutionary equation is

$M_{i,j} = \mathrm{Pop}_{i,j} + F \cdot (\mathrm{Oldpop}_{i,j} - \mathrm{Pop}_{i,j}), \quad (2)$

where $i = 1, \ldots, N$ and $j = 1, \ldots, D$; rand is a random number in the range [0, 1]; and F is a parameter, obtained from rand, that controls the range of the search-direction matrix (Oldpop − Pop).
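A one-line sketch of this mutation step; the amplitude F = 3 · rand is an assumption borrowed from common BSA implementations rather than a value stated here.

```python
import numpy as np

def bsa_mutation(pop, oldpop, rng):
    """Mutant population M = Pop + F * (Oldpop - Pop), as in (2)."""
    F = 3.0 * rng.random()              # assumed amplitude of the search-direction matrix
    return pop + F * (oldpop - pop)
```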

2.4. Population Crossover

The population crossover strategy of BSA, different from that of DE, employs a mixed proportion parameter mixrate to control the number of crossed elements. The crossover equation is

$T_{i,j} = \begin{cases} \mathrm{Pop}_{i,j}, & \mathrm{map}_{i,j} = 1, \\ M_{i,j}, & \mathrm{map}_{i,j} = 0, \end{cases} \quad (3)$

where map is an N × D matrix which contains only 0 and 1, and its initial value is 1. Its specific calculation formula is

$\mathrm{map}_{i,\,u(1:\lceil \mathrm{mixrate} \cdot \mathrm{rand} \cdot D \rceil)} = 0 \;\; \text{if } a < b, \qquad \mathrm{map}_{i,\,\mathrm{randi}(D)} = 0 \;\; \text{otherwise}, \quad (4)$

where randi(D) is an integer selected randomly from 1 to D; mixrate is the mixed proportion parameter; rand is chosen randomly in the range [0, 1], and so are a and b; and u is a randomly sorted integer vector, u = permuting(⟨1, 2, …, D⟩). The map generated in the crossover stage controls, through a and b, how many elements of each individual are taken from the mutant. When a < b, a row of map contains zeros at many random positions; otherwise, it contains only a single element "0". Some individuals generated in the crossover stage can overflow the limited search space as a result of the crossover strategy. For this reason, a boundary control is applied to the individuals beyond the search space by using (1) again.
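The map-based crossover can be sketched as follows; the two-branch map generation mirrors the description above, and mixrate is the mixed proportion parameter.

```python
import numpy as np

def bsa_crossover(pop, mutant, mixrate, rng):
    """Trial population: where map == 1 keep Pop, where map == 0 take the mutant, as in (3)-(4)."""
    n, d = pop.shape
    cmap = np.ones((n, d), dtype=int)            # initial value of map is 1
    if rng.random() < rng.random():              # a < b: zero out several random positions per row
        for i in range(n):
            u = rng.permutation(d)
            k = int(np.ceil(mixrate * rng.random() * d))
            cmap[i, u[:k]] = 0
    else:                                        # otherwise: zero out a single random position per row
        for i in range(n):
            cmap[i, rng.integers(d)] = 0
    return np.where(cmap == 1, pop, mutant)
```

Elements of the resulting trial population that leave the search space would then be regenerated with (1), as described above.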

2.5. Greedy Selection

When BSA finishes the above steps, individuals of the new population that have better positions than those of the original population are used to update Pop through greedy selection and become the new candidate solutions. Meanwhile, if the best individual of the new population has a lower fitness than the optimum obtained from previous iterations, it replaces the old optimum. Pop and the optimum remain unchanged when the conditions mentioned above are not satisfied.
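A corresponding greedy-selection sketch for minimization; the array names are illustrative.

```python
import numpy as np

def greedy_selection(pop, fitness, trial, trial_fitness):
    """Keep a trial individual only if its fitness is better (lower) than the current one."""
    improved = trial_fitness < fitness
    new_pop = np.where(improved[:, None], trial, pop)
    new_fitness = np.where(improved, trial_fitness, fitness)
    return new_pop, new_fitness
```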

3. The Improved Backtracking Search Algorithm

BSA was introduced briefly above, and it can easily be seen that BSA, equipped with a single control parameter, a trial (historical) population Oldpop, and a simple structure, has a good ability to explore the search space in the early stage. However, evolutionary algorithms commonly fall into local optima when facing multimodal and nonlinear problems, and BSA is no exception. In this regard, an improved BSA based on a population control factor and an optimal learning strategy, inspired by PSO and DE, respectively, is presented in this article. On the one hand, a population control factor that regulates the search step of the population is employed when BSA operates; on the other hand, the disadvantaged group is guided to learn from the optimal individual of the previous iteration. The specific improvements are as follows.

3.1. Improved BSA Based on Population Control Factor (CBSA)

A shortcoming of BSA is that Pop in the mutation equation (2) has a purely stochastic character, which makes BSA lack memory of the population position while it follows the trend of expanding the search space and exploring new search areas. In other words, Pop tends toward global optimization. Therefore, we can take global search first and local fine search later into account. Enlightened by literature [20], an inertia weight w is employed in the evolutionary equation (2), and the new evolutionary equation is

$M = \mathrm{Pop} + w \cdot F \cdot (\mathrm{Oldpop} - \mathrm{Pop}), \quad (5)$

where w is a variable parameter that changes linearly as the iteration number increases. The target referred to above can be reached because the algorithm has a stronger global search ability when w is larger and tends toward local search when w is smaller. Thus, a global coefficient can be employed adaptively to improve search accuracy and efficiency when facing complex optimization problems. But every coin has two sides: this strategy cannot reflect the actual search process when the proposed algorithm faces nonlinear problems, owing to the linearity of w. Therefore this article presents the following search equation:

$M = \mathrm{Pop} + R \cdot F \cdot (\mathrm{Oldpop} - \mathrm{Pop}), \quad (6)$

where R is a matrix whose elements follow a normal distribution with mean w and variance σ². In this way, a disturbance around w is produced to help the algorithm escape the local optimum when handling nonlinear problems. Finally, the evolution equation is summarized as

$M = \mathrm{Pop} + \lambda \cdot F \cdot (\mathrm{Oldpop} - \mathrm{Pop}), \quad (7)$

where $\lambda \sim N(w, \sigma^{2})$ is named the population control factor. The above evolutionary equation can not only improve the convergence speed with the help of w but also enhance the ability to escape local optima with the help of the normal disturbance. Simulation experiments strongly confirm that the improved algorithm performs well in terms of convergence speed.
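The population control factor of (5)-(7) could be generated as in the sketch below. The weight bounds w_max = 0.9 and w_min = 0.4 and the standard deviation sigma = 0.1 are illustrative assumptions, not the article's settings (the choice of the variance is discussed in Section 5).

```python
import numpy as np

def population_control_factor(t, t_max, shape, rng, w_max=0.9, w_min=0.4, sigma=0.1):
    """lambda ~ N(w, sigma^2): a normal disturbance around a linearly decreasing weight w."""
    w = w_max - (w_max - w_min) * t / t_max      # w shrinks linearly over the iterations
    return rng.normal(loc=w, scale=sigma, size=shape)

def cobsa_mutation(pop, oldpop, t, t_max, rng):
    """Improved mutation (7): M = Pop + lambda * F * (Oldpop - Pop)."""
    lam = population_control_factor(t, t_max, pop.shape, rng)
    F = 3.0 * rng.random()                       # same assumed amplitude as in the basic BSA sketch
    return pop + lam * F * (oldpop - pop)
```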

3.2. Improved BSA Based on Optimal Learning Strategy (OBSA)

Another shortcoming of BSA is that it does not borrow enough experience from DE in taking full advantage of the current best individual. Besides, it is important for evolutionary algorithms to find a balance between global search and local search. BSA, focusing on global optimization through (2), displays a poor local search ability because it ignores the importance of the optimum individual. The optimum individual in the current population is a very meaningful source of information that can be used to improve local search. In this regard, enlightened by literature [5] and literature [21], this article proposes the following search equation:

$T_{i,j} = P_{\mathrm{best},j} + N(0,1) \cdot P_{\mathrm{best},j}, \quad i = 1, \ldots, N, \; j = 1, \ldots, D, \quad (8)$

where P_best is the optimum solution in the current population and N(0,1) is a standard normal random number. It can be observed from Figure 1 that the new individuals generated by (8) are random enough for local search, since there is a high probability that they appear around the best individual. In other words, the evolutionary equation described by (8) is good at local search but poor at global search.
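Under one reading of (8), the optimal learning step could be sketched as below: every trial individual that failed to improve (the disadvantaged group) is regenerated by perturbing the current best individual with a standard-normal factor. The exact form of the perturbation and the function names are assumptions.

```python
import numpy as np

def optimal_learning(trial, trial_fitness, fitness, best, rng):
    """Regenerate the disadvantaged group of the trial population around the best individual."""
    worse = trial_fitness >= fitness                          # individuals that did not improve
    d = trial.shape[1]
    regenerated = best * (1.0 + rng.standard_normal((int(worse.sum()), d)))
    out = trial.copy()
    out[worse] = regenerated
    return out
```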

3.3. Improved BSA Based on Population Control Factor and Optimal Learning Strategy (COBSA)

With the purpose of overcoming the slow convergence speed in the early stage and the low convergence precision in the late stage of BSA, an improved BSA based on the population control factor and the optimal learning strategy, combining Sections 3.1 and 3.2, is presented in this article. The improved BSA, named COBSA, produces the new population by employing (7), and the optimal learning strategy based on (8) is applied to the disadvantaged group that behaves worse than the previous population. The specific steps of COBSA are given as follows, followed by a code sketch that assembles them.

Step 1. Initialize the population Pop and Oldpop.

Step 2. Generate two random numbers a and b; if a < b, set Oldpop = Pop, and then arrange the individuals of Oldpop in random order.

Step 3. Produce a new population on the basis of the current population and Oldpop by employing (7).

Step 4. Generate the crossover matrix map by using (4).

Step 5. Apply the crossover strategy through (3).

Step 6. Reinitialize the individuals that are out of the search boundary by using (1).

Step 7. Evaluate the fitness of the new population, and guide the disadvantaged group to perform optimal learning by employing (8).

Step 8. Select the optimal population according to the greedy selection mechanism.

Step 9. Judge whether the cycle meets the target; if not, return to Step 2.

Step 10. Output optimal solution.
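Putting Steps 1-10 together, a minimal COBSA loop could look as follows. It reuses the hypothetical helper functions sketched in the previous sections, and every parameter value shown (population size, cycle budget, mixrate, bounds) is illustrative rather than prescribed by the article.

```python
import numpy as np

def cobsa(objective, low, up, n=30, d=30, t_max=1000, mixrate=1.0, seed=0):
    """Minimal COBSA sketch following Steps 1-10 (minimization)."""
    rng = np.random.default_rng(seed)
    pop = initialize_population(n, d, low, up, rng)               # Step 1
    oldpop = initialize_population(n, d, low, up, rng)
    fitness = np.apply_along_axis(objective, 1, pop)
    best = pop[np.argmin(fitness)].copy()
    for t in range(t_max):                                        # Step 9: repeat until the budget is spent
        oldpop = update_historical_population(pop, oldpop, rng)   # Step 2
        mutant = cobsa_mutation(pop, oldpop, t, t_max, rng)       # Step 3: evolution with (7)
        trial = bsa_crossover(pop, mutant, mixrate, rng)          # Steps 4-5: map-based crossover
        out_of_bounds = (trial < low) | (trial > up)              # Step 6: re-initialize stray elements
        trial[out_of_bounds] = rng.uniform(low, up, size=(n, d))[out_of_bounds]
        trial_fitness = np.apply_along_axis(objective, 1, trial)  # Step 7: evaluate ...
        trial = optimal_learning(trial, trial_fitness, fitness, best, rng)   # ... and apply (8)
        trial_fitness = np.apply_along_axis(objective, 1, trial)
        pop, fitness = greedy_selection(pop, fitness, trial, trial_fitness)  # Step 8
        best = pop[np.argmin(fitness)].copy()
    return best, float(fitness.min())                             # Step 10
```

For instance, cobsa(lambda x: float(np.sum(x ** 2)), -100.0, 100.0) would minimize a Sphere-like objective under these assumptions.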

4. Experiment and Result Analysis

In order to verify the effectiveness of the improved algorithm proposed in this article, COBSA is applied to minimize a set of 18 benchmark functions commonly used for testing optimization algorithms. Table 1 presents the function form, search space, and dimension used for the evolutionary algorithms in the tests. The different characters of the 18 benchmark functions raise different difficulties when COBSA operates: several of the functions are unimodal and continuous, while others are multimodal, and their number of local minima increases obviously as the dimension gets larger. Rosenbrock is unimodal when the dimension is 2 or 3, but numerous local minima arise when the problem dimension is high [22]. The three-hump camel function, defined in two dimensions, has three local minima. This article adjusts the form of some functions for convenience of comparison, since different versions of these functions appear in different literatures.


Function         Function expression    Dim    Search space
Sphere                                  30
Sumsquare                               60
Sumpower                                60
Schwefel2.22                            100
Exponential                             60
Alpine                                  60
Rastrigin                               100
Griewank                                60
Rosenbrock                              30
Ackley                                  60
R-H-Ellipsoid                           80
Schwefel1.2                             10
Elliptic                                30
NCRastrigin                             50
T-H camel                               2
Schaffer                                10
Powell                                  70
Weierstrass                             60
Enlightened by literatures [23–25], the following methods are designed to evaluate the performance of COBSA: (a) convergence performance within a fixed number of iterations; (b) iteration performance at a fixed precision; (c) comparison algorithms employed to compare with COBSA; (d) a statistical test based on the Wilcoxon Signed-Rank Test used to analyse the properties of COBSA and the comparison algorithms.

The parameters used in evaluation methods (a), (b), (c), and (d) are listed as follows: the population size is 30, the maximum number of cycles for each benchmark function is 1000, and the final result is the average value over 20 independent runs.

4.1. Convergence Performance in Fixed Iteration

In this section, in order to analyse how much the population control factor and the optimal learning strategy contribute to improving the performance of BSA, this article compares the convergence speed and convergence precision of BSA, CBSA, OBSA, and COBSA on the 18 objective functions with the maximum cycle set to 1000. The function fitness is plotted on a base-10 logarithmic scale to display the differences between the strategies more clearly.

Important observations on the convergence speed and convergence precision of the different algorithms can be obtained from the results displayed in Figure 2. CBSA shows a better convergence speed than BSA under the same conditions, which powerfully implies the efficiency of evolution equation (7). It can also be obtained from Figure 2 that OBSA has better convergence precision than BSA in the late stage, which shows the correctness of search equation (8) proposed in Section 3.2. Meanwhile, COBSA, which combines CBSA and OBSA, displays both faster speed and higher precision in Figure 2 and thus has a more obvious influence on the performance of BSA when compared with BSA, CBSA, and OBSA.

4.2. Iteration Performance in Fixed Precision

In this section, the algorithms are compared by the number of iterations they need when a limited evolution precision is employed. All the algorithms have been run 20 times independently on the 18 objective functions. For each function, the current run stops either when the precision falls below the limitation given in Table 2 or when the maximum cycle is reached. The results for BSA, CBSA, OBSA, and COBSA are displayed in Table 2 in terms of min, max, mean, SR, and EI. These parameters are defined as follows: the success rate (SR) = number of runs achieving the target precision ÷ total number of runs; the expected iterations (EI) = number of particles × average iteration number ÷ success rate. The symbol "∞" appearing in Table 2 indicates that the expected iterations are infinite.
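As a small illustration of the two measures, the sketch below computes SR and EI from per-run iteration counts; the population size of 30 matches the setting above, while everything else is illustrative.

```python
def success_rate_and_expected_iterations(iterations, reached_target, n_particles=30):
    """SR = successes / runs; EI = particles * mean iterations / SR (infinite when SR = 0)."""
    runs = len(iterations)
    sr = sum(reached_target) / runs
    mean_iters = sum(iterations) / runs
    ei = float("inf") if sr == 0 else n_particles * mean_iters / sr
    return sr, ei
```

For example, 20 successful runs with a mean of 692 iterations give EI = 30 × 692 = 20760, matching the corresponding COBSA entry in Table 2.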


Function        Limitation   Algorithm   Min    Max    Mean   SR     EI

Sphere                       BSA         1000   1000   1000   0      ∞
                             CBSA        845    925    877    1      26310
                             OBSA        870    1000   974    0.3    97400
                             COBSA       635    734    692    1      20760

Sumsquare                    BSA         1000   1000   1000   0      ∞
                             CBSA        938    1000   971    0.95   30663
                             OBSA        720    1000   957    0.35   82028
                             COBSA       664    802    742    1      22260

Sumpower                     BSA         927    1000   993    0.15   198600
                             CBSA        334    442    381    1      11430
                             OBSA        211    353    288    1      8640
                             COBSA       159    259    223    1      6690

Schwefel2.22                 BSA         1000   1000   1000   0      ∞
                             CBSA        844    981    877    1      26310
                             OBSA        791    1000   961    0.35   82371
                             COBSA       688    851    747    1      22410

Exponential                  BSA         1000   1000   1000   0      ∞
                             CBSA        757    837    791    1      23730
                             OBSA        725    1000   952    0.5    57120
                             COBSA       613    755    675    1      20250

Alpine                       BSA         1000   1000   1000   0      ∞
                             CBSA        913    1000   954    0.9    31800
                             OBSA        951    1000   993    0.2    148950
                             COBSA       711    830    784    1      23520

Rastrigin                    BSA         1000   1000   1000   0      ∞
                             CBSA        561    646    599    1      17970
                             OBSA        250    541    453    1      13590
                             COBSA       354    488    422    1      12660

Griewank                     BSA         1000   1000   1000   0      ∞
                             CBSA        477    558    529    1      15870
                             OBSA        329    615    465    1      13680
                             COBSA       247    469    411    1      12330

Rosenbrock      30           BSA         1000   1000   1000   0      ∞
                             CBSA        215    298    267    1      8010
                             OBSA        163    349    244    1      7320
                             COBSA       141    241    189    1      5670

Ackley                       BSA         1000   1000   1000   0      ∞
                             CBSA        958    1000   979    0.45   65266
                             OBSA        830    1000   989    0.3    98900
                             COBSA       687    860    784    1      23520

R-H-Ellipsoid                BSA         1000   1000   1000   0      ∞
                             CBSA        873    947    914    1      27420
                             OBSA        693    1000   927    0.9    30900
                             COBSA       629    816    710    1      21300

Schwefel1.2                  BSA         1000   1000   1000   0      ∞
                             CBSA        892    1000   986    0.35   84514
                             OBSA        766    1000   883    0.9    29433
                             COBSA       532    710    624    1      18720

Elliptic                     BSA         1000   1000   1000   0      ∞
                             CBSA        630    700    667    1      20010
                             OBSA        505    1000   821    0.85   28976
                             COBSA       524    611    566    1      16980

NCRastrigin                  BSA         1000   1000   1000   0      ∞
                             CBSA        590    790    679    1      20370
                             OBSA        286    960    608    1      18240
                             COBSA       424    556    492    1      14760

T-H camel                    BSA         1000   1000   1000   0      ∞
                             CBSA        869    1000   973    0.6    48650
                             OBSA        803    1000   991    0.45   66066
                             COBSA       763    893    786    1      10560

Schaffer        0.01         BSA         1000   1000   1000   0      ∞
                             CBSA        800    1000   948    0.7    40628
                             OBSA        596    1000   831    0.8    31162
                             COBSA       475    667    577    1      17310

Powell                       BSA         1000   1000   1000   0      ∞
                             CBSA        956    1000   990    0.5    59400
                             OBSA        658    1000   854    0.75   34160
                             COBSA       545    761    689    1      20670

Weierstrass     0.01         BSA         1000   1000   1000   0      ∞
                             CBSA        256    323    294    1      8820
                             OBSA        172    1000   496    0.85   17505
                             COBSA       218    327    260    1      7800

An interesting result is that BSA does not meet the target precision for any function except Sumpower, and even there the SR of BSA is just 0.15, whereas that of CBSA, OBSA, and COBSA is 1. Besides, for the other functions, CBSA and OBSA display a higher SR and a smaller EI than BSA, while COBSA reaches a 100% success rate on all functions with the fewest expected iterations. The iteration performance at fixed precision in this section therefore also proves the superiority of COBSA.

4.3. Experiment Based on Comparison Algorithms

In this section, some relatively new evolutionary algorithms proposed in recent years are employed for comparison with COBSA: ABC (limit = 100), MABC (limit = 100), BA, DSA, and BSA. The results are judged in terms of the mean and standard deviation (Std) of the solutions obtained from 20 independent runs.

As can be seen from Table 3, COBSA offers higher convergence precision on almost all functions, and the standard deviation of COBSA is also much smaller than that of the comparison algorithms. In particular, COBSA can find the optimal solution on Rastrigin, Griewank, NCRastrigin, and Weierstrass. However, in the case of Rosenbrock, the convergence precision of COBSA is worse than that of ABC and MABC; since ABC is only one order of magnitude better than COBSA on Rosenbrock, the superiority of ABC and MABC is not very obvious. Besides, COBSA displays a more stable search process than ABC and MABC owing to its lower standard deviation on Rosenbrock.


Function

ABC
 Mean
 Std
MABC
 Mean
 Std
DSA
 Mean
 Std
BSA
 Mean
 Std