Computational Intelligence and Neuroscience


Research Article | Open Access


Hailong Wang, Zhongbo Hu, Yuqiu Sun, Qinghua Su, Xuewen Xia, "Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems", Computational Intelligence and Neuroscience, vol. 2018, Article ID 9167414, 27 pages, 2018. https://doi.org/10.1155/2018/9167414

Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

Academic Editor: Silvia Conforto
Received: 10 Oct 2017
Accepted: 20 Dec 2017
Published: 13 Feb 2018

Abstract

The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, but its local exploitation capability is relatively poor, which affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F decreases adaptively as the number of iterations increases and introduces no extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms on thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and competitive with other well-known algorithms in terms of convergence speed.

1. Introduction

Optimization is an essential research objective in the fields of applied mathematics and computer science. Optimization algorithms mainly aim to obtain the global optimum for optimization problems. There are many different kinds of optimization problems in the real world. When an optimization problem has simple, explicit gradient information or requires relatively small budgets of allowed function evaluations, the implementation of classical optimization techniques such as mathematical programming often achieves efficient results [1]. However, many real-world engineering optimization problems have complex, nonlinear, or nondifferentiable forms, which make them difficult to tackle using classical optimization techniques. The emergence of metaheuristic algorithms has overcome the deficiencies of classical optimization techniques to some extent, as they do not require gradient information and have the ability to escape from local optima. Metaheuristic algorithms are mainly inspired by a variety of natural phenomena and/or biological social behavior. Among these metaheuristic algorithms, swarm intelligence algorithms and evolutionary algorithms are perhaps the most attractive [2]. Swarm intelligence algorithms [3] generally simulate the intelligent behavior of swarms of creatures; examples include particle swarm optimization (PSO) [4], ant colony optimization (ACO) [5], cuckoo search (CS) [6], and the artificial bee colony (ABC) algorithm [7]. These types of algorithms are generally developed by inspiration from a series of complex behavior processes in swarms with mutual cooperation and self-organization, in which "cooperation" is the core concept. The evolutionary algorithms (EAs) [8, 9] are inspired by the mechanism of natural evolution, in which "evolution" is the key idea.
Examples of EAs include the genetic algorithm (GA) [10], differential evolution (DE) [11–14], the covariance matrix adaptation evolution strategy (CMAES) [15], and the backtracking search optimization algorithm (BSA) [16].

BSA is an iterative population-based EA, which was first proposed by Civicioglu in 2013. BSA has three basic genetic operators: selection, mutation, and crossover. The main difference between BSA and other similar algorithms is that BSA possesses a memory for storing a population from a randomly chosen previous generation, which is used to generate the search-direction matrix for the next iteration. In addition, BSA has a simple structure, which makes it efficient, fast, and capable of solving multimodal problems. BSA has only one control parameter, called the mix-rate, which significantly reduces the sensitivity of the algorithm to the initial values of its parameters. Due to these characteristics, in less than 4 years BSA has been employed successfully to solve various engineering optimization problems, such as power systems [17–19], induction motors [20, 21], antenna arrays [22, 23], digital image processing [24, 25], artificial neural networks [26–29], and energy and environmental management [30–32].

However, BSA has a weak local exploitation capacity and its convergence speed is relatively slow. Thus, many studies have attempted to improve the performance of BSA, and several modifications of BSA have been proposed to overcome these deficiencies. From the perspective of the modified object, the modifications of BSA can be divided into the following four categories (a publication with more than one modification is classified under its major modification):
(i) Modifications of the initial populations [33–38]
(ii) Modifications of the reproduction operators, including the mutation and crossover operators [39–47]
(iii) Modifications of the selection operators, including the local exploitation strategy [48–51]
(iv) Modifications of the control factor and parameter [52–57].

The research on controlling parameters of EAs is one of the most promising areas of research in evolutionary computation; even a little modification of the parameters in an algorithm can make a considerable difference [58]. In the basic BSA, the value of the amplitude control factor (F) is the product of three and a standard normally distributed random number (i.e., F = 3 · randn, where randn ~ N(0, 1)), which is often too large or too small according to its formulation. This may give BSA a powerful global exploration capability in the early iterations; however, it also harms the later exploitation capability of BSA. Based on these considerations, we focus mainly on the influence of the amplitude control factor (F) on the BSA, that is, the fourth of the categories defined above. Duan and Luo [52] redesigned an adaptive F based on the fitness statistics of the population at each iteration. Wang et al. [53] and Tian et al. [54] proposed an adaptive F based on the Maxwell-Boltzmann distribution. Askarzadeh and dos Santos Coelho [55] proposed an adaptive F based on Burger's chaotic map. Chen et al. [56] redesigned an adaptive F by introducing two extra parameters. Nama et al. [57] proposed a new F that changes adaptively within a given range and a new mix-rate that changes randomly within a given range. These modifications of F have achieved good effects in the BSA.

Different from the modifications of F in BSA described above, a modified version of BSA (BSAISA) inspired by simulated annealing (SA) is proposed in this paper. In the BSAISA, an iteration-based F is redesigned by learning from the characteristic of SA that it can probabilistically accept a higher energy state and that the acceptance probability decreases as the temperature decreases. The redesigned F can decrease adaptively as the number of iterations increases without introducing extra parameters. This adaptive variation tendency provides an efficient tradeoff between early exploration and later exploitation capability. We verified the effectiveness and competitiveness of BSAISA in simulation experiments on thirteen constrained benchmarks and five engineering design problems in terms of convergence speed.

The remainder of this paper is organized as follows. Section 2 introduces the basic BSA. As the main contribution of this paper, a detailed explanation of BSAISA is presented in Section 3. In Section 4, we present two sets of simulation experiments in which we implemented BSAISA and BSA to solve thirteen constrained optimization and five engineering design problems. The results are compared with those obtained by other well-known algorithms in terms of the solution quality and function evaluations. Finally, we give our concluding remarks in Section 5.

2. Backtracking Search Optimization Algorithm (BSA)

BSA is a population-based iterative EA. BSA generates trial populations to take control of the amplitude of the search-direction matrix which provides a strong global exploration capability. BSA equiprobably uses two random crossover strategies to exchange the corresponding elements of individuals in populations and trial populations during the process of crossover. Moreover, BSA has two selection processes. One is used to select population from the current and historical populations; the other is used to select the optimal population. In general, BSA can be divided into five processes: initialization, selection I, mutation, crossover, and selection II [16].

2.1. Initialization

BSA generates the initial population P and the initial old population oldP using

P(i, j) ~ U(low(j), up(j)),  oldP(i, j) ~ U(low(j), up(j)),  i = 1, …, N,  j = 1, …, D,  (1)

where i indexes the individuals in the population of size N, j indexes the positions in the problem dimension D, low(j) and up(j) are the lower boundary and the upper boundary of the jth dimension, respectively, and U is the random uniform distribution.

2.2. Selection I

BSA's selection I process is the beginning of each iteration. It aims to reselect a new oldP for calculating the search direction based on the population P and the historical population oldP. The new oldP is reselected through the "if-then" rule in

if a < b then oldP := P,  a, b ~ U(0, 1),  (2)

where := is the update operation and a and b represent random numbers between 0 and 1. The update operation (see (2)) ensures that BSA has a memory. After oldP is reselected, the order of the individuals in oldP is randomly permuted by

oldP := permuting(oldP).  (3)
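As a rough illustration, selection I can be sketched in a few lines of numpy (the function name selection_one and the rng argument are ours, not from the paper):

```python
import numpy as np

def selection_one(P, oldP, rng):
    """Selection I of BSA (a sketch): with probability 0.5 (a < b for
    a, b ~ U(0, 1)) the historical population oldP is replaced by the
    current population P; then its individuals are randomly permuted."""
    a, b = rng.random(), rng.random()
    if a < b:
        oldP = P.copy()  # memory update: remember the current generation
    return oldP[rng.permutation(oldP.shape[0])]  # permuting(oldP)
```

Whichever branch is taken, the returned matrix contains the rows of either P or the previous oldP in shuffled order, so the memory property of (2) is preserved.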

2.3. Mutation

The mutation operator is used for generating the initial form of the trial population Mutant with

Mutant = P + F · (oldP − P),  (4)

where F is the amplitude control factor of the mutation operator, used to control the amplitude of the search direction. In the basic BSA, F = 3 · randn, where randn ~ N(0, 1) is a standard normally distributed random number.
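The mutation step amounts to one line of array arithmetic; a minimal sketch using the basic BSA factor F = 3 · randn (the function name is ours):

```python
import numpy as np

def bsa_mutation(P, oldP, rng):
    """Basic BSA mutation: Mutant = P + F * (oldP - P), with F = 3 * randn."""
    F = 3.0 * rng.standard_normal()  # amplitude control factor of basic BSA
    return P + F * (oldP - P)        # search direction is (oldP - P)
```

Note that when oldP equals P the search direction vanishes and the mutant equals P regardless of F.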

2.4. Crossover

In this process, BSA generates the final form of the trial population T. BSA equiprobably uses two crossover strategies to manipulate the selected elements of the individuals at each iteration. Both strategies generate binary integer-valued matrices (map) of size N × D to select the elements of individuals that have to be manipulated.

Strategy I uses the mix-rate parameter (mix-rate) to control the number of elements of each individual that are manipulated, by setting map(i, u(1:⌈mix-rate · rnd · D⌉)) = 0, where u is a random permutation of (1, …, D), rnd ~ U(0, 1), and mix-rate = 1. Strategy II manipulates only one randomly selected element of each individual, by setting map(i, randi(D)) = 0, where randi(D) is a random integer between 1 and D.

The two strategies are employed equiprobably to manipulate the elements of the individuals. The trial population is then formed through the "if-then" rule: if map(i, j) = 1, where i ∈ {1, …, N} and j ∈ {1, …, D}, then T(i, j) is updated with T(i, j) := P(i, j); otherwise T(i, j) keeps the mutant value Mutant(i, j).

At the end of crossover process, if some individuals in have overflowed the allowed search space limits, they will need to be regenerated by using (1).
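Putting the two map strategies together, the crossover step might look as follows (a sketch under the common BSA convention that the trial starts from the mutant and positions with map = 1 are copied back from P; all names are ours):

```python
import numpy as np

def bsa_crossover(P, Mutant, mix_rate, rng):
    """BSA crossover sketch: build a binary map of size N-by-D, then mix
    elements of P back into the mutant where map == 1."""
    N, D = P.shape
    m = np.ones((N, D), dtype=int)
    if rng.random() < rng.random():          # Strategy I (mix-rate driven)
        for i in range(N):
            u = rng.permutation(D)
            k = int(np.ceil(mix_rate * rng.random() * D))
            m[i, u[:k]] = 0                  # these positions keep the mutant
    else:                                    # Strategy II (single element)
        for i in range(N):
            m[i, rng.integers(D)] = 0
    return np.where(m == 1, P, Mutant)       # final trial population T
```

Boundary control of the resulting T (regeneration of out-of-range elements) is omitted here for brevity.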

2.5. Selection II

In BSA's selection II process, the fitness values in P and T are compared and used to update the population based on a greedy selection. If T(i) has a better fitness value than P(i), then P(i) is updated to be T(i). The population is updated by using

P(i) := T(i)  if  f(T(i)) < f(P(i)).  (5)
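The greedy selection II is a pairwise comparison between P and T; a vectorized sketch (names ours):

```python
import numpy as np

def selection_two(P, T, fitnessP, fitnessT):
    """Greedy Selection II: keep each trial individual when it is fitter
    (smaller objective value, for minimization) than the current one."""
    better = fitnessT < fitnessP
    P = np.where(better[:, None], T, P)              # replace improved rows
    fitnessP = np.where(better, fitnessT, fitnessP)  # and their fitness
    return P, fitnessP
```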

If the best individual of P (P_best) has a better fitness value than the global minimum value obtained so far, the global minimizer will be updated to be P_best, and the global minimum value will be updated to be the fitness value of P_best.

3. Modified BSA Inspired by SA (BSAISA)

As mentioned in the introduction of this paper, research work on the control parameters of an algorithm is very meaningful and valuable. In this paper, in order to improve BSA's exploitation capability and convergence speed, we propose a modified version of BSA (BSAISA) in which the redesign of F is inspired by SA. The details of the modification are described in this section. First, the structure of the modified F is described, before we explain the detailed design principle of the modified F inspired by SA. Subsequently, two numerical tests are used to illustrate that the redesigned F improves the convergence speed of the algorithm. We introduce a self-adaptive ε-constrained method for handling constraints at the end of this section.

3.1. Structure of the Adaptive Amplitude Control Factor

The modified F is a normally distributed random number, where its mean value is an exponential function and its variance is equal to 1. In BSAISA, we redesign the adaptive F to replace the original version using

F(i) = N(exp(−G / Δf(i)), 1),  Δf(i) = |f(P(i)) − f(oldP(i))|,  (6)

where i is the index of individuals, F(i) is the adaptive amplitude control factor that corresponds to the ith individual, Δf(i) is the absolute value of the difference between the objective function values of P(i) and oldP(i) (the individual differences), N is the normal distribution, and G is the current iteration.

According to (6), the exponential function (the mean value) decreases dynamically with the change in the number of iterations (G) and the individual differences (Δf). Based on the probability density function curve of the normal distribution, the modified F can be decreased adaptively as the number of iterations increases. Another characteristic of the modified F is that it does not introduce any extra parameters.
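A minimal numpy sketch of the redesigned factor, following (6) above (the function name and the small guard against division by zero are ours):

```python
import numpy as np

def modified_F(fitnessP, fitnessoldP, iteration, rng):
    """BSAISA's adaptive amplitude control factor (sketch of Eq. (6)):
    F_i ~ N(exp(-G / delta_f_i), 1), so the mean decays toward zero as
    the iteration count G grows, while the variance stays at 1."""
    delta_f = np.abs(fitnessP - fitnessoldP) + 1e-12  # guard: avoid /0
    mean = np.exp(-iteration / delta_f)
    return rng.normal(loc=mean, scale=1.0)            # one F per individual
```

The mean exp(−G/Δf) is close to 1 when G is small relative to Δf (wide early exploration) and close to 0 in the late iterations (fine local exploitation).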

3.2. Design Principle of the Modified Amplitude Control Factor

The design principle of the modified is inspired by the Metropolis criterion in SA. SA is a metaheuristic optimization technique based on physical behavior in nature. SA based on the Monte Carlo method was first proposed by Metropolis et al. [59] and it was successfully introduced into the field of combinatorial optimization for solving complex optimization problems by Kirkpatrick et al. [60].

The basic concept of SA derives from the process of physical annealing of solids. An annealing process occurs when a metal is heated to a molten state at a high temperature and then cooled slowly. If the temperature is decreased quickly, the resulting crystal will have many defects and will be merely metastable; even the most stable crystalline state may not be achieved at all. In other words, this may form a higher energy state than the most stable crystalline state. Therefore, in order to reach the absolute minimum energy state, the temperature needs to be decreased at a slow rate. SA simulates this process of annealing to search for the global optimal solution of an optimization problem. However, accepting only the moves that lower the energy of the system is like extremely rapid quenching; thus SA uses a special and effective acceptance method, the Metropolis criterion, which can probabilistically accept hill-climbing moves (higher energy moves). As a result, the energy of the system evolves into a Boltzmann distribution during the process of simulated annealing. From this point of view, it is no exaggeration to say that the Metropolis criterion is the core of SA.

The Metropolis criterion can be expressed by the physical significance of energy, where the new energy state will be accepted when the new energy state is lower than the previous energy state, and the new energy state will be probabilistically accepted when the new energy state is higher than the previous energy state. This feature of SA can escape from being trapped in local minima especially in the early stages of the search. It can also be described as follows.

(i) If E(n+1) ≤ E(n), then the new state is accepted and the energy with the displaced atom is used as the starting point for the next step, where E represents the energy of the atom. Both n and n+1 are states of atoms, and n+1 is the next state of n.

(ii) If E(n+1) > E(n), then calculate the probability p = exp(−(E(n+1) − E(n)) / (k_B · T)) and generate a random number r, which is a uniform distribution over (0, 1), where k_B is Boltzmann's constant (in general, k_B = 1) and T is the current temperature. If r < p, then the new energy state will be accepted; otherwise, the previous energy state is used to start the next step.
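The two cases can be condensed into a tiny acceptance routine (a textbook sketch, with k_B = 1 by default as noted above; names ours):

```python
import math
import random

def metropolis_accept(E_old, E_new, T, k_B=1.0, rng=random):
    """Metropolis criterion: always accept a lower-energy state; accept a
    higher-energy state with probability exp(-(E_new - E_old) / (k_B * T))."""
    dE = E_new - E_old
    if dE <= 0:
        return True                    # case (i): downhill move
    p = math.exp(-dE / (k_B * T))      # case (ii): uphill move
    return rng.random() < p
```

At high T almost any uphill move is accepted; as T falls, the probability of accepting uphill moves shrinks toward zero, which is exactly the behavior the modified F borrows.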

Analysis 1. The Metropolis criterion states that SA has two characteristics: (1) SA can probabilistically accept the higher energy and (2) the acceptance probability of SA decreases as the temperature decreases. Therefore, SA can reject and jump out of a local minimum with a dynamic and decreasing probability to continue exploiting the other solutions in the state space. This acceptance mechanism can enrich the diversity of energy states.

Analysis 2. As shown in (4), F is used to control the amplitude of population mutation in BSA; thus F is an important factor for controlling population diversity. If F is excessively large, the diversity of the population will be too high and the convergence speed of BSA will slow down. If F is excessively small, the diversity of the population will be reduced, so it will be difficult for BSA to obtain the global optimum and it may be readily trapped by a local optimum. Therefore, adaptively controlling the amplitude of F is a key to accelerating the convergence speed of the algorithm while maintaining its population diversity.

Based on Analyses 1 and 2, it is clear that if F can decrease dynamically, the convergence speed of BSA can be accelerated while the population diversity is maintained. On the other hand, SA possesses exactly this characteristic: its acceptance probability is reduced dynamically. Based on these two considerations, we propose BSAISA with a redesigned F, which is inspired by SA. More specifically, the new F (see (6)) is redesigned by learning from the formulation p = exp(−ΔE / (k_B · T)) of the acceptance probability, and its formulation has been shown in the previous subsection.

For the two formulas of the modified F and p, the individual difference (Δf) of a population or the energy difference (ΔE) of a system decreases as the algorithm iterates; moreover, the temperature of SA tends to decrease, while the iteration count of BSA tends to increase. As a result, one can observe the correspondence between the modified F and p, where the reciprocal of the individual difference (1/Δf) corresponds to the energy difference (ΔE) of SA, and the reciprocal of the current iteration (1/G) corresponds to the current temperature (T) of SA. In this way, the redesigned F can be decreased adaptively as the number of iterations increases.

3.3. Numerical Analysis of the Modified Amplitude Control Factor

In order to verify that the convergence speed of the basic BSA is improved by the modified F, two types (unimodal and multimodal) of unconstrained benchmark functions are used to test the changing trends in F, the population variances, and the best function values as the iterations increase. The two functions are Schwefel 1.2 and Rastrigin, and their detailed information is provided in [61]. The two functions and the user parameters, including the population size (N), the dimensions (D), and the maximum iterations (Max), are shown in Table 1. Three groups of test results are compared: (1) the curves of the mean values of the modified and original F for Schwefel 1.2 and Rastrigin, (2) the curves of the mean population variances of BSA and BSAISA for the two functions, and (3) the convergence curves of BSA and BSAISA for the two functions. They are depicted in Figures 1, 2, and 3, respectively. Based on Figures 1–3, two results can be observed as follows.


Name          N    D   Max

Schwefel 1.2  100  30  500
Rastrigin     100  30  500

Note. "N" means population size, "D" means dimensions, and "Max" means maximum iterations. The objective function formulas and search ranges of the two functions are given in [61].

(1) According to the trends of F in Figure 1, both the original F and the modified F vary randomly according to the normal distribution. The mean value of the original F does not tend to decrease as the number of iterations increases. By contrast, the mean value of the modified F exhibits a clear, fluctuating downward trend as the number of iterations increases.

(2) According to Figure 2, the population variances of BSAISA and BSA both exhibit a clear downward trend as the number of iterations increases. The population variances of BSAISA and BSA are almost the same during the early iterations, which illustrates that the modified F does not reduce the population diversity in the early iterations. In the middle and later iterations, the population variances of BSAISA decrease more quickly than those of BSA, which illustrates that the modified F improves the convergence speed. As can be seen from Figure 3, as the number of iterations increases, the best objective function value of BSAISA drops faster than that of BSA. This shows that the modified F improves the convergence speed of BSA. Moreover, BSAISA can find a more accurate solution at the same computational cost.

Summary. Based on the design principle and numerical analysis of the modified F, the modified F exhibits an overall fluctuating, downward trend, which matches the concept of the acceptance probability in SA. In particular, during the early iterations, the modified F is relatively large. This allows BSAISA to search in a wide region while maintaining its population diversity. As the number of iterations increases, the modified F gradually exhibits a decreasing trend. This accelerates the convergence speed of BSAISA. In the later iterations, the modified F is relatively small. This enables BSAISA to search fully in the local region. Therefore, it can be concluded that BSAISA can adaptively control the amplitude of population mutation to adjust its local exploitation capacity, which may improve the convergence speed of the algorithm. Moreover, the modified F does not introduce extra parameters, so it does not increase the sensitivity of BSAISA to its parameters.

3.4. A Self-Adaptive ε-Constrained Method for Handling Constraints

In general, a constrained optimization problem can be mathematically formulated as a minimization problem:

minimize f(x),  x = (x_1, …, x_D),
subject to g_j(x) ≤ 0,  j = 1, …, m,
           h_j(x) = 0,  j = 1, …, n,

where x is a D-dimensional vector, m is the total number of inequality constraints, and n is the total number of equality constraints. The equality constraints are transformed into inequality constraints by using |h_j(x)| − δ ≤ 0, where δ is a very small degree of violation. Maximization problems are transformed into minimization problems using −f(x). The constraint violation φ(x) is given by

φ(x) = Σ_{j=1}^{m} max(0, g_j(x)) + Σ_{j=1}^{n} max(0, |h_j(x)| − δ).
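The violation measure φ(x) can be sketched directly from this definition (the helper name and the illustrative default tolerance are ours):

```python
def constraint_violation(x, ineq, eq, delta=1e-4):
    """Total constraint violation phi(x): inequality terms max(0, g_j(x))
    plus equality terms max(0, |h_j(x)| - delta), with each equality first
    relaxed to |h_j(x)| - delta <= 0. The default delta is illustrative."""
    phi = sum(max(0.0, g(x)) for g in ineq)          # inequality constraints
    phi += sum(max(0.0, abs(h(x)) - delta) for h in eq)  # relaxed equalities
    return phi
```

By construction, φ(x) = 0 exactly when x satisfies every (relaxed) constraint.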

Several constraint-handling methods have been proposed previously, where the five most commonly used methods comprise penalty functions, feasibility and dominance rules (FAD), stochastic ranking, ε-constrained methods, and multiobjective concepts. Among these five methods, the ε-constrained method is relatively effective and widely used. Zhang et al. [62] proposed a self-adaptive ε-constrained method (SAε) to combine with the basic BSA for constrained problems. It has been verified that SAε has stronger search efficiency and convergence than the fixed ε-constrained method and FAD. In this paper, SAε is combined with BSAISA for constrained optimization problems. It comprises the following two rules: (1) if the constraint violations of two solutions are both smaller than a given ε value, or the two solutions have the same constraint violation, the solution with the better objective function value is preferred, and (2) otherwise, the solution with the smaller constraint violation is preferred. Here ε is a positive value that represents a tolerance related to constraint violation. The self-adaptive ε value is updated at each iteration k: it is initialized from the constraint violation of the top θth individual in the initial population, decreased based on the constraint violation of the top θth individual in the trial population at the current iteration, and governed by the control parameters cp and Tc and the threshold values Th1 and Th2.
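The two comparison rules can be written as a single ε-level predicate (a sketch; the function name is ours and the update schedule of ε itself follows [62]):

```python
def eps_less(f1, phi1, f2, phi2, eps):
    """Self-adaptive epsilon comparison: returns True when solution 1
    (objective f1, violation phi1) is preferred over solution 2 under
    the epsilon-level eps."""
    if (phi1 <= eps and phi2 <= eps) or phi1 == phi2:
        return f1 < f2       # rule (1): compare objective values
    return phi1 < phi2       # rule (2): compare constraint violations
```

With eps = 0 this degenerates to the classic feasibility-and-dominance rule; a larger eps lets slightly infeasible but well-valued solutions win the comparison.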

Firstly, ε(0) is set as the constraint violation of the top θth individual in the initial population. If this initial value is bigger than Th1, cp is assigned one value; otherwise it is assigned the other. Then, if the constraint violation of the top θth individual in the trial population is bigger than Th2, cp is readjusted accordingly. Finally, ε(k) is updated at each iteration. The detailed update formulas of SAε can be acquired from [62], and the related parameter settings of SAε (the same as in [62]) are presented in Table 2.



Note. "Tc" means the maximum iterations; "N" means the population size.

To vividly illustrate the changing trend of the self-adaptive ε value, BSAISA with SAε is used to solve the well-known benchmark constrained function G10 in [61], with the related parameters set as in Table 2. The changing trend of the ε value is shown in Figure 4, where three sampling points are marked. As shown in Figure 4, the ε value declines very fast at first. After it becomes smaller than about 2, it declines in an exponential way. This changing trend of the ε value helps the algorithm to sufficiently search the infeasible domains near the feasible domains.

The pseudocode for BSAISA is shown in Pseudocode 1. In Pseudocode 1, the modified adaptive F is computed in lines (14)–(16). When BSAISA deals with constrained optimization problems, the code in line (8) and line (40) should consider the objective function value and the constraint violation simultaneously, and SAε is applied to choose a better solution or the best solution in line (42) and lines (47)-(48).

Input:  ObjFun, N, D, max-iteration, mix-rate, low, up
Output:  globalminimum, globalminimizer
// a, b, c, d ~ U(0, 1), rand ~ U(0, 1), u = permuting(1, …, D), randi(D) = random integer in {1, …, D}
(1) function BSAISA(ObjFun, N, D, max-iteration, low, up) //  Initialization
(2) globalminimum = inf
(3) for  i  from 1 to N  do
(4) for  j  from 1 to D  do
(5) P(i, j) = low(j) + rand · (up(j) − low(j)) //  Initialization of population P
(6) oldP(i, j) = low(j) + rand · (up(j) − low(j)) //  Initialization of oldP
(7) end
(8) fitnessP(i) = ObjFun(P(i)), fitnessoldP(i) = ObjFun(oldP(i)) //  Initial fitness values of P and oldP
(9) end
(10) for  iteration  from 1 to max-iteration  do //  Selection-I
(11) if  a < b  then  oldP := P  end
(12) oldP := permuting(oldP)
(13) //  Generation of Trial-Population
(14) Δf(i) = |fitnessP(i) − fitnessoldP(i)| // Modified  F  based  on  SA
(15) μ(i) = exp(−iteration / Δf(i)) // Modified  F  based  on  SA
(16) F(i) = N(μ(i), 1) // Modified  F  based  on  SA
(17) Mutant = P + F · (oldP − P) //  Mutation
(18) map = ones(N, D) //  Initial-map is an N-by-D matrix of ones
(19) if  c < d  then //  Crossover
(20) for  i  from 1 to N  do
(21) map(i, u(1:⌈mix-rate · rand · D⌉)) = 0
(22) end
(23) else
(24) for  i  from 1 to N  do  map(i, randi(D)) = 0  end
(25) end
(26) T := Mutant // Generation of Trial Population T
(27) for  i  from 1 to N  do
(28) for  j  from 1 to D  do
(29) if  map(i, j) = 1  then  T(i, j) := P(i, j)
(30) end
(31) end
(32) for  i  from 1 to N  do //  Boundary control
(33) for  j  from 1 to D  do
(34) if  T(i, j) < low(j)  or  T(i, j) > up(j)  then
(35) T(i, j) = low(j) + rand · (up(j) − low(j))
(36) end
(37) end
(38) end
(39) //  end of generating Trial-Population
(40) fitnessT(i) = ObjFun(T(i)) //  Selection-II
(41) for  i  from 1 to N  do
(42) if  fitnessT(i) < fitnessP(i)  then
(43) fitnessP(i) = fitnessT(i)
(44) P(i) = T(i)
(45) end
(46) end
(47) fitnessPbest = min(fitnessP), Pbest = the corresponding individual
(48) if  fitnessPbest < globalminimum  then  // Export globalminimum and globalminimizer
(49) globalminimum := fitnessPbest
(50) globalminimizer := Pbest
(51) end
(52) end

4. Experimental Studies

In this section, two sets of simulation experiments were executed to evaluate the effectiveness of the proposed BSAISA. The first set of experiments was performed on 13 well-known benchmark constrained functions taken from [63] (see Appendix A). These thirteen benchmarks have different properties, as shown in Table 3, including the number of variables (n), objective function types, the feasibility ratio (ρ), constraint types and numbers, and the number of active constraints at the optimum solution. The second set of experiments was conducted on 5 engineering constrained optimization problems chosen from [64] (see Appendix B). These five problems are the three-bar truss design problem (TTP), the pressure vessel design problem (PVP), the tension/compression spring design problem (TCSP), the welded beam design problem (WBP), and the speed reducer design problem (SRP). These engineering problems include objective functions and constraints of various types and natures (quadratic, cubic, polynomial, and nonlinear) with various numbers of design variables (continuous, integer, mixed, and discrete).


Fun.  n   Type       ρ (%)    LI  NI  LE  NE  Active

g01   13  Quadratic  0.0003   9   0   0   0   6
g02   20  Nonlinear  99.9973  1   1   0   0   1
g03   10  Nonlinear  0.0000   0   0   0   1   1
g04   5   Quadratic  27.0079  0   6   0   0   2
g05   4   Nonlinear  0.0000   2   0   0   3   3
g06   2   Nonlinear  0.0057   0   2   0   0   2
g07   10  Quadratic  0.0003   3   5   0   0   6
g08   2   Nonlinear  0.8581   0   2   0   0   0
g09   7   Nonlinear  0.5199   0   4   0   0   2
g10   8   Linear     0.0020   3   3   0   0   3
g11   2   Quadratic  0.0973   0   0   0   1   1
g12   3   Quadratic  4.7679   0   1   0   0   0
g13   5   Nonlinear  0.0000   0   0   0   3   3

Note. "n" is the number of variables. "ρ" represents the feasibility ratio. "LI," "NI," "LE," and "NE" represent linear inequality, nonlinear inequality, linear equality, and nonlinear equality, respectively. "Active" represents the number of active constraints at the global optimum.

The recorded experimental results include the best function value (Best), the worst function value (Worst), the mean function value (Mean), the standard deviation (Std), the best solution (variables of best function value), the corresponding constraint value, and the number of function evaluations (FEs). The number of function evaluations can be considered as a convergence rate or a computational cost.

In order to evaluate the performance of BSAISA in terms of convergence speed, the FEs are taken as the best FEs, corresponding to the obtained best solution. The FEs are calculated as the product of the population size (N) and the number of iterations (t) at which the best function value is first obtained (i.e., FEs = N · t). However, BSAISA also needs to evaluate the initial historical population (oldP), so its actual FEs are N · t plus N (i.e., FEs = N · t + N).
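As a small piece of bookkeeping, the FEs count described above can be expressed as (the helper name is ours):

```python
def best_fes(N, t_best, evaluates_oldP=True):
    """Function evaluations at which the best value is first obtained:
    N evaluations per iteration for t_best iterations, plus N initial
    evaluations of the historical population oldP when the algorithm
    (like BSAISA) evaluates it."""
    return N * t_best + (N if evaluates_oldP else 0)
```

For instance, with N = 30 and the best value first found at the final iteration 11665, this gives 349,980, matching the maximum FEs quoted in Section 4.1.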

4.1. Parameter Settings

For the first experiment, the main parameters for the 13 benchmark constrained functions are the same: the population size (N) is set as 30 and the maximum number of iterations (Max) is set as 11665. Therefore, BSAISA's maximum number of function evaluations (MFEs) equals 349,980 (nearly 350,000). The 13 benchmarks were executed using 30 independent runs.

For the 5 real-world engineering design problems, we use slightly different parameter settings since each problem has a different nature; the settings for TTP, PVP, TCSP, WBP, and SRP are listed in Table 4. The 5 engineering problems were performed using 50 independent runs.

The user parameters of all experiments are presented in Table 4. The termination condition may be the maximum number of iterations, CPU time, or an allowable tolerance between the last result and the best known function value for most metaheuristics. In this paper, the maximum number of iterations is used as the termination condition. All experiments were conducted on a computer with an Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz and 4 GB RAM.


Problem  G01–G13  TTP   PVP   TCSP  WBP   SRP

N        30       20    20    20    20    20
Max      11665    1000  3000  3000  3000  2000
Runs     30       50    50    50    50    50

Note. "N" means the population size; "Max" means the maximum number of iterations; "Runs" denotes the number of independent runs.
4.2. Simulation on Constrained Benchmark Problems

In this section, BSAISA and BSA are performed on the 13 benchmarks simultaneously. Their statistical results obtained from 30 independent runs are listed in Tables 5 and 6, including the best known function value (Known optimal), the results obtained, and the FEs. The best known values for all the test problems are derived from [62]. From Tables 5 and 6, it can be seen that BSAISA is able to find the best known function values for G01, G04, G05, G06, G08, G09, G11, G12, and G13; however, BSA fails to find the best known function value on G09 and G13. For the rest of the functions, BSAISA obtains results very close to the best known function values. Moreover, BSAISA requires fewer FEs than BSA on G01, G03, G04, G05, G06, G07, G08, G09, G11, and G12. Although the FEs of BSAISA are slightly worse than those of BSA for G02, G10, and G13, BSAISA finds more accurate best function values than BSA for G02, G03, G07, G09, G10, and G13.


Fun.  Known optimal   FEs

G01   −15             84,630
G02   −0.803619       349,500
G03   −1.000500       58,560
G04   −30665.538672   121,650
G05   5126.496714     238,410
G06   −6961.813876    89,550
G07   24.306209       15,060
G08   −0.0958250      30,930
G09   680.630057      347,760
G10   7049.248021     346,980
G11   0.749900        87,870
G12   −1              5,430
G13   0.0539415       349,800

Note. "Known optimal" denotes the best known function values in the literature. The same as Table 6.

Fun.  Known optimal   Best            Mean            Worst           FEs

G01   −15             −15             −15             −15.000000      99,300
G02   −0.803619       −0.803255       −0.792760       −0.749326       344,580
G03   −1.000500       −1.000488       −0.998905       −0.990600       348,960
G04   −30665.538672   −30665.538672   −30665.538672   −30665.538672   272,040
G05   5126.496714     5126.496714     5144.041363     5275.384724     299,220
G06   −6961.813876    −6961.813876    −6961.813876    −6961.813876    111,450
G07   24.306209       24.307607       24.344626       24.399896       347,250
G08   −0.0958250      −0.0958250      −0.0958250      −0.0958250      73,440
G09   680.630057      680.630058      680.630352      680.632400      348,900
G10   7049.248021     7049.278543     7053.573853     7080.192700     340,020
G11   0.749900        0.749900        0.749900        0.749900        113,250
G12   −1              −1              −1              −1              13,590
G13   0.0539415       0.0539420       0.1816986       0.5477657       347,400

To further compare BSAISA and BSA, the function value convergence curves of the 13 functions, which show significant differences, are plotted in Figures 5 and 6. The horizontal axis represents the number of iterations, while the vertical axis represents the difference between the objective function value and the best known value. For G02, G04, G06, G08, G11, and G12, the convergence speed of BSAISA is clearly faster than that of BSA, and its best function value is also better. For G01, G03, G05, G07, G09, G10, and G13, the objective function values of BSAISA and BSA fluctuate in the early iterations and then decrease as the number of iterations increases during the middle and late stages. This illustrates that both algorithms are able to escape local optima to different degrees, while the convergence curve of BSAISA still drops faster than that of BSA. Therefore, the experimental results demonstrate that the convergence speed of the basic BSA is improved by our modified amplitude control factor F.
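The exact redesign of F is defined earlier in the paper and is not restated in this section; purely as an illustration of the underlying idea, the following sketch shows a Metropolis-style factor of the form exp(−Δ/T) whose "temperature" shrinks with the iteration count, so F adaptively decays without extra user parameters. The function name, the linear temperature schedule, and the random stand-in for Δ are our assumptions, not the paper's formula:

```python
import math
import random

def metropolis_style_F(iteration: int, max_iter: int) -> float:
    """Hypothetical sketch of an iteration-dependent amplitude control
    factor using the Metropolis acceptance form exp(-delta/T)."""
    T = max(1.0 - iteration / max_iter, 1e-12)  # "temperature" decays toward 0
    delta = random.random()                     # stand-in energy difference
    return math.exp(-delta / T)                 # in (0, 1], shrinks as T -> 0
```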

To further verify the competitiveness of BSAISA in terms of convergence speed, we compared BSAISA with some classic and state-of-the-art approaches with respect to the best function value and the number of function evaluations. The best function value and the corresponding FEs of each algorithm on the 13 benchmarks are presented in Table 7, where the optimal results for each function are in bold. The compared algorithms are listed below:
(1) Stochastic ranking (SR) [63]
(2) Filter simulated annealing (FSA) [65]
(3) Cultured differential evolution (CDE) [66]
(4) Agent based memetic algorithm (AMA) [64]
(5) Modified artificial bee colony (MABC) algorithm [67]
(6) Rough penalty genetic algorithm (RPGA) [68]
(7) BSA combined with the self-adaptive ε-constrained method (BSA-SA) [62].
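BSA-SA, like BSAISA, handles constraints with a self-adaptive ε-constrained method. The core comparison rule of the ε-constrained method (Takahama and Sakai) can be sketched as follows; the function name and the (objective, violation) tuple encoding are ours, and the self-adaptive control of the ε level itself is not shown:

```python
def epsilon_compare(a, b, eps):
    """True if solution a 'beats' b under epsilon-constrained comparison.
    Each solution is (objective_value, constraint_violation), with
    violation >= 0 and a minimized objective."""
    fa, va = a
    fb, vb = b
    if (va <= eps and vb <= eps) or va == vb:
        return fa < fb          # both (nearly) feasible: compare objectives
    return va < vb              # otherwise: smaller violation wins

# A feasible solution beats an infeasible one regardless of objective value.
print(epsilon_compare((1.0, 0.0), (0.5, 2.0), eps=0.1))
```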


| Alg. | BSAISA | SR | FSA | CDE | AMA | MABC | RPGA | BSA-SA |
| Fun. | Best (FEs) | Best (FEs) | Best (FEs) | Best (FEs) | Best (FEs) | Best (FEs) | Best (FEs) | Best (FEs) |
| G01 | −15 (84,630) | −15.000 (148,200) | −14.993316 (205,748) | −15.000000 (100,100) | −15.000 (350,000) | −15.000 (350,000) | −15.000 (350,000) | −15.000000 (350,000) |
| G02 | −0.8036 (349,500) | −0.8035 (217,200) | −0.7549 (227,832) | −0.8036 (100,100) | −0.8035 (350,000) | −0.8036 (350,000) | −0.8036 (350,000) | −0.8036 (350,000) |
| G03 | −1.000498 (58,560) | −1.000 (229,200) | −1.0000015 (314,938) | −0.995413 (100,100) | −1.000 (350,000) | −1.000 (350,000) | −1.000 (350,000) | −1.000498 (350,000) |
| G04 | −30665.539 (121,650) | −30665.539 (88,200) | −30665.538 (86,154) | −30665.539 (100,100) | −30665.538 (350,000) | −30665.539 (350,000) | −30665.539 (350,000) | −30665.539 (350,000) |
| G05 | 5126.497 (238,410) | 5126.497 (51,600) | 5126.498 (47,661) | 5126.571 (100,100) | 5126.512 (350,000) | 5126.487 (350,000) | 5126.544 (350,000) | 5126.497 (350,000) |
| G06 | −6961.814 (89,550) | −6961.814 (118,000) | −6961.814 (44,538) | −6961.814 (100,100) | −6961.807 (350,000) | −6961.814 (350,000) | −6961.814 (350,000) | −6961.814 (350,000) |
| G07 | 24.307 (15,060) | 24.307 (143,000) | 24.311 (404,501) | 24.306 (100,100) | 24.315 (350,000) | 24.324 (350,000) | 24.333 (350,000) | 24.306 (350,000) |
| G08 | −0.095825 (30,930) | −0.095825 (76,200) | −0.095825 (56,476) | −0.095825 (100,100) | −0.095825 (350,000) | −0.095825 (350,000) | −0.095825 (350,000) | −0.095825 (350,000) |
| G09 | 680.630057 (347,760) | 680.630 (111,400) | 680.63008 (324,569) | 680.630057 (100,100) | 680.645 (350,000) | 680.631 (350,000) | 680.631 (350,000) | 680.6301 (350,000) |
| G10 | 7049.249 (346,980) | 7054.316 (128,400) | 7059.864 (243,520) | 7049.248 (100,100) | 7281.957 (350,000) | 7058.823 (350,000) | 7049.861 (350,000) | 7049.278 (350,000) |
| G11 | 0.749900 (87,870) | 0.750 (11,400) | 0.749999 (23,722) | 0.749900 (100,100) | 0.750 (350,000) | 0.750 (350,000) | 0.749 (350,000) | 0.749900 (350,000) |
| G12 | −1 (5430) | −1.000000 (16,400) | −1.000000 (59,355) | −1.000000 (100,100) | −1.000 (350,000) | −1.000 (350,000) | NA | −1.000000 (350,000) |
| G13 | 0.0539415 (349,800) | 0.053957 (69,800) | 0.0539498 (120,268) | 0.056180 (100,100) | 0.053947 (350,000) | 0.757 (350,000) | NA | 0.0539415 (350,000) |
| Nu. | | | | | | | | |
| RK | 1 | 5 | 7 | 2 | 8 | 4 | 6 | 3 |

Note. “NA” means not available. The same as Tables 8, 10, 11, 12, 13, 14, 15, and 17. “RK” represents the comprehensive ranking of each algorithm on the set of benchmarks. “Nu.” represents the sum of the number of winners and the number of the optimal function values for each algorithm on the set of benchmarks.
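The comprehensive ranking RK can be derived from a per-algorithm score such as the Nu. count (winners plus best known values found). An illustrative helper; the function name and the win counts below are ours, not the paper's data, and ties would need a tie-breaking rule:

```python
def rank_by_wins(wins):
    """Turn {algorithm: win_count} into a 1-based ranking:
    a higher win count yields a better (smaller) rank."""
    ordered = sorted(wins, key=wins.get, reverse=True)
    return {alg: i + 1 for i, alg in enumerate(ordered)}

# Hypothetical counts for three of the compared algorithms.
print(rank_by_wins({"BSAISA": 9, "CDE": 7, "SR": 5}))
```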

| Method | DEDS | HEAA | PSO-DE | DELC | MBA | BSA | BSAISA |