Computational Intelligence and Neuroscience
Volume 2018, Article ID 9167414, 27 pages
https://doi.org/10.1155/2018/9167414
Research Article

Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

1School of Information and Mathematics, Yangtze University, Jingzhou, Hubei 434023, China
2School of Software, East China Jiaotong University, Nanchang, Jiangxi 330013, China

Correspondence should be addressed to Zhongbo Hu; huzbdd@126.com

Received 10 October 2017; Accepted 20 December 2017; Published 13 February 2018

Academic Editor: Silvia Conforto

Copyright © 2018 Hailong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, while its local exploitation capability is relatively poor, which affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency of BSA. In BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F can be decreased adaptively as the number of iterations increases, and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed.

1. Introduction

Optimization is an essential research objective in the fields of applied mathematics and computer science. Optimization algorithms mainly aim to obtain the global optimum for optimization problems. There are many different kinds of optimization problems in the real world. When an optimization problem has simple and explicit gradient information or requires a relatively small budget of allowed function evaluations, classical optimization techniques such as mathematical programming can often achieve efficient results [1]. However, many real-world engineering optimization problems have complex, nonlinear, or nondifferentiable forms, which makes them difficult to tackle using classical optimization techniques. The emergence of metaheuristic algorithms has overcome the deficiencies of classical optimization techniques to some extent, as they do not require gradient information and have the ability to escape from local optima. Metaheuristic algorithms are mainly inspired by a variety of natural phenomena and/or biological social behavior. Among these metaheuristic algorithms, swarm intelligence algorithms and evolutionary algorithms are perhaps the most attractive [2]. Swarm intelligence algorithms [3] generally simulate the intelligent behavior of swarms of creatures; examples include particle swarm optimization (PSO) [4], ant colony optimization (ACO) [5], cuckoo search (CS) [6], and the artificial bee colony (ABC) algorithm [7]. These algorithms are generally developed by drawing inspiration from the complex behavior that emerges in swarms through mutual cooperation and self-organization, in which “cooperation” is the core concept. Evolutionary algorithms (EAs) [8, 9] are inspired by the mechanism of natural evolution, in which “evolution” is the key idea. Examples of EAs include the genetic algorithm (GA) [10], differential evolution (DE) [11–14], the covariance matrix adaptation evolution strategy (CMAES) [15], and the backtracking search optimization algorithm (BSA) [16].

BSA is an iterative population-based EA, first proposed by Civicioglu in 2013. BSA has three basic genetic operators: selection, mutation, and crossover. The main difference between BSA and other similar algorithms is that BSA possesses a memory for storing a population from a randomly chosen previous generation, which is used to generate the search-direction matrix for the next iteration. In addition, BSA has a simple structure, which makes it efficient, fast, and capable of solving multimodal problems. BSA has only one control parameter, called the mix-rate, which significantly reduces the sensitivity of the algorithm to the initial values of its parameters. Due to these characteristics, in less than 4 years BSA has been employed successfully to solve various engineering optimization problems, such as power systems [17–19], induction motors [20, 21], antenna arrays [22, 23], digital image processing [24, 25], artificial neural networks [26–29], and energy and environmental management [30–32].

However, BSA has a weak local exploitation capacity and its convergence speed is relatively slow. Thus, many studies have attempted to improve the performance of BSA, and several modifications of BSA have been proposed to overcome its deficiencies. From the perspective of the modified object, the modifications of BSA can be divided into the following four categories (a publication with more than one modification is classified into its major modification category):
(i) Modifications of the initial populations [33–38]
(ii) Modifications of the reproduction operators, including the mutation and crossover operators [39–47]
(iii) Modifications of the selection operators, including the local exploitation strategy [48–51]
(iv) Modifications of the control factor and parameter [52–57]

The research on controlling parameters of EAs is one of the most promising areas in evolutionary computation; even a small modification of the parameters of an algorithm can make a considerable difference [58]. In the basic BSA, the value of the amplitude control factor (F) is the product of three and a standard normal random number (i.e., F = 3 · randn), which is often too large or too small according to its formulation. This may give BSA a powerful global exploration capability in the early iterations; however, it also affects the later exploitation capability of BSA. Based on these considerations, we focus mainly on the influence of the amplitude control factor (F) on BSA, that is, the fourth of the categories defined above. Duan and Luo [52] redesigned an adaptive F based on the fitness statistics of the population at each iteration. Wang et al. [53] and Tian et al. [54] proposed an adaptive F based on the Maxwell–Boltzmann distribution. Askarzadeh and dos Santos Coelho [55] proposed an adaptive F based on Burger’s chaotic map. Chen et al. [56] redesigned an adaptive F by introducing two extra parameters. Nama et al. [57] proposed a new F that changes adaptively within a prescribed range together with a mix-rate that changes randomly within a prescribed range. These modifications of F have achieved good effects in BSA.

Different from the modifications of F in BSA described above, a modified version of BSA (BSAISA) inspired by simulated annealing (SA) is proposed in this paper. In BSAISA, F is redesigned as a function of the iterations by learning from a characteristic of SA, namely, that SA can probabilistically accept a higher energy state and that the acceptance probability decreases as the temperature decreases. The redesigned F can decrease adaptively as the number of iterations increases without introducing extra parameters. This adaptive variation tendency provides an efficient tradeoff between early exploration and later exploitation capability. We verified the effectiveness and competitiveness of BSAISA in terms of convergence speed in simulation experiments using thirteen constrained benchmarks and five engineering design problems.

The remainder of this paper is organized as follows. Section 2 introduces the basic BSA. As the main contribution of this paper, a detailed explanation of BSAISA is presented in Section 3. In Section 4, we present two sets of simulation experiments in which we implemented BSAISA and BSA to solve thirteen constrained optimization problems and five engineering design problems. The results are compared with those obtained by other well-known algorithms in terms of solution quality and function evaluations. Finally, we give our concluding remarks in Section 5.

2. Backtracking Search Optimization Algorithm (BSA)

BSA is a population-based iterative EA. BSA generates trial populations and controls the amplitude of the search-direction matrix, which provides it with a strong global exploration capability. BSA equiprobably uses two random crossover strategies to exchange the corresponding elements of individuals in the population and the trial population during the crossover process. Moreover, BSA has two selection processes: one is used to select the historical population from the current and previous populations; the other is used to select the better individuals for the next population. In general, BSA can be divided into five processes: initialization, selection I, mutation, crossover, and selection II [16].

2.1. Initialization

BSA generates the initial population P and the initial historical population oldP using

P_{i,j} ~ U(low_j, up_j),  oldP_{i,j} ~ U(low_j, up_j),  i = 1, ..., N,  j = 1, ..., D,  (1)

where P_{i,j} and oldP_{i,j} are the jth elements of the ith individuals, N is the population size, D is the problem dimension, low_j and up_j are the lower and upper boundaries of the jth dimension, respectively, and U is the uniform distribution.
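As a concrete illustration (not the authors' code), this initialization step can be sketched in Python/NumPy as follows, where N, D, low, and up stand for the population size, problem dimension, and boundary vectors:

```python
import numpy as np

def initialize(N, D, low, up, rng=None):
    """Draw the initial population P and historical population oldP,
    element (i, j) uniformly from [low_j, up_j]."""
    rng = np.random.default_rng() if rng is None else rng
    P = low + (up - low) * rng.random((N, D))
    oldP = low + (up - low) * rng.random((N, D))
    return P, oldP
```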

2.2. Selection I

BSA’s selection I process is the beginning of each iteration. It aims to reselect a new historical population oldP, which is used for calculating the search direction, based on the current population P and the historical population oldP. The new oldP is reselected through the “if-then” rule

if a < b then oldP := P, where a, b ~ U(0, 1),  (2)

where := is the update operation and a and b represent random numbers uniformly distributed between 0 and 1. The update operation (see (2)) ensures that BSA has a memory. After oldP is reselected, the order of the individuals in oldP is randomly permuted:

oldP := permuting(oldP).  (3)
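A minimal sketch of this step (illustrative Python/NumPy, continuing the names used above):

```python
import numpy as np

def selection_I(P, oldP, rng=None):
    """Redefine the historical population: with probability 0.5 replace oldP
    by the current population (the 'if a < b' rule with a, b ~ U(0, 1)),
    then randomly permute the order of its individuals."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < rng.random():
        oldP = P.copy()
    return oldP[rng.permutation(len(oldP))]
```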

2.3. Mutation

The mutation operator generates the initial form of the trial population Mutant:

Mutant = P + F · (oldP − P),  (4)

where F is the amplitude control factor of the mutation operator, used to control the amplitude of the search direction. In the basic BSA, F = 3 · randn, where randn ~ N(0, 1) and N(0, 1) is the standard normal distribution.
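The mutation step can be sketched as follows (illustrative Python/NumPy):

```python
import numpy as np

def mutate(P, oldP, rng=None):
    """Initial trial population: move each individual along the search
    direction (oldP - P), scaled by F = 3 * randn."""
    rng = np.random.default_rng() if rng is None else rng
    F = 3 * rng.standard_normal()
    return P + F * (oldP - P)
```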

2.4. Crossover

In this process, BSA generates the final form of the trial population T. BSA equiprobably uses two crossover strategies to manipulate the selected elements of the individuals at each iteration. Both strategies generate binary integer-valued matrices (map) of size N × D to select the elements of individuals that have to be manipulated.

Strategy I uses the mix-rate parameter (mix-rate) to control the number of elements of each individual that are manipulated, namely ⌈mix-rate · rand · D⌉ elements, where mix-rate = 1 and rand ~ U(0, 1). Strategy II manipulates only one randomly selected element of each individual, that is, the element with index randi(D), where randi(D) is an integer drawn uniformly from {1, ..., D}.

The two strategies are employed equiprobably to manipulate the elements of individuals through the “if-then” rule: if map_{i,j} = 1, where i ∈ {1, ..., N} and j ∈ {1, ..., D}, then T_{i,j} is updated with P_{i,j}.

At the end of the crossover process, if some individuals in T have overflowed the allowed search space limits, they are regenerated by using (1).
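A minimal sketch of the crossover and boundary-control steps (illustrative Python/NumPy; map == 1 keeps the parent value, following the rule above, and out-of-bound elements are regenerated uniformly):

```python
import numpy as np

def crossover(P, mutant, low, up, mix_rate=1.0, rng=None):
    """Final trial population T built from the mutant and the parents."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = P.shape
    map_ = np.ones((N, D), dtype=int)
    if rng.random() < rng.random():                      # Strategy I
        for i in range(N):
            k = int(np.ceil(mix_rate * rng.random() * D))
            map_[i, rng.permutation(D)[:k]] = 0          # these elements stay mutated
    else:                                                # Strategy II
        for i in range(N):
            map_[i, rng.integers(D)] = 0                 # a single mutated element
    T = np.where(map_ == 1, P, mutant)
    out = (T < low) | (T > up)                           # boundary control
    T[out] = (low + (up - low) * rng.random((N, D)))[out]
    return T
```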

2.5. Selection II

In BSA’s selection II process, the fitness values of P and T are compared and a greedy selection is used to update the population: if T_i has a better fitness value than P_i, then P_i is updated to be T_i, that is,

P_i := T_i if f(T_i) < f(P_i).  (5)

If the best individual of P (P_best) has a better fitness value than the global minimum value obtained so far, the global minimizer is updated to be P_best and the global minimum value is updated to be the fitness value of P_best.
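This greedy update can be sketched as follows (illustrative Python/NumPy, with fP and fT the objective values of P and T):

```python
import numpy as np

def selection_II(P, fP, T, fT):
    """Keep the trial individual wherever it improves on the current one,
    then report the best individual as the candidate global minimizer."""
    better = fT < fP
    P[better] = T[better]
    fP[better] = fT[better]
    best = int(np.argmin(fP))
    return P, fP, P[best], fP[best]
```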

3. Modified BSA Inspired by SA (BSAISA)

As mentioned in the introduction of this paper, research on the control parameters of an algorithm is very meaningful and valuable. In this paper, in order to improve BSA’s exploitation capability and convergence speed, we propose a modified version of BSA (BSAISA) in which the redesign of F is inspired by SA. The modifications of BSAISA are described in this section. First, the structure of the modified F is described, before we explain the detailed design principle of the modified F inspired by SA. Subsequently, two numerical tests are used to illustrate that the redesigned F improves the convergence speed of the algorithm. We introduce a self-adaptive ε-constrained method for handling constraints at the end of this section.

3.1. Structure of the Adaptive Amplitude Control Factor

The modified F is a normally distributed random number whose mean value is an exponential function and whose variance is equal to 1. In BSAISA, we redesign the adaptive F to replace the original version as

F_i = N( exp(−G / Δf_i), 1 ),  i = 1, ..., N,  (6)

where i is the index of individuals, F_i is the adaptive amplitude control factor corresponding to the ith individual, Δf_i = |f(P_i) − f(oldP_i)| is the absolute value of the difference between the objective function values of P_i and oldP_i (the individual difference), N(·, 1) is the normal distribution with variance 1, and G is the current iteration.

According to (6), the exponential function (the mean value) decreases dynamically with the number of iterations (G) and the individual differences (Δf). Based on the probability density function of the normal distribution, the modified F can therefore be decreased adaptively as the number of iterations increases. Another characteristic of the modified F is that it does not introduce any extra parameters.
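A minimal sketch of this redesigned factor (illustrative Python/NumPy; the exact exponent follows the SA correspondence described in Section 3.2, namely a mean of exp(−G/Δf_i), and the small eps guarding against division by zero is an added assumption):

```python
import numpy as np

def adaptive_F(fP, foldP, G, rng=None, eps=1e-30):
    """Adaptive amplitude control factor, one value per individual:
    F_i ~ N(exp(-G / |f(P_i) - f(oldP_i)|), 1).
    The mean shrinks as the iteration counter G grows and as the
    individual differences shrink, so F decreases on average."""
    rng = np.random.default_rng() if rng is None else rng
    delta_f = np.abs(np.asarray(fP) - np.asarray(foldP)) + eps  # individual differences
    return rng.normal(loc=np.exp(-G / delta_f), scale=1.0)
```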

3.2. Design Principle of the Modified Amplitude Control Factor

The design principle of the modified F is inspired by the Metropolis criterion in SA. SA is a metaheuristic optimization technique based on physical behavior in nature. SA, based on the Monte Carlo method, was first proposed by Metropolis et al. [59], and it was successfully introduced into the field of combinatorial optimization for solving complex optimization problems by Kirkpatrick et al. [60].

The basic concept of SA derives from the process of physical annealing of solids. An annealing process occurs when a metal is heated to a molten state at a high temperature and then cooled slowly. If the temperature is decreased too quickly, the resulting crystal will have many defects and be merely metastable; not even the most stable crystalline state will be achieved. In other words, the system may settle into a higher energy state than the most stable crystalline state. Therefore, in order to reach the absolute minimum energy state, the temperature needs to be decreased at a slow rate. SA simulates this annealing process to search for the global optimal solution of an optimization problem. However, accepting only the moves that lower the energy of the system is like extremely rapid quenching; thus SA uses a special and effective acceptance method, the Metropolis criterion, which can probabilistically accept hill-climbing moves (higher energy moves). As a result, the energy of the system evolves into a Boltzmann distribution during the simulated annealing process. From this point of view, it is no exaggeration to say that the Metropolis criterion is the core of SA.

The Metropolis criterion can be expressed in terms of energy: the new energy state is accepted when it is lower than the previous energy state, and it is probabilistically accepted when it is higher than the previous energy state. This feature allows SA to escape from local minima, especially in the early stages of the search. It can also be described as follows.

(i) If ΔE = E(x_new) − E(x_old) ≤ 0, then the new state x_new is accepted and the energy with the displaced atom is used as the starting point for the next step, where E(x) represents the energy of the atom in state x, x_old and x_new are states of atoms, and x_new is the next state of x_old.

(ii) If ΔE > 0, then the probability P(ΔE) = exp(−ΔE/(kT)) is calculated and a random number η, uniformly distributed over (0, 1), is generated, where k is Boltzmann’s constant (in general, k = 1) and T is the current temperature. If P(ΔE) > η, then the new energy state is accepted; otherwise, the previous energy state is used to start the next step.
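A minimal sketch of this acceptance rule (illustrative Python; k defaults to 1 as noted above):

```python
import numpy as np

def metropolis_accept(E_old, E_new, T, k=1.0, rng=None):
    """Metropolis criterion: always accept a lower-energy state; accept a
    higher-energy state with probability exp(-dE / (k * T))."""
    rng = np.random.default_rng() if rng is None else rng
    dE = E_new - E_old
    return dE <= 0 or rng.random() < np.exp(-dE / (k * T))
```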

Analysis 1. The Metropolis criterion gives SA two characteristics: (1) SA can probabilistically accept higher energy states and (2) the acceptance probability of SA decreases as the temperature decreases. Therefore, SA can reject a local minimum and jump out of it with a dynamic, decreasing probability in order to continue exploring other solutions in the state space. This acceptance mechanism can enrich the diversity of energy states.

Analysis 2. As shown in (4), F is used to control the amplitude of population mutation in BSA; thus F is an important factor for controlling population diversity. If F is excessively large, the diversity of the population will be too high and the convergence speed of BSA will slow down. If F is excessively small, the diversity of the population will be reduced, so it will be difficult for BSA to obtain the global optimum and it may readily be trapped by a local optimum. Therefore, adaptively controlling the amplitude of F is key to accelerating the convergence speed of the algorithm while maintaining its population diversity.

Based on Analyses 1 and 2, it is clear that if F can dynamically decrease, the convergence speed of BSA can be accelerated while maintaining the population diversity. On the other hand, SA possesses exactly this characteristic: its acceptance probability is dynamically reduced. Based on these two considerations, we propose BSAISA with a redesigned F, which is inspired by SA. More specifically, the new F (see (6)) is redesigned by learning from the formulation P = exp(−ΔE/(kT)) of the acceptance probability, and its formulation has been shown in the previous subsection.

Comparing the formulas of the modified F and the acceptance probability, the individual difference (Δf) of a population and the energy difference (ΔE) of a system both decrease as an algorithm proceeds, and the temperature of SA tends to decrease while the iteration counter of BSA tends to increase. As a result, one can observe the correspondence between the modified F and the acceptance probability: the reciprocal of the individual difference (1/Δf) corresponds to the energy difference (ΔE) of SA, and the reciprocal of the current iteration (1/G) corresponds to the current temperature (T) of SA. In this way, the redesigned F can be decreased adaptively as the number of iterations increases.

3.3. Numerical Analysis of the Modified Amplitude Control Factor

In order to verify that the convergence speed of the basic BSA is improved by the modified F, two types (unimodal and multimodal) of unconstrained benchmark functions are used to test the changing trends of F, the population variances, and the best function values as the iterations increase. The two functions are Schwefel 1.2 and Rastrigin, and their detailed information is provided in [61]. The two functions and the user parameters, including the population size (N), dimension (D), and maximum number of iterations (Max), are shown in Table 1. Three groups of test results are compared: (1) the comparative curves of the mean values of the modified and original F for Schwefel 1.2 and Rastrigin, (2) the comparative curves of the mean population variances of BSA and BSAISA for the two functions, and (3) the convergence curves of BSA and BSAISA for the two functions. They are depicted in Figures 1, 2, and 3, respectively. Based on Figures 1–3, two results can be observed as follows.

Table 1: Two benchmark functions and the corresponding populations, dimensions, and max iterations.
Figure 1: Comparisons of the modified and original F for Schwefel 1.2 and Rastrigin.
Figure 2: The mean values of population variances versus number of iterations using BSA and BSAISA for two functions.
Figure 3: Convergence curves of BSA and BSAISA for two functions.

(1) According to the trends of F in Figure 1, both the original F and the modified F vary as normally distributed random values. The mean value of the original F does not tend to decrease as the number of iterations increases. By contrast, the mean value of the modified F exhibits a clear, fluctuating downward trend as the number of iterations increases.

(2) According to Figure 2, the population variances of BSAISA and BSA both exhibit a clear downward trend as the number of iterations increases. The population variances of BSAISA and BSA are almost the same during the early iterations, which illustrates that the modified F does not reduce the population diversity in the early iterations. In the middle and later iterations, the population variances of BSAISA decrease more quickly than those of BSA, which illustrates that the modified F improves the convergence speed. As can be seen from Figure 3, as the number of iterations increases, the best objective function value of BSAISA drops faster than that of BSA. This shows that the modified F improves the convergence speed of BSA. Moreover, BSAISA can find a more accurate solution at the same computational cost.

Summary. Based on the design principle and numerical analysis of the modified F, the modified F exhibits an overall fluctuating, downward trend, which matches the concept of the acceptance probability in SA. In particular, during the early iterations, the modified F is relatively large. This allows BSAISA to search in a wide region while maintaining its population diversity. As the number of iterations increases, the modified F gradually decreases. This accelerates the convergence speed of BSAISA. In the later iterations, the modified F is relatively small. This enables BSAISA to search the local region thoroughly. Therefore, it can be concluded that BSAISA can adaptively control the amplitude of population mutation to adjust its local exploitation capacity, which may improve the convergence speed of the algorithm. Moreover, the modified F does not introduce extra parameters, so it does not increase the sensitivity of BSAISA to its parameters.

3.4. A Self-Adaptive ε-Constrained Method for Handling Constraints

In general, a constrained optimization problem can be mathematically formulated as a minimization problem:

minimize f(x), x = (x_1, ..., x_D),
subject to g_j(x) ≤ 0, j = 1, ..., m,
          h_j(x) = 0, j = 1, ..., n,

where x is a D-dimensional vector, m is the total number of inequality constraints, and n is the total number of equality constraints. The equality constraints are transformed into inequality constraints using |h_j(x)| − δ ≤ 0, where δ is a very small degree of violation. Maximization problems are transformed into minimization problems using −f(x). The overall constraint violation of a solution x is the sum of the violations of all inequality constraints and all relaxed equality constraints:

φ(x) = Σ_{j=1}^{m} max(0, g_j(x)) + Σ_{j=1}^{n} max(0, |h_j(x)| − δ).
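The overall constraint violation can be sketched as follows (illustrative Python; the sum-of-violations form above and the default tolerance delta are common choices for ε-constrained methods and are assumptions here, not quoted from the paper):

```python
def constraint_violation(x, ineq_constraints, eq_constraints, delta=1e-4):
    """Total violation phi(x): positive parts of the inequality constraints
    g_j(x) <= 0 plus violations of the relaxed equalities |h_j(x)| - delta <= 0."""
    phi = sum(max(0.0, g(x)) for g in ineq_constraints)
    phi += sum(max(0.0, abs(h(x)) - delta) for h in eq_constraints)
    return phi
```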

Several constraint-handling methods have been proposed previously, of which the five most commonly used are penalty functions, feasibility and dominance rules (FAD), stochastic ranking, ε-constrained methods, and multiobjective concepts. Among these five methods, the ε-constrained method is relatively effective and widely used. Zhang et al. [62] proposed a self-adaptive ε-constrained method (SAε) and combined it with the basic BSA for constrained problems. It has been verified that SAε has stronger search efficiency and convergence than the fixed ε-constrained method and FAD. In this paper, SAε is combined with BSAISA for constrained optimization problems. It comprises the following two rules: (1) if the constraint violations of two solutions are both smaller than a given ε value, or the two solutions have the same constraint violation, the solution with the better objective function value is preferred; (2) otherwise, the solution with the smaller constraint violation is preferred (a minimal sketch of this comparison is given below). Here ε is a positive value that represents a tolerance related to constraint violation, and it is updated self-adaptively during the run. The self-adaptive ε value depends on the current iteration G, the constraint violation of the top θth individual in the initial population, the constraint violation of the top θth individual in the trial population at the current iteration, the control parameters (cp among them), and two threshold values Th1 and Th2.
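Rules (1) and (2) amount to the following ε-level comparison (a minimal sketch, assuming minimization and that phi denotes the constraint violation defined above):

```python
def eps_better(f1, phi1, f2, phi2, eps):
    """Return True if solution 1 is preferred over solution 2 under the
    epsilon-level comparison: compare objectives when both violations are
    within eps (or equal); otherwise compare violations."""
    if (phi1 <= eps and phi2 <= eps) or (phi1 == phi2):
        return f1 < f2
    return phi1 < phi2
```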

Firstly, ε(0) is initialized from the constraint violation of the top θth individual in the initial population, and the control parameter cp is assigned according to whether this initial value exceeds the threshold Th1. Then, during the run, ε is reset or decayed at each iteration depending on how the current violation level compares with the threshold Th2, and it is finally updated by the self-adaptive formula of [62]. The detailed information on SAε can be found in [62], and the related parameter settings of SAε (the same as in [62]) are presented in Table 2.

Table 2: The parameter settings of SAε.

To vividly illustrate the changing trend of the self-adaptive ε value, BSAISA with SAε is used to solve the well-known constrained benchmark function G10 from [61], with the related parameters set as in Table 2 and Section 4.1. The changing trend of the ε value is shown in Figure 4, in which three sampling points are marked. As shown in Figure 4, the ε value declines very fast at first; after it becomes smaller than about 2, it declines in an exponential way. This changing trend of the ε value helps the algorithm to sufficiently search the infeasible domains near the feasible domains.

Figure 4: Plot of the ε value versus iterations.

The pseudocode for BSAISA is shown in Pseudocode 1. In Pseudocode 1, the modified adaptive F is shown in lines (14)–(16). When BSAISA deals with constrained optimization problems, the code in line (8) and line (40) of Pseudocode 1 should consider the objective function value and the constraint violation simultaneously, and SAε is applied to choose a better solution or the best solution in line (42) and lines (47)-(48).

Pseudocode 1: Pseudocode of BSAISA.

4. Experimental Studies

In this section, two sets of simulation experiments were executed to evaluate the effectiveness of the proposed BSAISA. The first set of experiments was performed on 13 well-known benchmark constrained functions taken from [63] (see Appendix A). These thirteen benchmarks have different properties, as shown in Table 3, including the number of variables (D), the objective function type, the feasibility ratio, the constraint types and numbers, and the number of active constraints at the optimum. The second set of experiments was conducted on 5 engineering constrained optimization problems chosen from [64] (see Appendix B). These five problems are the three-bar truss design problem (TTP), the pressure vessel design problem (PVP), the tension/compression spring design problem (TCSP), the welded beam design problem (WBP), and the speed reducer design problem (SRP). These engineering problems include objective functions and constraints of various types and natures (quadratic, cubic, polynomial, and nonlinear) with various numbers of design variables (continuous, integer, mixed, and discrete).

Table 3: Characters of the 13 benchmark functions.

The recorded experimental results include the best function value (Best), the worst function value (Worst), the mean function value (Mean), the standard deviation (Std), the best solution (the variables of the best function value), the corresponding constraint values, and the number of function evaluations (FEs). The number of function evaluations can be regarded as a measure of convergence rate or computational cost.

In order to evaluate the performance of BSAISA in terms of convergence speed, the FEs are taken in this paper as the best FEs corresponding to the obtained best solution. The FEs are calculated as the product of the population size (N) and the number of iterations (G) at which the best function value is first obtained (i.e., FEs = N × G). However, BSAISA also needs to evaluate the initial historical population (oldP), so its actual FEs are N × G plus N (i.e., FEs = N × G + N).
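As a worked illustration (the population size and iteration number below are hypothetical, not taken from the experiments):

```python
def best_fes(N, G_best, evaluates_initial_oldP=True):
    """FEs consumed when the best value is first found at iteration G_best
    with population size N; BSAISA additionally evaluates the initial
    historical population oldP once, adding N evaluations."""
    return N * G_best + (N if evaluates_initial_oldP else 0)

# Hypothetical example: N = 20 and G_best = 100 give 2000 FEs for BSA
# and 2020 FEs for BSAISA (because of the extra oldP evaluation).
print(best_fes(20, 100, evaluates_initial_oldP=False))  # 2000
print(best_fes(20, 100))                                # 2020
```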

4.1. Parameter Settings

For the first experiment, the main parameters for the 13 benchmark constrained functions are the same in all cases: the population size (N) is set to 30 and the maximum number of iterations (Max) to 11665. Therefore, BSAISA’s maximum number of function evaluations (MFEs) is 349,980 (nearly 350,000). The 13 benchmarks were each executed over 30 independent runs.

For the 5 real-world engineering design problems, we use slightly different parameter settings since each problem has a different nature; the specific settings for TTP, PVP, TCSP, WBP, and SRP are given in Table 4. The 5 engineering problems were each performed over 50 independent runs.

The user parameters of all experiments are presented in Table 4. For most metaheuristics the termination condition may be a maximum number of iterations, a CPU time limit, or an allowable tolerance between the final result and the best known function value. In this paper, the maximum number of iterations is used as the termination condition. All experiments were conducted on a computer with an Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz and 4 GB RAM.

Table 4: User parameters used for all experiments.
4.2. Simulation on Constrained Benchmark Problems

In this section, BSAISA and BSA are run on the 13 benchmarks under the same conditions. Their statistical results obtained from 30 independent runs are listed in Tables 5 and 6, including the best known function value (Best Known) and the obtained Best/Mean/Worst/Std values as well as the FEs. The best known values for all the test problems are derived from [62]. The best known values found by the algorithms are highlighted in bold. From Tables 5 and 6, it can be seen that BSAISA is able to find the best known function values for G01, G04, G05, G06, G08, G09, G11, G12, and G13; however, BSA fails to find the best known function values for G09 and G13. For the rest of the functions, BSAISA obtains results very close to the best known function values. Moreover, BSAISA requires fewer FEs than BSA on G01, G03, G04, G05, G06, G07, G08, G09, G11, and G12. Although the FEs of BSAISA are slightly worse than those of BSA for G02, G10, and G13, BSAISA finds more accurate best function values than BSA for G02, G03, G07, G09, G10, and G13.

Table 5: The statistical results of BSAISA for 13 constrained benchmarks.
Table 6: The statistical results of the basic BSA on 13 constrained benchmarks.

To further compare BSAISA and BSA, the function value convergence curves of the 13 functions, which show significant differences, have been plotted in Figures 5 and 6. The horizontal axis represents the number of iterations, while the vertical axis represents the difference between the objective function value and the best known value. For G02, G04, G06, G08, G11, and G12, it is obvious that the convergence speed of BSAISA is faster than that of BSA, and its best function value is also better than that of BSA. For G01, G03, G05, G07, G09, G10, and G13, it can be observed that the objective function values of BSAISA and BSA fluctuate in the early iterations and decrease as the number of iterations increases during the middle and late stages. This illustrates that both algorithms are able to escape local optima to different degrees, while the convergence curve of BSAISA still drops faster than that of BSA. Therefore, the experimental results demonstrate that the convergence speed of the basic BSA is improved by our modified F.

Figure 5: The convergence curves of the first 9 functions by BSAISA and BSA.
Figure 6: The convergence curves of the latter 4 functions by BSAISA and BSA.

In order to further verify the competitiveness of BSAISA in terms of convergence speed, we compared BSAISA with some classic and state-of-the-art approaches in terms of the best function value and the number of function evaluations. The best function value and the corresponding FEs of each algorithm on the 13 benchmarks are presented in Table 7, where the optimal results are in bold for each function. The compared algorithms are listed below:
(1) Stochastic ranking (SR) [63]
(2) Filter simulated annealing (FSA) [65]
(3) Cultured differential evolution (CDE) [66]
(4) Agent-based memetic algorithm (AMA) [64]
(5) Modified artificial bee colony (MABC) algorithm [67]
(6) Rough penalty genetic algorithm (RPGA) [68]
(7) BSA combined with the self-adaptive ε-constrained method (BSA-SAε) [62]

Table 7: Comparison of the best values and FEs obtained by BSAISA and other algorithms.
Table 8: Comparison of best solutions for the three-bar truss design problem.

To compare these algorithms synthetically, a simple evaluation mechanism is used: the best function value (Best) is the primary criterion, and the function evaluations (FEs) are secondary. More specifically, (1) if one algorithm has a better Best than the others on a function, there is no need to consider FEs, and that algorithm is regarded as superior to the other algorithms on this function; (2) if two or more algorithms find the optimal Best on a function, the algorithm with the lowest FEs is considered the winner on that function; (3) the number of wins and the number of optimal function values are recorded for each algorithm over the set of benchmarks, and the algorithms are then ranked accordingly. A minimal sketch of the pairwise rule is given after this paragraph.
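The following illustrative Python comparator (not from the paper; the tie tolerance tol is an assumption) captures rules (1) and (2) for a single benchmark:

```python
def compare_algorithms(best_a, fes_a, best_b, fes_b, tol=1e-9):
    """Pairwise rule on one benchmark: a strictly better Best wins outright;
    if the Best values tie (within tol), the lower FEs wins."""
    if abs(best_a - best_b) > tol:
        return "A" if best_a < best_b else "B"
    return "A" if fes_a < fes_b else "B"
```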

From Table 7, it can be observed that the ranking of these 8 algorithms is as follows: BSAISA, CDE, BSA-SAε, MABC, SR, RPGA, FSA, and AMA. Among the 13 benchmarks, BSAISA wins on 6 functions and is able to find the optimal values of 10 functions. This is better than all the other algorithms; thus BSAISA ranks first. The second algorithm, CDE, performs better on G02, G07, G09, and G10 than BSAISA but worse on G01, G03, G08, G11, G12, and G13. BSA-SAε obtains the optimal function values of 10 functions but requires more function evaluations than BSAISA and CDE, so it ranks third. MABC ranks fourth; it obtains the optimal function values of 7 functions, which is fewer than the former three algorithms. SR and RPGA find the same number of optimal function values, but SR is the winner on G04, so SR is slightly better than RPGA. As for the last two algorithms, FSA and AMA perform well on only three functions each, while FSA is the winner on G06, so FSA is slightly better than AMA.

Based on the above comparison, it can be concluded that BSAISA is effective and competitive in terms of convergence speed.

4.3. Simulation on Engineering Design Problems

In order to assess the optimization performance of BSAISA in real-world engineering constrained optimization problems, 5 well-known engineering constrained design problems including three-bar truss design, pressure vessel design, tension/compression spring design, welded beam design, and speed reducer design are considered in the second experiment.

4.3.1. Three-Bar Truss Design Problem (TTP)

The three-bar truss problem is one of the engineering minimization test problems for constrained algorithms. The best feasible solution is obtained by BSAISA at x = (0.788675, 0.408248) with the objective function value f(x) = 263.895843 using 8,940 FEs. The comparison of the best solutions obtained by BSAISA, BSA, differential evolution with dynamic stochastic selection (DEDS) [69], the hybrid evolutionary algorithm (HEAA) [70], hybrid particle swarm optimization with differential evolution (PSO-DE) [71], differential evolution with level comparison (DELC) [72], and the mine blast algorithm (MBA) [73] is presented in Table 8. Their statistical results are listed in Table 9.

Table 9: Comparison of statistical results for the three-bar truss design problem.
Table 10: Comparison of best solutions for the pressure vessel design problem.
Table 11: Comparison of statistical results for the pressure vessel design problem.
Table 12: Comparison of best solutions for the tension compression spring design problem.
Table 13: Comparison of statistical results for the tension compression spring design problem.
Table 14: Comparison of best solutions for the welded beam design problem.
Table 15: Comparison of statistical results for the welded beam design problem.

From Tables 8 and 9, BSAISA, BSA, DEDS, HEAA, PSO-DE, and DELC all reach the best solution with the corresponding function value equal to 263.895843, whereas MBA reaches 263.895852. However, BSAISA requires the lowest FEs (only 8,940) among all the algorithms. Its Std value is better than those of BSA, DEDS, HEAA, PSO-DE, and MBA, though not that of DELC. These comparative results indicate that BSAISA outperforms the other algorithms in terms of computational cost and robustness for this problem.

Figure 7 depicts the convergence curves of BSAISA and BSA for the three-bar truss design problem, where the value plotted on the vertical axis is the difference from 263.895843. As shown in Figure 7, BSA reaches the global optimum at about 700 iterations, while BSAISA reaches it at only about 400 iterations. It can be concluded that the convergence speed of BSAISA is faster than that of BSA for this problem.

Figure 7: Convergence curves of BSAISA and BSA for the three-bar truss design problem.
4.3.2. Pressure Vessel Design Problem (PVP)

The pressure vessel design problem has a nonlinear objective function with three linear and one nonlinear inequality constraints, and two discrete and two continuous design variables. The values of the two discrete variables (the first two design variables) must be integer multiples of 0.0625. The best feasible solution is obtained by BSAISA at x = (0.8750, 0.4375, 42.0984, 176.6366) with the objective function value f(x) = 6059.7143 using 31,960 FEs.

For this problem, BSAISA is compared with nine algorithms: BSA, BSA-SAε [62], DELC, PSO-DE, genetic algorithms based on dominance tournament selection (GA-DT) [73], modified differential evolution (MDE) [74], coevolutionary particle swarm optimization (CPSO) [75], hybrid particle swarm optimization (HPSO) [76], and the artificial bee colony algorithm (ABC) [77]. The comparison of the best solutions obtained by BSAISA and the other reported algorithms is presented in Table 10. The statistical results of the various algorithms are listed in Table 11.

As shown in Table 10, the obtained solution sets of all algorithms satisfy the constraints for this problem. BSAISA, BSA-SAε, ABC, DELC, and HPSO find the same considerably good objective function value of 6059.7143, which is slightly worse than MDE’s function value of 6059.7016. It is worth mentioning that MBA’s best solution was obtained at x = (0.7802, 0.3856, 40.4292, 198.4964) with f(x) = 5889.3216 and the corresponding constraint values equal to g = (0, 0, −86.3645, −41.5035) in [78]. Though MBA finds a far better function value than MDE, its obtained variables (i.e., 0.7802 and 0.3856) are not integer multiples of 0.0625, so they are not listed in Table 10 in order to ensure a fair comparison. From Table 11, except for MDE with the function value of 6059.7016, BSAISA offers better function value results compared to GA-DT, CPSO, ABC, and BSA. Besides that, BSAISA is far superior to the other algorithms in terms of FEs. Unfortunately, the obtained Std value of BSAISA is relatively poor compared with the others for this problem.

Figure 8 shows the convergence curves of BSAISA and BSA for the pressure vessel design problem, where the value plotted on the vertical axis is the difference from 6059.7143. As shown in Figure 8, BSAISA is able to find the global optimum at about 800 iterations and obtains a far more accurate function value than BSA. Moreover, the convergence speed of BSAISA is much faster than that of BSA.

Figure 8: Convergence curves of BSAISA and BSA for the pressure vessel design problem.
4.3.3. Tension Compression Spring Design Problem (TCSP)

This design optimization problem has three continuous variables and four nonlinear inequality constraints. The best feasible solution is obtained by BSAISA at x = (0.051687, 0.356669, 11.291824) with f(x) = 0.012665 using 9,440 FEs. This problem has been solved by other methods as follows: GA-DT, MDE, CPSO, HPSO, DEDS, HEAA, DELC, PSO-DE, ABC, MBA, BSA-SAε, and social spider optimization (SSOC) [79]. The comparison of the best solutions obtained by the various algorithms is presented in Table 12. Their statistical results are listed in Table 13.

From Tables 12 and 13, the vast majority of algorithms can find the best function value of 0.012665 for this problem, while GA-DT and CPSO fail to find it. With regard to the computational cost (FEs), BSAISA requires only 9,440 FEs to reach the global optimum, which is superior to all the other algorithms except MBA with 7,650 FEs. However, the Worst, Mean, and Std values of BSAISA are better than those of MBA. Consequently, for this problem, it can be concluded that BSAISA has an obvious superiority in terms of FEs over all the other algorithms except MBA, and that BSAISA has stronger robustness when compared with MBA alone.

Figure 9 depicts the convergence curves of BSAISA and BSA for the tension/compression spring design problem, where the value plotted on the vertical axis is the difference from 0.012665. From Figure 9, it can be observed that both BSAISA and BSA fall into a local optimum in the early iterations but are able to escape from it successfully. However, the convergence speed of BSAISA is obviously faster than that of BSA.

Figure 9: Convergence curves of BSAISA and BSA for the tension compression spring design problem.
4.3.4. Welded Beam Design Problem (WBP)

The welded beam problem is a minimum cost problem with four continuous design variables, subject to two linear and five nonlinear inequality constraints. The best feasible solution is obtained by BSAISA at x = (0.205730, 3.470489, 9.036624, 0.205730) with the objective function value f(x) = 1.724852 using 29,000 FEs.

For this problem, BSAISA is compared with the following well-known algorithms: GA-DT, MDE, CPSO, HPSO, DELC, PSO-DE, ABC, MBA, BSA, BSA-SAε, and SSOC. The best solutions obtained by BSAISA and the other well-known algorithms are listed in Table 14. The comparison of their statistical results is presented in Table 15.

From Tables 14 and 15, except that the constraint values of PSO are not available, the obtained solution sets of all algorithms satisfy the constraints of the problem. Most of the algorithms, including BSAISA, BSA, BSA-SAε, MDE, HPSO, DELC, PSO-DE, ABC, and SSOC, are able to find the best function value of 1.724852, while GA-DT, CPSO, and MBA fail to find it. It should be admitted that DELC is superior to all the other algorithms in terms of FEs and robustness for this problem. On the other hand, except for DELC, MDE, and SSOC with FEs of 2000, 2400, and 2500, respectively, BSAISA requires fewer FEs than the remaining algorithms (excluding the algorithms that do not reach the best solution). When considering the comparison of the Std values for this problem, MBA exhibits powerful robustness, and BSAISA performs better than most algorithms except MBA, DELC, PSO-DE, BSA, and BSA-SAε.

It is worth mentioning that, according to [74], the Worst, Mean, and Best values of MDE are 1.724854, 1.724853, and 1.724852, respectively, while the corresponding values of DELC all equal 1.724852. The Worst and Mean values of DELC are thus smaller than those of MDE, yet the reported Std of DELC is bigger than that of MDE, which is inconsistent. We therefore consider that the reported Std of MDE is probably erroneous for this problem, and we replace it with NA in Table 15.

Figure 10 depicts the convergence curves of BSAISA and BSA for the welded beam design problem, where the value plotted on the vertical axis is the difference from 1.724852. Figure 10 shows that the convergence speed of BSAISA is remarkably faster than that of BSA.

Figure 10: Convergence curves of BSAISA and BSA for the welded beam design problem.
4.3.5. Speed Reducer Design Problem (SRP)

This speed reducer design problem has eleven constraints, six continuous design variables, and one integer variable. The best solution obtained by BSAISA is x = (3.500000, 0.700000, 17, 7.300000, 7.715320, 3.350215, 5.286654) with the objective function value f(x) = 2994.471066 using 15,860 FEs. The comparison of the best solutions obtained by BSAISA and other well-known algorithms is given in Table 16. The statistical results of BSAISA, BSA, MDE, DEDS, HEAA, DELC, PSO-DE, ABC, MBA, and SSOC are listed in Table 17.

Table 16: Comparison of best solutions for the speed reducer design problem.
Table 17: Comparison of statistical results for the speed reducer design problem.

As shown in Tables 16 and 17, the obtained solution sets of all algorithms satisfy the constraints for this problem. BSAISA, BSA, DEDS, and DELC are able to find the best function value of 2994.471066, while the others are not. Among these four algorithms, DEDS, DELC, and BSA require 30,000, 30,000, and 25,640 FEs, respectively, whereas BSAISA requires only 15,860 FEs to reach the same best function value. MBA fails to find the best known function value; thus BSAISA is better than MBA on this problem even though MBA has lower FEs. As for the comparison of the Std, among the four algorithms that achieve the best known function value, BSAISA is worse than the others. However, it should be mentioned that the main purpose of the experiment is to compare the convergence speed between BSAISA and the other algorithms. From this point of view, it can be concluded that BSAISA performs better than the other algorithms in terms of convergence speed.

Figure 11 depicts the convergence curves of BSAISA and BSA for the speed reducer design problem, where the value plotted on the vertical axis is the difference from 2994.471066. Figure 11 shows that the convergence speed of BSAISA is faster than that of BSA.

Figure 11: Convergence curves of BSAISA and BSA for the speed reducer design problem.
4.4. Comparisons Using Sign Test

The Sign Test [80] is one of the most popular statistical methods used to determine whether two algorithms are significantly different. Recently, Miao et al. [81] utilized the Sign Test to analyze the performance differences between their proposed modified algorithm and the original one. In this paper, the two-tailed Sign Test with a significance level of 0.05 is adopted to test the significant differences between the results obtained by the different algorithms, and the test results are given in Table 18. The values of Best and FEs are the two most important criteria for the evaluation of algorithms in our paper; they are therefore chosen as the objectives of the Sign Test. The signs “+,” “≈,” and “−” represent, respectively, that BSAISA performs significantly better than, almost the same as, or significantly worse than the algorithm it is compared to. The null hypothesis is that the performances of BSAISA and each of the other algorithms are not significantly different.

Table 18: Comparisons between BSAISA and other algorithms in Sign Tests.

As shown in Table 18, the p values supporting the null hypothesis of the Sign Test for six pairs of algorithms (BSAISA-SR, BSAISA-FSA, BSAISA-AMA, BSAISA-MABC, BSAISA-RPGA, and BSAISA-BSA-SAε) are 0.006, 0.003, 0.000, 0.012, 0.001, and 0.039, respectively, and thereby we can reject the null hypothesis. This illustrates that the optimization performance of the proposed BSAISA is significantly better than those of the six algorithms. The p value of BSAISA-CDE is equal to 0.581, which shows that we cannot reject the null hypothesis. However, according to the related sign values (“+,” “≈,” and “−”) in Table 18, BSAISA is slightly worse than CDE on 5 problems but wins on another 8 problems, which illustrates that the proposed BSAISA is highly competitive with CDE. Generally, the statistical values and sign values validate that BSAISA is superior to the other well-known algorithms on the constrained optimization problems.

On the one hand, all experimental results suggest that the proposed method improves the convergence speed of BSA. On the other hand, the overall comparative results of BSAISA and other well-known algorithms demonstrate that BSAISA is more effective and competitive for constrained and engineering optimization problems in terms of convergence speed.

5. Conclusions and Future Work

In this paper, we proposed a modified version of BSA inspired by the Metropolis criterion in SA (BSAISA). The Metropolis criterion probabilistically accepts a higher energy state, and its acceptance probability decreases as the temperature decreases, which motivated us to redesign the amplitude control factor F so that it decreases adaptively as the number of iterations increases. The design principle and numerical analysis of the redesigned F indicate that the change in F can accelerate the convergence speed of the algorithm by improving its local exploitation capability. Furthermore, the redesigned F does not introduce extra parameters. We successfully implemented BSAISA to solve a set of constrained optimization and engineering design problems. The experimental results demonstrated that BSAISA has a faster convergence speed than BSA and can efficiently balance the capacities for global exploration and local exploitation. The comparisons between the results obtained by BSAISA and other well-known algorithms demonstrated that BSAISA is more effective and competitive for constrained and engineering optimization problems in terms of convergence speed.

This paper suggests that the proposed BSAISA is superior in terms of convergence speed or computational cost. The downside of the proposed algorithm is, of course, that its robustness does not show a comparable superiority. Our future work is therefore to further investigate the robustness of BSAISA on the basis of the current research. Niche techniques are able to effectively maintain the population diversity of evolutionary algorithms [82, 83]; how to combine BSAISA with niche techniques to improve the robustness of the algorithm deserves to be studied in the future.

Appendix

A. Constrained Benchmark Problems

A.1. Constrained Problem 01

A.2. Constrained Problem 02

A.3. Constrained Problem 03

A.4. Constrained Problem 04

A.5. Constrained Problem 05

A.6. Constrained Problem 06

A.7. Constrained Problem 07

A.8. Constrained Problem 08

A.9. Constrained Problem 09

A.10. Constrained Problem 10

A.11. Constrained Problem 11

A.12. Constrained Problem 12

A.13. Constrained Problem 13

B. Engineering Design Problems

B.1. Three-Bar Truss Design Problem

B.2. Pressure Vessel Design Problem

B.3. Tension/Compression Spring Design Problem

B.4. Welded Beam Design Problem

where

B.5. Speed Reducer Design Problem

where

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (no. 61663009) and the State Key Laboratory of Silicate Materials for Architectures (Wuhan University of Technology, SYSJJ2018-21).

References

  1. P. Posik, W. Huyer, and L. Pal, “A comparison of global search algorithms for continuous black box optimization,” Evolutionary Computation, vol. 20, no. 4, pp. 509–541, 2012. View at Google Scholar
  2. A. P. Piotrowski, M. J. Napiorkowski, J. J. Napiorkowski, and P. M. Rowinski, “Swarm Intelligence and Evolutionary Algorithms: Performance versus speed,” Information Sciences, vol. 384, pp. 34–85, 2017. View at Publisher · View at Google Scholar · View at Scopus
  3. S. Das and A. Konar, “A swarm intelligence approach to the synthesis of two-dimensional IIR filters,” Engineering Applications of Artificial Intelligence, vol. 20, no. 8, pp. 1086–1096, 2007. View at Publisher · View at Google Scholar · View at Scopus
  4. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995. View at Scopus
  5. M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 26, no. 1, pp. 29–41, 1996. View at Publisher · View at Google Scholar · View at Scopus
  6. X. S. Yang and S. Deb, “Cuckoo search via LΘvy flights,” in Proceedings of the In World Congress on Nature Biologically Inspired Computing, pp. 210–214, NaBIC, 2009.
  7. D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep., Erciyes University, Engineering Faculty, Computer Engineering Department, 2005. View at Google Scholar
  8. J. Q. Zhang and A. C. Sanderson, “JADE: adaptive differential evolution with optional external archive,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 945–958, 2009. View at Publisher · View at Google Scholar · View at Scopus
  9. S. M. Elsayed, R. A. Sarker, and D. L. Essam, “Adaptive Configuration of evolutionary algorithms for constrained optimization,” Applied Mathematics and Computation, vol. 222, pp. 680–711, 2013. View at Publisher · View at Google Scholar · View at Scopus
  10. J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, University of Michigan Press, Oxford, UK, 1975. View at MathSciNet
  11. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997. View at Publisher · View at Google Scholar · View at Scopus
  12. Z. Hu, Q. Su, X. Yang, and Z. Xiong, “Not guaranteeing convergence of differential evolution on a class of multimodal functions,” Applied Soft Computing, vol. 41, pp. 479–487, 2016. View at Publisher · View at Google Scholar · View at Scopus
  13. Q. Su and Z. Hu, “Color image quantization algorithm based on self-adaptive differential Evolution,” Computational Intelligence and Neuroscience, vol. 2013, Article ID 231916, 8 pages, 2013. View at Publisher · View at Google Scholar · View at Scopus
14. Z. Hu, Q. Su, and X. Xia, “Multiobjective image color quantization algorithm based on self-adaptive hybrid differential evolution,” Computational Intelligence and Neuroscience, vol. 2016, Article ID 2450431, 12 pages, 2016.
15. C. Igel, N. Hansen, and S. Roth, “Covariance matrix adaptation for multi-objective optimization,” Evolutionary Computation, vol. 15, no. 1, pp. 1–28, 2007.
16. P. Civicioglu, “Backtracking search optimization algorithm for numerical optimization problems,” Applied Mathematics and Computation, vol. 219, no. 15, pp. 8121–8144, 2013.
17. A. El-Fergany, “Optimal allocation of multi-type distributed generators using backtracking search optimization algorithm,” International Journal of Electrical Power & Energy Systems, vol. 64, pp. 1197–1205, 2015.
18. M. Modiri-Delshad and N. A. Rahim, “Multi-objective backtracking search algorithm for economic emission dispatch problem,” Applied Soft Computing, vol. 40, pp. 479–494, 2016.
19. S. D. Madasu, M. L. S. S. Kumar, and A. K. Singh, “Comparable investigation of backtracking search algorithm in automatic generation control for two area reheat interconnected thermal power system,” Applied Soft Computing, vol. 55, pp. 197–210, 2017.
20. J. A. Ali, M. A. Hannan, A. Mohamed, and M. G. M. Abdolrasol, “Fuzzy logic speed controller optimization approach for induction motor drive using backtracking search algorithm,” Measurement, vol. 78, pp. 49–62, 2016.
21. M. A. Hannan, J. A. Ali, A. Mohamed, and M. N. Uddin, “A Random Forest Regression Based Space Vector PWM Inverter Controller for the Induction Motor Drive,” IEEE Transactions on Industrial Electronics, vol. 64, no. 4, pp. 2689–2699, 2017.
22. K. Guney, A. Durmus, and S. Basbug, “Backtracking search optimization algorithm for synthesis of concentric circular antenna arrays,” International Journal of Antennas and Propagation, vol. 2014, Article ID 250841, 11 pages, 2014.
23. R. Muralidharan, V. Athinarayanan, G. K. Mahanti, and A. Mahanti, “QPSO versus BSA for failure correction of linear array of mutually coupled parallel dipole antennas with fixed side lobe level and VSWR,” Advances in Electrical Engineering, vol. 2014, Article ID 858290, 7 pages, 2014.
24. M. Eskandari and Ö. Toygar, “Selection of optimized features and weights on face-iris fusion using distance images,” Computer Vision and Image Understanding, vol. 137, pp. 63–75, 2015.
25. U. H. Atasevar, P. Civicioglu, E. Besdok, and C. Ozkan, “A new unsupervised change detection approach based on DWT image fusion and backtracking search optimization algorithm for optical remote sensing data,” in Proceedings of the ISPRS Technical Commission VII Mid-Term Symposium 2014, pp. 15–18, October 2014.
26. S. K. Agarwal, S. Shah, and R. Kumar, “Classification of mental tasks from EEG data using backtracking search optimization based neural classifier,” Neurocomputing, vol. 166, pp. 397–403, 2015.
27. L. Zhang and D. Zhang, “Evolutionary cost-sensitive extreme learning machine,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 12, pp. 3045–3060, 2016.
28. F. Zou, D. Chen, S. Li, R. Lu, and M. Lin, “Community detection in complex networks: multi-objective discrete backtracking search optimization algorithm with decomposition,” Applied Soft Computing, vol. 53, pp. 285–295, 2017.
29. C. Zhang, J. Zhou, C. Li, W. Fu, and T. Peng, “A compound structure of ELM based on feature selection and parameter optimization using hybrid backtracking search algorithm for wind speed forecasting,” Energy Conversion and Management, vol. 143, pp. 360–376, 2017.
30. C. Lu, L. Gao, X. Li, and P. Chen, “Energy-efficient multi-pass turning operation using multi-objective backtracking search algorithm,” Journal of Cleaner Production, vol. 137, pp. 1516–1531, 2016.
31. M. Akhtar, M. A. Hannan, R. A. Begum, H. Basri, and E. Scavino, “Backtracking search algorithm in CVRP models for efficient solid waste collection and route optimization,” Waste Management, vol. 61, pp. 117–128, 2017.
32. M. S. Ahmed, A. Mohamed, T. Khatib, H. Shareef, R. Z. Homod, and J. A. Ali, “Real time optimal schedule controller for home energy management system using new binary backtracking search algorithm,” Energy and Buildings, vol. 138, pp. 215–227, 2017.
33. S. O. Kolawole and H. Duan, “Backtracking search algorithm for non-aligned thrust optimization for satellite formation,” in Proceedings of the 11th IEEE International Conference on Control and Automation (IEEE ICCA '14), pp. 738–743, June 2014.
34. Q. Lin, L. Gao, X. Li, and C. Zhang, “A hybrid backtracking search algorithm for permutation flow-shop scheduling problem,” Computers & Industrial Engineering, vol. 85, pp. 437–446, 2015.
35. J. Lin, “Oppositional backtracking search optimization algorithm for parameter identification of hyperchaotic systems,” Nonlinear Dynamics, vol. 80, no. 1-2, pp. 209–219, 2015.
36. Q. L. Xu, N. Guo, and L. Xu, “Opposition-based backtracking search algorithm for numerical optimization problems,” in Proceedings of the International Conference on Intelligent Science and Big Data Engineering, pp. 223–234, 2015.
37. X. Yuan, B. Ji, Y. Yuan, R. M. Ikram, X. Zhang, and Y. Huang, “An efficient chaos embedded hybrid approach for hydro-thermal unit commitment problem,” Energy Conversion and Management, vol. 91, pp. 225–237, 2015.
38. X. Yuan, X. Wu, H. Tian, Y. Yuan, and R. M. Adnan, “Parameter identification of nonlinear Muskingum model with backtracking search algorithm,” Water Resources Management, vol. 30, no. 8, pp. 2767–2783, 2016.
39. S. Vitayasak and P. Pongcharoen, “Backtracking search algorithm for designing a robust machine layout,” WIT Transactions on Engineering Sciences, vol. 95, pp. 411–420, 2014.
40. S. Vitayasak, P. Pongcharoen, and C. Hicks, “A tool for solving stochastic dynamic facility layout problems with stochastic demand using either a Genetic Algorithm or modified Backtracking Search Algorithm,” International Journal of Production Economics, vol. 190, pp. 146–157, 2017.
41. M. Li, H. Zhao, and X. Weng, “Backtracking search optimization algorithm with comprehensive learning strategy,” Journal of Systems Engineering and Electronics, vol. 37, no. 4, pp. 958–963, 2015 (Chinese).
  42. W. Zhao, L. Wang, Y. Yin, B. Wang, and Y. Wei, “An improved backtracking search algorithm for constrained optimization problems,” in Proceedings of the International Conference on Knowledge Science, Engineering and Management, pp. 222–233, Springer International Publishing, 2014.
43. L. Wang, Y. Zhong, Y. Yin, W. Zhao, B. Wang, and Y. Xu, “A hybrid backtracking search optimization algorithm with differential evolution,” Mathematical Problems in Engineering, vol. 2015, Article ID 769245, 16 pages, 2015.
44. S. Das, D. Mandal, R. Kar, and S. P. Ghoshal, “Interference suppression of linear antenna arrays with combined Backtracking Search Algorithm and Differential Evolution,” in Proceedings of the 3rd International Conference on Communication and Signal Processing (ICCSP '14), pp. 162–166, April 2014.
45. S. Das, D. Mandal, R. Kar, and S. P. Ghoshal, “A new hybridized backtracking search optimization algorithm with differential evolution for sidelobe suppression of uniformly excited concentric circular antenna arrays,” International Journal of RF and Microwave Computer-Aided Engineering, vol. 25, no. 3, pp. 262–268, 2015.
46. S. Mallick, R. Kar, D. Mandal, and S. P. Ghoshal, “CMOS analogue amplifier circuits optimisation using hybrid backtracking search algorithm with differential evolution,” Journal of Experimental & Theoretical Artificial Intelligence, pp. 1–31, 2015.
47. D. Chen, F. Zou, R. Lu, and P. Wang, “Learning backtracking search optimisation algorithm and its application,” Information Sciences, vol. 376, pp. 71–94, 2017.
48. A. F. Ali, “A memetic backtracking search optimization algorithm for economic dispatch problem,” Egyptian Computer Science Journal, vol. 39, no. 2, pp. 56–71, 2015.
49. Y. Wu, Q. Tang, L. Zhang, and X. He, “Solving stochastic two-sided assembly line balancing problem via hybrid backtracking search optimization algorithm,” Journal of Wuhan University of Science and Technology (Natural Science Edition), vol. 39, no. 2, pp. 121–127, 2016 (Chinese).
50. Z. Su, H. Wang, and P. Yao, “A hybrid backtracking search optimization algorithm for nonlinear optimal control problems with complex dynamic constraints,” Neurocomputing, vol. 186, pp. 182–194, 2016.
51. S. Wang, X. Da, M. Li, and T. Han, “Adaptive backtracking search optimization algorithm with pattern search for numerical optimization,” Journal of Systems Engineering and Electronics, vol. 27, no. 2, Article ID 7514428, pp. 395–406, 2016.
52. H. Duan and Q. Luo, “Adaptive backtracking search algorithm for induction magnetometer optimization,” IEEE Transactions on Magnetics, vol. 50, no. 12, pp. 1–6, 2014.
53. X. J. Wang, S. Y. Liu, and W. K. Tian, “Improved backtracking search optimization algorithm with new effective mutation scale factor and greedy crossover strategy,” Journal of Computer Applications, vol. 34, no. 9, pp. 2543–2546, 2014 (Chinese).
54. W. K. Tian, S. Y. Liu, and X. J. Wang, “Study and improvement of backtracking search optimization algorithm based on differential evolution,” Application Research of Computers, vol. 32, no. 6, pp. 1653–1662, 2015.
55. A. Askarzadeh and L. dos Santos Coelho, “A backtracking search algorithm combined with Burger's chaotic map for parameter estimation of PEMFC electrochemical model,” International Journal of Hydrogen Energy, vol. 39, pp. 11165–11174, 2014.
56. X. Chen, S. Y. Liu, and Y. Wang, “Emergency resources scheduling based on improved backtracking search optimization algorithm,” Computer Applications and Software, vol. 32, no. 12, pp. 235–238, 2015 (Chinese).
57. S. Nama, A. K. Saha, and S. Ghosh, “Improved backtracking search algorithm for pseudo dynamic active earth pressure on retaining wall supporting c-Φ backfill,” Applied Soft Computing, vol. 52, pp. 885–897, 2017.
58. G. Karafotias, M. Hoogendoorn, and A. E. Eiben, “Parameter control in evolutionary algorithms: trends and challenges,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 2, pp. 167–187, 2015.
59. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, “Equation of state calculations by fast computing machines,” The Journal of Chemical Physics, vol. 21, no. 6, pp. 1087–1092, 1953.
60. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
61. D. Karaboga and B. Akay, “A comparative study of artificial bee colony algorithm,” Applied Mathematics and Computation, vol. 214, no. 1, pp. 108–132, 2009.
62. C. Zhang, Q. Lin, L. Gao, and X. Li, “Backtracking Search Algorithm with three constraint handling methods for constrained optimization problems,” Expert Systems with Applications, vol. 42, no. 21, pp. 7831–7845, 2015.
63. T. P. Runarsson and X. Yao, “Stochastic ranking for constrained evolutionary optimization,” IEEE Transactions on Evolutionary Computation, vol. 4, no. 3, pp. 284–294, 2000.
64. A. S. B. Ullah, R. Sarker, D. Cornforth, and C. Lokan, “AMA: a new approach for solving constrained real-valued optimization problems,” Soft Computing, vol. 13, no. 8-9, pp. 741–762, 2009.
65. A. R. Hedar and M. Fukushima, “Derivative-free filter simulated annealing method for constrained continuous global optimization,” Journal of Global Optimization, vol. 35, no. 4, pp. 521–549, 2006.
66. R. L. Becerra and C. A. Coello, “Cultured differential evolution for constrained optimization,” Computer Methods in Applied Mechanics and Engineering, vol. 195, no. 33–36, pp. 4303–4322, 2006.
67. D. Karaboga and B. Akay, “A modified Artificial Bee Colony (ABC) algorithm for constrained optimization problems,” Applied Soft Computing, vol. 11, no. 3, pp. 3021–3031, 2011.
68. C.-H. Lin, “A rough penalty genetic algorithm for constrained optimization,” Information Sciences, vol. 241, pp. 119–137, 2013.
69. M. Zhang, W. Luo, and X. Wang, “Differential evolution with dynamic stochastic selection for constrained optimization,” Information Sciences, vol. 178, no. 15, pp. 3043–3074, 2008.
70. Y. Wang, Z. X. Cai, Y. R. Zhou, and Z. Fan, “Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique,” Structural and Multidisciplinary Optimization, vol. 37, no. 4, pp. 395–413, 2009.
71. H. Liu, Z. Cai, and Y. Wang, “Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization,” Applied Soft Computing, vol. 10, no. 2, pp. 629–640, 2010.
72. L. Wang and L.-P. Li, “An effective differential evolution with level comparison for constrained engineering design,” Structural and Multidisciplinary Optimization, vol. 41, no. 6, pp. 947–963, 2010.
73. C. A. C. Coello and E. M. Montes, “Constraint-handling in genetic algorithms through the use of dominance-based tournament selection,” Advanced Engineering Informatics, vol. 16, no. 3, pp. 193–203, 2002.
74. E. Mezura-Montes, C. A. C. Coello, and J. Velázquez-Reyes, “Increasing successful offspring and diversity in differential evolution for engineering design,” in Proceedings of the Seventh International Conference on Adaptive Computing in Design and Manufacture (ACDM '06), pp. 131–139, 2006.
75. Q. He and L. Wang, “An effective co-evolutionary particle swarm optimization for constrained engineering design problems,” Engineering Applications of Artificial Intelligence, vol. 20, no. 1, pp. 89–99, 2007.
76. Q. He and L. Wang, “A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization,” Applied Mathematics and Computation, vol. 186, no. 2, pp. 1407–1422, 2007.
77. B. Akay and D. Karaboga, “Artificial bee colony algorithm for large-scale problems and engineering design optimization,” Journal of Intelligent Manufacturing, vol. 23, no. 4, pp. 1001–1014, 2012.
78. A. Sadollah, A. Bahreininejad, H. Eskandar, and M. Hamdi, “Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems,” Applied Soft Computing, vol. 13, no. 5, pp. 2592–2612, 2013.
79. E. Cuevas and M. Cienfuegos, “A new algorithm inspired in the behavior of the social-spider for constrained optimization,” Expert Systems with Applications, vol. 41, no. 2, pp. 412–425, 2014.
80. J. Derrac, S. García, D. Molina, and F. Herrera, “A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms,” Swarm and Evolutionary Computation, vol. 1, no. 1, pp. 3–18, 2011.
81. Y. Miao, Q. Su, Z. Hu, and X. Xia, “Modified differential evolution algorithm with onlooker bee operator for mixed discrete-continuous optimization,” SpringerPlus, vol. 5, no. 1, article no. 1914, 2016.
82. E. L. Yu and P. N. Suganthan, “Ensemble of niching algorithms,” Information Sciences, vol. 180, no. 15, pp. 2815–2833, 2010.
83. M. Li, D. Lin, and J. Kou, “A hybrid niching PSO enhanced with recombination-replacement crowding strategy for multimodal function optimization,” Applied Soft Computing, vol. 12, no. 3, pp. 975–987, 2012.