Computational Intelligence and Neuroscience

Volume 2018, Article ID 9167414, 27 pages

https://doi.org/10.1155/2018/9167414

## Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

^{1}School of Information and Mathematics, Yangtze University, Jingzhou, Hubei 434023, China
^{2}School of Software, East China Jiaotong University, Nanchang, Jiangxi 330013, China

Correspondence should be addressed to Zhongbo Hu; huzbdd@126.com

Received 10 October 2017; Accepted 20 December 2017; Published 13 February 2018

Academic Editor: Silvia Conforto

Copyright © 2018 Hailong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, but its local exploitation capability is relatively poor, which affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency of BSA. In the BSAISA, the amplitude control factor ($F$) is modified based on the Metropolis criterion in simulated annealing. The redesigned $F$ can decrease adaptively as the number of iterations increases, and it does not introduce any extra parameters. A self-adaptive $\varepsilon$-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms on thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed.

#### 1. Introduction

Optimization is an essential research objective in the fields of applied mathematics and computer science. Optimization algorithms mainly aim to obtain the global optimum for optimization problems. There are many different kinds of optimization problems in the real world. When an optimization problem has simple and explicit gradient information, or requires a relatively small budget of allowed function evaluations, classical optimization techniques such as mathematical programming can often achieve efficient results [1]. However, many real-world engineering optimization problems have complex, nonlinear, or nondifferentiable forms, which make them difficult to tackle using classical optimization techniques. The emergence of metaheuristic algorithms has overcome the deficiencies of classical optimization techniques to some extent, as they do not require gradient information and have the ability to escape from local optima. Metaheuristic algorithms are mainly inspired by a variety of natural phenomena and/or biological social behavior. Among these metaheuristic algorithms, swarm intelligence algorithms and evolutionary algorithms are perhaps the most attractive [2]. Swarm intelligence algorithms [3] generally simulate the intelligent behavior of swarms of creatures; examples include particle swarm optimization (PSO) [4], ant colony optimization (ACO) [5], cuckoo search (CS) [6], and the artificial bee colony (ABC) algorithm [7]. These algorithms are generally inspired by a series of complex behavioral processes in swarms with mutual cooperation and self-organization, in which "cooperation" is the core concept. Evolutionary algorithms (EAs) [8, 9] are inspired by the mechanisms of natural evolution, in which "evolution" is the key idea.
Examples of EAs include genetic algorithm (GA) [10], differential evolution (DE) [11–14], covariance matrix adaptation evolution strategy (CMAES) [15], and the backtracking search optimization algorithm (BSA) [16].

BSA is an iterative population-based EA, which was first proposed by Civicioglu in 2013. BSA has three basic genetic operators: selection, mutation, and crossover. The main difference between BSA and other similar algorithms is that BSA possesses a memory for storing a population from a randomly chosen previous generation, which is used to generate the search-direction matrix for the next iteration. In addition, BSA has a simple structure, which makes it efficient, fast, and capable of solving multimodal problems. BSA has only one control parameter, called the *mix-rate*, which significantly reduces the algorithm's sensitivity to the initial values of its parameters. Due to these characteristics, in less than 4 years, BSA has been employed successfully to solve various engineering optimization problems, such as power systems [17–19], induction motors [20, 21], antenna arrays [22, 23], digital image processing [24, 25], artificial neural networks [26–29], and energy and environmental management [30–32].

However, BSA has a weak local exploitation capacity and its convergence speed is relatively slow. Thus, many studies have attempted to improve the performance of BSA, and several modifications of BSA have been proposed to overcome its deficiencies. From the perspective of the modified object, the modifications of BSA can be divided into the following four categories (a publication with more than one modification is classified under its major modification category):

(i) Modifications of the initial populations [33–38]
(ii) Modifications of the reproduction operators, including the mutation and crossover operators [39–47]
(iii) Modifications of the selection operators, including the local exploitation strategy [48–51]
(iv) Modifications of the control factor and parameter [52–57]

The research on the control parameters of EAs is one of the most promising areas in evolutionary computation; even a small modification of the parameters of an algorithm can make a considerable difference [58]. In the basic BSA, the value of the amplitude control factor ($F$) is the product of three and a standard normal distribution random number (i.e., $F = 3 \cdot randn$), which is often too large or too small according to its formulation. This may give BSA a powerful global exploration capability in the early iterations; however, it also weakens the exploitation capability of BSA in the later iterations. Based on these considerations, we focus mainly on the influence of the amplitude control factor ($F$) on the BSA, that is, the fourth of the categories defined above. Duan and Luo [52] redesigned an adaptive $F$ based on the fitness statistics of the population at each iteration. Wang et al. [53] and Tian et al. [54] proposed an adaptive $F$ based on the Maxwell-Boltzmann distribution. Askarzadeh and dos Santos Coelho [55] proposed an adaptive $F$ based on Burger's chaotic map. Chen et al. [56] redesigned an adaptive $F$ by introducing two extra parameters. Nama et al. [57] proposed a new $F$ that changes adaptively within a bounded range and a new mix-rate that changes randomly within a bounded range. These modifications of $F$ have achieved good effects in the BSA.

Different from the modifications of $F$ in BSA described above, a modified version of BSA (BSAISA) inspired by simulated annealing (SA) is proposed in this paper. In the BSAISA, an iteration-based $F$ is redesigned by learning from a characteristic of SA, namely, that SA can probabilistically accept a higher energy state and that the acceptance probability decreases as the temperature decreases. The redesigned $F$ can decrease adaptively as the number of iterations increases without introducing any extra parameters. This adaptive variation tendency provides an efficient tradeoff between early exploration and later exploitation capability. We verified the effectiveness and competitiveness of BSAISA in terms of convergence speed in simulation experiments using thirteen constrained benchmarks and five engineering design problems.

The remainder of this paper is organized as follows. Section 2 introduces the basic BSA. As the main contribution of this paper, a detailed explanation of BSAISA is presented in Section 3. In Section 4, we present two sets of simulation experiments in which we implemented BSAISA and BSA to solve thirteen constrained optimization problems and five engineering design problems. The results are compared with those obtained by other well-known algorithms in terms of solution quality and function evaluations. Finally, we give our concluding remarks in Section 5.

#### 2. Backtracking Search Optimization Algorithm (BSA)

BSA is a population-based iterative EA. BSA generates trial populations while taking control of the amplitude of the search-direction matrix, which provides a strong global exploration capability. BSA equiprobably uses two random crossover strategies to exchange the corresponding elements of individuals in the population and the trial population during the crossover process. Moreover, BSA has two selection processes: one is used to select the historical population from the current and historical populations; the other is used to select the optimal population. In general, BSA can be divided into five processes: initialization, selection I, mutation, crossover, and selection II [16].

##### 2.1. Initialization

BSA generates the initial population $P$ and the initial old population $oldP$ using
$$P_{i,j} \sim U(low_j, up_j), \qquad oldP_{i,j} \sim U(low_j, up_j), \quad i = 1, 2, \ldots, N, \; j = 1, 2, \ldots, D, \tag{1}$$
where $P_{i,j}$ and $oldP_{i,j}$ are the $j$th elements of the $i$th individual, $N$ is the population size, $D$ is the problem dimension, $low_j$ and $up_j$ are the lower boundary and the upper boundary of the $j$th dimension, respectively, and $U$ is the random uniform distribution.
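As an illustration, the two populations of (1) can be generated as follows (a minimal Python sketch; the function name, bounds, and seed are illustrative, not from the paper):

```python
import random

def init_population(N, D, low, up, rng):
    """Generate N individuals whose j-th element is drawn uniformly
    from [low[j], up[j]], as in Eq. (1)."""
    return [[low[j] + rng.random() * (up[j] - low[j]) for j in range(D)]
            for _ in range(N)]

rng = random.Random(42)
low, up = [-10.0] * 3, [10.0] * 3
P = init_population(5, 3, low, up, rng)     # initial population
oldP = init_population(5, 3, low, up, rng)  # initial historical population
```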

##### 2.2. Selection I

BSA's selection I process is the beginning of each iteration. It aims to redefine the historical population $oldP$, which is used for calculating the search direction, based on the current population $P$ and the historical population $oldP$. The new $oldP$ is redefined through the "if-then" rule in
$$\text{if } a < b \text{ then } oldP := P, \quad a, b \sim U(0, 1), \tag{2}$$
where $:=$ is the update operation and $a$ and $b$ represent random numbers uniformly distributed between 0 and 1. The update operation (see (2)) ensures that BSA has a memory. After $oldP$ is redefined, the order of the individuals in $oldP$ is randomly permuted by
$$oldP := \mathrm{permuting}(oldP). \tag{3}$$
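The rules (2) and (3) can be sketched as follows (a minimal Python illustration; variable names are ours):

```python
import random

def selection_one(P, oldP, rng):
    """Selection I: redefine oldP by the "if-then" rule of Eq. (2),
    then randomly permute its individuals as in Eq. (3)."""
    if rng.random() < rng.random():          # a < b with a, b ~ U(0, 1)
        oldP = [row[:] for row in P]         # oldP := P -- gives BSA its memory
    else:
        oldP = [row[:] for row in oldP]
    rng.shuffle(oldP)                        # oldP := permuting(oldP)
    return oldP

rng = random.Random(0)
P = [[1.0, 2.0], [3.0, 4.0]]
oldP = [[0.0, 0.5], [9.0, 9.5]]
newOldP = selection_one(P, oldP, rng)
```

Either way, the returned population is a permutation of one of the two input populations.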

##### 2.3. Mutation

The mutation operator is used to generate the initial form of the trial population $M$ with
$$M = P + F \cdot (oldP - P), \tag{4}$$
where $F$ is the amplitude control factor of the mutation operator, used to control the amplitude of the search direction. Its value is $F = 3 \cdot randn$, where $randn \sim N(0, 1)$ and $N(0, 1)$ is the standard normal distribution.
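A minimal sketch of (4) in Python (element-wise over each individual; names are ours):

```python
import random

def mutate(P, oldP, rng):
    """Mutation of Eq. (4): M = P + F * (oldP - P), with F = 3 * randn."""
    F = 3.0 * rng.gauss(0.0, 1.0)            # amplitude control factor
    return [[p + F * (o - p) for p, o in zip(Pi, Oi)]
            for Pi, Oi in zip(P, oldP)]

rng = random.Random(7)
P = [[1.0, 2.0], [3.0, 4.0]]
oldP = [[0.0, 0.0], [5.0, 5.0]]
M = mutate(P, oldP, rng)
```

Note that when $oldP = P$ the search direction is zero and the mutant equals the parent, regardless of $F$.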

##### 2.4. Crossover

In this process, BSA generates the final form of the trial population $T$. BSA equiprobably uses two crossover strategies to manipulate the selected elements of the individuals at each iteration. Both strategies generate a binary integer-valued matrix ($map$) of size $N \times D$ to select the elements of individuals that have to be manipulated.

Strategy I uses the mix-rate parameter (mix-rate) to control the number of elements of each individual that are manipulated, $\lceil \text{mix-rate} \cdot rand \cdot D \rceil$, where mix-rate = 1. Strategy II manipulates only one randomly selected element of each individual, chosen by $randi(D)$, where $randi(D)$ is a random integer uniformly distributed over $\{1, 2, \ldots, D\}$.

The two strategies are employed equiprobably to manipulate the elements of individuals through the "if-then" rule: if $map_{i,j} = 1$, where $i \in \{1, \ldots, N\}$ and $j \in \{1, \ldots, D\}$, then $T_{i,j}$ is updated with $T_{i,j} := P_{i,j}$.

At the end of crossover process, if some individuals in have overflowed the allowed search space limits, they will need to be regenerated by using (1).
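The whole crossover process, including the boundary control at the end, can be sketched as follows (a Python illustration following the map convention described above; function and variable names are ours):

```python
import math
import random

def crossover(P, M, low, up, rng, mix_rate=1.0):
    """Crossover: start from the mutant M, revert map-selected elements to P
    (Strategy I or II chosen equiprobably), then regenerate any
    out-of-bounds elements using Eq. (1)."""
    N, D = len(P), len(P[0])
    T = [row[:] for row in M]
    mp = [[0] * D for _ in range(N)]
    if rng.random() < rng.random():                      # Strategy I
        for i in range(N):
            u = list(range(D))
            rng.shuffle(u)
            for j in u[:math.ceil(mix_rate * rng.random() * D)]:
                mp[i][j] = 1
    else:                                                # Strategy II
        for i in range(N):
            mp[i][rng.randrange(D)] = 1
    for i in range(N):
        for j in range(D):
            if mp[i][j] == 1:                            # T_{i,j} := P_{i,j}
                T[i][j] = P[i][j]
            if not (low[j] <= T[i][j] <= up[j]):         # boundary control
                T[i][j] = low[j] + rng.random() * (up[j] - low[j])
    return T

rng = random.Random(3)
low, up = [-5.0, -5.0], [5.0, 5.0]
P = [[1.0, -1.0], [2.0, -2.0]]
M = [[9.0, 0.5], [-7.0, 1.5]]                            # some elements overflow
T = crossover(P, M, low, up, rng)
```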

##### 2.5. Selection II

In BSA's selection II process, the fitness values of the individuals in $P$ and $T$ are compared and used to update the population based on a greedy selection. If $T_i$ has a better fitness value than $P_i$, then $P_i$ is updated to be $T_i$. The population is updated by using
$$P_i := \begin{cases} T_i, & \text{if } f(T_i) < f(P_i), \\ P_i, & \text{otherwise}, \end{cases} \quad i = 1, 2, \ldots, N. \tag{5}$$

If the best individual of $P$ ($P_{best}$) has a better fitness value than the global minimum value obtained so far, the global minimizer will be updated to be $P_{best}$, and the global minimum value will be updated to be the fitness value of $P_{best}$.
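The greedy replacement and best-individual tracking can be sketched as follows (a minimal Python illustration; the sphere objective is only an example, not from the paper):

```python
def selection_two(P, T, f):
    """Selection II: greedy replacement of Eq. (5), then track the best."""
    newP = [Ti if f(Ti) < f(Pi) else Pi for Pi, Ti in zip(P, T)]
    best = min(newP, key=f)
    return newP, best, f(best)

def sphere(x):
    """Illustrative objective: f(x) = sum of squares (minimum 0 at origin)."""
    return sum(v * v for v in x)

P = [[3.0, 3.0], [1.0, 1.0]]
T = [[0.0, 0.0], [2.0, 2.0]]
newP, best, best_val = selection_two(P, T, sphere)
```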

#### 3. Modified BSA Inspired by SA (BSAISA)

As mentioned in the introduction of this paper, research work on the control parameters of an algorithm is very meaningful and valuable. In this paper, in order to improve BSA's exploitation capability and convergence speed, we propose a modified version of BSA (BSAISA), in which the redesign of $F$ is inspired by SA. The details of the modifications in BSAISA are described in this section. First, the structure of the modified $F$ is described, before we explain the detailed design principle of the modified $F$ inspired by SA. Subsequently, two numerical tests are used to illustrate that the redesigned $F$ improves the convergence speed of the algorithm. We introduce a self-adaptive $\varepsilon$-constrained method for handling constraints at the end of this section.

##### 3.1. Structure of the Adaptive Amplitude Control Factor

The modified $F$ is a normally distributed random number, where its mean value is an exponential function and its variance is equal to 1. In BSAISA, we redesign the adaptive $F$ to replace the original version using
$$F_i = N\!\left(e^{-G/\Delta f_i},\, 1\right), \quad i = 1, 2, \ldots, N, \tag{6}$$
where $i$ is the index of individuals, $F_i$ is the adaptive amplitude control factor that corresponds to the $i$th individual, $\Delta f_i$ is the absolute value of the difference between the objective function values of $P_i$ and $oldP_i$ (the individual difference), $N(\cdot, 1)$ is the normal distribution with variance 1, and $G$ is the current iteration.

According to (6), the exponential function (the mean value) decreases dynamically with the change in the number of iterations ($G$) and the individual differences ($\Delta f_i$). Based on the probability density function of the normal distribution, the modified $F$ can decrease adaptively as the number of iterations increases. Another characteristic of the modified *F* is that it does not introduce any extra parameters.
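A minimal sketch of the adaptive factor in Python, under the assumptions stated above ($\Delta f_i$ taken between the corresponding individuals of $P$ and $oldP$; the small floor guarding against division by zero is our addition):

```python
import math
import random

def modified_F(delta_f, G, rng):
    """Adaptive F of Eq. (6): a normal random number with variance 1 whose
    mean exp(-G / delta_f) decays as the iteration count G grows.
    delta_f is the individual difference |f(P_i) - f(oldP_i)| (assumed);
    the 1e-12 floor guards against division by zero when delta_f = 0."""
    mean = math.exp(-G / max(delta_f, 1e-12))
    return rng.gauss(mean, 1.0)

# the mean amplitude shrinks toward 0 as iterations increase (delta_f = 5)
means = [math.exp(-G / 5.0) for G in (1, 10, 100)]
```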

##### 3.2. Design Principle of the Modified Amplitude Control Factor

The design principle of the modified is inspired by the Metropolis criterion in SA. SA is a metaheuristic optimization technique based on physical behavior in nature. SA based on the Monte Carlo method was first proposed by Metropolis et al. [59] and it was successfully introduced into the field of combinatorial optimization for solving complex optimization problems by Kirkpatrick et al. [60].

The basic concept of SA derives from the process of physical annealing of solids. An annealing process occurs when a metal is heated to a molten state at a high temperature and then cooled slowly. If the temperature is decreased quickly, the resulting crystal will have many defects and will be merely metastable; the most stable crystalline state will not be achieved at all. In other words, this may form a higher energy state than the most stable crystalline state. Therefore, in order to reach the absolute minimum energy state, the temperature needs to be decreased at a slow rate. SA simulates this annealing process to search for the global optimal solution of an optimization problem. However, accepting only the moves that lower the energy of the system is like extremely rapid quenching; thus SA uses a special and effective acceptance method, the Metropolis criterion, which can probabilistically accept hill-climbing moves (higher energy moves). As a result, the energy of the system evolves into a Boltzmann distribution during the process of simulated annealing. From this point of view, it is no exaggeration to say that the Metropolis criterion is the core of SA.

The Metropolis criterion can be expressed by the physical significance of energy, where the new energy state will be accepted when the new energy state is lower than the previous energy state, and the new energy state will be probabilistically accepted when the new energy state is higher than the previous energy state. This feature of SA can escape from being trapped in local minima especially in the early stages of the search. It can also be described as follows.

(i) If $E_j \le E_i$, then the new state is accepted and the energy with the displaced atom is used as the starting point for the next step, where $E$ represents the energy of the atom, $i$ and $j$ are states of the atom, and $j$ is the next state of $i$.

(ii) If $E_j > E_i$, then the acceptance probability $p = e^{-(E_j - E_i)/(kT)}$ is calculated and a random number $r$ is generated from a uniform distribution over $(0, 1)$, where $k$ is Boltzmann's constant (in general, $k = 1$) and $T$ is the current temperature. If $r < p$, then the new energy state will be accepted; otherwise, the previous energy state is used to start the next step.
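The acceptance rules (i) and (ii) can be sketched as follows (a minimal Python illustration; variable names are ours):

```python
import math
import random

def metropolis_accept(E_old, E_new, T, rng, k=1.0):
    """Metropolis criterion: always accept a lower-energy state; accept a
    higher-energy state with probability exp(-(E_new - E_old) / (k * T))."""
    dE = E_new - E_old
    if dE <= 0:
        return True                                   # rule (i)
    return rng.random() < math.exp(-dE / (k * T))     # rule (ii)

rng = random.Random(0)
downhill = metropolis_accept(10.0, 9.0, T=1.0, rng=rng)   # lower energy
```

At high temperatures uphill moves are almost always accepted; as $T$ falls, the acceptance probability of uphill moves shrinks toward zero.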

*Analysis 1.* The Metropolis criterion implies that SA has two characteristics: (1) SA can probabilistically accept a higher energy state, and (2) the acceptance probability of SA decreases as the temperature decreases. Therefore, SA can reject a local minimum and jump out of it with a dynamic and decreasing probability to continue exploiting other solutions in the state space. This acceptance mechanism can enrich the diversity of energy states.

*Analysis 2.* As shown in (4), $F$ is used to control the amplitude of population mutation in BSA; thus $F$ is an important factor for controlling population diversity. If $F$ is excessively large, the diversity of the population will be too high and the convergence speed of BSA will slow down. If $F$ is excessively small, the diversity of the population will be reduced, so it will be difficult for BSA to obtain the global optimum and it may readily be trapped by a local optimum. Therefore, adaptively controlling the amplitude of $F$ is key to accelerating the convergence speed of the algorithm while maintaining its population diversity.

Based on Analyses 1 and 2, it is clear that if $F$ can decrease dynamically, the convergence speed of BSA can be accelerated while the population diversity is maintained. On the other hand, SA possesses exactly this characteristic: its acceptance probability can be reduced dynamically. Based on these two considerations, we propose BSAISA with a redesigned $F$, which is inspired by SA. More specifically, the new $F$ (see (6)) is redesigned by learning from the formulation of the acceptance probability ($p = e^{-\Delta E/(kT)}$), and its formulation has been shown in the previous subsection.

Comparing the two formulas of the modified $F$ and $p$, the individual difference ($\Delta f$) of a population and the energy difference ($\Delta E$) of a system both decrease as the number of iterations increases; the temperature of SA tends to decrease, while the iteration count of BSA tends to increase. As a result, one can observe a correspondence between the modified $F$ and $p$ in which the reciprocal of the individual difference ($1/\Delta f$) corresponds to the energy difference ($\Delta E$) of SA, and the reciprocal of the current iteration ($1/G$) corresponds to the current temperature ($T$) of SA. In this way, the redesigned $F$ can decrease adaptively as the number of iterations increases.

##### 3.3. Numerical Analysis of the Modified Amplitude Control Factor

In order to verify that the convergence speed of the basic BSA is improved by the modified $F$, two types of unconstrained benchmark functions (unimodal and multimodal) are used to test the changing trends in $F$, the population variances, and the best function values as the number of iterations increases. The two functions are Schwefel 1.2 and Rastrigin, and their detailed descriptions are provided in [61]. The two functions and the user parameters, including the population size ($N$), the dimension ($D$), and the maximum number of iterations (Max), are shown in Table 1. Three groups of test results are compared: (1) the curves of the mean values of the modified and original $F$ for Schwefel 1.2 and Rastrigin, (2) the curves of the mean population variances of BSA and BSAISA for the two functions, and (3) the convergence curves of BSA and BSAISA for the two functions. They are depicted in Figures 1, 2, and 3, respectively. Based on Figures 1–3, two results can be observed as follows.