Research Article  Open Access
Wali Khan Mashwani, Zia Ur Rehman, Maharani A. Bakar, Ismail Koçak, Muhammad Fayaz, "A Customized Differential Evolutionary Algorithm for Bounded Constrained Optimization Problems", Complexity, vol. 2021, Article ID 5515701, 24 pages, 2021. https://doi.org/10.1155/2021/5515701
A Customized Differential Evolutionary Algorithm for Bounded Constrained Optimization Problems
Abstract
Bound-constrained optimization has wide applications in science and engineering. In the last two decades, various evolutionary algorithms (EAs) were developed under the umbrella of evolutionary computation for solving bound-constrained benchmark functions and real-world problems. In general, the developed EAs belong to the nature-inspired algorithm (NIA) and swarm intelligence (SI) paradigms. The differential evolution algorithm is one of the most popular and well-known EAs and has secured top ranks in most EA competitions held in the special sessions of the IEEE Congress on Evolutionary Computation. In this paper, a customized differential evolution (CDE) algorithm is suggested and applied to twenty-nine large-scale bound-constrained benchmark functions. The suggested CDE algorithm obtained promising numerical results over 51 independent runs of simulation. Most of the 2013 IEEE-CEC benchmark functions are tackled efficiently in terms of proximity and diversity.
1. Introduction
Bound-constrained optimization is the mathematical process of optimizing an objective function in the presence of constraints imposed on the decision space. The variables in the decision space may be continuous, discrete, or mixed. The basic elements of an optimization problem are the decision variables, the objective function, and the constraint functions. The standard form of a single-objective, nonlinear, constrained optimization problem is described as follows [1]:

$$\begin{aligned}
\min\ & f(\mathbf{x}), \quad \mathbf{x} = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n,\\
\text{subject to}\ & g_j(\mathbf{x}) \le 0, \quad j = 1, \dots, p,\\
& h_k(\mathbf{x}) = 0, \quad k = 1, \dots, q,\\
& x_i^{L} \le x_i \le x_i^{U}, \quad i = 1, \dots, n,
\end{aligned} \tag{1}$$

where $\mathbf{x}$ is the candidate solution with $n$ decision variables/real parameters, $f(\mathbf{x})$ is the objective function, $g_j(\mathbf{x})$ represents the inequality constraint functions, $h_k(\mathbf{x})$ denotes the equality constraint functions, and $x_i^{L} \le x_i \le x_i^{U}$ are the bound constraints over the search space $\Omega$. A solution that is optimal in some certain neighbourhood is called a local optimum. Mathematically, by a local optimum we mean a solution $\mathbf{x}^{*}$ such that $f(\mathbf{x}^{*}) \le f(\mathbf{x})$ for all $\mathbf{x}$ in some neighbourhood $N(\mathbf{x}^{*})$. The optimum solution $\mathbf{x}^{*}$ is called a global optimum if $f(\mathbf{x}^{*}) \le f(\mathbf{x})$ for all $\mathbf{x} \in \Omega$.
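As a minimal illustration of this formulation, the following Python sketch encodes a hypothetical two-variable problem and checks a candidate for feasibility; the objective, constraints, bounds, and tolerance are chosen for illustration only and are not taken from the paper.

```python
import numpy as np

# Hypothetical example: minimize a sphere objective subject to one inequality,
# one equality, and bound constraints (all values are illustrative).
def f(x):
    return float(np.sum(x ** 2))          # objective f(x)

def g1(x):
    return x[0] + x[1] - 1.0              # inequality constraint g1(x) <= 0

def h1(x):
    return x[0] - x[1]                    # equality constraint h1(x) = 0

lower = np.array([-5.0, -5.0])            # bound constraints x^L <= x <= x^U
upper = np.array([5.0, 5.0])

def is_feasible(x, tol=1e-6):
    """A candidate is feasible if it satisfies the bounds, the inequality,
    and the equality constraint (the latter up to a tolerance)."""
    in_bounds = bool(np.all(x >= lower) and np.all(x <= upper))
    return in_bounds and g1(x) <= tol and abs(h1(x)) <= tol

x = np.array([0.5, 0.5])
print(is_feasible(x), f(x))               # → True 0.5
```

In a bound-constrained suite such as CEC'13, only the last constraint type (the box bounds) is present.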
In the last two decades, various optimization methods were developed, and are still being developed, for dealing with diverse test suites of benchmark functions and real-world optimization problems [2–6]. These algorithms are inspired by Darwin's theory of evolution and the Darwinian principle of "survival of the fittest". The main intrinsic operators used in the framework of EAs to evolve a population are mutation and crossover, typically combined with an elitism scheme [7]. EAs are nature-inspired techniques [8–10]. In general, EAs can be categorized into nine different groups: biology-based [11, 12], physics-based [13, 14], social-based [15–18], music-based [19], chemical-based [20], sport-based [21], mathematics-based [22], swarm-based [23–29], and hybrid methods [30–37]. All these algorithms operate on a uniformly random set of solutions generated within the search domain of the given optimization and search problem. Among the evolutionary algorithms, the genetic algorithm (GA) [38], evolution strategy (ES) [39], evolutionary programming (EP) [5, 29], and genetic programming (GP) [40] are classical paradigms of the evolutionary computing field [3]. Differential evolution (DE) [41], particle swarm optimization (PSO) [24], ant colony optimization (ACO) [23, 42, 43], cuckoo search (CS) [44], and the teaching-learning-based optimization (TLBO) algorithm [45, 46] are comparatively recently developed EAs that have been successfully applied to various test suites of optimization problems and many real-world problems [18, 47–50].
Differential evolution (DE) is one of the well-established population-based evolutionary algorithms; it was first introduced by Storn and Price for solving Chebyshev polynomial fitting problems [51, 52]. DE is capable of handling linear, nonlinear, and multimodal objective functions and has been applied to real-world tasks such as the coordination of directional overcurrent relays in power systems. DE has also been used to train the weights of neural networks for real-world problems. The basic idea of the technique is to avoid repetition of the existing set of solutions. Classic DE has two main control parameters that need to be set efficiently: the scaling factor, $F$, and the probability of crossover, $CR$; the settings of these parameters are discussed in [31, 41, 53–61]. Crossover is the main source of exploration, mutation is utilized for exploitation purposes, and the selection operator imposes the pressure for survival of the fittest individuals to evolve the population. The adjustment of the various parameters used in the search operators is mainly problem-dependent, and their proper setting through trial-and-error experiments is quite difficult and time-consuming [62].
Different search operators behave differently at different stages of the optimization search process while dealing with complicated optimization and search problems. It is quite difficult to determine whether a particular crossover, mutation, or selection operator is useful at a given stage of the optimization process. In this paper, a customized differential evolution (CDE) algorithm is suggested for solving the benchmark functions designed for the special session of the 2013 IEEE Congress on Evolutionary Computation (CEC'13) [63]. The proposed CDE algorithm employs more than one mutation operator, followed by crossover, to generate a new set of solutions for the next generation of the evolutionary search process. Experimental results demonstrate that the proposed CDE algorithm is more promising than an algorithm that employs a single mutation operator for population evolution.
The rest of the paper is organized as follows. Section 2 introduces the framework of the proposed customized differential evolutionary algorithm. Section 3 demonstrates the experimental results and characteristics of the used benchmark functions. Section 4 finally concludes this paper.
2. A Customized Differential Evolutionary Algorithm for Numerical Optimization Problems
Differential evolution (DE) is a stochastic algorithm that was first developed by Kenneth Price in 1994 [40]. DE is one of the most promising global search techniques and converges toward the known optimal solution of the problem at hand in a significant manner [60]. In general, DE has five different mutation strategies; among them, rand/1 and best/1 performed better in our case. DE/rand/1 exhibits slow convergence behaviour, whereas DE/best/1 converges faster but faces premature convergence issues. The proposed customized DE employs the DE/rand/1 and DE/best/1 mutation strategies simultaneously to alleviate the premature convergence of the original differential evolution. The customized DE switches between the aforementioned mutation strategies with a weight parameter that is reduced linearly from 0.25 to 0.01 over the whole course of optimization (Figure 1).
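The switching schedule described above can be sketched as follows; the function names and the use of a uniform random draw to decide between the two strategies are illustrative assumptions, not the paper's exact rule.

```python
import random

def switch_weight(gen, max_gen, w_start=0.25, w_end=0.01):
    """Linearly reduce the switching weight from w_start to w_end
    over the course of the run (a sketch of the schedule above)."""
    return w_start - (w_start - w_end) * (gen / max_gen)

def pick_strategy(gen, max_gen):
    """With probability equal to the current weight, use the explorative
    rand/1 strategy; otherwise fall back to best/1 (assumed rule)."""
    return "rand/1" if random.random() < switch_weight(gen, max_gen) else "best/1"

# Early generations favour rand/1 more often than late ones.
print(switch_weight(0, 1000), switch_weight(1000, 1000))
```

Under this schedule, exploration dominates early in the run and the best-guided strategy takes over as the weight decays toward 0.01.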
Mutation strategies are the most distinctive components of differential evolution; they are described as follows [64]:

(1) Rand/1: mathematically, it is formulated as

$$\mathbf{v}_i = \mathbf{x}_{r_1} + F\,(\mathbf{x}_{r_2} - \mathbf{x}_{r_3}), \tag{2}$$

where $\mathbf{x}_{r_1}$, $\mathbf{x}_{r_2}$, and $\mathbf{x}_{r_3}$ are randomly taken distinct solutions from the population other than the target solution $\mathbf{x}_i$, $F$ is the randomly generated scaling factor, and $\mathbf{v}_i$ is the mutant vector. In Algorithm 1, the mutation strategy described in equation (2) is used to solve the CEC 2013 unconstrained suite of benchmark functions. This strategy is good at exploration while comparatively weak at exploitation, because of which it does not converge efficiently, keeping in view the average value of most of the used benchmark functions (Algorithm 2).

(2) Best/1: mathematically, it is formulated as

$$\mathbf{v}_i = \mathbf{x}_{best} + F\,(\mathbf{x}_{r_1} - \mathbf{x}_{r_2}), \tag{3}$$

where $\mathbf{x}_{best}$ is the best solution in the current population, $\mathbf{x}_{r_1}$ and $\mathbf{x}_{r_2}$ are randomly chosen distinct solutions, $\mathbf{x}_i$ is the target solution, $F$ is the randomly generated scaling factor, and $\mathbf{v}_i$ is the mutant vector. In Algorithm 3, the mutation strategy described in equation (3) shows better performance in exploitation but is comparatively weak in exploration, which induces premature convergence.

(3) Current/rand/2: mathematically, it is formulated as

$$\mathbf{v}_i = \mathbf{x}_i + F\,(\mathbf{x}_{r_1} - \mathbf{x}_{r_2}) + F\,(\mathbf{x}_{r_3} - \mathbf{x}_{r_4}), \tag{4}$$

where $\mathbf{x}_i$ is the target solution, and $\mathbf{x}_{r_1}, \dots, \mathbf{x}_{r_4}$ are randomly taken distinct solutions from the population other than the target solution. The mutation strategy described in equation (4) is employed in Algorithm 4 for creating new solutions; it is good at exploitation while comparatively weak at exploration.

(4) Rand/2: mathematically, it is formulated as

$$\mathbf{v}_i = \mathbf{x}_{r_1} + F\,(\mathbf{x}_{r_2} - \mathbf{x}_{r_3}) + F\,(\mathbf{x}_{r_4} - \mathbf{x}_{r_5}), \tag{5}$$

where $\mathbf{x}_{r_1}, \dots, \mathbf{x}_{r_5}$ are randomly taken distinct solutions from the population other than the target solution. The mutation strategy described in equation (5) has been used in Algorithm 4; it performs well in exploration while being less effective in exploitation.

(5) Best/2: mathematically, it is formulated as

$$\mathbf{v}_i = \mathbf{x}_{best} + F\,(\mathbf{x}_{r_1} - \mathbf{x}_{r_2}) + F\,(\mathbf{x}_{r_3} - \mathbf{x}_{r_4}), \tag{6}$$

where $\mathbf{x}_{best}$ is the best solution in the current population, $\mathbf{x}_{r_1}, \dots, \mathbf{x}_{r_4}$ are taken randomly, and $F$ is the scaling factor, randomly generated within the specified bound. The mutation strategy described in equation (6) has been employed in Algorithm 4 to balance exploitation versus exploration in order to avoid premature convergence during population evolution.

(6) Rand-to-best/1: mathematically, it is formulated as

$$\mathbf{v}_i = \mathbf{x}_{r_1} + F\,(\mathbf{x}_{best} - \mathbf{x}_{r_2}) + F\,(\mathbf{x}_{r_3} - \mathbf{x}_{r_4}), \tag{7}$$

where $\mathbf{x}_{r_1}, \dots, \mathbf{x}_{r_4}$ are chosen randomly. The scaling factor, $F$, is adjusted in all algorithms, and $\mathbf{v}_i$ is the mutant vector as used in Algorithm 5.
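The strategies above can be sketched in a few lines of Python. A fixed $F$ and a freshly drawn set of mutually distinct random indices per call are simplifying assumptions (the paper draws $F$ randomly); the function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def distinct_indices(pop_size, exclude, k):
    """Pick k mutually distinct indices, all different from `exclude`."""
    candidates = [j for j in range(pop_size) if j != exclude]
    return rng.choice(candidates, size=k, replace=False)

def mutate(pop, i, best, strategy, F=0.5):
    """Sketch of the mutation strategies described above, applied to
    target index i; `best` is the best solution in the population."""
    r = distinct_indices(len(pop), i, 5)
    x = pop
    if strategy == "rand/1":
        return x[r[0]] + F * (x[r[1]] - x[r[2]])
    if strategy == "best/1":
        return best + F * (x[r[0]] - x[r[1]])
    if strategy == "current/rand/2":
        return x[i] + F * (x[r[0]] - x[r[1]]) + F * (x[r[2]] - x[r[3]])
    if strategy == "rand/2":
        return x[r[0]] + F * (x[r[1]] - x[r[2]]) + F * (x[r[3]] - x[r[4]])
    if strategy == "best/2":
        return best + F * (x[r[0]] - x[r[1]]) + F * (x[r[2]] - x[r[3]])
    if strategy == "rand-to-best/1":
        return x[r[0]] + F * (best - x[r[1]]) + F * (x[r[2]] - x[r[3]])
    raise ValueError(f"unknown strategy: {strategy}")
```

Note that every strategy builds the mutant vector from scaled difference vectors, which is what makes the step size self-adapt to the population's spread.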
The proposed CDE technique, as outlined in Algorithm 6, employs the mutation strategies described in equations (2) and (6) along with the crossover described in equation (8) to create the new set of solutions for the next generation of population evolution.

2.1. Crossover
In the current research, a binomial crossover operator is used. In binomial crossover, every target vector $\mathbf{x}_i$ is mixed with its corresponding mutant vector $\mathbf{v}_i$ to get the trial vector $\mathbf{u}_i$ as follows:

$$u_{i,j} = \begin{cases} v_{i,j}, & \text{if } rand_j \le CR \text{ or } j = j_{rand},\\ x_{i,j}, & \text{otherwise}, \end{cases} \quad j = 1, \dots, D, \tag{8}$$

where $j$ is the coordinate counter, $\mathbf{v}_i$ is the mutant vector, $CR$ is the probability of crossover, $j_{rand} \in \{1, \dots, D\}$ is a randomly generated integer, $D$ denotes the dimension of the search space, $\mathbf{x}_i$ is the target vector, and $\mathbf{u}_i$ is the trial vector corresponding to $\mathbf{x}_i$.
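A direct transcription of the binomial crossover rule, as a sketch with assumed names:

```python
import numpy as np

rng = np.random.default_rng(1)

def binomial_crossover(target, mutant, CR=0.9):
    """Binomial crossover: copy the mutant coordinate wherever rand_j <= CR,
    and force at least one mutant coordinate through j_rand."""
    D = len(target)
    j_rand = rng.integers(D)          # guarantees the trial differs from the target
    mask = rng.random(D) <= CR
    mask[j_rand] = True
    return np.where(mask, np.asarray(mutant), np.asarray(target))
```

With $CR = 1$ the trial vector equals the mutant vector; with $CR$ close to 0 it inherits nearly everything from the target except the forced $j_{rand}$ coordinate.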
2.2. Selection
In the selection scheme, each parent (target) solution and its corresponding trial vector compete with each other based on their fitness values. The selection scheme is illustrated as follows:

$$\mathbf{x}_i^{t+1} = \begin{cases} \mathbf{u}_i^{t}, & \text{if } f(\mathbf{u}_i^{t}) \le f(\mathbf{x}_i^{t}),\\ \mathbf{x}_i^{t}, & \text{otherwise}, \end{cases} \tag{9}$$

where $\mathbf{u}_i$ is the trial vector, $\mathbf{x}_i$ is the target vector, and $f(\mathbf{u}_i)$ and $f(\mathbf{x}_i)$ are the fitness values of the trial and target vectors, respectively.
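The greedy one-to-one selection rule can be written as follows (minimization assumed; the sphere fitness function is an illustrative stand-in for any benchmark objective):

```python
import numpy as np

def select(target, trial, f):
    """One-to-one greedy selection: keep the trial vector only if its
    fitness is no worse than the target's (minimization)."""
    return trial if f(trial) <= f(target) else target

sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
print(select([2.0, 2.0], [1.0, 1.0], sphere))   # → [1.0, 1.0]
```

Because each trial competes only against its own parent, the population size stays fixed and the best solution found so far can never be lost.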
3. Simulation Results, Comparison, and Discussion
Due to the flurry of recently developed EAs, their performance is mainly analyzed using different test suites of optimization and search problems. In the last few years, several unconstrained (i.e., bound-constrained) benchmark functions and constrained test problems have been designed for the special sessions of the IEEE Congress on Evolutionary Computation (IEEE-CEC) series of EA competitions [63]. In the study of this paper, we have used twenty-eight different unconstrained continuous benchmark functions whose characteristics are given in Table 1. The first five test problems, $f_1$–$f_5$, are unimodal functions; most evolutionary algorithms get trapped in a local basin of attraction while solving them. The problems $f_6$ to $f_{20}$ are multimodal functions containing more than one local minimum; they are much more difficult to deal with than the unimodal functions. The functions $f_{21}$ to $f_{28}$ are composition functions.

3.1. PC Configuration and Simulation Environment
The simulations were conducted on a PC with 8 GB RAM and an Intel(R) Core(TM) i3-7100U CPU @ 2.40 GHz. All simulations were done in the MATLAB (R2013a) environment. Each benchmark function was solved in different dimensions by executing 51 independent runs with a fixed maximum number of function evaluations, from which the final best solutions were obtained. A linearly decreasing weight parameter controls the switching of the mutation strategies.
Tables 2 and 3 present the numerical results of CDE as outlined in Algorithm 6 in comparison with DE1 as described in Algorithm 1 and DE5 as outlined in Algorithm 5. The numerical results for the first group of benchmark functions, in terms of the minimum, maximum, average, median, and standard deviation of the function values, are summarized in Table 2. Similarly, the numerical results for the remaining benchmark functions are given in Table 3.
