Applied Computational Intelligence and Soft Computing

Volume 2017, Article ID 7974218, 18 pages

https://doi.org/10.1155/2017/7974218

## Differential Evolution with Novel Mutation and Adaptive Crossover Strategies for Solving Large Scale Global Optimization Problems

^{1}College of Computer and Information Systems, Al-Yamamah University, P.O. Box 45180, Riyadh 11512, Saudi Arabia

^{2}Operations Research Department, Institute of Statistical Studies and Research, Cairo University, Giza 12613, Egypt

^{3}College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia

Correspondence should be addressed to Ali Wagdy Mohamed; aliwagdy@gmail.com

Received 27 September 2016; Revised 3 December 2016; Accepted 6 February 2017; Published 8 March 2017

Academic Editor: Miin-Shen Yang

Copyright © 2017 Ali Wagdy Mohamed and Abdulaziz S. Almazyad. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents a Differential Evolution algorithm for solving high-dimensional optimization problems over continuous space. The proposed algorithm, namely, ANDE, introduces a new triangular mutation rule based on the convex combination vector of the triplet defined by three randomly chosen vectors and the difference vectors between the best, better, and worst individuals among those three vectors. The mutation rule is combined with the basic mutation strategy DE/rand/1/bin, where the new triangular mutation rule is applied with a probability of 2/3, since it has both exploration ability and exploitation tendency. Furthermore, we propose a novel self-adaptive scheme for gradual change of the crossover rate values that benefits from the past experience of the individuals in the search space during the evolution process, which in turn considerably balances the common trade-off between population diversity and convergence speed. The proposed algorithm has been evaluated on the 20 standard high-dimensional benchmark numerical optimization problems of the IEEE CEC-2010 Special Session and Competition on Large Scale Global Optimization. The comparison results between ANDE, its two versions, and seven state-of-the-art evolutionary algorithms that were all tested on this test suite indicate that the proposed algorithm and its two versions are highly competitive algorithms for solving large scale global optimization problems.

#### 1. Introduction

In general, a global numerical optimization problem can be expressed as follows (without loss of generality, a minimization problem is considered here):

$$\min_{\vec{x} \in S} f(\vec{x}), \quad \vec{x} = (x_1, x_2, \ldots, x_D), \quad S = \prod_{j=1}^{D} \left[x_j^L, x_j^U\right], \tag{1}$$

where $f$ is the objective function, $S$ is the decision vector space consisting of $D$ variables, $D$ is the problem dimension, that is, the number of variables to be optimized, and $x_j^L$ and $x_j^U$ are the lower and upper bounds for each decision variable, respectively.
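As a concrete instance of this formulation, the sketch below evaluates a simple bound-constrained objective in pure Python. The sphere function used here is an illustrative choice only; it is not one of the CEC-2010 benchmark functions.

```python
import random

# Illustrative D-dimensional sphere function f(x) = sum(x_j^2);
# over S = [-5, 5]^D its global minimum is f(0, ..., 0) = 0.
def sphere(x):
    return sum(xj * xj for xj in x)

D = 10                      # problem dimension
lower, upper = -5.0, 5.0    # bounds x_j^L and x_j^U, identical for every variable here

# Any point inside S is a feasible decision vector.
x = [random.uniform(lower, upper) for _ in range(D)]
print(sphere(x) >= 0.0)     # prints True: the sphere function is nonnegative everywhere
```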

The optimization of large scale problems of this kind (i.e., $D = 1000$) is considered a challenging task, since the solution space of a problem often increases exponentially with the problem dimension and the characteristics of a problem may change with the scale [1]. Generally speaking, there are many types of real-world large scale global optimization (LSGO) problems in engineering, manufacturing, and economic applications (biocomputing, data or web mining, scheduling, vehicle routing, etc.). In order to draw more attention to this challenge, the first competition on LSGO was held at CEC 2008 [2]. Consequently, in recent years, LSGO has gained considerable attention and has attracted much interest from Operations Research and Computer Science professionals, researchers, and practitioners, as well as mathematicians and engineers. These challenges have motivated researchers to design and improve various kinds of efficient, effective, and robust metaheuristic algorithms that can solve LSGO problems with high-quality solutions and fast convergence at low computational cost. Evolutionary algorithms (EAs), whose structure is inspired by the mechanisms of natural evolution, have been proposed to meet these global optimization challenges. Due to their adaptability and robustness, EAs are especially capable of solving difficult optimization problems, such as highly nonlinear, nonconvex, nondifferentiable, and multimodal optimization problems. Generally, the process of EAs is based on the exploration and the exploitation of the search space through selection and reproduction operators [3]. Like other EAs, Differential Evolution (DE) is a stochastic population-based search method, proposed by Storn and Price [4]. Its advantages are its simplicity of implementation, ease of use, speed, and robustness.
Due to these advantages, it has been successfully applied to many real-world applications, such as admission capacity planning in higher education [5, 6], financial market dynamic modeling [7], solar energy [8], and many others. In addition, many recent studies show that the performance of DE is highly competitive with, and in many cases superior to, other EAs in solving unconstrained optimization problems, constrained optimization problems, multiobjective optimization problems, and other complex optimization problems [9]. However, like all other evolutionary search techniques, DE has weaknesses. Generally, DE has a good global exploration ability that can reach the region of the global optimum, but it is slow at exploitation of the solution [10]. Additionally, the parameters of DE are problem dependent, and it is difficult to adjust them for different problems. Moreover, DE performance decreases as search space dimensionality increases [11]. Finally, the performance of DE deteriorates significantly when premature convergence and/or stagnation occur [11, 12]. The performance of DE basically depends on the mutation strategy and the crossover operator. Besides, the intrinsic control parameters (population size NP, scaling factor $F$, and crossover rate CR) play a vital role in balancing the diversity of the population and the convergence speed of the algorithm. In the original DE, these parameters are user-defined and kept fixed during the run. However, many recent studies indicate that the performance of DE is highly affected by the parameter settings, and the choice of optimal parameter values is always problem dependent. Moreover, prior to an actual optimization process, the traditional time-consuming trial-and-error method is used to fine-tune the control parameters for each problem.
Alternatively, in order to achieve acceptable results even for the same problem, different parameter settings along with different mutation schemes at different stages of evolution are needed. Therefore, some techniques have been designed to adjust control parameters in an adaptive or self-adaptive manner instead of the trial-and-error procedure, and new mutation rules have been developed to improve the search capability of DE [13–22]. Based on the above considerations, in this paper we present a novel DE, referred to as ANDE, that includes two novel modifications: a triangular mutation rule and a self-adaptive scheme for gradual change of the CR values. In ANDE, the novel triangular mutation rule balances the global exploration ability and the local exploitation tendency and enhances the convergence rate of the algorithm. Furthermore, a novel adaptation scheme for CR is developed that benefits from the past experience of the individuals in the search space during the evolution process. Scaling factors are produced according to a uniform distribution to balance global exploration and local exploitation during evolution. ANDE has been tested on the 20 benchmark test functions developed for the 2010 IEEE Congress on Evolutionary Computation (IEEE CEC 2010) [1]. The experimental results indicate that the proposed algorithm and its two versions are highly competitive for solving large scale global optimization problems. The remainder of this paper is organized as follows. Section 2 briefly introduces DE and its operators. Section 3 reviews the related work. ANDE is then presented in Section 4. The experimental results are given in Section 5. Section 6 discusses the effectiveness of the proposed modifications. Finally, conclusions and future work are presented in Section 7.

#### 2. Differential Evolution (DE)

This section provides a brief summary of the basic Differential Evolution (DE) algorithm. In simple DE, generally known as DE/rand/1/bin [4, 23], an initial random population of NP vectors $\vec{x}_i^0 = (x_{i,1}^0, x_{i,2}^0, \ldots, x_{i,D}^0)$, $i = 1, 2, \ldots, NP$, is randomly generated according to a uniform distribution within the lower and upper boundaries $(x_j^L, x_j^U)$. After initialization, these individuals are evolved by DE operators (mutation and crossover) to generate a trial vector. A comparison between the parent and its trial vector is then made to select the vector that should survive to the next generation [9]. The DE steps are discussed below.

##### 2.1. Initialization

In order to establish a starting point for the optimization process, an initial population must be created. Typically, each decision variable in every vector of the initial population is assigned a randomly chosen value from within the boundary constraints:

$$x_{i,j}^0 = x_j^L + \operatorname{rand}(0,1) \cdot \left(x_j^U - x_j^L\right), \quad i = 1, 2, \ldots, NP, \; j = 1, 2, \ldots, D, \tag{2}$$

where $\operatorname{rand}(0,1)$ denotes a uniformly distributed random number in $[0,1]$, generated anew for each decision variable.
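The initialization step in (2) can be sketched in pure Python as follows; the function name and parameter values are illustrative, not taken from the paper.

```python
import random

def init_population(NP, D, low, up, seed=1):
    """Uniform random initialization: x_ij = x_j^L + rand(0,1) * (x_j^U - x_j^L).
    A fixed seed is used here only to make the sketch reproducible."""
    rng = random.Random(seed)
    return [[low + rng.random() * (up - low) for _ in range(D)]
            for _ in range(NP)]

pop = init_population(NP=50, D=10, low=-5.0, up=5.0)
print(len(pop), len(pop[0]))                                   # prints: 50 10
print(all(-5.0 <= t <= 5.0 for row in pop for t in row))       # prints: True
```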

##### 2.2. Mutation

At generation $G$, for each target vector $\vec{x}_i^G$, a mutant vector $\vec{v}_i^{G+1}$ is generated according to the following:

$$\vec{v}_i^{G+1} = \vec{x}_{r_1}^G + F \cdot \left(\vec{x}_{r_2}^G - \vec{x}_{r_3}^G\right), \tag{3}$$

with randomly chosen indices $r_1, r_2, r_3 \in \{1, 2, \ldots, NP\}$, mutually distinct and different from the index $i$. $F$ is a real number that controls the amplification of the difference vector $(\vec{x}_{r_2}^G - \vec{x}_{r_3}^G)$. According to Price et al. [24], the range of $F$ is $[0, 2]$. In this work, if a component of a mutant vector violates the search space, the value of this component is regenerated using (2).
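A minimal sketch of the DE/rand/1 mutation in (3), including the out-of-bound handling described above, might look like the following (pure Python; the helper name is an assumption made for illustration):

```python
import random

def mutate_rand_1(pop, i, F, low, up, rng):
    """DE/rand/1: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 mutually
    distinct and different from i. A component that leaves [low, up] is
    re-drawn uniformly within the bounds, as in the initialization step."""
    r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
    D = len(pop[0])
    v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
    return [vj if low <= vj <= up else rng.uniform(low, up) for vj in v]

rng = random.Random(7)
pop = [[rng.uniform(-5, 5) for _ in range(4)] for _ in range(6)]
v = mutate_rand_1(pop, i=0, F=0.5, low=-5, up=5, rng=rng)
print(len(v), all(-5 <= vj <= 5 for vj in v))   # prints: 4 True
```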

##### 2.3. Crossover

There are two main crossover types, binomial and exponential. In the binomial crossover, the target vector is mixed with the mutated vector, using the following scheme, to yield the trial vector $\vec{u}_i^{G+1}$:

$$u_{i,j}^{G+1} = \begin{cases} v_{i,j}^{G+1}, & \text{if } \operatorname{rand}(j) \le CR \text{ or } j = j_{\text{rand}}, \\ x_{i,j}^G, & \text{otherwise}, \end{cases} \tag{4}$$

where $j = 1, 2, \ldots, D$; $\operatorname{rand}(j)$ is the $j$th evaluation of a uniform random number generator in $[0,1]$; $CR \in [0,1]$ is the crossover rate; and $j_{\text{rand}} \in \{1, 2, \ldots, D\}$ is a randomly chosen index which ensures that $\vec{u}_i^{G+1}$ gets at least one element from $\vec{v}_i^{G+1}$; otherwise no new parent vector would be produced and the population would not alter.
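The binomial scheme in (4) can be sketched as follows (pure Python; the all-zeros target and all-ones mutant are artificial inputs chosen so the effect of the crossover is visible):

```python
import random

def binomial_crossover(x, v, CR, rng):
    """u_j = v_j if rand_j <= CR or j == j_rand, else x_j.
    The forced index j_rand guarantees the trial vector takes at
    least one component from the mutant vector."""
    D = len(x)
    j_rand = rng.randrange(D)
    return [v[j] if (rng.random() <= CR or j == j_rand) else x[j]
            for j in range(D)]

rng = random.Random(3)
x = [0.0] * 5          # target vector
v = [1.0] * 5          # mutant vector
u = binomial_crossover(x, v, CR=0.9, rng=rng)
print(sum(u) >= 1.0)   # prints True: at least one mutant component survives
```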

In an exponential crossover, an integer value $n$ is randomly chosen within the range $[1, D]$. This integer acts as a starting point in the target vector, from where the crossover or exchange of components with the mutant vector starts. Another integer value $L$ (denoting the number of exchanged components) is also chosen from the interval $[1, D - n + 1]$. The trial vector $\vec{u}_i^{G+1}$ is created by inheriting the values of the variables in locations $n$ to $n + L - 1$ from the mutant vector $\vec{v}_i^{G+1}$ and the remaining ones from the target vector $\vec{x}_i^G$.
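A sketch of this exponential scheme follows (pure Python). Note that many DE implementations instead grow $L$ geometrically, extending the block while successive uniform draws stay below CR; the uniform draw of $L$ here simply mirrors the textual description above.

```python
import random

def exponential_crossover(x, v, rng):
    """Exponential crossover as described in the text: a start index n and a
    block length L are drawn, and positions n .. n+L-1 are inherited from
    the mutant vector v; all other positions keep the target's values."""
    D = len(x)
    n = rng.randrange(D)        # 0-based start index (n in [1, D] in the text)
    L = rng.randint(1, D - n)   # number of components taken from the mutant
    u = list(x)
    u[n:n + L] = v[n:n + L]
    return u

rng = random.Random(5)
x, v = [0.0] * 6, [1.0] * 6
u = exponential_crossover(x, v, rng)
print(1 <= sum(u) <= 6)         # prints True: a contiguous run of mutant values
```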

##### 2.4. Selection

DE adopts a greedy selection strategy. If and only if the trial vector $\vec{u}_i^{G+1}$ yields a fitness function value as good as or better than that of $\vec{x}_i^G$, then $\vec{x}_i^{G+1}$ is set to $\vec{u}_i^{G+1}$. Otherwise, the old vector $\vec{x}_i^G$ is retained. The selection scheme is as follows (for a minimization problem):

$$\vec{x}_i^{G+1} = \begin{cases} \vec{u}_i^{G+1}, & \text{if } f\left(\vec{u}_i^{G+1}\right) \le f\left(\vec{x}_i^G\right), \\ \vec{x}_i^G, & \text{otherwise}. \end{cases} \tag{5}$$

A detailed description of the standard DE algorithm is given in Algorithm 1.
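Putting the four steps together, a minimal end-to-end sketch of the standard DE/rand/1/bin loop (not the paper's ANDE, and not a reproduction of Algorithm 1; parameter values are illustrative) might look like this, shown here minimizing the sphere function:

```python
import random

def de_rand_1_bin(f, D, low, up, NP=30, F=0.5, CR=0.9, gens=200, seed=11):
    """Minimal DE/rand/1/bin: uniform initialization, DE/rand/1 mutation,
    binomial crossover, and greedy one-to-one selection."""
    rng = random.Random(seed)
    pop = [[rng.uniform(low, up) for _ in range(D)] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(NP):
            r1, r2, r3 = rng.sample([j for j in range(NP) if j != i], 3)
            j_rand = rng.randrange(D)
            u = []
            for j in range(D):
                if rng.random() <= CR or j == j_rand:
                    vj = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    if not (low <= vj <= up):
                        vj = rng.uniform(low, up)  # regenerate out-of-bound components
                    u.append(vj)
                else:
                    u.append(pop[i][j])
            fu = f(u)
            if fu <= fit[i]:       # greedy selection: keep the better of parent/trial
                pop[i], fit[i] = u, fu
    return min(fit)

best = de_rand_1_bin(lambda x: sum(t * t for t in x), D=5, low=-5.0, up=5.0)
print(best < 0.01)                 # DE converges readily on this simple landscape
```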