At the request of the authors, the article titled “An Improved SPEA2 Algorithm with Adaptive Selection of Evolutionary Operators Scheme for Multiobjective Optimization Problems” [1] has been retracted. The article was published without the knowledge or approval of Zhen Chen, who graduated from the School of Computer and Communication Technology, Lanzhou University of Technology, and wrote this article.


#### References

- F. Zhao, W. Lei, W. Ma, Y. Liu, and C. Zhang, “An improved SPEA2 algorithm with adaptive selection of evolutionary operators scheme for multiobjective optimization problems,”
*Mathematical Problems in Engineering*, vol. 2016, Article ID 8010346, 20 pages, 2016.

*Mathematical Problems in Engineering*, Volume 2016, Article ID 8010346, 20 pages

http://dx.doi.org/10.1155/2016/8010346

## An Improved SPEA2 Algorithm with Adaptive Selection of Evolutionary Operators Scheme for Multiobjective Optimization Problems

^{1}School of Computer and Communication Technology, Lanzhou University of Technology, Lanzhou 730050, China

^{2}School of Economics and Management, Tongji University, Shanghai 200092, China

^{3}H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA

Received 6 May 2016; Revised 26 July 2016; Accepted 28 August 2016

Academic Editor: Alfredo G. Hernández-Diaz

Copyright © 2016 Fuqing Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A fixed evolutionary mechanism is usually adopted in multiobjective evolutionary algorithms, and their operators remain static during the evolutionary process, which prevents the algorithm from fully exploiting the search space and makes it easy to trap in local optima. In this paper, a SPEA2 algorithm based on adaptive selection of evolutionary operators (AOSPEA) is proposed. The proposed algorithm can adaptively select the simulated binary crossover, polynomial mutation, or differential evolution operator during the evolutionary process according to their contribution to the external archive. Meanwhile, the convergence of the proposed algorithm is analyzed with a Markov chain. Simulation results on standard benchmark functions reveal that the proposed algorithm outperforms other classical multiobjective evolutionary algorithms.

#### 1. Introduction

Multiobjective optimization problems (MOPs) [1] have two or more objectives. However, evolutionary multiobjective optimization (EMO) researchers are only interested in problems whose objectives are in conflict. For example, producing a product requires not only short production time but also high quality; those two objectives are in conflict.

Because no single solution can simultaneously optimize all the objectives when they are in conflict, the purpose of solving a MOP is to obtain a Pareto optimal set whose solutions approximate the Pareto front as closely and as uniformly as possible. Traditional optimization algorithms transform the multiobjective optimization problem into a single-objective problem with positive weighting coefficients. Their common weakness is that they produce a single Pareto optimum per run. The evolutionary algorithm, by contrast, is a population-based random search approach, which can generate a set of Pareto optimal solutions in a single run and is therefore well suited to solving MOPs. Since Schaffer [2] used multiobjective evolutionary algorithms (MOEAs) to solve MOPs, a variety of evolutionary algorithms have been developed. The first generation of MOEAs was characterized by the use of Pareto ranking for fitness assignment and of niching or fitness sharing to maintain diversity. Representative algorithms include the nondominated sorting genetic algorithm (NSGA) [3], the multiobjective genetic algorithm (MOGA) [4], and the niched Pareto genetic algorithm (NPGA) [5]. The second generation of MOEAs was distinguished by the use of elitism. The classical algorithms are the Pareto archived evolution strategy (PAES) [6], the Pareto envelope-based selection algorithm (PESA) [7] and its revised version PESA-II [8], the strength Pareto evolutionary algorithm (SPEA) [9] and its improved version SPEA2 [10], and the improved version of NSGA (NSGA-II) [11]. In recent years, new MOEA frameworks have been proposed. MOEA/D [12, 13], which combines traditional mathematical programming methods with multiobjective evolutionary algorithms, is one of these new frameworks; it shows high performance when solving MOPs with complicated shapes [14].
Meanwhile, many other nature-inspired metaheuristics, including Ant Colony Optimization [15, 16], Particle Swarm Optimization [17, 18], Immune Algorithms [19, 20], and Estimation of Distribution Algorithms [21, 22], have been successfully applied to handle MOPs. Moreover, MOEAs for complicated MOPs have also been extensively investigated, such as MOEAs for constrained MOPs [23], dynamic MOPs [24], and many-objective optimization problems [25].

SPEA2 is one of the second-generation MOEAs. Bleuler et al. [26] considered program size as a second, independent objective besides program functionality and combined it with SPEA2. Over the past decade, SPEA2 has been successfully combined with other optimization strategies to form improved SPEA2 algorithms. Kim et al. [27] added a more efficient crossover mechanism and an archive mechanism to maintain diversity of the solutions in the objective and variable spaces. Zheng et al. [28] combined SPEA2 with the parallel genetic algorithm (PGA) to obtain the final solution. Wu et al. [29] proposed a modified method of calculating the fitness value based on SPEA2, in which a more reasonable elitist population selection strategy is used to improve the distribution performance of multiobjective optimization. Li et al. [30] combined several specific local search strategies with SPEA2 to enhance the algorithm’s exploitation capability. Belgasmi et al. [31] improved the performance of SPEA2 by applying a multiobjective quasigradient local search to candidate solutions with lower density estimation. Al-Hajri and Abido [32] adopted truncation algorithms to manage the Pareto optimal set size; in addition, the best compromise solution is extracted by using fuzzy set theory in SPEA2. Sheng et al. [33] presented an Improved Strength Pareto Evolutionary Algorithm (ISPEA2), which introduces a penalty factor into the objective function constraints, adopts an adaptive crossover as well as a mutation operator in the evolutionary process, and combines a simulated annealing iterative process with SPEA2. Maheta and Dabhi [34] proposed enhancements to improve convergence performance and diversity simultaneously for SPEA2, using a k-nearest-neighbor density estimation technique to maintain diversity among solutions.

Some researchers have shown that particular operators are more suitable for certain types of problems but cannot remain effective throughout the whole evolutionary process. For instance, simulated binary crossover (SBX) is widely used in MOEAs, but Deb [35] observed that the SBX operator was unable to address problems with variable linkages. An efficient evolutionary operator therefore plays an important role in the evolutionary process of optimization methods, and the operators have a great influence on an algorithm’s performance, so it is necessary to design efficient operators for MOEAs. At present, many efficient evolutionary operators have been designed to enhance the performance of algorithms [36–39]. Pulido and Coello [40] introduced the selection of the best evolutionary operator to solve a given problem. They proposed a microgenetic algorithm called *µ*GA2, which runs several simultaneous instances of *µ*GA2 with different evolutionary operators. Periodically, the instance with the poorest performance was replaced by the best-performing one, so that after several generations all the parallel instances worked only with the best-performing operators. A disadvantage of this approach is that once an operator has been discarded, it cannot be used again in the remaining evolutionary process. Huang et al. [41] utilized four different DE operators, chosen in an adaptive way: the operator that contributed the most to the search was given a higher probability of creating new solutions. Nebro et al. proposed two improved NSGA-II algorithms, NSGA-IIr and NSGA-IIa [42]. NSGA-IIr is an extension of NSGA-II that employs three different evolutionary operators: simulated binary crossover (SBX), polynomial mutation (PM), and DE; these operators are randomly selected whenever a new solution is to be produced. NSGA-IIa applies the same evolutionary operators as NSGA-IIr does, but each operator’s selection probability is adjusted according to the operator’s success in the last iteration. The algorithms’ performance has been greatly improved by using evolutionary operators in this adaptive way.
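The success-based adaptation shared by [41, 42] can be sketched as follows. The reward-proportional update rule and the probability floor `p_min` below are illustrative assumptions, not the exact scheme of either algorithm:

```python
import random

def update_probabilities(successes, attempts, p_min=0.1):
    """Illustrative reward-proportional update: each operator's selection
    probability grows with its recent success rate, with a floor p_min so
    no operator is permanently discarded (avoiding the drawback of [40])."""
    rates = [s / a if a > 0 else 0.0 for s, a in zip(successes, attempts)]
    total = sum(rates)
    k = len(rates)
    if total == 0:
        return [1.0 / k] * k  # no feedback yet: select uniformly
    return [p_min + (1 - k * p_min) * r / total for r in rates]

def select_operator(probs):
    """Roulette-wheel selection of an operator index."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Example: SBX succeeded 6/10 times recently, PM 1/10, DE 3/10.
probs = update_probabilities([6, 1, 3], [10, 10, 10])
```

With these counts, SBX receives the largest selection probability while PM still keeps the floor probability and can recover later if it starts contributing again.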

In this paper, an improved SPEA2 algorithm with adaptive selection of evolutionary operators (AOSPEA) is proposed. Multiobjective evolutionary operators including the simulated binary crossover, polynomial mutation, and differential evolution operator are employed to enhance the convergence performance and diversity of the SPEA2. Simulation results on the standard benchmarks show that the proposed algorithm outperforms SPEA2, NSGA-II, and PESA-II.

The rest of the paper is organized as follows: Section 2 provides a brief description of the SPEA2 framework. Section 3 presents the main loop of AOSPEA, with a detailed description and analysis of the proposed adaptive selection of evolutionary operators scheme; the convergence and complexity analyses of AOSPEA are also presented in this section. Section 4 describes the experimental results. Section 5 concludes the paper.

#### 2. Multiobjective Problems (MOPs)

##### 2.1. The Description of Multiobjective Problems (MOPs)

As no single solution can optimize all the objectives at the same time when these objectives are in conflict, the solution of a MOP is a set of decision variable vectors rather than a unique solution. Let $x_1, x_2 \in \Omega$ be two decision vectors; $x_1$ is said to dominate $x_2$ ($x_1 \prec x_2$) if $f_i(x_1) \le f_i(x_2)$ for all $i \in \{1, \dots, m\}$. Besides, at least one objective function should satisfy $f_j(x_1) < f_j(x_2)$. A point $x^* \in \Omega$ is called a Pareto optimal solution or nondominated solution if there is no $x \in \Omega$ such that $x$ dominates $x^*$. The set of all the Pareto optimal solutions is called the Pareto set, denoted by PS. The set of all the Pareto optimal objective vectors, $PF = \{F(x) \mid x \in PS\}$, is called the Pareto front. Since it is impossible to find the entire PS of a continuous MOP, the aim is to find a finite set of Pareto optimal vectors which are uniformly scattered along the true PF and highly representative of the entire PF.

In general, a multiobjective (minimization) problem can be illustrated mathematically as follows:

$$\min\ F(x) = \left(f_1(x), f_2(x), \dots, f_m(x)\right)^T \quad \text{subject to } x \in \Omega.$$

In the equation, $x = (x_1, x_2, \dots, x_n)$ is the decision vector and $\Omega$ is the decision space; $F(x)$ is the objective vector and $\mathbb{R}^m$ is the objective space. What makes Multiobjective Problems difficult to treat is that the objectives are mutually related, restrained, and even conflicting; every MOP contains several different objectives that jointly confine the results.
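As a concrete toy instance of such a formulation (an illustrative assumption, not a benchmark used in the paper), consider a bi-objective problem with a single decision variable:

```python
def F(x):
    """Toy bi-objective minimization problem on the decision space
    Omega = [0, 1]: f1 pulls x toward 0 while f2 pulls x toward 1,
    so the two objectives conflict and every x in [0, 1] is Pareto
    optimal (no single x minimizes both components)."""
    f1 = x ** 2            # minimized at x = 0
    f2 = (x - 1) ** 2      # minimized at x = 1
    return (f1, f2)
```

For example, F(0.0) = (0.0, 1.0) and F(1.0) = (1.0, 0.0): improving one objective necessarily worsens the other, which is exactly the conflict that makes a set of trade-off solutions, rather than a single optimum, the goal.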

##### 2.2. Pareto Dominance

A vector $u = (u_1, \dots, u_m)$ is said to Pareto dominate another vector $v = (v_1, \dots, v_m)$ if and only if $u_i \le v_i$ for all $i \in \{1, \dots, m\}$ and $u_j < v_j$ for at least one $j \in \{1, \dots, m\}$.

*Definition 1 (Pareto optimal point). *A vector $x^* \in \Omega$ is said to be Pareto optimal (in $\Omega$) if and only if there is no $x \in \Omega$ such that $F(x)$ Pareto dominates $F(x^*)$.

*Definition 2 (Pareto front). *The Pareto front, denoted by PF, of a set $S$ of objective vectors is given by $PF = \{u \in S \mid \text{there is no } v \in S \text{ that Pareto dominates } u\}$.

*Definition 3 (general MOP). *A general MOP is defined as minimizing (or maximizing) $F(x) = (f_1(x), \dots, f_k(x))$ subject to $g_i(x) \le 0$, $i = 1, \dots, m$, and $h_j(x) = 0$, $j = 1, \dots, p$, with $x \in \Omega$. A MOP solution minimizes (or maximizes) the components of the vector $F(x)$, where $x$ is an $n$-dimensional decision variable vector from some universe $\Omega$. It is noted that $g_i(x) \le 0$ and $h_j(x) = 0$ represent constraints that must be fulfilled while minimizing (or maximizing) $F(x)$, and $\Omega$ contains all possible $x$ that can be used to satisfy an evaluation of $F(x)$. Thus, a MOP consists of $k$ objectives reflected in the $k$ objective functions, $m + p$ constraints on the objective functions, and $n$ decision variables. The objective functions may be linear or nonlinear and continuous or discrete in nature. The evaluation function $F: \Omega \to \Lambda$ is a mapping from the vector of decision variables $x = (x_1, \dots, x_n)$ to output vectors $y = (y_1, \dots, y_k)$. The vector of decision variables can also be continuous or discrete.
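The dominance relations of Definitions 1 and 2 translate directly into code. A minimal sketch for minimization (the function names are my own, not from the paper):

```python
def dominates(u, v):
    """True if objective vector u Pareto dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(front):
    """Return the nondominated subset of a list of objective vectors,
    i.e. the Pareto front of the set (Definition 2)."""
    return [u for u in front
            if not any(dominates(v, u) for v in front if v is not u)]

points = [(1, 5), (2, 2), (4, 1), (3, 3)]
# (3, 3) is dominated by (2, 2); the other three are mutually nondominated.
```

Note that dominance is only a partial order: (1, 5) and (4, 1) are incomparable, which is why a MOP yields a set of trade-off solutions rather than a single optimum.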

#### 3. The Improved SPEA2 Algorithm with Adaptive Selection of Evolutionary Operators Scheme (AOSPEA)

##### 3.1. Brief Introduction to SPEA2

SPEA2 is an improved version of the Strength Pareto Evolutionary Algorithm (SPEA). Compared with SPEA, SPEA2 employs a fine-grained fitness assignment strategy that incorporates density information. A fixed archive size is adopted; that is, whenever the number of nondominated individuals is less than the predefined archive size, the archive is filled up with dominated individuals. Moreover, an alternative truncation method replaces the clustering technique of the original SPEA without losing boundary points, which guarantees the preservation of boundary solutions. Finally, in SPEA2 only members of the archive participate in the mating selection process. The procedure of SPEA2 is as follows.

*SPEA2 Algorithm*. *Input*: $N_e$: population size; $N$: archive size; $T$: maximum number of generations. *Output*: NDS: nondominated set.

*Step 1* (initialization). Generate an initial population $P_0$ and create an empty (external) archive $A_0 = \emptyset$. Set $t = 0$.

*Step 2* (fitness assignment). Calculate the fitness values of the individuals in $P_t$ and $A_t$.

*Step 3* (environmental selection). Copy all nondominated individuals in $P_t$ and $A_t$ to $A_{t+1}$. If the size of $A_{t+1}$ exceeds $N$, reduce $A_{t+1}$ by means of the truncation operator; otherwise, if the size of $A_{t+1}$ is less than $N$, fill $A_{t+1}$ with dominated individuals from $P_t$ and $A_t$.

*Step 4* (termination). If $t \ge T$ is satisfied, then stop and output NDS. Otherwise, continue.

*Step 5* (mating selection). Perform binary tournament selection with replacement on $A_{t+1}$ in order to fill the mating pool. The size of the mating pool is $N_e$.

*Step 6* (reproduction). Apply recombination and mutation operators to the mating pool and set $P_{t+1}$ to the resulting population. Set $t = t + 1$; go to Step 2.
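The environmental selection of Step 3 is the most distinctive part of the loop, so the sketch below focuses on it. The dominator-count rank and the pairwise-distance truncation are simplified stand-ins for SPEA2's strength-based fitness and k-th-nearest-neighbor truncation (assumptions for illustration, not the exact operators):

```python
def environmental_selection(union, N):
    """Step 3, sketched for minimization over objective-vector tuples.
    union is P_t + A_t; the result is the new archive A_{t+1} of size
    at most N."""
    def dom(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))

    # Simplified rank: number of individuals dominating u (0 = nondominated).
    rank = {u: sum(dom(v, u) for v in union if v is not u) for u in union}
    nondom = [u for u in union if rank[u] == 0]

    if len(nondom) >= N:
        # Truncate: repeatedly drop the individual closest to its nearest
        # neighbor (a crude crowding heuristic preserving spread).
        archive = list(nondom)
        while len(archive) > N:
            def nearest(u):
                return min(sum((a - b) ** 2 for a, b in zip(u, v))
                           for v in archive if v is not u)
            archive.remove(min(archive, key=nearest))
        return archive

    # Fill: take the least-dominated individuals until the archive is full.
    dominated = sorted((u for u in union if rank[u] > 0),
                       key=lambda u: rank[u])
    return nondom + dominated[:N - len(nondom)]
```

Both branches of Step 3 are exercised: with more nondominated points than slots the archive is truncated, and with fewer it is padded with the best dominated individuals.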

##### 3.2. The Evolutionary Operators Used in the AOSPEA

Due to the fixed evolutionary operator in the SPEA2 algorithm, it easily gets trapped in local optima. A single operator can hardly serve the whole evolutionary process well, and different operators should be employed at different stages according to their contribution. Therefore, three different evolutionary operators, the DE operator [43], the simulated binary crossover (SBX) operator [44], and the PM operator [45], are employed to improve the performance of SPEA2. The operators are described as follows.

*(1) DE Operator*. Differential evolution (DE) comprises three processes: mutation, crossover, and selection. DE has good global search ability [46] and makes use of the differences between randomly selected vectors (individuals) as the source of evolutionary dynamics. Besides, by adding weighted difference vectors to the target vector, DE can control the evolutionary variation in a way similar to the jump concept in neighborhood search. Therefore, DE operators are adopted in SPEA2; they can efficiently improve the convergence and exploration ability of SPEA2. The procedure of the DE operator is as follows.

*DE Operator*. *Input*: $N_e$: population size; population $P$. *Output*: a new individual $u$.

*Step 1*. Randomly select three different individuals $x_{r_1}$, $x_{r_2}$, and $x_{r_3}$ from $P$ such that they do not dominate each other.

*Step 2* (mutation operator). Produce the mutant individual $v = (v_1, \dots, v_n)$, where $n$ is the number of dimensions, with (4):

$$v_j = x_{r_1, j} + F \cdot (x_{r_2, j} - x_{r_3, j}), \quad j = 1, \dots, n, \tag{4}$$

where $F$ is the scale factor.

*Step 3* (crossover operator). Produce the new individual $u = (u_1, \dots, u_n)$ with (6):

$$u_j = \begin{cases} v_j, & \text{if } \mathrm{rand}_j \le \mathrm{CR} \text{ or } j = j_{\mathrm{rand}}, \\ x_j, & \text{otherwise}, \end{cases} \tag{6}$$

where CR is a crossover rate, $\mathrm{rand}_j$ is a uniformly distributed random number between 0 and 1, $j_{\mathrm{rand}}$ is an index randomly selected from $\{1, \dots, n\}$, and $x_j$ denotes the $j$th component of the target individual.
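Steps 2 and 3 follow the classical DE/rand/1/bin pattern; a minimal sketch (bounds handling and the mutual-nondominance check of Step 1 are omitted, and treating the fourth argument as the crossover target is an assumption):

```python
import random

def de_trial(x_r1, x_r2, x_r3, x_target, F=0.5, CR=0.9):
    """DE/rand/1/bin sketch: mutate with the scaled difference of two
    randomly chosen individuals (Step 2), then binomially cross the
    mutant with the target vector (Step 3). j_rand guarantees that at
    least one component is inherited from the mutant."""
    n = len(x_target)
    v = [x_r1[j] + F * (x_r2[j] - x_r3[j]) for j in range(n)]   # mutation (4)
    j_rand = random.randrange(n)
    u = [v[j] if random.random() <= CR or j == j_rand else x_target[j]
         for j in range(n)]                                      # crossover (6)
    return u
```

With CR = 1 the trial vector equals the mutant; with CR close to 0 it differs from the target in only one (guaranteed) component, which controls how aggressively the operator perturbs existing solutions.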

The DE operator employs the relative positions of nondominated solutions to produce the evolutionary direction toward the ideal Pareto front and a new search space. Figure 1 illustrates the principle of the DE operator, where $u$ is the offspring individual. It can be seen from Figure 1(a) that the DE operator employs the relative positions of nondominated individuals in the neighborhood to produce offspring close to the ideal Pareto front. Figure 1(b) shows that the DE operator can obtain more widely spread offspring.