Mathematical Problems in Engineering

Volume 2016 (2016), Article ID 6761545, 15 pages

http://dx.doi.org/10.1155/2016/6761545

## A Decomposition-Based Unified Evolutionary Algorithm for Many-Objective Problems Using Particle Swarm Optimization

School of Electronics and Information, Tongji University, Shanghai 201804, China

Received 2 June 2016; Revised 24 October 2016; Accepted 26 October 2016

Academic Editor: Giuseppe Vairo

Copyright © 2016 Anqi Pan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Evolutionary algorithms have proved to be efficient approaches for finding optimal solutions of multiobjective optimization problems with three or fewer objectives. However, their search performance degrades in high-dimensional objective spaces. In this paper we propose an algorithm for many-objective optimization with particle swarm optimization as the underlying metaheuristic technique. In the proposed algorithm, the objectives are decomposed and reconstructed using a discrete decoupling strategy, and the subgroup procedures are integrated into a unified coevolution strategy. The proposed algorithm consists of inner and outer evolutionary processes, together with an adaptive unification factor, to maintain convergence and diversity. At the same time, a repository reconstruction strategy and improved leader selection techniques for MOPSO are introduced. The comparative experimental results show that the proposed UMOPSO-D outperforms six other algorithms, including four commonly used algorithms and two recently proposed decomposition-based algorithms, especially on high-dimensional targets.

#### 1. Introduction

Many-objective optimization problems (MaOPs) refer to optimization problems that involve a large number of objectives. Generally, when the number of objectives in a multiobjective optimization problem (MOP) reaches four, the problem is classified as a MaOP.

For MOPs, there exist many evolutionary algorithms that can be successfully applied to optimization problems with two or three objectives. The commonly used evolutionary algorithms such as NSGA-II, SPEA2, MOPSO, and their improved or hybrid variants are devoted to finding solutions uniformly distributed along the Pareto front (PF) while maintaining group diversity during the evolution [1–4]. However, none of these classical methods performs well on many-objective problems, because under Pareto dominance the individuals become mutually nondominated at early generations [5]. The selection pressure supplied by Pareto dominance thus collapses as the number of objectives increases, and the rising computational cost of searching a high-dimensional space results in long search times, poor convergence, and an incomplete solution set.

Generally, approaches for evolutionary many-objective optimization (EMO) follow three directions to face the high-dimension challenge. The first is discovering more efficient dominance relationships based on multicriteria decision-making, such as narrowing the search area by inputting a user-specified reference point [6] or modifying the selection pressure by softening the standard of dominance. The loose dominance strategy in this first direction has a wide range of applications, such as ε-dominance [7], which has the advantage of reducing computational complexity; the improved algorithm KS-MODE [8] then addressed the circular dominance problem of ε-dominance by proposing a novel energy function. Other methods focus on dominance criteria, such as the dynamic nondominance method [9], dominance area control [10], the -dominated method [11], the indicator-based method [12], deductive sort [13], and corner sort [14], which have further advanced this direction.

The second and most recently discussed strategy for MaOPs is the decomposition-based method. One common way of decomposition is generating subproblems mapped onto several scalars, transforming the multiple objectives into a single objective along a directional vector. As the fundamental algorithm, MOEA/D [15], proposed by Zhang and Li, provided a simple and efficient decomposition using a weight vector approach. More recently, several modifications of the standard MOEA/D have improved its performance. MOEA/DD [16] combines the dominance-based approach with decomposition and makes improvements in weight vectors, niche strategy, and hybrid rules. In MOEA/D-UDM [17], a uniform decomposition measurement was introduced to obtain uniform weight vectors, and a modified Tchebycheff decomposition approach was proposed to alleviate the inconsistency between the weight vector and the optimal direction. Reference [18] proposed a uniform evolutionary algorithm based on space decomposition and control of the dominance area of solutions. Another popular way of decomposition is dividing the objectives into several subgroups and then executing a coevolutionary method to combine all the elite individuals. After choosing several objectives, a local search technique may be adopted to move solutions closer to the true Pareto front and produce a better distribution on the approximate front. Following this idea, Gong et al. obtained subgroups using Spearman's rank correlation and generated a constructed objective by aggregating all the other objectives in each group [19]. Bandyopadhyay and Mukherjee selected conflicting objectives for further processing [5]. Reference [20] proposed an entropy-based objective reduction technique to find the most conflicting objectives.
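As a concrete illustration of scalar-based decomposition, the following sketch (our own, not taken from the cited papers) computes the standard Tchebycheff scalarization g(x | w, z*) = max_i w_i · |f_i(x) − z*_i|, which turns one weight vector into one single-objective subproblem; all data values here are hypothetical:

```python
import numpy as np

def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
    return np.max(weights * np.abs(f - z_star))

# Hypothetical data: ideal point z*, one weight vector, two candidate objective vectors.
z_star = np.array([0.0, 0.0])
w = np.array([0.5, 0.5])
f_a = np.array([0.2, 0.8])
f_b = np.array([0.5, 0.5])
print(tchebycheff(f_a, w, z_star))  # 0.4  -> max(0.1, 0.4)
print(tchebycheff(f_b, w, z_star))  # 0.25 -> f_b is better for this subproblem
```

A solution minimizing this scalar value for a given weight vector approximates one point on the Pareto front; varying the weight vectors yields the whole front.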

The other well-known direction works to keep the elite individuals by improving the diversity management mechanism and the congestion strategy. Elite preservation can also be found in popular evolutionary algorithms such as NSGA-II and SPEA2. In this direction, Chen et al. developed the idea of congestion control via relative positions and increased the evolutionary rate in sparse regions [21]. Reference [22] divided the repository into two parts, a convergence archive and a diversity archive, and pursued the two goals separately. Reference [16] introduced a procedure to find the worst solution, which belongs to the worst nondominance level. Reference [23] compared migration models for multiobjective optimization.

Particle swarm optimization (PSO) is a well-known metaheuristic optimization technique inspired by the social behavior of birds looking for food. Its advantages of fast convergence and high diversity are fully demonstrated on MOPs. MOPSO, proposed by Coello et al., is the most popular PSO-based MOP method; it introduced a secondary repository of particles to guide the individuals, while using adaptive hypercubes for archive maintenance. Based on this, Tripathi et al. introduced adaptive weight and acceleration parameters to improve efficiency [24], Hu et al. proposed a parallel cell coordinate system to assess the environment and dynamically adjust the evolution [25], and Peng and Zhang combined MOEA/D with MOPSO using the idea of decomposition [26]. Despite these many attempts at improvement, MOPSO still suffers from premature convergence and an uncertain learning pace.

In this paper, a novel strategy, named decomposition-based unified evolutionary algorithm using particle swarm optimization (UMOPSO-D), is proposed for many-objective problems. UMOPSO-D optimizes the many objectives through decomposition and coevolution. Benefiting from the objective decomposition, the MaOP is transformed into several MOP subproblems with three or fewer objectives, where the PSO method performs well (shown in Section 4). The coevolution between subproblems operates through a uniform PSO framework, and the learning strategies constitute a double-loop system; the inner learning system focuses mainly on convergence, while the outer system emphasizes diversity. Strategies for decoupling, personal and archive best selection, repository maintenance, recombination, and coevolution are designed to support this algorithm. Furthermore, two decoupling methods, relevant decoupling and conflict decoupling, are introduced in the decomposition, and their contributions are compared. Compared with existing decomposition approaches, the main contributions of this work are summarized as follows:

1. A novel framework for MaOP algorithms is proposed, integrated with the idea of unified PSO. An adaptive unification factor is designed to dynamically balance convergence and diversity according to the incremental entropy, which stands for swarm stability. The entropy is calculated through a hypercube strategy.
2. Two grouping strategies, relevant coupling and conflict coupling, are introduced and compared. The simulation results illustrate the different advantages of the two techniques.
3. The objective decomposition raises problems in population regrouping and repository management. In UMOPSO-D, the population regrouping is based on the leading objectives assigned to each particle. The repository management strategy includes two aspects, applied to the local and global parts, respectively.
4. As a PSO-based algorithm, best selection is essential. In this paper, different strategies are assigned for the personal best, local best, and global best, respectively.

The remainder of this paper is organized as follows. Section 2 lists definitions of the terms used in MaOP, followed by descriptions of related techniques and backgrounds in Section 3. Then the structures and formulations of standard PSO and unified PSO are described in Section 4. The proposed algorithm is detailed in Section 5. Comparative simulations and experimental results among seven MOEA algorithms are provided in Section 6. Finally, conclusions are drawn in Section 7.

#### 2. Definitions

*Many-Objective Optimization Problems, MaOPs.* A many-objective optimization problem originates from the multiobjective optimization problem (MOP) and has more than three objectives. Like an MOP, a MaOP containing $m$ objective functions and $n$ decision variables can be described by the following expression; since the objectives restrict or conflict with one another, no single solution can optimize all objectives simultaneously:

$$\min_{x \in \Omega} F(x) = \big(f_1(x), f_2(x), \ldots, f_m(x)\big).$$

Here $\Omega$ is the decision space, $x \in \Omega$ is a candidate solution, and $F : \Omega \to \mathbb{R}^m$ maps into the objective space, which is constituted by the $m$ objective functions.

*Dominance.* We say a vector $u$ dominates $v$ (denoted as $u \prec v$) if and only if $u$ is partially less than $v$; that is, $\forall i \in \{1, \ldots, m\}$, $u_i \le v_i$, and $\exists j \in \{1, \ldots, m\}$, $u_j < v_j$. The same relationship applies in many-objective optimization problems, where $m > 3$.

*Incomparable.* $u$ and $v$ are incomparable if neither $u$ dominates $v$ nor $v$ dominates $u$, denoted as $u \parallel v$.

*Pareto-Optimal Set.* The Pareto-optimal set is defined as $PS = \{x \in \Omega \mid \neg\exists\, x' \in \Omega : F(x') \prec F(x)\}$.

*Pareto Front, PF.* The set of objective vectors of all Pareto-optimal solutions is called the Pareto front; $PF = \{F(x) \mid x \in PS\}$.
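The dominance and incomparability relations above translate directly into code. This minimal sketch assumes minimization, as in the definitions:

```python
def dominates(u, v):
    """True if u Pareto-dominates v under minimization:
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def incomparable(u, v):
    """Neither vector dominates the other."""
    return not dominates(u, v) and not dominates(v, u)

print(dominates((1, 2), (2, 3)))     # True
print(dominates((1, 3), (2, 2)))     # False: worse in the second objective
print(incomparable((1, 3), (2, 2)))  # True
```

Note that as the number of objectives $m$ grows, randomly drawn vectors become incomparable with high probability, which is exactly the loss of selection pressure discussed in Section 1.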

#### 3. Related Backgrounds

To the best of our knowledge, decomposition-based methods can be divided into two levels. The first level, named scalar-based decomposition, changes the search space through several scalars and approaches the optima in the scalar space. The second level, named objective-based decomposition, divides the objectives into subsets following a specified criterion, changing a many-objective problem into a collection of multiobjective problems. The main challenges at both levels lie in the grouping criterion and the coevolution strategy. In this section, we briefly review current methods addressing these two challenges, along with other MOP techniques in the state-of-the-art literature, as the background of this paper.

*Grouping Criterion.* In scalar-based decomposition, Liu randomly selected unit vectors to divide the space into subregions, while Li defined each weight vector as a unique subregion of the scalar space.

In objective-based decomposition, Gong et al. decomposed the original objectives into several subgroups and added an aggregating function in each group [19], which provides an elegant way to communicate between subgroups, although the aggregating functions bring additional objectives and complexity. The grouping criterion in Gong et al.'s paper is that objectives with high correlation are selected into one group. On the contrary, Bandyopadhyay and Mukherjee selected a subset of conflicting objectives and recomputed the subset at each iteration; at the end of every iteration, all the objectives were recombined and differential evolution was executed [5]. The methods mentioned above use Pearson's correlation coefficient and Spearman's rank correlation coefficient, respectively, as the selection criterion to evaluate conflict and relevance, which is the key to generating subgroups.
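As a sketch of how such a correlation-based criterion can be computed (the helper name and test data are our own illustrations, not taken from [5, 19]), the pairwise Pearson coefficients between objective columns can be obtained with NumPy; strongly positive entries suggest related objectives, while strongly negative entries suggest conflicting ones:

```python
import numpy as np

def objective_correlations(F):
    """Pairwise Pearson correlations between the objective columns of F,
    where F has one row per solution and one column per objective."""
    return np.corrcoef(F, rowvar=False)

# Hypothetical population of 50 solutions on 3 objectives:
# f2 tracks f1 (related), while f3 moves against f1 (conflicting).
f1 = np.linspace(0.0, 1.0, 50)
F = np.column_stack([f1, 2.0 * f1 + 0.1, 1.0 - f1])
C = objective_correlations(F)
print(np.round(C, 2))  # C[0, 1] = +1.0, C[0, 2] = -1.0
```

A grouping rule can then threshold this matrix: place highly correlated objectives in one subgroup (relevant coupling) or gather the most negatively correlated ones for separate treatment (conflict coupling).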

*Coevolution.* Many researchers have explored combining coevolution with MOEAs by dividing subgroups, adding local search processes, and other methods. Because MaOPs involve many conflicting objectives, evolving the whole population in a single global search incurs high computational complexity and a large number of nondominated candidates. As a result, coevolution can provide good performance on many objectives. The specific cooperative process among the subparts varies across strategies and can be executed every generation or after certain assessments. Coevolution between the subparts takes many different forms, such as cooperative, competitive, and distributed approaches, which are fully described in [27]; moreover, coevolution processes can operate on parameters, elitist recombination, algorithm selection, and so on.

*Archive Management.* In multiobjective algorithms, archive strategies play an important role, filtering out lower-quality solutions to release space for higher-quality nondominated solutions. Although several papers propose unconstrained archive sizes [28], as the number of objectives increases, the individuals become mutually nondominated at earlier generations, producing a volume of data that may be beyond the scope of physical implementation.

Density is the most commonly used quality measure for archive management. It can be based on several criteria, such as the following: the kernel approach, based on the sum of values from a distance function in either genotypic or phenotypic space [29]; the nearest-neighbor approach, defined by the hyperrectangle formed by an individual's nearest neighbors [1, 2]; and the histogram approach, based on the number of solutions that lie in the same hypercube [30].
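A minimal sketch of the histogram approach (the function name and grid resolution are our own choices, not from [30]): each objective vector is mapped to an integer grid cell, and the density of a solution is the number of archive members sharing its hypercube:

```python
from collections import Counter

import numpy as np

def hypercube_density(F, bins=5):
    """Histogram-style density: for each solution, count how many archive
    members (rows of F) fall into the same hypercube of objective space."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    # Map each objective vector to an integer grid-cell index per dimension.
    cells = np.floor((F - lo) / (hi - lo + 1e-12) * bins).astype(int)
    cells = np.clip(cells, 0, bins - 1)
    keys = [tuple(c) for c in cells]
    counts = Counter(keys)
    return np.array([counts[k] for k in keys])

F = np.array([[0.0, 0.0], [0.01, 0.01], [1.0, 1.0]])
print(hypercube_density(F, bins=5))  # [2 2 1]: the first two share a hypercube
```

Solutions in crowded cells are then the first candidates for removal when the archive overflows.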

A clustering mechanism is another proposal for maintaining an archive. When the archive population reaches its size limit, a clustering operation distributes the external nondominated solutions into several subsets and chooses a representative from each subset.
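The clustering mechanism can be sketched as follows, using plain k-means as a stand-in for whatever clustering operator a given algorithm adopts (the function and all of its parameters are illustrative assumptions, not a published design):

```python
import numpy as np

def truncate_by_clustering(F, k, iters=20, seed=0):
    """Archive truncation sketch: cluster the nondominated objective vectors
    into at most k groups (plain k-means) and keep, from each nonempty group,
    the member closest to its centroid."""
    rng = np.random.default_rng(seed)
    centers = F[rng.choice(len(F), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign every point to its nearest centroid, then recompute centroids.
        labels = np.argmin(((F[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = F[labels == j].mean(axis=0)
    keep = []
    for j in range(k):
        members = np.where(labels == j)[0]
        if len(members):
            d = ((F[members] - centers[j]) ** 2).sum(-1)
            keep.append(int(members[np.argmin(d)]))
    return sorted(keep)

# Three tight pairs of points: truncation should keep roughly one index per pair.
F = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0],
              [5.1, 5.0], [10.0, 0.0], [10.0, 0.1]])
print(truncate_by_clustering(F, k=3))
```

The retained representatives preserve the spread of the archive while bounding its size.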

#### 4. Unified Many-Objective Particle Swarm Optimization

Particle swarm optimization (PSO) is an evolutionary computation technique that originated from the study of bird foraging behavior. Similar to the genetic algorithm, PSO approaches the optimum of the solution space through constant learning and iteration. Each solution is seen as an idealized particle with neither mass nor volume, which updates its velocity and position in each iteration, gradually approaching the optimum of the fitness function. In a $D$-dimensional search space, let the velocity and position of particle $i$ be $v_i = (v_{i1}, \ldots, v_{iD})$ and $x_i = (x_{i1}, \ldots, x_{iD})$. Each iteration records particle $i$'s best position so far ($pbest_i$) and the best position any particle has ever reached ($gbest$). The velocity and position are updated by

$$v_i(t+1) = \omega\, v_i(t) + c_1 r_1 \big(pbest_i - x_i(t)\big) + c_2 r_2 \big(gbest - x_i(t)\big),$$
$$x_i(t+1) = x_i(t) + v_i(t+1),$$

where the inertia weight $\omega$ and the acceleration coefficients $c_1$ and $c_2$ must be defined during the computation, and $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$. The basic PSO, defined as the global scheme, has the advantages of convenient information exchange and fast convergence, and algorithms such as [31, 32] attempt to overcome PSO's premature convergence and improve its diversity.
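The update rules above can be sketched as a complete single-objective PSO loop; the parameter values (inertia 0.7, acceleration coefficients 1.5) are common choices rather than values prescribed by this paper, and the sphere function is used as a toy objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: velocities are pulled toward each particle's
    personal best and the swarm's global best, then positions advance."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

# Toy run: minimize the sphere function f(x) = sum(x_d^2) with 20 particles in 2-D.
f = lambda x: np.sum(x * x, axis=1)
x = rng.uniform(-5.0, 5.0, size=(20, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), f(x)
for _ in range(100):
    gbest = pbest[np.argmin(pbest_f)]   # global-scheme leader
    x, v = pso_step(x, v, pbest, gbest)
    fx = f(x)
    improved = fx < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], fx[improved]
print(pbest_f.min())  # a small value near 0
```

In the multiobjective setting, this single `gbest` is what the MOPSO repository and leader selection strategies replace, since no unique global best exists among nondominated solutions.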

However, these positive features usually bring a rapid decline in diversity and make the algorithm greedy, leading to premature convergence in more complicated algorithms such as MOPSO. As the number of objectives increases, the deterioration becomes more serious: the particles become confused about which leader to follow, resulting in frequent fluctuation. Moreover, since the archive is maintained by crowding distance, an excess of nondominated solutions can cause some excellent solutions to be deleted, setting back the evolution. The convergence and diversity metric curves of MOPSO runs with 3, 4, and 5 objectives are shown in Figure 1, which confirms that performance decreases when the number of objectives exceeds 3 (the metrics are detailed in Section 6). As expected, many improved MOPSOs invest effort in diversity maintenance through flexible mechanisms, adaptive parameters, and so on. Another scheme, named neighborhood topology, divides the particles into several subsets for searching local optima and increasing diversity, giving PSO a wider and more elaborate search ability. Combining the global and neighborhood schemes yields the unified particle swarm optimization.