Mathematical Problems in Engineering
Volume 2016, Article ID 6761545, 15 pages
http://dx.doi.org/10.1155/2016/6761545
Research Article

A Decomposition-Based Unified Evolutionary Algorithm for Many-Objective Problems Using Particle Swarm Optimization

School of Electronics and Information, Tongji University, Shanghai 201804, China

Received 2 June 2016; Revised 24 October 2016; Accepted 26 October 2016

Academic Editor: Giuseppe Vairo

Copyright © 2016 Anqi Pan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Evolutionary algorithms have proved to be efficient approaches for pursuing optimal solutions of multiobjective optimization problems with three or fewer objectives. However, their search performance degenerates on high-dimensional objective optimization. In this paper we propose an algorithm for many-objective optimization with particle swarm optimization as the underlying metaheuristic technique. In the proposed algorithm, the objectives are decomposed and reconstructed using a discrete decoupling strategy, and the subgroup procedures are integrated into a unified coevolution strategy. The proposed algorithm consists of inner and outer evolutionary processes, together with an adaptive unification factor, to maintain convergence and diversity. At the same time, a designed repository reconstruction strategy and improved leader selection techniques for MOPSO are introduced. The comparative experimental results show that the proposed UMOPSO-D outperforms the other six algorithms, including four commonly used algorithms and two newly proposed decomposition-based algorithms, especially on high-dimensional objectives.

1. Introduction

Many-objective optimization problems (MaOPs) refer to optimization problems that involve a large number of objectives. Generally, when the number of objectives in a multiobjective optimization problem (MOP) reaches four, the problem is classified as a MaOP.

For MOPs, many evolutionary algorithms have been successfully applied to optimization problems with two or three objectives. Commonly used evolutionary algorithms such as NSGA-II, SPEA2, and MOPSO, along with their improved or hybrid variants, are devoted to finding solutions uniformly distributed along the Pareto front (PF) while maintaining group diversity during the evolution [1–4]. However, none of these classical methods performs well on many-objective problems, because the Pareto-dominance process makes the individuals mutually nondominated at early generations [5]. A conflict arises between the growing number of objectives and the selection pressure available under the Pareto-dominance strategy, and the rising computational complexity of searching a high-dimensional space results in long search procedures, poor convergence, and incomplete solution sets.

Generally, approaches to evolutionary many-objective optimization (EMO) proceed along three directions to address the challenges of high dimensionality. The first is discovering a more efficient dominance relationship based on multicriteria decision-making, such as narrowing the search area by inputting a user-specified reference point [6] or modifying the selection pressure by softening the standard of dominance. The loose dominance strategy in this direction has a wide range of applications, such as ε-dominance [7], which has the advantage of reducing the computational complexity; the improved algorithm KS-MODE [8] then resolved the cyclic dominance that arises under relaxed dominance by proposing a novel energy function. Other methods focus on the dominance criterion itself, such as the dynamic nondominance method [9], dominance area control [10], a relaxed-dominance method [11], the indicator-based method [12], deductive sort [13], and corner sort [14], which have further advanced this direction.

The second, and most recently discussed, strategy for MaOPs is the decomposition-based method. One common way of decomposition is generating subproblems mapped onto several scalars and transforming the multiple objectives into a single objective with a directional vector. As the fundamental algorithm, MOEA/D [15], proposed by Zhang and Li, provided a simple and efficient decomposition approach using weight vectors. More recently, several modifications to the standard MOEA/D have improved its performance. MOEA/DD [16] combines the dominance-based approach with decomposition and makes improvements in weight vectors, niche strategy, and hybrid rules. In MOEA/D-UDM [17], a uniform decomposition measurement was introduced to obtain uniform weight vectors, and a modified Tchebycheff decomposition approach was proposed to alleviate the inconsistency between the weight vector and the optimal direction. Reference [18] proposed a uniform evolutionary algorithm based on space decomposition and control of the dominance area of solutions. Another popular way of decomposition is dividing the objectives into several subgroups and then executing a coevolutionary method to combine all the elite individuals. The chosen objectives may adopt a local search technique to move solutions closer to the true Pareto front and generate a better distribution on the approximate front. Following this idea, Gong et al. obtained subgroups using Spearman's rank correlation and generated a constructed objective by aggregating all the other objectives in every group [19]. Bandyopadhyay and Mukherjee selected conflicting objectives for further processing [5]. Reference [20] proposed an entropy-based objective reduction technique to find the most conflicting objectives.

The other well-known direction strives to keep the elite individuals by improving the diversity management mechanism and the congestion strategy. Elite preservation can also be found in popular evolutionary algorithms such as NSGA-II and SPEA2. In this direction, Chen et al. developed the idea of congestion control by relative positions and increased the evolutionary rate in sparse regions [21]. Reference [22] divided the repository into two parts, a convergence archive and a diversity archive, and pursued the two goals separately. Reference [16] introduced a procedure to find the worst solution belonging to the worst nondominance level. Reference [23] compared migration models for multiobjective optimization.

Particle swarm optimization (PSO) is a well-known metaheuristic optimization technique inspired by the social behavior of birds looking for food. Its advantages of fast convergence and high diversity are fully demonstrated on MOPs. MOPSO, proposed by Coello et al., is the most popular PSO-based MOP method; it introduced a secondary repository of particles to guide the individuals, while using adaptive hypercubes for archive maintenance. On this basis, Tripathi et al. introduced adaptive weight and acceleration parameters to improve efficiency [24], Hu et al. proposed a parallel cell coordinate system to assess the environment and dynamically adjust the evolutionary strategy [25], and Peng and Zhang combined MOEA/D with MOPSO following the idea of decomposition [26]. Despite the many attempts to improve MOPSO, it still suffers from premature convergence and an uncertain learning pace.

In this paper, a novel strategy, named decomposition-based unified evolutionary algorithm using particle swarm optimization (UMOPSO-D), is proposed for many-objective problems. UMOPSO-D optimizes the many objectives through decomposition and coevolution. Benefiting from the objective decomposition, the MaOP is transformed into several MOP subproblems with three or fewer objectives, on which the PSO method has advantages (shown in Section 4). The coevolution between subproblems operates through a uniform PSO framework, and the learning strategies constitute a double-loop system: the inner learning system focuses mainly on convergence, while the outer system emphasizes diversity. Strategies for decoupling, personal and archive best selection, repository maintenance, recombination, and coevolution are designed to support this algorithm. Furthermore, two decoupling methods, relevant decoupling and conflict decoupling, are introduced in the decomposition, and their contributions are compared. Compared with existing decomposition approaches, the main contributions of this work are summarized as follows:

(1) A novel framework for MaOP algorithms is proposed, integrated with the idea of unified PSO. An adaptive unification factor is designed to dynamically balance convergence and diversity according to an incremental entropy which stands for swarm stability. The entropy is calculated through a hypercube strategy.

(2) Two grouping strategies, relevant coupling and conflict coupling, are introduced and compared. The simulation results illustrate the different advantages of the two techniques.

(3) The objective decomposition raises problems in population regrouping and repository management. In UMOPSO-D, the population regrouping is based on the leading objectives assigned to each particle. The repository management strategy includes two aspects, applied to the local and global parts, respectively.

(4) As a PSO-based algorithm, best selection is essential. In this paper, different strategies are assigned for the personal best, local best, and global best, respectively.

The remainder of this paper is organized as follows. Section 2 lists several definitions of the mentioned terms in MaOP, followed by the descriptions of related techniques and backgrounds in Section 3. Then the structures and formulations of standard PSO and unified PSO are described in Section 4. The proposed algorithm is detailed in Section 5. The comparative simulations and experimental results among seven MOEA algorithms are provided in Section 6. Finally, the conclusions are in Section 7.

2. Definitions

Many-Objective Optimization Problems, MaOPs. Many-objective optimization problems originate from multiobjective optimization problems (MOPs) and have more than three objectives. Like an MOP, a MaOP containing M objective functions and n decision variables can be described by the following expression. Since the objectives restrict or conflict with one another, there is no feasible way to combine any of them:

minimize F(x) = (f_1(x), f_2(x), …, f_M(x)), subject to x ∈ Ω.

Here Ω ⊆ R^n is the decision space, x = (x_1, …, x_n) is a candidate solution, and R^M is the objective space, which is constituted by the M objective functions.

Dominance. We say a vector u dominates v (denoted as u ≺ v) if and only if u is partially less than v; that is, ∀i ∈ {1, …, m}: u_i ≤ v_i, and ∃j ∈ {1, …, m}: u_j < v_j. Similarly, in a many-objective optimization problem, the dominance relationship exists if and only if these conditions hold over all M objectives.
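The dominance test above can be sketched in a few lines (a minimal illustration for minimization, not code from the paper):

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))
```

Two vectors are incomparable exactly when `dominates(u, v)` and `dominates(v, u)` are both false.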

Incomparable. Neither u dominates v nor v dominates u, denoted as u ∥ v.

Pareto-Optimal Set. The Pareto-optimal set is defined as P* = {x ∈ Ω | ¬∃ x′ ∈ Ω : F(x′) ≺ F(x)}.

Pareto Front, PF. The set of all Pareto-optimal objective vectors is called the Pareto front; PF* = {F(x) | x ∈ P*}.

3. Related Backgrounds

To the best of our knowledge, decomposition-based methods can be divided into two levels. The first level, named scalar-based decomposition, changes the search space through several scalars and approaches the optimum in the scalar space. The second level, named objective-based decomposition, divides the objectives into subsets following a specified criterion and turns a many-objective problem into a set of multiobjective problems. The main challenges at both levels lie in the grouping criterion and the coevolution strategy. In this section, we briefly review current methods addressing these two challenges and other MOP techniques in the state-of-the-art literature as background for this paper.

Grouping Criterion. In scalar-based decomposition, Liu randomly selected unit vectors to divide the objective space into subregions, while Li defined each weight vector as a unique subregion in the scalar space.

In objective-based decomposition, Gong et al. decomposed the original objectives into several subgroups and added an aggregating function to each group [19], which presents a smart way to communicate between subgroups, although the aggregating functions bring additional objectives and complexity. The grouping criterion in Gong et al.'s paper is that objectives with high relativity are selected into one group. On the contrary, Bandyopadhyay and Mukherjee selected a subset of conflicting objectives and recomputed the subset at each iteration; at the end of every iteration, all the objectives are recombined and differential evolution is executed [5]. The methods mentioned above use Pearson's correlation coefficient and Spearman's rank correlation coefficient, respectively, as the selection criterion to evaluate conflict and relevance, which is the key to generating subgroups.

Coevolution. Many researchers have explored the idea of combining coevolution with MOEAs, by dividing subgroups, adding local search processes, and other methods. Because MaOPs involve many conflicting objectives, whole-population (global) search causes high computational complexity and a large number of nondominated candidates. As a result, coevolution can provide good performance on many objectives. The specific cooperative process among subparts varies across MOEA strategies and can be executed in every generation or after certain assessments. Coevolution between the subparts takes many different forms, such as cooperative, competitive, and distributed approaches, which are fully described in [27]; moreover, coevolution processes can operate on parameters, elitist recombination, algorithm selection, and so on.

Archive Management. In multiobjective algorithms, archive strategies play an important role: they filter out lower-quality solutions to release space for higher-quality nondominated ones. Although several papers have proposed the idea of an unconstrained archive size [28], as the number of objectives increases, the individuals become mutually nondominated at earlier generations, producing a volume of data that may exceed practical storage limits.

Density is the most commonly used criterion for archive management. It can be based on several approaches, such as the following: the kernel approach, based on the sum of values from a distance function in either genotypic or phenotypic space [29]; the nearest neighbor approach, defined by the hyperrectangle values of the individuals' nearest neighbors [1, 2]; and the histogram approach, based on the number of solutions that lie in the same hypercubes [30].

Clustering is another mechanism for maintaining an archive. When the population in the archive reaches its size limit, a clustering operation distributes the external nondominated solutions into several subsets and chooses a representative from each subset.

4. Unified Many-Objective Particle Swarm Optimization

Particle swarm optimization (PSO) is an evolutionary computation technique that originated from the study of bird predatory behavior; similar to the genetic algorithm, PSO approaches the optimum of the solution space through constant learning and iteration. Each solution is viewed as an idealized particle without mass or volume, which updates its velocity and position in each iteration and gradually approaches the optimum of the fitness function. In a D-dimensional search space, let the velocity and position of particle i be v_i and x_i; in each iteration, record particle i's best position so far (pbest_i) and the best position any particle has reached (gbest). The velocity and position are updated by

v_i(t + 1) = ω·v_i(t) + c_1·r_1·(pbest_i − x_i(t)) + c_2·r_2·(gbest − x_i(t)),
x_i(t + 1) = x_i(t) + v_i(t + 1),

where the inertia weight ω and the acceleration coefficients c_1 and c_2 should be defined during the computation, and r_1, r_2 are random numbers in [0, 1]. The basic PSO, defined as the global scheme, has the advantages of convenient information exchange and fast convergence speed. Algorithms such as [31, 32] attempt to overcome PSO's premature convergence and improve its diversity.
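As a hedged sketch of the update rules above (standard global-best PSO; the parameter values are illustrative defaults, not those used in the paper):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle.
    x, v, pbest, gbest are lists of floats of equal length."""
    new_v = [w * vi
             + c1 * random.random() * (pi - xi)
             + c2 * random.random() * (gi - xi)
             for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```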

However, these positive features usually bring a rapid decline of diversity and make the algorithm greedy, leading to premature convergence in more complex algorithms such as MOPSO. As the number of objectives increases, the deterioration becomes more serious, and the particles become confused about which leader to follow, resulting in frequent fluctuation. Moreover, given archive maintenance based on crowding distance, with an excess of nondominated solutions some excellent solutions may be deleted, setting back the evolution. The convergence and diversity metric curves of MOPSO evolution with 3, 4, and 5 objectives are shown in Figure 1 and demonstrate that performance decreases when the number of objectives exceeds 3 (the metrics are detailed in Section 6). As expected, many improved MOPSOs have made efforts at diversity maintenance through flexible mechanisms, adaptive parameters, and so on. Another scheme, named neighborhood topology, divides the particles into several subsets to search for local optima and increase diversity, giving PSO a wider and more elaborate search ability. Combining the global and neighborhood schemes yields unified particle swarm optimization.

Figure 1: Convergence and diversity metric curves in MOPSO evolutions.

Unified particle swarm optimization (UPSO) was proposed as a scheme that harnesses the global and local PSO variants, combining their exploration and exploitation properties [33]. Let G_i be the velocity update of particle i for the global PSO variant and L_i for the local PSO variant:

G_i(t + 1) = χ[v_i(t) + c_1·r_1·(pbest_i − x_i(t)) + c_2·r_2·(gbest − x_i(t))],
L_i(t + 1) = χ[v_i(t) + c_1·r_1·(pbest_i − x_i(t)) + c_2·r_2·(lbest_i − x_i(t))],

where gbest is the global best, lbest_i is the local best within the local subset of the ith particle, and χ is the constriction factor. The parameter u ∈ [0, 1] is called the unification factor and balances the global and local search directions. According to the UPSO scheme, the aggregation criterion is

U_i(t + 1) = u·G_i(t + 1) + (1 − u)·L_i(t + 1),  x_i(t + 1) = x_i(t) + U_i(t + 1).
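The aggregation step can be sketched as follows (assuming the two variant updates G and L have already been computed; u is the unification factor):

```python
def upso_velocity(G, L, u):
    """Unified velocity: blend the global-variant update G and the
    local-variant update L with unification factor u in [0, 1]."""
    return [u * g + (1.0 - u) * l for g, l in zip(G, L)]
```

With u = 1 the scheme reduces to the global PSO variant, and with u = 0 to the local variant.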

5. Proposed Approach

The proposed algorithm is based on the idea of coevolution through the concept of unified particle swarm optimization.

5.1. Decomposition of MaOPs

In light of the deterioration of MOPSO as the number of objectives rises, in the proposed technique the many-objective optimization problems described in Section 2 are decomposed into several subgroups with three or fewer objectives each. As the program runs, the composition of the subgroups is recomputed to keep the whole set of objectives balanced.

Dimensionality reduction techniques typically combine objectives based on correlation and coupling. As strengthened nondominance pressure is applied within the subgroups, the Pareto-optimal set grows quickly and retains some extreme directions, which secures the search area in the early iterations. Yet, in most MaOPs, the connections between the many objectives cannot be defined clearly, so a farfetched decomposition and recombination may describe the original problem incompletely. We introduce correlation-based decoupling for many-objective optimization. As the coupling degree is related to the correlation degree, correlation-based decoupling techniques can be divided into relevant decoupling and conflict decoupling. On the one hand, in relevant decoupling, the relevant objectives have similar trends of increase and decrease; as a result, the optimal solutions gather together, which helps the particles search within a smaller neighborhood. On the other hand, in conflict decoupling, the conflicting objective space holds a larger number of nondominated solutions, which guarantees search diversity in the low-dimensional spaces. In this paper, both correlation-based decoupling methods are considered in the simulations. According to the comparison in Section 6, conflict decoupling performs better in the UMOPSO-D algorithm.

There are several correlation coefficient measurements; the most common are Pearson's correlation coefficient and Spearman's rank correlation. Pearson's correlation coefficient is sensitive only to a linear relationship between two variables and places demands on their distributions, so it is not appropriate for MaOP decomposition. In comparison, rank correlation is a nonparametric measure of statistical dependence between two variables; it is less sensitive to nonnormality in distributions and gives good descriptions of consistency and tendency. Spearman's rank correlation coefficient between two objective functions at iteration t is computed over sampled fitness values as r_s = 1 − 6·Σ_i d_i² / (n(n² − 1)), where d_i is the difference between the ranks of the ith sample on the two objectives and n is the number of samples, with r_s ∈ [−1, 1]; the closer the coefficient is to 1 or −1, the stronger the correlation or anticorrelation between the objectives. To avoid extra computation or memory for the correlation calculation, we use the solution fitnesses found by the particles during the iterations as the sample points.
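A self-contained sketch of the no-ties Spearman formula stated above (illustrative; the paper's implementation may differ, for example in tie handling):

```python
def _ranks(xs):
    """Rank positions (1-based) of the values in xs, assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(f_a, f_b):
    """r_s = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) over two objectives'
    sampled fitness values f_a and f_b."""
    n = len(f_a)
    d2 = sum((ra - rb) ** 2 for ra, rb in zip(_ranks(f_a), _ranks(f_b)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```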

Using this technique, the coupling degrees change along with the particles' fitnesses; in other words, the coupling relations depend on the particles' evolutionary procedure. To keep up with the evolutionary process and capture the nonlinear objectives, decoupling is performed before every decomposition procedure.

To balance the computational loads between subgroups, the number of subgroups is determined by the number of objectives so that each subgroup holds at most three objectives; it is initialized at the beginning of the main algorithm, together with the population size, the maximum size of the global repository, and the maximum size of the subgroup repositories. The objective decompositions are carried out over several UPSO iterations. The procedure of decomposition is shown in Algorithm 1.

Algorithm 1: Decomposition algorithm.
5.2. Local Best and Global Best Selection

In the updating formulation of PSO, the global best (gbest) is defined as the best position that the particles in the population have ever reached. Yet, in a multiobjective problem, the best position is not unique, and any solution with nondominated quality or top front sort rank can be viewed as a gbest. The gbest works as the leader in the MOPSO search process, so global best selection is a key task which determines the direction of the particles, the convergence speed, and the diversity. There are several popular methods of global best selection in state-of-the-art MOPSOs, such as the crowding distance technique, the region-based strategy using adaptive hypercubes, and the nondominated sorting technique. As the iterations go on, some particles gather around several highly promising optimal areas, making these areas more developed. The regularity we found is that the more developed the area from which the gbest is selected, the higher the selection pressure exerted on the particles, which leads to fast convergence but poor diversity; on the contrary, the less developed the area from which the gbest is selected, the more flexible the moving directions, which promises rich diversity but slow convergence.

In UPSO, both the subgroup local best (lbest) and the global best (gbest) influence the direction of a particle. The subgroups and the whole set constitute a double-loop system, in which we can apply two suites of selection strategies. The subgroup plays the role of the inner loop, which must ensure convergence and speed, so we adopt the closest distance method to select lbest; the whole set plays the role of the outer loop, which maintains population diversity and avoids premature convergence, so we adopt the hypergroup method to select gbest. In this way, the particles in a subgroup search around the optimal areas, constantly try directions pointed toward other groups, and prepare for the next subgroup rearrangement. The proposed selection algorithms are detailed in the following paragraphs.

Closest Distance Method. In this method, calculate the distances between the given particle and the positions in the subgroup repository, which collects the nondominated solutions of the subgroup, and then select the closest solution as the lbest for the given particle.
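A minimal sketch of this selection, assuming the repository is a plain list of position vectors and distance is Euclidean:

```python
import math

def closest_lbest(particle, s_rep):
    """Return the subgroup-archive member nearest to the particle."""
    return min(s_rep, key=lambda s: math.dist(particle, s))
```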

Hypergroup Method. Hypergroup means that the gbest lies in a different group from the given particle. The gbest is selected from the global repository REP, which stores the refined elite solutions from the subgroup repositories. Each solution in REP is labeled with the group index indicating which subgroup it came from. REP is managed by a group of reference vectors, as detailed in Section 5.4. Based on the distances between the particle and the reference vectors in objective space, the closest vector is determined, and the solution associated with this vector is selected as the gbest. If that vector has no associated solution, or the associated solution has the same group index as the particle, the global best selection moves on to the next closest vector.
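The selection loop can be sketched under an assumed representation in which each repository entry is a (solution, group index, associated reference vector) triple; this layout is illustrative, not the paper's data structure:

```python
import math

def hypergroup_gbest(particle_obj, particle_group, rep):
    """Walk the reference vectors from nearest to farthest (in objective
    space) and return the first associated solution whose group index
    differs from the particle's."""
    by_distance = sorted(rep, key=lambda e: math.dist(particle_obj, e[2]))
    for sol, group, _vec in by_distance:
        if group != particle_group:
            return sol
    return None  # no cross-group leader available
```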

5.3. Personal Best Selection

The personal best (pbest) is defined as the best position that a particle has ever reached during the search process. This raises a challenge in MOPs, since a particle's current fitness and its memory may be incomparable; as a result, the personal best cannot be updated directly. Reference [3] randomly selects one best memory as the pbest; [34] builds a personal archive and proposes the largest minimal distance for the decision. In this paper, a new method named the objective balance strategy is presented (Algorithm 2), which defines the pbest from the repository. The calculation of the leading objective indexes is detailed in Section 5.4.

Algorithm 2: Personal best selection.
5.4. Repositories Strategy

The high-dimensional objectives lead to a large Pareto front area, which means a great many nondominated positions are found during the early iterations, and the repository soon reaches saturation. In the UMOPSO-D proposal, each subgroup with three or fewer objectives has its own archive (sRep). Because the selection pressure is relatively large in sRep, we employ the dominance rank as the assessment strategy for each solution. When the repository reaches its limit, solutions with poor rank values are abandoned.

The maintenance of the global repository (REP) plays an important role in MOP algorithms, because it provides leader information and balances convergence and diversity. When the iteration terminates, REP outputs the optimal solutions. REP consists of solutions from all subgroup repositories. In this paper, we use a set of reference vectors as a diversity standard, inspired by Das and Dennis's systematic approach [35]. Each solution in REP finds its associated vector according to the closest distance. When the number of solutions in REP exceeds the upper boundary, the vector associated with the most solutions is selected, and one of its associates is deleted. This procedure repeats until the number of members in REP equals the upper boundary. The method above is called crowding reference.
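The crowding-reference pruning can be sketched as follows; solutions and reference vectors are plain objective-space points here, and the choice of which associate to drop is simplified to "the first one", so this is an illustrative reading of the rule rather than the paper's code:

```python
import math

def prune_by_reference(rep, ref_vectors, max_size):
    """Repeatedly find the reference vector with the most associated
    solutions and delete one of its associates, until rep fits max_size."""
    while len(rep) > max_size:
        assoc = {}
        for sol in rep:  # associate each solution with its closest vector
            k = min(range(len(ref_vectors)),
                    key=lambda j: math.dist(sol, ref_vectors[j]))
            assoc.setdefault(k, []).append(sol)
        crowded = max(assoc.values(), key=len)
        rep.remove(crowded[0])  # drop one associate of the most crowded vector
    return rep
```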

The algorithm regroups the objectives and the population after several iterations according to the stability. After regrouping, the repositories of the new subgroups become empty again. We propose a repository recomposition strategy that assigns each solution to the objective it belongs to. Combining all the sRep's gives the global repository REP; then, for every solution, sort each objective dimension's values from small to large over all the elements of REP, and choose the objective on which the solution ranks highest as its leading objective. If more than one dimension shares the top place, randomly choose one as the leading objective. When regrouping begins, each solution in REP follows its leading objective into the repository of the new subgroup that contains that objective.
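The leading-objective computation can be sketched like this (ties are broken by lowest index here, rather than randomly as in the text, to keep the sketch deterministic):

```python
def leading_objectives(rep):
    """For each solution in rep (a list of objective vectors), return the
    index of the objective on which it ranks best (smallest value)
    relative to the rest of the repository."""
    n_obj = len(rep[0])
    leads = []
    for sol in rep:
        ranks = [sorted(r[m] for r in rep).index(sol[m]) for m in range(n_obj)]
        leads.append(ranks.index(min(ranks)))
    return leads
```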

As a simple example, the hypercube indexes and the leading objective indexes of the known Pareto front obtained from the decomposition procedure in one generation are listed in Table 1, together with the parameter settings, the number of solutions in REP, and the grouping states. Assume that, in the next decomposition, the objectives are reassigned to subgroups 1 and 2. According to the leading objective indexes, the solutions in bold font in Table 1 go to subgroup repository sRep1, and the rest go to sRep2.

Table 1: Solutions’ leading objectives, hypercube indexes, and potential indexes.
5.5. Unified Coevolutionary Strategy

In order to give a visual representation of the solutions' routes and to express feasible detailed designs of the coevolution process, we propose a novel hypercube technique. Within the proposed algorithm, this hypercube technique helps to assess the convergence of subgroups, balance the learning directions of particles, and set rules for parameters.

The hypercubes are used to grid the solutions in the subgroup archives and are adaptive, similarly to [3, 25]. The number of grids is provided in the initialization of the algorithm. According to (5), each elite solution is assigned into a grid with integer labels. Then the potential index is calculated for each solution.

In a previous paper [36], entropy was used to measure the diversity of the population; in [25], it was proposed to classify the population into exploitation and exploration states, which can be read as the stability of the population. Similarly, we use the entropy method to assess the stability of a subgroup through its archive. Because MaOPs with a decomposition technique require a large amount of calculation in a parallel cell coordinate system, an improvement is made on this basis. We use the potential index values instead of the grid coordinates to represent a solution and change the entropy computation space from G (rows) × nObj (columns) to 3G (rows) × nSub (columns), named the potential hyperspace (PHS). As an example, the calculations of the PHS are listed in Table 1. In every iteration, the records falling in the PHS cells form an information entropy which reflects the uniformity of the subgroup; it is defined according to (6) as E = −Σ_k (n_k/N) log(n_k/N), where n_k is the number of archive members located in grid k and N is the total number of archive members. While the entropy E assesses the convergence of the subgroup, its variation ΔE represents the variation range of the convergence, that is, the stability.
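The cell-count entropy can be sketched as follows (the per-cell counts over the PHS are assumed to have been collected already):

```python
import math

def cell_entropy(counts):
    """Shannon entropy of the distribution of archive members over cells;
    counts[k] is the number of members falling in cell k."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)
```

A uniform spread over cells maximizes the entropy, while concentration in a single cell drives it to zero.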

After the decomposition, the solutions in the sRep's gather around the optimal areas led by the corresponding objectives. As Figure 2(a) illustrates, the unbalanced distribution leads to unequal learning pressure and incomplete PF solutions. Our coevolution proposal addresses this inequality in the leading pressure with the following strategy. In early generations, UPSO orients the evolution toward the subgroup objectives; this procedure exploits the particles' maximum potential and accelerates the evolution, and it is abstracted as the first routine in Figure 2(b). As the procedure moves on, more and more elite solutions dominated by the subgroup objectives are found near the true Pareto front. To balance the learning pressure and expand diversity, UPSO gradually shifts the emphasized direction by enlarging the unification factor, changing the particle's routine accordingly. Through this approach, a new solution settles down as a new REP candidate and broadens the REP coverage, as shown in Figure 2(c). To adaptively adjust the factor in a particle's updating formulation, we connect it with the stabilities of the subgroups. When the group's repository has low stability, meaning that the Pareto region represented by the group is still being explored, particles should have a smaller unification factor to focus on the group itself; when the group's repository has high stability, meaning that the inner-group search is nearly complete, particles should have a larger unification factor to search between groups. Following these rules, we built the function in (7). The entropy also determines the termination of one decomposed UPSO procedure, the so-called inner loop: the termination condition is met when the norm of the entropy-variation vector falls below a threshold or the count of inner iterations exceeds the maximum.

Figure 2: Illustration of the particle trace in UMOPSO-D.
5.6. Main Algorithm

The algorithm of UMOPSO-D is detailed in Algorithm 3.

Algorithm 3: Main algorithm.

6. Simulation Result

In this section, the benchmark function set DTLZ is employed as the test problem, and three metrics are selected to evaluate the performance of seven algorithms: MOPSO, MOPSO/D, NSGA-II, MOEA/D, Gong, -DEMO-revised, and UMOPSO-D. We run UMOPSO-D with relevant decoupling and with conflict decoupling, respectively.

6.1. Performance Metrics

We select performance metrics covering convergence, diversity, and running efficiency, to provide straightforward statistical quantification for the comparison of the MOEAs.

Generational Distance (GD). This metric evaluates convergence and requires an approximation set representing the true Pareto front. It is widely used in MOP performance assessment [19, 37] and is mathematically defined in (8), where the summand is the Euclidean distance in objective space between each solution of the approximation set and the closest member of the true PF, normalized by the number of solutions in the set. A smaller GD value indicates better convergence.
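A minimal NumPy sketch of a common GD formulation follows (the function name is ours, and the paper's (8) may use a different exponent):

```python
import numpy as np

def generational_distance(approx, true_front):
    """Generational Distance: for each solution of the approximation set,
    take the Euclidean distance (in objective space) to its nearest
    neighbour on the reference (true) Pareto front; combine as
    sqrt(sum of squared distances) / n. Smaller is better."""
    approx = np.asarray(approx, dtype=float)
    true_front = np.asarray(true_front, dtype=float)
    # pairwise distance matrix: |approx| x |true_front|
    d = np.linalg.norm(approx[:, None, :] - true_front[None, :, :], axis=2)
    nearest = d.min(axis=1)  # distance of each solution to the PF
    return np.sqrt((nearest ** 2).sum()) / len(approx)
```

A set lying exactly on the sampled front scores 0; any offset from the front raises the value.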

Hypervolume (HV). This metric assesses diversity as well as convergence by measuring the region of objective space dominated by the solution set and bounded by a predesigned reference point. It and its variants have been widely used [5, 38, 39]. The HV metric is defined in (9), where the Lebesgue measure quantifies the hypervolume bounded by each solution and the reference point; only the solutions that dominate the reference point contribute, and the result is normalized by the Lebesgue measure between the coordinate origin and the reference point. The choice of reference point is essential: one that is too large or too small yields highly similar evaluation values across the contrasted algorithms. Based on the convergence difficulty and several repeated trials, the reference point is set according to Table 2. A larger HV value indicates better performance in convergence, diversity, or both.
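Since exact hypervolume computation grows exponentially with the number of objectives, a Monte Carlo estimate is a practical stand-in for many-objective studies. The sketch below is ours (not the paper's exact procedure): it assumes minimization, keeps only solutions dominating the reference point, and samples the box spanned by the origin and the reference.

```python
import numpy as np

def hypervolume_mc(front, reference, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by `front` and
    bounded by `reference` (minimization). The dominated fraction of
    uniform samples in the origin-reference box is scaled by the box
    volume."""
    front = np.asarray(front, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # keep only solutions that dominate the reference point
    front = front[(front <= reference).all(axis=1)]
    if len(front) == 0:
        return 0.0
    rng = np.random.default_rng(seed)
    samples = rng.uniform(0.0, reference, size=(n_samples, len(reference)))
    # a sample is dominated if some solution is <= it in every objective
    dominated = (front[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    return dominated.mean() * reference.prod()
```

The early return of 0.0 mirrors the DTLZ3 observation in Section 6.3: when no solution dominates the reference point, the metric collapses to zero.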

Table 2: Setting of reference point.

Algorithm Running Efficiency (ARE). ARE is often evaluated to assess the computational cost [40]. According to (10), it is the total calculation time divided by the number of evaluations; a smaller ARE indicates better efficiency.
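As an illustration of the ratio in (10) (the symbols there were typeset as math and are not reproduced here; the helper name is ours):

```python
import time

def algorithm_running_efficiency(run, n_evaluations):
    """ARE as average wall-clock time per function evaluation: the total
    time of one optimizer run divided by the number of evaluations it
    performed. Smaller is better."""
    start = time.perf_counter()
    run()  # execute the optimizer once
    total = time.perf_counter() - start
    return total / n_evaluations
```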

6.2. Parameter Settings

Benchmark Test Problems. We compare the selected algorithms on four test problems, DTLZ1 to DTLZ4. As recommended in [41], the number of decision variables is set to the number of objectives plus k − 1, where k is set to 5 for DTLZ1 and to 10 for DTLZ2, DTLZ3, and DTLZ4. The sample size of the true PF is set to 1000. The numbers of objectives are set to 9, 15, 30, and 45 for each test instance, with corresponding maximum generations of 1000, 1000, 1500, and 2000.
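For concreteness, a sketch of the DTLZ2 evaluation following the standard definition in [41] (the function name is ours). With the settings above, a 3-objective DTLZ2 instance has 3 + 10 − 1 = 12 decision variables, and its true Pareto front satisfies the sum of squared objectives equal to 1.

```python
import numpy as np

def dtlz2(x, n_obj):
    """Evaluate DTLZ2 (Deb et al. [41]) for a decision vector x in [0,1]^n
    with n = n_obj + k - 1. The last k variables feed the distance
    function g; the first n_obj - 1 variables set the position on the
    spherical front."""
    x = np.asarray(x, dtype=float)
    m = n_obj
    g = ((x[m - 1:] - 0.5) ** 2).sum()  # distance function over the last k variables
    f = np.full(m, 1.0 + g)
    for i in range(m):
        # product of cosines, then one sine for all objectives after the first
        f[i] *= np.prod(np.cos(x[: m - 1 - i] * np.pi / 2))
        if i > 0:
            f[i] *= np.sin(x[m - 1 - i] * np.pi / 2)
    return f
```

Setting the last k variables to 0.5 drives g to zero, placing the evaluated point exactly on the true front.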

Settings for Algorithms. Seven MOEAs, including state-of-the-art many-objective algorithms, are selected for the comparison. Among them, three algorithms are based on the PSO metaheuristic and use identical PSO parameter settings. The maximal sizes of the global repository and the population are set to 100 for every algorithm. Based on the conclusion in [19] that Gong's algorithm achieves its best performance on the 24-objective problem when reduced to 6 objectives, we choose the number of decomposition objectives for each instance accordingly. The control parameter in -DEMO-revised is set to 0.5. The hardware configuration is an Intel(R) Core(TM) i5-4570.

6.3. Experiment Results

In this section, the detailed comparison results are organized as follows. The experimental results, including the mean and standard deviation of the GD metric over 15 independent simulations on each test instance, are summarized in Table 3, where the best results are highlighted. DEMO is short for -DEMO-revised in the data tables, and UMOPSO-D is reported in two variants, one with relevant decoupling and one with conflict decoupling.

Table 3: Comparisons of GD between proposed algorithm and other MOEAs.

On the GD metric, we first focus on the comparison between UMOPSO-D and its peers. Both relevant decoupling and conflict decoupling contribute to UMOPSO-D's performance. When the number of objectives equals 9, UMOPSO-D achieves the best result on DTLZ1, DTLZ2, and DTLZ3, and on DTLZ4 its deviation from the best performance is tiny; MOEA/D shows good convergence on DTLZ4. With 15 objectives, UMOPSO-D, -DEMO-revised, Gong, and NSGA-II are the best on DTLZ1 to DTLZ4, respectively. As the objective dimension increases, UMOPSO-D gains a greater advantage. It is evident from Table 3 that UMOPSO-D obtains better GD values than the comparison algorithms in the high-dimensional experiments with 30 and 45 objectives, except on DTLZ3 with 45 objectives, where UMOPSO-D is a close second behind -DEMO-revised but has a lower standard deviation. Furthermore, the statistics show that UMOPSO-D, like the comparison algorithms, performs worse on DTLZ3 than on the other DTLZs in all dimensions, with relatively higher GD values; this indicates the difficulty of searching the PF of DTLZ3. The PFs obtained by -DEMO-revised and UMOPSO-D are good in general, while UMOPSO-D works even better on instances with more than 30 objectives. Overall, on the GD metric, UMOPSO-D together with the two recently proposed techniques, Gong and -DEMO-revised, accounts for most of the best results among the seven comparison algorithms.

Secondly, the GD comparison between relevant decoupling and conflict decoupling within UMOPSO-D is as follows. Overall, conflict decoupling yields lower GD values in most situations, especially on DTLZ1 and DTLZ4; the differences between the two techniques on DTLZ2 and DTLZ3 are relatively small.

The HV metrics of the simulations are shown in Table 4. On DTLZ1, -DEMO-revised and UMOPSO-D have relatively better performance, and the differences among all algorithms except MOPSO and MOPSO/D are very small; this indicates that UMOPSO-D greatly improves the MaOP performance of PSO-based algorithms. On DTLZ2, UMOPSO-D with relevant decoupling has the highest scores, and the average performance of MOEA/D is second best. DTLZ3 shows the worst HV performance: MOPSO, MOPSO/D, NSGA-II, and MOEA/D obtain scores at or near 0 because nearly all of their solutions fall outside the region defined by the reference point and the decision-space boundary, while UMOPSO-D with conflict decoupling has the clearly best HV score on DTLZ3 in all tested dimensions. On DTLZ4, UMOPSO-D performs better when the dimension is larger than 15, and MOEA/D and UMOPSO-D share the best HV results. UMOPSO-D achieves the best values with 30 and 45 objectives (except one DTLZ1 instance) and values close to the optima with 9 and 15 objectives. It is therefore reasonable to conclude that UMOPSO-D performs better when both convergence and diversity are considered; the next-best performances come from MOEA/D and -DEMO-revised.

Table 4: Comparisons of HV between proposed algorithm and other MOEAs.

Furthermore, the HV metric indicates that the relative performance of relevant decoupling and conflict decoupling depends on the formulation of the given test instance: conflict decoupling has an obvious advantage on DTLZ3 but relatively poor performance on DTLZ2. When GD is also taken into consideration, however, conflict decoupling has a slight advantage over relevant decoupling.

The ARE metrics of the simulations are shown in Table 5. Under a fixed objective dimension, the runtimes on the different DTLZs are basically the same: the number of evaluations is proportional to the population size and the number of iterations, so there is little difference among the DTLZ instances, and the ARE scores of the DTLZ1 case are taken as representative in Table 5. Because MOPSO/D is derived from MOPSO and MOEA/D, its ARE is omitted. According to the table, MOEA/D has the best efficiency among all algorithms and NSGA-II the poorest. Owing to its dimension reduction, -DEMO-revised guarantees a lower complexity. UMOPSO-D and MOPSO have similar ARE values; however, the former has great advantages in convergence and diversity, as shown above.

Table 5: Comparisons of ARE between proposed algorithm and other MOEAs.

Figure 3 displays the convergence curves, evaluated by the GD metric, on the DTLZ instances with 45 objectives for the four best-performing algorithms; the decoupling method of UMOPSO-D in Figure 3 is relevant decoupling. The GD curves illustrate that all the compared algorithms are able to advance the convergence process, gradually approaching the true PF as the generations proceed. Note that UMOPSO-D shows poor performance in early iterations, indicating a delayed reaction to the optimization problem, but outperforms the other algorithms in later generations. This can be explained by the decomposition and recomposition process, which first drives the evolution toward optima in a few dimensions; as the coevolution continues, the increasingly balanced subgroups cooperate and accelerate the convergence. Based on these results, UMOPSO-D is better than the other algorithms on DTLZ1, DTLZ2, and DTLZ4 in the high-dimensional situation and also yields a competitive result on DTLZ3.

Figure 3: Performance curves on DTLZs in high dimension experiments.

From the above statistics and analysis, UMOPSO-D guarantees a better, or at least competitive, performance in both convergence and diversity compared with the other six experimental algorithms on high-dimensional DTLZ problems, and conflict decoupling has a slight advantage over relevant decoupling in objective decomposition. Its computational efficiency is relatively low compared with -DEMO-revised and MOEA/D.

7. Conclusion

MaOPs challenge most MOEAs: the loss of selection pressure results in poor convergence and diversity. In this paper, we introduce a decomposition-based unified evolutionary algorithm and merge it with MOPSO to improve convergence and diversity. The innovations comprise five points: (1) decomposing the many objectives with discrete decoupling; (2) connecting the subgroups through a unified coevolutionary strategy; (3) an adaptive factor, based on subgroup stability, that adjusts the coevolution process; (4) archive strategies comprising repository reconstruction and quantity maintenance; (5) a novel local/global and personal best selection technique. The proposed algorithm consists of inner and outer evolutionary loops that pursue convergence and diversity, respectively. We have compared the performance of UMOPSO-D with that of several MOEAs, and the experimental results show its superior performance on high-dimensional MaOPs.

UMOPSO-D is an attempt to solve MaOPs through reduction without discarding information; in other words, the decomposition and coevolution process preserves the original demands and objectives, which guarantees the authenticity, integrity, and originality of the optimization problem. In line with the no-free-lunch principle, the good performance of UMOPSO-D comes at the cost of complexity and data size, and its reaction speed is slow compared with some state-of-the-art algorithms. Further experiments and statistical analyses covering scaled or other benchmark problems should follow. In addition, probability-based strategies and a coevolution-based multiagent structure, which can reduce and share computational cost, will be applied to UMOPSO-D to achieve better quality in our future research.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was sponsored by the National Natural Science Foundation of China under Grant no. 61503287, no. 71371142, and no. 61203250, Program for Young Excellent Talents in Tongji University (2014KJ046), Program for Shanghai Natural Science Foundation (14ZR1442700), Program for New Century Excellent Talents in University of Ministry of Education of China, and Ph.D. Programs Foundation of Ministry of Education of China (20100072110038).

References

  1. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  2. E. Zitzler, M. Laumanns, and L. Thiele, SPEA2: Improving the Strength Pareto Evolutionary Algorithm, 2001.
  3. C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
  4. W. Guo, M. Chen, L. Wang, and Q. Wu, “Hyper multi-objective evolutionary algorithm for multi-objective optimization problems,” Soft Computing, 2016.
  5. S. Bandyopadhyay and A. Mukherjee, “An algorithm for many-objective optimization with reduced objective computations: a study in differential evolution,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 3, pp. 400–413, 2015.
  6. J. Molina, L. V. Santana, A. G. Hernández-Díaz, C. A. Coello Coello, and R. Caballero, “g-dominance: reference point based dominance for multiobjective metaheuristics,” European Journal of Operational Research, vol. 197, no. 2, pp. 685–692, 2009.
  7. M. Farina and P. Amato, “A fuzzy definition of ‘optimality’ for many-criteria optimization problems,” IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, vol. 34, no. 3, pp. 315–326, 2004.
  8. J. Xiao, K. Wang, and X. Bi, “Multi-objective evolutionary algorithm based on improved k-dominated sorting,” Control and Decision, vol. 29, no. 12, pp. 2165–2170, 2014.
  9. O. Schütze, “A new data structure for the nondominance problem in multi-objective optimization,” in Evolutionary Multi-Criterion Optimization, C. M. Fonseca, P. J. Fleming, E. Zitzler, L. Thiele, and K. Deb, Eds., vol. 2632 of Lecture Notes in Computer Science, pp. 509–518, Springer, Berlin, Germany, 2003.
  10. H. Sato, H. E. Aguirre, and K. Tanaka, “Controlling dominance area of solutions and its impact on the performance of MOEAs,” in Evolutionary Multi-Criterion Optimization: 4th International Conference, EMO 2007, Matsushima, Japan, March 5–8, 2007. Proceedings, vol. 4403 of Lecture Notes in Computer Science, pp. 5–20, Springer, Berlin, Germany, 2007.
  11. M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining convergence and diversity in evolutionary multiobjective optimization,” Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.
  12. C. Luo, K. Shimoyama, and S. Obayashi, “A study on many-objective optimization using the kriging-surrogate-based evolutionary algorithm maximizing expected hypervolume improvement,” Mathematical Problems in Engineering, vol. 2015, Article ID 162712, 15 pages, 2015.
  13. K. McClymont and E. Keedwell, “Deductive sort and climbing sort: new methods for non-dominated sorting,” Evolutionary Computation, vol. 20, no. 1, pp. 1–26, 2012.
  14. H. Wang and X. Yao, “Corner sort for Pareto-based many-objective optimization,” IEEE Transactions on Cybernetics, vol. 44, no. 1, pp. 92–102, 2014.
  15. Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
  16. K. Li, K. Deb, Q. Zhang, and S. Kwong, “An evolutionary many-objective optimization algorithm based on dominance and decomposition,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015.
  17. X. Ma, Y. Qi, L. Li, F. Liu, L. Jiao, and J. Wu, “MOEA/D with uniform decomposition measurement for many-objective problems,” Soft Computing, vol. 18, no. 12, pp. 2541–2564, 2014.
  18. D. Cai and W. Yuping, “A new uniform evolutionary algorithm based on decomposition and CDAS for many-objective optimization,” Knowledge-Based Systems, vol. 85, pp. 131–142, 2015.
  19. D.-W. Gong, Y.-P. Liu, X.-Y. Sun, and Y.-Y. Han, “Parallel many-objective evolutionary optimization using objectives decomposition,” Acta Automatica Sinica, vol. 41, no. 8, pp. 1438–1451, 2015.
  20. H. Wang and X. Yao, “Objective reduction based on nonlinear correlation information entropy,” Soft Computing, vol. 20, no. 6, pp. 2393–2407, 2016.
  21. Z.-X. Chen, X.-H. Yan, K.-A. Wu, and M. Bai, “Many-objective optimization integrating open angle based congestion control strategy,” Acta Automatica Sinica, vol. 41, no. 6, pp. 1145–1158, 2015.
  22. K. Praditwong and X. Yao, “A new multi-objective evolutionary optimisation algorithm: the two-archive algorithm,” in Computational Intelligence and Security: International Conference, CIS 2006, Guangzhou, China, November 3–6, 2006, Revised Selected Papers, vol. 4456 of Lecture Notes in Computer Science, pp. 95–104, Springer, Berlin, Germany, 2007.
  23. W. Guo, L. Wang, and Q. Wu, “Numerical comparisons of migration models for Multi-objective Biogeography-Based Optimization,” Information Sciences, vol. 328, pp. 302–320, 2016.
  24. P. K. Tripathi, S. Bandyopadhyay, and S. K. Pal, “Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients,” Information Sciences, vol. 177, no. 22, pp. 5033–5049, 2007.
  25. W. Hu, G. G. Yen, and X. Zhang, “Multiobjective particle swarm optimization based on Pareto entropy,” Journal of Software, vol. 25, no. 5, pp. 1025–1050, 2014.
  26. W. Peng and Q. F. Zhang, “A decomposition-based multi-objective particle swarm optimization algorithm for continuous optimization problems,” in Proceedings of the 2008 IEEE International Conference on Granular Computing (GRC '08), vol. 1-2, pp. 534–537, Hangzhou, China, August 2008.
  27. L. Wang, J.-N. Shen, S.-Y. Wang, and J. Deng, “Advances in co-evolutionary algorithms,” Control and Decision, vol. 30, no. 2, pp. 193–202, 2015.
  28. J. E. Alvarez-Benitez, R. M. Everson, and J. E. Fieldsend, “A MOPSO algorithm based exclusively on Pareto dominance concepts,” in Evolutionary Multi-Criterion Optimization, C. A. Coello Coello, A. H. Aguirre, and E. Zitzler, Eds., vol. 3410 of Lecture Notes in Computer Science, pp. 459–473, Springer, Berlin, Germany, 2005.
  29. J. Horn, N. Nafpliotis, and D. E. Goldberg, “A niched Pareto genetic algorithm for multiobjective optimization,” in Proceedings of the 1st IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, vol. 1, pp. 82–87, 1994.
  30. J. D. Knowles and D. W. Corne, “Approximating the nondominated front using the Pareto archived evolution strategy,” Evolutionary Computation, vol. 8, no. 2, pp. 149–172, 2000.
  31. W. Guo, W. Li, Q. Zhang, L. Wang, Q. Wu, and H. Ren, “Biogeography-based particle swarm optimization with fuzzy elitism and its applications to constrained engineering problems,” Engineering Optimization, vol. 46, no. 11, pp. 1465–1484, 2014.
  32. Y. Zhang, X. Xiong, and Q. Zhang, “An improved self-adaptive PSO algorithm with detection function for multimodal function optimization problems,” Mathematical Problems in Engineering, vol. 2013, Article ID 716952, 8 pages, 2013.
  33. K. E. Parsopoulos and M. N. Vrahatis, “Parameter selection and adaptation in unified particle swarm optimization,” Mathematical and Computer Modelling, vol. 46, no. 1-2, pp. 198–213, 2007.
  34. J. Branke and S. Mostaghim, “About selecting the personal best in multi-objective particle swarm optimization,” in Parallel Problem Solving from Nature—PPSN IX, Springer, Berlin, Germany, 2006.
  35. I. Das and J. E. Dennis, “Normal-boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems,” SIAM Journal on Optimization, vol. 8, no. 3, pp. 631–657, 1998.
  36. D. Xiaodong, H. Gao, X. Liu, and Z. Xuedong, “Adaptive particle swarm optimization algorithm based on population entropy,” Computer Engineering, vol. 33, no. 18, pp. 222–248, 2007.
  37. Z.-G. Lu, H. Zhao, H.-F. Xiao, H.-R. Wang, and H.-J. Wang, “An improved multi-objective bacteria colony chemotaxis algorithm and convergence analysis,” Applied Soft Computing Journal, vol. 31, pp. 224–292, 2015.
  38. N. Al Moubayed, A. Petrovski, and J. McCall, “D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces,” Evolutionary Computation, vol. 22, no. 1, pp. 47–77, 2014.
  39. H. Zhang, X. Zhang, X.-Z. Gao, and S. Song, “Self-organizing multiobjective optimization based on decomposition with neighborhood ensemble,” Neurocomputing, vol. 173, pp. 1868–1884, 2016.
  40. K. C. Tan, T. H. Lee, and E. F. Khor, “Evolutionary algorithms for multi-objective optimization: performance assessments and comparisons,” Artificial Intelligence Review, vol. 17, no. 4, pp. 253–290, 2002.
  41. K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable multi-objective optimization test problems,” in Proceedings of the Congress on Evolutionary Computation, vol. 1-2, pp. 825–830, Honolulu, Hawaii, USA, May 2002.