Journal of Applied Mathematics
Volume 2014 (2014), Article ID 931630, 13 pages
http://dx.doi.org/10.1155/2014/931630
Research Article

An Enhanced Differential Evolution Based Algorithm with Simulated Annealing for Solving Multiobjective Optimization Problems

1School of Software, Xiamen University, Xiamen 361005, China
2School of Information Science and Engineering, Xiamen University, Xiamen 361005, China

Received 1 December 2013; Revised 27 March 2014; Accepted 7 April 2014; Published 7 May 2014

Academic Editor: Frank Werner

Copyright © 2014 Bili Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An enhanced differential evolution based algorithm, named multiobjective differential evolution with simulated annealing (MODESA), is presented for solving multiobjective optimization problems (MOPs). The proposed algorithm exploits simulated annealing to guide the search into more regions of the search space for better convergence to the true Pareto-optimal front. In the proposed simulated annealing approach, a new domination-based acceptance probability function is proposed, and some potential solutions are assigned a life cycle that gives them priority to be selected into the next generation. Moreover, the algorithm incorporates an efficient diversity maintenance approach, which prunes the obtained nondominated solutions to yield a well-distributed Pareto front. The feasibility of the proposed algorithm is investigated on a set of five biobjective and two triobjective optimization problems, and the results are compared with those of three other algorithms. The experimental results illustrate the effectiveness of the proposed algorithm.

1. Introduction

Many real-world problems are MOPs, which has prompted extensive research on MOPs over the past few decades. Many multiobjective evolutionary algorithms (MOEAs) have been proposed, such as the well-known algorithms in [1–4]. For comprehensive overviews, the reader can refer to [5–7].

In the present study, we focus on differential evolution (DE) for solving MOPs. DE, one of the most popular evolutionary algorithms, was initially presented by Storn and Price [8, 9]. It has a simple principle and is easy to implement. DE was originally used for solving single-objective optimization problems in [10]. In 2001, Abbass et al. [11] presented a Pareto-frontier differential evolution algorithm (PDE) for solving MOPs. In 2002, Abbass [12] further improved PDE with self-adaptive crossover and mutation operators. In the same year, the nondominated sorting and ranking concepts of NSGA-II were incorporated into DE to solve MOPs in [13]. Xue et al. [14] proposed an algorithm called Pareto-based multiobjective differential evolution (MODE), which incorporates the concept of Pareto optimality into the mutation operator. Parsopoulos et al. [15] presented a vector evaluated differential evolution (VEDE) for solving MOPs, in which a domination selection operator is adopted to improve performance. Kukkonen and Lampinen [16, 17] proposed a generalized differential evolution (GDE) and an improved version of GDE for solving MOPs with constraints. In 2006, Hernández-Díaz et al. [18] combined DE with rough set theory for MOPs. Wang et al. [19] proposed a self-adapted differential evolution algorithm, which adopts an external elitist archive and a crowding entropy diversity measure. Huang et al. [20, 21] extended their self-adaptive differential evolution (SaDE) to MOPs as multiobjective adaptive differential evolution (MOSaDE), and further extended MOSaDE with objective-wise learning strategies, naming it OW-MOSaDE. Ali et al. [22] proposed a modified differential evolution (MDE) for solving MOPs.
In 2012, they [23] continued this line of research with an enhanced version of MDE, named the multiobjective differential evolution algorithm (MODEA), which incorporates opposition-based learning [24] and the concept of random localization in mutation.

Besides the DE method, this paper utilizes the concept of simulated annealing to control the acceptance of candidate solutions. Simulated annealing, proposed by Kirkpatrick et al. [25], is a popular probabilistic metaheuristic for solving optimization problems. It is inspired by annealing in metallurgy, a technique that brings a material from a high temperature down to a low one in a controlled way so as to increase the size of its crystals and reduce their defects. The simulated annealing algorithm finds minimal-cost solutions by minimizing an associated energy function. In the process, the temperature starts at a high level and then decreases gradually; if the cooling is sufficiently slow, the global minimum will be reached, as proven by S. Geman and D. Geman in [26]. There have been several attempts to adopt simulated annealing for MOPs. In particular, a few studies utilize the concept of Pareto dominance in multiobjective simulated annealing; the reader can refer to [27] for a review of this research. In [28], an archived multiobjective simulated annealing algorithm named AMOSA is proposed, which incorporates the concept of an archive and uses the domination status to determine the acceptance of a new solution.

The rest of the paper is organized as follows. Section 2 introduces the problem formulation. Section 3 presents the proposed algorithm. Section 4 gives numerical examples comparing the performance of the proposed algorithm with the other algorithms, and Section 5 concludes the paper.

2. Problem Formulation

We assume that the MOPs considered in this paper are minimization problems; any maximization problem can be transformed into a minimization problem. A MOP can be formally described as follows:

\[
\begin{aligned}
\min\ & F(x) = \bigl(f_1(x), f_2(x), \ldots, f_m(x)\bigr), &\quad& (1)\\
\text{s.t.}\ & g_i(x) \le 0, \quad i = 1, 2, \ldots, q, && (2)\\
& h_j(x) = 0, \quad j = 1, 2, \ldots, p, && (3)
\end{aligned}
\]

where m is the number of objective functions and f_k is the kth objective function to be optimized. g_i and h_j are the inequality and equality constraints, respectively; q is the number of inequality constraints and p is the number of equality constraints. x = (x_1, x_2, ..., x_n) is the vector of variables, called a solution of the MOP, where n is the number of variables. That is to say, solving a MOP means finding all x that satisfy (2) and (3).

Suppose that there are two solutions x and y; x is said to dominate y if f_k(x) <= f_k(y) for all k in {1, ..., m} and f_k(x) < f_k(y) for at least one k. That x dominates y is denoted as x ≺ y. A solution x* is called a Pareto-optimal solution if there is no other solution y satisfying y ≺ x*. The set containing all the Pareto-optimal solutions is called the Pareto-optimal set (PS). The set containing all the Pareto-optimal objective vectors is called the Pareto front (PF), such that PF = {F(x) | x ∈ PS} [29].
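For concreteness, the dominance test just defined can be written as a small predicate (a sketch in Python; the function name is ours, since the paper gives only the mathematical definition):

```python
def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))
```

Note that two identical vectors do not dominate each other, and two vectors that each win on a different objective are incomparable.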

3. The Proposed Algorithm

The proposed algorithm is an extension of the MODEA algorithm of Ali et al. [23]. In the proposed MODESA, we focus on improving both the convergence of the obtained solutions towards the Pareto-optimal front and the diversity and distribution of those solutions. Besides using a simulated annealing process to control the acceptance of candidate solutions, an efficient diversity maintenance mechanism [30] is adopted to improve the distribution of the obtained solutions.

3.1. Population Initialization

In MODESA, the initial population is generated in the same way as in MODEA [23]. Because no prior information about the solution is available, its corresponding opposite estimate is also considered in order to obtain a better solution. Here, the concept of the opposite solution is given. Suppose that the current solution is x = (x_1, ..., x_n) and the lower and upper bounds of x_j are a_j and b_j, respectively. Then the opposite solution of x is calculated componentwise as a_j + b_j - x_j [23].

The advantage of doing this is that we can choose the better of the solution and its opposite as the initial solution. The initialization procedure used in this paper is as follows. A population of N solutions is generated using a uniform random distribution, and a population containing their opposite solutions is constructed as well. Next, the two populations are combined, and the best N solutions of the combined population are selected using the nondominated sorting of NSGA-II [2] and the diversity maintenance mechanism based on the vicinity distance [30].
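The generation of the random population and its opposite can be sketched as follows (a minimal sketch; the function name and NumPy representation are ours, and the subsequent truncation back to N solutions is not shown):

```python
import numpy as np

def opposed_population(n_pop, lower, upper, seed=None):
    """Draw a uniform random population X within the box [lower, upper]
    and stack it with its opposite population lower + upper - X.
    The combined 2*n_pop solutions are afterwards reduced to n_pop by
    nondominated sorting and vicinity-distance pruning (not shown)."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    X = rng.uniform(lower, upper, size=(n_pop, lower.size))
    return np.vstack([X, lower + upper - X])
```

Since each opposite component a_j + b_j - x_j stays inside [a_j, b_j], no bound repair is needed at this stage.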

3.2. Differential Evolution (DE)

Differential evolution (DE), presented by Storn and Price [8, 9], is a simple and efficient evolutionary algorithm. DE maintains a population of solutions and contains three main operators: mutation, crossover, and selection.

The mutation operation is defined as follows:

\[
v_i = x_{r_1} + F \cdot (x_{r_2} - x_{r_3}), \qquad (4)
\]

where x_{r1}, x_{r2}, and x_{r3} are randomly selected from the current population and r1 ≠ r2 ≠ r3 ≠ i. F is a control parameter, set in [0, 2] by Storn and Price [8, 9]. For convenience of description, the solution to which the perturbation is added is called the base vector. x_{r1} (the base vector) is perturbed by adding to it the product of the control parameter F and the difference between x_{r2} and x_{r3}. v_i is the generated perturbed solution. It is important to note that if v_i is out of bounds, it must be repaired. As mentioned above, the lower and upper bounds of variable x_j are a_j and b_j, respectively. In this study, if a component of v_i is less than a_j, it is set to a_j; if it exceeds b_j, it is set to b_j. Ali et al. [23] have shown that using the individual with the best fitness value among the three randomly selected individuals as the base vector has two advantages. The first is that the choice is neither merely random nor merely greedy. The second is that it offers a localized effect that helps to explore more different regions of the search space around the underlying solutions [23]. Therefore, in MODESA, after choosing the three individuals, we select the best of them as the base vector.
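The mutation step with the best-of-three base vector and bound repair can be sketched as follows (a sketch; how "best" is judged among the three is supplied by the caller through a rank array, e.g. a nondomination rank, which is our assumption, and the function name is ours):

```python
import numpy as np

def mutate(pop, i, F, rank, lower, upper, rng):
    """DE mutation with a best-of-three base vector: pick three distinct
    individuals other than i, use the best of them (smallest `rank`
    value) as the base vector, add the scaled difference of the other
    two, and clip out-of-bound components to the box bounds."""
    idx = [j for j in range(len(pop)) if j != i]
    r = rng.choice(idx, size=3, replace=False)
    r = sorted(r, key=lambda j: rank[j])        # best-ranked first
    v = pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
    return np.clip(v, lower, upper)             # bound repair
```

Clipping to the nearest bound matches the repair rule described above.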

After the perturbed solution v_i is generated, a trial solution u_i is produced by the crossover operation, which is defined as follows. For each variable j,

\[
u_{i,j} =
\begin{cases}
v_{i,j}, & \text{if } \mathrm{rand}_j \le CR,\\
x_{i,j}, & \text{otherwise},
\end{cases}
\qquad (5)
\]

where rand_j is a random number in [0, 1] and CR is the crossover rate. In general DE, the selection operator decides whether the trial solution u_i replaces the target solution x_i:

\[
x_i' =
\begin{cases}
u_i, & \text{if } f(u_i) \le f(x_i),\\
x_i, & \text{otherwise}.
\end{cases}
\qquad (6)
\]
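The crossover and the classical single-objective selection can be sketched as follows (a sketch; the forced component in the crossover is a standard DE safeguard of ours, not stated in the text above):

```python
import numpy as np

def crossover(x, v, CR, rng):
    """Binomial crossover: take each component of the trial vector from
    the perturbed vector v with probability CR, otherwise from the
    target x.  Forcing one component to come from v (our addition)
    guarantees that the trial differs from the target."""
    take_v = rng.random(x.size) <= CR
    take_v[rng.integers(x.size)] = True
    return np.where(take_v, v, x)

def select(x, u, f):
    """Single-objective DE selection: keep the trial u if it is no
    worse than the target x under the scalar objective f."""
    return u if f(u) <= f(x) else x
```

MODESA replaces this scalar selection with the simulated annealing procedure of Section 3.3.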

In the proposed MODESA, the target individual x_i is not simply replaced by the trial solution u_i. A simulated annealing procedure, introduced in Section 3.3, is employed to determine the acceptance of the trial solution.

3.3. Simulated Annealing Procedure

After the mutation phase, the crossover operator is performed, and the algorithm enters the selection phase. In MODESA, we incorporate the principle of simulated annealing into the selection phase. Before introducing the proposed selection phase, we first give a concept of domination in simulated annealing.

3.3.1. The Concept of Domination

To calculate the acceptance probability of a new solution, the amount of domination used in AMOSA [28] is adopted here. Given two solutions a and b, the amount of domination is calculated by the following equation:

\[
\mathrm{dom}(a, b) = \prod_{k = 1,\ f_k(a) \ne f_k(b)}^{m} \frac{|f_k(a) - f_k(b)|}{R_k}, \qquad (7)
\]

where m is the number of objectives and R_k is the range of the kth objective. When R_k is unknown, the solutions in the current population are used to calculate it.
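The amount of domination is a simple product over the objectives where the two vectors differ; it can be sketched as (the function name is ours):

```python
def domination_amount(fa, fb, ranges):
    """Amount of domination between objective vectors fa and fb, as in
    AMOSA [28]: the product, over the objectives where the two vectors
    differ, of |fa_k - fb_k| / R_k, with R_k the range of objective k."""
    dom = 1.0
    for a, b, r in zip(fa, fb, ranges):
        if a != b:
            dom *= abs(a - b) / r
    return dom
```

Dividing by the objective ranges makes the amounts comparable across objectives with different scales.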

3.3.2. Simulated Annealing Used in the Selection Operator of the Proposed MODESA

As shown in (6), if the trial solution dominates the current solution, general DE simply replaces the current solution with the trial solution. However, doing so may cause the algorithm to converge prematurely and become trapped in local optima. As is known, there are three possibilities when comparing two solutions, the current solution x and the new solution u: (1) u dominates x; (2) u is dominated by x; (3) x and u are incomparable.

If u dominates x, it is obvious that x is not better than u, so we can simply replace x with u. If u is dominated by x, the new solution is still accepted into the temporary population. If x and u are incomparable, the solution u may have the potential to guide the search towards an unreached area of the search space. Therefore, in the DE process, we use simulated annealing to accept u with a certain probability. Meanwhile, we need to control the acceptance probability: if we did not, all of the potential solutions that are not absolutely good would enter the next generation, which would confuse the algorithm and prevent it from maintaining a correct direction towards the Pareto-optimal front. That is to say, we must decrease the acceptance probability over time. In simulated annealing, as the temperature decreases, the acceptance probability decreases too, which prevents the algorithm from producing too many potential solutions in the end. It is worth mentioning that even if we allow some potential solutions to exist during the procedure, the next nondominated sorting procedure may immediately remove them again, which would defeat the purpose of allowing potential solutions to explore more areas of the search space. Therefore, in the proposed MODESA, each individual is assigned a prior life cycle. The number of prior life cycles indicates how many times the individual can be selected preferentially in the nondominated sorting. Once an individual is selected preferentially in the nondominated sorting, its prior life cycle is decreased by one whenever it is greater than zero. Note that, in the population initialization, the prior life cycle of each individual is set to zero.

In this paper, given a temperature T, the new solution u is selected with a probability

\[
p = \frac{1}{1 + \exp\bigl(\Delta dom_{avg} / T\bigr)}, \qquad (8)
\]

where \(\Delta dom_{avg} = \frac{1}{k} \sum_{i=1}^{k} \mathrm{dom}(x_i, u)\) and k is the number of solutions that dominate the new solution. It is worth mentioning that Δdom_avg represents the average amount of domination between the new solution and the solutions that dominate it. For a fixed Δdom_avg, the acceptance probability decreases as the temperature decreases. The simulated annealing used in the selection operator of MODESA is shown in Algorithm 1.

Algorithm 1: The proposed simulated annealing algorithm.

3.4. Diversity Maintenance Mechanism

The diversity maintenance mechanism adopted in this paper was presented by Kukkonen and Deb [30] for pruning nondominated solutions. Instead of using the crowding distance of Deb et al. [2], the method proposes a new crowding estimate, called the vicinity distance, based on nearest neighbors. In this paper, we call this method the vicinity distance based pruning method for convenience. The basic process of the method is to remove the most crowded solutions from the nondominated set one by one; after each removal, the vicinity distance values of the remaining solutions must be updated. The vicinity distance estimates the density of solutions surrounding a particular solution by taking the product of the distances to its k nearest neighbors. The solution with the smallest vicinity distance is considered the most crowded one. In [30], the value of k is set to the number of objectives; that is, k = m. The reader can refer to [30] for more information about this method.
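A straightforward sketch of the pruning loop follows. For clarity it recomputes all pairwise distances after each removal, whereas [30] updates only the affected neighbors; the function name is ours:

```python
import numpy as np

def prune(F, target_size, k=None):
    """Vicinity-distance based pruning: repeatedly drop the most
    crowded member of a nondominated set (rows of F are objective
    vectors) until target_size remain.  A member's vicinity distance
    is the product of the Euclidean distances to its k nearest
    neighbours (k = number of objectives by default)."""
    F = np.asarray(F, dtype=float)
    keep = list(range(len(F)))
    k = k if k is not None else F.shape[1]
    while len(keep) > target_size:
        sub = F[keep]
        d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                  # ignore self-distance
        vicinity = np.sort(d, axis=1)[:, :k].prod(axis=1)
        keep.pop(int(np.argmin(vicinity)))           # drop most crowded
    return keep   # indices into the original set
```

On a front with one tightly clustered pair, the pruning removes one member of the pair first, preserving the well-spread extremes.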

3.5. The Framework of the Proposed MODESA Algorithm
3.5.1. The Framework of the Proposed MODESA Algorithm

As mentioned before, the proposed algorithm first uses the population initialization phase to generate a population of N solutions. After the nondominated sorting, each solution has a ranking value. Next, the main loop begins. We describe the tth generation of MODESA in the following.

At the tth generation, an empty temporary population is created first. The temporary population stores the new good solutions generated by DE in the loop. Then, for each individual in the current population, the DE operators are used to generate a trial individual. Next, there are three possibilities when comparing the current individual and the trial individual. If the trial individual dominates the current individual, we simply replace the current solution with the trial solution. If the trial individual and the current solution are incomparable, the proposed simulated annealing process is adopted to determine whether the trial individual is accepted. If the trial individual is dominated by the current solution, we also put the trial individual into the temporary population, as MODEA does. After all the individuals are handled, the current population and the temporary population are combined into a union population. Note that, if the current generation is the last generation, the members that are dominated by any other individual must be removed from the union population. The reason is that we allow some potential solutions that are not absolutely good but have a positive life cycle to exist in the population; at the end of the run, these solutions must be removed, or they will degrade the quality of the results. Finally, the nondominated sorting and the vicinity distance based pruning approach are used to select the best N solutions.
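The generation just described can be sketched as follows (a schematic only: the DE variation, the dominance test, the simulated annealing acceptance, and the nondominated sorting plus pruning truncation are supplied as callables, and all names are ours):

```python
def modesa_generation(pop, T, is_last, de_trial, dominates, sa_accept,
                      select_best):
    """One MODESA generation: trials that dominate their target replace
    it in place; dominated trials still go to the temporary population,
    as in MODEA; incomparable trials are accepted probabilistically;
    finally the union is truncated back to the population size."""
    temp = []
    for i in range(len(pop)):
        u = de_trial(pop, i)                 # mutation + crossover
        if dominates(u, pop[i]):
            pop[i] = u                       # trial wins: replace
        elif dominates(pop[i], u):
            temp.append(u)                   # trial loses: keep in temp
        elif sa_accept(u, pop, T):
            temp.append(u)                   # incomparable: SA accept
    union = pop + temp
    if is_last:                              # final generation: purge
        union = [a for j, a in enumerate(union)
                 if not any(dominates(b, a)
                            for k, b in enumerate(union) if k != j)]
    return select_best(union, len(pop))
```

With scalar stand-ins (a number as a "solution", smaller dominating larger), the skeleton can be exercised end to end.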

In this paper, the parameters that need to be set a priori are listed in Table 1, and the pseudocode of MODESA is shown in Pseudocode 1.

Table 1: The parameters used in the proposed algorithm.

Pseudocode 1: Pseudocode of MODESA.

3.5.2. The Computational Complexity Analysis

The proposed MODESA approach is simple to implement. In the following, the overall computational complexity of MODESA is analyzed. The basic operations and their worst-case complexities per generation are as follows: (1) the initialization phase, which selects N solutions out of 2N solutions using nondominated sorting and the vicinity distance based pruning method, is O(MN^2); (2) the mutation procedure is O(N); (3) checking the domination status of the trial solution and the current solution is O(M), and the simulated annealing procedure is O(MN); (4) selecting N solutions out of 2N solutions using nondominated sorting and the vicinity distance based pruning method is O(MN^2).

Here, M and N represent the number of objectives and the population size, respectively. The overall worst-case complexity of MODESA is therefore O(MN^2).

3.5.3. The Difference between MODESA and MODEA

Considering that the proposed MODESA is an improvement of MODEA, it is necessary to clarify the differences between the two. There are two main differences. (1) The handling of the new solution differs. In MODEA, there are two processing paths according to the result of the comparison: if the new solution dominates the current solution, MODEA replaces the current solution with the new solution and puts the current solution into the temporary population; otherwise, the new solution is put into the temporary population. After each solution in the population is handled, the two populations are combined, which means that all solutions are retained. MODESA, by contrast, proceeds in three ways according to the result. If the new solution dominates the current solution, the current solution is replaced with the new solution and discarded directly, which means that a current solution worse than the new solution is not retained. If the new solution and the current solution are incomparable, the proposed simulated annealing algorithm determines the acceptance of the new solution; moreover, the use of a prior life cycle enables some potential solutions to be selected preferentially in the next generation. Otherwise, the new solution is added to the temporary population and the current solution is retained in the current population. (2) The diversity maintenance mechanism differs. In MODEA, the crowding distance based diversity preserving strategy of NSGA-II is utilized, while in MODESA, the vicinity distance based pruning method of [30] is adopted.

4. Results and Discussion

4.1. Test Problems

The performance of the proposed algorithm is tested on a set of five biobjective and two triobjective optimization problems. The first five problems are ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6, described in [31]. The two triobjective optimization problems, DTLZ1 and DTLZ2, are taken from Deb [32]. None of these problems has any constraints. All objective functions are to be minimized. The number of variables, their bounds, the Pareto-optimal solutions, and the characteristics of the Pareto-optimal front of these test problems are shown in Table 2 [2]. For the test problems DTLZ1 and DTLZ2, the number of objectives is set to 3; the number of variables for DTLZ1 and DTLZ2 is therefore 7 and 12, respectively.

Table 2: Test problems used in the paper.
4.2. Performance Measures

In this section, three performance measures are introduced to compare the effectiveness of the proposed MODESA with the other algorithms. Here, let P* and P denote the sampled Pareto-optimal front and the Pareto front obtained by the algorithm, respectively. In this study, the three performance measures evaluate the convergence, the diversity, and both of them together, respectively. It is worth mentioning that all three performance measures can be used only when the Pareto-optimal front of the particular test problem is known. In this paper, 500 evenly distributed solutions in the PF are selected as P* for each biobjective problem, and 990 solutions for each triobjective problem, the same as in [33]. The reader can download these solutions from the website http://dces.essex.ac.uk/staff/qzhang/index.html.

4.2.1. Inverted Generational Distance (IGD)

The concept of the inverted generational distance (IGD) was proposed by Coello and Cortés in [34]. The metric can be formally described as follows:

\[
\mathrm{IGD}(P^*, P) = \frac{\sum_{v \in P^*} d(v, P)}{|P^*|}, \qquad (9)
\]

where d(v, P) is the minimum Euclidean distance between the solution v and the solutions in P. The IGD-metric value evaluates both the convergence and the diversity of P to a certain degree. Algorithms with a smaller IGD value are preferable.

4.2.2. Generational Distance (GD)

van Veldhuizen and Lamont [35] proposed the concept of the generational distance (GD). The metric calculates the distance between P and P*. It can be defined as follows:

\[
\mathrm{GD}(P, P^*) = \frac{\sum_{v \in P} d(v, P^*)}{|P|}, \qquad (10)
\]

where d(v, P*) is the minimum Euclidean distance between the solution v and the solutions in P*. GD evaluates the closeness of the obtained Pareto front to the Pareto-optimal front. Algorithms with a smaller GD value are desirable.

4.2.3. Spread

The spread metric Δ was first proposed by Deb et al. [2] to measure the spread of the nondominated solutions obtained by an algorithm. However, it is only suitable for biobjective problems. Wang et al. [19] extended the metric to problems with more than two objectives. The improved Δ can be stated mathematically as follows:

\[
\Delta(S) = \frac{\sum_{k=1}^{m} d(E_k, S) + \sum_{X \in S} \bigl|\, d(X, S) - \bar{d}\, \bigr|}{\sum_{k=1}^{m} d(E_k, S) + |S|\, \bar{d}}, \qquad (11)
\]

where S is the obtained set of solutions, m is the number of objectives, E_1, ..., E_m are the extreme solutions in the set of the true Pareto front (PF), d(X, S) is the distance from X to its nearest neighbor in S, and d̄ is the mean of these distances. The smaller the value of Δ, the better the distribution and diversity of the obtained nondominated solutions.
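The three metrics above can be sketched together (a sketch; the averaged GD form mirrors the IGD definition, and the spread implementation follows our reading of the generalized formula, so both are assumptions rather than verbatim reproductions of the cited definitions):

```python
import numpy as np

def _min_dists(A, B):
    """Distance from each row of A to the nearest row of B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return np.min(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1),
                  axis=1)

def igd(true_front, front):
    """Inverted generational distance: mean distance from the sampled
    true front to the obtained front."""
    return float(_min_dists(true_front, front).mean())

def gd(front, true_front):
    """Generational distance, averaged variant: mean distance from the
    obtained front to the sampled true front."""
    return float(_min_dists(front, true_front).mean())

def spread(front, extremes):
    """Generalized spread: combines the distances from the true extreme
    points to the front with the variation of the nearest-neighbour
    distances inside the front."""
    F = np.asarray(front, float)
    d_ext = _min_dists(extremes, F).sum()
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)
    return float((d_ext + np.abs(nn - nn.mean()).sum())
                 / (d_ext + len(F) * nn.mean()))
```

A perfectly converged front has IGD and GD equal to zero, and an evenly spaced front containing the extreme points has a spread of zero.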

4.3. Experimental Setup

In this paper, the population size and the number of generations are set to 100 and 250, respectively. The other parameters of NSGA-II and MODEA remain the same as in their original studies [3, 23]. For MODESA, four additional parameters need to be set; the values used in this study are listed in Table 1. Each algorithm is run 10 times independently for each problem. All the test problems have been executed under Microsoft Windows XP on an Intel Core 2 Duo CPU E8400 at 3.00 GHz with 2 GB RAM. All methods have been implemented in C++.

4.4. Discussion of the Results
4.4.1. Results of Biobjective Optimization Problems

Tables 3 and 4 show the mean and standard deviation of the IGD-metric and Δ-metric values of the nondominated solutions obtained in 10 runs on the ZDT family. It is obvious from Tables 3 and 4 that the IGD-metric values of MODESA are the best among the three algorithms, except on the test problem ZDT4.

Table 3: The mean and standard deviation of the IGD-metric values of nondominated solutions in 10 runs on the ZDT problems.
Table 4: The mean and standard deviation of the Δ-metric values of nondominated solutions in 10 runs on the ZDT problems.

The test problems ZDT1 and ZDT2 may be the simplest problems in the ZDT family. The Pareto-optimal front of ZDT1 is convex, while that of ZDT2 is nonconvex. It is clear from Tables 3 and 4 that both the IGD-metric and the spread-metric values of MODESA are better than those of NSGA-II and MODEA. Figures 1 and 2 demonstrate that MODESA converges to the Pareto-optimal front and has a good spread over the entire front on ZDT1 and ZDT2. It can be concluded that the proposed MODESA can deal well with these two kinds of test problems.

Figure 1: Plots of the nondominated solutions with the lowest IGD-metric value found by MODESA in 10 runs in the objective space on ZDT1.
Figure 2: Plots of the nondominated solutions with the lowest IGD-metric value found by MODESA in 10 runs in the objective space on ZDT2.

The third test problem, ZDT3, is different from the two above: its Pareto-optimal front consists of five disjoint curves. From Figure 3, it is obvious that the nondominated solutions obtained by MODESA almost converge to the Pareto-optimal front and have a good distribution over the entire front. From Tables 3 and 4, MODESA obtains the best IGD-metric and spread-metric values among the three algorithms.

Figure 3: Plots of the nondominated solutions with the lowest IGD-metric value found by MODESA in 10 runs in the objective space on ZDT3.

The fourth test problem, ZDT4, is perhaps the most difficult of the five. It has many local Pareto-optimal fronts that can mislead a multiobjective optimization algorithm [2]. From Tables 3 and 4, NSGA-II obtains the best IGD-metric and spread-metric values among the three algorithms. In fact, in the 10 independent runs, some results of MODEA and MODESA are better than those of NSGA-II; however, MODEA and MODESA obtain poor results in a few runs, which affects the mean over the 10 runs. Indeed, the best IGD-metric value of MODESA in the 10 runs is 0.0044915, which is better than that of NSGA-II. Figure 4 shows the plots of the nondominated solutions with the lowest IGD-metric value found by MODESA in the 10 runs. It is clear from Figure 4 that MODESA can achieve quite good convergence and distribution over the Pareto-optimal front, which reveals that the proposed algorithm is promising but unstable on ZDT4. We leave this problem for future work.

Figure 4: Plots of the nondominated solutions with the lowest IGD-metric value found by MODESA in 10 runs in the objective space on ZDT4.

The last test problem, ZDT6, features a thin solution density and a nonuniform spread of solutions [23]. We can observe from Tables 3 and 4 that MODESA obtains the best convergence and spread in comparison with the other two algorithms. To show the superiority of MODESA, plots of the nondominated solutions found by MODESA and MODEA on ZDT6 are shown in Figures 5 and 6, respectively. It is clear from Figures 5 and 6 that MODESA obtains a better spread than MODEA.

Figure 5: Plots of the nondominated solutions with the lowest IGD-metric value found by MODESA in 10 runs in the objective space on ZDT6.
Figure 6: Plots of the nondominated solutions with the lowest IGD-metric value found by MODEA in 10 runs in the objective space on ZDT6.
4.4.2. Results of Triobjective Optimization Problems

In this section, to show the effectiveness of the proposed MODESA on problems with more than two objectives, we choose the two triobjective optimization problems DTLZ1 and DTLZ2 for comparison. The results of NSGA-II, MODEA, and MODESA are given in Table 5. It is obvious from Table 5 that MODESA performs better than the other two algorithms in terms of both the IGD metric and the spread metric. Figures 7, 8, 9, and 10 show the nondominated solutions obtained by MODEA and MODESA, and it is obvious that MODESA outperforms MODEA on both triobjective optimization problems. In particular, MODESA has a better distribution than MODEA; the reason is that the vicinity distance based pruning method outperforms Deb's crowding distance in maintaining the diversity and distribution of the nondominated solutions.

Table 5: The mean and standard deviation of the IGD-metric and Δ-metric values of nondominated solutions in 10 runs on DTLZ1 and DTLZ2.
Figure 7: Plots of the nondominated solutions with the lowest IGD-metric value found by MODEA in 10 runs in the objective space on DTLZ1.
Figure 8: Plots of the nondominated solutions with the lowest IGD-metric value found by MODESA in 10 runs in the objective space on DTLZ1.
Figure 9: Plots of the nondominated solutions with the lowest IGD-metric value found by MODEA in 10 runs in the objective space on DTLZ2.
Figure 10: Plots of the nondominated solutions with the lowest IGD-metric value found by MODESA in 10 runs in the objective space on DTLZ2.
4.5. Effect of the Proposed Simulated Annealing Procedure

As mentioned in Section 3.5.3, one of the main differences between MODEA and MODESA is the proposed simulated annealing procedure. To show its effect, a variant of MODESA, called MODESA-CD (multiobjective differential evolution with crowding distance), is constructed for comparison. MODESA-CD is identical to MODESA except that it replaces the vicinity distance based pruning method with the crowding distance based approach of NSGA-II. In other words, the only difference between MODEA and MODESA-CD is the use of the proposed simulated annealing procedure. Table 6 shows the mean and standard deviation of the GD-metric values of the nondominated solutions in 10 runs. It is obvious from Table 6 that all the GD-metric values of MODESA-CD are better than those of MODEA. The reason for the poor performance of MODEA is that MODEA always accepts both the current and the new solutions into the next generation, which makes the search chaotic and prevents it from maintaining a correct direction towards the Pareto-optimal front. In MODESA-CD, we remove the bad solutions and accept only some potential solutions with a varying probability.

Table 6: The mean and standard deviation of the GD-metric values of nondominated solutions in 10 runs.

5. Conclusion

In this paper, an enhanced differential evolution based algorithm with simulated annealing, named MODESA, is presented for solving MOPs. During selection, MODESA employs a simulated annealing procedure to control the acceptance of every candidate solution, and the concept of the prior life cycle is proposed to allow some potential solutions to enter the next generation and escape from local optima. Finally, a fast and efficient diversity maintenance mechanism is adopted. The proposed algorithm is tested on five biobjective and two triobjective optimization problems in terms of the IGD metric and the spread metric, and the experimental results show that MODESA outperforms the other two algorithms on these test problems except ZDT4. Furthermore, the convergence behavior is also tested in comparison with MODEA; the computational results show the superiority of the proposed MODESA.

The performance of the proposed MODESA on the test problem ZDT4 can be improved further. In short, MODESA is a simple and efficient method for solving MOPs. In the near future, we plan to apply MODESA to constrained MOPs and real-life application problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work is supported by the National Science Foundation of China (nos. 60672018 and 40774065).

References

  1. J. D. Knowles and D. W. Corne, “Approximating the nondominated front using the Pareto Archived Evolution Strategy,” Evolutionary Computation, vol. 8, no. 2, pp. 149–172, 2000.
  2. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  3. E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: improving the strength Pareto evolutionary algorithm,” Tech. Rep., Swiss Federal Institute of Technology, Lausanne, Switzerland, 2003.
  4. Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
  5. C. A. Coello Coello, “An updated survey of GA-based multiobjective optimization techniques,” ACM Computing Surveys, vol. 32, no. 2, pp. 109–143, 2000.
  6. C. A. Coello Coello, D. A. van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems, Springer, New York, NY, USA, 2007.
  7. A. Zhou, B.-Y. Qu, H. Li, S.-Z. Zhao, P. N. Suganthan, and Q. Zhang, “Multiobjective evolutionary algorithms: a survey of the state of the art,” Swarm and Evolutionary Computation, vol. 1, no. 1, pp. 32–49, 2011.
  8. R. Storn and K. Price, “Differential evolution—a simple and efficient adaptive scheme for global optimization over continuous spaces,” Tech. Rep. TR-95-012, Berkeley, Calif, USA, 1995.
  9. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  10. K. V. Price, “Differential evolution vs. the functions of the 2nd ICEO,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 153–157, April 1997.
  11. H. A. Abbass, R. Sarker, and C. Newton, “PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '01), pp. 971–978, Piscataway, NJ, USA, May 2001.
  12. H. A. Abbass, “The self-adaptive Pareto differential evolution algorithm,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 831–836, Honolulu, Hawaii, 2002.
  13. N. K. Madavan, “Multiobjective optimization using a Pareto differential evolution approach,” in Proceedings of the Congress on Evolutionary Computation, pp. 1145–1150, Honolulu, Hawaii, 2002.
  14. F. Xue, A. C. Sanderson, and R. J. Graves, “Multi-objective differential evolution and its application to enterprise planning,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3535–3541, Taipei, Taiwan, September 2003.
  15. K. E. Parsopoulos, D. K. Tasoulis, N. G. Pavlidis, V. P. Plagianakos, and M. N. Vrahatis, “Vector evaluated differential evolution for multiobjective optimization,” in Proceedings of the Congress on Evolutionary Computation (CEC '04), pp. 204–211, Portland, Ore, USA, June 2004.
  16. S. Kukkonen and J. Lampinen, “An extension of generalized differential evolution for multi-objective optimization with constraints,” in Parallel Problem Solving from Nature (PPSN 2004), pp. 752–761, 2004.
  17. S. Kukkonen and J. Lampinen, “GDE3: the third evolution step of generalized differential evolution,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '05), pp. 443–450, Edinburgh, UK, September 2005.
  18. A. G. Hernández-Díaz, L. V. Santana-Quintero, C. Coello, R. Caballero, and J. Molina, “A new proposal for multi-objective optimization using differential evolution and rough sets theory,” in Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pp. 675–682, Seattle, Wash, USA, July 2006.
  19. Y.-N. Wang, L.-H. Wu, and X.-F. Yuan, “Multi-objective self-adaptive differential evolution with elitist archive and crowding entropy-based diversity measure,” Soft Computing, vol. 14, no. 3, pp. 193–209, 2010.
  20. V. L. Huang, A. K. Qin, P. N. Suganthan, and M. F. Tasgetiren, “Multi-objective optimization based on self-adaptive differential evolution algorithm,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '07), pp. 3601–3608, September 2007.
  21. V. L. Huang, S. Z. Zhao, R. Mallipeddi, and P. N. Suganthan, “Multi-objective optimization using self-adaptive differential evolution algorithm,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '09), pp. 190–194, May 2009.
  22. M. Ali, M. Pant, and A. Abraham, “A modified differential evolution algorithm and its application to engineering problems,” in Proceedings of the International Conference on Soft Computing and Pattern Recognition (SoCPaR '09), pp. 196–201, December 2009.
  23. M. Ali, P. Siarry, and M. Pant, “An efficient differential evolution based algorithm for solving multi-objective optimization problems,” European Journal of Operational Research, vol. 217, no. 2, pp. 404–416, 2012.
  24. H. R. Tizhoosh, “Opposition-based learning: a new scheme for machine intelligence,” in Proceedings of the International Conference on Computational Intelligence for Modelling Control and Automation (CIMCA '05), pp. 695–701, November 2005.
  25. S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
  26. S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, no. 6, pp. 721–741, 1984.
  27. B. Suman and P. Kumar, “A survey of simulated annealing as a tool for single and multiobjective optimization,” Journal of the Operational Research Society, vol. 57, no. 10, pp. 1143–1160, 2006.
  28. S. Bandyopadhyay, S. Saha, U. Maulik, and K. Deb, “A simulated annealing-based multiobjective optimization algorithm: AMOSA,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 3, pp. 269–283, 2008.
  29. K. Miettinen, Nonlinear Multiobjective Optimization, Kluwer Academic Publishers, 1999.
  30. S. Kukkonen and K. Deb, “A fast and effective method for pruning of non-dominated solutions in many-objective problems,” in Parallel Problem Solving from Nature—PPSN IX, vol. 4193 of Lecture Notes in Computer Science, pp. 553–562, 2006.
  31. E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: empirical results,” Evolutionary Computation, vol. 8, no. 2, pp. 173–195, 2000.
  32. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Chichester, UK, 2001.
  33. H. Li and Q. Zhang, “Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 284–302, 2009.
  34. C. A. C. Coello and N. C. Cortés, “Solving multiobjective optimization problems using an artificial immune system,” Genetic Programming and Evolvable Machines, vol. 6, no. 2, pp. 163–190, 2005.
  35. D. A. van Veldhuizen and G. B. Lamont, “Multiobjective evolutionary algorithm research: a history and analysis,” Tech. Rep. TR-98-03, Department of Electrical and Computer Engineering, Graduate School of Engineering, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA, 1998.