Abstract and Applied Analysis
Volume 2012 (2012), Article ID 172041, 12 pages
http://dx.doi.org/10.1155/2012/172041
Research Article

Multiobjective Differential Evolution Algorithm with Multiple Trial Vectors

1Institute of Information and System Science, Beifang University of Nationalities, Yinchuan 750021, China
2Department of Mathematics, Yinchuan College, China University of Mining and Technology, Yinchuan 750011, China

Received 29 May 2012; Accepted 5 June 2012

Academic Editor: Yonghong Yao

Copyright © 2012 Yuelin Gao and Junmei Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a multiobjective differential evolution algorithm with multiple trial vectors. For each individual in the population, three trial individuals are produced by the mutation operator, and the offspring are produced by applying the crossover operator to these three trial individuals. Good individuals are selected from the parent and the offspring and placed in an intermediate population. Finally, the intermediate population is sorted according to the Pareto dominance relations and the crowding distance, and the outstanding individuals are selected as the next evolutionary population. Compared with the classical multiobjective optimization algorithm NSGA-II, the proposed algorithm has better convergence, and the obtained Pareto optimal solutions have better diversity.

1. Introduction

Multiobjective optimization problems (MOPs) differ from single objective optimization problems (SOPs). In MOPs, the objective functions often conflict with one another, so a single best solution usually does not exist and is difficult to find. Instead, an MOP requires finding a set of noninferior solutions, called the Pareto optimal solution set (or nondominated optimal solution set). The key is to find a solution set that is as close to the Pareto front and as uniformly distributed as possible. In the past ten years, evolutionary algorithms have been widely used for solving multiobjective optimization problems; typical algorithms include NSGA [1], NSGA-II [2], SPEA [3], SPEA2 [4], and so on.

Differential evolution (DE) [5] is an important branch of evolutionary algorithms. Because of its ease of use and robustness, it has been widely applied in many fields. The basic DE algorithm is a greedy algorithm designed for SOPs, where individual selection is based on a scalar objective function value. In MOPs, the objective function value is a vector, and individual selection must be based on that vector. Several scholars have adapted the DE algorithm to solve MOPs. Abbass et al. [6] proposed a Pareto-frontier differential evolution algorithm (PDE) for MOPs, which was later improved into a self-adaptive version (SPDE) [7]. Madavan [8] proposed a Pareto differential evolution approach (PDEA). Xue et al. [9], Robič and Filipič [10], and Qian and Li [11] each proposed different kinds of multiobjective differential evolution algorithms. Gong and Cai [12] proposed an improved multiobjective differential evolution based on Pareto-adaptive ε-dominance and orthogonal design.

Here, a multiobjective differential evolution algorithm with multiple trial vectors (MTVDE) is proposed. In this algorithm, for each individual in the parent population, three different mutation operators and a crossover operator are used to produce three trial individuals. First, the three trial individuals are compared with one another: if one individual Pareto-dominates another, the better one is retained; if two individuals do not dominate each other, both are retained. Second, the retained trial individuals are compared with the parent individual: if one individual Pareto-dominates the other, the nondominated individual is put into the intermediate population; if the two individuals do not dominate each other, both are put into the intermediate population. Finally, the individuals in the intermediate population are sorted according to the Pareto dominance relations and the crowding distance; low-quality individuals are eliminated, and the remaining fine individuals form the next evolutionary population. The individuals are sorted according to the nondominated relations, and individuals with a lower ordinal value are better; among individuals with the same ordinal value, the one with the larger crowding distance is better.

2. Multiobjective Optimization Problem and Related Concepts

Below, we give several concepts related to this paper. Because maximization problems and minimization problems can be transformed into each other, we consider only minimization multiobjective optimization problems.

Definition 2.1 (multiobjective optimization problems, MOPs). The MOPs are described as follows:

$$\begin{aligned}
\min\ & y = f(x) = \bigl(f_1(x), f_2(x), \ldots, f_k(x)\bigr) \\
\text{s.t.}\ & g(x) = \bigl(g_1(x), g_2(x), \ldots, g_l(x)\bigr) \le 0, \\
& h(x) = \bigl(h_{l+1}(x), h_{l+2}(x), \ldots, h_m(x)\bigr) = 0, \\
& x = (x_1, x_2, \ldots, x_n) \in X, \quad y = (y_1, y_2, \ldots, y_k) \in Y,
\end{aligned} \tag{2.1}$$

where $y \in R^k$ denotes the objective vector, $x \in R^n$ denotes the decision vector, $X$ denotes the decision space of the decision vector $x$, and $Y$ denotes the objective space of the objective vector $y$.

Definition 2.2 (the dominance relationship of objective vectors). The objective vector $u = (u_1, u_2, \ldots, u_k)$ is better than $v = (v_1, v_2, \ldots, v_k)$ if and only if $u_i \le v_i$ for all $i \in \{1, 2, \ldots, k\}$ and there exists at least one $i \in \{1, 2, \ldots, k\}$ such that $u_i < v_i$. We then say that $u$ is Pareto-superior to $v$, or that $v$ is Pareto-inferior to $u$.
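The dominance check of Definition 2.2 can be sketched directly in Python (the paper gives no code; the function name `dominates` is ours):

```python
def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v (minimization):
    u is no worse than v in every objective and strictly better in at least one."""
    return (all(ui <= vi for ui, vi in zip(u, v))
            and any(ui < vi for ui, vi in zip(u, v)))
```

Note that two vectors may be mutually nondominated, e.g. (1, 3) and (2, 2); neither `dominates` call returns True in that case.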

Definition 2.3 (the dominance relationship of decision vectors). The decision vector $z = (z_1, z_2, \ldots, z_n)$ Pareto-dominates $w = (w_1, w_2, \ldots, w_n)$ if and only if $f(z)$ is Pareto-superior to $f(w)$; we then say that $z$ Pareto-dominates $w$, or that $w$ is Pareto-dominated by $z$.

Definition 2.4 (Pareto optimal solution). A decision vector $x_u \in F$ ($F$ is the feasible solution set) is a Pareto optimal solution if and only if no individual in $F$ dominates $x_u$; $x_u$ is then called a Pareto optimal solution (or nondominated optimal solution) of the MOP. The set of all Pareto optimal solutions is called the Pareto optimal solution set (or nondominated optimal solution set) of the MOP.

Definition 2.5 (Pareto front). For a given MOP, the set of objective function value vectors corresponding to the Pareto optimal solution set is called the Pareto front of the MOP. Our task is therefore to make the obtained nondominated optimal solution set as close as possible to the theoretical Pareto optimal solution set; viewed in objective space, the obtained front should be as close as possible to the theoretical Pareto front and uniformly distributed along it.

Definition 2.6 (individual nondominated relationship sort, INDR). There are $n$ individuals in the population $P$. For each individual $x_i$, $i = 1, 2, \ldots, n$, let $S_i$ denote the set of individuals dominated by $x_i$ and let $n_i$ denote the number of individuals that dominate $x_i$. Every individual with $n_i = 0$ belongs to the first level: its ordinal value is $\mathrm{rank}_i = 1$, and it joins $F_1$ (the set of individuals with ordinal value 1). Initialize the level counter $\mathrm{rank} = 1$. While $F_{\mathrm{rank}} \ne \emptyset$, for each individual $x_i$ in $F_{\mathrm{rank}}$ and each $x_j$ in the corresponding $S_i$, set $n_j = n_j - 1$; if $n_j$ becomes 0, then $x_j$ belongs to level $\mathrm{rank} + 1$, its ordinal value is $\mathrm{rank}_j = \mathrm{rank} + 1$, and $x_j$ joins $F_{\mathrm{rank}+1}$. Then update the counter, $\mathrm{rank} = \mathrm{rank} + 1$. In this way, the ordinal value of every individual in the population $P$ is computed.
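The bookkeeping of Definition 2.6 is the fast nondominated sort of NSGA-II. A sketch in Python over a list of objective vectors (function and variable names are ours):

```python
def fast_nondominated_sort(objs):
    """Assign each individual a front ordinal (1 = nondominated), following
    Definition 2.6: S[i] holds the indices dominated by i, counts[i] is the
    number of individuals that dominate i."""
    def dominates(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))

    n = len(objs)
    S = [[] for _ in range(n)]       # S_i: indices dominated by i
    counts = [0] * n                 # n_i: how many dominate i
    rank = [0] * n
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:           # nobody dominates i: first level
            rank[i] = 1
            fronts[0].append(i)
    k = 0
    while fronts[k]:                 # peel off one level at a time
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    rank[j] = k + 2
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return rank
```

For example, with objective vectors [1, 1], [2, 2], and [0, 3], the first and third are mutually nondominated (level 1) and the second is dominated by the first (level 2).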

Definition 2.7 (crowding distance of individuals within the same level, ICD). Let $I$ be the set of individuals in a given level. For each objective $m$ $(m = 1, 2, \ldots)$, the individuals in $I$ are sorted by the $m$-th objective function value. The first and last individuals receive infinite crowding distance. For every other individual, the contribution of objective $m$ is the squared difference of the objective values of its two neighbors; the crowding distance of the individual is the sum of these contributions over all objectives. Individuals within the same level are then sorted by crowding distance.
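A sketch of Definition 2.7 in Python. Note that it follows this paper's description (squared neighbor difference); classical NSGA-II uses the normalized plain difference instead. The function name is ours:

```python
def crowding_distance(objs):
    """Crowding distance within one front, per Definition 2.7: for each
    objective, sort the front, give the boundary individuals infinite
    distance, and add the SQUARED difference of the two neighbors'
    objective values (this paper's variant of the NSGA-II formula)."""
    n = len(objs)
    k = len(objs[0])
    dist = [0.0] * n
    for m in range(k):
        order = sorted(range(n), key=lambda i: objs[i][m])
        dist[order[0]] = dist[order[-1]] = float('inf')
        for pos in range(1, n - 1):
            i = order[pos]
            if dist[i] != float('inf'):
                dist[i] += (objs[order[pos + 1]][m]
                            - objs[order[pos - 1]][m]) ** 2
    return dist
```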

3. Multiobjective Differential Evolution Algorithm with Multiple Trial Vectors (MTVDE)

3.1. The Mutation Operation

The mutation components are difference vectors taken from the parent generation; each difference vector is built from distinct individuals of the parent population. The mutation operation is based on the difference vectors, and its equations are

$$\begin{aligned}
v_{i1}^t &= x_i^t + F\,(x_{r1}^t - x_{r2}^t), \\
v_{i2}^t &= x_{r1}^t + F\,(x_{r2}^t - x_{r3}^t), \\
v_{i3}^t &= x_i^t + F\,(x_{r1}^t - x_{r2}^t) + F\,(x_{r3}^t - x_{r4}^t),
\end{aligned} \tag{3.1}$$

where $v_{i1}^t, v_{i2}^t, v_{i3}^t$ are the mutant individuals and $x_i^t$ is an individual of the parent population. The individuals $x_{r1}^t, x_{r2}^t, x_{r3}^t, x_{r4}^t$ are distinct from each other and from $x_i^t$. $F \in (0, 2]$ is the scaling factor; it controls the influence of the difference vector on the offspring. Through these mutation operators, each individual in the parent population produces three mutant individuals, which enhances the diversity of the population.
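The three mutation operators of Eq. (3.1) can be sketched in Python over a population of plain lists (the helper name `mutate` and the index-sampling scheme are ours; indices $r_1, \ldots, r_4$ are drawn distinct and different from $i$, as the text requires):

```python
import random

def mutate(pop, i, F=0.5):
    """Produce the three mutant vectors of Eq. (3.1) for individual i.
    r1..r4 are distinct indices different from i; F is the scaling factor."""
    others = [j for j in range(len(pop)) if j != i]
    r1, r2, r3, r4 = random.sample(others, 4)
    x = pop[i]
    D = len(x)
    v1 = [x[d] + F * (pop[r1][d] - pop[r2][d]) for d in range(D)]
    v2 = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
    v3 = [x[d] + F * (pop[r1][d] - pop[r2][d])
               + F * (pop[r3][d] - pop[r4][d]) for d in range(D)]
    return v1, v2, v3
```

The population must contain at least five individuals so that four distinct partners can be sampled.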

The value of $F$ has a certain influence on the algorithm's performance. If $F$ is too large, the convergence rate slows and the algorithm runs longer; if $F$ is too small, the diversity of the population decreases. According to experimental tests, $F$ is often set to 0.5.

3.2. Crossover Operator

The mutant individuals $v_i(t+1)$, $(i = i_1, i_2, i_3)$, and the parent individual $x_i(t)$ undergo the crossover operator to generate the trial individuals $u_i(t+1)$, $(i = i_1, i_2, i_3)$. The crossover operator is expressed as

$$u_{ij}(t+1) = \begin{cases} v_{ij}(t+1), & \text{if } \operatorname{rand}(0,1) \le \mathrm{CR} \text{ or } j = \operatorname{rand}_i(1, D), \\ x_{ij}(t), & \text{otherwise}, \end{cases} \tag{3.2}$$

where $\operatorname{rand}(0,1)$ is a uniformly distributed random number in $(0,1)$ and $\operatorname{rand}_i(1, D)$ is an integer chosen at random from $\{1, 2, \ldots, D\}$, which ensures that at least one component of $u_i(t+1)$ comes from $v_i(t+1)$. $\mathrm{CR} \in [0, 1]$ is the crossover probability; it controls whether each component of $u_i(t+1)$ is taken from $v_i(t+1)$ or from $x_i(t)$. If $\mathrm{CR} = 1$, then $u_i(t+1) = v_i(t+1)$.
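Eq. (3.2) is the standard binomial crossover of DE. A minimal Python sketch (the function name `crossover` is ours):

```python
import random

def crossover(x, v, CR=0.9):
    """Binomial crossover of Eq. (3.2): each component of the trial vector
    comes from the mutant v with probability CR; the randomly chosen index
    jrand is always taken from v, so the trial differs from the parent x."""
    D = len(x)
    jrand = random.randrange(D)  # rand_i(1, D), 0-based here
    return [v[j] if (random.random() < CR or j == jrand) else x[j]
            for j in range(D)]
```

With CR = 1 the trial equals the mutant; with CR = 0 exactly one component (index `jrand`) still comes from the mutant.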

3.3. Selecting Operation

For MOPs, this paper uses the following criteria to select the next evolution population. (1) For each individual in the parent population, three different mutation operators and a crossover operator are used to produce three trial individuals. Among the three trial individuals and the parent individual, any individual that Pareto-dominates another is retained in the intermediate population; if two individuals do not dominate each other, both are retained. (2) Some low-quality individuals in the intermediate population must be eliminated. The individuals in the intermediate population are sorted according to the Pareto dominance relations and the crowding distance: first, they are divided into several levels according to the nondominated relations; then, individuals within the same level are sorted by crowding distance. The first NP individuals form the next evolution population.
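Step (1) of this selection can be sketched over objective vectors alone (the function name `select_trials` is ours; mapping the surviving vectors back to their decision vectors is omitted for brevity):

```python
def select_trials(parent_obj, trial_objs):
    """Compare the parent's and the three trials' objective vectors pairwise
    (minimization) and keep every vector not Pareto-dominated by another,
    as in step (1) of the selection criteria."""
    def dominates(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))

    candidates = [parent_obj] + list(trial_objs)
    return [c for i, c in enumerate(candidates)
            if not any(dominates(d, c)
                       for j, d in enumerate(candidates) if j != i)]
```

For example, a parent at (2, 2) with trials (1, 1), (3, 3), and (0, 4) keeps only (1, 1) and (0, 4): the first dominates both the parent and (3, 3), while (0, 4) is nondominated.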

3.4. MTVDE Description of the Algorithm

Step 1. Set the basic parameters: the population size NP, the scaling factor $F$, the maximum number of generations, and the crossover probability CR.

Step 2. Initialize the population.

Step 3. Mutation: apply the mutation operation (3.1) to generate three mutant individuals.

Step 4. Crossover: apply the crossover operation (3.2) to generate three trial individuals.

Step 5. Select the next evolution population.

Step 6. If the maximum number of iterations is reached, stop and output the optimal solutions; otherwise, return to Step 3.

4. Numerical Experiments

4.1. Algorithm Performance Evaluation Criteria

Multiobjective optimization algorithm performance can be evaluated by two criteria: (1) the obtained nondominated optimal solution set should be as close as possible to the true Pareto front; (2) the obtained nondominated optimal solution set should be distributed as uniformly as possible along the true Pareto front.

The two indexes are used to test the convergence and distribution of the solution set, respectively. The definitions of the approximation index and the uniformity index are given below.

(a) The approximation index is defined as follows:

$$\gamma = \frac{\sum_{i=1}^{n} d_i}{n}, \tag{4.1}$$

where $n$ is the number of vectors in the set of nondominated solutions found so far and $d_i$ is the Euclidean distance (measured in objective space) between each of these solutions and the nearest member of the Pareto optimal set, $d_i = \min\{\,|X_i - Y_j|,\ j = 1, 2, \ldots, N\,\}$, where $N$ denotes the total number of true Pareto front vectors, $X_i$ denotes the obtained nondominated optimal solutions, $Y_j$ denotes the true Pareto front, and $|\cdot|$ denotes Euclidean distance. The smaller the value of $\gamma$, the more closely the algorithm approximates the true Pareto front; clearly, $\gamma = 0$ indicates that all the generated elements lie on the Pareto front.
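Eq. (4.1) can be sketched as follows (the paper gives no code; `gamma_index` is our name, and the true front is assumed to be a finite sample):

```python
import math

def gamma_index(front, true_front):
    """Convergence metric gamma of Eq. (4.1): mean Euclidean distance, in
    objective space, from each obtained nondominated vector to its nearest
    point on the (sampled) true Pareto front."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(x, y) for y in true_front) for x in front) / len(front)
```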

(b) The uniformity index is defined as follows:

$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{n-1} \bigl|d_i - \bar{d}\bigr|}{d_f + d_l + (n-1)\,\bar{d}}, \tag{4.2}$$

where $d_i$ is the Euclidean distance between neighboring solutions in the obtained nondominated solution set and $\bar{d}$ is the mean of all $d_i$. The parameters $d_f$ and $d_l$ are the Euclidean distances between the extreme solutions of the true front and the boundary solutions of the obtained nondominated set: $d_f = |X_f - Y_f|$, where $X_f$ is the leftmost obtained nondominated objective vector and $Y_f$ is the leftmost vector of the theoretical Pareto front, and $d_l = |X_l - Y_l|$, where $X_l$ is the rightmost obtained nondominated objective vector and $Y_l$ is the rightmost vector of the theoretical Pareto front; $|\cdot|$ denotes Euclidean distance. The smaller the value of $\Delta$, the better the diversity and uniformity of the obtained nondominated solutions.
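A sketch of Eq. (4.2), assuming the obtained front is already sorted along the front and the true extremes are supplied explicitly (the name `delta_index` is ours):

```python
import math

def delta_index(front, y_first, y_last):
    """Spread metric Delta of Eq. (4.2). front: obtained nondominated
    objective vectors sorted along the front; y_first, y_last: extreme
    points of the theoretical Pareto front. d_f and d_l are the distances
    from the obtained boundary solutions to those extremes; d_i are the
    consecutive gaps between neighboring obtained solutions."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    gaps = [dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    dbar = sum(gaps) / len(gaps)
    df = dist(front[0], y_first)
    dl = dist(front[-1], y_last)
    return ((df + dl + sum(abs(g - dbar) for g in gaps))
            / (df + dl + (len(front) - 1) * dbar))
```

A perfectly uniform front that reaches both true extremes yields $\Delta = 0$.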

4.2. Numerical Experiments and Analysis

In order to assess how competitive the MTVDE algorithm is, it was compared with NSGA-II. In NSGA-II, $\eta_c = 20$, $\eta_m = 20$, pool size = 100, and tour size = 2. In MTVDE, CR = 0.9, except for problem ZDT4, where CR = 0.3. For all test functions, the population size of both algorithms is 100, and the number of iterations is 250. Figures 1, 2, 3, 4, 5, 6, 7, and 8 show the results of the MTVDE algorithm on eight multiobjective test functions. It can be seen from the figures that MTVDE produces a more uniformly distributed solution set, and the solution quality is better than that of NSGA-II.

Figure 1: ZDT1.
Figure 2: ZDT2.
Figure 3: ZDT3.
Figure 4: ZDT4.
Figure 5: ZDT6.
Figure 6: SCH.
Figure 7: DTLZ1.
Figure 8: DTLZ2.

Tables 1, 2, 3, 4, 5, and 6 report the approximation index and the uniformity index (mean value and variance over 10 runs) for the eight test functions. For the first five functions, the MTVDE algorithm is compared with the classic algorithms in [8, 9, 11, 12]. For the remaining three functions, the MTVDE algorithm is compared with NSGA-II.

Table 1: Testing result statistics for the problem ZDT1.
Table 2: Testing result statistics for the problem ZDT2.
Table 3: Testing result statistics for the problem ZDT3.
Table 4: Testing result statistics for the problem ZDT4.
Table 5: Testing result statistics for the problem ZDT6.
Table 6: Testing result statistics for the problems SCH, DTLZ1, and DTLZ2.

N/A denotes that the algorithm does not report the index; the PDEA algorithm in [8] reports the index GD (generational distance) instead of the convergence metric, and the MODE algorithm in [9] did not report the diversity metric. It can be seen from Tables 1–6 that the MTVDE algorithm obtains solutions with better convergence, except on the problems ZDT1, ZDT2, and SCH.

In this experiment, the population size is 250. Through experiments, we found that increasing the population size does not improve the performance of the algorithm and only increases the running time. To examine the impact of the scaling factor $F$ and the crossover probability CR on the algorithm's performance, we performed 50 independent runs on problem SCH. The test data show that $F$ and CR have little effect on performance; by comparison, $F = 0.5$ and CR = 0.9 give slightly better results, but the performance indexes (approximation and uniformity) are not particularly sensitive to the values of $F$ and CR (Table 7).

Table 7

5. Summary

In the MTVDE algorithm, an improved DE algorithm is used to solve MOPs. The main differences between MTVDE and other MOEA algorithms are the three trial vectors and the new selection method. Eight benchmark functions are used to test the MTVDE algorithm, and the experimental results show that MTVDE outperforms most MOEAs. In order to find the best solutions, different values of the crossover probability were tested; the results on ZDT4 are very sensitive to the crossover probability (CR = 0.3 gives better results). In short, the algorithm in this paper can effectively converge to the Pareto front of the problem while maintaining the diversity of the Pareto optimal solution set; it is an effective algorithm for solving multiobjective optimization problems. In future work, we intend to use the MTVDE algorithm, or suitably improved versions of it, to solve other complex constrained MOPs.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 60962006 and by the Research Projects Foundation of the State Ethnic Affairs Commission.

References

  1. N. Srinivas and K. Deb, “Multi-objective optimization using non-dominated sorting in genetic algorithms,” Evolutionary Computation, vol. 2, no. 3, pp. 221–248, 1994.
  2. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  3. E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
  4. E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: improving the strength Pareto evolutionary algorithm,” Tech. Rep. 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH) Zurich, Zurich, Switzerland, 2001.
  5. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  6. H. A. Abbass, R. Sarker, and C. Newton, “PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems,” in Proceedings of the Congress on Evolutionary Computation (CEC'01), pp. 971–978, May 2001.
  7. H. A. Abbass, “PDE: the self-adaptive Pareto differential evolution algorithm,” in Proceedings of the Congress on Evolutionary Computation (CEC'02), vol. 1, pp. 831–836, IEEE Service Center, Piscataway, NJ, USA, 2002.
  8. N. K. Madavan, “Multi-objective optimization using a Pareto differential evolution approach,” in Proceedings of the Congress on Evolutionary Computation (CEC'02), vol. 2, pp. 1145–1150, IEEE Service Center, Piscataway, NJ, USA, 2002.
  9. F. Xue, A. C. Sanderson, and R. J. Graves, “Pareto-based multi-objective differential evolution,” in Proceedings of the Congress on Evolutionary Computation (CEC'03), vol. 2, pp. 862–869, IEEE Press, Canberra, Australia, 2003.
  10. T. Robič and B. Filipič, “DEMO: differential evolution for multi-objective optimization,” in Lecture Notes in Computer Science, pp. 520–533, Springer, Berlin, Germany, 2005.
  11. W. Qian and A. Li, “Adaptive differential evolution algorithm for multiobjective optimization problems,” Applied Mathematics and Computation, vol. 201, no. 1-2, pp. 431–440, 2008.
  12. W. Gong and Z. Cai, “An improved multiobjective differential evolution based on Pareto-adaptive ε-dominance and orthogonal design,” European Journal of Operational Research, vol. 198, no. 2, pp. 576–601, 2009.