Abstract and Applied Analysis
Volume 2012, Article ID 172041, 12 pages
http://dx.doi.org/10.1155/2012/172041
Research Article

## Multiobjective Differential Evolution Algorithm with Multiple Trial Vectors

1Institute of Information and System Science, Beifang University of Nationalities, Yinchuan 750021, China
2Department of Mathematics, Yinchuan College, China University of Mining and Technology, Yinchuan 750011, China

Received 29 May 2012; Accepted 5 June 2012

Copyright © 2012 Yuelin Gao and Junmei Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents a multiobjective differential evolution algorithm with multiple trial vectors. For each individual in the population, three trial individuals are produced by the mutation operator, and the offspring are produced by applying the crossover operator to the three trial individuals. Good individuals are selected from the parent and the offspring and put into the intermediate population. Finally, the intermediate population is sorted according to the Pareto dominance relations and the crowding distance, and the outstanding individuals are selected as the next evolutionary population. Compared with the classical multiobjective optimization algorithm NSGA-II, the proposed algorithm has better convergence, and the obtained Pareto optimal solutions have better diversity.

#### 1. Introduction

Multiobjective optimization problems (MOPs) differ from single-objective optimization problems (SOPs). In MOPs the objective functions often conflict with one another, so a single best solution usually does not exist and is difficult to find. Instead, MOPs require finding a noninferior solution set, called the Pareto optimal solution set (or nondominated optimal solution set). The key is to find a solution set that is as close to the Pareto front and as uniformly distributed as possible. In the past ten years, evolutionary algorithms have been widely used for solving multiobjective optimization problems; typical algorithms include NSGA [1], NSGA-II [2], SPEA [3], SPEA2 [4], and so on.

Differential evolution (DE) [5] is an important branch of evolutionary algorithms. Because of its ease of use and robustness, it has been widely applied in many fields. The basic DE algorithm is a greedy algorithm designed for SOPs, where individual selection is based on the objective function value. In MOPs the objective function value is a vector, and individual selection must be based on that vector. Several scholars have adapted the DE algorithm to solve MOPs. Abbass et al. [6] proposed a Pareto-frontier differential evolution algorithm (PDE) for MOPs. Later, the PDE algorithm was improved into a self-adaptive PDE algorithm (SPDE) [7]. Madavan [8] proposed a Pareto-based differential evolution algorithm (PDEA). Xue et al. [9], Robič and Filipič [10], and Qian and Li [11] proposed different kinds of multiobjective differential evolution algorithms. Gong and Cai [12] proposed an improved multiobjective differential evolution based on Pareto-adaptive ε-dominance and orthogonal design.

Here, a multiobjective differential evolution algorithm with multiple trial vectors (MTVDE) is proposed. In this algorithm, for each individual in the parent population, three different mutation operators and a crossover operator are used to produce three trial individuals. First, the three trial individuals are compared with one another: if one individual Pareto-dominates another, the better one is retained; if two individuals do not dominate each other, both are retained. Second, the retained trial individuals are compared with the parent individual: if one individual Pareto-dominates another, the nondominated individual is put into the intermediate population; if the two individuals do not dominate each other, both are put into the intermediate population. Finally, the individuals of the intermediate population are sorted according to the Pareto dominance relations and the crowding distance; some low-quality individuals are eliminated and the fine individuals are retained to form the next evolutionary population. Individuals are sorted according to the nondominance relations, and individuals with a lower ordinal value are better; among individuals with the same ordinal value, the one with the larger crowding distance is better than the one with the smaller crowding distance.

#### 2. Multiobjective Optimization Problem and Related Concepts

Below, we give several concepts related to this paper. Because maximization problems and minimization problems can be transformed into each other, we consider only the minimization form of the multiobjective optimization problem.

Definition 2.1 (multiobjective optimization problems, MOPs). The MOPs are described as follows:
$$\min F(x) = (f_1(x), f_2(x), \ldots, f_m(x)), \quad \text{s.t. } x = (x_1, x_2, \ldots, x_n) \in X \subseteq \mathbb{R}^n,$$
where $F(x) \in Y \subseteq \mathbb{R}^m$ indicates the target vector, $x$ indicates the decision vector, $X$ indicates the decision space of the decision vector $x$, and $Y$ indicates the target space of the target vector $F(x)$.

Definition 2.2 (the dominance relationship of target vectors). The target vector $u = (u_1, \ldots, u_m)$ is better than $v = (v_1, \ldots, v_m)$ if and only if $u_i \le v_i$ for all $i \in \{1, \ldots, m\}$ and there exists at least one $j$ such that $u_j < v_j$; the target vector $u$ is then Pareto-superior to $v$, or $v$ is Pareto-inferior to $u$.
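The dominance test of Definition 2.2 can be written as a small predicate over objective vectors. This is a minimal illustrative sketch (the function name `dominates` and the list representation of objective vectors are our assumptions, not from the paper), assuming minimization:

```python
def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v under
    minimization: u is no worse in every objective and strictly
    better in at least one (Definition 2.2)."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))
```

Note that the relation is a partial order: two vectors may be mutually nondominated, e.g. `[1, 3]` and `[2, 2]`.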

Definition 2.3 (the dominance relation of decision vectors). The decision vector $x_a$ Pareto-dominates $x_b$ if and only if $F(x_a)$ is Pareto-superior to $F(x_b)$; we then say the decision vector $x_a$ Pareto-dominates $x_b$, or the decision vector $x_b$ is Pareto-dominated by $x_a$.

Definition 2.4 (Pareto optimal solution). A decision vector $x^* \in X$ ($X$ is the feasible solution set) is a Pareto optimal solution if and only if there does not exist an individual $x \in X$ that dominates $x^*$; $x^*$ is then called a Pareto optimal solution of the MOPs, or a nondominated optimal solution of the MOPs. The set of all Pareto optimal solutions is known as the MOPs' Pareto optimal solution set or nondominated optimal solution set.

Definition 2.5 (Pareto front). For given MOPs, the set of objective function value vectors corresponding to the Pareto optimal solution set is known as the MOPs' Pareto front. Our task is therefore to make the current nondominated optimal solution set as close as possible to the theoretical Pareto optimal solution set; viewed in terms of the Pareto front, the obtained front should be as close as possible to the theoretical Pareto front and as uniformly distributed as possible.

Definition 2.6 (individual nondominance relationship sort, INDR). Let $P$ be a population of $N$ individuals. For each individual $p \in P$, let $S_p$ represent the set of individuals dominated by $p$ and $n_p$ represent the number of individuals dominating $p$. If $n_p = 0$, then $p$ belongs to the first level; its ordinal value is 1, and it is joined to $F_1$ ($F_k$ indicates the set of individuals whose ordinal value is $k$). Initialize the level counter $k = 1$. For each individual $p \in F_k$ and each individual $q \in S_p$, decrease the corresponding counter $n_q$ by one; if $n_q = 0$, then $q$ belongs to the $(k+1)$th level, its ordinal value is $k + 1$, and it is joined to $F_{k+1}$. Update the counter $k = k + 1$ and repeat until every individual has been assigned a level. As a result, we can calculate the ordinal value of each individual in the population $P$.
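The sorting procedure of Definition 2.6 is the fast nondominated sort used in NSGA-II. A self-contained sketch (names such as `nondominated_sort` and the 0-based front indices are our choices):

```python
def nondominated_sort(objs):
    """Rank objective vectors into fronts (Definition 2.6).
    Returns rank[i] = front index of individual i, 0 being
    the nondominated first level."""
    def dom(u, v):
        # u dominates v: no worse everywhere, not identical
        return all(a <= b for a, b in zip(u, v)) and u != v

    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # S_p: individuals p dominates
    dom_count = [0] * n                     # n_p: number dominating p
    rank = [0] * n
    for p in range(n):
        for q in range(n):
            if dom(objs[p], objs[q]):
                dominated_by[p].append(q)
            elif dom(objs[q], objs[p]):
                dom_count[p] += 1
    # peel off fronts level by level
    front = [p for p in range(n) if dom_count[p] == 0]
    level = 0
    while front:
        nxt = []
        for p in front:
            rank[p] = level
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    nxt.append(q)
        front, level = nxt, level + 1
    return rank
```

For example, among the vectors `[1, 4]`, `[2, 3]`, `[3, 3]`, `[2, 5]`, the first two are mutually nondominated (level 0) and the last two are dominated (level 1).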

Definition 2.7 (crowding distance of individuals of the same level, ICD). Let $F$ be a set of individuals of a certain level. For each objective, the individuals of $F$ are sorted according to that objective function's value. The first and the last individuals' crowding distances are set to infinity. For every other individual, the contribution on that objective is obtained by subtracting the objective function values of its two neighboring individuals and squaring the difference. Summing the contributions over all objective functions gives the individual's crowding distance. The individuals within the same level are then sorted according to crowding distance.
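A common concrete form of this measure is the NSGA-II crowding distance; the sketch below follows that standard formulation, which uses the normalized gap between the two neighbors rather than its square (a deliberate substitution on our part, flagged here and in the comments):

```python
def crowding_distance(objs):
    """Crowding distance within one front. NOTE: this is the standard
    NSGA-II version (normalized neighbor gap per objective), not the
    squared-difference variant described in the paper's Definition 2.7.
    Boundary individuals get infinite distance."""
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = objs[order[-1]][k] - objs[order[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (objs[order[j + 1]][k]
                               - objs[order[j - 1]][k]) / span
    return dist
```

Individuals with larger crowding distance sit in sparser regions of the front and are preferred during truncation.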

#### 3. Multiobjective Differential Evolution Algorithm with Multiple Trial Vectors (MTVDE)

##### 3.1. The Mutation Operation

Mutation components are difference vectors built from the parent generation; each difference vector is generated from different individuals of the parent population. The mutation operation is based on the difference vector, and its equation is

$$v_i^{(k)} = x_{r_1} + F \cdot (x_{r_2} - x_{r_3}), \quad k = 1, 2, 3, \tag{3.1}$$

where $v_i^{(k)}$ $(k = 1, 2, 3)$ are the mutated individuals, $x_i$ is the $i$th individual of the parent population, and the indices $r_1, r_2, r_3$ are different from each other and different from $i$. $F$ is the scaling factor; it indicates the influence degree of the difference vector on the offspring. Through the mutation operator, each individual in the parent population produces three individuals, which enhances the diversity of the population.
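The paper does not spell out how the three mutations differ; one plausible reading, sketched below, is three independent DE/rand/1 mutations of the form (3.1), each drawn with a fresh random index triple. The function name `mutate_three` and this interpretation are our assumptions:

```python
import random

def mutate_three(pop, i, F=0.5):
    """For parent i, produce three mutant vectors, each a DE/rand/1
    mutation v = x_r1 + F*(x_r2 - x_r3) with a fresh random index
    triple r1, r2, r3 (mutually distinct and distinct from i).
    A sketch of one reading of the paper's multiple-trial mutation."""
    others = [j for j in range(len(pop)) if j != i]
    mutants = []
    for _ in range(3):
        r1, r2, r3 = random.sample(others, 3)
        mutants.append([a + F * (b - c)
                        for a, b, c in zip(pop[r1], pop[r2], pop[r3])])
    return mutants
```

Drawing a new triple for each mutant makes the three trial directions differ, which is what gives the extra diversity the paper relies on.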

The value of $F$ has a certain influence on the algorithm's performance. If $F$ is too large, the convergence rate slows and the algorithm runs longer; if $F$ is too small, the diversity of the population decreases. According to experimental tests, $F$ is often set to 0.5.

##### 3.2. Crossover Operator

The mutated individual $v_i$ and the individual $x_i$ of the parent population undergo the crossover operator to generate the trial individual $u_i$. The crossover operator is expressed as

$$u_{i,j} = \begin{cases} v_{i,j}, & \text{if } \operatorname{rand}_j \le CR \text{ or } j = j_{\text{rand}}, \\ x_{i,j}, & \text{otherwise}, \end{cases} \tag{3.2}$$

where $\operatorname{rand}_j$ is a uniformly distributed random number in $[0, 1]$ and $j_{\text{rand}}$ is an integer randomly selected from $\{1, 2, \ldots, n\}$, which ensures that at least one component of $u_i$ comes from $v_i$. $CR$ is the crossover probability; it controls whether the $j$th variable of $u_i$ is taken from $v_i$ or from $x_i$. If $CR = 1$, then $u_i = v_i$.
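Equation (3.2) is the standard binomial DE crossover; it can be sketched as follows (the function name `crossover` is ours, and indices are 0-based as usual in Python):

```python
import random

def crossover(mutant, parent, CR=0.9):
    """Binomial DE crossover, eq. (3.2): each component comes from the
    mutant with probability CR; component j_rand is always taken from
    the mutant, so the trial differs from the parent in at least one
    position."""
    D = len(parent)
    j_rand = random.randrange(D)
    return [mutant[j] if (random.random() <= CR or j == j_rand) else parent[j]
            for j in range(D)]
```

With CR = 1 every component comes from the mutant, matching the remark after (3.2); with CR near 0 the trial is the parent with a single mutated component.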

##### 3.3. Selecting Operation

For MOPs, this paper uses the following criteria to select the next evolution population.

1. For each individual in the parent population, three different mutation operators and a crossover operator are used to produce three trial individuals. Among the three trial individuals and the parent individual, any individual that Pareto-dominates another is retained in the intermediate population; if two individuals do not dominate each other, both are retained.
2. Some low-quality individuals in the intermediate population need to be eliminated. Individuals in the intermediate population are sorted according to the Pareto dominance relations and the crowding distance: first, the individuals are divided into several levels according to the nondominance relations; individuals in the same level are then sorted according to crowding distance. The first NP individuals are chosen to form the next evolution population.
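Step (1) above can be sketched as a single filtering pass over the parent and its trials. This is an illustrative reading, not the paper's code; `select_into_intermediate` and the objective-function callback `f` are our names:

```python
def select_into_intermediate(parent, trials, f):
    """Step (1) of the selection: pool the trial individuals with the
    parent, evaluate them, and keep only those not Pareto-dominated by
    any other member of the pool. f maps a decision vector to its
    objective vector (minimization assumed)."""
    def dom(u, v):
        return all(a <= b for a, b in zip(u, v)) and u != v

    candidates = trials + [parent]
    objs = [f(x) for x in candidates]
    return [x for i, x in enumerate(candidates)
            if not any(dom(objs[j], objs[i])
                       for j in range(len(candidates)) if j != i)]
```

Keeping every mutually nondominated candidate (rather than a single winner) is what lets the intermediate population grow beyond NP before the truncation in step (2).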

##### 3.4. MTVDE Description of the Algorithm

Step 1. Set the basic parameters: the population size NP, the scaling factor , the maximum evolution generation, and the crossover probability factor CR.

Step 2. Initialize the population.

Step 3. Apply the mutation operation (3.1) to generate three mutated individuals.

Step 4. Apply the crossover operation (3.2) to generate three trial individuals.

Step 5. Select the next evolution population.

Step 6. If the maximum number of iterations is reached, stop and output the optimal solutions; otherwise, return to Step 3.

#### 4. Numerical Experiments

##### 4.1. Algorithm Performance Evaluation Criteria

The performance of a multiobjective optimization algorithm can be evaluated by two criteria:

1. the obtained nondominated optimal solution set should be as close as possible to the true Pareto front;
2. the obtained nondominated optimal solution set should be distributed as uniformly as possible along the true Pareto front.

The two indexes are used to test the convergence and distribution of the solution set, respectively. The definitions of the approximation index and the uniformity index are given below.

(a) The approximation index $\gamma$ is defined as follows:

$$\gamma = \frac{1}{n} \sum_{i=1}^{n} d_i,$$

where $n$ is the number of vectors in the set of nondominated solutions found so far, and $d_i$ is the Euclidean distance (measured in objective space) between the $i$th of these solutions and the nearest member of the true Pareto optimal set, $d_i = \min_{p \in P^*} \| F(x_i) - p \|$, where $P^*$ denotes the true Pareto front, the $x_i$ form the obtained nondominated optimal solution set, and $\| \cdot \|$ denotes the Euclidean distance. The smaller the value of $\gamma$, the more closely the algorithm approximates the true Pareto front. It is clear that $\gamma = 0$ indicates that all the generated elements lie on the Pareto front.
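The approximation index can be computed directly from its definition; a small sketch (the function name `convergence_metric` is ours, and the true front is assumed to be given as a finite sample of objective vectors):

```python
import math

def convergence_metric(solutions, pareto_front):
    """Approximation index gamma: mean Euclidean distance (in objective
    space) from each obtained nondominated solution to its nearest
    point in a finite sample of the true Pareto front."""
    def d(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return (sum(min(d(s, p) for p in pareto_front) for s in solutions)
            / len(solutions))
```

A value of 0 means every obtained solution coincides with a sampled point of the true front.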

(b) The uniformity index $\Delta$ is defined as follows:

$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{n-1} |d_i - \bar{d}|}{d_f + d_l + (n-1)\bar{d}},$$

where $d_i$ is the Euclidean distance between neighboring solutions in the obtained nondominated solution set and $\bar{d}$ is the mean of all $d_i$. The parameters $d_f$ and $d_l$ are the Euclidean distances between the extreme solutions of the theoretical front and the boundary solutions of the obtained nondominated set: $d_f = \| f_L - p_L \|$, where $f_L$ is the leftmost vector of the obtained nondominated target vectors and $p_L$ is the leftmost vector of the theoretical Pareto front, and $d_l = \| f_R - p_R \|$, where $f_R$ is the rightmost vector of the obtained nondominated target vectors and $p_R$ is the rightmost vector of the theoretical Pareto front; $\| \cdot \|$ denotes the Euclidean distance. The smaller the value of $\Delta$, the better the diversity and uniformity of the obtained nondominated solutions.
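The uniformity index can likewise be sketched from its definition. Here the solutions are assumed to be sorted along the front, and the boundary distances $d_f$, $d_l$ are passed in precomputed (both assumptions of this sketch, as is the name `spread_metric`):

```python
import math

def spread_metric(solutions, d_f, d_l):
    """Uniformity index Delta: combines the gaps d_i between
    consecutive obtained solutions (sorted along the front) with the
    distances d_f, d_l from the extreme true-front points to the
    boundary obtained solutions. Delta = 0 means a perfectly uniform
    spread that reaches both ends of the front."""
    def d(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    gaps = [d(solutions[i], solutions[i + 1])
            for i in range(len(solutions) - 1)]
    mean = sum(gaps) / len(gaps)
    return ((d_f + d_l + sum(abs(g - mean) for g in gaps))
            / (d_f + d_l + len(gaps) * mean))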

##### 4.2. Numerical Experiments and Analysis

In order to see how competitive the MTVDE algorithm is, it was compared with NSGA-II. In NSGA-II, the pool size is 100 and the tour size is 2. In the MTVDE algorithm, CR = 0.9, except for problem ZDT4, where CR = 0.3. For all test functions, the population size of both algorithms is taken as 100 and the number of iterations as 250. Figures 1, 2, 3, 4, 5, 6, 7, and 8 show the results of the MTVDE algorithm on eight multiobjective test functions. It can be seen from the figures that the MTVDE algorithm produces a more uniform solution set, and the solution quality is better than that of NSGA-II.

Tables 1, 2, 3, 4, 5, and 6 report, for the eight test functions, the approximation index and the uniformity index (mean value and variance over 10 runs). For the first five functions, the MTVDE algorithm is compared with the five classic algorithms in [8, 9, 11, 12]; for the remaining three functions, it is compared with NSGA-II.

Table 1: Testing result’s statistics of the problem ZDT1.
Table 2: Testing result’s statistics of the problem ZDT2.
Table 3: Testing result’s statistics of the problem ZDT3.
Table 4: Testing result’s statistics of the problem ZDT4.
Table 5: Testing result’s statistics of the problem ZDT6.
Table 6: Testing result’s statistics of the problems SCH, DTLZ1, and DTLZ2.

N/A denotes that the algorithm does not calculate the index: the PDEA algorithm in [8] calculates the generational distance GD instead of the convergence metric $\gamma$, and the MODE algorithm in [9] did not calculate the diversity metric $\Delta$. It can be seen from Tables 1–6 that the MTVDE algorithm has better convergence except on the problems ZDT1, ZDT2, and SCH.

In this paper, the population size is 250. Through experiments, we found that increasing the population size has no effect on the performance of the algorithm and only increases the running time. In order to detect the impact of the scaling factor $F$ and the crossover probability CR on the algorithm's performance, we carried out 50 independent experiments on problem SCH. The test data show that $F$ and CR have little effect on the algorithm's performance. By comparison, CR = 0.9 gives slightly better results, but the performance (approximation index and uniformity index) is not particularly sensitive to the values of $F$ and CR (Table 7).

#### 5. Summary

In the MTVDE algorithm, an improved DE algorithm is used to solve MOPs. The main differences between MTVDE and other MOEA algorithms are the three trial vectors and the new selection method. Eight benchmark functions are used to test the MTVDE algorithm, and the experimental results show that it is better than most MOEAs. In order to find the best solutions, different values of the crossover probability were tested; the results on ZDT4 are very sensitive to the crossover probability (CR = 0.3 gives better results). In short, the algorithm in this paper can effectively converge to the Pareto front of the problem and maintain the diversity of the Pareto optimal solution set. It is an effective algorithm for solving multiobjective optimization problems. In future work, we intend to use the MTVDE algorithm, or appropriately improved versions of it, for solving other complex constrained MOPs.

#### Acknowledgments

The work is supported by the National Natural Science Foundation of China under Grant no. 60962006 and by the Research Projects Foundation of the State Ethnic Affairs Commission.

#### References

1. N. Srinivas and K. Deb, “Multi-objective optimization using non-dominated sorting in genetic algorithms,” Evolutionary Computation, vol. 2, no. 3, pp. 221–248, 1994.
2. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
3. E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
4. E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: improving the strength Pareto evolutionary algorithm,” Tech. Rep. 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH) Zurich, Zurich, Switzerland, 2001.
5. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
6. H. A. Abbass, R. Sarker, and C. Newton, “PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems,” in Proceedings of the Congress on Evolutionary Computation (CEC'01), pp. 971–978, May 2001.
7. H. A. Abbass, “PDE: the self-adaptive Pareto differential evolution algorithm,” in Proceedings of the Congress on Evolutionary Computation (CEC'02), vol. 1, pp. 831–836, IEEE Service Center, Piscataway, NJ, USA, 2002.
8. N. K. Madavan, “Multi-objective optimization using a Pareto differential evolution approach,” in Proceedings of the Congress on Evolutionary Computation (CEC'02), vol. 2, pp. 1145–1150, IEEE Service Center, Piscataway, NJ, USA, 2002.
9. F. Xue, A.C. Sanderson, and R.J. Graves, “Pareto-based multi-objective differential evolution,” in Proceedings of the Congress on Evolutionary Computation (CEC'03), vol. 2, pp. 862–869, IEEE Press, Canberra, Australia, 2003.
10. T. Robič and B. Filipič, “DEMO: differential evolution for multiobjective optimization,” in Lecture Notes in Computer Science, pp. 520–533, Springer, Berlin, Germany, 2005.
11. W. Qian and A. Li, “Adaptive differential evolution algorithm for multiobjective optimization problems,” Applied Mathematics and Computation, vol. 201, no. 1-2, pp. 431–440, 2008.
12. W. Gong and Z. Cai, “An improved multiobjective differential evolution based on Pareto-adaptive ε-dominance and orthogonal design,” European Journal of Operational Research, vol. 198, no. 2, pp. 576–601, 2009.