An Entropy-Based Multiobjective Evolutionary Algorithm with an Enhanced Elite Mechanism
The multiobjective optimization problem (MOP) is an important and challenging topic in industrial design and scientific research. The multiobjective evolutionary algorithm (MOEA) has proved to be one of the most efficient approaches to solving MOPs. In this paper, we propose an entropy-based multiobjective evolutionary algorithm with an enhanced elite mechanism (E-MOEA), which effectively improves the convergence and diversity of the solution set for MOPs. In this algorithm, an enhanced elite mechanism is applied to guide the direction of the evolution of the population; specifically, it accelerates the population toward the true Pareto front at the early stage of the evolution process. An entropy-based strategy is used to maintain the diversity of the population when the population is near the Pareto front. The proposed algorithm is executed on widely used test problems, and the simulation results show that the algorithm has better or comparable performance in convergence and diversity of solutions compared with three state-of-the-art evolutionary algorithms: NSGA-II, SPEA2, and MOSADE.
Optimization problems exist in all kinds of engineering and scientific areas. When an optimization problem has more than one objective, it is called a multiobjective optimization problem (MOP). Since these objectives are usually in conflict with one another, the goal of solving a MOP is to find a set of compromise solutions with respect to all objectives, rather than a single best one as in single-objective optimization. The solutions of a MOP, also called Pareto-optimal solutions, are optimal in the sense that no other feasible solution improves some criterion without worsening at least one other criterion. The evolutionary algorithm (EA) is an optimization algorithm based on the evolution of a population. As it can search for multiple solutions in parallel, it has gained great attention from researchers. In recent years, many excellent EAs [1–4] have been proposed to solve MOPs efficiently, and the MOEA has been recognized as one of the best methods for solving them.
Generally, there are two performance measures for evaluating the Pareto-optimal solutions obtained by an MOEA. One is the convergence measure, which evaluates the closeness between the obtained Pareto solutions and the true optimal front. The other is the diversity measure, which evaluates the distribution of solutions in the objective space. In order to achieve good performance, many excellent strategies and methods have been presented in MOEAs [1, 2, 5–9]. For convergence, the elite mechanism has proved very helpful in accelerating the evolution of the population. The basic idea of the elite mechanism is that the information of good solutions that have occurred during the evolution is used to make the solution set converge to the optimal front as soon as possible. Its usual practice is that a certain number of the best solutions are selected from the population as parents to produce good offspring. However, in the early stage of an algorithm applying this strategy, many dominated solutions in the population are selected as parents, so the population cannot converge at a fast speed. In order to maintain the diversity of nondominated solutions, two main methods are applied. The first uses a grid to maintain diversity: it draws grids in the objective space and controls the number of solutions in each grid. Although this approach can find good solutions quickly, it sometimes cannot accurately reflect the global distribution of solutions because the grid positions are fixed. The second is based on density [2, 8, 9]: every solution obtains a density value, and an outstanding density calculation can help to form a good distribution of solutions.
In 1948, Shannon introduced the information-theoretic entropy to measure the information content of a stochastic process, which led to the establishment of the field of information theory. Since then, entropy has found applications in many different fields. In solving multiobjective optimization problems, Farhang-Mehr and Azarm and Gunawan et al. applied entropy to maintain the diversity of the solution set in multiobjective and multilevel multiobjective problems. Wang et al. proposed the MOSADE algorithm, which combines self-adaptive differential evolution with a crowding entropy-based diversity measure to obtain the nondominated solution set. In this algorithm, every solution's crowding degree is calculated through an improved information entropy formula according to the solutions' distribution. In essence, this method is similar to the crowding distance in NSGA-II. Thus, for some 3-objective problems, this algorithm cannot obtain an ideal solution set.
In this paper, we propose a new MOEA to solve MOPs more effectively, in which an enhanced elitism lets the nondominated solutions play a better guiding role and an entropy-based strategy is applied to preserve the diversity of the population. We call it the entropy-based multiobjective evolutionary algorithm with an enhanced elite mechanism, E-MOEA in brief. Specifically, we employ the enhanced elitism in which only the nondominated solutions in the union population are selected as parents, to ensure that the solution set converges to the optimal front more quickly. As the algorithm proceeds, the number of nondominated solutions in the union population gradually increases. In order to bound the size of the elitist population (the maximum size of the elitist population in our algorithm is set to N) and to maintain the diversity of solutions, the entropy-based strategy is applied. In this strategy, a region is determined by taking a solution as its center, and the most crowded regions with the most uneven distribution of solutions are found by applying the entropy; then, in these regions, the most crowded solutions are deleted one by one. Compared with MOSADE, the enhanced elitism makes the solution set approach the Pareto-optimal front more easily in our algorithm, and the surface of the Pareto solution set is more uniform through the combination of the region and the entropy. Experimental results on 2-objective and 3-objective problems show that the novel algorithm has better performance in both convergence and diversity compared with NSGA-II, SPEA2, and MOSADE.
2. Multiobjective Optimization Problems
In this paper, we consider the following continuous multiobjective optimization problem (continuous MOP):
\[ \min_{x \in \Omega} F(x) = \left(f_1(x), f_2(x), \ldots, f_m(x)\right), \]
where \(\Omega \subset \mathbb{R}^n\) is the decision space and \(x = (x_1, \ldots, x_n) \in \Omega\) is the decision variable vector. \(F: \Omega \to \mathbb{R}^m\) consists of \(m\) real-valued objective functions \(f_1, \ldots, f_m\), and \(\mathbb{R}^m\) is the objective space.
Let \(u = (u_1, \ldots, u_m)\) and \(v = (v_1, \ldots, v_m)\) be two objective vectors; \(u\) is said to dominate \(v\), denoted by \(u \prec v\), if \(u_i \le v_i\) for all \(i \in \{1, \ldots, m\}\) and \(u \ne v\).
A point \(x^* \in \Omega\) is called Pareto-optimal if there is no \(x \in \Omega\) such that \(F(x) \prec F(x^*)\). The set of all Pareto-optimal points is called the Pareto set, denoted by PS. The set of all Pareto-optimal objective vectors, \(\mathrm{PF} = \{F(x) \mid x \in \mathrm{PS}\}\), is called the Pareto front [2, 9].
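The dominance relation above takes only a few lines of code. A minimal sketch in Python for minimization problems (the function names `dominates` and `pareto_front` are ours, not from the paper):

```python
def dominates(u, v):
    """True if objective vector u dominates v (minimization): u is no worse
    than v in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(vectors):
    """Extract the nondominated subset of a list of objective vectors."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors)]
```

For example, among the vectors (1, 2), (2, 1), (2, 2), and (3, 3), only the first two are mutually nondominated; the others are dominated by (1, 2).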
3. The Entropy-Based Multiobjective Evolutionary Algorithm with an Enhanced Elite Mechanism
In this section, we first present the enhanced elitist mechanism; then an entropy-based strategy is proposed to maintain the diversity of the population. Finally, the entropy-based multiobjective evolutionary algorithm with the enhanced elitism is described.
3.1. The Enhanced Elitist Mechanism
Recent research has shown that the elite mechanism is an excellent method to speed up the convergence of an evolutionary algorithm. Through the guiding role of elites, the population can produce good offspring, which achieves rapid evolution of the population. On the basis of this idea, different forms of the elite mechanism have been proposed in some EAs [1, 14]. Among them, the most popular and effective one is used in NSGA-II, which first combines the parent and offspring populations and then chooses a certain number of solutions from the union population as the parent population of the next iteration, according to nondominated sorting and crowding distance assignment. As mentioned above, in the early stage of the algorithm, some dominated solutions may be selected into the elitist population, so the offspring produced by them are not good enough. Thus, we make some improvements to enhance the guiding role of good solutions when producing offspring, and we call the result the enhanced elite mechanism.
According to the elite mechanism, it is reasonable that the better the solutions chosen as parents are, the better the offspring produced by those parents will be. Therefore, in order to enhance the guidance of the elitism in our algorithm, we select only the nondominated solutions as the parents of the next iteration, instead of a fixed number of relatively good solutions, which may include dominated solutions when the number of nondominated solutions in the union population is less than N during the early stage of the algorithm. This enhanced mechanism avoids several complex operations, such as hierarchical nondominated sorting and crowding distance computation. On the other hand, all offspring are generated by the nondominated population, which makes use of the best solutions to improve the efficiency of the evolution. When the number of nondominated solutions exceeds N, we delete the solutions with the worst distribution one by one through the entropy-based strategy, which accurately accounts for the distribution information of all solutions so as to obtain an elitist population with a better distribution.
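The selection step described above amounts to one dominance filter over the union population. A hedged sketch (the helper name is ours; the truncation routine is passed in as a callback and stands in for the entropy-based strategy of Section 3.2):

```python
def select_elites(union_pop, objectives, n_max, truncate):
    """Enhanced elite mechanism: keep only the nondominated solutions of the
    union population; if more than n_max survive, hand the surplus to the
    entropy-based truncation routine supplied as `truncate`."""
    vals = [objectives(x) for x in union_pop]

    def dominates(u, v):
        return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

    elites = [x for x, fx in zip(union_pop, vals)
              if not any(dominates(fy, fx) for fy in vals)]
    return truncate(elites) if len(elites) > n_max else elites
```

Note that no nondominated sorting into fronts and no crowding distance computation is needed: one pass over the union population suffices.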
3.2. The Entropy-Based Strategy for Maintaining Diversity
In the enhanced elitist mechanism, the number of nondominated solutions in the union population gradually increases as the algorithm proceeds. To keep the elitist population within its maximum size N and to maintain the diversity of the population, we propose an entropy-based strategy combining regional information with the notion of entropy. The strategy deletes the most crowded solutions one by one, and all related values are recalculated before deciding which solution should be deleted next, so the distribution of solutions is reflected dynamically and accurately. Clearly, the key operation of the strategy is how to select the most crowded solution in the nondominated population. First we select the most crowded region with the most uneven distribution by applying the information entropy; then the solution which most seriously influences the distribution of the solution set in this region is picked and removed from the elitist population. This process is repeated until the size of the nondominated population decreases to N, forming the elitist population. The strategy is described in detail as follows.
For ease of operations, we order the nondominated solutions by one objective; then the region taking each solution \(x\) as its center is defined as \(\prod_{i=1}^{m} [f_i(x) - r L_i, \, f_i(x) + r L_i]\), where \(f_i(x)\) represents the \(i\)th objective value of solution \(x\), \(r\) is a parameter which controls the area of the region, and \(L_i\) is the length of the numerical range of the \(i\)th objective. Obviously, for a 2-objective problem the region is a square, and for a 3-objective problem it is a cube. Because the areas of these regions are the same for all solutions, there is no doubt that the more solutions a region contains, the more crowded it is. Thus, it is easy to find the most crowded region based on the number of solutions it contains. If only one region includes the most solutions, the most crowded solution in that region will be deleted. However, in most cases there may be several regions including the same, maximum number of solutions. The region with the most uneven distribution of solutions will then be selected from these regions by applying the entropy.
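The occupancy count that identifies the most crowded regions can be sketched as follows; this assumes the region spans \(r L_i\) on either side of the center in each objective (the exact half-width is our assumption, and the function name is ours):

```python
def region_counts(objs, r, ranges):
    """For each objective vector in objs, count how many vectors fall inside
    the hyper-rectangular region centred on it; the region spans
    [f_i - r*L_i, f_i + r*L_i] in objective i, where L_i = ranges[i]."""
    counts = []
    for c in objs:
        inside = sum(
            all(abs(p[i] - c[i]) <= r * ranges[i] for i in range(len(c)))
            for p in objs)
        counts.append(inside)  # the count includes the centre solution itself
    return counts
```

The regions with the maximum count are the candidates among which the entropy then breaks ties.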
In light of Shannon's information theory, entropy can be used to measure the uniformity of a probability distribution in a normalized system. Assume a stochastic process with \(n\) possible outcomes, where the probability of the \(k\)th outcome is \(p_k\), with \(p_k \ge 0\) and \(\sum_{k=1}^{n} p_k = 1\). The probability distribution of this process can be written as the probability vector \(P = (p_1, p_2, \ldots, p_n)\), which has an associated Shannon entropy of the form
\[ H(P) = -\sum_{k=1}^{n} p_k \ln p_k, \]
where \(p_k \ln p_k\) is taken to be zero when \(p_k = 0\). This function attains its maximum, \(\ln n\), when all probabilities have the same value \(1/n\), and its minimum of zero when one component of the vector is 1 and the rest of the entries are zero. Inspired by this, we compute the entropy of a region by taking the region as the system and viewing every normalized distance between two adjacent solutions as a \(p_k\). If a region has a more uniform distribution, where the distances between adjacent solutions are roughly similar, its entropy will be larger; if a region has a more uneven distribution, where the distances between adjacent solutions vary greatly, its entropy will be smaller.
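The entropy formula above translates directly into code; a short sketch (the function name is ours):

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_k p_k ln p_k of a probability vector,
    with 0 * ln(0) taken as 0."""
    assert abs(sum(p) - 1.0) < 1e-9, "p must be a probability vector"
    return -sum(pk * math.log(pk) for pk in p if pk > 0.0)
```

As the text notes, the uniform vector (1/n, ..., 1/n) maximizes the entropy at ln n, while a degenerate vector such as (1, 0, 0) gives zero.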
The schematic diagram for computing the entropy of a region is shown in Figure 1. There are \(n\) solutions sorted by one objective in this region; \(d_1, \ldots, d_{n-1}\) are the distances between adjacent solutions, and \(d_0\) and \(d_n\) are, respectively, the edge distances from the two extreme solutions to the corresponding boundaries of the region along the sorted objective. In order to obtain a normalized system and apply the entropy formula reasonably, the distances are transformed into probabilities. Let \(D = \sum_{k=0}^{n} d_k\) and \(p_k = d_k / D\); then
\[ H = -\sum_{k=0}^{n} p_k \ln p_k. \quad (3) \]
In light of (3), the entropies of the regions containing the most solutions are computed, and the region with the smallest entropy is selected. This implies not only that this region includes the most solutions, but also that the distribution of solutions within it is the most uneven. Next, the most crowded solution in this region is chosen for deletion. Here a simple and effective method is used: first the two solutions with the smallest adjacent distance are found; then we compare each one's distance to its other neighbor, and the solution with the smaller such distance is deleted from the population. A simple example is shown in Figure 2: of the two solutions forming the smallest adjacent distance, the one closer to its other neighbor is removed. The main difference between our method and the archive truncation method in SPEA2 is that all solutions in our algorithm have been sorted and we only perform comparisons among adjacent distances. This method not only reduces the computation, but also keeps the solution distribution reasonably even.
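The region-entropy computation and the deletion rule can be sketched as follows, working with the solutions' coordinates along the sort objective (a simplification of the full multiobjective bookkeeping; names are ours):

```python
import math

def region_entropy(sorted_vals, lo, hi):
    """Entropy of a region: sorted_vals are the solutions' coordinates along
    the sort objective, [lo, hi] the region boundaries on that objective.
    The gaps d_0..d_n (edge distances plus adjacent distances) are normalized
    into probabilities before applying Shannon's formula."""
    gaps = [sorted_vals[0] - lo]
    gaps += [b - a for a, b in zip(sorted_vals, sorted_vals[1:])]
    gaps += [hi - sorted_vals[-1]]
    total = sum(gaps)
    return -sum((g / total) * math.log(g / total) for g in gaps if g > 0)

def most_crowded_index(sorted_vals):
    """Index of the solution to delete: find the closest adjacent pair, then
    drop whichever of the two is nearer to its other neighbour."""
    gaps = [b - a for a, b in zip(sorted_vals, sorted_vals[1:])]
    i = min(range(len(gaps)), key=gaps.__getitem__)   # closest pair (i, i+1)
    left = gaps[i - 1] if i > 0 else float("inf")     # gap before the pair
    right = gaps[i + 1] if i + 1 < len(gaps) else float("inf")  # gap after it
    return i if left <= right else i + 1
```

Uniformly spaced solutions give the maximum entropy, so a region with a clustered pair is ranked as more uneven and loses one of the clustered solutions.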
3.3. The Framework of E-MOEA
Combining the basic evolutionary algorithm with the traditional offspring-producing operators of genetic algorithms (crossover and mutation), we propose the entropy-based multiobjective evolutionary algorithm with an enhanced elite mechanism (E-MOEA). The main steps are shown in the following.
We have the following notation: \(N\) (population size), \(T\) (maximum number of generations), \(P_t\) (the current population), \(E_t\) (the elitist population), and \(Q_t\) (the offspring population).
Step 1. Generate an initial population \(P_0\) and set \(t = 0\).
Step 2. Copy all the nondominated solutions in \(P_t\) to the elitist population \(E_t\).
Step 3. If \(|E_t| > N\), reduce the size of \(E_t\) by the entropy-based strategy, deleting solutions one by one until \(|E_t| = N\).
Step 4. If \(t \ge T\), then go to Step 7.
Step 5. Execute recombination and mutation operators on \(E_t\) to obtain the offspring population \(Q_t\).
Step 6. Set \(P_{t+1} = E_t \cup Q_t\), \(t = t + 1\), and go to Step 2.
Step 7. Output the current elitist population \(E_t\).
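The loop above can be sketched end to end. This is a toy sketch under simplifying assumptions, not the paper's implementation: Gaussian perturbation of a single decision variable stands in for the recombination and mutation operators, a nearest-neighbour truncation stands in for the full entropy-based strategy, and all names are ours:

```python
import random

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(pop, f):
    vals = [f(x) for x in pop]
    return [x for x, fx in zip(pop, vals)
            if not any(dominates(fy, fx) for fy in vals)]

def truncate(elite, f, n):
    # Stand-in for the entropy-based strategy: repeatedly drop the solution
    # whose nearest neighbour in objective space is closest.
    while len(elite) > n:
        vals = [f(x) for x in elite]
        def nn_dist(i):
            return min(sum((a - b) ** 2 for a, b in zip(vals[i], vals[j]))
                       for j in range(len(elite)) if j != i)
        elite.pop(min(range(len(elite)), key=nn_dist))
    return elite

def e_moea(f, init, variate, n, t_max):
    pop = [init() for _ in range(n)]                     # Step 1
    for _ in range(t_max):                               # Step 4 (loop bound)
        elite = nondominated(pop, f)                     # Step 2
        elite = truncate(elite, f, n)                    # Step 3
        offspring = [variate(random.choice(elite))       # Step 5
                     for _ in range(n)]
        pop = elite + offspring                          # Step 6: union population
    return truncate(nondominated(pop, f), f, n)          # Step 7: output elites
```

On the toy biobjective problem f(x) = (x², (x − 2)²), whose Pareto set is x ∈ [0, 2], the returned set is mutually nondominated by construction.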
4. Experimental Design and Results
In this section, a large number of experiments are conducted to test the performance of E-MOEA on biobjective and 3-objective problems. Specifically, our algorithm is first compared with two advanced MOEAs, NSGA-II and SPEA2, which use different strategies for constructing the elitist population. Then comparisons between the proposed E-MOEA and MOSADE are presented.
4.1. Test Instances and Performance Metrics
In our experiments, the biobjective problems are from the ZDT series: ZDT1, ZDT2, ZDT3, and ZDT6. The 3-objective problems we selected come from the DTLZ family of scalable test problems.
Several metrics have been proposed for measuring the performance of the Pareto-optimal set obtained by MOEAs. In our work, we choose the GD metric and the SP metric. GD measures the distance between the obtained nondominated solutions and the true Pareto-optimal front:
\[ \mathrm{GD} = \frac{\sqrt{\sum_{i=1}^{|Q|} d_i^2}}{|Q|}, \]
where \(Q\) is the obtained solution set and \(d_i\) represents the Euclidean distance between the \(i\)th solution in \(Q\) and the nearest member of \(P^*\) (\(P^*\) is a solution set uniformly sampled from the true Pareto-optimal front). In our experiment, we use 10000 uniformly spaced Pareto-optimal solutions as the approximation of the true Pareto front.
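The GD computation is short enough to state in code; a minimal sketch (the function name is ours):

```python
import math

def gd(front, reference):
    """Generational distance (root-sum-square form): for each obtained point,
    take its Euclidean distance to the nearest reference-front point, then
    return sqrt(sum of squared distances) divided by the front size."""
    dists = [min(math.dist(q, p) for p in reference) for q in front]
    return math.sqrt(sum(d * d for d in dists)) / len(front)
```

A front lying exactly on the reference set yields GD = 0, and larger values indicate a front farther from the true Pareto-optimal front.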
The metric SP can be used to measure the diversity of the obtained solutions:
\[ \mathrm{SP} = \sqrt{\frac{1}{|Q|-1} \sum_{i=1}^{|Q|} \left(\bar{d} - d_i\right)^2}, \qquad d_i = \min_{j \ne i} \sum_{k=1}^{m} \left| f_k(x_i) - f_k(x_j) \right|, \]
where \(\bar{d}\) is the average value of all \(d_i\) and \(m\) is the number of objectives.
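The SP metric, in a direct sketch (the function name is ours):

```python
import math

def sp(front):
    """Schott's spacing metric: sample standard deviation of each point's
    minimum city-block (Manhattan) distance to any other point in the front."""
    n = len(front)
    d = [min(sum(abs(a - b) for a, b in zip(p, q))
             for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    mean = sum(d) / n
    return math.sqrt(sum((mean - di) ** 2 for di in d) / (n - 1))
```

A perfectly evenly spaced front gives SP = 0; larger values indicate a more irregular spacing.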
Another indicator often used to evaluate the diversity of a solution set is \(\Delta\). However, this indicator works only for biobjective problems and cannot be applied directly to problems with more than two objectives. Based on a metric proposed in the literature, the indicator is extended to fit problems with more than two objectives by computing the distance from a given point to its nearest neighbor. The modified indicator is
\[ \Delta = \frac{\sum_{k=1}^{m} d(e_k, S) + \sum_{X \in S} \left| d(X, S) - \bar{d} \right|}{\sum_{k=1}^{m} d(e_k, S) + |S| \, \bar{d}}, \]
where \(S\) is a set of solutions, \(e_1, \ldots, e_m\) are the extreme solutions in the set of Pareto-optimal solutions, \(m\) is the number of objectives, and
\[ d(X, S) = \min_{Y \in S, \, Y \ne X} \left\| F(X) - F(Y) \right\|, \qquad \bar{d} = \frac{1}{|S|} \sum_{X \in S} d(X, S). \]
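The extended \(\Delta\) indicator can be sketched as follows, working directly on objective vectors; this is our reading of the formula above (the function name is ours, and the extreme points are passed in explicitly):

```python
import math

def spread(front, extremes):
    """Generalized spread: combines the distances from the true front's
    extreme points to the obtained set with the variation of each point's
    nearest-neighbour Euclidean distance within the set."""
    d = [min(math.dist(p, q) for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_mean = sum(d) / len(d)
    d_ext = [min(math.dist(e, p) for p in front) for e in extremes]
    num = sum(d_ext) + sum(abs(di - d_mean) for di in d)
    den = sum(d_ext) + len(front) * d_mean
    return num / den
```

A front that contains the extreme points and is evenly spaced gives a value of zero; clustering or missing extremes pushes the value up.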
4.2. Comparison of E-MOEA and Other MOEAs
In this part, we compare the proposed E-MOEA with two state-of-the-art algorithms, NSGA-II and SPEA2. All three algorithms use real-valued decision variables. Simulated binary crossover (SBX) and polynomial mutation (PM) are applied as the variation operators, with the usual distribution indexes, crossover probability, and mutation probability (the latter depending on the number of decision variables of each test problem). In E-MOEA, the region parameter r should not be too large, in order to limit the computation of our algorithm; on the other hand, our experimental results show that if r is set too small, the algorithm also fails to obtain good results, so different values of r are used for the 2-objective and the 3-objective problems. In all three MOEAs, the population size is 100 and the maximum number of function evaluations is 25000 for the 2-objective problems; for the 3-objective problems these two numbers are 200 and 80000, respectively.
Four biobjective problems, ZDT1–3 and ZDT6, and three 3-objective problems, DTLZ1–3, are used. For each test problem, 30 independent runs are executed. The convergence metric GD and the diversity metric SP are employed to evaluate performance. The results are given in Table 1, where Average and Std. dev respectively represent the mean and the standard deviation of the indicators. In terms of the GD metric, we can clearly see that E-MOEA comes nearer to the Pareto-optimal front than the others on all four 2-objective test problems. The reason is that in E-MOEA the parents are made up entirely of the nondominated solutions in the current population. Because dominated solutions are not used, the population can approach the true Pareto-optimal front more quickly and efficiently. In light of the SP metric, for ZDT1, ZDT2, and ZDT3, E-MOEA is the best, SPEA2 is second, and NSGA-II is last. For ZDT6, SPEA2 is a little better than E-MOEA, and both are much better than NSGA-II. The possible reason is that SPEA2 uses an effective fitness assignment which, however, costs too much time. E-MOEA applies the entropy-based strategy to preserve the diversity of the population, which evaluates the uniformity of the distribution of solutions more rigorously, so that an elitist population with an excellent distribution is constructed.
The DTLZ serial test problems were proposed by Deb et al. and can be configured with different numbers of objectives. Here, we choose the same settings as in the cited work. For DTLZ1 and DTLZ2, all three algorithms can converge to the Pareto-optimal front, but E-MOEA is the nearest to the true front, SPEA2 is second, and NSGA-II is last. For diversity, E-MOEA and SPEA2 are much better than NSGA-II. Since DTLZ3 is the most difficult test problem, SPEA2 and NSGA-II cannot converge to the Pareto-optimal front; both always generate too many wild individuals in the obtained populations. Fortunately, E-MOEA can obtain solutions with good convergence and diversity for DTLZ3. Thus, Table 1 gives only the results obtained by E-MOEA for DTLZ3, and "—" denotes that the corresponding algorithm cannot obtain reasonable results.
Figures 3, 4, 5, and 6 show the final solutions obtained by the three algorithms on the four biobjective test problems. Obviously, the solutions obtained by E-MOEA display the broadest and most uniform distribution and are nearer to the true Pareto-optimal front than those of NSGA-II and SPEA2. Figures 7 and 8 present the final solutions of the three algorithms on DTLZ1 and DTLZ2. Figure 9 shows the final solutions of E-MOEA on DTLZ3.
4.3. Comparison of E-MOEA and MOSADE
Both E-MOEA and MOSADE make use of entropy to maintain the diversity of the solution set. In E-MOEA, we select the region with the worst distribution, via the information entropy formula, to keep the distribution of solutions even. In MOSADE, an improved information entropy formula is used to update the archive, which maintains the diversity of the solution set. Further experiments are conducted to compare the performance of these two algorithms. For E-MOEA, all parameters are the same as described in Section 4.2, except that the population size is 100 for all test problems. For MOSADE, the population size and the external elitist archive size are both 100, and the lower and upper limits of the mutation constant and the crossover probability follow the original MOSADE settings. The biobjective problems ZDT1–3 and ZDT6 and the 3-objective problems DTLZ1–7 are considered. GD and Δ are used as the evaluation indicators in this experiment, and the results are presented in Table 2.
The results in Table 2 show that both algorithms have good convergence according to GD; however, E-MOEA obtains better values than MOSADE on all test problems except DTLZ7. This means the resulting Pareto fronts from E-MOEA are closer to the true optimal fronts. The solution set obtained by E-MOEA converges more quickly to the optimal front, which may be due to the enhanced elitism applied in E-MOEA. As DTLZ7 has a wider range of optimal solutions, the convergence of E-MOEA is not better than that of MOSADE, which applies differential evolution. For the Δ indicator, on the 2-objective problems MOSADE yields a more uniform distribution, while on the 3-objective problems E-MOEA obtains better results. The possible reason is that the crowding-entropy diversity-maintenance tactic in MOSADE is more effective on test problems whose optimal front is approximately a curve, whereas, by taking the spatial factor into account, our entropy-based strategy for maintaining diversity is more suitable for problems whose optimal front is a surface. Thus, E-MOEA has better diversity for DTLZ1–4 and DTLZ7.
5. Conclusion
In this paper, a novel entropy-based multiobjective evolutionary algorithm with an enhanced elite mechanism (E-MOEA) is proposed. The algorithm improves the elitism and presents a new entropy-based strategy to construct the elitist population. First, only the nondominated solutions in the population are selected as elitist solutions; when the number of nondominated solutions exceeds the population size, the worst-distributed solutions are deleted one by one through the entropy-based strategy to preserve the diversity of the population. Experimental results on widely used test functions show that E-MOEA can obtain solution sets with better or comparable convergence and diversity compared with NSGA-II, SPEA2, and MOSADE.
Since eliminating one solution requires recalculating the entropies of the crowded regions, the worst-case time complexity of E-MOEA is the same as that of SPEA2. Future research will address how to reduce this computational expense while keeping good performance.
Acknowledgments
This work is supported by the NSFC major research program (60496322, 60496327) and the Beijing Natural Science Foundation (4102010). The authors thank Professor Wu Lianghong very much for his guidance on the MOSADE experiments.
References
E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: improving the strength Pareto evolutionary algorithm for multiobjective optimization," in Evolutionary Methods for Design, Optimisation and Control, vol. 3242, pp. 95–100, CIMNE, Barcelona, Spain, 2002.
J. Knowles and D. Corne, "The Pareto archived evolution strategy: a new baseline algorithm for multiobjective optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 98–105, IEEE Press, Piscataway, NJ, USA, September 1999.
E. Zitzler, K. Deb, and L. Thiele, "Comparison of multiobjective evolutionary algorithms: empirical results," Evolutionary Computation, vol. 8, no. 2, pp. 173–195, 2000.
C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379–423 and 623–656, 1948.
A. Farhang-Mehr and S. Azarm, "Diversity assessment of Pareto optimal solution sets: an entropy approach," in Proceedings of the Congress on Evolutionary Computation, pp. 723–728, 2002.
S. Gunawan, A. Farhang-Mehr, and S. Azarm, Multi-Level Multi-Objective Genetic Algorithm Using Entropy to Preserve Diversity, Springer, Berlin, Germany, 2003.
G. Rudolph, "Evolutionary search under partially ordered sets," Tech. Rep. CI-67-99, Department of Computer Science/LS11, University of Dortmund, Dortmund, Germany, 1999.
K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," Tech. Rep., ETH Zurich, Zurich, Switzerland, 2001.
D. A. van Veldhuizen and G. B. Lamont, "Evolutionary computation and convergence to a Pareto front," in Late Breaking Papers at the Genetic Programming Conference, J. R. Koza, Ed., pp. 221–228, 1998.
J. R. Schott, Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization, M.S. thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, 1995.
A. Zhou, Y. Jin, Q. Zhang, B. Sendhoff, and E. Tsang, "Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '06), pp. 3234–3240, July 2006.
K. Deb and A. Kumar, "Real-coded genetic algorithms with simulated binary crossover: studies on multimodal and multiobjective problems," Complex Systems, vol. 9, no. 6, pp. 431–454, 1995.
K. Deb and M. A. Goyal, "A combined genetic adaptive search (GeneAS) for engineering design," Computer Science and Informatics, vol. 26, no. 4, pp. 30–45, 1996.
K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable multi-objective optimization test problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '02), D. B. Fogel, Ed., pp. 825–830, IEEE Service Center, Piscataway, NJ, USA, 2002.