Abstract

The multiobjective genetic algorithm (MOGA) is a direct search method for multiobjective optimization problems. It is based on the process of the genetic algorithm, whose population-based property is well suited to multiobjective search. Compared with traditional multiobjective algorithms, whose aim is to find a single Pareto solution, a MOGA intends to identify a set of Pareto solutions. When solving multiobjective optimization problems with a genetic algorithm, one needs to consider both the elitism and the diversity of the solutions. Normally, there are trade-offs between elitism and diversity; for some multiobjective problems, the two conflict with each other. Therefore, solutions obtained by MOGAs have to be balanced with respect to elitism and diversity. In this paper, we propose metrics to numerically measure the elitism and diversity of solutions, and the optimum order method is applied to identify the solutions with better elitism and diversity metrics. We test the proposed method on some well-known benchmarks and compare its numerical performance with that of other MOGAs; the results show that the proposed method is efficient and robust.

1. Introduction

In this paper, we consider the following multiobjective optimization problem (MOP):
$$\min_{x \in X} F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^T,$$
where $F$ is a multiobjective function (vector-valued function), $X = \{x \in \mathbb{R}^n \mid l \le x \le u\}$ is a box set, and $l$ and $u$ are the lower bound and upper bound, respectively. We assume that each function $f_i$ in $F$ is Lipschitz continuous but not necessarily differentiable.

Multiobjective optimization has extensive applications in engineering and management [1–3]. Most optimization problems appearing in real-world applications have multiple objectives and can be modeled as multiobjective optimization problems. However, due to theoretical and computational challenges, it is not easy to solve multiobjective optimization problems numerically. Therefore, multiobjective optimization has attracted a great deal of research over the last decades [4–8].

So far, there are two types of methods for solving multiobjective optimization problems: indirect methods and direct methods. An indirect method converts the multiple objectives into a single one. One strategy is to combine the multiple objective functions using utility theory [9] or the weighted sum method [10, 11]. The difficulty with such methods is the selection of a utility function or proper weights that satisfy the decision-maker's preference. Another indirect method is to formulate all the objectives, except one, as constraints. However, it is not easy to determine the upper bounds of these objectives. On the one hand, small upper bounds could exclude some Pareto solutions; on the other hand, large upper bounds could enlarge the objective function value space, which leads to some sub-Pareto solutions. Additionally, an indirect method can only obtain a single Pareto solution in each run. In practical applications, however, decision-makers often prefer a number of Pareto solutions so that they can choose a strategy according to their preferences.

Direct methods are devoted to exploring the entire set of Pareto solutions or a representative subset. However, it is extremely hard or impossible to obtain the entire set of Pareto solutions for most multiobjective optimization problems, except for some simple cases. Therefore, stepping back to a representative subset is preferred. The genetic algorithm, as a population-based algorithm, is a good choice to achieve this goal. The generic single-objective genetic algorithm can be modified to find a set of nondominated solutions in a single run [12–14]. The ability of the genetic algorithm to simultaneously search different regions of a solution space makes it possible to find a diverse set of solutions for difficult problems. The crossover and mutation operators of the genetic algorithm can be applied to the various domains defined by different objectives, which in turn creates new nondominated solutions in unexplored parts of the Pareto front. In addition, a multiobjective genetic algorithm does not require the user to prioritize, scale, or weight objectives. Therefore, the genetic algorithm is one of the most popular metaheuristic approaches for solving multiobjective optimization problems [15–17].

The first multiobjective optimization method based on the genetic algorithm, called the vector evaluated genetic algorithm (VEGA), was proposed by Schaffer [18]. Afterwards, several multiobjective evolutionary algorithms were developed, such as the multiobjective genetic algorithm (MOGA) [19], niched Pareto genetic algorithm (NPGA) [20], weight-based genetic algorithm (WBGA) [21], random weighted genetic algorithm (RWGA) [22], nondominated sorting genetic algorithm (NSGA) [23], strength Pareto evolutionary algorithm (SPEA) [24], improved SPEA (SPEA2) [25], Pareto-archived evolution strategy (PAES) [26], and Pareto envelope-based selection algorithm (PESA) [27].

There are two basic criteria to measure a set of solutions for the multiobjective optimization problem [28].
(1) Elitism. The obtained solutions should be as close to the real Pareto solutions as possible. This can be measured by the closeness between the real Pareto frontier and the image of the obtained solutions, since the image set of the real Pareto solutions is the real Pareto frontier.
(2) Diversity. In order to extensively describe the Pareto solutions, the obtained solutions should be distributed uniformly over the set of real Pareto solutions. The diversity of the obtained solutions is measured by the diversity of their images.

These two criteria of Pareto solutions are often in conflict with each other. Therefore, one has to balance the trade-off between elitism and diversity. The aim of this paper is to introduce new techniques to tackle these issues. The rest of the paper is organized as follows. In Section 2, we review some basic definitions of multiobjective optimization and the process of the genetic algorithm. In Section 3, we propose an improved genetic algorithm for solving multiobjective optimization problems. In Section 4, some numerical experiments are carried out and the results are analyzed. Section 5 concludes the paper.

2. Preliminaries

In this section, we first review some definitions and theorems in multiobjective optimization and then introduce the general procedure of the genetic algorithm.

2.1. Definitions in Multiobjective Optimization

First of all, we present the following notation, which is often used in vector optimization. Given two vectors $a = (a_1, a_2, \ldots, a_m)^T$ and $b = (b_1, b_2, \ldots, b_m)^T$, we write
(i) $a = b$ if $a_i = b_i$ for all $i$;
(ii) $a \leqq b$ if $a_i \le b_i$ for all $i$;
(iii) $a \le b$ if $a_i \le b_i$ for all $i$, and there is at least one $i_0$ such that $a_{i_0} < b_{i_0}$; that is, $a \ne b$;
(iv) $a < b$ if $a_i < b_i$ for all $i$.
"$\geqq$," "$\ge$," and "$>$" can be defined similarly. In this paper, when $a \le b$ we say that $a$ dominates $b$ or that $b$ is dominated by $a$ (in some literature, $a \le b$ is read as $b$ dominates $a$ or $a$ is dominated by $b$; we reverse this definition since we are solving a minimization problem in this paper).

Definition 1. Suppose that $X \subset \mathbb{R}^m$ and $\bar{x} \in X$. If $\bar{x} \leqq x$ for any $x \in X$, then $\bar{x}$ is called an absolute optimal point of $X$.

An absolute optimal point is an ideal point, but it may not exist.

Definition 2. Let $X \subset \mathbb{R}^m$ and $\bar{x} \in X$. If there is no $x \in X$ such that $x \le \bar{x}$ (or $x < \bar{x}$), then $\bar{x}$ is called an efficient point (or weakly efficient point) of $X$.

The sets of absolute optimal points, efficient points, and weakly efficient points of $X$ are denoted as $X_{ab}$, $X_{ep}$, and $X_{wp}$, respectively. For the problem MOP, $X$ is called the decision variable space and its image set $F(X) = \{F(x) \mid x \in X\}$ is called the objective function value space.

Definition 3. Suppose that $\bar{x} \in X$. If $F(\bar{x}) \leqq F(x)$ for any $x \in X$, then $\bar{x}$ is called an absolute optimal solution of the problem MOP. The set of absolute optimal solutions is denoted as $X_{ab}^*$.

The concept of the absolute optimal solution is a direct extension of that for single-objective optimization. It is the ideal solution but does not exist in most cases.

Definition 4. Suppose that $\bar{x} \in X$. If there is no $x \in X$ such that $F(x) \le F(\bar{x})$ (or $F(x) < F(\bar{x})$), that is, $F(\bar{x})$ is an efficient point (or weakly efficient point) of the objective function value space $F(X)$, then $\bar{x}$ is called an efficient solution (or weakly efficient solution) of the problem MOP. The sets of efficient solutions and weakly efficient solutions are denoted as $X_{ep}^*$ and $X_{wp}^*$, respectively.

Another name for an efficient solution is Pareto solution, a concept introduced by Koopmans and Reiter in 1951 [29]. The meaning of a Pareto solution $\bar{x}$ is that there is no feasible solution $x \in X$ such that every component of $F(x)$ is at least as good as that of $F(\bar{x})$ and at least one component is strictly better. In other words, $\bar{x}$ is a best solution in the sense of the order "$\le$". Another intuitive interpretation of a Pareto solution is that it cannot be improved with respect to any objective without worsening at least one of the other objectives. The set of Pareto solutions is denoted by $P^*$; its image set $F(P^*)$ is called the Pareto frontier, denoted by $PF^*$. The following two theorems are well known.
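For concreteness, consider the following small example of these order relations (an illustration of ours, not one of the benchmark problems considered later). With $m = 2$ objectives, take three solutions with images

```latex
\[
F(x^1) = \begin{pmatrix} 1 \\ 4 \end{pmatrix}, \qquad
F(x^2) = \begin{pmatrix} 2 \\ 5 \end{pmatrix}, \qquad
F(x^3) = \begin{pmatrix} 3 \\ 2 \end{pmatrix}.
\]
% Here F(x^1) < F(x^2), so x^1 dominates x^2 and x^2 is not efficient;
% F(x^1) and F(x^3) are incomparable under "<=", so both x^1 and x^3
% are efficient points of {F(x^1), F(x^2), F(x^3)}.
```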

Theorem 5 (see [6]). For the multiobjective optimization problem MOP, it holds that
$$X_{ab}^* \subseteq X_{ep}^* \subseteq X_{wp}^*.$$

Theorem 6 (see [6]). For the objective function value space $F(X)$, if the sets of efficient points and weakly efficient points (i.e., $F(X)_{ep}$ and $F(X)_{wp}$, resp.) are known, then, in the feasible set $X$, it holds that
$$X_{ep}^* = \{x \in X \mid F(x) \in F(X)_{ep}\}, \qquad X_{wp}^* = \{x \in X \mid F(x) \in F(X)_{wp}\}.$$

Theorem 5 illustrates the relationship among the sets of absolute optimal solutions, efficient solutions, and weakly efficient solutions. Theorem 6 reveals that the preimage of the efficient points (or weakly efficient points) of $F(X)$ consists of the efficient solutions (or weakly efficient solutions) of the problem MOP.

2.2. Genetic Algorithm

The genetic algorithm is one of the most important evolutionary algorithms. It was introduced by Holland in the 1960s and then developed by his students and colleagues at the University of Michigan between the 1960s and 1970s [30]. Over the last two decades, the genetic algorithm has been enriched by a large body of literature, such as [31–34]. Nowadays, various genetic algorithms are applied in different areas, for example, mathematical programming, combinatorial optimization, automatic control, and image processing.

Suppose that $P(t)$ and $C(t)$ represent the parents and offspring at the $t$th generation, respectively. Then, the general structure of the genetic algorithm can be described by the following pseudocode.

General Structure of the Genetic Algorithm
Initialization:
  Generate the initial population $P(0)$.
  Set the crossover rate, mutation rate, and maximal generation time.
  Let $t = 0$.
While the maximal generation time is not reached, do the following:
  Crossover operator: generate $C_1(t)$.
  Mutation operator: generate $C_2(t)$.
  Evaluate $P(t)$ and $C(t) = C_1(t) \cup C_2(t)$: compute the fitness function.
  Selection operator: build the next population $P(t+1)$.
  Let $t = t + 1$ and go to the beginning of the loop.
End.

From the pseudocode, we can see that there are three important operators in a general genetic algorithm: the crossover, mutation, and selection operators. The implementation of these operators is highly dependent on the way of encoding.
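As a concrete illustration of this structure, the following is a minimal real-coded sketch in Python; the blend crossover, uniform mutation, and single-objective fitness used here are illustrative placeholders rather than the operators developed later in this paper.

```python
import random

def evolve(fitness, lower, upper, pop_size=50, generations=100,
           crossover_rate=0.9, mutation_rate=0.1):
    """Minimal real-coded genetic algorithm (illustrative sketch)."""
    dim = len(lower)
    rand_point = lambda: [random.uniform(lower[i], upper[i]) for i in range(dim)]
    population = [rand_point() for _ in range(pop_size)]           # P(0)
    for _ in range(generations):                                   # main loop
        offspring = []
        for parent1, parent2 in zip(population[::2], population[1::2]):
            if random.random() < crossover_rate:                   # crossover: blend parents
                alpha = random.random()
                offspring.append([alpha * a + (1 - alpha) * b
                                  for a, b in zip(parent1, parent2)])
        for point in population:                                   # mutation: perturb one gene
            if random.random() < mutation_rate:
                child = point[:]
                i = random.randrange(dim)
                child[i] = random.uniform(lower[i], upper[i])
                offspring.append(child)
        pool = population + offspring                              # evaluate and select
        pool.sort(key=fitness)                                     # smaller fitness is better
        population = pool[:pop_size]                               # P(t+1)
    return population[0]

# usage: minimize a simple quadratic over the box [-5, 5]^2
best = evolve(lambda x: (x[0] - 1) ** 2 + x[1] ** 2, [-5, -5], [5, 5])
```

In the multiobjective setting of this paper, the fitness-based sort in this sketch is exactly the step that Sections 3.1–3.4 replace.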

3. A New Multiobjective Genetic Algorithm

In this section, we present a new multiobjective genetic algorithm for solving the problem MOP. We first propose a ranking strategy called the optimum order method and then metrics for the elitism and diversity of solutions. Finally, a new selection operator for genetic algorithm is designed using the optimum order method and the elitism and diversity metrics.

Theoretically, the terminology “solution” means the point in the decision variable space, while the corresponding point in the objective function value space is named as “image of solution.” However, most of the following discussions are in the objective function value space. In order to simplify the description, we indiscriminately call the point from the decision variable space and its corresponding image from the objective function value space as the “solution,” if there is no confusion to do so.

3.1. The Optimum Order Method

In numerical optimization, in order to compare the numerical performance of different solutions, it is necessary to assign a fitness value to each solution. For a single-valued function, the fitness is normally assigned as its function value. However, assigning the fitness of a multiobjective function is not straightforward. So far, there are three typical approaches. The first one is the weighted sum approach [22, 35], which converts the multiple objectives into a single one using normalized weights $w_1, w_2, \ldots, w_m$ with $w_i \ge 0$ and $\sum_{i=1}^{m} w_i = 1$. The choice of the weight parameters is not an easy task for this approach. The second one is to alternate the objective functions [18, 25], which randomly divides the current population into $m$ equal subpopulations $P_1, P_2, \ldots, P_m$; then, each solution in the subpopulation $P_k$ is assigned a fitness value based on the objective function $f_k$. In fact, this approach is a straightforward extension of the single-objective genetic algorithm. The last one is the Pareto-ranking approach [4, 23, 36], which is a direct application of the definition of Pareto solutions. In the following, we present a ranking strategy called the optimum order method [6].

Definition 7. Let $P = \{x^1, x^2, \ldots, x^N\}$ be a population of solutions and let $F = (f_1, f_2, \ldots, f_m)^T$; for any $i \ne j$, define
$$c_{ij} = \#\{k \mid f_k(x^i) \le f_k(x^j),\ 1 \le k \le m\},$$
where $\#$ denotes the number of elements in a set. Then, $c_{ij}$ is called the optimal number of $x^i$ corresponding to $x^j$ for all objectives. Furthermore, $c_i = \sum_{j \ne i} c_{ij}$ is defined as the total optimal number of $x^i$ corresponding to all the other solutions for all objectives.

Obviously, for a minimization problem, a larger optimal number corresponds to a better solution. Therefore, optimal numbers can be considered as criteria for ranking a set of solutions. Due to this observation, we propose the following algorithm to rank a population of solutions.

Algorithm 8 (optimum order method (OOM)). Consider the following steps.
Step 1 (input). Input the population of solutions $P = \{x^1, x^2, \ldots, x^N\}$ and their objective function values.
Step 2 (compute optimal numbers). Compute the optimal number $c_{ij}$ and the total optimal number $c_i$ of each solution according to Definition 7; fill these numbers into Table 1.
Step 3 (rank the solutions). Rearrange the order of the solutions according to the decreasing order of their total optimal numbers. More precisely, denote $c_{(1)} \ge c_{(2)} \ge \cdots \ge c_{(N)}$, where $c_{(1)}$ is the largest total optimal number, $c_{(2)}$ the second largest, and so on.

The solutions $x^{(1)}$ and $x^{(2)}$, which correspond to $c_{(1)}$ and $c_{(2)}$, and so forth, are called the best solution, the second best solution, and so forth. The new order is called the optimal order. It is worth mentioning that the optimum order method does not necessarily rank Pareto solutions in the foremost positions; however, those solutions which are more reasonable with respect to all objectives are more likely to be ranked in the foremost positions. This property distinguishes it from the Pareto-ranking approach.
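As an illustration, a compact Python implementation of Algorithm 8 under Definition 7 might read as follows (the function name and data layout are our own choices):

```python
def optimum_order(objectives):
    """Rank a population by the optimum order method (OOM).

    objectives[i][k] is the value of objective k at solution i.
    Returns the indexes of the solutions sorted by decreasing
    total optimal number c_i (i.e., in optimal order).
    """
    n = len(objectives)
    m = len(objectives[0])
    total = [0] * n                       # c_i for each solution
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # optimal number c_ij: objectives on which x^i is not worse than x^j
            total[i] += sum(objectives[i][k] <= objectives[j][k] for k in range(m))
    return sorted(range(n), key=lambda i: total[i], reverse=True)

# usage: the three solutions of the earlier example; the result is [0, 2, 1]
order = optimum_order([[1.0, 4.0], [2.0, 5.0], [3.0, 2.0]])
```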

Lemma 9. Suppose that $x^i, x^j \in P$. If $F(x^i) \le F(x^j)$, then $c_{ij} > c_{ji}$.

Proof. Let $K_1 = \{k \mid f_k(x^i) < f_k(x^j)\}$, $K_2 = \{k \mid f_k(x^i) = f_k(x^j)\}$, and $K_3 = \{k \mid f_k(x^i) > f_k(x^j)\}$, where $\#K$ denotes the number of components of a set $K$. Obviously, we have $K_3 = \emptyset$ and $\#K_1 \ge 1$. Then, $c_{ij} = \#K_1 + \#K_2$ and $c_{ji} = \#K_2 + \#K_3 = \#K_2$. Therefore, $c_{ij} - c_{ji} = \#K_1$. Since $\#K_1 \ge 1$, $c_{ij} > c_{ji}$.

Lemma 10. Suppose that $x^i, x^j \in P$. If $F(x^i) \le F(x^j)$, then, for any $x^l$ ($l \ne i$, $l \ne j$), $c_{il} \ge c_{jl}$.

Proof. For any such $l$, let
$$A_1 = \{k \mid f_k(x^j) < f_k(x^l)\}, \quad A_2 = \{k \mid f_k(x^j) = f_k(x^l)\}, \quad A_3 = \{k \mid f_k(x^j) > f_k(x^l)\}.$$
Then, we have $c_{jl} = \#A_1 + \#A_2$ and $\#A_1 + \#A_2 + \#A_3 = m$.
(i) When $k \in A_1$: since $F(x^i) \le F(x^j)$, we have $f_k(x^i) \le f_k(x^j) < f_k(x^l)$; therefore, $f_k(x^i) \le f_k(x^l)$.
(ii) When $k \in A_2$: we have $f_k(x^i) \le f_k(x^j) = f_k(x^l)$; thus, $f_k(x^i) \le f_k(x^l)$.
In light of (i) and (ii), every index $k$ counted in $c_{jl}$ is also counted in $c_{il}$. Therefore, $c_{il} \ge \#A_1 + \#A_2 = c_{jl}$.

Theorem 11. If $c_{i^*} = \max_{1 \le i \le N} c_i$, then the solution $x^{i^*}$ corresponding to $c_{i^*}$ must be an efficient solution (Pareto solution) of the population $P$.

Proof. If $x^{i^*}$ is not an efficient solution, then there exists an $x^j \in P$ such that $F(x^j) \le F(x^{i^*})$. Therefore, according to Lemma 9, $c_{j i^*} > c_{i^* j}$. Due to Lemma 10, we have, for any $l$ ($l \ne i^*$, $l \ne j$), $c_{jl} \ge c_{i^* l}$. Combining these two facts, we have
$$c_j = c_{j i^*} + \sum_{l \ne i^*, j} c_{jl} > c_{i^* j} + \sum_{l \ne i^*, j} c_{i^* l} = c_{i^*},$$
which is a contradiction to the assumption $c_{i^*} = \max_{1 \le i \le N} c_i$. The proof is completed.

Based on Theorem 11, the following results are obvious.

Corollary 12. If $x^i$ is an absolute optimal solution in the population $P$, then $c_i = m(N-1) = \max_{1 \le j \le N} c_j$.

Corollary 13. If $x^{(1)}$ is the solution corresponding to $c_{(1)}$ and $x^{(2)}$ is the solution corresponding to $c_{(2)}$, then $x^{(2)}$ is an efficient solution of the population without $x^{(1)}$.

Theorem 11 and Corollary 13 reveal the rationality of the optimum order method: the best solution must be an efficient solution, and the second best solution, although possibly not an efficient solution itself, is an efficient solution of the population without the best one. This statement remains true down to the solution with the smallest total optimal number (ranked as the last reasonable solution). Therefore, it is reasonable for us to choose solutions with large total optimal numbers as parents for the next generation.

3.2. The Elitism Metric

Over the last two decades, a number of different evolutionary algorithms were suggested to solve multiobjective optimization problems, such as MOGA [37], NSGA [23], NPGA [20], SPEA [24], PAES [38], and NSGA-II [4]. Among them, MOGA, NSGA, and NPGA did not introduce any elitism strategy at all. Therefore, although they were successful in finding solutions for many test problems and a number of engineering problems, researchers realized that they still needed to be improved in the sense of obtaining better Pareto solutions [28]. Elitism is one of the main issues considered for improvement. Subsequently, SPEA, PAES, and NSGA-II started to take the elitism of solutions into account.

SPEA introduces an external population which stores all nondominated solutions discovered so far, beginning from the initial population. At each generation, a combined population based on the external population and the current population is generated first. Then, all the nondominated solutions in the combined population are identified and assigned a fitness based on the number of solutions they dominate. Furthermore, the dominated solutions are assigned a fitness which is worse than the worst fitness of the nondominated solutions. Then, the next generation is selected based on the fitness of each solution. This elitism strategy makes sure that the search is directed toward the nondominated solutions. Although the computational complexity of SPEA is $O(mN^3)$ (where $m$ is the number of objectives and $N$ is the size of the population), with proper bookkeeping the complexity can be reduced to $O(mN^2)$.

PAES uses a single-parent single-offspring evolutionary algorithm which is similar to a $(1+1)$-evolution strategy. In PAES, the elitism strategy is implemented by comparing one parent and one offspring. If the offspring dominates the parent, the offspring is accepted as the next parent; on the other hand, if the parent dominates the offspring, the offspring is replaced by another mutated solution (a new offspring) and the comparison between the parent and the offspring is made again. However, if the parent and the offspring do not dominate each other, the choice between them is made by comparing them with an archive of the best solutions found so far. More specifically, if the offspring dominates any solution stored in the archive, then the offspring is accepted by the archive and all the dominated solutions are eliminated from the archive. On the contrary, if the offspring does not dominate any solution stored in the archive, then both the parent and the offspring are accepted by the archive after checking the diversity. The computational complexity of PAES is calculated as $O(amN)$, where $a$ is the length of the archive. Since the archive size is usually chosen proportional to the population size $N$, the overall complexity of PAES is $O(mN^2)$.

NSGA implements the elitism strategy using the naive nondominated sorting approach. For a population of solutions, the naive nondominated sorting first identifies the nondominated solutions in the population and then the nondominated solutions of the remainder, without considering the solutions which have already been identified. This process is repeated until all the solutions in the population have been identified. In this way, the solutions are classified into different Pareto frontiers and selected based on their Pareto ranking. The naive nondominated sorting approach is based directly on the definition of the Pareto frontier. It is easy to implement, but in the worst case the task of identifying all the Pareto frontiers requires a computational complexity of $O(mN^3)$. In order to reduce the complexity of NSGA, NSGA-II improved the sorting approach and presented the fast nondominated sorting approach. NSGA-II introduced two entities for each solution $p$: the domination count $n_p$ and the dominated set $S_p$. The domination count $n_p$ represents the number of solutions which dominate the solution $p$; the dominated set $S_p$ consists of the solutions which are dominated by the solution $p$.

All solutions in the first nondominated front have a domination count of zero. Then, for each solution $p$ with $n_p = 0$, we visit each member $q$ of its set $S_p$ and reduce its domination count $n_q$ by one. In doing so, if the domination count of any member $q$ becomes zero, we put it in a separate list $Q$; these members belong to the second nondominated front. The above procedure is then continued with each member of $Q$, and the third front is identified. This process continues until all frontiers are identified. It is evident that the naive nondominated sorting approach and the fast nondominated sorting approach lead to the same Pareto ranking for a population of solutions. The overall computational complexity of the fast nondominated sorting approach is $O(mN^2)$.
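A sketch of the fast nondominated sorting procedure in Python follows; the `dominates` helper encodes the "$\le$" relation for minimization, and the names are illustrative:

```python
def fast_nondominated_sort(objectives):
    """Return the list of fronts (lists of solution indexes), NSGA-II style."""
    n = len(objectives)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b

    S = [[] for _ in range(n)]      # S[p]: solutions dominated by p
    counts = [0] * n                # n_p: number of solutions dominating p
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(objectives[p], objectives[q]):
                S[p].append(q)
            elif dominates(objectives[q], objectives[p]):
                counts[p] += 1

    fronts = [[p for p in range(n) if counts[p] == 0]]   # first front: n_p == 0
    while fronts[-1]:
        next_front = []
        for p in fronts[-1]:
            for q in S[p]:
                counts[q] -= 1                           # remove p's domination
                if counts[q] == 0:
                    next_front.append(q)                 # q joins the next front
        fronts.append(next_front)
    return fronts[:-1]                                   # drop the trailing empty front
```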

The previous elitism strategies use the definitions of domination or Pareto ranking, but they do not quantify the elitism of solutions. In this paper, we introduce a new elitism strategy which is still based on Pareto ranking but quantifies the elitism of solutions.

Definition 14. Suppose that $P$ is a population of solutions. We define the elitism metric of the nondominated solutions in $P$, say $P_1$, as $1$; define the elitism metric of the nondominated solutions in $P \setminus P_1$, say $P_2$, as $2$; and so forth; and define the elitism metric of the nondominated solutions in $P \setminus (P_1 \cup P_2 \cup \cdots \cup P_{K-1})$, say $P_K$, as $K$.

Remark 15. The process in Definition 14 is the same as the naive nondominated sorting approach, and the elitism metric of each solution is actually its Pareto-ranking index.

According to Definition 14, the elitism measure of solutions can be taken as a function of the solutions whose value is the elitism metric. We call this function the elitism function. Mathematically,
$$E(x) = k \quad \text{if } x \in P_k, \ k = 1, 2, \ldots, K.$$
It is obvious that a small value of $E(x)$ means that $x$ enjoys a good Pareto ranking. Therefore, in order to find solutions with better Pareto ranking, we should minimize the function $E$. In the following, we present a method to calculate the value of the function $E$.

Suppose that $P = \{x^1, x^2, \ldots, x^N\}$; we calculate the entries $d_{ij}$ by
$$d_{ij} = \begin{cases} 1, & \text{if } F(x^i) \le F(x^j), \\ -1, & \text{if } F(x^j) \le F(x^i), \\ 0, & \text{otherwise}. \end{cases}$$
Then we construct the domination table (see Table 2).

Obviously, the entries of Table 2 are just $1$, $-1$, and $0$. Now we successively check every row of the table; if any row, say the $i$th one, has entries that are only $0$ or $1$, then we can claim that there is no solution dominating $x^i$, which means that the elitism metric of $x^i$ is $1$. Then we discard the rows and columns in which these solutions with elitism metric $1$ are located; this means that we do not consider these solutions any more in the following process, since they have already been identified. This process is repeated, the newly identified solutions are assigned an elitism metric of $2$, and so forth, until all the solutions have been identified.

In Table 2, it is evident that $d_{ij} = -d_{ji}$ for $i \ne j$. So the computational complexity of assigning elitism metrics for the population is $O(mN^2)$. We provide the following example to illustrate the process described above.
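The domination-table procedure can be sketched as follows (again in Python, with illustrative names); discarding rows and columns is realized here by marking identified solutions:

```python
def elitism_metric(objectives):
    """Assign each solution its Pareto-ranking index via a domination table."""
    n = len(objectives)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    # domination table: d[i][j] = 1 if x^i dominates x^j, -1 if dominated, else 0
    d = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objectives[i], objectives[j]):
                d[i][j], d[j][i] = 1, -1

    metric = [0] * n                      # 0 means "not yet identified"
    rank = 1
    while 0 in metric:
        # a row with no -1 among the active columns means no one dominates x^i
        front = [i for i in range(n) if metric[i] == 0
                 and all(d[i][j] != -1 for j in range(n) if metric[j] == 0)]
        for i in front:
            metric[i] = rank              # "discard" the row/column by marking it
        rank += 1
    return metric
```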

Figure 1 illustrates a population of solutions; the numbers beside the points are the indexes of these solutions. Table 3 is the domination table of this population. First, we successively check the rows of Table 3; the rows containing only $0$ and $1$ are 1, 2, 5, 8, 9, 10, 16, 17, 19, 20, and 28. So the solutions corresponding to these rows are in the first Pareto frontier and are assigned elitism metric $1$. Then, we discard the corresponding rows and columns from the domination table and perform the same search again, assigning the solutions corresponding to the newly identified rows elitism metric $2$. This process is repeated until all the rows of the domination table have been discarded. We depict the elitism metric assignment in Figure 2.

3.3. The Diversity Metric

As discussed before, the diversity of solutions is another criterion for solutions obtained by a multiobjective genetic algorithm. In order to obtain uniformly distributed solutions, it is important to maintain the diversity of the population in the iteration process. Without this, solutions in the population tend to form relatively few clusters; this phenomenon is known as genetic drift. Several approaches have been developed to maintain the diversity of solutions. Among them, the sharing function approach [19, 39] is one of the earliest. The sharing function approach degrades the fitness of solutions by setting a threshold $\sigma_{\text{share}}$, called the sharing parameter. For a solution $x$ in the current population, a niche count $nc(x)$ is first calculated: $nc(x) = 1$ if there is no solution in the $\sigma_{\text{share}}$-niche (neighborhood) of $x$ (i.e., there is no solution whose distance to $x$ is less than $\sigma_{\text{share}}$); $nc(x) > 1$ if there are some solutions in the $\sigma_{\text{share}}$-niche of $x$ (i.e., there are some solutions whose distances to $x$ are less than $\sigma_{\text{share}}$). Then, the degradation is obtained by dividing the fitness by the niche count, that is, $f(x)/nc(x)$; this quotient is called the shared fitness. The original NSGA [23] used the sharing function approach to maintain the diversity of solutions. The performance of the sharing function approach depends heavily on the sharing parameter $\sigma_{\text{share}}$; however, $\sigma_{\text{share}}$ is a user-decided parameter and it is difficult to choose a proper one. Since each solution must be compared with all other solutions in the population, the overall complexity of the sharing function approach is $O(N^2)$.

Deb et al. [4] introduced the crowding distance approach to maintain the diversity of solutions. This approach calculates the density estimation of a solution by summing the average distance of the two solutions on either side of this solution along each of the objectives. The density estimation, also called the crowding distance, serves as an estimate of the perimeter of the cuboid formed by using the nearest neighbors as vertices. Obviously, a solution with a smaller value of crowding distance is more crowded by other solutions. Thus, the crowding distance can be considered as a measure of the diversity of a solution. The crowding distance approach does not require any user-defined parameter, and its computational complexity is $O(mN \log N)$, which is better than that of the sharing function approach. However, the crowding distance only reflects the local diversity of a solution.

The cell-based density approach [6, 26, 38, 40, 41] divides the objective space into $m$-dimensional cells. The number of solutions in each cell is defined as the density of the cell, and the density of a solution is equal to the density of the cell in which the solution is located. An efficient approach to dynamically divide the objective function space into cells was proposed by Lu and Yen [40, 41]. The main advantage of the cell-based density approach is that a global density map of the objective function value space can be obtained. But the process of the cell-based density approach is complicated: for a single solution, in the worst case, it requires $O(mK)$ comparisons to locate the solution in a proper cell, where $K$ is the number of cells for each objective function. Thus, the total complexity of the cell-based density approach is $O(mKN)$. Furthermore, $K$ is a user-defined parameter which is not easy to choose properly.

In this section, we define a metric measuring the degree of crowdedness around a solution. Then, a specific diversity metric is introduced.

Definition 16. Suppose that $P$ is a population, $x^i$ and $x^j$ are two solutions from $P$, and $\eta_i$ and $\eta_j$ are two scalars corresponding to $x^i$ and $x^j$, respectively. We call $\eta_i$ and $\eta_j$ the diversity metrics of $x^i$ and $x^j$, respectively, if $\eta_i > \eta_j$ implies that the solutions around $x^i$ are more crowded than those around $x^j$.

Definition 16 is just a qualitative description of the diversity metric. In the following, we introduce a specific definition of the diversity metric which quantitatively measures the degree of crowdedness around a solution.

Definition 17. Suppose that $P$ is a population with $N$ solutions and $x^i$ is a solution in $P$. We define a function as
$$\eta(x^i) = \sum_{j=1,\, j \ne i}^{N} g(d_{ij}),$$
where $g$ is a decreasing function and $d_{ij}$ is the Euclidean distance between the solutions $x^i$ and $x^j$. Then we call the value $\eta(x^i)$ the global diversity metric of $x^i$.

It is obvious that the global diversity metric is consistent with Definition 16. One advantage of the global diversity metric is that it reflects the diversity of $x^i$ with respect to all the other solutions, since the function involves the whole population. The only requirement for calculating the global diversity metric is the Euclidean distance between any two solutions, so its computational complexity is $O(N^2)$. A proper choice of the function $g$ is, for example, $g(d) = e^{-\alpha d}$, where $\alpha > 0$.
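In code, the global diversity metric takes only a few lines; this sketch uses the exponential kernel suggested above, with $\alpha$ as a user-chosen parameter (the specific kernel is one possible choice, as noted):

```python
import math

def global_diversity(objectives, alpha=1.0):
    """Global diversity metric: eta(x^i) = sum_{j != i} g(d_ij), g(d) = exp(-alpha*d).

    Larger values mean the neighborhood of x^i is more crowded.
    """
    n = len(objectives)
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [sum(math.exp(-alpha * dist(objectives[i], objectives[j]))
                for j in range(n) if j != i)
            for i in range(n)]
```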

As an example, Figure 3 illustrates the global diversity metrics of the population presented in Figure 1 (the indexes of the solutions can be found in Figure 1). From Figure 3, one can observe that the solutions which are less crowded enjoy smaller global diversity metrics, whereas the solutions which are more crowded have larger global diversity metrics.

3.4. A New Selection Operator

As discussed before, the core criteria for multiobjective optimization are to obtain solutions with better elitism and, at the same time, to maintain the diversity of solutions. If we tackle these criteria from a multiobjective optimization point of view, we need to solve a multiobjective optimization problem with two objectives: minimizing the elitism metric and the global diversity metric of the solution. In light of the elitism metric and the global diversity metric presented in the previous subsections, we design the following multiobjective subproblem:
$$\min_{x \in P} G(x) = (E(x), \eta(x))^T, \quad (33)$$
where $G$ is a multiobjective function and $P$ is a population of candidate solutions. It is clear that the image of $G$ is a finite set whose cardinality is the same as that of $P$. One property of the multiobjective subproblem (33) is that its domain and image set are finite; that is, problem (33) is discrete. Thus, if we rank the points in $G(P)$ by the optimum order method (Algorithm 8), it is reasonable to say that the points with smaller optimal orders are better points; that is, the solutions corresponding to these points should be maintained into the next generation. This process is summarized in the following algorithm.

Algorithm 18 (multiobjective selection operator). Consider the following steps.
Step 1. Input the selection pool $S$ and its objective function values $F(S)$. Input the prefixed population size $N$ ($N$ is less than the cardinality of $S$).
Step 2. Calculate the elitism metric $E(x)$ and the global diversity metric $\eta(x)$ of each solution $x$ in $S$, using the elitism function and Definition 17, respectively.
Step 3. Construct the objective function value space $\{(E(x), \eta(x)) \mid x \in S\}$ of the multiobjective subproblem (33).
Step 4. Rank the points by the optimum order method (Algorithm 8) and take the first $N$ solutions according to their optimal orders as parents for the next generation.
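Putting the pieces together, Algorithm 18 can be sketched by reusing the `elitism_metric`, `global_diversity`, and `optimum_order` helpers from the earlier sketches (the pretreatment of the diversity metric described next is omitted here for brevity):

```python
def multiobjective_select(pool_objectives, pop_size):
    """Select pop_size indexes from the pool by solving subproblem (33).

    pool_objectives[i] is F(x^i) for solution i of the selection pool.
    """
    E = elitism_metric(pool_objectives)        # Pareto-ranking index (to minimize)
    eta = global_diversity(pool_objectives)    # crowdedness (to minimize)
    # objective vectors of the discrete subproblem (33)
    sub = [[E[i], eta[i]] for i in range(len(pool_objectives))]
    ranked = optimum_order(sub)                # optimal order on the subproblem
    return ranked[:pop_size]                   # first N solutions become parents
```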

In Table 4, we use the multiobjective selection operator to evaluate the population presented in Figure 1. In this table, $E_i$, $\eta_i$, $c_i$, and $o_i$ represent the elitism metric, global diversity metric, total optimal number, and optimal order of solution $x^i$, respectively. We can observe that the multiobjective selection operator systematically considers the criteria of elitism and diversity. We select the 15 solutions whose optimal orders are the smallest as parents of the next generation; they are marked as red squares in Figure 4. However, as Figure 4 shows, the distribution of the selected points is not ideal. This is because there are some irregularly distributed points, such as points 33, 34, and 35. They are far away from the others, which means that they should be eliminated; but because of their very small diversity metrics, they are selected by the selection operator. In order to tackle this issue, we pretreat the diversity metric before running the selection operator.

In sports matches, statisticians often eliminate the highest and the lowest marks before calculating the average. We borrow this idea to pretreat the diversity metric: we simply identify the points with the highest and lowest diversity metrics and eliminate them before running the selection operator. In the example, the points with the highest diversity metrics are 14, 15, 16, and 18, and those with the lowest are 32, 33, 34, and 35. Thus, we run the selection operator without considering them.

Table 5 presents the total optimal numbers and optimal orders with pretreatment of the diversity metric. It can be observed that points 14–16, 18, and 32–35 are excluded from the ranking process. This gives more chance to some points close to the Pareto frontier. For example, points 24 and 30 (which were ranked 22nd and 23rd, resp.) were not selected without pretreatment of the diversity metric, but they are selected (ranked 13th and 14th, resp.) after the pretreatment of the diversity metric. From Figure 5, we can see that the selected points are now well distributed and closer to the Pareto frontier.

3.5. A New Multiobjective Genetic Algorithm

Given the multiobjective selection operator, we are now ready to propose a new multiobjective genetic algorithm.

Algorithm 19 (optimum order multiobjective genetic algorithm (OOMOGA)). Consider the following.
Initialization:
  Generate the initial population $P(0)$.
  Set the crossover rate, mutation rate, and number of maximal generations.
  Let $t = 0$.
While the number of maximal generations has not been reached, do the following:
  Run the crossover operator: generate the offspring $C_1(t)$.
  Run the mutation operator: generate the offspring $C_2(t)$.
  Construct a selection pool $S(t)$ by combining $P(t)$, $C_1(t)$, and $C_2(t)$; that is, $S(t) = P(t) \cup C_1(t) \cup C_2(t)$. Compute their multiobjective function values $F(S(t))$.
  Run the multiobjective selection operator (Algorithm 18) with inputs $S(t)$ and $F(S(t))$; the selected solutions are maintained as the parents of the next generation, that is, $P(t+1)$.
  Let $t = t + 1$ and go to the beginning of the loop.
End.

4. Numerical Experiments

In this section, we investigate the numerical performance of OOMOGA. In order to evaluate the numerical performance of solvers, we use the performance metric proposed in [42]. Suppose that $P^*$ is a set of uniformly distributed points along the Pareto frontier (in the objective function value space). Let $A$ be a set of solutions obtained by a certain solver. Then, the average distance from $P^*$ to $A$ (the inverted generational distance, IGD) is defined as
$$IGD(A, P^*) = \frac{1}{|P^*|} \sum_{v \in P^*} d(v, A),$$
where $d(v, A)$ is the minimum Euclidean distance between $v$ and the points in $A$; that is, $d(v, A) = \min_{u \in A} \|v - u\|$. In fact, $P^*$ represents a sample set of the real Pareto frontier; if $P^*$ is large enough to approximate the Pareto frontier very well, $IGD(A, P^*)$ measures both the diversity and the convergence of $A$ in a sense. A smaller $IGD(A, P^*)$ means the set $A$ is closer to the real Pareto frontier and has better diversity.
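This performance metric is straightforward to compute; a minimal sketch, assuming both sets are given as lists of objective vectors:

```python
import math

def igd(approximation, pareto_sample):
    """Average distance from a Pareto-front sample P* to an obtained set A."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(v, u) for u in approximation)     # d(v, A)
               for v in pareto_sample) / len(pareto_sample)
```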

The referential algorithms are those submitted to the special session on performance assessment of unconstrained/bound constrained multiobjective optimization algorithms at CEC'09. The 13 algorithms submitted to the special session are listed as follows:
(1) MOEAD [43];
(2) GDE3 [44];
(3) MOEADGM [45];
(4) MTS [46];
(5) LiuLiAlgorithm [47];
(6) DMOEADD [48];
(7) NSGAIILS [49];
(8) OWMOSaDE [50];
(9) ClusteringMOEA [51];
(10) AMGA [52];
(11) MOEP [53];
(12) DECMOSA-SQP [54];
(13) OMOEAII [55].

The benchmarks are taken from [42] and were also used in CEC'09. Figure 6 illustrates the objective function value spaces of these test problems (the figure for test Problem 2 is omitted here since it is similar to that of test Problem 1); the red curves/surfaces represent their Pareto frontiers. Among these test problems, Problems 1–7 have two objective functions, whereas Problems 8–10 have three objective functions. The Pareto solutions of Problems 5, 6, and 9 are disconnected, while the others are connected.

In order to keep consistency with the final report of CEC'09 [56], in the implementation of OOMOGA we set the population size as 100 for problems with two objectives and 150 for problems with three objectives; the number of function evaluations is less than 300,000. For each test instance, we run OOMOGA independently 30 times. The numerical performance evaluated by $IGD$ is illustrated in Table 6.

From Table 6, the proposed solver OOMOGA performs the best at solving test Problem 2; its evaluation is 0.00527, which is better than those of all the other solvers. In solving test Problem 9, OOMOGA (whose evaluation is 0.0601) performs worse only than DMOEADD (whose evaluation is 0.04896) and better than all the other solvers. In solving test Problem 3, the evaluation of OOMOGA (0.0331) is ranked in the third position, worse than MOEAD (0.0072) and LiuLiAlgorithm (0.01497). In solving test Problem 4, the evaluation of OOMOGA (0.0347) is ranked fourth, worse than MTS (0.02356), GDE3 (0.0265), and DECMOSA-SQP (0.03392). The numerical performance of OOMOGA is moderate in solving test Problems 1, 6, and 8, with ranks 8, 9, and 9, respectively. For test Problems 5 and 10, the numerical performance of OOMOGA is not so good, ranking 13th and 12th, respectively. The worst performance of OOMOGA appears in solving test Problem 7; the evaluation is 0.1267, which is worse than those of all the other solvers.

It is not uncommon that the numerical performance of the proposed solver OOMOGA is unstable across different test problems, because the numerical results are affected not only by the performance of the solver but also by the linearity and structure of the objective functions themselves. Furthermore, the crossover and mutation operators are also affected by the distribution of points in the objective function value space. Generally speaking, if a new point generated by the crossover or mutation operators has a high probability of being located around the Pareto frontier, then the Pareto frontier can be well approximated by the solver, as in test Problems 2, 3, and 4. On the contrary, if it is hard for the crossover or mutation operators to generate new points around the Pareto frontier, then the problem is difficult for MOGAs to solve, as in test Problems 5, 6, and 7. In fact, this instability also appears in the other solvers, such as MOEAD, which is reported to be the best solver in [56]: the evaluation of MOEAD in solving test Problems 4, 5, and 10 is not very good, ranking 14th, 7th, and 9th, respectively.

Figure 7 demonstrates the obtained Pareto frontiers of test Problems 1, 2, 4, and 9. From Figures 7(a) and 7(b), we can observe that, for test Problems 1 and 2, the proposed solver obtained very good representations of their Pareto frontiers. The result for test Problem 4 (see Figure 7(c)) is not very uniformly distributed, which may affect the evaluation of its performance. Figure 7(d) illustrates the approximate Pareto frontier of test Problem 9; we can see that its structure is more or less an approximation of the real Pareto frontier.

5. Conclusion

This paper proposed a solver based on the genetic algorithm for multiobjective optimization. When using a genetic algorithm to solve multiobjective optimization problems, the evolutionary procedure prefers to select individuals with better elitism and diversity; therefore, the algorithm has to consider the trade-off between elitism and diversity. To tackle this issue, we first developed strategies to measure the elitism and diversity of populations. Then we used the proposed elitism and diversity metrics to construct a discrete multiobjective subproblem. Finally, a new selection operator was designed by applying the optimum order method to solve the multiobjective subproblem. We tested the proposed solver on the test instances used in CEC'09 and compared the numerical results with those of the referential solvers from CEC'09. The numerical performance analysis shows that the proposed solver is good at solving problems whose objective function value space has a high density of generated points around the Pareto frontier.

We will further study this topic from the following two points of view.
(i) Improving the performance of the crossover and mutation operators of the genetic algorithm: the crossover and mutation operators are very important for the diversity of solutions. However, the distribution of the new individuals generated by the current crossover and mutation operators coincides with the distribution of their parents. This leads to excessive search of the "rich" areas but insufficient search of the "poor" areas.
(ii) Introducing more reasonable diversity metrics: it is very important to maintain the diversity of the population in the evolutionary process of the genetic algorithm. However, measuring diversity is not an easy task. The global diversity metric proposed in this paper is a proper diversity metric, but it is far from perfect.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Changzhi Wu was partially supported by Australian Research Council Linkage Program LP140100873, Natural Science Foundation of China (61473326), and Natural Science Foundation of Chongqing (cstc2013jcyjA00029 and cstc2013jjB0149).