Abstract

In this paper, an improved NSGA2 algorithm is proposed for solving multiobjective optimization problems. One improvement is made to the original NSGA2 algorithm: a local search strategy is incorporated into it. After each iteration of the NSGA2 algorithm, a local search is performed on the Pareto optimal set to seek better solutions, so that the algorithm gains a stronger local search ability, which benefits the optimization process. Finally, the proposed modified NSGA2 algorithm (M-NSGA2) is tested on two classic multiobjective problems, the KUR problem and the ZDT3 problem. The calculation results show that the modified NSGA2 outperforms the original NSGA2, indicating that the improvement strategy is effective.

1. Introduction

The maximization of profit has always been a pursuit of mankind in every field, especially given the energy and population problems of recent years. Many social and economic problems require finding an optimal solution that reduces input and increases output. As a result, optimization problems have attracted more and more attention from scholars, who in recent years have proposed many solution methods from different perspectives. Many of these methods seek the values of control variables that minimize a single objective. However, many real-world decision problems require considering not just one factor but a variety of factors; in other words, many practical problems are multiobjective programming problems. Scholars have therefore conducted in-depth research on multiobjective programming and obtained many meaningful results, which lay the foundation for future research on multiobjective problems. In recent years, heuristic search algorithms have developed greatly. Researchers have introduced and developed many heuristic search algorithms inspired by the behavior of social animals in nature and by the characteristics of the problems to be optimized, such as the genetic algorithm [1], the particle swarm optimization algorithm [2], the differential evolution algorithm [3], the shuffled frog leaping algorithm [4], the artificial bee colony algorithm [5], and so on. Among these heuristic search algorithms, the genetic algorithm is very representative. Owing to its good computational performance and its idea of genetic evolution, the genetic algorithm is widely used in many areas [6-10]. The genetic algorithm was first used to deal with single objective optimization problems, but over time many researchers have introduced multiobjective optimization algorithms based on its ideas.
These multiobjective optimization algorithms include the vector evaluated genetic algorithm [11], the multiobjective genetic algorithm [12, 13], the strength Pareto evolutionary algorithm [14, 15], and the nondominated sorting genetic algorithm [16]. Several years later, in order to improve the computational performance of these multiobjective genetic algorithms, researchers introduced improved versions of them, such as the strength Pareto evolutionary algorithm 2 [17, 18], which improves on the strength Pareto evolutionary algorithm, the nondominated sorting genetic algorithm 2 [19], which improves on the nondominated sorting genetic algorithm, and so on. In this way, many multiobjective genetic algorithms have been proposed over the years, making a great contribution to the solution of multiobjective optimization problems.

The nondominated sorting genetic algorithm 2 (referred to as the NSGA2 algorithm in the rest of this paper) is the improved version of the nondominated sorting genetic algorithm, first proposed by Deb in 2001. Through years of development and use, the NSGA2 algorithm has proved to be a very good multiobjective genetic algorithm and has been successfully applied to many multiobjective optimization problems [20-26]. However, the computational performance of an algorithm varies greatly across different optimization problems, and it also depends on the optimization strategies inside the algorithm itself. Therefore, many researchers have proposed improvement strategies for the NSGA2 algorithm to address its shortcomings on certain problems. In this paper, in order to overcome the poor local search ability of the NSGA2 algorithm, a local search strategy is incorporated into it. The general idea of the proposed strategy is to search locally around the solutions found so far, so that the algorithm gains a better local search ability and can perform a more detailed search. To determine whether this local search ability improves the computational performance of the original NSGA2 algorithm, this paper also uses two classic multiobjective optimization problems as simulation cases for the modified algorithm. Finally, the simulation results are compared with those of the original NSGA2 algorithm. The comparison shows that the local search strategy indeed improves the computational performance of the NSGA2 algorithm.

The rest of this paper is organized as follows. Section 2 describes the mathematical model of the multiobjective problem. Section 3 presents the principles and steps of the NSGA2 algorithm, described from three aspects: fast nondominated sorting, the crowding distance function, and environmental selection based on PCD. At the end of Section 3, the basic steps of the original NSGA2 algorithm are given. The simulation work and results analysis for different multiobjective optimization problems are given in Section 4, where two classic multiobjective optimization problems are introduced. Section 5 draws the conclusions of this paper.

2. Multiobjective Problem

A multiobjective problem is a problem in which at least two objectives should be optimized while a set of constraints is satisfied. A multiobjective optimization problem can be mathematically described as
\[
\min F(x) = \big(f_1(x), f_2(x), \ldots, f_m(x)\big), \quad \text{s.t. } g_i(x) \le 0, \; i = 1, 2, \ldots, k, \tag{1}
\]
where $f_i(x)$ represents the $i$-th objective function, $x$ represents the vector of control variables, $k$ represents the number of constraints, and $m$ is the total number of objectives to be optimized.

However, the purpose of solving a multiobjective optimization problem is not to make every objective optimal simultaneously, but to provide a series of solutions for the decision makers, who then choose a decision scheme in accordance with their own decision intention. The set of optimal solutions found in the end is called the Pareto optimal solution set, which can be mathematically expressed as
\[
P^* = \{x^* \in \Omega \mid x^* \text{ is Pareto optimal}\}, \tag{2}
\]
where $x^*$ represents a Pareto optimal solution, defined as
\[
x^* \text{ is Pareto optimal} \iff \nexists\, x \in \Omega : x \prec x^*. \tag{3}
\]

In (3), the expression $x_1 \prec x_2$ means that the individual $x_1$ dominates the individual $x_2$. For a minimization problem with $m$ objectives, a solution $x_1$ dominating another solution $x_2$ can be mathematically expressed as
\[
x_1 \prec x_2 \iff \forall i \in \{1, \ldots, m\}: f_i(x_1) \le f_i(x_2) \;\wedge\; \exists j \in \{1, \ldots, m\}: f_j(x_1) < f_j(x_2). \tag{4}
\]

From (4) we can conclude that if each objective value of a solution $x_1$ is not bigger than the corresponding objective value of a solution $x_2$, and at least one objective can be found for which $x_1$'s value is smaller than $x_2$'s, then we say that $x_1$ dominates $x_2$.
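The dominance test in (4) can be sketched as a small function. This is a minimal illustration for a minimization problem, not code taken from the paper:

```python
def dominates(f_a, f_b):
    """Return True if objective vector f_a Pareto-dominates f_b
    (minimization): f_a is no worse in every objective and
    strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better
```

Note that two solutions may be mutually nondominated, which is exactly why a multiobjective algorithm returns a set of solutions rather than a single one.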

A multiobjective optimization algorithm works exactly by seeking the nondominated solutions and using some evolutionary strategy to evolve the population, so as to finally find the Pareto optimal set of the problem to be optimized. In order to demonstrate more intuitively the objective values of the Pareto optimal solution set and their distribution in the objective space, the Pareto front is introduced. The Pareto front is the line or curved surface formed by the objective values of the solutions in the Pareto optimal set; its mathematical expression is
\[
PF^* = \{F(x) = (f_1(x), \ldots, f_m(x)) \mid x \in P^*\}. \tag{5}
\]

3. Improved NSGA2 Algorithm

3.1. NSGA2 Algorithm

The NSGA2 algorithm is the modified version of NSGA, proposed by Deb and other researchers; after several years of development and use, NSGA2 is now regarded as a very good and mature multiobjective optimization algorithm. In the NSGA2 algorithm, the crowding distance function is used to calculate each individual's fitness value, and the fast nondominated sorting idea and environmental selection based on PCD are the two strategies used to evolve the population toward the optimum of the corresponding problem. In the rest of this section, three important aspects of the original NSGA2 algorithm are introduced: fast nondominated sorting, which is used to evaluate each individual's dominating status; the crowding distance function, which is used to calculate each individual's fitness value; and environmental selection based on PCD, which is used to select individuals for the next generation of the evolutionary population.

3.1.1. Fast Nondominated Sorting (FNS)

In an evolutionary multiobjective optimization algorithm, the evolution of the population eliminates inferior solutions and selects excellent ones by a certain strategy, so that the excellent solutions are retained in the next generation and the final excellent solution set is found through step-by-step evolution. In NSGA2, the fast nondominated sorting strategy is used to evaluate the dominating status of each individual in the evolutionary population; according to this status, the population is divided into different subgroups, which is a necessary step for calculating the fitness of each individual and evaluating its quality. The first step of the strategy is to compute, for each individual $p$, the set $S_p$ of solutions it dominates and the number $n_p$ of individuals that dominate $p$. When an individual's $n_p$ equals 0, it is assigned to the first nondominated level. The strategy then exploits the fact that once an individual's nondominated level is determined, its domination relations no longer need to be considered. Accordingly, for each individual $q$ in the dominated set $S_p$ of a first-level individual $p$, the counter $n_q$ is decreased by 1; after the subtraction, every individual with $n_q = 0$ is assigned to the second nondominated level. The remaining individuals in the evolutionary population find their nondominated levels in the same way.
After each individual in the evolutionary population has found its own nondominated level, the individuals in the lowest level are the closest to the real Pareto front of the problem to be optimized, and are therefore superior to the other individuals in the population. The NSGA2 algorithm selects individuals from lower levels into the next generation with a larger probability than those from higher levels, so that the algorithm evolves toward the real Pareto optimal set of the problem.
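The sorting procedure described above can be sketched as follows. This is an illustrative implementation of Deb's fast nondominated sort, assuming minimization; it is not the paper's own code:

```python
def fast_nondominated_sort(objs):
    """For each individual p, compute the set S[p] of solutions it
    dominates and the count n_dom[p] of solutions dominating it,
    then peel off the nondominated fronts level by level.
    Returns a list of fronts, each a list of population indices."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    n = len(objs)
    S = [[] for _ in range(n)]   # S[p]: indices dominated by p
    n_dom = [0] * n              # n_dom[p]: how many dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(objs[p], objs[q]):
                S[p].append(q)
            elif dominates(objs[q], objs[p]):
                n_dom[p] += 1
        if n_dom[p] == 0:        # nobody dominates p: first level
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:       # release p's dominated individuals
                n_dom[q] -= 1
                if n_dom[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]           # drop the trailing empty front
```

The double loop makes the sort O(mN^2) in the number of objectives m and the population size N, which is the complexity usually quoted for NSGA2.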

3.1.2. Crowding Distance Function

The purpose of a multiobjective optimization algorithm is not only to find a Pareto optimal set close to the real one, but also to find solutions with a good distribution within that set, so that better and more reasonable choices can be provided to the decision maker. In other words, the algorithm should find a series of uniformly distributed solutions for the decision makers to choose from. In order to obtain a well-distributed solution set, different evolutionary algorithms adopt different treatment measures; most of them evaluate the distribution of each individual by constructing a function that considers how the individuals are distributed. The NSGA2 algorithm introduces the crowding distance function to measure the congestion degree between each individual and the others, and a certain strategy is used to avoid local aggregation of the solution set. The crowding distance function of the NSGA2 algorithm can be expressed as
\[
PCD_i = \frac{1}{m} \sum_{j=1}^{m} \frac{f_j(i+1) - f_j(i-1)}{f_j^{\max} - f_j^{\min}}, \tag{6}
\]
where $PCD_i$ represents the $i$-th individual's crowding distance value, $f_j(i)$ represents the $i$-th individual's $j$-th objective function value (with the individuals sorted by the $j$-th objective), and $f_j^{\max}$ and $f_j^{\min}$ respectively represent the maximum and minimum values of the $j$-th objective in the solution set. From this expression it can be seen that the PCD is the mean, over the objectives, of the normalized distance between an individual's two neighbors.

When calculating the PCD value of each individual, note that a boundary point's crowding distance contribution for each objective is set to 1. From the PCD expression we can see that each individual's PCD value ranges from 0 to 1. The purpose of introducing the PCD concept into the NSGA2 algorithm is to evaluate the congestion among individuals within the same nondominated level: a bigger PCD value tells the algorithm that the corresponding individual is more sparsely surrounded by other individuals. By preferring individuals with large PCD values, the individuals on the Pareto front achieve a good distribution in the Pareto optimal set, which guarantees the evenness and diversity of the evolutionary population in the NSGA2 algorithm.
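The PCD computation described above can be sketched as follows. This follows the normalized-mean variant of the text (boundary contribution 1 per objective, values in [0, 1]); note that the canonical NSGA2 formulation instead assigns an infinite boundary value and sums rather than averages, so this is an interpretation of the paper's description:

```python
def crowding_distance(objs):
    """Normalized crowding distance (PCD): for each objective,
    sort the front, give the two boundary points a contribution
    of 1, and give interior points the normalized gap between
    their two neighbours; the final value is the mean over the
    objectives, so it lies in [0, 1]."""
    n, m = len(objs), len(objs[0])
    d = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: objs[i][j])
        span = objs[order[-1]][j] - objs[order[0]][j] or 1.0
        d[order[0]] += 1.0      # boundary points: 1 per objective
        d[order[-1]] += 1.0
        for k in range(1, n - 1):
            gap = objs[order[k + 1]][j] - objs[order[k - 1]][j]
            d[order[k]] += gap / span
    return [di / m for di in d]  # mean over objectives
```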

3.1.3. Environmental Selection Based on PCD

In a multiobjective evolutionary algorithm, the individuals evolve toward the optimal solution set depending on a certain evolution strategy. In the NSGA2 algorithm, the evolutionary strategy first uses the fast nondominated sorting strategy to determine each individual's nondominated level, and then compares the nondominated levels and PCD values of individuals from the current and former populations to determine the elite individuals, which are saved into the next generation; in this way one evolution step of the NSGA2 algorithm is completed. The selection of individuals into the next generation follows two rules: if the nondominated level of an individual $i$ is lower (better) than that of an individual $j$, then $i$ is chosen for the next generation; if the two nondominated levels are equal, then the PCD values of the two individuals are compared, and the one with the larger PCD value is selected into the next generation of the evolutionary population. This environmental selection strategy can be expressed in the following mathematical form:
\[
i \prec_n j \iff \big(\mathrm{rank}_i < \mathrm{rank}_j\big) \vee \big(\mathrm{rank}_i = \mathrm{rank}_j \wedge PCD_i > PCD_j\big). \tag{7}
\]

Through the NSGA2 algorithm's environmental selection strategy, we can see that when two individuals' nondominated levels differ, the individual with the smaller (better) nondominated level is chosen, so that the NSGA2 algorithm preserves the elite of the population; on the contrary, when two individuals' nondominated levels are the same, choosing the individual with the larger crowding distance value ensures the diversity of the population, so that the evolution speed of the NSGA2 algorithm can be accelerated.
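The crowded-comparison operator in (7) reduces to a few lines. This is an illustrative sketch, with lower rank meaning a better nondominated level:

```python
def crowded_compare(rank_a, pcd_a, rank_b, pcd_b):
    """Crowded-comparison operator: individual a is preferred
    over b when its nondominated rank is lower, or the ranks
    tie and its crowding distance (PCD) is larger."""
    if rank_a != rank_b:
        return rank_a < rank_b   # better (lower) front wins
    return pcd_a > pcd_b         # same front: sparser point wins
```

In practice this predicate is what drives the binary tournament selection and the final truncation of the merged population.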

3.1.4. NSGA2 Algorithm Calculation Steps

Step 1. Randomly initialize the initial population $P_1$ and the value of each parameter in the NSGA2 algorithm, set the iteration number $t = 1$, and start the algorithm's iterative calculation.

Step 2. Determine whether the current iteration number $t$ exceeds the maximum iteration number set in Step 1; if it does, end the iterative calculation and output the Pareto optimal set; if not, continue the iterative calculation.

Step 3. Perform the fast nondominated sorting strategy on the evolutionary population $P_t$, and then apply the genetic operations of selection, crossover, and mutation to $P_t$ to form the new generation, which is called $Q_t$ in this paper.

Step 4. Combine the population $P_t$ and the population $Q_t$ into the new mixed population $R_t$, perform the fast nondominated sorting strategy on each individual in $R_t$, and then calculate the crowding distance of each individual in $R_t$ and sort them.

Step 5. Retain the elite individuals as the new generation of the evolutionary population, which we call $P_{t+1}$ in this paper.

Step 6. Add 1 to the iteration number $t$; one iteration is now complete. Go to Step 2 and continue the rest of the iterative calculation.
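The elite-retention part of Steps 4 and 5 can be sketched as follows, assuming the nondominated fronts and crowding distances of the merged population have already been computed by the two routines described earlier. The helper is an illustration of the truncation logic, not the paper's code:

```python
def environmental_selection(fronts, pcd, pop_size):
    """Fill the next population front by front (best front first),
    and truncate the first front that overflows by keeping its
    most widely spaced (largest-PCD) members."""
    selected = []
    for front in fronts:
        if len(selected) + len(front) <= pop_size:
            selected.extend(front)          # the whole front fits
        else:
            # sort the overflowing front by descending PCD
            front = sorted(front, key=lambda i: pcd[i], reverse=True)
            selected.extend(front[:pop_size - len(selected)])
            break
    return selected
```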

3.2. Improvement Strategy

Although the NSGA2 algorithm is widely and successfully used in many fields as an excellent algorithm, it still has drawbacks to be settled. In recent years, with heuristic search algorithms being widely applied, many researchers have found that evolutionary algorithms often suffer from poor local search ability, which is very important for an algorithm's computational performance. An algorithm with better local search ability can seek the optimal solution of the problem in a more precise way; a better individual can often be found by searching around a nondominated individual, so searching the neighborhoods of the nondominated individuals is needed. Studies show that the local search ability of the original NSGA2 algorithm is poor; in particular, when the population approaches the Pareto front at the end of the evolutionary process, the evolutionary efficiency is greatly reduced. To enhance the local search ability, a local search strategy is added into the NSGA2 algorithm after the next evolutionary population is generated. The local search strategy explores the neighborhood of an individual of the evolutionary population to determine whether a better-performing solution can be found. There are two types of better-performing solutions: the new individual dominates the original one; or the new and original individuals do not dominate each other, but the fitness of the new individual is better than that of the original one. So for an individual $x$, a new individual $x_{\text{new}}$ is searched for in its neighborhood; if $x_{\text{new}}$ dominates $x$ or its fitness is better, $x$ is replaced by $x_{\text{new}}$; otherwise the local search proceeds.

The specific steps of the local search strategy are as follows.

Step 1. Set the local search iteration counter $t = 1$ and start the local search calculation.

Step 2. For each solution $x$ in the Pareto optimal set, generate a new feasible solution $x_{\text{new}}$ in its neighborhood and calculate its fitness.

Step 3. If $x_{\text{new}}$ dominates $x$, or the two do not dominate each other but $x_{\text{new}}$'s fitness is better, then replace $x$ with $x_{\text{new}}$; if not, keep $x$ unchanged.

Step 4. Set $t = t + 1$.

Step 5. If $t$ exceeds the maximum local search iteration number, stop the local search calculation and return the Pareto optimal set to the NSGA2 algorithm; if not, continue the local search calculation.
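The steps above can be sketched for one Pareto-set member as follows. The paper does not specify how neighbors are generated, so the uniform perturbation radius and the dominance-only acceptance test here are illustrative assumptions:

```python
import random

def local_search(x, f, evaluate, dominates, steps=10,
                 radius=0.1, bounds=None):
    """Local search sketch: repeatedly perturb a Pareto-set
    solution x within a small neighbourhood (an assumed uniform
    box of half-width `radius`) and accept the neighbour when it
    dominates the incumbent. Only the iteration count comes from
    the paper; the rest is an illustrative choice."""
    for _ in range(steps):
        x_new = [xi + random.uniform(-radius, radius) for xi in x]
        if bounds is not None:            # clip to the feasible box
            x_new = [min(max(xi, lo), hi)
                     for xi, (lo, hi) in zip(x_new, bounds)]
        f_new = evaluate(x_new)
        if dominates(f_new, f):           # accept improving neighbour
            x, f = x_new, f_new
    return x, f
```

In the full M-NSGA2 this routine would be applied to every member of the Pareto optimal set after each generation, with the per-solution budget set to 10 as in Section 4.1.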

4. Simulation Works

4.1. Parameter Selection

There are many parameters in the NSGA2 algorithm, such as the population size, the number of iterations, the crossover probability, and the mutation probability. The values of these parameters have a great influence on the NSGA2 algorithm's computational performance, so parameter selection becomes a very important problem when using the algorithm. For a complicated multiobjective problem, a small population size cannot help the NSGA2 algorithm achieve an ideal Pareto optimal set, while a very large population size slows down the calculation even though it can yield a good Pareto optimal set. How to select the population size according to the complexity of the problem therefore remains a question for further research; at present, the population size is selected from the experience of the users, who adjust its value step by step until the Pareto optimal set is stable. The NSGA2 algorithm is a variation of the genetic algorithm, and the two have much in common even though the former is a multiobjective algorithm and the latter a single objective one; in particular, NSGA2 also calculates iteratively, so an important parameter that must be set first is the number of iterations. A very large iteration number can help the NSGA2 algorithm obtain a very good Pareto optimal set, but it also enlarges the calculation time, which is bad for the algorithm users; a small iteration number cannot guarantee an ideal Pareto optimal set, although it makes the calculation very fast. Many researchers also adjust the iteration number step by step: when the Pareto optimal set no longer changes much, the current value can be used as the final value of this parameter.
Genetic manipulation includes the selection, crossover, and mutation operations [6, 7], which are very important in the NSGA2 algorithm: the genetic operations are the main means of evolution and directly determine the performance of the outcome. The genetic operations contain several parameters that change the evolutionary performance, so the genetic parameters are also key to the NSGA2 algorithm's computational performance. In the NSGA2 algorithm, tournament selection is used in the selection process; its parameter, the tournament size, determines the selection pressure: the bigger the value, the bigger the selection pressure. In the NSGA2 algorithm, the tournament size is suggested to be set to 2. The crossover process uses simulated binary crossover to cross two individuals, with a parameter called the crossover distribution index. The value of the crossover distribution index determines how likely the offspring are to resemble their parents: the bigger the index, the more similar offspring and parent individuals tend to be. This paper sets the crossover distribution index to 20. In the mutation process, polynomial mutation is used, with a parameter called the mutation distribution index, which behaves much like the crossover distribution index; in this paper its value is also set to 20. As the local search strategy is added into the NSGA2 algorithm, the local search iteration number must also be set; in this paper it is set to 10.
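The two variation operators mentioned above can be sketched as follows. These follow the commonly published forms of simulated binary crossover (SBX) and polynomial mutation with distribution indices of 20; the paper does not give the operator formulas, so the details are assumptions:

```python
import random

def sbx_crossover(p1, p2, eta=20.0):
    """Simulated binary crossover: a larger distribution index eta
    makes the children more likely to resemble their parents
    (the text uses eta = 20). Preserves the parents' mean per gene."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2

def polynomial_mutation(x, bounds, eta=20.0, pm=0.1):
    """Polynomial mutation with distribution index eta (set to 20
    as in the text); each gene mutates with probability pm and is
    clipped back into its bounds."""
    y = []
    for xi, (lo, hi) in zip(x, bounds):
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            xi = min(max(xi + delta * (hi - lo), lo), hi)
        y.append(xi)
    return y
```

A useful property of SBX, visible in the formulas, is that `c1[i] + c2[i] == p1[i] + p2[i]` for every gene, so the operator spreads offspring symmetrically around the parents.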

4.2. Simulation

In order to study the effect of the improvement on NSGA2, this paper selects some classic multiobjective problems for simulation. The first problem is the KUR (Kursawe) problem, whose mathematical formulation is as follows:
\[
\begin{aligned}
\min f_1(x) &= \sum_{i=1}^{2} \Big[-10 \exp\big(-0.2 \sqrt{x_i^2 + x_{i+1}^2}\big)\Big], \\
\min f_2(x) &= \sum_{i=1}^{3} \Big[\lvert x_i \rvert^{0.8} + 5 \sin\big(x_i^3\big)\Big],
\end{aligned}
\]
where $x_1$, $x_2$, and $x_3$ are the control variables, and their domains are all $[-5, 5]$.
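As a cross-check of the formulation above, the KUR objectives can be evaluated directly. This sketch assumes the standard Kursawe definition with three variables in [-5, 5]:

```python
import math

def kur(x):
    """KUR (Kursawe) test problem, n = 3 variables in [-5, 5]."""
    f1 = sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(len(x) - 1))
    f2 = sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi ** 3) for xi in x)
    return [f1, f2]
```

For example, at the origin both exponential terms equal 1, so the first objective reaches its minimum of -20 while the second is 0.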

Figure 1 shows the Pareto fronts obtained by NSGA2 and M-NSGA2 on this problem. From Figure 1 we can see the difference between the two Pareto fronts: the distribution of M-NSGA2's Pareto front is more uniform than that of NSGA2. Furthermore, M-NSGA2's Pareto front is closer to the coordinate origin, so M-NSGA2 obtains a better Pareto optimal set than the original algorithm.

To further explain the difference between the two outcomes, we can conclude that it is precisely because of the local search strategy: whenever the NSGA2 algorithm finds a Pareto set solution in an iteration, the local search can begin a deeper and more accurate search for a better solution, and this is why M-NSGA2's Pareto set is closer to the coordinate origin than that of the original algorithm.

The second simulation case is another classic multiobjective problem, the ZDT3 problem, which can be mathematically expressed as follows:
\[
\begin{aligned}
\min f_1(x) &= x_1, \\
\min f_2(x) &= g(x) \left[1 - \sqrt{\frac{f_1(x)}{g(x)}} - \frac{f_1(x)}{g(x)} \sin\big(10 \pi f_1(x)\big)\right], \\
g(x) &= 1 + \frac{9}{n - 1} \sum_{i=2}^{n} x_i,
\end{aligned}
\]
where $n = 30$ and $x_i \in [0, 1]$ for $i = 1, \ldots, n$.
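The ZDT3 objectives can likewise be evaluated directly. This sketch assumes the standard ZDT3 definition with n = 30 variables in [0, 1]; the sine term in the second objective is what makes the true Pareto front disconnected and the problem a harder test case:

```python
import math

def zdt3(x):
    """ZDT3 test problem: n variables in [0, 1] (n = 30 in the
    standard setting); the Pareto front is disconnected."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    h = 1.0 - math.sqrt(f1 / g) - (f1 / g) * math.sin(10.0 * math.pi * f1)
    return [f1, g * h]
```

On the true Pareto front g(x) = 1, i.e. all variables except the first are 0, so the quality of an approximation can be judged by how close its g values are to 1.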

The comparison of the two NSGA2 algorithms on the ZDT3 problem is shown in Figure 2, from which we can clearly see the difference between the two Pareto fronts: the Pareto front of the M-NSGA2 algorithm dominates that of the original algorithm, which means the former is closer to the real Pareto front of the ZDT3 problem. Comparing this case's results with the first case, we can also see that the more complex the problem is, the more obvious the superiority of M-NSGA2 becomes. From this comparison we can conclude that the modification in the M-NSGA2 algorithm improves the optimization effect of the NSGA2 algorithm.

5. Conclusion

This paper introduced a new improvement strategy for the NSGA2 algorithm, which uses a local search to seek better solutions around the solutions already found by NSGA2. We first reviewed the basic NSGA2 algorithm from three different but important parts: fast nondominated sorting, the crowding distance function, and environmental selection based on PCD. After that, we presented the calculation steps of the NSGA2 algorithm for the multiobjective problem. We then introduced the local search improvement to the basic NSGA2 algorithm, forming the M-NSGA2 algorithm. Finally, in order to test the effect of the improvement, we used two multiobjective problems to compare the two algorithms; the comparison figures show the difference between the two algorithms' calculation results.

Disclosure

Ruihua Wang is a Ph.D. candidate in School of Traffic and Transportation of Beijing Jiaotong University and a lecturer in Management Engineering Department of Zhengzhou University.

Competing Interests

The author declares that there are no competing interests.