Journal of Robotics

Volume 2018, Article ID 9697104, 11 pages

https://doi.org/10.1155/2018/9697104

## An Improved Multiobjective Algorithm: DNSGA2-PSA

School of Science, Southwest Petroleum University, Sichuan, Chengdu 610500, China

Correspondence should be addressed to Xianfeng Ding; moc.361@dxxf

Received 13 June 2018; Accepted 31 July 2018; Published 2 September 2018

Academic Editor: L. Fortuna

Copyright © 2018 Dan Qu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In general, proximity to the Pareto front and diversity along the front are of equal importance for solving multiobjective optimization problems (MOPs). However, most existing evolutionary algorithms give priority to proximity over diversity. To improve the diversity and decrease the execution time of the nondominated sorting genetic algorithm II (NSGA-II), an improved algorithm is presented in this paper, which adopts a new vector ranking scheme to decrease the overall runtime and utilizes the Part and Select Algorithm (PSA) to maintain diversity. In this algorithm, a more efficient implementation of nondominated sorting, namely, the dominance degree approach for nondominated sorting (DDA-NS), is adopted. Moreover, an improved diversity preservation mechanism is used to select a well-diversified subset out of an arbitrarily given set. By embedding PSA and DDA-NS into NSGA-II, yielding an algorithm denoted DNSGA2-PSA, the overall runtime is decreased significantly and the exploitation of diversity is enhanced. Computational experiments show that combining both DDA-NS and PSA with NSGA-II is better than using either in isolation, and that DNSGA2-PSA still performs well in high-dimensional cases.

#### 1. Introduction

In the past 30 years, evolutionary multiobjective optimization (EMO) has been popular in research and application [1–10], and many multiobjective evolutionary algorithms (MOEAs) have been presented [11–25]. The nondominated sorting genetic algorithm (NSGA), one of the first MOEAs, was introduced in [12]. Then an improved version of NSGA, called NSGA-II [13], was proposed. To improve the performance of NSGA-II, S. Salomon et al. [26, 27] introduced a diversity preservation mechanism based on a partitioning algorithm for selection. It was shown that this procedure significantly enhances the exploitation of diversity when embedded into NSGA-II [27]. Moreover, P. Mohapatra and S. Roy [28] proposed a new method called AP-NSGA-II. It employs the machinery of NSGA-II but works with a set of average-point-based techniques to maintain diversity among solutions; the proposed AP-NSGA-II can maintain diversity to some degree. Then, X. Y. Pan and J. Zhu [29] proposed an improved algorithm, LDMNSGA-II. The algorithm adopts Latin hypercube sampling to ensure a uniform distribution of the initial population, and it replaces the crossover operator of NSGA-II with a differential evolution operator to enhance local search ability and search accuracy. The results demonstrate that the proposed algorithm achieves good overall performance on multiobjective optimization. Concerning the execution time of NSGA-II, many scholars have carried out research on the algorithm [30–34]. It is worth noting that Y. R. Zhou et al. [35] introduced the dominance degree matrix for a vector set, together with a fast method for constructing this new data structure. Using the dominance degree matrix, they developed an efficient implementation of nondominated sorting called the dominance degree approach for nondominated sorting (DDA-NS). Empirical results demonstrated that DDA-NS outperforms other typical approaches for nondominated sorting and that DDA-NS works well for solving multiobjective problems. For these reasons, we borrow the ideas of the dominance degree matrix and the PSA method to improve the performance of NSGA-II, so that it combines the advantages of reduced runtime and improved diversity.

This paper is organized as follows. Section 1 provides an introduction; Section 2 introduces the DDA-NS method and PSA. Section 3 describes a straightforward approach for integrating DDA-NS and PSA into NSGA-II. The performance comparison is shown in Section 4. Finally, the paper ends with some conclusions in Section 5.

#### 2. Basic Ideas and Concepts

Mathematically, a multiobjective optimization problem (MOP) is described as

$$\min_{\mathbf{x} \in \Omega} F(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \dots, f_m(\mathbf{x}))$$

Here $\mathbf{x}$ is the decision vector of dimension $n$, $F$ is the objective function, and $\Omega \subset \mathbb{R}^n$ is the decision space (DS). The image set, $F(\Omega) \subset \mathbb{R}^m$, is called the objective space (OS). A decision vector $\mathbf{x}^* \in \Omega$ is called a Pareto optimal solution if its objective vector $F(\mathbf{x}^*)$ is not dominated by the objective vector of any other decision vector in $\Omega$. All the Pareto optimal solutions constitute the Pareto optimal set. The Pareto optimal front is the set of images of the Pareto optimal set.
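As a concrete illustration of the dominance relation used throughout this paper, a minimal Pareto dominance test for minimization can be sketched in Python as follows (the function name `dominates` and the toy vectors are ours, not from the paper):

```python
def dominates(y, z):
    """Return True if objective vector y Pareto-dominates z (minimization):
    y is no worse than z in every objective and strictly better in at least one."""
    return (all(yi <= zi for yi, zi in zip(y, z))
            and any(yi < zi for yi, zi in zip(y, z)))

# With two objectives: (1, 2) dominates (2, 2), but (1, 2) and (2, 1)
# are mutually nondominated.
print(dominates((1, 2), (2, 2)))  # True
print(dominates((1, 2), (2, 1)))  # False
```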

##### 2.1. DDA-NS

We first introduce some basic notions of DDA-NS which was introduced in [35].

###### 2.1.1. Dominance Degree

Let $DD(\mathbf{y}, \mathbf{z}) = \left|\{\, i \mid y_i \le z_i,\ 1 \le i \le m \,\}\right|$, which is called the dominance degree of $\mathbf{y}$ to $\mathbf{z}$ and represents the degree of dominance of $\mathbf{y}$ over $\mathbf{z}$, where $|\cdot|$ denotes the cardinality of a set.

It is obvious to find the following:

If $DD(\mathbf{y}, \mathbf{z}) = m$, then $\mathbf{y} \preceq \mathbf{z}$, which means that $DD(\mathbf{y}, \mathbf{z})$ reaches its maximum value $m$ when $\mathbf{y} \prec \mathbf{z}$ or $\mathbf{y} = \mathbf{z}$.

If $\mathbf{y} \preceq \mathbf{z}$, then $\mathbf{y} \prec \mathbf{z}$ or $\mathbf{y} = \mathbf{z}$.
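Under this definition, the dominance degree is a simple count; the following minimal sketch (our own naming, not the implementation of [35]) illustrates the properties above:

```python
def dominance_degree(y, z):
    """DD(y, z): number of objectives in which y is no worse than z (minimization)."""
    return sum(1 for yi, zi in zip(y, z) if yi <= zi)

m = 3
y, z = (1, 2, 3), (2, 2, 4)
print(dominance_degree(y, z))  # 3 == m, so y weakly dominates z
print(dominance_degree(z, y))  # 1 <  m, so z does not dominate y
```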

###### 2.1.2. Dominance Degree Matrix

Let $Y = \{\mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_N\}$ be a set of objective vectors, where each $\mathbf{y}_i$ is an $m$-dimensional vector. Define the dominance degree matrix $D$ on the set $Y$ for the dominance relation by

$$D_{ij} = DD(\mathbf{y}_i, \mathbf{y}_j), \quad i, j = 1, 2, \dots, N$$

A fast method for calculating the dominance degree matrix consists of the following two main steps:

*Step 1.* Sort the objective vectors on every objective by using Quicksort [36].

*Step 2.* Construct the dominance degree matrix by using the sorted results acquired in Step 1.

Firstly, we construct the comparison matrix of a row vector $\mathbf{w} = (w_1, w_2, \dots, w_N)$ (the values of one objective over the whole set), defined by

$$C^{\mathbf{w}}_{ij} = \begin{cases} 1, & w_i \le w_j \\ 0, & \text{otherwise} \end{cases}$$

One can see Algorithm 1 in [35] for the calculation of the matrix $C^{\mathbf{w}}$.

Secondly, summing the comparison matrices over all objectives yields the dominance degree matrix of the set of vectors. One can see Algorithm 2 in [35] for the details of this procedure.
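The construction above can be sketched as follows. This is a deliberately direct version (our own variable names, not Algorithms 1 and 2 of [35]) that builds each per-objective comparison matrix by pairwise comparison; [35] obtains the same matrix with fewer objective-value comparisons by sorting each objective first:

```python
def comparison_matrix(w):
    """C[i][j] = 1 if w[i] <= w[j] else 0, for one objective's values w."""
    n = len(w)
    return [[1 if w[i] <= w[j] else 0 for j in range(n)] for i in range(n)]

def dominance_degree_matrix(Y):
    """Sum the comparison matrices of the m objectives: D[i][j] = DD(Y[i], Y[j])."""
    n, m = len(Y), len(Y[0])
    D = [[0] * n for _ in range(n)]
    for k in range(m):
        C = comparison_matrix([y[k] for y in Y])
        for i in range(n):
            for j in range(n):
                D[i][j] += C[i][j]
    return D

Y = [(1, 2), (2, 1), (3, 3)]
print(dominance_degree_matrix(Y))
```

In the resulting matrix, an off-diagonal entry $D_{ij} = m$ indicates that the $i$th vector dominates the $j$th (the diagonal entries always equal $m$).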

Finally, a novel nondominated sorting method is described in the following paragraphs, in which the dominance degree matrix is utilized to assign the solutions of the population $P$ to nondominated fronts.

As we know, a nondominated sorting approach assigns all solutions in the initial population $P$ to nondominated fronts $F_1, F_2, \dots$. The first front $F_1$ consists of the set of nondominated solutions of $P$. The second front $F_2$ is the set of nondominated solutions in the remaining set (i.e., $P$ with all solutions assigned to $F_1$ removed), and the procedure is repeated for the subsequent fronts.

Assume that $D$ denotes the dominance degree matrix of the population $P$ of size $N$. In order to eliminate the influence of identical individuals, the corresponding elements of $D$ are set to 0. Let $Y_{\max}$ be the row vector consisting of the maximum element of each column of $D$. It can be verified that, for any $\mathbf{y}_i, \mathbf{y}_j \in P$, $\mathbf{y}_i$ dominates $\mathbf{y}_j$ if and only if $D_{ij}$ equals $m$. Then the solutions corresponding to the elements of $Y_{\max}$ that are less than $m$ are the nondominated solutions of $P$; these nondominated solutions are assigned to the first front $F_1$. Next, the rows and columns of $D$ corresponding to $F_1$ are deleted, and the remaining matrix is denoted by $D'$. The same procedure is continued for $D'$ to obtain the second front $F_2$. This process is repeated until all solutions are assigned to fronts $F_1, F_2, \dots$. Thus, a new nondominated sorting method is attained, which is given as Algorithm 3 in [35].
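The front-extraction loop described above can be sketched as follows (our own simplified rendering of the idea behind Algorithm 3 of [35]; duplicate vectors are not treated here, whereas [35] first zeroes their matrix entries):

```python
def dda_ns_sort(Y):
    """Nondominated sorting of distinct objective vectors Y, driven by the
    dominance degree matrix. Returns a list of fronts of indices into Y."""
    m = len(Y[0])
    # D[i][j] = DD(Y[i], Y[j]): number of objectives where Y[i] is no worse.
    D = [[sum(yi <= zi for yi, zi in zip(y, z)) for z in Y] for y in Y]
    remaining = list(range(len(Y)))
    fronts = []
    while remaining:
        # j is nondominated iff no other remaining solution i has D[i][j] == m.
        front = [j for j in remaining
                 if all(D[i][j] < m for i in remaining if i != j)]
        fronts.append(front)
        remaining = [j for j in remaining if j not in front]
    return fronts

Y = [(1, 2), (2, 1), (3, 3), (2, 2)]
print(dda_ns_sort(Y))  # fronts of indices into Y
```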

The time complexity of the DDA-NS approach is $O(mN^2)$. By using the DDA-NS approach, the number of objective-value comparisons in the nondominated sorting process of NSGA-II can be reduced from $O(mN^2)$ to $O(mN\log N)$ at each iteration of the NSGA-II algorithm.

##### 2.2. Part and Select Algorithm (PSA)

The Part and Select Algorithm (PSA) was put forward by S. Salomon et al. [27]. In this part we give a brief introduction to PSA, which aims to select $p$ well-spread points from a set $A$ of $q$ candidate solutions ($p \le q$). The procedure contains two main steps.

*Step 1.* Divide $A$ into $p$ subsets, so that similar members are grouped in the same subset.

*Step 2.* Construct the diverse subset by selecting one member from each generated subset.

To divide a set into $p$ subsets, PSA carries out successive divisions of one set into two subsets. In each dividing step, the subset with the greatest dissimilarity among all current subsets is divided. The same procedure is continued until the desired stopping criterion is met, which can be either a predefined number of subsets or a maximal dissimilarity within each of the subsets. The dissimilarity of a set is the measure $\Delta(A)$ defined below.

Let $A = \{\mathbf{a}_1, \dots, \mathbf{a}_q\}$ (where $\mathbf{a}_i$ is the objective vector of the $i$th point) and denote

$$\Delta_j(A) = \max_{\mathbf{a} \in A} a_j - \min_{\mathbf{a} \in A} a_j, \qquad \Delta(A) = \max_{1 \le j \le m} \Delta_j(A)$$

Obviously, a larger $\Delta(A)$ indicates a greater dissimilarity among the members of $A$. The pseudocode of PSA is given as Algorithm 1 in [27].

After the set $A$ has been divided into the subsets $A_1, \dots, A_p$, choose from each subset the member closest, in the sense of the Euclidean metric, to the center of the hyperrectangle circumscribing that subset. If there exists more than one member closest to the center, choose one of them randomly. Let $A$ be the original set of $q$ members, from which a subset of $p$ members is to be selected. The complexity of this selection step is $O(mq)$.
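The two PSA stages can be sketched as follows. This is our own simplified rendering of [27], assuming the subset with the greatest dissimilarity is split at the midpoint of its widest coordinate and that the input contains at least $p$ distinct points; the real Algorithm 1 of [27] should be consulted for the exact partitioning rule:

```python
def psa_select(A, p):
    """Select p well-spread vectors from A (assumes A holds >= p distinct points)."""
    def widest(S):
        # Largest coordinate range of S and the coordinate achieving it.
        m = len(S[0])
        ranges = [max(a[k] for a in S) - min(a[k] for a in S) for k in range(m)]
        k = max(range(m), key=lambda j: ranges[j])
        return ranges[k], k

    subsets = [list(A)]
    while len(subsets) < p:
        # Split the subset with the greatest dissimilarity at the midpoint
        # of its widest coordinate.
        s = max(range(len(subsets)), key=lambda i: widest(subsets[i])[0])
        S = subsets.pop(s)
        _, k = widest(S)
        mid = (max(a[k] for a in S) + min(a[k] for a in S)) / 2
        subsets.append([a for a in S if a[k] <= mid])
        subsets.append([a for a in S if a[k] > mid])

    # From each subset, pick the member closest (Euclidean) to the center
    # of its circumscribing hyperrectangle.
    chosen = []
    for S in subsets:
        c = [(max(a[k] for a in S) + min(a[k] for a in S)) / 2
             for k in range(len(S[0]))]
        chosen.append(min(S, key=lambda a: sum((x - y) ** 2 for x, y in zip(a, c))))
    return chosen

A = [(0.0, 0.0), (0.1, 0.1), (1.0, 1.0), (0.9, 1.1), (0.5, 0.4)]
print(psa_select(A, 3))  # one representative from each of 3 well-separated regions
```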

Proposition 1 (see [27]). *Given a set $A$ of $q$ points in $\mathbb{R}^m$, the overall complexity of choosing $p$ elements out of $A$ using PSA and the center point selection is $O(mpq)$.*

#### 3. Improved NSGA-II

NSGA-II is widely applied to solving multiobjective problems (MOPs) [27–30]. However, its nondominated sorting approach usually has a high computational overhead, and the final Pareto approximate set lacks diversity. For these reasons, this paper proposes a variant of the NSGA-II algorithm, denoted DNSGA2-PSA, which takes NSGA-II as the framework and retains the NSGA-II elitist-preserving approach, the crowding distance mechanism, and the binary tournament selection. After the population is initialized, it is ranked by DDA-NS, and then the crowding measure of the individuals in each layer is calculated by PSA, so as to reduce the running time of the nondominated sorting and keep the initial solutions well distributed. When the parent and offspring populations are merged, PSA is used to prune the population size instead of the crowded-comparison operator. The specific changes are as follows.

##### 3.1. Modifying the Nondominated Sorting of NSGA-II

DDA-NS is integrated into the classical NSGA-II to replace the fast nondominated sorting method. Assume that the current population $P_t$ of size $N$ is to be sorted by DDA-NS. Firstly, we use Algorithm 2 in [35] to calculate the dominance degree matrix of the current population $P_t$, denoted as $D$, and then set it as the input to Algorithm 3 in [35]. Finally, the nondominated fronts $F_1, F_2, \dots$ are output.

##### 3.2. Integration of PSA into NSGA-II

Because of the minimal requirements of PSA, the algorithm can be integrated into any MOEA and used as a selection mechanism and/or as a crowding assignment mechanism. The purpose here is to utilize the high competency of PSA in selecting a diversified subset from an arbitrary set in order to enhance diversity along the Pareto front. The difference between DNSGA2-PSA and the classical NSGA-II is that, instead of selecting members from the last front according to the crowded-comparison operator, selection is performed by PSA, as follows. Assume that the combined current population $R_t$ of size 2N is sorted into the nondominated fronts $F_1, \dots, F_l$, where $|F_1 \cup \dots \cup F_{l-1}| < N$ and $|F_1 \cup \dots \cup F_l| \ge N$. Then the next parent population $P_{t+1}$ is constructed from all members of the sets $F_1, \dots, F_{l-1}$ and from $N - |F_1 \cup \dots \cup F_{l-1}|$ members of the last front $F_l$: the last front $F_l$ is partitioned into $N - |F_1 \cup \dots \cup F_{l-1}|$ subsets according to Algorithm 1 in [27], and the central element of each subset is added to $P_{t+1}$. Besides, the crowding assignment of NSGA-II, used for the binary tournament, is modified as well. Each set $F_i$, $i = 1, \dots, l$, is partitioned according to Algorithm 1 of [27] into subsets, and every member of $F_i$ is assigned a crowding measure equal to the number of members in its subset.
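The environmental selection step described above can be sketched schematically as follows; `diversity_select` is a stand-in parameter of ours for the PSA-based choice of central elements (any selector with the same signature works in this sketch):

```python
def environmental_selection(fronts, N, diversity_select):
    """Fill the next parent population front by front; prune the last front
    that overflows N with a diversity-preserving selector (PSA in DNSGA2-PSA)
    instead of NSGA-II's crowded-comparison operator."""
    next_pop = []
    for front in fronts:
        if len(next_pop) + len(front) <= N:
            next_pop.extend(front)
        else:
            next_pop.extend(diversity_select(front, N - len(next_pop)))
            break
    return next_pop

# Toy usage: three fronts of solution indices, capacity N = 4, and a stand-in
# selector that just takes the first k members (PSA would pick a spread subset).
fronts = [[0, 1], [2, 3, 4], [5]]
print(environmental_selection(fronts, 4, lambda front, k: front[:k]))  # [0, 1, 2, 3]
```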

##### 3.3. DNSGA2-PSA Algorithm

DNSGA2-PSA is formed by integrating the existing NSGA-II with DDA-NS and PSA. The procedure of DNSGA2-PSA is depicted in Algorithm 1.