Abstract

Evolutionary algorithms (EAs) are an important instrument for solving multiobjective optimization problems (MOPs). It has been observed that the multiobjective evolutionary algorithm based on decomposition combined with ant colony optimization (MOEA/D-ACO) is very promising for MOPs. However, as the number of optimization objectives increases, the selection pressure is weakened, leading to a significant reduction in the performance of the algorithm. Maintaining the balance between convergence and diversity in many-objective optimization problems (MaOPs) is therefore a significant challenge for MOEA/D-ACO. In this paper, an MOEA/D-ACO with the penalty-based boundary intersection (PBI) method, MOEA/D-ACO-PBI, is proposed to solve MaOPs. The PBI approach decomposes the problem into a number of single-objective subproblems, a weight vector adjustment method based on clustering is proposed, and different pheromone matrices are used to solve the different single-objective subproblems. The solutions are then constructed and the pheromone is updated. Experimental results on both the CF1-CF4 and the C-DTLZ benchmark suites demonstrate the superiority of the proposed algorithm over three state-of-the-art algorithms in terms of both convergence and diversity.

1. Introduction

Over the past few decades, EAs have been successfully applied to various real-world optimization problems with two or three objectives. However, EAs face many challenges when dealing with MOPs that have more than three objectives, commonly referred to as MaOPs [1]. The foremost reason is that the proportion of nondominated solutions in the population rises sharply with an increase in the number of objectives [2], so the Pareto dominance relation cannot provide sufficient selection pressure to ensure that the population evolves [3]. In recent years, more and more multiobjective evolutionary algorithms (MOEAs) have been adapted for solving MaOPs [4].

As system functions become more and more complex, MaOPs have become a hot topic in the evolutionary multiobjective optimization community [5]. EAs have succeeded in solving various optimization problems; however, as the number of optimization objectives increases, the selection pressure is weakened, resulting in a significant reduction in the performance of the algorithm. Maintaining the balance between convergence and diversity in MaOPs is a major challenge for EAs.

At present, MOEAs have made considerable progress in solving MaOPs. The most popular approach is Pareto dominance and modified Pareto dominance, which relaxes the form of Pareto dominance to increase the selection pressure towards the Pareto front; examples include NSGA-III, proposed by Jain and Deb to solve generic MaOPs [6], and fuzzy Pareto dominance [7]. The disadvantage of these methods is that they introduce one or more parameters that need to be tuned heuristically. Indicator-based algorithms are another promising method for MaOPs; they use a single indicator value to guide the evolution of the population, such as the R2 indicator [8] and the hypervolume indicator [9]. The last approach is based on decomposition, which introduces a decomposition strategy and the concept of neighbourhood. One of the most popular algorithms of this kind, the multiobjective evolutionary algorithm based on decomposition (MOEA/D), was proposed by Zhang and Li [10]. MOEA/D uses an aggregation function together with a set of uniformly distributed weight vectors to maintain the convergence and diversity of the solutions [4, 11]. MOEA/D achieves good convergence and diversity with low computational complexity and is thus an effective method.

ACO was originally proposed for single-objective combinatorial optimization problems [12]. Its success in single-objective optimization has led some researchers to develop ACO algorithms for multiobjective optimization problems. Recently, a multiobjective evolutionary algorithm for multiobjective combinatorial optimization based on decomposition and the ant colony algorithm (MOEA/D-ACO) was proposed: Ke et al. combined ACO with the decomposition-based multiobjective evolutionary algorithm to solve the biobjective TSP [13]. MOEA/D-ACO performed much better than BicriterionAnt on all the test instances and successfully solves the biobjective TSP. However, its performance on problems with three or more objectives requires further study. Zangari Souza et al. proposed a parallel implementation of MOEA/D-ACO on a graphics processing unit (GPU) using NVIDIA CUDA to improve efficiency and achieve high-quality results within a reasonable execution time [14].

At present, the MOEA/D-ACO optimization algorithm has been applied in numerous studies, but there are few studies on its further use for solving MaOPs. In this paper, we introduce the PBI method into MOEA/D-ACO to solve MaOPs, and many-objective optimization techniques can be incorporated into our method to maintain the convergence and diversity of the algorithm.

Based on the discussion above, we propose a new algorithm, MOEA/D-ACO-PBI, designed to solve MaOPs. This paper mainly focuses on promoting the convergence and diversity of the algorithm in many-objective optimization.

The remainder of this paper is organized as follows. Section 2 discusses the related work and background. The MOEA/D-ACO-PBI algorithm is described in Section 3. Section 4 introduces the experimental setup, and Section 5 provides the experimental studies that demonstrate the efficiency of the proposed method, along with some discussion. Finally, Section 6 concludes the paper and points out some future research directions.

2. Background

2.1. Basic Definitions

A general MaOP can be defined as follows:

minimize F(x) = (f_1(x), f_2(x), …, f_m(x)), subject to x ∈ Ω,

where x = (x_1, x_2, …, x_n) is the n-dimensional decision variable vector, Ω is the decision space, and F consists of m objective functions f_i(x) (i = 1, 2, …, m) with m ≥ 4. R^m is called the objective space.

Given two decision vectors x, y ∈ Ω, x is said to Pareto dominate y (x ≺ y) if f_i(x) ≤ f_i(y) for all i ∈ {1, …, m} and f_j(x) < f_j(y) for at least one index j. A solution x* ∈ Ω is Pareto optimal if there is no x ∈ Ω such that x ≺ x*. F(x*) is called a Pareto optimal objective vector. The set of all Pareto optimal objective vectors is called the Pareto front (PF), and the set of all Pareto optimal solutions is called the Pareto optimal set (PS).
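The dominance relation above translates directly into code. The following minimal Python check (function name ours; minimization assumed) tests whether one objective vector Pareto-dominates another:

```python
def dominates(fx, fy):
    """Return True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))
```

For example, dominates((1, 2), (2, 3)) is True, while (1, 2) and (2, 1) are mutually nondominated.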

2.2. MOEA/D

MOEA/D is one of the most popular multiobjective evolutionary algorithms of recent years. It first initializes N uniformly and widely distributed weight vectors, and each individual evolves in the direction of one vector, which is equivalent to decomposing the multiobjective optimization problem into N scalar optimization subproblems. Through continual evolution, the optimal solutions of these N subproblems can be obtained simultaneously. The algorithm offers three aggregation methods: weighted sum aggregation, the Tchebycheff approach, and the PBI approach.

The traditional weighted sum aggregation is the most direct way to deal with multiobjective optimization problems and is also one of the earliest methods used. It combines, or aggregates, all the optimized subobjectives into a single objective, so that the multiobjective optimization problem is converted into a single-objective optimization problem.

The expression of weighted sum aggregation is

minimize g^ws(x | λ) = Σ_{i=1}^{m} λ_i f_i(x), subject to x ∈ Ω,

where λ = (λ_1, …, λ_m) is a weight vector with λ_i ≥ 0 and Σ_{i=1}^{m} λ_i = 1.

The weighted sum aggregation method generates a set of different Pareto optimal solutions by using different weight vectors in the above scalar optimization problem. However, when the Pareto front is nonconvex, this method cannot obtain every Pareto optimal vector.
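As a minimal sketch (function name ours), the weighted sum scalarization reduces to a dot product between the weight vector and the objective values:

```python
def weighted_sum(f, lam):
    """g_ws(x | lambda): aggregate objective values f = F(x) with weights lam."""
    return sum(l * fi for l, fi in zip(lam, f))
```

For f = (1, 2) and λ = (0.5, 0.5) this gives 1.5; minimizing this scalar over x yields one Pareto optimal solution per weight vector on convex fronts.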

The expression of the Tchebycheff approach is

minimize g^te(x | λ, z*) = max_{1 ≤ i ≤ m} { λ_i |f_i(x) − z*_i| }, subject to x ∈ Ω,

where z* = (z*_1, …, z*_m) is the ideal objective vector.

For each Pareto optimal solution x* there always exists a weight vector λ such that x* is an optimal solution of the above scalar problem, and each optimal solution of the scalar problem is a Pareto optimal solution of the original multiobjective problem. By modifying the weight vector, different solutions on the Pareto front can be obtained. One disadvantage of this approach is that its aggregation contours are not smooth when dealing with continuous multiobjective problems.
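A corresponding sketch of the Tchebycheff scalarization (function name ours; z* is the ideal point from the definition above):

```python
def tchebycheff(f, lam, z_star):
    """g_te(x | lambda, z*): weighted Chebyshev distance of f = F(x) to the
    ideal point z*; works on nonconvex fronts, unlike the weighted sum."""
    return max(l * abs(fi - zi) for l, fi, zi in zip(lam, f, z_star))
```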

The PBI method aims to find the intersections of the uppermost boundary (the Pareto front) and a set of lines. If the lines are evenly distributed, the resulting intersections can be seen as providing a good approximation of the entire Pareto optimal boundary. This method can also handle problems with a nonconvex Pareto optimal boundary. The PBI method is described in detail later.

2.3. MOEA/D-ACO

The basic framework of MOEA/D-ACO first decomposes the MOP into N single-objective subproblems using N weight vectors λ^1, …, λ^N. Then, MOEA/D-ACO utilizes N ants to solve these single-objective subproblems, with ant i responsible for subproblem i. Finally, MOEA/D-ACO uses the concepts of neighbourhood and group for decomposing the MaOP. Figure 1 illustrates this framework.

Firstly, MOEA/D-ACO decomposes the MOP into N single-objective subproblems by choosing N weight vectors (λ^1, …, λ^N). Subproblem i is associated with weight vector λ^i, and its objective function is denoted g^i. MOEA/D-ACO utilizes N ants for solving these single-objective subproblems, with ant i responsible for subproblem i. Both the weighted sum and Tchebycheff approaches are used in MOEA/D-ACO, giving two corresponding variants. Recently, however, Deb et al. suggested that the PBI decomposition method is more suitable for problems with many objectives.

The MOEA/D-ACO procedure is shown in Algorithm 1. In each generation, each ant first constructs a new solution. The newly constructed solutions are then used to update the external archive EP, and an ant updates the pheromone matrix of its group if it has found a new nondominated solution. Next, each ant updates its current solution if there is a new solution that is better than its current one and has not been used for updating any other old solution. Finally, the termination condition is checked.

(1) Initialization: generate solutions by weight vectors
(2) Decomposition by some aggregation approach
(3) Solution construction
(4) Update of EP
(5) Update of pheromone
(6) Check the solutions in the neighbourhood and update the solutions
(7) Termination: judge and output EP (a set of high-quality solutions)
2.4. PBI Method

In our paper, we use the PBI approach because of its promising performance for many-objective optimization problems. The optimization problem of the PBI approach is defined as

minimize g^pbi(x | λ, z*) = d1 + θ d2,

where

d1 = ||(F(x) − z*)^T λ|| / ||λ||,
d2 = ||F(x) − (z* + d1 λ / ||λ||)||.

Here z* = (z*_1, …, z*_m) is the ideal objective vector with z*_i < min_x f_i(x), and θ is a penalty parameter predefined by the user. Figure 2 presents an example to illustrate d1 and d2 of a solution x. In the PBI approach, d1 is used to measure the convergence of x towards the PF, while d2 is a measure of population diversity; g^pbi is thus a composite measure of x for both convergence and diversity. The balance between d1 and d2 is controlled by the parameter θ. In our paper, we set θ = 5.0 for the empirical studies.
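A minimal sketch of the PBI scalarization (function name ours), computing d1 as the projection of F(x) − z* onto the weight direction and d2 as the perpendicular distance to that direction:

```python
import math

def pbi(f, lam, z_star, theta=5.0):
    """g_pbi(x | lambda, z*) = d1 + theta * d2 for objective vector f = F(x)."""
    diff = [fi - zi for fi, zi in zip(f, z_star)]
    norm_lam = math.sqrt(sum(l * l for l in lam))
    # d1: distance along the weight direction (convergence)
    d1 = abs(sum(d * l for d, l in zip(diff, lam))) / norm_lam
    # d2: perpendicular distance to the weight direction (diversity)
    d2 = math.sqrt(sum((d - d1 * l / norm_lam) ** 2
                       for d, l in zip(diff, lam)))
    return d1 + theta * d2
```

For a point lying exactly on its weight direction, d2 = 0 and the value reduces to d1.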

2.5. Cluster-Based Weight Vector Adjustment

For MaOPs, this paper proposes a weight vector adjustment method based on clustering and uses different pheromone matrices to solve different single-objective subproblems. This scalable approach can help the ant colony algorithm to solve MaOPs better.

After the multiobjective problem is decomposed into N single-objective subproblems by the PBI method, N ants are employed to solve these subproblems. Ant i solves subproblem i, which is associated with weight vector λ^i; the corresponding objective function is g^i. The N ants are divided into K groups according to the distances between their corresponding weight vectors, and each group approaches a small area of the Pareto front. The ants in the same group share a pheromone matrix that holds the learned location information of that subregion of the Pareto front, while each ant looks for a particular point on the Pareto front for its single-objective subproblem. Each ant also has its own heuristic information matrix to store a priori knowledge of its subproblem. Ant i's neighbourhood B(i) contains the T ants whose weight vectors are closest to λ^i among the N weight vectors; since i ∈ B(i), ant i is its own neighbour.

In each iteration of the algorithm, if the nondominated solution set remains unchanged for the first time, the current weight vectors can no longer meet the need of guiding population evolution, and weight adjustment is needed. In this case, the nondominated population is partitioned by the clustering method. For example, assuming that there are 7 individuals in the objective space (x1, x2, …, x7), they may form three clusters, {x3, x5}, {x2, x6}, and {x1, x7, x4}, among which x3, x6, and x7 are the cluster centres, as shown in Figure 3.
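The clustering step can be sketched with a plain k-means pass over the nondominated objective vectors (a stand-in for the paper's unspecified clustering procedure; the function name and parameters are ours):

```python
import numpy as np

def cluster_centres(points, k, iters=20, seed=0):
    """Plain k-means: partition objective vectors into k clusters and return
    (centres, labels); the centres guide the adjusted weight vectors."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    centres = pts[rng.choice(len(pts), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each point to its nearest centre (Euclidean distance)
        labels = np.argmin(((pts[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        # recompute each centre as the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centres[j] = pts[labels == j].mean(axis=0)
    return centres, labels
```

In the 7-individual example above, k = 3 would recover the three clusters, with the centres playing the role of x3, x6, and x7.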

3. Algorithm Description

3.1. MOEA/D-ACO-PBI

In this paper, we propose an algorithm called MOEA/D-ACO-PBI. The original MOEA/D-ACO used the weighted sum and Tchebycheff methods to solve the problem. However, this is problematic because the correlation between the algebraic and geometric interpretations of the weight sets does not hold [15]. Recent studies [16] have shown that MOEA/D-PBI is a better method for many-objective optimization problems. Therefore, we introduce the PBI approach into MOEA/D-ACO, yielding MOEA/D-ACO-PBI. The MOEA/D-ACO-PBI algorithm is presented in Algorithm 2.

(1) Initialization
(2) Decomposition by PBI approach
(3) Solution construction
(4) Update of EP
(5) Update of pheromone
(6) Update the solutions
(7) Termination
3.2. Initialization Procedure

Each ant i has a heuristic information matrix η^i, each group j has a pheromone matrix τ^j, and each individual solution is a tour represented by a permutation.

The initialization procedure of MOEA/D-ACO-PBI contains six main steps.

Setting of N and λ: the set of weight vectors λ^1, …, λ^N is generated by a systematic approach developed from Das and Dennis's method [17]. Here, the number of weight vectors is N = C(H + m − 1, m − 1),

where H > 0 is the number of divisions considered along each objective coordinate. Accordingly, each subproblem has a weight vector λ = (λ_1, …, λ_m) that satisfies λ_i ∈ {0, 1/H, …, H/H} and Σ_{i=1}^{m} λ_i = 1, where m is the number of objectives.
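This systematic weight generation can be sketched as a simplex-lattice enumeration (function name ours), producing all C(H + m − 1, m − 1) vectors with components in {0, 1/H, …, H/H} summing to 1:

```python
from itertools import combinations

def das_dennis(m, H):
    """Enumerate all m-dimensional weight vectors on the simplex lattice
    with step 1/H (Das and Dennis's systematic approach)."""
    weights = []
    # stars-and-bars: choose m-1 divider positions among H+m-1 slots;
    # the gap sizes are the integer numerators of one weight vector
    for dividers in combinations(range(H + m - 1), m - 1):
        prev, parts = -1, []
        for d in dividers:
            parts.append(d - prev - 1)
            prev = d
        parts.append(H + m - 2 - prev)
        weights.append(tuple(p / H for p in parts))
    return weights
```

For m = 3 objectives and H = 2 divisions this yields C(4, 2) = 6 weight vectors, including the three unit vectors.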

Initialize the solutions: generate an initial solution x^i for each subproblem i and set FV^i = F(x^i), for i = 1, …, N.

Set groups: the number of groups is K, a parameter that needs to be selected. The subproblems are equally distributed among the groups by clustering the weight vectors in terms of the Euclidean distance.

Initialize the pheromone matrices: set an initial pheromone matrix τ^j for each group j = 1, …, K. The motivation is to encourage the search to focus on exploration in its early stage.

Initialize the heuristic information: set η^i, the heuristic information matrix for subproblem i, for i = 1, …, N.

Initialize EP, the external archive that stores the high-quality solutions found; EP is initialized to be empty.

3.3. Solution Construction

Assume that ant i is in group j and its current solution is π^i. Ant i constructs its new solution through the following steps.

First, the attractiveness of each link is computed. For k, l = 1, …, n, set

φ_{k,l} = [τ^j(k, l) + Δ · I(π^i, (k, l))]^α × [η^i(k, l)]^β,

where φ_{k,l} represents the attractiveness of the link between cities k and l to ant i. The indicator function I(π^i, (k, l)) is equal to 1 if the link (k, l) is in tour π^i and 0 otherwise. α, β, and Δ are the three control parameters.

Then, ant i chooses a city randomly as its starting point and builds its tour. Suppose that its current position is city l and it has not yet completed its tour. From the set S of cities not visited so far, it selects the city k to visit next according to the following probability, by roulette-wheel selection:

p(k) = φ_{l,k} / Σ_{u ∈ S} φ_{l,u}, k ∈ S.

Once the ant has visited all the cities, it returns its tour.
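The roulette-wheel step can be sketched as follows (function name ours). For brevity the attractiveness here is the standard ACO form τ^α · η^β; the full attractiveness defined above would also add the Δ bonus for links in the ant's current tour:

```python
import random

def next_city(current, unvisited, tau, eta, alpha=1.0, beta=8.0):
    """Pick the next city by roulette-wheel selection, weighting each
    candidate k by tau[current][k]**alpha * eta[current][k]**beta."""
    weights = [(k, (tau[current][k] ** alpha) * (eta[current][k] ** beta))
               for k in unvisited]
    total = sum(w for _, w in weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for k, w in weights:
        acc += w
        if acc >= r:
            return k
    return weights[-1][0]  # numerical safety net
```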

3.4. Update of Pheromone

The pheromone trail value of the link (k, l) for group j is updated as follows:

τ^j(k, l) = ρ · τ^j(k, l) + Σ_{π ∈ Π} Δ(π),

Here ρ is the persistence rate of the pheromone trail, and Π is the set of all the new solutions that were constructed by the ants in group j in the current iteration, were just added to EP, and contain the link between cities k and l. The bounds τ_min and τ_max are used to limit the range of the pheromone, and the deposit Δ(π) involves a further control parameter. After each update, τ^j(k, l) is clamped: if τ^j(k, l) < τ_min, it is set to τ_min, and if τ^j(k, l) > τ_max, it is set to τ_max. The bound τ_max is computed from B, the number of nondominated solutions found in the current iteration, and the smallest value obtained for the objective functions of all the N subproblems.
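An illustrative sketch of this evaporate-deposit-clamp cycle for one group's pheromone matrix (the function name, dict layout, and the constant deposit `delta` standing in for Δ(π) are all ours):

```python
def update_pheromone(tau, new_tours, rho=0.95, delta=1.0,
                     tau_min=0.01, tau_max=10.0):
    """Update a group's pheromone matrix tau (dict of dicts, tau[k][l]):
    evaporate every link, deposit on links of newly archived tours,
    then clamp all values to [tau_min, tau_max]."""
    # evaporation: tau <- rho * tau on every link
    for k in tau:
        for l in tau[k]:
            tau[k][l] *= rho
    # deposit on each link (k, l) used by a newly archived nondominated tour
    for tour in new_tours:
        for k, l in zip(tour, tour[1:]):
            tau[k][l] += delta
    # clamp to the allowed pheromone range
    for k in tau:
        for l in tau[k]:
            tau[k][l] = min(max(tau[k][l], tau_min), tau_max)
    return tau
```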

4. Simulation Results

In this section, we compare the MOEA/D-ACO-PBI algorithm with three state-of-the-art algorithms: MOEA/D-ACO [13], MOEA/D [10], and the Multiple Ant Colony System (MACS) [18]. The test problems are introduced in Section 4.1, and Section 4.2 describes the quality indicators. The three algorithms used for comparison and the corresponding parameter settings are briefly introduced in Section 4.3.

4.1. Test Problems

The well-known test functions CF1-CF4 [19] are used to test the proposed algorithm. Table 1 shows the CF1-CF4 problems of the CF test suite. CF1-CF4 are biobjective test functions and are used to test the low-dimensional objective case; the CF suite is a popular set of test problems. To verify the ability of the algorithms to deal with more objectives, we select C1-DTLZ1, C2-DTLZ2, and C3-DTLZ4 [20] to test the high-dimensional objective case.

C1-DTLZ1 is a constrained MaOP obtained by adding to the DTLZ1 problem the constraint

c(x) = 1 − f_M(x)/0.6 − Σ_{i=1}^{M−1} f_i(x)/0.5 ≥ 0.

C2-DTLZ2 is a constrained MaOP obtained by adding to the DTLZ2 problem the constraint

c(x) = −min{ min_{i=1,…,M} [ (f_i(x) − 1)^2 + Σ_{j=1, j≠i}^{M} f_j(x)^2 − r^2 ], Σ_{i=1}^{M} (f_i(x) − 1/√M)^2 − r^2 } ≥ 0,

where r is a radius parameter.

C3-DTLZ4 is a constrained MaOP obtained by adding to the DTLZ4 problem the M constraints

c_i(x) = f_i(x)^2 / 4 + Σ_{j=1, j≠i}^{M} f_j(x)^2 − 1 ≥ 0, i = 1, …, M.

4.2. Quality Indicators

In our empirical study, the following two widely used quality indicators are considered. The first reflects only the convergence of an algorithm, while the second reflects the convergence and diversity of the solutions simultaneously.

4.2.1. Generational Distance (GD) Indicator

For any algorithm, let P be the set of final nondominated points obtained in the objective space, and let P* be a set of points uniformly spread over the true PF. The GD indicates only the convergence of an algorithm [21], and a smaller value indicates better quality. The GD is computed as

GD(P) = ( Σ_{p ∈ P} d(p, P*) ) / |P|,

where d(p, P*) is the minimum Euclidean distance from p to the points in P*.

4.2.2. Inverted Generational Distance (IGD) Indicator

For any algorithm, let P be the set of final nondominated points obtained in the objective space, and let P* be a set of points uniformly spread over the true PF. The IGD indicates both the convergence and the diversity [21], and a smaller value indicates better quality. The IGD is computed as

IGD(P) = ( Σ_{q ∈ P*} d(q, P) ) / |P*|,

where d(q, P) is the minimum Euclidean distance from q to the points in P.
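Both indicators reduce to averages of nearest-neighbour distances; a minimal sketch follows (function names ours; the simple averaging variant is assumed rather than the power-mean variant):

```python
import math

def _min_dist(p, points):
    """Minimum Euclidean distance from point p to a set of points."""
    return min(math.dist(p, q) for q in points)

def gd(P, P_star):
    """Generational distance: mean distance from each obtained point in P
    to its nearest reference point on the true PF (convergence only)."""
    return sum(_min_dist(p, P_star) for p in P) / len(P)

def igd(P, P_star):
    """Inverted GD: mean distance from each reference point to its nearest
    obtained point (captures convergence and diversity)."""
    return sum(_min_dist(q, P) for q in P_star) / len(P_star)
```

Note that a set of points lying exactly on the PF can have GD = 0 yet a large IGD if it covers only a small part of the front, which is why IGD also reflects diversity.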

4.3. Parameter Settings

The parameters for the four MOEAs considered in this paper are listed as follows.

Parameter settings in MOEA/D-ACO-PBI: T = 8, α = 1, β = 8, ρ = 0.95, r = 0.8, and θ = 5.0.

Parameter settings in MOEA/D-ACO and MOEA/D: T = 8, α = 1, β = 8, ρ = 0.95, and r = 0.8. The other parameters can be obtained from [15].

Parameter settings in MACS: the five control parameters were set to 1, 2, 0.9, 5, and 10, respectively, following [18].

Number of runs and termination condition: all algorithms were programmed in PlatEMO [22], and each algorithm was independently run 30 times for each test instance and stopped after 300 generations. All the algorithms were executed on a desktop PC with a 2.1-GHz CPU, 8-GB RAM, and Windows 10.

5. Results and Discussion

The GD indicator was utilized to compare the convergence capability of the proposed MOEA/D-ACO-PBI algorithm and the three other algorithms (Figure 4). Table 2 shows the average and standard deviation of the GD values obtained by the four algorithms on the test suites with different numbers of objectives. It can be seen that the MOEA/D-ACO-PBI algorithm performs well on all the test functions, especially on the CF1 and CF3 problems, and that its advantage grows as the number of objectives increases. The comparison of the MOEA/D-ACO-PBI algorithm with the other three MOEAs in terms of IGD values is provided in Table 3, which reports the average and standard deviation of the IGD values for the four MOEAs over 30 independent runs. The experimental results show that the MOEA/D-ACO-PBI algorithm performs best on the CF2, CF4, C1-DTLZ1, C2-DTLZ2, and C3-DTLZ4 test problems, indicating that it achieves better convergence and diversity. In addition, the MOEA/D-ACO algorithm also performs well, while MACS performs the worst.

The conclusions are given below.

MOEA/D-ACO-PBI has the best convergence performance, as its approximate fronts cover most of those returned by the other algorithms. MOEA/D-ACO can obtain well-spread approximate fronts; however, some of its solutions are dominated by those of MOEA/D-ACO-PBI.

It can also be observed in Figure 3 that MOEA/D-ACO-PBI dominates most of the solutions and that its advantage becomes more pronounced as the number of objectives increases.

6. Conclusions

The major contribution of this paper is the proposal of the MOEA/D-ACO-PBI algorithm for MaOPs. Because the selection pressure weakens as the number of optimization objectives increases, the PBI aggregation function was selected to decompose the objectives. The MOEA/D-ACO-PBI algorithm better maintains the balance between convergence and diversity and thus obtains a better Pareto set. From the simulation experiments, it can be seen that the results of the MOEA/D-ACO-PBI algorithm are significantly better than those of the other algorithms. Our future work aims at analysing further how to cope with constraint and dynamic constraint handling and at optimizing problems with larger numbers of objectives more effectively.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research work was supported by the National Science and Technology Major Projects (No. 2009ZX04001-111), a subproject of the National Science and Technology Major Project (No. 2013ZX04002-031), the Key Laboratory of Automotive Power Train and Electronics (Hubei University of Automotive Technology, No. ZDK1201703), and the Hubei Provincial Education Department Youth Fund (No. Q20181801).