Abstract

When faced with complex optimization problems that have multiple objectives and many decision variables, many multiobjective particle swarm algorithms are prone to premature convergence. To enhance the convergence and diversity of the multiobjective particle swarm algorithm, a multiobjective particle swarm optimization algorithm based on the grid technique and multistrategy (GTMSMOPSO) is proposed. The algorithm randomly applies one of two evaluation index strategies (a convergence evaluation index and a distribution evaluation index), combined with the grid technique, to enhance the diversity and convergence of the population and to increase the probability that particles fly toward the true Pareto front. The external archive is maintained by combining the grid technique with a mixed evaluation index strategy; this avoids removing particles with better convergence based only on particle density, which would degrade the population and impair the exploitation ability of the particles. At the same time, a mutation operation is proposed to prevent rapid degradation of the population and enhance the search capability of the particles. Simulation results show that the proposed algorithm achieves better convergence and distribution than CMOPSO, NSGAII, MOEAD, MOPSOCD, and NMPSO.

1. Introduction

Most of today’s scientific and engineering problems are characterized by multiple conflicting objectives [1], and decision makers need to optimize several objectives simultaneously, as well as possible, within a given range; these are multiobjective optimization problems (MOPs). The result of optimizing such a problem is not a single solution but a Pareto optimal solution set consisting of a set of compromise solutions [2]. The goal of solving such problems is to obtain a well-distributed Pareto front in the objective space [3–6].

As the scope of human activity expands and our understanding of the world deepens, the complex optimization problems encountered in practice are often multimodal, nonlinear, strongly constrained, and difficult to model. Such problems cannot be solved in polynomial time by classical algorithms such as the simplex method or the conjugate gradient method, and some cannot even be modeled effectively. With the development of information technology, swarm intelligence algorithms have become widely used; they simulate the collective learning process of a group of individuals and can handle problems that classical algorithms cannot. As a swarm intelligence algorithm, particle swarm optimization (PSO) is simple to implement and converges quickly, giving it good potential for solving MOPs.

When PSO is applied to MOPs, it is called multiobjective particle swarm optimization (MOPSO). MOPSO must address at least three problems. The first is how to preserve the set of noninferior solutions generated at each iteration, especially when the number of iterations is large, and how to control the number of stored solutions so that their quality remains acceptable. The second is how to select the learning samples of particles from among the many noninferior solutions; since an MOP has no single truly optimal solution, the global best and personal best particles in MOPSO must be selected by a specific strategy. The third is how to maintain population diversity and prevent the population from falling into local optima; because MOPSO converges quickly, the obtained set of noninferior solutions tends to lose diversity rapidly during the learning process.

Maintaining the size of the external archive, selecting the learning samples of particles from among many noninferior solutions, maintaining population diversity, and preventing the population from falling into local optima are the core problems of MOPSO. In [7], MOPSOhv, a new hypervolume-based multiobjective particle swarm optimizer, is proposed. The algorithm uses the hypervolume contribution of the archived solutions to select the global and personal leaders of each particle in the main swarm and to update the noninferior solutions of the external archive; a mutation operator is introduced to increase particle diversity. In [8], R2-MOPSO, a multiobjective particle swarm optimizer based on the R2 indicator and decomposition, is proposed. The algorithm uses an R2-contribution strategy to select the global best leader and update the personal best leader, and it uses an elite learning strategy and a Gaussian learning strategy to improve particle diversity. In [9], noting that balancing convergence and diversity is a key difficulty in high-dimensional objective spaces, a particle swarm optimizer based on the R2 indicator and decomposition, R2-MaPSO, is proposed for many-objective optimization problems. To balance convergence and diversity, a two-level archive maintenance method based on the R2 indicator and an objective space decomposition strategy are designed: the global best leader is selected by the R2 indicator, the personal best leader is selected by Pareto dominance, and leader selection under the objective space decomposition uses feedback from the two-level archive. A new velocity update method improves the exploitation ability of the particles, and an elite learning strategy and an intelligent Gaussian learning strategy are embedded in R2-MaPSO to improve the particles' ability to escape local optima.

The current improved MOPSOs enhance the convergence and diversity of MOPSO to a certain extent, but shortcomings remain. To further improve the performance of MOPSO, this paper proposes a multiobjective particle swarm algorithm based on the grid technique and multistrategy. The main improvements to MOPSO are as follows: (1) for the maintenance of the external archive, the grid technique and a mixed evaluation index are used to remove noninferior solutions from the external archive and improve the quality of the candidate solutions; (2) to enhance the diversity and convergence of the population, one of two different evaluation index strategies is randomly used, based on the grid technique, to select the global optimal sample; and (3) to further increase population diversity, a mutation operation strategy is used to perturb the particle positions. The simulation results show that the proposed algorithm has certain advantages over the five other multiobjective particle swarm algorithms.

The rest of this paper is organized as follows. Section 2 introduces the multiobjective optimization problem, related work, and the particle swarm optimization algorithm. Section 3 presents the details of the proposed algorithm. Section 4 reports the comparison results and analysis of GTMSMOPSO against five other multiobjective particle swarm algorithms. Finally, Section 5 concludes the paper.

2. Background

2.1. Multiobjective Optimization Problem

The general multiobjective optimization problem (taking minimization as an example) is defined as follows:

$$\begin{aligned} \min\ & F(x) = \left(f_1(x), f_2(x), \ldots, f_M(x)\right)^{T} \\ \text{s.t.}\ & g_i(x) \le 0, \quad i = 1, 2, \ldots, p, \\ & h_j(x) = 0, \quad j = 1, 2, \ldots, q, \end{aligned}$$

where $x = (x_1, x_2, \ldots, x_n) \in \Omega \subseteq \mathbb{R}^{n}$ is the n-dimensional decision vector, $F(x)$ is the vector of $M$ objective functions, and $g_i(x)$ and $h_j(x)$ are the inequality and equality constraints, respectively.

The related definitions of multiobjective optimization are given as follows.

Definition 1 (Pareto dominance). A vector $u = (u_1, u_2, \ldots, u_M)$ dominates a vector $v = (v_1, v_2, \ldots, v_M)$, denoted $u \prec v$, if and only if $u_i \le v_i$ for all $i \in \{1, \ldots, M\}$ and $u_j < v_j$ for at least one $j \in \{1, \ldots, M\}$.

Definition 2 (Pareto optimal). A solution $x^{*} \in \Omega$ is Pareto optimal (a noninferior solution) if and only if there is no $x \in \Omega$ such that $F(x) \prec F(x^{*})$.

Definition 3 (Pareto optimal solution set). The set S of all Pareto optimal solutions is called the Pareto optimal solution set, which is defined as $S = \{x^{*} \in \Omega \mid \neg\exists\, x \in \Omega : F(x) \prec F(x^{*})\}$.

Definition 4 (Pareto frontier). The set PF of the objective function vectors of all solutions in the Pareto optimal solution set is called the Pareto frontier and is defined as $PF = \{F(x^{*}) \mid x^{*} \in S\}$.
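As an illustration of Definitions 1-3, the following minimal Python sketch (assuming minimization and an N x M matrix of objective values) checks Pareto dominance and extracts the noninferior rows:

```python
import numpy as np

def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization): u is no
    worse in every objective and strictly better in at least one."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def nondominated(F):
    """Indices of the noninferior (Pareto optimal) rows of an N x M matrix
    of objective values."""
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]
```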

2.2. Related Work

PSO is a swarm intelligence optimization algorithm that is simple to implement and converges quickly, and it is widely used for single-objective optimization problems. It is precisely this simplicity and fast convergence that have attracted increasing numbers of scholars to study it. Since MOPs have no single truly optimal solution, MOPSO generally selects the global best samples from the set of noninferior solutions; moreover, because MOPSO converges quickly during the learning process, the obtained set of noninferior solutions easily loses diversity rapidly, causing the algorithm to fall into local optima.

To solve the above problems, scholars have proposed many improvements to MOPSO. Coello et al. [10] proposed a multiobjective particle swarm optimization algorithm based on an adaptive grid, which uses the adaptive grid technique to maintain the external archive, guide particle updates, and apply mutation to the particles and their flight area. Although the algorithm shows advantages over traditional multiobjective evolutionary algorithms on some MOPs, it has difficulty solving complex MOPs with multiple local fronts, and the diversity of its nondominated solutions is poor. Li et al. [11] proposed a grid-search-based multipopulation particle swarm optimization algorithm (GSMPSO-MM) for multimodal multiobjective optimization; it uses a multipopulation scheme based on k-means clustering to locate more equivalent Pareto sets in the decision space and uses a grid to explore high-quality solutions, but its convergence is still insufficient.

Unlike the abovementioned MOPSOs, which drive the search through the dominance relationship, some algorithms use decomposition instead. In [12], a multiobjective particle swarm algorithm with random migration is proposed. The algorithm sets an age threshold: when a particle fails to improve its personal best position, its age increases, and when its age exceeds the threshold, the particle is reinitialized to increase population diversity and avoid local extrema. However, because decomposition replaces the dominance relationship, the algorithm cannot cover the entire Pareto front when solving some complex MOPs. To address this, Han et al. [13] proposed a multiobjective particle swarm optimization with an adaptive strategy for feature selection, which mainly uses the PBI decomposition method to select the optimal solution and adaptively assigns different penalty values to each weight vector, further improving the ability of MOPSO to solve MOPs. Other improved MOPSO variants have also been proposed. In [14], a modified particle swarm optimization for multimodal multiobjective optimization introduces a neighborhood-based dynamic learning strategy to replace the global learning strategy, enhancing population diversity, and adds a competition mechanism to improve performance. In [15], a self-organized, speciation-based multiobjective particle swarm optimizer for multimodal multiobjective problems uses a species-formation strategy to establish multiple stable ecological niches, increasing the probability that the population flies to the true Pareto front. In [16], a simplified multiobjective particle swarm optimization algorithm applies an adaptive penalty mechanism to the PBI parameter, adaptively adjusting the penalty value to enhance the selection pressure of the archive and of each particle. In [17], multiobjective reservoir operation using particle swarm optimization with adaptive random inertia weights combines the ARIW scheme with traditional PSO, randomly generating inertia weights from a triangular probability density function and automatically adjusting the distribution as evolution proceeds. In [18], based on penalty function theory and the particle swarm optimization (PSO) algorithm, an improved multiobjective particle swarm optimization algorithm with archive management is proposed, in which multiswarm coevolution with crowding-distance archive management improves the search ability and diversity of the population.

Most of the current improved MOPSO rely on only one strategy to select the learning samples of particles and maintain external archives, without considering the performance of particles at different stages, resulting in insufficient convergence and diversity of the algorithms when solving complex MOPs. In this paper, a multiobjective particle swarm algorithm based on grid technology and multistrategy is proposed, and experimental simulations show that the algorithm effectively improves the performance of MOPSO.

2.3. Particle Swarm Optimization

Particle swarm optimization [19] is an iterative optimization algorithm proposed by Eberhart and Kennedy in 1995, inspired by the foraging behavior of bird flocks. The swarm searches for the global optimum within a given range, and each particle has a fitness value and a velocity that adjusts its flight direction. During flight, all particles in the swarm have a memory function, and each particle continuously learns from its own best position and from the best particle position in the swarm. The velocity and position of particle i in dimension d are updated as follows:

$$v_{id}^{t+1} = \omega v_{id}^{t} + c_1 r_1 \left(p_{id}^{t} - x_{id}^{t}\right) + c_2 r_2 \left(g_{d}^{t} - x_{id}^{t}\right), \tag{6}$$

$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \tag{7}$$

where $\omega$ is the inertia weight, whose size controls the search ability; $c_1$ and $c_2$ are acceleration factors that give the particles the ability to summarize their own experience and to learn from outstanding individuals in the swarm; $r_1$ and $r_2$ are random numbers in the range [0, 1]; and $p_{id}^{t}$ and $g_{d}^{t}$ represent the individual optimal solution (pbest) and the global optimal solution (gbest), respectively.
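The following minimal Python sketch implements this standard update of equations (6) and (7); the inertia weight value w = 0.4 is a placeholder, since the paper's setting for $\omega$ is not given here:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0):
    """One velocity/position update per equations (6)-(7) for a whole swarm:
    x, v, pbest are (N x D) arrays and gbest is a (D,) array; the inertia
    weight w is a placeholder value."""
    n, d = x.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```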

3. The Proposed GTMSMOPSO Algorithm

The problems of maintaining the size of the external archive, selecting the learning samples of particles from among many noninferior solutions, maintaining population diversity, and preventing the population from falling into local optima are the core of MOPSO. In this paper, we apply the grid technique and a multistrategy approach to the maintenance of the external archive and the selection of the global optimal particles, and we further improve the performance of MOPSO through a mutation operation on the particle positions.

3.1. Grid Technique

In MOPSO, each iteration generates a set of noninferior solutions. The grid technique divides the grid area by the information of all noninferior solution function values in the external archive, so that the grid location to which the noninferior solution belongs can be found according to the function value of the noninferior solution. The definition of grid technology is introduced as follows.

3.1.1. Coordinates of the Grid Boundary

In the objective space, let the maximum and minimum values of the mth objective function over the noninferior solutions be $f_m^{\max}$ and $f_m^{\min}$, respectively. The upper and lower boundaries of the grid in the mth objective are then given by

$$\overline{F}_m = f_m^{\max} + \frac{f_m^{\max} - f_m^{\min}}{2b}, \qquad \underline{F}_m = f_m^{\min} - \frac{f_m^{\max} - f_m^{\min}}{2b},$$

where b is the number of grid divisions (the boundaries are widened slightly so that the extreme solutions fall inside the grid).

3.1.2. Grid Coordinates (Gm)

Let $f_m(x)$ be the function value of particle x in the mth objective; the corresponding grid coordinate is

$$G_m(x) = \left\lfloor \frac{f_m(x) - \underline{F}_m}{d_m} \right\rfloor,$$

where $m = 1, 2, \ldots, M$ and $d_m = (\overline{F}_m - \underline{F}_m)/b$ is the grid width.
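A minimal Python sketch of Section 3.1 follows; the widened-boundary form is an assumption consistent with the boundary and width definitions above:

```python
import numpy as np

def grid_coordinates(F, b):
    """Map each noninferior solution (rows of the N x M objective matrix F)
    to integer grid coordinates: each objective range is slightly widened
    and split into b divisions."""
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    pad = (fmax - fmin) / (2.0 * b)                 # widened boundary (assumed form)
    lower, upper = fmin - pad, fmax + pad
    width = np.maximum((upper - lower) / b, 1e-12)  # grid width d_m, guard zero range
    G = np.floor((F - lower) / width).astype(int)
    return np.clip(G, 0, b - 1)                     # grid coordinates G_m per objective
```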

3.2. Global Optimal Sample Selection Based on Grid Technique and Multistrategy

The global optimal particle selection in MOPSO is a key factor affecting the diversity and convergence of the algorithm. To enhance the convergence and diversity of MOPSO, this paper uses two different evaluation indices (convergence evaluation index and distribution evaluation index) to select noninferior solutions in each grid based on the roulette wheel strategy.

3.2.1. Selection Method of Convergence Evaluation Index and Distribution Evaluation Index

To improve the convergence of the algorithm, the noninferior solution with the largest contribution to the Pareto solution set is selected as the global optimal sample in the same grid. In this paper, the inflection point distance of particles is used as the evaluation index of particle convergence.

The inflection point distance (IPD) is computed, for two-objective problems, by determining the extreme straight line through the two extreme noninferior solutions in the external archive and then calculating the distance from each particle to this line. If there are three or more objective functions, the distance from each solution in the noninferior solution set to the extreme “hyperplane” is calculated instead.

The equation of the extreme straight line L is as follows:

$$A f_1 + B f_2 + C = 0,$$

where A, B, and C are all real numbers.

Supposing that the coordinates of the noninferior solution x are $(f_1(x), f_2(x))$, the distance from the noninferior solution x to the straight line L is

$$d(x, L) = \frac{\left|A f_1(x) + B f_2(x) + C\right|}{\sqrt{A^2 + B^2}}.$$

A hyperplane in n-dimensional space is determined by the following equation:

$$w^{T} x = b,$$

where $w$ is an n-dimensional coefficient vector, $x$ is an n-dimensional column vector, and $b$ is a real number related to the distance from the hyperplane to the origin. The distance from each solution in the noninferior solution set to the extreme “hyperplane” is

$$d(x) = \frac{\left|w^{T} x - b\right|}{\|w\|}.$$

The inflection point (knee point) is the “most concave” point on the Pareto front, and it contributes the most to the Pareto solution set. IPD is an index reflecting the convergence of the particles within the same grid: the larger the (signed) IPD index of a particle, the better its convergence. Taking the two-objective case as an example, as shown in Figure 1, the black noninferior solution has the best convergence, the red noninferior solutions are the extreme noninferior solutions, and the noninferior solutions above the extreme straight line have the worst convergence.
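For the two-objective case, the IPD computation can be sketched in Python as follows; the sign convention (larger values on the ideal-point side of the line, negative values above it) is chosen to match the description of Figure 1:

```python
import numpy as np

def inflection_point_distance(F):
    """Signed distance from each biobjective point (rows of F) to the extreme
    straight line through the two extreme noninferior solutions. Points on
    the ideal-point side of the line (the 'concave' knee region) get larger
    values; points above the line get negative values (worst convergence)."""
    p1 = F[F[:, 0].argmin()]                 # extreme solution with minimal f1
    p2 = F[F[:, 0].argmax()]                 # extreme solution with maximal f1
    A, B = p2[1] - p1[1], p1[0] - p2[0]      # line A*f1 + B*f2 + C = 0
    C = -(A * p1[0] + B * p1[1])
    return (A * F[:, 0] + B * F[:, 1] + C) / np.hypot(A, B)
```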

To increase the diversity of the population, the most dispersed noninferior solution in the same grid is selected as the global optimal sample. In this paper, the grid density of particles is used as the evaluation standard of particle distribution.

The grid density of particle $x_i$ is defined as the average of the Euclidean distances between $x_i$ and all other particles in the same grid:

$$gd(x_i) = \frac{1}{S-1} \sum_{\substack{j=1 \\ j \ne i}}^{S} \sqrt{\sum_{m=1}^{M} \left(f_m(x_i) - f_m(x_j)\right)^2}, \tag{14}$$

where S is the number of particles in the same grid and M is the number of objective functions.

gd is an indicator that reflects the distribution of particles in the same grid. The larger the gd index of the particles, the more dispersed the particle distribution.
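A minimal Python sketch of equation (14) for the particles sharing one grid cell:

```python
import numpy as np

def grid_density(F_cell):
    """Grid density gd per equation (14): for each particle in one grid cell
    (rows of the S x M matrix F_cell), the average Euclidean distance in
    objective space to the other particles in the same cell."""
    S = len(F_cell)
    if S < 2:
        return np.full(S, np.inf)  # lone particle: maximally dispersed (assumed convention)
    diff = F_cell[:, None, :] - F_cell[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))   # S x S pairwise distance matrix
    return dist.sum(axis=1) / (S - 1)         # mean distance to the S-1 others
```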

3.2.2. Gbest Selection Based on Two Evaluation Indicators and Roulette Selection

To ensure population diversity, noninferior solutions from every grid are eligible to serve as global optimal samples. Because the number of noninferior solutions differs between grids, the roulette strategy selects more samples from grids containing more noninferior solutions. To further balance the convergence and diversity of the algorithm and allow the population to explore more regions, one of the convergence evaluation index and the diversity evaluation index is chosen, according to the current iteration number, when selecting noninferior solutions within the same grid. Suppose that the noninferior solution with the largest inflection point distance in the jth grid is $x_{c}^{j}$ and the noninferior solution with the largest grid density in the jth grid is $x_{d}^{j}$. Then, in the jth grid, the particles select $x_{c}^{j}$ or $x_{d}^{j}$ as the gbest according to

$$gbest_j = \begin{cases} x_{c}^{j}, & \text{if } rand > \left( iter / iter_{max} \right)^{k}, \\ x_{d}^{j}, & \text{otherwise}, \end{cases} \tag{15}$$

where rand is a random number in [0, 1], iter is the current iteration number, $iter_{max}$ is the maximum number of iterations, and k is a real number; experimental simulation shows that k = 1.3 gives the best results. As equation (15) shows, in the early iterations the population has good diversity but poor convergence, so the particles have a higher probability of choosing $x_{c}^{j}$ as the gbest to guide their flight, thereby improving exploitation. In the later iterations the population converges well but is less diverse, so the particles have a higher probability of choosing $x_{d}^{j}$ as the gbest, thereby improving exploration.
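A Python sketch of this selection rule follows; it mirrors the reconstructed equation (15), so the exact switching threshold should be treated as an assumption:

```python
import numpy as np

def select_gbest(ipd, gd, it, it_max, k=1.3):
    """Pick the gbest index within one grid cell per the reconstructed
    equation (15): early in the run, favour the solution with the best
    convergence (largest IPD); later, favour the most dispersed one
    (largest gd)."""
    if np.random.rand() > (it / it_max) ** k:   # probability decays over time
        return int(np.argmax(ipd))              # convergence-oriented choice
    return int(np.argmax(gd))                   # diversity-oriented choice
```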

3.3. External Archive Maintenance Based on Mixed Evaluation Indicators

In MOPSO, each iteration of the algorithm generates a set of noninferior solutions, which are stored in the external archive. As the algorithm runs, the number of noninferior solutions grows, and the archive must be maintained once its maximum size is reached. Most MOPSOs maintain the external archive by deleting highly crowded particles. Although this method guarantees a uniform distribution of noninferior solutions in the external archive, it may delete noninferior solutions with better convergence. Therefore, this paper adopts a hybrid of the convergence evaluation index and the distribution evaluation index when deleting noninferior solutions from the external archive, improving the quality of the candidate solutions.

When deletion is required, the grid with the largest number of particles is found, and particles are deleted according to a hybrid evaluation index that combines the convergence evaluation index and the distribution evaluation index of the particles:

$$ME(x_i) = IPD(x_i) + gd(x_i). \tag{16}$$

The larger the IPD index of the particle, the greater the contribution of the particle to the Pareto solution set and the better its convergence. From formula (14), the larger the gd index of the particle, the more dispersed the particle's distribution. From formula (16), the particle's ME index reflects not only the degree of convergence of the particle but also its distribution: the larger the ME index, the better the particle's overall performance. Therefore, when the external archive reaches its maximum size, the grid with the largest number of particles is found, and the particles with the smallest ME index in that grid are deleted to improve the quality of the candidate solutions.
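The archive-trimming step of Section 3.3 can be sketched as follows; the additive combination ME = IPD + gd mirrors the reconstructed equation (16) and is an assumption, as is the grid-cell labeling scheme:

```python
import numpy as np

def trim_archive(grid_ids, ipd, gd, max_size):
    """Trim the external archive: while over capacity, locate the most
    crowded grid cell and delete its member with the smallest mixed
    evaluation index ME = IPD + gd (assumed combination, eq. (16))."""
    keep = list(range(len(grid_ids)))
    me = ipd + gd                           # larger ME = better overall
    while len(keep) > max_size:
        ids, counts = np.unique(grid_ids[keep], return_counts=True)
        crowded = ids[counts.argmax()]      # fullest grid cell
        members = [i for i in keep if grid_ids[i] == crowded]
        keep.remove(min(members, key=lambda i: me[i]))  # drop smallest ME
    return keep                             # indices of surviving solutions
```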

3.4. Mutation Operation

MOPSO has a strong exploration capability in the early iterations and can continuously search new regions. However, its fast convergence makes the algorithm likely to search only around local optima in the later iterations, leading to premature convergence. Therefore, to keep the population diversity stable in the early iterations and increase it in the later iterations, a mutation operation is applied to the particle positions, with a degree of mutation that increases with the iteration count. At the beginning of the run, diversity is good, so the particles are varied only slightly and diversity remains stable; in the later stages, diversity becomes insufficient, so the particles are varied more strongly to replenish it. In GTMSMOPSO, the particle velocity is updated with equation (6), and the particle position is updated as follows:

$$x_{id}^{t+1} = x_{id}^{t} + e^{\left(iter/iter_{max}\right)} \cdot v_{id}^{t+1}, \tag{17}$$

where $iter_{max}$ is the maximum number of iterations, iter is the current iteration number, and e is the base of the natural logarithm.
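A sketch of the reconstructed position update (17); the exponential scaling factor is an assumption consistent with the description that the variation grows with the iteration count:

```python
import numpy as np

def update_position(x, v_new, it, it_max):
    """Position update with the variation operation of Section 3.4: the step
    taken along the new velocity is amplified as iterations progress, from
    roughly 1x at the start to about e at the end (reconstructed eq. (17))."""
    scale = np.exp(it / it_max)   # grows (near-linearly) with iterations
    return x + scale * v_new
```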

3.5. The Main Flow of the Algorithm

The main flow of GTMSMOPSO is as follows.

Step 1. Set the number of populations, the maximum number of external archives, the dimension of particles, the number of grids, and the maximum number of iterations. Initialize the position and velocity randomly, set the initial value of each particle position as the best value of the individual particle, and create an external archive and set it to an empty set.

Step 2. Calculate the fitness of each particle and store the noninferior solutions in an external archive.

Step 3. Establish grids over the objective space, and calculate the inflection point distance and the grid density of the noninferior solutions in each grid using the corresponding formulas.

Step 4. The individual best sample is the better of the current position and the individual's historical best position; if the two do not dominate each other, one of them is selected at random. For the global best sample, first use the roulette method to determine the number of noninferior solutions to be selected from each grid, and then use equation (15) to determine the global optimal sample within each grid.

Step 5. Use flight formulas (6) and (17) to update the speed and position of each particle.

Step 6. If the external archive has not reached saturation, continue to add noninferior solutions. If the external archive has reached saturation, find the grid with the largest number of particles, use formula (16) to delete the particles with the smallest ME index, and then add the new noninferior solutions to update the external archive.

Step 7. If the current number of iterations is less than the maximum number of iterations, return to the second step; otherwise, output the optimal solution set.
The algorithm flowchart of GTMSMOPSO is presented in Figure 2.
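The following minimal Python driver wires the sketches above (pso_step, update_position, dominates, nondominated) into the Step 1-7 loop on a hypothetical toy problem; the gbest choice and the archive trimming are placeholders at the points where equations (15) and (16) would be applied:

```python
import numpy as np

def f(x):                                            # toy biobjective problem,
    return np.array([x[0] ** 2, (x[0] - 2.0) ** 2])  # not from the paper

N, D, IT_MAX, A_MAX = 40, 1, 100, 50
x = np.random.uniform(-4.0, 4.0, (N, D))
v = np.zeros((N, D))
pbest = x.copy()
pbest_F = np.array([f(p) for p in x])
arch_X, arch_F = x.copy(), pbest_F.copy()            # Steps 1-2: initialize and
                                                     # seed the external archive
for it in range(1, IT_MAX + 1):
    nd = nondominated(arch_F)                        # keep noninferior solutions
    arch_X, arch_F = arch_X[nd], arch_F[nd]
    if len(arch_X) > A_MAX:                          # Step 6: trim_archive with
        sel = np.random.choice(len(arch_X), A_MAX, replace=False)  # the ME
        arch_X, arch_F = arch_X[sel], arch_F[sel]                  # index here
    gbest = arch_X[np.random.randint(len(arch_X))]   # Step 4: eq. (15) goes here
    _, v = pso_step(x, v, pbest, gbest)              # Step 5: velocity, eq. (6)
    x = update_position(x, v, it, IT_MAX)            # Step 5: position, eq. (17)
    F = np.array([f(p) for p in x])
    better = [dominates(F[i], pbest_F[i]) for i in range(N)]  # update pbest
    pbest[better], pbest_F[better] = x[better], F[better]
    arch_X = np.vstack([arch_X, x])                  # Step 6: add new solutions
    arch_F = np.vstack([arch_F, F])
```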

4. Experimental Simulation Analysis

4.1. Performance Evaluation Index

To evaluate the performance of each algorithm, this paper uses the inverted generational distance (IGD) [20] and the hypervolume (HV) [21]. IGD measures the distance between the true Pareto front and the approximate Pareto front; the lower the IGD value, the better the convergence and diversity of the approximate Pareto front obtained by the algorithm and the closer it is to the true Pareto front. It is calculated as

$$IGD(P, P^{*}) = \frac{\sum_{v \in P^{*}} d(v, P)}{|P^{*}|},$$

where $P^{*}$ is a set of solutions uniformly distributed over the true PF, $|P^{*}|$ is the number of individuals in that set, P is the set of Pareto optimal solutions obtained by the algorithm, and $d(v, P)$ is the minimum Euclidean distance from an individual $v \in P^{*}$ to the population P.
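A minimal Python sketch of the IGD formula, with true_pf and approx_pf given as matrices of objective vectors:

```python
import numpy as np

def igd(true_pf, approx_pf):
    """IGD: mean, over the reference set sampled from the true PF, of the
    minimum Euclidean distance to the obtained solution set."""
    diff = true_pf[:, None, :] - approx_pf[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))   # |P*| x |P| distance matrix
    return dist.min(axis=1).mean()
```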

The HV measures the volume of the objective space covered by the noninferior solution set obtained by the algorithm; a larger HV indicates a better distribution of noninferior solutions in the space.

4.2. Selection of Parameters

The parameter settings of GTMSMOPSO are as follows: the external archive size is set to 200, the population size is set to 200, the maximum number of iterations is 2000, the number of grids is 50, and c1 = c2 = 2. The parameter settings of the five comparison multiobjective algorithms are consistent with their original literature.

4.3. Experimental Results and Data Analysis

To verify the effectiveness of GTMSMOPSO, we selected 14 representative functions from the typical multiobjective test suites ZDT [22], UF [23], and DTLZ [24] (ZDT1-4, ZDT6, UF2-5, UF8-10, DTLZ1, and DTLZ6), and the comparison algorithms CMOPSO [25], NSGAII [26], MOEAD [27], MOPSOCD [28], and NMPSO [29]. All algorithms except GTMSMOPSO were run on the platform of [30]. The mean (Mean) and standard deviation (Std.) of the IGD and HV metrics for GTMSMOPSO and the five multiobjective intelligence algorithms on the 14 test functions are given in Tables 1 and 2, respectively. The bolded data in the tables represent the best values.

The multiobjective test function sets ZDT (ZDT1-4 and ZDT6) are all biobjective test functions. ZDT1 has a convex PF and ZDT2 has a nonconvex PF. From Tables 1 and 2, it can be seen that both GTMSMOPSO and CMOPSO have smaller IGD values and larger HV values on ZDT1 and ZDT2 test functions, so they have better overall performance. The performance of NSGAII and MOEAD is slightly worse. ZDT3 has a broken Pareto front and is nonconvex. It can be seen from Tables 1 and 2 that GTMSMOPSO has the best HV value on the ZDT3 test function, although it has a poor IGD value. ZDT4 has many locally optimal solutions and has convex PFs, and ZDT6 has nonconvex and nonuniformly spaced PFs. GTMSMOPSO showed the best IGD and HV values on the ZDT4 test function. GTMSMOPSO showed the best IGD value on the ZDT6 test function, and its HV value was slightly lower than the HV value of NMPSO. In summary, it can be seen that GTMSMOPSO has the best performance on the multiobjective test function set ZDT series.

The multiobjective test functions UF2-UF5 are biobjective test functions, and UF8-UF10 are three-objective test functions. As can be seen from Tables 1 and 2, GTMSMOPSO achieves four optimal IGD averages and three optimal HV averages on the UF series of test functions, and it performs suboptimally in terms of IGD on the UF8 test function. In summary, GTMSMOPSO outperforms the other five multiobjective intelligent algorithms on the UF series of test functions.

The multiobjective test functions DTLZ1 and DTLZ6 are both three-objective test functions. DTLZ1 is a multimodal function containing many local Pareto hyperplanes. The GTMSMOPSO algorithm achieves the best IGD and HV values on the DTLZ1 function, reflecting good performance, and achieves suboptimal IGD and HV values on the DTLZ6 function. In summary, the performance of the GTMSMOPSO algorithm on the DTLZ series of test functions is good.

To visualize the convergence and diversity of the noninferior solution sets obtained by each algorithm, Figures 3-8 show the noninferior solution sets obtained by each algorithm on the two-objective test functions (ZDT1, ZDT2, ZDT4, and ZDT6) and the three-objective test functions (UF10 and DTLZ1), with the true Pareto front shown. As seen in Figures 3-6, NSGAII and MOEAD both suffer from insufficient diversity and convergence on several of the ZDT test functions, and CMOPSO suffers from severe underdiversity and underconvergence on the ZDT4 test function. In contrast, GTMSMOPSO not only approximates the true Pareto front on the ZDT test functions but also has a better distribution, so it achieves better convergence and distribution on the ZDT series. As can be seen from Figures 7 and 8, GTMSMOPSO comes closer to the true Pareto front on the three-objective test functions (UF10 and DTLZ1), while the other five algorithms show underconvergence or underdistribution in several cases. In summary, GTMSMOPSO has good convergence and distributivity compared with the other five compared algorithms.

From Figure 9, it can be concluded that GTMSMOPSO converges faster to a smaller and stable IGD value on the six multiobjective test functions ZDT1, ZDT2, ZDT4, ZDT6, UF10, and DTLZ1. Therefore, the algorithm in this paper outperforms the other five comparison algorithms.

5. Conclusion

In this paper, we propose a multiobjective particle swarm algorithm based on the grid technique and multistrategy. The algorithm improves the maintenance of the external archive and the selection of global optimal samples and introduces a mutation operation on the particle positions. For the maintenance of the external archive, particles in the grid with the largest number of particles are removed using a mixed evaluation index strategy, which provides better-quality particles for selecting the global optimal solution. In the selection of global optimal samples, particles in the same grid are selected, according to the current iteration number, using either the inflection point distance strategy or the grid density strategy, balancing exploration and exploitation and thus improving the diversity and convergence of the algorithm. To further enhance population diversity, a linearly increasing variation is applied to the particle positions, strengthening the exploration capability of the algorithm. To verify the effectiveness of the algorithm, simulation experiments were performed with this paper's algorithm and five other multiobjective particle swarm algorithms on 14 multiobjective functions (ZDT1-ZDT4, ZDT6, UF2-UF5, UF8-UF10, DTLZ1, and DTLZ6). The experimental results show that the algorithm has good convergence and diversity and produces solutions that are well distributed in the objective space.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.