Abstract

In multiobjective particle swarm optimization (MOPSO), maintaining or increasing the diversity of the swarm helps the algorithm escape locally optimal solutions. To this end, the PAM (Partitioning Around Medoids) clustering algorithm and uniform design are introduced to maintain, respectively, the diversity of the Pareto optimal solutions and the uniformity of the selected Pareto optimal solutions. This paper proposes a novel algorithm, a multiobjective particle swarm optimization based on PAM and uniform design. The proposed algorithm differs from existing ones in that PAM and uniform design are introduced into MOPSO for the first time. Experimental results on several test problems show that the proposed algorithm is efficient.

1. Introduction

Many real-world optimization problems need to simultaneously optimize multiple objectives that are incommensurable and generally conflicting with each other. They can usually be written as
$$\min_{x \in \Omega} F(x) = \left(f_1(x), f_2(x), \ldots, f_m(x)\right), \qquad (1)$$
where $x = (x_1, x_2, \ldots, x_n)$ is a variable vector in a real $n$-dimensional space, $\Omega$ is the feasible solution space, and $m$ is the number of objective functions. Since the pioneering attempt of Schaffer [1] to solve multiobjective optimization problems, many kinds of multiobjective evolutionary algorithms (MOEAs), ranging from traditional evolutionary algorithms to newly developed techniques, have been proposed and widely used in different applications [2–4].

Multiobjective evolutionary algorithms (MOEAs) have become well-known methods for solving multiobjective optimization problems that are too complex to be solved by exact methods. The main challenge for MOEAs is to satisfy three goals at the same time: (1) the obtained solutions are as near to the true Pareto front as possible, which reflects the convergence of MOEAs; (2) the nondominated solutions are evenly scattered along the Pareto front, which reflects the diversity of MOEAs; and (3) MOEAs obtain the Pareto optimal solutions within a limited number of generations [5].

The particle swarm optimization (PSO) algorithm, like MOEAs, is an intelligent optimization algorithm. It was proposed by Eberhart and Kennedy in 1995 [6, 7] and originates from the sharing and exchanging of information among individual birds during the search for food. Each individual can benefit from the discoveries and flight experience of the others. PSO seems particularly suitable for multiobjective optimization mainly because of its high speed of convergence [8, 9].

PAM is a $k$-medoids clustering algorithm based on partitioning methods. It attempts to divide $n$ data objects into $k$ partitions; namely, it can divide a swarm into different subswarms with different features.

This paper proposes a novel multiobjective particle swarm optimization based on PAM and uniform design, abbreviated as UKMOPSO. It first uses PAM to partition the data points into several clusters, and then the crossover operator based on uniform design is applied to the smallest cluster to generate new data points. When the number of Pareto optimal solutions exceeds the size of the external archive, PAM is used to determine which Pareto optimal solutions are removed from or inserted into the archive. The results of experimental simulations on several well-known test problems indicate that the proposed algorithm is efficient.

The rest of this paper is organized as follows. Section 2 states the preliminaries of the proposed method. Section 3 presents our method in detail. Section 4 gives the numerical results of the proposed method. The conclusion of the work is made in Section 5.

2. Preliminaries

In this section, we describe some concepts concerning particle swarm optimization, multiobjective particle swarm optimization, PAM, and uniform design.

2.1. Particle Swarm Optimization Algorithms

In a $D$-dimensional search space, the position and velocity of the $i$th particle are, respectively, represented as $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. The optimal positions found by the $i$th particle and by the whole swarm, namely, the individual optimal and the global optimal, are denoted as $pbest_i$ and $gbest$, respectively. The individuals or particles in the swarm update their velocities and positions according to the following formulas:
$$v_{id}^{t+1} = w\, v_{id}^{t} + c_1 r_1 \left(pbest_{id} - x_{id}^{t}\right) + c_2 r_2 \left(gbest_{d} - x_{id}^{t}\right), \qquad (2)$$
$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \qquad (3)$$
where the inertia weight coefficient $w$ indicates the ability to maintain the previous speed; the acceleration coefficients $c_1$ and $c_2$ are used to coordinate the degrees of tracking the individual optimal and the global optimal; and $r_1$ and $r_2$ are two random numbers drawn from the uniform distribution on the interval $[0, 1]$.

The update equation of the velocity consists of the previous velocity component, a cognitive component and a social component. They are mainly controlled by three parameters: the inertia weight and two acceleration coefficients.

From the theoretical analysis of the trajectory of particles in PSO [10], the trajectory of a particle converges to a weighted mean of $pbest$ and $gbest$. As a particle converges, it "flies" toward its individual best position and the global best position. According to the update equation, the individual best position of each particle will gradually move closer to the global best position. Therefore, all the particles will converge onto the global best particle's position.
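To make the update rules concrete, the following Python sketch advances one particle by one iteration. It is only an illustration of formulas (2) and (3), not the authors' implementation; the default parameter values are placeholders.

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=np.random.default_rng()):
    """One velocity/position update for a single particle; all arguments are length-D vectors."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # formula (2)
    x_new = x + v_new                                               # formula (3)
    return x_new, v_new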

2.2. Multiobjective Particle Swarm Optimization

MOPSO was proposed by Coello et al.; it adopts swarm intelligence to optimize multiobjective optimization problems and uses the Pareto optimal set to guide the particles' flight [9].

Particle swarm optimization has been proposed for solving a large number of single objective problems. Many researchers are interested in solving multiobjective problems (MOP) using PSO. To modify a single objective PSO to MOPSO, a guide must be redefined in order to obtain a set of nondominated solutions (Pareto front). In MOPSO, the Pareto optimal solutions should be used to determine the guide for each particle. How to select suitable local guides for attaining both convergence and diversity of solutions becomes an important issue.

There have been several publications that use PSO to solve MOPs. A dynamic neighborhood PSO was proposed [11], which optimizes only one objective at a time and uses a scheme similar to lexicographic ordering. In addition, this approach also proposes an unconstrained elite archive, named the dominated tree, to store the nondominated solutions. However, it is difficult for this approach to pick a best local guide from the set of Pareto optimal solutions for each particle of the population. A strategy for finding suitable local guides for each particle was proposed and named the Sigma method, in which the local guide is explicitly assigned to specific particles according to the Sigma value [12]. This yields the desired diversity and convergence, but the results are still not close enough to the Pareto front. On the other hand, an enhanced archiving technique to maintain the best (nondominated) solutions found during the course of a multiobjective algorithm was proposed [13]; it shows that using archives in PSO for multiobjective problems directly improves performance. A parallel vector evaluated particle swarm optimization (VEPSO) method for multiobjective problems was proposed [14], which adopts a ring migration topology and a PVE system to employ 2–10 CPUs simultaneously to find nondominated solutions. In [9], the MOPSO method was proposed; it incorporates Pareto dominance and a special mutation operator to solve multiobjective problems [15].

Recently, a hybrid multiobjective algorithm combining genetic algorithm (GA) and particle swarm optimization (PSO) was proposed [16]. A multiobjective particle swarm optimization based on self-update and grid strategy was proposed to improve the quality of the Pareto set [17]. A new dynamic self-adaptive multiobjective particle swarm optimization (DSAMOPSO) method was proposed to solve binary-state multiobjective reliability redundancy allocation problems [18]; it uses a modified nondominated sorting genetic algorithm (NSGA-II) method and a customized time-variant multiobjective particle swarm optimization method to generate nondominated solutions.

The MOPSO method is becoming more popular due to its simplicity of implementation and its ability to converge quickly to a reasonably acceptable solution for problems in science and engineering.

2.3. PAM Algorithm

There are many clustering methods available in data mining. Typical clustering analysis methods are partition-based clustering, hierarchical clustering, density-based clustering, grid-based clustering, and model-based clustering.

The most frequently used partition-based clustering methods are $k$-means and $k$-medoids. In contrast to the $k$-means algorithm, $k$-medoids chooses actual data points as centers (medoids), which makes the $k$-medoids method more robust than $k$-means in the presence of noise and outliers, because medoids are less influenced by outliers or other extreme values than means. PAM (Partitioning Around Medoids) is the first and most frequently used $k$-medoids algorithm. It is shown in Algorithm 1.

Algorithm: PAM, a $k$-medoids algorithm for partitioning based on medoids or central objects.
Input:
$k$: the number of clusters,
$D$: a data set containing $n$ objects.
Output: A set of $k$ clusters.
Method:
(1) Arbitrarily choose $k$ objects in $D$ as the initial representative objects or seeds;
(2) Repeat
(3) Assign each remaining object to the cluster with the nearest representative object;
(4) Randomly select a nonrepresentative object, $o_{\mathrm{random}}$;
(5) Compute the total cost, $S$, of swapping a representative object, $o_j$, with $o_{\mathrm{random}}$;
(6) If $S < 0$ then swap $o_j$ with $o_{\mathrm{random}}$ to form the new set of $k$ representative objects;
(7) Until no change;

PAM constructs $k$ partitions (clusters) of the given dataset, where each partition represents a cluster. Each cluster may be represented by a centroid or a cluster representative, which is some sort of summary description of all the objects contained in the cluster. PAM needs to determine $k$ partitions for $n$ objects. The process of PAM is by and large as follows. Firstly, randomly select $k$ representative objects, and assign every other object to the group of its nearest representative object according to the minimum distance between them. Then try to replace these representative objects with other nonrepresentative objects in order to minimize the squared error. All possible pairs of objects are analyzed, where one object in each pair is a representative object and the other is not. The total cost of the clustering is calculated for each such combination, and a representative object is replaced with the object that minimizes the squared error. The set of best objects for each cluster after an iteration forms the representative objects for the next iteration. The final set of representative objects gives the respective medoids of the clusters [19–22].
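The following Python sketch mirrors the build-and-swap idea of Algorithm 1 on a numeric data set. It is a simplified illustration rather than the authors' code: it tries every possible swap instead of a single random one, and it measures cost by Euclidean distance.

import numpy as np

def pam(data, k, max_iter=100, rng=np.random.default_rng(0)):
    """Simplified PAM: data is an (n, d) array; returns the medoid indices and cluster labels."""
    n = len(data)
    dist = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=2)   # pairwise distances
    medoids = list(rng.choice(n, size=k, replace=False))                 # arbitrary initial seeds
    for _ in range(max_iter):
        cost = dist[:, medoids].min(axis=1).sum()                        # total cost of current medoids
        improved = False
        for i in range(k):                                               # try replacing each medoid ...
            for o in range(n):                                           # ... with each nonrepresentative object
                if o in medoids:
                    continue
                trial = medoids.copy()
                trial[i] = o
                trial_cost = dist[:, trial].min(axis=1).sum()
                if trial_cost < cost:                                    # swap only if the total cost decreases
                    medoids, cost, improved = trial, trial_cost, True
        if not improved:                                                 # "until no change"
            break
    return medoids, np.argmin(dist[:, medoids], axis=1)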

2.4. Uniform Design
2.4.1. Uniform Design

The main objective of uniform design is to sample a small set of points from a given set of points such that the sampled points are uniformly scattered [23].

Let there be $n$ factors and $q$ levels per factor. When $n$ and $q$ are given, the uniform design selects $q$ combinations from the $q^n$ possible combinations, such that these combinations are uniformly scattered over the space of all possible combinations. The selected combinations are expressed as a uniform array $U(n, q) = [U_{i,j}]_{q \times n}$, where $U_{i,j}$ is the level of the $j$th factor in the $i$th combination and can be calculated by the following formula:
$$U_{i,j} = \left(i\,\sigma^{j-1} \bmod q\right) + 1,$$
where $\sigma$ is a parameter given in Table 1.
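As a small illustration, the array can be generated directly from the formula above; the generator value sigma below is only an example, since the actual values of $\sigma$ come from Table 1, which is not reproduced here.

def uniform_array(n_factors, q, sigma):
    """Return the q x n_factors uniform array with U[i][j] = (i * sigma**(j-1) mod q) + 1."""
    return [[(i * sigma ** j) % q + 1 for j in range(n_factors)]   # j = 0..n_factors-1 plays the role of j-1
            for i in range(1, q + 1)]

# Example with q = 5 levels, 3 factors, and an illustrative generator sigma = 2.
U = uniform_array(3, 5, 2)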

2.4.2. Improved Generation of Initial Population

An algorithm for dividing the solution space and an algorithm for generating the initial population have been designed [23]. However, Algorithm 2 in [23] considers only the dividing of the solution space, not the dividing of the $n$-dimensional variable space. This causes a serious problem: when the dimension $n$ is large, a valid uniform array cannot be generated in Step 2, because the number of levels must be larger than the number of factors $n$. Namely, Algorithm 2 in [23] is only suitable for low dimensional problems. In order to overcome this shortcoming, the dividing of the $n$-dimensional space is introduced to improve the algorithm. The improved algorithm is suitable for not only low dimensional but also high dimensional problems. It is shown as follows.

Algorithm A (improved generation of initial population)

Step 1. Judge whether the chosen number of levels is valid; that is, it must be found in the 1st column of Table 1. If it is not valid, stop and show an error message; otherwise, continue.

Step 2. Execute Algorithm 1 in [23] to divide the solution space into subspaces.

Step 3.1. Judge whether the number of levels is more than the dimension $n$. If yes, turn to Step 3.2; otherwise, turn to Step 3.3.

Step 3.2. Quantize each subspace, and apply the uniform array to sample points.

Step 3.3. Divide the $n$-dimensional space into several parts of equal length, where the number of parts is an integer greater than 1 and less than $n$. The 1st part corresponds to the first block of dimensions, the 2nd part corresponds to the next block, and so forth. If the division leaves a remainder, an additional part corresponds to the remaining dimensions, whose length is necessarily less than that of the other parts. Repeat Step 3.4 for each part.

Step 3.4. Quantize each subspace, and then apply the uniform array to sample points. It is noteworthy that, for the additional shorter part, the number of factors is replaced with the number of remaining dimensions.

2.4.3. Crossover Operator Based on the Uniform Design

The crossover operator based on the uniform design acts on two parents. It quantizes the solution space defined by these parents into a finite number of points, and then it applies the uniform design to select a small sample of uniformly scattered points as the potential offspring.

For any two parents $p_1$ and $p_2$, the minimal and maximal values of each dimension are used to form a new solution space $[l, u]$, where $l_i = \min(p_{1i}, p_{2i})$ and $u_i = \max(p_{1i}, p_{2i})$. Each domain of $[l, u]$ is quantized into $q$ levels, where $q$ is a predefined prime number. Then the uniform design is applied to select a sample of points as the potential offspring. The details of the algorithm can be found in [23].
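A sketch of this crossover is given below, reusing the uniform_array sketch from Section 2.4.1. It is an approximation of the operator in [23]: the number of levels q and the generator sigma are illustrative, and the offspring selection step of [23] is omitted.

import numpy as np

def uniform_crossover(p1, p2, q=5, sigma=2):
    """Quantize the box spanned by two parents into q levels per dimension and
    return q uniformly scattered candidate offspring."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    low, high = np.minimum(p1, p2), np.maximum(p1, p2)       # solution space defined by the parents
    levels = np.linspace(low, high, q)                       # shape (q, D): q quantized levels per dimension
    U = np.array(uniform_array(len(p1), q, sigma)) - 1       # 0-based level indices, shape (q, D)
    return levels[U, np.arange(len(p1))]                     # q potential offspring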

3. The Proposed Algorithm

3.1. Elitist Selection or Elitism [4]

Elitism means that elite individuals cannot be excluded from the mating pool of the population. A strategy can be employed that always includes the best individual of the current population in the next generation, in order to prevent the loss of good solutions found so far. This strategy can be extended to copy the best several individuals to the next generation. This is the essence of elitism. In multiobjective optimization, elitism plays an important role.

Two strategies are often used to implement elitism. One maintains elitist solutions in the population; the other stores elitist solutions in an external secondary list or external archive and reintroduces them to the population. The former copies all nondominated solutions in the current population to the next population and then fills the rest of the next population by selecting from the remaining dominated solutions in the current population. The latter uses an external secondary list or external archive to store the elitist solutions. The external list stores the nondominated solutions found so far; it is updated in each generation by removing elitist solutions dominated by a new solution and adding the new solution if it is not dominated by any existing elitist solution.

The work adopts the second strategy, namely, storing elitist solutions to an external secondary list. Its advantage is that it can preserve and dynamically adjust all the nondominated solutions set till the current generation. The pseudocodes of selecting elitist and updating elitist are, respectively, shown in Algorithms 2 and 3.

population paretocreate(pop)
Input: pop indicates the population.
Output: pareto indicates the non-dominated solutions.
 pareto ← pop;
 for chr1 ∈ pop
   for chr2 ∈ pareto, chr2 ≠ chr1
    if chr1 dominates chr2
     pareto ← pareto − chr2;
    end if
   end for
 end for
 return pareto;

population paretoupdate(offspring, pareto)
Input: offspring indicates the offsprings after performing the crossover operator
   and mutation operator; pareto indicates the non-dominated solutions.
Output: pareto indicates the non-dominated solutions.
 offspring ← call paretocreate(offspring);
 for chr1 ∈ offspring
   nondominated ← true;
   for chr2 ∈ pareto
    if chr1 dominates chr2
     pareto ← pareto − chr2;
    else if chr2 dominates chr1
     nondominated ← false;
    else
     continue
    end if
   end for
   if nondominated
    pareto ← pareto ∪ chr1
   end if
 end for
 return pareto;
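A compact Python rendering of Algorithms 2 and 3 is given below; it is a sketch that assumes all objectives are minimized and that f(·) returns a solution's objective vector.

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def paretocreate(pop, f):
    """Algorithm 2: keep only the non-dominated members of pop."""
    return [p for p in pop if not any(dominates(f(other), f(p)) for other in pop if other is not p)]

def paretoupdate(offspring, pareto, f):
    """Algorithm 3: merge the non-dominated offspring into the external archive."""
    for child in paretocreate(offspring, f):
        if any(dominates(f(p), f(child)) for p in pareto):
            continue                                                   # child is dominated, discard it
        pareto = [p for p in pareto if not dominates(f(child), f(p))]  # drop members the child dominates
        pareto.append(child)
    return pareto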

3.2. Selection Mechanism for the Swarm Based on Uniform Design

This paper adopts the uniform design to select the best $N$ points (the initial swarm pop) from the candidate points generated in Section 2.4.2 and to acquire their objectives $F(\mathrm{pop})$. The detailed steps are as follows.

Algorithm B

Step 1. Calculate each of the objectives for each of the points; normalize each objective as follows:
$$f_j'(x) = \frac{f_j(x) - \min_{y \in P} f_j(y)}{\max_{y \in P} f_j(y) - \min_{y \in P} f_j(y)},$$
where $P$ is the set of points in the current population and $f_j'(x)$ is the normalized $j$th objective.

Step 2. Apply the uniform design to generate $Q$ weight vectors $w_1, w_2, \ldots, w_Q$; each of them is used to compose one fitness function by the following formula, where $Q$ is a design parameter and it is prime:
$$FF_i(x) = \sum_{j=1}^{m} w_{i,j}\, f_j'(x), \qquad i = 1, 2, \ldots, Q.$$

Step 3. Based on each of the $Q$ fitness functions, evaluate the quality of the points. Assume the remainder of $N/Q$ is $r$. For the first $r$ fitness functions, select the best $\lceil N/Q \rceil$ points; for the rest of the fitness functions, select the best $\lfloor N/Q \rfloor$ points. Overall, a total of $N$ points are selected. The objectives of these selected points are correspondingly stored into $F(\mathrm{pop})$.
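The selection mechanism can be sketched as follows. The weight vectors are assumed to be supplied (e.g., derived from a uniform array), and the per-function quota is a plain even split, so this is an illustration of the idea rather than the exact procedure of Algorithm B.

import numpy as np

def select_population(points, objs, n_select, weights):
    """Normalize the objectives, score every point under each weight vector,
    and keep roughly n_select / len(weights) best points per fitness function."""
    objs = np.asarray(objs, float)
    norm = (objs - objs.min(axis=0)) / (objs.max(axis=0) - objs.min(axis=0) + 1e-12)  # Step 1
    quotas = [n_select // len(weights)] * len(weights)
    for i in range(n_select % len(weights)):            # distribute the remainder over the first functions
        quotas[i] += 1
    chosen = []
    for w, quota in zip(weights, quotas):               # Step 3: best `quota` points per fitness function
        fitness = norm @ np.asarray(w, float)           # Step 2: weighted-sum fitness (minimized)
        for idx in np.argsort(fitness):
            if quota == 0:
                break
            if int(idx) not in chosen:
                chosen.append(int(idx))
                quota -= 1
    return [points[i] for i in chosen]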

3.3. Selection Mechanism for Based on PAM

In MOPSO, $gbest$ plays an important role in guiding the entire swarm toward the global Pareto front [24]. In contrast to single objective PSO, which has only one global best $gbest$, MOPSO has multiple Pareto optimal solutions that are nondominated with respect to each other. How to select a suitable $gbest$ for each particle from the Pareto optimal solutions is a key issue.

The paper presents the selection mechanism for $gbest$ based on PAM as follows.

Algorithm C. Assume the population and the number of particles are denoted as pop and $N$, and the Pareto optimal solutions and the number of Pareto optimal solutions are denoted as pareto and $M$.

Step 1. Acquire the number of clusters $k$ according to the following formula:
$$k = \left\lceil k_{\min} + \left(k_{\max} - k_{\min}\right)\frac{t}{t_{\max}} \right\rceil,$$
where $k_{\min}$ and $k_{\max}$, respectively, denote the minimal and maximal value of $k$, namely, $k_{\min} \le k \le k_{\max}$; $t$ and $t_{\max}$ indicate the $t$th iteration and the maximal iteration number. The formula yields a linearly increasing number of clusters so as to accord with the process from coarse search to elaborate search.

Step 2. If $M \le k$, then for each particle in pop, find the nearest Pareto optimal solution as its $gbest$. Otherwise, turn to Step 3.

Step 3. Perform PAM as described in Section 2.3 to partition pop and pareto into $k$ clusters, respectively; their cluster centroids are referred to as the population centroids and the Pareto centroids.

Step 4. For each population centroid, find the nearest Pareto centroid. For all the particles in the cluster represented by that population centroid, their $gbest$ randomly takes one of the Pareto optimal solutions in the cluster represented by the nearest Pareto centroid.
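The guide selection of Algorithm C can be sketched as follows, reusing the pam sketch from Section 2.3. Both the swarm and the archive are clustered in whatever coordinates they are passed in, which is a simplification of the description above.

import numpy as np

def select_gbest(pop, pareto, k, rng=np.random.default_rng()):
    """Return an (N, D) array of guides, one per particle."""
    pop, pareto = np.asarray(pop, float), np.asarray(pareto, float)
    if len(pareto) <= k:                                              # Step 2: few archive members
        d = np.linalg.norm(pop[:, None, :] - pareto[None, :, :], axis=2)
        return pareto[np.argmin(d, axis=1)]                           # nearest Pareto solution directly
    pop_med, pop_lab = pam(pop, k)                                    # Step 3: cluster the swarm ...
    par_med, par_lab = pam(pareto, k)                                 # ... and the archive
    pop_cen, par_cen = pop[pop_med], pareto[par_med]
    link = np.argmin(np.linalg.norm(pop_cen[:, None, :] - par_cen[None, :, :], axis=2), axis=1)
    gbest = np.empty_like(pop)
    for c in range(k):                                                # Step 4: match clusters and draw guides
        members = np.where(par_lab == link[c])[0]
        for i in np.where(pop_lab == c)[0]:
            gbest[i] = pareto[rng.choice(members)]
    return gbest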

3.4. Adjustment of Pareto Optimal Solutions Based on the Uniform Design and PAM

The number of Pareto optimal solutions in the external archive will increasingly enlarge with evolution; namely, the size of the external archive will become very large with the iteration number. This will increase the computation complexity and the execution time of an algorithm. Therefore, the size of the external archive must be controlled. In the meanwhile, Pareto optimal solutions in the external archive are possibly close to each other in the objective space. If the solutions are close to each other, they are nearly the same choice. It is desirable to find the Pareto optimal solutions scattered uniformly over the Pareto front, so that the decision maker can have a variety of choices. Therefore, how to control the size of the external archive and select representative Pareto optimal solutions scattered uniformly over the Pareto front is a key issue.

The paper presents the adjustment algorithm of Pareto optimal solutions based on the uniform design and PAM. The algorithm first implements PAM to partition the Pareto front into clusters in the objective space, then implements the uniform crossover operator on the smallest cluster so as to generate more Pareto optimal solutions in that cluster, and finally keeps all the points in the smaller clusters and discards some points in the larger clusters. The detailed steps of the algorithm are as follows.

Algorithm D. Assume the Pareto optimal solutions, their objective values, and the number of Pareto optimal solutions are denoted as pareto, $F(\mathrm{pareto})$, and $M$. The size of the external archive is assumed as $A$. The number of clusters is $k$.

Step 1. If the numbers of data points in the maximal and the minimal clusters differ too much, or are both very small, select all the data points from the minimal cluster and the same number of points from the cluster that is nearest to the minimal cluster. For each pair of data points, perform the uniform crossover operator described in Section 2.4.3. Update pareto, $F(\mathrm{pareto})$, and $M$ according to Algorithm 3.

Step 2. If $M > A$, implement PAM to partition pareto into $k$ clusters in the objective space and let $avg$ denote the average number of points to select from each cluster (approximately $A/k$). Three situations can be distinguished according to the number of points in each cluster and $avg$, as follows. (i) Situation 1: the numbers of points in all clusters are larger than or equal to $avg$. (ii) Situation 2: the number of data points is larger than or equal to $avg$ in only one cluster. (iii) Situation 3: the number of clusters in which the number of data points is larger than $avg$ is larger than 1 and less than $k$.

Step 3. For Situation 1, sort all clusters according to the number of points they contain in ascending order. Select $avg$ points from each of the first $k-1$ clusters, and select the remaining points from the last cluster. Turn to Step 7.

Step 4. For Situation 2, keep the points of all the clusters in which the number of points is less than or equal to $avg$, and select the remaining points from the single cluster whose size exceeds $avg$. Turn to Step 7.

Step 5. For Situation 3, keep all the points of the clusters in which the number of points is less than or equal to $avg$. Turn to Step 6 to select the remaining points.

Step 6. Recalculate $avg$ as the number of remaining points to select divided by the number of remaining (larger) clusters. According to the new $avg$, turn to Step 3 or Step 4.

Step 7. Save the selected points and terminate the algorithm.
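A simplified sketch of the archive adjustment is given below. It collapses the three situations into one greedy pass (smallest clusters first, recomputing the per-cluster quota as it goes) and omits the uniform-crossover repair of Step 1, so it is an approximation of Algorithm D, not the authors' exact rule.

import numpy as np

def truncate_archive(pareto, objs, capacity, k, rng=np.random.default_rng()):
    """Shrink the archive to `capacity` members: cluster the objective vectors with PAM,
    keep small clusters whole, and subsample the larger clusters."""
    if len(pareto) <= capacity:
        return pareto
    _, labels = pam(np.asarray(objs, float), k)
    clusters = sorted((np.where(labels == c)[0] for c in range(k)), key=len)   # ascending cluster size
    keep, remaining = [], capacity
    for i, cluster in enumerate(clusters):
        quota = remaining // (len(clusters) - i)                 # average number to take from each cluster left
        take = cluster if len(cluster) <= quota else rng.choice(cluster, size=quota, replace=False)
        keep.extend(int(j) for j in take)
        remaining -= len(take)
    return [pareto[j] for j in keep]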

3.5. Thoughts on the Proposed Algorithm

In multiobjective optimization, for $m$ objectives and any two points $x$ and $y$ in the feasible solution space $\Omega$, if every objective satisfies $f_i(x) \le f_i(y)$ and at least one objective satisfies $f_j(x) < f_j(y)$, namely, $x$ is at least as good as $y$ with respect to all the objectives and strictly better than $y$ with respect to at least one objective, then we say that $x$ dominates $y$. If no other solution dominates $x$, then $x$ is called a nondominated solution or Pareto optimal solution.

In single objective problems, there is only one objective to be optimized; therefore, there is only one global best ($gbest$) of the whole swarm. In multiobjective problems, there are multiple consistent or conflicting objectives to be optimized; therefore, there exist very large or even infinite numbers of solutions that cannot dominate each other. These solutions are the nondominated solutions or Pareto optimal solutions, so the whole swarm has more than one candidate $gbest$. How to select a suitable $gbest$ is a key issue.

When it is not possible to find all these solutions, it may be desirable to find as many solutions as possible in order to provide more choices to the decision maker. However, if the solutions are close to each other, they represent nearly the same choice. It is desirable to find Pareto optimal solutions scattered uniformly over the Pareto front, so that the decision maker has a variety of choices. The paper introduces uniform design to ensure that the acquired solutions scatter uniformly over the Pareto front in the objective space.

For MOPSO, the diversity of the population is a very important factor. It has a key impact on the convergence of an algorithm and on the uniform distribution of the Pareto optimal solutions, and it can effectively prevent premature convergence. The paper introduces the PAM clustering algorithm to maintain the diversity of the population. The particles in the same cluster have similar features, whereas those in different clusters have dissimilar features. Thus, choosing particles from different clusters tends to increase the diversity of the population.

3.6. Steps of the Proposed Algorithm

Step 1. According to the population size $N$, determine the number of subintervals $S$ and the population size $q_1$ in each subinterval, such that $S \cdot q_1$ is more than $N$; $q_1$ is a prime and must exist in Table 1. Execute Algorithm A in Section 2.4.2 to generate a temporary population containing $S \cdot q_1$ points.

Step 2. Perform Algorithm B described in Section 3.2 to select the best $N$ points from the $S \cdot q_1$ points as the initial population pop and acquire their objectives $F(\mathrm{pop})$.

Step 3. Initialize the velocity $v_i$ and the personal best $pbest_i$ of each particle (typically $pbest_i = x_i$), where $i = 1, 2, \ldots, N$; $F(pbest_i)$ denotes the objectives of $pbest_i$.

Step 4. According to Algorithm 2, select the elitist or Pareto optimal solutions from pop and store them in the external secondary list, pareto, whose size is assumed to be $A$.

Step 5. Choose a suitable $gbest$ for each of the particles from pareto in terms of Algorithm C described in Section 3.3.

Step 6. Update the position and velocity of each particle, respectively, using formulas (2) and (3), where formula (2) is modified so that the global best is replaced by the particle-specific guide selected in Step 5:
$$v_{id}^{t+1} = w\, v_{id}^{t} + c_1 r_1 \left(pbest_{id} - x_{id}^{t}\right) + c_2 r_2 \left(gbest_{id} - x_{id}^{t}\right).$$

Step 7. For the $d$th dimensional variable of the $i$th particle, if it goes beyond its lower or upper boundary, then it takes the value of the corresponding boundary in that dimension, and the $d$th dimensional value of its velocity takes the opposite value.
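Steps 6 and 7 can be sketched together as below; the particle-specific guide gbest[i] follows the modification of formula (2) described in Step 6, and the boundary rule clips the position and reverses the offending velocity component.

import numpy as np

def move_particles(x, v, pbest, gbest, lower, upper, w, c1=2.0, c2=2.0, rng=np.random.default_rng()):
    """x, v, pbest, gbest are (N, D) arrays; lower and upper bound each dimension."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # modified formula (2): per-particle guide
    x = x + v                                                   # formula (3)
    out = (x < lower) | (x > upper)                             # components that left the search box
    x = np.clip(x, lower, upper)                                # Step 7: take the boundary value
    v[out] *= -1.0                                              # ... and reverse that velocity component
    return x, v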

Step 8. Calculate each of the objectives for each of the particles, and update $F(\mathrm{pop})$ accordingly.

Step 9. Update $pbest_i$ of each particle as follows.

For the $i$th particle, if $x_i$ dominates $pbest_i$, then let $pbest_i = x_i$ and $F(pbest_i) = F(x_i)$; otherwise, $pbest_i$ and $F(pbest_i)$ are kept unchanged. If neither of them is dominated by the other, then randomly select one of them as $pbest_i$ and update its corresponding $F(pbest_i)$.

Step 10. Update the external archive pareto storing the Pareto optimal solutions and its size according to Algorithm 3.

Step 11. Implement Algorithm D described in Section 3.4 to adjust the Pareto optimal solutions such that the number of Pareto optimal solutions is less than or equal to the size $A$ of the external archive.

Step 12. If the stop criterion is satisfied, terminate the algorithm; otherwise, turn to Step 5 and continue.

4. Numerical Results

In order to evaluate the performances of the proposed algorithm, we compare it with two outstanding algorithms, UMOGA [23] and NSGA-II [25].

4.1. Test Problems

Several well-known multiobjective test functions are used to test the performance of the proposed algorithm. Biobjective problems: FON [25, 26], KUR [25, 27], ZDT1, ZDT2, ZDT3, and ZDT6 [25, 28–31]. Three-objective problems: DTLZ1 and DTLZ2 [2, 32].

The definitions of them are as follows.

FON is defined as
$$f_1(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i - \tfrac{1}{\sqrt{3}}\right)^2\right), \qquad f_2(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i + \tfrac{1}{\sqrt{3}}\right)^2\right),$$
where $x_i \in [-4, 4]$, $i = 1, 2, 3$.

This test function has a nonconvex Pareto optimal front.

KUR is defined as
$$f_1(x) = \sum_{i=1}^{n-1}\left(-10\exp\left(-0.2\sqrt{x_i^2 + x_{i+1}^2}\right)\right), \qquad f_2(x) = \sum_{i=1}^{n}\left(|x_i|^{0.8} + 5\sin x_i^3\right),$$
where $n = 3$ and $x_i \in [-5, 5]$.

This test function has a nonconvex Pareto optimal front, in which there are three discontinuous regions.

ZDT1 is defined as
$$f_1(x) = x_1, \qquad g(x) = 1 + \frac{9}{n-1}\sum_{i=2}^{n} x_i, \qquad f_2(x) = g(x)\left(1 - \sqrt{\frac{f_1(x)}{g(x)}}\right),$$
where $n = 30$ and $x_i \in [0, 1]$.

This test function has a convex Pareto optimal front.

ZDT2 is defined as
$$f_1(x) = x_1, \qquad g(x) = 1 + \frac{9}{n-1}\sum_{i=2}^{n} x_i, \qquad f_2(x) = g(x)\left(1 - \left(\frac{f_1(x)}{g(x)}\right)^2\right),$$
where $n = 30$ and $x_i \in [0, 1]$.

This test function has a nonconvex Pareto optimal front.

ZDT3 is defined as
$$f_1(x) = x_1, \qquad g(x) = 1 + \frac{9}{n-1}\sum_{i=2}^{n} x_i, \qquad f_2(x) = g(x)\left(1 - \sqrt{\frac{f_1(x)}{g(x)}} - \frac{f_1(x)}{g(x)}\sin\left(10\pi f_1(x)\right)\right),$$
where $n = 30$ and $x_i \in [0, 1]$.

This test function represents the discreteness feature. Its Pareto optimal front consists of several noncontiguous convex parts. The introduction of the sine function in $f_2$ causes discontinuity in the Pareto optimal front.

ZDT6 is defined as
$$f_1(x) = 1 - \exp(-4x_1)\sin^6(6\pi x_1), \qquad g(x) = 1 + 9\left(\frac{\sum_{i=2}^{n} x_i}{n-1}\right)^{0.25}, \qquad f_2(x) = g(x)\left(1 - \left(\frac{f_1(x)}{g(x)}\right)^2\right),$$
where $n = 10$ and $x_i \in [0, 1]$.

This test function has a nonconvex Pareto optimal front. It includes two difficulties caused by the nonuniformity of the search space. Firstly, the Pareto optimal solutions are nonuniformly distributed along the global Pareto optimal front (the front is biased toward solutions in which $f_1(x)$ is near one). Secondly, the density of the solutions is lowest near the Pareto optimal front and highest away from the front.

DTLZ1 is defined as
$$\begin{aligned} f_1(x) &= \tfrac{1}{2}x_1 x_2 \left(1 + g(x_M)\right), \\ f_2(x) &= \tfrac{1}{2}x_1 \left(1 - x_2\right)\left(1 + g(x_M)\right), \\ f_3(x) &= \tfrac{1}{2}\left(1 - x_1\right)\left(1 + g(x_M)\right), \\ g(x_M) &= 100\left(|x_M| + \sum_{x_i \in x_M}\left((x_i - 0.5)^2 - \cos\left(20\pi(x_i - 0.5)\right)\right)\right), \end{aligned}$$
where $x_i \in [0, 1]$ and $x_M$ denotes the last $n - 2$ decision variables.

This test function presents difficulty in converging to the Pareto optimal hyperplane and contains $(11^{|x_M|} - 1)$ local Pareto optimal fronts.

DTLZ2 is defined as
$$\begin{aligned} f_1(x) &= \left(1 + g(x_M)\right)\cos\left(\tfrac{\pi}{2}x_1\right)\cos\left(\tfrac{\pi}{2}x_2\right), \\ f_2(x) &= \left(1 + g(x_M)\right)\cos\left(\tfrac{\pi}{2}x_1\right)\sin\left(\tfrac{\pi}{2}x_2\right), \\ f_3(x) &= \left(1 + g(x_M)\right)\sin\left(\tfrac{\pi}{2}x_1\right), \\ g(x_M) &= \sum_{x_i \in x_M}\left(x_i - 0.5\right)^2, \end{aligned}$$
where $x_i \in [0, 1]$ and $x_M$ denotes the last $n - 2$ decision variables.

The Pareto optimal solutions of this test function must lie inside the first octant of the unit sphere in a three-objective plot. It is more difficult than DTLZ1.

4.2. Parameter Values

The parameter values of the proposed algorithm, UKMOPSO, are adopted as follows. (i) Parameters for PSO: the inertia weight $w$ decreases linearly within a fixed interval [33, 34], and the acceleration coefficients $c_1$ and $c_2$ are both taken as 2. (ii) Parameters for PAM: the minimal and maximal numbers of clusters are $k_{\min}$ and $k_{\max}$. (iii) Population size: the population size is 200. (iv) Parameters for the uniform design: the number of subintervals is 64; the number of sample points, that is, the population size of each subinterval, is 31. (v) Stopping condition: the algorithm terminates when the number of iterations exceeds the given maximum of 20 generations.

All the parameter values in UMOGA [23] are set equal to the original values in [23]; the parameter values that differ from or are additional to those of UKMOPSO are as follows: the number of subintervals is 16, and the remaining settings, which depend on the number of variables, follow [23].

NSGA-II in [25] adopted a population size of 100, a crossover probability of 0.8, a mutation probability of $1/n$ (where $n$ is the number of variables), and a maximum of 250 generations. In order to make the comparisons fair, we have used a population size of 100 and a maximum of 40 generations in NSGA-II, so that the total number of function evaluations is the same in NSGA-II, UKMOPSO, and UMOGA.

4.3. Performance Metric

Based on the assumption that the true Pareto front of a test problem is known, many kinds of performance metrics have been proposed and used by many researchers, such as [2, 9, 24, 25, 29, 35–40]. Three of them are adopted in this paper to compare the performance of the proposed algorithm with the others. The first metric is the $C$ metric [29, 38]. It is taken as a quantitative metric of solution quality and is often used to show the extent to which the outcomes of one algorithm dominate the outcomes of another. It is defined as
$$C(A, B) = \frac{\left|\{\,b \in B \mid \exists\, a \in A : a \preceq b\,\}\right|}{|B|},$$
where $a \preceq b$ represents $b$ being dominated by or equal to $a$; $A$ represents the Pareto front obtained by Algorithm $A$, and $B$ represents the Pareto front obtained by Algorithm $B$ in a typical run. Furthermore, $|B|$ represents the number of elements of $B$.

The metric value $C(A, B) = 1$ means that all points in $B$ are dominated by or equal to points in $A$. In contrast, $C(A, B) = 0$ means that none of the points in $B$ are covered by the ones in $A$.

The second metric is the IGD metric (Inverted Generational Distance) [39–41]. It is the mean value of the distances from a set of solutions uniformly distributed along the true Pareto front to the set of solutions obtained by an algorithm in objective space. Let $P^*$ be a set of uniformly distributed points along the true PF (Pareto front). Let $A$ be an approximate set to the PF; the average distance from $P^*$ to $A$ is defined as
$$\mathrm{IGD}(A, P^*) = \frac{\sum_{v \in P^*} d(v, A)}{|P^*|},$$
where $d(v, A)$ is the minimum Euclidean distance between $v$ and the points in $A$. If $P^*$ is large enough to represent the PF very well, $\mathrm{IGD}(A, P^*)$ measures both the diversity and the convergence of $A$ in a sense. To have a low value of $\mathrm{IGD}(A, P^*)$, the set $A$ must be very close to the PF and cannot miss any part of the whole PF.

The smaller the IGD value for the set of obtained solutions is, the better the performance of the algorithm will be.

The third measure is the maximum spread (MS) [2, 24, 35], which was proposed in [42, 43]. It measures how well the true Pareto front (PFtrue) is covered by the discovered Pareto front (PFknown) through hyperboxes formed by the extreme function values observed in PFtrue and PFknown. It is defined as
$$\mathrm{MS} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\frac{\min\left(f_i^{\max}, F_i^{\max}\right) - \max\left(f_i^{\min}, F_i^{\min}\right)}{F_i^{\max} - F_i^{\min}}\right)^2},$$
where $m$ is the number of objectives, $f_i^{\max}$ and $f_i^{\min}$ are the maximum and minimum values of the $i$th objective in PFknown, respectively, and $F_i^{\max}$ and $F_i^{\min}$ are the maximum and minimum values of the $i$th objective in PFtrue, respectively.

Note that MS $= 1$ when the hyperbox spanned by PFknown fully covers that of PFtrue. Algorithms with larger MS values are desirable, and MS $= 1$ means that the true Pareto front is totally covered by the obtained Pareto front.
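For reference, the three metrics can be computed as in the sketch below (reusing the dominates helper from the sketch in Section 3.1); the fronts are given as arrays of objective vectors, and the exact tie handling in the C metric is an assumption.

import numpy as np

def c_metric(A, B):
    """Fraction of points in B that are dominated by, or equal to, some point in A."""
    covers = lambda a, b: dominates(a, b) or tuple(a) == tuple(b)
    return sum(any(covers(a, b) for a in A) for b in B) / len(B)

def igd(true_front, approx_front):
    """Mean distance from the uniformly sampled true front to the nearest obtained point."""
    P, A = np.asarray(true_front, float), np.asarray(approx_front, float)
    d = np.linalg.norm(P[:, None, :] - A[None, :, :], axis=2)
    return d.min(axis=1).mean()

def maximum_spread(known_front, true_front):
    """MS: overlap of the hyperboxes spanned by the obtained and the true fronts."""
    K, T = np.asarray(known_front, float), np.asarray(true_front, float)
    overlap = np.minimum(K.max(0), T.max(0)) - np.maximum(K.min(0), T.min(0))
    return float(np.sqrt(np.mean((overlap / (T.max(0) - T.min(0))) ** 2)))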

4.4. Results

For each test problem, we run the proposed algorithm (called UKMOPSO) for 30 independent runs and compare its performance with UMOGA [23] and NSGA-II [25]. The values of the $C$ metric, the IGD metric, and the MS metric are shown in Tables 2, 3, 4, and 5. The Pareto fronts obtained by the algorithms on several test functions are illustrated in Figures 1–6.

For brevity, $C(\mathrm{UKMOPSO}, \mathrm{UMOGA})$, $C(\mathrm{UMOGA}, \mathrm{UKMOPSO})$, $C(\mathrm{UKMOPSO}, \mathrm{NSGA\text{-}II})$, and $C(\mathrm{NSGA\text{-}II}, \mathrm{UKMOPSO})$ are denoted by shorthand labels in Tables 2 and 3.

As shown in Table 2, for the test functions FON and KUR, the values $C(\mathrm{UKMOPSO}, \mathrm{UMOGA}) = 0.768$ and $0.929$ mean that 76.8% and 92.9% of the solutions obtained by UMOGA are dominated by those obtained by UKMOPSO. Similarly, $C(\mathrm{UMOGA}, \mathrm{UKMOPSO}) = 0.01$ means that only 1% of the solutions obtained by UKMOPSO are dominated by those obtained by UMOGA. For DTLZ1 and DTLZ2, $C(\mathrm{UKMOPSO}, \mathrm{UMOGA}) = 1$ and $C(\mathrm{UMOGA}, \mathrm{UKMOPSO}) = 0$ mean that all solutions obtained by UMOGA are dominated by those obtained by UKMOPSO, while none of the solutions from UKMOPSO is dominated by those from UMOGA. This shows that the solution quality of UKMOPSO is much better than that of UMOGA for these test functions. For ZDT1, ZDT2, ZDT3, and ZDT6, the metric values do not differ much between UKMOPSO and UMOGA, meaning that the solution qualities of the two algorithms are almost identical.

From Table 3, we can see that, for ZDT1, ZDT2, ZDT3, and ZDT6, $C(\mathrm{UKMOPSO}, \mathrm{NSGA\text{-}II}) = 1$ and $C(\mathrm{NSGA\text{-}II}, \mathrm{UKMOPSO}) = 0$ mean that all solutions obtained by NSGA-II are dominated by those obtained by UKMOPSO, while none of the solutions from UKMOPSO is dominated by those from NSGA-II. This shows that the solution quality of UKMOPSO is much better than that of NSGA-II for these test functions. For the rest of the test functions, the solution qualities of the two algorithms are almost identical.

In Table 4, for all test problems, the IGD values of UKMOPSO are the smallest among UKMOPSO, UMOGA, and NSGA-II. This means that the PF found by UKMOPSO is the nearest to the true PF compared with the PFs obtained by the other two algorithms; namely, the performance of UKMOPSO is the best of the three algorithms. The IGD values of UMOGA are smaller than those of NSGA-II for all test functions, which means that the PF found by UMOGA is closer to the true PF than the PF obtained by NSGA-II.

From Table 5, we can see that the MS values of UKMOPSO are close to 1 for every test function, which means that almost all of the true PF is covered by the PF obtained by UKMOPSO. UMOGA behaves similarly to UKMOPSO. The MS values of NSGA-II are much smaller than those of UKMOPSO and UMOGA, especially for ZDT2.

Figures 1–6 all demonstrate that the PF found by UKMOPSO is the nearest to the true PF and is the most uniformly scattered among those obtained by the three algorithms. Most of the points in the PF found by UKMOPSO overlap the points of the true PF, which means that the solution quality obtained by UKMOPSO is very high.

4.5. Influence of the Uniform Crossover and PAM

In order to find out the influence of the uniform crossover on the proposed algorithm, we compare the distribution and number of Pareto optimal solutions before and after performing the uniform crossover. One of the simulating results on the test function ZDT1 is shown in Figure 7.

From Figure 7, it can be seen that, before and after performing the uniform crossover, the number of data points in the 1st cluster and the 2nd cluster varies from 7 to 15 and from 19 to 26, respectively; namely, many new Pareto optimal solutions that had not been found before performing the uniform crossover are generated, and the difference in the number of data points between the two clusters is decreased. This directly improves the uniformity of the PF and yields more uniformly distributed Pareto optimal solutions.

PAM is used to determine which Pareto optimal solutions are to be removed from or inserted into the external archive, so as to maintain the diversity of the Pareto optimal solutions. The diversity can be computed by the "distance-to-average-point" measure [44], defined as
$$\mathrm{diversity}(S) = \frac{1}{|S|}\sum_{i=1}^{|S|}\sqrt{\sum_{j=1}^{D}\left(x_{ij} - \bar{x}_j\right)^2},$$
where $S$ is the population, $|S|$ is the swarm size, $D$ is the dimensionality of the problem, $x_{ij}$ is the $j$th value of the $i$th particle, and $\bar{x}_j$ is the $j$th value of the average point $\bar{x}$.
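A direct implementation of this measure (as reconstructed above) is straightforward:

import numpy as np

def diversity(swarm):
    """Mean Euclidean distance of the particles from the swarm's average point."""
    swarm = np.asarray(swarm, float)
    centre = swarm.mean(axis=0)
    return float(np.linalg.norm(swarm - centre, axis=1).mean())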

If the number of Pareto optimal solutions is less than or equal to the size of the external archive, PAM has no influence on the proposed algorithm. Otherwise, it is used to select different types of Pareto optimal solutions from the several clusters, so the diversity of the Pareto optimal solutions will certainly increase. We monitored the diversity of the Pareto optimal solutions before and after performing PAM on ZDT1 at a certain iteration; the values are, respectively, 87.62 and 117.52. This demonstrates that PAM can improve the diversity of the Pareto optimal solutions.

5. Conclusion and Future Work

In this paper, a multiobjective particle swarm optimization based on PAM and uniform design is presented. It first implements PAM to partition the Pareto front into clusters in the objective space and then applies the uniform crossover operator to the smallest cluster so as to generate more Pareto optimal solutions in that cluster. When the number of Pareto optimal solutions is larger than the size of the external archive, PAM is used to determine which Pareto optimal solutions are removed from or retained in the external archive: all the points in the smaller clusters are kept and some points in the larger clusters are discarded. This ensures that each of the clusters contains approximately the same number of data points. Therefore, the diversity of the Pareto optimal solutions increases, and they scatter uniformly over the Pareto front. The results of the experimental simulations performed on several well-known test problems indicate that the proposed algorithm clearly outperforms the other two algorithms.

The algorithm is still being enhanced and improved. One direction is to use a more efficient or approximate clustering algorithm to reduce the execution time of the algorithm. Another is to extend its application scope.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by Guangxi Natural Science Foundation (no. 2013GXNSFAA019337), Guangxi Universities Key Project of Science and Technology Research (no. KY2015ZD099), Key Project of Guangxi Education Department (no. 2013ZD055), Scientific Research Starting Foundation for the PhD Scholars of Yulin Normal University (no. G2014005), Special Project of Yulin Normal University (no. 2012YJZX04), and Key Project of Yulin Normal University (no. 2014YJZD05). The authors are grateful to the anonymous referees for a careful checking of the details and for helpful comments that improved this paper.