Mathematical Problems in Engineering

Volume 2015, Article ID 126404, 17 pages

http://dx.doi.org/10.1155/2015/126404

## Multiobjective Particle Swarm Optimization Based on PAM and Uniform Design

School of Computer Science and Engineering, Guangxi Universities Key Lab of Complex System Optimization and Big Data Processing, Yulin Normal University, Yulin 537000, China

Received 26 November 2014; Accepted 3 March 2015

Academic Editor: Pandian Vasant

Copyright © 2015 Xiaoshu Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In multiobjective particle swarm optimization (MOPSO), to maintain or increase the diversity of the swarm and to help the algorithm jump out of local optima, the PAM (Partitioning Around Medoids) clustering algorithm and uniform design are introduced to maintain, respectively, the diversity of the Pareto optimal solutions and the uniformity of the selected Pareto optimal solutions. In this paper, a novel algorithm, multiobjective particle swarm optimization based on PAM and uniform design, is proposed. It differs from existing algorithms in that PAM and uniform design are introduced into MOPSO for the first time. Experimental results on several test problems illustrate that the proposed algorithm is efficient.

#### 1. Introduction

Many real-world optimization problems often need to simultaneously optimize multiple objectives that are incommensurable and generally conflict with each other. They can usually be written as

$$\min_{x \in \Omega} F(x) = \left(f_1(x), f_2(x), \ldots, f_m(x)\right),$$

where $x = (x_1, x_2, \ldots, x_n)$ is a variable vector in the real $n$-dimensional space, $\Omega \subseteq \mathbb{R}^n$ is the feasible solution space, and $m$ is the number of the objective functions. Since the pioneering attempt of Schaffer [1] to solve multiobjective optimization problems, many kinds of multiobjective evolutionary algorithms (MOEAs), ranging from traditional evolutionary algorithms to newly developed techniques, have been proposed and widely used in different applications [2–4].

Multiobjective evolutionary algorithms (MOEAs) have become well-known methods for solving multiobjective optimization problems that are too complex to be solved by exact methods. The main challenge for MOEAs is to satisfy three goals at the same time: (1) the obtained solutions are as near to the true Pareto front as possible, which means the convergence of MOEAs; (2) the nondominated solutions are evenly scattered along the Pareto front, which means the diversity of MOEAs; and (3) MOEAs obtain the Pareto optimal solutions within a limited number of evolution steps [5].

The particle swarm optimization (PSO) algorithm, like MOEAs, is an intelligent optimization algorithm. It was proposed by Eberhart and Kennedy in 1995 [6, 7] and originates from the sharing and exchanging of information among individual birds in the process of searching for food: each individual can benefit from the discovery and flight experience of the others. PSO seems particularly suitable for multiobjective optimization mainly because of its high speed of convergence [8, 9].

PAM is one of the $k$-medoids clustering algorithms based on partitioning methods. It attempts to divide $n$ data objects into $k$ partitions. Namely, it can divide a swarm into $k$ subswarms with different features.

This paper proposes a novel multiobjective particle swarm optimization based on PAM and uniform design, abbreviated as UKMOPSO. It first uses PAM to partition the data points into several clusters, and then a crossover operator based on the uniform design is applied to the smallest cluster to generate some new data points. When the number of Pareto optimal solutions exceeds the size of the external archive, PAM is used to determine which Pareto solutions are to be removed or retained. The results of experimental simulations on several well-known test problems indicate that the proposed algorithm is efficient.
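The overall flow described above can be outlined schematically. Every function below is a simplified stand-in, not the authors' operators: `pam_partition` uses random assignment instead of real PAM, and `uniform_crossover` uses random convex blending instead of the uniform-design construction; the names are placeholders chosen for this sketch.

```python
import random

def pam_partition(swarm, k):
    """Stand-in for PAM: random partition of the swarm into k clusters."""
    clusters = [[] for _ in range(k)]
    for p in swarm:
        clusters[random.randrange(k)].append(p)
    return clusters

def uniform_crossover(cluster, n_new):
    """Stand-in for the uniform-design crossover: blend random parent pairs."""
    out = []
    for _ in range(n_new):
        a, b = random.choice(cluster), random.choice(cluster)
        w = random.random()
        out.append([w * ai + (1 - w) * bi for ai, bi in zip(a, b)])
    return out

def ukmopso_iteration(swarm, k=3, n_new=2):
    """One schematic iteration: partition the swarm, then reinforce
    the smallest (least-represented) cluster with new points."""
    clusters = [c for c in pam_partition(swarm, k) if c]  # drop empties
    smallest = min(clusters, key=len)
    return swarm + uniform_crossover(smallest, n_new)
```

The point of the sketch is only the control flow: clustering identifies the sparsest region of the swarm, and new points are generated there to preserve diversity.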

The rest of this paper is organized as follows. Section 2 states the preliminaries of the proposed method. Section 3 presents our method in detail. Section 4 gives the numerical results of the proposed method. The conclusion of the work is made in Section 5.

#### 2. Preliminaries

In this section, we describe some concepts concerning particle swarm optimization, multiobjective particle swarm optimization, PAM, and uniform design.

##### 2.1. Particle Swarm Optimization Algorithms

In the $D$-dimensional search space, the position and velocity of the $i$th particle are, respectively, represented as $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. The optimal positions of the $i$th particle and the whole swarm, namely, the individual optimum and the global optimum, are denoted as $p_i$ and $p_g$, respectively. The individuals or particles in the swarm update their velocities and positions according to the following formulas:

$$v_i = w v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (p_g - x_i),$$

$$x_i = x_i + v_i,$$

where the inertia weight coefficient $w$ indicates the ability to maintain the previous speed; the acceleration coefficients $c_1$ and $c_2$ are used to coordinate the degrees of tracking the individual optimum and the global optimum; and $r_1$ and $r_2$ are two random numbers drawn from the uniform distribution on the interval $[0, 1]$.

The velocity update equation consists of three components: the previous velocity, a cognitive component, and a social component. They are mainly controlled by three parameters: the inertia weight $w$ and the two acceleration coefficients $c_1$ and $c_2$.

From the theoretical analysis of the trajectory of particles in PSO [10], the trajectory of a particle converges to a weighted mean of $p_i$ and $p_g$. As a particle converges, it will "fly" toward its individual best position and the global best position. According to the update equations, the individual best position of each particle will gradually move closer to the global best position. Therefore, all the particles will converge onto the global best position.
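The update rules above can be sketched in Python. The parameter values (`w=0.7`, `c1=c2=1.5`) are illustrative choices, not values taken from the paper:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update for a single particle.

    x, v, pbest are lists of length D; gbest is the swarm's best position.
    Implements: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x <- x + v.
    """
    r1, r2 = random.random(), random.random()
    new_v = [w * vd + c1 * r1 * (pb - xd) + c2 * r2 * (gb - xd)
             for vd, xd, pb, gb in zip(v, x, pbest, gbest)]
    new_x = [xd + vd for xd, vd in zip(x, new_v)]
    return new_x, new_v
```

Note that when a particle sits exactly at both its individual best and the global best, the cognitive and social terms vanish and only the inertia term `w*v` remains, which is why `w < 1` damps the motion over time.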

##### 2.2. Multiobjective Particle Swarm Optimization

MOPSO was proposed by Coello et al.; it adopts swarm intelligence to optimize MOPs and uses the Pareto optimal set to guide each particle's flight [9].

Particle swarm optimization has been applied to a large number of single-objective problems, and many researchers are interested in solving multiobjective problems (MOPs) using PSO. To extend a single-objective PSO to MOPSO, the guide must be redefined in order to obtain a set of nondominated solutions (the Pareto front). In MOPSO, the Pareto optimal solutions should be used to determine the guide for each particle. How to select suitable local guides that attain both convergence and diversity of solutions therefore becomes an important issue.
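The notion of nondominance that underlies guide selection can be made concrete with a small sketch (minimization is assumed, as in the formulation of Section 1):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b)) and
            any(ai < bi for ai, bi in zip(a, b)))

def nondominated(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, with objective vectors `[(1, 2), (2, 1), (2, 2), (3, 3)]`, the points `(2, 2)` and `(3, 3)` are dominated by `(1, 2)`, so only `(1, 2)` and `(2, 1)` remain on the front. A MOPSO guide is then chosen from this nondominated set.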

There have been several publications using PSO to solve MOPs. A dynamic neighborhood PSO was proposed [11], which optimizes only one objective at a time and uses a scheme similar to lexicographic ordering. In addition, this approach proposes an unconstrained elite archive, named the dominated tree, to store the nondominated solutions. However, it is difficult for this approach to pick the best local guide from the set of Pareto optimal solutions for each particle of the population. A strategy for finding suitable local guides for each particle was proposed and named the Sigma method; the local guide is explicitly assigned to specific particles according to the Sigma value [12]. This yields the desired diversity and convergence, but the result is still not close enough to the Pareto front. On the other hand, an enhanced archiving technique was proposed to maintain the best (nondominated) solutions found during the course of a MO algorithm [13]; it shows that using archives in PSO for MO problems directly improves their performance. A parallel vector evaluated particle swarm optimization (VEPSO) method for multiobjective problems was proposed [14], which adopted a ring migration topology and a PVE system running 2–10 CPUs simultaneously to find nondominated solutions. In [9], the MOPSO method was proposed; it incorporates Pareto dominance and a special mutation operator to solve MO problems [15].

Recently, a hybrid multiobjective algorithm combining a genetic algorithm (GA) and particle swarm optimization (PSO) was proposed [16]. A multiobjective particle swarm optimization based on self-update and a grid strategy was proposed for improving the quality of the Pareto set [17]. A dynamic self-adaptive multiobjective particle swarm optimization (DSAMOPSO) method was proposed to solve binary-state multiobjective reliability redundancy allocation problems [18]; it used a modified nondominated sorting genetic algorithm (NSGA-II) method and a customized time-variant multiobjective particle swarm optimization method to generate nondominated solutions.

The MOPSO method is becoming more popular due to its simplicity of implementation and its ability to quickly converge to a reasonably acceptable solution for problems in science and engineering.

##### 2.3. PAM Algorithm

There are many clustering methods available in data mining. Typical clustering analysis methods include partition-based clustering, hierarchical clustering, density-based clustering, grid-based clustering, and model-based clustering.

The most frequently used partition-based clustering methods are $k$-means and $k$-medoids. In contrast to the $k$-means algorithm, $k$-medoids chooses actual data points as centers (medoids), which makes the $k$-medoids method more robust than $k$-means in the presence of noise and outliers. The reason is that medoids are less influenced by outliers or other extreme values than means. PAM (Partitioning Around Medoids) is the first and the most frequently used $k$-medoids algorithm. It is shown in Algorithm 1.
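The core PAM idea can be sketched as follows: repeatedly try swapping a medoid with a non-medoid point and keep any swap that lowers the total distance of points to their nearest medoid. This is an illustration of the mechanism, not a reproduction of the paper's Algorithm 1:

```python
import random

def pam(points, k, dist):
    """Minimal PAM sketch: greedy medoid swapping until no swap improves.

    points: list of data objects; dist: pairwise distance function.
    Returns (medoid indices, nearest-medoid label for each point).
    """
    medoids = random.sample(range(len(points)), k)

    def cost(meds):
        # total distance of every point to its nearest medoid
        return sum(min(dist(p, points[m]) for m in meds) for p in points)

    best = cost(medoids)
    improved = True
    while improved:
        improved = False
        for mi in range(k):
            for h in range(len(points)):
                if h in medoids:
                    continue
                trial = medoids[:mi] + [h] + medoids[mi + 1:]
                c = cost(trial)
                if c < best:  # keep the swap only if total cost drops
                    medoids, best, improved = trial, c, True
    labels = [min(medoids, key=lambda m: dist(p, points[m])) for p in points]
    return medoids, labels
```

Because each accepted swap strictly decreases the total cost and there are finitely many medoid configurations, the loop always terminates; the robustness to outliers comes from the fact that medoids must be actual data points.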