Computational Intelligence and Neuroscience

Volume 2016 (2016), Article ID 1898527, 10 pages

http://dx.doi.org/10.1155/2016/1898527

## R2-Based Multi/Many-Objective Particle Swarm Optimization

^{1}Facultad de Ingeniería y Ciencias, Universidad Autónoma de Tamaulipas, 87000 Victoria, TAMPS, Mexico

^{2}Cinvestav Tamaulipas, Km. 5.5 Carretera Ciudad Victoria-Soto La Marina, 87130 Victoria, TAMPS, Mexico

Received 7 November 2015; Revised 26 January 2016; Accepted 16 February 2016

Academic Editor: Ezequiel López-Rubio

Copyright © 2016 Alan Díaz-Manríquez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We propose to couple the R2 performance measure and Particle Swarm Optimization in order to handle multi/many-objective problems. Our proposal shows that, through a well-designed interaction process, the metaheuristic can remain almost unaltered, and, thanks to the performance measure, neither an external archive nor Pareto dominance is needed to guide the search. The proposed approach is validated using several test problems and performance measures commonly adopted in the specialized literature. Results indicate that the proposed algorithm produces results that are competitive with respect to those obtained by four well-known MOEAs. Additionally, we validate our proposal on many-objective optimization problems. In these problems, our approach showed its main strength, since it could outperform another well-known indicator-based MOEA.

#### 1. Introduction

Evolutionary Algorithms (EAs) encompass a set of bioinspired techniques that perform a multidimensional search and have been found to be effective in locating solutions close to the global optimum, even in highly rugged search spaces. EAs are suitable alternatives for solving problems with two or more objectives (the so-called multiobjective optimization problems, or MOPs for short), since they are able to simultaneously explore different regions of the search space and obtain several points of the trade-off surface in a single run. Since the mid-1980s, the field of evolutionary multiobjective optimization (EMO) has grown, and a wide variety of MOEAs (multiobjective EAs) for solving real applications have been proposed [1–3].

Moreover, Pareto dominance (PD) has been successfully used to solve MOPs for several years. However, as the number of objectives increases, the proportion of nondominated solutions grows exponentially [4–6]. Therefore, it very quickly becomes impossible to distinguish individuals for selection purposes. Such behavior dilutes the selection pressure, since the choice of solutions is performed in a practically random way.

For this reason, the EMO community has developed several approaches to overcome this shortcoming in the fitness assignment process [7, 8]. Several of these approaches drive the search using a quality assessment indicator. This idea has become more popular in the last few years, mainly because of the growing interest in tackling multiobjective problems with four or more objectives (commonly called “many-objective optimization problems,” or MaOPs for short), for which indicator-based MOEAs seem to be particularly suitable [9]. The idea behind indicator-based selection is to identify the solutions that contribute the most to the improvement of the performance indicator adopted in the selection mechanism [10, 11].

The Indicator-Based Evolutionary Algorithm (IBEA) proposed by Zitzler and Künzli [12] is the most general version of an algorithm of this sort. Instead of deriving fitness directly from the objective values of the problem at hand, these methods minimize or maximize (whichever the case) a performance indicator. Originally, IBEA was proposed to be used with two different performance measures: the hypervolume [13] and the ϵ-indicator. Among other results, Zitzler and Künzli found that no additional diversity preservation mechanism was required when ranking the solutions with an indicator. Similarly, Beume et al. [14] proposed the S Metric Selection Evolutionary Multiobjective Optimization Algorithm (SMS-EMOA). In this case, Beume et al. replaced the Crowding Distance with the hypervolume indicator. Ishibuchi et al. [15] presented a novel approach that iteratively optimizes each objective separately and searches for the solution that contributes the most to the hypervolume indicator. This approach was designed to search for a small number of nondominated solutions spread along the entire Pareto front. Igel et al. [16] proposed the Multiobjective Covariance Matrix Adaptation Evolution Strategy (MO-CMA-ES). This algorithm uses a set of single-objective optimizers as its population. Each optimizer generates new solutions that can be accepted back into the population according to their ranking with respect to PD or their contribution to the hypervolume.

Although its nice theoretical properties have positioned the hypervolume as the most popular choice for implementing indicator-based MOEAs [14], it is well known that its computational cost increases considerably as the number of objectives grows. To overcome this drawback, some researchers [9] have opted for approximating the hypervolume. However, this sort of scheme can decrease the accuracy of the selection mechanism. Rodríguez Villalobos and Coello Coello [17] recently proposed Δp-differential evolution, in which the authors adopted the Δp performance indicator [18] as an alternative to the hypervolume. The fitness of each solution is assigned through its contribution to Δp. The Δp indicator requires a reference set to be calculated, and the authors used the nadir point and the ideal vector to create such a reference set. The authors reported that this approach could obtain competitive results with respect to other MOEAs (including SMS-EMOA), having as its main advantage a very low computational cost, even when dealing with many-objective problems.

Moreover, PSO has been used to solve a wide variety of problems [19–21]. However, works that use an indicator to guide the search of a PSO are very scarce in the literature. In [22], Padhye uses the contribution to the hypervolume to select the gbest and the pbest of the PSO. However, in that work the hypervolume is not used to select the particles that advance to the next iteration. The algorithm is applied to the topology optimization of a compliant mechanism. In [23], the authors proposed a hybridization of MOPSO with a local search operator. Their MOPSO uses an indicator to truncate an external archive of solutions. Two variants were proposed: one that uses the ϵ-indicator and another that adopts the hypervolume performance measure. Both variants reached similar results. However, they were validated only on problems with a low number of objectives.

Another suitable performance indicator is the R2 indicator [24]. Recent works have reported that its desirable properties (i.e., it is weakly monotonic, it produces well-distributed solutions, and it can be computed quickly) make the R2 indicator a viable candidate to be incorporated into an indicator-based MOEA [10, 25–29]. In such works, its behavior has been compared with that of the hypervolume, concluding that both behave similarly, but R2 has a considerably lower computational cost.

In [27], the authors proposed an approach to quickly rank the population (of a genetic algorithm and of a differential evolution algorithm) using the R2 indicator. Although their approach is able to work with many-objective problems, it uses an external archive; therefore, the metaheuristic at hand has to be heavily modified in order to adopt it. In this paper, we propose an R2-based multiobjective approach that maintains the nature of PSO while empowering it to handle many-objective problems.

In this work, we propose to use the R2 indicator to guide the search of a MOPSO. The new approach is then compared with respect to some state-of-the-art MOEAs taken from the specialized literature. Furthermore, we present a scalability study to analyze the behavior of our proposed approach as the number of objectives increases. The remainder of this work is organized as follows. Details of the R2 indicator are given in Section 2. Section 3 presents our proposed approach. A comparative study with respect to other algorithms is presented in Section 4. Finally, conclusions and future work are given in Section 5.

#### 2. The R2 Indicator

The family of R2 indicators [24] is based on utility functions, which map an objective vector to a scalar value, in order to measure the quality of two approximations of the Pareto front.

*Definition 1. *For a set $U$ of utility functions, a probability distribution $P$ on $U$, and a reference set $R$, the R2 indicator of a solution set $A$ is defined as
$$R2(R, A, U, P) = E\left[\max_{r \in R} u(r)\right] - E\left[\max_{a \in A} u(a)\right],$$
where the expectation is taken with respect to $u \sim P$.

*Definition 2. *For a discrete and finite set $U$ and a uniform distribution over $U$, the R2 indicator can be defined as [30]
$$R2(R, A, U) = \frac{1}{|U|} \sum_{u \in U} \left( \max_{r \in R} u(r) - \max_{a \in A} u(a) \right).$$
Since the first summand ($\max_{r \in R} u(r)$) is constant if we assume a constant $R$, it can be dropped in order to obtain a unary indicator (called R2 for simplicity) [25].

*Definition 3. *For a constant reference set, the R2 indicator can be defined as a unary indicator as follows:
$$R2(A, U) = -\frac{1}{|U|} \sum_{u \in U} \max_{a \in A} u(a).$$
We selected the Tchebycheff function as the utility function of our approach. This function works well when optimizing different types of Pareto fronts. Although this aggregation function is not smooth for continuous multiobjective problems, our algorithm does not need to compute its derivative. The Tchebycheff function can be defined as $u_{\lambda}(z) = -\max_{1 \le j \le m} \lambda_j \, |z_j^* - z_j|$, where $\lambda = (\lambda_1, \ldots, \lambda_m)$ is a weight vector and $z^*$ is a utopian point (an objective vector that is not dominated by any feasible search point).
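As a minimal sketch of the negated Tchebycheff utility just described, the following Python snippet evaluates it for a hypothetical weight vector and utopian point (these concrete values are illustrative, not taken from the paper):

```python
# Negated Tchebycheff utility: u_lambda(z) = -max_j lambda_j * |z*_j - z_j|.
# Larger (less negative) values indicate objective vectors closer to the
# utopian point z* in the weighted Chebyshev sense.
def tchebycheff_utility(z, weights, utopian):
    return -max(w * abs(zs - zj) for w, zs, zj in zip(weights, utopian, z))

# Hypothetical bi-objective example with the utopian point at the origin.
u = tchebycheff_utility([0.5, 0.5], weights=[0.5, 0.5], utopian=[0.0, 0.0])
```

Note that only absolute differences to the utopian point are used, so no derivative of the utility function is ever required.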

*Definition 4. *The R2 indicator of a solution set $A$ for a given set of weight vectors $\Lambda$ and utopian point $z^*$ is defined as
$$R2(A, \Lambda, z^*) = \frac{1}{|\Lambda|} \sum_{\lambda \in \Lambda} \min_{a \in A} \left\{ \max_{1 \le j \le m} \lambda_j \, |z_j^* - a_j| \right\}.$$
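A direct Python sketch of this unary R2 indicator follows; the solution set, weight vectors, and utopian point below are hypothetical values chosen only for illustration:

```python
# Unary R2 indicator: average, over a set of weight vectors, of the best
# (minimal) weighted Chebyshev distance achieved by any solution in A.
# Lower values indicate a better approximation set.
def r2_indicator(A, weight_vectors, utopian):
    total = 0.0
    for lam in weight_vectors:
        total += min(
            max(l * abs(zs - aj) for l, zs, aj in zip(lam, utopian, a))
            for a in A
        )
    return total / len(weight_vectors)

# Hypothetical bi-objective approximation set and three weight vectors.
A = [[0.2, 0.8], [0.8, 0.2]]
W = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
score = r2_indicator(A, W, utopian=[0.0, 0.0])
```

Each weight vector rewards a different region of the front, which is why a well-spread set of solutions attains a low R2 value.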

*Definition 5. *Finally, the contribution of one solution $a$ to the R2 indicator of a set $A$ can be defined as
$$C_{R2}(a, A) = R2(A \setminus \{a\}, \Lambda, z^*) - R2(A, \Lambda, z^*).$$
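The contribution can be sketched in Python as the increase in R2 caused by removing the solution; the helper `r2` re-implements the unary indicator so that the snippet is self-contained, and all data values are hypothetical:

```python
# Unary R2 of a set A for weight vectors W and utopian point z_star.
def r2(A, W, z_star):
    return sum(
        min(max(l * abs(z - x) for l, z, x in zip(lam, z_star, a)) for a in A)
        for lam in W
    ) / len(W)

# Contribution of solution a: how much R2 worsens (increases) if a is
# removed from A. Larger contributions mark more valuable solutions.
def r2_contribution(a, A, W, z_star):
    rest = [x for x in A if x is not a]   # remove a by identity
    return r2(rest, W, z_star) - r2(A, W, z_star)

A = [[0.2, 0.8], [0.8, 0.2]]
W = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
c = r2_contribution(A[0], A, W, z_star=[0.0, 0.0])
```

Since lower R2 is better, removing a useful solution raises the indicator, so contributions are nonnegative for solutions that are the unique best for some weight vector.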

#### 3. Proposed Approach

##### 3.1. The PSO Algorithm

PSO has been successfully used for both continuous nonlinear and discrete binary single-objective optimization [31]. The pseudocode of PSO is shown in Algorithm 1.
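Since Algorithm 1 is not reproduced in this excerpt, a canonical single-objective PSO of the kind referenced above can be sketched as follows. The parameter values (inertia weight `w`, acceleration coefficients `c1` and `c2`, bounds, swarm size) are common illustrative choices, not the paper's settings:

```python
import random

# Canonical PSO minimizing f: for each particle, the velocity update is
# v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), followed by x <- x + v.
def pso_minimize(f, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    rng = random.Random(0)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]                  # personal best values
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best position/value
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:                  # update personal best
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:                 # update global best
                    G, gbest = X[i][:], fx
    return G, gbest

# Usage: minimize the 2-D sphere function.
best_x, best_f = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```

The indicator-based multiobjective extension discussed in this paper replaces the single-objective comparisons above with an indicator-driven selection; this sketch only shows the unmodified baseline.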