Abstract

Due to its fast convergence and population-based nature, particle swarm optimization (PSO) has been widely applied to multiobjective optimization problems (MOPs). However, the classical PSO has been proven not to be a global search algorithm, so multiobjective PSO-based algorithms may fail to converge to the global optima. In this paper, making full use of the global convergence property of quantum-behaved particle swarm optimization (QPSO), a novel multiobjective QPSO algorithm based on the ring model is proposed. Based on the ring model, the position-update strategy is improved to address MOPs. The employment of a novel communication mechanism between particles effectively slows down the decline of the swarm diversity. Moreover, the searching ability is further improved by adjusting the position of the local attractor. Experimental results show that the proposed algorithm is highly competitive in both convergence and diversity when solving MOPs. In addition, the advantage becomes even more obvious as the number of objectives increases.

1. Introduction

Optimization problems with more than one objective are rather common in real-world practice, such as information system design [1], reservoir flood control operation (RFCO) problem [2], community detection [3] in social networks, and battery hybrid storage system optimization problems [4]. In such multiobjective optimization problems (MOPs), the objectives to be optimized are normally in conflict with each other, which means there is no unique solution to these problems. Instead, we are supposed to find Pareto optimal solutions that represent the best possible compromises among all the objectives.

In recent years, due to their population-based nature, a variety of evolutionary algorithms have been applied to address MOPs. Among these algorithms, particle swarm optimization (PSO) has attracted great interest for its relatively simple operation and competitive performance. Since the first multiobjective PSO (MOPSO) was proposed in 1999 [5], more than fifty variants of MOPSOs have been reported in the literature, among which OMOPSO [6] proposed by Sierra and Coello is one of the most representative methods. Sierra and Coello [7] gave a survey of the existing studies on MOPSOs before 2006, and the state-of-the-art MOPSOs are summarized by Zhou et al. [8]. Since classical PSO is designed for single-objective optimization problems and cannot be applied to multiobjective optimization problems directly, most of the existing studies have focused on how to extend PSO to its multiobjective versions, such as research on how to select the global and local best particles [9–11], as well as how to maintain the good points found so far.

However, as proved by Van Den Bergh [12], the classical PSO is not a global search algorithm, not even a local one, according to the convergence criteria provided by Solis and Wets [13]. Therefore, MOPSOs, which are derived from PSO, may be unable to converge to the global optima.

Quantum-behaved particle swarm optimization (QPSO) [14], first introduced by Sun et al. in 2004, is a population-based algorithm inspired by quantum mechanics and the trajectory analysis of PSO. Besides the introduction of the mean best position (mbest), the particles in QPSO are assumed to follow a double exponential distribution in a quantum δ potential well around the local focus when a new position is sampled, which is the most significant difference between QPSO and PSO. Consequently, QPSO needs no velocity vectors for the particles at all. Since its first proposal, QPSO has shown its success in solving a wide range of single-objective optimization problems [15–17].

In contrast with PSO, the global convergence of QPSO can be guaranteed if the contraction-expansion (CE) coefficient of the algorithm is properly selected [18, 19]. Sun et al. proved that QPSO is a form of contraction mapping on the probability metric space and that its orbit is probabilistically bounded; in turn, the algorithm converges asymptotically to the global optimum. This is the exact reason why QPSO outperforms PSO as well as most other evolutionary algorithms.

Although QPSO has been successfully applied to conventional single-objective optimization problems due to its global convergence and easy control, it is rarely used in solving multiobjective optimization problems [20, 21]. Although the QPSO algorithm can be globally convergent, the CE coefficient is generally set to a relatively small value in order to accelerate convergence on real-world problems, so premature convergence can result when the algorithm is applied to MOPs. To eliminate this defect, we propose a novel position-update mechanism based on the ring model and combine it with the classical QPSO. This leads to MOQPSOr, an enhanced QPSO method that can be applied to multiobjective optimization problems. The combination of the ring model with QPSO has several merits. Firstly, it employs a novel communication mechanism between particles using the ring model. This modification gives the swarm a much larger mutation scope than the original QPSO, which effectively slows down the decline of the swarm diversity and thus addresses the premature convergence caused by the quick convergence of QPSO when it is applied directly to multiobjective optimization. Secondly, in this ring model, by adjusting the position of the local attractor, the global searching ability is enhanced at the beginning of the iteration, while the local searching ability is enhanced in the later stage of the iteration. With this novel position-update strategy based on the ring model, the efficiency of MOQPSOr on multiobjective optimization is further improved, since no additional mutation operation is needed.

The rest of the paper is organized as follows. After a brief introduction of the background of PSO and QPSO in Section 2, a novel ring model for position update is proposed in Section 3, and a new version of the multiobjective quantum-behaved particle swarm optimization algorithm (MOQPSOr) is presented by integrating the new position-update strategy into QPSO. Numerical tests and performance comparisons on 12 benchmark functions are provided in Section 4. Finally, the paper is concluded in Section 5.

2. PSO and QPSO

Being a heuristic search technique that simulates the social behaviour of organisms, particle swarm optimization (PSO) [22, 23] has become one of the most popular methods in the field of evolutionary computation. In PSO, each particle represents a candidate solution to the problem and flies through a D-dimensional search space according to the following update equations:

$$v_{i,j}^{t+1} = \omega v_{i,j}^{t} + c_1 r_1 \left(pbest_{i,j}^{t} - x_{i,j}^{t}\right) + c_2 r_2 \left(gbest_{j}^{t} - x_{i,j}^{t}\right),$$
$$x_{i,j}^{t+1} = x_{i,j}^{t} + v_{i,j}^{t+1},$$

where the current position and velocity of the ith particle at the tth iteration are represented, respectively, as $x_i^t$ and $v_i^t$. $pbest_i$ is the best previous position of particle i, while $gbest$ is the position of the best particle in the whole swarm. The parameters $r_1$ and $r_2$ are independent random numbers distributed uniformly on (0,1), and $c_1$ and $c_2$ denote the acceleration coefficients, typically both set to 2.0, which implies that the "social" and "cognition" parts have the same influence on the velocity update. The parameter ω is known as the inertia weight and is usually set to a positive value chosen from a linear or nonlinear function of the iteration number.
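The update rule above can be sketched as a short Python function; this is a minimal illustration (the function and variable names are ours, not from the paper):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=random):
    """One PSO update for a single particle.

    x, v, pbest are length-D lists for this particle; gbest is the
    swarm-best position. Returns the new position and velocity.
    """
    new_v = [
        w * v[j]
        + c1 * rng.random() * (pbest[j] - x[j])   # "cognition" part
        + c2 * rng.random() * (gbest[j] - x[j])   # "social" part
        for j in range(len(x))
    ]
    new_x = [x[j] + new_v[j] for j in range(len(x))]
    return new_x, new_v
```

Note that if a particle sits exactly on both its personal best and the global best with zero velocity, it does not move, which is one symptom of the stagnation discussed above.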

Compared with PSO, the most significant advantage of QPSO is that its global convergence can be theoretically guaranteed [18]. In addition, QPSO is much easier to control, benefiting from the fact that it has only one parameter. Trajectory analyses demonstrated that convergence of the whole particle swarm may be achieved if each particle converges to its local attractor $p_i$ [24]:

$$p_{i,j}^{t} = \varphi\, pbest_{i,j}^{t} + (1-\varphi)\, gbest_{j}^{t},$$

where φ is a sequence of uniformly distributed random numbers in (0,1).

Unlike PSO, each individual particle in QPSO moves in the search space within a δ potential well on each dimension, whose center is the point $p_{i,j}$. When a particle evolves its position in this δ potential well, the new position is subject to an exponential distribution whose probability density function is

$$f\left(x_{i,j}\right) = \frac{1}{L_{i,j}} \exp\left(-\frac{2\left|x_{i,j}-p_{i,j}\right|}{L_{i,j}}\right),$$

where $L_{i,j}$ determines the distribution scope. In QPSO, the distribution scope of each particle is set elaborately to relate to its relative position in the whole swarm:

$$L_{i,j}^{t} = 2\beta \left|mbest_{j}^{t} - x_{i,j}^{t}\right|,$$

where mbest is the mean of the personal best positions among all particles:

$$mbest^{t} = \frac{1}{M}\sum_{i=1}^{M} pbest_{i}^{t}.$$

In this way, particles far away from the center of the whole swarm have a larger searching scope, while those close to the middle can only search in a relatively small space. Therefore, the position of a particle in QPSO is updated according to the following iteration equation:

$$x_{i,j}^{t+1} = p_{i,j}^{t} \pm \beta \left|mbest_{j}^{t} - x_{i,j}^{t}\right| \ln\frac{1}{\mu},$$

where μ is a random number uniformly distributed in (0,1) and β is called the Contraction-Expansion Coefficient, which is employed to control the convergence speed of the algorithm. As proved by Sun et al. [14], β must satisfy β < e^γ ≈ 1.781 to guarantee convergence of the particle.
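Putting the local attractor, the scope, and the sampling rule together, one QPSO position update can be sketched as follows (a minimal sketch; the function names are ours):

```python
import math
import random

def qpso_step(x, pbest_i, gbest, mbest, beta=0.75, rng=random):
    """One QPSO position update for a single particle (no velocity needed).

    The local attractor p lies uniformly at random between pbest_i and
    gbest on each dimension; the new coordinate is sampled from an
    exponential distribution centred on p, with scope proportional
    to |mbest - x|.
    """
    new_x = []
    for j in range(len(x)):
        phi = rng.random()
        p = phi * pbest_i[j] + (1.0 - phi) * gbest[j]  # local attractor
        u = 1.0 - rng.random()                         # u in (0, 1], avoids log(1/0)
        scope = beta * abs(mbest[j] - x[j]) * math.log(1.0 / u)
        # the +/- branch is chosen with equal probability
        new_x.append(p + scope if rng.random() < 0.5 else p - scope)
    return new_x
```

When a particle already sits at the swarm's mean best position, its scope collapses to zero and the new position equals the attractor itself, which illustrates the "small distribution scope" problem discussed in Section 3.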

3. Proposed Method

3.1. Novel Ring Model Based Position-Update Strategy

From the perspective of both empirical evidence and theoretical analysis, the global search ability as well as the convergence rate of QPSO and its variants has been fully demonstrated on single-objective optimization problems. However, this advantage of QPSO leads to premature convergence when it is applied directly to multiobjective optimization. Without loss of generality, a multiobjective optimization problem can be formulated as follows:

$$\min\ F(x) = \left(f_1(x), f_2(x), \ldots, f_m(x)\right), \quad x \in \Omega,$$

where $f_1, \ldots, f_m$ are the objective functions and $x$ is the vector of decision variables. The optimization performance is generally measured in two respects: closeness to the ideal Pareto front and distribution of the approximated solutions [8]. However, the quick convergence of QPSO is apt to lead to a rapid decline of the swarm's diversity, which becomes a serious problem that must be addressed when QPSO is extended to multiobjective optimization.

Each particle in QPSO is located in an exponentially distributed potential well, with center $p_{i,j}$ and distribution scope $L_{i,j}$. Figure 1 illustrates the relationship between a particle's position and its distribution scope in QPSO. One particle in the figure lies far away from the mean best position (mbest) of the swarm, and its corresponding distribution at the next iteration, visualised on the upper right, has a wide scope; the other particle lies near the mean best position, and the exponential distribution of its position at the next iteration is narrow. According to the iteration equation of QPSO, the particle near mbest has a much smaller variation scope than the distant one. That is, the closer a particle is to the mean best position, the smaller its variation scope; only particles away from mbest have large variation scopes. This implies that, in the original QPSO, a certain number of particles in the swarm are bound to have small, or even nearly zero, distribution scopes.

In order to control the decline of the swarm diversity, we propose a novel position-update strategy based on the ring model. In this model, for a swarm with M particles, all the particles are arranged in a circle as in Figure 2, numbered 1, 2, ..., M. Different from the original QPSO, where a particle's variation scope is decided by its location relative to the whole swarm, in our proposed method the variation scope of particle i is decided by its distance to the next-numbered particle i + 1 (with particle M wrapping around to particle 1). Accordingly, in the iteration equation of particle i, we replace mbest by pbest_{i+1}, the personal best position of particle i + 1. Since particles in the swarm are distributed randomly and independently, the position of pbest_{i+1} can theoretically lie anywhere in the search space; that is, particles with consecutive indices are not necessarily adjacent in position. The particles around the center of the swarm thus also have the opportunity to mutate with large scopes. By this novel position-update strategy, the swarm of MOQPSOr mutates more than in the original QPSO, which subsequently slows down the decline of the swarm diversity.
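The replacement of mbest by the next-numbered neighbour's personal best can be sketched as follows (an assumed minimal form based on the description above; the function name is ours):

```python
def ring_scope(i, x, pbest, beta):
    """Distribution scope of particle i under the ring model.

    Instead of the swarm's mean best position, the scope depends on the
    personal best of the next-numbered particle, pbest[(i + 1) % M],
    so even particles near the swarm centre can receive a wide scope.
    """
    M = len(pbest)
    nxt = (i + 1) % M                      # ring: particle M-1 wraps to 0
    return [beta * abs(pbest[nxt][j] - x[i][j]) for j in range(len(x[i]))]
```

Because the neighbour in index space may lie anywhere in the search space, a particle sitting exactly at the swarm centre can still obtain a nonzero scope, which is the mechanism that preserves diversity.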

In terms of the local attractor $p_i$, in QPSO it is set to lie uniformly at random in the hyperrectangle with pbest_i and gbest at the two ends of its diagonal. Generally speaking, the local searching ability is enhanced when $p_i$ moves towards gbest, and the global searching ability is enhanced when $p_i$ moves towards pbest_i. Therefore, in MOQPSOr, $p_i$ is given a larger probability of locating near pbest_i at the beginning of the iteration and near gbest in the later stage, respectively.

Based on the above analysis, particles in MOQPSOr move according to the following position-update strategy:

$$x_{i,j}^{t+1} = p_{i,j}^{t} \pm \beta \left|pbest_{i+1,j}^{t} - x_{i,j}^{t}\right| \ln\frac{1}{\mu}, \quad (8)$$
$$p_{i,j}^{t} = pbest_{i,j}^{t} + r^{1/\alpha} \left(gbest_{j}^{t} - pbest_{i,j}^{t}\right), \quad (9)$$

where r is a random number uniformly distributed in (0,1) and the parameter α is called the Searching Coefficient; by adjusting it, the local attractor $p$ can be controlled to appear near pbest_{i,j} or gbest_j. Figure 3 plots the distribution of p's location formulated in (9), where the horizontal axis denotes the random number r and the vertical axis denotes p's location between pbest and gbest. The cases α = 0.5 and α = 2 are used as examples. From the red dotted lines in Figure 3(a), it can be seen that when α = 0.5, p has half the probability of locating in the quarter of the segment nearest pbest, which is much closer to pbest than to gbest. In Figure 3(b), when α = 2, p has half the probability of locating in the portion of the segment closer to gbest than to pbest. That is, when α < 1, the local attractor appears near pbest with large probability; the smaller the value of α, the closer p gathers towards pbest. On the contrary, when α > 1, p appears near gbest with large probability; the larger the value of α, the closer p gathers towards gbest. Therefore, the algorithm's global searching ability can be enhanced by setting α < 1, while the local searching ability can be enhanced by setting α > 1.
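The biased sampling of the local attractor can be sketched as below. This assumes a power-law form, p = pbest + r^(1/α)(gbest − pbest), which is consistent with the behaviour described in the text (α < 1 biases p toward pbest, α > 1 toward gbest); the function name is ours:

```python
import random

def local_attractor(pbest_ij, gbest_j, alpha, rng=random):
    """Sample the local attractor on one dimension, between pbest and gbest.

    Assumed form consistent with the text:
        p = pbest + r**(1/alpha) * (gbest - pbest),  r ~ U(0, 1).
    alpha < 1 concentrates p near pbest (global search);
    alpha > 1 concentrates p near gbest (local search).
    """
    r = rng.random()
    return pbest_ij + (r ** (1.0 / alpha)) * (gbest_j - pbest_ij)
```

For example, with α = 0.5 the exponent is 2, so r < 0.5 maps to r² < 0.25: half the samples fall in the quarter of the segment nearest pbest.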

3.2. Multiobjective QPSO with Ring Model (MOQPSOr)

In MOQPSOr, we adopt the concept of crowding distance [25] for leader selection. Whenever a leader particle needs to be selected as the global best position from the external archive, the crowding distance of each leader is calculated, followed by selection by means of a binary tournament based on these crowding distances. A particle with a larger crowding distance has a greater chance of being chosen as the leader.

The crowding distance of each individual is also used to decide which leaders are kept over generations when the maximum external archive size is exceeded in MOQPSOr. The particle in the archive with the smallest crowding distance is removed first whenever needed.
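The archive maintenance described above can be sketched as follows; this is a minimal sketch of the standard crowding-distance computation [25] and of truncation by smallest distance first (function names are ours):

```python
def crowding_distances(objs):
    """Crowding distance of each point in a list of objective vectors."""
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        # boundary points are always preserved
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = (objs[order[-1]][k] - objs[order[0]][k]) or 1.0
        for r in range(1, n - 1):
            dist[order[r]] += (objs[order[r + 1]][k] - objs[order[r - 1]][k]) / span
    return dist

def truncate_archive(objs, max_size):
    """Keep the max_size members with the largest crowding distances."""
    d = crowding_distances(objs)
    keep = sorted(range(len(objs)), key=lambda i: -d[i])[:max_size]
    return [objs[i] for i in sorted(keep)]
```

Boundary points receive infinite distance, so truncation always discards interior points in the most crowded regions first.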

Since MOQPSOr already slows down the decline of the swarm diversity through the novel position-update strategy based on the ring model, no additional mutation operation is needed. The procedure of the MOQPSOr algorithm can be described as follows.

Step 1 (initialization).

Step 1.1. Parameter settings are as follows: the swarm size M, the maximum external archive size, the stopping criterion, the iteration counter t = 1, the Searching Coefficient α, and the Contraction-Expansion Coefficient β.

Step 1.2. Initialize the swarm randomly within the feasible solution space, and set the personal best positions pbest_i = x_i, where i = 1, 2, ..., M.

Step 1.3. Initialize the external archive E with the nondominated solutions in pbest.

Step 2 (termination). If the termination condition is met, stop and return all the individuals in the current archive E. Otherwise, go to Step 3.

Step 3 (reproduction). For each particle i, do the following.
Step 3.1. Select a global best position gbest_i from the external archive.
Step 3.2. Update the position by (8) and (9).
Step 3.3. Update the personal best position pbest_i.

Step 4 (external archive update).
Step 4.1. Add the updated personal best positions to the external archive E.
Step 4.2. Remove dominated solutions in E.
Step 4.3. If the size of the current archive E is larger than the maximum archive size, calculate the crowding distance of each individual in E, sort them in descending order of crowding distance, and keep only the leading individuals in E, up to the maximum archive size.

Step 5. Set t = t + 1; go to Step 2.

4. Experiments and Analysis

4.1. Test Functions

Walking-Fish-Group (WFG) [26], a well-designed multiobjective test suite that provides a truer means of assessing the performance of optimization algorithms on a wide range of different problems, is used to validate the performance of our approach in 2-objective space. Compared with the other two commonly used suites, ZDT [27] and DTLZ [28], the WFG test suite is more challenging and contains a number of problems that exhibit properties not evident in ZDT and DTLZ, including nonseparable problems, deceptive problems, a truly degenerate problem, a mixed-shape Pareto front problem, problems scalable in the number of position-related parameters, and problems with dependencies between position and distance parameters.

Besides the WFG problems, another three 3-objective benchmark functions from the DTLZ test suite, acknowledged to be extremely difficult to optimize, are also involved in the comparison test. DTLZ2 tests the ability of global convergence by providing a spherical Pareto front. DTLZ4 assesses the maintainability of a good distribution of solutions by generating a nonuniform distribution of points along the true Pareto front. The Pareto front of DTLZ7 is the intersection of a straight line and a hyperplane. All twelve benchmark problems are listed in Table 1.

4.2. Performance Metrics

To assess the performance of the algorithms in this experiment, three quality indicators are considered: the additive unary ε-indicator (I_ε+) [29], the hypervolume (HV) [30], and the Inverted Generational Distance (IGD) [31].

Additive Unary ε-Indicator (I_ε+). It measures the convergence of the resulting Pareto fronts; a lower value indicates a better approximation set. For an approximation set X, the additive unary ε-indicator is defined as [32]

$$I_{\varepsilon+}(X) = \inf\left\{\varepsilon \in \mathbb{R} : \forall z \in PF^{*},\ \exists x \in X \text{ such that } f_{k}(x) - \varepsilon \le z_{k},\ k = 1, \ldots, m\right\},$$

where $PF^{*}$ is the ideal Pareto front.
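For finite sets, the infimum above reduces to a max-min-max expression, which can be sketched as follows (minimisation assumed; both sets are given as lists of objective vectors, and the function name is ours):

```python
def additive_epsilon(approx, ref):
    """Additive unary epsilon indicator (minimisation).

    The smallest eps such that every reference point is weakly
    dominated by some approximation point shifted down by eps.
    """
    return max(
        min(max(a[k] - z[k] for k in range(len(z))) for a in approx)
        for z in ref
    )
```

A value of 0 means every reference point is already weakly dominated by the approximation set.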

Hypervolume (HV). This metric measures both convergence and diversity of the solutions; the higher the value, the better the algorithm performs. Generally speaking, the hypervolume measures the volume of the space dominated by the approximation set, bounded by a reference point. The HV of an approximation set X can be mathematically defined as

$$HV(X) = \lambda\left(\bigcup_{x \in X} \left[f_{1}(x), z_{1}^{r}\right] \times \cdots \times \left[f_{m}(x), z_{m}^{r}\right]\right),$$

where $z^{r}$ is the reference point and λ is the usual Lebesgue measure.
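In the 2-objective case the dominated region is a union of rectangles, and the hypervolume can be computed by a simple sweep. A minimal sketch (our function name), assuming a minimisation front whose points are mutually nondominated and all dominate the reference point:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective nondominated front (minimisation).

    Sorts the points by the first objective and accumulates the
    rectangle each point contributes below the previous f2 level.
    """
    pts = sorted(front)          # ascending f1 => descending f2 if nondominated
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For higher objective counts, exact hypervolume computation is substantially more involved; libraries such as the MOEA Framework used in Section 4 provide general implementations.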

Inverted Generational Distance (IGD). The IGD is the average distance from every solution in the reference set to the nearest solution in the approximation set; it therefore reflects the convergence of the solutions. The lower the IGD value, the better the algorithm's performance. The IGD metric is calculated for the solution set X using the reference point set $P^{*}$ as follows:

$$IGD\left(X, P^{*}\right) = \frac{1}{\left|P^{*}\right|}\sum_{v \in P^{*}} d(v, X),$$

where $d(v, X)$ is the minimum distance between v and the points of X in the objective space.
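The definition translates directly into code; a minimal sketch with Euclidean distance in objective space (the function name is ours):

```python
import math

def igd(approx, ref_set):
    """Inverted Generational Distance: mean distance from each reference
    point to its nearest neighbour in the approximation set."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(z, x) for x in approx) for z in ref_set) / len(ref_set)
```

Because the average runs over the reference set, a low IGD requires the approximation set to reach every region of the reference front, not just part of it.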

4.3. Algorithm for Comparison and Parameter Setting

In this experiment, five state-of-the-art multiobjective optimization algorithms are chosen for comparison, including the most efficient and widely used multiobjective particle swarm optimizer OMOPSO [6] and another two well-known PSO-based multiobjective optimization algorithms: σMOPSO [33] and pdMOPSO [34], as well as two competitive evolutionary multiobjective optimizers: NSGA-II [25] and PESA-II [35].

To make a fair comparison, the population size and the leader archive size in MOQPSOr and all five comparison algorithms are fixed to 100 for all test instances. The stopping condition is set to 250 iterations, which means a total of 25000 function evaluations. For MOQPSOr, α increases linearly from 0.6 to 1.2, and β is set to 0.3. For OMOPSO, σMOPSO, and pdMOPSO, the settings of c1, c2, and ω, as well as the mutation method and the mutation probability, are the same as in Durillo et al.'s work [36]. For NSGA-II and PESA-II, the crossover rate sbx.rate and the distribution index sbx.distributionIndex for simulated binary crossover are set to 1.0 and 15.0, respectively, while the mutation rate and the distribution index for polynomial mutation are set to pm.rate = 1/N and pm.distributionIndex = 20.0, where N is the number of decision variables. For PESA-II, in addition, the number of bisections in the adaptive grid archive is set to 8.
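The linear increase of α from 0.6 to 1.2 can be sketched as a simple schedule (the endpoints come from the text; the function name is ours). Note how it crosses 1.0 partway through the run, switching the bias of the local attractor from global to local search:

```python
def alpha_schedule(t, t_max, a0=0.6, a1=1.2):
    """Searching Coefficient at iteration t, increasing linearly from a0
    to a1 over t_max iterations: alpha < 1 early (global search),
    alpha > 1 late (local search)."""
    return a0 + (a1 - a0) * t / t_max
```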

All the experiments are implemented using MOEA framework [37], an open source Java library for developing and experimenting with multiobjective evolutionary algorithms.

Every algorithm runs on each problem over 30 independent trials, and the average results of I_ε+, HV, and IGD are recorded.

4.4. Experimental Results

Tables 2–4 tabulate the performance results on I_ε+, HV, and IGD, respectively. For each test function, the best result is shown in bold. The markers "+" and "−" in Tables 2, 3, and 4 indicate the pairwise comparison results: "+" means MOQPSOr outperforms its rival, while "−" means MOQPSOr underperforms. Summaries of the comparison results on each metric are also shown in each table.

Since I_ε+ and IGD both mainly reflect the ability to converge to the global Pareto front, Tables 2 and 4 are discussed together. On both the I_ε+ and the IGD metric, MOQPSOr clearly performs best among all the algorithms. On I_ε+, MOQPSOr achieves 8 best values out of the 12 problems, as well as 2 second-best values. In comparison, OMOPSO, NSGA-II, and PESA-II each obtain no more than 2 best values, while σMOPSO and pdMOPSO do not achieve any best result at all. In terms of IGD, similarly, MOQPSOr obtains the best values in 8 out of the 12 problems, while NSGA-II and PESA-II obtain 2 best values each. Thus, MOQPSOr produces solutions closer to the global Pareto front than the other algorithms in our study.

HV measures both the convergence and the diversity of the solutions. It can be clearly observed from Table 3 that MOQPSOr is again the best-performing algorithm, yielding the best values in 7 out of 12 problems. The next best-performing algorithms are NSGA-II and PESA-II, which achieve 2 best values each. Although MOQPSOr does not achieve the best value on 3-objective DTLZ7, it is the second-best-performing algorithm there, only slightly inferior to OMOPSO.

Figure 4 illustrates the comparison between the final nondominated fronts obtained by MOQPSOr and those obtained by the other algorithms on 2-objective WFG6. For clarity, each comparison pair contains an overall figure (Figures 4(a), 4(c), and 4(e)) and a sectional figure (Figures 4(b), 4(d), and 4(f)). In each diagram in Figure 4, thin blue lines show the ideal Pareto front of the problem, while the red dots represent the nondominated solutions obtained by MOQPSOr. The black dots in the three pairs, Figures 4(a) and 4(b), Figures 4(c) and 4(d), and Figures 4(e) and 4(f), represent the Pareto fronts obtained by OMOPSO, NSGA-II, and PESA-II, respectively. It can be observed from Figure 4 that, in terms of closeness to the true Pareto front, NSGA-II performs the worst on 2-objective WFG6. Although the nondominated solutions obtained by OMOPSO and PESA-II are both very close to those obtained by MOQPSOr, the latter are still slightly closer to the ideal Pareto front. Besides, MOQPSOr's nondominated solutions show a much better balanced distribution than those of OMOPSO and PESA-II.

Figures 5 and 6 demonstrate the final nondominated fronts found by MOQPSOr and by the other algorithms on 3-objective DTLZ2 and DTLZ7. Figures 5(a), 5(c), 5(e), 5(g), 6(a), 6(c), 6(e), and 6(g) are the overall views of the fronts; Figures 5(b), 5(d), 5(f), 5(h), 6(b), 6(d), 6(f), and 6(h) are the side views. For DTLZ2, MOQPSOr clearly achieves the best performance. It can be observed from the side views that neither NSGA-II nor PESA-II converges to the ideal Pareto front completely, with some dots astray, and the convergence of NSGA-II is even worse than that of PESA-II. Although the nondominated front found by OMOPSO converges to the ideal Pareto front as well as MOQPSOr's does, its distribution is less balanced. In other words, the distribution of the nondominated front achieved by MOQPSOr is the best.

It can be seen from Figure 6 that the solutions obtained by PESA-II cannot cover the entire ideal Pareto front on every plane when it runs on DTLZ7. Similar observations can also be found when NSGA-II runs on DTLZ7. In contrast, both OMOPSO and MOQPSOr can converge to the ideal Pareto front evenly.

In summary, MOQPSOr is the most effective algorithm among all 6 algorithms discussed in our study. In terms of both convergence and diversity of the solutions, MOQPSOr outperforms all the PSO-based algorithms considered here on all 12 test functions. Compared with NSGA-II and PESA-II, MOQPSOr achieves better solution sets on 8 out of 12 problems. Moreover, MOQPSOr performs the best on all the 3-objective functions.

It is worth noting that 4 of the 12 test problems are multimodal: WFG2, WFG4, WFG9, and DTLZ7. It can be seen from Tables 2–4 that, on these multimodal functions, MOQPSOr performs the best except on 2-objective WFG2 and WFG4, where NSGA-II and PESA-II do better. However, the relative performance of every algorithm changes for optimizations with more than 2 objectives. Table 5 shows the performance results for all the algorithms with different numbers of objectives on these four multimodal benchmarks; the 3-objective and 4-objective optimization results are in italic font. It can be noticed that although MOQPSOr is not even the third-best-performing algorithm on 2-objective WFG2 and WFG4, with a particularly obvious disadvantage against NSGA-II and PESA-II, it turns out to be much more effective than all the other algorithms on the 3-objective and 4-objective optimizations. In addition, its lead widens as the number of objectives increases. In conclusion, MOQPSOr is a competitive multiobjective optimization algorithm, especially on multimodal problems with a large number of objectives.

5. Conclusion

Generally speaking, most multiobjective optimization algorithms are reformed from various single-objective optimizers, and the latter play vital roles in determining the performance of the former. Since the canonical PSO has been proven not to be a global search algorithm, not even a local one, multiobjective PSO-based algorithms may fail to converge to the global optima. In contrast, although QPSO's global convergence has been proved, works on extending QPSO to multiobjective optimization are rare. Therefore, we have proposed a novel version of the multiobjective QPSO algorithm based on the ring model (MOQPSOr) in this paper, whose position-update strategy is improved in comparison with QPSO, making it more suitable for multiobjective optimization problems. In MOQPSOr, all the particles are arranged in a circle. When a particle evolves, the distribution scope is decided by the distance to its next particle in numerical order, which makes the swarm mutate more than in the original QPSO. With a high degree of probability, the local attractor is located near the personal best position during the early stage of the search but near the global best position gbest in the later stage of the iteration. Unlike most MOPSOs, there is no additional mutation operation in MOQPSOr. Compared with 5 widely used evolutionary multiobjective optimization algorithms on 12 benchmark functions, the experimental results show that the proposed algorithm is highly competitive in both convergence and diversity when solving multiobjective optimization problems. On top of that, the advantage becomes even more obvious as the number of objectives increases.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this manuscript.

Acknowledgments

This work was supported by the Natural Science Foundation of Jiangsu province (Project nos. BK20130155 and BK20130160).