The Scientific World Journal
Volume 2013 (2013), Article ID 510763, 19 pages
Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions
1Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Johor Bahru, Malaysia
2Faculty of Electrical and Electronic Engineering, Universiti Malaysia Pahang, 26600 Pekan, Malaysia
3Department of Electrical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
Received 16 January 2013; Accepted 11 March 2013
Academic Editors: P. Agarwal, V. Bhatnagar, and Y. Zhang
Copyright © 2013 Kian Sheng Lim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm outperforms the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
1. Introduction
In multiobjective optimisation (MOO) problems, multiple objective functions are solved simultaneously by either minimising or maximising the fitness of the functions. These multiple objective functions usually conflict with each other. Therefore, the solution to an MOO problem is a set of multiple tradeoffs, or nondominated solutions, rather than a single solution.
The Vector Evaluated Particle Swarm Optimisation (VEPSO) algorithm introduced by Parsopoulos and Vrahatis has been used to solve various MOO problems, such as the design of radiometer array antennas, the design of supersonic ejectors for hydrogen fuel cells, the design of composite structures, the optimisation of the steady-state performance of power systems, and the design of multiple machine-scheduling systems. In the VEPSO algorithm, one swarm of particles optimises an objective function using guidance from the best solution found by another swarm.
The nondominated solutions found during the optimisation are usually preferred for effective guidance. As an example, the multiobjective PSO (MOPSO) algorithm [8, 9] divides all nondominated solutions into several groups based on their locations in the objective space. Then, one of the nondominated solutions is randomly selected from the group that has the fewest solutions to be used as the particle guide. Furthermore, the nondominated sorting PSO (NSPSO) algorithm uses the primary mechanism of nondominated sorting genetic algorithm-II, in which one nondominated solution is randomly selected to be used as the guide for the particles based on the niche count and the nearest-neighbour density estimator. In addition, the optimised MOPSO (OMOPSO) algorithm by Reyes-Sierra and Coello Coello uses the crowding distance mechanism in binary tournaments to select one of the nondominated solutions as the guide for each particle. Abido uses two nondominated solution sets, a local set and a global set, to optimise the problem. Each particle is guided by the nondominated solution that has the smallest distance to the particle from both nondominated solution sets.
The conventional VEPSO algorithm solves an MOO problem by improving the solutions in a swarm under the guidance of the best solution, with respect to a single objective, found by another swarm. However, a nondominated solution with better fitness with respect to the other objectives may exist, yet it is not used to guide the particles in the other swarm. The nondominated solutions are always equal to or better than the best solution used in conventional VEPSO. The superiority of the nondominated solutions motivates their use as particle guides for each swarm in improving the VEPSO algorithm. Thus, in this study, the guide of a swarm is selected as the nondominated solution that has the best fitness with respect to the single objective function optimised by the other swarm.
The paper is organized as follows. In Section 2, we provide background on MOO problems. Then, in Section 3, we explain the particle swarm optimisation (PSO), conventional VEPSO, and improved VEPSO algorithms. In the next section, we describe the simulation experiment, which includes several performance measures and benchmark test problems, before discussing the results. Lastly, we present the conclusion and some suggestions for future work.
2. Multiobjective Optimisation
Consider a minimisation of a multiobjective problem:
$$\min F(\mathbf{x}) = [f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_M(\mathbf{x})],$$
$$\text{subject to } g_j(\mathbf{x}) \le 0, \; j = 1, \ldots, J, \qquad h_k(\mathbf{x}) = 0, \; k = 1, \ldots, K, \tag{1}$$
where $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ is the $n$-dimensional vector of decision variables that represents a possible solution, $M$ is the number of objectives, $f_m$ is the $m$th objective function, and $g_j$ and $h_k$ are the inequality and equality constraint functions, respectively.
In explaining the concept of Pareto optimality, consider two vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^M$. $\mathbf{u}$ dominates $\mathbf{v}$ (denoted as $\mathbf{u} \prec \mathbf{v}$) if and only if $u_m \le v_m$ for all $m \in \{1, \ldots, M\}$ and $u_m < v_m$ for at least one $m$. The dominance relations for a two-objective problem are indicated by the labelled circles in Figure 1. Hence, a vector of decision variables $\mathbf{x}^*$ is a nondominated solution if and only if there is no other solution $\mathbf{x}$ such that $F(\mathbf{x}) \prec F(\mathbf{x}^*)$. The nondominated solution is also known as the Pareto optimal solution. The set of nondominated solutions of an MOO problem is known as the Pareto optimal set, $P^*$. The set of objective vectors corresponding to $P^*$ is known as the Pareto front, $PF^*$. The $PF^*$ for a two-objective problem is illustrated by the black circles in Figure 1.
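As an illustration, the dominance test and nondominated filtering described above can be sketched in Python; the function names are our own, for illustration only:

```python
import numpy as np

def dominates(u, v):
    """True if objective vector u dominates v under minimisation:
    u is no worse in every objective and strictly better in at least one."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def nondominated(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, in a two-objective minimisation, `[2, 2]` dominates `[2, 4]` because it is equal in the first objective and strictly better in the second.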
The goal of an MOO algorithm is to find as many nondominated solutions as possible according to the objective functions and constraints. The Pareto front corresponding to the nondominated set should be as close to and well distributed over the true Pareto front as possible. However, it is possible to have different solutions that map to the same fitness value in objective space.
3. Particle Swarm Optimisation
3.1. Original Particle Swarm Optimisation Algorithm
Based on the social behaviour of bird flocking and fish schooling, a population-based stochastic optimisation algorithm named Particle Swarm Optimisation (PSO) was introduced by Kennedy and Eberhart [14, 15]. The PSO algorithm contains individuals referred to as particles that encode the possible solutions to the optimisation problem using their positions. These particles explore the defined search space to look for solutions that better satisfy the objective function of the optimised problem. Each particle collaborates with the others during the search process by comparing its current position with the best position that it and the other particles in the swarm have found.
Figure 2 shows the flow chart of the PSO algorithm. For the PSO algorithm, consider the following minimisation problem: there are $I$ particles flying around in an $n$-dimensional search space, where their positions, $\mathbf{x}_i$, represent the possible solutions. Initially, all particles are randomly positioned in the search space and assigned random velocities, $\mathbf{v}_i$. Then, the objective fitness, $f(\mathbf{x}_i)$, for each particle is evaluated by calculating the objective function with respect to $\mathbf{x}_i$. Next, each particle's best position, $\mathbf{P}_i$, is initialised to its current position. Meanwhile, the best among all $\mathbf{P}_i$ is set as the swarm's best position, $\mathbf{P}_g$, as specified in (2), where $S$ is the swarm of particles:
$$\mathbf{P}_g = \arg\min_{\mathbf{P}_i, \, i \in S} f(\mathbf{P}_i). \tag{2}$$
Next, the algorithm iterates until the stopping condition is met; that is, either the maximum number of iterations is exceeded or the minimum error is attained. In each iteration, each particle's velocity and position are updated using (3) and (4), respectively:
$$\mathbf{v}_i(t+1) = \chi \left[ \omega \mathbf{v}_i(t) + c_1 r_1 \left( \mathbf{P}_i - \mathbf{x}_i(t) \right) + c_2 r_2 \left( \mathbf{P}_g - \mathbf{x}_i(t) \right) \right], \tag{3}$$
$$\mathbf{x}_i(t+1) = \mathbf{x}_i(t) + \mathbf{v}_i(t+1), \tag{4}$$
where $\chi$ is the constriction factor and $\omega$ is the inertia weight. $c_1$ and $c_2$ are the cognitive and social coefficients, respectively. Meanwhile, $r_1$ and $r_2$ are both random values between zero and one. After the velocity and position are updated, $f(\mathbf{x}_i)$ for each particle is evaluated again. Later, $\mathbf{P}_i$ is updated with the more optimal of the particle's new position and its current $\mathbf{P}_i$. Then, $\mathbf{P}_g$ is updated with the most optimal $\mathbf{P}_i$ among all the particles, as given in (2). Finally, when the stopping condition is met, $\mathbf{P}_g$ represents the optimum solution found for the problem optimised using this algorithm.
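The iteration loop above can be sketched as a minimal single-objective PSO. This is an illustrative simplification using the Clerc constriction form, with the inertia weight folded into the constriction factor; the parameter values and the sphere test function are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimise(f, dim=2, n_particles=20, iters=200,
                 chi=0.729, c1=2.05, c2=2.05, bounds=(-5.0, 5.0)):
    """Minimal PSO for a single-objective minimisation problem."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = rng.uniform(-1.0, 1.0, (n_particles, dim))  # velocities
    pbest = x.copy()                                # personal best positions
    pbest_fit = np.array([f(p) for p in x])
    g = pbest[pbest_fit.argmin()].copy()            # swarm best, cf. eq. (2)
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update, cf. eq. (3); position update, cf. eq. (4)
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = x + v
        fit = np.array([f(p) for p in x])
        better = fit < pbest_fit
        pbest[better] = x[better]
        pbest_fit[better] = fit[better]
        g = pbest[pbest_fit.argmin()].copy()
    return g, float(pbest_fit.min())

# sphere function as a toy objective
best, best_fit = pso_minimise(lambda p: float(np.sum(p ** 2)))
```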
3.2. Vector Evaluated Particle Swarm Optimisation Algorithm
Parsopoulos and Vrahatis introduced the VEPSO algorithm, which was inspired by the multiswarm concept of the VEGA algorithm. In this multiswarm concept, each objective function is optimised by a swarm of particles using the $\mathbf{P}_g$ from another swarm. The $\mathbf{P}_g^j$ for the $j$th swarm is the $\mathbf{P}_i^j$ that has the most optimal fitness with respect to the $j$th objective among all $\mathbf{P}_i^j$ from the $j$th swarm, as given below:
$$\mathbf{P}_g^j = \arg\min_{\mathbf{P}_i^j, \, i \in S_j} f_j\left(\mathbf{P}_i^j\right). \tag{5}$$
Generally, the PSO and VEPSO algorithms have similar process flows, except that all processes are repeated for $M$ swarms when optimising problems with $M$ objective functions. Because each swarm optimises using the $\mathbf{P}_g$ from another swarm, in VEPSO, the velocity is updated using (6). The velocity equation for particles in the $j$th swarm uses $\mathbf{P}_g^s$, where $s$ is given in (7):
$$\mathbf{v}_i^j(t+1) = \chi \left[ \omega \mathbf{v}_i^j(t) + c_1 r_1 \left( \mathbf{P}_i^j - \mathbf{x}_i^j(t) \right) + c_2 r_2 \left( \mathbf{P}_g^s - \mathbf{x}_i^j(t) \right) \right], \tag{6}$$
$$s = \begin{cases} j + 1, & j < M, \\ 1, & j = M. \end{cases} \tag{7}$$
In addition to the difference in the velocity equation, all nondominated solutions found during the optimisation are stored in an archive each time after the objective functions are evaluated. To ensure that the archive contains nondominated solutions only, the fitness of each particle is compared, based on the Pareto optimality criterion, to those of all particles before it is compared to the nondominated solutions in the archive. All nondominated solutions in the archive represent possible solutions to the MOO problem.
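The archive maintenance step described above might be sketched as follows; this is a naive pairwise filter over the merged set, and the function names are hypothetical:

```python
import numpy as np

def dominates(u, v):
    """Pareto dominance for minimisation."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def update_archive(archive, candidates):
    """Merge newly evaluated fitness vectors into the archive and keep
    only the nondominated members, as done after each evaluation step."""
    merged = list(archive) + list(candidates)
    return [p for i, p in enumerate(merged)
            if not any(dominates(q, p) for j, q in enumerate(merged) if j != i)]
```

In practice an implementation would compare each particle against the swarm first and only then against the archive, as the text describes, to keep the archive comparison cheap.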
3.3. The Improved VEPSO Algorithm
In conventional VEPSO, each particle of a swarm is updated using the $\mathbf{P}_g$ from the other swarm, which is optimal with respect to the objective function optimised by that other swarm. Consider a two-objective optimisation problem as an example; the $\mathbf{P}_g^1$ of the first swarm is only updated when a newly generated solution has better fitness with respect to the first objective, as specified in (5). Thus, $\mathbf{P}_g^1$ is not updated even if a new solution, a nondominated solution, has equal fitness with respect to the first objective and better fitness with respect to the second objective. Hence, as in Figure 3(a), each particle from the second swarm moves under the guidance of $\mathbf{P}_g^1$ but not the better, nondominated solutions.
However, this limitation can be overcome by updating $\mathbf{P}_g^1$ with a new nondominated solution that has equal fitness with respect to the optimised objective function and better fitness with respect to the other objective. This improved VEPSO algorithm is represented in Figure 3(b), where $\mathbf{P}_g^1$ is now a nondominated solution that is best with respect to the first objective function. Thus, each particle from the second swarm will be guided by its own $\mathbf{P}_i$ and by $\mathbf{P}_g^1$, which is a nondominated solution, with the hope that the particle will converge toward the Pareto front faster.
In the improved VEPSO algorithm, the generality of conventional VEPSO is not lost: the $\mathbf{P}_g^j$ of a swarm is the best nondominated solution with respect to the objective function optimised by that swarm. Therefore, the $\mathbf{P}_g^j$ of the $j$th swarm is given as follows:
$$\mathbf{P}_g^j = \arg\min_{\hat{\mathbf{x}} \in A} f_j\left(\hat{\mathbf{x}}\right), \tag{8}$$
where $\hat{\mathbf{x}}$ is a nondominated solution and $A$ is the set of nondominated solutions in the archive. For a two-objective-function problem, the particles from the second swarm are guided by the nondominated solution that is best with respect to the first objective function. Meanwhile, the particles of the first swarm are guided by the nondominated solution that is optimal with respect to the second objective function. Thus, this improved algorithm is called Vector Evaluated Particle Swarm Optimisation incorporating nondominated solutions (VEPSOnds).
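The guide selection rule of (8) amounts to a one-line search over the archive. A sketch, assuming the archive stores objective vectors and that the swarm is guided by the archived solution that is best on the other swarm's objective (function name hypothetical):

```python
def guide_for_swarm(archive_fitness, other_objective):
    """Return the index of the archived nondominated solution with the
    best (minimum) fitness on the objective optimised by the other swarm,
    following the selection rule of eq. (8)."""
    return min(range(len(archive_fitness)),
               key=lambda i: archive_fitness[i][other_objective])
```

For a two-objective problem, the second swarm would use `other_objective=0` and the first swarm `other_objective=1`.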
In addition, the PSO algorithm has the natural limitation that particles tend to become stuck in locally optimal solutions [18, 19]. Therefore, this improved VEPSO algorithm also includes the polynomial mutation mechanism from nondominated sorting genetic algorithm-II. The polynomial mutation mechanism modifies the particle position with a certain probability such that the particle can mutate out of a locally optimal solution and continue the search for a globally optimal solution. In this work, one of every ten particles is mutated in the improved VEPSO algorithm.
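A sketch of the polynomial mutation operator of Deb and Agrawal, as it is commonly implemented; the distribution index and per-variable probability shown as defaults are assumptions, not necessarily the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def polynomial_mutation(x, lo, hi, eta_m=0.5, p_m=None):
    """Polynomial mutation: each variable is perturbed with probability
    p_m by a polynomially distributed step of index eta_m, then clipped
    to the search bounds [lo, hi]."""
    x = np.array(x, dtype=float)
    if p_m is None:
        p_m = 1.0 / len(x)   # common default: one variable on average
    for k in range(len(x)):
        if rng.random() < p_m:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            x[k] += delta * (hi - lo)
    return np.clip(x, lo, hi)

mutant = polynomial_mutation([0.5, 0.5, 0.5, 0.5], 0.0, 1.0, p_m=1.0)
```

A small index such as the 0.5 used in the experiments produces larger perturbations than the large indices (around 20) typical in genetic algorithms.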
4. Simulation Experiment
4.1. Performance Measure
In order to analyse the performance of the improved VEPSO algorithms, several quantitative performance measures are used. MOO problems have different features, for example, multiple locally optimal solutions, which can trap the particles and prevent them from obtaining more nondominated solutions. Hence, the number of solutions (NS) measure is used to quantify the total number of nondominated solutions found at the end of the computation. Furthermore, when the particles are trapped in a locally optimal solution, the obtained Pareto front will not converge close to the true Pareto front, which means that the best possible solutions have not yet been found. Thus, the generational distance (GD) is used, defined as the average Euclidean distance between the obtained Pareto front, $PF$, and the true Pareto front, $PF^*$, using (9). A smaller GD value indicates better performance:
$$GD = \frac{1}{n_{PF}} \sqrt{\sum_{i=1}^{n_{PF}} d_i^2}, \tag{9}$$
where $n_{PF}$ is the number of members in $PF$ and $d_i$ is the Euclidean distance between the $i$th member of $PF$ and its nearest member of $PF^*$.
A well-converged Pareto front is not guaranteed to have a good diversity of nondominated solutions along the front. Therefore, the third performance measure used is the spread (SP), which measures the extent of the distribution of the $PF$ along the $PF^*$. Equation (10) is used to measure SP, and smaller values indicate better performance:
$$SP = \frac{d_f + d_l + \sum_{i=1}^{n_{PF}-1} \left| d_i - \bar{d} \right|}{d_f + d_l + \left( n_{PF} - 1 \right) \bar{d}}, \tag{10}$$
where $d_f$ is the Euclidean distance between the first extreme members of $PF$ and $PF^*$, $d_l$ is the Euclidean distance between the last extreme members of $PF$ and $PF^*$, $d_i$ is the Euclidean distance between consecutive members of $PF$, and $\bar{d}$ is the mean of all $d_i$. In some cases, an obtained Pareto front may converge well to the true Pareto front but still have poor diversity, so it is not fair to compare different algorithms using the GD and SP measures only. Finally, the hypervolume (HV) is used to measure the total space or area enclosed by the $PF$ and a reference point, $\mathbf{r}$, which is a vector constructed from the worst objective values of the $PF^*$. The total area for HV is the enclosed area in Figure 4 and is calculated using (11). Larger HV values represent better performance:
$$HV = \bigcup_{i=1}^{n_{PF}} v_i, \tag{11}$$
where $v_i$ is the space or area between $\mathbf{r}$ and the diagonal corner given by the $i$th solution of $PF$.
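For a two-objective front, the GD and HV measures can be computed as sketched below. The HV routine assumes a minimisation front and sorts it by the first objective; the spread measure is omitted for brevity, and the function names are our own:

```python
import numpy as np

def generational_distance(front, true_front):
    """GD, cf. eq. (9): average distance from each obtained solution to
    its nearest neighbour on the true Pareto front."""
    front, true_front = np.asarray(front), np.asarray(true_front)
    d = np.array([np.min(np.linalg.norm(true_front - p, axis=1)) for p in front])
    return float(np.sqrt(np.sum(d ** 2)) / len(front))

def hypervolume_2d(front, ref):
    """HV, cf. eq. (11), for a two-objective minimisation front: the area
    enclosed between the front and the reference point, accumulated as a
    sum of disjoint rectangles after sorting by the first objective."""
    pts = sorted(np.asarray(front, dtype=float).tolist())
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) encloses rectangles of area 3, 2, and 1, giving HV = 6.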
4.2. Test Problems
Five of the benchmark test problems from the ZDT suite are used to evaluate the performance of the algorithms. Because this study focuses on continuous search space problems, the ZDT5 problem is not used, as it is intended for the evaluation of binary problems. All benchmark problems are set up using the parameter values recommended in the original paper. For evaluating the performance measures, the true Pareto front for each problem is obtained from the standard database generated by jMetal (http://jmetal.sourceforge.net/problems.html).
4.3. Evaluation of VEPSO Algorithms
Because the VEPSOnds algorithm includes polynomial mutation, the experiment in this work should also analyse a version of VEPSOnds that does not include polynomial mutation. This is necessary because the polynomial mutation affects the algorithm's performance, and it must be determined whether a change in performance is due to the use of multiple nondominated solutions or to the polynomial mutation. Thus, in this work, the VEPSOnds algorithm without mutation is denoted as VEPSOnds1 and the VEPSOnds algorithm with mutation is denoted as VEPSOnds2.
In this experiment, the total number of particles is fixed to 100 and divided equally among all swarms. The archive size is controlled by removing the nondominated solutions with the smallest crowding distance . In addition, the maximum iteration and archive size are set to 250 and 100, respectively. During the computation, the inertia weight is linearly degraded from 1.0 to 0.4. The cognitive and social constants are both random values between 1.5 and 2.5. Moreover, the distribution index is set to 0.5 for the mutation operation. Each test problem is simulated for 100 runs to enable statistical analysis.
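The linearly degraded inertia weight described above can be sketched as a simple schedule; the function name is hypothetical:

```python
def inertia_weight(t, max_iter, w_start=1.0, w_end=0.4):
    """Linearly decrease the inertia weight from w_start at iteration 0
    to w_end at the final iteration, as in the experimental setup."""
    return w_start - (w_start - w_end) * t / (max_iter - 1)
```

With 250 iterations this yields 1.0 at the first iteration and 0.4 at the last, favouring exploration early and exploitation late.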
The performance of each algorithm tested on the ZDT1 problem is presented in Table 1. For the average NS measure, the number of nondominated solutions found by both improved VEPSO algorithms was significantly greater than that found by conventional VEPSO. For the GD measure, VEPSOnds1 demonstrated significant improvement compared with the conventional VEPSO algorithm. Meanwhile, the VEPSOnds2 algorithm exhibited an extremely large improvement compared with both conventional VEPSO and VEPSOnds1. Similarly, the SP measures for both improved VEPSO algorithms also indicated significant improvement compared with conventional VEPSO. As expected, the HV performance also improved dramatically when the problem was optimised using multiple nondominated solutions as particle guides.
For better visual comparison, the Pareto fronts with the best GD value returned for each test problem are shown in Figures 5 through 9. Figure 5 shows the plot of nondominated solutions with the best GD measure returned for the ZDT1 problem. The nondominated solutions obtained by VEPSO are clearly located very far from the true Pareto front, which leads to a large GD value. Moreover, the obtained solutions are unevenly distributed around the objective space, which yields a large SP value. In contrast, the VEPSOnds1 and VEPSOnds2 algorithms generated nondominated solutions close to and evenly distributed over the true Pareto front. Therefore, the GD and SP values for both improved VEPSO algorithms are significantly smaller than those for conventional VEPSO. However, VEPSOnds2 has a better distribution of nondominated solutions than VEPSOnds1.
Table 2 presents the performance measures for all algorithms tested on the ZDT2 problem. Again, both improved VEPSO algorithms, especially VEPSOnds2, dramatically improved the ability to obtain a large number of solutions compared with VEPSO. In addition to the NS performance, the GD and SP performances were also dramatically improved because the nondominated solutions used in the improved VEPSO algorithms are better guides than the best solution among the particles, which is used in conventional VEPSO. However, in the SP measure, VEPSOnds1 shows negligible improvement, whereas VEPSOnds2 shows a marked improvement over the conventional VEPSO. The conventional VEPSO algorithm was unable to yield a meaningful HV because the obtained nondominated solutions were far worse than the true Pareto front. However, the VEPSOnds1 and VEPSOnds2 algorithms yielded good HV values.
Figure 6 shows the nondominated solutions with the best GD measure returned for the ZDT2 problem. The poor performance of conventional VEPSO is visible because the nondominated solutions found are very distant from the true Pareto front and distributed unevenly in the objective space. Conversely, the VEPSOnds1 algorithm was able to obtain a nondominated solution located on the true Pareto front. However, there is only one such solution, which increases the SP value of this algorithm. In contrast, the VEPSOnds2 algorithm successfully found nondominated solutions very close to the true Pareto front and distributed evenly over it. Thus, polynomial mutation, which prevents the particles from converging too early, is an important mechanism for improving the diversity performance of the algorithm.
Table 3 presents the performance measures of all algorithms tested on the ZDT3 problem. Regarding the NS measure, both improved VEPSO algorithms successfully obtained a large number of nondominated solutions. Moreover, both improved VEPSO algorithms yielded a great improvement in convergence compared with the conventional VEPSO. However, the SP value of the solutions obtained by both improved VEPSO algorithms was degraded in this test. Even with this degradation in diversity performance, they still hold a performance advantage because of their superior convergence. Besides, both improved VEPSO algorithms performed better overall, as their HV values also improved when the particles used additional guides during the optimisation.
Figure 7 shows the nondominated solutions with the best GD measure returned for the ZDT3 problem. Unavoidably, the nondominated solutions obtained by the conventional VEPSO algorithm are scattered far from the true Pareto front, which leads to poor performance. Conversely, both improved VEPSO algorithms were able to obtain nondominated solutions that cover the true Pareto front almost perfectly. Hence, both improved VEPSO algorithms exhibited almost equal improvement, but VEPSOnds1 has weaker diversity performance, as there are fewer solutions in the middle of the Pareto front.
Table 4 presents the performance measures for all algorithms tested on the ZDT4 problem. The average number of nondominated solutions found by the conventional VEPSO algorithm is relatively low compared with VEPSOnds1, which found the most solutions. The conventional VEPSO algorithm had great difficulty escaping from the multiple local optima, which resulted in a very large GD value. However, the improved VEPSO algorithms, in which particles are guided by the nondominated solutions, had less chance of becoming stuck in local optima. Meanwhile, the HV value yielded by the conventional VEPSO algorithm is relatively small compared with that of the improved VEPSO algorithms. Thus, the smaller SP value for conventional VEPSO does not mean it has better performance, as both improved VEPSO algorithms still maintain performance advantages with their better GD and HV values.
Figure 8 shows the nondominated solutions with the best GD measure returned for the ZDT4 problem. The conventional VEPSO algorithm, in which particles follow only one guide, was easily stuck in local optima, as shown in the first plot. Thus, the algorithm was able to find only one nondominated solution. However, both the improved VEPSO algorithms, in which additional guides are used, had less difficulty in obtaining a greater number of diverse nondominated solutions.
Table 5 presents the performance measures for all algorithms tested on the ZDT6 problem. Interestingly, all algorithms found approximately the same number of nondominated solutions. Moreover, the SP and HV values for all algorithms are also similar. Noticeably, however, both improved VEPSO algorithms outperformed the conventional VEPSO in terms of convergence performance.
Figure 9 shows the nondominated solutions with the best GD measure returned for the ZDT6 problem. As predicted, the plots of nondominated solutions are similar because all algorithms exhibit similar results in terms of convergence and diversity. However, the nondominated solutions for the VEPSOnds2 algorithm were not well distributed over the true Pareto front, particularly in the middle of the front in this case, which caused the algorithm to have the largest SP value, as shown in Table 5.
For all test problems, the improved VEPSO algorithms exhibited significant improvement over the conventional VEPSO algorithm for most of the performance measures. The performance improvements occurred because the nondominated solutions always provide a guide at least as good as a solution that optimises only a single objective function. Using a better solution as the leader increases the quality of the result.
4.4. Analysis of the Number of Particles
The performance of the VEPSOnds2 algorithm with various numbers of particles is analysed in this experiment. Most of the parameters are the same as in the previous experiment, except that the particles are equally divided between swarms for a total of 10, 30, 50, 100, 300, 500, and 1000 particles. The performance measurements, taken for each total number of particles and for each benchmark problem, are plotted in Figure 10.
In short, the performance of VEPSOnds2 improves when the number of particles is increased. When VEPSOnds2 is computed with 250 iterations, the algorithm performs well at 300 particles, which is equivalent to 75 000 function evaluations. With a higher number of particles, the algorithm exhibits even better results, but the computational time increases dramatically.
4.5. Analysis of the Number of Iterations
The effect of various numbers of iterations on VEPSOnds2 performance is investigated in this experiment. The number of iterations is set to 10, 30, 50, 100, 300, 500, 1000, 3000, 5000, or 10 000. All other parameters are set as in the previous experiments, and the number of particles is set to 100, divided equally between swarms. The performance metrics for the various numbers of iterations for each benchmark problem are plotted in Figure 11.
When the number of iterations is increased, the performance of VEPSOnds2 improves. The VEPSOnds2 algorithm performs consistently and acceptably with 100 particles when there are 300 iterations, or 30 000 function evaluations. Computing the algorithm with a higher number of iterations, such as 3000 iterations or 300 000 function evaluations, could yield better performance but is only recommended if a powerful computing platform is used.
4.6. Benchmarking with the State-of-the-Art Multiobjective Optimisation Algorithms
The VEPSOnds2 algorithm performed better than the other VEPSO variants in most test cases. Thus, the performance of this algorithm is compared to four other MOO algorithms: nondominated sorting genetic algorithm-II (NSGA-II), strength Pareto evolutionary algorithm 2 (SPEA2), archive-based hYbrid scatter search (AbYSS), and speed-constrained multiobjective PSO (SMPSO). For a fair comparison, all algorithms compute 25 000 function evaluations with the archive size set to 100. NSGA-II, commonly used for performing comparisons, was set to use a population size of 100 for optimisation. This algorithm was set to use the simulated binary crossover (SBX) operator with the crossover probability and the polynomial mutation operator with the mutation probability . The distribution index for both operators was set to . SPEA2 was set to use the same parameters as NSGA-II. AbYSS was set to use a population size of 20. The pairwise combination parameters in AbYSS were set to and . The polynomial mutation parameters were set to values similar to those in NSGA-II and SPEA2. In SMPSO, the population size and the maximum number of iterations were set to 100 and 250, respectively. This algorithm was also set to use polynomial mutation with and .
Table 6 lists the performance of the algorithms on the ZDT1 test problem. The number of solutions found by the VEPSOnds2 is comparable to the other algorithms. However, the average GD value of the VEPSOnds2 is at least 10 times greater than that of the others even though its minimum GD value is close to that of the other algorithms. VEPSOnds2 also has the highest average SP value, but its minimum SP is better than that of NSGA-II. The HV value for VEPSOnds2 is similar to that of the other algorithms.
Table 7 lists the performance of the algorithms on the ZDT2 test problem. VEPSOnds2 was able to obtain a reasonable number of solutions compared to the other algorithms. However, the GD value for VEPSOnds2 is the highest among all algorithms. Additionally, VEPSOnds2 has the greatest average SP value, even though its minimum SP value is better than that of NSGA-II. In the HV measure, the average value returned by VEPSOnds2 is relatively close to the other algorithms and even outperforms the NSGA-II with its maximum value.
Table 8 lists the performance of the algorithms on the ZDT3 test problem. SMPSO and VEPSOnds2 both show poor performance with respect to the maximum number of solutions over all runs. Again, VEPSOnds2 has a GD value roughly 10 times greater than that of the other algorithms. Interestingly, the diversity performance of VEPSOnds2 is very poor, as the average SP value is higher than 1.0. However, the maximum HV value of VEPSOnds2 was not the smallest, and its average is almost as large as those of the rest.
Table 9 lists the performance of the algorithms on the ZDT4 test problem. The multiple local optima featured in this problem challenged VEPSOnds2 greatly, as the number of solutions obtained is very low. In addition, the convergence and diversity performances were very poor, as the GD and SP values are both very large compared to the other algorithms. The HV value was also poor, as the multiple local optima feature is well known as a natural weakness in PSO-based algorithms [18, 19].
Finally, Table 10 lists the performance of the algorithms on the ZDT6 test problem. On average, VEPSOnds2 does not obtain the highest number of nondominated solutions, but the number is still within an acceptable range. However, the GD value for VEPSOnds2 was far worse than that of the other algorithms. In addition, the SP value for VEPSOnds2 is extremely large compared to the other algorithms, and the average HV value for VEPSOnds2 is smaller than that of the other algorithms. On a positive note, the maximum HV value for VEPSOnds2 improves upon that of AbYSS, NSGA-II, and SPEA2.
The main purpose of this experiment is to present the overall performance of the improved VEPSO algorithm in comparison to state-of-the-art algorithms, not to show that it outperforms them. Indeed, the overall performance of VEPSOnds2 is not better than that of all the compared algorithms. However, relatively speaking, its performance is still within an acceptable range and is better than that of some of the other algorithms in certain cases.
5. Conclusion
The conventional VEPSO algorithm uses one swarm to optimise one objective function. The optimisation is guided using only the one best solution found by another swarm with respect to the objective function optimised by that swarm. In contrast, recent PSO-based MOO algorithms prefer to use nondominated solutions as the particle guides. Thus, it is possible to modify the VEPSO algorithm such that the particles are guided by nondominated solutions that are optimal at a specific objective function. Five ZDT test problems were used to investigate the performance of the improved VEPSO algorithm based on the measures of the number of nondominated solutions found, the generational distance, the spread, and the hypervolume.
The experimental results show that the improved algorithms were able to obtain better quality Pareto fronts than conventional VEPSO, especially VEPSOnds2, which consistently returned the best convergence and diversity performance. The introduction of polynomial mutation should reduce the chance of a particle becoming stuck in local optima, which feature prominently in the ZDT4 test problem. However, VEPSOnds2 did not show much improvement over VEPSOnds1 there, possibly because of the choice of the number of particles subject to mutation. Hence, an analysis of the proper number of particles subject to mutation should be considered in future work. Even so, VEPSOnds2 is relatively better than VEPSOnds1, as confirmed by most of the performance measurements.
In addition, in VEPSOnds2, all the particles of a swarm are guided by the same nondominated solution. Thus, there is a greater chance for them to converge prematurely around a guide that might represent a locally optimal solution. In SMPSO, by contrast, each particle selects one of the nondominated solutions by binary tournament, using the crowding distance, as its guide; each particle may therefore follow a different guide during optimisation. Thus, a future version of the VEPSOnds2 algorithm should reduce the chance of all particles following the same guide, in order to prevent premature convergence.
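The SMPSO-style leader selection contrasted above can be sketched as follows: compute the NSGA-II crowding distance over the archive, then hold a binary tournament in which the solution from the less crowded region wins. Function names and the tuple-based front representation are illustrative assumptions.

```python
import random

def crowding_distance(front):
    """NSGA-II crowding distance for each objective vector in the
    front; boundary solutions receive infinity."""
    n, m = len(front), len(front[0])
    d = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        d[order[0]] = d[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for j in range(1, n - 1):
            d[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return d

def tournament_leader(front):
    """Binary tournament: of two randomly chosen archive members, the
    one with the larger crowding distance (less crowded region) wins."""
    d = crowding_distance(front)
    i, j = random.randrange(len(front)), random.randrange(len(front))
    return front[i] if d[i] >= d[j] else front[j]
```

Because each particle calls the tournament independently, particles spread across different leaders, which is precisely the mechanism that helps SMPSO avoid the premature convergence discussed above.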
This work is supported by the Research University Grant (VOT 04J99) from Universiti Teknologi Malaysia, High Impact Research—UM.C/HIR/MOHE/ENG/16(D000016-16001), Research Acculturation Grant Scheme (RDU 121403), and MyPhD Scholarship from Ministry of Higher Education of Malaysia.
- K. E. Parsopoulos and M. N. Vrahatis, “Particle swarm optimization method in multiobjective problems,” in Proceedings of the ACM Symposium on Applied Computing, pp. 603–607, ACM, March 2002.
- D. Gies and Y. Rahmat-Samii, “Vector evaluated particle swarm optimization (VEPSO): optimization of a radiometer array antenna,” in IEEE Antennas and Propagation Society Symposium, pp. 2297–2300, June 2004.
- S. M. V. Rao and G. Jagadeesh, “Vector evaluated particle swarm optimization (VEPSO) of supersonic ejector for hydrogen fuel cells,” Journal of Fuel Cell Science and Technology, vol. 7, no. 4, Article ID 0410141, 2010.
- S. N. Omkar, D. Mudigere, G. N. Naik, and S. Gopalakrishnan, “Vector evaluated particle swarm optimization (VEPSO) for multi-objective design optimization of composite structures,” Computers and Structures, vol. 86, no. 1-2, pp. 1–14, 2008.
- J. G. Vlachogiannis and K. Y. Lee, “Multi-objective based on parallel vector evaluated particle swarm optimization for optimal steady-state performance of power systems,” Expert Systems with Applications, vol. 36, no. 8, pp. 10802–10808, 2009.
- J. Grobler, Particle Swarm Optimization and Differential Evolution for Multi Objective Multiple Machine Scheduling [M.S. thesis], University of Pretoria, 2009.
- M. Reyes-Sierra and C. A. C. Coello, “Multi-objective particle swarm optimizers: a survey of the state-of-the-art,” International Journal of Computational Intelligence Research, vol. 2, no. 3, 2006.
- C. A. Coello Coello and M. S. Lechuga, “MOPSO: a proposal for multiple objective particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 2, pp. 1051–1056, 2002.
- C. A. Coello Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
- X. Li, “A non-dominated sorting particle swarm optimizer for multiobjective optimization,” in Genetic and Evolutionary Computation, E. Cantú-Paz, J. Foster, K. Deb et al., Eds., vol. 2723 of Lecture Notes in Computer Science, pp. 198–198, Springer, Berlin, Germany, 2003.
- K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
- M. Reyes-Sierra and C. A. Coello Coello, “Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance,” in Evolutionary Multi-Criterion Optimization, C. A. Coello Coello, A. Hernández Aguirre, and E. Zitzler, Eds., vol. 3410 of Lecture Notes in Computer Science, pp. 505–519, Springer, Berlin, Germany, 2005.
- M. A. Abido, “Multiobjective particle swarm optimization with nondominated local and global sets,” Natural Computing, vol. 9, no. 3, pp. 747–766, 2010.
- J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
- J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, The Morgan Kaufmann Series in Evolutionary Computation, Morgan Kaufmann Publishers, San Francisco, Calif, USA, 2001.
- H. El-Sayed, M. Belal, A. Almojel, and J. Gaber, “Swarm intelligence,” in Handbook of Bioinspired Algorithms and Applications, S. Olariu and A. Y. Zomaya, Eds., Chapman & Hall/CRC Computer and Information Science Series, pp. 55–63, Taylor & Francis, Boca Raton, Fla, USA, 1st edition, 2006.
- J. D. Schaffer, Some Experiments in Machine Learning Using Vector Evaluated Genetic Algorithms (Artificial Intelligence, Optimization, Adaptation, Pattern Recognition) [Ph.D. thesis], Vanderbilt University, 1984.
- E. Özcan and M. Yılmaz, “Particle swarms for multimodal optimization,” in Adaptive and Natural Computing Algorithms, B. Beliczynski, A. Dzielinski, M. Iwanowski, and B. Ribeiro, Eds., vol. 4431 of Lecture Notes in Computer Science, pp. 366–375, Springer, Berlin, Germany, 2007.
- I. Schoeman and A. Engelbrecht, “A parallel vector-based particle swarm optimizer,” in Adaptive and Natural Computing Algorithms, B. Ribeiro, R. F. Albrecht, A. Dobnikar, D. W. Pearson, and N. C. Steele, Eds., pp. 268–271, Springer, Vienna, Austria, 2005.
- D. A. Van Veldhuizen, Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations [Ph.D. thesis], Air Force Institute of Technology, Air University, 1999.
- E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
- E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: empirical results,” Evolutionary Computation, vol. 8, no. 2, pp. 173–195, 2000.
- E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: improving the strength Pareto evolutionary algorithm for multiobjective optimization,” in Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems, pp. 95–100, International Center for Numerical Methods in Engineering (CIMNE), 2002.
- A. J. Nebro, F. Luna, E. Alba, B. Dorronsoro, J. J. Durillo, and A. Beham, “AbYSS: adapting scatter search to multiobjective optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 4, pp. 439–457, 2008.
- J. Durillo, J. García-Nieto, A. Nebro, C. A. Coello Coello, F. Luna, and E. Alba, “Multi-objective particle swarm optimizers: an experimental comparison,” in Evolutionary Multi-Criterion Optimization, M. Ehrgott, C. Fonseca, X. Gandibleux, J. K. Hao, and M. Sevaux, Eds., vol. 5467 of Lecture Notes in Computer Science, pp. 495–509, Springer, Berlin, Germany, 2009.
- K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, vol. 16 of Systems and Optimization Series, John Wiley and Sons, Chichester, UK, 2001.
- A. J. Nebro, J. J. Durillo, G. Nieto, C. A. C. Coello, F. Luna, and E. Alba, “SMPSO: a new PSO-based metaheuristic for multi-objective optimization,” in Proceedings of the IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM '09), pp. 66–73, USA, April 2009.