Mathematical Problems in Engineering
Volume 2015, Article ID 940592, 10 pages
http://dx.doi.org/10.1155/2015/940592
Research Article

Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization

Na Tian and Zhicheng Ji

1Institute of Educational Informatization, Jiangnan University, Wuxi 214122, China
2Institute of Electrical Automation, Jiangnan University, Wuxi 214122, China

Received 27 April 2015; Accepted 25 June 2015

Academic Editor: Fabio Tramontana

Copyright © 2015 Na Tian and Zhicheng Ji. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A study of pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iterations, an external repository is maintained to remember the nondominated solutions, from which the global best position is chosen. Different elitist selection strategies (preference order, sigma value, and random selection) are compared on four benchmark functions using two metrics. The results demonstrate that QPSO with preference order and QPSO with sigma value have comparable performance, with the better choice depending on the number of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.

1. Introduction

Most real-world optimization problems have more than one objective, with at least two objectives conflicting with each other. Because of the conflict, no single solution optimizes all objectives simultaneously. Instead, there exists a set of optimal trade-off solutions, whose image in objective space is referred to as the pareto-optimal front or pareto front. Such problems are referred to as multiobjective optimization problems.

In the past ten years, a wide variety of algorithms have been proposed to address such problems. Deb et al. [1] proposed a fast and elitist multiobjective genetic algorithm (NSGA-II) that reduces the computational complexity and adopts an elitism strategy. In [2], Konak et al. gave a tutorial on several variants of genetic algorithms and compared their performance. A particle swarm optimization (PSO) with pareto dominance was proposed in [3], in which a secondary repository and a special mutation operator were incorporated. A survey of the state-of-the-art multiobjective PSOs is presented in [4]. From the above literature, a key difference between the genetic algorithm (GA) and PSO emerges when solving multiobjective optimization problems: in GA, a set of individuals has to be selected into the next generation, while, in QPSO, a single particle has to be chosen from the nondominated optima as the global best position (gbest). Therefore, different pareto-ranking strategies are required.

Inspired by quantum mechanics, QPSO was proposed by Sun et al. in 2004 [5] and has been shown to outperform PSO in both convergence rate and global search ability [6, 7]. As with other population-based algorithms (GA, PSO), maintaining a diverse population is an important consideration in multiobjective optimization to ensure that solutions are uniformly distributed over the pareto front. Therefore, in this paper, QPSO with three pareto-ranking strategies is compared on benchmark functions, and two metrics are used to evaluate the performance of each strategy.

Research on the multiobjective flexible job-shop scheduling problem (FJSP) is not as extensive as that on the monoobjective FJSP. Brandimarte [8] was the first to apply the decomposition method to the FJSP, solving the routing subproblem with existing dispatching rules and the scheduling subproblem with a tabu search algorithm. Davarzani et al. [9] used an artificial immune system to solve the multiobjective FJSP and compared it with the Approach by Localization [10] and AL + CGA [11]. Considering the advantages and disadvantages of stochastic algorithms and local search methods, some hybrid approaches have also been proposed [12, 13]. In this paper, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling.

The remainder of this paper is organized as follows. Section 2 provides some basic concepts of multiobjective optimization. Section 3 describes the three variants of pareto-ranking based QPSO in detail. Numerical tests on benchmark functions are presented in Section 4. In Section 5, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling. Finally, conclusions are given in Section 6.

2. Basic Concepts for Multiobjective Optimization

Without loss of generality, only the minimization problem is considered here:

$$\min F(x) = \bigl(f_1(x), f_2(x), \ldots, f_m(x)\bigr) \quad \text{subject to} \quad g_i(x) \le 0, \; i = 1, 2, \ldots, k, \qquad (1)$$

where $x = (x_1, x_2, \ldots, x_n)$ is the vector of decision variables, $f_j(x)$, $j = 1, 2, \ldots, m$, are the objective functions, $g_i(x)$, $i = 1, 2, \ldots, k$, are the constraint functions of the problem, $n$ is the dimension of the search space, and $m$ is the number of objectives.

Definition 1 (pareto dominance). A vector $u = (u_1, \ldots, u_m)$ dominates $v = (v_1, \ldots, v_m)$ (denoted by $u \prec v$) if and only if $u$ is partially less than $v$; that is, $u_i \le v_i$, $\forall i \in \{1, \ldots, m\}$, and $u_i < v_i$, $\exists i \in \{1, \ldots, m\}$.

Definition 2 (pareto optimal set). One has $P^* = \{\, x \in \Omega \mid \neg \exists\, x' \in \Omega : F(x') \prec F(x) \,\}$, where $\Omega$ is the feasible region.

Definition 3 (pareto front). For a given pareto optimal set $P^*$, the pareto front is defined as $PF^* = \{\, F(x) = (f_1(x), \ldots, f_m(x)) \mid x \in P^* \,\}$.

Generally, an analytical expression of the line or surface that contains these points does not exist. It is only possible to determine the nondominated points and thereby produce the pareto front.
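Definition 1 translates directly into code. The following minimal Python sketch (the function names are ours, for illustration only) checks dominance under minimization and filters a set of objective vectors down to its nondominated subset, which is essentially what maintaining the external repository amounts to.

```python
from typing import Sequence

def dominates(u: Sequence[float], v: Sequence[float]) -> bool:
    """True if objective vector u pareto-dominates v (minimization assumed)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(points: list) -> list:
    """Filter a list of objective vectors down to its nondominated subset."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (1, 4) and (3, 2) are incomparable trade-offs; (2, 5) is dominated by (1, 4).
print(nondominated([(1, 4), (3, 2), (2, 5)]))   # -> [(1, 4), (3, 2)]
```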

3. Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization

3.1. Particle Swarm Optimization

PSO is a population-based optimization technique originally proposed by Kennedy and Eberhart in 1995 [14]. A PSO system simulates the knowledge evolution of a social organism, in which particles represent candidate solutions of an optimization problem. The position of each particle is evaluated according to the objective function, and particles share memories of their “best” positions. These memories are used to adjust the particles’ own velocities and their subsequent positions. It has already been shown that PSO is comparable in performance with GA and may be considered as an alternative to it [15].

In a PSO system with $M$ particles in an $n$-dimensional space, the position and velocity vectors of particle $i$ at the $k$th iteration are represented as $X_i^k = (X_{i1}^k, X_{i2}^k, \ldots, X_{in}^k)$ and $V_i^k = (V_{i1}^k, V_{i2}^k, \ldots, V_{in}^k)$. The particle moves according to the following equations:

$$V_{ij}^{k+1} = w V_{ij}^{k} + c_1 r_1 \bigl(P_{ij}^{k} - X_{ij}^{k}\bigr) + c_2 r_2 \bigl(G_{j}^{k} - X_{ij}^{k}\bigr), \qquad X_{ij}^{k+1} = X_{ij}^{k} + V_{ij}^{k+1}, \qquad (2)$$

where $c_1$ and $c_2$ are acceleration coefficients, $r_1$ and $r_2$ are random numbers distributed uniformly in $(0, 1)$, and $w$ is the inertia weight. Vector $P_i^k$ is the previous best position of particle $i$ (pbest), and vector $G^k$ is the position of the best particle in the swarm (gbest).

3.2. Quantum-Behaved Particle Swarm Optimization

The main disadvantage of PSO is that it is not guaranteed to be globally convergent [16]. The concepts of QPSO were developed to address this disadvantage and were first reported by Sun et al. in 2004 [5]. Trajectory analysis in [17] demonstrated that convergence of PSO may be achieved if each particle converges to its local attractor $p_i^k = (p_{i1}^k, \ldots, p_{in}^k)$, defined as

$$p_{ij}^{k} = \varphi_j P_{ij}^{k} + (1 - \varphi_j) G_{j}^{k}, \qquad (3)$$

where $\varphi_j$ is a random number uniformly distributed in $(0, 1)$. It can be seen that $p_i^k$ is a stochastic attractor of particle $i$ that lies in a hyperrectangle with $P_i^k$ and $G^k$ being the two ends of its diagonal and moves following $P_i^k$ and $G^k$.

In the quantum world the velocity of a particle is meaningless, so in the QPSO system the position is the only state used to depict the particles, which move according to the following equation [5]:

$$X_{ij}^{k+1} = p_{ij}^{k} \pm \alpha \bigl| C_{j}^{k} - X_{ij}^{k} \bigr| \ln\frac{1}{u}, \qquad (4)$$

where $u$ is a random number uniformly distributed in $(0, 1)$ and $C^k = (C_1^k, \ldots, C_n^k)$, called the mean best position, is defined as the mean value of the personal best positions of the swarm:

$$C_{j}^{k} = \frac{1}{M} \sum_{i=1}^{M} P_{ij}^{k}. \qquad (5)$$

The parameter $\alpha$ in (4) is named the contraction-expansion (CE) coefficient, which can be adjusted to control the convergence rate. The most commonly used method of controlling CE is to decrease it linearly from $\alpha_{\max}$ to $\alpha_{\min}$:

$$\alpha = \alpha_{\max} - \bigl(\alpha_{\max} - \alpha_{\min}\bigr)\frac{k}{k_{\max}}, \qquad (6)$$

where $k$ is the current iteration step, $k_{\max}$ is the predefined maximum number of iteration steps, and $\alpha_{\max}$ and $\alpha_{\min}$ are the maximum and minimum values of CE.
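To make equations (3)-(5) concrete, here is a short NumPy sketch of one QPSO position update for the whole swarm. The function name, array shapes, and the driver lines are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def qpso_step(X, P, G, alpha, rng):
    """One QPSO position update of the whole swarm, following (3)-(5).

    X : (M, n) current positions
    P : (M, n) personal best positions (pbest)
    G : (n,)   global best position (gbest)
    alpha : contraction-expansion coefficient
    """
    M, n = X.shape
    C = P.mean(axis=0)                              # mean best position, eq. (5)
    phi = rng.random((M, n))
    p = phi * P + (1.0 - phi) * G                   # local attractors, eq. (3)
    u = rng.random((M, n))
    sign = np.where(rng.random((M, n)) < 0.5, 1.0, -1.0)
    return p + sign * alpha * np.abs(C - X) * np.log(1.0 / u)   # eq. (4)

# Example: 20 particles in 10 dimensions with alpha = 0.75.
rng = np.random.default_rng(0)
X = rng.random((20, 10)); P = X.copy(); G = X[0]
X = qpso_step(X, P, G, 0.75, rng)
```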

QPSO does not require velocity vectors for the particles and has fewer parameters to control, making the algorithm easier to implement. Experimental results on well-known benchmark functions show that QPSO performs better than PSO [5–7].

A great deal of effort has been devoted to extending PSO to multiobjective optimization problems; a survey of state-of-the-art work is presented in [4]. One of the most influential contributions was proposed by Coello et al. [3], in which the objective space is divided into hypercubes using an adaptive grid. The nondominated solutions are distributed into the grid cells, and a cell is then chosen according to a roulette wheel selection strategy. However, the grid is problem dependent and computationally expensive. In [18], a density measure strategy named the sigma method was introduced into multiobjective PSO to choose a gbest from the nondominated archive; the sigma method can guide particles toward the pareto optimal front. However, if the initial solutions in the nondominated archive are poor, the algorithm may fall into a local optimum due to its fast convergence. Yang et al. [19] proposed a hybrid method that combines density information with the sigma method so as to achieve a balance between global search and local search. The calculation of the density value and the crowding distance, however, is practical only for a small number of objectives; when the number of objectives becomes large, these measures become invalid. Therefore, the idea of preference order is introduced [20].

3.3. Preference Order

As is well known, when the optimization problem has two objectives, the pareto optimal solutions can be plotted on a curve. Though the trade-off surface can also be visualized for three objectives, it is not easy to pick the final point. For problems with more than three objectives, it becomes extremely difficult to select a solution through visualization.

The idea of preference order was first proposed in 1999 [21]. A point is said to be efficient of order $k$ if it is not dominated by any other point with respect to any of the $k$-element subsets of the objectives.

Consider the minimization example with three objectives in Table 1; the 2-element subsets of the objectives are $(f_1, f_2)$, $(f_1, f_3)$, and $(f_2, f_3)$.

Table 1: Nondominated set.

The dominance relations of all points in the 2-element subsets are shown in Table 2, from which it can be seen that only Point 3 is not dominated by any other point in any 2-element subset. So Point 3 is said to be efficient of order 2.

Table 2: Dominance relations.

Regarding efficiency of order, three claims were given in [22].

Claim 1. In a space with $m$ objectives, there is no more than one point with efficiency of order 1.

Claim 2. A point is efficient of order $k$ if it is efficient of order $k - 1$.

Claim 3. A point is not efficient of order $k - 1$ if it is not efficient of order $k$.

The pseudocode of QPSO with preference order is shown below.

Preference-Order Based QPSO

(1) Initialization: initialize the swarm size $M$, the external archive size, the maximum number of iterations, and the CE coefficient $\alpha$. The position $X_i$ of each particle, $i = 1, \ldots, M$, is randomly initialized. The personal best position $P_i$ of each particle is set equal to $X_i$. Construct the external nondominated set.
(2) Update the following:
  (a) Identify the global best position gbest using the preference-order method:
    (i) Get the combinations of all subsets of the objectives.
    (ii) Compute the efficiency order of each nondominated solution.
    (iii) Sort the nondominated solutions in descending order according to their efficiency order.
    (iv) Get gbest.
  (b) Update the position of each particle according to (3), (4), and (5).
  (c) Update $P_i$ according to the dominance rule.
  (d) Update the external archive and remove the solutions with lower efficiency order if the archive size is exceeded.
  (e) Update $\alpha$ according to (6).
(3) Repeat step (2) until the maximum number of iterations is reached.
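As a concrete reading of step 2(a), the sketch below computes an efficiency order for each archive member by brute force over all objective subsets; the helper names are ours, and choosing a member with the lowest order as gbest is one reasonable interpretation of the selection step. The nested subset loop also makes clear why the method becomes time-consuming as the number of objectives grows.

```python
from itertools import combinations

def dominates_on(u, v, idx):
    """Dominance of u over v restricted to the objective indices in idx."""
    return (all(u[i] <= v[i] for i in idx)
            and any(u[i] < v[i] for i in idx))

def efficiency_order(point, archive, m):
    """Smallest k such that `point` is nondominated in every k-element subset
    of the m objectives (smaller is better); archive members are always
    efficient of order m."""
    others = [q for q in archive if q is not point]
    for k in range(1, m + 1):
        if not any(dominates_on(q, point, idx)
                   for idx in combinations(range(m), k)
                   for q in others):
            return k
    return m + 1

# Rank the archive; a member with the lowest order is one reasonable gbest.
archive = [(1.0, 5.0, 2.0), (2.0, 2.0, 2.0), (4.0, 1.0, 3.0)]
orders = [efficiency_order(p, archive, 3) for p in archive]
gbest = archive[orders.index(min(orders))]
```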

3.4. Sigma Method

The sigma method was first proposed in [18] for finding the best local guide for each particle. Each nondominated solution in the external repository is assigned a value $\sigma$; for two objectives (shown in Figure 1) it is defined as

$$\sigma = \frac{f_1^2 - f_2^2}{f_1^2 + f_2^2}.$$

In the general case of $m$ objectives, $\sigma$ is a vector of $\binom{m}{2}$ elements, each element being a combination of two coordinates. For a problem with three objectives, for example,

$$\sigma = \frac{1}{f_1^2 + f_2^2 + f_3^2}\begin{pmatrix} f_1^2 - f_2^2 \\ f_2^2 - f_3^2 \\ f_3^2 - f_1^2 \end{pmatrix}.$$

Figure 1: Sigma value for a two-objective space.

The pseudocode of QPSO with sigma value is shown below.

QPSO with Sigma Method

(1) Initialization: initialize the swarm size $M$, the external archive size, the maximum number of iterations, and the CE coefficient $\alpha$. The position $X_i$ of each particle, $i = 1, \ldots, M$, is randomly initialized. The personal best position $P_i$ of each particle is set equal to $X_i$. Construct the external nondominated set.
(2) Update the following:
  (a) Identify the local best position of each particle using the sigma method:
    (i) Assign the $\sigma$ value to each particle in the external archive.
    (ii) Calculate the $\sigma$ value of each particle in the swarm.
    (iii) Calculate the distances between the $\sigma$ value of each particle and those of the archive members.
    (iv) The particle in the archive whose $\sigma$ value has the minimum distance to that of particle $i$ is selected as the best local guide of particle $i$.
  (b) Update the position of each particle according to (3), (4), and (5).
  (c) Update $P_i$ according to the dominance rule.
  (d) Update $\alpha$ according to (6).
(3) Repeat step (2) until the maximum number of iterations is reached.
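A minimal sketch of step 2(a) of the sigma method follows. The function names are ours, and the simple pairwise enumeration used to build the vector for an arbitrary number of objectives is our assumption; for two and three objectives it yields the same guide choices as the formulas given above.

```python
import numpy as np

def sigma_vector(f):
    """Sigma vector of an objective vector: pairwise differences of squared
    objectives, normalized by the sum of squares."""
    f = np.asarray(f, dtype=float)
    sq = f ** 2
    m = len(f)
    pairs = [sq[p] - sq[q] for p in range(m) for q in range(p + 1, m)]
    return np.array(pairs) / sq.sum()

def local_guide(particle_f, archive_f):
    """Index of the archive member whose sigma vector is closest (Euclidean
    distance) to that of the particle; used as the particle's local guide."""
    s = sigma_vector(particle_f)
    d = [np.linalg.norm(sigma_vector(a) - s) for a in archive_f]
    return int(np.argmin(d))

# A particle near the f1 axis is guided by the archive member nearest that axis.
print(local_guide((2.0, 0.5), [(0.3, 3.0), (1.5, 1.5), (3.0, 0.4)]))  # -> 2
```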

4. Numerical Experiments

4.1. Test Functions

Four common test functions from the DTLZ suite [20–23] are used to test the proposed algorithm (DTLZ1, DTLZ2, DTLZ3, and DTLZ4), as listed in Table 3. The parameter that controls the hardness of these problems is called the difficulty factor. Table 4 gives the difficulty factor, number of runs, number of iterations, number of particles, and size of the nondominated archive used in our tests. These functions are chosen for the following features:
(1) The relatively small implementation effort.
(2) The ability to be scaled to any number of objectives and decision variables.
(3) The global pareto front being known analytically.
(4) Convergence and diversity difficulties that can be easily controlled.

Table 3: Description of the tested functions.
Table 4: Difficulty factor, number of runs, iterations, particles, and external archive.
4.2. Performance Metrics

Usually, pareto-based multiobjective algorithms consider two aspects: closeness to the global pareto front (distance) and spread along the pareto front (diversity) [2], which are defined as follows.

Generational Distance (GD). It estimates how far the elements in the nondominated set found so far are from those in the pareto optimal set and is defined as

$$GD = \frac{\sqrt{\sum_{i=1}^{n} d_i^2}}{n},$$

where $n$ is the number of points in the nondominated set and $d_i$ is the Euclidean distance between each point and the nearest member of the pareto optimal set.

Spacing (SP). It is a metric measuring the distance variance of neighboring points in the nondominated set and is defined as

$$SP = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\bigl(\bar{d} - d_i\bigr)^2}, \qquad d_i = \min_{j \ne i}\sum_{k=1}^{m}\bigl|f_k^i - f_k^j\bigr|,$$

where $\bar{d}$ is the mean of all $d_i$, $n$ is the number of points in the nondominated set, and $m$ is the number of objectives.
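Both metrics are straightforward to compute from the archived front; the sketch below assumes the obtained front and a sampled true pareto front are given as arrays of objective vectors (the function names are illustrative).

```python
import numpy as np

def generational_distance(front, true_front):
    """GD: root of the summed squared distances from each obtained point to
    its nearest point on (a sampling of) the true pareto front, divided by n."""
    front, true_front = np.asarray(front, float), np.asarray(true_front, float)
    d = np.array([np.linalg.norm(true_front - p, axis=1).min() for p in front])
    return np.sqrt((d ** 2).sum()) / len(front)

def spacing(front):
    """SP: spread of the nearest-neighbour (Manhattan) distances within the
    obtained nondominated set."""
    front = np.asarray(front, float)
    n = len(front)
    d = np.array([min(np.abs(front[i] - front[j]).sum()
                      for j in range(n) if j != i) for i in range(n)])
    return np.sqrt(((d.mean() - d) ** 2).sum() / (n - 1))
```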

4.3. Experiments Results

Tables 5 and 6 show the mean values of generational distance and spacing for the three methods (QPSO + PO, QPSO + Rand, and QPSO + Sigma) with different numbers of objectives (2, 3, 4, 5, 6, 7, and 8). It is easy to see that QPSO with preference order has performance comparable to QPSO with sigma value. When the number of objectives is small (e.g., 2, 3, 4, and 5), QPSO with sigma value performs better. When the number of objectives becomes larger (e.g., 6, 7, and 8), QPSO with preference order obtains solutions closer to the true pareto front, but it is very time-consuming.

Table 5: Mean value of generational distance.
Table 6: Mean value of spacing.

When the number of objectives becomes larger, the number of nondominated particles in the external archive and the number of objective subsets grow accordingly, so the computational cost of QPSO with preference order increases rapidly.

Furthermore, in QPSO with sigma value, each particle flies toward its nearest nondominated particle in the external archive, which can easily result in premature convergence, with particles becoming trapped in a local optimum.

5. QPSO for Flexible Job-Shop Scheduling Problems

5.1. Flexible Job-Shop Scheduling Problems

The FJSP can be formulated as follows. There is a set of $n$ jobs $J = \{J_1, J_2, \ldots, J_n\}$ and a set of $m$ machines, where $M = \{M_1, M_2, \ldots, M_m\}$ denotes the set of all machines. Each job $J_i$ consists of a sequence of $n_i$ operations. Each operation $O_{ij}$ ($j = 1, \ldots, n_i$) of job $J_i$ has to be processed on one machine out of a set of given compatible machines $M_{ij} \subseteq M$. In addition, the FJSP needs to set each operation's starting and ending time. The FJSP thus requires determining both an assignment and a sequence of the operations on the machines in order to satisfy the given criteria. If $M_{ij} \subset M$ for at least one operation, it is a partial flexibility FJSP (P-FJSP); if $M_{ij} = M$ for each operation, it is a total flexibility FJSP (T-FJSP) [10, 11].

In this paper, the following criteria are to be minimized:
(1) The maximal completion time of the machines, that is, the makespan.
(2) The maximal machine workload, that is, the maximum working time spent on any machine.
(3) The total workload of the machines, which represents the total working time over all machines.

During the process of solving this problem, the following assumptions are made:
(1) Each operation cannot be interrupted during its performance (nonpreemptive condition).
(2) Each machine can perform at most one operation at any time (resource constraint).
(3) The precedence constraints of the operations in a job can be defined for any pair of operations.
(4) Setup times of machines and move times between operations are negligible.
(5) Machines are independent of each other.
(6) Jobs are independent of each other.
(7) All machines are available at time 0.
(8) Each job has its own release date.
(9) The order of operations for each job is predefined and invariant.

In the model of the multiobjective FJSP, inequality (14) ensures the operation precedence constraint, inequality (15) ensures that each machine processes at most one operation at a time, and equation (16) states that exactly one machine is selected from the set of available machines for each operation.
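Once a particle is decoded into a schedule, the three criteria listed above can be evaluated directly. The following sketch assumes a simple flat schedule representation (an assumption for illustration, not the encoding of Section 5.2).

```python
from collections import defaultdict

def fjsp_objectives(schedule):
    """Evaluate the three criteria from a decoded schedule.

    `schedule` is a list of (job, operation, machine, start, end) tuples;
    this flat format is an illustrative assumption, not the paper's encoding.
    Returns (makespan, maximal machine workload, total workload).
    """
    workload = defaultdict(float)
    makespan = 0.0
    for _job, _op, machine, start, end in schedule:
        workload[machine] += end - start        # busy time accumulated per machine
        makespan = max(makespan, end)           # latest completion time
    return makespan, max(workload.values()), sum(workload.values())

# Two jobs, two machines: makespan 7, max workload 5, total workload 9.
sched = [(1, 1, "M1", 0, 3), (1, 2, "M2", 3, 7), (2, 1, "M1", 3, 5)]
print(fjsp_objectives(sched))
```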

5.2. Encoding Scheme

The particle’s position representation is the most important task for the successful application of PSO or QPSO to the FJSP. In this paper, the particle’s position is represented by a vector that consists of two parts (see Table 7 for an example). The first part represents the sequence of the operations. The second part represents the machines on which the operations in Table 7 are processed; in Figure 2, for example, the string 1000 indicates that the corresponding operation is processed on machine $M_1$, and 0001 indicates that it is processed on machine $M_4$. The length of the vector is therefore determined by the total number of operations and the number of machines. The advantage of this encoding method is that it is easy to apply crossover and mutation to the operation part and the machine part separately, while its disadvantage is the storage complexity.

Table 7: Example of 3 jobs in 4 machines.
Figure 2: Encoding of a particle.
5.3. Swarm Initialization

The initial population has a great influence on the performance of the algorithm. In this paper, a guided initialization method is used. In order to ensure the diversity of the population, 50% of the particles are initialized randomly, and the others are initialized according to the following rules.

Guided Initialization

Step 0: Initialize. It includes the following:
(a) Create an array for a random job order $J$ (e.g., J4 J2 J3 J1).
(b) Assign the first operation of each job to a suitable machine following the order $J$. This machine is selected by calling Step 1; the operation is assigned directly to the selected machine. If no operation has yet been assigned to the selected machine, the operation starts from time 0. Otherwise, it starts from the stop time of the last operation on the selected machine.

Step 1: Find a Suitable Machine to Process an Operation. It includes the following:
(a) Identify the machine Ms with the earliest stop time and the last operation on Ms. If more than one machine qualifies, one of them is selected at random. Let $t$ be the stop time of Ms.
(b) Find a suitable machine to process the next operation to be scheduled by the following procedure:
  (i) Identify all the machines that can process this operation.
  (ii) Calculate the waiting time of each such machine: if the machine does not process any operation at time $t$, the waiting time equals the processing time of the operation on that machine. Otherwise, if it is processing an operation, its waiting time is the total processing time of the operations on its waiting list + the remaining time of the operation in process + the processing time of the new operation on that machine.
  (iii) The machine with the shortest waiting time is chosen. If this machine does not process any operation at time $t$, assign the operation to it to be processed. Otherwise, add the operation to the waiting list of this machine.

Step 2: Find a Suitable Order of Operations on a Machine. From Step 1, Ms and its last operation are identified, and the chosen operation is assigned to a machine. Now an operation needs to be selected from the waiting list to be processed on Ms. If there is more than one operation on the waiting list of Ms, select one of them according to one of the following dispatching rules: shortest processing time, longest processing time, or first in first out.

Step 3. Return to Step 1.

5.4. Crossover and Mutation

For combinatorial optimization problems, effective information exchange helps to find better solutions. The crossover operator on the operation part is shown in Figure 3, in which the offspring C1 and C2 inherit some operations directly from the parents P1 and P2, and the remaining operations are copied from the other parent.

Figure 3: Crossover on operation part.
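The crossover of Figure 3 can be realized as a job-based, precedence-preserving exchange; the sketch below shows one common way to do this (the job-subset split and the function name are our assumptions).

```python
import random

def operation_part_crossover(p1, p2, jobs):
    """Precedence-preserving crossover on the operation part (in the spirit
    of Figure 3): each offspring keeps the operations of a randomly chosen
    job subset in place from one parent and fills the remaining positions
    in the order they appear in the other parent.

    p1, p2 : operation sequences such as [1, 3, 2, 1, 2, 3], where the k-th
             occurrence of a job number denotes that job's k-th operation.
    """
    keep = set(random.sample(sorted(jobs), max(1, len(jobs) // 2)))

    def child(keeper, filler):
        rest = iter(g for g in filler if g not in keep)
        return [g if g in keep else next(rest) for g in keeper]

    return child(p1, p2), child(p2, p1)

random.seed(1)
c1, c2 = operation_part_crossover([1, 3, 2, 1, 2, 3], [3, 2, 1, 2, 3, 1], {1, 2, 3})
```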

The crossover operator on the machine part is shown in Figure 4, in which two positions are selected randomly, in the same way as in GA.

Figure 4: Crossover on machine part.

The mutation of a particle is also divided into two parts. The first is on the operation part (Figure 5), in which two positions are randomly chosen and the operations between them are reversed.

Figure 5: Mutation on operation part.
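A minimal sketch of this operation-part mutation is shown below (the function name is ours). Reversing a segment keeps the multiset of operations intact, so the offspring remains a feasible operation sequence.

```python
import random

def reverse_segment_mutation(seq):
    """Mutation on the operation part (Figure 5): pick two positions at
    random and reverse the operations between them."""
    i, j = sorted(random.sample(range(len(seq)), 2))
    return seq[:i] + seq[i:j + 1][::-1] + seq[j + 1:]

random.seed(2)
print(reverse_segment_mutation([1, 3, 2, 1, 2, 3]))
```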

The other mutation is on the machine part, but the operation to be mutated cannot be chosen at random. Because the makespan and workload are determined by the critical path [13] of the scheduling sequence, a feasible mutation method to reduce the makespan is to choose a critical operation on the critical path and then try to move it to another machine. In the example of Figure 6, moving one critical operation to another machine reduces the makespan from 12 to 11.

Figure 6: The Gantt graph of moving operation.
5.5. Experimental Results

As the benchmark analysis in Section 4 shows, QPSO with sigma value outperforms QPSO with preference order for smaller numbers of objectives. In this section, only three objectives are considered (makespan, maximal machine workload, and total workload), so QPSO with sigma value is adopted.

To illustrate the performance of QPSO with sigma value, four representative problem instances based on practical data have been selected from the literature [8–11].

The parameters are set as follows: the population size and the size of the external archive are fixed, the dimension of each particle equals the total number of operations, and a maximum number of iterations is imposed.

From Table 8, it is easy to see that, for three of the four problems, the results obtained by QPSO with sigma dominate those of the other algorithms. For the remaining problem, the result obtained by QPSO with sigma is not dominated by any other algorithm. Therefore, QPSO with sigma value is competitive for solving multiobjective flexible job-shop scheduling problems.

Table 8: Comparison of results in four problems.

Figures 7–10 show the Gantt graphs of the optimized scheduling sequences obtained by QPSO with sigma.

Figure 7: Gantt graph of problem .
Figure 8: Gantt graph of problem .
Figure 9: Gantt graph of problem .
Figure 10: Gantt graph of problem .

6. Conclusion and Future Works

In this paper, QPSO with three elitist selection strategies is used to solve multiobjective optimization problems. The results show that QPSO with preference order has performance comparable to QPSO with sigma value. When the number of objectives is small (e.g., 2, 3, 4, and 5), QPSO with sigma value performs better. However, when the number of objectives becomes larger (e.g., 6, 7, and 8), QPSO with preference order obtains solutions closer to the true pareto front but is very time-consuming. Therefore, in future work, we will try to combine these two methods to achieve a balance between global search and local search.

Moreover, we applied QPSO with sigma value to solve multiobjective flexible job-shop scheduling problems, and the results demonstrate the competitive performance of the algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by Jiangsu Postdoctoral Funding (Project no. 1401004B), by National High-Technology Research Development Plan Project (Project no. 2013AA040405), and by National Natural Science Funding (Project no. 61300152).

References

1. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
2. A. Konak, D. W. Coit, and A. E. Smith, “Multi-objective optimization using genetic algorithms: a tutorial,” Reliability Engineering and System Safety, vol. 91, no. 9, pp. 992–1007, 2006.
3. C. A. Coello Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
4. M. Reyes-Sierra and C. A. Coello Coello, “Multi-objective particle swarm optimizers: a survey of the state-of-the-art,” International Journal of Computational Intelligence Research, vol. 2, no. 3, pp. 287–308, 2006.
5. J. Sun, B. Feng, and W. B. Xu, “Particle swarm optimization with particles having quantum behavior,” in Proceedings of the Congress on Evolutionary Computation (CEC '04), pp. 325–331, Portland, Ore, USA, June 2004.
6. J. Sun, W. Fang, X. J. Wu, V. Palade, and W. Xu, “Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection,” Evolutionary Computation, vol. 20, no. 3, pp. 349–393, 2012.
7. J. Sun, X. Wu, V. Palade, W. Fang, C.-H. Lai, and W. Xu, “Convergence analysis and improvements of quantum-behaved particle swarm optimization,” Information Sciences, vol. 193, pp. 81–103, 2012.
8. P. Brandimarte, “Routing and scheduling in a flexible job shop by tabu search,” Annals of Operations Research, vol. 41, no. 3, pp. 157–183, 1993.
9. Z. Davarzani, M. Akbarzadeh, and N. Khairdoost, “Multi-objective artificial immune algorithm for flexible job shop scheduling problem,” International Journal of Hybrid Information Technology, vol. 5, no. 3, pp. 75–88, 2012.
10. I. Kacem, S. Hammadi, and P. Borne, “Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems,” IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, vol. 32, no. 1, pp. 1–13, 2002.
11. I. Kacem, S. Hammadi, and P. Borne, “Pareto-optimality approach for flexible job-shop scheduling problems: hybridization of evolutionary algorithms and fuzzy logic,” Mathematics and Computers in Simulation, vol. 60, no. 3–5, pp. 245–276, 2002.
12. G. H. Zhang, X. Y. Shao, P. G. Li, and L. Gao, “An effective hybrid particle swarm optimization algorithm for multi-objective flexible job-shop scheduling problem,” Computers & Industrial Engineering, vol. 56, no. 4, pp. 1309–1318, 2009.
13. J. Gao, L. Sun, and M. Gen, “A hybrid genetic and variable neighborhood descent algorithm for flexible job shop scheduling problems,” Computers & Operations Research, vol. 35, no. 9, pp. 2892–2907, 2008.
14. J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
15. R. C. Eberhart and Y. Shi, “Comparison between genetic algorithms and particle swarm optimization,” in Evolutionary Programming VII, vol. 1447 of Lecture Notes in Computer Science, pp. 611–616, Springer, Berlin, Germany, 1998.
16. F. van den Bergh, An Analysis of Particle Swarm Optimizers [Ph.D. thesis], University of Pretoria, Pretoria, South Africa, 2001.
17. M. Clerc and J. Kennedy, “The particle swarm—explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
18. S. Mostaghim and J. Teich, “Strategies for finding good local guides in multi-objective particle swarm optimization,” in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 26–33, 2003.
19. J. J. Yang, J. Zhou, L. Liu, and Y. Li, “A novel strategy of pareto-optimal solution searching in multi-objective particle swarm optimization (MOPSO),” Computers & Mathematics with Applications, vol. 57, no. 11-12, pp. 1995–2000, 2009.
20. Y. Wang and Y. Yang, “Particle swarm optimization with preference order ranking for multi-objective optimization,” Information Sciences, vol. 179, no. 12, pp. 1944–1959, 2009.
21. I. Das, “A preference ordering among various pareto optimal alternatives,” Structural Optimization, vol. 18, no. 1, pp. 30–35, 1999.
22. F. di Pierro, S.-T. Khu, and D. A. Savić, “An investigation on preference order ranking scheme for multiobjective evolutionary optimization,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 1, pp. 17–45, 2007.
23. K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable multi-objective optimization test problems,” in Proceedings of the Congress on Evolutionary Computation (CEC '02), vol. 1, pp. 825–830, May 2002.