Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization
A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iteration, an external repository is maintained to store the nondominated solutions, from which the global best position is chosen. Different elitist selection strategies (preference order, sigma value, and random selection) are compared on four benchmark functions using two metrics. The results demonstrate that QPSO with preference order performs comparably to QPSO with sigma value, depending on the number of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.
Most real-world optimization problems have more than one objective, with at least two objectives conflicting with each other. Because of the conflicting objectives, no single optimal solution exists. Instead, there is a set of optimal trade-off solutions, referred to as the Pareto-optimal front or Pareto front. Problems of this kind are referred to as multiobjective optimization problems.
In the past ten years, a wide variety of algorithms have been proposed to address such problems. Deb et al. proposed a fast and elitist multiobjective genetic algorithm (NSGA-II) to reduce the computational complexity and adopted an elitism strategy. Konak et al. gave a tutorial on several variants of genetic algorithms and compared their performance. A particle swarm optimization (PSO) with Pareto dominance was proposed in which a secondary repository and a special mutation operator were incorporated, and a survey of the state-of-the-art multiobjective PSOs is also available. The above literature reveals a key difference between the genetic algorithm (GA) and PSO when solving multiobjective optimization problems: in GA, a set of individuals must be selected into the next generation, while in QPSO one particle has to be chosen from the nondominated optima as the global best position (gbest). Therefore, different Pareto-ranking strategies are required.
Inspired by quantum mechanics, QPSO was proposed by Sun et al. in 2004 and has been shown to outperform PSO in both convergence rate and global search ability [6, 7]. As with other population-based algorithms (GA, PSO), maintaining a diverse population is an important consideration in multiobjective optimization, so that solutions are distributed uniformly over the Pareto front. Therefore, in this paper, QPSO with three Pareto-ranking strategies is compared on several benchmark functions, and two metrics are used to evaluate the performance of each strategy.
Research on the multiobjective FJSP is not as extensive as that on the mono-objective FJSP. Brandimarte was the first to apply the decomposition method to the FJSP, solving the routing subproblem with existing dispatching rules and the scheduling subproblem with a tabu search algorithm. Davarzani et al. used an artificial immune system to solve the multiobjective FJSP and compared it with the Approach by Localization (AL) and AL + CGA. Considering the advantages and disadvantages of stochastic algorithms and local search methods, some hybrid approaches have also been proposed [12, 13]. In this paper, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling.
The remainder of this paper is organized as follows. In Section 2, some basic concepts of multiobjective optimization are provided. Section 3 describes the three variants of Pareto-ranking based QPSO in detail. Numerical tests on benchmark functions are presented in Section 4. In Section 5, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling. Finally, the conclusion is given in Section 6.
2. Basic Concepts for Multiobjective Optimization
Without loss of generality, only the minimization problem is assumed here:

minimize F(x) = (f_1(x), f_2(x), ..., f_m(x)),
subject to g_i(x) ≤ 0, i = 1, 2, ..., p,

where x = (x_1, x_2, ..., x_n) is the vector of decision variables; f_i, i = 1, ..., m, are the objective functions; g_i, i = 1, ..., p, are the constraint functions; n is the dimension of the search space; and m is the number of objectives.
Definition 1 (Pareto dominance). A vector u = (u_1, ..., u_m) dominates v = (v_1, ..., v_m) (denoted by u ≺ v) if and only if u is partially less than v; that is, u_i ≤ v_i for all i ∈ {1, ..., m}, and u_i < v_i for at least one i.
Definition 2 (Pareto optimal set). One has P* = {x in the feasible region | there is no feasible x' such that F(x') ≺ F(x)}.
Definition 3 (Pareto front). For a given Pareto optimal set P*, the Pareto front is defined as PF* = {F(x) | x ∈ P*}.
Generally, no analytical expression exists for the line or surface containing these points. In practice, one can only determine the nondominated points and use them to produce the Pareto front.
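As a minimal illustration of Definitions 1 and 2, the dominance test and nondominated filtering can be sketched in Python (function names are ours; minimization is assumed, as above):

```python
import numpy as np

def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def nondominated(points):
    """Keep only the points not dominated by any other point."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
```

This is the dominance rule used throughout the algorithms below to maintain the external archive.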
3. Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization
3.1. Particle Swarm Optimization
PSO is a population-based optimization technique originally proposed by Kennedy and Eberhart in 1995. A PSO system simulates the knowledge evolution of a social organism, in which particles represent candidate solutions of an optimization problem. The position of each particle is evaluated according to the objective function, and particles share memories of their “best” positions. These memories are used to adjust the particles’ own velocities and their subsequent positions. It has already been shown that PSO is comparable in performance with GA and may be considered as an alternative to it.
In a PSO system with M particles in the n-dimensional space, the position and velocity vectors of particle i at the kth iteration are represented as X_i^k and V_i^k. The particle moves according to the following equations:

V_i^{k+1} = w V_i^k + c_1 r_1 (P_i^k − X_i^k) + c_2 r_2 (G^k − X_i^k),
X_i^{k+1} = X_i^k + V_i^{k+1},

where c_1 and c_2 are acceleration coefficients, r_1 and r_2 are random numbers distributed uniformly in (0, 1), and w is the inertia weight. Vector P_i is the previous best position of particle i (pbest), and vector G is the position of the best particle in the swarm (gbest).
3.2. Quantum-Behaved Particle Swarm Optimization
The main disadvantage of PSO is that it is not guaranteed to be globally convergent. The concepts of QPSO were developed to address this disadvantage and were first reported by Sun et al. in 2004. Trajectory analysis demonstrated that convergence of PSO may be achieved if each particle converges to its local attractor p_i, defined as

p_i = φ P_i + (1 − φ) G, where φ = c_1 r_1 / (c_1 r_1 + c_2 r_2).

It can be seen that p_i is a stochastic attractor of particle i that lies in a hyperrectangle with P_i and G being two ends of its diagonal, and it moves following P_i and G.
In the quantum world, the velocity of a particle is meaningless, so in a QPSO system position is the only state used to depict the particles, which move according to the following equation:

X_i^{k+1} = p_i ± β |C − X_i^k| ln(1/u),

where u is a random number uniformly distributed in (0, 1) and C, called the mean best position, is defined as the mean value of the personal best positions of the swarm:

C = (1/M) Σ_{i=1}^{M} P_i.

The parameter β in (4) is named the contraction-expansion (CE) coefficient, which can be adjusted to control the convergence rate. The most commonly used method to control CE is linear decrease from β_max to β_min:

β = β_max − (β_max − β_min) k / K,

where k is the current iteration step, K is the predefined maximum number of iteration steps, and β_max and β_min are the maximum and minimum values of CE.
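One iteration of the QPSO move above can be sketched in vectorized NumPy (a sketch under our own naming; the per-dimension attractor, mean best position, and linearly decreasing CE coefficient follow the equations just given):

```python
import numpy as np

def qpso_step(X, pbest, gbest, beta, rng=np.random.default_rng()):
    """One QPSO position update: local attractor p, mean best position,
    and the contraction-expansion move X = p +/- beta*|C - X|*ln(1/u)."""
    M, D = X.shape
    phi = rng.random((M, D))
    p = phi * pbest + (1 - phi) * gbest        # stochastic local attractor
    C = pbest.mean(axis=0)                     # mean of personal best positions
    u = rng.random((M, D))
    sign = np.where(rng.random((M, D)) < 0.5, -1.0, 1.0)
    return p + sign * beta * np.abs(C - X) * np.log(1.0 / u)

def ce_coefficient(k, K, beta_max=1.0, beta_min=0.5):
    """Linearly decreasing contraction-expansion coefficient."""
    return beta_max - (beta_max - beta_min) * k / K
```

Note that no velocity array is kept, which is exactly the implementation simplification claimed for QPSO.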
QPSO does not require velocity vectors for the particles and has fewer parameters to control, making the algorithm easier to implement. Experimental results performed on some well-known benchmark functions show that QPSO has better performance than PSO [5–7].
A great deal of effort has been devoted to extending PSO to multiobjective optimization problems, and a survey of state-of-the-art work is available. The most influential work was proposed by Coello et al., in which the objective space is divided into hypercubes using an adaptive grid. The nondominated solutions are distributed into the grid cells, and a cell is then chosen according to a roulette wheel selection strategy. However, the grid is problem dependent and computationally expensive. Later, a new density measure named the sigma method was introduced to multiobjective PSO to choose a guide from the nondominated archive; the sigma method can guide particles to the Pareto-optimal front. However, if the initial solutions in the nondominated archive are poor, the algorithm will fall into a local optimum due to its fast convergence. Yang et al. proposed a hybrid method that combines density information and the sigma method to achieve a balance between global search and local search. The calculation of density and crowding distance, however, is effectively limited to two objectives: when the number of objectives becomes large, these measures become invalid. Therefore, the idea of preference order is introduced.
3.3. Preference Order
As is well known, when the optimization problem has two objectives, the Pareto-optimal solutions can be plotted on a curve. Though the trade-off surface can also be visualized for three objectives, it is not easy to pick the final point. For problems with more than three objectives, it becomes extremely difficult to find the optimal solution through visualization.
The idea of preference order was first proposed in 1999. A point is said to be efficient of order k if it is not dominated on any of the k-element subsets of the objectives.
Consider a minimization example with three objectives in Table 1; the subsets of the objectives are {f_1}, {f_2}, {f_3}, {f_1, f_2}, {f_1, f_3}, {f_2, f_3}, and {f_1, f_2, f_3}.
The dominance relations of all points on the 2-element subsets are shown in Table 2, from which Point 3 is the only point not dominated on any 2-element subset. So Point 3 is said to be efficient of order 2.
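The efficiency-of-order computation can be sketched as follows (a straightforward, unoptimized Python version; the point data in the usage below are our own, since Table 1 is not reproduced here):

```python
from itertools import combinations
import numpy as np

def dominated_on(points, i, idx):
    """True if point i is dominated by another point when only the
    objectives in idx are considered (minimization)."""
    P = np.asarray(points)[:, idx]
    return any(np.all(P[j] <= P[i]) and np.any(P[j] < P[i])
               for j in range(len(P)) if j != i)

def efficiency_order(points, i):
    """Smallest k such that point i is not dominated on any k-element
    subset of the objectives (smaller order is better)."""
    m = np.asarray(points).shape[1]
    for k in range(1, m + 1):
        if not any(dominated_on(points, i, list(sub))
                   for sub in combinations(range(m), k)):
            return k
    return m + 1  # dominated even on the full objective set
```

For example, with points [[1, 5, 9], [2, 2, 2], [9, 1, 5]], the middle point has efficiency order 2, while the first has order 3; sorting the archive by this order gives the ranking used to pick the global guide.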
Regarding efficiency of order, three claims have been given.
Claim 1. In a space with m objectives, there is no more than one point with efficiency of order 1.
Claim 2. A point is efficient of order k if it is efficient of order k − 1.
Claim 3. A point is not efficient of order k if it is not efficient of order k + 1.
The pseudocode of QPSO with preference order is shown below.
Preference-Order Based QPSO
(1) Initialization: initialize the swarm size M, the external archive size, the maximum number of iterations, and the CE coefficient. The position of each particle i, i = 1, ..., M, is randomly initialized. The personal best position of each particle is set equal to its position. Construct the external nondominated set.
(2) Update the following:
(a) Identify the global best position using the preference-order method:
(i) Get the combination of all subsets of the objectives.
(ii) Compute the efficiency order of each nondominated solution.
(iii) Sort the nondominated solutions in descending order according to their efficiency order.
(iv) Select the global best position.
(b) Update the position of each particle according to (3), (4), and (5).
(c) Update the personal best position according to the dominance rule.
(d) Update the external archive, removing the solution with lower efficiency order if the archive size is exceeded.
(e) Update the CE coefficient according to (6).
(3) Repeat Step (2) until the maximum number of iterations is reached.
3.4. Sigma Method
The sigma method was first proposed for finding the best local guide for each particle. Each nondominated solution in the external repository is assigned a value σ; for two objectives (shown in Figure 1),

σ = (f_1^2 − f_2^2) / (f_1^2 + f_2^2).

In the general case, σ is a vector of m(m − 1)/2 elements, each of which combines two coordinates. For a problem with three objectives, for example,

σ = (f_1^2 − f_2^2, f_2^2 − f_3^2, f_3^2 − f_1^2) / (f_1^2 + f_2^2 + f_3^2).
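The sigma assignment and the nearest-sigma guide selection can be sketched as follows (function names are ours; nonzero objective vectors are assumed, as in the sigma method):

```python
import numpy as np
from itertools import combinations

def sigma_value(f):
    """Sigma vector of an objective vector f: one element per objective
    pair, (f_i^2 - f_j^2) / sum of all squared objectives."""
    f2 = np.asarray(f, dtype=float) ** 2
    pairs = combinations(range(len(f2)), 2)
    return np.array([f2[i] - f2[j] for i, j in pairs]) / f2.sum()

def sigma_guide(particle_f, archive_f):
    """Index of the archive member whose sigma vector is closest
    (Euclidean distance) to the particle's sigma vector."""
    s = sigma_value(particle_f)
    dists = [np.linalg.norm(sigma_value(a) - s) for a in archive_f]
    return int(np.argmin(dists))
```

Points on the same ray from the origin share the same sigma vector, which is why the nearest-sigma archive member serves as a natural local guide.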
The pseudocode of QPSO with sigma value is shown below.
QPSO with Sigma Method
(1) Initialization: initialize the swarm size M, the external archive size, the maximum number of iterations, and the CE coefficient. The position of each particle i, i = 1, ..., M, is randomly initialized. The personal best position of each particle is set equal to its position. Construct the external nondominated set.
(2) Update the following:
(a) Identify the local best position of each particle using the sigma method:
(i) Assign the value σ to each particle in the external archive.
(ii) Calculate the value σ for each particle in the swarm.
(iii) Calculate the distances between the particle's σ and those of the archive members.
(iv) The particle in the archive whose σ has the minimum distance to the particle's σ is selected as its best local guide.
(b) Update the position of each particle according to (3), (4), and (5).
(c) Update the personal best position according to the dominance rule.
(d) Update the CE coefficient according to (6).
(3) Repeat Step (2) until the maximum number of iterations is reached.
4. Numerical Experiments
4.1. Test Functions
Common test functions [20–23] (DTLZ1, DTLZ2, DTLZ3, and DTLZ4) are used to test the proposed algorithm, as listed in Table 3. We call k the difficulty factor. Table 4 gives the difficulty factor, number of runs, number of iterations, number of particles, and size of the nondominated archive used in our tests. These functions are chosen for the following reasons:
(1) the relatively small implementation effort;
(2) the ability to be scaled to any number of objectives and decision variables;
(3) the global Pareto front is known analytically;
(4) convergence and diversity difficulties can be easily controlled.
4.2. Performance Metrics
Usually, Pareto-based multiobjective algorithms are assessed from two aspects: closeness to the global Pareto front (distance) and spread along the Pareto front (diversity), which are measured as follows.
Generational Distance (GD). GD estimates how far the elements of the nondominated set found so far are from those of the Pareto-optimal set. It is defined as

GD = sqrt(Σ_{i=1}^{n} d_i^2) / n,

where n is the number of points in the nondominated set and d_i is the Euclidean distance between point i and the nearest member of the Pareto-optimal set.
Spacing (SP). SP measures the distance variance of neighboring points in the nondominated set. It is defined as

SP = sqrt( (1 / (n − 1)) Σ_{i=1}^{n} (d̄ − d_i)^2 ),
d_i = min_{j ≠ i} Σ_{k=1}^{m} |f_k^i − f_k^j|,

where d̄ is the mean of all d_i, n is the number of points in the nondominated set, and m is the number of objectives.
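The two metrics can be sketched as follows (one common convention; slight variants of both formulas exist in the literature):

```python
import numpy as np

def generational_distance(front, true_front):
    """GD: closeness of the found front to the true Pareto front, via the
    Euclidean distance of each point to its nearest true-front member."""
    F, T = np.asarray(front, float), np.asarray(true_front, float)
    d = np.array([np.min(np.linalg.norm(T - p, axis=1)) for p in F])
    return np.sqrt((d ** 2).sum()) / len(F)

def spacing(front):
    """SP: spread of the L1 distances from each point to its nearest
    neighbour within the front (0 means perfectly even spacing)."""
    F = np.asarray(front, float)
    n = len(F)
    d = np.array([min(np.abs(F[i] - F[j]).sum() for j in range(n) if j != i)
                  for i in range(n)])
    return np.sqrt(((d.mean() - d) ** 2).sum() / (n - 1))
```

A front lying exactly on the true front gives GD = 0, and evenly spaced points give SP = 0, matching the intended readings of "distance" and "diversity".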
4.3. Experiments Results
Tables 5 and 6 show the mean values of generational distance and spacing for the three methods (QPSO + PO, QPSO + Rand, and QPSO + Sigma) with different numbers of objectives (2, 3, 4, 5, 6, 7, and 8). It is easy to see that QPSO with preference order performs comparably to QPSO with sigma value. When the number of objectives is small (e.g., 2, 3, 4, and 5), QPSO with sigma value performs better. When the number of objectives becomes larger (e.g., 6, 7, and 8), QPSO with preference order obtains solutions closer to the true Pareto front, but it is very time-consuming.
When the number of objectives becomes larger, the number of nondominated particles in the external archive and the number of objective subsets grow accordingly. Therefore, the computational complexity of QPSO with preference order grows rapidly with the number of objectives.
Furthermore, in QPSO with sigma value, each particle flies towards its nearest nondominated particle in the external archive, which can easily result in premature convergence, with the particles trapped in a local optimum.
5. QPSO for Flexible Job-Shop Scheduling Problems
5.1. Flexible Job-Shop Scheduling Problems
The FJSP can be formulated as follows. There are a set of jobs and a set of machines; M denotes the set of all machines. Each job i consists of a sequence of operations. Each operation O_{i,j} (the jth operation of job i) has to be processed on one machine out of a set M_{i,j} of compatible machines, with M_{i,j} ⊆ M. The execution of each operation requires one machine selected from this set. In addition, the FJSP must set each operation's starting and ending times. The FJSP thus requires determining both an assignment and a sequence of the operations on the machines in order to satisfy the given criteria. If M_{i,j} ⊂ M for at least one operation, it is a partial-flexibility FJSP (P-FJSP); if M_{i,j} = M for each operation, it is a total-flexibility FJSP (T-FJSP) [10, 11].
In this paper, the following criteria are to be minimized:
(1) the maximal completion time of machines, that is, the makespan;
(2) the maximal machine workload, that is, the maximum working time spent on any machine;
(3) the total workload of machines, which represents the total working time over all machines.
During the process of solving this problem, the following assumptions are made:
(1) each operation cannot be interrupted during its performance (nonpreemptive condition);
(2) each machine can perform at most one operation at any time (resource constraint);
(3) the precedence constraints of the operations in a job can be defined for any pair of operations;
(4) setup times of machines and move times between operations are negligible;
(5) machines are independent of each other;
(6) jobs are independent of each other;
(7) all machines are available at time 0;
(8) each job has its own release date;
(9) the order of operations for each job is predefined and invariant.
The model of the multiobjective FJSP is then given as follows. Inequality (14) ensures the operation precedence constraint. Inequality (15) ensures that each machine can process only one operation at a time. Equation (16) states that one machine is selected from the set of available machines for each operation.
5.2. Encoding Scheme
The particle’s position representation is the most important task for the successful application of PSO or QPSO to the FJSP. In this paper, the particle’s position is represented by a vector consisting of two parts (take Table 7 as an example). The first part represents the sequence of the operations. The second part represents the machines on which the operations in Table 7 are processed; taking Figure 2 as an example, 1000 represents that an operation is processed on M1 and 0001 represents that an operation is processed on M4. The length of the vector is determined by the total number of operations and machines. The advantage of this encoding method is that crossover and mutation can easily be applied to the operation part and the machine part separately, while the disadvantage is the storage complexity.
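Decoding the one-hot machine part can be sketched as follows (a sketch under the stated 4-machine example; machine indices are 0-based, so index 0 corresponds to M1):

```python
import numpy as np

def decode_machine_part(bits, n_machines):
    """Split the binary machine part into one-hot groups, one group per
    operation, and return the selected machine index for each operation."""
    groups = np.asarray(bits).reshape(-1, n_machines)
    return groups.argmax(axis=1)
```

For example, decode_machine_part([1, 0, 0, 0, 0, 0, 0, 1], 4) yields indices 0 and 3, i.e., the two operations go to M1 and M4 as in the encoding above.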
5.3. Swarm Initialization
The initial population has a great influence on the performance of the algorithm. In this paper, a guided initialization method is used. In order to ensure the diversity of the population, 50% of the particles are initialized randomly, and the others are initialized according to the following rules.
Step 0: Initialize. It includes the following:
(a) Create an array for a random job order J (e.g., J4 J2 J3 J1).
(b) Assign the first operation of each job to a suitable machine following the order J. The machine is selected by calling Step 1, and the operation is assigned directly to the selected machine. If no operation has yet been assigned to the selected machine, the operation starts from time 0. Otherwise, it starts from the stop time of the last operation on the selected machine.
Step 1: Find a Suitable Machine to Process an Operation. It includes the following:
(a) Identify the machine Ms with the earliest stop time and the last operation on Ms. If more than one machine qualifies, one of them is selected at random. Let t_s be the stop time of Ms.
(b) Find a suitable machine to process the next operation by the following procedure:
(i) Identify all the machines that can process this operation.
(ii) Calculate the waiting time of each such machine: if the machine is not processing any operation at time t_s, the waiting time equals the processing time of the operation on that machine. Otherwise, if it is processing an operation, its waiting time is the total processing time of the operations on its waiting list, plus the remaining time of the operation being processed, plus the processing time of the new operation on that machine.
(iii) The machine with the shortest waiting time is chosen. If this machine is not processing any operation at time t_s, assign the operation to it to be processed. Otherwise, add the operation to the waiting list of this machine.
Step 2: Find a Suitable Order of Operations on a Machine. From Step 1, Ms and its last operation are identified, and the next operation has been assigned to a machine. Now an operation on the waiting list of Ms needs to be selected for processing. If there is more than one operation on the waiting list of Ms, select one of them according to one of the following dispatching rules: shortest processing time, longest processing time, or first in first out.
Step 3. Return to Step 1.
5.4. Crossover and Mutation
For combinational optimization problems, effective information exchange helps find better solutions. The crossover operator on the operation part is shown in Figure 3, in which the offspring C1 and C2 inherit some operations directly from the parents P1 and P2, respectively, and the remaining operations are copied in order from the other parent.
The crossover operator on the machine part is shown in Figure 4, in which two positions are selected randomly, as in GA.
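The operation-part crossover described above (offspring keep the gene positions of some jobs from one parent and fill the rest in the other parent's order) can be sketched as follows; the job-subset parameter is our own device for picking which operations are inherited directly:

```python
def op_crossover(p1, p2, keep_jobs):
    """Child keeps p1's gene positions for jobs in keep_jobs; the remaining
    positions are filled with p2's other genes, preserving p2's order."""
    rest = iter(g for g in p2 if g not in keep_jobs)
    return [g if g in keep_jobs else next(rest) for g in p1]
```

For example, op_crossover([1, 2, 3, 1], [3, 1, 1, 2], {2}) gives [3, 2, 1, 1]; the second child is produced by swapping the parents' roles. Because genes are only repositioned, every child remains a valid operation sequence.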
The mutation of a particle is also divided into two parts. The first is on the operation part (Figure 5): two positions are randomly chosen, and the operations between them are reversed.
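The operation-part mutation (reversing the segment between two random positions) can be sketched as:

```python
import random

def reverse_mutation(seq, rng=random):
    """Choose two positions at random and reverse the genes between them."""
    i, j = sorted(rng.sample(range(len(seq)), 2))
    return seq[:i] + seq[i:j + 1][::-1] + seq[j + 1:]
```

Since reversal only permutes existing genes, the multiset of operations is preserved and the mutated particle is always a valid operation sequence.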
The second mutation is on the machine part, but the operation to be mutated cannot be chosen at random. Because the makespan and workload are determined by the critical path of the scheduling sequence, a feasible mutation method to reduce the makespan is to choose a critical operation on the critical path and then try to move it to another machine. Taking Figure 6 as an example, a critical operation is moved to another machine, and the makespan is reduced from 12 to 11.
5.5. Experimental Results
As the benchmark analysis in Section 4 shows, QPSO with sigma value outperforms QPSO with preference order for smaller numbers of objectives. In this section, only three objectives are defined (makespan, maximal machine workload, and total workload), so QPSO with sigma value is adopted.
The parameters are set as follows: the population size; the size of the external archive; the dimension size, equal to the total number of operations; and the maximum number of iterations.
From Table 8, it is easy to notice that, for three of the problems, the results obtained by QPSO with sigma dominate those of the other algorithms, and for the remaining problem the result obtained by QPSO with sigma is not dominated by any other algorithm. Therefore, QPSO with sigma value is competitive for solving multiobjective flexible job-shop scheduling problems.
6. Conclusion and Future Works
In this paper, QPSO with three elitist selection strategies is used to solve multiobjective optimization problems. The results show that QPSO with preference order performs comparably to QPSO with sigma value. When the number of objectives is small (e.g., 2, 3, 4, and 5), QPSO with sigma value performs better. However, when the number of objectives becomes larger (e.g., 6, 7, and 8), QPSO with preference order obtains solutions closer to the true Pareto front but is very time-consuming. Therefore, in future work, we will try to combine the two methods to achieve a balance between global search and local search.
Moreover, we apply QPSO with sigma value to solve multiobjective flexible job-shop scheduling problems; the results demonstrate the competitive performance of the algorithm.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by Jiangsu Postdoctoral Funding (Project no. 1401004B), by the National High-Technology Research and Development Plan (Project no. 2013AA040405), and by National Natural Science Funding (Project no. 61300152).
References
J. Sun, B. Feng, and W. B. Xu, “Particle swarm optimization with particles having quantum behavior,” in Proceedings of the Congress on Evolutionary Computation (CEC '04), pp. 325–331, Portland, Ore, USA, June 2004.
Z. Davarzani, M. Akbarzadeh, and N. Khairdoost, “Multi-objective artificial immune algorithm for flexible job shop scheduling problem,” International Journal of Hybrid Information Technology, vol. 5, no. 3, pp. 75–88, 2012.
I. Kacem, S. Hammadi, and P. Borne, “Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems,” IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, vol. 32, no. 1, pp. 1–13, 2002.
J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
F. Van den Bergh, An Analysis of Particle Swarm Optimizers [Ph.D. thesis], University of Pretoria, Pretoria, South Africa, 2001.
S. Mostaghim and J. Teich, “Strategies for finding good local guides in multi-objective particle swarm optimization,” in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 26–33, 2003.