Abstract

This paper investigates the permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime, and proposes a hybrid metaheuristic based on particle swarm optimization (PSO). To enhance the exploration ability of the hybrid metaheuristic, a simulated annealing hybridized with a stochastic variable neighborhood search is incorporated. To improve the search diversification of the hybrid metaheuristic, a solution replacement strategy based on path relinking is presented to replace particles that have become trapped in local optima. Computational results on benchmark instances show that the proposed PSO-based hybrid metaheuristic is competitive with other powerful metaheuristics in the literature.

1. Introduction

Due to its strong industrial background, the permutation flowshop scheduling problem (PFSP) has attracted considerable attention from researchers all over the world. In this problem, a set of $n$ jobs needs to be processed on a set of $m$ machines. Each job is processed on these machines in the same machine order, that is, starting from machine 1 and finishing on the last machine $m$. The processing time $p_{i,j}$ of each job $i$ on machine $j$ is nonnegative and known before scheduling. It is assumed that all jobs are available before processing and that, once started, processing cannot be interrupted. Each job can be processed by only one machine at any time, and each machine can process at most one job at a time. The processing of a job cannot start on the next machine until the job has been completed on the current machine and the next machine is idle. The objective is to determine the sequence of the jobs so that a certain performance measure is optimized. The most commonly studied performance measures are the minimization of the makespan and the minimization of the total flowtime. Let $\pi = (\pi(1), \pi(2), \ldots, \pi(n))$ denote a permutation of the jobs, in which $\pi(k)$ represents the job arranged at the $k$th position. The completion time $C(\pi(k), j)$ of each job on each machine can be calculated as follows:
$$C(\pi(1), 1) = p_{\pi(1),1},$$
$$C(\pi(k), 1) = C(\pi(k-1), 1) + p_{\pi(k),1}, \quad k = 2, \ldots, n,$$
$$C(\pi(1), j) = C(\pi(1), j-1) + p_{\pi(1),j}, \quad j = 2, \ldots, m,$$
$$C(\pi(k), j) = \max\{C(\pi(k-1), j),\, C(\pi(k), j-1)\} + p_{\pi(k),j}, \quad k = 2, \ldots, n,\; j = 2, \ldots, m. \qquad (1)$$
Then the makespan can be defined as $C_{\max}(\pi) = C(\pi(n), m)$, and the total flowtime can be defined as the sum of the completion times of all jobs, $F(\pi) = \sum_{k=1}^{n} C(\pi(k), m)$.
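To make the recursion in (1) concrete, the following C++ sketch evaluates a job permutation with a rolling array of machine completion times and returns both the makespan and the total flowtime. The 0-based indexing, container types, and names are illustrative choices, not the paper's implementation.

#include <algorithm>
#include <vector>

// p[i][j]: processing time of job i on machine j (0-based);
// perm[k]: job placed at position k of the permutation.
struct FlowshopResult {
    long long makespan;
    long long totalFlowtime;
};

FlowshopResult evaluate(const std::vector<std::vector<int>>& p,
                        const std::vector<int>& perm) {
    const int n = static_cast<int>(perm.size());
    const int m = static_cast<int>(p[0].size());
    std::vector<long long> C(m, 0);   // completion times of the previously placed job
    long long flowtime = 0;
    for (int k = 0; k < n; ++k) {
        const int job = perm[k];
        C[0] += p[job][0];            // first machine: jobs simply queue up
        for (int j = 1; j < m; ++j)   // wait for machine j and for the job itself
            C[j] = std::max(C[j], C[j - 1]) + p[job][j];
        flowtime += C[m - 1];         // completion time of this job on the last machine
    }
    return {C[m - 1], flowtime};      // makespan = completion of the last job on the last machine
}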

Since the first introduction of the PFSP [1], the problem has received considerable attention from researchers and many kinds of algorithms have been proposed in the literature. According to the comprehensive reviews of Ruiz and Maroto [2] and Framinan et al. [3] for the PFSP, these solution methods can be classified into three categories: exact methods, heuristic methods, and metaheuristic methods.

Since it has been proven that the PFSP with makespan minimization is NP-complete in the strong sense when $m \geq 3$ and the PFSP with total flowtime minimization is NP-complete in the strong sense when $m \geq 2$ [4], few exact methods have been proposed for the PFSP in the literature because of their unacceptable computation time. These exact methods include the mixed integer linear programming method [5] and the branch and bound algorithms ([6–8] for makespan minimization and [9–11] for total flowtime minimization). However, these exact methods are feasible only for small-size problems because they cannot solve large-size problems in reasonable computation time.

Heuristic methods can be classified into two categories: constructive heuristics and improvement heuristics. Constructive heuristics start from an empty solution and try to build a feasible solution in a short time. Johnson’s algorithm [1] is the earliest known heuristic for the PFSP and obtains optimal solutions for $m = 2$. Campbell et al. [12] proposed the CDS heuristic, and Koulamas [13] proposed a two-phase heuristic for the PFSP, both of which are extensions of Johnson’s algorithm. Palmer [14] proposed a slope index heuristic for the PFSP, and Gupta [15] and Hundal and Rajgopal [16] later extended Palmer’s heuristic and proposed two simple heuristics. Nawaz et al. [17] proposed the so-called NEH heuristic, based on the idea that jobs with high total processing times on all machines should be scheduled as early as possible; the NEH heuristic is regarded as the best heuristic for the PFSP with makespan minimization (Ruiz and Maroto [2], Taillard [18]). More recently, more elaborate heuristics have been proposed for the PFSP, for example, by Liu and Reeves [19] and Framinan and Leisten [20]. As far as solution quality is concerned, the FL heuristic proposed by Framinan and Leisten [20] is the best among simple heuristics (Framinan et al. [3]). In contrast to constructive heuristics, improvement heuristics start from an existing initial solution and try to improve it by a given procedure, for example, a local search. Fan and Winley [21] proposed a heuristic named the intelligent heuristic search algorithm for the PFSP. Suliman [22] proposed a two-phase improvement heuristic, in which an initial solution is generated by the CDS heuristic in the first phase and then improved by a pair-exchange neighborhood search in the second phase. Framinan et al. [23] proposed a new efficient heuristic for the PFSP with the no-wait constraint.

Metaheuristics are high-level strategies that combine and guide other heuristics in the hope of obtaining a more efficient or more robust procedure so that better solutions can be found. The main procedure of a metaheuristic generally starts from an initial solution or a set of solutions generated by heuristics and iterates to improve the initial solution or solutions until a given stopping criterion is reached. The metaheuristics proposed for the PFSP are mainly the genetic algorithm (Chang et al. [24], Ruiz et al. [25]), the simulated annealing (Hooda and Dhingra [26], Nouri et al. [27]), the tabu search (Gao et al. [28]), the ant colony algorithm (Rajendran and Ziegler [29]), the iterated greedy algorithm (Ruiz and Stützle [30]), and the particle swarm optimization (PSO) (Tasgetiren et al. [31], Wang and Tang [32]). These metaheuristics typically evaluate their performance on the benchmark problems proposed by Taillard [33]. The ant colony algorithms proposed by Rajendran and Ziegler [29], named M-MMAS and PACO, obtained much better solutions than the constructive heuristics of Framinan and Leisten [20]. The iterated greedy algorithm proposed by Ruiz and Stützle [30] improved the best known results for some instances of the PFSP with makespan minimization. The particle swarm optimization algorithm named PSOVNS, proposed by Tasgetiren et al. [31], which incorporates variable neighborhood search (VNS) into PSO, improved 57 out of 90 best known solutions reported by Framinan and Leisten [20] and Rajendran and Ziegler [29] for the total flowtime criterion.

In this paper, we propose an improved PSO for the PFSP. To enhance the exploration ability of the PSO, a hybrid of simulated annealing and stochastic VNS is incorporated as the local search. To improve the search diversification of the PSO, a population update method based on path relinking is applied to replace nonpromising particles. The rest of this paper is organized as follows. Section 2 describes the proposed PSO algorithm. Computational results on benchmark problems are presented in Section 3. Finally, Section 4 concludes the paper.

2. PSO Algorithm for PFSP

2.1. Brief Introduction of PSO

The PSO algorithm is a population-based metaheuristic introduced by Kennedy and Eberhart [34, 35]; it is inspired by the social behavior of bird flocking and fish schooling, and by the way individuals exchange information, and is used to solve optimization problems. In the PSO, a swarm consists of a number of particles that fly around in an $n$-dimensional search space. A solution of the problem is represented by the position of a particle; that is, the $i$th particle at the $t$th generation is denoted as $X_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{in}^t)$. At each generation, the flight of each particle is determined by three factors: its own inertia, the best position found by itself (the personal best), and the best position found by the whole swarm (the global best). The personal best and the global best are represented as $P_i^t = (p_{i1}^t, p_{i2}^t, \ldots, p_{in}^t)$ and $G^t = (g_1^t, g_2^t, \ldots, g_n^t)$, respectively. Then the velocity and position of the particle for the next generation can be obtained from the following equations:
$$v_{ij}^{t+1} = w^t v_{ij}^t + c_1 r_1 (p_{ij}^t - x_{ij}^t) + c_2 r_2 (g_j^t - x_{ij}^t), \qquad x_{ij}^{t+1} = x_{ij}^t + v_{ij}^{t+1}, \qquad (2)$$
where $w^t$ is the inertia parameter, $c_1$ and $c_2$ are the cognitive and social parameters, and $r_1$, $r_2$ are random numbers between 0 and 1. Based on the above equations, each particle flies through the search space toward its personal best and the global best in a guided way while still exploring new areas through the stochastic mechanism, which helps it escape from local optima.
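The following C++ fragment sketches one generation of the update in (2) for a single particle; the function name, the use of std::mt19937, and the parameter handling are illustrative assumptions of this sketch.

#include <random>
#include <vector>

// x, v, pbest: position, velocity, and personal best of one particle;
// gbest: global best position; w, c1, c2: inertia, cognitive, and social parameters.
void updateParticle(std::vector<double>& x, std::vector<double>& v,
                    const std::vector<double>& pbest,
                    const std::vector<double>& gbest,
                    double w, double c1, double c2, std::mt19937& rng) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    for (std::size_t j = 0; j < x.size(); ++j) {
        const double r1 = U(rng), r2 = U(rng);
        v[j] = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j]);
        x[j] += v[j];   // move the particle along its new velocity
    }
}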

2.2. Solution Representation

Since the PSO operates in a continuous space, each job is represented by one dimension of a particle, so a solution with $n$ jobs is encoded as a particle in an $n$-dimensional continuous space. Because the position values of particles in the PSO are continuous, the smallest position value (SPV) rule proposed by Tasgetiren et al. [31] is adopted to transform a particle with continuous position values into a job permutation: the job whose dimension holds the smallest position value is placed first, the job with the second smallest position value is placed second, and so on. A simple example illustrating the mechanism of the SPV rule is provided in Table 1. In this instance, job 2 has the smallest position value, so it is assigned to the first position of the job permutation according to the SPV rule; job 9 has the second smallest position value and is therefore assigned to the second position. In the same way, the remaining jobs are assigned to their corresponding positions of the job permutation according to their position values, and the complete job permutation is obtained.
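A minimal C++ sketch of the SPV decoding is given below: job indices are sorted by their continuous position values, so the job with the smallest value occupies the first position. The 0-based job numbering and the function name are assumptions of this sketch.

#include <algorithm>
#include <numeric>
#include <vector>

// Smallest position value (SPV) rule: perm[k] is the job placed at position k.
std::vector<int> spvPermutation(const std::vector<double>& position) {
    std::vector<int> perm(position.size());
    std::iota(perm.begin(), perm.end(), 0);   // jobs 0, 1, ..., n-1
    std::stable_sort(perm.begin(), perm.end(),
                     [&](int a, int b) { return position[a] < position[b]; });
    return perm;
}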

2.3. Population Initialization

The initial population of particles is generated randomly: each position value is set to $x_{ij}^0 = x_{\min} + (x_{\max} - x_{\min}) \times r$, where $r$ is a uniform random number in $[0, 1]$ and $[x_{\min}, x_{\max}]$ is the allowed range of position values. The corresponding velocity of each particle is generated in a similar way, $v_{ij}^0 = v_{\min} + (v_{\max} - v_{\min}) \times r$, where $[v_{\min}, v_{\max}]$ is the allowed velocity range. In addition, one solution generated by the NEH heuristic [18] is added to the initial population, replacing a randomly selected solution, so as to ensure the quality of the initial population.
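A sketch of this initialization is shown below; the range bounds are left as parameters because the concrete values follow the setting of Tasgetiren et al. [31] and are not restated here, and the function name is illustrative.

#include <random>
#include <vector>

// Random initialization of positions X and velocities V within given ranges,
// following x = xmin + (xmax - xmin) * r with r uniform in [0, 1].
void initializeSwarm(std::vector<std::vector<double>>& X,
                     std::vector<std::vector<double>>& V,
                     double xmin, double xmax, double vmin, double vmax,
                     std::mt19937& rng) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    for (std::size_t i = 0; i < X.size(); ++i)
        for (std::size_t j = 0; j < X[i].size(); ++j) {
            X[i][j] = xmin + (xmax - xmin) * U(rng);
            V[i][j] = vmin + (vmax - vmin) * U(rng);
        }
}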

2.4. Hybrid Method of Simulated Annealing and Stochastic VNS

In the PSOVNS proposed by Tasgetiren et al. [31], a stochastic VNS, which is a variant of VNS (Hansen and Mladenović [36]), is developed as the local search. For a given discrete job permutation, let $w$ and $z$ denote two different random integers generated in $[1, n]$; the two stochastic neighborhood moves used in the stochastic VNS to generate a neighbor solution are (i) insert($w$, $z$): remove the job at the $w$th position and insert it at the $z$th position; and (ii) interchange($w$, $z$): interchange the two jobs arranged at the $w$th position and the $z$th position. After a job permutation is changed by a local search operator such as insert or interchange, the position value of each dimension is adjusted correspondingly to guarantee that the permutation obtained by the SPV rule from the new position values is the same as the permutation produced by the local search operator. For example, Table 2 shows the interchange move applied to jobs 3 and 7 and the corresponding position value changes: the interchange of jobs 3 and 7 corresponds to the interchange of the position values −1.02 and 0.23. The position value adjustment for the insert move is similar.
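The two moves and the accompanying position value adjustment can be sketched in C++ as follows; the 0-based positions and the assumption that the permutation is currently the SPV decoding of the position values (so the saved values are in ascending order) are ours, not the paper's.

#include <algorithm>
#include <vector>

// Apply a move to the job permutation and adjust the continuous position values
// so that the SPV decoding of the adjusted values equals the new permutation.
void interchangeMove(std::vector<int>& perm, std::vector<double>& pos, int w, int z) {
    std::swap(pos[perm[w]], pos[perm[z]]);   // the two jobs exchange their position values
    std::swap(perm[w], perm[z]);             // and their places in the permutation
}

void insertMove(std::vector<int>& perm, std::vector<double>& pos, int w, int z) {
    const int lo = std::min(w, z), hi = std::max(w, z);
    std::vector<double> values(hi - lo + 1);                 // ascending values attached to positions lo..hi
    for (int k = lo; k <= hi; ++k) values[k - lo] = pos[perm[k]];
    const int job = perm[w];
    perm.erase(perm.begin() + w);                            // remove the job at position w
    perm.insert(perm.begin() + z, job);                      // and reinsert it at position z
    for (int k = lo; k <= hi; ++k) pos[perm[k]] = values[k - lo];   // redistribute the values in order
}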

To further improve the exploration ability of the local search, we incorporate the solution acceptance scheme of simulated annealing into the stochastic VNS and thus obtain a hybrid method of simulated annealing and stochastic VNS (denoted as SA_VNS). To reduce the computation time and make the search process focus on the intensification phase, we use a decreasing acceptance threshold to act as the cooling procedure of simulated annealing. The procedure of the proposed SA_VNS algorithm is illustrated in Algorithm 1.

Begin:
  Initialization:
    Let be the input initial solution. Set , the acceptance threshold , and
     .
  while ( ) do
    (1) Generate a random number r in [0, 1] and two random integer numbers w and z.
    If r > 0.5, generate a neighbor solution by insert(w, z); otherwise generate a neighbor solution by interchange(w, z).
    (2) Set .
    (3) while     do
      Set .
      while ( ) do
        (1) Generate two random integer numbers w and z.
       (2) If , generate = .
       (3) If , generate = .
       (4) If and ; otherwise set .
      end while
      Set .
    end while
    (4) If , and then go to step 6; otherwise go to step 5.
    (5) If ; otherwise, generate a random
      number r in 0, 1 , and then set if and if .
    (6) Set and .
  end while
  Report the improved solution   .
End
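The acceptance idea behind Algorithm 1 can be sketched as follows: improving neighbors are always accepted, while a worse neighbor is accepted only if its relative deterioration stays below the current threshold, and the threshold is shrunk over the iterations to mimic cooling. The exact rule, the random tie-breaking, and the threshold schedule of Algorithm 1 are not reproduced here; the names and the multiplicative cooling factor are assumptions of this sketch.

// Decide whether a neighbor solution is accepted under a decreasing threshold.
bool acceptNeighbor(double currentCost, double neighborCost, double threshold) {
    if (neighborCost <= currentCost) return true;                 // improvement: always accept
    const double relDeterioration = (neighborCost - currentCost) / currentCost;
    return relDeterioration < threshold;                          // small deterioration: accept
}

// After each iteration the threshold is decreased, e.g. threshold *= coolingFactor,
// so that worse solutions are accepted less and less often as the search proceeds.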

In the PSOVNS proposed by Tasgetiren et al. [31], the stochastic VNS is applied to the global best particle found at each iteration. A drawback of this approach is that the starting point of the stochastic VNS may be the same solution when the global best particle is not improved for a number of consecutive iterations, and consequently the exploration ability of the PSO may be reduced. Therefore, for a given population at iteration $t$, we propose the following strategy.

Step 1. Apply the SA_VNS to the promising particles, namely, those whose objective values satisfy a given threshold condition.

Step 2. Update the global best particle. If a new global best particle is found, then further improve it using the SA_VNS.

2.5. Population Update Method

It is well known that an advantage of the PSO is its high convergence speed. However, this advantage may become a disadvantage for complex scheduling problems, because such problems generally have many locally optimal regions in the search space. That is, when the PSO is applied to the PFSP, some particles may keep flying around a local region and thus become trapped in a local optimum. Therefore, we propose a solution replacement strategy based on path relinking [37] to replace these particles with new solutions of good quality.

In our algorithm, a particle is regarded as trapped in a local optimum if its personal best solution has not been improved for a given number of consecutive generations (20 in this work). Such a particle is given one last chance to stay in the population: the path relinking algorithm is applied to it to check whether its personal best can be improved. If so, the particle remains in the population; otherwise, it is replaced by a new random particle.

Path relinking was originally proposed by Glover et al. [37] to generate new solutions by exploring a path that connects an initial solution and a guiding solution; along this path, moves are selected that introduce attributes contained in the guiding solution. To present the path relinking algorithm, we define the distance between two particles as the difference between the hash function values of their corresponding job permutations (obtained by the SPV rule). Then the path relinking algorithm can be described as follows.

Step 1. Take the job permutation of the particle to be replaced as the initial solution. Find the particle that has the largest distance to this particle in the current population, and take its corresponding job permutation as the guiding solution. Initialize the position index $k$ to the first position and set the local optimum to the initial solution.

Step 2. If the job at position $k$ differs from the job at position $k$ of the guiding solution, find the guiding solution's job for position $k$ in the current permutation and swap it with the job at position $k$ to generate a new job permutation. If this new permutation is better than the current local optimum, set it as the local optimum.

Step 3. Set $k = k + 1$. If all positions considered by the path relinking have been processed, stop; otherwise, go to Step 2.

It should be noted that the above path relinking stops before the full path to the guiding solution is traversed. If the path were followed to its end and the guiding solution happened to be better than every intermediate solution, the trapped particle would be replaced by a copy of the guiding particle, which may result in duplicated particles and thus decrease the search diversification.
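A C++ sketch of this swap-based path relinking is given below. The evaluate callable (returning the objective value of a permutation), the 0-based positions, and the explicit guard that skips candidates identical to the guiding permutation are assumptions that realize the non-duplication intent described above rather than the paper's exact stopping index.

#include <algorithm>
#include <vector>

// Walk from the current permutation toward the guiding permutation, fixing one
// position per step, and return the best intermediate permutation found.
template <typename Evaluate>
std::vector<int> pathRelinking(std::vector<int> current,
                               const std::vector<int>& guiding,
                               Evaluate evaluate) {
    const int n = static_cast<int>(current.size());
    std::vector<int> best = current;
    auto bestCost = evaluate(best);
    for (int k = 0; k < n - 1; ++k) {
        if (current[k] == guiding[k]) continue;          // this position already agrees
        auto it = std::find(current.begin() + k + 1, current.end(), guiding[k]);
        std::iter_swap(current.begin() + k, it);         // bring the guiding job to position k
        if (current == guiding) break;                   // never adopt a copy of the guiding solution
        auto cost = evaluate(current);
        if (cost < bestCost) { bestCost = cost; best = current; }
    }
    return best;
}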

2.6. Complete Procedure of the Proposed PSO

Step 1. Initialization

Step 1.1. Set initial values for the population size, the inertia weight, and the cognitive and social parameters. Set the iteration counter and the no-improvement counter of each particle to zero. Create the initial population and the initial velocities for each particle using the method described in Section 2.3.

Step 1.2. Generate the job permutation for each particle in the population using the SPV rule, and calculate the objective value of each particle.

Step 1.3. Set the personal best of each particle to be the particle itself and the global best to be the best one among the population.

Step 2. Update Particle Positions

Step 2.1. Update the iteration counter: $t = t + 1$.

Step 2.2. Update the inertia weight by multiplying it by the decrement factor, keeping it no smaller than its lower bound.

Step 2.3. For each particle, update the velocity and position values according to (2).

Step 2.4. Generate the job permutation for each particle in the current population using the SPV rule, and calculate the objective value of each particle.

Step 3. Local Search Phase

Step 3.1. Use the SA_VNS algorithm to improve the promising particles in the current population and then the global best particle found so far, according to the application strategy of SA_VNS described in Section 2.4.

Step 3.2. For each particle in the current population, update its personal best. If the personal best of a particle is improved, reset its no-improvement counter to zero; otherwise, increase the counter by one.

Step 4. Population Update. For each particle in the current population, use the population update method described in Section 2.5 to update the current population.

Step 5. Stopping Criterion. If the iteration counter reaches the maximum iteration number or the runtime has reached its limit, stop; otherwise, go to Step 2.

3. Computational Experiments

To test the performance of our PSO algorithm (denoted PSO*), computational experiments were carried out on the well-known benchmark set of Taillard [33], which is composed of 110 instances ranging from 20 jobs and 5 machines to 200 jobs and 20 machines, with 10 instances for each problem size; the set contains instances that have proven very difficult to solve. Our PSO algorithm was implemented in C++ and tested on a PC with a Pentium IV 3.0 GHz CPU and 512 MB of memory. To make a fair comparison with the PSOVNS, we use the same parameter setting as Tasgetiren et al. [31]: the same population size, initial inertia weight, and acceleration coefficients are adopted, the inertia weight is decreased by a factor of 0.975 per iteration and never allowed to fall below 0.4, and the maximum iteration number is taken as 500.

3.1. Results for PFSP with Makespan Minimization

For the makespan minimization objective, our PSO algorithm was compared with other powerful methods: the ant colony algorithm PACO of Rajendran and Ziegler [29], the genetic algorithm HGA_RMA of Ruiz et al. [25], the iterated greedy algorithm of Ruiz and Stützle [30], and the PSOVNS algorithm of Tasgetiren et al. [31]. Solution quality was measured by the average relative percentage deviation (ARPD) in makespan over $R$ replicated runs for each instance, with respect to the best known upper bounds. More specifically, the ARPD is calculated as $\mathrm{ARPD} = \frac{1}{R}\sum_{i=1}^{R} \frac{C_i - C^*}{C^*} \times 100$, where $C_i$ is the makespan obtained by a given algorithm in the $i$th run and $C^*$ is the best known upper bound value for the corresponding Taillard instance (as of April 2004) for the makespan criterion. Following common practice, $R$ is set to the same fixed number of replications for all algorithms in our experiments.

The comparison results for these algorithms are given in Table 3, in which the reported values are averaged over the 10 instances of each problem size. As seen in Table 3, our algorithm achieves the best average performance overall and obtains the best results for seven of the eleven problem sizes. The iterated greedy method also performs well, with the HGA_RMA method close behind. More specifically, the PACO method does not obtain the lowest ARPD for any problem size, the HGA_RMA method has the lowest ARPD for three problem sizes, and the iterated greedy method gives the best results for five problem sizes. For one problem size, both the HGA_RMA method and the PSOVNS method give the best performance; for another, all methods except PACO obtain the lowest ARPD; and for one more, only the two PSO algorithms give the best performance. Our algorithm also outperforms its rivals on two of the problem sizes that have been proven more difficult to solve. Therefore, it can be concluded that our algorithm is competitive with other powerful methods in the literature.

3.2. Results for PFSP with Total Flowtime Minimization

For the total flowtime minimization objective, our PSO algorithm was compared on the benchmark problems of Taillard [33] with other powerful methods: the constructive heuristics of Framinan and Leisten [20], the ant colony algorithms of Rajendran and Ziegler [29], and the PSOVNS of Tasgetiren et al. [31]. Solution quality was measured by the relative percentage deviation (RPD) of the best solution found among the replicated runs for each instance in the total flowtime criterion, with respect to the best known results. That is, the RPD is calculated as $\mathrm{RPD} = \frac{F - F^*}{F^*} \times 100$, where $F$ is the best total flowtime value obtained by a given algorithm and $F^*$ is the best result obtained among the algorithms of Framinan and Leisten [20] and Rajendran and Ziegler [29] (this best result is denoted as LR and RZ).

For the minimization of the total flowtime criterion, the PSOVNS algorithm [31] has been demonstrated to be a very powerful PSO algorithm, since it improved 57 out of 90 best known solutions reported in [20, 29]. The comparison results between our PSO* and the PSOVNS are given in Table 4. From this table, we can see that the PSOVNS algorithm obtains the best results for the smaller problem sizes, while our PSO* algorithm obtains the best results for the remaining, larger problem sizes. On average, our PSO* shows much better solution quality and robustness than the PSOVNS algorithm.

4. Conclusions

This paper presents a PSO-based hybrid metaheuristic for the permutation flowshop scheduling problem with the objectives of minimizing the makespan and the total flowtime. In this algorithm, a hybrid method of simulated annealing and stochastic variable neighborhood search is incorporated as the local search, and a solution replacement strategy based on path relinking is developed to improve the search diversification. Computational experiments were carried out to test the performance of the proposed PSO-based hybrid metaheuristic, and the results show that the proposed algorithm is competitive with, or superior to, other powerful algorithms in the literature for this problem. Future research may address the application of this algorithm to practical production scheduling problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant 61004039, by the Science Foundation of the Educational Department of Liaoning Province under Grant L2010374, and by the Key Laboratory for Manufacturing Industrial Integrated Automation of Liaoning Province. The authors would like to thank Associate Professor Hong Yang for her valuable comments.