Abstract

Unlike most research, which focuses on the single-objective hybrid flow shop scheduling (HFS) problem, this paper investigates a biobjective HFS problem with sequence-dependent setup times. The two objectives are the minimization of the total weighted tardiness and of the total setup time. To solve this problem efficiently, a Pareto-based adaptive biobjective variable neighborhood search (PABOVNS) is developed. In the proposed PABOVNS, a solution is encoded as a sequence of all jobs, and a decoding procedure is presented to obtain the corresponding complete schedule. In addition, the proposed PABOVNS has three major features that guarantee a good balance of exploration and exploitation. First, an adaptive neighborhood selection strategy is proposed to automatically select the most promising neighborhood, instead of the sequential selection strategy of canonical VNS. Second, a two-phase multiobjective local search based on neighborhood search and path relinking is designed for each selected neighborhood. Third, an external archive with diversity maintenance is adopted to store the nondominated solutions and, at the same time, to provide initial solutions for the local search. Computational results on randomly generated instances show that the PABOVNS is efficient and even superior to some other powerful multiobjective algorithms in the literature.

1. Introduction

In a typical hybrid flow shop scheduling (HFS) problem (Figure 1), a set of jobs must be processed successively through several production stages, and at each stage there are identical parallel machines. Once completed at one stage, a job can be sent directly to the immediately following stage if at least one machine in that stage is available, or it can be stored in the infinite buffer between consecutive stages. The task of the HFS problem is to establish a production schedule so that some performance measure is optimized (e.g., minimization of makespan, total weighted completion time, or tardiness of jobs [1]).

The HFS problem has been an important research topic in production scheduling since it was first proposed, because many practical production scheduling problems can be modeled as an HFS problem [2, 3]. For example, in the iron and steel industry, most products are obtained by processing slabs through several consecutive stages, namely, hot rolling, cold rolling, and continuous annealing (Figure 2).

Different from the single-objective HFS problem in the literature, in practical production decision makers usually need to consider multiple objectives during scheduling, for example, (1) minimization of the total sequence-dependent setup time between consecutive jobs and (2) minimization of the total tardiness of all jobs. These two objectives generally conflict with each other. For example, to guarantee that jobs are completed before their due dates, urgent jobs should be arranged for production as early as possible, which in turn often causes large dimension transitions or long setup times. Conversely, a smooth dimension transition or a small setup time often causes large tardiness of jobs in practical production.

Therefore, in this paper we consider the biobjective HFS problem and develop a Pareto-based adaptive biobjective variable neighborhood search (PABOVNS) algorithm to solve it. The major features of the proposed PABOVNS algorithm are as follows.
(i) In the PABOVNS, a solution is coded as a sequence of all jobs, and a decoding procedure is designed to obtain the corresponding complete schedule.
(ii) Since adaptive strategies have been successfully used in control system design and intelligent algorithms [4–11], an adaptive neighborhood selection strategy is proposed in the PABOVNS to select the most promising neighborhood, instead of the sequential selection strategy of the canonical variable neighborhood search (VNS).
(iii) A two-phase multiobjective local search based on neighborhood search and path relinking [12] is designed for each selected neighborhood.
(iv) An external archive with diversity maintenance based on the Pareto concept is adopted to store the nondominated solutions obtained by the PABOVNS; at the same time, it provides initial solutions for the local search of the PABOVNS.

The rest of this paper is organized as follows. Section 2 reviews the related research on the HFS problem and the biobjective HFS problem. The description of the biobjective HFS problem is given in Section 3. In Section 4, the proposed PABOVNS is described in detail. Computational results on randomly generated instances are presented in Section 5. Finally, the paper is concluded in Section 6.

2. Literature Review

Since it was first proposed, the HFS problem has drawn a great deal of attention from researchers. Detailed reviews on the complexity, scheduling criteria, solution approaches, and applications of the HFS problem can be found in Linn and Zhang [13], Ruiz and Vazquez-Rodriguez [14], and Ribas et al. [1]. Based on these reviews, it can be seen that most previous studies focused on the single objective and that the solution methods in the literature can be classified into three categories: exact methods, heuristic methods, and metaheuristic methods.

Among the exact methods for the HFS problem, branch-and-bound (B&B) is the most popular. Dessouky et al. [15] first presented a B&B method for the two- and three-stage HFS with uniform parallel machines, and other B&B methods have since been proposed for the general HFS problem with arbitrary numbers of stages and parallel machines [16–18].

Although the B&B method can solve the HFS problem to optimality, it should be noted that the size of the problems solved this way is relatively small. Consequently, many researchers turned to dispatching rules and constructive heuristics so as to quickly obtain a near-optimal solution for the complex HFS problem. Brah [19] compared 10 different dispatching rules for the multistage HFS to minimize the maximum tardiness. Guinet and Solomon [20] presented several dispatching rules and tailored heuristics for the $k$-stage HFS problem to minimize makespan and maximum tardiness. Sevastianov [21] and Kyparisis and Koulamas [22] tackled the HFS problem with multiple stages and uniform parallel machines in each stage and developed heuristics to minimize makespan. Botta-Genoulaz [23] presented six heuristics for the HFS problem with precedence constraints so as to minimize the maximum lateness of jobs. Yang et al. [24] compared three decomposition- and local-search-based heuristics for the HFS problem to minimize the total weighted tardiness. Recently, Lee et al. [25] developed an efficient heuristic based on beam search and the NEH method for the HFS problem.

The advantage of constructive heuristics is that they can provide a feasible solution quickly; however, the quality of the obtained solution is often not good enough for practical production. Metaheuristics have therefore become more and more popular for HFS problems. Grabowski and Pempera [26] investigated a no-wait HFS derived from a real manufacturing system and developed a tabu search algorithm. Both Sawik [27] and Wardono and Fathi [28] proposed tabu search algorithms for the HFS problem with limited buffers. Wang and Tang [29] developed a hybrid tabu search incorporating scatter search for the HFS with finite intermediate buffers. Cui and Gu [30] presented a discrete group search optimizer for the HFS with random breakdowns. Xiao et al. [31] designed a genetic algorithm for the M-stage HFS problem to minimize makespan. For the HFS problem with sequence-dependent setup times, Kurz and Askin [32] presented a genetic algorithm with the random keys representation. Ruiz and Maroto [33] further considered both sequence-dependent setup times and machine eligibility in HFS problems and proposed a genetic algorithm. Zhang et al. [34] studied an HFS problem derived from a practical aeronautic production environment and developed a genetic algorithm hybridized with clustering. Engin and Döyen [35] presented an artificial immune system for the HFS problem to minimize makespan. Tang and Wang [36] proposed an improved particle swarm optimization algorithm for the HFS problem. Ying and Lin [37] developed an ant colony optimization algorithm for the multiprocessor task problem with job precedence.

As reviewed above, various solution methods have been proposed in the literature for the HFS problem, and most of them focus on the single-objective case. Although some researchers have paid attention to the multiobjective HFS problem, the related references remain few. Behnamian et al. [38] presented a three-phase hybrid metaheuristic based on the Pareto optimality concept for a biobjective HFS to minimize the makespan and the earliness and tardiness of jobs. Rashidi et al. [39] developed a parallel multiobjective genetic algorithm for the biobjective HFS problem to minimize the makespan and the total tardiness of jobs by dividing the population into several subpopulations; each subpopulation was then assigned different weights used to aggregate the two objectives. Abyaneh and Zandieh [40] proposed several methods for the biobjective HFS with sequence-dependent setup times and limited buffers. Mousavi et al. [41] presented a biobjective local search algorithm for the HFS to minimize the makespan and the total tardiness. Recently, Marichelvam et al. [42] developed a discrete firefly algorithm for the biobjective HFS problem to minimize the makespan and the mean flow time.

3. Problem Statement

The biobjective HFS problem considered in this paper can be stated as follows. A set of $n$ jobs must be processed successively through $s$ production stages; each stage $i$ has $m_i$ identical parallel machines, and any of these machines can process any job. It is assumed that all jobs arrive at time zero. Each job $j$ ($j = 1, \ldots, n$) has a positive processing time $p_{ij}$ at each stage $i$ ($i = 1, \ldots, s$), a due date $d_j$, and a weight $w_j$. Once a job has completed the required processing at a stage, it can be stored in the infinite buffer between consecutive stages or sent to the immediately following stage if at least one machine in that stage is available. In each stage $i$, whenever two jobs $j$ and $k$ are adjacent in the processing sequence of a machine (i.e., job $k$ is processed immediately after job $j$ on the same machine in stage $i$), a setup time $st_{ijk}$ is needed. Different from the objectives considered in the previous literature [38–42] (e.g., makespan and total tardiness), the task of our HFS problem is to minimize the total sequence-dependent setup time and the total weighted tardiness of all jobs. These two objectives are adopted based on the practical production scheduling of the iron and steel industry described above (Figure 2).
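Using this notation, the two objectives can be written out as follows (a reconstruction consistent with the verbal definitions above, where $C_j$ denotes the completion time of job $j$ at the last stage and $B_i$ denotes the set of adjacent job pairs on the machines of stage $i$, both determined by the schedule):

$$\min f_1 = \sum_{i=1}^{s} \sum_{(j,k) \in B_i} st_{ijk}, \qquad \min f_2 = \sum_{j=1}^{n} w_j \max\left(0,\, C_j - d_j\right).$$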

4. The Proposed PABOVNS Algorithm

4.1. Canonical Single-Objective VNS

VNS is a simple but powerful search algorithm for combinatorial optimization problems. The procedure of canonical VNS for a single-objective optimization problem is given in Algorithm 1. The main search mechanism of VNS is to systematically switch among a set of candidate neighborhoods so as to enhance search diversity.

Input: initial solution x, a set of candidate neighborhoods N_k (k = 1, ..., q),
    and the maximum number of iterations T_max
Set the iteration number t = 0.
while t < T_max
    k = 1
    while k <= q do
        x' = Shake(x, N_k)              // a random solution x' is generated in neighborhood N_k
        x'' = LocalSearch(x', N_k)      // improve x' in neighborhood N_k and get new solution x''
        if x'' is better than x
            x = x''
            k = 1                       // next search will start from the first neighborhood
        else
            k = k + 1                   // next neighborhood is selected
        end if
    end while
    t = t + 1
end while

Since it was proposed by Mladenović and Hansen [43], VNS has been successfully applied to many kinds of difficult combinatorial optimization problems, especially scheduling problems [44–46]. Therefore, in this paper we also adopt VNS and extend it to solve the biobjective HFS problem based on the Pareto dominance concept.

4.2. Multiobjective Optimization Based on Pareto Dominance

To give a clear description of our PABOVNS algorithm, we first briefly explain multiobjective optimization based on the concept of Pareto dominance.

Instead of optimizing a single objective, multiobjective optimization endeavors to simultaneously optimize multiple objectives that conflict with each other. For two given solutions $x$ and $y$ of a minimization problem with $m$ objectives, let the $i$th ($i = 1, \ldots, m$) objective values be denoted by $f_i(x)$ and $f_i(y)$; then solution $x$ is said to dominate $y$ if and only if the following two conditions are satisfied: (1) $f_i(x) \le f_i(y)$ for every objective $i$; and (2) there exists at least one objective $i$ such that $f_i(x) < f_i(y)$. If no solution in a set of solutions dominates any other solution in the set, the solutions in the set are called nondominated. If a solution $x$ is not dominated by any other solution in the solution space, then $x$ is called a Pareto optimal solution, and the set of all Pareto optimal solutions is called the Pareto optimal set. Correspondingly, the objective vectors of the Pareto optimal set in the objective space are called the Pareto front. The task of multiobjective optimization is to approximate the Pareto optimal set so that the corresponding Pareto front is distributed as evenly as possible in the objective space.
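For reference, the dominance test can be implemented in a few lines; the following is a minimal Python sketch for minimization objectives (the function name and the objective-vector encoding are illustrative, not from the paper).

    # A minimal sketch of the Pareto dominance test for minimization objectives.
    # `a` and `b` are objective vectors, e.g. (total_setup_time, total_weighted_tardiness).
    def dominates(a, b):
        """Return True if solution a Pareto-dominates solution b (minimization)."""
        at_least_one_better = False
        for fa, fb in zip(a, b):
            if fa > fb:                  # a is worse in some objective: no dominance
                return False
            if fa < fb:                  # a is strictly better in at least one objective
                at_least_one_better = True
        return at_least_one_better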

4.3. PABOVNS Algorithm

Based on the four features described above, the overall framework of the proposed PABOVNS algorithm is given in Algorithm 2, and its components are described in the following subsections.

Input: a set of candidate neighborhoods N_k (k = 1, ..., q), the maximum iterations T_max,
    the selection probability of each neighborhood p_k = 1/q, the iteration number t = 0,
    the external archive A = empty, and k = 1.
Initialization of external archive A:
    Generate an initial solution x = CreateInitialSolution() and set k = 1
    while k <= q                        // q is the number of candidate neighborhoods
        S = LocalSearch(x, N_k)         // perform multi-objective local search on x in N_k and
                                        // obtain a set of non-dominated solutions S
        A = UpdateArchive(A, S)         // update the external archive A with S
        k = k + 1
    end while
while t < T_max
    N_k = SelectNeighborhood()          // select a neighborhood based on selection probabilities
    Multi-objective local search: Phase I – neighborhood search:
        x = SelectSolution(A)           // randomly select a solution from A
        x' = Shake(x, N_k)              // a random solution is generated in neighborhood N_k
        S_1 = LocalSearch(x', N_k)      // perform multi-objective local search on x' in N_k
    Multi-objective local search: Phase II – path relinking:
        x_a = SelectSolution(A)         // randomly select a solution from A
        x_b = SelectSolution(A)         // randomly select another solution from A
        S_2 = PathRelinking(x_a, x_b, N_k)  // perform multi-objective path relinking from x_a to x_b
    Update external archive and selection probability of the selected neighborhood:
        A = UpdateArchive(A, S_1, S_2)  // update the external archive A with S_1 and S_2
        UpdateProbability(N_k)          // update the selection probability of neighborhood N_k
    t = t + 1
end while

In the PABOVNS, we first generate an initial solution $x$ by the CreateInitialSolution() method. Then, the multiobjective neighborhood search LocalSearch() is performed on $x$ in each neighborhood from $N_1$ to $N_q$ so as to initialize the external archive $A$. Subsequently, at each iteration of the VNS algorithm we first select a neighborhood $N_k$ by the adaptive selection method SelectNeighborhood(), based on the selection probabilities of the neighborhoods, and then perform the two-phase multiobjective local search. In the first phase, LocalSearch() is carried out in the selected neighborhood $N_k$ on a perturbed solution, obtained by shaking a solution randomly selected from $A$, and a set of nondominated solutions $S_1$ is obtained. In the second phase, two random solutions $x_a$ and $x_b$ are first selected from $A$, and then the multiobjective path-relinking method PathRelinking() based on the selected neighborhood is adopted to obtain another set of nondominated solutions $S_2$. Finally, the nondominated solutions in $S_1$ and $S_2$ obtained by the two-phase local search are used to update the external archive $A$. Whenever $A$ is improved by $S_1$ and $S_2$, the selection probability of the selected neighborhood $N_k$ is increased; on the contrary, if $S_1$ and $S_2$ fail to update $A$, the selection probability of $N_k$ is decreased.

4.3.1. Solution Representation

The decision variables in the HFS consist of two parts: the allocation of jobs to machines in each stage and the scheduling of the jobs assigned to each machine. In the literature, the random key representation is often adopted [39, 41]. In this representation, each job is assigned a real number generated within $[1, m_i + 1)$ for each stage $i$, where $m_i$ is the number of parallel machines in stage $i$. The integer part denotes the machine to which the job is assigned, and the fractional part determines the sequence of the jobs assigned to the same machine (e.g., by sorting these jobs in nondescending order of the fractional part). Although this representation is simple, it makes neighborhood structures difficult to define. For example, for the job sequence of a certain machine, inserting a job from its original position to another position is achieved by changing the fractional part of the job. However, such an insertion move is quite random, because we would first have to sort the fractional parts of the jobs in order to insert the job accurately at a designated position. Therefore, we prefer a discrete solution representation that simplifies solution space construction and neighborhood search. In this representation, a solution is denoted as a sequence of all jobs (just like a solution of the canonical flow shop scheduling problem), and a decoding procedure is presented to obtain the corresponding complete schedule.

For a given solution represented by $\pi = (\pi(1), \pi(2), \ldots, \pi(n))$, in which $\pi(i)$ denotes the job arranged at the $i$th position, the decoding procedure to construct a complete schedule of the HFS can be given as follows.

Step 1. Set the earliest available time of all machines to be zero.

Step 2. Set $i = m_1$, and allocate the first $m_1$ jobs in $\pi$ to the $m_1$ machines in Stage 1.

Step 3. Calculate the completion time of each job currently being processed in Stage 1 and then build the complete schedule of the first completed job in Stage 1 on the following stages based on the first-available-machine rule (i.e., assign the first completed job to the first available machine in each of the following stages). Calculate the completion time of this job at each stage and then update the earliest available time of each machine in each stage.

Step 4. Set $i = i + 1$. If $i > n$, stop; otherwise, allocate job $\pi(i)$ to the first available machine in Stage 1 and go to Step 3.

In the above greedy decoding heuristic, the principle is that we select the first completed job in Stage 1 and then build its schedule on the following stages based on the first-available-machine rule. After this, the next job in $\pi$ is assigned to the first available machine in Stage 1. This process is repeated until all jobs have been scheduled. Since the intermediate buffers are infinite, the heuristic is quite simple. Based on this decoding method, we can handle neighborhood construction and search in the same way as for flow shop scheduling, whose solutions are also represented as job sequences.
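To make the decoding concrete, the following Python sketch implements a simplified version of this procedure under stated assumptions: proc[i][j] is the processing time of job j at stage i, setup[i][a][b] is the setup time when job b follows job a at stage i (taken as zero for the first job on a machine), m[i] is the number of machines at stage i, and setups are anticipatory (a machine may set up before the job arrives). Jobs enter Stage 1 in sequence order and enter each subsequent stage in nondecreasing order of their previous-stage completion times, which approximates the first-completed-job principle above; all identifiers are illustrative.

    import heapq

    def decode(seq, proc, setup, m):
        """Decode a job sequence into completion times and the total setup time."""
        n_stages = len(m)
        ready = {j: 0.0 for j in seq}         # release time of each job at the next stage
        total_setup = 0.0
        order = list(seq)
        for s in range(n_stages):
            machines = [(0.0, k) for k in range(m[s])]   # (available_time, machine_id)
            heapq.heapify(machines)
            last = [None] * m[s]              # last job processed on each machine
            for j in order:
                avail, k = heapq.heappop(machines)       # earliest-available machine
                st = setup[s][last[k]][j] if last[k] is not None else 0.0
                total_setup += st
                start = max(avail + st, ready[j])        # anticipatory setup assumed
                ready[j] = start + proc[s][j]            # completion at this stage
                last[k] = j
                heapq.heappush(machines, (ready[j], k))
            order.sort(key=lambda j: ready[j])           # enter next stage in completion order
        return ready, total_setup             # last-stage completion times, total setup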

4.3.2. Initialization of the External Archive A

(1) CreateInitialSolution(): Generation of the Initial Solution. The initial solution is generated by a modified version of the NEH method proposed by Nawaz et al. [47]. Since NEH was designed for the single-objective environment, we use a linear combination of the two objectives (i.e., $f = f_1 + f_2$, where $f_1$ and $f_2$ denote the total setup time and the total weighted tardiness, resp.). The procedure of this method can be described as follows.

Step 1. Sort all jobs in nondescending order of their due dates and denote the obtained job sequence by $\sigma$.

Step 2. Select the first two jobs $\sigma(1)$ and $\sigma(2)$ and determine their best sequence as if they were the only two jobs to be scheduled. Let the obtained partial sequence be $\pi$.

Step 3. Select the next job from $\sigma$ and insert it into $\pi$ at the position that yields the minimal increase in the sum of the setup time and the weighted tardiness of the jobs in $\pi$.

Step 4. Repeat Step 3 until all jobs have been inserted into $\pi$, and then calculate the two objectives of the obtained solution $\pi$. Set $x = \pi$.

In the above procedure, the decoding method described above is used to transform each partial solution into the corresponding HFS schedule so as to evaluate the increase in setup time and job tardiness.
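The following Python sketch outlines this modified NEH initialization, reusing the decode() sketch above; the equal-weight scalarization f = f1 + f2 and all identifiers are assumptions for illustration.

    def combined_cost(seq, proc, setup, m, d, w):
        """Scalarized cost f = total setup time + total weighted tardiness (assumed weights)."""
        completion, total_setup = decode(seq, proc, setup, m)
        wt = sum(w[j] * max(0.0, completion[j] - d[j]) for j in seq)
        return total_setup + wt

    def neh_init(jobs, proc, setup, m, d, w):
        """Modified NEH: EDD order, then best-position insertion of each job."""
        ordered = sorted(jobs, key=lambda j: d[j])       # Step 1: EDD order
        partial = [ordered[0]]
        for job in ordered[1:]:                          # Steps 2-4: best insertion
            best_seq, best_cost = None, float('inf')
            for pos in range(len(partial) + 1):
                cand = partial[:pos] + [job] + partial[pos:]
                cost = combined_cost(cand, proc, setup, m, d, w)
                if cost < best_cost:
                    best_seq, best_cost = cand, cost
            partial = best_seq
        return partial

Note that inserting the second job at either position of the one-job partial sequence covers both orders required by Step 2, so no special case is needed.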

(2) LocalSearch(): Multiobjective Local Search Based on Neighborhood Search. As shown in [48], multiobjective local search is very important for scheduling problems. So, starting from the initial solution $x$, we generate a set of nondominated solutions by a neighborhood-based multiobjective local search. Since a solution is denoted as a job sequence, we adopt four kinds of neighborhoods: insertion, swap, block insertion, and block swap. The first two neighborhoods are often used for flow shop scheduling problems. Each neighborhood move is described as follows.
(i) The insertion move removes a job from its current position and inserts it at another position in the solution.
(ii) The swap move exchanges two jobs in the solution.
(iii) The block insertion move removes two adjacent jobs from their current positions and inserts them at another two adjacent positions.
(iv) The block swap move exchanges two adjacent jobs with another two adjacent jobs.

A neighborhood of a solution consists of all the neighbor solutions that can be obtained by performing the corresponding move on the solution. In our algorithm, the four neighborhoods are denoted by $N_1$ (insertion), $N_2$ (swap), $N_3$ (block insertion), and $N_4$ (block swap).
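The four moves admit a compact implementation on a job-sequence solution. The sketch below draws the positions at random (as in the shaking step), whereas the full neighborhood search would enumerate all positions; all identifiers are illustrative.

    import random

    def insertion(seq):
        s = seq[:]
        i, j = random.sample(range(len(s)), 2)
        s.insert(j, s.pop(i))                 # N1: remove a job and reinsert it elsewhere
        return s

    def swap(seq):
        s = seq[:]
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]               # N2: exchange two jobs
        return s

    def block_insertion(seq, b=2):
        s = seq[:]
        i = random.randrange(len(s) - b + 1)
        block, rest = s[i:i + b], s[:i] + s[i + b:]
        j = random.randrange(len(rest) + 1)
        return rest[:j] + block + rest[j:]    # N3: move two adjacent jobs together

    def block_swap(seq, b=2):
        s = seq[:]
        i, j = sorted(random.sample(range(len(s) - b + 1), 2))
        if j - i < b:                         # overlapping blocks: leave unchanged
            return s                          # (a retry loop could be added instead)
        s[i:i + b], s[j:j + b] = s[j:j + b], s[i:i + b]   # N4: exchange two blocks
        return s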

During the local search, whenever a new solution is generated, it is added to a solution set $S$. After the search of a neighborhood, $S$ is truncated to contain only the nondominated solutions, and then $S$ is used to update the external archive $A$.

(3) UpdateArchive(): Update of the External Archive. In our algorithm, the external archive $A$ is adopted to store the obtained nondominated solutions. Besides, the starting solution for each iteration of the VNS is also selected from it (see the SelectSolution() function), so as to enhance the ability to escape from local optima. For a given new solution $x$, the external archive is updated as follows: if $x$ is dominated by at least one solution in $A$, then $x$ is discarded; otherwise, $x$ is inserted into $A$ and the solutions in $A$ dominated by $x$ are removed. Since we impose an upper bound on the size of $A$, whenever the size of $A$ exceeds this bound we delete the most crowded solution from $A$ based on the crowding metric of each solution in $A$. For the biobjective HFS, we first sort the solutions in $A$ in ascending order of the first objective (let the sorted sequence be $x_1, x_2, \ldots, x_{|A|}$), and then the crowding metric of a solution $x_i$ is calculated as the sum of the Euclidean distances to its neighbors $x_{i-1}$ and $x_{i+1}$. Please note that the two objectives are normalized when calculating the Euclidean distances; the normalized objectives are calculated as $\bar{f}_i(x) = f_i(x)/f_i^{\max}$, in which $f_i(x)$ is the original $i$th objective value of solution $x$ and $f_i^{\max}$ is the maximum value of the $i$th objective in the current external archive $A$.
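A possible Python sketch of this archive update is given below. Archive entries are assumed to be (objective_vector, solution) pairs, dominates() is the test sketched in Section 4.2, and keeping the two boundary solutions by assigning them infinite crowding distance is our assumption rather than a detail stated in the paper.

    import math

    def update_archive(archive, candidate, max_size):
        """Insert a candidate if nondominated; truncate by crowding when oversized."""
        obj, sol = candidate
        if any(dominates(a_obj, obj) for a_obj, _ in archive):
            return archive                                    # dominated: discard
        archive = [(o, s) for o, s in archive if not dominates(obj, o)]
        archive.append(candidate)
        while len(archive) > max_size:
            archive.sort(key=lambda e: e[0][0])               # ascending first objective
            f_max = [max(e[0][i] for e in archive) or 1.0 for i in range(2)]
            def crowd(idx):
                if idx == 0 or idx == len(archive) - 1:
                    return float('inf')                       # keep boundary points (assumed)
                d = 0.0
                for nb in (idx - 1, idx + 1):                 # distance to both neighbors,
                    d += math.dist(                           # on normalized objectives
                        [archive[idx][0][i] / f_max[i] for i in range(2)],
                        [archive[nb][0][i] / f_max[i] for i in range(2)])
                return d
            archive.pop(min(range(len(archive)), key=crowd))  # drop most crowded solution
        return archive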

4.3.3. Adaptive Selection Strategy: SelectNeighborhood() and UpdateProbability()

In this section, we first define the success and failure of a neighborhood search, which are used to determine and update the selection probability of each neighborhood.

As shown in Algorithm 2, the selection probability of each neighborhood is initially set to $p_k = 1/q$. Once the two-phase multiobjective local search based on a neighborhood $N_k$ is completed (the nondominated solutions obtained in the two phases are stored in $S_1$ and $S_2$, resp.), we use the two sets of nondominated solutions to update the external archive $A$. If $A$ is updated (i.e., new nondominated solutions are added to $A$), the neighborhood is regarded as successful; otherwise, it is regarded as unsuccessful. During the iterations of our algorithm, we record the success count $c_k^{succ}$ and the failure count $c_k^{fail}$ of each neighborhood $N_k$. Based on these counts, the selection probability of each neighborhood is calculated as $p_k = (r_k + 0.01)/\sum_{j=1}^{q}(r_j + 0.01)$, in which $r_k = c_k^{succ}/(c_k^{succ} + c_k^{fail})$ is called the success ratio. We add 0.01 to each $r_k$ so that the denominator cannot be zero. Based on the selection probabilities, the roulette-wheel method is then used to select the neighborhood.
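The bookkeeping for this adaptive scheme is straightforward; the Python sketch below implements the formula as reconstructed above, with success/failure counters and roulette-wheel selection (the class and method names are illustrative).

    import random

    class NeighborhoodSelector:
        """Adaptive selection of one of q neighborhoods by success ratio."""
        def __init__(self, q):
            self.succ = [0] * q
            self.fail = [0] * q

        def probabilities(self):
            ratios = [s / (s + f) if s + f > 0 else 0.0
                      for s, f in zip(self.succ, self.fail)]
            shifted = [r + 0.01 for r in ratios]     # 0.01 keeps the denominator positive
            total = sum(shifted)
            return [v / total for v in shifted]

        def select(self):                            # roulette-wheel selection
            return random.choices(range(len(self.succ)),
                                  weights=self.probabilities())[0]

        def feedback(self, k, archive_updated):
            """Record whether neighborhood k's local search improved the archive."""
            if archive_updated:
                self.succ[k] += 1
            else:
                self.fail[k] += 1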

4.3.4. Two-Phase Multiobjective Local Search

After the neighborhood $N_k$ is selected, the two-phase multiobjective local search is performed to generate new nondominated solutions. The motivation for adopting a two-phase local search is to achieve a balance between exploitation and exploration.

In Phase I, an initial solution is randomly selected from $A$, and a shaking function (i.e., Shake($x$, $N_k$)) is used to perturb it by performing a random move from $N_k$. Then, the full neighborhood search is performed on the perturbed solution to obtain a set of new solutions that are stored in $S_1$. This full neighborhood search is the same as that described in Section 4.3.2, and it focuses on exploitation.

Instead of focusing on exploitation, Phase II (PathRelinking()) applies a multiobjective path-relinking search so as to enhance the exploration ability of the local search. Path relinking (PR) was proposed by Glover [12] to generate new solutions by exploring a path that connects an initial solution and a target solution with a given kind of neighborhood move. At each step of PR, a move is performed so as to gradually decrease the difference between the current solution and the target solution. Once the current solution becomes identical to the target solution, the PR search terminates. The search behavior of PR thus differs from the classical full neighborhood search, and it can be used to enhance the search diversity (i.e., the exploration ability) of the local search. In our algorithm, we extend the classical single-objective PR to multiobjective PR. For simplicity, we give the procedure of multiobjective PR only for the swap move, as follows.

Step 1. Randomly select two solutions $x_a = (a_1, a_2, \ldots, a_n)$ and $x_b = (b_1, b_2, \ldots, b_n)$, in which $a_i$ and $b_i$ denote the jobs arranged at the $i$th position of the two solutions, from the external archive, and let $x_a$ and $x_b$ be the initial solution and the target solution, respectively. Set $i = 1$ and set $S_2$ to be empty.

Step 2. If $a_i \ne b_i$, find the position of job $b_i$ in $x_a$ and swap the job at that position with the job at position $i$ to generate a new solution $x'$ (which becomes the current solution for the next step). Evaluate this new solution and use it to update $S_2$ in the following way: if $x'$ is not dominated by any solution in $S_2$, then add it to $S_2$ and delete all solutions in $S_2$ that are dominated by $x'$; otherwise, discard $x'$.

Step 3. Set $i = i + 1$. If $i \le n$, go to Step 2; otherwise, stop.
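A Python sketch of this multiobjective path relinking with swap moves is given below; evaluate() is assumed to return a solution's objective vector via the decoding procedure, and dominates() is the test from Section 4.2.

    def path_relinking(source, target, evaluate):
        """Walk from source to target by swaps, collecting nondominated solutions (S_2)."""
        current = source[:]
        pos = {job: i for i, job in enumerate(current)}   # job -> position in current
        nondominated = []                                 # the set S_2
        for i, wanted in enumerate(target):
            if current[i] == wanted:
                continue                                  # position already agrees with target
            j = pos[wanted]                               # locate the wanted job
            pos[current[i]], pos[wanted] = j, i
            current[i], current[j] = current[j], current[i]   # one step toward the target
            obj = evaluate(current)
            if not any(dominates(o, obj) for o, _ in nondominated):
                nondominated = [(o, s) for o, s in nondominated
                                if not dominates(obj, o)]
                nondominated.append((obj, current[:]))
        return nondominated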

After the two-phase multiobjective local search, the two sets of nondominated solutions ($S_1$ and $S_2$) are used to update the external archive.

5. Computational Experiments

5.1. Experiment Setting

To test the performance of the PABOVNS algorithm, computational experiments were carried out on a set of randomly generated instances. Our PABOVNS algorithm was implemented in C++, and all experiments were run on a personal computer with an Intel i7 4770 CPU (3.4 GHz) and 8 GB of memory. The parameters of our algorithm are set as follows: the size of the external archive is set to 50, and the stopping criterion is a maximum available CPU time (in seconds) that scales with the number of jobs $n$ and the number of stages $s$. In the experiments, all tested algorithms share the same stopping criterion.

For the randomly generated instances, the number of stages is selected from $\{3, 5, 8\}$, the number of machines in each stage is uniformly generated within a given range, and the number of jobs to be scheduled is selected from $\{30, 80, 100\}$. In addition, the processing time of each job at each stage, the sequence-dependent setup times between jobs, and the weight of each job are uniformly generated from given intervals. For the due date of each job $j$, we follow the generation method in [49] for the multiobjective permutation flow shop scheduling problem with setup times: the due date of job $j$ is generated from $P_j$, $S_j$, and a uniform random number $u$, where $P_j$ is the total processing time of job $j$ over all stages and $S_j$ is the sum of the average setup times over all possible following jobs on all machines. For each problem size (denoted by the number of stages and the number of jobs), we generate 30 random instances, yielding a total of 270 test instances.

5.2. Performance Metrics

To evaluate the performance of the proposed PABOVNS algorithm, the performance metric named inverted generational distance (IGD) is adopted in the experiments, because this metric has often been used in the multiobjective optimization literature (Zhou et al. [50]). The IGD metric can be defined as $\mathrm{IGD}(P, P^{*}) = \frac{1}{|P^{*}|}\sum_{v \in P^{*}} d(v, P)$, where $P^{*}$ is the Pareto optimal front or a reference Pareto front, $P$ is the set of nondominated points obtained by an algorithm, and $d(v, P)$ is the minimum Euclidean distance (in objective space) between point $v$ and the points in $P$. By definition, the IGD metric measures both the convergence of $P$ to $P^{*}$ and the distribution diversity of the points in $P$. Therefore, a smaller value of $\mathrm{IGD}(P, P^{*})$ indicates a better $P$.
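Under these definitions, IGD reduces to a short computation; the following Python sketch assumes front and reference are lists of (normalized) objective vectors.

    import math

    def igd(front, reference):
        """Average distance from each reference point to its nearest point in front."""
        return sum(min(math.dist(r, p) for p in front)
                   for r in reference) / len(reference)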

It should be noted that obtaining the true Pareto optimal set and the corresponding Pareto optimal front for the instances is impossible in practice because the problem is NP-hard. So for each instance, we ran all the tested algorithms with an extended maximum CPU time and combined all the obtained external archives. Then, the nondominated solutions selected from the union of these external archives are used as the reference Pareto optimal set, and the corresponding front is adopted as the reference Pareto front $P^{*}$. In addition, since the objectives of the biobjective HFS problem have different dimensions, we normalize the objective values of the nondominated solutions obtained by the different algorithms into $[0, 1]$ so that the comparison results are clear.

5.3. Computational Results

In this section, we first carry out preliminary experiments to illustrate the efficiency of the proposed improvement strategies, that is, the adaptive neighborhood selection strategy and the two-phase multiobjective local search. Then, we compare the PABOVNS algorithm with other algorithms from the literature.

5.3.1. Efficiency of the Adaptive Selection Strategy of Neighborhoods

To analyze the impact of the adaptive neighborhood selection strategy on the performance of PABOVNS, we compared the proposed PABOVNS with a version of it in which the adaptive selection strategy is not adopted. In this experiment, the version without the adaptive selection strategy is denoted as PBOVNS, and its neighborhoods are applied in the fixed sequence $N_1$, $N_2$, $N_3$, and $N_4$. That is, in the two-phase multiobjective local search of PBOVNS, neighborhood $N_1$ is used first. If the obtained $S_1$ and $S_2$ cannot update the external archive $A$, the search turns to the next neighborhood; but whenever the external archive is updated, the algorithm returns to the first neighborhood $N_1$.

The comparison results in terms of the IGD metric for the two algorithms are given in Table 1, in which $n$ is the number of jobs, $s$ is the number of stages, and the $t$-test column shows whether the two algorithms have a statistically significant performance difference. In Tables 1, 2, and 3, please note that each result is the average value over the instances of each problem size and that the better results are shown in bold type. From the results, it can be seen that PBOVNS obtains the best results for only two groups of small-size problems, namely, 3 × 30 and 5 × 30, while PABOVNS achieves the best results for 8 out of the 9 problem groups. The average improvement of PABOVNS over PBOVNS is about 5.66%, which illustrates the efficiency of the proposed adaptive neighborhood selection strategy. In addition, the $t$-tests at the 95% confidence level show that PABOVNS obtains significantly better results than PBOVNS for 5 problem groups ("+" means the performance difference is significant, while "−" means it is insignificant). These experimental results show that the adaptive neighborhood selection strategy helps to improve the performance of PBOVNS. The major reason behind this phenomenon is that the adaptive strategy automatically selects the most promising neighborhood for the current problem, which in turn improves the search efficiency.

5.3.2. Efficiency of the Two-Phase Multiobjective Local Search

As described in Section 4.3.4, the motivation for designing the two-phase multiobjective local search is to achieve a local search with a good balance of exploitation and exploration. In this section, we therefore tested two variants of PABOVNS: the first variant adopts only Phase I as the local search, and the second variant adopts only Phase II.

The comparison results for the three algorithms are given in Table 2, in which the first variant (Phase I only) is denoted as PABOVNS-I and the second variant (Phase II only) as PABOVNS-II. Since the three algorithms have similar CPU times, as shown in Table 1, the CPU time of each algorithm is not provided in this table. When comparing PABOVNS-I and PABOVNS-II, PABOVNS-I is superior to PABOVNS-II for all the tested problem groups. In addition, PABOVNS-I obtains the best results among the three algorithms for 2 problem groups, namely, 3 × 30 and 5 × 30, while the proposed PABOVNS obtains the best results for 8 problem groups. More specifically, the proposed PABOVNS algorithm obtains better results than PABOVNS-I for 7 problem groups, among which the performance difference is significant for 5 problem groups. In addition, the PABOVNS algorithm obtains better results than PABOVNS-II for all the problem groups, among which the performance difference is significant for 8 problem groups. The reason is that the local search of Phase II (path relinking) has better search diversity, and it helps to enhance the exploration ability when combined with the local search of Phase I.

5.3.3. Comparison with Other Algorithms

In this section, we further compared our algorithm with two other powerful algorithms from the literature. Since the biobjective HFS problem in [39] is similar to our problem (only the objectives are different), the first algorithm for comparison is the multiobjective parallel genetic algorithm (MOPGA) proposed by Rashidi et al. [39] (in this experiment, we reimplemented this algorithm using the parameter settings suggested in [39]). Besides the MOPGA, we developed another comparison algorithm based on NSGA-II (Deb et al. [51]), which is the most famous multiobjective algorithm in the literature. To make NSGA-II able to solve our problem, the following modifications were made. In the modified NSGA-II, the solution representation is the same as in our PABOVNS, and the two-phase multiobjective local search is applied to a randomly selected solution from the first Pareto front (NSGA-II ranks solutions into different Pareto fronts according to their objectives, and the first front contains the nondominated solutions obtained by NSGA-II). The crossover operator used in NSGA-II is the traditional two-point crossover operator, and the mutation operator is a random insertion move performed on a given solution. The population size of NSGA-II is set to 500, with a fixed mutation probability. At each iteration of NSGA-II, the new solutions generated by the local search and the new solutions generated by crossover and mutation are used to update the population. In this experiment, both comparison algorithms adopt the same stopping criterion as our PABOVNS, that is, the same maximum available CPU time.

The comparison results for the three algorithms are presented in Table 3, from which the following observations can be made.
(1) Both the MOPGA and our PABOVNS are superior to the modified NSGA-II. The MOPGA obtains better results than NSGA-II for 8 out of the 9 problem groups.
(2) As the problem size increases, the IGD values of all three algorithms tend to deteriorate, because larger instances are more difficult to solve.
(3) Our PABOVNS achieves the best results for all 9 problem groups. The average improvement achieved by the PABOVNS is 3.34% over the MOPGA and 6.57% over the modified NSGA-II, respectively.
(4) The PABOVNS obtains significantly better results than the MOPGA for 4 large-size problem groups, namely, 5 × 100, 8 × 30, 8 × 80, and 8 × 100. Compared with the modified NSGA-II, the PABOVNS achieves significantly better results for 7 problem groups.

To give a better understanding of the performance differences among the three algorithms, Figures 3, 4, and 5 give graphical illustrations of the results obtained by the three algorithms for the problem groups 3 × 100, 5 × 100, and 8 × 100, respectively. From these figures, it can also be seen that our PABOVNS algorithm obtains the best Pareto front for the three problems. More specifically, the Pareto fronts obtained by the PABOVNS algorithm are closer to the reference Pareto optimal front, and their distribution is also much better than that of the Pareto fronts obtained by the MOPGA and the NSGA-II.

6. Conclusions

In this paper, we investigated the biobjective HFS problem with the objectives of minimizing the total setup time and the total weighted tardiness. To solve this problem efficiently, we developed a Pareto-based adaptive biobjective variable neighborhood search algorithm with four major features: a job-sequence-based coding and decoding method, an adaptive neighborhood selection strategy, a two-phase multiobjective local search, and an external archive with diversity maintenance. Computational experiments based on a set of randomly generated instances were carried out, and the obtained results demonstrate that the proposed algorithm is effective and efficient for the biobjective HFS problem. In addition, the comparison of the proposed algorithm with other powerful metaheuristics from the literature also shows its efficiency and superiority.

Competing Interests

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants 61403277 and 71602143).