Abstract

We propose an efficient heuristic algorithm for two-stage hybrid flowshop scheduling with sequence-dependent setup times. Metaheuristic approaches, which usually require long computation times, have mostly been used for this problem. In this study, for practical reasons arising from the application we consider, the solution must be obtained within a reasonably short computational time, even for large-sized problems. We therefore devise the proposed algorithm as a hybrid of two methods, beam search and the NEH heuristic, and compare its performance with existing metaheuristic and heuristic methods. The results of the computational experiments show that the proposed algorithm solves the problems in a relatively short computation time, while its scheduling performance is superior to that of the existing methods.

1. Introduction

For the past several decades, many researchers have studied hybrid flowshop (HFS) scheduling problems. Among them, two-stage HFS scheduling problems are considered relatively simple; nevertheless, they have been studied extensively because of their diverse practical applications [1]. HFS scheduling problems with sequence-dependent setup times (SDSTs) arise even more frequently in real industries [2]. Studies that address two-stage HFS scheduling with SDST, such as those by Lin and Liao [3] and Rossi et al. [4], are therefore usually motivated by actual applications. This study shares the same characteristics: we consider the two-stage HFS scheduling problem with SDST, and the considered problem has a real application in three-dimensional (3D) automated optical inspection (AOI) machine operations.

HFS scheduling is known to be a complex problem because even the simplest HFS system, that is, the two-stage HFS without SDST, was proven to be NP-hard by Gupta [5] decades ago. Therefore, many researchers have developed heuristic methods to solve HFS scheduling problems, regardless of the number of stages and the existence of setup times. In the previously mentioned studies on two-stage HFS scheduling with SDST, heuristic methods were proposed [3, 4]. For multistage HFS scheduling problems, most researchers have used metaheuristic approaches, such as genetic algorithms, simulated annealing (SA), and immune algorithms (IAs). One of the best-known studies was conducted by Kurz and Askin [6]. They considered the multistage HFS scheduling problem with SDST and proposed a genetic algorithm, the Random Keys Genetic Algorithm (RKGA), which has since been the standard benchmark for other studies of the multistage HFS scheduling problem with SDST. Zandieh et al. [7] proposed an IA that performed better than the RKGA on large-sized problems; however, the computational times of the IA are also greater than those of the RKGA. Naderi et al. [8] proposed a hybrid simulated annealing (HSA) algorithm and compared it with the RKGA and the IA. The test results showed that the HSA outperformed the RKGA and the IA, with a slight increase in average computational time. Recently, Mirsanei et al. [2] proposed an SA algorithm that outperformed the RKGA and the IA with less computational time.

In this study, we consider the two-stage HFS scheduling problem with SDSTs, which originates from a real application, namely, automated optical inspection (AOI) machine operations. The following section presents a detailed description of the AOI machine operation problem. We also take into account an important practical requirement for operators in the field: the schedule for AOI machine operations must be obtained within a reasonably short time, even for large operations. Thus, we propose a method that can solve problems with up to 400 jobs within a very short time. (Note that problems with around 100 jobs were the largest considered in previous research.) To meet this practical requirement, we build the proposed method from two efficient heuristics rather than metaheuristics, which commonly require long computation times. We combine the two heuristics so that the proposed method is both effective and efficient for the considered problems.

The rest of the paper is organized as follows. In the following section, we describe in detail the problem considered in this study. Section 3 presents the proposed method and Section 4 introduces the benchmark methods. In Section 5, we present the contents and results of the computational experiments for performance validation. We conclude the paper and mention future extensions of the study in the last section.

2. Problem Description

2.1. Three-Dimensional Automated Optical Inspection Machine Path Planning Problem (3D-AOI-PPP)

In the printed circuit board (PCB) manufacturing process, AOI machines inspect whether or not the components on the PCB are correctly mounted using vision technologies. A typical view of the AOI machine, focusing on camera movement, is depicted in Figure 1, as presented in Park et al. [9].

As we can see in Figure 1, the camera moves around over the PCB and acquires images of the components mounted on it; then, the CPU cores in the AOI machine perform the inspection by processing the images. One constraint is that the whole area of a PCB cannot be captured in a single camera shot. Thus, inspecting one PCB requires multiple camera shots. Figure 2 shows the camera path on a PCB for capturing multiple pictures with an AOI machine.

In Figure 2, the dotted lines and arrows form the camera path, which shows the camera movements over the PCB. As Figure 2 shows, the camera moves from the center of one rectangular area to the center of another, and so on. These rectangular areas are called fields of view (FOVs), each of which is the maximum region that can be captured by one camera shot. Each FOV contains a certain number of components mounted on the PCB. To inspect the whole PCB, the AOI machine needs to take pictures of all FOVs on the PCB.

The traditional AOI machine path planning problem is to find the optimal camera path that minimizes total working time [9]. Finding a solution to the problem is similar to finding the shortest camera path that must visit all FOVs; thus, the solution can be obtained by using TSP-based methods. In this study, the considered system is the three-dimensional (3D) AOI machine, which uses 3D images when inspecting the correctness of component mountings.

Using 3D images enhances the quality of the inspection; however, the inspection time increases due to 3D image processing. To reduce the image processing time, 3D AOI machines usually have multiple CPU cores. In traditional AOI machines, that is, 2D AOI machines, the total working time is mainly determined by the move time of the camera, whereas, in 3D AOI machines, the total working time is affected not only by the move time but also by the image processing times of the FOVs.

In this study, we consider the path planning problems on 3D AOI machines, in which the camera path, that is, the FOV visit sequence, is determined with the objective of minimizing the total working time. The following constraints should be met: the camera must visit the center positions of all given FOVs; when the picture of an FOV is acquired by the camera’s shooting, the camera starts to move to the next FOV and the image processing of the FOV is simultaneously performed on one of the available CPU cores. If there is no available CPU core, the image processing waits until a CPU becomes available.

2.2. Two-Stage Hybrid Flowshop with Sequence-Dependent Setup Time Scheduling Problem (2S HFS SDST SP)

In this subsection, we first show that the considered problem, that is, the 3D-AOI-PPP, is equivalent to a scheduling problem, namely, the two-stage HFS with SDST scheduling problem (2S-HFS-SDST-SP).

In the 3D-AOI-PPP, the inspection of one FOV consists of two operations: acquiring a picture of the FOV and processing the 3D image of the acquired FOV, which correspond to the first and second operations in the 2S-HFS-SDST-SP, respectively. An AOI machine has one camera and multiple CPU cores, which correspond to a single machine at Stage 1 and identical parallel machines at Stage 2 in the 2S-HFS-SDST-SP. To inspect the next FOV in the 3D-AOI-PPP, a certain amount of time is needed to move the camera, which can be considered the SDST between two consecutively processed jobs at Stage 1 in the 2S-HFS-SDST-SP. In the 3D-AOI-PPP, we decide the visit sequence of all the given FOVs to minimize the total working time, which corresponds to deciding the job sequence in the 2S-HFS-SDST-SP with the objective of minimizing the makespan. Consequently, the 3D-AOI-PPP can be considered the same problem as the 2S-HFS-SDST-SP. This is analogous to the fact that the TSP is equivalent to the single-machine scheduling problem with SDST [10]. Table 1 summarizes the equivalences between the 3D-AOI-PPP and the 2S-HFS-SDST-SP.
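To make the equivalence concrete, the sketch below evaluates the makespan of a given FOV (job) sequence under the operating rule described in Section 2.1: the camera (Stage 1) visits the FOVs in the given order with sequence-dependent move times, and each acquired image is processed on the earliest-available CPU core (Stage 2). This is a minimal Python illustration rather than a description of the authors' implementation; the function and parameter names are ours, and the move time from the camera's initial position to the first FOV is ignored for simplicity (the formulation below handles it via a dummy job 0).

```python
import heapq

def evaluate_makespan(seq, p1, p2, setup, m):
    """Makespan of job sequence `seq` on the two-stage HFS with SDST.

    seq         : list of job (FOV) indices in visiting order
    p1          : Stage-1 (camera shooting) time, identical for all jobs
    p2[j]       : Stage-2 (image processing) time of job j
    setup[i][j] : camera move time (SDST) between consecutive jobs i and j
    m           : number of identical CPU cores at Stage 2
    """
    cores = [0.0] * m          # earliest available time of each CPU core
    heapq.heapify(cores)
    c1, prev, cmax = 0.0, None, 0.0
    for j in seq:
        # Stage 1: camera move from the previous FOV, then the shot itself.
        move = setup[prev][j] if prev is not None else 0.0
        c1 = c1 + move + p1
        prev = j
        # Stage 2: image processing starts on the earliest available core,
        # but no earlier than the Stage-1 completion of this job.
        avail = heapq.heappop(cores)
        c2 = max(avail, c1) + p2[j]
        heapq.heappush(cores, c2)
        cmax = max(cmax, c2)
    return cmax

# Example: three FOVs, shooting time 5, two CPU cores -> makespan 33.0.
# evaluate_makespan([2, 0, 1], 5.0, [12.0, 8.0, 20.0],
#                   [[0, 3, 4], [3, 0, 2], [4, 2, 0]], 2)
```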

As mentioned previously, the two-stage HFS scheduling problem, even without SDST, is NP-hard, and no previous research has tried to solve the 2S-HFS-SDST-SP with mathematical models. However, a mathematical formulation can be a tool for understanding the problem precisely. In this study, following the formulation by Kurz and Askin [6], who present a mathematical formulation for the multistage HFS problem with SDST, we present a mathematical model for the considered problem. First, we introduce the notation used in the formulation in the Notations section.

Using this notation, we can present a mathematical model for the considered scheduling problem. In the formulation, job 0 is a dummy job used to impose the setup time of the first scheduled job, and the processing time of job 0 is set to zero. We also assume that all jobs must visit both stages, according to the requirements of the AOI path planning problem.

Formulation. Consider the following. Equation (1) shows the objective function of the problem, that is, minimizing the makespan. Constraints (2) and (3) ensure that the jobs are scheduled on the machine at Stage 1 and on the machines at Stage 2, respectively. Constraints (4) and (5) ensure that each job is sequenced immediately before and immediately after only one job, respectively, on only one machine at each stage. Constraint (6) ensures that the completion time of job $j$ at Stage 1 is greater than that of job $i$ by at least the processing time of job $j$ at Stage 1 plus the setup time between jobs $i$ and $j$, if job $i$ is sequenced immediately before job $j$ at Stage 1. In the constraint, $M$ is specified as a very large value, ensuring that the inequality is always met. Constraint (7) ensures that the completion time of job $j$ at Stage 2 is greater than that of job $i$ by at least the processing time of job $j$ at Stage 2, if job $i$ is sequenced immediately before job $j$ on a machine at Stage 2. Constraint (8) ensures that the completion time of a job at Stage 2 is always greater than that of the same job at Stage 1 by at least the processing time of the job at Stage 2. Constraint (9) forces job 0 to be sequenced first. Constraint (10) links the objective function and the decision variables. Constraints (11), (12), and (13) impose the bounds of the decision variables. Note that the formulation is introduced to describe the considered scheduling problem in more detail; solving the problem with the mathematical model is not appropriate for this study because, for practical reasons, relatively large-sized problems need to be solved within a reasonably short time.
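The equation display for (1)-(13) is not reproduced here. As an illustration only, the following LaTeX fragment sketches the timing constraints (6)-(8) and (10) in a form consistent with the verbal description above and with the notation listed in the Notations section; the exact form used by Kurz and Askin [6] may differ, and $M$ denotes a sufficiently large constant.

```latex
% Sketch of constraints (6)-(8) and (10); requires amsmath.
\begin{align}
C_{j1}   &\ge C_{i1} + s_{ij} + p_{j1} - M\,(1 - x_{ij1}) && \forall\, i \ne j \tag{6}\\
C_{j2}   &\ge C_{i2} + p_{j2} - M\,(1 - x_{ij2})          && \forall\, i \ne j \tag{7}\\
C_{j2}   &\ge C_{j1} + p_{j2}                             && \forall\, j       \tag{8}\\
C_{\max} &\ge C_{j2}                                      && \forall\, j       \tag{10}
\end{align}
```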

3. Proposed Methods

To solve the considered problem, we propose a method that combines two heuristics, namely, beam search and NEH. Beam search, introduced to scheduling by Ow and Morton [11], is a truncated branch-and-bound search in which only a small number of solutions are maintained at each level of the search tree. NEH is a well-known and effective flowshop scheduling heuristic proposed by Nawaz et al. [12]. An NEH-based heuristic for the considered problem was proposed by Lee et al. [13], and the solutions obtained by that heuristic were effective and efficient. In this study, we embed the NEH procedure in the beam search to enable a more diverse search of the solution space. Although well-known metaheuristics could search an even wider solution space and might find better solutions than the proposed method, they require much more computation time, which is not acceptable for the considered problem for practical reasons. With the proposed beam search method, we obtain solutions both effectively and efficiently.

The proposed method consists of three parts: the first part constructs the initial solution; the second part is the beam search with the embedded NEH; and the last part is the improvement procedure by job-pair interchange and job insertion, which is also cast in a beam search style. We first present the procedure for constructing the initial solution. The procedure is based on the well-known Johnson's rule, which was originally developed for the two-stage flowshop scheduling problem with the objective of minimizing makespan [14]. The computational complexity of Johnson's rule is known to be $O(n \log n)$ [15]. In the considered problem, we assume the processing times of all jobs at Stage 1 to be the same because they are the shooting times, which are identical regardless of the FOVs. Thus, Johnson's rule can be applied by sorting the jobs in descending order of the processing times at Stage 2. The detailed procedure is described as follows. In the procedure, $\sigma$ denotes the sequence of ordered jobs, and $\sigma(k)$ denotes the job at position $k$ in $\sigma$; thus, $\sigma = (\sigma(1), \sigma(2), \ldots, \sigma(n))$. $U$ denotes the set of jobs whose sequence is not yet decided.

3.1. Johnson’s Rule-Based Procedure (JS)

Step 0. $U = \{1, 2, \ldots, n\}$, $k = 1$.

Step 1. $\sigma(k) = \arg\max_{j \in U} p_{j2}$. $U = U \setminus \{\sigma(k)\}$. $k = k + 1$.

Step 2. If $k \le n$, go to Step 1; otherwise, $\sigma$ is the final solution. STOP.
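Because the shooting times at Stage 1 are identical, the JS procedure above reduces to a single sort. A minimal Python sketch is given below; the function name and the example data are illustrative, not taken from the paper.

```python
def js_initial_solution(p2):
    """Johnson's rule-based (JS) initial sequence for the considered problem.

    Since all Stage-1 (shooting) times are equal, Johnson's rule reduces to
    sorting the jobs in descending order of their Stage-2 processing times.
    p2[j] is the Stage-2 (image processing) time of job j.
    """
    return sorted(range(len(p2)), key=lambda j: p2[j], reverse=True)

# Example with six jobs and distinct Stage-2 times:
# js_initial_solution([35, 60, 20, 70, 15, 50])  ->  [3, 1, 5, 0, 2, 4]
```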

After obtaining the initial solution with the above JS, we apply the beam search with the embedded NEH to improve the solution. While the NEH maintains only one best solution at each level for branching, the beam search proposed in this study maintains multiple best solutions. The maximum number of solutions maintained at each level of the search tree is called the beam width, which is denoted as $w$ in the remainder of the paper. Because the NEH evaluates a total of $n(n+1)/2 - 1$ sequences [16] and the proposed beam search evaluates a $w$-fold number of sequences compared to the NEH, the total number of sequences evaluated in the beam search is $w(n(n+1)/2 - 1)$. Since $w$ is a constant, the number of evaluated sequences is $O(n^2)$. The detailed procedure of the beam search is presented below. Let $\sigma_b$ be the job sequence of the $b$th best solution and let $\sigma_b(k)$ denote the job at position $k$ in $\sigma_b$; thus, $\sigma_b = (\sigma_b(1), \sigma_b(2), \ldots, \sigma_b(n))$.

3.2. Beam Search Procedure (BS($w$))

Step 0. Let $\sigma^{JS}$ be the solution obtained by JS. Set $\sigma_b(1) = \sigma^{JS}(1)$ for all $b = 1, \ldots, w$. $k = 2$.

Step 1. If $k > n$, go to Step 4; otherwise, select job $\sigma^{JS}(k)$.

Step 2. Generate candidate solutions by putting job $\sigma^{JS}(k)$ at every available position in $\sigma_b$ for all $b = 1, \ldots, w$ and obtain the makespans of the candidate solutions.

Step 3. Update $\sigma_1, \ldots, \sigma_w$ with the $w$ best solutions obtained in Step 2. $k = k + 1$. Go to Step 1.

Step 4. $\sigma_1$ is the final solution. STOP.

In Step 2, the available positions for job $\sigma^{JS}(k)$ are the $k$ positions in $\sigma_b$. For example, job $\sigma^{JS}(2)$ has two available positions, whereas job $\sigma^{JS}(n)$ has $n$ available positions, resulting in $wn$ candidate solutions at the last level. In the Appendix, illustrations of solving an example problem by the beam search procedure as well as by the Johnson's rule-based procedure are presented.
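A compact sketch of BS($w$) as reconstructed above is given below. It inserts the jobs in the order of the JS solution, expands every beam member at every insertion position, and keeps the $w$ best partial sequences; the makespan evaluation is passed in as a callable (for example, the evaluate_makespan sketch in Section 2.2). The code is illustrative, and the names and details are ours.

```python
def beam_search(js_seq, evaluate, w=2):
    """BS(w) sketch: NEH-style insertion with a beam of the w best partial
    sequences.

    js_seq   : initial job order obtained by JS
    evaluate : callable mapping a (partial) job sequence to its makespan
    w        : beam width
    """
    beams = [[js_seq[0]]]            # all beams start from the first JS job
    for job in js_seq[1:]:
        candidates = []
        for seq in beams:
            for pos in range(len(seq) + 1):   # every available position
                cand = seq[:pos] + [job] + seq[pos:]
                candidates.append((evaluate(cand), cand))
        candidates.sort(key=lambda t: t[0])   # keep the w best candidates
        beams = [cand for _, cand in candidates[:w]]
    return beams[0]                  # best complete sequence
```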

Because the beam search maintains the $w$ best solutions at each level of its search tree, we can expect a wider search of the solution space than with the NEH. However, the improvement in solution quality achieved by the beam search presented above is not sufficient, and there is room for enhancement. Therefore, we propose two improvement procedures: one is a pairwise interchange and the other is a single job insertion, and both procedures are devised to incorporate the beam search concept. In the rest of the paper, the former improvement procedure is denoted as BPI (beam search pairwise interchange) and the latter as BNI (beam search NEH insertion). The two procedures are presented below. In the procedures, $I_{\max}$ denotes the maximum number of iterations, which is a prespecified value.

3.3. Procedure of BPI($w$, $I_{\max}$)

Step 0 (Iteration = 1). Let $\sigma^{0}$ be a given solution. Set $\sigma_b = \sigma^{0}$ for all $b = 1, \ldots, w$.

Step 1. Do the pairwise interchange for all $b = 1, \ldots, w$.

Step 1.1. Schedule the jobs with $\sigma_b$ on the two-stage HFS.

Step 1.2. Select a job arbitrarily among the jobs on the machine with the largest load at Stage 2.

Step 1.3. Select another job arbitrarily from the machines other than the one from which a job was selected in Step 1.2.

Step 1.4. Generate the modified sequence by interchanging the positions of the two selected jobs. If the makespan of the modified sequence decreases, update $\sigma_b$ with the modified sequence.

Step 2. Update $\sigma_1, \ldots, \sigma_w$ with the $w$ best solutions obtained in Step 1.

Step 3. If Iteration = $I_{\max}$, then $\sigma_1$ is the final solution. STOP; otherwise, Iteration = Iteration + 1 and go to Step 1.

3.4. Procedure of BNI($w$, $I_{\max}$)

Step 0 (Iteration = 1). Let $\sigma^{0}$ be a given solution. Set $\sigma_b = \sigma^{0}$ for all $b = 1, \ldots, w$.

Step 1. Do the NEH insertion for all $b = 1, \ldots, w$.

Step 1.1. Select a job arbitrarily.

Step 1.2. Generate candidate solutions by putting the selected job at every position in $\sigma_b$ and obtain the makespans of the candidate solutions.

Step 2. Update $\sigma_1, \ldots, \sigma_w$ with the $w$ best solutions obtained in Step 1.

Step 3. If Iteration = $I_{\max}$, then $\sigma_1$ is the final solution. STOP; otherwise, Iteration = Iteration + 1 and go to Step 1.

In both procedures, Step 0 requires a given solution to initiate the procedure, which can be obtained from JS, BS, or even BPI or BNI. As mentioned previously, the proposed method consists of three phases: the initial solution is first obtained using JS; then BS is applied; and the final improvement is applied with a combination of BPI and BNI. In this study, we apply BPI and BNI alternately, each with a small number of iterations, rather than applying each procedure only once with a large number of iterations. The number of alternations of BPI and BNI is determined through preliminary tests, and the parameter values needed in the procedures, that is, $w$ and $I_{\max}$, are also set through the preliminary tests.

After the preliminary tests, the following procedure was selected as the final proposed method: JS, followed by BS(2), and then BNI(2, 400) and BPI(2, 4000) applied alternately four times, with BNI first. Several comments about this configuration are in order. First, the beam width is set to two in BS, BNI, and BPI; we tested other beam width values, such as three and four, which gave similar performance but required more computing time. When alternating BNI and BPI, we apply BNI first and repeat the alternation four times. The maximum iteration numbers are set to 400 and 4,000 in BNI and BPI, respectively; note that one iteration of BNI requires more time than one iteration of BPI. All these configurations of the proposed method were decided through a series of parameter tuning. We denote the proposed method as BS+ in the rest of the paper.
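One possible realization of the improvement phase of BS+ is sketched below: BNI and BPI operate on a beam of sequences and are alternated four times with BNI first, using the tuned iteration counts stated above. The `evaluate` callable returns the makespan of a sequence, and `assign` is an assumed helper that returns the Stage-2 machine index of each position of a sequence; the random choices, the use of the number of assigned jobs as a proxy for machine load, and all names are illustrative assumptions rather than details from the paper.

```python
import random

def bpi(beams, evaluate, assign, iterations, w=2):
    """BPI sketch: beam-search pairwise interchange."""
    for _ in range(iterations):
        modified = []
        for seq in beams:
            machines = assign(seq)                 # Stage-2 machine per position
            groups = {}
            for pos, mach in enumerate(machines):
                groups.setdefault(mach, []).append(pos)
            # The number of assigned jobs serves as a simple proxy for load.
            busiest = max(groups, key=lambda mach: len(groups[mach]))
            others = [p for p, mach in enumerate(machines) if mach != busiest]
            if not others:
                modified.append(list(seq))
                continue
            a, b = random.choice(groups[busiest]), random.choice(others)
            cand = list(seq)
            cand[a], cand[b] = cand[b], cand[a]    # interchange the two jobs
            modified.append(cand if evaluate(cand) < evaluate(seq) else list(seq))
        beams = sorted(modified, key=evaluate)[:w]  # keep the w best
    return beams

def bni(beams, evaluate, iterations, w=2):
    """BNI sketch: beam-search NEH insertion of one randomly chosen job."""
    for _ in range(iterations):
        candidates = []
        for seq in beams:
            pos = random.randrange(len(seq))
            job, rest = seq[pos], seq[:pos] + seq[pos + 1:]
            for k in range(len(rest) + 1):          # every insertion position
                candidates.append(rest[:k] + [job] + rest[k:])
        beams = sorted(candidates, key=evaluate)[:w]  # keep the w best
    return beams

def bs_plus_improvement(beams, evaluate, assign,
                        rounds=4, bni_iters=400, bpi_iters=4000, w=2):
    """Improvement phase of BS+: alternate BNI and BPI, BNI first."""
    for _ in range(rounds):
        beams = bni(beams, evaluate, bni_iters, w)
        beams = bpi(beams, evaluate, assign, bpi_iters, w)
    return min(beams, key=evaluate)
```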

4. Benchmarks

To investigate the performance of the proposed algorithm, we introduce three benchmark methods, including two metaheuristics and one existing heuristic method. In this section, we present each benchmark method.

4.1. Genetic Algorithm (GA)

For the first benchmark method, we introduce a genetic algorithm that borrows the procedure of the RKGA, the algorithm proposed by Kurz and Askin [6] to solve the HFS scheduling problem with SDST. We modify the algorithm for the considered problem; that is, the solution representation no longer uses random keys; instead, we use a permutation sequence as the solution representation for the GA. However, many other features of the genetic algorithm are similar to those of the RKGA: the population size is set to 100; that is, each generation has 100 chromosomes. In the first generation, the first chromosome is constructed by using JS, and the remaining 99 chromosomes are obtained by applying a single pairwise interchange to the first chromosome.

In each iteration, the next generation is constructed as follows: 20% of the sequences (i.e., chromosomes) in the parent generation, those with the minimum makespans, are automatically moved to the next generation, and 1% of the chromosomes in the next generation are randomly generated. We obtain the remaining 79% of the chromosomes in the next generation by applying crossover operations to the parent generation; that is, to make a new chromosome, we randomly select two different chromosomes (denoted as the 1st and 2nd here) from the parent generation. Each gene value of the new chromosome is then taken from the corresponding gene of the 1st or the 2nd chromosome. For each gene, a random number is generated; if the value is less than 0.7, the gene value of the 1st chromosome is used; otherwise, the gene value of the 2nd chromosome is used. Through this crossover procedure, we construct 79% of the next generation. The iteration continues until 2,000 consecutive generations pass without solution improvement. This genetic algorithm is denoted as GA in the rest of the paper.
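The crossover described above can be sketched as follows. Because the benchmark GA uses a permutation representation, copying genes position by position can duplicate jobs; the simple repair used here (replacing duplicates with the missing jobs in the order they appear in the second parent) is an assumption added for illustration and is not a detail given in the paper.

```python
import random

def crossover(parent1, parent2, bias=0.7):
    """Position-wise crossover biased toward the first parent, followed by a
    simple duplicate repair (the repair rule is assumed)."""
    child = [parent1[i] if random.random() < bias else parent2[i]
             for i in range(len(parent1))]
    # Repair: duplicated jobs are replaced by the jobs missing from the child,
    # taken in the order they appear in parent2.
    present = set(child)
    missing = [j for j in parent2 if j not in present]
    seen = set()
    for i, j in enumerate(child):
        if j in seen:
            child[i] = missing.pop(0)
        else:
            seen.add(j)
    return child

# Example: crossover([0, 1, 2, 3], [3, 2, 1, 0]) returns a valid permutation
# of 0..3 whose genes are biased toward the first parent.
```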

4.2. Simulated Annealing (SA)

In addition to the previously introduced GA, another metaheuristic method, that is, simulated annealing (SA), is additionally used as a benchmark method. SA has been one of the most widely used metaheuristic methods in combinatorial optimization problems since being introduced by Kirkpatrick et al. [17]. The performance of this method is mainly dependent on a neighborhood search scheme, that is, the procedure of finding good solutions near the current solution.

In this study, several features of the SA are tailored to the considered problem. Once the initial solution is generated by JS, in each iteration (i.e., at one temperature), new solutions are generated through a neighborhood search. In this study, the positions of two randomly selected jobs in the current solution are interchanged to generate a neighborhood solution. This process repeats until the number of neighborhood solutions reaches the prespecified neighborhood size. If the best among the neighborhood solutions has a shorter makespan than the current solution, the current solution moves to the new solution. Otherwise, the current solution does not change; however, with a small prespecified probability, the search moves to the new solution even if the solution quality is not improved. This feature makes the SA more powerful; that is, it enables the search to escape from local optima.

To specify the search characteristics and the termination condition of the SA, we use the parameters introduced in Johnson et al. [18]. First, TempFactor is a multiplier for the current temperature, which specifies how much the temperature decreases in each iteration. The neighborhood size is determined by SizeFactor, by multiplying its value by the number of jobs. Finally, the SA procedure terminates if the acceptance ratio over the past five iterations is less than the MinPercent value. In this study, we set the values of TempFactor, SizeFactor, and MinPercent to 0.98, 10, and 1%, respectively.
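The SA loop described above can be sketched as follows. The initial temperature and the use of the temperature-dependent Metropolis probability for accepting a non-improving best neighbor are substitutions made for this sketch (the paper specifies only a small prespecified probability); all names are illustrative.

```python
import math
import random

def simulated_annealing(init_seq, evaluate, temp0=1000.0, temp_factor=0.98,
                        size_factor=10, min_percent=0.01):
    """SA benchmark sketch: swap neighborhood, geometric cooling, and
    termination on a low acceptance ratio over the past five temperatures."""
    n = len(init_seq)
    current, cur_val = list(init_seq), evaluate(init_seq)
    best, best_val = list(current), cur_val
    temp, history = temp0, []
    while True:
        # Generate SizeFactor * n neighbors by interchanging two random jobs
        # and keep the best one.
        best_nb, best_nb_val = None, float("inf")
        for _ in range(size_factor * n):
            a, b = random.sample(range(n), 2)
            nb = list(current)
            nb[a], nb[b] = nb[b], nb[a]
            val = evaluate(nb)
            if val < best_nb_val:
                best_nb, best_nb_val = nb, val
        delta = best_nb_val - cur_val
        # Metropolis-style acceptance (a substitution for the fixed small
        # probability described in the text).
        accepted = delta < 0 or random.random() < math.exp(-delta / temp)
        if accepted:
            current, cur_val = best_nb, best_nb_val
            if cur_val < best_val:
                best, best_val = list(current), cur_val
        history.append(1 if accepted else 0)
        temp *= temp_factor                      # TempFactor cooling
        # Stop when the acceptance ratio over the past five temperatures
        # falls below MinPercent.
        if len(history) >= 5 and sum(history[-5:]) / 5 < min_percent:
            return best
```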

4.3. NEH-Based Heuristic

The last benchmark method is the heuristic proposed by Lee et al. [13], which was developed for the same scheduling problem considered in this study. In fact, this benchmark is equivalent to the proposed method with the beam width set to one. The approach consists of four parts performed sequentially: (1) the initial solution is constructed using JS; then (2) the classic NEH, (3) the pairwise interchange, and (4) the NEH insertion are applied in turn to improve the solution. We denote this method as NEH+ in the rest of the paper.

5. Computational Experiments

To verify the performance of the proposed method, we conducted computational experiments in which we compare the proposed method with the benchmark methods introduced in the previous section. In the following subsections, we present the features of the tested problems and the results of the experiments in terms of scheduling and computing performance. All the tested methods were implemented in the C language, and the computational experiments were performed on a PC with a 3.20 GHz i5 CPU and 3 GB RAM.

5.1. Problem Instances

In this study, the tested problems are randomly generated; however, the ranges of the values are set based on real 3D AOI problems. That is, we first generate the problem data for the 3D AOI machine path planning problem and then transform them into data for the two-stage HFS with SDST scheduling problem. For example, we first generate the x- and y-coordinates of the FOVs for the 3D AOI machine path planning problem and use the coordinates to calculate the move times between two FOVs, considering the speed of the camera; these move times are the SDSTs in the scheduling problem. Likewise, the 3D image processing time of each FOV in the AOI problem is the processing time of the corresponding job at Stage 2 in the scheduling problem, and so on.
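The coordinate-to-setup-time transformation described above can be sketched as follows. The Euclidean distance metric and the constant camera speed are assumptions for illustration (an actual gantry moving on both axes simultaneously would give a Chebyshev-type move time), and the names are ours.

```python
import math

def build_setup_times(fov_xy, camera_speed):
    """Sequence-dependent setup times (camera move times) from FOV coordinates.

    fov_xy       : list of (x, y) center coordinates of the FOVs
    camera_speed : assumed constant camera speed (distance units per time unit)
    Returns s, where s[i][j] is the move time from FOV i to FOV j.
    """
    n = len(fov_xy)
    s = [[0.0] * n for _ in range(n)]
    for i, (xi, yi) in enumerate(fov_xy):
        for j, (xj, yj) in enumerate(fov_xy):
            if i != j:
                s[i][j] = math.hypot(xi - xj, yi - yj) / camera_speed
    return s
```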

To generate various types of problems, we vary the number of jobs over four levels, that is, 50, 100, 200, and 400, and the number of machines at Stage 2 over three levels, that is, 4, 8, and 16. For each combination, we generate ten replications. In total, 120 problem instances are generated and used in the experiments. As the job sizes show, this study focuses on rather large problems, mostly over 100 jobs and up to 400, reflecting the practical needs of operating 3D AOI machines. From a scheduling perspective, these are also relatively large problems compared to those tested in previous HFS scheduling research.

5.2. Test Results

In the test, we compare the proposed method (BS+) with the three benchmarks, that is, GA, SA, and NEH+, in terms of scheduling performance and computing speed. Table 2 shows the objective function values, that is, the makespans, obtained by all the tested methods across the different job sizes. In Table 2, the values in parentheses are the numbers of instances for which the corresponding method performed best.

As Table 2 shows, NEH+ and BS+ outperform the two metaheuristic methods. Although SA performs relatively well on the small-sized problems, the proposed method dominates the other methods even for the job size of 50. The proposed method, BS+, and NEH+ show very close makespan values. However, the superiority of the proposed method appears in the values in parentheses, that is, the number of instances for which the method gives the minimum makespan: the overall count of BS+ is the largest, and it is consistently larger than those of the other methods across the different job sizes.

Table 3 shows the scheduling performance of the tested methods across the different numbers of machines at Stage 2. As with the results in Table 2, the proposed method (BS+) and NEH+ perform better than the two metaheuristics, regardless of the number of machines. While NEH+ is slightly weaker when the number of machines is small, BS+ consistently performs better than the other methods across the different numbers of machines, in terms of both makespan values and the number of best instances.

The next results concern the computing speed of the tested methods. Table 4 shows the computational times of the tested methods across the job sizes, in seconds. As Table 4 shows, the proposed method dominates in computing speed. Although SA is faster for the job size of 50, its computational time grows as the job size increases. Additionally, Table 5 shows that the proposed method obtains solutions the fastest among the tested methods, regardless of the number of machines at Stage 2. Together with the scheduling performance shown in the previous results, these findings indicate that the proposed method gives effective and efficient solutions to the two-stage HFS scheduling problem.

The various experimental results show that the proposed method outperforms the benchmarks on diverse problems. Although the scheduling performance gap between NEH+ and the proposed method is small, BS+ clearly delivers a larger number of best instances than NEH+. Furthermore, in terms of computational time, BS+ is more efficient than NEH+.

6. Conclusions

In this study, we considered the two-stage HFS scheduling problem with SDSTs. The problem was motivated by a practical application, automated optical inspection machine operations, in which the schedule must be obtained within a reasonably short time. Thus, we proposed an efficient heuristic algorithm that embeds the NEH procedure in a beam search framework. The series of computational experiments showed that the proposed algorithm produces solutions in a shorter time than the benchmark methods, including two metaheuristic methods. Furthermore, the experimental results also showed that the scheduling performance of the proposed algorithm is the best among the tested methods.

Appendix

A. Solving an Example Problem

A.1. Example Problem Configurations

Number of jobs ($n$): 6.
Number of machines at Stage 2 ($m$): 2.
Processing time of each job at Stage 1: 20.
Processing times of the jobs at Stage 2: see Table 6.
Sequence-dependent setup times between two jobs: see Table 7.

First, we illustrate the step-by-step procedure of solving the above example problem by Johnson’s rule-based procedure (JS).

A.2. Solving the Example by JS

Step 0. $U = \{1, 2, 3, 4, 5, 6\}$, $k = 1$.

Step 1. $\sigma(1) = 4$. $U = \{1, 2, 3, 5, 6\}$. $k = 2$.

Step 2. $k = 2 \le 6$. Thus, go to Step 1.

Step 1. $\sigma(2) = 2$. $U = \{1, 3, 5, 6\}$. $k = 3$.

Step 2. $k = 3 \le 6$. Thus, go to Step 1.

Step 1. $\sigma(3) = 6$. $U = \{1, 3, 5\}$. $k = 4$.

Step 2. $k = 4 \le 6$. Thus, go to Step 1.

Step 1. $\sigma(4) = 1$. $U = \{3, 5\}$. $k = 5$.

Step 2. $k = 5 \le 6$. Thus, go to Step 1.

Step 1. $\sigma(5) = 3$. $U = \{5\}$. $k = 6$.

Step 2. $k = 6 \le 6$. Thus, go to Step 1.

Step 1. $\sigma(6) = 5$. $U = \emptyset$. $k = 7$.

Step 2. $k = 7 > 6$. Thus, $\sigma$, that is, $(4, 2, 6, 1, 3, 5)$, is the final solution. STOP.

Using JS, we obtained the solution $\sigma^{JS} = (4, 2, 6, 1, 3, 5)$ with makespan value 623. Next, the step-by-step procedure of solving the same example by the proposed beam search procedure with beam width two, that is, BS(2), is presented.

A.3. Solving the Example by BS(2)

Step 0. $\sigma^{JS} = (4, 2, 6, 1, 3, 5)$ (=the solution obtained by JS). $\sigma_1(1) = 4$ and $\sigma_2(1) = 4$. $k = 2$.

Step 1 ($k = 2 \le 6$). Thus, select $\sigma^{JS}(2)$, that is, job 2.

Step 2. For $b = 1$ and $b = 2$, there are two candidate solutions, as shown in Table 8.

Step 3. Update $\sigma_1$ with the best solution and $\sigma_2$ with the second best solution in Table 8. $k = 3$. Go to Step 1.

Step 1 ($k = 3 \le 6$). Thus, select $\sigma^{JS}(3)$, that is, job 6.

Step 2. All candidate solutions are listed in Table 9.

Step 3. Update $\sigma_1$ with the best solution and $\sigma_2$ with the second best solution in Table 9. $k = 4$. Go to Step 1.

Step 1 ($k = 4 \le 6$). Thus, select $\sigma^{JS}(4)$, that is, job 1.

Step 2. All candidate solutions are listed in Table 10.

Step 3. Update $\sigma_1$ with the best solution and $\sigma_2$ with the second best solution in Table 10. $k = 5$. Go to Step 1.

Step 1 ($k = 5 \le 6$). Thus, select $\sigma^{JS}(5)$, that is, job 3.

Step 2. All candidate solutions are listed in Table 11.

Step 3. Update $\sigma_1$ with the best solution and $\sigma_2$ with the other best solution in Table 11 (when there are multiple best solutions, we arbitrarily designate one of them as $\sigma_1$ and another as $\sigma_2$, resp.). $k = 6$. Go to Step 1.

Step 1 ($k = 6 \le 6$). Thus, select $\sigma^{JS}(6)$, that is, job 5.

Step 2. All candidate solutions are listed in Table 12.

Step 3. Update $\sigma_1$ with the best solution and $\sigma_2$ with the other best solution in Table 12. $k = 7$. Go to Step 1.

Step 1 ($k = 7 > 6$). Thus, go to Step 4.

Step 4. $\sigma_1$ is the final solution. STOP.

The final solution of BS(2) for the example problem has a makespan of 571, an improvement over the initial solution obtained by JS, whose makespan was 623.

Notations

Parameters
$n$: Number of jobs
$m$: Number of machines at Stage 2
$p_{jt}$: Processing time of job $j$ at Stage $t$
$s_{ij}$: Sequence-dependent setup time between jobs $i$ and $j$ at Stage 1.
Variables
$C_{jt}$: Completion time of job $j$ at Stage $t$
$x_{ijt}$: 1, if job $i$ is sequenced immediately before job $j$ on the same machine at Stage $t$; 0, otherwise
$C_{\max}$: Makespan, that is, the completion time of the job finished last at Stage 2.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This paper was supported by Konkuk University in 2013.