Discrete Dynamics in Nature and Society

Volume 2018, Article ID 8170294, 15 pages

https://doi.org/10.1155/2018/8170294

## Minimizing the Makespan for a Two-Stage Three-Machine Assembly Flow Shop Problem with the Sum-of-Processing-Time Based Learning Effect

Department of Statistics, Feng Chia University, Taichung 40724, Taiwan

Correspondence should be addressed to Win-Chin Lin; linwc@fcu.edu.tw

Received 30 January 2018; Accepted 8 April 2018; Published 23 May 2018

Academic Editor: Manuel De la Sen

Copyright © 2018 Win-Chin Lin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Two-stage production processes and their applications appear in many production environments. Job processing times are usually assumed to be constant throughout the process. In fact, a learning effect accrued from repetitive work experience, which leads to a reduction of actual job processing times, exists in many production environments. However, the issue of learning effects is rarely addressed in solving two-stage assembly scheduling problems. Motivated by this observation, the author studies a two-stage three-machine assembly flow shop problem with a learning effect based on the sum of the processing times of already processed jobs, with the objective of minimizing the makespan. Because this problem is proved to be NP-hard, a branch-and-bound method embedded with several developed dominance propositions and a lower bound is employed to search for optimal solutions. A cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods are used to find near-optimal solutions for small and large numbers of jobs. The performances of the adopted algorithms are then compared through computational experiments and nonparametric statistical analyses, including the Kruskal–Wallis test and a multiple comparison procedure.

#### 1. Introduction

This study considers a two-stage assembly scheduling problem consisting of n jobs. Each of the n jobs has two parts (components) to be processed first, and the parts are then assembled in one assembly operation into an ordered product. At the first stage, the two distinct parts can be fabricated at the same time on two dedicated machines, M1 and M2. Once the parts are finished, they are transmitted to the second stage, the assembly stage. A single assembly machine, M3, in this stage assembles the parts into the ordered product. The assembly work can only start after the two parts are completed and delivered to the assembly line. The objective of the production plan is to minimize the maximum completion time among all the jobs, i.e., to minimize the makespan.

Wu et al. [1] studied makespan minimization in the two-stage three-machine flow shop problem with position-based learning consideration based on Biskup [2]. Wu et al. [3] addressed a two-stage three-machine flow shop scheduling problem with a cumulated learning function to minimize the flowtime (or total completion time). They developed a branch-and-bound algorithm and proposed six versions of hybrid particle swarm optimization (PSO) to solve it. However, the sum-of-processing-times-based learning model (based on [4]) has yet to be considered in the two-stage assembly flow shop environment. In light of this gap, this research studies a two-stage three-machine assembly problem with a learning effect based on the sum of the processing times of already processed jobs to minimize the makespan.

The paper proceeds as follows. Section 2 presents notation and the problem definition. Section 3 provides some dominance properties and a lower bound to be used in a branch-and-bound method and then introduces a cloud theory-based simulated annealing (CSA) algorithm and four versions of the iterated greedy (IG) algorithm. Section 4 provides computational results. Section 5 concludes.

##### 1.1. Literature Survey

Invoking a real-world example of a fire engine manufacturing and assembly procedure, Lee et al. [5] studied the problem of minimizing the makespan in the three-machine assembly-type flow shop. A fire engine consists of two parts, a body and a chassis, which are manufactured separately on two production lines in the plant; an engine, bought from outside, can be considered ready. Once the body and chassis are completed, they are transmitted to the assembly line, where the end product, a fire engine, is built. They proved the problem is strongly NP-hard, discussed some special polynomial-time solvable cases, and constructed exact and heuristic algorithms for them. Potts et al. [6] also demonstrated the complexity of the problem with several machines in the first stage. They introduced several heuristic algorithms and presented worst-case analyses of their performances. Hariri and Potts [7] developed a branch-and-bound method, embedded with some dominance properties, for the considered problem with several machines in the first stage. Sun et al. [8] proposed 14 heuristic algorithms to solve the problem.

Allahverdi and Al-Anzi [9] proposed two evolutionary algorithms, a particle swarm optimization (PSO) algorithm and a tabu search, as well as a simple efficient algorithm, to solve the considered problem. Koulamas and Kyparisis [10] studied a two-stage assembly flow shop scheduling problem with uniform parallel machines at one or both stages. They presented a heuristic to minimize the makespan that is asymptotically optimal as the number of jobs increases. Their results improved earlier results in some situations. Fattahi et al. [11] considered the same problem, but all machines in the first stage(s) (possibly including two manufacturing stages) can process all kinds of parts, called a hybrid flow shop in the first stage(s). They proposed several heuristics based on Johnson's rule and evaluated their performance. Fattahi et al. [12] also considered the hybrid flow shop problem but with setup times and presented a hierarchical branch-and-bound algorithm with several lower and upper bounds. Hatami et al. [13] addressed a distributed assembly permutation scheduling problem in which the first stage comprised several identical flow shop factories. These factories, each with a permutation flow shop scheduling problem, produced parts that were later assembled by a single machine in the assembly stage. Komaki et al. [14] addressed a hybrid flow shop scheduling problem with two manufacturing stages and one assembly stage to minimize the makespan, where "hybrid" means that identical parallel machines or dedicated machines may be present in the fabrication stages. They presented a lower bound, several heuristics, and two metaheuristic algorithms based on an artificial immune system. Their computational results showed that the proposed lower bound and heuristic algorithms outperformed existing ones. Jung et al. [15] considered a slightly different two-stage assembly scheduling problem: a single manufacturing machine was used in the first stage to produce various types of parts, which were assembled into various products (ordered by customers) in the second stage. The aforementioned studies addressed the problem with respect to the makespan criterion.

With regard to performance criteria other than the makespan for assembly flow shop problems with two or more stages, Allahverdi and Aydilek [16] surveyed research works focusing on the following criteria: mean (or total) completion time, a weighted sum of mean completion time and makespan, maximum lateness, total tardiness, and bicriteria. Some of these studies also considered other machine settings, for example, setup times (often treated as separate from processing times) or due dates required by customers. For instance, Maleki-Darounkolaei et al. [17] minimized the weighted mean completion time and makespan for the problem with sequence-dependent setup times and blocking times. For more studies on different criteria and different machine settings for two-stage or three-stage assembly flow shop problems, readers can refer to Al-Anzi and Allahverdi [18], Allahverdi and Al-Anzi [19], Al-Anzi and Allahverdi [20], Allahverdi and Al-Anzi [21], Fattahi et al. [11], Allahverdi and Aydilek [16], and Jung et al. [15], among others.

Several real-life examples or applications for two-stage and three-stage assembly problem can be found in Lee et al. [5], Potts et al. [6], Tozkapan et al. [22], Allahverdi and Al-Anzi [19], Al-Anzi and Allahverdi [20], Sung and Kim [23], Allahverdi and Al-Anzi [24], Cheng et al. [25], Fattahi et al. [11], Mozdgir et al. [26], Fattahi et al. [12], Hatami et al. [13], and Jung et al. [15], among others.

On the other hand, the learning concept has been widely addressed in different branches of industry, for example, product development, job rotation, and job scheduling [27]. The first application in the scheduling literature appeared in airplane manufacturing, where assembly costs decreased as the number of repetitions increased [28]. Yelle [29] gave a comprehensive survey of learning curves applied in many industries, including the manufacturing sector. In addition to the excellent review of scheduling with learning effects by Biskup [30], Anzanello and Fogliatto [27] also reviewed the wide applicability of learning curve models in production systems. The work of Azzouz et al. [31] is a more recent survey of applications of learning effects in the scheduling literature.

In the scheduling literature, Biskup [2] and Cheng and Wang [32] were among the early researchers who proposed modeling a learning effect by a learning curve (function). Since then, many other learning curves (functions), in connection with single-machine, flow shop, and other scheduling problems, have been developed and modified.

The work of Kuo and Yang [4], one of the most extensively discussed studies in the scheduling research community, proposed the sum-of-processing-times-based learning model, in which the actual processing time of a job depends on the sum of the processing times of the jobs already processed. This learning model suits a lengthy and uninterrupted production process. For example, in manufacturing steel plates or bars in a foundry, the products must be inspected for surface defects. A single inspector is regarded as a processor or a system working on those steel bars (jobs). After the processor finishes some work units (jobs), he will spend less time processing follow-up work units as his experience grows. Accordingly, Dondeti and Mohanty [33] modeled the actual processing time of a job as a function of the sum of the processing times of the already processed jobs.
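Written out in the notation commonly used for this model (a standard rendering of [4], with p_{[l]} denoting the normal processing time of the job scheduled in the lth position), the actual processing time of the job in position r is

```latex
p^{A}_{[r]} \;=\; p_{[r]} \left( 1 + \sum_{l=1}^{r-1} p_{[l]} \right)^{a},
\qquad a \le 0,
```

so the larger the accumulated normal processing time, the shorter the actual time of the next job; a is the learning index.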

Further modifications of the sum-of-processing-times-based learning effect in scheduling problems were proposed by Cheng et al. [34], Yin et al. [35], Rudek [36], Wu et al. [37], Lu et al. [38], Wu and Wang [39], and Pei et al. [40], among others. The readers may refer to Azzouz et al. [31] for other commonly discussed learning models: position-based learning, truncated learning, and exponential learning.

#### 2. Notation Definition and Problem Formulation

At the outset, we define some notation and then describe the proposed problem.

##### 2.1. Notations

- n: total number of jobs
- m: total number of machines
- j: index of the jth job, j = 1, 2, …, n
- J: the set of jobs
- Mi: machine i, i = 1, 2, 3
- S: a complete job schedule (sequence)
- S1, S2, S3, S4: partial job schedules (sequences)
- [k]: index of the job scheduled in the kth position (in a job sequence)
- r: number of jobs removed from a sequence

##### 2.2. Input Parameters

- p1j, p2j, p3j: component processing times of job j on machines M1, M2, and M3, respectively, for j = 1, 2, …, n
- a: learning index, a ≤ 0
- T0: initial temperature
- Tf: final temperature
- α: annealing index

##### 2.3. Decision Variables

- B[k],i: starting time of the kth job (in S) on machine Mi, i = 1, 2, 3
- Cj,i: completion time of job j on machine Mi, j = 1, 2, …, n; i = 1, 2, 3
- C[k],i: completion time of the job in position k on machine Mi, i = 1, 2, 3
- Cmax: completion time of the last job in S, i.e., the makespan

##### 2.4. Problem Formulation

We assume that a set of n jobs, J = {1, 2, …, n}, is available at time zero and will be processed on three fixed machines, M1, M2, and M3. Each job consists of three operations: two components are first processed in parallel on machines M1 and M2, and the assembly operation is then performed on machine M3. Machines M1 and M2 have no idle times at the first stage of operation. A job cannot start on machine M3 until it has been processed on both machines M1 and M2. No machine can process more than one job at a time. Furthermore, preemption is not allowed; i.e., a job cannot be interrupted once it has begun processing. Following the sum-of-processing-times-based model of [4], the actual processing times of the job scheduled in position r on machines M1, M2, and M3 are p1[r](1 + Σ_{l=1}^{r−1} p1[l])^a, p2[r](1 + Σ_{l=1}^{r−1} p2[l])^a, and p3[r](1 + Σ_{l=1}^{r−1} p3[l])^a, respectively, where a is a learning index and a ≤ 0.

For the convenience of reading, we further let , , , , , and .

Now, for a schedule S, the completion time of the job scheduled in the kth position on machine M3 is calculated recursively: the assembly operation starts at the later of the job's two stage-1 completion times and the completion time of the preceding job on M3, so C[k],3 = max{max(C[k],1, C[k],2), C[k−1],3} + p3[k](1 + Σ_{l=1}^{k−1} p3[l])^a. The objective is to find a schedule S* that minimizes the makespan; i.e., Cmax(S*) ≤ Cmax(S) for all schedules S. For a Gantt chart illustrating the considered two-stage three-machine environment, readers can refer to Lee et al. [5].
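The completion-time calculation can be sketched in code. The following is a minimal sketch, assuming the sum-of-processing-times model of Kuo and Yang [4] on each machine (the actual time of the job in position r on a machine is its normal time multiplied by (1 + sum of the normal times of already processed jobs on that machine)^a); the function name and the list-based data layout are illustrative, not taken from the paper.

```python
def makespan(schedule, p1, p2, p3, a=-0.1):
    """Makespan of a permutation `schedule` (job indices) in the
    two-stage three-machine assembly flow shop with a
    sum-of-processing-times-based learning effect (sketch)."""
    s1 = s2 = s3 = 0.0  # accumulated normal times of processed jobs
    c1 = c2 = c3 = 0.0  # current completion times on M1, M2, M3
    for j in schedule:
        # learning-adjusted (actual) processing times in this position
        t1 = p1[j] * (1.0 + s1) ** a
        t2 = p2[j] * (1.0 + s2) ** a
        t3 = p3[j] * (1.0 + s3) ** a
        c1 += t1  # M1 and M2 are never idle at the first stage
        c2 += t2
        # assembly starts once both parts and M3 are available
        c3 = max(c1, c2, c3) + t3
        s1 += p1[j]
        s2 += p2[j]
        s3 += p3[j]
    return c3
```

With a = 0 the learning effect vanishes and the recurrence reduces to the classical assembly flow shop makespan; a < 0 shortens the actual times of later jobs.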

#### 3. Solution Approach

A branch-and-bound (B&B) method is used to solve the proposed problem for small numbers of jobs. Several dominance properties and a lower bound (LB) are developed and incorporated into the branch-and-bound method to increase the search efficiency. Since the proposed problem is NP-hard, to solve it for large numbers of jobs we utilize a cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods. It has been shown [7, 22, 23, 41] that permutation schedules, in which the jobs are processed in the same order on all machines, are dominant with respect to the makespan criterion. Consequently, we limit our search to permutation schedules for the proposed problem. Using the three-field classification system of the scheduling literature (see [42]), the proposed problem can be represented as AF2(2,1)/prmu with a sum-of-processing-times-based learning effect and the makespan criterion.

##### 3.1. Dominance Properties and a Lower Bound

Dominance rules are commonly used to improve the search efficiency of heuristics in scheduling research, for example, Lee et al. [5], Potts et al. [6], and Tozkapan et al. [22]. In this section, we develop some dominance (elimination) properties and an LB to be embedded in a B&B method for finding optimal solutions. Let S1 and S2 denote two sequences in which the partial job sequences before and after the interchanged jobs coincide (and may be empty). To show that S1 dominates S2, it suffices to show that Cmax(S1) ≤ Cmax(S2). In the following, we construct several dominance rules to help eliminate nodes in the B&B method. Two lemmas are provided to shorten the proofs of the properties. The proofs of Lemma 1 and Properties 3 to 9 are given in the Appendix.

Lemma 1. *If , , and , then .*

Note that if , , and , then .

Lemma 2. *If , , and , then .*

Note that Lemma 2 has already been proved by Kuo and Yang [4].

*Property 3.* If , , , max, , and , then dominates .

*Property 4.* If , , , , , and , then dominates .

*Property 5.* If , , , , and , then dominates .

*Property 6.* If , , , and , then dominates .

*Property 7.* If , , , and , then dominates .

*Property 8.* If , , , , and , then dominates .

*Property 9.* If , , , , and , then dominates .

The next property follows the same idea as Theorem 1 of Kuo and Yang [4], but it is modified to accommodate the learning situation. Let , where has jobs, , and has jobs. Let represent the nondecreasing order of the processing times of the jobs in on Machine 3.

*Property 10.* If , then dominates , where is the partial schedule according to the nondecreasing order of the processing times of on Machine 3. (Note that the complexity of Property 10 is .)

When a job is placed in the kth position of a schedule, k = 1, 2, …, n, an LB can be acquired based on an idea similar to that of Lee et al. [5].

For the branch-and-bound (B&B) method in this study, we employ depth-first search [43] in the branching procedure: a branch is selected and explored systematically until it is pruned or its end node is reached. The B&B method allocates jobs starting from the first position and moving forward to the last position. To enhance the performance of the B&B method, the starting sequence (schedule) found by the algorithm IGLS3 (see more details in the next subsection) is taken as the incumbent sequence. Then, the depth-first search is executed from the initial node and its descendants in the tree to update the incumbent sequence. Properties 3 to 9 are applied to eliminate branches. If the LB of an unsearched node is greater than or equal to the makespan of the incumbent sequence, then that node and all nodes sprouting from it in the branch are eliminated. For active nodes, Property 10 is applied to examine whether the order of the unscheduled jobs can be fixed. The procedure repeats until all nodes are explored.
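A compact rendering of this depth-first scheme follows. It is a sketch, not the paper's implementation: Properties 3 to 10 and the paper's LB are omitted, and pruning uses a deliberately loose but valid bound, the remaining assembly work priced at its cheapest (fully learned) speed, which is valid because a ≤ 0 makes (1 + s)^a nonincreasing in s. All names are illustrative.

```python
def branch_and_bound(p1, p2, p3, a=-0.1):
    """Depth-first B&B for the two-stage three-machine assembly flow
    shop with a sum-of-processing-times learning effect (sketch)."""
    n = len(p1)
    s3_max = sum(p3)
    # cheapest possible actual assembly time of each job: the learning
    # sum on M3 can never exceed s3_max, and a <= 0, so this is a
    # valid underestimate of any actual assembly time
    min_p3 = [p3[j] * (1.0 + s3_max) ** a for j in range(n)]
    best = {"seq": None, "cmax": float("inf")}

    def dfs(seq, c1, c2, c3, s1, s2, s3, rem):
        if not rem:  # leaf: full schedule built
            if c3 < best["cmax"]:
                best["cmax"], best["seq"] = c3, list(seq)
            return
        # lower bound: all remaining assembly work must follow c3 on M3
        if c3 + sum(min_p3[j] for j in rem) >= best["cmax"]:
            return  # prune this node and its descendants
        for j in sorted(rem):
            t1 = p1[j] * (1.0 + s1) ** a
            t2 = p2[j] * (1.0 + s2) ** a
            t3 = p3[j] * (1.0 + s3) ** a
            nc1, nc2 = c1 + t1, c2 + t2
            rem.remove(j)
            seq.append(j)
            dfs(seq, nc1, nc2, max(nc1, nc2, c3) + t3,
                s1 + p1[j], s2 + p2[j], s3 + p3[j], rem)
            seq.pop()
            rem.add(j)

    dfs([], 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, set(range(n)))
    return best["seq"], best["cmax"]
```

The bound never exceeds the true cost of completing a node, so no optimal sequence is pruned; replacing it with the paper's LB and adding the dominance checks would only tighten the search.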

The above B&B method is also used to evaluate the effectiveness of the adopted algorithms, the cloud theory-based simulated annealing algorithm and the iterated greedy algorithm, for small numbers of jobs.

The studied problem without learning consideration, AF2(2,1), has been shown to be NP-hard (see [5]), and so is the version with learning considered here. In order to find good-quality solutions within short CPU times, we utilize a metaheuristic, the cloud theory-based simulated annealing (CSA) algorithm (Li et al. [44]; Li and Du [45]), and four versions of the iterated greedy (IG) algorithm (Ruiz and Stützle [46]) to obtain (near-)optimal solutions. The details are introduced in the following subsections.

##### 3.2. A Cloud Theory-Based Simulated Annealing Algorithm

The simulated annealing (SA) algorithm, a random optimization method, was first proposed by Kirkpatrick [47]. SA has been frequently and successfully adopted for combinatorial optimization problems because of its capability of escaping from local minima during the search process. SA imitates the physical annealing process in solving an optimization problem. It starts by constructing an initial solution σ at a high initial temperature T0 and selects a solution σ′ randomly in the neighborhood of σ. Whether σ′ is accepted as the incumbent solution depends on the temperature and on f(σ) and f(σ′), the values of the objective function (here, the makespan). If f(σ′) ≤ f(σ), then σ′ is accepted and replaces σ; otherwise, σ′ can still be accepted with a probability that decreases as f(σ′) − f(σ) grows and as the temperature falls. As the temperature gradually decreases, SA repeats the above process and converges to a global solution (in theory) or ends with a local solution. Allahverdi and Al-Anzi [21] successfully applied SA to an assembly flow shop problem. Although SA has been successfully applied to several optimization problems, it has also been found unable to solve some combinatorial optimization problems well. Readers can refer to the references listed in Boussaïd et al. [48].
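The acceptance mechanism just described can be sketched as a generic SA skeleton. Geometric cooling and the classical acceptance probability exp(−(f(σ′) − f(σ))/T) are assumed here for concreteness; the function and parameter names are illustrative, not the paper's CSA.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=100.0, tf=0.01, alpha=0.95):
    """Generic SA skeleton: geometric cooling from t0 down to tf.
    `cost` evaluates a solution; `neighbor(x, rng)` perturbs one."""
    rng = random.Random(0)  # fixed seed for reproducibility
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    while t > tf:
        y = neighbor(x, rng)
        fy = cost(y)
        # always accept improvements; accept worse moves with
        # probability exp(-(fy - fx) / t), which shrinks as t falls
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # geometric cooling step
    return best, fbest
```

For the scheduling problem at hand, `cost` would be the makespan of a permutation and `neighbor` a random swap or insertion move.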

The basic concept of cloud theory was proposed by Li et al. [44] and Li and Du [45]. Torabzadeh and Zandieh [49] proposed a cloud theory-based simulated annealing (CSA) algorithm for the two-stage assembly flow shop scheduling problem with a bicriteria objective function. They showed that their CSA performed better than the best existing SA [21]. Studying the same scheduling problem but with learning consideration and a different criterion, we adapt the CSA for our problem. For more details about CSA, readers may refer to Li and Du [45] and Lv et al. [50].

For the CSA to find better-quality solutions, a local heuristic based on the idea of Johnson's rule [51] was constructed. The resulting sequence from the heuristic was then given to the CSA as an initial solution. The Johnson-based local search heuristic, coded as _mean, is as follows.

(1) For each job j, j = 1, 2, …, n, let aj and bj be the surrogate processing times aj = (p1j + p2j)/2 and bj = p3j.
(2) Let k = 1 and l = n.
(3) Find the smallest value among the aj and bj of the jobs remaining in J.
(i) If it is an aj, then assign job j to the kth position and delete job j from J. Set k = k + 1 and go to (4).
(ii) If it is a bj, then assign job j to the lth position and delete job j from J. Set l = l − 1 and go to (4).
(4) If J is not empty, then go to (3); otherwise, stop.
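A sketch of these steps in code, under the assumption (suggested by the heuristic's "_mean" name) that the two stage-1 times enter Johnson's rule through their mean while the assembly time plays the role of the second machine; both the surrogate definition and the function name are assumptions for illustration.

```python
def johnson_mean(p1, p2, p3):
    """Johnson's-rule-based seed sequence (sketch).
    Surrogate two-machine times: a_j = (p1j + p2j)/2, b_j = p3j;
    the averaging choice is an assumption, not spelled out here."""
    n = len(p1)
    a = [(p1[j] + p2[j]) / 2.0 for j in range(n)]
    b = list(p3)
    front, back = [], []
    jobs = set(range(n))
    while jobs:
        # job holding the overall smallest surrogate time
        j = min(jobs, key=lambda i: min(a[i], b[i]))
        if a[j] <= b[j]:
            front.append(j)  # small first-stage time: schedule early
        else:
            back.append(j)   # small assembly time: schedule late
        jobs.remove(j)
    return front + back[::-1]
```

The linear scan over the remaining jobs makes this O(n^2); presorting the surrogate times, as in the usual implementation of Johnson's rule, recovers the O(n log n) complexity of Johnson's rule.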

Note that the total computational complexity of the _mean algorithm, which is the same as that of Johnson's rule, is O(n log n). The parameters arising in CSA, such as the initial temperature T0, the final temperature Tf, and the annealing index, need to be determined. We referred to the values of the corresponding parameters used in Allahverdi and Aydilek [16], performed a pilot study similar to that in Wu et al. (2017), and adopted the resulting values of T0, Tf, and the annealing index. The procedure of CSA is given in Algorithm 1.