Abstract

Two-stage production processes and their applications appear in many production environments. Job processing times are usually assumed to be constant throughout the process. In fact, the learning effect accrued from repetitive work experience, which reduces actual job processing times, exists in many production environments. However, the issue of learning is rarely addressed in two-stage assembly scheduling problems. Motivated by this observation, the author studies a two-stage three-machine assembly flow shop problem with a learning effect based on the sum of the processing times of already processed jobs, with the objective of minimizing the makespan. Because this problem is NP-hard, a branch-and-bound method embedded with several dominance propositions and a lower bound is employed to search for optimal solutions. A cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods are used to find near-optimal solutions for small and large numbers of jobs. The performances of the adopted algorithms are then compared through computational experiments and nonparametric statistical analyses, including the Kruskal–Wallis test and a multiple comparison procedure.

1. Introduction

This study considers a two-stage assembly scheduling problem consisting of n jobs. Each job has two parts (components) to be processed first, and the parts are then assembled in one assembly operation into an ordered product. At the first stage, the two distinct parts can be fabricated at the same time on two dedicated machines, M1 and M2. Once the parts are finished, they are transferred to the second stage, the assembly stage. A single assembly machine, M3, in this stage assembles the parts into the ordered product. The assembly work can only start after the two parts are completed and delivered to the assembly line. The objective of the production plan is to minimize the maximum completion time among all jobs, i.e., to minimize the makespan.

Wu et al. [1] studied makespan minimization in the two-stage three-machine flow shop problem with position-based learning consideration based on Biskup [2]. Wu et al. [3] addressed a two-stage three-machine flow shop scheduling problem with a cumulated learning function to minimize the flowtime (or total completion time). They developed a branch-and-bound algorithm and proposed six versions of hybrid particle swarm optimization (PSO) to solve it. However, the sum-of-processing-times-based learning model (based on [4]) has yet to be considered in the two-stage assembly flow shop environment. In light of the lack of studies on the sum-of-processing-times-based learning model, this research studies a two-stage three-machine assembly problem with a learning effect based on the sum of the processing times of already processed jobs to minimize the makespan criterion.

The paper proceeds as follows. Section 2 presents notations and the problem definition. Section 3 provides dominance properties and a lower bound to be used in a branch-and-bound method and then introduces a cloud theory-based simulated annealing (CSA) algorithm and four versions of the iterated greedy (IG) algorithm. Section 4 provides computational results. Section 5 concludes the paper.

1.1. Literature Survey

Invoking a real-world example of a fire engine manufacturing and assembly procedure, Lee et al. [5] studied the issue of minimizing the makespan in the three-machine assembly-type flow shop scheduling problem. A fire engine consists of two parts, a body and a chassis, which are manufactured separately in two production lines in the plant; an engine, bought from outside, can be considered ready. Once the body and chassis are completed, they are transferred to the assembly line, where the end product, a fire engine, is built up. They proved that the problem is strongly NP-complete, discussed some special polynomial-time solvable cases, and constructed exact and heuristic algorithms for them. Potts et al. [6] also demonstrated the complexity of the proposed problem with several machines in the first stage. They introduced several heuristic algorithms and presented a worst-case analysis of their performances. Hariri and Potts [7] developed a branch-and-bound method, embedded with some dominance properties, for the considered problem with several machines in the first stage. Sun et al. [8] proposed 14 heuristic algorithms to solve the problem.

Allahverdi and Al-Anzi [9] proposed two evolutionary algorithms, a particle swarm optimization (PSO) and a tabu search, and a simple efficient algorithm to solve the considered problem. Koulamas and Kyparisis [10] studied a two-stage assembly flow shop scheduling problem with uniform parallel machines in one or both stages. They presented a heuristic to minimize the makespan that is asymptotically optimal as the number of jobs increases. Their results improved earlier results in some situations. Fattahi et al. [11] considered the same problem, but all machines in the first (possibly including two manufacturing stages) stage(s) can process all kinds of parts, named a hybrid flow shop in the first stage(s). They proposed several heuristics based on Johnson's rule and evaluated their performance. Fattahi et al. [12] also considered the hybrid flow shop problem but with setup time and presented a hierarchical branch-and-bound algorithm with several lower bounds and upper bounds. Hatami et al. [13] addressed a distributed assembly permutation scheduling problem in which the first stage comprised several identical flow shop factories. These factories, each with a permutation flow shop scheduling problem, produced parts which were later assembled by a single machine in the assembly stage. Komaki et al. [14] addressed a hybrid flow shop scheduling problem with two manufacturing stages and one assembly stage to minimize the makespan, where hybrid means that identical parallel machines or dedicated machines could be present in the fabrication stages. They presented a lower bound, several heuristics, and two metaheuristic algorithms based on an artificial immune system. Their computational results showed that the proposed lower bound and heuristic algorithms performed well. Jung et al. [15] considered a slightly different two-stage assembly scheduling problem. A single manufacturing machine was used in the first stage to produce various types of parts which were assembled into various products (ordered by customers) in the second stage. The aforementioned studies addressed the problem with respect to the makespan criterion.

With regard to performance criteria other than the makespan for two-stage or multistage assembly flow shop problems, Allahverdi and Aydilek [16] surveyed research works focusing on the following criteria: mean (or total) completion time, a weighted sum of mean completion time and makespan, maximum lateness, total tardiness, and bicriteria. Some of these studies also considered other machine settings, for example, setup times treated separately from processing times, or due dates required by customers. For instance, Maleki-Darounkolaei et al. [17] minimized the weighted mean completion time and makespan for the proposed problem with sequence-dependent setup times and blocking times. For more studies on different criteria and different machine settings for the two-stage or three-stage assembly flow shop problem, readers can refer to Al-Anzi and Allahverdi [18], Allahverdi and Al-Anzi [19], Al-Anzi and Allahverdi [20], Allahverdi and Al-Anzi [21], Fattahi et al. [11], Allahverdi and Aydilek [16], and Jung et al. [15], among others.

Several real-life examples or applications of two-stage and three-stage assembly problems can be found in Lee et al. [5], Potts et al. [6], Tozkapan et al. [22], Allahverdi and Al-Anzi [19], Al-Anzi and Allahverdi [20], Sung and Kim [23], Allahverdi and Al-Anzi [24], Cheng et al. [25], Fattahi et al. [11], Mozdgir et al. [26], Fattahi et al. [12], Hatami et al. [13], and Jung et al. [15], among others.

On the other hand, the learning concept has been widely addressed in different branches of industry, for example, product development, job rotation, and job scheduling problems [27]. The first application in the scheduling literature appeared in airplane manufacturing, where assembly costs decreased as the number of repetitions increased [28]. Yelle [29] gave a comprehensive survey of learning curves applied in many industries, including the manufacturing sector. In addition to the excellent review on scheduling with learning effects by Biskup [30], Anzanello and Fogliatto [27] also reviewed the wide applicability of learning curve models in production systems. The work of Azzouz et al. [31] is a more recent survey paper in the scheduling literature providing applications of learning effects.

In the scheduling literature, Biskup [2] and Cheng and Wang [32] were among the early researchers who proposed modeling a learning effect by a learning curve (function). Since then, many other learning curves (functions), in connection with single machine, flow shop, or other scheduling problems, have been developed and modified.

The work of Kuo and Yang [4], one of the most extensively discussed studies in the scheduling research community, proposed the sum-of-processing-times-based learning model, in which the actual processing time of a job depends on the sum of the processing times of the jobs already processed. The sum-of-processing-times-based learning model fits a lengthy and uninterrupted production process. In the manufacturing of steel plates or bars in a foundry, the products are required to be inspected for surface defects. A single inspector is regarded as a processor or a system working on those steel bars (jobs). After the inspector finishes some work units (jobs), less time is spent on subsequent work units as experience accumulates. Thus, Dondeti and Mohanty [33] modeled the actual processing time of a job as a function of the sum of the processing times of the already processed jobs.

Further modifications of the sum-of-processing-times based learning effect in scheduling problems were proposed by Cheng et al. [34], Yin et al. [35], Rudek [36], Wu et al. [37], Lu et al. [38], Wu and Wang [39], and Pei et al. [40], among others. The readers may refer to Azzouz et al. [31] for other commonly discussed learning models: position-based learning, truncated learning, and exponential learning.

2. Notation Definition and Problem Formulation

At the outset, we define some notations and then describe the proposed problem.

2.1. Notations

n: total number of jobs
m: total number of machines (m = 3)
J_j: the jth job, j = 1, 2, ..., n
J: set of jobs
M_i: machine i, i = 1, 2, 3
S, S1, S2: complete job schedules (sequences); σ, σ1, σ2: partial job schedules (sequences)
[k]: index of the job scheduled in the kth position of a sequence
d: number of jobs removed from a sequence

2.2. Input Parameters

p_{1j}, p_{2j}, p_{3j}: normal (component) processing times of job J_j on machines M1, M2, and M3, respectively, for j = 1, 2, ..., n
a: learning index, a < 0
T0: initial temperature
Tf: final temperature
λ: annealing index

2.3. Decision Variables

B_{[k]i}: starting time of the job in the kth position (of S) on machine M_i, i = 1, 2, 3
C_{ji}: completion time of job J_j on machine M_i, i = 1, 2, 3
C_{[k]i}: completion time of the job in the kth position on machine M_i
C_max: completion time of the last job in S, i.e., the makespan, C_max = C_{[n]3}

2.4. Problem Formulation

We assume that a set of n jobs, J = {J_1, J_2, ..., J_n}, is available at time zero and will be processed on three fixed machines, M1, M2, and M3. Each job requires three operations, one on each machine: two components are first processed in parallel on the dedicated machines M1 and M2 and are then assembled on machine M3. Machines M1 and M2 have no idle time in the first stage of operation. A job cannot start on machine M3 until both of its components have been completed on machines M1 and M2. No machine can process more than one job at a time. Furthermore, preemption is not allowed; i.e., a job cannot be interrupted once it has begun processing. Following the sum-of-processing-times-based learning model of Kuo and Yang [4], the actual processing times of the job scheduled in the rth position on machines M1, M2, and M3 are p_{1[r]}(1 + Σ_{k=1}^{r−1} p_{1[k]})^a, p_{2[r]}(1 + Σ_{k=1}^{r−1} p_{2[k]})^a, and p_{3[r]}(1 + Σ_{k=1}^{r−1} p_{3[k]})^a, respectively, where a is a learning index and a < 0.

For convenience of reading, we further let q_{i[r]} = p_{i[r]}(1 + Σ_{k=1}^{r−1} p_{i[k]})^a denote the actual (learning-adjusted) processing time of the job in the rth position on machine M_i, i = 1, 2, 3.

Now, for a schedule S, the completion time of the job scheduled in the rth position on machine M3 is calculated as C_{[r]3}(S) = max{B_{[r]1}(S) + q_{1[r]}, B_{[r]2}(S) + q_{2[r]}, C_{[r−1]3}(S)} + q_{3[r]}, where B_{[r]1}(S), B_{[r]2}(S), and B_{[r]3}(S) denote the starting times of the job assigned to the rth position on machines M1, M2, and M3, respectively, in S; because M1 and M2 are never idle, B_{[r]i}(S) + q_{i[r]} = Σ_{k=1}^{r} q_{i[k]} for i = 1, 2. The objective is to find a schedule S that minimizes the makespan, i.e., C_max(S) = C_{[n]3}(S). For a Gantt chart illustrating the considered two-stage three-machine environment, readers can refer to Lee et al. [5].
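
To make the recursion concrete, the following minimal Python sketch computes the learning-adjusted processing times and the makespan of a given permutation under the sum-of-processing-times-based learning model assumed above; the function and variable names are illustrative and not taken from the paper.

# Minimal sketch of the makespan computation for a permutation schedule under the
# sum-of-processing-times-based learning model (assumed Kuo-Yang form); names are illustrative.
def makespan(sequence, p1, p2, p3, a):
    """sequence: job indices; p1, p2, p3: normal processing times on M1, M2, M3; a: learning index (a <= 0)."""
    s1 = s2 = s3 = 0.0          # sums of normal times of already processed jobs, per machine
    c1 = c2 = c3 = 0.0          # completion times on M1, M2, M3
    for j in sequence:
        q1 = p1[j] * (1.0 + s1) ** a   # learning-adjusted processing times
        q2 = p2[j] * (1.0 + s2) ** a
        q3 = p3[j] * (1.0 + s3) ** a
        c1 += q1                       # M1 and M2 are never idle
        c2 += q2
        c3 = max(c1, c2, c3) + q3      # assembly starts once both parts and M3 are available
        s1 += p1[j]; s2 += p2[j]; s3 += p3[j]
    return c3

# Example with 3 jobs and learning index a = -0.2:
print(makespan([0, 1, 2], [5, 3, 7], [4, 6, 2], [3, 5, 4], -0.2))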

3. Solution Approach

A branch-and-bound (B&B) method is used to solve the proposed problem for small numbers of jobs. Several dominance properties and a lower bound (LB) are developed and incorporated into the branch-and-bound method to increase the search efficiency. Since the proposed problem is NP-hard, we utilize a cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods to solve the problem for large numbers of jobs. It has been shown [7, 22, 23, 41] that permutation schedules, in which the jobs are processed in the same order on all machines, are dominant with respect to the makespan criterion. Consequently, we limit our search to permutation schedules for the proposed problem. Using the three-field classification system of the scheduling literature (see [42]), the proposed problem can be represented as AF2(2,1)/prmu, LE/C_max, where LE denotes the sum-of-processing-times-based learning effect.

3.1. Dominance Properties and a Lower Bound

Dominance rules are commonly used to improve the search efficiency of heuristics in scheduling research, for example, Lee et al. [5], Potts et al. [6], and Tozkapan et al. [22]. In this section, we develop some dominance (elimination) properties and a LB to be embedded in a B&B method for finding optimal solutions. Let S1 = (σ1, J_i, J_j, σ2) and S2 = (σ1, J_j, J_i, σ2) denote two sequences in which σ1 and σ2 are partial job sequences (possibly empty) and there are at most n − 2 scheduled jobs in σ1 and σ2. To show that S1 dominates S2, it suffices to show that C_max(S1) ≤ C_max(S2). In the following, we construct several dominance rules to assist in eliminating nodes in the B&B method. Two lemmas are provided to shorten the proofs of the properties. The proofs of Lemma 1 and Properties 3 to 9 are given in the Appendix.

Lemma 1. If , , and , then .

Note that if , , and , then .

Lemma 2. If , , and , then .

Note that Lemma 2 has already been proved by Kuo and Yang [4].

Property 3. If , , , max, , and , then dominates .

Property 4. If , , , , , and , then dominates .

Property 5. If , , , , and , then dominates .

Property 6. If , , , and , then dominates .

Property 7. If , , , and , then dominates .

Property 8. If , , , , and , then dominates .

Property 9. If , , , , and , then dominates .

The next property follows the same idea as Theorem 1 of Kuo and Yang [4], but it is modified to accommodate the learning situation. Let S = (σ, σ′), where σ has k scheduled jobs, 0 ≤ k ≤ n, and σ′ contains the remaining n − k jobs. Let σ′_SPT represent the sequence obtained by arranging the jobs of σ′ in nondecreasing order of their processing times on machine M3.

Property 10. If , then (σ, σ′_SPT) dominates (σ, σ′), where σ′_SPT is the partial schedule obtained by arranging the jobs of σ′ in nondecreasing order of their processing times on machine M3. (Note that the complexity of Property 10 is that of a sort, O(n log n).)

As a job is placed in the kth position of a schedule, a LB, based on a similar idea to that of Lee et al. [5], can be acquired from the completion times of the already scheduled jobs and the (learning-adjusted) processing times of the unscheduled jobs.

For the branch-and-bound (B&B) method in this study, we employ a depth-first search [43] in the branching procedure: a branch is selected and explored systematically until it is pruned or its end node is reached. The B&B method allocates jobs starting from the first position and moving forward to the last position. To enhance the performance of the B&B method, the starting sequence (schedule) found by the algorithm IGLS3 (see more details in the next subsection) is taken as the incumbent sequence. Then, the depth-first search is executed from the initial node and its descendants in the tree to update the incumbent sequence. Properties 3 to 9 are applied to eliminate branches. If the LB of an unsearched node is greater than or equal to the makespan of the incumbent sequence, then the node and all nodes sprouting from it in the branch are eliminated. For active nodes, Property 10 is applied to examine whether the unscheduled jobs can be ordered directly. The procedure is repeated until all nodes are explored.

The above B&B method is also used to evaluate the effectiveness of the adopted algorithms, the cloud theory-based simulated annealing algorithm and the iterated greedy algorithm, for small numbers of jobs.
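
As an illustration of the search scheme described above, the following Python sketch implements a depth-first branch-and-bound with lower-bound pruning. It reuses the makespan() routine sketched in Section 2.4, uses the makespan of the partial schedule as a deliberately weak but valid stand-in for the LB of this section, and omits the dominance checks (Properties 3 to 10).

# Schematic depth-first branch-and-bound; makespan() is the routine sketched in Section 2.4,
# the LB of Section 3.1 is replaced by a weak but valid stand-in, and dominance checks are omitted.
def branch_and_bound(jobs, p1, p2, p3, a):
    best = {"seq": list(jobs), "cmax": makespan(list(jobs), p1, p2, p3, a)}  # incumbent

    def dfs(partial, remaining):
        if not remaining:                                   # leaf: a complete schedule
            c = makespan(partial, p1, p2, p3, a)
            if c < best["cmax"]:
                best["seq"], best["cmax"] = partial[:], c
            return
        # Pruning: the makespan of the partial schedule is a valid lower bound,
        # since appending jobs can only increase the completion time on M3.
        if makespan(partial, p1, p2, p3, a) >= best["cmax"]:
            return
        for j in sorted(remaining):                         # branch on the next position
            dfs(partial + [j], remaining - {j})

    dfs([], set(jobs))
    return best["seq"], best["cmax"]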

For the studied problem without learning consideration, AF2(2,1)//C_max has been shown to be NP-hard (see [5]); so is AF2(2,1)/prmu, LE/C_max. In order to obtain good quality solutions within short CPU times, we utilize a cloud theory-based simulated annealing (CSA) metaheuristic (Li et al. [44]; Li and Du [45]) and four versions of the iterated greedy (IG) algorithm (Ruiz and Stützle [46]) to search for (near-)optimal solutions. The details are introduced in the following subsections.

3.2. A Cloud Theory-Based Simulated Annealing Algorithm

The simulated annealing (SA) algorithm, a random optimization method, was first proposed by Kirkpatrick [47]. SA has been frequently and successfully adopted for combinatorial optimization problems because of its capability of escaping from local minima during the search. SA imitates the physical annealing process in solving an optimization problem. It starts by constructing an initial solution, S, at a high initial temperature, T0, and selects a solution S′ randomly in the neighborhood of S. Whether S′ is accepted as the incumbent solution depends on the temperature T and on the objective (makespan) values C_max(S) and C_max(S′). If C_max(S′) ≤ C_max(S), then S′ is accepted and replaces S; otherwise, S′ can still be accepted with probability exp(−(C_max(S′) − C_max(S))/T). As the temperature decreases gradually, SA repeats the above process and converges to a global solution (in theory) or finds a local solution at the end of the process. Allahverdi and Al-Anzi [21] successfully applied SA to an assembly flow shop problem. Although SA has been successfully applied to several optimization problems, it has also been found unable to solve some combinatorial optimization problems well. Readers can refer to the references listed in Boussaïd et al. [48].
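
A minimal Python sketch of this acceptance rule, assuming the standard exponential (Metropolis) form given above, is:

import math, random

# Metropolis-style acceptance: f_new and f_old are makespan values, temp is the current temperature.
def accept(f_new, f_old, temp):
    if f_new <= f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / temp)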

The basic concept of cloud theory was proposed by Li et al. [44] and Li and Du [45]. Torabzadeh and Zandieh [49] proposed a cloud theory-based simulated annealing (CSA) algorithm for the two-stage assembly flow shop scheduling problem with a bicriteria objective function. They showed that their CSA performed better than the best existing SA [21]. Studying the same scheduling problem but with learning consideration and a different criterion, we adapt the CSA to our problem. For more details about CSA, readers may refer to Li and Du [45] and Lv et al. [50].

For the CSA to find better quality solutions, a local heuristic based on the idea of Johnson's rule [51] was constructed. The resulting solution from the heuristic was then given to the CSA as an initial solution. The Johnson-based local search heuristic, coded as _mean, is as follows.

(1) For each job J_j, j = 1, 2, ..., n, let a_j = (p_{1j} + p_{2j})/2 and b_j = p_{3j} be its two pseudo processing times. Set k = 1 and l = n.
(2) Let p* = min_{J_j in J} {a_j, b_j}.
(3) (i) If p* = a_i for some job J_i, then assign J_i to the kth position, set k = k + 1, and go to (4). (ii) If p* = b_i for some job J_i, then assign J_i to the lth position, set l = l − 1, and go to (4).
(4) Delete J_i from J; if J is not empty, then go to (2); otherwise, stop.
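
A minimal Python sketch of this seeding heuristic is given below; averaging the two first-stage times into a single pseudo time is an assumption suggested by the heuristic's name, and the function name is illustrative.

# Johnson-rule-based seeding heuristic: the two first-stage times are averaged into a pseudo
# time a_j (an assumption) and compared with the assembly time b_j = p3_j.
def johnson_mean(p1, p2, p3):
    n = len(p1)
    a = [(p1[j] + p2[j]) / 2.0 for j in range(n)]   # pseudo stage-1 times
    b = list(p3)                                    # stage-2 (assembly) times
    unscheduled = set(range(n))
    front, back = [], []
    while unscheduled:
        # pick the job holding the smallest pseudo time on either stage
        j = min(unscheduled, key=lambda x: min(a[x], b[x]))
        if a[j] <= b[j]:
            front.append(j)         # small stage-1 time: schedule as early as possible
        else:
            back.insert(0, j)       # small stage-2 time: schedule as late as possible
        unscheduled.remove(j)
    return front + back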

Note that the total computational complexity of the _mean algorithm, which is the same as that of Johnson's rule, is O(n log n). The parameters arising in the CSA, such as the initial temperature T0, the final temperature Tf, and the annealing index λ, need to be determined. We referred to the values of the corresponding parameters used in Allahverdi and Aydilek [16], performed a pilot study similar to that in Wu et al. (2017), and adopted the resulting values of T0, Tf, and λ. The procedure of the CSA is given in Algorithm 1.

Input the initial temperature T0, the final temperature Tf, and the annealing index λ.
Randomly generate a solution; improve it by the (Johnson's rule based) _mean heuristic and then by a
pairwise interchange; denote the resulting solution by S and calculate Cmax(S).
Set Ex = T0.
Do while (Ex > Tf)
Set the normal cloud parameters En and He from Ex.
Generate En′ ~ N(En, He²), where eps < He.
Generate a cloud-drop temperature T ~ N(Ex, En′²).
Set iter = 0.
Do while (iter < 20, a predetermined number of iterations)
Choose i and j randomly with i ≠ j, and swap the ith
job and the jth job in S to obtain a new solution S′.
Calculate the makespans Cmax(S) and Cmax(S′).
If Cmax(S′) ≤ Cmax(S), then replace S by S′; else, replace S by S′ when
U < exp(−(Cmax(S′) − Cmax(S))/T), where
U ~ rand(0, 1).
Let iter = iter + 1.
End while
Let Ex = λEx.
End while
Output the final solution S.

In the CSA procedure (Algorithm 1), En (the range of the cloud), He (the dispersive degree of the cloud drops), and the given small value eps are parameters of the normal cloud model used in the cloud theory-based simulated annealing algorithm [50]. In addition, the number of inner iterations can be raised to enhance the searching capability of the CSA [16].
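
For concreteness, a minimal Python sketch of the normal cloud drop generator underlying Algorithm 1 is given below; the mapping from the current temperature to (Ex, En, He) is an assumption here and would in practice follow [49, 50].

import random

# Normal cloud drop generator with expectation Ex, entropy En, and hyper-entropy He.
def cloud_drop(ex, en, he):
    en_prime = random.gauss(en, he)          # dispersion of this particular drop
    return random.gauss(ex, abs(en_prime))   # the drop itself

# Example: generate a cloud-drop temperature around the current temperature T (illustrative mapping).
T = 100.0
T_drop = cloud_drop(ex=T, en=T / 10.0, he=T / 100.0)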

3.3. Iterated Greedy Algorithm

The iterated greedy (IG) algorithm was proposed by Ruiz and Stützle [46]. The main advantages of this method are that it is simple to implement and that it has been shown to yield high-quality solutions for flow shop problems [52]. The IG has been extensively employed over the last ten years in scheduling research, especially for (permutation or hybrid) flow shop problems, for example, Pan and Ruiz [53], Msakni et al. [54], Dubois-Lacoste et al. [55], and Pan et al. [56]. In this study, we propose an IG algorithm with four different local search methods, i.e., four versions of IG, coded as IGLS1, IGLS2, IGLS3, and IGLS4.

When performing the IG, one randomly generates an initial solution and repeatedly executes destruction and construction cycles on it until a stopping criterion is reached [46]. In the destruction stage, d jobs are removed from the incumbent solution S1 to form a partial sequence SR with n − d jobs. Let SD be the sequence of size d that contains the removed jobs in the order of their appearance in S1. In the construction stage, the jobs of SD are reinserted into the partial solution SR using the Nawaz–Enscore–Ham (NEH) mechanism [57]. Each job in SD has an assigned order, and the job with the first order is inserted into all possible positions of SR; the position with the best objective value is kept. The same step is repeated for all remaining jobs in SD. Note that the complexity of the NEH mechanism is O(n³m) in a straightforward implementation.
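
The following Python sketch illustrates one destruction-construction cycle with NEH-style reinsertion; cmax() stands for the makespan routine sketched in Section 2.4, and d is the number of removed jobs.

import random

# One IG destruction/construction cycle with NEH-style reinsertion (illustrative sketch).
def destruct_construct(seq, d, cmax):
    s1 = seq[:]
    removed = []
    for _ in range(d):                        # destruction: remove d randomly chosen jobs
        removed.append(s1.pop(random.randrange(len(s1))))
    for job in removed:                       # construction: reinsert each job greedily
        best_pos, best_val = 0, float("inf")
        for pos in range(len(s1) + 1):        # try the job in every position of the partial sequence
            cand = s1[:pos] + [job] + s1[pos:]
            val = cmax(cand)
            if val < best_val:
                best_pos, best_val = pos, val
        s1.insert(best_pos, job)
    return s1

# Example (using the makespan() sketch from Section 2.4):
# new_seq = destruct_construct([0, 1, 2, 3], d=2, cmax=lambda s: makespan(s, p1, p2, p3, a))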

Similar to Ruiz and Stützle [46], the IGLS algorithms utilize an acceptance criterion to determine whether a new solution should be kept. They adopt Temperature = T_p × Σ_{j=1}^{n} (p_{1j} + p_{2j} + p_{3j}) / (10 × n × m) as the constant temperature in the IG algorithm, where T_p is a controllable parameter with T_p ≥ 0. The procedure of the IG algorithm is described in Algorithm 2.
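
A small Python sketch of this constant temperature, assuming the form proposed by Ruiz and Stützle [46] with m = 3 machines, is:

# Constant acceptance temperature for the IG (assumed Ruiz-Stutzle form); T_param is the controllable parameter.
def ig_temperature(p1, p2, p3, T_param):
    n = len(p1)
    total = sum(p1) + sum(p2) + sum(p3)       # sum of all normal processing times
    return T_param * total / (n * 3 * 10)     # m = 3 machines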

Initialization:
Randomly generate an initial solution S.
Improve S by a local search (e.g., LS3). The LS3 is as follows.
Randomly select one job among the n jobs.
Insert the selected job into the n possible positions and keep the best (lowest
Cmax) solution.
Repeat these steps until all n jobs have been selected and inserted back;
update the best solution if a better solution is found.
Output the resulting solution, S0.
Set S1 = S0, Sbest = S0, and iter = 1;
Do while (iter ≤ 30, a predetermined number of iterations)
Destruction stage:
Divide S1 into SD and SR, where SD is the set of d removed jobs and SR the set of
the remaining jobs;
Construction stage:
For the removed d jobs, execute the NEH mechanism on SR one job after another
until it yields the best sequence, coded as S2;
Improve S2 by one of the four local search methods, LS1, LS2, LS3, or LS4; code the result
as S3.
(Acceptance criterion)
If Cmax(S3) < Cmax(S1), then update S1 by S3;
If Cmax(S3) < Cmax(Sbest), then update Sbest by S3;
Else if q < exp(−(Cmax(S3) − Cmax(S1))/Temperature), then update S1 by S3;
(Note that q ~ rand(0, 1).)
Let iter = iter + 1.
End while
Output the final solution Sbest.

We used four different neighborhood structures as local search steps after the construction stage in the IG algorithm. The insertion neighborhood structure, LS3, is described in the initialization step of Algorithm 2. The neighborhood structures of LS1, LS2, and LS4 are described as follows. Let S be a solution (i.e., a sequence of the n jobs) and Cmax(S) be its objective value, i.e., the makespan of S.

LS1: swap of adjacent positions.
(1) Set i = 1.
(2) Swap the jobs in positions i and i + 1 of S, and let S′ denote the new solution.
(3) If Cmax(S′) is less than Cmax(S), then update S by S′.
(4) Let i = i + 1. Repeat (2) and (3) until i = n.
(5) Repeat steps (1) to (4) until S cannot be improved.
(6) Return the best found solution, denoted as S3.

LS2: complete swap and best-first neighborhood structure.
(1) Swap each pair of positions (jobs) in S.
(2) Calculate the resulting Cmax values.
(3) Compare the Cmax values of all possible pairs.
(4) Keep the best one, say S0, with the lowest value of Cmax.
(5) If Cmax(S0) < Cmax(S), then interchange the job numbers between the corresponding pair of positions; i.e., update S by S0.
(6) Repeat steps (1) to (5) until S cannot be improved.
(7) Return the best found solution, S3.

LS4: first-improvement swap neighborhood structure.
(1) Let i = 1 and j = i + 1.
(2) Swap the jobs in positions i and j of S, and let S′ denote the new solution.
(3) If Cmax(S′) is less than Cmax(S), then update S by S′.
(4) Let j = j + 1. Repeat (2) and (3) until j = n.
(5) Let i = i + 1. Repeat steps (2) to (4) until i = n − 1.
(6) Repeat steps (1) to (5) until S cannot be improved.
(7) Return the best found solution, denoted as S3.
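
The following Python sketch illustrates two of the swap-based neighborhoods described above, LS1 (adjacent swaps) and LS2 (best-improving swap over all pairs); cmax() again stands for the makespan routine sketched in Section 2.4.

# LS1: repeatedly accept improving swaps of adjacent positions.
def ls1_adjacent_swap(seq, cmax):
    s, improved = seq[:], True
    while improved:
        improved = False
        for i in range(len(s) - 1):                   # scan adjacent pairs
            cand = s[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if cmax(cand) < cmax(s):
                s, improved = cand, True
    return s

# LS2: evaluate every pairwise swap and keep the best improving one, until no improvement.
def ls2_best_swap(seq, cmax):
    s, improved = seq[:], True
    while improved:
        improved = False
        best, best_val = s, cmax(s)
        for i in range(len(s) - 1):                   # examine every pair (i, j)
            for j in range(i + 1, len(s)):
                cand = s[:]
                cand[i], cand[j] = cand[j], cand[i]
                val = cmax(cand)
                if val < best_val:
                    best, best_val, improved = cand, val, True
        s = best
    return s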

The ideas of LS1, LS2, LS3, and LS4 are modified from the three local searches proposed by Della Croce et al. [58]. The proposed LS1, LS2, LS3, and LS4 are very fast in practice, even though their worst-case complexities are not easily characterized. As for other neighborhood structures and local search methods, one can refer to Li et al. [59] and Wang et al. [60, 61], among others.

4. Computational Experiments and Results

In this section, we present the computational experiments conducted to examine the efficiency of all adopted algorithms. The branch-and-bound method was run for small numbers of jobs; the CSA and the iterated greedy (IG) algorithm with four different local search methods (coded as IGLS1, IGLS2, IGLS3, and IGLS4) were run for both small and large numbers of jobs. These algorithms were implemented in Fortran (version 6.6, Compaq Visual Fortran) on a PC (Windows XP) with an Intel(R) Core(TM) i7 Duo CPU at 2.66 GHz and 1.99 GB of RAM. The normal job processing times on machines M1, M2, and M3, in line with the scheme of Lee et al. [5], were generated at random from a discrete uniform distribution. All the parameters used in the CSA adopted the settings of Wu et al. [1], while all the parameters in the IG used the results of preliminary tests.

In the computational experiments for small numbers of jobs, n was set at 8, 9, 10, and 11. Following the suggestion of Liu et al. [62], we set the learning index a at three values. The number of jobs removed from a sequence was set at d = 2, 3, and 4; no more than half of the jobs were removed. One hundred instances were tested for each combination of n, a, and d, for a total of 3600 instances. Tables 1, 2, and 3 summarize the computational results. The optimal solutions were obtained by running the B&B method for the chosen small numbers of jobs. We report the average error percentage (AEP), where the error percentage of an instance is (V − V*)/V* × 100%, V is the makespan obtained by a metaheuristic, and V* is the optimal makespan obtained by the B&B method. Figure 1 shows the performance results of the CSA and IGLS algorithms.

As for the capability of the B&B method, we report the maximum and average numbers of nodes and the maximum and average execution times (in seconds) in Table 1. As the job size increased, the (mean and maximum) numbers of nodes increased, and the CPU times increased dramatically as n increased (Table 1, columns 4 and 5). If the number of nodes went beyond 10^8, the B&B method skipped to the next instance, and instances solved with fewer than 10^8 nodes were recorded as solved instances. The computational results are shown in the last column of Table 1. No matter the number of jobs n or the learning index a, all instances were solvable within 10^8 nodes. As for the impact of particular dominance properties on the efficiency of the B&B, such an analysis can be carried out in the same way as in Wu et al. [1].

For the proposed algorithms, CSA and IGLS, the AEPs are presented in Table 2. All AEPs of the CSA and IGLS algorithms increased slightly as the learning effect strengthened (i.e., as the learning index took more extreme values), as shown in Table 2. As the amount of destruction d increased, the AEPs decreased; this is because the larger the value of d, the wider the scope explored by the search. Overall, the CSA (with a mean AEP of 0.1%) performed best, while the worst performer, IGLS1, had a mean AEP of 1.5% (the last row of Table 2). The CPU times were all within 0.1 seconds and are therefore omitted here.

To determine whether the differences in solution quality between the CSA and IGLS algorithms were statistically significant, parametric or nonparametric statistical tests are commonly employed [63]. The parametric analysis of variance (ANOVA) relies on three assumptions, i.e., normality, homoscedasticity, and independence of the residuals. We carried out an ANOVA on the AEPs and examined the normality assumption for the residuals (using SAS 9.4). The value of the Kolmogorov–Smirnov D statistic was 0.18, and the asymptotic two-sided p value was less than 0.01, with the level of significance set at 0.05. Since the normality assumption was violated for the AEP samples, we used the nonparametric Kruskal–Wallis test on the ranks of the AEPs to test whether the populations of AEPs were the same. Table 3 (column 3) shows the mean ranks of the CSA and IGLS algorithms.
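
For illustration, the Kruskal–Wallis test can be reproduced in Python with scipy; the AEP samples below are placeholders rather than the paper's data.

from scipy import stats

# Kruskal-Wallis test on the AEP samples of the five algorithms (placeholder data).
aep_csa   = [0.001, 0.002, 0.000]
aep_igls1 = [0.015, 0.014, 0.016]
aep_igls2 = [0.004, 0.005, 0.003]
aep_igls3 = [0.004, 0.003, 0.005]
aep_igls4 = [0.013, 0.012, 0.014]

h_stat, p_value = stats.kruskal(aep_csa, aep_igls1, aep_igls2, aep_igls3, aep_igls4)
print(h_stat, p_value)   # reject the hypothesis of identical populations if p_value < 0.05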

In Table 4 (column 2), with a p value (Asymp. Sig.) < 0.001, there were statistically significant differences among the performances of the CSA and IGLS algorithms. Consequently, a multiple comparison method based on pairwise two-sample rankings was conducted to determine pairwise differences between the algorithms. We employed the Dwass–Steel–Critchlow–Fligner (DSCF) procedure [64, 65]. As shown in Table 5, the mean ranks of the AEPs could be separated into two groups at the 0.05 level of significance. As illustrated in Table 5 (columns 2 and 3) of the DSCF test, the CSA (AEP = 0.001), IGLS2 (AEP = 0.004), and IGLS3 (AEP = 0.004) were in the better-performing group, and IGLS1 (AEP = 0.015) and IGLS4 (AEP = 0.013) were in the worse-performing group.

In the computational experiments for large numbers of jobs, following Liu et al. [62], we again set the learning index a at the same three values; the number of jobs removed, d, was set at 10, 15, and 20; the number of jobs n was set at , 50, 60, and 70. One hundred instances were tested for each combination of n, a, and d, for a total of 3600 instances. We calculated the relative percent deviation (RPD), defined as (V − V_min)/V_min × 100%, where V was the makespan obtained by each algorithm and V_min was the minimum value among the CSA and the four IGLS algorithms.
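
As a small worked illustration of the two reported measures (the values are placeholders), the per-instance computations are:

# Per-instance error measures; averaging over instances gives the AEP and mean RPD reported in the tables.
def error_percentage(v_meta, v_opt):     # vs. the B&B optimum (small-n experiments)
    return 100.0 * (v_meta - v_opt) / v_opt

def rpd(v_alg, v_min):                   # vs. the best of the five algorithms (large-n experiments)
    return 100.0 * (v_alg - v_min) / v_min

print(error_percentage(105.0, 100.0), rpd(103.0, 100.0))   # 5.0 and 3.0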

As shown in Table 6, the mean RPDs of the CSA and IGLS algorithms increased as the learning effect strengthened, regardless of the number of jobs. As for the impact of the number of removed jobs on the RPDs, the average RPDs were 0.0118, 0.0113, and 0.0113 for d = 10, 15, and 20, respectively. Figure 2 provides boxplots of the RPDs for each of the five algorithms.

The boxplots of the RPDs display the performances of the CSA and IG algorithms for large numbers of jobs. The CSA algorithm performed slightly worse than the four IGLS variants. We carried out another ANOVA on the RPDs and examined the normality assumption for the residuals; the p value from the Kolmogorov–Smirnov normality test was less than 0.01. Therefore, the normality assumption was invalid for the RPD samples. A Kruskal–Wallis test on the ranks of the RPDs was subsequently used to check whether the RPD samples came from the same distribution. Table 3 (column 5) shows the mean ranks of the CSA and the four IGLS algorithms for large numbers of jobs. In Table 4 (column 3), the asymptotic p value is less than 0.0001. Consequently, the DSCF procedure was used to find pairwise differences between the CSA and IGLS algorithms. As displayed in Table 5 (columns 4 and 5), the IGLS algorithms were in one better-performing group, whereas the CSA algorithm was the worst for the large-number-of-jobs case. Moreover, the boxplots of the RPDs (Figure 2) of IGLS3 and IGLS2 showed less dispersion than the others. In other words, IGLS2 and IGLS3 were both accurate and stable for solving problems with large numbers of jobs.

5. Conclusions and Future Studies

A makespan minimization problem for two-stage three-machine assembly flow shop scheduling with a sum-of-processing-times-based learning effect has been addressed. A branch-and-bound method embedded with a LB and several dominance properties was proposed for small numbers of jobs. Since the problem is NP-hard, two algorithms, a cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four local search methods, were adopted to solve the problem for large numbers of jobs. These algorithms performed well for both small and large numbers of jobs. The computational results confirm the effectiveness, as reported in the scheduling research community, of the proposed IG algorithm with its four different local search methods. The CSA remained a competitive metaheuristic, especially for the small-number-of-jobs problems in this study.

The occurrence of the studied problem in many plant manufacturing environments has been exemplified by the studies referred to in the introduction. Some potential research topics worthy of investigation are different learning curves, including position-based learning, exponential learning, truncated learning, and general learning functions, and especially different learning functions for different stages, i.e., one at the component manufacturing stage and another at the assembly stage. This study considers two stages; however, several published papers have investigated the three-stage flow shop assembly problem, all without learning effects. Therefore, extending to multistage assembly problems with learning consideration in some of the stages is another future topic to pursue. One may also generalize the properties to general nonincreasing learning curves in which the actual processing time of a job is its normal processing time multiplied by a nonincreasing function of the sum of the processing times of the already processed jobs (and similarly on the other machines), or to job-dependent nonincreasing learning curves. Readers may refer to Rudek [66-68] and Wang et al. [60, 61] for further details.

Appendix

Proofs of Lemma 1 and Properties

Proof of Lemma 1. Because , thus , and , the result holds.

Proof of Property 3. The completion times of job and job in the positions of and on machines , , in are as follows. The completion times of job and job in the positions of and on machines , , in are as follows. In the following, we claim that .
The inequality implies that The last equality holds because
Since , , and , then .
Apply the last inequality to :The last equality comes from .
So,Letting , and , , then by applying Lemma 2, the first term of the last equation is greater than 0. Letting , , and , then Lemma 1 implies the second term of the last equation in is greater than 0. Consequently, , as required.

Proof of Property 4. .
Letting , and , , then following the argument used in proof of Property 3, we have .

Proof of Property 5. Because , , and , . In addition, , together with three conditions, and it follows .
Let , and , ; then by applying Lemma 2, the last term in the above equality is greater than 0. Therefore, we have .

Proof of Property 6. Note that , and . This is because and , . ; this can be obtained directly from Lemma 1.
Now, apply the two longer inequalities given in the conditions of Property 6; we have Let , and , ; then by applying Lemma 2, the first term above is greater than 0. Directly applying Lemma 1, the second term above is greater than 0.

Proof of Property 7. The proof is similar to that of Property 6 with and now being replaced by and .

Proof of Property 8. The value of inside max in the is equal to because and . Applying the given inequality of to the first two terms of the outside max, we have .
The value of max inside the is equal to because and .
Now,Note that ,, and . We have .
Therefore, Let , , and , . Applying Lemma 2, we obtain the desired .

Proof of Property 9. The proof is similar to that of Property 8 and thus omitted.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest.

Disclosure

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Acknowledgments

The author thanks Professor K. Lai for helping in editing the English language.