Abstract

This paper addresses the flow-shop sequence-dependent group scheduling problem with learning effect (FSDGSLE). The objective function to be minimized is the total completion time, that is, the makespan. Workers are required to manually carry out the set-up operations on each group to be loaded on the generic machine. The operators' skills improve over time due to the learning effect; therefore the set-up time of a group under learning effect decreases depending on the position the group occupies in the sequence. To effectively cope with the issue at hand, a mathematical model and a hybrid metaheuristic procedure integrating features from genetic algorithms (GA) have been developed. A well-known benchmark of problems drawn from the literature, made up of two-, three- and six-machine instances, has been taken as a reference for assessing the performance of the proposed approach against the two most recent algorithms presented in the literature on the FSDGS issue. The obtained results, also supported by a properly developed ANOVA analysis, demonstrate the superiority of the proposed hybrid metaheuristic in tackling the FSDGSLE problem under investigation.

1. Introduction

Scheduling problems have received extensive attention since the middle of the last century. One of the basic assumptions of classic scheduling theory is that job descriptors, such as processing times and setup times, are known a priori and do not change over the time horizon. Nowadays, thanks to the availability of computational resources, it is possible to consider a more realistic situation in which such descriptors may vary due to the learning effect. In fact, workers improve their performance by repeating the required operations and, as a result, the processing and/or set-up times of a job may be reduced if it is scheduled later in the sequence.

The first studies on the learning effect were developed by Wright [1] and Biskup [2]. They stated that the production time of a job under learning effect decreases depending on the position the job occupies in the sequence. The corresponding learning effect model, which computes the actual processing time p_{j,r} of job j when it is scheduled in position r, is defined as p_{j,r} = p_j * r^a, where p_j is the normal processing time of job j and a = log2(LR) <= 0 is the learning index, which is a function of the learning rate LR. The processing time needed decreases with the number of repetitions, meaning that learning is primarily based on the repetition of a task, such as a machine setup.
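Under Biskup's position-based model, the actual time follows directly from the normal time, the scheduled position, and the learning rate. The following minimal sketch illustrates the computation (function and variable names are our own, not the paper's notation):

```python
import math

def position_learning_time(normal_time, position, learning_rate):
    """Biskup position-based model: actual time = normal_time * position**a,
    where the learning index a = log2(learning_rate) <= 0."""
    a = math.log2(learning_rate)
    return normal_time * position ** a

# A job with normal time 10 keeps its full time in position 1; later
# positions benefit from a 90% learning rate (a = -0.152).
times = [position_learning_time(10, r, 0.9) for r in (1, 2, 4)]
# position 1 -> 10.0; position 2 -> 9.0; position 4 -> 8.1
```

Note that doubling the position multiplies the actual time by the learning rate, which is exactly how a "90% learning curve" is usually defined.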

As an alternative to the above position-based learning model, Kuo and Yang [3] proposed a sum-of-processing-time-based learning model which is time dependent: the actual processing time of the job scheduled in position r is p_{[r],r} = p_{[r]} * (1 + sum_{i=1}^{r-1} p_{[i]})^a, where p_{[i]} denotes the normal processing time of the job in position i. Starting from these two fundamental models, some authors proposed other learning models that are modifications or combinations of them. Cheng et al. [4] proposed a model of learning effects in which the actual processing time of a job is a function of the total normal processing times of the jobs already processed and of the job's scheduled position. Yin et al. [5] developed a general learning effect model where the actual processing time of a job is a function not only of the total normal processing times of the jobs already processed, but also of the job's scheduled position. Soroush [6] considers a single-machine scheduling problem with general past-sequence-dependent setup times and log-linear learning, in which the setup times and learning effects are job-dependent.
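A common form of the Kuo and Yang sum-of-processing-time model can be sketched as follows (a hypothetical helper under the stated assumptions; the exponent is again the learning index derived from the learning rate):

```python
import math

def sum_time_learning(normal_times, learning_rate):
    """Kuo-Yang sum-of-processing-time model: the actual time of the job in
    position r is p_r * (1 + sum of normal times already processed)**a."""
    a = math.log2(learning_rate)
    actual, elapsed = [], 0.0
    for p in normal_times:
        actual.append(p * (1.0 + elapsed) ** a)
        elapsed += p  # elapsed accumulates *normal* times, per the model
    return actual
```

The first job is unaffected, while later jobs shrink according to the accumulated normal workload rather than their ordinal position.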

Biskup [7] provides an extensive review of the literature on scheduling problems that consider the two types of learning effects.

The learning effect has been widely applied to single machine scheduling problems. Recently, Lee [8] proposed a model where the setup time is past-sequence-dependent and the actual job processing time is a general function of the processing times of the jobs already processed and of its scheduled position. Costa et al. [9] consider the single machine total weighted completion time scheduling problem, in which jobs have nonzero release times and processing times increase during production due to the effect of deterioration on the machine; the setup and removal times are influenced by the ability of the worker, which depends on work experience and learning capacity. Zhang [10] studied a single machine model where the jobs are grouped in families and the learning effect influences both the processing times of the jobs and the set-up times of the groups.

Some authors applied the principle of the learning effect to other manufacturing models. Vahedi-Nouri et al. [11] investigated a nonpermutation flow shop scheduling problem with the objective of minimizing the total flow time; each job has a nonzero release time and its processing time depends on its position in the sequence, because of the learning effect. Liu [12] studied the scheduling of jobs on identical parallel machines where delivery times are past-sequence-dependent and the learning effect on processing times is considered. Sun et al. [13] studied a permutation flow shop scheduling problem where the actual processing time of a job is defined by a general nonincreasing function of its scheduled position; to solve this problem, several algorithms derived from the corresponding single machine scheduling problem are presented.

To the best of our knowledge, the learning effect has not been investigated so far in the context of a flow shop where the jobs are grouped in families. This scheduling problem is known as the flow-shop group scheduling (FSGS) problem. A set of jobs has to be processed on serial machines arranged in a flow-shop layout. According to group technology (GT) manufacturing principles, the whole set of jobs to be worked may be partitioned into smaller subsets, called groups or families, made up of jobs sharing the same technological requirements in terms of tooling and setups. A major setup is required when switching from one family to another, whilst the setup time between jobs belonging to the same family either is assumed to be negligible or can be included in the run time. The solution to the problem is represented by the permutation of the jobs of each family and by the permutation order of the groups. In more realistic contexts, such as printed circuit board (PCB) manufacturing [14, 15] or the automotive sector [16], the set-up time depends on the position of the group in the schedule, and this problem is addressed as the flow-shop sequence-dependent group scheduling (FSDGS) problem.

The FSDGS problem has been taken into consideration by a significant number of researchers: Zhu and Wilhelm [17] proposed a complete review. Salmasi et al. [18] presented a hybrid ant colony optimization (HACO) algorithm for minimizing the makespan in a flow-shop sequence-dependent group scheduling problem. Hajinejad et al. [19] proposed a hybrid particle swarm optimization (PSO) algorithm that outperforms the HACO of Salmasi et al. [18]. Recently, Naderi and Salmasi [20] proposed two different MILP formulations, together with a metaheuristic technique hybridizing genetic and simulated annealing algorithms, called GSA, to cope with the FSDGS issue under the total completion time minimization viewpoint.

The aim of this paper is to introduce the flow-shop sequence-dependent group scheduling problem with learning effect (FSDGSLE). The objective function to be minimized is the total completion time, that is, the makespan. In this scheduling problem the workers are required to manually carry out the set-up operations on each group to be loaded on the generic machine. An anticipatory set-up is assumed; that is, the set-up of a group can be started by a worker even though the first job of the group is not yet available on the machine to be set up. The workers are not a critical resource: a worker from the crew is always available to perform a set-up activity on a machine, thus preventing machine starvation and consequent delays in the makespan. The ability of the operators increases over time due to the learning effect; therefore the set-up time of a group under learning effect decreases according to the position the group occupies in the sequence. The job processing times do not change, because the jobs are automatically processed on every machine of the flow shop. A new mathematical model for this problem and an efficient heuristic algorithm, able to solve also large-sized problems, are proposed.

The remainder of the paper is organized as follows: Section 2 presents the mixed integer programming mathematical model for the proposed FSDGSLE problem. Section 3 describes the proposed heuristic algorithm based on the evolutionary principle. Section 4 deals with an extensive comparison between the proposed optimization procedure and two other algorithms in the field of FSDGS problems. Finally, Section 5 concludes the paper.

2. The FSDGSLE Mathematical Model

The proposed mathematical model integrates the learning effect on the group setup time with the flow-shop sequence-dependent group scheduling problem (FSDGS) by means of a mixed integer linear programming (MILP) approach. According to the formalization proposed by Pinedo [15], this problem can be denoted as Fm | fmls, s_gh, prmu, LE | C_max, where Fm indicates a flow shop with m machines, fmls indicates that the jobs are assigned to different groups (families), s_gh means that the set-up of each group is sequence-dependent, prmu refers to a permutation-type process (i.e., all the jobs and the groups are processed in the same order on each machine), and LE means that the setup time depends on the position of the group in the sequence of groups, while C_max, that is, the makespan, is the objective to be minimized.

The following notation has been adopted, where a slot is a position in the sequence of groups that can be occupied by a single group only; that is, each group must be assigned to exactly one slot.

Indices/Parameters:
(i) the number of machines;
(ii) the number of groups;
(iii) the indexes of groups;
(iv) the number of jobs in each group;
(v) the index of group slots;
(vi) the index of jobs within a group;
(vii) the index of job slots within a group;
(viii) the index of machines;
(ix) the processing time of a job of a group on a machine;
(x) the setup time of a group preceded by another group on a machine;
(xi) the learning index;
(xii) a big number.

Binary Variables. The binary assignment variables map each group to a group slot and each job to a job slot within its group; their formal definitions are given in the model.

Continuous Variables:
(i) the completion time of the job processed in a given job slot of a group on a machine;
(ii) the starting time of a group on a machine;
(iii) the finishing time of a group on a machine;
(iv) the makespan.

Objective. The objective is the minimization of the makespan, subject to the following constraints. Constraints (5), (6), and (7) ensure that each group is assigned to exactly one slot, is preceded by one and only one group, and precedes at most one other group. Constraint (8) states that if a group is assigned to a slot, there must be a group preceding it. Constraints (9) and (10) ensure that each job of a given group is assigned to exactly one slot and that each slot holds no more than one job. Constraint (11) defines the starting time of each group on the basis of the finishing time of the previous one. Constraint (12) ensures that the first job of each group starts after the starting point of the group. Constraint (13) forces each job to be processed after the previous job of the same group is finished. Constraint (14) states that each job must be processed after it is completed on the previous machine. Constraint (15) links the finishing time of a group to the completion time of the last job processed within the group itself. Constraint (16) forces the finishing time of the dummy group 0 to be null. Constraint (17) defines the makespan.
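While the MILP equations themselves are not reproduced here, the timing logic that the constraints encode can be sketched as a schedule decoder: given a group sequence and job sequences, it computes the makespan with learned, anticipatory setups. All names and data structures below are illustrative, not the paper's notation:

```python
import math

def fsdgsle_makespan(group_seq, job_seqs, proc, setup, learning_rate):
    """Makespan of a permutation flow-shop group schedule (sketch, not the MILP).

    group_seq     : permutation of group ids
    job_seqs[g]   : permutation of job ids within group g
    proc[g][j]    : processing times of job j of group g, one per machine
    setup[h][g]   : per-machine setup times of group g when preceded by h
                    (h = -1 denotes the initial dummy group)
    Setups shrink with the position-based learning effect: a setup for the
    group in slot s takes its nominal value times s**a, a = log2(learning_rate).
    """
    a = math.log2(learning_rate)
    n_machines = len(proc[group_seq[0]][job_seqs[group_seq[0]][0]])
    group_finish = [0.0] * n_machines   # finish time of previous group per machine
    prev = -1
    for slot, g in enumerate(group_seq, start=1):
        # group start per machine: previous group done, then the learned setup
        job_done = [group_finish[m] + setup[prev][g][m] * slot ** a
                    for m in range(n_machines)]
        for j in job_seqs[g]:
            c_upstream = 0.0            # completion on the previous machine
            for m in range(n_machines):
                c_upstream = max(job_done[m], c_upstream) + proc[g][j][m]
                job_done[m] = c_upstream
        group_finish, prev = job_done, g
    return group_finish[-1]             # completion on the last machine
```

On a toy two-machine, two-group instance this reproduces by hand exactly the precedence logic stated by constraints (11)-(17).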

3. The Proposed Genetic Hybrid Algorithm

The FSDGSLE problem is NP-hard, and the exact solution of the mathematical model can be obtained only for problems with a limited number of groups. For this reason a metaheuristic algorithm based on evolutionary principles has been proposed. Starting from a genetic algorithm properly adapted to the group scheduling problem, the proposed genetic hybrid algorithm aims to enhance the efficiency of the genetic procedure by embedding a local search algorithm.

Generally, a genetic algorithm works with a set of problem solutions called a population. At every iteration, a new population is generated from the previous one by means of two operators, crossover and mutation, applied to solutions (chromosomes) selected on the basis of their fitness, that is, their objective function value; thus, the best solutions have greater chances of being selected. The crossover operator generates new solutions (offspring) by combining the structures of a couple of existing ones (parents), while the mutation operator introduces a change into the scheme of selected chromosomes, with the aim of preventing the procedure from remaining trapped in local optima. The algorithm proceeds by evolving the population through successive generations, until a given stopping criterion is met.
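The evolutionary loop just described can be summarized by the following generic skeleton (all operator names are placeholders to be supplied by the concrete algorithm; this is a textbook sketch, not the paper's GHA implementation):

```python
import random

def genetic_algorithm(init_pop, fitness, crossover, mutate, select, p_mut, stop):
    """Generic GA skeleton: selection, crossover, mutation, elitist bookkeeping."""
    pop = init_pop()
    best = min(pop, key=fitness)                # minimization problem
    while not stop():
        nxt = []
        while len(nxt) < len(pop):
            p1, p2 = select(pop), select(pop)   # fitness-biased selection
            nxt.extend(crossover(p1, p2))       # two offspring per couple
        for i in range(len(nxt)):
            if random.random() < p_mut:         # mutation with fixed probability
                nxt[i] = mutate(nxt[i])
        pop = nxt
        best = min(pop + [best], key=fitness)   # never lose the incumbent
    return best
```

Any chromosome representation works as long as the four operators agree on it; the concrete FSDGSLE operators are described in the following subsections.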

Whenever a real problem is to be addressed through an evolutionary algorithm, the choice of a proper encoding scheme (i.e., the way a solution is represented by a string of genes) plays a key role from both the solution quality and the computational burden viewpoints [21]. In addition, a valid decoding procedure able to transform a given string into a feasible solution must be provided.

The following subsections provide a detailed description of the proposed GA-based optimization procedure, named GHA.

3.1. Problem Encoding

Problem encoding is the way a given problem to be optimized through a metaheuristic procedure is represented by means of a numerical chromosome. With reference to the proposed GHA, a matrix-encoding scheme has been employed. Following the same notation adopted in Section 2, each solution is described by a matrix with one row per group plus one final row, each row being zero-padded to the size of the largest group. The first rows are made up of the permutation vectors indicating the sequence of jobs within each group, while the last row is the permutation vector Ω representing the sequence of groups to be processed. Each row of the partitioned matrix codes a specific schedule concerning the problem at hand and, it is worth pointing out, is independent of the other sequences; hereinafter each row will be denoted as a subchromosome. Thus, each of the first subchromosomes corresponds to the sequence of jobs scheduled within a group, while the last subchromosome identifies the sequence Ω of groups. For the sake of clarity, a feasible solution for a problem with five groups could be represented by a chromosome whose subchromosomes 1 to 5 hold the schedules of jobs within each group (i.e., schedule 3-1-2 for group 1, schedule 2-5-1-4-3 for group 2, schedule 2-1 for group 3, schedule 3-4-1-2 for group 4, and schedule 1-2-3 for group 5), while the last subchromosome fixes the sequence of groups Ω = 5-1-3-2-4. All the entries equal to zero take part neither in the solution decoding nor in the genetic evolutionary process. Once the problem encoding is defined, the fitness function of the individuals pertaining to the genetic population may be computed.
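Assuming zero-padding for groups shorter than the largest one, the matrix encoding can be sketched as follows (the helper name is illustrative), using the five-group example above:

```python
def build_chromosome(job_seqs, group_seq):
    """Matrix encoding sketch: one zero-padded row per group's job sequence,
    plus a final row holding the group sequence."""
    width = max(len(row) for row in [*job_seqs, group_seq])
    pad = lambda row: list(row) + [0] * (width - len(row))
    return [pad(row) for row in job_seqs] + [pad(group_seq)]

# The example from the text: five groups plus the group-sequence row
chrom = build_chromosome(
    [[3, 1, 2], [2, 5, 1, 4, 3], [2, 1], [3, 4, 1, 2], [1, 2, 3]],
    [5, 1, 3, 2, 4])
# chrom[0] -> [3, 1, 2, 0, 0]; chrom[-1] -> [5, 1, 3, 2, 4]
```

The zeros are pure padding, consistent with the text: they are ignored both by the decoder and by the genetic operators.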

3.2. Crossover Operator

Through the crossover operator, the genetic material of two properly selected parent chromosomes is recombined in order to generate two offspring. The selection mechanism employed by the proposed GHA is the well-known roulette wheel scheme [22], which assigns to each solution a probability of being selected that is inversely proportional to its makespan value. Once two parent chromosomes have been selected, each couple of subchromosomes belonging to the parent solutions undergoes crossover according to an a priori fixed probability. Two crossover operators have been adopted to recombine alleles within each couple of subchromosomes: they are denoted as position-based crossover (PBC) and two-point crossover (TPC), respectively. Both crossover operators have been widely adopted in the literature within GAs applied to combinatorial problems [23]. PBC generates offspring by considering the relative order in which some alleles are positioned within the parents. It works on a couple of subchromosomes as follows: (1) one or more alleles are randomly selected; (2) the selected alleles of parent 1 (P1) are reordered in offspring 1 (O1) in the same order as they appear within parent 2 (P2); (3) the remaining elements are positioned in the sequence by copying the unselected alleles directly from parent 1. The same procedure is applied to parent 2 to obtain offspring 2 (O2). Figure 1 shows the implementation of the PBC on a couple of parents where the alleles in four positions have been selected. As far as the TPC method is concerned, two positions are randomly selected and each parent subchromosome is divided into three blocks of alleles: both the head and tail blocks are copied directly into the corresponding offspring, while the alleles belonging to the middle block are reordered within the offspring in the same order as they appear in the other parent subchromosome (see Figure 2).
A “fair coin toss” probability equal to 0.5 has been chosen for selecting either PBC or TPC crossover.
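A minimal sketch of the two crossover operators on a single pair of subchromosomes might look as follows (each call produces one offspring; the symmetric call with swapped arguments yields the second):

```python
import random

def pbc(p1, p2):
    """Position-based crossover: alleles at randomly picked positions of p1
    are reordered as they appear in p2; the rest are copied from p1."""
    k = len(p1)
    picked = set(random.sample(range(k), random.randint(1, k)))
    selected = {p1[i] for i in picked}
    reordered = iter(a for a in p2 if a in selected)
    return [next(reordered) if i in picked else p1[i] for i in range(k)]

def tpc(p1, p2):
    """Two-point crossover: head and tail blocks kept from p1, the middle
    block reordered as its alleles appear in p2."""
    i, j = sorted(random.sample(range(len(p1) + 1), 2))
    middle = set(p1[i:j])
    return p1[:i] + [a for a in p2 if a in middle] + p1[j:]
```

Both operators return valid permutations by construction, since every allele is placed exactly once.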

3.3. Mutation Operator

After a new population has been generated from the previous one by means of crossover, the mutation operator is applied according to an a priori fixed probability. If mutation occurs, a chromosome is randomly chosen from the population; within such a chromosome, a subchromosome is randomly selected for mutation. Two kinds of operators have been adopted in the present research: an allele swapping operator (ASO), which exchanges two randomly selected alleles of the subchromosome, and a block swapping operator (BSO), which performs a block exchange (see Figure 3). To avoid any loss of the current best genetic information, the survival of the two current fittest individuals within the population is ensured by an elitist strategy. A “fair coin toss” probability equal to 0.5 has been chosen for selecting either the ASO or the BSO mutation operator.
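The two mutation operators can be sketched as follows (names are illustrative; subchromosomes are assumed to hold at least two alleles so that a swap is feasible):

```python
import random

def aso(sub):
    """Allele swapping operator: exchange two randomly chosen alleles."""
    s = list(sub)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def bso(sub):
    """Block swapping operator: exchange two equal-sized, non-overlapping
    blocks of alleles."""
    s = list(sub)
    size = random.randint(1, len(s) // 2)
    i = random.randrange(0, len(s) - 2 * size + 1)
    j = random.randrange(i + size, len(s) - size + 1)
    s[i:i + size], s[j:j + size] = s[j:j + size], s[i:i + size]
    return s
```

Both operators preserve the permutation property of the subchromosome, so no repair step is needed after mutation.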

3.4. Diversity Operator

A population diversity control technique has been embedded within the proposed optimization procedure in order to mutate identical chromosomes whose number exceeds a preselected threshold. In the present research a threshold equal to 2 has been adopted, thus avoiding more than two identical solutions within the current population.
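A possible implementation of this duplicate-control rule is sketched below (the paper does not specify how duplicates are detected; here chromosomes are hashed as tuples, and a mutated copy is not rechecked for new collisions):

```python
from collections import Counter

def enforce_diversity(pop, mutate, max_copies=2):
    """Mutate any chromosome whose identical copies exceed max_copies."""
    seen = Counter()
    out = []
    for chrom in pop:
        # hashable key: works for flat lists and for list-of-lists matrices
        key = tuple(map(tuple, chrom)) if isinstance(chrom[0], list) else tuple(chrom)
        seen[key] += 1
        out.append(mutate(chrom) if seen[key] > max_copies else chrom)
    return out
```

With the threshold of 2 used in the paper, the third and subsequent copies of any solution are forcibly mutated.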

3.5. Local Search and Termination Rule

In order to improve the performance of the proposed metaheuristic procedure, a local search algorithm has been embedded within the evolutionary optimization strategy of the proposed GHA. Such a procedure operates only on a subpopulation of the best individuals obtained after each generation. For each selected individual, a sample of neighbour solutions is generated by modifying the sequence Ω of groups, that is, the last subchromosome of the individual; in the present research the sample size has been set equal to 4. In each new sequence the groups are randomly reordered, each group being drawn with a different probability depending on its initial position. The new sequence replaces the starting one if it leads to a better makespan value. This procedure is executed for all the individuals originally selected; then, the newly obtained population undergoes the next generation cycle.
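The paper does not give the exact position-dependent probability law for the reordering; one plausible sketch draws earlier groups with higher weight and keeps any improving neighbour (all names and the weighting scheme are our assumptions):

```python
import random

def biased_reorder(groups):
    """Draw a new group sequence; each draw is biased by current position
    (earlier groups are more likely to be drawn first)."""
    remaining = list(groups)
    out = []
    while remaining:
        weights = [1.0 / (i + 1) for i in range(len(remaining))]
        pick = random.choices(range(len(remaining)), weights=weights)[0]
        out.append(remaining.pop(pick))
    return out

def local_search(chrom, makespan, tries=4):
    """Sample `tries` biased reorderings of the group row (last subchromosome)
    and keep the best improving candidate, if any."""
    best = chrom
    for _ in range(tries):
        cand = chrom[:-1] + [biased_reorder(chrom[-1])]
        if makespan(cand) < makespan(best):
            best = cand
    return best
```

The default of four trials mirrors the neighbourhood sample size reported above; the incumbent is returned unchanged when no candidate improves it.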

The termination rule of the proposed GHA consists of a fixed budget of CPU seconds, similarly to what is done by Naderi and Salmasi [20].

4. Computational Experiments and Results

In order to evaluate the efficiency of the proposed GHA algorithm, a benchmark of problems has been generated using the scheme provided by Salmasi et al. [18]. The processing times of each job on each machine have been randomly drawn from the range [1, 20]. The instances are generated according to three factors, each varied over three levels, namely, the number of groups, the number of machines, and the number of jobs within each group. The levels of setup times are three in the case of problems with 2 and 6 machines and nine in the case of problems with 3 machines. The values corresponding to each level of the benchmark are shown in Tables 1 and 2, where the interval notation denotes a value drawn from a uniform distribution between the given bounds.

Two distinct replicates have been randomly generated for each problem of the proposed benchmark.

Therefore, a total of 54 (two machines) + 162 (three machines) + 54 (six machines) = 270 separate instances have been created.

The results of the proposed GHA have been compared with those obtained by solving the mathematical model and with two effective algorithms from the relevant literature on flow shop group scheduling. The first algorithm is the hybrid particle swarm optimization (hereinafter coded as PSH) devised by Hajinejad et al. [19]. Such a method employs a real number matrix-encoding scheme and makes use of a properly developed transformation procedure able to convert the components of each solution to integer numbers, so as to obtain the sequence of groups and of jobs within groups to be scheduled. Furthermore, the authors equipped the algorithm with a neighborhood search strategy, called individual enhancement, aimed at enhancing the search by balancing exploration and exploitation power.

The second algorithm is a metaheuristic procedure hybridizing genetic and simulated annealing algorithms (hereinafter coded as GSA) proposed by Naderi and Salmasi [20]. Similarly to the proposed GHA, this algorithm works with an integer matrix-encoding scheme. It employs a twofold optimization strategy: a genetic algorithm is used to find the sequence of groups, while a simulated annealing-based local search engine drives the search towards better job sequences.

The overall set of instances has been solved by means of the three optimization procedures to be compared, namely, GHA, PSH, and GSA. All the heuristic algorithms have been coded in the MATLAB language; the mathematical model has been solved through the MILP solver embedded in the ILOG CPLEX commercial tool. The optimization procedures were executed on a 2 GB RAM virtual machine hosted on a workstation powered by two quad-core 2.39 GHz processors. The stopping criterion was set to a fixed amount of CPU seconds for all the tested algorithms. The MILP solver was stopped after 3,600 s of CPU time: the exact solutions are those obtained within this limit and correspond to a subset of 130 instances, namely, the first 30 instances of the problems with 2 and 6 machines and the first 70 instances of the problems with 3 machines.

The values of the learning rate are 90% and 80%, which correspond to learning indexes of −0.152 and −0.322, respectively, according to Biskup's [2] model. Thus, a total of (270 (instances) × 3 (algorithms) + 130 (mathematical model)) × 2 (learning indexes) = 1880 runs have been taken into account. The key performance indicator used to compare the alternative metaheuristics is the relative percentage deviation (RPD), calculated as RPD = 100 × (ALGsol − BESTsol)/BESTsol, where ALGsol is the makespan provided by a given algorithm on a certain instance and BESTsol is the exact solution or the lowest makespan value among those obtained by the tested optimization procedures.
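The RPD computation, matching the formula above, is straightforward:

```python
def rpd(alg_sol, best_sol):
    """Relative percentage deviation of an algorithm's makespan from the
    best known (or exact) solution for the same instance."""
    return 100.0 * (alg_sol - best_sol) / best_sol

# An algorithm returning 110 against a best-known makespan of 100 deviates by 10%.
```

A value of 0 means the algorithm matched the best available solution; larger values indicate a proportionally worse makespan.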

Before starting the experimental campaign, a preliminary tuning of the GHA algorithm was performed to determine the best parameter configuration. Table 3 reports the selected values.

Tables 4, 5, and 6 show the numerical results of the tested algorithms in terms of RPD values, for 2, 3, and 6 machines, respectively. Every table refers to a given number of machines and to the two levels of learning index, equal to −0.152 and −0.322, respectively. The bold values indicate the minimum value reached by the algorithms; values both italic and bold denote the aforementioned BESTsol. If the minimum value is obtained by only one algorithm, then it is underlined.

At the bottom of each table, four performance indicators are reported to evaluate the effectiveness of the algorithms:
(i) RPDaverage, the average value of all RPDs;
(ii) the number of times each optimization procedure reaches the best solution;
(iii) the number of times an algorithm reaches the global minimum;
(iv) the number of times the algorithm is the best one among the three metaheuristics.

The results in Table 4, which refer to the benchmark with 2 machines, show the effectiveness of both GHA and PSH in solving the problem at hand, with a superiority of the proposed genetic hybrid algorithm. In fact, when the learning index is −0.152, GHA assures the lowest RPDaverage, equal to 0.12, and reaches the best solution in 85% of the test cases; 25% of the GHA solutions match the absolute minima of the problem, and GHA is the best among the three algorithms in 24% of the cases. The performances of GHA and PSH are rather comparable when the learning index is −0.322, with the exception of the RPDaverage indicator, as the PSH algorithm reaches 0.25 while GHA reaches 0.35.

Tables 5 and 6, which hold the results of the benchmarks with 3 and 6 machines, respectively, confirm the findings discussed above: GHA and PSH perform significantly better than GSA, and GHA outperforms PSH under all the performance indicators.

A slight difference among the three algorithms can be observed in terms of the number of exact solutions reached, as all of them can easily attain the exact solution due to the small dimension of the solution space.

The learning effect strongly affects the RPDaverage indicator, particularly in the benchmark with 3 machines; the other performance measures seem to be less influenced by it.

In order to draw statistical conclusions regarding the observed differences among the tested algorithms, an ANOVA analysis has been performed through the Design Expert 7.0.0 commercial tool, calculating LSD intervals at a 95% confidence level for the RPDaverage performance measure connected to each optimization procedure. The analysis has been carried out for each problem scenario at a varying number of machines, considering the mean of the RPDaverage values. The corresponding charts are reported in Figures 4, 5, and 6.

All the charts clearly show the superiority of the GHA and PSH algorithms compared to GSA. In Figure 4, where 2 machines are taken into account, even though the mean RPDaverage related to GHA is lower than that obtained by the PSH procedure, such a difference cannot be considered statistically significant, as the LSD intervals of the two algorithms overlap; the narrow difference in performance between the two algorithms may be ascribed to the low computational complexity of the instances. As for the benchmarks with 3 and 6 machines, the superiority of the proposed GHA approach is clear from a statistical viewpoint, as no overlap exists between its LSD interval and those of PSH and GSA.

5. Conclusions

In this paper the scheduling of groups of jobs in a flow shop environment managed by a workforce endowed with learning abilities has been considered (FSDGSLE). The set-up times of the scheduled groups are influenced by the reciprocal position of the groups in the sequence and by the workforce ability. The jobs are automatically processed on every machine of the flow shop; therefore the processing times are not influenced by the learning effect of the operators. The objective of the scheduling is the minimization of the completion time. The operators are not a critical resource for the scheduling issue under investigation. A new mathematical model has been proposed to optimally solve small- and medium-sized instances of the FSDGSLE scheduling problem. Due to the large computational time required to cope with large-sized instances, a genetic hybrid algorithm (GHA) has been proposed. A comparison campaign based on three separate benchmarks drawn from the literature, involving two-, three- and six-machine problems, has been carried out in order to test the performance of GHA against the two latest metaheuristic procedures presented in the literature in the field of FSDGS scheduling problems. An ANOVA analysis focusing on a statistical validation of the obtained outcomes has been performed. The obtained numerical results highlight the effectiveness of GHA in approaching the scheduling problem at hand, regardless of the specific class of problems analyzed. Further research should involve other variants of the flow-shop group scheduling issue, especially those inspired by real-world applications. For instance, it would be interesting to investigate manufacturing systems wherein workers have different learning abilities or machines are affected by deterioration effects.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.