Advances in Mechanical Engineering
Volume 2013 (2013), Article ID 124903, 13 pages
http://dx.doi.org/10.1155/2013/124903
Research Article

An Improved Genetic-Simulated Annealing Algorithm Based on a Hormone Modulation Mechanism for a Flexible Flow-Shop Scheduling Problem

1College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics & Astronautics, Nanjing 210016, China
2Jiangsu Key Laboratory of Precision and Micro-Manufacturing Technology, Nanjing 210016, China

Received 22 April 2013; Revised 10 July 2013; Accepted 14 July 2013

Academic Editor: Shengyong Chen

Copyright © 2013 Min Dai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A flexible flow-shop scheduling (FFS) problem with nonidentical parallel machines for minimizing the maximum completion time, or makespan, is a well-known combinatorial problem. Since the problem is known to be strongly NP-hard, it can be addressed either by exact optimization approaches for restricted cases or by approximation methods. In this paper, an improved genetic-simulated annealing algorithm (IGAA), which combines a genetic algorithm (GA) based on an encoding matrix with a simulated annealing algorithm (SAA) based on a hormone modulation mechanism, is proposed to achieve optimal or near-optimal solutions. The novel hybrid algorithm is designed to escape local optima and to explore the solution space more thoroughly. To evaluate the performance of IGAA, computational experiments are conducted and compared with results generated by different algorithms. The experimental results clearly demonstrate that the improved metaheuristic algorithm performs considerably well in terms of solution quality and that it outperforms several other algorithms.

1. Introduction

A flexible or hybrid flow-shop scheduling problem is a further development of the classical flow-shop scheduling problem [1]. FFS problems arise widely in process industries and flexible manufacturing systems, such as the iron and steel, printed circuit board, textile, and chemical industries [2]. In the literature, most studies of FFS problems concentrate on multiple stages with identical parallel machines that have the same processing capacity [3–6]. However, because newer or more modern facilities often run side by side with older and less efficient ones, practical production in a real manufacturing system is commonly executed on nonidentical parallel machines. In other words, the processing time for the same operation differs across unrelated parallel machines. In this paper, the FFS with nonidentical parallel machines is considered; that is, there are different parallel machines at each production stage, and each job can be processed at different speeds at a production stage.

Scheduling in a flexible flow shop is NP-hard. It has been proved that the FFS with two production stages, where one machine is in the first production stage and several machines are in the second production stage, is NP-hard [7]. Therefore, many variants of algorithms are employed to solve such problems. In terms of algorithmic characteristics, there are three categories of methods for solving FFS problems, that is, exact methods, heuristics, and metaheuristic algorithms [8]. Exact methods, which have focused on simplified versions of FFS problems, can obtain provably best solutions using mathematical models. Branch and bound is the preferred approach of this type. A branch and bound approach for FFS problems with the objective of minimizing the makespan was first proposed by Salvador [9]. Later, various branch and bound algorithms for flow-shop problems with parallel machines have been developed [10–13]. These exact algorithms can guarantee optimal solutions for small-size problems. However, a major disadvantage of exact optimization approaches to production scheduling is their difficulty in reaching good solution quality for large-scale scheduling problems.

On the other hand, when FFS problems grow in complexity and data volume, a variety of approximate methods, that is, heuristic and metaheuristic algorithms, are usually employed to explore the solution space. These algorithms are used to obtain optimal or near-optimal solutions with considerably less running time. Dispatching rules are one of the most prominent paradigms of heuristic algorithms. For instance, Sriskandarajah and Sethi [14] presented heuristic algorithms based on dispatching rules for a flexible flow-shop problem with the minimum makespan criterion. Verma and Dessouky [15] studied a multistage problem with identical jobs and uniform parallel machines to minimize the makespan and investigated the performance of dispatching rules. In further studies, researchers have developed strategies to enhance the performance of heuristics, known as metaheuristic algorithms. Metaheuristics such as the genetic algorithm (GA) and the simulated annealing algorithm (SAA) have been successfully used for FFS problems. Oĝuz and Ercan [16] described a GA with a new crossover operator to solve hybrid flow-shop scheduling with multiprocessor task problems. Kahraman et al. [17] developed an efficient GA for hybrid flow-shop scheduling problems with the objective of minimizing the makespan. Wang et al. [18] presented an SAA for a hybrid flow-shop scheduling problem with multiprocessor tasks under the makespan criterion. Mirsanei et al. [19] provided a novel SAA with a new neighborhood function to obtain a better makespan in a hybrid flow-shop scheduling problem with identical parallel machines. Other metaheuristic algorithms have also been applied to FFS, such as ant colony optimization (ACO) [20], neural networks (NN) [21], and particle swarm optimization (PSO) [22]. In addition, combined metaheuristic algorithms, which absorb the advantages of more than one algorithm, are among the most important approaches. Several studies have integrated simulated annealing with genetic algorithms and have reported promising results for scheduling problems [23–25]. Most of these studies demonstrate that heuristic and metaheuristic approaches help to reduce the gap between scheduling theory and practical production scheduling, but such problems have not yet been solved satisfactorily.

To the best of our knowledge, combined metaheuristics of GA and SAA for this class of FFS problems have not been investigated. As a step towards reducing the gap between theory and practice, this paper focuses on the development and performance of an improved genetic-simulated annealing algorithm (IGAA) for minimizing the makespan in a flexible flow shop with nonidentical parallel machines. Because the FFS problem is NP-hard, an IGAA with new crossover and mutation operations is developed to obtain optimal or near-optimal solutions for the makespan. More specifically, the crossover operation, which is based on an encoding matrix, is designed to create legal schedules and to ensure population diversity. The mutation operation, which is based on an improved SAA, helps to avoid falling into a local optimum.

The remainder of this paper is organized as follows. In Section 2, the FFS problem is described, and a linear mixed integer programming model with the objective of minimizing the makespan in the flexible flow-shop environment is constructed. In Section 3, the metaheuristic approach IGAA for solving the scheduling optimization problem is presented. In Section 4, computational experiments are conducted to test the performance of IGAA for FFS problems. In Section 5, conclusions are drawn and future work is outlined.

2. The Mathematical Model for an FFS Problem

2.1. The Problem Description

The FFS is a multistage production process composed of two or more production stages in series. There is at least one machine tool at each production stage, and at least one stage has more than one machine tool. All jobs have to go through every production stage in the same order. The FFS has infinite intermediate storage between successive production stages [26]. An instance of the FFS consists of a set of jobs and a set of machine tools. Each job has a corresponding processing time on each machine tool at a given speed. All jobs are processed sequentially and nonpreemptively through the machine stages, as illustrated in Figure 1. The scheduling objective of the FFS is to assign jobs to machine tools at the corresponding stages and to determine the processing sequence of operations on each machine in order to minimize the maximum completion time, that is, the makespan.

Figure 1: A flexible flow-shop layout.

The constraints of the FFS are as follows.
(1) One job can be processed by only one machine at each production stage.
(2) One machine can process at most one operation at a time.
(3) For the first stage, all jobs are available at time zero.
(4) There are no precedence relationships between operations of different jobs, but there are precedence relationships between different operations of one job.
(5) Preemption is not allowed; that is, once an operation is started, it must be finished without interruption.
(6) Every operation of one job is executed on one machine at a given speed at every stage.
(7) For the same operation, the processing time differs on different unrelated parallel machines within a production stage.
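To make the problem data and the objective concrete, the following minimal sketch (Python, purely for illustration; the paper's own experiments were implemented in Matlab, and the instance data and function name are hypothetical) represents an FFS instance with nonidentical parallel machines and evaluates the makespan of one simple, greedily decoded schedule that respects the constraints listed above; machine speeds are omitted for brevity.

# A tiny flexible flow-shop instance: 3 jobs, 2 stages,
# with 2 and 3 nonidentical parallel machines, respectively.
# proc_time[job][stage][machine] = processing time (differs per machine).
proc_time = [
    [[4, 6], [5, 3, 7]],   # job 0
    [[2, 3], [6, 4, 5]],   # job 1
    [[5, 4], [3, 5, 2]],   # job 2
]
n_jobs = len(proc_time)
n_stages = 2
machines_per_stage = [2, 3]

def greedy_makespan(job_order):
    """Assign each operation to the parallel machine that finishes it earliest,
    processing the jobs stage by stage in the given order (a simple decoder,
    not the paper's scheduling procedure)."""
    machine_free = [[0] * m for m in machines_per_stage]  # next free time of each machine
    job_ready = [0] * n_jobs                              # completion time of a job's previous stage
    for s in range(n_stages):
        for j in job_order:
            best_finish, best_m = None, None
            for m in range(machines_per_stage[s]):
                start = max(machine_free[s][m], job_ready[j])
                finish = start + proc_time[j][s][m]
                if best_finish is None or finish < best_finish:
                    best_finish, best_m = finish, m
            machine_free[s][best_m] = best_finish
            job_ready[j] = best_finish
    return max(job_ready)  # makespan = completion time of the last job

print(greedy_makespan([0, 1, 2]))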

2.2. The Mathematical Model

The following parameters and decision variables are associated with the FFS problem:
(i) the set of spindle speeds for one machine tool;
(ii) the set of jobs;
(iii) the set of production stages that all jobs have to pass through;
(iv) the set of machine tools at each production stage;
(v) a very large positive number;
(vi) the processing time when a job at a production stage is processed on a machine tool at a given speed;
(vii) the starting time when a job at a production stage is processed on a machine tool;
(viii) the completion time when a job at a production stage is processed on a machine tool;
(ix) the makespan of the schedule, that is, the completion time of the last job in the schedule;
(x) the due date;
(xi) a binary variable that is set to 1 if a job at a production stage is processed on a machine tool at a given speed, and to 0 otherwise;
(xii) a binary variable that is set to 1 if one job precedes another job at a production stage on a machine tool, and to 0 otherwise.
The following is a linear mixed integer programming model that determines a production sequence for the jobs on the machines in order to minimize the makespan. Constraints (2)-(3) define that the makespan, which must not exceed the due date, is equal to the completion time of the last job in the schedule. Constraint (4) states that one job can be assigned to only one machine tool at each production stage; that is, a job cannot be executed on more than one machine tool at any time. Constraint (5) imposes that one job is processed on one machine tool at one chosen speed. Constraint (6) states that the completion time of a job at each production stage is the sum of its starting time and processing time. Constraint (7) gives the precedence constraints between the operations of a job; that is, an operation of the job cannot be processed at the next production stage until it has been finished at the current stage. Constraint (8) ensures that a machine can process the next job only after it has finished the current one; that is, it does not allow more than one job to be executed on a machine tool at the same time.
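The model's equations were not reproduced in this version of the article. A mixed integer formulation consistent with the verbal description of constraints (2)-(8) can be sketched as follows; the symbols used here ($x_{ijks}$ for the machine/speed assignment, $y_{ii'jk}$ for sequencing, $S_{ij}$ and $C_{ij}$ for starting and completion times, $p_{ijks}$ for processing times, $D$ for the due date, $q$ for the number of stages, and the large constant $L$) are introduced for illustration and are not necessarily the paper's original notation.

\[
\begin{aligned}
\min\;& C_{\max}\\
\text{s.t.}\;& C_{\max} \ge C_{iq},\qquad C_{\max} \le D && \forall i \in J,\\
& \sum_{k \in M_j}\sum_{s \in V} x_{ijks} = 1 && \forall i \in J,\ j = 1,\dots,q,\\
& C_{ij} = S_{ij} + \sum_{k \in M_j}\sum_{s \in V} p_{ijks}\,x_{ijks} && \forall i,\ j,\\
& S_{i,j+1} \ge C_{ij} && \forall i,\ j < q,\\
& S_{i'j} \ge C_{ij} - L\Bigl(3 - y_{ii'jk} - \textstyle\sum_{s} x_{ijks} - \sum_{s} x_{i'jks}\Bigr) && \forall i \ne i',\ j,\ k,\\
& S_{ij} \ge C_{i'j} - L\Bigl(2 + y_{ii'jk} - \textstyle\sum_{s} x_{ijks} - \sum_{s} x_{i'jks}\Bigr) && \forall i \ne i',\ j,\ k.
\end{aligned}
\]

Here the single assignment constraint covers the paper's constraints (4) and (5) in one sum over machines and speeds, and the final pair of disjunctive constraints enforces constraint (8) through the big constant $L$.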

3. IGAA Design for an FFS

In this section, an improved hybrid metaheuristic algorithm is proposed for the flexible flow-shop scheduling problem. Many metaheuristic algorithms have been applied to the FFS, such as the genetic algorithm (GA), the simulated annealing algorithm (SAA), particle swarm optimization (PSO), and ant colony optimization (ACO). Among these approaches, GA can quickly approach an optimal solution, but a serious shortcoming is that it is liable to be trapped in a local optimum, that is, premature convergence. Fortunately, SAA has the ability to jump out of local optima and continue searching for the best solution. Therefore, this paper incorporates the strengths of a simulated annealing algorithm into a genetic algorithm. The GA is used to rapidly search the solution space for an optimal or near-optimal solution, and the SAA is then used to seek a better solution starting from it. Furthermore, because of the low search efficiency of the SAA, a novel annealing rate function, inspired by a hormone modulation mechanism, is adopted to further improve the efficiency of the exploration. The proposed improved genetic-simulated annealing algorithm (IGAA) for the FFS is illustrated in Figure 2.

Figure 2: The flow chart of an improved genetic-simulated annealing algorithm.
3.1. The Matrix Encoding and Decoding Representation

In general, each chromosome, which consists of a series of genes, is a bit string using binary or real-value coding. The length of the chromosome has a direct influence on the genetic operations and the running time: the longer the chromosome, the more complex the genetic operations become and the longer the running time is. Based on the elements of a matrix and their positions, which describe the constraints between jobs, a matrix encoding approach for the FFS is presented here. In this representation, the encoding matrix is treated as a whole instead of being flattened into a bit string. A one-dimensional chromosome is thus replaced by a multidimensional chromosome, which is defined as a matrix-chromosome. The advantage of the matrix-chromosome is that it is convenient to select, cross, and mutate. Moreover, it ensures the completeness and validity of the offspring; hence, each matrix-chromosome represents a legal and feasible schedule. Suppose that a set of jobs is to be processed on a set of machine tools, each job is required to pass through all production stages, and there are several unrelated parallel machine tools at each production stage. A matrix-chromosome is then constructed as follows:

The matrix elements are random real numbers. The integer part of an element indicates the identifier of the machine tool that handles the corresponding process of the corresponding job. In the decoding step, if several elements for the same process share the same integer part, several jobs are waiting to be processed on the same machine tool for that process. When the process is the first one, these jobs are sequenced in ascending order of their element values. When the process number is greater than one, these jobs are sequenced by the completion times of their previous process; in other words, the earlier the previous process finishes, the earlier the next process is started. If the completion times are equal, the jobs are sequenced in ascending order of their element values. For example, assume that 3 jobs are scheduled over 3 production stages in a flexible flow shop, each job has 3 processes, and the numbers of parallel machines at the stages are 3, 2, and 2, respectively. This example flexible flow-shop scheduling problem is given in Table 1.

Table 1: A flexible flow-shop scheduling problem.

A matrix-chromosome based on the encoding rule is generated randomly using the Matlab simulation software, as shown in matrix (10), and is converted into its integer form in matrix (11). According to matrix (10), the relationship between jobs and machine tools is as follows:
(i) the three processes of job 1 are processed on M1, M5, and M7, respectively;
(ii) the three processes of job 2 are processed on M3, M4, and M7, respectively;
(iii) the three processes of job 3 are processed on M3, M5, and M6, respectively.
Then, on the basis of the matrix decoding rule, the sequence of jobs on each machine tool is obtained:
(i) M1: the first process of job 1;
(ii) M2: idle (no operation assigned);
(iii) M3: the first process of job 2, then the first process of job 3;
(iv) M4: the second process of job 2;
(v) M5: the second process of job 1, then the second process of job 3;
(vi) M6: the third process of job 3;
(vii) M7: the third process of job 1, then the third process of job 2.
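To make the decoding rule concrete, the following minimal sketch (Python, purely for illustration; the paper's implementation used Matlab, and all names are hypothetical) derives the machine assignment from the integer parts of a randomly generated matrix-chromosome and orders ties on a shared first-stage machine by the element values, as described above. Sequencing at later stages would additionally use the completion time of the previous process.

import math
import random

n_jobs, n_stages = 3, 3
machines_per_stage = [3, 2, 2]
first_machine_id = [1, 4, 6]   # stage 1 -> M1..M3, stage 2 -> M4..M5, stage 3 -> M6..M7

# Each element encodes "machine identifier + fractional tie-breaker" for one (job, stage) pair.
chromosome = [[first_machine_id[s] + random.randint(0, machines_per_stage[s] - 1) + random.random()
               for s in range(n_stages)] for _ in range(n_jobs)]

# Machine assignment: the integer part of each element.
assignment = [[math.floor(chromosome[j][s]) for s in range(n_stages)] for j in range(n_jobs)]

# Sequencing on shared machines at the first stage: ascending element value.
for machine in sorted({assignment[j][0] for j in range(n_jobs)}):
    queue = sorted((j + 1 for j in range(n_jobs) if assignment[j][0] == machine),
                   key=lambda job: chromosome[job - 1][0])
    print(f"M{machine}: jobs {queue} in this order (first process)")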

The matrix encoding and decoding scheme along with the Gantt chart for a possible solution is depicted in Figure 3.

Figure 3: The Gantt chart of the flexible flow-shop schedule.
3.2. Fitness Function

The genetic-simulated annealing algorithm assesses each solution on the basis of a fitness function. The greater the fitness of an individual, the higher its chance of being selected into the next generation. In general, the fitness is related to the objective function. As the objective is to minimize the makespan, the objective function is transformed into a fitness function in which a smaller makespan yields a larger fitness value.
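The fitness formula itself is missing from this version of the text. A common transformation consistent with the description (a smaller makespan gives a larger fitness) is the reciprocal of the makespan; the notation below is illustrative rather than the paper's original.

\[
f(\pi) = \frac{1}{C_{\max}(\pi)},
\]

where $\pi$ denotes a candidate schedule (matrix-chromosome) and $C_{\max}(\pi)$ its makespan.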

3.3. Selection Scheme

The selection operator has a significant impact on the performance of the GA. On the basis of the fitness of the matrix-chromosomes, the selection operator chooses the matrix-chromosomes that are used for crossover and mutation; the selected chromosomes are not necessarily those with the best fitness values. In the literature, several notable selection schemes have been developed and applied to scheduling problems to explore promising regions of the solution space. Here, a “2/4 selection” scheme [27] is adopted to preserve the fittest matrix-chromosomes at each generation while also maintaining the diversity of the matrix population.

3.4. Crossover Operator Based on an Encoding Matrix

The crossover operator is the main way to create a new population. A crossover operation randomly picks two parents with the crossover probability [28]; in other words, if the crossover probability is greater than a uniformly distributed random number generated on the interval 0 to 1, the crossover operation is applied to the parents. A large number of crossover approaches, such as one-point crossover, two-point crossover, multipoint crossover, uniform crossover, and nonuniform crossover, have been reported widely in the literature. In this study, a novel crossover operator based on the two-point crossover technique, namely, the two-row/column crossover operator (TCO), is introduced for the proposed algorithm. TCO can be viewed as an extension of the two-point crossover operation. TCO not only always produces feasible permutations but also effectively increases the diversity of the population and further enhances the ability to explore the solution space. Provided that each matrix-chromosome element satisfies the encoding condition, TCO yields legal offspring and requires no repair procedure. A permutation representation of TCO is described in the following.

A column permutation operation is considered between two matrix-chromosomes, that is, Parent A and Parent B. Two different cut-off columns are randomly picked between the first and the last column. New matrix-chromosomes are generated by exchanging the zones between the cut-off columns of the paired matrix-chromosomes. In the example given in Figure 4, the codes between the two cut-off columns of the matrix-chromosomes are exchanged.

Figure 4: Two-column crossover operator (TCO).
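The following minimal sketch (Python with NumPy, purely for illustration; the paper's implementation used Matlab, and the function name and example data are hypothetical) shows the column-swap idea behind TCO.

import numpy as np

rng = np.random.default_rng()

def two_column_crossover(parent_a, parent_b):
    """Exchange the block of columns between two randomly chosen cut-off
    columns of two matrix-chromosomes (a sketch of the TCO idea)."""
    n_cols = parent_a.shape[1]
    c1, c2 = sorted(rng.choice(n_cols, size=2, replace=False))
    child_a, child_b = parent_a.copy(), parent_b.copy()
    child_a[:, c1:c2 + 1] = parent_b[:, c1:c2 + 1]
    child_b[:, c1:c2 + 1] = parent_a[:, c1:c2 + 1]
    return child_a, child_b

# Two parent matrix-chromosomes for the 3-job, 3-stage example
# (stage 1 uses machines M1-M3, stage 2 uses M4-M5, stage 3 uses M6-M7).
A = np.array([[1.2, 5.7, 7.1], [3.4, 4.8, 7.6], [3.9, 5.3, 6.2]])
B = np.array([[2.5, 4.1, 6.8], [1.7, 5.9, 7.3], [2.2, 4.6, 7.9]])
child_a, child_b = two_column_crossover(A, B)

Because each column refers only to the machines of its own production stage, swapping whole columns keeps every element within its valid machine range, which is why the offspring need no repair procedure.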

3.5. Mutation Operator Based on an Improved SAA Operation

As the crossover operation cannot introduce new information into solutions, a mutation operation applied with some probability to each segment is required in order to generate solutions with greater fitness. A mutation operator involves two important issues. One is the proportion of the population undergoing mutation, that is, the mutation probability: if the mutation probability is greater than a uniformly distributed random number generated on the interval 0 to 1, the mutation operation is executed. The other is the mutational strength, that is, the perturbation produced in a matrix-chromosome. There are a variety of approaches for the mutation operation, such as uniform and nonuniform mutation [29], power mutation [30], and a mutation operator based on the immunity operation [31].

As the general mutation operator is inefficient and insufficient for solving complex global optimization problems, a mutation operation based on an improved SAA is proposed for the IGAA. When the temperature is high enough, the search range is very large and the SAA accepts a new state with a fairly large probability; when the temperature is low, the search range becomes very small and the SAA accepts a new state with only a small probability. Thus, the new mutation operation enhances the search ability and search efficiency of the IGAA in the solution space.
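The acceptance rule is not written out in this version of the text; the standard Metropolis criterion conventionally used in simulated annealing, which behaves exactly as just described, is

\[
P(\text{accept}) =
\begin{cases}
1, & \Delta f \le 0,\\
\exp\!\left(-\Delta f / T\right), & \Delta f > 0,
\end{cases}
\]

where $\Delta f$ is the increase in the makespan caused by the candidate move and $T$ is the current temperature.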

As shown in Figure 2, several SAA parameters must be specified for the IGAA, including the initial temperature, the neighborhood structure, the annealing rate, and the termination condition. These factors play a significant role in the performance of the IGAA and should be set carefully, as discussed below.

(1) The initial temperature. The initial temperature should be set to a high enough value to ensure that a vast number of states are accepted at the initial stage. It decreases slowly during the iterations of the algorithm, and as the temperature becomes lower, states are accepted with a smaller probability. In this way the IGAA can avoid falling into a local optimum, and a global optimum or near-optimal solution is finally obtained. Note that the initial temperature must be set appropriately: setting too high a value consumes a large amount of computation time, while setting too low a value causes many states to be rejected. In this paper, the initial temperature of the IGAA is defined in terms of the maximum sum of the processing times of all jobs, the minimum sum of the processing times of all jobs, and the initial acceptance probability Pr.
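The initial temperature expression itself is missing from this version of the text. One common choice consistent with the three quantities just listed is shown below; it is offered as an assumption, not as the paper's exact formula.

\[
T_0 = -\,\frac{P_{\max} - P_{\min}}{\ln(\mathrm{Pr})},
\]

where $P_{\max}$ and $P_{\min}$ are the maximum and minimum sums of the processing times of all jobs and $\mathrm{Pr}$ is the initial acceptance probability; with this choice, an uphill move of magnitude $P_{\max}-P_{\min}$ is initially accepted with probability approximately $\mathrm{Pr}$.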

(2) The neighborhood structure. Neighborhood structures have a direct impact on the efficiency of local search. A neighborhood structure yields a new set of neighbor solutions by changing the order of the operations of a given solution. One of the most effective neighborhood strategies for the SAA in production scheduling is based on the critical path method [32]. A critical path, which is decomposed into a number of blocks, corresponds to one feasible solution. One block represents a maximal sequence of operations that either are processed on the same machine tool or belong to the same job. In this study, moving an operation of a critical block to the beginning or the end of that block is adopted to generate the neighborhood.

(3) The annealing rate function. The performance of the IGAA depends significantly on the annealing rate [33]. In order to enhance the search efficiency of the IGAA, a novel annealing rate method, inspired by a hormone modulation mechanism, is developed. Farhy [34] pointed out that hormone modulation is characterized by monotone and nonnegative regulation functions. A nonlinear control function is therefore employed, namely, the upregulatory or downregulatory Hill function, which depends on a threshold, an independent variable, and a Hill coefficient. The Hill functions reach stability quickly, which keeps the hormone modulation adaptive and stable. If one hormone is controlled by another hormone, the secretion rate of the former is determined by the concentration of the latter, together with a basal secretion term and a constant gain.
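The Hill-function equations were not reproduced in this version of the text. The standard upregulatory and downregulatory Hill functions, consistent with the quantities just described (a threshold $T$, an independent variable $G$, and a Hill coefficient $n \ge 1$), are

\[
F_{\mathrm{up}}(G;T,n) = \frac{G^{n}}{G^{n}+T^{n}}, \qquad
F_{\mathrm{down}}(G;T,n) = \frac{T^{n}}{G^{n}+T^{n}},
\]

both monotone, bounded between 0 and 1, and summing to 1. A secretion law of the form $V_a = V_{a,0} + c\,F(G)$, with basal secretion $V_{a,0}$ and constant $c$, matches the verbal description, though the paper's exact expressions may differ.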

Based on the hormone modulation mechanism described above, the annealing rate function is designed in terms of a small constant, the number of iterations, a Hill coefficient, and the difference between the current temperature and the previous temperature. The annealing rate function is exemplified by the plots in Figure 5 for several values of the Hill coefficient.

Figure 5: Function profiles of the annealing rate.
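The exact annealing rate function is not reproduced in this version of the text. The short sketch below (Python, purely for illustration; the paper's own experiments were implemented in Matlab) shows one plausible Hill-modulated cooling schedule in the spirit of the description: a downregulatory Hill function of the iteration count scales the temperature decrement, so the behavior varies smoothly with the Hill coefficient n. The functional form, the constants, and all names are assumptions, not the paper's formula.

def hill_down(k, threshold, n):
    """Downregulatory Hill function of the iteration count k; values lie in (0, 1)."""
    return threshold ** n / (k ** n + threshold ** n)

def hill_cooling(t0, lam=0.05, threshold=50.0, n=1.5, n_iter=200):
    """Illustrative Hill-modulated cooling: the relative temperature drop at
    iteration k is lam * hill_down(k, ...), large in early iterations and
    decaying later. NOT the paper's exact annealing rate function."""
    temps = [t0]
    for k in range(1, n_iter + 1):
        delta_t = lam * temps[-1] * hill_down(k, threshold, n)
        temps.append(max(temps[-1] - delta_t, 1e-9))
    return temps

schedule = hill_cooling(t0=100.0)
print(schedule[0], schedule[50], schedule[-1])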

(4) The terminating condition. In the SAA, the terminating criterion used to end the annealing procedure consists of the Markov chain stability criterion and the external circulation stopping criterion. In this study, the number of iterations performed at each given temperature determines whether the Markov chain stability criterion is satisfied. When no further improvement in the fitness values can be obtained, the IGAA terminates.

4. Computational Experiments

In this section, the improved genetic-simulated annealing algorithm is used to analyze the flexible flow-shop scheduling problem with the objective of minimizing the makespan. To evaluate the performance of the algorithm, two experiments are conducted. Test problems are characterized by the number of jobs, the number of machines, the number of production stages, and the range of processing times. The simulation is implemented in the Matlab programming language. The experimental tests are performed on a personal computer with an Intel Pentium processor running at 3.20 GHz, 1 GB of memory, and the Windows XP operating system.

4.1. Evaluation of IGAA

In the first experiment, the performance of IGAA is evaluated using the average relative percentage deviation (ARPD) with respect to the best solution found. The ARPD is the average, over the replications of each instance, of the percentage deviation (PD), which measures the makespan performance of one algorithm. The PD is computed from the makespan value produced by the algorithm and the best objective value obtained over the replications.
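The PD and ARPD formulas themselves are missing from this version of the text. Definitions consistent with the description above (and with common practice in the scheduling literature) are the following; the symbols are illustrative rather than the paper's original notation.

\[
\mathrm{PD}_i = \frac{C_i - C^{*}}{C^{*}} \times 100, \qquad
\mathrm{ARPD} = \frac{1}{R}\sum_{i=1}^{R} \mathrm{PD}_i,
\]

where $R$ is the number of replications for each instance, $C_i$ is the makespan produced by the algorithm in replication $i$, and $C^{*}$ is the best objective value obtained over the $R$ replications.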

The choice of algorithm parameters may affect solution quality. First, two important parameters of the IGAA are considered in this paper. One is the crossover probability. The other concerns the mutation operation, which is replaced with the improved simulated annealing algorithm; incorporating the improved SAA based on the hormone modulation mechanism into the genetic algorithm is the central innovation of this paper. Therefore, computational experiments are conducted to illustrate that different annealing rate function factors have a significant influence on the quality of the solutions. The population size is fixed at 30; the annealing rate function factor takes the values 1.0, 1.2, 1.5, and 2.0; the crossover probability takes the values 0.5, 0.6, 0.7, 0.8, and 0.9; the number of jobs is 20 or 50; the number of production stages is 2, 5, or 8; and the processing times for each job at each production stage are generated uniformly over the interval from 1 to 100. The algorithm was run 10 times for each instance. The computational results for the different parameters are shown in Table 2. For each problem size, the minimum ARPD values across the different crossover probabilities are shown in bold and those across the different annealing rate function factors in italic.

Table 2: The effect of different parameters on the performance of IGAA.

In Table 2, the annealing rate function factor of 1.5 performs best when the crossover probability is fixed, while the crossover probability of 0.9 performs best when the annealing rate function factor is fixed. Therefore, an annealing rate function factor of 1.5 and a crossover probability of 0.9 are used to evaluate the performance of the proposed algorithm in the following experiments.

In addition, the characteristics describing the problem and the algorithm, such as the population size, the number of production stages, and the number of machines at each production stage, also need to be assessed with respect to solution quality. Three experiments are conducted for different settings of these characteristics, as displayed in Figures 6, 7, and 8, respectively. The processing times of the jobs are uniformly distributed between 1 and 100.

Figure 6: The performance of the IGAA with different population size.
Figure 7: The performance of the IGAA with different production stages.
Figure 8: The performance of the IGAA with different machine tools.

In Figure 6, an experiment is conducted for a 50-job, 5-stage problem with five machine tools at each production stage. The population size varies from 10 to 30. As shown in Figure 6, the IGAA obtains the optimal or near-optimal solution more quickly as the population size increases.

In Figure 7, the number of jobs is 50, the number of production stages varies from 3 to 5, and the number of machine tools at each production stage is fixed at 5. The population size is set to 25. It is observed from Figure 7 that the IGAA performs better with fewer production stages.

In Figure 8, an experiment is conducted on a five-stage, 50-job instance. The number of machines at each production stage varies from 3 to 7. The population size is set to 25. The graph clearly shows that increasing the number of machine tools improves the performance of the algorithm, so configurations with more machine tools are favorable for the proposed algorithm.

4.2. Comparison with Several Existing Algorithms

To evaluate the solution quality of the IGAA, the algorithm is compared with several different algorithms in the second experiment. The compared algorithms include the genetic algorithm (GA) given by Şerifoğlu and Ulusoy [35] and the simulated annealing (SA) algorithm by Wang et al. [18]. The problem sizes involve three different numbers of jobs (including 60 and 100) and three different numbers of production stages. The processing times for each job at each production stage are generated randomly from a discrete uniform distribution over the range 10 to 100. Each algorithm is run five times for each instance. The experimental results of the performance comparison among the algorithms with respect to the best, worst, and average values of the objective function are reported in Table 3. The ARPD measure defined above is also applied to these problem sizes; the ARPD results for GA, SA, and IGAA are presented in Table 4.

Table 3: Experimental results of IGAA compared with GA and SA.
Table 4: ARPD of IGAA compared with GA and SA.

In Table 3, the best values of the objective function are displayed in boldface. It is observed that the IGAA is an effective algorithm for obtaining optimal or near-optimal solutions across all instances. The IGAA improves the quality of the solutions in 37 of the 45 instances; in the remaining instances, all algorithms obtain the optimal solution, but these are only small-scale and medium-scale problems. At the same time, it is noted that the proposed algorithm outperforms GA and SA with respect to the worst value and the average value of the objective function.

In Table 4, the total average ARPD of IGAA is 1.467, while the total average ARPD values for GA and SA are 4.066 and 3.104, respectively. It is obvious that IGAA is superior to GA and SA in solution quality. Furthermore, an improvement rate of the total average ARPD achieved by IGAA over each compared algorithm is used as a summary measure: IGAA achieves an improvement of 63.92% with respect to GA and 52.74% with respect to SA.
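The improvement-rate formula itself is missing from this version of the text; the definition below is consistent with the reported values of 63.92% and 52.74% and with the ARPD figures in Table 4 (the subscripts are illustrative).

\[
\text{Improvement} = \frac{\mathrm{ARPD}_{\text{alg}} - \mathrm{ARPD}_{\text{IGAA}}}{\mathrm{ARPD}_{\text{alg}}} \times 100\%,
\]

where $\mathrm{ARPD}_{\text{alg}}$ is the total average ARPD of the compared algorithm (GA or SA); for example, $(4.066 - 1.467)/4.066 \times 100\% \approx 63.92\%$.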

5. Conclusions

This study takes a step towards constructing an effective metaheuristic algorithm for the flexible flow-shop scheduling problem with nonidentical parallel machines to minimize the makespan. An improved genetic-simulated annealing algorithm (IGAA) is proposed. Specifically, a matrix encoding scheme with a real-value representation is developed to handle the FFS. Next, in place of the mutation operator of the GA, an improved SAA inspired by a hormone modulation mechanism is incorporated into the GA to avoid premature convergence. Several annealing rate functions are compared to determine the appropriate variant of the IGAA for the scheduling problem. Finally, the hybrid metaheuristic algorithm is applied to solve the FFS problem.

Computational experiments are carried out to verify the performance of the IGAA with different parameters. The experimental results demonstrate that the IGAA performs best according to the average relative percentage deviation (ARPD) of the solution when the annealing rate function factor is 1.5 and the crossover probability is 0.9. With these two parameter values, the proposed algorithm is then used to test the remaining parameters. The results show that the IGAA performs well as the population size, the number of production stages, and the number of machines at each production stage increase, and it is furthermore observed that the makespan can be reduced effectively.

The experiments also involve a comparison of the IGAA with other algorithms from the literature to evaluate the quality of the solutions. The results reveal that the IGAA outperforms several other algorithms. From this, we conclude that the IGAA is an effective method for the FFS problem. Further research may investigate problems with other constraints, such as release dates and setup times.

Conflict of Interests

The authors have declared no conflict of interests.

Acknowledgments

This project is supported by National Natural Science Foundation of China (Grant no. 51175262), Jiangsu Province Science Foundation for Excellent Youths (Grant no. BK201210111), Jiangsu Province Industry-Academy-Research Grant (Grant no. BY201220116), NUAA Fundamental Research Funds (no. NS2013053), and Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).

References

  1. R. Linn and W. Zhang, “Hybrid flow shop scheduling: a survey,” Computers & Industrial Engineering, vol. 37, no. 1, pp. 57–61, 1999.
  2. O. Engin, G. Ceran, and M. K. Yilmaz, “An efficient genetic algorithm for hybrid flow shop scheduling with multiprocessor task problems,” Applied Soft Computing, vol. 11, no. 3, pp. 3056–3065, 2011.
  3. B.-C. Choi and K. Lee, “Two-stage proportionate flexible flow shop to minimize the makespan,” Journal of Combinatorial Optimization, vol. 25, no. 1, pp. 123–134, 2013.
  4. J. N. D. Gupta and A. J. Ruiz-Torres, “Minimizing makespan subject to minimum total flow-time on identical parallel machines,” European Journal of Operational Research, vol. 125, no. 2, pp. 370–380, 2000.
  5. J. N. D. Gupta, K. Krüger, V. Lauff, F. Werner, and Y. N. Sotskov, “Heuristics for hybrid flow shops with controllable processing times and assignable due dates,” Computers & Operations Research, vol. 29, no. 10, pp. 1417–1439, 2002.
  6. G. J. Kyparisis and C. Koulamas, “Flexible flow shop scheduling with uniform parallel machines,” European Journal of Operational Research, vol. 168, no. 3, pp. 985–997, 2006.
  7. J. N. D. Gupta, “Two-stage, hybrid flowshop scheduling problem,” Journal of the Operational Research Society, vol. 39, no. 4, pp. 359–364, 1988.
  8. R. Ruiz and J. A. Vázquez-Rodríguez, “The hybrid flow shop scheduling problem,” European Journal of Operational Research, vol. 205, no. 1, pp. 1–18, 2010.
  9. M. S. Salvador, “A solution to a special case of flow shop scheduling problems,” in Symposium of the Theory of Scheduling and Applications, S. E. Elmaghraby, Ed., pp. 83–91, Springer, New York, NY, USA, 1973.
  10. O. Moursli and Y. Pochet, “A branch-and-bound algorithm for the hybrid flowshop,” International Journal of Production Economics, vol. 64, no. 1–3, pp. 113–125, 2000.
  11. J. N. D. Gupta and E. A. Tunc, “Schedules for a two-stage hybrid flowshop with parallel machines at the second stage,” International Journal of Production Research, vol. 29, no. 7, pp. 1489–1502, 1991.
  12. S. A. Brah and J. L. Hunsucker, “Branch and bound algorithm for the flow shop with multiple processors,” European Journal of Operational Research, vol. 51, no. 1, pp. 88–99, 1991.
  13. J. N. D. Gupta, A. M. A. Hariri, and C. N. Potts, “Scheduling a two-stage hybrid flow shop with parallel machines at the first stage,” Annals of Operations Research, vol. 69, pp. 171–191, 1997.
  14. C. Sriskandarajah and S. P. Sethi, “Scheduling algorithms for flexible flowshops: worst and average case performance,” European Journal of Operational Research, vol. 43, no. 2, pp. 143–160, 1989.
  15. S. Verma and M. Dessouky, “Multistage hybrid flowshop scheduling with identical jobs and uniform parallel machines,” Journal of Scheduling, vol. 2, no. 3, pp. 135–150, 1999.
  16. C. Oĝuz and M. F. Ercan, “A genetic algorithm for hybrid flow-shop scheduling with multiprocessor tasks,” Journal of Scheduling, vol. 8, no. 4, pp. 323–351, 2005.
  17. C. Kahraman, O. Engin, I. Kaya, and M. K. Yilmaz, “An application of effective genetic algorithms for solving hybrid flow shop scheduling problems,” International Journal of Computational Intelligence Systems, vol. 1, no. 2, pp. 134–147, 2008.
  18. H.-M. Wang, F.-D. Chou, and F.-C. Wu, “A simulated annealing for hybrid flow shop scheduling with multiprocessor tasks to minimize makespan,” The International Journal of Advanced Manufacturing Technology, vol. 53, no. 5–8, pp. 761–776, 2011.
  19. H. S. Mirsanei, M. Zandieh, M. J. Moayed, and M. R. Khabbazi, “A simulated annealing algorithm approach to hybrid flow shop scheduling with sequence-dependent setup times,” Journal of Intelligent Manufacturing, vol. 22, no. 6, pp. 965–978, 2011.
  20. K. Alaykýran, O. Engin, and A. Döyen, “Using ant colony optimization to solve hybrid flow shop scheduling problems,” The International Journal of Advanced Manufacturing Technology, vol. 35, no. 5-6, pp. 541–550, 2007.
  21. L. Tang, W. Liu, and J. Liu, “A neural network model and algorithm for the hybrid flow shop scheduling problem in a dynamic environment,” Journal of Intelligent Manufacturing, vol. 16, no. 3, pp. 361–370, 2005.
  22. M. R. Singh and S. S. Mahapatra, “A swarm optimization approach for flexible flow shop scheduling with multiprocessor tasks,” The International Journal of Advanced Manufacturing Technology, vol. 62, no. 1–4, pp. 267–277, 2012.
  23. L. Wang and D. Z. Zheng, “A modified genetic algorithm for job shop scheduling,” The International Journal of Advanced Manufacturing Technology, vol. 20, no. 1, pp. 72–76, 2002.
  24. A. C. Nearchou, “A novel metaheuristic approach for the flow shop scheduling problem,” Engineering Applications of Artificial Intelligence, vol. 17, no. 3, pp. 289–300, 2004.
  25. A. C. Nearchou, “Flow-shop sequencing using hybrid simulated annealing,” Journal of Intelligent Manufacturing, vol. 15, no. 3, pp. 317–328, 2004.
  26. M. Pinedo, Scheduling: Theory, Algorithms, and Systems, Prentice Hall, Englewood Cliffs, NJ, USA, 2002.
  27. G. Shi, “A genetic algorithm applied to a classic job-shop scheduling problem,” International Journal of Systems Science, vol. 28, no. 1, pp. 25–32, 1997.
  28. D. S. Sepich, D. C. Myers, R. Short, J. Topczewski, F. Marlow, and L. Solnica-Krezel, “Role of the zebrafish trilobite locus in gastrulation movements of convergence and extension,” Genesis, vol. 27, no. 4, pp. 159–173, 2000.
  29. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, New York, NY, USA, 1992.
  30. K. Deep and M. Thakur, “A new mutation operator for real coded genetic algorithms,” Applied Mathematics and Computation, vol. 193, no. 1, pp. 211–230, 2007.
  31. L.-N. Xing, Y.-W. Chen, and K.-W. Yang, “A novel mutation operator based on the immunity operation,” European Journal of Operational Research, vol. 197, no. 2, pp. 830–833, 2009.
  32. C.-F. Liaw, “A tabu search algorithm for the open shop scheduling problem,” Computers & Operations Research, vol. 26, no. 2, pp. 109–126, 1999.
  33. M. M. Keikha, “Improved simulated annealing using momentum terms,” in Proceedings of the 2nd International Conference on Intelligent Systems, Modelling and Simulation (ISMS '11), pp. 44–48, Phnom Penh, Cambodia, January 2011.
  34. L. S. Farhy, “Modeling of oscillations in endocrine networks with feedback,” in Numerical Computer Methods, p. 54, 2004.
  35. F. S. Şerifoǧlu and G. Ulusoy, “Multiprocessor task scheduling in multistage hybrid flow-shops: a genetic algorithm approach,” Journal of the Operational Research Society, vol. 55, no. 5, pp. 504–512, 2004.