
Mathematical Problems in Engineering

Volume 2013 (2013), Article ID 409486, 9 pages

http://dx.doi.org/10.1155/2013/409486

## Particle Swarm Optimization Algorithm for Unrelated Parallel Machine Scheduling with Release Dates

Yang-Kuei Lin

Department of Industrial Engineering and Systems Management, Feng Chia University, P.O. Box 25-097, Taichung 40724, Taiwan

Received 6 September 2012; Revised 11 December 2012; Accepted 25 December 2012

Academic Editor: Baozhen Yao

Copyright © 2013 Yang-Kuei Lin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider the NP-hard problem of minimizing makespan for jobs on unrelated parallel machines with release dates. A heuristic and a very effective particle swarm optimization (PSO) algorithm are proposed to tackle the problem. Two lower bounds are proposed to serve as a basis of comparison for large problem instances. Computational results show that the proposed PSO is very accurate and that it outperforms an existing metaheuristic.

#### 1. Introduction

This research considers the problem of scheduling jobs on unrelated parallel machines in the presence of release dates. The performance measure, makespan, is defined as $C_{\max} = \max_{1 \le j \le n} C_j$, where $C_j$ is the completion time of job $j$. Minimizing makespan not only completes all jobs as quickly as possible but also serves as a surrogate for maximizing machine utilization. Following the three-field notation of Graham et al. [1], we refer to this problem as $R_m \mid r_j \mid C_{\max}$. This problem is NP-hard [2].

Chen and Vestjens [3] analyzed the largest processing time (LPT) rule for minimizing makespan on identical parallel machines with release dates in an online setting, where the release date of a job is not known in advance and its processing time becomes known only at its arrival. Kellerer [4] proposed algorithms for multiprocessor scheduling problems with machine release times. Koulamas and Kyparisis [5] considered makespan minimization on uniform parallel machines with release times ($Q_m \mid r_j \mid C_{\max}$); they proposed a heuristic and derived a tight worst-case ratio bound for it. Centeno and Armacost [6] showed that the LPT rule performed better than the least flexible job (LFJ) rule for the problem with release times and machine eligibility restrictions. Lancia [7] applied a branch-and-bound (b&b) procedure to scheduling problems with release dates and tails on two unrelated parallel machines. Similarly, Gharbi and Haouari [8] presented a b&b procedure for parallel machine scheduling subject to release dates and delivery times. Carlier and Pinson [9] reported new results on the structure of Jackson's pseudopreemptive schedule applied to cumulative scheduling problems. Li et al. [10] developed a polynomial time approximation scheme for scheduling identical parallel batching machines with release times. Li and Wang [11] proposed an efficient algorithm for scheduling parallel machines with inclusive processing set restrictions and job release times.

To the best of our knowledge, no research has yet been published that develops an efficient algorithm to minimize makespan for unrelated parallel machines with release dates. The rest of this paper is organized as follows. Section 2 presents our proposed lower bounds. Section 3 presents the proposed PSO. In Section 4, the computational results are reported. Section 5 presents our conclusions and suggestions for future research.

#### 2. Lower Bounds for $R_m \mid r_j \mid C_{\max}$

We propose two straightforward and easily implementable lower bounds for the studied problem. $LB_1$ is the maximum, over all jobs, of the job's release date plus its minimum processing time across all machines. $LB_2$ is the minimum release date among all jobs plus the sum of all jobs' minimum processing times (across machines) divided by the number of machines. We set the lower bound LB equal to the maximum of $LB_1$ and $LB_2$:

$$LB_1 = \max_{1 \le j \le n} \Big\{ r_j + \min_{1 \le i \le m} p_{ij} \Big\}, \qquad LB_2 = \min_{1 \le j \le n} r_j + \frac{1}{m} \sum_{j=1}^{n} \min_{1 \le i \le m} p_{ij}, \qquad LB = \max\{LB_1, LB_2\}.$$

To illustrate the proposed lower bounds, we consider Example 1, which has 2 machines and 7 jobs. The matrix of processing times for Example 1 is given in Table 1.
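As a sketch, the two bounds can be computed in a few lines. Since Table 1 is not reproduced here, the release dates and processing times below are illustrative, not the actual Example 1 data:

```python
# Lower bounds LB1 and LB2 for minimizing makespan on unrelated parallel
# machines with release dates.  The data below are illustrative (Table 1
# is not reproduced here): 2 machines, 7 jobs.
release = [0, 2, 3, 1, 4, 0, 5]         # r_j for jobs 1..7
proc = [                                # proc[i][j]: time of job j on machine i
    [4, 7, 3, 6, 2, 5, 8],
    [6, 3, 5, 2, 7, 4, 3],
]
m, n = len(proc), len(release)

# Minimum processing time of each job across all machines.
min_p = [min(proc[i][j] for i in range(m)) for j in range(n)]

lb1 = max(r + p for r, p in zip(release, min_p))   # release-date-based bound
lb2 = min(release) + sum(min_p) / m                # machine-load-based bound
lb = max(lb1, lb2)
```

For these data the load-based bound dominates, but either bound can be the larger one depending on how widely the release dates are spread.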

#### 3. The Proposed PSO Algorithm

PSO was first introduced by Kennedy and Eberhart [12] for solving continuous nonlinear function optimization problems. PSO is based on the metaphor of social interaction and communication in flocks of birds or schools of fish. In these groups, there is a leader (the one with the best performance) who guides the movement of the whole swarm. In a PSO, each individual is called a “particle,” and each particle flies around the search space with some velocity. In each iteration, a particle moves from its previous location to a new location at its newly updated velocity, which is calculated based on the particle’s own experience and the experience of the whole swarm.

A population of $K$ particles is assumed to evolve in a $D$-dimensional search space such that each particle $k$ is assigned a position vector $X_k^t = (x_{k1}^t, \ldots, x_{kD}^t)$ and a velocity vector $V_k^t = (v_{k1}^t, \ldots, v_{kD}^t)$, where $x_{kd}^t$ represents the location and $v_{kd}^t$ represents the velocity of particle $k$ in the *d*th dimension of the search space at the *t*th iteration, $k = 1, \ldots, K$, $d = 1, \ldots, D$. Each particle knows its position and the corresponding objective function value. The local best position for each particle is encoded in the variables $P_k = (p_{k1}, \ldots, p_{kD})$, and the global best position among all particles is encoded in the variable $G = (g_1, \ldots, g_D)$. The standard PSO equations can be described as follows [12]:

$$v_{kd}^{t+1} = w v_{kd}^{t} + c_1 r_1 \big(p_{kd} - x_{kd}^{t}\big) + c_2 r_2 \big(g_d - x_{kd}^{t}\big),$$
$$x_{kd}^{t+1} = x_{kd}^{t} + v_{kd}^{t+1},$$

where $w$ is the inertia weight that controls the impact of the previous velocities on the current velocity, $c_1$ is the cognition learning factor, $c_2$ is the social learning factor, and $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$.
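The standard continuous update rule can be sketched in a few dozen lines; the parameter values and the sphere test function below are illustrative, not the settings used in this paper:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal continuous PSO; parameter values are illustrative."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # local best positions
    pbest = [f(x) for x in X]                  # local best objective values
    g = min(range(n_particles), key=lambda k: pbest[k])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[k][d] = (w * V[k][d]
                           + c1 * r1 * (P[k][d] - X[k][d])
                           + c2 * r2 * (G[d] - X[k][d]))
                X[k][d] += V[k][d]
            fx = f(X[k])
            if fx < pbest[k]:                  # update local best
                P[k], pbest[k] = X[k][:], fx
                if fx < gbest:                 # update global best
                    G, gbest = X[k][:], fx
    return G, gbest

# Usage: minimize the 3-dimensional sphere function.
best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=3)
```

For the scheduling problem studied here, this continuous rule is replaced by the sequence metric operators of Section 3.3.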

PSO has been successfully applied to a variety of continuous nonlinear optimization problems. In recent years, considerable effort has been expended on solving scheduling problems with PSO algorithms. Articles [13, 14] used PSO algorithms to solve scheduling problems similar to the one studied in this paper. Reference [13] provided a PSO algorithm for scheduling identical parallel machines to minimize makespan. Reference [14] presented a PSO algorithm for scheduling nonidentical parallel batch processing machines to minimize makespan. This research uses PSO for the $R_m \mid r_j \mid C_{\max}$ problem. The following five subsections describe the PSO algorithm used in this research: particle representation, initial population generation, particle velocity and sequence metric operators, local search, and stopping criteria and parameter settings.

##### 3.1. Particle Representation

A coding scheme developed in [15] is used to represent a solution to the problem at hand. The coding scheme uses a list of job symbols and partitioning symbols. A sequence of job symbols, denoted by integers, represents a possible sequence of jobs. The partitioning symbol, an asterisk, designates the partition of jobs to machines. Generally, for an $m$-machine, $n$-job problem, a solution contains $m-1$ partitioning symbols and $n$ job symbols, resulting in a total size of $n+m-1$. For example, for a schedule with 7 jobs and 2 machines, the particle can be represented as shown in Figure 1.

The completed schedule is thus jobs 7, 6, 1, and 4 on machine 1; jobs 3, 2, and 5 on machine 2. This coding scheme specifies not only which jobs are assigned to which machine but also the order of the jobs on each machine. These pieces of information are important, since we are scheduling unrelated parallel machines with release dates.
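A minimal sketch of decoding this coding scheme and evaluating the resulting makespan under release dates follows; the release dates and processing times are illustrative, since the Example 1 data are not reproduced here:

```python
# Decode the list-with-asterisks coding scheme of Section 3.1 and compute
# the makespan under release dates.  The particle matches the Figure 1
# example (jobs 7, 6, 1, 4 on machine 1; jobs 3, 2, 5 on machine 2); the
# release dates and processing times are illustrative.
particle = [7, 6, 1, 4, "*", 3, 2, 5]

def decode(particle):
    """Split the symbol list at asterisks into per-machine job sequences."""
    machines, current = [], []
    for s in particle:
        if s == "*":
            machines.append(current)
            current = []
        else:
            current.append(s)
    machines.append(current)
    return machines

def makespan(machines, release, proc):
    """Process each machine's jobs in order, respecting release dates."""
    cmax = 0
    for i, jobs in enumerate(machines):
        t = 0
        for j in jobs:
            t = max(t, release[j]) + proc[i][j]
        cmax = max(cmax, t)
    return cmax

release = {1: 0, 2: 1, 3: 0, 4: 2, 5: 3, 6: 0, 7: 1}
proc = [
    {1: 2, 2: 4, 3: 3, 4: 1, 5: 5, 6: 2, 7: 3},   # machine 1
    {1: 4, 2: 2, 3: 1, 4: 3, 5: 2, 6: 5, 7: 6},   # machine 2
]
```

Because machines are unrelated and jobs have release dates, both the job-to-machine assignment and the within-machine order affect the makespan, which is exactly what this encoding captures.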

##### 3.2. Initial Population Generation

In order to give the PSO algorithm good initial solutions and to increase the chances of getting closer to regions that yield good objective function values, we propose a heuristic named SRD_Reassign, described as follows.

###### 3.2.1. Heuristic SRD_Reassign

*Step 1. *Let $U$ be the set of unscheduled jobs; let $T_i$ be the sum of the processing times of the jobs that have already been scheduled on machine $i$, $i = 1, \ldots, m$. Initially, set $U = \{1, 2, \ldots, n\}$ and $T_i = 0$ for $i = 1, \ldots, m$.

*Step 2. *Arrange the jobs in $U$ in shortest release date (SRD) order; then assign each job $j$ to the machine $i^*$ that has the minimum processing time for it, that is, $i^* = \arg\min_{1 \le i \le m} p_{ij}$. Repeat until all jobs have been scheduled, generating a complete schedule.

*Step 3. *Let $S_i$ be the set of jobs scheduled on machine $i$, $i = 1, \ldots, m$; let $C_{\max}$ represent the maximum completion time, and let $R$ denote the set of candidate jobs for reassignment. Initially, set $R = \emptyset$.

*Step 4. *Identify the machine $i'$ whose completion time equals $C_{\max}$. For every job $j \in S_{i'}$, search for a machine $i'' \neq i'$ such that, if job $j$ were reassigned to machine $i''$ and the jobs on machine $i''$ were re-sorted in SRD order, the new completion time of machine $i''$ would still be smaller than $C_{\max}$. For each such job $j$ and machine $i''$ found, update the candidate set by setting $R = R \cup \{(j, i'')\}$. If $R = \emptyset$, then go to Step 6.

*Step 5. *Select from $R$ the pair $(j, i'')$ whose reassignment yields the largest reduction in the completion time of the bottleneck machine. Reassign job $j$ to machine $i''$ by setting $S_{i''} = S_{i''} \cup \{j\}$, sort the jobs on machine $i''$ in SRD order, and update its completion time. Remove job $j$ from machine $i'$ by setting $S_{i'} = S_{i'} \setminus \{j\}$, sort the jobs on machine $i'$ in SRD order, and update its completion time. Set $C_{\max} = \max_i C_i$ and $R = \emptyset$. Go to Step 4.

*Step 6. *Terminate the procedure.
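The construction phase of the heuristic (Steps 1 and 2) can be sketched as follows; the reassignment loop of Steps 3-6 is omitted, and the data in the usage line are illustrative:

```python
# Sketch of SRD_Reassign Steps 1-2: arrange jobs in shortest-release-date
# (SRD) order, then assign each job to the machine with the minimum
# processing time for it.  Steps 3-6 (the reassignment loop) are omitted.
def srd_assign(release, proc):
    m, n = len(proc), len(release)
    order = sorted(range(n), key=lambda j: release[j])   # SRD order
    schedule = [[] for _ in range(m)]
    finish = [0] * m                                     # completion time per machine
    for j in order:
        i = min(range(m), key=lambda mi: proc[mi][j])    # cheapest machine for job j
        schedule[i].append(j)
        finish[i] = max(finish[i], release[j]) + proc[i][j]
    return schedule, max(finish)

# Usage with illustrative data: 2 machines, 3 jobs.
schedule, cmax = srd_assign([0, 2, 1], [[3, 2, 4], [5, 1, 2]])
```

Note that Step 2 picks the machine with the smallest processing time rather than the earliest completion time; the subsequent reassignment steps then repair overloaded bottleneck machines.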

The first two particles are generated by the first-come, first-served (FCFS) rule and by SRD_Reassign. The remaining particles are generated by applying local search to the solution found by SRD_Reassign. For the FCFS rule, we consider all unscheduled jobs and schedule each one on the first available machine in FCFS order. Local search is done by randomly choosing two jobs from the solution found by SRD_Reassign and interchanging them to generate a new solution. From the initial population pool, we identify the best current solution and update the global best position $G$.

##### 3.3. Particle Velocity and Sequence Metric Operators

Kashan and Karimi [13] adapted the classical PSO equations to provide a discrete PSO algorithm that maintains all major characteristics of the original continuous PSO equations when solving parallel machine scheduling problems. In this research, we use the two equations proposed in [13] to update the particle velocity and position through sequence metric operators, as shown in (4) and (5):

$$V_k^{t+1} = \big(B_1 \otimes (P_k \ominus X_k^{t})\big) \oplus \big(B_2 \otimes (G \ominus X_k^{t})\big), \quad (4)$$
$$X_k^{t+1} = X_k^{t} \oplus V_k^{t+1}. \quad (5)$$

In (4), $V_k^t$ and $X_k^t$ represent the velocity and position arrays of particle $k$ at the *t*th iteration, respectively. $P_k$ and $G$ represent the local best position of each particle and the global best position among all particles visited so far. $B_1$ and $B_2$ are 1-by-$(n+m-1)$ arrays in which each digit is 0 or 1; these random arrays are generated from a Bernoulli distribution. $\ominus$, $\otimes$, and $\oplus$ are subtraction, multiplication, and addition operators, respectively. The definitions of the operators are as follows.

The subtraction operator ($\ominus$) defines the differences between the current position of the *k*th particle, $X_k^t$, and a desired position ($P_k$ or $G$). It first finds elements that do not have the same content in $X_k^t$ and $P_k$ (or $G$) and schedules those elements based on their order in $P_k$ (or $G$). Next, it finds elements that have the same content in $X_k^t$ and $P_k$ (or $G$) and gives those elements zero values. It puts the zero-valued elements in SRD order and then schedules them on whichever machine offers the earliest completion time (ECT). Figure 2 demonstrates how the $\ominus$ operator works for Example 1.

The multiplication operator ($\otimes$) enhances our PSO algorithm's exploration. It first generates a 1-by-$(n+m-1)$ binary vector for a solution vector, and then it performs a multiplication process in which the asterisk positions of the solution are kept. These random binary arrays act as randomized subroutines within PSO that enhance its exploration ability. Figure 3 demonstrates how the $\otimes$ operator works for Example 1. The nonzero-valued elements are scheduled first. Then, all zero-valued elements are sorted in SRD order, and each zero-valued element is assigned to the machine that offers the ECT.

The addition operator ($\oplus$) is a crossover operator of the kind commonly used in genetic algorithms. Here, we use the crossover proposed in [15]. The crossover scheme has three main steps: it obtains the asterisk positions from the first parent; it copies a randomly selected subschedule from the first parent; and it scans the second parent from left to right, filling the gaps in the child's schedule with jobs taken from the second parent. Figure 4 shows an illustration of this crossover scheme.
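One way to implement the three-step crossover reads as follows; this is an interpretation of the description above under stated assumptions, not necessarily the exact operator of [15]:

```python
import random

# Sketch of the three-step crossover: (1) take asterisk positions from
# parent 1, (2) copy a random subschedule from parent 1, (3) fill the
# remaining gaps left to right with unused jobs from parent 2.
def crossover(p1, p2):
    n = len(p1)
    child = [None] * n
    for i, s in enumerate(p1):                 # step 1: asterisks from parent 1
        if s == "*":
            child[i] = "*"
    a = random.randrange(n)
    b = random.randrange(a, n)
    for i in range(a, b + 1):                  # step 2: subschedule from parent 1
        if p1[i] != "*":
            child[i] = p1[i]
    used = {x for x in child if x is not None and x != "*"}
    fill = (j for j in p2 if j != "*" and j not in used)
    for i in range(n):                         # step 3: fill gaps from parent 2
        if child[i] is None:
            child[i] = next(fill)
    return child
```

The child inherits the machine partition of the first parent and always contains each job exactly once, so it decodes to a feasible schedule.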

During the execution of the $\ominus$, $\otimes$, and $\oplus$ operators, whenever a complete schedule yields a better solution than the best one found so far, we update the best current solution and the global best position $G$.

##### 3.4. Local Search

It is well known that evolutionary algorithms can be improved by hybridizing them with local search (yielding memetic algorithms). For each particle $k$, we apply the following local search procedure (LSP) to further improve the current solution.

###### 3.4.1. Local Search Procedure (LSP)

*Step 1. *Initially, set the move counter $l = 1$.

*Step 2. *Identify the machine $i'$ that has the maximum completion time ($C_{\max}$). Randomly choose one job $j_1$ from machine $i'$ and one job $j_2$ from another machine $i''$. Insert job $j_1$ after job $j_2$ on machine $i''$. If a better $C_{\max}$ is found, update the schedule and go to Step 2; otherwise, go to Step 3.

*Step 3. *Randomly choose two jobs $j_1$ and $j_2$ from the schedule; $j_1$ and $j_2$ can be on the same machine or on two different machines. Interchange jobs $j_1$ and $j_2$. If a better $C_{\max}$ is found, update the schedule and go to Step 2; otherwise, go to Step 4.

*Step 4. *If $l = L$, stop; otherwise set $l = l + 1$ and go to Step 2.
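The interchange move of Step 3 can be sketched as an improve-only search over random swaps; here `evaluate` stands in for the schedule decoder of Section 3.1, and the toy objective in the usage lines is illustrative:

```python
import random

# Sketch of the interchange move (LSP Step 3): swap two randomly chosen
# jobs and keep the move only if the objective improves.  `evaluate` is a
# stand-in for the makespan evaluation of a decoded particle.
def interchange_search(particle, evaluate, moves=100):
    best = particle[:]
    best_val = evaluate(best)
    jobs = [i for i, s in enumerate(best) if s != "*"]   # job positions only
    for _ in range(moves):
        a, b = random.sample(jobs, 2)
        cand = best[:]
        cand[a], cand[b] = cand[b], cand[a]
        val = evaluate(cand)
        if val < best_val:                                # improve-only acceptance
            best, best_val = cand, val
    return best, best_val

# Usage with a toy objective (distance of each job from its own index).
particle = [2, 1, 0, "*", 3]
best, val = interchange_search(
    particle,
    lambda p: sum(abs(x - i) for i, x in enumerate(p) if x != "*"),
    moves=200,
)
```

The insertion move of Step 2 would be implemented analogously, removing a job from the bottleneck machine and re-inserting it elsewhere.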

Again, during the execution of the LSP, whenever a complete schedule yields a better solution than the best one found so far, we update the best current solution and the global best position $G$.

##### 3.5. Stopping Criteria and Parameter Settings

We studied the effects of five important parameters ($c_1$, $c_2$, the number of local search moves $L$, the population size, and the number of iterations) on the performance of our proposed PSO. The model was tested and parameterized through a factorial study; the selected parameter values and a detailed description of our parameterization study are given in the appendix.

#### 4. Computational Results

In this section, we present several computational results for the proposed PSO algorithm. We compare our proposed PSO algorithm with a mixed integer programming (MIP) model developed in our previous research [16] on the $R_m \mid r_j \mid C_{\max}$ problem. The MIP model [16] was coded in AMPL and solved with CPLEX 11.2. The proposed heuristic, SRD_Reassign, and the proposed PSO algorithm were implemented in C. The MIP model, heuristic, and PSO algorithm were executed on a computer with a 2.5 GHz CPU and 2 GB of memory. Processing times were generated from a uniform distribution. Release dates were generated in a manner similar to that of Mönch et al. [17]: they were drawn from a uniform distribution whose upper limit was controlled by a release-date range factor. High values of this factor tend to produce widely separated release dates. The factor values were set at 0.1, 0.25, and 0.5. We used 4 machines with 18 jobs to represent small problem instances and 10 machines with 100 jobs to represent large problem instances. For each factor value, 20 problem instances were randomly generated. The effectiveness of each algorithm was evaluated by its mean performance and its required computation time (labeled "Avg. time"). For small problem instances, a ratio was calculated by dividing the algorithm's makespan by the optimal MIP makespan. For large problem instances, a ratio was calculated by dividing the algorithm's makespan by the lower bound LB. The mean performance of an algorithm for each factor value was the average ratio over the 20 instances.

##### 4.1. Comparison of Heuristics for the $R_m \mid r_j \mid C_{\max}$ Problem

We compared the proposed heuristic SRD_Reassign with FCFS and with the optimal solutions obtained from the MIP model [16]. FCFS is a dispatching rule that is commonly used for practical problems with release dates. Computational results for small and large problem instances are given in Tables 2 and 3, respectively. The results show that the proposed SRD_Reassign outperformed FCFS in terms of makespan. For small problem instances, the average SRD_Reassign makespan was 1.08 times the optimum, and the average FCFS makespan was 1.41 times the optimum. Both heuristics outperformed the MIP model in terms of computation time. When the release-date range factor was small, both heuristics had larger ratios to the optimal solutions than when it was large. Also, the MIP took more computation time to find optimal solutions when the factor was small. This probably indicates that problems with small release date ranges are harder to solve than problems with large release date ranges. For large problem instances, the average SRD_Reassign makespan was 1.22 times the lower bound (LB), and the average FCFS makespan was 1.67 times the LB. Both heuristics ran very quickly (in less than 1 second) even for large problem instances.

##### 4.2. Comparison of Metaheuristics for the $R_m \mid r_j \mid C_{\max}$ Problem

We compared the proposed PSO with an existing metaheuristic, namely, the version of simulated annealing (SA) described by Lee et al. [18]. This SA variant was originally designed for makespan minimization on identical parallel machines. SA is a general metaheuristic that can be used without problem-dependent knowledge; therefore, it can also be applied to the $R_m \mid r_j \mid C_{\max}$ problem. In order to provide a fair comparison, we used the same initial solution (from SRD_Reassign) for both PSO and SA. We also adjusted the SA parameters to ensure that both PSO and SA ran for similar computation times. The termination criterion for SA was set to 12 seconds for small problem instances and 83 seconds for large problem instances; if SA found a solution equal to the LB, the program terminated earlier. Computational results for small and large problem instances are given in Tables 4 and 5, respectively.

Computational results show that the proposed PSO outperformed the SA in terms of makespan. For small problem instances, the PSO found optimal solutions at all three release-date factor settings. The average SA makespan was 1.05 times the optimum. Both metaheuristics outperformed the MIP model in terms of computation time. The last column in Table 4 reports how many times a given algorithm produced a better makespan than the other algorithm. For instance, a value of $a/b$ in column PSO/SA means that, out of 20 problems, there were $a$ problems for which PSO yielded a better solution than SA, $b$ problems for which SA performed better, and $20-a-b$ problems for which PSO and SA yielded the same makespan.

For large problem instances, the average PSO makespan was 1.08 times the LB, and the average SA makespan was 1.15 times the LB. The last column in Table 5 shows how many times out of 20 the LB was attained by $LB_1$ (the release-date-based bound) and how many times by $LB_2$ (the machine-load-based bound). When the release-date range factor was small, $LB_2$ provided a better lower bound than $LB_1$; however, when the factor was large, $LB_1$ provided the better bound. This suggests that $LB_2$ performs better for problems with narrow release date ranges, and $LB_1$ performs better for problems with wide release date ranges.

##### 4.3. The Effects of the Proposed PSO

Next, since the proposed PSO incorporates a number of ideas (heuristic initial solutions, SRD, ECT, and LSP), we examine which parts are essential to its performance. We do so by disabling a single component, running the proposed PSO without that component, and observing the resulting performance. We chose to study large problem instances. These experiments are described as follows.

*PSO-Initial Heuristics*: instead of generating the initial population by heuristics, we generated a random initial set of solutions.

*PSO-SRD*: instead of sorting elements in SRD order and then scheduling them on the machine that offered the ECT within the PSO operators ($\ominus$ and $\otimes$), we randomly chose unscheduled jobs and then scheduled them on the machine that offered the ECT.

*PSO-ECT*: instead of sorting elements in SRD order and then scheduling them on the machine that offered the ECT within the PSO operators ($\ominus$ and $\otimes$), we sorted elements in SRD order and then scheduled them on the first available machine.

*PSO-LSP*: local search procedure was disabled.

*PSO-LSP+LS [13]*: the local search procedure was disabled, and the local search algorithm used in [13] was applied instead. Since the formulation in Step 4 of [13] was not suitable for the unrelated parallel machine environment, we modified it to find two jobs whose exchange was able to improve the current best makespan.

Table 6 lists the average makespan ratio of each PSO variant to the standard PSO (which includes the initial heuristics, SRD, ECT, and LSP by default). Table 6 shows that the PSO performed poorly when the initial heuristics were not applied and the initial population was randomly generated, and also when the ECT strategy was not applied within the PSO operators. The average ratio of PSO without initial heuristics to standard PSO was 1.019, the average for PSO without SRD was 1.006, the average for PSO without ECT was 1.020, and the average for PSO without LSP was 1.007. In all, every version of the proposed PSO lacking one of these parts (heuristic initial solutions, SRD, ECT, and LSP) performed worse than the PSO with all of them.

Moreover, we compared our proposed PSO with another existing PSO. The closest existing PSO that we were able to find was the hybridized discrete PSO (HDPSO) proposed in [13], which was designed to minimize makespan for identical parallel machines. Both the proposed PSO and the HDPSO are designed to minimize makespan for parallel machines, and both use formulas (4)-(5) to update each particle's velocity and position. However, the HDPSO is still quite different from our PSO. The HDPSO considers problems without release dates; hence, its coding scheme does not consider the order of jobs on the same machine. Also, since HDPSO considers an identical parallel machine environment, it uses the LPT rule to assign the jobs with zero-valued elements within the $\ominus$, $\otimes$, and $\oplus$ operators in formulas (4)-(5). It is well known that the LPT rule does not perform well in unrelated parallel machine environments, so if HDPSO were used to solve the problem without modification, it would not perform very well. Table 7 shows a comparison between HDPSO and our PSO. Since the $\ominus$, $\otimes$, and $\oplus$ operators within formulas (4)-(5) are quite different in the two PSOs, a direct comparison of the operators is not meaningful; we therefore focus our comparison on initial heuristics and local searches. The first column in Table 6 indicates that our PSO with an initial heuristic performs better than a version without one; HDPSO might exhibit similar performance differences. The last column in Table 6 indicates that our proposed LSP performs better than the local search algorithm used in [13]: the standard PSO (with LSP embedded) versus PSO-LSP+LS [13] is 1.000 versus 1.009. Moreover, the proposed LSP outperforms the local search algorithm used in [13] in terms of average computation time. Therefore, we conclude that our proposed PSO provides better and more efficient strategies for parallel machine makespan minimization than HDPSO does, and it is more likely to provide promising results.

#### 5. Conclusions and Future Work

We studied the problem of scheduling jobs on unrelated parallel machines with release dates to minimize makespan. In this research, we proposed two lower bounds for the studied problem. We also proposed a heuristic, SRD_Reassign, and a metaheuristic, PSO, to tackle the problem. Computational results showed that SRD_Reassign outperformed the commonly used FCFS heuristic in terms of makespan, and that the proposed PSO outperformed a comparable variant of SA in terms of makespan. Future work could extend our approach to other performance criteria or even to multiobjective parallel machine scheduling problems.

#### Appendix

We studied the effects of five important parameters ($c_1$, $c_2$, the number of local search moves $L$, the population size, and the number of iterations) on the performance of our proposed PSO. In order to test the significance of each parameter, a large problem instance was chosen as representative; the objective was to minimize makespan in an unrelated parallel machine environment with release dates. Because problems with small ranges of release dates are harder to solve than problems with large ranges, the release date factor was set to 0.1. In order to obtain information about the importance of each factor, we conducted an initial screening experiment in which each parameter was set at a high or a low level, as shown in Table 8. The results of this experiment are shown in Table 9, where each value is the average over 20 problem instances. The half-normal plot provided by Design-Expert indicates that the number of local search moves $L$, the population size, and the number of iterations were significant to the response in the screening experiment. The fitted model in terms of coded factors shows that increasing these three values decreases makespan.

Next, we used the method of steepest descent (Myers et al. [19]) to explore the region of improved response. The path of steepest descent is shown in Table 10. The results show that no further reduction in makespan was obtained after Run 2. Although Run 2 improved makespan by 2.4% compared with Run 0, its computation time was very long. Run 1 improved makespan by 2.1% relative to Run 0 and used less computation time. We therefore chose the settings of Run 1 as our final parameter set, fixing $L$, the population size, and the number of iterations at the Run 1 values. Since $c_1$ and $c_2$ were not significant, they were kept at their low levels to save computation time.

#### References

1. R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy Kan, "Optimization and approximation in deterministic sequencing and scheduling: a survey," *Annals of Discrete Mathematics*, vol. 5, pp. 287–326, 1979.
2. M. L. Pinedo, *Scheduling: Theory, Algorithms, and Systems*, Springer, New York, NY, USA, 4th edition, 2012.
3. B. Chen and A. P. A. Vestjens, "Scheduling on identical machines: how good is LPT in an on-line setting?" *Operations Research Letters*, vol. 21, no. 4, pp. 165–169, 1997.
4. H. Kellerer, "Algorithms for multiprocessor scheduling with machine release times," *IIE Transactions*, vol. 30, no. 11, pp. 991–999, 1998.
5. C. Koulamas and G. J. Kyparisis, "Makespan minimization on uniform parallel machines with release times," *European Journal of Operational Research*, vol. 157, no. 1, pp. 262–266, 2004.
6. G. Centeno and R. L. Armacost, "Minimizing makespan on parallel machines with release time and machine eligibility restrictions," *International Journal of Production Research*, vol. 42, no. 6, pp. 1243–1256, 2004.
7. G. Lancia, "Scheduling jobs with release dates and tails on two unrelated parallel machines to minimize the makespan," *European Journal of Operational Research*, vol. 120, no. 2, pp. 277–288, 2000.
8. A. Gharbi and M. Haouari, "Minimizing makespan on parallel machines subject to release dates and delivery times," *Journal of Scheduling*, vol. 5, no. 4, pp. 329–355, 2002.
9. J. Carlier and E. Pinson, "Jackson's pseudo-preemptive schedule and cumulative scheduling problems," *Discrete Applied Mathematics*, vol. 145, no. 1, pp. 80–94, 2004.
10. S. Li, G. Li, and S. Zhang, "Minimizing makespan with release times on identical parallel batching machines," *Discrete Applied Mathematics*, vol. 148, no. 1, pp. 127–134, 2005.
11. C.-L. Li and X. Wang, "Scheduling parallel machines with inclusive processing set restrictions and job release times," *European Journal of Operational Research*, vol. 200, no. 3, pp. 702–710, 2010.
12. J. Kennedy and R. Eberhart, "Particle swarm optimization," in *Proceedings of the 1995 IEEE International Conference on Neural Networks*, pp. 1942–1948, Perth, Australia, December 1995.
13. A. H. Kashan and B. Karimi, "A discrete particle swarm optimization algorithm for scheduling parallel machines," *Computers and Industrial Engineering*, vol. 56, no. 1, pp. 216–223, 2009.
14. P. Damodaran, D. A. Diyadawagamage, O. Ghrayeb, and M. C. Vélez-Gallego, "A particle swarm optimization algorithm for minimizing makespan of nonidentical parallel batch processing machines," *International Journal of Advanced Manufacturing Technology*, vol. 58, pp. 1131–1140, 2012.
15. R. Cheng, M. Gen, and T. Tozawa, "Minmax earliness/tardiness scheduling in identical parallel machine system using genetic algorithms," *Computers and Industrial Engineering*, vol. 29, no. 1–4, pp. 513–517, 1995.
16. Y. K. Lin and C. W. Lin, "Dispatching rules for unrelated parallel machine scheduling with release dates," *International Journal of Advanced Manufacturing Technology*, 2013.
17. L. Mönch, H. Balasubramanian, J. W. Fowler, and M. E. Pfund, "Heuristic scheduling of jobs on parallel batch machines with incompatible job families and unequal ready times," *Computers and Operations Research*, vol. 32, no. 11, pp. 2731–2750, 2005.
18. W. C. Lee, C. C. Wu, and P. Chen, "A simulated annealing approach to makespan minimization on identical parallel machines," *International Journal of Advanced Manufacturing Technology*, vol. 31, no. 3-4, pp. 328–334, 2006.
19. R. H. Myers, D. C. Montgomery, and C. M. Anderson-Cook, *Response Surface Methodology: Process and Product Optimization Using Designed Experiments*, John Wiley & Sons, New York, NY, USA, 3rd edition, 2009.