Abstract

The problem of minimizing the makespan on a single batch processing machine is studied in this paper. Both job sizes and processing times are nonidentical, and the processing time of each batch is determined by the job with the longest processing time in the batch. A Max–Min Ant System (MMAS) algorithm is developed to solve the problem. A local search method, MJE (Multiple Jobs Exchange), is proposed to improve the performance of the algorithm by adjusting jobs between batches. A preliminary experiment is conducted to determine the parameters of MMAS. The performance of the proposed MMAS algorithm is compared with CPLEX as well as several other algorithms, including the ant cycle (AC) algorithm, a genetic algorithm (GA), and two heuristics, First Fit Longest Processing Time (FFLPT) and Best Fit Longest Processing Time (BFLPT), through numerical experiments. The experimental results show that MMAS outperforms the other approaches, especially for instances with a large number of jobs.

1. Introduction

Scheduling of a batch processing machine is a typical combinatorial optimization problem. Different from traditional scheduling problems, a batch processing machine can process several jobs simultaneously as a batch. Batch processing machine scheduling is commonly encountered in manufacturing industries, for example, heat treatment in the metal industry and environmental stress screening in integrated circuit production. As these operations often tend to be the bottleneck of the manufacturing sequence, scheduling the batch processing machine effectively can significantly improve the completion times of jobs.

The problem was first proposed by Ikura and Gimple [1], who studied the case with identical job sizes and constant batch processing times, where machine capacity was defined by the number of jobs processed simultaneously. Considering the burn-in operation in semiconductor manufacturing, Lee et al. [2] presented efficient dynamic programming based algorithms for minimizing a number of different performance measures. The same problem was studied by Sung and Choung [3], who proposed a branch and bound algorithm and several heuristics to minimize the makespan.

The problem is much more complicated when nonidentical job sizes are considered. Uzsoy [4] studied the problem of scheduling a single batch processing machine with nonidentical job sizes under the objectives of minimizing the makespan and the total processing time. Both problems were proved to be NP-hard, and several heuristics were proposed, including First Fit Longest Processing Time (FFLPT) and First Fit Shortest Processing Time (FFSPT). Dupont and Jolai Ghazvini [5] proposed two effective heuristics, successive knapsack problem (SKP) and Best Fit Longest Processing Time (BFLPT), the latter of which outperformed FFLPT. To solve the problem optimally, an exact branch and bound algorithm was proposed by Dupont and Dhaenens-Flipo [6], who presented some dominance properties for a general enumeration scheme under the makespan criterion. An enumeration scheme [7] was also developed and combined with existing heuristics to solve large-scale problems.

Since the batch processing machine scheduling problem is NP-hard [4], various metaheuristic algorithms have been developed to solve it. Melouk et al. [8] studied the problem using Simulated Annealing (SA), and random instances were generated to evaluate the effectiveness of the algorithm. The same problem was considered by Damodaran et al. [9] with a genetic algorithm (GA). Their experiment showed that GA outperformed SA in run time and solution quality. Husseinzadeh Kashan et al. [10] proposed a grouping version of the particle swarm optimization (PSO) algorithm and applied it to the single batch-machine scheduling problem. A GRASP approach developed by Damodaran et al. [11] was used to minimize the makespan of a capacitated batch processing machine, and the experimental study concluded that GRASP outperformed other solution approaches. For problems with multiple machines, Zhou et al. [12] proposed an effective differential evolution-based hybrid algorithm to minimize the makespan on uniform parallel batch processing machines; the algorithm was evaluated by comparison with a random keys genetic algorithm (RKGA) and a particle swarm optimization (PSO) algorithm. A similar problem was studied by Jiang et al. [13] considering batch transportation, where a hybrid algorithm combining the merits of discrete particle swarm optimization (DPSO) and genetic algorithm (GA) was proposed, and the performance of the proposed algorithms was further improved by a local search strategy. All of these studies show the effectiveness of metaheuristic algorithms in solving batch processing machine problems.

The studies reviewed above mainly solve the problem by first sequencing the jobs into a job list and then grouping the jobs into batches. Different from the existing studies, the metaheuristic algorithm MMAS (Max–Min Ant System) is designed here in a constructive way that combines these two stages of decisions. That is to say, the batches are constructed directly without considering job sequences, and the batches are then processed on the batch processing machine. In the process of batch construction, the jobs to be added to the existing batches can be selected elaborately by considering batch utilization and batch processing time. To improve the global searching ability of MMAS, a local search method based on iteratively exchanging multiple jobs between batches is proposed.

The remainder of this paper is organized as follows. The mathematical model of the problem studied in this paper is presented in Section 2. In Section 3, we describe the detailed MMAS algorithm used to solve the problem under study. The parameter tuning and the numerical experiments are given in Section 4. The paper is concluded in Section 5.

2. Mathematical Model

The problem of scheduling a single batch processing machine is studied, and the objective is to minimize the makespan. The batch processing machine can process several jobs simultaneously as a batch, and all jobs in a batch have the same start and completion times. Processing cannot be interrupted once it begins, and no jobs can be added to or removed from the machine until all jobs in the batch have been finished. The batch processing time is determined by the job with the longest processing time in the batch. All jobs are available at time zero.

Symbols and notations used in this paper are listed as follows:

(1) There are $n$ jobs to be processed, and each job $j$ has a nonidentical processing time $p_j$ and size $s_j$.

(2) The capacity of the machine is $C$, and each job satisfies $s_j \le C$. The job list is scheduled into batches before processing, where $B = \{b_1, b_2, \ldots, b_m\}$ denotes a batch list, that is, a feasible solution, and $m$ denotes the number of batches in $B$. The processing time of each batch $b_k$ equals $P_k = \max_{j \in b_k} p_j$.

(3) The objective is to minimize the makespan ($C_{\max}$), which is equal to the total batch processing time of a solution $B$.

Based on the assumptions and notation given above, the mathematical model of the problem can be formulated as follows; the decision variable $x_{jk}$ equals 1 if job $j$ is assigned to batch $b_k$ and 0 otherwise.
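A formulation consistent with the constraint descriptions below is sketched here in LaTeX; the exact notation and form of the original model may differ.

\begin{align}
\min\quad & C_{\max} = \sum_{k=1}^{m} P_k \tag{1}\\
\text{s.t.}\quad & \sum_{k=1}^{m} x_{jk} = 1, \quad j = 1,\ldots,n, \tag{2}\\
& \sum_{j=1}^{n} s_j \, x_{jk} \le C, \quad k = 1,\ldots,m, \tag{3}\\
& P_k \ge p_j \, x_{jk}, \quad j = 1,\ldots,n,\; k = 1,\ldots,m, \tag{4}\\
& x_{jk} \in \{0,1\}, \quad \forall j, k, \tag{5}\\
& \left\lceil \frac{\sum_{j=1}^{n} s_j}{C} \right\rceil \le m \le n. \tag{6}
\end{align}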

Objective (1) is to minimize the makespan. As only one processing machine is considered, the makespan is equal to the total completion time of all batches formed. Constraint (2) ensures that each job is assigned to exactly one batch. Constraint (3) guarantees that the total size of the jobs in a batch does not exceed the machine capacity $C$. Constraint (4) states that the batch processing time is determined by the job with the longest processing time in that batch. Constraint (5) denotes the binary restriction on the variable $x_{jk}$, which is equal to 1 if job $j$ is assigned to batch $b_k$ and 0 otherwise. Constraint (6) gives the lower and upper bounds on the number of batches $m$ in a feasible solution $B$. The lower bound is calculated by assuming that jobs can be processed partially across batches [4], and the upper bound corresponds to the case where each batch accommodates only one job.

3. Max–Min Ant System

MMAS [14] is one of the most successful variants in the framework of ant colony optimization (ACO) [15, 16], which has been applied to many combinatorial optimization problems such as scheduling problems [17], traffic assignment problems [18], and travelling salesman problems [15]. In MMAS, the pheromone trails are limited to an interval to prevent premature convergence and to exploit the best solutions found during the search. The performance of the algorithm is significantly affected by the values of its parameters; thus parameter tuning is performed to optimize algorithm performance. A local search method is also developed to enhance the search ability of MMAS.

MMAS is a constructive metaheuristic algorithm that builds a solution step by step. It can be adapted to various combinatorial optimization problems with few modifications. Given a list of jobs, MMAS groups the jobs into batches by adding jobs to existing or new batches one at a time. The sequence in which jobs are selected and constructed into batches depends on a state transition probability calculated from the density of the pheromone trails and the heuristic information of each solution element. A solution is generated once all jobs have been arranged into batches.

3.1. Pheromone Trails

When solving the TSP (travelling salesman problem) with ant colony optimization algorithms [15], the pheromone trails are defined by the expectation of choosing city $j$ from city $i$, that is, the amount of pheromone on edge $(i, j)$. The city with a higher density of pheromone trail will be selected with a higher probability. However, in the problem of scheduling a batch processing machine, each solution is a set of batches, and the sequence of jobs in a batch does not affect the batch processing time; thus the pheromone trails imply the expectation of arranging a job into a batch. In this study, we measure the expectation of adding a job $j$ to a batch $b_k$ by using the average pheromone trail between job $j$ and each job in batch $b_k$ as follows:
$$\tau_{j b_k} = \frac{\sum_{i \in b_k} \tau_{ij}}{n_k}, \tag{7}$$
where $\tau_{ij}$ denotes the pheromone trail between job $j$ and an existing job $i$ in batch $b_k$, $\tau_{j b_k}$ denotes the expectation of adding job $j$ to the current batch $b_k$, and $n_k$ denotes the number of jobs in batch $b_k$. This expectation is used as the pheromone trail in the calculation of the state transition probability.
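A minimal Python sketch of this averaging step is given below; the data structures (a pheromone matrix indexed by job and a batch stored as a list of job indices) and the handling of an empty batch are assumptions for illustration.

```python
def batch_pheromone(tau, job, batch, empty_value=1.0):
    """Average pheromone trail between `job` and the jobs already in `batch`, as in (7).

    tau         -- matrix (list of lists) of pairwise job pheromone trails
    job         -- index of the candidate job
    batch       -- list of indices of jobs already placed in the batch
    empty_value -- value returned for an empty batch (an assumption; the paper
                   does not state how a new batch is handled)
    """
    if not batch:
        return empty_value
    return sum(tau[job][i] for i in batch) / len(batch)
```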

3.2. Heuristic Information

As the objective equals the total processing time of all batches formed, the quality of a solution is affected by both the number of batches and the processing times of each batch in the solution. Thus, two kinds of heuristic information are considered in this study, that is, the utilization of machine capacity and efficiency of batch processing time.

Usually, solutions with a smaller number of batches generate better results. To reduce the unoccupied capacity of each batch, we give priority to adding a job to the batch with the highest resulting capacity utilization, in the spirit of the FFD (first fit decreasing) algorithm for the bin packing problem [19]. The first heuristic for adding a feasible job $j$ to batch $b_k$ is defined as follows:
$$\eta^{1}_{jk} = \frac{s_j + \sum_{i \in b_k} s_i}{C}. \tag{8}$$

The processing time of a batch is determined by the job with the longest processing time in the batch. Obviously, jobs with similar processing times should be batched together to increase the efficiency of the batch processing time. Thus, the second heuristic for adding job $j$ to batch $b_k$ is defined as follows:
$$\eta^{2}_{jk} = \frac{\min(p_j, P_k)}{\max(p_j, P_k)}. \tag{9}$$
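A short Python sketch of the two heuristic terms is shown below, assuming the functional forms of (8) and (9) reconstructed above; the neutral value used for an empty batch is an implementation assumption.

```python
def eta_capacity(job_size, batch_sizes, capacity):
    """Capacity-utilization heuristic, as in (8): utilization of the batch after adding the job."""
    return (job_size + sum(batch_sizes)) / capacity

def eta_processing_time(job_time, batch_times):
    """Processing-time heuristic, as in (9): closeness of the job's processing time
    to the current batch processing time (1.0 means a perfect match)."""
    if not batch_times:          # new batch: no penalty (assumption)
        return 1.0
    batch_p = max(batch_times)   # batch processing time = longest job in the batch
    return min(job_time, batch_p) / max(job_time, batch_p)
```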

3.3. Solution Construction

For each ant $a$, a solution is constructed by selecting an unscheduled job and adding it to an existing batch according to a state transition probability $p^{a}_{jk}$. If no existing batch can accommodate job $j$, a new empty batch is created to accommodate it. A solution is generated when all jobs have been scheduled into batches. Since the solution construction depends on the sequence in which jobs are chosen, solution quality is significantly affected by the state transition probability. The probability is determined by the pheromone trails and the heuristic information between the current batch and the job to be scheduled. The state transition probability is defined as follows:
$$p^{a}_{jk} = \begin{cases} \dfrac{[\tau_{j b_k}]^{\alpha}\,[\eta^{1}_{jk}]^{\beta}\,[\eta^{2}_{jk}]^{\gamma}}{\sum_{u \in \Omega_k} [\tau_{u b_k}]^{\alpha}\,[\eta^{1}_{uk}]^{\beta}\,[\eta^{2}_{uk}]^{\gamma}}, & j \in \Omega_k, \\ 0, & \text{otherwise}, \end{cases} \tag{10}$$
where $\Omega_k$ denotes the set of feasible jobs that can be added to the current batch $b_k$ without violating the machine capacity, and $\alpha$, $\beta$, and $\gamma$ express the relative importance of the pheromone trails and the two kinds of heuristic information. For each ant $a$, a feasible job in $\Omega_k$ is selected with probability $p^{a}_{jk}$ and added to the current batch or a new batch until all jobs are scheduled.
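The selection step can be sketched in Python as follows, reusing the helper functions above; the roulette-wheel sampling and the representation of jobs as (processing time, size) pairs are assumptions for illustration.

```python
import random

def select_next_job(unscheduled, batch, jobs, tau, capacity,
                    alpha, beta, gamma):
    """Pick the next job for the current batch using the state transition
    probability of (10).  `jobs` maps a job index (0..n-1) to (processing_time, size);
    `batch` is a list of indices of jobs already in the current batch.
    Returns a job index, or None if no unscheduled job fits the batch."""
    used = sum(jobs[i][1] for i in batch)
    feasible = [j for j in unscheduled if jobs[j][1] + used <= capacity]
    if not feasible:
        return None
    batch_sizes = [jobs[i][1] for i in batch]
    batch_times = [jobs[i][0] for i in batch]
    weights = []
    for j in feasible:
        w = (batch_pheromone(tau, j, batch) ** alpha
             * eta_capacity(jobs[j][1], batch_sizes, capacity) ** beta
             * eta_processing_time(jobs[j][0], batch_times) ** gamma)
        weights.append(w)
    # roulette-wheel selection proportional to the transition probability
    return random.choices(feasible, weights=weights, k=1)[0]
```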

3.4. Update of Pheromone Trails

The density of the pheromone trails is an important factor in the process of solution construction, as it indicates the quality of solution components. After all ants have built their feasible solutions, the pheromone trails on every solution component are updated through pheromone deposit and evaporation. After each iteration, the pheromone trail of each solution component is decreased at the evaporation rate, while the solution components of the iteration-best solution or the global best solution are increased by a quantity $\Delta\tau$. The pheromone update at the $t$th iteration between the solution components of job $i$ and job $j$ is performed according to (11):
$$\tau_{ij}(t+1) = (1-\rho)\,\tau_{ij}(t) + \Delta\tau^{best}_{ij}, \qquad \Delta\tau^{best}_{ij} = \begin{cases} 1/C^{best}_{\max}, & \text{if jobs } i \text{ and } j \text{ share a batch in the best solution}, \\ 0, & \text{otherwise}, \end{cases} \tag{11}$$
where $\rho$ controls the pheromone evaporation rate and $C^{best}_{\max}$ denotes the makespan of the solution obtained by the ant performing the deposit (the iteration-best or global best ant).

The pheromone trails are limited to the interval $[\tau_{\min}, \tau_{\max}]$ in the MMAS algorithm. We set $\tau_{\max} = 1/(\rho\, C^{gb}_{\max})$ and $\tau_{\min} = \tau_{\max}/(2n)$ in this study according to Stützle and Hoos [14], where $C^{gb}_{\max}$ is the global best makespan that the ants have found.
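A sketch of the evaporation, deposit, and trail-limit steps in Python is given below; depositing only along pairs of jobs that share a batch in the best solution follows the description above, while the matrix representation of the trails is an assumption.

```python
def update_pheromone(tau, best_solution, best_makespan, rho, tau_min, tau_max):
    """Evaporate all trails, deposit along the best solution as in (11),
    and clamp every trail to [tau_min, tau_max] as required by MMAS.

    tau           -- square matrix of pairwise job pheromone trails (modified in place)
    best_solution -- list of batches, each a list of job indices
    best_makespan -- makespan of that solution
    """
    n = len(tau)
    deposit = 1.0 / best_makespan
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)            # evaporation
    for batch in best_solution:
        for i in batch:
            for j in batch:
                if i != j:
                    tau[i][j] += deposit        # deposit on co-batched job pairs
    for i in range(n):
        for j in range(n):
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))   # trail limits
```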

3.5. Local Search Algorithm

Solution quality can be effectively improved when local search methods are applied within metaheuristics [20]. This is because greedy strategies are usually adopted in local search methods, and a local optimum can easily be found in the neighborhood of a given solution. The global search ability of metaheuristics can be enhanced by combining their strength in exploring the solution space with local search methods.

As the batch processing time is determined by the job with the longest processing time in the batch, jobs with smaller processing times have no effect on the batch processing time. A local search method can therefore be applied to decrease the total batch processing time by batching jobs with similar processing times together.

Definition 1. The job with the longest processing time in batch $b_k$ is called the dominant job $d_k$. The total processing time of the jobs in $b_k$ excluding the dominant job is called the dominated job processing time, denoted $DT_k = \sum_{j \in b_k} p_j - p_{d_k}$, with $DT = \sum_{k=1}^{m} DT_k$ for a solution.

Proposition 2. The makespan will be minimized by maximizing $DT$.

Proof. We have $C_{\max}(B) = \sum_{k=1}^{m} P_k$ for a given solution $B$. According to Definition 1, $P_k = \sum_{j \in b_k} p_j - DT_k$; thus we have $C_{\max}(B) = \sum_{j=1}^{n} p_j - DT$. As the total job processing time is constant for a given instance, the makespan will be minimized when $DT$ is maximized.

Definition 3. For each batch $b_k$, the sum of its remaining capacity and the size of its dominant job is called the exchangeable capacity $EC_k$; we denote $EC_k = C - \sum_{j \in b_k} s_j + s_{d_k}$.

Proposition 4. Given two batches $b_i$ and $b_j$ with $P_i \ge P_j$, a larger $DT$ will be obtained by exchanging jobs of batch $b_j$ with jobs of batch $b_i$, provided that the exchange respects the exchangeable capacities $EC_i$ and $EC_j$ and reduces the processing time of one batch without increasing that of the other, so that $C_{\max}$ does not increase.

Proof. Given two batches $b_i$ and $b_j$ with $P_i \ge P_j$, the jobs of batch $b_j$ can be divided into three sets: the dominant job $d_j$, the jobs to be exchanged, and the remaining jobs, while the jobs of batch $b_i$ can be divided into the jobs to be exchanged and the remaining jobs, where the exchange respects the exchangeable capacities of both batches. Before the exchange, $C_{\max}(B) = P_i + P_j + \sum_{k \neq i, j} P_k$; after the exchange, $C_{\max}(B') = P'_i + P'_j + \sum_{k \neq i, j} P_k$ with $P'_i \le P_i$ and $P'_j \le P_j$, at least one inequality being strict, so $C_{\max}(B') < C_{\max}(B)$. According to Proposition 2, $C_{\max} = \sum_{j=1}^{n} p_j - DT$; a smaller $C_{\max}$ therefore indicates a larger $DT$. Proposition 4 holds.

According to Proposition 4, multiple jobs can be exchanged iteratively between batches to decrease the total batch processing time. For a given solution $B$, the number of batches is at most the total number of jobs, which corresponds to the case where each job forms its own batch. The detailed procedure of the proposed local search algorithm MJE (Multiple Jobs Exchange) is listed as follows:

Algorithm MJE (Multiple Jobs Exchange)

Step 1. For the iteration-best solution $B$, arrange the batches in decreasing order of their processing times and order the jobs of each batch in decreasing order of job size.

Step 2. Initialize the batch indices $i = 1$ and $j = i + 1$, where $RC_k$ denotes the remaining capacity of batch $b_k$, and $d_i$ and $d_j$ denote the dominant jobs of batch $b_i$ and batch $b_j$, respectively.

Step 3. If all batch pairs have been examined, exit; otherwise move to the next batch pair $(i, j)$.

Step 4. Exchange the dominant job $d_j$ of batch $b_j$ with jobs in batch $b_i$ if the conditions of Proposition 4 are satisfied, that is, the exchangeable capacities $EC_i$ and $EC_j$ are respected and neither batch processing time increases; go to Step 3.

Corresponding notations are illustrated in Figure 1.

The flow chart of algorithm MJE is provided in Figure 2, and a code sketch of the exchange procedure is given below.
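The following minimal Python sketch follows the reconstruction of Proposition 4 given above: for each pair of batches, the dominant job of the shorter batch is tentatively swapped with smaller jobs of the longer batch whenever capacity allows and no batch processing time increases. The pairing order, the greedy choice of jobs to move, and the tie-breaking are assumptions.

```python
def mje(batches, jobs, capacity):
    """Multiple Jobs Exchange (MJE) local search sketch.

    batches  -- list of batches, each a list of job indices
    jobs     -- dict mapping job index to (processing_time, size)
    capacity -- machine capacity C
    Returns the (possibly improved) list of batches.
    """
    def p_time(batch):               # batch processing time = longest job in the batch
        return max(jobs[j][0] for j in batch)

    def used(batch):                 # occupied capacity of a batch
        return sum(jobs[j][1] for j in batch)

    batches = [sorted(b, key=lambda j: -jobs[j][1]) for b in batches]
    batches.sort(key=p_time, reverse=True)                       # Step 1

    improved = True
    while improved:                                               # repeat until no exchange applies
        improved = False
        for i in range(len(batches)):
            for j in range(i + 1, len(batches)):
                bi, bj = batches[i], batches[j]
                dj = max(bj, key=lambda x: jobs[x][0])            # dominant job of b_j
                # candidate jobs of b_i with processing time below that of d_j
                out = [x for x in bi if jobs[x][0] < jobs[dj][0]]
                # greedily pick jobs of b_i that fit the capacity freed in b_j (assumption)
                room_in_j = capacity - used(bj) + jobs[dj][1]
                moved, size_moved = [], 0
                for x in out:
                    if size_moved + jobs[x][1] <= room_in_j:
                        moved.append(x)
                        size_moved += jobs[x][1]
                if not moved:
                    continue
                # d_j must fit into b_i after the selected jobs leave it
                if used(bi) - size_moved + jobs[dj][1] > capacity:
                    continue
                new_bi = [x for x in bi if x not in moved] + [dj]
                new_bj = [x for x in bj if x != dj] + moved
                # accept only if neither batch processing time increases (Proposition 4)
                if p_time(new_bi) <= p_time(bi) and p_time(new_bj) < p_time(bj):
                    batches[i], batches[j] = new_bi, new_bj
                    improved = True
    return batches
```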

3.6. Algorithm Framework of MMAS with Local Search

According to the basic framework of ant algorithms [16], the framework of MMAS for solving the problem under study is given as follows.

Algorithm MMAS

Step 1. Initialize the parameters of MMAS, including α, β, γ, and ρ.

Step 2. Initialize ant population.

Step 3. For each ant, select an unscheduled job and add it to a new batch.

Step 4. Arrange the next unscheduled feasible job with the maximum state transition probability calculated by (10) into the current batch.

Step 5. If there are remaining jobs that can be added to the current batch, go to Step 4.

Step 6. If there are unscheduled jobs remaining, go to Step 3.

Step 7. Apply the local search method MJE to the iteration-best solution.

Step 8. Update pheromone trails according to formula (11).

Step 9. If the termination condition is satisfied, output the solution; otherwise go to Step 2.

To illustrate the procedure of MMAS more clearly, the flow chart of algorithm MMAS is provided in Figure 3, and a sketch of the overall loop in code is given below.
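The overall loop can be sketched in Python as follows, reusing the helper functions from Sections 3.1–3.5; the initialization values, the update of the trail limits, and the termination condition (a fixed number of iterations) are assumptions for illustration.

```python
import random

def mmas(jobs, capacity, n_ants=30, n_iter=80,
         alpha=1.0, beta=1.0, gamma=1.0, rho=0.6):
    """Sketch of the MMAS framework for single batch processing machine scheduling.

    jobs -- dict mapping job index (0..n-1) to (processing_time, size)
    Returns the best batch list found and its makespan.
    """
    n = len(jobs)
    tau_max, tau_min = 1.0, 1.0 / (2 * n)       # initial trail limits (assumption)
    tau = [[tau_max] * n for _ in range(n)]     # trails start at the upper limit
    best_batches, best_makespan = None, float("inf")

    for _ in range(n_iter):
        iter_best, iter_best_mk = None, float("inf")
        for _ in range(n_ants):
            unscheduled = set(jobs)
            batches = []
            while unscheduled:
                batch = []
                job = random.choice(list(unscheduled))            # Step 3: open a new batch
                while job is not None:
                    batch.append(job)
                    unscheduled.remove(job)
                    job = select_next_job(unscheduled, batch, jobs, tau,
                                          capacity, alpha, beta, gamma)   # Steps 4-5
                batches.append(batch)
            batches = mje(batches, jobs, capacity)                # Step 7: local search
            makespan = sum(max(jobs[j][0] for j in b) for b in batches)
            if makespan < iter_best_mk:
                iter_best, iter_best_mk = batches, makespan
        if iter_best_mk < best_makespan:
            best_batches, best_makespan = iter_best, iter_best_mk
        tau_max = 1.0 / (rho * best_makespan)                     # trail limits (assumption)
        tau_min = tau_max / (2 * n)
        update_pheromone(tau, iter_best, iter_best_mk, rho, tau_min, tau_max)  # Step 8
    return best_batches, best_makespan
```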

4. Experimentation

4.1. Experimental Design

Random instances were generated to verify the effectiveness of the proposed algorithm. Three factors were considered in the numerical experiments: the number of jobs, the job processing times, and the job sizes. Twenty-four problem categories were generated and denoted in the form (number of jobs; processing time level; job size level). The factors and levels are shown in Table 1. The machine capacity is assumed to be 10 for all problem instances.

For example, the category with 10 jobs represents instances whose processing times and sizes were randomly generated from the corresponding discrete uniform distributions listed in Table 1.

4.2. Parameter Tuning

As each ant builds feasible solutions probabilistically, a large ant population enhances the algorithm's ability to explore the solution space, while more iterations help to make better use of the pheromone trails and heuristic information. However, the algorithm is much more time consuming with larger numbers of ants and iterations. Preliminary tests showed that increasing the number of iterations yields better results for a given run time. This is easy to understand, as the algorithm tends to search randomly in the initial stages, and more empirical information is accumulated as the iterations increase. To obtain a tradeoff between solution quality and time cost, the ant population was set to 30 and the number of iterations to 80 in this study.

It can be seen from formula (10) that the state transition probability depends exponentially on the pheromone trails and the heuristic information. Thus, the probability is more sensitive to these factors for larger values of the parameters α, β, and γ. To examine the influence of these parameters, preliminary tests were also conducted to choose the parameter levels. The parameter α was fixed as a reference level to study the effect of β and γ on the makespan over a range of values. The results are shown in Figure 4.

It can be observed from Figure 4 that a medium β and a smaller γ should be used for categories with small jobs to obtain a better makespan, while a larger γ should be adopted for categories with large jobs. For categories with mixed job sizes, smaller values of β and a medium γ seem better compared with the former two cases. All instances give poor results when β is too close to 1. Large jobs usually have lower flexibility when arranged into batches compared with small jobs, so a high level of γ is used to arrange large jobs first with high probability for instances with large jobs. Similarly, a large β is used to batch jobs with similar processing times together to increase the efficiency of the batch processing time. According to the analysis above, three levels for each parameter are selected in this study, as shown in Table 2.

Pheromone trails evaporate at each iteration, and the speed of evaporation is controlled by the evaporation rate ρ (0 < ρ < 1). A high evaporation rate leads to constant change of the pheromone trails on each solution element, while a low rate means that the pheromone trails on solution elements cannot evaporate in time.

The pheromone trail of each solution element is limited to the interval $[\tau_{\min}, \tau_{\max}]$ in the MMAS algorithm. The pheromone trails are usually set to the high value $\tau_{\max}$ at the beginning of a run in order to improve the searching ability in the solution space. If a solution element is always in the state of evaporation and its pheromone trail decreases to exactly $\tau_{\min}$ after $t$ iterations, then we have $\tau_{\max}(1-\rho)^{t} = \tau_{\min}$. Since $\tau_{\min} = \tau_{\max}/(2n)$ as mentioned above, it can be derived that $(1-\rho)^{t} = 1/(2n)$; that is, $t = \ln(2n)/\ln(1/(1-\rho))$, where $n$ is the number of jobs. The relationship between $t$ and ρ for 100 jobs is described in Figure 5.
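A small Python check of this relationship, under the trail-limit assumptions above, reproduces the qualitative trend described for Figure 5:

```python
import math

def iterations_to_tau_min(n_jobs, rho):
    """Iterations for a trail to fall from tau_max to tau_min = tau_max/(2n)
    under pure evaporation, i.e. solve tau_max * (1 - rho)**t = tau_max / (2 * n_jobs)."""
    return math.log(2 * n_jobs) / -math.log(1.0 - rho)

for rho in (0.1, 0.3, 0.6, 0.9):
    print(rho, round(iterations_to_tau_min(100, rho), 1))
# t grows sharply as rho decreases; around rho = 0.6 the decay is already fast
```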

It can be seen from Figure 5 that $t$ increases dramatically when ρ becomes small; thus ρ is set to 0.6 in this study so that the pheromone trails evaporate neither too quickly nor too slowly and the solution space can be explored in a reasonable run time.

4.3. Performance Comparison for Small Scale Instances

As shown in Table 3, the results obtained with MMAS are compared with those from CPLEX (a commercial solver for linear and mixed-integer problems). Due to the NP-hardness of the problem, only the small scale problem categories were solved with CPLEX. The minimum, maximum, and average values of the makespan reported by MMAS are listed in columns 3 to 5. The average run times of MMAS and CPLEX are given in columns 7 and 8. Column 6 indicates the percentage of optimal values obtained by MMAS compared with the results from CPLEX.

Nearly all small scale instances can be solved optimally by MMAS, except for several instances from the categories with a relatively larger solution space. MMAS is competitive with CPLEX in computational time for all instances: the computational time of MMAS is less than 1 second for every instance, whereas the computational time of CPLEX varies greatly depending on the solution space of an instance.

4.4. Algorithm Performance Evaluation

To evaluate the performance of the algorithm for all problem categories, the MMAS designed in this paper was compared with GA [9]. A basic ant algorithm, ant cycle (AC), was also coded to solve the problem, with the parameters set as prescribed in Cheng et al. [21]. Two well-known heuristics, FFLPT and BFLPT, were also compared with MMAS in the experimental study. 100 instances were generated for each problem category, and the best result of 10 runs for each instance was used.

The comparison of the various algorithms is summarized in Table 4. Column 1 in Table 4 presents the run code. Column 2 shows how the results of MMAS compare with the other algorithms, where B, E, and I mean that the makespan obtained by MMAS is better than, equal to, or inferior to that of the other algorithm, respectively. Columns 3–6 report, for the 100 instances of each problem category, the proportion of instances for which the makespan obtained by MMAS compares in this way with that of the other algorithms; columns 7–10 and columns 11–14 are organized similarly. For example, the first entry of 0.01 indicates that MMAS reported better results than AC for a proportion of 0.01 of the instances. As the problem under study is strongly NP-hard, the solution space explodes as the number of jobs increases. On the other hand, smaller job sizes allow more combinations of jobs per batch; thus, for a given number of jobs, the solution space is smaller for the categories with large jobs than for those with small jobs.

It can be seen from Table 4 that MMAS outperforms the other algorithms on the whole. The several results reported by MMAS that are inferior to those generated by other algorithms are marked with rectangles. For the smallest problem categories, the results obtained from MMAS are mostly equal to those from the other algorithms. About 16% of the results from MMAS are better than those of BFLPT and FFLPT for two of the small categories; the percentage is 7% for another category, because more results coincide across all algorithms in a small solution space. Up to 14% of the results reported by MMAS are better than GA for one problem category. The advantage of MMAS over GA, BFLPT, and FFLPT is much more remarkable for the larger problem categories. MMAS also finds more better results than AC, with percentages of 6% to 13% for these categories. More than 80% of the results obtained from MMAS are better than those from GA, BFLPT, and FFLPT for the next larger categories, and about 23% to 43% of the results from MMAS are better than AC, while the latter reports few better results. For the largest problem categories, more than 90% of the results generated by MMAS are better than those from AC, GA, BFLPT, and FFLPT for large job population instances. Almost all of the other algorithms report several results better than MMAS for a few problem categories.

Based on the analysis given above, MMAS possesses a higher ability to explore the solution space. It outperforms all other algorithms for almost all instances, especially for large job populations. As MMAS explores the solution space probabilistically, the optimal solution cannot be guaranteed with a limited population and number of iterations. Therefore, several results obtained from MMAS are inferior to other algorithms, even to effective heuristics like BFLPT and FFLPT.

The makespan increases for some problem categories compared with others, but there is no significant change in the relative performance of the various algorithms.

To illustrate the gap between MMAS and the other algorithms for different population sizes, the average percentage of results with a better makespan obtained by MMAS compared with the other algorithms is given in Figure 6.

On average, about 10% of the results obtained from MMAS are better than those from the other algorithms for the smallest problem categories, while the figure is 70% for the largest ones. Algorithm AC is the second best algorithm, ahead of GA and the two heuristics. BFLPT and FFLPT are both effective in solving the problem, especially for small population sizes. BFLPT is generally better than FFLPT for almost all problem categories, and BFLPT is also used in GA to generate initial solutions.

5. Conclusions and Future Work

The problem of scheduling a single batch processing machine was studied in this paper. An improved ant algorithm, MMAS (Max–Min Ant System), was designed and applied to solve the problem. The parameters of MMAS were determined by a preliminary experiment, and a local search method, MJE, was presented to improve the performance of MMAS. Optimal objectives can be obtained by MMAS in less computational time compared with CPLEX for small scale problems. To evaluate the performance of MMAS, it was compared with the basic ant algorithm AC (Ant Cycle), GA (Genetic Algorithm), and two well-known heuristics for scheduling batch processing machines, FFLPT (First Fit Longest Processing Time) and BFLPT (Best Fit Longest Processing Time). The numerical experiments showed that MMAS outperformed the other algorithms for almost all instances in the various problem categories, especially for larger job populations.

Considering the good performance of MMAS, better heuristic mechanisms can be designed and applied to other batch processing machine scheduling problems. The problem under study can also be extended to other objectives and constraints, including release times and due dates.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the Fundamental Research Funds for the Central Universities (no. 2014QNA48) and National Natural Science Foundation of China (no. 71401164).