Abstract

Effective control of the heat-treatment operation is essential to reducing manufacturing cost in mould manufacturing. The heat-treatment shop floor is a flow shop with parallel batch processors and incompatible job families. The jobs differ from each other in product type, size, release time, and due-date. The scheduling objective is to minimize manufacturing cost, which comprises energy cost, subcontracting cost, and job tardiness penalties. A hierarchical production planning structure is proposed that contains three decision-making levels: (1) balancing capacity and demand, (2) machining at the quenching stage, and (3) machining at the tempering stage. At the first level, a periodic rolling scheduling heuristic is proposed to balance capacity and demand in the coming period on the heat-treatment shop floor. At the second and third levels, two new look-ahead batching heuristics are proposed. An extensive computational experiment is conducted to verify the effectiveness of the proposed heuristics.

1. Introduction

Product quality, delivery, and price are the three elements that customers care most about in the mould market. Most mould jobs require a heat-treatment operation between cutting machining and finish machining. The heat-treatment process is crucial to ensuring the precision, strength, and life-span of the mould (cf. [1]). Companies that offer on-time delivery generally have a better chance of retaining customers and receiving subsequent orders. However, a major challenge faced by the mould industry is the difficulty of meeting customers’ due-dates. Heat-treatment is an important process in mould manufacturing for the timely delivery of jobs, yet the mould heat-treatment furnace is a scarce resource and thus often becomes the bottleneck, leading to frequent delays of the heat-treatment operation. Heat-treatment is also an energy-intensive process (cf. [2]); jobs often need to be heated for dozens of hours in furnaces that consume large amounts of power. Controlling the manufacturing cost of heat-treatment is therefore important for reducing the cost of mould production. Raw material cost, equipment depreciation, energy cost, subcontracting cost, and staff wages are the dominant manufacturing costs on the heat-treatment shop floor. In addition, a hidden cost that cannot be ignored is the cost of job tardiness penalties. When raw material, equipment depreciation, and wages remain unchanged, controlling the amounts of energy consumption, subcontracting, and tardiness is critical to controlling the manufacturing cost. The scheduling objective in this paper is to minimize the manufacturing cost of heat-treatment.

The heat-treatment shop floor is a flow shop that contains a quenching process followed by a tempering process. The flow chart of the heat-treatment shop floor is shown in Figure 1. There are multiple heat-treatment furnaces at the quenching stage and at the tempering stage, respectively. A heat-treatment furnace allows simultaneous processing of multiple mould jobs, which differ from each other in material, size, and due-date. Different materials require different heating temperatures and processing methods in heat-treatment. For example, for mould jobs made from Cr12MoV or H13, the tempering operation must be performed three times after each quenching operation in order to increase the hardenability, whereas most other materials need only one tempering operation after quenching. Thus, only mould jobs of the same material can be processed in the same furnace. Batch processing times are constant, specific to the product material, and do not depend on the number and sizes of the products in the batch.

Heat-treatment is usually not the first operation in the mould manufacturing system: jobs arrive at the heat-treatment shop floor dynamically from upstream operations. Since mould making is order-oriented production, the processing time of a job for a new mould often deviates from expectation. If the production plan of the heat-treatment shop floor is made over a fixed period, the actual batches and machining times will usually differ from the plan, because the arrival times of jobs at the heat-treatment shop floor rarely match expectation. Establishing an effective scheduling mechanism that improves the robustness of the production plans has long troubled workshop managers. Moreover, the heat-treatment furnace is the bottleneck resource in the mould manufacturing system. In production seasons when demand exceeds capacity, managers often subcontract some mould jobs to other companies in order to reduce lateness. However, subcontracting requires a longer preparation time and incurs higher cost. Therefore, how to choose suitable jobs for subcontracting is another decision issue in this study.

This paper focuses on the dynamic control of parallel batch machines in the mould heat-treatment shop floor, which is subject to nonidentical jobs with different product types, sizes, release times, and due-dates. A hierarchical production planning structure is proposed to minimize manufacturing cost, including energy cost, subcontracting cost, and jobs’ tardiness penalties cost. A periodic rolling scheduling strategy is proposed to select the subcontracting jobs. Based on a look-ahead batch control strategy, two machining control strategies at the quenching stage and tempering stage are proposed, respectively.

The remainder of the paper is organized as follows. Section 2 gives a short review of the related literature. Section 3 describes the mould heat-treatment system and the problem definition and proposes a hierarchical production planning structure. Section 4 presents the heuristic algorithms for the hierarchical production planning. To demonstrate the potential of the proposed heuristics, a computational experiment is designed and its outcomes are analyzed in Section 5. Finally, conclusions are drawn in Section 6.

2. Literature Review

Production control of batch processes has been widely studied in the literature. There are two decision approaches to assigning resources to jobs: scheduling and real-time control.

Scheduling is the task of allocating available production resources (machines, labor, material, etc.) to jobs over time with full knowledge of future job arrivals and system status (cf. [3]). Jia et al. [4] presented a metaheuristic for makespan minimization on parallel batch machines with nonidentical job sizes and incompatible job families. Hulett et al. [5] proposed a particle swarm optimization algorithm for minimizing total weighted tardiness on nonidentical parallel batch processing machines. Arroyo and Leung [6] proposed an effective iterated greedy algorithm for scheduling unrelated parallel batch machines with nonidentical capacities and unequal ready times. Su et al. [7] established a model of steel-making and casting production and proposed a fuzzy genetic algorithm for the problem. Gokhale and Mathirajan [8] addressed a scheduling problem, observed in automobile gear manufacturing, with the objective of minimizing total weighted tardiness.

Real-time control strategies of batch processing machine (BPM) can be categorized into two policies, threshold policy and look-ahead policy. The threshold policy is applied when there is no future arrival information available. Neuts [9] introduced the minimum batch size (MBS) control strategy which is the most basic control strategy of threshold policy. A few works applied the MBS policy for systems with nonidentical jobs and parallel machines (cf. [10, 11]).

The look-ahead policy is applied when limited near-future arrival information is available to the decision-maker. Glassey and Weng [12] were perhaps the first to use near-future information for real-time BPM control. Early research in this domain mainly focused on cycle-time-related performance, but recently more and more researchers have addressed real-time BPM control with due-date considerations. Gupta and Sivakumar [13] presented a look-ahead batch (LAB) control strategy for the single-product, single-batch-machine system with due-date objectives such as average tardiness, maximum tardiness, and the number of tardy jobs in batch processes. Later, Gupta et al. [14] improved LAB to control delivery performance, which involves a trade-off between the conflicting objectives of simultaneously minimizing earliness- and tardiness-related measures. Liu et al. [15] proposed a new LAB strategy to control two kinds of conflicting objectives related to delivery and utilization performance, achieving a trade-off based on the compromise programming method. Çerekçi and Banerjee [16] proposed a new LAB control strategy for controlling the cycle-time performance of batch processors.

Based on the LAB strategy, three control strategies are proposed in this paper for the three decision-making levels in a mould heat-treatment shop floor, with the objective of minimizing the manufacturing cost, including energy cost, subcontracting cost, and job tardiness penalties. The scheduling environment we study is a flow shop with parallel batch machines and nonidentical jobs. To the best of our knowledge, this is the first application of the LAB strategy in this setting.

3. Problem Description

There are parallel quenching furnaces and parallel tempering furnaces in the flow shop. Every job must go through the quenching process and then the tempering process. The heat-treatment process is not the first operation in mould manufacturing; jobs arrive at the heat-treatment shop floor dynamically from upstream shop floors. The jobs differ from each other in product type $i$, size $s_j$, release time $r_j$, and due-date $d_j$. The larger the release time and the processing time, the later the due-date. Only jobs of the same product type can be placed in the same batch. The quenching processing time and tempering processing time of each batch depend on the product type, regardless of the job sizes. Other assumptions made for the problem are summarized as follows. The batch capacity is limited by the maximum loading size $W$ of the furnace. No job can be added to or removed from a batch while the batch is being processed. Once a batch processing machine is started, it cannot be interrupted.

The objective is to minimize the total manufacturing cost $C$, which includes the job tardiness penalties cost $TC$, the subcontracting cost $SC$, and the energy cost $EC$. In mould manufacturing, the completion time of the whole mould is usually determined by a few key mould jobs; once these jobs are tardy, the production progress of the whole mould is seriously affected. Hence, the key jobs are assigned a higher tardiness penalty cost rate so that they are processed with higher priority. The subcontracting cost $SC$ is proportional to the sizes of the subcontracted jobs. The energy cost of each batch is proportional to the energy cost rate $\beta$ and the processing time, regardless of the number and sizes of the jobs in the batch.

The following notation is used throughout this paper:

$J$: set of jobs
$B$: set of batches
$I$: set of product types
$M^{Q}$: set of machines at the quenching stage
$M^{T}$: set of machines at the tempering stage
$p^{Q}_{i}$: quenching processing time for product type $i$
$p^{T}_{i}$: tempering processing time for product type $i$
$s_{j}$: size of job $j$
$r_{j}$: release time of job $j$
$d_{j}$: due-date of job $j$
$W$: capacity of a machine
$\alpha_{j}$: tardiness penalty cost rate of job $j$ (per hour)
$\beta$: energy cost rate (per hour)
$\gamma_{j}$: subcontracting cost rate of job $j$ (per litre)
$C_{j}$: completion time of job $j$
$T_{j}$: tardiness of job $j$, $T_{j} = \max(0, C_{j} - d_{j})$
$y_{j}$: 1 if job $j$ is completed by the subcontracting mode; 0 otherwise
$x_{ib}$: 1 if product type $i$ is in batch $b$; 0 otherwise
$z_{jb}$: 1 if job $j$ is in batch $b$; 0 otherwise
$TC$: tardiness penalties cost
$SC$: subcontracting cost
$EC$: energy cost

The objective function is described as follows:

$$\min\; C = TC + SC + EC,$$

where

$$TC = \sum_{j \in J} \alpha_{j} T_{j}, \qquad SC = \sum_{j \in J} \gamma_{j} s_{j} y_{j}, \qquad EC = \beta \sum_{b \in B} \sum_{i \in I} x_{ib}\left(p^{Q}_{i} + p^{T}_{i}\right).$$
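
To make the cost structure concrete, the following Python sketch computes the three cost components for a completed schedule. The dictionary fields (penalty_rate, completion, and so on) are illustrative assumptions, not notation from the original model.

```python
# Illustrative sketch of the manufacturing-cost computation; field names are assumptions.
def manufacturing_cost(jobs, batches, energy_rate):
    """Total cost = tardiness penalties + subcontracting cost + energy cost."""
    tardiness_cost = sum(
        job["penalty_rate"] * max(0.0, job["completion"] - job["due_date"])
        for job in jobs
    )
    subcontract_cost = sum(
        job["subcontract_rate"] * job["size"]
        for job in jobs if job["subcontracted"]
    )
    # The energy cost of a batch depends only on its processing time, not on its load.
    energy_cost = energy_rate * sum(
        batch["quench_time"] + batch["temper_time"] for batch in batches
    )
    return tardiness_cost + subcontract_cost + energy_cost


if __name__ == "__main__":
    jobs = [
        {"penalty_rate": 20, "completion": 18, "due_date": 15,
         "size": 40, "subcontract_rate": 5, "subcontracted": False},
        {"penalty_rate": 10, "completion": 12, "due_date": 20,
         "size": 30, "subcontract_rate": 5, "subcontracted": True},
    ]
    batches = [{"quench_time": 7, "temper_time": 8}]
    print(manufacturing_cost(jobs, batches, energy_rate=100))  # 60 + 150 + 1500 = 1710
```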

The necessary near-future information can be obtained from the computer-integrated manufacturing environment. However, owing to various stochastic and uncertain factors in mould manufacturing, the arrival times of jobs are never perfectly accurate. Generally speaking, only the arrival times of jobs in the upstream operation nearest to heat-treatment are relatively accurate (called the predicted future), while the arrival times of jobs in the other upstream operations are difficult to predict accurately and are called the roughly predicted future or the random future. The division of the jobs in the upstream operations is shown in Figure 2.

The mould jobs arrive at the heat-treatment shop floor dynamically from upstream shop floors. The workshop manager must make the production plan for these jobs and choose some jobs to be subcontracted when demand exceeds capacity. Traditionally, the production plan is made over a fixed period. However, if the fixed period is too long, the actual batches and their machining times will deviate from the production plan, since the release times of jobs in the roughly predicted future are inaccurate; if the fixed period is too short, there is not enough time for subcontracting, since subcontracting requires a longer preparation time. Therefore, the batch plan, machining plan, and subcontracting plan are hard to make simultaneously. Hierarchical production planning is proposed here to resolve this issue. As shown in Figure 3, the hierarchical production planning contains three decision-making levels: balancing capacity and demand, machining at the quenching stage, and machining at the tempering stage.

A periodic rolling scheduling (PRS) strategy is used to balance capacity and demand in the coming period; it does not need to assign a mould lot to a specific machine. The decision point rolls forward by a fixed period. Subcontracting is frequently used in the heat-treatment shop floor. Since subcontracting requires a longer preparation time, the control decision needs a longer look-ahead, so more upstream operations should be considered in the predictive horizon. The upcoming jobs in both the predicted future and the roughly predicted future are considered at the decision point. The jobs to be processed in this workshop are placed in a buffer, which is assumed to have unlimited storage capacity.

The quenching stage machining (QSM) strategy is used to choose suitable jobs, put them in a batch, and determine the starting time of the batch. A decision point occurs when one of the quenching furnaces becomes available or a new job arrives at an idle quenching furnace. Robustness of the control decision is important at this stage; hence, only the jobs in the buffer and the upcoming jobs in the predicted future are considered at the decision point.

Assume that all the furnaces for quenching and tempering have the same capacity, and that no batch is split between the quenching stage and the tempering stage. When more than one batch is waiting for processing, the controller must choose which batch to process first. Traditionally, the first-come-first-served (FCFS) rule is applied to this issue. However, under the FCFS rule the jobs with higher priority usually cannot be processed first, which often causes a large tardiness penalties cost. The tempering stage machining (TSM) strategy is used to choose the batch to be processed first. A decision point occurs when one of the tempering furnaces becomes available or a new batch arrives at an idle tempering furnace.

Example 1. There are 10 jobs to be processed. All the jobs are in the buffer or in the predicted future. The release times, due-dates, sizes, tardiness penalty cost rates, and product types of the jobs are given in Table 1. There are two parallel machines at the quenching stage and two at the tempering stage. All machines have the same capacity of 100 litres. The quenching and tempering processing times are (7, 8) hours for type A jobs and (8, 10) hours for type B jobs. The energy cost rate is 100 per hour. The preparation time for subcontracting is 10 hours, and the subcontracting processing time is 1.2 times the original processing time. The next section introduces the solution process for this example.

4. Heuristic Algorithms

4.1. Periodic Rolling Scheduling (PRS) Heuristic

The periodic rolling scheduling heuristic is introduced in this section. Its purpose is to balance capacity and demand in the coming period on the heat-treatment shop floor. The fixed period over which the decision point looks ahead for near-future arrivals of upcoming jobs is called the look-ahead window. Since subcontracting requires preparation time, a wider look-ahead window allows better control. However, it is difficult to accurately predict the arrival times of jobs that will arrive more than a week later. Hence, the width of the look-ahead window is set to 6 days, although adjustment of the look-ahead window is allowed.

Since jobs do not arrive in a uniform manner, they may arrive intensively in some periods of the look-ahead window and only sparsely in others. In order to control the load in each period, the look-ahead window is uniformly divided into several control cycles; the number of control cycles can be set by the controller. Since jobs need not be assigned to a specific machine in this scheduling strategy, only the capacity of each control cycle needs to be calculated, without analyzing the capacity of each machine. The capacity of a control cycle is equal to the sum of the working hours of all the machines in that control cycle. The jobs are classified according to product type and sorted in nondecreasing order of due-dates. They are then placed in batches, and the demands of the quenching stage and the tempering stage are calculated, respectively. The demand of the quenching stage is equal to the sum of the quenching processing times of all the batches; the demand of the tempering stage is calculated in the same way.

Once the demand exceeds the capacity at either of the two stages in a control cycle, some suitable jobs are chosen to be subcontracted or left to the next control cycle. The suitable jobs are chosen by the following two principles: (1) the total size of the subcontracted jobs should be as small as possible, since subcontracting incurs extra cost; (2) assuming that the jobs left to the next control cycle will be started at the earliest time of that cycle, a job is subcontracted only if its tardiness under subcontracting is less than its tardiness when left to the next control cycle. The procedure of the periodic rolling scheduling strategy is presented in Box 1.

Set the width of the look-ahead window to 20 hours. By running the PRS algorithm, job J8 in Example 1 is chosen for subcontracting.
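
Since Box 1 gives the PRS procedure only in outline, the following Python sketch illustrates the balancing step for one control cycle under simplifying assumptions: the two stages are lumped into a single capacity figure, and the job fields, parameter names, and the tardiness comparison of principle (2) are illustrative rather than taken from the original.

```python
import math

def balance_control_cycle(jobs, quench_time, temper_time, capacity,
                          cycle_capacity, next_cycle_start,
                          subcontract_prep=10.0, subcontract_factor=1.2):
    """Return the jobs chosen for subcontracting in the current control cycle."""

    def demand(job_set):
        # Batch the jobs of each product type and sum the batch processing times.
        total = 0.0
        for ptype in {j["type"] for j in job_set}:
            load = sum(j["size"] for j in job_set if j["type"] == ptype)
            n_batches = math.ceil(load / capacity)
            total += n_batches * (quench_time[ptype] + temper_time[ptype])
        return total

    # Principle (1): try to remove small jobs first to keep the subcontracted size low.
    remaining = sorted(jobs, key=lambda j: j["size"])
    subcontracted = []
    while remaining and demand(remaining) > cycle_capacity:
        job = remaining.pop(0)
        proc = quench_time[job["type"]] + temper_time[job["type"]]
        finish_sub = job["release"] + subcontract_prep + subcontract_factor * proc
        finish_defer = next_cycle_start + proc
        # Principle (2): subcontract only if it is no more tardy than deferring the job.
        if max(0.0, finish_sub - job["due"]) <= max(0.0, finish_defer - job["due"]):
            subcontracted.append(job)
        # Otherwise the job is simply left to the next control cycle.
    return subcontracted
```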

4.2. Quenching Stage Machining (QSM) Heuristic

The quenching stage machining heuristic chooses suitable jobs and puts them in a batch for the current machine at the quenching stage. The LAB strategy has been widely used to control a single batch machine with a single product type; in this section, it is refined for multiple parallel batch machines and multiple product types. The quenching stage machining heuristic follows a four-step algorithm.

Step 1 (read the jobs that are in the buffer or in the predicted future). In common practice, at any decision point a batch is formed from the arrived jobs in the buffer. However, a high-priority job often arrives in the near future and needs to be given priority for loading. Therefore, the decision point looks ahead for the upcoming jobs in the predicted future. In order to ensure the robustness of scheduling, only the jobs in the buffer and the upcoming jobs in the predicted future are considered at the decision point.

Step 2 (construct suggested scenarios for each product type individually). For each product type, at the decision point or at any time a new job arrives, a suggested scenario is formed. Each suggested scenario forms a batch in which all the arrived jobs of the same type are considered. The suggested start processing time of the batch is the latest arrival time of all the considered jobs. The size of the batch is equivalent to either partial or full machine capacity. If the total size of the considered jobs in a suggested scenario exceeds the machine capacity, the suitable jobs are chosen by combining the earliest due-date (EDD) rule and the largest size (LS) rule.
According to the data in Example 1, four suggested scenarios for product type A are constructed in Figure 4. At the decision point, scenario 1 is constructed by considering all four type A jobs. First, the job with the earliest due-date is put in the batch according to the EDD rule. If the job with the next-earliest due-date were also chosen, the cumulative size would be 110 L, which exceeds the capacity constraint; therefore, the remaining jobs are chosen by the LS rule, and the larger of them is considered first and put in the batch. Thus, two jobs are placed in the batch in scenario 1. When a new type A job arrives, scenario 2 is constructed. Supposing the due-date and size of this job are (3, 30 L), two jobs are chosen for the batch based on the EDD rule and the LS rule. Even though the newly arrived job is not chosen, time 1 is set as the start processing time of the batch in scenario 2, since it is the latest arrival time of all the jobs considered. When scenario 3 and scenario 4 are constructed, the type B job need not be considered, since it is not of type A.
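
A small Python sketch of the batch-forming rule used in Step 2 follows: jobs are taken in EDD order, and once a job no longer fits within the furnace capacity, the remaining space is filled by the LS rule. The job fields are illustrative assumptions.

```python
# Illustrative sketch of forming one suggested scenario for a single product type.
def form_batch(jobs, capacity):
    """Fill a batch by the EDD rule; once a job no longer fits, switch to the LS rule."""
    batch, load = [], 0
    remaining = sorted(jobs, key=lambda j: j["due"])       # EDD order
    while remaining:
        job = remaining.pop(0)
        if load + job["size"] <= capacity:
            batch.append(job)
            load += job["size"]
        else:
            # Capacity exceeded: fill the leftover space with the largest jobs that fit.
            for cand in sorted([job] + remaining, key=lambda j: -j["size"]):
                if load + cand["size"] <= capacity:
                    batch.append(cand)
                    load += cand["size"]
            break
    # The suggested start time is the latest arrival time of all considered jobs.
    start_time = max(j["release"] for j in jobs)
    return batch, start_time
```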

Step 3 (compare the suggested scenarios and set the best suggested scenario as the suggested decision for each product type individually). The suggested scenario with the lowest sum of surplus capacity cost and penalty cost is set to be the suggested decision. The surplus capacity cost is defined as the energy wasted because the batch is not full. Suppose that there are $n_0$ jobs of type $i$ in the buffer and $q_0$ jobs of type $i$ in the predicted future. At a particular decision point $t_0$, a job of type $i$ arrives and a suggested scenario is constructed. The cost of surplus capacity is then calculated from the unused capacity of the batch.
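
The surplus capacity cost is only described verbally above. One plausible formalization, assuming the wasted energy is the energy cost rate $\beta$ charged over the quenching processing time $p^{Q}_{i}$ in proportion to the unused fraction of the furnace capacity $W$, is:

```latex
% Assumed form of the surplus capacity cost of a suggested scenario for type i,
% where b is the batch formed in the scenario.
C_{\mathrm{surplus}} = \beta \, p^{Q}_{i} \, \frac{W - \sum_{j \in b} s_{j}}{W}
```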

Suppose that the $q_0$-th job is the latest job of type $i$ in the predicted future, with release time $r_{q}$. In order to compare the penalty cost of each suggested scenario, $\max(t_0, r_{q})$ is set as the start processing time of the jobs that are not chosen for the batch. The penalty cost of the scenario is then calculated accordingly.
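
Similarly, a plausible form of the penalty cost, assuming that the jobs placed in the batch $b$ start at the suggested time $t_{s}$, that the left-out jobs start at $\max(t_{0}, r_{q})$, and that a job's completion time is its start time plus the quenching and tempering processing times, is:

```latex
% Assumed form of the penalty cost of a suggested scenario for type i.
C_{\mathrm{penalty}} =
  \sum_{j \in b} \alpha_{j} \max\!\left(0,\; t_{s} + p^{Q}_{i} + p^{T}_{i} - d_{j}\right)
+ \sum_{j \notin b} \alpha_{j} \max\!\left(0,\; \max(t_{0}, r_{q}) + p^{Q}_{i} + p^{T}_{i} - d_{j}\right)
```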

Step 4 (evaluate all the suggested decisions and take the action corresponding to the best suggested decision). The best suggested decision will be obtained by evaluating all the suggested decisions for each product type. The following principles will be used to select the best suggested decision.
Step 4.1. Calculate the sum of the penalty amounts of the jobs in the batch. Select the suggested decision in which the sum of the penalty amounts is largest. If there are ties, then go to Step 4.2; else, go to Step 4.4.
Step 4.2. Calculate the sum of the sizes of the jobs in the batch. Select the suggested decision in which the sum of the sizes is largest. If there are ties, then go to Step 4.3; else, go to Step 4.4.
Step 4.3. Select the suggested decision in which the processing time is shortest. Go to Step 4.4.
Step 4.4. Take the action corresponding to the best suggested decision.
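
The tie-breaking in Step 4 amounts to a lexicographic comparison, which can be sketched in a few lines of Python; each suggested decision is assumed to carry the quantities computed in Steps 2 and 3 under illustrative field names.

```python
# Illustrative sketch of Step 4: select the best suggested decision across product types.
def best_suggested_decision(decisions):
    """Lexicographic choice: largest penalty sum, then largest size, then shortest time."""
    return max(
        decisions,
        key=lambda d: (
            d["penalty_sum"],       # Step 4.1: largest sum of penalty amounts in the batch
            d["batch_size"],        # Step 4.2: tie-break by largest total job size
            -d["processing_time"],  # Step 4.3: then by shortest processing time
        ),
    )
```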

After four steps of the QSM algorithm, the processing sequence of the jobs in the quenching stage will be determined. The Gantt chart for the jobs in Example 1 is shown in Figure 5.

4.3. Tempering Stage Machining (TSM) Heuristic

When one of the tempering furnaces becomes available and there is only one batch waiting for processing, or when a new batch arrives at an idle tempering furnace, the batch is put on the tempering furnace immediately. However, when one of the tempering furnaces becomes available and more than one batch is waiting for processing, a suitable batch must be chosen to be processed first. An algorithm based on a look-ahead policy is proposed as follows.

Step 1. Set $t_1$ as the idle time of the current machine. Assuming each waiting batch $b$ of product type $i$ is started at $t_1$, calculate its tardiness penalty cost as
$$PC_{b}(t_1) = \sum_{j \in b} \alpha_{j} \max\bigl(0,\; t_1 + p^{T}_{i} - d_{j}\bigr).$$

Step 2. Set $t_2$ as the idle time of the next available machine. Assuming each waiting batch $b$ of product type $i$ is started at $t_2$, calculate its tardiness penalty cost as
$$PC_{b}(t_2) = \sum_{j \in b} \alpha_{j} \max\bigl(0,\; t_2 + p^{T}_{i} - d_{j}\bigr).$$

Step 3. Calculate the increase in penalty cost $\Delta PC_{b} = PC_{b}(t_2) - PC_{b}(t_1)$ for each waiting batch, and assign the batch with the maximum value to the current idle machine.

In Example 1, one of the machines at the tempering stage becomes idle at time 15. The batch containing J9 and J10 (denoted B1) and the batch containing J3, J6, and J7 (denoted B2) are waiting for processing at this decision point. The TSM algorithm is used to select the batch to be processed first. If both batches started at time 15, the penalty costs of B1 and B2 would be 200 and 190, respectively; if both started at the next idle time, 16, the penalty costs would be 280 and 250, respectively. The penalty costs would thus increase by 80 and 60, respectively, so B1 is chosen to be processed first. The Gantt chart for the jobs at the tempering stage is shown in Figure 6.
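
The TSM choice can be sketched as follows in Python: the waiting batch whose tardiness penalty cost would grow the most if it were postponed from $t_1$ to $t_2$ is processed first. The batch fields are illustrative assumptions.

```python
# Illustrative sketch of the TSM batch-selection rule.
def penalty_cost(batch, start, temper_time):
    """Tardiness penalty cost of the batch if its tempering starts at `start`."""
    finish = start + temper_time[batch["type"]]
    return sum(rate * max(0.0, finish - due)
               for rate, due in zip(batch["penalty_rates"], batch["due_dates"]))

def choose_batch(waiting, t1, t2, temper_time):
    """Pick the batch whose penalty cost increases most when delayed from t1 to t2."""
    return max(waiting,
               key=lambda b: penalty_cost(b, t2, temper_time)
                             - penalty_cost(b, t1, temper_time))
```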

5. Computational Experiment

5.1. Design of the Experiment

An experiment is designed to demonstrate the potential of the proposed heuristics. The PRS heuristic is used to select the subcontracting jobs, and then the QSM heuristic and the TSM heuristic are used to put the jobs in batches and determine the order of the batches at the two stages. The proposed heuristics are compared with other rules. The setting-minimum-waiting-jobs (SMWJ) rule and the setting-maximum-waiting-time (SMWT) rule are widely adopted in real-world mould heat-treatment shop floors. In SMWJ, once the cumulative size of the arrived jobs of one product type reaches a minimum value, those jobs are put in a batch and the batch is arranged on a furnace. In SMWT, once the waiting time reaches a maximum value, the product type with the largest cumulative size of arrived jobs forms a batch to be processed immediately.

At the quenching stage, the SMWJ rule alone is considered as the first benchmark: when one of the quenching furnaces becomes available, the SMWJ rule is started, with the minimum size set to 80% of the batch capacity W. Extensive preliminary experiments showed that setting the minimum size to 60%, 70%, or 90% performed worse than 80%, so only the results for 80% are reported. The combination of the SMWJ and SMWT rules is used as another benchmark for the quenching stage: when one of the quenching furnaces becomes available, the SMWJ rule is started first; once the waiting time reaches the maximum value and no batch has been formed yet, the product type with the largest cumulative size of arrived jobs forms a batch. The maximum waiting time for SMWT is set to one third of the mean quenching processing time (PT). At the tempering stage, the FCFS rule is considered as a benchmark; it is a simple rule that is widely applied in both practice and theory. The four constructed control strategies and their corresponding settings are listed in Table 2.
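
For completeness, the two benchmark rules can be sketched as a single triggering function; the threshold values mirror the settings above, while the job fields and the exact triggering details are assumptions.

```python
# Illustrative sketch of the SMWJ and SMWT benchmark rules at the quenching stage.
def benchmark_trigger(waiting, now, capacity, min_fill=0.8, max_wait=None):
    """Return the product type that should form a batch now, or None."""
    loads, earliest = {}, {}
    for job in waiting:
        loads[job["type"]] = loads.get(job["type"], 0) + job["size"]
        earliest[job["type"]] = min(earliest.get(job["type"], job["release"]), job["release"])
    # SMWJ: a type whose cumulative size reaches min_fill * capacity forms a batch.
    for ptype, load in loads.items():
        if load >= min_fill * capacity:
            return ptype
    # SMWT: once the longest wait exceeds max_wait, batch the fullest type anyway.
    if max_wait is not None and earliest and now - min(earliest.values()) >= max_wait:
        return max(loads, key=loads.get)
    return None
```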

The release times of jobs are related to the traffic intensity ρ. Gupta and Sivakumar [13] considered only a single batch machine and a single product; they assumed all job sizes to be identical and defined the traffic intensity accordingly.
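
A common single-machine form of such a definition, given here only as an illustrative assumption, uses the mean job arrival rate $\lambda$, the batch processing time $p$, and a batch capacity of $B$ identical jobs:

```latex
% Assumed single-machine, single-product form of the traffic intensity.
\rho = \frac{\lambda \, p}{B}
```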

In this paper, since the jobs are nonidentical in size and there are multiple product types and machines, the traffic intensity ρ is redefined accordingly.
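
A plausible per-stage generalization consistent with this description, assuming a mean arrival rate $\lambda_{i}$ and mean job size $\bar{s}_{i}$ for product type $i$ and $m$ identical machines of capacity $W$ at the stage (shown for the quenching stage; the tempering stage is analogous), is:

```latex
% Assumed multi-product, multi-machine form of the traffic intensity (quenching stage).
\rho = \frac{\sum_{i \in I} \lambda_{i} \, \bar{s}_{i} \, p^{Q}_{i}}{m \, W}
```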

Considering the production characteristics of the company, the length of the predictive horizon is set to 144 hours. All of the jobs in the predictive horizon are generated according to the settings in Table 2. There are 21 factor combinations, i.e., 3 numbers of product types multiplied by 7 traffic intensities. For each factor combination, 100 independent instances are generated and the average values are recorded. All the algorithms are coded in MATLAB and run on a Pentium dual-core (2.00 GHz) computer with 2 GB of RAM.

5.2. Analysis of Experiment Results

The preceding section described the design of the computational experiment; its outcomes are analyzed in this section. The manufacturing costs for the alternative numbers of product types and traffic intensities are displayed in Figures 7(a)–7(c). The performances of algorithms PQT and QT are compared with the benchmark algorithms SF and SSF.

Figures 7(a), 7(b), and 7(c) present the scheduling results when the number of product types is two, four, and six, respectively. The manufacturing cost increases with the traffic intensity, since with higher traffic intensity there are more jobs to be processed in the same period of time. The manufacturing cost obtained by algorithm SF is clearly higher than that of the other three algorithms, and algorithm QT is slightly better than algorithm SSF. Algorithm PQT performs best of the four, and its advantage becomes more pronounced as the traffic intensity increases. This is because, with higher traffic intensity, more jobs become tardy, especially when the demand exceeds the capacity; in algorithm PQT some jobs are subcontracted, so the number of tardy jobs decreases and the total manufacturing cost decreases. However, when the traffic intensity is less than 0.6, the manufacturing cost differs little between algorithms PQT and QT: when the capacity far exceeds the demand, few jobs are chosen for subcontracting in algorithm PQT, and in this case the scheduling results of PQT and QT are effectively the same.

Comparing the three panels of Figure 7, when the traffic intensity is less than 0.8 and the number of product types is four or six, the performance of algorithm PQT is not obviously better than that of algorithms QT and SSF. However, when the traffic intensity exceeds 1, algorithm PQT is clearly better than the other three algorithms for all numbers of product types. This is because algorithm PQT subcontracts some jobs in order to balance the load in each control cycle; when there are multiple product types and the traffic intensity is relatively low, subcontracting is unnecessary, but when the traffic intensity exceeds 1, the tardiness penalties cost of algorithm PQT is clearly lower than that of the other three algorithms.

Figures 8(a)–8(c) present the tardiness penalties cost for the alternative numbers of product types and traffic intensities. As shown in Figure 8, the tardiness penalties cost increases significantly with traffic intensity for algorithms QT, SF, and SSF, whereas the increase is slow for algorithm PQT for all numbers of product types.

6. Conclusions and Future Works

This paper is motivated by mould heat-treatment processing in dynamic environments. The heat-treatment shop floor is a flow shop with parallel batch processors and nonidentical jobs. The jobs differ from each other in product types, sizes, and due-dates. The objective is to minimize the manufacturing cost, which includes energy cost, subcontracting cost, and tardiness penalties cost. The heat-treatment shop floor is in a sequential production system where jobs arrive in a dynamic way from upstream shop floors. According to the accuracy of arrival information, the arrival times of the jobs are divided into three situations: predicted future, roughly predicted future, and random future. A hierarchical production planning structure is proposed for the problem, which contains three decision-making levels: balancing capacity and demand, machining at the quenching stage, and machining at the tempering stage.

The PRS algorithm is proposed to balance capacity and demand in the coming period on the heat-treatment shop floor. Some jobs are chosen to be subcontracted when the demand exceeds the capacity. Since subcontracting requires preparation time, a wider look-ahead window provides better control; hence, the upcoming jobs in both the predicted future and the roughly predicted future are considered at this stage. Two new LAB strategies, the QSM algorithm and the TSM algorithm, are developed for machining at the quenching stage and the tempering stage, respectively. In these two algorithms, the LAB strategy is refined for multiple parallel batch machines and multiple product types. Robustness of the control decision is important at the quenching stage; hence, only the jobs in the buffer and the upcoming jobs in the predicted future are considered at the decision point.

An extensive computational experiment is conducted to compare the performance of the proposed strategies with the benchmark strategies. The results indicate that the manufacturing cost can be clearly decreased by running the PRS, QSM, and TSM algorithms, especially when the demand exceeds the capacity. However, the PRS strategy does not contribute much to reducing the manufacturing cost when the traffic intensity is low or moderate, and its performance is better when the number of product types is small.

In future work, this study can be extended to address different optimization objectives, such as inventory-related performance. Since a high product holding cost is uncommon in practice, other interesting extensions of the model may address limitations on buffer size, setup times, and transportation times between stations. The effectiveness of the proposed strategies might be improved by considering these characteristics in the problem.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Additional Points

The experimental data are randomly generated; the method of generating the data is explained in detail in Section 5.1.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Hanshan Normal University doctoral startup project (QD201804024), the Pearl River S&T Nova Program of Guangzhou (201710010004), and the Special Plan for Young Top-notch Talent of Guangdong (2016TQ03X364).