Research Article | Open Access

Cuixia Miao, "Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan", *Abstract and Applied Analysis*, vol. 2014, Article ID 495187, 10 pages, 2014. https://doi.org/10.1155/2014/495187

# Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

**Academic Editor:** Fabio M. Camilli

#### Abstract

We consider bounded parallel-batch scheduling with two models of deterioration, in which the processing time of a job in the first model is $p_j = a_j + \alpha t$ and in the second model is $p_j = a + \alpha_j t$. The objective is to minimize the makespan. We present $O(n \log n)$ time algorithms for the single-machine problems, and we propose fully polynomial time approximation schemes to solve the identical-parallel-machine problem and the uniform-parallel-machine problem, respectively.

#### 1. Introduction

Consider the following problem of parallel-batch scheduling with linear processing times. There are $n$ independent deteriorating jobs $J_1, J_2, \ldots, J_n$ to be processed on a single batching machine or on (identical or uniform) parallel batching machines. The actual processing time of job $J_j$ in the first model is $p_j = a_j + \alpha t$ and in the second model is $p_j = a + \alpha_j t$, where $a_j$ and $a$ are the basic processing times, $\alpha$ and $\alpha_j$ are the deteriorating rates, and $t$ is the starting time of $J_j$ in a given schedule. All jobs are available from time zero onwards. The batching machines can process up to $b$ jobs simultaneously as a batch, and the processing time of a batch is equal to the longest processing time of any job in the batch; that is, $p_B = \max\{a_j : J_j \in B\} + \alpha t$ or $p_B = a + \max\{\alpha_j : J_j \in B\}\, t$. Let $M_i$ and $s_i$ denote the $i$th machine and its processing speed, respectively. The speeds are identical in the identical-parallel-machine environment, while they differ from each other in the uniform-parallel-machine environment. Our objective is to minimize the makespan. Following Gawiejnowicz [1], we denote our problems as $1 \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$, $Pm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$, $Qm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$, and $1 \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$, $Pm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$, $Qm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$, where "p-batch" means parallel-batch.

The model described above falls into two categories: parallel-batch scheduling and scheduling with deteriorating jobs. Parallel-batch scheduling is motivated by burn-in operations in semiconductor manufacturing. Brucker et al. [2] defined a parallel-batch machine as a machine that can process up to $b$ jobs simultaneously as a batch, where the processing time of the batch is equal to the longest processing time of any job in the batch. All jobs contained in the same batch start and complete at the same time. Once processing of a batch is initiated, it cannot be interrupted, and other jobs cannot be introduced into the batch until processing is completed. For parallel-batch scheduling, there are two models: the *bounded model*, in which the bound $b$ on each batch size is effective, that is, $b < n$, and the *unbounded model*, in which there is effectively no limit on the batch size, that is, $b \geq n$. This processing system has been extensively studied in the last two decades. Extensive surveys of the different models and results were provided by Potts and Kovalyov [3] and by Zhang and Cao [4]. Afterwards, Yuan et al. [5] gave some new results for parallel-batch scheduling.

Traditional scheduling problems assume that the processing times of jobs are constant. However, processing times may change in the real world. Examples can be found in steel production and firefighting, where any delay in processing a task may increase its completion time. Scheduling with deteriorating jobs was first considered by J. N. D. Gupta and S. K. Gupta [6] and by Browne and Yechiali [7]. Since then, this scheduling model has been extensively studied. Cheng et al. [8] gave a survey, and the monograph by Gawiejnowicz [1] presented this scheduling model from different perspectives and covered related results and examples. Ji and Cheng [9, 10] and Liu et al. [11] gave some new results for this class of problems.

Parallel-batch scheduling with deteriorating jobs was initiated by Qi et al. [12]; Li et al. [13] and Miao et al. [14] also considered this setting and gave results on minimizing the makespan of parallel-batch scheduling with simple linear deterioration (i.e., $p_j = \alpha_j t$). In this paper, we consider parallel-batch scheduling with $p_j = a_j + \alpha t$ and with $p_j = a + \alpha_j t$. We consider not only the single-machine problems but also the parallel-machine problems.

The remainder of the paper is organized as follows. In Section 2, we give some preliminaries to be used in this paper. In Section 3, we present an $O(n \log n)$ time algorithm for the single-machine problem $1 \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$ and propose two fully polynomial time approximation schemes for the problems $Pm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$ and $Qm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$, respectively. In Section 4, we present an $O(n \log n)$ time algorithm for the single-machine problem $1 \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$ and propose two fully polynomial time approximation schemes for the problems $Pm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$ and $Qm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$, respectively. We conclude the paper and suggest some interesting topics for future research in the last section.

#### 2. Preliminaries

In this section, we give some preliminaries to be used in the following sections.

J. N. D. Gupta and S. K. Gupta [6] considered the general model $p_j = a_j + \alpha_j t$, where $a_j$ and $\alpha_j$ denote the basic processing time and the deteriorating rate of job $J_j$, respectively. Lemma 1, presented in [6], is useful for our problems.

Lemma 1 (see [6]). *The problem $1 \mid p_j = a_j + \alpha_j t \mid C_{\max}$ is solvable in $O(n \log n)$ time by scheduling jobs in the nonincreasing order of the ratios $\alpha_j / a_j$.*

Lemma 2. *The problem $1 \mid p_j = a_j + \alpha t \mid C_{\max}$ is solvable in $O(n \log n)$ time by scheduling jobs in the increasing order of their basic processing times, and the makespan is $C_{\max} = \sum_{j=1}^{n} a_j (1+\alpha)^{n-j}$ for the given schedule with $a_1 \leq a_2 \leq \cdots \leq a_n$.*
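
Under the first model, assuming $p_j = a_j + \alpha t$ with a common deteriorating rate $\alpha$, the makespan of a fixed job order can be checked against the closed form of Lemma 2 with a short simulation; this is an illustrative sketch (function names are ours, not the paper's).

```python
def makespan_common_rate(basic_times, alpha):
    """Simulate jobs in the given order; a job started at time t takes a_j + alpha*t."""
    t = 0.0
    for a in basic_times:
        t += a + alpha * t
    return t

def makespan_common_rate_closed(basic_times, alpha):
    """Closed form sum_{j=1}^{n} a_j * (1 + alpha)^(n - j) for the same order."""
    n = len(basic_times)
    return sum(a * (1 + alpha) ** (n - j) for j, a in enumerate(basic_times, start=1))
```

Scheduling in increasing order of basic processing times minimizes this value, since the largest $a_j$ then receives the smallest power of $(1+\alpha)$.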

Lemma 3. *The problem $1 \mid p_j = a + \alpha_j t \mid C_{\max}$ is solvable in $O(n \log n)$ time by scheduling jobs in the nonincreasing order of their deteriorating rates, and the makespan is $C_{\max} = a \sum_{j=1}^{n} \prod_{k=j+1}^{n} (1+\alpha_k)$ for the given schedule with $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_n$.*
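
Under the second model, assuming $p_j = a + \alpha_j t$ with a common basic processing time $a$, the same kind of check applies to the closed form of Lemma 3 (function names are ours):

```python
def makespan_common_basic(rates, a):
    """Simulate jobs in the given order; a job started at time t takes a + alpha_j*t."""
    t = 0.0
    for r in rates:
        t += a + r * t
    return t

def makespan_common_basic_closed(rates, a):
    """Closed form a * sum_{j=1}^{n} prod_{k=j+1}^{n} (1 + alpha_k) for the same order."""
    n = len(rates)
    total = 0.0
    for j in range(n):
        prod = 1.0
        for k in range(j + 1, n):
            prod *= 1.0 + rates[k]
        total += a * prod
    return total
```

Note that the first job's rate never contributes, so placing the largest rates early (nonincreasing order) is advantageous.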

In the uniform-parallel-machine environment, let $n_i$ and $J_{(j)}^i$ denote the number of jobs scheduled on machine $M_i$ and the $j$th job on this machine, respectively. Then $\sum_{i=1}^{m} n_i = n$. Without loss of generality, let $J_{(1)}^i, J_{(2)}^i, \ldots, J_{(n_i)}^i$ be the jobs scheduled on machine $M_i$. On machine $M_i$ with speed $s_i$, the actual processing time of a job started at time $t$ is $(a_{(j)}^i + \alpha t)/s_i$ in the first model or $(a + \alpha_{(j)}^i t)/s_i$ in the second model.

From Lemma 2, we get the following lemma for problem $Qm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$.

Lemma 4. *The completion time of the last job on machine $M_i$ is $C_{(n_i)}^i = \sum_{j=1}^{n_i} \frac{a_{(j)}^i}{s_i} \left(1 + \frac{\alpha}{s_i}\right)^{n_i - j}$.*

And from Lemma 3, we get the following lemma for problem $Qm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$.

Lemma 5. *The completion time of the last job on machine $M_i$ is $C_{(n_i)}^i = \frac{a}{s_i} \sum_{j=1}^{n_i} \prod_{k=j+1}^{n_i} \left(1 + \frac{\alpha_{(k)}^i}{s_i}\right)$.*

An algorithm is called a $\rho$-approximation algorithm for a minimization problem if it produces a solution that is at most $\rho$ times as large as the optimal value, running in time that is polynomial in the input size. A family of approximation algorithms is a fully polynomial-time approximation scheme (FPTAS) if, for each $\varepsilon > 0$, it contains a $(1+\varepsilon)$-approximation algorithm whose running time is polynomial in the input size and in $1/\varepsilon$. Without loss of generality, we assume that $0 < \varepsilon < 1$.

#### 3. Model $p_j = a_j + \alpha t$

In this section, we discuss the model $p_j = a_j + \alpha t$. First, we present a polynomial time algorithm for the problem $1 \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$. Then we propose fully polynomial time approximation schemes for the problems $Pm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$ and $Qm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$, respectively.

##### 3.1. Single-Machine Problem $1 \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$

In this subsection, we present an $O(n \log n)$ time algorithm for the single-machine problem.

*Algorithm.* Consider the following steps.

*Step 1*. Reindex the jobs in increasing order of their basic processing times such that $a_1 \leq a_2 \leq \cdots \leq a_n$.

*Step 2*. Let $k = \lceil n/b \rceil$ and $n_0 = n - (k-1)b$.

*Step 3*. Form batches $B_1, B_2, \ldots, B_k$ such that $B_1 = \{J_1, \ldots, J_{n-(k-1)b}\}$ and $B_i = \{J_{n-(k-i+1)b+1}, \ldots, J_{n-(k-i)b}\}$ for $i = 2, \ldots, k$.

*Step **4*. Schedule these batches in the increasing order of their basic processing times from time zero.
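
The steps above can be sketched as follows, assuming the first model $p_j = a_j + \alpha t$ and that a batch's basic processing time is the largest basic time it contains (function name ours):

```python
import math

def batched_makespan(basic_times, b, alpha):
    """Sketch of the single-machine algorithm: sort basic times increasingly,
    form ceil(n/b) consecutive batches with only the batch holding the smallest
    jobs possibly partial, and schedule the batches in increasing order."""
    a = sorted(basic_times)
    n = len(a)
    k = math.ceil(n / b)
    first = n - (k - 1) * b                      # size of the (possibly partial) first batch
    ends = [first] + [first + i * b for i in range(1, k)]
    batch_times = [a[e - 1] for e in ends]       # max of each consecutive block = its last element
    t = 0.0
    for A in batch_times:                        # batch started at t takes A + alpha*t
        t += A + alpha * t
    return t
```

With $b \geq n$ all jobs form a single batch, and with $\alpha = 0$ the makespan is just the largest basic processing time.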

Theorem 6. *The problem $1 \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$ is solvable in $O(n \log n)$ time by the above algorithm.*

*Proof.* Suppose that the jobs are reindexed in increasing order of their basic processing times. Now, we only need to prove that there exists an optimal schedule satisfying the following properties. (i) The indices of jobs in each batch are consecutive. (ii) All batches are full except possibly the one which contains job $J_1$. (iii) Batches are scheduled in the increasing order of their basic processing times.

We consider an optimal schedule in the following proof.

To show (i), suppose that there are two batches $B_p$ and $B_q$ and three jobs $J_u$, $J_v$, and $J_w$ with $u < v < w$ such that $J_u, J_w \in B_p$ and $J_v \in B_q$ in schedule $\sigma$. We obtain a new schedule $\sigma'$ by interchanging job $J_u$ with job $J_v$; that is, $J_v \in B_p$ and $J_u \in B_q$ in $\sigma'$. Since $a_u \leq a_v \leq a_w$ and the starting times are not changed, the basic processing time of $B_p$ does not change and that of $B_q$ does not increase. Thus, the completion times of the jobs in $B_p$ and $B_q$ do not increase, and the completion times of the other jobs do not increase. Thus, $\sigma'$ remains optimal. A finite number of repetitions of this procedure yields an optimal schedule of the required form.

To show (ii), suppose that there is a batch $B$ in $\sigma$ such that $B$ is not full. We know that the indices of jobs in $B$ are consecutive from (i). Without loss of generality, let $B = \{J_u, \ldots, J_v\}$ with $u > 1$. If we move the preceding consecutive jobs $J_{u-1}, J_{u-2}, \ldots$ from other batches into $B$ until $B$ is full, this procedure cannot increase the objective value since $a_j \leq a_v$ for all $j < u$. A finite number of repetitions of this procedure yields an optimal schedule in which all batches are full except possibly the one which contains the first job $J_1$.

If we view each batch $B_i$ as a single aggregate job with basic processing time $\max\{a_j : J_j \in B_i\}$ in schedule $\sigma$, property (iii) holds from Lemma 2.

This completes the proof of Theorem 6.

##### 3.2. Problem $Pm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$

In this subsection, we assume that $m$ $(\geq 2)$ and the parameters $a_j$ and $\alpha$ are all integral. We derive some properties of the optimal schedule and propose an FPTAS.

We reindex jobs in increasing order of their basic processing times such that $a_1 \leq a_2 \leq \cdots \leq a_n$.

Theorem 7. *For problem $Pm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$, there exists an optimal schedule satisfying the following properties.* (i) *The indices of jobs in each batch on every machine are consecutive.* (ii) *All batches are full except possibly the one which contains job $J_1$.* (iii) *Batches are scheduled in the increasing order of their basic processing times on every machine.*

The proof of this theorem is similar to the proof of Theorem 6.

For problem $Pm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$, the properties allow us to determine the batch structure of an optimal solution a priori. So we divide the jobs into $k = \lceil n/b \rceil$ batches according to the algorithm in Section 3.1. It is possible to view batch $B_j$ as a single aggregate job with basic processing time $A_j = \max\{a_i : J_i \in B_j\}$.

Similar to the establishment of the FPTAS in Kovalyov and Kubiak [15], we introduce 0-1 variables $x_{ij}$, $i = 1, \ldots, m$, $j = 1, \ldots, k$, where $x_{ij} = 1$ if batch $B_j$ is scheduled on machine $M_i$ and $x_{ij} = 0$ otherwise. Let $X$ be the set of all such vectors $x$ with $x_{ij} \in \{0, 1\}$ and $\sum_{i=1}^{m} x_{ij} = 1$ for $j = 1, \ldots, k$. Let $F_i(x)$ denote the completion time of the last batch on machine $M_i$ under $x$. Then problem $Pm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$ is reduced to the problem of minimizing $\max_{1 \leq i \leq m} F_i(x)$ over $x \in X$. We introduce the procedure Partition $(Y, F, \delta)$ proposed by Kovalyov and Kubiak [15], where $Y$ is a set of vectors, $F$ is a nonnegative integer function on $Y$, and $0 < \delta \leq 1$. This procedure partitions $Y$ into disjoint subsets $Y_1, \ldots, Y_r$ such that $|F(y) - F(y')| \leq \delta \min\{F(y), F(y')\}$ for any $y, y'$ from the same subset. The following description provides the details of Partition $(Y, F, \delta)$.

*Procedure Partition $(Y, F, \delta)$*. Arrange the vectors of $Y$ in the order $y^1, y^2, \ldots, y^{|Y|}$, where $F(y^1) \leq F(y^2) \leq \cdots \leq F(y^{|Y|})$. Assign the vectors $y^1, y^2, \ldots$ to set $Y_1$ until the first vector $y^{i_1}$ is found such that $F(y^{i_1}) \leq (1+\delta) F(y^1)$ and $F(y^{i_1 + 1}) > (1+\delta) F(y^1)$. If such a vector does not exist, then take $Y_1 = Y$ and stop.

Assign the vectors $y^{i_1+1}, y^{i_1+2}, \ldots$ to set $Y_2$ until the first vector $y^{i_2}$ is found such that $F(y^{i_2}) \leq (1+\delta) F(y^{i_1+1})$ and $F(y^{i_2+1}) > (1+\delta) F(y^{i_1+1})$. If such a vector does not exist, then take $Y_2 = Y \setminus Y_1$ and stop.

Continue the above process until $y^{|Y|}$ is included in $Y_r$ for some $r$.
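
The grouping idea behind Partition can be sketched as follows; the function name and the concrete grouping rule are a simplified reading of Kovalyov and Kubiak's procedure, not a verbatim transcription.

```python
def partition_by_value(items, F, delta):
    """Split items into consecutive groups, in nondecreasing order of F, so that
    within each group the largest F-value is at most (1 + delta) times the
    smallest (a sketch of the Partition grouping rule)."""
    ordered = sorted(items, key=F)
    groups = []
    for y in ordered:
        if groups and F(y) <= (1 + delta) * F(groups[-1][0]):
            groups[-1].append(y)          # still within factor (1+delta) of the group's first value
        else:
            groups.append([y])            # start a new group
    return groups
```

Any two members of the same group then differ in $F$-value by at most a factor $(1+\delta)$, which is what makes it safe to keep one representative per group.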

The main properties of Partition were given by Kovalyov and Kubiak [15] as follows.

Proposition 8. *Consider $|F(y) - F(y')| \leq \delta \min\{F(y), F(y')\}$ for any $y, y' \in Y_j$, $j = 1, \ldots, r$.*

Proposition 9. *Consider $r \leq 2 \ln F(y^{|Y|}) / \delta + 2$ for $0 < \delta \leq 1$ and $F(y^{|Y|}) \geq 1$.*

Now, we give a fully polynomial time approximation scheme for problem $Pm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$.

*Algorithm.* Consider the following steps.

*Step 1*. Reindex the jobs in increasing order of their basic processing times such that $a_1 \leq a_2 \leq \cdots \leq a_n$.

*Step 2*. Form $k$ batches by using the algorithm in Section 3.1, where $k = \lceil n/b \rceil$.

*Step 3*. Regard batch $B_j$ as an aggregate job with basic processing time $A_j = \max\{a_i : J_i \in B_j\}$.

*Step 4*. Set $j = 1$, $Y_0 = \{(0, 0, \ldots, 0)\}$, and $\delta = \varepsilon/(2k)$.

*Step 5*. For the set $Y_{j-1}$, generate the set $Y_j'$ by adding 1 in position $(i, j)$ of each vector from $Y_{j-1}$, for $i = 1, \ldots, m$. Calculate the machine completion times $F_1(x), \ldots, F_m(x)$ for any $x \in Y_j'$; without loss of generality, assume $F_1(x) \geq F_2(x) \geq \cdots \geq F_m(x)$:
If $j = k$, then set $Y_k = Y_j'$, and go to Step 6.

If $j < k$, then perform the following computation.

Call Partition $(Y_j', F_1, \delta)$ to partition the set $Y_j'$ into disjoint subsets $Y_j^1, \ldots, Y_j^{r_j}$.

Divide each set $Y_j^l$ into disjoint subsets according to the values $F_2, \ldots, F_m$ in the same manner, $l = 1, \ldots, r_j$. For each nonempty subset, choose a vector $x$ such that $F_1(x)$ is minimum over the subset. Set $Y_j$ to be the set of all chosen vectors, and set $j = j + 1$. Repeat Step 5.

*Step 6*. Select a vector $x^* \in Y_k$ such that $\max_{1 \leq i \leq m} F_i(x^*)$ is minimum.

Let the schedule assign batch $B_j$ to machine $M_i$ whenever $x_{ij}^* = 1$, where $i = 1, \ldots, m$ and $j = 1, \ldots, k$.
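
The overall shape of such a scheme can be illustrated with a self-contained list-and-trim sketch for the identical-machine case: states are vectors of machine loads, each aggregate batch (taken in increasing basic-time order) is appended to every machine in turn, and near-equal states are merged on a geometric grid. Under the first model, appending a batch with basic time $A$ to a machine with load $L$ yields the new load $L(1+\alpha) + A$. This is an assumption-laden illustration of the trimming idea, not the paper's exact algorithm.

```python
import math

def trimmed_min_makespan(batch_times, m, alpha, eps):
    """List-and-trim sketch: enumerate load vectors over m identical machines,
    appending batches in increasing basic-time order, and prune states whose
    sorted loads round to the same cell of a geometric grid."""
    step = 1.0 + eps / (2.0 * max(1, len(batch_times)))  # grid ratio per round

    def cell(state):
        return tuple(0 if x == 0 else int(round(math.log(x) / math.log(step)))
                     for x in sorted(state))

    states = {(0.0,) * m}
    for A in batch_times:
        kept = {}
        for s in states:
            for i in range(m):
                t = list(s)
                t[i] = t[i] * (1.0 + alpha) + A   # append batch to machine i
                kept.setdefault(cell(tuple(t)), tuple(t))
        states = set(kept.values())
    return min(max(s) for s in states)
```

Every kept state corresponds to a feasible schedule, so the returned value is never below the optimum; the grid keeps the state count polynomial at the cost of a controlled relative error.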

We get the following theorem.

Theorem 10. *The above algorithm finds a vector $x^* \in X$ for problem $Pm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$ such that $\max_{1 \leq i \leq m} F_i(x^*) \leq (1+\varepsilon) \max_{1 \leq i \leq m} F_i(x^0)$, where $x^0$ is an optimal solution.*

*Proof. *Suppose that for some and . Algorithm may not choose for further construction; however, for a vector chosen instead of it, we have
Set . Consider the vector and . We assume . It follows that

Consequently,

Similarly, for , we have

Assume that and Algorithm chooses instead of in the iteration; then we have
Then we have
Set , .

Then,
By repeating the above argument for , we show that such that
Since,

Then,
Now, we have
From Step 6 in Algorithm , we know that the vector will be chosen such that
Then,
So,
To establish the time complexity of the algorithm, we note that Step 5 dominates the computation.

By Proposition 9, the number of subsets generated in each iteration of Step 5 is polynomial in the input size and $1/\varepsilon$, and hence so is the number of vectors retained. Therefore, the time complexity of the algorithm is polynomial in the input size and $1/\varepsilon$.

This completes the proof of Theorem 10.

##### 3.3. Problem $Qm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$

Motivated by Liu et al. [11], we propose an FPTAS for our uniform-parallel-machine problem $Qm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$.

We reindex jobs in increasing order of their basic processing times such that $a_1 \leq a_2 \leq \cdots \leq a_n$.

We can obtain the following theorem, which is similar to Theorem 7.

Theorem 11. *For problem $Qm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$, there exists an optimal schedule satisfying the following properties.* (i) *The indices of jobs in each batch on every machine are consecutive.* (ii) *All batches are full except possibly the one which contains job $J_1$.* (iii) *Batches are scheduled in the increasing order of their basic processing times on every machine.*

For problem $Qm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$, these properties allow us to determine the batch structure of an optimal solution a priori. So we divide the jobs into $k = \lceil n/b \rceil$ batches according to the algorithm in Section 3.1. It is possible to view batch $B_j$ as a single aggregate job with basic processing time $A_j = \max\{a_i : J_i \in B_j\}$.

We design the FPTAS by using the procedure *Partition* proposed in Kovalyov and Kubiak [15], which requires that the function used be a nonnegative integer function. Similar to Liu et al. [11], we first modify the objective function into a nonnegative integer function; this operation does not affect the schedule. For simplicity, we suppose that the speeds and deteriorating rates are finite decimals, so that suitable integer multipliers exist that turn all machine workloads into integers, and the transformed objective function is a nonnegative integer. Meanwhile, the above scaling operation only magnifies the value of the objective and does not change the schedule itself. We use the new objective function instead of the original one in the remainder of the paper.

Similar to the FPTAS in Section 3.2, we introduce 0-1 variables $x_{ij}$, $i = 1, \ldots, m$, $j = 1, \ldots, k$, where $x_{ij} = 1$ if batch $B_j$ is scheduled on $M_i$ and $x_{ij} = 0$ otherwise. Let $X$ be the set of all such vectors $x$ with $x_{ij} \in \{0, 1\}$ and $\sum_{i=1}^{m} x_{ij} = 1$ for $j = 1, \ldots, k$. For each machine, we track the magnified workload of machine $M_i$ for the jobs assigned to it.
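
For intuition on the machine loads in the uniform case, assume (as in the setting of Lemma 4) that a machine of speed $s$ processes a batch of basic time $A$ started at time $t$ in $(A + \alpha t)/s$ time units; the load then evolves as $t \leftarrow t(1 + \alpha/s) + A/s$. A small sketch (function name and modeling assumption ours):

```python
def uniform_machine_load(batch_times, s, alpha):
    """Load of one uniform machine of speed s after processing the given
    batches in order, assuming a batch started at t takes (A + alpha*t)/s."""
    t = 0.0
    for A in batch_times:
        t += (A + alpha * t) / s
    return t
```

With $s = 1$ this reduces to the single-machine recurrence, and unrolling it gives the closed form $\sum_{j} (A_j/s)(1 + \alpha/s)^{k-j}$.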

Then problem is reduced to the following problem:

Now, we propose the fully polynomial time approximation scheme.

*Algorithm.* Consider the following steps.

*Step 1*. Reindex the jobs in increasing order of their basic processing times such that $a_1 \leq a_2 \leq \cdots \leq a_n$.

*Step 2*. Form $k$ batches by using the algorithm in Section 3.1, where $k = \lceil n/b \rceil$.

*Step 3*. Regard batch $B_j$ as an aggregate job with basic processing time $A_j = \max\{a_i : J_i \in B_j\}$.

*Step 4*. Set $j = 1$, $Y_0 = \{(0, 0, \ldots, 0)\}$, and $\delta = \varepsilon/(2k)$.

*Step 5*. For the set $Y_{j-1}$, generate the set $Y_j'$ by adding 1 in position $(i, j)$ of each vector from $Y_{j-1}$, for $i = 1, \ldots, m$. Calculate the magnified workloads of the machines for any $x \in Y_j'$; without loss of generality, assume they are indexed in nonincreasing order, and set $F(x)$ to be the largest of them.
If $j = k$, then set $Y_k = Y_j'$, and go to Step 6.

If $j < k$, then perform the following computation.

Call Partition $(Y_j', F, \delta)$ to partition the set $Y_j'$ into disjoint subsets $Y_j^1, \ldots, Y_j^{r_j}$.

Divide each set $Y_j^l$ into disjoint subsets according to the remaining machine workloads in the same manner, $l = 1, \ldots, r_j$. For each nonempty subset, choose a vector $x$ such that $F(x)$ is minimum over the subset. Set $Y_j$ to be the set of all chosen vectors, and set $j = j + 1$. Repeat Step 5.

*Step 6*. Select a vector $x^* \in Y_k$ such that the largest magnified workload over the machines is minimum.

Let the schedule assign batch $B_j$ to machine $M_i$ whenever $x_{ij}^* = 1$, where $i = 1, \ldots, m$ and $j = 1, \ldots, k$.

We get the following theorem.

Theorem 12. *The above algorithm finds a vector $x^* \in X$ for problem $Qm \mid p\text{-batch}, b<n, p_j = a_j + \alpha t \mid C_{\max}$ whose makespan is at most $(1+\varepsilon)$ times the optimal makespan, in time polynomial in the input size and $1/\varepsilon$.*

*Analysis.* The proof is similar to the proof of Theorem 10 except the following:
Consequently,

Similarly, for , we have

To establish the time complexity of the algorithm, we note that Step 5 dominates the computation.

By Proposition 9, the number of subsets generated in each iteration of Step 5 is polynomial in the input size and $1/\varepsilon$, and hence so is the number of vectors retained. Therefore, the time complexity of the algorithm is polynomial in the input size and $1/\varepsilon$.

#### 4. Model $p_j = a + \alpha_j t$

In this section, we discuss the model $p_j = a + \alpha_j t$. First, we present a polynomial time algorithm for the problem $1 \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$. Then we propose fully polynomial time approximation schemes for the problems $Pm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$ and $Qm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$, respectively.

##### 4.1. Single-Machine Problem $1 \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$

In this subsection, we present an $O(n \log n)$ time algorithm for the single-machine problem.

*Algorithm FBLDR* (fully batching largest deteriorating rate). Consider the following steps.

*Step 1*. Reindex jobs in nonincreasing order of their deteriorating rates such that $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_n$.

*Step 2*. Form batches by placing jobs $J_{(i-1)b+1}$ through $J_{\min\{ib, n\}}$ together in the same batch $B_i$, for $i = 1, 2, \ldots, \lceil n/b \rceil$.

*Step **3*. Schedule the batches in the increasing order of their indices.

The schedule contains at most $\lceil n/b \rceil$ batches and all batches are full except possibly the last one, where $\lceil x \rceil$ denotes the smallest integer greater than or equal to $x$.

Theorem 13. *Algorithm FBLDR solves problem $1 \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$ optimally, and the optimal objective value is $a \sum_{i=1}^{k} \prod_{l=i+1}^{k} (1 + \alpha_{B_l})$, where $k = \lceil n/b \rceil$ and $\alpha_{B_l} = \max\{\alpha_j : J_j \in B_l\}$.*

We omit the proof as it is simple.
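
Algorithm FBLDR can be sketched directly from the steps above, assuming the second model $p_j = a + \alpha_j t$ (function name ours):

```python
import math

def fbldr_makespan(rates, a, b):
    """FBLDR sketch: sort deteriorating rates nonincreasingly, fill batches of
    size b in that order (last batch possibly partial), and schedule batches in
    index order.  A batch's rate is the largest rate it contains, i.e. the rate
    of its first job."""
    r = sorted(rates, reverse=True)
    k = math.ceil(len(r) / b)
    t = 0.0
    for i in range(k):
        t += a + r[i * b] * t        # batch i starts at t; its rate is r[i*b]
    return t
```

Since the first batch starts at time zero, its (largest) rate never contributes, which is exactly why placing the largest rates early is advantageous.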

##### 4.2. Problem , ,

In this subsection, we assume that $m$ $(\geq 2)$ and the parameters $a$ and $\alpha_j$ are all integral. We derive some properties of the optimal schedule and propose an FPTAS.

We reindex jobs in nonincreasing order of their deteriorating rates such that $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_n$.

Theorem 14. *For problem $Pm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$, there exists an optimal schedule satisfying the following properties.* (i) *The indices of jobs in each batch on every machine are consecutive.* (ii) *All batches are full except possibly the one which contains job $J_n$.* (iii) *Batches are scheduled in the increasing order of their indices on every machine.*

For problem $Pm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$, the properties allow us to determine the batch structure of an optimal solution a priori. So we divide the jobs into $k = \lceil n/b \rceil$ batches according to Algorithm FBLDR. It is possible to view batch $B_l$ as a single aggregate job with deteriorating rate $\alpha_{B_l} = \max\{\alpha_j : J_j \in B_l\}$.

Similar to the establishment of the FPTAS in Kovalyov and Kubiak [15], we introduce 0-1 variables $x_{ij}$, $i = 1, \ldots, m$, $j = 1, \ldots, k$, where $x_{ij} = 1$ if batch $B_j$ is scheduled on machine $M_i$ and $x_{ij} = 0$ otherwise. Let $X$ be the set of all such vectors $x$ with $x_{ij} \in \{0, 1\}$ and $\sum_{i=1}^{m} x_{ij} = 1$ for $j = 1, \ldots, k$. Let $F_i(x)$ denote the completion time of the last batch on machine $M_i$ under $x$. Then problem $Pm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$ is reduced to the problem of minimizing $\max_{1 \leq i \leq m} F_i(x)$ over $x \in X$.

Using the *Procedure Partition*, we design a fully polynomial time approximation scheme for problem $Pm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$ as follows.

*Algorithm.* Consider the following steps.

*Step 1*. Reindex the jobs in nonincreasing order of their deteriorating rates such that $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_n$.

*Step 2*. Form $k$ batches by using Algorithm FBLDR, where $k = \lceil n/b \rceil$.

*Step 3*. Regard batch $B_j$ as an aggregate job with deteriorating rate $\alpha_{B_j} = \max\{\alpha_i : J_i \in B_j\}$.

*Step 4*. Set $j = 1$, $Y_0 = \{(0, 0, \ldots, 0)\}$, and $\delta = \varepsilon/(2k)$.

*Step 5*. For the set $Y_{j-1}$, generate the set $Y_j'$ by adding 1 in position $(i, j)$ of each vector from $Y_{j-1}$, for $i = 1, \ldots, m$. Calculate the machine completion times $F_1(x), \ldots, F_m(x)$ for any $x \in Y_j'$; without loss of generality, assume $F_1(x) \geq F_2(x) \geq \cdots \geq F_m(x)$:
If $j = k$, then set $Y_k = Y_j'$, and go to Step 6.

If $j < k$, then perform the following computation.

Call Partition $(Y_j', F_1, \delta)$ to partition the set $Y_j'$ into disjoint subsets $Y_j^1, \ldots, Y_j^{r_j}$.

Divide each set $Y_j^l$ into disjoint subsets according to the values $F_2, \ldots, F_m$ in the same manner, $l = 1, \ldots, r_j$. For each nonempty subset, choose a vector $x$ such that $F_1(x)$ is minimum over the subset. Set $Y_j$ to be the set of all chosen vectors, and set $j = j + 1$. Repeat Step 5.

*Step 6*. Select a vector $x^* \in Y_k$ such that $\max_{1 \leq i \leq m} F_i(x^*)$ is minimum.

Let the schedule assign batch $B_j$ to machine $M_i$ whenever $x_{ij}^* = 1$, where $i = 1, \ldots, m$ and $j = 1, \ldots, k$.

We get the following theorem.

Theorem 15. *The above algorithm finds a vector $x^* \in X$ for problem $Pm \mid p\text{-batch}, b<n, p_j = a + \alpha_j t \mid C_{\max}$ such that $\max_{1 \leq i \leq m} F_i(x^*) \leq (1+\varepsilon) \max_{1 \leq i \leq m} F_i(x^0)$, where $x^0$ is an optimal solution.*

*Proof. *Suppose that for some and . Algorithm may not choose for further construction; however, for a vector chosen instead of it, we have
Set . Consider the vector and . We assume . It follows that

Consequently,

Similarly, for , we have