Mathematical Problems in Engineering

Volume 2013, Article ID 957494, 7 pages

http://dx.doi.org/10.1155/2013/957494

## A Note on a Fully Polynomial-Time Approximation Scheme for Minimizing Makespan of Deteriorating Jobs

School of Information Technology, Jiangxi University of Finance and Economics, Nanchang, Jiangxi 330013, China

Received 12 June 2013; Accepted 9 August 2013

Academic Editor: Yunqiang Yin

Copyright © 2013 Long Wan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In 1998, Kovalyov and Kubiak studied the problem of scheduling deteriorating jobs on a single machine to minimize the makespan. They presented a fully polynomial-time approximation scheme (FPTAS) based on a dynamic programming algorithm. Unfortunately, their dynamic programming algorithm is incorrect, and consequently the FPTAS is also invalid. In this paper, we construct an instance to show where their dynamic programming algorithm fails and provide a correct dynamic programming algorithm, from which a new fully polynomial-time approximation scheme is derived.

#### 1. Introduction

For most scheduling problems, the job processing times are fixed parameters that do not depend on when the jobs start. However, such a restrictive assumption does not hold in some environments of financial management, steel production, resource allocation, national defense, and so forth, where jobs deteriorate when they are delayed. To the best of our knowledge, Browne and Yechiali [1] and Kunnathur and Gupta [2] initiated the study of nonpreemptive scheduling of deteriorating jobs. They mainly considered single-machine makespan problems with job deterioration. Mosheiov [3] was the first to consider the single-machine scheduling problem with job deterioration in which the machine is available at any given time point. Chen [4] extended the study to the parallel-machine scheduling problem with the objective of minimizing the total completion time of the jobs. He showed that the problem is NP-hard in the ordinary sense when the number of machines is fixed and strongly NP-hard when the number of machines is part of the input. Ji and Cheng [5] considered the parallel-machine scheduling problem with the objective of minimizing the makespan and showed that it is likewise NP-hard in the ordinary sense when the number of machines is fixed and strongly NP-hard when the number of machines is part of the input. Wu and Lee [6] were the first to study the deteriorating-job scheduling problem under a machine availability constraint. For more papers on this topic, the reader is referred to Fan et al. [7], Ji et al. [8], and Li and Fan [9].

In this paper, we revisit the following single-machine scheduling problem with bounded deterioration introduced in [10]. There are $n$ independent jobs $J_1, J_2, \ldots, J_n$ that need to be processed on a single machine. The machine is continuously available for processing from time zero onwards, and it can process at most one job at a time. Each job $J_j$ has a *basic* processing time $a_j$, $j = 1, 2, \ldots, n$. The jobs have a given *common critical date* $d$ after which they start to deteriorate and a *common maximum deterioration date* $D$ ($D > d$) after which the jobs deteriorate no further. The *actual* processing time of job $J_j$ is stated as follows:
$$p_j = a_j + b_j \min\{\max\{s_j - d,\, 0\},\, D - d\}.$$
Here $b_j \ge 0$ is the *deterioration rate* of job $J_j$ and $s_j$ denotes the start time of job $J_j$. Without loss of generality, we assume that $a_j$, $b_j$, $d$, and $D$ are integral. The problem is to schedule the jobs so as to minimize the makespan $C_{\max}$, that is, the completion time of the last job. Following the notation of Kovalyov and Kubiak [11], we denote this problem by $P$ for simplicity.
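The bounded-deterioration rule above can be illustrated with a short Python sketch. The function and parameter names are ours, and the piecewise form (linear growth in the start time after the critical date, capped at the maximum deterioration date) is the model as we have reconstructed it:

```python
def actual_processing_time(a, b, s, d, D):
    """Actual processing time of a job with basic processing time a
    and deterioration rate b that starts at time s, given the common
    critical date d and the maximum deterioration date D (d < D).
    Deterioration grows linearly in the start time after d and is
    capped once the start time reaches D."""
    return a + b * min(max(s - d, 0), D - d)
```

A job starting at or before $d$ keeps its basic time, while one starting at or after $D$ incurs the full penalty $b_j(D-d)$.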

For problem $P$, Kovalyov and Kubiak [11] presented a dynamic programming algorithm and a fully polynomial-time approximation scheme (FPTAS). Unfortunately, their dynamic programming algorithm is incorrect, so their FPTAS is also invalid.

In this paper, we construct an instance to show where their dynamic programming algorithm fails and provide a correct dynamic programming algorithm, from which a new FPTAS is derived.

The rest of this paper is organized as follows. We first give some preliminary knowledge in Section 2. In Section 3, we give a counterexample showing that the dynamic programming in [11] does not work. In Section 4, we provide a correct dynamic programming, based on which an FPTAS is presented.

#### 2. Preliminaries

Let $\epsilon$ be a positive number. An algorithm $A$ for problem $P$ is a $(1+\epsilon)$-approximation algorithm if $C_{\max}(A) \le (1+\epsilon)\,C^{*}_{\max}$ for all instances, where $C^{*}_{\max}$ is the optimal makespan and $C_{\max}(A)$ is the makespan obtained by algorithm $A$. A family of $(1+\epsilon)$-approximation algorithms $\{A_{\epsilon}\}$ defines a fully polynomial-time approximation scheme (FPTAS for short) if, for any $\epsilon > 0$, $A_{\epsilon}$ runs in time bounded by a polynomial of $n$, $L$, and $1/\epsilon$, where $L$ is the number of bits in the binary encoding of the largest numerical parameter in the input. In the sequel, we may assume $0 < \epsilon \le 1$.

We can assume that $\sum_{j=1}^{n} a_j > d$; otherwise, no job ever deteriorates and the problem becomes trivial. Furthermore, if there is idle time on the machine in any schedule, we can eliminate the idle time and thereby shorten the makespan of the schedule. So a schedule can be determined by a permutation of the jobs. Under these two conditions, in any given schedule there is a unique job that starts by time $d$ and completes after time $d$. We call such a job *pivotal*. Any job that completes by time $d$ is called *early*, any job that starts after time $d$ but before time $D$ is called *tardy*, and any job that starts at or after time $D$ is called *suspended*. For each job $J_j$, we write $\delta_j = d + 1$ and $\Delta_j = d + a_j$. If $J_j$ is a pivotal job with completion time $t$, then $\delta_j \le t \le \Delta_j$, since $J_j$ starts by time $d$ and hence has actual processing time $a_j$. Thus, $\delta_j$ and $\Delta_j$ are the lower and upper bounds on the completion time of the pivotal job $J_j$, respectively.
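Under the no-idle-time convention, a schedule is just a permutation, and the early / pivotal / tardy / suspended classification can be read off by simulating it. The following Python sketch computes the makespan and the class of each job; the names and the piecewise processing-time rule are our reconstruction of the model:

```python
def simulate(perm, a, b, d, D):
    """Process jobs in the order `perm` with no idle time and classify
    each job as early / pivotal / tardy / suspended.
    a[j], b[j]: basic processing time and deterioration rate of job j.
    Returns (makespan, {job: class})."""
    t, cls = 0, {}
    for j in perm:
        p = a[j] + b[j] * min(max(t - d, 0), D - d)  # actual processing time
        start, t = t, t + p
        if start <= d < t:
            cls[j] = "pivotal"      # starts by d, completes after d
        elif t <= d:
            cls[j] = "early"        # completes by d
        elif start >= D:
            cls[j] = "suspended"    # starts at or after D
        else:
            cls[j] = "tardy"        # starts after d but before D
    return t, cls
```

Exactly one job is classified as pivotal in every such schedule, matching the observation above.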

For a given job $J_j$ and an integer $t$ with $\delta_j \le t \le \Delta_j$, let $P(j,t)$ be the restricted problem in which the pivotal job is $J_j$ and the completion time of $J_j$ is $t$. Notice that there must exist a pair $(j,t)$ with $1 \le j \le n$ and $\delta_j \le t \le \Delta_j$ such that an optimal solution of the restricted problem $P(j,t)$ is also optimal for problem $P$. Thus problem $P$ can be reduced to solving the restricted problems $P(j,t)$ with $1 \le j \le n$ and $\delta_j \le t \le \Delta_j$. Notice, however, that the number of these restricted problems is exponential in the size of problem $P$.

To guarantee that each restricted problem $P(j,t)$ ($1 \le j \le n$, $\delta_j \le t \le \Delta_j$) has a solution, we allow the machine to be idle before the starting time of the first job in a schedule of $P(j,t)$. Then the total processing time of the early jobs is upper bounded by $t - a_j$, the start time of the pivotal job, in every feasible schedule of problem $P(j,t)$.

For a fixed job $J_j$, let $t_1$ and $t_2$ be two time points such that $\delta_j \le t_1 \le t_2 \le \Delta_j$ and $t_2 \le (1+\rho)t_1$, where $\rho > 0$. Let $\sigma_1$ and $\sigma_2$ be optimal solutions to the restricted problems $P(j, t_1)$ and $P(j, t_2)$, respectively. The following lemma shows that we lose only a little if we use $\sigma_2$ instead of $\sigma_1$ as a solution of problem $P$.

*Remark 1. *The following two lemmas are established in Kovalyov and Kubiak [11]. We present clearer proofs for completeness.

Lemma 2. *. *

* Proof. *Recall that . Then the total processing time of the early jobs in is upper bounded by . Let be the schedule obtained from by shifting the processing of all jobs right so that completes at time . Then is also the pivotal job in , and the starting time of the earliest tardy job (if any) in is . Since is a feasible solution of , we have . Note that the suspended jobs in remain suspended in . Suppose that is the order of the tardy jobs in . Let and be the actual processing times of job in and , respectively, . Then
where . This means that each is postponed at most to be processed in , . Therefore,
On the other hand, since starts at in solution , we have
where . Hence,
Note that . Then we have
The lemma follows.

By Lemma 2, we may first partition $[\delta_j, \Delta_j]$ into subintervals such that , , where . Let be the best solution among the optimal solutions of the restricted problems , , and let be the optimal solution of . Then we have . Note that the number of these subproblems is at most , which is polynomial in the size of the input and . So if we have an FPTAS for each restricted problem , then we can get an FPTAS for . We state this conclusion in the following lemma.
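The subdivision argument only needs the subinterval endpoints to grow geometrically, so that the number of candidate completion times for the pivotal job is polynomial in the input size and $1/\epsilon$. A minimal Python sketch with a hypothetical growth parameter `rho` (the exact factor used in the paper is a function of $\epsilon$):

```python
import math

def geometric_grid(lo, hi, rho):
    """Integer grid points lo = t_0 < t_1 < ... = hi with
    t_i <= ceil((1 + rho) * t_{i-1}); only these candidate completion
    times of the pivotal job need to be considered.
    Assumes lo >= 1 and rho > 0, so each step strictly advances."""
    points = [lo]
    while points[-1] < hi:
        points.append(min(math.ceil(points[-1] * (1 + rho)), hi))
    return points
```

The grid has $O(\log_{1+\rho}(hi/lo))$ points, i.e. $O(\log(hi)/\rho)$, rather than $hi - lo$ of them.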

Lemma 3. *If one has an FPTAS for every restricted problem , then one can get an FPTAS for . *

*Proof. *We apply an FPTAS to each of the restricted problems , , within a factor of . Let be the best output solution. By Lemma 2, we can easily deduce
where . The lemma follows.

By Lemma 3, we only need to design an FPTAS for any given restricted problem $P(j,t)$. Given the pivotal job $J_j$ and its completion time $t$, we use to denote the processing time of job and reindex the remaining jobs according to Smith's rule [12] such that . By a simple exchange of pairwise consecutive jobs, we can show that the tardy jobs are sequenced by Smith's rule in an optimal schedule. Note that the actual processing times of the early jobs and the suspended jobs are unchanged, so the orders of the early jobs and the suspended jobs do not matter. It follows that there exists an optimal schedule in which the early jobs are sequenced arbitrarily, followed by the pivotal job, followed by the tardy jobs sequenced by Smith's rule, followed by the suspended jobs sequenced arbitrarily.
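The pairwise-exchange argument can be checked numerically. For jobs processed after the critical date (and all starting before $D$), interchanging two adjacent jobs shows that sequencing by nonincreasing ratio $b_j/a_j$ minimizes the makespan; we assume this is the Smith-type ratio intended here, and the helper names are ours:

```python
def makespan_after_d(jobs, t0, d, D):
    """Makespan when the (a, b) jobs in `jobs` are processed
    consecutively from time t0 > d, each with actual processing
    time a + b * min(max(start - d, 0), D - d)."""
    t = t0
    for a, b in jobs:
        t += a + b * min(max(t - d, 0), D - d)
    return t

def smith_order(jobs):
    """Sequence the (a, b) jobs by nonincreasing ratio b / a."""
    return sorted(jobs, key=lambda ab: ab[1] / ab[0], reverse=True)
```

On small instances, comparing against brute force over all orders confirms that the ratio order is makespan-minimizing whenever every start stays below $D$.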

#### 3. Counterexample

First we describe the dynamic programming adopted by [11]. The authors introduced the variables , where means that is early and means that is tardy or suspended. Let be the set of all vectors . Set where and . Note that neither nor nor depends on . is the total processing time of the early jobs from the job set . Obviously, we should have . If starts at , then it is tardy or suspended and its start delay is . Consequently, finishes at . If is scheduled early, then it does not change the makespan, and so . Thus, problem reduces to the following programming:

If we required all of the tardy and suspended jobs to be processed in nonincreasing order of Smith's ratios, programming (9) would work. Unfortunately, for problem , this requirement cannot guarantee optimality. Hence programming (9) is invalid for solving problem .

Consider the following job instance. We have three jobs , , with , , and and , , and , and . Since , there are three restricted problems , , and . Note that .

For , programming (9) schedules the jobs in the order consecutively on the machine with makespan . But if we schedule the jobs in the order on the machine, the makespan is .

For , programming (9) schedules the jobs in the order consecutively on the machine with makespan . But if we schedule the jobs in the order on the machine, the makespan is .

For , programming (9) schedules the jobs in the order consecutively on the machine with makespan . But if we schedule the jobs in the order on the machine, the makespan is .

So the solution for via programming (9) has makespan . But the optimal solution for has makespan .

#### 4. Dynamic Programming and FPTAS

We are now ready to formulate the recursive equations for . In a feasible schedule for , the jobs other than are partitioned into , , and , where is the set of all early jobs, is the set of all tardy jobs, and is the set of all suspended jobs. Recall that , and that the jobs other than have been reindexed according to Smith's rule such that .

We introduce the variables , , where if is (assumed to be) early and put into job set , if is (assumed to be) tardy and put into job set , and if is (assumed to be) suspended and put into job set . Then each vector represents a (possibly infeasible) schedule of . Let be the set of all vectors with , . Set where and .

Given , it can be observed that , , and depend only on . We overload the notations , , and so that they also represent the sets of all early jobs, all tardy jobs, and all suspended jobs, respectively, among the first jobs . Then is the total processing time of the jobs in , is the last completion time of the jobs in , and is the total processing time of the jobs in . We say that is a quasi-feasible schedule if . Note that the condition means that there must be enough room to schedule the early jobs before the pivotal job . The objective value of a quasi-feasible schedule is defined to be .

For a quasi-feasible schedule vector , we can process the jobs in starting at time , followed by the pivotal job , followed by the jobs in , and then followed by the jobs in , consecutively. The real schedule generated in this way is denoted by , and we call it the real schedule corresponding to . We can observe that for every quasi-feasible schedule . In the case that and , we simply have .

In general, a quasi-feasible schedule may not be a feasible schedule, since (or ) may not be the set of tardy jobs (or the set of suspended jobs) in . But the following lemma enables us to focus our attention on the quasi-feasible schedules.

Lemma 4. *For each quasi-feasible schedule , one has , and furthermore, the equality holds if and only if . *

* Proof. *Since , we only need to show that . Note that . We distinguish the following two cases.

(i) . Then and . For each , the actual processing time of in is equal to the contribution of for when is considered. For each , the actual processing time of in is less than the contribution of for when is considered. Therefore, with equality holding if and only if , that is, and .

(ii) . Then and . For each , the actual processing time of in is equal to the contribution of for when is considered. For each , the actual processing time of in is less than the contribution of for when is considered. Therefore, with equality holding if and only if , that is, and .

The lemma follows.

The result of Lemma 4 implies that an optimal quasi-feasible schedule of problem is also an optimal schedule of the same problem. Thus problem can be reduced to the following programming:

Theorem 5. * Programming (11) solves problem correctly in time. *

* Proof. *The correctness of programming (11) is implied by the previous discussion.

In each iteration of programming (11), we generate the values , , and from , , and . To save space, we only need to consider the states such that and . Here, means that we only consider the quasi-feasible schedules, and means that any job starting at or after time should be a suspended job.

Write : , , , . Since , , and , we have . Then . Consequently, the time complexity of programming (11) is upper bounded by . The theorem follows.

To construct the FPTAS, we employ a procedure called , where is a set of vectors, is a nonnegative function on the vectors , and is a positive number.

*Procedure *
(i) Arrange the vectors in order such that .
(ii) Assign vectors , one by one, to set until we meet the first index such that . If such an does not exist, then set and stop.
(iii) Assign vectors , one by one, to set until we meet the first index such that . If such an does not exist, then set and stop.
(iv) Continue the above procedure until is included in for some .

Assume that the calculation of needs constant time for each . Procedure requires time to arrange the elements of in nondecreasing order of and time to produce the partition. Then the total complexity of procedure is .
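In our reading, the procedure groups vectors whose function values lie within a factor of $1+\delta$ of each other, so that any one member of a group can represent the rest at a small relative loss. A Python sketch of this generic grouping (the demo feeds in numbers rather than vectors; all names are ours):

```python
def partition(A, F, delta):
    """Sort the elements of A by F, then greedily split them into
    consecutive groups such that within each group the F-values
    differ by a factor of at most (1 + delta)."""
    groups, current = [], []
    for x in sorted(A, key=F):
        if current and F(x) > (1 + delta) * F(current[0]):
            groups.append(current)   # close the group: x is too large
            current = []
        current.append(x)
    if current:
        groups.append(current)
    return groups
```

The sort dominates the running time, matching the complexity bound stated above.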

The two lemmas in the following are straightforward and can be found in [11].

Lemma 6. * for any . *

Lemma 7. * for and . *

Based on procedure Partition, we provide an FPTAS, called Algorithm , for the restricted problem . The algorithm is formally stated below.

*Algorithm *
(1) (Initialization) Reindex the jobs except the pivotal job by Smith's rule such that . Set and .
(2) (Generation of ) Generate set by assigning or to the th coordinate of vectors from , that is, . Calculate the following for any : , , and . If for some , then delete from . If , go to Step 3. If , then set and perform the following computations: call to partition set into disjoint subsets ; call to partition set into disjoint subsets . Divide set into disjoint subsets , ; . In each nonempty subset , choose a vector such that . Set , ; , and ; go to Step 2.
(3) (Solution) Select vector such that
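To make the generate-prune-trim structure concrete, here is a heavily simplified Python sketch, not the paper's exact Algorithm: the state of a partial assignment is (early load $E$, last tardy completion $C$, suspended load $S$); each job in Smith order is assigned early, tardy, or suspended; states whose early jobs cannot fit before the pivotal job are pruned; and states falling into the same geometric bucket are merged. The state encoding, the granularity `delta`, and all names are our assumptions:

```python
import math

def trimmed_dp(jobs, t, a_piv, d, D, eps):
    """Approximate best makespan of a restricted problem in which the
    pivotal job (basic time a_piv) completes at time t.
    jobs: list of (a_i, b_i) for the non-pivotal jobs, in Smith order."""
    n = len(jobs)
    delta = eps / (2.0 * n)            # hypothetical trimming granularity
    s0 = t - a_piv                     # start time of the pivotal job
    bucket = lambda v: int(math.log(v + 1, 1 + delta))
    states = {(0, t, 0)}               # (early load E, tardy completion C, suspended load S)
    for ai, bi in jobs:
        new = set()
        for E, C, S in states:
            if E + ai <= s0:           # early: must fit before the pivotal job
                new.add((E + ai, C, S))
            if C < D:                  # tardy: must start before D
                new.add((E, C + ai + bi * min(max(C - d, 0), D - d), S))
            new.add((E, C, S + ai + bi * (D - d)))   # suspended: full deterioration
        trimmed = {}
        for st in sorted(new):         # keep one representative per bucket
            trimmed.setdefault(tuple(map(bucket, st)), st)
        states = set(trimmed.values())
    return min(C + S for E, C, S in states)  # suspended jobs run after the tardy ones
```

Trimming keeps the number of surviving states polynomial in $n$ and $1/\epsilon$ while losing at most a $(1+\delta)$ factor per stage, which is the same mechanism the Partition calls implement above.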

We now show that the vector returned by Algorithm has objective value at most times the objective value of the optimal vector of programming (11). Let be the real schedule corresponding to , and let be the optimal schedule for . Then we have the following lemma.

Lemma 8. *.*

* Proof. *Let be the optimal vector of programming (11); from Lemma 4, we know and . It is sufficient to show that .

In the following, we will show by induction that, for , there exists a vector which meets the following three conditions:
(1) ,
(2) ,
(3) ,
where and , .

When , . By setting , the three conditions above clearly hold.

Inductively, suppose that , and there exists a vector which meets the previous three conditions for index . Then we have .

Let for simplicity, . Then,
where the last inequality follows the fact , and
Since , by the algorithm and Lemma 6, there exists such that
From (13) and (16), we have . From (14) and (17), we have
where the last inequality follows from the fact . Similarly, by (15) and (18), we get . Note that for . So

This completes the proof.

Now, we consider the time complexity of Algorithm . Note that and for each and each . Actually, we can get , because once , we may set ; that is, we do not put into . Hence, by Lemma 7, . Therefore, the complexity of Algorithm is . Note that there are at most restricted problems . Therefore, by Lemmas 4 and 8, we can get a -approximation solution in time.

#### References

- S. Browne and U. Yechiali, “Scheduling deteriorating jobs on a single processor,” *Operations Research*, vol. 38, no. 3, pp. 495–498, 1990.
- A. S. Kunnathur and S. K. Gupta, “Minimizing the makespan with late start penalties added to processing times in a single facility scheduling problem,” *European Journal of Operational Research*, vol. 47, no. 1, pp. 56–64, 1990.
- G. Mosheiov, “Scheduling jobs under simple linear deterioration,” *Computers and Operations Research*, vol. 21, no. 6, pp. 653–659, 1994.
- Z.-L. Chen, “Parallel machine scheduling with time dependent processing times,” *Discrete Applied Mathematics*, vol. 70, no. 1, pp. 81–93, 1996.
- M. Ji and T. C. E. Cheng, “Parallel-machine scheduling of simple linear deteriorating jobs,” *Theoretical Computer Science*, vol. 410, no. 38–40, pp. 3761–3768, 2009.
- C.-C. Wu and W.-C. Lee, “Scheduling linear deteriorating jobs to minimize makespan with an availability constraint on a single machine,” *Information Processing Letters*, vol. 87, no. 2, pp. 89–93, 2003.
- B. Fan, S. Li, L. Zhou, and L. Zhang, “Scheduling resumable deteriorating jobs on a single machine with non-availability constraints,” *Theoretical Computer Science*, vol. 412, no. 4-5, pp. 275–280, 2011.
- M. Ji, Y. He, and T. C. E. Cheng, “Scheduling linear deteriorating jobs with an availability constraint on a single machine,” *Theoretical Computer Science*, vol. 362, no. 1–3, pp. 115–126, 2006.
- S. S. Li and B. Q. Fan, “Single-machine scheduling with proportionally deteriorating jobs subject to availability constraints,” *Asia-Pacific Journal of Operational Research*, vol. 29, no. 4, 10 pages, 2012.
- W. Kubiak and S. van de Velde, “Scheduling deteriorating jobs to minimize makespan,” *Naval Research Logistics*, vol. 45, no. 5, pp. 511–523, 1998.
- M. Y. Kovalyov and W. Kubiak, “A fully polynomial approximation scheme for minimizing makespan of deteriorating jobs,” *Journal of Heuristics*, vol. 3, no. 4, pp. 287–297, 1998.
- W. E. Smith, “Various optimizers for single-stage production,” *Naval Research Logistics Quarterly*, vol. 3, no. 1-2, pp. 59–66, 1956.