Mathematical Problems in Engineering
Volume 2015, Article ID 956158, 6 pages
http://dx.doi.org/10.1155/2015/956158
Research Article

## Parallel-Machine Scheduling with Time-Dependent and Machine Availability Constraints

Cuixia Miao and Juan Zou

School of Mathematical Sciences, Qufu Normal University, Qufu, Shandong 273165, China

Received 9 February 2015; Revised 11 April 2015; Accepted 12 April 2015

Academic Editor: Chin-Chia Wu

Copyright © 2015 Cuixia Miao and Juan Zou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider the parallel-machine scheduling problem in which the machines have availability constraints and the processing time of each job is a simple linear increasing function of its starting time. For the makespan minimization problem, which is NP-hard in the strong sense, we discuss the Longest Deteriorating Rate algorithm and the List Scheduling algorithm; we also provide a lower bound on any optimal schedule. For the total completion time minimization problem, we analyze the strong NP-hardness, and we present a dynamic programming algorithm and a fully polynomial time approximation scheme for the two-machine problem. Furthermore, we extend the dynamic programming algorithm to the total weighted completion time minimization problem.

#### 1. Introduction

Consider the following scheduling problem with deteriorating jobs and machine availability constraints. There are $n$ independent deteriorating jobs $J_1, J_2, \dots, J_n$ to be processed on $m$ identical parallel machines $M_1, M_2, \dots, M_m$. The actual processing time of job $J_j$ is $p_j = b_j t$, where $b_j$ ($b_j > 0$) and $t$ are the deteriorating rate and the starting time of job $J_j$, respectively. Each job $J_j$ has a weight $w_j$. All jobs are released at time $t_0$ ($t_0 > 0$). The case with $t_0 = 0$ is not considered because every job would have its processing time equal to zero if $t_0 = 0$. We assume that the jobs are nonresumable in our problems. Machine $M_i$ is not continuously available: it is unavailable during the period $[s_i, e_i]$. In addition, we assume that $t_0 < s_i$ and $s_i < t_0 P$, where $P = \prod_{j=1}^{n}(1 + b_j)$. Otherwise, all the jobs can be finished before the nonavailable period and the problem becomes trivial. Without loss of generality, we assume that all parameters are integral unless stated otherwise.
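The deterioration model above can be sketched in a few lines of code. The function name and the numeric rates below are ours, chosen only for illustration: a job with rate $b$ started at time $t$ has processing time $bt$ and therefore completes at $t(1+b)$.

```python
# A minimal sketch of the deterioration model: a job with deteriorating
# rate b started at time t has actual processing time p = b * t, so it
# completes at t * (1 + b).

def completion_times(rates, t0):
    """Completion times of jobs processed consecutively from time t0."""
    t, out = t0, []
    for b in rates:
        t = t * (1 + b)  # completion = start * (1 + b)
        out.append(t)
    return out

# Hypothetical instance: three jobs released at t0 = 1.
print(completion_times([1.0, 0.5, 2.0], 1.0))  # [2.0, 3.0, 9.0]
```

Note that completion times compound multiplicatively: processing all jobs on one machine from $t_0$ yields makespan $t_0 \prod_j (1+b_j)$ regardless of the order, which is why the job order matters only in the presence of unavailable periods or sum-type objectives.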

Our objective is to minimize the makespan and the total (weighted) completion time. Following Gawiejnowicz [1], we denote our problems as $Pm, h_{i1} \mid p_j = b_j t, nres \mid C_{\max}$ and $Pm, h_{i1} \mid p_j = b_j t, nres \mid \sum (w_j) C_j$, where $nres$ means nonresumable.

The model described above falls into the categories of scheduling with deteriorating jobs and machine scheduling with availability constraints. Scheduling with deteriorating jobs was first considered by J. N. D. Gupta and S. K. Gupta [2] and Browne and Yechiali [3]. Cheng et al. [4] gave a survey, and the monograph by Gawiejnowicz [1] presents this line of scheduling from different perspectives and covers many results and examples. Graves and Lee [5] pointed out that machine scheduling with an availability constraint is very important and still relatively unexplored. They studied a problem in which maintenance needs to be performed within a fixed period. Lee [6] presented an extensive study of single- and parallel-machine scheduling problems with an availability constraint with respect to various performance measures; two cases were considered: resumable and nonresumable. A job is said to be resumable if, when it cannot be finished before the nonavailable interval of a machine, it can continue after the machine becomes available again. On the other hand, a job is said to be nonresumable if it has to restart rather than continue. Ma et al. [7] gave a survey of this class of scheduling problems.

In this paper, we consider the deteriorating job scheduling with machine availability constraints on identical parallel machines. The jobs are nonresumable and our objective is to minimize the makespan and the total (weighted) completion time.

Relevant Previous Work. Wu and Lee [8] initiated the study of deteriorating job scheduling with machine availability constraints; they showed that minimizing the makespan of linear deteriorating jobs on a single machine with an availability constraint can be transformed into a 0-1 integer program. Ji et al. [9] gave some results for linear deteriorating jobs with an availability constraint on a single machine. Gawiejnowicz and Kononov [10] considered the complexity and approximability of scheduling resumable proportionally deteriorating jobs. Fan et al. [11] considered scheduling resumable deteriorating jobs on a single machine with nonavailability constraints. Li and Fan [12] addressed the nonresumable single-machine scheduling problem. All of the problems above are on a single machine. In this paper, we consider the parallel-machine scheduling problem with deteriorating jobs and machine availability constraints; we show that the problems are strongly NP-hard and present some algorithms.

#### 2. Minimizing the Makespan

In this section, we first show that problem $Pm, h_{i1} \mid p_j = b_j t, nres \mid C_{\max}$ is NP-hard in the strong sense. Ji and Cheng [13] showed that problem $Pm \mid p_j = b_j t \mid C_{\max}$ is NP-hard in the strong sense when $m$ is arbitrary. In their problem, all the machines are available all the time. Thus, our problem is NP-hard in the strong sense when $m$ is arbitrary.

In the following, we discuss the Longest Deteriorating Rate (LDR for short) algorithm and List Scheduling (LS for short) algorithm and analyze a lower bound of any optimal schedule.

##### 2.1. LDR and LS Algorithms

LS Algorithm. Given a list of jobs $J_1, J_2, \dots, J_n$, assign the jobs one by one according to the list. Each job is assigned to the machine on which it can be finished as early as possible.

LDR Algorithm. Sort the jobs in nonincreasing order of their deteriorating rates, and then assign the jobs by the LS algorithm.

Let $C_{\max}(\mathrm{LDR})$, $C_{\max}(\mathrm{LS})$, and $C^{*}_{\max}$ denote the makespans corresponding to LDR, LS, and an optimal schedule, respectively.

Theorem 1. $C_{\max}(\mathrm{LS})/C^{*}_{\max}$ can be arbitrarily large even for the two-machine problem.

Proof. Consider a problem with the following instance: , , , , , and . In the optimal schedule with makespan , jobs and are scheduled before on the first machine, and jobs , , and are scheduled before on the second machine. However, . As a consequence, when .
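The numeric instance in the published proof was lost in extraction. The following sketch (all numbers and function names are ours, chosen only to illustrate the phenomenon) simulates LS on two machines sharing a common unavailable interval $[s, e]$: a job that no longer fits before $s$ must restart at $e$, so a badly ordered list forces one job past the gap while a different schedule fits everything before it, and the ratio grows without bound as $e$ grows.

```python
# Hypothetical two-machine instance illustrating Theorem 1: common
# unavailable interval [s, e], nonresumable jobs, release time t0 = 1.

def finish_time(t, b, s, e):
    """Earliest completion of a job with rate b on a machine free at time t."""
    if t >= e:                # machine is already past the gap
        return t * (1 + b)
    c = t * (1 + b)
    return c if c <= s else e * (1 + b)  # must fit before s, or restart at e

def ls_makespan(rates, s, e, t0=1.0, m=2):
    """List scheduling: each job goes where it finishes earliest."""
    loads = [t0] * m
    for b in rates:
        i = min(range(m), key=lambda k: finish_time(loads[k], b, s, e))
        loads[i] = finish_time(loads[i], b, s, e)
    return max(loads)

s, e = 4.0, 10_000.0
print(ls_makespan([1.0, 1.0, 3.0], s, e))  # 40000.0: LS pushes b = 3 past the gap
# A better schedule runs the b = 3 job first on one machine (1 * 4 = 4 <= s)
# and the two b = 1 jobs on the other (1 * 2 * 2 = 4 <= s): makespan 4.
```

Letting $e \to \infty$ makes the ratio between the two makespans arbitrarily large, which is the content of the theorem.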

Theorem 2. If , then and , where and .

The proof of this theorem is similar to the proofs of Theorems 1 and 3 in Liu et al. [14].

##### 2.2. Lower Bound of Any Optimal Schedule

Without loss of generality, let , , and .

Theorem 3.

Proof. In any optimal schedule, let and denote the set of jobs scheduled on machine and before on   , respectively. Then, the set of jobs scheduled after is , and . We have the load of machine denoted by holding for . Note that . Thus, .
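The exact bound stated in Theorem 3 did not survive extraction. The following sketch computes one simple bound that is valid for this model (our reconstruction, not necessarily the paper's formula): relaxing the unavailable periods, the machine completion times multiply to $t_0^m \prod_{j=1}^{n}(1+b_j)$, so the maximum is at least the geometric mean, $C^{*}_{\max} \ge t_0 \bigl(\prod_{j=1}^{n}(1+b_j)\bigr)^{1/m}$; since removing the availability constraints is a relaxation, the bound also holds for the constrained optimum.

```python
import math

# Relaxation-based lower bound: ignoring the unavailable periods, machine
# i completes at t0 * prod_{j in S_i} (1 + b_j), and the product of the m
# completion times equals t0^m * prod_j (1 + b_j); hence the makespan is
# at least the geometric mean of the machine completion times.

def makespan_lower_bound(rates, t0, m):
    prod = math.prod(1 + b for b in rates)
    return t0 * prod ** (1.0 / m)

print(makespan_lower_bound([1.0, 1.0, 3.0], 1.0, 2))  # sqrt(2 * 2 * 4) = 4.0
```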

#### 3. Minimizing the Total Completion Time

In this section, we discuss the total completion time minimization problem.

Ji and Cheng [13] showed that problem $Pm \mid p_j = b_j t \mid \sum C_j$ is NP-hard in the strong sense when $m$ is arbitrary, which implies that our problem is NP-hard in the strong sense when $m$ is arbitrary.

##### 3.1. Dynamic Programming Algorithm

In this subsection, we present a dynamic programming algorithm for the two-machine problem when machine $M_2$ is always available. For convenience, let $P = \prod_{j=1}^{n}(1+b_j)$, and let the only nonavailable interval, which is on machine $M_1$, be $[s_1, e_1]$, where $t_0 < s_1 < e_1$.

Smallest Deteriorating Rate (SDR for Short). Sort the jobs in nondecreasing order of their deteriorating rates such that $b_1 \le b_2 \le \cdots \le b_n$.

Lemma 4. In any optimal solution, the jobs scheduled before the nonavailable interval on $M_1$ are processed in the SDR order, and so are the jobs scheduled after the nonavailable interval and the jobs on machine $M_2$.

This lemma can be proved by a standard pairwise interchange argument.

We assume that the jobs are reindexed in the SDR order. Let $f_j(u, v)$ denote the optimal value of the objective function satisfying the following conditions: (i) the jobs in consideration are $J_1, J_2, \dots, J_j$; (ii) the total processing time of the jobs before $[s_1, e_1]$ on $M_1$ is $u$; (iii) the total processing time of the jobs on $M_2$ is $v$.

To compute $f_j(u, v)$, we distinguish three cases, according to where job $J_j$ is placed.

Case 1. Job $J_j$ is scheduled before $[s_1, e_1]$ on machine $M_1$.
In this case, $v$ does not change. The starting time of $J_j$ is $(t_0+u)/(1+b_j)$, the total processing time before $[s_1, e_1]$ is $(t_0+u)/(1+b_j) - t_0$ before inserting job $J_j$, and the completion time of $J_j$ is $C_j = t_0 + u$. Thus, $f_j(u, v) = f_{j-1}\bigl((t_0+u)/(1+b_j) - t_0,\, v\bigr) + (t_0 + u)$, provided that $t_0 + u \le s_1$.

Case 2. Job $J_j$ is scheduled on machine $M_2$.
In this case, $u$ does not change. The starting time of $J_j$ is $(t_0+v)/(1+b_j)$, the total processing time on $M_2$ is $(t_0+v)/(1+b_j) - t_0$ before inserting job $J_j$, and the completion time of $J_j$ is $C_j = t_0 + v$. Thus, $f_j(u, v) = f_{j-1}\bigl(u,\, (t_0+v)/(1+b_j) - t_0\bigr) + (t_0 + v)$.

Case 3. Job $J_j$ is scheduled after $[s_1, e_1]$ on machine $M_1$.
In this case, both $u$ and $v$ do not change. The completion time of $J_j$ is $C_j = e_1 \prod_{J_k \in A_j}(1+b_k)$, where $A_j$ denotes the set of jobs among $J_1, \dots, J_j$ scheduled after $[s_1, e_1]$; since $t_0 + u = t_0 \prod_{J_k \text{ before}}(1+b_k)$ and $t_0 + v = t_0 \prod_{J_k \text{ on } M_2}(1+b_k)$, we have $C_j = \dfrac{e_1 t_0^2 \prod_{k=1}^{j}(1+b_k)}{(t_0+u)(t_0+v)}$. Thus, $f_j(u, v) = f_{j-1}(u, v) + C_j$.

Combining the above cases, we design a dynamic programming algorithm as follows.

Algorithm DP1.

Step 1. Reindex the jobs in nondecreasing order of their deteriorating rates such that $b_1 \le b_2 \le \cdots \le b_n$.

Step 2 (Initialization). Set $f_0(0, 0) = 0$ and $f_0(u, v) = +\infty$ otherwise.

Step 3 (Iteration). For $j = 1, 2, \dots, n$ and all reachable states $(u, v)$, compute
$$f_j(u, v) = \min\left\{\begin{array}{ll} f_{j-1}\!\left(\dfrac{t_0+u}{1+b_j} - t_0,\, v\right) + (t_0+u), & \text{if } t_0 + u \le s_1,\\[4pt] f_{j-1}\!\left(u,\, \dfrac{t_0+v}{1+b_j} - t_0\right) + (t_0+v), & \\[4pt] f_{j-1}(u, v) + \dfrac{e_1 t_0^2 \prod_{k=1}^{j}(1+b_k)}{(t_0+u)(t_0+v)}, & \end{array}\right.$$
where a term is discarded if its arguments do not correspond to a state.

Step 4 (Solution). The optimal value is $\min_{u, v} f_n(u, v)$.

Theorem 5. The problem is solvable in $O(n s_1 V)$ time by Algorithm DP1, where $V = t_0 \prod_{j=1}^{n}(1+b_j)$.

Proof. The correctness of Algorithm DP1 is guaranteed by the above discussion. Note that $0 \le u \le s_1 - t_0$ and $0 \le v \le V - t_0$. Thus, the recursive function has at most $n(s_1 - t_0)(V - t_0)$ states. Each iteration takes constant time to execute. Hence, the running time is $O(n s_1 V)$.
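Algorithm DP1 can be sketched in code. Since the printed recurrences did not survive extraction, the implementation below follows our reconstruction of the three cases, keying states by the completion times $c_1 = t_0 + u$ and $c_2 = t_0 + v$ of the block before the gap on $M_1$ and of the block on $M_2$; it is an illustration under these assumptions, not the published algorithm verbatim.

```python
# Sketch of DP1 for P2 with one gap [s1, e1] on M1, minimizing sum C_j.
# State (c1, c2): completion times of the before-gap block on M1 and of
# the block on M2; value: minimum total completion time so far.

def dp1(rates, t0, s1, e1):
    rates = sorted(rates)                  # SDR order (Lemma 4)
    states = {(t0, t0): 0.0}
    prod = 1.0
    for b in rates:
        prod *= 1 + b                      # prod_{k <= j} (1 + b_k)
        nxt = {}
        def upd(key, val):
            if key not in nxt or val < nxt[key]:
                nxt[key] = val
        for (c1, c2), f in states.items():
            if c1 * (1 + b) <= s1:         # Case 1: before the gap on M1
                upd((c1 * (1 + b), c2), f + c1 * (1 + b))
            upd((c1, c2 * (1 + b)), f + c2 * (1 + b))  # Case 2: on M2
            c_after = e1 * prod * t0 * t0 / (c1 * c2)  # Case 3: after the gap
            upd((c1, c2), f + c_after)
        states = nxt
    return min(states.values())

# Tiny hypothetical instance: t0 = 1, gap [2, 10] on M1.
print(dp1([1.0, 1.0], 1.0, 2.0, 10.0))  # 4.0: one job per machine before the gap
```

The dictionary over real-valued keys plays the role of the pseudo-polynomial state table in the integral-data analysis of Theorem 5.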

##### 3.2. A Fully Polynomial Time Approximation Scheme

In this subsection, we present a fully polynomial time approximation scheme for the two-machine problem when machine $M_2$ is always available. Following Woeginger [15], we show that the problem is DP-benevolent, which implies that there exists a fully polynomial time approximation scheme for our problem. For convenience, let the only nonavailable interval, which is on machine $M_1$, be $[s_1, e_1]$.

The fully polynomial time approximation scheme is based on Lemma 4 stated in Section 3.1. Thus, we first sort the jobs in nondecreasing order of their deteriorating rates such that $b_1 \le b_2 \le \cdots \le b_n$. The dynamic programming algorithm proposed in the following goes through $n$ phases. In the $j$th phase, we input the deteriorating rate $b_j$; meanwhile, a state set $\mathcal{S}_j$ is generated. Any state in $\mathcal{S}_j$ is a vector $[u_1, u_2, u_3, u_4]$ which encodes a partial schedule for the first $j$ jobs $J_1, \dots, J_j$. The component $u_1$ represents the total processing time before $[s_1, e_1]$ on machine $M_1$, $u_2$ represents the total processing time after $[s_1, e_1]$ on machine $M_1$, $u_3$ represents the total processing time on machine $M_2$, and the component $u_4$ represents the objective value of the current schedule. The initial set $\mathcal{S}_0$ contains the only state $[0, 0, 0, 0]$. A state in $\mathcal{S}_j$ is generated from a state in $\mathcal{S}_{j-1}$ by three mappings $F_1$, $F_2$, and $F_3$. Intuitively, $F_1$ puts job $J_j$ at the end before $[s_1, e_1]$ on machine $M_1$ if this is possible for the given state; $F_2$ puts job $J_j$ at the end after $[s_1, e_1]$ on machine $M_1$; and $F_3$ puts job $J_j$ at the end on machine $M_2$. Finally, set $\mathcal{S}_j = F_1(\mathcal{S}_{j-1}) \cup F_2(\mathcal{S}_{j-1}) \cup F_3(\mathcal{S}_{j-1})$.

Combining the above discussion, we design a dynamic programming algorithm as follows.

Algorithm DP2.

Initialize $\mathcal{S}_0 = \{[0, 0, 0, 0]\}$.
For $j = 1$ to $n$ do:
Let $\mathcal{S}_j = \emptyset$. For every state $[u_1, u_2, u_3, u_4] \in \mathcal{S}_{j-1}$ do: add the states $F_2[j; u_1, u_2, u_3, u_4]$ and $F_3[j; u_1, u_2, u_3, u_4]$ to $\mathcal{S}_j$; if job $J_j$ still fits before $s_1$, then also add the state $F_1[j; u_1, u_2, u_3, u_4]$ to $\mathcal{S}_j$. Endfor.
Endfor.
Output $\min\{u_4 : [u_1, u_2, u_3, u_4] \in \mathcal{S}_n\}$.

Note that the number of states in the above dynamic programming is pseudopolynomially bounded. There holds the following result.
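The state-set computation behind DP2 can be sketched as follows. The state encoding (block completion times) and the geometric trimming grid controlled by `eps` are our assumptions; Woeginger's framework [15] prescribes the exact trimming used in the actual FPTAS. With `eps = 0` the sketch is an exact dynamic program.

```python
import math

# Sketch of the state-set DP behind DP2. State (c1, cA, c2): completion
# time of the block before [s1, e1] on M1, of the block after [s1, e1] on
# M1 (which starts at e1), and of the block on M2; plus the objective f.
# With eps > 0, states falling in the same geometric grid cell are merged,
# which is the trimming idea behind the FPTAS.

def dp2(rates, t0, s1, e1, eps=0.0):
    rates = sorted(rates)  # SDR order (Lemma 4)
    if eps > 0:
        grid = lambda x: round(math.log(x) / math.log(1.0 + eps))
    else:
        grid = lambda x: x
    states = {(grid(t0), grid(e1), grid(t0)): (t0, e1, t0, 0.0)}
    for b in rates:
        nxt = {}
        for c1, cA, c2, f in states.values():
            cand = [(c1, cA * (1 + b), c2, f + cA * (1 + b)),   # F2: after the gap
                    (c1, cA, c2 * (1 + b), f + c2 * (1 + b))]   # F3: on M2
            if c1 * (1 + b) <= s1:                              # F1: before the gap
                cand.append((c1 * (1 + b), cA, c2, f + c1 * (1 + b)))
            for s in cand:
                k = (grid(s[0]), grid(s[1]), grid(s[2]))
                if k not in nxt or s[3] < nxt[k][3]:
                    nxt[k] = s
        states = nxt
    return min(f for _, _, _, f in states.values())

print(dp2([1.0, 1.0], 1.0, 2.0, 10.0))  # exact run (eps = 0)
```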

Theorem 6. There exists a fully polynomial time approximation scheme for the two-machine total completion time problem when one machine is always available.

Proof. The functions $F_1$, $F_2$, and $F_3$ are vectors of polynomials with nonnegative coefficients, and the functions that yield the components of $F_1$, $F_2$, and $F_3$ are polynomials. Moreover, all polynomials depend linearly on the state components. The inequality inside the "if" operator can be checked in polynomial time. The objective function is a polynomial with nonnegative coefficients. Therefore, similar to the example in Section 5.3 of Woeginger [15], it is not hard to verify that the above dynamic programming satisfies the conditions of Lemma 6.1 and Theorem 2.5 from [15]. Thus, the problem is DP-benevolent. As a result, there exists a fully polynomial time approximation scheme for our problem when one machine is always available.

#### 4. Minimizing the Total Weighted Completion Time

In this section, we extend the dynamic programming algorithm to the total weighted completion time minimization problem. We again assume that machine $M_2$ is always available and that the only nonavailable interval, on machine $M_1$, is $[s_1, e_1]$.

We assume that the jobs are reindexed such that $\frac{b_1}{w_1(1+b_1)} \le \frac{b_2}{w_2(1+b_2)} \le \cdots \le \frac{b_n}{w_n(1+b_n)}$. For convenience, denote this order by Weighted Deteriorating Rate (WDR for short).

Lemma 7. In any optimal solution, the jobs scheduled before the nonavailable interval on $M_1$ are processed in the WDR order, and so are the jobs scheduled after the nonavailable interval and the jobs on machine $M_2$.

This lemma can be proved by a standard pairwise interchange argument.
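The printed WDR criterion was lost in extraction; the following derivation (our reconstruction, consistent with the model $p_j = b_j t$) recovers it by the interchange argument itself, comparing two adjacent jobs $J_j$ and $J_k$ that start at time $t$ on the same machine, with no unavailable interval in between.

```latex
% Objective contribution of the two jobs under either order:
\begin{align*}
\text{$J_j$ first:}\quad & w_j\,t(1+b_j) + w_k\,t(1+b_j)(1+b_k),\\
\text{$J_k$ first:}\quad & w_k\,t(1+b_k) + w_j\,t(1+b_j)(1+b_k).
\end{align*}
% Dividing by t > 0, scheduling J_j first is no worse if and only if
\begin{align*}
w_j(1+b_j) + w_k(1+b_j)(1+b_k) &\le w_k(1+b_k) + w_j(1+b_j)(1+b_k)\\
\iff\ -\,w_j(1+b_j)\,b_k &\le -\,w_k(1+b_k)\,b_j\\
\iff\ \frac{b_j}{w_j(1+b_j)} &\le \frac{b_k}{w_k(1+b_k)}.
\end{align*}
```

Setting $w_j = 1$ for all jobs, the criterion reduces to $b_j \le b_k$, recovering the SDR order of Lemma 4 as a special case.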

We assume that the jobs are reindexed in the WDR order. Similar to Section 3.1, let $f_j(u, v)$ denote the optimal value of the objective function satisfying the following conditions: (i) the jobs in consideration are $J_1, J_2, \dots, J_j$; (ii) the total processing time of the jobs before $[s_1, e_1]$ on $M_1$ is $u$; (iii) the total processing time of the jobs on $M_2$ is $v$.

We design a dynamic programming algorithm as follows.

Algorithm DP3.

Step 1. Reindex the jobs in the WDR order, that is, such that $\frac{b_1}{w_1(1+b_1)} \le \frac{b_2}{w_2(1+b_2)} \le \cdots \le \frac{b_n}{w_n(1+b_n)}$.

Step 2 (Initialization). Set $f_0(0, 0) = 0$ and $f_0(u, v) = +\infty$ otherwise.

Step 3 (Iteration). For $j = 1, 2, \dots, n$ and all reachable states $(u, v)$, compute
$$f_j(u, v) = \min\left\{\begin{array}{ll} f_{j-1}\!\left(\dfrac{t_0+u}{1+b_j} - t_0,\, v\right) + w_j(t_0+u), & \text{if } t_0 + u \le s_1,\\[4pt] f_{j-1}\!\left(u,\, \dfrac{t_0+v}{1+b_j} - t_0\right) + w_j(t_0+v), & \\[4pt] f_{j-1}(u, v) + w_j \dfrac{e_1 t_0^2 \prod_{k=1}^{j}(1+b_k)}{(t_0+u)(t_0+v)}, & \end{array}\right.$$
where a term is discarded if its arguments do not correspond to a state.

Step 4 (Solution). The optimal value is $\min_{u, v} f_n(u, v)$.

Theorem 8. The problem is solvable in $O(n s_1 V)$ time by Algorithm DP3, where $V = t_0 \prod_{j=1}^{n}(1+b_j)$.
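Algorithm DP3 admits the same kind of sketch as DP1. The state encoding (completion times of the three job blocks) and the WDR key $b_j/(w_j(1+b_j))$ below are our reconstructions, since the printed formulas were lost; each job now contributes $w_j C_j$ to the objective.

```python
# Sketch of DP3 for the weighted problem: jobs are (b, w) pairs, taken in
# WDR order, i.e. nondecreasing b / (w * (1 + b)). State (c1, cA, c2):
# completion times of the block before [s1, e1] on M1, after [s1, e1] on
# M1 (starting at e1), and on M2; value: sum of w_j * C_j so far.

def dp3(jobs, t0, s1, e1):
    jobs = sorted(jobs, key=lambda bw: bw[0] / (bw[1] * (1 + bw[0])))  # WDR
    states = {(t0, e1, t0): 0.0}
    for b, w in jobs:
        nxt = {}
        for (c1, cA, c2), f in states.items():
            cand = [(c1, cA * (1 + b), c2, f + w * cA * (1 + b)),   # after gap on M1
                    (c1, cA, c2 * (1 + b), f + w * c2 * (1 + b))]   # on M2
            if c1 * (1 + b) <= s1:                                  # before gap on M1
                cand.append((c1 * (1 + b), cA, c2, f + w * c1 * (1 + b)))
            for c1n, cAn, c2n, fn in cand:
                k = (c1n, cAn, c2n)
                if k not in nxt or fn < nxt[k]:
                    nxt[k] = fn
        states = nxt
    return min(states.values())

# Hypothetical instance: t0 = 1, gap [2, 10] on M1.
print(dp3([(1.0, 1.0), (1.0, 2.0)], 1.0, 2.0, 10.0))  # 6.0
```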

#### 5. Conclusions

In this paper, we considered parallel-machine scheduling with time-dependent processing times and machine availability constraints. We showed that both of our problems are NP-hard in the strong sense. We analyzed the LDR and LS algorithms and a lower bound on any optimal schedule, and we presented a dynamic programming algorithm and a fully polynomial time approximation scheme for the total completion time problem. Furthermore, we extended the dynamic programming algorithm to the total weighted completion time problem.

For future research, other objectives are worth considering. The design of a PTAS for our problems is another worthy topic.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors thank the editor and the anonymous reviewers for their helpful and detailed comments on an earlier version of the paper. This work was supported by the National Natural Science Foundation of China (11201259), the Doctoral Fund of the Ministry of Education (20123705120001, 20123705110003), the Domestic Visiting Scholar Program for Outstanding Teachers of Higher Education in Shandong Province, and the Natural Science Foundation of Shandong Province (ZR2014AM012, BS2013SF016, J13LI09).

#### References

1. S. Gawiejnowicz, Time-Dependent Scheduling, vol. 18 of Monographs in Theoretical Computer Science. An EATCS Series, Springer, Berlin, Germany, 2008.
2. J. N. D. Gupta and S. K. Gupta, “Single facility scheduling with nonlinear processing times,” Computers and Industrial Engineering, vol. 14, no. 4, pp. 387–393, 1988.
3. S. Browne and U. Yechiali, “Scheduling deteriorating jobs on a single processor,” Operations Research, vol. 38, no. 3, pp. 495–498, 1990.
4. T. C. E. Cheng, Q. Ding, and B. M. T. Lin, “A concise survey of scheduling with time-dependent processing times,” European Journal of Operational Research, vol. 152, no. 1, pp. 1–13, 2004.
5. G. H. Graves and C.-Y. Lee, “Scheduling maintenance and semiresumable jobs on a single machine,” Naval Research Logistics, vol. 46, no. 7, pp. 845–863, 1999.
6. C.-Y. Lee, “Machine scheduling with an availability constraint,” Journal of Global Optimization, vol. 9, no. 3-4, pp. 395–416, 1996.
7. Y. Ma, C. B. Chu, and C. R. Zuo, “A survey of scheduling with deterministic machine availability constraints,” Computers and Industrial Engineering, vol. 58, no. 2, pp. 199–211, 2010.
8. C.-C. Wu and W.-C. Lee, “Scheduling linear deteriorating jobs to minimize makespan with an availability constraint on a single machine,” Information Processing Letters, vol. 87, no. 2, pp. 89–93, 2003.
9. M. Ji, Y. He, and T. C. E. Cheng, “Scheduling linear deteriorating jobs with an availability constraint on a single machine,” Theoretical Computer Science, vol. 362, no. 1–3, pp. 115–126, 2006.
10. S. Gawiejnowicz and A. Kononov, “Complexity and approximability of scheduling resumable proportionally deteriorating jobs,” European Journal of Operational Research, vol. 200, no. 1, pp. 305–308, 2010.
11. B. Fan, S. S. Li, L. Zhou, and L. Q. Zhang, “Scheduling resumable deteriorating jobs on a single machine with non-availability constraints,” Theoretical Computer Science, vol. 412, no. 4-5, pp. 275–280, 2011.
12. S. S. Li and B. Q. Fan, “Single-machine scheduling with proportionally deteriorating jobs subject to availability constraints,” Asia-Pacific Journal of Operational Research, vol. 29, no. 4, Article ID 1250019, 2012.
13. M. Ji and T. C. E. Cheng, “Parallel-machine scheduling of simple linear deteriorating jobs,” Theoretical Computer Science, vol. 410, no. 38–40, pp. 3761–3768, 2009.
14. M. Liu, F. F. Zheng, S. J. Wang, and Y. F. Xu, “Approximation algorithms for parallel machine scheduling with linear deterioration,” Theoretical Computer Science, vol. 497, pp. 108–111, 2013.
15. G. J. Woeginger, “When does a dynamic programming formulation guarantee the existence of a fully polynomial time approximation scheme (FPTAS)?” INFORMS Journal on Computing, vol. 12, no. 1, pp. 57–74, 2000.