Mathematical Problems in Engineering
Volume 2016, Article ID 3203728, 5 pages
http://dx.doi.org/10.1155/2016/3203728
Research Article

Parallel Machine Scheduling with Nested Processing Set Restrictions and Job Delivery Times

Shuguang Li1,2

1Key Laboratory of Intelligent Information Processing in Universities of Shandong (Shandong Institute of Business and Technology), Yantai 264005, China
2College of Computer Science and Technology, Shandong Institute of Business and Technology, Yantai 264005, China

Received 7 June 2016; Accepted 4 September 2016

Academic Editor: Bruno G. M. Robert

Copyright © 2016 Shuguang Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The problem of scheduling jobs with delivery times on parallel machines is studied, where each job can only be processed on a specific subset of the machines called its processing set. Two distinct processing sets are either nested or disjoint; that is, they do not partially overlap. All jobs are available for processing at time 0. The goal is to minimize the time by which all jobs are delivered, which is equivalent to minimizing the maximum lateness from the optimization viewpoint. A list scheduling approach is analyzed and its approximation ratio of 2 is established. In addition, a polynomial time approximation scheme is derived.

1. Introduction

The problems of scheduling with processing set restrictions have been extensively studied in the past few decades [1, 2]. In this class of problems, we are given a set J = {1, 2, ..., n} of jobs and a set M = {M_1, M_2, ..., M_m} of parallel machines. Each job j can only be processed on a certain subset M_j of the machines, called its processing set, and, on those machines, it takes p_j time units of uninterrupted processing to complete. Each machine can process at most one job at a time. The goal is to find an optimal schedule, where optimality is defined by some problem-dependent objective.

There are several important special cases of processing set restrictions: inclusive, nested, interval, and tree-hierarchical [2]. In the inclusive processing set case, for any two jobs i and j, either M_i ⊆ M_j or M_j ⊆ M_i. In the nested processing set case, either M_i ⊆ M_j, or M_j ⊆ M_i, or M_i ∩ M_j = ∅. In the interval processing set case, the machines are linearly ordered, and each job j is associated with two machine indices a_j and b_j (a_j ≤ b_j) such that M_j = {M_{a_j}, M_{a_j+1}, ..., M_{b_j}}. It is easy to see that the inclusive processing set and the nested processing set are two special cases of the interval processing set. In the tree-hierarchical processing set case, each machine is represented by a vertex of a tree, and each job j is associated with a machine index a_j such that M_j consists of the machines on the unique path from M_{a_j} to the root of the tree.
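Concretely, a family of processing sets is nested exactly when it is laminar: every pair of sets is comparable under inclusion or disjoint. A minimal sketch of this pairwise check (the helper name is ours, for illustration only):

```python
from itertools import combinations

def is_nested(processing_sets):
    """Return True iff every pair of processing sets is nested or disjoint,
    i.e., the family is laminar."""
    for A, B in combinations(processing_sets, 2):
        a, b = set(A), set(B)
        # the forbidden case is a partial overlap
        if not (a <= b or b <= a or not (a & b)):
            return False
    return True
```

For example, {1,2,3,4}, {1,2}, {3,4}, {1} form a nested family, while {1,2} and {2,3} partially overlap and do not.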

In this paper, we consider the problem of scheduling jobs with nested processing set restrictions on parallel machines. Besides its processing time p_j and processing set M_j, each job j requires an additional delivery time q_j ≥ 0 after completing its processing. If S_j denotes the time at which job j starts processing, then job j has been delivered at time S_j + p_j + q_j, which is called its delivery completion time. All jobs are available for processing at time 0. The objective is to minimize the time by which all jobs are delivered, that is, the maximum delivery completion time, D_max = max_j (S_j + p_j + q_j). Minimizing the maximum delivery completion time is equivalent to minimizing the maximum lateness from the optimization viewpoint [3]. Following the classification scheme for scheduling problems by Graham et al. [4], this problem is denoted P | M_j(nested) | D_max.

The motivation for this problem is the scenario in which the jobs (with nested processing set restrictions) are first processed on the machines and then delivered to their respective customers. In order to be competitive, the jobs need to be delivered to the customers as soon as possible. Thus, industry practitioners are required to coordinate job production and job delivery. In manufacturing and distribution systems, finished jobs are delivered by vehicles such as trucks. Since there are sufficient vehicles for delivering the jobs, delivery is a nonbottleneck activity. Therefore, we assume that all jobs may be delivered simultaneously. Considering job production and job delivery as one system, we choose D_max as the cost function to measure the customer service level. In particular, we are interested in the objective of minimizing the time by which all jobs are delivered.

The problem as stated is a natural generalization of the strongly NP-hard problem P | | C_max, which corresponds to the special case where all q_j = 0 and all M_j = M [5]. For NP-hard problems, the research focuses on developing polynomial time approximation algorithms. Given an instance I of a minimization problem and an approximation algorithm A, let F_A(I) and OPT(I) denote the objective value of the solution obtained by algorithm A and the optimal solution value, respectively, when applied to I. If F_A(I) ≤ ρ·OPT(I) for all I, then we say that algorithm A has approximation ratio ρ, and A is called a ρ-approximation algorithm for this problem. Ideally, one hopes to obtain a family of polynomial time algorithms such that, for any given ε > 0, the corresponding algorithm is a (1 + ε)-approximation algorithm; such a family is called a polynomial time approximation scheme (PTAS) [6].

When all M_j = M, P | M_j(nested) | D_max reduces to the classic scheduling problem P | | D_max. Woeginger [7] provided three fast heuristics for P | | D_max with small constant approximation ratios. For the single machine case with release times, 1 | r_j | D_max, Potts [8] presented a 1.5-approximation algorithm, and Hall and Shmoys [9, 10] proposed two polynomial time approximation schemes. For the parallel machine case with release times, P | r_j | D_max, Hall and Shmoys [10] obtained a PTAS, and Mastrolilli [11] developed a PTAS with a substantially improved running time. Mastrolilli [11] also presented an improved PTAS for 1 | r_j | D_max.

When all q_j = 0, P | M_j(nested) | D_max reduces to the problem of minimizing makespan with nested processing set restrictions, P | M_j(nested) | C_max. There are a number of approximation algorithms for P | M_j(nested) | C_max: a (2 − 1/m)-approximation algorithm [12], a 7/4-approximation algorithm [13], a 5/3-approximation algorithm [14], and two PTASs [15, 16].

As mentioned above, the classic scheduling problem (without processing set restrictions) of minimizing the maximum delivery completion time and the problem of minimizing makespan with nested processing set restrictions have been studied in the literature. However, to the best of our knowledge, the problem of minimizing the maximum delivery completion time with nested processing set restrictions, P | M_j(nested) | D_max, has not been studied to date. In this paper, we first use Graham's list scheduling [17] to get a simple and fast 2-approximation algorithm. We then derive a polynomial time approximation scheme, which builds heavily on the ideas of [15]. The PTAS result generalizes the approximation schemes of [15, 16], both of which deal with only the special case where all q_j = 0.

The paper is organized as follows. Section 2 presents a 2-approximation algorithm which uses list scheduling as a subroutine. The next three sections are devoted to designing the polynomial time approximation scheme: Section 3 shows how to simplify the input instance to get a so-called rounded instance, Section 4 shows how to solve the rounded instance optimally, and Section 5 puts the pieces together to derive the approximation scheme. Section 6 concludes the paper.

2. A 2-Approximation Algorithm

In this section, we will present a simple and fast 2-approximation algorithm for P | M_j(nested) | D_max.

As observed in [12], nested processing sets have a partial ordering defined by the inclusion relationship and, thus, offer a natural topological sort on jobs, taking more constrained jobs first.

We now analyze the behavior of Graham's list scheduling algorithm [17], with the jobs sorted to respect nestedness. The algorithm does not use the delivery times. We call it Nested-LS.

Nested-LS

Step 1. Place all the jobs in a list in the order of the topological sort on jobs, taking more constrained jobs first. Set L_i = 0, i = 1, ..., m.

Step 2. For the first unscheduled job j in the list, select a machine M_i ∈ M_j for which L_i is the smallest (ties broken arbitrarily). Assign job j to machine M_i, starting at time L_i, and set L_i = L_i + p_j. Repeat this step until all the jobs are scheduled.

The load on a machine is defined to be the total processing time of the jobs assigned to this machine. The quantity L_i represents the current load on machine M_i during the run of Nested-LS, i = 1, ..., m.
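The two steps above can be sketched as follows. This is an illustrative implementation, not the paper's code; sorting by |M_j| ascending is one valid way to put more constrained jobs first when the processing sets are nested, since a proper subset is strictly smaller.

```python
def nested_ls(jobs):
    """Sketch of Nested-LS.

    jobs: list of (p_j, q_j, M_j) triples, with M_j a set of machine indices.
    Returns (d_max, assignment), where d_max = max_j (S_j + p_j + q_j) and
    assignment lists (machine, start time) per job in list order.
    """
    machines = set().union(*(Mj for _, _, Mj in jobs))
    load = {i: 0 for i in machines}            # L_i: current load of machine i
    d_max, assignment = 0, []
    # Step 1: more constrained jobs first (valid topological order if nested)
    for p, q, Mj in sorted(jobs, key=lambda job: len(job[2])):
        # Step 2: least-loaded eligible machine, ties broken arbitrarily
        i = min(Mj, key=lambda mach: load[mach])
        s = load[i]                            # job starts when machine is free
        load[i] = s + p
        d_max = max(d_max, s + p + q)          # delivery completion time
        assignment.append((i, s))
    return d_max, assignment
```

Note that the delivery times q_j influence only the reported objective value, not the assignment decisions, matching the description above.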

Theorem 1. Nested-LS is a 2-approximation algorithm for P | M_j(nested) | D_max that runs in O(n log n + nm) time.

Proof. Let OPT be the objective value of an optimal schedule. Denote by D_LS the objective value of the schedule generated by Nested-LS. Let u be the first job in the list generated in Step 2 for which S_u + p_u + q_u = D_LS holds, where S_u denotes the time job u starts processing. Remove all the following jobs from the list. Remove all the machines not in M_u and all the jobs assigned to these machines; by the topological order, every remaining job has its processing set contained in M_u. This cannot increase OPT and does not decrease D_LS.
A straightforward lower bound on OPT is p_j + q_j for any job j; in particular, OPT ≥ p_u + q_u. By the rule of Nested-LS, at the time when job u is assigned to its machine M_i, M_i is the least-loaded machine among the machines in M_u. Since all remaining jobs are processed on the machines in M_u, it follows that S_u ≤ (1/|M_u|)·Σ_{j preceding u} p_j ≤ OPT. Consequently, we get D_LS = S_u + p_u + q_u ≤ OPT + OPT = 2·OPT.
The initial step of sorting the jobs to respect nestedness requires O(n log n) time. Assigning a job to the least-loaded eligible machine in Step 2 takes O(m) time. Thus, Nested-LS runs in O(n log n + nm) time.

3. Simplifying the Input

The nested structure of the processing sets can be depicted by a rooted tree T, in which each processing set is represented by a vertex, and the predecessor relationship is defined by inclusion of the processing sets. Each machine can be regarded as a one-element processing set and thus corresponds to a leaf in T, even if there are no jobs associated with this processing set. The root of T corresponds to the full machine set M.

For each vertex u in T, let M_u denote the set of machines associated with u, which is the disjoint union of all the processing sets associated with the sons of u. Let J_u denote the set of jobs j for which M_j coincides with M_u.

Following [15], we transform the tree T into a binary tree as follows. If there is a vertex u with at least three sons, create a new vertex v as the new father of two sons of u and as a new son of u. Repeat this procedure until we reach a binary tree, which consists of m leaves and at most m − 1 nonleaf vertices. Therefore, we can assume without loss of generality that T is a binary tree.
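The binarization step can be sketched as follows; the dictionary representation of the tree and the fresh vertex names "v0", "v1", ... are our assumptions for illustration.

```python
def binarize(children, root, fresh=0):
    """Convert a rooted tree into an equivalent binary tree.

    children: dict mapping each vertex to the list of its sons
              (leaves may be omitted from the dict).
    While some vertex has three or more sons, a new vertex is created as
    the father of two of its sons; leaves are never touched, so the set of
    machines below each original vertex is unchanged.
    """
    children = {v: list(c) for v, c in children.items()}
    stack = [root]
    while stack:
        v = stack.pop()
        sons = children.get(v, [])
        while len(sons) > 2:
            w = f"v{fresh}"
            fresh += 1
            # new vertex w becomes the new father of two sons of v
            children[w] = [sons.pop(), sons.pop()]
            sons.append(w)
        children[v] = sons
        stack.extend(sons)
    return children
```

A root with four leaf sons, for instance, becomes a binary tree with 4 leaves and 3 internal vertices (7 vertices in total), matching the 2m − 1 vertex count used later.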

Given an instance I of P | M_j(nested) | D_max, let OPT(I) be the objective value of an optimal schedule for I. Let D_LS be the objective value of the schedule for I generated by Nested-LS. We have

D_LS/2 ≤ OPT(I) ≤ D_LS.  (1)

Let k be some fixed positive integer. Classify the jobs as big and small according to their processing times: job j is big if p_j > D_LS/k², and otherwise it is small.

Modify I to get a rounded instance I' as follows:
(i) For each job j, round its delivery time up to the nearest integer multiple of D_LS/k; that is, set the rounded delivery time q'_j = ⌈q_j·k/D_LS⌉·D_LS/k. Since q_j ≤ OPT(I) ≤ D_LS, there are at most k + 1 different delivery times in the rounded instance. Let q^(l) denote the l-th delivery time, l = 1, ..., k + 1.
(ii) For each big job j, round its processing time up to the nearest integer multiple of D_LS/k³. Let p'_j denote the rounded processing time of big job j. Note that p'_j ≤ p_j + D_LS/k³ ≤ (1 + 1/k)·p_j, since p_j > D_LS/k².
(iii) For each vertex u in T, let P_{u,l} be the total processing time of the small jobs with delivery time q^(l) in J_u. Let P'_{u,l} be the value of P_{u,l} rounded up to the nearest integer multiple of D_LS/k³. The small jobs with delivery time q^(l) in J_u are replaced with P'_{u,l}·k³/D_LS new jobs, each of which has delivery time q^(l) and processing time D_LS/k³, l = 1, ..., k + 1.
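The three rounding steps can be sketched as follows. The particular grids D_LS/k and D_LS/k³ and the big-job threshold D_LS/k² are this sketch's parameter choices, not necessarily the paper's exact constants; any polynomially bounded grids of this shape support the analysis.

```python
import math

def round_instance(jobs, d_ls, k):
    """Sketch of the rounding of Section 3.

    jobs: list of (p_j, q_j, M_j); d_ls: the Nested-LS objective value.
    Returns (big, chunks): rounded big jobs, and merged small jobs as
    (chunk size, rounded delivery time, processing set, chunk count).
    """
    gq, gp = d_ls / k, d_ls / k**3            # delivery and processing grids
    big, small_total = [], {}
    for p, q, Mj in jobs:
        q_r = math.ceil(q / gq) * gq          # (i) round delivery times up
        if p > d_ls / k**2:                   # big job
            big.append((math.ceil(p / gp) * gp, q_r, Mj))   # (ii)
        else:                                 # (iii) aggregate small jobs
            key = (q_r, frozenset(Mj))        # by delivery time and M_j
            small_total[key] = small_total.get(key, 0) + p
    chunks = [(gp, q_r, set(Mj), math.ceil(t / gp))
              for (q_r, Mj), t in small_total.items()]
    return big, chunks
```

After this step every processing time in the rounded instance is an integer multiple of the grid D_LS/k³, which is exactly what the dynamic program of Section 4 exploits.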

Lemma 2. There is a schedule σ' for I' with objective value D(σ') ≤ (1 + O(1/k))·OPT(I).

Proof. Let σ* be an optimal schedule for instance I with objective value OPT(I). Recall that the single machine case can be solved optimally by Jackson's rule [18]: process the jobs successively in order of nonincreasing delivery times. Hence, we can assume that in σ* on each machine the jobs with the same rounded delivery time are processed together in succession. Let P_{i,l} be the total processing time of the small jobs with rounded delivery time q^(l) processed on machine M_i in σ*. Let P'_{i,l} be the value of P_{i,l} rounded up to the nearest integer multiple of D_LS/k³. Replace the small jobs with rounded delivery time q^(l) processed on machine M_i in σ* by P'_{i,l}·k³/D_LS + 2 slots, each of which has delivery time q^(l) and size D_LS/k³, i = 1, ..., m, l = 1, ..., k + 1. (The two extra slots per machine and per delivery time absorb the per-vertex rounding in item (iii) of the construction of I'.)
We next explain how to assign all small jobs in I' to these slots of size D_LS/k³. Starting from the leaves of the tree T, we work our way towards the root in a bottom-up fashion. Suppose that we are handling vertex u. At this point, all the descendants of u except u itself have already been handled and some slots have been occupied. Let T_u be the subtree of T rooted at u, which contains at most 2·m_u − 1 vertices, where m_u denotes the number of machines (leaves) in M_u. Let A_{u,l} denote the total processing time of the small jobs with rounded delivery time q^(l) in J_v, summed over all vertices v of T_u, in instance I, and let N_{u,l} denote the number of the corresponding small jobs (each of size D_LS/k³) in I'. Let Q_{u,l} denote the number of slots with delivery time q^(l) on the machines in M_u. We have the following two inequalities: N_{u,l} ≤ A_{u,l}·k³/D_LS + 2·m_u − 1 and Q_{u,l} ≥ A_{u,l}·k³/D_LS + 2·m_u. Therefore, we get N_{u,l} ≤ Q_{u,l}.
Let B_{u,l} denote the set of small jobs in I' with delivery time q^(l), processing time D_LS/k³, and processing set M_u. Since N_{u,l} ≤ Q_{u,l}, when we handle vertex u, there are enough unoccupied slots of size D_LS/k³ with delivery time q^(l) on the machines in M_u to accommodate the jobs in B_{u,l}. For each job in B_{u,l} (l = 1, ..., k + 1), we assign an unoccupied slot to it and then mark this slot as occupied.
After we handle the root of T, we have fitted all small jobs in I' into the slots of size D_LS/k³ and thereby get an assignment of the small jobs in I' to machines. We schedule the small jobs in I' assigned to machine M_i as follows. If P_{i,l} = 0 (which means that in σ* machine M_i processes no small job with rounded delivery time q^(l)) but the replacement procedure has assigned some small jobs with delivery time q^(l) to M_i, then we schedule these small jobs on M_i first, i = 1, ..., m, l = 1, ..., k + 1. We then schedule all the other small jobs assigned to M_i in the positions of their slots, that is, keeping the order of σ*. (In fact, scheduling all the small jobs assigned to M_i by Jackson's rule can only improve the solution. We schedule them in this way only for ease of the subsequent analysis.)
The big jobs in I' are easily scheduled. We simply replace every big job in σ* by its rounded counterpart in I'.
Let σ' denote the obtained schedule for I'. It remains only to analyze how the objective value changes when we move from σ* to σ'. Rounding up the delivery times may increase the objective value by D_LS/k. Rounding up the processing times of the big jobs may increase the objective value by a factor of 1 + 1/k. Since there are at most k + 1 different delivery times in I', the replacement procedure for the small jobs increases the load of every machine, and hence the objective value, by at most 3·(k + 1)·D_LS/k³. Hence, D(σ'), the objective value of σ', is no more than (1 + 1/k)·OPT(I) + D_LS/k + 3·(k + 1)·D_LS/k³ ≤ (1 + O(1/k))·OPT(I), where the last step uses inequality (1).

Lemma 3. Let σ' be a schedule for I' with objective value D(σ'). Then, there is a schedule σ for I with objective value D(σ) ≤ D(σ') + O(OPT(I)/k).

Proof. We assume without loss of generality that, in σ' on each machine, the jobs are scheduled by Jackson's rule, and thus the small jobs with the same delivery time are processed together in succession on each machine. Let R_{i,l} be the total processing time of the small jobs with delivery time q^(l) processed on machine M_i in σ', i = 1, ..., m, l = 1, ..., k + 1.
We now explain how to assign all small jobs in I to machines. Starting from the leaves of the tree T, we work our way towards the root in a bottom-up fashion, as in the proof of Lemma 2. Suppose that we are handling vertex u. At this point, all the descendants of u except u itself have already been handled and the associated small jobs have been assigned. Let A_{u,l} denote the total processing time of the small jobs with rounded delivery time q^(l) in J_v, summed over all vertices v of the subtree rooted at u, in instance I, and let A'_{u,l} denote the total processing time of the corresponding small jobs in I', l = 1, ..., k + 1. We have A_{u,l} ≤ A'_{u,l} ≤ Σ_{M_i ∈ M_u} R_{i,l}.
Let B_{u,l} denote the set of small jobs in I with rounded delivery time q^(l) and processing set M_u, l = 1, ..., k + 1. For each machine M_i ∈ M_u, if R_{i,l} > 0 (which means that in σ' machine M_i processes at least one small job with delivery time q^(l)), we assign the small jobs in B_{u,l} to M_i until the first time that the total processing time of the small jobs with delivery time q^(l) assigned to M_i exceeds R_{i,l} (or until there are no unassigned small jobs in B_{u,l}). Since A_{u,l} ≤ Σ_{M_i ∈ M_u} R_{i,l}, each job in B_{u,l} can be assigned to a machine in M_u.
After we handle the root of T, we have assigned all small jobs in I to machines. The big jobs in I are easily scheduled. We simply replace every rounded big job in σ' with its original counterpart in I. We then schedule all the jobs assigned to M_i by Jackson's rule, that is, keeping their order in σ', i = 1, ..., m.
Let D(σ) be the objective value of the obtained schedule σ for I. Since all small jobs in I have processing time at most D_LS/k² and there are at most k + 1 different delivery times in I', the replacement procedure for the small jobs may increase the load of a machine, and hence the objective value, by at most (k + 1)·D_LS/k² ≤ 4·OPT(I)/k. The replacement of big jobs will not increase the objective value. Hence, we get D(σ) ≤ D(σ') + O(OPT(I)/k).

4. Solving the Rounded Instance Optimally

In this section, we will present a polynomial time algorithm that optimally solves the rounded instance I' obtained in the preceding section. The basic idea is to generalize the dynamic programming method used in [15] for solving problem P | M_j(nested) | C_max.

Recall that all jobs in the rounded instance I' have processing times of the form i·D_LS/k³, where i ∈ {1, 2, ..., k³}. We thus can represent subsets of the jobs in I' as vectors v = (n_{l,i}), where n_{l,i} denotes the number of jobs with delivery time q^(l) and processing time i·D_LS/k³, l = 1, ..., k + 1, i = 1, ..., k³. Let V be the set of all such vectors. Clearly, there are at most (n + 1)^{(k+1)·k³} different vectors in V. For each vertex u in T, the set J_u of jobs in I' with M_j = M_u is encoded by a vector v_u ∈ V.

Let u be a vertex of T and v ∈ V, where vector v represents a subset of jobs whose processing sets are proper supersets of M_u. Let F(u, v) denote the minimum objective value over all the schedules which process the jobs in the descendants of u and the additional jobs in v on the machines in M_u. All jobs in the descendants of u must obey the processing set restrictions, whereas the jobs in v can be assigned to any machine in M_u.

All values F(u, v) can be computed and tabulated in a bottom-up fashion. If u is a leaf, then F(u, v) is equal to the objective value of the single machine schedule which processes all the jobs in J_u and v by Jackson's rule. If u is a nonleaf vertex, then u has two sons u_1 and u_2, and

F(u, v) = min { max { F(u_1, v_1), F(u_2, v_2) } : v_1 + v_2 = v + v_u, v_1, v_2 ∈ V }.

Since there are at most (n + 1)^{(k+1)·k³} different vectors in V and k is a fixed integer that does not depend on the input, all values F(u, v) can be computed in polynomial time. Finally, since, for the root vertex r, there is no job whose processing set is a proper superset of M_r = M, the optimal objective value for I' is obtained by computing F(r, 0), where 0 denotes the all-zero vector.
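At a leaf, the recursion is plain Jackson's rule on one machine; a minimal sketch (the function name is ours):

```python
def jackson_value(jobs):
    """Objective value of Jackson's rule on a single machine: process the
    jobs in nonincreasing order of delivery time, starting at time 0.

    jobs: list of (p_j, q_j) pairs.
    """
    t, d_max = 0, 0
    for p, q in sorted(jobs, key=lambda job: -job[1]):
        t += p                        # completion time C_j
        d_max = max(d_max, t + q)     # delivery completion time C_j + q_j
    return d_max
```

For instance, on the two jobs (p, q) = (2, 5) and (3, 0), Jackson's rule processes the job with the larger delivery time first and achieves objective 7, whereas the opposite order would give 10.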

We establish the following theorem.

Theorem 4. For any fixed integer k, the rounded instance I' can be solved optimally in polynomial time.

5. The Approximation Scheme

We can now put things together.

Let ε > 0 be an arbitrarily small positive constant. Choose k = Θ(1/ε) large enough that the O(1/k) terms in Lemmas 2 and 3 are at most ε/2 each. Given an instance I of P | M_j(nested) | D_max, we get the rounded instance I' as described in Section 3. Solve I' optimally as described in Section 4. Denote by σ' the resulting optimal schedule, with objective value D(σ'), for I'. Transform σ' into a schedule σ, with objective value D(σ), for instance I as described in the proof of Lemma 3.

By Lemma 2 and the optimality of σ' for I', we have D(σ') ≤ (1 + ε/2)·OPT(I). Together with Lemma 3, we have D(σ) ≤ D(σ') + (ε/2)·OPT(I). It follows that D(σ) ≤ (1 + ε)·OPT(I). We have the following theorem.

Theorem 5. Problem P | M_j(nested) | D_max admits a polynomial time approximation scheme.

6. Concluding Remarks

In this paper, we initiated the study of scheduling jobs with delivery times and nested processing set restrictions on parallel machines. The objective is to minimize the maximum delivery completion time. For this strongly NP-hard problem, we presented a simple and fast 2-approximation algorithm. We also presented a polynomial time approximation scheme. A natural open problem is to design fast algorithms with approximation ratios better than 2. It would also be interesting to study the problem with job release times.

Competing Interests

The author declares that there are no competing interests regarding the publication of this article.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61373079, 61472227, 61272244, and 61672327), Key Project of Chinese Ministry of Education (no. 212101), and Shandong Provincial Natural Science Foundation of China (no. ZR2013FM015).

References

  1. J. Y.-T. Leung and C.-L. Li, “Scheduling with processing set restrictions: a survey,” International Journal of Production Economics, vol. 116, no. 2, pp. 251–262, 2008.
  2. J. Y.-T. Leung and C.-L. Li, “Scheduling with processing set restrictions: a literature update,” International Journal of Production Economics, vol. 175, pp. 1–11, 2016.
  3. H. Kise, T. Ibaraki, and H. Mine, “Performance analysis of six approximation algorithms for the one-machine maximum lateness scheduling problem with ready times,” Journal of the Operations Research Society of Japan, vol. 22, no. 3, pp. 205–224, 1979.
  4. R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy Kan, “Optimization and approximation in deterministic sequencing and scheduling: a survey,” Annals of Discrete Mathematics, vol. 5, pp. 287–326, 1979.
  5. M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, New York, NY, USA, 1979.
  6. C. H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Dover, 1998.
  7. G. J. Woeginger, “Heuristics for parallel machine scheduling with delivery times,” Acta Informatica, vol. 31, no. 6, pp. 503–512, 1994.
  8. C. N. Potts, “Analysis of a heuristic for one machine sequencing with release dates and delivery times,” Operations Research, vol. 28, no. 6, pp. 1436–1441, 1980.
  9. L. A. Hall and D. B. Shmoys, “Jackson's rule for single-machine scheduling: making a good heuristic better,” Mathematics of Operations Research, vol. 17, no. 1, pp. 22–35, 1992.
  10. L. A. Hall and D. B. Shmoys, “Approximation schemes for constrained scheduling problems,” in Proceedings of the 30th Annual IEEE Symposium on Foundations of Computer Science, pp. 134–139, 1989.
  11. M. Mastrolilli, “Efficient approximation schemes for scheduling problems with release dates and delivery times,” Journal of Scheduling, vol. 6, no. 6, pp. 521–531, 2003.
  12. C. A. Glass and H. Kellerer, “Parallel machine scheduling with job assignment restrictions,” Naval Research Logistics, vol. 54, no. 3, pp. 250–257, 2007.
  13. Y. Huo and J. Y.-T. Leung, “Parallel machine scheduling with nested processing set restrictions,” European Journal of Operational Research, vol. 204, no. 2, pp. 229–236, 2010.
  14. Y. Huo and J. Y.-T. Leung, “Fast approximation algorithms for job scheduling with processing set restrictions,” Theoretical Computer Science, vol. 411, no. 44–46, pp. 3947–3955, 2010.
  15. G. Muratore, U. M. Schwarz, and G. J. Woeginger, “Parallel machine scheduling with nested job assignment restrictions,” Operations Research Letters, vol. 38, no. 1, pp. 47–50, 2010.
  16. L. Epstein and A. Levin, “Scheduling with processing set restrictions: PTAS results for several variants,” International Journal of Production Economics, vol. 133, no. 2, pp. 586–595, 2011.
  17. R. L. Graham, “Bounds for certain multiprocessing anomalies,” Bell System Technical Journal, vol. 45, no. 9, pp. 1563–1581, 1966.
  18. J. R. Jackson, “Scheduling a production line to minimize maximum tardiness,” Research Report 43, Management Science Research Project, University of California, Los Angeles, Calif, USA, 1955.