Advances in Operations Research
Volume 2012 (2012), Article ID 748597, 9 pages
http://dx.doi.org/10.1155/2012/748597
Research Article

The Cmax Problem of Scheduling Multiple Groups of Jobs on Multiple Processors at Different Speeds

Wei Ding

Department of Mathematics, Sun Yat-Sen University, Guangzhou 510275, China

Received 1 April 2012; Revised 4 July 2012; Accepted 19 July 2012

Academic Editor: Ching-Jong Liao

Copyright © 2012 Wei Ding. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We study the Cmax problem of scheduling n groups of jobs on n special-purpose processors and m general-purpose processors at different speeds, provided that the setup time of each job is at most α times its processing time. We first propose an improved LS algorithm. Then, by applying this new algorithm, we obtain two bounds for the ratio of the approximate solution T_LS to the optimal solution T* under two different conditions.

1. Introduction

Minimizing the makespan when scheduling n jobs {J_1, J_2, …, J_n} on m identical machines {1, 2, …, m}, where processing job J_j immediately after J_i requires a setup time w(i,j), is a well-studied problem. Since it is NP-hard (cf. [1]), quite a few authors have devoted their efforts to approximate and heuristic algorithms for it, as well as to the corresponding worst-case analysis.

In 1969, Graham [2] showed in his fundamental paper that the bound of this scheduling problem is 2 − 1/m when w(i,j) = 0 under the LS (List Scheduling) algorithm and that the tight bound is 4/3 − 1/3m under the LPT (Longest Processing Time) algorithm. In 1993, Ovacik and Uzsoy [3] proved that the bound is 4 − 2/m when w(i,j) ≤ t_j, where t_j is the processing time of the job J_j, under the LS algorithm. In 2003, Imreh [4] studied the online and offline problems on two groups of identical processors at different speeds, presented the LG (Load Greedy) algorithm, and showed that the bound for minimizing the makespan is 2 + (m − 1)/k and the bound for minimizing the sum of finish times is 2 + (m − 2)/k, where m and k are the numbers of processors in the two groups. Gairing et al. [5] proposed a simple combinatorial algorithm for the problem of scheduling n jobs on m processors at different speeds to minimize a cost stream and showed that it is effective and of low complexity.

Besides the above well-studied scheduling problem, one may face the problem of scheduling multiple groups of jobs on multiple processors in real production systems, such as the problem of processing different types of yarns on spinning machines in spinning mills. Recently, the problem of scheduling multiple groups of jobs on multiple processors at the same or different speeds was studied under the assumption that no job has a setup time. In 2006, Ding [6] studied the problem of scheduling n groups of jobs on one special-purpose processor and n general-purpose processors at the same speed under an improved LPT algorithm. In 2008, Ding [7] investigated the problem of scheduling n groups of jobs on n special-purpose processors and m general-purpose processors at the same speed under an improved LPT algorithm. In 2009, Ding [8] presented an improved LS algorithm for the Qm+2/rj/Cmax scheduling problem on m general-purpose processors and two special-purpose processors. In 2010, Ding [9] studied a heuristic algorithm for the Q//Cmax problem on multitasks with uniform processors. In the same year, Ding and Zhao [10] discussed an improved LS algorithm for the problem of scheduling multiple groups of jobs on multiple processors at the same speed provided that each job has a setup time.

Recently, Ding and Zhao [11] investigated an improved LS algorithm for the problem of scheduling multiple jobs on multiple uniform processors at different speeds provided that each job has a setup time. However, the problem of scheduling multiple groups of jobs on multiple processors at different speeds has not yet been studied when each job has a setup time. Note that the LPT algorithm and the improved LPT algorithm are not effective ways to deal with such a problem if each job has a setup time. Meanwhile, the classical LS algorithm is only useful for solving the problem of scheduling one group of jobs on multiple processors at the same or different speeds. Therefore, the purpose of this study is to propose an improved LS algorithm, based on the classical LS algorithm and the fact that the optimal solution T* is not less than the average finish time of all processors (see inequality (3.6) below), and to use this new algorithm to analyze the problem of scheduling multiple groups of jobs on multiple processors at different speeds provided that each job has a setup time.

The remainder of the paper is organized as follows. In Section 2, we propose an improved LS algorithm for this scheduling problem. In Section 3, we obtain two bounds for the ratio of the approximate solution T_LS to the optimal solution T* under the improved LS algorithm.

Notation 1. As above and henceforth, we let L_i (i = 1, …, n) denote the ith group of jobs, and let M_i (i = 1, …, n) and M_{n+j} (j = 1, …, m) denote the set of jobs on the ith special-purpose processor and the set of jobs on the jth general-purpose processor, respectively. Let n_r (r = 1, …, n) denote the number of jobs in the rth group. We then use J(r,i) (r = 1, …, n; i = 1, …, n_r) to denote the ith job of the rth group and use t(r,i) (r = 1, …, n; i = 1, …, n_r) to denote the processing time of J(r,i). Let P_r (r = 1, …, n) denote the set of processing times t(r,i) (i = 1, …, n_r). Moreover, we denote by s_i (i = 1, …, n) the speed of the special-purpose processor i and by s_{n+j} (j = 1, …, m) the speed of the general-purpose processor n + j, respectively.
Note that the speeds of general-purpose processors are less than those of special-purpose processors in real production systems. For simplicity, we take s_{n+j} = 1 (1 ≤ j ≤ m) and assume s_i ≥ 1 (i = 1, …, n). If the job J(h,j) (h = 1, …, n; j = 1, …, n_h) is processed after the job J(l,i) (l = 1, …, n; i = 1, …, n_l), then we use w(l,i;h,j) to denote the setup time the processor needs.
If the job J(r,i) is assigned to the processor k (k = 1, 2, …, n + m), then we write J(r,i) ∈ M_k. Let ML_k (k = 1, 2, …, n + m) stand for the set of jobs assigned to the processor k, in processing order, and let
$$MT_k := \sum_{J(h,j)\in M_k}\left(w(*,*;h,j)+\frac{t(h,j)}{s_k}\right),\qquad k=1,2,\dots,n+m.\qquad(1.1)$$
Then we use MT_k (k = 1, 2, …, n + m) to denote the actual finish time of the processor k. Next, we write T_LS = max_{1≤k≤n+m} MT_k for the actual latest finish time of the n + m processors under the improved LS algorithm and T* for the actual latest finish time of the n + m processors under the optimal algorithm, respectively. Finally, we call T_LS the approximate solution under the improved LS algorithm and T_LS/T* the bound of a scheduling problem under the improved LS algorithm.
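To make the bookkeeping in (1.1) concrete, the following minimal Python sketch (all names are ours, not the paper's) computes the finish times MT_k and the makespan T_LS for a given assignment; how the setup of the first job on a processor is charged is not spelled out by the notation w(*,*;h,j), so it is delegated to the supplied function w.

```python
from typing import Callable, Dict, List, Optional, Tuple

Job = Tuple[int, int]  # J(h, j): the j-th job of the h-th group


def finish_times(ML: Dict[int, List[Job]],                 # ML[k]: jobs on processor k, in order
                 t: Dict[Job, float],                      # t[(h, j)]: processing time of J(h, j)
                 s: Dict[int, float],                      # s[k]: speed of processor k
                 w: Callable[[Optional[Job], Job], float]  # setup time w(*,*; h, j)
                 ) -> Dict[int, float]:
    """MT_k as in (1.1): accumulated setup times plus t(h, j)/s_k over the jobs on processor k."""
    MT = {}
    for k, jobs in ML.items():
        total, prev = 0.0, None
        for job in jobs:
            total += w(prev, job) + t[job] / s[k]  # prev is None for the first job on processor k
            prev = job
        MT[k] = total
    return MT


def makespan(MT: Dict[int, float]) -> float:
    """T_LS = max over k of MT_k."""
    return max(MT.values())
```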

2. An Improved LS Algorithm

In this section, we propose an improved LS algorithm for the problem of scheduling multiple groups of jobs on multiple processors at different speeds, provided that each job has a setup time.

The algorithm is defined as follows: whenever a processor becomes idle for assignment, the first unexecuted job is taken from the list and assigned to this processor. If more than one processor is idle, then the algorithm chooses the processor with the smallest index. If the chosen processor is the special-purpose processor of some group, then the first unexecuted job in this group is assigned to it. If the chosen processor is a general-purpose processor, then the job with the smallest second index is assigned to it; if several groups have an unexecuted job with the same second index, then the job with the smallest first index is assigned. In addition, the jobs within each group may be listed in an arbitrary order at the beginning of processing.

The steps of the improved LS algorithm are the following.

Step 1 (Initialization). Set Q_1 = {1, 2, …, n} and Q_2 = {n + 1, n + 2, …, n + m}. Set i_r = 1 for each r ∈ Q_1, and set ML_k = ∅ and MT_k = 0 for each k ∈ Q_1 ∪ Q_2.

Step 2 (Choose the first idle processor). If i_r > n_r for some r ∈ Q_1, then set Q_1 = Q_1 − {r} (i.e., all jobs in the group L_r have been assigned). If Q_1 = ∅, then go to Step 5 (i.e., all jobs in all groups have been assigned). Otherwise, set p = min{k' | MT_{k'} = min_{k ∈ Q_1 ∪ Q_2} MT_k} (i.e., seek the first idle processor).

Step 3 (Choose the job). If p ≤ n, then set r = p, q = i_p, i_p = i_p + 1 (i.e., a special-purpose processor is the first idle processor, so the first job waiting for assignment in the pth group is chosen). If p > n (i.e., a general-purpose processor is the first idle processor), then set h = min{r' | i_{r'} = min_{r ∈ Q_1} i_r} (i.e., the job with the smallest second index among the nonempty groups is chosen), r = h, q = i_h, i_h = i_h + 1.

Step 4. Update the assignment and the finish time of the processor p. Set ML_p = ML_p ∪ {J(r,q)} and MT_p = MT_p + w(*,*;r,q) + t(r,q)/s_p. Then go to Step 2.

Step 5. Output the assignment ML_k, k = 1, 2, …, n + m, of every processor and the latest finish time
$$T_{\mathrm{LS}}=\max_{1\le k\le n+m}\{MT_k\}.\qquad(2.1)$$
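For concreteness, here is a minimal Python sketch of Steps 1–5 under a few assumptions of convenience: the caller supplies the processing times, the speeds, and a setup-time function w (the name improved_ls and all parameter names are ours), and the setup charged to the first job on a processor, which the notation w(*,*;h,j) leaves implicit, is, as in the sketch after (1.1), delegated to w via a None predecessor. Ties are broken by the smallest processor (respectively group) index, as specified in Steps 2 and 3.

```python
from typing import Callable, Dict, List, Optional, Tuple

Job = Tuple[int, int]  # J(r, q): the q-th job of the r-th group


def improved_ls(groups: List[List[float]],                 # groups[r-1][q-1] = t(r, q)
                m: int,                                    # number of general-purpose processors
                s: Dict[int, float],                       # s[k]: speed of processor k, k = 1..n+m
                w: Callable[[Optional[Job], Job], float]   # setup time w(*,*; r, q)
                ) -> Tuple[Dict[int, List[Job]], Dict[int, float], float]:
    """Improved LS: processors 1..n are special-purpose (processor r serves group r),
    processors n+1..n+m are general-purpose."""
    n = len(groups)
    # Step 1: initialization.
    Q1 = set(range(1, n + 1))
    Q2 = set(range(n + 1, n + m + 1))
    i = {r: 1 for r in Q1}
    ML: Dict[int, List[Job]] = {k: [] for k in Q1 | Q2}
    MT: Dict[int, float] = {k: 0.0 for k in Q1 | Q2}

    while True:
        # Step 2: drop exhausted groups; stop when all groups are exhausted; otherwise
        # pick the first idle processor (smallest finish time, ties by smallest index).
        for r in list(Q1):
            if i[r] > len(groups[r - 1]):
                Q1.remove(r)
        if not Q1:
            break
        p = min(Q1 | Q2, key=lambda k: (MT[k], k))

        # Step 3: choose the job for processor p.
        if p <= n:
            r = p                                 # special-purpose: next job of its own group
        else:
            r = min(Q1, key=lambda g: (i[g], g))  # general-purpose: smallest job index, then group
        q = i[r]
        i[r] += 1

        # Step 4: update the assignment and the finish time of processor p.
        prev = ML[p][-1] if ML[p] else None
        ML[p].append((r, q))
        MT[p] += w(prev, (r, q)) + groups[r - 1][q - 1] / s[p]

    # Step 5: output the assignment, the finish times, and T_LS = max_k MT_k as in (2.1).
    return ML, MT, max(MT.values())
```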

3. Analysis of the Improved LS Algorithm

In this section, we obtain two bounds for the ratio of the approximate solution T_LS to the optimal solution T* under two different conditions.

Theorem 3.1. Consider the problem of scheduling n groups of jobs {L_1, L_2, …, L_n} on the special-purpose processors {1, 2, …, n} and the general-purpose processors {n + 1, n + 2, …, n + m} at different speeds, provided that each job has a setup time. Assume that w(l,i;h,j) ≤ α t(h,j) for all l, h, i, j. If the optimal solution T* is bigger than the processing time t(r,j) of the latest finish job J(r,j), then the bound of this scheduling problem under the improved LS algorithm is
$$\frac{T_{\mathrm{LS}}}{T^*}\le\frac{(n+m-1)\left(\alpha+1/s_k\right)+(\alpha+2)\sum_{i=1}^{n}s_i}{n+m}\qquad(3.1)$$
for any α ≥ 0, where s_k is the speed of the latest finish processor.

Proof. Based on the improved LS algorithm, we may assume that some processor k (1 ≤ k ≤ n + m) is the latest finish processor and that the latest finish job is J(r,j) (1 ≤ r ≤ n, 1 ≤ j ≤ n_r). Then on the processor k, we have
$$T_{\mathrm{LS}}=MT_k.\qquad(3.2)$$
On the other processors, we have
$$MT_i\ge MT_k-\left(w(*,*;r,j)+\frac{t(r,j)}{s_k}\right),\qquad i=1,2,\dots,n+m,\ i\ne k.\qquad(3.3)$$
By the assumption w(*,*;r,j) ≤ α t(r,j), the hypothesis T* ≥ t(r,j), (3.2), and (3.3), we get
$$MT_i\ge T_{\mathrm{LS}}-\left(w(*,*;r,j)+\frac{t(r,j)}{s_k}\right)\ge T_{\mathrm{LS}}-\left(\alpha+\frac{1}{s_k}\right)t(r,j)\ge T_{\mathrm{LS}}-\left(\alpha+\frac{1}{s_k}\right)T^*,\qquad i=1,2,\dots,n+m,\ i\ne k.\qquad(3.4)$$
Thus
$$\sum_{i=1}^{n+m}MT_i=MT_k+\sum_{\substack{i=1\\ i\ne k}}^{n+m}MT_i\ge(m+n)T_{\mathrm{LS}}-(m+n-1)\left(\alpha+\frac{1}{s_k}\right)T^*.\qquad(3.5)$$
On the other hand, since T* is the optimal solution, it follows that
$$T^*\ge\frac{\sum_{l=1}^{n}\sum_{i=1}^{n_l}t(l,i)}{\sum_{i=1}^{n+m}s_i}.\qquad(3.6)$$
In view of the assumption and (3.6), we deduce
$$\sum_{i=1}^{n+m}MT_i=\sum_{i=1}^{n+m}\sum_{J(h,p)\in ML_i}\left(w(*,*;h,p)+\frac{t(h,p)}{s_i}\right)\le\sum_{i=1}^{n+m}\sum_{J(h,p)\in ML_i}\left(\alpha+\frac{1}{s_i}\right)t(h,p)\le(\alpha+2)\sum_{i=1}^{n+m}\sum_{J(h,p)\in ML_i}t(h,p)\le(\alpha+2)\sum_{h=1}^{n}\sum_{p=1}^{n_h}t(h,p)\le(\alpha+2)T^*\sum_{i=1}^{n+m}s_i.\qquad(3.7)$$
Using (3.5) and (3.7), we have
$$(\alpha+2)T^*\sum_{i=1}^{n+m}s_i\ge\sum_{i=1}^{n+m}MT_i\ge(m+n)T_{\mathrm{LS}}-(m+n-1)\left(\alpha+\frac{1}{s_k}\right)T^*.\qquad(3.8)$$
This yields
$$\left((m+n-1)\left(\alpha+\frac{1}{s_k}\right)+(\alpha+2)\sum_{i=1}^{n+m}s_i\right)T^*\ge(m+n)T_{\mathrm{LS}}.\qquad(3.9)$$
Therefore
$$\frac{T_{\mathrm{LS}}}{T^*}\le\frac{(n+m-1)\left(\alpha+1/s_k\right)+(\alpha+2)\sum_{i=1}^{n}s_i}{n+m}.\qquad(3.10)$$
This completes the proof of the theorem.

Example 3.2. Consider the following scheduling problem. Assume that there are three groups of jobs, where each group separately owns one special-purpose processor and all groups jointly own two general-purpose processors. Assume further that α = 1, s_1 = 3, s_2 = 2, s_3 = 1, s_4 = 1, and s_5 = 1; the processing times and setup times are given in Tables 1 and 2.

Table 1: The processing time of jobs.
Table 2: The setup time of jobs.

The schedule for this example under the improved LS algorithm is found in Table 3.

Table 3: The improved LS schedule.

The schedule for this example under the optimal algorithm is found in Table 4.

Table 4: The optimal schedule.

In this example, we have n = 3, m = 2, α = 1, and s_k = 3. Thus, we get
$$T_{\mathrm{LS}}=\max\left\{\frac{92}{3},18,18,20,20\right\}=\frac{92}{3},\qquad T^*=8,\qquad\frac{T_{\mathrm{LS}}}{T^*}=\frac{23}{6}\le\frac{(3+2-1)(1+1/3)+(1+2)(3+2+1)}{3+2}=\frac{14}{3},\qquad(3.11)$$
which is consistent with the conclusion of Theorem 3.1.
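For readers who wish to verify the arithmetic in (3.11), the following small Python check (variable names are ours; the finish times are those appearing in (3.11)) recomputes the ratio and the bound of Theorem 3.1 with exact fractions.

```python
from fractions import Fraction as F

# Data of Example 3.2: n = 3 groups, m = 2 general-purpose processors, alpha = 1,
# speeds s_1, ..., s_5, the finish times of the improved LS schedule, and T* = 8.
n, m, alpha = 3, 2, F(1)
s = [F(3), F(2), F(1), F(1), F(1)]
MT = [F(92, 3), F(18), F(18), F(20), F(20)]
T_opt = F(8)
s_k = F(3)                 # speed of the latest finish processor

T_LS = max(MT)             # (2.1): 92/3
ratio = T_LS / T_opt       # 23/6
bound = ((n + m - 1) * (alpha + 1 / s_k) + (alpha + 2) * sum(s[:n])) / (n + m)  # right side of (3.1)
assert ratio <= bound
print(ratio, bound)        # 23/6 14/3
```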

If we do not know whether or not T* is bigger than the processing time t(r,j) of the latest finish job J(r,j), then we have the following result.

Theorem 3.3. Consider the scheduling problem in Theorem 3.1. Assume that w(l,i;h,j) ≤ α t(h,j) for all l, h, i, j. Then the bound of this scheduling problem under the improved LS algorithm is
$$\frac{T_{\mathrm{LS}}}{T^*}\le\frac{(n+m-1)\left(\alpha s_k+1\right)+(\alpha+2)\sum_{i=1}^{n}s_i}{n+m}\qquad(3.12)$$
for any α ≥ 0, where s_k is the speed of the latest finish processor.

Proof. Based on the improved LS algorithm, we may assume that some processor k (1 ≤ k ≤ n + m) is the latest finish processor and that the latest finish job is J(r,j) (1 ≤ r ≤ n, 1 ≤ j ≤ n_r). Then on the processor k, we have
$$T_{\mathrm{LS}}=MT_k.\qquad(3.13)$$
On the other processors, we have
$$MT_i\ge MT_k-\left(w(*,*;r,j)+\frac{t(r,j)}{s_k}\right),\qquad i=1,2,\dots,n+m,\ i\ne k.\qquad(3.14)$$
By the assumption w(*,*;r,j) ≤ α t(r,j), the fact T* ≥ t(r,j)/s_k, (3.13), and (3.14), we get
$$MT_i\ge T_{\mathrm{LS}}-\left(w(*,*;r,j)+\frac{t(r,j)}{s_k}\right)\ge T_{\mathrm{LS}}-\left(\alpha+\frac{1}{s_k}\right)t(r,j)\ge T_{\mathrm{LS}}-\left(\alpha s_k+1\right)T^*,\qquad i=1,2,\dots,n+m,\ i\ne k.\qquad(3.15)$$
Thus
$$\sum_{i=1}^{n+m}MT_i=MT_k+\sum_{\substack{i=1\\ i\ne k}}^{n+m}MT_i\ge(m+n)T_{\mathrm{LS}}-(m+n-1)\left(\alpha s_k+1\right)T^*.\qquad(3.16)$$
On the other hand, since T* is the optimal solution, it follows that
$$T^*\ge\frac{\sum_{l=1}^{n}\sum_{i=1}^{n_l}t(l,i)}{\sum_{i=1}^{n+m}s_i}.\qquad(3.17)$$
In view of the assumption and (3.17), we deduce
$$\sum_{i=1}^{n+m}MT_i=\sum_{i=1}^{n+m}\sum_{J(h,p)\in ML_i}\left(w(*,*;h,p)+\frac{t(h,p)}{s_i}\right)\le\sum_{i=1}^{n+m}\sum_{J(h,p)\in ML_i}\left(\alpha+\frac{1}{s_i}\right)t(h,p)\le(\alpha+2)\sum_{i=1}^{n+m}\sum_{J(h,p)\in ML_i}t(h,p)\le(\alpha+2)\sum_{h=1}^{n}\sum_{p=1}^{n_h}t(h,p)\le(\alpha+2)T^*\sum_{i=1}^{n+m}s_i.\qquad(3.18)$$
Using (3.16) and (3.18), we have
$$(\alpha+2)T^*\sum_{i=1}^{n+m}s_i\ge\sum_{i=1}^{n+m}MT_i\ge(m+n)T_{\mathrm{LS}}-(m+n-1)\left(\alpha s_k+1\right)T^*.\qquad(3.19)$$
This yields
$$\left((m+n-1)\left(\alpha s_k+1\right)+(\alpha+2)\sum_{i=1}^{n+m}s_i\right)T^*\ge(m+n)T_{\mathrm{LS}}.\qquad(3.20)$$
Therefore
$$\frac{T_{\mathrm{LS}}}{T^*}\le\frac{(n+m-1)\left(\alpha s_k+1\right)+(\alpha+2)\sum_{i=1}^{n}s_i}{n+m}.\qquad(3.21)$$
This completes the proof of the theorem.
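For comparison, one may also plug the data of Example 3.2 into (3.12); under the same assumptions as in (3.11) (in particular the same T_LS, T*, and s_k), the computation reads
$$\frac{T_{\mathrm{LS}}}{T^*}=\frac{23}{6}\le\frac{(3+2-1)(1\cdot3+1)+(1+2)(3+2+1)}{3+2}=\frac{34}{5}.$$
Since α s_k + 1 ≥ α + 1/s_k whenever s_k ≥ 1, the bound of Theorem 3.3 is never smaller than that of Theorem 3.1; this is the price paid for dropping the hypothesis T* ≥ t(r,j).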

Acknowledgments

This work was partially supported by NSFC (No. 10971234). The author thanks the referee for the valuable comments and suggestions.

References

1. P. Schuurman and G. J. Woeginger, “Polynomial time approximation algorithms for machine scheduling: ten open problems,” Journal of Scheduling, vol. 2, no. 5, pp. 203–213, 1999.
2. R. L. Graham, “Bounds on multiprocessing timing anomalies,” SIAM Journal on Applied Mathematics, vol. 17, pp. 416–429, 1969.
3. I. M. Ovacik and R. Uzsoy, “Worst-case error bounds for parallel machine scheduling problems with bounded sequence-dependent setup times,” Operations Research Letters, vol. 14, no. 5, pp. 251–256, 1993.
4. C. Imreh, “Scheduling problems on two sets of identical machines,” Computing, vol. 70, no. 4, pp. 277–294, 2003.
5. M. Gairing, B. Monien, and A. Woclaw, “A faster combinatorial approximation algorithm for scheduling unrelated parallel machines,” Theoretical Computer Science, vol. 380, no. 1-2, pp. 87–99, 2007.
6. W. Ding, “A type of scheduling problem on general-purpose machinery and n group tasks,” OR Transactions, vol. 10, no. 4, pp. 122–126, 2006.
7. W. Ding, “A type of scheduling problem on m general-purpose machines and n-groups of tasks with uniform processors,” Acta Scientiarum Naturalium Universitatis Sunyatseni, vol. 47, no. 3, pp. 19–22, 2008.
8. W. Ding, “An improved LS algorithm for the Qm+2/rj/Cmax scheduling problem on m general-purpose machines and two special-purpose machines,” Communication on Applied Mathematics and Computation, vol. 23, no. 2, pp. 26–34, 2009.
9. W. Ding, “Heuristic algorithm for the Q//Cmax problem on multi-tasks with uniform processors,” Acta Scientiarum Naturalium Universitatis Sunyatseni, vol. 49, no. 1, pp. 5–8, 2010.
10. W. Ding and Y. Zhao, “An improved LS algorithm for the problem of scheduling multi groups of jobs on multi processors at the same speed,” Algorithmic Operations Research, vol. 5, no. 1, pp. 34–38, 2010.
11. W. Ding and Y. Zhao, “An analysis of LS algorithm for the problem of scheduling multiple jobs on multiple uniform processors with ready time,” Pacific Journal of Optimization, vol. 7, no. 3, pp. 551–564, 2011.