Abstract

Recently, interest in scheduling with deteriorating jobs and learning effects has kept growing. However, research in this area has seldom considered the multiagent setting. Motivated by these observations, we consider two-agent scheduling on a single machine involving learning effects and deteriorating jobs simultaneously. In the proposed model, we assume that the actual processing time of a job of the first (second) agent is a decreasing (increasing) function of the total processing time of the jobs already processed in a schedule. The objective is to minimize the total weighted completion time of the jobs of the first agent with the restriction that no tardy job is allowed for the second agent. We develop a branch-and-bound algorithm and simulated annealing algorithms for the problem, and we perform extensive computational experiments to test their performance.

1. Introduction

In classical scheduling, researchers routinely assume that the job processing time is known and fixed from the processing of the first job to the completion of the last job. This assumption is invalid in situations where the job processing time may be prolonged due to deterioration or shortened due to learning over time. For example, Browne and Yechiali [1] observed that the time and effort required to put out a fire increase if there is a delay in starting the fire-fighting effort. In such environments, a job that is processed later consumes more time than the same job when processed earlier. Scheduling in this setting is known as "scheduling deteriorating jobs." Meanwhile, Biskup [2] pointed out that the repeated processing of similar tasks improves workers' skills; for example, workers are able to perform setups, deal with machine operations or software, or handle raw materials and components at a faster pace. This phenomenon is known as the "learning effect" in the literature.

The deteriorating job scheduling problem was introduced independently by J. N. D. Gupta and S. K. Gupta [3] and Browne and Yechiali [1]. J. N. D. Gupta and S. K. Gupta [3] considered the problem with polynomial processing time functions to minimize the makespan and proposed branch-and-bound and heuristic algorithms to search for optimal and near-optimal solutions. Browne and Yechiali [1] studied the problem with exponential job processing times to minimize the makespan and provided insights into problem solutions. Since then, an abundance of studies on the subject has emerged. For different models of the problem dealing with different criteria, we refer readers to Alidaee and Womer [4] and Cheng et al. [5].

On the other hand, Biskup [2] and Cheng and Wang [6] independently incorporated the concept of learning into scheduling. Many researchers have since devoted large amounts of effort to this relatively young but booming area of scheduling research. For detailed reviews of the motivations, results, and applications of scheduling with learning effects, we refer the reader to the comprehensive review of scheduling research with learning considerations by Biskup [7].

Recently, there has been growing interest in scheduling research that considers deteriorating jobs and learning effects simultaneously. Wang [8] assumed that the actual processing time of a job scheduled in the $r$th position of a sequence is determined by its basic processing time and a common deteriorating rate. Wang [9] studied a model in which the actual processing time of a job is the product of its basic processing time and an increasing deterioration function of time. Wang and Cheng [10] considered a model in which the actual processing time of a job is determined by a common basic processing time, a growth rate applied to the job's starting time, and a learning index. Cheng et al. [11, 12] studied a new scheduling model with deteriorating jobs and learning effects in which the actual processing time of a job scheduled in the $r$th position of a sequence depends on the normal processing time of the job in that position, a given parameter, and deteriorating and learning indices. Toksari et al. [13] considered several scheduling problems under the assumption of nonlinear effects of learning and deterioration, where the actual processing time of a job is a function of its scheduled position and of the starting time of the job in that position. Huang et al. [14] considered the single-machine scheduling problem with time-dependent deterioration and an exponential learning effect; that is, the actual processing time of a job depends not only on the processing times of the jobs already processed but also on its scheduled position. Cheng [15] defined the actual processing time of a job scheduled in the $r$th position of a schedule in terms of a rate $a$ with $0 < a < 1$. Li and Hsu [16] modeled the actual processing time of a job as varying with its position based on the learning effect, governed by a learning ratio between zero and one.

In classical scheduling, it is assumed that there is a single customer (i.e., agent) who seeks to minimize a scheduling criterion that is a function of the order in which the customer's orders (i.e., jobs) are processed by the available processing resources (i.e., machines). In many management situations, however, multiple agents compete for the usage of common processing resources. For instance, Agnetis et al. [17] observed that multiple agents compete for the usage of a common processing resource in different application environments and different methodological fields, such as artificial intelligence, decision theory, and operations research. One major stream of research in this context is multiagent scheduling, in which different agents interact to perform their respective tasks, negotiating among one another for the usage of the common resources over time. For research on multiagent scheduling without learning effects, deteriorating jobs, or both, the reader may refer to Baker and Smith [18], Agnetis et al. [19], Yuan et al. [20], Cheng et al. [21], Ng et al. [22], Agnetis et al. [17], Cheng et al. [11, 12], Yin et al. [23], Cheng et al. [24], and so forth. Scheduling in the multiagent setting provides the first motivation for this paper.

Another motivation is that research on multiagent scheduling with deteriorating jobs or learning effects is relatively limited. Liu and Tang's study [25] is probably the only scheduling study that considers deteriorating jobs and multiple agents. They assume that the actual processing time of a job $J_j$ is $p_j = b_j t$, where $b_j$ denotes its deterioration rate and $t$ is the job's starting time. Under the proposed model, they consider the scheduling objectives of minimizing the makespan, maximum lateness, maximum cost, and total completion time. In this paper, we assume that the actual processing time of a job of the first agent is a decreasing function of the total processing time of the jobs already processed in a schedule, while the actual processing time of a job of the second agent is an increasing function of the total processing time of the jobs already processed in a schedule. The objective is to minimize the total weighted completion time of the jobs of the first agent with the restriction that no tardy job is allowed for the second agent.

The remainder of this paper is organized as follows. We formulate the problem in Section 2. In Section 3, we present some dominant properties, develop a lower bound to speed up the search for the optimal solution, and describe the branch-and-bound and simulated annealing (SA) algorithms. We present the results of extensive computational experiments conducted to assess the performance of the proposed algorithms under different experimental conditions in Section 4. We conclude the paper and suggest topics for further research in Section 5.

2. Problem Statement

We formulate the problem under study as follows. There are $n$ jobs ready to be processed on a single machine. Each job belongs to one of two agents, namely, $A$ and $B$. Associated with job $J_j$ are a normal processing time $p_j$, a weight $w_j$, a due date $d_j$, and an agent code $I_j$, where $I_j = A$ if $J_j$ belongs to agent $A$ or $I_j = B$ if $J_j$ belongs to agent $B$. We assume that all the jobs of $A$ are subject to a learning effect with rate $a_1$, while all the jobs of $B$ are subject to a deteriorating effect with rate $a_2$. Under the proposed model, the actual processing time of a job scheduled in the $r$th position of a sequence is a decreasing function of $\sum_{l=1}^{r-1} p_{[l]}$, the total normal processing time of the jobs already processed, if it is a job of $A$, and an increasing function of the same quantity if it is a job of $B$, where the subscript $[l]$ denotes the job in the $l$th position of the sequence. For a given schedule $S$, let $C_j(S)$ be the completion time of $J_j$ and let $L_j(S) = C_j(S) - d_j$ be the lateness of $J_j$. The objective of the scheduling problem is to find an optimal schedule that minimizes the total weighted completion time of the jobs of agent $A$, $\sum_{J_j \in A} w_j C_j(S)$, subject to the constraint that no job of agent $B$ is tardy, that is, $L_j(S) \le 0$ for all $J_j \in B$. Since the objective function and the constraint involve regular scheduling criteria, we use the terms schedule and sequence interchangeably.
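To make the model concrete, the sketch below evaluates a given sequence under one plausible instantiation of the processing-time functions. The exponential sum-of-processing-times form, with base `a1` in $(0,1)$ for agent $A$'s learning effect and `a2 > 1` for agent $B$'s deterioration, is an illustrative assumption rather than the paper's exact definition, and the parameter values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    agent: str      # "A" (learning) or "B" (deteriorating)
    p: float        # normal processing time
    w: float = 0.0  # weight (used for agent A's jobs)
    d: float = 0.0  # due date (used for agent B's jobs)

# Assumed illustrative model: a job's actual processing time scales
# exponentially in the total normal processing time already completed,
# with base a1 in (0, 1) for agent A and a2 > 1 for agent B. This
# specific form is an assumption, not the paper's verbatim definition.
def evaluate(schedule, a1=0.99, a2=1.01):
    t = 0.0          # current time = completion time of the last job
    done_p = 0.0     # total normal processing time already processed
    twc = 0.0        # total weighted completion time of agent A's jobs
    feasible = True  # no tardy job is allowed for agent B
    for job in schedule:
        base = a1 if job.agent == "A" else a2
        t += job.p * base ** done_p
        done_p += job.p
        if job.agent == "A":
            twc += job.w * t
        elif t > job.d:
            feasible = False
    return twc, feasible

jobs = [Job("J1", "A", 4, w=2), Job("J2", "B", 3, d=9), Job("J3", "A", 5, w=1)]
print(evaluate(jobs))  # objective value and feasibility of this sequence
```

Any strictly decreasing (increasing) function of the processed work could be substituted for the two `base ** done_p` factors without changing the rest of the sketch.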

3. Branch-and-Bound and Simulated Annealing Algorithms

Ng et al. [22] show that our problem without the learning and deteriorating considerations is strongly NP-hard, so we apply branch-and-bound and SA algorithms to search for optimal and near-optimal solutions, respectively. To speed up the search process, we first develop three adjacent pairwise interchange properties, followed by two feasibility rules. We then present the procedures of the branch-and-bound and SA algorithms.

3.1. Dominant Properties

Assume that a schedule (sequence) $S_1 = (\pi, J_i, J_j, \pi')$ has two adjacent jobs $J_i$ and $J_j$ with $J_i$ immediately preceding $J_j$, where $\pi$ and $\pi'$ denote partial sequences, and that $J_i$ and $J_j$ are in the $r$th and the $(r+1)$th positions of $S_1$, respectively. Perform a pairwise interchange of $J_i$ and $J_j$ to derive a new sequence $S_2 = (\pi, J_j, J_i, \pi')$.

Proposition 1. For any two jobs $J_i$ and $J_j$ of agent $A$ to be scheduled consecutively, if $p_i \le p_j$ and $w_i \ge w_j$, then $S_1$ dominates $S_2$.

Proof. Recall that $S_1 = (\pi, J_i, J_j, \pi')$ and $S_2 = (\pi, J_j, J_i, \pi')$, where $\pi$ and $\pi'$ denote partial sequences. To show that $S_1$ dominates $S_2$, it suffices to show that $C_j(S_1) \le C_i(S_2)$ and $w_i C_i(S_1) + w_j C_j(S_1) \le w_j C_j(S_2) + w_i C_i(S_2)$. In addition, let $t$ be the completion time of the last job in the subsequence $\pi$ with $r-1$ jobs. Writing out the completion times of the two interchanged jobs in $S_1$ and $S_2$, taking the difference $C_i(S_2) - C_j(S_1)$, and substituting the hypotheses $p_i \le p_j$ and $w_i \ge w_j$, the resulting expression simplifies to one whose first and second derivatives show that it is nonnegative. Hence, $C_j(S_1) \le C_i(S_2)$. On the other hand, taking the difference between the two weighted sums of completion times and substituting the same hypotheses, the first and second derivatives again show that the difference is nonpositive. Therefore, $S_1$ dominates $S_2$.

Proposition 2. For any two jobs $J_i$ and $J_j$ of agent $B$ to be scheduled consecutively, if $p_i \le p_j$, $d_i \le d_j$, and $J_j$ is not tardy in $S_1$, then $S_1$ dominates $S_2$.

Proposition 3. For a job $J_i$ of agent $A$ and a job $J_j$ of agent $B$ to be scheduled consecutively, if $p_i \le p_j$ and $J_j$ is not tardy in $S_1$, then $S_1$ dominates $S_2$.

We omit the proofs of Propositions 2 and 3 because they are similar to that of Proposition 1.
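Since the displayed algebra of the proof is omitted above, a quick numerical check of the interchange argument can be reassuring. The sketch below compares a sequence with its adjacent pairwise interchange under the same illustrative exponential model assumed in the Section 2 sketch; it verifies the two dominance requirements (no worse objective contribution and no delay to the jobs that follow) computationally rather than reproducing the paper's derivation.

```python
# A numerical illustration of the adjacent pairwise interchange argument.
# The exponential model (a1 for learning, a2 for deterioration) is the
# same illustrative assumption as before, not the paper's exact formula.

def completions(seq, a1=0.99, a2=1.01):
    """Return the completion time of every job in the given order."""
    t, done_p, out = 0.0, 0.0, []
    for agent, p in seq:
        base = a1 if agent == "A" else a2
        t += p * base ** done_p
        done_p += p
        out.append(t)
    return out

def weighted_a_cost(seq, weights, a1=0.99, a2=1.01):
    """Total weighted completion time over agent A's jobs only."""
    cs = completions(seq, a1, a2)
    return sum(w * c for (agent, _), w, c in zip(seq, weights, cs) if agent == "A")

# S1 schedules (A, p=2, w=3) before (A, p=5, w=1); S2 interchanges them,
# so p_i <= p_j and w_i >= w_j as in the reading of Proposition 1 above.
s1 = [("A", 2.0), ("A", 5.0), ("B", 3.0)]
s2 = [("A", 5.0), ("A", 2.0), ("B", 3.0)]
w1, w2 = [3.0, 1.0, 0.0], [1.0, 3.0, 0.0]  # weights travel with their jobs

print(weighted_a_cost(s1, w1) <= weighted_a_cost(s2, w2))  # objective no worse
print(completions(s1)[1] <= completions(s2)[1])            # later jobs not delayed
```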

We next present two propositions to determine the feasibility of a partial sequence. Let $(\sigma, \sigma')$ be a sequence of jobs, where $\sigma$ is the scheduled part with $k$ jobs and $\sigma'$ is the unscheduled part. Moreover, let $t$ be the completion time of the last job in $\sigma$.

Proposition 4. If there is a job $J_j$ of agent $B$ in $\sigma$ whose completion time exceeds its due date $d_j$, then sequence $(\sigma, \sigma')$ is not a feasible solution.

Proof. If there is such a job, it is tardy, which violates the constraint that no tardy job is allowed for the second agent.

Proposition 5. If there is a job $J_j$ of agent $B$ in $\sigma'$ such that $t + p_j > d_j$, then sequence $(\sigma, \sigma')$ is not a feasible solution.

Proof. Similar to Proposition 4.
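These two checks translate directly into a pruning test on partial sequences. The sketch below assumes the readings of Propositions 4 and 5 given above and the same illustrative exponential model as the earlier sketches; both are assumptions, since the source's exact expressions are not reproduced here.

```python
# Prune a partial sequence if a scheduled job of agent B is already tardy
# (the reading of Proposition 4 above) or an unscheduled B job can no
# longer meet its due date (the reading of Proposition 5 above).

def is_promising(scheduled, unscheduled, a1=0.99, a2=1.01):
    t, done_p = 0.0, 0.0
    for agent, p, d in scheduled:
        base = a1 if agent == "A" else a2
        t += p * base ** done_p
        done_p += p
        if agent == "B" and t > d:
            return False  # Proposition 4: a scheduled B job is tardy
    for agent, p, d in unscheduled:
        # Deterioration only lengthens B jobs, so t + p is a lower bound
        # on the job's completion time in any extension of the schedule.
        if agent == "B" and t + p > d:
            return False  # Proposition 5: the job must end up tardy
    return True

# jobs are (agent, normal processing time, due date) triples
sched = [("A", 4.0, 0.0), ("B", 3.0, 8.0)]
unsched = [("B", 5.0, 6.0)]
print(is_promising(sched, unsched))  # False: the remaining B job must be tardy
```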

3.2. Lower Bound

In this subsection, we develop a lower bound for the proposed branch-and-bound algorithm. Suppose that PS is a partial schedule in which the order of the first $k$ jobs is determined and that US is the unscheduled part with $n-k$ jobs, among which there are $n_1$ jobs of $A$ and $n_2$ jobs of $B$ with $n_1 + n_2 = n - k$. Moreover, let $p_{(1)} \le p_{(2)} \le \cdots \le p_{(n-k)}$ denote the normal processing times of the $n-k$ unscheduled jobs arranged in nondecreasing order, and let $t$ be the completion time of the last job in PS. Given that the learning effect can shorten the processing times and that the deteriorating effect can lengthen them, we assign the jobs with the learning effect to the first $n_1$ positions and the jobs with the deteriorating effect to the following $n_2$ positions of the remaining $n-k$ unscheduled positions after the $k$th position, which contributes to the reduction of the total weighted completion time of the jobs of the first agent. From this assignment we derive, in turn, the completion times of the $(k+1)$th through $(k+n_1)$th positions, which carry the learning effect, and then the completion times of the jobs with the deteriorating effect assigned to the remaining positions. Following the same idea as Cheng et al. [26] and Wu et al. [27], we want to assign the job completion times to the jobs of agents $A$ and $B$. Given the constraint that the jobs of agent $B$ cannot be tardy, we assign the completion times to the jobs of agent $B$ as late as possible. The procedure is as follows.

3.2.1. Algorithm

Step 1. Compute the completion times of the $n-k$ unscheduled positions as described above, and denote them by $C_{(k+1)} \le C_{(k+2)} \le \cdots \le C_{(n)}$.

Step 2. Sort the jobs of agent $A$ in nondecreasing order of their weights and the jobs of agent $B$ in nondecreasing order of their due dates.

Step 3. Set $i = 1$, $j = n_2$, and $l = n - k$.

Step 4. If $l = 0$, go to Step 7.

Step 5. If $j \ge 1$ and the $j$th due date of agent $B$ satisfies $d_{(j)} \ge C_{(k+l)}$, assign $C_{(k+l)}$ to the $j$th job of agent $B$ and set $j = j - 1$. Otherwise, assign $C_{(k+l)}$ to the $i$th job of agent $A$ (the unassigned job of $A$ with the smallest weight) and set $i = i + 1$.

Step 6. Set $l = l - 1$, and go to Step 4.

Step 7. Compute the total weighted completion time of the jobs of agent $A$ under this assignment. Evidently, a lower bound for the partial sequence is the total weighted completion time of the scheduled jobs of agent $A$ in PS plus this quantity.
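A compact sketch of this procedure follows. Because the source omits the completion-time formulas, the sketch assumes the same illustrative exponential model as the earlier sketches; the function computes only the contribution of the unscheduled jobs of agent $A$, to which the weighted completion times of $A$'s jobs already in PS would be added.

```python
# Hypothetical sketch of the Section 3.2 lower bound, under the
# illustrative exponential model used in the earlier sketches.

def lower_bound(t, done_p, a_jobs, b_jobs, a1=0.99, a2=1.01):
    """t, done_p: completion time and processed work of the partial schedule PS.
    a_jobs: (p, w) pairs of A's unscheduled jobs; b_jobs: (p, d) pairs of B's."""
    n1 = len(a_jobs)
    ps = sorted([p for p, _ in a_jobs] + [p for p, _ in b_jobs])
    completions = []
    for idx, p in enumerate(ps):            # Step 1: learning slots first
        base = a1 if idx < n1 else a2       # (n1 of them), deteriorating after
        t += p * base ** done_p
        done_p += p
        completions.append(t)
    weights = sorted(w for _, w in a_jobs)  # Step 2: nondecreasing weights
    dues = sorted(d for _, d in b_jobs)     # Step 2: nondecreasing due dates
    bound, wi, di = 0.0, 0, len(dues) - 1
    for c in reversed(completions):         # Steps 3-6: latest completion first
        if di >= 0 and (wi >= len(weights) or c <= dues[di]):
            di -= 1                         # give B the latest feasible slot
        else:
            bound += weights[wi] * c        # A: smallest weight, latest slot
            wi += 1
    return bound                            # Step 7: weighted sum over A's jobs

a = [(4.0, 2.0), (2.0, 5.0)]                # (p, w) pairs for agent A
b = [(3.0, 20.0)]                           # (p, d) pairs for agent B
print(lower_bound(t=5.0, done_p=5.0, a_jobs=a, b_jobs=b))
```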

3.3. Simulated Annealing Algorithms

Simulated annealing is one of the most popular metaheuristics and is widely applied to solve combinatorial optimization problems [28–30]. SA has the advantage of avoiding getting trapped in a local optimum because of its hill-climbing moves, which are governed by a control parameter. In the following, we apply SA to derive near-optimal solutions for our problem. We implement SA algorithms with three different initial solutions as follows.

Step 1. Initial sequence.
We use three initial sequences for the three SA algorithms. In SA1, we use random numbers to generate an initial job sequence; if there is any tardy job in this sequence, we regenerate another job sequence until it is feasible. In SA2, we create an initial job sequence by arranging the jobs of $B$ in the earliest due date (EDD) order, followed by arranging the jobs of $A$ in the shortest processing time (SPT) order. In SA3, we create an initial job sequence by arranging the jobs of $B$ in the EDD order, followed by arranging the jobs of $A$ in the weighted shortest processing time (WSPT) order. In order to get a good initial solution, we apply the NEH method [31] to the initial sequences produced by SA2 and SA3.

Step 2. Neighborhood generation.
Neighborhood generation plays an important role in the efficiency of SA algorithms. We use the pairwise interchange (PI) neighborhood generation method in the algorithms.

Step 3. Acceptance probability.
When a new feasible sequence is generated, it is accepted if its objective value is smaller than that of the original sequence; otherwise, it is accepted with a probability that decreases as the process evolves. The probability of acceptance is generated from an exponential distribution as follows: $P(\text{accept}) = \exp(-\lambda \Delta)$, where $\lambda$ is a control parameter and $\Delta$ is the change in the objective value. In addition, we use the method suggested by Ben-Arieh and Maimon [32] to change $\lambda$ in the $k$th iteration as follows: $\lambda = k/\beta$, where $\beta$ is an experimental constant whose value was fixed after preliminary trials.
If the total weighted completion time increases as a result of a random pairwise interchange, the new sequence is accepted when $P(\text{accept}) > U$, where $U$ is randomly sampled from the uniform distribution over $(0, 1)$.

Step 4. Stopping condition.
Our preliminary trials indicate that the quality of the schedule is stable after $200n$ iterations, where $n$ is the number of jobs, so we stop each SA run after $200n$ iterations.
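Putting Steps 1–4 together, a minimal SA loop consistent with this description might look as follows. The exponential processing-time model, the parameter values, and the schedule $\lambda = k/\beta$ are the assumptions stated earlier, and the initial-sequence builders are left to the caller; none of this reproduces the authors' Fortran implementation.

```python
import math
import random

# Minimal SA loop matching Steps 1-4, under the same illustrative
# exponential processing-time model as the earlier sketches. Jobs are
# (agent, p, w, d) tuples; only the order changes between iterations.

def objective(seq, a1=0.99, a2=1.01):
    """Return (total weighted completion time of A's jobs, feasible?)."""
    t, done_p, twc, feasible = 0.0, 0.0, 0.0, True
    for agent, p, w, d in seq:
        t += p * (a1 if agent == "A" else a2) ** done_p
        done_p += p
        if agent == "A":
            twc += w * t
        elif t > d:
            feasible = False
    return twc, feasible

def simulated_annealing(jobs, beta=1.0, seed=0):
    rng = random.Random(seed)
    n = len(jobs)
    cur = list(jobs)                       # Step 1: caller-supplied sequence
    cur_val, ok = objective(cur)
    assert ok, "initial sequence must be feasible"
    best, best_val = cur[:], cur_val
    for k in range(1, 200 * n + 1):        # Step 4: stop after 200n iterations
        i, j = rng.sample(range(n), 2)     # Step 2: pairwise interchange
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        val, ok = objective(cand)
        if not ok:
            continue                       # infeasible neighbours are rejected
        lam = k / beta                     # Step 3: lambda = k / beta schedule
        if val < cur_val or math.exp(-lam * (val - cur_val)) > rng.random():
            cur, cur_val = cand, val
            if val < best_val:
                best, best_val = cand, val
    return best, best_val

jobs = [("A", 4, 2, 0), ("B", 3, 0, 15), ("A", 5, 1, 0), ("B", 2, 0, 20)]
print(simulated_annealing(jobs))
```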

3.4. Details of the Branch-and-Bound Algorithm

We use the depth-first search in the branching procedure and assign jobs in a forward manner starting from the first position. In the search tree, we select a branch and systematically work down the tree until we either eliminate the branch by virtue of the dominant properties and the lower bound or reach its final node, in which case the resulting sequence either replaces the incumbent solution or is eliminated. We summarize the main steps in the following.

3.4.1. The Branch-and-Bound Algorithm

Step 1. Apply the best solution obtained from the three proposed simulated annealing algorithms as the initial solution for the branch-and-bound algorithm.

Step 2. Use the dominant rules of Propositions 1–5 to eliminate the dominated or infeasible partial sequences.

Step 3. Compute the lower bound on the total weighted completion time of the first agent for each unscheduled partial sequence, or the total weighted completion time of the first agent for a completed sequence. If the lower bound for an unscheduled partial sequence is greater than that of the initial solution, eliminate that node and all the nodes beyond it in the branch. If the value of a completed sequence is less than that of the initial solution, take it as the new solution; otherwise, eliminate it.

Step 4. Repeat Steps 2 and 3 until all the branches are explored.
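A depth-first skeleton of Steps 1–4 might look as follows. The feasibility test stands in for Propositions 4 and 5, `lower_bound` is a placeholder for the bound of Section 3.2, and the objective uses the same illustrative exponential model as the earlier sketches; it is a sketch of the search scheme, not the authors' implementation.

```python
# Depth-first branch-and-bound skeleton for Steps 1-4. Jobs are
# (agent, p, w, d) tuples under the assumed exponential model.

def solve(jobs, initial_value, lower_bound, a1=0.99, a2=1.01):
    best_val, best_seq = initial_value, None   # Step 1: SA value as incumbent

    def extend(prefix, remaining, t, done_p, twc):
        nonlocal best_val, best_seq
        if not remaining:
            if twc < best_val:                 # Step 3: better complete sequence
                best_val, best_seq = twc, prefix[:]
            return
        for idx, (agent, p, w, d) in enumerate(remaining):
            t2 = t + p * (a1 if agent == "A" else a2) ** done_p
            if agent == "B" and t2 > d:
                continue                       # feasibility pruning (Props 4-5)
            twc2 = twc + (w * t2 if agent == "A" else 0.0)
            rest = remaining[:idx] + remaining[idx + 1:]
            if twc2 + lower_bound(t2, done_p + p, rest) >= best_val:
                continue                       # Step 3: bound prunes this node
            prefix.append(remaining[idx])
            extend(prefix, rest, t2, done_p + p, twc2)   # Step 4: go deeper
            prefix.pop()

    extend([], list(jobs), 0.0, 0.0, 0.0)
    return best_seq, best_val

jobs = [("A", 4, 2, 0), ("B", 3, 0, 15), ("A", 5, 1, 0)]
trivial_bound = lambda t, done_p, rest: 0.0    # replace with Section 3.2's bound
# initial_value would be the best SA objective; infinity keeps the demo simple
print(solve(jobs, initial_value=float("inf"), lower_bound=trivial_bound))
```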

4. Computational Experiments

We conducted extensive computational experiments to evaluate the efficiency of the branch-and-bound algorithm and the performance of the three simulated annealing algorithms. We coded all the algorithms in Fortran using Compaq Visual Fortran version 6.6 and performed the experiments on a personal computer powered by an Intel Core 2 Quad CPU running at 2.66 GHz with 4 GB of RAM and operating under Windows XP. The job processing times were generated from a uniform distribution over the integers 1–100. The weights of the jobs of the first agent were generated from another uniform distribution over the integers 1–100. In addition, the due date of each job of the second agent was generated from a uniform distribution over the integers between $T(1 - \tau - R/2)$ and $T(1 - \tau + R/2)$, where $T$ is the total normal processing time of the jobs, that is, $T = \sum_{j=1}^{n} p_j$, as proposed by Fisher [33]. The tardiness factor $\tau$ took the values 0.4, 0.5, and 0.6, while the due date range $R$ took the values 0.4, 0.5, and 0.6. We fixed the proportion (pro) of the jobs belonging to one of the agents at 0.4 and 0.6 in the tests.

For the branch-and-bound algorithm, we recorded the average and standard deviation of the number of nodes as well as the average and standard deviation of the execution time (in seconds). For the SA heuristics, we recorded the mean and standard deviation of the percentage errors. We calculate the percentage error of a solution produced by a heuristic algorithm as $(V - V^*)/V^* \times 100\%$, where $V$ and $V^*$ are the total weighted completion times of the heuristic and the optimal solutions, respectively. We did not record the computational times of the heuristic algorithms because all of them took less than one second of CPU time to generate solutions.

We conducted the computational experiments in two parts. In the first part, we fixed the number of jobs at $n = 10$ and 15. We set the learning index at 1.001, 1.01, and 1.1, and the deteriorating index at −1.1, −1.01, and −1.001. As a result, we examined 324 experimental situations. We randomly generated a set of 20 instances for each situation. Tables 1, 2, 3, and 4 report the results, which include the CPU time (mean and standard deviation) and the number of nodes for the branch-and-bound algorithm.

As for the performance of the branch-and-bound algorithm, we see from Figure 1 and Table 1 that the number of nodes and the mean CPU time increase as $n$ grows. When the proportion is smaller (pro = 0.4), the number of nodes is markedly larger than with a bigger proportion, as shown in Figure 2. In addition, the instances with a bigger value of $\tau$ (e.g., $\tau$ = 0.5 and 0.6) are easier to solve than those with a smaller one. Tables 1–4 also show that the mean error percentages of SA3 (between 0.104% and 0.335%) are lower than those of SA1 (between 0.277% and 0.832%) and those of SA2 (between 0.272% and 0.730%). Moreover, the standard deviations of the percentage errors follow the same pattern. In particular, the standard deviations of the percentage errors of SA1, SA2, and SA3 were between 0.870% and 1.464%, 0.841% and 1.253%, and 0.352% and 0.588%, respectively. This indicates that the performance of SA3 is better than that of the other two. In addition, the mean error percentages of SA3 are affected by the parameters $\tau$ and pro. In particular, the mean error percentage rose to 0.462% when the value of $\tau$ became smaller (at $\tau$ = 0.4) or pro was 0.6 (see Figure 2 and Tables 1–4). However, the mean error percentages of SA3 were all below 0.5%. The results also indicate that the impact of the learning or deteriorating effect is insignificant.

In the second part of the experiments, we further assessed the performance of the proposed SA heuristics in solving instances with large numbers of jobs. Given that it is not easy to generate a feasible solution when $n$ becomes larger, we set $n$ at 25 and 30, fixed the other parameters at the same values as those used in the small-job-size design, and set the proportion (pro) at 0.4 and 0.6 in the tests. We set the learning index at 1.001, 1.01, and 1.1, and the deteriorating index at −1.1, −1.01, and −1.001. As a result, we examined 324 experimental situations. We randomly generated a set of 20 instances for each situation. We obtained the relative percentage deviation with respect to the best known solution among SA1, SA2, and SA3 for each instance. We also recorded the mean execution time and the mean relative percentage deviation for each SA heuristic. We calculate the relative percentage deviation as $\text{RPD} = (V_i - V_{\min})/V_{\min} \times 100\%$, where $V_i$ is the value of the objective function generated by SA$i$ and $V_{\min}$ is the smallest objective value obtained among the three SA heuristics. Tables 5 and 6 report the results.

As shown in Tables 5 and 6, the mean RPDs of SA1 and SA2 become bigger as $n$ increases. In general, the mean RPDs of SA3 are lower than those of SA1 and SA2. Furthermore, Figure 3 and Tables 5 and 6 show that the mean RPD of SA3 becomes smaller as $\tau$ becomes larger (e.g., $\tau$ = 0.6). However, when $\tau$ = 0.6, the mean RPD of SA3 at pro = 0.4 was less than that at pro = 0.6. Figure 4 further shows that SA3 has a smaller RPD value when $\tau$ or $R$ becomes larger. In addition, all of the mean RPDs of SA3 were less than 0.2%. In sum, SA3 outperforms the other two SA heuristics in terms of both solution accuracy and performance stability.

5. Conclusions

In this paper, we study a two-agent single-machine scheduling problem with learning and deteriorating effects considered simultaneously. The objective is to minimize the total weighted completion time of the jobs of the first agent with the restriction that no tardy job is allowed for the second agent. We develop a branch-and-bound algorithm incorporating several dominant properties and a lower bound to derive the optimal solution, and we propose three simulated annealing algorithms to obtain near-optimal solutions. The computational results show that, with the help of the proposed heuristic initial solutions, the branch-and-bound algorithm performs well in terms of the number of nodes and the execution time when the number of jobs is less than or equal to 15. Moreover, the computational experiments also show that the proposed SA3 performs well, since its mean error percentage was less than 0.5% for all the tested cases. Further research could be devoted to devising efficient and effective methods to solve the problem with significantly larger numbers of jobs.

Acknowledgment

The author would like to thank the National Science Council of the Republic of China, Taiwan, for financially supporting this research under Contract nos. NSC 101-2410-H-264-004 and NSC 102-2410-H-264-003.