Abstract

The learning effect has received much attention in recent scheduling research, where most studies have focused on a single optimization criterion. This study addresses a scheduling problem in which two agents compete to process their own jobs, each with a release time, on a common single machine subject to a learning effect. The aim is to minimize the total weighted completion time of the first agent, subject to an upper bound on the maximum lateness of the second agent. We propose a branch-and-bound algorithm equipped with several useful dominance properties and an effective lower bound to search for the optimal solution, and several simulated-annealing algorithms for near-optimal solutions. The computational results show that the proposed algorithms perform effectively and efficiently.

1. Introduction

In traditional scheduling problems, most studies assume that job processing times are known and fixed. In some circumstances, however, job processing times are affected by a learning effect. The ‘‘learning effect” is the phenomenon that unit costs decline as firms produce more of a product and gain knowledge or experience. Biskup [1] and Cheng and Wang [2] first brought the learning effect into the scheduling field. Since then, the learning effect has received much attention in scheduling research over the last decade, for example, in Mosheiov [3], Mosheiov and Sidney [4], Bachman and Janiak [5], Lee and Wu [6], Kuo and Yang [7], and Koulamas and Kyparisis [8]. Biskup [9] provided the most recent survey of learning effects in scheduling research. More recent studies involving learning effects include Wang et al. [10], Janiak and Rudek [11], Yin et al. [12], Toksari and Güner [13], Wang et al. [14], Lee et al. [15], J.-B. Wang and C. Wang [16], Wu et al. [17], Zhang et al. [18], Yin et al. [19], Yang et al. [20], Yang et al. [21], Wang et al. [22], J.-B. Wang and J.-J. Wang [23], Cheng et al. [24], and so forth.

In addition, most research assumed that all jobs meet a single criterion. Jobs, however, might come from different agents: multiple agents may compete for the same resources, each with its own objective. This concept was first introduced into the scheduling field by Baker and Smith [25] and Agnetis et al. [26]. Since then, many researchers have focused on multiagent scheduling; however, little research has been done on scheduling problems with both a learning effect and multiple agents. Liu et al. [27] presented optimal polynomial-time algorithms for a two-agent single-machine scheduling problem with position-dependent aging and learning effects, where the objective is to find a schedule that minimizes the total completion time of the first agent subject to a maximum cost limit for the second agent. Cheng et al. [28] investigated a two-agent single-machine scheduling problem with a truncated sum-of-processing-times-based learning effect and developed algorithms to minimize the total weighted completion time of the first agent's jobs, subject to the restriction that no tardy job is allowed for the second agent. Li and Hsu [29] investigated a scheduling problem in which two agents compete for the usage of a common single machine under a learning effect; the objective is to minimize the total weighted completion time of both agents, with the restriction that the makespan of either agent cannot exceed an upper bound. Wu et al. [30] addressed a two-agent single-machine scheduling problem with coexisting sum-of-processing-times-based deteriorating and learning effects, where the goal is to minimize the total weighted completion time of the first agent's jobs given that no tardy job is allowed for the second agent.

The combination of these two scheduling issues remains relatively unexplored. In this paper we therefore study a two-agent scheduling problem with a position-based learning effect on a common single machine, where each job additionally has its own release time. The objective is to minimize the total weighted completion time of the first agent, subject to the constraint that the maximum lateness of the second agent's jobs cannot exceed an upper bound. In the classical three-field scheduling notation, the problem can be denoted by 1 | r_j, p_jr = p_j r^a | Σ w_j^A C_j^A : L_max^B ≤ Q. As shown by Agnetis et al. [26], the problem is binary NP-hard even without release times and learning effect (1 | | Σ w_j^A C_j^A : L_max^B ≤ Q).

The rest of this paper is organized as follows: the problem formulation is introduced in the next section. The branch-and-bound and simulated-annealing algorithms are employed to find the optimal solution and the near-optimal solutions, respectively. Dominance properties and a lower bound are developed to be used in the branch-and-bound algorithm in Section 3. The details of simulated-annealing algorithm are described in Section 4, and the computational experiment results are given in Section 5. In the last section, some conclusions and extensions are presented.

2. Problem Formulation

In this section we give a formal definition of the model. There are n jobs from two competing agents (agent A and agent B) to be scheduled. The numbers of jobs in the two sets are denoted by n_A and n_B, such that n_A + n_B = n. The normal processing time and weight of job J_j are known and denoted by p_j and w_j, respectively. Under the learning effect, the actual processing time of job J_j changes with its position r and the learning ratio le; that is, p_jr = p_j r^a, where a = log_2 le ≤ 0 and 0 < le ≤ 1.

Furthermore, the jobs are independent, each job J_j has an integer release time r_j, and all jobs are processed on a single machine. The goal is to minimize the weighted sum of completion times of the jobs from agent A, subject to the constraint that the maximum lateness of the jobs from agent B is not more than a given upper bound Q.
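As an illustrative sketch of this model (the dictionary layout, the job fields, and the 90% learning ratio below are our own assumptions, not the paper's notation), the position-based actual processing times and both agents' objectives for a given sequence can be computed as follows:

```python
import math

LE = 0.9               # assumed learning ratio (a 90% learning effect)
A = math.log2(LE)      # learning index a = log2(le) <= 0

def actual_time(p_j, pos):
    """Position-based learning: the job in position r takes p_j * r^a."""
    return p_j * pos ** A

def evaluate(seq, jobs):
    """Return (total weighted completion time of agent A jobs,
    maximum lateness of agent B jobs) for a given job sequence,
    respecting release times on a single machine."""
    t, twc, lmax = 0.0, 0.0, float("-inf")
    for pos, j in enumerate(seq, start=1):
        t = max(t, jobs[j]["r"]) + actual_time(jobs[j]["p"], pos)
        if jobs[j]["agent"] == "A":
            twc += jobs[j]["w"] * t          # weighted completion time
        else:
            lmax = max(lmax, t - jobs[j]["d"])  # lateness C_j - d_j
    return twc, lmax
```

A sequence is feasible for the constrained problem when the returned maximum lateness does not exceed the bound Q.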

3. Branch-and-Bound Algorithm

Even without release times and learning effect (1 | | Σ w_j^A C_j^A : L_max^B ≤ Q), the problem was shown to be binary NP-hard by Agnetis et al. [26]. The problem addressed here (1 | r_j, p_jr = p_j r^a | Σ w_j^A C_j^A : L_max^B ≤ Q) is therefore at least as hard. We employ a branch-and-bound algorithm to obtain the optimal solution; to speed up the search, several dominance properties and a lower bound are presented in the following.

3.1. Dominance Property

Suppose that there are two contiguous jobs, J_i and J_j, in a sequence S = (π, J_i, J_j, π′), where π and π′ denote the scheduled and unscheduled partial sequences, respectively. Performing a pairwise interchange of the contiguous jobs J_i and J_j yields another sequence S′ = (π, J_j, J_i, π′). To show that S dominates S′, it is sufficient to ensure that the objective value of S is no greater than that of S′ and that the second of the interchanged jobs completes no later in S than in S′. Before proving the proposed properties, we let t denote the completion time of the last job in the scheduled partial sequence π.

Property 1. If , , , and , then dominates .

Proof. By the definition, the completion times of jobs and in and are, respectively, Since and . Moreover, By and , we have hence , as required.

The proofs of Properties 2–4 are omitted since they are similar to that of Property 1.

Property 2. If ,, and , then dominates .

Property 3. If , , and , then dominates .

Property 4. If ,, and , then dominates .

In addition, let S = (π, π′) be a sequence of the n jobs, where π is the scheduled part with k jobs and π′ is the unscheduled part with n − k jobs. The following property can be used to determine the feasibility of a sequence from its unscheduled part. Moreover, t is the completion time of the last job in π.

Property 5. If there is a job in π′ whose lateness necessarily exceeds the bound Q given t, then sequence (π, π′) is not a feasible solution.

Proof. Since the lateness of such an unscheduled job must exceed the given bound Q, (π, π′) is not a feasible sequence.

The next property is for assigning an unscheduled job to the (k + 1)th position.

Property 6. If all the unscheduled jobs belong to and there exists an such that , where for all jobs and , then job may be assigned to the (k + 1)th position.

3.2. Lower Bound

In addition to the previous properties, we derive a lower bound to speed up the pruning of the search tree in the branch-and-bound algorithm. Assume that PS and US are the two parts of a partial schedule: PS contains the scheduled jobs, and US contains the remaining unscheduled jobs, some belonging to agent A and the rest to agent B. The lower bound is obtained by scheduling the agent A jobs first and then scheduling the agent B jobs in any order. To elaborate, let t be the completion time of the last job in PS; the completion time of the job placed in each subsequent position can then be bounded from below, and summing the weighted completion times of the agent A jobs yields a lower bound for any completion of the partial sequence PS.
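Under our own simplifying assumptions (a 90% learning ratio, release times relaxed to zero, and the remaining agent A jobs taken in WSPT order), such a bound could be sketched as follows; this is an illustrative relaxation, not necessarily the exact bound derived in the paper:

```python
import math

A = math.log2(0.9)  # assumed 90% learning ratio (learning index a)

def lower_bound(t, scheduled_twc, unscheduled_A, k):
    """Bound the objective of any completion of a partial sequence PS:
    drop release times and the agent B jobs, and place the remaining
    agent A jobs (p_j, w_j) from position k+1 onwards in WSPT order.
    t is the completion time of the last job in PS, scheduled_twc the
    weighted completion time already accumulated by agent A jobs in PS."""
    jobs = sorted(unscheduled_A, key=lambda pw: pw[0] / pw[1])  # WSPT
    lb, c = scheduled_twc, t
    for offset, (p, w) in enumerate(jobs, start=1):
        c += p * (k + offset) ** A   # position-based learning shrinks times
        lb += w * c
    return lb
```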

4. Simulated-Annealing Algorithm

The simulated-annealing (SA) algorithm is one of the metaheuristic methods for solving large-scale combinatorial minimization problems [28, 31–34]. It was first described by Kirkpatrick et al. [35], based on the research of Metropolis et al. [36]. The major advantage of this approach is that it avoids getting trapped in local minima by controlling a parameter that governs the probability of accepting a worse solution in the iterative process. Here, we employ the SA algorithm to obtain near-optimal solutions, as described in the following.

Step 1 (initial feasible solution). An initial feasible sequence is generated by placing the agent B jobs in front of the agent A jobs to account for the constrained objective. We employ the earliest due date (EDD) rule for agent B first; for agent A, four different rules are employed: the EDD rule, the shortest processing time (SPT) rule, the shortest release time (SRT) rule, and the weighted shortest processing time (WSPT) rule. The resulting algorithms are denoted by SA1, SA2, SA3, and SA4, respectively.

Step 2 (adjusting the solution). To improve on the initial schedule, we perturb neighboring job schedules. The exchange strategy is to choose two different positions randomly and then randomly select one of three moves (pairwise interchange, extraction, and forward/backward shifted reinsertion) to improve the quality of the SA solution.
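A minimal sketch of such a neighborhood generator (the exact form of the three moves is our interpretation of the description above):

```python
import random

def neighbor(seq):
    """Generate a neighboring sequence by one of three random moves:
    pairwise interchange, extraction-and-reinsertion, or a forward
    shift of the block between two positions."""
    s = list(seq)
    i, j = random.sample(range(len(s)), 2)
    move = random.choice(("swap", "reinsert", "shift"))
    if move == "swap":                # pairwise interchange
        s[i], s[j] = s[j], s[i]
    elif move == "reinsert":          # extract job at i, reinsert at j
        s.insert(j, s.pop(i))
    else:                             # shift earlier job to the later slot
        lo, hi = min(i, j), max(i, j)
        s[lo:hi + 1] = s[lo + 1:hi + 1] + [s[lo]]
    return s
```

Each move returns a permutation of the same jobs, so feasibility only needs to be rechecked against the lateness bound.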

Step 3 (acceptance probability). If a new schedule improves the objective value, it replaces the previous schedule. In addition, the SA algorithms avoid getting stuck in local minima through an acceptance probability. The acceptance probability is based on the exponential form P(accept) = exp(−Δ/T), where T is the control parameter (temperature) and Δ is the change in the objective value. If a uniform random number in (0, 1) is less than P(accept), the new sequence is accepted; otherwise it is rejected. Ben-Arieh and Maimon [37] suggested setting T in the ith iteration to T = 1/(β·i), where β is an experimental constant whose value was chosen after preliminary trials.
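The acceptance rule and cooling schedule might be sketched as follows (the Boltzmann form is standard; the 1/(β·i) schedule is our reading of the Ben-Arieh and Maimon suggestion, and β is a placeholder constant):

```python
import math
import random

def accept(delta, T, rng=random):
    """Metropolis-style rule: always accept an improving move
    (delta <= 0); accept a worsening move with probability exp(-delta/T)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / T)

def temperature(beta, i):
    """Cooling schedule T = 1/(beta * i) for the ith iteration;
    beta is an experimental constant (a placeholder here)."""
    return 1.0 / (beta * i)
```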

4.1. Stopping Condition Iterations

In our preliminary experiments, the SA algorithms were terminated after 300n iterations, where n is the number of jobs.

5. Computational Experiments

A computational experiment was conducted to assess the performance of the proposed branch-and-bound algorithm and the accuracy of the SA algorithms. All the algorithms were coded in Compaq Visual Fortran version 6.6 and run on an Intel(R) Core(TM) i7-2600 CPU @ 3.40 GHz with 4 GB RAM under the Windows 7 operating system. The experimental design followed the framework of Chu [38] and Fisher [39]: the normal job processing times were generated randomly from a uniform distribution over the integers 1 to 100; the release times were drawn from a uniform distribution over the integers between 0 and 50.5nλ, where n is the job size and λ is a control variable; and the due dates were drawn from a uniform distribution over the integers between P(1 − τ − R/2) and P(1 − τ + R/2), where P, τ, and R are the sum of the processing times of all the jobs, the tardiness factor, and the due-date range, respectively. Furthermore, the bound Q was fixed at 0, and each agent had half of the total number of jobs.
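Our reading of this generation scheme can be sketched as follows (the exact ranges are partly reconstructed from the Fisher-style framework, so treat them as assumptions):

```python
import random

def generate_instance(n, lam, tau, R, rng=random):
    """Generate one random test instance: processing times p, release
    times r, and due dates d, following the uniform ranges described
    above (release times on [0, 50.5*n*lam]; due dates on
    [P*(1 - tau - R/2), P*(1 - tau + R/2)] with P the total processing time)."""
    p = [rng.randint(1, 100) for _ in range(n)]
    r = [rng.randint(0, int(50.5 * n * lam)) for _ in range(n)]
    P = sum(p)
    lo, hi = P * (1 - tau - R / 2), P * (1 - tau + R / 2)
    d = [rng.randint(int(lo), int(hi)) for _ in range(n)]
    return p, r, d
```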

5.1. The Accuracy of SAs for Small Job Size

First, to assess the accuracy of the SA algorithms, the error percentage was calculated as (V − OP)/OP × 100%, where V is the solution obtained from an SA algorithm and OP is the optimal objective value obtained from the branch-and-bound algorithm. In the first simulation experiment, the job size n, tardiness factor τ, due-date range R, and learning effect (le) were held fixed. Ten values of λ, ranging from 0.2 to 3.0, were tested. For each setting, 100 replications were randomly generated. The mean and maximum error percentages are recorded in Table 1. The results show that SA1, SA2, SA3, and SA4 are not affected by the variation of λ, and the mean error percentage of the SA algorithms was less than 0.1352%. To eliminate occasional worst cases of the individual SAs, we combined the four SAs into SA5, which takes the smallest objective value obtained by the four SAs. The mean error percentage of SA5 was less than 0.0429%.
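For concreteness, the gap measure defined above is simply:

```python
def error_percentage(v, op):
    """Percentage error of a heuristic objective value v
    relative to the optimal objective value op."""
    return (v - op) / op * 100.0
```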

In another simulation experiment, the remaining variables were held fixed and six combinations of the due-date parameters (τ, R) were tested. For each setting, 100 replications were randomly generated. The mean and maximum error percentages are recorded in Table 2. The results were similar to those of the former experiment; the mean error percentage of SA5 was less than 0.0151%. Based on these two experiments, combining the four SAs into SA5 is recommended.

5.2. Performance of the Branch-and-Bound Algorithm for Small Job Size

To evaluate the performance of the branch-and-bound algorithm, the mean and maximum numbers of nodes were recorded, as well as the mean and maximum computation times (in seconds). In this simulation experiment, the parameters were set at two levels of the job size (the larger being 14), ten levels of the release-time control value λ (the largest being 3.0), two levels of the tardiness factor τ (the larger being 0.5), two levels of the due-date range R (the larger being 0.75), and two levels of the learning effect (le = 70% and 90%). For each setting, 100 replications were randomly generated, yielding a total of 16,000 instances. Table 3 summarizes the performance of the branch-and-bound and SA algorithms, and indicates the following.

First, the mean and maximum numbers of nodes increased as the job size became larger, as λ became smaller, as τ became smaller, as R became smaller, or as the learning effect became stronger. The CPU times increased exponentially with the job size, and some instances could not be solved in a reasonable time. An instance was counted as solvable if its number of nodes was less than 10^8; by this measure there were 15,848 solvable instances (NSI) out of the 16,000. For the hardest parameter combination, the NSI was only 73, which indicates that the aforesaid parameters strongly affect the node counts. Second, the accuracy of the SA algorithms was assessed; the results in Table 3 indicate that the mean error percentage of SA5 was less than 0.7376%, and that its error percentage showed no clear trend with respect to the parameter settings.

5.3. The Performance of the SA Algorithms for Large Job Size

To evaluate the accuracy of the SA algorithms for large job sizes, we carried out simulation experiments with two levels of the job size (the larger being 80). The other parameters were taken at three levels of λ (the largest being 0.6), two levels of τ (the larger being 0.5), three levels of R (the largest being 0.75), and le = 70%, 80%, and 90%. For each setting, 100 replications were randomly generated. Accuracy was measured by the mean relative deviation percentage, calculated as (V_i − SA5)/SA5 × 100%, where V_i is the objective value obtained by the ith SA algorithm and SA5 is the smallest objective value obtained among the SAs. The total averages for each algorithm are depicted in Figures 1–5. Figure 4 shows that the parameters only slightly affect the performance of the SA algorithms, and no significant difference was observed among them. The CPU times of the SA algorithms were not recorded since every run was completed within one second. Thus, SA5 is recommended for solving large job sizes.

6. Conclusions

In this paper, we study a two-agent single-machine scheduling problem in which the jobs have arbitrary release times and are subject to a learning effect. The objective is to minimize the total weighted completion time of one agent, subject to an upper bound on the maximum lateness of the second agent. To solve the problem, a branch-and-bound algorithm incorporating several dominance properties and a lower bound is developed. In addition, simulated-annealing algorithms are developed to obtain near-optimal solutions, and their accuracy is tested. Computational results show that the branch-and-bound algorithm can find the optimum for up to 14 jobs in a reasonable time, and that the simulated-annealing algorithms are effective and efficient in obtaining near-optimal solutions. Suggested further research includes considering other optimization criteria or multiobjective optimization to further verify the proposed approach.