#### Abstract

In the field of distributed decision making, different agents share a common processing resource, and each agent wants to minimize a cost function that depends on its own jobs only. These issues arise in different application contexts, including real-time systems, integrated service networks, industrial districts, and telecommunication systems. Motivated by its importance in practical applications, we consider two-agent scheduling on a single machine where the objective is to minimize the total completion time of the jobs of the first agent, subject to an upper bound on the total completion time of the jobs of the second agent. To solve the proposed problem, a branch-and-bound algorithm is developed for the optimal solution, and three simulated annealing algorithms are developed for near-optimal solutions. In addition, extensive computational experiments are conducted to test the performance of the algorithms.

#### 1. Introduction

Multiagent simulation is a branch of artificial intelligence that offers a promising approach to dealing with multistakeholder management systems, such as common pool resources. It provides a framework that allows analysis of stakeholders’ (or agents’) interactions and decision making. For example, in forest plantation comanagement, Purnomo and Guizol [1] pointed out: “Comanagement of forest resources is a process of governance that enables all relevant stakeholders to participate in the decision-making processes. Illegal logging and forest degradation are currently increasing, and logging bans are ineffective in reducing forest degradation. At the same time interest in forest plantations and concern about poverty problems of neighboring people whose livelihoods depend on forest services and products continue to increase rapidly. Governments have identified the development of small forest plantations as an opportunity to provide wood supplies to forest industries and to reduce poverty. However, the development of small plantations is very slow due to an imbalance of power and suspicion between communities and large companies.” In blood cell population dynamics, Bessonov et al. [2] gave some possible explanations of the mechanism by which the system recovers after significant blood loss or under blood diseases such as anemia. Agnetis et al. [3] also indicated that multiple agents compete for the usage of a common processing resource in different application environments and different methodological fields, such as artificial intelligence, decision theory, and operations research.

Scheduling with multiple agents has received growing attention in recent years. Agnetis et al. [4] and Baker and Smith [5] were among the pioneers who introduced the concept of multiple agents into scheduling problems. Yuan et al. [6] discussed two dynamic programming recursions in Baker and Smith [5] and proposed a polynomial-time algorithm for the same problem. Cheng et al. [7] addressed the feasibility model of multiagent scheduling on a single machine where each agent’s objective is to minimize the total weighted number of its tardy jobs. Ng et al. [8] proposed a two-agent scheduling problem on a single machine, where the objective is to minimize the total completion time of the first agent with the restriction that the number of tardy jobs of the second agent cannot exceed a given number. Agnetis et al. [3] discussed the complexity of several single-machine scheduling problems with multiple agents. Cheng et al. [9] studied multiagent scheduling on a single machine where the objective functions of the agents are of the max-form. Agnetis et al. [10] used a branch-and-bound method to solve several two-agent scheduling problems on a single machine. Lee et al. [11] considered a multiagent scheduling problem on a single machine in which each agent is responsible for its own set of jobs and wishes to minimize the total weighted completion time of that set. Recently, Leung et al. [12] generalized the single-machine problems proposed by Agnetis et al. [4] to the environment with multiple identical machines in parallel. Yin et al. [13] considered several two-agent scheduling problems with assignable due dates on a single machine, where the goal is to assign a due date from a given set of due dates and a position in the sequence to each job so that the weighted sum of the objectives of both agents is minimized.
For different combinations of the objectives, which include the maximum lateness, total (weighted) tardiness, and total (weighted) number of tardy jobs, they provided complexity results and solved the corresponding problems where possible. Yin et al. [14] investigated a scheduling environment with two agents and a linear nonincreasing deterioration, with the objective of scheduling the jobs such that the combined schedule performs well with respect to the measures of both agents. Three different objective functions were considered for one agent, including the maximum earliness cost, total earliness cost, and total weighted earliness cost, while keeping the maximum earliness cost of the other agent below or at a fixed level. They proposed optimal (nondominated) properties and presented complexity results for the problems. Yin et al. [15] addressed a two-agent scheduling problem on a single machine where the objective is to minimize the total weighted earliness cost of all jobs, while keeping the earliness cost of one agent below or at a fixed level. A mixed-integer programming (MIP) model was first formulated to find the optimal solution, which is useful for small-size problem instances; then a branch-and-bound algorithm incorporating several dominance properties and a lower bound, together with a simulated annealing heuristic, was provided to derive solutions for medium- to large-size problem instances. For more recent works on two-agent issues, readers may refer to Nong et al. [16], Wan et al. [17], Luo et al. [18], and Yin et al. [19]. In addition, for more multiagent works with time-dependent processing times, readers may refer to Liu and Tang [20], Liu et al. [21], Cheng et al. [22, 23], Wu et al. [24], Li and Hsu [25], Yin et al. [26–28], Wu [29], and Li et al. [30].

Given the importance of multiple agents competing for the usage of a common processing resource in different application environments and methodological fields, we consider two-agent scheduling on a single machine. The objective is to minimize the total completion time of the jobs of the first agent with the restriction that the total completion time of the jobs of the second agent cannot exceed an upper bound; this problem has been shown to be binary NP-hard by Agnetis et al. [4]. The problem is described as follows. There are n jobs, each of which belongs to one of the two agents AG_{0} or AG_{1}. For each job J_{j}, there is a normal processing time p_{j} and an agent code I_{j}, where I_{j} = 0 if J_{j} belongs to AG_{0} and I_{j} = 1 if J_{j} belongs to AG_{1}. All the jobs are available at time zero. Under a schedule S, let C_{j}(S) be the completion time of job J_{j}. The objective of this paper is to find an optimal schedule that minimizes the total completion time of the AG_{0} jobs subject to the constraint that the total completion time of the AG_{1} jobs does not exceed Q*, where Q* is a control upper bound.
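To make the formulation concrete, the following minimal sketch evaluates a schedule under this model; the `Job` encoding, function names, and the AG0/AG1 labels are illustrative choices, not taken from the paper.

```python
from typing import List, Tuple

# A job is (processing time p_j, agent code I_j): 0 for AG0, 1 for AG1.
Job = Tuple[int, int]

def agent_totals(schedule: List[Job]) -> Tuple[int, int]:
    """Process jobs back to back from time zero and return the total
    completion time of the AG0 jobs and of the AG1 jobs."""
    t = 0
    totals = [0, 0]
    for p, agent in schedule:
        t += p                 # completion time C_j of this job
        totals[agent] += t
    return totals[0], totals[1]

def is_feasible(schedule: List[Job], q_star: int) -> bool:
    """A schedule is feasible when the AG1 total does not exceed Q*."""
    return agent_totals(schedule)[1] <= q_star
```

For example, for the schedule [(2, 0), (3, 1), (1, 0)] the completion times are 2, 5, and 6, so the AG0 objective is 2 + 6 = 8 and the AG1 total is 5.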

The remainder of this paper is organized as follows. In Section 2 we present some dominance properties and develop a lower bound to speed up the search for the optimal solution, followed by a discussion of a branch-and-bound algorithm and three simulated annealing (SA) algorithms. In Section 3 we present the results of extensive computational experiments that assess the performance of all of the proposed algorithms under different experimental conditions. We conclude the paper and suggest topics for further research in the last section.

#### 2. Dominance Properties

We will develop some adjacent dominance rules by using the pairwise interchange method. Assume that S and S′ denote two given job schedules in which the difference between S and S′ is a pairwise interchange of two adjacent jobs J_{i} and J_{j}. That is, S = (σ, J_{i}, J_{j}, σ′) and S′ = (σ, J_{j}, J_{i}, σ′), where σ and σ′ each denote a partial sequence. In addition, let t be the completion time of the last job in σ.

*Property 1. *If jobs , , and , then S dominates S′.

*Property 2. *If jobs , , and , then S dominates S′.

Next, we give a proposition to determine the feasibility of a partial schedule. Let (ρ, π) be a sequence of jobs, where ρ is the scheduled part with k jobs and π is the unscheduled part with (n − k) jobs. Moreover, let C(ρ) be the completion time of the last job in ρ. Also, let π_{0} and π_{1} denote the unscheduled jobs of AG_{0} and of AG_{1} arranged in shortest processing time (SPT) order, respectively.

*Property 3. *If there is a job J_{j} in π such that , then (ρ, π) is not a feasible sequence.

*Property 4. *If all the unscheduled jobs belong to AG_{0}, then (ρ, π_{0}) dominates (ρ, π).

*Property 5. *If all the unscheduled jobs belong to AG_{1}, then the feasibility of (ρ, π) can be determined by (ρ, π_{1}). That is, (ρ, π_{1}) either is a feasible solution or (ρ, π) can be deleted.
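One way such a feasibility test can be realized is sketched below; this is a plausible check in the spirit of Property 3, not the paper's exact condition: if, even when every unscheduled AG1 job runs immediately after the scheduled part in SPT order (the best case for the constraint), the AG1 total completion time already exceeds Q*, then no completion of the partial sequence can be feasible.

```python
from typing import List, Tuple

def cannot_become_feasible(ag1_total_so_far: int, last_completion: int,
                           unscheduled: List[Tuple[int, int]],
                           q_star: int) -> bool:
    """Prune a partial sequence when even the most favourable placement of
    the remaining AG1 jobs (immediately next, in SPT order) violates Q*."""
    ag1_times = sorted(p for p, agent in unscheduled if agent == 1)  # SPT
    t = last_completion
    best_case = ag1_total_so_far
    for p in ag1_times:
        t += p
        best_case += t
    return best_case > q_star
```

Because the check is a relaxation of every possible completion, a `True` answer safely eliminates the node; a `False` answer guarantees nothing.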

##### 2.1. Lower Bound

Assume that PS is a partial schedule in which the order of the first k jobs is determined, and let US be the unscheduled part with (n − k) jobs. Among the unscheduled jobs, there are n_{0} jobs from agent AG_{0} and n_{1} jobs from agent AG_{1}. Moreover, let C_{[k]} denote the completion time of the kth job in PS. The completion time of the (k + 1)th job is C_{[k+1]} = C_{[k]} + p_{[k+1]}. Then a lower bound can be obtained by appending the unscheduled AG_{0} jobs to PS in shortest processing time (SPT) order while ignoring the unscheduled AG_{1} jobs, which could only delay them: LB = Σ_{J_{j} ∈ PS, J_{j} ∈ AG_{0}} C_{j} + Σ_{j=1}^{n_{0}} (C_{[k]} + Σ_{i=1}^{j} p_{(i)}), where p_{(1)} ≤ p_{(2)} ≤ ⋯ ≤ p_{(n_{0})} are the processing times of the unscheduled AG_{0} jobs in SPT order.
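A lower bound of this type can be sketched as follows: the AG0 cost already accrued in PS, plus the cost of the unscheduled AG0 jobs sequenced in SPT order directly after C_[k], with the unscheduled AG1 jobs dropped (they could only push AG0 jobs later, so dropping them is a valid relaxation). The function and argument names are illustrative.

```python
from typing import List, Tuple

def lower_bound(ag0_cost_in_ps: int, c_k: int,
                unscheduled: List[Tuple[int, int]]) -> int:
    """Relaxation: pretend the unscheduled AG0 jobs run back to back from
    C_[k] in SPT order with no AG1 job in between."""
    ag0_times = sorted(p for p, agent in unscheduled if agent == 0)  # SPT
    t = c_k
    bound = ag0_cost_in_ps
    for p in ag0_times:
        t += p
        bound += t
    return bound
```

For instance, with an accrued AG0 cost of 5, C_[k] = 4, and unscheduled jobs [(3, 0), (1, 0), (2, 1)], the SPT-ordered AG0 jobs finish at 5 and 8, giving a bound of 5 + 5 + 8 = 18.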

##### 2.2. Simulated Annealing Algorithms

Simulated annealing has become one of the most popular metaheuristic methods for solving combinatorial optimization problems since it was proposed by Kirkpatrick et al. [31]. For example, Kim et al. [32] applied the method to the scheduling of raw-material unloading from ships at a steelworks. Sun et al. [33] used the technique for allocating product lots to customer orders in semiconductor manufacturing supply chains. Moreover, the method can avoid getting trapped in a local optimum because of its hill-climbing moves, which are governed by a control parameter. Thus, we use SA to derive near-optimal solutions for our problem. The steps of the SA algorithms are summarized as follows.

###### 2.2.1. Initial Sequence

Three initial sequences are adopted for the three SA algorithms. For the first initial sequence, used in SA_{1}, the jobs of AG_{1} are arranged in shortest processing time (SPT) order, followed by the jobs of AG_{0} in SPT order. In order to obtain a good starting point, two more initial sequences are considered. For the second initial sequence, used in SA_{2}, pairwise interchange is applied to the initial sequence produced for SA_{1}. For the third initial sequence, used in SA_{3}, the NEH method [34] is applied to the initial sequence produced for SA_{1}.
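The first initial sequence can be sketched as follows; that the AG1 jobs (whose total completion time is bounded) come first is an assumption read from the construction, and the function name is illustrative.

```python
from typing import List, Tuple

def sa1_initial_sequence(jobs: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """SA1-style start: the bounded agent's (AG1) jobs in SPT order,
    then the AG0 jobs in SPT order."""
    ag1 = sorted((job for job in jobs if job[1] == 1), key=lambda job: job[0])
    ag0 = sorted((job for job in jobs if job[1] == 0), key=lambda job: job[0])
    return ag1 + ag0
```

Placing the bounded agent's jobs first keeps their total completion time as small as possible, which makes the starting point as likely as possible to satisfy the Q* constraint.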

###### 2.2.2. Neighborhood Generation

The pairwise interchange (PI) neighborhood generation method is adopted in the algorithms.
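A minimal sketch of such a generator is given below; here the interchange picks any two positions at random, since the text does not state whether the swap is restricted to adjacent positions.

```python
import random
from typing import List, Tuple

def pi_neighbour(seq: List[Tuple[int, int]],
                 rng: random.Random) -> List[Tuple[int, int]]:
    """Return a copy of seq with two randomly chosen positions swapped."""
    i, j = rng.sample(range(len(seq)), 2)  # two distinct positions
    neighbour = list(seq)
    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
    return neighbour
```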

###### 2.2.3. Acceptance Probability

When a new feasible sequence is generated, it is accepted if its objective value is smaller than that of the original sequence; otherwise, it is accepted with a probability that decreases as the process evolves. The probability of acceptance is generated from an exponential distribution, P(accept) = exp(−λΔ), where λ is a control parameter and Δ is the change in the objective value. In addition, we use the method suggested by Ben-Arieh and Maimon [35] to change λ in the kth iteration as λ = k/β, where β is an experimental constant whose value was fixed after preliminary trials.

If the total completion time of the jobs of agent AG_{0} increases as a result of a random pairwise interchange, the new sequence is accepted when exp(−λΔ) > r, where r is randomly sampled from the uniform distribution over (0, 1).
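A sketch of this acceptance rule follows; the form exp(−λΔ) with λ = k/β is assumed from the description above, and the tuned value of β is not reproduced here.

```python
import math
import random

def accept_move(delta: float, iteration: int, beta: float,
                rng: random.Random) -> bool:
    """Always accept improving moves; accept a worsening move with
    probability exp(-lambda * delta), where lambda = iteration / beta,
    so acceptance tightens as the search proceeds."""
    if delta <= 0:          # improving (or equal) move
        return True
    lam = iteration / beta
    return math.exp(-lam * delta) > rng.random()
```

Because λ grows with the iteration counter, large worsening moves are accepted early in the run and almost never near the end, which is the usual cooling behaviour of SA.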

###### 2.2.4. Stopping Rule

All three proposed SAs are stopped after a fixed number of iterations that grows with n, the number of jobs.
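Putting the pieces together, the following compact sketch runs one such SA: an SA1-style start, random pairwise interchanges, the exp(−λΔ) acceptance rule with λ = k/β, and an iteration budget. The budget of 100·n iterations and β = 2 are placeholders, not the paper's tuned values, and the sketch assumes the starting sequence is feasible.

```python
import math
import random
from typing import List, Tuple

Job = Tuple[int, int]  # (processing time, agent code: 0 for AG0, 1 for AG1)

def totals(seq: List[Job]) -> Tuple[int, int]:
    """Total completion times of the AG0 jobs and of the AG1 jobs."""
    t, tot = 0, [0, 0]
    for p, agent in seq:
        t += p
        tot[agent] += t
    return tot[0], tot[1]

def sa_run(jobs: List[Job], q_star: int, beta: float = 2.0,
           seed: int = 0) -> Tuple[List[Job], int]:
    rng = random.Random(seed)
    # SA1-style start: AG1 jobs in SPT order, then AG0 jobs in SPT order.
    cur = sorted(j for j in jobs if j[1] == 1) + \
          sorted(j for j in jobs if j[1] == 0)
    cur_obj = totals(cur)[0]
    best, best_obj = list(cur), cur_obj
    for k in range(1, 100 * len(jobs) + 1):   # placeholder iteration budget
        cand = list(cur)
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]   # pairwise interchange
        ag0_obj, ag1_obj = totals(cand)
        if ag1_obj > q_star:                  # infeasible neighbour: skip
            continue
        delta = ag0_obj - cur_obj
        if delta <= 0 or math.exp(-(k / beta) * delta) > rng.random():
            cur, cur_obj = cand, ag0_obj
            if cur_obj < best_obj:
                best, best_obj = list(cur), cur_obj
    return best, best_obj
```

On a tiny instance the loop quickly drifts the AG0 jobs toward the front whenever the bound Q* leaves room for it.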

#### 3. Computational Experiments

Extensive computational experiments were conducted to test the performance of the branch-and-bound algorithm and the three simulated annealing algorithms. All the algorithms were coded in Fortran using Compaq Visual Fortran version 6.6, and the experiments were performed on a personal computer powered by an Intel(R) Core(TM)2 Quad CPU 2.66 GHz with 4 GB RAM operating under Windows XP. The job processing times were generated from a uniform distribution over the integers 1–100. For the control upper bound Q*, we first arrange the jobs of AG_{1} in shortest processing time (SPT) order and compute the total completion time of the AG_{1} jobs (recorded as Q_{1}). Then we arrange the jobs of AG_{0} in SPT order, followed by the jobs of AG_{1} in SPT order, and compute the total completion time of the AG_{1} jobs in this sequence (recorded as Q_{2}). We let Q* = Q_{1} + δ(Q_{2} − Q_{1}), where 0 ≤ δ ≤ 1. Moreover, δ was taken as 0.25, 0.5, and 0.75, while the proportion of the jobs of agent AG_{0} was set at pro = 0.25, 0.5, and 0.75 in the tests.
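The construction of Q* can be sketched as follows; the reading that Q1 and Q2 are the AG1 totals with the AG1 jobs scheduled first and last, respectively, is an interpretation of the description, and the names are illustrative.

```python
from typing import List, Tuple

def ag1_total_from(start: int, ag1_spt: List[int]) -> int:
    """Total completion time of AG1 jobs run back to back from `start`."""
    t, total = start, 0
    for p in ag1_spt:
        t += p
        total += t
    return total

def control_upper_bound(jobs: List[Tuple[int, int]], delta: float) -> float:
    ag0 = sorted(p for p, agent in jobs if agent == 0)  # SPT order
    ag1 = sorted(p for p, agent in jobs if agent == 1)
    q1 = ag1_total_from(0, ag1)          # AG1 jobs first: tightest bound
    q2 = ag1_total_from(sum(ag0), ag1)   # AG1 jobs after all AG0 jobs
    return q1 + delta * (q2 - q1)        # delta in {0.25, 0.5, 0.75}
```

With δ = 0 the constraint forces the AG1 jobs into their best possible position, and with δ = 1 it is loose enough that the AG0 jobs can all run first; the tested values interpolate between these extremes.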

For the branch-and-bound algorithm, the average and standard deviation of the number of nodes and of the execution time (in seconds) were recorded. For the SA heuristics, the mean and standard deviation of the percentage error were recorded. The percentage error of a solution obtained from a heuristic algorithm was given by (V − V*)/V* × 100%, where V and V* denote the total completion time of the heuristic solution and of the optimal solution, respectively. The computational times of the heuristic algorithms were not recorded because they were all fast in generating solutions.

The computational experiments were divided into two parts. In the first part of the experiments, three numbers of jobs were tested, at n = , 12, and 16. As a result, 27 experimental situations were examined, and 50 instances were randomly generated for each case. The results are summarized in Table 1, which includes the CPU time (mean and standard deviation) and the number of nodes for the branch-and-bound algorithm.

For the performance of the branch-and-bound algorithm, it can be observed from Table 1 that the number of nodes and the mean CPU time increase as n becomes bigger. In particular, the instances with a bigger value of δ are more difficult to solve than those with a smaller one. Moreover, the instances with a bigger value of pro (pro = 0.5, 0.75) are more difficult to solve than those with a smaller one (pro = 0.25). The most difficult cases are located at pro = 0.5 and 0.75, where the number of instances that could be solved within fewer nodes declined to 10 or below. In addition, as shown in Figure 1 and Table 1, the instances with pro = 0.5 took more nodes or CPU time than the others, and the trend became clearer as the value of δ got bigger.

For the performance of the proposed SAs, it can be seen in Table 1 that the means of the percentage error of SA_{1} (between 0.0% and 4.78%) are lower than those of SA_{2} (between 0.0% and 7.79%) and those of SA_{3} (between 0.0% and 6.65%). Moreover, the standard deviations of the percentage error follow a similar pattern. As shown in Table 1, the standard deviations of the percentage error of SA_{1}, SA_{2}, and SA_{3} were between 0.0% and 7.29%, 0.0% and 12.0%, and 0.0% and 8.43%, respectively. This reveals that the performance of SA_{1} was slightly better than that of the other two. Furthermore, Figure 2 and Table 1 indicate that the means of the percentage error of SA_{1}, SA_{2}, and SA_{3} are slightly affected by the parameter δ. For example, the means of the percentage errors of SA_{1}, SA_{2}, and SA_{3} were larger than 2% at some parameter settings. Specifically, the behavior became clear at pro = 0.25 and 0.5. It can also be observed that there is no absolute dominance relationship among the three proposed SAs. To obtain a good-quality solution, we further combined the three proposed SAs into one (recorded as SA_{4}). The means of the percentage error of SA_{4} were located between 0.0% and 2.12%, while its standard deviations of the percentage error were between 0.0% and 4.26%. Moreover, the impacts of pro and δ were not pronounced for the three proposed SAs.

In the second part of the experiments, the performance of the proposed SA heuristics was further tested for large numbers of jobs. Three different numbers of jobs were considered, at n = , 40, and 60. The proportion of the jobs of agent AG_{0} was set at pro = 0.25, 0.5, and 0.75 in the tests. Moreover, δ was taken as 0.25, 0.5, and 0.75. As a result, 27 experimental situations were tested, and 50 instances were randomly generated for each situation. The relative deviation percentage with respect to the best known solution was reported for each instance, and the mean execution time and mean relative deviation percentage were recorded for each SA heuristic. The relative deviation percentage (RDP) was given by RDP = (V_{i} − V*)/V* × 100%, where V_{i} is the value of the objective function generated by SA_{i} and V* is the smallest value of the objective function obtained from the three SA heuristics. The results are summarized in Table 2.

As shown in Figure 3 and Table 2, it can be seen that the RDP means of SA_{1}, SA_{2}, and SA_{3} get slightly bigger as the value of δ increases. In general, the RDP means of SA_{1} are lower than those of SA_{2} and SA_{3}. Furthermore, all of the RDP means of SA_{1}, SA_{2}, and SA_{3} were less than 2%. Figure 2 also indicates that there is no absolute dominance relationship among the three proposed SAs.

#### 4. Conclusions

This paper explored the single-machine two-agent scheduling problem where the objective is to minimize the total completion time of the jobs belonging to the first agent with the restriction that the total completion time of the jobs of the second agent cannot exceed an upper bound. Since the problem under study is binary NP-hard, a branch-and-bound algorithm incorporating several dominance properties and a lower bound was proposed for the optimal solution. Three simulated annealing algorithms were also proposed for near-optimal solutions. The computational results show that, with the help of the proposed heuristic initial solutions, the branch-and-bound algorithm performs well in terms of the number of nodes and the execution time when the number of jobs is less than or equal to 16. Moreover, the computational experiments also show that the proposed combined algorithm SA_{4} performs well, since its mean error percentage was less than 2.12% for all the tested cases. Further research could devise efficient and effective methods to solve the problem with significantly larger numbers of jobs.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

This research was supported in part by the Beijing Municipal Education Commission Science and Technology Plan Project (KM 20121417015).