Abstract

The accumulation of knowledge and experience that a firm gains while producing a product can be viewed as a means of reducing unit costs in scheduling problems; this phenomenon is known as the "learning effect." In the scheduling of batch processing machines, it is sometimes advantageous to form a nonfull batch, while in other situations it is a better strategy to wait for future job arrivals in order to increase the fullness of the batch. However, research that considers both learning effects and release times remains relatively unexplored. Motivated by this observation, we consider a single-machine problem with a learning effect and release times where the objective is to minimize the total completion time. We develop a branch-and-bound algorithm and a genetic algorithm-based heuristic for this problem. The performances of the proposed algorithms are evaluated and compared via computational experiments, which show that the proposed approach performs well in this setting.

1. Introduction

The learning effect has received intensive attention since it was introduced into scheduling by Biskup [1]. The basic idea is that the processing times of the jobs in a sequence improve from the first job to the last, reflecting the effect of continuous learning. Moreover, several studies have indicated that the learning effect can be regarded as an important and controllable factor affecting processing times in scheduling problems (Vickson, Nowicki and Zdrzałka, and Cheng et al. [2–4]). Although many studies have focused on this phenomenon, most of them assumed that all jobs are available for processing from the start. However, in many real-world applications this assumption does not hold and release times must be considered. For example, products in a semiconductor wafer fabrication facility undergo several hundred manufacturing steps and are subject to complications such as reentrant process flows, sequence-dependent setups, a diverse product mix, and batch processing. With such complexities, it is a great challenge to meet customers' requirements such as different priorities, ready times, and due dates. In the presence of unequal ready times, processing a nonfull batch is sometimes advantageous, while in other cases it is better to wait for new jobs to arrive in order to increase the completeness of the batch (Mönch et al. [5]). Several studies have considered scheduling problems with both learning effects and release times (Lee et al., Eren, Wu and Liu, Toksarı, Wu et al., Ahmadizar and Hosseini, Rudek, Li and Hsu, Kung et al., and Yin et al. [6–16]). In this paper, we study the single-machine total completion time problem with a sum-of-processing-times-based learning effect and release times, a topic that remains largely unexplored.

Rinnooy Kan [17] showed that the same problem without the learning consideration is NP-hard unless the release times are identical. We therefore apply a branch-and-bound algorithm to search for the optimal solution and genetic algorithms to obtain near-optimal solutions. The results show that the branch-and-bound algorithm can solve instances with up to 24 jobs. Moreover, the genetic algorithms also show good performance in the computational experiments.

The rest of the paper is organized as follows. Related work on learning effects is reviewed in Section 2. In Section 3, the notation and the problem formulation are given. In Section 4, some dominance properties and two lower bounds are developed to enhance the efficiency of the search for the optimal solution, followed by descriptions of the branch-and-bound and genetic algorithms. The results of a computational experiment are given in Section 5, and conclusions are given in the last section.

2. Literature Review

There has been a considerable amount of related research dealing with scheduling problems that involve learning effects. Heizer and Render [18] verified that unit costs decrease as a firm gains more product knowledge and experience. Cheng and Wang [19] introduced the framework of the learning effect on a single machine. More recently, Biskup [20] provided a state-of-the-art review of scheduling with learning effects. Wang et al. [21] studied the time-dependent learning effect in scheduling and adopted the same learning model as proposed by Kuo and Yang [22], in which the job processing time is a function of the total normal processing time of the previously scheduled jobs. Cheng et al. [23] proposed a new learning model in which the actual processing time of a job depends on both the job's scheduled position and the processing times of the jobs already processed. The two models proposed by Biskup [1] and Koulamas and Kyparisis [24] were further combined by Wu and Lee, Lee and Wu, and Yin et al. [25–27]. In addition, Janiak and Rudek [28] introduced and analyzed a new experience-based learning effect model, based on the S-shaped learning curve, in which the job processing times depend on the experience of the processor. S.-J. Yang and D.-L. Yang [29] investigated a new group learning effect model for scheduling problems with the objective of minimizing the total processing time. Yin et al. [30] introduced a general learning effect model into the field of scheduling in which the actual processing time of a job is a general function of both the total actual processing times of the jobs already processed and the job's scheduled position; they showed that the problems of minimizing the makespan and the sum of a given power of the completion times can each be solved in polynomial time. Wu et al. [31] proposed a single-machine scheduling problem with a truncated learning effect, in which the job processing time depends on the processing times of the jobs already processed and a control parameter, and they showed that several single-machine scheduling problems under this model can be solved in polynomial time. J.-B. Wang and M.-Z. Wang [32] also proposed a revised model based on a general learning effect and proved that some single-machine and flowshop scheduling problems remain polynomially solvable. Lu et al. [33], applying different learning effect models simultaneously, studied several single-machine scheduling problems and concluded that, under the proposed models, minimizing the makespan, the total completion time, and the sum of a given power of the job completion times can all be done in polynomial time. J.-B. Wang and J.-J. Wang [34] studied a learning effect model in which the actual processing time of a job is a nonincreasing function of both the total weighted normal processing times of the jobs already processed, where the weights are position dependent, and the job's scheduled position; they also showed that the resulting problems can be solved in polynomial time. Li et al. [35] investigated several single-machine problems with a truncated sum-of-processing-times-based learning effect that remain polynomially solvable. Cheng et al. [36] addressed a two-machine flowshop scheduling problem with a truncated learning function and the makespan objective; they applied a branch-and-bound algorithm and three heuristic algorithms to derive optimal and near-optimal solutions.
Wu [37] studied two-agent scheduling on a single machine involving the learning effects and deteriorating jobs simultaneously. The objective is to minimize the total weighted completion time of the jobs of the first agent with the restriction that no tardy job is allowed for the second agent.

3. Notation and Problem Formulation

Before formulating the problem, we first introduce the notation that will be used throughout the paper.

n: the number of jobs.
S, S1, S2: sequences of jobs.
J_i, J_j: the ith and jth jobs.
p_j: the normal processing time of job J_j.
p_[r]: the normal processing time of the job scheduled in the rth position of a sequence.
p_(r): the rth smallest normal processing time when the processing times are arranged in nondecreasing order.
p^A_[r]: the actual processing time of a job if it is scheduled in the rth position.
r_j: the release time of job J_j.
r_[r]: the release time of the job scheduled in the rth position of a sequence.
r_(r): the rth smallest release time when the release times are arranged in nondecreasing order.
a: the learning ratio.
C_j(S1), C_j(S2): the completion time of job J_j in S1 and S2.
C_j(S*): the completion time of job J_j in an optimal schedule S*.
C_[r](S1), C_[r](S2): the completion time of the job scheduled in the rth position of S1 and S2.
TC(S1), TC(S2): the total completion times of sequences S1 and S2.
π, π': subsequences of jobs.

The problem is formulated as follows. There are n jobs to be processed on a single machine. The machine can handle one job at a time, and machine idle time and job preemption are not allowed. Each job J_j has a normal processing time p_j and a release time r_j. Under the general job learning model considered here, the actual processing time p^A_[r] of a job scheduled in the rth position depends on the sum of the normal processing times of the jobs already processed and on the learning ratio a. The objective is to find an optimal schedule that minimizes the total completion time TC(S) over all feasible schedules S. Using the standard three-field notation of Graham et al. [38], our scheduling problem can be denoted as 1 | r_j, learning effect | ΣC_j.
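To make the model concrete, the sketch below evaluates a given job sequence under one common sum-of-processing-times learning form, p^A_[r] = p_[r](1 + Σ p_[k])^a with a ≤ 0 (the Kuo–Yang type model cited in Section 2). The exact functional form used in this paper may differ, so the helper is an illustrative assumption rather than the authors' definition.

```python
from typing import List, Tuple


def schedule_times(jobs: List[Tuple[float, float]], a: float) -> Tuple[List[float], float]:
    """Completion times and total completion time of a job sequence.

    jobs : (normal processing time p_j, release time r_j) pairs in processing order.
    a    : learning ratio (a <= 0).  Assumed learning form (Kuo-Yang type):
           actual time in position r = p_[r] * (1 + sum of normal times already processed) ** a.
    """
    t = 0.0            # current time on the machine
    prior_sum = 0.0    # total normal processing time of the jobs already processed
    completions = []
    for p, r in jobs:
        start = max(t, r)                      # a job cannot start before its release time
        actual = p * (1.0 + prior_sum) ** a    # assumed sum-of-processing-times learning model
        t = start + actual
        completions.append(t)
        prior_sum += p
    return completions, sum(completions)


# Example: three jobs given as (p_j, r_j), processed in the listed order, with a = -0.2.
if __name__ == "__main__":
    C, TC = schedule_times([(4, 0), (6, 3), (2, 5)], a=-0.2)
    print(C, TC)
```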

4. The Branch-and-Bound and Genetic Algorithms

In this paper, we apply a branch-and-bound algorithm to search for the optimal solution and genetic algorithms to obtain near-optimal solutions. First, in order to speed up the search and improve the branching procedure, we develop some adjacent pairwise interchange properties and two lower bounds for use in the branch-and-bound algorithm. The procedure of the genetic algorithms is then described.

4.1. Dominance Properties

Before presenting the adjacent pairwise interchange properties, we provide two lemmas, which will be used in the proofs of the properties in the sequel.

Lemma 1. Let , then , for and .

Lemma 2. Let , then , for , , and .

To fathom the search tree, we develop some dominance properties based on a pairwise interchange of two adjacent jobs J_i and J_j. Let S1 = (π, J_i, J_j, π') and S2 = (π, J_j, J_i, π') be two sequences, where π and π' denote partial sequences. To show that S1 dominates S2, it suffices to show that the completion time of the second job of the interchanged pair is no larger in S1 than in S2 and that TC(S1) ≤ TC(S2). In addition, let t be the completion time of the last job in the subsequence π, which contains k jobs.
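As an illustration of the interchange test just described, the sketch below compares the two sequences numerically for a concrete suffix. It reuses schedule_times from the sketch in Section 3, and the function name and test form are illustrative rather than the exact algebraic conditions of the properties that follow.

```python
def dominates(pi, job_i, job_j, pi_prime, a):
    """Numeric form of the interchange test: placing job_i before job_j (S1)
    dominates the reverse order (S2) if the completion time of the second job of
    the pair and the total completion time are both no larger in S1.
    Jobs are (p, r) pairs; schedule_times is the helper from the Section 3 sketch.
    """
    s1 = pi + [job_i, job_j] + pi_prime
    s2 = pi + [job_j, job_i] + pi_prime
    c1, tc1 = schedule_times(s1, a)
    c2, tc2 = schedule_times(s2, a)
    k = len(pi) + 1                 # 0-based index of the second job of the pair
    return c1[k] <= c2[k] and tc1 <= tc2
```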

Property 1. If and , then dominates .

Proof. Since , we have After taking the difference of (1), we have On substituting , , and into (2) and simplifying it, we obtain By Lemma 2, with , , and , we have .
Moreover, after taking the difference of total completion times (TC) between sequences and , we have By (3), it can be easily shown that (4) is nonnegative for . Therefore, dominates .

The proofs of Properties 2 to 5 are omitted since they are similar to that of Property 1.

Property 2. If and , then dominates .

Property 3. If and , then dominates .

Property 4. If , , and , then dominates .

Property 5. If and , then dominates .

In order to determine the ordering of the remaining unscheduled jobs and further speed up the search process, we provide the following property. Assume that is a sequence of jobs where is the scheduled part containing jobs and is the unscheduled part. Let be the sequence in which the unscheduled jobs are arranged in a nondecreasing order of job processing times; that is, .

Property 6. If , then dominates sequences of the type () for any unscheduled sequence .

Proof. Since , it implies that all the unscheduled jobs are ready to be processed at time . To obtain the optimal subsequence, let be the sequence in which the unscheduled jobs are arranged in nondecreasing order of job processing times.

4.2. Lower Bounds

In this subsection, we develop two lower bounds by using the following lemma from Hardy et al. [39].

Lemma 3. Suppose that and are two sequences of numbers. The sum of products of the corresponding elements is the least if the sequences are monotonic in the opposite sense.

First, let PS denote a partial schedule in which the order of the first k jobs has been determined, and let S be a complete schedule obtained from PS. By definition, the completion time of the (k+1)th job is given by (5), and, similarly, the completion time of the (k+l)th job is given by (6). The first term on the right-hand side of (6) is known, and a lower bound on the total completion time for the partial sequence can be obtained by minimizing the second term. Since the learning factor is a decreasing function of the sum of the processing times already performed, the total completion time is minimized by sequencing the unscheduled jobs according to the shortest processing time (SPT) rule, by Lemma 3. Consequently, the first lower bound is given by (7).

On the other hand, this lower bound may not be tight if the release times are long. To overcome this, a second lower bound is established by taking the release times into account. The completion time of the (k+1)th job is given by (8) and, similarly, the completion time of the (k+l)th job is given by (9). Note that the release time appearing in (9) is bounded below using the release times of the unscheduled jobs arranged in nondecreasing order. The second term on the right-hand side of (9) is again minimized by the SPT rule, since the learning factor is a decreasing function of the sum of the processing times already performed. It follows that we have the second lower bound (10). Note that the release time and the processing time entering a given position of this bound do not necessarily come from the same job. In order to make the lower bound tighter, we choose the maximum of (7) and (10); that is, the bound used in the branch-and-bound algorithm is given by (11).
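The sketch below implements one plausible reading of the two bounds as described above: LB1 appends the unscheduled jobs in SPT order while ignoring their release times, and LB2 adds the lth smallest unscheduled release time to the same SPT-based cumulative processing term; the value used is the maximum of the two. It reuses schedule_times and the assumed learning form from the Section 3 sketch, so it should be read as an approximation of the authors' bounds rather than a faithful reproduction of (7) and (10).

```python
def lower_bound(scheduled, unscheduled, a):
    """Plausible implementation of the two bounds described above.

    scheduled   : the partial sequence, a list of (p, r) pairs in processing order
    unscheduled : the remaining jobs, a list of (p, r) pairs
    Returns max(LB1, LB2), where
      LB1 appends the unscheduled jobs in SPT order, ignoring their release times;
      LB2 adds, for the l-th remaining position, the l-th smallest release time to the
          same SPT-based cumulative actual processing time.
    """
    completions, _ = schedule_times(scheduled, a)    # helper from the Section 3 sketch
    c_k = completions[-1] if completions else 0.0    # completion time of the partial schedule
    done_sum = sum(p for p, _ in scheduled)          # normal processing already performed

    spt = sorted(p for p, _ in unscheduled)          # SPT order of the unscheduled jobs
    releases = sorted(r for _, r in unscheduled)     # nondecreasing unscheduled release times

    lb1 = lb2 = sum(completions)                     # contribution of the scheduled jobs
    cum_actual, prior = 0.0, done_sum
    for l, p in enumerate(spt):
        cum_actual += p * (1.0 + prior) ** a         # assumed learning model, as before
        prior += p
        lb1 += c_k + cum_actual                      # release times ignored
        lb2 += releases[l] + cum_actual              # l-th smallest release time added
    return max(lb1, lb2)
```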

4.3. The Procedure of Genetic Algorithms

A genetic algorithm (GA) is an optimization method that mimics the process of natural evolution. GAs were introduced by Holland [40] and are widely used to solve numerical optimization problems in a wide variety of application fields, including biology, economics, engineering, business, agriculture, telecommunications, and manufacturing (Goldberg [41]); the use of GAs in engineering design problems is reviewed in Gen and Cheng [42]. Soolaki et al. [43] used a GA to solve an airline boarding problem formulated with linear programming models, and genetic algorithms have also been used to optimize the parameters for given test collections [44, 45]. A GA starts by generating an initial population of chromosomes. A fitness function is then used to compute the relative fitness of each chromosome in the population, and the selection, crossover, and mutation operators are applied in succession to create a new population of chromosomes for the next generation. This approach has gained increasing popularity for solving combinatorial optimization problems across many disciplines.
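The following compact sketch shows the overall loop just described (initial population, fitness evaluation, elitist survival, crossover, and mutation). The parent-selection step and the two operator functions here are simple placeholders; the operators actually used are described in Sections 4.3.1–4.3.3, the generation count of 200 is only a default, and schedule_times is the evaluation helper assumed from the Section 3 sketch.

```python
import random


def crossover_placeholder(p1, p2):
    # Simple order-preserving placeholder; the paper uses LOX (Section 4.3.2(a)).
    cut = len(p1) // 2
    head = p1[:cut]
    return head + [g for g in p2 if g not in head]


def mutate_placeholder(ind):
    # Placeholder swap of two random positions (see Section 4.3.2(b)).
    i, j = random.sample(range(len(ind)), 2)
    ind = ind[:]
    ind[i], ind[j] = ind[j], ind[i]
    return ind


def run_ga(jobs, a, pop_size=60, generations=200, mutation_rate=0.10):
    """Skeleton of the GA loop: a chromosome is a permutation of job indices."""
    n = len(jobs)
    population = [random.sample(range(n), n) for _ in range(pop_size)]

    def evaluate(ind):
        return schedule_times([jobs[i] for i in ind], a)[1]   # total completion time

    for _ in range(generations):
        ranked = sorted(population, key=evaluate)              # smaller objective is fitter
        next_pop = [list(ranked[0]), list(ranked[1])]          # elitist list (Section 4.3.2(c))
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(ranked[: pop_size // 2], 2)  # placeholder parent choice
            child = crossover_placeholder(p1, p2)
            if random.random() < mutation_rate:
                child = mutate_placeholder(child)
            next_pop.append(child)
        population = next_pop
    return min(population, key=evaluate)
```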

4.3.1. Initial Settings

In a GA, a problem solution is represented by a code and each element of the code is seen as a gene. Genes are combined into a chromosome, each of which represents one feasible solution to the problem. However, the traditional representation used in GAs does not work well for scheduling problems (Etiler et al. [46]). To deal with this, this study adopts the common approach in which a chromosome directly describes the jobs as a sequence. To specify our approach, several initial sequences are adopted. In GA1, jobs are placed according to the shortest processing time (SPT) first rule. In GA2, jobs are arranged by the earliest ready time (ERT) first rule. In GA3, jobs are arranged in nondecreasing order of the sum of the job processing time and ready time. Note that, before the GA is run, the NEH algorithm (Nawaz et al. [47]) is applied to improve the quality of the solutions obtained from these rules and to reduce idle periods; a sketch is given below. GA1, GA2, and GA3 thus differ only in their initial sequences and use the same selection, crossover, and mutation operators, population size, and number of generations to obtain near-optimal solutions. In addition, a fourth genetic algorithm, denoted GA4, reports the best solution found among GA1, GA2, and GA3; that is, its objective value is the minimum of the objective values obtained by GA1, GA2, and GA3.
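Below is a sketch of the three seeding rules and the NEH-style insertion step described above. The evaluation again reuses schedule_times from the Section 3 sketch, and tie-breaking details are assumptions.

```python
def spt_seed(jobs):
    # GA1 seed: shortest normal processing time first
    return sorted(range(len(jobs)), key=lambda j: jobs[j][0])


def ert_seed(jobs):
    # GA2 seed: earliest release (ready) time first
    return sorted(range(len(jobs)), key=lambda j: jobs[j][1])


def sum_seed(jobs):
    # GA3 seed: nondecreasing order of p_j + r_j
    return sorted(range(len(jobs)), key=lambda j: jobs[j][0] + jobs[j][1])


def neh_improve(seed, jobs, a):
    """NEH-style improvement: insert the jobs one by one (in the seed order)
    at the position of the partial sequence that minimizes total completion time."""
    sequence = []
    for j in seed:
        best_seq, best_tc = None, float("inf")
        for pos in range(len(sequence) + 1):
            trial = sequence[:pos] + [j] + sequence[pos:]
            tc = schedule_times([jobs[i] for i in trial], a)[1]
            if tc < best_tc:
                best_seq, best_tc = trial, tc
        sequence = best_seq
    return sequence


# Example use (GA1 seed): initial = neh_improve(spt_seed(jobs), jobs, a)
```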

In order to avoid converging rapidly to a local optimum with a small population, or spending excessive computation time with a large one, this study sets the population size to 60 based on a preliminary trial. It is also important to evaluate the fitness of the chromosomes, since fitness determines whether each chromosome contributes to the next generation. The main goal of this study is to minimize the total completion time, so the fitness of each string in a generation is computed from its total completion time, and the selection probabilities are defined so that a sequence with a lower value of the objective function has a higher probability of being selected; a sketch of one such assignment is given below.
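Since only the intent of the fitness and selection-probability formulas is stated above, the sketch below uses one common assignment that satisfies it: the fitness of a chromosome is taken as the reciprocal of its total completion time and the probabilities are obtained by normalization. The exact formulas used in the paper may differ in form.

```python
def selection_probabilities(total_completion_times):
    """Assumed fitness-to-probability mapping: the reciprocal of the total completion
    time is one common choice that makes sequences with a lower objective value
    receive a higher selection probability."""
    fitness = [1.0 / tc for tc in total_completion_times]   # smaller TC -> larger fitness
    total = sum(fitness)
    return [f / total for f in fitness]


# Example: three chromosomes with total completion times 120, 150, and 200.
# print(selection_probabilities([120.0, 150.0, 200.0]))  # ~[0.42, 0.33, 0.25]
```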

4.3.2. Operators

Three operators are used in this study: crossover, mutation, and selection. They are described below.

(a) Crossover. This operator exchanges some of the genes of the selected parents, the main idea being that the offspring inherit the advantages of their parents. This study applies the linear order crossover (LOX) operator proposed by Falkenauer and Bouffouix [48], which is one of the better-performing crossover operators (Etiler et al. [46]). The probability of crossover is set to 1.
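Below is a sketch of the LOX operator as it is commonly implemented for permutation chromosomes: the segment between two random cut points is inherited from the first parent at the same positions, and the remaining positions are filled, left to right, with the genes of the second parent in their original relative order. Details such as cut-point handling are assumptions.

```python
import random


def lox_crossover(parent1, parent2):
    """Linear order crossover (LOX) for permutation chromosomes (common implementation)."""
    n = len(parent1)
    cut1, cut2 = sorted(random.sample(range(n + 1), 2))   # two distinct cut points
    child = [None] * n
    child[cut1:cut2] = parent1[cut1:cut2]                 # copy the segment from parent1
    kept = set(parent1[cut1:cut2])
    remaining = iter(g for g in parent2 if g not in kept) # parent2 genes in original order
    for pos in range(n):
        if child[pos] is None:
            child[pos] = next(remaining)                  # fill the holes left to right
    return child
```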

(b) Mutation. The main purpose of mutation is to help the search reach a globally optimal solution and to avoid becoming trapped in a locally optimal one. In this study, the mutation rate is set to 0.10 based on our preliminary experiment, as shown in Figure 1: 100 sets of data were randomly generated to evaluate the performance of the proposed algorithms with varying mutation rates, and the results showed that the proposed algorithms had the smallest mean error percentage at a rate of 0.10.
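The text fixes the mutation rate at 0.10 but does not spell out the operator itself; the swap mutation below is a common choice for permutation chromosomes and is offered only as an assumption.

```python
import random


def swap_mutation(chromosome, rate=0.10):
    """Assumed mutation operator: with probability `rate`, swap two random positions.
    The 0.10 rate follows the preliminary experiment reported above; the swap form
    itself is an assumption."""
    child = list(chromosome)
    if random.random() < rate:
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child
```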

(c) Selection. This process assigns a selection probability to each chromosome and is used to favor chromosomes with better fitness values. The evolution implemented in our algorithm is based on an elitist list: we copy the best offspring directly and use them to form part of the next generation, while the rest of the offspring are generated from parent chromosomes chosen by the roulette wheel selection method, which helps maintain the variety of genes.
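A sketch of the replacement scheme described above: the best chromosomes are copied unchanged (the elitist list) and the remaining offspring are produced from parents drawn by roulette-wheel selection. The elite count is illustrative, and selection_probabilities is the assumed mapping from the sketch in Section 4.3.1.

```python
import random


def next_generation(population, total_completion_times, breed, elite_count=2):
    """Build the next population: keep the `elite_count` best chromosomes unchanged,
    then fill the rest with offspring of parents drawn by roulette-wheel selection.
    `breed` is any (parent1, parent2) -> child operator, e.g. LOX followed by mutation."""
    ranked = [ind for ind, _ in sorted(zip(population, total_completion_times),
                                       key=lambda pair: pair[1])]
    probs = selection_probabilities(total_completion_times)   # from the earlier sketch
    new_pop = [list(ind) for ind in ranked[:elite_count]]
    while len(new_pop) < len(population):
        p1, p2 = random.choices(population, weights=probs, k=2)  # parents drawn with replacement
        new_pop.append(breed(p1, p2))
    return new_pop
```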

4.3.3. Stopping Criteria

In the preliminary experiment, the proposed GAs are terminated after a fixed number of generations, as shown in Figures 2 and 3. The 100 sets of randomly generated data described above were used to evaluate the performance of the proposed algorithms with varying numbers of generations. The results showed that the mean error percentage of the proposed algorithms stabilized, within a reasonable CPU time range, after the selected number of generations, which was therefore adopted as the stopping criterion.

5. Computational Experiment

A computational experiment was conducted to evaluate the efficiency of the branch-and-bound algorithm and the accuracy of the genetic algorithms. The algorithms were coded in Fortran with Compaq Visual Fortran version 6.6 and run on an Intel(R) Core(TM)2 Quad 2.66 GHz CPU with 4 GB RAM under Windows Vista. The experimental design followed that of Reeves [49]. The job processing times were generated from a uniform distribution over the integers between 1 and 20 in every case, while the release times were generated from a uniform distribution over an integer range whose upper limit depends on the number of jobs and a control parameter. Five different sets of problem instances were generated by setting this parameter to 0, 0.25, 0.5, 0.75, and 1.
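For readers who want to reproduce the test data, the sketch below generates instances following the description above: processing times uniform on the integers 1–20 and release times uniform on an integer range that grows with the number of jobs and the control parameter. The upper limit of the release-time range is not given explicitly here, so release_upper is a hypothetical placeholder (scaled by the mean processing time of 10.5) that should be replaced by the expression actually used in the paper.

```python
import random


def release_upper(n, tau):
    # Hypothetical placeholder: the release-time range depends on tau and n,
    # but the exact expression is not reproduced here.
    return int(round(tau * 10.5 * n))   # 10.5 = expected processing time of U{1..20}


def generate_instance(n, tau, seed=None):
    """Generate one test instance: p_j ~ U{1,...,20}, r_j ~ U{0,...,release_upper(n, tau)}."""
    rng = random.Random(seed)
    return [(rng.randint(1, 20), rng.randint(0, release_upper(n, tau))) for _ in range(n)]
```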

For the branch-and-bound algorithm, the average and maximum numbers of nodes as well as the average and maximum execution times (in seconds) were recorded. For the three genetic algorithms, the mean and maximum error percentages were recorded, where the error percentage of a solution was computed as the difference between the total completion time obtained by the genetic algorithm and that of the optimal schedule, divided by the optimal total completion time and expressed as a percentage. The computational times of the heuristic algorithms were not recorded since they all finished within one second.

In the computational experiment, four different numbers of jobs (12, 16, 20, and 24), four different values of the learning effect, and five different values of the generation parameter of the release times (0, 0.25, 0.5, 0.75, and 1) were tested in the branch-and-bound algorithm. As a consequence, 80 experimental situations were examined. A set of 20 instances was randomly generated for each situation, so a total of 1600 problems were tested. The algorithms were set to skip to the next set of data if the number of nodes exceeded 10^8. The results are presented in Table 1 and Figures 4, 5, 6, and 7. Figures 4–7 show the average number of nodes for the various learning effect values and release-time parameters at job sizes 12, 16, 20, and 24, respectively. The average number of nodes decreased as the release-time parameter increased when the number of jobs was greater than 16. This was a direct result of the efficiency of LB1 and LB2: as the release-time parameter increased, LB2 was applied more frequently, since larger parameter values yield longer release times, and the dominance properties also became more powerful in those cases. Moreover, LB1 is more efficient than LB2. Table 1 and Figures 4–7 also show that, regardless of the job size, the algorithms had the smallest mean number of nodes at the smallest value of the release-time parameter. This is due to the fact that with a small parameter value the release times are relatively short and the completion time readily exceeds the release times; in those cases, Property 6 was applied more frequently. Conversely, the completion time did not easily exceed the release times when the parameter value increased. Moreover, the number of nodes increased exponentially as the number of jobs increased, which is typical of an NP-hard problem. As illustrated in Table 1, there were five cases in which the branch-and-bound algorithm required a very large number of nodes to solve all the problems optimally; in its worst case, the branch-and-bound algorithm needed 5234 seconds. With the release-time parameter held fixed, the decrease in the completion times was relatively small at the beginning when the learning effect was small. In other words, the completion time would easily exceed the release times, which expedited the invocation of Property 6, and consequently the average number of nodes was smaller. As the release-time parameter increased, the smallest average number of nodes occurred at larger values of the learning effect.

The performance of the proposed GA algorithms was tested over the same 80 scenarios and the total of 1600 problems. The numbers of times that GA1, GA2, and GA3 attained the smallest mean error percentage were 45, 41, and 49, respectively. In addition, as shown in Table 1 and Figures 8, 9, and 10, their performances were not systematically affected by the learning rate, the generation parameter of the release times, or the number of jobs, and none of the three genetic algorithms was absolutely dominant in terms of mean error percentage. However, the combined algorithm GA4 strikingly outperformed each of the three algorithms in terms of the maximum mean error percentage over all 80 scenarios with varying learning rates, numbers of jobs, and release-time parameters. The maximum mean error percentages of GA1, GA2, GA3, and GA4 were 1.6268%, 0.6032%, 0.8700%, and 0.3452%, respectively; the maximum mean error percentage of GA1 was more than four times that of GA4. The combined algorithm GA4 also clearly outperformed each of the three algorithms in terms of the maximum error percentage: the worst cases of the corresponding algorithms were 7.8202%, 6.9562%, 4.2205%, and 3.2396%, respectively, so the worst-case error of GA1 was more than twice that of GA4. We would therefore recommend the combined algorithm GA4.

6. Conclusions

In this paper, we have addressed a single-machine total completion time problem with a sum-of-processing-times-based learning effect and release times; the objective is to minimize the total completion time. The problem without the learning consideration is already NP-hard. We therefore developed a branch-and-bound algorithm, incorporating several dominance properties and two lower bounds, to derive the optimal solution, and genetic algorithms to obtain near-optimal solutions.

The branch-and-bound algorithm performs well, solving instances with up to 24 jobs in a reasonable amount of time. Over the different parameter settings, the proposed genetic algorithms are quite accurate for small-size problems, with mean error percentages below 0.35%. An interesting topic for future study is to extend the problem to multimachine environments or multicriteria cases.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.