Abstract

We develop mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the problem of scheduling unrelated parallel machines with machine- and job sequence-dependent setup times and with job splitting. The first contribution of this paper is a set of novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when local search is hybridized into the algorithm. We developed procedures that adapt the results of local search back into the genetic algorithm with a minimum number of relocation operations on the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the GAspLAMIP implementation, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.

1. Introduction

This study focuses on the problem of scheduling jobs that involve splitting and machine- and sequence-dependent setup times on nonidentical (unrelated) parallel machines to minimize the maximum completion time (makespan). The problem will be referred to as unrelated parallel machine scheduling with job splitting.

Job splitting is an important requirement for some industries. For example, drilling in printed circuit board (PCB) manufacturing, dicing in semiconductor wafer manufacturing, and weaving in textile manufacturing are major bottleneck operations, and splitting jobs into subjobs and processing them on different machines is necessary because of due date pressure and the competitive challenges of modern manufacturing. In fact, the methods proposed in this paper can be applied to the real weaving process of the textile manufacturing industry, from which this research originated.

The mentioned problem involves a set of jobs, each of which can be split into subjobs. The number of subjobs for each job will be between one and the maximum number of subjobs, which is defined beforehand; the proposed algorithms try to find the best number of subjobs for each job. More subjobs require more setup times. Hence, in the scheduling problem considered in this study, finding an appropriate number of subjobs to split from each job is very important.

If the jobs’ processing times depend on the assigned machines and there is no relationship among these machines, then the machines are considered to be unrelated [1]. Thus, in the unrelated machine environment, the processing time of a job differs from machine to machine. As will be explained in the next sections, p_jk denotes the processing time of job j on machine k. The mentioned subjobs must each be processed on one machine from a set of unrelated parallel machines. The setup times are sequence- and machine-dependent: each machine has its own matrix of setup times, and these matrices differ from each other. Moreover, the setups are asymmetric; on a given machine, the setup time between job i and job j generally differs from the setup time between job j and job i.
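To make the setting concrete, the data of such an instance can be laid out as nested tables. All names and numbers below are hypothetical illustrations, not taken from the paper's benchmark.

```python
# A tiny hypothetical instance of the problem described above. Unrelated
# machines: each machine has its own processing time for every job, with
# no proportionality across machines. Setups: one matrix per machine,
# and generally asymmetric (going i -> j differs from j -> i).
proc = {                                  # proc[machine][job]
    "M1": {"J1": 60, "J2": 50},
    "M2": {"J1": 45, "J2": 70},
}
setup = {                                 # setup[machine][from_job][to_job]
    "M1": {"J1": {"J2": 4}, "J2": {"J1": 7}},
    "M2": {"J1": {"J2": 3}, "J2": {"J1": 2}},
}
# asymmetry on one machine, and different matrices per machine
assert setup["M1"]["J1"]["J2"] != setup["M1"]["J2"]["J1"]
assert setup["M1"]["J1"]["J2"] != setup["M2"]["J1"]["J2"]
```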

In scheduling theory, the makespan (C_max) is defined as the completion time of the final job (when it leaves the system). A smaller C_max implies a higher utilization, which is closely related to the throughput rate of the system. Therefore, reducing C_max will lead to a higher throughput rate [2]. For that reason, minimization of the makespan is the objective of this study.
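Formally, if C_j denotes the completion time of job j, the objective can be stated as:

```latex
C_{\max} = \max_{j} C_{j}
```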

Minimizing the makespan on a scheduling problem with identical parallel machines and sequence-dependent setup times is NP-hard [3]. Thus, the more complex case of the problem, with job splitting and unrelated parallel machines, is also NP-hard.

The present paper develops mixed integer programming (MIP) models and hybrid genetic-local search algorithms to solve the mentioned unrelated parallel machine scheduling problem with job splitting. The proposed algorithms perform job splitting and scheduling simultaneously with a variable number of subjobs; to the best of our knowledge, no work has been published on an algorithm with these properties. This is the first contribution of this paper. The second contribution is overcoming the main difficulty of using random key numbers in the chromosome of a hybrid structure. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). In the literature, random key numbers are frequently used in chromosomes, but applying local search inside the genetic algorithm makes the situation more complex. Inside each generation of the genetic algorithm, we try to find a better job sequence using local search, and that better job sequence must then be adapted back into the chromosome used in the next generation. We need to determine which gene values should be exchanged, and under which conditions and in which genes new random key numbers should be regenerated. To manage the random key numbers according to the job sequence required by the local search operations implemented in the genetic algorithm, we developed procedures that adapt the results of local search into the genetic algorithm with a minimum number of relocation operations on the genes' random key numbers. The third contribution of this paper is three new mixed integer programming (MIP) models designed for different subjob environments. In MIP1 and MIP2, the number of subjobs is determined by the model, between one and the maximum number of subjobs.
In MIP3, the number of subjobs is predetermined randomly between one and the maximum number of subjobs and given to the model as input data. In MIP1 the quantities of the subjobs are not necessarily equal; in MIP2 and MIP3 they are equal. Finally, we present an implementation (GAspLAMIP), the fourth contribution of this study, in which the schedule obtained by GAspLA is combined with the MIP formulation: the result of GAspLA feeds the MIP formulation with an initial solution. This implementation lets us verify the optimality of GAspLA for the studied combinations. To assess the performance of the algorithms, the SchedulingResearch 2005 problem set [4] was used. The results demonstrate that the proposed algorithms outperform the compared ones.

This paper is organized as follows. Section 2 provides a review of the existing literature. In Section 3, mixed integer programming (MIP) mathematical models are formulated. In Section 4, the proposed GAspLA is described. In Section 5, the implementation of GAspLAMIP is presented. In Section 6, the experimental design and computational results are reported. Section 7 concludes the paper.

2. Literature Review

The interest in scheduling problems with setup times began in the mid-1960s, and these problems have received continuous attention from researchers since then. Allahverdi et al. [5] presented a survey of scheduling problems with setup times or costs. The paper classified scheduling problems into those with batching and nonbatching considerations and those with sequence-independent and sequence-dependent setup times. It also classified the literature according to shop environments, including single machine, parallel machines, flow shop, no-wait flow shop, flexible flow shop, job shop, and open shop. Machine setup time is a significant factor in production scheduling in manufacturing environments, and sequence-dependent setup times in particular have been widely investigated. Zhu and Wilhelm [6] presented a review of scheduling problems that involve sequence-dependent setup times (costs). Li and Yang [7] presented a review of nonidentical parallel-machine scheduling research in which total weighted completion times are minimized; models and relaxations are classified in that paper, and heuristics and optimizing techniques are surveyed. Lin et al. [8] compared the performance of various heuristics for unrelated parallel machine scheduling problems. They proposed a metaheuristic, and computational results showed that it outperformed other existing heuristics for each of three objectives, minimizing makespan, total weighted completion time, and total weighted tardiness, when run with a parameter setting appropriate for the objective.

Because of the problem complexity, it is general practice to find an appropriate heuristic rather than an optimal solution for the parallel-machine scheduling problem. Park et al. [9] used a neural network and heuristic rules to schedule jobs with sequence-dependent setup times on parallel machines. They utilized a neural network to calculate the priority index of each job, and their computational results showed that the proposed approach outperformed the original ATCS (Apparent Tardiness Cost with Setups) of Lee et al. [10] and a simple application of ATCS. van Hop and Nagarur [11] focused on scheduling problems of printed circuit boards (PCBs) for nonidentical parallel machines, and a composite genetic algorithm was developed to solve this multiobjective problem; test results showed that efficient solutions were obtained within a reasonable amount of time. Zandieh et al. [12] studied hybrid flow shop scheduling problems with sequence-dependent setup times and proposed an immune evolutionary algorithm (IEA) for this problem. Wang [13] considered the problem of single-machine common due date scheduling with controllable processing times. Behnamian et al. [14] compared makespan results of ant colony optimization (ACO), variable neighborhood search (VNS), simulated annealing (SA), and a VNS hybrid algorithm for parallel machine scheduling problems with sequence-dependent setup times. Yang [15] proposed an evolutionary simulation optimization approach for solving the parallel-machine scheduling problem; the findings were benchmarked against lower-bound solutions, and efficient results were obtained. Balin [16] adapted a GA to the nonidentical parallel machine scheduling problem and proposed an algorithm with a new crossover operator and a new optimality criterion.
The new algorithm was tested on a numerical example by implementing it in simulation software. The results showed that, in addition to its high computational speed for a larger-scale problem, the GA addressed the nonidentical parallel machine scheduling problem of minimizing the makespan. Keskinturk et al. [17] aimed to minimize the average relative percentage of imbalance and used the ACO algorithm for load balancing in parallel machines with sequence-dependent setup times. The results of tests on various random data showed that the ACO outperformed the GA and heuristics.

In this section, some of the previous studies on the unrelated parallel machine scheduling problem with the objective of minimizing the makespan are discussed. All of these studies rely on the following assumptions: (i) machine-dependent and job sequence-dependent setup times; (ii) all of the jobs are available at time zero.

A metaheuristic, Meta-RaPS, was introduced by Rabadi et al. [18], and its performance was evaluated by comparing its solutions to those obtained by existing heuristics; the results showed that the solutions were efficient. A two-stage ant colony optimization (ACO) algorithm was proposed by Arnaout et al. [19]; the performance of this metaheuristic was evaluated on the benchmark problems, and the solutions were found to be efficient. Another method was suggested by Chang and Chen [20] for the same NP-hard problem. A set of dominance properties was developed, including intermachine and intramachine switching properties, which are necessary conditions on job sequencing orders in a derived optimal schedule. They also introduced a new metaheuristic by integrating the dominance properties with a genetic algorithm (GADP); its performance was evaluated on benchmark problems from the literature, and the solutions were efficient. Vallada and Ruiz [1] proposed a genetic algorithm that includes a crossover operator with a limited local search as well as a fast local search procedure; this method was tested on both small and large problem sets and outperformed the other evaluated methods. The SchedulingResearch 2005 [4] datasets were used by many researchers [18–20] in the literature. Yilmaz Eroglu et al. [21] proposed a genetic algorithm with local search (GALA) based on random keys. To present the performance of the algorithm, the same SchedulingResearch 2005 problem set [4] was used. The results showed that GALA, which is the foundation of this study, outperformed the other algorithms.

The jobs considered in this paper can be split into subjobs, a feature that is very seldom studied in the literature. Studies can be categorized as splitting the job into subjobs of discrete units or of continuous units. Some studies that involve discrete units are as follows. Kim et al. [22] focused on the dicing operation of semiconductor wafer manufacturing, the major bottleneck of the whole process. They proposed a simulated annealing algorithm for the problem of allotting work parts of jobs to unrelated parallel machines, where a job refers to a lot composed of items; setup times were job sequence-dependent. The proposed SA method outperformed a neighborhood search method in terms of total tardiness. Kim et al. [23] suggested a two-phase heuristic algorithm for the problem of scheduling the drilling process in a PCB manufacturing system. It was assumed that a job can be split into a discrete number of subjobs that are processed on identical parallel machines independently. In the first phase of the algorithm, an initial sequence is generated by existing heuristic methods for the parallel machine scheduling problem; in the second phase, each job is split into subjobs, and the jobs and subjobs are rescheduled on the machines. Computational experiments showed that the suggested algorithm performed better than an existing method. Shim and Kim [24] also focused on the drilling bottleneck of PCB manufacturing. For the problem of scheduling jobs that can be split into subjobs, they developed several dominance properties and lower bounds and then suggested a branch and bound algorithm using them; the suggested algorithm solved problems of moderate size in a reasonable amount of computational time. Xing and Zhang [25] proposed a heuristic algorithm for the parallel machine scheduling problem with job splitting and analyzed the worst-case performance of the algorithm.
Yalaoui and Chu [3] considered a simplified real-life identical parallel machine scheduling problem with sequence-dependent setup times and job splitting to minimize the makespan. The proposed method is composed of two phases. In the first phase, the problem is reduced to a single machine scheduling problem and transformed into a traveling salesman problem (TSP), which can be solved efficiently using Little’s method. In the second phase, the derived initial solution is improved step by step, accounting for the setup times and job splitting. Tahar et al. [26] proposed a new method that improves on the method of Yalaoui and Chu [3]. For the problem of splitting the job into continuous units, Serafini [27] studied the scheduling of looms in the textile industry: jobs might be independently split over several specified machines, preemption was allowed, the parallel machines were uniform, and there were no setup times; heuristic algorithms were proposed for the objective of minimizing the maximum weighted tardiness. Yilmaz Eroglu et al. [28] proposed a genetic algorithm (without local search) that decides the number of subjobs for each order and builds the schedule simultaneously. Computational results showed that the suggested algorithm could find solutions for problems with 75 machines and 111 jobs in a reasonable amount of CPU time, and the proposed GA outperformed the existing system in terms of makespan. Pimentel et al. [29] focused on a problem related to the knitting production process: a MIP model was formulated, and heuristics and local search algorithms were proposed for the identical parallel machine scheduling problem with job splitting to minimize total tardiness.

3. MIP Mathematical Models

In this section, mixed integer programming (MIP) mathematical models are formulated to find optimal solutions for the unrelated parallel machine scheduling problem with sequence-dependent setup times and job splitting. Three mathematical models are developed. In MIP1 and MIP2, the number of subjobs is determined by the model, between one and the maximum number of subjobs. In MIP3, the number of subjobs is predetermined randomly between one and the maximum number of subjobs and given to the model as input data. In MIP1 the quantities of the subjobs are not necessarily equal; in MIP2 and MIP3 they are equal.

The problem has a set of jobs, each of which can be split into subjobs. The subjob numbers declared for this set have different meanings in MIP1, MIP2, and MIP3.

For MIP1 and MIP2, these numbers denote the maximum number of subjobs for each job; MIP1 and MIP2 decide the actual number of subjobs between one and this maximum. So, in MIP1 and MIP2 the number of subjobs is variable.

For MIP3, the number of subjobs of each job is predefined randomly between one and the maximum number of subjobs. So, in MIP3 the number of subjobs is constant, but this constant is generated randomly between one and the maximum number of subjobs in order to approximate the hybrid genetic-local search algorithm that will be described in Section 4.

The mentioned subjobs must be processed on machines from a set of unrelated parallel machines: (i) p_jk: processing time of job j on machine k; (ii) s_ijk: machine-based sequence-dependent setup time on machine k when processing job j after having processed job i.

Details about the models are explained in the following subsections.

3.1. MIP1: The Quantities of Subjobs Are Not Equal

The quantities of subjobs are not equal in MIP1. This is the point of difference between this model and the hybrid genetic-local search algorithm (GAspLA) that will be explained in Section 4: (i) Q_j: total constant production quantity of each job j; (ii) u_jk: processing time of a unit quantity of job j on machine k.

The decision variables for the model are defined below; the objective function is given in (2), and the constraints in (3)–(16).

The objective (2) is to minimize the maximum completion time, or makespan. Constraint sets (3) and (4) ensure that the number of subjobs of each job is between 1 and the maximum number of subjobs for that job. The usage of dummy job 0 as a predecessor indicates that a job is the first job on its machine. Constraint set (5) controls the production quantities: if a job is processed on a machine, its production quantity on that machine exists; otherwise, the production quantity is zero. Constraint set (6) ensures that the total subjob quantity of each job equals its constant total quantity. The difference between the unit processing times of each job is captured by a formulation used in constraint set (11). Constraint set (7) defines the total production quantities of the dummy jobs as 0. Constraint set (8) prevents multiple subjobs of the same job on the same machine. Constraints (9) specify that at most one subjob can be scheduled first on each machine. Set (10) ensures that jobs are properly linked on each machine: if a given job is processed on a given machine, a predecessor must exist on the same machine. Constraint set (11) controls the completion times of the jobs on the machines: if a job is assigned on a machine immediately after another job, the corresponding sequencing variable equals 1, and the completion time of the succeeding job must be greater than the completion time of the preceding job plus the setup time between the two jobs and the processing time of the succeeding job. This processing time is computed by multiplying the unit processing time by the production quantity of the job on the machine. If the sequencing variable equals 0, the big constant renders the constraint redundant. Sets (12) and (13) define completion times as 0 for dummy jobs and as nonnegative for regular jobs, respectively. Set (14) defines the maximum completion time. Set (15) defines the binary variables. Finally, set (16) defines the integer variables.
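In symbols, the completion-time constraint just described has the standard big-M form below. The symbol names (C for completion times on machine k, u for unit processing times, q for production quantities, s for setups, x for the sequencing binary, V for the big constant) are our reconstruction from the wording above, not necessarily the paper's original notation:

```latex
C_{jk} \ge C_{ik} + s_{ijk} + u_{jk}\,q_{jk} - V\left(1 - x_{ijk}\right)
```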

The optimal solution for the problem can be obtained by solving this MIP model with a solver. We coded this model in MPL, and CPLEX 11.0 was used as the solver. The model can solve a 2-machine, 6-job problem in which every job can be split into at most 3 subjobs. However, for the 4-machine, 6-job problem, the model cannot find the optimal solution within a reasonable elapsed time (one day). Because of this, additional constraints were added to the model to reduce the solution space. The resulting model, MIP2, explained in Section 3.2, is identical to the GAspLA that will be explained in Section 4.

3.2. MIP2: The Quantities of Subjobs Are Equal

In this model, each subjob of one main job includes an equal quantity of the order. The model also decides the optimal number of subjobs, so the content of this model is identical to the GAspLA that will be explained in Section 4.

To achieve equal subjob quantities in MIP2, constraint sets (17) and (18) are added to the MIP1 explained in the previous subsection. The following logic constructs constraint sets (17) and (18): (i) if there is a splitting operation, the production quantities of a job on any two machines to which it is assigned must be equal; (ii) if there is splitting, this equality must be satisfied; (iii) the splitting condition is added to the inequality.

Consider constraint sets (17) and (18).

The subjob quantities are integer; because of this integrality, a fractional difference (e.g., 0.1) between subjob quantities loses its importance and the constraint is satisfied. Otherwise, the big constant renders the constraint redundant.
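The equal-quantity requirement under integrality can be illustrated with a small helper. `equal_split` is a hypothetical illustration, not part of the MIP formulation: it splits a total order quantity into integer subjob quantities that are as equal as integrality allows.

```python
def equal_split(total_qty: int, n_subjobs: int) -> list:
    """Split an integer order quantity into n subjob quantities that are
    as equal as integrality allows: any two quantities differ by at most
    one unit, mirroring the integrality argument above."""
    base, rem = divmod(total_qty, n_subjobs)
    # the first `rem` subjobs receive one extra unit
    return [base + 1] * rem + [base] * (n_subjobs - rem)
```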

We coded this model in MPL, and CPLEX 11.0 was used to solve it. While the model can solve the problem with 2 machines and 6 jobs, the 4-machine, 6-job problem could not be solved even within one day. This circumstance forced us to develop a new model, MIP3, which is explained in Section 3.3.

3.3. MIP3: The Quantities of Subjobs Are Determined

In MIP3, the numbers of subjobs are determined before the model starts running. To stay close to the GAspLA, the number of subjobs is set randomly between 1 and the maximum number of subjobs for each individual job. The details of the model are as follows.
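This predetermination step can be sketched as follows; `predetermine_subjobs` is a hypothetical snippet, with the seed used only for reproducibility.

```python
import random

def predetermine_subjobs(max_subjobs, seed=None):
    """For MIP3: fix each job's subjob count before the model runs by
    drawing uniformly between 1 and that job's maximum number of subjobs.
    max_subjobs maps job id -> maximum number of subjobs."""
    rng = random.Random(seed)
    return {job: rng.randint(1, mx) for job, mx in max_subjobs.items()}
```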

The decision variables for the model are defined below; the objective function is given in (20), and the constraints in (21)–(29).

The objective (20) is to minimize the maximum completion time (makespan). Constraint set (21) ensures that every subjob is assigned to one machine and has one predecessor. The usage of dummy job 0 as a predecessor indicates that the corresponding subjob is the first job on its machine. Constraints (22) make sure that every subjob has at most one successor. Constraints (23) specify that at most one subjob can be scheduled first on each machine. Constraint set (24) ensures that jobs are properly linked on each machine: if a subjob is processed on a given machine, a predecessor must be processed on the same machine. Constraints (25) are used to calculate and control completion times: if a subjob is assigned on a machine immediately after another subjob, the corresponding sequencing variable equals 1, and the completion time of the succeeding subjob must be greater than or equal to the completion time of the preceding subjob plus the setup time between the two jobs and the processing time of the succeeding subjob, which is calculated by dividing the processing time of its job on that machine by the job's number of subjobs. If the sequencing variable equals 0, the big constant renders the constraint redundant. Constraints (26) state that the completion time of the dummy job 0 is zero, and constraints (27) ensure that completion times are nonnegative for regular jobs. Set (28) defines the maximum completion time. Set (29) defines the binary variables.
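In symbols, the completion-time constraint (25) described above can be reconstructed as follows, where subjob l of job j immediately follows subjob h of job i on machine k, n_j is the number of subjobs of job j, and V is the big constant. The symbols are our reconstruction from the wording, not necessarily the paper's original notation:

```latex
C_{jl} \ge C_{ih} + s_{ijk} + \frac{p_{jk}}{n_{j}} - V\left(1 - x_{ihjlk}\right)
```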

MIP3’s performance is better than that of the other models, and it gave optimal solutions for many of the problem sets, as will be shown in the computational results. We coded this model in MPL, and CPLEX 11.0 was used as the solver.

4. GAspLA: Hybrid Genetic-Local Search Algorithm for Job Splitting Property

4.1. Genetic Algorithm for Job Splitting Property

The genetic algorithm (GA) is a search technique based on the principles of genetics and natural selection. One of the important issues is the genetic representation (string of symbols) of each solution in a population. The string is referred to as chromosome and the symbols as genes. After generation of initial population and determination of the fitness function values for each chromosome, the GA manipulates the selection process by operations such as reproduction, crossover, and mutation. The algorithm was introduced in the 1970s by Holland [30]. As the searching technique of genetic algorithms (GAs) [31] became popular in the mid-1980s, many researchers started to apply this heuristic to scheduling problems.

The encoding must be designed to utilize the algorithm's ability to transfer information among chromosome strings efficiently and effectively [31]. In our encoding scheme, the permutation of jobs is represented through random keys: each gene holds a random number between 0 and 1, and these random keys give the relative order of the jobs on each machine. In addition, each chromosome also carries the number of subjobs of each job. A detailed description of the developed genetic algorithm is given in the following subsections.

The proposed local search algorithm, which will be explained in Section 4.2, is inserted into the fitness function calculation process of the genetic algorithm. The proposed hybrid genetic-local search algorithms are coded in C# according to the methods explained in the following subsections.

4.1.1. Encoding Scheme

A chromosome is represented as a string of random keys. Figure 1 illustrates a sample chromosome. We would like to note that this chromosome structure was used in our previous study [28], which proposed a genetic algorithm without local search. The first section of the string contains one part per job; each job part is further divided into one part per machine, and each machine part is divided into genes, where the number of genes for a machine is determined by the maximum number of subjobs of the related job. The sequence of jobs is determined by the random key numbers (generated between 0 and 1) of the genes. The second section of the string contains one gene per job; for each job, a random number is generated between one and the maximum number of subjobs to determine the number of subjobs. The chromosome structure of the example is shown in Figure 1. For this chromosome structure, we have 2 machines and 3 jobs, and the maximum numbers of subjobs for Job1, Job2, and Job3 are 3, 2, and 1, respectively. For Job1, because the number of subjobs in the second section of the chromosome is 2, the two smallest random numbers are selected in the first section among all the numbers generated for Job1. The selected values (0.2 and 0.3 for Job1) are shown in bold in Figure 1; thus, the first and second subjobs of Job1 will be processed on Machine1. Similarly, the number of subjobs is 2 for Job2, and its subjobs will be processed on Machine1 and Machine2. The number of subjobs is 1 for Job3, and it will be processed on Machine2.
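Our reading of this decoding step can be sketched in Python; `decode` is a hypothetical helper, a simplified sketch rather than the paper's C# implementation, and the key values in the usage below are illustrative rather than Figure 1's exact data.

```python
def decode(keys, n_subjobs, n_machines, max_subjobs):
    """Decode a random-key chromosome into per-machine job sequences.

    keys[j][m][s] is the random key of subjob slot s of job j on machine m;
    n_subjobs[j] is the second-section gene giving how many subjobs job j
    is split into. The n_subjobs[j] smallest keys of job j pick its machine
    assignments, and ascending key order on each machine gives the
    processing sequence."""
    per_machine = [[] for _ in range(n_machines)]
    for j, job_keys in enumerate(keys):
        # flatten this job's keys, remembering which machine each belongs to
        flat = [(job_keys[m][s], m)
                for m in range(n_machines)
                for s in range(max_subjobs[j])]
        for key, m in sorted(flat)[:n_subjobs[j]]:
            per_machine[m].append((key, j))
    # sort each machine's selected genes by key to obtain its job sequence
    return [[j for _, j in sorted(mach)] for mach in per_machine]
```

For instance, with keys placing Job1's two smallest values (0.2, 0.3) on Machine1, this yields two subjobs of Job1 and one of Job2 on Machine1 and one subjob each of Job2 and Job3 on Machine2, matching the kind of assignment the text describes.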

4.1.2. Fitness Function

As mentioned earlier, there are machine-dependent processing times and machine- and job sequence-dependent setup times: (i) p_jk: processing time of job j on machine k; (ii) s_ijk: machine-based sequence-dependent setup time on machine k when processing job j after having processed job i.

Consider, for example, a scheduling problem with three jobs and two machines: Table 1 gives the processing times for Machine1 and Machine2, and Tables 2 and 3 give the setup times of Machine1 and Machine2, respectively, for the mentioned jobs. According to the chromosome shown in Figure 1, Machine1 will process two subjobs of Job1 (Subjob1.1, Subjob1.2) and one subjob of Job2 (Subjob2.1), and Machine2 will process one subjob of Job2 (Subjob2.2) and one subjob of Job3 (Subjob3.1). The sequence of jobs on Machine1 is determined by the random key numbers (0.20, 0.26, and 0.30); arranging these random numbers in increasing order designates Machine1’s job sequence, so the sequence of subjobs on Machine1 is Subjob1.1, Subjob2.1, and Subjob1.2. If a job is split into subjobs, the processing time of a subjob on the selected machine is calculated by dividing the job's processing time on that machine by the number of subjobs. The completion time of Machine1 is 130 and the completion time of Machine2 is 78, so the makespan is 130. This chromosome's schedule and makespan can be seen in Figure 2. The population size determines the number of different chromosomes and fitness values.
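The fitness evaluation described above can be sketched as a makespan computation over the decoded machine sequences. The function and the data layout below are hypothetical simplifications (not the paper's C# code), assuming an initial setup measured from a dummy job 0, as in the MIP models.

```python
def fitness(machine_sequences, proc, setup, n_subjobs):
    """Makespan of a decoded chromosome. For each machine, walk its subjob
    sequence, accumulating the sequence-dependent setup plus the split
    processing time proc[m][j] / n_subjobs[j]; job 0 is a dummy predecessor
    supplying the initial setup. The fitness is the largest machine
    completion time."""
    completion = []
    for m, seq in enumerate(machine_sequences):
        t, prev = 0.0, 0  # start from dummy job 0 at time zero
        for j in seq:
            t += setup[m][prev][j] + proc[m][j] / n_subjobs[j]
            prev = j
        completion.append(t)
    return max(completion)
```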

4.1.3. Genetic Operators

After generating the initial population, selection, crossover, and mutation are iteratively used to search for the best solution.(i)Selection: chromosomes are selected into the mating pool based on the random selection method [32], in which moms and dads are randomly chosen from the population.(ii)Crossover: the crossover operator, which is applied according to the crossover rate, is a method for sharing information between chromosomes. Single-point crossover is used as the crossover operator in our algorithm: it randomly chooses the crossing point and exchanges the genes between two parents in order to create offspring. The crossover operator is applied to the first and second sections of the chromosome separately. Figure 3 illustrates this operation. In Figure 3, for the first section the crossing point is selected randomly at position seven. The child is created as follows. First, Parent1 passes its genes to the left of the crossover point (position seven) to Offspring1; in a similar manner, Parent2 passes its genes to the left of the same crossover point to Offspring2. Next, the genes to the right of the crossover point of Parent1 (Parent2) are copied to Offspring2 (Offspring1). For the second section, the crossing point is selected randomly at position one, and the same procedure is applied to create the offspring.(iii)Mutation: the mutation operator is used to prevent convergence to a local optimum. In our algorithm, mutation is performed as follows. Chromosomes to which the mutation operation will be applied are randomly selected from the population according to the mutation rate. The value of a randomly selected gene of the chromosome is replaced with a new random number. This operation is applied to all selected chromosomes. In this way, chromosomes with new schedules and makespan values can be obtained. The fitness value may become better or worse, or may not change, after applying this operator.
For the example in Figure 4, if the randomly selected chromosome is Parent1 and the randomly selected gene's key 0.54 is changed to 0.15 (also randomly generated for the mutation operation), the sequences on Machine1 and Machine2 are altered: because the selected values for Job1 become 0.20 and 0.15 (shown in bold in Figure 4), Machine1 will process one subjob of Job1 (Subjob1.1) and one subjob of Job2 (Subjob2.1), and Machine2 will process one subjob of Job1 (Subjob1.2), one subjob of Job2 (Subjob2.2), and one subjob of Job3 (Subjob3.1). This yields a new makespan of 127.5, which is better than the previous one.
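In sketch form, the two operators look as follows; both helpers are hypothetical simplifications operating on one chromosome section represented as a flat list of keys, and per the text, crossover would be applied to the first and second sections separately.

```python
import random

def single_point_crossover(parent1, parent2, point):
    """Single-point crossover on one chromosome section: offspring1 takes
    parent1's genes left of `point` and parent2's genes right of it;
    offspring2 is the mirror image."""
    off1 = parent1[:point] + parent2[point:]
    off2 = parent2[:point] + parent1[point:]
    return off1, off2

def mutate(chromosome, rng=random):
    """Replace one randomly chosen gene's key with a fresh random number,
    which may reorder the decoded schedule."""
    child = list(chromosome)
    child[rng.randrange(len(child))] = rng.random()
    return child
```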

4.2. Local Search

Integrating a local search technique within a GA generally generates more competitive results. The integrated dominance properties method was proposed by Chang and Chen [20]. In this method, the decision to exchange two jobs (within one machine or between machines) is made according to the result of calculations that are described in detail in the cited paper. While adapting this method to our GA, we designed a local search method based on integrated dominance properties. Our aim is to search all possible alternatives in the chromosome in order to obtain better processing and setup times for each job. The proposed local search method differs from the integrated dominance properties method in the following three respects.

(1) Job interchange: we only use setup times, instead of adjusted processing times, when interchanging two jobs that are processed on the same machine.

(2) Job exchange: when the jobs under consideration are not on the same machine, both setup times and processing times are considered in the calculations.

(3) During job interchanges or exchanges on the chromosome, the random numbers must also be redesigned. Hence, after the decision to interchange or exchange jobs is made, the random numbers must be redesigned to achieve the required sequence. A new approach is developed that changes the minimum number of random numbers on the chromosome. This process (calibration of random numbers) is also managed and coded in the proposed local search method.

All of the situations for local search have been considered and written in pseudocode, which is summarized in the Appendix. The proposed intra-machine (interchange) and inter-machine (exchange) job changes are explained in Section 4.2.1, and the calibration of random numbers is explained in Section 4.2.2. The notation for the pseudocode and for the explanations of local search is as follows.

n: number of selected genes from the chromosome, ordered first by machine and then by job sequence. Each individual gene contains a machine feature and a job feature. Table 4 shows the sample gene structure for an example with seven jobs and two machines; the structure also indicates the sequence of jobs on each machine.

(i) i and j: the gene numbers being compared to decide whether the jobs in these genes should be interchanged or exchanged.
(ii) Gene(i)Machine: the machine number of the ith gene. According to the structure shown in Table 4, the machine number of the 1st gene (Gene(1)Machine) is 0.
(iii) Gene(i)Job: the job number of the ith gene. According to the structure shown in Table 4, the job number of the 1st gene (Gene(1)Job) is 4.
(iv) S(a, b): setup time on the machine between job a and job b.
(v) P(j, m): processing time of job j on machine m.
(vi) CT_m: completion time of machine m.
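As an illustrative sketch of this gene structure (the field names are ours, and only the first gene's values are taken from Table 4; the remaining entries are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Gene:
    machine: int  # machine feature of the gene
    job: int      # job feature of the gene

# Genes are ordered first by machine and then by job sequence, so the
# list below says: machine 0 processes job 4 first, then job 1, ...
genes = [Gene(machine=0, job=4), Gene(machine=0, job=1),
         Gene(machine=1, job=2), Gene(machine=1, job=3)]
```

With this layout, Gene(i)Machine and Gene(i)Job in the pseudocode correspond to `genes[i-1].machine` and `genes[i-1].job`.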

4.2.1. Interchange and Exchange of Jobs

(i) Interchange of jobs: two cases must be considered within the intra-machine interchange: adjacent jobs interchange (see Figure 5) and nonadjacent jobs interchange (see Figure 6).

(I) Adjacent interchange: in order to define the algorithm better, the adjacent jobs interchange case is demonstrated in Figure 5. The arrows in Figure 5 show the setup time differences (among the stated genes) between the after-interchange and before-interchange situations. In Figure 5 and in the intra-machine, adjacent jobs interchange section of the pseudocode, the steps are as follows.

(1) If another job precedes Gene(i)Job on its machine in the before-interchange section of Figure 5, "a" represents the sum of setup time differences between the after-interchange and before-interchange situations for the stated blue arrows.
(2) If Gene(i)Job is the first job on the machine in the before-interchange section of Figure 5, "b" represents the sum of setup time differences between the after-interchange and before-interchange situations for the stated blue arrows.
(3) Either "a" or "b" must be equal to 0.
(4) If another job follows Gene(j)Job in Figure 5, "c" represents the setup time difference between the after-interchange and before-interchange situations for the stated red arrow.
(5) If the sum of a, b, and c is smaller than 0, a better completion time is obtained, and Gene(i)Job and Gene(j)Job are interchanged on the schedule.
(6) The random key numbers are then calibrated; item (i) of Section 4.2.2 describes the intra-machine interchange.
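The acceptance test of steps (1)-(5) can be sketched as follows; the setup-time matrix `S`, the function name, and the sequence layout are our assumptions. The sketch returns the combined setup-time difference, so a negative value means the swap shortens the schedule:

```python
def adjacent_interchange_delta(seq, i, S):
    """Setup-time change from swapping adjacent jobs seq[i] and seq[i+1]
    on one machine; S[a][b] is the setup time between jobs a and b."""
    before = after = 0.0
    if i > 0:  # predecessor arrows; otherwise seq[i] is first on the machine
        before += S[seq[i - 1]][seq[i]]
        after += S[seq[i - 1]][seq[i + 1]]
    before += S[seq[i]][seq[i + 1]]   # setup between the swapped pair
    after += S[seq[i + 1]][seq[i]]
    if i + 2 < len(seq):              # successor arrow ("c")
        before += S[seq[i + 1]][seq[i + 2]]
        after += S[seq[i]][seq[i + 2]]
    return after - before
```

A swap would be committed only when the returned delta is negative, after which the random keys would be recalibrated as in Section 4.2.2.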

(II) Nonadjacent interchange: this case is shown in Figure 6. The steps to decide whether to interchange Gene(i)Job and Gene(j)Job on the schedule are similar to those of the adjacent interchange. During the calculation of "a" and "b", the three blue arrows shown in Figure 6 must be summed, because new setup time differences come into existence between the after-interchange and before-interchange situations. The remaining steps are the same as in the adjacent interchange situation.

(ii) Inter-machine exchange is shown in Figure 7. In Figure 7 and in the inter-machine exchange section of the pseudocode, the steps are as follows.

(1) Calculate the completion times of Gene(i)Machine and Gene(j)Machine before the exchange; Cmax_before represents the maximum of these two values.
(2) If another job precedes Gene(i)Job on its machine, as shown in the before-exchange section of Figure 7, "a" represents the setup time difference between the after-exchange and before-exchange situations for the stated blue rectangles. (If Gene(i)Job is the first job on its machine, the second "a" of the pseudocode represents the sum of setup time differences between the after-exchange and before-exchange situations.)
(3) If another job follows Gene(i)Job on its machine, as shown in the before-exchange section of Figure 7, "b" represents the setup time difference between the after-exchange and before-exchange situations for the stated white rectangles.
(4) If another job precedes Gene(j)Job on its machine, as shown in the before-exchange section of Figure 7, "c" represents the setup time difference between the after-exchange and before-exchange situations for the stated yellow rectangles. (If Gene(j)Job is the first job on its machine, the second "c" of the pseudocode represents the sum of setup time differences between the after-exchange and before-exchange situations.)
(5) If another job follows Gene(j)Job on its machine, as shown in the before-exchange section of Figure 7, "d" represents the setup time difference between the after-exchange and before-exchange situations for the stated pink rectangles.
(6) Calculate the new completion times of Gene(i)Machine and Gene(j)Machine after the possible exchange; Cmax_after represents the maximum of these two values. The computation of Cmax_after is formulated in Figure 7.
(7) If Cmax_after is smaller than Cmax_before, a better completion time is obtained, and Gene(i)Job and Gene(j)Job are exchanged on the schedule.
(8) The random key numbers are then calibrated; item (ii) of Section 4.2.2 describes the inter-machine exchange.

4.2.2. Calibration of Random Numbers on the Chromosome

If we decide to interchange or exchange Gene(i)Job and Gene(j)Job, we need to calibrate the chromosome to reflect the new situation. To make the chromosome recognize the job changes, the following configurations were also integrated in the code. During the integration process, the positions of the random numbers are considered. In the considered chromosome, JobA is split into 2 subjobs, JobB is not split, and JobC is split into 2 subjobs. The number of subjobs has been determined randomly, as pointed out before. The chromosomes of Figures 8 and 9 do not show the section giving the number of subjobs.

(i) Intra-machine interchange: Figure 8 shows the situation before and after an interchange of jobs on the same machine. The selected genes and the nonselected genes together constitute a chromosome. The realized interchanges are shown in red letters in the after-interchange section of Figure 8. The steps of interchanging JobA and JobC for the example of Figure 8 are the following.

(I) The random numbers of the jobs are exchanged (0.40 for JobA and 0.62 for JobC are changed to 0.40 for JobC and 0.62 for JobA).
(II) 0.40 is smaller than 0.62; thus, after the interchange there is no need to change the other random numbers of JobC, because the current situation already guarantees that JobC selects Machine0.
(III) 0.62 is bigger than 0.40; thus, after the interchange, the other random numbers of JobA inside the "nonselected genes from chromosome" that are smaller than 0.62 must be changed. For example, 0.41 is smaller than 0.62, so a new random number bigger than 0.62 is randomly generated (0.96) and substituted for 0.41. Another random number of JobA inside the "nonselected genes from chromosome" is 0.61, which is also smaller than 0.62; a new random number bigger than 0.62 is randomly generated (0.75) and substituted for 0.61. Thus, JobA still selects Machine0.
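The regeneration rule of step (III) can be sketched as follows; the function name and the flat key-list layout are our assumptions:

```python
import random

def calibrate_keys(keys, positions, new_key):
    """Regenerate any key of the moved job (at the given positions among
    the nonselected genes) that is smaller than new_key, so the job
    keeps selecting the intended machine after the interchange."""
    for idx in positions:
        while keys[idx] < new_key:       # redraw until the key exceeds new_key
            keys[idx] = random.random()
    return keys
```

For the example above, calibrating JobA's nonselected keys [0.41, 0.61] against the new key 0.62 replaces both with values above 0.62 while leaving every other gene untouched, which is exactly the "minimum relocation" property claimed for the method.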

(ii) Inter-machine exchange: Figure 9 shows the situation before and after an exchange of jobs on two different machines. The realized exchanges are shown in red letters in the after-exchange section of Figure 9. The steps of exchanging JobA and JobC for the example of Figure 9 are as follows; here, the considered key of JobA is its maximum random number value on its machine, and likewise for JobC.

(I) The random numbers of JobA and JobC are exchanged, and the random numbers of their other selected genes are exchanged as well.
(II) After the exchange, it must be checked whether there is a random number inside the "nonselected genes from chromosome" for JobA that is smaller than the random number of JobA, which is equal to 0.84, or than the other selected value of JobA, which is equal to 0.34. For example, 0.41 is smaller than 0.84; a new random number bigger than 0.84 is randomly generated (0.95) and substituted for 0.41.
(III) Between the selected genes and the nonselected genes of the chromosome, the jobs of the JobA and JobC genes are exchanged as well.
(IV) It must also be checked whether there is a random number inside the "nonselected genes from chromosome" for JobC that is smaller than the random number of JobC, which is equal to 0.40, or than the other selected value of JobC, which is equal to 0.62. For example, 0.61 is smaller than 0.62; a new random number bigger than 0.62 is randomly generated (0.87) and substituted for 0.61.

5. Implementation: GAspLA-to-MIP Method (GAspLAMIP)

In this section, we propose a GAspLA-based mixed-integer programming approach for the studied problem. This implementation helps to reduce the elapsed time of MIP3, and it also shows that our hybrid genetic-local search algorithm (GAspLA) gives optimal results for the studied datasets, as discussed in Section 6.

As mentioned, we obtain the final schedule using GAspLA. Then, according to the sequencing information of the subjobs on each machine, we set the binary variable values in the MIP3 formulation as initial values. For example, according to Table 5, there are 4 machines and 6 jobs. Each machine's first job must be the dummy job. Accordingly, the first row of Table 5 indicates that Machine1 will process subjob31, and the second and third rows indicate that Machine2's sequence will be subjob11 followed by subjob41. The schedules of the other machines can be interpreted similarly.

In order to reduce the search space of MIP3, the schedule produced by GAspLA is used; running MIP3 with the initial solution of GAspLA accelerates reaching the solution. MIP3 was proposed in Section 3.3 and GAspLA in Section 4. The steps of GAspLAMIP are as follows.

Step 1. Run the GAspLA algorithm.
Step 2. According to the final sequence information from GAspLA, specify the values of the binary variables that MIP3 will use.
Step 3. MIP3 uses these binary variables as an initial solution.
Step 4. Send the binary variable information to the MIP3 formulation.
Step 5. Solve MIP3 to find an optimal solution with the given initial solution.
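Steps 2-4 amount to translating each machine's subjob sequence into fixed binary values for the solver. A minimal sketch, assuming sequencing variables indexed as (predecessor, successor, machine) and a dummy job labeled 0 that opens every machine's sequence (the indexing convention is our assumption, not MIP3's exact notation):

```python
def mip_start_from_schedule(schedules, dummy=0):
    """Map per-machine subjob sequences from GAspLA to initial values
    for binary sequencing variables: {(pred, succ, machine): 1}.

    schedules: dict mapping machine -> ordered list of subjobs."""
    start = {}
    for machine, sequence in schedules.items():
        pred = dummy                          # dummy job precedes the first subjob
        for subjob in sequence:
            start[(pred, subjob, machine)] = 1  # subjob follows pred on machine
            pred = subjob
    return start
```

The resulting dictionary would then be handed to the solver as a warm start (e.g., a MIP start in CPLEX), restricting the search to schedules at least as good as GAspLA's.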

The results of the implementation will be discussed in the next section.

6. Experimental Design and Computational Results

The proposed algorithm is coded in the C# language and run on a computer with an Intel Core i7-3612QM processor running at 2.10 GHz. Test samples are provided by SchedulingResearch 2005 [4].

The parameter configuration of GALA, which was proposed by Yilmaz Eroglu et al. [21] and is the foundation of GAspLA, was done by Design of Experiments (DoE). The scheduling environment of GALA differs from that of GAspLA only in the job splitting property. We also compare against GALA and, to avoid any advantage from parameter configuration, we run GALA, GAspLA, and GAspLAMIP with the same parameters, obtained by the DoE approach detailed in the next subsection.

6.1. Experimental Results and Parameter Values

Makespan values and elapsed times were recorded for the selected 4-machine, 20-job problem structure every 100 generations up to 1000 generations to determine the appropriate number of generations. Figure 10 provides the average makespan values and elapsed times of five replications for the analyzed generations with the selected parameters. Figure 10 shows that after 500 generations there is no improvement in the makespan; thus, the number of generations is set to 500.

The 4-machine, 20-job problem structure was chosen to determine the best parameter setting by DoE. The levels of the three factors are listed in Table 6. Five replications were conducted for each combination, and the makespan values were calculated by running the program for 500 generations. Table 7 shows the estimates obtained through the regression analysis, including the significant factors (p < 0.05). The results indicate that the probability of crossover (Pc) and the probability of mutation (Pm) are significant, with the low p values shown in Table 7. The interaction plot in Figure 11 and the low p value of the Pc x Pm term in Table 7 indicate that there is a significant interaction between Pc and Pm. According to Figures 11 and 12, when Pc is set to 1 and Pm is set to 0.2, the algorithm yields better solution quality. The population size is set to 100 according to the main effects plot in Figure 12.

The obtained results are evaluated in two subsections. In the first, the results of the MIP model proposed by Rabadi et al. [18] and the results of GALA proposed by Yilmaz Eroglu et al. [21] are analyzed for the nonjob splitting situation. For the job splitting situation, the results of the MIP3 model proposed in Section 3.3, of the GAspLA algorithm proposed in Section 4, and of the implementation (GAspLAMIP) proposed in Section 5 are explicated in Section 6.2. In Section 6.3, detailed results of solving some literature problems using the proposed GAspLA are presented.

6.2. The Results of MIP, GAspLA and GAspLAMIP

There are 15 different problem instances in the SchedulingResearch 2005 dataset taken from the literature [4]. For the combinations shown in Table 8, the first instances of these datasets are used, and the following cases are studied and reported.

(i) For the nonjob splitting situation, the MIP model proposed by Rabadi et al. [18] is coded in MPL and solved using CPLEX 11.0. The makespan values and elapsed times are reported in the nonjob splitting, MIP0 section of Table 8.
(ii) For the nonjob splitting situation, the makespan values and elapsed times of the GALA algorithm proposed by Yilmaz Eroglu et al. [21] are reported in the nonjob splitting, GALA section of Table 8.
(iii) For the job splitting situation, the proposed model discussed in Section 3.3 was coded in MPL and solved using CPLEX 11.0. The makespan values and elapsed times are reported in the job splitting, MIP3 section of Table 8.
(iv) For the job splitting situation, the makespan values and elapsed times of the proposed GAspLA algorithm are reported in the job splitting, GAspLA section of Table 8.
(v) For the job splitting situation, the makespan values and elapsed times of GAspLAMIP, discussed in Section 5, are reported in the job splitting, GAspLAMIP section of Table 8.

The results of the models whose elapsed times exceed one day (86,400 seconds) are not reported in Table 8. To point out the improvement achieved by the proposed algorithm for the problem with job splitting, the nonjob splitting results are also reported. If the analyzed problem has the job splitting property, then it is better to use the proposed algorithms that include job splitting. The results in Table 8 can be interpreted as follows.

For the nonjob splitting situation, the MIP0 model proposed by Rabadi et al. [18] found the optimal solution for all the problem combinations in Table 8. The GALA algorithm proposed by Yilmaz Eroglu et al. [21] also solved all of these instances optimally. For the job splitting situation, the proposed MIP3 model introduced in Section 3.3 managed to find the optimal solutions for 5 combinations (6 jobs with 2 and 4 machines; 7 jobs with 2 and 4 machines; 8 jobs with 4 machines) in acceptable CPU time. The proposed GAspLA algorithm finds the optimal results on these points in a couple of seconds. For the job splitting situation, MIP3 and GAspLA find the same or better results for these combinations as compared with the nonjob splitting situation. In addition, the implementation of GAspLAMIP, which uses the GAspLA result as the initial solution of MIP3, solves 9 combinations (6 jobs with 2 and 4 machines; 7 jobs with 2 and 4 machines; 8 jobs with 2 and 4 machines; 9 jobs with 2 and 4 machines; 10 jobs with 2 machines) optimally in acceptable CPU time. As can be seen in Table 8, the elapsed time of GAspLAMIP is less than that of MIP3. The optimal makespan values of GAspLAMIP are the same as the results of GAspLA for the studied combinations. The results for these combinations demonstrate that the makespan values of GAspLA are close to the lower bounds. In the next subsection, the performance of GAspLA for the other combinations is shown in detail.

6.3. The Results of GAspLA for Some Literature Problems

To test the proposed GAspLA algorithm, the SchedulingResearch 2005 [4] dataset was used. The ACO algorithm proposed by Arnaout et al. [19], GALA proposed by Yilmaz Eroglu et al. [21], and our new approach GAspLA were compared on this dataset.

To the best of our knowledge, there is no dataset in the literature for the problem with job splitting. The original dataset does not contain the number of splits; for that reason, the number of subjobs was randomly generated between 1 and 3. For all instances, the processing and setup times are uniformly distributed between 50 and 100; such datasets are called balanced. For small instances, the combinations were 2 machines with 6, 7, 8, 9, 10, and 11 jobs; 4 machines with 6, 7, 8, 9, 10, and 11 jobs; 6 machines with 8, 9, 10, and 11 jobs; and 8 machines with 10 and 11 jobs. For large instances, the combinations were 4 machines with 20 jobs; 6 machines with 20 and 40 jobs; 8 machines with 20 and 60 jobs; and 10 machines with 20 and 40 jobs. Each machine-job combination was tested with 15 problem instances, and the average deviation from a lower bound (LB) was calculated. The method for calculating LB is explained in the paper of Arnaout et al. [19], and the same LB values are used.

The percent deviation from the lower bound is calculated as

Deviation (%) = 100 x (Cmax - LB) / LB,

where Cmax is the makespan value obtained by the algorithm and LB is the lower bound for the concerned instance.

Figure 13 summarizes the relative deviation of each algorithm from LB for the small problem combinations. It is clear from Figure 13 that GAspLA outperformed the others on the small problem sets. Hence, if the jobs can be split into subjobs, our new approach gives better results.

Figure 14 summarizes the relative deviation of each algorithm from LB for the large problem combinations. It is clear from Figure 14 that GAspLA outperformed the ACO on every combination. GAspLA outperformed GALA on the 6-machine, 20-job and 8-machine, 20-job combinations, and it finds results similar to GALA's on the other points. As pointed out before, the number of subjobs for GAspLA can be selected between 1 and 3; when splitting does not yield a better result, GAspLA decides not to split, so the number of subjobs is 1. This is the main reason for obtaining results similar to those of GALA, which is the nonsplitting version of this study.

7. Conclusions and Future Direction of Research

This paper presented mixed integer programming (MIP) models, a hybrid genetic-local search algorithm (GAspLA), and an implementation (GAspLAMIP) for the unrelated parallel machine scheduling problem with machine- and sequence-dependent setup times and the job splitting property, with the objective of minimizing the makespan. In this problem, it is assumed that a job can be split into a number of subjobs and that these subjobs are processed on the unrelated parallel machines independently. Very little research appears to have been undertaken on this general model. The proposed hybrid genetic-local search algorithms are coded in the C# language according to the proposed methods.

The first contribution of this paper is the capability of the developed algorithms to perform splitting and scheduling simultaneously with a variable number of subjobs; to the best of our knowledge, no work has been published on an algorithm with these properties. The proposed hybrid genetic-local search algorithm (GAspLA) is novel in its use of random key numbers to determine the number of subjobs for each job and to manage their sequencing on the machines. Methods are developed to overcome the problem of managing the random key numbers according to the desired job sequence, which arises during the local search operation implemented within the genetic algorithm. The developed algorithms achieve the adaptation of the local search results into the genetic algorithm with a minimum relocation of genes' random key numbers. This is the second contribution of the paper. The third contribution is the developed MIP models, which are coded in MPL and solved using the CPLEX 11.0 solver. To the best of our knowledge, there is no prior MIP approach for the problem type studied in this research. The results reported here indicate that the mentioned problem can be solved using the developed MIP models. Three mathematical models are presented. In MIP1 and MIP2, the number of subjobs of each job is determined by the model, between one and the maximum number of subjobs. In MIP3, the number of subjobs is determined randomly between one and the maximum number of subjobs beforehand and is given as input data to the model. In MIP1, the quantities of the subjobs need not be equal; in MIP2 and MIP3, the quantities of the subjobs are equal. The fourth contribution of this paper is an implementation that combines the results of GAspLA with the developed MIP formulation. This implementation (GAspLAMIP) also demonstrates the optimality of the results of GAspLA for the studied combinations.
We used Design of Experiments to select the parameter values for the hybrid genetic-local search algorithms. The proposed methods are tested on a set of small and large problems taken from the literature (SchedulingResearch 2005 [4]), and the computational results validate the effectiveness of the proposed algorithm.

The algorithms suggested in this paper may be useful for customer-oriented market environments such as the weaving process of the textile industry, from which this study originated. Drilling in printed circuit board (PCB) manufacturing and dicing in semiconductor wafer manufacturing have similar production environments.

The SchedulingResearch 2005 [4] database contains balanced, setup-time-dominant, and processing-time-dominant datasets. In this research, we made the comparisons using the balanced datasets; running the algorithm on the remaining datasets might give interesting results. In this research, local search is applied in each generation; determining conditions for applying local search may accelerate the algorithm. In order to get closer to the real system, machine eligibility could be included in the algorithms. Generating hyperheuristics through the use of different metaheuristics' strengths may be a worthwhile avenue of research. Application of the proposed algorithm to multiobjective fitness functions would be a novel research topic. Other constraints, such as priority conditions, could be incorporated into future research. Other parallel machine problems, such as batch-processing machines, also seem interesting.

Appendix

Here we give pseudocode for the local search. For brevity only the key steps are detailed.

C# notation is used.

See Algorithm 1.

For all generations
 For all chromosomes in the population
  For i = 1 to n - 1
   For j = i + 1 to n
    If Gene(i)Machine == Gene(j)Machine → (intra-machine interchange)
     If j == i + 1 → (adjacent jobs) (Figure 5)
      a = 0; b = 0; c = 0
      If i > 1 & Gene(i)Machine == Gene(i - 1)Machine
       a = setup time differences (after - before) for the blue arrows → ("a" in Figure 5)
      If i == 1 | Gene(i)Machine != Gene(i - 1)Machine → (Gene(i)Job is the first job on Gene(i)Machine)
       b = setup time differences (after - before) for the blue arrows → ("b" in Figure 5)
      If j < n & Gene(j)Machine == Gene(j + 1)Machine
       c = setup time difference (after - before) for the red arrow → ("c" in Figure 5)
      If a + b + c < 0
       Interchange Gene(i)Job and Gene(j)Job on the schedule
       Calibrate random key numbers → (Section 4.2.2, item (i))
     If j > i + 1 → (non-adjacent jobs) (Figure 6)
      a = 0; b = 0; c = 0
      If i > 1 & Gene(i)Machine == Gene(i - 1)Machine
       a = sum of the three setup time differences (after - before) shown by the blue arrows in Figure 6
      If i == 1 | Gene(i)Machine != Gene(i - 1)Machine → (Gene(i)Job is the first job on Gene(i)Machine)
       b = sum of the setup time differences (after - before) shown by the blue arrows in Figure 6
      If j < n & Gene(j)Machine == Gene(j + 1)Machine
       c = setup time difference (after - before) for the red arrow
      If a + b + c < 0
       Interchange Gene(i)Job and Gene(j)Job on the schedule
       Calibrate random key numbers → (Section 4.2.2, item (i))
    If Gene(i)Machine != Gene(j)Machine → (inter-machine exchange) (Figure 7)
     a = 0; b = 0; c = 0; d = 0
     Cmax_before = Max(CT_Gene(i)Machine, CT_Gene(j)Machine)
     If i > 1 & Gene(i)Machine == Gene(i - 1)Machine
      a = setup time difference (after - before) for the blue rectangles
     If i == 1 | Gene(i)Machine != Gene(i - 1)Machine → (Gene(i)Job is the first job on Gene(i)Machine)
      a = setup time difference (after - before) for the blue rectangles
     If Gene(i)Machine == Gene(i + 1)Machine
      b = setup time difference (after - before) for the white rectangles
     If Gene(j)Machine == Gene(j - 1)Machine
      c = setup time difference (after - before) for the yellow rectangles
     If Gene(j)Machine != Gene(j - 1)Machine → (Gene(j)Job is the first job on Gene(j)Machine)
      c = setup time difference (after - before) for the yellow rectangles
     If j < n & Gene(j)Machine == Gene(j + 1)Machine
      d = setup time difference (after - before) for the pink rectangles
     CT'_Gene(i)Machine = CT_Gene(i)Machine + a + b + P(Gene(j)Job, Gene(i)Machine) - P(Gene(i)Job, Gene(i)Machine)
     CT'_Gene(j)Machine = CT_Gene(j)Machine + c + d + P(Gene(i)Job, Gene(j)Machine) - P(Gene(j)Job, Gene(j)Machine)
     Cmax_after = Max(CT'_Gene(i)Machine, CT'_Gene(j)Machine)
     If Cmax_after < Cmax_before
      Exchange Gene(i)Job and Gene(j)Job on the schedule
      Calibrate random key numbers → (Section 4.2.2, item (ii))

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.