Research Article  Open Access
Pisut Pongchairerks, "A Two-Level Metaheuristic Algorithm for the Job-Shop Scheduling Problem", Complexity, vol. 2019, Article ID 8683472, 11 pages, 2019. https://doi.org/10.1155/2019/8683472
A Two-Level Metaheuristic Algorithm for the Job-Shop Scheduling Problem
Abstract
This paper proposes a novel two-level metaheuristic algorithm, consisting of an upper-level algorithm and a lower-level algorithm, for the job-shop scheduling problem (JSP). The upper-level algorithm is a novel population-based algorithm developed to be a parameter controller for the lower-level algorithm, while the lower-level algorithm is a local search algorithm searching for an optimal schedule in the solution space of parameterized-active schedules. The lower-level algorithm’s parameters controlled by the upper-level algorithm consist of the maximum allowed length of idle time, the scheduling direction, the perturbation method to generate an initial solution, and the neighborhood structure. The proposed two-level metaheuristic algorithm, as the combination of the upper-level algorithm and the lower-level algorithm, can thus adapt itself to every single JSP instance.
1. Introduction
Scheduling problems generally involve assigning jobs to machines at particular times. This paper focuses on the job-shop scheduling problem (JSP) with the objective of minimizing the total length of the schedule, which is one of the hard-to-solve scheduling problems. The JSP is an NP-hard optimization problem [1, 2] well known in both academic and practical areas. To deal with the JSP and related problems, many approximation algorithms have been developed based on metaheuristic algorithms [3–8]. In addition, some methods such as [9] have been presented for the purpose of reducing the solution space of the JSP.
To solve the JSP, this paper develops a two-level metaheuristic algorithm in which the upper-level algorithm controls the lower-level algorithm’s input parameters, and the lower-level algorithm acts as a local search algorithm to search for an optimal schedule. The purpose of the upper-level algorithm is to iteratively adapt the input-parameter values of the lower-level algorithm, so that the lower-level algorithm is well fitted for every single instance. In general, a mechanism in which an upper-level algorithm controls a lower-level algorithm’s input parameters is classified as adaptive parameter control based on the definition given by [10]. The concept of using a metaheuristic algorithm to control the input parameters of another metaheuristic algorithm, known as a meta-evolutionary algorithm, has been applied in many different studies such as [11–15].
In detail, this paper hereafter refers to the upper-level algorithm and the lower-level algorithm in the proposed two-level metaheuristic algorithm as UPLA and LOLA, respectively. LOLA is a local search algorithm exploring a solution space of parameterized-active schedules (also known as hybrid schedules), where each parameterized-active schedule [16, 17] is decoded from an operation-based permutation [18–20]. LOLA has four input parameters whose values must be assigned before it is executed, i.e., the maximum allowed length of idle time for constructing parameterized-active schedules, the scheduling direction for constructing schedules, the perturbation method to generate an initial solution, and the operator combination used to generate a neighborhood structure. The values of these four input parameters could be adjusted and inputted by a human user; in the proposed two-level algorithm, however, this duty belongs to UPLA.
UPLA is a population-based metaheuristic algorithm searching in the real-number search space; from this point of view, it is similar to the particle swarm optimization [21], differential evolution [22], firefly [23], cuckoo search [24], and artificial fish-swarm [25, 26] algorithms. Unlike these algorithms, however, UPLA’s function is not to find problem solutions directly; instead, it is designed to be an input-parameter controller specifically for LOLA. At the initial iteration, UPLA starts with a population of combinations of LOLA’s input-parameter values. All input-parameter values in each combination are represented as real numbers; most of them must be decoded from real numbers into other forms understandable by LOLA. At each iteration, UPLA tries to improve its population by using the feedback returned from LOLA. With the support of UPLA, LOLA is upgraded from a local search algorithm to an iterated local search algorithm. Moreover, the combination of UPLA and LOLA results in a two-level algorithm that can adapt itself to every single JSP instance.
The remainder of this paper is structured as follows. Section 2 provides the preliminary knowledge relevant to this research. Section 3 presents the proposed lower-level algorithm (LOLA), including its decoding procedure for generating solutions. Section 4 presents the proposed upper-level algorithm (UPLA), including its decoding procedure for generating LOLA’s input-parameter values. Section 5 then evaluates the performance of the proposed two-level metaheuristic algorithm on well-known JSP instances. Finally, Section 6 provides a conclusion.
2. Preliminaries
The job-shop scheduling problem (JSP) comes with a given set of n jobs {J_{1}, J_{2}, ..., J_{n}} and a given set of m machines {M_{1}, M_{2}, ..., M_{m}}. The jobs arrive before or at the schedule’s start time (i.e., time 0), and the machines are all available at the schedule’s start time as well. Each job J_{i} consists of an unchangeable sequence of m operations O_{i,1}, O_{i,2}, ..., O_{i,m}. Thus, in order to process the job J_{i}, the operation O_{i,1} must be finished before the operation O_{i,2} can start, the operation O_{i,2} must be finished before the operation O_{i,3} can start, and so on. Moreover, each operation must be processed on a preassigned machine with a predetermined processing time. Each machine can process at most one operation at a time, and it cannot be interrupted while processing an operation. The objective of the JSP in this paper is to find a feasible schedule of the jobs on the machines that minimizes the makespan. Note that the makespan means the total length of the schedule, i.e., the total amount of time required to complete all jobs. Good reviews of the JSP are found in [1, 2, 27].
Schedules in the JSP can be constructed in the forward or backward (reverse) direction. A schedule is defined as a forward schedule if every job J_{i} (where i = 1, 2, ..., n) in the schedule is done by starting from the first operation O_{i,1} and ending at the last operation O_{i,m}. On the contrary, a schedule is defined as a backward schedule if every job J_{i} (where i = 1, 2, ..., n) in the schedule is done backward by starting from the last operation O_{i,m} and ending at the first operation O_{i,1}. Backward scheduling is commonly used for scheduling problems with a due-date criterion. However, it also benefits the JSP with a makespan criterion because some instances can be solved to optimality more simply in the backward direction than in the forward direction. A backward schedule can simply be constructed by reversing the directions of the precedence constraints of all operations and then allocating jobs to machines in the same way as constructing a forward schedule; after that, the schedule must be turned back to front in order to make it satisfy the original precedence constraints. Uses of the backward scheduling direction are found in published articles, e.g., [7, 28–31].
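The final back-to-front step of backward scheduling can be sketched as follows. This is an illustrative Python sketch, not the paper’s implementation; the helper name `mirror_schedule` and the schedule representation (a dict mapping operations to (start, finish) times) are assumptions.

```python
def mirror_schedule(reversed_schedule):
    """Turn a schedule built for the precedence-reversed problem back to front.

    `reversed_schedule` maps each operation to its (start, finish) times as
    obtained by forward-scheduling the reversed instance. After mirroring,
    the last-finished operation becomes the first-started one, and the
    schedule starts at time 0 (hypothetical helper for illustration).
    """
    makespan = max(finish for _, finish in reversed_schedule.values())
    return {op: (makespan - finish, makespan - start)
            for op, (start, finish) in reversed_schedule.items()}

# A one-machine toy example: operations scheduled at (0, 3) and (3, 5) in the
# reversed problem map to (2, 5) and (0, 2) after mirroring, keeping makespan 5.
mirrored = mirror_schedule({"a": (0, 3), "b": (3, 5)})
```

The mirroring preserves the makespan and all machine/precedence feasibility, which is why the reversed instance can be scheduled with the ordinary forward procedure first.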
All feasible schedules can be classified into three classes, i.e., semi-active schedules, active schedules, and non-delay schedules [16, 19, 32]. A feasible schedule is defined as a semi-active schedule if no operation can be started earlier without changing the operation sequence. A semi-active schedule is defined as an active schedule if no operation can start earlier without delaying another operation or violating the precedence constraints. Finally, an active schedule is defined as a non-delay schedule if no machine is kept idle when it can start processing an operation. Thus, the solution space of active schedules is a subset of the solution space of semi-active schedules, and the solution space of non-delay schedules is a subset of the solution space of active schedules. The solution space of active schedules is surely dominant over the solution space of semi-active schedules since it is smaller and also guaranteed to contain an optimal schedule. The solution space of non-delay schedules, which is the smallest solution space, may not contain an optimal schedule.
In addition to the three solution spaces given above, the solution space of parameterized-active schedules, or hybrid schedules [16, 17], is a subset of the solution space of active schedules and is not smaller than the solution space of non-delay schedules. By definition, a parameterized-active schedule is an active schedule in which no machine runs idle for more than a maximum allowed length of idle time. In [16], the maximum allowed length of idle time is controlled by a tunable parameter δ. At the extremes, the GA of [16] with δ = 1 explores the solution space of active schedules, while the GA with δ = 0 explores the solution space of non-delay schedules. Because δ can be adjusted to any real number between 0 and 1, this parameter thus controls the algorithm’s solution space between non-delay schedules and active schedules. Some variants of the hybrid scheduler of [16], modified for the purpose of simplification, can be found in published articles such as [15, 31, 33, 34].
Since the proper value of δ is problem-dependent, researchers have tried to control this tunable parameter in many different ways. Some metaheuristic algorithms such as [16, 33] use δ with fixed values. On the other hand, some other metaheuristic algorithms such as [34, 35] adjust the δ values during their evolutionary processes; thus, their methods of changing the δ values belong to the classification of self-adaptive parameter control based on the definition given by [10]. In addition, some metaheuristic algorithms adjust the δ values based on mathematical functions of another parameter (or other parameters); for example, the local search algorithms of [31] adjust the δ value based on the iteration’s index. The concept of a two-level metaheuristic algorithm in which the upper-level algorithm controls the input parameters, including δ, for the lower-level algorithm has been found in [15]. Hereafter, the parameter δ will be called the acceptable idle-time limit in this paper.
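As a concrete illustration of how δ narrows the set of candidate operations, the following sketch filters schedulable operations by the threshold s + δ(f − s), where s and f are the minimum earliest start and finish times. The helper name `eligible_operations`, the dict representation, and the time values are illustrative assumptions.

```python
def eligible_operations(schedulable, delta):
    """Filter schedulable operations by the acceptable idle-time limit delta.

    `schedulable` maps each operation to its (earliest_start, earliest_finish)
    pair. Only operations that can start no later than s + delta * (f - s)
    remain eligible, where s (resp. f) is the minimum earliest start (resp.
    finish) time over all schedulable operations.
    """
    s = min(es for es, _ in schedulable.values())
    f = min(ef for _, ef in schedulable.values())
    cutoff = s + delta * (f - s)
    return {op for op, (es, _) in schedulable.items() if es <= cutoff}

# Made-up example: s = 0 and f = 3 here, so delta = 0 keeps only the
# operation starting at time 0 (non-delay behavior), while delta = 1
# admits every operation that can start by time 3 (active behavior).
ops = {"O11": (0, 4), "O21": (2, 3), "O31": (5, 6)}
```

Intermediate δ values interpolate between the two extremes, which is exactly how the parameter trades off solution-space size against the guarantee of containing an optimum.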
3. The Proposed Lower-Level Algorithm
LOLA, which is the lower-level algorithm in the proposed two-level algorithm, acts as a local search algorithm. It is similar to other local search algorithms in its basic framework [36–39]; in addition, it generates neighbor solutions based on common swap and insert operators [31, 40, 41]. However, LOLA has three specific features that differ from general local search algorithms, as follows:
(i) LOLA transforms an operation-based permutation [18–20] into a parameterized-active schedule (or hybrid schedule) via Algorithm 1 (see Section 3.1).
(ii) LOLA allows the values of its input parameters to be adjusted, so that it can be best fitted for every single instance. These input parameters are the acceptable idle-time limit, the scheduling direction, the perturbation method, and the neighborhood structure. In this paper, the duty of adjusting the proper parameter values for LOLA belongs to UPLA.
(iii) Once LOLA is combined with UPLA, it uses the perturbation method selected by UPLA to generate its initial solution. By the perturbation method supported from UPLA, LOLA is upgraded from a local search algorithm to an iterated local search algorithm, which can escape from a local optimum. Reviews of iterated local search algorithms can be found in many articles, e.g., [42].
Section 3.1 presents Algorithm 1, the procedure of decoding an operation-based permutation into a parameterized-active schedule used by LOLA. A parameterized-active schedule generated by Algorithm 1 can be a forward parameterized-active schedule (i.e., a parameterized-active schedule constructed in the forward scheduling direction) or a backward parameterized-active schedule (i.e., a parameterized-active schedule constructed in the backward scheduling direction) depending on UPLA’s selection. Section 3.2 then presents Algorithm 2, the procedure of LOLA.
3.1. Decoding Procedure to Generate Parameterized-Active Schedules Used by LOLA
For an n-job/m-machine JSP instance, an operation-based permutation [18–20] is a permutation with repetition of the numbers 1, 2, ..., n, where each number occurs m times. As an example, the permutation (3, 2, 2, 1, 3, 1) is an operation-based permutation representing a schedule for a 3-job/2-machine JSP instance. The procedure of transforming an operation-based permutation into a parameterized-active schedule used by LOLA is given in Algorithm 1. As mentioned, a parameterized-active schedule decoded from an operation-based permutation by Algorithm 1 can be constructed in either the forward direction or the backward direction as two options.
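The occurrence-counting rule that turns a permutation into an ordered list of operations can be sketched as follows; the tuple (i, j) stands for operation O_{i,j}, and the function name is a hypothetical choice.

```python
def priority_order(permutation):
    """Map an operation-based permutation to a priority-ordered operation list.

    The j-th occurrence (from the left) of job number i denotes operation
    O_{i,j}, represented here as the tuple (i, j); the left-to-right order
    of the permutation gives the descending order of priorities.
    """
    seen = {}
    order = []
    for job in permutation:
        seen[job] = seen.get(job, 0) + 1   # count this job's occurrences so far
        order.append((job, seen[job]))
    return order

# (3, 2, 2, 1, 3, 1) -> O_{3,1}, O_{2,1}, O_{2,2}, O_{1,1}, O_{3,2}, O_{1,2}
# in descending priority, matching the example in the text.
```

Because every permutation with the correct multiplicities decodes to a valid priority order, swap and insert moves on the permutation always yield feasible neighbor solutions.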
In brief, Algorithm 1 starts by receiving an operation-based permutation, an acceptable idle-time limit δ, and a scheduling direction in Step 1. If the scheduling direction just received is backward, then Step 1 reverses the order of all members in the operation-based permutation, and Step 2 reverses the precedence relations of all operations of each job. Step 3 then transforms the operation-based permutation into the priority order of all operations. Based on the priority order of operations and the acceptable idle-time limit, a parameterized-active schedule is then constructed by Steps 4 to 7. In these steps, Algorithm 1 iteratively chooses the highest-priority operation among all schedulable operations that can be started not later than s + δ(f − s) and adds the chosen operation to the partial schedule. Note that s is the minimum of the earliest possible start times of all schedulable operations, and f is the minimum of the earliest possible finish times of all schedulable operations. Steps 5 to 7 are repeated until the schedule is completed. After that, if the scheduling direction received in Step 1 is forward, the algorithm stops; however, if it is backward, Step 8 modifies the schedule given by Step 7 to satisfy the original precedence constraints.
Algorithm 1 can be defined as a generalization of the solution-decoding procedures shown in [15, 31, 33, 34] since it can generate a parameterized-active schedule in either the forward direction or the backward direction as options. Algorithm 1 and its above-mentioned variants are modified from the hybrid scheduler of [16] for the purpose of simplification (as found in Steps 5 and 6 of Algorithm 1). Hence, Algorithm 1 may construct a different schedule from the hybrid scheduler of [16], even if both use an identical δ value on an identical operation-based permutation.
Algorithm 1. It is the procedure of decoding an operation-based permutation into a parameterized-active schedule used by LOLA.
Step 1. Receive an operation-based permutation. Then, receive an acceptable idle-time limit δ and a scheduling direction (forward or backward) as input-parameter values. If the scheduling direction is backward, then reverse the order of all members in the operation-based permutation. For example, if the received scheduling direction is backward, a permutation (3, 2, 2, 1, 3, 1) will be changed to (1, 3, 1, 2, 2, 3).
Step 2. If the scheduling direction received in Step 1 is backward, then reverse the precedence relations of all operations of every job in the being-considered JSP instance by using Steps 2.1 and 2.2.
Step 2.1. For each job J_{i} (where i = 1, 2, ..., n), the operations O_{i,1}, O_{i,2}, ..., O_{i,m} must be renamed O_{i,m}, O_{i,m−1}, ..., O_{i,1}, respectively.
Step 2.2. Assign the precedence relations of all operations of the job J_{i} by following the operation indices taken from Step 2.1. This means that O_{i,1} must be finished before O_{i,2} can start, O_{i,2} must be finished before O_{i,3} can start, and so on.
Step 3. Transform the operation-based permutation taken from Step 1 into an order of priorities of all operations as follows: let the jth occurrence of the number i in the permutation, starting from furthest at the left, represent the operation O_{i,j}; then, let the order of all operations in the permutation, starting from furthest at the left, represent the descending order of priorities of the operations. For example, the permutation (3, 2, 2, 1, 3, 1) means priority of O_{3,1} > priority of O_{2,1} > priority of O_{2,2} > priority of O_{1,1} > priority of O_{3,2} > priority of O_{1,2}.
Step 4. Let S_{g} be the set of all schedulable operations at stage g. Note that a schedulable operation is an as-yet-unscheduled operation all of whose preceding operations in its job have already been scheduled. Let P_{g} be the partial schedule of the scheduled operations. Thus, P_{1} is empty, and S_{1} consists of all operations O_{i,1} (where i = 1, 2, ..., n). In addition, let the earliest possible start times of all operations be equal to time 0. Now, let g = 1.
Step 5. Let s be the minimum of the earliest possible start times of all schedulable operations in S_{g}, and let f be the minimum of the earliest possible finish times of all schedulable operations in S_{g}. (Note that the earliest possible start time of a specific schedulable operation is the maximum between the finish time of its immediate-preceding operation in its job and the earliest available time of its preassigned machine. Then, the earliest possible finish time of a specific schedulable operation is the sum of its earliest possible start time and its predetermined processing time.)
Step 6. Let O* represent the highest-priority operation among all schedulable operations in S_{g} whose earliest possible start times are not greater than s + δ(f − s). Then, create P_{g+1} by allocating O* into P_{g} on the machine preassigned for processing O* at the earliest possible start time of O*. After that, create S_{g+1} by deleting O* from S_{g}; in addition, if O* has an immediate-successive operation in its job, let the immediate-successive operation of O* in its job be added to S_{g+1}.
Step 7. If g < nm, then let the value of g be increased by 1, and repeat from Step 5; otherwise, let P_{g+1} be a completed parameterized-active schedule, and go to Step 8. (Remember that nm is the number of all operations.)
Step 8. If the scheduling direction received in Step 1 is forward, then stop Algorithm 1, and let the schedule taken from Step 7 be the completed forward parameterized-active schedule and also the algorithm’s final result; however, if it is backward, then modify the schedule to satisfy the original precedence constraints by using Steps 8.1 and 8.2.
Step 8.1. Let the operations O_{i,1}, O_{i,2}, ..., O_{i,m} of each job J_{i} in the schedule be renamed O_{i,m}, O_{i,m−1}, ..., O_{i,1}, respectively.
Step 8.2. Turn the schedule modified from Step 8.1 back to front, so that the last-finished operation in the schedule becomes the first-started operation, and vice versa; after that, let the schedule be started at time 0. Then, stop Algorithm 1, and let the schedule modified in this step be the completed backward parameterized-active schedule and also the algorithm’s final result.
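Steps 3 to 7 of Algorithm 1, restricted to the forward direction, can be sketched in Python as follows. The instance format (each `jobs[i]` an ordered list of (machine, processing_time) pairs), the function name, and the returned start/finish dictionary are illustrative assumptions, not the paper’s implementation; Steps 1, 2, and 8 (the backward option) are omitted.

```python
def decode_forward(permutation, jobs, delta):
    """Decode an operation-based permutation into a forward
    parameterized-active schedule (sketch of Steps 3-7 of Algorithm 1).

    `jobs[i]` is job i's ordered list of (machine, processing_time) pairs;
    `delta` is the acceptable idle-time limit in [0, 1]. Returns a dict
    mapping (job, op_index) to (start, finish) times.
    """
    # Step 3: left-to-right occurrence order gives descending priority.
    priority, seen = {}, {}
    for rank, job in enumerate(permutation):
        seen[job] = seen.get(job, 0) + 1
        priority[(job, seen[job])] = rank

    # Step 4: initially, each job's first operation is schedulable.
    schedulable = {(j, 1) for j in jobs}
    job_ready = {j: 0 for j in jobs}   # finish time of each job's last scheduled op
    machine_ready = {}                 # earliest available time of each machine
    schedule = {}

    while schedulable:
        # Step 5: earliest possible start/finish time of each schedulable op.
        def est(op):
            j, k = op
            machine, _ = jobs[j][k - 1]
            return max(job_ready[j], machine_ready.get(machine, 0))

        def eft(op):
            j, k = op
            return est(op) + jobs[j][k - 1][1]

        s = min(est(op) for op in schedulable)
        f = min(eft(op) for op in schedulable)

        # Step 6: highest-priority op that can start by s + delta * (f - s).
        eligible = [op for op in schedulable if est(op) <= s + delta * (f - s)]
        op = min(eligible, key=lambda o: priority[o])
        j, k = op
        machine, proc = jobs[j][k - 1]
        start = est(op)
        schedule[op] = (start, start + proc)
        job_ready[j] = start + proc
        machine_ready[machine] = start + proc

        # Step 7: the chosen op's job successor (if any) becomes schedulable.
        schedulable.remove(op)
        if k < len(jobs[j]):
            schedulable.add((j, k + 1))
    return schedule
```

With δ = 1 the sketch behaves as an active-schedule builder; with δ = 0 it only ever picks operations that can start at the current minimum start time, i.e., a non-delay scheduler.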
3.2. Procedure of LOLA
As mentioned earlier, the four input parameters of LOLA controlled by UPLA consist of the acceptable idle-time limit, the scheduling direction, the perturbation method for generating an initial operation-based permutation, and the method for generating a neighbor operation-based permutation. The details of these four input parameters of LOLA are given below:
(i) The acceptable idle-time limit δ is defined as a controller of the solution space; for example, the LOLA with δ = 0 will search in the solution space of non-delay schedules.
(ii) The scheduling direction D is the direction of constructing schedules. If D is chosen to be forward, LOLA will generate only forward schedules. On the other hand, if D is chosen to be backward, LOLA will generate only backward schedules.
(iii) The perturbation method IP is the method used to generate an initial operation-based permutation for LOLA. If IP is selected to be full randomization, the initial operation-based permutation will be generated by full randomization without using any part of the best-found permutation memorized by UPLA. However, if IP is selected to be partial randomization, the initial operation-based permutation will be generated by applying the insert operator a number of times to the best-found operation-based permutation memorized by UPLA.
(iv) The method NP for generating a neighbor operation-based permutation is the adjustable method for modifying the LOLA’s current best-found operation-based permutation into a neighbor operation-based permutation. The 2-insert uses two insert operators. The 1-insert/1-swap uses an insert operator and then a swap operator. The 1-swap/1-insert uses a swap operator and then an insert operator. Finally, the 2-swap uses two swap operators.
In this paper, the swap operator randomly selects two members (of all nm members) from two different positions in the permutation and then switches the positions of the two selected members. The insert operator randomly selects two members (of all nm members) from two different positions in the permutation, removes the first-selected member from its old position, and then inserts it into the position in front of the second-selected member. Note that n is the number of all jobs, m is the number of all machines, and nm is thus the number of all operations in the problem’s instance. The procedure of LOLA is given in Algorithm 2.
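The two operators can be sketched on a Python list as follows; the function names and the `rng` argument (any object with a `sample` method, e.g. `random.Random`) are illustrative assumptions.

```python
import random

def swap_op(perm, rng):
    """Pick two distinct positions at random and switch their members."""
    p = list(perm)
    i, j = rng.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def insert_op(perm, rng):
    """Pick two distinct positions at random, remove the first-selected
    member, and reinsert it in front of the second-selected member."""
    p = list(perm)
    i, j = rng.sample(range(len(p)), 2)
    member = p.pop(i)
    if j > i:
        j -= 1  # the removal shifted the target one position to the left
    p.insert(j, member)
    return p

# e.g. the 2-swap NP method applies swap_op twice in a row:
rng = random.Random(7)
two_swap = swap_op(swap_op([3, 2, 2, 1, 3, 1], rng), rng)
```

Both operators preserve the multiset of job numbers, so every neighbor remains a valid operation-based permutation.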
Algorithm 2. It is the procedure of LOLA.
Step 1. Receive the best-found operation-based permutation memorized by UPLA. Then, receive the values of the LOLA’s input parameters from UPLA via Steps 1.1 to 1.4.
Step 1.1. Receive the acceptable idle-time limit δ.
Step 1.2. Receive the scheduling direction D.
Step 1.3. Receive the perturbation method IP for generating an initial operation-based permutation.
Step 1.4. Receive the method NP for generating a neighbor operation-based permutation.
Step 2. Let B represent the LOLA’s current best-found operation-based permutation. Generate an initial B by using the IP method received in Step 1.3.
Step 3. Execute Algorithm 1 by using δ from Step 1.1 and D from Step 1.2 on the permutation B from Step 2. Then, let S_{B} be equal to the schedule returned from Algorithm 1 executed in this step.
Step 4. Execute the local search procedure by Steps 4.1 to 4.5.
Step 4.1. Let c = 0.
Step 4.2. Generate a neighbor operation-based permutation B′ by modifying the permutation B via the NP method received in Step 1.4.
Step 4.3. Execute Algorithm 1 by using δ from Step 1.1 and D from Step 1.2 on the permutation B′ from Step 4.2. Then, let S′ be equal to the schedule returned from Algorithm 1 executed in this step.
Step 4.4. If the makespan of S′ is less than the makespan of S_{B}, then update S_{B} to be equal to S′, update B to be equal to B′, and let c = 0; otherwise, increase the value of c by 1.
Step 4.5. If c < nm(nm − 1), then repeat from Step 4.2; otherwise, stop Algorithm 2, and let S_{B} and B be the final best-found schedule and the final best-found operation-based permutation, respectively, as the LOLA’s final results.
Remember that no parameter-value settings for LOLA are required here since the four input parameters of LOLA are controlled by UPLA. As shown in Step 1 of Algorithm 2, the settings of δ, D, IP, and NP are provided by UPLA.
4. The Proposed UpperLevel Algorithm
UPLA is a population-based metaheuristic algorithm exploring a real-number search space; from this viewpoint, it is similar to the other population-based algorithms [21–26]. However, its procedure of improving its population is different since it is developed to be a parameter controller for LOLA. UPLA starts with a population of N combinations of LOLA’s input-parameter values. Let C_{i}(t) represent the ith combination of the LOLA’s input-parameter values (where i = 1, 2, ..., N) at the tth iteration. The members c_{i1}(t), c_{i2}(t), c_{i3}(t), and c_{i4}(t) in the combination C_{i}(t) represent the acceptable idle-time limit δ, the scheduling direction D, the perturbation method IP for generating an initial operation-based permutation, and the method NP for generating a neighbor operation-based permutation, respectively, in LOLA. Table 1 shows the translation from c_{i1}(t), c_{i2}(t), c_{i3}(t), and c_{i4}(t) of the combination C_{i}(t) into δ, D, IP, and NP of LOLA. In short, let a combination of LOLA’s input-parameter values be called a parameter-value combination.

The performance of C_{i}(t) is simply equal to the makespan returned from the LOLA using the parameter values decoded from C_{i}(t). This means that, between two parameter-value combinations, the better combination is the one which makes LOLA return the lower makespan. Thus, a parameter-value combination will be defined as the best if it makes LOLA return the lowest makespan. In UPLA, let C_{best} represent the best-found parameter-value combination memorized by UPLA. Thus, C_{best} will always be updated whenever UPLA finds a parameter-value combination which performs better than the current C_{best}. In other words, once UPLA finds a C_{i}(t) performing better than the current C_{best}, this C_{i}(t) will become the new C_{best}.
In the tth UPLA iteration, c_{i1}(t), c_{i2}(t), c_{i3}(t), and c_{i4}(t), as the members of C_{i}(t), will each be updated respectively into c_{i1}(t + 1), c_{i2}(t + 1), c_{i3}(t + 1), and c_{i4}(t + 1) of C_{i}(t + 1) by two vectors. The two vectors, used for updating each member’s current value in C_{i}(t) to its new value in C_{i}(t + 1), are described by their directions and magnitudes as follows. In the vector directions, the first vector’s direction is from the current value in C_{i}(t) toward the best-found value in C_{best}, while the second vector’s direction is opposite to the first vector’s direction. In the vector magnitudes, if the current value in C_{i}(t) and the best-found value in C_{best} are different, the first vector’s magnitude is a random real number drawn from one given interval while the second vector’s magnitude is a random real number drawn from another given interval; however, if they are the same, the magnitudes of the two vectors are both random real numbers drawn from a common interval. The process of iteratively updating the parameter-value combination given in this paragraph is found in Steps 4.2 to 4.5 of Algorithm 3.
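The per-member update can be sketched as follows. Since the exact sampling intervals for the two magnitudes are specified in Algorithm 3 rather than here, the sketch takes them as explicit arguments; the function name and the degenerate-direction convention are illustrative assumptions.

```python
def update_member(current, best, mag_toward, mag_away):
    """Two-vector update of one combination member (cf. Steps 4.2-4.5 of
    Algorithm 3). The first vector points from `current` toward `best`
    with magnitude `mag_toward`; the second points the opposite way with
    magnitude `mag_away`. In the paper both magnitudes are random reals
    drawn from given intervals; here they are parameters for clarity."""
    if best > current:
        unit = 1.0
    elif best < current:
        unit = -1.0
    else:
        unit = 1.0  # assumed convention when the two values coincide
    return current + mag_toward * unit - mag_away * unit

# Moving c_i1 = 0.80 toward a best-found value of 0.90 with magnitudes
# 0.05 and 0.02 nets a step of +0.03; an out-of-range result would be
# regenerated within [0.7, 1.0) as in Step 4.2 of Algorithm 3.
```

The opposing second vector keeps each member from collapsing straight onto the best-found value, preserving some exploration around it.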
The mechanism of the UPLA combined with LOLA is summarized in Figure 1, where Makespan_{i}(t) stands for the makespan of the schedule returned from the LOLA using the input-parameter values decoded from C_{i}(t). The procedure of UPLA is fully presented in Algorithm 3. In Algorithm 3, Steps 1 and 2 initialize the UPLA’s population. Although the largest possible range of δ is [0, 1] as mentioned earlier, the algorithm generates c_{i1} initially within [0.7, 1.0) and then restricts its value within [0.7, 1.0) along the computational process, based on a suggestion from [31]. Step 3 transforms C_{i}(t) into δ, D, IP, and NP for LOLA, and this step then executes the LOLA using the input-parameter values transformed from C_{i}(t). After that, Step 3 evaluates the performance of each C_{i}(t), where i = 1, 2, ..., N, and updates C_{best}. Step 4 then updates C_{i}(t) to C_{i}(t + 1); however, all values in C_{i}(t + 1) will be randomly regenerated once every 25 iterations for the purpose of diversifying the population. Step 5 finally checks the condition to stop or continue the next iteration.
Algorithm 3. It is the procedure of UPLA.
Step 1. Let C_{i}(t) represent the ith parameter-value combination (where i = 1, 2, ..., N) at the tth iteration. Let C_{best} represent the best-found parameter-value combination. Set the initial performance of C_{best} to be equal to an extremely large value.
Step 2. Let t = 1. Then, initially generate c_{i1}(1), c_{i2}(1), c_{i3}(1), and c_{i4}(1), where i = 1, 2, ..., N, at random; c_{i1}(1) is generated within [0.7, 1.0) (see Step 4.2), and the other members are each generated within their own predefined ranges.
Step 3. Evaluate the performance of C_{i}(t) and update C_{best} by Steps 3.1 to 3.5.
Step 3.1. Let i = 1.
Step 3.2. Translate c_{i1}(t), c_{i2}(t), c_{i3}(t), and c_{i4}(t) of C_{i}(t) into δ, D, IP, and NP for LOLA by the relationships shown in Table 1. Then, execute the LOLA (Algorithm 2) which uses the δ, D, IP, and NP translated in this step.
Step 3.3. Let the performance of C_{i}(t) be equal to the makespan of the schedule returned from the LOLA executed in Step 3.2.
Step 3.4. If the performance of C_{i}(t) is better (lower) than the performance of C_{best}, then update C_{best} to be equal to C_{i}(t), update the performance of C_{best} to be equal to the performance of C_{i}(t), and also set the schedule and the permutation returned from the LOLA executed in Step 3.2 respectively as the best-found schedule and the best-found operation-based permutation memorized by UPLA.
Step 3.5. If i < N, then increase the value of i by 1, and repeat from Step 3.2; otherwise, go to Step 4.
Step 4. If t mod 25 = 0, then randomly regenerate c_{i1}(t + 1), c_{i2}(t + 1), c_{i3}(t + 1), and c_{i4}(t + 1), where i = 1, 2, ..., N, in the same way as in Step 2; otherwise, generate C_{i}(t + 1) by Steps 4.1 to 4.6.
Step 4.1. Let i = 1.
Step 4.2. Generate c_{i1}(t + 1) from c_{i1}(t) by the two-vector updating rule described above. After that, if c_{i1}(t + 1) < 0.7 or c_{i1}(t + 1) ≥ 1.0, then regenerate c_{i1}(t + 1) at random within [0.7, 1.0).
Step 4.3. Generate c_{i2}(t + 1) from c_{i2}(t) by the two-vector updating rule.
Step 4.4. Generate c_{i3}(t + 1) from c_{i3}(t) by the two-vector updating rule.
Step 4.5. Generate c_{i4}(t + 1) from c_{i4}(t) by the two-vector updating rule.
Step 4.6. If i < N, then increase the value of i by 1, and repeat from Step 4.2; otherwise, go to Step 5.
Step 5. If the stopping criterion is not met, then increase the value of t by 1, and repeat from Step 3. Otherwise, stop Algorithm 3, and let the best-found schedule memorized by UPLA be the final result of UPLA.
5. Performance Evaluation
Among JSP-solving algorithms, the two-level particle swarm optimization algorithm (or the two-level PSO) of [15] is probably the algorithm most similar to the proposed two-level metaheuristic algorithm. The similarity is that both lower-level algorithms generate parameterized-active schedules by similar procedures; in addition, both upper-level algorithms control the same two parameters (i.e., the acceptable idle-time limit and the scheduling direction for constructing schedules) for their lower-level algorithms. However, the two-level PSO [15] differs from the proposed two-level metaheuristic algorithm in that it uses the GLNPSO’s framework [43] in both of its levels. Due to this similarity and difference, this paper selects the two-level PSO [15] for a performance comparison with the proposed two-level metaheuristic algorithm. In this section, UPLA will represent the proposed two-level metaheuristic algorithm as a whole (i.e., the UPLA combined with LOLA) since UPLA must work with LOLA inside when executed.
This section evaluates the performance of UPLA on 53 well-known JSP benchmark instances. The 53 benchmark instances consist of the ft06, ft10, and ft20 instances from [44], the la01 to la40 instances from [45], and the orb01 to orb10 instances from [46]; these 53 instances can also be found online in [47]. This paper divides the 53 benchmark instances into two sets, i.e., the set of ft-and-la instances composed of ft06 to la40 and the set of orb instances composed of orb01 to orb10. To evaluate UPLA’s performance, UPLA will be compared to the two-level PSO of [15] on the set of ft-and-la instances. Since results of the two-level PSO on the orb instances do not exist, UPLA will be compared to the GA of [6] on the set of orb instances.
In the performance comparisons, the results of UPLA are obtained from the experiment here. The settings of UPLA for the experiment are given as follows:
(i) The population in UPLA consists of 10 combinations of the LOLA’s input-parameter values (i.e., N = 10).
(ii) The stopping criterion of UPLA is that either the 200th iteration is reached (i.e., t = 200 is the maximum iteration) or the optimal solution shown in published articles, e.g., [5], is found.
(iii) UPLA is coded in C# and executed on an Intel® Core™ i5 CPU processor M580 @ 2.67 GHz with RAM of 6 GB (2.3 GB usable).
(iv) UPLA is executed for 10 runs with different random-seed numbers.
(v) The directions and magnitudes of the vectors used to generate C_{i}(t + 1) are given in Step 4 of Algorithm 3.
Tables 2 and 3 show the experiment’s results on the ft-and-la instances and the orb instances, respectively. Note that the words solution and solution value in this paper are equivalent to the words schedule and makespan, respectively. The information given by these tables contains the following:
(i) The column Instance provides the name of each instance.
(ii) The column Opt provides the optimal solution value (i.e., the optimal schedule’s makespan) of each instance given by the published articles, e.g., [5].
(iii) The column UPLA in each table is divided into six columns, i.e., Best, %BSVD, Avg., %ASVD, Avg. No. of Iters, and Avg. CPU Time (sec). Their definitions are given as follows:
(a) Best stands for the best-found solution value over 10 runs of UPLA.
(b) %BSVD stands for the deviation percentage between the best-found solution value over 10 runs of UPLA and the optimal solution value.
(c) Avg. stands for the average of the best-found solution values from the 1st run to the 10th run of UPLA.
(d) %ASVD stands for the deviation percentage between the average of the best-found solution values from the 1st run to the 10th run of UPLA and the optimal solution value.
(e) Avg. No. of Iters shows the average number of iterations used by UPLA until it reaches the stopping criterion over 10 runs.
(f) Avg. CPU Time (sec) shows the average computational time (in seconds) used by UPLA until it reaches the stopping criterion over 10 runs.
(iv) The column PSO in Table 2 and the column GA in Table 3 are each divided into two columns, i.e., Best and %BSVD. Best in the column PSO means the two-level PSO’s best-found solution value [15], while Best in the column GA means the GA’s best-found solution value [6].
%BSVD means the deviation percentage between the specified algorithm’s bestfound solution value and the optimal solution value(v)Each bestfound solution value will be marked by an asterisk () if its value is the same as the optimal solution value; in addition, it will be marked by a sharp sign () if it wins in comparison
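The %BSVD and %ASVD indicators described above can be sketched in code. This is a minimal illustration, assuming the conventional formula 100 × (value − optimum) / optimum for the deviation percentage, which matches the verbal definitions; the run data below are hypothetical, not the paper's.

```python
def pct_deviation(value: float, optimum: float) -> float:
    """Deviation percentage of a makespan from the known optimum,
    assumed to be 100 * (value - optimum) / optimum."""
    return 100.0 * (value - optimum) / optimum

def summarize_runs(best_per_run: list, optimum: float) -> dict:
    """Build the four UPLA table columns from the best-found
    makespans of the individual runs."""
    best = min(best_per_run)                     # best over all runs
    avg = sum(best_per_run) / len(best_per_run)  # average over all runs
    return {
        "Best": best,
        "%BSVD": pct_deviation(best, optimum),
        "Avg.": avg,
        "%ASVD": pct_deviation(avg, optimum),
    }

# Hypothetical best-found makespans from 10 runs, known optimum 1234
runs = [1234, 1234, 1240, 1236, 1234, 1242, 1234, 1238, 1234, 1234]
stats = summarize_runs(runs, 1234)
```

Here `stats["%BSVD"]` is 0 because at least one run reached the optimum, while `stats["%ASVD"]` is positive because some runs stopped above it.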


The results of the two-level PSO [15] and UPLA on the 43 ft and la instances are taken from Table 2 and compared on three indicators, i.e., the number of asterisk (*) signs, the number of sharp (#) signs, and the %BSVD values. In counting the instances where optimal solutions can be found over the total of 43 instances (counting * signs), the two-level PSO finds the optimal solutions on 31 instances, while UPLA finds the optimal solutions on 33 instances. In counting the instances where each algorithm finds better solutions than the other (counting # signs), the two-level PSO finds better solutions than UPLA on only 2 instances, while UPLA finds better solutions than the two-level PSO on 9 instances. In the %BSVD values, the average %BSVD of the two-level PSO is 0.32%, while the average %BSVD of UPLA is 0.28%. In conclusion, UPLA performs better than the two-level PSO [15] on all three indicators.
The results of the GA [6] and UPLA on the 10 orb instances in Table 3 are also compared on the same three indicators. In counting the * signs, the GA finds the optimal solutions on 5 instances, while UPLA finds the optimal solutions on 7 instances. In counting the # signs, the GA cannot find better solutions than UPLA on any instance, while UPLA finds better solutions than the GA on 5 instances. In the %BSVD values, the average %BSVD of the GA is 0.49%, while the average %BSVD of UPLA is 0.06%. Hence, the conclusion is that UPLA performs better than the GA [6] on all three indicators.
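The two counting indicators above can be made concrete with a small helper. This is an illustrative sketch only; the three instance makespans below are hypothetical and are not the paper's reported values.

```python
def count_indicators(alg_a: dict, alg_b: dict, opt: dict):
    """Per-instance counts behind the comparison tables:
    how often each algorithm hits the known optimum (* marks) and
    how often it strictly beats the other algorithm (# marks).
    Each dict maps an instance name to a makespan."""
    stars_a = sum(1 for k in opt if alg_a[k] == opt[k])  # A hits optimum
    stars_b = sum(1 for k in opt if alg_b[k] == opt[k])  # B hits optimum
    wins_a = sum(1 for k in opt if alg_a[k] < alg_b[k])  # A beats B
    wins_b = sum(1 for k in opt if alg_b[k] < alg_a[k])  # B beats A
    return stars_a, stars_b, wins_a, wins_b

# Hypothetical makespans for three instances (not the paper's data)
opt = {"orb01": 1059, "orb02": 888, "orb03": 1005}
upla = {"orb01": 1059, "orb02": 888, "orb03": 1013}
ga = {"orb01": 1070, "orb02": 888, "orb03": 1013}
result = count_indicators(upla, ga, opt)  # (UPLA *, GA *, UPLA #, GA #)
```

Ties (equal makespans) count toward neither algorithm's # total, which is consistent with marking a value only "if it wins the comparison."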
The combination of UPLA and LOLA improves upon an isolated LOLA in solution quality but worsens it in CPU-time consumption. One cause is that, if the optimal solution cannot be found, UPLA runs until the 200th iteration, its predetermined maximum iteration. However, a maximum of 200 iterations is usually higher than necessary, since in most instances UPLA finds its best-found solution well before the 200th iteration (and finds no better solution thereafter). Thus, with a properly lowered maximum iteration, UPLA can finish its computational process faster without any effect on solution quality. To determine a proper maximum iteration for UPLA, Figure 2 shows the %ASVD-over-iteration plots on three hard-to-solve instances, i.e., la27, la29, and la38.
In Figure 2, the %ASVD-over-iteration plots show similar patterns across three ranges of iterations, i.e., the 1st to the 60th, the 60th to the 150th, and the 150th to the 200th. The %ASVD value of each plot drops rapidly over the first 60 iterations. After that, from the 60th to the 150th iteration, it continues to decrease at a slow rate. Finally, it is almost stable after the 150th iteration. Based on this finding, the maximum iteration is recommended to lie between 60 and 150. At the extremes, a maximum of 60 iterations should be used when a short CPU time is required, while 150 should be used when the user is highly concerned about solution quality. A maximum iteration below 60 is not recommended, since it tends to stop UPLA prematurely; likewise, one above 150 is not recommended, since UPLA has very little chance of finding a better solution after the 150th iteration.
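The stopping rule discussed above can be sketched as follows. This is a simplified illustration under the assumptions stated in the comments, not the paper's implementation (the reported experiments used a cap of 200 iterations).

```python
def should_stop(iteration: int, best_makespan: int, optimum: int,
                max_iter: int = 150) -> bool:
    """Sketch of the recommended stopping rule: terminate once the
    known optimal makespan is reached, or once the iteration cap is
    hit. max_iter = 150 favors solution quality; max_iter = 60 favors
    a short CPU time, per the ranges suggested by Figure 2."""
    return best_makespan <= optimum or iteration >= max_iter

# Example: the optimum is found early, so the run ends at iteration 42
stop_early = should_stop(42, 1234, 1234)

# Example: no optimum found, so the run ends only at the cap
stop_at_cap = should_stop(150, 1250, 1234)
```

Lowering `max_iter` trades a small risk of premature termination for a proportional reduction in worst-case CPU time, which is exactly the trade-off the %ASVD plots quantify.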
6. Conclusion
This paper introduced a two-level metaheuristic algorithm for the JSP, consisting of LOLA as the lower-level algorithm and UPLA as the upper-level algorithm. LOLA serves as a local search algorithm searching for an optimal schedule, while UPLA serves as the parameter controller for LOLA. In more detail, UPLA is a new population-based search algorithm developed for adjusting the values of LOLA's input parameters. UPLA plays an important role in adapting LOLA to perform at its best on every single instance. For example, UPLA controls LOLA's solution space through the acceptable idle-time limit δ, and it also upgrades LOLA from a local search algorithm to an iterated local search algorithm through the perturbation method IP. The numerical experiment in this paper showed that the proposed two-level metaheuristic algorithm outperforms the two other metaheuristic algorithms taken from the literature in terms of solution quality. A further study should generalize UPLA into a form that can control the parameters of different metaheuristic algorithms.
Data Availability
The data used to support the findings of this study are available from the author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The author would like to acknowledge partial financial support from the Thai-Nichi Institute of Technology, Thailand.
References
[1] A. S. Jain and S. Meeran, “Deterministic job-shop scheduling: past, present and future,” European Journal of Operational Research, vol. 113, no. 2, pp. 390–434, 1999.
[2] J. Błazewicz, W. Domschke, and E. Pesch, “The job shop scheduling problem: conventional and new solution techniques,” European Journal of Operational Research, vol. 93, no. 1, pp. 1–33, 1996.
[3] U. Dorndorf and E. Pesch, “Evolution based learning in a job shop scheduling environment,” Computers & Operations Research, vol. 22, no. 1, pp. 25–40, 1995.
[4] B. Peng, Z. Lü, and T. C. E. Cheng, “A tabu search/path relinking algorithm to solve the job shop scheduling problem,” Computers & Operations Research, vol. 53, pp. 154–164, 2015.
[5] J. F. Gonçalves and M. G. C. Resende, “An extended Akers graphical method with a biased random-key genetic algorithm for job-shop scheduling,” International Transactions in Operational Research, vol. 21, no. 2, pp. 215–246, 2014.
[6] N. H. Moin, O. C. Sin, and M. Omar, “Hybrid genetic algorithm with multiparents crossover for job shop scheduling problems,” Mathematical Problems in Engineering, vol. 2015, Article ID 210680, 12 pages, 2015.
[7] T. Yamada and R. Nakano, “A fusion of crossover and local search,” in Proceedings of the IEEE International Conference on Industrial Technology (ICIT’96), pp. 426–430, IEEE, Shanghai, China, 1996.
[8] J.-Q. Li, H.-Y. Sang, Y.-Y. Han, C.-G. Wang, and K.-Z. Gao, “Efficient multi-objective optimization algorithm for hybrid flow shop scheduling problems with setup energy consumptions,” Journal of Cleaner Production, vol. 181, pp. 584–598, 2018.
[9] U. Dorndorf, E. Pesch, and T. Phan-Huy, “Constraint propagation and problem decomposition: a preprocessing procedure for the job shop problem,” Annals of Operations Research, vol. 115, no. 1–4, pp. 125–145, 2002.
[10] Á. E. Eiben, R. Hinterding, and Z. Michalewicz, “Parameter control in evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 124–141, 1999.
[11] J. J. Grefenstette, “Optimization of control parameters for genetic algorithms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 16, no. 1, pp. 122–128, 1986.
[12] S.-J. Wu and P.-T. Chow, “Genetic algorithms for nonlinear mixed discrete-integer optimization problems via meta-genetic parameter optimization,” Engineering Optimization, vol. 24, no. 2, pp. 137–159, 1995.
[13] T. Brys, M. M. Drugan, and A. Nowé, “Meta-evolutionary algorithms and recombination operators for satisfiability solving in fuzzy logics,” in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1060–1067, IEEE, Cancun, Mexico, June 2013.
[14] P. Cortez, M. Rocha, and J. Neves, “A meta-genetic algorithm for time series forecasting,” in Proceedings of the Workshop on Artificial Intelligence Techniques for Financial Time Series Analysis, 10th Portuguese Conference on Artificial Intelligence (EPIA 2001), pp. 21–31, Porto, Portugal, December 2001.
[15] P. Pongchairerks and V. Kachitvichyanukul, “A two-level particle swarm optimisation algorithm on job-shop scheduling problems,” International Journal of Operational Research, vol. 4, no. 4, pp. 390–411, 2009.
[16] C. Bierwirth and D. C. Mattfeld, “Production scheduling and rescheduling with genetic algorithms,” Evolutionary Computation, vol. 7, no. 1, pp. 1–17, 1999.
[17] J. F. Gonçalves, J. J. Mendes, and M. G. C. Resende, “A hybrid genetic algorithm for the job shop scheduling problem,” European Journal of Operational Research, vol. 167, no. 1, pp. 77–95, 2005.
[18] R. Cheng, M. Gen, and Y. Tsujimura, “A tutorial survey of job-shop scheduling problems using genetic algorithms – I. Representation,” Computers & Industrial Engineering, vol. 30, no. 4, pp. 983–997, 1996.
[19] M. Gen and R. Cheng, Genetic Algorithms and Engineering Design, John Wiley & Sons, New York, NY, USA, 1997.
[20] C. Bierwirth, “A generalized permutation approach to job shop scheduling with genetic algorithms,” OR Spectrum, vol. 17, no. 2-3, pp. 87–92, 1995.
[21] J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, San Francisco, Calif, USA, 2001.
[22] R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[23] X.-S. Yang and X. He, “Firefly algorithm: recent advances and applications,” International Journal of Swarm Intelligence, vol. 1, no. 1, pp. 36–50, 2013.
[24] X.-S. Yang and S. Deb, “Engineering optimisation by Cuckoo search,” International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
[25] M. Neshat, G. Sepidnam, M. Sargolzaei, and A. N. Toosi, “Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications,” Artificial Intelligence Review, vol. 42, no. 4, pp. 965–997, 2014.
[26] Z.-X. Zheng, J.-Q. Li, and P.-Y. Duan, “Optimal chiller loading by improved artificial fish swarm algorithm for energy saving,” Mathematics and Computers in Simulation, vol. 155, pp. 227–243, 2019.
[27] H. R. Lourenço, “Job-shop scheduling: computational study of local search and large-step optimization methods,” European Journal of Operational Research, vol. 83, no. 2, pp. 347–364, 1995.
[28] D. Sun and L. Lin, “A dynamic job shop scheduling framework: a backward approach,” International Journal of Production Research, vol. 32, no. 4, pp. 967–985, 1994.
[29] L. Özdamar, “A genetic algorithm approach to a general category project scheduling problem,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 29, no. 1, pp. 44–59, 1999.
[30] V. K. Ganesan, A. I. Sivakumar, and G. Srinivasan, “Hierarchical minimization of completion time variance and makespan in job-shops,” Computers & Operations Research, vol. 33, no. 5, pp. 1345–1367, 2006.
[31] P. Pongchairerks, “Efficient local search algorithms for job-shop scheduling problems,” International Journal of Mathematics in Operational Research, vol. 9, no. 2, pp. 258–277, 2016.
[32] M. Moonen and G. K. Janssens, “Giffler-Thompson focused genetic algorithm for the static job-shop scheduling problem,” Journal of Information and Computational Science, vol. 4, no. 2, pp. 629–642, 2007.
[33] P. Pongchairerks, “Particle swarm optimization algorithm applied to scheduling problems,” ScienceAsia, vol. 35, no. 1, pp. 89–94, 2009.
[34] P. Pongchairerks, “A self-tuning PSO for job-shop scheduling problems,” International Journal of Operational Research, vol. 19, no. 1, pp. 96–113, 2014.
[35] D. Petrovic, E. Castro, S. Petrovic, and T. Kapamara, “Radiotherapy scheduling,” in Automated Scheduling and Planning, vol. 505 of Studies in Computational Intelligence, pp. 155–189, Springer, Berlin, Germany, 2013.
[36] Y. Crama, A. W. J. Kolen, and E. J. Pesch, “Local search in combinatorial optimization,” in Artificial Neural Networks, vol. 931 of Lecture Notes in Computer Science, pp. 157–174, Springer, Berlin, Germany, 1995.
[37] J. B. Orlin, A. P. Punnen, and A. S. Schulz, “Approximate local search in combinatorial optimization,” SIAM Journal on Computing, vol. 33, no. 5, pp. 1201–1214, 2004.
[38] W. Michiels, E. Aarts, and J. Korst, Theoretical Aspects of Local Search, Springer, Berlin, Germany, 2007.
[39] E. Pesch, Learning in Automated Manufacturing: A Local Search Approach, Physica-Verlag, Heidelberg, Germany, 1994.
[40] M. den Besten, T. Stützle, and M. Dorigo, “Design of iterated local search algorithms: an example application to the single machine total weighted tardiness problem,” in Applications of Evolutionary Computing, vol. 2037 of Lecture Notes in Computer Science, pp. 441–451, Springer, Berlin, Germany, 2001.
[41] M. F. Tasgetiren, O. Buyukdagli, Q.-K. Pan, and P. N. Suganthan, “A general variable neighborhood search algorithm for the no-idle permutation flowshop scheduling problem,” in Swarm, Evolutionary, and Memetic Computing, vol. 8297 of Lecture Notes in Computer Science, pp. 24–34, Springer, Cham, Switzerland, 2013.
[42] H. R. Lourenço, O. C. Martin, and T. Stützle, “A beginner’s introduction to iterated local search,” in Proceedings of the 4th Metaheuristics International Conference, pp. 1–6, Porto, Portugal, 2001.
[43] P. Pongchairerks and V. Kachitvichyanukul, “A non-homogenous particle swarm optimization with multiple social structures,” in Proceedings of the 2005 International Conference on Simulation and Modeling, pp. 132–136, Nakornpathom, Thailand, January 2005.
[44] H. Fisher and G. L. Thompson, “Probabilistic learning combinations of local job-shop scheduling rules,” in Industrial Scheduling, J. F. Muth and G. L. Thompson, Eds., pp. 225–251, Prentice-Hall, Englewood, NJ, USA, 1963.
[45] S. Lawrence, “Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques (supplement),” Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, PA, USA, 1984.
[46] D. Applegate and W. Cook, “A computational study of the job-shop scheduling problem,” ORSA Journal on Computing, vol. 3, no. 2, pp. 149–156, 1991.
[47] J. E. Beasley, “Job shop scheduling,” OR-Library, 2004, http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/jobshop1.txt.
Copyright
Copyright © 2019 Pisut Pongchairerks. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.