Research Article  Open Access
Pisut Pongchairerks, "An Enhanced Two-Level Metaheuristic Algorithm with Adaptive Hybrid Neighborhood Structures for the Job-Shop Scheduling Problem", Complexity, vol. 2020, Article ID 3489209, 15 pages, 2020. https://doi.org/10.1155/2020/3489209
An Enhanced Two-Level Metaheuristic Algorithm with Adaptive Hybrid Neighborhood Structures for the Job-Shop Scheduling Problem
Abstract
For solving the job-shop scheduling problem (JSP), this paper proposes a novel two-level metaheuristic algorithm, in which the upper-level algorithm controls the input parameters of the lower-level algorithm. The lower-level algorithm is a local search algorithm that searches for an optimal JSP solution within a hybrid neighborhood structure. To generate each neighbor solution, the lower-level algorithm randomly uses one of two neighbor operators according to a given probability. The upper-level algorithm is a population-based search algorithm developed to control the five input parameters of the lower-level algorithm, i.e., a perturbation operator, a scheduling direction, an ordered pair of two neighbor operators, a probability of selecting a neighbor operator, and a start solution-representing permutation. Many operators are proposed in this paper as options for the perturbation and neighbor operators. Under the control of the upper-level algorithm, the lower-level algorithm can evolve in both its input-parameter values and its neighborhood structure. Moreover, with the perturbation operator and the start solution-representing permutation under control, the two-level metaheuristic algorithm performs like a multistart iterated local search algorithm. The experimental results indicate that the two-level metaheuristic algorithm outperformed its previous variant and two other high-performing algorithms in terms of solution quality.
1. Introduction
Production scheduling is an important tool for controlling and optimizing workloads in an industrial production system. It is a decision-making process that involves assigning jobs to machines on a timetable. The job-shop scheduling problem (JSP) is one of the well-known production scheduling problems. It is a highly complex optimization problem in both theoretical and practical aspects. The objective of JSP is commonly to find a feasible schedule that completes all jobs with the shortest makespan (note that the makespan is the length of the schedule). Approximation algorithms, such as [1–6], have been developed for JSP since the problem cannot be solved optimally in a reasonable (polynomial) amount of time [7, 8]. The two-level metaheuristic algorithm of [9] is one of the high-performing approximation algorithms for JSP. This algorithm is, as its name implies, a combination of an upper-level algorithm (named UPLA) and a lower-level algorithm (named LOLA). Its mechanism is that UPLA controls the input parameters of LOLA, and LOLA then searches for an optimal schedule. Owing to the successful results of [9], this paper aims to develop an enhanced two-level metaheuristic algorithm for JSP. To this end, the two-level metaheuristic algorithm proposed in this paper changes its original variant [9] at both levels.
The lower-level algorithm proposed in this paper, named LOSAP, is a local search algorithm exploring a probabilistic-based hybrid neighborhood structure. To generate each neighbor solution, LOSAP randomly uses one of the two predetermined neighbor operators (i.e., the first and second operators) according to a preassigned probability of selecting the first operator. A high value of this probability makes the hybrid neighborhood structure more similar to the first operator’s neighborhood structure, and vice versa. Previous successful applications of randomly using one of two different operators can be found in [10, 11]. The major differences between LOSAP and LOLA [9] are briefly presented as follows. LOLA’s search ability is mainly based on its special solution space, which is a solution space of parameterized-active schedules. LOSAP’s search ability is, however, mainly based on its hybrid neighborhood structure, since it searches an ordinary solution space of semi-active schedules. LOSAP also has many proposed operators as options for its perturbation and neighbor operators; most of them were not used in LOLA. In addition, LOSAP uses a criterion different from LOLA’s for accepting a new best-found solution.
Although LOLA and LOSAP differ in many ways, as mentioned above, they still share a common weakness: no single combination of input-parameter values performs best for all instances. In other words, a combination of input-parameter values performing best for one instance may not perform best for another. For each instance, each algorithm has a specific best combination of input-parameter values; however, it cannot be known in advance without experimenting on the instance under consideration. This weakness is, in fact, common to most metaheuristic algorithms. To overcome it, past studies, e.g., [4, 9, 12–15], developed an upper-level metaheuristic algorithm to control the input parameters of a problem-solving metaheuristic algorithm at a lower level.
The upper-level algorithm proposed in this paper, named MUPLA, is a modification of UPLA (i.e., the upper-level algorithm of [9]). MUPLA is a population-based metaheuristic algorithm designed to be a parameter controller for LOSAP. Thus, its population consists of a number of combinations of LOSAP’s input-parameter values which are evolved (updated) over iterations. For brevity, let a combination of LOSAP’s input-parameter values be called a parameter-value combination. Each parameter-value combination contains specific values of the perturbation operator, the scheduling direction, the ordered pair of two neighbor operators, the probability of selecting a neighbor operator, and the start solution-representing permutation. A major change of MUPLA from UPLA [9] is that each parameter-value combination has a start solution-representing permutation different from those of the others. As a consequence, MUPLA combined with LOSAP acts as a multistart iterated local search algorithm. This is a large upgrade because UPLA combined with LOLA [9] is just an iterated local search algorithm.
The remainder of this paper is divided into four main sections. Section 2 provides an overview of the publications relevant to the research topic. Section 3 describes the proposed two-level metaheuristic algorithm at both levels: the lower-level algorithm (LOSAP) and the upper-level algorithm (MUPLA) are described in Sections 3.1 and 3.2, respectively. Section 4 then presents the results and discussions of the evaluation of the two-level metaheuristic algorithm’s performance. Section 5 finally summarizes the findings and recommendations.
2. Literature Review
The job-shop scheduling problem (JSP) comes with n given jobs J_{1}, J_{2},…, J_{n} and m given machines M_{1}, M_{2},…, M_{m}. Each job J_{i} is composed of a sequence of m given operations O_{i1}, O_{i2},…, O_{im} forming a chain of precedence constraints. To process each job J_{i}, O_{ij} (where j = 1, 2, …, m − 1) is defined as the immediate-preceding operation of O_{ik} (where k = j + 1); thus, O_{ij} must be finished before O_{ik} can start. In addition, each operation must be processed on a preassigned machine with a predetermined processing time. Each machine cannot process more than one operation at a time, and it cannot be stopped or paused while processing an operation. All n given jobs arrive at time 0, and all m given machines are also available at time 0. A schedule is feasible if it completely allocates all n jobs under all the given constraints. An optimal schedule is a feasible schedule which minimizes the makespan, i.e., the total amount of time required to complete all jobs. Excellent reviews of JSP are available in [7, 8, 16, 17].
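The problem data above can be held in a simple structure. The sketch below shows a hypothetical 3-job/2-machine instance (the job data are invented for illustration), where `jobs[i-1][j-1]` gives the (machine, processing time) pair of operation O_{ij}:

```python
# A hypothetical 3-job / 2-machine JSP instance (data invented for illustration).
# jobs[i-1][j-1] is the (machine index, processing time) of operation O_ij;
# each job must visit its machines in the listed order (the precedence chain).
jobs = [
    [(0, 3), (1, 2)],  # J_1: O_11 on M_1 for 3 time units, then O_12 on M_2 for 2
    [(1, 4), (0, 1)],  # J_2: O_21 on M_2 for 4, then O_22 on M_1 for 1
    [(0, 2), (1, 3)],  # J_3: O_31 on M_1 for 2, then O_32 on M_2 for 3
]
n, m = len(jobs), len(jobs[0])  # n jobs, m machines (= m operations per job)
```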
In JSP, feasible schedules can be constructed in either the forward or the backward (reverse) direction. A forward schedule is a schedule constructed in the forward direction, while a backward (reverse) schedule is a schedule constructed in the backward direction. In other words, a forward schedule is a schedule in which all jobs J_{i} (where i = 1, 2, …, n) are constructed forward from O_{i1} to O_{im}, while a backward schedule is a schedule in which all jobs J_{i} are constructed backward from O_{im} to O_{i1}. Although forward scheduling is commonly used for the makespan criterion, backward scheduling has been applied as an alternative in many studies, e.g., [18–20]. Besides the classification based on scheduling direction, feasible schedules can be classified based on their allowable delay times, e.g., semi-active schedules and active schedules [17, 21]. A feasible schedule is defined as a semi-active schedule if no operation can be started earlier without altering an operation sequence on any machine. A semi-active schedule is then defined as an active schedule if no operation can be started earlier without delaying any other operation or violating any precedence constraint.
Many approximation algorithms have been developed from metaheuristic algorithms for solving JSP. Iterated local search, a well-known class of metaheuristic algorithms, has also been applied to JSP [22, 23]. In general, an iterated local search algorithm is a single-solution-based local search technique which can search for a global optimal solution. During an exploration, it uses a neighbor operator repeatedly to find a local optimum and then uses a perturbation operator to escape the just-found local optimum (note that a perturbation operator is an operator that generates a new start solution by largely modifying a found local optimal solution [24, 25]). Some studies, such as [26–28], have enhanced their iterated local search algorithms by adding multistart properties.
In iterated local search and related algorithms, three operators are commonly used as neighbor and perturbation operators: the common swap, insert, and inverse operators [29]. Some iterated local search algorithms, such as [30, 31], use the common swap operator or the common insert operator multiple times as their perturbation operators. The definitions of the three common operators are given as follows (let u and v be random integers from 1 to the number of members in the permutation, with v ≠ u):
(i) The common swap operator swaps the two members in the u^{th} and v^{th} positions of a permutation.
(ii) The common insert operator removes a member from the u^{th} position of a permutation and then inserts it back at the v^{th} position.
(iii) The common inverse operator reverses the sequence of all members from the u^{th} to the v^{th} positions of a permutation.
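The three common operators can be sketched in Python as follows (a minimal illustration; the function names are ours, and positions are 0-based here rather than the 1-based u and v of the text):

```python
import random

def common_swap(perm, rng=random):
    """Swap the members at two random distinct positions u and v."""
    p = list(perm)
    u, v = rng.sample(range(len(p)), 2)   # two distinct positions
    p[u], p[v] = p[v], p[u]
    return p

def common_insert(perm, rng=random):
    """Remove the member at position u and re-insert it at position v."""
    p = list(perm)
    u, v = rng.sample(range(len(p)), 2)
    p.insert(v, p.pop(u))
    return p

def common_inverse(perm, rng=random):
    """Reverse the segment between positions u and v (inclusive)."""
    p = list(perm)
    u, v = sorted(rng.sample(range(len(p)), 2))
    p[u:v + 1] = reversed(p[u:v + 1])
    return p
```

Each function returns a new list, leaving the input permutation intact, so the same best-found permutation can be perturbed repeatedly.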
The two-level metaheuristic algorithm of [9] can be classified as an adaptive iterated local search algorithm for JSP. Its upper-level algorithm (named UPLA) controls the input parameters of its lower-level algorithm (named LOLA), which then searches for an optimal schedule. Thus, the two-level metaheuristic algorithm can adapt itself to every single JSP instance. The development of the two-level metaheuristic algorithm of [9] followed the successes of the previous studies [4, 12–15] in using a metaheuristic algorithm to control the parameters of another metaheuristic algorithm.
In [9], LOLA is a local search algorithm exploring a solution space of parameterized-active schedules. Its input parameters (i.e., an acceptable idle-time limit, a scheduling direction, a perturbation operator, and a neighbor operator) are controlled by UPLA. UPLA is a population-based metaheuristic algorithm searching in a real-number search space. From this viewpoint, UPLA is similar to other population-based algorithms, such as particle swarm optimization [32], differential evolution [33], fish swarm [34], and cuckoo search [35]. However, the evolving procedure of UPLA’s population differs from those of the algorithms just mentioned. UPLA’s population consists of combinations of input-parameter values of LOLA. For a parameter-value combination, each parameter’s value is iteratively changed by the sum of two changeable opposite-direction vectors. The first vector points toward the memorized best-found value, whereas the second vector points away from it. The magnitudes of these two vectors are generated randomly between zero and their given maximum values. The first vector’s maximum magnitude (0.05) is usually larger than the second vector’s maximum magnitude (0.01). However, if the parameter’s value equals the memorized best-found value, then the maximum magnitudes of both vectors equal the same value (0.01).
3. Method
Section 3 describes the procedure of the proposed two-level metaheuristic algorithm at both levels: the lower-level algorithm (LOSAP) is described in Section 3.1, and the upper-level algorithm (MUPLA) in Section 3.2.
3.1. Proposed Lower-Level Algorithm
The lower-level algorithm proposed in this paper, named LOSAP, is a local search algorithm searching for an optimal solution within a probabilistic-based hybrid neighborhood structure. Its framework is similar to those of other local search algorithms [36–39]. However, its neighborhood structure is generated from two predetermined operators, i.e., the first and second neighbor operators. With a given probability, LOSAP randomly uses one of the two predetermined operators to generate a neighbor solution-representing permutation. This means that, based on the given probability, LOSAP can switch between the two given operators at any time during its exploration. In this paper, many operators are proposed as options for LOSAP’s neighbor operators (note that successes in randomly using one of two neighbor operators have been reported for different algorithms, e.g., [10, 11]).
LOSAP generates a hybrid of the first operator’s neighborhood structure and the second operator’s neighborhood structure. The hybridization is controlled by the probability of selecting the first neighbor operator, which is one of LOSAP’s input parameters (the probability of selecting the second neighbor operator, as its complement, is one minus the probability of selecting the first operator). Note that the higher the probability of selecting the first operator, the more the hybrid neighborhood structure resembles the first operator’s neighborhood structure. Equivalently, the lower the probability of selecting the first operator, the more the hybrid neighborhood structure resembles the second operator’s neighborhood structure. At the boundaries, probability values of 1.00 and 0.00 make the hybrid neighborhood structure identical to the first operator’s and the second operator’s neighborhood structures, respectively.
The probability of selecting the first neighbor operator is not LOSAP’s only input parameter. LOSAP has five input parameters in total: the perturbation operator, the scheduling direction, the ordered pair of the first and second neighbor operators, the probability of selecting the first neighbor operator, and the start operation-based permutation. LOSAP provides many options for setting each input parameter so that, with proper parameter values, it can perform well on every single instance. Below, Section 3.1.1 describes Algorithm 1, the decoding procedure used by LOSAP; in detail, Algorithm 1 is the procedure that transforms each solution-representing permutation generated by LOSAP into a semi-active schedule. Section 3.1.2 then describes Algorithm 2, the procedure of LOSAP.


3.1.1. Decoding Procedure
LOSAP searches in a solution space of semi-active schedules, and it uses an operation-based permutation [40–43] to represent a semi-active schedule. An operation-based permutation has been used to represent a semi-active schedule in many studies, such as [20, 43, 44]. For an n-job/m-machine instance, an operation-based permutation is a permutation with m repetitions of the numbers 1, 2, …, n. In the permutation, the number i (where i = 1, 2, …, n) in its j^{th} occurrence from left to right (where j = 1, 2, …, m) represents the operation O_{ij}. A semi-active schedule is then constructed by scheduling all operations in the order given by the permutation; in addition, each operation must be scheduled at its earliest possible start time on its preassigned machine. For example, the permutation (3, 2, 3, 1, 1, 2) represents the schedule in which the operations O_{31}, O_{21}, O_{32}, O_{11}, O_{12}, and O_{22} are sequentially scheduled at their earliest possible start times on their preassigned machines. The earliest possible start time of a specific operation is the maximum of the finish time of its immediate-preceding operation in its job and the finish time of the last operation currently scheduled on its machine.
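The earliest-possible-start rule can be sketched as below. This is an illustrative Python decoding for the forward direction only (the paper’s Algorithm 1 additionally handles the backward direction), run on an invented 3-job/2-machine instance with the paper’s example permutation (3, 2, 3, 1, 1, 2):

```python
def decode_forward(perm, jobs):
    """Decode an operation-based permutation into a forward semi-active
    schedule. jobs[i-1][j-1] is the (machine, time) pair of operation O_ij.
    Returns each operation's start time and the makespan."""
    n_machines = 1 + max(mc for job in jobs for mc, _ in job)
    job_ready = [0] * len(jobs)     # finish time of each job's last scheduled op
    mach_ready = [0] * n_machines   # finish time of each machine's last op
    next_op = [0] * len(jobs)       # index of the next unscheduled op per job
    start = {}
    for i in perm:                  # i is a 1-based job number
        j = next_op[i - 1]
        machine, ptime = jobs[i - 1][j]
        s = max(job_ready[i - 1], mach_ready[machine])  # earliest possible start
        start[(i, j + 1)] = s       # start time of O_ij (1-based j)
        job_ready[i - 1] = mach_ready[machine] = s + ptime
        next_op[i - 1] += 1
    return start, max(job_ready)

# The paper's example permutation on an invented instance:
jobs = [[(0, 3), (1, 2)], [(1, 4), (0, 1)], [(0, 2), (1, 3)]]
start, makespan = decode_forward([3, 2, 3, 1, 1, 2], jobs)
```

Here O_{31} starts at time 0, O_{32} must wait until both O_{31} and O_{21} release machine M_{2}, and so on down the permutation.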
Algorithm 1 is the solution-decoding procedure used by LOSAP. As options, it can transform an operation-based permutation into a forward semi-active schedule (i.e., a semi-active schedule constructed in the forward direction) or a backward semi-active schedule (i.e., a semi-active schedule constructed in the backward direction). Thus, Algorithm 1 requires values to be assigned to two input parameters, i.e., an operation-based permutation and a scheduling direction. Recall that m and n represent the number of machines and the number of jobs, respectively, in the JSP instance under consideration. Thus, mn (i.e., m multiplied by n) represents the number of all operations.
As mentioned above, Algorithm 1 is the solution-decoding method used by LOSAP, and Algorithm 2 in Section 3.1.2 is LOSAP’s procedure. Thus, Algorithm 1 is a component of Algorithm 2.
3.1.2. LOSAP Procedure
LOSAP, as shown in Algorithm 2, has five input parameters whose values need to be assigned, i.e., PTBT, SD, PNO ≡ (NO1, NO2), PROB, and P. These input parameters are defined below:
(i) PTBT and P stand for the perturbation operator and the start operation-based permutation, respectively. LOSAP uses PTBT on P to generate its initial best-found operation-based permutation.
(ii) SD stands for the scheduling direction (forward or backward) of all solutions generated by LOSAP. If SD is set to forward, LOSAP transforms each generated operation-based permutation into a forward schedule; otherwise, LOSAP transforms each permutation into a backward schedule.
(iii) PNO ≡ (NO1, NO2) is the ordered pair of the first neighbor operator (called NO1) and the second neighbor operator (called NO2).
(iv) PROB is the probability of selecting the first neighbor operator (NO1). Consequently, the probability of selecting the second neighbor operator (NO2) is one minus PROB.
Besides the input parameters defined above, the other abbreviations used in LOSAP (i.e., Algorithm 2) are defined below:
(i) P_{0} stands for the current best-found operation-based permutation. As mentioned above, the initial P_{0} is generated by using PTBT on P.
(ii) S_{0}, decoded from P_{0}, stands for the current best-found schedule. In addition, Makespan(S_{0}) stands for the makespan of S_{0}.
(iii) P_{1}, generated by using NO1 or NO2 on P_{0}, stands for the current neighbor operation-based permutation.
(iv) S_{1}, decoded from P_{1}, stands for the current neighbor schedule. In addition, Makespan(S_{1}) stands for the makespan of S_{1}.
(v) m and n are the number of machines and the number of jobs, respectively, in the JSP instance under consideration. Thus, mn is the number of operations.
LOSAP’s procedure, given in Algorithm 2, is simple but efficient. In brief, LOSAP starts by using PTBT on P to generate an initial P_{0}; after that, LOSAP transforms P_{0} into S_{0}. Then, LOSAP starts its repeated loop by using PROB to randomly select either NO1 or NO2. LOSAP uses the selected operator on P_{0} to generate P_{1}, and then transforms P_{1} into S_{1}. If the criterion for accepting a new best-found permutation is satisfied, LOSAP updates P_{0} ⟵ P_{1} and S_{0} ⟵ S_{1}. Finally, LOSAP determines whether to continue with the loop’s next round or stop. Note that S_{0} and S_{1} are always generated in the forward direction if SD is set to forward at the beginning; otherwise, they are always generated in the backward direction.
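The loop just described can be sketched as follows. This is an illustrative reduction, not the paper’s Algorithm 2: the makespan evaluation is passed in as a callable, the stopping rule is a fixed iteration budget, and the toy objective in the usage lines is invented so that the sketch behaves as a randomized sorter:

```python
import random

def losap(perturb, neighbor_ops, prob_first, start_perm, makespan,
          max_iter=500, rng=random):
    """Minimal LOSAP sketch: PTBT = perturb, PNO = neighbor_ops (NO1, NO2),
    PROB = prob_first, P = start_perm. Accepts non-worsening neighbors."""
    no1, no2 = neighbor_ops
    p0 = perturb(start_perm)                 # PTBT on P gives the initial P0
    best = makespan(p0)
    for _ in range(max_iter):
        op = no1 if rng.random() < prob_first else no2  # select NO1 by PROB
        p1 = op(p0)                          # neighbor permutation P1
        m1 = makespan(p1)
        if m1 <= best:                       # non-worsening acceptance criterion
            p0, best = p1, m1
    return p0, best

# Toy usage: both neighbor operators are single random swaps, and "makespan"
# is an invented sortedness measure (0 when the list is sorted).
rng = random.Random(7)
def swap(p):
    q = list(p)
    u, v = rng.sample(range(len(q)), 2)
    q[u], q[v] = q[v], q[u]
    return q
toy = lambda p: sum(abs(x - i) for i, x in enumerate(p))
perm, value = losap(swap, (swap, swap), 0.5, [5, 4, 3, 2, 1, 0], toy, rng=rng)
```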
In LOSAP, there are many optional operators for PTBT and PNO. These optional operators are modified from the common swap, insert, and inverse operators [29] (the definitions of the common operators are given in Section 2). In detail, LOSAP’s five options for PTBT are the n-medium swap operator, the n-large swap operator, the n-medium inverse operator, the n-large insert operator, and the n-medium insert operator. LOSAP also provides four options for PNO ≡ (NO1, NO2): (1-small inverse, 1-medium insert), (1-large swap, 1-large insert), (1-medium swap, 1-medium insert), and (1-small swap, 1-small insert). These optional operators are defined as follows. The number in front of the hyphen (-) indicates the number of repeated uses of the operator named after the hyphen. For example, the 1-small swap operator applies the small swap operator once to a permutation, and the n-medium inverse operator applies the medium inverse operator n times to a permutation.
Further to the above paragraph, the words small, medium, and large in the names of the optional operators restrict the distance of v from u, as explained below (recall that u is a random integer within [1, mn]):
(i) For all operators named small, v is a random integer within [u − 4, u + 4] (note that the small swap in [45] means the swap of two adjacent members, and it thus differs from the small swap in LOSAP).
(ii) For all operators named medium, v is a random integer within [u − (mn/5), u + (mn/5)].
(iii) For all operators named large, v is a random integer within [1, mn]. This means that the operators named large are identical to the common operators.
After v is generated, its value must be verified before use. If the generated value of v is outside [1, mn] or equal to u, it must be randomly regenerated within the same range. This procedure is repeated until a value of v within [1, mn] and unequal to u is obtained.
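The generate-and-regenerate rule for v can be sketched as below (an illustrative helper; it assumes mn is large enough that the medium span mn/5 is at least 1, since otherwise no valid v would exist in the medium range):

```python
import random

def draw_v(u, mn, size, rng=random):
    """Draw v for an operator named 'small', 'medium', or 'large';
    regenerate until v lies in [1, mn] and v != u."""
    if size == "small":
        lo, hi = u - 4, u + 4
    elif size == "medium":
        span = mn // 5                 # integer mn/5; assumed >= 1 here
        lo, hi = u - span, u + span
    else:                              # "large": identical to the common operators
        lo, hi = 1, mn
    while True:
        v = rng.randint(lo, hi)
        if 1 <= v <= mn and v != u:
            return v
```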
Note that adding more optional operators to LOSAP may not always benefit the two-level metaheuristic algorithm; it sometimes makes it harder for the two-level metaheuristic algorithm to find a better solution. As shown in Algorithm 2, LOSAP does not include the n-small swap, n-small insert, and n-large inverse operators as options for PTBT. The reason for excluding the n-small swap and n-small insert operators is that they make too small a change to P; using them, the two-level metaheuristic algorithm can hardly escape a local optimum. In contrast, a change from the n-large inverse operator is almost as large as a change from reinitialization. For PNO, the 1-medium inverse and 1-large inverse operators are not offered as options because, as neighbor operators, they make too large a change to P_{0}.
LOLA [9] and LOSAP are both lower-level algorithms of their own two-level metaheuristic algorithms; however, there are many differences between them. LOLA’s search ability is mainly based on its solution space of parameterized-active schedules. By contrast, LOSAP uses an ordinary solution space of semi-active schedules; thus, its search ability is mainly based on its probabilistic-based hybrid neighborhood structure. Most of LOSAP’s optional perturbation and neighbor operators differ from LOLA’s; the n-large insert operator is the only optional perturbation operator found in both LOLA and LOSAP. Another difference between LOLA and LOSAP lies in their criteria for accepting a new best-found solution. LOLA accepts only a better neighbor solution, while LOSAP accepts a non-worsening neighbor solution. LOSAP uses this acceptance criterion to escape from a shoulder (i.e., a flat area of the search space adjacent to a downhill edge [46]). In addition, LOSAP does not reset t_{L} to 0 for any sideways move, in order to avoid an endless loop upon finding a flat local minimum.
3.2. Proposed Upper-Level Algorithm
MUPLA is the upper-level algorithm of the proposed two-level algorithm. The purpose of MUPLA is to evolve LOSAP’s input-parameter values so that LOSAP can deliver its best performance on every single JSP instance. MUPLA is a population-based search algorithm developed specifically to be a parameter tuner. MUPLA’s population contains N combinations of LOSAP’s input-parameter values, i.e., C_{1}(t), C_{2}(t),…, C_{N}(t). For brevity, let a parameter-value combination stand for a combination of LOSAP’s input-parameter values. In the population, each parameter-value combination is adjusted over iterations by MUPLA’s specific evolutionary process.
Let C_{i}(t) ≡ (c_{1i}(t), c_{2i}(t), c_{3i}(t), c_{4i}(t), p_{i}(t)) represent the i^{th} parameter-value combination (where i = 1, 2, …, N) in MUPLA’s population at the t^{th} iteration. Here, c_{1i}(t), c_{2i}(t), c_{3i}(t), and c_{4i}(t) are real numbers representing the values of the perturbation operator (PTBT), the scheduling direction (SD), the ordered pair of the first and second neighbor operators (PNO), and the probability of selecting the first neighbor operator (PROB) of LOSAP, respectively. In addition, p_{i}(t) represents the start operation-based permutation of LOSAP. Note that p_{i}(t) is an operation-based permutation, not a real number like the others. Table 1 presents the transformation from c_{1i}(t), c_{2i}(t), c_{3i}(t), c_{4i}(t), and p_{i}(t) into the values of PTBT, SD, PNO, PROB, and P of LOSAP, respectively.
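One plausible reading of that transformation is an equal-width-interval mapping from each real-valued gene onto the discrete option lists of Section 3.1.2. The option lists below are the paper’s; the [0, 1] gene range, the interval mapping itself, and the option ordering are our assumptions, since the exact rule is specified in Table 1:

```python
def decode_combination(c1, c2, c3, c4):
    """Hypothetical decoding of the real-valued genes into LOSAP parameter
    values (illustrative; the paper's exact mapping is given in Table 1)."""
    ptbt_options = ["n-medium swap", "n-large swap", "n-medium inverse",
                    "n-large insert", "n-medium insert"]
    pno_options = [("1-small inverse", "1-medium insert"),
                   ("1-large swap", "1-large insert"),
                   ("1-medium swap", "1-medium insert"),
                   ("1-small swap", "1-small insert")]
    # Split [0, 1] into equal-width intervals, one per discrete option.
    ptbt = ptbt_options[min(int(c1 * len(ptbt_options)), len(ptbt_options) - 1)]
    sd = "forward" if c2 < 0.5 else "backward"
    pno = pno_options[min(int(c3 * len(pno_options)), len(pno_options) - 1)]
    prob = c4                          # PROB used directly as a probability
    return ptbt, sd, pno, prob
```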

The abbreviations used in MUPLA (i.e., Algorithm 3) are defined below:
(i) Score(C_{i}(t)) stands for the performance score of C_{i}(t). Note that the lower the performance score, the better the performance. Between any two parameter-value combinations, the one with the lower performance score is the better.
(ii) P_{fi}(t) stands for the final (best-found) operation-based permutation of the LOSAP run using the input-parameter values decoded from C_{i}(t).
(iii) S_{fi}(t) stands for the final (best-found) schedule of the LOSAP run using the input-parameter values decoded from C_{i}(t). In addition, Makespan(S_{fi}(t)) stands for the makespan of S_{fi}(t).
(iv) C_{best} ≡ (c_{1best}, c_{2best}, c_{3best}, c_{4best}, p_{best}) stands for the best parameter-value combination ever found by the population. In addition, Score(C_{best}) stands for the performance score of C_{best} (note that C_{best} is similar in definition to the global best position of PSO [32]).
(v) S_{best} stands for the best schedule ever found by the population.

The procedure of MUPLA is presented in detail in Algorithm 3 and is also presented as a flow chart in Figure 1. In brief, MUPLA starts by assigning t ⟵ 1 and Score(C_{best}) ⟵ +∞. Then, MUPLA randomly generates C_{i}(t), where i = 1, 2, …, N. After that, MUPLA processes its repeated loop as follows. For each C_{i}(t), MUPLA decodes it into the LOSAP input-parameter values and then runs LOSAP with these values to receive P_{fi}(t) and S_{fi}(t); MUPLA then assigns Score(C_{i}(t)) ⟵ Makespan(S_{fi}(t)). If MUPLA finds any C_{i}(t) whose Score(C_{i}(t)) is less than or equal to Score(C_{best}), it updates C_{best} ⟵ C_{i}(t), Score(C_{best}) ⟵ Score(C_{i}(t)), and S_{best} ⟵ S_{fi}(t). After that, MUPLA generates C_{i}(t + 1), where i = 1, 2, …, N, using its specific evolutionary process (as shown in Step 3 of Algorithm 3), and then assigns t ⟵ t + 1. Finally, MUPLA determines whether to continue with the loop’s next round or stop.
A main difference between MUPLA and other population-based algorithms such as [32–35] is its specific evolutionary process (i.e., the procedure for adjusting its population), given by the equation in Step 3.3. In that equation, each c_{ji}(t + 1) is updated from c_{ji}(t) by the sum of two opposite-direction vectors. The first vector points toward the best-found value, whereas the second vector points away from it. The equation in Step 3.3 used in MUPLA is slightly modified from that in UPLA [9]. The modification halves the first vector’s maximum magnitude (from 0.05 to 0.025) in cases where c_{ji}(t) differs from c_{jbest}. The purpose of this modification is to delay the population from getting stuck in a local optimum. In a preliminary experiment, the proposed two-level metaheuristic algorithm performed better on average after this reduction of the first vector’s maximum magnitude.
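The per-gene update can be sketched as below. This is an illustrative reconstruction from the magnitudes stated in the text; the clamping to [0, 1] and the direction handling when c already equals c_best are our assumptions:

```python
import random

def update_gene(c, c_best, rng=random):
    """Move gene c by the sum of a vector toward c_best (max magnitude 0.025,
    or 0.01 when c == c_best) and a vector away from it (max magnitude 0.01)."""
    toward_max = 0.01 if c == c_best else 0.025
    away_max = 0.01
    direction = 1.0 if c_best >= c else -1.0          # unit direction toward c_best
    step = direction * (rng.uniform(0, toward_max) - rng.uniform(0, away_max))
    return min(1.0, max(0.0, c + step))               # keep the gene in [0, 1]
```

Because the away-vector’s magnitude is drawn independently, a gene can temporarily drift past or away from the best-found value, which is what keeps the population from collapsing onto a single point.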
Besides the modification of the equation in Step 3.3, there are other, larger changes of MUPLA from UPLA [9]. One change is that, unlike in UPLA, each C_{i}(t) of MUPLA’s population includes a start operation-based permutation p_{i}(t). Through this change, MUPLA adds a multistart property to LOSAP. Consequently, the combination of MUPLA and LOSAP becomes a multistart iterated local search algorithm. This is a large upgrade because the combination of UPLA and LOLA in [9] is just an iterated local search algorithm. Another change is in the criterion for accepting a new best-found parameter-value combination: UPLA accepts only a better parameter-value combination, while MUPLA accepts a non-worsening parameter-value combination.
4. Results and Discussions
The performance of the proposed two-level metaheuristic algorithm was evaluated via the experiment conducted in this research, and Section 4 presents the results of the experiment. In this section, let MUPLA stand for the whole two-level metaheuristic algorithm (i.e., MUPLA combined with LOSAP), since MUPLA uses LOSAP as its component when solving JSP. For comparison purposes, MUPLA’s results were compared to those of TS, GA, and UPLA, taken from [5, 6], and [9], respectively. Let TS stand for the taboo search algorithm developed by Nowicki and Smutnicki [5], and let GA stand for the genetic algorithm developed by Piroozfard et al. [6]. In addition, let UPLA in this section stand for UPLA combined with LOLA.
The performance of MUPLA was evaluated on 53 benchmark instances: the ft06, ft10, and ft20 instances from [47], the la01 to la40 instances from [48], and the orb01 to orb10 instances from [49]. These 53 instances were chosen because they cover all instances used by [5, 6, 9] in their experiments. In detail, TS’s performance was evaluated in [5] on the 43 first-mentioned instances, i.e., ft06, ft10, ft20, and all 40 la instances. GA’s performance was evaluated in [6] on 42 instances, i.e., ft06, ft20, and all 40 la instances. In addition, UPLA’s performance was evaluated in [9] on all 53 instances.
The experiment had two main objectives. The first was to evaluate the best solution quality of which MUPLA was capable without a limit on computational time. To achieve this, MUPLA was run for an extremely long computational time (i.e., 5,000 iterations in this paper) in each trial unless it found the known optimal solution earlier; this ensured that MUPLA had reached the best solution it could. The second objective was to evaluate MUPLA’s search performance over iterations. To do so, MUPLA’s solution convergence rates were plotted. One finding from the convergence rates was a proper maximum iteration count for MUPLA. This is very important because a proper maximum iteration count balances the solution quality against the consumed computational time; recall that the higher the number of iterations used, the higher the computational time consumed.
The parameter settings of MUPLA used in the experiment are shown below:
(i) The population of MUPLA consisted of three parameter-value combinations (i.e., N = 3 in Algorithm 3).
(ii) The stopping criterion of MUPLA was that either the 5,000^{th} iteration (i.e., t = 5,000 in Algorithm 3) was reached or the known optimal solution value was found (note that solution and solution value in this paper represent schedule and makespan, respectively).
(iii) MUPLA was implemented and executed on an Intel® Core™ i5 CPU processor M580 @ 2.67 GHz with 6 GB of RAM (2.3 GB usable).
(iv) For each instance, MUPLA was executed for five trials with different random-seed numbers.
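The stopping criterion in setting (ii) can be sketched as a simple outer loop. The following is a minimal illustration only: the `evaluate` callable and the random candidate generation are hypothetical placeholders standing in for MUPLA's actual evolutionary step, not the algorithm itself.

```python
import random

def run_mupla(evaluate, optimum, max_iterations=5000, population_size=3, seed=0):
    """Sketch of the outer loop: stop at max_iterations or as soon as the
    known optimal solution value is found, whichever comes first.
    `evaluate` maps a candidate parameter-value combination to a makespan
    (a hypothetical placeholder for LOSAP's search)."""
    rng = random.Random(seed)
    # Stand-in for N = 3 parameter-value combinations in the population.
    population = [rng.random() for _ in range(population_size)]
    best = min(evaluate(p) for p in population)
    iterations_used = 0
    for t in range(1, max_iterations + 1):
        iterations_used = t
        candidate = evaluate(rng.random())  # stand-in for one evolutionary step
        best = min(best, candidate)
        if best <= optimum:  # known optimal solution value found: stop early
            break
    return best, iterations_used
```

Under this sketch, a trial that finds the optimum stops immediately; otherwise it consumes the full iteration budget.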
The results of the experiment under the above settings are presented in Table 2. In the table, MUPLA's results on each instance consist of the best-found solution value over the five trials, the average of the five trials' best-found solution values, the average number of iterations used, and the average computational time consumed. For comparison purposes, Table 2 also presents the best-found solution values of TS, GA, and UPLA taken from [5, 6], and [9], respectively. Recall that in the table, TS's results are shown only on the ft06, ft10, ft20, and all 40 la instances, and GA's results are shown only on the ft06, ft20, and all 40 la instances.

The abbreviations used in Table 2 are defined as follows:
(i) Instance and Opt represent the name of each instance and its known optimal solution value given by the literature (e.g., [2]), respectively.
(ii) For each instance, Best represents the best-found solution value of each algorithm. Best of MUPLA was taken from the five trials in this experiment; Bests of TS, GA, and UPLA were taken from [5, 6], and [9], respectively. For each algorithm, %BD represents the percent deviation of Best from Opt.
(iii) For each instance, Avg represents the average of the five trials' best-found solution values of MUPLA. %AD then represents the percent deviation of Avg from Opt.
(iv) Avg Iterations and Avg CPU Time stand for the average number of iterations and the average computational time (in seconds), respectively, consumed by MUPLA until the stopping criterion is met.
(v) In parentheses, Avg Iterations and Avg CPU Time provide the average number of iterations and the approximate average computational time (in seconds), respectively, consumed by MUPLA until no further improvement occurred. This information appears only when at least one trial of MUPLA could not find the known optimal solution before reaching the 5,000^{th} iteration.
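The deviation measures %BD and %AD defined above are straightforward to compute; a minimal sketch, using illustrative made-up trial values rather than actual Table 2 data, is:

```python
def percent_deviation(found, opt):
    """Percent deviation of a found makespan from the known optimum."""
    return 100.0 * (found - opt) / opt

def best_and_avg_deviation(trial_values, opt):
    """%BD uses the best-found value over all trials; %AD uses the average
    of the per-trial best-found values."""
    best = min(trial_values)
    avg = sum(trial_values) / len(trial_values)
    return percent_deviation(best, opt), percent_deviation(avg, opt)

# Illustrative example: five trials on an instance with known optimum 1,000
bd, ad = best_and_avg_deviation([1000, 1000, 1010, 1000, 1005], opt=1000)
# bd = 0.0 (the best trial found the optimum); ad = 0.3
```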
Based on the results in Table 2, Section 4.1 evaluates the best search performance of which MUPLA was capable without a limitation on computational time. Section 4.2 then evaluates MUPLA's search performance over iterations.
4.1. Evaluation of Search Performance without a Computational Time Limitation
As just mentioned, Section 4.1 aims at revealing the best solution quality of which MUPLA was capable without a restriction on computational time. To achieve this aim, MUPLA was executed for an extremely long computational time (i.e., 5,000 iterations) unless it found the known optimal solution earlier. This setting ensured that MUPLA had reached the best solution it possibly could. In the performance evaluation, the %BDs of MUPLA were compared to the %BDs of TS [5], GA [6], and UPLA [9] shown in Table 2. Recall that the %BDs of TS, GA, and UPLA were calculated from the best-found solution values given in their original articles. The use of these %BDs can be considered a limitation of this experiment because TS, GA, and UPLA might find improved solutions if they were run for more computational time than used in their articles. On the other hand, they might not find any improved solution even with much more computational time; this commonly happens to any metaheuristic algorithm whose search gets stuck in a local optimum.
Discussions of the %BDs shown in Table 2 are given hereafter, along with one-sided paired t-tests comparing the mean %BDs; let the significance level (α) equal 0.1. As shown in Table 2, MUPLA performed better than TS [5] in terms of %BD (recall that the performances were compared on only the 43 instances). Over these 43 instances, the average %BD of MUPLA was 0.03%, while the average %BD of TS was 0.06%. MUPLA returned better %BDs than TS on five instances (i.e., la21, la24, la27, la37, and la40), while TS returned a better %BD than MUPLA on only one instance (i.e., la29); on the 37 remaining instances, both returned equal %BDs. In addition, MUPLA returned %BDs of 0.00% on 41 instances, while TS did so on only 37 instances; that is, of the 43 instances, MUPLA found optimal solutions on 41, while TS found optimal solutions on only 37. With α = 0.1, a one-sided paired t-test indicated that the mean %BD of MUPLA was significantly better than the mean %BD of TS (p value = 0.066).
MUPLA also outperformed GA [6] in terms of %BD (recall that the performances were compared on only the 42 instances). Over these 42 instances, the average %BD of MUPLA was 0.03%, while the average %BD of GA was 0.35%. MUPLA returned better %BDs than GA on 13 instances, and both returned equal %BDs on the 29 remaining instances; thus, GA could not return a better %BD than MUPLA on any instance. Moreover, MUPLA returned %BDs of 0.00% on 40 instances, while GA did so on only 29; that is, of the 42 instances, MUPLA found the optimal solutions on 40, while GA found the optimal solutions on only 29. In detail, MUPLA failed to find optimal solutions only on la29 and la40, while GA failed on la29, la40, and 11 other instances. Although both failed on la29 and la40, MUPLA performed much better on these two instances: MUPLA's %BDs on la29 and la40 were 0.95% and 0.16%, respectively, while GA's were 2.34% and 0.90%. A one-sided paired t-test with α = 0.1 indicated that the mean %BD of MUPLA was significantly better than the mean %BD of GA (p value = 0.001).
Based on the %BDs in Table 2, MUPLA outperformed UPLA [9]. Over the total 53 instances, the average %BD of MUPLA was 0.02%, while the average %BD of UPLA was 0.24%. MUPLA returned better %BDs than UPLA on 13 instances, and both returned the same %BDs on the 40 remaining instances; thus, there was no instance where UPLA returned a better %BD than MUPLA. In addition, MUPLA returned %BDs of 0.00% on 51 instances, while UPLA did so on only 40; that is, MUPLA found optimal solutions on 51 instances, while UPLA found optimal solutions on only 40. In detail, MUPLA failed to find optimal solutions only on la29 and la40, while UPLA failed on la29, la40, and 11 other instances. Based on these results, a one-sided paired t-test with α = 0.1 indicated that the mean %BD of MUPLA was significantly better than the mean %BD of UPLA (p value = 0.002).
A summary of the above discussions of %BDs is given below:
(i) Compared with TS [5] on the 43 instances, MUPLA returned better %BDs on five instances, while TS returned a better %BD on only one instance. The average %BD of MUPLA was 0.03%, while the average %BD of TS was 0.06%.
(ii) Compared with GA [6] on the 42 instances, MUPLA returned better %BDs on 13 instances, while GA could not return a better %BD on any instance. The average %BD of MUPLA was 0.03%, while the average %BD of GA was 0.35%.
(iii) Compared with UPLA [9] on the total 53 instances, MUPLA returned better %BDs on 13 instances, while UPLA could not return a better %BD on any instance. The average %BD of MUPLA was 0.02%, while the average %BD of UPLA was 0.24%.
(iv) Based on the one-sided paired t-tests with α = 0.1, the mean %BD of MUPLA was significantly better than the mean %BDs of the other algorithms (with p values of 0.066 for TS, 0.001 for GA, and 0.002 for UPLA).
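The one-sided paired t-tests above compare per-instance %BD differences. A minimal standard-library sketch of the test statistic is shown below; the sample %BD lists are illustrative made-up values, not the Table 2 data, and the resulting t statistic would then be compared against the critical value t_{α, n−1} (or converted to a p value via a t-distribution CDF, e.g., with SciPy's `ttest_rel`).

```python
import math
from statistics import mean, stdev

def paired_t_statistic(x, y):
    """t statistic for a paired test of H1: mean(x) < mean(y).
    A sufficiently negative t supports x having the smaller mean."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    # mean of differences divided by its standard error
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Illustrative per-instance %BDs for two algorithms (made-up values)
mupla_bd = [0.0, 0.0, 0.1, 0.0, 0.2, 0.0]
other_bd = [0.0, 0.3, 0.4, 0.0, 0.5, 0.2]
t = paired_t_statistic(mupla_bd, other_bd)  # negative: the first mean is lower
```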
Regarding the %ADs, Table 2 shows that over all 53 instances, MUPLA returned %ADs of 0.00% on 45 instances; that is, MUPLA found the optimal solution in every single trial for those 45 instances. On the eight remaining instances, the %ADs of MUPLA were also very low; notice that the highest %AD, belonging to la29, was only 1.02%. This indicates that MUPLA performed very well overall, not only in its best trials. Moreover, the %ADs were equal or almost equal to the %BDs on all instances, which emphasizes that MUPLA maintained its high performance with good consistency from one trial to another.
4.2. Evaluation of Search Performance over Iterations
In this section, the solution convergence rates of MUPLA were plotted in order to evaluate MUPLA's search performance over iterations (recall that a solution convergence rate is usually presented as a scatter plot showing the relationship between the solution quality and the number of iterations used). As shown in Figures 2 and 3, respectively, %AD-over-iteration plots (i.e., plots of %ADs over iterations) and %BD-over-iteration plots (i.e., plots of %BDs over iterations) were drawn for eight instances, i.e., la21, la24, la27, la29, la38, la40, orb2, and orb6. These eight instances were selected because their %ADs in Table 2 were greater than 0% (i.e., for each of these instances, at least one trial of MUPLA could not find the known optimal solution). The total number of iterations in each figure was only 1,500 (not 5,000) because the %ADs and %BDs, in general, hardly changed after the 1,500^{th} iteration.
To simplify the analysis, the information in Figures 2 and 3 was combined into Figure 4. At each iteration, the %ADs and %BDs on the eight instances from Figures 2 and 3 were averaged into the avg %AD and avg %BD, respectively, resulting in the avg-%AD-over-iteration plot and the avg-%BD-over-iteration plot in Figure 4. For comparison purposes, Figure 4 also presents the plots of TS's, GA's, and UPLA's final avg %BDs, which represent the average %BDs of the final solutions of TS, GA, and UPLA on the eight instances (i.e., 0.31%, 1.38%, and 1.41%, respectively). They were used to identify the lowest numbers of iterations at which MUPLA returned better solutions than the final solutions of TS, GA, and UPLA.
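The per-iteration averaging described above is a simple element-wise mean across instances; a minimal sketch with made-up curves (not the Figure 2/3 data) is:

```python
def average_curve(per_instance_curves):
    """Average several per-instance %AD (or %BD) curves, iteration by
    iteration, into a single avg-over-iteration curve.
    `per_instance_curves` is a list of equal-length lists, one per instance."""
    n = len(per_instance_curves)
    return [sum(values) / n for values in zip(*per_instance_curves)]

# Illustrative curves for two instances over four iterations (made-up values)
avg = average_curve([[4.0, 2.0, 1.0, 1.0],
                     [6.0, 2.0, 1.0, 0.0]])
# avg == [5.0, 2.0, 1.0, 0.5]
```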
In Figure 4, the avg-%AD-over-iteration and avg-%BD-over-iteration plots can be divided into three periods based on their reduction rates. The first period ran roughly from the first iteration to the 160^{th} iteration, where the avg %AD and avg %BD were reduced sharply. The second period ran roughly from the 160^{th} iteration to the 700^{th} iteration, where the avg %AD and avg %BD were reduced relatively quickly but more slowly than in the first period. The third period ran roughly from the 700^{th} iteration onwards, where the avg %AD and avg %BD were reduced very slowly. Since a higher number of iterations results in a higher computational time, a proper maximum iteration helps MUPLA balance solution quality against computational time. Based on the two plots, MUPLA's maximum iteration should be set between the 160^{th} and the 700^{th} iteration. At the extremes, the 160^{th} iteration should be used when computational time is the main concern, while the 700^{th} iteration should be used when solution quality is the main concern. For any point between these two extremes, a higher possibility of finding a better solution must be traded off against a greater number of iterations used.
For further analysis of Figure 4, the avg-%AD-over-iteration and avg-%BD-over-iteration plots were compared to the final avg %BDs of TS, GA, and UPLA. For each algorithm, the final avg %BD is the average of the %BDs of its final solutions on the eight instances taken from Table 2. In Figure 4, the avg-%BD-over-iteration plot of MUPLA fell below the final avg %BDs of UPLA, GA, and TS at the eighth, ninth, and 484^{th} iterations, respectively; that is, MUPLA's best trial (of the five trials) provided better solutions than the final solutions of UPLA, GA, and TS at those iterations. Moreover, the avg-%AD-over-iteration plot fell below the final avg %BDs of UPLA and GA at the 35^{th} and 39^{th} iterations, respectively; roughly interpreted, MUPLA could usually return better solutions than the final solutions of UPLA and GA within its first 40 iterations. These findings complement those of the previous paragraph: MUPLA could return acceptable-quality solutions within its first 10 iterations and high-quality solutions within its first 500 iterations, and it was still able to find improved solutions after the 500^{th} iteration.
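Locating the iteration at which MUPLA's curve first falls below a competitor's final avg %BD, as done above, amounts to a simple scan; a minimal sketch with made-up curve values is:

```python
def first_iteration_below(curve, threshold):
    """Return the 1-based index of the first point on an avg-%BD-over-iteration
    curve that falls below a competitor's final avg %BD, or None if the curve
    never drops below that threshold."""
    for t, value in enumerate(curve, start=1):
        if value < threshold:
            return t
    return None

# Illustrative curve (made-up values) against GA's final avg %BD of 1.38
it = first_iteration_below([4.8, 2.1, 1.5, 1.2, 0.9], 1.38)
# it == 4
```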
In addition, Figure 4 also reveals the percentage enhancement of LOSAP's search performance obtained from MUPLA. Recall that at MUPLA's first iteration, LOSAP's input-parameter values were generated fully at random; that is, at the first iteration, LOSAP performed with its own search ability, without any support from the upper-level algorithm yet. In Figure 4, the avg %AD at the first iteration was 4.80%. After passing through MUPLA's evolutionary process to the 100^{th} iteration, the avg %AD became 1.00%; at the 500^{th} iteration, it was further reduced to 0.56%. This improvement in the avg %AD is strong evidence of the enhancement in LOSAP's search performance obtained from MUPLA. The same conclusion can be drawn from the avg-%BD-over-iteration plot.
5. Conclusion
The proposed two-level metaheuristic algorithm is composed of an upper-level algorithm and a lower-level algorithm, named MUPLA and LOSAP, respectively. In the upper level, MUPLA is a population-based search algorithm developed for controlling LOSAP's input parameters. In the lower level, LOSAP is a local search algorithm with a probabilistic hybrid neighborhood structure. The working relation between MUPLA and LOSAP is, in brief, as follows: MUPLA starts a repeated loop by generating LOSAP's input-parameter values; LOSAP then uses these values to return its best-found JSP solution; this best-found solution becomes feedback sent back to MUPLA for improving LOSAP's input-parameter values, and MUPLA starts the next round of the loop. MUPLA combined with LOSAP performs like an adaptive multi-start iterated local search algorithm. The experiment's results indicated that the proposed two-level metaheuristic algorithm (i.e., MUPLA combined with LOSAP) outperformed its original variant (i.e., UPLA combined with LOLA) and the two other high-performing algorithms in terms of solution quality. Based on the convergence rates, the maximum iteration of the two-level metaheuristic algorithm is recommended to be set between the 160^{th} and the 700^{th} iteration. A future study should be conducted to enhance the performance of the two-level metaheuristic algorithm for JSP and other related problems. Another interesting future study is to modify MUPLA (uncombined with LOSAP) to serve as a lower-level algorithm, so that a two-level metaheuristic algorithm using MUPLA in both levels could possibly be developed.
Data Availability
The data used to support the findings of this study are available from the author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The author acknowledges partial financial support for publication from the ThaiNichi Institute of Technology, Thailand.
References
[1] B. Peng, Z. Lü, and T. C. E. Cheng, “A tabu search/path relinking algorithm to solve the job shop scheduling problem,” Computers & Operations Research, vol. 53, pp. 154–164, 2015.
[2] J. F. Gonçalves and M. G. C. Resende, “An extended Akers graphical method with a biased random-key genetic algorithm for job-shop scheduling,” International Transactions in Operational Research, vol. 21, no. 2, pp. 215–246, 2014.
[3] N. H. Moin, O. C. Sin, and M. Omar, “Hybrid genetic algorithm with multiparents crossover for job shop scheduling problems,” Mathematical Problems in Engineering, vol. 2015, Article ID 210680, 12 pages, 2015.
[4] P. Pongchairerks and V. Kachitvichyanukul, “A two-level particle swarm optimisation algorithm on job-shop scheduling problems,” International Journal of Operational Research, vol. 4, no. 4, pp. 390–411, 2009.
[5] E. Nowicki and C. Smutnicki, “A fast taboo search algorithm for the job shop problem,” Management Science, vol. 42, no. 6, pp. 797–813, 1996.
[6] H. Piroozfard, K. Y. Wong, and A. Hassan, “A hybrid genetic algorithm with a knowledge-based operator for solving the job shop scheduling problems,” Journal of Optimization, vol. 2016, Article ID 7319036, 13 pages, 2016.
[7] A. S. Jain and S. Meeran, “Deterministic job-shop scheduling: past, present and future,” European Journal of Operational Research, vol. 113, no. 2, pp. 390–434, 1999.
[8] J. Błazewicz, W. Domschke, and E. Pesch, “The job shop scheduling problem: conventional and new solution techniques,” European Journal of Operational Research, vol. 93, no. 1, pp. 1–33, 1996.
[9] P. Pongchairerks, “A two-level metaheuristic algorithm for the job-shop scheduling problem,” Complexity, vol. 2019, Article ID 8683472, 11 pages, 2019.
[10] Q. Luo, Y. Zhou, J. Xie, M. Ma, and L. Li, “Discrete bat algorithm for optimal problem of permutation flow shop scheduling,” The Scientific World Journal, vol. 2014, Article ID 630280, 15 pages, 2014.
[11] M. N. Janardhanan, Z. Li, P. Nielsen, and Q. Tang, “Artificial bee colony algorithms for two-sided assembly line worker assignment and balancing problem,” in Advances in Intelligent Systems and Computing, vol. 620, pp. 11–18, Springer, Cham, Switzerland, 2018.
[12] S.-J. Wu and P.-T. Chow, “Genetic algorithms for nonlinear mixed discrete-integer optimization problems via meta-genetic parameter optimization,” Engineering Optimization, vol. 24, no. 2, pp. 137–159, 1995.
[13] P. Cortez, M. Rocha, and J. Neves, “A meta-genetic algorithm for time series forecasting,” in Proceedings of the Workshop on Artificial Intelligence Techniques for Financial Time Series Analysis, pp. 21–31, Porto, Portugal, December 2001.
[14] S. Luke and A. K. A. Talukder, “Is the meta-EA a viable optimization method?” in Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, pp. 1533–1540, Amsterdam, Netherlands, July 2013.
[15] S. Wink, T. Bäck, and M. Emmerich, “A meta-genetic algorithm for solving the capacitated vehicle routing problem,” in Proceedings of the 2012 IEEE Congress on Evolutionary Computation, pp. 1–8, Brisbane, Australia, June 2012.
[16] H. R. Lourenço, “Job-shop scheduling: computational study of local search and large-step optimization methods,” European Journal of Operational Research, vol. 83, no. 2, pp. 347–364, 1995.
[17] T. Yamada and R. Nakano, “Job-shop scheduling,” in Genetic Algorithms in Engineering Systems, A. M. S. Zalzala and P. J. Fleming, Eds., pp. 134–160, Institution of Electrical Engineers, London, UK, 1997.
[18] T. Yamada and R. Nakano, “A fusion of crossover and local search,” in Proceedings of the IEEE International Conference on Industrial Technology, pp. 426–430, Shanghai, China, December 1996.
[19] L. Özdamar, “A genetic algorithm approach to a general category project scheduling problem,” IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), vol. 29, no. 1, pp. 44–59, 1999.
[20] P. Pongchairerks, “Forward VNS, reverse VNS, and multi-VNS algorithms for job-shop scheduling problem,” Modelling and Simulation in Engineering, vol. 2016, Article ID 5071654, 15 pages, 2016.
[21] M. Sakawa, Genetic Algorithms and Fuzzy Multiobjective Optimization, Kluwer Academic Publishers, Boston, MA, USA, 2001.
[22] E. Balas and A. Vazacopoulos, “Guided local search with shifting bottleneck for job shop scheduling,” Management Science, vol. 44, no. 2, pp. 262–275, 1998.
[23] H. R. Lourenço and M. Zwijnenburg, “Combining the large-step optimization with tabu-search: application to the job-shop scheduling problem,” in Meta-Heuristics: Theory & Applications, I. H. Osman and J. P. Kelly, Eds., pp. 219–236, Springer, Boston, MA, USA, 1996.
[24] H. R. Lourenço, O. C. Martin, and T. Stützle, “A beginner’s introduction to iterated local search,” in Proceedings of the 4th Metaheuristics International Conference, pp. 1–6, Porto, Portugal, July 2001.
[25] H. R. Lourenço, O. C. Martin, and T. Stützle, “Iterated local search,” in International Series in Operations Research and Management Science, vol. 57, pp. 321–354, Springer, Boston, MA, USA, 2003.
[26] J. Michallet, C. Prins, L. Amodeo, F. Yalaoui, and G. Vitry, “Multi-start iterated local search for the periodic vehicle routing problem with time windows and time spread constraints on services,” Computers & Operations Research, vol. 41, pp. 196–207, 2014.
[27] S. Kande, C. Prins, L. Belgacem, and B. Redon, “Multi-start iterated local search for two-echelon distribution network for perishable products,” in Proceedings of the International Conference on Operations Research and Enterprise Systems, pp. 294–303, Lisbon, Portugal, January 2015.
[28] M. Avci and S. Topaloglu, “A multi-start iterated local search algorithm for the generalized quadratic multiple knapsack problem,” Computers & Operations Research, vol. 83, pp. 54–65, 2017.
[29] C.-W. Chiou and M.-C. Wu, “A GA-Tabu algorithm for scheduling in-line steppers in low-yield scenarios,” Expert Systems with Applications, vol. 36, no. 9, pp. 11925–11933, 2009.
[30] T. Davidović, P. Hansen, and N. Mladenović, “Scheduling by VNS: experimental analysis,” in Proceedings of the Yugoslav Symposium on Operations Research, pp. 319–322, Belgrade, Serbia, October 2001.
[31] M. Marmion, F. Mascia, M. López-Ibáñez, and T. Stützle, “Automatic design of hybrid stochastic local search algorithms,” in Lecture Notes in Computer Science, vol. 7919, pp. 144–158, Springer, Berlin, Germany, 2013.
[32] Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69–73, Anchorage, AK, USA, May 1998.
[33] R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[34] M. Neshat, G. Sepidnam, M. Sargolzaei, and A. N. Toosi, “Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications,” Artificial Intelligence Review, vol. 42, no. 4, pp. 965–997, 2014.
[35] X.-S. Yang and S. Deb, “Engineering optimisation by cuckoo search,” International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
[36] Y. Crama, A. W. J. Kolen, and E. J. Pesch, “Local search in combinatorial optimization,” in Lecture Notes in Computer Science, vol. 931, pp. 157–174, Springer, Berlin, Germany, 1995.
[37] J. B. Orlin, A. P. Punnen, and A. S. Schulz, “Approximate local search in combinatorial optimization,” SIAM Journal on Computing, vol. 33, no. 5, pp. 1201–1214, 2004.
[38] W. Michiels, E. Aarts, and J. Korst, Theoretical Aspects of Local Search, Springer, Berlin, Germany, 2007.
[39] E. Pesch, Learning in Automated Manufacturing: A Local Search Approach, Physica-Verlag, Heidelberg, Germany, 1994.
[40] R. Cheng, M. Gen, and Y. Tsujimura, “A tutorial survey of job-shop scheduling problems using genetic algorithms–I. Representation,” Computers & Industrial Engineering, vol. 30, no. 4, pp. 983–997, 1996.
[41] M. Gen and R. Cheng, Genetic Algorithms and Engineering Design, John Wiley & Sons, New York, NY, USA, 1997.
[42] C. Bierwirth, “A generalized permutation approach to job shop scheduling with genetic algorithms,” OR Spektrum, vol. 17, no. 2-3, pp. 87–92, 1995.
[43] M. Gen, Y. Tsujimura, and E. Kubota, “Solving job-shop scheduling problems by genetic algorithm,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 1577–1582, San Antonio, TX, USA, October 1994.
[44] Y. Tsujimura, Y. Mafune, and M. Gen, “Introducing co-evolution and sub-evolution processes into genetic algorithm for job-shop scheduling,” in Proceedings of the 26th Annual Conference of the IEEE Industrial Electronics Society, pp. 2827–2830, Nagoya, Japan, October 2000.
[45] W. Bożejko and M. Makuchowski, “Solving the no-wait job-shop problem by using genetic algorithm with automatic adjustment,” International Journal of Advanced Manufacturing Technology, vol. 57, no. 5–8, pp. 735–752, 2011.
[46] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice-Hall, Upper Saddle River, NJ, USA, 3rd edition, 2009.
[47] H. Fisher and G. L. Thompson, “Probabilistic learning combinations of local job-shop scheduling rules,” in Industrial Scheduling, J. F. Muth and G. L. Thompson, Eds., pp. 225–251, Prentice-Hall, Englewood Cliffs, NJ, USA, 1963.
[48] S. Lawrence, Resource Constrained Project Scheduling: An Experimental Investigation of Heuristic Scheduling Techniques (Supplement), Carnegie Mellon University, Pittsburgh, PA, USA, 1984.
[49] D. Applegate and W. Cook, “A computational study of the job-shop scheduling problem,” ORSA Journal on Computing, vol. 3, no. 2, pp. 149–156, 1991.
Copyright
Copyright © 2020 Pisut Pongchairerks. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.