An Enhanced Two-Level Metaheuristic Algorithm with Adaptive Hybrid Neighborhood Structures for the Job-Shop Scheduling Problem

Pisut Pongchairerks

Research Article | Open Access
Complexity, vol. 2020, Article ID 3489209, 15 pages, 2020. https://doi.org/10.1155/2020/3489209

Academic Editor: José Manuel Galán
Received: 25 Jan 2020. Revised: 17 Apr 2020. Accepted: 27 Apr 2020. Published: 28 Jun 2020.

Abstract

For solving the job-shop scheduling problem (JSP), this paper proposes a novel two-level metaheuristic algorithm in which the upper-level algorithm controls the input parameters of the lower-level algorithm. The lower-level algorithm is a local search algorithm that searches for an optimal JSP solution within a hybrid neighborhood structure. To generate each neighbor solution, it randomly applies one of two neighbor operators according to a given probability. The upper-level algorithm is a population-based search algorithm developed to control the five input parameters of the lower-level algorithm, i.e., a perturbation operator, a scheduling direction, an ordered pair of two neighbor operators, a probability of selecting a neighbor operator, and a start solution-representing permutation. Many operators are proposed in this paper as options for the perturbation and neighbor operators. Under the control of the upper-level algorithm, the lower-level algorithm evolves in both its input-parameter values and its neighborhood structure. Moreover, because the perturbation operator and the start solution-representing permutation are controlled, the two-level metaheuristic algorithm performs like a multistart iterated local search algorithm. The experimental results indicated that the two-level metaheuristic algorithm outperformed its previous variant and two other high-performing algorithms in terms of solution quality.

1. Introduction

Production scheduling is an important tool for controlling and optimizing workloads in an industrial production system. It is a decision-making process that assigns jobs to machines on a timetable. The job-shop scheduling problem (JSP) is one of the well-known production scheduling problems. It is also a highly complex optimization problem in both theoretical and practical aspects. The objective of JSP is commonly to find a feasible schedule that completes all jobs with the shortest makespan (note that the makespan is the length of the schedule). Approximation algorithms, such as [1–6], have been developed for JSP since the problem cannot be solved optimally in a reasonable (polynomial) amount of time [7, 8]. The two-level metaheuristic algorithm of [9] is one of the high-performing approximation algorithms for JSP. This algorithm is, as its name implies, a combination of its upper-level algorithm (named UPLA) and its lower-level algorithm (named LOLA). Its mechanism is that UPLA controls the input parameters of LOLA, and LOLA then searches for an optimal schedule. Due to the successful results of [9], this paper aims to develop an enhanced two-level metaheuristic algorithm for JSP. To do so, the two-level metaheuristic algorithm proposed in this paper has been changed from its original variant [9] at both levels.

The lower-level algorithm proposed in this paper, named LOSAP, is a local search algorithm exploring a probabilistic-based hybrid neighborhood structure. To generate each neighbor solution, LOSAP randomly uses one of the two predetermined neighbor operators (i.e., the first and second operators) according to a preassigned probability of selecting the first operator. A high value of this probability makes the hybrid neighborhood structure more similar to the first operator's neighborhood structure, and vice versa. Previous successful applications of randomly using one of two different operators can be found in [10, 11]. The major differences between LOSAP and LOLA [9] are briefly presented as follows. LOLA's search ability is mainly based on its special solution space, which is a solution space of parameterized-active schedules. LOSAP's search ability is, however, mainly based on its hybrid neighborhood structure, since it searches an ordinary solution space of semiactive schedules. LOSAP also has many proposed operators as options for its perturbation and neighbor operators; most of them were not used in LOLA. In addition, LOSAP uses a different criterion from LOLA for accepting a new best-found solution.

Although LOSAP and LOLA have many differences, as mentioned above, they still share a common weakness: no single combination of input-parameter values performs best on all instances. In other words, a combination of input-parameter values that performs best on one instance may not perform best on another. For each instance, each algorithm has a specific best combination of input-parameter values; however, this combination cannot be known in advance without running an experiment on the instance under consideration. This weakness is, in fact, common to most metaheuristic algorithms. To overcome it, past research, e.g., [4, 9, 12–15], developed an upper-level metaheuristic algorithm to control the input parameters of a problem-solving metaheuristic algorithm at the lower level.

The upper-level algorithm proposed in this paper, named MUPLA, is a modification of UPLA (i.e., the upper-level algorithm of [9]). MUPLA is a population-based metaheuristic algorithm designed to be a parameter controller for LOSAP. Thus, its population consists of a number of combinations of LOSAP's input-parameter values, which are evolved (updated) over iterations. For short, let a combination of LOSAP's input-parameter values be called a parameter-value combination. Each parameter-value combination contains specific values of the perturbation operator, the scheduling direction, the ordered pair of two neighbor operators, the probability of selecting a neighbor operator, and the start solution-representing permutation. A major change of MUPLA from UPLA [9] is that each parameter-value combination has a start solution-representing permutation different from those of the others. As a consequence, MUPLA combined with LOSAP acts as a multistart iterated local search algorithm. This is a large upgrade because UPLA combined with LOLA [9] is just an iterated local search algorithm.

The remainder of this paper is divided into four main sections. Section 2 provides an overview of the relevant publications of the research topic. Section 3 describes the proposed two-level metaheuristic algorithm in both levels. Thus, the lower-level algorithm (LOSAP) and the upper-level algorithm (MUPLA) are described in Sections 3.1 and 3.2, respectively. Then, Section 4 presents the results and discussions on the evaluation of the two-level metaheuristic algorithm’s performance. Section 5 finally summarizes the findings and recommendations.

2. Literature Review

The job-shop scheduling problem (JSP) comes with n given jobs J1, J2,…, Jn and m given machines M1, M2,…, Mm. Each job Ji is composed of a sequence of m given operations Oi1, Oi2,…, Oim forming a chain of precedence constraints. In processing each job Ji, Oij (where j = 1, 2, …, m − 1) is defined as the immediate-preceding operation of Oik (where k = j + 1); thus, Oij must be finished before Oik can start. In addition, each operation must be processed on a preassigned machine with a predetermined processing time. Each machine cannot process more than one operation at a time, and it cannot be stopped or paused while processing an operation. All n given jobs arrive at time 0, and all m given machines are also available at time 0. A schedule is feasible if it completely allocates all n jobs under all the given constraints. An optimal schedule is a feasible schedule that minimizes the makespan, i.e., the total amount of time required to complete all jobs. Excellent reviews of JSP are available in [7, 8, 16, 17].
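
To make the notation concrete, the following is a minimal sketch (in Python, not taken from the paper) of how an n-job/m-machine JSP instance can be represented; the 3-job/2-machine data are invented purely for illustration.

```python
# Hypothetical illustration: a JSP instance as a nested list, where
# jobs[i][j] = (machine index, processing time) of operation O(i+1)(j+1).
# Each job visits every machine exactly once, in its own fixed order.
jobs = [
    [(0, 3), (1, 2)],  # job J1: O11 on M1 for 3 units, then O12 on M2 for 2
    [(1, 4), (0, 1)],  # job J2: O21 on M2 for 4 units, then O22 on M1 for 1
    [(0, 2), (1, 3)],  # job J3: O31 on M1 for 2 units, then O32 on M2 for 3
]
n, m = len(jobs), len(jobs[0])  # n jobs, m machines (m operations per job)
```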

In JSP, feasible schedules can be constructed in either the forward or the backward (reverse) direction. A forward schedule is a schedule constructed in the forward direction, while a backward (reverse) schedule is one constructed in the backward direction. In other words, a forward schedule is a schedule in which every job Ji (where i = 1, 2, …, n) is constructed forward from Oi1 to Oim, while a backward schedule is one in which every job Ji is constructed backward from Oim to Oi1. Although forward scheduling is commonly used for the makespan criterion, backward scheduling has been applied as an alternative in many studies, e.g., [18–20]. Besides the classification based on scheduling directions, feasible schedules can be classified based on their allowable delay times, e.g., semiactive schedules and active schedules [17, 21]. A feasible schedule is defined as a semiactive schedule if no operation can be started earlier without altering an operation sequence on any machine. A semiactive schedule is then defined as an active schedule if no operation can be started earlier without delaying any other operation or violating any precedence constraint.

Many approximation algorithms based on metaheuristics have been developed for solving JSP. Iterated local search, a well-known type of metaheuristic algorithm, has also been applied to JSP [22, 23]. In general, an iterated local search algorithm is a single-solution-based local search technique that can search for a global optimal solution. During an exploration, it uses a neighbor operator repeatedly to find a local optimum and then uses a perturbation operator to escape the just-found local optimum (note that a perturbation operator is an operator that generates a new start solution by largely modifying a found local optimal solution [24, 25]). Some studies, such as [26–28], have enhanced their iterated local search algorithms by adding multistart properties.

In iterated local search and related algorithms, three operators are commonly used as neighbor and perturbation operators: the common swap, insert, and inverse operators [29]. Some iterated local search algorithms, such as [30, 31], use the common swap operator or the common insert operator multiple times as their perturbation operators. The definitions of the three common operators are given as follows (let u and v be random integers from 1 to the number of all members in the permutation, with v ≠ u):
(i) The common swap operator swaps the two members in the uth and vth positions of a permutation.
(ii) The common insert operator removes the member from the uth position of a permutation and then inserts it back at the vth position.
(iii) The common inverse operator inverts the sequence of all members from the uth to the vth position of a permutation.
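
As an illustration (not code from the paper), the three common operators could be implemented in Python as follows; u and v are drawn as described above, except that positions here are 0-based as is idiomatic in Python.

```python
import random

def common_swap(perm):
    """Swap the members at two distinct random positions u and v."""
    p = perm[:]
    u, v = random.sample(range(len(p)), 2)  # guarantees v != u
    p[u], p[v] = p[v], p[u]
    return p

def common_insert(perm):
    """Remove the member at position u and reinsert it at position v."""
    p = perm[:]
    u, v = random.sample(range(len(p)), 2)
    member = p.pop(u)
    p.insert(v, member)
    return p

def common_inverse(perm):
    """Reverse the subsequence of members between positions u and v."""
    p = perm[:]
    u, v = sorted(random.sample(range(len(p)), 2))
    p[u:v + 1] = reversed(p[u:v + 1])
    return p
```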

The two-level metaheuristic algorithm of [9] can be classified as an adaptive iterated local search algorithm for JSP. Its upper-level algorithm (named UPLA) controls the input parameters of its lower-level algorithm (named LOLA), which then searches for an optimal schedule. Thus, the two-level metaheuristic algorithm can adapt itself to every single JSP instance. The development of the two-level metaheuristic algorithm of [9] followed the successes of previous research [4, 12–15] in using a metaheuristic algorithm to control the parameters of another metaheuristic algorithm.

In [9], LOLA is a local search algorithm exploring a solution space of parameterized-active schedules. Its input parameters (i.e., an acceptable idle-time limit, a scheduling direction, a perturbation operator, and a neighbor operator) are controlled by UPLA. UPLA is a population-based metaheuristic algorithm searching in a real-number search space. From this viewpoint, UPLA is similar to other population-based algorithms, such as particle swarm optimization [32], differential evolution [33], fish swarm [34], and cuckoo search [35]. However, the procedure for evolving UPLA's population differs from those of the algorithms just mentioned. UPLA's population consists of combinations of input-parameter values of LOLA. For a parameter-value combination, each parameter's value is iteratively changed by the sum of two changeable opposite-direction vectors. The first vector's direction is toward the memorized best-found value, whereas the second vector's direction is away from it. The magnitudes of these two vectors are generated randomly between zero and their given maximum values. The first vector's maximum magnitude (0.05) is usually larger than the second vector's maximum magnitude (0.01). However, if the parameter's value equals the memorized best-found value, then the maximum magnitudes of both vectors equal the same value (0.01).

3. Method

Section 3 describes the procedure of the proposed two-level metaheuristic algorithm in both levels. In this section, the lower-level algorithm (LOSAP) is described in Section 3.1, and the upper-level algorithm (MUPLA) is described in Section 3.2.

3.1. Proposed Lower-Level Algorithm

The lower-level algorithm proposed in this paper, named LOSAP, is a local search algorithm searching for an optimal solution within a probabilistic-based hybrid neighborhood structure. Its framework is similar to those of other local search algorithms [36–39]. However, its neighborhood structure is generated based on two predetermined operators, i.e., the first and second neighbor operators. With a given probability, LOSAP randomly uses one of the two predetermined operators to generate a neighbor solution-representing permutation. This means that, based on the given probability, LOSAP can switch between the two given operators at any time during its exploration. In this paper, many operators are proposed as options for LOSAP's neighbor operators (note that successes of randomly using one of two neighbor operators have been reported for different algorithms, e.g., [10, 11]).

LOSAP generates a hybrid neighborhood structure between the first operator's neighborhood structure and the second operator's. The hybridization is controlled by the probability of selecting the first neighbor operator, which is one of LOSAP's input parameters (the probability of selecting the second neighbor operator, as its complement, is unity minus the probability of selecting the first operator). Note that the higher the probability of selecting the first operator, the more the hybrid neighborhood structure resembles the first operator's neighborhood structure. Equivalently, the lower the probability of selecting the first operator, the more the hybrid neighborhood structure resembles the second operator's neighborhood structure. At the boundaries, probability values of 1.00 and 0.00 make the hybrid neighborhood structure identical to the first operator's and the second operator's neighborhood structures, respectively.

The probability of selecting the first neighbor operator is not LOSAP's only input parameter. LOSAP has five input parameters in total: the perturbation operator, the scheduling direction, the ordered pair of the first and second neighbor operators, the probability of selecting the first neighbor operator, and the start operation-based permutation. LOSAP provides many options for setting each input parameter so that LOSAP with proper parameter values can perform well on every single instance. Below, Section 3.1.1 describes Algorithm 1, the decoding procedure used by LOSAP. In detail, Algorithm 1 is the procedure that transforms each solution-representing permutation generated by LOSAP into a semiactive schedule. Section 3.1.2 then describes Algorithm 2, the procedure of LOSAP.

Algorithm 1 (decoding procedure):
Step 1. Receive an operation-based permutation and a scheduling direction (forward or backward) from LOSAP (Algorithm 2).
Step 2. If the scheduling direction received in Step 1 is forward, then the precedence relations of the operations of each job are unchanged. However, if it is backward, then the precedence relations of the operations must be reversed by using Steps 2.1 and 2.2.
  Step 2.1. For each job Ji (where i = 1, 2, …, n), let the operations Oi1, Oi2,…, Oim be renamed Oim, Oim–1,…, Oi1, respectively.
  Step 2.2. Assign the precedence relations of all operations Oij (where j = 1, 2, …, m) of each job Ji in ascending order of j values. Thus, Oi1 must be finished before Oi2 can start, Oi2 must be finished before Oi3 can start, and so on.
Step 3. If the scheduling direction received in Step 1 is forward, then the operation-based permutation received in Step 1 is unchanged. However, if it is backward, then the order of all members in the operation-based permutation must be reversed. For example, the permutation (3, 2, 3, 1, 1, 2) with a backward scheduling direction must be changed into (2, 1, 1, 3, 2, 3).
Step 4. Transform the permutation taken from Step 3 by changing the number i in its jth occurrence (from left to right) into the operation Oij (i = 1, 2, …, n and j = 1, 2, …, m). For example, the permutation (3, 2, 3, 1, 1, 2) must be transformed into (O31, O21, O32, O11, O12, O22).
Step 5. Transform the permutation taken from Step 4 into a semiactive schedule by using Steps 5.1 to 5.3.
  Step 5.1. Let Φt be the partial schedule of the t scheduled operations. Thus, Φ0 is empty. Now, let t ⟵ 1.
  Step 5.2. Let OL ⟵  the leftmost as-yet-unscheduled operation in the permutation. Then, create Φt by scheduling OL into Φt–1 at its earliest possible start time on its preassigned machine (the earliest possible start time of OL is the maximum between the finished time of its immediate-preceding operation in its job and the finished time of the current last-scheduled operation on its machine).
  Step 5.3. If t < mn, then let t ⟵ t + 1 and repeat from Step 5.2. Otherwise, go to Step 6.
Step 6. If the scheduling direction received in Step 1 is forward, then return Φmn (which is a completed forward semiactive schedule) as the final result. However, if it is backward, then modify Φmn to satisfy the original precedence relations by using Steps 6.1 and 6.2.
  Step 6.1. Let the operations Oim, Oim–1,…, Oi1 of each job Ji in the schedule Φmn be renamed Oi1, Oi2,…, Oim, respectively.
  Step 6.2. Turn the schedule modified from Step 6.1 back to front in order that the last-finished operation in the schedule becomes the first-started operation, and so on. After that, let the schedule be started at the time 0. Then, return the schedule modified in this step (which is a completed backward semiactive schedule) as the final result.
Algorithm 2 (LOSAP procedure):
Step 1. Receive the input-parameter values from MUPLA (Algorithm 3) via Steps 1.1 to 1.5.
  Step 1.1. Receive PTBT ∈ {n-medium swap, n-large swap, n-medium inverse, n-large insert, n-medium insert}.
  Step 1.2. Receive SD ∈ {forward, backward}.
  Step 1.3. Receive PNO ≡ (NO1, NO2) ∈ {(1-small inverse, 1-medium insert), (1-large swap, 1-large insert), (1-medium swap, 1-medium insert), (1-small swap, 1-small insert)}.
  Step 1.4. Receive PROB ∈ all real numbers within [0, 1].
  Step 1.5. Receive P ∈ all possible operation-based permutations (i.e., all permutations with m repetitions of the numbers 1, 2, …, n).
Step 2. Generate an initial P0 by using PTBT on P.
Step 3. Execute Algorithm 1 by inputting SD and P0 in order to receive S0.
Step 4. Find a local optimal schedule by using Steps 4.1 to 4.4.
  Step 4.1. Let tL ⟵ 0.
  Step 4.2. Randomly generate u from [0, 1]. If u ≤ PROB, then generate P1 by using NO1 on P0; otherwise, generate P1 by using NO2 on P0.
  Step 4.3. Execute Algorithm 1 by inputting SD and P1 in order to receive S1.
  Step 4.4. Update P0, S0, and tL by using Steps 4.4.1 to 4.4.3.
   Step 4.4.1. If Makespan (S1) < Makespan (S0), then let P0 ⟵ P1 and S0 ⟵ S1, and repeat from Step 4.1.
   Step 4.4.2. If Makespan (S1) = Makespan (S0), then let P0 ⟵ P1 and S0 ⟵ S1, and repeat from Step 4.2.
   Step 4.4.3. If Makespan (S1) > Makespan (S0), then let tL ⟵ tL + 1. After that, if tL < (mn)^2, then repeat from Step 4.2; otherwise, go to Step 5.
Step 5. Return P0 and S0 as the final (best-found) operation-based permutation and the final (best-found) schedule, respectively.
3.1.1. Decoding Procedure

LOSAP searches in a solution space of semiactive schedules, and it uses an operation-based permutation [40–43] to represent a semiactive schedule. An operation-based permutation has been used to represent a semiactive schedule in many studies, such as [20, 43, 44]. For an n-job/m-machine instance, an operation-based permutation is a permutation with m repetitions of the numbers 1, 2, …, n. In the permutation, the number i (where i = 1, 2, …, n) in its jth occurrence from left to right (where j = 1, 2, …, m) represents the operation Oij. A semiactive schedule is then constructed by scheduling all operations in the order given by the permutation; in addition, each operation must be scheduled at its earliest possible start time on its preassigned machine. For example, the permutation (3, 2, 3, 1, 1, 2) represents the schedule in which the operations O31, O21, O32, O11, O12, and O22 are sequentially scheduled at their earliest possible start times on their preassigned machines. The earliest possible start time of a specific operation is the maximum of the finish time of its immediate-preceding operation in its job and the finish time of the currently last-scheduled operation on its machine.
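
A minimal Python sketch of this forward decoding, reusing the instance representation assumed in Section 2, might look as follows; it returns the operation start times and the makespan.

```python
def decode_forward(perm, jobs):
    """Decode an operation-based permutation (1-based job numbers, e.g.,
    [3, 2, 3, 1, 1, 2]) into a forward semiactive schedule."""
    n, m = len(jobs), len(jobs[0])
    job_ready = [0] * n   # finish time of each job's last scheduled operation
    mach_ready = [0] * m  # finish time of the last operation on each machine
    next_op = [0] * n     # index j of each job's next unscheduled operation
    starts = {}
    for number in perm:
        i = number - 1
        j = next_op[i]
        machine, ptime = jobs[i][j]
        # Earliest possible start: max of the job-predecessor's finish time
        # and the machine's last finish time.
        start = max(job_ready[i], mach_ready[machine])
        starts[(i, j)] = start
        job_ready[i] = mach_ready[machine] = start + ptime
        next_op[i] = j + 1
    return starts, max(job_ready)  # (start times, makespan)
```

For instance, with the 3-job/2-machine data sketched in Section 2, `decode_forward([3, 2, 3, 1, 1, 2], jobs)` yields a makespan of 9.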

Algorithm 1 is the solution-decoding procedure used by LOSAP. As options, it can transform an operation-based permutation into a forward semiactive schedule (i.e., a semiactive schedule constructed in the forward direction) or a backward semiactive schedule (i.e., a semiactive schedule constructed in the backward direction). Thus, Algorithm 1 requires the values of two input parameters, i.e., an operation-based permutation and a scheduling direction. Recall that m and n represent the number of machines and the number of jobs, respectively, in the JSP instance under consideration. Thus, mn (i.e., m multiplied by n) represents the number of all operations.

As mentioned above, Algorithm 1 is the solution-decoding method used by LOSAP, and Algorithm 2 in Section 3.1.2 is LOSAP's procedure. Thus, it can be said that Algorithm 1 is a component of Algorithm 2.

3.1.2. LOSAP Procedure

LOSAP, as shown in Algorithm 2, has five input parameters whose values need to be assigned, i.e., PTBT, SD, PNO ≡ (NO1, NO2), PROB, and P. The definitions of these input parameters are given below:
(i) PTBT and P stand for the perturbation operator and the start operation-based permutation, respectively. LOSAP uses PTBT on P to generate its initial best-found operation-based permutation.
(ii) SD stands for the scheduling direction (forward or backward) of all solutions generated by LOSAP. If SD is set to forward, LOSAP transforms each generated operation-based permutation into a forward schedule. Otherwise, LOSAP transforms each permutation into a backward schedule.
(iii) PNO ≡ (NO1, NO2) is the ordered pair of the first neighbor operator (called NO1) and the second neighbor operator (called NO2).
(iv) PROB is the probability of selecting the first neighbor operator (NO1). Consequently, the probability of selecting the second neighbor operator (NO2) is unity minus PROB.

Besides the input parameters defined above, the other abbreviations used in LOSAP (i.e., Algorithm 2) are defined below:
(i) P0 stands for the current best-found operation-based permutation. As mentioned above, the initial P0 is generated by using PTBT on P.
(ii) S0, which is decoded from P0, stands for the current best-found schedule. In addition, Makespan (S0) stands for the makespan of S0.
(iii) P1, which is generated by using NO1 or NO2 on P0, stands for the current neighbor operation-based permutation.
(iv) S1, which is decoded from P1, stands for the current neighbor schedule. In addition, Makespan (S1) stands for the makespan of S1.
(v) m and n are the number of machines and the number of jobs, respectively, in the JSP instance under consideration. Thus, mn is the number of operations.

The LOSAP procedure given in Algorithm 2 is simple but efficient. In brief, LOSAP starts by using PTBT on P to generate an initial P0; after that, LOSAP transforms P0 into S0. Then, LOSAP starts its repeated loop by using PROB to randomly select either NO1 or NO2. LOSAP uses the selected operator on P0 to generate P1, and then transforms P1 into S1. If the criterion for accepting a new best-found permutation is satisfied, LOSAP updates P0 ⟵ P1 and S0 ⟵ S1. Finally, LOSAP determines whether to continue with the next round of the repeated loop or stop. Note that S0 and S1 are always generated in the forward direction if SD is set to forward at the beginning; otherwise, they are always generated in the backward direction.
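
Assuming the operator and decoding functions sketched earlier in this section, the loop of Algorithm 2 could be outlined in Python as follows. This is a sketch, not the author's implementation; `decode` is a hypothetical wrapper that builds the semiactive schedule of a permutation in direction `sd` and returns its makespan.

```python
import random

def losap(ptbt, sd, no1, no2, prob, p, jobs, decode):
    """Sketch of Algorithm 2: local search in a hybrid neighborhood.
    ptbt, no1, no2 are permutation operators; decode(perm, sd, jobs)
    is assumed to return the makespan of the decoded schedule."""
    n, m = len(jobs), len(jobs[0])
    p0 = ptbt(p)                    # Step 2: perturb the start permutation
    mk0 = decode(p0, sd, jobs)      # Step 3
    t_l = 0                         # Step 4.1
    while t_l < (m * n) ** 2:
        op = no1 if random.random() <= prob else no2  # Step 4.2: hybrid choice
        p1 = op(p0)
        mk1 = decode(p1, sd, jobs)  # Step 4.3
        if mk1 < mk0:               # Step 4.4.1: improvement resets t_l
            p0, mk0, t_l = p1, mk1, 0
        elif mk1 == mk0:            # Step 4.4.2: sideways move is accepted,
            p0, mk0 = p1, mk1       # but t_l is deliberately not reset
        else:                       # Step 4.4.3: worse neighbor is rejected
            t_l += 1
    return p0, mk0                  # Step 5: best-found permutation and makespan
```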

In LOSAP, there are many optional operators for PTBT and PNO. These optional operators are modified from the common swap, insert, and inverse operators [29] (the definitions of the common operators are given in Section 2). In detail, the five options for PTBT are the n-medium swap, n-large swap, n-medium inverse, n-large insert, and n-medium insert operators. LOSAP also provides four options for PNO ≡ (NO1, NO2): (1-small inverse, 1-medium insert), (1-large swap, 1-large insert), (1-medium swap, 1-medium insert), and (1-small swap, 1-small insert). These optional operators are defined as follows. The number before the hyphen (-) indicates the number of repeated uses of the operator named after the hyphen. For example, the 1-small swap operator uses the small swap operator once on a permutation, and the n-medium inverse operator uses the medium inverse operator n times on a permutation.

In addition to the above paragraph, the words small, medium, and large in the names of the optional operators restrict the value of v in terms of its distance from u, as explained below (recall that u is a random integer within [1, mn]):
(i) For all operators with small, v is a random integer within [u − 4, u + 4] (note that the small swap in [45] means the swap of two adjacent members, and it thus differs from the small swap in LOSAP).
(ii) For all operators with medium, v is a random integer within [u − (mn/5), u + (mn/5)].
(iii) For all operators with large, v is a random integer within [1, mn]. This means that the operators with large are identical to the common operators.

After v is generated, its value must be verified before use. If the generated value of v is outside [1, mn] or equal to u, then v must be randomly regenerated within the same given range. This procedure is repeated until a value of v within [1, mn] and unequal to u is obtained.
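
A sketch of this generate-and-verify procedure for v (assumed Python, with 1-based positions as in the text; mn/5 is taken as integer division for illustration):

```python
import random

def draw_v(u, mn, size):
    """Draw v for a 'small', 'medium', or 'large' operator: regenerate
    until v lies within [1, mn] and differs from u (1-based positions)."""
    if size == "small":
        lo, hi = u - 4, u + 4
    elif size == "medium":
        lo, hi = u - mn // 5, u + mn // 5
    else:  # "large": identical to the common operators
        lo, hi = 1, mn
    while True:
        v = random.randint(lo, hi)
        if 1 <= v <= mn and v != u:
            return v
```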

Note that adding more optional operators to LOSAP does not always benefit the two-level metaheuristic algorithm; in fact, it sometimes makes it harder for the two-level metaheuristic algorithm to find a better solution. As shown in Algorithm 2, LOSAP does not include the n-small swap, n-small insert, and n-large inverse operators as options for PTBT. The reason for excluding the n-small swap and n-small insert operators is that they make too small a change to P; with them, the two-level metaheuristic algorithm can hardly escape a local optimum. In contrast, a change made by the n-large inverse operator is almost as large as a change made by reinitialization. For PNO, the 1-medium inverse and 1-large inverse operators are not used as options because, as neighbor operators, they make too large a change to P0.

LOLA [9] and LOSAP are both lower-level algorithms of their own two-level metaheuristic algorithms; however, there are many differences between them. LOLA's search ability is mainly based on its solution space of parameterized-active schedules. By contrast, LOSAP uses an ordinary solution space of semiactive schedules; thus, its search ability is mainly based on its probabilistic-based hybrid neighborhood structure. Most of LOSAP's optional perturbation and neighbor operators differ from LOLA's; the n-large insert operator is the only optional perturbation operator found in both. Another difference lies in their criteria for accepting a new best-found solution. LOLA accepts only a better neighbor solution, while LOSAP accepts a nonworsening neighbor solution. LOSAP uses this acceptance criterion to escape from a shoulder (i.e., a flat area of the search space adjacent to a downhill edge [46]). In addition, LOSAP does not reset tL to 0 on any sideways move, in order to avoid an endless loop upon finding a flat local minimum.

3.2. Proposed Upper-Level Algorithm

MUPLA is the upper-level algorithm of the proposed two-level algorithm. The purpose of MUPLA is to evolve LOSAP's input-parameter values so that LOSAP can deliver its best performance on every single JSP instance. MUPLA is a population-based search algorithm specifically developed to serve as a parameter tuner. MUPLA's population contains N combinations of LOSAP's input-parameter values, i.e., C1 (t), C2 (t),…, CN (t). For short, let a parameter-value combination stand for a combination of LOSAP's input-parameter values. In the population, each parameter-value combination is adjusted over iterations by MUPLA's specific evolutionary process.

Let Ci (t) ≡ (c1i (t), c2i (t), c3i (t), c4i (t), pi (t)) represent the ith parameter-value combination (where i = 1, 2, …, N) in MUPLA's population at the tth iteration. Here, c1i (t), c2i (t), c3i (t), and c4i (t) are real numbers representing the values of the perturbation operator (PTBT), the scheduling direction (SD), the ordered pair of the first and second neighbor operators (PNO), and the probability of selecting the first neighbor operator (PROB) of LOSAP, respectively. In addition, pi (t) represents the start operation-based permutation of LOSAP. Note that pi (t) is an operation-based permutation, not a real number like the others. Table 1 presents the transformation from c1i (t), c2i (t), c3i (t), c4i (t), and pi (t) into the values of PTBT, SD, PNO, PROB, and P of LOSAP, respectively (a sketch of this decoding is given after the table).


Table 1: Transformation from Ci (t) into LOSAP's input-parameter values.

MUPLA's Ci (t) | LOSAP's input-parameter values
c1i (t) | PTBT ∈ {n-medium swap, n-large swap, n-medium inverse, n-large insert, n-medium insert}
c2i (t) | SD ∈ {forward, backward}
c3i (t) | PNO ≡ (NO1, NO2) ∈ {(1-small inverse, 1-medium insert), (1-large swap, 1-large insert), (1-medium swap, 1-medium insert), (1-small swap, 1-small insert)}
c4i (t) | PROB ∈ [0, 1]
pi (t) ∈ all possible operation-based permutations | P = pi (t)
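
Assuming, for illustration, that each real-valued component is drawn from [0, 1) and mapped to a discrete option by equal-width subintervals (the paper's exact interval boundaries may differ), the decoding of a parameter-value combination could look like the following sketch.

```python
PTBT_OPTIONS = ["n-medium swap", "n-large swap", "n-medium inverse",
                "n-large insert", "n-medium insert"]
SD_OPTIONS = ["forward", "backward"]
PNO_OPTIONS = [("1-small inverse", "1-medium insert"),
               ("1-large swap", "1-large insert"),
               ("1-medium swap", "1-medium insert"),
               ("1-small swap", "1-small insert")]

def decode_combination(c1, c2, c3, c4, p):
    """Map a real-valued combination to LOSAP's inputs, assuming each
    discrete parameter partitions [0, 1) into equal-width subintervals."""
    ptbt = PTBT_OPTIONS[min(int(c1 * len(PTBT_OPTIONS)), len(PTBT_OPTIONS) - 1)]
    sd = SD_OPTIONS[min(int(c2 * len(SD_OPTIONS)), len(SD_OPTIONS) - 1)]
    pno = PNO_OPTIONS[min(int(c3 * len(PNO_OPTIONS)), len(PNO_OPTIONS) - 1)]
    prob = c4  # PROB is used directly as a probability
    return ptbt, sd, pno, prob, p
```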

The abbreviations used in MUPLA (i.e., Algorithm 3) are defined below:
(i) Score (Ci (t)) stands for the performance score of Ci (t). Note that the lower the performance score, the better the performance. Between any two parameter-value combinations, the one with the lower performance score is the better one.
(ii) Pfi (t) stands for the final (best-found) operation-based permutation of the LOSAP run using the input-parameter values decoded from Ci (t).
(iii) Sfi (t) stands for the final (best-found) schedule of the LOSAP run using the input-parameter values decoded from Ci (t). In addition, Makespan (Sfi (t)) stands for the makespan of Sfi (t).
(iv) Cbest ≡ (c1best, c2best, c3best, c4best, pbest) stands for the best parameter-value combination ever found by the population. In addition, Score (Cbest) stands for the performance score of Cbest (note that Cbest is similar in definition to the global best position of PSO [32]).
(v) Sbest stands for the best schedule ever found by the population.

Algorithm 3 (MUPLA procedure):
Step 1. Let t ⟵ 1 and Score (Cbest) ⟵ + ∞. Generate each Ci (t) ≡ (c1i (t), c2i (t), c3i (t), c4i (t), pi (t)) (where i = 1, 2, …, N) by randomly generating each cji (t) (where j = 1, 2, 3, 4) within its allowable range and randomly generating pi (t) from all possible operation-based permutations.
Step 2. Evaluate Score (Ci (t)), and update Cbest and Sbest by using Steps 2.1 to 2.5.
  Step 2.1. Let i ⟵ 1.
  Step 2.2. Transform Ci (t) into the values of PTBT, SD, PNO, PROB, and P of LOSAP by the relationships shown in Table 1.
  Step 2.3. Execute Algorithm 2 (LOSAP) by inputting the PTBT, SD, PNO, PROB, and P taken from Step 2.2 in order to receive Pfi (t) and Sfi (t). Then, let Score (Ci (t)) ⟵ Makespan (Sfi (t)).
  Step 2.4. If Score (Ci (t)) ≤ Score (Cbest), then let Cbest ⟵ Ci (t), Score (Cbest) ⟵ Score (Ci (t)), and Sbest ⟵ Sfi (t).
  Step 2.5. If i < N, then let i ⟵ i + 1 and repeat from Step 2.2; otherwise, go to Step 3.
Step 3. Update Ci (t + 1), where i = 1, 2, …, N, by using Steps 3.1 to 3.4.
  Step 3.1. Let i ⟵ 1.
  Step 3.2. If t mod 1000 = 0, then randomly generate pi (t + 1) from all possible operation-based permutations; otherwise, let pi (t + 1) ⟵ Pfi (t).
  Step 3.3. If t mod 25 = 0, then randomly generate cji (t + 1) (where j = 1, 2, 3, 4) within its allowable range; otherwise, generate cji (t + 1) by the update equation, i.e., the sum of the two opposite-direction vectors described in the text below, where u1 and u2 are randomly generated from [0, 1].
Step 3.4. If i < N, then let i ⟵ i + 1 and repeat from Step 3.2; otherwise, go to Step 4.
Step 4. If the stopping criterion is not met, then let t ⟵ t + 1 and repeat from Step 2. Otherwise, return Sbest as the final result.

The procedure of MUPLA is presented in detail in Algorithm 3. In addition, it is also presented in the form of a flow chart in Figure 1. In brief, MUPLA starts by assigning t ⟵ 1 and Score (Cbest) ⟵ + ∞. Then, MUPLA randomly generates Ci (t), where i = 1, 2, …, N. After that, MUPLA processes its repeated loop as follows. For each Ci (t), MUPLA decodes it into the LOSAP input-parameter values and then runs LOSAP with these values to receive Pfi (t) and Sfi (t); MUPLA then assigns Score (Ci (t)) ⟵ Makespan (Sfi (t)). If MUPLA finds any Ci (t) whose Score (Ci (t)) is less than or equal to Score (Cbest), it updates Cbest ⟵ Ci (t), Score (Cbest) ⟵ Score (Ci (t)), and Sbest ⟵ Sfi (t). After that, MUPLA generates Ci (t + 1), where i = 1, 2, …, N, by its specific evolutionary process (as shown in Step 3 of Algorithm 3), and then assigns t ⟵ t + 1. Finally, MUPLA determines whether to continue with the next round of the repeated loop or stop.

A main difference of MUPLA from other population-based algorithms such as [32–35] lies in its specific evolutionary process (i.e., the procedure for adjusting its population) given by the equation in Step 3.3. In that equation, each cji (t + 1) is updated from cji (t) by the sum of two opposite-direction vectors. The first vector is directed toward the best-found value, whereas the second vector is directed away from it. The equation in Step 3.3 used by MUPLA is slightly modified from that of UPLA [9]. The modification reduces the first vector's maximum magnitude by half (from 0.05 to 0.025) in cases where cji (t) differs from cjbest. The purpose of this modification is to delay the population from getting stuck in a local optimum. In a preliminary experiment, the proposed two-level metaheuristic algorithm performed better on average after this reduction of the first vector's maximum magnitude.
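
Based on the verbal description above (two opposite-direction random vectors with maximum magnitudes 0.025 and 0.01, both becoming 0.01 when cji (t) equals cjbest), one plausible reading of the Step 3.3 update is sketched below; the exact equation should be taken from the original article, and the clamping to [0, 1] is likewise an assumption.

```python
import random

def update_component(c, c_best):
    """Hedged sketch of the Step 3.3 update: move c toward c_best by a
    random step of at most 0.025 and away from it by a random step of
    at most 0.01; when c equals c_best, both maxima are taken as 0.01."""
    toward_max = 0.025 if c != c_best else 0.01
    u1, u2 = random.random(), random.random()
    direction = 1.0 if c_best >= c else -1.0  # toward the best-found value
    # (when c == c_best the direction is arbitrary in this sketch)
    step = u1 * toward_max * direction - u2 * 0.01 * direction
    return min(max(c + step, 0.0), 1.0)       # assumed to stay within [0, 1]
```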

Besides the modification of the equation in Step 3.3, there are other, larger changes of MUPLA from UPLA [9]. One change is that, unlike in UPLA, each Ci (t) of MUPLA's population includes a start operation-based permutation pi (t). Through this change, MUPLA adds a multistart property to LOSAP. Consequently, the combination of MUPLA and LOSAP becomes a multistart iterated local search algorithm. This is a large upgrade because the combination of UPLA and LOLA in [9] is just an iterated local search algorithm. Another change is in the criterion for accepting a new best-found parameter-value combination. UPLA accepts only a better parameter-value combination, while MUPLA accepts a nonworsening parameter-value combination.

4. Results and Discussions

The performance of the proposed two-level metaheuristic algorithm was evaluated via the experiment conducted in this research, and Section 4 presents the experiment's results. In this section, let MUPLA stand for the whole two-level metaheuristic algorithm (i.e., MUPLA combined with LOSAP), since MUPLA uses LOSAP as its component when solving JSP. For comparison purposes, MUPLA's results were compared to those of TS, GA, and UPLA taken from [5], [6], and [9], respectively. Let TS stand for the taboo search algorithm developed by Nowicki and Smutnicki [5], and let GA stand for the genetic algorithm developed by Piroozfard et al. [6]. In addition, let UPLA in this section stand for UPLA combined with LOLA.

The performance of MUPLA was evaluated on 53 benchmark instances: the ft06, ft10, and ft20 instances from [47], the la01 to la40 instances from [48], and the orb01 to orb10 instances from [49]. These 53 instances were chosen because they cover all instances used by [5, 6, 9] in their experiments. In detail, TS's performance was evaluated in [5] on the first-mentioned 43 instances, i.e., ft06, ft10, ft20, and all 40 la instances. GA's performance was evaluated in [6] on 42 instances, i.e., ft06, ft20, and all 40 la instances. In addition, UPLA's performance was evaluated in [9] on all 53 instances.

The experiment had two main objectives. The first was to evaluate the best solution quality of which MUPLA was capable without a limit on computational time. To achieve this objective, MUPLA was run for an extremely long computational time (i.e., 5,000 iterations in this paper) in each trial unless it found the known optimal solution earlier. The reason was to ensure that MUPLA had already reached the best solution it could. The second objective was to evaluate MUPLA's search performance over iterations. To do so, MUPLA's solution convergence rates were plotted. One finding sought from the convergence rates was a proper maximum iteration for MUPLA. This is very important because a proper maximum iteration can balance solution quality against the computational time consumed. Note that the higher the number of iterations used, the higher the computational time consumed.

The parameter settings of MUPLA used in the experiment are shown below:
(i) The population of MUPLA consisted of three parameter-value combinations (i.e., N = 3 in Algorithm 3).
(ii) The stopping criterion of MUPLA was that either the 5,000th iteration (i.e., t = 5,000 in Algorithm 3) was reached or the known optimal solution value was found (note that solution and solution value in this paper represent schedule and makespan, respectively).
(iii) MUPLA was coded and executed on an Intel® Core™ i5 CPU M580 @ 2.67 GHz with 6 GB of RAM (2.3 GB usable).
(iv) For each instance, MUPLA was executed for five trials with different random-seed numbers.

The results of the experiment using the above settings are presented in Table 2. In the table, MUPLA's results on each instance consist of the best-found solution value over the five trials, the average of the five trials' best-found solution values, the average number of iterations used, and the average computational time consumed. For comparison purposes, Table 2 also presents the best-found solution values of TS, GA, and UPLA taken from [5], [6], and [9], respectively. Note that in the table, TS's results are shown only on the ft06, ft10, ft20, and all 40 la instances, and GA's results are shown only on the ft06, ft20, and all 40 la instances.


Table 2: Computational results of MUPLA in comparison with TS [5], GA [6], and UPLA [9] (n/a: instance not tested by that algorithm).

Instance | Opt | TS Best | TS %BD | GA Best | GA %BD | UPLA Best | UPLA %BD | MUPLA Best | MUPLA %BD | MUPLA Avg | MUPLA %AD | Avg iterations | Avg CPU time (s)
ft06 | 55 | 55 | 0.00 | 55 | 0.00 | 55 | 0.00 | 55 | 0.00 | 55 | 0.00 | 1 | 0.1
ft10 | 930 | 930 | 0.00 | n/a | n/a | 930 | 0.00 | 930 | 0.00 | 930 | 0.00 | 97 | 315
ft20 | 1165 | 1165 | 0.00 | 1165 | 0.00 | 1165 | 0.00 | 1165 | 0.00 | 1165 | 0.00 | 335 | 1,890
la01 | 666 | 666 | 0.00 | 666 | 0.00 | 666 | 0.00 | 666 | 0.00 | 666 | 0.00 | 1 | 1
la02 | 655 | 655 | 0.00 | 655 | 0.00 | 655 | 0.00 | 655 | 0.00 | 655 | 0.00 | 5 | 2
la03 | 597 | 597 | 0.00 | 597 | 0.00 | 597 | 0.00 | 597 | 0.00 | 597 | 0.00 | 27 | 10
la04 | 590 | 590 | 0.00 | 590 | 0.00 | 590 | 0.00 | 590 | 0.00 | 590 | 0.00 | 6 | 2
la05 | 593 | 593 | 0.00 | 593 | 0.00 | 593 | 0.00 | 593 | 0.00 | 593 | 0.00 | 1 | 0.4
la06 | 926 | 926 | 0.00 | 926 | 0.00 | 926 | 0.00 | 926 | 0.00 | 926 | 0.00 | 1 | 2
la07 | 890 | 890 | 0.00 | 890 | 0.00 | 890 | 0.00 | 890 | 0.00 | 890 | 0.00 | 1 | 2
la08 | 863 | 863 | 0.00 | 863 | 0.00 | 863 | 0.00 | 863 | 0.00 | 863 | 0.00 | 1 | 2
la09 | 951 | 951 | 0.00 | 951 | 0.00 | 951 | 0.00 | 951 | 0.00 | 951 | 0.00 | 1 | 2
la10 | 958 | 958 | 0.00 | 958 | 0.00 | 958 | 0.00 | 958 | 0.00 | 958 | 0.00 | 1 | 2
la11 | 1222 | 1222 | 0.00 | 1222 | 0.00 | 1222 | 0.00 | 1222 | 0.00 | 1222 | 0.00 | 1 | 4
la12 | 1039 | 1039 | 0.00 | 1039 | 0.00 | 1039 | 0.00 | 1039 | 0.00 | 1039 | 0.00 | 1 | 8
la13 | 1150 | 1150 | 0.00 | 1150 | 0.00 | 1150 | 0.00 | 1150 | 0.00 | 1150 | 0.00 | 1 | 6
la14 | 1292 | 1292 | 0.00 | 1292 | 0.00 | 1292 | 0.00 | 1292 | 0.00 | 1292 | 0.00 | 1 | 6
la15 | 1207 | 1207 | 0.00 | 1207 | 0.00 | 1207 | 0.00 | 1207 | 0.00 | 1207 | 0.00 | 1 | 7
la16 | 945 | 945 | 0.00 | 946 | 0.11 | 945 | 0.00 | 945 | 0.00 | 945 | 0.00 | 89 | 294
la17 | 784 | 784 | 0.00 | 784 | 0.00 | 784 | 0.00 | 784 | 0.00 | 784 | 0.00 | 10 | 33
la18 | 848 | 848 | 0.00 | 848 | 0.00 | 848 | 0.00 | 848 | 0.00 | 848 | 0.00 | 8 | 24
la19 | 842 | 842 | 0.00 | 842 | 0.00 | 842 | 0.00 | 842 | 0.00 | 842 | 0.00 | 45 | 149
la20 | 902 | 902 | 0.00 | 907 | 0.55 | 902 | 0.00 | 902 | 0.00 | 902 | 0.00 | 312 | 1,073
la21 | 1046 | 1047 | 0.10 | 1050 | 0.38 | 1052 | 0.57 | 1046 | 0.00 | 1046.6 | 0.06 | 3,478 (1,644) | 64,880 (30,668)
la22 | 927 | 927 | 0.00 | 927 | 0.00 | 927 | 0.00 | 927 | 0.00 | 927 | 0.00 | 72 | 1,439
la23 | 1032 | 1032 | 0.00 | 1032 | 0.00 | 1032 | 0.00 | 1032 | 0.00 | 1032 | 0.00 | 2 | 25
la24 | 935 | 939 | 0.43 | 943 | 0.86 | 941 | 0.64 | 935 | 0.00 | 937.4 | 0.26 | 4,673 (1,186) | 84,121 (21,350)
la25 | 977 | 977 | 0.00 | 984 | 0.72 | 982 | 0.51 | 977 | 0.00 | 977 | 0.00 | 954 | 15,827
la26 | 1218 | 1218 | 0.00 | 1218 | 0.00 | 1218 | 0.00 | 1218 | 0.00 | 1218 | 0.00 | 1 | 82
la27 | 1235 | 1236 | 0.08 | 1255 | 1.62 | 1256 | 1.70 | 1235 | 0.00 | 1235.4 | 0.03 | 3,702 (3,266) | 220,382 (194,427)
la28 | 1216 | 1216 | 0.00 | 1217 | 0.08 | 1216 | 0.00 | 1216 | 0.00 | 1216 | 0.00 | 28 | 1,972
la29 | 1152 | 1160 | 0.69 | 1179 | 2.34 | 1191 | 3.39 | 1163 | 0.95 | 1163.8 | 1.02 | 5,000 (2,041) | 318,615 (130,059)
la30 | 1355 | 1355 | 0.00 | 1355 | 0.00 | 1355 | 0.00 | 1355 | 0.00 | 1355 | 0.00 | 1 | 123
la31 | 1784 | 1784 | 0.00 | 1784 | 0.00 | 1784 | 0.00 | 1784 | 0.00 | 1784 | 0.00 | 1 | 306
la32 | 1850 | 1850 | 0.00 | 1850 | 0.00 | 1850 | 0.00 | 1850 | 0.00 | 1850 | 0.00 | 1 | 172
la33 | 1719 | 1719 | 0.00 | 1719 | 0.00 | 1719 | 0.00 | 1719 | 0.00 | 1719 | 0.00 | 1 | 313
la34 | 1721 | 1721 | 0.00 | 1721 | 0.00 | 1721 | 0.00 | 1721 | 0.00 | 1721 | 0.00 | 1 | 448
la35 | 1888 | 1888 | 0.00 | 1888 | 0.00 | 1888 | 0.00 | 1888 | 0.00 | 1888 | 0.00 | 1 | 393
la36 | 1268 | 1268 | 0.00 | 1294 | 2.05 | 1278 | 0.79 | 1268 | 0.00 | 1268 | 0.00 | 1,104 | 85,418
la37 | 1397 | 1407 | 0.72 | 1418 | 1.50 | 1407 | 0.72 | 1397 | 0.00 | 1397 | 0.00 | 823 | 60,481
la38 | 1196 | 1196 | 0.00 | 1222 | 2.17 | 1215 | 1.59 | 1196 | 0.00 | 1199 | 0.25 | 3,985 (2,282) | 296,821 (169,974)
la39 | 1233 | 1233 | 0.00 | 1249 | 1.30 | 1250 | 1.38 | 1233 | 0.00 | 1233 | 0.00 | 265 | 18,057
la40 | 1222 | 1229 | 0.57 | 1233 | 0.90 | 1229 | 0.57 | 1224 | 0.16 | 1224 | 0.16 | 5,000 (1,678) | 355,968 (119,463)
orb01 | 1059 | n/a | n/a | n/a | n/a | 1059 | 0.00 | 1059 | 0.00 | 1059 | 0.00 | 307 | 965
orb02 | 888 | n/a | n/a | n/a | n/a | 889 | 0.11 | 888 | 0.00 | 888.2 | 0.02 | 2,438 (1,443) | 7,622 (4,511)
orb03 | 1005 | n/a | n/a | n/a | n/a | 1005 | 0.00 | 1005 | 0.00 | 1005 | 0.00 | 903 | 2,500
orb04 | 1005 | n/a | n/a | n/a | n/a | 1005 | 0.00 | 1005 | 0.00 | 1005 | 0.00 | 257 | 840
orb05 | 887 | n/a | n/a | n/a | n/a | 889 | 0.23 | 887 | 0.00 | 887 | 0.00 | 2,064 | 6,237
orb06 | 1010 | n/a | n/a | n/a | n/a | 1013 | 0.30 | 1010 | 0.00 | 1010.4 | 0.04 | 2,912 (2,033) | 9,610 (6,709)
orb07 | 397 | n/a | n/a | n/a | n/a | 397 | 0.00 | 397 | 0.00 | 397 | 0.00 | 61 | 216
orb08 | 899 | n/a | n/a | n/a | n/a | 899 | 0.00 | 899 | 0.00 | 899 | 0.00 | 184 | 594
orb09 | 934 | n/a | n/a | n/a | n/a | 934 | 0.00 | 934 | 0.00 | 934 | 0.00 | 127 | 515
orb10 | 944 | n/a | n/a | n/a | n/a | 944 | 0.00 | 944 | 0.00 | 944 | 0.00 | 58 | 183

The abbreviations used in Table 2 are defined as follows:
(i) Instance and Opt represent the name of each instance and its known optimal solution value given by the literature (e.g., [2]), respectively.
(ii) For each instance, Best represents the best-found solution value of each algorithm. Best of MUPLA was taken from the five trials in this experiment. Bests of TS, GA, and UPLA were taken from [5], [6], and [9], respectively. For each algorithm, %BD represents the percent deviation of Best from Opt (see the formula below this list).
(iii) For each instance, Avg represents the average of the five trials' best-found solution values of MUPLA. Then, %AD represents the percent deviation of Avg from Opt.
(iv) Avg Iterations and Avg CPU Time stand for the average number of iterations and the average computational time (in seconds), respectively, consumed by MUPLA until the stopping criterion was met.
(v) In parentheses, Avg Iterations and Avg CPU Time provide the average number of iterations and the approximate average computational time (in seconds), respectively, consumed by MUPLA until no further improvement occurred. This information is given only when at least one trial of MUPLA could not find the known optimal solution before reaching the 5,000th iteration.
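
For clarity, the percent deviations are presumably computed as %BD = 100 × (Best − Opt)/Opt and %AD = 100 × (Avg − Opt)/Opt; this reading is consistent with the values in Table 2 (e.g., for la29, 100 × (1163 − 1152)/1152 ≈ 0.95%).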

Based on the results in Table 2, Section 4.1 evaluates the best search performance MUPLA was capable of without a limit on computational time. Section 4.2 then evaluates MUPLA's search performance over iterations.

4.1. Evaluation on Search Performance without Computational Time’s Limitation

As just mentioned, Section 4.1 aims to reveal the best solution quality MUPLA was capable of without a restriction on computational time. To achieve this aim, MUPLA was executed for an extremely long computational time (i.e., 5,000 iterations) unless it found the known optimal solution earlier. The purpose of this setting was to ensure that MUPLA had already reached the best solution it could. In the performance evaluation, the %BDs of MUPLA were compared to the %BDs of TS [5], GA [6], and UPLA [9] shown in Table 2. Recall that the %BDs of TS, GA, and UPLA were calculated from the best-found solution values given in their original articles. The use of these %BDs can be regarded as a limitation of this experiment because TS, GA, and UPLA might find improving solutions if they were run for more computational time than used in their articles. On the other hand, they might not find any improving solution even if much more computational time were provided; this usually happens to any metaheuristic algorithm when its search gets stuck in a local optimum.

Discussions of the %BDs shown in Table 2 are given hereafter, along with one-sided paired t-tests comparing the mean %BDs. Let the significance level (α) be 0.1. As shown in Table 2, MUPLA performed better than TS [5] in terms of %BD (recall that the performances were compared on only the 43 instances). Over these 43 instances, the average %BD of MUPLA was 0.03%, while the average %BD of TS was 0.06%. MUPLA returned better %BDs than TS on five instances (i.e., la21, la24, la27, la37, and la40), while TS returned a better %BD than MUPLA on only one instance (i.e., la29). On the 37 remaining instances, both returned equal %BDs. In addition, MUPLA returned %BDs of 0.00% on 41 instances, while TS returned %BDs of 0.00% on only 37 instances. This means that of the 43 instances, MUPLA found optimal solutions on 41 instances, while TS found optimal solutions on only 37. With α = 0.1, a one-sided paired t-test indicated that the mean %BD of MUPLA was significantly better than the mean %BD of TS (p value = 0.066).

MUPLA also outperformed GA [6] in terms of %BD (recall that the performances were compared on only the 42 instances). Over these 42 instances, the average %BD of MUPLA was 0.03%, while the average %BD of GA was 0.35%. MUPLA returned better %BDs than GA on 13 instances, and both returned equal %BDs on the 29 remaining instances. This means that GA could not return a better %BD than MUPLA on any instance. Moreover, MUPLA returned %BDs of 0.00% on 40 instances, while GA returned %BDs of 0.00% on only 29 instances. This means that of the 42 instances, MUPLA found the optimal solutions on 40 instances, while GA found the optimal solutions on only 29. In detail, MUPLA failed to find optimal solutions on la29 and la40, while GA failed on la29, la40, and 11 other instances. Although both failed on la29 and la40, MUPLA performed much better on these two instances: on la29 and la40, respectively, MUPLA's %BDs were 0.95% and 0.16%, while GA's %BDs were 2.34% and 0.90%. A one-sided paired t-test with α = 0.1 indicated that the mean %BD of MUPLA was significantly better than the mean %BD of GA (p value = 0.001).

Based on the %BDs in Table 2, MUPLA also outperformed UPLA [9]. Over the total of 53 instances, the average %BD of MUPLA was 0.02%, while the average %BD of UPLA was 0.24%. MUPLA returned better %BDs than UPLA on 13 instances, and both returned the same %BDs on the other 40 instances. This means that there was no instance on which UPLA returned a better %BD than MUPLA. In addition, MUPLA returned %BDs of 0.00% on 51 instances, while UPLA returned %BDs of 0.00% on only 40 instances. This means that MUPLA found optimal solutions on 51 instances, while UPLA found optimal solutions on only 40. In detail, MUPLA failed to find optimal solutions on only la29 and la40, while UPLA failed on la29, la40, and 11 other instances. Based on the results from Table 2, a one-sided paired t-test with α = 0.1 indicated that the mean %BD of MUPLA was significantly better than the mean %BD of UPLA (p value = 0.002).

A summary of the above discussions of %BDs is given below (a sketch of how such a t-test can be computed follows the list):
(i) Compared with TS [5] on the 43 instances, MUPLA returned better %BDs on five instances, while TS returned a better %BD on only one instance. The average %BD of MUPLA was 0.03%, while the average %BD of TS was 0.06%.
(ii) Compared with GA [6] on the 42 instances, MUPLA returned better %BDs on 13 instances, while GA could not return a better %BD on any instance. The average %BD of MUPLA was 0.03%, while the average %BD of GA was 0.35%.
(iii) Compared with UPLA [9] on the total of 53 instances, MUPLA returned better %BDs on 13 instances, while UPLA could not return a better %BD on any instance. The average %BD of MUPLA was 0.02%, while the average %BD of UPLA was 0.24%.
(iv) Based on the one-sided paired t-tests with α = 0.1, the mean %BD of MUPLA was significantly better than the mean %BDs of the other algorithms (with p values of 0.066 for TS, 0.001 for GA, and 0.002 for UPLA).
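
As an illustration of this statistical comparison (not the author's code), a one-sided paired t-test on per-instance %BDs could be run in Python with SciPy (version 1.6 or later, which supports the `alternative` argument); the %BD vectors below are placeholders, not the full data of Table 2.

```python
from scipy import stats

# Paired per-instance %BD values: MUPLA vs. a competitor (placeholder data;
# the real test pairs the 43, 42, or 53 per-instance %BDs from Table 2).
mupla_bd = [0.00, 0.00, 0.95, 0.16]
rival_bd = [0.10, 0.43, 0.69, 0.57]

# H1: the mean %BD of MUPLA is less than the rival's (alternative="less").
t_stat, p_value = stats.ttest_rel(mupla_bd, rival_bd, alternative="less")
print(f"t = {t_stat:.3f}, one-sided p = {p_value:.3f}")  # reject H0 if p < 0.1
```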

Regarding %ADs, Table 2 shows that over all 53 instances, MUPLA returned %ADs of 0.00% on 45 instances. This means that MUPLA found the optimal solution in every single trial for those 45 instances. On the eight remaining instances, the %ADs of MUPLA were also very low; the highest %AD, belonging to la29, was only 1.02%. This can be interpreted to mean that MUPLA performed very well overall, not only in its best trial. Compared with the %BDs, the %ADs were equal or almost equal to the %BDs for all instances. This emphasizes that MUPLA kept its high performance with good consistency from one trial to another.

4.2. Evaluation on Search Performance over Iterations

In this section, MUPLA's solution convergence rates were plotted in order to evaluate MUPLA's search performance over iterations (recall that a solution convergence rate is usually presented by a scatter plot showing the relationship between solution quality and the number of iterations used). As shown in Figures 2 and 3, respectively, %AD-over-iteration plots (i.e., plots of %ADs over iterations) and %BD-over-iteration plots (i.e., plots of %BDs over iterations) were drawn for eight instances, i.e., la21, la24, la27, la29, la38, la40, orb02, and orb06. These eight instances were selected because their %ADs in Table 2 were greater than 0% (meaning that for each of these instances, at least one trial of MUPLA could not find the known optimal solution). The total number of iterations in each figure is only 1,500 (not 5,000) because the %ADs and %BDs, in general, hardly changed after the 1,500th iteration.

To simplify the analysis, the information in Figures 2 and 3 was combined and put into Figure 4. At each iteration, the %ADs and %BDs on the eight instances from Figures 2 and 3 were averaged into the avg %AD and avg %BD, respectively, in Figure 4, resulting in the avg-%AD-over-iteration plot and the avg-%BD-over-iteration plot. For comparison purposes, Figure 4 also presents the plots of TS's final avg %BD, GA's final avg %BD, and UPLA's final avg %BD. These plots represent the average %BDs of the final solutions of TS, GA, and UPLA on the eight instances (i.e., 0.31%, 1.38%, and 1.41%, respectively). They were used to identify the lowest numbers of iterations at which MUPLA returned better solutions than the final solutions of TS, GA, and UPLA.

In Figure 4, the avg-%AD-over-iteration and avg-%BD-over-iteration plots can be divided into three periods based on their reduction rates. The first period ran roughly from the first iteration to the 160th iteration, where the avg %AD and avg %BD were reduced sharply. The second period ran roughly from the 160th iteration to the 700th iteration, where the avg %AD and avg %BD were reduced relatively quickly but more slowly than in the first period. The third period ran roughly from the 700th iteration onwards, where the avg %AD and avg %BD were reduced very slowly. Since a higher number of iterations results in a higher computational time, a proper maximum iteration can help MUPLA balance solution quality against computational time. Based on the two plots, MUPLA's maximum iteration should be set between the 160th and 700th iterations. At the extremes, the 160th iteration should be used when computational time is the main concern, while the 700th iteration should be used when solution quality is the main concern. For any other point between the 160th and 700th iterations, a higher chance of finding a better solution must be traded off against the greater number of iterations used.

For further analysis of Figure 4, the avg-%AD-over-iteration and avg-%BD-over-iteration plots were compared to the final avg %BDs of TS, GA, and UPLA. For each algorithm, the final avg %BD is the average of the %BDs of the final solutions on the eight instances taken from Table 2. In Figure 4, the avg-%BD-over-iteration plot of MUPLA dropped below the final avg %BDs of UPLA, GA, and TS at the eighth, ninth, and 484th iterations, respectively. This means that MUPLA's best trial (of the five trials) provided better solutions than the final solutions of UPLA, GA, and TS at its eighth, ninth, and 484th iterations, respectively. Moreover, the avg-%AD-over-iteration plot dropped below the final avg %BDs of UPLA and GA at the 35th and 39th iterations, respectively. This can be interpreted roughly to mean that MUPLA could usually return better solutions than the final solutions of UPLA and GA within its first 40 iterations. The findings in this paragraph complement those of the previous paragraph: MUPLA could return acceptable-quality solutions within its first 10 iterations and high-quality solutions within its first 500 iterations. Moreover, MUPLA was still able to find improving solutions after the 500th iteration.

In addition, Figure 4 also reveals the percentage enhancement of LOSAP's search performance received from MUPLA. Recall that at MUPLA's first iteration (the start), LOSAP's input-parameter values were generated fully at random. This means that at MUPLA's first iteration, LOSAP performed with its own search ability, without any support from the upper-level algorithm yet. In Figure 4, the avg %AD at the first iteration was 4.80%. After passing through MUPLA's evolutionary process to the 100th iteration, the avg %AD became 1.00%. At the 500th iteration, the avg %AD was further reduced to 0.56%. This improvement of the avg %AD is strong evidence of the enhancement in LOSAP's search performance received from MUPLA. The same conclusion follows from the avg-%BD-over-iteration plot.

5. Conclusion

The proposed two-level metaheuristic algorithm is composed of an upper-level algorithm and a lower-level algorithm, named MUPLA and LOSAP, respectively. At the upper level, MUPLA is a population-based search algorithm developed to control LOSAP's input parameters. At the lower level, LOSAP is a local search algorithm with a probabilistic-based hybrid neighborhood structure. The working relation between MUPLA and LOSAP is, in brief, as follows: MUPLA starts a repeated loop by generating LOSAP's input-parameter values; LOSAP then uses these input-parameter values and returns its best-found JSP solution. The best-found JSP solution becomes feedback sent back to MUPLA for improving LOSAP's input-parameter values, and MUPLA then starts the next round of the repeated loop. MUPLA combined with LOSAP performs like an adaptive multistart iterated local search algorithm. The experiment's results indicated that the proposed two-level metaheuristic algorithm (i.e., MUPLA combined with LOSAP) outperformed its original variant (i.e., UPLA combined with LOLA) and two other high-performing algorithms in terms of solution quality. Based on the convergence rates, the maximum iteration of the two-level metaheuristic algorithm is recommended to be set between the 160th and 700th iterations. A future study should be conducted to enhance the performance of the two-level metaheuristic algorithm for JSP and other related problems. Another interesting future study is to modify MUPLA (uncombined with LOSAP) to serve as a lower-level algorithm; a two-level metaheuristic algorithm using MUPLA at both levels could then be developed.

Data Availability

The data used to support the findings of this study are available from the author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The author acknowledges partial financial support for publication from the Thai-Nichi Institute of Technology, Thailand.


Copyright © 2020 Pisut Pongchairerks. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

