Abstract
This paper proposes a two-level metaheuristic, consisting of lower- and upper-level algorithms, for the job-shop scheduling problem with multi-purpose machines. The lower-level algorithm is a local search algorithm used to find an optimal solution. The upper-level algorithm is a population-based metaheuristic used to control the lower-level algorithm's input parameters. With the upper-level algorithm, the lower-level algorithm can reach its best performance on every problem instance. Most changes of the proposed two-level metaheuristic from its original variants are in the lower-level algorithm. The main purpose of these changes is to increase the diversity of the solution neighborhood structures. One change is that the neighbor operators of the proposed lower-level algorithm are developed to be more adjustable. Another is that the roulette-wheel technique is applied to select a neighbor operator and to generate a perturbation operator. In addition, the proposed lower-level algorithm uses an adjustable delay-time limit to select an optional machine for each operation. The performance of the proposed two-level metaheuristic was evaluated on well-known benchmark instances, and the results indicate that it performs well on most of them.
1. Introduction
The job-shop scheduling problem (JSP) is a well-known NP-hard optimization problem [1–3]. JSP involves scheduling jobs onto machines in order to minimize makespan, i.e., the schedule's length. Each job consists of a number of operations, where each operation must be processed on a predetermined machine with a predetermined processing time. To complete each job, all of its operations must be processed in sequence from the first to the last. JSP has many variants and related problems, such as those in [4–7]. One well-known variant of JSP is the job-shop scheduling problem with multi-purpose machines (MPMJSP) [8–10]. MPMJSP is defined as a generalized variant of JSP. The only difference between them is that each operation in JSP has exactly one predetermined machine, while each operation in MPMJSP may have more than one optional machine. This difference makes MPMJSP closer than JSP to real-world applications in modern factories because, nowadays, most machines are built for multiple tasks.
In this paper, the research objective is to develop a high-performing algorithm for MPMJSP. To this end, this paper proposes a two-level metaheuristic, based on the framework of [5, 11, 12], consisting of upper- and lower-level algorithms. The upper-level algorithm is a population-based metaheuristic that acts as a parameter controller for the lower-level algorithm. The upper-level algorithm's population consists of parameter-value combinations of the lower-level algorithm. In each parameter-value combination, each parameter's value is iteratively changed by the sum of two changeable opposite-direction vectors. The directions of the first and second vectors are, respectively, toward and away from the memorized best-found value. The lower-level algorithm is a local search algorithm searching for an optimal solution of the MPMJSP instance being solved. Like other metaheuristics, the lower-level algorithm cannot perform at its best on all instances with a single combination of input-parameter values. This drawback can be overcome when its input parameters are controlled by the upper-level algorithm.
The proposed two-level metaheuristic is modified from its original variant [12], where the main differences are in their lower-level algorithms. Both lower-level algorithms search for an optimal solution in hybrid neighborhood structures via their optional operators. Their optional neighbor operators are similarly modified from the traditional operators (i.e., swap, insert, and reverse) by limiting the distance between the positions of the two members selected in a solution-representing permutation. However, while the lower-level algorithm of [12] has only three levels of the distance limit, the distance limit in this paper is adjustable to any possible range. Another main difference is in the methods of generating the hybrid neighborhood structures. To generate each neighbor solution-representing permutation, the lower-level algorithm of [12] uses a given probability to select one of two neighbor operators. Instead, the proposed lower-level algorithm uses the roulette-wheel method [13] to select one of three neighbor operators. In this paper, the roulette-wheel method is also applied to select multiple optional operators for generating a perturbation operator. The purpose of using the roulette-wheel method is to further diversify the hybridization of the neighborhood structure.
As mentioned, each operation in MPMJSP has one or more optional machines. The lower-level algorithm proposed in this paper uses the delay-time limit (δ), as an input parameter, as a criterion for selecting a machine for each operation. This use of δ differs from its uses in other studies, e.g., [10, 11, 14–17]. While the proposed lower-level algorithm uses δ to select a machine for each operation, the other studies use δ to select an operation into a timetable. The method by which the proposed lower-level algorithm generates a schedule is briefly as follows: First, the appearance order of all operations in a solution-representing permutation is used as a priority order over all operations. Then, every operation is assigned one-by-one into a schedule in the given priority order. When being assigned, each operation must be processed on a machine that satisfies δ, and it must be started as early as that machine allows.
The remainder of this paper is divided into five sections. Section 2 describes MPMJSP and reviews the previous studies relevant to the proposed two-level metaheuristic. Section 3 describes the proposed two-level metaheuristic in detail. Section 4 presents the experimental results of evaluating the proposed two-level metaheuristic's performance. Section 5 then analyzes and discusses these results. Finally, Section 6 concludes the research findings.
2. Preliminaries
In this section, the description of MPMJSP is given in Section 2.1, and the review of the previous studies relevant to the proposed two-level metaheuristic is given in Section 2.2.
2.1. Description of MPMJSP
The job-shop scheduling problem with multi-purpose machines (MPMJSP) is classified as a generalization of the job-shop scheduling problem (JSP). The only difference between JSP and MPMJSP is the number of optional machines of each operation. That is, while each operation in JSP has exactly one predetermined machine, each operation in MPMJSP has one or more optional machines. MPMJSP thus reduces to JSP if each of its operations has only one optional machine. A similar variant of MPMJSP is the flexible job-shop scheduling problem (FJSP) [18, 19]. MPMJSP and FJSP are both variants of JSP in which each operation may have more than one optional machine. However, the processing time of an operation in FJSP may change when its selected optional machine changes, while the processing time of each operation in MPMJSP is fixed for all of its optional machines. This means MPMJSP is a special case of FJSP in which, for each operation, all optional machines have the same processing time.
Notation used to describe MPMJSP in this paper is defined below:
(i) Let n denote the number of all jobs in MPMJSP.
(ii) Let m denote the number of all machines in MPMJSP.
(iii) Let J_{i} denote the ith job in MPMJSP, where i = 1, 2, …, n.
(iv) Let M_{j} denote the jth machine in MPMJSP, where j = 1, 2, …, m.
(v) Let n_{i} denote the number of all operations of J_{i}.
(vi) Let O_{ik} denote the kth operation of J_{i}, where k = 1, 2, …, n_{i}.
(vii) Let τ_{ik} denote the processing time of O_{ik}.
(viii) Let m_{ik} denote the number of all optional machines of O_{ik}, where i = 1, 2, …, n, and k = 1, 2, …, n_{i}.
(ix) Let E_{ikl} denote the lth optional machine of O_{ik}, chosen from the optional machines of O_{ik} appearing in {M_{1}, M_{2}, …, M_{m}}, where l = 1, 2, …, m_{ik}. In other words, E_{ikl} is the M_{j} with the lth lowest value of j among all optional machines of O_{ik}. For example, suppose O_{12} has three optional machines, i.e., M_{2}, M_{4}, and M_{5}. Then, E_{121} = M_{2}, E_{122} = M_{4}, and E_{123} = M_{5}.
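To make the notation concrete, the toy instance below (field names and data are illustrative, not from the paper) encodes jobs, operations, processing times τ_{ik}, and optional-machine sets, and recovers E_{122} for the example above:

```python
# A toy MPMJSP instance as plain Python dictionaries (illustrative only).
# Machines are indexed 1..m; each job holds its ordered list of operations,
# and each operation O_ik carries tau_ik and its optional-machine indices.
instance = {
    "m": 5,                                           # machines M_1..M_5
    "jobs": [
        {"ops": [{"tau": 3, "machines": [1]},         # O_11
                 {"tau": 4, "machines": [2, 4, 5]}]}, # O_12 (example above)
        {"ops": [{"tau": 2, "machines": [1, 3]}]},    # O_21
    ],
}

n = len(instance["jobs"])                             # number of jobs
D = sum(len(job["ops"]) for job in instance["jobs"])  # total operations

# E_121, E_122, E_123 are the optional machines of O_12 sorted by index,
# so E_122 (the 2nd-lowest-indexed option) is M_4:
E_122 = sorted(instance["jobs"][0]["ops"][1]["machines"])[1]
```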
MPMJSP comes with n given jobs (J_{1}, J_{2}, …, J_{n}) and m given machines (M_{1}, M_{2}, …, M_{m}). Each job J_{i} consists of a sequence of n_{i} given operations (O_{i1}, O_{i2}, …, O_{in_{i}}) forming a chain of precedence constraints. To complete each job J_{i}, O_{i1} must be finished before O_{i2} can start; O_{i2} must be finished before O_{i3} can start, and so on. Each O_{ik} must be processed by one of E_{ik1}, E_{ik2}, …, E_{ikm_{ik}} with the processing time τ_{ik}. Each machine cannot process more than one operation at a time, and it cannot be stopped while processing an operation. At the beginning (i.e., time 0), all jobs have already arrived, and no machine is occupied. An optimal schedule is a feasible schedule that minimizes makespan (i.e., the schedule's length).
MPMJSP was first introduced by Brucker and Schlie [8], who also proposed a polynomial-time algorithm for MPMJSP with two jobs. MPMJSP with three jobs is NP-hard even if the number of machines is two [9]. Tabu-search algorithms were developed in [9] for solving three sets of MPMJSP benchmark instances, i.e., Edata, Rdata, and Vdata. Since then, these three instance sets have been commonly used for comparing the results of different algorithms on MPMJSP. To date, many algorithms have been developed for solving MPMJSP and its closely related problems [10, 18–21].
2.2. Previous Relevant Studies
Iterated local search is traditionally defined as a single-solution-based metaheuristic that searches for a global optimal solution. During its exploration, it repeatedly applies a neighbor operator to find a local optimum and then applies a perturbation operator to escape the found local optimum. Note that a perturbation operator is an operator that generates a new initial solution by largely modifying a found local optimal solution [22]. Some non-traditional iterated local search algorithms enhance their performance by using multiple initial solutions [23, 24]. Iterated local search algorithms have been successful on many optimization problems, including MPMJSP and FJSP [25, 26].
In iterated local search and related algorithms, three operators are usually used as neighbor operators and perturbation operators: the traditional swap, insert, and reverse operators [27]. To explain these operators, let u and v be two different integers randomly generated from 1 to D, where D represents the number of all members in a solution-representing permutation. The swap operator swaps the two members in the uth and vth positions of the permutation. The insert operator removes the member from the uth position of the permutation and then inserts it back at the vth position. The reverse operator reverses the sequence of all members from the uth to the vth positions of the permutation.
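As a sketch, the three traditional operators can be implemented on a 1-based permutation as follows (the function names are mine):

```python
import random

def swap_op(perm, u, v):
    """Swap the members at the u-th and v-th positions (1-based)."""
    p = perm[:]
    p[u - 1], p[v - 1] = p[v - 1], p[u - 1]
    return p

def insert_op(perm, u, v):
    """Remove the member at position u and re-insert it at position v."""
    p = perm[:]
    member = p.pop(u - 1)
    p.insert(v - 1, member)
    return p

def reverse_op(perm, u, v):
    """Reverse the subsequence between positions u and v (inclusive)."""
    lo, hi = min(u, v) - 1, max(u, v) - 1
    p = perm[:]
    p[lo:hi + 1] = reversed(p[lo:hi + 1])
    return p

def random_positions(D, rng=random):
    """Draw two different positions u, v uniformly from 1..D."""
    u, v = rng.sample(range(1, D + 1), 2)
    return u, v
```

For example, `swap_op([1, 2, 3, 4], 2, 4)` and `reverse_op([1, 2, 3, 4], 2, 4)` both yield `[1, 4, 3, 2]`, while `insert_op([1, 2, 3, 4], 1, 3)` yields `[2, 3, 1, 4]`.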
A common drawback of most metaheuristics is that their performance depends on their parameter-value settings. To overcome this drawback, many applications use upper-level algorithms to control the parameters of their solution-searching algorithms [11, 12, 14, 15, 28–31]. Some of them, e.g., [5], require more than two levels of algorithms for very complicated problems; however, most require only two levels. For solving JSP, there are two two-level metaheuristics acting as adaptive iterated local search algorithms [11, 12]. In each of these two-level metaheuristics, the upper-level algorithm controls the lower-level algorithm's input parameters, while the lower-level algorithm is a local search algorithm searching for an optimal job-shop schedule.
UPLA and MUPLA are the upper-level algorithms in [11, 12], respectively; both are population-based algorithms searching in real-number search spaces. In each of them, the population is a number of parameter-value combinations of the lower-level algorithm. In each parameter-value combination, each parameter's value is iteratively changed by the sum of two changeable opposite-direction vectors. The first and second vectors' directions are, respectively, toward and away from the memorized best-found value. In MUPLA only, each parameter-value combination includes a different start operation-based permutation; thus, the two-level metaheuristic of [12] has a multi-start property.
LOLA, the lower-level algorithm in [11], is a local search algorithm exploring a solution space of parameterized-active schedules (i.e., hybrid schedules [16]). Its input parameters (i.e., a delay-time limit, a scheduling direction, a perturbation operator, and a neighbor operator) are controlled by UPLA, its upper-level algorithm. Because the delay-time limit (δ) is one of the controlled input parameters, UPLA can control the size of the solution space of parameterized-active schedules. This control of δ follows the successes of the two-level PSOs of [14, 15]. Other techniques for controlling δ can also be found in the literature. For example, the value of δ in [10] depends on the number of jobs and the number of machines, while the value of δ in [17] depends on the algorithm's iteration index. In addition, the PSO in [32] controls the value of δ by using the concept of self-adaptive parameter control [28].
LOSAP, the lower-level algorithm in [12], is a local search algorithm searching in a probabilistic hybrid neighborhood structure. With a given probability, LOSAP randomly uses one of two predetermined operators to generate a neighbor solution-representing permutation. In other words, based on the given probability, LOSAP can switch between the two given operators at any time during its exploration. While the search performance of LOLA [11] is mainly based on its special solution space, that of LOSAP is mainly based on its hybrid neighborhood structure. LOSAP has multiple optional operators for its perturbation and neighbor operators. These optional operators are modified from the traditional operators by limiting the distance of v from u. For generating v in LOSAP, there are three optional distance-limit levels: [u − 4, u + 4], [u − (D/5), u + (D/5)], and [1, D].
3. Proposed Two-Level Metaheuristic
For solving MPMJSP, this paper proposes a two-level metaheuristic consisting of lower- and upper-level algorithms. In this section, MPMLOLA and MPMUPLA represent the lower-level algorithm and the upper-level algorithm, respectively. The description of MPMLOLA is given in Section 3.1, and the description of MPMUPLA is given in Section 3.2.
3.1. MPMLOLA
MPMLOLA, as a variant of LOLA [11] and LOSAP [12], is a local search algorithm exploring a hybrid neighborhood structure. Like LOLA and LOSAP, MPMLOLA generates its neighborhood structure by using multiple optional operators. However, MPMLOLA changes many things from its older variants. Although MPMLOLA uses the delay-time limit (δ) as LOLA does, it uses δ in a different way and for a different purpose: while LOLA uses δ to select an operation, MPMLOLA uses δ to select an optional machine for each operation. In its remaining parts, MPMLOLA is more similar to LOSAP than to LOLA. The major changes of MPMLOLA from LOSAP are summarized below:
(i) While LOSAP's solution-decoding method generates JSP solutions, MPMLOLA's solution-decoding method generates MPMJSP solutions.
(ii) MPMLOLA is similar to LOSAP in that its optional operators are modified from the traditional operators by limiting the distance of v from u. However, while LOSAP has only three levels of the distance limit, the distance limit of MPMLOLA is adjustable to any possible range.
(iii) Unlike LOSAP, MPMLOLA applies the roulette-wheel method to select a neighbor operator. It also applies the roulette-wheel method to select optional operators for generating a perturbation operator.
For clarity, the description of MPMLOLA is divided into two parts: a description of its solution-decoding method and a description of its overall procedure. The solution-decoding method is described in Section 3.1.1, and the overall procedure is described in Section 3.1.2.
3.1.1. Solution-Decoding Method
MPMLOLA decodes a solution-representing permutation into a schedule by using the delay-time limit (δ) and the tie-break criterion (TB). Note that TB is used only if more than one optional machine satisfies δ. In this paper, every solution-representing permutation is in the form of an operation-based permutation [33, 34]. An operation-based permutation is a permutation of the numbers 1, 2, …, n in which the number i (i = 1, 2, …, n) appears n_{i} times. Recall that n and n_{i} denote the number of all jobs and the number of all operations of job J_{i}, respectively. In the permutation, the kth appearance of the number i represents the operation O_{ik}. A schedule is then constructed by scheduling all operations one-by-one in the order given by the permutation. Each operation must be processed by its optional machine that satisfies δ and TB, and it must be started as early as this machine allows. Note that the use of δ in this paper differs from those in other studies, e.g., [10, 11, 14–16]. While MPMLOLA uses δ to select an optional machine for each operation, the other studies use δ to select an operation into the timetable.
As mentioned above, δ ∈ [0, 1) and TB ∈ {lowest, highest} are used to select an optional machine for each operation. If δ = 0, each operation must be processed on the optional machine that can start processing it earliest. When δ is assigned a larger value, the maximum delay time allowed for each operation becomes longer; consequently, the number of optional machines that satisfy δ for each operation may increase. If more than one optional machine satisfies δ, then TB is required as a tie breaker. If TB is set to lowest, the lowest-indexed optional machine that satisfies δ is selected; otherwise, the highest-indexed optional machine that satisfies δ is selected.
Algorithm 1 presents the solution-decoding method used by MPMLOLA. The algorithm uses δ and TB, as its input parameters, to transform an operation-based permutation into an MPMJSP solution. Note that Algorithm 1 may return a different schedule from the same operation-based permutation if the values of δ and TB are changed. Notation used in Algorithm 1 is defined below:
(i) Let D denote the number of all operations in the MPMJSP instance being solved. Thus, D = n_{1} + n_{2} + … + n_{n}, where n_{i} is the number of all operations of job J_{i} (i = 1, 2, …, n).
(ii) Let П denote the sequence of operations transformed from the operation-based permutation.
(iii) Let Φ denote the schedule transformed from П.
(iv) Let O_{i′k′} denote the k′th operation of job J_{i′}; it represents the as-yet-unscheduled operation whose turn it currently is to be scheduled.
(v) Let m_{i′k′} denote the number of all optional machines of O_{i′k′}.
(vi) Let E_{i′k′l} denote the lth optional machine of O_{i′k′}, where l = 1, 2, …, m_{i′k′}.
(vii) Let E denote the machine chosen to process O_{i′k′}. This machine must be chosen from all E_{i′k′l} (l = 1, 2, …, m_{i′k′}).
(viii) Let δ be a real number within [0, 1) denoting the delay-time limit.
(ix) Let TB ∈ {lowest, highest} denote the tie-break criterion for selecting an optional machine to process O_{i′k′}. It is used only if more than one optional machine satisfies δ. If TB = lowest, let E be the machine with the lowest l among all E_{i′k′l} (where l = 1, 2, …, m_{i′k′}) that satisfy δ. If TB = highest, let E be the machine with the highest l among all E_{i′k′l} that satisfy δ.
(x) Let τ_{i′k′} denote the processing time of O_{i′k′}.
(xi) Let σ_{M} denote the minimum of the earliest available times of all optional machines of O_{i′k′}.
(xii) Let σ_{J} denote the earliest possible start time of O_{i′k′} within job J_{i′}. This means σ_{J} is equal to the finish time of O_{i′(k′−1)}. If O_{i′k′} has no immediately preceding operation, then σ_{J} is equal to 0.
(xiii) Let σ denote the earliest possible start time of O_{i′k′}. It is equal to the maximum of σ_{M} and σ_{J}.
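Algorithm 1 itself is not reproduced in this excerpt. Its machine-selection step can be sketched as below, under the assumption that a machine "satisfies δ" when its achievable start time for O_{i′k′} exceeds the earliest possible start time σ by at most δ·τ_{i′k′}; this reading matches the statements that δ = 0 forces the earliest-starting machine and that a larger δ admits more machines, but the exact criterion is the one in Algorithm 1. The function name and signature are mine.

```python
def choose_machine(avail, machines, sigma_J, tau, delta, tb):
    """Pick a machine for the current operation O_i'k' (sketch).

    avail    : dict machine-index -> earliest available time
    machines : sorted optional-machine indices of O_i'k'
    sigma_J  : finish time of the job predecessor (0 if none)
    tau      : processing time tau_i'k'
    delta    : delay-time limit in [0, 1)
    tb       : tie-break criterion, "lowest" or "highest"
    """
    # Achievable start time of O_i'k' on each optional machine.
    start = {j: max(avail[j], sigma_J) for j in machines}
    sigma = min(start.values())  # earliest possible start time of O_i'k'
    # Assumed delta-criterion: allow a start delay of at most delta * tau.
    ok = [j for j in machines if start[j] <= sigma + delta * tau]
    chosen = min(ok) if tb == "lowest" else max(ok)
    return chosen, start[chosen]
```

For example, with machine availabilities {M_2: 5, M_4: 1, M_5: 3}, σ_J = 0, and τ = 4, δ = 0 selects M_4 (start 1), while δ = 0.6 also admits M_5, which TB = highest then selects.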

3.1.2. Procedure of MPMLOLA
MPMLOLA generates its neighborhood structure based on multiple optional operators. These optional operators consist of the d-swap, d-insert, d-reverse, D-swap, D-insert, and D-reverse operators. Note that D-swap, D-insert, and D-reverse are identical to the traditional swap, insert, and reverse operators, respectively. Notation and terminology used to define these operators are given below:
(i) Let u and v be integers used to point at two member positions in an operation-based permutation.
(ii) Let d denote a distance limit of v from u. It is used for specifying d-swap, d-insert, and d-reverse.
(iii) Let D denote the number of all members in the solution-representing permutation, which equals the number of all operations in the MPMJSP instance. Thus, D = n_{1} + n_{2} + … + n_{n}, where n_{i} is the number of all operations of job J_{i}.
(iv) d-swap swaps the two members in the uth and vth positions in the permutation. Let u be randomly selected from 1 to D. Then, let v be randomly selected from max(1, u − d) to min(u + d, D), excluding u.
(v) d-insert removes the member from the uth position in the permutation and then inserts it into the vth position. Let u be randomly selected from 1 to D. Then, let v be randomly selected from max(1, u − d) to min(u + d, D), excluding u.
(vi) d-reverse reverses the positions of all members from the uth to the vth positions in the permutation. Let u be randomly selected from 1 to D. Then, let v be randomly selected from max(1, u − d) to min(u + d, D), excluding u.
(vii) D-swap swaps the two members in the uth and vth positions in the permutation. Let u and v be two different integers randomly selected from 1 to D.
(viii) D-insert removes the member from the uth position in the permutation and then inserts it into the vth position. Let u and v be two different integers randomly selected from 1 to D.
(ix) D-reverse reverses the positions of all members from the uth to the vth positions in the permutation. Let u and v be two different integers randomly selected from 1 to D.
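The d-operators differ from the D-operators only in how v is drawn. The shared position-drawing rule can be sketched as follows (the helper name is mine); passing d = None reproduces the unrestricted D-operator rule:

```python
import random

def pick_u_v(D, d=None, rng=random):
    """Draw positions u, v in 1..D. If d is given, restrict v to
    [max(1, u - d), min(u + d, D)] excluding u (the d-operator rule);
    otherwise draw v from all of 1..D excluding u (the D-operator rule)."""
    u = rng.randint(1, D)
    lo, hi = (1, D) if d is None else (max(1, u - d), min(u + d, D))
    candidates = [p for p in range(lo, hi + 1) if p != u]
    return u, rng.choice(candidates)
```

The drawn pair (u, v) is then fed to the corresponding swap, insert, or reverse move.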
Before an execution, values must be assigned to MPMLOLA's eight input parameters. The first input parameter, denoted by P, is the start operation-based permutation. The remaining seven input parameters consist of two parameters for specifying a perturbation operator, three parameters for generating its neighbor operators, and two parameters for selecting an optional machine. In MPMLOLA, the perturbation operator randomly applies one of D-swap, D-insert, and D-reverse to the start operation-based permutation n times (recall that n denotes the number of all jobs). In each of these n random selections, the roulette-wheel technique is applied to select one of D-swap, D-insert, and D-reverse. The probabilities of selecting D-swap and D-insert in the roulette wheel are the second and third input parameters of MPMLOLA, respectively. Consequently, the probability of selecting D-reverse is one minus the sum of the probabilities of selecting D-swap and D-insert.
The fourth to sixth input parameters are used to generate MPMLOLA's neighbor operators (i.e., d-swap, d-insert, and d-reverse). The fourth input parameter, denoted by d, is the distance limit of v from u for specifying d-swap, d-insert, and d-reverse. MPMLOLA then uses the roulette-wheel technique to randomly select one of d-swap, d-insert, and d-reverse to generate a neighbor solution-representing permutation. In the roulette wheel, the probabilities of selecting d-swap and d-insert are the fifth and sixth input parameters, respectively. The probability of selecting d-reverse is thus one minus the sum of the probabilities of selecting d-swap and d-insert.
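The roulette-wheel selection over the three operators can be sketched as follows, with the swap and insert probabilities as inputs and the reverse operator receiving the remaining probability (the function name is mine):

```python
import random

def roulette_pick(p_swap, p_insert, rng=random):
    """Roulette-wheel selection among three operators.
    p_swap + p_insert must be <= 1; 'reverse' gets the remainder."""
    r = rng.random()                 # r is uniform in [0, 1)
    if r < p_swap:
        return "swap"
    if r < p_swap + p_insert:
        return "insert"
    return "reverse"
```

The same routine serves both the neighbor-operator wheel (ρ_NS, ρ_NI) and the perturbation-operator wheel (ρ_S, ρ_I).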
The delay-time limit (δ) and the tie-break criterion (TB) are MPMLOLA's seventh and eighth input parameters, respectively. These two input parameters are used to select an optional machine for each operation when constructing a schedule. Thus, instead of using δ and TB itself, MPMLOLA passes the values of δ and TB into its solution-decoding method (Algorithm 1).
The overall procedure of MPMLOLA is given in Algorithm 2. Notation used in Algorithm 2 is defined below:
(i) Let D denote the number of all operations in the MPMJSP instance being solved. Thus, D = n_{1} + n_{2} + … + n_{n}, where n_{i} is the number of all operations of job J_{i}.
(ii) Let d denote the distance limit of v from u in the operation-based permutation for specifying d-swap, d-insert, and d-reverse.
(iii) Let ρ_{S} and ρ_{I}, which are real numbers within [0, 1), denote the probabilities of selecting D-swap and D-insert, respectively. Thus, the probability of selecting D-reverse is 1 − ρ_{S} − ρ_{I}.
(iv) Let ρ_{NS} and ρ_{NI}, which are real numbers within [0, 1), denote the probabilities of selecting d-swap and d-insert, respectively. Thus, the probability of selecting d-reverse is 1 − ρ_{NS} − ρ_{NI}.
(v) Let δ, which is a real number within [0, 1), denote the delay-time limit for selecting an optional machine for each operation.
(vi) Let TB ∈ {lowest, highest} denote the tie-break criterion for selecting an optional machine for each operation.
(vii) Let P denote the start operation-based permutation.
(viii) Let P_{0} denote the current best-found operation-based permutation. The initial P_{0} is generated from P via the perturbation operator.
(ix) Let S_{0}, which is decoded from P_{0}, denote the current best-found schedule. In addition, Makespan(S_{0}) stands for the makespan of S_{0}.
(x) Let P_{1} denote the current neighbor operation-based permutation.
(xi) Let S_{1}, which is decoded from P_{1}, denote the current neighbor schedule. In addition, Makespan(S_{1}) stands for the makespan of S_{1}.
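Algorithm 2 is not reproduced in this excerpt. Read from the description above, MPMLOLA is an iterated local search: perturb P into P_0, then repeatedly generate a roulette-selected neighbor P_1, decode it, and keep it when its makespan improves. The skeleton below follows that reading, with decoding, perturbation, and neighbor generation abstracted as caller-supplied functions; the strictly-improving acceptance rule and the fixed iteration budget are my assumptions.

```python
import random

def mpmlola_sketch(P, perturb, neighbor, makespan, iters, rng=random):
    """Skeleton of the lower-level local search (assumed structure).

    P        : start operation-based permutation
    perturb  : applies the perturbation operator to P
    neighbor : produces one neighbor permutation (d-operator via roulette)
    makespan : decodes a permutation (using delta and TB) into a schedule
               and returns its makespan
    """
    P0 = perturb(P, rng)          # initial best-found permutation
    best = makespan(P0)
    for _ in range(iters):
        P1 = neighbor(P0, rng)    # roulette-selected d-operator move
        m1 = makespan(P1)
        if m1 < best:             # keep strictly improving neighbors
            P0, best = P1, m1
    return P0, best
```

With a dummy objective in place of the schedule decoder, the loop behaves as expected: the returned value never exceeds the makespan of the perturbed start.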

3.2. MPMUPLA
MPMUPLA is the upper-level algorithm of the proposed two-level metaheuristic. It uses the same framework as the upper-level algorithms of [11, 12]. MPMUPLA is thus a population-based search algorithm that acts as a parameter controller. It evolves MPMLOLA's input-parameter values so that MPMLOLA can deliver its best performance on every single MPMJSP instance. At the tth iteration, MPMUPLA's population consists of N combinations of MPMLOLA's input-parameter values, i.e., C_{1}(t), C_{2}(t), …, C_{N}(t). For short, let a parameter-value combination stand for a combination of MPMLOLA's input-parameter values. Let C_{ψ}(t) denote the ψth parameter-value combination (where ψ = 1, 2, …, N) in the population at the tth iteration. It represents the value combination of MPMLOLA's input parameters, i.e., P, ρ_{S}, ρ_{I}, d, ρ_{NS}, ρ_{NI}, and TB.
The delay-time limit, δ, is an important MPMLOLA input parameter controlled by MPMUPLA. However, the value of δ is not assigned as a member of C_{ψ}(t); instead, it is controlled by MPMUPLA's iteration index, t. At MPMUPLA's first iteration (t = 1), the value of δ is set to 0.0 for MPMLOLA. After every further 50 MPMUPLA iterations, the value of δ is increased by 0.2 for MPMLOLA. This setting of δ was based on the result of a preliminary study in this research, which found that controlling δ by MPMUPLA's iteration index usually performs better than controlling δ as a member of C_{ψ}(t).
The transformations from the members of C_{ψ}(t) into MPMLOLA's parameter values are described below:
(i) Let c_{1ψ}(t) represent P. In other words, P is directly equal to c_{1ψ}(t) in the transformation.
(ii) Let c_{2ψ}(t), c_{3ψ}(t), and c_{4ψ}(t) ∈ ℝ be used together to determine the values of ρ_{S} and ρ_{I}. Their transformations are given in (1) and (2).
(iii) Let c_{5ψ}(t) ∈ ℝ be used to determine the value of d. In its transformation, let d be the integer rounded from 1 + c_{5ψ}(t)D, where D is the number of all operations in the MPMJSP instance. After that, reassign d = 1 if d < 1, and reassign d = D if d > D.
(iv) Let c_{6ψ}(t), c_{7ψ}(t), and c_{8ψ}(t) ∈ ℝ be used together to determine the values of ρ_{NS} and ρ_{NI}. Their transformations are given in (3) and (4).
(v) Let c_{9ψ}(t) ∈ ℝ be used to determine the value of TB. In its transformation, let TB = lowest if c_{9ψ}(t) < 0.5, and let TB = highest if c_{9ψ}(t) ≥ 0.5.
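The transformations of d and TB can be coded directly from the statements above; equations (1)–(4) for the roulette probabilities are not reproduced in this excerpt and are therefore omitted here. The rounding expression for d is partly garbled in the text, so the sketch assumes d = round(1 + c·D) before clamping (the function names are mine):

```python
def transform_d(c, D):
    """Map a real-valued combination member c to the distance limit d.
    Assumes the rule d = round(1 + c*D), then clamps d into [1, D]
    as stated in the text. Note: Python's round() uses banker's
    rounding on exact halves."""
    d = round(1 + c * D)
    if d < 1:
        d = 1
    elif d > D:
        d = D
    return d

def transform_tb(c):
    """Map a real-valued combination member to the tie-break criterion:
    'lowest' if c < 0.5, 'highest' otherwise."""
    return "lowest" if c < 0.5 else "highest"
```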
The overall procedure of MPMUPLA is given in Algorithm 3. Notation used in Algorithm 3 is defined below:
(i) Let N denote the number of all parameter-value combinations in MPMUPLA's population.
(ii) Let C_{ψ}(t) denote the ψth parameter-value combination (where ψ = 1, 2, …, N) in MPMUPLA's population at the tth iteration.
(iii) Let Score(C_{ψ}(t)) denote the performance score of C_{ψ}(t). Note that the lower the performance score, the better the performance.
(iv) After executing MPMLOLA with the parameter values given by C_{ψ}(t), let its final (best-found) operation-based permutation and final (best-found) schedule be denoted by P_{ψ}(t) and S_{ψ}(t), respectively.
(v) Let Makespan(S_{ψ}(t)) denote the makespan of S_{ψ}(t).
(vi) Let C_{best} ≡ (c_{1best}, c_{2best}, …, c_{9best}) denote the best parameter-value combination ever found by the population. In addition, let Score(C_{best}) denote the performance score of C_{best}.
(vii) Let S_{best} denote the best schedule ever found by the population.
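Algorithm 3 is not reproduced in this excerpt. The update rule described in Sections 1 and 2.2, each value moved by the sum of one vector toward and one vector away from the memorized best-found value, can be sketched for a single real-valued member as below; the uniform random magnitudes and their ranges are my assumptions, not the paper's.

```python
import random

def update_member(x, best, r1_max=1.0, r2_max=0.5, rng=random):
    """Move one real-valued member of a parameter-value combination by the
    sum of two opposite-direction vectors: one toward the best-found value
    and one away from it (magnitude ranges are illustrative)."""
    toward = rng.uniform(0.0, r1_max) * (best - x)   # pull toward best
    away = rng.uniform(0.0, r2_max) * (x - best)     # push away from best
    return x + toward + away
```

One consequence of this form is that a member already sitting at the best-found value receives a zero net move, since both vectors vanish.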

4. Experimental Results
In this paper, an experiment was conducted to compare MPMUPLA's results with those of TS, PSO, and CP. Here, TS, PSO, and CP represent the tabu-search algorithm [9], the particle swarm optimization algorithm [10], and the ILOG constraint programming optimizer [20], respectively. These three algorithms were chosen because they perform well on the benchmark instance sets used in this paper's experiment. In Sections 4 and 5, let MPMUPLA stand for the whole two-level metaheuristic, i.e., MPMUPLA combined with MPMLOLA, since MPMUPLA uses MPMLOLA as its component when solving MPMJSP.
In the comparison, this paper's experiment used three benchmark instance sets, i.e., Edata, Rdata, and Vdata, taken from [9, 20, 21]. Each instance set consists of 66 instances, modified from the well-known JSP benchmark instances [35–39]. The difference among the three instance sets is the number of optional machines of each operation in their instances. In Edata, the average number of optional machines of each operation is 1.15, and the maximum number of optional machines of each operation is 2 or 3. In Rdata, the average and maximum numbers of optional machines of each operation are 2 and 3, respectively. In Vdata, the average and maximum numbers of optional machines of each operation are 0.5m and 0.8m, respectively (where m is the number of all machines in an instance).
The parameter settings of MPMUPLA used in the experiment are shown below:
(i) The population of MPMUPLA consisted of three parameter-value combinations (i.e., N = 3 in Algorithm 3).
(ii) The stopping criterion of MPMUPLA was to stop when any of the following conditions was satisfied:
(a) The 1,000th iteration (i.e., t = 1,000 in Algorithm 3) was reached.
(b) The 150th minute of computational time was reached.
(c) The known optimal solution was found. If the optimal solution is still unknown, its lower bound [20, 21] was used instead.
(iii) MPMUPLA was coded in C# and executed on an Intel® Core™ i7-3520M @ 2.90 GHz with 4 GB of RAM (3.87 GB usable).
(iv) For each instance, MPMUPLA was executed for five trials with different random-seed numbers.
With the above settings, the experiment's results on Edata, Rdata, and Vdata are presented in Tables 1 to 3, respectively. These tables first show the name, size, and best-known solution value of each instance. The best-known solution value, given by the literature, stands for the upper bound of the optimal solution value. For each instance, each table then shows the best-found solution values of TS [9], PSO [10], CP [20], and MPMUPLA. The best-found solution values of TS, PSO, and CP are their best solution values taken from their original articles [9, 10] and [20], respectively. For MPMUPLA, its best-found solution value on each instance is the minimum of the best-found solution values from its five trials in this experiment. Each table also shows the average of the best-found solution values, the average number of used iterations, and the average computational time over MPMUPLA's five trials on each instance.
The notation and terminology used for each instance in Tables 1 to 3 are given as follows:
(i) The Instance column presents the name of each instance.
(ii) n and m denote the number of jobs and the number of machines, respectively, in the instance.
(iii) A solution denotes an MPMJSP schedule, and a solution value denotes the makespan of an MPMJSP schedule.
(iv) The BKS column presents the best-known solution value given by the literature, e.g., [20, 21]. If the best-known solution value has been proven to be an optimal solution value, it is presented without parentheses. Otherwise, it is an upper bound of the optimal solution value and is presented within parentheses.
(v) If the best-found solution value of MPMUPLA is better than the best-known solution value given by the literature, the best-found solution value of MPMUPLA becomes the new best-known solution value. When such a case occurs, an arrow symbol (⟶) is given in front of the value.
(vi) Best represents the best-found solution value of each algorithm. The Best of MPMUPLA was taken from the five trials in this experiment. The Bests of TS, PSO, and CP were taken from [9, 10] and [20], respectively. N/A means the best-found solution value does not appear in the original article.
(vii) Avg represents the average of the best-found solution values over the five trials of MPMUPLA.
(viii) No of Iters and Time stand for the average number of iterations and the average computational time (in HH:MM:SS format), respectively, until MPMUPLA’s stopping criterion is met.
For each instance set, Table 4 shows the Avg %BD of each algorithm on each instance category. For each algorithm, the %BD of an instance denotes the percent deviation of the best-found solution value from the best-known solution value. Avg %BD then denotes the average of the %BDs of all instances in a category. In Table 4, each instance is classified into one of 13 instance categories based on its source and size. The details of these 13 categories are given below:
(i) M6-20 consists of three instances, i.e., M6, M10, and M20.
(ii) LA1-5 consists of five 10-job/5-machine instances, i.e., LA1, LA2, …, LA5.
(iii) LA6-10 consists of five 15-job/5-machine instances, i.e., LA6, LA7, …, LA10.
(iv) LA11-15 consists of five 20-job/5-machine instances, i.e., LA11, LA12, …, LA15.
(v) LA16-20 consists of five 10-job/10-machine instances, i.e., LA16, LA17, …, LA20.
(vi) LA21-25 consists of five 15-job/10-machine instances, i.e., LA21, LA22, …, LA25.
(vii) LA26-30 consists of five 20-job/10-machine instances, i.e., LA26, LA27, …, LA30.
(viii) LA31-35 consists of five 30-job/10-machine instances, i.e., LA31, LA32, …, LA35.
(ix) LA36-40 consists of five 15-job/15-machine instances, i.e., LA36, LA37, …, LA40.
(x) ABZ5-6 consists of two 10-job/10-machine instances, i.e., ABZ5 and ABZ6.
(xi) ABZ7-9 consists of three 20-job/15-machine instances, i.e., ABZ7, ABZ8, and ABZ9.
(xii) CAR1-8 consists of eight instances, i.e., CAR1, CAR2, …, CAR8.
(xiii) ORB1-10 consists of ten 10-job/10-machine instances, i.e., ORB1, ORB2, …, ORB10.
In addition to the 13 categories, Table 4 includes four more instance categories, i.e., M6-LA40, SM-M6-LA40, M6-ORB10, and SM-M6-ORB10. These additional categories were used for comparing the performance of the algorithms. The SM in SM-M6-LA40 and SM-M6-ORB10 indicates that these categories contain only small-to-medium instances. An instance is defined as small-to-medium if nm < 150 and as large otherwise (where n is the number of jobs and m is the number of machines). The details of these four additional categories are given below:
(i) M6-LA40 consists of the first 43 of all 66 instances, starting from M6 to LA40. These 43 instances were used by TS [9] and PSO [10] in their original articles.
(ii) SM-M6-LA40 consists of all 23 small-to-medium instances from M6-LA40.
(iii) M6-ORB10 consists of all 66 instances, starting from M6 to ORB10. These 66 instances were used by CP [20] in its original article.
(iv) SM-M6-ORB10 consists of all 43 small-to-medium instances from M6-ORB10.
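The two bookkeeping rules above, i.e., the %BD deviation measure and the small-to-medium size classification, can be sketched as follows. This is a minimal illustration of the definitions in the text, not the author’s code (which was written in C#):

```python
def percent_bd(best_found, best_known):
    """Percent deviation (%BD) of an algorithm's best-found makespan
    from the best-known makespan; lower is better, and 0 means the
    best-known solution value was reached."""
    return 100.0 * (best_found - best_known) / best_known

def avg_percent_bd(pairs):
    """Avg %BD over all (best_found, best_known) pairs of a category."""
    bds = [percent_bd(f, k) for f, k in pairs]
    return sum(bds) / len(bds)

def is_small_to_medium(n, m):
    """Size rule used by the SM categories: an n-job/m-machine
    instance is small-to-medium if nm < 150, large otherwise."""
    return n * m < 150
```

For example, a 10-job/10-machine instance (nm = 100) is small-to-medium, while a 15-job/10-machine instance (nm = 150) is large.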
5. Result Analysis and Discussion
This section analyzes and discusses the results shown in Section 4. As in Section 4, MPMUPLA in this section stands for the whole two-level metaheuristic, i.e., MPMUPLA combined with MPMLOLA. The performance of MPMUPLA was compared with that of TS [9], PSO [10], and CP [20] via three performance indicators. These indicators are the number of instances on which an algorithm finds the best-known solution, the number of instances won by one algorithm against another, and the average percent deviation of an algorithm’s best-found solution value from the best-known solution value (Avg %BD). For each instance, the best-known solution value means the best solution value found in the published literature. The only exception is LA31 of Rdata, whose best-known solution value was taken from the best-found solution value of MPMUPLA. The reason is that, on LA31 of Rdata, MPMUPLA found a better solution than the previously published best-known solution.
For each instance set, this section separates the analyses on the first 43 instances from those on all 66 instances. The reason is that the results of TS and PSO were given for only the 43 instances in their original articles [9, 10], while the results of CP were given for the 66 instances in its original article [20]. In addition, this section separates the analyses on small-to-medium instances from those on all given instances. Note that all instances with nm < 150 are defined as small-to-medium instances (where n is the number of jobs and m is the number of machines). Sections 5.1 to 5.3 present the analyses and discussions via the three given indicators. Then, Section 5.4 provides an overall summary of Sections 5.1 to 5.3.
5.1. The Number of Instances Achieved in Finding the Best-Known Solutions
For each instance set, this section first compares the number of instances on which the best-known solutions were found by MPMUPLA with those by TS, PSO, and CP on the first 43 instances. The numbers of instances achieved by each algorithm in Edata, Rdata, and Vdata were counted from Tables 1, 2, and 3, respectively. For the first 43 instances, MPMUPLA clearly outperforms the three other algorithms on all three instance sets, especially Rdata. For each instance set, the comparison results are given below:
(i) For the first 43 instances of Edata, TS, PSO, CP, and MPMUPLA reach the best-known solutions on 10, 15, 22, and 27 instances, respectively.
(ii) For the first 43 instances of Rdata, TS, PSO, CP, and MPMUPLA reach the best-known solutions on 4, 4, 8, and 24 instances, respectively. MPMUPLA also found a new best-known solution value (i.e., 1520) for LA31 of Rdata. This value is proven optimal because it equals the lower bound of the optimal solution value given in [20].
(iii) For the first 43 instances of Vdata, TS, PSO, CP, and MPMUPLA reach the best-known solutions on 14, 13, 23, and 28 instances, respectively.
For each instance set, this section then compares the number of instances achieved by MPMUPLA with that by CP on all 66 instances. For all 66 instances, MPMUPLA outperforms CP on all three instance sets, especially Rdata. For each instance set, the comparison results are given below:
(i) For all 66 instances of Edata, CP and MPMUPLA reach the best-known solutions on 32 and 44 instances, respectively.
(ii) For all 66 instances of Rdata, CP and MPMUPLA reach the best-known solutions on 14 and 38 instances, respectively.
(iii) For all 66 instances of Vdata, CP and MPMUPLA reach the best-known solutions on 39 and 45 instances, respectively.
In conclusion, MPMUPLA outperforms TS, PSO, and CP in finding the best-known solutions on all three instance sets, especially Rdata. On Rdata, the number of instances achieved by MPMUPLA is more than double that achieved by each of the others. Moreover, MPMUPLA also found a new best-known solution value on LA31 of Rdata.
5.2. The Number of Instances Won
This section first compares the number of instances won by MPMUPLA with those by TS, PSO, and CP on the first 43 instances of each instance set. Note that the first 43 instances include 23 small-to-medium instances. The numbers of instances won in Edata, Rdata, and Vdata were counted from Tables 1, 2, and 3, respectively. For the first 43 instances, MPMUPLA clearly outperforms the three other algorithms on Edata and Rdata; on Vdata, MPMUPLA outperforms TS and PSO but underperforms CP. However, when considering only the 23 small-to-medium instances, MPMUPLA outperforms CP on Vdata. For each instance set, the comparison results are detailed below:
(i) Out of the first 43 instances of Edata, MPMUPLA has 33 wins, 10 draws, and 0 losses against TS; it has 28 wins, 15 draws, and 0 losses against PSO. In addition, it has 21 wins, 20 draws, and 2 losses against CP.
(ii) Out of the first 43 instances of Rdata, MPMUPLA has 39 wins, 4 draws, and 0 losses against each of TS and PSO. In addition, it has 29 wins, 11 draws, and 3 losses against CP.
(iii) Out of the first 43 instances of Vdata, MPMUPLA has 19 wins, 13 draws, and 11 losses against TS; it has 18 wins, 14 draws, and 11 losses against PSO. In addition, it has 7 wins, 21 draws, and 15 losses against CP. However, when considering only the 23 small-to-medium instances, MPMUPLA has 7 wins, 16 draws, and 0 losses against CP.
Then, this section compares the number of instances won by MPMUPLA with that by CP on all 66 instances of each instance set. Note that the 66 instances include 43 small-to-medium instances. For the 66 instances, MPMUPLA outperforms CP on Edata and Rdata, but it underperforms CP on Vdata. However, when considering only the 43 small-to-medium instances, MPMUPLA clearly outperforms CP on all three instance sets. For each instance set, the comparison results are detailed below:
(i) Out of all 66 instances of Edata, MPMUPLA has 34 wins, 29 draws, and 3 losses against CP. Out of the 43 small-to-medium instances of Edata, it has 17 wins, 24 draws, and 2 losses against CP.
(ii) Out of all 66 instances of Rdata, MPMUPLA has 44 wins, 17 draws, and 5 losses against CP. Out of the 43 small-to-medium instances of Rdata, it has 28 wins, 15 draws, and 0 losses against CP.
(iii) Out of all 66 instances of Vdata, MPMUPLA has 11 wins, 35 draws, and 20 losses against CP. Out of the 43 small-to-medium instances of Vdata, it has 11 wins, 30 draws, and 2 losses against CP.
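The win/draw/loss counts above follow mechanically from the per-instance best-found makespans: a win for one algorithm on an instance is a strictly lower makespan than the other algorithm’s, a draw is an equal one, and a loss is a strictly higher one. A minimal sketch (the sample makespans are illustrative, not taken from Tables 1 to 3):

```python
def wins_draws_losses(a, b):
    """Count, instance by instance, how often algorithm A's best-found
    makespan is lower than (win), equal to (draw), or higher than
    (loss) algorithm B's on the same instance."""
    wins = sum(1 for x, y in zip(a, b) if x < y)
    draws = sum(1 for x, y in zip(a, b) if x == y)
    losses = len(a) - wins - draws
    return wins, draws, losses

# Illustrative best-found makespans only (not values from the tables):
algo_a = [666, 655, 597, 590]
algo_b = [666, 660, 593, 600]
print(wins_draws_losses(algo_a, algo_b))  # (2, 1, 1)
```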
In conclusion, in terms of the number of instances won, MPMUPLA outperforms the three other algorithms on Edata and Rdata. On Vdata, MPMUPLA outperforms TS and PSO but underperforms CP. However, when considering only small-to-medium instances, MPMUPLA outperforms CP on Vdata.
5.3. Avg %BD
This section analyzes the Avg %BDs in Table 4. It first analyzes the Avg %BDs of the first 43 instances of each instance set and then those of the 23 small-to-medium instances among the first 43 instances. In Table 4, the rows M6-LA40 and SM-M6-LA40 provide the Avg %BDs of the first 43 instances and those of the 23 small-to-medium instances, respectively. In the Avg %BDs of the first 43 instances, MPMUPLA outperforms the three other algorithms on Edata and Rdata, but it underperforms them on Vdata. When considering only the 23 small-to-medium instances, MPMUPLA clearly outperforms the three other algorithms on all three instance sets. For each instance set, the analysis results are detailed below:
(i) For the first 43 instances of Edata, MPMUPLA’s Avg %BD (i.e., 0.24%) is much better than those of TS, PSO, and CP (i.e., 3.16%, 2.26%, and 0.69%, respectively). Based on these 43 instances, paired t-tests concluded that the mean %BD of MPMUPLA is significantly better than those of TS, PSO, and CP (with p values of 3 × 10⁻¹⁰, 1 × 10⁻⁸, and 0.0002, respectively). When considering only the 23 small-to-medium instances, MPMUPLA’s Avg %BD (i.e., 0.02%) is also much better than those of TS, PSO, and CP (i.e., 2.07%, 0.65%, and 0.27%, respectively).
(ii) For the first 43 instances of Rdata, MPMUPLA’s Avg %BD (i.e., 0.57%) is much better than those of TS, PSO, and CP (i.e., 1.87%, 3.76%, and 0.77%, respectively). Based on these 43 instances, paired t-tests concluded that the mean %BD of MPMUPLA is significantly better than those of TS, PSO, and CP (with p values of 4 × 10⁻⁷, 1 × 10⁻⁷, and 0.00004, respectively). When considering only the 23 small-to-medium instances, MPMUPLA’s Avg %BD (i.e., 0.02%) is also much better than those of TS, PSO, and CP (i.e., 0.87%, 0.96%, and 0.16%, respectively).
(iii) For the first 43 instances of Vdata, MPMUPLA’s Avg %BD (i.e., 0.59%) is worse than those of TS, PSO, and CP (i.e., 0.44%, 0.56%, and 0.08%, respectively). However, when considering only the 23 small-to-medium instances, MPMUPLA’s Avg %BD (i.e., 0.00%) is better than those of TS, PSO, and CP (i.e., 0.26%, 0.13%, and 0.04%, respectively). Based on the 23 small-to-medium instances, paired t-tests concluded that the mean %BD of MPMUPLA on small-to-medium instances is significantly better than those of TS, PSO, and CP (with p values of 0.001, 0.0007, and 0.004, respectively).
For each instance set, this section then compares the Avg %BDs of MPMUPLA and CP over all 66 instances and over their 43 small-to-medium instances. In Table 4, the rows M6-ORB10 and SM-M6-ORB10 provide the Avg %BDs of all 66 instances and those of the 43 small-to-medium instances, respectively. For all 66 instances, MPMUPLA outperforms CP on Edata and Rdata, but it underperforms CP on Vdata. However, when considering only the 43 small-to-medium instances, MPMUPLA clearly outperforms CP on all three instance sets. For each instance set, the comparison results are detailed below:
(i) For all 66 instances of Edata, MPMUPLA’s Avg %BD (i.e., 0.26%) is better than CP’s (i.e., 0.96%). A paired t-test concluded that the mean %BD of MPMUPLA is significantly better than that of CP (with a p value of 0.00001). When considering only the 43 small-to-medium instances, MPMUPLA’s Avg %BD (i.e., 0.07%) is also better than CP’s (i.e., 0.67%).
(ii) For all 66 instances of Rdata, MPMUPLA’s Avg %BD (i.e., 0.57%) is better than CP’s (i.e., 0.87%). A paired t-test concluded that the mean %BD of MPMUPLA is significantly better than that of CP (with a p value of 0.00002). When considering only the 43 small-to-medium instances, MPMUPLA’s Avg %BD (i.e., 0.24%) is also better than CP’s (i.e., 0.55%).
(iii) For all 66 instances of Vdata, MPMUPLA’s Avg %BD (i.e., 0.78%) is worse than CP’s (i.e., 0.07%). However, when considering only the 43 small-to-medium instances, MPMUPLA’s Avg %BD (i.e., 0.01%) is better than CP’s (i.e., 0.04%). Based on the 43 small-to-medium instances, a paired t-test concluded that the mean %BD of MPMUPLA on small-to-medium instances is significantly better than that of CP (with a p value of 0.002).
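The paired t-tests above compare, instance by instance, the %BDs of two algorithms on the same benchmark set. A minimal sketch of the test statistic is given below; the p value is then read from a t distribution with n − 1 degrees of freedom (e.g., via scipy.stats.t, assuming SciPy is available), and the sample %BDs here are illustrative, not the values behind Table 4:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(x, y):
    """t statistic of a paired t-test on two matched samples, e.g.,
    the per-instance %BDs of two algorithms on the same instances:
    t = mean(d) / (stdev(d) / sqrt(n)), where d holds the pairwise
    differences and stdev is the sample standard deviation."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))

# Illustrative %BDs only (not the values from Table 4):
t = paired_t_statistic([1.0, 1.0, 2.0], [0.0, 0.0, 0.0])
print(round(t, 6))  # 4.0
```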
In conclusion, based on the Avg %BDs, MPMUPLA clearly outperforms the three other algorithms on Edata and Rdata, but it underperforms them on Vdata. When considering only the small-to-medium instances, MPMUPLA outperforms the three other algorithms on all three instance sets.
5.4. Overall Summary
In the number of instances achieved in finding the best-known solutions, MPMUPLA outperforms the three other algorithms on all three instance sets. In the number of instances won, MPMUPLA outperforms the three other algorithms on Edata and Rdata; on Vdata, it outperforms TS and PSO but underperforms CP. However, when considering only small-to-medium instances, MPMUPLA outperforms CP on Vdata in the number of instances won. In Avg %BD, MPMUPLA outperforms the three other algorithms on Edata and Rdata, but it underperforms them on Vdata. However, when considering only small-to-medium instances, MPMUPLA outperforms the three other algorithms on Vdata. In conclusion, MPMUPLA usually performs very well on the MPMJSP instances where each operation has fewer than four optional machines (e.g., the instances in Edata and Rdata). When each operation has many optional machines (i.e., ≥ 0.5m optional machines), MPMUPLA usually performs well only on small-to-medium instances (i.e., the instances with nm < 150).
6. Conclusion
In this paper, a two-level metaheuristic was proposed for solving MPMJSP. The two-level metaheuristic consists of MPMUPLA and MPMLOLA as its upper- and lower-level algorithms, respectively. MPMUPLA, a population-based algorithm, acts as MPMLOLA’s parameter controller. MPMLOLA is a local search algorithm that searches for an optimal MPMJSP solution. MPMLOLA has many changes from its older variants, such as in its perturbation and neighbor operators. It also uses a unique method to select an optional machine for each operation. MPMUPLA’s function is to evolve MPMLOLA’s input-parameter values so that MPMLOLA can perform at its best on every single instance. In this paper’s experiment, the performance of the two-level metaheuristic was evaluated on the three instance sets, i.e., Edata, Rdata, and Vdata. The experiment’s results indicated that the two-level metaheuristic performs very well on Edata and Rdata. On Vdata, the two-level metaheuristic usually performs well only on the small-to-medium instances. Thus, future research should focus on enhancing the two-level metaheuristic’s performance, especially on the large instances of Vdata.
Data Availability
The data used to support the findings of this study are available from the author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest.