Abstract

This paper proposes a two-level metaheuristic, consisting of lower- and upper-level algorithms, for the job-shop scheduling problem with multipurpose machines. The lower-level algorithm is a local search algorithm that searches for an optimal solution. The upper-level algorithm is a population-based metaheuristic that controls the lower-level algorithm’s input parameters. With the upper-level algorithm, the lower-level algorithm can reach its best performance on every problem instance. Most of the changes of the proposed two-level metaheuristic from its original variants are in the lower-level algorithm. The main purpose of these changes is to increase the diversity of the solution neighborhood structures. One change is that the neighbor operators of the proposed lower-level algorithm are developed to be more adjustable. Another is that the roulette-wheel technique is applied both to select a neighbor operator and to generate a perturbation operator. In addition, the proposed lower-level algorithm uses an adjustable delay-time limit to select an optional machine for each operation. The performance of the proposed two-level metaheuristic was evaluated on well-known benchmark instances, and the results indicate that it performs well on most of them.

1. Introduction

The job-shop scheduling problem (JSP) is a well-known NP-hard optimization problem [13]. JSP involves scheduling jobs onto machines in order to minimize makespan, i.e., the schedule’s length. Each job consists of a number of operations, where each operation must be processed on a predetermined machine with a predetermined processing time. To complete each job, all of its operations must be processed in sequence from the first to the last operation. JSP has many variants and related problems, such as those in [4–7]. One of the well-known variants of JSP is the job-shop scheduling problem with multipurpose machines (MPMJSP) [8–10]. MPMJSP is defined as a generalized variant of JSP. The only difference between them is that each operation in JSP has only one predetermined machine, while each operation in MPMJSP may have more than one optional machine. This difference makes MPMJSP closer to real-world applications in modern factories than JSP because, nowadays, most machines are designed for multiple tasks.

In this paper, the research objective is to develop a high-performing algorithm for MPMJSP. To this end, this paper proposes a two-level metaheuristic, based on the framework of [5, 11, 12], consisting of upper- and lower-level algorithms. The upper-level algorithm is a population-based metaheuristic that acts as a parameter controller for the lower-level algorithm. The upper-level algorithm’s population consists of parameter-value combinations of the lower-level algorithm. In a parameter-value combination, each parameter’s value is iteratively changed by the sum of two changeable opposite-direction vectors. The directions of the first and second vectors are toward and away from, respectively, the memorized best-found value. The lower-level algorithm is a local search algorithm that searches for an optimal solution of the MPMJSP instance being solved. Like other metaheuristics, the lower-level algorithm cannot perform at its best on all instances with a single combination of input-parameter values. This drawback is overcome when its input parameters are controlled by the upper-level algorithm.

The proposed two-level metaheuristic is modified from its original variant [12], and the main differences lie in their lower-level algorithms. Both lower-level algorithms search for an optimal solution in hybrid neighborhood structures via their optional operators. Their optional neighbor operators are similarly modified from the traditional operators (i.e., swap, insert, and reverse) by limiting the distance between the positions of the two members selected in a solution-representing permutation. However, while the lower-level algorithm of [12] has only three levels of the distance limit, the distance limit in this paper is adjustable to any possible range. Another main difference is in the methods of generating their hybrid neighborhood structures. To generate each neighbor solution-representing permutation, the lower-level algorithm of [12] uses a given probability to select one of two neighbor operators. Instead, the proposed lower-level algorithm uses the roulette-wheel method [13] to select one of three neighbor operators. In this paper, the roulette-wheel method is also applied to select multiple optional operators for generating a perturbation operator. The purpose of using the roulette-wheel method is to further diversify the hybrid neighborhood structure.

As mentioned, each operation in MPMJSP has one or more optional machines. The lower-level algorithm proposed in this paper uses the delay-time limit (δ), as an input parameter, to define the criterion for selecting a machine for each operation. This use of δ differs from the uses of δ in other studies, e.g., [10, 11, 14–17]. While the proposed lower-level algorithm uses δ to select a machine for each operation, the other studies use δ to select an operation into a timetable. The method by which the proposed lower-level algorithm generates a schedule is briefly presented as follows: First, the appearance order of all operations in a solution-representing permutation is used as a priority order for all operations. Then, every operation is assigned one-by-one into a schedule in the given priority order. When being assigned, each operation must be processed on the machine that satisfies δ, and it must be started as early as that machine can.

The remainder of this paper is divided into five sections. Section 2 describes MPMJSP and reviews previous research relevant to the proposed two-level metaheuristic. Section 3 describes the proposed two-level metaheuristic in detail. Section 4 presents the experimental results of evaluating the proposed two-level metaheuristic’s performance. Section 5 then analyzes and discusses these results. Finally, Section 6 concludes the research’s findings.

2. Preliminaries

In this section, the description of MPMJSP is given in Section 2.1, and a review of previous research relevant to the proposed two-level metaheuristic is given in Section 2.2.

2.1. Description of MPMJSP

The job-shop scheduling problem with multipurpose machines (MPMJSP) is classified as a generalization of the job-shop scheduling problem (JSP). The only difference between JSP and MPMJSP is the number of optional machines of each operation. That is, while each operation in JSP has only one predetermined machine, each operation in MPMJSP has one or more optional machines. MPMJSP thus reduces to JSP if each of its operations has only one optional machine. A similar variant of MPMJSP is the flexible job-shop scheduling problem (FJSP) [18, 19]. MPMJSP and FJSP are both JSP variants in which each operation may have more than one optional machine. However, the processing time of an operation in FJSP may change when its selected optional machine changes, while the processing time of each operation in MPMJSP is fixed for all of its optional machines. This means MPMJSP is a special case of FJSP in which, for each operation, all optional machines have the same processing time.

Notation used to describe MPMJSP in this paper is defined below:
(i) Let n denote the number of all jobs in MPMJSP.
(ii) Let m denote the number of all machines in MPMJSP.
(iii) Let Ji denote the i-th job in MPMJSP, where i = 1, 2, …, n.
(iv) Let Mj denote the j-th machine in MPMJSP, where j = 1, 2, …, m.
(v) Let ni denote the number of all operations of Ji.
(vi) Let Oik denote the k-th operation of Ji, where k = 1, 2, …, ni.
(vii) Let τik denote the processing time of Oik.
(viii) Let mik denote the number of all optional machines of Oik, where i = 1, 2, …, n, and k = 1, 2, …, ni.
(ix) Let Eikl denote the l-th optional machine of Oik, where l = 1, 2, …, mik, and the optional machines of Oik are drawn from {M1, M2, …, Mm}. In other words, Eikl is the Mj that has the l-th lowest value of j among all optional machines of Oik. For example, suppose O12 has three optional machines, i.e., M2, M4, and M5. Then, E121 = M2, E122 = M4, and E123 = M5.

MPMJSP comes with n given jobs (J1, J2, …, Jn) and m given machines (M1, M2, …, Mm). Each job Ji consists of a sequence of ni given operations (Oi1, Oi2, …, Oini) as a chain of precedence constraints. To complete each job Ji, Oi1 must be finished before Oi2 can start; Oi2 must be finished before Oi3 can start, and so on. Each Oik must be processed by one of Eik1, Eik2, …, Eikmik with the processing time τik. Each machine cannot process more than one operation at a time, and it cannot be stopped while processing an operation. At the beginning (i.e., time 0), all jobs have already arrived, and no machine is occupied. An optimal schedule is a feasible schedule that minimizes makespan (i.e., the schedule’s length).
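To make the notation above concrete, an instance can be held in a small data structure. The following sketch (all names are ours, not the paper’s) stores, for each operation Oik, its processing time τik and its tuple of optional machines Eik1, Eik2, …:

```python
from dataclasses import dataclass

# Hypothetical minimal representation of an MPMJSP instance; field names are
# illustrative, not taken from the paper.
@dataclass
class Operation:
    job: int               # index i of the owning job J_i
    index: int             # index k within the job, i.e., O_ik
    processing_time: int   # tau_ik (identical on every optional machine)
    machines: tuple        # optional machines E_ik1, ..., E_ikm_ik (machine indices)

def make_job(i, op_specs):
    """Build job J_i from a list of (processing_time, optional_machines) pairs,
    preserving the precedence chain O_i1 -> O_i2 -> ..."""
    return [Operation(i, k + 1, t, tuple(ms)) for k, (t, ms) in enumerate(op_specs)]

# Example: J_1 has two operations; O_12 may run on M_2, M_4, or M_5.
job1 = make_job(1, [(3, [1]), (5, [2, 4, 5])])
```

With this layout, a complete instance is simply a mapping from job index to such an operation list.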

MPMJSP was first introduced by Brucker and Schlie [8], who also proposed a polynomial-time algorithm for MPMJSP with two jobs. MPMJSP with three jobs is NP-hard even if the number of machines is two [9]. Tabu-search algorithms were developed in [9] for solving three sets of MPMJSP benchmark instances, i.e., Edata, Rdata, and Vdata. Since then, these three instance sets have been commonly used for comparing results of different algorithms on MPMJSP. To date, many algorithms have been developed for solving MPMJSP and its closely related problems [10, 18–21].

2.2. Previous Relevant Researches

Iterated local search is traditionally defined as a single-solution-based metaheuristic that searches for a globally optimal solution. During an exploration, it repeatedly applies a neighbor operator to find a local optimum and then applies a perturbation operator to escape the found local optimum. Note that a perturbation operator is an operator that generates a new initial solution by largely modifying a found locally optimal solution [22]. Some nontraditional iterated local search algorithms enhance their performance by using multiple initial solutions [23, 24]. Iterated local search algorithms have been successful on many optimization problems, including MPMJSP and FJSP [25, 26].

In iterated local search and related algorithms, three operators are usually used as neighbor operators and perturbation operators: the traditional swap, insert, and reverse operators [27]. To explain these operators, let u and v be two different integers randomly generated from 1 to D, where D represents the number of all members in a solution-representing permutation. The swap operator swaps the two members in the u-th and the v-th positions of the permutation. The insert operator removes a member from the u-th position of the permutation and then inserts it back at the v-th position. The reverse operator reverses the sequence of all members from the u-th to the v-th positions of the permutation.
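The three traditional operators can be sketched as follows (0-based list indices here, whereas the text uses 1-based positions; each function returns a new permutation and leaves the input unchanged):

```python
# Traditional swap, insert, and reverse operators on a solution-representing
# permutation, with 0-based positions u and v.
def swap(perm, u, v):
    p = perm[:]
    p[u], p[v] = p[v], p[u]     # exchange the members at positions u and v
    return p

def insert(perm, u, v):
    p = perm[:]
    member = p.pop(u)           # remove the member at position u
    p.insert(v, member)         # re-insert it at position v
    return p

def reverse(perm, u, v):
    lo, hi = min(u, v), max(u, v)
    # reverse the segment between positions u and v, inclusive
    return perm[:lo] + perm[lo:hi + 1][::-1] + perm[hi + 1:]
```

Copy-then-modify keeps the original permutation intact, which is convenient when the same incumbent must generate several candidate neighbors.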

A common drawback of most metaheuristics is that their performance depends on their parameter-value settings. To overcome this drawback, many applications use upper-level algorithms to control the parameters of their solution-searching algorithms [11, 12, 14, 15, 28–31]. Some of them, e.g., [5], require more than two levels of algorithms for very complicated problems; however, most require only two levels. For solving JSP, there are two two-level metaheuristics that act as adaptive iterated local search algorithms [11, 12]. In each of them, the upper-level algorithm controls the lower-level algorithm’s input parameters, while the lower-level algorithm is a local search algorithm searching for an optimal job-shop schedule.

UPLA and MUPLA are the upper-level algorithms in [11, 12], respectively; both are population-based algorithms searching in real-number search spaces. In each of them, the population is a set of parameter-value combinations of the lower-level algorithm. In a parameter-value combination, each parameter’s value is iteratively changed by the sum of two changeable opposite-direction vectors. The first vector’s and the second vector’s directions are toward and away from, respectively, the memorized best-found value. In MUPLA only, each parameter-value combination includes a different start operation-based permutation; thus, the two-level metaheuristic of [12] has a multistart property.

LOLA, the lower-level algorithm in [11], is a local search algorithm exploring a solution space of parameterized-active schedules (i.e., hybrid schedules [16]). Its input parameters (i.e., a delay-time limit, a scheduling direction, a perturbation operator, and a neighbor operator) are controlled by UPLA, its upper-level algorithm. Because the delay-time limit (δ) is one of the controlled input parameters, UPLA can control the size of the solution space of parameterized-active schedules. Such a control of δ follows the successes of the two-level PSOs of [14, 15]. Other techniques for controlling δ can also be found in the literature. For example, the value of δ in [10] depends on the number of jobs and the number of machines, while the value of δ in [17] depends on the algorithm’s iteration index. In addition, the PSO in [32] controls the value of δ using the concept of self-adaptive parameter control [28].

LOSAP, the lower-level algorithm in [12], is a local search algorithm searching in a probabilistic hybrid neighborhood structure. With a given probability, LOSAP randomly uses one of two predetermined operators to generate a neighbor solution-representing permutation. In other words, based on the given probability, LOSAP can switch between the two given operators at any time during its exploration. While the search performance of LOLA [11] is mainly based on its special solution space, that of LOSAP is mainly based on its hybrid neighborhood structure. LOSAP has multiple optional operators for its perturbation and neighbor operators. These optional operators are modified from the traditional operators by limiting the distance of v from u. For generating v in LOSAP, there are three optional distance-limit levels: [u − 4, u + 4], [u − (D/5), u + (D/5)], and [1, D].

3. Proposed Two-Level Metaheuristic

For solving MPMJSP, this paper proposes a two-level metaheuristic consisting of lower- and upper-level algorithms. In this section, MPM-LOLA and MPM-UPLA denote the lower-level algorithm and the upper-level algorithm, respectively. The description of MPM-LOLA is given in Section 3.1, and the description of MPM-UPLA is given in Section 3.2.

3.1. MPM-LOLA

MPM-LOLA, as a variant of LOLA [11] and LOSAP [12], is a local search algorithm exploring a hybrid neighborhood structure. Similar to LOLA and LOSAP, MPM-LOLA generates its neighborhood structure by using multiple optional operators. However, MPM-LOLA differs from its older variants in many ways. Although MPM-LOLA uses the delay-time limit (δ) like LOLA does, it uses δ in a different way and for a different purpose. While LOLA uses δ to select an operation, MPM-LOLA uses δ to select an optional machine for each operation. In its remaining parts, MPM-LOLA is more similar to LOSAP than to LOLA. The major changes of MPM-LOLA from LOSAP are summarized below:
(i) While the LOSAP’s solution-decoding method generates JSP solutions, the MPM-LOLA’s solution-decoding method generates MPMJSP solutions.
(ii) MPM-LOLA is similar to LOSAP in that its optional operators are modified from the traditional operators by limiting the distance of v from u. However, while LOSAP has only three levels of the distance limit, the distance limit of MPM-LOLA is adjustable to any possible range.
(iii) Unlike LOSAP, MPM-LOLA applies the roulette-wheel method to select a neighbor operator. It also applies the roulette-wheel method to select optional operators for generating a perturbation operator.

For the purpose of clarification, the description of MPM-LOLA is divided into two parts: a description of its solution-decoding method and a description of its overall procedure. The solution-decoding method is described in Section 3.1.1, and the overall procedure is described in Section 3.1.2.

3.1.1. Solution-Decoding Method

MPM-LOLA decodes a solution-representing permutation into a schedule by using the delay-time limit (δ) and the tiebreak criterion (TB). Note that TB is used only if more than one optional machine satisfies δ. In this paper, every solution-representing permutation is in the form of the operation-based permutation [33, 34]. An operation-based permutation is a permutation of the numbers 1, 2, …, n in which the number i (i = 1, 2, …, n) appears ni times. Recall that n and ni denote the number of all jobs and the number of all operations of the job Ji, respectively. In the permutation, the number i in its k-th appearance represents the operation Oik. A schedule is then constructed by scheduling all operations one-by-one in the order given by the permutation. Each operation must be processed by its optional machine that satisfies δ and TB, and it must be started as early as this machine can. Note that the use of δ in this paper differs from those in other studies, e.g., [10, 11, 14–16]. While MPM-LOLA uses δ to select an optional machine for each operation, the other studies use δ to select an operation into the timetable.

As mentioned above, δ ∈ [0, 1) and TB ∈ {lowest, highest} are used to select an optional machine for each operation. If δ = 0, each operation must be processed on the optional machine that can start processing it earliest. When δ is set to a larger value, the maximum delay time allowed for each operation becomes longer; consequently, the number of optional machines satisfying δ for each operation may increase. If more than one optional machine satisfies δ, then TB is required as a tie breaker. If TB is set to lowest, the lowest-indexed optional machine satisfying δ is selected; otherwise, the highest-indexed optional machine satisfying δ is selected.
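The machine-selection rule for a single operation can be sketched as follows (all names are ours). Here `sigma` is the operation’s earliest possible start time and `tau` its processing time; a machine “satisfies δ” when it can start the operation no later than sigma + δ·tau. Because sigma is at least the minimum machine-available time, at least one machine always qualifies:

```python
# Sketch of delta-and-TB machine selection for one operation.
# machine_free[j]: earliest available time of machine j (assumed known);
# optional_machines: the operation's optional machines, sorted by index.
def choose_machine(optional_machines, machine_free, sigma, delta, tau, tb):
    deadline = sigma + delta * tau
    # Machines able to start no later than sigma + delta*tau; the machine
    # realizing the minimum available time always qualifies, so this list
    # is never empty when sigma >= min(machine_free).
    candidates = [j for j in optional_machines
                  if max(machine_free[j], sigma) <= deadline]
    # TB breaks ties: lowest- or highest-indexed qualifying machine.
    return candidates[0] if tb == "lowest" else candidates[-1]
```

For example, with machine-available times {M2: 0, M4: 3, M5: 6}, sigma = 2, tau = 5, and δ = 0.2, the deadline is 3, so M2 and M4 qualify; TB = lowest picks M2 and TB = highest picks M4.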

Algorithm 1 presents the solution-decoding method used by MPM-LOLA. The algorithm uses δ and TB, as its input parameters, to transform an operation-based permutation into an MPMJSP solution. Note that Algorithm 1 may return a different schedule from the same operation-based permutation if the values of δ and TB are changed. Notation used in Algorithm 1 is defined below:
(i) Let D denote the number of all operations in the MPMJSP instance being solved. Thus, D = n1 + n2 + … + nn, where ni is the number of all operations of the job Ji (i = 1, 2, …, n).
(ii) Let П denote the sequence of operations transformed from the operation-based permutation.
(iii) Let Φ denote the schedule transformed from П.
(iv) Let Oi′k′ denote the k′-th operation of the job Ji′; it represents the as-yet-unscheduled operation that is currently in its turn to be scheduled.
(v) Let mi′k′ denote the number of all optional machines of Oi′k′.
(vi) Let Ei′k′l denote the l-th optional machine of Oi′k′, where l = 1, 2, …, mi′k′.
(vii) Let E denote the machine chosen to process Oi′k′. This machine must be chosen from all Ei′k′l (l = 1, 2, …, mi′k′).
(viii) Let δ be a real number within [0, 1) denoting the delay-time limit.
(ix) Let TB ∈ {lowest, highest} denote the tiebreak criterion for selecting an optional machine to process Oi′k′. It is used only if more than one optional machine satisfies δ. If TB = lowest, let E be the machine with the lowest l among all Ei′k′l (l = 1, 2, …, mi′k′) that satisfy δ. If TB = highest, let E be the machine with the highest l among them.
(x) Let τi′k′ denote the processing time of Oi′k′.
(xi) Let σM denote the minimum of the earliest available times of all optional machines of Oi′k′.
(xii) Let σJ denote the earliest possible start time of Oi′k′ within the job Ji′; that is, σJ equals the finish time of Oi′,k′−1. If Oi′k′ has no immediately preceding operation, then σJ is equal to 0.
(xiii) Let σ denote the earliest possible start time of Oi′k′. It is equal to the maximum of σM and σJ.

Step 1. Receive a δ’s value, a TB’s value, and an operation-based permutation needed to be transformed from MPM-LOLA (Algorithm 2).
Step 2. Transform the operation-based permutation taken from Step 1 into П by changing the number i in its k-th appearance into the operation Oik (i = 1, 2, …, n; k = 1, 2, …, ni). For example, the operation-based permutation (3, 2, 3, 1, 1, 2) is transformed into П = (O31, O21, O32, O11, O12, O22).
Step 3. Transform П into Φ by using Steps 3.1 to 3.6.
Step 3.1. Let Φ ⟵ an empty schedule, and let t ⟵ 1.
Step 3.2. Let Oi′k′ ⟵ the leftmost as-yet-unscheduled operation in П.
Step 3.3. Find σM, σJ, and σ of Oi′k′.
Step 3.4. Let E ⟵ the machine, chosen from all Ei′k′l (l = 1, 2, …, mi′k′), that can start processing no later than σ + δτi′k′. If more than one machine can be chosen as E, then choose the one with the lowest l if TB = lowest; otherwise, choose the one with the highest l.
Step 3.5. Modify Φ by assigning E to process Oi′k′. In the schedule, let E start processing Oi′k′ as early as possible.
Step 3.6. If t < D, then t ⟵ t + 1 and repeat from Step 3.2. Otherwise, go to Step 4.
Step 4. Return Φ as the complete schedule to MPM-LOLA (Algorithm 2).
3.1.2. Procedure of MPM-LOLA

MPM-LOLA generates its neighborhood structure based on multiple optional operators. These optional operators consist of the d-swap, d-insert, d-reverse, D-swap, D-insert, and D-reverse operators. Note that D-swap, D-insert, and D-reverse are identical to the traditional swap, insert, and reverse operators, respectively. Notation and terminologies used to define the mentioned operators are given below:
(i) Let u and v be integers used to point at two member positions in an operation-based permutation.
(ii) Let d denote a distance limit of v from u. It is used for specifying d-swap, d-insert, and d-reverse.
(iii) Let D denote the number of all members in the solution-representing permutation, which equals the number of all operations in the MPMJSP instance. Thus, D = n1 + n2 + … + nn, where ni is the number of all operations of the job Ji.
(iv) d-swap is to swap the two members in the u-th and the v-th positions in the permutation. Let u be randomly selected from 1 to D. Then, let v be randomly selected from max(1, u − d) to min(u + d, D), excluding u.
(v) d-insert is to remove a member from the u-th position in the permutation and then insert it into the v-th position. Let u be randomly selected from 1 to D. Then, let v be randomly selected from max(1, u − d) to min(u + d, D), excluding u.
(vi) d-reverse is to reverse the positions of all members from the u-th to the v-th positions in the permutation. Let u be randomly selected from 1 to D. Then, let v be randomly selected from max(1, u − d) to min(u + d, D), excluding u.
(vii) D-swap is to swap the two members in the u-th and the v-th positions in the permutation. Let u and v be two different integers randomly selected from 1 to D.
(viii) D-insert is to remove a member from the u-th position in the permutation and then insert it into the v-th position. Let u and v be two different integers randomly selected from 1 to D.
(ix) D-reverse is to reverse the positions of all members from the u-th to the v-th positions in the permutation. Let u and v be two different integers randomly selected from 1 to D.
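The distance-limited selection of u and v shared by d-swap, d-insert, and d-reverse can be sketched as follows (1-based positions as in the text; the function name is ours, and d ≥ 1 is assumed so the candidate window is never empty):

```python
import random

# Sketch of the distance-limited selection of positions u and v used by
# d-swap, d-insert, and d-reverse. With d >= D - 1 it reduces to the
# unrestricted selection of the D-operators.
def pick_positions(D, d, rng=random):
    u = rng.randint(1, D)                   # u uniform over 1..D
    lo, hi = max(1, u - d), min(u + d, D)   # window of width at most 2d around u
    v = rng.choice([x for x in range(lo, hi + 1) if x != u])
    return u, v
```

The chosen (u, v) pair is then fed to whichever of the three permutation operators the roulette wheel selects.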

Before an execution, eight MPM-LOLA input parameters must be assigned values. The first input parameter, denoted by P, is the start operation-based permutation. The remaining seven input parameters consist of two parameters for specifying a perturbation operator, three parameters for generating its neighbor operators, and two parameters for selecting an optional machine. In MPM-LOLA, the perturbation operator randomly applies one of D-swap, D-insert, and D-reverse to the start operation-based permutation n times (recall that n denotes the number of all jobs). In each of these n random selections, the roulette-wheel technique is applied to select one of D-swap, D-insert, and D-reverse. The probabilities of selecting D-swap and D-insert in the roulette wheel are the second and third input parameters of MPM-LOLA, respectively. Consequently, the probability of selecting D-reverse is one minus the sum of the probabilities of selecting D-swap and D-insert.

The fourth to sixth input parameters are used to generate MPM-LOLA’s neighbor operators (i.e., d-swap, d-insert, and d-reverse). The fourth input parameter, denoted by d, is the distance limit of v from u for specifying d-swap, d-insert, and d-reverse. MPM-LOLA then uses the roulette-wheel technique to randomly select one of d-swap, d-insert, and d-reverse to generate a neighbor solution-representing permutation. In the roulette wheel, the probabilities of selecting d-swap and d-insert are the fifth and sixth input parameters, respectively. The probability of selecting d-reverse is thus one minus the sum of the probabilities of selecting d-swap and d-insert.
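A minimal sketch of the roulette-wheel operator selection used for both the perturbation operators (probabilities ρS, ρI) and the neighbor operators (ρNS, ρNI); the function name is ours:

```python
import random

# Roulette-wheel selection among three operators; the third probability is
# implicitly one minus the sum of the first two, as described above.
def roulette_pick(p_swap, p_insert, rng=random):
    p = rng.random()                 # p ~ U[0, 1)
    if p < p_swap:
        return "swap"
    if p < p_swap + p_insert:
        return "insert"
    return "reverse"
```

Calling it with (ρS, ρI) picks a perturbation operator and with (ρNS, ρNI) a neighbor operator, matching Steps 2.3 and 4.3 of Algorithm 2.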

The delay-time limit (δ) and the tiebreak criterion (TB) are the MPM-LOLA’s seventh and eighth input parameters, respectively. These two input parameters are used to select an optional machine for each operation when constructing a schedule. Thus, MPM-LOLA does not use δ and TB itself; rather, it passes their values to its solution-decoding method (Algorithm 1).

The overall procedure of MPM-LOLA is given in Algorithm 2. Notation used in Algorithm 2 is defined below:
(i) Let D denote the number of all operations in the MPMJSP instance being solved. Thus, D = n1 + n2 + … + nn, where ni is the number of all operations of the job Ji.
(ii) Let d denote the distance limit of v from u in the operation-based permutation for specifying d-swap, d-insert, and d-reverse.
(iii) Let ρS and ρI, which are real numbers within [0, 1), denote the probabilities of selecting D-swap and D-insert, respectively. Thus, the probability of selecting D-reverse is 1 − ρS − ρI.
(iv) Let ρNS and ρNI, which are real numbers within [0, 1), denote the probabilities of selecting d-swap and d-insert, respectively. Thus, the probability of selecting d-reverse is 1 − ρNS − ρNI.
(v) Let δ, which is a real number within [0, 1), denote the delay-time limit for selecting an optional machine for each operation.
(vi) Let TB ∈ {lowest, highest} denote the tiebreak criterion for selecting an optional machine for each operation.
(vii) Let P denote the start operation-based permutation.
(viii) Let P0 denote the current best-found operation-based permutation. An initial P0 is generated from P via the perturbation operator.
(ix) Let S0, which is decoded from P0, denote the current best-found schedule. In addition, Makespan(S0) stands for the makespan of S0.
(x) Let P1 denote the current neighbor operation-based permutation.
(xi) Let S1, which is decoded from P1, denote the current neighbor schedule. In addition, Makespan(S1) stands for the makespan of S1.

Step 1. Receive values of its eight input parameters (i.e., P, ρS, ρI, d, ρNS, ρNI, δ, and TB) from MPM-UPLA (Algorithm 3).
Step 2. Generate P0 from P by using Steps 2.1 to 2.4.
Step 2.1. Let r ⟵ 1.
Step 2.2. Randomly generate p ∼ U[0, 1).
Step 2.3. Modify P by using D-swap if p < ρS, D-insert if ρS ≤ p < ρS + ρI, and D-reverse otherwise.
Step 2.4. If r < n, let r ⟵ r + 1 and repeat from Step 2.2. Otherwise, let P0 ⟵ P and go to Step 3.
Step 3. Execute Algorithm 1, with the taken values of δ and TB, for transforming P0 into S0.
Step 4. Find a local optimal schedule by using Steps 4.1 to 4.5.
Step 4.1. Let tL ⟵ 0.
Step 4.2. Randomly generate p ∼ U[0, 1).
Step 4.3. Generate P1 from P0 by using d-swap if p < ρNS, d-insert if ρNS ≤ p < ρNS + ρNI, and d-reverse otherwise.
Step 4.4. Execute Algorithm 1, with the taken values of δ and TB, for transforming P1 into S1.
Step 4.5. Update P0, S0, and tL by using Steps 4.5.1 to 4.5.3.
  Step 4.5.1. If Makespan(S1) < Makespan(S0), then let P0 ⟵ P1 and S0 ⟵ S1, and repeat from Step 4.1.
  Step 4.5.2. If Makespan(S1) = Makespan(S0), then let P0 ⟵ P1 and S0 ⟵ S1, and repeat from Step 4.2.
  Step 4.5.3. If Makespan(S1) > Makespan(S0), then let tL ⟵ tL + 1. If tL < D², repeat from Step 4.2; otherwise, go to Step 5.
Step 5. Return P0 and S0 as the final (best-found) operation-based permutation and the final (best-found) schedule, respectively, to MPM-UPLA (Algorithm 3).
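The acceptance logic of Steps 4.1 to 4.5 can be sketched as below, with `neighbor` and `makespan` as placeholder helpers and reading the non-improving limit “D2” in Step 4.5.3 as D squared (our assumption):

```python
# Sketch of MPM-LOLA's local search loop (Steps 4.1-4.5): an improvement
# resets the non-improving counter t_l (Step 4.5.1), an equal-makespan
# neighbor is accepted without resetting it (Step 4.5.2, a sideways move),
# and the loop stops after D*D consecutive non-improving neighbors.
def local_search(p0, neighbor, makespan, D):
    best, best_val, t_l = p0, makespan(p0), 0
    while t_l < D * D:
        p1 = neighbor(best)
        v1 = makespan(p1)
        if v1 < best_val:                 # Step 4.5.1: improvement found
            best, best_val, t_l = p1, v1, 0
        elif v1 == best_val:              # Step 4.5.2: accept the tie
            best = p1
        else:                             # Step 4.5.3: count the failure
            t_l += 1
    return best, best_val
```

Accepting equal-makespan neighbors lets the search drift across makespan plateaus, which is the usual motivation for the sideways-move rule in Step 4.5.2.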
3.2. MPM-UPLA

MPM-UPLA is an upper-level algorithm of the proposed two-level metaheuristic. It uses the same framework as the upper-level algorithms of [11, 12]. MPM-UPLA is thus a population-based search algorithm that acts as a parameter controller. It evolves the MPM-LOLA’s input-parameter values so that MPM-LOLA can return its best performance on every single MPMJSP instance. At the t-th iteration, the MPM-UPLA’s population consists of N combinations of the MPM-LOLA’s input-parameter values, i.e., C1(t), C2(t), …, CN(t). In short, let a parameter-value combination stand for a combination of the MPM-LOLA’s input-parameter values. Let Cℓ(t) ≡ (cℓ1(t), cℓ2(t), …, cℓ9(t)) denote the ℓ-th parameter-value combination (where ℓ = 1, 2, …, N) in the population at the t-th iteration. It represents the value combination of the MPM-LOLA’s input parameters, i.e., P, ρS, ρI, d, ρNS, ρNI, and TB.

The delay-time limit, δ, is an important MPM-LOLA input parameter controlled by MPM-UPLA. However, the value of δ is not assigned as a member of Cℓ(t); instead, it is controlled by the MPM-UPLA’s iteration index, t. At the first MPM-UPLA iteration (t = 1), the value of δ is set to 0.0 for MPM-LOLA. For every next 50 MPM-UPLA iterations, the value of δ is increased by 0.2 for MPM-LOLA. This setting of δ was based on the result of a preliminary study of this research, which found that controlling δ by the MPM-UPLA’s iteration index usually performs better than controlling δ by a member of Cℓ(t).
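One consistent reading of this schedule, together with Step 4.3 of Algorithm 3 (δ raised at the end of iteration t whenever (t + 1) mod 50 = 0, and clamped to 0.999), is the following closed form (our interpretation, not the paper’s code):

```python
# Sketch of the iteration-driven delay-time-limit schedule: delta is 0.0
# for iterations 1..49, then rises by 0.2 every 50 iterations, clamped
# just below 1.0 (the paper reassigns it to 0.999).
def delta_at(t):
    return min(0.2 * (t // 50), 0.999)
```

Thus δ sweeps the solution spaces from nondelay-like schedules (δ = 0.0) toward ever larger allowed delays as the run progresses.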

The transformations from the Cℓ(t)’s members into MPM-LOLA’s parameter values are described below:
(i) Let cℓ1(t) represent P. In other words, P is directly equal to cℓ1(t) in the transformation.
(ii) Let cℓ2(t), cℓ3(t), and cℓ4(t) be used together to determine the values of ρS and ρI. Their transformations are given in (1) and (2).
(iii) Let cℓ5(t) be used to determine the value of d. In its transformation, let d be equal to the rounded integer from 1 + cℓ5(t)D, where D is the number of all operations in the MPMJSP instance. After that, reassign d = 1 if d < 1, and reassign d = D if d > D.
(iv) Let cℓ6(t), cℓ7(t), and cℓ8(t) be used together to determine the values of ρNS and ρNI. Their transformations are given in (3) and (4).
(v) Let cℓ9(t) be used to determine the value of TB. In its transformation, let TB = lowest if cℓ9(t) < 0.5, and let TB = highest if cℓ9(t) ≥ 0.5.

The overall procedure of MPM-UPLA is given in Algorithm 3. Notation used in Algorithm 3 is defined below:
(i) Let N denote the number of all parameter-value combinations in the MPM-UPLA’s population.
(ii) Let Cℓ(t) denote the ℓ-th parameter-value combination (where ℓ = 1, 2, …, N) in the MPM-UPLA’s population at the t-th iteration.
(iii) Let Score(Cℓ(t)) denote the performance score of Cℓ(t). Note that the lower the performance score, the better the performance.
(iv) After executing MPM-LOLA with the parameter values given by Cℓ(t), let its final (best-found) operation-based permutation and final (best-found) schedule be denoted by Pℓ(t) and Sℓ(t), respectively.
(v) Let Makespan(Sℓ(t)) denote the makespan of Sℓ(t).
(vi) Let Cbest ≡ (c1best, c2best, …, c9best) denote the best parameter-value combination ever found by the population. In addition, let Score(Cbest) denote the performance score of Cbest.
(vii) Let Sbest denote the best schedule ever found by the population.

Step 1. Receive a value of N and a stopping criterion from a user. Let t ⟵ 1, δ ⟵ 0.0, and Score(Cbest) ⟵ + ∞.
Step 2. Generate , where  = 1, 2, …, N, by using Steps 2.1 to 2.4.
Step 2.1. Let ⟵ 1.
Step 2.2. Randomly generate from any possible operation-based permutation.
Step 2.3. Randomly generate .
Step 2.4. If  < N, let ⟵  + 1 and repeat from Step 2.2. Otherwise, go to Step 3.
Step 3. Evaluate , and update Cbest and Sbest by using Steps 3.1 to 3.6.
Step 3.1. Let ⟵ 1.
Step 3.2. Transform into the values of P, ρS, ρI, d, ρNS, ρNI, and TB.
Step 3.3. Execute MPM-LOLA (Algorithm 2) with the last-updated values of P, ρS, ρI, d, ρNS, ρNI, TB, and δ in order to receive (t) and (t).
Step 3.4. Let .
Step 3.5. If , then let , , and .
Step 3.6. If , let and repeat from Step 3.2. Otherwise, go to Step 4.
Step 4. Update , where , by using Steps 4.1 to 4.5.
Step 4.1. Let .
Step 4.2. Let .
Step 4.3. If (t + 1) mod 50 = 0, let δ ⟵ δ + 0.2. After that, reassign δ ⟵ 0.999 if δ ≥ 1.0.
Step 4.4. If mod 50 = 0, randomly generate . Otherwise, generate , where , by using the below equation. Let and .
Step 4.5. If , then let and repeat from Step 4.2. Otherwise, go to Step 5.
Step 5. If the stopping criterion is not met, then let t ⟵ t + 1 and repeat from Step 2. Otherwise, return Sbest as the final result to the user.
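The steps above can be condensed into a runnable skeleton. This is a sketch only: `evaluate` stands in for a full MPM-LOLA run (Step 3.3), and the perturbation in Step 4.4 is a placeholder because the actual update equation is not shown in this copy:

```python
import random

def mpm_upla(evaluate, n_combinations=3, n_params=9, max_iters=1000):
    """Simplified skeleton of Algorithm 3 (MPM-UPLA).

    `evaluate(combination, delta)` is an assumed stand-in for one
    MPM-LOLA run and must return a makespan-like score (lower is better).
    Returns the best combination ever found and its score.
    """
    best_combo, best_score = None, float("inf")
    delta = 0.0
    # Step 2: generate the initial population of parameter-value combinations.
    population = [[random.random() for _ in range(n_params)]
                  for _ in range(n_combinations)]
    for t in range(1, max_iters + 1):
        # Step 3: evaluate each combination via the lower-level algorithm.
        for combo in population:
            score = evaluate(combo, delta)
            if score < best_score:
                best_combo, best_score = list(combo), score
        # Step 4.3: raise the delay-time limit every 50 iterations, cap at 0.999.
        if (t + 1) % 50 == 0:
            delta = min(delta + 0.2, 0.999)
        # Step 4.4 (sketched): periodically regenerate the population at
        # random; otherwise apply a placeholder perturbation, since the
        # paper's update equation is missing from this copy.
        if t % 50 == 0:
            population = [[random.random() for _ in range(n_params)]
                          for _ in range(n_combinations)]
        else:
            population = [[min(1.0, max(0.0, c + random.uniform(-0.1, 0.1)))
                           for c in combo] for combo in population]
    return best_combo, best_score
```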

4. Experimental Results

In this paper, an experiment was conducted to compare MPM-UPLA’s results with those of TS, PSO, and CP. Let TS, PSO, and CP represent the tabu-search algorithm [9], the particle swarm optimization algorithm [10], and the ILOG constraint programming optimizer [20], respectively. These three algorithms were chosen because they perform well on the same benchmark instance sets used in this paper’s experiment. In Sections 4 and 5, let MPM-UPLA stand for the whole two-level metaheuristic, i.e., MPM-UPLA combined with MPM-LOLA. The reason is that MPM-UPLA uses MPM-LOLA as its component when solving MPMJSP.

In the comparison, this paper's experiment used three benchmark instance sets, i.e., Edata, Rdata, and Vdata, taken from [9, 20, 21]. Each instance set consists of 66 instances, modified from the well-known JSP benchmark instances [35–39]. The difference among the three instance sets lies in the number of optional machines of each operation in their instances. In Edata, the average number of optional machines per operation is 1.15, and the maximum number is 2 or 3. In Rdata, the average number and the maximum number of optional machines per operation are 2 and 3, respectively. In Vdata, the average number and the maximum number of optional machines per operation are 0.5m and 0.8m, respectively (where m = the number of all machines in an instance).
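For illustration, the per-operation statistics that distinguish the three sets can be computed from an instance once its machine options are known. The encoding below (a list of jobs, each a list of operations, each a list of (machine, time) options) is an assumption for the sketch, not the benchmark file format:

```python
def option_stats(instance):
    """Average and maximum number of optional machines per operation.

    `instance` is assumed to be a list of jobs, each job a list of
    operations, each operation a list of (machine, processing_time)
    options -- a common MPMJSP encoding, not the benchmark format.
    """
    counts = [len(op) for job in instance for op in job]
    return sum(counts) / len(counts), max(counts)
```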

The parameter settings of MPM-UPLA used in the experiment are shown below:(i)The population of MPM-UPLA consisted of three parameter-value combinations (i.e., N = 3 in Algorithm 3).(ii)The stopping criterion of MPM-UPLA was to stop when any of the below conditions was satisfied:(a)The 1,000-th iteration (i.e., t = 1,000 in Algorithm 3) was reached.(b)The 150-th minute of computational time was reached.(c)The known optimal solution was found. If the optimal solution was still unknown, its lower bound [20, 21] was used instead.(iii)MPM-UPLA was coded in C# and executed on an Intel® Core™ i7-3520M @ 2.90 GHz with 4 GB of RAM (3.87 GB usable).(iv)For each instance, MPM-UPLA was executed for five trials with different random-seed numbers.
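The three-part stopping criterion can be sketched as a single test (function and parameter names are illustrative; `target` is the known optimum, or its lower bound when the optimum is unknown):

```python
import time

def should_stop(t, start_time, best_makespan, target,
                max_iters=1000, max_minutes=150):
    """True when any experimental stopping condition holds: the iteration
    limit, the wall-clock limit, or reaching the target makespan."""
    if t >= max_iters:
        return True
    if time.monotonic() - start_time >= max_minutes * 60:
        return True
    return best_makespan <= target
```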

With the above settings, the experiment's results on Edata, Rdata, and Vdata are presented in Tables 1 to 3, respectively. These tables first show the name, size, and best-known solution value of each instance. The best-known solution value, taken from the literature, stands for the upper bound of the optimal solution value. For each instance, each table then shows the best-found solution values of TS [9], PSO [10], CP [20], and MPM-UPLA. The best-found solution values of TS, PSO, and CP are the best solution values reported in their original articles [9, 10] and [20], respectively. For MPM-UPLA, its best-found solution value on each instance is the minimum of the best-found solution values from its five trials in this experiment. Each table also shows the average of the best-found solution values, the average number of used iterations, and the average computational time over MPM-UPLA's five trials on each instance.

The notation and terminology used for each instance in Tables 1 to 3 are given as follows:(i)Let the Instance column present the name of each instance.(ii)Let n and m denote the number of all jobs and the number of all machines, respectively, in the instance.(iii)Let a solution denote an MPMJSP schedule, and let a solution value denote the makespan of an MPMJSP schedule.(iv)Let the BKS column present the best-known solution value given by the literature, e.g., [20, 21]. If the best-known solution value has been proven to be an optimal solution value, it is presented without parentheses. Otherwise, it is an upper bound of the optimal solution value and is presented within parentheses.(v)If the best-found solution value of MPM-UPLA is better than the best-known solution value given by the literature, let the best-found solution value of MPM-UPLA become the new best-known solution value. When such a case occurs, an arrow symbol (⟶) is given in front of the value.(vi)Let Best represent the best-found solution value of each algorithm. The Best of MPM-UPLA was taken from the five trials in this experiment. The Bests of TS, PSO, and CP were taken from [9, 10] and [20], respectively. N/A means the best-found solution value does not appear in the original article.(vii)Let Avg represent the average of the best-found solution values of the five trials of MPM-UPLA.(viii)Let No of Iters and Time stand for the average number of iterations and the average computational time (in HH:MM:SS format), respectively, until MPM-UPLA's stopping criterion is met.

For each instance set, Table 4 shows the Avg %BD of each algorithm on each instance category. For each algorithm, the %BD on an instance denotes the percent deviation of the best-found solution value from the best-known solution value, and Avg %BD denotes the average of the %BDs over all instances in a category. In Table 4, each instance is classified into one of 13 instance categories, based on its source and size. The details of these 13 categories are given below:(i)M6-20 consists of three instances, i.e., M6, M10, and M20.(ii)LA1-5 consists of five 10-job/5-machine instances, i.e., LA1, LA2, …, LA5.(iii)LA6-10 consists of five 15-job/5-machine instances, i.e., LA06, LA07, …, LA10.(iv)LA11-15 consists of five 20-job/5-machine instances, i.e., LA11, LA12, …, LA15.(v)LA16-20 consists of five 10-job/10-machine instances, i.e., LA16, LA17, …, LA20.(vi)LA21-25 consists of five 15-job/10-machine instances, i.e., LA21, LA22, …, LA25.(vii)LA26-30 consists of five 20-job/10-machine instances, i.e., LA26, LA27, …, LA30.(viii)LA31-35 consists of five 30-job/10-machine instances, i.e., LA31, LA32, …, LA35.(ix)LA36-40 consists of five 15-job/15-machine instances, i.e., LA36, LA37, …, LA40.(x)ABZ5-6 consists of two 10-job/10-machine instances, i.e., ABZ5 and ABZ6.(xi)ABZ7-9 consists of three 20-job/15-machine instances, i.e., ABZ7, ABZ8, and ABZ9.(xii)CAR1-8 consists of eight instances, i.e., CAR1, CAR2, …, CAR8.(xiii)ORB1-10 consists of ten 10-job/10-machine instances, i.e., ORB1, ORB2, …, ORB10.
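The %BD and Avg %BD measures can be stated directly in code (a minimal sketch; function names are illustrative):

```python
def pct_bd(best_found, best_known):
    """Percent deviation of a best-found makespan from the best-known
    value, as used in Table 4."""
    return 100.0 * (best_found - best_known) / best_known

def avg_pct_bd(results):
    """Average %BD over (best_found, best_known) pairs in one category."""
    return sum(pct_bd(f, k) for f, k in results) / len(results)
```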

In addition to the 13 categories, Table 4 includes four more instance categories, i.e., M6-LA40, SM-M6-LA40, M6-ORB10, and SM-M6-ORB10. These additional categories were used for comparing the performance of the algorithms. SM in SM-M6-LA40 and SM-M6-ORB10 indicates that these categories contain only small-to-medium instances. An instance is defined as small-to-medium if nm < 150 and as large otherwise (where n = the number of jobs and m = the number of machines). The details of these four additional categories are given below:(i)M6-LA40 consists of the first 43 instances of all 66 instances, starting from M6 to LA40. These 43 instances were used by TS [9] and PSO [10] in their original articles.(ii)SM-M6-LA40 consists of all 23 small-to-medium instances from M6-LA40.(iii)M6-ORB10 consists of all 66 instances, starting from M6 to ORB10. These 66 instances were used by CP [20] in its original article.(iv)SM-M6-ORB10 consists of all 43 small-to-medium instances from M6-ORB10.
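The size rule behind the SM-* categories is a one-line test (illustrative naming):

```python
def is_small_to_medium(n, m):
    """An n-job/m-machine instance is small-to-medium when n*m < 150,
    and large otherwise, per the SM-* category definition."""
    return n * m < 150
```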

5. Result Analysis and Discussion

This section analyzes and discusses the results shown in Section 4. As in Section 4, MPM-UPLA in this section stands for the whole two-level metaheuristic, i.e., MPM-UPLA combined with MPM-LOLA. The performance of MPM-UPLA was compared with the performance of TS [9], PSO [10], and CP [20] via three performance indicators: the number of instances on which an algorithm finds the best-known solution, the number of instances won by an algorithm against another, and the average percent deviation of the algorithm's best-found solution value from the best-known solution value (Avg %BD). For each instance, the best-known solution value means the best solution value reported in the published literature. The only exception is LA31 of Rdata, whose best-known solution value was taken from the best-found solution value of MPM-UPLA, because MPM-UPLA found a better solution on this instance than the previously published best-known solution.

For each instance set, this section separates analyses on the first 43 instances from those on all 66 instances. The reason is that the results of TS and PSO were given on only the 43 instances in their original articles [9, 10], while the results of CP were given on the 66 instances in its original article [20]. In addition, this section separates analyses on small-to-medium instances from those on all given instances. Note that all instances with nm < 150 are defined as small-to-medium instances (where n = the number of jobs and m = the number of machines). Sections 5.1 to 5.3 show the analyses and discussions via the three given indicators. Then, Section 5.4 provides an overall summary from Sections 5.1 to 5.3.

5.1. The Number of Instances Achieved in Finding the Best-Known Solutions

For each instance set, this section first compares the number of instances on which the best-known solutions were found by MPM-UPLA with those by TS, PSO, and CP on the first 43 instances. The numbers of instances achieved by each algorithm in Edata, Rdata, and Vdata were counted from Tables 1, 2, and 3, respectively. On the first 43 instances, MPM-UPLA clearly outperforms the three other algorithms on all three instance sets, especially Rdata. For each instance set, the comparison results are given below:(i)For the first 43 instances of Edata, the algorithms TS, PSO, CP, and MPM-UPLA reach the best-known solutions on 10, 15, 22, and 27 instances, respectively.(ii)For the first 43 instances of Rdata, the algorithms TS, PSO, CP, and MPM-UPLA reach the best-known solutions on 4, 4, 8, and 24 instances, respectively. MPM-UPLA also found a new best-known solution value (i.e., 1520) for LA31 of Rdata. This value is proven to be the optimal solution value because it equals the lower bound on the optimal solution value given in [20].(iii)For the first 43 instances of Vdata, the algorithms TS, PSO, CP, and MPM-UPLA reach the best-known solutions on 14, 13, 23, and 28 instances, respectively.

For each instance set, this section then compares the number of instances achieved by MPM-UPLA with that by CP on all 66 instances. On all 66 instances, MPM-UPLA outperforms CP on all three instance sets, especially Rdata. For each instance set, the comparison results are given below:(i)For all 66 instances of Edata, CP and MPM-UPLA reach the best-known solutions on 32 and 44 instances, respectively.(ii)For all 66 instances of Rdata, CP and MPM-UPLA reach the best-known solutions on 14 and 38 instances, respectively.(iii)For all 66 instances of Vdata, CP and MPM-UPLA reach the best-known solutions on 39 and 45 instances, respectively.

In conclusion, MPM-UPLA outperforms TS, PSO, and CP in finding the best-known solutions on all three instance sets, especially Rdata. On Rdata, the number of instances achieved by MPM-UPLA is more than double the number achieved by each of the others. Moreover, MPM-UPLA also found a new best-known solution value on LA31 of Rdata.

5.2. The Number of Instances Won

This section first compares the number of instances won by MPM-UPLA with those by TS, PSO, and CP on the first 43 instances of each instance set. Note that in the first 43 instances, there are 23 small-to-medium instances included. The numbers of instances won in Edata, Rdata, and Vdata were counted from Tables 1, 2, and 3, respectively. For the first 43 instances, MPM-UPLA obviously outperforms the three other algorithms on Edata and Rdata; MPM-UPLA outperforms TS and PSO but underperforms CP on Vdata. However, when considering only the 23 small-to-medium instances, MPM-UPLA outperforms CP on Vdata. Of each instance set, the comparison results are detailed below:(i)Out of the first 43 instances of Edata, MPM-UPLA has 33 wins, 10 draws, and 0 losses against TS; it has 28 wins, 15 draws, and 0 losses against PSO. In addition, it has 21 wins, 20 draws, and 2 losses against CP.(ii)Out of the first 43 instances of Rdata, MPM-UPLA has 39 wins, 4 draws, and 0 losses against each of TS and PSO. In addition, it has 29 wins, 11 draws, and 3 losses against CP.(iii)Out of the first 43 instances of Vdata, MPM-UPLA has 19 wins, 13 draws, and 11 losses against TS; it has 18 wins, 14 draws, and 11 losses against PSO. In addition, it has 7 wins, 21 draws, and 15 losses against CP. However, when considering only the 23 small-to-medium instances, MPM-UPLA has 7 wins, 16 draws, and 0 losses against CP.

Then, this section compares the number of instances won by MPM-UPLA with that by CP on all 66 instances of each instance set. Note that in the 66 instances, there are 43 small-to-medium instances included. For the 66 instances, MPM-UPLA outperforms CP on Edata and Rdata, but it underperforms CP on Vdata. However, when considering only the 43 small-to-medium instances, MPM-UPLA obviously outperforms CP on all three instance sets. For each instance set, the comparison results are detailed below:(i)Out of all 66 instances of Edata, MPM-UPLA has 34 wins, 29 draws, and 3 losses against CP. Out of the 43 small-to-medium instances of Edata, MPM-UPLA has 17 wins, 24 draws, and 2 losses against CP.(ii)Out of all 66 instances of Rdata, MPM-UPLA has 44 wins, 17 draws, and 5 losses against CP. Out of the 43 small-to-medium instances of Rdata, MPM-UPLA has 28 wins, 15 draws, and 0 losses against CP.(iii)Out of all 66 instances of Vdata, MPM-UPLA has 11 wins, 35 draws, and 20 losses against CP. Out of the 43 small-to-medium instances of Vdata, MPM-UPLA has 11 wins, 30 draws, and 2 losses against CP.

In conclusion, in terms of the number of instances won, MPM-UPLA outperforms the three other algorithms on Edata and Rdata. For Vdata, MPM-UPLA outperforms TS and PSO but underperforms CP. However, when considering only small-to-medium instances, MPM-UPLA outperforms CP on Vdata.

5.3. Avg %BD

This section analyzes the Avg %BDs in Table 4. To do so, it first analyzes the Avg %BDs of the first 43 instances of each instance set. Then, it analyzes those of the 23 small-to-medium instances among the first 43 instances. In Table 4, the rows M6-LA40 and SM-M6-LA40 provide the Avg %BDs of the first 43 instances and those of the 23 small-to-medium instances, respectively. For the Avg %BDs of the first 43 instances, MPM-UPLA outperforms the three other algorithms on Edata and Rdata, but it underperforms the three other algorithms on Vdata. When considering only the 23 small-to-medium instances, MPM-UPLA clearly outperforms the three other algorithms on all three instance sets. For each instance set, the analysis results are detailed below:(i)For the first 43 instances of Edata, MPM-UPLA's Avg %BD (i.e., 0.24%) is much better than those of TS, PSO, and CP (i.e., 3.16%, 2.26%, and 0.69%, respectively). Based on these 43 instances, paired t-tests concluded that the mean %BD of MPM-UPLA is significantly better than those of TS, PSO, and CP (with p values of 3 × 10−10, 1 × 10−8, and 0.0002, respectively). When considering only the 23 small-to-medium instances, MPM-UPLA's Avg %BD (i.e., 0.02%) is also much better than those of TS, PSO, and CP (i.e., 2.07%, 0.65%, and 0.27%, respectively).(ii)For the first 43 instances of Rdata, MPM-UPLA's Avg %BD (i.e., 0.57%) is much better than those of TS, PSO, and CP (i.e., 1.87%, 3.76%, and 0.77%, respectively). Based on these 43 instances, paired t-tests concluded that the mean %BD of MPM-UPLA is significantly better than those of TS, PSO, and CP (with p values of 4 × 10−7, 1 × 10−7, and 0.00004, respectively). 
When considering only the 23 small-to-medium instances, MPM-UPLA's Avg %BD (i.e., 0.02%) is also much better than those of TS, PSO, and CP (i.e., 0.87%, 0.96%, and 0.16%, respectively).(iii)For the first 43 instances of Vdata, MPM-UPLA's Avg %BD (i.e., 0.59%) is worse than those of TS, PSO, and CP (i.e., 0.44%, 0.56%, and 0.08%, respectively). However, when considering only the 23 small-to-medium instances, MPM-UPLA's Avg %BD (i.e., 0.00%) is better than those of TS, PSO, and CP (i.e., 0.26%, 0.13%, and 0.04%, respectively). Based on the 23 small-to-medium instances, paired t-tests concluded that the mean %BD on small-to-medium instances of MPM-UPLA is significantly better than those of TS, PSO, and CP (with p values of 0.001, 0.0007, and 0.004, respectively).
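For reference, the statistic behind the paired t-tests reported above can be computed from two matched samples of %BDs. The sketch below computes only the t statistic; converting it into the quoted p values additionally requires the t-distribution's CDF with n − 1 degrees of freedom, which the standard library does not provide:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(a, b):
    """t statistic of a paired t-test between two matched samples
    (e.g., per-instance %BDs of two algorithms): the mean of the
    pairwise differences divided by its standard error."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
```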

For each instance set, this section then compares the Avg %BDs of MPM-UPLA and CP over all 66 instances and over their 43 small-to-medium instances. In Table 4, the rows M6-ORB10 and SM-M6-ORB10 provide the Avg %BDs of all 66 instances and those of the 43 small-to-medium instances, respectively. For all 66 instances, MPM-UPLA outperforms CP on Edata and Rdata, but it underperforms CP on Vdata. However, when considering only the 43 small-to-medium instances, MPM-UPLA clearly outperforms CP on all three instance sets. For each instance set, the comparison results are detailed below:(i)For all 66 instances of Edata, MPM-UPLA's Avg %BD (i.e., 0.26%) is better than CP's Avg %BD (i.e., 0.96%). A paired t-test concluded that the mean %BD of MPM-UPLA is significantly better than the mean %BD of CP (with a p value of 0.00001). When considering only the 43 small-to-medium instances, MPM-UPLA's Avg %BD (i.e., 0.07%) is also better than CP's Avg %BD (i.e., 0.67%).(ii)For all 66 instances of Rdata, MPM-UPLA's Avg %BD (i.e., 0.57%) is better than CP's Avg %BD (i.e., 0.87%). A paired t-test concluded that the mean %BD of MPM-UPLA is significantly better than the mean %BD of CP (with a p value of 0.00002). When considering only the 43 small-to-medium instances, MPM-UPLA's Avg %BD (i.e., 0.24%) is also better than CP's Avg %BD (i.e., 0.55%).(iii)For all 66 instances of Vdata, MPM-UPLA's Avg %BD (i.e., 0.78%) is worse than CP's Avg %BD (i.e., 0.07%). However, when considering only the 43 small-to-medium instances, MPM-UPLA's Avg %BD (i.e., 0.01%) is better than CP's Avg %BD (i.e., 0.04%). Based on the 43 small-to-medium instances, a paired t-test concluded that the mean %BD on small-to-medium instances of MPM-UPLA is significantly better than that of CP (with a p value of 0.002).

In conclusion, based on the Avg %BDs, MPM-UPLA clearly outperforms the three other algorithms on Edata and Rdata, but it underperforms them on Vdata. When considering only the small-to-medium instances, MPM-UPLA outperforms the three other algorithms on all three instance sets.

5.4. Overall Summary

In terms of the number of instances on which the best-known solutions were found, MPM-UPLA outperforms the three other algorithms on all three instance sets. In terms of the number of instances won, MPM-UPLA outperforms the three other algorithms on Edata and Rdata; on Vdata, it outperforms TS and PSO but underperforms CP. However, when considering only small-to-medium instances, MPM-UPLA outperforms CP on Vdata in the number of instances won. In terms of Avg %BD, MPM-UPLA outperforms the three other algorithms on Edata and Rdata, but it underperforms them on Vdata. However, when considering only small-to-medium instances, MPM-UPLA outperforms the three other algorithms on Vdata. In conclusion, MPM-UPLA usually performs very well on MPMJSP instances where each operation has fewer than four optional machines (e.g., the instances in Edata and Rdata). When each operation has many optional machines (i.e., ≥ 0.5m optional machines), MPM-UPLA usually performs well only on small-to-medium instances (i.e., instances with nm < 150).

6. Conclusion

In this paper, a two-level metaheuristic was proposed for solving MPMJSP. The two-level metaheuristic consists of MPM-UPLA and MPM-LOLA as its upper- and lower-level algorithms, respectively. MPM-UPLA, a population-based algorithm, acts as MPM-LOLA's parameter controller. MPM-LOLA is a local search algorithm that searches for an optimal MPMJSP solution. MPM-LOLA differs from its older variants in several ways, such as in its perturbation and neighbor operators. It also uses a unique method to select an optional machine for each operation. MPM-UPLA's function is to evolve MPM-LOLA's input-parameter values so that MPM-LOLA can perform at its best on every single instance. In this paper's experiment, the performance of the two-level metaheuristic was evaluated on three instance sets, i.e., Edata, Rdata, and Vdata. The experiment's results indicated that the two-level metaheuristic performs very well on Edata and Rdata. For Vdata, it usually performs well only on the category of small-to-medium instances. Thus, future research should focus on enhancing the two-level metaheuristic's performance, especially on large instances of Vdata.

Data Availability

The data used to support the findings of this study are available from the author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest.