
Pisut Pongchairerks, "A Two-Level Metaheuristic Algorithm for the Job-Shop Scheduling Problem", Complexity, vol. 2019, Article ID 8683472, 11 pages, 2019. https://doi.org/10.1155/2019/8683472

A Two-Level Metaheuristic Algorithm for the Job-Shop Scheduling Problem

Academic Editor: Chongyang Liu
Received: 02 Nov 2018
Revised: 10 Jan 2019
Accepted: 11 Feb 2019
Published: 07 Mar 2019

Abstract

This paper proposes a novel two-level metaheuristic algorithm, consisting of an upper-level algorithm and a lower-level algorithm, for the job-shop scheduling problem (JSP). The upper-level algorithm is a novel population-based algorithm developed to be a parameter controller for the lower-level algorithm, while the lower-level algorithm is a local search algorithm searching for an optimal schedule in the solution space of parameterized-active schedules. The lower-level algorithm’s parameters controlled by the upper-level algorithm consist of the maximum allowed length of idle time, the scheduling direction, the perturbation method to generate an initial solution, and the neighborhood structure. The proposed two-level metaheuristic algorithm, as the combination of the upper-level algorithm and the lower-level algorithm, thus can adapt itself for every single JSP instance.

1. Introduction

Scheduling problems generally involve assigning jobs to machines at particular times. This paper focuses on the job-shop scheduling problem (JSP) with the objective of minimizing the total length of the schedule, which is one of the hard-to-solve scheduling problems. The JSP is an NP-hard optimization problem [1, 2] well known in both academic and practical areas. To deal with the JSP and related problems, many approximation algorithms have been developed based on metaheuristic algorithms [3–8]. In addition, some methods such as [9] have been presented for the purpose of reducing the solution space of the JSP.

To solve the JSP, this paper aims at developing a two-level metaheuristic algorithm in which the upper-level algorithm controls the lower-level algorithm’s input parameters, and the lower-level algorithm acts as a local search algorithm to search for an optimal schedule. The purpose of the upper-level algorithm is to iteratively adapt the input-parameter values of the lower-level algorithm, so that the lower-level algorithm will be fitted well for every single instance. In general, a mechanism of controlling a lower-level algorithm’s input parameters by an upper-level algorithm is classified as adaptive parameter control based on the definition given by [10]. The concept of using a metaheuristic algorithm to control the input parameters of another metaheuristic algorithm, known as a meta-evolutionary algorithm, has been applied in many different studies such as [11–15].

In detail, this paper will hereafter refer to the upper-level algorithm and the lower-level algorithm in the proposed two-level metaheuristic algorithm as UPLA and LOLA, respectively. LOLA is a local search algorithm exploring a solution space of parameterized-active schedules (also known as hybrid schedules), where each parameterized-active schedule [16, 17] is decoded from an operation-based permutation [18–20]. In LOLA, there are four input parameters whose values need to be assigned before LOLA is executed, i.e., the maximum allowed length of idle time for constructing parameterized-active schedules, the scheduling direction of constructing schedules, the perturbation method to generate an initial solution, and the operator combination used to generate a neighborhood structure. The values of these four input parameters could be adjusted and input by a human user; in the proposed two-level algorithm, however, this duty belongs to UPLA.

UPLA is a population-based metaheuristic algorithm searching in the real-number search space; from this point of view, it is similar to the particle swarm optimization [21], differential evolution [22], firefly [23], cuckoo search [24], and artificial fish-swarm [25, 26] algorithms. Unlike the others, UPLA’s function is not to find the problem solutions directly; instead, it is designed to be an input-parameter controller specifically for LOLA. At the initial iteration, UPLA starts with a population of combinations of LOLA’s input-parameter values. All input-parameter values in each input-parameter-value combination are represented as real numbers; most of them need to be decoded from real numbers into other forms understandable by LOLA. At each iteration, UPLA tries to improve its population by using the feedback returned from LOLA. With the support of UPLA, LOLA is upgraded from a local search algorithm to an iterated local search algorithm. Moreover, the combination of UPLA and LOLA results in a two-level algorithm which can adapt itself to every single JSP instance.

The remainder of this paper is structured as follows. Section 2 provides the preliminary knowledge relevant to this research. Section 3 presents the proposed lower-level algorithm (LOLA), including its decoding procedure for generating solutions. Section 4 presents the proposed upper-level algorithm (UPLA), including its decoding procedure for generating LOLA’s input-parameter values. Section 5 then evaluates the performance of the proposed two-level metaheuristic algorithm on well-known JSP instances. Finally, Section 6 provides a conclusion.

2. Preliminaries

The job-shop scheduling problem (JSP) comes with a given set of n jobs J_1, J_2, …, J_n and a given set of m machines M_1, M_2, …, M_m. The jobs arrive before or at the schedule’s start time (i.e., time 0), and the machines are all available at the schedule’s start time as well. Each job J_i consists of an unchangeable sequence of m operations O_{i,1}, O_{i,2}, …, O_{i,m}. Thus, in order to process the job J_i, the operation O_{i,1} must be finished before the operation O_{i,2} can start, the operation O_{i,2} must be finished before the operation O_{i,3} can start, and so on. Moreover, each operation must be processed on a preassigned machine with a predetermined processing time. Each machine can process at most one operation at a time, and it cannot be interrupted while processing an operation. The objective of the JSP in this paper is to find a feasible schedule of the jobs on the machines that minimizes the makespan. Note that the makespan is the total length of the schedule, i.e., the total amount of time required to complete all jobs. Good reviews of the JSP are found in [1, 2, 27].
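As a concrete illustration, an instance and the makespan of a schedule can be sketched as follows. This is a minimal sketch, not taken from the paper; the 3-job/2-machine data are hypothetical, and the layout jobs[i][k] = (machine, processing_time) is an assumption of this sketch.

```python
# A hypothetical 3-job/2-machine instance: jobs[i][k] = (machine, proc_time)
# for operation k of job i; each job visits every machine exactly once.
jobs = [
    [(0, 3), (1, 2)],   # job 1: machine 0 for 3 time units, then machine 1 for 2
    [(0, 2), (1, 4)],   # job 2
    [(1, 3), (0, 2)],   # job 3
]

def makespan(start_times):
    """Makespan = latest finish time over all operations.
    start_times[i][k] is the start time of operation k of job i."""
    return max(
        start_times[i][k] + jobs[i][k][1]
        for i in range(len(jobs))
        for k in range(len(jobs[i]))
    )
```

A feasible assignment of start times then evaluates to a single makespan value, which is the quantity the algorithms below try to minimize.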

Schedules in the JSP can be constructed in the forward or backward (reverse) direction. A schedule is defined as a forward schedule if every job J_i (where i = 1, 2, …, n) in the schedule is done by starting from the first operation O_{i,1} to the last operation O_{i,m}. On the contrary, a schedule is defined as a backward schedule if every job J_i (where i = 1, 2, …, n) in the schedule is done backward by starting from the last operation O_{i,m} to the first operation O_{i,1}. Backward scheduling is commonly used for scheduling problems with a due-date criterion. However, it also provides a benefit to the JSP with a makespan criterion because some instances can be solved to optimality more simply in the backward direction than in the forward direction. A backward schedule can simply be constructed by reversing the directions of the precedence constraints of all operations and then allocating jobs to machines in the same way as constructing a forward schedule; after that, the schedule must be turned back to front in order to make it satisfy the original precedence constraints. Uses of the backward scheduling direction are found in published articles, e.g., [7, 28–31].
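The reverse-then-mirror construction described above can be sketched as follows. This is a simplified sketch under an assumed data layout (jobs[i] lists (machine, processing_time) pairs; start times are per operation); the helper names are hypothetical.

```python
def reverse_instance(jobs):
    """Step one of backward scheduling: reverse the precedence relations,
    i.e., flip the operation order of every job."""
    return [list(reversed(ops)) for ops in jobs]

def turn_back_to_front(jobs_rev, start_rev):
    """Step two: mirror a schedule of the reversed instance in time so that
    it satisfies the original precedence constraints and starts at time 0.
    Each operation's new start time is (makespan - its old finish time),
    and each job's operation order is flipped back."""
    cmax = max(s + p for ops, row in zip(jobs_rev, start_rev)
               for (m, p), s in zip(ops, row))
    return [[cmax - (s + p) for (m, p), s in reversed(list(zip(ops, row)))]
            for ops, row in zip(jobs_rev, start_rev)]
```

The mirrored schedule has the same makespan as the schedule built on the reversed instance; only the time axis is flipped.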

All feasible schedules can be classified into three classes, i.e., semiactive schedules, active schedules, and nondelay schedules [16, 19, 32]. A feasible schedule is defined as a semiactive schedule if no operation can be started earlier without changing the operation sequence. A semiactive schedule is defined as an active schedule if no operation can start earlier without delaying another operation or without violating the precedence constraints. Finally, an active schedule is defined as a nondelay schedule if no machine is kept idle when it can start processing an operation. Thus, the solution space of active schedules is a subset of the solution space of semiactive schedules, and the solution space of nondelay schedules is a subset of the solution space of active schedules. The solution space of active schedules is surely dominant over the solution space of semiactive schedules since it is smaller and also guaranteed to contain an optimal schedule. The solution space of nondelay schedules, which is the smallest solution space, may not contain an optimal schedule.

In addition to the three solution spaces given above, the solution space of parameterized-active schedules, or hybrid schedules [16, 17], is a subset of the solution space of active schedules which is not smaller than the solution space of nondelay schedules. By definition, a parameterized-active schedule is an active schedule in which no machine runs idle for more than a maximum allowed length of idle time. In [16], the maximum allowed length of idle time is controlled by a tunable parameter δ. At the extremes, the GA of [16] with δ = 1 will explore the solution space of active schedules, while the GA with δ = 0 will explore the solution space of nondelay schedules. Because δ can be adjusted to any real number between 0 and 1, this parameter thus controls the algorithm’s solution space between nondelay schedules and active schedules. Some variants of the hybrid scheduler of [16], modified for the purpose of simplification, can be found in published articles such as [15, 31, 33, 34].

Since the proper value of δ is problem-dependent, researchers have tried to control this tunable parameter in many different ways. Some metaheuristic algorithms such as [16, 33] use δ with fixed values. On the other hand, some other metaheuristic algorithms such as [34, 35] adjust the δ values during their evolutionary processes; thus, their methods of changing the δ values belong to the class of self-adaptive parameter control based on the definition given by [10]. In addition, some metaheuristic algorithms adjust the δ values based on mathematical functions of another parameter (or other parameters); for example, the local search algorithms of [31] adjust the δ value based on the iteration index. The concept of a two-level metaheuristic algorithm in which the upper-level algorithm controls the input parameters, including δ, of the lower-level algorithm has been found in [15]. Hereafter, the parameter δ will be called the acceptable idle-time limit in this paper.

3. The Proposed Lower-Level Algorithm

LOLA, which is the lower-level algorithm in the proposed two-level algorithm, acts as a local search algorithm. It is similar to other local search algorithms in its basic framework [36–39]; in addition, it generates neighbor solutions based on the common swap and insert operators [31, 40, 41]. However, LOLA has three specific features that differ from general local search algorithms as follows:
(i) LOLA transforms an operation-based permutation [18–20] into a parameterized-active schedule (or hybrid schedule) via Algorithm 1 (see Section 3.1)
(ii) LOLA allows the values of its input parameters to be adjusted, so that LOLA can be best fitted for every single instance. These input parameters are the acceptable idle-time limit, the scheduling direction, the perturbation method, and the neighborhood structure. In this paper, the duty of adjusting the proper parameter values for LOLA belongs to UPLA
(iii) Once LOLA is combined with UPLA, it will use the perturbation method selected by UPLA to generate its initial solution. By the perturbation method supported from UPLA, LOLA is upgraded from a local search algorithm to an iterated local search algorithm, which can escape from a local optimum. Reviews of iterated local search algorithms can be found in many articles, e.g., [42]

Section 3.1 presents Algorithm 1, the procedure of decoding an operation-based permutation into a parameterized-active schedule used by LOLA. A parameterized-active schedule generated by Algorithm 1 can be a forward parameterized-active schedule (i.e., one constructed in the forward scheduling direction) or a backward parameterized-active schedule (i.e., one constructed in the backward scheduling direction), depending on UPLA’s selection. Section 3.2 then presents Algorithm 2, the procedure of LOLA.

3.1. Decoding Procedure to Generate Parameterized-Active Schedules Used by LOLA

For an n-job/m-machine JSP instance, an operation-based permutation [18–20] is a permutation with repetition of the numbers 1, 2, …, n, where each number occurs m times. As an example, the permutation (3, 2, 2, 1, 3, 1) is an operation-based permutation representing a schedule for a 3-job/2-machine JSP instance. The procedure of transforming an operation-based permutation into a parameterized-active schedule used by LOLA is given in Algorithm 1. As mentioned, a parameterized-active schedule decoded from an operation-based permutation by Algorithm 1 can be constructed in either the forward direction or the backward direction as two options.
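The mapping from a permutation to operation priorities can be sketched as follows (a hypothetical helper, representing operations as (job, op_index) pairs):

```python
from collections import defaultdict

def permutation_to_priority(perm):
    """Turn an operation-based permutation into a priority-ordered list of
    operations (job, op_index): the j-th occurrence of job number i,
    counting from the left, denotes operation j of job i; earlier in the
    permutation means higher priority."""
    seen = defaultdict(int)
    order = []
    for job in perm:
        seen[job] += 1
        order.append((job, seen[job]))
    return order
```

For the permutation (3, 2, 2, 1, 3, 1), this yields the priority order (3, 1), (2, 1), (2, 2), (1, 1), (3, 2), (1, 2), matching the example used in Algorithm 1.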

In brief, Algorithm 1 starts by receiving an operation-based permutation, an acceptable idle-time limit δ, and a scheduling direction in Step 1. If the scheduling direction just received is backward, then Step 1 reverses the order of all members in the operation-based permutation, and Step 2 reverses the precedence relations of all operations of each job. Step 3 then transforms the operation-based permutation into the priority order of all operations. Based on the priority order of operations and the acceptable idle-time limit, a parameterized-active schedule will then be constructed by Steps 4 to 7. In these steps, Algorithm 1 will iteratively choose the highest-priority operation from all schedulable operations which can be started no later than σ + δ(φ − σ), and add the chosen operation to the partial schedule. Note that σ is the minimum of the earliest possible start times of all schedulable operations, and φ is the minimum of the earliest possible finish times of all schedulable operations. Steps 5 to 7 will be repeated until the schedule is completed. After that, if the scheduling direction received in Step 1 is forward, the algorithm stops; however, if it is backward, Step 8 will modify the schedule given by Step 7 to satisfy the original precedence constraints.

Algorithm 1 can be regarded as a generalization of the solution-decoding procedures shown in [15, 31, 33, 34] since it can generate a parameterized-active schedule in either the forward direction or the backward direction as options. Algorithm 1 and its abovementioned variants are modified from the hybrid scheduler of [16] for the purpose of simplification (as found in Steps 5 and 6 of Algorithm 1). Hence, Algorithm 1 may construct a different schedule from the hybrid scheduler of [16], even if they both use an identical δ value on an identical operation-based permutation.

Algorithm 1. It is the procedure of decoding an operation-based permutation into a parameterized-active schedule used by LOLA.
Step 1. Receive an operation-based permutation. Then, receive an acceptable idle-time limit δ and a scheduling direction (forward or backward) as input-parameter values. If the scheduling direction is backward, then reverse the order of all members in the operation-based permutation. For example, if the received scheduling direction is backward, a permutation (3, 2, 2, 1, 3, 1) will be changed to (1, 3, 1, 2, 2, 3).
Step 2. If the scheduling direction received in Step 1 is backward, then reverse the precedence relations of all operations of every job in the JSP instance under consideration by using Steps 2.1 and 2.2.
Step 2.1. For each job J_i (where i = 1, 2, …, n), rename the operations O_{i,1}, O_{i,2}, …, O_{i,m} as O_{i,m}, O_{i,m−1}, …, O_{i,1}, respectively.
Step 2.2. Assign the precedence relations of all operations of the job J_i by following the operation indices taken from Step 2.1. This means that O_{i,1} must be finished before O_{i,2} can start, O_{i,2} must be finished before O_{i,3} can start, and so on.
Step 3. Transform the operation-based permutation taken from Step 1 into an order of priorities of all operations as follows: let the jth occurrence of the number i in the permutation, starting from furthest at the left, represent the operation O_{i,j}; then, let the order of all operations in the permutation, starting from furthest at the left, represent the descending order of priorities of the operations. For example, the permutation (3, 2, 2, 1, 3, 1) means priority of O_{3,1} > priority of O_{2,1} > priority of O_{2,2} > priority of O_{1,1} > priority of O_{3,2} > priority of O_{1,2}.
Step 4. Let S_t be the set of all schedulable operations at stage t. Note that a schedulable operation is an as-yet-unscheduled operation all of whose preceding operations in its job have already been scheduled. Let PS_t be the partial schedule of the scheduled operations. Thus, PS_1 is empty, and S_1 consists of all operations O_{i,1} (where i = 1, 2, …, n). In addition, let the earliest possible start times of all operations be equal to time 0. Now, let t = 1.
Step 5. Let σ_t be the minimum of the earliest possible start times of all schedulable operations in S_t, and let φ_t be the minimum of the earliest possible finish times of all schedulable operations in S_t. (Note that the earliest possible start time of a specific schedulable operation is the maximum between the finish time of its immediately preceding operation in its job and the earliest available time of its preassigned machine. Then, the earliest possible finish time of a specific schedulable operation is the sum of its earliest possible start time and its predetermined processing time.)
Step 6. Let O∗ represent the highest-priority operation of all schedulable operations in S_t whose earliest possible start times are not greater than σ_t + δ(φ_t − σ_t). Then, create PS_{t+1} by allocating O∗ into PS_t on the machine preassigned for processing O∗ at the earliest possible start time of O∗. After that, create S_{t+1} by deleting O∗ from S_t; in addition, if O∗ has an immediately successive operation in its job, let that operation be added to S_{t+1}.
Step 7. If t < mn, then increase the value of t by 1, and repeat from Step 5; otherwise, let the schedule be a completed parameterized-active schedule, and go to Step 8. (Remember that mn is the number of all operations.)
Step 8. If the scheduling direction received in Step 1 is forward, then stop Algorithm 1, and let the schedule taken from Step 7 be the completed forward parameterized-active schedule and the algorithm’s final result; however, if it is backward, then modify the schedule to satisfy the original precedence constraints by using Steps 8.1 and 8.2.
Step 8.1. Let the operations O_{i,1}, O_{i,2}, …, O_{i,m} of each job J_i in the schedule be renamed O_{i,m}, O_{i,m−1}, …, O_{i,1}, respectively.
Step 8.2. Turn the schedule modified from Step 8.1 back to front, so that the last-finished operation in the schedule becomes the first-started operation, and vice versa; after that, let the schedule start at time 0.
Then, stop Algorithm 1, and let the schedule modified in this step be the completed backward parameterized-active schedule and the algorithm’s final result.
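Steps 3 to 7 of Algorithm 1 (forward direction only) can be sketched roughly as follows. The data layout and helper names are assumptions of this sketch: jobs[i][k] = (machine, processing_time), and job numbers in the permutation run from 1 to n.

```python
def decode_forward(perm, jobs, delta):
    """A rough sketch of Steps 3-7 of Algorithm 1, forward direction only.
    Returns {(job, op_index): start_time} for every operation."""
    # Step 3: the j-th occurrence of job number i denotes operation (i, j);
    # position in the permutation gives descending priority.
    seen, priority = {}, {}
    for rank, j in enumerate(perm):
        seen[j] = seen.get(j, 0) + 1
        priority[(j, seen[j])] = rank
    # Step 4: the first operation of every job is initially schedulable.
    n = len(jobs)
    schedulable = {(j, 1) for j in range(1, n + 1)}
    job_ready = {j: 0 for j in range(1, n + 1)}   # job's earliest free time
    mach_ready = {}                               # machine's earliest free time
    start = {}

    def est(op):
        """Earliest possible start time of a schedulable operation."""
        j, k = op
        machine, _ = jobs[j - 1][k - 1]
        return max(job_ready[j], mach_ready.get(machine, 0))

    while schedulable:
        # Step 5: sigma / phi over all schedulable operations
        sigma = min(est(op) for op in schedulable)
        phi = min(est(op) + jobs[op[0] - 1][op[1] - 1][1] for op in schedulable)
        # Step 6: highest-priority operation startable by sigma + delta*(phi - sigma)
        limit = sigma + delta * (phi - sigma)
        chosen = min((op for op in schedulable if est(op) <= limit),
                     key=priority.get)
        j, k = chosen
        machine, proc = jobs[j - 1][k - 1]
        start[chosen] = est(chosen)
        job_ready[j] = mach_ready[machine] = start[chosen] + proc
        # Step 7 bookkeeping: release the job's next operation, if any
        schedulable.remove(chosen)
        if k < len(jobs[j - 1]):
            schedulable.add((j, k + 1))
    return start
```

With delta = 1 the candidate filter admits every schedulable operation (active schedules); with delta = 0 only operations startable at sigma survive (nondelay schedules).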

3.2. Procedure of LOLA

As mentioned earlier, the four input parameters of LOLA controlled by UPLA consist of the acceptable idle-time limit δ, the scheduling direction D, the perturbation method IP for generating an initial operation-based permutation, and the method NP for generating a neighbor operation-based permutation. The details of these four input parameters of LOLA are given below:
(i) The acceptable idle-time limit δ is defined as a controller of the solution space; for example, the LOLA with δ = 0 will search in the solution space of nondelay schedules
(ii) The scheduling direction D is the direction of constructing schedules. If D is chosen to be forward, LOLA will generate only forward schedules. On the other hand, if D is chosen to be backward, LOLA will generate only backward schedules
(iii) The perturbation method IP is the method used to generate an initial operation-based permutation for LOLA. If IP is selected to be full randomization, the initial operation-based permutation will be generated by full randomization without using any part of the best-found permutation memorized by UPLA. However, if IP is selected to be partial randomization, the initial operation-based permutation will be generated by applying the insert operator a preset number of times to the best-found operation-based permutation memorized by UPLA
(iv) The method NP for generating a neighbor operation-based permutation is the adjustable method for modifying the LOLA’s current best-found operation-based permutation into a neighbor operation-based permutation. The 2-insert option uses two insert operators. The 1-insert/1-swap option uses an insert operator and then a swap operator. The 1-swap/1-insert option uses a swap operator and then an insert operator. Finally, the 2-swap option uses two swap operators
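The four NP options are simple two-operator compositions; a sketch (the helper name and the function-valued operator arguments are hypothetical):

```python
def make_neighbor_method(name, swap, insert):
    """Compose one of the four NP options from the two base operators.
    `swap` and `insert` are functions mapping a permutation to a new one."""
    combos = {
        "2-insert":        [insert, insert],
        "1-insert/1-swap": [insert, swap],
        "1-swap/1-insert": [swap, insert],
        "2-swap":          [swap, swap],
    }
    ops = combos[name]

    def neighbor(perm):
        # Apply the two chosen operators in order.
        for op in ops:
            perm = op(perm)
        return perm

    return neighbor
```

Each returned `neighbor` function applies exactly two operator draws, matching the paper's four combinations.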

In this paper, the swap operator is done by randomly selecting two members (of all mn members) from two different positions in the permutation, and then switching the positions of the two selected members. The insert operator is done by randomly selecting two members (of all mn members) from two different positions in the permutation, removing the first-selected member from its old position, and then inserting it into the position in front of the second-selected member. Note that n is the number of all jobs, m is the number of all machines, and mn is thus the number of all operations in the problem’s instance. The procedure of LOLA is given in Algorithm 2.
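The two base operators can be sketched as follows (a sketch; the uniform random choice of positions is an assumption):

```python
import random

def swap_operator(perm, rng=random):
    """Randomly pick two different positions and switch their members."""
    p = list(perm)
    i, j = rng.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def insert_operator(perm, rng=random):
    """Randomly pick two different positions, remove the first-selected
    member, and re-insert it directly in front of the second-selected one."""
    p = list(perm)
    i, j = rng.sample(range(len(p)), 2)
    member = p.pop(i)
    if i < j:
        j -= 1          # removal shifted the target one slot to the left
    p.insert(j, member)
    return p
```

Both operators return a new list and preserve the multiset of members, so the result is always a valid operation-based permutation.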

Algorithm 2. It is the procedure of LOLA.
Step 1. Receive the best-found operation-based permutation memorized by UPLA. Then, receive the values of the LOLA’s input parameters from UPLA via Steps 1.1 to 1.4.
Step 1.1. Receive the acceptable idle-time limit δ.
Step 1.2. Receive the scheduling direction D.
Step 1.3. Receive the perturbation method IP for generating an initial operation-based permutation.
Step 1.4. Receive the method NP for generating a neighbor operation-based permutation.
Step 2. Let π_best represent the LOLA’s current best-found operation-based permutation. Generate an initial π_best by using the IP method received in Step 1.3.
Step 3. Execute Algorithm 1 by using δ from Step 1.1 and D from Step 1.2 on the permutation π_best from Step 2. Then, let S_best be equal to the schedule returned from Algorithm 1 executed in this step.
Step 4. Execute the local search procedure by Steps 4.1 to 4.5.
Step 4.1. Let c = 0.
Step 4.2. Generate a neighbor operation-based permutation π_new by modifying the permutation π_best via the method NP received in Step 1.4.
Step 4.3. Execute Algorithm 1 by using δ from Step 1.1 and D from Step 1.2 on the permutation π_new from Step 4.2. Then, let S_new be equal to the schedule returned from Algorithm 1 executed in this step.
Step 4.4. If the makespan of S_new is less than the makespan of S_best, then update S_best to be equal to S_new, update π_best to be equal to π_new, and let c = 0; otherwise, increase the value of c by 1.
Step 4.5. If c < mn(mn − 1), then repeat from Step 4.2; otherwise, stop Algorithm 2, and let S_best and π_best be the final best-found schedule and the final best-found operation-based permutation, respectively, as the LOLA’s final results.

Remember that no parameter-value settings are required for LOLA here, since its four input parameters are controlled by UPLA. As shown in Step 1 of Algorithm 2, the settings of δ, D, IP, and NP are provided by UPLA.
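Stripped of the scheduling details, the accept/stop logic of Algorithm 2 can be sketched generically. All four callables are placeholders for the UPLA-configured pieces (Algorithm 1 plus the chosen IP and NP methods), and are assumptions of this sketch.

```python
def lola(evaluate, perturb, neighbor, init, max_fail):
    """Sketch of Algorithm 2's local-search core: keep a best solution,
    draw neighbors, accept strict improvements, and stop after max_fail
    consecutive non-improving neighbors (mn(mn - 1) in the paper)."""
    best = perturb(init)          # Step 2: initial solution via IP
    best_val = evaluate(best)     # Step 3: decode and measure
    fails = 0
    while fails < max_fail:       # Step 4.5: stopping rule
        cand = neighbor(best)     # Step 4.2: neighbor via NP
        val = evaluate(cand)      # Step 4.3: decode and measure
        if val < best_val:        # Step 4.4: accept strict improvement
            best, best_val, fails = cand, val, 0
        else:
            fails += 1
    return best, best_val
```

Because acceptance is strictly improving, the loop terminates once max_fail neighbors in a row fail to beat the incumbent.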

4. The Proposed Upper-Level Algorithm

UPLA is a population-based metaheuristic algorithm exploring a real-number search space; from this viewpoint, it is similar to the other population-based algorithms [21–26]. However, the procedure of improving its population is different since it is developed for being a parameter controller for LOLA. UPLA starts with a population of N combinations of LOLA’s input-parameter values. Let C_i(t) represent the ith combination of the LOLA’s input-parameter values (where i = 1, 2, …, N) at the tth iteration. The members c_{i,1}(t), c_{i,2}(t), c_{i,3}(t), and c_{i,4}(t) in the combination C_i(t) represent the acceptable idle-time limit δ, the scheduling direction D, the perturbation method IP for generating an initial operation-based permutation, and the method NP for generating a neighbor operation-based permutation, respectively, in LOLA. Table 1 shows the translation from c_{i,1}(t), c_{i,2}(t), c_{i,3}(t), and c_{i,4}(t) of the combination C_i(t) into δ, D, IP, and NP of LOLA. In short, let a combination of LOLA’s input-parameter values be called a parameter-value combination.


[Table 1: the relationships translating c_{i,1}(t), c_{i,2}(t), c_{i,3}(t), and c_{i,4}(t) of C_i(t) in UPLA into δ, D, IP, and NP of LOLA. Columns: UPLA, LOLA, Relationship.]

The performance of C_i(t) is simply equal to the makespan returned from the LOLA using the parameter values decoded from C_i(t). This means that, between two parameter-value combinations, the better combination is the one which makes LOLA return the lower makespan. Thus, a parameter-value combination will be defined as the best if it makes LOLA return the lowest makespan. In UPLA, let C_best represent the best-found parameter-value combination memorized by UPLA. Thus, C_best will always be updated whenever UPLA finds a parameter-value combination which performs better than the current C_best. In other words, once UPLA finds a C_i(t) performing better than the current C_best, this C_i(t) will become the new C_best.

In the tth UPLA iteration, c_{i,1}(t), c_{i,2}(t), c_{i,3}(t), and c_{i,4}(t), as the members of C_i(t), will each be updated respectively into c_{i,1}(t + 1), c_{i,2}(t + 1), c_{i,3}(t + 1), and c_{i,4}(t + 1) of C_i(t + 1) by two vectors. The two vectors, used for updating each member’s current value in C_i(t) to its new value in C_i(t + 1), are described by their directions and magnitudes as follows. In the vector directions, the first vector’s direction is from the current value in C_i(t) toward the best-found value in C_best, while the second vector’s direction is opposite to the first vector’s direction. In the vector magnitudes, if the current value in C_i(t) and the best-found value in C_best are different, the first vector’s magnitude and the second vector’s magnitude are each a random real number drawn from its own predefined range; however, if they are the same, the magnitudes of the two vectors are both random real numbers drawn from the same predefined range. The process of iteratively updating the parameter-value combination given in this paragraph is found in Steps 4.2 to 4.5 of Algorithm 3.
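For a real-valued member of Ci(t), the two-vector update described above can be sketched as follows. This is a hedged sketch only: the magnitude intervals `toward` and `away` are placeholders supplied by the caller, not the paper's actual ranges.

```python
import random

def two_vector_update(current, best, toward, away, rng=random):
    """Sketch of the two-vector member update: one random-magnitude step
    from `current` toward `best`, plus one random-magnitude step in the
    opposite direction. `toward` and `away` are (low, high) magnitude
    intervals and are assumptions of this sketch."""
    sign = 1.0 if best >= current else -1.0
    step_toward = sign * rng.uniform(*toward)
    step_away = -sign * rng.uniform(*away)
    return current + step_toward + step_away
```

The net effect is a biased random walk: the new value tends to drift toward the best-found value while retaining a chance of moving away from it.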

The mechanism of the UPLA combined with LOLA is summarized in Figure 1, where Makespan_i(t) stands for the makespan of the schedule returned from the LOLA using the input-parameter values decoded from C_i(t). The procedure of UPLA is fully presented in Algorithm 3. In Algorithm 3, Steps 1 and 2 initialize the UPLA’s population. Although the largest possible range of δ is [0, 1] as mentioned earlier, the algorithm generates δ initially within [0.7, 1.0) and then restricts its value within [0.7, 1.0) along the computational process, based on a suggestion from [31]. Step 3 transforms each C_i(t) into δ, D, IP, and NP for LOLA, and this step then executes the LOLA using the input-parameter values transformed from C_i(t). After that, Step 3 evaluates the performance of each C_i(t), where i = 1, 2, …, N, and updates C_best. Step 4 then updates C_i(t) to C_i(t + 1); however, all values in C_i(t + 1) will be randomly regenerated once every 25 iterations for the purpose of diversifying the population. Step 5 finally checks the condition to stop or continue the next iteration.

Algorithm 3. It is the procedure of UPLA.
Step 1. Let C_i(t) represent the ith parameter-value combination (where i = 1, 2, …, N) at the tth iteration. Let C_best represent the best-found parameter-value combination. Set the initial performance of C_best to be equal to an extremely large value.
Step 2. Let t = 1. Then, initially generate c_{i,1}(t), c_{i,2}(t), c_{i,3}(t), and c_{i,4}(t), where i = 1, 2, …, N, at random from their predefined ranges (with c_{i,1}(t) generated within [0.7, 1.0), as mentioned above).
Step 3. Evaluate the performance of each C_i(t) and update C_best by Steps 3.1 to 3.5.
Step 3.1. Let i = 1.
Step 3.2. Translate c_{i,1}(t), c_{i,2}(t), c_{i,3}(t), and c_{i,4}(t) of C_i(t) into δ, D, IP, and NP for LOLA by the relationships shown in Table 1. Then, execute the LOLA (Algorithm 2) which uses the δ, D, IP, and NP translated in this step.
Step 3.3. Let the performance of C_i(t) be equal to the makespan of the schedule returned from the LOLA executed in Step 3.2.
Step 3.4. If the performance of C_i(t) is better (lower) than the performance of C_best, then update C_best to be equal to C_i(t), update the performance of C_best to be equal to the performance of C_i(t), and also set the schedule and the permutation returned from the LOLA executed in Step 3.2 respectively as the best-found schedule and the best-found operation-based permutation memorized by UPLA.
Step 3.5. If i < N, then increase the value of i by 1, and repeat from Step 3.2; otherwise, go to Step 4.
Step 4. If t mod 25 = 0, then randomly regenerate c_{i,1}(t + 1), c_{i,2}(t + 1), c_{i,3}(t + 1), and c_{i,4}(t + 1), where i = 1, 2, …, N, from their predefined ranges; otherwise, generate C_i(t + 1) by Steps 4.1 to 4.6.
Step 4.1. Let i = 1.
Step 4.2. Generate c_{i,1}(t + 1) by applying to c_{i,1}(t) the two vectors described above. After that, if c_{i,1}(t + 1) < 0.7 or c_{i,1}(t + 1) ≥ 1.0, then regenerate c_{i,1}(t + 1) at random within [0.7, 1.0).
Step 4.3. Generate c_{i,2}(t + 1) by applying to c_{i,2}(t) the two vectors described above.
Step 4.4. Generate c_{i,3}(t + 1) by applying to c_{i,3}(t) the two vectors described above.
Step 4.5. Generate c_{i,4}(t + 1) by applying to c_{i,4}(t) the two vectors described above.
Step 4.6. If i < N, then increase the value of i by 1, and repeat from Step 4.2; otherwise, go to Step 5.

Step 5. If the stopping criterion is not met, then increase the value of t by 1, and repeat from Step 3. Otherwise, stop Algorithm 3, and let the best-found schedule memorized by UPLA be the final result of UPLA.

5. Performance Evaluation

Among JSP-solving algorithms, the two-level particle swarm optimization algorithm (the two-level PSO) of [15] is probably the most similar to the proposed two-level metaheuristic algorithm. The similarity is that both lower-level algorithms generate parameterized-active schedules by similar procedures; in addition, both upper-level algorithms control the same two parameters (i.e., the acceptable idle-time limit and the scheduling direction for constructing schedules) for their lower-level algorithms. However, the two-level PSO [15] differs from the proposed two-level metaheuristic algorithm in that it uses the GLN-PSO framework [43] at both levels. Given this similarity and this difference, this paper selects the two-level PSO [15] as the performance benchmark for the proposed two-level metaheuristic algorithm. In this section, UPLA will represent the proposed two-level metaheuristic algorithm as a whole (i.e., the UPLA combined with LOLA), since UPLA must execute with LOLA inside it.

This section evaluates the performance of UPLA on 53 well-known JSP benchmark instances. The 53 benchmark instances consist of the ft06, ft10, and ft20 instances from [44], the la01 to la40 instances from [45], and the orb01 to orb10 instances from [46]; these 53 instances can also be found online in [47]. This paper divides the 53 benchmark instances into two sets, i.e., the set of ft-and-la instances composed of ft06 to la40 and the set of orb instances composed of orb01 to orb10. To evaluate UPLA’s performance, UPLA will be compared to the two-level PSO of [15] in their results on the set of ft-and-la instances. Since the results of the two-level PSO on the orb instances do not exist, UPLA will be compared to the GA of [6] in their results on the set of orb instances.

In the performance comparisons, the results of UPLA are obtained from the experiment here. The settings of UPLA for the experiment are given as follows:
(i) The population in UPLA consists of 10 combinations of the LOLA’s input-parameter values (i.e., N = 10)
(ii) The stopping criterion of UPLA is that either the 200th iteration is reached (i.e., t = 200 is the maximum iteration) or the optimal solution shown in published articles, e.g., [5], is found
(iii) UPLA is coded in C# and executed on an Intel® Core™ i5 M580 processor @ 2.67 GHz with 6 GB of RAM (2.3 GB usable)
(iv) UPLA is executed for 10 runs with different random-seed numbers
(v) The directions and magnitudes of the vectors used to generate C_i(t + 1) are given in Step 4 of Algorithm 3

Tables 2 and 3 show the experiment's results on the ft-and-la instances and the orb instances, respectively. Note that the words solution and solution value in this paper are equivalent to the words schedule and makespan, respectively. The information given by these tables is as follows:
(i) The column Instance provides the name of each instance.
(ii) The column Opt provides the optimal solution value (i.e., the optimal schedule's makespan) of each instance, as given in published articles, e.g., [5].
(iii) The column UPLA in each table is divided into six columns, i.e., Best, %BSVD, Avg., %ASVD, Avg. No. of Iters, and Avg. CPU Time (sec), defined as follows:
(a) Best is the best-found solution value over the 10 runs of UPLA.
(b) %BSVD is the percentage deviation of the best-found solution value over the 10 runs of UPLA from the optimal solution value.
(c) Avg. is the average of the best-found solution values from the 1st run to the 10th run of UPLA.
(d) %ASVD is the percentage deviation of this average from the optimal solution value.
(e) Avg. No. of Iters is the average number of iterations used by UPLA until it reaches the stopping criterion, over the 10 runs.
(f) Avg. CPU Time (sec) is the average computational time (in seconds) used by UPLA until it reaches the stopping criterion, over the 10 runs.
(iv) The column PSO in Table 2 and the column GA in Table 3 are each divided into two columns, i.e., Best and %BSVD. Best in the column PSO is the two-level PSO's best-found solution value [15], while Best in the column GA is the GA's best-found solution value [6]. %BSVD is the percentage deviation of the specified algorithm's best-found solution value from the optimal solution value.
(v) Each best-found solution value is marked by an asterisk (*) if it equals the optimal solution value; in addition, it is marked by a sharp sign (#) if it wins the comparison against the other algorithm.
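Both deviation measures reduce to the standard relative-deviation formula implied by the column definitions above; the paper does not spell out the rounding convention, so the two-decimal rounding below is an assumption matched against the tabulated values.

```python
def pct_deviation(value, optimum):
    """Percentage deviation of a makespan from the known optimum:
    100 * (value - optimum) / optimum, rounded to two decimals as
    displayed in Tables 2 and 3."""
    return round(100.0 * (value - optimum) / optimum, 2)

# Example with the la24 row of Table 2: UPLA's best makespan 941
# against the optimum 935 gives %BSVD = 0.64.
```

The same function applied to the average of the ten per-run best makespans yields the %ASVD column.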


Table 2: Results on the 43 ft-and-la instances (* = equal to the optimal value; # = better than the other algorithm).

Instance | Opt | PSO [15] Best | %BSVD | UPLA Best | %BSVD | Avg. | %ASVD | Avg. No. of Iters | Avg. CPU Time (sec)
ft06 | 55 | 55* | 0.00 | 55* | 0.00 | 55 | 0.00 | 1 | 0.3
ft10 | 930 | 930* | 0.00 | 930* | 0.00 | 931 | 0.13 | 92 | 1208
ft20 | 1165 | 1165* | 0.00 | 1165* | 0.00 | 1172 | 0.62 | 188 | 4114
la01 | 666 | 666* | 0.00 | 666* | 0.00 | 666 | 0.00 | 1 | 1
la02 | 655 | 655* | 0.00 | 655* | 0.00 | 655 | 0.00 | 2 | 2
la03 | 597 | 597* | 0.00 | 597* | 0.00 | 597 | 0.00 | 10 | 13
la04 | 590 | 590* | 0.00 | 590* | 0.00 | 590 | 0.00 | 3 | 4
la05 | 593 | 593* | 0.00 | 593* | 0.00 | 593 | 0.00 | 1 | 1
la06 | 926 | 926* | 0.00 | 926* | 0.00 | 926 | 0.00 | 1 | 4
la07 | 890 | 890* | 0.00 | 890* | 0.00 | 890 | 0.00 | 1 | 5
la08 | 863 | 863* | 0.00 | 863* | 0.00 | 863 | 0.00 | 1 | 5
la09 | 951 | 951* | 0.00 | 951* | 0.00 | 951 | 0.00 | 1 | 4
la10 | 958 | 958* | 0.00 | 958* | 0.00 | 958 | 0.00 | 1 | 4
la11 | 1222 | 1222* | 0.00 | 1222* | 0.00 | 1222 | 0.00 | 1 | 12
la12 | 1039 | 1039* | 0.00 | 1039* | 0.00 | 1039 | 0.00 | 1 | 13
la13 | 1150 | 1150* | 0.00 | 1150* | 0.00 | 1150 | 0.00 | 1 | 12
la14 | 1292 | 1292* | 0.00 | 1292* | 0.00 | 1292 | 0.00 | 1 | 12
la15 | 1207 | 1207* | 0.00 | 1207* | 0.00 | 1207 | 0.00 | 1 | 17
la16 | 945 | 945* | 0.00 | 945* | 0.00 | 945 | 0.03 | 124 | 1458
la17 | 784 | 784* | 0.00 | 784* | 0.00 | 784 | 0.00 | 6 | 78
la18 | 848 | 848* | 0.00 | 848* | 0.00 | 848 | 0.00 | 6 | 76
la19 | 842 | 842* | 0.00 | 842* | 0.00 | 843 | 0.17 | 90 | 1130
la20 | 902 | 907 | 0.55 | 902*# | 0.00 | 903 | 0.06 | 107 | 1304
la21 | 1046 | 1046*# | 0.00 | 1052 | 0.57 | 1058 | 1.18 | 200 | 13505
la22 | 927 | 935 | 0.86 | 927*# | 0.00 | 935 | 0.85 | 191 | 12840
la23 | 1032 | 1032* | 0.00 | 1032* | 0.00 | 1032 | 0.00 | 2 | 116
la24 | 935 | 944 | 0.96 | 941# | 0.64 | 943 | 0.83 | 200 | 12922
la25 | 977 | 984 | 0.72 | 982# | 0.51 | 986 | 0.93 | 200 | 12931
la26 | 1218 | 1218* | 0.00 | 1218* | 0.00 | 1218 | 0.00 | 66 | 13547
la27 | 1235 | 1258 | 1.86 | 1256# | 1.70 | 1266 | 2.51 | 200 | 41364
la28 | 1216 | 1218 | 0.16 | 1216*# | 0.00 | 1223 | 0.56 | 179 | 36085
la29 | 1152 | 1184# | 2.78 | 1191 | 3.39 | 1199 | 4.04 | 200 | 39881
la30 | 1355 | 1355* | 0.00 | 1355* | 0.00 | 1355 | 0.00 | 10 | 1895
la31 | 1784 | 1784* | 0.00 | 1784* | 0.00 | 1784 | 0.00 | 1 | 701
la32 | 1850 | 1850* | 0.00 | 1850* | 0.00 | 1850 | 0.00 | 1 | 849
la33 | 1719 | 1719* | 0.00 | 1719* | 0.00 | 1719 | 0.00 | 1 | 725
la34 | 1721 | 1721* | 0.00 | 1721* | 0.00 | 1721 | 0.00 | 1 | 1255
la35 | 1888 | 1888* | 0.00 | 1888* | 0.00 | 1888 | 0.00 | 1 | 902
la36 | 1268 | 1278 | 0.79 | 1278 | 0.79 | 1288 | 1.61 | 200 | 48387
la37 | 1397 | 1410 | 0.93 | 1407# | 0.72 | 1415 | 1.32 | 200 | 49836
la38 | 1196 | 1221 | 2.09 | 1215# | 1.59 | 1232 | 3.02 | 200 | 50876
la39 | 1233 | 1251 | 1.46 | 1250# | 1.38 | 1252 | 1.51 | 200 | 50603
la40 | 1222 | 1229 | 0.57 | 1229 | 0.57 | 1242 | 1.61 | 200 | 50609


Table 3: Results on the 10 orb instances (* = equal to the optimal value; # = better than the other algorithm).

Instance | Opt | GA [6] Best | %BSVD | UPLA Best | %BSVD | Avg. | %ASVD | Avg. No. of Iters | Avg. CPU Time (sec)
orb01 | 1059 | 1077 | 1.70 | 1059*# | 0.00 | 1069 | 0.93 | 186 | 2312
orb02 | 888 | 889 | 0.11 | 889 | 0.11 | 889 | 0.11 | 200 | 2393
orb03 | 1005 | 1022 | 1.69 | 1005*# | 0.00 | 1021 | 1.58 | 186 | 2358
orb04 | 1005 | 1005* | 0.00 | 1005* | 0.00 | 1006 | 0.11 | 67 | 796
orb05 | 887 | 890 | 0.34 | 889# | 0.23 | 890 | 0.32 | 200 | 2458
orb06 | 1010 | 1021 | 1.09 | 1013# | 0.30 | 1019 | 0.93 | 200 | 2525
orb07 | 397 | 397* | 0.00 | 397* | 0.00 | 399 | 0.55 | 178 | 2096
orb08 | 899 | 899* | 0.00 | 899* | 0.00 | 909 | 1.07 | 188 | 2338
orb09 | 934 | 934* | 0.00 | 934* | 0.00 | 935 | 0.05 | 74 | 884
orb10 | 944 | 944* | 0.00 | 944* | 0.00 | 944 | 0.00 | 67 | 817

The results of the two-level PSO [15] and UPLA on the 43 ft-and-la instances are taken from Table 2 and compared on three indicators, i.e., the number of asterisks (*), the number of sharp signs (#), and the %BSVD values. In counting the instances on which the optimal solution is found over the total of 43 instances (counting * marks), the two-level PSO finds the optimal solutions on 31 instances, while UPLA finds them on 33 instances. In counting the instances on which each algorithm finds a better solution than the other (counting # marks), the two-level PSO finds better solutions than UPLA on only 2 instances, while UPLA finds better solutions than the two-level PSO on 9 instances. On the %BSVD values, the average %BSVD of the two-level PSO is 0.32%, while that of UPLA is 0.28%. In conclusion, UPLA outperforms the two-level PSO [15] on all three indicators.

The results of the GA [6] and UPLA on the 10 orb instances in Table 3 are compared on the same three indicators. In counting the * marks, the GA finds the optimal solutions on 5 instances, while UPLA finds them on 7 instances. In counting the # marks, the GA does not find a better solution than UPLA on any instance, while UPLA finds better solutions than the GA on 5 instances. On the %BSVD values, the average %BSVD of the GA is 0.49%, while that of UPLA is 0.06%. Hence, UPLA also outperforms the GA [6] on all three indicators.
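As a cross-check, the optimum counts and average %BSVD values for the orb instances can be recomputed directly from the Best columns of Table 3; the tuples below transcribe (instance, Opt, GA Best, UPLA Best), and the helper name `indicators` is illustrative.

```python
# (instance, optimum, GA best [6], UPLA best) transcribed from Table 3.
ORB = [
    ("orb01", 1059, 1077, 1059), ("orb02",  888,  889,  889),
    ("orb03", 1005, 1022, 1005), ("orb04", 1005, 1005, 1005),
    ("orb05",  887,  890,  889), ("orb06", 1010, 1021, 1013),
    ("orb07",  397,  397,  397), ("orb08",  899,  899,  899),
    ("orb09",  934,  934,  934), ("orb10",  944,  944,  944),
]

def indicators(rows, col):
    """Number of optima found and average %BSVD for the given column
    (0 = GA, 1 = UPLA, counting from after the Opt column)."""
    optima = sum(1 for _, opt, *bests in rows if bests[col] == opt)
    avg_dev = sum(100.0 * (bests[col] - opt) / opt
                  for _, opt, *bests in rows) / len(rows)
    return optima, round(avg_dev, 2)

print(indicators(ORB, 0))  # GA:   (5, 0.49)
print(indicators(ORB, 1))  # UPLA: (7, 0.06)
```

The recomputed values reproduce the optimum counts (5 vs. 7) and the average %BSVD figures (0.49% vs. 0.06%) quoted above.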

The combination of UPLA and LOLA improves on an isolated LOLA in solution quality but at the cost of CPU time. One cause is that, if the optimal solution cannot be found, UPLA runs until its predetermined maximum of 200 iterations. However, a maximum of 200 iterations is usually higher than necessary, since on most instances UPLA finds its best solution well before the 200th iteration and finds no better solution afterwards. Thus, with a suitably lower maximum iteration, UPLA can finish its computation faster without any loss of solution quality. To determine a proper maximum iteration for UPLA, Figure 2 shows the %ASVD-over-iteration plots on the three hard-to-solve instances la27, la29, and la38.

In Figure 2, the %ASVD-over-iteration plots show similar patterns over three ranges of iterations: the 1st to the 60th, the 60th to the 150th, and the 150th to the 200th. The %ASVD value of each plot drops rapidly over the first 60 iterations. From the 60th to the 150th iteration, it continues to decrease at a slow rate. Finally, it is almost stable after the 150th iteration. Based on this finding, the maximum iteration is recommended to lie between 60 and 150. At the extremes, a maximum of 60 iterations should be used when a short CPU time is required, while 150 should be used when the user is mainly concerned with solution quality. A maximum below 60 is not recommended, since it tends to stop UPLA prematurely; a maximum above 150 is also not recommended, since UPLA has very little chance of finding a better solution after the 150th iteration.
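The kind of analysis behind this recommendation can be mechanized: given a per-iteration %ASVD trace like those in Figure 2, find the first iteration after which the improvement over a lookahead window falls below a tolerance. The function below is a hypothetical helper, not from the paper; the window and tolerance values are illustrative assumptions.

```python
def stall_iteration(asvd, window=50, tol=0.01):
    """Return the first iteration t (1-based) such that %ASVD improves
    by less than `tol` percentage points over the next `window`
    iterations; `asvd` holds the %ASVD value at each iteration."""
    for t in range(len(asvd) - window):
        if asvd[t] - asvd[t + window] < tol:
            return t + 1
    return len(asvd)
```

Applied to a trace with the shape described above (fast descent to about iteration 60, slow descent to about 150, then flat), the cutoff it reports falls in the recommended 60-to-150 band.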

6. Conclusion

This paper introduced a two-level metaheuristic algorithm, consisting of LOLA as the lower-level algorithm and UPLA as the upper-level algorithm, for the JSP. LOLA is a local search algorithm that searches for an optimal schedule, while UPLA serves as the parameter controller for LOLA. In more detail, UPLA is a new population-based search algorithm developed to adjust the values of LOLA's input parameters, and it plays an important role in adapting LOLA to perform at its best on every single instance. For example, UPLA controls LOLA's solution space through the acceptable idle-time limit δ, and it also upgrades LOLA from a local search algorithm to an iterated local search algorithm through the perturbation method IP. The numerical experiment in this paper showed that the proposed two-level metaheuristic algorithm outperforms, in terms of solution quality, the two other metaheuristic algorithms taken from the literature. A further study should generalize UPLA into a form that can control the parameters of different metaheuristic algorithms.

Data Availability

The data used to support the findings of this study are available from the author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The author would like to acknowledge partial financial support from the Thai-Nichi Institute of Technology, Thailand.

References

1. A. S. Jain and S. Meeran, “Deterministic job-shop scheduling: past, present and future,” European Journal of Operational Research, vol. 113, no. 2, pp. 390–434, 1999.
2. J. Błażewicz, W. Domschke, and E. Pesch, “The job shop scheduling problem: conventional and new solution techniques,” European Journal of Operational Research, vol. 93, no. 1, pp. 1–33, 1996.
3. U. Dorndorf and E. Pesch, “Evolution based learning in a job shop scheduling environment,” Computers & Operations Research, vol. 22, no. 1, pp. 25–40, 1995.
4. B. Peng, Z. Lü, and T. C. E. Cheng, “A tabu search/path relinking algorithm to solve the job shop scheduling problem,” Computers & Operations Research, vol. 53, pp. 154–164, 2015.
5. J. F. Gonçalves and M. G. C. Resende, “An extended Akers graphical method with a biased random-key genetic algorithm for job-shop scheduling,” International Transactions in Operational Research, vol. 21, no. 2, pp. 215–246, 2014.
6. N. H. Moin, O. C. Sin, and M. Omar, “Hybrid genetic algorithm with multiparents crossover for job shop scheduling problems,” Mathematical Problems in Engineering, vol. 2015, Article ID 210680, 12 pages, 2015.
7. T. Yamada and R. Nakano, “A fusion of crossover and local search,” in Proceedings of the IEEE International Conference on Industrial Technology (ICIT’96), pp. 426–430, IEEE, Shanghai, China, 1996.
8. J.-Q. Li, H.-Y. Sang, Y.-Y. Han, C.-G. Wang, and K.-Z. Gao, “Efficient multi-objective optimization algorithm for hybrid flow shop scheduling problems with setup energy consumptions,” Journal of Cleaner Production, vol. 181, pp. 584–598, 2018.
9. U. Dorndorf, E. Pesch, and T. Phan-Huy, “Constraint propagation and problem decomposition: a preprocessing procedure for the job shop problem,” Annals of Operations Research, vol. 115, no. 1–4, pp. 125–145, 2002.
10. Á. E. Eiben, R. Hinterding, and Z. Michalewicz, “Parameter control in evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 124–141, 1999.
11. J. J. Grefenstette, “Optimization of control parameters for genetic algorithms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 16, no. 1, pp. 122–128, 1986.
12. S.-J. Wu and P.-T. Chow, “Genetic algorithms for nonlinear mixed discrete-integer optimization problems via meta-genetic parameter optimization,” Engineering Optimization, vol. 24, no. 2, pp. 137–159, 1995.
13. T. Brys, M. M. Drugan, and A. Nowé, “Meta-evolutionary algorithms and recombination operators for satisfiability solving in fuzzy logics,” in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1060–1067, IEEE, Cancun, Mexico, June 2013.
14. P. Cortez, M. Rocha, and J. Neves, “A meta-genetic algorithm for time series forecasting,” in Proceedings of the Workshop on Artificial Intelligence Techniques for Financial Time Series Analysis, 10th Portuguese Conference on Artificial Intelligence (EPIA 2001), pp. 21–31, Porto, Portugal, December 2001.
15. P. Pongchairerks and V. Kachitvichyanukul, “A two-level particle swarm optimisation algorithm on job-shop scheduling problems,” International Journal of Operational Research, vol. 4, no. 4, pp. 390–411, 2009.
16. C. Bierwirth and D. C. Mattfeld, “Production scheduling and rescheduling with genetic algorithms,” Evolutionary Computation, vol. 7, no. 1, pp. 1–17, 1999.
17. J. F. Gonçalves, J. J. Mendes, and M. G. C. Resende, “A hybrid genetic algorithm for the job shop scheduling problem,” European Journal of Operational Research, vol. 167, no. 1, pp. 77–95, 2005.
18. R. Cheng, M. Gen, and Y. Tsujimura, “A tutorial survey of job-shop scheduling problems using genetic algorithms – I. Representation,” Computers & Industrial Engineering, vol. 30, no. 4, pp. 983–997, 1996.
19. M. Gen and R. Cheng, Genetic Algorithms and Engineering Design, John Wiley & Sons, New York, NY, USA, 1997.
20. C. Bierwirth, “A generalized permutation approach to job shop scheduling with genetic algorithms,” OR Spectrum, vol. 17, no. 2-3, pp. 87–92, 1995.
21. J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, San Francisco, CA, USA, 2001.
22. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
23. X.-S. Yang and X. He, “Firefly algorithm: recent advances and applications,” International Journal of Swarm Intelligence, vol. 1, no. 1, pp. 36–50, 2013.
24. X.-S. Yang and S. Deb, “Engineering optimisation by Cuckoo search,” International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
25. M. Neshat, G. Sepidnam, M. Sargolzaei, and A. N. Toosi, “Artificial fish swarm algorithm: a survey of the state-of-the-art, hybridization, combinatorial and indicative applications,” Artificial Intelligence Review, vol. 42, no. 4, pp. 965–997, 2014.
26. Z.-X. Zheng, J.-Q. Li, and P.-Y. Duan, “Optimal chiller loading by improved artificial fish swarm algorithm for energy saving,” Mathematics and Computers in Simulation, vol. 155, pp. 227–243, 2019.
27. H. R. Lourenço, “Job-shop scheduling: computational study of local search and large-step optimization methods,” European Journal of Operational Research, vol. 83, no. 2, pp. 347–364, 1995.
28. D. Sun and L. Lin, “A dynamic job shop scheduling framework: a backward approach,” International Journal of Production Research, vol. 32, no. 4, pp. 967–985, 1994.
29. L. Özdamar, “A genetic algorithm approach to a general category project scheduling problem,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 29, no. 1, pp. 44–59, 1999.
30. V. K. Ganesan, A. I. Sivakumar, and G. Srinivasan, “Hierarchical minimization of completion time variance and makespan in jobshops,” Computers & Operations Research, vol. 33, no. 5, pp. 1345–1367, 2006.
31. P. Pongchairerks, “Efficient local search algorithms for job-shop scheduling problems,” International Journal of Mathematics in Operational Research, vol. 9, no. 2, pp. 258–277, 2016.
32. M. Moonen and G. K. Janssens, “Giffler-Thompson focused genetic algorithm for the static job-shop scheduling problem,” Journal of Information and Computational Science, vol. 4, no. 2, pp. 629–642, 2007.
33. P. Pongchairerks, “Particle swarm optimization algorithm applied to scheduling problems,” ScienceAsia, vol. 35, no. 1, pp. 89–94, 2009.
34. P. Pongchairerks, “A self-tuning PSO for job-shop scheduling problems,” International Journal of Operational Research, vol. 19, no. 1, pp. 96–113, 2014.
35. D. Petrovic, E. Castro, S. Petrovic, and T. Kapamara, “Radiotherapy scheduling,” in Automated Scheduling and Planning, vol. 505 of Studies in Computational Intelligence, pp. 155–189, Springer, Berlin, Germany, 2013.
36. Y. Crama, A. W. J. Kolen, and E. J. Pesch, “Local search in combinatorial optimization,” in Artificial Neural Networks, vol. 931 of Lecture Notes in Computer Science, pp. 157–174, Springer, Berlin, Germany, 1995.
37. J. B. Orlin, A. P. Punnen, and A. S. Schulz, “Approximate local search in combinatorial optimization,” SIAM Journal on Computing, vol. 33, no. 5, pp. 1201–1214, 2004.
38. W. Michiels, E. Aarts, and J. Korst, Theoretical Aspects of Local Search, Springer, Berlin, Germany, 2007.
39. E. Pesch, Learning in Automated Manufacturing: A Local Search Approach, Physica-Verlag, Heidelberg, Germany, 1994.
40. M. den Besten, T. Stützle, and M. Dorigo, “Design of iterated local search algorithms: an example application to the single machine total weighted tardiness problem,” in Applications of Evolutionary Computing, vol. 2037 of Lecture Notes in Computer Science, pp. 441–451, Springer, Berlin, Germany, 2001.
41. M. F. Tasgetiren, O. Buyukdagli, Q.-K. Pan, and P. N. Suganthan, “A general variable neighborhood search algorithm for the no-idle permutation flowshop scheduling problem,” in Swarm, Evolutionary, and Memetic Computing, vol. 8297 of Lecture Notes in Computer Science, pp. 24–34, Springer, Cham, Switzerland, 2013.
42. H. R. Lourenço, O. C. Martin, and T. Stützle, “A beginner’s introduction to iterated local search,” in Proceedings of the 4th Metaheuristics International Conference, pp. 1–6, Porto, Portugal, 2001.
43. P. Pongchairerks and V. Kachitvichyanukul, “A non-homogenous particle swarm optimization with multiple social structures,” in Proceedings of the 2005 International Conference on Simulation and Modeling, pp. 132–136, Nakornpathom, Thailand, January 2005.
44. H. Fisher and G. L. Thompson, “Probabilistic learning combinations of local job-shop scheduling rules,” in Industrial Scheduling, J. F. Muth and G. L. Thompson, Eds., pp. 225–251, Prentice-Hall, Englewood Cliffs, NJ, USA, 1963.
45. S. Lawrence, “Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques (supplement),” Graduate School of Industrial Administration, ORNL/Sub-7654/1, Carnegie Mellon University, Pittsburgh, PA, USA, 1984.
46. D. Applegate and W. Cook, “A computational study of the job-shop scheduling problem,” ORSA Journal on Computing, vol. 3, no. 2, pp. 149–156, 1991.
47. J. E. Beasley, “Job shop scheduling,” OR-Library, 2004, http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/jobshop1.txt.

Copyright © 2019 Pisut Pongchairerks. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

