Abstract

The job shop scheduling problem (JSP) is one of the most difficult optimization problems in the manufacturing industry, and the flexible job shop scheduling problem (FJSP) is an extension of the classical JSP that further challenges algorithm performance. In FJSP, a machine must be selected for each operation from a given set, which introduces another decision element within the job path and makes FJSP more difficult than the traditional JSP. In this paper, a variant of the grasshopper optimization algorithm (GOA) named dynamic opposite learning assisted GOA (DOLGOA) is proposed to solve FJSP. The recently proposed dynamic opposite learning (DOL) strategy adopts an asymmetric search space to improve the exploitation ability of the algorithm and increase the possibility of finding the global optimum. Various popular benchmarks from CEC 2014 and FJSP instances are used to evaluate the performance of DOLGOA. Numerical results with comparisons against other classic algorithms show that DOLGOA achieves clear improvements in solving global optimization problems and performs well when solving FJSP.

1. Introduction

Global optimization has become an important topic in scientific research and engineering applications. Neural network training [1, 2], path planning [3, 4], industrial design [5, 6], and many other complex problems rely on optimization algorithms to find optimal solutions. Global optimization algorithms mainly fall into two classes: deterministic mathematical programming methods [7–10] and metaheuristic algorithms (MAs) [11–14]. MAs have a simple structure and strong adaptability, which is very advantageous in solving complex problems [15, 16]. In recent decades, inspired by nature, MAs with many different characteristics have been proposed. Glover et al. proposed tabu search (TS) [17] by simulating the human memory function. The simulated annealing algorithm (SA) [18] was proposed by simulating the industrial annealing process of high-temperature objects. Based on Darwin's theory of evolution and Mendel's genetics, John Holland first proposed the genetic algorithm (GA) [19] in the 1970s. Dorigo and Stützle proposed ant colony optimization (ACO) [20] based on the study of ant colony behavior. Inspired by the foraging behavior of bird flocks, the American psychologist Kennedy and electrical engineer Eberhart proposed particle swarm optimization (PSO) in 1995 [13]. In addition to these classical algorithms, some new MAs have been proposed in recent years, such as the grey wolf optimizer (GWO) [21], moth flame optimization (MFO) [22], the firefly algorithm (FA) [23], teaching-learning-based optimization (TLBO) [24, 25], competitive swarm optimization (CSO) [26, 27], the dragonfly algorithm (DA) [28], the whale optimization algorithm (WOA) [29], and pigeon-inspired optimization [30].

The grasshopper optimization algorithm (GOA) is a recently proposed MA. It was inspired by the foraging behavior of grasshoppers and proposed by Saremi et al. in 2017 [31]. The algorithm has a simple principle and distinctive features and has demonstrated good performance in solving optimization problems. Soon after the algorithm was proposed, scholars applied it to different optimization problems. Mirjalili et al. applied GOA to multiobjective optimization problems in 2017 [32]. Tumuluru and Ravi used GOA to update the weights of a deep belief neural network and applied it to cancer classification in 2017 [33]. Aljarah et al. combined GOA with support vector machine optimization for solving feature selection tasks in 2018 [34].

In addition to the original GOA, many variants have emerged to improve the algorithm's performance. In 2017, Wu et al. proposed an adaptive grasshopper optimization algorithm (AGOA) and applied it to the UAV tracking trajectory optimization problem [35]. In 2018, Ewees et al. proposed an improved version of the grasshopper optimization algorithm based on the opposition-based learning strategy, called OBLGOA, for solving benchmark optimization functions and engineering problems [36]. Although these existing methods have greatly improved GOA, the algorithm still has a considerable probability of falling into local optima. This paper adopts the DOL strategy proposed by Xu et al. [37] to improve the optimization performance of GOA. The search space of the DOL strategy is asymmetric and dynamic, which greatly improves the probability that GOA obtains the optimal solution and, to some extent, prevents it from falling into local optima.

The job shop scheduling problem (JSP) [38] is a famous NP-hard problem in discrete manufacturing systems, and the flexible job shop scheduling problem (FJSP) is an extension of the classical JSP [39, 40]. In FJSP, there may be more than one candidate machine for the same operation, so two subproblems must be considered when solving FJSP: machine selection (MS) and operation sequencing (OS) [41]. Although FJSP adds only one step to JSP, namely, assigning one of a set of optional machines to each operation, it is much more difficult to obtain an optimal solution to FJSP in polynomial time. Intelligent optimization algorithms can obtain high-quality approximate solutions in a short time, which is why an increasing number of them are used to solve FJSP. The literature shows that many swarm intelligence optimization algorithms, such as GA [42–44], TS [45, 46], and the artificial bee colony algorithm (ABC) [47, 48], have been successfully applied to FJSP with good results. Though numerous approaches have been proposed, the complexity of FJSP still calls for more competitive solvers. In order to further test the performance of DOLGOA, we use the DOLGOA algorithm to solve FJSP.

The main contributions of this paper are as follows:
(1) An encoding scheme for FJSP is discussed in detail, and the mathematical model and constraints are given.
(2) A new variant of GOA adopting a dynamic opposite learning strategy is proposed to improve the exploitation ability of GOA and increase the possibility of finding the global optimum.
(3) Various popular benchmarks from CEC 2014 and FJSP instances are used to evaluate the performance of DOLGOA by comparison with other classic algorithms.

In this paper, we introduce the background in Section 1. The problem description of FJSP is discussed in Section 2, followed by the GOA algorithm and the DOL strategy, which are demonstrated in Section 3. Then, DOLGOA is proposed in Section 4. The experiments and discussion, with comparisons to other algorithms, are given in Section 5. Finally, Section 6 concludes the paper.

2. Problem Formulation

In the FJSP, we assume that there are $n$ jobs to be processed on $m$ usable machines in a workshop, where each job contains multiple operations and each operation can be processed by a set of specified machines. Each machine can serve only one operation at a time, and the purpose of optimization is to assign machines to operations according to the process requirements so as to minimize the maximal completion time $C_{\max}$. In order to express the problem more clearly, we use the following notations:

$J_i$: the $i$th job
$O_{ij}$: the $j$th operation of job $J_i$
$M_k$: the $k$th machine
$n_i$: the total number of operations of job $J_i$
$B_{ijk}$: beginning time of operation $O_{ij}$ of job $J_i$ on machine $M_k$
$P_{ijk}$: processing time of operation $O_{ij}$ of job $J_i$ on machine $M_k$
$C_{ijk}$: completion time of operation $O_{ij}$ of job $J_i$ on machine $M_k$
$C_i$: completion time of job $J_i$
$C_{\max}$: maximal completion time

Objective function: minimize $C_{\max} = \max_{1 \le i \le n} C_i$.

The constraints of FJSP can be expressed as follows:
(i) All machines are available at time zero, and all jobs can start at time zero: $B_{ijk} \ge 0$.
(ii) The processing time of each operation on its corresponding machine is definite, and transportation time is ignored: $C_{ijk} = B_{ijk} + P_{ijk}$.
(iii) Jobs are independent of each other and cannot be inserted or cancelled by force.
(iv) Each machine can only process one job at a time.
(v) Each operation of a job can only be processed by one machine at a time.
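To make the model concrete, the following minimal Python sketch evaluates $C_{\max}$ for a fixed machine assignment on a toy instance with two jobs and two machines; the instance data, variable names, and the greedy job-order release rule are illustrative assumptions, not taken from the paper.

```python
# A toy FJSP instance: proc[(i, j)][k] = processing time P_ijk of
# operation O_ij on machine M_k. Data are illustrative, not from the paper.
proc = {
    (0, 0): {0: 3, 1: 5},   # job 0, operation 0 can run on machine 0 or 1
    (0, 1): {1: 2},         # job 0, operation 1 only on machine 1
    (1, 0): {0: 4, 1: 4},
    (1, 1): {0: 3},
}

# A candidate machine assignment: operation -> chosen machine
assign = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def makespan(assign, proc, n_ops=(2, 2)):
    """Compute C_max, releasing operations greedily in job order."""
    job_ready = [0.0] * len(n_ops)   # completion time C_i of each job so far
    mach_ready = {}                  # completion time of each machine so far
    for i, n in enumerate(n_ops):
        for j in range(n):
            k = assign[(i, j)]
            start = max(job_ready[i], mach_ready.get(k, 0.0))   # B_ijk
            job_ready[i] = start + proc[(i, j)][k]              # C_ijk
            mach_ready[k] = job_ready[i]
    return max(job_ready)            # C_max = max_i C_i

print(makespan(assign, proc))        # 12.0 with this assignment
```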

3. Algorithm Preliminaries

3.1. Grasshopper Optimization Algorithm

GOA is a population-based intelligent optimization algorithm inspired by the foraging behavior of grasshopper swarms. The grasshoppers' positions constitute the solution space, and the optimal grasshopper's position corresponds to the optimal solution. Grasshoppers keep moving during the foraging process, so their positions are constantly changing. During each iteration, the optimal position is updated whenever a better position is found, until the termination condition is satisfied.

The mathematical model employed to simulate the swarming behavior of grasshoppers is shown as follows:

$$X_i = S_i + G_i + A_i, \tag{6}$$

where $X_i$ defines the position of the $i$th grasshopper, $S_i$ is the social interaction, $G_i$ is the influence of gravity on the $i$th grasshopper, and $A_i$ is the influence of wind on the $i$th grasshopper. However, in order to provide random behavior, the equation can be written as $X_i = r_1 S_i + r_2 G_i + r_3 A_i$, where $r_1$, $r_2$, and $r_3$ are random numbers in $[0, 1]$:

$$S_i = \sum_{\substack{j=1 \\ j \neq i}}^{N} s\left(d_{ij}\right) \hat{d}_{ij}, \tag{7}$$

where $d_{ij}$ is the distance between the $i$th and the $j$th grasshopper, calculated as $d_{ij} = |x_j - x_i|$, $s$ is a function that defines the strength of social forces, which is shown in equation (8), and $\hat{d}_{ij} = (x_j - x_i)/d_{ij}$ is a unit vector from the $i$th grasshopper to the $j$th grasshopper.

The function $s$, which defines the strength of social forces, is calculated as follows:

$$s(r) = f e^{-r/l} - e^{-r}, \tag{8}$$

where $f$ is the strength of attraction and $l$ is the attractive length scale.

The $G_i$ component in equation (6) is calculated as follows:

$$G_i = -g\,\hat{e}_g, \tag{9}$$

where $g$ is the gravitational constant and $\hat{e}_g$ is a unit vector towards the center of the Earth.

The $A_i$ component in equation (6) is calculated as follows:

$$A_i = u\,\hat{e}_w, \tag{10}$$

where $u$ is a drift constant and $\hat{e}_w$ is a unit vector along the direction of the wind.

Substituting $S_i$, $G_i$, and $A_i$ into equation (6), the equation can be expanded as follows:

$$X_i = \sum_{\substack{j=1 \\ j \neq i}}^{N} s\left(\left|x_j - x_i\right|\right) \frac{x_j - x_i}{d_{ij}} - g\,\hat{e}_g + u\,\hat{e}_w, \tag{11}$$

where $s(r) = f e^{-r/l} - e^{-r}$ and $N$ is the number of grasshoppers.

However, this mathematical model cannot be used directly to solve optimization problems, mainly because the grasshoppers quickly reach their comfort zone and the swarm does not converge to a specified point. In order to solve optimization problems, a modified form of this equation is used:

$$X_i^d = c \left( \sum_{\substack{j=1 \\ j \neq i}}^{N} c\, \frac{ub_d - lb_d}{2}\, s\left(\left|x_j^d - x_i^d\right|\right) \frac{x_j - x_i}{d_{ij}} \right) + \hat{T}_d, \tag{12}$$

where $ub_d$ is the upper bound of the $d$th dimension, $lb_d$ is the lower bound of the $d$th dimension, $s(r) = f e^{-r/l} - e^{-r}$, $\hat{T}_d$ is the value of the $d$th dimension of the target (best solution found so far), and $c$ is a decreasing coefficient that shrinks the comfort zone, repulsion zone, and attraction zone, as defined in equation (13). Note that the summation term is almost similar to the $S_i$ component in equation (6); however, we do not consider gravity (no $G_i$ component) and assume that the wind direction ($A_i$ component) is always toward the target $\hat{T}_d$. According to equation (12), the next position of a grasshopper is determined by its current position, the position of the target, and the positions of all other grasshoppers:

$$c = c_{\max} - t\, \frac{c_{\max} - c_{\min}}{T_{\max}}, \tag{13}$$

where $c_{\max}$ is the maximum value, $c_{\min}$ is the minimum value, $t$ is the current iteration number, and $T_{\max}$ is the maximum iteration number. In this paper, we take 1 and 0.00001 for $c_{\max}$ and $c_{\min}$, respectively.
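As an illustration of equations (8), (12), and (13), the following Python sketch performs one GOA position update over a population; the values $f = 0.5$ and $l = 1.5$ follow common GOA defaults and, together with all function and variable names, should be treated as assumptions rather than the paper's exact implementation.

```python
import numpy as np

def s(r, f=0.5, l=1.5):
    """Strength of social forces, equation (8): s(r) = f*exp(-r/l) - exp(-r)."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa_step(X, target, lb, ub, t, t_max, c_max=1.0, c_min=1e-5):
    """One GOA position update over an (N, D) population X (equation (12))."""
    c = c_max - t * (c_max - c_min) / t_max        # equation (13)
    N, D = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        social = np.zeros(D)
        for j in range(N):
            if j == i:
                continue
            dist = np.linalg.norm(X[j] - X[i])
            unit = (X[j] - X[i]) / (dist + 1e-12)  # unit vector toward j
            # the inner c shrinks the comfort/repulsion/attraction zones
            social += c * (ub - lb) / 2.0 * s(dist) * unit
        X_new[i] = c * social + target             # wind always toward target
    return np.clip(X_new, lb, ub)                  # keep grasshoppers in bounds
```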

3.2. Opposition-Based Learning (OBL)

Opposition-based learning (OBL) [49] is one of the most successful learning strategies and is widely used to enhance the search capability of population-based algorithms. When searching for the solution $X$ of a given problem, OBL makes the candidates more likely to approach the optimal solution by simultaneously evaluating the current solution $X$ and its opposite solution $X^O$.

3.2.1. Opposite Number

$X$ is a real number in $[a, b]$, where $a$ and $b$ are the boundaries of $X$. $X^O$ is the opposite number of $X$, which can be defined as follows:

$$X^O = a + b - X. \tag{14}$$

3.2.2. Opposite Point

Assume that $X = (X_1, X_2, \ldots, X_D)$ is a point in a $D$-dimensional space and $X_j \in [a_j, b_j]$, where $j = 1 : D$ and $a_j$ and $b_j$ are the low and high boundaries of the current population, respectively, which change with the iterations. The $D$-dimensional opposite point is defined as follows:

$$X_j^O = a_j + b_j - X_j, \quad j = 1 : D. \tag{15}$$
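A small numerical sketch of equations (14) and (15) in Python (all values are illustrative):

```python
import numpy as np

# Opposite number (equation (14)): X in [a, b] = [-10, 10]
a, b = -10.0, 10.0
x = 3.5
x_opp = a + b - x                      # (-10) + 10 - 3.5 = -3.5

# Opposite point (equation (15)), applied per dimension with the
# current population boundaries a_j and b_j
X = np.array([1.0, -2.0, 4.0])
lo = np.array([-5.0, -5.0, 0.0])       # a_j
hi = np.array([5.0, 5.0, 8.0])         # b_j
X_opp = lo + hi - X                    # [-1.0, 2.0, 4.0]
```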

3.3. Dynamic-Opposite Learning (DOL)

On the basis of the OBL strategy, new ideas such as quasi-opposite-based learning (QOBL) [50] and quasi-reflection-based learning (QRBL) [51] have been proposed, and the quasi-opposite number in QOBL and the quasi-reflection number in QRBL are both closer to the global optimum than the opposite number in OBL. However, if there is a local optimum in the search space, all of these strategies, including OBL, QOBL, and QRBL, tend to drive the search toward that local optimum. Concerning this problem, a dynamic opposite learning (DOL) strategy was proposed in [37], which can improve the probability of convergence to the global optimum while avoiding entrapment in local optima.

In order to avoid becoming trapped in a local optimum lying in the space between the current number and its opposite number in OBL theory, a random opposite number $X^{RO} = r\,X^O$ was proposed, where $r$ is a random number in $[0, 1]$. We can see that $X^{RO}$ can lie not only between $X$ and $X^O$ but can also be greater or less than $X$ or $X^O$. The DOL strategy is formulated by randomly selecting a DOL number $X^{DO}$ between $X$ and $X^{RO}$. It is necessary to check whether the number obtained by DOL respects the boundaries: if $X^{DO}$ exceeds the range of $[a, b]$, it should be reset as a random number between $a$ and $b$. Although $X^{RO}$ can enhance the diversity of the search space, the search space may shrink along with the iterations, which will lead to a deterioration of the exploitation capability. In light of this, a positive weighting factor $w$ is used to balance the region and diversity of the search space. The mathematical model of DOL can be described as follows.

3.3.1. Dynamic Opposite Number

$X$ is a real number in $[a, b]$, where $a$ and $b$ are the boundaries of $X$. The dynamic opposite number $X^{DO}$ can be defined as in equation (16), where $r_1$ and $r_2$ are random numbers between 0 and 1. Moreover, $w$ is a positive weighting factor, and $X^O$ is the opposite number defined in equation (14):

$$X^{DO} = X + w\,r_1\left(r_2\,X^O - X\right). \tag{16}$$

3.3.2. Dynamic Opposite Point

$X = (X_1, X_2, \ldots, X_D)$ is a point in a $D$-dimensional space and $X_j \in [a_j, b_j]$, where $j = 1 : D$ and $a_j$ and $b_j$ are the low and high boundaries of the current population, respectively, which change with the iterations. A $D$-dimensional dynamic opposite point is defined as follows:

$$X_j^{DO} = X_j + w\,r_1\left(r_2\,X_j^O - X_j\right), \quad j = 1 : D, \tag{17}$$

where $r_1$ and $r_2$ are random numbers between 0 and 1, $w$ is a positive weighting factor, and $X_j^O$ is the opposite point defined as in equation (15).

3.3.3. DOL-Based Optimization

Assume that $X = (X_1, X_2, \ldots, X_D)$ is a point in a $D$-dimensional space and $X_j \in [a_j, b_j]$, where $j = 1 : D$ and $a_j$ and $b_j$ are the low and high boundaries of the current population, respectively, which change with the iterations. The dynamic opposite point $X^{DO}$ is defined according to equation (17) and is recomputed in each generation. $X$ should be replaced by $X^{DO}$ if the fitness of $X^{DO}$ is better than that of $X$; otherwise, $X$ stays the same. It is important to note that $X_j^{DO}$ should lie in $[a_j, b_j]$; otherwise, $X_j^{DO}$ needs to be redefined as a random number in this interval.
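The following Python sketch combines equations (15)–(17) with the greedy selection rule described above, assuming a minimization problem; the function name and the random-reset rule for out-of-range components are our reading of this subsection, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dol_update(X, lo, hi, w, f):
    """Return X or its dynamic opposite point, whichever has better fitness.

    X is a 1-D candidate, lo/hi are the per-dimension boundaries a_j and b_j,
    w is the weighting factor, and f is a fitness function to be minimized.
    """
    r1, r2 = rng.random(), rng.random()
    X_opp = lo + hi - X                       # opposite point, equation (15)
    X_do = X + w * r1 * (r2 * X_opp - X)      # dynamic opposite, equation (17)
    # out-of-range components are reset to random values in [lo, hi]
    out = (X_do < lo) | (X_do > hi)
    X_do[out] = lo[out] + rng.random(out.sum()) * (hi[out] - lo[out])
    return X_do if f(X_do) < f(X) else X
```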

4. DOL-Based GOA Algorithm

The conventional GOA algorithm has limited exploitation ability, which calls for novel strategies to enhance solution diversity. In this section, an algorithm named DOLGOA is proposed, which applies the DOL strategy to GOA to accelerate convergence and prevent the algorithm from falling into local optima. The DOL strategy contributes to GOA in two aspects: population initialization and generation jumping.

4.1. DOL Population Initialization

The DOL initialization is shown as follows:

$$X_{i,j}^{DO} = X_{i,j} + w\,r_1\left(r_2\left(a_j + b_j - X_{i,j}\right) - X_{i,j}\right), \quad i = 1 : N,\; j = 1 : D, \tag{18}$$

where $X$ is the randomly generated initial population and $X^{DO}$ is the population obtained by the DOL strategy. $N$ is the population size, $i$ represents the $i$th individual, $j$ represents the $j$th dimension, and $r_1$ and $r_2$ are two random numbers in $[0, 1]$. In DOL initialization, the weighting factor $w$ is set as 1.

To ensure the effectiveness of DOL, boundary detection is also required:

$$X_{i,j}^{DO} = a_j + r\left(b_j - a_j\right), \quad \text{if } X_{i,j}^{DO} < a_j \text{ or } X_{i,j}^{DO} > b_j, \tag{19}$$

where $r$ is a random number in $[0, 1]$.

In the initialization step, the $N$ fittest individuals are selected from the new population consisting of $X$ and $X^{DO}$.
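A vectorized Python sketch of the DOL initialization step (equations (18) and (19)), assuming a minimization problem with a vectorized fitness function f; the names and the reset rule are our own reading of the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def dol_init(N, D, lo, hi, f, w=1.0):
    """DOL population initialization (equations (18)-(19)), minimizing f."""
    X = lo + rng.random((N, D)) * (hi - lo)         # random initial population
    r1, r2 = rng.random((N, 1)), rng.random((N, 1))
    X_do = X + w * r1 * (r2 * (lo + hi - X) - X)    # equation (18)
    out = (X_do < lo) | (X_do > hi)                 # boundary detection (19)
    X_do[out] = (np.broadcast_to(lo, X_do.shape)[out]
                 + rng.random(out.sum())
                 * np.broadcast_to(hi - lo, X_do.shape)[out])
    union = np.vstack([X, X_do])                    # population {X} U {X^DO}
    return union[np.argsort(f(union))[:N]]          # keep the N fittest
```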

4.2. DOL Generation Jumping

In each iteration, if the selection probability (a random number in $[0, 1]$) is smaller than the jump rate $J_r$, the population will be updated through the DOL strategy. The DOL jump process is shown as follows:

$$X_{i,j}^{DO} = X_{i,j} + w\,r_1\left(r_2\left(a_j + b_j - X_{i,j}\right) - X_{i,j}\right), \tag{20}$$

where $X_{i,j}^{DO}$ also needs to meet the boundary conditions of equation (19). In order to lock the new candidates generated by DOL into a smaller search space, we dynamically update the boundaries as follows:

$$a_j = \min_i X_{i,j}, \quad b_j = \max_i X_{i,j}. \tag{21}$$

In the DOL generation jumping step, the $N$ fittest individuals are selected from the new population consisting of $X$ and $X^{DO}$.
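The generation jumping step can be sketched in the same style; here the dynamic boundaries of equation (21) are taken as the per-dimension minimum and maximum of the current population, which is our reading of the DOL literature:

```python
import numpy as np

rng = np.random.default_rng(2)

def dol_jump(X, f, w, jr):
    """DOL generation jumping (equations (20)-(21)), minimizing f."""
    if rng.random() >= jr:                   # jump only with probability Jr
        return X
    lo, hi = X.min(axis=0), X.max(axis=0)    # dynamic boundaries, equation (21)
    r1, r2 = rng.random((X.shape[0], 1)), rng.random((X.shape[0], 1))
    X_do = X + w * r1 * (r2 * (lo + hi - X) - X)     # equation (20)
    out = (X_do < lo) | (X_do > hi)
    X_do[out] = (np.broadcast_to(lo, X.shape)[out]
                 + rng.random(out.sum())
                 * np.broadcast_to(hi - lo, X.shape)[out])
    union = np.vstack([X, X_do])
    return union[np.argsort(f(union))[:X.shape[0]]]  # keep the N fittest
```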

4.3. DOLGOA Algorithm Steps

With DOL population initialization and generation jumping added, a new GOA variant, named dynamic opposite learning assisted GOA (DOLGOA), is obtained. The steps of DOLGOA are described in Algorithm 1, and the overall procedure is shown in Figure 1.

(1) Randomly generate an initial population $X$;
(2) for ($i = 1$; $i \le N$; $i = i + 1$) do
(3)   $r_1 = \mathrm{rand}(0, 1)$, $r_2 = \mathrm{rand}(0, 1)$;
(4)   for ($j = 1$; $j \le D$; $j = j + 1$) do
(5)     $X_{i,j}^{DO} = X_{i,j} + w\,r_1\left(r_2\left(a_j + b_j - X_{i,j}\right) - X_{i,j}\right)$;
(6)     Check the boundaries;
(7)   end for
(8) end for
(9) Select $N$ number of the fittest individuals from $\{X \cup X^{DO}\}$;
(10) Set $t = 1$;
(11) while $t \le$ maximal iteration do
(12)   Evaluate all individuals by the fitness function $f(X)$;
(13)   if $t = 1$ then
(14)     Sort individuals by the fitness value to get the best grasshopper $\hat{T}$ in the first population;
(15)   else
(16)     for ($i = 1$; $i \le N$; $i = i + 1$) do
(17)       Update the position of the individuals according to the update mechanism (equation (12));
(18)       Check the boundaries;
(19)       Evaluate the fitness values of the new individuals $f(X_i^{new})$;
(20)       if $f(X_i^{new}) < f(\hat{T})$ then
(21)         Replace $\hat{T}$ with $X_i^{new}$;
(22)       end if
(23)     end for
(24)   end if
(25)   $t = t + 1$;
(26)   if rand $< J_r$ then
(27)     for ($i = 1$; $i \le N$; $i = i + 1$) do
(28)       $r_1 = \mathrm{rand}(0, 1)$, $r_2 = \mathrm{rand}(0, 1)$;
(29)       for ($j = 1$; $j \le D$; $j = j + 1$) do
(30)         $a_j = \min_i X_{i,j}$, $b_j = \max_i X_{i,j}$;
(31)         $X_{i,j}^{DO} = X_{i,j} + w\,r_1\left(r_2\left(a_j + b_j - X_{i,j}\right) - X_{i,j}\right)$;
(32)         Check boundaries;
(33)       end for
(34)     end for
(35)     Select $N$ number of the fittest individuals from $\{X \cup X^{DO}\}$;
(36)     $t = t + 1$;
(37)   end if
(38) end while
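Putting the pieces together, the following compact Python sketch mirrors the structure of Algorithm 1, reusing the helper functions sketched in the previous sections (dol_init, goa_step, dol_jump); the parameter values in the example call are placeholders rather than the tuned settings of Section 5.2.

```python
import numpy as np

def dolgoa(f, lo, hi, N=30, D=30, t_max=500, w=1.0, jr=0.5):
    """End-to-end DOLGOA sketch mirroring Algorithm 1 (minimization)."""
    fitness = lambda P: np.apply_along_axis(f, 1, P)
    X = dol_init(N, D, lo, hi, fitness, w=1.0)      # DOL initialization
    target = X[np.argmin(fitness(X))].copy()        # best grasshopper T
    for t in range(1, t_max + 1):
        X = goa_step(X, target, lo, hi, t, t_max)   # GOA update, equation (12)
        i_best = np.argmin(fitness(X))
        if f(X[i_best]) < f(target):                # update the target
            target = X[i_best].copy()
        X = dol_jump(X, fitness, w, jr)             # DOL generation jumping
    return target, f(target)

# Example: minimize the sphere function on [-100, 100]^10
best, val = dolgoa(lambda x: np.sum(x ** 2),
                   lo=np.full(10, -100.0), hi=np.full(10, 100.0),
                   N=20, D=10, t_max=100)
```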
4.4. Encoding and Decoding

To solve FJSP with DOLGOA, we first encode the problem. In the FJSP, we define three variables $MA$, $BT$, and $CT$ (machine assignment, beginning time, and completion time), all of which are matrices whose rows correspond to jobs and whose columns correspond to operations: the number of rows equals the total number of jobs, and the number of columns equals the maximum number of operations per job.

The specific meanings of these three variables are as follows:
(i) $MA$ represents the machine number used for each operation, and the initial value of each entry of $MA$ is randomly selected from the set of alternative machines of the corresponding operation.
(ii) $BT$ represents the beginning time of each operation, and $CT$ represents the completion time of each operation. The values of $BT$ and $CT$ depend on $MA$.
(iii) We treat each $MA$ as an individual and randomly generate individuals at initialization time to form a population of $N$ individuals.

In addition, the decoding process can be expressed as follows (a sketch is given after this list):
(1) For each individual, we calculate the beginning time and completion time of the first operation of each job.
(2) Then, we calculate the arrangement of the remaining operations, where the beginning time of an operation is the later of the completion time of its job predecessor and the completion time of the previous operation processed on the same machine, and the completion time of each operation equals the sum of its beginning time and processing time.
(3) Calculate the maximum completion time $C_{\max}$ of each individual, which equals the completion time of the last operation of the whole schedule.
(4) With the goal of minimizing the maximum completion time, DOLGOA is used to select the optimal individual, that is, the optimal machine allocation strategy.
(5) Calculate the $BT$ and $CT$ corresponding to each operation in the optimal strategy, and then draw the Gantt chart, where the number on the Gantt chart represents the job and $O_{ij}$ represents the $j$th operation of job $i$.
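The encoding and decoding steps can be sketched as follows; the instance data are illustrative, and the decoder is a simplified semi-active one that processes jobs row by row, which may order operations differently from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative instance: cand[i][j] = alternative machines of operation j of
# job i; ptime[i][j][k] = processing time on machine k.
cand = [[{0, 1}, {1}],
        [{0, 2}, {0, 1}]]
ptime = [[{0: 3, 1: 5}, {1: 2}],
         [{0: 4, 2: 6}, {0: 3, 1: 4}]]

def encode(cand):
    """Randomly pick one machine per operation: an initial MA individual."""
    return [[int(rng.choice(sorted(m))) for m in job] for job in cand]

def decode(MA, ptime):
    """Fill BT/CT row by row: an operation starts when its job predecessor
    and its assigned machine are both free (semi-active schedule)."""
    BT = [[0.0] * len(row) for row in MA]
    CT = [[0.0] * len(row) for row in MA]
    mach_free = {}
    for i, row in enumerate(MA):
        job_free = 0.0
        for j, k in enumerate(row):
            BT[i][j] = max(job_free, mach_free.get(k, 0.0))
            CT[i][j] = BT[i][j] + ptime[i][j][k]
            job_free = CT[i][j]
            mach_free[k] = CT[i][j]
    return BT, CT, max(r[-1] for r in CT)    # last value is C_max

MA = encode(cand)
BT, CT, cmax = decode(MA, ptime)
```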

5. Experiment and Discussion

5.1. Benchmark Functions

In this section, we use 23 benchmark test functions to evaluate the performance of DOLGOA. These 23 benchmark functions are derived from CEC 2014 and are shown in Table 1. They include 3 unimodal functions, 13 multimodal functions, 6 hybrid functions, and 1 composition function. The unimodal functions are used to test the exploitation capability of DOLGOA because they have no local optima in the search space. On the contrary, the multimodal functions have many local optima and are used to test the exploration capability of DOLGOA. In order to better mimic real search spaces, we also use hybrid and composition functions to test the performance of the algorithm.

5.2. Parameter Settings

The 23 test functions above were used to test and compare DOLGOA with four other algorithms, including GOA, GWO, TLBO, and Jaya. The parameter settings for these experiments are listed in Table 2.

For the DOLGOA algorithm, the weighting factor $w$ and the jump rate $J_r$ are two important parameters, and their analysis is shown in Table 3. We divide $w$ and $J_r$ into ten levels each and analyze their effects on the optimization results by using an orthogonal experiment [52]. For every pair of $w$ and $J_r$, we run the corresponding algorithm 10 times and record the average results in Table 3. The pair of $w$ and $J_r$ that yields the best result in Table 3 is adopted as the parameter setting in the following experiments.
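The parameter study can be reproduced in the style of the following sketch, which performs a full-factorial sweep (a simplification of the orthogonal design used in the paper) over ten levels of w and Jr, reusing the dolgoa sketch from Section 4.3; the benchmark function, dimensions, and budgets are illustrative.

```python
import itertools
import numpy as np

# Sweep ten levels of w and Jr, average 10 runs per pair, and pick the best
# combination, reusing the dolgoa sketch from Section 4.3.
w_levels = np.linspace(0.1, 1.0, 10)
jr_levels = np.linspace(0.1, 1.0, 10)
results = {}
for w, jr in itertools.product(w_levels, jr_levels):
    runs = [dolgoa(lambda x: np.sum(x ** 2),
                   lo=np.full(10, -100.0), hi=np.full(10, 100.0),
                   N=20, D=10, t_max=50, w=w, jr=jr)[1]
            for _ in range(10)]
    results[(w, jr)] = np.mean(runs)
best_w, best_jr = min(results, key=results.get)
```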

In addition, the total number of individuals for all tests is 100. The number of function evaluations (FEs) is set as 300,000, and all tests are executed 10 times.

5.3. Unimodal/Multimodal Test Functions and Their Analysis

Unimodal and multimodal functions are used to test the exploitation and exploration capabilities of DOLGOA. The mean and standard deviation values of the results on these functions are shown in Table 4.

It can be seen that DOLGOA and GOA are superior to the other algorithms on the unimodal test functions F1–F3. Especially on F1 and F3, DOLGOA improves greatly on the original GOA, which indicates that the DOL strategy can improve the exploitation of individuals and help them converge to the global optimum in an asymmetric space.

Multimodal functions contain multiple local optima and are used to test the exploration capability of DOLGOA under various local-optimum conditions. It can be seen from F4–F16 that, compared with the other algorithms, GOA and DOLGOA show better exploration capability, especially on F5, F12, and F13, where DOLGOA has the best performance.

5.4. Analysis of Hybrid and Composition Test Functions

The hybrid functions combine both unimodal and multimodal functions to better simulate real search spaces. When dealing with hybrid functions, the algorithm needs to balance its exploitation and exploration abilities, which places higher demands on it. The mean and standard deviation values for all hybrid and composition test functions are shown in Table 5.

As can be seen from the table, DOLGOA performs very well on the hybrid functions, especially on F17, F20, F21, and F22. On the composition function F23, the result of DOLGOA is no worse than those of the other algorithms, which indicates that DOLGOA can balance exploitation and exploration well in practical applications.

5.5. Statistical Test Results

Two-independent-sample t-tests are applied to examine whether there is a significant difference between the mean values of two samples. The t values are shown in Table 6, and the p values are shown in Table 7, where each result is marked as "+" or "−".

We set 0.05 as the level of significance, with the null hypothesis $H_0: \mu_1 = \mu_2$. When $p > 0.05$, $H_0$ is accepted and the case is marked as "+" in Table 7, which means there is no significant difference between the two algorithms. On the contrary, when $p \le 0.05$, $H_0$ is rejected and the case is marked as "−" in Table 7, which means the two algorithms are significantly different. In Table 7, "Same" denotes the number of functions on which DOLGOA is not significantly different from the compared algorithm, and "Better" denotes the number of functions on which DOLGOA is significantly different.
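For reference, the following sketch shows how such a test can be computed with SciPy on two sets of 10-run results; the sample values are illustrative, not the paper's data.

```python
import numpy as np
from scipy import stats

# Two-independent-sample t-test on 10-run results (values illustrative).
dolgoa_runs = np.array([1.2e-8, 3.4e-8, 2.1e-8, 0.9e-8, 1.8e-8,
                        2.5e-8, 1.1e-8, 2.9e-8, 1.6e-8, 2.2e-8])
goa_runs = np.array([4.1e-3, 3.8e-3, 5.2e-3, 4.7e-3, 3.9e-3,
                     4.4e-3, 5.0e-3, 4.2e-3, 4.8e-3, 4.5e-3])
t_stat, p_value = stats.ttest_ind(dolgoa_runs, goa_runs)
mark = "+" if p_value > 0.05 else "-"   # "+": no significant difference
print(t_stat, p_value, mark)
```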

From Tables 6 and 7, it can be seen that, for most test functions, DOLGOA shows significant difference from other algorithms, which means that DOL greatly improves the performance of GOA.

5.6. Analysis of Convergence

The convergence trends of all algorithms for unimodal and multimodal test functions are shown in Figures 2 and 3, respectively, and the convergence trends of all algorithms for hybrid function and composite function are shown in Figure 4.

As can be seen from the figures, the convergence results of DOLGOA are the best for most of the test functions, which indicates that DOLGOA has good exploration capability to avoid falling into local optima. The convergence curve of DOLGOA eventually tends to the optimal value, thanks to the exploitation capability brought by the DOL strategy.

5.7. Application to FJSP

In addition to the tests on numerical benchmark functions, DOLGOA is adopted to solve the FJSP to further evaluate its performance. The objective of the FJSP in this paper is to minimize the maximal completion time. We analyze the performance of DOLGOA by comparing it with GOA, PSO, Jaya, DE, GWO [53–56], HTS/TA, ITS, and ISA. Some of the data of these experiments are adopted from the literature [57]. These algorithms are implemented on 21 different problems, which are classified into three classes: small (SFJS01–SFJS10), medium (MFJS01–MFJS10), and large (LFJS01) size FJSP problems. The data of the small and medium size experiments are adopted from the literature [57], and the data of the large size experiment are adopted from the literature [58]. Table 2 shows the parameter settings of these optimization algorithms. To eliminate randomness, each problem is solved in 10 independent runs. Table 8 shows the experimental results of these algorithms on the small and medium size instances, where the highlighted entries are the best solutions of the corresponding problems.

It can be seen that DOLGOA obtains the best results for 14 problems. Although DOLGOA does not get the best results on the other medium size problems, its results do not differ much from the best ones. From Table 8, we can see that almost all algorithms can reach the best result for small size FJSP instances, but as the scale of the problem grows, the advantage of DOLGOA becomes more apparent. For the 10 medium size FJSPs, DOLGOA obtains the best results on four of them, and its results are much better than those of GOA. This demonstrates that the DOL strategy improves the exploitation capability of the algorithm, which makes DOLGOA more effective for solving FJSP. Figure 5 shows the mean convergence curve of the makespan on MFJS03 for the six algorithms over 10 independent runs, and Figure 6 shows the corresponding curve on MFJS09; it can be seen that DOLGOA obtains the best result. Figures 7 and 8 illustrate the Gantt charts of the optimal solutions obtained by DOLGOA in a featured run for problems MFJS03 and MFJS10, respectively. We can see that the shortest completion times found by DOLGOA are 489 and 1517 for MFJS03 and MFJS10, respectively. Figure 9 shows the convergence curve of the average results of ten runs obtained by DOLGOA for the large size experiment. In addition, Figure 10 shows the Gantt chart of the optimal solution obtained by DOLGOA for the large size experiment. In the large size problem LFJS01, there are 15 jobs to be processed by 10 machines in a workshop, where each job consists of 2–4 operations and each operation can only be completed by several specified machines. It can be seen that DOLGOA can assign a machine from a set of alternatives to each specified operation to obtain the best solution. As shown in Figure 10, the optimal solution obtained by DOLGOA for LFJS01 is 31. In Table 9, we list the computational times of these 6 algorithms for solving FJSP; all simulations run on an Intel(R) Core(TM) i5-9400F CPU @ 2.90 GHz PC with the Matlab(R) 2019a software platform.

6. Conclusion

In this paper, a new variant of GOA named DOLGOA is proposed, which embeds the DOL strategy into GOA to prevent it from falling into local optima. The asymmetric and dynamic nature of DOL gives DOLGOA better exploitation and exploration capabilities than GOA. The performance of the proposed DOLGOA algorithm is evaluated on 23 benchmarks from CEC 2014, and the algorithm is applied to 21 FJSP instances. Comprehensive results show that DOLGOA performs best when solving the numerical benchmarks and performs well on FJSP problems of different scales. The proposed algorithm is promising for solving various engineering optimization problems.

Data Availability

The data used to support the findings of the study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research work was supported by the National Key Research and Development Project under Grant 2018YFB1700500.