Abstract

Because of its NP-hard nature, the permutation flowshop scheduling problem (PFSSP) is a fundamental issue for Industry 4.0, especially under the demands of higher productivity, efficiency, and self-managing systems. This paper proposes an improved genetic-shuffled frog-leaping algorithm (IGSFLA) to solve the permutation flowshop scheduling problem. In the proposed IGSFLA, the optimal initial frog (individual) of the initialized group is generated by a heuristic optimal-insert method with a fitness constraint. The crossover mechanism is applied to both the subgroup and the whole group to avoid local optimal solutions and accelerate the evolution. To evolve the frogs that share the same optimal fitness further, a disturbance mechanism is applied to obtain the optimal frog of the whole group at the initialization step and the optimal frog of each subgroup at the search step. The mathematical model of the PFSSP is established with the minimum production cycle (makespan) as the objective function, the fitness of a frog is defined, and the IGSFLA-based PFSSP solution is presented. Experimental results are given and analyzed, showing that IGSFLA not only provides optimal scheduling performance but also converges effectively.

1. Introduction

With the advancement of Industry 4.0, the demographic dividend is gradually being replaced by the technology dividend [1, 2]. In the flowshop, designing an intelligent scheduling algorithm can effectively improve manufacturing systems and the production efficiency of enterprises. Although scheduling is a very active field with high practical relevance, there are still many challenging problems in the flexible manufacturing systems of Industry 4.0.

Among these challenging problems, the permutation flowshop scheduling problem (PFSSP) has been researched for more than half a century because of its complexity. It can be described as a set of jobs that are processed on different machines in the same order. The processing time of the n-th job on the m-th machine is known in advance and fixed. The task is to determine the processing order of the jobs so that the objective function (generally the time when the last job is completed on the last machine) is optimal [3–6]. Nowadays, there are a variety of dynamic factors in a highly intelligent flowshop. A reasonable scheduling method can control the production process, so that enterprises can effectively handle unexpected situations in production and processing and maximize their benefits [4–6]. The most commonly used scheduling index is the minimum production cycle, also called the makespan, referring to the minimum time needed to complete the processing of all jobs [7–11].

When the number of jobs is small, the PFSSP can be solved by deterministic methods [3], such as dynamic programming and branch and bound [12]. The PFSSP becomes an NP-hard problem when the number of jobs is large [13]. It is a challenging task mainly because a machine needs to process multiple jobs and a job needs to be processed sequentially on multiple machines [14, 15]. The increase in the number of jobs and machines makes the optimization process very complicated and the solution more difficult to obtain. Metaheuristic algorithms [16, 17] provide feasible solutions to NP-hard problems that are difficult to solve with traditional optimization algorithms, and this kind of algorithm has been widely used for the PFSSP [18–20]. The study in [21] uses a genetic algorithm to solve the PFSSP. The authors in [22–24] study the simulated annealing algorithm, the authors in [25, 26] study particle swarm optimization, and the authors in [27–29] study the artificial bee colony algorithm. However, each single metaheuristic algorithm has its disadvantages, and some combined approaches have been proposed.

The genetic-shuffled frog-leaping algorithm (GSFLA) is a swarm intelligence optimization algorithm that simulates the foraging behavior of frogs [30–34]. It combines the advantages of the memetic algorithm (MA) based on memetic evolution [35] and the particle swarm optimization algorithm (PSO) based on swarm behavior [36]. GSFLA can realize the sharing and exchange of group information and has the characteristics of scattered search and global information exchange. GSFLA has been applied to generator maintenance scheduling (2015), the time-optimal traveling salesman problem (2018), text document clustering (2019), and neural network structure optimization (2020).

In this paper, we propose an improved GSFLA called IGSFLA and apply it to solve the PFSSP. This paper has three contributions: (1) a heuristic optimal-insert method with a fitness constraint is applied to generate the frog (individual) group; (2) the crossover mechanism is applied to both the subgroup and the whole group to avoid local optimal solutions and accelerate the evolution; (3) the disturbance mechanism is applied both in the initialization step and in the local search step to evolve the frogs with the same optimal fitness.

The remainder of the paper is structured as follows: Section 2 gives a brief overview of related work on the PFSSP. The details of the IGSFLA algorithm and the IGSFLA-based PFSSP are presented in Section 3. The experimental results and analysis are reported in Section 4, and Section 5 concludes the paper.

2. Related Work

Manufacturing problems can be classified according to different characteristics, for example, the number of machines (one machine or parallel machines), the job characteristics (preemption allowed or not, equal processing times), and the number of objectives (single objective, usually the makespan, biobjective, and multiobjective). When each job has a fixed number of operations requiring different machines and all jobs share the same route, we are dealing with a flowshop scheduling problem. If each machine has to process the jobs in the same order, the problem is called the permutation flowshop scheduling problem (PFSSP).

Many methods providing exact or approximate solutions have been presented for the PFSSP over the last 60 years, including reinforcement-learning-based methods [37]. Ruiz has presented a review of approximate methods for the PFSSP, covering most heuristics and metaheuristics with the makespan criterion [38]. Among these solution approaches, metaheuristic approaches have become more popular, but according to [38], the current state of the art is far from easy to identify. In this section, we mainly list and analyze some of the approaches related to our work.

2.1. Genetic Algorithms and Utilization to PFSSP

The genetic algorithm (GA) originated from the computer simulation study of biological systems [21, 39]. It imitates the mechanism of biological evolution in nature, borrowing from Darwin's theory of evolution and Mendel's theory of heredity, and is essentially an efficient, parallel, global search method that can automatically acquire and accumulate knowledge about the search space during the search and adaptively control the search process to obtain the optimal solution. GA mainly consists of three operations, namely, selection, crossover, and mutation, which are performed by calculating the fitness of the group to evolve and continuously produce new individuals.

Zhang et al. [40] proposed HGA-RMA for the PFSSP and achieved the best performance at that time. However, due to the slow convergence speed of GA, it is still easy to fall into local optimal solutions. This issue has inspired researchers to combine the genetic algorithm with other algorithms to solve the PFSSP. The characteristics of noncompact flowshop scheduling plans in manufacturing enterprises are analyzed, and a scheduling strategy based on the nondominated sorting genetic algorithm (NSGA) is proposed in [21]. NSGA can guarantee the diversity and the evolutionary effect of the group in the multiobjective scheduling model of a noncompact flowshop.

A hybrid algorithm combining genetic algorithms and generative adversarial networks (GAN) is proposed to solve scheduling problems in [30]. The algorithm uses the GAN to reduce the amount of sample information required. Compared with traditional optimization algorithms, it can avoid premature convergence to local optimal solutions.

A genetic simulated annealing algorithm is proposed in [41] to improve flowshop scheduling with the makespan criterion, but the improvements are not satisfactory.

Recently, Kurdi [42] combined the genetic algorithm, simulated annealing, and NEH to construct a memetic algorithm with novel semiconstructive evolution operators (MASC) to solve the permutation flowshop scheduling problem, and the proposed MASC can be considered one of the best methods so far for the PFSSP.

2.2. Shuffled Frog-Leaping Algorithm

The idea of the shuffled frog-leaping algorithm (SFLA) is inspired by a group of frogs, with differences among them, living in a certain area. Each frog has the same structure and represents a solution, and the frog group constitutes the solution space. The whole frog group is divided into several subgroups, each of which has its own scheduling strategies and executes a local search strategy in a certain way. Each frog in a subgroup has its own scheduling strategy, which affects other frogs in and beyond this subgroup. The frogs can exchange information on a global scale.

SFLA combines the social behavior of particle swarm optimization (PSO) [43] and the complex search of shuffled complex evolution (SCE) [44]. As a deterministic strategy, PSO allows the algorithm to effectively use response information to guide the heuristic search [42]. Meanwhile, as a stochastic strategy, SCE ensures the flexibility and robustness of the search mode.

Independently, SCE and PSO have been applied to solve the PFSSP, in the methods called SCEOL [45] and PSOENT [46], respectively. When the numbers of jobs and machines are less than 20, PSOENT obtains better performance than the GA-based HGA-RMA [40], and vice versa.

SFLA has been applied to the multiobjective flexible job shop scheduling problem (MOFJSSP), and an approximately optimal solution can be obtained by iterating the global search several times [47]. Another solution can be obtained with an improved SFLA under four types of energy consumption [48].

Besides, a memeplex grouping SFLA has been proposed and applied to solve the distributed two-stage hybrid flowshop scheduling problem (DTHFSSP) in a multifactory environment to minimize the manufacturing time and the amount of delayed work [32].

In summary, SFLA has been applied to the MOFJSSP and the DTHFSSP. Its performance on the PFSSP will be evaluated in our experiments.

2.3. Genetic-Shuffled Frog-Leaping Algorithm

Because of the deficiencies of SFLA itself, researchers have proposed approaches that combine SFLA with other heuristic methods to solve different problems. Among them, the genetic-shuffled frog-leaping algorithm (GSFLA) has been proposed by combining GA and SFLA. In the local search of GSFLA, crossover between the optimal frog and the worst one replaces the jump operation of the traditional SFLA to complete the evolution of the worst frog. At the same time, to avoid premature convergence, the idea of frog mutation is introduced to increase the diversity of the group. After the mutation, a comparison with the original frog is performed to avoid losing the optimal frog. In summary, the local search consists of the crossover of different frogs and the selective mutation of frogs.

As mentioned above, GSFLA has been applied to solve scheduling and clustering problems. G. Giftson Samuel proposed a hybrid PSO-based GA and a hybrid PSO-based SFLA for solving the long-term generation maintenance scheduling problem under a security-constrained model [49]. Zhang et al. [50] proposed a hybrid SFLA-GA to solve the time-optimal traveling salesman problem (TOTSP), which reflects the change of traffic over time. The hybrid SFLA-GA has shown stronger search capability and a fast convergence speed.

Alhenak and Hosny [51] propose a genetic frog-leaping algorithm for text document clustering in order to extract useful information from large collections of documents. In their work, the GA performs feature selection and the SFLA performs clustering. However, the proposed algorithm requires a longer computational time.

Inspired by the above work, we try to solve the PFSSP with GSFLA. However, as shown in our experiments, the traditional GSFLA is not good enough for the PFSSP, so in this paper we propose an improved GSFLA and apply it to solve the PFSSP better.

3. Improved GSFLA for PFSSP

3.1. Improved GSFLA

We improve the traditional GSFLA by improving the initialization step and optimizing the crossover and disturbance mechanisms. Our crossover mechanism utilizes three traditional crossover operators. However, in contrast to the traditional GSFLA crossover between the optimal frog and the worst one, we crossover all frogs in the subgroup with the optimal frog of this subgroup. Moreover, the frogs of the subgroup also crossover with the global optimal frog to avoid local optimal solutions. Our mutation mechanism utilizes three traditional mutation operators. Compared with the traditional GSFLA, we introduce frog disturbances to evolve the optimal frogs that have the same fitness in the subgroup. The essence of a disturbance is mutation, and disturbed frogs can explore better results.

The IGSFLA includes heuristic initialization, subgroup division, search, and shuffle.

3.1.1. Heuristic Initialization

Initialization is the first step of both the traditional GSFLA and our IGSFLA; it generates the frogs as a group, which is then divided into different subgroups. Here, each frog (also called an individual) represents a candidate scheduling solution of the PFSSP.

In the traditional initialization of GSFLA and SFLA, frogs are initialized randomly. Since the number of frogs is far smaller than the size of the solution space, it is hard for a randomly initialized frog group to approximate the optimal solution.

If the initialized frogs contain one frog that is close to the optimal solution, the other individuals of the group will evolve toward the optimal solution with high probability. In order to make the optimal initialized frog close to the optimal solution, we present the following optimal-insert-based heuristic initialization with an optimal-fitness constraint to generate the optimal initialized frog of the whole group.

Generate the optimal initialized frog as follows:
Step 1. Choose the first job as the first gene to form the gene queue of a frog.
For each following job:
Step 2. Insert the job as a new gene into the different positions of the gene queue to form different updated gene queues of the frog.
Step 3. Calculate the fitness of each updated gene queue.
Step 4. Sort the gene queues and select the one with optimal fitness.
Step 5. If there are several gene queues with the same optimal fitness (called multioptimal frogs), utilize the disturbance operators to evolve them and go to Step 3.
End for
Step 6. Select the gene queue with optimal fitness as the optimal initialized frog.

Generate the other initialized frogs as follows:

A random method is utilized to generate the other initialized frogs of the group.
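
To make the initialization concrete, the following Python sketch illustrates the optimal-insert construction and the random generation of the remaining frogs. It is a minimal sketch under simplifying assumptions: the fitness callable is the negative makespan defined later in Section 3.4, the tie-breaking disturbance of Step 5 is omitted, and all names are illustrative rather than the authors' implementation.

```python
import random

def optimal_insert_frog(jobs, fitness):
    """Build the optimal initialized frog by inserting each job at the
    position of the gene queue that yields the best fitness (Section 3.1.1)."""
    queue = [jobs[0]]                                    # Step 1: first job is the first gene
    for job in jobs[1:]:
        # Steps 2-4: try every insertion position and keep the best queue
        candidates = [queue[:i] + [job] + queue[i:] for i in range(len(queue) + 1)]
        queue = max(candidates, key=fitness)             # Step 5 (tie disturbance) is omitted here
    return queue                                         # Step 6: optimal initialized frog

def initialize_group(jobs, fitness, group_size):
    """One heuristically built frog plus randomly generated frogs."""
    group = [optimal_insert_frog(jobs, fitness)]
    while len(group) < group_size:
        frog = list(jobs)
        random.shuffle(frog)
        group.append(frog)
    return group
```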

3.1.2. Division of Subgroups

The traditional division of subgroups simply cuts the whole initialized frog group according to the subgroup size. In this paper, to keep the variety of frogs, different-level frogs are distributed into each subgroup with the following division strategy:
Step 1. Calculate the fitness of the initialized frogs in the whole group.
Step 2. Sort the frogs according to their fitness in descending order.
Step 3. Record the frog with optimal fitness, which denotes the current optimal solution of the PFSSP, as the global optimal frog.
Step 4. Distribute the sorted frogs into the subgroups in a snake order: divide the index i of the i-th frog by the number of subgroups and take the remainder. When the remainder is not zero, the i-th frog is distributed into the subgroup with that number; otherwise, it is distributed into the last subgroup.
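
A minimal sketch of this division step is shown below, assuming the fitness callable of Section 3.4. With 0-based indices, the subgroup of a frog is simply its rank modulo the number of subgroups, which is equivalent to the 1-based rule of Step 4.

```python
def divide_into_subgroups(group, fitness, num_subgroups):
    """Sort the frogs by fitness and deal them out so that every subgroup
    receives frogs of every quality level (Section 3.1.2)."""
    ranked = sorted(group, key=fitness, reverse=True)     # Steps 1-2: best frog first
    global_best = ranked[0]                               # Step 3: current optimal solution
    subgroups = [[] for _ in range(num_subgroups)]
    for i, frog in enumerate(ranked):
        subgroups[i % num_subgroups].append(frog)         # Step 4: snake-order distribution
    return subgroups, global_best
```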

3.1.3. Search Process of IGSFLA

The search process includes local search and global search.

Local Search in Each Subgroup.
Step 1. Sort the frogs in the subgroup according to their fitness in descending order.
Step 2. Crossover:
(1) Selection: select the frog with optimal fitness in the subgroup as the subgroup optimal frog; the remaining frogs are the frogs to be evolved.
(2) Crossover: each remaining frog performs one of the crossover operations of Section 3.2, chosen randomly, with the subgroup optimal frog to produce an offspring. This offspring then performs one of the crossover operations, chosen randomly, with the global optimal frog to produce a second offspring, which ensures that each round of local search can learn from the global optimal frog.
(3) Judgment: if the first offspring has better fitness than the original frog, it replaces the original frog; otherwise, the original frog and the global optimal frog crossover to produce a further offspring. If that offspring has better fitness than the original frog, it replaces the original frog; otherwise, a stochastic feasible solution is generated. If the stochastic feasible solution has better fitness than the original frog, it replaces the original frog; otherwise, the original frog remains unchanged.
Step 3. Mutation:
(1) Selection: calculate the probability of mutation for each frog. The probability is restricted to a given range and is calculated from the maximum fitness and the average fitness of the subgroup.

(2) Mutation: perform one of the mutation operations of Section 3.3, chosen randomly, to produce an offspring (the mutated frog).
(3) Judgment: if the mutated frog has better fitness than the original frog, the mutated frog replaces the original frog; otherwise, the original frog remains unchanged.
Step 4. Frog disturbance for the multioptimal frogs:
(1) Selection: select each frog among the multioptimal frogs of the subgroup.
(2) Mutation: perform one of the mutation operations, chosen randomly, to produce an offspring (the disturbed frog).
(3) Judgment: if the disturbed frog has better fitness than the original frog, the disturbed frog replaces the original frog; otherwise, the original frog remains unchanged.
Step 5. Reorder the subgroup and iterate.

Global Search.
Step 1. Perform the local search on each subgroup (as shown above); this step mainly applies three operations: crossover, mutation, and disturbance.
Step 2. Shuffle the subgroups and recalculate the global optimal frog.
Step 3. Verify whether the convergence condition is satisfied. If so, output the result; otherwise, return to Step 1.
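
The sketch below summarizes one possible reading of these local and global search loops. It deliberately simplifies the judgment rules of Step 2 (an offspring or mutant is accepted only if it improves the frog, and the fallback to a stochastic feasible solution is omitted); the operator lists correspond to Sections 3.2 and 3.3, the divide_into_subgroups helper is the one sketched in Section 3.1.2, and all names are illustrative.

```python
import random

def local_search(subgroup, global_best, fitness, crossover_ops, mutation_ops, iterations):
    """Simplified local search of Section 3.1.3: each frog learns from the
    subgroup best and the global best by crossover, then is mutated, and a
    change is kept only when it improves the frog's fitness."""
    for _ in range(iterations):
        subgroup.sort(key=fitness, reverse=True)
        best = subgroup[0]
        for k in range(1, len(subgroup)):
            frog = subgroup[k]
            child = random.choice(crossover_ops)(frog, best)[0]          # learn from the subgroup best
            child = random.choice(crossover_ops)(child, global_best)[0]  # learn from the global best
            if fitness(child) > fitness(frog):
                subgroup[k] = frog = child
            mutant = random.choice(mutation_ops)(frog)                   # mutation / disturbance
            if fitness(mutant) > fitness(frog):
                subgroup[k] = mutant
    return subgroup

def global_search(group, fitness, num_subgroups, crossover_ops, mutation_ops,
                  subgroup_iterations, global_iterations):
    """Global search: divide, search locally, shuffle, and repeat."""
    best = max(group, key=fitness)
    for _ in range(global_iterations):
        subgroups, best = divide_into_subgroups(group, fitness, num_subgroups)
        for sub in subgroups:                                # Step 1: local search per subgroup
            local_search(sub, best, fitness, crossover_ops, mutation_ops, subgroup_iterations)
        group = [frog for sub in subgroups for frog in sub]  # Step 2: shuffle the subgroups
        best = max(group, key=fitness)                       # ... and recalculate the global best
    return best                                              # Step 3: fixed iteration budget as stopping rule
```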

3.2. Detailed Operation of Crossover

Three traditional crossover operators are applied in this paper: position-based crossover, sequence-based crossover, and loop-based crossover.

3.2.1. Position-Based Crossover

(1) Randomly select half of the genes in the first parent frog.
(2) Record and sort the positions of the unselected genes in the first parent.
(3) In the second parent frog, find the genes corresponding to the genes selected in the first parent.
(4) Record and sort the positions of the unselected genes in the second parent.
(5) Crossover the unselected genes according to the sorted positions obtained in (2) and (4).

To demonstrate the position-based crossover, a schematic example is given in Figure 1. As shown in the figure, in order to generate two offspring from the two parent frogs, firstly, the genes [g2, g4, g5] in the first parent are selected randomly, with positions [2, 4, 5], so the unselected genes in the first parent are [g1, g3, g6] with positions [1, 3, 6]. According to the selected genes [g2, g4, g5] of the first parent, we find the same genes [g2, g4, g5] at positions [2, 3, 5] in the second parent, so the unselected positions in the second parent are [1, 4, 6] with genes [g3, g6, g1]. We crossover the corresponding unselected genes [g1, g3, g6] of the first parent with the unselected genes [g3, g6, g1] of the second parent and obtain the two offspring.
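
A Python sketch of this operator, matching the example above, is given here. The parent frogs are assumed to be plain lists of job indices, which is a convention of these sketches rather than the authors' data structure.

```python
import random

def position_based_crossover(p1, p2):
    """Position-based crossover (Section 3.2.1): half of the genes of the first
    parent are fixed, and the unselected genes of the two parents are exchanged
    pairwise according to their sorted positions."""
    selected = set(random.sample(p1, len(p1) // 2))            # step (1): selected genes of p1
    pos1 = [i for i, g in enumerate(p1) if g not in selected]  # step (2): unselected positions in p1
    pos2 = [i for i, g in enumerate(p2) if g not in selected]  # steps (3)-(4): unselected positions in p2
    c1, c2 = list(p1), list(p2)
    for i, j in zip(pos1, pos2):                               # step (5): pairwise exchange
        c1[i], c2[j] = p2[j], p1[i]
    return c1, c2
```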

3.2.2. Sequence-Based Crossover

(1) Randomly select half of the genes in the first parent frog.
(2) Record and sort the positions of the selected genes in the first parent.
(3) In the second parent frog, find the genes corresponding to the genes selected in the first parent.
(4) Record and sort the positions of the selected genes in the second parent.
(5) Crossover the selected genes according to the sorted positions obtained in (2) and (4).

Schematic diagram of the sequence-based crossover is shown in Figure 2.
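
Under the same assumptions as the previous sketch, the sequence-based operator differs only in exchanging the selected genes instead of the unselected ones.

```python
import random

def sequence_based_crossover(p1, p2):
    """Sequence-based crossover (Section 3.2.2): the selected genes of the two
    parents are exchanged pairwise according to their sorted positions."""
    selected = set(random.sample(p1, len(p1) // 2))        # step (1): selected genes of p1
    pos1 = [i for i, g in enumerate(p1) if g in selected]  # step (2): selected positions in p1
    pos2 = [i for i, g in enumerate(p2) if g in selected]  # steps (3)-(4): selected positions in p2
    c1, c2 = list(p1), list(p2)
    for i, j in zip(pos1, pos2):                           # step (5): pairwise exchange
        c1[i], c2[j] = p2[j], p1[i]
    return c1, c2
```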

3.2.3. Loop-Based Crossover

(1) Initialize the order number j as 1 and the position queue K as empty.
(2) While the gene at position j in the first parent differs from the first gene of the second parent, update j to the position in the first parent of the gene that is the same as the gene at position j in the second parent, and put j into the position queue K. End while.
(3) Generate one offspring by gathering the genes at the positions in K from the first parent and the genes at the other positions from the second parent. Generate the other offspring by gathering the genes at the positions in K from the second parent and the genes at the other positions from the first parent.

Schematic diagram of loop-based crossover is shown in Figure 3.
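
This operator corresponds to the classical cycle crossover. The sketch below follows the cycle starting from the first position and keeps its positions from one parent while filling the remaining positions from the other; lists of job indices are again an assumed representation.

```python
def loop_based_crossover(p1, p2):
    """Loop-based (cycle) crossover (Section 3.2.3)."""
    j, K = 0, []                              # step (1): order number and position queue (0-based here)
    while True:                               # step (2): follow the cycle starting at the first position
        K.append(j)
        j = p1.index(p2[j])                   # position in p1 of the gene found at position j in p2
        if j == 0:                            # back at the first gene: the cycle is closed
            break
    cycle = set(K)
    c1 = [p1[i] if i in cycle else p2[i] for i in range(len(p1))]   # step (3): compose the offspring
    c2 = [p2[i] if i in cycle else p1[i] for i in range(len(p1))]
    return c1, c2
```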

3.3. Detailed Operation of Mutation

Three traditional mutation operators are applied in this paper: inverted mutation, transformation mutation, and insertion mutation.

3.3.1. Inverted Mutation

(1) Randomly select two genes on the frog.
(2) Reverse the order of the genes between the two selected genes.

Schematic diagram of inverted mutation is shown in Figure 4.

3.3.2. Transformation Mutation

(1) Randomly select two genes on the frog.
(2) Swap the positions of the two selected genes.

Schematic diagram of transformation mutation is shown in Figure 5.

3.3.3. Insertion Mutation

(1) Randomly select two genes on the frog.
(2) Forward insert: insert the second selected gene in front of the first gene.
(3) Backward insert: insert the first selected gene after the second gene.

Schematic diagrams of forward insert and backward insert are shown in Figures 6 and 7.
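
The three mutation operators translate directly into list manipulations. A minimal sketch under the same list-of-jobs assumption follows, with the insertion variant exposing the forward/backward choice as a parameter.

```python
import random

def inverted_mutation(frog):
    """Reverse the genes between two randomly selected positions (Section 3.3.1)."""
    i, j = sorted(random.sample(range(len(frog)), 2))
    return frog[:i] + frog[i:j + 1][::-1] + frog[j + 1:]

def transformation_mutation(frog):
    """Swap the two randomly selected genes (Section 3.3.2)."""
    i, j = random.sample(range(len(frog)), 2)
    child = list(frog)
    child[i], child[j] = child[j], child[i]
    return child

def insertion_mutation(frog, forward=True):
    """Insert one of two randomly selected genes next to the other (Section 3.3.3)."""
    i, j = sorted(random.sample(range(len(frog)), 2))
    child = list(frog)
    if forward:
        child.insert(i, child.pop(j))   # forward insert: the second gene goes in front of the first
    else:
        child.insert(j, child.pop(i))   # backward insert: the first gene goes after the second
    return child
```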

3.4. The Mathematical Model of PFSSP

The PFSSP refers to the processing flow of N jobs on M machines. The processing time of each job on each machine is known, so the scheduling algorithm needs to give the processing order of the jobs on the machines subject to the following hypotheses.
Hypotheses about jobs are as follows:
The processing order of each job on each machine is the same and known.
Each job can only be processed on one machine at a time.
Each job is independent of the others.
Hypotheses about machines are as follows:
Each machine can only process one job at a time.
Each machine is independent of the others.
Hypotheses about the production process are as follows:
Each job has no processing priority.
The transmission time of a job between machines is considered negligible.
Mathematical model:
Let the PFSSP processing order be represented as a permutation $\pi = (\pi(1), \pi(2), \ldots, \pi(N))$, where $\pi(n)$ denotes the index of the job arranged at the n-th position of $\pi$. According to the following set of recursive equations, we can calculate the completion time of each job [42]:

$C(\pi(1), 1) = p(\pi(1), 1)$
$C(\pi(n), 1) = C(\pi(n-1), 1) + p(\pi(n), 1), \quad n = 2, \ldots, N$
$C(\pi(1), m) = C(\pi(1), m-1) + p(\pi(1), m), \quad m = 2, \ldots, M$
$C(\pi(n), m) = \max\{C(\pi(n-1), m), C(\pi(n), m-1)\} + p(\pi(n), m), \quad n = 2, \ldots, N; \ m = 2, \ldots, M$

where $C(\pi(n), m)$ denotes the time when the n-th job of $\pi$ is completed on the m-th machine and $p(\pi(n), m)$ denotes the processing time of that job on the m-th machine. The makespan $C_{\max}(\pi) = C(\pi(N), M)$ denotes the time when the last job is completed on the last machine.
Objective function:

$\pi^{*} = \arg\min_{\pi} C_{\max}(\pi)$,

where $\pi$ ranges over all permutations of the N jobs.
The definition of fitness:
As mentioned above, a fitness function is needed to determine how good a frog is. For a frog P in the PFSSP, whose gene queue encodes a permutation $\pi_P$, the fitness function is defined simply as the negative of the makespan:

$f(P) = -C_{\max}(\pi_P)$
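
A direct transcription of the recursive equations and of the fitness definition into Python is sketched below. The rolling array over machines is an implementation convenience, and the time matrix is assumed to be indexed as p[job][machine].

```python
def makespan(perm, p):
    """Completion time of the last job on the last machine, C_max (Section 3.4).
    perm: processing order of the jobs; p[j][m]: processing time of job j on machine m."""
    num_machines = len(p[perm[0]])
    C = [0] * num_machines              # C[m]: completion time of the previous job on machine m
    for job in perm:
        C[0] += p[job][0]               # first machine: the jobs simply follow one another
        for m in range(1, num_machines):
            C[m] = max(C[m], C[m - 1]) + p[job][m]   # wait for both the machine and the preceding operation
    return C[-1]

def fitness(perm, p):
    """Fitness of a frog: the negative makespan, so that larger is better."""
    return -makespan(perm, p)
```
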
3.5. IGSFLA-Based PFSSP

The IGSFLA-based PFSSP includes group initialization, group division, local search, and global search. Local search mainly completes the evolution of the frogs in each subgroup. Global search mainly searches for the most adaptable frog in the whole group, completes the group division, and merges the subgroups. The process is described as follows:
Step 1. Mathematical modeling and parameter setting: construct the processing-time matrix according to the PFSSP. To ensure that the time matrix is the same when the numbers of machines and jobs are the same, a state parameter is introduced. Initialize the number of subgroups and the number of frogs in a subgroup; their product gives the size of the whole group.
Step 2. Heuristic group initialization: initialize the frog group according to Section 3.1.1, sort the frogs according to their fitness, and record the globally optimal frog.
Step 3. Group division: divide the frogs into several subgroups using the snake-order grouping method of Section 3.1.2.
Step 4. Local search according to Section 3.1.3.
Step 5. Global search: shuffle all subgroups, sort all frogs in order of fitness from largest to smallest, and record the global optimal frog.
Step 6. Go back to Step 3 until the stopping condition is reached, and then output the result.
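
The whole procedure can be tied together as in the following high-level sketch, which reuses the helpers sketched in the previous subsections. The parameter names, the fixed iteration budget used as the stopping condition, and the use of the state parameter as a random seed are illustrative assumptions rather than the authors' exact implementation.

```python
import random

def igsfla_pfssp(p, num_subgroups, frogs_per_subgroup, subgroup_iterations, global_iterations, state=0):
    """High-level IGSFLA-based PFSSP procedure (Section 3.5)."""
    random.seed(state)                                   # Step 1: state parameter for reproducibility
    jobs = list(range(len(p)))                           # job indices of the processing-time matrix p
    fit = lambda frog: fitness(frog, p)                  # fitness of Section 3.4
    group = initialize_group(jobs, fit, num_subgroups * frogs_per_subgroup)   # Step 2
    crossover_ops = [position_based_crossover, sequence_based_crossover, loop_based_crossover]
    mutation_ops = [inverted_mutation, transformation_mutation, insertion_mutation]
    best = global_search(group, fit, num_subgroups, crossover_ops,
                         mutation_ops, subgroup_iterations, global_iterations)  # Steps 3-6
    return best, makespan(best, p)
```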

4. Experiments and Analysis

In this section, firstly, we test the parameters that need to be set for the proposed IGSFLA, namely, the number of subgroup iterations c and the number of frogs in a subgroup. Then, we compare the results of random initialization and of the heuristic initialization with optimal-insert in GSFLA and IGSFLA to demonstrate the effectiveness of the proposed heuristic initialization. Thirdly, the IGSFLA-based PFSSP is compared with IGSFLA-no-disturbance (which lacks the disturbance mechanism of IGSFLA), GA, SFLA, and GSFLA to show the effectiveness and convergence of the disturbance mechanism. Finally, we compare our IGSFLA with two kinds of state-of-the-art methods on open datasets.

The experiments are executed on a laptop with Windows 10, an Intel Core i5-3230M CPU, and 8 GB of RAM.

4.1. Algorithm Parameter Testing
4.1.1. The Number of Subgroup Iterations

To test the number of subgroup iterations c, the number of subgroups and the number of frogs in a subgroup are fixed. 9 combinations of N jobs and M machines are set up as shown in Table 1. The experiment is repeated 10 times under 5 different values of c. Keeping the optimal value of each run, the averages over the 10 runs are calculated and shown in Table 1. Because of the NP-hard nature of the problem, we do not need to find the true optimal makespan. For the algorithms in our experiments, the makespan obtained after 50 iterations is considered the optimal value.

Through analyzing the experimental results in Table 1, it can be found that the value of c has an impact on the algorithm performance. If c is set too small, too many global exchanges will result in poor accuracy of the algorithm. However, if c is too large, the number of global mixings will be reduced correspondingly, the algorithm will lose the advantage of global search, the subgroup frogs will not be able to communicate well with the global optimal solution, and the increased number of local search iterations will increase the computation, making the algorithm less efficient. Among the above parameter combinations, when the numbers of machines and jobs are small, the value of c is not so important. However, when the numbers of machines and jobs become large, a large c causes a lot of unnecessary calculations, whereas a small c reduces the calculations more effectively. The optimal c depends on the numbers of machines and jobs: with an increasing number of jobs and machines, the optimal c should be increased. In our experiments, the proposed algorithm achieves the optimal values for the 9 combinations in Table 1 when c is set appropriately.

4.1.2. The Number of Frogs in the Subgroup

To test the number of frogs in a subgroup, the total group size and the number of subgroup iterations c are fixed. 5 combinations of jobs and machines are set up, and the experiment is repeated 10 times under different subgroup sizes. Keeping the optimal value of each run, the averages of the results of the 10 runs are calculated and shown in Table 2.

Through analyzing the experimental results in Table 2, it can be found that the proposed IGSFLA performs better when the number of frogs in a subgroup is between 30 and 40. The results show that the subgroup size should not be as large as possible. If the subgroup size is too small, the frogs in a subgroup cannot communicate adequately with each other, and the algorithm loses the advantage of local search. If the subgroup size is too large, the algorithm easily falls into local optima.

4.1.3. The Heuristic Initialization with Optimal-Insert Method

To test the effect of the heuristic initialization with the optimal-insert method, the total group size, the number of subgroup iterations c, and the number of frogs in a subgroup are fixed. 5 combinations of jobs and machines are set up, and the experiment is repeated 10 times under the traditional random initialization and under our heuristic initialization. Keeping the optimal value of each run, the averages of the results of the 10 runs are calculated and shown in Table 3.

From Table 3, it can easily be found that the heuristic initialization with the optimal-insert method and the fitness constraint generates high-quality frogs, especially the initialized frog that has the optimal fitness. Based on this high-quality initialization, it is much easier to obtain a better makespan in both the traditional GSFLA and our IGSFLA.

4.2. Optimization Result Testing

First, the number of subgroups and the number of frogs in a subgroup are set. Then, 9 combinations of N jobs and M machines are set up as shown in Table 2. The experiment is repeated 10 times using 5 different algorithms: GA, SFLA, GSFLA, IGSFLA-no-disturbance, and IGSFLA. Following the contrast principle, the same parameters are used for the 4 algorithms SFLA, GSFLA, IGSFLA-no-disturbance, and IGSFLA. Keeping the optimal value of each run, the averages of the results of the 10 runs are calculated and shown in Table 4.

From Table 4, it can be seen that with the increase in the number of jobs and the number of machines, the proposed IGSFLA gradually shows its advantages. Compared with GA, SFLA, and GSFLA, IGSFLA obtains a better solution because of its powerful search ability. Besides, the disturbance obviously improves the performance. As mentioned in Sections 3.1.1 and 3.1.3, the disturbance works in two places, the initialization step and the local search step. It evolves the frogs that share the same optimal fitness further, so the group can approach the optimal solution more effectively.

4.3. Algorithm Convergence

In the actual flow workshop, the improvement of production efficiency is an overall problem. The scheduling system needs the scheduling algorithm to give an excellent scheduling plan in a short time. Therefore, the convergence of the algorithm is also very significant.

To test the convergence of IGSFLA, the numbers of jobs and machines are fixed. SFLA, GSFLA, IGSFLA-no-disturbance, and IGSFLA are set with the same parameters, and each algorithm is run for 50 iterations. For GA, an iteration refers to the completion of one overall evolution; for SFLA, GSFLA, IGSFLA-no-disturbance, and IGSFLA, an iteration refers to the completion of one global search. The Gantt charts of the solutions obtained with the above algorithms are provided in Figures 8–12. The convergence graph drawn from the data is shown in Figure 13.

For actual production tasks, the scheduling system often does not need an optimal result; a suboptimal result that can be obtained in a short time is good enough. To quantify the convergence of the algorithms, the statistical results are shown in Table 5. Here, the optimal result of GA after 86 iterations is taken as the benchmark, and the number of iterations required by the other algorithms to reach the benchmark is given. It can be seen that, to reach the benchmark, IGSFLA-no-disturbance needs 15 iterations, while GSFLA and IGSFLA only need 2 iterations.

Combining the convergence comparison in Figure 13 and Table 5, it can be seen that IGSFLA converges faster and obtains better results than IGSFLA-no-disturbance, GSFLA, SFLA, and GA. Compared with GSFLA, IGSFLA further improves the scheduling ability by optimizing the initialization and improving the crossover and disturbance mechanisms, converging faster and obtaining better results with fewer iterations.

4.4. Further Comparison and Discussion

In 1989, Taillard chose the hardest problems and their best-known solutions to construct the Taillard dataset. The dataset is updated whenever a new best solution appears and is confirmed.

Here, the scheduling results of the proposed IGSFLA on the Taillard dataset are listed and compared with those of other methods, including methods beyond metaheuristics.

As shown in Table 6, our IGSFLA performs better than the Q-learning method proposed in 2017, which belongs to the class of self-learning methods known as reinforcement learning. Besides, our proposed IGSFLA nearly achieves the optimal solutions of MASC (2020). Furthermore, IGSFLA performs as well as MASC when the number of machines is less than 20. However, as the number of machines increases, a gap between IGSFLA and the state-of-the-art method appears, which will be investigated in our future work.

5. Conclusion

Technological developments along with the emergence of Industry 4.0 call for new algorithms to solve fundamental industrial problems, especially the flowshop scheduling problem. In this paper, to improve the efficiency of permutation flowshop scheduling, the IGSFLA algorithm is proposed and applied to solve the PFSSP. In IGSFLA, to enhance the local search ability, the crossover mechanism is optimized and the disturbance mechanism is utilized. The mathematical model of the PFSSP is established and solved based on the proposed IGSFLA. Experimental results show that, compared with other algorithms, IGSFLA converges quickly, giving an approximately optimal value with fewer iterations, and achieves the optimal scheduling solution when the number of machines is small. Future work will improve the performance on complicated PFSSP instances with enormous numbers of jobs and machines.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the National Key R&D Program of China (2018YFB1308300), the European Commission Marie Skłodowska-Curie SMOOTH (smart robots for fire-fighting) project (H2020-MSCA-RISE-2016-734875), the China Postdoctoral Science Foundation (2018M631620), the Natural Science Foundation of Beijing Municipality (4202026), and the Doctoral Fund of Yanshan University (BL18007).