Mathematical Problems in Engineering

Special Issue

Big Data Modelling of Engineering and Management


Research Article | Open Access


Lvjiang Yin, Meier Zhuang, Jing Jia, Huan Wang, "Energy Saving in Flow-Shop Scheduling Management: An Improved Multiobjective Model Based on Grey Wolf Optimization Algorithm", Mathematical Problems in Engineering, vol. 2020, Article ID 9462048, 14 pages, 2020. https://doi.org/10.1155/2020/9462048

Energy Saving in Flow-Shop Scheduling Management: An Improved Multiobjective Model Based on Grey Wolf Optimization Algorithm

Guest Editor: Weilin Xiao
Received: 16 Jun 2020
Accepted: 21 Aug 2020
Published: 14 Oct 2020

Abstract

Energy saving is increasingly important. During production, energy can be saved by improving operational methods and machine infrastructure, but doing so also increases the complexity of flow-shop scheduling. The Grey Wolf Optimization Algorithm, one of the data mining technologies, is widely applied to various mathematical problems in engineering; however, the algorithm is still immature and has some defects. We therefore propose an improved multiobjective model based on the Grey Wolf Optimization Algorithm that incorporates a Kalman filter and a reinforcement learning operator: the Kalman filter pushes the solution set closer to the Pareto-optimal front, while the reinforcement learning operator improves the convergence speed and solving ability of the algorithm. Tests on six benchmark functions show that the proposed algorithm outperforms the original algorithm and other comparison algorithms in terms of search accuracy and solution-set diversity. The improved multiobjective model based on the Grey Wolf Optimization Algorithm proposed in this paper is thus well suited to energy-saving problems in flow-shop scheduling and is of great practical value in engineering and management.

1. Introduction

Many mathematical problems in scientific research and practical engineering are essentially multiobjective optimization problems. The analysis of constrained multiobjective optimization algorithms has become a research hotspot in recent years.

Different approaches exist in the literature regarding optimization algorithms, such as the Improved Multiobjective Grey Wolf Optimizer (IMOGWO), which hybridizes the optimizer with a fast nondominated sorting strategy [1]. Significant progress has been made worldwide in solving constrained multiobjective optimization problems owing to these algorithms' efficiency and simplicity [2], but there is still much room for improvement in the diversity and convergence of solution sets.

In previous research, some scholars proposed a differential evolution algorithm based on a two-population search mechanism, which randomly deletes one of the two individuals with the smallest Euclidean distance [3]. In this way, boundary solutions may be lost and the diversity of the solution set may suffer. Moreover, when the infeasible solution set is updated, individuals with a small degree of constraint violation are preferred, but the objective function values of the individuals selected this way may be poor, which slows the convergence of the algorithm.

Several lines of evidence suggest that penalty terms were applied to modify individual objective function values. During evolution, feasible nondominated solutions were retained, as were infeasible solutions with a small degree of constraint violation [4]. This algorithm can also lose boundary solutions, indicating defects in maintaining solution-set diversity. When updating the feasible and infeasible solution sets, individuals located in sparse regions are given priority; however, when updating the infeasible solution set, individuals with a large degree of constraint violation are retained, reducing the convergence speed of the algorithm [5].

As for the improved elite selection strategy, it can make the solution set more widely distributed by setting preference points and can extend constrained multiobjective optimization to high-dimensional problems by combining with the Deb criterion [6], but it still has some drawbacks.

Up to now, many differential evolution algorithms have been proposed that minimize the objective function value for feasible solutions and minimize the degree of constraint violation for infeasible solutions [7]. However, the information interaction between the feasible and infeasible solution sets is insufficient, and population diversity needs to be improved. Other scholars proposed a constrained multiobjective optimization algorithm based on an adaptive ε-truncation strategy, which can improve the diversity of solution sets while maintaining convergence [8], but the ε parameter is difficult to select and must be adjusted for different problems [9]. In brief, most algorithms struggle to balance the key performance indexes of constrained multiobjective optimization, namely diversity and convergence [10–12].

Therefore, this paper proposes an improved Multiobjective Grey Wolf Optimizer incorporating Kalman filtering and reinforcement learning (MKGWO). The main innovation of the algorithm is that a Kalman filter, which facilitates the convergence of the solution set to the Pareto-optimal front, is introduced into a static multiobjective algorithm. The approach combines the characteristics of the Kalman filter with the robustness, reliability, and high efficiency of reinforcement learning [13]. During evolution, the algorithm uses an elite population to store feasible nondominated solutions and retains the nondominated solutions generated by historical iterations. Scheduling research concerns allocating scarce resources to different tasks within a certain period of time [14, 15]. As the scale of production continuously expands, the importance of scheduling and decision-making to enterprise management and production becomes increasingly prominent [16].

The scheduling problem is an interdisciplinary field of research involving operations research, computer science, control theory, industrial engineering, and many other disciplines [17]. A good scheduling scheme can greatly improve the production level of enterprises, making rational use of resources and enhancing market competitiveness [18–21].

From a different perspective, combining data mining technology with mathematical logic, we establish an improved multiobjective operation model based on the Grey Wolf Optimization Algorithm to address energy-saving problems in engineering. The results show that the algorithm successfully solves the Pareto front problem in flow-shop scheduling and is of great practical value in engineering and management.

2. Multiobjective Grey Wolf Optimizer

2.1. Multiobjective Problem

As briefly mentioned in the introduction, multiobjective optimization refers to the optimization of a problem with more than one objective function [22, 23]. Without loss of generality, it can be formulated as a maximization problem as follows:

maximize F(x) = {f_1(x), f_2(x), …, f_o(x)},
subject to g_i(x) ≥ 0, i = 1, 2, …, m,
h_i(x) = 0, i = 1, 2, …, p,
L_i ≤ x_i ≤ U_i, i = 1, 2, …, n,

where n is the number of variables, o is the number of objective functions, m is the number of inequality constraints, p is the number of equality constraints, g_i is the ith inequality constraint, h_i indicates the ith equality constraint, and [L_i, U_i] are the boundaries of the ith variable [24–26].

In single-objective optimization, solutions can be compared easily because the objective function is unary: for maximization problems, solution X is better than Y if and only if X > Y. However, solutions in a multiobjective space cannot be compared with relational operators because there are multiple comparison criteria. In this case, a solution is better than (dominates) another solution if and only if it shows a better or equal value on all of the objectives and a strictly better value on at least one of them [27]. The concepts for comparing two solutions in multiobjective problems were first proposed by Khamis et al. [28] and then extended in [29]. Without loss of generality, the mathematical definition of Pareto dominance for a maximization problem is as follows [30].

Definition 1. (Pareto dominance).
Suppose that there are two vectors such as x = (x_1, x_2, …, x_k) and y = (y_1, y_2, …, y_k).
Vector x dominates vector y (denoted as x ≻ y) if

∀i ∈ {1, 2, …, k}: x_i ≥ y_i ∧ ∃i ∈ {1, 2, …, k}: x_i > y_i.

The definition of Pareto optimality is as follows.

Definition 2. (Pareto optimality).
A solution x ∈ X is called Pareto-optimal if

∄ y ∈ X: F(y) ≻ F(x).

A set including all the nondominated solutions of a problem is called the Pareto-optimal set, and it is defined as follows.

Definition 3. (Pareto optimality set).
The set of all Pareto-optimal solutions is called the Pareto set:

P_s := {x ∈ X | ∄ y ∈ X: F(y) ≻ F(x)}.

A set containing the corresponding objective values of the Pareto-optimal solutions in the Pareto-optimal set is called the Pareto-optimal front [31]. The definition of the Pareto-optimal front is as follows.

Definition 4. (Pareto-optimal front).
A set containing the values of the objective functions for the Pareto solution set is

P_f := {F(x) | x ∈ P_s}.
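Definitions 1–3 can be sketched in a few lines of Python for a maximization problem (a minimal illustration; the function names are ours, not the paper's):

```python
import numpy as np

def dominates(x, y):
    """True if objective vector x Pareto-dominates y (maximization):
    x is no worse than y on every objective and strictly better on one."""
    x, y = np.asarray(x), np.asarray(y)
    return bool(np.all(x >= y) and np.any(x > y))

def pareto_set(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy example with two objectives to maximize: only (0.5, 0.5) is dominated.
pts = [(1.0, 5.0), (2.0, 4.0), (1.5, 4.5), (0.5, 0.5)]
front = pareto_set(pts)
```

The `pareto_set` filter is quadratic in the population size, which is acceptable for the archive sizes used later in the paper.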

2.2. MOGWO

The MOGWO algorithm was proposed by Mirjalili et al. [32]. The social leadership and hunting technique of grey wolves were the main inspiration for this algorithm. To model the social hierarchy of wolves mathematically when designing MOGWO, the fittest solution is considered the alpha (α) wolf; the second and third best solutions are named the beta (β) and delta (δ) wolves, respectively. The rest of the candidate solutions are assumed to be omega (ω) wolves. In the GWO algorithm, the hunting (optimization) is guided by α, β, and δ, and the ω wolves follow these three wolves in the search for the global optimum [33–35]. The encircling behavior is modeled as

D = |C · X_p(t) − X(t)|,
X(t + 1) = X_p(t) − A · D,

where t indicates the current iteration, A and C are coefficient vectors, X_p is the position vector of the prey, and X indicates the position vector of a grey wolf [36].

The vectors A and C are calculated as follows:

A = 2a · r_1 − a,
C = 2 · r_2,

where the elements of a decrease linearly from 2 to 0 over the course of iterations and r_1, r_2 are random vectors in [0, 1]. The position updating mechanism of search agents and the effect of A are indicated in Figure 1 [37]. In this figure, we can see that the three top wolves (the fittest solutions) guide the directions of the other wolves (the candidate solutions).

The following formulas are run constantly for each search agent during optimization to simulate the hunting and find promising regions of the search space:

D_α = |C_1 · X_α − X|, D_β = |C_2 · X_β − X|, D_δ = |C_3 · X_δ − X|,
X_1 = X_α − A_1 · D_α, X_2 = X_β − A_2 · D_β, X_3 = X_δ − A_3 · D_δ,
X(t + 1) = (X_1 + X_2 + X_3) / 3.

C is a random value generated in [0, 2]. In MOGWO, the nondominated solutions are stored in a grid-based archive. When the archive is full, a hypercube (grid segment) is selected by roulette wheel according to the probability

P_i = N_i / c,

where c is a constant number greater than one and N_i is the number of obtained Pareto-optimal solutions in the ith segment [38].
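The three-leader position update above can be sketched as follows (a minimal NumPy illustration with the leaders fixed at the origin; the function name and population size are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_step(X, alpha, beta, delta, a):
    """One GWO position update: each wolf moves toward the mean of three
    points derived from the alpha, beta, and delta leaders."""
    new_X = np.empty_like(X)
    for i, wolf in enumerate(X):
        guided = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(wolf.shape) - a   # A in [-a, a]
            C = 2 * rng.random(wolf.shape)           # C in [0, 2]
            D = np.abs(C * leader - wolf)            # distance to the leader
            guided.append(leader - A * D)
        new_X[i] = np.mean(guided, axis=0)           # X(t+1) = (X1+X2+X3)/3
    return new_X

# Toy usage: 5 wolves in 2-D, all leaders at the origin, early iteration (a = 2).
X = rng.uniform(-10, 10, size=(5, 2))
X = gwo_step(X, np.zeros(2), np.zeros(2), np.zeros(2), a=2.0)
```

As a decreases over iterations, |A| shrinks below 1 and the update turns from exploration into exploitation around the leaders.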

2.3. Defect in MOGWO

The traditional Multiobjective Grey Wolf Optimizer is a multiobjective optimization algorithm inspired by the predation of grey wolf packs, and it uses a fixed-size external archive to store nondominated solutions. When this plain optimizer solves static multiobjective problems, the lack of a good promotion strategy means that the solution set does not get close to the Pareto-optimal front and its diversity is not high [39]. To solve these problems, an improved algorithm, MKGWO, is proposed in the next section.

3. An Improved MOGWO Based on Kalman Filter and Reinforcement Learning

3.1. Kalman Filter

In 1960, R. E. Kalman published a paper describing a method that processes a time series of measurements and predicts unknown variables more precisely than an estimate based on a single measurement alone; this method is referred to as the Kalman filter. The Kalman filter maintains state vectors that describe the system state along with its uncertainties. Its equations fall into two groups, time update and measurement update equations, which are performed recursively so that the Kalman filter can make predictions. Here, the Kalman filter is used to predict directly for future generations in the decision space, and the two major steps are described below [40].

3.1.1. Measurement Update

The measurement update equations are responsible for incorporating a new measurement into the a priori estimate to obtain an improved a posteriori estimate [41]. The individual solutions just before the change occurs are taken as the actual measurements of the previous predictions. This information is used to update the Kalman filter prediction model [42].

3.1.2. Time Update

The time update equations are responsible for projecting forward the current state and error covariance estimates to obtain the a priori estimates for the next step. New solutions are predicted based on the corrected Kalman filter associated with each individual in the decision space [43]. These are a priori estimates of the future.

Pareto-optimal solutions will then be used to update the reference points and subproblems. The specific equations for the two steps are presented in the following [44]:

Time update step:

x̂⁻_t = A x̂_{t−1} + B u_{t−1},
P⁻_t = A P_{t−1} Aᵀ + Q.

Measurement update step:

K_t = P⁻_t Hᵀ (H P⁻_t Hᵀ + R)⁻¹,
x̂_t = x̂⁻_t + K_t (z_t − H x̂⁻_t),
P_t = (I − K_t H) P⁻_t,

where x is the state vector to be estimated by the Kalman filter, A denotes the state transition model, u is the optional control input to the state x, B is the control input matrix, and P is the error covariance estimate [45]. z denotes the measurement of the state vector, H is the observation matrix, and the process and measurement noise covariance matrices are Q and R, respectively. K is the Kalman filter gain.

Here is an example, so we can understand the Kalman filter more intuitively.

As shown in Figure 2, the state vector of an object at period t obeys the magenta normal distribution, from which the position of the object at period t + 1 can be predicted; the predicted value is the blue normal distribution. The blue distribution is wider because a layer of process noise is added at each recursion, so the uncertainty grows. To avoid the deviation caused by pure estimation, a measurement of the object's position at period t + 1 is made, and the measurement results follow the red normal distribution. Through the five equations of the Kalman filter, the real state vector of the object at time t + 1 can be obtained; it follows the green normal distribution, which means the Kalman filter prediction can be carried out iteratively.
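The predict/correct cycle in this example can be illustrated with a scalar Kalman filter (a minimal sketch; the values of A, Q, H, R and the measurements are assumptions chosen for illustration):

```python
def kalman_1d(x, P, z, A=1.0, Q=0.5, H=1.0, R=1.0):
    """One predict/correct cycle of a scalar Kalman filter.
    The time update widens the state uncertainty by the process noise Q;
    the measurement update shrinks it using the measurement z."""
    # Time update (prediction): a priori estimate.
    x_prior = A * x
    P_prior = A * P * A + Q
    # Measurement update (correction): a posteriori estimate.
    K = P_prior * H / (H * P_prior * H + R)      # Kalman gain
    x_post = x_prior + K * (z - H * x_prior)
    P_post = (1 - K * H) * P_prior
    return x_post, P_post

# Track a roughly stationary value near 5 from noisy measurements:
# the estimate x converges and the uncertainty P shrinks.
x, P = 0.0, 10.0
for z in [4.8, 5.2, 5.1, 4.9]:
    x, P = kalman_1d(x, P, z)
```

After a few cycles the estimate settles near the true value while P drops well below its initial value, which is exactly the narrowing from the blue to the green distribution described above.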

Kalman filter has been widely applied to the dynamic multiobjective algorithm to ensure that the dynamic multiobjective algorithm can converge to the Pareto-optimal front end in time when the problem changes. It can be said that, at present, Kalman filter is one of the most effective methods to make the population converge to the Pareto-optimal set and the solution set converge to the Pareto-optimal front end. Therefore, this paper reversely applies it to the static multiobjective algorithm to promote the static multiobjective algorithm to converge to the Pareto-optimal front end faster.

In order to overcome the defects of MOGWO described above, this paper proposes a multiobjective Grey Wolf Algorithm based on Kalman filter transformation, which is hereinafter referred to as MKGWO.

After each iteration, a new grey wolf population is generated from the newly generated grey wolf population and the previous-generation population using the Kalman filter, applied with an update probability, which promotes the convergence of the solution set to the Pareto-optimal front. This strategy is called the Kalman filter operator.
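Since the exact formula of the operator is not reproduced here, the following is only a plausible sketch of the idea: with some update probability, an individual's newly generated position is corrected toward a gain-weighted blend of its previous and new positions (the gain value and the blending rule are our assumptions, not the paper's formula):

```python
import numpy as np

rng = np.random.default_rng(1)

def kalman_operator(prev_pop, new_pop, p_update):
    """Hypothetical sketch of the Kalman filter operator: with probability
    p_update, an individual's position becomes a gain-weighted correction
    of its previous-generation position toward the newly generated one."""
    gain = 0.6  # stand-in for a Kalman gain K (assumed value)
    out = new_pop.copy()
    for i in range(len(new_pop)):
        if rng.random() < p_update:
            # a posteriori-style correction: prior + K * (measurement - prior)
            out[i] = prev_pop[i] + gain * (new_pop[i] - prev_pop[i])
    return out

prev = np.zeros((4, 2))
new = np.ones((4, 2))
mixed = kalman_operator(prev, new, p_update=0.5)
```

Each individual is therefore either kept as newly generated or pulled partway back toward its previous state, which is the mixing of the two generations described above.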

3.2. Reinforcement Learning

In the field of big data and machine learning, learning techniques can be divided into supervised learning, unsupervised learning, and reinforcement learning. Reinforcement learning grew out of animal learning and parameter-perturbation adaptive control theory and refers to the mapping from environmental states to actions. It is a machine learning method that adapts to and interacts with the environment. Unlike supervised learning, which advises the agent what action to take through positive and negative examples, reinforcement learning finds the optimal behavioral strategy by trial and error.

As shown in Figure 3, the basic principle of reinforcement learning is a trial-and-evaluation process: the agent chooses an action to apply to the environment; the environment accepts the action, its state changes, and it produces a reinforcement signal. The agent then selects the next action according to the reinforcement signal and the current environmental state, with the selection principle of increasing the probability of receiving positive reinforcement. The selected action affects not only the immediate reinforcement value but also the state of the environment at the next moment and the final reinforcement value.

In MOGWO improved with the Kalman filter operator, we found that the update probability cannot be fixed: for different problems, the optimal update probability differs, and even for the same problem it differs at different periods of the iteration. We therefore use a reinforcement learning method to determine the update probability dynamically. In this paper, this improved method is collectively called the reinforcement learning operator, as shown in Figure 4.

The reinforcement learning method used here is based on a snap-drift neural network, which switches between snap mode and drift mode. In this operator, the agent (MOGWO) accepts the state (snap or drift) and the reward value at time t, then takes an action (increasing or decreasing the update probability) to move to a new state:

S_e is the number of nondominated solutions generated in this iteration. The conversion probability is determined by the proportion of updated individuals in the population in this iteration: snap mode is used when this proportion is less than 50%, and drift mode when it is greater than or equal to 50%. ω is the step size by which the update probability changes each time.
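The snap-drift adjustment of the update probability can be sketched as follows (the direction of each adjustment is our assumption; the paper's equations (25) and (26) are not reproduced here):

```python
def adjust_update_prob(p_u, update_ratio, omega=0.05):
    """Hypothetical sketch of the snap-drift operator: the mode is chosen
    from the fraction of individuals updated this iteration, and the
    update probability p_u is nudged by a step omega accordingly."""
    mode = "snap" if update_ratio < 0.5 else "drift"
    # Assumed action rule: raise p_u in drift mode (many individuals were
    # improved), lower it in snap mode, clipped to [0, 1].
    if mode == "drift":
        p_u = min(1.0, p_u + omega)
    else:
        p_u = max(0.0, p_u - omega)
    return p_u, mode

p, m = adjust_update_prob(0.5, update_ratio=0.7)   # drift: p_u rises to 0.55
```

This keeps p_u adaptive over the run instead of fixing it, which is the point made in the paragraph above.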

3.3. MKGWO Flow

MKGWO’s algorithm flow framework is as follows (Algorithm 1).

 Initialize the grey wolf population
 Initialize a, A, and C
 Calculate the objective values for each search agent
 Find the nondominated solutions and initialize the archive with them
 Select alpha as the first leader from the archive
 Exclude alpha from the archive temporarily to avoid selecting the same leader
 Select beta as the second leader from the archive
 Exclude beta from the archive temporarily to avoid selecting the same leader
 Select delta as the third leader from the archive
 Add back alpha and beta to the archive
 t = 1
 while (t < max number of iterations)
  for each search agent
   Update the position of the current search agent by equations (6)–(16)
  end for
  Update a, A, and C
  Invoke the Kalman filter operator by equations (18)–(24)
  Invoke the reinforcement learning operator by equations (25) and (26)
  Calculate the objective values of all search agents
  Find the nondominated solutions
  Update the archive with respect to the obtained nondominated solutions
  if the archive is full
   Run the grid mechanism to omit one of the current archive solutions
   Add the new solution to the archive
  end if
  Select alpha as the first leader from the archive
  Exclude alpha from the archive temporarily to avoid selecting the same leader
  Select beta as the second leader from the archive
  Exclude beta from the archive temporarily to avoid selecting the same leader
  Select delta as the third leader from the archive
  Add back alpha and beta to the archive
  t = t + 1
 end while
 Return the archive
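The archive-trimming step ("run the grid mechanism to omit one of the current archive solutions") can be sketched with a roulette wheel over segment occupancy, following the probability P_i = N_i / c given in Section 2.2 (the segment counts here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def crowded_segment_roulette(segment_counts, c=2.0):
    """Sketch of the archive-trimming roulette: a hypercube (grid segment)
    holding N_i archived solutions is chosen with probability proportional
    to N_i / c, so crowded segments are trimmed first."""
    weights = np.asarray(segment_counts, dtype=float) / c
    probs = weights / weights.sum()
    return rng.choice(len(segment_counts), p=probs)

# Segment 2 holds the most solutions, so it is the most likely to be trimmed.
picks = [crowded_segment_roulette([1, 2, 8, 1]) for _ in range(200)]
```

A solution is then removed from the selected segment, which preserves the spread of the archive by thinning its densest regions.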

4. Simulation Experiments

To test the performance of MKGWO, this section presents simulation experiments comparing MKGWO with MOGWO, MOPSO, NSGA2, MOEA/D, and PESA2 and analyzes the benchmark functions and the associated indices.

4.1. Experimental Environment and Benchmark Function
4.1.1. Experimental Environment

The simulation experiments were run on a Dawning 5000A supercomputer node with a Xeon X5620 CPU (4 cores), 24 GB of memory, and a 300 GB SAS hard disk, running the RHEL 5.6 operating system. The programming tool was MATLAB 2012a (for Linux).

4.1.2. Benchmark Function

In this paper, six benchmark functions are selected to evaluate the performance of the algorithm. This group of benchmark functions is widely used for testing multiobjective optimization algorithms. The function names, dimensions, ranges, and expressions are shown in Table 1. The six benchmark functions fall into two categories: Kursawe, Schaffer, ZDT1, and ZDT6 are two-objective test functions used to investigate the search ability of the algorithm in low-dimensional objective spaces; Viennet2 and Viennet3 are three-objective test functions with more Pareto points and greater search difficulty, used to further probe the overall performance of the algorithm. These test problems are considered among the most challenging in the literature, providing multiobjective search spaces with different Pareto-optimal fronts: convex, nonconvex, discontinuous, and multimodal.


Function name    Dimension
Kursawe          3
Schaffer         1
Viennet2         2
Viennet3         2
ZDT1             30
ZDT6             10
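As an example of the two-objective benchmarks in Table 1, ZDT1 can be implemented directly from its standard definition (a sketch from the literature's formulation; the table's own expressions were not preserved here):

```python
import numpy as np

def zdt1(x):
    """ZDT1 benchmark (30 variables in [0, 1], two objectives to minimize).
    The Pareto-optimal front is f2 = 1 - sqrt(f1), reached when x2..xn = 0."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

# A Pareto-optimal point: x1 is free, all other variables are 0, so g = 1.
x = np.zeros(30)
x[0] = 0.25
f1, f2 = zdt1(x)   # f1 = 0.25, f2 = 1 - sqrt(0.25) = 0.5
```

The distance function g penalizes any nonzero tail variables, which is what makes the front hard to reach for poorly converging algorithms.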

4.2. Contrast Indicators and Algorithm Parameters

For the performance metrics, we use the Inverted Generational Distance (IGD) to measure convergence and Spacing (SP) to quantify and measure coverage. The mathematical formulation of IGD is similar to that of the Generational Distance (GD); the modified measure is

IGD = sqrt( Σ_{i=1}^{n} d_i² ) / n,

where n is the number of true Pareto-optimal solutions and d_i indicates the Euclidean distance between the ith true Pareto-optimal solution and the closest obtained Pareto-optimal solution in the reference set. The Euclidean distance between obtained solutions and the reference set is different here: in IGD, the Euclidean distance is calculated for every true solution with respect to its nearest obtained Pareto-optimal solution in the objective space.

The mathematical formulations of the SP and MS measures are as follows:

SP = sqrt( (1 / (n − 1)) Σ_{i=1}^{n} (d̄ − d_i)² ), with d_i = min_j Σ_k |f_k(x_i) − f_k(x_j)| and d̄ the mean of all d_i,

MS = sqrt( Σ_{i=1}^{o} (max_j f_i(x_j) − min_j f_i(x_j))² ).
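Both indicators can be computed directly from their definitions (a sketch; `igd` and `spacing` are our own function names):

```python
import numpy as np

def igd(true_front, obtained):
    """Inverted Generational Distance: for each true Pareto point, take the
    Euclidean distance to its nearest obtained point, then sqrt(sum d^2)/n."""
    true_front, obtained = np.asarray(true_front), np.asarray(obtained)
    d = np.array([np.min(np.linalg.norm(obtained - p, axis=1))
                  for p in true_front])
    return np.sqrt(np.sum(d ** 2)) / len(true_front)

def spacing(obtained):
    """Spacing (SP): spread of each point's L1 distance to its nearest
    neighbour around the mean; 0 means a perfectly uniform solution set."""
    obtained = np.asarray(obtained)
    n = len(obtained)
    d = np.array([np.min([np.abs(obtained[i] - obtained[j]).sum()
                          for j in range(n) if j != i]) for i in range(n)])
    return np.sqrt(np.sum((d.mean() - d) ** 2) / (n - 1))

front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(igd(front, front))   # 0.0: the obtained set matches the true front
print(spacing(front))      # 0.0: neighbours are perfectly evenly spaced
```

Lower is better for both: IGD approaches 0 as the obtained set covers the true front, and SP approaches 0 as the solutions become evenly spaced.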

In the simulation experiments, the population size of each algorithm is 200, the archive size is 200, and the number of iterations is 500. Each algorithm was run independently 30 times, and the minimum, maximum, mean, and standard deviation of the results were recorded. The remaining parameters are shown in Table 2.


Algorithms    Parameters

KMGWO         Alpha = 0.1; beta = 4; gamma = 2; = 0.2; = 0.1
MOGWO         Alpha = 0.1; beta = 4; gamma = 2
MOPSO         = 0.4; c1 = 2; c2 = 2
NSGA2         pc = 0.9; pm = 0.5
MOEA/D        Gamma = 0.5
PESA2         = 0.5; h = 0.3; gamma = 0.15

4.3. Comparative Analysis of Experimental Results

The Pareto diagrams in Figures 5–10 show that the Pareto nondominated solutions generated by KMGWO are basically consistent with the true Pareto front and that the solution set is distributed relatively uniformly. The simulation results are then analyzed by data comparison. In the Kursawe function tests, the indicators of KMGWO and MOEA/D are the best, slightly better than PESA2, while the worst is NSGA2; this shows that, besides KMGWO, MOEA/D is also suitable for solving low-dimensional multiobjective problems. The Schaffer function tests show that KMGWO performs best; MOGWO and MOEA/D are slightly worse, although the standard deviation of MOGWO is smaller. On the three-objective benchmarks Viennet2 and Viennet3, PESA2 and KMGWO are respectively the two best-performing algorithms, and on Viennet2 KMGWO is only slightly behind PESA2. Taken together, KMGWO is very suitable for three-objective optimization problems, which may be associated with its stronger search ability; testing the algorithm on many-objective problems is a promising direction for future research. On these two test functions, KMGWO's standard deviation is also very good, indicating excellent stability on this class of functions. Two functions of the ZDT family proposed by Deb are selected: ZDT1 and ZDT6. On ZDT1, the best algorithm is MOGWO, MOPSO is slightly inferior, and KMGWO still performs well, ranking third. On ZDT6, KMGWO is the best algorithm, with excellent stability. In general, KMGWO's ability to approach the true Pareto front is very strong, especially when the number of objectives is high, so the algorithm has broad application prospects in production.

Concerning the IGD metric, the merit is clear: KMGWO significantly dominates MOGWO on almost all the problems. As shown in Table 3, KMGWO is the best-performing method in our comparison experiment. This strongly demonstrates that the reinforcement learning operator can effectively improve the overall performance of the algorithm. The reason for this superiority lies in the reinforcement learning operator: the optimal update probability differs at different periods of the iteration, the population is divided into many subpopulations, and each solution has its own neighbors.


IGD            Kursawe    Schaffer   Viennet2   Viennet3   ZDT1       ZDT6

KMGWO   Min    0.001268   0.000548   0.000195   0.00013    4.15E-05   0.000336
        Max    0.001467   0.000606   0.000221   0.000663   5.97E-05   0.000473
        Mean   0.001367   0.000577   0.000208   0.000397   5.06E-05   0.000404
        Std    0.00014    4.1E-05    1.84E-05   0.000377   1.28E-05   9.7E-05

MOGWO   Min    0.001429   0.000626   0.000207   0.000222   2.96E-05   0.002448
        Max    0.00208    0.000689   0.000218   0.000265   3.66E-05   0.01636
        Mean   0.001755   0.000657   0.000213   0.000244   3.31E-05   0.009404
        Std    0.000461   4.47E-05   7.86E-06   2.99E-05   4.93E-06   0.009838

MOPSO   Min    0.002178   0.000669   0.00036    0.0002     0.000267   0.026323
        Max    0.002426   0.000705   0.00039    0.000221   0.000296   0.030315
        Mean   0.002302   0.000687   0.000375   0.00021    0.000281   0.028319
        Std    0.000175   2.61E-05   2.08E-05   1.44E-05   2.11E-05   0.002823

NSGA2   Min    0.402172   0.09739    1.538041   0.85775    0.092674   0.134332
        Max    0.410001   0.097398   1.592757   0.870869   0.109724   0.185229
        Mean   0.406086   0.097394   1.565399   0.864309   0.101199   0.15978
        Std    0.005536   5.83E-06   0.03869    0.009277   0.012056   0.035989

MOEA/D  Min    0.001268   0.000605   2.33E-05   0.000105   3.44E-05   0.000864
        Max    0.004135   0.000613   4.94E-05   0.000208   4.66E-05   0.018473
        Mean   0.002701   0.000609   3.64E-05   0.000156   4.05E-05   0.009668
        Std    0.002027   5.98E-06   1.85E-05   7.28E-05   8.6E-06    0.012451

PESA2   Min    0.001274   0.000628   0.000183   0.000195   0.002272   0.021336
        Max    0.001549   0.000644   0.000347   0.000323   0.002847   0.052005
        Mean   0.001412   0.000636   0.000265   0.000259   0.00256    0.036671
        Std    0.000194   1.15E-05   0.000116   9.05E-05   0.000407   0.021687

Table 4 gives the statistics of SP. The SP value represents the degree of uniformity among Pareto solutions: the smaller the value, the more homogeneous the Pareto solutions obtained by the algorithm and the smaller the differences in distance between them. Kursawe's test results show that NSGA2's SP value is the smallest of the six algorithms, about 1.4, with MOGWO and PESA2 performing worse, suggesting that on simple two-objective test functions NSGA2 can generate the most homogeneous solution set. It is worth noting that KMGWO performs better than MOGWO, showing that the Kalman filter and reinforcement learning operators, working in synergy, can improve the uniformity of the solution set of the static multiobjective algorithm. Schaffer's test results show that the values of KMGWO and MOEA/D are the best, though with different standard deviations; on this test function, the overall performance of KMGWO is strong. The Viennet2 results show that the SP values of KMGWO, MOEA/D, PESA2, and NSGA2 are roughly the same, all around 0.2 to 0.3. This test function is a three-objective function whose image is three-dimensional but not complicated, indicating that most algorithms can find a relatively uniform solution set on it. The Viennet3 results show that MOEA/D's SP value is the best, with KMGWO and MOGWO next, and the other test algorithms far behind these three. In the ZDT1 results, the opposite was found: except for NSGA2, all the algorithms reached a value of around 0.05 to 0.08, and the uniformity of NSGA2 was the worst, at about 0.34. This situation may be caused by the crossover search behavior of NSGA2. In the ZDT6 results, the SP values of MOEA/D and MOGWO were the smallest, followed by KMGWO and then MOPSO. To sum up, KMGWO is quite a competitive algorithm in terms of the uniformity of the generated Pareto solution set, but other algorithms, such as MOEA/D, cannot be ignored, as their performance on this index is relatively excellent.


SP             Kursawe    Schaffer   Viennet2   Viennet3   ZDT1       ZDT6

KMGWO   Min    1.869705   0.464979   0.229326   1.681636   0.057698   0.170041
        Max    2.140332   0.628268   0.354561   2.394785   0.067846   0.255117
        Mean   2.005019   0.546624   0.291944   2.038211   0.062772   0.212579
        Std    0.191362   0.115463   0.088554   0.504272   0.007176   0.060158

MOGWO   Min    2.071943   0.544736   0.338835   1.691569   0.058589   0.082575
        Max    2.255649   0.588034   0.395661   2.20073    0.061941   0.180822
        Mean   2.163796   0.566385   0.367248   1.946149   0.060265   0.131699
        Std    0.129899   0.030616   0.040182   0.360031   0.00237    0.069471

MOPSO   Min    1.812654   0.597622   0.406334   2.205442   0.074537   0.309447
        Max    1.948818   0.601261   0.41442    2.221322   0.076896   0.435343
        Mean   1.880736   0.599441   0.410377   2.213382   0.075716   0.372395
        Std    0.096282   0.002573   0.005718   0.011228   0.001668   0.089021

NSGA2   Min    1.442097   1.198419   0.225471   2.55751    0.342942   0.245284
        Max    1.490551   1.228526   0.2299     2.645984   0.343657   0.372529
        Mean   1.466324   1.213472   0.227685   2.601747   0.343299   0.308907
        Std    0.034262   0.021289   0.003132   0.06256    0.000505   0.089976

MOEA/D  Min    1.645038   0.234028   0.238373   0.097528   0.054964   0.072255
        Max    1.655435   0.295418   0.312842   0.145355   0.063264   0.15472
        Mean   1.650237   0.264723   0.275607   0.121441   0.059114   0.113488
        Std    0.007352   0.04341    0.052658   0.033819   0.005869   0.058311

PESA2   Min    2.091793   0.599484   0.276225   2.362664   0.069884   0.22178
        Max    2.160544   0.638922   0.34284    2.398225   0.076021   0.814949
        Mean   2.126169   0.619203   0.309533   2.380444   0.072953   0.518364
        Std    0.048614   0.027887   0.047104   0.025145   0.00434    0.419434

5. Application to Energy Saving considering Flow-Shop Scheduling

5.1. Energy Consumption considering Flow-Shop Scheduling

The low-carbon scheduling problem in the flow shop studied in this section can be described as follows: n jobs need to go through m stages in the same flow direction, and each job has only one operation at each stage. The preparation (setup) time of the machine is considered, and it depends on the ordering of the two adjacent jobs; that is, setup times are sequence-dependent, and machine start-up is linked to the processing time of the jobs.

At different stages, the machine has different speed gears for production, which can be adjusted. From the point of view of energy consumption, the machine has four different states: processing (the machine is machining a job), start-up (the machine is preparing for a new job), standby (the machine is idle), and off (the machine is turned off). Under normal circumstances, when the machine works at a higher rate, the processing time is shortened but the corresponding energy consumption increases. Therefore, this problem aims to minimize both the completion time and the energy consumption index. Owing to these characteristics, the problem is much more complicated than the traditional flow-shop scheduling problem. The other settings of the problem are as follows.

Jobs are processed continuously in the workshop; in other words, an operation cannot be interrupted. Machines are allowed idle time, and there are unlimited buffers between stages. The machine boots when its first job starts processing and is shut down when all jobs are finished. The machine speed cannot be adjusted during the processing of a job.
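Under the settings above, evaluating one candidate schedule reduces to a forward pass over the jobs and stages. The following is a simplified sketch (the linear speed-power trade-off and the omission of setup and start-up energy are our assumptions, not the paper's full model):

```python
def evaluate(perm, base_time, speed, p_proc, p_idle):
    """Hypothetical evaluation of one flow-shop schedule. base_time[j][k] is
    the nominal time of job j at stage k; running at speed[k] divides the
    time and multiplies the processing power (assumed linear trade-off)."""
    n, m = len(perm), len(base_time[0])
    finish = [[0.0] * m for _ in range(n)]
    energy = 0.0
    for i, job in enumerate(perm):
        for k in range(m):
            dur = base_time[job][k] / speed[k]
            # A job starts when both the machine and the job are free.
            start = max(finish[i - 1][k] if i > 0 else 0.0,
                        finish[i][k - 1] if k > 0 else 0.0)
            if i > 0:  # standby energy while the machine waits between jobs
                energy += p_idle[k] * max(0.0, start - finish[i - 1][k])
            energy += p_proc[k] * speed[k] * dur   # higher speed, higher power
            finish[i][k] = start + dur
    return finish[n - 1][m - 1], energy            # makespan, total energy

mk, en = evaluate(perm=[0, 1], base_time=[[2.0, 2.0], [2.0, 2.0]],
                  speed=[1.0, 1.0], p_proc=[1.0, 1.0], p_idle=[0.1, 0.1])
# mk = 6.0 (two jobs through two 2-hour stages), en = 8.0
```

The two returned values correspond to the two objectives of the model: completion time and energy consumption, and a multiobjective optimizer such as MKGWO would trade them off over permutations and speed assignments.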

In order to present the mathematical model of the problem, we first define the following mathematical symbols according to the above description of the problem.

Symbol definitions are as below:
J: set of jobs, J = {1, 2, ..., n}.
M: set of processes (stages), M = {1, 2, ..., m}.
L: speed set of the process machine, L = {1, 2, ..., l}.
p_ijl: processing time of job i at stage j under speed l.
s_i'ij: setup time at stage j when the machine switches from job i' to job i; i' = i indicates that job i is the first job allotted to this machine.
PP_jl: power consumed by the stage-j machine running at speed l.
SP_j: power consumed by the stage-j machine during start-up.
IP_j: power consumed by the stage-j machine while running idle.

Decision variables are as below:
B_ij: initial processing time of job i at stage j.
E_ij: end processing time of job i at stage j.
X_i'ijl: binary decision variable; equal to 1 when job i is processed immediately after job i' at stage j at speed l, and 0 otherwise.
Y_i'ij: binary decision variable; equal to 1 when job i is processed immediately after job i' at stage j, and 0 otherwise.
U_j: auxiliary decision variable indicating the start-up time of the stage-j machine, determined by B_ij and s_i'ij.
V_j: auxiliary decision variable indicating the halt time of the stage-j machine, determined by E_ij.

Based on these mathematical symbols, the mixed-integer programming model of the flow-shop low-carbon scheduling problem is presented as follows.
Objective functions: formulas (29) and (30).
Constraint conditions: formulas (31)-(44).

Formula (29) stands for minimizing the completion time, and formula (30) represents minimizing the energy consumption. Formula (31) represents the total energy consumption when the machine is in the processing state, formula (32) the total energy consumption in the start-up state, and formula (33) the total energy consumption in the standby state. Formula (34) means that each job traverses all stages and that, at a specific stage, each job is assigned to one machine and processed at one speed level. Formula (35) means that interruption is not allowed during processing. Formula (36) ensures that an operation can start only after the operation of the previous stage has been completely processed. Formulas (37) and (38) guarantee the machine capacity limit, which means a job can be processed only after the previous job is completed. Formula (39) means that the machine starts processing immediately after setup is complete. Formula (40) defines an auxiliary variable equal to the minimum of the start and setup times of the jobs assigned to the corresponding machine. Formula (41) defines an auxiliary variable equal to the maximum end time of the jobs assigned to the corresponding machine. Formulas (42)-(44), respectively, define the feasible ranges of the decision variables.
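The precedence and capacity constraints (formulas (35)-(38)) lend themselves to a quick programmatic sanity check. The sketch below is an illustrative validator, not the paper's model: a schedule is assumed to be a dict mapping (job, stage) to a (start, end) interval, with one machine per stage.

```python
# Illustrative feasibility check for the constraints explained above:
# a job may enter stage j+1 only after its stage-j operation ends, and
# operations on the same single-machine stage must not overlap in time.

def is_feasible(schedule, n_jobs, n_stages):
    """schedule: dict (job, stage) -> (start, end), 0-indexed."""
    # Stage precedence (cf. formula (36)).
    for i in range(n_jobs):
        for j in range(n_stages - 1):
            if schedule[(i, j + 1)][0] < schedule[(i, j)][1]:
                return False
    # Machine capacity limit: no overlapping intervals per stage (cf. (37)-(38)).
    for j in range(n_stages):
        spans = sorted(schedule[(i, j)] for i in range(n_jobs))
        for (s1, e1), (s2, e2) in zip(spans, spans[1:]):
            if s2 < e1:
                return False
    return True

ok = {(0, 0): (0, 5), (1, 0): (5, 9), (0, 1): (5, 8), (1, 1): (9, 12)}
bad = {(0, 0): (0, 5), (1, 0): (3, 9), (0, 1): (5, 8), (1, 1): (9, 12)}  # overlap at stage 0
print(is_feasible(ok, 2, 2), is_feasible(bad, 2, 2))  # True False
```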

This section gives a simple example with three jobs and three stages, each machine having three different processing speeds. Table 5 shows the processing time and corresponding power consumption of the machine at each stage; that is, each element of the table is the pair (processing time, power consumption) for the corresponding job, speed, and stage.


Process     Job i = 1                       Job i = 2                       Job i = 3
            l = 1     l = 2     l = 3       l = 1     l = 2     l = 3       l = 1     l = 2     l = 3

j = 1       (28, 4)   (22, 7)   (15, 11)    (22, 4)   (20, 7)   (18, 11)    (20, 4)   (18, 7)   (17, 11)
j = 2       (20, 5)   (16, 8)   (12, 12)    (21, 5)   (18, 8)   (15, 12)    (23, 5)   (20, 8)   (17, 12)
j = 3       (19, 5)   (15, 7)   (11, 10)    (20, 5)   (16, 7)   (12, 10)    (23, 5)   (20, 7)   (16, 10)
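To make the speed-energy trade-off in Table 5 concrete, the snippet below re-encodes the stage j = 1 row as a lookup keyed by (job, speed) and computes energy as time times power for job 1. The dict encoding is merely an illustrative convenience; the numbers are taken from the table.

```python
# Stage j = 1 of Table 5, re-encoded: table[(i, l)] = (time, power) for
# job i at speed l. (Stages j = 2 and j = 3 follow the same pattern.)
table = {
    (1, 1): (28, 4), (1, 2): (22, 7), (1, 3): (15, 11),
    (2, 1): (22, 4), (2, 2): (20, 7), (2, 3): (18, 11),
    (3, 1): (20, 4), (3, 2): (18, 7), (3, 3): (17, 11),
}

# Energy (time * power) for job 1 at each speed: a higher speed shortens
# the processing time but, for this data, increases the total energy.
energies = [t * p for t, p in (table[(1, l)] for l in (1, 2, 3))]
print(energies)  # [112, 154, 165]
```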

Table 6 shows the sequence-dependent setup times; the element in row i′ and column (i, j) represents the setup time incurred at stage j when job i immediately follows job i′.


Job         Job i = 1               Job i = 2               Job i = 3
            j = 1   j = 2   j = 3   j = 1   j = 2   j = 3   j = 1   j = 2   j = 3

i′ = 1        5       8       4       4       6       8       6       4       4
i′ = 2        6       4       6       2       3       2       4       6       2
i′ = 3        2       4       8       2       4       2       6       4       4
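Table 6 can likewise be flattened into a programmatic lookup. In the sketch below, the indexing convention setup[(i_prev, i, j)] is an assumption based on the table layout (rows i′, column groups i, subcolumns j); the values are taken from the table.

```python
# Table 6 re-encoded: setup[(i_prev, i, j)] is the setup time at stage j
# when job i immediately follows job i_prev (i_prev == i would mean that
# job i is the first job allotted to the machine).
rows = {
    1: [5, 8, 4, 4, 6, 8, 6, 4, 4],
    2: [6, 4, 6, 2, 3, 2, 4, 6, 2],
    3: [2, 4, 8, 2, 4, 2, 6, 4, 4],
}
setup = {(ip, i, j): rows[ip][(i - 1) * 3 + (j - 1)]
         for ip in rows for i in (1, 2, 3) for j in (1, 2, 3)}

print(setup[(1, 2, 1)])  # setup at stage 1 when job 2 follows job 1 -> 4
```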

5.2. Experimental Results

To solve the above flow-shop scheduling problem, this paper sets the parameters of KMGWO as follows: population size 20, archive size 20, and 100 iterations. Figure 11 shows the Pareto front obtained by the KMGWO algorithm. It can be clearly observed that this problem is not difficult for KMGWO, and the Pareto-optimal solution set is easily obtained. Because there was not enough prior knowledge, we chose the scheme corresponding to the Pareto point with a completion time of 113 and an energy consumption of 1059.
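Selecting a point such as (completion time 113, energy 1059) presupposes a nondominated archive. A minimal sketch of Pareto filtering for two minimized objectives is shown below; only the (113, 1059) point comes from this paper, and the other candidate pairs are invented for illustration.

```python
# Hedged sketch: extract the nondominated (Pareto) set from candidate
# (makespan, energy) pairs, both objectives minimized. Candidates other
# than (113, 1059) are illustrative assumptions.

def pareto_front(points):
    front = []
    for p in points:
        # p is dominated if some other point is no worse in both objectives.
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
        if not dominated:
            front.append(p)
    return front

candidates = [(113, 1059), (120, 1100), (115, 1040), (130, 990)]
front = pareto_front(candidates)
print(front)  # (120, 1100) is dominated by (115, 1040) and drops out
```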

Table 7 shows the plan computed at this Pareto point, giving the setup start time, warm-up (setup) duration, working duration, and end time of each job in each process. From these data we drew the Gantt chart shown in Figure 12, an assembly-line schedule that balances time and energy. In practice, decision makers can choose any point along the Pareto front according to the actual demands of the enterprise and can thus deal with complex situations; this flexibility is the strength of the KMGWO algorithm.


            Job 2                       Job 3                       Job 1
            Begin  Start  Work  End     Begin  Start  Work  End     Begin  Start  Work  End

Process 1     0      2     15    17      17      2     22    41      41      6     20    67
Process 2    14      3     20    37      37      4     21    62      63      4     23    90
Process 3    35      2     19    56      60      2     20    82      86      4     23   113
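The schedule in Table 7 can be checked arithmetically: in each row, the setup start time plus the setup and working durations equals the end time, and the last operation ends at the reported completion time of 113. A quick verification sketch, with the data transcribed from Table 7:

```python
# (job, stage): (begin_of_setup, setup, work, end), transcribed from Table 7.
schedule = {
    (2, 1): (0, 2, 15, 17),  (3, 1): (17, 2, 22, 41), (1, 1): (41, 6, 20, 67),
    (2, 2): (14, 3, 20, 37), (3, 2): (37, 4, 21, 62), (1, 2): (63, 4, 23, 90),
    (2, 3): (35, 2, 19, 56), (3, 3): (60, 2, 20, 82), (1, 3): (86, 4, 23, 113),
}

# Every row is internally consistent: begin + setup + work == end.
assert all(b + s + w == e for b, s, w, e in schedule.values())

# The makespan equals the reported completion time.
print(max(e for _, _, _, e in schedule.values()))  # 113
```

Note that a setup may begin before the job arrives from the previous stage (it depends only on the job sequence): for example, job 2 at process 2 starts its setup at time 14 and begins working at 17, exactly when its process-1 operation ends.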

6. Conclusions

In this paper, an improved multiobjective operational model based on the Grey Wolf Optimization Algorithm combined with Kalman filtering and a reinforcement learning operator (KMGWO) is proposed, a combination of data mining technology and mathematical logic. The Kalman filter drives the solution set toward the true Pareto front. The reinforcement learning operator is applied to better exploit the dominant individuals of the group, and adaptive parameters are used instead of manual intervention. The results on six benchmark functions show that the algorithm outperforms the comparison algorithms in approximating the true Pareto-optimal solution set and in keeping the solution set evenly distributed. On the energy-saving assembly-line scheduling problem, KMGWO performs excellently and is accordingly suitable for solving practical optimization problems. This operational model has strong advantages in the field of mathematical optimization and can be applied to machine learning, engineering optimization design, and other important areas, thus enhancing the performance of energy saving in production management.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Social Science Foundation of China (NSSFC) under Grant no. 17BGL238.


Copyright © 2020 Lvjiang Yin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

