Special Issue: Advanced Intelligent Fuzzy Systems Modeling Technologies for Smart Cities
Research Article | Open Access
Qingquan Jiang, Xiaoya Liao, Rui Zhang, Qiaozhen Lin, "Energy-Saving Production Scheduling in a Single-Machine Manufacturing System by Improved Particle Swarm Optimization", Mathematical Problems in Engineering, vol. 2020, Article ID 8870917, 16 pages, 2020. https://doi.org/10.1155/2020/8870917
Energy-Saving Production Scheduling in a Single-Machine Manufacturing System by Improved Particle Swarm Optimization
A single-machine scheduling problem that minimizes the total weighted tardiness with energy consumption constraints in the actual production environment is studied in this paper. Based on the properties of the problem, an improved particle swarm optimization (PSO) algorithm embedded with a local search strategy (PSO-LS) is designed to solve this problem. To evaluate the algorithm, some computational experiments are carried out using PSO-LS, basic PSO, and a genetic algorithm (GA). Before the comparison experiment, the Taguchi method is used to select appropriate parameter values for these three algorithms since heuristic algorithms rely heavily on their parameters. The experimental results show that the improved PSO-LS algorithm has considerable advantages over the basic PSO and GA, especially for large-scale problems.
Manufacturing is an important industry that consumes about one-third of the world’s energy and 56% of China’s energy. Saving energy not only reduces production cost but also protects the environment. Achieving energy savings through low-cost production scheduling is important, although it is possible to replace old machines with higher-priced, energy-efficient ones.
Energy-saving scheduling has received increasing attention in recent years. The literature on energy-efficient scheduling has considerably expanded since 2013.
Lee et al. studied single-machine scheduling with energy efficiency under a policy of time-varying electricity prices and solved the problem using a dynamic control approach. Módos et al. studied robust single-machine scheduling with release times to minimize total tardiness under periodic energy consumption limits. They proposed an efficient algorithm to obtain the optimal robust solution for a given processing order and used it in two exact algorithms and a Tabu search. Che focused on biobjective optimization to minimize total energy consumption and maximum tardiness in a single-machine scheduling problem considering a power-down mechanism. The problem was formulated as a mixed integer linear programming (MILP) model, and exact and approximate algorithms were developed for small- and large-scale problems, respectively. Che et al. established a new continuous-time mixed integer programming model for an energy-sensitive single-machine scheduling problem and proposed a greedy insertion algorithm to reduce the total electricity cost of production, which provided energy-saving solutions for large-scale single-machine scheduling problems, such as 5000 jobs, in a few tens of seconds. Chen et al. studied the energy consumption of production activities from the point of view of machine reliability. They examined an actual problem in a rotor production workshop, a single-machine scheduling problem that optimized delay costs and energy costs. An ant colony algorithm embedded with a modified Emmons rule was used to solve this problem, and sensitivity analysis was used to discuss special cases. By determining the start time of job processing, the idle time, and the time when the machine must be shut down, Shrouf et al. aimed to minimize the cost of energy consumption in a single-machine scheduling problem in which the energy price varied throughout the day.
For large-scale problems, a genetic algorithm (GA) was used to obtain a satisfactory solution.
Interval number theory was used to describe the uncertainty of renewable energy such as wind energy and solar energy, and two new biobjective optimization problems for interval single-machine scheduling were studied. Through parameter analysis, the authors provided decision-makers with guidelines for practical production environments. In another work, a memetic differential evolution (MDE) algorithm with performance superior to the strength Pareto evolutionary algorithm II (SPEA-II) and the nondominated sorting genetic algorithm II (NSGA-II) was proposed to solve an energy-saving biobjective unrelated parallel-machine scheduling problem minimizing the maximum completion time and the total energy consumption. The MDE algorithm was enhanced by combining a list-scheduling heuristic with local search.
Given the difficulty of energy-concerned production scheduling problems, solving most of them with exact algorithms is unrealistic, especially for large-scale instances. Heuristic algorithms are therefore popular for solving these problems.
Particle swarm optimization (PSO) is one such heuristic algorithm and has been widely used in many fields. A new bare-bones multiobjective particle swarm optimization algorithm was first proposed by Zhang et al. to deal with environmental/economic multiobjective optimization problems. It contained three innovations: a particle-updating method that does not require parameter adjustment, a dynamic mutation operator to improve search ability, and a global leader updating method based on particle diversity. Experimental results demonstrated that the algorithm can obtain a good approximation of the Pareto front.
Song et al. proposed a variable-size cooperative coevolutionary particle swarm optimization algorithm (VS-CCPSO) for evolutionary feature selection (FS) methods to deal with the “curse of dimensionality” on high-dimensional data. The idea of “divide and conquer” was applied in a cooperative coevolutionary approach, and several strategies tailored to the properties of the problem were developed, including a space division strategy, an adaptive adjustment mechanism for subswarm size, a particle deletion strategy, and a particle generation strategy. The experimental results illustrated that VS-CCPSO can obtain a good subset of features, which indicates that VS-CCPSO is competitive in dealing with high-dimensional FS problems.
The paper by Zhang et al. presented the first study of multiobjective particle swarm optimization (PSO) for cost-based feature selection in classification, aiming to obtain nondominated solutions. Several adjustments, including a probability-based encoding method, a hybrid operator, and the ideas of crowding distance, an external archive, and the Pareto domination relationship, were embedded in PSO to improve its search capability. Experimental results illustrated that this algorithm can automatically evolve a Pareto front and is a competitive method for the cost-based feature selection problem.
A fuzzy multiobjective FS method with particle swarm optimization, called PSOMOFS, was developed by Hu et al. for the feature selection problem with fuzzy cost. Specifically, a fuzzy dominance relationship, a fuzzy crowding distance measure, and a tolerance coefficient were used in the algorithm. Experimental results indicated that, compared with other evolutionary methods and classical multiobjective FS methods, the proposed algorithm obtains feature sets with better approximation, diversity, and feature cost.
In our earlier work, we studied a biobjective optimization problem. The drawbacks of the biobjective approach can be identified from two aspects. Firstly, it is difficult and sometimes puzzling for practitioners to choose the most suitable solution from the set of nondominated solutions output by the scheduling algorithm. Secondly, the optimization capability of multiobjective evolutionary algorithms is often not robust enough, which means the obtained nondominated solutions can be far from the true optimum.
In this paper, a single-machine scheduling problem considering energy consumption based on real production environment is studied. Numerous industries involving continuous production processes (e.g., continuous casting and hot rolling in the steel industry) are limited in their energy consumption over time by contracts with power suppliers. In particular, the amount of energy expended in each successive period must not exceed a certain limit. Production scheduling under such constraints is much more complex than under normal production conditions. Therefore, we focus on minimizing total weighted tardiness with energy consumption constraints. The total weighted tardiness captures the requirement of meeting the delivery time specified by customers. In other words, we are treating the problem as a single-objective optimization model, with energy-saving goals defined as constraints. This approach is closer to reality because energy considerations are usually expressed as constraints.
In order to solve our problem, a local search strategy was designed and embedded in the basic particle swarm optimization (PSO) algorithm, yielding the proposed PSO-LS. Besides, a constraint handling process was tailored for this problem, which determines the start processing time and completion time of each job so that the total weighted tardiness can be obtained as the objective function value. To make the problem more realistic, the starting time of each job is specified with a precision down to seconds without violating the constraints. The experimental results show that our enhanced algorithm PSO-LS is statistically superior to the basic PSO and GA, especially for large-scale problems.
The remainder of this paper is arranged as follows: in Section 2, we define the problem, analyze its difficulty, and provide a small-scale example to facilitate understanding. In Section 3, a PSO embedded with a local search strategy (PSO-LS) is proposed to solve this problem, with design details such as the decoding method. In Section 4, computational experiments are executed using PSO-LS, basic PSO, and GA to evaluate PSO-LS. Finally, Section 5 presents conclusions and prospects for future studies.
2. Problem Definition
2.1. Problem Statement
At the beginning, there are n jobs waiting to be processed on one machine. Job j requires a processing time of p_j (h) and consumes e_j (kWh) per hour, and it cannot start to be processed before the preceding job is finished. The total energy consumption of the jobs processed in the same processing time window shall not exceed E (kWh). Time window k’s time interval is [(k − 1)L, kL), where k is the label of the time window and L is the length of the processing time window.
Job j also has a due date d_j and an importance weight w_j determined by customer requirements. We need to decide the starting time s_j of job j so that we can determine its completion time C_j = s_j + p_j and its weighted tardiness w_j T_j, where T_j = max(C_j − d_j, 0). Our target is to minimize the sum of the weighted tardiness of all jobs, i.e., Σ_j w_j T_j, which is called the total weighted tardiness (TWT).
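Concretely, given completion times, the objective can be computed as below (a minimal sketch; the function and variable names are ours, not from the paper):

```python
def total_weighted_tardiness(completion, due, weight):
    """TWT = sum over jobs of w_j * max(C_j - d_j, 0)."""
    return sum(w * max(c - d, 0.0)
               for c, d, w in zip(completion, due, weight))

# Job 1 finishes 2 h late with weight 3; job 2 is early, so it adds nothing.
print(total_weighted_tardiness([12.0, 5.0], [10.0, 8.0], [3.0, 1.0]))  # → 6.0
```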
The following assumptions are considered in this problem:
(i) There are no uncertainties.
(ii) The length of the processing time window is at least as long as the longest job processing time, so that each time window is reasonable.
(iii) The electricity limit E is large enough to guarantee that the problem is solvable.
The following mathematical model can be built to describe the problem in detail:
Expression (1) is the objective function used to minimize TWT. Expression (2) is for the energy constraint. Expression (3) is used to express the constraint that one job at most is being processed at a time. Expression (4) calculates the weighted tardiness of each job. Expressions (5) and (6) are the ranges of the starting time and weighted tardiness, respectively.
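Since the expressions themselves were lost in extraction, a plausible reconstruction consistent with the verbal description above is given below, using our own symbols (s_j start time, C_j = s_j + p_j completion time, p_j processing time, e_j power, d_j due date, w_j weight, T_j tardiness, L window length, E per-window energy limit); these are a sketch, not necessarily the paper’s exact formulation:

```latex
% Reconstructed sketch of the model; symbols are ours, not necessarily the paper's.
\begin{align}
\min\ & \sum_{j=1}^{n} w_j T_j \tag{1}\\
\text{s.t.}\ & \sum_{j=1}^{n} e_j \,\bigl|\, [s_j,\, s_j + p_j] \cap [(k-1)L,\, kL] \,\bigr| \le E
  \quad \forall k \tag{2}\\
& s_{\sigma(i+1)} \ge s_{\sigma(i)} + p_{\sigma(i)} \quad \forall i < n
  \quad \text{(\(\sigma\) denotes the processing order)} \tag{3}\\
& T_j = \max\bigl(s_j + p_j - d_j,\, 0\bigr) \quad \forall j \tag{4}\\
& s_j \ge 0 \quad \forall j \tag{5}\\
& T_j \ge 0 \quad \forall j \tag{6}
\end{align}
```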
2.2. Analysis of the Problem
According to Graham et al., our problem can be expressed as 1 | energy consumption constraints | Σ w_j T_j. If the energy consumption constraint is not considered, the problem relaxes to 1 | | Σ w_j T_j, which is known to exhibit nondeterministic polynomial-time hardness (NP-hard). Since this NP-hard problem is a special case of ours, it can be reduced to our problem, and therefore the problem studied in this paper is at least NP-hard.
There are an infinite number of possible solutions to this problem. Due to energy consumption limits in each time window, some jobs may have to wait for some time after the previous job has been completed. Therefore, during certain time windows, the machine may need to be idle for a period of time. Jobs to be fully processed in such time windows have an infinite number of starting times to choose from because there are an infinite number of points on a line segment.
When jobs are processed as early as possible without violating the constraint conditions, one permutation represents exactly one feasible solution. Considering the objective of minimizing TWT, the objective function value obtained with an earlier starting time can never be worse than that obtained with a later starting time in the same situation. Therefore, an optimal solution to the scheduling problem is included among the permutations.
Based on the above analysis, job processing sequences are used to represent the solutions of the single-machine scheduling problem with energy consumption constraints. When the order of jobs is determined, the starting time of each job is also determined based on the principle of starting as early as possible. For a problem with n jobs, there are n! possible solutions in the solution space.
2.3. A Small-Scale Example
A simple example is used to facilitate understanding of the solution. Consider 6 jobs at hand that need to be scheduled, where the length of the time window equals 30 and the energy consumption limit per time window is equal to 189. The other data are provided in Table 1.
To facilitate the actual production operation, the starting time of each machining operation is accurate to the second. Since the processing time is measured in hours, we keep the starting processing time to two decimal places without violating production constraints.
To find the optimal solution, recursion and backtracking are used to enumerate all the permutations of the job processing order. Then, we have 6! = 720 solutions, labelled from 1 to 720, as shown in Figure 1. The elbow points are captured during the enumeration process.
The optimal solution whose objective function value is 262.63 appears at the 637th feasible solution, whose processing order is (6, 2, 4, 1, 3, 5) and the starting processing time for each job is (0, 10, 14, 29.42, 37.42, 57.51).
3. Heuristic Algorithms
Due to the difficulty of the single-machine scheduling problem with energy consumption constraints, it becomes impossible to obtain an exact solution quickly as the number of jobs increases.
To solve this problem, it is necessary to use a heuristic algorithm to obtain satisfactory solutions. Among the many heuristic algorithms, we choose PSO as our framework for three reasons. Firstly, PSO relies on only a few parameters. Secondly, it performs well in both convergence speed and diversity of solutions. Thirdly, its procedure is simple to implement and works well in practice.
In this section, the basic PSO algorithm is first introduced. Then, the encoding and decoding algorithms for this problem are designed in detail. To improve the PSO algorithm, a local search based on the nature of the problem is proposed. Finally, the PSO-LS framework is described.
3.1. Basic PSO Algorithm
A swarm of particles searches the solution space for a target. From the beginning, every particle has a position (pos) and a velocity (vel). The best previous location in a particle’s memory is called pbest, and the position of the best particle in the swarm is called gbest. During each flight, a particle determines its new velocity based on the velocity of its last flight, its previous pbest, and the previous gbest. The pbest and gbest values are then reselected based on the new positions. This iteration repeats until a stopping criterion is met. The gbest resulting from the final iteration is the result of the problem produced by PSO:
The PSO algorithm can be described in detail as follows:
(1) Initialize the particle swarm. Specifically, initialize the position and velocity of each particle.
(2) Evaluate all the particles in the swarm according to their positions.
(3) Initialize pbest and gbest. Assign the pos value of each particle to its own pbest, and use the best particle in the swarm to initialize gbest.
(4) Repeat steps 5 to 8 until the stop criteria are satisfied.
(5) Update the particle swarm. The new velocity and new position of each particle are calculated by Expressions (7) and (8), respectively, where the inertia weight and the acceleration coefficients are parameters that need to be set in advance, two different random real numbers ranging from 0.0 to 1.0 are drawn for each update, and an iteration counter indexes the current iteration.
(6) Adjust the particle swarm. The position of each particle must lie within prescribed lower and upper bounds, both of which are parameters of the PSO. Particles beyond these limits are adjusted to meet the constraint.
(7) Evaluate the swarm as in step 2.
(8) Obtain the new pbest and gbest.
(9) Output the gbest of the last iteration as the result of the problem produced by PSO.
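Expressions (7) and (8) did not survive extraction, but the update described in step 5 is the standard PSO rule; a minimal Python sketch (all names are ours) is:

```python
import random

def pso_step(pos, vel, pbest, gbest, w, c1, c2):
    """One particle update: the velocity rule (Expression (7)-style) blends the
    old velocity with attraction toward pbest and gbest; the position rule
    (Expression (8)-style) adds the new velocity to the old position."""
    r1, r2 = random.random(), random.random()  # fresh random numbers in [0, 1)
    new_vel = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

When a particle already sits at both pbest and gbest, the attraction terms vanish and only the inertia term w · vel remains, which is a quick sanity check for the implementation.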
3.2. Encoding and Decoding
To apply the particle swarm algorithm to our problem, the position and velocity of each particle have as many dimensions as there are jobs to be processed. In addition, the position and velocity of each particle are updated dimension by dimension by formulas (9) and (10), respectively, where the subscript denotes the dimension index:
Since the basic PSO algorithm was originally designed for continuous problems, the smallest position value (SPV) rule is used as the decoding method to convert positions into integer sequences, which can be interpreted as job processing sequences.
The core of the SPV process is a sorting algorithm. It is simple, effective, and easy to implement, so we use it as the decoding method.
For example, given the 5-dimensional position [3.1, 4.3, 2.7, 2.2, 1.9], the list [1.9, 2.2, 2.7, 3.1, 4.3] can be obtained by placing the values of the different dimensions in ascending order. The original dimension indexes corresponding to these values are then extracted to produce the sequence (5, 4, 3, 1, 2).
The decoding method (SPV) is shown in Algorithm 1.
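A minimal Python sketch of the SPV decoding (the function name is ours):

```python
def spv_decode(position):
    """Smallest position value rule: sort dimensions by their values in
    ascending order and emit the (1-based) original dimension indexes,
    which form the job processing sequence."""
    ranked = sorted(enumerate(position), key=lambda item: item[1])
    return [index + 1 for index, _ in ranked]

# The example from the text:
print(spv_decode([3.1, 4.3, 2.7, 2.2, 1.9]))  # → [5, 4, 3, 1, 2]
```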
3.3. Constraint Handling and Solution Obtaining
To ensure that the set of permutations contains an optimal solution, jobs are scheduled as early as possible according to their processing order while complying with the energy consumption constraint.
Specifically, given a processing order, we need to decide where to place each job on the processing schedule within the current processing time window; all jobs earlier in the order have already been scheduled.
To make the problem easier to solve, we only need to decide how much time the job spends in the current time window and in the next time window. Once this decision has been made for every job, the starting processing time of each job is determined, and finally the objective function value of this processing sequence can be used to evaluate its quality.
The times the job spends in the current time window and in the next time window, expressed as t1 and t2, respectively, can be calculated by formulas (11) and (12), where R is the remaining energy that can be used in the current time window, e_j is the energy consumption per hour of job j, and p_j is the processing time of job j:
We enumerate all the meaningful situations based on Expressions (11) and (12) to obtain the remaining time and remaining energy available to the job in the next position, and to determine the completion time of the current job.
All the cases, expressed in terms of the time t1 the job spends in the current window and the time t2 it spends in the next window, are shown in Figure 3.
(1) t1 = p_j and t2 = 0. The job is scheduled to be processed entirely in the current time window, and its completion time is its starting time plus p_j.
(a) Both time and energy remain in the current window, and they are passed on to the next job. This situation is shown in Figure 3(a).
(b) The energy of the current window runs out after processing the job, so the next job must be scheduled in the next time window, whose time and energy are passed on to it. Figure 3(b) displays this case.
(c) The time of the current window runs out, so the next job has to be processed in the next time window, whose time and energy are passed on to it. Figure 3(c) demonstrates this situation.
(d) Both time and energy run out, and the next job is scheduled in the next time window with that window’s full time and energy. This case is described in Figure 3(d).
(2) 0 < t1 < p_j and t2 > 0. The job cannot be scheduled entirely in the current time window. Its completion time is its starting time plus p_j, and it takes t2 hours of the next time window; the remaining time and energy of that window are passed on to the next job.
(a) The energy of the current time window runs out before the job can occupy the rest of the window, so there is a gap between the completion time of the previous job and the starting time of the current job. This situation is shown in Figure 3(e).
(b) The current job can begin processing as soon as the previous job is complete. Figure 3(f) displays this case.
(c) This is similar to 2(b), except that the energy in the current time window runs out. Figure 3(g) shows this situation in detail.
(3) t1 = 0 and t2 = p_j. The job has to start being processed in the next time window, with that window’s full time and energy. The job is then rescheduled in the next iteration, which may reduce to situation 2(a) discussed above. An example of this situation is shown in Figure 3(h).
The procedure above repeats from the first job to the last job in the processing order. We can then determine the completion time of each job, so that TWT can be calculated as the fitness of a particle.
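The case analysis above can be condensed into a small as-early-as-possible scheduler. The following Python sketch is a simplified stand-in for the paper’s Algorithm 2, not a faithful reproduction (all names, the tolerance handling, and the assumption that a job spans at most two windows are ours); it delays a job within a window exactly enough that its current-window portion fits the remaining energy:

```python
def earliest_starts(order, p, e, L, E):
    """Start each job as early as possible under a per-window energy cap E.
    Simplified sketch: assumes each job spans at most two consecutive
    windows (p[j] <= L and e[j] * p[j] <= 2 * E)."""
    rest = {}                 # window index -> remaining energy (defaults to E)
    t = 0.0                   # time at which the machine becomes free
    starts = []
    for j in order:
        while True:
            k = int(t // L)
            win_end = (k + 1) * L
            R = rest.get(k, E)
            t1 = min(p[j], win_end - t, R / e[j])   # hours usable in window k
            t2 = p[j] - t1                          # hours spilling into window k+1
            if t2 > 1e-9 and e[j] * t2 > rest.get(k + 1, E) + 1e-9:
                t = float(win_end)                  # window k+1 too tight: retry there
                continue
            # if t1 was limited by energy, delay the start so that the
            # in-window portion ends exactly at the window boundary
            s = t if t1 >= min(p[j], win_end - t) - 1e-9 else win_end - t1
            rest[k] = R - e[j] * t1
            if t2 > 1e-9:
                rest[k + 1] = rest.get(k + 1, E) - e[j] * t2
            starts.append(s)
            t = s + p[j]
            break
    return starts
```

For instance, with three 4 h jobs at 20 kWh/h, L = 10, and E = 100, the first window can hold only one full job plus 1 h of the second, so the second and third jobs are delayed to start at hours 9 and 18, respectively.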
Based on the above description, the constraint handling process is described in Algorithm 2. To keep the completion time precise down to seconds, lines 7 and 17 to 22 were added to handle time values with more than two decimal places. In line 7, the time spent in the current window is kept to two decimal places. In lines 17 through 22, t1 and t2 are recalculated based on the truncated value instead of the exact one, and the schedule is adjusted accordingly.
3.4. Local Search
To improve the searching ability of the basic PSO, an insertion operator is applied to the gbest of the particle swarm in each iteration. This insertion operator can considerably increase the diversity of solutions, thus increasing the probability of significantly improving gbest. In addition, it is simple to implement and has low time and memory overhead.
The insertion operator for the processing order of gbest is described in Figure 4. It aims to insert a job selected from an earlier time window immediately after a job selected from a later time window.
This insertion operator can be easily applied to the job processing order.
Firstly, we need to select two different time windows: two random integers are drawn so that the first selected window precedes the second. Then, two jobs need to be chosen, one at random from each of the selected time windows. At this moment, the first selected job must be processed before the second because its time window comes first. Thirdly, for the jobs from the one immediately after the first selected job up to the second selected job, the ordinal numbers in the processing sequence are decreased by 1. Finally, we change the ordinal number of the first selected job in the processing order; the only task to perform is to place it immediately after the second selected job.
Because the position of gbest is used to update particles in the next iteration, we need to change the position of gbest to maintain the consistency of the position and processing order of gbest. A small example is provided in Table 2 to illustrate the process of adjusting the position of gbest according to the insertion operator.
Assume that the original gbest has the 5-dimensional position X0 = [3.1, 4.3, 2.7, 2.2, 1.9] and the original order (5, 4, 3, 1, 2). If job 4 is selected to be inserted after job 2, the changed order (5, 3, 1, 2, 4) can be determined according to the insertion operator for the order, as shown in Figure 4. For this instance, step 1 shown in Table 2 involves exchanging the selected job with the job to its right. So we need to swap jobs 4 and 3, and we can obtain X1 = [3.1, 4.3, 2.2, 2.7, 1.9] by swapping the values of dimensions 3 and 4 of X0. At this moment, we have the order (5, 3, 4, 1, 2). In step 2, we again swap the selected job with the job to its right; this time, we need to exchange job 4 with job 1, so the order (5, 3, 1, 4, 2) is obtained. To obtain X2 = [2.7, 4.3, 2.2, 3.1, 1.9], we swap the values of dimensions 4 and 1 of X1, that is, exchanging 3.1 with 2.7. Step 3 is also based on its previous step; this time, we need to exchange job 4 and job 2 because job 2 is immediately to the right of job 4 in the current order. Similar to steps 1 and 2, X3 is obtained by swapping the values of dimensions 4 and 2 of X2. Finally, we get X3 = [2.7, 3.1, 2.2, 4.3, 1.9]. To verify its correctness, we can decode X3 using SPV and determine its order (5, 3, 1, 2, 4), which is equal to the target order.
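The swap-by-swap adjustment illustrated in Table 2 can be sketched directly in Python (the function name is ours); each adjacent swap in the order is mirrored by swapping the corresponding two dimension values of the position vector:

```python
def insert_job_after(order, position, j1, j2):
    """Move job j1 (which precedes j2) to the slot right after job j2 via
    repeated adjacent swaps, mirroring each swap on the position vector so
    that SPV decoding of the adjusted position yields the adjusted order.
    Jobs are 1-based; position[j - 1] is the value of dimension j."""
    order, position = order[:], position[:]
    i, target = order.index(j1), order.index(j2)
    while i < target:
        neighbour = order[i + 1]
        order[i], order[i + 1] = order[i + 1], order[i]
        position[j1 - 1], position[neighbour - 1] = (
            position[neighbour - 1], position[j1 - 1])
        i += 1
    return order, position

# The example from the text: insert job 4 after job 2.
print(insert_job_after([5, 4, 3, 1, 2], [3.1, 4.3, 2.7, 2.2, 1.9], 4, 2))
# → ([5, 3, 1, 2, 4], [2.7, 3.1, 2.2, 4.3, 1.9])
```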
Based on all the descriptions above, in each iteration, we applied the insertion operator shown in Algorithm 3 to the original gbest five times to obtain five different new gbests. The original gbest was replaced by the best of these five, i.e., the one with the smallest objective function value, if it was better than the original gbest. Based on the above, the local search process is described in Algorithm 4.
3.5. PSO-LS Framework
The PSO-LS framework structure is shown in Algorithm 5. As described in the previous sections, the state of the swarm is initialized by the first three lines. The main loop of the PSO algorithm consists of lines 4 through 9. Because the behavior of each particle depends heavily on the global leader gbest, the eighth line, which aims to improve the quality of the global best particle by calling Algorithm 4, is added before the end of each iteration.
4. Computational Experiments
To evaluate the PSO-LS algorithm proposed in the previous section, we performed many experiments for comparison with the genetic algorithm (GA).
To be specific, we used the main algorithm framework from a previous study that applies GA to the scheduling problem. The initial population was generated randomly. We used C1 from the literature for the chromosomal representation and crossover operator. For the selection mechanism, we used roulette wheel selection hybridized with an elitist strategy. A shift mutation was applied in the GA according to the literature. The probability of mutation was dynamic: it takes an initial value at the beginning and decreases at a fixed rate in each iteration, and when the ratio of the minimum fitness to the mean fitness of the population becomes larger than a constant threshold, the mutation probability is reset to its initial value.
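Our reading of this dynamic mutation rule can be sketched as follows (the multiplicative decay form, the threshold test, and all names are assumptions on our part, since the original symbols were lost):

```python
def next_mutation_prob(pm, pm_init, decay_rate, fitnesses, threshold):
    """Dynamic mutation probability for the GA: it decays every generation,
    but is reset to its initial value when min(fitness)/mean(fitness)
    exceeds the threshold, i.e. when the population has nearly converged
    (for minimization, near-equal fitness values)."""
    mean_fit = sum(fitnesses) / len(fitnesses)
    if min(fitnesses) / mean_fit > threshold:
        return pm_init          # converged population: restore exploration
    return pm * (1.0 - decay_rate)  # otherwise keep decaying
```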
Before the comparison, experiments for parameter selection were needed because the performance of heuristic algorithms depends strongly on their parameters. We performed the same parameter selection experiment on PSO and GA.
4.1. Problem Data Setting
Before we started the experiments, the problem data were generated as follows:
(i) The processing time of each job is uniformly generated within a given range.
(ii) The electricity consumption per hour of each job is drawn from a uniform distribution.
(iii) The due time of each job is drawn from a uniform distribution.
(iv) The importance (weight) of each job is drawn from a uniform distribution.
All the problems handled in the experiments were generated randomly based on the distributions.
4.2. Algorithm Parameters Setting
We used the Taguchi method  for parameter selection.
The Taguchi method, also called orthogonal experimental design, is used to determine the value of each factor, especially when the numbers of factors and factor levels are greater than 3. Based on orthogonal tables, the number of experiments can be greatly decreased. The brief steps of the Taguchi method are as follows: firstly, determine the responses, factors, and levels; secondly, select an appropriate orthogonal table; thirdly, execute the experiments according to the orthogonal table and fill the table with the experimental results; then, analyze the results and determine the factor-level combination; and finally, verify the effectiveness of the selected factor levels.
We took the swarm size, the inertia weight, and the acceleration coefficients as factors for the PSO algorithm. We also took the population size, the probability of crossover, the probability of mutation, and the other two parameters governing the dynamic mutation probability as the factors for the GA algorithm. Each factor of both algorithms took 4 levels, as shown in Tables 3 and 4.