Abstract

A single-machine scheduling problem that minimizes the total weighted tardiness with energy consumption constraints in the actual production environment is studied in this paper. Based on the properties of the problem, an improved particle swarm optimization (PSO) algorithm embedded with a local search strategy (PSO-LS) is designed to solve this problem. To evaluate the algorithm, some computational experiments are carried out using PSO-LS, basic PSO, and a genetic algorithm (GA). Before the comparison experiment, the Taguchi method is used to select appropriate parameter values for these three algorithms since heuristic algorithms rely heavily on their parameters. The experimental results show that the improved PSO-LS algorithm has considerable advantages over the basic PSO and GA, especially for large-scale problems.

1. Introduction

Manufacturing is an important industry that consumes about one-third of the world’s energy [1] and 56% of China’s energy [2]. Saving energy not only reduces production costs but also protects the environment. Although it is possible to replace old machines with more expensive, energy-efficient ones, achieving energy savings through low-cost production scheduling is important [3].

Energy-saving scheduling has received increasing attention in recent years. The literature on energy-efficient scheduling has considerably expanded since 2013 [4].

Lee et al. [5] studied single-machine scheduling with energy efficiency under a time-varying electricity price policy and solved the problem using a dynamic control approach. Módos et al. [6] studied robust single-machine scheduling with release times to minimize total tardiness under periodic energy consumption limits. They proposed an efficient algorithm to obtain the optimal robust solution for a given processing order and used it in two exact algorithms and a Tabu search. Che [7] focused on biobjective optimization to minimize total energy consumption and maximum tardiness in a single-machine scheduling problem considering a power-down mechanism. The problem was built as a mixed integer linear programming (MILP) model, and exact and approximate algorithms were developed for small- and large-scale problems, respectively. Che et al. [8] established a new continuous-time mixed integer programming model for an energy-sensitive single-machine scheduling problem and proposed a greedy insertion algorithm to reduce the total electricity cost of production, which provided energy-saving solutions for large-scale single-machine scheduling problems, such as 5000 jobs, in a few tens of seconds. Chen et al. [9] studied the energy consumption of production activities from the point of view of machine reliability. They examined an actual problem in a rotor production workshop, a single-machine scheduling problem that optimizes delay costs and energy costs, and solved it with an ant colony algorithm embedded with a modified Emmons rule. In addition, they used sensitivity analysis to discuss special cases. By determining the start time of job processing, the idle time, and the time when the machine must be shut down, Shrouf et al. [10] aimed to minimize the cost of energy consumption in a single-machine production scheduling problem in which the energy price varies throughout the day. For large-scale problems, a genetic algorithm (GA) was used to obtain satisfactory solutions.

Interval number theory was used to describe the uncertainty of renewable energy, such as wind and solar energy [11], and two new biobjective optimization problems for interval single-machine scheduling were studied. Through parameter analysis, the authors provided decision-makers with guidelines for practical production environments. In [12], a memetic differential evolution (MDE) algorithm with performance superior to the strength Pareto evolutionary algorithm II (SPEA-II) and the nondominated sorting genetic algorithm II (NSGA-II) was proposed to solve an energy-saving biobjective unrelated parallel-machine scheduling problem that minimizes maximum completion time and total energy consumption. The MDE algorithm was enhanced by integrating a combination of a list scheduling heuristic and local search.

Given the difficulty of energy-concerned production scheduling problems, solving most of them by exact algorithms is unrealistic, especially for large-scale instances. Heuristic algorithms are therefore popular for solving these problems [4].

Particle swarm optimization (PSO) is one such heuristic algorithm, and it has been widely used in many fields. A new bare-bones multiobjective particle swarm optimization algorithm was first proposed by Zhang et al. [13] to deal with environmental/economic multiobjective optimization problems. It contained three innovations: an updating method for particles that does not need parameter adjustment, a dynamic mutation operator to improve search ability, and a global leader (gbest) updating method based on particle diversity. The results of the experiments demonstrated that the algorithm can obtain a good approximation of the Pareto front.

Song et al. [14] proposed a variable-size cooperative coevolutionary particle swarm optimization algorithm (VS-CCPSO) for evolutionary feature selection (FS) to deal with the “curse of dimensionality” on high-dimensional data. The idea of “divide and conquer” was applied in the cooperative coevolutionary approach, and several strategies tailored to the properties of the problem were developed, including a space division strategy, an adaptive adjustment mechanism for the subswarm size, a particle deletion strategy, and a particle generation strategy. The experimental results illustrated that VS-CCPSO can obtain a good subset of features, which indicates that VS-CCPSO is competitive in dealing with high-dimensional FS problems.

The paper by Zhang et al. [15] presented the first study of multiobjective particle swarm optimization (PSO) for cost-based feature selection in classification, aiming to obtain nondominated solutions. Several adjustments, including a probability-based encoding method, a hybrid operator, and ideas from the crowding distance, the external archive, and the Pareto domination relationship, were embedded in PSO to improve its search capability. Experimental results illustrated that this algorithm can automatically evolve a Pareto front and that it is a competitive method for the cost-based feature selection problem.

A fuzzy multiobjective FS method with particle swarm optimization, called PSOMOFS, was developed by Hu et al. [16], aiming at the feature selection problem with fuzzy cost. Specifically, a fuzzy dominance relationship, a fuzzy crowding distance measure, and a tolerance coefficient were used in the algorithm. Compared with other evolutionary methods and classical multiobjective FS methods, the experimental results indicated that the proposed algorithm can obtain feature sets with better performance in approximation, diversity, and feature cost.

In our earlier work [17], we studied a biobjective optimization problem. The drawbacks of the biobjective approach can be identified from two aspects. Firstly, it is difficult and sometimes puzzling for practitioners to choose the most suitable solution from the set of nondominated solutions output by the scheduling algorithm. Secondly, the optimization capability of multiobjective evolutionary algorithms is often not robust enough, which means the obtained nondominated solutions can be far from the true optimum.

In this paper, a single-machine scheduling problem considering energy consumption based on real production environment is studied. Numerous industries involving continuous production processes (e.g., continuous casting and hot rolling in the steel industry) are limited in their energy consumption over time by contracts with power suppliers. In particular, the amount of energy expended in each successive period must not exceed a certain limit. Production scheduling under such constraints is much more complex than under normal production conditions. Therefore, we focus on minimizing total weighted tardiness with energy consumption constraints. The total weighted tardiness captures the requirement of meeting the delivery time specified by customers. In other words, we are treating the problem as a single-objective optimization model, with energy-saving goals defined as constraints. This approach is closer to reality because energy considerations are usually expressed as constraints.

In order to solve our problem, a local search strategy was designed and embedded in the basic particle swarm optimization (PSO) algorithm [18], yielding the proposed PSO-LS. Besides, a constraint handling process was tailored for this problem; it is used to determine the starting time and completion time of each job so as to obtain the total weighted tardiness as the objective function value. To make the problem more realistic, the starting time of each job is kept accurate down to seconds without violating the constraints. The experimental results show that our enhanced algorithm PSO-LS is statistically superior to the basic PSO and GA algorithms, especially for large-scale problems.

The remainder of this paper is arranged as follows. In Section 2, we define the problem, analyze its difficulty, and provide a small-scale example to facilitate understanding. In Section 3, a PSO embedded with a local search strategy (PSO-LS) is proposed to solve this problem, with design details such as the decoding method. In Section 4, computational experiments are executed using PSO-LS, basic PSO, and GA to evaluate PSO-LS. Finally, we outline the conclusion and prospects for future studies in Section 5.

2. Problem Definition

2.1. Problem Statement

At the beginning, there are $n$ jobs, marked as $J_1, J_2, \dots, J_n$, waiting to be processed on one machine. Job $J_j$ requires a processing time of $p_j$ (h) and consumes $e_j$ (kWh) per hour, and it cannot start to be processed before the former job is finished. The total energy consumption of the jobs processed in the same processing time window shall not exceed $E$ (kWh). Time window $k$'s time interval is $[(k-1)T, kT)$, where $k$ is the label of the time window and $T$ is the length of the processing time window.

Job $J_j$ also has a due date $d_j$ and an importance $w_j$ due to customer requirements. We need to decide the starting time $st_j$ of job $J_j$ so that we can determine its completion time $C_j$ and its weighted tardiness $WT_j = w_j \max\{0, C_j - d_j\}$. Our target is to minimize the sum of the weighted tardiness of all the jobs, i.e., $\sum_{j=1}^{n} WT_j$, which is called the total weighted tardiness (TWT).

The following assumptions are considered in this problem:
(i) There are no uncertainties.
(ii) The length of the processing time window is large enough (e.g., $T \ge \max_j p_j$) so that the time window is reasonable.
(iii) The electricity limit $E$ guarantees that the problem is solvable.

The following mathematical model can be built to describe the problem in detail:

$\min \ \mathrm{TWT} = \sum_{j=1}^{n} WT_j$ (1)

subject to

$\sum_{j=1}^{n} e_j \cdot \max\{0, \min(C_j, kT) - \max(st_j, (k-1)T)\} \le E, \quad \forall k$ (2)

$(st_i - C_j)(st_j - C_i) \le 0, \quad \forall i \ne j$ (3)

$WT_j = w_j \cdot \max\{0, C_j - d_j\}, \quad \forall j$ (4)

$st_j \ge 0, \quad \forall j$ (5)

$WT_j \ge 0, \quad \forall j$ (6)

Expression (1) is the objective function used to minimize the TWT. Expression (2) enforces the energy constraint in each time window. Expression (3) expresses the constraint that at most one job is being processed at a time. Expression (4) calculates the weighted tardiness of each job. Expressions (5) and (6) give the ranges of the starting times and weighted tardiness, respectively.

Since Expression (2) contains max and min operators and Expression (3) is quadratic, this model is nonlinear.

2.2. Analysis of the Problem

According to the three-field notation of Graham et al. [19], our problem can be expressed as $1|\mathrm{EC}|\sum w_j T_j$, where EC denotes the energy consumption constraint. If the energy consumption constraint is not considered, the problem relaxes to $1\|\sum w_j T_j$, which is nondeterministic polynomial-time hard (NP-hard) [20]. Since the relaxed problem is the special case of our problem in which the energy limit never binds, an NP-hard problem can be reduced to our problem. So, the problem studied in this paper is at least NP-hard.

There are an infinite number of possible solutions to this problem. Due to energy consumption limits in each time window, some jobs may have to wait for some time after the previous job has been completed. Therefore, during certain time windows, the machine may need to be idle for a period of time. Jobs to be fully processed in such time windows have an infinite number of starting times to choose from because there are an infinite number of points on a line segment.

When jobs are processed as early as possible without violating the constraints, one permutation represents exactly one feasible solution. Considering the objective of minimizing the TWT, i.e., $\sum_{j=1}^{n} w_j T_j$, the objective value at an earlier starting time can never be worse than that at a later starting time in the same situation. Therefore, an optimal solution to the scheduling problem is included among the schedules induced by the permutations.

Based on the above analysis, job processing sequences are used to represent the solutions of the single-machine scheduling problem with energy consumption constraints. Once the order of the jobs is determined, the starting time of each job is also determined by the as-early-as-possible principle. For a problem with $n$ jobs, there are $n!$ possible solutions in the solution space.

2.3. A Small-Scale Example

A simple example is used to facilitate understanding of the solution. Consider 6 jobs at hand that need to be scheduled; the length of the time window $T$ equals 30, and the energy consumption limit $E$ equals 189 (kWh). The other figures are provided in Table 1.

To facilitate the actual production operation, the time at which a machining operation starts is accurate to the second. Since the processing time is measured in hours, we keep the starting processing times to two decimal places without violating the production constraints.

To find the optimal solution, recursion and backtracking are used to enumerate all $6! = 720$ permutations of the job processing order. The resulting solutions are labelled from 1 to 720, as shown in Figure 1. The elbow points are captured during the enumeration process.

The optimal solution, whose objective function value is 262.63, appears as the 637th feasible solution; its processing order is (6, 2, 4, 1, 3, 5), and the starting processing times of the jobs are (0, 10, 14, 29.42, 37.42, 57.51).

The schedule and energy consumption of the optimal solution are shown in Figures 2(a) and 2(b).
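To make the enumeration concrete, the following Python sketch decodes a permutation by scheduling each job as early as possible under the per-window energy limit and scans all 720 orders for the smallest TWT. It is a minimal sketch: the job data below are hypothetical placeholders (the values of Table 1 are not reproduced here), and the two-decimal rounding of starting times discussed above is omitted for brevity.

from itertools import permutations

def earliest_schedule(order, p, e, d, w, T, E):
    # Schedule the jobs in the given order as early as possible under the
    # per-window energy limit E and return the total weighted tardiness.
    t, k = 0.0, 0            # current time; window k covers [k*T, (k+1)*T)
    energy_left = E          # energy remaining in the current window
    twt = 0.0
    for j in order:
        remaining = p[j]
        while remaining > 1e-9:
            window_end = (k + 1) * T
            # run length limited by the job, the energy, and the window boundary
            run = min(remaining, energy_left / e[j], window_end - t)
            if run > 1e-9:
                t += run
                energy_left -= run * e[j]
                remaining -= run
            if remaining > 1e-9:      # wait for the next window to open
                k += 1
                t = max(t, k * T)
                energy_left = E
        twt += w[j] * max(0.0, t - d[j])
    return twt

# Hypothetical 6-job instance (NOT the data of Table 1)
p = [10, 4, 8, 4, 12, 10]     # processing times (h)
e = [9, 14, 11, 16, 8, 12]    # energy consumption per hour (kWh)
d = [30, 20, 55, 25, 80, 15]  # due dates (h)
w = [2, 3, 1, 2, 1, 3]        # importance weights
best = min(permutations(range(6)),
           key=lambda o: earliest_schedule(o, p, e, d, w, T=30, E=189))
print(best, earliest_schedule(best, p, e, d, w, T=30, E=189))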

3. Heuristic Algorithms

Due to the difficulty of the single-machine scheduling problem with energy consumption constraints, it becomes impossible to obtain an exact solution quickly as the number of jobs increases.

To solve this problem, it is necessary to use a suitable heuristic algorithm to obtain satisfactory solutions. Among the many heuristic algorithms, we choose PSO as our framework for three reasons. Firstly, PSO relies on only a few parameters. Secondly, it performs well in both convergence speed and diversity of solutions. Thirdly, its procedure is simple yet effective.

In this section, the basic PSO algorithm is first introduced. Then, the encoding and decoding algorithms for this problem are designed in detail. To improve the PSO algorithm, a local search based on the nature of the problem is proposed. Finally, the PSO-LS framework is described.

3.1. Basic PSO Algorithm

There is a group of particles in the solution space looking for a target. Every particle has, from the beginning, a position (pos) and a velocity (vel). The best previous location in its memory is called pbest, and the best particle in the swarm is called gbest. During each flight toward the target, each particle determines its velocity based on the velocity of the last flight, the previous pbest, and the previous gbest; pbest and gbest are then reselected based on the new locations. This iteration repeats until the stopping criterion is met. The gbest resulting from the final iteration is the result of the problem produced by PSO:

$vel^{t+1} = w \cdot vel^{t} + c_1 r_1 (pbest^{t} - pos^{t}) + c_2 r_2 (gbest^{t} - pos^{t})$ (7)

$pos^{t+1} = pos^{t} + vel^{t+1}$ (8)

The PSO algorithm can be described in detail as follows:
(1) Initialize the particle swarm. Specifically, initialize the position and velocity of each particle.
(2) Evaluate all the particles in the swarm according to their own positions.
(3) Initialize pbest and gbest. Assign the pos value of each particle to its own pbest, and use the best particle of the swarm to assign the initial value of gbest.
(4) Repeat steps 5 to 8 until the stopping criteria are satisfied.
(5) Update the particle swarm. The new velocity and new position of each particle are calculated by Expressions (7) and (8), respectively, where the inertia weight $w$ and the acceleration coefficients $c_1$ and $c_2$ are parameters that need to be set in advance, $r_1$ and $r_2$ are two different random real numbers with values ranging from 0.0 to 1.0, and $t$ is the number of the current iteration.
(6) Adjust the particle swarm. The position of each particle must lie in the range $[pos_{min}, pos_{max}]$, where both bounds are parameters of the PSO. Particles beyond this limit are adjusted to meet this constraint.
(7) Evaluate the swarm as in step 2.
(8) Obtain the new pbest and gbest.
(9) Output the gbest of the last iteration as the result of the problem produced by PSO.
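As a concrete illustration of steps 5 and 6, the following Python sketch updates one particle according to Expressions (7) and (8) and clamps the new position to $[pos_{min}, pos_{max}]$; the function name and signature are ours, introduced only for illustration.

import random

def update_particle(pos, vel, pbest, gbest, w, c1, c2, pos_min, pos_max):
    # One PSO step: Expression (7) for velocity, Expression (8) for position.
    r1, r2 = random.random(), random.random()
    new_vel = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
               for v, x, pb, gb in zip(vel, pos, pbest, gbest)]
    # step 6: clamp each coordinate to [pos_min, pos_max]
    new_pos = [min(max(x + v, pos_min), pos_max)
               for x, v in zip(pos, new_vel)]
    return new_pos, new_vel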

3.2. Encoding and Decoding

To apply the particle swarm algorithm to our problem, the position and velocity of each particle have $n$ dimensions, according to the number of jobs to be processed. In addition, the initial position and velocity of each particle are generated by formulas (9) and (10), respectively, where $j$ is the dimension index:

$pos_j^{0} = pos_{min} + (pos_{max} - pos_{min}) \times rand()$ (9)

$vel_j^{0} = vel_{min} + (vel_{max} - vel_{min}) \times rand()$ (10)

Since the basic PSO algorithm was originally designed for continuous problems, the smallest position value (SPV) rule is used as the decoding method to convert positions into integer sequences, which can be interpreted as job processing sequences.

The core of the SPV process is a sorting algorithm. It is simple, effective, and easy to implement, so we use it as the decoding method.

For example, given a 5-dimensional position [3.1, 4.3, 2.7, 2.2, 1.9], sorting the values of the different dimensions in ascending order yields [1.9, 2.2, 2.7, 3.1, 4.3]. The sorted values correspond to the original dimension indexes 5, 4, 3, 1, and 2, which are extracted to produce the job sequence (5, 4, 3, 1, 2).
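In Python, the SPV rule is a one-line sort; with the 5-dimensional example above (1-based job labels), it reproduces the sequence (5, 4, 3, 1, 2):

position = [3.1, 4.3, 2.7, 2.2, 1.9]
# sort the dimension indexes by their position values, ascending
order = [j + 1 for j in sorted(range(len(position)), key=lambda j: position[j])]
print(order)  # [5, 4, 3, 1, 2]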

The decoding method (SPV) is shown in Algorithm 1.

Require: position, n
Ensure: order
(1) pairs ← empty list
(2) order ← an array of length n
(3) for each j ∈ {1, 2, …, n} do
(4)  pair ← (position[j], j)
(5)  append pair to pairs
(6) end for
(7) sort pairs in ascending order of position value
(8) for each i ∈ {1, 2, …, n} do
(9)  order[i] ← pairs[i].index
(10) end for
(11) return order
3.3. Constraint Handling and Solution Obtaining

To ensure that these permutations can contain an optimal solution, jobs are scheduled as early as possible according to their processing order, in compliance with the energy consumption constraint.

Specifically, we need to decide where to place job $J_j$, the $i$th job of a given processing order, on the processing schedule in the current processing time window $k$. The jobs in the first $i - 1$ positions of the order have already been scheduled before job $J_j$.

To make the problem easier to solve, we only need to decide how long job $J_j$ occupies the current time window $k$ and the next time window $k + 1$. Each job only needs this one decision; then, the starting processing time of each job can be determined, and finally, the objective function value of the processing sequence can be used to evaluate its quality.

The times that job $J_j$ occupies in the current time window $k$ and in the next time window $k + 1$, expressed as $t_1$ and $t_2$, respectively, can be calculated by formulas (11) and (12), where $E'$ is the rest energy that can be used in the current time window, $e_j$ is the energy consumption per hour of job $J_j$, and $p_j$ is the processing time of job $J_j$:

$t_1 = \min\{p_j, E'/e_j, kT - st_j\}$ (11)

$t_2 = p_j - t_1$ (12)

We enumerate all the meaningful situations based on Expressions (11) and (12) to obtain the remaining time and the remaining energy, denoted $T'$ and $E'$, that are available to the job in the next position, and to determine the completion time of the current job.

All the cases are shown in Figure 3.
(1) $t_1 = p_j$ and $t_2 = 0$. The job $J_j$ is scheduled to be processed entirely in the current time window $k$, and its completion time can be calculated by $C_j = st_j + p_j$.
(a) Both energy and time are left in window $k$, so the remaining energy $E' - p_j e_j$ and the remaining time of window $k$ are available for the next job. This situation is shown in Figure 3(a).
(b) The energy consumption limit runs out after processing job $J_j$, so the next job must be scheduled in the next time window, with $E$ and $T$ available to it. Figure 3(b) displays this case.
(c) Window $k$ closes when job $J_j$ completes, so the next job has to be processed in the next time window $k + 1$, with $E$ and $T$ available to it. Figure 3(c) demonstrates this situation.
(d) Both the energy and the time of window $k$ run out, and the next job is scheduled to be processed in the next time window with $E$ and $T$. This case is described in Figure 3(d).
(2) $0 < t_1 < p_j$ and $t_2 > 0$. The job $J_j$ cannot be scheduled entirely in the time window $k$; its completion time can be calculated by $C_j = kT + t_2$, since the job takes $t_2$ hours in the next time window. We can obtain $E - t_2 e_j$ and $T - t_2$ for the next job.
(a) The electricity in time window $k$ runs out before the window closes, so there is a gap between the completion time of the portion of job $J_j$ processed in window $k$ and the starting time of the portion processed in window $k + 1$. This situation is shown in Figure 3(e).
(b) The first portion of job $J_j$ fills window $k$ up to its boundary, and the next portion begins processing as soon as window $k + 1$ opens. Figure 3(f) displays this case.
(c) This is similar to 2(b), except that the energy in the current time window also runs out. Figure 3(g) shows this situation in detail.
(3) $t_1 = 0$ and $t_2 = p_j$. The job $J_j$ has to start being processed in the next time window $k + 1$, with $E$ and $T$ available to it. Then, we need to reschedule the job in the next iteration, which may fall into situation 2(a) discussed above. An example of this situation is shown in Figure 3(h).

The procedure above repeats from $i = 1$ to $i = n$, the number of jobs. We can thus determine the completion time of each job so that the TWT can be calculated as the fitness of a particle.

Based on the above description, the constraint handling process is described in Algorithm 2. To keep the completion times accurate down to seconds, lines 7 and 17 to 22 were added to handle a raw start time with more than two decimal places. In line 7, we keep the start time to two decimal places to obtain $st_j$; for example, a raw start time with four decimal places is kept to two. In lines 17 through 22, $t_1$ and $t_2$ are recalculated based on $st_j$ instead of $t$, and the schedule is adjusted as well.
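The two-decimal handling of line 7 can be sketched as below; rounding up, rather than truncating down, is assumed here so that a job never starts before the machine and the energy are actually available:

import math

def start_time_two_decimals(t: float) -> float:
    # Round a raw start time (in hours) up to two decimal places so that
    # every constraint satisfied at time t still holds at the kept value.
    return math.ceil(t * 100.0) / 100.0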

Require: position and problemData
Ensure: schedule, starting times st, completion times C, and TWT
(1) order ← SPV(position)
(2) t ← 0
(3) k ← 1; E′ ← E ▹ E′ is the energy remaining in the current window k
(4) TWT ← 0
(5) for each i ∈ {1, 2, …, n} do
(6)  reschedule: j ← order[i]
(7)  st_j ← ⌈100 · t⌉/100 ▹ keep st_j to two decimal places without starting job j too early
(8)  t1 ← min{p_j, E′/e_j, kT − t}
(9)  t2 ← p_j − t1
(10) if t1 > 0 then
(11)  record t1 hours of job j in window k
(12)  E′ ← E′ − t1 · e_j
(13)  C_j ← st_j + t1
(14)  t ← C_j
(15)  if t2 > 0 then ▹ job j spans windows k and k + 1
(16)   ▹ lines 17–22 recalculate t1 and t2 based on st_j instead of t and adjust the schedule
(17)   t1 ← min{p_j, t1 + E′/e_j, kT − st_j}
(18)   t2 ← p_j − t1
(19)   C_j ← kT + t2
(20)   adjust the schedule of job j in windows k and k + 1
(21)   t ← C_j; k ← k + 1
(22)   E′ ← E − t2 · e_j
(23)  end if
(24)  st[i] ← st_j; C[i] ← C_j
(25)  TWT ← TWT + w_j · max{0, C_j − d_j}
(26) else
(27)  k ← k + 1; E′ ← E
(28)  t ← (k − 1) · T
(29)  go to reschedule
(30) end if
(31) end for
(32) return schedule, st, C, and TWT
3.4. Local Search

To improve the searching ability of the basic PSO, an insertion operator is applied to the gbest of the particle swarm in each iteration. This insertion operator can considerably increase the diversity of solutions, thus increasing the probability of significantly improving gbest. In addition, it is simple to implement and has low time and memory overhead.

The insertion operator for the processing order of gbest is described in Figure 4. It aims to insert a job $J_a$ from a time window $u$ immediately after a job $J_b$ in a later time window $v$.

This insertion operator can be easily applied to the job processing order.

Firstly, we need to select two different time windows: $u$ is a random integer selected from $[1, K - 1]$ and $v$ from $[u + 1, K]$, where $K$ is the number of occupied time windows. Then, two jobs need to be chosen from those two time windows: jobs $J_a$ and $J_b$ are chosen randomly from the jobs processed in windows $u$ and $v$, respectively. At this moment, job $J_a$ must be processed before job $J_b$ because time window $u$ is in front of time window $v$. Thirdly, for the jobs from the one immediately after job $J_a$ up to job $J_b$, their ordinal numbers need to be decreased by 1 in the processing sequence. Finally, we change the ordinal number of job $J_a$ in the processing order; the only task left is to place job $J_a$ immediately after job $J_b$.

Because the position of gbest is used to update the particles in the next iteration, we need to change the position of gbest to maintain consistency between the position and the processing order of gbest. A small example is provided in Table 2 to illustrate the process of adjusting the position of gbest according to the insertion operator.

Assume that the original gbest has the 5-dimensional position $pos_0 = [3.1, 4.3, 2.7, 2.2, 1.9]$ and the original order $order_0 = (5, 4, 3, 1, 2)$. If $a = 4$ and $b = 2$, we can determine the changed order according to the insertion operator for the order, as shown in Figure 4. For this instance, step 1 shown in Table 2 exchanges job $a$ with the job to its right, so we swap jobs 4 and 3 and obtain $pos_1$ by swapping the values of dimensions 3 and 4 of $pos_0$. At this moment, we have $order_1 = (5, 3, 4, 1, 2)$ and $pos_1 = [3.1, 4.3, 2.2, 2.7, 1.9]$. In step 2, we again swap job $a$ with the job to its right; this time, we exchange job 4 with job 1 according to $order_1$, so that $order_2 = (5, 3, 1, 4, 2)$ is obtained. To obtain $pos_2$, we swap the values of dimensions 4 and 1 of $pos_1$, that is, exchange 3.1 with 2.7. Step 3 is also based on its previous step; this time, we exchange job $a$ and job $b$ because job $b$ is immediately to the right of job $a$ in $order_2$. Similar to steps 1 and 2, $pos_3$ is obtained by swapping the values of dimensions 4 and 2 of $pos_2$. Finally, we get $pos_3 = [2.7, 3.1, 2.2, 4.3, 1.9]$. To verify its correctness, we can decode $pos_3$ using SPV and determine its order (5, 3, 1, 2, 4), which is equal to the target order.
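The walkthrough above can be reproduced with a few lines of Python; the 1-based job labels of the example are kept, and the position swaps mirror Table 2:

def insertion_operator(pos, order, a, b):
    # Move job a right after job b by repeated right-swaps, mirroring each
    # swap on the position vector so that SPV(pos) stays consistent with
    # the order. Jobs are 1-based labels; pos[j - 1] is dimension j.
    pos, order = pos[:], order[:]
    pa, pb = order.index(a), order.index(b)
    for r in range(pa, pb):                  # bubble job a rightward
        right = order[r + 1]
        order[r], order[r + 1] = right, a
        pos[a - 1], pos[right - 1] = pos[right - 1], pos[a - 1]
    return pos, order

pos3, order3 = insertion_operator([3.1, 4.3, 2.7, 2.2, 1.9],
                                  [5, 4, 3, 1, 2], a=4, b=2)
print(order3)  # [5, 3, 1, 2, 4]
print(pos3)    # [2.7, 3.1, 2.2, 4.3, 1.9]; SPV decodes this to (5, 3, 1, 2, 4)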

Based on all the descriptions above, in each iteration we apply the insertion operator shown in Algorithm 3 to the original gbest five times to obtain five different new candidates. The original gbest is replaced by the best of these five candidates, i.e., the one with the smallest objective function value, if it is better than the original gbest. Based on the above, the local search process is described in Algorithm 4.

Require: gbest, problemData
Ensure: gbest′
(1) time window u ← a random integer in [1, K − 1]
(2) time window v ← a random integer in [u + 1, K]
(3) a ← a random job processed in window u
(4) b ← a random job processed in window v
(5) pa ← the ordinal number of job a in gbest.order
(6) pb ← the ordinal number of job b in gbest.order
(7) initialize gbest′ as a copy of gbest
(8) for each r ∈ {pa, pa + 1, …, pb − 1} do
(9)  gbest′.order[r] ← gbest.order[r + 1]
(10) end for
(11) for each r ∈ {pa + 1, pa + 2, …, pb} do
(12)  swap the values of dimensions a and gbest.order[r] in gbest′.pos
(13) end for
(14) gbest′.order[pb] ← a
(15) return gbest′
Require: gbest, problemData
Ensure: gbest
(1) S ← ∅
(2) for each s ∈ {1, 2, …, 5} do
(3)  apply the insertion operator (Algorithm 3) to gbest to obtain a candidate gbest′_s
(4)  S ← S ∪ {gbest′_s}
(5) end for
(6) obtain the best candidate in S as gbest*
(7) if gbest* is better than gbest then
(8)  gbest ← gbest*
(9) end if
(10) return gbest
Require: parameter, problemData
Ensure: solution
(1)initializeSwarm (swarm, parameter, n)
(2)evaluateSwarm (swarm, problemData)
(3)initialize pbest and gbest
(4)while parameter.stopCriteria is not satisfied do
(5)  updateSwarm (swarm, parameter)
(6)  adjustSwarm (swarm, parameter)
(7)  evaluateSwarm (swarm, problemData)
(8)  localSearch (swarm.gbest, problemData)
(9)end while
(10)solution = swarm.gbest.solution
(11)return solution
3.5. PSO-LS Framework

The PSO-LS framework structure is shown in Algorithm 5. As described in the previous sections, the state of the swarm is initialized in the first three lines. The main loop of the PSO algorithm consists of lines 4 through 9. Because the behavior of each particle depends heavily on the global leader gbest, line 8, which aims to improve the quality of the globally best particle by calling Algorithm 4, is added before the end of each iteration.
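Putting the pieces together, a compact Python rendering of Algorithm 5 could look as follows. It is a sketch, not the authors’ implementation: evaluate stands for the SPV decoding plus the constraint handling of Algorithm 2, update_particle is the helper sketched in Section 3.1, and the local search here applies a simplified insertion move directly to the position vector instead of the window-based operator of Algorithm 3.

import random

def pso_ls(n, evaluate, swarm_size=150, iters=1000, w=0.9, c1=2.0, c2=2.0,
           pos_min=0.0, pos_max=4.0):
    # lines 1-3 of Algorithm 5: initialize the swarm, pbest, and gbest
    pos = [[random.uniform(pos_min, pos_max) for _ in range(n)]
           for _ in range(swarm_size)]
    vel = [[0.0] * n for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pbest_f = [evaluate(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):                        # lines 4-9
        for i in range(swarm_size):
            pos[i], vel[i] = update_particle(pos[i], vel[i], pbest[i], gbest,
                                             w, c1, c2, pos_min, pos_max)
            f = evaluate(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
        g = min(range(swarm_size), key=lambda i: pbest_f[i])
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g][:], pbest_f[g]
        # local search on gbest (Algorithm 4): five perturbed candidates
        for _ in range(5):
            cand = gbest[:]
            i, j = sorted(random.sample(range(n), 2))
            cand.insert(j, cand.pop(i))           # simplified insertion move
            f = evaluate(cand)
            if f < gbest_f:
                gbest, gbest_f = cand, f
    return gbest, gbest_f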

4. Computational Experiments

To evaluate the PSO-LS algorithm proposed in the previous section, we performed many experiments for comparison with the genetic algorithm (GA) [21].

To be specific, we used the main algorithm framework of a previous study [22] that uses GA to solve a scheduling problem. The initial population was generated randomly. We used C1 from [22] for the chromosomal representation and crossover operator. For the selection mechanism, we used roulette wheel selection [23] combined with an elitist strategy [23]. A shift mutation was applied to the GA according to the literature [22]. The probability of mutation $p_m$ was dynamic: at the beginning, $p_m = p_m^{0}$; in each iteration, $p_m$ decreases by a rate $r_m$, i.e., $p_m \leftarrow r_m \cdot p_m$; and whenever the ratio of the minimum fitness to the mean fitness in the population is larger than a constant real number $c$, i.e., $f_{min}/\bar{f} > c$, $p_m$ is reset to the initial value $p_m^{0}$.
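Under these assumptions about the notation ($p_m^{0}$, $r_m$, $c$), the dynamic mutation probability can be sketched as follows; the default values shown are the ones selected for the small-scale problems in Section 4.2.

def update_mutation_probability(p_m, fitnesses, p_m0=0.9, r_m=0.99, c=0.95):
    # Decay the mutation probability each generation; reset it to its initial
    # value once the population has nearly converged, i.e., when the minimum
    # fitness is close to the mean fitness (their ratio exceeds c).
    p_m = r_m * p_m
    if min(fitnesses) / (sum(fitnesses) / len(fitnesses)) > c:
        p_m = p_m0
    return p_m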

Before the comparison, experiments for parameter selection were needed because the performance of heuristic algorithms depends strongly on their parameters. We performed the same parameter selection experiment on PSO and GA.

4.1. Problem Data Setting

Before we started the experiments, the data of the problems were generated as follows:
(i) The processing time $p_j$ of job $J_j$ is generated from a uniform distribution.
(ii) The electricity consumption $e_j$ of job $J_j$ is generated from a uniform distribution.
(iii) The due date $d_j$ of job $J_j$ is generated from a uniform distribution.
(iv) The importance $w_j$ of job $J_j$ is generated from a uniform distribution.

All the problem instances handled in the experiments were generated randomly from these distributions.

4.2. Algorithm Parameters Setting

We used the Taguchi method [24] for parameter selection.

The Taguchi method, also called orthogonal experimental design, is used to determine the value of each factor, especially when the numbers of factors and factor levels are greater than 3. Based on orthogonal tables, the number of experiments can be greatly decreased. The brief steps of the Taguchi method are as follows: firstly, determine the responses, factors, and levels; secondly, select appropriate orthogonal tables; thirdly, execute the experiments according to the orthogonal tables and fill the tables with the experimental results; then, analyze the results and determine the factor-level combination; and at last, verify the effectiveness of the selected levels of the factors.

We took the size of the swarm, the inertia weight $w$, the acceleration coefficients $c_1$ and $c_2$, and one further parameter as the factors for the PSO algorithm. We took the size of the population, the probability of crossover $p_c$, the probability of mutation $p_m$, and the two parameters related to mutation, $r_m$ and $c$, as the factors for the GA algorithm. Each factor of both algorithms took 4 levels, as shown in Tables 3 and 4.

We conducted 16 experiments according to the orthogonal tables of parameters of each algorithm, as shown in Tables 5 and 6.

Considering that problems of different scales may need different parameter levels for effective search, we set the mean TWT of a random small-scale problem and of a random large-scale problem as the responses. A problem with 30 jobs was used as the small-scale problem and one with 70 jobs as the large-scale problem. To eliminate as much random interference as possible, the average objective function value over 20 executions of each algorithm was taken as the response.

After the tables were designed and filled, we needed to select parameters for the PSO and GA algorithms by analyzing Tables 5 and 6.

Then, we needed to decide which level would be selected for each factor. Take the swarm-size factor for the small-scale problem as an example, shown as the line plot on the top left of Figure 5. Firstly, consider the situation where the size equals 50. According to Table 5, the average TWT for a size of 50 on the small-scale problem can be calculated as the mean of the small-scale responses of the first, second, third, and fourth experiments. Secondly, similarly, we can obtain 3635.88475 for level 100, 3366.793375 for level 150, and 3411.819375 for level 200. Thirdly, we select 150 as the best level of this factor for the small-scale problem for PSO because level 150 has the smallest mean TWT of 3366.793375.
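This level averaging is mechanical enough to script; the routine below assumes one factor column of an orthogonal table (the level used in each experiment) and the matching response column:

def level_means(levels, responses):
    # Average the response over the experiments that used each level of one
    # factor; the best level is the one with the smallest mean TWT.
    sums, counts = {}, {}
    for level, twt in zip(levels, responses):
        sums[level] = sums.get(level, 0.0) + twt
        counts[level] = counts.get(level, 0) + 1
    return {level: sums[level] / counts[level] for level in sums}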

The average responses at the different levels of each factor need to be calculated, and then the level of each factor with the smallest mean TWT is chosen. We performed this procedure on the other factors of PSO and on all the factors of GA for both the small- and large-scale problems. The main effect plots for PSO and GA are shown in Figures 5 and 6, respectively. The lowest point of each line gives the value selected for the corresponding parameter.

As such, we determined all the parameter values for each algorithm. For the PSO algorithm, the values of the size of the swarm, the inertia weight $w$, $c_1$, $c_2$, and the remaining parameter are 150, 0.9, 0.1, 0.95, and 4 for the small-scale problem and 200, 0.9, 0.15, 0.95, and 4 for the large-scale problem, respectively. For the GA algorithm, similarly, the size of the population, the probability of crossover $p_c$, the probability of mutation $p_m$, and the two parameters related to mutation, $r_m$ and $c$, are 200, 0.5, 0.9, 0.99, and 0.95 for the small-scale problem and 200, 0.3, 0.9, 0.99, and 0.95 for the large-scale problem, respectively.

To verify the effectiveness of the levels of factors selected by the Taguchi method, the responses of the selected parameter values need to be compared with those of all the level combinations displayed in Figures 5 and 6. Problems of size 30 and size 70 were generated randomly, and their results are displayed in Tables 7 and 8. The values in line 17 of both tables are the levels of factors selected for the small-scale problems, and those in line 18 for the large-scale problems. It can be seen from the two tables that the fitness obtained by the levels of factors selected for the small-scale problems is the smallest value in its column. The TWT obtained by the levels of factors chosen for the large-scale problems is also the smallest in its column. Therefore, the parameter selections for PSO and GA by the Taguchi method are effective.

4.3. Performance Comparison

We compared the results of PSO-LS, PSO, and GA for problems on six different scales: 10, 30, 50, 70, 100, and 300 jobs. Problems with fewer than 70 jobs were considered small-scale; otherwise, they were considered large-scale.

When solving these two sets of problems of different scales, the algorithms use the parameter values of the corresponding scale obtained in the previous section.

For each scale, five different instances were randomly generated. When solving each problem, each algorithm was independently executed 20 times, and the average value of these 20 results was recorded to reduce the differences in the random initial solutions.

Each algorithm was given the same duration for each run. All algorithms were executed for 1, 10, 30, 50, 100, and 100 s when solving problems with 10, 30, 50, 70, 100, and 300 jobs, respectively. For small-scale problems, the algorithms almost converge within the set time, so we compare the results of each algorithm at convergence. For large-scale problems, because of their difficulty, the algorithms cannot converge in a short time, so we let them run long enough that the results get closer to the optimal solutions. Since time is vital in a production environment, it is important to get a better satisfactory solution in a shorter time; therefore, the computational time, rather than the number of iterations, is controlled in the computational experiments.

For the 20 runs, the comparison results of the average TWT obtained by PSO-LS, PSO, and GA are shown in Table 9.

Paired t-tests were conducted with a significance level of $\alpha = 0.05$ to analyze the performance of PSO-LS compared with both the basic PSO and GA.

For the same scale of problems, the differences between the objective function values obtained by PSO-LS and PSO shown in Table 9 can be expressed as $D_i = \mathrm{TWT}^{\mathrm{PSO}}_i - \mathrm{TWT}^{\mathrm{PSO\text{-}LS}}_i$, where $i = 1, 2, \dots, n$. The differences are assumed to be normally distributed; thus, $D_i \sim N(\mu_D, \sigma_D^2)$. We wish to test the hypothesis $H_0: \mu_D = 0$ against $H_1: \mu_D > 0$.

The number of samples, the sample mean, and the sample standard deviation are $n$, $\bar{D}$, and $S_D$, respectively. If $t = \bar{D}/(S_D/\sqrt{n}) > t_{\alpha}(n-1)$, $H_0$ is rejected and $\mu_D > 0$; that is, PSO has a greater TWT than PSO-LS, so the performance of PSO-LS is better than that of PSO. Otherwise, PSO-LS is considered to have no obvious advantage.
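The test reduces to a few lines; scipy.stats.ttest_rel offers the same computation, but the direct formula keeps the one-sided decision explicit. The input arrays here are placeholders for the per-instance TWT columns of Table 9:

import math

def paired_t_statistic(twt_a, twt_b):
    # One-sided paired t-statistic for H0: mu_D = 0 vs H1: mu_D > 0,
    # where D_i = twt_a[i] - twt_b[i].
    d = [x - y for x, y in zip(twt_a, twt_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Reject H0 at alpha = 0.05 when the statistic exceeds t_0.05(n - 1):
# 2.1318 for n = 5, 1.7613 for n = 15, and 1.6991 for n = 30.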

We used the above method to test the performance of PSO-LS against PSO at all scales. Firstly, subtract the values of the PSO-LS column from those of the PSO column in Table 9 to obtain the data in the second column of Table 10; values greater than 0 indicate that PSO-LS is better. Then, the mean and standard deviation of the differences at each problem scale (10, 30, 50, 70, 100, and 300 jobs) are calculated and recorded in the third and fourth columns of Table 10, respectively. The t-statistics at the different scales, obtained from these means and standard deviations, are displayed in the fifth column. Similarly, columns 2 through 5 of Table 11 show the differences between PSO-LS and GA at all scales. When the computed t-statistic is greater than $t_{0.05}(4) = 2.1318$, PSO-LS is considered to perform better for problems with the corresponding number of jobs. In the fifth column of the two tables, the data greater than 2.1318 are highlighted to show that PSO-LS is better for problems with the corresponding numbers of jobs.

In addition, to analyze the performance of PSO-LS on the small- and large-scale categories, the sample size is n = 15, and when $t > t_{0.05}(14) = 1.7613$, PSO-LS is considered statistically significantly better on the small- or large-scale category. These figures are placed in columns 6 through 8 of Tables 10 and 11. Column 8 contains the t-statistics for the small-scale and large-scale problems, and the data greater than 1.7613 are shown in bold.

In order to analyze the differences between PSO-LS and PSO or GA overall, the sample size is n = 30, and when $t > t_{0.05}(29) = 1.6991$, PSO-LS is considered statistically better. The means of the differences, the standard deviations, and the t-statistics are shown in the last three columns of Tables 10 and 11, and the numbers greater than 1.6991 are in bold.

In order to analyze the effectiveness of PSO-LS against the basic PSO, the differences, the averages of the differences, and the t-statistics are shown in Table 10. The gap is the difference between the TWT obtained by PSO and that obtained by PSO-LS, $\bar{D}$ is the mean of the gaps calculated from the 5 instances of the same job size, and $S_D$ is the standard deviation related to $\bar{D}$; the t-statistic of each scale can then be obtained by $t = \bar{D}/(S_D/\sqrt{5})$. We can also calculate the t-statistics of the small-scale and large-scale categories by $t = \bar{D}/(S_D/\sqrt{15})$, where $\bar{D}$ is the mean of the gaps from the small-scale or large-scale category and $S_D$ is their related standard deviation. Similarly, the overall t-statistic is obtained by $t = \bar{D}/(S_D/\sqrt{30})$, where $\bar{D}$ and $S_D$ are computed from all the data obtained from PSO and PSO-LS.

Similar to Table 10, Table 11 is given to analyze the performance of PSO-LS compared with GA.

The observations from the displayed results are as follows.

As for the comparison between PSO and PSO-LS, PSO-LS is statistically significantly better than the basic PSO according to Table 10: the differences between PSO and PSO-LS were almost all greater than zero (except for 10-4 and 30-1 in Table 9), and the t-statistic for all the differences was 2.58, which is greater than $t_{0.05}(29) = 1.6991$. As for performance on problems with the same number of jobs, the per-scale t-statistics were greater than $t_{0.05}(4) = 2.1318$ except for the problems with 10 jobs, which demonstrates the superior performance of PSO-LS over PSO except on the 10-job problems. The t-statistics for the small-scale and large-scale categories are both larger than $t_{0.05}(14) = 1.7613$. This can be attributed to the effectiveness of the local search strategy, by which a better gbest, the global leader on which the whole particle swarm depends, may be found in each iteration at only a little extra time cost, because the insertion operator is simple, efficient, and time-saving. Besides, there was an increasing tendency of the t-statistics with the increase in the number of jobs, starting at 0.46 and peaking at 14.09 for scale 300, which means the local search strategy improves the search capability as the complexity of the problems increases, although at small scales, such as 10 jobs, it does not have significant advantages. Moreover, the category-level and overall t-statistics are much smaller than the 14.09 obtained at scale 300 because the objective function values of problems at different scales vary dramatically, giving rise to large standard deviations, which yield smaller t-statistics according to the equation of the t-statistic, i.e., $t = \bar{D}/(S_D/\sqrt{n})$. The standard deviation depends on the differences among instances, which can be considered a property of the problems.

As for the comparison between GA and PSO-LS, the t-statistic was equal to 2.43 for all the gaps between GA and PSO-LS, which is larger than $t_{0.05}(29) = 1.6991$, so PSO-LS was significantly better than GA in general. The PSO framework is time-saving, leading to a lower time cost per iteration, so that within the execution time limit PSO-LS can complete more iterations than the GA algorithm with its relatively complex selection, crossover, and mutation operators. Moreover, the local search strategy, with its efficient insertion operator tailored specifically for this problem, improves gbest, on which all the particles rely heavily to update their states, leading to improvement in almost every iteration. Because of the energy constraints, the insertion operator can cause great changes. In contrast, the operators of GA, such as crossover and mutation, bring less diversity to the offspring, and good genes may be lost during inheritance.

More detailed observations on PSO-LS and GA follow. Firstly, when the number of jobs was 10, GA had a slightly larger average TWT than PSO-LS in most instances, except for instances 10-1 and 10-2 in Table 9, but the gaps between the objective function values obtained by PSO-LS and GA were smaller than 5.00 in all instances, as shown in Table 11. The t-statistic of the gaps between GA and PSO-LS was 1.71, which is smaller than $t_{0.05}(4) = 2.1318$. So, when the number of jobs was 10, PSO-LS was slightly better than GA, but there was no statistically significant difference between them. This is because the problems with 10 jobs were not difficult, and the algorithms had converged or were close to converging within 1 second.

Secondly, when the number of jobs was 30, PSO-LS obtained the smallest TWT values compared with PSO and GA except for 30-1, as shown in Table 9. So the differences between GA and PSO-LS were mostly nonnegative in Table 11. The t-statistic of the gaps was 2.26, which is greater than $t_{0.05}(4) = 2.1318$, according to which PSO-LS was statistically better than GA.

Thirdly, when the number of jobs was larger than 30, PSO-LS was much more effective than PSO and GA, as displayed in Table 9.

According to the per-scale column in Table 11, the t-statistics of the gaps in TWT between GA and PSO-LS were 4.58, 11.96, 15.58, and 32.82 for the scales from 50 to 300, which are all significantly larger than $t_{0.05}(4) = 2.1318$. The t-statistics in this column show an upward trend with the increase in the number of jobs, which illustrates the superior performance of PSO-LS over GA, especially for large-scale problems.

As for the t-statistics of the differences between GA and PSO-LS for the small-scale and large-scale categories in Table 11, they were 2.55 and 2.68, respectively, which are both greater than $t_{0.05}(14) = 1.7613$. This shows the good global searching capability of PSO-LS against GA.

In conclusion, PSO-LS has a strong search capability and finds better solutions than the basic PSO and GA. The larger the number of jobs, the larger the gap between the results of PSO-LS and GA. Because of the local search strategy, PSO-LS is statistically significantly better than the basic PSO and GA.

5. Conclusion

In this paper, we studied a single-machine scheduling problem with energy consumption limits to minimize total weighted tardiness. Based on the basic PSO algorithm, a local search was designed and embedded in the algorithm as PSO-LS to improve the performance of the original algorithm, which is one aspect of our main contribution. Another aspect of our contribution is the constraint handling process tailored for this problem, which is used to determine the starting time and completion time of each job to obtain the total weighted tardiness as the objective function value. The constraint handling process considers the actual production environment, keeping the starting processing times to two decimal places so that they are accurate down to seconds and using as much energy in each time window as possible without violating the energy constraints.

Experimental results showed that this enhanced algorithm, PSO-LS, is more effective than the original PSO and GA algorithms especially for large-scale problems, which demonstrated the effectiveness of our algorithm design.

This research can be extended in the following aspects: first, more operators inside PSO could be developed to improve the performance of the local search; second, other methods, such as the adaptive large neighborhood search algorithm (ALNS) [25], could be used to bring more diversity to the particles; third, initialization methods of PSO could be taken into account to improve both the diversity of particles and the quality of initial solutions; fourth, energy consumption limits with uncertainty could be taken into account to model more complex production environments; fifth, other PSO variants, such as binary PSO [26], could be considered as the algorithm framework to handle the discrete problem. These issues will be addressed in future studies.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Authors’ Contributions

Qingquan Jiang and Xiaoya Liao contributed equally to this work.

Acknowledgments

This research was funded by the Natural Science Foundation of China (Grant no. U1660202), Project of Ministry of Education of China (Grant no. 2019ITA01018), Social Science Foundation of Fujian Province (Grant no. FJ2019B101), and Fujian Provincial Department of Science and Technology-Soft Science Research Plan Project (Grant nos. 2019R0093 and 2019R0094).