Abstract

The single-machine scheduling problem with fixed periodic preventive maintenance, in which preventive maintenance is implemented periodically to keep the machine in good operational condition and to decrease the cost caused by sudden machine failures, is studied in this paper. The adopted objective function is to minimise the total weighted completion time, which represents the minimisation of the global holding/inventory cost in the system. This problem has been proven to be NP-hard; a position-based mixed integer programming model and an efficient heuristic algorithm with a local improvement strategy are developed for the total weighted completion time problem. To evaluate the performance of the proposed heuristic algorithm, two new lower bounds are further developed. Computational experiments show that the proposed heuristic can rapidly achieve optimal results for small-sized problems and obtain near-optimal solutions with a tight average relative percentage deviation for large-sized problems.

1. Introduction

Production scheduling is concerned with the allocation of limited resources (machines or operators) to tasks in the manufacturing systems of enterprises to optimise certain objective functions so that their competitive position is maintained or improved in fast-changing markets. In traditional scheduling theory, it is usually assumed that machines can operate continuously during the scheduling planning horizon. However, unexpected machine breakdowns or preventive maintenance activities cannot be ignored in the real world, as they make the machines unavailable and slow down or stop production, which decreases the overall performance of the system. Therefore, taking the constraint of machine unavailability into account during production scheduling is very important, and it makes scheduling problems more complex and realistic.

Preventive maintenance (PM) causes machine unavailability, as mentioned above, but it has gradually become a common practice in many companies as an a priori maintenance measure to prevent potential malfunctions and critical nonavailability of a system. For this reason, many researchers and practitioners have gradually regarded production scheduling jointly with PM as a common and significant issue. In the literature, most scheduling models assume that the intervals between maintenance activities are known in advance and that the maintenance time (unavailability time) is a constant. This assumption is reasonable when PM is determined by engineering staff in advance. To date, such models have received considerable attention from researchers, and comprehensive reviews of this type of research have also been provided [1–4].

Regarding the scheduling problem with PM activities, the single-machine problem with a single interval of machine nonavailability is a basic model, denoted as $1, h_1\|\mathrm{Obj}$, where $h_1$ indicates that the PM activity occurs once in the planning horizon and Obj is the objective function adopted in the scheduling problem. Adiri et al. [5] and Lee and Liman [6] proved the single-machine problem with the total completion time objective to be NP-hard. They showed that the SPT rule has a tight worst-case error bound of 2/7. For the same problem, Sadfi et al. [7] showed that their proposed approximation algorithm MSPT (modified SPT) has a worst-case error bound of 3/17. Breit [8] provided a parametric $O(n\log n)$ algorithm whose best error bound is 7.4%. Another related flow-time objective function is the weighted completion time, which has also been discussed in the literature, because weights can measure the importance of processing jobs or quantify the stocking cost per unit of time of jobs in a system. Kacem et al. [9] studied the problem of minimising the total weighted completion time with a single nonavailability interval; they proposed a branch-and-bound (BAB) method, a mixed integer programming (MIP) model, and a dynamic programming (DP) method. For the same problem, Kacem and Paschos [10] modified the WSPT rule slightly and provided a polynomial differential approximation algorithm.

All the research cited above addressed cases with a single PM activity in the planning horizon. It is also necessary to consider multiple PM activities, as maintenance is scheduled periodically in many manufacturing systems; that is, there is usually more than one maintenance period in the planning horizon. Liao and Chen [11] extended the single PM activity to multiple, periodic maintenance activities for a single-machine scheduling problem; they proposed a BAB algorithm and an efficient heuristic for minimising the maximum tardiness. For the objective of minimising the number of tardy jobs, Chen [12] proposed an efficient heuristic based on Moore's algorithm [13] and the BAB method, and Lee and Kim [14] proposed a two-phase heuristic algorithm for the same problem. Batun and Azizoglu [15] proposed a BAB algorithm for the problem with the total flow time objective. To minimise the makespan, Perez-Gonzalez and Framinan [16] developed an MIP model and a two-phase heuristic algorithm based on bin-packing dispatching rules to obtain optimal or near-optimal solutions, and Zhou et al. [17] proposed different heuristic algorithms with a local improvement mechanism for the same problem.

In the related research, a variety of production scheduling problems exist that consider different maintenance types on different manufacturing shop floors. Relative to fixed PM activities with known starting times and durations, flexible maintenance, where the maintenance start time is not fixed in advance, is another attractive research topic. In some cases of steel processing or semiconductor manufacturing, the maintenance start time depends on a specific threshold value such as the maximum continuous working time or the maximum accumulated dirt. This kind of assumption can be found in Qi et al. [18], Mosheiov and Sarig [19], Cui and Lu [20], Su and Wang [21], Pang et al. [22], Huang et al. [23], Chung et al. [24], etc. Another deterministic case is when a maintenance window [u, v] is arranged in advance and the maintenance operation should be undertaken within this window, where the maintenance time is not longer than the window, i.e., not longer than v − u. Some contributions under this assumption for different scheduling problems can be found in Chen [25], Yang et al. [4], etc. All the studies mentioned above assumed that the maintenance time is a constant. More recently, some researchers have taken variable maintenance times into account, because in some realistic applications the maintenance time either increases or decreases depending on the machine conditions or on maintenance skill improvement. Bock et al. [26] considered the case of job-dependent machine deterioration, where the maintenance time depends on the amount by which the maintenance level is increased; that is, more improvement of the machine's state requires more maintenance time. Several recent studies have addressed variable maintenance activities in scheduling problems [27, 28].

This study focuses on scheduling jobs on a single machine with periodic unavailability periods due to PM activities; the objective is to minimise the total weighted completion time. In our study, the unavailability periods are fixed and known in advance. In the literature, a recent study by Krim et al. [29] considered the same scheduling problem. Our contribution lies in developing a different MIP model, two lower bounds, and a new heuristic algorithm, and in examining the effectiveness of the proposed methods on various problems. In Section 2, we propose an MIP model, which is compared with the model of Krim et al. [29]; the results show that our model finds the optimal solutions of their model's problems with a lower CPU time. Lower bounds are addressed in Section 3. In Section 4, we propose a heuristic algorithm based on analytical properties. The proposed heuristic algorithm is examined on large instances, and the results are compared and reported in Section 5. Finally, Section 6 states our conclusions along with future research directions.

2. Mixed Integer Programming Model

In this section, we first describe the considered problem. There are n independent, nonresumable jobs to be processed on a single machine. PM activities are carried out periodically, so the machine is not constantly available in the planning horizon, which is composed of an availability period of T time units followed by a nonavailability period equal to the maintenance time. A nonresumable job is a job that, if it cannot be finished before a maintenance activity, has to be restarted after the maintenance. Each availability period of length T is fixed, and machine idle time may occur because of job nonresumability, as shown in Figure 1. Using the three-field notation of Graham et al. [30], this problem is a single-machine scheduling problem with periodic maintenance and the total weighted completion time objective, and it has been proven to be NP-hard [29]. For the problem, the following assumptions and notation are used:
(i) Assumptions:
(1) All jobs and the machine are available at time t = 0
(2) The processing times and weights of the jobs are deterministic and known in advance
(3) The machine can process only one job at a time, without pre-emption
(4) The maintenance time of the machine is deterministic and known in advance; after maintenance, the machine returns to its initial status
(5) The fixed interval T is known in advance, and it is not large enough to process all the jobs (i.e., $\sum_{i=1}^{n} p_i > T$); however, the availability period T should be greater than or equal to the maximum processing time of the jobs, i.e., $T \ge \max_i p_i$
(ii) Notation:

n: number of jobs
$p_i$: processing time of job i, $i = 1, \dots, n$
$w_i$: weight of job i, $i = 1, \dots, n$
$t_m$: maintenance time
T: an availability period, where T is a given fixed time period
M: a large integer number
$x_{ik}$: a binary decision variable used to judge whether job i is processed at position k; $x_{ik} = 1$ if job i is assigned to position k and $x_{ik} = 0$ otherwise, where $i, k = 1, \dots, n$
$y_k$: a binary decision variable used to judge whether maintenance is executed immediately after position k; $y_k = 1$ if maintenance follows position k and $y_k = 0$ otherwise
$A_k$: accumulated processing time at position k within the current availability period, $k = 1, \dots, n$
$I_k$: idle time after position k, $k = 1, \dots, n$
$C_i$: completion time of job i, $i = 1, \dots, n$
$S_k$: start time at position k, $k = 1, \dots, n$
$P_k$: processing time at position k, $k = 1, \dots, n$
$CP_k$: completion time at position k, $k = 1, \dots, n$
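To make the timing conventions above concrete, the following minimal sketch (our illustration, not code from the paper; the function name and the toy data are hypothetical) evaluates a fixed job order under periodic PM: a job that does not fit in the remainder of the current availability period waits through the idle time and the maintenance time before starting.

```python
# Minimal sketch (our own, hypothetical helper and data) of schedule evaluation
# under fixed periodic PM. T is the availability period, t_m the maintenance time;
# jobs are nonresumable, so a job that does not fit in the remaining window is
# delayed past the idle time and the maintenance period.

def evaluate_sequence(sequence, p, w, T, t_m):
    """Return (total weighted completion time, completion times) for a fixed job order."""
    assert all(p[j] <= T for j in sequence), "each job must fit in one availability period"
    t = 0                 # current time
    window_used = 0       # processing time accumulated in the current availability period
    completion = {}
    for j in sequence:
        if window_used + p[j] > T:          # job does not fit: idle, then maintenance
            t += (T - window_used) + t_m
            window_used = 0
        t += p[j]
        window_used += p[j]
        completion[j] = t
    twc = sum(w[j] * completion[j] for j in sequence)
    return twc, completion

# Hypothetical three-job illustration (not data from the paper):
p = {1: 4, 2: 3, 3: 5}
w = {1: 2, 2: 1, 3: 3}
print(evaluate_sequence([1, 2, 3], p, w, T=8, t_m=2))   # -> (60, {1: 4, 2: 7, 3: 15})
```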

Next, an MIP formulation is provided for the problem; the model is constructed from the viewpoint of each position, which differs from the MIP model of Krim et al. [29] that is built on the relationships between jobs. The proposed MIP model minimises objective (1) subject to constraints (2)–(17), which are explained below.

Constraint (1) is the objective function that minimises the total weighted completion time of all jobs. Constraints (2) and (3) ensure that each job is placed at exactly one position and that each position is occupied by exactly one job. Constraint (4) specifies the processing time of the job that is assigned to position k. Constraint (5) ensures that the job at position k starts no earlier than the sum of its predecessor's finish time, idle time, and possible maintenance time. Constraint (6) establishes the link between the start time and the finish time of position k. Constraint (7) calculates the accumulated processing time at position 1. Constraints (8) to (10) calculate the accumulated processing time at position k, excluding position 1. Constraint (11) ensures that the accumulated processing time at any position in the sequence does not exceed the predefined availability period. Constraints (12) and (13) are used to calculate the idle time immediately after position k. Constraint (14) determines the completion time of each job. Constraint (15) specifies that the starting time of position 1 in the sequence is equal to zero. Constraints (16) and (17) specify that the maintenance time and the idle time after the last position in the sequence are equal to zero, respectively.
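Because the model's equations are not reproduced above, the following PuLP sketch illustrates one possible position-based formulation in the spirit of constraints (1)–(17); the variable names, the particular big-M linearisation, and the toy instance are our assumptions rather than the paper's exact model.

```python
# A hedged sketch of a position-based MIP for the problem (our variable names and
# big-M linearisation, written with the open-source PuLP package).
import pulp

def build_position_mip(p, w, T, t_m):
    n = len(p)
    jobs, pos = range(n), range(n)
    M = sum(p) + n * (T + t_m)                                   # a safe big-M value

    m = pulp.LpProblem("single_machine_periodic_PM", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (jobs, pos), cat="Binary")    # job i at position k
    y = pulp.LpVariable.dicts("y", pos, cat="Binary")            # maintenance right after position k
    P = pulp.LpVariable.dicts("P", pos, lowBound=0)              # processing time at position k
    A = pulp.LpVariable.dicts("A", pos, lowBound=0)              # accumulated work in current window
    I = pulp.LpVariable.dicts("I", pos, lowBound=0)              # idle time after position k
    S = pulp.LpVariable.dicts("S", pos, lowBound=0)              # start time at position k
    C = pulp.LpVariable.dicts("C", pos, lowBound=0)              # completion time at position k
    Cj = pulp.LpVariable.dicts("Cj", jobs, lowBound=0)           # completion time of job i

    m += pulp.lpSum(w[i] * Cj[i] for i in jobs)                  # objective, cf. (1)

    for i in jobs:
        m += pulp.lpSum(x[i][k] for k in pos) == 1               # cf. (2)
    for k in pos:
        m += pulp.lpSum(x[i][k] for i in jobs) == 1              # cf. (3)
        m += P[k] == pulp.lpSum(p[i] * x[i][k] for i in jobs)    # cf. (4)
        m += C[k] == S[k] + P[k]                                 # cf. (6)
        m += A[k] <= T                                           # cf. (11)
        # idle time equals the unused part of the window iff maintenance follows, cf. (12)-(13)
        m += I[k] >= (T - A[k]) - M * (1 - y[k])
        m += I[k] <= (T - A[k]) + M * (1 - y[k])
        m += I[k] <= M * y[k]
        for i in jobs:
            m += Cj[i] >= C[k] - M * (1 - x[i][k])               # cf. (14)

    m += S[0] == 0                                               # cf. (15)
    m += A[0] == P[0]                                            # cf. (7)
    m += y[n - 1] == 0                                           # cf. (16)
    m += I[n - 1] == 0                                           # cf. (17)

    for k in range(1, n):
        m += S[k] >= C[k - 1] + I[k - 1] + t_m * y[k - 1]        # cf. (5)
        # accumulated work continues if no maintenance precedes, resets otherwise, cf. (8)-(10)
        m += A[k] >= A[k - 1] + P[k] - M * y[k - 1]
        m += A[k] <= A[k - 1] + P[k] + M * y[k - 1]
        m += A[k] >= P[k] - M * (1 - y[k - 1])
        m += A[k] <= P[k] + M * (1 - y[k - 1])
    return m

# Hypothetical toy instance (not from the paper):
model = build_position_mip(p=[4, 3, 5, 2], w=[2, 1, 3, 1], T=8, t_m=2)
model.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[model.status], pulp.value(model.objective))
```

In this sketch, the accumulated-work variables reset to the current position's processing time whenever maintenance precedes it, which is what pins the unavailability periods to the fixed grid T, 2T + $t_m$, and so on.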

In the proposed model, the numbers of constraints and variables grow polynomially with the number of jobs n; in particular, the position-based assignment variables alone contribute $n^2$ binary variables. The model proposed by Batun and Azizoglu [15] and mentioned above also requires numbers of constraints and binary variables that grow rapidly with n. For even relatively modest problem sizes, there are too many integer variables in the two models for the problem to be solved exactly; however, the MIP model is still an appropriate way to describe the problem and to obtain optimal solutions for small-sized problems. From a practical viewpoint, heuristic algorithms are required to solve large-sized problems.

3. Lower Bounds

A lower bound value is an estimate of the best possible solution and is useful for evaluating the performance of heuristic algorithms when the optimal solutions cannot be found. Krim et al. [29] proposed four different lower bound procedures based on polynomially solvable relaxed problems. Building on their research, we develop two different lower bound values for the considered problem in this paper. Before introducing our lower bound values, the four lower bound procedures of Krim et al. [29] are briefly described as follows:
(i) LB1, which is based on the relaxed problem $1\|\sum_j w_jC_j$ obtained by ignoring the maintenance periods
(ii) LB2, which is based on the special case in which the processing times of all jobs are the same
(iii) LB3, which is based on the relaxed problem with a single maintenance period and applies a dynamic programming algorithm to solve it
(iv) LB4, which is based on the special case in which the weight values of all jobs are the same
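For instance, LB1 relaxes the maintenance periods away; the relaxed problem $1\|\sum_j w_jC_j$ is solved optimally by the WSPT rule (Smith's rule), and its optimal value is a valid lower bound for the problem with maintenance. A minimal sketch with hypothetical data (our illustration, not necessarily the exact procedure of [29]) is as follows.

```python
# Sketch of the relaxed-problem bound behind LB1 (our illustration):
# without maintenance, 1 || sum w_j C_j is solved optimally by WSPT.

def lb1_wspt(p, w):
    order = sorted(range(len(p)), key=lambda j: p[j] / w[j])   # WSPT: nondecreasing p/w
    t, total = 0, 0
    for j in order:
        t += p[j]               # completion time in the relaxed (no-maintenance) problem
        total += w[j] * t
    return total

# Hypothetical data (not the paper's instances):
print(lb1_wspt(p=[4, 3, 5, 2], w=[2, 1, 3, 1]))   # -> 58
```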

For more detail on these lower bounds, please see the work of Krim et al. [29]. Among the four lower bounds, we find the formula of LB2 to be questionable, and we use a counterexample with 8 jobs to illustrate the issue.

For this counterexample, the optimal sequence is shown in Figure 2. However, the value computed with the LB2 formula, in which $\lceil x \rceil$ denotes the least integer greater than or equal to x, turns out to be greater than the optimal solution, so the formula cannot serve as a valid lower bound.

Regarding our proposed lower bound values, we first consider an optimal sequence, as shown in Figure 3. The notation used in Figure 3 is described below.
(i) b: number of batches in an optimal solution
(ii) $B_k$: the kth batch, $k = 1, \dots, b$
(iii) $n_k$: number of jobs in the kth batch, $k = 1, \dots, b$, with $n_1 + n_2 + \cdots + n_b = n$
(iv) $I_k$: idle time occurring in the kth batch, $k = 1, \dots, b$
(v) $[i]$: the job in the ith position of an optimal sequence
(vi) $p_{[i]}$: processing time of the job in the ith position of an optimal sequence
(vii) $w_{[i]}$: weight of the job in the ith position of an optimal sequence

For the optimal sequence shown in Figure 3, the total weighted completion time can be obtained as follows:

$$\mathrm{TWC} = \sum_{i=1}^{n} w_{[i]}\sum_{j=1}^{i} p_{[j]} + \sum_{k=2}^{b}\left(\sum_{i \in B_k} w_{[i]}\right)\sum_{l=1}^{k-1}\left(I_l + t_m\right) \qquad (18)$$

According to Figure 3 and formula (18), it is obvious that the number of batches is an important factor influencing the schedule results, and an obvious estimate for the number of batches is $\lceil \sum_{i=1}^{n} p_i / T \rceil$ [29]. In this paper, we adopt another lower bound concept, provided by Martello and Toth [31] for the bin-packing problem, to estimate the number of batches. We let $a \in \{1, 2, \dots, \lfloor T/2 \rfloor\}$, where $\lfloor x \rfloor$ denotes the greatest integer less than or equal to x. We put the jobs whose processing times satisfy the condition $p_i > T - a$ into set A, and the jobs whose processing times satisfy the condition $p_i \le a$ into set B. We let $n_A$ be the number of jobs in set A. Now, for each a, the waste time is calculated as $W(a) = \max\left(0,\; n_A T - \sum_{i \in A} p_i - \sum_{i \in B} p_i\right)$, and the maximum waste is obtained as $W_{\max} = \max_a W(a)$. Considering the maximum waste, the estimate of the number of batches is $b_{est} = \lceil (\sum_{i=1}^{n} p_i + W_{\max})/T \rceil$; a sketch of this computation is given below. Denoting the optimal number of batches by $b^*$, the relation $\lceil \sum_{i=1}^{n} p_i / T \rceil \le b_{est} \le b^*$ holds.
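The following short sketch (our implementation of the wasted-space idea described above; the set definitions are reconstructed, and the data are hypothetical) computes $b_{est}$.

```python
import math

# Sketch of the Martello-Toth-style estimate b_est of the number of batches
# (our reconstruction; hypothetical data).

def estimate_batches(p, T):
    best_waste = 0
    for a in range(1, T // 2 + 1):
        A = [x for x in p if x > T - a]       # jobs that leave a gap smaller than a
        B = [x for x in p if x <= a]          # only these jobs can fill such gaps
        waste = max(0, len(A) * T - sum(A) - sum(B))
        best_waste = max(best_waste, waste)
    return math.ceil((sum(p) + best_waste) / T)

# Hypothetical data: five jobs with total processing time 27 and T = 10.
# The simple estimate is ceil(27 / 10) = 3, whereas the waste-based estimate is 4.
print(estimate_batches(p=[6, 6, 6, 6, 3], T=10))   # -> 4
```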

For a job sequence in which all jobs are sorted according to the WSPT rule, i.e., $p_{[1]}/w_{[1]} \le p_{[2]}/w_{[2]} \le \cdots \le p_{[n]}/w_{[n]}$, we use the estimate $b_{est}$ as a substitute for the optimal number of batches and ignore the impact of the total weighted idle time on the objective function. Then, the new lower bound (NewLB) is derived from equation (18), and the value of NewLB is less than or equal to the optimal solution, where the minimum number of jobs in each batch is estimated by equation (19). To illustrate the proposed lower bound values, an instance with 5 jobs is given; the information for the instance is shown in Table 1, and the optimal solution of 194 is obtained by the MIP model.

First, $\lfloor T/2 \rfloor$ is computed. We let a be equal to 1; since none of the jobs satisfies the condition $p_i > T - 1$, set A is empty, $n_A = 0$, and the waste is $W(1) = 0$.

We then let a be equal to 2; the jobs whose processing times satisfy $p_i > T - 2$ form set A, and the jobs whose processing times satisfy $p_i \le 2$ form set B, from which the waste W(2) is computed. Repeating this for every admissible value of a, the maximum waste time $W_{\max}$ is obtained, and the estimate of the number of batches $b_{est}$ follows. For each batch, we can obtain the minimum number of jobs according to equation (19). Next, the jobs are sorted based on the WSPT rule, and NewLB is obtained accordingly.

Recalling LB4, its estimation is also composed of two parts, one of which involves an estimate of the number of batches; replacing that estimate with $b_{est}$ yields a modified bound. For this example, Table 2 lists the results obtained by the five lower bounds for the five-job example, where the performance of each lower bound is evaluated by the relative percentage gap (RPG) from the optimal solution.

4. Heuristic Algorithms

Despite the relative success of the two MIP models mentioned above in finding optimal solutions, as reported in the computational experiments, the models are still incapable of solving medium and large instances, so it is necessary to develop efficient heuristic algorithms. Many existing heuristic algorithms in the literature use specific optimality properties as a basis for constructing better schedules [18, 32]. Additionally, local improvement strategies such as insert and swap moves are often integrated into heuristic algorithms to obtain better solutions, especially when developing metaheuristic algorithms (Li et al. [33], Imran Hossain et al. [34]). Inspired by this idea, we adopt three important optimality properties proposed by Krim et al. [29] to construct a heuristic algorithm that finds optimal or near-optimal solutions for large-sized problems, and we combine it with a local improvement strategy.

First, the three important optimality properties addressed by Krim et al. [29] are briefly described as follows.

Property 1. In any optimal solution, the jobs are scheduled in nondecreasing order of $p_i/w_i$ (WSPT rule) within each batch.

Property 2. In any optimal solution, the batches are arranged in nonincreasing order of their total weights, i.e., $\sum_{i \in B_k} w_i \ge \sum_{i \in B_{k+1}} w_i$ for $k = 1, \dots, b - 1$.

Property 3. In any optimal solution, for each idle time $I_k$ in batch $B_k$ and the job with the smallest processing time in a later batch $B_l$, $l > k$, the condition $\min_{i \in B_l} p_i > I_k$ is always satisfied.
Krim et al. [29] proposed nine heuristic algorithms, and the basic framework of these algorithms contains the following three stages: (1) a sequence of jobs is generated by a dispatching rule; (2) given the sequence of jobs, an initial schedule with PM is produced by a batching rule; and (3) Properties 1 and 2 are applied to improve the initial schedule and obtain the final schedule. The detailed steps of these heuristic algorithms can be found in the study by Krim et al. [29]. According to their computational results, the WSPTBF and WSPTFF algorithms had the best performance among the nine algorithms. In the WSPTBF and WSPTFF algorithms, the job sequence is generated by the WSPT rule, whereas the initial schedule is generated by the best fit (BF) and first fit (FF) batching rules, respectively, which are well-known rules for solving bin-packing problems [32].
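As a point of reference, the following short sketch (our reconstruction with hypothetical data, not Krim et al.'s code) shows the first-fit batching step used in WSPTFF; the best-fit rule used in WSPTBF differs only in choosing, among the batches with enough remaining capacity, the one with the smallest residual capacity.

```python
# Sketch of the first-fit (FF) batching rule (our reconstruction, hypothetical data):
# jobs are taken in the given (WSPT) order, and each job is put into the first batch
# that still has room within the availability period T.

def first_fit_batches(order, p, T):
    batches, loads = [], []
    for j in order:
        for b, load in enumerate(loads):
            if load + p[j] <= T:          # first batch with enough remaining capacity
                batches[b].append(j)
                loads[b] += p[j]
                break
        else:                             # no existing batch fits: open a new one
            batches.append([j])
            loads.append(p[j])
    return batches

# Hypothetical data, jobs already indexed in WSPT order:
print(first_fit_batches(order=[0, 1, 2, 3, 4], p=[6, 5, 4, 3, 2], T=10))   # -> [[0, 2], [1, 3, 4]]
```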
In this paper, we develop a two-phase heuristic algorithm with a framework similar to that of the heuristic algorithms of Krim et al. [29]. Phase 1 consists of a sorting step that uses the WSPT dispatching rule to determine a sequence of jobs, followed by assignment steps that use a full-batching rule to determine the jobs in each batch (i.e., in each availability interval of length T); the full-batching rule is widely used for the bin-packing problem, which tries to minimise the number of batches [32]. As a result, an initial schedule is obtained. Next, Phase 2 aims to improve the initial schedule by the local improvement strategy, which includes insert and swap procedures; a compact sketch of the construction and insert steps is given after the algorithm listings below. The insert procedure attempts to satisfy Property 3 by inserting jobs into earlier batches. That is, if a job $j \in B_l$ and a batch $B_k$ with $k < l$ exist such that $p_j \le I_k$, then job j is removed from batch $B_l$ and placed into the idle period of batch $B_k$. The detailed steps of the insert procedure are described as follows (Algorithm 1).
The swap procedure attempts to swap jobs belonging to two different batches so that the current idle time is decreased or the total weight of the earlier batch $B_k$ is increased. After swapping two jobs, Property 1 still needs to be satisfied. The detailed steps of the swap procedure are described as follows (Algorithm 2).
Finally, the complete heuristic algorithm, which combines the construction phase with the local improvement strategy consisting of the insert and swap procedures and is named WSPTFB + LIS, is described below (Algorithm 3).
A preliminary test is conducted using the 12-job instance of Krim et al. [29] to compare our proposed heuristic algorithm with WSPTBF and WSPTFF. Tables 3 and 4 show the information of the 12-job instance and the results obtained by the heuristic algorithms, respectively; in Table 4, solutions that are also optimal are marked. For this instance, WSPTFF had a better initial solution than the other algorithms, and the initial solutions obtained by the WSPTBF and WSPTFB + LIS algorithms are the same, equal to 11131; however, the WSPTFB + LIS algorithm obtained a better final solution. This occurred because, without violating the three optimality properties, our heuristic algorithm can reallocate jobs between batches through the local improvement strategy to obtain a better schedule, whereas the other algorithms (WSPTBF and WSPTFF) only try to re-sort the sequence of jobs (or batches), so the content (jobs) of each batch does not change.

(i) Step 4: let k = 1 and l = 1
(ii) Step 4.1: if there is a job $j \in B_l$ satisfying $p_j \le I_k$, remove job j from batch $B_l$, place it into the idle period of batch $B_k$, let improve_flag = true, substitute the current schedule with the new one, and go to Step 3
(iii) Otherwise, go to Step 4.2 (Property 3 check)
(iv) Step 4.2: if batches after $B_l$ remain to be examined, move l to the next batch and go to Step 4.1; otherwise, go to Step 4.3
(v) Step 4.3: if batches after $B_k$ remain to be examined, move k to the next batch, reset l, and go to Step 4.1; otherwise, go to Step 5
(i) Step 6: let k = 1 and l = 1.
(ii) Step 6.1: if there exist two jobs, one in batch $B_k$ and one in batch $B_l$, for which the swap conditions described above are satisfied, duplicate the current schedule as a temporary schedule and go to Step 6.2. Otherwise, go to Step 6.4.
(iii) Step 6.2: interchange the positions of the two jobs in the temporary schedule and re-sort the jobs in batch $B_k$ and batch $B_l$ based on the WSPT rule, respectively. Then a new schedule is generated, and its objective value is computed. (Property 1 check)
(iv) Step 6.3: if the new objective value is smaller than the current one, accept the new schedule and its objective value, set improve_flag = true, and go to Step 3. Otherwise, go to Step 6.4.
(v) Step 6.4: if batches after $B_l$ remain to be examined, move l to the next batch and go to Step 6.1; otherwise, go to Step 6.5.
(vi) Step 6.5: if batches after $B_k$ remain to be examined, move k to the next batch, reset l, and go to Step 6.1; otherwise, go to Step 7.
(i)Step 1: input data.
(ii) Step 2: sort the jobs according to the WSPT rule, and assign the sorted jobs to batches based on the full-batching method. Record the obtained schedule and its corresponding objective value TWC, and set improve_flag = true.
(iii) Step 3: if improve_flag = false, output the schedule and TWC, and stop. If improve_flag = true, reset improve_flag = false and go to Step 4.
(iv)Step 4: execute insert procedure.
(v)Step 5: if improve_flag = true, go to Step 3; otherwise, go to Step 6.
(vi)Step 6: execute swap procedure.
(vii)Step 7: if improve_flag = true, go to Step 3; otherwise, go to Step 8.
(viii) Step 8: compute the total weight of each batch.
(ix) Step 9: if the schedule satisfies Property 2, go to Step 3; otherwise, sort the batches in decreasing order of their total weights, set improve_flag = true, and go to Step 3. (Property 2 check)
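To illustrate how the pieces fit together, the sketch below (our reconstruction under the notation above, with hypothetical data; it omits the swap procedure and the Property 2 batch reordering for brevity, so it is not the authors' exact pseudocode) shows the Phase 1 construction of WSPTFB and one realisation of the insert move, which enforces Property 3 by pulling a job from a later batch into an earlier batch's idle period whenever it fits.

```python
# Sketch of Phase 1 (WSPT order + full batching) and of the insert move of the
# local improvement strategy (our reconstruction; hypothetical data).

def wspt_full_batching(p, w, T):
    """WSPT order + full batching; returns batches as lists of job ids."""
    order = sorted(range(len(p)), key=lambda j: p[j] / w[j])
    batches, load = [[]], 0
    for j in order:
        if load + p[j] > T:                 # current batch is full: open a new one
            batches.append([])
            load = 0
        batches[-1].append(j)
        load += p[j]
    return batches

def total_weighted_completion(batches, p, w, T, t_m):
    twc = 0
    for k, batch in enumerate(batches):
        t = k * (T + t_m)                   # each earlier batch occupies a full period T + t_m
        for j in batch:
            t += p[j]
            twc += w[j] * t
    return twc

def insert_improvement(batches, p, w, T, t_m):
    """Repeatedly move a job forward into an earlier batch's idle period (Property 3)."""
    improved = True
    while improved:
        improved = False
        for k in range(len(batches)):
            idle = T - sum(p[j] for j in batches[k])
            for l in range(k + 1, len(batches)):
                fitting = [j for j in batches[l] if p[j] <= idle]
                if fitting:
                    j = min(fitting, key=lambda j: p[j] / w[j])
                    batches[l].remove(j)
                    batches[k].append(j)
                    batches[k].sort(key=lambda j: p[j] / w[j])   # keep WSPT order (Property 1)
                    improved = True
                    break
            if improved:
                break
        batches = [b for b in batches if b]                      # drop emptied batches
    return batches

# Hypothetical 6-job instance (not the 12-job instance of Table 3):
p = [6, 5, 4, 3, 2, 2]
w = [3, 2, 2, 1, 2, 1]
T, t_m = 10, 2
b = wspt_full_batching(p, w, T)
print(b, total_weighted_completion(b, p, w, T, t_m))             # initial schedule
b = insert_improvement(b, p, w, T, t_m)
print(b, total_weighted_completion(b, p, w, T, t_m))             # improved schedule
```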

5. Computational Experiments

This section consists of three experiments. Experiment 1 focuses on comparing the computational efficiency of the MIP model provided by Krim et al. [29] and ours. Experiment 2 aims to evaluate the different lower bounds. Finally, the performances of the proposed heuristic algorithms are compared in Experiment 3. Since the benchmark data of Krim et al. [29] are not available at the following URL: https://sakai.uphf.fr/wiki/site/1b6900a5-38f1-4003-94e9-317898cbf168/instancesmaintenancewct.html, we generate a set of problems to carry out the computational experiments. The test-bed contains small-sized and large-sized problems. The small-sized problems consist of 5, 6, 7, …, 14, 15 jobs, and the large-sized problems consider 20, 25, 30, 35, 40, …, 90, 100, 200, 300, 400, 500, 1000 jobs. The processing times of the jobs are generated from discrete uniform distributions over the ranges [1, 5, 15, 20], respectively. The weights of the jobs are generated from a uniform distribution in the range [1, 10]. T and the maintenance time $t_m$ are drawn from the ranges [30, 50] and [3, 10], respectively. Ten instances were generated for each combination of problem size (n), processing-time range, T, and $t_m$; thus, a total of 2240 instances were randomly generated and tested. To make the generated instances reasonable, the following two conditions are imposed: (1) $\sum_{i=1}^{n} p_i > T$; (2) $T \ge \max_i p_i$. All experiments were conducted on a PC with an Intel Xeon E-2124 3.4 GHz CPU and 32 GB RAM.
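For reproducibility, a sketch of the instance-generation scheme is given below (our illustration; the processing-time range is left as a parameter because several ranges are used, and the two validity conditions are those stated above).

```python
import random

# Sketch of the instance generator (our illustration): weights from U[1, 10],
# T from U[30, 50], maintenance time from U[3, 10]; the processing-time range
# is a parameter, and instances violating the validity conditions are redrawn.

def generate_instance(n, p_range, seed=None):
    rng = random.Random(seed)
    while True:
        p = [rng.randint(*p_range) for _ in range(n)]
        w = [rng.randint(1, 10) for _ in range(n)]
        T = rng.randint(30, 50)
        t_m = rng.randint(3, 10)
        if sum(p) > T and T >= max(p):      # conditions (1) and (2)
            return p, w, T, t_m

print(generate_instance(n=10, p_range=(1, 20), seed=0))
```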

Experiment 1. Comparison of the two MIP models
In Experiment 1, we compare our model with the one provided by Krim et al. [29]; we refer to them as $\mathrm{MIP}_P$ (proposed) and $\mathrm{MIP}_K$ (Krim et al. [29]), respectively. The two models are executed with IBM ILOG CPLEX Optimization Studio Version 12.7.1, and all problems are solved on the same machine for a fair comparison. Additionally, the time limit of ILOG OPL is set to 7200 seconds for each instance. Table 5 reports the results for the two models. The "AveSol" and "AveLB" columns give the average total weighted completion time and the average lower bound value over the 80 instances, respectively. Another column, "nOPT," denotes the number of times each model finds the optimal solution. Since an MIP model may not find optimal solutions within the time limit of 7200 seconds, we record the lower bound values obtained by each MIP model to verify that the identified solutions are optimal, that is, that the solution value equals the lower bound. For the $\mathrm{MIP}_K$ model, nOPT for the 13-job instances is 79 (53), where 79 indicates the number of times the solution of $\mathrm{MIP}_K$ equals the lower bound value of $\mathrm{MIP}_P$ over the 80 instances, and (53) indicates the number of times the solution of $\mathrm{MIP}_K$ equals the lower bound value of $\mathrm{MIP}_K$ itself. From this table, we can observe that the AveLB obtained by the proposed model improves significantly as the problem size increases, which allows the model to find better solutions within the time limit of 7200 seconds. Additionally, because the lower bound values of $\mathrm{MIP}_P$ are tighter than those of $\mathrm{MIP}_K$, feasible solutions obtained by the $\mathrm{MIP}_K$ model within the time limit of 7200 seconds can be proved to be optimal using this condition; for the 15-job problems, the nOPT value of $\mathrm{MIP}_K$ increases from 1 to 76, as shown in Table 5.
Regarding computation times, the computational effort required by the $\mathrm{MIP}_K$ model is significantly higher than that of the $\mathrm{MIP}_P$ model, especially as the problem size increases. Table 5 and Figure 4 reveal that the efficiency of the $\mathrm{MIP}_P$ model is better than that of $\mathrm{MIP}_K$ (Krim et al. [29]). However, neither model can solve the NP-hard problem considered in this paper in a reasonable time for larger instances, and the performance of both models deteriorates as the number of jobs increases; the comparison shows that the two models become computationally impractical when the number of jobs exceeds 20. Therefore, it is necessary to develop heuristics to obtain near-optimal solutions in reasonable time.

Experiment 2. Comparison of the lower bounds
This experiment aims to evaluate the performance of the lower bounds, so we compare the lower bound values with the optimal solutions for small-sized problems. For large-sized problems, the different lower bound values are compared with the best known solution, denoted Best; that is, the relative percentage gap (RPG) is adopted as follows:

$$\mathrm{RPG} = \frac{\mathrm{Best} - \mathrm{LB}}{\mathrm{Best}} \times 100\%,$$

where Best indicates the best solution obtained by the MIP model or the heuristic algorithms.
Regarding the computational time, excluding LB3, the average computational time required by the lower bound methods is less than 1 second and is therefore negligible, so we do not report it here. The LB3 method applies dynamic programming to obtain a lower bound value for the special case in which only one PM activity exists; therefore, LB3 requires much more computational time as the number of jobs increases. In our computational experiments, the average computational time required by LB3 is approximately 1700 seconds for 1000 jobs.
Tables 6–8 show the minimum, maximum, and average RPG, respectively, of the lower bounds relative to the optimal solutions over the 80 instances in each set of small-sized problems. From Tables 6–8, we can see that the minimum, maximum, and average RPG of NewLB are considerably smaller than those of the others. Additionally, the results also support the finding of Krim et al. [29] that LB1 and LB3 are quite tight compared to LB4. The overall average RPG of NewLB is 5.403%. For large-sized problems, Table 9 shows that NewLB remains the best lower bound. From the results shown in Tables 6–9, we observe that NewLB is clearly superior to the others; therefore, NewLB is used to evaluate the performance of the heuristic algorithms in Experiment 3.

Experiment 3. Comparison of the heuristic algorithms
In this experiment, our proposed heuristic algorithm is compared with the two heuristic algorithms provided by Krim et al. [29], and we adopt the relative percentage deviation (RPD) to measure the solution quality of each instance for the heuristic algorithms. RPD is defined as $\mathrm{RPD} = (\mathrm{TWC}_H - \mathrm{Min})/\mathrm{Min} \times 100\%$, where Min is the optimal value provided by the MIP model or the lower bound value provided by the NewLB method, and $\mathrm{TWC}_H$ is the total weighted completion time obtained by each heuristic algorithm. Tables 10 and 11 provide the results of the WSPTFB and WSPTFB + LIS algorithms, where WSPTFB represents the proposed heuristic algorithm without the local improvement strategy. From Tables 10 and 11, the WSPTFB + LIS algorithm gives notably better results thanks to the local improvement strategy: it attains a smaller average RPD and obtains more optimal solutions than WSPTFB. Additionally, we applied an ANOVA to examine the difference in the average RPD between the two heuristic algorithms; the results in Table 12 show that a statistically significant difference exists between the two algorithms, because the p-value is less than 0.05. Based on these preliminary results, we focus only on comparing WSPTFB + LIS with the other heuristic algorithms in the remainder of this section.
To the best of our knowledge, the most recent and best-performing algorithms proposed for this problem are the WSPTBF and WSPTFF algorithms of Krim et al. [29]. Thus, we adopt these two algorithms as benchmarks and compare their results with those of our WSPTFB + LIS algorithm. For a fair comparison, WSPTBF and WSPTFF were recoded in C++ based on their pseudocodes (Krim et al. [29]), and the same test-bed problems were examined under the same computer configuration. In Table 13, we present the minimum, maximum, and average RPD values and the number of optimal solutions found by each heuristic algorithm over the 80 instances of each small-sized problem. From this table, we can observe that WSPTFB + LIS obtains better results in all cases (552 optimal solutions found over 880 instances). Additionally, the average RPD values of WSPTFB + LIS are considerably smaller than those of the other heuristic algorithms, which is also seen in Table 14, where WSPTFB + LIS has smaller average RPD values for large-sized problems. Table 15 further shows that there are significant differences in the average RPD values among the three algorithms based on the ANOVA results, where the p-value is less than 0.05.
In terms of computation times, Table 16 shows that the computational effort required by the proposed WSPTFB + LIS is slightly higher than that of the other algorithms as the number of jobs increases, because it performs the local improvement strategy to find better solutions. Nonetheless, as shown above, the proposed WSPTFB + LIS obtains very good solutions within extremely small computational times; in Table 16, the average computational effort required by our proposed algorithm is less than 2 seconds for 1000 jobs. These results indicate that our proposed WSPTFB + LIS algorithm outperforms the WSPTBF and WSPTFF algorithms and that the proposed algorithm can be applied to real-world problems.

6. Conclusions

Scheduling problems with PM activities have received increasing attention in the last decade, due to the widespread adoption of the PM philosophy in many real manufacturing systems. However, not much work has been performed on problems with periodic PM and the objective of total weighted completion time. For this problem, we proposed a new MIP model, two new lower bounds, and a heuristic algorithm. The proposed heuristic algorithm adopts a bin-packing scheme that considers batch formation and batch sequence and applies the optimality properties in the local improvement strategy to efficiently obtain optimal or near-optimal solutions. To the best of our knowledge, the most recent study on the same problem was conducted by Krim et al. [29]; thus, we took their proposed methods, including the MIP model, lower bound methods, and best heuristic algorithms, as the benchmark approaches.

Based on the results of a comprehensive comparison with the benchmark approaches, in terms of the MIP models, the number of optimal solutions found by our MIP model was larger than that of the benchmark model within the same time limit (7200 seconds). Regarding the lower bound methods, NewLB was significantly more effective than the other methods, as its average RPG was smaller (closer to the optimal solutions/best lower bounds). For small-sized problems, approximately 63% of our proposed heuristic solutions were found to be optimal, a ratio higher than those of the other algorithms (28% and 39% of solutions found to be optimal for WSPTBF and WSPTFF, respectively). Furthermore, the results for WSPTFB + LIS were significantly better than the benchmark heuristic results in terms of the average RPD values (closeness to the lower bound values) for larger instances, and the average computation time of our heuristic algorithm was less than 2 seconds. These results show that our proposed heuristic algorithm is very efficient and performs well on problems of different dimensions.

In summary, the results presented in this paper are very encouraging regarding the use of the proposed MIP model and the WSPTFB + LIS algorithm for the single-machine total weighted completion time problem with periodic preventive maintenance. In future work, it will be worth considering different types of maintenance activities, such as variable maintenance durations or flexible maintenance activities, extending this problem to parallel-machine, flow shop, or job shop environments, and including different operational constraints. Additionally, another extension may be the consideration of other criteria, such as the total earliness-tardiness cost and the total weighted tardiness, which are also important objective functions for system performance measures.

Finally, the benchmark data sets and the solutions of our proposed methods, including the MIP model, NewLB, and WSPTFB + LIS, are made available at the following URL: https://drive.google.com/file/d/1Fh5S7j6pHGHhKuy2O6WcO0yfl2eMno-t/view?usp=sharing, so that researchers can compare their solutions.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 51705370 and 71501143) and the Zhejiang Province Natural Science Foundation of China (Grant nos. LY18G010012 and LY19G010007).