Abstract

The scheduling problem with controllable processing times (CPT) is one of the most important research topics in the scheduling field due to its widespread applications. Because of the complexity of this problem, most research has addressed single-objective, small scale cases. However, most practical problems are multiobjective and large scale. Multiobjective metaheuristics are very efficient in solving such problems. This paper studies a single machine scheduling problem with CPT for minimizing total tardiness and compression cost simultaneously. We develop a new multiobjective discrete backtracking search algorithm (MODBSA) to solve this problem. To accommodate the characteristics of the problem, a solution is represented by a permutation vector and a vector of compression amounts of processing times. Furthermore, two major improvement strategies, an adaptive selection scheme and a total cost reduction strategy, are developed. The adaptive selection scheme selects a suitable population to enhance the search efficiency of MODBSA, and the total cost reduction strategy further improves the quality of solutions. For assessment, MODBSA is compared with other algorithms including NSGA-II, SPEA2, and PAES. Experimental results demonstrate that the proposed MODBSA is a promising algorithm for such scheduling problems.

1. Introduction

The scheduling problem with controllable processing times (CPT) has received increasing attention in manufacturing fields. CPT means that the duration of a job can be compressed or expanded by adjusting the available resources such as fuel, equipment, and manpower [14]. Most classical scheduling problems assume that the job processing times are constant values. However, this assumption is often violated in practical production, where job processing times are controllable in some cases. For example, in the chemical industry, the processing time of a chemical substance can be compressed by a catalyzer or expanded by an inhibitor, which requires additional costs [5]. In the CNC manufacturing industry, the job processing times can be controlled by adjusting the cutting speed or the feed rate, which also entails more costs [6, 7]. Therefore, considering CPT in scheduling problems may be more applicable for some manufacturing systems.

This paper studies a single machine scheduling problem with CPT (SSPWCPT) for the following reasons: it fills a gap, since multiobjective evolutionary approaches for the large scale SSPWCPT with multiple criteria have rarely been reported. Most existing methods for the single-objective SSPWCPT can be classified into two categories, namely, exact and approximate approaches. Exact approaches like the branch and bound algorithm have been successfully applied to the small scale SSPWCPT. Unfortunately, exact methods are incapable of solving the large scale SSPWCPT. In contrast, approximate approaches can solve the large scale SSPWCPT within an acceptable time. It is therefore important to study the efficiency and effectiveness of approximate algorithms for this problem. However, a purely single-objective SSPWCPT cannot fully reflect the requirements of real-world scheduling applications. Thus, the problem considers two important criteria, namely, total tardiness and total compression cost. These two objectives are widely accepted in single machine scheduling with CPT because they affect customer satisfaction and enterprise profits. One straightforward strategy for addressing the multiobjective optimization problem (MOP) is to combine multiple objectives into a scalar function by giving fixed weights to each objective function [8]. Nevertheless, in most practical scheduling problems, multiple criteria are usually in conflict with each other [9–11]. In addition, the objective weights are difficult to determine due to different objective scales. Therefore, it is better to handle multiple objectives with knowledge about Pareto dominance. Pareto-based multiobjective evolutionary algorithms (MOEAs) are suitable for solving multiobjective scheduling problems since they can yield nondominated solutions in a single run [4, 12, 13].

Recently, the backtracking search algorithm (BSA) [14] has emerged as a promising method for solving single-objective scheduling problems due to its high convergence speed and ease of implementation. The core idea of BSA is that a dual population is utilized to search for an optimal solution during the search process. Based on the effectiveness of BSA and the characteristics of the MOP, a novel multiobjective discrete backtracking search algorithm (MODBSA) is proposed to solve this multiobjective SSPWCPT. Although a multiobjective backtracking search algorithm has been developed, it is mainly used to address continuous optimization problems [15]. The main reason for adopting MODBSA is that the problem under study is NP-hard [16] and BSA has been demonstrated to be an effective approach for this category of problem [17, 18]. Furthermore, the dual population in BSA can be utilized to store information from different populations and thus maintain population diversity. In addition, to the best of the authors' knowledge, no research on a multiobjective BSA for scheduling problems has been reported in the literature. These reasons drive us to develop an efficient multiobjective algorithm based on BSA for this discrete optimization problem.

To achieve good performance on the SSPWCPT, several strategies are developed for the proposed algorithm. According to the characteristics of the addressed problem, the proposed MODBSA uses a two-part encoding scheme: the first part represents the job permutation and the second one denotes the amount of compression of the processing time of each job. Moreover, an adaptive scheme is proposed to select a suitable population for enhancing search efficiency. Meanwhile, a total cost reduction strategy is proposed to improve convergence toward efficient solutions.

The remainder of the paper is organized as follows. In Section 2, some relevant work is described. In Section 3, a definition of the studied problem is stated. In Section 4, the proposed MODBSA for the SSPWCPT is elaborated. The experimental results of the proposed MODBSA are presented and analyzed in Section 5. Conclusions and future work are given in Section 6.

2. Literature Review

The SSPWCPT has been extensively studied since Vickson [19] introduced CPT into a single machine scheduling problem. However, most previous literature solved the SSPWCPT with single-objective or weighted sum methods. For example, Janiak and Kovalyov [20] addressed the SSPWCPT with deadlines, where the processing time was a linear decreasing function of the amount of compression. They proposed an algorithm for small scale cases with the objective of minimizing the total resource consumption. Shabtay and Kaspi [21] considered a single machine scheduling problem for minimizing total weighted completion time. They presented and analyzed some special cases that were solvable by polynomial time algorithms, and also gave some heuristic algorithms and a dynamic programming approach for the general case. Kayan and Akturk [6] studied a bicriteria scheduling problem on a single CNC machine. They proposed an exact algorithm and four heuristic methods to find a set of discrete points approximating the continuous trade-off curve on small scale problems. Cheng et al. [22] considered a single machine scheduling problem where both job processing times and release dates were controllable. They proposed an algorithm to minimize the sum of the makespan and the total compression cost. Yin and Wang [23] presented a heuristic for a single machine scheduling problem with CPT and learning effect; the objective was to minimize a cost function including makespan, total completion time, and total absolute differences in completion times. Xu et al. [24] proposed a polynomial time algorithm for a small scale problem with the objective of minimizing the total tardiness. Tseng et al. [25] proposed a net benefit compression (NBC) heuristic to minimize total tardiness plus compression cost on the SSPWCPT. Kayvanfar et al. [26, 27] extended the work of Tseng et al. [25] by designing a net benefit compression-net benefit expansion (NBC-NBE) algorithm. Yedidsion et al. [28] proved the complexity of a single machine scheduling problem with CPT for several related criteria. Yin et al. [29] addressed a single machine batch delivery scheduling problem with an assignable common due date and CPT. They provided a dynamic programming algorithm to find the optimal solution for minimizing a cost function consisting of earliness, tardiness, job holding, and due date assignment. Nearchou [8] considered the SSPWCPT with the objective of minimizing total weighted completion time plus compression cost, and adopted several single-objective population-based metaheuristics to solve it. Giglio [30] considered a class of single machine family scheduling problems characterized by unreliable machine behavior, CPT, sequence-dependent setups, and due dates; the objective was to minimize the sum of the total weighted tardiness, the total weighted consumption cost, and the total setup cost, and a dynamic programming approach was proposed. A survey on scheduling with CPT was provided by Shabtay and Steiner [31]. In this paper, we only consider the single machine scheduling problem with CPT; thus, scheduling problems in other environments are not reviewed.

As stated previously, most research has used a single objective or a weighted sum of objectives. However, multiobjective scheduling is the trend for real-life production scheduling. In addition, the majority of previous research focused on heuristics for small scale problems, whereas research on solving the studied problem with metaheuristics is relatively scarce. In fact, metaheuristics are very efficient in solving such large scale scheduling problems. To the best of the authors' knowledge, efficient MOEAs based on BSA have not yet been applied to the SSPWCPT in the previous literature. Thus, the motivation of this study, from a theoretical perspective, is to develop an efficient MOEA for minimizing both total tardiness and total compression cost on the SSPWCPT. Nevertheless, the no free lunch theorem implies that an algorithm's performance is sensitive to the problem considered. Therefore, an MOEA can be made more efficient than other high-performing metaheuristics only by exploiting characteristics of the problem together with dedicated techniques.

3. Problem Formulation

The scheduling problem under study can be described as follows. A set of independent jobs is available at time zero on a single machine. One machine can process at most one job at a time and job preemption is not allowed. Each job has a normal discrete processing time. The normal job processing times can be compressed by allocating additional resources, which entails compression cost. Assume that resources can only be assigned in discrete quantities and the job with normal processing time incurs no extra cost.

The parameters and decision variables used throughout the paper are as shown in the Notations.

The problem is an extension of the scheduling problem with fixed processing times [32]. It can be defined as a multiobjective mathematical model for minimizing total tardiness and total compression cost simultaneously, which is formulated as follows:
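A compact statement of the model, consistent with the constraint descriptions below and with the Notations, is sketched here; the exact big-M disjunctive form used for the sequencing constraints (2)–(3) is an assumption rather than the authors' verbatim formulation:

$$\min \; TT = \sum_{j=1}^{n} T_j, \qquad \min \; TC = \sum_{j=1}^{n} c_j x_j \tag{1}$$
$$s_j \ge s_i + (p_i - x_i) - M (1 - y_{ij}), \quad \forall i \ne j \tag{2}$$
$$s_i \ge s_j + (p_j - x_j) - M y_{ij}, \quad \forall i \ne j \tag{3}$$
$$T_j \ge s_j + (p_j - x_j) - d_j, \quad \forall j \tag{4}$$
$$0 \le x_j \le u_j, \quad x_j \text{ integer}, \quad \forall j \tag{5}$$
$$s_j \ge 0, \; T_j \ge 0, \; y_{ij} \in \{0,1\}, \quad \forall i, j \tag{6}$$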

Equations (1) define the objectives of minimizing the total tardiness and the total compression cost. Constraints (2) and (3) guarantee the precedence relationship and ensure that only one job can be processed at any time instant. Constraints (4) impose the tardiness of each job in a sequence. Constraints (5) limit the amount of compression of the processing time of each job. Constraints (6) state the nonnegativity of the variables. The studied problem is NP-hard since the single machine total tardiness problem is already NP-hard [16]. It is therefore valuable to study and explore such a problem.

4. The Proposed MODBSA for SSPWCPT

In this section, we first give a basic background on multiobjective optimization, then describe the original BSA and present a framework of the proposed MODBSA. Finally, the main improvement strategies of the MODBSA for optimizing the SSPWCPT are elaborated.

4.1. Background on Multiobjective Optimization

To better understand the proposed MODBSA for solving the above problem, we begin with a brief introduction of the basic concept of the MOEA. Without loss of generality, a multiobjective optimization problem (MOP) can be formally defined as follows:

$$\min \; F(x) = \big(f_1(x), f_2(x), \dots, f_m(x)\big)$$
$$\text{s.t.} \quad g_i(x) \le 0, \; i = 1, \dots, p; \qquad h_j(x) = 0, \; j = 1, \dots, q; \qquad x \in \Omega,$$

where $f_k(x)$ indicates the $k$th subobjective function, $x$ is a solution vector which should satisfy the above constraints, $\Omega$ is the decision variable space, and $g_i(x)$ and $h_j(x)$ are the inequality constraints and equality constraints, respectively. $p$ and $q$ denote the total numbers of inequality constraints and equality constraints, respectively.

Let $u = (u_1, \dots, u_m)$ and $v = (v_1, \dots, v_m)$ be two objective vectors; $u$ dominates $v$ (denoted by $u \prec v$) if $u$ is not inferior to $v$ in any objective and is superior to $v$ in at least one objective. A solution $x^*$ is a Pareto optimal vector if there is no solution $x$ that dominates $x^*$. The corresponding objective vector $F(x^*)$ in the objective space forms a Pareto optimal front point. For a Pareto optimal solution, the improvement in any objective will incur the deterioration of at least one other objective. The set of all Pareto optimal solutions is called the Pareto optimal set ($PS$), while the set of all Pareto optimal front vectors is called the Pareto optimal front ($PF$). The main goal of multiobjective optimization is to find the $PF$. However, in general, a Pareto front consists of a large number of points. Therefore, a good Pareto front approximation contains a limited number of points which should be as close as possible to the $PF$ and uniformly spread as well.
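As a minimal illustration of the dominance relation for the two minimization objectives used here (total tardiness and total compression cost), the following sketch compares two objective vectors; the function name is ours, not part of the paper.

from typing import Sequence

def dominates(u: Sequence[float], v: Sequence[float]) -> bool:
    """Return True if objective vector u Pareto-dominates v (all objectives minimized)."""
    # u must be no worse than v in every objective ...
    no_worse = all(a <= b for a, b in zip(u, v))
    # ... and strictly better in at least one objective.
    strictly_better = any(a < b for a, b in zip(u, v))
    return no_worse and strictly_better

# Example: equal tardiness but lower compression cost, so the first vector dominates.
print(dominates((2.0, 2.1), (2.0, 2.6)))  # True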

4.2. Brief Introduction of Backtracking Search Algorithm (BSA)

The original BSA is described before presenting the MODBSA; its main steps include five processes: initialization, Selection I, mutation, crossover, and Selection II [14]. Note that the original BSA was proposed for solving continuous problems; thus, the decision variables are real numbers.

4.2.1. Initialization

Randomly initialize a population $P$, which can be formulated as follows:

$$P_{i,j} = low_j + \mathrm{rnd} \cdot (up_j - low_j), \quad i = 1, \dots, N, \; j = 1, \dots, D,$$

where $N$ and $D$ are the population size and the number of decision variables, respectively, $\mathrm{rnd}$ is a real number uniformly distributed in $[0, 1]$, and $low_j$ and $up_j$ denote the lower and upper bounds for the $j$th decision variable of the $i$th solution.

4.2.2. Selection I

To obtain the search direction, the aim of BSA's Selection I stage is to determine the historical population $oldP$. The initial historical population is defined using the following formulation:

$$oldP_{i,j} = low_j + \mathrm{rnd} \cdot (up_j - low_j), \quad i = 1, \dots, N, \; j = 1, \dots, D.$$

BSA has the option to redefine $oldP$ at the beginning of each generation through the following rule: if $a < b$ then $oldP := P$, where $:=$ is the update (assignment) operation and $a$ and $b$ are real numbers uniformly distributed in $(0, 1)$. This rule means that BSA's historical population ($oldP$) comes from either the previous population $P$ or $oldP$ itself. Once $oldP$ is determined, a permutation function is adopted to randomly alter the positions of the individuals in $oldP$, that is, $oldP := \mathrm{permuting}(oldP)$.

4.2.3. Mutation

A trial population (i.e., offspring population) is generated by the mutation operation as follows:

$$\mathrm{Mutant} = P + F \cdot (oldP - P),$$

where $F$ is a scale factor which controls the amplitude of the search-direction matrix $(oldP - P)$; $F = 3 \cdot \mathrm{rnd}$, where $\mathrm{rnd}$ is a random real number. Since the historical population is employed in the calculation of the search-direction matrix, BSA takes advantage of previous generations to obtain a trial population.
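A minimal numpy sketch of this mutation step follows (illustrative only, not the authors' Java/jMetal implementation; the function name is ours, and the scale factor uses a common BSA choice of a scaled normal draw).

import numpy as np

def bsa_mutation(P: np.ndarray, oldP: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Generate the mutant population: Mutant = P + F * (oldP - P)."""
    # F scales the search-direction matrix (oldP - P); 3 * N(0, 1) is a common choice.
    F = 3.0 * rng.standard_normal()
    return P + F * (oldP - P)

rng = np.random.default_rng(0)
P = rng.random((5, 3))       # current population: 5 individuals, 3 decision variables
oldP = rng.random((5, 3))    # historical population
mutant = bsa_mutation(P, oldP, rng)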

4.2.4. Crossover

The final trial population $T$ is obtained by crossover in BSA, which works as follows. First, a binary integer-valued matrix (map) of dimension $N \times D$ is computed; it indicates which elements of $T$ are taken from $P$ and which are taken from $\mathrm{Mutant}$. Then, if $\mathrm{map}_{i,j} = 1$, where $i \in \{1, \dots, N\}$ and $j \in \{1, \dots, D\}$, $T_{i,j}$ is updated with $T_{i,j} := P_{i,j}$. Algorithm 1 provides the crossover operation of BSA.

Input: Mutant, mixrate, $P$, $N$, and $D$.
Output: $T$: Trial population.
(1) $\mathrm{map}_{1:N, 1:D} := 1$
(2) if $a < b \mid a, b \sim U(0, 1)$ then
(3)      for $i$ from 1 to $N$
(4)             $\mathrm{map}_{i, u(1:\lceil \mathrm{mixrate} \cdot \mathrm{rnd} \cdot D \rceil)} := 0$, where $u := \mathrm{permuting}(\langle 1, 2, \dots, D \rangle)$
(5)      end
(6) else
(7)      for $i$ from 1 to $N$
(8)             $\mathrm{map}_{i, \mathrm{randi}(D)} := 0$
(9)      end
(10) end
(11) $T := \mathrm{Mutant}$
(12) for $i$ from 1 to $N$
(13)           for $j$ from 1 to $D$
(14)                    if $\mathrm{map}_{i,j} = 1$ then $T_{i,j} := P_{i,j}$
(15)           end
(16) end

In Algorithm 1, $\lceil \cdot \rceil$ represents the ceiling function (in line 4). The parameter $\mathrm{rnd}$ indicates a random real number in $[0, 1]$. The parameter mixrate controls the number of elements of an individual that will mutate in a trial individual by using $\lceil \mathrm{mixrate} \cdot \mathrm{rnd} \cdot D \rceil$. Two strategies are employed to define BSA's map. In the first strategy, mixrate is used to define map (in lines 3–5). In the second strategy, only one randomly selected element is going to mutate (in lines 7–9).
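The crossover of Algorithm 1 can be sketched in Python as follows; this is an illustrative reimplementation for the continuous BSA, not the authors' code, and the function name is ours.

import math
import numpy as np

def bsa_crossover(P: np.ndarray, mutant: np.ndarray, mixrate: float,
                  rng: np.random.Generator) -> np.ndarray:
    """BSA crossover (Algorithm 1): mix elements of P back into the mutant population."""
    N, D = P.shape
    map_ = np.ones((N, D), dtype=int)
    if rng.random() < rng.random():
        # Strategy 1: mixrate controls how many elements per individual mutate (stay at 0).
        for i in range(N):
            u = rng.permutation(D)
            k = math.ceil(mixrate * rng.random() * D)
            map_[i, u[:k]] = 0
    else:
        # Strategy 2: exactly one randomly chosen element per individual mutates.
        for i in range(N):
            map_[i, rng.integers(D)] = 0
    T = mutant.copy()
    T[map_ == 1] = P[map_ == 1]   # where map == 1, take the value from P
    return T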

4.2.5. Selection II

In the stage of BSA's Selection II, a trial individual $T_i$ is used to replace $P_i$ if it is better than $P_i$ in terms of the fitness value. If the best solution in $P$ outperforms the global optimal solution found so far, the global optimal solution is replaced by this best solution [14].

4.3. Framework of MODBSA for SSPWCPT

The original BSA is designed for addressing continuous single-objective optimization problems. However, the problem under consideration is a multiobjective combinatorial NP-hard issue. This paper is aimed at developing a MODBSA to solve SSPWCPT. A basic flowchart of MODBSA is shown in Figure 1.

The framework of the MODBSA consists of 6 steps summarized below.

Input. (i) A stopping condition; (ii) $N$: the population size; (iii) other parameters.

Output. The Pareto front ($PF$) and the Pareto solutions ($PS$) stored in an external archive.

Step 1 (initialization). Generate the initial and historical populations according to the proposed solution representation (Section 4.4.1) and initialization scheme (Section 4.4.2).

Step 2 (stopping condition). If the stopping criterion is met, then stop and output $PF$ and $PS$. Otherwise, go to Step 3.

Step 3 (Selection I). Calculate the selection probability of each population based on the adaptive selection mechanism, which selects a suitable population as the historical population. Section 4.4.3 gives details of the adaptive selection mechanism.

Step 4 (update). Perform update operation on the population. This update operator is described in Section 4.4.4.

Step 5 (total cost reduction strategy). The total cost reduction strategy can improve exploitation of the algorithm. This strategy is presented in Section 4.4.5.

Step 6 (Selection II). This stage of the MODBSA is different from that of the basic BSA. The offspring solutions are evaluated with regard to both fitness values (i.e., total tardiness and total compression cost). First, all the individuals in the population are sorted according to a fast nondominated sorting technique, which can be described as follows. For each solution $p$ we compute two entities: (1) the domination count $n_p$, the number of solutions that dominate solution $p$, and (2) $S_p$, the set of solutions that solution $p$ dominates. The domination count of all solutions in the first nondominated level is equal to zero. Then, for each solution $p$ with $n_p = 0$, we visit each individual $q$ in its set $S_p$ and reduce its domination count by one. When the domination count of an individual becomes zero, it is put into a separate set whose individuals belong to the second nondominated level. This procedure continues until all nondominated levels are identified; that is, each individual has a rank equal to its nondominance level. Then, within each front or rank, a crowding distance strategy is used to define an ordering among individuals. To achieve widely spread Pareto fronts (solutions differently balancing total tardiness and total compression cost), individuals with a large crowding distance are preferred over ones with a smaller crowding distance when they are at the same nondominated level. The external archive is used to store the nondominated solutions found so far and has a maximum size. To obtain a $PF$ with a uniform spread, the crowding distance technique is also employed to remove the solutions with the worst crowding distance from the archive when the archive is full.
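A compact sketch of the fast nondominated sorting described above follows (illustrative; helper names are ours, the dominance test is restated for self-containment, and the crowding distance step is omitted for brevity).

def dominates(u, v):
    """True if u Pareto-dominates v (minimization in all objectives)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def fast_nondominated_sort(objs):
    """Split a list of objective vectors into nondominated fronts (NSGA-II style)."""
    n = len(objs)
    S = [[] for _ in range(n)]      # S[p]: indices of solutions dominated by p
    n_dom = [0] * n                 # n_dom[p]: number of solutions dominating p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(objs[p], objs[q]):
                S[p].append(q)
            elif dominates(objs[q], objs[p]):
                n_dom[p] += 1
        if n_dom[p] == 0:           # p belongs to the first nondominated level
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n_dom[q] -= 1
                if n_dom[q] == 0:   # q moves to the next nondominated level
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]              # drop the trailing empty front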

4.4. The Main Improvements for the Proposed MODBSA on SSPWCPT

In this subsection, a new solution representation is stated and the main improvement strategies are elaborated.

4.4.1. Solution Representation

One of the important issues when applying an MOEA lies in its solution representation, where individuals include information associated with the problem considered. Unlike other scheduling problems with fixed processing times, the SSPWCPT has to deal with the job sequence and the amount of compression of the job processing times simultaneously. Thus, we propose a new encoding scheme which contains two parts. The first part represents the job permutation, that is, $\pi = (\pi(1), \pi(2), \dots, \pi(n))$. The second one denotes the amounts of compression of the job processing times, that is, $x = (x_1, x_2, \dots, x_n)$. Although Nearchou [8] presented a two-part encoding for this problem, he adopted subrange keys based on a random key encoding scheme, which generates information redundancy. The proposed scheme effectively avoids this information redundancy.

To illustrate this solution representation, Figure 2 gives an example for a 5-job instance. The solution has two parts: (1) the job permutation and (2) the amounts of compression of the job processing times. There is a corresponding relationship between the two parts: each job has a corresponding amount of compression. For the job in the $k$th position, denoted by $\pi(k)$, the corresponding amount of compression is $x_{\pi(k)}$. In the example, the amount of compression of the job in the 1st position is 2, and that of the job in the 2nd position is 1. In this manner, a feasible schedule is easily generated. In addition, this encoding scheme has two advantages: (1) the solution structure is simple, as it contains only the job sequence and the amounts of compression of the job processing times; (2) in general, this discrete encoding scheme reduces information redundancy compared with previous random key encoding schemes such as subrange keys [8]. This view is further verified in Section 5.4.
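To make the decoding concrete, the following sketch evaluates a two-part solution against the two objectives, using the quantities from the Notations ($p_j$, $c_j$, $d_j$, $x_j$); the function name and the numeric data are illustrative and do not reproduce the Figure 2 instance.

def evaluate(perm, comp, p, c, d):
    """Compute (total tardiness, total compression cost) for a two-part solution.

    perm: job indices in processing order; comp[j]: amount of compression of job j;
    p[j], c[j], d[j]: normal processing time, unit compression cost, due date of job j.
    """
    t = 0.0
    total_tardiness = 0.0
    for j in perm:
        t += p[j] - comp[j]                    # actual processing time of job j
        total_tardiness += max(0.0, t - d[j])  # tardiness of job j
    total_cost = sum(c[j] * comp[j] for j in perm)
    return total_tardiness, total_cost

# Hypothetical 5-job data; the first two jobs in the sequence are compressed by 2 and 1 units.
p = [4, 5, 3, 6, 2]; c = [1.0, 0.8, 1.5, 2.0, 0.6]; d = [4, 8, 12, 16, 20]
print(evaluate(perm=[0, 1, 2, 3, 4], comp=[2, 1, 0, 0, 0], p=p, c=c, d=d))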

4.4.2. Initialization

The MODBSA begins with a population of initial individuals. As stated previously, each solution is constructed by a permutation vector and a vector of compression amounts of processing times. To ensure high quality and good diversity of solutions, the job permutation of one individual is based on the nondescending order of due dates [33]. In detail, this is based on the property that if jobs $i$ and $j$ satisfy $d_i \le d_j$ and $p_i \le p_j$, then there exists an optimal processing sequence in which job $i$ precedes job $j$. The amounts of compression of another individual are set to zero in order to minimize the total compression cost. The other initial individuals are randomly generated in the feasible ranges, where $\pi_i(k)$ denotes the $k$th job of the $i$th individual to be processed on the machine and $x_{i,j}$ represents the amount of compression for job $j$ in the $i$th individual, drawn uniformly from $\{0, 1, \dots, u_j\}$.
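A sketch of this seeding strategy follows (one due-date-ordered individual, one zero-compression individual, the rest random); the exact pairing of permutation and compression parts in the two seeded individuals is our assumption, and all names are illustrative.

import random

def init_population(N, u, d, rng=random):
    """Build N individuals, each a (permutation, compression-amount) pair."""
    n = len(d)
    pop = []
    # Individual 1: jobs in nondescending order of due dates, random compression amounts.
    edd = sorted(range(n), key=lambda j: d[j])
    pop.append((edd, [rng.randint(0, u[j]) for j in range(n)]))
    # Individual 2: random permutation, zero compression (zero total compression cost).
    rand_perm = list(range(n)); rng.shuffle(rand_perm)
    pop.append((rand_perm, [0] * n))
    # Remaining individuals: fully random within the feasible ranges.
    while len(pop) < N:
        perm = list(range(n)); rng.shuffle(perm)
        comp = [rng.randint(0, u[j]) for j in range(n)]
        pop.append((perm, comp))
    return pop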

4.4.3. Selection I Based on the Adaptive Mechanism

The dual population scheme is a core idea of the BSA. The choice between the dual populations is based on a random selection scheme, which helps the algorithm maintain population diversity. However, this strategy cannot ensure good convergence toward the optimal solutions, since too much emphasis on diversity would lead to a purely random search [15]. To improve convergence performance, an adaptive selection scheme is presented, which selects a suitable population as the current historical population. To simplify the calculation of the selection probability of a population becoming the current historical population, we record which population is chosen to participate in the update operation. After the initial population and historical population are generated, a population is selected as the current historical population based on (10), so that each population has an equal selection probability. Afterwards, the current historical population is updated by the adaptive selection scheme during the search process. Let $a$ ($a$ is not a random number at this time) represent the probability of updating the historical population $oldP$ by replacing it with the current population $P$. The adaptive selection scheme can be stated as follows.

Step 1. After the population is updated, calculate the update probability of the historical population, namely, $a = n_{nd} / N_{nd}$, where $n_{nd}$ is the number of nondominated solutions contributed by the corresponding population and $N_{nd}$ represents the total number of nondominated solutions in the external archive at the current iteration. The update of $oldP$ is executed by simply replacing $oldP$ with the current population $P$.

Step 2. Use the roulette-wheel approach to select one population according to these probabilities.

This selection strategy is simple yet efficient for improving the performance of MODBSA. It implies that the selection probability of a population is proportional to the number of nondominated solutions it contributes. To avoid the situation where all solutions are obtained from the same population throughout the iterations, each population has a minimum selection probability $p_{\min}$. That is, after the calculation of the selection probability of a population, if it is smaller than $p_{\min}$, it is set to $p_{\min}$. In this work, a small fixed value of $p_{\min}$ is used.

Before updating the population, two parents are selected: one from the current $oldP$ (note that the current $oldP$ is either the previous $P$ or $oldP$ itself according to the adaptive mechanism) and the other from $P$. In addition, the efficiency of this adaptive scheme is demonstrated in Section 5.5.
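The adaptive selection can be sketched as follows; the probability floor value and the bookkeeping of per-population nondominated counts are illustrative assumptions, since the paper's exact values are not reproduced here.

import random

def select_historical_population(nd_counts, p_min=0.1, rng=random):
    """Pick which population serves as oldP, proportionally to its share of
    nondominated solutions in the archive, with a minimum probability floor.

    nd_counts: dict mapping a population label to the number of nondominated
    archive members it contributed; p_min is an illustrative floor value."""
    total = sum(nd_counts.values())
    probs = {k: (v / total if total > 0 else 1.0 / len(nd_counts)) for k, v in nd_counts.items()}
    probs = {k: max(v, p_min) for k, v in probs.items()}   # enforce the floor
    # Roulette-wheel selection over the relative weights (choices normalizes them).
    keys = list(probs)
    weights = [probs[k] for k in keys]
    return rng.choices(keys, weights=weights, k=1)[0]

# Example: population P contributed 8 archive members, oldP contributed 2.
print(select_historical_population({"P": 8, "oldP": 2}))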

4.4.4. Update Operator

It is evident that the crossover and mutation operators of the original BSA are not suitable for solving the SSPWCPT. To overcome this problem, we replace them with traditional permutation-based crossover and mutation operators.

Crossover can explore unknown areas of the solution search space. For the first part (i.e., π), partially mapped crossover (PMX) [34], which has been widely used in the scheduling field, is adopted to update the permutation part. For the second part, two-point crossover is used to update the amounts of compression of the processing times. The detailed steps of the crossover are as follows (a code sketch is given after the steps).

For the first part (see Figure 3), consider the following.

Step 1. Select the substring: randomly generate two crossover points and define the substring between two points as matching area (i.e., yellow area).

Step 2. Exchange the substrings: generate temporary offspring by swapping the matching areas of the two parents. Note that only the elements in the matching area (yellow area) are exchanged. The temporary offspring may be infeasible after this exchange; for instance, job 5 appears twice in temporary offspring 1 and, similarly, job 1 appears twice in temporary offspring 2.

Step 3. Mapping relationship: determine the mapping relationship of the elements in conflict. When the same job is assigned more than once, the mapping defined by the elements of the crossover segments is used.

Step 4. Legalize the offspring: make the job permutation part feasible by using the information from the mapped relationship without any changes to the substring (keep the substring unchanged).

For the second part (see Figure 4), consider the following.

Step 1. Select the substring: randomly generate two points and define the substring between two points as exchange area.

Step 2. Exchange the substring: exchange the substrings between the two points.
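The two-part crossover can be sketched as follows: PMX on the permutation part and a two-point exchange on the compression part. This is an illustrative reimplementation with our own helper names, not the authors' code.

import random

def pmx(parent1, parent2, rng=random):
    """Partially mapped crossover on permutations: swap a segment, then repair
    duplicates outside the segment via the mapping defined by the segment."""
    n = len(parent1)
    a, b = sorted(rng.sample(range(n), 2))
    child = parent1[:]
    child[a:b + 1] = parent2[a:b + 1]                 # exchange the matching area
    mapping = {parent2[i]: parent1[i] for i in range(a, b + 1)}
    for i in list(range(0, a)) + list(range(b + 1, n)):
        gene = parent1[i]
        while gene in child[a:b + 1]:                 # follow the mapping until legal
            gene = mapping[gene]
        child[i] = gene
    return child

def two_point_exchange(x1, x2, rng=random):
    """Two-point crossover on the compression-amount vectors."""
    n = len(x1)
    a, b = sorted(rng.sample(range(n), 2))
    return x1[:a] + x2[a:b + 1] + x1[b + 1:]

# Usage: child_perm = pmx(perm1, perm2); child_comp = two_point_exchange(comp1, comp2)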

The mutation operator assists the algorithm to escape from local optima. In this research, the mutation operator is composed of two mutation techniques, each applied with a probability of 0.5. That is, when the mutation operation is performed, one of the two techniques is executed; the algorithm performs either the first technique or the second technique.

The first technique, called swap mutation, is only applied to the permutation part (i.e., the first part). The second one is only applied to the amount of resource compression part (i.e., the second part). To explain this mutation operator, an example is illustrated in Figure 5. For the first mutation technique, as presented in Figure 5(a), two positions are randomly selected (e.g., the 2nd and 4th positions) and their corresponding jobs (job 1 and job 3 in the example) are swapped, while the second part remains unchanged. For the second mutation technique, as shown in Figure 5(b), two positions are first randomly selected (e.g., the 2nd and 4th positions), and then new feasible integer values are generated to replace the original values. That is, the amount of compression of job 2 in the 2nd position is updated to a new value 1, while the amount of job 4 in the 4th position is replaced by a new value 2. These new values are randomly generated within their own ranges. Therefore, the offspring after the update operator is still feasible.
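A sketch of the two mutation techniques follows, each chosen with probability 0.5; indexing the compression amounts by job (as in the earlier evaluation sketch) is our convention, and u[j] is the maximum compression of job j from the Notations.

import random

def mutate(perm, comp, u, rng=random):
    """Apply one of two mutation techniques, each with probability 0.5."""
    perm, comp = perm[:], comp[:]
    i, j = rng.sample(range(len(perm)), 2)      # two random positions
    if rng.random() < 0.5:
        # Technique 1: swap the jobs at the two positions (permutation part only).
        perm[i], perm[j] = perm[j], perm[i]
    else:
        # Technique 2: redraw feasible compression amounts for the jobs at the two positions.
        comp[perm[i]] = rng.randint(0, u[perm[i]])
        comp[perm[j]] = rng.randint(0, u[perm[j]])
    return perm, comp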

4.4.5. Total Cost Reduction Strategy

Metaheuristics are usually combined with local search approaches, which may assist in searching for good solutions since they introduce an element of greediness within the metaheuristic [35]. In this paper, however, no local search strategy is applied in the proposed algorithm. Instead, a release cost procedure is developed to further improve the quality of solutions. This procedure does not change the job processing sequence but reduces the total compression cost while keeping the total tardiness unchanged. According to the characteristics of the problem, it can be observed that by adjusting the amount of compression of the processing time of a job, the compression cost can be further reduced without affecting the total tardiness. Thus, the procedure can improve the quality of a solution for a given job sequence to some extent. The computational complexity of this heuristic is polynomial in the number of jobs. The main steps of the proposed total cost reduction strategy are given below (a code sketch follows the worked example).

Step 1. Start from the first job in the sequence and initialize two empty sets: $RC$, the set of release costs, and $E$, the earliness set of the jobs that follow the job under consideration.

Step 2. Move to the next compressed job in the sequence. If no such job remains, go to Step 4; otherwise go to Step 3.

Step 3. Perform the following loop over the jobs that follow the current job, until all of them have been examined or a tardy job is encountered.

Step 3.1. Compute the earliness time of the next following job and add it to the set $E$.

Step 3.2. Obtain the current minimum earliness in $E$ and calculate the corresponding release cost of the current job, and then put this value into the set $RC$.

Step 4. If $RC$ is not empty, find the job with the maximum release cost value in $RC$. Update its amount of compression and the completion times of the jobs following it.

To explain this total reduction strategy, an example is given below.

Example 1. Consider the 5-job instance in Table 1 and Figure 6, with a given job sequence and a given amount vector of compression of processing times. The corresponding total tardiness and total compression cost are 2 and 2.6, respectively. The total cost reduction strategy proceeds through the following steps.

Step 1. Initialize $RC$ and $E$ as empty sets and start from the first job in the sequence.

Step 2. Perform the check for the first compressed job and go to Step 3.

Step 3. Conduct the following operations.

Step 3.1. Compute the earliness times of the jobs following the current job and update the set $E$.

Step 3.2. Obtain the minimum earliness found so far, compute the corresponding release cost, and put the value into the set $RC$.

Step 2. Perform the check for the next compressed job and go to Step 3.

Step 3. Conduct the following operations.

Step 3.1. Compute the earliness times of the following jobs and update $E$.

Step 3.2. Obtain the minimum earliness, compute the release cost, and add it to $RC$.

Step 2. No further job needs to be examined; go to Step 4.

Step 4. Select the job with the maximum release cost value from $RC$ and update its amount of compressed processing time. Meanwhile, the corresponding total tardiness and total compression cost are updated to 2 and 2.1, respectively. Note that the total tardiness is unchanged while the total compression cost is reduced.
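A simplified sketch of the idea behind the total cost reduction strategy follows (not the exact Step 1–4 bookkeeping): for each compressed job, the compression that can be released without increasing total tardiness is bounded by the smallest positive slack among that job and the jobs after it, and the release with the largest cost saving is applied. Variable names and the single-pass structure are ours.

def release_compression_once(perm, comp, p, c, d):
    """One pass of the cost-release idea: find the compressed job whose compression
    can be (partly) undone with the largest cost saving and no tardiness increase."""
    # Completion times under the current compression amounts.
    C, t = {}, 0.0
    for j in perm:
        t += p[j] - comp[j]
        C[j] = t
    best_job, best_saving, best_delta = None, 0.0, 0
    for k, j in enumerate(perm):
        if comp[j] == 0:
            continue
        # Slack of job j and every job after it; shifting any already-tardy job
        # (or shifting right by more than the smallest slack) would raise total tardiness.
        slacks = [d[q] - C[q] for q in perm[k:]]
        if min(slacks) <= 0:
            continue
        delta = min(comp[j], int(min(slacks)))
        saving = c[j] * delta
        if delta > 0 and saving > best_saving:
            best_job, best_saving, best_delta = j, saving, delta
    if best_job is not None:
        comp[best_job] -= best_delta   # same total tardiness, lower compression cost
    return comp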

5. Experimental Study

This section is devoted to assessing the performance of the proposed MODBSA on the SSPWCPT. The experimental studies include the following four aspects:

(1) Evaluate the efficiency of the proposed solution representation.
(2) Evaluate the efficiency of the adaptive selection scheme in the MODBSA.
(3) Test the performance of the total cost reduction strategy of the MODBSA.
(4) Compare the MODBSA with other MOEAs on the instances.

In the following subsections, performance metrics, test function, and parameter setting are described at first, and then the experimental studies are further investigated step by step.

5.1. Performance Metrics

As mentioned previously, for MOPs the final result is usually not a single optimal value but rather a set of optimal solutions. To illustrate this, the parameters of a specific instance are provided in Table 2.

The $PF$ obtained by MODBSA on this instance is shown in Figure 7. Some nondominated solutions and the corresponding objective values are summarized in Table 3. It can be observed that the obtained results consist of trade-off solutions. It is also interesting that two nondominated solutions correspond to the same front point (i.e., solution 2). Unlike in single-objective problems, high quality results for an MOP not only have good convergence but also an even distribution along the $PF$. Therefore, how to evaluate the results found by MOEAs is important for users.

To evaluate the results obtained by the MOEAs, the following metrics, including the Spread [36], GD, and IGD [37], are adopted.

(1) Spread ($\Delta$). It is a diversity performance index that assesses the distribution of the obtained solutions in the front. This metric can be formulated as follows:

$$\Delta = \frac{\sum_{k=1}^{m} d_k^{e} + \sum_{i=1}^{|PF|} \left| d_i - \bar{d} \right|}{\sum_{k=1}^{m} d_k^{e} + |PF| \cdot \bar{d}},$$

where $d_i$ is the Euclidean distance of each point in the obtained $PF$ to its closest point in the same front, $\bar{d}$ represents the mean value of all $d_i$, $d_k^{e}$ denotes the Euclidean distance between the extreme solution in the $k$th objective and the boundary solutions of the obtained $PF$, $|PF|$ is the number of obtained solutions, and $m$ is the number of objectives. If the Spread value is zero, then all the members of the Pareto optimal front are evenly spaced. Lower values indicate better distribution and diversity.

(2) Generational Distance (GD). It is a convergence indicator, which represents how far the obtained $PF$ is from the true front $PF^{*}$. It can be formulated as follows:

$$GD = \frac{\sqrt{\sum_{i=1}^{n_{PF}} d_i^{2}}}{n_{PF}},$$

where $n_{PF}$ is the number of points in the obtained $PF$ and $d_i$ is the Euclidean distance between the $i$th member of the obtained $PF$ and the nearest member of $PF^{*}$. A low GD value represents good convergence performance. A normalization method is used in this metric.

(3) Inverse Generational Distance (IGD). It is a variant of the GD but represents a combined or comprehensive indicator. It measures the average distance from each solution of the optimal Pareto front to the obtained front. IGD can be defined as follows:

$$IGD = \frac{\sum_{v \in PF^{*}} d(v, PF)}{|PF^{*}|},$$

where $|PF^{*}|$ is the number of points in the optimal Pareto front and $d(v, PF)$ is the Euclidean distance between $v$ and the nearest member of the obtained approximation. Fronts with a lower IGD value are desirable. This metric also uses a normalization method.
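For reference, the GD and IGD computations defined above can be sketched as follows (normalization omitted for brevity; helper names are ours).

import math

def _dist(a, b):
    """Euclidean distance between two objective vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def gd(front, reference):
    """Generational distance from the obtained front to the reference front PF*."""
    return math.sqrt(sum(min(_dist(a, r) for r in reference) ** 2 for a in front)) / len(front)

def igd(front, reference):
    """Inverse generational distance: average distance from each PF* point to the obtained front."""
    return sum(min(_dist(r, a) for a in front) for r in reference) / len(reference)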

It should be mentioned that the true $PF^{*}$ of the studied problem is unknown; therefore, the nondominated solutions obtained by all the compared MOEAs on each instance over all independent runs are regarded as the reference $PF^{*}$ for that instance [38].

5.2. Description of Test Function

The generated instances are defined as shown in Table 4. There are six different numbers of jobs ($n = 10, 30, 50, 80, 100, 200$), and the normal processing times and the crash processing times are drawn from discrete uniform distributions. The parameter $r$ used in the due date calculation takes discrete values from 0.2 to 1.0 with a step size of 0.2. The unit cost of compression is generated from a uniform distribution ranging between 0.5 and 2.5. Each instance is labelled in the form "n_r"; for example, "10_02" denotes a problem with 10 jobs and $r$ equal to 0.2.

5.3. Experimental Settings

All algorithms are coded in Java on the platform jMetal [39]. Experimental tests are implemented on a computer with Intel Core i5, 2.39 GHz, 4 GB RAM, with a Windows 8 operating system.

Parameter settings can affect the performance of the algorithm. The pilot experiments demonstrated that the population size and archive size were sensitive to the problem scale. Therefore, for various variants of MODBSA in Sections 5.4–5.6, the maximum number of function evaluations (NFEs) is 25,000 for 10-job and 30-job instances, 35,000 for 50-job and 80-job instances, and 45,000 for 100-job and 200-job instances. The population size and the external archive size are set to 50 for 10-job and 30-job instances, 80 for 50-job and 80-job instances, and 100 for 100-job and 200-job instances. The historical population size is equal to the population size. In Section 5.7, the parameter settings of MODBSA and its compared MOEAs can be found in related subsection. Each experiment was conducted 30 independent times on each test problem for each algorithm.

The best results are highlighted in bold in Tables 5, 7, 9, 12, 13, and 14. Due to the stochastic nature of all candidate MOEAs, statistical analysis is necessary to provide confident comparisons. A Wilcoxon signed rank test [38, 40] is used to test the significance of differences between the results obtained by different algorithms. The confidence level for all tests is set to 95% (corresponding to $\alpha = 0.05$). The sign "+" indicates that our proposed MODBSA performs significantly better than the second best algorithm on average, "−" indicates that MODBSA is significantly worse than the best algorithm, and "=" denotes that there is no significant difference between MODBSA and the best or second best MOEA. $R^{+}$ represents the sum of ranks for the problems on which MODBSA performs better than its competitor, and $R^{-}$ denotes the sum of ranks for the opposite.

5.4. Efficiency of Solution Representation

To test the efficiency of the proposed solution representation, it is compared with subrange keys for MODBSA on medium and large scale problems. In this study, the variant of MODBSA based on subrange keys is referred to as the subrange-key variant. More detailed information on subrange keys can be found in Nearchou [8]. Since the subrange key is a real-coded scheme, the operators in this variant are different from those in MODBSA. The update operators of the subrange-key variant are as follows: simulated binary crossover (SBX) and polynomial mutation are used, and the distribution indexes in both SBX and polynomial mutation are set to 20. The crossover rate is 0.9 and the mutation rate is 0.2. In addition, the proposed total cost reduction strategy is also adopted in this variant, but it requires converting the real-coded scheme to the discrete-coded scheme in this phase. MODBSA uses the proposed solution representation, with a crossover rate of 0.9 and a mutation rate of 0.2. Table 5 shows the mean and standard deviation of the metrics for these algorithms over 30 independent runs. Table 6 reports the significance test results over 30 runs.

Table 5 reveals that MODBSA obtains the best results on 13, 5, and 20 out of 20 test instances for the GD, Spread, and IGD metrics, while the subrange-key variant achieves the best values on 7, 15, and 0 problems, respectively. Table 6 records the values of the Wilcoxon signed rank test. We can clearly observe from Table 6 that MODBSA has higher "+" counts than its compared algorithm in terms of GD and IGD, meaning that MODBSA is significantly better than the subrange-key variant for the GD and IGD metrics. This may be because the proposed discrete encoding scheme avoids information redundancy during the search process compared with the encoding scheme based on subrange keys. Meanwhile, MODBSA is significantly worse than the subrange-key variant in terms of the Spread metric on most instances. The reason is that the encoding scheme based on subrange keys may have a greater chance to search different areas of the search space and thus improve search diversity, although it leads to information redundancy.

5.5. Efficiency of Adaptive Strategy

To test the efficiency of the adaptive mechanism in MODBSA, we compare MODBSA with a variant of MODBSA based on a random selection mechanism on the 20 medium and large scale problems; this variant is referred to as the random-selection variant. Note that the adaptive mechanism is included in MODBSA. The other parameter settings of both MOEAs are the same for a fair comparison. Table 7 reports the statistical metrics for the two strategies over 30 independent runs. Table 8 summarizes the values of the Wilcoxon signed rank test.

From Tables 7 and 8, it can be observed that MODBSA completely dominates the random-selection variant in terms of the IGD metric on most instances. However, this advantage no longer exists when only the Spread metric is considered. The poorer distribution performance of MODBSA may be associated with the characteristics of the problem. More specifically, this type of scheduling problem may be a multimodal optimization issue which contains several optimal solutions corresponding to the same objective value (e.g., the second nondominated solution in Table 3), so the distribution of points along the Pareto fronts is very crowded. MODBSA obtains better results than the random-selection variant in terms of the GD metric. In summary, the proposed MODBSA based on the adaptive selection mechanism is superior to the random-selection variant on most instances, which means that adaptive selection can enhance search efficiency. Besides, the results computed by the proposed MODBSA are more stable, which indicates that the adaptive selection strategy can strengthen the stability of MODBSA.

5.6. Efficiency of Total Cost Reduction in Proposed Algorithm

To verify the efficiency of the MODBSA with the total cost reduction strategy, it is compared with a variant of MODBSA without the total cost reduction strategy. MODBSA itself includes the total cost reduction strategy. The parameter settings of both MOEAs are the same as in the above experiments. Table 9 records the statistical metrics for the two algorithms over 30 independent runs. Table 10 shows the significance test results based on the best metrics for each instance over 30 independent runs.

Table 9 shows that MODBSA is superior or competitive to the variant without the total cost reduction strategy in terms of all metrics on most instances. From Table 10, it can be clearly observed that the MODBSA with the total cost reduction strategy performs significantly better than the one without this strategy on most instances. This means that MODBSA with the total cost reduction strategy has better convergence and coverage performance, and it also implies that the exploitation ability can be improved by adopting the total cost reduction technique in MODBSA for solving the SSPWCPT.

5.7. Comparison MODBSA with Other Algorithms

To further assess the performance of MODBSA on these scheduling problems, MODBSA is compared with the well-known MOEAs NSGA-II [36], PAES [41], and SPEA2 [42]. To fit the characteristics of the addressed problem and to make a fair comparison, we modified these MOEAs. All MOEAs use the same population size and NFEs as stated in Section 5.3. Moreover, the initial population is generated based on the proposed encoding scheme and initialization strategy for all MOEAs. All MOEAs adopt the same operators, including crossover, mutation, and total cost reduction, as described in this paper, provided they have corresponding operators. The other parameters are summarized in Table 11. 30 independent runs are performed for each MOEA on each test instance.

Tables 12–14 show the statistical results for GD, Spread, and IGD. From these tables, we can observe that the proposed MODBSA outperforms its counterparts on most instances. Especially on the comprehensive metric IGD and the convergence metric GD, the advantage of MODBSA is overwhelming except for several problems. MODBSA is also competitive with NSGA-II with regard to the Spread metric. In addition, from Table 15, MODBSA shows a significant improvement over the other MOEAs at the chosen level of significance in terms of the IGD metric. The major reasons for the good performance of MODBSA can be explained as follows. First, the adoption of the dual population strategy improves the diversity of the population, since different populations may have different search directions, and thus MODBSA can maintain good diversity in the search space. Second, to boost convergence performance, the adaptive selection mechanism selects an appropriate population as the parent population for generating new candidate solutions according to the search environment and thus improves search efficiency. Third, to further enhance convergence, the total cost reduction strategy is proposed to improve solution quality. Therefore, we can conclude that these strategies have a positive effect on the behavior of the algorithm.

Figure 8 presents the approximations found in the run with the best IGD value of each MOEA for three instances of small, medium, and large scale. It is evident from Figure 8(a) that, although all MOEAs can find good approximations with regard to convergence on the small scale problem, MODBSA is capable of covering more areas than the other MOEAs. As depicted in Figure 8(b), MODBSA shows better convergence and coverage performance on the medium scale problem, while the other MOEAs tend to fall into local optima. The outperformance of MODBSA can be attributed to the adaptive mechanism, by which MODBSA can search for preferable solutions in different directions to enhance search diversity. We can also observe from Figure 8(c) that MODBSA has good convergence performance compared with the other MOEAs. The good performance of MODBSA on medium and large scale problems may be based on the fact that the dual population helps to balance the exploitation and exploration of MODBSA. Figure 8 not only presents the good convergence of MODBSA but also illustrates its widespread coverage. In addition, MODBSA can find more nondominated solutions than the other MOEAs, which implies good exploration ability. Therefore, we can conclude that MODBSA is very suitable for addressing this type of scheduling problem.

The statistical results are plotted as boxplots in Figure 9. The vertical axis of each subfigure represents the IGD value and the horizontal axis represents the different MOEAs. A lower position of the box denotes better performance; the narrower the box, the more stable the corresponding algorithm. Clearly, MODBSA is dominant without any exception in terms of the IGD metric for the three instance scales, which is consistent with the previous numerical analysis and our view that MODBSA outperforms the other MOEAs considered for the SSPWCPT. The reasons behind the good performance of MODBSA are as follows. First, the dual population scheme gives MODBSA better exploration ability, since it has a greater chance to search different unknown areas of the search space. Second, the total cost reduction strategy improves the convergence of MODBSA, since the quality of a solution is improved when the total cost criterion is reduced while the tardiness criterion remains unchanged.

6. Conclusions and Future Work

In this paper, a multiobjective single machine scheduling problem with CPT is studied, with the objective of minimizing the total tardiness and the total compression cost simultaneously. To solve this multiobjective problem, a new multiobjective discrete backtracking search algorithm (MODBSA) is proposed. In MODBSA, a new solution representation is developed to adapt to the characteristics of the problem, and experimental results show its validity. To improve search diversity and efficiency, two improvement strategies are incorporated into MODBSA. First, an adaptive selection scheme is designed to select a suitable population for the update operation. Second, a total cost reduction strategy is embedded into MODBSA to enhance its exploitation ability. The efficiency of each improvement strategy is separately validated by experimental studies. MODBSA is also compared with NSGA-II, SPEA2, and PAES on 30 instances, and the empirical results demonstrate that MODBSA outperforms its rivals on most instances. In conclusion, the main contributions of this work are at least threefold:

(1) A multiobjective mathematical model of the scheduling problem with CPT is constructed, and a new multiobjective backtracking search algorithm is developed to solve it.
(2) A new solution representation is provided for scheduling problems with CPT, which also offers a new encoding scheme for solving scheduling problems.
(3) The adaptive selection strategy and the total cost reduction strategy both improve the performance of MODBSA on the instances: the adaptive selection mechanism helps to enhance the exploration ability of MODBSA, while the total cost reduction strategy improves its exploitation ability.

With respect to future work, an interesting issue worth studying is extending MODBSA to more complex scheduling problems with CPT, such as the dynamic job shop scheduling problem. Another research direction is to further improve the search efficiency of the algorithm by incorporating heuristics based on problem properties.

Notations

Parameters
$n$: Number of jobs
$j$: Job index ($j = 1, 2, \dots, n$)
$\pi(k)$: Job in the $k$th position in a sequence
$\pi$: Processing sequence of jobs, namely, $\pi = (\pi(1), \pi(2), \dots, \pi(n))$
$p_j$: Normal processing time of job $j$
$\underline{p}_j$: Crash (minimum allowable) processing time of job $j$
$u_j$: Maximum amount of compression of the processing time of job $j$, namely, $u_j = p_j - \underline{p}_j$
$p_j - x_j$: Actual processing time of job $j$
$c_j$: Unit cost of compression of the processing time of job $j$
$d_j$: Due date of job $j$
$T_j$: Tardiness of job $j$, namely, $T_j = \max(0, C_j - d_j)$, where $C_j$ is the completion time of job $j$
$s_j$: Start time of job $j$
$M$: A sufficiently large positive number.
Decision Variables
$x_j$: Amount of compression of the processing time of job $j$
$y_{ij}$: Set to 1 if job $i$ precedes job $j$; set to 0 otherwise.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research work was supported by the National Natural Science Foundation of China (NSFC) under Grant nos. 51375004 and 51435009, National Key Technology Support Program under Grant no. 2015BAF01B04, and Youth Science & Technology Chenguang Program of Wuhan under Grant no. 2015070404010187.