Abstract

A novel multiobjective memetic algorithm based on decomposition (MOMAD) is proposed to solve the multiobjective flexible job shop scheduling problem (MOFJSP), which simultaneously minimizes the makespan, the total workload, and the critical workload. First, a population is initialized by an integration of different machine assignment and operation sequencing strategies. Second, the multiobjective memetic algorithm based on decomposition is constructed by introducing a local search into MOEA/D. The Tchebycheff approach of MOEA/D converts the three-objective optimization problem into several single-objective optimization subproblems, and the weight vectors are grouped by K-means clustering. Some good individuals corresponding to different weight vectors are selected by the tournament mechanism of the local search. In the experiments, the influence of three different aggregation functions is studied first. Moreover, the effect of the proposed local search is investigated. Finally, MOMAD is compared with eight state-of-the-art algorithms on a series of well-known benchmark instances, and the experimental results show that the proposed algorithm outperforms, or at least performs comparably to, the other algorithms.

1. Introduction

The job shop scheduling problem (JSP), in which a set of jobs is processed on a set of machines, is one of the most important and difficult problems in the field of manufacturing. Each job consists of a sequence of successive operations, and each operation must be processed on one specific machine. Different from JSP, in which an operation may only be processed on its predetermined machine, the flexible job shop scheduling problem (FJSP) permits an operation to be processed by any machine from its available machine set. Since FJSP needs to assign operations to suitable machines as well as to sequence the operations assigned to the same machine, it is a complex NP-hard optimization problem [1].

The existing literature [2–5] on the single-objective FJSP (SOFJSP) over the past decades mainly concentrated on minimizing one specific objective such as the makespan. However, in practical manufacturing processes, single-objective optimization cannot fully satisfy production requirements, since the optimized objectives are usually in conflict with each other. In recent years, the multiobjective flexible job shop scheduling problem (MOFJSP) has received much attention, and many algorithms have been developed to solve this kind of problem. These methods can be classified into two groups: a priori approaches and Pareto approaches.

In the a priori method, multiple objectives are usually linearly combined into a single one by the weighted sum approach, which can be illustrated as $F(x) = \sum_{i=1}^{m} w_i f_i(x)$, where $w_i \ge 0$ and $\sum_{i=1}^{m} w_i = 1$. However, only one or a few Pareto solutions can be obtained by this approach, which may not well reflect the tradeoffs among different objectives, and it is difficult to assign appropriate weights for each problem. Even more importantly, the performance of the algorithm deteriorates on problems with a nonconvex Pareto front (PF). The Pareto approach mainly focuses on searching the Pareto set (PS) of optimization problems by comparing two solutions based on the Pareto dominance relation [6]. A solution $x$ is said to dominate a solution $y$ iff $x$ is not worse than $y$ in all objectives and there exists at least one objective in which $x$ is better than $y$. $x$ is called a Pareto optimal solution iff there is no solution that dominates $x$. All the Pareto optimal solutions constitute the PS, and the PF is the image of the PS in the objective space. Since the Pareto approach can achieve a set of Pareto solutions rather than a specific one, it has received much more attention than the a priori approach and is recognized to be more suitable for solving MOFJSP.
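As a minimal sketch of the dominance test described above (the function name is illustrative, not from the paper), the comparison of two objective vectors under minimization can be written as:

```cpp
#include <cstddef>
#include <vector>

// Returns true iff objective vector a Pareto-dominates b (minimization):
// a is no worse in every objective and strictly better in at least one.
bool dominates(const std::vector<double>& a, const std::vector<double>& b) {
    bool strictlyBetter = false;
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] > b[i]) return false;       // worse in one objective: no dominance
        if (a[i] < b[i]) strictlyBetter = true;
    }
    return strictlyBetter;
}
```

For instance, with the three objectives of this paper, the vector (7, 42, 6) dominates (7, 44, 6), while two equal vectors do not dominate each other.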

Because the three objectives, makespan, total workload, and critical workload, conflict with each other, it is better to handle this model with knowledge about their PF. The multiobjective evolutionary algorithm (MOEA) is a mature kind of global optimization method with high robustness and wide applicability. Due to the fact that MOEAs have low requirements on the optimization problem itself and a high ability to obtain multiple Pareto solutions in each run, they are suitable for solving multiobjective optimization problems (MOPs). The multiobjective evolutionary algorithm based on decomposition (MOEA/D), which integrates mathematical programming with the evolutionary algorithm (EA), can obtain a set of Pareto solutions by aggregating the multiple objectives into different single objectives with many predefined weight vectors [7]. MOEA/D has shown great superiority on continuous optimization problems [8–12]; thus it is necessary to investigate its performance on multiobjective combinatorial optimization problems (MO-COPs) such as MOFJSP. To the best of our knowledge, although MOEA/D has been applied to different kinds of multiobjective scheduling problems, such as the multiobjective flow shop scheduling problem (MOFSP) [13], the multiobjective permutation flow shop scheduling problem (MOPFSP) [14], the multiobjective stochastic flexible job shop scheduling problem (MOSFJSP) [15], and the multiobjective job shop scheduling problem (MOJSP) [16], there are few corresponding applications to MOFJSP in the reported literature.

The primary aim of this paper is to solve MOFJSP in a decomposition manner by proposing a multiobjective memetic algorithm based on decomposition (MOMAD) hybridizing MOEA/D with local search. With the purpose of making the proposed algorithm more applicable, four aspects are studied: an integration of different machine assignment and operation sequencing strategies is presented to construct the initial population; objective normalization is incorporated into the Tchebycheff approach to convert an MOP into a number of single-objective optimization subproblems; all weight vectors are divided into a few groups by K-means clustering, and some superior individuals corresponding to different weight vector groups are selected by a selection mechanism; a local search based on moving critical operations is applied to the selected individuals. To evaluate the effectiveness of the proposed algorithm, benchmark instances are tested with three purposes: investigating the effects of different aggregation functions and validating the effectiveness of the local search; analyzing the influence of the key parameters on the performance of the algorithm; and comparing the performance of MOMAD with other state-of-the-art algorithms for solving MOFJSP.
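A minimal sketch of a Tchebycheff aggregation with objective normalization follows; the function name and the exact normalization scheme (dividing by the range between the ideal point and the current worst values) are assumptions for illustration, not necessarily the paper's exact formula:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Tchebycheff aggregation g(x | w, z*) = max_i w_i * f'_i(x), where each
// objective is first normalized by the ideal point zStar and the current
// worst value zMax (a common choice; the exact scheme is an assumption).
double tchebycheff(const std::vector<double>& f,
                   const std::vector<double>& w,
                   const std::vector<double>& zStar,
                   const std::vector<double>& zMax) {
    double g = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) {
        double range = std::max(zMax[i] - zStar[i], 1e-12); // avoid divide-by-zero
        double fn = (f[i] - zStar[i]) / range;              // normalized objective
        g = std::max(g, w[i] * fn);
    }
    return g;
}
```

Minimizing this scalar value for a given weight vector drives the search toward one point of the PF; different weight vectors yield different subproblems.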

The rest of the paper is organized as follows. The next section presents a short overview of related work. In Section 3, the background knowledge of MOFJSP is introduced. Section 4 introduces the framework of MOMAD. The implementation details of the proposed MOMAD, including the genetic global search and the problem-specific local search, are described in Section 5. Afterwards, experimental studies are provided in Section 6. Finally, Section 7 concludes this paper and outlines some avenues for future research.

2. Related Work

As mentioned above, there are two main methods to solve MOFJSP: the a priori approach and the Pareto approach. As for the a priori approach, Xia and Wu [17] discussed a hybrid algorithm in which particle swarm optimization (PSO) and simulated annealing (SA) were employed for global search and local search, respectively. A bottleneck shifting-based local search was incorporated into a genetic global search by Gao et al. [18]. Zhang et al. [19] introduced an effective hybrid PSO algorithm combined with tabu search (TS). Xing et al. [20] used ten different weight vectors to collect effective solution sets. A hybrid TS (HTS) algorithm was constructed by combining adaptive rules with two neighborhoods; in this algorithm, three weight coefficients with different settings were given to test different problems [21]. An effective estimation of distribution algorithm (EDA) was proposed by Wang et al. [22], in which new individuals were generated by sampling a probability model.

Contrary to the a priori approach, a PS can be obtained by the Pareto approach, and the tradeoffs among different objectives can be presented. An integration of fuzzy logic and EA was proposed by Kacem et al. [23]. A guided local search was incorporated into an EA to enhance convergence [24]. With the aim of keeping population diversity, immune and entropy principles were adopted in a multiobjective genetic algorithm (MOGA) [25]. Two memetic algorithms (MAs) were proposed, both of which integrate the nondominated sorting genetic algorithm II (NSGA-II) [6] with effective local search techniques [26, 27]. Several effective neighborhood approaches were used in variable neighborhood search to enhance the convergence ability in a hybrid Pareto-based local search (PLS) [28]. Chiang and Lin [29] proposed a simple and effective evolutionary algorithm that only requires two parameters. Both the neighborhoods of machine assignment and operation sequencing are considered by Xiong et al. [30]. An effective Pareto-based EDA was proposed by Wang et al. [31]. A novel path-relinking multiobjective TS algorithm was proposed in [32], in which a routing solution is identified by problem-specific neighborhood search and is then further refined by TS with back-jump tracking for the sequencing decision. In addition to the successful use of EAs, several swarm intelligence algorithms have also been widely used for global search. PSOs were used as global search algorithms in [33–36]. Besides, shuffled frog leaping [37] and artificial bee colony [38] were integrated with local search in related hybrid algorithms.

Besides their successful use in many scheduling problems, MOEA/D variants have also been widely applied to other MO-COPs. A novel NBI-style Tchebycheff approach was used in MOEA/D to solve a portfolio management MOP with disparately scaled objectives [39]. Mei et al. [40] developed an effective MA by combining MOEA/D with extended neighborhood search to solve the capacitated arc routing problem. Hill climbing, SA, and evolutionary gradient search were, respectively, embedded into an EDA for solving the multiple traveling salesmen problem (MTSP) in a decomposition manner [41]. A hybrid MOEA was established by combining ant colony optimization with MOEA/D [42], and it was then adopted to solve the multiobjective 0-1 knapsack problem (MOKP) and MTSP, respectively. Aiming at the same two problems, Ke and Zhang proposed a hybridization of decomposition and PLS [43].

As mentioned before, MOEA/D is a popular kind of MOEA that is well suited for solving MO-COPs such as scheduling problems. In this paper, a MOMAD is proposed that integrates the MOEA/D algorithm with local search to enrich the toolkit for solving MOFJSP.

3. Background of MOFJSP

3.1. Problem and Objective of MOFJSP

The MOFJSP can be formulated as follows. There are a set of $n$ jobs $J = \{J_1, \dots, J_n\}$ and a set of $m$ machines $M = \{M_1, \dots, M_m\}$; each job $J_i$ contains one or more operations $O_{i,1}, \dots, O_{i,n_i}$ to be processed in accordance with a predetermined sequence. Each operation $O_{i,j}$ can be processed on any machine from its operable machine set $M_{i,j} \subseteq M$. The problem is called T-FJSP iff $M_{i,j} = M$ for every operation; otherwise, it is called P-FJSP [44]. MOFJSP not only assigns a suitable processing machine to each operation but also determines the most reasonable processing sequence of the operations assigned to the same machine, in order to optimize several objectives simultaneously.

The following constraints should be satisfied in the process:
(1) At a certain time, a machine can process at most one operation, and one operation can be processed by only one machine at a time.
(2) Each operation cannot be interrupted once started.
(3) All jobs and machines are available at time 0.
(4) Different jobs share the same priority.
(5) There are no precedence constraints among the operations of different jobs, but there are precedence constraints among the operations belonging to the same job.

An instance of P-FJSP with three jobs and three machines is illustrated in Table 1. Let $C_{i,j}$ and $p_{i,j,k}$ be the completion time of operation $O_{i,j}$ and its processing time on machine $M_k$, respectively, and let $C_i$ denote the completion time of job $J_i$. The three considered objectives are the makespan $C_{max}$, the total workload $W_T$, and the critical workload $W_M$, which are formulated as follows:

$$C_{max} = \max_{1 \le i \le n} C_i, \qquad W_T = \sum_{k=1}^{m} W_k, \qquad W_M = \max_{1 \le k \le m} W_k,$$

where $W_k$ denotes the sum of the processing times of all operations assigned to machine $M_k$.
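Given the per-job completion times and per-machine busy times of a decoded schedule, the three objectives reduce to a max, a sum, and another max. A minimal sketch (names are illustrative):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// The three MOFJSP objectives, computed from a decoded schedule summarized as
// per-job completion times and per-machine total busy times.
struct Objectives { double makespan, totalWorkload, criticalWorkload; };

Objectives evaluate(const std::vector<double>& jobCompletion,
                    const std::vector<double>& machineWorkload) {
    Objectives o;
    // Makespan: latest job completion time.
    o.makespan = *std::max_element(jobCompletion.begin(), jobCompletion.end());
    // Total workload: sum of all machines' busy times.
    o.totalWorkload =
        std::accumulate(machineWorkload.begin(), machineWorkload.end(), 0.0);
    // Critical workload: busy time of the most loaded machine.
    o.criticalWorkload =
        *std::max_element(machineWorkload.begin(), machineWorkload.end());
    return o;
}
```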

3.2. Disjunctive Graph Model

The disjunctive graph model $G = (V, C \cup D)$ has been adopted for representing feasible schedules of FJSP. $V$ denotes the node set, each node representing an operation; the virtual starting and ending operations are represented by two virtual nodes, $0$ and $*$, respectively. $C$ is the set of conjunctive arcs connecting adjacent operations of the same job, and each such arc indicates a precedence constraint within the job. $D$ is the set of disjunctive arcs corresponding to adjacent operations scheduled on the same machine, $D = \bigcup_{k=1}^{m} D_k$, where $D_k$ denotes the disjunctive arc set of machine $M_k$. The weight above a node denotes the processing time of the corresponding operation, and the machine selected to process it is labeled under the node.

Figure 1 shows the disjunctive graph of a feasible solution corresponding to the instance in Table 1, in which every disjunctive arc is given a direction. The resulting graph is a directed graph (digraph), denoted $G(S)$. In the digraph $G(S)$, the longest path from node $0$ to node $*$ is termed the critical path, the length of which equals the makespan of the corresponding schedule. Besides, each operation on a critical path is called a critical operation. In Figure 1, there is one critical path, whose length equals 13.

So that the local search in the following section can be clearly described, some concepts based on the digraph are introduced here. Suppose $v$ is a node in $G(S)$, let $M(v)$ denote the machine that processes the corresponding operation, and let $e_S(v)$ and $e_C(v)$ denote its earliest starting time and earliest completion time. Let $p_v$ be the processing time of the operation on $M(v)$, and let $SJ(v)$ and $SM(v)$ denote the successor of $v$ in the same job and on the same machine, respectively. The latest starting time $l_S(v)$ and latest completion time $l_C(v)$ without delaying the required makespan can be calculated by

$$l_C(v) = \min\{\, l_S(SJ(v)),\ l_S(SM(v)) \,\}, \qquad l_S(v) = l_C(v) - p_v,$$

with $l_S(*) = C_{max}$ for the virtual ending node.

The predecessor and successor operations scheduled on the same machine immediately before and after $v$ are denoted as $PM(v)$ and $SM(v)$, respectively. In addition, $PJ(v)$ and $SJ(v)$ are the predecessor and successor operations of $v$ in the same job, respectively. $v$ is a critical operation if and only if $e_S(v) = l_S(v)$.
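The critical path length (and hence the makespan) of a digraph $G(S)$ is a longest path in a DAG, which can be computed by one topological relaxation pass. A sketch, assuming an adjacency-list representation with processing times on the arcs (the representation is illustrative, not from the paper):

```cpp
#include <algorithm>
#include <queue>
#include <vector>

// Longest 0-to-* path length (= makespan) in an acyclic digraph G(S).
// Each edge carries the processing time of its source operation.
struct Edge { int to; double w; };

double criticalPathLength(const std::vector<std::vector<Edge>>& adj) {
    int n = (int)adj.size();
    std::vector<int> indeg(n, 0);
    for (const auto& edges : adj)
        for (const auto& e : edges) ++indeg[e.to];
    std::vector<double> ect(n, 0.0);   // earliest completion time per node
    std::queue<int> q;
    for (int v = 0; v < n; ++v)
        if (indeg[v] == 0) q.push(v);
    double best = 0.0;
    while (!q.empty()) {               // topological longest-path relaxation
        int v = q.front(); q.pop();
        best = std::max(best, ect[v]);
        for (const auto& e : adj[v]) {
            ect[e.to] = std::max(ect[e.to], ect[v] + e.w);
            if (--indeg[e.to] == 0) q.push(e.to);
        }
    }
    return best;
}
```

The same pass also yields the earliest starting times $e_S(v)$; running it on the reversed graph gives the latest times needed to identify critical operations.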

4. Framework of the Proposed MOMAD

The framework of the MOMAD algorithm is formed by hybridizing MOEA/D with local search, as given in Algorithm 1. First, a set of uniformly distributed weight vectors $\lambda^1, \dots, \lambda^N$ is generated by Das and Dennis's approach [45], where each vector $\lambda^i$ corresponds to one subproblem. Next, all the weight vectors are divided into groups by K-means clustering. After calculating the Euclidean distance between any two weight vectors, the neighborhood $B(i)$ of $\lambda^i$ is set by gathering the $T$ closest weight vectors. Then, the population containing $N$ solutions is initialized. The ideal point $z^*$ is obtained by calculating the infimum found so far of each objective $f_i$. The archive $EP$ is established from the nondominated solutions in the initial population. In the main loop, the two mating parent solutions are chosen from the neighborhood $B(i)$ with probability $\delta$ or from the whole population with probability $1 - \delta$. Then, a new solution $y$ is generated by crossover and mutation, and finally $y$ is used to update the population.

Generate a set of weight vectors $\lambda^1, \dots, \lambda^N$
Generate weight vector groups by K-means clustering
Get the neighborhood $B(i) = \{i_1, \dots, i_T\}$ of each weight vector $\lambda^i$, where $\lambda^{i_1}, \dots, \lambda^{i_T}$ are the $T$
closest weight vectors to $\lambda^i$.
   Initialize the population $P = \{x^1, \dots, x^N\}$
Initialize the ideal point $z^*$
Find the non-dominated solutions in the initial population to construct the archive $EP$
  while the termination criterion is not satisfied  do
for  $i = 1$ to $N$  do
if  $rand() < \delta$  then  $S = B(i)$
else  $S = \{1, \dots, N\}$
end if
Randomly select two different indexes $a$, $b$ from $S$ ($a \neq b$)
$y$ = GeneticOperators($x^a$, $x^b$)
UpdateIdealPoint($y$, $z^*$)
Objective Normalization
Update Current Population
Update External Archive $EP$
end for
LocalSearch($P$)
end while
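The weight-vector grouping step of the framework can be sketched with a minimal Lloyd-style K-means; the initialization (first $k$ vectors as centroids) and the fixed iteration count are simplifying assumptions, not the paper's exact procedure:

```cpp
#include <cstddef>
#include <vector>

// Minimal K-means over weight vectors: returns a group index per vector.
// Centroid initialization and the fixed iteration budget are assumptions.
std::vector<int> kmeansGroups(const std::vector<std::vector<double>>& w,
                              int k, int iters = 20) {
    std::size_t n = w.size(), d = w[0].size();
    std::vector<std::vector<double>> c(w.begin(), w.begin() + k); // centroids
    std::vector<int> label(n, 0);
    for (int it = 0; it < iters; ++it) {
        for (std::size_t i = 0; i < n; ++i) {          // assignment step
            double best = 1e300;
            for (int j = 0; j < k; ++j) {
                double dist = 0.0;
                for (std::size_t t = 0; t < d; ++t)
                    dist += (w[i][t] - c[j][t]) * (w[i][t] - c[j][t]);
                if (dist < best) { best = dist; label[i] = j; }
            }
        }
        for (int j = 0; j < k; ++j) {                  // update step
            std::vector<double> mean(d, 0.0);
            int cnt = 0;
            for (std::size_t i = 0; i < n; ++i)
                if (label[i] == j) {
                    ++cnt;
                    for (std::size_t t = 0; t < d; ++t) mean[t] += w[i][t];
                }
            if (cnt > 0)
                for (std::size_t t = 0; t < d; ++t) c[j][t] = mean[t] / cnt;
        }
    }
    return label;
}
```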

The remaining steps of Algorithm 1 contain the updating and local search phases. The objective normalization is adopted before the population updating. Suppose $\lambda^j$ is the $j$th weight vector and $z_i^{max}$ is the largest value of $f_i$ in the current population; the new solution $y$ is then compared with the solutions indexed by $S$ one by one, and a solution that has poorer fitness than $y$ in terms of (3) is replaced by $y$. It should be noted that the updating procedure terminates as soon as the predefined maximal replacing number, which benefits keeping population diversity, is reached or $S$ is exhausted. Afterwards, the updating of $EP$ is performed: if no solution in $EP$ dominates $y$, then $y$ is copied into $EP$ and all repeated and dominated solutions are removed. Finally, after selecting superior solutions in the current population, the local search is applied to obtain improved solutions, which are then reinserted into the population to ameliorate it.

5. Detailed Description of Exploration and Exploitation in MOMAD

5.1. Chromosome Encoding and Decoding

In MOMAD, a chromosome consists of two parts: the machine selection (MS) part and the operation sequence (OS) part, which are encoded by a machine selection vector and an operation sequence vector, respectively. Each integer in the MS vector represents, in turn, the machine assigned to each operation. As for the OS vector, each gene is directly encoded with a job number; when reading the chromosome from left to right, the $k$th occurrence of a job number refers to the $k$th operation of the corresponding job. Figure 2 shows a chromosome encoding of the P-FJSP instance shown in Table 1: the MS vector specifies the machine selected to process each operation, and the OS vector can be interpreted as the corresponding processing sequence of operations.
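The occurrence-counting interpretation of the OS vector can be sketched as follows (0-based job numbers; the helper name is illustrative):

```cpp
#include <utility>
#include <vector>

// Decode an OS vector of job numbers into (job, operation-index) pairs:
// the k-th occurrence of job j refers to operation k of job j.
std::vector<std::pair<int, int>> decodeOS(const std::vector<int>& os,
                                          int numJobs) {
    std::vector<int> seen(numJobs, 0);          // occurrences counted so far
    std::vector<std::pair<int, int>> sequence;
    for (int job : os)
        sequence.push_back({job, seen[job]++}); // k-th sighting -> operation k
    return sequence;
}
```

For example, the OS vector (2, 0, 1, 0, 2, 1) decodes to the operation sequence $O_{2,0}, O_{0,0}, O_{1,0}, O_{0,1}, O_{2,1}, O_{1,1}$.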

Since it has been verified that an optimal schedule exists among the active schedules [3], the greedy insertion algorithm [25] is employed for chromosome decoding to make sure each operation is positioned at the earliest capable time of its assigned machine. It should be noted that an operation may thereby start earlier than an operation that appears before it in the OS vector. In order to make the encoding reflect the actual processing sequence, the operations in the original OS vector are reordered according to their actual starting times after decoding.

5.2. Population Initialization

Population initialization plays an important role in MOMAD performance, since a high-quality initial population with more diversity can avoid premature convergence. Here, four machine assignment rules are used to produce the machine assignment vectors for MOFJSP. The first two rules are the global selection and local selection proposed by Zhao et al. [46]. The third rule selects a machine from the candidate machine set at random. The aim of the last rule is to assign each operation to the machine with the minimum processing time. In our MOMAD, for the machine assignment initialization, 50% of the individuals are generated by rule 1, 10% of the individuals are formed by rule 4, and rule 2 and rule 3 share the rest of the population.

Once the machines are assigned to each operation, the sequence of operations should be considered next. A mixture of four operation sequencing rules is employed to generate the initial operation sequencing vectors. The probabilities of using four operation sequencing rules are set as 0.3, 0.2, 0.3, and 0.2, respectively.

Operation sequencing rules are as follows:
(1) Most Work Remaining (MWR) [47]. The operations with the most remaining processing time are put into the operation sequencing vector first.
(2) Most Number of Operations Remaining (MOR) [47]. The operations with the most subsequent operations in the same job are taken into account preferentially.
(3) Shortest Processing Time (SPT) [48]. The operations with the shortest processing time are processed first.
(4) Random Dispatching. The operations are ordered randomly.
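As a sketch of rule 1, ordering jobs by Most Work Remaining amounts to a descending stable sort on remaining processing time (the helper name is illustrative; ties keep their original order):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Order job indices by Most Work Remaining: jobs with the largest remaining
// processing time come first; ties preserve the original order.
std::vector<int> mostWorkRemainingOrder(const std::vector<double>& remainingWork) {
    std::vector<int> order(remainingWork.size());
    std::iota(order.begin(), order.end(), 0);        // 0, 1, 2, ...
    std::stable_sort(order.begin(), order.end(),
                     [&](int a, int b) { return remainingWork[a] > remainingWork[b]; });
    return order;
}
```

The MOR and SPT rules follow the same pattern with a different sort key.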

5.3. Exploration Using Genetic Operators

The problem-specific crossover and mutation operators are applied to produce the offspring, both of which are performed on each vector independently since the encoding of one chromosome has two components.

Crossover. For the MS vector, uniform crossover [3] is adopted. First, a subset of positions is chosen uniformly at random from the $d$ positions, where $d$ equals the total number of operations. Then, two new individuals are generated by exchanging the genes of the parent chromosomes at the selected positions. With respect to the OS vector, the precedence-preserving order-based crossover (POX) [3] is applied.
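A sketch of one POX child follows: genes whose job belongs to a randomly chosen job subset keep their positions from the first parent, and the remaining positions are filled with the second parent's other genes in their original order, which preserves each job's internal operation precedence. (Only one child is shown; the symmetric child swaps the parents' roles.)

```cpp
#include <cstddef>
#include <set>
#include <vector>

// One child of the POX crossover on the OS vector.
std::vector<int> poxChild(const std::vector<int>& p1, const std::vector<int>& p2,
                          const std::set<int>& jobSet) {
    std::vector<int> child(p1.size(), -1);
    for (std::size_t i = 0; i < p1.size(); ++i)
        if (jobSet.count(p1[i])) child[i] = p1[i];  // keep jobSet genes in place
    std::size_t pos = 0;
    for (int g : p2) {
        if (jobSet.count(g)) continue;              // skip jobSet jobs from p2
        while (child[pos] != -1) ++pos;             // next empty slot
        child[pos] = g;
    }
    return child;
}
```

Because every job keeps the relative order of its own genes (from one parent or the other), the child is always a valid OS vector.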

Mutation. As for the MS vector, two positions are chosen arbitrarily, and their genes are replaced by different machines from the corresponding candidate machine sets. With regard to the OS vector, the mutation is achieved by swapping two randomly chosen operations.

5.4. Exploitation Using Local Search

It is widely accepted that integrating local search can effectively improve the performance of EAs, and it is an effective strategy for solving FJSP [31], so it is of great importance to design an effective local search method that keeps a good balance between exploration and exploitation in MOMAD. First of all, with the aim of saving computational cost as well as maintaining population diversity, only a limited number of individuals are selected from the entire evolutionary population for local search in each generation. Two critical issues are then introduced in detail in the next two subsections:
(1) How to select appropriate solutions for local search?
(2) Which local search method should be used?

5.4.1. Selection of Individuals for Local Search

When selecting a candidate individual each time, a weight vector $\lambda$ is first chosen from the vector set at random. Then, all incumbent solutions corresponding to the weight vectors that belong to the same group as $\lambda$ are selected. Next, all the selected individuals are compared according to (3) with the weight vector $\lambda$, and the one with the best fitness is selected as the candidate solution for local search. Finally, the improved solution obtained by local search is adopted to update the neighborhood and the archive. Denote by $n_{ls}$ the number of individuals selected for local search in each generation. The basic framework of the local search is summarized in Algorithm 2.

  for  $j = 1$ to $n_{ls}$  do
Randomly select a weight vector $\lambda$ from the weight vector set;
Select all solutions that respectively correspond to the weight vectors which
belong to the same group as $\lambda$.
$x$ = TournamentSelection(selected solutions)
$x'$ = LocalSearchForIndividual($x$)
UpdateNeighborhood($x'$)
UpdateArchive($x'$)
  end for
5.4.2. Description of Local Search Method

Considering that the makespan is the most important and the hardest objective to optimize among the three objectives, the local search is performed on the decoded schedule $S$ of a chromosome rather than on the chromosome itself. Suppose $O_c$ is the set of critical operations of $S$, and let $C_{max}(S)$ be the makespan of $S$. The following properties based on the disjunctive graph are summarized [2]:
(1) It has been proven that the makespan can only be reduced by moving critical operations.
(2) Suppose $S^-$ is obtained by deleting one critical operation $v$ from $S$. Let $Q_k$ be the set of operations processed on machine $M_k$, sorted by increasing starting time (note that $v \notin Q_k$), in $S^-$; then two subsequences of $Q_k$, denoted $R_k$ and $L_k$, are defined as follows:

$$R_k = \{\, w \in Q_k \mid e_C(w) > e_S(v) \,\}, \qquad L_k = \{\, w \in Q_k \mid l_S(w) < l_C(v) \,\}.$$

It has been verified that a feasible schedule can be achieved by inserting $v$ into a position $\phi \in \Phi_k$, where $\Phi_k$ contains all the positions before all the operations of $Q_k \setminus L_k$ and after all the operations of $Q_k \setminus R_k$.
(3) The insertion of $v$ at position $\phi$ must satisfy the following constraint so that the makespan is not increased:

$$e_S'(v) + p_v \le l_C(v),$$

where $e_S'(v)$ is the earliest starting time of $v$ at the new position.

When considering the action of finding a position on machine $M_k$ for $v$ to insert into, denoted $a(v, M_k)$, only the positions in $\Phi_k$ are taken into account. $v$ is inserted into one position of $\Phi_k$ if that position satisfies (5), and the other positions in $\Phi_k$ are no longer considered. Suppose $M_v$ is the alternative machine set that can operate $v$; then $|M_v|$ such actions are associated with $v$. Let $A$ be the action set over all critical operations. Now, a hierarchical strategy [27] is adopted to calculate $\Delta W_T$ and $\Delta W_M$, which respectively represent the variation of the total workload and of the critical workload. All actions in $A$ are sorted in nondescending order of $\Delta W_T$; if two actions have the same $\Delta W_T$, the lower $\Delta W_M$ is chosen as a second criterion. When an action moves $v$ from machine $M_k$ to machine $M_{k'}$, the two values are calculated as follows:

$$\Delta W_T = p_{v,k'} - p_{v,k}, \qquad \Delta W_M = \max_{1 \le l \le m} W_l' - \max_{1 \le l \le m} W_l,$$

where $W_l$ and $W_l'$ denote the workload of machine $M_l$ before and after the move, respectively.

The actions in the sorted $A$ are considered in succession until a neighbor $S'$ is established. Thereafter, $S'$ is compared with $S$ by adopting the acceptance rule described in Algorithm 3: $S$ is replaced by $S'$ provided that $S'$ is not dominated by $S$ and the two schedules differ from each other. It should be noted that the length of one local search is controlled in two ways in order to save computing cost. First, when generating a neighborhood of $S$, once an effective action in $A$ is found, a neighbor schedule is formed and the remaining actions in $A$ are no longer considered. Second, a maximal iteration number $iter_{max}$ is set, so that the search terminates when this budget is exhausted.

  $S$ = Decode($x$)
  $iter$ = 0
  $A$ = the sorted action set of $S$
  while  $iter < iter_{max}$  and  $A \neq \emptyset$  do
$S'$ = GetNeighborhood($S$)
if  $S' \neq \emptyset$  then
if  $S'$ dominates $S$  or  $S'$ and $S$ are mutually nondominated and $S' \neq S$  then  $S$ = $S'$
end if
end if
$iter$ = $iter$ + 1
end while
$x'$ = ChromosomeEncoding($S$)
return  $x'$
5.4.3. Computational Complexity Analysis

The major computational costs in MOMAD arise in the population updating steps and in the local search step of Algorithm 1. The updating of the current population performs a bounded number of comparisons and assignments per offspring, and the objective normalization requires one pass over the population per objective; since these operations are repeated for each of the $N$ subproblems, the cost of one generation grows linearly with the population size. In the local search, comparing a neighbor schedule with the incumbent requires a constant number of basic operations per iteration, and the worst case is reached when the maximal number of iterations is exhausted; this is repeated once for each selected individual. The total computational complexity of MOMAD is therefore determined by the population size, the neighborhood size, the number of individuals selected for local search, and the maximal number of local search iterations.

6. Experiments and Results

With the aim of testing the performance of MOMAD, 5 well-known Kacem instances (Ka 4 × 5, Ka 8 × 8, Ka 10 × 7, Ka 10 × 10, and Ka 15 × 10) [23] and 10 BRdata instances (MK01–MK10) [49] are investigated in the experiments. MOMAD is implemented in C++ and run on an Intel Core i3-4170 3.70 GHz processor with 4 GB of RAM.

6.1. Parameter Settings

With the purpose of eliminating the influence of random factors on the performance of the algorithm, it is necessary to independently run the proposed MOMAD 10 times on each test instance, and the algorithm will be terminated when reaching the maximal number of objective function evaluations in one run. The parameters used in MOMAD algorithm are listed in Table 2. Moreover, because of the different complexity of each test instance, the predefined maximal numbers of objective evaluations corresponding to different problems are listed in the second column in Table 3.

6.2. Performance Metrics

In order to quantitatively evaluate the performance of the compared algorithms, three quantitative indicators are employed to make the comparison more convincing, and they are described as follows.

(1) Inverted Generational Distance (IGD) [50]. Let $P$ be the approximate PF obtained by the compared algorithm and $P^*$ be a reference PF uniformly distributed in the objective space. The IGD metric measures the distance between $P^*$ and $P$, with smaller values representing better performance. The IGD metric can reflect both convergence and diversity to some extent if $|P^*|$ is large enough to represent the reference PF well; that is,

$$\mathrm{IGD}(P^*, P) = \frac{\sum_{v \in P^*} d(v, P)}{|P^*|},$$

where $d(v, P)$ is the minimum Euclidean distance between $v$ and the points in $P$.

In this experiment, since all benchmark instances are tested without knowledge of their actual PFs, the $P^*$ used in calculating the IGD metric is obtained in two steps. First, all final nondominated solutions obtained by all compared algorithms during all independent runs are merged. Then, the nondominated solutions are selected from the merged solution set as $P^*$.
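The IGD definition above can be sketched directly (the function name is illustrative): for each reference point, take the Euclidean distance to its nearest obtained point, then average.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// IGD: average Euclidean distance from each reference point in pStar to its
// nearest point in the obtained front p (smaller is better).
double igd(const std::vector<std::vector<double>>& pStar,
           const std::vector<std::vector<double>>& p) {
    double total = 0.0;
    for (const auto& r : pStar) {
        double best = std::numeric_limits<double>::max();
        for (const auto& a : p) {
            double d2 = 0.0;                       // squared distance to a
            for (std::size_t i = 0; i < r.size(); ++i)
                d2 += (r[i] - a[i]) * (r[i] - a[i]);
            best = std::min(best, d2);
        }
        total += std::sqrt(best);
    }
    return total / pStar.size();
}
```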

(2) Set Coverage (C) [51]. The C metric directly reflects the dominance relationship between two approximate PFs. Let $A$ and $B$ be two approximate PFs obtained by two different algorithms; then $C(A, B)$ is defined as

$$C(A, B) = \frac{\left| \{\, b \in B \mid \exists\, a \in A : a \text{ dominates } b \,\} \right|}{|B|}.$$

$C(A, B)$ represents the percentage of solutions in $B$ that are dominated by at least one solution in $A$; note that, in general, $C(A, B) \neq 1 - C(B, A)$. If $C(A, B)$ is larger than $C(B, A)$, algorithm $A$ is better than algorithm $B$ to some extent.
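A minimal sketch of the C metric (minimization; strict Pareto dominance assumed, as in the definition above):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Set coverage C(A, B): fraction of solutions in B that are Pareto-dominated
// by at least one solution in A.
double setCoverage(const std::vector<std::vector<double>>& A,
                   const std::vector<std::vector<double>>& B) {
    auto dom = [](const std::vector<double>& a, const std::vector<double>& b) {
        bool strict = false;
        for (std::size_t i = 0; i < a.size(); ++i) {
            if (a[i] > b[i]) return false;   // worse somewhere: no dominance
            if (a[i] < b[i]) strict = true;
        }
        return strict;
    };
    int covered = 0;
    for (const auto& b : B)
        for (const auto& a : A)
            if (dom(a, b)) { ++covered; break; }
    return static_cast<double>(covered) / B.size();
}
```

Note that computing both $C(A, B)$ and $C(B, A)$ is necessary, since the two values are not complementary in general.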

(3) Hypervolume (HV) [51]. The HV metric measures the volume of the hypercubes enclosed by a PF and a reference vector $r$, with larger values representing better performance. It can be obtained by

$$\mathrm{HV}(P, r) = \mathrm{volume}\left( \bigcup_{x \in P} v(x, r) \right),$$

where $v(x, r)$ is the hypercube enclosed by the solution $x$ in the PF and the reference vector $r$. The HV measure reflects both the convergence and diversity of the corresponding PF to a certain degree.

For the convenience of computation, all objective vectors of the Pareto solutions are normalized based on (10) before calculating the three metrics:

$$\bar{f}_i = \frac{f_i - f_i^{\min}}{f_i^{\max} - f_i^{\min}},$$

where $f_i^{\max}$ and $f_i^{\min}$ are the supremum and infimum of $f_i$ over all nondominated solutions obtained by gathering all compared algorithms. After this normalization, the same reference vector is used for all benchmark instances when calculating the HV value.

6.3. Performance Comparison with Several Variants of MOMAD

Because the implementation of the algorithm framework is not unique and different strategies can be employed to instantiate it, several variants of MOMAD are studied first. MOEA/D is a simplified algorithm obtained by eliminating the local search from MOMAD in order to investigate the latter's effectiveness. In the interest of studying the effect of different aggregation functions, MOMAD-WS and MOMAD-PBI are formed by replacing the Tchebycheff approach with the weighted sum (WS) [7] and penalty-based boundary intersection (PBI) [7] approaches, respectively. The Wilcoxon signed-rank test [52] is performed on the sample data of the three metrics obtained after 10 independent runs at a significance level of 0.05, and the variant that is significantly better than the others is marked in bold. The three metric values are listed in Tables 3–5.

Tables 3 and 4 list the results of the performance comparison between MOMAD, MOMAD-WS, and MOMAD-PBI. MOMAD is significantly better than MOMAD-WS on MK10 for the IGD, HV, and C values. Besides, MOMAD performs better than MOMAD-WS on MK07 and MK09 in terms of the HV and C values and is also better than MOMAD-WS on MK06 for the IGD metric. In contrast, MOMAD-WS only achieves a better HV value on one of the Kacem instances. When compared with MOMAD-PBI, MOMAD is better on all the BRdata instances in terms of the IGD and HV values. In addition, MOMAD also obtains better C metric values on 8 out of 10 instances. In summary, the presented results indicate that the Tchebycheff aggregation function is the most suitable for the MOEA/D framework when solving MOFJSP.

To understand the effectiveness of the problem-specific local search, a comparison between MOMAD and MOEA/D is conducted, and the three metric values over 10 independent runs are shown in Table 5. It is easily observed that, as for the IGD value, there are significant differences in the performances of the two algorithms on 8 test problems, and MOMAD significantly outperforms MOEA/D on all of them. As for the other two metrics, the situation is similar: MOMAD outperforms MOEA/D on 9 out of 15 instances in terms of the HV metric and on 7 out of 15 instances for the C metric, while MOEA/D only obtains a better C value on MK06. Based on the above comparison results and analyses, MOMAD is much more powerful than MOEA/D, which well verifies the effectiveness of the local search.

With the aim of intuitively comparing the convergence performance of the two algorithms, the evolutions of average IGD values in MOMAD and MOEA/D on four selected BRdata instances are illustrated in Figure 3. As can be clearly seen from this figure, with the increasing number of function evaluations, the IGD values in all four instances gradually decrease and tend to be stable, which indicates that both algorithms have good convergence. Since MOMAD achieves lower IGD convergence curves, it is easily observed that MOMAD achieves better convergence property and convergence efficiency than MOEA/D.

6.4. Performance Comparison with Other Algorithms

In this subsection, comparisons between MOMAD and several state-of-the-art algorithms are made. First, to compare MOMAD with algorithms solving MOFJSP by the a priori approach, MOMAD is compared on the five Kacem instances with PSO + SA [17], hGA [18], PSO + TS [19], Xing et al.'s algorithm [20], HTSA [21], and EDA [22]. All Pareto solutions, marked in bold, are listed in Table 6. Next, in Tables 7–10, MOMAD is compared with eight algorithms recently proposed for solving MOFJSP by the Pareto approach: MOGA [25], PLS [28], HSFLA [37], HMOEA [30], SEA [29], P-EDA [31], hDPSO [34], and PRMOTS + IS [32]. It should be pointed out that these compared algorithms report results collected over a number of predefined runs rather than for each run in their original literature. Therefore, the statistical comparisons made before no longer apply; in this subsection, the three metrics are computed for the set of PFs collected over the predefined runs of each algorithm.

As shown in Table 6, first, MOMAD can obtain more Pareto solutions than all other algorithms on the five instances, with the exception of EDA on two instances and Xing et al.'s algorithm on one. Second, on one instance the solution (7, 44, 6) obtained by PSO + SA and the solution (7, 43, 6) obtained by PSO + TS are both dominated by (7, 42, 6) and (7, 43, 5) obtained by MOMAD. Besides, one solution (15, 76, 12) obtained by Xing et al.'s algorithm is dominated by (15, 75, 12) obtained by MOMAD. Analyzing the results of Ka 15 × 10, two solutions, (12, 91, 11) and (11, 93, 11), obtained by PSO + SA and PSO + TS, respectively, are dominated by (11, 91, 11) and (11, 93, 10) achieved by MOMAD. According to the above comparison results and analyses, MOMAD obtains more nondominated solutions of higher quality than the algorithms based on a priori approaches.
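The dominance relations quoted above follow the standard Pareto-dominance definition for minimization: one objective vector dominates another if it is no worse in every objective and strictly better in at least one. A minimal check, using the objective triples quoted in the text:

```python
def dominates(a, b):
    """True if a Pareto-dominates b, all objectives being minimized:
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Triples quoted in the text: (makespan, total workload, critical workload)
assert dominates((7, 42, 6), (7, 44, 6))      # MOMAD vs PSO + SA
assert dominates((7, 43, 5), (7, 43, 6))      # MOMAD vs PSO + TS
assert dominates((15, 75, 12), (15, 76, 12))  # MOMAD vs Xing et al.
assert not dominates((7, 42, 6), (7, 43, 5))  # mutually nondominated
```

Note that (7, 42, 6) and (7, 43, 5) do not dominate each other, which is why both appear in MOMAD's reported Pareto set.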

Tables 7 and 8 show the comparison results of the IGD and HV values between MOMAD and the eight Pareto-based algorithms. First, it can be observed that, except for MOGA and hDPSO, MOMAD and the other six algorithms can find all the nondominated solutions of four of the five Kacem instances. Although no algorithm finds all the nondominated solutions of the remaining instance, MOMAD and the other six algorithms are better than MOGA and HMOEA on it. Next, we focus on the BRdata test set. For the IGD metric, MOMAD obtains the best values on 6 out of 10 instances and the second-best values on three instances. The HV metric is much the same: MOMAD achieves the best HV values on 7 out of 10 instances and the second-best values on all the others. MOGA obtains the best IGD value on MK07, but MOMAD is better than MOGA on the other 9 instances. The situation is similar for SEA, P-EDA, and PRMOTS + IS: although they outperform MOMAD on several instances in terms of IGD and HV, in most cases they perform worse than MOMAD.
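The HV values in Table 8 measure the objective-space volume dominated by a front relative to a reference point; a bigger value indicates a better spread and convergence. Exact 3-objective hypervolume computation is involved, so the following is only a Monte Carlo estimation sketch under the simplifying assumption that objectives are normalized to a box between the origin and the reference point (the fronts and reference point are hypothetical, not the paper's normalization):

```python
import random

def hv_estimate(front, ref_point, samples=20_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by `front`
    (minimization) within the box [0, ref_point]. Larger is better."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        p = tuple(rng.uniform(0.0, r) for r in ref_point)
        # p contributes if some solution is no worse than p in every objective
        if any(all(f <= x for f, x in zip(sol, p)) for sol in front):
            hits += 1
    box_volume = 1.0
    for r in ref_point:
        box_volume *= r
    return box_volume * hits / samples

# A single solution at (0.5, 0.5) dominates a quarter of the unit box
print(hv_estimate([(0.5, 0.5)], (1.0, 1.0)))  # roughly 0.25
```

In practice exact algorithms (e.g., the WFG algorithm) are used for small objective counts; the sampling version above just makes the metric's meaning concrete.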

MOMAD is compared with the eight algorithms in Tables 9 and 10 by using the set coverage value. It is clearly indicated that MOMAD is superior to five of the algorithms, the exceptions being MOGA, SEA, and PRMOTS + IS. Compared with MOGA, MOMAD is worse on MK07 and MK08 but better on 7 other BRdata instances. Compared with SEA, MOMAD is generally better on MK01, MK02, and MK10, whereas SEA generally performs better on MK04, MK06, and MK09. As for PRMOTS + IS, MOMAD is superior on MK02, MK05, MK07, MK09, and MK10, whereas PRMOTS + IS only wins on MK01, MK04, and MK06.
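The set coverage metric C(A, B) used in Tables 9 and 10 is the fraction of solutions in set B that are covered by at least one solution in set A. As a sketch (assuming, as is common, that "covered" means weakly dominated with all objectives minimized; the example sets are hypothetical):

```python
def coverage(A, B):
    """C(A, B): fraction of solutions in B weakly dominated by some
    solution in A. C(A, B) = 1 means A covers all of B."""
    def weakly_dominates(a, b):
        return all(x <= y for x, y in zip(a, b))
    return sum(any(weakly_dominates(a, b) for a in A) for b in B) / len(B)

A = [(1, 2), (2, 1)]
B = [(2, 2), (3, 0)]
print(coverage(A, B))  # (2, 2) is covered by (1, 2); (3, 0) is not -> 0.5
```

Note that C is not symmetric, which is why the tables report both C(MOMAD, other) and C(other, MOMAD); C(MOMAD, MOGA) = 1, as discussed below for DPdata, means every MOGA solution is covered by MOMAD.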

Table 11 shows the comparison between MOMAD and MOGA on the 18 DPdata instances designed in [53]. The predefined maximal number of function evaluations for each instance is shown in the second column, and MOMAD is independently run 5 times on each instance. From the IGD and HV values, it is easily observed that MOMAD performs much better than MOGA, since MOGA only obtains a better IGD value on 07a and a better HV value on 02a. The comparison on the C metric is similar to that on IGD and HV: MOMAD achieves 16 significantly better results, and there is no significant difference between MOMAD and MOGA on 02a and 09a. In addition, it can also be observed that C (MOMAD, MOGA) equals one on 12 test instances, which means that, on these 12 instances, every solution obtained by MOGA is dominated by at least one solution obtained by MOMAD.

The objective ranges of MOMAD and PRMOTS + IS on the DPdata instances are given in Table 12. MOMAD finds a wider spread of nondominated solutions than PRMOTS + IS, especially in terms of makespan and total workload. Thus, it can be concluded that MOMAD is more effective than PRMOTS + IS at exploring the search space.

According to the above extensive IGD and HV values, the average performance scores over the 10 BRdata instances are further recorded to rank the compared algorithms, which makes it easier to quantify their overall performance by intuitive observation. For each specific BRdata instance, suppose $A_1, A_2, \ldots, A_k$ denote the $k$ algorithms employed in the comparison. Let $\delta_{i,j}$ be 1 if and only if $A_j$ obtains a smaller IGD and a bigger HV value than $A_i$, and 0 otherwise. Then the performance score of each algorithm $A_i$ can be calculated as [54]

$$P(A_i) = \sum_{j=1,\, j \neq i}^{k} \delta_{i,j}.$$

It should be noted that the smaller the score, the better the algorithm. Here, PLS and HMOEA only consider part of the instances, so we first rank all algorithms on the MK01–MK03 and MK08 instances and then rank the remaining algorithms, excluding PLS and HMOEA, on the other 6 instances. Figure 4 shows the average performance scores of the IGD and HV values over the 10 BRdata instances for the 9 selected algorithms, with each algorithm's rank listed in the corresponding bracket after its score. It is easily observed that MOMAD achieves good performance on almost all instances in terms of both the IGD and HV metrics.
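The scoring rule above counts, for each algorithm, how many competitors beat it on both metrics at once on a given instance; averaging these counts over instances gives the scores plotted in Figure 4. A minimal sketch (the IGD and HV values below are made up for illustration, not taken from the tables):

```python
def performance_scores(igd_vals, hv_vals):
    """Per-instance performance score: for each algorithm i, count the
    algorithms j with both smaller IGD and bigger HV (lower score = better)."""
    k = len(igd_vals)
    return [sum(1 for j in range(k)
                if j != i
                and igd_vals[j] < igd_vals[i]
                and hv_vals[j] > hv_vals[i])
            for i in range(k)]

# Hypothetical values for three algorithms on one instance
igd = [0.01, 0.03, 0.02]
hv = [0.95, 0.90, 0.93]
print(performance_scores(igd, hv))  # prints [0, 2, 1]
```

Algorithm 0 is beaten by nobody (score 0), algorithm 1 is beaten by both others (score 2), matching the "smaller is better" reading of Figure 4.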

Figures 5 and 6 show the final PFs of the MK01–MK10 instances obtained by MOMAD together with the reference PFs generated by selecting the nondominated solutions from the union of the results of the eight compared algorithms. As can be seen, MOMAD finds all Pareto solutions for MK02 and almost all of them for MK01, MK03, MK05, MK07, and MK08. Besides, MOMAD can find the vast majority of the optimal solutions for MK04, MK06, MK09, and MK10. Thus, MOMAD is capable of finding a widely spread PF for each instance and revealing the tradeoffs among the three objectives.
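Building such a reference PF amounts to pooling all algorithms' solutions and discarding every dominated point. A sketch of this nondominated filtering (minimization in all objectives; the pooled points are hypothetical):

```python
def nondominated(points):
    """Return the Pareto-optimal subset of `points`, all objectives minimized."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical pooled results from several algorithms on one instance
pooled = [(1, 2), (2, 1), (2, 2)]
print(nondominated(pooled))  # prints [(1, 2), (2, 1)]
```

This quadratic-time filter is fine for fronts of the sizes reported here; faster nondominated sorting exists but is unnecessary for building per-instance reference sets.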

The average CPU time consumed by MOMAD and the other compared algorithms is listed in Table 13. However, differences in experimental platform, programming language, and programming skill make this comparison not entirely fair. Hence, we list the computational CPU time together with the original experimental platform and programming language of each corresponding algorithm, which helps to gauge the efficiency of the referred algorithms. The values show that the computational time consumed by MOMAD is much smaller than that of the other algorithms, except on one Kacem instance.

In summary, from the above experimental results and comparisons, it can be concluded that MOMAD outperforms, or at least performs comparably to, all the other typical algorithms when solving MOFJSP.

6.5. Impacts of Parameters Settings

(1) Impacts of Population Size. Since the population size is an important parameter of MOMAD, we test the sensitivity of MOMAD to different population sizes on MK04. All other parameters are kept as shown in Table 2, and the algorithm is independently run 10 times for each setting. As clearly shown in Table 14, the HV value changes only slightly as the population size increases, while the IGD value first decreases and then tends to grow. Overall, MOMAD is not significantly sensitive to the population size, and properly increasing it can improve the algorithm's performance to some extent.

(2) Impacts of Neighborhood Size. The neighborhood size is another key parameter of MOMAD. To study its sensitivity, MOMAD is run with different neighborhood sizes while keeping the other parameter settings unchanged; each setting is run 10 times independently on MK06. Table 14 shows how the IGD and HV values change as the neighborhood size increases. Both metrics improve first and then degrade as the neighborhood size keeps growing, so the influence of the neighborhood size on MOMAD is similar to that of the population size.

7. Conclusions and Future Work

This paper solves the MOFJSP in a decomposition manner for the purpose of simultaneously minimizing makespan, total workload, and critical workload. To propose an effective MOMAD algorithm, the framework of MOEA/D is adopted. First, a mixture of different machine assignment and operation sequencing rules is utilized to generate an initial population. Then, an objective normalization technique is used in the Tchebycheff approach, and the MOP is converted into many single-objective optimization subproblems. After clustering the weight vectors into different groups, a local exploitation based on moving critical operations is incorporated into MOEA/D and applied to the candidate solutions with the best aggregation function values among the solutions whose weight vectors belong to the same group. Embedding this local search into MOEA/D establishes our MOMAD. In the simulation experiments, the Tchebycheff approach is shown to be more suitable for the MOEA/D framework than the WS and PBI approaches, and the effectiveness of the local search is also verified. Moreover, MOMAD is compared with eight competitive algorithms in terms of three quantitative metrics. Finally, the effects of two key parameters, the population size and the neighborhood size, are analysed. Extensive computational results and comparisons indicate that the proposed MOMAD outperforms, or at least performs comparably to, other representative approaches and is well suited to solving MOFJSP.

In the future, we first want to study the MOFJSP with more than three objectives. Second, it would be significant to introduce a novel local search method that considers moving more than one critical operation. Finally, it would also be interesting to apply MOMAD to dynamic scheduling problems.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research work is supported by the National Science Foundation of China (no. 61572238) and the Provincial Outstanding Youth Foundation of Jiangsu Province (no. BK20160001).