Mathematical Problems in Engineering
Volume 2017 (2017), Article ID 2857564, 20 pages
https://doi.org/10.1155/2017/2857564
Research Article

A Novel Memetic Algorithm Based on Decomposition for Multiobjective Flexible Job Shop Scheduling Problem

Engineering Research Center of IoT Technology Applications, Ministry of Education, Jiangnan University, Wuxi 214122, China

Correspondence should be addressed to Yan Wang

Received 12 May 2017; Revised 1 November 2017; Accepted 9 November 2017; Published 29 November 2017

Academic Editor: Josefa Mula

Copyright © 2017 Chun Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A novel multiobjective memetic algorithm based on decomposition (MOMAD) is proposed to solve the multiobjective flexible job shop scheduling problem (MOFJSP), which simultaneously minimizes the makespan, total workload, and critical workload. Firstly, the population is initialized by combining several machine assignment and operation sequencing strategies. Secondly, the multiobjective memetic algorithm based on decomposition is constructed by introducing a local search into MOEA/D. The Tchebycheff approach of MOEA/D converts the three-objective optimization problem into several single-objective subproblems, and the weight vectors are grouped by K-means clustering. Good individuals corresponding to different weight vectors are selected by the tournament mechanism of the local search. In the experiments, the influence of three different aggregation functions is first studied, and the effect of the proposed local search is then investigated. Finally, MOMAD is compared with eight state-of-the-art algorithms on a series of well-known benchmark instances; the experimental results show that the proposed algorithm outperforms, or at least performs comparably to, the other algorithms.

1. Introduction

The job shop scheduling problem (JSP), in which a set of jobs is processed on a set of machines, is one of the most important and difficult problems in the field of manufacturing. Each job consists of a sequence of successive operations, and each operation must be processed on one designated machine. Unlike JSP, in which an operation may only be processed on a specific machine, the flexible job shop scheduling problem (FJSP) permits an operation to be processed by any machine from its set of available machines. Since FJSP needs to assign each operation to a suitable machine as well as to sequence the operations assigned to the same machine, it is a complex NP-hard optimization problem [1].

The existing literature [2–5] on the single-objective FJSP (SOFJSP) over the past decades mainly concentrated on minimizing one specific objective such as the makespan. However, in practical manufacturing, single-objective optimization cannot fully satisfy production requirements, since the objectives to be optimized are usually in conflict with each other. In recent years, the multiobjective flexible job shop scheduling problem (MOFJSP) has received much attention, and many algorithms have been developed to solve this kind of problem. These methods can be classified into two groups: a priori approaches and Pareto approaches.

In the a priori method, multiple objectives are usually combined linearly into a single one by the weighted sum approach, written as F(x) = Σ_{i=1}^{m} w_i f_i(x), where w_i ≥ 0 and Σ_{i=1}^{m} w_i = 1. However, only one or a few Pareto solutions can be obtained in this way, which may not well reflect the tradeoffs among different objectives, and it is difficult to assign appropriate weights for each problem. Even more importantly, the performance of such an algorithm deteriorates when the problem has a nonconvex Pareto front (PF). The Pareto approach instead searches for the Pareto set (PS) of the optimization problem by comparing solutions under the Pareto dominance relation [6]. A solution x is said to dominate a solution y iff x is not worse than y in all objectives and there exists at least one objective in which x is better than y. x is called a Pareto optimal solution iff there is no solution that dominates x. All the Pareto optimal solutions constitute the PS, and the PF is the image of the PS in the objective space. Since the Pareto approach yields a set of Pareto solutions rather than a single one, it has received much more attention than the a priori approach and is recognized as more suitable for solving MOFJSP.
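For minimization problems such as MOFJSP, the dominance relation above can be sketched in a few lines (a Python illustration for clarity; the paper's own implementation is in C++):

```python
def dominates(a, b):
    """Return True iff objective vector a Pareto-dominates b (minimization):
    a is no worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

For example, with (makespan, total workload, critical workload) vectors, (7, 42, 6) dominates (7, 44, 6), while (7, 42, 6) and (7, 43, 5) are mutually nondominated.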

Because the three objectives, makespan, total workload, and critical workload, are in conflict with each other, it is better to handle this model with knowledge about their PF. The multiobjective evolutionary algorithm (MOEA) is a mature class of global optimization methods with high robustness and wide applicability. Because MOEAs place few requirements on the optimization problem itself and can obtain multiple Pareto solutions in a single run, they are well suited to multiobjective optimization problems (MOPs). The multiobjective evolutionary algorithm based on decomposition (MOEA/D), which integrates mathematical programming with evolutionary algorithms (EAs), obtains a set of Pareto solutions by aggregating the objectives into different single-objective subproblems under many predefined weight vectors [7]. MOEA/D has shown great superiority on continuous optimization problems [8–12]; it is therefore worth investigating its performance on multiobjective combinatorial optimization problems (MO-COPs) such as MOFJSP. To the best of our knowledge, although MOEA/D has been applied to different kinds of multiobjective scheduling problems such as the multiobjective flow shop scheduling problem (MOFSP) [13], the multiobjective permutation flow shop scheduling problem (MOPFSP) [14], the multiobjective stochastic flexible job shop scheduling problem (MOSFJSP) [15], and the multiobjective job shop scheduling problem (MOJSP) [16], there is seldom a corresponding application to MOFJSP in the reported literature.

The primary aim of this paper is to solve MOFJSP in a decomposition manner by proposing a multiobjective memetic algorithm based on decomposition (MOMAD) that hybridizes MOEA/D with local search. To make the proposed algorithm more applicable, four aspects are studied: an integration of different machine assignment and operation sequencing strategies is presented to construct the initial population; objective normalization is incorporated into the Tchebycheff approach to convert an MOP into a number of single-objective subproblems; all weight vectors are divided into a few groups by K-means clustering, and some superior individuals corresponding to different weight vector groups are selected by a selection mechanism; a local search based on moving critical operations is applied to the selected individuals. To evaluate the effectiveness of the proposed algorithm, benchmark instances are tested with three purposes: investigating the effects of different aggregation functions and validating the effectiveness of the local search; analyzing the influence of the key parameters on the performance of the algorithm; comparing the performance of MOMAD with other state-of-the-art algorithms for solving MOFJSP.

The rest of the paper is organized as follows. The next section presents a short overview of the existing related work. In Section 3, the background knowledge of MOFJSP is introduced. Section 4 introduces the framework of MOMAD. The implementation details of the proposed MOMAD, including the genetic global search and the problem-specific local search, are described in Section 5. Afterwards, experimental studies are provided in Section 6. Finally, Section 7 concludes this paper and outlines some avenues for future research.

2. Related Work

As mentioned above, there are two main methods for solving MOFJSP: the a priori approach and the Pareto approach. As for the a priori approach, Xia and Wu [17] discussed a hybrid algorithm in which particle swarm optimization (PSO) and simulated annealing (SA) were employed for global and local search, respectively. A bottleneck-shifting-based local search was incorporated into a genetic global search by Gao et al. [18]. Zhang et al. [19] introduced an effective hybrid PSO algorithm combined with tabu search (TS). Xing et al. [20] used ten different weight vectors to collect effective solution sets. A hybrid TS (HTS) algorithm was constructed by combining adaptive rules with two neighborhoods; in this algorithm, three weight coefficients with different settings were used to test different problems [21]. An effective estimation of distribution algorithm (EDA) was proposed by Wang et al. [22], in which new individuals are generated by sampling a probability model.

Contrary to the a priori approach, the Pareto approach yields a PS and can thus present the tradeoffs among different objectives. An integration of fuzzy logic and EA was proposed by Kacem et al. [23]. A guided local search was incorporated into an EA to enhance convergence [24]. To preserve population diversity, immune and entropy principles were adopted in a multiobjective genetic algorithm (MOGA) [25]. Two memetic algorithms (MAs) were proposed, both of which integrate the nondominated sorting genetic algorithm II (NSGA-II) [6] with effective local search techniques [26, 27]. Several effective neighborhood approaches were used within variable neighborhood search to enhance the convergence ability of a hybrid Pareto-based local search (PLS) [28]. Chiang and Lin [29] proposed a simple and effective evolutionary algorithm that requires only two parameters. Both the machine assignment and operation sequence neighborhoods are considered by Xiong et al. [30]. An effective Pareto-based EDA was proposed by Wang et al. [31]. A novel path-relinking multiobjective TS algorithm was proposed in [32], in which a routing solution is identified by problem-specific neighborhood search and is then further refined by TS with back-jump tracking for the sequencing decision. In addition to the successful use of EAs, several swarm intelligence algorithms have also been widely used for global search. PSOs were used as global search algorithms in [33–36]. Besides, shuffled frog leaping [37] and artificial bee colony [38] were integrated with local search in related hybrid algorithms.

Besides their successful use in many scheduling problems, MOEA/D variants have also been widely applied to other MO-COPs. A novel NBI-style Tchebycheff approach was used in MOEA/D to solve a portfolio management MOP with disparately scaled objectives [39]. Mei et al. [40] developed an effective MA by combining MOEA/D with extended neighborhood search to solve the capacitated arc routing problem. Hill climbing, SA, and evolutionary gradient search were each embedded into an EDA for solving the multiple traveling salesmen problem (MTSP) in a decomposition manner [41]. A hybrid MOEA was established by combining ant colony optimization with MOEA/D [42] and was adopted to solve the multiobjective 0-1 knapsack problem (MOKP) and the MTSP, respectively. Aiming at the same two problems, Ke and Zhang then proposed a hybridization of decomposition and PLS [43].

As mentioned before, MOEA/D is a popular MOEA well suited to MO-COPs such as scheduling problems. In this paper, a MOMAD that integrates MOEA/D with local search is proposed to enrich the toolkit for solving MOFJSP.

3. Related Background Knowledge

3.1. Problem and Objective of MOFJSP

The MOFJSP can be formulated as follows. There are a set of n jobs {J_1, J_2, ..., J_n} and a set of m machines {M_1, M_2, ..., M_m}; each job J_j contains one or more operations to be processed according to a predetermined sequence. Each operation O_{j,h} (the h-th operation of job J_j) can be processed on any machine from its operable machine set M_{j,h}. The problem is called T-FJSP (total flexibility) iff every operation can be processed on every machine; otherwise, it is called P-FJSP (partial flexibility) [44]. MOFJSP not only assigns a suitable machine to each operation but also determines the most reasonable processing sequence of the operations assigned to the same machine, so as to optimize several objectives simultaneously.

The following constraints should be satisfied in the process:
(1) At any time, a machine can process at most one operation, and an operation can be processed by only one machine.
(2) An operation cannot be interrupted once started.
(3) All jobs and machines are available at time 0.
(4) All jobs share the same priority.
(5) There are no precedence constraints among the operations of different jobs, but there are precedence constraints among the operations belonging to the same job.

An instance of P-FJSP with three jobs and three machines is illustrated in Table 1. Let C_{j,h} and p_{j,h,k} be the completion time of operation O_{j,h} and its processing time on machine M_k, respectively, and let C_j denote the completion time of job J_j. The workload W_k of machine M_k is the sum of the processing times of the operations assigned to it. The three considered objectives, makespan, total workload, and critical workload, are formulated as follows:

  f_1 = C_max = max_{1 ≤ j ≤ n} C_j,
  f_2 = W_T = Σ_{k=1}^{m} W_k,
  f_3 = W_max = max_{1 ≤ k ≤ m} W_k.

Table 1: An instance of P-FJSP with 3 jobs on 3 machines.
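Given a decoded schedule, the three objectives follow directly from the per-job completion times and per-machine workloads; a minimal Python sketch (function and argument names are illustrative, not from the paper):

```python
def objectives(job_completion, machine_workload):
    """Compute (makespan, total workload, critical workload) for a decoded
    schedule, given each job's completion time and each machine's workload
    (the sum of processing times of the operations assigned to it)."""
    makespan = max(job_completion)             # C_max = max_j C_j
    total_workload = sum(machine_workload)     # W_T = sum_k W_k
    critical_workload = max(machine_workload)  # W_max = max_k W_k
    return makespan, total_workload, critical_workload
```

For instance, job completion times [13, 9, 11] and machine workloads [5, 6, 4] give the objective vector (13, 15, 6).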
3.2. Disjunctive Graph Model

The disjunctive graph model G = (V, C ∪ D) has been adapted to represent feasible schedules of FJSP. V denotes the node set, in which each node represents an operation; the virtual starting and ending operations are represented by two virtual nodes, 0 and ∗, respectively. C is the set of conjunctive arcs connecting adjacent operations of the same job, each arc indicating a precedence constraint within that job. D is the set of disjunctive arcs corresponding to adjacent operations scheduled on the same machine, with D = D_1 ∪ D_2 ∪ ... ∪ D_m, where D_k denotes the disjunctive arc set of machine M_k. The weight above a node denotes the processing time of the corresponding operation, and the machine selected to process the operation is labeled under the node.

Figure 1 shows the disjunctive graph of a feasible solution of the instance in Table 1, in which every disjunctive arc has been given a direction; the resulting graph is called a digraph, denoted G′. In G′, the longest path from node 0 to node ∗ is termed the critical path, and its length equals the makespan of the corresponding schedule. Every operation on a critical path is called a critical operation. In Figure 1, there is one critical path, whose length equals 13.

Figure 1: Illustration of the disjunctive graph.

So that the local search in the following section can be well described, some concepts based on the digraph are defined here. Suppose v is a node in G′ and its corresponding operation is O_v; M(v) denotes the machine assigned to process O_v, and ES(v) and EC(v) denote its earliest starting time and earliest completion time. Let p_v be the processing time of O_v on M(v); the latest starting time LS(v) and latest completion time LC(v) that do not delay the required makespan can be calculated backwards from the ending node by

  LC(v) = min{ LS(SJ(v)), LS(SM(v)) },  LS(v) = LC(v) − p_v,

where SJ(v) and SM(v) denote the successor of O_v in its job and on its machine, respectively, and the latest starting time of the ending node equals the makespan.

The operations scheduled on the same machine immediately before and after O_v are denoted as PM(v) and SM(v), respectively. In addition, PJ(v) and SJ(v) are the predecessor and successor operations of O_v within the same job. O_v is a critical operation if and only if ES(v) = LS(v).
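The earliest/latest times and the criticality test ES(v) = LS(v) amount to one forward and one backward longest-path pass over the digraph. A sketch under the assumption that operations are given in a topological order of G′ (all names are illustrative):

```python
from collections import defaultdict

def critical_operations(ops, p, preds):
    """Find the makespan and the critical operations of a decoded schedule.

    ops:   operation ids in a topological order of the digraph G'
    p:     p[v] = processing time of v on its assigned machine
    preds: preds[v] = the job and machine predecessors of v (possibly empty)
    """
    es = {}  # earliest starting times (forward pass)
    for v in ops:
        es[v] = max((es[u] + p[u] for u in preds[v]), default=0)
    makespan = max(es[v] + p[v] for v in ops)
    succs = defaultdict(list)
    for v in ops:
        for u in preds[v]:
            succs[u].append(v)
    lc = {}  # latest completion times (backward pass)
    for v in reversed(ops):
        lc[v] = min((lc[u] - p[u] for u in succs[v]), default=makespan)
    # v is critical iff its earliest start equals its latest start
    return makespan, [v for v in ops if es[v] == lc[v] - p[v]]
```

On a chain a → b with p(a) = 2 and p(b) = 3 plus an independent operation c with p(c) = 1, the makespan is 5 and only a and b are critical.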

4. Framework of the Proposed MOMAD

The framework of the MOMAD algorithm is formed by hybridizing MOEA/D with local search, as given in Algorithm 1. First, a set of uniformly distributed weight vectors is generated by Das and Dennis's approach [45], where each weight vector corresponds to one subproblem. Next, all the weight vectors are divided into groups by K-means clustering. After calculating the Euclidean distance between every pair of weight vectors, the neighborhood of each weight vector is formed by gathering its closest weight vectors. Then, the population is initialized. The ideal point vector is obtained as the componentwise infimum of the objective values found so far. The archive is established from the nondominated solutions in the initial population. In the mating steps, the two parent solutions are chosen from the neighborhood with a given probability or from the whole population otherwise. Then, a new solution is generated by crossover and mutation and is finally used to update the ideal point.

Algorithm 1: Framework of the proposed MOMAD.
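Das and Dennis's systematic approach places the weight vectors on a uniform simplex lattice. A compact Python sketch via a stars-and-bars enumeration (H is the number of divisions per objective; names are illustrative):

```python
from itertools import combinations

def das_dennis(m, H):
    """Generate all m-dimensional weight vectors with components from
    {0, 1/H, ..., 1} summing to 1 (Das and Dennis's simplex-lattice design),
    by enumerating the compositions of H into m nonnegative parts."""
    vectors = []
    for bars in combinations(range(H + m - 1), m - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(H + m - 2 - prev)
        vectors.append(tuple(c / H for c in parts))
    return vectors
```

For three objectives this yields C(H + 2, 2) vectors (e.g., 15 vectors for H = 4); these vectors could then be grouped, for example by K-means on the vectors themselves, as MOMAD does.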

The subsequent steps contain the updating and local search phases. Objective normalization is adopted before the population is updated. The largest value of each objective in the current population is recorded; the new solution is then compared one by one with the solutions corresponding to the weight vectors in the current neighborhood, and any solution with poorer fitness in terms of (3) is replaced by the new one. Note that the updating procedure terminates as soon as a predefined maximal replacement number, which helps preserve population diversity, is reached or the neighborhood is exhausted. Afterwards, the archive is updated: if no solution in the archive dominates the new solution, it is copied into the archive, and all repeated and dominated solutions are removed. Finally, after selecting the superior solutions in the current population, the local search is applied to obtain improved solutions, which are then reinjected into the population to improve it.
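A normalized Tchebycheff comparison under a weight vector can be sketched as follows. This is one common form of the normalized aggregation; the paper's exact Eq. (3) is not reproduced in this excerpt, so treat the formula as an assumption:

```python
def tchebycheff(f, lam, z_star, z_max, eps=1e-9):
    """Normalized Tchebycheff aggregation (smaller is better):
    g = max_i lam_i * (f_i - z*_i) / (z_max_i - z*_i),
    where z* is the ideal point and z_max holds the per-objective maxima
    in the current population; eps guards against division by zero."""
    return max(l * (fi - zi) / (zm - zi + eps)
               for l, fi, zi, zm in zip(lam, f, z_star, z_max))
```

During updating, a new solution replaces a neighbor for a given subproblem when its aggregated value under that subproblem's weight vector is smaller.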

5. Detailed Description of Exploration and Exploitation in MOMAD

5.1. Chromosome Encoding and Decoding

In MOMAD, a chromosome consists of two parts, a machine selection (MS) part and an operation sequence (OS) part, encoded by a machine selection vector and an operation sequence vector, respectively. Each integer in the MS vector represents, in turn, the machine assigned to each operation. In the OS vector, each gene is directly encoded with a job number; reading the chromosome from left to right, the h-th occurrence of a job number refers to the h-th operation of the corresponding job. Figure 2 shows a chromosome encoding of the P-FJSP instance shown in Table 1: the MS vector indicates which machine processes each operation, and the OS vector determines the order in which the operations are scheduled.

Figure 2: Chromosome encoding.
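The OS decoding rule ("the h-th occurrence of job j means operation O_{j,h}") can be sketched as follows (an illustrative helper, not from the paper):

```python
def read_os(os_vector):
    """Translate an OS vector of job numbers into explicit operation
    indices: the h-th occurrence of job j denotes operation O_{j,h}."""
    seen, sequence = {}, []
    for j in os_vector:
        seen[j] = seen.get(j, 0) + 1
        sequence.append((j, seen[j]))
    return sequence
```

For example, the OS vector [2, 1, 2, 3] is read as the operation sequence O_{2,1}, O_{1,1}, O_{2,2}, O_{3,1}.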

Since it has been verified that an optimal schedule exists among the active schedules [3], the greedy insertion algorithm [25] is employed for chromosome decoding, so that each operation is positioned at the earliest feasible time on its assigned machine. Note that, as a result, an operation may start earlier than another one that appears before it in the OS. So that the encoding reflects the actual processing sequence, the operations in the original OS are reordered according to their actual starting times after decoding.

5.2. Population Initialization

Population initialization plays an important role in the performance of MOMAD, since a high-quality and diverse initial population helps avoid premature convergence. Four machine assignment rules are used to produce the machine assignment vectors for MOFJSP. The first two rules are the global selection and local selection proposed by Zhao et al. [46]. The third rule selects a machine from the candidate machine set at random. The last rule assigns each operation to a machine with the minimum processing time. In our MOMAD, for machine assignment initialization, 50% of the individuals are generated by rule 1, 10% by rule 4, and rules 2 and 3 share the rest of the population.

Once the machines are assigned to each operation, the sequence of operations should be considered next. A mixture of four operation sequencing rules is employed to generate the initial operation sequencing vectors. The probabilities of using four operation sequencing rules are set as 0.3, 0.2, 0.3, and 0.2, respectively.

Operation sequencing rules are as follows:
(1) Most Work Remaining (MWR) [47]: the operations with the most remaining processing time are put into the operation sequencing vector first.
(2) Most Number of Operations Remaining (MOR) [47]: the operations with the most subsequent operations in the same job are taken into account preferentially.
(3) Shortest Processing Time (SPT) [48]: the operations with the shortest processing time are processed first.
(4) Random Dispatching: the operations on each machine are ordered randomly.

5.3. Exploration Using Genetic Operators

The problem-specific crossover and mutation operators are applied to produce the offspring, both of which are performed on each vector independently since the encoding of one chromosome has two components.

Crossover. For the MS vector, uniform crossover [3] is adopted. First, a subset of positions is chosen uniformly at random from the d positions of the vector, where d equals the total number of operations. Then, two new individuals are generated by exchanging the genes of the parent chromosomes at the selected positions. For the OS vector, the precedence-preserving order-based crossover (POX) [3] is applied.

Mutation. For the MS vector, two positions are chosen arbitrarily, and their genes are replaced with different machines from the corresponding candidate machine sets. For the OS vector, the mutation is achieved by exchanging two randomly chosen operations.
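A sketch of the OS operators, POX crossover and swap mutation, in Python (the MS operators act analogously on the machine vector; names are illustrative):

```python
import random

def pox(parent1, parent2, job_subset):
    """Precedence-preserving order-based crossover (POX): genes whose job
    belongs to job_subset keep their positions from parent1; the remaining
    positions are filled with parent2's other genes in their original order."""
    child = [g if g in job_subset else None for g in parent1]
    filler = (g for g in parent2 if g not in job_subset)
    return [g if g is not None else next(filler) for g in child]

def swap_mutation(os_vector, rng=random):
    """OS mutation: exchange two randomly chosen positions."""
    v = list(os_vector)
    i, j = rng.sample(range(len(v)), 2)
    v[i], v[j] = v[j], v[i]
    return v
```

Because POX keeps each job's operation count intact, the child is always a feasible OS vector, and swap mutation likewise preserves feasibility of the sequence part.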

5.4. Exploitation Using Local Search

It is widely accepted that integrating a local search can effectively improve the performance of EAs, and it is an effective strategy for solving FJSP [31]; it is therefore important to design an effective local search method so that MOMAD keeps a good balance between exploration and exploitation. To save computation as well as to maintain population diversity, only a limited number of individuals are selected from the evolutionary population for local search in each generation. Two critical issues are introduced in detail in the next two subsections:
(1) How should appropriate solutions be selected for local search?
(2) Which local search method should be used?

5.4.1. Selection of Individuals for Local Search

When selecting a candidate individual, a weight vector is first chosen from the vector set at random. Then, all incumbent solutions corresponding to the weight vectors that belong to the same group as the chosen vector are selected. Next, the selected individuals are compared according to (3) under the chosen weight vector, and the one with the best fitness is selected as the candidate solution for local search. Finally, the improved solution obtained by the local search is used to update the neighborhood and the archive. The basic framework of the local search is summarized as Algorithm 2.

Algorithm 2: LocalSearch ().
5.4.2. Description of Local Search Method

Considering that the makespan is the most important and the hardest of the three objectives to optimize, the local search is performed on the decoded schedule of a chromosome rather than on the chromosome itself. Suppose Ω is the set of critical operations of a schedule s, and let C_max(s) be the makespan of s. The following properties based on the disjunctive graph are summarized from [2]:
(1) It has been proven that the makespan can only be reduced by moving critical operations.
(2) Suppose s′ is obtained by deleting one critical operation O from s. Let Q_k be the sequence of operations processed on machine M_k, sorted by increasing starting time (note that O ∉ Q_k) in s′; two subsequences of Q_k, denoted here R_k and L_k, consist of the operations that must be processed after and before O, respectively, to preserve feasibility. It has been verified that a feasible schedule can be obtained by inserting O into any position of Φ, where Φ contains all positions before all the operations of R_k and after all the operations of L_k.
(3) The insertion of O at a position in Φ must additionally satisfy constraint (5).

When considering the action of finding a position on machine M_k at which to insert a critical operation O, only the positions in Φ need to be taken into account. O is inserted into a position in Φ if that position satisfies (5), and the remaining positions in Φ are no longer considered. Suppose M_O is the set of alternative machines that can process O; O then has |M_O| actions, and the action set of all critical operations contains Σ_{O ∈ Ω} |M_O| actions. A hierarchical strategy [27] is adopted to calculate ΔW_T and ΔW_max, which respectively represent the variations of the total workload and the critical workload; for example, moving O from machine M_k to machine M_{k′} changes the total workload by ΔW_T = p_{O,k′} − p_{O,k}. All actions are sorted in nondescending order of ΔW_T; if two actions have the same ΔW_T, the lower ΔW_max is used as the second criterion.

The actions in the sorted set are considered in succession until a neighbor is established. Thereafter, the neighbor schedule is compared with the original one under the acceptance rule described in Algorithm 3: the original schedule is replaced by the neighbor provided that the neighbor is not dominated by it and the two differ from each other. Note that the length of one local search pass is controlled in two ways to save computing cost. First, while generating a neighborhood, as soon as an effective action is found, a neighbor schedule is formed and the remaining actions are no longer considered. Second, a maximal iteration number is set, so that the search terminates when it is exhausted.

Algorithm 3: LocalSearchForIndividual .
5.4.3. Computational Complexity Analysis

The major computational costs of MOMAD arise in the reproduction and updating steps of Algorithm 1 and in the local search. The reproduction step performs comparisons and assignments for each offspring, and the objective normalization requires one pass over the current population. Since neighborhood solutions must be computed and compared during each update, the updating phase dominates the per-generation cost of the main loop, which is repeated once per subproblem. In the local search, comparing the neighbor schedule with the original one requires a fixed number of basic operations per iteration, with the worst case occurring when the maximal number of iterations is reached; the total local search cost is this amount multiplied by the number of selected individuals. Summing these terms gives the total computational complexity of MOMAD per generation.

6. Experiments and Results

To test the performance of MOMAD, 5 well-known Kacem instances [23] and 10 BRdata instances (MK01~MK10) [49] are investigated in the experiments. MOMAD is implemented in C++ and run on an Intel Core i3-4170 3.70 GHz processor with 4 GB of RAM.

6.1. Parameter Settings

To eliminate the influence of random factors on the performance of the algorithm, the proposed MOMAD is run independently 10 times on each test instance, and each run terminates when the maximal number of objective function evaluations is reached. The parameters used in the MOMAD algorithm are listed in Table 2. Moreover, because the test instances differ in complexity, the predefined maximal numbers of objective evaluations for the different problems are listed in the second column of Table 3.

Table 2: Parameter settings of the proposed MOMAD algorithm.
Table 3: Comparison of three different aggregation functions on average IGD and HV values over 10 independent runs for all Kacem and BRdata instances.
6.2. Performance Metrics

In order to quantitatively evaluate the performance of the compared algorithms, three quantitative indicators are employed to make the comparison more convincing, and they are described as follows.

(1) Inverted Generational Distance (IGD) [50]. Let P be the approximate PF obtained by a compared algorithm and P∗ be a reference PF uniformly distributed in the objective space. The IGD metric measures the distance from P∗ to P, with smaller values representing better performance. If P∗ is large enough to represent the reference PF well, the IGD metric reflects both convergence and diversity to some extent; that is,

  IGD(P, P∗) = ( Σ_{v ∈ P∗} d(v, P) ) / |P∗|,

where d(v, P) is the minimum Euclidean distance between v and the points of P.

In this experiment, since the actual PFs of the benchmark instances are unknown, the reference set P∗ used in calculating the IGD metric is obtained in two steps. First, all final nondominated solutions obtained by all compared algorithms over all independent runs are merged. Then, the nondominated solutions of the merged set are selected as P∗.
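The IGD computation described above is short enough to state directly (a Python sketch):

```python
import math

def igd(approx, reference):
    """Inverted generational distance: the average Euclidean distance from
    each reference-front point to its nearest neighbor in the approximation."""
    return sum(min(math.dist(r, a) for a in approx)
               for r in reference) / len(reference)
```

For example, with approximation {(0, 0)} and reference front {(0, 0), (3, 4)}, the distances are 0 and 5, so IGD = 2.5.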

(2) Set Coverage (C) [51]. The C metric directly reflects the dominance relationship between two approximate PFs. Let A and B be two approximate PFs obtained by two different algorithms; then C(A, B) is defined as

  C(A, B) = |{ b ∈ B : ∃ a ∈ A such that a dominates b }| / |B|.

C(A, B) represents the percentage of solutions in B that are dominated by at least one solution in A; note that, in general, C(A, B) + C(B, A) ≠ 1. If C(A, B) is larger than C(B, A), algorithm A is better than algorithm B to some extent.
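The C metric can likewise be sketched in a few lines (Python, minimization):

```python
def coverage(A, B):
    """Set coverage C(A, B): the fraction of solutions in B that are
    dominated by at least one solution in A (minimization)."""
    def dom(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return sum(any(dom(a, b) for a in A) for b in B) / len(B)
```

For example, C({(1, 1)}, {(2, 2), (0, 3)}) = 0.5, since (2, 2) is dominated by (1, 1) but (0, 3) is not.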

(3) Hypervolume (HV) [51]. The HV metric measures the volume of the region enclosed by the PF P and a reference vector r, with larger values representing better performance. It can be obtained by

  HV(P, r) = volume( ∪_{x ∈ P} v(x, r) ),

where v(x, r) is the hypercube enclosed by solution x and the reference vector r. The HV measure reflects both the convergence and the diversity of the corresponding PF to a certain degree.

For convenience of computation, all objective vectors of the Pareto solutions are normalized based on (10) before calculating the three metrics, and the reference vectors of all benchmark instances for calculating the HV value are set in this normalized space:

  f_i′ = (f_i − f_i^min) / (f_i^max − f_i^min),

where f_i^max and f_i^min are the supremum and infimum of f_i over all nondominated solutions gathered from all compared algorithms.
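The normalization in (10) and a rough hypervolume estimate can be sketched as follows. The Monte Carlo estimator is only an illustration (exact algorithms such as WFG are used in practice), and the reference point in the example is a hypothetical choice, not the paper's setting:

```python
import math
import random

def normalize(points):
    """Scale each objective to [0, 1] using the extrema over all gathered
    nondominated solutions, as in Eq. (10)."""
    m = len(points[0])
    lo = [min(p[i] for p in points) for i in range(m)]
    hi = [max(p[i] for p in points) for i in range(m)]
    return [tuple((p[i] - lo[i]) / ((hi[i] - lo[i]) or 1) for i in range(m))
            for p in points]

def hv_monte_carlo(front, ref, n=20000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by `front` with
    respect to reference point `ref` (minimization): sample points in the
    box [0, ref] and count those weakly dominated by some front member."""
    rng = random.Random(seed)
    m = len(ref)
    hits = 0
    for _ in range(n):
        s = [rng.uniform(0, ref[i]) for i in range(m)]
        if any(all(p[i] <= s[i] for i in range(m)) for p in front):
            hits += 1
    return hits / n * math.prod(ref)
```

On a normalized front, a reference point slightly beyond (1, ..., 1) is a typical (here hypothetical) choice.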

6.3. Performance Comparison with Several Variants of MOMAD

Because the implementation of the algorithm framework is not unique and different strategies can be employed to instantiate it, several variants of MOMAD are studied first. MOEA/D is a simplified algorithm obtained by eliminating the local search from MOMAD, used to investigate the local search's effectiveness. To study the effect of different aggregation functions, MOMAD-WS and MOMAD-PBI are formed by replacing the Tchebycheff approach with the weighted sum (WS) [7] and the penalty-based boundary intersection (PBI) approach [7], respectively. The Wilcoxon signed-rank test [52] is performed on the samples of the three metrics obtained over 10 independent runs at a significance level of 0.05, and results that are significantly better than the others are marked in bold. The three metric values are listed in Tables 3–5.

Table 4: Comparison of three different aggregation functions on average C value over 10 independent runs for all 15 problems.
Table 5: Performance evaluation of the effect of local search using IGD, HV, and C values over 10 independent runs for all 15 problems.

Tables 3 and 4 list the results of the performance comparison between MOMAD, MOMAD-WS, and MOMAD-PBI. MOMAD is significantly better than MOMAD-WS on MK10 for the IGD, HV, and C values. Besides, MOMAD performs better than MOMAD-WS on MK07 and MK09 in terms of the HV and C values and better on MK06 for the IGD metric. In contrast, MOMAD-WS achieves a better HV value on only one Kacem instance. Compared with MOMAD-PBI, MOMAD is better on all the BRdata instances in terms of the IGD and HV values and also obtains better C metric values on 8 out of 10 instances. In summary, the presented results indicate that the Tchebycheff aggregation function is the most suitable for the MOEA/D framework when solving MOFJSP.

To understand the effectiveness of the problem-specific local search, MOMAD and MOEA/D are compared, and the three metric values over 10 independent runs are shown in Table 5. For the IGD value, the two algorithms differ significantly on 8 test problems, and MOMAD significantly outperforms MOEA/D on all of them. The other two metrics show similar patterns: MOMAD outperforms MOEA/D on 9 out of 15 instances in terms of the HV metric and on 7 out of 15 instances for the C metric, while MOEA/D obtains a better C value only on MK06. Based on these comparison results and analyses, MOMAD is much more powerful than MOEA/D, which verifies the effectiveness of the local search.

To compare the convergence behavior of the two algorithms intuitively, the evolution of the average IGD values of MOMAD and MOEA/D on four selected BRdata instances is illustrated in Figure 3. As can be clearly seen, with the increasing number of function evaluations, the IGD values on all four instances gradually decrease and stabilize, which indicates that both algorithms converge well. Since MOMAD achieves lower IGD convergence curves, it clearly attains better convergence quality and efficiency than MOEA/D.

Figure 3: Convergence graphs in terms of average IGD value obtained by MOMAD and MOEA/D for MK01, MK02, MK07, and MK10 problems.
6.4. Performance Comparison with Other Algorithms

In this subsection, MOMAD is compared with several state-of-the-art algorithms. First, to compare MOMAD with algorithms that solve MOFJSP by an a priori approach, MOMAD is evaluated on five Kacem instances against PSO + SA [17], hGA [18], PSO + TS [19], Xing et al.'s algorithm [20], HTSA [21], and EDA [22]. All Pareto solutions, marked in bold, are listed in Table 6. Next, in Tables 7–10, MOMAD is compared with eight recently proposed Pareto-based algorithms for MOFJSP, namely, MOGA [25], PLS [28], HSFLA [37], HMOEA [30], SEA [29], P-EDA [31], hDPSO [34], and PRMOTS + IS [32]. It should be pointed out that, in their original papers, these algorithms report results collected over a predefined number of runs rather than for each individual run. Therefore, the statistical comparisons made earlier no longer apply; instead, the three metrics are computed on the set of PFs collected over the predefined runs of each algorithm.
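For reference, the IGD metric used throughout these tables can be computed as below. This is a minimal sketch under the usual convention that the reference front is the set of nondominated solutions collected from all compared algorithms; any normalization applied in the paper is omitted here.

```python
import numpy as np

def igd(approx, reference):
    # Inverted Generational Distance: mean Euclidean distance from each
    # reference-front point to its nearest point in the approximation set
    # (lower is better; 0 means the reference front is fully covered)
    A = np.asarray(approx, dtype=float)
    R = np.asarray(reference, dtype=float)
    dists = np.linalg.norm(R[:, None, :] - A[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

print(igd([[1, 0], [0, 1]], [[1, 0], [0, 1]]))  # 0.0: reference fully covered
print(igd([[2, 0], [0, 2]], [[1, 0], [0, 1]]))  # 1.0: each reference point is 1 away
```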

Table 6: Comparison on the Kacem instances by listing the nondominated solutions.
Table 7: Comparison between MOMAD and other algorithms using the IGD metric for the Kacem and BRdata instances.
Table 8: Comparison between MOMAD and other algorithms using the HV metric for the Kacem and BRdata instances.
Table 9: Comparison between MOMAD and other algorithms using the C metric for the Kacem and BRdata instances.
Table 10: Comparison between MOMAD and other algorithms using the C metric for the Kacem and BRdata instances.

As shown in Table 6, first, MOMAD obtains more Pareto solutions than all the other algorithms on the five instances, except EDA on Ka and Ka and Xing on Ka . Second, for Ka , the solution (7, 44, 6) obtained by PSO + SA and the solution (7, 43, 6) obtained by PSO + TS are dominated by (7, 42, 6) and (7, 43, 5) obtained by MOMAD, respectively. Besides, one solution (15, 76, 12) of Ka obtained by Xing is dominated by (15, 75, 12) obtained by MOMAD. For Ka 15 × 10, the two solutions (12, 91, 11) and (11, 93, 11), obtained by PSO + SA and PSO + TS, respectively, are dominated by (11, 91, 11) and (11, 93, 10) achieved by MOMAD. In summary, compared with the algorithms based on an a priori approach, MOMAD obtains more nondominated solutions of higher quality.
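The dominance relations cited above can be checked mechanically. A minimal sketch for the minimization triples (makespan, total workload, critical workload) from Table 6:

```python
def dominates(a, b):
    # a Pareto-dominates b (all objectives minimized) iff a is no worse
    # in every objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

print(dominates((7, 42, 6), (7, 44, 6)))  # True: MOMAD's point beats PSO + SA's
print(dominates((7, 43, 5), (7, 43, 6)))  # True: MOMAD's point beats PSO + TS's
print(dominates((7, 42, 6), (7, 43, 5)))  # False: MOMAD's two points are incomparable
```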

Tables 7 and 8 show the comparison of IGD and HV values between MOMAD and the eight Pareto-based algorithms. First, except for MOGA and hDPSO, MOMAD and the other six algorithms can find all the nondominated solutions of Ka , Ka , Ka , and Ka . Although no algorithm finds all the nondominated solutions of Ka , MOMAD and the other six algorithms are better than MOGA and HMOEA there. Next, we focus on the BRdata test set. For the IGD metric, MOMAD obtains the best values on 6 out of 10 instances and the second-best values on three instances. The HV metric behaves similarly: MOMAD achieves the best HV values on 7 out of 10 instances and the second-best values on all the others. MOGA obtains the best IGD value on MK07, but MOMAD is better than MOGA on the other 9 instances. The same holds for SEA, P-EDA, and PRMOTS + IS: although they are superior to MOMAD on a few instances in terms of IGD and HV values, they perform worse than MOMAD in most cases.
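When an exact three-objective hypervolume routine is not at hand, the HV values reported in Table 8 can be approximated by Monte Carlo sampling. This is an illustrative estimator, not the paper's implementation; it assumes the objectives have been normalized so that the reference point is the worst corner of the box starting at the origin.

```python
import numpy as np

def hv_monte_carlo(front, ref_point, n_samples=100_000, seed=0):
    # Estimate the hypervolume (minimization) dominated by `front` inside
    # the box [0, ref_point]: the fraction of uniform samples weakly
    # dominated by some front point, scaled by the box volume
    rng = np.random.default_rng(seed)
    F = np.asarray(front, dtype=float)
    ref = np.asarray(ref_point, dtype=float)
    pts = rng.uniform(size=(n_samples, ref.size)) * ref
    dominated = np.any(np.all(F[None, :, :] <= pts[:, None, :], axis=2), axis=1)
    return float(dominated.mean() * ref.prod())

print(hv_monte_carlo([[0.5, 0.5]], [1.0, 1.0]))  # close to the exact value 0.25
```

A larger estimate means the front covers more of the objective box, which is why bigger HV is better in Tables 7 and 8.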

Tables 9 and 10 compare MOMAD with the eight algorithms using the set coverage value. MOMAD is clearly superior to five of them; the exceptions are MOGA, SEA, and PRMOTS + IS. Compared with MOGA, MOMAD is worse on MK07 and MK08 but better on 7 BRdata instances. Compared with SEA, MOMAD is generally better on MK01, MK02, and MK10, whereas SEA generally performs better on MK04, MK06, and MK09. As for PRMOTS + IS, MOMAD is superior on MK02, MK05, MK07, MK09, and MK10, whereas PRMOTS + IS only wins on MK01, MK04, and MK06.
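The set coverage values follow the C metric of [51]; a minimal sketch using strict Pareto dominance, which matches the reading later in this section that C(A, B) = 1 means every solution in B is dominated by some solution in A (the triples below are made-up objective vectors):

```python
def coverage(A, B):
    # C(A, B): fraction of solutions in B dominated by at least one
    # solution in A; C(A, B) = 1 means A completely covers B
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return sum(any(dominates(a, b) for a in A) for b in B) / len(B)

A = [(5, 40, 6), (6, 38, 5)]
B = [(6, 42, 6), (6, 38, 5)]
print(coverage(A, B))  # 0.5: only (6, 42, 6) is dominated, by (5, 40, 6)
```

Note that C(A, B) and C(B, A) are computed separately, since neither determines the other.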

Table 11 shows the comparison between MOMAD and MOGA on the 18 DPdata instances proposed in [53]. The predefined maximal number of function evaluations for each instance is shown in the second column, and MOMAD is independently run 5 times per instance. From the IGD and HV values, it is easily observed that MOMAD performs much better than MOGA, since MOGA only obtains a better IGD value on 07a and a better HV value on 02a. The comparison on the C metric is similar: MOMAD achieves 16 significantly better results, and there is no significant difference between MOMAD and MOGA on 02a and 09a. In addition, C(MOMAD, MOGA) equals one on 12 test instances, which means that, on these 12 instances, every solution obtained by MOGA is dominated by at least one solution obtained by MOMAD.

Table 11: Comparison between MOMAD and MOGA using the IGD, HV, and C metrics for the DPdata instances.

The objective ranges of MOMAD and PRMOTS + IS on the DPdata instances are given in Table 12. MOMAD finds a wider spread of nondominated solutions than PRMOTS + IS, especially in terms of makespan and total workload. Thus, MOMAD is more effective than PRMOTS + IS at exploring the search space.

Table 12: Objective ranges for the DPdata.

Based on the above IGD and HV values, the average performance scores over the 10 BRdata instances are further computed to rank the compared algorithms, which makes it easier to quantify their overall performance at a glance. On a specific BRdata instance, suppose A_1, A_2, ..., A_l denote the l algorithms employed in the comparison. Let δ(i, j) be 1 if and only if A_j obtains a smaller IGD value and a bigger HV value than A_i, and 0 otherwise. Then the performance score of each algorithm A_i can be calculated as [54]

P(A_i) = Σ_{j = 1, j ≠ i}^{l} δ(i, j),

that is, the number of algorithms that beat A_i on both metrics.

It should be noted that the smaller the score, the better the algorithm. Since PLS and HMOEA only consider part of the instances, we first rank all nine algorithms on the MK01–MK03 and MK08 instances and then rank the remaining seven algorithms (excluding PLS and HMOEA) on the other 6 instances. Figure 4 shows the average performance score of the IGD and HV values over the 10 BRdata instances for the 9 selected algorithms; the rank corresponding to each algorithm's score is listed in the bracket. It is easily observed that MOMAD works well in terms of the IGD and HV metrics, achieving good performance on almost all the test problems.
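The score defined above can be computed per instance as follows; this is a minimal sketch with made-up metric values, where "beats" means a strictly smaller IGD and a strictly bigger HV, following [54]:

```python
def performance_scores(igd_vals, hv_vals):
    # igd_vals[i], hv_vals[i]: metric values of algorithm i on one instance.
    # Score of algorithm i = number of rivals j that obtain both a smaller
    # IGD and a bigger HV than i (smaller score is better).
    n = len(igd_vals)
    return [sum(1 for j in range(n)
                if j != i and igd_vals[j] < igd_vals[i] and hv_vals[j] > hv_vals[i])
            for i in range(n)]

# Hypothetical metric values for three algorithms on one instance
print(performance_scores([0.10, 0.20, 0.30], [0.90, 0.80, 0.70]))  # [0, 1, 2]
```

Averaging these per-instance scores over the BRdata instances gives the ranking plotted in Figure 4.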

Figure 4: Ranking in the average performance score over MK01, MK02, MK03, and MK08 problem instances for the compared nine algorithms and over other six BRdata instances for the compared seven algorithms. The smaller the score, the better the overall performance in terms of HV and IGD metrics.

Figures 5 and 6 show the final PFs of the MK01–MK10 instances obtained by MOMAD together with the reference PFs generated by selecting the nondominated solutions from the union of the eight compared algorithms. As can be seen, MOMAD finds all the Pareto solutions for MK02. For MK01, MK03, MK05, MK07, and MK08, MOMAD finds almost all the Pareto solutions, and it finds the vast majority of the optimal solutions for MK04, MK06, MK09, and MK10. Thus, MOMAD is capable of finding a widely spread PF for each instance, revealing the tradeoffs among the three objectives.

Figure 5: Final nondominated solutions of the MK01–MK06 instances found by MOMAD.
Figure 6: Final nondominated solutions of the MK07–MK10 instances found by MOMAD.

The average CPU time consumed by MOMAD and the compared algorithms is listed in Table 13. However, differences in experimental platform, programming language, and programming skill make this comparison not entirely fair. Hence, we report the CPU time together with the original experimental platform and programming language of each algorithm, which helps to gauge the efficiency of the referred algorithms. The values show that the computational time consumed by MOMAD is much smaller than that of the other algorithms except on Ka .

Table 13: The average CPU time (in seconds) consumed by different algorithms on Kacem and BRdata instances.

In summary, from the above experimental results and comparisons, it can be concluded that MOMAD outperforms, or at least has comparable performance to, all the other typical algorithms when solving MOFJSP.

6.5. Impacts of Parameters Settings

(1) Impacts of Population Size. Since the population size is an important parameter of MOMAD, we test the sensitivity of MOMAD to its different settings on MK04. All other parameters are the same as shown in Table 2. The algorithm is independently run 10 times with each population size. As clearly shown in Table 14, the HV value changes only weakly as the population size increases, while the IGD value decreases at first and then tends to grow. Overall, MOMAD is not significantly sensitive to the population size, and properly increasing it can improve the algorithm performance to some extent.

Table 14: IGD and HV metrics corresponding to different population sizes and neighborhood sizes.

(2) Impacts of Neighborhood Size. The neighborhood size is another key parameter of MOMAD. To study its sensitivity, MOMAD is implemented with different neighborhood sizes while the other parameter settings are kept unchanged, and it is run 10 times independently with each setting on MK06. Table 14 shows how the IGD and HV values change as the neighborhood size increases, from which the same observations can be made: both values improve at first and then degrade as the neighborhood size keeps increasing. So the influence of the neighborhood size on MOMAD is similar to that of the population size.

7. Conclusions and Future Work

This paper solves the MOFJSP in a decomposition manner to simultaneously minimize makespan, total workload, and critical workload. To build an effective MOMAD algorithm, the framework of MOEA/D is adopted. First, a mixture of different machine assignment and operation sequencing rules is utilized to generate the initial population. Then, an objective normalization technique is used in the Tchebycheff approach, and the MOP is converted into a number of single-objective optimization subproblems. By clustering the weight vectors into different groups, a local exploitation based on moving critical operations is incorporated into MOEA/D and applied to the candidate solutions with the best aggregation function value among the solutions whose weight vectors belong to the same group. Embedding this local search into MOEA/D yields our MOMAD. In the simulation experiments, the Tchebycheff approach is shown to be more suitable for the MOEA/D framework than the WS and PBI approaches, and the effectiveness of the local search is also verified. Moreover, MOMAD is compared with eight competitive algorithms in terms of three quantitative metrics. Finally, the effects of two key parameters, the population size and the neighborhood size, are analysed. Extensive computational results and comparisons indicate that the proposed MOMAD outperforms, or at least has comparable performance to, other representative approaches and is well suited to solving MOFJSP.

In the future, we first plan to study the MOFJSP with more than three objectives. Second, it would be worthwhile to design a local search that moves more than one critical operation at a time. Finally, it would also be interesting to apply MOMAD to dynamic scheduling problems.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research work is supported by the National Natural Science Foundation of China (no. 61572238) and the Provincial Outstanding Youth Foundation of Jiangsu Province (no. BK20160001).

References

  1. M. R. Garey, D. S. Johnson, and R. Sethi, “The complexity of flowshop and jobshop scheduling,” Mathematics of Operations Research, vol. 1, no. 2, pp. 117–129, 1976.
  2. M. Mastrolilli and L. M. Gambardella, “Effective neighbourhood functions for the flexible job shop problem,” Journal of Scheduling, vol. 3, no. 1, pp. 3–20, 2000.
  3. G. Zhang, L. Gao, and Y. Shi, “An effective genetic algorithm for the flexible job-shop scheduling problem,” Expert Systems with Applications, vol. 38, no. 4, pp. 3563–3573, 2011.
  4. L. Wang, S. Wang, Y. Xu, G. Zhou, and M. Liu, “A bi-population based estimation of distribution algorithm for the flexible job-shop scheduling problem,” Computers & Industrial Engineering, vol. 62, no. 4, pp. 917–926, 2012.
  5. Y. Yuan and H. Xu, “An integrated search heuristic for large-scale flexible job shop scheduling problems,” Computers & Operations Research, vol. 40, no. 12, pp. 2864–2877, 2013.
  6. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  7. Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
  8. H. Li and Q. Zhang, “Multi-objective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 284–302, 2009.
  9. Q. Zhang, W. Liu, E. Tsang, and B. Virginas, “Expensive multiobjective optimization by MOEA/D with Gaussian process model,” IEEE Transactions on Evolutionary Computation, vol. 14, no. 3, pp. 456–474, 2010.
  10. Y.-Y. Tan, Y.-C. Jiao, H. Li, and X.-K. Wang, “A modification to MOEA/D-DE for multiobjective optimization problems with complicated Pareto sets,” Information Sciences, vol. 213, pp. 14–38, 2012.
  11. Y. Yuan, H. Xu, B. Wang, B. Zhang, and X. Yao, “A new dominance relation-based evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
  12. Y. Yuan, H. Xu, B. Wang, B. Zhang, and X. Yao, “Balancing convergence and diversity in decomposition-based many-objective optimizers,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 2, pp. 180–198, 2016.
  13. P. C. Chang, S. H. Chen, Q. Zhang, and J. L. Lin, “MOEA/D for flowshop scheduling problems,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08), pp. 1433–1438, Hong Kong, June 2008.
  14. A. Alhindi and Q. Zhang, “MOEA/D with tabu search for multiobjective permutation flow shop scheduling problems,” in Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC 2014), pp. 1155–1164, China, July 2014.
  15. X.-N. Shen, Y. Han, and J.-Z. Fu, “Robustness measures and robust scheduling for multi-objective stochastic flexible job shop scheduling problems,” Soft Computing, pp. 1–24, 2016.
  16. F. Zhao, Z. Chen, J. Wang, and C. Zhang, “An improved MOEA/D for multi-objective job shop scheduling problem,” International Journal of Computer Integrated Manufacturing, vol. 30, no. 6, pp. 616–640, 2017.
  17. W. Xia and Z. Wu, “An effective hybrid optimization approach for multi-objective flexible job-shop scheduling problems,” Computers & Industrial Engineering, vol. 48, no. 2, pp. 409–425, 2005.
  18. J. Gao, M. Gen, L. Sun, and X. Zhao, “A hybrid of genetic algorithm and bottleneck shifting for multiobjective flexible job shop scheduling problems,” Computers & Industrial Engineering, vol. 53, no. 1, pp. 149–162, 2007.
  19. G. H. Zhang, X. Y. Shao, P. G. Li, and L. Gao, “An effective hybrid particle swarm optimization algorithm for multi-objective flexible job-shop scheduling problem,” Computers & Industrial Engineering, vol. 56, no. 4, pp. 1309–1318, 2009.
  20. L.-N. Xing, Y.-W. Chen, and K.-W. Yang, “An efficient search method for multi-objective flexible job shop scheduling problems,” Journal of Intelligent Manufacturing, vol. 20, no. 3, pp. 283–293, 2009.
  21. J.-Q. Li, Q.-K. Pan, and Y.-C. Liang, “An effective hybrid tabu search algorithm for multi-objective flexible job-shop scheduling problems,” Computers & Industrial Engineering, vol. 59, no. 4, pp. 647–662, 2010.
  22. S. Wang, L. Wang, M. Liu, and Y. Xu, “An estimation of distribution algorithm for the multi-objective flexible job-shop scheduling problem,” in Proceedings of the 2013 IEEE Symposium on Computational Intelligence in Scheduling (CISched 2013), pp. 1–8, Singapore, April 2013.
  23. I. Kacem, S. Hammadi, and P. Borne, “Pareto-optimality approach for flexible job-shop scheduling problems: hybridization of evolutionary algorithms and fuzzy logic,” Mathematics and Computers in Simulation, vol. 60, no. 3-5, pp. 245–276, 2002.
  24. N. B. Ho and J. C. Tay, “Solving multiple-objective flexible job shop problems by evolution and local search,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 38, no. 5, pp. 674–685, 2008.
  25. X. Wang, L. Gao, C. Zhang, and X. Shao, “A multi-objective genetic algorithm based on immune and entropy principle for flexible job-shop scheduling problem,” The International Journal of Advanced Manufacturing Technology, vol. 51, no. 5-8, pp. 757–767, 2010.
  26. M. Frutos, A. C. Olivera, and F. Tohmé, “A memetic algorithm based on a NSGA-II scheme for the flexible job-shop scheduling problem,” Annals of Operations Research, vol. 181, pp. 745–765, 2010.
  27. Y. Yuan and H. Xu, “Multiobjective flexible job shop scheduling using memetic algorithms,” IEEE Transactions on Automation Science and Engineering, vol. 12, no. 1, pp. 336–353, 2015.
  28. J.-Q. Li, Q.-K. Pan, and J. Chen, “A hybrid Pareto-based local search algorithm for multi-objective flexible job shop scheduling problems,” International Journal of Production Research, vol. 50, no. 4, pp. 1063–1078, 2012.
  29. T.-C. Chiang and H.-J. Lin, “A simple and effective evolutionary algorithm for multiobjective flexible job shop scheduling,” International Journal of Production Economics, vol. 141, no. 1, pp. 87–98, 2013.
  30. J. Xiong, X. Tan, K. Yang, L. Xing, and Y. Chen, “A hybrid multiobjective evolutionary approach for flexible job-shop scheduling problems,” Mathematical Problems in Engineering, vol. 2012, pp. 1–27, 2012.
  31. L. Wang, S. Y. Wang, and M. Liu, “A Pareto-based estimation of distribution algorithm for the multi-objective flexible job-shop scheduling problem,” International Journal of Production Research, vol. 51, no. 12, pp. 3574–3592, 2013.
  32. S. Jia and Z.-H. Hu, “Path-relinking tabu search for the multi-objective flexible job shop scheduling problem,” Computers & Operations Research, vol. 47, pp. 11–26, 2014.
  33. G. Moslehi and M. Mahnam, “A Pareto approach to multi-objective flexible job-shop scheduling problem using particle swarm optimization and local search,” International Journal of Production Economics, vol. 129, no. 1, pp. 14–22, 2011.
  34. X. Shao, W. Liu, Q. Liu, and C. Zhang, “Hybrid discrete particle swarm optimization for multi-objective flexible job-shop scheduling problem,” The International Journal of Advanced Manufacturing Technology, vol. 67, no. 9–12, pp. 2885–2901, 2013.
  35. L. C. F. Carvalho and M. A. Fernandes, “Multi-objective flexible job-shop scheduling problem with DIPSO: more diversity, greater efficiency,” in Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC 2014), pp. 282–289, China, July 2014.
  36. N. Tian and Z. Ji, “Pareto-ranking based quantum-behaved particle swarm optimization for multiobjective optimization,” Mathematical Problems in Engineering, vol. 2015, Article ID 940592, 10 pages, 2015.
  37. J. Li, Q. Pan, and S. Xie, “An effective shuffled frog-leaping algorithm for multi-objective flexible job shop scheduling problems,” Applied Mathematics and Computation, vol. 218, no. 18, pp. 9353–9371, 2012.
  38. L. Wang, G. Zhou, Y. Xu, and M. Liu, “An enhanced Pareto-based artificial bee colony algorithm for the multi-objective flexible job-shop scheduling,” The International Journal of Advanced Manufacturing Technology, vol. 60, no. 9–12, pp. 1111–1123, 2012.
  39. Q. Zhang, H. Li, D. Maringer, and E. Tsang, “MOEA/D with NBI-style Tchebycheff approach for portfolio management,” in Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC 2010), Spain, July 2010.
  40. Y. Mei, K. Tang, and X. Yao, “Decomposition-based memetic algorithm for multiobjective capacitated arc routing problem,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 2, pp. 151–165, 2011.
  41. V. A. Shim, K. C. Tan, and C. Y. Cheong, “A hybrid estimation of distribution algorithm with decomposition for solving the multiobjective multiple traveling salesman problem,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 42, no. 5, pp. 682–691, 2012.
  42. L. Ke and Q. Zhang, “Multiobjective combinatorial optimization by using decomposition and ant colony,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 1845–1859, 2013.
  43. L. Ke and Q. Zhang, “Hybridization of decomposition and local search for multiobjective optimization,” IEEE Transactions on Cybernetics, vol. 44, no. 44, pp. 1808–1819, 2014.
  44. I. Kacem, S. Hammadi, and P. Borne, “Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 32, no. 1, pp. 1–13, 2002.
  45. K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
  46. S.-K. Zhao, S.-L. Fang, and X.-J. Gu, “Machine selection and FJSP solution based on limit scheduling completion time minimization,” Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems (CIMS), vol. 20, no. 4, pp. 854–865, 2014.
  47. F. Pezzella, G. Morganti, and G. Ciaschetti, “A genetic algorithm for the flexible job-shop scheduling problem,” Computers & Operations Research, vol. 35, no. 10, pp. 3202–3212, 2008.
  48. K. Z. Gao, P. N. Suganthan, Q. K. Pan, T. J. Chua, T. X. Cai, and C. S. Chong, “Pareto-based grouping discrete harmony search algorithm for multi-objective flexible job shop scheduling,” Information Sciences, vol. 289, pp. 76–90, 2014.
  49. P. Brandimarte, “Routing and scheduling in a flexible job shop by tabu search,” Annals of Operations Research, vol. 41, no. 3, pp. 157–183, 1993.
  50. E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. Da Fonseca, “Performance assessment of multiobjective optimizers: an analysis and review,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 117–132, 2003.
  51. E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
  52. J. Demsar, “Statistical comparisons of classifiers over multiple data sets,” Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.
  53. S. Dauzère-Pérès and J. Paulli, “An integrated approach for modeling and solving the general multiprocessor job-shop scheduling problem using tabu search,” Annals of Operations Research, vol. 70, pp. 281–306, 1997.
  54. J. Bader and E. Zitzler, “HypE: an algorithm for fast hypervolume-based many-objective optimization,” Evolutionary Computation, vol. 19, no. 1, pp. 45–76, 2011.