Mathematical Problems in Engineering
Volume 2017 (2017), Article ID 6439631, 20 pages
https://doi.org/10.1155/2017/6439631
Research Article

Latest Stored Information Based Adaptive Selection Strategy for Multiobjective Evolutionary Algorithm

Air Force Engineering University, Xi’an, China

Correspondence should be addressed to Jiale Gao; gaojiale_kgd@163.com

Received 4 July 2017; Revised 18 October 2017; Accepted 15 November 2017; Published 17 December 2017

Academic Editor: Salvatore Alfonzetti

Copyright © 2017 Jiale Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Adaptive operator selection (AOS) and adaptive parameter control are widely used to enhance the search power of many multiobjective evolutionary algorithms. This paper proposes a novel bandit-based adaptive selection strategy for the multiobjective evolutionary algorithm based on decomposition (MOEA/D), named latest stored information based adaptive selection (LSIAS). An improved upper confidence bound (UCB) method is adopted in the strategy, in which an operator usage rate and the abandonment of extreme fitness improvements are introduced to improve the performance of UCB. The strategy uses a sliding window to store recent valuable information about operators, such as factors, probabilities, and efficiency. Four commonly used DE operators are chosen for the AOS, and two kinds of operator assist information are selected to improve the operators' search power. The operator information is updated with the help of LSIAS, and the resulting algorithmic combination is called MOEA/D-LSIAS. Compared to some well-known MOEA/D variants, MOEA/D-LSIAS demonstrates superior robustness and fast convergence on various multiobjective optimization problems. The comparative experiments also demonstrate the improved search power of operators with different assist information on different problems.

1. Introduction

Multiobjective optimization is a common problem that scientists and engineers face, which concerns optimizing problems with multiple and often conflicting objectives. In principle, there is no single solution to a multiobjective optimization problem (MOP), but a set of Pareto-optimal solutions. This paper considers the following continuous MOP:

minimize F(x) = (f_1(x), f_2(x), ..., f_m(x))^T, subject to x ∈ Ω, (1)

where x = (x_1, x_2, ..., x_n) is an n-dimensional decision variable vector, Ω ⊆ R^n is the decision space, and F: Ω → R^m consists of m real-valued continuous objective functions [1, 2].

Over the past decades, a number of multiobjective evolutionary algorithms (MOEAs) have been proposed. The first MOEA, named the vector evaluated genetic algorithm, has been used for MOPs since the 1980s [3], and after that more and more attention has been paid to MOEAs. The first generation of intelligent algorithms for MOPs, represented by NSGA [4], NPGA [5], and MOGA [6], was presented in the 1990s. In the 2000s, the second generation appeared, and the renowned ones include NSGA-II [7], SPEA-II [8], MOEA/D [9], and some biologically inspired algorithms, such as MOPSO [2], MOIA [10], and MOACO [11]. These algorithms are usually classified into three categories: Pareto-based methods [4–8], indicator-based methods [12], and decomposition-based methods [9]. The first evaluates individuals based on nondominated ranking. The second integrates convergence and diversity into a single indicator to guide evolution. The last decomposes an MOP into a set of single-objective subproblems and evaluates solutions with regard to reference vectors.

A central difficulty for most MOEAs is how to improve search efficiency, that is, how to design operators with a higher probability of searching the high-dimensional space effectively. There are two improvement routes: enhancing an operator with adaptive parameter control and using multiple operators with an adaptive operator selection (AOS). The simulated binary crossover (SBX) [13] and the differential evolution operator (DE) [14] are widely adopted, and they are usually combined with the binomial crossover or the polynomial mutation, as these combinations can bring powerful search ability [15, 16]. There are still some limitations for composite operators [17] and variants [18] when searching complex problems. Hence, more and more adaptive control strategies are considered indispensable. In [19], the best individuals are selected as donor vectors for the DE, and the parameters of every mutation vary within the range of the statistical data of each generation. Zhu et al. [17] design a novel recombination operator possessing the advantages of both the DE and the SBX; its adaptive parameter control strategy allocates probabilities to SBX and DE according to the search period. Zhao et al. [20] suggest that different neighborhood sizes have an unavoidable influence on the search power of operators under the MOEA/D framework, and their experiments imply that adaptive selection of neighborhood sizes works very well.

The major intention of adaptive parameter control is to solve an essential problem known as the exploration versus exploitation (EvE) dilemma. Exploitation means searching the local space deeply, that is, making full use of the current operator with its current parameters. Exploration, in contrast, means searching unfamiliar areas, which is achieved by other operators or by configuring the current operator with different parameters. In short, the EvE dilemma can be considered as looking for a tradeoff of search power between unfamiliar and familiar areas.

The EvE dilemma is of great significance for the existence of AOS methods. It has been intensively studied in the game theory community for dealing with the multiarmed bandit (MAB) problem, which was first proposed in 1952 [21]. An interesting strategy, the upper confidence bound (UCB) selection strategy, has been used to solve the EvE dilemma since 1994 [22]. It possesses distinctive advantages among many AOSs, and a host of improved UCB versions has appeared since then. The application of the UCB strategy to the MAB problem has been widely recognized, and in the following UCB is referred to as the MAB algorithm for convenience. DMAB [23] combines the MAB problem with two statistical tests and suggests that operator selection is sensitive to any change of the reward distribution. SlMAB [24] uses a sliding time window to store recent rewards and applied operators with a first-in-first-out (FIFO) mechanism. Compared with DMAB, SlMAB requires fewer assessments and parameters. Furthermore, two rank-based credit assignment methods, the Area Under Curve and the Sum of Ranks, are presented in [25]. Inspired by SlMAB and rank-based credit assignment, Li et al. [16] suggest that a decay factor is useful for credit assignment and can increase the selection probability of the best operator. Two modified MAB methods, called UCB-Tuned and UCB-V, use the reward variance as a parameter for a better EvE tradeoff [26].

Whether enhancing an operator with adaptive parameter control or using multiple operators with an AOS, the statistical data about the operators is undoubtedly meaningful. Enlightened by the sliding time window, we present a novel adaptive method that stores information about the operators, called the latest stored information based adaptive selection (LSIAS) strategy. The information about operators includes operator names, operator efficacies, and parameters of operators such as neighborhood sizes, scaling factors, and others (the parameters of operators will be regarded as assist information on operator for convenience). In this paper, two kinds of assist information are taken into account, and they are used within MOEA/D with dynamical resource allocation (MOEA/D-DRA) [27], which won the championship of the CEC 2009 MOEA contest. The main reason for choosing a decomposition-based algorithm is that each decomposed subproblem is a single-objective problem, which readily gives an exact value to measure the performance of operators each time. To validate the effectiveness and robustness of LSIAS, 22 well-known benchmark problems, including the ZDT problems [28], DTLZ problems [29], and UF problems [30], are adopted. Comparative experiments are conducted between MOEA/D-LSIAS and several versions of MOEA/D, namely, MOEA/D-DE [31], MOEA/D-DRA [27], MOEA/D-FRRMAB [16], MOEA/D-STM [32], MOEA/D-UCB-T [21], and MOEA/D-AGR [33].

The rest of this paper is organized as follows. The background and some works regarding the AOS and adaptive parameter control are described in Section 2. The detailed description of the LSIAS strategy is given in Section 3, and its use with MOEA/D-DRA is presented in Section 4. Section 5 analyzes the results of the comparison experiments. Finally, conclusions are summarized, and further work along the direction of adaptive selection is discussed.

2. Related Background

2.1. Tchebycheff Approach in MOEA/D

MOEA/D provides a method which decomposes an MOP into a series of single-objective problems, which makes it suitable for evaluating the performance of operators. There are three common decomposition methods: the weighted sum approach, the Tchebycheff approach, and the boundary intersection method, all described in [9]. This paper employs the Tchebycheff approach, which is of the form

minimize g^te(x | λ, z*) = max_{1 ≤ i ≤ m} { λ_i |f_i(x) − z_i*| }, (2)

where z* = (z_1*, ..., z_m*) is the reference point and z_i* = min{ f_i(x) | x ∈ Ω } for each i = 1, ..., m. For each Pareto-optimal point x* there exists a weight vector λ such that x* is the optimal solution of (2), and each optimal solution of (2) is a Pareto-optimal solution of (1). Therefore, one can obtain different Pareto-optimal solutions by altering the weight vector.
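As a concrete illustration, the Tchebycheff aggregation can be evaluated in a few lines. This is a minimal sketch; the function and variable names are ours, not the paper's.

```python
import numpy as np

def tchebycheff(f, weight, z_star):
    """Tchebycheff aggregation: max_i weight_i * |f_i(x) - z_i*|."""
    return float(np.max(weight * np.abs(f - z_star)))

# Toy values: two objectives, equal weights, ideal point at the origin.
f = np.array([0.6, 0.4])
weight = np.array([0.5, 0.5])
z_star = np.array([0.0, 0.0])
g = tchebycheff(f, weight, z_star)  # max(0.3, 0.2) = 0.3
```

A solution is scored against a subproblem simply by comparing these scalar values, which is what makes decomposition convenient for measuring operator performance.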

2.2. Basic Differential Evolution Operator

Differential evolution (DE) is a parallel direct search method. There are many different mutation strategies for DE. Here, several frequently used DE operators [15, 34–36] are given as follows:
(i) DE/rand/1: v_i = x_{r1} + F · (x_{r2} − x_{r3});
(ii) DE/rand/2: v_i = x_{r1} + F · (x_{r2} − x_{r3}) + F · (x_{r4} − x_{r5});
(iii) DE/target-to-rand/1: v_i = x_i + K · (x_{r1} − x_i) + F · (x_{r2} − x_{r3});
(iv) DE/target-to-rand/2: v_i = x_i + K · (x_{r1} − x_i) + F · (x_{r2} − x_{r3}) + F · (x_{r4} − x_{r5});
(v) DE/best/1: v_i = x_{best} + F · (x_{r1} − x_{r2});
(vi) DE/best/2: v_i = x_{best} + F · (x_{r1} − x_{r2}) + F · (x_{r3} − x_{r4});
(vii) DE/target-to-best/1: v_i = x_i + F · (x_{best} − x_i) + F · (x_{r1} − x_{r2});

where x_i is the target vector, v_i is the mutant vector, and both of them belong to the ith subproblem. x_{best} is one of the best individual vectors in the population. The scale factors F and K are used to control the influence of the difference vectors and lie within the range (0, 1]. The donor vectors x_{r1}, x_{r2}, x_{r3}, x_{r4}, and x_{r5} are distinct individuals.
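The first few mutation strategies can be sketched directly in NumPy. The population and parameter values below are illustrative only.

```python
import numpy as np

# A toy population of five 2-dimensional individuals (one per row).
pop = np.array([[0.1, 0.2],
                [0.3, 0.4],
                [0.5, 0.6],
                [0.7, 0.8],
                [0.9, 1.0]])

def de_rand_1(x, F, r1, r2, r3):
    # DE/rand/1: v = x_r1 + F * (x_r2 - x_r3)
    return x[r1] + F * (x[r2] - x[r3])

def de_target_to_rand_1(x, i, K, F, r1, r2, r3):
    # DE/target-to-rand/1: v = x_i + K * (x_r1 - x_i) + F * (x_r2 - x_r3)
    return x[i] + K * (x[r1] - x[i]) + F * (x[r2] - x[r3])

v1 = de_rand_1(pop, 0.5, 0, 1, 2)
v2 = de_target_to_rand_1(pop, 0, 0.5, 0.5, 1, 2, 3)
```

In a real run the indices r1, r2, r3 would be drawn at random, mutually distinct and different from i.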

After the differential evolution, the crossover operation and the polynomial mutation are usually applied to the mutant vector v_i. The crossover operation decides which dimensions of the new solution are taken from v_i to form the trial vector u_i. Basically, DE works with the binomial crossover [14], and the trial vector u_i = (u_i^1, ..., u_i^n) is formed as follows:

u_i^j = v_i^j, if r ≤ CR or j = j_rand; u_i^j = x_i^j, otherwise, (3)

where r is a random number in the range [0, 1], CR is the crossover rate and has the same range as r, j = 1, ..., n, and j_rand is a random integer within the range [1, n].

The polynomial mutation applies a further perturbation to u_i to generate a new solution y_i = (y_i^1, ..., y_i^n). The polynomial mutation is defined as follows:

y_i^j = u_i^j + δ_j · (b_j − a_j), with probability p_m; y_i^j = u_i^j, otherwise, (4)

where a_j and b_j are the lower and upper bounds of the jth variable, respectively, and p_m is the mutation probability. δ_j is a mutation factor obtained by

δ_j = (2r)^{1/(η+1)} − 1, if r < 0.5; δ_j = 1 − (2(1 − r))^{1/(η+1)}, otherwise, (5)

where r is a random number within the range [0, 1] and η is the distribution index, usually set to 20.
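The crossover and mutation steps can be sketched as follows (an illustrative implementation; the RNG handling and function names are our own).

```python
import numpy as np

def binomial_crossover(x, v, CR, rng):
    """Build the trial vector: take dimension j from the mutant v when
    rand <= CR or j is the randomly chosen index, else keep it from x."""
    n = len(x)
    u = x.copy()
    j_rand = rng.integers(n)          # guarantees at least one gene from v
    mask = rng.random(n) <= CR
    mask[j_rand] = True
    u[mask] = v[mask]
    return u

def polynomial_mutation(u, a, b, pm, eta=20.0, rng=None):
    """Polynomial mutation applied per dimension with probability pm;
    a and b are the lower and upper bounds, eta the distribution index."""
    rng = np.random.default_rng() if rng is None else rng
    y = u.copy()
    for j in range(len(u)):
        if rng.random() < pm:
            r = rng.random()
            if r < 0.5:
                delta = (2.0 * r) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - r)) ** (1.0 / (eta + 1.0))
            y[j] = min(max(u[j] + delta * (b[j] - a[j]), a[j]), b[j])
    return y

rng = np.random.default_rng(0)
x, v = np.zeros(4), np.ones(4)
u = binomial_crossover(x, v, CR=0.5, rng=rng)
y = polynomial_mutation(u, np.zeros(4), np.ones(4), pm=1.0, rng=rng)
```

With pm = 1.0 every gene is perturbed, which makes the bound clipping easy to check.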

The description above is a common DE mutation process. It has been validated that these different DE operators enjoy different search powers. "DE/rand/1" and "DE/rand/2" show strong exploration performance. "DE/target-to-rand/1," "DE/best/1," and "DE/best/2" show strong exploitation performance and are useful for unimodal problems. Moreover, "DE/target-to-rand/1" is more suitable for rotated problems than the other DE operators [19]. Accordingly, a combination of multiple DE operators can offer strong search power for solving most MOPs.

2.3. Adaptive Operator Selection

Every operator possesses different search power. Although self-adaptive parameter adjustment improves an operator's search power, it is limited by the operator's best attainable performance. The AOS offers intense search power by choosing different operators when facing different situations.

There are mainly two kinds of AOS methods: probability-based methods, such as probability matching (PM) [37] and adaptive pursuit (AP) [38], and bandit-based methods [39]. All AOS methods share a similar two-step process: credit assignment based on the applied operator, and selection based on the accumulated credit. Nevertheless, the details of selection and credit accumulation differ.

2.3.1. Credit Assignment

Two credit assignment methods are often used. One is the dynamic evaluation of statistical information about operators, and the other is the evaluation of search power, which uses various complex statistics to detect outlier production. The former takes recent assist information on operator into account: some recent assist information is employed as rewards which decide the credit assignment of operators [24], and a rank factor is used to increase the usage frequency of better operators. The latter involves a complex calculation that considers two measures, fitness and diversity. In [40], the evaluation criterion depends on the appearance probability of outlier solutions. This criterion does not regard fitness as the unique measure, as the authors argue that infrequent but powerful operators are as significant as frequent but less powerful ones. A density estimator is adopted as the evaluation criterion in [41, 42], and a statistical method is added in [42]. This method calculates the normalized relative fitness improvements from successful operators and then regards the mean value of the improvements brought by an operator as its credit. In [26], four different credit assignment methods are adopted: Average Absolute Reward, Extreme Absolute Reward, Average Normalized Reward, and Extreme Normalized Reward. All the rewards of each method are evaluated, and the method with the maximum probability is chosen as the credit assignment method at the current generation.

2.3.2. Operator Selection

Assume that there are K different operators, and let p_i(t) and q_i(t) be the selection probability and the reward estimate of the ith operator at time t, respectively.

Probability Matching. The reward estimate and the selection probability are updated as

q_i(t + 1) = (1 − α) · q_i(t) + α · r_i(t), (6)
p_i(t + 1) = p_min + (1 − K · p_min) · q_i(t + 1) / Σ_{j=1}^{K} q_j(t + 1), (7)

where r_i(t) is the reward of the selected and successfully applied operator and t is the time point. α ∈ (0, 1] is a decay factor which alleviates the influence of the accumulated reward of previously used operators. In this method, the worst probability is p_min and the best probability is p_max = p_min + (1 − K · p_min). Thus each operator has a nonzero probability of being chosen during the whole search process. As every operator performs differently at different phases, nonzero-probability operator selection is very suitable for AOS, and it manifests superior robustness.
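A hedged sketch of one probability-matching step, using a decay-averaged reward estimate and a minimum-probability floor as described above (the parameter values are illustrative):

```python
def pm_step(q, op, reward, alpha=0.3, p_min=0.05):
    """Update the applied operator's reward estimate, then recompute
    all selection probabilities with a floor of p_min per operator."""
    K = len(q)
    q = list(q)
    q[op] = (1.0 - alpha) * q[op] + alpha * reward
    total = sum(q)
    if total > 0:
        p = [p_min + (1.0 - K * p_min) * qi / total for qi in q]
    else:
        p = [1.0 / K] * K   # no information yet: uniform selection
    return q, p

q, p = pm_step([0.0, 0.0, 0.0, 0.0], op=0, reward=1.0, alpha=0.5)
# p[0] is now the largest probability, but every p[i] stays >= p_min
```

The floor p_min is what keeps every operator selectable throughout the run.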

Adaptive Pursuit. The selection probabilities follow a winner-take-all scheme:

p_{i*}(t + 1) = p_{i*}(t) + β · (p_max − p_{i*}(t)), i* = argmax_i q_i(t + 1),
p_i(t + 1) = p_i(t) + β · (p_min − p_i(t)), i ≠ i*, (8)

where q_i is the same as in (6) and β is a learning rate. When the best operator is successfully applied, it gets a relatively better reward, its accumulation is enlarged, and that is why this selection strategy always pursues the best operator, denoted i*.

Multiarmed Bandit. The next operator is the one maximizing the upper confidence bound:

op(t) = argmax_i { q_i(t) + C · sqrt( 2 · ln Σ_{j=1}^{K} n_j(t) / n_i(t) ) }, (9)

where q_i(t) is the same as in (6), n_i(t) is the number of successful applications of the ith operator at time point t, and C is a scaling factor controlling the tradeoff between the two kinds of search power. In the MAB algorithm, each operator plays the role of an arm.
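The bandit rule in (9) is a one-liner per arm. Below is a minimal sketch; following common practice, an arm that has never been applied is tried first (this tie-breaking is our assumption, not spelled out in the text).

```python
import math

def ucb_select(q, n, C=1.0):
    """Pick the arm maximising q_i + C * sqrt(2 * ln(total) / n_i)."""
    total = sum(n)
    best, best_val = 0, float("-inf")
    for i in range(len(q)):
        if n[i] == 0:           # unplayed arms get priority
            return i
        val = q[i] + C * math.sqrt(2.0 * math.log(total) / n[i])
        if val > best_val:
            best, best_val = i, val
    return best

ucb_select([0.5, 0.5], [100, 1])   # the rarely used arm wins the bonus
```

The square-root term shrinks as an arm is used more, which is exactly how the rule trades exploitation (q_i) against exploration (usage counts).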

Each operator gets a different credit after credit assignment, which is the key to operator selection. As (9) reveals, the selection depends on two factors: one is the credit value of the operators (q_i) and the other is the usage number of the operators (the part behind parameter C). The parameter C plays a crucial role in deciding which factor dominates. SlMAB [24] uses a sliding window with a FIFO mechanism to store the latest information about operators, which truly reflects operator performance. Owing to the timeliness inadequacy of this accumulation, a decay factor is suggested in [16]. In [21], two modified MAB methods are proposed, MOEA/D-UCB-Tuned and MOEA/D-UCB-V. The two methods use the rewards' variance to modify (9), and the experimental results show that UCB-Tuned performs better than UCB-V on most test problems.

Besides the two kinds of methods mentioned above, there are some other adaptive operator selection methods, such as gradient-based methods and methods based on comparing multiple trial vectors. Schütze et al. [43] propose a local search mechanism, Hypervolume Directed Search (HVDS). In HVDS, gradient information is used to select among search behaviors, namely, greedy search, descent direction, and search along the Pareto front. As these behaviors are based on gradients, such methods cannot be used if the objectives are not differentiable. Lara et al. [2] suggest a novel iterative search procedure, Hill Climber with Sidestep (HCS), which is capable of moving both toward and along the Pareto set depending on the distance of the current iterate from this set. The search direction is selected on the basis of the dominance relation between several trial vectors and the old individual.

3. The Proposed Algorithm

In this section, we present an improved bandit-based method for MOPs, named the latest stored information based adaptive selection strategy. This method pays more attention to the dynamic nature of AOS. It is mainly composed of two parts: credit assignment and operator selection.

3.1. Credit Assignment

Credit assignment involves two main tasks: one is to calculate the credit value of applied operators; the other is to assign the credit fairly.

For the first task, the fitness change is adopted as the reward of a successfully applied operator, which is regarded as the Fitness Improvement Rate (FIR). During different search processes, the convergence levels of individuals are highly different, so normalization is used for the FIR as follows:

FIR_i = (pf_i − cf_i) / pf_i, (10)

where pf_i is the fitness value of the last generation's solution on the ith subproblem and cf_i is the fitness value of the current solution on the ith subproblem.

A sliding window of length W is used to store operators, their assist information, and FIRs with the FIFO mechanism. It always stores the latest W configurations of operators and their related information.

Suppose that the number of operators is K, the number of types of assist information on operator is T, and the number of values of the tth type of assist information is M_t, t = 1, ..., T. The structure of the sliding window is shown in Figure 1. As the figure reveals, the first layer stores operator names, the last stores FIRs, and the middle layers are for parameters. It is worth noting that the locations of the different types of parameters in the sliding window are ranked according to their significance, and this order is also the computation order of the assist information on operator. Since the best suitable operator and the best suitable configuration of operator and assist information are considered simultaneously, the credit value of the operator is assigned first, and the configuration of operator and assist information is considered later. Configurations among different types of assist information are not taken into consideration. The index of FIR is not described in Figure 1, mainly because it is associated with the credit assignment of different operators and different types of assist information. The details of the index of FIR are illustrated in Figure 2.
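The FIFO behavior of the sliding window is easy to prototype with a bounded deque. The record layout below (operator name, assist information, FIR) is illustrative; the paper stores the layers separately.

```python
from collections import deque

class SlidingWindow:
    """FIFO store of the latest W (operator, assist_info, FIR) records."""
    def __init__(self, W):
        self.records = deque(maxlen=W)   # oldest record drops out automatically

    def add(self, op, assist, fir):
        self.records.append((op, assist, fir))

    def rewards_of(self, op):
        """All FIRs in the window produced by a given operator."""
        return [fir for (o, _, fir) in self.records if o == op]

w = SlidingWindow(3)
for t in range(4):
    w.add("op%d" % (t % 2), {"F": 0.5}, float(t))
# The window holds only the last 3 records; the t = 0 entry was evicted.
```

Keeping only the latest W records is what makes the credit estimates track the current search stage rather than the whole history.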

Figure 1: An illustration of sliding window in the way of FIFO.
Figure 2: An illustration of the index of FIR in credit value computation.

As rewards guide operator selection methods, some unexpected extreme fitness improvement values brought by operators can appear. To guard against this, we discard the 5% best rewards and the 5% worst rewards, and the set of remaining rewards is denoted as R:

FRR_op_i = Σ_{r_j ∈ R_i} r_j / |R_i|,
FRR_para_{i,t,k} = Σ_{r_j ∈ R_{i,t,k}} r_j / |R_{i,t,k}|, (11)

where FRR_op_i denotes the credit value of operator i and FRR_para_{i,t,k} denotes the credit value of the configuration of operator i with the kth parameter of parameter type t; r_j ∈ R_{i,t,k} denotes the jth reward obtained by operator i with the kth parameter of parameter type t, r_j ∈ R_i denotes the jth reward of operator i, and |·| denotes the cardinality of a set. If the total usage number of the operator or the assist information is small, however, this trimming is not very useful, so it is applied only when the total usage number is no less than 20.
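The trimmed credit computation can be sketched as follows; the 5% cut and the usage threshold of 20 come from the text, while the function shape is ours.

```python
def trimmed_credit(rewards, trim=0.05, min_count=20):
    """Mean reward after discarding the top and bottom `trim` fraction.
    Trimming is applied only once at least `min_count` rewards exist."""
    if not rewards:
        return 0.0
    r = sorted(rewards)
    if len(r) >= min_count:
        k = int(len(r) * trim)
        if k > 0:
            r = r[k:len(r) - k]     # drop the k worst and k best rewards
    return sum(r) / len(r)

trimmed_credit(list(range(1, 21)))   # 20 rewards: 1 and 20 are discarded
```

Discarding the extremes keeps a single lucky (or disastrous) application from dominating an operator's credit.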

3.2. Operator Selection

Based on the credit assignment described above, each configuration of operator and parameters gets its FRR, and then the best configuration is selected. Referring to MAB algorithms, the selection of operators and parameters is defined as

op = argmax_i { FRR_op_i + C · sqrt( 2 · ln Σ_{j=1}^{K} n_j / n_i ) },
para_t = argmax_k { FRR_para_{t,k} + C · sqrt( 2 · ln Σ_{j=1}^{K} n_{t,j} / n_{t,k} ) }, (12)

where n_i is the number of successful applications of the ith operator, n_{t,k} is the number of successful applications of the kth parameter of parameter type t, and K is the number of operators or of parameters of this type.

As (12) reveals, the selection depends on two factors: one is the credit value of the operators (FRR_op) and the other is the usage number of the operators (the part behind parameter C). Since the parameter C plays a crucial role in deciding which factor dominates, we redefine the parameter C as the difference between the best and the worst usage rates. The advantage of such a parameter C is that the selection of operators and parameters becomes sensitive. If the difference is large, the usage number is the major factor, and the configuration with the maximum usage number has more chance to give full play to its search power. If the difference is small, the credit value is more important, and the configuration with powerful search ability at this moment is more likely to be selected. The parameter C is defined as

C = (n_max − n_min) / W, (13)

where n_max and n_min are the maximum and minimum usage numbers, respectively, and W is the length of the sliding window, which can also be regarded as the total usage number.
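The selection rule of (12) with the self-adjusting C of (13) can be sketched as below (trying unused configurations first is an assumption on our part):

```python
import math

def adaptive_C(usage, W):
    """C = (n_max - n_min) / W, as in (13)."""
    return (max(usage) - min(usage)) / W

def lsias_select(frr, usage, W):
    """Bandit-style selection of (12) with the self-adjusting C."""
    C = adaptive_C(usage, W)
    total = sum(usage)
    best, best_val = 0, float("-inf")
    for i in range(len(frr)):
        if usage[i] == 0:       # try unused configurations first
            return i
        val = frr[i] + C * math.sqrt(2.0 * math.log(total) / usage[i])
        if val > best_val:
            best, best_val = i, val
    return best

# With equal usage numbers C = 0, so selection is purely greedy on credit.
lsias_select([0.2, 0.7, 0.1], [10, 10, 10], W=30)
```

When usage is balanced the rule exploits the best credit; when it is skewed, the larger C gives the usage term more weight, as the text describes.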

The pseudocode of operator and parameter selection is given in Algorithm 1.

Algorithm 1: The procedure for the adaptive selection.

4. Integration of LSIAS with MOEA/D-DRA

MOEA/D provides a decomposition method which decomposes an MOP into a series of single-objective problems. It is these single-objective problems that give an exact value to measure the performance of every operator. Therefore, the main reason we choose MOEA/D is that the metrics for evaluating operators are easy to obtain.

Since MOEA/D was presented in 2007, a good deal of research has been devoted to improving its performance and widening its range of application, so any improvement on MOEA/D could be of practical interest. Here we choose a famous improved version of MOEA/D as the framework, MOEA/D-DRA, as it is the champion of the CEC 2009 MOEA contest. Consequently, we investigate how to enhance it with LSIAS.

4.1. MOEA/D-DRA

In this paper, we use the Tchebycheff approach with objective normalization instead of the classic Tchebycheff approach. It is known that objective normalization performs better than the classic method, especially when the objective space becomes more complex.

MOEA/D minimizes all these objective functions simultaneously in a single run. Neighborhood relations among these single-objective subproblems are defined based on the distances among their weight vectors, and each subproblem is optimized using information mainly from its neighboring subproblems. In most proposed versions of MOEA/D, all subproblems receive about the same amount of computational effort. These subproblems, however, may have different computational difficulties, so it is reasonable to assign different amounts of computational effort to different subproblems. In MOEA/D-DRA, N/5 subproblems are selected to be optimized in a single run based on their utility. We define and compute a utility π^i for each subproblem i, and computational effort is distributed to the subproblems based on these utilities. Δ^i is the relative improvement of subproblem i, and its value is a positive number. If the solution of subproblem i is better than that of the last generation, π^i is set to 1, and this subproblem is selected for optimization with a large probability in the next generation; otherwise, π^i becomes less than 1.

Suppose that the MOP is decomposed into N scalar subproblems, and each subproblem has a weight vector λ^j from a uniformly spread set {λ^1, ..., λ^N}. In λ^j = (λ_1^j, ..., λ_m^j), each λ_i^j satisfies λ_i^j ≥ 0 and Σ_{i=1}^{m} λ_i^j = 1. The objective function of the jth subproblem can be stated as (2). When searching, MOEA/D-DRA maintains
(i) a population of N points x^1, ..., x^N ∈ Ω, where x^i is the current solution to the ith subproblem;
(ii) FV^1, ..., FV^N, where FV^i is the F-value of x^i; that is, FV^i = F(x^i) for each i = 1, ..., N;
(iii) π^i: the utility of the ith subproblem, which measures the improvement of the individual between the previous and the current generation. It is defined as

π^i = 1, if Δ^i > 0.001; π^i = (0.95 + 0.05 · Δ^i / 0.001) · π^i, otherwise, (14)

where Δ^i is the relative improvement of subproblem i.

An important advantage of MOEA/D and its improved versions is that a better solution is helpful both for the ith subproblem and for the subproblems close to the ith. There is a one-to-one relationship between subproblems and weight vectors. Thus, each solution converges along its vector from beginning to end, and the excellent ones bring valuable information that helps their neighboring individuals evolve. Each subproblem has T neighbors, selected based on the Euclidean distance between weight vectors. For generation t, the process is as follows:
(1) Select the T neighbors or the whole population as P;
(2) Randomly select several solutions from P;
(3) Generate a new solution by applying the genetic operators;
(4) Compare the new solution with old ones selected from P and replace them if the new one is better.
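The utility update that drives the dynamic resource allocation can be sketched as follows; the threshold of 0.001 and the 0.95/0.05 decay are the values commonly used in MOEA/D-DRA, stated here as an assumption.

```python
def update_utility(pi, delta, threshold=0.001):
    """Keep pi[i] = 1 while subproblem i still improves by more than the
    threshold; otherwise let its utility decay toward zero."""
    for i in range(len(pi)):
        if delta[i] > threshold:
            pi[i] = 1.0
        else:
            pi[i] = (0.95 + 0.05 * delta[i] / threshold) * pi[i]
    return pi

pi = update_utility([1.0, 1.0], [0.01, 0.0])
```

Subproblems whose utility has decayed are then sampled less often when the N/5 subproblems of a generation are chosen.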

4.2. MOEA/D-DRA with LSIAS

To combine the LSIAS with MOEA/D-DRA, we set the content of the latest stored information and define the reward calculation. In this paper, we choose four kinds of operator-related information as the content of the latest stored information: an operator pool, the scaling factor F, the neighborhood size, and the FIRs. They are stored in the sliding window as shown in Figure 3. The operator pool consists of four DE operators, expressed as op_1, ..., op_4:
(i) op_1: DE/rand/1;
(ii) op_2: DE/rand/2;
(iii) op_3: DE/target-to-rand/1;
(iv) op_4: DE/target-to-rand/2.

Figure 3: An illustration of latest stored information in the sliding window.

In MOEA/D-DRA, each vector employs its T nearest vectors as its neighborhood. In this paper, suppose that all vectors are divided into several neighborhoods of size T. For each subproblem, the three nearest neighborhoods are chosen as its neighborhood pool to provide donor vectors for mutation, but only one neighborhood can be used for each mutation.

The scaling factor F is used for the adaptive parameter control of the DE operators. Because of the different search powers of the four operators, four independent scaling factors are set. Every F_i follows a Cauchy distribution with location parameter F_{mc,i} and scale parameter 0.1:

F_i = C(F_{mc,i}, 0.1). (15)

If a new F_i falls out of the range (0, 1], it is regenerated. The location parameter is updated as

F_{mc,i} = (1 − w) · F_{mc,i} + w · mean_L(S_{F_i}), (16)

where w is a weight factor. It has been validated that a little random perturbation of w can bring better effectiveness to the optimization process [17]; as a result, w is randomly generated within a prescribed range. F_{mc,i} is initialized to 0.5. S_{F_i} is the set of the scaling factors of the ith operator stored in the sliding window. mean_L(S_{F_i}) is given as follows:

mean_L(S_{F_i}) = ( Σ_{F ∈ S_{F_i}} F^k / |S_{F_i}| )^{1/k}, (17)

where |S_{F_i}| is the number of scaling factors of the ith operator in the sliding window. k is set to 1.5, which is proved to be the best in [36].
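The sampling and update of the scaling factor can be sketched as follows. We assume the power-mean form of the aggregation with k = 1.5 and a fixed weight w; both the function names and the exact mean form are our reading of the text.

```python
import numpy as np

def power_mean(s, k=1.5):
    """Aggregate successful scaling factors: (sum(F^k) / |S|)^(1/k)."""
    s = np.asarray(s, dtype=float)
    return float((np.sum(s ** k) / len(s)) ** (1.0 / k))

def sample_F(F_mc, scale=0.1, rng=None):
    """Draw F from a Cauchy distribution located at F_mc,
    regenerating until it falls in (0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    while True:
        F = F_mc + scale * rng.standard_cauchy()
        if 0.0 < F <= 1.0:
            return float(F)

def update_F_mc(F_mc, S_F, w=0.95):
    """Blend the old location parameter with the mean of the recent
    successful scaling factors stored in the sliding window."""
    if len(S_F) == 0:
        return F_mc
    return (1.0 - w) * F_mc + w * power_mean(S_F)
```

The heavy Cauchy tails occasionally propose large F values, which keeps the operator from collapsing onto a single step size.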

Assume that y_i is a new offspring for the ith subproblem and x_i is the old solution. If the new one is better than the old one, the former replaces the latter. The reward is the same as the relative improvement in [26], and it is given by

η_i = ( g^te(x_i | λ^i, z*) − g^te(y_i | λ^i, z*) ) / g^te(x_i | λ^i, z*). (18)

In MOEA/D-DRA, each new offspring is compared with solutions selected from the neighborhood; thus the total reward is the sum of the η_i values brought by the offspring generated with the current operator and the current parameters.

The pseudocode of MOEA/D-LSIAS is demonstrated in Algorithm 2.

Algorithm 2: MOEA/D-LSIAS.

5. Experimental Studies

In this section, several experiments are conducted to analyze the performance of our algorithm. In Section 5.1, the 22 well-known benchmark problems used in the experiments are briefly introduced. Two performance measures and all parameter settings of the experiments are described in Sections 5.2 and 5.3, respectively. Furthermore, comparative experiments between MOEA/D-LSIAS and other state-of-the-art MOEA/D variants are given and analyzed in Section 5.4. Additionally, experiments on parameters are conducted to analyze their effectiveness with different values in Sections 5.5 and 5.6.

5.1. Benchmark Functions

Three types of benchmark functions are adopted here to manifest the effectiveness and robustness of MOEA/D-LSIAS. There are 22 well-known benchmark problems in total, including the ZDT problems [28], DTLZ problems [29], and UF problems [30]. The solution set of these benchmark problems (the Pareto set) is not a single point but forms an (m − 1)-dimensional object, where m is the number of objectives involved in the MOP.

The most widely used ZDT problems are biobjective test problems, including ZDT1–4 and ZDT6. These problems are relatively easy to solve as they lack features such as variable linkage and multimodality. Therefore, the UF and DTLZ problems, whose various features make up for the deficiency of the ZDT problems, are also covered in the experiments. It is worth noting that UF1–7 are biobjective test problems, while DTLZ1–7 and UF8–10 are three-objective test problems. All three types of benchmark functions are widely used in the evaluation of MOEAs.

5.2. Performance Measure

There are many performance measures for comparisons among algorithms, such as Inverted Generational Distance (IGD) [44], Hypervolume (HV) [45], Spread [46], and others [47]. Because all the compared algorithms employ either IGD or HV or both in their original literature, these two metrics are also chosen to assess the performance of the proposed algorithm and its competitors. Both metrics are able to assess convergence as well as diversity.

IGD. Let P* be a set of solutions uniformly distributed along the true Pareto-optimal front, and let P be an approximation set obtained by an MOEA. The IGD metric of P is obtained by

IGD(P*, P) = Σ_{x ∈ P*} d(x, P) / |P*|, (19)

where d(x, P) is the minimum Euclidean distance between x and its nearest solution in P, and |P*| is the number of elements in P*. The true Pareto-optimal front is known in advance and can be obtained at http://jmetal.sourceforge.net/problems.html. In this paper, we select 1,000 and 10,000 points uniformly distributed in the true Pareto-optimal front for biobjective and three-objective test problems, respectively.
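IGD is straightforward to compute with a pairwise distance matrix. A minimal NumPy sketch:

```python
import numpy as np

def igd(pf_true, approx):
    """Mean, over every x in the true front P*, of the Euclidean distance
    from x to its nearest point in the approximation set P."""
    pf_true = np.asarray(pf_true, dtype=float)
    approx = np.asarray(approx, dtype=float)
    # pairwise distances, shape (|P*|, |P|)
    d = np.linalg.norm(pf_true[:, None, :] - approx[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

pf = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
igd(pf, pf)   # a perfect approximation scores 0.0
```

Because the average runs over the true front, an approximation must be both close to and well spread along the front to score low.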

HV. Assume that z^r = (z_1^r, ..., z_m^r) is a reference point dominated by all points in the true Pareto-optimal front. The HV metric is the size of the objective space dominated by the approximation set P and bounded by z^r:

HV(P) = Leb( ∪_{x ∈ P} [f_1(x), z_1^r] × ··· × [f_m(x), z_m^r] ), (20)

where Leb(·) indicates the Lebesgue measure. Note that if a solution in P does not dominate z^r, it is discarded. In this paper, z^r is set to (2.0, 2.0) for ZDT1–ZDT4, ZDT6, and UF1–UF7, (1.0, 1.0, 1.0) for DTLZ1, and (2.0, 2.0, 2.0) for DTLZ2–DTLZ7 and UF8–UF10.
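For the biobjective case, HV reduces to summing rectangles after sorting the nondominated points by the first objective. A minimal sketch for minimization problems:

```python
def hv_2d(points, ref):
    """Hypervolume of a biobjective (minimisation) approximation set:
    sweep points sorted by f1 and accumulate the covered rectangles.
    Points that do not dominate the reference point are discarded."""
    pts = [p for p in points if p[0] < ref[0] and p[1] < ref[1]]
    if not pts:
        return 0.0
    pts.sort(key=lambda p: (p[0], p[1]))
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                 # skip dominated points
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

hv_2d([(0.5, 1.5), (1.5, 0.5)], (2.0, 2.0))   # 0.75 + 0.5 = 1.25
```

For three objectives an exact sweep is more involved; dedicated libraries are usually used there.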

5.3. Experimental Settings

In this paper, the proposed algorithm, MOEA/D-LSIAS, is compared with various improved versions of MOEA/D, including MOEA/D-DE, MOEA/D-DRA, MOEA/D-FRRMAB, MOEA/D-STM, MOEA/D-UCB-T, and MOEA/D-AGR. All the basic experimental settings for the above competitive algorithms are shown in Table 1. Three kinds of benchmark functions are adopted, and the population size N and the maximum number of function evaluations are set differently for each. N is set to 100 for all ZDT benchmark problems and to 300 for all DTLZ problems and all UF problems. The maximum number of function evaluations is set to 25,000 for ZDT problems and 300,000 for UF and DTLZ problems. The parameter settings of MOEA/D-LSIAS are as follows: the control parameters CR and C and the sliding window size W are shown in Table 1, the operator parameters are explained in Section 4, and the neighborhood size is set to 20. The mean value and the interquartile range (IQR) over 30 independent runs are reported for every test instance. Furthermore, Wilcoxon's rank sum test is employed to assess the statistical significance of all experimental results of the compared algorithms at the chosen significance level.

Table 1: The basic experimental parameter settings of the compared algorithms.
5.4. Comparisons of MOEA/D-LSIAS with State-of-the-Art MOEA/D Variants
5.4.1. Comparisons on the ZDT Test Problems

Tables 2 and 3 provide the comparative results of all the competing algorithms on the ZDT test problems, showing the mean value and the interquartile range of IGD and HV, respectively, over 30 independent runs. In Tables 2–7, the best result for each test problem with regard to each metric is highlighted in boldface.

Table 2: Comparative results of all the compared algorithms on the ZDT test problems regarding IGD.
Table 3: Comparative results of all the compared algorithms on the ZDT test problems regarding HV.
Table 4: Comparative results of all the compared algorithms on the DTLZ test problems regarding IGD.
Table 5: Comparative results of all the compared algorithms on the DTLZ test problems regarding HV.
Table 6: Comparative results of all the compared algorithms on the UF test problems regarding IGD.
Table 7: Comparative results of all the compared algorithms on the UF test problems regarding HV.

In Table 2, MOEA/D-LSIAS performs best on ZDT1 and ZDT2, while MOEA/D-UCB-T, MOEA/D-STM, and MOEA/D-DE are the best on ZDT3, ZDT4, and ZDT6, respectively. Moreover, Wilcoxon's rank sum test indicates that MOEA/D-LSIAS performs similarly to MOEA/D-ARG on ZDT2 and to MOEA/D-DE on ZDT6. The comparison results on the ZDT test problems are summarized in the last two rows: "+/−/≈" counts how often each algorithm performs better than, worse than, or similarly to MOEA/D-LSIAS regarding IGD. Every compared algorithm except MOEA/D-DRA outperforms MOEA/D-LSIAS at least once, but the "Rank Sum" row shows that MOEA/D-LSIAS ranks first overall, with MOEA/D-ARG being the best among the remaining compared algorithms.

Table 3 provides the comparative results using HV. MOEA/D-LSIAS achieves the best performance on ZDT1 and ZDT6, while MOEA/D-ARG is the best on ZDT2 and ZDT4 and MOEA/D-FRRMAB performs best only on ZDT3. Wilcoxon's rank sum test indicates that MOEA/D-LSIAS performs similarly to MOEA/D-DE on ZDT6. It is worth noting that MOEA/D-UCB-T and MOEA/D-FRRMAB perform similarly on most ZDT test problems. "+/−/≈" shows that MOEA/D-LSIAS performs better than, or similarly to, each competitor on over half of the ZDT test problems, and the "Rank Sum" row again places MOEA/D-LSIAS first. The advantages of MOEA/D-LSIAS are thus further confirmed under HV.

5.4.2. Comparisons on the DTLZ Test Problems

The DTLZ test problems are three-objective optimization problems and are considerably harder than the ZDT test problems. Tables 4 and 5 show that MOEA/D-LSIAS achieves the best overall performance on them.

Table 4 gives the comparative results of all the compared algorithms on the DTLZ test problems regarding IGD. MOEA/D-LSIAS performs best on 4 (i.e., DTLZ1, DTLZ3, DTLZ5, and DTLZ6) out of 7 DTLZ test problems. "+/−/≈" indicates that MOEA/D-LSIAS performs better than MOEA/D-DE, MOEA/D-DRA, MOEA/D-FRRMAB, MOEA/D-STM, MOEA/D-UCB-T, and MOEA/D-ARG on 6, 6, 7, 4, 7, and 5 out of 7 DTLZ test problems, respectively. Wilcoxon's rank sum test shows that MOEA/D-LSIAS performs similarly to MOEA/D-STM on DTLZ5. As observed from "Rank Sum," MOEA/D-LSIAS is clearly the best for the IGD metric on the DTLZ test problems.

In Table 5, the results on the DTLZ test problems with regard to HV are provided. MOEA/D-LSIAS obtains the best result on DTLZ1, DTLZ4, and DTLZ7; it is worse than MOEA/D-STM on DTLZ2, worse than MOEA/D-ARG on DTLZ3 and DTLZ5, and worse than MOEA/D-UCB-T and MOEA/D-FRRMAB on DTLZ6. It achieves statistically similar results to MOEA/D-DRA on DTLZ1 and to MOEA/D-STM on DTLZ5 and DTLZ7. It is reasonable to conclude that MOEA/D-LSIAS achieves superior performance on most of the DTLZ test problems according to both the IGD and HV metrics.

5.4.3. Comparisons on the UF Test Problems

The UF test problems possess more complicated Pareto-optimal sets than the ZDT and DTLZ test problems. Table 6 lists the comparative results on the UF problems with regard to IGD. MOEA/D-LSIAS performs best on 5 (i.e., UF1, UF2, UF4, UF6, and UF10) out of 10 UF test problems, while MOEA/D-ARG performs best on 3 (i.e., UF3, UF5, and UF7) and MOEA/D-STM achieves the best results on UF8 and UF9. Wilcoxon's rank sum test indicates that MOEA/D-LSIAS performs similarly to MOEA/D-STM on UF1 and to MOEA/D-DRA on UF10. The "+/−/≈" and "Rank Sum" rows show that MOEA/D-ARG and MOEA/D-STM are serious rivals to MOEA/D-LSIAS, but MOEA/D-LSIAS still performs best on the UF test problems with regard to IGD.

Table 7 shows the HV results for MOEA/D-LSIAS and the various versions of MOEA/D. MOEA/D-LSIAS again performs best on 5 (i.e., UF2–UF4, UF6, and UF7) out of 10 UF test problems. In more detail, it is worse than MOEA/D-STM on UF1, UF5, UF8, and UF9, worse than MOEA/D-ARG on UF9 and UF10, and worse than MOEA/D-FRRMAB on UF8. Moreover, Wilcoxon's rank sum test reveals that MOEA/D-LSIAS performs similarly to MOEA/D-ARG on UF4. As summarized in the "Rank Sum" row, MOEA/D-LSIAS is better than MOEA/D-DE and MOEA/D-DRA on all the UF test problems; compared with MOEA/D-FRRMAB, MOEA/D-STM, MOEA/D-UCB-T, and MOEA/D-ARG, it performs better or similarly on more than half of the UF test problems. Therefore, MOEA/D-LSIAS outperforms all the compared algorithms in terms of HV overall.

5.5. MOEA/D-LSIAS versus AOSs with Different Assist Information

To investigate the effectiveness of different types of assist information, a further experiment with diverse configurations of operator information is conducted. As described in Section 3, two types of assist information are considered: adaptive selection of neighborhood and adaptive control of the scaling factor F. The compared algorithms are identical to MOEA/D-LSIAS except for the operator information selection, and they all employ AOS with four DE operators:

(1) Method 1 (M1): AOS with no assist information;
(2) Method 2 (M2): AOS with adaptive selection of neighborhood;
(3) Method 3 (M3): AOS with adaptive control of the factor F.

Figure 4 shows the comparison results of the four algorithms on ZDT1, DTLZ1, and UF1. For the IGD metric, the methods with assist information outperform plain AOS (M1) on ZDT1, but this does not hold on DTLZ1 and UF1: M2 and M3 perform worse than M1 on UF1, while MOEA/D-LSIAS is better than M1 on all three test problems. It can be concluded that combining adaptive control of the factor F with adaptive selection of neighborhood helps to improve the performance of AOS regarding IGD.

Figure 4: Box plots for the comparison of the MOEA/D-LSIAS and the compared algorithms on ZDT1, DTLZ1, and UF1 regarding IGD and HV.

Regarding the HV metric, M2, M3, and MOEA/D-LSIAS work better than M1 on ZDT1. M1 performs better than M2 and M3 on DTLZ1 and UF1 but is clearly outperformed by MOEA/D-LSIAS. These results indicate that assist information is not unconditionally helpful for AOS; rather, a good configuration of assist information is what matters. The magnitude of the improvement depends on that configuration, and AOS aided by different assist information displays different search power on different test problems.

5.6. The Dynamics of Operator and Assist Information Selection

To further investigate the dynamic behavior, we analyze the usage of the operators, both on their own and paired with different assist information. The search process is divided into 50 phases: each phase comprises 500 function evaluations for ZDT1 and 6,000 function evaluations for DTLZ1 and UF1. The number of times each configuration is used is then counted per phase. The usage of operators and of operators with different assist information over the whole search process on ZDT1, DTLZ1, and UF1 is shown in Figures 5–7.
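The per-phase bookkeeping described above is straightforward; this sketch (with a made-up operator log) shows how a chronological record of operator applications is binned into equal-sized phases:

```python
from collections import Counter

def usage_per_phase(op_log, evals_per_phase):
    """Split a chronological log of operator applications into equal-sized
    search phases and count each operator's usage within every phase."""
    return [Counter(op_log[i:i + evals_per_phase])
            for i in range(0, len(op_log), evals_per_phase)]

# hypothetical log for ZDT1 (500 evaluations per phase): op1 dominates the
# first phase, op4 takes over in the second
log = ["op1"] * 350 + ["op4"] * 150 + ["op4"] * 480 + ["op3"] * 20
for i, counts in enumerate(usage_per_phase(log, 500), start=1):
    print(f"phase {i}: {dict(counts)}")
```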

Figure 5: Operators and assist information usage number during the whole search process on ZDT1.
Figure 6: Operators and assist information usage number during the whole search process on DTLZ1.

In Figure 5, op4 shows effective search power throughout all phases, but the impact of the other three operators cannot be ignored. op1 performs better with ne1 than with the other neighborhoods, although op1 only provides effective search power at the beginning of the search. op3 has two good partners, ne1 and ne3, and plays an important role during the second half of the search. Throughout the whole search, all the assist information works well with op4.

In Figure 6, op3 with ne3 and op4 with ne2 possess powerful search capability at the beginning of the search. During the first half of the search, op1 provides an effective search, and the four operators then provide effective search in turn. The most important partner of op1 is ne1. op2 works well with all the neighborhoods, whereas op4 only works well with ne2.

As observed from Figure 7, all four operators contribute substantially at the beginning, but op3 and op4 provide little help during the second half of the search. ne3 significantly assists different operators during different search phases, which suggests that the farthest neighborhood set is very helpful for the search. ne1, ne2, and ne3 assist op1 and op2 in turn, while op3 and op4 perform well with ne2 and ne3. op4 clearly works well in the first half of the search but cannot maintain a good performance later.

Figure 7: Operators and assist information usage number during the whole process on UF1.

6. Conclusions

In this paper, a novel AOS method called LSIAS is introduced. In LSIAS, the best and worst operator usage rates are used to improve the UCB method. To limit the influence of unexpectedly large or small fitness improvements, a credit assignment method that abandons extreme fitness improvements is introduced. LSIAS is an adaptive selection strategy that selects operators, and the assist information attached to them, adaptively. Each recently used operator, together with all of its assist information and its efficiency, is dynamically stored in a sliding window. Based on this efficiency, the best configuration of operator and assist information is chosen dynamically for each search phase.
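The mechanism can be illustrated with a stripped-down sketch: a fixed-size window of (operator, reward) pairs feeds a standard UCB score. The usage-rate terms and the rejection of extreme improvements that distinguish LSIAS are omitted here, so this is only the bandit skeleton, not the full method:

```python
import math
from collections import deque

class SlidingWindowUCB:
    """Bandit-style operator selection over a sliding window of recent
    (operator, fitness-improvement) records."""

    def __init__(self, n_ops, window=50, c=1.0):
        self.n_ops, self.c = n_ops, c
        self.window = deque(maxlen=window)   # old records fall out automatically

    def select(self):
        for op in range(self.n_ops):         # try every operator at least once
            if not any(o == op for o, _ in self.window):
                return op
        total = len(self.window)
        best_op, best_score = None, -math.inf
        for op in range(self.n_ops):
            rewards = [r for o, r in self.window if o == op]
            exploit = sum(rewards) / len(rewards)             # mean recent reward
            explore = self.c * math.sqrt(2.0 * math.log(total) / len(rewards))
            if exploit + explore > best_score:
                best_op, best_score = op, exploit + explore
        return best_op

    def update(self, op, reward):
        self.window.append((op, reward))

bandit = SlidingWindowUCB(n_ops=2, window=10)
for _ in range(5):
    bandit.update(0, 1.0)   # operator 0 keeps yielding improvements
    bandit.update(1, 0.0)   # operator 1 never does
print(bandit.select())      # 0: equal counts, so the higher mean reward wins
```

Because the window is bounded, an operator that stops producing improvements sees its stored rewards expire, which is what lets the selection adapt across search phases.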

Since decomposition-based MOEAs make it easy to evaluate the efficiency of operators together with their assist information, a variant of MOEA/D is adopted to investigate the performance of the proposed LSIAS. Four DE mutation operators and two types of assist information, namely neighborhoods and scaling factors, are used within the MOEA/D framework. Extensive experimental studies on three families of benchmark functions show that LSIAS is robust and effective and that its adaptive selection of operators and assist information can significantly improve the performance of MOEA/D.

In future work, the adaptive selection of operators with assist information will be applied to many-objective optimization problems and constrained optimization problems. Furthermore, since indicator-based MOEAs can also provide an exact estimation for operators with assist information, the performance of LSIAS within such algorithms can be studied as well.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants nos. 7170129 and 71771216).
