Computational Intelligence and Neuroscience
Volume 2018, Article ID 5865168, 26 pages
https://doi.org/10.1155/2018/5865168
Research Article

An Optimization Framework of Multiobjective Artificial Bee Colony Algorithm Based on the MOEA Framework

1School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
2Northwest Institute of Eco-Environment and Resources, Chinese Academy of Sciences, Lanzhou 730000, China
3College of Information Science and Technology, Gansu Agricultural University, Lanzhou 730070, China

Correspondence should be addressed to Jiuyuan Huo; huojy@foxmail.com

Received 11 June 2018; Revised 10 September 2018; Accepted 27 September 2018; Published 1 November 2018

Academic Editor: Daniele Bibbo

Copyright © 2018 Jiuyuan Huo and Liqun Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The artificial bee colony (ABC) algorithm has become a popular optimization metaheuristic and has been proven to perform better than many state-of-the-art algorithms on complex multiobjective optimization problems. However, the multiobjective artificial bee colony (MOABC) algorithm has not yet been integrated into the common multiobjective optimization frameworks, which provide integrated environments for understanding, reusing, implementing, and comparing multiobjective algorithms. This paper therefore presents a unified, flexible, configurable, and user-friendly MOABC algorithm framework that combines a multiobjective ABC algorithm named RMOABC with the multiobjective evolutionary algorithm (MOEA) framework. The framework is aimed at the development, experimentation, and study of metaheuristics for solving multiobjective optimization problems. It was tested on the Walking Fish Group test suite, and a many-objective water resource planning problem was used for verification and application. The experimental results show that the framework can deal with practical multiobjective optimization problems more effectively and flexibly, can provide comprehensive and reliable parameter sets, and supports reference, comparison, and analysis tasks among multiple optimization algorithms.

1. Introduction

Many optimization problems in the real world are multiobjective in nature, which means that optimal decisions must be made in the presence of trade-offs between two or more conflicting objectives. These problems, known as multiobjective optimization problems (MOPs), arise in many disciplines such as engineering, transportation, economics, medicine, and bioinformatics [1]. Most multiobjective techniques have been designed based on the theories of Pareto sorting [2] and nondominated solutions. Thus, the optimum for this kind of problem is not a single solution, as in the mono-objective case, but rather a set of solutions known as the Pareto optimal set, in which no element is superior to another across all objectives.

By using multiobjective optimization methods, better trade-offs between the conflicting objectives in these MOPs can be achieved and satisfactory optimization results obtained. However, as objectives and constraints grow more complex and nonlinear, finding a set of good-quality nondominated solutions becomes more challenging, and the design of efficient and stable multiobjective optimization algorithms remains a key research direction. Over the last few decades, metaheuristic algorithms [3] have proven to be effective methods for solving MOPs. Among them, evolutionary algorithms are very popular and widely used to effectively solve complex real-world MOPs [4]. Some of the most well-known algorithms belong to this class, such as the Nondominated Sorted Genetic Algorithm-II (NSGA-II) [5], the Multiobjective ε-evolutionary Algorithm based on ε Dominance (ε-MOEA) [6], and Borg [7].

Swarm intelligence algorithms [8], inspired by collective biological behavior, form another important type of metaheuristic. With their unique advantages and mechanisms, they have become a popular and important field; the main algorithms include the particle swarm optimization (PSO) algorithm [9], the ant colony optimization (ACO) algorithm [10], and the shuffled frog leaping algorithm (SFLA) [11]. In 2005, Karaboga proposed the artificial bee colony (ABC) algorithm based on the foraging behavior of honeybees [12]. ABC has been demonstrated to have a strong ability to solve optimization problems, and its validity and practicality have been proven [13]. Owing to its high convergence speed and strong robustness, it has been used in different areas of engineering and appears well suited to multiobjective optimization. At present, however, research on the ABC algorithm mainly focuses on single-objective optimization; the study of its multiobjective variants has only just begun.

However, because multiobjective optimization must cope with real problems, practical applications face several inconveniences: multiobjective optimization algorithms are often tightly coupled to the problems they solve, making them difficult to apply to other MOPs; a consistent model is needed to regulate and compare the optimization strategies of different multiobjective algorithms; and users have difficulty choosing a suitable optimization algorithm for their problems and must spend considerable time learning each one.

In this context, it is necessary to establish a unified, universal, and user-friendly multiobjective optimization framework, which can be a valuable tool for understanding the behavior of existing techniques, for reusing code or modules from existing algorithms, and for helping implement and compare new algorithmic ideas. Moreover, researchers have found that focusing on the study of a single algorithm has many limitations. If different heuristic algorithms can be effectively cross-referenced or integrated with each other, they can handle actual or large-scale problems more effectively and more flexibly [14].

Therefore, multiobjective optimization frameworks have been proposed to integrate optimization algorithms, optimization problems, evaluation functions, improvement strategies, adjustment methods, and output of results to provide an integrated environment for users to easily handle optimization problems, such as the jMetal [15], Paradiseo-MOEO [16], and PISA [17]. Among them, the MOEA framework [18] is a powerful and efficient platform which is a free and open source Java library for developing and experimenting with multiobjective evolutionary algorithms (MOEAs) and other general purpose multiobjective optimization algorithms.

However, in these integrated environments for MO algorithms, the multiobjective artificial bee colony (MOABC) algorithm has not yet been integrated, even though the MOABC algorithm was proven in our previous research to perform better than many state-of-the-art MO algorithms [19]. Therefore, in this paper, a multiobjective ABC algorithm named RMOABC [19] is integrated with the MOEA framework to provide a flexible and configurable MOABC algorithm framework that is independent of specific problems.

The remainder of this paper is organized as follows. The related literature is reviewed in Section 2. Section 3 provides the background concepts and related technologies of MO and introduces the RMOABC algorithm. Section 4 describes the unified optimization framework for the MOABC algorithm based on the MOEA framework. The case study is presented in Section 5. The experimental settings, results, and corresponding analyses are discussed in Section 6, and finally, conclusions and future work are drawn in Section 7.

2. Literature Review

In the past, methods based on metaheuristics developed by simulating various phenomena in the natural world have proven to be effective methods for solving MOPs [20]. Compared to traditional algorithms, modern heuristics are not tied to a specific problem domain and are not sensitive to the mathematical nature of the problem. They are more suitable for dealing with the practical MOPs. A subfamily of them in particular, the evolutionary algorithms, is now widely used to effectively handle MOPs in the real world [21]. In the mid-1980s, the genetic algorithm (GA) began to be applied to solve MOPs. In 1985, Schaffer [22] proposed a vector evaluation GA which realized the combination of the genetic algorithm and multiobjective optimization problems for the first time. In 1989, Goldberg proposed a new idea for solving MOPs by combining Pareto theory in economics with evolutionary algorithms, and it brought important guidance for the subsequent research on multiobjective optimization algorithms [23]. Subsequently, various multiobjective evolution algorithms (MOEAs) have been proposed, and some of them have been successfully applied in engineering [24]. For instance, Li et al. proposed a new multiobjective evolutionary method based on the differential evolution algorithm (MOEA/D-DE) to solve MOPs with complicated Pareto sets [25].

Since 2001, optimization algorithms based on swarm intelligence, inspired by the cooperation mechanisms of biological populations, have been developed [8]. Through the cooperation of intelligent individuals, the wisdom of the swarm can achieve breakthroughs beyond the optimal individual. Swarm intelligence algorithms have been successfully applied to optimization problems with more than one objective. For multiobjective particle swarm optimization (MOPSO) [26], a local search procedure and a flight mechanism, both based on crowding distance, were incorporated into the MOPSO algorithm [27]. Kamble et al. proposed a hybrid PSO-based method to handle the flexible job-shop scheduling problem [28]. Leong et al. integrated a dynamic population strategy within the multiple-swarm MOPSO [29].

Among swarm intelligence algorithms, the ABC algorithm, owing to its high accuracy and satisfactory convergence speed, shows a greater advantage in problem representation, solving ability, and parameter adjustment [30]. Because research on the multiobjective ABC algorithm has begun only in recent years, there are relatively few studies on MOABC algorithms and their applications. For instance, Hedayatzadeh et al. designed a multiobjective artificial bee colony (MOABC) based on the Pareto theory and the ε-domination notion [31]. The performance of the Pareto-based MOABC algorithm was investigated by Akbari et al., whose studies showed that the algorithm could provide competitive performance [32]. Zou et al. presented a multiobjective ABC that utilizes the Pareto-dominance concept and maintains the nondominated solutions in an external archive [33]. Akbari designed a multiobjective bee swarm optimization algorithm (MOBSO) that can adaptively maintain an external archive of nondominated solutions [34]. Zhang et al. presented a hybrid multiobjective ABC (HMABC) for burdening optimization of copper strip production, which solved a two-objective problem of minimizing the total cost of materials and maximizing the amount of waste material thrown into the melting furnace [35]. Luo et al. proposed a multiobjective artificial bee colony optimization method called ε-MOABC based on performance indicators to solve multiobjective and many-objective problems [36]. Kishor presented a nondominated sorting based multiobjective artificial bee colony algorithm (NSABC) to solve multiobjective optimization problems [37]. Nseef et al. put forward an adaptive multipopulation artificial bee colony (ABC) algorithm for dynamic optimization problems (DOPs) [38]. In our previous work, a multiobjective artificial bee colony algorithm with regulation operators (RMOABC), which utilizes the mechanisms of an adaptive grid and regulation operators, was proposed in [19].
The experimental results show that compared with the traditional multiobjective algorithms, these variants of multiobjective ABC can find solutions with competitive convergence and diversity within a shorter period of time.

To integrate different heuristic algorithms so as to handle MOPs more effectively and flexibly, a number of optimization algorithm frameworks have been presented and applied in industrial and other fields. For instance, Choobineh et al. proposed a methodology for management of an industrial plant considering the multiple objective functions of asset management, emission control, and utilization of alternative energy resources [39]. Khalili-Damghani et al. proposed an integrated multiobjective framework for solving multiperiod portfolio project selection problems, helping investment managers make portfolio decisions by maximizing profits and minimizing risks over a multiperiod planning horizon [40]. An evolutionary multiobjective framework for business process optimization was presented by Vergidis et al. to construct feasible business process designs with optimum attribute values such as duration and cost [41]. Charitopoulos and Dua presented a unified framework for model-based multiobjective linear process and energy optimization under uncertainty [42]. Tsai and Chen proposed a simulation-based solution framework for tackling the multiobjective inventory optimization problem to minimize three objective functions [43]. A multiobjective, simulation-based optimization framework was developed by Avci and Selim for supply chain inventory optimization to determine supplier flexibility and safety stock levels [44]. Golding et al. introduced a general framework based on ACO for the identification of optimal strategies for mitigating the impact of regional shocks to the global food production network [45]. And a multiobjective optimization framework for automatic calibration of cellular automata land-use models with multiple dynamic land-use classes was presented by Newland et al. [46].

A number of multiobjective optimization frameworks for more general purposes have also been developed. For example, jMetal is an object-oriented Java-based framework designed for multiobjective optimization with metaheuristics and is available to anyone interested in multiobjective optimization [47]. PISA is a C-based framework for multiobjective optimization that separates the algorithm-specific part of an optimizer from the application-specific part [17]. jMetalSP, a framework for dynamic multiobjective big data optimization, combines the multiobjective optimization features of jMetal with the streaming facilities of the Apache Spark cluster computing system to solve dynamic multiobjective big data optimization problems [48]. The MOEA framework [18] is a powerful and efficient platform: a free and open source Java library for developing and experimenting with multiobjective evolutionary algorithms (MOEAs) and other general purpose multiobjective optimization algorithms.

In summary, the research of multiobjective ABC algorithms is still in the initial stage, and the MOABC algorithms are still not implemented in the common multiobjective optimization frameworks. Therefore, this paper focuses on introducing the RMOABC algorithm based on the Pareto dominance theory into the MOEA framework to establish a unified, universal, and user-friendly multiobjective optimization framework for the general optimization purpose.

3. Background Concepts and Related Technologies

3.1. Pareto Dominance Concepts

Multiobjective optimization often has to minimize/maximize two or more nonlinear objectives at the same time that are in conflict with each other; thus, trade-off decisions must be made between these objectives. Most multiobjective algorithms are based on the Pareto sorting [2, 49] theory, so the optimization result is usually not a single solution but rather a set of solutions known as a Pareto nondominated set.
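The dominance relation underlying a nondominated set can be stated concretely in code. The following sketch (plain Java, minimization assumed; the class and method names are ours for illustration and are not part of any framework) checks whether one objective vector Pareto-dominates another:

```java
public class ParetoDominance {
    // Returns true if objective vector a dominates b under minimization:
    // a is no worse than b in every objective and strictly better in at least one.
    public static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;      // worse in some objective: no dominance
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }
}
```

A Pareto nondominated set is then simply the subset of solutions for which `dominates(other, candidate)` is false for every other member of the set.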

Generally, a multiobjective optimization problem is to optimize a set of objectives subjected to some equality/inequality constraints. The goal of multiobjective optimization is to guide the optimization process towards the true or approximate Pareto front and to generate a well-distributed Pareto optimal set. The basic concepts of the multiobjective method based on the Pareto theory can be found in [50].

3.2. Artificial Bee Colony Algorithm

The artificial bee colony (ABC) algorithm is a metaheuristic and swarm intelligence algorithm proposed by Karaboga [12]. It is inspired by the foraging behavior of honeybees. Each individual bee is taken as an agent, and swarm intelligence emerges from the cooperation among different individuals. Owing to its excellent performance, the ABC algorithm has become an effective means for solving complex nonlinear optimization problems.

Three types of bees, namely employed bees, onlookers, and scouts, constitute the artificial bee colony in the ABC algorithm. The optimization process is cast as a search for nectar sources: each position of a nectar source represents a feasible solution to the problem, and the nectar amount of a source corresponds to the quality, or fitness, of that solution. Evolutionary iteration and global convergence are achieved by the cooperation of the three kinds of bees: (1) employed bees perform local random searches in the areas near their food sources; (2) onlookers select promising food sources for further exploitation according to a probabilistic selection mechanism; and (3) scouts replace stagnant food sources according to the processing mechanism for stagnant solutions.

Overall, the employed bees and onlookers work together to find better food sources through random and targeted searches. When the trial counter of the best food source reaches a threshold, indicating that the search is prone to falling into a local optimum, the scouts start a new random exploration task for the global search. Thus, through the collaboration of the three kinds of bees, the ABC algorithm can quickly and effectively achieve global convergence.
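The three-phase cycle described above can be sketched in a compact, self-contained form. The following toy implementation (our own illustration; all names and the one-dimensional test function f(x) = x² are assumptions, and the real algorithm operates on multidimensional food sources) shows the employed bee, onlooker, and scout phases in order:

```java
import java.util.Random;

public class AbcSketch {
    // Toy objective to minimize: the 1-D sphere function f(x) = x^2.
    static double f(double x) { return x * x; }

    public static double run(long seed, int foodSources, int maxCycles, int limit) {
        Random rnd = new Random(seed);
        double[] foods = new double[foodSources];
        int[] trials = new int[foodSources];
        for (int i = 0; i < foodSources; i++) foods[i] = rnd.nextDouble() * 10 - 5;
        double best = Double.MAX_VALUE;
        for (double x : foods) best = Math.min(best, f(x));

        for (int cycle = 0; cycle < maxCycles; cycle++) {
            // (1) Employed bees: local random search around each food source.
            for (int i = 0; i < foodSources; i++) greedyUpdate(foods, trials, i, rnd);
            // (2) Onlookers: prefer better food sources via roulette-wheel selection.
            for (int i = 0; i < foodSources; i++) greedyUpdate(foods, trials, select(foods, rnd), rnd);
            // (3) Scouts: abandon stagnant sources and explore randomly.
            for (int i = 0; i < foodSources; i++) {
                if (trials[i] > limit) { foods[i] = rnd.nextDouble() * 10 - 5; trials[i] = 0; }
            }
            for (double x : foods) best = Math.min(best, f(x));
        }
        return best;
    }

    // Candidate move v = x + phi * (x - xk); keep it only if it improves (greedy selection).
    static void greedyUpdate(double[] foods, int[] trials, int i, Random rnd) {
        int k;
        do { k = rnd.nextInt(foods.length); } while (k == i);
        double phi = rnd.nextDouble() * 2 - 1;
        double v = foods[i] + phi * (foods[i] - foods[k]);
        if (f(v) < f(foods[i])) { foods[i] = v; trials[i] = 0; } else { trials[i]++; }
    }

    // Roulette-wheel selection on fitness 1 / (1 + f), so better sources are picked more often.
    static int select(double[] foods, Random rnd) {
        double[] fit = new double[foods.length];
        double sum = 0;
        for (int i = 0; i < foods.length; i++) { fit[i] = 1.0 / (1.0 + f(foods[i])); sum += fit[i]; }
        double r = rnd.nextDouble() * sum;
        for (int i = 0; i < foods.length; i++) { r -= fit[i]; if (r <= 0) return i; }
        return foods.length - 1;
    }
}
```

Because the greedy selection never accepts a worse food source, the best fitness found is monotonically nonincreasing over the cycles.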

3.3. RMOABC Algorithm

A typical goal in a multiobjective optimization problem is to obtain a set of Pareto optimal solutions. As identified earlier, it is necessary to provide a wide variety among the set of solutions for the decision-maker to choose from. By utilizing the Pareto theory, the original ABC algorithm has been improved and extended to handle MOPs; the new algorithm is called the RMOABC algorithm [19]. The RMOABC algorithm adopts two mechanisms, regulation operators and an adaptive grid, to improve accuracy and preserve diversity, respectively. An external archive is also integrated to maintain the nondominated solutions found during the evolution process.

In the evolution process of optimization algorithms, it is essential to properly control the exploration and exploitation capabilities of the bees to efficiently find the global optimum of the optimization problem. From the main update equation of the original ABC algorithm, it can be seen that more emphasis is placed on the exploration capability [12]:

v_ij = x_ij + φ_ij (x_ij − x_kj), (1)

where v_ij (or x_ij) denotes the j-th element of the candidate solution v_i (or the current solution x_i), j is a random index, x_k denotes another solution selected randomly from the population, and φ_ij is a random number in [−1, 1]. It is well known that the exploration and exploitation capabilities of ABC depend heavily on the control parameters in the update equation of the bees. Thus, to improve the exploitation capability of the ABC algorithm, Zhu et al. proposed a Gbest-guided artificial bee colony algorithm (GABC) in [51] that replaces Equation (1) of the original ABC algorithm with

v_ij = x_ij + φ_ij (x_ij − x_kj) + ψ_ij (y_j − x_ij), (2)

where ψ_ij is a random number in [0, 1.5] and y_j is the value of the global best solution in the j-th dimension.
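The two update rules, Equation (1) and Equation (2), translate directly into code. The following minimal sketch (method and class names are ours for illustration) computes one candidate component with each rule:

```java
public class BeeUpdate {
    // Eq. (1), original ABC: v_ij = x_ij + phi_ij * (x_ij - x_kj), phi_ij in [-1, 1].
    public static double abcUpdate(double xij, double xkj, double phi) {
        return xij + phi * (xij - xkj);
    }

    // Eq. (2), GABC: the extra gbest-guided term psi_ij * (y_j - x_ij), psi_ij in [0, 1.5],
    // pulls the candidate toward the global best y_j and strengthens exploitation.
    public static double gabcUpdate(double xij, double xkj, double yj, double phi, double psi) {
        return xij + phi * (xij - xkj) + psi * (yj - xij);
    }
}
```

With psi = 0, the GABC rule reduces exactly to the original ABC rule, which is why GABC is described as adding an exploitation term rather than replacing the search mechanism.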

To balance the trade-offs between the exploration and exploitation capabilities of MOABC, we proposed a multiobjective artificial bee colony algorithm with regulation operators (RMOABC) in [19] to dynamically adjust these capabilities during the algorithm's evolution process. Local and global dynamic regulation operators were integrated with the GABC algorithm; these mechanisms improve the exploitation ability and guide the search of candidate solutions based on information about the global optimal solutions. The update Equation (2) of the GABC algorithm was modified by introducing a local dynamic regulation operator k and a global dynamic regulation operator r, both defined as functions of i, the current iteration number, and MFE, the maximum iteration number of the algorithm. The full update equation and operator definitions can be found in [19].

In the design of multiobjective algorithms, an external archive is a typical method for maintaining the nondominated solutions found during the evolution process. The adaptive grid mechanism [52] proposed in the PAES (Pareto Archived Evolution Strategy) algorithm was utilized in RMOABC to produce a well-distributed set of nondominated Pareto solutions in the external archive. Each nondominated solution is mapped to a location in the grid according to the values of its objective functions, and the grid adaptively maintains a uniform distribution of the candidate solutions stored in the external archive throughout the evolution process. These two mechanisms help the RMOABC algorithm to quickly achieve global convergence. The details can be found in [19].
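The core of the grid mapping can be sketched as follows. This is our own simplified illustration, not the exact PAES or RMOABC code: each archived solution is assigned hypercube coordinates based on the archive's current objective-space bounds.

```java
public class AdaptiveGridSketch {
    // Maps an objective vector to its hypercube coordinates on a uniform grid
    // with `divisions` cells per objective, given the archive's current
    // minimum and maximum objective values.
    public static int[] gridLocation(double[] obj, double[] min, double[] max, int divisions) {
        int[] loc = new int[obj.length];
        for (int i = 0; i < obj.length; i++) {
            double range = max[i] - min[i];
            int cell = range == 0 ? 0 : (int) (((obj[i] - min[i]) / range) * divisions);
            loc[i] = Math.min(Math.max(cell, 0), divisions - 1); // clamp to the grid
        }
        return loc;
    }
}
```

When the archive overflows, solutions in the most crowded hypercubes are pruned first, which is what keeps the stored front well distributed.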

3.4. MOEA Framework

Researchers of optimization algorithms agree that there is no single optimal strategy or algorithm for all optimization problems, only strategies or algorithms that are effective for particular problems. Thus, efficiently choosing the proper algorithm for a particular optimization problem is a challenge for users. As mentioned above, a unified multiobjective optimization framework is a good solution that can help users understand the behavior of existing techniques, reuse code or modules from existing algorithms, and facilitate the implementation and comparison of new algorithms.

Among multiobjective optimization frameworks, the MOEA framework is an open-source evolutionary computation library for Java that specializes in multiobjective optimization [18]. It is also an extensible framework for rapidly designing, developing, executing, and statistically testing multiobjective evolutionary algorithms (MOEAs). The framework supports a variety of state-of-the-art MOEAs such as NSGA-II (Nondominated Sorting Genetic Algorithm II), NSGA-III (Nondominated Sorting Genetic Algorithm III), ε-MOEA (Multiobjective ε-evolutionary Algorithm Based on ε Dominance), GDE3 (the Third Evolution Step of Generalized Differential Evolution), MOEA/D (Multiobjective Evolutionary Algorithm Based on Decomposition), PISA (Platform and Programming Language Independent Interface for Search Algorithms), and the Borg MOEA. It also includes dozens of analytical test problems such as Zitzler-Deb-Thiele (ZDT), Deb-Thiele-Laumanns-Zitzler (DTLZ), and the CEC2009 unconstrained problems. Thus, a multiobjective optimization algorithm can be tested against a suite of state-of-the-art algorithms across a large collection of test problems, and new problems can be subjected to numerous comparative studies to assess the efficiency, reliability, and controllability of state-of-the-art MOEAs.

4. The Unified Optimization Framework with MOABC Algorithm

The purpose of this paper is to present a unified optimization framework for the MOABC algorithm (UOF-MOABC), which combines the features of the MOEA framework [18] for multiobjective optimization metaheuristics with the RMOABC algorithm presented in [19]. The MOEA framework contributes a number of classic and modern state-of-the-art optimization algorithms, a wide set of benchmark problems, and a set of well-known quality indicators for assessing the performance of the MO algorithms it includes. The UOF-MOABC can assist multiobjective optimization research in the development, experimentation, comparison, and study of MOABC for solving multiobjective optimization problems.

4.1. System Architecture

The architecture of an optimization framework should be generic enough to allow the flexibility needed to implement most metaheuristics; thus, before establishing the optimization framework, the metaheuristics should be characterized by the common behavior shared by all of their algorithms.

As shown in Algorithm 1, a unified procedure for metaheuristics was summarized by Wu et al. in [53]. This algorithm template fits most optimization algorithms based on metaheuristic search and can be used to implement popular multiobjective techniques and foster code reusability.

Algorithm 1: Unity procedure of metaheuristics.

The MOEA framework has a number of algorithm templates that were abstracted from the behavior of the base metaheuristics; therefore, developing a particular algorithm only requires implementing its specific methods. The MOEA framework is designed around the object-oriented architecture of Java, which facilitates the creation of new components and the reuse of existing ones.

4.1.1. General Architecture of the MOEA Framework

The UML diagram of the base classes within the MOEA framework, obtained through in-depth analysis of the source code, is depicted in Figure 1. To keep class names general enough to be used by most metaheuristics, the MOEA framework adopts a generic terminology; the major components are Algorithm, Problem, and Solution. As shown in the figure, the working mechanism of the MOEA framework is that an Algorithm is adopted to solve a Problem using a set of Solutions, the Algorithm performs the evolution process through a set of Selection and Variation operations, and the set of nondominated Solutions is assessed by the related evaluation methods. In the context of evolutionary algorithms, populations and individuals correspond to the Population and Solution classes in the MOEA framework, respectively. The same classes apply to the ABC optimization algorithm, where they correspond to the concepts of the swarm and the bees.

Figure 1: General architecture of the MOEA framework.

The class Algorithm is the superclass of all the optimization metaheuristics. There are two main types of algorithms included in the MOEA framework: native algorithms implemented within the framework, which support all of its functionality, and optimization algorithms provided by the jMetal library, which can be executed within the MOEA framework. They are represented by the AbstractAlgorithm and JMetalAlgorithmAdapter classes, respectively, both of which inherit from the class Algorithm. These classes connect the optimization algorithm with the Problem (getProblem()) and the resulting set of Solutions (getResult()) and evaluate solutions with the evaluation methods (evaluate()).

The class Solution represents a Solution object that is composed of an array of Variable objects. The class Variable is a superclass aimed at flexibly describing different kinds of representations for solutions that can contain variables of mixed variable types, such as RealVariable, BinaryVariable, Program, Grammar, and Permutation.

In the MOEA framework, all problems have to implement the interface Problem, typically by extending the class AbstractProblem. The interface Problem mainly contains two basic methods: evaluate() and newSolution(). The first receives a Solution object representing a candidate solution to the problem and evaluates it; the second generates a new Solution object for the problem according to the algorithm's mechanisms. The Selection and Variation interfaces represent generic operators to be used by the different algorithms: the Selection operators include TournamentSelection and UniformSelection, and the Variation operators include AdaptiveMultimethodVariation, CompoundCrossover, OnePointCrossover, TwoPointCrossover, and UniformCrossover.
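A small problem definition makes the evaluate()/newSolution() pair concrete. The sketch below, which assumes the MOEA framework (2.x) library is on the classpath and mirrors the Schaffer example from the framework's documentation, encodes the classic two-objective problem of minimizing f1(x) = x² and f2(x) = (x − 2)²:

```java
import org.moeaframework.core.Solution;
import org.moeaframework.core.variable.EncodingUtils;
import org.moeaframework.problem.AbstractProblem;

// Schaffer's two-objective test problem: one real decision variable, two objectives.
public class Schaffer extends AbstractProblem {

    public Schaffer() {
        super(1, 2); // one decision variable, two objectives
    }

    // evaluate(): read the candidate's decision variable and set its objective values.
    @Override
    public void evaluate(Solution solution) {
        double x = EncodingUtils.getReal(solution.getVariable(0));
        solution.setObjective(0, x * x);
        solution.setObjective(1, (x - 2.0) * (x - 2.0));
    }

    // newSolution(): construct an empty solution with the right variable type and bounds.
    @Override
    public Solution newSolution() {
        Solution solution = new Solution(1, 2);
        solution.setVariable(0, EncodingUtils.newReal(-10.0, 10.0));
        return solution;
    }
}
```

Any algorithm registered with the framework, including a MOABC implementation, can then be run on this problem without further changes to the problem class.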

A more detailed description of the MOEA framework can be found in [18] and in the MOEA framework user manual.

4.1.2. Adding MOABC Algorithm to the Framework

This section describes how the MOABC algorithm can be developed and included in the MOEA framework, taking the RMOABC algorithm as the demonstration case. To deal with this issue, the new algorithm must be adapted to comply with the standards of the MOEA framework. The UML class diagram of the MOABC algorithm and its variants as implemented in the MOEA framework is depicted in Figure 2. To make the framework more versatile, we first designed a common class, MOABCAlgorithm, to implement the general attributes and methods of the MOABC algorithm; by inheriting from the class AbstractAlgorithm of the MOEA framework and implementing its methods, it achieves the integration of the MOABC algorithm with the MOEA framework. The implementation classes of the MOABC algorithm and its variant algorithms then inherit from the MOABCAlgorithm class and implement their own mechanisms by adding or modifying attributes and methods.

Figure 2: UML diagram of the MOABC algorithm and its variants.

Taking the RMOABC algorithm as a case, the class RMOABCAlgorithm inherits from the class MOABCAlgorithm and implements its specific methods such as SendEmployedBees(), SendOnlookerBees(), SendScoutBees(), and UpdateExternalArchive(). The adaptive grid mechanism of RMOABC, which produces a well-distributed set of nondominated Pareto solutions in the external archive, is implemented by the classes AdaptiveGrid and Hypercube.

Having implemented the MOABC algorithm in the MOEA framework, we now describe how to execute it. The Executor, Instrumenter, and Analyzer classes provide most of the functionality offered by the MOEA framework [18]. The Executor class is responsible for constructing and executing runs of an algorithm; a single run requires three pieces of information: (1) the problem; (2) the algorithm used to solve the problem; and (3) the number of objective function evaluations allocated to solve the problem. The Instrumenter class works with the Executor class to record the necessary data while the algorithm is running. The Analyzer class provides end-of-run analysis, which is useful for statistically comparing the results produced by different algorithms.
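A minimal run through the Executor, assuming the MOEA framework (2.x) library is on the classpath, looks like the sketch below. The problem and algorithm names shown ("DTLZ2_2", "NSGAII") are built-ins of the framework; once an RMOABC implementation is registered through an algorithm provider, it can be selected by name in the same way.

```java
import org.moeaframework.Executor;
import org.moeaframework.core.NondominatedPopulation;
import org.moeaframework.core.Solution;

public class RunExample {
    public static void main(String[] args) {
        // Solve the built-in two-objective DTLZ2 problem with NSGA-II,
        // allocating 10,000 objective function evaluations to the run.
        NondominatedPopulation result = new Executor()
                .withProblem("DTLZ2_2")
                .withAlgorithm("NSGAII")
                .withMaxEvaluations(10000)
                .run();

        // Print the objective values of the resulting nondominated set.
        for (Solution solution : result) {
            System.out.printf("%.4f  %.4f%n",
                    solution.getObjective(0), solution.getObjective(1));
        }
    }
}
```

The three pieces of information listed above map one-to-one onto the withProblem(), withAlgorithm(), and withMaxEvaluations() calls of the builder chain.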

4.2. General Optimization Framework

Figure 3 gives an overview of the proposed UOF-MOABC framework for general-purpose optimization problems. As shown in the figure, the framework comprises four stages for the RMOABC algorithm to solve optimization problems: Problem Formulation, Algorithm Selection, Optimization Process, and Result Assessment.

Figure 3: UOF-MOABC general optimization framework.
4.2.1. Problem Formulation

In the Problem Formulation stage, the decision variables of the optimization problem, the objective functions to be optimized, and any constraints on the decision variables or solutions need to be formulated according to the standards of the MOEA framework.

(1) Decision Variables. In the context of optimization, decision variables are the unknown and controllable quantities that need to be determined in order to solve the problem; in other words, the problem is solved when the best values of these variables have been identified. The values of the decision variables determine the values of the objective functions. Defining the decision variables is one of the hardest and most crucial steps in formulating an optimization problem; a creative and suitable definition of the decision variables can sometimes dramatically reduce the size and difficulty of the problem. The MOEA framework therefore provides different kinds of representations, such as RealVariable, BinaryVariable, Program, Grammar, and Permutation, so that solutions can flexibly contain variables of mixed types.

(2) Objective Functions. The objective function of an optimization problem indicates how much each decision variable contributes to the value to be optimized in the problem. As shown in Equation (4), the objective of the optimization process is to minimize or maximize the numerical value of the objective function by changing the values of selected decision variables:

min/max f(x), x = (x1, x2, …, xn), (4)

where x is the vector of decision variables and xi denotes the i-th decision variable in the n-dimensional decision space.

(3) Constraints. Constraints define the possible values that the decision variables of an optimization problem may take. They typically represent resource limits, or the minimum or maximum levels of an activity, and take the following general form:

gj(x) ≤ 0, j = 1, 2, …, J,
hk(x) = 0, k = 1, 2, …, K,

where x is the vector of decision variables, xi denotes the i-th decision variable in the n-dimensional decision space, gj(x) defines the J inequality constraints, and hk(x) defines the K equality constraints; each value of the index j corresponds to one inequality constraint, and each value of k corresponds to one equality constraint. The constraints generally reflect limitations of the real-world system under consideration.
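The general form above can be sketched as a feasibility check: a candidate x is feasible when every inequality constraint gj(x) ≤ 0 holds and every equality constraint hk(x) = 0 holds within a small tolerance. The toy constraints in the example are hypothetical, for illustration only.

```java
import java.util.function.Function;

// Hedged sketch of constraint checking in the general form of the text:
// inequality constraints g_j(x) <= 0 and equality constraints h_k(x) = 0
// (equalities tested within a tolerance). All names are illustrative.
public class Feasibility {
    static boolean isFeasible(double[] x,
                              Function<double[], double[]> g,  // J inequality values
                              Function<double[], double[]> h,  // K equality values
                              double tol) {
        for (double gj : g.apply(x)) if (gj > 0.0) return false;          // g_j(x) <= 0
        for (double hk : h.apply(x)) if (Math.abs(hk) > tol) return false; // h_k(x) = 0
        return true;
    }

    public static void main(String[] args) {
        // toy problem: g1(x) = x1 + x2 - 1 <= 0, h1(x) = x1 - x2 = 0
        Function<double[], double[]> g = x -> new double[]{x[0] + x[1] - 1.0};
        Function<double[], double[]> h = x -> new double[]{x[0] - x[1]};
        System.out.println(isFeasible(new double[]{0.4, 0.4}, g, h, 1e-9));
        System.out.println(isFeasible(new double[]{0.8, 0.4}, g, h, 1e-9));
    }
}
```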

4.2.2. Algorithm Selection

Once the optimization problem has been formulated correctly, the optimization framework needs to choose a suitable optimization algorithm for the problem in the Algorithm Selection stage. Several configurations must be specified before running the optimization algorithm. Firstly, the parameters of the selected optimization algorithm must be configured based on values suggested in the literature or from previous experience. These parameter values influence the optimization process or search behavior of the algorithm; in the case of the multiobjective ABC algorithm, they include the population size, food source number, Limit number, and size of the external archive. Secondly, the termination criteria for the optimization process, such as a predefined maximum number of iterations or a precision threshold below which no significant improvement in performance is observed, must be specified. Finally, the number of times the entire optimization process is repeated must be specified to mitigate the effects of randomness on the solutions and to increase the chance of finding the optimal nondominated solutions.

4.2.3. Optimization Process

The purpose of this stage is to use the selected optimization algorithm (such as the RMOABC algorithm) to identify the nondominated solutions of the optimization problem that provide the best possible trade-offs between the selected objective functions. The problem is then solved by the optimization algorithm through the following optimization processes. Solution generation mechanisms are used to generate trial solutions for the problem from the search space of each decision variable; various strategies can be adopted in these mechanisms to improve the computational efficiency and the accuracy of the solutions. The trial solutions are then checked to verify whether they violate the constraints of the problem. The objective function values are also evaluated to judge whether a trial solution is better than the previous solution (for single-objective problems) or whether it is a nondominated solution to the problem (for multiobjective problems). The algorithm then judges whether the stopping criteria have been met; if so, it outputs the resulting nondominated solutions; otherwise, information from the previous evaluation is used to guide the generation of the next trial solutions.
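The dominance test used when judging trial solutions in the multiobjective case can be sketched as follows (minimization assumed). This is a generic Pareto-dominance check, not the RMOABC implementation itself.

```java
// A minimal Pareto-dominance check for minimization problems.
public class Dominance {
    // a dominates b iff a is no worse in every objective
    // and strictly better in at least one objective.
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;       // worse in objective i
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    public static void main(String[] args) {
        System.out.println(dominates(new double[]{1.0, 2.0}, new double[]{2.0, 3.0}));
        System.out.println(dominates(new double[]{1.0, 3.0}, new double[]{2.0, 2.0}));
    }
}
```

The second call returns false because each solution is better on one objective: neither dominates the other, so both would be kept as nondominated.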

4.2.4. Result Assessment

Finally, in the Result Assessment stage, the nondominated solutions of the optimized problem are quantitatively assessed and visualized; the algorithm then finishes its operation.

Many quantitative evaluation methods can be used to evaluate the nondominated solutions of multiobjective optimization algorithms against the optimal Pareto front. These can mainly be divided into two categories: convergence and distribution [54]. A convergence indicator denotes the distance between the calculated noninferior front and the known (or approximate) true noninferior front. A distribution indicator refers to whether the obtained noninferior sets are evenly distributed in the objective space. The most commonly used indicators are Generational Distance (GD) [55], Inverted Generational Distance (IGD) [56], the averaged Hausdorff distance Δp [57], Spacing (SP) [58], Hypervolume (HV) [59], and computational time (Times).
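As one example of a distribution indicator, the sketch below computes Schott's Spacing (SP) in its common Manhattan-distance form; the exact variant used by a given framework may differ.

```java
// Hedged sketch of Schott's Spacing (SP) indicator for a computed front.
// Lower SP means more uniform spacing between neighboring solutions.
public class Spacing {
    static double spacing(double[][] front) {
        int n = front.length;
        double[] d = new double[n];
        for (int i = 0; i < n; i++) {
            // d[i]: Manhattan distance to the nearest other front member
            d[i] = Double.POSITIVE_INFINITY;
            for (int j = 0; j < n; j++) {
                if (i == j) continue;
                double dist = 0.0;
                for (int k = 0; k < front[i].length; k++)
                    dist += Math.abs(front[i][k] - front[j][k]);
                d[i] = Math.min(d[i], dist);
            }
        }
        double mean = 0.0;
        for (double di : d) mean += di / n;
        double sum = 0.0;
        for (double di : d) sum += (mean - di) * (mean - di);
        return Math.sqrt(sum / (n - 1));
    }

    public static void main(String[] args) {
        // a perfectly even 2-objective front gives SP = 0
        double[][] even = {{0.0, 1.0}, {0.5, 0.5}, {1.0, 0.0}};
        System.out.println(spacing(even));
    }
}
```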

5. Test Problems

In this section, how the framework can be used to optimize a practical problem is discussed in detail. A well-known problem from the multiobjective literature, the water resource planning (WRP) problem, and the Walking Fish Group (WFG) test suite are taken as the test problems.

5.1. Water Resources Plan Problem

The water resource planning problem involves optimal planning of a storm drainage system in an urban area and was originally described by Musselman and Talavage [60]. The problem entails examining a particular subbasin within a watershed. The subbasin is assumed to be hydrologically independent of other subbasins, having its own drainage network, on-site detention storage facility, treatment plant, and tributary to a receiving water body [61]. Mathematically, the WRP problem is a three-variable, five-objective, seven-constraint real-world problem. A detailed description can be found in [61], and the problem has been implemented in the MOEA framework.

5.1.1. Decision Variables

Three decision variables are assumed to characterize the storm drainage system of the subbasin: x1 is the local detention storage capacity (unit: basin·inches), x2 is the maximum treatment rate (unit: basin·inches/hour), and x3 is the maximum allowable overflow rate (unit: basin·inches/hour) [61]. The model simulates the performance of the storm drainage system defined by the variables x1, x2, and x3, over a specified period of time and under weather conditions representative of the area.

5.1.2. Objective Functions

There are five objective functions to be minimized in the WRP problem: f1 is the drainage network cost, f2 is the storage facility cost, f3 is the treatment facility cost, f4 is the expected flood damage cost, and f5 is the expected economic loss due to flood. The five objective functions are defined in the following equations, where x1, x2, and x3 are the three decision variables of the WRP problem.

5.1.3. Constraints

The WRP problem is subject to seven constraints. The constraint function g1 is the average number of floods per year, g2 is the average flood volume per year, g3 is the average number of pounds per year of suspended solids, g4 is the average number of pounds per year of settleable solids, g5 is the average number of pounds per year of biochemical oxygen demand, g6 is the average number of pounds per year of total nitrogen, and g7 is the average number of pounds per year of orthophosphate. The seven constraints are defined in the following equations, where x1, x2, and x3 are the three decision variables of the WRP problem and 0.01 ≤ x1 ≤ 0.45, 0.01 ≤ x2 ≤ 0.10, and 0.01 ≤ x3 ≤ 0.10.

5.2. Walking Fish Group (WFG) Toolkit

The Walking Fish Group (WFG) toolkit [62] is a well-known continuous and combinatorial benchmark suite that can be scaled to any number of objectives and decision variables. Comprising problems with various characteristics, such as linear, convex, concave, multimodal, disconnected, biased, and degenerate Pareto fronts, the WFG suite challenges varying capabilities of MO algorithms. The problems' characteristics are summarized in Table 1 [63]. The parameters k and l in WFG are set to (m − 1) and 10, respectively, where m denotes the number of objectives; the number of variables is thus (m − 1) + 10. In this paper, the objective number m is set to 2 and 3.
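The variable-count convention above (k = m − 1 position parameters, l = 10 distance parameters) can be checked with a one-line computation:

```java
// The WFG parameter convention used in this paper:
// k = m - 1 position parameters, l = 10 distance parameters, n = k + l variables.
public class WfgDims {
    static int numVariables(int m) {
        int k = m - 1; // position-related parameters
        int l = 10;    // distance-related parameters
        return k + l;
    }

    public static void main(String[] args) {
        System.out.println(numVariables(2)); // 2 objectives -> 11 variables
        System.out.println(numVariables(3)); // 3 objectives -> 12 variables
    }
}
```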

Table 1: Properties of the Walking Fish Group (WFG) test problems.

6. Experiments and Results Analysis

In this section, we describe the experimental study undertaken to evaluate the performance of the UOF-MOABC framework for solving the water resource planning (WRP) problem and the Walking Fish Group (WFG) toolkit.

6.1. Experimental Design

All of the experiments were performed on a PC with an Intel Core i7-4720HQ CPU (4 cores @ 2.6 GHz), 8 GB of RAM, and Microsoft Windows 8 Professional Edition. We used the Java Development Kit (JDK) 1.7 and Eclipse 3.2 as the integrated development environment (IDE), and version 2.12 of the MOEA framework.

6.1.1. Algorithm Selection

We assess the performance of the RMOABC algorithm against six state-of-the-art multiobjective algorithms implemented within the MOEA framework: the Nondominated Sorted Genetic Algorithm-II (NSGA-II) [5], Nondominated Sorted Genetic Algorithm-III (NSGA-III) [64], Multiobjective ε-evolutionary Algorithm Based on ε Dominance (ε-MOEA) [65], Speed-constrained Multiobjective PSO (SMPSO) [66], Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D) [67], and the third evolution step of Generalized Differential Evolution (GDE3) [68]. The simulation experiments were carried out on the UOF-MOABC framework, which combines the MOEA framework with the RMOABC algorithm.

The NSGA-II algorithm is the second generation of the Nondominated Sorted Genetic Algorithm (NSGA), which addressed deficiencies in the construction of the nondominated set and in the distribution-maintenance strategy of the solution set. NSGA-III is the many-objective successor to NSGA-II, using reference points to direct solutions towards a diverse set. ε-MOEA utilizes ε-dominance archiving to record the Pareto optimal solutions. SMPSO is a multiobjective PSO algorithm characterized by limiting the velocity of the particles, generating new effective particle positions when the velocity becomes too high. MOEA/D is an optimization algorithm based on decomposing the problem into many single-objective formulations. GDE3 is a multiobjective differential evolution (DE) algorithm for global optimization with an arbitrary number of objectives and constraints.

6.1.2. Assessment Methods

To allow a quantitative assessment and comparison of the performance of the selected multiobjective optimization algorithms, four indicators, Δp [57], SP [58], HV [59], and Times, were adopted in the experiments. Δp is an averaged Hausdorff distance composed of GDp [69] and IGDp [69] and measures both diversity and spread [57]; we take the most typical value p = 2 in this paper. The first three indicators are mainly used to evaluate the quality of the obtained Pareto solution set. The last indicator, Times, is the execution time of the optimization algorithm on the same computer, which reflects the computational time complexity of the tested algorithm.
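A minimal sketch of the averaged Hausdorff distance Δp = max(GDp, IGDp) with Euclidean point distances follows, where A is the computed front, R is the reference front, and p = 2 as in the paper. This follows the standard definition and is not the MOEA framework's own implementation.

```java
// Hedged sketch of the averaged Hausdorff distance Delta_p = max(GD_p, IGD_p).
public class DeltaP {
    static double euclid(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    // GD_p(A, R) = ( (1/|A|) * sum over a in A of minDist(a, R)^p )^(1/p)
    static double gdp(double[][] A, double[][] R, double p) {
        double sum = 0.0;
        for (double[] a : A) {
            double dmin = Double.POSITIVE_INFINITY;
            for (double[] r : R) dmin = Math.min(dmin, euclid(a, r));
            sum += Math.pow(dmin, p);
        }
        return Math.pow(sum / A.length, 1.0 / p);
    }

    // IGD_p is GD_p with the roles of the two fronts swapped.
    static double deltaP(double[][] A, double[][] R, double p) {
        return Math.max(gdp(A, R, p), gdp(R, A, p));
    }

    public static void main(String[] args) {
        double[][] ref = {{0.0, 1.0}, {0.5, 0.5}, {1.0, 0.0}};
        // a computed front identical to the reference front gives Delta_2 = 0
        System.out.println(deltaP(ref, ref, 2.0));
    }
}
```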

6.1.3. Parameter Settings

The selection of algorithm parameters can greatly affect execution performance; thus, it is generally recommended to fine-tune the parameters controlling the search behavior of the optimization algorithm, such as population size, mutation probability, and crossover probability. Parameter calibration is a very time-consuming and computationally intensive task. Thus, in this paper most of the multiobjective algorithms adopt the parameter settings recommended by the MOEA framework.

The parameter settings of the seven multiobjective algorithms are shown in Table 2, where D denotes the dimension of the decision variables of the optimized problem. The population size of all algorithms is set to 100, and the external archive capacity is set to 100. The stopping criterion adopted is reaching a certain number of generations; the maximum number of evolution iterations is 10,000 for the WFG toolkit and 2,500 for the WRP problem.

Table 2: The parameter settings of the seven multiobjective algorithms.

The multiobjective optimization algorithms mentioned above are all nondeterministic techniques; thus, it does not make much sense to draw conclusions from a single run of an algorithm. The usual solution is to carry out a number of independent runs and then use means and standard deviations to summarize the obtained results. Therefore, the results obtained from 30 independent runs of each algorithm were statistically analyzed to compare their performance.
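The per-algorithm summary statistics can be sketched as the plain mean and sample standard deviation over the indicator values of the independent runs (the values below are hypothetical):

```java
// Mean and sample standard deviation over indicator values from independent runs.
public class RunStats {
    static double mean(double[] v) {
        double s = 0.0;
        for (double x : v) s += x;
        return s / v.length;
    }

    static double sd(double[] v) {
        double m = mean(v), s = 0.0;
        for (double x : v) s += (x - m) * (x - m);
        return Math.sqrt(s / (v.length - 1)); // sample (n - 1) standard deviation
    }

    public static void main(String[] args) {
        // hypothetical HV values from three independent runs
        double[] hv = {0.25, 0.5, 0.75};
        System.out.println(mean(hv));
        System.out.println(sd(hv));
    }
}
```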

6.2. Experimental Results for Walking Fish Group (WFG) Problems
6.2.1. Distribution Visualization of the Nondominated Solutions

To visualize the distribution of the nondominated solutions obtained by the seven multiobjective algorithms, Figure 4 plots the final nondominated solution sets of the seven algorithms on the 3-objective WFG9, drawn in parallel coordinates. Considering the length of the paper, and because WFG9 is the most complicated problem in the WFG test suite, we took it as the case study. The experimental result was obtained from the particular run whose HV value is closest to the mean value. According to the definitions of the WFG test suite, the lower and upper bounds of objective i of a WFG test function are 0 and 2 × i, respectively. Thus, the value ranges of the objectives for the 3-objective WFG9 are [0, 2], [0, 4], and [0, 6].

Figure 4: Plots of the final nondominated solution set of the seven algorithms on the 3 objectives of WFG9. (a) NSGA-II algorithm. (b) NSGA-III algorithm. (c) ε-MOEA algorithm. (d) SMPSO algorithm. (e) MOEA/D algorithm. (f) GDE3 algorithm. (g) RMOABC algorithm.

Although all the nondominated solution sets of the seven algorithms appear to converge to the optimal front, the algorithms perform differently in terms of diversity maintenance, as can be seen from Figure 4. The final nondominated solution sets obtained by NSGA-II, NSGA-III, ε-MOEA, SMPSO, and MOEA/D are not uniformly distributed, and there are many apparent gaps in the range lines of the objective functions, which means these algorithms fail to reach some regions of the Pareto front. The solution sets of GDE3 and RMOABC show better uniformity and can cover almost all regions of the Pareto front. For the NSGA-III, ε-MOEA, and GDE3 algorithms, some solutions exceed the boundary of the WFG9 problem, as can be seen in the figures. It can be concluded from these results that the RMOABC algorithm converges to the true Pareto front, has a better distribution of nondominated solutions, and appears to cover the whole Pareto front well.

6.2.2. Optimization Process

To compare the convergence performance of the seven multiobjective algorithms on the WFG9 problem, the performance indicators Δp, SP, and HV were considered as the performance measures in this study. The variations of these three indicators over the iteration number of the seven algorithms are shown in Figure 5. The X axis is the iteration number of the algorithms, and the Y axes of Figures 5(a)–5(c) represent the values of Δp, SP, and HV of the algorithms' solution sets, respectively.

Figure 5: The performance indicators Δp, SP, and HV vs. the iteration number of the seven multiobjective algorithms for the WFG9 problem with 3 objectives. (a) Δp indicator. (b) Spacing (SP) indicator. (c) Hypervolume (HV) indicator.

It can be seen from the figure that, for the Δp indicator, the seven algorithms perform well during the optimization process, especially the ε-MOEA, NSGA-III, and RMOABC algorithms. At about the 1200th iteration, the RMOABC algorithm overtakes the other algorithms and thereafter maintains a lower value, which means it has a better quality of solutions. For the SP indicator, the MOEA/D algorithm does not perform well, with a fluctuating line and larger values. NSGA-II, NSGA-III, SMPSO, and GDE3 perform similarly, and their SP values also fluctuate along the iterations. ε-MOEA and RMOABC have more stable evolution lines for SP; the former exceeds the latter at about 1100 iterations, and they reach a similar value at about 9500 iterations. For the HV indicator, NSGA-III, ε-MOEA, and RMOABC perform better than the other algorithms, with the RMOABC algorithm exceeding the others at about 5000 iterations.

6.2.3. Multiobjective Performance Comparison

Through the thirty independent runs of the seven algorithms, the numerical statistical results in terms of the three performance metrics Δp, SP, and HV and the computational time are shown in Tables 3–10. Max, Min, Mean, and SD represent the maximum value, minimum value, average value, and standard deviation of the experimental results, respectively. The best and second best means among the algorithms for each WFG function are shown in bold and italics, respectively.

Table 3: Statistical results of the Δp indicator obtained by different algorithms for the WFG problems with 2 objectives.
Table 4: Statistical results of the SP indicator obtained by different algorithms for the WFG problems with 2 objectives.
Table 5: Statistical results of the HV indicator obtained by different algorithms for the WFG problems with 2 objectives.
Table 6: Statistical results of the Times indicator obtained by different algorithms for the WFG problems with 2 objectives.
Table 7: Statistical results of the Δp indicator obtained by different algorithms for the WFG problems with 3 objectives.
Table 8: Statistical results of the SP indicator obtained by different algorithms for the WFG problems with 3 objectives.
Table 9: Statistical results of the HV indicator obtained by different algorithms for the WFG problems with 3 objectives.
Table 10: Statistical results of the Times indicator obtained by different algorithms for the WFG problems with 3 objectives.

The statistical results of the Δp indicator for the WFG problems with 2 objectives are shown in Table 3. A lower value means better computed fronts; thus, we can see that the best or second best indicator values are distributed among four algorithms: NSGA-II, NSGA-III, SMPSO, and RMOABC. NSGA-II, NSGA-III, and SMPSO computed the best or second best fronts for this indicator in two or three of the evaluated problems, while RMOABC obtained the best or second best values for all the problems except WFG1 and WFG9. For the SP indicator on the WFG problems with 2 objectives in Table 4, RMOABC obtains the best or second best values for all the problems but WFG3 and WFG5; GDE3 got the best values for WFG5 and WFG7, and ε-MOEA got the best value for WFG3. For the HV indicator in Table 5, a larger value means better quality of the solutions; RMOABC obtains the best or second best values for all the problems but WFG6, and ε-MOEA got the best values for WFG5 and WFG6. For the execution time indicator in Table 6, RMOABC exceeds the other algorithms on most of the problems except WFG1 and WFG2.

The Δp indicator's statistical results for the WFG problems with 3 objectives are shown in Table 7. It can be seen that the best or second best indicator values are distributed among four algorithms: NSGA-II, NSGA-III, ε-MOEA, and RMOABC. RMOABC computed the best or second best fronts for this indicator in most of the evaluated problems, except WFG1 and WFG6. For the SP indicator on the WFG problems with 3 objectives in Table 8, ε-MOEA also performs very well, obtaining most of the best values across the problems; RMOABC obtains most of the second best values for all the problems but WFG2. For the HV indicator in Table 9, RMOABC obtains the best or second best values for all the problems but WFG4 and WFG5; NSGA-II got the best results for WFG3 and WFG4, and ε-MOEA got the best result for WFG5. For the execution time indicator in Table 10, RMOABC exceeds the other algorithms on most of the problems.

As shown in these tables, most of the algorithms can obtain a good solution set for the WFG problems, and the RMOABC and ε-MOEA algorithms perform best on the Δp, SP, and HV performance indicators, with a clear advantage over the other five algorithms on most of the test instances with two or three objectives. This demonstrates that the two algorithms are better than the others in terms of these indicators of convergence and distribution. Moreover, RMOABC also obtains most of the best and second best execution times for the WFG problems, whereas ε-MOEA needs several, or even ten, times longer to achieve similar results on the 3-objective problems. This demonstrates that RMOABC can balance the various conflicting objectives and obtain better performance than the other algorithms.

6.3. Experimental Results for Water Resources Plan (WRP) Problem
6.3.1. Pareto Optimal Space (Range of Objective Values)

The value ranges (i.e., the maximum and minimum) of the five objective functions for the water resources planning problem obtained by the seven algorithms are shown in Table 11. As can be seen from the table, there are certain differences between the objective function results of the seven algorithms. This is because each algorithm has its own way of searching the solution space, which may lead to different solutions achieving similar assessment results.

Table 11: Values ranges of five objective functions for water resources planning problem obtained by the seven algorithms.

For easy comparison and calculation, it is necessary to normalize the results of the objective functions. The specific normalization formula is

f′ = (f − fmin) / (fmax − fmin),

where f and f′ are the original value of the objective function and the new value after normalization, respectively, and fmin and fmax are the minimum and maximum values of the interval of the objective function, obtained from the Pareto front file in the jMetal multiobjective optimization framework [15]. Ideally, the value ranges of the objective functions are normalized to [0, 1].
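The min–max normalization above can be sketched directly, with fmin and fmax taken from the known interval of each objective:

```java
// Min-max normalization of an objective value into [0, 1]:
// f' = (f - fmin) / (fmax - fmin)
public class Normalize {
    static double normalize(double f, double fmin, double fmax) {
        return (f - fmin) / (fmax - fmin);
    }

    public static void main(String[] args) {
        // hypothetical objective interval [50, 100]
        System.out.println(normalize(75.0, 50.0, 100.0)); // midpoint -> 0.5
        System.out.println(normalize(50.0, 50.0, 100.0)); // minimum  -> 0.0
    }
}
```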

The high-low-close chart is an efficient graphical tool for the visual presentation of various forms of data ranges, such as a range of measured values (min–max), a 95% confidence interval (low limit–high limit), or low–average–high values [70]. Thus, the maximum and minimum values of the normalized objectives of Table 11 are presented in Figure 6. The lines indicate the coverage of the objective values for the nondominated solution sets obtained by the seven multiobjective algorithms.

Figure 6: Normalized ranges of the five objective functions’ values of the water problem obtained by the seven algorithms.

It can be concluded from the figure that the seven algorithms perform well on the objective functions f2, f3, and f5, for which most of the value space is covered. For the objective function f1, however, none of the seven algorithms obtains good coverage; they cover only the range from 0.8 to 1.0 of the value space. For the objective function f4, NSGA-II, NSGA-III, ε-MOEA, and SMPSO obtained much larger values than the Pareto front, whereas MOEA/D, GDE3, and especially RMOABC achieved better coverage.

6.3.2. Optimization Process

Because the WRP problem is a many-objective (5-objective) problem and is very complicated, the values of the distribution indicator (spacing) are very high and not very informative. Thus, to compare the performance of the seven multiobjective algorithms on this problem, we took the performance indicator HV as the quality measure in this subsection. The variation of this indicator over the iteration number of the seven algorithms is shown in Figure 7. The X axis is the iteration number of the algorithms, and the Y axis represents the HV values of the solution sets of the different algorithms.

Figure 7: The performance indicator of HV vs. the iteration number of the seven multiobjective algorithms for WRP problem.

It can be seen from the figure that, for the HV indicator, ε-MOEA and RMOABC perform better than the other algorithms, especially RMOABC, which exceeds the others at about 900 iterations. The other algorithms perform poorly, and SMPSO and NSGA-II have fluctuating lines with smaller values.

6.3.3. Multiobjective Performance Comparison

Through the thirty independent runs of the seven algorithms, the performance metric HV and the execution time for the WRP problem are shown in Table 12. The best and second best means among the algorithms are shown in bold and italics, respectively.

Table 12: Performance comparison of the seven algorithms for water resource plan (WRP) problem.

For the HV indicator of the WRP problem, RMOABC obtained the best value, and ε-MOEA obtained the second best value. This demonstrates that RMOABC exceeds the other algorithms in convergence and solution quality for the WRP problem. For the execution time indicator, MOEA/D has the best value and NSGA-II the second best; RMOABC and ε-MOEA take more time to obtain the final solution set.

7. Conclusions

In this paper, we have presented a unified, flexible, configurable, and user-friendly MOABC algorithm framework, UOF-MOABC, which combines a multiobjective ABC algorithm named RMOABC with the multiobjective evolutionary algorithm (MOEA) framework. In particular, the core classes of the MOEA framework were described, and the implementation of the new RMOABC algorithm within the framework was illustrated. The Walking Fish Group (WFG) test suite and a many-objective water resource planning (WRP) problem were then used to compare RMOABC with six state-of-the-art multiobjective algorithms (NSGA-II, NSGA-III, ε-MOEA, SMPSO, MOEA/D, and GDE3) for verification and application.

The obtained experimental results have been statistically analyzed on the basis of three quality indicators (Δp, SP, and HV) together with the computation time under the framework. The experiments show that, for the problems, parameter settings, and quality indicators used, the RMOABC method generally outperforms or equals the other six MOO algorithms in our study.

As for future work, we plan to integrate more realistic problems to provide more reliable nondominated solution sets for supporting decision making, as well as considering the parallelization of multiobjective ABC algorithms to improve the efficiency and accuracy for solving complex MOPs.

Data Availability

The Pareto fronts data used to support the findings of this study are available from the project of jMetal 5.0 which can be downloaded from the web site: https://jmetal.github.io/jMetal/.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the National Nature Science Foundation of China (Grant nos. 61462058 and 61862038).

References

  1. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Chichester, UK, 2001.
  2. C. Hillermeier, Nonlinear Multiobjective Optimization: A Generalized Homotopy Approach, vol. 135, Springer Science and Business Media, Berlin, Germany, 2001.
  3. C. Blum and A. Roli, “Metaheuristics in combinatorial optimization: overview and conceptual comparison,” ACM Computing Surveys, vol. 35, no. 3, pp. 268–308, 2003. View at Publisher · View at Google Scholar · View at Scopus
  4. C. Coello, G. Lamont, and D. van Veldhuizen, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Inc., New York, NY, USA, 2nd edition, 2007.
  5. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multi-objective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002. View at Publisher · View at Google Scholar · View at Scopus
  6. K. Deb, M. Mohan, and S. Mishra, “A fast multi-objective evolutionary algorithm for finding well-spread pareto-optimal solutions,” KanGAL Report No 2003002, 2003.
  7. D. Hadka and P. R. Borg, “An auto-adaptive many-objective evolutionary computing framework,” Evolutionary Computation, vol. 21, no. 2, pp. 231–259, 2013. View at Publisher · View at Google Scholar · View at Scopus
  8. R. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence, Morgan Kaufmann, San Fransisco, CA, USA, 2001.
  9. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, November-December 1995.
  10. M. Dorigo and M. Birattari, Ant Colony Optimization, Encyclopedia of Machine Learning, Springer, Boston, MA, USA, 2011.
  11. M. Eusuff, K. Lansey, and F. Pasha, “Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization,” Engineering Optimization, vol. 38, no. 2, pp. 129–154, 2006. View at Publisher · View at Google Scholar · View at Scopus
  12. D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Erciyes University, Kayseri, Turkey, 2005, Technical Report-TR06. View at Google Scholar
  13. C. S. Zhang, Research and Application of Multi-Objective Artificial Bee Colony Algorithm and Genetic Algorithm, vol. 7, Northeastern University Press, Shenyang, China, 2013.
  14. B. Christian, P. Jakob, R. R. Günther, and R. Andrea, “Hybrid metaheuristics in combinatorial optimization: a survey,” Applied Soft Computing, vol. 11, no. 6, pp. 4135–4151, 2011. View at Publisher · View at Google Scholar · View at Scopus
  15. J. Durillo and A. Nebro, “jMetal: a java framework for multi-objective optimization,” Advances in Engineering Software, vol. 42, no. 10, pp. 760–771, 2011. View at Publisher · View at Google Scholar · View at Scopus
  16. A. Liefooghe, M. Basseur, L. Jourdan, and E.-G. Talbi, “ParadisEO-MOEO: a framework for evolutionary multi-objective optimization,” in Fourth International Conference on Evolutionary Multi-criterion Optimization (EMO 2007), S. Obayashi, K. Deb, C. Poloni, T. Hiroyasu, and T. Murata, Eds., vol. 4403 of LNCS, pp. 386–400, Springer, Matsushima, Japan, March 2007.
  17. S. Bleuler, M. Laumanns, L. Thiele, and E. Zitzler, “PISA—a platform and programming language independent interface for search algorithms,” in Evolutionary Multi-criterion Optimization (EMO 2003), C. M. Fonseca, P. J. Fleming, E. Zitzler, K. Deb, and L. Thiele, Eds., Lecture Notes in Computer Science, pp. 494–508, Springer, Faro, Portugal, April 2003.
  18. D. Hadka, “MOEA framework—a free and open source java framework for multiobjective optimization, version 2.12,” 2017, http://www.moeaframework.org/. View at Google Scholar
  19. J. Y. Huo and L. Q. Liu, “An improved multi-objective artificial bee colony optimization algorithm with regulation operators,” Information, vol. 8, no. 1, p. 18, 2017. View at Publisher · View at Google Scholar · View at Scopus
  20. C. Y. Lin and J. L. Lin, Method and Theory of Multi-objective Optimization, Jilin Education Press, Changchun, China, 1992.
  21. J. H. Zheng, Multi-Objective Evolutionary Algorithm and its Applications, Science Press, Beijing, China, 2007.
  22. J. D. Schaffer, “Multiple objective optimization with vector evaluated genetic algorithms,” in Proceedings of 1st international Conference on Genetic Algorithms, pp. 93–100, L. Erlbaum Associates Inc., Pittsburgh, PA, USA, July 1985.
  23. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison‐Wesley Professional location, Boston, MA, USA, 1989.
  24. X. X. Cui, Multi-Objective Evolutionary Algorithm and Its Applications, National Defense Industry Press, Beijing, China, 2008.
  25. H. Li and Q. Zhang, “Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 284–302, 2009.
  26. C. R. Raquel and P. C. Naval, “An effective use of crowding distance in multiobjective particle swarm optimization,” in Proceedings of Genetic and Evolutionary Computation Conference (GECCO-2005), pp. 257–264, Washington, DC, USA, June 2005.
  27. C. S. Tsou, S. C. Chang, and P. W. Lai, “Using crowding distance to improve multi-objective PSO with local search,” in Swarm Intelligence: Focus on Ant and Particle Swarm Optimization, F. T. S. Chan and M. K. Tiwari, Eds., Itech Education and Publishing, Vienna, Austria, 2007.
  28. S. V. Kamble, S. U. Mane, and A. J. Umbarkar, “Hybrid multi-objective particle swarm optimization for flexible job shop scheduling problem,” International Journal of Intelligent Systems Technologies and Applications, vol. 7, no. 4, pp. 54–61, 2015.
  29. W. F. Leong and G. G. Yen, “PSO-based multiobjective optimization with dynamic population size and adaptive local archives,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 38, no. 5, 2008.
  30. J. Y. Huo, Y. N. Zhang, and H. X. Zhao, “An improved artificial bee colony algorithm for numerical functions,” International Journal of Reasoning-based Intelligent Systems, vol. 7, no. 3-4, pp. 200–208, 2015.
  31. R. Hedayatzadeh, B. Hasanizadeh, and R. Akbari, “A multi-objective artificial bee colony for optimization multi-objective problems,” in Proceedings of 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), pp. 277–281, Chengdu, China, August 2010.
  32. R. Akbari, R. Hedayatzadeh, K. Ziarati, and B. Hassanizadeh, “A multi-objective artificial bee colony algorithm,” Swarm and Evolutionary Computation, vol. 2, no. 1, pp. 39–52, 2012.
  33. W. P. Zou, Y. L. Zhu, H. N. Chen, and B. W. Zhang, “Solving multi-objective optimization problems using artificial bee colony algorithm,” Discrete Dynamics in Nature and Society, vol. 2011, Article ID 569784, 37 pages, 2011.
  34. R. Akbari and K. Ziarati, “Multi-objective bee swarm optimization,” International Journal of Innovative Computing Information and Control, vol. 8, no. 1B, pp. 715–726, 2012.
  35. H. Zhang, Y. Zhu, W. Zou, and X. Yan, “A hybrid multi-objective artificial bee colony algorithm for burdening optimization of copper strip production,” Applied Mathematical Modelling, vol. 36, no. 6, pp. 2578–2591, 2012.
  36. J. P. Luo, Q. Liu, Y. Yang, X. Li, M. R. Chen, and W. M. Cao, “An artificial bee colony algorithm for multi-objective optimization,” Applied Soft Computing, vol. 50, pp. 235–251, 2017.
  37. A. Kishor, P. K. Singh, and J. Prakash, “NSABC: non-dominated sorting based multi-objective artificial bee colony algorithm and its application in data clustering,” Neurocomputing, vol. 216, pp. 514–533, 2016.
  38. S. K. Nseef, S. Abdullah, A. Turky, and G. Kendall, “An adaptive multi-population artificial bee colony algorithm for dynamic optimisation problems,” Knowledge-Based Systems, vol. 104, pp. 14–23, 2016.
  39. M. Choobineh and S. Mohagheghi, “A multi-objective optimization framework for energy and asset management in an industrial Microgrid,” Journal of Cleaner Production, vol. 139, pp. 1326–1338, 2016.
  40. K. Khalili-Damghani, M. Tavana, and S. Sadi-Nezhad, “An integrated multi-objective framework for solving multi-period project selection problems,” Applied Mathematics and Computation, vol. 219, no. 6, pp. 3122–3138, 2012.
  41. K. Vergidis, D. Saxena, and A. Tiwari, “An evolutionary multi-objective framework for business process optimisation,” Applied Soft Computing, vol. 12, no. 8, pp. 2638–2653, 2012.
  42. V. M. Charitopoulos and V. Dua, “A unified framework for model-based multi-objective linear process and energy optimisation under uncertainty,” Applied Energy, vol. 186, pp. 539–548, 2017.
  43. S. C. Tsai and S. T. Chen, “A simulation-based multi-objective optimization framework: a case study on inventory management,” Omega, vol. 70, pp. 148–159, 2017.
  44. M. G. Avci and H. Selim, “A multi-objective, simulation-based optimization framework for supply chains with premium freights,” Expert Systems with Applications, vol. 67, pp. 95–106, 2017.
  45. P. Golding, S. Kapadia, and S. Naylor, “Framework for minimising the impact of regional shocks on global food security using multi-objective ant colony optimisation,” Environmental Modelling and Software, vol. 95, pp. 303–319, 2017.
  46. C. P. Newland, H. R. Maier, A. C. Zecchin, J. P. Newman, and H. van Delden, “Multi-objective optimisation framework for calibration of Cellular Automata land-use models,” Environmental Modelling and Software, vol. 100, pp. 175–200, 2018.
  47. J. J. Durillo and A. J. Nebro, jMetal: A Java Framework for Multi-Objective Optimization, Elsevier Science Ltd., New York, NY, USA, 2011.
  48. C. Barba-González, J. García-Nieto, A. J. Nebro et al., “jMetalSP: a framework for dynamic multi-objective big data optimization,” Applied Soft Computing, vol. 69, pp. 737–748, 2017.
  49. V. Pareto, The Rise and Fall of the Elites, Bedminster Press, Totowa, NJ, USA, 1968.
  50. Y. Huo, Y. Zhuang, J. Gu, and S. Ni, “Elite-guided multi-objective artificial bee colony algorithm,” Applied Soft Computing, vol. 32, pp. 199–210, 2015.
  51. G. P. Zhu and S. Kwong, “Gbest-guided artificial bee colony algorithm for numerical function optimization,” Applied Mathematics and Computation, vol. 217, no. 7, pp. 3166–3173, 2010.
  52. J. D. Knowles and D. W. Corne, “Approximating the non-dominated front using the Pareto archived evolution strategy,” Evolutionary Computation, vol. 8, no. 2, pp. 149–172, 2000.
  53. Q. D. Wu, L. Wang, and Y. W. Zhang, “Selection of meta-heuristic algorithms based on standardization evaluation system,” in Proceedings of Plenary Talk in the Sixth International Conference on Swarm Intelligence and the Second BRICS Congress on Computational Intelligence (ICSI-CCI’2015), Beijing, China, June 2015.
  54. J. Guo, J. Z. Zhou, Q. Zou, L. X. Song, and Y. C. Zhang, “Study on multi-objective calibration of hydrological model and effect of objective functions combination on optimization results,” Journal of Sichuan University, vol. 43, no. 6, pp. 58–63, 2011.
  55. D. A. Van Veldhuizen and G. B. Lamont, “Multiobjective evolutionary algorithm research: a history and analysis,” Evolutionary Computation, vol. 8, no. 2, pp. 125–147, 1998.
  56. K. Deb, M. Mohan, and S. Mishra, “Evaluating the epsilon-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions,” Evolutionary Computation, vol. 13, no. 4, pp. 501–525, 2005.
  57. G. Rudolph, O. Schütze, C. Grimme, C. Domínguez-Medina, and H. Trautmann, “Optimal averaged Hausdorff archives for bi-objective problems: theoretical and numerical results,” Computational Optimization and Applications, vol. 64, no. 2, pp. 589–618, 2016.
  58. S. A. R. Mohammadi, M. R. Feizi Derakhshi, and R. Akbari, “An adaptive multi-objective artificial bee colony with crowding distance mechanism,” Iranian Journal of Science and Technology, Transactions of Electrical Engineering, vol. 37, no. E1, pp. 79–92, 2013.
  59. E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
  60. K. Musselman and J. Talavage, “A trade-off cut approach to multiple objective optimization,” Operations Research, vol. 28, no. 6, pp. 1424–1435, 1980.
  61. F. Y. Cheng and X. S. Li, “Generalized method for multiobjective engineering optimization,” Engineering Optimisation, vol. 31, no. 5, pp. 641–661, 1999.
  62. S. Huband, P. Hingston, and L. Barone, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 477–506, 2006.
  63. M. Li, S. Yang, and X. Liu, “Bi-goal evolution for many-objective optimization problems,” Artificial Intelligence, vol. 228, pp. 45–65, 2015.
  64. K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: solving problems with box constraints,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
  65. K. Deb, “A fast multi-objective evolutionary algorithm for finding well-spread pareto-optimal solutions,” KanGAL Report No 2003002, 2003.
  66. A. J. Nebro, J. J. Durillo, J. Garcia-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, “SMPSO: a new PSO-based metaheuristic for multi-objective optimization,” in Proceedings of IEEE Symposium on Computational Intelligence in Multicriteria Decision-Making, pp. 66–73, Nashville, TN, USA, March-April 2009.
  67. H. Li and Q. Zhang, “Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 284–302, 2009.
  68. S. Kukkonen and J. Lampinen, “GDE3: the third evolution step of generalized differential evolution,” in Proceedings of IEEE Congress on Evolutionary Computation, vol. 1, pp. 443–450, September 2005.
  69. O. Schutze, X. Esquivel, A. Lara, and C. A. C. Coello, “Using the averaged Hausdorff distance as a performance measure in evolutionary multiobjective optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 4, pp. 504–522, 2012.
  70. J. Y. Huo and L. Q. Liu, “Application research of multi-objective Artificial Bee Colony optimization algorithm for parameters calibration of hydrological model,” Neural Computing and Applications, pp. 1–18, 2018.