Research Article  Open Access
Wenping Zou, Yunlong Zhu, Hanning Chen, Beiwei Zhang, "Solving Multiobjective Optimization Problems Using Artificial Bee Colony Algorithm", Discrete Dynamics in Nature and Society, vol. 2011, Article ID 569784, 37 pages, 2011. https://doi.org/10.1155/2011/569784
Solving Multiobjective Optimization Problems Using Artificial Bee Colony Algorithm
Abstract
Multiobjective optimization has long been a difficult problem and a focus of research in science and engineering. This paper presents a novel algorithm based on the artificial bee colony (ABC) algorithm to deal with multiobjective optimization problems. ABC is one of the most recently introduced algorithms, based on the intelligent foraging behavior of a honey bee swarm. It uses fewer control parameters and can be efficiently used for solving multimodal and multidimensional optimization problems. Our algorithm uses the concept of Pareto dominance to determine the flight direction of a bee, and it maintains the nondominated solution vectors found so far in an external archive. The proposed algorithm is validated on standard test problems, and simulation results show that the proposed approach is highly competitive and can be considered a viable alternative for solving multiobjective optimization problems.
1. Introduction
In the real world, many optimization problems require the simultaneous optimization of two or more objectives, and these objectives are often in conflict with each other. While in single-objective optimization the optimal solution is usually clearly defined, this does not hold for multiobjective optimization problems. Instead of a single optimum, there is a set of alternative trade-offs, generally known as Pareto optimal solutions. These solutions are optimal in the wider sense that no other solution in the search space is superior to them when all objectives are considered.
In the 1950s, in the area of operational research, a variety of methods were developed for the solution of multiobjective optimization problems (MOPs). Some of the most representative classical methods are linear programming, the weighted sum method, and the goal programming method [1]. Over the past two decades, a number of multiobjective evolutionary algorithms (MOEAs) have been proposed [2, 3]. These include the nondominated sorting genetic algorithm II (NSGA-II) [4], the strength Pareto evolutionary algorithm 2 (SPEA2) [5], and the multiobjective particle swarm optimization (MOPSO) proposed by Coello and Lechuga [6]. The success of MOEAs is due to their ability to find a set of representative Pareto optimal solutions in a single run.
The artificial bee colony (ABC) algorithm is a new swarm intelligence algorithm that was first introduced by Karaboga at Erciyes University, Turkey, in 2005 [7]; its performance was analyzed in 2007 [8]. The ABC algorithm imitates the behavior of real bees in finding food sources and sharing the information with other bees. Since the ABC algorithm is simple in concept, easy to implement, and has fewer control parameters, it has been widely used in many fields. Because of these advantages, we present a proposal, called "multiobjective artificial bee colony" (MOABC), which enables the ABC algorithm to deal with multiobjective optimization problems. We aim to present an efficient and simple algorithm for multiobjective optimization, while filling the research gap of the ABC algorithm in the field of multiobjective problems. Our MOABC algorithm is based on a nondominated sorting strategy. We use the concept of Pareto dominance to determine which solution vector is better, and we use an external archive to maintain nondominated solution vectors. We also use a comprehensive learning strategy, inspired by the comprehensive learning particle swarm optimizer (CLPSO) [9], to ensure the diversity of the population. In order to evaluate the performance of MOABC, we compared it with NSGA-II and MOCLPSO [10] on a set of well-known benchmark functions. Seven of these test functions (SCH, FON, ZDT1 to ZDT4, and ZDT6) have two objectives, while the other four (DTLZ1 to DTLZ3 and DTLZ6) have three objectives. Meanwhile, a version of MOABC with the nondominated sorting strategy only is also compared, to illustrate the effect of the comprehensive learning strategy on population diversity. This version is called the nondominated sorting artificial bee colony (NSABC). In the simulation results, the MOABC algorithm shows remarkable performance improvement over the other algorithms on all benchmark functions.
The remainder of the paper is organized as follows. Section 2 gives relevant MOP concepts. Section 3 introduces the ABC algorithm. Section 4 gives the details of the MOABC algorithm. Section 5 presents a comparative study of the proposed MOABC with NSABC, NSGA-II, and MOCLPSO on a number of benchmark problems and draws conclusions from the study. Section 6 summarizes our discussion.
2. Related Concepts
Definition 1 (multiobjective optimization problem). A multiobjective optimization problem (MOP) can be stated as follows:
$$\min F(x) = \big(f_1(x), f_2(x), \ldots, f_m(x)\big) \quad \text{subject to } x \in \Omega, \tag{2.1}$$
where $x = (x_1, \ldots, x_n)$ is called the decision (variable) vector, $\Omega \subseteq \mathbb{R}^n$ is the decision (variable) space, $\mathbb{R}^m$ is the objective space, and $F: \Omega \to \mathbb{R}^m$ consists of $m$ real-valued objective functions. $F(x)$ is the objective vector. We call problem (2.1) a MOP.
Definition 2 (Pareto optimal). For (2.1), let $u = (u_1, \ldots, u_m)$, $v = (v_1, \ldots, v_m) \in \mathbb{R}^m$ be two objective vectors; $u$ is said to dominate $v$ if $u_i \le v_i$ for all $i = 1, \ldots, m$ and $u \ne v$. A point $x^* \in \Omega$ is called (globally) Pareto optimal if there is no $x \in \Omega$ such that $F(x)$ dominates $F(x^*)$. Pareto optimal solutions are also called efficient, nondominated, and noninferior solutions. The set of all the Pareto optimal solutions, denoted by $PS$, is called the Pareto set. The set of all the Pareto objective vectors, $PF = \{F(x) \mid x \in PS\}$, is called the Pareto front [11, 12]. An illustrative example can be seen in Figure 1.
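Definition 2 translates directly into a small helper. The sketch below (the function names are ours, not from the paper) checks Pareto dominance between two objective vectors and filters a set down to its nondominated members, assuming every objective is minimized:

```python
def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v:
    u is no worse in every objective and strictly better in at
    least one (minimization is assumed)."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def nondominated(points):
    """Filter a list of objective vectors down to its nondominated subset."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, `dominates((1, 2), (2, 3))` is `True`, while two mutually incomparable vectors such as `(1, 3)` and `(3, 1)` do not dominate each other.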
3. The Original ABC Algorithm
The artificial bee colony algorithm is a new population-based metaheuristic approach, initially proposed by Karaboga [7] and Karaboga and Basturk [8] and further developed by Karaboga and Basturk [13] and Karaboga and Akay [14]. It has been used in various complex problems. The algorithm simulates the intelligent foraging behavior of honey bee swarms and is very simple and robust. In the ABC algorithm, the colony of artificial bees is classified into three categories: employed bees, onlookers, and scouts. Employed bees are associated with a particular food source that they are currently exploiting or are "employed" at. They carry information about this particular source and share it with onlookers. Onlooker bees wait in the dance area of the hive for the information shared by the employed bees about their food sources and then decide which food source to choose. A bee carrying out a random search is called a scout. In the ABC algorithm, the first half of the colony consists of employed artificial bees, and the second half consists of onlookers. For every food source, there is only one employed bee; in other words, the number of employed bees is equal to the number of food sources around the hive. The employed bee whose food source has been exhausted becomes a scout. The position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. Onlookers are placed on the food sources by a probability-based selection process: as the nectar amount of a food source increases, so does the probability with which it is preferred by onlookers [7, 8]. The main steps of the algorithm are given in Algorithm 1.

In the initialization phase, the ABC algorithm generates $SN$ randomly distributed initial food source positions, where $SN$ denotes the number of employed bees, which equals the number of onlooker bees. Each solution $x_i$ ($i = 1, 2, \ldots, SN$) is an $n$-dimensional vector, where $n$ is the number of optimization parameters. The nectar amount $fit_i$ of each source is then evaluated; in the ABC algorithm, the nectar amount is derived from the value of the benchmark function.
In the employed bees' phase, each employed bee finds a new food source $v_i$ in the neighborhood of its current source $x_i$. The new food source is calculated using the following expression:
$$v_i^j = x_i^j + \phi_i^j \big(x_i^j - x_k^j\big), \tag{3.1}$$
where $k \in \{1, 2, \ldots, SN\}$, $k \ne i$, and $j \in \{1, 2, \ldots, n\}$ are randomly chosen indexes. $\phi_i^j$ is a random number in $[-1, 1]$; it controls the production of a neighbor food source position around $x_i^j$. The employed bee then compares the new source against the current solution and memorizes the better one by means of a greedy selection mechanism.
In the onlooker bees' phase, each onlooker chooses a food source with a probability related to the nectar amount (fitness) of the food source shared by the employed bees. The probability is calculated using the following expression:
$$p_i = \frac{fit_i}{\sum_{n=1}^{SN} fit_n}. \tag{3.2}$$
In the scout bee phase, if a food source cannot be improved within a predetermined number of cycles, called the "limit", it is removed from the population, and the employed bee of that food source becomes a scout. The scout finds a new random food source position using
$$x_i^j = x_{\min}^j + \mathrm{rand}(0, 1)\big(x_{\max}^j - x_{\min}^j\big), \tag{3.3}$$
where $x_{\min}^j$ and $x_{\max}^j$ are the lower and upper bounds of parameter $j$, respectively.
These steps are repeated through a predetermined number of cycles, called maximum cycle number (MCN), or until a termination criterion is satisfied [7, 8, 15].
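The three phases above can be sketched end to end. The following is a minimal, illustrative single-objective ABC in Python; the function name, the fitness transform $fit = 1/(1 + f)$, and the one-abandonment-per-cycle scout rule are common conventions from the ABC literature rather than details fixed by this paper:

```python
import random

def abc_minimize(f, bounds, sn=20, limit=50, mcn=200, seed=1):
    """Minimal ABC sketch (minimization). f: objective; bounds: (low, high)
    per dimension; sn: food sources (= employed bees = onlookers);
    limit: abandonment threshold; mcn: maximum cycle number."""
    rng = random.Random(seed)
    n = len(bounds)
    rand_source = lambda: [lo + rng.random() * (hi - lo) for lo, hi in bounds]
    def fitness(x):                    # common ABC fitness transform
        v = f(x)
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)
    foods = [rand_source() for _ in range(sn)]
    fits = [fitness(x) for x in foods]
    trials = [0] * sn

    def neighbour(i):
        # v_ij = x_ij + phi_ij * (x_ij - x_kj), one random dimension j
        k = rng.choice([s for s in range(sn) if s != i])
        j = rng.randrange(n)
        v = foods[i][:]
        v[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        lo, hi = bounds[j]
        v[j] = min(max(v[j], lo), hi)
        return v

    def greedy(i, v):                  # keep the better of x_i and v
        fv = fitness(v)
        if fv > fits[i]:
            foods[i], fits[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(mcn):
        for i in range(sn):            # employed bees' phase
            greedy(i, neighbour(i))
        total = sum(fits)
        for _ in range(sn):            # onlookers: roulette wheel on p_i
            r, acc = rng.random() * total, 0.0
            for i, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    break
            greedy(i, neighbour(i))
        worst = max(range(sn), key=lambda i: trials[i])   # scout phase
        if trials[worst] > limit:
            foods[worst] = rand_source()
            fits[worst] = fitness(foods[worst])
            trials[worst] = 0
    best = max(range(sn), key=lambda i: fits[i])
    return foods[best], f(foods[best])
```

Running it on a sphere function, e.g. `abc_minimize(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 3)`, drives the objective close to zero within the default budget.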
4. The Multiobjective ABC Algorithm
4.1. External Archive
As opposed to single-objective optimization, MOEAs usually maintain a set of nondominated solutions. In multiobjective optimization, in the absence of preference information, none of the solutions can be said to be better than the others. Therefore, in our algorithm, we use an external archive to keep a historical record of the nondominated vectors found along the search process. This technique is used in many MOEAs [5, 16].
In the initialization phase, the external archive is initialized. After initializing the solutions and calculating the objective values of every solution, the solutions are sorted based on nondomination: each solution is compared with every other solution in the population to find the nondominated ones. All nondominated solutions are then put into the external archive. The external archive is updated at each generation.
4.2. Diversity
The success of MOEAs is due to their ability to find a set of representative Pareto optimal solutions in a single run. In order to approximate the Pareto optimal set in a single optimization run, evolutionary algorithms have to perform a multimodal search in which multiple, widely different solutions are to be found. Therefore, maintaining a diverse population is crucial for the efficacy of an MOEA. The ABC algorithm has been demonstrated to possess superior performance in the single-objective domain. However, NSABC, the first version of our multiobjective ABC algorithm, cannot achieve satisfactory results in terms of the diversity metric. Therefore, to apply the ABC algorithm to multiobjective problems, we use the comprehensive learning strategy, inspired by the comprehensive learning particle swarm optimizer (CLPSO) [9], to ensure the diversity of the population.
In our algorithm, all solutions in the external archive are regarded as food source positions, and all bees are regarded as onlooker bees; there are no employed bees or scouts. In each generation, each onlooker randomly chooses a food source from the external archive, goes to the food source area, and then chooses a new food source. In the original ABC algorithm, each bee finds a new food source by means of information from the neighborhood of its current source. In our proposed MOABC algorithm, however, we use the comprehensive learning strategy: as in CLPSO, $m$ dimensions of each individual are randomly chosen to learn from a random nondominated solution drawn from the external archive, and each of the other dimensions learns from other nondominated solutions. In our proposed NSABC algorithm, just one dimension of each individual is randomly chosen to learn from a random nondominated solution.
4.3. Update External Archive
As the evolution progresses, more and more new solutions enter the external archive. Since each new solution is compared with every nondominated solution in the external archive to decide whether it should stay in the archive, and the computational time is directly proportional to the number of comparisons, the size of the external archive must be limited.
In our algorithm, each individual finds a new solution in each generation. If the new solution dominates the original individual, the new solution is allowed to enter the external archive. Conversely, if the new solution is dominated by the original individual, it is denied access to the external archive. If the new solution and the original individual do not dominate each other, we randomly choose one of them to enter the external archive. After each generation, we update the external archive: we select the nondominated solutions from the archive and keep them in it. If the number of nondominated solutions exceeds the allocated archive size, the crowding distance [4] is applied to remove the crowded members and to maintain a uniform distribution among the archive members.
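The entry rule just described can be sketched as follows. The helper names are ours; only the decision logic from this section is implemented (a dominating new solution enters, a dominated one is denied, and a mutually nondominated pair is broken at random), together with the nondomination filter applied after each generation:

```python
import random

def dominates(u, v):
    """Minimization: u is no worse everywhere and strictly better somewhere."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def candidate_for_archive(old_f, new_f, rng=random):
    """Decide which of the original individual (old_f) and its new
    solution (new_f) is allowed to enter the external archive.
    Returns 'new' or 'old'; nondominated ties are broken at random."""
    if dominates(new_f, old_f):
        return "new"
    if dominates(old_f, new_f):
        return "old"
    return rng.choice(["new", "old"])

def nondominated_filter(archive):
    """Keep only the mutually nondominated members of the archive."""
    return [p for p in archive
            if not any(dominates(q, p) for q in archive if q is not p)]
```

If the filtered archive still exceeds its allocated size, the crowding-distance procedure of the next subsection is used to drop the most crowded members.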
4.4. Crowding Distance
Crowding distance is used to estimate the density of solutions in the external archive. Usually, the perimeter of the cuboid formed by using the nearest neighbors as vertices is called the crowding distance. In Figure 2, the crowding distance of the $i$th solution is the average side length of the cuboid (shown with a dashed box) [4].
The crowding distance computation requires sorting the population in the external archive according to each objective function value in ascending order of magnitude. Thereafter, for each objective function, the boundary solutions (those with the smallest and largest function values) are assigned an infinite distance value. All other intermediate solutions are assigned a distance value equal to the absolute normalized difference in the function values of the two adjacent solutions. This calculation is repeated for the other objective functions. The overall crowding distance value is the sum of the individual distance values corresponding to each objective. Each objective function is normalized before calculating the crowding distance [4].
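The procedure above translates directly into code. A sketch, assuming minimization and per-objective normalization by the range of the front:

```python
def crowding_distance(front):
    """Crowding distance of each objective vector in `front` (a list of
    tuples), following the NSGA-II procedure: boundary solutions get
    infinity; interior ones sum, over the objectives, the normalized
    gap between their two sorted neighbours."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    for k in range(len(front[0])):                 # one pass per objective
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue                               # degenerate objective
        for pos in range(1, n - 1):
            i = order[pos]
            if dist[i] != float("inf"):
                gap = front[order[pos + 1]][k] - front[order[pos - 1]][k]
                dist[i] += gap / (hi - lo)
    return dist
```

Members with the smallest crowding distance are the most crowded and are the first candidates for removal when the archive overflows.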
4.5. The Multiobjective ABC Algorithm
The ABC algorithm is very simple when compared to the existing swarmbased algorithms. Therefore, we develop it to deal with multiobjective optimization problems. The main steps of the MOABC algorithm are shown in Algorithm 2.

In the initialization phase, we evaluate the fitness of the initial food source positions and sort them based on nondomination. Then we select nondominated solutions and store them in the external archive EA. This is the initialization of the external archive.
In the onlooker bees' phase, we use the comprehensive learning strategy to produce new solutions $v_i$. Each bee $i$ randomly chooses $m$ dimensions and learns from a nondominated solution $x_k$ which is randomly selected from the EA; the other dimensions learn from other nondominated solutions. The new solution is produced by using the following expression:
$$v_i^j = x_k^j + \phi_i^j \big(x_k^j - x_i^j\big), \quad j \in F_i, \tag{4.1}$$
where $k \in \{1, 2, \ldots, p\}$ is a randomly chosen index and $p$ is the number of solutions in the EA. $F_i$ consists of the first $m$ integers of a random permutation of the integers $1 : n$ and defines which of $v_i$'s dimensions should learn from $x_k$. As opposed to $\phi$ in the original ABC algorithm, we produce $m$ random numbers $\phi_i^j$, all in $[-1, 1]$, one for each of the chosen dimensions. This modification makes the potential search space centered around $x_k$. The potential search space of MOABC on one dimension is plotted as a line in Figure 3. Each remaining dimension learns from another nondominated solution by using
$$v_i^j = x_r^j + \phi_i^j \big(x_r^j - x_i^j\big), \quad j \notin F_i, \tag{4.2}$$
where $r \in \{1, 2, \ldots, p\}$ and $r \ne k$. For the NSABC algorithm, each bee randomly chooses one dimension and learns from a nondominated solution randomly selected from the EA; the new solution is produced by just using expression (4.2). After producing a new solution, we calculate its fitness and apply a greedy selection mechanism to decide which solution enters the EA. The selection mechanism is shown in Algorithm 3.

In the nondominated sorting phase, after each generation, the solutions in the EA are sorted based on nondomination, and only the nondominated solutions are kept in the EA. If the number of nondominated solutions exceeds the allocated size of the EA, we use crowding distance to remove the crowded members. The crowding distance algorithm can be seen in [4].
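The comprehensive-learning candidate generation of MOABC can be sketched as below. Since the update equations survive here only as a prose description, the exact $\phi$-weighted form centered on the archive member is our reading of the text, and the function names are ours:

```python
import random

def comprehensive_learning_candidate(x, archive, m, rng=random):
    """Sketch of MOABC solution generation: m randomly chosen dimensions
    of bee x learn from one nondominated archive member x_k; every
    remaining dimension learns from another randomly drawn archive
    member. Assumption: each dimension is updated as
    v_j = ref_j + phi * (ref_j - x_j) with phi uniform in [-1, 1]."""
    n = len(x)
    k = rng.randrange(len(archive))
    learn_from_k = set(rng.sample(range(n), m))   # the set F_i
    v = list(x)
    for j in range(n):
        if j in learn_from_k:
            ref = archive[k][j]                   # learn from x_k
        else:
            ref = archive[rng.randrange(len(archive))][j]
        phi = rng.uniform(-1.0, 1.0)
        v[j] = ref + phi * (ref - x[j])           # search centred on archive member
    return v
```

The candidate is then compared with `x` under Pareto dominance, and the winner (or a random one of the two if neither dominates) is submitted to the external archive.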
5. Experiments
In the following, we first describe the benchmark functions used to compare the performance of MOABC with NSABC, NSGA-II, and MOCLPSO. We then introduce the performance measures and give the parameter settings for every algorithm. Finally, we present simulation results for the benchmark functions.
5.1. Benchmark Functions
In order to illustrate the performance of the proposed MOABC algorithm, we used the well-known test problems SCH, FON, ZDT1 to ZDT4, and ZDT6 as two-objective test functions, and the Deb-Thiele-Laumanns-Zitzler (DTLZ) problem family as three-objective test functions.
SCH
Although simple, the most studied single-variable test problem is Schaffer's two-objective problem [17, 18]:
$$f_1(x) = x^2, \qquad f_2(x) = (x - 2)^2. \tag{5.1}$$
This problem has Pareto optimal solutions $x^* \in [0, 2]$, and the Pareto optimal set is a convex set: $PS = \{x \mid 0 \le x \le 2\}$.
FON
Fonseca and Fleming used a two-objective optimization problem (FON) [18, 19] having $n$ variables:
$$f_1(x) = 1 - \exp\!\Big(-\sum_{i=1}^{n}\Big(x_i - \frac{1}{\sqrt{n}}\Big)^{2}\Big), \qquad f_2(x) = 1 - \exp\!\Big(-\sum_{i=1}^{n}\Big(x_i + \frac{1}{\sqrt{n}}\Big)^{2}\Big). \tag{5.2}$$
The Pareto optimal solutions to this problem are $x_1 = x_2 = \cdots = x_n \in [-1/\sqrt{n}, 1/\sqrt{n}]$. These solutions also satisfy the following relationship between the two function values:
$$f_2 = 1 - \exp\!\Big(-\big(2 - \sqrt{-\ln(1 - f_1)}\big)^{2}\Big) \tag{5.3}$$
in the range $0 \le f_1 \le 1 - e^{-4}$. The interesting aspect is that the search space in the objective space and the Pareto optimal function values do not depend on the dimensionality $n$ of the problem. In our paper, we set $n = 3$, and the optimal solutions are $x_1 = x_2 = x_3 \in [-1/\sqrt{3}, 1/\sqrt{3}]$.
ZDT1
This is a 30-variable ($n = 30$) problem having a convex Pareto optimal set. The functions used are as follows:
$$f_1(x) = x_1, \qquad f_2(x) = g\Big(1 - \sqrt{f_1/g}\Big), \qquad g(x) = 1 + \frac{9}{n - 1}\sum_{i=2}^{n} x_i. \tag{5.4}$$
All variables lie in the range $[0, 1]$. The Pareto optimal region corresponds to $0 \le x_1 \le 1$ and $x_i = 0$ for $i = 2, \ldots, n$ [20].
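ZDT1 is small enough to state in a few lines. A sketch that returns the objective pair for a candidate vector:

```python
import math

def zdt1(x):
    """ZDT1 test function (n = 30, all x_i in [0, 1]); returns (f1, f2)."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

On the Pareto optimal front $g = 1$, so the front is simply $f_2 = 1 - \sqrt{f_1}$; ZDT2 to ZDT4 differ only in the $f_2$ and $g$ expressions.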
ZDT2
This is also an $n = 30$ variable problem having a nonconvex Pareto optimal set:
$$f_1(x) = x_1, \qquad f_2(x) = g\big(1 - (f_1/g)^{2}\big), \qquad g(x) = 1 + \frac{9}{n - 1}\sum_{i=2}^{n} x_i. \tag{5.5}$$
All variables lie in the range $[0, 1]$. The Pareto optimal region corresponds to $0 \le x_1 \le 1$ and $x_i = 0$ for $i = 2, \ldots, n$ [20].
ZDT3
This is an $n = 30$ variable problem having a number of disconnected Pareto optimal fronts:
$$f_1(x) = x_1, \qquad f_2(x) = g\Big(1 - \sqrt{f_1/g} - \frac{f_1}{g}\sin(10\pi f_1)\Big), \qquad g(x) = 1 + \frac{9}{n - 1}\sum_{i=2}^{n} x_i. \tag{5.6}$$
All variables lie in the range $[0, 1]$. The Pareto optimal region corresponds to $x_i = 0$ for $i = 2, \ldots, n$; because the front is disconnected, not all points satisfying $g(x) = 1$ lie on the Pareto optimal front [20].
ZDT4
This is an $n = 10$ variable problem having a convex Pareto optimal set:
$$f_1(x) = x_1, \qquad f_2(x) = g\Big(1 - \sqrt{f_1/g}\Big), \qquad g(x) = 1 + 10(n - 1) + \sum_{i=2}^{n}\big(x_i^{2} - 10\cos(4\pi x_i)\big). \tag{5.7}$$
The variable $x_1$ lies in the range $[0, 1]$, but all others lie in the range $[-5, 5]$. The Pareto optimal region corresponds to $0 \le x_1 \le 1$ and $x_i = 0$ for $i = 2, \ldots, n$ [20].
ZDT6
This is a 10-variable problem having a nonconvex Pareto optimal set. Moreover, the density of solutions across the Pareto optimal region is nonuniform, and the density toward the Pareto optimal front is thin:
$$f_1(x) = 1 - \exp(-4x_1)\sin^{6}(6\pi x_1), \qquad f_2(x) = g\big(1 - (f_1/g)^{2}\big), \qquad g(x) = 1 + 9\Big(\frac{1}{9}\sum_{i=2}^{n} x_i\Big)^{0.25}. \tag{5.8}$$
All variables lie in the range $[0, 1]$. The Pareto optimal region corresponds to $0 \le x_1 \le 1$ and $x_i = 0$ for $i = 2, \ldots, n$ [20].
DTLZ1
A simple test problem uses $M$ objectives with a linear Pareto optimal front:
$$\begin{aligned} f_1(x) &= \tfrac{1}{2}\, x_1 x_2 \cdots x_{M-1}\,(1 + g(x_M)), \\ f_2(x) &= \tfrac{1}{2}\, x_1 x_2 \cdots (1 - x_{M-1})\,(1 + g(x_M)), \\ &\;\;\vdots \\ f_M(x) &= \tfrac{1}{2}\,(1 - x_1)\,(1 + g(x_M)), \end{aligned} \tag{5.9}$$
where $g(x_M) = 100\big(|x_M| + \sum_{x_i \in x_M}\big((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\big)\big)$ and $x_i \in [0, 1]$ for $i = 1, 2, \ldots, n$. The Pareto optimal solution corresponds to $x_M^* = (0.5, \ldots, 0.5)$, and the objective function values lie on the linear hyperplane $\sum_{m=1}^{M} f_m^* = 0.5$. A value of $k = |x_M| = 5$ is suggested here. In the above problem, the total number of variables is $n = M + k - 1$. The difficulty in this problem is to converge to the hyperplane [21, 22].
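DTLZ1 can likewise be written compactly. The sketch below follows the standard definition with $M$ objectives and $k = n - M + 1$ distance variables; on Pareto optimal points ($x_M = 0.5$) the objectives sum to 0.5:

```python
import math

def dtlz1(x, m=3):
    """DTLZ1 with M = m objectives; the last k = len(x) - m + 1 variables
    form x_M and feed the g function."""
    k = len(x) - m + 1
    xm = x[m - 1:]
    g = 100.0 * (k + sum((xi - 0.5) ** 2
                         - math.cos(20.0 * math.pi * (xi - 0.5))
                         for xi in xm))
    f = []
    for i in range(m):
        val = 0.5 * (1.0 + g)
        for xj in x[: m - 1 - i]:     # product of the leading position variables
            val *= xj
        if i > 0:                     # all but f_1 take one (1 - x_j) factor
            val *= 1.0 - x[m - 1 - i]
        f.append(val)
    return f
```

For example, with `m=3` and the five distance variables fixed at 0.5, `g` vanishes and `sum(dtlz1(x))` equals 0.5 for any position variables in $[0, 1]$.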
DTLZ2
This test problem has a spherical Pareto optimal front:
$$\begin{aligned} f_1(x) &= (1 + g(x_M))\cos(x_1\pi/2)\cdots\cos(x_{M-1}\pi/2), \\ f_2(x) &= (1 + g(x_M))\cos(x_1\pi/2)\cdots\sin(x_{M-1}\pi/2), \\ &\;\;\vdots \\ f_M(x) &= (1 + g(x_M))\sin(x_1\pi/2), \end{aligned} \tag{5.10}$$
where $g(x_M) = \sum_{x_i \in x_M}(x_i - 0.5)^2$. The Pareto optimal solutions correspond to $x_M^* = (0.5, \ldots, 0.5)$, and all objective function values must satisfy $\sum_{m=1}^{M}(f_m^*)^2 = 1$. As in the previous problem, it is recommended to use $k = |x_M| = 10$. The total number of variables is $n = M + k - 1$, as suggested [21, 22].
DTLZ3
In order to investigate an MOEA's ability to converge to the global Pareto optimal front, the $g$ function of DTLZ1 is used in the DTLZ2 problem:
$$g(x_M) = 100\Big(|x_M| + \sum_{x_i \in x_M}\big((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\big)\Big), \tag{5.11}$$
with the objective functions $f_1, \ldots, f_M$ defined as in DTLZ2. It is suggested that $k = |x_M| = 10$. There are a total of $n = M + k - 1$ decision variables in this problem. The Pareto optimal solution corresponds to $x_M^* = (0.5, \ldots, 0.5)$ [21, 22].
DTLZ6
This test problem has $2^{M-1}$ disconnected Pareto optimal regions in the search space:
$$\begin{aligned} f_1(x) &= x_1, \\ &\;\;\vdots \\ f_{M-1}(x) &= x_{M-1}, \\ f_M(x) &= (1 + g(x_M))\, h(f_1, \ldots, f_{M-1}, g), \end{aligned} \tag{5.12}$$
where $g(x_M) = 1 + \frac{9}{|x_M|}\sum_{x_i \in x_M} x_i$ and $h = M - \sum_{i=1}^{M-1}\big[\frac{f_i}{1 + g}\big(1 + \sin(3\pi f_i)\big)\big]$. The functional $g$ requires $k = |x_M|$ decision variables, and the total number of variables is $n = M + k - 1$. It is suggested that $k = 20$ [21, 22].
5.2. Performance Measures
In order to facilitate the quantitative assessment of the performance of a multiobjective optimization algorithm, two performance metrics are taken into consideration: (1) the convergence metric $\gamma$ and (2) the diversity metric $\Delta$ [4].
5.2.1. Convergence Metric $\gamma$
This metric measures the extent of convergence to a known set of Pareto optimal solutions, as follows:
$$\gamma = \frac{\sum_{i=1}^{N} d_i}{N}, \tag{5.13}$$
where $N$ is the number of nondominated solutions obtained with an algorithm and $d_i$ is the Euclidean distance between the $i$th nondominated solution and the nearest member of the true Pareto optimal front. To calculate this metric, we find a set of uniformly spaced solutions from the true Pareto optimal front in the objective space. For each solution obtained with an algorithm, we compute its minimum Euclidean distance from the chosen solutions on the Pareto optimal front. The average of these distances is used as the convergence metric $\gamma$. Figure 4 shows the calculation procedure of this metric [4].
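A direct reading of the convergence metric, assuming the true front is supplied as a finite set of uniformly sampled points:

```python
import math

def convergence_metric(obtained, true_front):
    """Gamma: mean Euclidean distance from each obtained nondominated
    solution (objective vector) to its nearest member of the sampled
    true Pareto front."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(p, q) for q in true_front)
               for p in obtained) / len(obtained)
```

A value of zero means every obtained solution lies exactly on a sampled point of the true front; smaller is better.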
5.2.2. Diversity Metric $\Delta$
This metric measures the extent of spread achieved among the obtained solutions. Here, we are interested in getting a set of solutions that spans the entire Pareto optimal region. This metric is defined as
$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{N-1}\lvert d_i - \bar{d}\rvert}{d_f + d_l + (N - 1)\bar{d}}, \tag{5.14}$$
where $d_i$ is the Euclidean distance between consecutive solutions in the obtained nondominated set of solutions, $N$ is the number of nondominated solutions obtained with an algorithm, and $\bar{d}$ is the average of these distances. $d_f$ and $d_l$ are the Euclidean distances between the extreme solutions and the boundary solutions of the obtained nondominated set, as depicted in Figure 5.
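The diversity metric for a biobjective front can be sketched as follows, assuming the obtained set is already sorted along the front and the two true extreme Pareto solutions are known:

```python
import math

def diversity_metric(front, extremes):
    """Delta: spread of a nondominated set (two objectives).
    `front` must be sorted along the front (len >= 2); `extremes`
    holds the two true extreme Pareto solutions matching its ends."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    n = len(front)
    d = [dist(front[i], front[i + 1]) for i in range(n - 1)]
    d_bar = sum(d) / len(d)
    d_f = dist(extremes[0], front[0])    # gap to first extreme
    d_l = dist(extremes[1], front[-1])   # gap to last extreme
    num = d_f + d_l + sum(abs(di - d_bar) for di in d)
    den = d_f + d_l + (n - 1) * d_bar
    return num / den
```

A perfectly uniform spread whose endpoints coincide with the true extremes yields $\Delta = 0$; larger values indicate gaps or clustering.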
5.3. Compared Algorithms and Parameter Settings
5.3.1. Nondominated Sorting Genetic Algorithm II (NSGAII)
This algorithm was proposed by Deb et al. as a revised version of the NSGA [23]. NSGA-II is a computationally fast and elitist MOEA based on a nondominated sorting approach. It replaces the sharing function with a new crowded-comparison approach, thereby eliminating the need for any user-defined parameter for maintaining diversity among the population members. NSGA-II has proven to be one of the most efficient algorithms for multiobjective optimization due to its simplicity, excellent diversity-preserving mechanism, and convergence near the true Pareto optimal set.
The original NSGA-II algorithm uses simulated binary crossover (SBX) and polynomial mutation. We use a population size of 100. The crossover probability is $p_c = 0.9$, and the mutation probability is $p_m = 1/n$, where $n$ is the number of decision variables. The distribution indices for the crossover and mutation operators are $\eta_c = 20$ and $\eta_m = 20$, respectively [4].
5.3.2. Multiobjective Comprehensive Learning Particle Swarm Optimizer (MOCLPSO)
MOCLPSO was proposed by Huang et al. [10]. It extends the comprehensive learning particle swarm optimizer (CLPSO) algorithm to solve multiobjective optimization problems. MOCLPSO incorporates the Pareto dominance concept into CLPSO. It also uses an external archive technique to keep a historical record of the nondominated solutions obtained during the search process. The MOCLPSO population size, archive size, learning probability, and elitism probability are set as in [10].
For the MOABC, we use a colony size of 50, a fixed archive size, and a fixed elitism probability; the NSABC algorithm does not need an elitism probability. In the experiments, in order to compare the different algorithms fairly, a common time measure must be selected; it was therefore decided to use the number of function evaluations (FEs) as the time measure [24]. The number of FEs is thus our termination criterion.
5.4. Simulation Results for Benchmark Functions
The experimental results, including the best, worst, average, median, and standard deviation of the convergence metric and diversity metric values found in 10 runs, are presented in Tables 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, and 22; all algorithms are terminated after 10000 and 20000 function evaluations, respectively. Figures 6, 7, 8, 9, 10, and 12 show the optimal fronts obtained by the four algorithms for the two-objective problems; the continuous lines represent the Pareto optimal front, and the star spots represent the nondominated solutions found. Figure 11 shows the results of the NSABC, MOABC, and NSGA-II algorithms on the test function ZDT4. Figures 13, 14, 15, 16, 17, 18, 19, and 20 show the true Pareto optimal front and the optimal fronts obtained by the four algorithms for the DTLZ series problems.