Abstract

Multiobjective optimization has long been a difficult problem and a focus of research in science and engineering. This paper presents a novel algorithm based on the artificial bee colony (ABC) metaheuristic to deal with multiobjective optimization problems. ABC is one of the most recently introduced algorithms based on the intelligent foraging behavior of a honey bee swarm; it uses few control parameters and can be efficiently applied to multimodal and multidimensional optimization problems. Our algorithm uses the concept of Pareto dominance to determine the flight direction of a bee, and it maintains the nondominated solution vectors found so far in an external archive. The proposed algorithm is validated on standard test problems, and simulation results show that the proposed approach is highly competitive and can be considered a viable alternative for solving multiobjective optimization problems.

1. Introduction

In the real world, many optimization problems require the simultaneous optimization of two or more objectives, and in many cases these objectives are in conflict with each other. While in single-objective optimization the optimal solution is usually clearly defined, this does not hold for multiobjective optimization problems. Instead of a single optimum, there is a set of alternative trade-offs, generally known as Pareto optimal solutions. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered.

In the 1950s, in the area of operational research, a variety of methods were developed for the solution of multiobjective optimization problems (MOPs). Some of the most representative classical methods are linear programming, the weighted sum method, and the goal programming method [1]. Over the past two decades, a number of multiobjective evolutionary algorithms (MOEAs) have been proposed [2, 3]. These include the nondominated sorting genetic algorithm II (NSGA-II) [4], the strength Pareto evolutionary algorithm 2 (SPEA2) [5], and the multiobjective particle swarm optimization (MOPSO) proposed by Coello and Lechuga [6]. The success of MOEAs is due to their ability to find a set of representative Pareto optimal solutions in a single run.

The artificial bee colony (ABC) algorithm is a new swarm intelligence algorithm that was first introduced by Karaboga at Erciyes University, Turkey, in 2005 [7], and its performance was analyzed in 2007 [8]. The ABC algorithm imitates the behavior of real bees in finding food sources and sharing this information with other bees. Since the ABC algorithm is simple in concept, easy to implement, and has few control parameters, it has been widely used in many fields. Owing to these advantages, we present a proposal, called the “Multiobjective Artificial Bee Colony” (MOABC), which extends the ABC algorithm to multiobjective optimization problems. We aim at presenting an efficient and simple algorithm for multiobjective optimization while filling the research gap of the ABC algorithm in the field of multiobjective problems. Our MOABC algorithm is based on a nondominated sorting strategy. We use the concept of Pareto dominance to determine which solution vector is better and use an external archive to maintain the nondominated solution vectors. We also use a comprehensive learning strategy, inspired by the comprehensive learning particle swarm optimizer (CLPSO) [9], to ensure the diversity of the population. In order to evaluate the performance of MOABC, we compared it with NSGA-II and MOCLPSO [10] on a set of well-known benchmark functions. Seven of these test functions, SCH, FON, ZDT1 to ZDT4, and ZDT6, have two objectives, while the other four, DTLZ1 to DTLZ3 and DTLZ6, have three objectives. Meanwhile, a version of MOABC with the nondominated sorting strategy only is also compared in order to illustrate the effect of the comprehensive learning strategy on population diversity; this variant is called the nondominated sorting artificial bee colony (NSABC). From the simulation results, the MOABC algorithm shows remarkable performance improvement over the other algorithms on all benchmark functions.

The remainder of the paper is organized as follows. Section 2 gives relevant MOP concepts. Section 3 introduces the ABC algorithm. Section 4 gives the details of the MOABC algorithm. Section 5 presents a comparative study of the proposed MOABC with NSABC, NSGA-II, and MOCLPSO on a number of benchmark problems and draws conclusions from this study. Section 6 summarizes our discussion.

2. Multiobjective Optimization Problems

Definition 1 (multiobjective optimization problem). A multiobjective optimization problem (MOP) can be stated as follows:
$$\min\ F(x) = \bigl(f_1(x), f_2(x), \ldots, f_m(x)\bigr) \quad \text{subject to } x \in \Omega, \tag{2.1}$$
where $x = (x_1, x_2, \ldots, x_n)$ is called the decision (variable) vector, $\Omega \subseteq \mathbb{R}^n$ is the decision (variable) space, $\mathbb{R}^m$ is the objective space, and $F : \Omega \rightarrow \mathbb{R}^m$ consists of $m$ real-valued objective functions. $F(x)$ is the objective vector. We call problem (2.1) a MOP.

Definition 2 (Pareto optimal). For (2.1), let $u = (u_1, \ldots, u_m)$ and $v = (v_1, \ldots, v_m)$ be two objective vectors; $u$ is said to dominate $v$ if $u_i \le v_i$ for all $i = 1, \ldots, m$ and $u \ne v$. A point $x^* \in \Omega$ is called (globally) Pareto optimal if there is no $x \in \Omega$ such that $F(x)$ dominates $F(x^*)$. Pareto optimal solutions are also called efficient, nondominated, and noninferior solutions. The set of all Pareto optimal solutions, denoted by $PS$, is called the Pareto set. The set of all Pareto optimal objective vectors, $PF = \{F(x) \mid x \in PS\}$, is called the Pareto front [11, 12]. An illustrative example can be seen in Figure 1.

3. The Original ABC Algorithm

The artificial bee colony algorithm is a new population-based metaheuristic approach, initially proposed by Karaboga [7] and Karaboga and Basturk [8] and further developed by Karaboga and Basturk [13] and Karaboga and Akay [14]. It has been used in various complex problems. The algorithm simulates the intelligent foraging behavior of honey bee swarms. The algorithm is very simple and robust. In the ABC algorithm, the colony of artificial bees is classified into three categories: employed bees, onlookers, and scouts. Employed bees are associated with a particular food source that they are currently exploiting or are “employed” at. They carry with them information about this particular source and share this information with the onlookers. Onlooker bees are those bees that are waiting on the dance area in the hive for the information to be shared by the employed bees about their food sources and then decide which food source to exploit. A bee carrying out random search is called a scout. In the ABC algorithm, the first half of the colony consists of the employed artificial bees, and the second half includes the onlookers. For every food source, there is only one employed bee; in other words, the number of employed bees is equal to the number of food sources around the hive. The employed bee whose food source has been exhausted by the bees becomes a scout. The position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution represented by that food source. Onlookers are placed on the food sources by using a probability-based selection process. As the nectar amount of a food source increases, the probability value with which the food source is preferred by onlookers increases, too [7, 8]. The main steps of the algorithm are given in Algorithm 1.

Main steps of the ABC algorithm
(1) cycle = 1
(2) Initialize the food source positions (solutions) $x_i$, $i = 1, \ldots, SN$
(3) Evaluate the nectar amount (fitness function $fit_i$) of the food sources
(4) repeat
(5)  Employed Bees’ Phase
   For each employed bee
    Produce a new food source position $v_i$
    Calculate its $fit_i$ value
    If the new position is better than the previous position
    Then memorize the new position and forget the old one
   End For
(6)  Calculate the probability values $p_i$ for the solutions
(7)  Onlooker Bees’ Phase
   For each onlooker bee
    Choose a food source depending on $p_i$
    Produce a new food source position $v_i$
    Calculate its $fit_i$ value
    If the new position is better than the previous position
    Then memorize the new position and forget the old one
   End For
(8)  Scout Bee Phase
   If an employed bee becomes a scout
   Then replace it with a new randomly produced food source position
(9)  Memorize the best solution achieved so far
(10)  cycle = cycle + 1
(11) until cycle = Maximum Cycle Number

In the initialization phase, the ABC algorithm generates $SN$ randomly distributed initial food source positions (solutions), where $SN$ denotes the number of employed bees or onlooker bees. Each solution $x_i$ ($i = 1, 2, \ldots, SN$) is an $n$-dimensional vector. Here, $n$ is the number of optimization parameters. Each nectar amount $fit_i$ is then evaluated. In the ABC algorithm, the nectar amount is the value of the benchmark function.

In the employed bees’ phase, each employed bee finds a new food source $v_i$ in the neighborhood of its current source $x_i$. The new food source is calculated using the following expression:
$$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}),$$
where $k \in \{1, 2, \ldots, SN\}$ and $j \in \{1, 2, \ldots, n\}$ are randomly chosen indexes and $k \ne i$. $\phi_{ij}$ is a random number between $[-1, 1]$; it controls the production of a neighbor food source position around $x_{ij}$. The employed bee then compares the new source against the current solution and memorizes the better one by means of a greedy selection mechanism.

In the onlooker bees’ phase, each onlooker chooses a food source with a probability which is related to the nectar amount (fitness) of a food source shared by the employed bees. The probability is calculated using the following expression:
$$p_i = \frac{fit_i}{\sum_{n=1}^{SN} fit_n}.$$

In the scout bee phase, if a food source cannot be improved within a predetermined number of cycles, called the “limit”, it is removed from the population, and the employed bee of that food source becomes a scout. The scout bee finds a new random food source position using
$$x_i^j = x_{\min}^j + \operatorname{rand}(0, 1)\,\bigl(x_{\max}^j - x_{\min}^j\bigr),$$
where $x_{\min}^j$ and $x_{\max}^j$ are the lower and upper bounds of parameter $j$, respectively.

These steps are repeated through a predetermined number of cycles, called the maximum cycle number (MCN), or until a termination criterion is satisfied [7, 8, 15].
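
For concreteness, the single-objective ABC loop described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the exact implementation used in this paper: the fitness mapping, the boundary handling, and the parameter names (SN, limit, MCN) are common choices taken from the ABC literature.

import random

def abc_minimize(objective, lb, ub, SN=20, limit=50, MCN=500):
    # Minimal single-objective ABC sketch; SN food sources, one employed bee per source.
    n = len(lb)
    foods = [[random.uniform(lb[j], ub[j]) for j in range(n)] for _ in range(SN)]
    values = [objective(x) for x in foods]
    trials = [0] * SN

    def fitness(f):
        # Common ABC fitness mapping for minimization problems.
        return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

    def neighbour(i):
        # v_ij = x_ij + phi_ij * (x_ij - x_kj) for one randomly chosen dimension j, k != i.
        k = random.choice([s for s in range(SN) if s != i])
        j = random.randrange(n)
        v = foods[i][:]
        phi = random.uniform(-1.0, 1.0)
        v[j] = min(max(foods[i][j] + phi * (foods[i][j] - foods[k][j]), lb[j]), ub[j])
        return v

    def greedy(i, v):
        # Keep the better of the current source and the candidate v.
        fv = objective(v)
        if fv < values[i]:
            foods[i], values[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    best = min(zip(values, foods))
    for _ in range(MCN):
        for i in range(SN):                          # employed bees' phase
            greedy(i, neighbour(i))
        fits = [fitness(f) for f in values]          # probabilities p_i = fit_i / sum(fit)
        total = sum(fits)
        for _ in range(SN):                          # onlooker bees' phase
            r = random.uniform(0, total)
            i, acc = 0, fits[0]
            while acc < r and i < SN - 1:            # roulette-wheel selection
                i += 1
                acc += fits[i]
            greedy(i, neighbour(i))
        i = max(range(SN), key=lambda s: trials[s])  # scout bee phase
        if trials[i] > limit:
            foods[i] = [random.uniform(lb[j], ub[j]) for j in range(n)]
            values[i], trials[i] = objective(foods[i]), 0
        best = min(best, min(zip(values, foods)))
    return best

# Example: minimize the 5-dimensional sphere function.
# print(abc_minimize(lambda x: sum(v * v for v in x), [-5.0] * 5, [5.0] * 5))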

4. The Multiobjective ABC Algorithm

4.1. External Archive

As opposed to single-objective optimization, MOEAs usually maintain a set of nondominated solutions. In multiobjective optimization, in the absence of preference information, none of these solutions can be said to be better than the others. Therefore, in our algorithm, we use an external archive to keep a historical record of the nondominated vectors found along the search process. This technique is used in many MOEAs [5, 16].

In the initialization phase, the external archive is initialized. After initializing the solutions and calculating the objective values of every solution, the solutions are sorted based on nondomination: we compare each solution with every other solution in the population to determine which solutions are nondominated. We then put all nondominated solutions into the external archive. The external archive is updated at each generation.
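
A straightforward way to build this initial archive is to test every solution against every other one with the Pareto dominance relation. The following Python sketch assumes minimization of all objectives, which matches the benchmark problems used later; the function names are illustrative.

def dominates(u, v):
    # True if objective vector u Pareto-dominates v (all objectives minimized).
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(population, objs):
    # Return (solution, objectives) pairs that no other member of the population dominates.
    archive = []
    for i, fi in enumerate(objs):
        if not any(dominates(fj, fi) for j, fj in enumerate(objs) if j != i):
            archive.append((population[i], fi))
    return archive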

4.2. Diversity

The success of MOEAs is due to their ability to find a set of representative Pareto optimal solutions in a single run. In order to approximate the Pareto optimal set in a single optimization run, evolutionary algorithms have to perform a multimodal search in which multiple, widely different solutions are to be found. Therefore, maintaining a diverse population is crucial for the efficacy of an MOEA. The ABC algorithm has been demonstrated to possess superior performance in the single-objective domain. However, NSABC, the first version of our multiobjective ABC algorithm, cannot achieve satisfactory results in terms of the diversity metric. In order to apply the ABC algorithm to multiobjective problems, we therefore use the comprehensive learning strategy, which is inspired by the comprehensive learning particle swarm optimizer (CLPSO) [9], to ensure the diversity of the population.

In our algorithm, all solutions in the external archive are regarded as food source positions, and all bees are regarded as onlooker bees; there are no employed bees or scouts. In each generation, each onlooker randomly chooses a food source from the external archive, goes to the food source area, and then chooses a new food source. In the original ABC algorithm, each bee finds a new food source by using information from the neighborhood of its current source. In our proposed MOABC algorithm, however, we use the comprehensive learning strategy: as in CLPSO, $m$ dimensions of each individual are randomly chosen to learn from a random nondominated solution taken from the external archive, and each of the other dimensions of the individual learns from other nondominated solutions. In our proposed NSABC algorithm, just one dimension of each individual is randomly chosen to learn from a random nondominated solution.

4.3. Update External Archive

As the evolution progresses, more and more new solutions enter the external archive. Since each new solution is compared with every nondominated solution in the external archive to decide whether it should stay in the archive, and the computational time is directly proportional to the number of comparisons, the size of the external archive must be limited.

In our algorithm, each individual finds a new solution in each generation. If the new solution dominates the original individual, it is allowed to enter the external archive. If, on the other hand, the new solution is dominated by the original individual, it is denied access to the external archive. If the new solution and the original individual do not dominate each other, we randomly choose one of them to enter the external archive. After each generation, we update the external archive: we select the nondominated solutions from the archive and keep them in the archive. If the number of nondominated solutions exceeds the allocated archive size, crowding distance [4] is applied to remove the most crowded members and to maintain a uniform distribution among the archive members.

4.4. Crowding Distance

Crowding distance is used to estimate the density of solutions in the external archive. Usually, the perimeter of the cuboid formed by using the nearest neighbors as vertices is called the crowding distance. In Figure 2, the crowding distance of the $i$th solution is the average side length of the cuboid (shown with a dashed box) [4].

The crowding distance computation requires sorting the population in the external archive according to each objective function value in ascending order of magnitude. Thereafter, for each objective function, the boundary solutions (the solutions with the smallest and largest function values) are assigned an infinite distance value. All other intermediate solutions are assigned a distance value equal to the absolute normalized difference in the function values of the two adjacent solutions. This calculation is repeated for the other objective functions. The overall crowding distance value is calculated as the sum of the individual distance values corresponding to each objective. Each objective function is normalized before calculating the crowding distance [4].
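
The following Python sketch implements this computation and the archive truncation it drives. It assumes the archive is given as a list of objective vectors and normalizes each objective by its range, as described above; the function names are illustrative.

def crowding_distances(front):
    # Crowding distance of each objective vector in `front` (a list of equal-length tuples).
    N = len(front)
    distances = [0.0] * N
    if N == 0:
        return distances
    M = len(front[0])
    for m in range(M):
        order = sorted(range(N), key=lambda i: front[i][m])
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        distances[order[0]] = distances[order[-1]] = float("inf")   # boundary solutions
        span = (f_max - f_min) or 1.0                               # per-objective normalization
        for pos in range(1, N - 1):
            distances[order[pos]] += (front[order[pos + 1]][m] - front[order[pos - 1]][m]) / span
    return distances

def truncate(front, max_size):
    # Keep the max_size least crowded members (used when the archive overflows).
    if len(front) <= max_size:
        return front
    d = crowding_distances(front)
    keep = sorted(range(len(front)), key=lambda i: d[i], reverse=True)[:max_size]
    return [front[i] for i in keep]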

4.5. The Multiobjective ABC Algorithm

The ABC algorithm is very simple when compared to the existing swarm-based algorithms. Therefore, we extend it to deal with multiobjective optimization problems. The main steps of the MOABC algorithm are shown in Algorithm 2.

Main steps of the MOABC algorithm
(1) cycle = 1
(2) Initialize the food source positions (solutions) $x_i$, $i = 1, \ldots, SN$
(3) Evaluate the nectar amount (fitness $fit_i$) of the food sources
(4) The initialized solutions are sorted based on nondomination
(5) Store the nondominated solutions in the external archive EA
(6) repeat
(7)  Onlooker Bees’ Phase
   For each onlooker bee
    Randomly choose a solution from EA
    Produce a new solution $v_i$ by using expression (4.1)
    Calculate its $fit_i$ value
    Apply the greedy selection mechanism of Algorithm 3 to decide which solution enters EA
   End For
(8)  The solutions in the EA are sorted based on nondomination
(9)  Keep only the nondominated solutions in the EA
(10) If the number of nondominated solutions exceeds the allocated size of EA
    Use crowding distance to remove the most crowded members
(11) cycle = cycle + 1
(12) until cycle = Maximum Cycle Number

In the initialization phase, we evaluate the fitness of the initial food source positions and sort them based on nondomination. Then we select nondominated solutions and store them in the external archive EA. This is the initialization of the external archive.

In the onlooker bees’ phase, we use the comprehensive learning strategy to produce new solutions $v_i$. Each bee $x_i$ randomly chooses $m$ dimensions and learns from a nondominated solution $EA_k$ which is randomly selected from the external archive EA, while the other dimensions learn from other nondominated solutions. The $m$ chosen dimensions of the new solution are produced by using the following expression:
$$v_{i,d} = x_{i,d} + \phi_{i,d}\,\bigl(x_{i,d} - EA_{k,d}\bigr), \quad d \in F_i, \tag{4.1}$$
where $k \in \{1, 2, \ldots, p\}$ is a randomly chosen index and $p$ is the number of solutions in the EA. $F_i$ consists of the first $m$ integers of a random permutation of the integers $1 : n$, and it defines which dimensions of $v_i$ should learn from $EA_k$. As opposed to $\phi_{ij}$ in the original ABC algorithm, we produce $m$ random numbers, all between $[-1, 1]$, one for each of the $m$ dimensions above. This modification makes the potential search space lie around $x_i$; the potential search spaces of MOABC on one dimension are plotted as a line in Figure 3. Each remaining dimension learns from another nondominated solution by using
$$v_{i,d} = x_{i,d} + \phi_{i,d}\,\bigl(x_{i,d} - EA_{r_d,d}\bigr), \quad d \notin F_i, \tag{4.2}$$
where $r_d \in \{1, 2, \ldots, p\}$ and $r_d \ne k$. For the NSABC algorithm, each bee randomly chooses a single dimension and learns from a nondominated solution randomly selected from EA; the new solution is produced by using expression (4.2) only. After producing the new solution, we calculate its fitness and apply a greedy selection mechanism to decide which solution enters EA. The selection mechanism is shown in Algorithm 3.
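
The sketch below is our reading of this operator in Python: the $m$ chosen dimensions follow expression (4.1) with a single archive member EA[k] as the interaction partner, and every remaining dimension follows expression (4.2) with another archive member. The exact operator form, the clamping to the variable bounds, and the names (EA, phi) are assumptions made for illustration.

import random

def moabc_candidate(x, EA, m, lb, ub):
    # Comprehensive-learning candidate: m dimensions (the set F_i) learn from one randomly
    # chosen archive member EA[k]; every remaining dimension learns from another member.
    # The ABC-style move v_d = x_d + phi_d * (x_d - partner_d) is an assumed form.
    n, p = len(x), len(EA)
    k = random.randrange(p)
    F_i = set(random.sample(range(n), min(m, n)))
    v = list(x)
    for d in range(n):
        if d in F_i:
            partner = EA[k]                                     # expression (4.1)
        else:
            r = random.choice([s for s in range(p) if s != k] or [k])
            partner = EA[r]                                     # expression (4.2)
        phi = random.uniform(-1.0, 1.0)
        v[d] = min(max(x[d] + phi * (x[d] - partner[d]), lb[d]), ub[d])
    return v

# NSABC instead perturbs a single randomly chosen dimension in the same way and leaves
# the remaining dimensions of x unchanged.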

Greedy selection mechanism
If $v_i$ dominates $x_i$
  Put $v_i$ into EA
Else if $x_i$ dominates $v_i$
  Do nothing
Else if $v_i$ and $x_i$ are not dominated by each other
  Put $v_i$ into EA
  Produce a random number $r$ drawn from a uniform distribution on the unit interval
  If $r < 0.5$
     Then the original solution is replaced by the new solution as the new food source position.
     That means $x_i$ is replaced by $v_i$.
  Else
    Do nothing
  End If
End If
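
A runnable counterpart of Algorithm 3 is given below. The 0.5 replacement probability in the mutually nondominated case, and the replacement of $x_i$ by a dominating $v_i$, are assumptions consistent with the description in Section 4.3; they are not spelled out explicitly in Algorithm 3.

import random

def dominates(u, v):
    # True if objective vector u Pareto-dominates v (all objectives minimized).
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def greedy_selection(x_i, f_x, v_i, f_v, EA):
    # Decide which solution enters EA and which one is kept as the food source position.
    if dominates(f_v, f_x):
        EA.append((v_i, f_v))
        return v_i, f_v            # assumed: a dominating candidate also replaces x_i
    if dominates(f_x, f_v):
        return x_i, f_x            # the new solution is discarded
    EA.append((v_i, f_v))          # neither solution dominates the other
    if random.random() < 0.5:      # assumed 50/50 replacement probability
        return v_i, f_v
    return x_i, f_x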

In the nondominated sorting phase, after each generation, the solutions in the EA are sorted based on nondomination, and only the nondominated solutions are kept in the EA. If the number of nondominated solutions exceeds the allocated size of the EA, we use crowding distance to remove the most crowded members. The crowding distance algorithm can be found in [4].

5. Experiments

In the following, we first describe the benchmark functions used to compare the performance of MOABC with NSABC, NSGA-II, and MOCLPSO. We then introduce the performance measures and give the parameter settings of every algorithm. Finally, we present the simulation results for the benchmark functions.

5.1. Benchmark Functions

In order to illustrate the performance of the proposed MOABC algorithm, we used the well-known test problems SCH, FON, ZDT1 to ZDT4, and ZDT6 as the two-objective test functions, and we used the Deb-Thiele-Laumanns-Zitzler (DTLZ) problem family as the three-objective test functions.

SCH
Although simple, the most studied single-variable test problem is Schaffer’s two-objective problem [17, 18]:
$$f_1(x) = x^2, \qquad f_2(x) = (x - 2)^2.$$
This problem has Pareto optimal solutions $x^* \in [0, 2]$, and the Pareto optimal set is a convex set.

FON
Fonseca and Fleming used a two-objective optimization problem (FON) [18, 19] having $n$ variables:
$$f_1(x) = 1 - \exp\Bigl(-\sum_{i=1}^{n}\Bigl(x_i - \frac{1}{\sqrt{n}}\Bigr)^2\Bigr), \qquad f_2(x) = 1 - \exp\Bigl(-\sum_{i=1}^{n}\Bigl(x_i + \frac{1}{\sqrt{n}}\Bigr)^2\Bigr), \qquad x_i \in [-4, 4].$$
The Pareto optimal solutions to this problem are $x_1 = x_2 = \cdots = x_n$ with $x_i \in [-1/\sqrt{n}, 1/\sqrt{n}]$. These solutions also satisfy the following relationship between the two function values: $f_2 = 1 - \exp\bigl(-\bigl(2 - \sqrt{-\ln(1 - f_1)}\bigr)^2\bigr)$ in the range $0 \le f_1 \le 1 - \exp(-4)$. The interesting aspect is that the search space in the objective space and the Pareto optimal function values do not depend on the dimensionality (the parameter $n$) of the problem. In our paper, we set $n = 3$, and the optimal solutions are $x_1 = x_2 = x_3 \in [-1/\sqrt{3}, 1/\sqrt{3}]$.

ZDT1
This is a 30-variable ($n = 30$) problem having a convex Pareto optimal set. The functions used are as follows:
$$f_1(x) = x_1, \qquad g(x) = 1 + \frac{9}{n - 1}\sum_{i=2}^{n} x_i, \qquad f_2(x) = g(x)\Bigl(1 - \sqrt{f_1/g(x)}\Bigr).$$
All variables lie in the range $[0, 1]$. The Pareto optimal region corresponds to $0 \le x_1 \le 1$ and $x_i = 0$ for $i = 2, \ldots, n$ [20].
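
For reference, ZDT1 is only a few lines of code; the following Python definition restates the standard formula given above.

import math

def zdt1(x):
    # Standard ZDT1: n = 30 variables in [0, 1], convex Pareto optimal front.
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

# On the Pareto optimal front x_1 varies in [0, 1] and x_2 = ... = x_30 = 0, so g = 1
# and f_2 = 1 - sqrt(f_1).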

ZDT2
This is also an $n = 30$ variable problem having a nonconvex Pareto optimal set:
$$f_1(x) = x_1, \qquad g(x) = 1 + \frac{9}{n - 1}\sum_{i=2}^{n} x_i, \qquad f_2(x) = g(x)\Bigl(1 - \bigl(f_1/g(x)\bigr)^2\Bigr).$$
All variables lie in the range $[0, 1]$. The Pareto optimal region corresponds to $0 \le x_1 \le 1$ and $x_i = 0$ for $i = 2, \ldots, n$ [20].

ZDT3
This is an $n = 30$ variable problem having a number of disconnected Pareto optimal fronts:
$$f_1(x) = x_1, \qquad g(x) = 1 + \frac{9}{n - 1}\sum_{i=2}^{n} x_i, \qquad f_2(x) = g(x)\Bigl(1 - \sqrt{f_1/g(x)} - \frac{f_1}{g(x)}\sin(10\pi f_1)\Bigr).$$
All variables lie in the range $[0, 1]$. The Pareto optimal region corresponds to $x_i = 0$ for $i = 2, \ldots, n$; hence, not all points satisfying $0 \le x_1 \le 1$ lie on the Pareto optimal front [20].

ZDT4
This is an $n = 10$ variable problem having a convex Pareto optimal set:
$$f_1(x) = x_1, \qquad g(x) = 1 + 10(n - 1) + \sum_{i=2}^{n}\bigl(x_i^2 - 10\cos(4\pi x_i)\bigr), \qquad f_2(x) = g(x)\Bigl(1 - \sqrt{f_1/g(x)}\Bigr).$$
The variable $x_1$ lies in the range $[0, 1]$, but all the others lie in the range $[-5, 5]$. The Pareto optimal region corresponds to $0 \le x_1 \le 1$ and $x_i = 0$ for $i = 2, \ldots, n$ [20].

ZDT6
This is a 10-variable problem having a nonconvex Pareto optimal set. Moreover, the density of solutions across the Pareto optimal region is nonuniform, and the density of solutions towards the Pareto optimal front is thin:
$$f_1(x) = 1 - \exp(-4x_1)\sin^6(6\pi x_1), \qquad g(x) = 1 + 9\Bigl(\frac{\sum_{i=2}^{n} x_i}{n - 1}\Bigr)^{0.25}, \qquad f_2(x) = g(x)\Bigl(1 - \bigl(f_1/g(x)\bigr)^2\Bigr).$$
All variables lie in the range $[0, 1]$. The Pareto optimal region corresponds to $0 \le x_1 \le 1$ and $x_i = 0$ for $i = 2, \ldots, n$ [20].

DTLZ1
A simple test problem uses $M$ objectives with a linear Pareto optimal front:
$$\begin{aligned}
f_1(x) &= \tfrac{1}{2}\,x_1 x_2 \cdots x_{M-1}\,\bigl(1 + g(x_M)\bigr),\\
f_2(x) &= \tfrac{1}{2}\,x_1 x_2 \cdots (1 - x_{M-1})\,\bigl(1 + g(x_M)\bigr),\\
&\;\;\vdots\\
f_M(x) &= \tfrac{1}{2}\,(1 - x_1)\,\bigl(1 + g(x_M)\bigr),\\
g(x_M) &= 100\Bigl(|x_M| + \sum_{x_i \in x_M}\bigl((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\bigr)\Bigr), \qquad 0 \le x_i \le 1.
\end{aligned}$$
The Pareto optimal solution corresponds to $x_i = 0.5$ (for all $x_i \in x_M$), and the objective function values lie on the linear hyperplane $\sum_{m=1}^{M} f_m = 0.5$. A value of $k = |x_M| = 5$ is suggested here. In the above problem, the total number of variables is $n = M + k - 1$. The difficulty in this problem is to converge to the hyperplane [21, 22].

DTLZ2
This test problem has a spherical Pareto optimal front:
$$\begin{aligned}
f_1(x) &= \bigl(1 + g(x_M)\bigr)\cos\bigl(x_1\pi/2\bigr)\cdots\cos\bigl(x_{M-1}\pi/2\bigr),\\
f_2(x) &= \bigl(1 + g(x_M)\bigr)\cos\bigl(x_1\pi/2\bigr)\cdots\sin\bigl(x_{M-1}\pi/2\bigr),\\
&\;\;\vdots\\
f_M(x) &= \bigl(1 + g(x_M)\bigr)\sin\bigl(x_1\pi/2\bigr),\\
g(x_M) &= \sum_{x_i \in x_M}(x_i - 0.5)^2, \qquad 0 \le x_i \le 1.
\end{aligned}$$
The Pareto optimal solutions correspond to $x_i = 0.5$ (for all $x_i \in x_M$), and all objective function values must satisfy $\sum_{m=1}^{M} f_m^2 = 1$. As in the previous problem, it is recommended to use $k = |x_M| = 10$. The total number of variables is $n = M + k - 1$, as suggested [21, 22].
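
The three-objective DTLZ2 can likewise be written compactly; the following Python sketch restates the standard definition above, with the last $k$ variables forming $x_M$.

import math

def dtlz2(x, M=3):
    # Standard DTLZ2 with n = M + k - 1 variables in [0, 1]; the last k variables form x_M.
    g = sum((xi - 0.5) ** 2 for xi in x[M - 1:])
    f = []
    for i in range(M):
        fi = 1.0 + g
        for j in range(M - 1 - i):
            fi *= math.cos(x[j] * math.pi / 2.0)
        if i > 0:
            fi *= math.sin(x[M - 1 - i] * math.pi / 2.0)
        f.append(fi)
    return tuple(f)

# Pareto optimal solutions have x_i = 0.5 for all of the last k variables, so g = 0 and
# the objective vector lies on the unit sphere (the sum of the squared f_m equals 1).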

DTLZ3
In order to investigate an MOEA’s ability to converge to the global Pareto optimal front, the $g$ function of DTLZ1 is used in the DTLZ2 problem; that is, the objectives $f_1, \ldots, f_M$ are those of DTLZ2 with
$$g(x_M) = 100\Bigl(|x_M| + \sum_{x_i \in x_M}\bigl((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\bigr)\Bigr).$$
It is suggested that $k = |x_M| = 10$. There are a total of $n = M + k - 1$ decision variables in this problem. The Pareto optimal solution corresponds to $x_i = 0.5$ (for all $x_i \in x_M$) [21, 22].

DTLZ6
This test problem has disconnected Pareto optimal regions in the search space:
$$\begin{aligned}
f_1(x) &= x_1, \quad \ldots, \quad f_{M-1}(x) = x_{M-1},\\
f_M(x) &= \bigl(1 + g(x_M)\bigr)\,h(f_1, \ldots, f_{M-1}, g),\\
g(x_M) &= 1 + \frac{9}{|x_M|}\sum_{x_i \in x_M} x_i,\\
h(f_1, \ldots, f_{M-1}, g) &= M - \sum_{i=1}^{M-1}\Bigl(\frac{f_i}{1 + g}\bigl(1 + \sin(3\pi f_i)\bigr)\Bigr), \qquad 0 \le x_i \le 1.
\end{aligned}$$
The functional $g$ requires $k = |x_M|$ decision variables, and the total number of variables is $n = M + k - 1$. It is suggested that $k = 20$ [21, 22].

5.2. Performance Measures

In order to facilitate the quantitative assessment of the performance of a multiobjective optimization algorithm, two performance metrics are taken into consideration: (1) the convergence metric $\gamma$ and (2) the diversity metric $\Delta$ [4].

5.2.1. Convergence Metric

This metric measures the extent of convergence to a known set of Pareto optimal solutions:
$$\gamma = \frac{\sum_{i=1}^{N} d_i}{N},$$
where $N$ is the number of nondominated solutions obtained with an algorithm and $d_i$ is the Euclidean distance between the $i$th nondominated solution and the nearest member of the true Pareto optimal front. To calculate this metric, we find a set of uniformly spaced solutions from the true Pareto optimal front in the objective space. For each solution obtained with an algorithm, we compute its minimum Euclidean distance from these chosen solutions on the Pareto optimal front. The average of these distances is used as the convergence metric $\gamma$. Figure 4 shows the calculation procedure of this metric [4].
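
A direct implementation of $\gamma$ is shown below; both arguments are lists of objective vectors, and the uniformly sampled true Pareto front is assumed to be supplied by the caller.

import math

def convergence_metric(obtained, true_front):
    # gamma: average distance from each obtained objective vector to the nearest point of
    # a uniformly sampled true Pareto front (both given as lists of objective vectors).
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(s, p) for p in true_front) for s in obtained) / len(obtained)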

5.2.2. Diversity Metric

This metric measures the extent of spread achieved among the obtained solutions. Here, we are interested in obtaining a set of solutions that spans the entire Pareto optimal region. This metric is defined as
$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{N-1}\bigl|d_i - \bar{d}\bigr|}{d_f + d_l + (N - 1)\,\bar{d}},$$
where $d_i$ is the Euclidean distance between consecutive solutions in the obtained nondominated set of solutions and $N$ is the number of nondominated solutions obtained with an algorithm. $\bar{d}$ is the average value of these distances. $d_f$ and $d_l$ are the Euclidean distances between the extreme solutions and the boundary solutions of the obtained nondominated set, as depicted in Figure 5.
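
The $\Delta$ metric can be computed as follows for a two-objective front with at least two obtained solutions; the extreme points of the true Pareto optimal front are assumed to be supplied by the caller, and sorting by the first objective is used to order consecutive solutions.

import math

def diversity_metric(front, extremes):
    # Delta for a two-objective front with at least two obtained solutions; `extremes`
    # holds the two extreme points of the true Pareto optimal front.
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    front = sorted(front)                       # order consecutive solutions by f_1
    d = [dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_mean = sum(d) / len(d)
    d_f = dist(extremes[0], front[0])           # gap to one boundary of the true front
    d_l = dist(extremes[1], front[-1])          # gap to the other boundary
    return (d_f + d_l + sum(abs(di - d_mean) for di in d)) / (d_f + d_l + len(d) * d_mean)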

5.3. Compared Algorithms and Parameter Settings
5.3.1. Nondominated Sorting Genetic Algorithm II (NSGA-II)

This algorithm was proposed by Deb et al.; it is a revised version of the NSGA [23]. NSGA-II is a computationally fast and elitist MOEA based on a nondominated sorting approach. It replaces the sharing function with a new crowded-comparison approach, thereby eliminating the need for any user-defined parameter for maintaining diversity among the population members. NSGA-II has proven to be one of the most efficient algorithms for multiobjective optimization due to its simplicity, excellent diversity-preserving mechanism, and convergence near the true Pareto optimal set.

The original NSGA-II algorithm uses simulated binary crossover (SBX) and polynomial mutation. We use a population size of 100. The crossover probability is $p_c = 0.9$, and the mutation probability is $p_m = 1/n$, where $n$ is the number of decision variables. The distribution indices for the crossover and mutation operators are $\eta_c = 20$ and $\eta_m = 20$, respectively [4].
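
For reference, the two variation operators can be sketched per variable as follows; these are the standard SBX and polynomial-mutation formulas with distribution indices $\eta_c$ and $\eta_m$, shown here without the boundary-handling details of the full NSGA-II implementation.

import random

def sbx_pair(p1, p2, eta_c=20.0):
    # Simulated binary crossover for one variable pair (standard formulation).
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def polynomial_mutation(x, lb, ub, eta_m=20.0):
    # Polynomial mutation for one variable (standard formulation).
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
    return min(max(x + delta * (ub - lb), lb), ub)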

5.3.2. Multiobjective Comprehensive Learning Particle Swarm Optimizer (MOCLPSO)

MOCLPSO was proposed by Huang et al. [10]. It extends the comprehensive learning particle swarm optimizer (CLPSO) algorithm to solve multiobjective optimization problems. MOCLPSO incorporates the Pareto dominance concept into CLPSO. It also uses an external archive technique to keep a historical record of the nondominated solutions obtained during the search process. The population size, archive size, learning probability, and elitism probability of MOCLPSO are set as recommended in [10].

For MOABC, we use a colony size of 50; the archive size and the elitism probability are also fixed in advance, and the NSABC algorithm does not need an elitism probability. In the experiments, in order to compare the different algorithms, a fair time measure must be selected. We therefore decided to use the number of function evaluations (FEs) as the time measure [24], and a fixed budget of FEs is our termination criterion.

5.4. Simulation Results for Benchmark Functions

The experimental results, including the best, worst, average, median, and standard deviation of the convergence metric and diversity metric values found in 10 runs, are presented in Tables 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, and 22; all algorithms are terminated after 10000 and 20000 function evaluations, respectively. Figures 6, 7, 8, 9, 10, and 12 show the optimal fronts obtained by the four algorithms for the two-objective problems. The continuous lines represent the Pareto optimal front, and the star points represent the nondominated solutions found. Figure 11 shows the results of the NSABC, MOABC, and NSGA-II algorithms optimizing the test function ZDT4. Figures 13, 14, 15, 16, 17, 18, 19, and 20 show the true Pareto optimal front and the optimal fronts obtained by the four algorithms for the DTLZ series problems.

From Tables 1 and 2, we can see that the performance of all four algorithms in the convergence metric is competitively good on this problem; almost all of their solutions lie on the true Pareto front. In terms of the diversity metric, however, the performance of NSABC is better than that of the other algorithms. When given 20000 function evaluations, the MOABC and MOCLPSO algorithms improve their diversity metric, and MOABC becomes as good as NSABC in the diversity metric. From Figure 6, it can be seen that the fronts obtained from MOABC, NSABC, and MOCLPSO are uniformly distributed, whereas the NSGA-II algorithm is not able to cover the full Pareto front. On the whole, the MOABC and NSABC algorithms are a little better than MOCLPSO and NSGA-II on the SCH problem.

For the FON problem, as for SCH, it can be observed from Tables 3 and 4 that all algorithms perform very well in the convergence metric. In terms of the diversity metric, the MOABC, NSABC, and MOCLPSO algorithms can guarantee good performance. On the other hand, even though NSGA-II is able to find the true Pareto front for this problem, it cannot maintain a relatively good diversity metric; it is only able to cover half of the Pareto front, as the results of Figure 7 confirm. From Tables 3 and 4, we can also see that the performance of all four algorithms could not be improved further even with 20000 function evaluations.

On the ZDT1 function, when the four algorithms are given 10000 function evaluations, Table 5 shows that the performance of MOABC in the convergence metric is one order of magnitude better than that of NSABC and NSGA-II, but it is one order of magnitude worse than that of MOCLPSO. However, when the number of function evaluations is 20000, we can see from Tables 5 and 6 that MOABC greatly improves its convergence, becoming two orders of magnitude better than MOCLPSO. For the diversity metric, one can note from the tables that MOABC outperforms NSABC. Figure 8 shows that MOABC and MOCLPSO can discover a well-distributed and diverse solution set for this problem, whereas NSABC and NSGA-II only find a sparse distribution and cannot reach the true Pareto front for ZDT1.

On the ZDT2 function, the results of the performance measures show that MOABC and MOCLPSO have better convergence and diversity compared with NSABC and NSGA-II. From Table 8, one can note that the performance of MOABC in the convergence metric is one order of magnitude better than that of MOCLPSO after 20000 function evaluations. Figure 9 shows that NSGA-II produces poor results on this test function and cannot reach the true Pareto front. In spite of NSABC obtaining a relatively good convergence metric, it cannot maintain a good diversity metric.

The ZDT3 problem has a number of disconnected Pareto optimal fronts, and the situation is similar to that on the ZDT2 problem. The performances of MOABC and MOCLPSO are almost the same after 10000 function evaluations; they are a little better than that of NSABC and two orders of magnitude better than that of NSGA-II in the convergence metric. However, when the number of function evaluations is 20000, it is found from Table 10 that the performances of MOCLPSO and NSGA-II could not be improved, while MOABC and NSABC improve the convergence metric by almost one order of magnitude. In terms of the diversity metric, the results show that NSABC has worse diversity compared with the other algorithms, as can be seen in Figure 10.

The problem ZDT4 has $21^9$ (about $8 \times 10^{11}$) different local Pareto optimal fronts in the search space, of which only one corresponds to the global Pareto optimal front. The Euclidean distance in the decision space between the solutions of two consecutive local Pareto optimal sets is 0.25 [20]. From Tables 11 and 12, it can be seen that MOABC, NSABC, and NSGA-II produce poor results in the convergence metric; these three algorithms get trapped in one of the $21^9$ local Pareto optimal fronts. Even though they cannot find the true Pareto front, they are able to find a good spread of solutions on a local Pareto optimal front, as shown in Figure 11. Since MOCLPSO converges poorly on this problem, we do not show the MOCLPSO results in Figure 11 and Tables 11 and 12.

For the ZDT6 problem, from Tables 13 and 14, we can see that the performances of MOABC, NSABC, and MOCLPSO are very good, and they outperform NSGA-II in terms of both the convergence metric and the diversity metric. NSGA-II has difficulty in converging near the global Pareto optimal front. Considering the average and median values, we can observe that the proposed MOABC algorithm has better convergence in most test runs. For the diversity metric, Figure 12 shows that MOABC and MOCLPSO effectively find nondominated solutions spread over the whole Pareto front. In spite of NSABC reaching the true Pareto front, it cannot maintain a good diversity metric. NSGA-II obtains the worst distribution of solutions among the sets of optimal solutions found.

For the DTLZ1 problem, from Tables 15 and 16, we can see that the performances of MOABC and NSABC are one order of magnitude better than that of NSGA-II in the convergence metric. Since MOCLPSO converges poorly on this problem, we do not show the MOCLPSO results. For the diversity metric, Figure 14 shows that NSABC and MOABC effectively find nondominated solutions spread over the whole Pareto front, whereas NSGA-II obtains the worst distribution of solutions among the sets of optimal solutions found.

On the DTLZ2 function, from Tables 16 and 17, we can see that the performances of all four algorithms in the convergence metric are competitively good on this problem. Nevertheless, the performance of MOABC in the convergence metric is one order of magnitude better than that of NSABC and NSGA-II and two orders of magnitude better than that of MOCLPSO. From Figure 16, it can be seen that the fronts obtained from MOABC, NSABC, and MOCLPSO are uniformly distributed, whereas the NSGA-II algorithm is not able to cover the full Pareto front.

As on the DTLZ1 function, MOCLPSO converges poorly on the DTLZ3 problem. For this problem, none of the algorithms could quite converge to the true Pareto optimal front after 20000 FEs; MOABC is a little better than the other algorithms in the convergence metric. From the tables and figures, we find that NSGA-II performs poorly on DTLZ3 in terms of the diversity metric, whereas MOABC and NSABC can discover a well-distributed and diverse solution set for this problem.

The DTLZ6 problem has disconnected Pareto optimal regions in the search space, so it tests an algorithm’s ability to maintain subpopulations in different Pareto optimal regions. From Tables 21 and 22, we can observe that the performance of MOABC and MOCLPSO in the convergence metric is one order of magnitude better than that of NSABC and NSGA-II. From Figure 20, MOABC and MOCLPSO can discover a well-distributed and diverse solution set for this problem, whereas the performance of NSABC and NSGA-II in the diversity metric is a little worse than that of MOABC and MOCLPSO.

6. Summary and Conclusion

In this paper, we present a novel artificial bee colony (ABC) algorithm for solving multiobjective optimization problems, namely, the multiobjective artificial bee colony (MOABC). In our algorithm, we use the Pareto dominance concept and an external archive strategy to make the algorithm converge to the true Pareto optimal front, and we use the comprehensive learning strategy to ensure the diversity of the population. A major advantage of MOABC is that it uses few control parameters while achieving highly competitive performance. In order to demonstrate the performance of the MOABC algorithm, we compared MOABC with the NSABC (with the nondominated sorting strategy only), MOCLPSO, and NSGA-II optimization algorithms on several two-objective and three-objective test problems.

For the two-objective test problems, the simulation results show that the MOABC algorithm can converge to the true Pareto optimal front and maintain good diversity along the Pareto front. We can also see that MOABC is much better than NSABC in terms of the diversity metric; we believe that the comprehensive learning strategy helps find more nondominated solutions and improves the ability of the algorithm to distribute the nondominated vectors found uniformly. Additionally, the simulation results show that MOABC has the same performance as MOCLPSO after 10000 FEs; both are much better than the other two algorithms, and NSABC is a little better than NSGA-II. However, when the number of function evaluations is 20000, MOABC greatly improves its convergence and becomes 1-2 orders of magnitude better than MOCLPSO. NSABC also shows a great improvement in the convergence metric, but it still has a poor diversity metric.

For the DTLZ series problems, we can see that the performances of all algorithms improve only slightly when we increase the number of function evaluations. The simulation results show that MOCLPSO converges poorly on the DTLZ1 and DTLZ3 problems, which means that MOCLPSO performs badly in terms of convergence on three-objective problems. We also see that MOABC and NSABC perform the best in terms of convergence, with MOABC a little better than NSABC. In terms of the diversity metric, MOABC and NSABC can discover well-distributed and diverse solution sets for the DTLZ series problems. However, in spite of NSGA-II obtaining a relatively good convergence metric, it cannot maintain a good diversity metric for the DTLZ series problems.

On the whole, the proposed MOABC significantly outperforms the three other algorithms in terms of the convergence metric and the diversity metric. Therefore, the proposed MOABC optimization algorithm can be considered a viable and efficient method for solving multiobjective optimization problems. As our algorithm has not yet been applied to real-world problems, its robustness still needs to be verified; in future work, we will apply the MOABC algorithm to real engineering problems.

Acknowledgment

This work is supported by the National Natural Science Foundation of China under Grants nos. 61174164, 61003208, and 61105067.