Maowei He, Kunyuan Hu, Yunlong Zhu, Lianbo Ma, Hanning Chen, Yan Song, "Hierarchical Artificial Bee Colony Optimizer with Divide-and-Conquer and Crossover for Multilevel Threshold Image Segmentation", Discrete Dynamics in Nature and Society, vol. 2014, Article ID 941534, 22 pages, 2014. https://doi.org/10.1155/2014/941534
Hierarchical Artificial Bee Colony Optimizer with Divide-and-Conquer and Crossover for Multilevel Threshold Image Segmentation
Abstract
This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), for multilevel threshold image segmentation, which employs a pool of optimal foraging strategies to extend the classical artificial bee colony framework into a cooperative and hierarchical fashion. In the proposed hierarchical model, the higher-level species incorporate an enhanced information exchange mechanism based on the crossover operator to improve the global search ability between species. In the bottom level, following the divide-and-conquer approach, each subpopulation runs the original ABC method in parallel to find a part-dimensional optimum, and these are aggregated into a complete solution for the upper level. Experimental results comparing HABC with several successful EA and SI algorithms on a set of benchmarks demonstrate the effectiveness of the proposed algorithm. Furthermore, we applied HABC to the multilevel image segmentation problem, where experimental results on a variety of images demonstrate the performance superiority of the proposed algorithm.
1. Introduction
Image segmentation is the process of partitioning a digital image into multiple regions or objects. Among the existing segmentation methods, the multilevel threshold technique is a simple but effective tool that can extract several distinct objects from the background [1–4]. Searching for the optimal multiple thresholds that yield the desired thresholded classes is therefore an essential issue. Several multiple-threshold selection approaches have been proposed in the literature: [3, 5, 6] proposed methods derived from optimizing an objective function, originally developed for bilevel thresholding and later extended to the multilevel case [1, 2]. However, these methods based on exhaustive search suffer from a common drawback: the computational complexity rises exponentially when extended from bilevel to multilevel thresholding.
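To make the cost blow-up concrete, the sketch below uses Otsu's between-class variance, one common objective function for threshold selection (the cited methods are variations on this theme), together with a brute-force search over all threshold combinations. This is an illustrative sketch, not the authors' implementation; all function names are our own, and a tiny 8-level histogram is used so the exhaustive search stays tractable.

```python
import numpy as np
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu's between-class variance for sorted thresholds over an L-level histogram."""
    p = hist / hist.sum()                    # normalized histogram (class probabilities)
    levels = np.arange(len(hist))
    bounds = [0] + list(thresholds) + [len(hist)]
    mu_total = (p * levels).sum()            # global mean gray level
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                   # probability mass of this class
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2  # weighted squared deviation of class mean
    return var

def exhaustive_search(hist, k):
    """Brute-force optimal k thresholds: O(L^k) candidates, the cost that
    motivates swarm-based optimizers for the multilevel case."""
    return max(combinations(range(1, len(hist)), k),
               key=lambda t: between_class_variance(hist, t))
```

For a 256-level image, `k` thresholds already mean on the order of 256^k evaluations, which is why the paper turns to population-based search instead.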
Recently, to address this issue, swarm intelligence (SI) algorithms have been introduced to the field of image segmentation as powerful optimization tools, owing to their predominant ability to cope with complex nonlinear optimization [7–10]. Akay [11] employed two successful population-based algorithms, particle swarm optimization (PSO) and the artificial bee colony (ABC) algorithm, to speed up multilevel threshold optimization. Ozturk et al. [12] proposed an image clustering approach based on ABC to find well-distinguished clusters. Sarkar and Das [13] presented a 2D-histogram-based multilevel thresholding approach to improve the separation between objects, which demonstrated high effectiveness and efficiency.
It is worth noting that, owing to their simple arithmetic and good robustness, ABC-based approaches have attracted increasing attention from researchers [14–18] and are widely used in various optimization problems [19–24]. However, when solving complex problems, the ABC algorithm still suffers from the major drawbacks of being easily trapped in local minima and losing population diversity [20].
Compared with other evolutionary and swarm intelligence algorithms [25–31], improving the diversity of the swarm and overcoming the local convergence of ABC remain challenging issues in the optimization domain. Thus, this paper presents a novel hierarchical optimization scheme based on divide-and-conquer and crossover strategies, namely, HABC, to extend the ABC framework from flat (one level) to hierarchical (multiple levels). Note that such hierarchical schemes have been applied in optimization algorithms before [32–38]. The essential differences between our proposed scheme and others lie in the following aspects.
(1) The divide-and-conquer strategy with random grouping is incorporated in this hierarchical framework; it decomposes complex high-dimensional vectors into several smaller part-dimensional components that are assigned to the lower level. This enhances the local search ability (exploitation).
(2) An enhanced information exchange strategy, namely, crossover, is applied to the interaction of different species to maintain population diversity. In this case, neighbor bees with higher fitness can be chosen for crossover, which effectively enhances the global search ability and the convergence speed toward the global best solution (exploration).
The rest of the paper is organized as follows. Section 2 describes the canonical ABC algorithm. Section 3 presents the proposed hierarchical artificial bee colony (HABC) model. Section 4 presents the experimental studies of HABC and the compared algorithms, with descriptions of the benchmark functions, experimental settings, and experimental results. Section 5 presents the application to image segmentation. Finally, Section 6 outlines the conclusion.
2. Canonical ABC Algorithm
The artificial bee colony (ABC) algorithm, proposed by Karaboga in 2005 [19] and further developed by Karaboga and Basturk [20, 21] for real-parameter optimization, is one of the most recently introduced swarm-based optimization techniques; it simulates the intelligent foraging behavior of a honeybee swarm.
The entire bee colony contains three groups of bees: employed bees, onlookers, and scouts. Employed bees explore specific food sources and, meanwhile, pass the food information to onlooker bees. The number of employed bees is equal to the number of food sources; in other words, each food source is owned by exactly one employed bee. Onlooker bees then choose good food sources based on the received information and further exploit the food near their selected sources; a food source with higher quality has a larger opportunity to be selected by onlookers. There is a control parameter called "limit" in the canonical ABC algorithm: if a food source does not improve before the limit is exceeded, it is assumed to be abandoned, and the employed bee associated with that food source becomes a scout that searches for a new food source randomly. The fundamental mathematical representations are listed as follows.
Step 1 (initialization phase). In the initialization phase, a group of food sources is generated randomly in the search space using the following equation:

x_ij = x_j^min + rand(0, 1) · (x_j^max − x_j^min),   (1)

where i = 1, 2, …, SN and j = 1, 2, …, D. SN is the number of food sources; D is the number of variables, that is, the problem dimension; x_j^min and x_j^max are the lower and upper bounds of the jth variable, respectively.
Step 2 (employed bees' phase). In the employed bees' phase, a neighbor food source (candidate solution) v_i can be generated from the old food source x_i of each employed bee in its memory using the following expression:

v_ij = x_ij + φ_ij · (x_ij − x_kj),   (2)

where x_k is a randomly selected food source and k must be different from i; j is a randomly chosen index; φ_ij is a random number in the range [−1, 1].
Step 3 (onlooker bees' phase). In the onlooker bees' phase, an onlooker bee selects a food source depending on the probability value p_i associated with that food source; p_i can be calculated as follows:

p_i = fit_i / Σ_{n=1}^{SN} fit_n,   (3)

where fit_i is the fitness value of the ith solution.
Step 4 (scout bees' phase). In the scout bees' phase, if a food source cannot be improved further within a predetermined number of cycles (called "limit" in ABC), the food source is assumed to be abandoned. Its employed bee subsequently becomes a scout, and a new food source is produced randomly in the search space using (1).
The employed, onlooker, and scout bees' phases repeat until the termination condition is met.
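The four phases above can be summarized in a compact sketch. This is a minimal illustrative implementation of the canonical ABC for minimization, not the authors' code; the function signature and all names are our own, and solutions are clipped to the box bounds as one common boundary-handling choice.

```python
import numpy as np

def abc_minimize(f, dim, bounds, sn=20, limit=100, max_iter=200, rng=None):
    """Canonical ABC sketch: sn food sources, employed/onlooker/scout phases."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (sn, dim))      # Step 1: random initialization, eq. (1)
    fvals = np.array([f(x) for x in foods])
    trials = np.zeros(sn, dtype=int)            # stagnation counters for the limit test

    def try_neighbor(i):
        # eq. (2): perturb one random dimension j toward/away from a random partner k
        k = rng.choice([a for a in range(sn) if a != i])
        j = rng.integers(dim)
        v = foods[i].copy()
        v[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        v[j] = np.clip(v[j], lo, hi)
        fv = f(v)
        if fv < fvals[i]:                        # greedy selection
            foods[i], fvals[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(sn):                      # Step 2: employed bees
            try_neighbor(i)
        # eq. (3): fitness-proportional choice (standard fitness transform for minimization)
        fit = np.where(fvals >= 0, 1.0 / (1.0 + fvals), 1.0 + np.abs(fvals))
        for i in rng.choice(sn, size=sn, p=fit / fit.sum()):
            try_neighbor(i)                      # Step 3: onlooker bees
        for i in range(sn):                      # Step 4: scouts reset stale sources
            if trials[i] > limit:
                foods[i] = rng.uniform(lo, hi, dim)
                fvals[i] = f(foods[i])
                trials[i] = 0
    best = int(np.argmin(fvals))
    return foods[best], fvals[best]
```

On a smooth unimodal function such as the sphere, this sketch converges quickly; on complex multimodal landscapes it exhibits the stagnation issues the hierarchical scheme below is designed to address.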
3. Hierarchical Artificial Bee Colony Algorithm
3.1. Hierarchical Multipopulation Optimization Model
As shown in Figure 1, the HABC framework contains two levels to balance exploration and exploitation. In the bottom level, with the variable-decomposing strategy, each subpopulation employs the canonical ABC method to search for its part-dimensional optimum in parallel. That is, in each iteration, the subpopulations in the bottom level generate their best part-dimensional solutions, which are constructed into a complete solution and passed up to the top level. In the top level, the multispecies community adopts the information exchange mechanism based on the crossover operator, by which each species can learn from its neighborhoods in a specific topology. The vector-decomposing strategy and the information exchange crossover operator are described in detail as follows.
3.2. Variables Decomposing Approach
The purpose of this approach is to obtain a finer local search over single dimensions, inspired by the divide-and-conquer principle. Two aspects must be addressed: how to decompose the whole solution vector and how to calculate the fitness of each individual of each subpopulation. The detailed procedure is presented as follows.
Step 1. The simplest grouping method permits a D-dimensional vector to be split into K subcomponents, each corresponding to a subpopulation of s dimensions with M individuals (where D = K × s). The jth subpopulation is denoted P_j, j = 1, 2, …, K.
Step 2. Construct the complete evolving solution Gbest, which is the concatenation of the best subcomponents' solutions, as follows:

Gbest = (P_1.gbest, P_2.gbest, …, P_K.gbest),

where P_j.gbest represents the personal best solution of the jth subpopulation.
Step 3. For each subpopulation P_j, j = 1, 2, …, K, do the following.
(a) At the employed bees' phase, for each individual i, i = 1, 2, …, M, replace the jth component of Gbest with the jth component of individual i to form the new solution newGbest and calculate its fitness. If f(newGbest) < f(Gbest), then Gbest is replaced by newGbest.
(b) Update positions by using (8).
(c) At the onlooker bees' phase, repeat (a)-(b).
Step 4. Memorize the best solution achieved so far; compare the best solution with Gbest and memorize the better one.
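The context-vector evaluation in Steps 2–4 can be sketched as follows. This is an illustrative sketch with our own names (the subpopulation optimizer itself is abstracted away): a part-dimensional individual is scored by splicing its component into the shared best-so-far complete solution and evaluating the result.

```python
import numpy as np

def evaluate_in_context(f, gbest, group_idx, component):
    """Fitness of a part-dimensional individual: splice its component into the
    complete best-so-far vector (the context) and evaluate the full solution."""
    trial = gbest.copy()
    trial[group_idx] = component
    return f(trial)

def cooperative_step(f, gbest, fbest, groups, subpops):
    """One bottom-level pass: each subpopulation j owns only the dimensions in
    groups[j]; improvements are written back into the shared context vector."""
    for idx, pop in zip(groups, subpops):
        for comp in pop:                       # each part-dimensional individual
            fv = evaluate_in_context(f, gbest, idx, comp)
            if fv < fbest:                     # greedy update of Gbest (Step 4)
                gbest = gbest.copy()
                gbest[idx] = comp
                fbest = fv
    return gbest, fbest
```

The key design point is that a subcomponent is never scored in isolation: its fitness always reflects how well it cooperates with the current best components of all other subpopulations.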
Random Grouping of Variables. To increase the probability of two interacting variables being allocated to the same subcomponent, without assuming any prior knowledge of the problem, we adopt the random grouping scheme with dynamically changing group size proposed in [29]. For example, for a problem of 50 dimensions, we can define that
Here, if we randomly decompose the D-dimensional object vector into K subcomponents at each iteration (i.e., we construct each subcomponent by randomly selecting s dimensions from the D-dimensional object vector), the probability of placing two interacting variables into the same subcomponent becomes higher over an increasing number of iterations [30].
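The random grouping with a dynamically changing group size can be sketched as follows. This is an illustrative sketch, not the authors' code; the candidate set of group sizes in the test is our own example for D = 50, chosen so that each candidate divides the dimension evenly.

```python
import numpy as np

def random_grouping(dim, group_sizes, rng=None):
    """Randomly split dimension indices 0..dim-1 into groups of size s, where s
    is drawn anew each iteration from the candidates that divide dim evenly."""
    rng = np.random.default_rng(rng)
    s = int(rng.choice([g for g in group_sizes if dim % g == 0]))
    perm = rng.permutation(dim)               # random shuffle of the dimensions
    return [np.sort(perm[i:i + s]) for i in range(0, dim, s)]
```

Calling this once per iteration re-shuffles which variables share a subcomponent, so any two interacting variables eventually co-occur in some group even when the interaction structure is unknown.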
3.3. The Information Exchange Mechanism Based on Crossover Operator between Multispecies
In the top level, we adopt crossover operator with a specific topology to enhance the information exchange between species, in which each species can learn from its symbiotic partner in the neighborhood. The key operations of this crossover procedure are described in Figure 2.
Step 1 (select elites for the best-performing list (BPL)). Individuals from the current species' neighborhood (i.e., a ring topology) with higher fitness have a larger probability of being selected into the best-performing list (BPL) as elites; the size of the BPL is equal to the current population size.
Step 2 (crossover operation)
Step 2.1. Parents are selected from the BPL's elites using the tournament selection scheme: two elites are selected randomly and their fitness values compared; the one with better fitness becomes a parent. The other parent is selected in the same way.
Step 2.2. Two offspring are created by arithmetic crossover on the selected parents by the following equations [39]:

offspring_1 = λ · parent_1 + (1 − λ) · parent_2,
offspring_2 = (1 − λ) · parent_1 + λ · parent_2,

where λ is a random number in [0, 1], offspring_1 and offspring_2 are the newly produced offspring, and parent_1 and parent_2 are selected from the BPL.
Step 3 (update with different selection schemes). If the population size is N, then the number of replaced individuals is N × CR (CR is a selection rate). The greedy selection mechanism is adopted when replacing the selected individuals. There are three replacement approaches: selecting the best N × CR individuals, a medium level of N × CR individuals, or the worst N × CR individuals. To maintain population diversity, we randomly select one replacement approach at every iteration.
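Steps 1–3 of the information exchange mechanism can be sketched as below. This is an illustrative sketch under our own naming, with lower fitness meaning better; the greedy accept/reject check on offspring is omitted for brevity and noted in a comment.

```python
import numpy as np

def tournament(bpl_fit, rng):
    """Binary tournament: pick two elites at random, keep the fitter (lower is better)."""
    a, b = rng.choice(len(bpl_fit), size=2, replace=False)
    return a if bpl_fit[a] < bpl_fit[b] else b

def crossover_exchange(pop, fit, bpl, bpl_fit, cr, rng=None):
    """Replace N*CR individuals of a species with arithmetic-crossover offspring
    of elites drawn from the neighbor species' best-performing list (BPL)."""
    rng = np.random.default_rng(rng)
    n = int(cr * len(pop))
    order = np.argsort(fit)                   # indices sorted best -> worst
    scheme = rng.integers(3)                  # random choice of replacement target
    mid = len(pop) // 2 - n // 2
    targets = {0: order[:n],                  # best individuals
               1: order[mid:mid + n],         # medium-level individuals
               2: order[-n:]}[scheme]         # worst individuals
    for t in targets:
        p1 = bpl[tournament(bpl_fit, rng)]    # Step 2.1: tournament-selected parents
        p2 = bpl[tournament(bpl_fit, rng)]
        lam = rng.random()                    # Step 2.2: offspring = lam*p1 + (1-lam)*p2
        pop[t] = lam * p1 + (1 - lam) * p2    # (greedy accept/reject omitted here)
    return pop
```

Because the arithmetic offspring is a convex combination of two elites, injected individuals always lie between good solutions of the neighbor species, which is what pulls a stagnating species toward better regions.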
In summary, in order to facilitate the presentation and test formulation below, we define unified parameters for the HABC model in Table 1. According to the process description above, the flowchart of the HABC algorithm is summarized in Figure 3, and the pseudocode is presented in Pseudocode 1.


4. Experimental Study
In the experimental studies, following the no-free-lunch (NFL) theorem [40], a set of eight basic benchmark functions and twelve CEC2005 benchmark functions is employed to evaluate the performance of the HABC algorithm fairly [27], as listed in the Appendix. The number of function evaluations (FEs) is adopted as the time measure criterion in place of the number of iterations.
4.1. Experimental Settings
Eight variants of HABC based on different crossover methods and CR values were compared with six state-of-the-art EA and SI algorithms:
(i) artificial bee colony algorithm (ABC) [20];
(ii) cooperative artificial bee colony algorithm (CABC) [41];
(iii) canonical PSO with constriction factor (PSO) [42];
(iv) cooperative PSO (CPSO) [28];
(v) standard genetic algorithm (GA) [39];
(vi) covariance matrix adaptation evolution strategy (CMA-ES) [43].
In all experiments in this section, the values of the common parameters used in each algorithm, such as the population size and total generation number, are chosen to be the same: the population size is set to 50 and the maximum evaluation number to 100000. For the continuous test functions used in this paper, the dimensions are all set to 50.
All the control parameters for the EA and SI algorithms are set to the defaults of their original literature: the initialization conditions of CMA-ES are the same as in [43], with the default number of offspring candidate solutions generated per time step; for ABC and CABC, the limit parameter is set to SN × D, where D is the dimension of the problem and SN is the number of employed bees. The split factor for CABC and CPSO is equal to the number of dimensions [28, 41]. For canonical PSO and CPSO, the learning rates c1 and c2 are both set to 2.05 and the constriction factor χ = 0.729. For GA, an intermediate crossover rate of 0.8, a Gaussian mutation rate of 0.01, and a global elite operation with a rate of 0.06 are adopted [39]. For the proposed HABC, the species number, split factor s, and selection rate CR are tuned first in the next section.
4.2. Parameter Sensitivity Analysis of HABC
4.2.1. Effects of Species Number
The species number of the top level in HABC needs to be tuned. Three 50D basic benchmark functions and five 50D benchmark functions are employed to investigate the impact of this parameter. CR is set to 1 and each function is run 30 times. As shown in Table 2, the proposed HABC obtained better optimal solutions and stability on the involved test functions as the species number increased. However, the performance improvement from this parameter is not very remarkable.

4.2.2. Choices of Crossover Mode and Value
The benchmark functions are adopted to evaluate the performance of HABC variants with different crossover modes and CR values. All the functions with 30 dimensions are run 30 times. From Table 3, we can observe that the HABC variant with CR = 1 performed best on four of the five functions, while CR = 0.05 gave the best result on one function. Therefore, the selection rate CR in each crossover operation is set to 1 as an optimal value in all following experiments.

4.2.3. Effects of Dynamically Changing Group Size
Obviously, the choice of the split factor s (i.e., the subpopulation number) has a significant impact on the performance of the proposed algorithm. In order to vary s during a run, we defined a candidate set of split factors for the 50D function optimization and set s by randomly choosing one element of this set; then, the HABC with dynamically changing s is compared with fixed split numbers on these benchmark functions over 30 runs. From the results listed in Table 4, we can observe that the performance is sensitive to the predefined s value. HABC using a dynamically changing s value consistently gave a better performance than the other variants except on two functions. Moreover, in most real-world problems we do not have any prior knowledge of the optimal s value, so the random grouping scheme can be a suitable solution.
