Discrete Dynamics in Nature and Society


Research Article | Open Access


Maowei He, Kunyuan Hu, Yunlong Zhu, Lianbo Ma, Hanning Chen, Yan Song, "Hierarchical Artificial Bee Colony Optimizer with Divide-and-Conquer and Crossover for Multilevel Threshold Image Segmentation", Discrete Dynamics in Nature and Society, vol. 2014, Article ID 941534, 22 pages, 2014. https://doi.org/10.1155/2014/941534

Hierarchical Artificial Bee Colony Optimizer with Divide-and-Conquer and Crossover for Multilevel Threshold Image Segmentation

Academic Editor: Zbigniew Leśniak
Received: 11 Apr 2014
Revised: 24 Jun 2014
Accepted: 26 Jun 2014
Published: 02 Sep 2014

Abstract

This paper presents a novel optimization algorithm, the hierarchical artificial bee colony optimizer (HABC), for multilevel threshold image segmentation. HABC employs a pool of optimal foraging strategies to extend the classical artificial bee colony framework into a cooperative, hierarchical one. In the proposed hierarchical model, the higher-level species incorporate an enhanced information exchange mechanism based on a crossover operator to strengthen the global search ability between species. At the bottom level, following a divide-and-conquer approach, each subpopulation runs the original ABC method in parallel on a part-dimensional subproblem, and the partial solutions are aggregated into a complete solution for the upper level. Experimental results comparing HABC with several successful EA and SI algorithms on a set of benchmarks demonstrate the effectiveness of the proposed algorithm. Furthermore, we applied HABC to the multilevel image segmentation problem, where experiments on a variety of images demonstrate its superior performance.

1. Introduction

Image segmentation is the process of partitioning a digital image into multiple regions or objects. Among existing segmentation methods, multilevel thresholding is a simple but effective tool that can extract several distinct objects from the background [14]. Searching for the optimal multiple thresholds that yield the desired thresholded classes is therefore an essential issue. Several multiple-threshold selection approaches have been proposed in the literature; [3, 5, 6] derived methods from optimizing an objective function, originally developed for bilevel thresholding and later extended to the multilevel case [1, 2]. However, these exhaustive-search-based methods share a common drawback: their computational complexity rises exponentially when extended from bilevel to multilevel thresholding.

Recently, to address this issue, swarm intelligence (SI) algorithms have been introduced to the field of image segmentation as powerful optimization tools, owing to their predominant ability to cope with complex nonlinear optimization problems [7–10]. Akay [11] employed two successful population-based algorithms, particle swarm optimization (PSO) and the artificial bee colony (ABC) algorithm, to speed up multilevel threshold optimization. Ozturk et al. [12] proposed an ABC-based image clustering approach to find well-distinguished clusters. Sarkar and Das [13] presented a 2D-histogram-based multilevel thresholding approach to improve the separation between objects, which demonstrated high effectiveness and efficiency.

It is worth noting that, owing to its arithmetic simplicity and good robustness, ABC-based approaches have attracted increasing attention from researchers [14–18] and are widely used in various optimization problems [19–24]. However, when solving complex problems, the ABC algorithm still suffers from two major drawbacks: it is easily trapped in local minima and it loses population diversity [20].

As with other evolutionary and swarm intelligence algorithms [25–31], how to improve swarm diversity and overcome the local convergence of ABC remains a challenging issue in the optimization domain. Thus, this paper presents a novel hierarchical optimization scheme based on divide-and-conquer and crossover strategies, namely, HABC, which extends the ABC framework from a flat (one-level) to a hierarchical (multilevel) structure. Note that such hierarchical schemes have already been applied in optimization algorithms [32–38]. The essential differences between our proposed scheme and others lie in the following aspects.
(1) The divide-and-conquer strategy with random grouping is incorporated into the hierarchical framework; it decomposes complex high-dimensional vectors into several smaller part-dimensional components that are assigned to the lower level. This enhances the local search ability (exploitation).
(2) An enhanced information exchange strategy, namely, crossover, is applied to the interaction of different populations to maintain population diversity. Neighbor bees with higher fitness can be chosen for crossover, which effectively enhances the global search ability and the convergence speed toward the global best solution (exploration).

The rest of the paper is organized as follows. Section 2 describes the canonical ABC algorithm. Section 3 presents the proposed hierarchical artificial bee colony (HABC) model. Section 4 presents experimental studies of the proposed HABC and the other algorithms, with descriptions of the involved benchmark functions, experimental settings, and results. Its application to image segmentation is presented in Section 5. Finally, Section 6 outlines the conclusion.

2. Canonical ABC Algorithm

The artificial bee colony (ABC) algorithm, proposed by Karaboga in 2005 [19] and further developed by Karaboga and Basturk [20, 21] for real-parameter optimization, is one of the most recently introduced swarm-based optimization techniques; it simulates the intelligent foraging behavior of a honeybee swarm.

The entire bee colony contains three groups of bees: employed bees, onlookers, and scouts. Employed bees explore specific food sources and, meanwhile, pass the food information to onlooker bees. The number of employed bees equals the number of food sources; in other words, each food source is assigned exactly one employed bee. Onlooker bees then choose good food sources based on the received information and further exploit the food near their selected source; a food source with higher quality has a larger chance of being selected by onlookers. There is a control parameter called "limit" in the canonical ABC algorithm: if a food source has not improved once the limit is exceeded, it is assumed to be abandoned by its employed bee, and that employed bee becomes a scout that searches for a new food source randomly. The fundamental mathematical representations are listed as follows.

Step 1 (initialization phase). In the initialization phase, a group of food sources is generated randomly in the search space using the following equation:

\[ x_{ij} = x_j^{\min} + \mathrm{rand}(0,1)\left(x_j^{\max} - x_j^{\min}\right), \tag{1} \]

where \(i = 1, 2, \ldots, SN\) and \(j = 1, 2, \ldots, D\). \(SN\) is the number of food sources, \(D\) is the number of variables (i.e., the problem dimension), and \(x_j^{\min}\) and \(x_j^{\max}\) are the lower and upper bounds of the \(j\)th variable, respectively.

Step 2 (employed bees' phase). In the employed bees' phase, a neighbor food source (candidate solution) \(v_i\) is generated from the old food source \(x_i\) in each employed bee's memory using the following expression:

\[ v_{ij} = x_{ij} + \phi_{ij}\left(x_{ij} - x_{kj}\right), \tag{2} \]

where \(x_k\) is a randomly selected food source that must be different from \(x_i\), \(j\) is a randomly chosen index, and \(\phi_{ij}\) is a random number in the range \([-1, 1]\).

Step 3 (onlooker bees' phase). In the onlooker bees' phase, an onlooker bee selects a food source depending on the probability value \(p_i\) associated with that food source, which can be calculated as follows:

\[ p_i = \frac{\mathrm{fit}_i}{\sum_{n=1}^{SN} \mathrm{fit}_n}, \tag{3} \]

where \(\mathrm{fit}_i\) is the fitness value of the \(i\)th solution.

Step 4 (scout bees' phase). In the scout bees' phase, if a food source cannot be improved within a predetermined number of cycles (called "limit" in ABC), the food source is assumed to be abandoned. Its employed bee subsequently becomes a scout, and a new food source is produced randomly in the search space using (1).

The employed, onlooker, and scout bees' phases repeat until the termination condition is met.
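The four phases above can be condensed into a short, runnable sketch. This is a minimal illustration rather than the paper's implementation: the function name `abc_minimize`, the parameter defaults, and the common fitness transform 1/(1+f) used for minimization in the onlooker phase are our assumptions.

```python
import random

def abc_minimize(f, dim, lo, hi, sn=20, limit=50, max_evals=20000, seed=1):
    """Minimal sketch of the canonical ABC (Steps 1-4 above), minimizing f."""
    rng = random.Random(seed)
    # Step 1: x_ij = lo + rand(0,1) * (hi - lo)
    foods = [[lo + rng.random() * (hi - lo) for _ in range(dim)] for _ in range(sn)]
    fits = [f(x) for x in foods]
    trials = [0] * sn
    evals = sn

    def try_neighbor(i):
        nonlocal evals
        # Eq. (2): v_ij = x_ij + phi * (x_ij - x_kj) on one random dimension j
        k = rng.choice([a for a in range(sn) if a != i])
        j = rng.randrange(dim)
        v = foods[i][:]
        v[j] = min(hi, max(lo, v[j] + rng.uniform(-1, 1) * (v[j] - foods[k][j])))
        fv = f(v); evals += 1
        if fv < fits[i]:                      # greedy selection
            foods[i], fits[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    while evals < max_evals:
        for i in range(sn):                   # Step 2: employed bees
            try_neighbor(i)
        # Step 3: onlookers pick sources with probability proportional to fitness
        weights = [1.0 / (1.0 + ft) for ft in fits]
        for _ in range(sn):
            i = rng.choices(range(sn), weights=weights)[0]
            try_neighbor(i)
        for i in range(sn):                   # Step 4: scouts re-initialize
            if trials[i] > limit:
                foods[i] = [lo + rng.random() * (hi - lo) for _ in range(dim)]
                fits[i] = f(foods[i]); evals += 1
                trials[i] = 0
    best = min(range(sn), key=lambda i: fits[i])
    return foods[best], fits[best]

# usage: minimize the sphere function in 5 dimensions
sol, val = abc_minimize(lambda x: sum(v * v for v in x), dim=5, lo=-5.0, hi=5.0)
```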

3. Hierarchical Artificial Bee Colony Algorithm

3.1. Hierarchical Multipopulation Optimization Model

As shown in Figure 1, the HABC framework contains two levels to balance exploring and exploiting ability. In the bottom level, with the variable-decomposing strategy, each subpopulation employs the canonical ABC method to search for its part-dimensional optimum in parallel. That is, in each iteration the bottom-level subpopulations generate best solutions, which are concatenated into a complete solution and passed up to the top level. In the top level, the multispecies community adopts an information exchange mechanism based on a crossover operator, by which each species can learn from its neighbors in a specific topology. The vector-decomposing strategy and the crossover-based information exchange operator are described in detail as follows.

3.2. Variables Decomposing Approach

The purpose of this approach, inspired by divide-and-conquer, is to obtain a finer local search in single dimensions. Two aspects must be addressed: how to decompose the whole solution vector, and how to calculate the fitness of each individual of each subpopulation. The detailed procedure is presented as follows.

Step 1. The simplest grouping method permits a D-dimensional vector to be split into K subcomponents, each corresponding to a subpopulation of s dimensions with M individuals (where D = K × s). The jth subpopulation is denoted as P_j, j = 1, 2, …, K.

Step 2. Construct the complete evolving solution Gbest as the concatenation of the best subcomponents' solutions:

\[ \mathrm{Gbest} = \left(P_1.\mathrm{best}, P_2.\mathrm{best}, \ldots, P_K.\mathrm{best}\right), \]

where \(P_j.\mathrm{best}\) represents the personal best solution of the jth subpopulation.

Step 3. For each component P_j, j = 1, 2, …, K, do the following.
(a) At the employed bees' phase, for each individual i (i = 1, 2, …, M), replace the jth component of Gbest with the corresponding component of individual i and calculate the fitness of the new solution, f(newGbest). If f(newGbest) < f(Gbest), then Gbest is replaced by newGbest.
(b) Update positions by using (8).
(c) At the onlooker bees' phase, repeat (a)-(b).

Step 4. Compare the best solution achieved so far with Gbest and memorize the better one.
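Step 3(a) evaluates a part-dimensional candidate by splicing it into the complete solution Gbest and measuring the resulting fitness. A minimal sketch of that evaluation follows; the helper name `evaluate_in_context` and the toy 6-D sphere example are our illustrations, not the paper's code.

```python
def evaluate_in_context(f, gbest, group_indices, component):
    """Evaluate a part-dimensional candidate by splicing it into the
    complete solution Gbest (the context vector), as in Step 3(a)."""
    trial = gbest[:]                      # copy the complete solution
    for idx, value in zip(group_indices, component):
        trial[idx] = value                # replace only this group's dimensions
    return f(trial), trial

# usage: a 6-D sphere, a context vector, and a candidate for dimensions [1, 4]
sphere = lambda x: sum(v * v for v in x)
gbest = [1.0, 1.0, 0.0, 0.0, 1.0, 0.0]
fit, trial = evaluate_in_context(sphere, gbest, [1, 4], [0.0, 0.0])
if fit < sphere(gbest):
    gbest = trial                         # greedy update of the complete solution
```

The candidate zeroes dimensions 1 and 4, so the complete solution's fitness drops from 3.0 to 1.0 and Gbest is replaced.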

Random Grouping of Variables. To increase the probability that two interacting variables are allocated to the same subcomponent, without assuming any prior knowledge of the problem, we adopt the random grouping scheme with dynamically changing group size proposed by [29]. For example, for a problem of 50 dimensions, we can define a set of candidate group sizes and randomly choose one of them at each iteration.

Here, if we randomly decompose the D-dimensional object vector into subcomponents at each iteration (i.e., we construct each subcomponent by randomly selecting s dimensions from the D-dimensional object vector), the probability of placing two interacting variables into the same subcomponent becomes higher over an increasing number of iterations [30].
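The random grouping scheme itself is simple: reshuffle the dimension indices each iteration and cut them into groups of a randomly drawn size. A small sketch, where the candidate set `sizes` is illustrative rather than the paper's actual set:

```python
import random

def random_groups(dim, group_size, rng):
    """Randomly partition the dimension indices into subcomponents,
    re-drawn every iteration so interacting variables eventually co-occur."""
    indices = list(range(dim))
    rng.shuffle(indices)
    return [indices[i:i + group_size] for i in range(0, dim, group_size)]

rng = random.Random(0)
sizes = [2, 5, 10, 25, 50]               # illustrative candidate group sizes
s = rng.choice(sizes)                    # dynamically chosen split each iteration
groups = random_groups(50, s, rng)
```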

3.3. The Information Exchange Mechanism Based on Crossover Operator between Multispecies

In the top level, we adopt crossover operator with a specific topology to enhance the information exchange between species, in which each species can learn from its symbiotic partner in the neighborhood. The key operations of this crossover procedure are described in Figure 2.

Step 1 (select elites for the best-performing list (BPL)). Individuals from the current species' neighborhood (i.e., a ring topology) with higher fitness have a larger probability of being selected into the best-performing list (BPL) as elites; the size of the BPL is equal to the current population size.

Step  2 (crossover operation)

Step 2.1. Parents are selected from the BPL's elites using a tournament selection scheme: two elites are selected at random and their fitness values are compared; the one with better fitness is taken as a parent. The other parent is selected in the same way.

Step 2.2. Two offspring are created by arithmetic crossover on the selected parents by the following equations [39]:

\[ \mathrm{offspring}_1 = r \cdot \mathrm{parent}_1 + (1 - r) \cdot \mathrm{parent}_2, \qquad \mathrm{offspring}_2 = r \cdot \mathrm{parent}_2 + (1 - r) \cdot \mathrm{parent}_1, \tag{4} \]

where \(r\) is a random number in \([0, 1]\), \(\mathrm{offspring}_1\) and \(\mathrm{offspring}_2\) are the newly produced offspring, and \(\mathrm{parent}_1\) and \(\mathrm{parent}_2\) are randomly selected from the BPL.

Step 3 (update with different selection schemes). If the population size is M, then the number of replaced individuals is M × CR (CR is a selection rate). The greedy selection mechanism is adopted when replacing the selected individuals. There are three replacement approaches: selecting the best individuals, a medium level of individuals, or the worst individuals. To maintain population diversity, we randomly select one replacement approach at every iteration.
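Steps 2.1-2.2 can be sketched as follows. The tournament is the binary form described above, and the elite list, fitness values, and random seed are illustrative assumptions. Note that the two offspring are complementary convex combinations, so componentwise they sum to the sum of the parents.

```python
import random

def tournament(bpl, fitness, rng):
    """Binary tournament (Step 2.1): the better of two random elites is a parent."""
    a, b = rng.sample(range(len(bpl)), 2)
    return bpl[a] if fitness[a] < fitness[b] else bpl[b]   # minimization

def arithmetic_crossover(p1, p2, rng):
    """Step 2.2: offspring are convex combinations of the two parents."""
    r = rng.random()
    c1 = [r * x + (1 - r) * y for x, y in zip(p1, p2)]
    c2 = [r * y + (1 - r) * x for x, y in zip(p1, p2)]
    return c1, c2

rng = random.Random(3)
bpl = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]     # illustrative elite list
fitness = [0.0, 2.0, 8.0]                      # sphere fitness of each elite
p1 = tournament(bpl, fitness, rng)
p2 = tournament(bpl, fitness, rng)
child1, child2 = arithmetic_crossover(p1, p2, rng)
```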

In summary, to facilitate the presentation and test formulation below, we define a unified set of parameters for the HABC model in Table 1. Following the process description above, the flowchart of the HABC algorithm is summarized in Figure 3, and the pseudocode is presented in Pseudocode 1.



Table 1: Unified parameters of the HABC model.

- The number of species in the top level
- The split factor dividing the D dimensions into s-dimension subcomponents
- Subpopulation size
- The population corresponding to an s-dimensional subcomponent
- Dimensions of the optimization problem
- Top-level population (species) ID counter
- Bottom-level population ID counter
- The jth population of the ith species
- CR: selection rate for replacing the selected individuals with the offspring
- The hierarchical interaction topology of HABC
- The objective optimization goals

HABC algorithm
Set t := 0;
INITIALIZE.
  Randomly divide the whole population into species, each possessing sub-populations, each of which possesses a group of bees;
  Randomize each sub-population's food source positions by using (1);
  Assign each sub-population s dimensions (where s is randomly chosen from the candidate group sizes).
WHILE (the termination conditions are not met)
  for each species, do
    Initialize the complete D-dimensional vector Gbest as the concatenation of the sub-populations' best solutions;
    Randomly permute all dimension indices;
    WHILE (the termination conditions are not met)
      for each sub-population, do
        Employed Bees' Phase:
          for each employed bee:
            Produce a new solution by using (2);
            Evaluate the new solution;
            Apply greedy selection, keeping the better solution;
          end
          Calculate the probability values of the solutions by using (3);
        Onlooker Bees' Phase:
          for each onlooker bee:
            Probabilistically choose a solution according to its probability value;
            Produce a new solution by using (2);
            Evaluate the new solution;
            Apply greedy selection, keeping the better solution;
          end
        Scout Bees' Phase:
          Re-initialize solutions not improved for Limit cycles by using (1);
        Memorize the best solution;
        for each individual of the sub-population, do
          Place its best solution into the complete solution newGbest;
          If f(newGbest) < f(Gbest)
            then Gbest := newGbest;
        end
      end
    end WHILE
    Select elites from the neighborhood of the current species
      (the top-performing individuals in the ring topology);
    Crossover and mutation by (4);
    Update the species by applying the greedy selection mechanism;
  end
  Find the global best solution gbest in the whole population;
  Memorize the best solution of each species;
  Set t := t + 1;
end WHILE

4. Experimental Study

In the experimental studies, in keeping with the no free lunch (NFL) theorem [40], a set of eight basic benchmark functions and twelve CEC2005 benchmark functions is employed to evaluate the performance of the HABC algorithm fairly [27], as listed in the Appendix. The number of function evaluations (FEs) is adopted as the time measure criterion in place of the number of iterations.

4.1. Experimental Settings

Eight variants of HABC based on different crossover methods and CR values were compared with six state-of-the-art EA and SI algorithms:
(i) artificial bee colony algorithm (ABC) [20];
(ii) cooperative artificial bee colony algorithm (CABC) [41];
(iii) canonical PSO with constriction factor (PSO) [42];
(iv) cooperative PSO (CPSO) [28];
(v) standard genetic algorithm (GA) [39];
(vi) covariance matrix adaptation evolution strategy (CMA-ES) [43].

In all experiments in this section, the common parameters of each algorithm, such as population size and total generation number, are set to the same values: the population size is 50 and the maximum number of evaluations is 100000. For the continuous test functions used in this paper, the dimension is set to 50.

All control parameters of the EA and SI algorithms are set to the defaults of their original publications: the initialization conditions of CMA-ES are the same as in [43], and the number of offspring candidate solutions generated per time step follows [43]; for ABC and CABC, the limit parameter is set to SN × D, where D is the dimension of the problem and SN is the number of employed bees; the split factor for CABC and CPSO is equal to the dimension [28, 41]; for canonical PSO and CPSO, the learning rates c1 and c2 are both set to 2.05 and the constriction factor to 0.729; for the GA, an intermediate crossover rate of 0.8, a Gaussian mutation rate of 0.01, and a global elite operation with a rate of 0.06 are adopted [39]. For the proposed HABC, the species number, split factor, and selection rate CR are tuned first in the next section.

4.2. Parameter Sensitivity Analysis of HABC
4.2.1. Effects of Species Number

The species number in the top level of HABC needs to be tuned. Three 50D basic benchmark functions and five 50D benchmark functions are employed to investigate the impact of this parameter. CR is set to 1 and each function is run 30 times. As shown in Table 2, our proposed HABC obtains better optimal solutions and stability on the involved test functions as the species number increases. However, the performance improvement brought by this parameter is not very remarkable.


Table 2: Results of HABC with the species number set to 2, 6, 10, 14, and 18 on the test functions (mean and standard deviation over 30 runs; parenthesized values give the number of function evaluations needed to reach a mean error of 0).

4.2.2. Choice of Crossover Mode and CR Value

Several benchmark functions are adopted to evaluate the performance of HABC variants with different crossover CR values. All functions are implemented with 30 dimensions over 30 runs. From Table 3, we can observe that the HABC variant with CR equal to 1 performed best on four of the five functions, while CR equal to 0.05 obtained the best result on the remaining one. Therefore, the selection rate CR in each crossover operation is set to 1 as the optimal value in all following experiments.


Table 3: Results of the HABC variants with CR set to 0.05, 0.1, 0.2, 0.4, 0.6, and 1 (mean and standard deviation over 30 runs; parenthesized values give the number of function evaluations needed to reach a mean error of 0).

4.2.3. Effects of Dynamically Changing Group Size

Obviously, the choice of the split factor (i.e., the subpopulation number) has a significant impact on the performance of the proposed algorithm. To vary it during a run, we defined a set of candidate values for 50D function optimization and randomly chose one element of the set at each iteration; HABC with this dynamically changing split factor is then compared with fixed split numbers on these benchmark functions over 30 runs. From the results listed in Table 4, we observe that performance is sensitive to the predefined value: HABC with a dynamically changing value consistently gave better performance than the other variants, except on two functions. Moreover, in most real-world problems we have no prior knowledge of the optimal value, so the random grouping scheme is a suitable solution.


Table 4: Results of HABC with fixed split numbers versus a dynamically changing group size (mean and standard deviation over 30 runs; parenthesized values give the number of function evaluations needed to reach a mean error of 0).