Discrete Dynamics in Nature and Society
Volume 2014 (2014), Article ID 941534, 22 pages
Hierarchical Artificial Bee Colony Optimizer with Divide-and-Conquer and Crossover for Multilevel Threshold Image Segmentation
1Department of Information Service & Intelligent Control, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2University of Chinese Academy of Sciences, Beijing 100039, China
Received 11 April 2014; Revised 24 June 2014; Accepted 26 June 2014; Published 2 September 2014
Academic Editor: Zbigniew Leśniak
Copyright © 2014 Maowei He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), for multilevel threshold image segmentation, which employs a pool of optimal foraging strategies to extend the classical artificial bee colony framework into a cooperative and hierarchical fashion. In the proposed hierarchical model, the higher-level species incorporate an enhanced information exchange mechanism based on the crossover operator to strengthen the global search ability between species. In the bottom level, following a divide-and-conquer approach, each subpopulation runs the original ABC method in parallel on part-dimensional components, which are then aggregated into complete solutions for the upper level. Experimental results comparing HABC with several successful EA and SI algorithms on a set of benchmarks demonstrate the effectiveness of the proposed algorithm. Furthermore, we applied HABC to the multilevel image segmentation problem, where experimental results on a variety of images demonstrate its performance superiority.
Image segmentation is the process of partitioning a digital image into multiple regions or objects. Among the existing segmentation methods, the multilevel threshold technique is a simple but effective tool that can extract several distinct objects from the background [1–4]. Searching for the optimal multiple thresholds that yield the desired thresholded classes is therefore an essential issue. Several approaches for selecting multiple thresholds have been proposed in the literature: [3, 5, 6] proposed methods derived from optimizing an objective function, which were originally developed for bilevel threshold and later extended to multilevel threshold [1, 2]. However, these methods, based on exhaustive search, suffer from a common drawback: the computational complexity rises exponentially when extended from bilevel to multilevel threshold.
Recently, to address this issue, swarm intelligence (SI) algorithms have been introduced to the field of image segmentation as powerful optimization tools, owing to their ability to cope with complex nonlinear optimization problems [7–10]. Akay  employed two successful population-based algorithms, particle swarm optimization (PSO) and the artificial bee colony (ABC) algorithm, to speed up multilevel threshold optimization. Ozturk et al.  proposed an image clustering approach based on ABC to find well-distinguished clusters. Sarkar and Das  presented a 2D-histogram-based multilevel thresholding approach to improve the separation between objects, which demonstrated high effectiveness and efficiency.
It is worth noting that, due to their simple arithmetic and good robustness, ABC-based approaches have attracted increasing attention from researchers [14–18] and are widely used in various optimization problems [19–24]. However, when solving complex problems, the ABC algorithm still suffers from the major drawbacks of being easily trapped in local minima and losing population diversity .
Compared with other evolutionary and swarm intelligence algorithms [25–31], improving swarm diversity and overcoming the local convergence of ABC remain challenging issues in the optimization domain. Thus, this paper presents a novel hierarchical optimization scheme based on divide-and-conquer and crossover strategies, namely, HABC, to extend the ABC framework from flat (one level) to hierarchical (multiple levels). Note that such hierarchical schemes have been applied in optimization algorithms [32–38]. The essential differences between our proposed scheme and others lie in the following aspects.
(1) The divide-and-conquer strategy with random grouping is incorporated in this hierarchical framework, which decomposes complex high-dimensional vectors into several smaller part-dimensional components that are assigned to the lower level. This enhances the local search ability (exploitation).
(2) The enhanced information exchange strategy, namely, crossover, is applied to the interaction of different populations to maintain population diversity. In this case, neighbor bees with higher fitness can be chosen for crossover, which effectively enhances the global search ability and the convergence speed toward the global best solution (exploration).
The rest of the paper is organized as follows. Section 2 describes the canonical ABC algorithm. Section 3 presents the proposed hierarchical artificial bee colony (HABC) model. Section 4 reports experimental studies of HABC and the other algorithms, with descriptions of the benchmark functions, experimental settings, and results. Its application to image segmentation is presented in Section 5. Finally, Section 6 outlines the conclusion.
2. Canonical ABC Algorithm
The artificial bee colony (ABC) algorithm, proposed by Karaboga in 2005  and further developed by Karaboga and Basturk [20, 21] for real-parameter optimization, is one of the most recently introduced swarm-based optimization techniques; it simulates the intelligent foraging behavior of a honeybee swarm.
The entire bee colony contains three groups of bees: employed bees, onlookers, and scouts. Employed bees explore specific food sources and, meanwhile, pass the food information to onlooker bees. The number of employed bees is equal to the number of food sources; in other words, each food source has exactly one employed bee. Onlooker bees then choose good food sources based on the received information and further exploit the food near their selected food source. A food source with higher quality has a larger probability of being selected by onlookers. There is a control parameter called "limit" in the canonical ABC algorithm. If a food source has not improved after limit is exceeded, it is assumed to be abandoned by its employed bee, and that employed bee becomes a scout searching for a new food source randomly. The fundamental mathematical representations are listed as follows.
Step 1 (initialization phase). In the initialization phase, a group of food sources is generated randomly in the search space using the following equation:
$x_{ij} = x_j^{\min} + \text{rand}(0,1)\,(x_j^{\max} - x_j^{\min}),$
where $i = 1, 2, \ldots, SN$ and $j = 1, 2, \ldots, D$. $SN$ is the number of food sources and $D$ is the number of variables, that is, the problem dimension. $x_j^{\min}$ and $x_j^{\max}$ are the lower and upper bounds of the $j$th variable, respectively.
Step 2 (employed bees’ phase). In the employed bees’ phase, a neighbor food source (candidate solution) $v_{ij}$ is generated from the old food source $x_{ij}$ of each employed bee in its memory using the following expression:
$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}),$
where $x_k$ is a randomly selected food source that must be different from $x_i$, $j$ is a randomly chosen index, and $\phi_{ij}$ is a random number in the range $[-1, 1]$.
Step 3 (onlooker bees’ phase). In the onlooker bees’ phase, an onlooker bee selects a food source depending on the probability value $p_i$ associated with that food source; $p_i$ can be calculated as follows:
$p_i = \frac{fit_i}{\sum_{n=1}^{SN} fit_n},$
where $fit_i$ is the fitness value of the $i$th solution.
Step 4 (scout bees’ phase). In the scout bees’ phase, if a food source cannot be improved further within a predetermined number of cycles (called “limit” in ABC), the food source is assumed to be abandoned, and its employed bee subsequently becomes a scout. A new food source is then produced randomly in the search space using (1).
The employed, onlooker, and scout bees’ phases are repeated until the termination condition is met.
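The four phases above can be condensed into a minimal, illustrative Python sketch of the canonical ABC loop. The function and parameter names (`abc_minimize`, `sn`, `limit`) are our own, not from the paper, and the fitness transform used for the onlooker roulette wheel is the common $1/(1+f)$ convention for minimization:

```python
import random

def abc_minimize(f, dim, lo, hi, sn=10, limit=50, max_evals=20000):
    # Step 1: random initialization of SN food sources (one employed bee each).
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(sn)]
    fits = [f(x) for x in foods]
    trials = [0] * sn
    evals = sn

    def neighbor(i):
        # v_ij = x_ij + phi * (x_ij - x_kj) for one random dimension j, k != i.
        k = random.choice([m for m in range(sn) if m != i])
        j = random.randrange(dim)
        v = foods[i][:]
        v[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        v[j] = min(max(v[j], lo), hi)
        return v

    def greedy(i):
        # Evaluate a neighbor and keep it only if it improves source i.
        nonlocal evals
        v = neighbor(i)
        fv = f(v)
        evals += 1
        if fv < fits[i]:
            foods[i], fits[i] = v, fv
            trials[i] = 0
        else:
            trials[i] += 1

    while evals < max_evals:
        for i in range(sn):                      # Step 2: employed bees
            greedy(i)
        # Step 3: onlookers pick sources with probability proportional to quality.
        quality = [1.0 / (1.0 + ft) if ft >= 0 else 1.0 + abs(ft) for ft in fits]
        total = sum(quality)
        for _ in range(sn):
            r, acc = random.uniform(0, total), 0.0
            for i in range(sn):
                acc += quality[i]
                if acc >= r:
                    greedy(i)
                    break
        for i in range(sn):                      # Step 4: scouts replace
            if trials[i] > limit:                # exhausted sources randomly
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fits[i] = f(foods[i])
                trials[i] = 0
                evals += 1

    i_best = min(range(sn), key=lambda i: fits[i])
    return foods[i_best], fits[i_best]
```

For instance, minimizing a 5-dimensional Sphere function with these settings reliably drives the best fitness toward zero.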
3. Hierarchical Artificial Bee Colony Algorithm
3.1. Hierarchical Multipopulation Optimization Model
As shown in Figure 1, the HABC framework contains two levels to balance exploration and exploitation. In the bottom level, with the variable-decomposing strategy, each subpopulation employs the canonical ABC method to search for the part-dimensional optimum in parallel. That is, in each iteration, the subpopulations in the bottom level generate best solutions, which are concatenated into complete solutions and passed up to the species of the top level. In the top level, the multispecies community adopts an information exchange mechanism based on the crossover operator, by which each species can learn from its neighbors in a specific topology. The vector-decomposing strategy and the information exchange crossover operator are described in detail as follows.
3.2. Variables Decomposing Approach
The purpose of this approach, inspired by divide-and-conquer, is to obtain a finer local search in single dimensions. Two aspects must be addressed: how to decompose the whole solution vector and how to calculate the fitness of each individual of each subpopulation. The detailed procedure is presented as follows.
Step 1. The simplest grouping method is to split a $D$-dimensional vector into $K$ subcomponents, each corresponding to a subpopulation of $s$ dimensions with $M$ individuals (where $D = K \times s$). The $j$th subpopulation is denoted as $P_j$, $j = 1, 2, \ldots, K$.
Step 2. Construct the complete evolving solution Gbest, which is the concatenation of the best subcomponents’ solutions, as follows:
$\text{Gbest} = (P_1.\text{best}, P_2.\text{best}, \ldots, P_K.\text{best}),$
where $P_j.\text{best}$ represents the personal best solution of the $j$th subpopulation.
Step 3. For each component $P_j$, $j = 1, 2, \ldots, K$, do the following.
(a) At the employed bees’ phase, for each individual $i$ of $P_j$, replace the $j$th component of Gbest with the corresponding component of individual $i$, obtaining newGbest. Calculate the fitness of the new solution; if $f(\text{newGbest}) < f(\text{Gbest})$, then Gbest is replaced by newGbest.
(b) Update positions by using (8).
(c) At the onlooker bees’ phase, repeat (a)-(b).
Step 4. Memorize the best solution achieved so far; compare the best solution with Gbest and memorize the better one.
Random Grouping of Variables. To increase the probability of two interacting variables being allocated to the same subcomponent, without assuming any prior knowledge of the problem, we adopt the random grouping scheme proposed by , dynamically changing the group size. For example, for a problem of 50 dimensions, we can define that
Here, if we randomly decompose the $D$-dimensional object vector into subcomponents at each iteration (i.e., we construct each subcomponent by randomly selecting $s$ dimensions from the $D$-dimensional object vector), the probability of placing two interacting variables into the same subcomponent becomes higher over an increasing number of iterations .
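The decomposition described above can be illustrated with a short Python sketch. The helper names (`random_groups`, `evaluate_component`) are hypothetical; the context-vector fitness evaluation follows Step 3 of Section 3.2, splicing a candidate subcomponent into the current complete best solution Gbest before evaluating the full objective:

```python
import random

def random_groups(dim, s):
    # Shuffle the dimension indices and split them into K = dim // s
    # disjoint groups of s dimensions each (assumes s divides dim).
    idx = list(range(dim))
    random.shuffle(idx)
    return [idx[k:k + s] for k in range(0, dim, s)]

def evaluate_component(f, gbest, candidate, group):
    # Context-vector evaluation: copy Gbest, overwrite only the dimensions
    # owned by this group with the candidate's values, then score the
    # complete vector with the original objective f.
    trial = gbest[:]
    for g_pos, d in enumerate(group):
        trial[d] = candidate[g_pos]
    return f(trial), trial
```

A quick check with a Sphere objective: starting from a Gbest of all ones, zeroing out the two dimensions of one group lowers the full-vector fitness from 6.0 to 4.0, regardless of which dimensions the random grouping assigned to that group.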
3.3. The Information Exchange Mechanism Based on Crossover Operator between Multispecies
In the top level, we adopt crossover operator with a specific topology to enhance the information exchange between species, in which each species can learn from its symbiotic partner in the neighborhood. The key operations of this crossover procedure are described in Figure 2.
Step 1 (select elites for the best-performing list (BPL)). Individuals from the current species’ neighborhood (i.e., a ring topology) with higher fitness have a larger probability of being selected into the best-performing list (BPL) as elites; the size of the BPL is equal to the current population size.
Step 2 (crossover operation)
Step 2.1. Parents are selected from the BPL’s elites using the tournament selection scheme: two elites are selected randomly and their fitness values are compared; the one with better fitness is taken as a parent. The other parent is selected in the same way.
Step 2.2. Two offspring are created by arithmetic crossover on the selected parents by the following equation :
$\text{child} = r \cdot \text{parent}_1 + (1 - r) \cdot \text{parent}_2,$
where child is the newly produced offspring, $r$ is a random number in $[0, 1]$, and parent 1 and parent 2 are selected from the BPL.
Step 3 (update with different selection schemes). If the population size is $M$, then the number of replaced individuals is $M \times CR$ (CR is a selection rate). The greedy selection mechanism is adopted when replacing a selected individual. There are three replacing approaches: selecting the best individuals, a medium level of individuals, or the worst individuals. To maintain population diversity, we randomly select one replacing approach at every iteration.
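Steps 1–3 of the crossover-based exchange can be sketched as follows. Tournament selection, arithmetic crossover, and the randomized choice among the three replacement positions (best, middle, worst) follow the description above; the function names and the greedy-acceptance detail are our illustrative assumptions:

```python
import random

def tournament(bpl, f):
    # Pick two elites at random; the fitter one becomes a parent.
    a, b = random.sample(bpl, 2)
    return a if f(a) < f(b) else b

def arithmetic_crossover(p1, p2):
    # child1 = r*p1 + (1-r)*p2 and the complementary child2.
    r = random.random()
    c1 = [r * x + (1 - r) * y for x, y in zip(p1, p2)]
    c2 = [r * y + (1 - r) * x for x, y in zip(p1, p2)]
    return c1, c2

def exchange(population, bpl, f, cr=1.0):
    # Replace n = M * CR individuals; which block is replaced (best,
    # middle, or worst performers) is drawn at random each call to
    # preserve diversity, and replacement is greedy (never worsens).
    n = int(len(population) * cr)
    ranked = sorted(population, key=f)            # best first
    start = random.choice([0, (len(ranked) - n) // 2, len(ranked) - n])
    for ind in ranked[start:start + n]:
        p1, p2 = tournament(bpl, f), tournament(bpl, f)
        child, _ = arithmetic_crossover(p1, p2)
        if f(child) < f(ind):
            ind[:] = child                        # in-place greedy update
```

Because acceptance is greedy, an exchange can only improve (or leave unchanged) every individual it touches, so the species' best fitness never deteriorates.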
In summary, to facilitate the presentation and test formulation below, we define a unified set of parameters for the HABC model in Table 1. According to the process description above, the flowchart of the HABC algorithm is summarized in Figure 3, and the pseudocode for the HABC algorithm is presented in Pseudocode 1.
4. Experimental Study
In the experimental studies, in light of the no-free-lunch (NFL) theorem , a set of eight basic benchmark functions and twelve CEC2005 benchmark functions is employed to evaluate the performance of the HABC algorithm fully and fairly , as listed in the Appendix. The number of function evaluations (FEs) is adopted as the time measure criterion, substituting for the number of iterations.
4.1. Experimental Settings
Eight variants of HABC based on different crossover methods and CR values were compared with six state-of-the-art EA and SI algorithms:
(i) artificial bee colony algorithm (ABC) ;
(ii) cooperative artificial bee colony algorithm (CABC) ;
(iii) canonical PSO with constriction factor (PSO) ;
(iv) cooperative PSO (CPSO) ;
(v) standard genetic algorithm (GA) ;
(vi) covariance matrix adaptation evolution strategy (CMA-ES) .
In all experiments in this section, the values of the common parameters used in each algorithm such as population size and total generation number are chosen to be the same. Population size is set as 50 and the maximum evaluation number is set as 100000. For the fifteen continuous testing functions used in this paper, the dimensions are all set as 50.
All control parameters for the EA and SI algorithms are set to the defaults of their original literature: the initialization conditions of CMA-ES are the same as in , as is the number of offspring candidate solutions generated per time step; for ABC and CABC, the limit parameter is set to $SN \times D$, where $D$ is the dimension of the problem and $SN$ is the number of employed bees. The split factor for CABC and CPSO is equal to the number of dimensions [28, 41]. For canonical PSO and CPSO, the learning rates $c_1$ and $c_2$ are both set to 2.05 and the constriction factor is 0.729. For EGA, an intermediate crossover rate of 0.8, a Gaussian mutation rate of 0.01, and a global elite operation with a rate of 0.06 are adopted . For the proposed HABC, the species number, the split factor, and the selection rate CR are tuned first in the next section.
4.2. Parameter Sensitivity Analysis of HABC
4.2.1. Effects of Species Number
The species number of the top level in HABC needs to be tuned. Three 50D basic benchmark functions () and five 50D benchmark functions () are employed to investigate the impact of this parameter. CR is set equal to 1, and all functions are run for 30 sample times. As shown in Table 2, the proposed HABC obtained better optimal solutions and stability on the involved test functions as the species number increased. However, the performance improvement from this parameter is not very remarkable.
4.2.2. Choices of Crossover Mode and Value
The benchmark functions ( and ) are adopted to evaluate the performance of HABC variants with different crossover CR values. All functions, with 30 dimensions, are run 30 times. From Table 3, we can observe that the HABC variant with CR equal to 1 performed best on four of the five functions, while CR equal to 0.05 gave the best result on one function. Therefore, the selection rate CR in each crossover operation is set to 1 as the optimal value in all following experiments.
4.2.3. Effects of Dynamically Changing Group Size
Obviously, the choice of the split factor (i.e., the subpopulation number) has a significant impact on the performance of the proposed algorithm. To vary it during a run, we defined a set of admissible split factors for 50D function optimization and randomly chose one element of this set at each iteration; HABC with a dynamically changing split factor was then compared with HABC with a fixed split number on these benchmark functions over 30 sample runs. From the results listed in Table 4, we observe that performance is sensitive to the predefined value. HABC with a dynamically changing value consistently gave a better performance than the other variants, with two exceptions. Moreover, in most real-world problems we have no prior knowledge of the optimal split value, so the random grouping scheme is a suitable solution.
4.3. Comparing HABC with Other State-of-the-Art Algorithms on Benchmark Problems
4.3.1. Results on Basic Benchmark Continuous Functions
The means and standard deviations obtained by all involved algorithms on the 50-dimensional classical test suite over 30 runs are reported in Table 5, where the best results are shown in bold. Figures 4(a)–4(h) present the average convergence rates of each algorithm on each basic benchmark. On the unimodal basic benchmark functions (), from Table 5 and Figure 4, HABC converged faster than all other algorithms and was able to consistently find the minimum of functions , , and within 100000 FEs. Statistically, HABC has significantly superior performance on these unimodal functions. On the multimodal functions (), HABC markedly outperforms the other algorithms in most cases. For example, HABC quickly finds the global minimum on functions and , and CABC can also consistently find the minimum of , albeit within relatively more FEs, while the other algorithms perform more poorly. This can be explained by the multipopulation cooperative coevolution strategy integrated into HABC and CABC, which enhances local search ability and contributes to their better performance on multimodal problems.
4.3.2. Results on CEC2005 Continuous Functions
To validate the effectiveness of the proposed algorithm, a suite of CEC2005 benchmarks (–) is employed . Following Section 4.2, we use the optimal parameter setting of HABC, in comparison with the CABC, CPSO, CMA-ES, ABC, PSO, and GA algorithms. From the experimental results in terms of means and standard deviations in Table 6, HABC outperformed CMA-ES on eight out of the twelve functions, and CMA-ES in turn outperformed CABC on most functions. HABC can find the global optimum of , , , , , and within 10000 FEs; this is because HABC balances exploration and exploitation by decomposing high-dimensional problems and using crossover operations to maintain population diversity, which is a key contributing factor. On the other hand, CMA-ES converged extremely fast but tended to be trapped in local minima very quickly, especially on multimodal shifted and rotated functions. According to the rank values, the performance order of the involved algorithms is HABC > CMA-ES > CABC > ABC > CPSO > PSO > GA.
To further investigate the efficacy and robustness of the proposed HABC, an analysis of variance (ANOVA) test was employed to determine the statistical characteristics of each algorithm relative to the others. In this work, the graphical ANOVA analyses were done through box plots, which capture many important aspects of a distribution. The box plots in Figures 5(a)–5(l) show the statistical performance of each involved algorithm on the classical test suite over 30 runs. From this box-plot representation, it is clearly visible that HABC achieved a good variance distribution of compromise solutions on all classical functions. Note that the CABC algorithm also exhibited robustness on most classical functions.
4.3.3. Algorithm Complexity Analysis
A brief algorithm complexity analysis is presented as follows. If we assume that the computation cost of one individual in HABC is Cost_a and the cost of the crossover operator is Cost_c, the total computation cost of HABC for one generation can be expressed in terms of these two quantities. However, because the heuristic algorithms used in this paper cannot guarantee convergence, it is very difficult to give an exact analysis in terms of time for all algorithms. By directly evaluating the algorithmic time response on different objective functions, the average computing time over 30 sample runs of all algorithms is given in Figure 6. It can be observed there that HABC takes the most computing time among all compared algorithms and has the highest rate of increase. This is because the multipopulation cooperative coevolution strategy integrated into HABC enhances the local search ability at the cost of increased computation. In summary, compared with the other algorithms, HABC requires more computing time to achieve better results.
5. Multilevel Threshold for Image Segmentation by HABC
5.1. Entropy Criterion Based Fitness Measure
The multithreshold criterion proposed by Otsu  has been popularly employed to determine whether an optimal threshold method can provide image segmentation with satisfactory results. Here, it is used as the objective function for the involved algorithms, and its process can be described as follows.
Let the gray levels of a given image range over $[0, L-1]$ and let $h(i)$ denote the occurrence of gray level $i$.
Let
$\sigma_B^2(t) = \omega_0(t)\,\omega_1(t)\,[\mu_0(t) - \mu_1(t)]^2,$
where $\omega_0(t)$ and $\omega_1(t)$ are the probabilities of the two classes separated by the threshold $t$ and $\mu_0(t)$ and $\mu_1(t)$ are their mean gray levels. The optimal threshold is the gray level that maximizes (8). Then, (8) can also be written as
$t^* = \arg\max_{0 \le t < L} \sigma_B^2(t),$
where $\omega_0$, $\omega_1$, $\mu_0$, and $\mu_1$ are the same as given in (9). Expanding this logic to multilevel threshold,
$(t_1^*, \ldots, t_m^*) = \arg\max_{0 \le t_1 < \cdots < t_m < L} \sum_{k=0}^{m} \omega_k\,(\mu_k - \mu_T)^2,$
where $m$ is the number of thresholds and $\mu_T$ is the total mean gray level of the image.
Equation (12) is used as the objective function for the proposed HABC-based procedure, which is to be optimized (maximized). A close look at this equation shows that it is very similar to the expression for the uniformity measure.
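As a concrete reference, the multilevel between-class variance criterion can be computed directly from a gray-level histogram. This sketch assumes the standard Otsu formulation (class probabilities times squared deviation of class means from the total mean); the function and variable names are our own:

```python
def between_class_variance(hist, thresholds):
    # Otsu's multilevel criterion for thresholds t1 < ... < tm on a
    # gray-level histogram: sum over the m+1 classes of w_k * (mu_k - mu_T)^2.
    # This quantity is to be MAXIMIZED over the threshold vector.
    total = sum(hist)
    probs = [h / total for h in hist]
    mu_t = sum(i * p for i, p in enumerate(probs))       # total mean level
    # Class k covers gray levels [edges[k], edges[k+1]).
    edges = [0] + [t + 1 for t in sorted(thresholds)] + [len(hist)]
    var = 0.0
    for a, b in zip(edges, edges[1:]):
        w = sum(probs[a:b])                              # class probability
        if w == 0:
            continue                                     # empty class
        mu = sum(i * probs[i] for i in range(a, b)) / w  # class mean
        var += w * (mu - mu_t) ** 2
    return var
```

On a clearly bimodal histogram, a threshold placed in the valley between the two modes yields a much larger criterion value than a threshold inside one of the modes, which is exactly what the optimizer exploits.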
5.2. Probabilistic Rand Index as a Qualitative Measure
Consider a set of manually segmented (ground truth) images $\{S_1, S_2, \ldots, S_K\}$ corresponding to an image $X = \{x_1, x_2, \ldots, x_N\}$, where a subscript indexes one of the $N$ pixels. Let $S_{\text{test}}$ be the segmentation that is to be compared with the manually labeled set. We denote the label of pixel $x_i$ by $l_i^{S_{\text{test}}}$ in segmentation $S_{\text{test}}$ and by $l_i^{S_k}$ in the manually segmented image $S_k$. It is assumed that each label takes values in a discrete set.
We model the label relationship for each pixel pair by an unknown underlying distribution. One may visualize this as a scenario where each human segmenter provides information about the segmentation of the image in the form of a binary number for each pair of pixels: whether or not the pair shares a label. The set of all perceptually correct segmentations defines a Bernoulli distribution over this number, giving a random variable with expected value denoted $p_{ij}$. Hence, the set $\{p_{ij}\}$ for all unordered pairs $(i, j)$ defines a generative model of correct segmentations for the image $X$.
Consider the Probabilistic Rand Index (PRI) . Let $c_{ij}$ denote the event of a pair of pixels $i$ and $j$ having the same label in the test image $S_{\text{test}}$. Then, the PRI can be written as
$PR(S_{\text{test}}, \{S_k\}) = \frac{1}{\binom{N}{2}} \sum_{i<j} \left[ c_{ij}\,p_{ij} + (1 - c_{ij})(1 - p_{ij}) \right].$
This measure takes values in $[0, 1]$, where 0 means $S_{\text{test}}$ and $\{S_1, \ldots, S_K\}$ have no similarities (i.e., when $S_{\text{test}}$ consists of a single cluster and each segmentation in $\{S_k\}$ consists only of clusters containing single points, or vice versa) and 1 means all segmentations are identical.
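A direct Python sketch of the PRI under the definitions above follows. It enumerates all $\binom{N}{2}$ pixel pairs, so it is suitable only for small label arrays, and the function name is illustrative:

```python
from itertools import combinations

def probabilistic_rand_index(test, ground_truths):
    # test: list of labels, one per pixel.
    # ground_truths: list of such label lists (the human segmentations).
    n = len(test)
    pairs = list(combinations(range(n), 2))
    total = 0.0
    for i, j in pairs:
        # p_ij: fraction of ground-truth segmentations in which pixels
        # i and j carry the same label (the empirical Bernoulli mean).
        p = sum(gt[i] == gt[j] for gt in ground_truths) / len(ground_truths)
        # c_ij: same-label event in the test segmentation.
        c = 1.0 if test[i] == test[j] else 0.0
        total += c * p + (1 - c) * (1 - p)
    return total / len(pairs)
```

Note that the index depends only on which pixels are grouped together, not on the label values themselves: a test segmentation whose partition matches every ground truth scores exactly 1 even if the label IDs differ.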
5.3. Experiment Setup
The experimental evaluations of segmentation performance by HABC are carried out on the Berkeley Segmentation Dataset (BSDS). The BSDS consists of 300 natural images, manually segmented by a number of different subjects; the ground-truth data for this large collection shows the diversity, yet high consistency, of human segmentation (available at http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/BSDS300/html/dataset/images.html). These datasets involve a suite of popular standard images [47–49], namely, Figures 7(a), 7(b), 7(c), 7(d), 7(e), and 7(f). The size of Figures 7(a), 7(b), 7(c), and 7(d) is and the size of Figures 7(e) and 7(f) is . A comparison between the proposed algorithm and other methods is evaluated based on Otsu, meaning that (12) is used as the fitness function for all involved algorithms. The numbers of thresholds investigated in the experiments were 2, 3, 4, 5, 7, and 9, and all experiments were repeated 30 times per image for each value. The population size is 20 and the maximum number of FEs is 2000. Figure 7 shows the original images and their histograms.
The basic parameter settings of these algorithms, namely, HABC, ABC, PSO, EGA, and CMA-ES, are as given in Section 4.1. It is noteworthy that the split factor of HABC should be adjusted according to the dimension of the image segmentation problem; in this experiment, a separate split factor is set for each of the 2-, 3-, 4-, 5-, 7-, and 9-dimensional cases. To evaluate the segmentation quality, the Probabilistic Rand Index (PRI) is chosen as a qualitative measure.
5.4. Experimental Results of Multilevel Threshold
Case 1 (multilevel threshold results with = 2, 3, 4). Table 7 presents the fitness function values, mean computation times, and corresponding optimal thresholds (with = 2, 3, 4) obtained by Otsu. It is noteworthy that CPU time is also an important issue in real-time applications. From Table 8, there are no obvious differences in CPU time between the involved population-based methods, which are significantly superior in terms of time complexity for high-dimensional image segmentation problems.
As can be seen from Table 9, the proposed HABC algorithm generally performs close to the Otsu method in terms of fitness value when = 2, 3, 4, whereas the performance of HABC in time complexity is significantly superior to its counterpart Otsu. Furthermore, the HABC-based algorithm achieves the best performance among the population-based methods in most cases, which means that HABC obtains an appropriate balance between exploration and exploitation. However, compared with the other population-based algorithms, the PRI results obtained by HABC for low-dimensional segmentation show no obvious enhancement. This can be explained as follows: HABC, endowed with the crossover operation, performs better global search in higher-dimensional search spaces, where the hierarchical cooperation strategy emphasizes fine exploitation around the promising area. Accordingly, the differences between HABC and the other algorithms become more evident as the segmentation level increases.
Case 2 (multilevel threshold results with = 5, 7, 9). Regarding the high-dimensional segmentation problems with = 5, 7, 9, Table 10 reports the average fitness value, the standard deviation, and the PRI obtained by each population-based algorithm, where larger fitness values, smaller standard deviations, and higher PRIs indicate better achievement.
From Table 10, owing to the crossover method and its fast convergence rate, HABC demonstrates the best performance in terms of efficiency and stability on the high-dimensional cases. Furthermore, as the level of segmentation increases, the fitness of HABC increases faster than that of the other methods; in particular, ABC achieved almost no improvement for 5, 7, and 9 levels of segmentation. From the experimental results of Tables 8 and 9, the PRI results for high-dimensional segmentation are significantly better than those for the low-dimensional cases. Meanwhile, HABC also achieves better statistical PRI results than its counterparts on most tested datasets. This can be explained by the fact that, based on the Otsu method, segmentation with more thresholds achieves greater consistency with human segmentation, and the HABC algorithm demonstrates powerful performance in searching high-dimensional spaces. Figures 8, 9, 10, 11, 12, and 13 show the original images, the multilevel threshold segmentations of those images, and the ground-truth human segmentations. From the results shown in Figures 8–13, it is clearly visible that the HABC-based method is well suited to such multilevel segmentation problems.
6. Conclusion
In this paper, we propose a novel hierarchical artificial bee colony algorithm, called HABC, to improve performance in solving complex problems. The main idea of HABC is to extend the single artificial bee colony (ABC) algorithm to a hierarchical and cooperative mode by combining the multipopulation vector-decomposing strategy with the comprehensive learning method. With the divide-and-conquer strategy, complex vectors can be decomposed into smaller components that are more easily solved. Through crossover-based comprehensive learning, the information exchange between individuals is significantly enhanced. In addition, due to the lack of prior knowledge in most applications, the random grouping technique is employed as a suitable solution. The experimental results demonstrate that the proposed HABC achieves performance superior to other classical, powerful algorithms.
Finally, the HABC algorithm is applied to solving real-world image segmentation problems. The results obtained by the HABC-based method on each image indicate a significant improvement over several other popular population-based methods. In our future work, we aim to find a simpler and more efficient optimization framework by exploiting HABC’s merits and then apply it to more complex image processing (up to 30 dimensions) and computer vision problems.
A. List of Test Functions
A.1. Basic Benchmark Function
Sphere Function is as follows:
$f_1(x) = \sum_{i=1}^{D} x_i^2.$
Rosenbrock Function is as follows:
$f_2(x) = \sum_{i=1}^{D-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right].$
Quadric Function is as follows:
$f_3(x) = \sum_{i=1}^{D} \left( \sum_{j=1}^{i} x_j \right)^2.$
Sin Function is as follows:
Rastrigin Function is as follows:
$f_5(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right].$
Schwefel Function is as follows:
$f_6(x) = 418.9829\,D - \sum_{i=1}^{D} x_i \sin\left(\sqrt{|x_i|}\right).$
Ackley’s Function is as follows:
$f_7(x) = -20\exp\left(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\tfrac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right) + 20 + e.$
Griewank Function is as follows:
$f_8(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1.$
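For reference, the widely used closed forms of most of the basic benchmarks above can be written directly in Python (standard definitions assumed; each function attains a global minimum of 0, at the all-ones point for Rosenbrock and at the origin for the others):

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def rosenbrock(x):
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def quadric(x):
    # Sum of squared prefix sums (also known as Schwefel 1.2).
    return sum(sum(x[:i + 1]) ** 2 for i in range(len(x)))

def rastrigin(x):
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):
    d = len(x)
    s1 = sum(v * v for v in x) / d
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / d
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def griewank(x):
    s = sum(v * v for v in x) / 4000
    p = 1.0
    for i, v in enumerate(x, 1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1
```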
A.2. CEC2005 Function
Shifted Sphere Function is as follows:
Shifted Schwefel’s Problem is as follows:
Shifted Rotated High Conditioned Elliptic Function is as follows:
Shifted Schwefel’s Problem 1.2 with Noise in Fitness is as follows:
Shifted Schwefel’s Problem 2.6 with Global Optimum on Bounds is as follows: where A is a matrix whose entries are integer random numbers in the range [−500, 500], $A_i$ is the ith row of A, and B is a vector whose entries are random numbers in the range [−100, 100].
Shifted Rosenbrock’s Function is as follows:
Shifted Rotated Griewank’s Function without Bounds is as follows:
Shifted Rotated Ackley’s Function with Global Optimum on Bounds is as follows:
Shifted Rastrigin’s Function is as follows:
Shifted Rotated Rastrigin’s Function is as follows:
Shifted Rotated Weierstrass Function is as follows:
Schwefel’s Problem 2.13 is as follows: where the two matrices have entries that are integer random numbers in the range [−100, 100], and the shift elements are random numbers in the range .
A.3. Parameters of the Test Functions
The dimensions, initialization ranges, global optima , and the corresponding fitness value of each function are listed in Table 11.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research is partially supported by the National Natural Science Foundation of China under Grants nos. 71001072 and 71271140 and by the National High Technology Research and Development Program of China (863 Program) (no. 2014AA052101-3).
References
- J. Kittler and J. Illingworth, “Minimum error thresholding,” Pattern Recognition, vol. 19, no. 1, pp. 41–47, 1986.
- T. Pun, “Entropic thresholding, a new approach,” Computer Graphics and Image Processing, vol. 16, no. 3, pp. 210–239, 1981.
- J. N. Kapur, P. K. Sahoo, and A. K. C. Wong, “A new method for gray-level picture thresholding using the entropy of the histogram,” Computer Vision, Graphics, and Image Processing, vol. 29, no. 3, pp. 273–285, 1985.
- D. Tsai, “A fast thresholding selection procedure for multimodal and unimodal histograms,” Pattern Recognition Letters, vol. 16, no. 6, pp. 653–666, 1995.
- A. D. Brink, “Minimum spatial entropy threshold selection,” IEE Proceedings: Vision, Image and Signal Processing, vol. 142, no. 3, pp. 128–132, 1995.
- H. D. Cheng, J. Chen, and J. Li, “Threshold selection based on fuzzy c-partition entropy approach,” Pattern Recognition, vol. 31, no. 7, pp. 857–870, 1998.
- A. Chander, A. Chatterjee, and P. Siarry, “A new social and momentum component adaptive PSO algorithm for image segmentation,” Expert Systems with Applications, vol. 38, no. 5, pp. 4998–5004, 2011.
- H. Gao, S. Kwong, J. Yang, and J. Cao, “Particle swarm optimization based on intermediate disturbance strategy algorithm and its application in multi-threshold image segmentation,” Information Sciences, vol. 250, pp. 82–112, 2013.
- H. Gao, W. Xu, J. Sun, and Y. Tang, “Multilevel thresholding for image segmentation through an improved quantum-behaved particle swarm algorithm,” IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 4, pp. 934–946, 2010.
- E. Cuevas, D. Zaldivar, and M. Pérez-Cisneros, “A novel multi-threshold segmentation approach based on differential evolution optimization,” Expert Systems with Applications, vol. 37, no. 7, pp. 5265–5271, 2010.
- B. Akay, “A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding,” Applied Soft Computing Journal, vol. 13, no. 6, pp. 3066–3091, 2013.
- C. Ozturk, E. Hancer, and D. Karaboga, “Improved clustering criterion for image clustering with artificial bee colony algorithm,” Pattern Analysis and Applications, 2014.
- S. Sarkar and S. Das, “Multilevel image thresholding based on 2D histogram and maximum Tsallis entropy—a differential evolution approach,” IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4788–4797, 2013.
- S. R. Rao, H. Mobahi, A. Y. Yang, S. S. Sastry, and Y. Ma, “Natural image segmentation with adaptive texture and boundary encoding,” in Computer Vision—ACCV 2009, vol. 5994 of Lecture Notes in Computer Science, pp. 135–146, Springer, Berlin, Germany, 2010.
- A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry, “Unsupervised segmentation of natural images via lossy data compression,” Computer Vision and Image Understanding, vol. 110, no. 2, pp. 212–225, 2008.
- D. Karaboga and B. Akay, “A survey: algorithms simulating bee swarm intelligence,” Artificial Intelligence Review, vol. 31, no. 1–4, pp. 61–85, 2009.
- D. Karaboga, B. Gorkemli, C. Ozturk, and N. Karaboga, “A comprehensive survey: artificial bee colony (ABC) algorithm and applications,” Artificial Intelligence Review, vol. 42, no. 1, pp. 21–57, 2014.
- F. Liu and Y. Wang, “The reversible data hiding with lower distortion based on histogram shifting,” Information and Control, vol. 42, no. 5, pp. 595–600, 2013.
- D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. TR06, Computer Engineering Department, Engineering Faculty, Erciyes University, 2005.
- D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing Journal, vol. 8, no. 1, pp. 687–697, 2008.
- D. Karaboga and B. Basturk, “Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems,” in Foundations of Fuzzy Logic and Soft Computing: 12th International Fuzzy Systems Association World Congress, IFSA 2007, Cancun, Mexico, June 18–21, 2007, P. Melin, O. Castillo, L. T. Aguilar, J. Kacprzyk, and W. Pedrycz, Eds., vol. 4529 of Lecture Notes in Computer Science, pp. 789–798, Springer, Berlin, Germany, 2007.
- B. Basturk and D. Karaboga, “An artificial bee colony (ABC) algorithm for numerical function optimization,” in Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, Ind, USA, 2006.
- D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
- M. A. Potter and K. A. de Jong, “Cooperative coevolution: an architecture for evolving coadapted subcomponents,” Evolutionary Computation, vol. 8, no. 1, pp. 1–29, 2000.
- Z. Yang, K. Tang, and X. Yao, “Differential evolution for high-dimensional function optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '07), pp. 3523–3530, Singapore, September 2007.
- Y. Shi, H. Teng, and Z. Li, “Cooperative co-evolutionary differential evolution for function optimization,” in Proceedings of the 1st International Conference on Natural Computation (ICNC '05), pp. 1080–1088, August 2005.
- J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
- F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
- X. Li and X. Yao, “Cooperatively coevolving particle swarms for large scale optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 2, pp. 210–224, 2012.
- H. N. Chen, Y. L. Zhu, K. Hu, and X. He, “Hierarchical swarm model: a new approach to optimization,” Discrete Dynamics in Nature and Society, vol. 2010, Article ID 379649, 30 pages, 2010.
- B. Niu, Y. Zhu, X. He, and H. Wu, “MCPSO: a multi-swarm cooperative particle swarm optimizer,” Applied Mathematics and Computation, vol. 185, no. 2, pp. 1050–1062, 2007.
- S. Janson and M. Middendorf, “A hierarchical particle swarm optimizer and its adaptive variant,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 35, no. 6, pp. 1272–1282, 2005.
- Y. Peng and B. Lu, “A hierarchical particle swarm optimizer with latin sampling based memetic algorithm for numerical optimization,” Applied Soft Computing, vol. 13, no. 5, pp. 2823–2836, 2013.
- J. Kennedy and R. Mendes, “Topological structure and particle swarm performance,” in Proceedings of the 4th Congress on Evolutionary Computation (CEC '02), pp. 1671–1676, 2002.
- R. Mendes, J. Kennedy, and J. Neves, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.
- B. Y. Qu, P. N. Suganthan, and J. J. Liang, “Differential evolution with neighborhood mutation for multimodal optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 5, pp. 601–614, 2012.
- B. Y. Qu, P. N. Suganthan, and S. Das, “A distance-based locally informed particle swarm model for multimodal optimization,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 3, pp. 387–402, 2013.
- S. Sumathi, T. Hamsapriya, and P. Surekha, Evolutionary Intelligence: An Introduction to Theory and Applications with Matlab, Springer, New York, NY, USA, 2008.
- D. H. Wolpert and W. G. Macready, “No free lunch theorems for search,” Santa Fe Institute SFI-TR-95-02-010, 1995.
- W. Zou, Y. Zhu, H. Chen, and X. Sui, “A clustering approach using cooperative artificial bee colony algorithm,” Discrete Dynamics in Nature and Society, vol. 2010, Article ID 459796, 16 pages, 2010.
- M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
- N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies,” Evolutionary Computation, vol. 9, no. 2, pp. 159–195, 2001.
- P. N. Suganthan, N. Hansen, J. J. Liang et al., “Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization,” Tech. Rep. #2005005, Nanyang Technological University, Singapore; IIT, KanGAL Report, Kanpur, India, 2005.
- N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on System, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
- R. Unnikrishnan and M. Hebert, “Measures of similarity,” in Proceedings of the 7th IEEE Workshop on Applications of Computer Vision (WACV '05), pp. 394–400, January 2005.
- W. B. Tao, H. Jin, and L. M. Liu, “Object segmentation using ant colony optimization algorithm and fuzzy entropy,” Pattern Recognition Letters, vol. 28, no. 7, pp. 788–796, 2007.
- P. Yin, “Multilevel minimum cross entropy threshold selection based on particle swarm optimization,” Applied Mathematics and Computation, vol. 184, no. 2, pp. 503–513, 2007.
- L. Cao, P. Bao, and Z. Shi, “The strongest schema learning GA and its application to multilevel thresholding,” Image and Vision Computing, vol. 26, no. 5, pp. 716–724, 2008.