Mathematical Problems in Engineering
Volume 2014, Article ID 974024, 23 pages
http://dx.doi.org/10.1155/2014/974024
Research Article

A Global Multilevel Thresholding Using Differential Evolution Approach

Department of Computer Science, Faculty of Science, Khon Kaen University, Khon Kaen 40002, Thailand

Received 3 October 2013; Revised 23 January 2014; Accepted 3 February 2014; Published 20 March 2014

Academic Editor: Yi-Hung Liu

Copyright © 2014 Kanjana Charansiriphaisan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Otsu’s function measures the properness of threshold values in multilevel image thresholding. Optimal threshold values are necessary for some applications and a global search algorithm is required. Differential evolution (DE) is an algorithm that has been used successfully for solving this problem. Because the difficulty of a problem grows exponentially when the number of thresholds increases, the ordinary DE fails when the number of thresholds is greater than 12. An improved DE, using a new mutation strategy, is proposed to overcome this problem. Experiments were conducted on 20 real images, with the number of thresholds varying from 2 to 16. Existing global optimization algorithms, that is, DE, rank-DE, artificial bee colony (ABC), particle swarm optimization (PSO), DPSO, and FODPSO, were compared with the proposed algorithm. The experimental results show that the proposed algorithm not only achieves a higher success rate but also yields a lower threshold value distortion than its competitors in the search for optimal threshold values, especially when the number of thresholds is large.

1. Introduction

Thresholding is the simplest and most commonly used method of image segmentation. It can be bilevel or multilevel [1]. Both of these types can be classified into parametric and nonparametric approaches [1]. Surveys of thresholding techniques for image segmentation can be found in [2–7]. The surveys revealed that Otsu’s method is a commonly used technique [4, 8]. This method finds the optimal thresholds by maximizing the weighted sum of between-class variances (BCV) [9]. The BCV function is also called Otsu’s function. However, finding the solution by exhaustive search is very time-consuming because the complexity grows exponentially with the number of thresholds.

Multilevel image thresholding based on Otsu’s function has been used as a benchmark for comparing the capability of evolutionary algorithms (EA). An EA is a nongradient based optimization algorithm. Several algorithms have been widely applied to solve multilevel thresholding. A group of successful works was based on a combination of Otsu’s function with some state-of-the-art algorithms: PSO [10], DE [11], ABC [12], and FODPSO [13]. Kulkarni and Venayagamoorthy [14] showed that PSO was faster than Otsu’s method in searching for the optimal thresholds of multilevel image thresholding. Akay [15] presented a comprehensive comparative study of the ABC and PSO algorithms. The results showed that the ABC algorithm with both the between-class variance and the entropy criterion can be efficiently used in multilevel thresholding. Hammouche et al. [16] focused on solving the image thresholding problem by combining Otsu’s function with metaheuristic techniques, that is, genetic algorithm (GA), PSO, DE, ant colony, simulated annealing, and Tabu search. Their results revealed that DE was the most efficient with respect to the quality of solution. Osuna-Enciso et al. [17] presented an empirical comparative study of the ABC, PSO, and DE algorithms to perform image thresholding using a mixture of Gaussian functions. The results showed that the DE algorithm was superior in minimizing the Hellinger distance and required fewer evaluations of the Hellinger distance. Ghamisi et al. [18] showed that a global search for optimal threshold values of Otsu’s function is essential for the multilevel segmentation of multispectral and hyperspectral images.

The DE algorithm was selected for multilevel image thresholding because it is simple to implement and produces good results. However, based on our experiments, DE could not reach an optimal solution when it was applied to a very difficult problem. Therefore, a better DE algorithm is required. We noticed that the mechanism of vector selection and the size of the higher ranked population are important criteria for success.

The contribution of this paper is as follows.

DE with the onlooker and ranking-based mutation operation, named O(p)R-DE, is proposed to overcome the drawback of the DE algorithm for multilevel image thresholding, especially when the number of thresholds is large. The proposed algorithm homogenizes the onlooker phase of the ABC algorithm and the ranking-based mutation operator of the rank-DE [19]. The main advantage of the proposed algorithm is that a user can adjust the balance of the exploitation and exploration capabilities of the algorithm.

To verify the capabilities of the proposed algorithm, experiments to find the optimal solutions in the multilevel image thresholding, when the number of thresholds ranged from two to 16, were set up. It was found that the optimal solutions could be effectively reached using the proposed algorithm.

The remainder of the paper is organized as follows. Section 2 describes the multilevel thresholding problem. Section 3 presents a brief review of the differential evolution (DE) algorithm. In Section 4, the proposed new version of the DE algorithm with the onlooker and ranking-based mutation operator, O(p)R-DE, is described in detail. Section 5 shows the experimental results of applying the proposed method to multilevel segmentation of different images. Finally, the conclusion of the paper is given in Section 6.

2. Multilevel Thresholding Problem Formulation

Otsu’s method [9] is based on the maximization of the between-class variance. Consider a digital image of size W × H, where W is the width and H is the height. The pixels of a given picture are represented in L gray levels, {0, 1, …, L − 1}. The number of pixels at level i is denoted by n_i and the total number of pixels by N = n_0 + n_1 + ⋯ + n_{L−1}. The gray-level histogram is normalized and regarded as a probability distribution, written as follows:

p_i = n_i / N,  p_i ≥ 0,  Σ_{i=0}^{L−1} p_i = 1.  (1)

The total mean of the image can be defined as

μ_T = Σ_{i=0}^{L−1} i p_i.  (2)

The multilevel thresholding with respect to the given threshold values t_1 < t_2 < ⋯ < t_m can be performed as follows: a pixel (x, y) is assigned to class C_0 if 0 ≤ f(x, y) < t_1, to class C_j if t_j ≤ f(x, y) < t_{j+1} for j = 1, …, m − 1, and to class C_m if t_m ≤ f(x, y) ≤ L − 1,  (3)

where (x, y) is the coordinate of a pixel and f(x, y) denotes the intensity level of a pixel. The pixels of a given image will thus be divided into m + 1 classes.

The optimal thresholds can be determined by maximizing the between-class variance function (BCV), σ_B², which can be defined by

σ_B²(t_1, …, t_m) = Σ_{j=0}^{m} ω_j (μ_j − μ_T)²,  (4)

where C_j represents a specific class in such a way that ω_j and μ_j are the probability of occurrence and the mean of class C_j, respectively. Equation (4) is also called Otsu’s function. The probabilities of occurrence of the classes are defined by

ω_j = Σ_{i ∈ C_j} p_i.  (5)

The mean of each class can be given by

μ_j = (1 / ω_j) Σ_{i ∈ C_j} i p_i.  (6)

Thus, the m-level thresholding problem is transformed into an optimization problem. The process is to search for thresholds t_1, …, t_m that maximize the value of σ_B², which is generally defined as

(t_1*, …, t_m*) = arg max_{0 < t_1 < ⋯ < t_m < L} σ_B²(t_1, …, t_m).  (7)
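
As a concrete illustration, the between-class variance above can be sketched in a few lines of Python. This is our own illustrative code, not the paper's implementation; all names are hypothetical.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu's objective for a gray-level histogram and a sorted list of
    thresholds: the weighted sum of squared deviations of the class
    means from the total mean (illustrative sketch)."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()                       # normalized histogram (probabilities)
    levels = np.arange(len(p))
    mu_T = (levels * p).sum()             # total mean of the image
    # class boundaries: [0, t1), [t1, t2), ..., [tm, L)
    edges = [0] + sorted(thresholds) + [len(p)]
    bcv = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                # probability of occurrence of the class
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w   # class mean
            bcv += w * (mu - mu_T) ** 2
    return bcv
```

For a perfectly bimodal histogram, any threshold separating the two modes maximizes this value, while a threshold that leaves both modes in one class yields zero.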

3. Differential Evolution Algorithm

The DE algorithm is an evolutionary optimization technique proposed by Storn and Price [11]. The main procedures of DE are briefly described as follows.

3.1. Initialization

The DE algorithm starts with a population of NP initial solutions, each of dimension D, x_{i,G}, i = 1, 2, …, NP, where the index i denotes the ith solution, or vector, of the population, G is the generation, and NP is the population size. The initial population (at G = 0) is randomly generated to be within the search space constrained by the minimum and maximum bounds, x^{min} = (x_1^{min}, …, x_D^{min}) and x^{max} = (x_1^{max}, …, x_D^{max}). The jth component of the ith vector is initialized as follows:

x_{i,j,0} = x_j^{min} + rand_{i,j}[0, 1] · (x_j^{max} − x_j^{min}),  (8)

where rand_{i,j}[0, 1] is a uniformly distributed random real number between 0 and 1, i = 1, …, NP, j = 1, …, D.

3.2. Mutation Operators

The differential mutation operator is one of the three operators of DE. The mutation operator is applied to generate the mutant vector v_{i,G} for each target vector x_{i,G} in the current population. A mutant vector is generated according to

v_{i,G} = x_{r1,G} + F · (x_{r2,G} − x_{r3,G}),  (9)

where the randomly chosen indexes r1, r2, r3 ∈ {1, 2, …, NP} are mutually different random integer indices and they are also different from the running index i, so that r1 ≠ r2 ≠ r3 ≠ i. F is a real and constant factor, F ∈ [0, 2], which controls the amplification of the differential variation; x_{r1,G} is called the base vector, x_{r2,G} is called the terminal vector, x_{r3,G} is called the other vector, and (x_{r2,G} − x_{r3,G}) is called the difference vector.

There have been many proposed mutation strategies for DE [20, 21]. Each strategy has different characteristics and is suitable for a set of problems. However, the choice of the best mutation operator for DE is difficult for a specific problem [22–24]. The “DE/rand/1/bin” strategy has been widely used in the DE literature [25–28]. It is more reliable than the strategies based on the best-so-far solution, such as “DE/best/1” and “DE/current-to-best/1”. However, “DE/rand/1/bin” has slower convergence. Simply put, it has high exploration but low exploitation abilities.

3.3. Crossover

DE utilizes the crossover operation to generate new solutions by shuffling competing vectors and to increase the diversity of the population. The classical version of DE (DE/rand/1/bin) uses the binary crossover. It defines the following trial vector:

u_{i,G} = (u_{i,1,G}, u_{i,2,G}, …, u_{i,D,G}),  (10)

where j = 1, 2, …, D (D = problem dimension) and

u_{i,j,G} = v_{i,j,G} if rand_j[0, 1) ≤ CR or j = j_rand; u_{i,j,G} = x_{i,j,G} otherwise.  (11)

CR is the crossover rate, CR ∈ [0, 1], rand_j[0, 1) is the jth evaluation of a uniform random number generator with outcome in [0, 1), and j_rand ∈ {1, 2, …, D} is a randomly chosen index that ensures u_{i,G} will get at least one parameter from v_{i,G}.

3.4. Selection

Selection determines whether the target or the trial vector survives to the next generation. The selection operation is described as

x_{i,G+1} = u_{i,G} if f(u_{i,G}) ≤ f(x_{i,G}); x_{i,G+1} = x_{i,G} otherwise,  (12)

where f is the objective function to be minimized. Therefore, if the objective value of the new trial vector, u_{i,G}, is equal to or less than the objective value of the old target vector, x_{i,G}, then x_{i,G+1} is set to u_{i,G}; otherwise, the old value x_{i,G} is retained.
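
The initialization, mutation, crossover, and selection steps above combine into the classical DE/rand/1/bin loop. A compact Python sketch of a minimizer follows; it is our own illustrative code under standard DE conventions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1_bin(f, bounds, NP=30, F=0.5, CR=0.9, G_max=200):
    """Minimal DE/rand/1/bin minimizer sketch: random initialization
    within bounds, rand/1 mutation, binomial crossover, and greedy
    selection."""
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    D = lo.size
    X = lo + rng.random((NP, D)) * (hi - lo)   # random initial population
    fit = np.array([f(x) for x in X])
    for _ in range(G_max):
        for i in range(NP):
            # three mutually distinct indices, all different from i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                    size=3, replace=False)
            v = X[r1] + F * (X[r2] - X[r3])    # mutation: base + F * difference
            mask = rng.random(D) <= CR          # binomial crossover mask
            mask[rng.integers(D)] = True        # at least one gene comes from v
            u = np.where(mask, v, X[i])         # trial vector
            fu = f(u)
            if fu <= fit[i]:                    # greedy selection
                X[i] = u
                fit[i] = fu
    best = int(np.argmin(fit))
    return X[best], float(fit[best])
```

On a smooth unimodal function such as the 2-D sphere, this loop converges to the optimum well within the default budget.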

The pseudocode of basic DE with “DE/rand/1/bin” strategy is shown in Algorithm 1.

Algorithm 1: The DE algorithm with “DE/rand/1/bin” strategy.

The random-integer function returns a uniformly distributed random integer between 1 and D; rand[0, 1) is a uniformly distributed random real value in [0, 1). The word “better” in line 17 means “less than” if the problem requires minimization (see (12) and its explanation) and “greater than” if the problem requires maximization. The best vector of the final generation, where the maximum number of generations has been reached, is the solution of the algorithm. The word “best” also depends on the type of problem.

4. The Proposed DE with Onlooker Ranking-Based Mutation Operator

In 2013 Gong and Cai [19] proposed the rank-DE algorithm. They claimed that probabilistically selecting the vectors x_{r1} and x_{r2} in the mutation operator from the better population can improve the exploitation ability of basic DE. To the best of the authors’ knowledge, the rank-DE may, however, also lead to premature convergence (this will be shown in the experiments). That means that the rank-DE has too much exploitation ability. Furthermore, it cannot balance the exploration and the exploitation abilities. In order to balance the two abilities, we propose DE with the onlooker and ranking-based mutation operator, named O(p)R-DE. The proposed algorithm is an improvement of the rank-DE, homogenizing the rank-DE with the onlooker phase of the ABC algorithm. The details of the algorithm are described as follows.

4.1. Ranking Assignment

To perform the maximization, the fitness of each vector is sorted in ascending order (i.e., from worst to best). Then, the rank of the ith vector, R_i, is assigned based on its sorted ordering as follows:

R_i = i,  i = 1, 2, …, NP.  (13)

As a result, the best vector in the current population will obtain the highest ranking, that is, NP.

4.2. Probabilistic Selection

After assigning the ranking to each vector, the selection probability p_i of the ith vector is calculated as

p_i = R_i / NP,  i = 1, 2, …, NP.  (14)
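
The ranking and selection-probability computation described above can be sketched in Python as follows (illustrative names, assuming a maximization problem where a larger fitness is better):

```python
import numpy as np

def selection_probabilities(fitness):
    """Rank the vectors from worst to best and return p_i = R_i / NP,
    so the best vector receives rank NP and the highest selection
    probability."""
    fitness = np.asarray(fitness, dtype=float)
    NP = fitness.size
    ranks = np.empty(NP, dtype=int)
    # argsort ascending puts the worst first; rank 1 = worst, NP = best
    ranks[np.argsort(fitness)] = np.arange(1, NP + 1)
    return ranks / NP
```

For example, fitness values (3.0, 1.0, 2.0) yield selection probabilities (1, 1/3, 2/3).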

4.3. A New Strategy for Base Vector, Terminal Point, and the Other Vector Selections

Definition 1 (a worse population and a better population). Let p be a real value, p ∈ [0, 1]. The vectors having selection probability less than p are called the worse population, and the vectors having selection probability greater than or equal to p are called the better population.

In the rank-DE, the base vector x_{r1} and the terminal point x_{r2} are selected based on their selection probabilities. The other vector in the mutation operator, x_{r3}, is selected randomly as in the original DE algorithm. The vectors with higher rankings (higher selection probabilities) are more likely to be chosen as the base vector or the terminal point in the mutation operator.

Our investigation revealed that if both the x_{r1} and x_{r2} vectors are chosen from the better population, then the distribution of the target vectors may collapse quickly and possibly lead to premature convergence. Accordingly, when the rank-DE was applied to a very difficult problem, it could not reach the optimal solution.

If the steps of the DE algorithm are compared with the ABC algorithm, the population in the current generation can be considered as the employed bees and the population in the next generation can be considered as the onlooker bees. To follow the concept of ABC, the base vector x_{r1} chooses a food source with respect to the probability that is computed from the fitness values of the current population. The probability with which x_{r1} is chosen as the base vector can be calculated by using the expression given in (14). After a base source for a new vector is probabilistically chosen, both x_{r2} and x_{r3} are chosen in the same manner as the terminal point and the other vector selections in the rank-DE. The mutant vector is created by the mutation formula of DE, the trial vector is created after the mutant vector is crossed with the target vector, and then the fitness value is computed. As in the ordinary DE, a greedy selection is applied between the target and the trial vector. Hence, the new population contains better sources and positive feedback behavior appears. This idea can be expressed as pseudocode, as in Algorithm 2. Since the selection of x_{r1} is the onlooker selection and the selections of x_{r2} and x_{r3} are brought from the rank-DE, the algorithm is called onlooker and ranking-based vector selection.
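
The index selection described above might be sketched in Python as follows. This is our assumed reading of Algorithm 2 (which is not reproduced here): the base index is accepted, roulette-style, only while its selection probability exceeds a threshold (here a helper called q, standing in for the generalizing function of the algorithm); the terminal index is accepted by the same rank-based test against a fresh random number; and the third index stays purely random as in classical DE. All names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def onlooker_rank_select(probs, i, p=0.0):
    """Sketch of onlooker and ranking-based vector selection (assumed
    reading): returns three distinct indices r1, r2, r3, all different
    from i, where r1 is biased toward vectors whose selection
    probability clears the threshold q(p) and r2 is biased by rank."""
    NP = len(probs)

    def q(p):
        # constant threshold if p is a constant in [0, 1); random otherwise
        return p if 0.0 <= p < 1.0 else rng.random()

    candidates = [j for j in range(NP) if j != i]
    r1 = rng.choice(candidates)
    while probs[r1] < q(p):                   # onlooker-style base selection
        r1 = rng.choice(candidates)
    r2 = rng.choice([j for j in candidates if j != r1])
    while probs[r2] < rng.random():           # rank-based terminal selection
        r2 = rng.choice([j for j in candidates if j != r1])
    r3 = rng.choice([j for j in candidates if j not in (r1, r2)])
    return r1, r2, r3
```

With p = 0 every vector can serve as the base vector (the loop exits immediately), while a larger p excludes the worse population from the base-vector role.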

Algorithm 2: Onlooker and ranking-based vector selection for DE.

The pseudocode of the onlooker and ranking-based vector selection is shown in Algorithm 2, where the differences from the original ranking-based selection are highlighted.

A generalizing function is added to the algorithm. Its output depends on the parameter p: the outcome is the constant value p when p is a constant in [0, 1), and a value of the uniform random function rand[0, 1) otherwise. The balance of the exploration and exploitation abilities can thus be set by the parameter p.

4.4. The DE with Onlooker-Ranking-Based Mutation Operator

The procedures in Sections 4.1, 4.2, and 4.3 are combined to create a better DE algorithm. The parameter p determines the fraction of the worse population to be eliminated. When p = 0 there is no worse population, and each single vector in the current population can act as the base vector. If p > 0, then each single vector having a selection probability less than p is a worse vector and will not be selected as the base vector. If p is not a constant or is outside [0, 1), each single base vector is selected as an onlooker bee. Accordingly, the algorithm is named the onlooker and ranking-based differential evolution, O(p)R-DE. To achieve the global solution, a user can set a proper value for p to control the balance of the exploration and exploitation abilities of the algorithm. The pseudocode of O(p)R-DE is shown in Algorithm 3, where the differences from the rank-DE are highlighted.

Algorithm 3: DE with onlooker and ranking-based mutation.

5. Experiments and Results

5.1. Experimental Setup

The global multilevel thresholding problem deals with finding optimal thresholds within the gray-level range [0, 255] that maximize the BCV function. The dimension of the optimization problem is the number of thresholds, m. The parameter p of the proposed O(p)R-DE was either set to a constant value, such as 0.0, or left nonconstant. The variations of the proposed algorithm were implemented and compared with the existing metaheuristics that perform image thresholding, that is, PSO, DPSO, FODPSO, ABC, and several variations of DE algorithms. All the methods were programmed in Matlab R2013a and were run on a personal computer with a 3.4 GHz CPU and 8 GB RAM with the Microsoft Windows 7 64-bit operating system. The experiments were conducted on 20 real images. The first 19 images, namely, starfish, mountain, cactus, butterfly, circus, snow, palace, flower, wherry, waterfall, bird, police, ostrich, viaduct, fish, houses, mushroom, snow mountain, and snake, were taken from the Berkeley Segmentation Dataset and Benchmark [29]. The last image, namely, Riosanpablo, is a satellite image “New ISS Eyes see Rio San Pablo”, March 1, 2013 (http://visibleearth.nasa.gov/view.php?id=80561). Each image has a unique gray-level histogram. These original images and their histograms are depicted in Figure 1. An experiment on an image with a specific number of thresholds is called a “subproblem.” The numbers of thresholds investigated in the experiments were m = 2, 3, …, 16. Thus, there are 20 × 15 = 300 subproblems per algorithm. Each subproblem was repeated 50 times and each repetition is called a run.

Figure 1: The test images and corresponding histograms.

To compare the PSO, ABC, and DE algorithms fairly, the number of objective function evaluations is NP × G, where NP is the population size and G is the number of generations. A member of the population of PSO and the DEs calls Otsu’s function one time per generation, and the population size in the PSO and DE algorithms was set to 50. A bee in the ABC calls Otsu’s function two times per generation; therefore, the number of food sources was set to half of the PSO’s population size, that is, 25. The stopping criterion was the maximum number of generations. In this experiment, it was set to 50, 100, 150, 200, 300, 400, 600, 800, 1000, 1500, 2000, 3000, 4000, 5000, and 6000 when the number of thresholds was 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, and 16, respectively. For the PSO, DPSO, and FODPSO algorithms, the parameters were set as per the suggestion in [30] and are shown in Table 1. The other control parameter of the ABC algorithm, limit, was set to 50 [15]. The control parameters F and CR of the DE algorithms were set to 0.5 and 0.9, respectively [31, 32].

Table 1: Essential parameters of the PSO, DPSO, and FODPSO taken from [30].
5.2. Comparison Strategies and Metrics

To compare the performance of the different algorithms, three metrics are used: the convergence rate of the algorithms is compared by the average number of generations needed to reach the target value, where a lower average means a faster convergence rate; the stability of the algorithms is compared by the harmonic mean of the success rate (SRHM), where a higher SRHM means higher stability; and the reliability is compared by the threshold value distortion measure (TVD), where a lower TVD means higher reliability. The details of the three metrics are described as follows.

When all 50 runs of an algorithm performing on an image with a specific number of thresholds are terminated, the outcomes are analyzed. The rth run is called a successful run if there is a generation G at which the best between-class variance reaches the value to reach (VTR); the number of generations (NG) of the successful run is recorded. Thus, NG_r can be defined by

NG_r = min{ G : σ_B²(best vector of generation G) ≥ VTR }.  (16)

The average of NG over the successful runs is represented by NḠ as follows:

NḠ = (1 / |S|) Σ_{r ∈ S} NG_r,  (17)

where S is the set of successful runs. The success rate (SR), at which the algorithm succeeds in reaching the VTR for each subproblem, is computed as

SR = |S| / 50.  (18)

The experiments were conducted on 20 images. The arithmetic mean (AM) of NḠ over the entire set of images with a specific number of thresholds is calculated as

AM = (1 / M) Σ_{k=1}^{M} NḠ_k,  (19)

where M = 20 is the total number of images. The AM is shown in Table 3. The worst-case scenario is that there is no successful run for a subproblem; this subproblem is called an “unsuccessful subproblem.” If an algorithm encounters this scenario, the subproblems are grouped by their number of thresholds and the number of images in each group is counted. These scenarios are represented by NA entries, as shown in Tables 3 and 4.

The average of the success rate over the entire dataset with a specific number of thresholds (SRHM) is computed with the harmonic mean, HM, as follows:

SRHM = M / Σ_{k=1}^{M} (1 / SR_k),  (20)

where M = 20 is the total number of images and SR_k is the success rate on the kth image.
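
The harmonic mean used for SRHM can be sketched as follows; note that it is dominated by the smallest rate, so a single hard image drags the score down sharply, and a zero success rate makes the SRHM zero (illustrative code, our own naming).

```python
def harmonic_mean(values):
    """Harmonic mean of per-image success rates; returns 0.0 when any
    rate is zero, matching the convention that a failed subproblem
    zeroes out the SRHM."""
    values = [float(v) for v in values]
    if any(v == 0.0 for v in values):
        return 0.0
    return len(values) / sum(1.0 / v for v in values)
```

For instance, success rates of 0.5 and 1.0 average to 2/3 rather than the arithmetic 0.75.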

The SRHM is very important in measuring the stability of an algorithm; it reflects the ratio of runs that achieve the target solution. Because evolutionary methods are stochastic search algorithms, the solutions are not the same in each run of the algorithm and depend on the search ability of the algorithm. Therefore, the SRHM is vital in evaluating the stability of the algorithms. The comparison of stability gives us valuable information in terms of the success rates. A higher SRHM means better stability of the algorithm.

An algorithm producing SRHM < 0.5 means that more than 50 percent of the independent runs of the algorithm cannot reach the global solution. Thus, an algorithm that yields SRHM < 0.5 should not be selected to solve the problem. The experiments were conducted for numbers of thresholds varying from 2 to 16. From these experiments, two characteristic values are reported in Table 4: the maximum number of thresholds for which the algorithm still yields SRHM ≥ 0.5, and the maximum number of thresholds that the algorithm can solve at all; above the latter value there was a case in which all 50 runs of some subproblem missed the VTR, so the success rate was zero and the associated SRHM was zero too.

Let m be the number of thresholds. The reliability of a solution is measured by the threshold value distortion measure (TVD), computed as

TVD = (1 / (50 m)) Σ_{r=1}^{50} Σ_{j=1}^{m} |t_{j,r} − t_j*| · I(t_{j,r} ≠ t_j*),  (21)

where t_j* is the jth threshold value producing the VTR, t_{j,r} is the jth threshold value obtained from the algorithm in run r, and I is the indicator function, which is equal to 1 when t_{j,r} ≠ t_j* and is zero otherwise. TVD is zero if the algorithm can reach the VTR in every run. The lower the TVD, the more reliable the algorithm is.
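
Under one plausible reading of the TVD definition, the average absolute deviation of the obtained thresholds from the optimal ones over all runs and thresholds, counting only runs that missed the optimum, a sketch is (illustrative code, our own naming):

```python
def threshold_value_distortion(optimal, obtained_runs):
    """Assumed form of the TVD metric: sum of absolute threshold
    deviations over runs that missed the optimal thresholds, averaged
    over the number of thresholds and the number of runs."""
    m = len(optimal)
    total = 0.0
    for run in obtained_runs:
        if list(run) != list(optimal):       # indicator: this run missed the VTR
            total += sum(abs(a - b) for a, b in zip(optimal, run))
    return total / (m * len(obtained_runs))
```

A run that matches the optimal thresholds exactly contributes zero, so an algorithm that reaches the VTR in every run has a TVD of zero.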

5.2.1. The Value to Reach (VTR)

Following the completion of all of the experiments, the best values of the between-class variance and the corresponding thresholds were collected; they are shown in Table 6. The results are shown image by image, with the number of thresholds varying from 2 to 16. The between-class variance values in column 3 are used as the VTR values.

5.2.2. Results Produced by Local Search Method

The multithresh function of the Matlab toolbox was run on the same images and numbers of thresholds as the other search methods. The capability of a local search versus a global search in finding the optimal solution is discussed here; this is the reason we focused on the global search, that is, the proposed algorithm. Table 2 shows the between-class variances and threshold values of the “mountain” image. These values were the best outcomes of 50 runs produced by the multithresh function in the Matlab R2013a toolbox and by the proposed O(0.0)R-DE algorithm. The termination condition of the multithresh function was set by “MaxFunEvals” = 500000; that is, the multithresh function performs more function calls than the proposed algorithm. It can be seen from columns 3 and 5 that all the BCVs produced by the proposed algorithm are better than the BCVs produced by the multithresh function; the difference of the BCVs is shown in column 7. The differences in the thresholds from the two algorithms, shown in column 8, tended to be large as the number of thresholds increased.

Table 2: The best values of the between-class variance and thresholds of the “mountain” image produced by the multithresh function of the Matlab toolbox and by the proposed O(0.0)R-DE algorithm. The number of thresholds varies from 2 to 16.
Table 3: The average of the mean number of generations and the ranks of the methods.
Table 4: The average success rate (SRHM) and the ranks of the methods.

Figure 2 shows the graph of the TVD for all the images and thresholds. These results follow the same pattern as the results of the “mountain” image in Table 2. That means the ability of the proposed global search algorithm to find the optimal solution is higher than that of the multithresh function, especially when the number of thresholds is large. This illustrates the difficulty of the problem: the objective can be multimodal [33] or can have a nearly flat top surface [34]. The multithresh function solves the problem by performing the Nelder-Mead simplex method [35], which is a local search method that cannot guarantee an optimal solution. Thus, its solutions are inferior to the solutions produced by an algorithm using a global search.
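
To illustrate why a local method can miss the global optimum, here is a toy pattern search (our own stand-in for a local method such as Nelder-Mead, not the multithresh implementation) that gets trapped in the wrong basin of a double-well function:

```python
def local_search(f, x, step=0.1, iters=1000):
    """Toy local pattern search: move to a neighboring point only if it
    improves the objective, halving the step when stuck. It stops at
    the nearest local minimum, not the global one."""
    for _ in range(iters):
        moved = False
        for d in (+step, -step):
            if f(x + d) < f(x):
                x += d
                moved = True
                break
        if not moved:
            step /= 2
            if step < 1e-9:
                break
    return x

# double-well objective: local minimum near x = 1.37, global near x = -1.47
f = lambda x: x**4 - 4 * x**2 + x
x_local = local_search(f, 1.5)   # starting in the right-hand basin
```

Started at x = 1.5, the search settles near the local minimum around 1.37 and never reaches the deeper minimum near −1.47; a global algorithm such as the proposed DE variant does not share this limitation.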

Figure 2: The threshold value distortion (TVD) of algorithms versus number of thresholds.
5.2.3. Convergence Rate Comparison

The number of generations (NG) is the measure used for the convergence rate comparison. If the target value, VTR, is achieved in a smaller number of generations, the algorithm has a faster convergence rate. Table 3 shows the average of the mean NG for each specific number of thresholds. The results of each algorithm are represented in the correspondingly named column. In each column, the cells contain the average number of generations from the smallest number of thresholds up to the largest number of thresholds the algorithm could solve; the cells beyond that point are filled by NA. The second last row of the table summarizes each algorithm by a triple: the largest number of thresholds solved, the number of unsuccessful subproblems, and the overall average number of generations.

The algorithm with the largest number of thresholds solved, the lowest number of unsuccessful subproblems, and the lowest average number of generations is the winner. The ranking of the algorithms compares these triples componentwise as follows. First, rank on the largest number of thresholds solved; since it is numeric, the higher value has the higher rank. Second, rank on the number of unsuccessful subproblems; here the ordering is reversed, so the lower value has the higher rank. Third, rank on the average number of generations; again the lower value has the higher rank. When the ordering is finished, assign the numeric value “1” to the algorithm having the highest rank, assign “2” to the first runner up, and so on. These values are represented in the last row of the table.

A row for an algorithm that could not complete all the threshold levels can be ranked in the same manner with minor modifications: the missing entries are treated as the worst possible values of the corresponding components, and the ranking can then be performed.

From the ranking results (see the second last row of Table 3), the convergence rate can be ranked from best to worst: the proposed O(p)R-DE variants with suitable settings of p occupy the top positions, followed by DE, the remaining variants and the rank-DE, and then ABC, PSO, DPSO, and FODPSO.

As can be seen from Table 3, the DE algorithm cannot complete the task at the largest numbers of thresholds, and the rank-DE algorithm fails at even smaller numbers of thresholds. Thus, the rank-DE cannot compete with DE in searching for global multilevel thresholding.

In order for the proposed O(p)R-DE to outperform DE, the parameter p must be set within a suitable range or left nonconstant so that the onlooker selection is used.

5.2.4. Stability Analysis

The harmonic mean of the success rate (SRHM) for each specific number of thresholds was computed and is presented in Table 4. The results of each algorithm are represented in the correspondingly named column. The second row from the bottom shows the harmonic mean of the success rate of each algorithm over all threshold levels.

In each column, the cells contain the SRHM values from the smallest number of thresholds up to the largest number of thresholds the algorithm could solve; the remaining cells are filled by NA and excluded from the comparison. The second last row of the table summarizes each algorithm by a triple: the largest number of thresholds with SRHM of at least 0.5, the largest number of thresholds solved, and the overall average success rate.

The algorithm with the highest values in all three components is the winner. The ranking of the algorithms compares the triples componentwise: first on the largest number of thresholds with SRHM of at least 0.5, second on the largest number of thresholds solved, and third on the average success rate.

Because the components are numeric, the higher value has the higher rank. When the ordering is finished, the numeric value “1” is assigned to the algorithm having the highest rank, the value “2” is assigned to the first runner up, and so on.

A row for an algorithm that could not complete all the threshold levels can be ranked in the same manner with minor modifications: the missing components are treated as the worst possible values, and the ranking can then be performed.

From the ranking results (see Table 4), the success rate can be ranked from best to worst: the proposed O(p)R-DE variants with suitable settings of p rank highest, followed by DE, the remaining variants and the rank-DE, and then ABC, PSO, DPSO, FODPSO, and multithresh.

As can be seen from Table 4, the DE algorithm maintains SRHM ≥ 0.5 only up to a moderate number of thresholds, and the rank-DE algorithm only up to a smaller number. This result confirms that the rank-DE cannot compete with DE in searching for global multilevel thresholding. If a proper value of p is selected, the proposed algorithm works very well: for large numbers of thresholds, O(p)R-DE has a higher rank than DE. The multithresh function cannot compete with any of the other algorithms. It can also be seen that the proposed algorithm has the best stability, because its SRHM remains greater than 0.5 for numbers of thresholds from 2 to 16.

5.2.5. Reliability Comparison

The threshold value distortion (TVD) for each specific number of thresholds is computed, shown in Table 5, and depicted in Figure 2. The results of each algorithm are illustrated in the correspondingly named column. The second last row of Table 5 shows the slope, or the approximated growth rate, of the TVD of each algorithm over all threshold levels. The slope is that of a robust linear regression computed by the Matlab function “robustfit.” A lower slope exhibits better reliability. From these results, the reliability can be ranked from best to worst: the proposed O(p)R-DE variants with suitable settings of p rank highest, followed by DE, ABC, the remaining variants, and then DPSO, FODPSO, PSO, and multithresh.

Table 5: The average threshold value distortion (TVD) and the ranks of the methods.
Table 6: The between-class variance criterion and the best thresholds for test images.

We can see from these results that the rank-DE has a higher approximated growth rate than DE, while O(p)R-DE with suitable settings of p is still better than DE. The best variant produced the best result, with a very flat slope and a very low intercept. The multithresh function yielded a higher growth rate of solution distortion and a higher intercept than the other algorithms. A higher growth rate of solution distortion means that the quality of the solution drops very quickly as the number of thresholds increases. A higher intercept means that the solution distortion at the lowest number of thresholds is already high. Thus, a global optimization algorithm is required for solving multilevel thresholding.

6. Conclusions

The differential evolution with onlooker and ranking-based mutation operator algorithm, O(p)R-DE, was proposed and applied to the multilevel image thresholding problem. The objective of this proposed algorithm was to increase the ability to adjust the balance of the exploitation and exploration abilities. Its concept is a combination of the ranking-based differential evolution and the onlooker selection of the ABC algorithm. The experiments compared the proposed algorithm with six existing algorithms, PSO, DPSO, FODPSO, ABC, DE, and rank-DE, on 20 real images from the Berkeley Segmentation Dataset and Benchmark and a satellite image. The stability, convergence speed, and reliability were measured. The results signified that the proposed algorithm is more efficient than the six tested algorithms. The onlooker ranking-based mutation operator is able to enhance the performance of the proposed algorithm. The O(p)R-DE not only obtained better stability but also achieved faster convergence rates in reaching the target BCV, if a proper value of p is set.

As future work, the proposed algorithm still has one parameter that must be set by the user; a mechanism to adapt this parameter automatically is required but has not yet been presented.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Higher Education Research Promotion and National Research University Project of Thailand, Office of the Higher Education Commission, through the Cluster of Research to Enhance the Quality of Basic Education.

References

  1. W. K. Pratt, Digital Image Processing, John Wiley & Sons, New York, NY, USA, 1978.
  2. J. S. Weszka, “A survey of threshold selection techniques,” Computer Graphics and Image Processing, vol. 7, no. 2, pp. 259–265, 1978.
  3. K. S. Fu and J. K. Mui, “A survey on image segmentation,” Pattern Recognition, vol. 13, no. 1, pp. 3–16, 1981.
  4. P. K. Sahoo, S. Soltani, and A. K. C. Wong, “A survey of thresholding techniques,” Computer Vision, Graphics and Image Processing, vol. 41, no. 2, pp. 233–260, 1988.
  5. N. R. Pal and S. K. Pal, “A review on image segmentation techniques,” Pattern Recognition, vol. 26, no. 9, pp. 1277–1294, 1993.
  6. A. T. Abak, U. Baris, and B. Sankur, “The performance evaluation of thresholding algorithms for optical character recognition,” in Proceedings of the 4th International Conference on Document Analysis and Recognition, pp. 697–700, Ulm, Germany, August 1997.
  7. M. Sezgin and B. Sankur, “Survey over image thresholding techniques and quantitative performance evaluation,” Journal of Electronic Imaging, vol. 13, no. 1, pp. 146–168, 2004.
  8. S. U. Lee and S. Y. Chung, “A comparative performance study of several global thresholding techniques for segmentation,” Computer Vision, Graphics and Image Processing, vol. 52, no. 2, pp. 171–190, 1990.
  9. N. Otsu, “A threshold selection method from gray level histograms,” IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
  10. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN ’95), vol. 4, pp. 1942–1948, Perth, Australia, December 1995.
  11. R. Storn and K. Price, “Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces,” Tech. Rep. TR-95-012, International Computer Science Institute, Berkeley, Calif, USA, 1995.
  12. D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
  13. M. S. Couceiro, R. P. Rocha, N. M. F. Ferreira, and J. A. T. Machado, “Introducing the fractional-order Darwinian PSO,” Signal, Image and Video Processing, vol. 6, no. 3, pp. 343–350, 2012.
  14. R. V. Kulkarni and G. K. Venayagamoorthy, “Bio-inspired algorithms for autonomous deployment and localization of sensor nodes,” IEEE Transactions on Systems, Man and Cybernetics C, vol. 40, no. 6, pp. 663–675, 2010.
  15. B. Akay, “A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding,” Applied Soft Computing, vol. 13, no. 6, pp. 3066–3091, 2013.
  16. K. Hammouche, M. Diaf, and P. Siarry, “A comparative study of various meta-heuristic techniques applied to the multilevel thresholding problem,” Engineering Applications of Artificial Intelligence, vol. 23, no. 5, pp. 676–688, 2010.
  17. V. Osuna-Enciso, E. Cuevas, and H. Sossa, “A comparison of nature inspired algorithms for multi-threshold image segmentation,” Expert Systems with Applications, vol. 40, no. 4, pp. 1213–1219, 2013.
  18. P. Ghamisi, M. S. Couceiro, F. Martins, and J. A. Benediktsson, “Multilevel image segmentation based on Fractional-Order Darwinian particle swarm optimization,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 1, pp. 1–44, 2013.
  19. W. Gong and Z. Cai, “Differential evolution with ranking-based mutation operators,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 2066–2081, 2013.
  20. K. Price, R. Storn, and J. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, Berlin, Germany, 2005.
  21. R. Storn and K. Price, Home Page of Differential Evolution, International Computer Science Institute, Berkeley, Calif, USA, 2010.
  22. A. K. Qin and P. N. Suganthan, “Self-adaptive differential evolution algorithm for numerical optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC ’05), pp. 1785–1791, September 2005.
  23. E. Mezura-Montes, J. Velázquez-Reyes, and C. A. Coello Coello, “A comparative study of differential evolution variants for global optimization,” in Proceedings of the 8th Annual Genetic and Evolutionary Computation Conference, pp. 485–492, July 2006.
  24. A. K. Qin, V. L. Huang, and P. N. Suganthan, “Differential evolution algorithm with strategy adaptation for global numerical optimization,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 398–417, 2009.
  25. S. Das, A. Abraham, and A. Konar, “Automatic clustering using an improved differential evolution algorithm,” IEEE Transactions on Systems, Man, and Cybernetics A, vol. 38, no. 1, pp. 218–237, 2008.
  26. J. Brest, S. Greiner, B. Bošković, M. Mernik, and V. Zumer, “Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
  27. N. Noman and H. Iba, “Accelerating differential evolution using an adaptive local search,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 107–125, 2008.
  28. R. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-based differential evolution,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 64–79, 2008.
  29. D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of the 8th International Conference on Computer Vision, vol. 2, pp. 416–423, July 2001.
  30. P. Ghamisi, M. S. Couceiro, J. A. Benediktsson, and N. M. F. Ferreira, “An efficient method for segmentation of image based on fractional calculus and natural selection,” Expert Systems with Applications, vol. 39, pp. 12407–12417, 2012.
  31. R. Gamperle, S. D. Muller, and A. Koumoutsakos, “A parameter study for differential evolution,” in Proceedings of the Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, vol. 10, pp. 293–298, 2002.
  32. M. Saraswat, K. V. Arya, and H. Sharma, “Leukocyte segmentation in tissue images using differential evolution algorithm,” Swarm and Evolutionary Computation, vol. 11, pp. 46–54, 2013.
  33. J. Kittler and J. Illingworth, “On threshold selection using clustering criteria,” IEEE Transactions on Systems, Man and Cybernetics, vol. 15, no. 5, pp. 652–655, 1985.
  34. R. Guo and S. M. Pandit, “Automatic threshold selection based on histogram modes and a discriminant criterion,” Machine Vision and Applications, vol. 10, no. 5-6, pp. 331–338, 1998.
  35. J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, “Convergence properties of the Nelder-Mead simplex method in low dimensions,” SIAM Journal on Optimization, vol. 9, no. 1, pp. 112–147, 1998.