Discrete Dynamics in Nature and Society
Volume 2017, Article ID 5193013, 22 pages
https://doi.org/10.1155/2017/5193013
Research Article

Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

1Department of Information Service & Intelligent Control, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2University of Chinese Academy of Sciences, Beijing 100039, China
3Shenyang University, Shenyang 110044, China
4School of Computer Science and Software, Tianjin Polytechnic University, Tianjin 300387, China

Correspondence should be addressed to Maowei He; hemaowei@hotmail.com and Hanning Chen; perfect_chn@hotmail.com

Received 4 November 2016; Revised 11 April 2017; Accepted 19 April 2017; Published 27 June 2017

Academic Editor: Seenith Sivasundaram

Copyright © 2017 Danping Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history memorized in the Binary Space Partitioning fitness tree can effectively restrain the individuals’ revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to local optima. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to the other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.

1. Introduction

Solving numerical optimization problems is a challenging research endeavor in many scientific areas. Many optimization techniques and search algorithms have drawn their motivation from evolution and social behavior. These include ant colony optimization [1], genetic algorithms [2], differential evolution [3], particle swarm optimization [4], and artificial immune systems [5]. In swarm intelligence (SI) systems, many simple individuals interact locally with one another and with their environment. Although such systems are decentralized, local interactions between individuals lead to the emergence of global behavior and properties. All of these algorithms have many variants with excellent performance, based on various improvement strategies.

In the last few years, several heuristics have been developed to improve the performance and set suitable parameters of the PSO algorithm. van den Bergh [6] analyzed the trajectory of particles under different inertia weights and acceleration coefficients. The original structure of the PSO model reflects an intuitive idea: a particle takes any position whose fitness value is better than its current one as a reference input. However, many experiments have shown that the basic PSO algorithm easily falls into local optima when solving complex multimodal problems with a huge number of local optima.

To overcome this problem, researchers have conducted many studies. Inspired by the coevolution phenomenon in nature, in Cooperative Particle Swarm Optimization (CPSO) [7], multiple swarms are partitioned uniformly to optimize different components of the solution vector cooperatively. Inspired by this work, a CCPSO integrating random grouping and adaptive weighting schemes was developed and demonstrated great promise in scaling up PSO on high-dimensional nonseparable problems [8]. In [9], a competitive and cooperative coevolutionary PSO (CCPSO) shows considerable potential for solving complex optimization problems by explicitly modeling the coevolution of competing and cooperating species. In Multiswarm Self-Adaptive and Cooperative Particle Swarm Optimization (MSCPSO) [10], particles in each subswarm share a single global historical best optimum to enhance the cooperative capability. In [11], the Adaptive Cooperative Particle Swarm Optimizer (ACPSO) facilitates cooperation through the use of the Learning Automata (LA) algorithm. The coevolutionary particle swarm optimization algorithm associated with the artificial immune principle (ICPSO) [12] adopts an improved immune-clonal-selection operator for optimizing the elite subpopulation and a migration scheme for information exchange between the elite subpopulation and normal subpopulations.

Different from the above works with a static-swarm strategy, some cooperative PSO variants introduce a dynamic multiswarm strategy to enhance the ability to explore local optima. With the aim of maintaining multiple swarms on different peaks, a clustering PSO (CPSO) proposed in [13] employs a nearest neighbor learning strategy to train particles and a hierarchical clustering method to locate and track multiple optima. In the hierarchical cluster-based multispecies particle swarm optimization (HCMSPSO) algorithm [14], a swarm is clustered into multiple species at an upper hierarchical level, and each species is further clustered into multiple subspecies at a lower hierarchical level; at the lower layer, subspecies within the same species are formed adaptively in each iteration during the particle update. In [15], a dynamic multiswarm particle swarm optimizer (DMS-PSO) merges the harmony search (HS) algorithm into each subswarm. These subswarms are regrouped frequently and information is exchanged among the particles of the whole swarm. In a novel optimizer using Adaptive Heterogeneous Particle Swarms (AHPS2) [16], to best exploit the heterogeneous learning strategies, an adaptive competition strategy dynamically adjusts the population sizes of one swarm with comprehensive learning and another with subgradient learning based on their group performance. In particle swarm optimization with an interswarm interactive learning strategy (IILPSO) [17], the roles of the learning swarm and the learnt swarm are swapped to maintain diversity when the interswarm interactive learning (IIL) behavior is triggered to adjust the sizes of the two swarms.

To prevent different individuals from repeatedly exploiting the same searched regions, or excessive individuals from prematurely searching a narrow region, researchers have conducted a series of studies. Inspired by biology, niching [18] and speciation [19] techniques are introduced into PSO to prevent the swarm from crowding too closely and to locate as many optimal solutions as possible. Additionally, PSO topological structures are also widely adopted. The ring topology employed in [20], operating as a niching algorithm, can drive the particles to explore the search space more broadly. Further, in [21] a multilayer strategy with multiple topologies is adopted to decrease the number of ineffective exploitation attempts. Inspired by ecological cohabitation, chaotic multiswarm particle swarm optimization (CMS-PSO) modifies generic PSO with the help of chaotic sequences for multidimensional unknown parameter estimation and optimization by forming multiple cooperating swarms [22]. The historical memory strategy in HMPSO [23], which estimates and preserves the distribution information of particles’ historical promising positions, helps preserve information about the optimal solution space and enables comprehensive learning.

Although all the above works improve the performance of particle swarm optimization, there is a shortage of research on hybrid algorithms that combine a dynamic multiswarm strategy with a nonrepetition search approach.

In this paper, we propose a new coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH). It is shown that when a Binary Space Partitioning tree archive stores the positions and fitness values of all evaluated solutions, this archive can be treated as an approximation of the fitness function, which avoids the individuals’ revisit phenomenon and improves search efficiency. Moreover, a K-means clustering method is introduced to repartition the whole population into subpopulations periodically. Information communication among subpopulations is implemented in the dynamic repartition process. Therefore, the MCPSO-PSH algorithm has considerable potential for solving complex problems. To comprehensively evaluate the performance of the proposed algorithm, MCPSO-PSH is compared with other state-of-the-art algorithms on three categories of experiments: (1) ten 10-dimensional and 30-dimensional benchmark problems with various properties, such as unimodal and multimodal functions, are employed to test MCPSO-PSH’s performance on a diverse set of problems; (2) a set of ten CEC2005 30-dimensional benchmark functions is used to verify MCPSO-PSH’s scalability to complex problems; (3) a real-world problem, multilevel thresholding based on the Otsu method for image segmentation, is employed to benchmark MCPSO-PSH’s applicability to practical problems. The numerical results demonstrate that MCPSO-PSH significantly improves the performance of PSO and outperforms most of the comparison algorithms in the experiments. The main contributions of the proposed method lie in the following aspects.

(1) A new way of information communication among multiple swarms is designed. Each particle’s position is updated according to not only its personal history information and the global information of the species it belongs to but also the global information of other species.

(2) A dynamic K-means clustering scheme is designed. By decreasing the number of clusters and repartitioning the population into species, the search generally transitions from global search in the early evolution period to local search in the later evolution period.

(3) A Binary Space Partitioning fitness tree memorizing the previous search history prevents individuals from revisiting unpromising regions of the landscape, which improves search efficiency and helps avoid trapping in local optima.

The rest of this paper is organized as follows: Section 2 first gives a review of the related works; Section 3 gives a brief explanation of the proposed coevolution particle swarm optimization algorithm; Section 4 presents the experimental studies of the proposed MCPSO-PSH; in Section 5, the performance of MCPSO-PSH based multilevel thresholding for image segmentation is evaluated. Finally, Section 6 concludes the paper.

2. Related Works

2.1. Particle Swarm Optimization

Particle swarm optimization (PSO) is an evolutionary computation (EC) paradigm that emulates the swarming behavior of bird flocks [24]. It is a population-based search algorithm that exploits a population of individuals to probe promising regions of the search space. In this context, the population is called a swarm and the individuals are called particles. Each particle moves with an adaptable velocity within the search space and retains in its memory the best position it has ever encountered. The standard version of PSO is briefly described here [6].

Let N be the swarm size and D the dimension of the search space, and let each particle i of the swarm have a current position vector X_i = (x_i1, ..., x_iD), a current velocity vector V_i = (v_i1, ..., v_iD), and an individual best position vector P_i found by the particle itself so far. The swarm also has the global best position vector P_g found by any particle during all prior iterations in the search space. Assuming that the function f is to be minimized, each dimension d of particle i is updated in iteration t using the following equations:

v_id(t + 1) = w v_id(t) + c1 r1 (p_id - x_id(t)) + c2 r2 (p_gd - x_id(t)),  (2)

x_id(t + 1) = x_id(t) + v_id(t + 1).  (3)

In (2), c1 and c2 denote constant acceleration coefficients, and r1 and r2 are elements from random sequences in the range (0, 1).

The inertia weight w plays the role of balancing the global and local searches. It can be a positive constant (e.g., 0.9) or even a positive linear or nonlinear function of time [13, 14]. Although proper fine-tuning of the parameters c1 and c2 sometimes leads to improved performance, an extended study of the cognitive and social parameters in [6] suggests c1 = c2 = 2.0 as default values.
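As an illustration, the update of (2) and (3) can be sketched in Python; the function name pso_step and the default parameter values here are illustrative, not part of the original paper.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0, rng=random.Random(0)):
    """One velocity/position update for a single particle, per eqs. (2)-(3)."""
    new_v = [w * vd
             + c1 * rng.random() * (pb - xd)   # cognitive term
             + c2 * rng.random() * (gb - xd)   # social term
             for xd, vd, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xd + vd for xd, vd in zip(x, new_v)]
    return new_x, new_v
```

When the particle already sits at both its personal and the global best, the random terms vanish and only the inertia term w·v remains, which is the mechanism that lets particles overshoot and keep exploring.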

2.2. K-Means Clustering

K-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. K-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster, as shown in Figure 1. This results in a partitioning of the data space into Voronoi cells [25].

Figure 1: K-means clustering result.

The cluster centers are substituted for the center positions of food sources, and the formula for computing the centers is shown in (4). If the j-th cluster contains m members, denoted X_1, X_2, ..., X_m, then the center C_j is determined as

C_j = (1/m) Σ_{i=1}^{m} X_i.  (4)

The radius R_j of a cluster is defined as the mean (Euclidean) distance of the members from the center of the cluster. Thus, R_j can be written as

R_j = (1/m) Σ_{i=1}^{m} ||X_i - C_j||,  (5)

where X_i is the position of the i-th member and m is the number of members in the corresponding cluster.
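A minimal sketch of the center and radius computations of (4) and (5), assuming clusters are given as lists of coordinate lists (function names are illustrative):

```python
import math

def cluster_center(members):
    """Mean position of a cluster's members, per eq. (4)."""
    n, dim = len(members), len(members[0])
    return [sum(m[d] for m in members) / n for d in range(dim)]

def cluster_radius(members, center):
    """Mean Euclidean distance of the members from the center, per eq. (5)."""
    dists = [math.dist(m, center) for m in members]
    return sum(dists) / len(dists)
```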

2.3. Fitness Tree

Inspired by the Binary Space Partitioning (BSP) tree, the fitness tree was proposed by Shiu and Chi [26]. The establishment of a fitness tree aims at partitioning the whole search space into several subspaces (subregions) according to the historical positions of members.

Each node of the fitness tree represents a partitioned subspace of the whole search space, and each node has at most two child nodes, a left node and a right node. The union of the search spaces of the left and right nodes is the search space of the parent node. Such a fitness tree is constructed from a series of solutions found by the individuals. The process of building the fitness tree is random, which means its topology for information communication is constructed stochastically.

The individuals of an evolutionary algorithm can be regarded as a sequence of solutions x_1, x_2, ..., x_n, where n is the population size. In each cycle, parent individuals generate child individuals, and each child individual is inserted into the fitness tree as a leaf node. The fitness tree thus stores all visited search positions, so a revisit can be detected immediately. In normal evolutionary algorithms, parent individuals with better solutions are selected more often, which results in a higher probability of revisiting the same search space. In the extreme case, an individual trapped in a local optimum may be selected over and over, so that after several iterations all individuals are generated from it. The phenomenon of revisiting leads to a precipitous loss of population diversity.

The pseudocode of fitness tree construction is given in Algorithm 1, and a detailed explanation follows. In the process of inserting a solution x_i into the fitness tree, the current node Curr_node represents the survey node. When the inserted solution x_i coincides with the solution stored at the survey node in every dimension, the i-th solution in the sequence is a revisit.

Algorithm 1: Pseudocode of fitness tree construction.

The core idea of the fitness tree is that, after identifying the dimension with the maximum difference, the new individual either becomes a leaf node or partitions a parent node (subspace) into two child nodes (sub-subspaces).
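The insertion-with-revisit-detection idea can be sketched as follows. This is an illustrative reimplementation, not the authors’ exact Algorithm 1; the names BSPNode and insert, and the midpoint split rule, are assumptions.

```python
class BSPNode:
    """A leaf stores one evaluated position and its fitness; an internal node
    stores a splitting dimension/value that partitions its subspace in two."""
    def __init__(self, pos, fit):
        self.pos, self.fit = pos, fit
        self.left = self.right = None

def insert(root, pos, fit):
    """Insert an evaluated solution; returns False if `pos` is a revisit."""
    node = root
    while node.left is not None:              # internal node: descend
        d = node.split_dim
        node = node.left if pos[d] <= node.split_val else node.right
    if pos == node.pos:                       # identical to stored leaf: revisit
        return False
    # Split the leaf along the dimension with maximum difference.
    d = max(range(len(pos)), key=lambda i: abs(pos[i] - node.pos[i]))
    node.split_dim = d
    node.split_val = (pos[d] + node.pos[d]) / 2.0
    old, new = BSPNode(node.pos, node.fit), BSPNode(pos, fit)
    if pos[d] <= node.split_val:
        node.left, node.right = new, old
    else:
        node.left, node.right = old, new
    return True
```

A second attempt to insert the same position descends to the same leaf and is reported as a revisit, which is exactly the diversity-preserving check described above.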

3. Algorithm

3.1. Multispecies Coevolution Model for Optimization

Inspired by the mutualism phenomenon, the single-population PSO is extended into an interacting multispecies model by enhancing the particle dynamics.

In this coevolution model, the information for a particle’s position update includes not only its personal history information and the global information of its own species but also the global information of other species. Information communication occurs simultaneously within each species and between neighboring species.

In mathematical terms, the original particle velocity equation and position update equation ((2) and (3)) are improved as follows:

V_i^s(t + 1) = w V_i^s(t) + c1 r1 (P_i^s - X_i^s(t)) + c2 r2 (S^s - X_i^s(t)) + c3 r3 (G - X_i^s(t)),  (6)

X_i^s(t + 1) = X_i^s(t) + V_i^s(t + 1),  (7)

where the superscript s represents the s-th species, X_i^s denotes the position of the i-th particle of the s-th species, P_i^s is that particle’s historical best position, S^s is the historical best position found by the species it belongs to, and G is the historical best position found by the whole population. In (6), c1, c2, and c3 are constant coefficients; in this paper, c1, c2, and c3 are set to the same value, and r1, r2, and r3 are random values in the range (0, 1).

In (6), the first learning term, weighted by c1, takes into account the individual’s own experience; the second, weighted by c2, represents the internal communication within the s-th species; and the third, weighted by c3, indicates the symbiotic coevolution between dissimilar species.
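A sketch of the three-term velocity update of (6); the function name mcpso_velocity and the default coefficient value are illustrative, since the paper’s exact coefficient value is not reproduced here.

```python
import random

def mcpso_velocity(x, v, pbest, sbest, gbest,
                   w=0.9, c=1.0, rng=random.Random(1)):
    """Velocity update of eq. (6): personal best, species best, and population
    best all contribute, with equal constant coefficients c1 = c2 = c3 = c."""
    return [w * vd
            + c * rng.random() * (pb - xd)   # cognitive term
            + c * rng.random() * (sb - xd)   # intra-species term
            + c * rng.random() * (gb - xd)   # inter-species coevolution term
            for xd, vd, pb, sb, gb in zip(x, v, pbest, sbest, gbest)]
```

The only structural difference from the canonical update in Section 2.1 is the extra attraction toward the population-wide best G, which is what couples the species together.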

3.2. Dynamic K-Means Clustering

To implement the multispecies coevolution model, the stochastically generated population should be partitioned into subspecies. The widely adopted K-means clustering method (see Section 2.2) is employed to achieve this population division. The main drawback of K-means clustering is that the quality of the division depends on the initial distribution of the stochastically generated individuals and on the selection of the initial division points. A relatively effective remedy is to set a large number of clusters.

In our proposed algorithm, the number of clusters is initially set to a predefined maximum k_max. During optimization, each cluster operates independently under the multispecies coevolution model (see Section 3.1). After every G optimization iterations, the whole population is repartitioned into k clusters, where the successive values of k are taken in order from a decreasing sequence running from k_max down to a minimum k_min, and G is derived from the total number of iterations T. Since each new k is smaller than the previous one, the individuals in small clusters may be redistributed into relatively larger clusters when the number of clusters changes.

To balance exploration and exploitation, the value of k is not constant. To enhance the ability of local search especially in the early stages of searching, when many small species each locally search their own region, k should be larger in the early period than in the later one. Therefore, in our proposed algorithm, k is calculated as a decreasing function of the iteration count.

Moreover, in the process of optimization, the centers of two or more clusters may move too close to, or even overlap, each other. To avoid this phenomenon and to keep every cluster locally searching its own part of the landscape, the relatively smaller of two adjacent clusters is decomposed and its individuals are redistributed into the other adjacent clusters. The evaluation criterion is

d_{j,l} - (R_j + R_l) < 0,

where d_{j,l} is the distance between the centers of two adjacent clusters and R_j and R_l are the radiuses of the two adjacent clusters (see (5)); the left-hand term d_{j,l} - (R_j + R_l) represents the gap between the two adjacent clusters.
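The two mechanisms above can be sketched as follows. The linear schedule in cluster_count is an assumed stand-in, since the paper’s exact formula is not reproduced here, while should_merge implements the overlap criterion directly; both function names are illustrative.

```python
import math

def cluster_count(t, T, k_max, k_min):
    """Illustrative linearly decreasing cluster-count schedule: many clusters
    early in the run, few clusters late (assumed stand-in formula)."""
    return max(k_min, k_max - round((k_max - k_min) * t / T))

def should_merge(center_a, radius_a, center_b, radius_b):
    """Overlap test: the gap d - (R_a + R_b) is negative when two adjacent
    clusters touch or overlap, triggering decomposition of the smaller one."""
    d = math.dist(center_a, center_b)
    return d - (radius_a + radius_b) < 0
```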

3.3. Adaptive Normalization of Population Members

In the proposed multispecies particle swarm optimization, the idea of the fitness tree is also employed to overcome the revisit phenomenon. Here, a modified adaptively normalized fitness tree is proposed.

In the process of inserting a new individual into the fitness tree (see Algorithm 1), there is a potential drawback: the comparing dimension is chosen as the one with the maximum raw difference between the two nodes, so dimensions with larger value ranges dominate the choice regardless of how informative they are.

To overcome this drawback, a normalization method is employed to determine which dimension has the largest distinguishable difference. First, the extreme points of the newly generated population and of the constructed fitness tree are determined by identifying, for each dimension d, the minimum and maximum values min_d and max_d, which form the extreme points (min_1, ..., min_D) and (max_1, ..., max_D). Each dimension value x_d of a node in the fitness tree is then translated by subtracting the corresponding dimension value of the minimum extreme point and dividing by the distribution range of that dimension:

x'_d = (x_d - min_d) / (max_d - min_d).

The pseudocode of adaptive normalization of population members is listed in Algorithm 2.

Algorithm 2: Pseudocode of adaptive normalization of population members.
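A minimal sketch of the per-dimension min-max normalization described above (the function name normalize is illustrative):

```python
def normalize(points):
    """Per-dimension min-max normalization: each coordinate is shifted by the
    dimension minimum and divided by the dimension range, so all dimensions
    become comparable when picking the max-difference split dimension."""
    dim = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dim)]
    hi = [max(p[d] for p in points) for d in range(dim)]
    return [[(p[d] - lo[d]) / (hi[d] - lo[d]) if hi[d] > lo[d] else 0.0
             for d in range(dim)] for p in points]
```

After this transformation every dimension lies in [0, 1], so a large raw value range can no longer dominate the choice of the comparing dimension.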
3.4. Algorithmic Framework

In this paper, a new multispecies coevolution particle swarm optimization based on previous search history is proposed. K-means clustering is applied to dynamically partition the whole population into multiple species, and the fitness tree is employed to prevent individuals from revisiting previously searched regions.

The pseudocode of the proposed algorithm is listed in Algorithm 3.

Algorithm 3: Pseudocode of multispecies coevolution particle swarm optimization based on previous search history.

4. Experimental Study

4.1. Experimental Settings

The proposed multispecies coevolution particle swarm optimization based on previous search history (MCPSO-PSH) was compared with four state-of-the-art EA and SI algorithms: (1) canonical particle swarm optimization (PSO) [24]; (2) differential evolution (DE) [27]; (3) Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [28]; (4) the nonrevisiting genetic algorithm (NrGA) [26].

In all experiments in this section, the values of the common parameters used in each algorithm, such as population size and total generation number, are chosen to be the same. The population size is set to 100 and the maximum number of evaluations is set to 100000. For the continuous test functions used in this paper, the dimensions are set to 10 (10D) and 30 (30D).

All the control parameters of the EA and SI algorithms are set to the default values from their original publications.

For canonical PSO, the learning rates c1 and c2 are both set to 2.0.

For canonical DE, the zoom (scaling) factor F and the crossover rate CR are set to the default values recommended in [27].

For CMA-ES, the population size is chosen according to the setting suggested in [28].

For NrGA, the crossover rate of 0.8 and the mutation rate of 0.01 are adopted [26].

For our proposed algorithm, the number of clusters decreases from the initial value k_max to the final value k_min over the run (see Section 3.2). The learning rates c1, c2, and c3 in (6) are all set to the same value.

4.2. Test and Benchmark Problems

According to the No Free Lunch theorem, “for any algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class” [29]. To fully evaluate the performance of MCPSO-PSH without a conclusion biased towards some chosen problems, a set of ten basic benchmark functions and ten CEC2005 benchmark functions [30] is employed. These functions are listed below.

4.2.1. Basic Benchmark Functions

(1) Rosenbrock function

(2) Quadric function

(3) Sum of different powers

(4) Axis parallel hyperellipsoid function

(5) Sphere function

(6) Ackley’s function

(7) Griewank’s function

(8) Michalewicz function

(9) Rastrigin’s function

(10) Schwefel’s function

These ten benchmark functions can be grouped into unimodal continuous functions f1–f5 and multimodal continuous functions f6–f10.

4.2.2. CEC2005 Benchmark Functions

(11) Shifted sphere function

(12) Shifted Schwefel’s problem 1.2

(13) Shifted rotated high conditioned elliptic function

(14) Shifted Schwefel’s problem 1.2 with noise in fitness

(15) Schwefel’s Problem 2.6 on bounds

(16) Shifted Rosenbrock’s function

(17) Shifted rotated Griewank’s function without bounds

(18) Shifted rotated Ackley’s function on bounds

(19) Shifted Rastrigin’s function

(20) Schwefel’s Problem 2.13

These ten benchmark functions can be grouped into unimodal continuous functions f11–f15 and multimodal continuous functions f16–f20.

4.3. Result and Statistical Analysis
4.3.1. Results for Basic Benchmark Functions

In this experiment, the proposed MCPSO-PSH algorithm is compared with nonrevisiting genetic algorithm (NrGA), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the canonical particle swarm optimization (PSO) algorithm, and differential evolution (DE) algorithm.

For the 10D benchmark test functions f1–f10 over 10 runs, the basic statistical results, including the means and standard deviations of the function values, are listed in Table 1. From Table 1, it is obvious that the canonical PSO and DE algorithms perform worse than MCPSO-PSH, NrGA, and CMA-ES. For the low-dimensional (10D) unimodal functions f1–f5, the consistency of the CMA-ES algorithm is relatively good: for function f1, CMA-ES obtains the best results, and for functions f2–f5 its performance is rather competitive. Compared with canonical PSO and the other state-of-the-art algorithms, MCPSO-PSH brings in the dynamic multispecies strategy based on K-means clustering to enhance the local search ability and the fitness tree to overcome the revisit phenomenon, which results in a large improvement in performance. The MCPSO-PSH and NrGA algorithms, both with a nonrevisit strategy, have nearly the same results on low-dimensional problems. Benefitting from the dynamic K-means clustering and the adaptive normalization of population members, the improved multispecies PSO performs slightly better than NrGA on functions f1–f5. For the low-dimensional (10D) multimodal functions f6–f10, the proposed MCPSO-PSH algorithm attains much smaller mean values and standard deviations than the other algorithms on function f6. On functions f7 and f8, MCPSO-PSH and NrGA obtain the global optimum with high efficiency, owing to the fact that the nonrevisit strategy based on the BSP fitness tree effectively enhances the global search, especially when a large number of local optima exist in the search space. On functions f9 and f10, the CMA-ES algorithm also reaches the global optimum because MCPSO-PSH, NrGA, and CMA-ES each have their own special mechanisms for jumping out of local optima. On the whole, regardless of whether the functions are unimodal or multimodal, the performance of the proposed MCPSO-PSH is uniform.

Table 1: Performance of all algorithms on 10D test functions f1–f10.

For the 30D benchmark test functions f1–f10 over 10 runs, the basic statistical results, including the means and standard deviations of the function values, are listed in Table 2. With the increase of the dimensionality, the convergence rate becomes slower, and the results in Table 2 show great disparities in convergence accuracy. For the high-dimensional unimodal problems f1–f5, the performance of the MCPSO-PSH algorithm is more outstanding than that of the other four algorithms because MCPSO-PSH with the dynamic clustering strategy can accelerate the convergence rate, especially in the later evolution period. The performance of the NrGA algorithm becomes slightly poor, although it also uses a nonrevisit strategy to improve search efficiency. Only on functions f1–f5 does CMA-ES outperform NrGA in convergence rate and accuracy. For the high-dimensional multimodal problems f6–f10, MCPSO-PSH performs well. With the dynamic multispecies strategy based on K-means clustering and the nonrevisit strategy based on the BSP fitness tree, the MCPSO-PSH algorithm can avoid premature convergence to local optima when solving the high-dimensional multimodal problems. From the basic statistical results, NrGA also shows outstanding performance on test functions f6 and f10, which means that the nonrevisit strategy based on the BSP fitness tree plays an important role in guiding the search direction. On the whole, the performance of all algorithms tested here is ordered as MCPSO-PSH > NrGA > CMA-ES > PSO > DE.

Table 2: Performance of all algorithms on 30D test functions f1–f10.

The convergence curves of the MCPSO-PSH algorithm and the other EAs on test functions f1–f10 in 10D and 30D are shown in Figure 2. It can be observed that, as the dimensionality increases, the relative convergence rate of the proposed MCPSO-PSH algorithm becomes faster. Predictably, with a further increase in the dimensionality of the test functions, the MCPSO-PSH algorithm is expected to show a clear superiority in solving high-dimensional problems.

Figure 2: Convergence curves of MCPSO-PSH, NrGA, CMA-ES, PSO, and DE on test functions f1–f10 in low dimension (10D) and high dimension (30D), respectively.
4.3.2. Results for CEC2005 Benchmark Functions

The statistical results in Table 3 show that MCPSO-PSH has significantly outstanding performance in all these cases except f14 and f20. For f14, although all the algorithms provide similar results (except DE), the solutions of MCPSO-PSH are still competitive. For f20, MCPSO-PSH achieves the second-best solution, just behind NrGA. The remarkable performance of MCPSO-PSH on f11–f15 indicates that MCPSO-PSH also applies to unimodal functions even though it is primarily designed for multimodal functions. For the multimodal functions f16–f19, MCPSO-PSH outperforms NrGA, CMA-ES, PSO, and DE. In terms of average obtained results, MCPSO-PSH is better than NrGA on f12, f13, f15, f16, f17, and f18, and roughly equivalent to it on f11 and f19. MCPSO-PSH and NrGA outperform the other three algorithms in most cases, which demonstrates the great promise of the nonrevisiting strategy for simultaneously maintaining population diversity and improving search efficiency. Clearly, MCPSO-PSH is superior to NrGA, a benefit of its unique dynamic multispecies strategy based on K-means clustering.

Table 3: Basic statistics of CEC2005 benchmark test functions obtained by MCPSO-PSH, NrGA, CMA-ES, PSO, and DE.

To intuitively depict the search behavior, box plots are employed to demonstrate the statistical performance of the five algorithms, as shown in Figure 3. Although the nonrevisiting strategy is implemented in both NrGA and MCPSO-PSH, the robustness of NrGA is still undesirable, whereas MCPSO-PSH with the dynamic multispecies strategy shows a great enhancement in exploring ability.

Figure 3: Box plots of results obtained by all algorithms. Here, 1 to 5 in the horizontal axis are indices of MCPSO-PSH, NrGA, CMA-ES, PSO, and DE, respectively (the box has lines at the lower quartile, median, and upper quartile values. The whiskers extend to the most extreme data points not considered outliers. Outliers (denoted by +) are data with values beyond 100 units of interquartile range).

From the above observations, the superiority of MCPSO-PSH over the other peer algorithms can be readily concluded.

5. Multilevel Threshold for Image Segmentation

5.1. Otsu Method

The Otsu multithreshold measure [31] has been popularly employed as a criterion for determining optimal thresholds that provide satisfactory image segmentation results. The traditional Otsu method can be described as follows.

Let the grey levels of a given image with N pixels range over [0, L - 1], and let h(i) denote the number of pixels at the i-th grey level. Then p_i = h(i)/N is the probability of occurrence of grey level i.

Otsu defined the between-class variance as the sum of sigma functions of each class by (22):

σ_B^2(t) = ω_0 (μ_0 - μ_T)^2 + ω_1 (μ_1 - μ_T)^2,  (22)

where μ_T is the mean intensity of the original image. In bilevel thresholding with threshold t, the mean level μ_j of each class j (j = 0, 1) can be calculated as

μ_0 = Σ_{i=0}^{t} i p_i / ω_0,  μ_1 = Σ_{i=t+1}^{L-1} i p_i / ω_1,

where

ω_0 = Σ_{i=0}^{t} p_i,  ω_1 = Σ_{i=t+1}^{L-1} p_i.

The optimal threshold t* can be derived by maximizing the between-class variance function:

t* = argmax_{0 ≤ t ≤ L-1} σ_B^2(t).

Bilevel thresholding based on the between-class variance can be extended to multilevel thresholding. With M thresholds t_1 < t_2 < ... < t_M dividing the grey levels into M + 1 classes, the extended between-class variance function is calculated as

σ_B^2 = Σ_{j=0}^{M} ω_j (μ_j - μ_T)^2,

where M is the number of thresholds, and ω_j and μ_j are the probability and mean level of the j-th class, defined analogously to the bilevel case.

The optimal thresholds are found by maximizing the between-class variance function:

(t_1*, ..., t_M*) = argmax σ_B^2(t_1, ..., t_M).  (38)

Equation (38) is used as the objective function to be optimized (maximized). A close look at this equation shows that it is very similar to the expression for the uniformity measure.
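The multilevel between-class variance used as the objective in (38) can be sketched as follows, assuming the image is summarized by a grey-level histogram hist (the function name is illustrative):

```python
def between_class_variance(hist, thresholds):
    """Multilevel Otsu criterion: sum over classes of w_j * (mu_j - mu_T)^2,
    where the classes are the grey-level ranges delimited by `thresholds`
    (a list of grey levels; threshold t ends the class [.., t])."""
    total = sum(hist)
    probs = [h / total for h in hist]
    mu_t = sum(i * p for i, p in enumerate(probs))          # global mean
    bounds = [0] + [t + 1 for t in sorted(thresholds)] + [len(hist)]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(probs[lo:hi])                               # class probability
        if w == 0:
            continue                                        # empty class
        mu = sum(i * probs[i] for i in range(lo, hi)) / w   # class mean
        var += w * (mu - mu_t) ** 2
    return var
```

An optimizer such as MCPSO-PSH would encode a candidate solution as the threshold vector and use this value as the fitness to be maximized.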

5.2. Experiment Setup

For evaluating the performance of the proposed algorithm (MCPSO-PSH), we applied it to a wide variety of images. These include the popular test images used in previous studies [32, 33], named avion.ppm, lena.ppm, and hunter.pgm (available in [34]), and 41033.jpg, 62096.jpg, and 145086.jpg provided by the Berkeley Segmentation Database (BSDS) (available in [35]). The images avion.ppm, lena.ppm, and hunter.pgm share one image size, and the three BSDS images share another. Each image has a unique grey-level histogram, and most of the images are difficult to segment because of the multimodality of their histograms.

The performance of the proposed algorithm is compared with those of other popular algorithms from the literature, namely, canonical particle swarm optimization (PSO), differential evolution (DE), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), and the nonrevisiting genetic algorithm (NrGA). The comparison is based on the Otsu criterion, which means that (38) is used as the fitness function for all involved algorithms. The parameter settings are identical to those introduced in Section 4.1.

The numbers of thresholds investigated in the experiments were 2, 3, 4, 5, 7, and 9, and all experiments were repeated 30 times for each image. The population size is 100 and the maximum number of FEs is 2000. The smaller numbers of thresholds correspond to low-dimensional problems, which can also be solved by the traditional exhaustive search, albeit at the cost of long CPU times. The larger numbers of thresholds yield high-dimensional image segmentation problems that cannot be solved by the traditional search method in acceptable time.

In all experiments, to ensure that each stochastic algorithm starts from identical initial values, we fixed the seed of MATLAB's random number generator. Candidate thresholds are initialized between the minimum and maximum grey levels of the tested images. Figure 4 presents the six original images and their histograms.
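The fixed-seed initialization described above can be reproduced as follows. This is a sketch in NumPy under our own naming (the paper's exact MATLAB command is not recoverable from the text):

```python
import numpy as np

def init_population(pop_size, m, grey_min, grey_max, seed=0):
    """Initialize pop_size candidate threshold vectors of length m,
    uniformly within [grey_min, grey_max]. Fixing the seed ensures
    every compared algorithm starts from identical initial values."""
    rng = np.random.default_rng(seed)
    return rng.uniform(grey_min, grey_max, size=(pop_size, m))
```

Running this with the same seed before each algorithm yields bitwise-identical starting populations, which is the fairness condition the experiments require.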

Figure 4: Test images and their histograms.
5.3. Experimental Results of Multilevel Threshold

Case I: Multilevel Threshold Results with Small Numbers of Thresholds. We employ (38) as the fitness function to compare performances. Since the classical Otsu method is an exhaustive search algorithm, we regard its results as the "standard." Table 4 shows the fitness function values attained by the Otsu method, together with the mean computation time and the corresponding optimal thresholds. For the larger numbers of thresholds, owing to the unbearable consumption of CPU time, we do not list the corresponding values. These "standard" results are used for comparison with the results attained by the optimization methods.

Table 4: Objective values and thresholds by the Otsu method.

It is well known that an EA-based Otsu method for multilevel thresholding segmentation can only accelerate the computation. The attained results approach the optimal results closely but cannot improve on them. The mean fitness function values and standard deviations obtained by the proposed MCPSO-PSH and the other evolutionary algorithms are shown in Table 5, where larger mean values and smaller standard deviations indicate better results.

Table 5: Objective value and standard deviation by the compared EA-based methods on Otsu algorithm.

According to the mean CPU times shown in Table 6, there is no obvious difference between the EA-based Otsu methods. All the EA-based Otsu methods are suitably efficient and practical in terms of time complexity for high-dimensional image segmentation problems. Therefore, in comparison with the traditional Otsu algorithm, all the EA-based Otsu methods effectively shorten the computation time.

Table 6: The mean CPU time of the compared EA-based methods on Otsu algorithm.

Among the EA-based Otsu methods listed in Table 5, MCPSO-PSH obtains results equal or close to those obtained by the traditional method. Moreover, the standard deviations obtained by MCPSO-PSH are among the smallest of the EA-based Otsu methods. This is attributed to the fact that MCPSO-PSH strikes a good balance between global and local search abilities. Therefore, the MCPSO-PSH-based Otsu method is suitable for low-dimensional segmentation problems.

Case II: Multilevel Threshold Results with Larger Numbers of Thresholds. To further investigate the search performance of each population-based method on high-dimensional segmentation problems, we run these algorithms on image segmentation with larger numbers of thresholds. Due to its unacceptable computation time, the traditional Otsu method is not assessed in this test. The average fitness value and standard deviation obtained by each of the EA-based Otsu methods are listed in Table 7, where larger values and smaller standard deviations indicate better results.

Table 7: Objective value and standard deviation by the compared EA-based methods on Otsu algorithm.

From Table 7, it is obvious that, with the growth in dimension, a statistically significant gap opens among the EA-based Otsu methods, in terms of both efficiency (fitness values) and stability (standard deviation). Owing to the nonrevisit strategy based on the Binary Space Partitioning fitness tree and the dynamic multispecies strategy based on K-means clustering, MCPSO-PSH shows the best performance and stability in high dimensions and is more efficient than the canonical PSO and the other classical EAs for image segmentation with the Otsu method. Moreover, NrGA, which also uses a nonrevisit strategy, obtains more acceptable results than the remaining algorithms, except MCPSO-PSH. With the increase of dimension, the fitness values and standard deviations demonstrate that the performance of the CMA-ES-based Otsu method improves considerably. The results shown in Table 7 indicate that the MCPSO-PSH-based Otsu method is better suited to multilevel image segmentation problems.

As mentioned above, the EA-based Otsu methods adopt different strategies to conduct global and local searches. An elaborate optimizer should maintain a good balance between exploration and exploitation. The above tests show that the MCPSO-PSH-based Otsu method outperforms the other EA-based Otsu methods for multilevel image segmentation, especially on high-dimensional thresholding. Moreover, compared with the traditional Otsu method, it shortens the computation time.

6. Conclusions

In this paper, we have presented a new coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on the Binary Space Partitioning fitness tree (called MCPSO-PSH). By storing and analyzing the positions and fitness values of all evaluated solutions in the Binary Space Partitioning tree archive, MCPSO-PSH prevents its individuals from revisiting already-searched regions and improves the search efficiency. The proposed algorithm adopts the K-means clustering method to dynamically partition the population into clusters, and the number of clusters changes periodically to implement information exchange among the different clusters. The proposed algorithm keeps a good balance between exploration and exploitation and, as a result, showed promising results among the compared algorithms. Finally, we applied the proposed algorithm to the multilevel image segmentation problem, where it achieves favorable results compared with the other methods. The use of the MCPSO-PSH algorithm in other real-world problems merits further research in our future work.
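The dynamic multispecies step summarized above, repartitioning the swarm into species with K-means every few generations, can be sketched as follows. This is a minimal illustration with our own names, using plain Lloyd's K-means in NumPy rather than the paper's exact clustering routine:

```python
import numpy as np

def kmeans_partition(positions, k, iters=20, seed=0):
    """Partition particle positions into k species (clusters) with
    Lloyd's K-means; returns one cluster label per particle."""
    rng = np.random.default_rng(seed)
    # seed the centers with k distinct particles
    centers = positions[rng.choice(len(positions), k, replace=False)]
    for _ in range(iters):
        # assign each particle to its nearest center
        d = np.linalg.norm(positions[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned particles
        for j in range(k):
            pts = positions[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels
```

In the algorithm described here, this repartitioning would be invoked periodically so that the species membership, and hence the range of information exchange, changes as the swarm moves.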

While this research is promising, there is a large margin for improvement in future research:

(1) The number of clusters and the repartitioning cycle are preset in this work. However, according to our preliminary experiments, different values of these two parameters impact the results on several functions, since they determine the range and rate of information exchange among species. To obtain suitable results on most test cases, we preset these two values; how to adaptively adjust them for various functions is to be explored in the next stage.

(2) A nonrevisit strategy based on the Binary Space Partitioning fitness tree is employed to partition the whole search space into subregions according to the history positions of members. According to our preliminary observation of the particles' trajectories, most of the global best positions are located under one parent node or a small patch of leaf nodes in the fitness tree, especially in the later evolution period, so most nodes in the fitness tree lack vitality. Therefore, in future work, we may modify the static fitness tree into a dynamic one with a node-age or lifecycle feature.

(3) We would also like to introduce the proposed dynamic multiswarm and nonrevisit strategies into other swarm-based algorithms. At the same time, applying MCPSO-PSH to more real-world complex problems will also be studied in future work.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the manuscript.

Acknowledgments

This research is partially supported by Liaoning Province Education Administration (L2015360, L2014473, and L2015372) and Tianjin Province Science and Technology Projects (16ZLZDZF00150) and National Natural Science Foundation of China (61603261, 61503373, 61602343, and 51607122).

References

  1. O. Castillo, H. Neyoy, J. Soria, P. Melin, and F. Valdez, “A new approach for dynamic fuzzy logic parameter tuning in ant colony optimization and its application in fuzzy control of a mobile robot,” Applied Soft Computing Journal, vol. 28, pp. 150–159, 2015. View at Publisher · View at Google Scholar · View at Scopus
  2. M. Qiu, Z. Ming, J. Li, K. Gai, and Z. Zong, “Phase-change memory optimization for green cloud with genetic algorithm,” Institute of Electrical and Electronics Engineers. Transactions on Computers, vol. 64, no. 12, pp. 3528–3540, 2015. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  3. S.-M. Guo and C.-C. Yang, “Enhancing differential evolution utilizing eigenvector-based crossover operator,” IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 31–49, 2015. View at Publisher · View at Google Scholar · View at Scopus
  4. A. A. A. Esmin, R. A. Coelho, and S. Matwin, “A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data,” Artificial Intelligence Review, vol. 44, no. 1, pp. 23–45, 2013. View at Publisher · View at Google Scholar · View at Scopus
  5. Z. Zhang, S. Su, Y. Lin, X. Cheng, K. Shuang, and P. Xu, “Adaptive multi-objective artificial immune system based virtual network embedding,” Journal of Network and Computer Applications, vol. 53, pp. 140–155, 2015. View at Publisher · View at Google Scholar · View at Scopus
  6. F. van den Bergh, An Analysis of Particle Swarm Optimizers, University of Pretoria, Pretoria, South Africa, 2001.
  7. F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004. View at Publisher · View at Google Scholar · View at Scopus
  8. X. Li and X. Yao, “Tackling high dimensional nonseparable optimization problems by cooperatively coevolving particle swarms,” in Proceedings of the Congress on Evolutionary Computation, CEC '09, pp. 1546–1553, IEEE, Trondheim, Norway, May 2009. View at Publisher · View at Google Scholar · View at Scopus
  9. C. K. Goh, K. C. Tan, D. S. Liu, and S. C. Chiam, “A competitive and cooperative co-evolutionary approach to multi-objective particle swarm optimization algorithm design,” European Journal of Operational Research, vol. 202, no. 1, pp. 42–54, 2010. View at Publisher · View at Google Scholar · View at Scopus
  10. J. Zhang and X. Ding, “A multi-swarm self-adaptive and cooperative particle swarm optimization,” Engineering Applications of Artificial Intelligence, vol. 24, no. 6, pp. 958–967, 2011. View at Publisher · View at Google Scholar · View at Scopus
  11. M. Hasanzadeh, M. R. Meybodi, and M. M. Ebadzadeh, “Adaptive cooperative particle swarm optimizer,” Applied Intelligence, vol. 39, no. 2, pp. 397–420, 2013. View at Publisher · View at Google Scholar · View at Scopus
  12. Z.-H. Liu, J. Zhang, S.-W. Zhou, X.-H. Li, and K. Liu, “Coevolutionary particle swarm optimization using AIS and its application in multiparameter estimation of PMSM,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 1921–1935, 2013. View at Publisher · View at Google Scholar · View at Scopus
  13. S. X. Yang and C. H. Li, “A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments,” IEEE Transactions on Evolutionary Computation, vol. 14, no. 6, pp. 959–974, 2010. View at Publisher · View at Google Scholar · View at Scopus
  14. C.-F. Juang, C.-M. Hsiao, and C.-H. Hsu, “Hierarchical cluster-based multispecies particle-swarm optimization for fuzzy-system optimization,” IEEE Transactions on Fuzzy Systems, vol. 18, no. 1, pp. 14–26, 2010. View at Publisher · View at Google Scholar · View at Scopus
  15. S.-Z. Zhao, P. N. Suganthan, Q.-K. Pan, and M. F. Tasgetiren, “Dynamic multi-swarm particle swarm optimizer with harmony search,” Expert Systems with Applications, vol. 38, no. 4, pp. 3735–3742, 2011. View at Publisher · View at Google Scholar · View at Scopus
  16. X. Chu, M. Hu, T. Wu, J. Weir, and Q. Lu, “AHPS2: an optimizer using adaptive heterogeneous particle swarms,” Information Sciences, vol. 280, pp. 26–52, 2014. View at Publisher · View at Google Scholar · View at MathSciNet
  17. Q. Qin and S. Cheng, “Particle swarm optimization based semi-supervised learning on Chinese text categorization,” IEEE Transactions on Cybernetics, vol. 46, no. 10, 2016. View at Google Scholar
  18. R. Brits, A. P. Engelbrecht, and F. van den Bergh, “Locating multiple optima using particle swarm optimization,” Applied Mathematics and Computation, vol. 189, no. 2, pp. 1859–1883, 2007. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  19. D. Parrott and X. D. Li, “Locating and tracking multiple dynamic optima by a particle swarm model using speciation,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 4, pp. 440–458, 2006. View at Publisher · View at Google Scholar · View at Scopus
  20. X. Li, “Niching without niching parameters: particle swarm optimization using a ring topology,” IEEE Transactions on Evolutionary Computation, vol. 14, no. 1, pp. 150–169, 2010. View at Publisher · View at Google Scholar · View at Scopus
  21. L. Wang, B. Yang, and Y. Chen, “Improving particle swarm optimization using multi-layer searching strategy,” Information Sciences, vol. 274, pp. 70–94, 2014. View at Publisher · View at Google Scholar · View at Scopus
  22. S. Mukhopadhyay and S. Banerjee, “Global optimization of an optical chaotic system by Chaotic Multi Swarm Particle Swarm Optimization,” Expert Systems with Applications, vol. 39, no. 1, pp. 917–924, 2012. View at Google Scholar
  23. J. Li, J. Zhang, C. Jiang, and M. Zhou, “Composite particle swarm optimizer with historical memory for function optimization,” IEEE Transactions on Cybernetics, vol. 45, no. 10, pp. 2350–2363, 2015. View at Publisher · View at Google Scholar · View at Scopus
  24. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the International Conference on Neural Networks (ICNN ’95), vol. 4, pp. 1942–1948, IEEE, Perth, Western Australia, December 1995. View at Publisher · View at Google Scholar · View at Scopus
  25. J. B. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297, University of California Press, Berkeley, Calif, USA, 1967. View at Google Scholar · View at MathSciNet
  26. S. Y. Yuen and C. K. Chow, “A genetic algorithm that adaptively mutates and never revisits,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 454–472, 2009. View at Publisher · View at Google Scholar · View at Scopus
  27. R. Storn and K. Price, “Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997. View at Publisher · View at Google Scholar · View at MathSciNet
  28. C. Igel, N. Hansen, and S. Roth, “Covariance matrix adaptation for multi-objective optimization,” Evolutionary Computation, vol. 15, no. 1, pp. 1–28, 2007. View at Publisher · View at Google Scholar · View at Scopus
  29. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997. View at Publisher · View at Google Scholar · View at Scopus
  30. P. N. Suganthan, N. Hansen, J. J. Liang et al., “Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization,” Technical Report 2005005, Nanyang Technological University, Singapore and KanGAL, (Kanpur Genetic Algorithms Laboratory, IIT Kanpur), 2005, pp. 1–50. View at Google Scholar
  31. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979. View at Publisher · View at Google Scholar · View at Scopus
  32. S. Ouadfel and A. Taleb-Ahmed, “Social spiders optimization and flower pollination algorithm for multilevel image thresholding: a performance study,” Expert Systems with Applications, vol. 55, pp. 566–584, 2016. View at Publisher · View at Google Scholar
  33. X. Zhao, M. Turk, W. Li, K.-C. Lien, and G. Wang, “A multilevel image thresholding segmentation algorithm based on two-dimensional K–L divergence and modified particle swarm optimization,” Applied Soft Computing Journal, vol. 48, pp. 151–159, 2016. View at Publisher · View at Google Scholar · View at Scopus
  34. http://decsai.ugr.es/cvg/dbimagenes/.
  35. http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/BSDS300/html/dataset/images.html.