Optimization Theory, Methods, and Applications in Engineering 2014
An Effective Hybrid of Bees Algorithm and Differential Evolution Algorithm in Data Clustering
Clustering is one of the most commonly used approaches in data mining and data analysis. One technique that has gained considerable attention in clustering research is k-means clustering, in which n observations are grouped into k clusters. However, obstacles such as the dependence of results on the initial cluster centers and the risk of getting trapped in local optima hinder overall clustering performance. The purpose of this research is to minimize the dissimilarity of all points of a cluster from the gravity center of the cluster with respect to capacity constraints in each cluster, such that each element is allocated to only one cluster. This paper proposes a new hybrid algorithm, based on the cluster center initialization algorithm (CCIA), the bees algorithm (BA), and differential evolution (DE), called CCIA-BADE-K, which aims at finding the best cluster centers. The performance of the proposed algorithm is evaluated on standard datasets. The evaluation results, and the comparison with alternative algorithms in the literature, confirm its superior performance and higher efficiency.
Data clustering is one of the most important knowledge discovery techniques for extracting structure from datasets and is widely used in data mining, machine learning, statistical data analysis, vector quantization, and pattern recognition. The aim of clustering is to partition data into clusters so that the data within each cluster have maximum similarity to each other and maximum dissimilarity to the data in other clusters. Clustering algorithms can be broadly classified into hierarchical, partitioning, model-based, grid-based, and density-based algorithms [1–3].
Hierarchical clustering algorithms divide a dataset into a number of levels of nested partitions. Partitioning algorithms decompose the observations of a dataset into a set of clusters with maximum similarity among intragroup members and minimum similarity among intergroup members. Dissimilarities are evaluated based on attribute values; generally, a distance criterion is used for this analysis.
The k-means algorithm is one of the partitional clustering algorithms and one of the most popular, used in many domains. The k-means algorithm is easy to implement and often practical. However, the results of the k-means algorithm depend considerably on the initial state; in other words, its efficiency highly depends on the initial cluster centers.
The main purpose of the k-means clustering algorithm is to minimize the diversity of all objects in a cluster from their cluster center. The initialization problem of the k-means algorithm has been addressed by heuristic algorithms, but these still risk being trapped in local optima. Therefore, to achieve a better clustering algorithm, a solution is needed for overcoming the problem of getting trapped in local optima.
There are many studies that attempt to overcome this problem. For instance, Niknam and Amiri have proposed a hybrid approach based on combining particle swarm optimization and ant colony optimization with the k-means algorithm for data clustering, and Nguyen and Cios have proposed a combination technique based on a hybrid of k-means, genetic algorithms, and logarithmic regression expectation maximization. Kao et al. have presented a combination algorithm based on a hybrid of particle swarm optimization, Nelder-Mead simplex search, and genetic algorithms. Krishna and Murty proposed an algorithm for cluster analysis called the genetic k-means algorithm. Žalik proposed an approach for clustering without preassigning the number of clusters. Maulik and Bandyopadhyay have introduced a genetic-algorithm-based approach to solve this problem and evaluated its performance on real data; they define a spatial-distance-based mutation as the mutation operator for clustering. Laszlo and Mukherjee have proposed another genetic-algorithm-based approach for k-means clustering that exchanges neighboring cluster centers. Fathian et al. have presented a technique for the clustering problem based on honeybee mating optimization (HBMO) [15–17]. Shelokar et al. have presented an ant colony optimization approach to the clustering problem. Niknam et al. have combined simulated annealing and ant colony optimization to address this problem. Ng and Sung have introduced a technique based on tabu search to find cluster centers [20, 21]. Niknam et al. have also introduced a hybrid approach combining particle swarm optimization and simulated annealing to solve the clustering problem [22, 23].
Bee-inspired algorithms can be classified into two main categories: foraging-based honeybee algorithms and marriage-based honeybee algorithms, each containing many algorithms. The first category includes the artificial bee colony (ABC) algorithm [3, 24, 25], the cooperative artificial bee colony (CABC) algorithm, the parallel artificial bee colony (PABC) algorithm, bee colony optimization (BCO) [28, 29], the bees algorithm (BA), the bee foraging algorithm (BFA), and bee swarm optimization (BSO). Marriage in honeybees optimization (MBO), fast marriage in honeybees optimization (FMBO), and finally modified fast marriage in honeybees optimization (MFMBO) belong to the second category.
One of the foraging-based algorithms is the bees algorithm, a population-based search algorithm developed by Pham et al. in 2006. The algorithm mimics the food foraging behavior of swarms of honeybees (Figure 3). In its basic version, the algorithm performs a kind of neighborhood search combined with random search and can be used for optimization problems.
Differential evolution is an evolutionary algorithm (EA) that has been widely applied to optimization problems, mainly in continuous search spaces. Differential evolution was introduced by Storn and Price in 1995. Global optimization is necessary in fields such as engineering, statistics, and finance, but many practical problems have objective functions that are nonlinear, noisy, noncontinuous, or multidimensional, or that have many local minima and constraints. Such problems are difficult if not impossible to solve analytically; differential evolution can be used to find approximate solutions to them. Differential evolution belongs to the same family of methods as genetic algorithms, evolution strategies, and evolutionary programming. Differential evolution encodes solutions as vectors; each new candidate solution is compared to its parent, and if the candidate is better than its parent, it replaces the parent in the population. Differential evolution can be applied to numerical optimization [37, 38].
In this paper, a hybrid evolutionary technique is used to solve the k-means problem. The proposed algorithm helps the clustering technique escape from being trapped in local optima and takes advantage of the strengths of both component algorithms. Several standard datasets are used for testing the proposed algorithm. To obtain the best cluster centers, the proposed algorithm combines the advantages of BA (bees algorithm) and DE (differential evolution) with a data preprocessing technique called CCIA (cluster center initialization algorithm). Through experiments, the proposed CCIA-BADE-K algorithm has shown that it efficiently selects accurate cluster centers.
The main contribution of this paper is the introduction of a novel combination of evolutionary algorithms, based on the bees algorithm and differential evolution, to address the data clustering problem, hybridized with the CCIA (cluster center initialization algorithm) preprocessing technique.
The rest of this paper is arranged as follows. In Section 2, the data clustering problem is introduced. In Sections 3 and 4, the classic principles of the BA and DE evolutionary algorithms are discussed. In Section 5, the suggested approach is introduced. In Section 6, experimental results of the proposed algorithm are shown and compared with PSO-ANT, SA, ACO, GA, ACO-SA, TS, HBMO, PSO, and k-means on benchmark data, and finally Section 7 presents the concluding remarks.
2. Data Clustering
Clustering is defined as grouping similar objects, either physically or in the abstract. The objects inside one cluster have maximum similarity to each other and maximum diversity from the objects of other clusters.
Definition 1. Let X = {x1, x2, …, xn} be a set of n objects. The purpose of clustering is to group the objects into k clusters C = {C1, C2, …, Ck} while each cluster satisfies the following conditions: (1) Ci ≠ ∅ for i = 1, …, k; (2) Ci ∩ Cj = ∅ for i ≠ j; (3) C1 ∪ C2 ∪ ⋯ ∪ Ck = X.
According to this definition, the number of possible ways of clustering n objects into k clusters is obtained as follows:
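The relation referred to here is, presumably, the standard count of partitions of n objects into k nonempty clusters (the Stirling number of the second kind); since the original equation is missing from the text, it is reconstructed below:

```latex
N(n, k) \;=\; \frac{1}{k!} \sum_{i=0}^{k} (-1)^{i} \binom{k}{i} (k - i)^{n}
```

For example, N(10, 4) = 34105, which already illustrates how quickly the search space grows.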
In most approaches, the cluster number k is specified by an expert. Relation (1) implies that even with a given k, finding the optimum solution for clustering is not simple. Moreover, the number of possible solutions for clustering n objects into k clusters grows exponentially with n. Thus, obtaining the best clustering of n objects into k clusters is an intricate NP-complete problem, which needs to be addressed by optimization approaches.
2.1. The k-Means Algorithm
Many algorithms have been suggested for addressing the clustering problem, and among them the k-means algorithm is one of the most famous and most practical. In this method, besides the input dataset, k samples are introduced into the algorithm as the initial centers of the k clusters. These k representatives are usually taken from the first data samples. The way these representatives are chosen influences the performance of the k-means algorithm. The four stages of this algorithm are shown as follows.
Stage I. Choose k data items randomly from X as the cluster centers z1, …, zk.
Stage II. Based on relation (2), add every data item to the relevant cluster. That is, the object xi from the set X is added to the cluster Cj if relation (2) holds: ‖xi − zj‖ ≤ ‖xi − zt‖ for every t = 1, …, k.
Stage III. Now, based on the clustering of Stage II, the new cluster centers are calculated by using relation (3), that is, zj = (1/nj) Σx∈Cj x, where nj is the number of objects in the cluster Cj.
Stage IV. If the cluster centers have changed, repeat the algorithm from Stage II; otherwise, do the clustering based on the resulting centers.
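As an illustration, the four stages can be sketched in Python. This is a minimal sketch, not the authors' implementation; Euclidean distance and a keep-old-center rule for empty clusters are assumptions:

```python
import math
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Plain k-means over a list of numeric tuples."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]      # Stage I: random centers
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for p in points:                                    # Stage II: nearest center
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        new_centers = [                                     # Stage III: mean of members
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:                          # Stage IV: stop when stable
            break
        centers = new_centers
    return centers, clusters
```

On a toy set of two well-separated pairs of points, the loop settles on one center per pair within a few iterations.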
The performance of the k-means clustering algorithm relies on the initial centers, which is a major challenge for this algorithm. Random selection of the initial cluster centers makes the algorithm yield different results for different runs over the same dataset, which is considered one of its potential disadvantages. The algorithm also has a tendency towards local optima: strong ties between data points and the nearest cluster centers prevent the centers from leaving locally dense regions.
The bees algorithm, first developed by Karaboga and Basturk and by Pham et al. in 2006, is a swarm-based algorithm that searches for solutions independently. The algorithm was inspired by the food foraging behavior of swarms of honeybees. In its classic edition, the algorithm uses neighborhood search combined with random search to solve optimization problems.
2.2. Algorithm for Finding Cluster Initial Centers
In this study, for efficiency purposes, all data objects are first clustered using the k-means algorithm, based on all of their attributes, to find initial cluster centers to be used in the solutions. Based on the generated clusters, a pattern is produced for each object from each attribute at every stage.
Objects with the same patterns are located in one cluster, and hence all objects are clustered. The clusters obtained in this stage will be more numerous than the desired number of clusters; for more information, refer to the original CCIA paper. In this paper, clustering is completed in two stages. The first stage is performed as discussed above, and in the second stage similar clusters are merged with each other until the given number of clusters is reached. Algorithm 1 shows the proposed approach for the initial clustering of data objects; the resulting cluster centers are called seed points.
As can be observed in Algorithm 1, for every attribute of the data objects, a cluster label is generated, and this label is added to the data object's pattern. Objects with identical patterns are placed in one cluster. To produce data object labels based on each attribute, first the mean and standard deviation of that attribute are computed over all data objects. Thereafter, based on the mean and standard deviation, the range of attribute values is broken into k identical intervals, so that a point of each interval serves as an initial cluster center. Then, using these initial centers, all data objects are clustered by the k-means method.
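A simplified sketch of this per-attribute labeling follows, with hypothetical details where the text is not specific (equal-width bands spanning mean ± 2·std, outliers clamped to the end bands):

```python
import statistics

def attribute_patterns(data, k):
    """Label every object per attribute, join the labels into a pattern string,
    and group objects sharing a pattern (CCIA-style seed points)."""
    patterns = ["" for _ in data]
    for a in range(len(data[0])):
        col = [row[a] for row in data]
        mu = statistics.mean(col)
        sigma = statistics.pstdev(col) or 1.0           # guard constant attributes
        lo = mu - 2 * sigma
        width = 4 * sigma / k                           # k equal-width bands
        for i, v in enumerate(col):
            band = min(k - 1, max(0, int((v - lo) / width)))
            patterns[i] += str(band)
    groups = {}
    for i, pat in enumerate(patterns):                  # identical pattern -> one cluster
        groups.setdefault(pat, []).append(i)
    return groups                                       # usually more groups than k
```

Since every attribute contributes a digit, the number of distinct patterns, and hence of initial clusters, typically exceeds k, matching the text; the second merging stage then reduces them to k.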
2.3. Fitness Function
To calculate the fitness of each solution, the distance between the centers of the clusters and each data item is used. To do this, first a set of cluster centers is generated randomly, and then the clustering of the data is conducted based on (2). Then, according to the centers obtained in the current iteration, the new cluster centers and the fitness of the solutions are calculated based on (3).
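The cost being minimized can be written as a one-liner; assuming Euclidean distance, the fitness of a candidate set of centers is the total distance of all points to their nearest center:

```python
import math

def clustering_cost(points, centers):
    """Sum over all points of the distance to the nearest cluster center;
    lower is fitter."""
    return sum(min(math.dist(p, c) for c in centers) for p in points)
```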
3. The Dance Language of Bees
For honeybees, finding nectar is essential to survival. Bees lead others to specific sources of food: returning scout bees identify the visited resources by making movements known as “dancing.” These dances are precise and rapid movements in different directions. Dancers try to convey information about a food resource by specifying the direction, distance, and quality of the visited food source.
3.1. Describing the Dance Language
There are two kinds of dances for observer bees: the “round dance” and the “waggle dance.” When a food resource is less than fifty meters away, bees perform the round dance, and when a food resource is more than fifty meters away, they perform the waggle dance (Figure 1).
(a) Dance floor
(b) Dance languages
There are several concepts encoded in this dance. The angle between the vertical and the waggle run is equal to the angle between the sun and the food resource. The dance “tempo” indicates the distance of the food resource (Figure 2): a slower dance tempo means that a food resource is farther away, and vice versa. Another concept is the duration of the dance: a longer dance duration means that a food resource is richer and better. The audience consists of other bees, which follow the dancer. In this algorithm, there are two kinds of bees: SCOUTS are bees that find new food sources and perform the dance; RECRUITS are bees that follow the scout bees' dance and then forage. One of the first people to decode the meaning of the waggle dance was the Austrian ethologist Karl von Frisch.
The distance between the flowers and the hive is indicated by the duration of the waggle dance: flowers that are farther from the hive correspond to a longer waggle dance. Each hundred meters of distance between the flowers and the hive adds close to 75 milliseconds to the waggle phase of the dance.
3.2. Bee in Nature
A colony of honeybees can extend itself over long distances (more than 10 km) and in multiple directions simultaneously to exploit a large number of food sources. In principle, flower patches with a plentiful amount of nectar or pollen that can be collected with less effort should be visited by more bees, whereas patches with less nectar or pollen should receive fewer bees [35, 47].
The foraging process begins in a colony with the scout bees being sent out to search for promising flower patches. Scout bees move randomly from one patch to another. During the harvest season, a colony continues its exploration, keeping a percentage of the population as scout bees. When the scout bees return to the hive, those that found a patch rated above a certain quality threshold (measured as a combination of some constituents, such as sugar content) deposit their nectar or pollen and go to the “dance floor” to perform a dance known as the “waggle dance.” The waggle dance is essential for colony communication and contains three pieces of information regarding a flower patch: the direction in which it will be found, its distance from the hive, and its quality rating (or fitness). This information helps the colony to send its bees to flower patches precisely, without using guides or maps. After the waggle dance on the dancing floor, the dancers (i.e., the scout bees) go back to the flower patch with follower bees that were waiting inside the hive. More follower bees are sent to the most promising patches [48, 49]. The flowchart of the bees algorithm is shown in Figure 4.
The basic bees algorithm is shown in Algorithm 2.
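Since Algorithm 2 itself is not reproduced here, the following is a generic sketch of the basic bees algorithm as commonly described for Pham et al.'s method; the parameter names (n scouts, m selected sites, e elite sites, nep/nsp recruited bees, ngh neighbourhood radius) and their default values are illustrative, not the paper's tuned settings:

```python
import random

def bees_algorithm(f, bounds, n=20, m=5, e=2, nep=7, nsp=3, ngh=0.1,
                   iters=100, seed=0):
    """Minimise f over a box: local search around the best m sites
    (more bees on the e elite ones) plus global random scouting."""
    rng = random.Random(seed)
    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    def neighbour(x):                                   # flower-patch search
        return [min(hi, max(lo, xi + rng.uniform(-ngh, ngh)))
                for xi, (lo, hi) in zip(x, bounds)]
    pop = [rand_point() for _ in range(n)]
    for _ in range(iters):
        pop.sort(key=f)                                 # best sites first
        new_pop = []
        for i in range(m):
            recruited = nep if i < e else nsp           # more bees for elite sites
            candidates = [pop[i]] + [neighbour(pop[i]) for _ in range(recruited)]
            new_pop.append(min(candidates, key=f))      # keep the best bee of the site
        new_pop += [rand_point() for _ in range(n - m)] # remaining bees scout anew
        pop = new_pop
    return min(pop, key=f)
```

Minimising the 2-D sphere function with these defaults drives the best site close to the origin.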
4. Differential Evolution
Differential evolution is related to standard genetic algorithms. The differential evolution algorithm evaluates the initial population and performs population evolution by using evolutionary operators. The main idea of the differential evolution algorithm is to generate a new solution for each existing solution by using one base member and two random members. In each generation, the best member of the population is selected, and then the difference between each member of the population and the best member is calculated. Two random members are then selected, and the difference between them is calculated. A coefficient of this difference is added to the base member, and thus a new member is created. The cost of each new member is calculated, and if the cost value of the new member is lower, it replaces the corresponding old member; otherwise, the previous value is kept in the next generation.
Differential evolution is a popular population-based algorithm that uses a floating-point (real-coded) representation, where G is the number of the generation (iteration), i refers to the member of the population, and D is the number of optimization parameters. In each generation (each iteration of the algorithm), to perform changes on a member of the population, a donor vector is formed. The various variants of DE differ in how the donor vector is made. The first variant, named DE/rand/1, generates the donor vector for the i-th member by randomly choosing three members of the current generation.
Then, the difference between two of the three selected vectors is calculated, multiplied by the coefficient F, and added to the third vector; the donor vector is thereby obtained. The calculation of the j-th element of the donor vector for the i-th member can be demonstrated as follows:
To increase the exploration of the algorithm, a crossover operation is then performed. The differential evolution algorithm generally has two kinds of crossover: exponential and binomial. In this paper, to save time, the binomial mode has been used. Applying binomial crossover requires that the set of crossover indices is constituted as in Algorithm 3.
Therefore, for each target vector x_i there is a trial vector u_i whose j-th component is taken from the donor vector when rand_j ≤ CR or j = j_rand, and from the target vector otherwise, where rand_j is a uniformly distributed number in [0, 1] and the random index j_rand guarantees that the trial vector differs from the target vector in at least one component. In the next step, the selection process is performed between the target vector and the trial vector, keeping whichever has the smaller value of the objective function f, which is to be minimized. In this paper, to escape from premature convergence, two new merging strategies have been studied. In basic DE, the difference vector is multiplied by F, where F is a control parameter between 0.4 and 1 [55, 57]. To improve the convergence behavior of DE, this paper makes the following proposal, where rand is a uniformly distributed number between zero and one. Generally, the steps of the DE algorithm are as in Algorithm 4.
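A compact DE/rand/1/bin sketch ties this section's pieces together: donor construction, binomial crossover with a forced index j_rand, and greedy selection. The control values F = 0.8 and CR = 0.9 are common defaults from the DE literature, not the paper's tuned settings:

```python
import random

def de_rand_1_bin(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimise f over a box with the classic DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    D = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            donor = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
            j_rand = rng.randrange(D)       # forces at least one donor component
            trial = [donor[d] if (d == j_rand or rng.random() < CR) else pop[i][d]
                     for d in range(D)]
            trial = [min(hi, max(lo, v)) for v, (lo, hi) in zip(trial, bounds)]
            fu = f(trial)
            if fu <= cost[i]:               # greedy selection: trial vs target
                pop[i], cost[i] = trial, fu
    return pop[cost.index(min(cost))]
```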
In Figure 5, the process of differential evolution is illustrated.
5. Proposed Algorithm
As noted in the previous sections, studies conducted on the BA method have shown that this algorithm can be a powerful approach with sufficient performance to handle different types of nonlinear problems in various fields. However, it can possibly get trapped in local optima. Lately, several ideas have been used to reduce this problem by hybridizing BA with different evolutionary techniques, such as particle swarm optimization, genetic algorithms, and simulated annealing. In most population-based evolutionary algorithms, in each iteration new members are generated and then movement operations are applied to explore new positions that provide better opportunities. To increase the diversity of the algorithm, in the differential evolution algorithm all members have a chance to guide the search toward the global optimum. The local search ability of the best particle also depends on the other particles, through selecting two other particles and calculating the difference between them. This situation may lead to local convergence.
In the proposed algorithm, to avoid purely random selection of the global best particle, we use competency-based selection for choosing the global best particle: if a particle is better than the other solutions, its probability of being selected is greater.
The basic idea behind the proposed algorithm is that our solutions are grouped based on the bees’ algorithm.
In addition, in this algorithm a new approach is proposed for the movement and for selecting the recruited bees for the selected sites. The algorithm classifies the sites into three groups, named elite sites, nonelite sites, and nonselected sites. To increase diversity, two modes of movement based on the differential evolution operators, a parallel mode and a serial mode, are used. The suggested algorithm tries to use the advantages of these algorithms to find the best cluster centers and to improve the simulation results. In other words, in this algorithm, first a preprocessing technique is performed on the data, and then the proposed hybrid algorithm is used to find the best cluster centers for the k-means problem.
The flowchart and pseudocode of the combined algorithm, called CCIA-BADE-K, are illustrated in Algorithm 5 and Figure 6.
6. Application of CCIA-BADE-K on Clustering
The application of the CCIA-BADE-K algorithm to the clustering problem is presented in this section. To perform the CCIA-BADE-K algorithm to find the best cluster centers, the following steps should be taken.
Step 1 (generate the seed cluster centers). This is a preprocessing step that finds the seed cluster centers by choosing the best interval for each cluster.
Step 2 (generate the initial bees' population randomly). In other words, generate initial solutions for finding the best cluster centers, where each solution is a set of k cluster centers and each center is a vector with D dimensions: the j-th cluster center of the i-th scout bee, with D the number of dimensions of each cluster center. In fact, each solution in the algorithm is a k × D matrix, and the values of each dimension (each feature of a center) are bounded by its minimum and maximum.
Step 3 (calculate the objective function for each individual). Calculate the cost function for each solution (each site) in this algorithm.
Step 4 (sort the solutions and select scout bees for each group). The sorting of the sites is carried out based on the objective function value.
Step 5 (select the first group of sites). Finding new solutions is performed by selecting a group of sites. There are three groups of sites: the first group, the elite sites, is evaluated to find neighbors of the selected sites, followed by the nonelite sites and finally the nonselected sites. To find the neighbors of each group of sites, either the serial mode or the parallel mode may be used; this algorithm uses the parallel mode.
Step 6 (select the number of bees for each site). The number of bees for each site depends on its group and is assigned by competence: more bees for better sites. If a site is rich, more bees are allocated to it. In other words, a better solution is treated as more important than the other sites.
Step 7 (perform the differential evolution operator (mutation)). In this step, the target site is chosen from the group of sites, and then two other sites from the same group are selected randomly to calculate the weighted difference between them. This difference is added to the base site to form the trial site, where x_target is the target site, F is the weight, x_r1 and x_r2 are the sites selected from the target's group, and u is the trial solution used for comparison purposes.
Step 8 (perform the crossover operation with crossover probability). The recombination step incorporates successful solutions from the previous generation. The trial vector u is developed from the elements of the target vector x and the elements of the donor vector v. Elements of the donor vector enter the trial vector with probability CR, and a random index j_rand, drawn uniformly from {1, …, D}, ensures that u ≠ x.
Step 9 (calculate the cost function for the trial site). In the selection step, the target vector x is compared with the trial vector u. There are two modes for calculating the new site, as follows:
The trial vector u is compared to the target vector x using the greedy criterion: if u is better than x, then x is replaced with u; otherwise, x survives and u is discarded.
Step 10. If not all sites of this group have been processed, go to Step 6 and select another site from this group; otherwise, go to the next step.
Step 11. If not all groups have been processed, go to Step 5 and select the next group; otherwise, go to the next step.
Step 12 (check the termination criteria). If the current number of iterations has not reached the maximum number of iterations, go to Step 4 and start the next generation; otherwise, stop.
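Steps 2 to 12 can be condensed into one loop. The sketch below is a loose reading of the procedure, not the authors' implementation: the group sizes, bee counts, and weight F are invented for illustration, and the serial movement mode is omitted:

```python
import math
import random

def ccia_bade_k(points, k, n_sites=20, elite=3, selected=8,
                bees=(7, 3, 1), F=0.8, iters=50, seed=0):
    """Sites are candidate sets of k centers; each group of sites receives bees
    in proportion to its quality, and each bee applies a DE-style move built
    from two random sites of the same group, accepted greedily."""
    rng = random.Random(seed)
    dim = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dim)]
    hi = [max(p[d] for p in points) for d in range(dim)]
    def cost(site):                                     # relation (4): total distance
        return sum(min(math.dist(p, c) for c in site) for p in points)
    def rand_site():                                    # Step 2: random solution
        return [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(k)]
    sites = [rand_site() for _ in range(n_sites)]
    for _ in range(iters):                              # Step 12: generation loop
        sites.sort(key=cost)                            # Steps 3-4: evaluate and sort
        groups = [sites[:elite], sites[elite:selected], sites[selected:]]
        new_sites = []
        for g, group in enumerate(groups):              # Steps 5-6: groups and bees
            for target in group:
                best = target
                for _ in range(bees[g]):                # Steps 7-9: DE move + greedy
                    a, b = rng.sample(group, 2)
                    trial = [[tc[d] + F * (ac[d] - bc[d]) for d in range(dim)]
                             for tc, ac, bc in zip(target, a, b)]
                    if cost(trial) < cost(best):
                        best = trial
                new_sites.append(best)                  # Steps 10-11: next site/group
        sites = new_sites
    return min(sites, key=cost)
```

Because a site is replaced only when the trial improves its cost, the best cost is non-increasing over generations.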
7. Experimental Results
To evaluate the accuracy and efficiency of the proposed algorithm, experiments have been performed on two artificial datasets and four real-life standard datasets to determine the correctness of the clustering algorithm. The collection includes the Iris, Glass, Wine, and Contraceptive Method Choice (CMC) datasets, chosen from the standard UCI repository.
The suggested algorithm is coded in an appropriate programming language and is run on an i5 computer with a 2.60 GHz microprocessor and 4 GB of main memory. For measuring the performance of the proposed algorithm, the benchmark datasets of Table 1 are used.
The execution results of the proposed algorithm over the selected datasets, as well as comparison figures for the k-means, PSO, and K-NM-PSO algorithms reported in the literature, are tabulated in Table 2. As can easily be seen in Table 2, the suggested algorithm provides superior results relative to the k-means and PSO algorithms. Comparisons on the real-life datasets with several optimization algorithms are also included.
For a better study and analysis of the proposed approach, the execution results of the proposed approach, along with the HBMO, PSO, ACO-SA, PSO-ACO, ACO, PSO-SA, TS, GA, SA, and k-means clustering results reported in the literature, are tabulated in Tables 3–6. It is worth mentioning that those investigated algorithms were implemented with MATLAB 7.1, using a Pentium IV system with a 2.8 GHz CPU and 512 MB of main memory.
The first artificial dataset is described by (n, k, d), where n is the number of instances, k is the number of clusters, and d is the number of dimensions. The instances were drawn from four distinct classes, each group distributed according to a multivariate normal law, where Σ and μ are the covariance matrix and mean vector, respectively. The first artificial dataset is shown in Figure 7(a), and Figure 7(b) illustrates the clustered data after applying the CCIA-BADE-K algorithm.
(a) First artificial dataset
(b) Clustered dataset with cluster centers
The second artificial dataset is likewise described by (n, k, d), where n is the number of instances, k is the number of clusters, and d is the number of dimensions. The instances were drawn from four distinct classes, each group distributed according to a multivariate normal law, where Σ and μ are the covariance matrix and mean vector, respectively. The second artificial dataset is shown in Figure 8, which also shows the clusters after applying the proposed algorithm.
(a) Second artificial dataset
(b) Clustered dataset with cluster centers
In Tables 3–6, the best, worst, and average results are reported over 100 runs. The reported figures represent the sum of the distances of every data item from the center of the cluster to which it belongs, computed using relation (4). As observed in the tables, the proposed algorithm generates acceptable solutions with respect to execution time.
To clarify the issue, the scatterplot (scatter graph) is illustrated in Figure 10. The scatter graph is a kind of mathematical diagram that shows the values of a dataset for two variables using Cartesian coordinates. In this diagram, the data are shown as a set of points; this type of diagram is also known as a scatter diagram or scattergram. It can also be used to display the relation between response variables and control variables when a variable is under the control of the experimenter. One of the strongest aspects of the scatter diagram is its ability to show nonlinear relationships between variables. In Figure 9, the scatter diagram of the Iris dataset is displayed, and in Figure 10 the clustered Iris data are shown on the scatter diagram.
In Table 4, the best, worst, and average results for the Wine dataset are reported over 100 runs. The reported figures represent the sum of the distances of every data item from its cluster center.
In Figure 11, the best cost and the average of the best costs over 100 runs are reported for all datasets. The reported values represent the sum of the distances of every data item from its cluster center, computed using relation (4). Figure 11(a) shows the best cost and mean of the best cost for the Iris dataset, Figure 11(b) for the Wine dataset, Figure 11(c) for the CMC dataset, and finally Figure 11(d) for the Glass dataset.
(a) Best cost and mean of best cost for Iris dataset
(b) Best cost and mean of best cost for Wine dataset
(c) Best cost and mean of best cost for CMC dataset
(d) Best cost and mean of best cost for Glass dataset
According to the results reported in Tables 3 to 6, the proposed method provides the best results over the Iris, CMC, and Wine datasets in comparison with the other mentioned algorithms. According to Table 6, the suggested algorithm also provides more acceptable results than the alternative algorithms over the Glass dataset. This behavior is explained by the fact that, as the number of data objects increases, the efficiency of the alternative algorithms decreases while the advantage of the suggested algorithm becomes more pronounced.
8. Image Segmentation
In Section 7, it was shown that the proposed CCIA-BADE-K algorithm is one of the best methods for data clustering. For further investigation of the performance of the algorithm, it was tested on one standard image and one industrial image. Each digital image in RGB space is formed by three color components: red, green, and blue. Each of these three components alone is a grayscale image, and the numerical value of each pixel lies between 0 and 255. An image histogram is a chart giving the number of pixels in an image at each brightness level. To obtain the histogram of an image, it suffices to scan all pixels of the image and count the number of pixels at each brightness level. The normalized histogram is obtained by dividing each histogram value by the total number of pixels; normalizing causes the histogram values to lie in the interval [0, 1]. Figures 12 and 13 show the image samples used for image segmentation in this paper. In Figure 12, the color, grayscale, and clustered modes of these images are shown, and in Figure 13 the histogram diagrams of these images are shown. These histograms are used in segmenting the images.
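The histogram normalization described above is straightforward; a sketch for 8-bit grayscale values (0-255):

```python
def normalized_histogram(gray_pixels, levels=256):
    """Count pixels per brightness level, then divide by the total pixel count
    so the values lie in [0, 1] and sum to 1."""
    hist = [0] * levels
    for v in gray_pixels:
        hist[v] += 1
    total = len(gray_pixels)
    return [h / total for h in hist]
```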
(a) Color image of raisins
(b) Grayscale image of raisins
(c) Clustered raisins
(d) Color image of Lena
(e) Grayscale image of Lena
(f) Clustered Lena
(a) Histogram of raisins image
(b) Best cluster centers of raisins image histogram
(c) Histogram of Lena image
(d) Best cluster centers of Lena image histogram
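The histogram construction and normalization described above can be sketched in a few lines. This is a generic illustration of the textbook procedure, assuming 8-bit grayscale input; the function name and the toy image are ours.

```python
import numpy as np

def normalized_histogram(gray, levels=256):
    """Count the pixels at each brightness level (0..levels-1), then
    divide each count by the total number of pixels, so that the
    resulting values lie in [0, 1] and sum to 1."""
    hist = np.bincount(gray.ravel(), minlength=levels).astype(float)
    return hist / gray.size

# toy 2x2 grayscale image: two black pixels, one mid-gray, one white
img = np.array([[0, 0], [128, 255]], dtype=np.uint8)
h = normalized_histogram(img)
# h[0] == 0.5, h[128] == 0.25, h[255] == 0.25, and h sums to 1
```

Clustering the brightness levels of such a normalized histogram (rather than the raw pixels) is what makes histogram-based segmentation cheap: the search space has at most 256 values regardless of image size.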
9. Concluding Remarks
In this paper, a new technique based on a combination of the bees algorithm and the differential evolution algorithm with K-means was presented. In the proposed algorithm, the bees algorithm performs the global search and the differential evolution algorithm performs the local search for the K-means problem, whose task is to find the best cluster centers. The proposed CCIA-BADE-K algorithm exploits the abilities of both algorithms: by removing the shortcomings of each algorithm, the strengths of one are used to cover the defects of the other, together with the proposed seed-cluster-center initialization, in order to find the best cluster centers. Experimental results showed that the CCIA-BADE-K algorithm achieves competitive results.
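The division of labor summarized above — bees algorithm for the global search over candidate center sets, differential evolution as the local refinement, K-means cost as the fitness — can be sketched roughly as follows. This is a simplified illustration under assumed parameters (population size, neighborhood radius, DE constants F and CR), not the authors' exact CCIA-BADE-K procedure; in particular, the CCIA seeding step is replaced here by plain random scouting.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(centers, data):
    # K-means objective: total distance of points to their nearest center
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).sum()

def de_step(pop, data, F=0.5, CR=0.9):
    # DE/rand/1/bin refinement: mutate from three random peers,
    # crossover, and keep the trial only if it improves the cost
    n = len(pop)
    for i in range(n):
        idx = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        trial = np.where(rng.random(pop[i].shape) < CR, a + F * (b - c), pop[i])
        if cost(trial, data) < cost(pop[i], data):
            pop[i] = trial
    return pop

def hybrid_cluster(data, k=2, n_bees=10, n_iter=30):
    # scout bees: random candidate center sets drawn from the data range
    lo, hi = data.min(0), data.max(0)
    pop = np.array([rng.uniform(lo, hi, (k, data.shape[1]))
                    for _ in range(n_bees)])
    for _ in range(n_iter):
        pop = pop[np.argsort([cost(p, data) for p in pop])]
        # bees-algorithm global step: the worse half re-scouts randomly,
        # the elite half searches its neighborhood (improvement-only)
        for i in range(n_bees // 2, n_bees):
            pop[i] = rng.uniform(lo, hi, (k, data.shape[1]))
        for i in range(n_bees // 2):
            nb = pop[i] + rng.normal(0.0, 0.1, pop[i].shape)
            if cost(nb, data) < cost(pop[i], data):
                pop[i] = nb
        # DE local step over the whole population
        pop = de_step(pop, data)
    return min(pop, key=lambda p: cost(p, data))
```

Because both the neighborhood search and the DE step accept a move only when it lowers the cost, the best candidate's cost is non-increasing across iterations, which mirrors the monotone best-cost curves reported in Figure 11.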
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to express their cordial thanks to the Ministry of Education (MoE), Universiti Teknologi Malaysia (UTM), for the Research University Grant no. Q.J130000.2528.06H90. The authors are also grateful to Soft Computing Research Group (SCRG) for their support and incisive comments in making this study a success.
References
G. Gan, C. Ma, and J. Wu, Data Clustering: Theory, Algorithms, and Applications, vol. 20, SIAM, 2007.
J. Han, M. Kamber, and J. Pei, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2006.
D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
E. Alpaydin, Introduction to Machine Learning, MIT Press, 2004.
S. Bandyopadhyay and U. Maulik, “An evolutionary technique based on K-means algorithm for optimal clustering in ,” Information Sciences, vol. 146, no. 1–4, pp. 221–237, 2002.
S. S. Khan and A. Ahmad, “Cluster center initialization algorithm for K-means clustering,” Pattern Recognition Letters, vol. 25, no. 11, pp. 1293–1302, 2004.
G. Hamerly and C. Elkan, “Alternatives to the k-means algorithm that find better clusterings,” in Proceedings of the 11th International Conference on Information and Knowledge Management (CIKM '02), pp. 600–607, McLean, Va, USA, November 2002.
T. Niknam and B. Amiri, “An efficient hybrid approach based on PSO, ACO and k-means for cluster analysis,” Applied Soft Computing Journal, vol. 10, no. 1, pp. 183–197, 2010.
C. D. Nguyen and K. J. Cios, “GAKREM: a novel hybrid clustering algorithm,” Information Sciences, vol. 178, no. 22, pp. 4205–4227, 2008.
Y.-T. Kao, E. Zahara, and I.-W. Kao, “A hybridized approach to data clustering,” Expert Systems with Applications, vol. 34, no. 3, pp. 1754–1762, 2008.
K. Krishna and M. N. Murty, “Genetic K-means algorithm,” IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, vol. 29, no. 3, pp. 433–439, 1999.
K. R. Žalik, “An efficient k′-means clustering algorithm,” Pattern Recognition Letters, vol. 29, no. 9, pp. 1385–1391, 2008.
U. Maulik and S. Bandyopadhyay, “Genetic algorithm-based clustering technique,” Pattern Recognition, vol. 33, no. 9, pp. 1455–1465, 2000.
M. Laszlo and S. Mukherjee, “A genetic algorithm that exchanges neighboring centers for k-means clustering,” Pattern Recognition Letters, vol. 28, no. 16, pp. 2359–2366, 2007.
M. Fathian and B. Amiri, “A honeybee-mating approach for cluster analysis,” The International Journal of Advanced Manufacturing Technology, vol. 38, no. 7-8, pp. 809–821, 2008.
A. Afshar, O. Bozorg Haddad, M. A. Mariño, and B. J. Adams, “Honey-bee mating optimization (HBMO) algorithm for optimal reservoir operation,” Journal of the Franklin Institute, vol. 344, no. 5, pp. 452–462, 2007.
M. Fathian, B. Amiri, and A. Maroosi, “Application of honey-bee mating optimization algorithm on clustering,” Applied Mathematics and Computation, vol. 190, no. 2, pp. 1502–1513, 2007.
P. S. Shelokar, V. K. Jayaraman, and B. D. Kulkarni, “An ant colony approach for clustering,” Analytica Chimica Acta, vol. 509, no. 2, pp. 187–195, 2004.
T. Niknam, B. B. Firouzi, and M. Nayeripour, “An efficient hybrid evolutionary algorithm for cluster analysis,” World Applied Sciences Journal, vol. 4, no. 2, pp. 300–307, 2008.
M. K. Ng and J. C. Wong, “Clustering categorical data sets using tabu search techniques,” Pattern Recognition, vol. 35, no. 12, pp. 2783–2790, 2002.
C. S. Sung and H. W. Jin, “A tabu-search-based heuristic for clustering,” Pattern Recognition, vol. 33, no. 5, pp. 849–858, 2000.
T. Niknam, B. Amiri, J. Olamaei, and A. Arefi, “An efficient hybrid evolutionary optimization algorithm based on PSO and SA for clustering,” Journal of Zhejiang University SCIENCE A, vol. 10, no. 4, pp. 512–519, 2009.
T. Niknam, “An efficient hybrid evolutionary algorithm based on PSO and HBMO algorithms for multi-objective Distribution Feeder Reconfiguration,” Energy Conversion and Management, vol. 50, no. 8, pp. 2074–2082, 2009.
D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing Journal, vol. 8, no. 1, pp. 687–697, 2008.
D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. tr06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
W. Zou, Y. Zhu, H. Chen, and X. Sui, “A clustering approach using cooperative artificial bee colony algorithm,” Discrete Dynamics in Nature and Society, vol. 2010, Article ID 459796, 16 pages, 2010.
H. Narasimhan, “Parallel artificial bee colony (PABC) algorithm,” in Proceedings of the World Congress on Nature & Biologically Inspired Computing (NABIC '09), pp. 306–311, IEEE, Coimbatore, India, December 2009.
D. Teodorović, “Bee colony optimization (BCO),” in Innovations in Swarm Intelligence, C. Lim, L. Jain, and S. Dehuri, Eds., vol. 248, pp. 39–60, Springer, Berlin, Germany, 2009.
D. Teodorovic, P. Lucic, G. Markovic, and M. Dell' Orco, “Bee colony optimization: principles and applications,” in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering (NEUREL '06), pp. 151–156, 2006.
D. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, “The bees algorithm—a novel tool for complex optimisation problems,” in Proceedings of the 2nd Virtual International Conference on Intelligent Production Machines and Systems (IPROMS '06), pp. 454–459, 2006.
R. Akbari, A. Mohammadi, and K. Ziarati, “A novel bee swarm optimization algorithm for numerical function optimization,” Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 10, pp. 3142–3155, 2010.
H. Drias, S. Sadeg, and S. Yahi, “Cooperative bees swarm for solving the maximum weighted satisfiability problem,” in Computational Intelligence and Bioinspired Systems, J. Cabestany, A. Prieto, and F. Sandoval, Eds., vol. 3512 of Lecture Notes in Computer Science, pp. 318–325, Springer, Berlin, Germany, 2005.
C. Yang, J. Chen, and X. Tu, “Algorithm of fast marriage in honey bees optimization and convergence analysis,” in Proceedings of the IEEE International Conference on Automation and Logistics (ICAL '07), pp. 1794–1799, Jinan, China, August 2007.
M. T. Vakil-Baghmisheh and M. Salim, “A modified fast marriage in honey bee optimization algorithm,” in Proceedings of the 5th International Symposium on Telecommunications (IST '10), pp. 950–955, December 2010.
R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
R. Storn and K. Price, Differential Evolution—A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces, ICSI, Berkeley, Calif, USA, 1995.
V. Feoktistov, “Differential evolution,” in Differential Evolution, vol. 5, pp. 1–24, Springer, New York, NY, USA, 2006.
K. V. Price, R. M. Storn, and J. A. Lampinen, “The differential evolution algorithm,” in Differential Evolution, pp. 37–134, Springer, Berlin, Germany, 2005.
A. K. Jain, M. N. Murty, and P. J. Flynn, “Data clustering: a review,” ACM Computing Surveys, vol. 31, no. 3, pp. 264–323, 1999.
R. J. Kuo, E. Suryani, and A. Yasid, “Automatic clustering combining differential evolution algorithm and k-means algorithm,” in Proceedings of the Institute of Industrial Engineers Asian Conference 2013, Y.-K. Lin, Y.-C. Tsao, and S.-W. Lin, Eds., pp. 1207–1215, Springer, Singapore, 2013.
W. Kwedlo, “A clustering method combining differential evolution with the K-means algorithm,” Pattern Recognition Letters, vol. 32, no. 12, pp. 1613–1621, 2011.
Y.-J. Wang, J.-S. Zhang, and G.-Y. Zhang, “A dynamic clustering based differential evolution algorithm for global optimization,” European Journal of Operational Research, vol. 183, no. 1, pp. 56–73, 2007.
M. Babrdelbonab, S. Z. M. H. M. Hashim, and N. E. N. Bazin, “Data analysis by combining the modified k-means and imperialist competitive algorithm,” Jurnal Teknologi, vol. 70, no. 5, 2014.
P. Berkhin, “A survey of clustering data mining techniques,” in Grouping Multidimensional Data, J. Kogan, C. Nicholas, and M. Teboulle, Eds., pp. 25–71, Springer, Berlin, Germany, 2006.
J. R. Riley, U. Greggers, A. D. Smith, D. R. Reynolds, and R. Menzel, “The flight paths of honeybees recruited by the waggle dance,” Nature, vol. 435, no. 7039, pp. 205–207, 2005.
C. Grüter, M. S. Balbuena, and W. M. Farina, “Informational conflicts created by the waggle dance,” Proceedings of the Royal Society B: Biological Sciences, vol. 275, no. 1640, pp. 1321–1327, 2008.
A. Dornhaus and L. Chittka, “Why do honey bees dance?” Behavioral Ecology and Sociobiology, vol. 55, no. 4, pp. 395–401, 2004.
K. O. Jones and A. Bouffet, “Comparison of bees algorithm, ant colony optimisation and particle swarm optimisation for PID controller tuning,” in Proceedings of the 9th International Conference on Computer Systems and Technologies and Workshop for PhD Students in Computing (CompSysTech '08), Gabrovo, Bulgaria, June 2008.
D. T. Pham and M. Kalyoncu, “Optimisation of a fuzzy logic controller for a flexible single-link robot arm using the Bees Algorithm,” in Proceedings of the 7th IEEE International Conference on Industrial Informatics (INDIN '09), pp. 475–480, Cardiff, Wales, June 2009.
D. T. Pham, S. Otri, A. Ghanbarzadeh, and E. Koc, “Application of the bees algorithm to the training of learning vector quantisation networks for control chart pattern recognition,” in Proceedings of the 2nd Information and Communication Technologies (ICTTA '06), vol. 1, pp. 1624–1629, Damascus, Syria, 2006.
L. Özbakir, A. Baykasoğlu, and P. Tapkan, “Bees algorithm for generalized assignment problem,” Applied Mathematics and Computation, vol. 215, no. 11, pp. 3782–3795, 2010.
P. Rocca, G. Oliveri, and A. Massa, “Differential evolution as applied to electromagnetics,” IEEE Antennas and Propagation Magazine, vol. 53, no. 1, pp. 38–49, 2011.
R. Mallipeddi, P. N. Suganthan, Q. K. Pan, and M. F. Tasgetiren, “Differential evolution algorithm with ensemble of parameters and mutation strategies,” Applied Soft Computing Journal, vol. 11, no. 2, pp. 1679–1696, 2011.
R. Storn, “On the usage of differential evolution for function optimization,” in Proceedings of the Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS '96), pp. 519–523, June 1996.
U. K. Chakraborty, Advances in Differential Evolution, Springer, Berlin, Germany, 2008.
G. Liu, Y. Li, X. Nie, and H. Zheng, “A novel clustering-based differential evolution with 2 multi-parent crossovers for global optimization,” Applied Soft Computing Journal, vol. 12, no. 2, pp. 663–681, 2012.
Z. Cai, W. Gong, C. X. Ling, and H. Zhang, “A clustering-based differential evolution for global optimization,” Applied Soft Computing Journal, vol. 11, no. 1, pp. 1363–1379, 2011.
M. Abbasgholipour, M. Omid, A. Keyhani, and S. S. Mohtasebi, “Color image segmentation with genetic algorithm in a raisin sorting system based on machine vision in variable conditions,” Expert Systems with Applications, vol. 38, no. 4, pp. 3671–3678, 2011.