Computational Intelligence and Metaheuristic Algorithms with Applications
S. Salcedo-Sanz, J. Del Ser, Z. W. Geem, "An Island Grouping Genetic Algorithm for Fuzzy Partitioning Problems", The Scientific World Journal, vol. 2014, Article ID 916371, 15 pages, 2014. https://doi.org/10.1155/2014/916371
An Island Grouping Genetic Algorithm for Fuzzy Partitioning Problems
Abstract
This paper presents a novel fuzzy clustering technique based on grouping genetic algorithms (GGAs), a class of evolutionary algorithms especially modified to tackle grouping problems. Our approach hinges on a GGA devised for fuzzy clustering by means of a novel encoding of individuals (containing element and group sections), a new fitness function (a modification of the Davies-Bouldin index), specially tailored crossover and mutation operators, and the use of a scheme based on a local search and a parallelization process, inspired by an island-based model of evolution. The overall performance of our approach has been assessed over a number of synthetic and real fuzzy clustering problems with different objective functions and distance measures, from which it is concluded that the proposed approach shows excellent performance in all cases.
1. Introduction
Clustering (also known as partitioning) is an important subgroup of unsupervised learning techniques which hinges on grouping data objects into disjoint ("crisp") clusters [1–3]. A large number of key problems in science, engineering, and economics (e.g., bioengineering, telecommunications, energy, and risk assessment) can be formulated as clustering problems [4–8]. In this context, an important line of research related to clustering stems from the fact that, in some problems, the clusters intrinsically overlap with each other and, consequently, conventional crisp clustering algorithms are not suitable for dealing with this overlap [9, 10]. In these cases, when an object can "partially" belong to different groups, fuzzy clustering algorithms have been proposed in recent years as a powerful methodology, more flexible than traditional crisp approaches and with excellent results in different real problems [11, 12].
To be specific, "fuzzy clustering" is the class of clustering problems where the boundary between clusters is ill-defined, in the sense that a given sample is allowed to belong to different clusters. As such, the notion of fuzziness becomes relevant, since any object of the data set is assigned to a given cluster with some membership grade, usually set between 0 and 1 (low and high membership grades, resp.). Formally, if $X=\{x_1,\ldots,x_N\}$ is a set of data vectors in a given observation space, the goal of a fuzzy clustering algorithm is to find a partition of $X$ in a finite number $k$ of clusters, so that a data vector $x_j$ can belong to a cluster $i$ with a degree of membership $u_{ij}$. This is equivalent to finding a "partition matrix" $U=[u_{ij}]$ whose elements (with $1\le i\le k$ and $1\le j\le N$) fulfill

$$u_{ij}\in[0,1],\qquad \sum_{i=1}^{k}u_{ij}=1\;\;\forall j,\qquad 0<\sum_{j=1}^{N}u_{ij}<N\;\;\forall i.$$
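The partition-matrix constraints above can be checked mechanically. The following sketch (the helper name is ours, and NumPy is assumed) validates a candidate matrix $U$ of size $k\times N$:

```python
import numpy as np

def is_valid_fuzzy_partition(U, atol=1e-9):
    """Check the standard fuzzy-partition constraints on U (k clusters x N objects):
    every membership u_ij lies in [0, 1], each object's memberships sum to 1,
    and no cluster is empty or absorbs the whole data set crisply."""
    U = np.asarray(U, dtype=float)
    k, N = U.shape
    in_range = np.all((U >= -atol) & (U <= 1 + atol))
    columns_sum_to_one = np.allclose(U.sum(axis=0), 1.0, atol=atol)
    cluster_mass = U.sum(axis=1)
    nonempty = np.all((cluster_mass > 0) & (cluster_mass < N))
    return bool(in_range and columns_sum_to_one and nonempty)
```

A crisp partition is a special case (all memberships 0 or 1), so the same check covers both settings.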
Among the different techniques applied to fuzzy clustering that can be found in the literature, we focus on those based on the fuzzy C-means (FCM) algorithm [13], kernel methods [14, 15], statistical methods [16], clonal selection theory [17], rule-based clustering [18–20], and many different heuristic and metaheuristic approaches [21–25]. Metaheuristic algorithms have been thoroughly applied to fuzzy clustering in recent years due to their robustness and convergence to near-optimal solutions at a moderate computational cost. Many of these approaches are based on evolutionary variants of the C-means algorithm [26, 27] or on direct fuzzy clustering algorithms based on genetic and evolutionary approaches [28–31], multiobjective algorithms [32], differential evolution [33], particle swarm metaheuristics [34], or evolutionary programming approaches [35].
However, despite the research activity invested in different metaheuristic approaches applied to fuzzy clustering, several avant-garde algorithms have not yet been fully explored for fuzzy clustering problems. Specifically, this paper proposes a grouping genetic algorithm for fuzzy clustering. The grouping genetic algorithm (GGA) [36, 37] is a class of evolutionary algorithms whose encoding procedure is especially designed to deal with grouping problems. It has been successfully applied to a variety of problems involving the grouping of items but, surprisingly, its performance has not yet been assessed on fuzzy clustering problems. For this purpose, this paper builds upon preliminary work in [38] by presenting a novel grouping encoding, a modified objective function, and crossover and mutation operators specifically adapted to fuzzy clustering problems tackled via GGA heuristics. In order to further enhance the performance of the grouping genetic approach, the proposed scheme also incorporates a local search stage and a parallelization of the GGA using the well-known island model, which can both be considered additional novel ingredients with respect to [38]. Simulation results are presented to assess the performance of the proposed scheme in a number of application scenarios, based on which it is concluded that the GGA-based procedure presented here outperforms conventional fuzzy C-means methods.
The rest of this paper is structured as follows. To keep the paper self-contained, Section 2 summarizes some key mathematical concepts needed to define clustering algorithms, such as different definitions of distances and objective functions. Section 3 presents the proposed GGA to tackle fuzzy clustering problems, along with a full description of its novel encoding, objective function, operators, local search, and parallelization approach. Section 4 discusses the performance of the proposed approach in a variety of synthetic and real problems. Finally, Section 5 completes the paper with some concluding remarks.
2. Background: Fuzzy Clustering Concepts
The classification of objects into clusters aims at grouping those that are similar. The extent to which two objects are similar to each other must be quantified by using an appropriate distance measure. In this regard, Section 2.1 discusses some different definitions for distances in fuzzy clustering. The second key concept, strongly related to the first one and outlined in Section 2.2, aims at evaluating the quality of a candidate solution under test in a fuzzy clustering problem and plays a key role in the GGA described in Section 3.
2.1. Distances in Fuzzy Clustering
The adequate definition of the aforementioned distances plays a central role in fuzzy clustering. For instance, a norm based on the Mahalanobis distance can be used, with a definition similar to that of the crisp clustering case, although the covariance matrix of each cluster is computed slightly differently: the fuzzy covariance matrix of cluster $i$ is given by

$$F_i=\frac{\sum_{j=1}^{N}u_{ij}^{m}\,(x_j-v_i)(x_j-v_i)^{T}}{\sum_{j=1}^{N}u_{ij}^{m}}.$$

An alternative to the Mahalanobis distance, more suitable for fuzzy clustering, is the Gustafson-Kessel (GK) distance [39], defined as

$$d^{2}_{GK}(x_j,v_i)=(x_j-v_i)^{T}\left[\rho_i\,\det(F_i)^{1/n}F_i^{-1}\right](x_j-v_i),$$

which allows for the consideration of elliptic clusters with different orientations. However, this distance is not able to distinguish between different cluster sizes. To circumvent this drawback, a modification of this distance was proposed in [39] in the context of the adaptive fuzzy clustering (AFC) algorithm presented therein, in which a novel adaptive term, associated with the smallest eigenvalue of the $i$th cluster's covariance matrix, scales the distance between an observation and $v_i$, the centroid of those objects belonging to cluster $i$. By using this definition, any clustering algorithm has the chance of locating clusters with different orientations and also with different volumes.
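As an illustration of the definitions above, the sketch below (function names are ours) computes the fuzzy covariance matrix of a cluster and the GK squared distance under the standard formulation; the scaling constant `rho` is assumed to default to 1:

```python
import numpy as np

def fuzzy_covariance(X, v, u, m=2.0):
    """Fuzzy covariance of one cluster: membership-weighted outer products,
    F_i = sum_j u_ij^m (x_j - v_i)(x_j - v_i)^T / sum_j u_ij^m."""
    D = X - v                                     # N x n deviations from centroid
    w = u ** m                                    # fuzzified membership weights
    outer = np.einsum('jk,jl->jkl', D, D)         # per-object outer products
    return (w[:, None, None] * outer).sum(axis=0) / w.sum()

def gk_distance_sq(x, v, F, rho=1.0):
    """GK squared distance: (x-v)^T [rho * det(F)^(1/n) * F^{-1}] (x-v)."""
    n = len(v)
    M = rho * np.linalg.det(F) ** (1.0 / n) * np.linalg.inv(F)
    d = x - v
    return float(d @ M @ d)
```

When `F` is the identity, the GK distance reduces to the squared Euclidean distance, which is a convenient sanity check.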
Using a proper definition of distance plays a key role when evaluating how accurately an algorithm solves the problem at hand. Exploring different functions for fuzzy clustering evaluation is thus the goal of the following section.
2.2. Fuzzy Clustering Evaluation
The evaluation of a given solution in a fuzzy clustering problem can be carried out using two different strategies. First, it is possible to directly evaluate the fuzzy clusters produced by the algorithm at hand by using the membership functions of the different observations of the problem. A second strategy consists of using a defuzzification process, prior to the clustering evaluation, followed by the application of any of the crisp clustering evaluation measures described below. As in the crisp clustering case, evaluation measures can be unsupervised or supervised. In the first case, direct evaluation is usually applied, whereas in the second one a defuzzification is often required, since existing labeled data are usually crisp.
2.2.1. Unsupervised Evaluation
For comparison purposes with the objective (fitness) function later proposed for evaluating the performance of the algorithm, we summarize herein some of the most widely used unsupervised measures in the related literature.

(i) Fuzzy sum of quadratic errors (fSSE):

$$\mathrm{fSSE}=\sum_{i=1}^{k}\sum_{j=1}^{N}u_{ij}^{m}\,\|x_j-v_i\|^{2},$$

where $m>1$ controls the fuzziness degree of the solution; that is, values of $m$ close to 1 lead the solution to a disjoint partition, whereas large values of $m$ lead to fuzzier clusters. Usually a value of $m=2$ is selected.

(ii) Xie-Beni index (XB): defined in [40], this measure combines the sum of square errors with a term measuring cluster separation:

$$\mathrm{XB}=\frac{\sum_{i=1}^{k}\sum_{j=1}^{N}u_{ij}^{m}\,\|x_j-v_i\|^{2}}{N\,\min_{i\neq l}\|v_i-v_l\|^{2}}.$$

(iii) Fukuyama-Sugeno index (FS): the FS index [41] is similar to the XB index but, in this case, the separation between clusters is evaluated with respect to the average centroid of the clusters, $\bar{v}$, instead of the centroids of the other clusters. Based on this rationale we obtain

$$\mathrm{FS}=\sum_{i=1}^{k}\sum_{j=1}^{N}u_{ij}^{m}\left(\|x_j-v_i\|^{2}-\|v_i-\bar{v}\|^{2}\right).$$

The aforementioned unsupervised measures are useful in those problems in which there is no additional information with which to check the quality of the generated clusters. However, there are some clustering problems in which such information is indeed available, hence allowing for supervised measures.
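As a concrete example of these measures, a minimal NumPy sketch of the Xie-Beni index, following the standard formulation summarized above (the function name is ours):

```python
import numpy as np

def xie_beni(X, V, U, m=2.0):
    """Xie-Beni index: fuzzy within-cluster scatter divided by N times the
    minimum squared distance between any two centroids. X is N x n, V is
    k x n (centroids), U is k x N (memberships)."""
    N = X.shape[0]
    # squared distance from every object to every centroid (N x k)
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    compactness = (U.T ** m * d2).sum()
    separation = min(((V[i] - V[j]) ** 2).sum()
                     for i in range(len(V)) for j in range(len(V)) if i != j)
    return compactness / (N * separation)
```

Lower values indicate compact, well-separated partitions; a perfectly crisp partition with objects sitting exactly on their centroids yields 0.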
2.2.2. Supervised Measure Used: The Rand Index
Among the supervised measures—sometimes called external measures—in this work the well-known Rand index [42] is utilized after defuzzification of the samples. It computes the similarity between the obtained partition and the known optimal solution as follows:

$$R=\frac{a+c}{a+b+c+d},$$

where $a$ and $b$ are the numbers of correct and incorrect assignments, respectively, when the decision consists of assigning two elements to the same cluster, and $c$ and $d$ are the numbers of correct and incorrect assignments, respectively, when the decision consists of assigning two elements to different clusters. In other words, it is a measure of the percentage of correct decisions taken by the algorithm. Note that the value of $R$ lies in the interval $[0,1]$: values of $R$ closer to 1 indicate a better quality of the tested solution.
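The Rand index can be computed directly by enumerating object pairs and counting agreements with the reference labeling; a small sketch (the helper name is ours):

```python
def rand_index(labels_true, labels_pred):
    """Rand index over all object pairs: the fraction of pairwise
    same-cluster / different-cluster decisions that agree with the reference."""
    n = len(labels_true)
    agree, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            same_true = labels_true[i] == labels_true[j]
            same_pred = labels_pred[i] == labels_pred[j]
            agree += (same_true == same_pred)  # correct pairwise decision
            total += 1
    return agree / total
```

Note that the index is invariant under relabeling of the clusters, since it only compares pairwise co-membership decisions.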
3. Proposed Grouping Genetic Algorithm for Fuzzy Clustering
As mentioned in Section 1, the grouping genetic algorithm is a class of evolutionary algorithms whose encoding strategy is especially designed to tackle grouping problems. It was first proposed by Falkenauer [36, 37], who realized that traditional genetic algorithms had difficulties when applied to grouping problems. In the GGA, the encoding procedure and the crossover and mutation operators of traditional GAs are modified to yield a compact algorithm with improved performance on grouping-based problems. In light of their outperforming behavior with respect to their traditional counterparts, grouping genetic algorithms have so far been successfully applied to diverse problems [43–51], including crisp clustering [52]. This paper joins the upsurge of research gravitating around GGAs by adapting this heuristic to fuzzy clustering problems. This section discusses several modifications we have devised towards further enhancing the performance of GGAs in fuzzy clustering, including our modifications of the encoding process, the objective function, and the crossover and mutation operators (Sections 3.1, 3.2, 3.4, and 3.5, resp.).
3.1. Problem Encoding
The proposed GGA for fuzzy clustering is a variable-length genetic algorithm, with a novel encoding devised to deal with this specific problem. The encoding is carried out by splitting each chromosome of the algorithm (or, equivalently, its corresponding individual or candidate solution) into two parts: $c=[U\,|\,g]$. The first part is the element section, composed of the partition matrix $U=[u_{ij}]$, whereas the second part, $g=[g_1,\ldots,g_k]$, is denoted as the group section of the individual. In this notation, for a fuzzy clustering problem with $N$ elements (objects or observations) and $k$ clusters, each element $u_{ij}$ represents the degree of membership of the $j$th observation to the $i$th cluster, whereas the group section keeps a list of tags associated with each of the $k$ clusters of the solution. Also observe that, in this encoding, both the group and the element sections have a variable length, since the number of clusters $k$ is also a variable of the problem. For the sake of clarity, let us assume the following individual:
This chromosome encodes an individual (candidate solution) for a simple clustering problem with 15 objects, $x_1,\ldots,x_{15}$. Note that the group section encodes a solution with 4 clusters, labeled "1," "2," "3," and "4," respectively. Each column of the element section indicates to what extent object $x_j$ belongs to cluster $i$, that is, the partition matrix element $u_{ij}$. For instance, the first column of the element section encodes a candidate fuzzy solution in which object $x_1$ belongs to two clusters with complementary degrees of membership. Keeping this in mind, the aforementioned chromosome represents a solution with 4 clusters in which nine observations belong to a single cluster, four observations belong to two different clusters with different degrees of membership, and, finally, two observations belong to three different clusters.
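The two-section encoding can be sketched as a small data structure; the class below is illustrative only (names are ours), storing the element section as a $k\times N$ membership matrix and the group section as a list of cluster labels:

```python
import numpy as np

class Chromosome:
    """Variable-length GGA individual: an element section (partition matrix U,
    one row per cluster) plus a group section listing the cluster labels."""

    def __init__(self, U):
        self.U = np.asarray(U, dtype=float)              # k x N memberships
        self.groups = list(range(1, self.U.shape[0] + 1))  # labels "1".."k"

    def n_clusters(self):
        return len(self.groups)

    def membership_of(self, j):
        """Degrees of membership of observation j to every cluster."""
        return self.U[:, j]
```

Because the number of clusters is itself evolved, both sections grow or shrink together whenever a mutation splits or merges clusters.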
3.2. Objective Function
The proposed GGA will be run with different objective (fitness) functions to lead the search. Specifically, and for comparative purposes, we will use some of the classical objective functions for fuzzy clustering summarized in Section 2.2. In addition, in this paper we propose an adaptation of the well-known Davies-Bouldin index (used in crisp clustering problems) to the fuzzy case which, to the best of our knowledge, is novel in fuzzy clustering. We will show that the use of this modified index renders better results for the GGA than the other existing evaluation indices. The idea of the Davies-Bouldin index [53] for crisp clustering problems is to minimize the intracluster distances while simultaneously maximizing the distances among the different clusters, yielding

$$\mathrm{DB}=\frac{1}{k}\sum_{i=1}^{k}\max_{l\neq i}\left(\frac{s_i+s_l}{d(v_i,v_l)}\right),$$

where $s_i$ denotes the average intracluster distance of cluster $i$ and $d(v_i,v_l)$ the distance between centroids. Note that small values of the conventional DB index correspond to compact and well-separated clusters. The adaptation of the DB index to fuzzy clustering proposed in this work (MDB) computes these terms from the fuzzy memberships, where $v_i$ stands for the centroid associated with cluster $i$, calculated as the average of the observations weighted by their degrees of membership to cluster $i$. Note that the proposed MDB index explicitly depends on the particular definition considered for the distance. For example, if we consider the GK distance, based on the covariance matrices of the clusters, we obtain the corresponding DB index for fuzzy clustering problems.
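The fuzzy-weighted variant can be sketched as follows. This is an approximation in the spirit of the MDB described above, with membership-weighted centroids and scatters and Euclidean centroid distances; the paper's exact MDB expression may differ in its precise weighting:

```python
import numpy as np

def fuzzy_db_index(X, U, m=2.0):
    """Davies-Bouldin-style index with fuzzy-weighted centroids and scatters.
    X is N x n, U is k x N; lower values mean compact, well-separated clusters."""
    k, N = U.shape
    W = U ** m
    V = (W @ X) / W.sum(axis=1, keepdims=True)   # membership-weighted centroids
    # fuzzy intra-cluster scatter of each cluster
    s = np.array([np.sqrt((W[i] * ((X - V[i]) ** 2).sum(axis=1)).sum() / W[i].sum())
                  for i in range(k)])
    R = 0.0
    for i in range(k):
        R += max((s[i] + s[j]) / np.linalg.norm(V[i] - V[j])
                 for j in range(k) if j != i)
    return R / k
```

As with the crisp DB index, a GGA using this fitness would minimize it, rewarding compact clusters whose centroids lie far apart.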
3.3. Selection Operator
In this paper we use a rank-based wheel selection mechanism, similar to the one described in [44]. First, the individuals are sorted in a list according to their quality. The position of an individual in the list is called its rank, denoted as $R_i$ ($1\le R_i\le\eta$, with $\eta$ standing for the number of individuals in the population of the GGA). The best individual is assigned rank $R_i=\eta$, the second best $R_i=\eta-1$, and so forth. A fitness value associated with each individual is then defined as

$$f_i=\frac{2R_i}{\eta(\eta+1)}.$$

Note that these values are normalized between 0 and 1, depending on the position of the individual in the ranking list. It is important to note that this rank-based selection mechanism is static, in the sense that the probabilities of survival (given by $f_i$) do not depend on the generation but on the position of the individual in the list. As a toy example, consider a population formed by 5 individuals, in which individual 1 is the best one ($R_1=5$), individual 2 is the second best ($R_2=4$), and so on. In this case, the fitness values associated with the individuals are $\{1/3, 4/15, 1/5, 2/15, 1/15\}$, and the associated intervals for the roulette wheel are $[0,1/3]$, $(1/3,3/5]$, $(3/5,4/5]$, $(4/5,14/15]$, and $(14/15,1]$.
The process carried out in our algorithm consists of selecting the parents for crossover by using this selection mechanism. This procedure is performed with replacement; that is, a given individual can be selected several times as a parent. However, the two parents involved in a given crossover operation must be different.
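The rank-based wheel can be sketched as follows, assuming the fitness normalization above (so the fitness values of all individuals sum to one and double as roulette-wheel interval widths):

```python
import random

def rank_fitness(n):
    """Static rank-based fitness: the i-th best individual has rank R = n - i
    and receives f = 2R / (n (n + 1)), so all values sum to one."""
    return [2 * (n - i) / (n * (n + 1)) for i in range(n)]

def roulette_pick(fitness, rng=random):
    """Spin the roulette wheel whose interval widths are the fitness values."""
    r = rng.random()
    acc = 0.0
    for idx, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return idx
    return len(fitness) - 1   # guard against floating-point round-off
```

Because the scheme is static, the same wheel can be reused in every generation once the population is re-sorted by quality.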
3.4. Crossover Operator
The crossover operator implemented in the grouping genetic algorithm used in this paper is a modified version of the one initially proposed by Falkenauer in [36], adapted to the fuzzy clustering problem. The main steps of the crossover operation are as follows.
(1) Select two individuals at random and choose two crossing points in their group sections.
(2) Insert the elements belonging to the selected groups of the first individual into the offspring.
(3) Assign the inserted elements the same degrees of membership they have in the first individual.
(4) Insert the elements belonging to the selected groups of the second individual into the offspring.
(5) Assign the degrees of membership of these inserted elements as follows: first, the membership degree remaining after the assignment of the elements of the first individual is calculated; this remaining degree is then shared proportionally among the elements of the second individual.
(6) Remove empty clusters, if any.
(7) Relabel the groups in the offspring so that they are numbered from 1 to $k$.
A simple yet illustrative enough example follows. Let us consider two different individuals and that have been randomly chosen among all individuals in a given GGA population so as to perform crossover on them. The groups selected to carry out the procedure are marked in boldface:
After steps 2 and 3 of the proposed crossover procedure (insertion of the group elements of the first individual and assignment of the degree of membership), the offspring results in
Then the group elements of the second individual are inserted, and the membership degree is modified considering the previous existing degrees from individual 1:
There are no empty clusters. Therefore, we pass on to the final step of the crossover approach: relabel the groups in the offspring so that they are numbered from 1 to $k$ (4 in this case):
An example of the reassignment of the degrees of membership in the final offspring is shown in Figure 1, where the evolution of the degrees of membership along the crossover operation is shown for one observation. Intuitively, the crossover probability should be high in the first stages of the algorithm and more moderate in the last ones, in order to favor the explorative behavior of the algorithm through the search space. Thus, we have implemented an adaptive crossover probability defined as

$$P_c(t)=P_c^{i}-\left(P_c^{i}-P_c^{f}\right)\frac{t}{T},$$

where $P_c(t)$ is the crossover probability used in a given generation $t$, $T$ stands for the total number of generations of the algorithm, and $P_c^{i}$ and $P_c^{f}$ are the initial and final values of the probability, respectively, which are set as inputs of the proposed algorithm.
3.5. Mutation Operator
Mutation operators introduce modifications in individuals of the population with a low probability, in order to explore new regions of the search space and also to escape from local optima when the algorithm is near convergence. In this case, we have implemented two different mutation operators adapted to fuzzy clustering problems.
(i) Mutation by cluster splitting: this operator splits a selected cluster into two different parts. The samples belonging to the original cluster are assigned to the two new clusters with equal probability, and their degrees of membership are randomly split between them. Note that one of the newly generated clusters keeps its label in the group section of the individual, whereas the other is assigned a new label. The cluster to be split is selected depending on cluster size, with larger clusters being more likely to be split. As an example, we illustrate an application of this operator on the final offspring individual of the previous example:
Let us suppose that the cluster chosen to be split is cluster 1. A possible cluster splitting mutation operation would be
(ii) Mutation by cluster merging: this mutation randomly selects two existing clusters and merges them into a single cluster. The degrees of membership of the new cluster are the sums of the degrees of membership of the previous ones. As in mutation by cluster splitting, the probability of choosing the clusters depends on their size. In order to illustrate this mutation, we use again the final offspring from the crossover operator example. In this case, let us consider that the clusters selected to be merged are clusters 2 and 4, resulting in
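Mutation by cluster merging is simple to express on the partition matrix: the two selected rows are summed, so each object's memberships still add up to one. A minimal sketch (the function name is ours):

```python
import numpy as np

def merge_clusters(U, a, b):
    """Mutation by cluster merging: rows a and b of the partition matrix are
    summed into a single cluster, preserving unit column sums."""
    U = np.asarray(U, dtype=float)
    merged = U[a] + U[b]
    keep = [i for i in range(U.shape[0]) if i not in (a, b)]
    return np.vstack([U[keep], merged]) if keep else merged[None, :]
```

After the merge, the group section would be relabeled from 1 to the new number of clusters, as in the crossover operator.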
Analogously to the crossover operator, we also consider an adaptive version of the probability of applying the mutation operators described above. Note that we apply the two mutation operators in a serial fashion (one after the other), with independent probabilities of application. In this case, the probability of mutation is made smaller in the first generations of the algorithm and larger in the last ones, in order to have more opportunities to escape from local minima in the last stages of the evolutionary process; that is,

$$P_m(t)=P_m^{i}+\left(P_m^{f}-P_m^{i}\right)\frac{t}{T},$$

where $P_m(t)$ is the probability of mutation used in a given generation $t$, $T$ stands for the total number of generations of the algorithm, and $P_m^{i}$ and $P_m^{f}$ are the initial and final values of the probability considered, respectively.
3.6. Replacement and Elitism
In the proposed GGA, the population at generation $t+1$ is obtained by replacement of the individuals in the population at generation $t$, through the application of the selection, crossover, and mutation operators described above. An elitist scheme is also applied: the best individual in generation $t$ is automatically passed on to the population of generation $t+1$, ensuring that the best solution encountered so far in the evolution is always kept by the algorithm.
3.7. Local Search
We use a local search procedure to find local optima in a close neighborhood of a given individual. The proposed local search is based on minor modifications of the current individual, as long as they improve the associated objective function: the local search changes the degrees of membership of the observations, starting with a randomly chosen one. The changes in the degrees of membership are randomly generated, and we finally keep the assignment with the best objective function value. Since this local search procedure is time-consuming, it is applied to a given individual with a small probability, which is modified between an initial and a final value along the run in the same way as the crossover probability.
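The local search can be sketched as a small hill climber on the memberships of one randomly chosen observation; the step size, number of trials, and renormalization strategy below are our own assumptions:

```python
import numpy as np

def local_search(U, objective, tries=20, step=0.05, rng=None):
    """Hill-climbing sketch: perturb the memberships of one randomly chosen
    observation, renormalize its column, and keep the best-scoring variant.
    `objective` is maximized and takes the k x N partition matrix."""
    rng = rng or np.random.default_rng(0)
    best_U, best_f = U.copy(), objective(U)
    j = rng.integers(U.shape[1])                 # observation to perturb
    for _ in range(tries):
        cand = best_U.copy()
        cand[:, j] = np.clip(cand[:, j] + rng.uniform(-step, step, U.shape[0]),
                             0.0, None)
        cand[:, j] /= cand[:, j].sum()           # keep a valid fuzzy column
        f = objective(cand)
        if f > best_f:                           # accept only strict improvements
            best_U, best_f = cand, f
    return best_U, best_f
```

The renormalization step is what keeps the perturbed individual a valid fuzzy partition, so the search never leaves the feasible region.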
3.8. An Island Model to Improve the Algorithm’s Performance
In order to improve the performance of the proposed GGA, an island model is considered for its parallelization. In this context, subpopulations (islands) are set up in such a way that the evolution in each island proceeds independently, but the migration of good individuals between islands is allowed. We consider an elitist migration model, in which only the best individual of each island migrates and substitutes a randomly chosen individual in one of the other islands. A migration probability is predefined in the algorithm. The migration process is summarized in the following steps.
(1) Choose the best individual in each island.
(2) Randomly choose the island toward which each such individual will migrate.
(3) Randomly choose an individual in the destination island and replace it with the migrating individual.
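The three migration steps above can be sketched as follows (the list-of-lists island representation and the generic fitness callable are our own assumptions):

```python
import random

def migrate(islands, fitness, p_migration=0.1, rng=random):
    """Elitist migration sketch: with probability p_migration, the best
    individual of each island replaces a randomly chosen individual
    on another randomly chosen island."""
    n = len(islands)
    for src in range(n):
        if rng.random() >= p_migration:
            continue
        best = max(islands[src], key=fitness)              # step (1)
        dst = rng.choice([i for i in range(n) if i != src])  # step (2)
        islands[dst][rng.randrange(len(islands[dst]))] = best  # step (3)
    return islands
```

Since migration only copies elites, island sizes stay fixed and the best solutions gradually spread across the archipelago without synchronizing the islands' evolution.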
4. Experiments and Results
This section summarizes and discusses the experimental work carried out to assess the performance of our proposed GGA approach. We have explored a number of variations of the proposed GGA (by combining different distances and/or objective functions) in a variety of fuzzy clustering scenarios which, as will be shown later, exhibit an increasing degree of complexity. Table 1 lists the values of the GGA parameters used in all the simulations carried out in this paper. These values have been found to be the most appropriate after a number of side experiments, not shown for the sake of brevity. The algorithm presented here is compared with the fuzzy C-means (FCM) algorithm [13], since it has been successfully applied to many real clustering problems and applications characterized by different levels of complexity [26, 27].

For reasons made clearer in what follows, the experimental setup for comparing the considered algorithms will be divided into two different parts, characterized by using synthetic and real data (Sections 4.1 and 4.2, resp.).
4.1. Synthetic Data
4.1.1. Experiment 1 with Synthetic Data: Spherical Clusters
In this first experiment, we test the performance of the proposed GGA on a two-dimensional clustering problem, defined by 300 observations randomly generated using Gaussian distributions from 8 equiprobable classes with given mean values and covariance matrices.
Note that this procedure results in a problem characterized by spherical clusters. Figure 2 illustrates the twodimensional distribution of the observations following the above statistical distribution.
We have applied to this problem a number of configurations of the proposed GGA—with MDB, XB, and FS objective (fitness) functions—and the FCM algorithm fed with the real number of clusters as a priori information. At this point it is important to emphasize that the proposed GGA is able to infer the number of clusters of the problem, whereas the FCM requires this parameter (namely, the number of clusters $k$) to be set before execution. To circumvent this issue, side simulations have been run for FCM over the considered scenario by varying $k$ over a wide range of integer values, from which the value rendering the best metric has been selected for comparison. Also included is the GGA approach from [38], in order to assess the impact of the novel aspects of the island-based GGA proposed here.
Having said this, Table 2 lists the supervised evaluation of the results obtained by the aforementioned algorithms. Note that the proposed GGA with the three different objective functions obtains better results than the FCM algorithm. In particular, our GGA with the MDB index exhibits the best behavior, with a Rand index higher than those of the conventional FCM algorithm and of the GGA with MDB index from [38]. In addition, note that the GGA with the MDB and XB indexes achieves the solution with the optimal number of clusters (i.e., 8). In order to better describe the behavior of the best algorithm (the GGA with our MDB index), it is worth taking a closer look at Figures 3 to 5.
(i) Figure 3 represents the two-dimensional distributions of the 8 clusters found. The color of each observation has been obtained as a combination of the colors representing each cluster, weighted by the degrees of membership of the observation.
(ii) Figures 4(a) and 4(b) depict, as a function of the number of generations, the evolution of the objective function and of the number of clusters, respectively, for the best solution found for this problem. It is worth noting that the algorithm is able to isolate the 8 clusters of the data set with a value of the objective function of 9.7688.
(iii) Finally, Figure 5 shows the final solution after the defuzzification process, illustrating the ability of the proposed algorithm to find the 8 clusters.
The question arising from this first experiment is how the proposed fuzzy clustering approach behaves when facing clusters that are not spherical or that exhibit different distributions. This is the rationale behind the following second synthetic experiment.
4.1.2. Experiment 2 with Synthetic Data: Unbalanced Data
We now test the performance of the proposed GGA on a different two-dimensional clustering problem, defined by objects randomly generated from 3 Gaussian classes with different prior probabilities, mean values, and covariance matrices. Note that, in this case, the classes are not spherical and have different distributions. Figure 6 displays the observations generated for this instance.
Table 3 shows, in terms of the Rand index, the results obtained by the proposed GGA with the MDB, XB, and FS indexes, by the previous scheme from [38] with the same set of indexes, and those achieved by the FCM algorithm. As shown in this table, the GGA with the MDB and XB indexes obtains similar results (better than the FCM), whereas the result of the GGA with the FS index is slightly worse than that of the FCM algorithm. The best results correspond to the GGA algorithm proposed here with the MDB index, which renders a Rand index higher than those of the GGA approach from [38] with the same index and of the FCM algorithm and, very importantly, finds the 3 clusters hidden in the data. Finally, Figure 7 illustrates, in a more intuitive way, the fuzzy clustering reached by the proposed GGA using the MDB index as objective function.
4.1.3. Experiment 3 with Synthetic Data: Heterogeneous Clusters
The goal of this final synthetic experiment is to explore the effect of using different distances in the MDB objective function, which has rendered the best results obtained by the proposed GGA. We again set up a two-dimensional clustering problem defined by 300 Gaussian-distributed objects, in this case randomly drawn from 6 classes with different prior probabilities, mean values, and covariance matrices. For illustrative purposes, Figure 8 displays the observations generated for this instance.
The analysis carried out in this case consists of comparing the GGA with the MDB index as objective function (which has obtained the best results in the previous experiments), but using different distances within the MDB metric. Specifically, we show the effect of including the Euclidean, GK, and AFC distances within the proposed GGA. Figure 9(a) represents the solution found by the GGA with the MDB index and the Euclidean distance. Note that the algorithm is not able to distinguish non-spherical clusters. By contrast, Figure 9(b) shows the result obtained by the proposed GGA with the MDB index and the GK distance. In this case, the algorithm is able to detect the structure of the problem, as can be observed in the detection of the elliptic clusters at the bottom of the figure. Finally, Figure 9(c) shows the result obtained by the proposed GGA with the MDB index and the AFC distance. Note that in this case the adaptive distance measure allows detecting clusters of different sizes, such as the large ones at the top of the figure.
The analysis of the GGA performance in this problem proceeds by comparing the results obtained in terms of the Rand index (supervised measure). Table 4 lists the results computed by the proposed GGA, with the MDB index and the different distances considered, compared to the results achieved by the FCM approach (with Euclidean distance, which has been found to be the best for the FCM algorithm). Note that the strategy using the proposed GGA with our MDB fitness function and the AFC distance exhibits the best performance, not only because it reaches the highest Rand index, but also because it properly detects the 6 clusters hidden in the data. Furthermore, only the proposed GGA approach with the MDB index and the AFC and GK distances is able to locate the correct number of clusters in the final solution.

4.2. Real Data
4.2.1. Real Problem 1: Character Recognition
This problem can be stated as follows: let be a character, that is, a two-dimensional image in which each pixel has been converted to black and white, with the black pixels forming the character image. The goal is to optimally segment all the black pixels into clusters, in such a way that a final step of comparison with a reference set can be carried out, with the aim of recognizing the character in the image.
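The conversion of a binarized character image into the clustering objects described above amounts to collecting the coordinates of the black pixels. A minimal sketch (the tiny glyph below is a toy example, not the character used in the paper):

```python
import numpy as np

def black_pixel_objects(image):
    """Turn a binarized character image into 2-D clustering objects:
    one (row, col) coordinate pair per black pixel (encoded here as 1)."""
    rows, cols = np.nonzero(image)
    return np.column_stack([rows, cols]).astype(float)

# Tiny illustrative glyph (a plus sign, not the paper's character):
glyph = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]])
print(black_pixel_objects(glyph).shape)  # (5, 2)
```

The resulting coordinate set is then fed to the fuzzy clustering algorithm exactly like the synthetic point clouds of the previous experiments.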
To illustrate the feasibility of our procedure, we have made use of an example, given by the character “” depicted by the different samples shown in Figure 10. The performance of the proposed GGA in the recognition of this character is given in Figures 11(a) and 11(b), which display the results achieved by the GGA using our MDB index as objective function with the Euclidean and GK distances, respectively. It is important to note how the proposed GGA using the GK distance is able to correctly allocate the three segments that form the character, whereas the GGA with the Euclidean distance does not provide results as good as those depicted in Figure 11(b). To further assess the feasibility of our proposal, Table 5 summarizes a quantitative comparison in terms of the Rand index. The GGA with the MDB index and the GK distance is the best among all the algorithms compared, whereas the GGA using the MDB index with either the Euclidean or the AFC distance obtains results similar to those of the FCM approach. The approach leading to the best solution of this problem is therefore the proposed GGA using our MDB fitness function along with the GK distance: it is able to correctly find the 3 segments (clusters of points) with the highest Rand index ().

4.2.2. Real Problem 2: Diabetes Data Set
The data set called “diabetes” (UCI Machine Learning Repository, see [54]) is a well-known classification and clustering problem involving the diagnosis of diabetes patients, as defined by the World Health Organization. This database is formed by 768 data vectors, each containing 8 features that represent medical conditions of the patients, such as age, arterial pressure, or body mass index. The observations belong to two classes: 500 correspond to a negative diabetes diagnosis and 268 to a positive one. The results obtained by the proposed GGA assisted by the MDB index (which has been found to be the best) are shown in Table 6, in terms of the percentage of correct classification. Note that the GGA-MDB with the GK distance is the best algorithm among all those compared, with a percentage of correct classification above 83%.
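The percentage of correct classification reported in Table 6 can be computed by mapping each discovered cluster to the majority true class among its members and counting the matches. This majority-mapping convention is a common way to score a clustering against known labels; the paper does not spell out its exact scoring procedure, so this is a plausible sketch:

```python
from collections import Counter

def clustering_accuracy(labels_true, labels_pred):
    """Percentage of correctly classified objects when each cluster is
    assigned the majority true class of its members (a common convention
    for scoring clusterings against known labels)."""
    correct = 0
    for c in set(labels_pred):
        members = [t for t, p in zip(labels_true, labels_pred) if p == c]
        correct += Counter(members).most_common(1)[0][1]
    return 100.0 * correct / len(labels_true)

print(clustering_accuracy([0, 0, 1, 1, 1], [2, 2, 5, 5, 2]))  # 80.0
```

As with the Rand index, the measure is independent of the arbitrary numbering of the clusters produced by the algorithm.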

5. Conclusions
In this paper we have presented a grouping genetic algorithm for fuzzy clustering problems. The main contributions of this work are a novel encoding of the individuals involved in the evolutionary process, containing information not only on the partition matrix elements but also on the clusters being formed; a novel fitness function, based on a modification of the Davies-Bouldin index, suited to fuzzy clustering problems and able to incorporate norms adapted to each problem; novel crossover and mutation operators particularly derived to achieve the effective evolution of the individuals; and a local search and a parallelization scheme aimed at improving the overall performance of the algorithm.
This performance has been explored in a variety of experiments, both synthetically generated and based on practical problems. The experimental work, based on different fuzzy problems characterized by an increasing degree of complexity (clusters with different distributions, volumes, and orientations), shows that our algorithm, using the proposed fitness function with distances such as the Gustafson-Kessel distance or the one established for adaptive fuzzy clustering, exhibits significantly better performance than that achieved by the fuzzy C-means algorithm.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the Gachon University research fund of 2014.
References
[1] A. K. Jain, M. N. Murty, and P. J. Flynn, “Data clustering: a review,” ACM Computing Surveys, vol. 31, no. 3, pp. 264–323, 1999.
[2] R. Xu and D. Wunsch II, “Survey of clustering algorithms,” IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 645–678, 2005.
[3] T. W. Liao, “Clustering of time series data—a survey,” Pattern Recognition, vol. 38, no. 11, pp. 1857–1874, 2005.
[4] P. Lingras and X. Huang, “Statistical, evolutionary, and neurocomputing clustering techniques: cluster-based vs. object-based approaches,” Artificial Intelligence Review, vol. 23, no. 1, pp. 3–29, 2005.
[5] J. Ji, W. Pang, C. Zhou, X. Han, and Z. Wang, “A fuzzy k-prototype clustering algorithm for mixed numeric and categorical data,” Knowledge-Based Systems, vol. 30, pp. 129–135, 2012.
[6] S. Mitra and H. Banka, “Multi-objective evolutionary biclustering of gene expression data,” Pattern Recognition, vol. 39, no. 12, pp. 2464–2477, 2006.
[7] P. Scheunders, “A genetic C-means clustering algorithm applied to color image quantization,” Pattern Recognition, vol. 30, no. 6, pp. 859–866, 1997.
[8] V. M. Gomez-Muñoz and M. A. Porta-Gándara, “Local wind patterns for modeling renewable energy systems by means of cluster analysis techniques,” Renewable Energy, vol. 25, no. 2, pp. 171–182, 2002.
[9] A. Baraldi and P. Blonda, “A survey of fuzzy clustering algorithms for pattern recognition, part I,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 29, no. 6, pp. 778–785, 1999.
[10] A. Baraldi and P. Blonda, “A survey of fuzzy clustering algorithms for pattern recognition, part II,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 29, no. 6, pp. 786–801, 1999.
[11] U. Maulik and S. Bandyopadhyay, “Fuzzy partitioning using a real-coded variable-length genetic algorithm for pixel classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 5, pp. 1075–1081, 2003.
[12] X. Guo, Z. Zhu, and J. Shi, “A corporate credit rating model using support vector domain combined with fuzzy clustering algorithm,” Mathematical Problems in Engineering, vol. 2012, Article ID 302624, 20 pages, 2012.
[13] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, IEEE Press, 1981.
[14] D. Graves and W. Pedrycz, “Kernel-based fuzzy clustering and fuzzy clustering: a comparative experimental study,” Fuzzy Sets and Systems, vol. 161, no. 4, pp. 522–543, 2010.
[15] H. Zhang and J. Lu, “Semi-supervised fuzzy clustering: a kernel-based approach,” Knowledge-Based Systems, vol. 22, no. 6, pp. 477–481, 2009.
[16] S. Chatzis and T. Varvarigou, “Factor analysis latent subspace modeling and robust fuzzy clustering using t-distributions,” IEEE Transactions on Fuzzy Systems, vol. 17, no. 3, pp. 505–517, 2009.
[17] Y. Zhong and L. Zhang, “A new fuzzy clustering algorithm based on clonal selection for land cover classification,” Mathematical Problems in Engineering, vol. 2011, Article ID 708459, 21 pages, 2011.
[18] A. Celikyilmaz and I. Burhan Turksen, “Enhanced fuzzy system models with improved fuzzy clustering algorithm,” IEEE Transactions on Fuzzy Systems, vol. 16, no. 3, pp. 779–794, 2008.
[19] E. G. Mansoori, M. J. Zolghadri, and S. D. Katebi, “SGERD: a steady-state genetic algorithm for extracting fuzzy classification rules from data,” IEEE Transactions on Fuzzy Systems, vol. 16, no. 4, pp. 1061–1071, 2008.
[20] E. G. Mansoori, “FRBC: a fuzzy rule-based clustering algorithm,” IEEE Transactions on Fuzzy Systems, vol. 19, no. 5, pp. 960–971, 2011.
[21] S. Eschrich, J. Ke, L. O. Hall, and D. B. Goldgof, “Fast accurate fuzzy clustering through data reduction,” IEEE Transactions on Fuzzy Systems, vol. 11, no. 2, pp. 262–270, 2003.
[22] E. N. Nasibov and G. Ulutagay, “A new unsupervised approach for fuzzy clustering,” Fuzzy Sets and Systems, vol. 158, no. 19, pp. 2118–2133, 2007.
[23] D. Horta, I. C. de Andrade, and R. J. Campello, “Evolutionary fuzzy clustering of relational data,” Theoretical Computer Science, vol. 412, no. 42, pp. 5854–5870, 2011.
[24] J. Yu and M.-S. Yang, “A generalized fuzzy clustering regularization model with optimality tests and model complexity analysis,” IEEE Transactions on Fuzzy Systems, vol. 15, no. 5, pp. 904–915, 2007.
[25] W. Yaonan, L. Chunsheng, and Z. Yi, “A selection model for optimal fuzzy clustering algorithm and number of clusters based on competitive comprehensive fuzzy evaluation,” IEEE Transactions on Fuzzy Systems, vol. 17, no. 3, pp. 568–577, 2009.
[26] C. Hwang and F. C. Rhee, “Uncertain fuzzy clustering: interval type-2 fuzzy approach to C-means,” IEEE Transactions on Fuzzy Systems, vol. 15, no. 1, pp. 107–120, 2007.
[27] M. Lee and W. Pedrycz, “The fuzzy C-means algorithm with fuzzy P-mode prototypes for clustering objects having mixed features,” Fuzzy Sets and Systems, vol. 160, no. 24, pp. 3590–3600, 2009.
[28] V. S. Alves, R. J. Campello, and E. R. Hruschka, “A fuzzy variant of an evolutionary algorithm for clustering,” in Proceedings of the IEEE International Conference on Fuzzy Systems, pp. 375–380, Imperial College, London, UK, July 2007.
[29] C. Chun-Hao, V. S. Tseng, and H. Tzung-Pei, “Cluster-based evaluation in fuzzy-genetic data mining,” IEEE Transactions on Fuzzy Systems, vol. 16, no. 1, pp. 249–262, 2008.
[30] M. K. Pakhira, S. Bandyopadhyay, and U. Maulik, “A study of some fuzzy cluster validity indices, genetic clustering and application to pixel classification,” Fuzzy Sets and Systems, vol. 155, no. 2, pp. 191–214, 2005.
[31] G. Gan, J. Wu, and Z. Yang, “A genetic fuzzy k-Modes algorithm for clustering categorical data,” Expert Systems with Applications, vol. 36, no. 2, pp. 1615–1620, 2009.
[32] A. Mukhopadhyay, U. Maulik, and S. Bandyopadhyay, “Multiobjective genetic algorithm-based fuzzy clustering of categorical attributes,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 991–1005, 2009.
[33] U. Maulik and I. Saha, “Automatic fuzzy clustering using modified differential evolution for image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 9, pp. 3503–3510, 2010.
[34] H. Izakian and A. Abraham, “Fuzzy C-means and fuzzy swarm for fuzzy clustering problem,” Expert Systems with Applications, vol. 38, no. 3, pp. 1835–1838, 2011.
[35] H. Dong, Y. Dong, C. Zhou, G. Yin, and W. Hou, “A fuzzy clustering algorithm based on evolutionary programming,” Expert Systems with Applications, vol. 36, no. 9, pp. 11792–11800, 2009.
[36] E. Falkenauer, “The grouping genetic algorithm—widening the scope of the GAs,” Belgian Journal of Operations Research, Statistics and Computer Science, vol. 33, pp. 79–102, 1992.
[37] E. Falkenauer, Genetic Algorithms for Grouping Problems, John Wiley & Sons, New York, NY, USA, 1998.
[38] S. Salcedo-Sanz, L. Carro-Calvo, J. A. Portilla-Figueras, L. Cuadra, and D. Camacho, “Fuzzy clustering with grouping genetic algorithms,” in Intelligent Data Engineering and Automated Learning, vol. 8206 of Lecture Notes in Computer Science, pp. 334–341, Springer, Berlin, Germany, 2013.
[39] R. Krishnapuram and J. Kim, “A note on the Gustafson-Kessel and adaptive fuzzy clustering algorithms,” IEEE Transactions on Fuzzy Systems, vol. 7, no. 4, pp. 453–461, 1999.
[40] X. L. Xie and G. Beni, “A validity measure for fuzzy clustering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 8, pp. 841–847, 1991.
[41] Y. Fukuyama and M. Sugeno, “A new method of choosing the number of clusters for the fuzzy C-means method,” in Proceedings of the 5th Fuzzy Systems Symposium, pp. 247–250, Kobe, Japan, 1989.
[42] W. M. Rand, “Objective criteria for the evaluation of clustering methods,” Journal of the American Statistical Association, vol. 66, pp. 846–850, 1971.
[43] L. E. Agustín-Blas, S. Salcedo-Sanz, P. Vidales, G. Urueta, and J. A. Portilla-Figueras, “Near optimal citywide WiFi network deployment using a hybrid grouping genetic algorithm,” Expert Systems with Applications, vol. 38, no. 8, pp. 9543–9556, 2011.
[44] T. L. James, E. C. Brown, and K. B. Keeling, “A hybrid grouping genetic algorithm for the cell formation problem,” Computers & Operations Research, vol. 34, no. 7, pp. 2059–2079, 2007.
[45] T. James, M. Vroblefski, and Q. Nottingham, “A hybrid grouping genetic algorithm for the registration area planning problem,” Computer Communications, vol. 30, no. 10, pp. 2180–2190, 2007.
[46] E. C. Brown and M. Vroblefski, “A grouping genetic algorithm for the microcell sectorization problem,” Engineering Applications of Artificial Intelligence, vol. 17, no. 6, pp. 589–598, 2004.
[47] E. C. Brown and R. T. Sumichrast, “Evaluating performance advantages of grouping genetic algorithms,” Engineering Applications of Artificial Intelligence, vol. 18, no. 1, pp. 1–12, 2005.
[48] C. Hung, R. T. Sumichrast, and E. C. Brown, “CPGEA: a grouping genetic algorithm for material cutting plan generation,” Computers & Industrial Engineering, vol. 44, no. 4, pp. 651–672, 2003.
[49] P. de Lit, E. Falkenauer, and A. Delchambre, “Grouping genetic algorithms: an efficient method to solve the cell formation problem,” Mathematics and Computers in Simulation, vol. 51, no. 3-4, pp. 257–271, 2000.
[50] E. C. Brown and R. T. Sumichrast, “Impact of the replacement heuristic in a grouping genetic algorithm,” Computers & Operations Research, vol. 30, no. 11, pp. 1575–1593, 2003.
[51] V. B. Kreng and T. Lee, “Modular product design with grouping genetic algorithm—a case study,” Computers & Industrial Engineering, vol. 46, no. 3, pp. 443–460, 2004.
[52] L. E. Agustín-Blas, S. Salcedo-Sanz, S. Jiménez-Fernández, L. Carro-Calvo, J. del Ser, and J. A. Portilla-Figueras, “A new grouping genetic algorithm for clustering problems,” Expert Systems with Applications, vol. 39, no. 10, pp. 9695–9703, 2012.
[53] D. Davies and D. Bouldin, “A cluster separation measure,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 1, no. 2, pp. 224–227, 1979.
[54] A. Asuncion and D. J. Newman, “UCI Machine Learning Repository,” School of Information and Computer Science, University of California, Irvine, Calif, USA, 2007, http://www.ics.uci.edu/~mlearn/MLRepository.html.
Copyright
Copyright © 2014 S. Salcedo-Sanz et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.