Research Article  Open Access
Min Ren, Peiyu Liu, Zhihao Wang, Jing Yi, "A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters", Computational Intelligence and Neuroscience, vol. 2016, Article ID 2647389, 12 pages, 2016. https://doi.org/10.1155/2016/2647389
A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters
Abstract
To address the shortcoming of the fuzzy c-means (FCM) algorithm that the number of clusters must be known in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. According to the characteristics of the dataset, the algorithm automatically determined the possible maximum number of clusters instead of using the empirical rule and obtained the optimal initial cluster centroids, mitigating the limitation of FCM that randomly selected cluster centroids lead the convergence result to a local minimum. Secondly, by introducing a penalty function, this paper proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensured that when the number of clusters verged on the number of objects in the dataset, the value of the clustering validity index neither decreased monotonically nor approached zero, so that the index did not lose its robustness and decision function for determining the optimal number of clusters. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by an iterative trial-and-error process. At last, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determined the optimal number of clusters, but also reduced the iterations of FCM and produced a stable clustering result.
1. Introduction
Cluster analysis has a long research history. Due to its advantage of learning without a priori knowledge, it is widely applied in the fields of pattern recognition, image processing, web mining, spatiotemporal database application, business intelligence, and so forth.
Clustering is often regarded as unsupervised learning and aims to partition the objects in a dataset into several natural groupings, namely, the so-called clusters, such that objects within a cluster tend to be similar while objects belonging to different clusters are dissimilar. Generally, datasets from different application fields vary in their features, and the purposes of clustering are multifarious. Therefore, the best method of cluster analysis depends on the dataset and the purpose of use. There is no universal clustering technology that can be widely applicable to the diverse structures presented by various datasets [1]. According to the accumulation rules of objects in clusters and the methods of applying these rules, clustering algorithms are divided into many types. However, for most clustering algorithms, including partitional clustering and hierarchical clustering, the number of clusters is a parameter that needs to be preset, to which the quality of the clustering result is closely related. In practical applications, it usually relies on users' experience or background knowledge in related fields. But in most cases, the number of clusters is unknown to users. If the number is assigned too large, it may result in more complicated clustering results which are difficult to explain. On the contrary, if it is too small, a lot of valuable information in the clustering result may be lost [2]. Thus, determining the optimal number of clusters in a dataset is still a fundamental problem in the research of cluster analysis [3].
Contributions of this paper are as follows. (1) A density-based algorithm is proposed, which can quickly and succinctly generate high-quality initial cluster centroids instead of choosing them randomly, so as to stabilize the clustering result and also quicken the convergence of the clustering algorithm. Besides, this algorithm can automatically estimate the maximum number of clusters based on the features of a dataset, thereby determining the search range for estimating the optimal number of clusters and effectively reducing the iterations of the clustering algorithm. (2) Based on the features of compactness within a cluster and separation between clusters, a new fuzzy clustering validity index (CVI) is defined, which prevents the index value from approaching zero as the number of clusters tends to the number of objects, so that the optimal clustering result can be obtained. (3) A self-adaptive method that iteratively uses an improved FCM algorithm to estimate the optimal number of clusters is put forward.
2. Related Work
The easiest method of determining the number of clusters is data visualization. For a dataset that can be effectively mapped to a two-dimensional Euclidean space, the number of clusters can be intuitively acquired through the distribution graph of data points. However, for high-dimensional and complicated data, this method is not applicable. Rodriguez and Laio proposed a clustering algorithm based on density peaks, declaring that it was able to detect nonspherical clusters and to automatically find the true number of clusters [4]. But, in fact, the number of cluster centroids still needs to be selected artificially according to the decision graph. Next, relevant technologies to determine the optimal number of clusters are summarized below.
2.1. Clustering Validity Index Based Method
The clustering validity index is used to evaluate the quality of partitions of a dataset generated by a clustering algorithm. Constructing an appropriate clustering validity index is an effective method to determine the number of clusters. The idea is to assign different values to the number of clusters c within a certain range, then run a fuzzy clustering algorithm on the dataset, and finally evaluate the results by the clustering validity index. When the value of the clustering validity index reaches its maximum or minimum or an obvious inflection point appears, the corresponding value of c is the optimal number of clusters. So far, researchers have put forward many fuzzy clustering validity indices, divided into the following two types.
(1) Clustering Validity Index Based on Fuzzy Partition. These indices are based on the point of view that, for a well-separated dataset, the smaller the fuzziness of the fuzzy partition is, the more reliable the clustering result is. Based on it, Zadeh, the founder of fuzzy sets, put forward the first clustering validity index, the degree of separation, in 1965 [5]. But its discriminating effect is not ideal. In 1974, Bezdek put forward the concept of the partition coefficient (PC) [6], which was the first practical generic function for measuring the validity of fuzzy clustering, and subsequently proposed another clustering validity function, the partition entropy (PE) [7], closely related to the partition coefficient. Later, Windham defined the proportion exponent by use of the maximum value of the fuzzy membership function [8]. Lee put forward a fuzzy clustering validity index using the distinguishability of clusters measured by the object proximities [9]. Based on the Shannon entropy and fuzzy variation theory, Zhang and Jiang put forward a new fuzzy clustering validity index taking account of the geometric structure of the dataset [10]. Saha et al. put forward an algorithm based on differential evolution for automatic cluster detection, which well evaluated the validity of the clustering result [11]. Yue et al. partitioned the original data space into a grid-based structure and proposed a cluster separation measure based on grid distances [12]. The clustering validity index based on fuzzy partition is only related to the fuzzy degree of membership and has the advantages of simplicity and a small amount of calculation. But it lacks a direct relation with the structural features of the dataset.
(2) Clustering Validity Index Based on the Geometric Structure of the Dataset. These indices are based on the point of view that, for a well-separated dataset, every cluster should be compact and the clusters should be separated from each other as far as possible. The ratio of compactness to separation is used as the standard of clustering validity. Representative clustering validity indices of this type include the Xie-Beni index [13], the Bensaid index [14], and the Kwon index [15]. Sun et al. proposed a new validity index based on a linear combination of compactness and separation, inspired by Rezaee's validity index [16]. Li and Yu defined new compactness and separation measures and put forward a new fuzzy clustering validity function [17]. Based on fuzzy granulation-degranulation, Saha and Bandyopadhyay put forward a fuzzy clustering validity function [18]. Zhang et al. adopted the Pearson correlation to measure distance and put forward a validity function [19]. Kim et al. proposed a clustering validity index for the GK algorithm based on the average value of the relative degrees of sharing of all possible pairs of fuzzy clusters [20]. Rezaee proposed a new validity index for the GK algorithm to overcome the shortcomings of Kim's index [21]. Zhang et al. proposed a novel index, WGLI, to detect the optimal cluster number, using the global optimum membership as the global property and the modularity of a bipartite network as the local independent property [22]. The clustering validity index based on the geometric structure of the dataset considers both the fuzzy degree of membership and the geometric structure, but its membership function is quite complicated, with a large amount of calculation.
Based on a clustering validity index, the optimal number of clusters is determined through exhaustive search. In order to increase the efficiency of estimating the optimal number of clusters c_opt, the search range of the number of clusters must be set; that is, the maximum number of clusters c_max is assigned to meet the condition c_opt ≤ c_max. Most researchers use the empirical rule c_max ≤ √n, where n is the number of objects in the dataset. For this problem, theoretical analysis and example verification were conducted in [23], which indicated that the rule is reasonable in a sense. However, apparently, this method has the following disadvantages. (1) Each candidate number of clusters in the search range must be tried in turn, which causes a huge amount of calculation. (2) For each candidate, it cannot be guaranteed that the clustering result is the globally optimal solution. (3) When noise and outliers exist, the reliability of the clustering validity index is weak. (4) For some datasets, like FaceImage [24], whose true number of clusters exceeds √n, the empirical rule is invalid. Research has shown that, due to the diversified data types and structures, no universal fuzzy clustering validity index is applicable to all datasets. This research is and will still be carried on urgently.
2.2. Heuristic Method
Some new clustering algorithms have been proposed in succession recently. The main idea is to use some criterion to guide the clustering process, with the number of clusters being adjusted; in this way, when the clustering is completed, an appropriate number of clusters is obtained as well. For example, the x-means algorithm [25], based on splitting clusters hierarchically, is representative. Contrary to the aforesaid process, RCA [26] determined the actual number of clusters by a process of competitive agglomeration. Combining the single-point iterative technology with hierarchical clustering, a similarity-based clustering method (SCM) [27] was proposed. A mercer kernel-based clustering [28] estimated the number of clusters by the eigenvectors of a kernel matrix. A clustering algorithm based on maximal distant subtrees [29] detected any number of well-separated clusters with any shapes. In addition, Frey and Dueck put forward the affinity propagation (AP) clustering algorithm [24], which generates high-quality cluster centroids via message passing between objects to determine the optimal number of clusters of large-scale data. Shihong et al. proposed a Gerschgorin disk estimation-based criterion to estimate the true number of clusters [30]. José-García and Gómez-Flores presented an up-to-date review of all major nature-inspired metaheuristic algorithms used thus far for automatic clustering, determining the best estimate of the number of clusters [31]. All these have widened the thoughts for relevant research.
3. Traditional FCM for Determining the Optimal Number of Clusters
Since Ruspini first introduced the theory of fuzzy sets into cluster analysis in 1973 [32], different fuzzy clustering algorithms have been widely discussed, developed, and applied in various areas. FCM [33] is one of the most commonly used algorithms. FCM was first presented by Dunn in 1974. Subsequently, Bezdek introduced a weighted index for the fuzzy degree of membership [34], further developing FCM. FCM divides the dataset into c fuzzy clusters. This algorithm holds that each object belongs to each cluster with a certain degree of membership; that is, a cluster is considered as a fuzzy subset of the dataset.
Assume that X = {x_1, x_2, \dots, x_n} is a finite dataset, where x_j = (x_{j1}, x_{j2}, \dots, x_{jl}) is an l-dimensional object and x_{jk} is the kth property of the jth object. c denotes the number of clusters. V = {v_1, v_2, \dots, v_c} represents the l-dimensional cluster centroids, where 2 \le c \le n. U = [u_{ij}]_{c \times n} is a fuzzy partition matrix, and u_{ij} is the degree of membership of the jth object in the ith cluster, where u_{ij} \in [0, 1] and \sum_{i=1}^{c} u_{ij} = 1 for each j. The objective function is the quadratic sum of weighted distances from the samples to the cluster centroid in each cluster; that is,

J_m(U, V) = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} d_{ij}^{2},  (1)

where d_{ij} = \|x_j - v_i\| is the Euclidean distance between the jth object and the ith cluster centroid. m (m > 1) is a fuzziness index, controlling the fuzziness of the memberships. As the value of m becomes progressively higher, the resulting memberships become fuzzier [35]. Pal and Bezdek advised that m should be between 1.5 and 2.5, and usually m = 2 is used if there is no special requirement [36].
According to the clustering criterion, an appropriate fuzzy partition matrix U and cluster centroids V are obtained to minimize the objective function J_m. Based on the Lagrange multiplier method, u_{ij} and v_i are, respectively, calculated by the formulas below:

u_{ij} = \left[ \sum_{k=1}^{c} \left( \frac{d_{ij}}{d_{kj}} \right)^{2/(m-1)} \right]^{-1},  (2)

v_i = \frac{\sum_{j=1}^{n} u_{ij}^{m} x_j}{\sum_{j=1}^{n} u_{ij}^{m}}.  (3)
The FCM algorithm is carried out through an iterative process of minimizing the objective function J_m, with the update of U and V. The specific steps are as follows.
Step 1. Assign the initial values of the number of clusters c, the fuzziness index m, the maximum number of iterations T, and the threshold ε.
Step 2. Initialize the fuzzy partition matrix U^(0) randomly according to the constraints on the degree of membership.
Step 3. At the tth step, calculate the cluster centroids V^(t) according to (3).
Step 4. Calculate the objective function J^(t) according to (1). If |J^(t) − J^(t−1)| < ε or t > T, then stop; otherwise continue to Step 5.
Step 5. Calculate U^(t+1) according to (2) and return to Step 3.
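As an illustration, Steps 1 to 5 can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the function and parameter names are our own, and the optional V0 argument anticipates supplying initial centroids from an algorithm such as the one in Section 4.1.

```python
import numpy as np

def fcm(X, c, m=2.0, V0=None, max_iter=100, eps=1e-5, rng=None):
    """Minimal FCM sketch. X is an (n, l) array, c the number of clusters.
    If V0 (initial centroids) is given, the first partition is derived
    from it; otherwise the fuzzy partition is initialised randomly."""
    n = X.shape[0]

    def memberships(V):
        # eq. (2): u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        D2 = np.fmax(((X[None] - V[:, None]) ** 2).sum(axis=2), 1e-12)
        inv = D2 ** (-1.0 / (m - 1.0))
        return inv / inv.sum(axis=0, keepdims=True), D2

    if V0 is None:
        U = np.random.default_rng(rng).random((c, n))
        U /= U.sum(axis=0, keepdims=True)      # columns sum to 1
    else:
        U, _ = memberships(np.asarray(V0, float))

    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)   # eq. (3)
        U, D2 = memberships(V)                         # eq. (2)
        J = ((U ** m) * D2).sum()                      # eq. (1)
        if abs(J_prev - J) < eps:                      # Step 4 stop rule
            break
        J_prev = J
    return U, V, J
```

Passing well-chosen initial centroids through V0 makes the run deterministic, which is exactly the motivation for the density-based initialisation discussed later.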
At last, each object is assigned to one cluster in accordance with the principle of the maximum degree of membership. The advantages of FCM may be summarized as a simple algorithm, quick convergence, and ease of extension. Its disadvantages lie in the selection of the initial cluster centroids, the sensitivity to noise and outliers, and the setting of the number of clusters, all of which have a great impact on the clustering result. As the random selection of initial cluster centroids cannot ensure that FCM converges to an optimal solution, either different initial cluster centroids are used for multiple runs of FCM, or the initial cluster centroids are determined by another fast algorithm.
The traditional method to determine the optimal number of clusters with FCM is to set the search range of the number of clusters, run FCM to generate clustering results for each number of clusters in the range, select an appropriate clustering validity index to evaluate these results, and finally obtain the optimal number of clusters according to the evaluation. The method is composed of the following steps.
Step 1. Input the search range [c_min, c_max]; generally, c_min = 2 and c_max = ⌈√n⌉.
Step 2. For each integer c in [c_min, c_max]:
Step 2.1. Run FCM.
Step 2.2. Calculate the clustering validity index.
Step 3. Compare all values of the clustering validity index. The c corresponding to the maximum or minimum value is the optimal number of clusters c_opt.
Step 4. Output c_opt, the optimal value of the clustering validity index, and the clustering result.
4. The Proposed Methods
4.1. Density-Based Algorithm
Considering the large influence of randomly selected initial cluster centroids on the clustering result, a density-based algorithm is proposed to select the initial cluster centroids, and the maximum number of clusters can be estimated at the same time. Some related terms are defined first.
Definition 1 (local density). The local density \rho_i of object x_i is defined as

\rho_i = \sum_{j \ne i} \chi(d_{ij} - d_c), \qquad \chi(x) = \begin{cases} 1, & x < 0, \\ 0, & x \ge 0, \end{cases}

where d_{ij} is the distance between the two objects x_i and x_j and d_c is a cutoff distance. The recommended approach is to sort the distances between all pairs of objects in descending order and then assign d_c as the value at an appropriately chosen small percentage of the sorted distances (percentages of 1%-5% are compared in Section 5.1). It follows that the more objects with a distance from x_i less than d_c there are, the bigger the value of \rho_i is. The cutoff distance finally decides the number of initial cluster centroids, namely, the maximum number of clusters. The density-based algorithm is robust with respect to the choice of d_c.
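Under the cutoff-kernel reading of Definition 1 (the same kernel as in [4]; the paper's exact kernel may differ in detail), local density can be sketched as follows. Names and the parameter t are our own.

```python
import numpy as np

def local_density(X, t=0.03):
    """Local density of every object (Definition 1, cutoff kernel):
    rho_i = #{ j != i : d_ij < d_c }.  d_c is taken at position t
    (e.g. the first 3%) of all pairwise distances sorted in
    descending order, as the text recommends."""
    n = X.shape[0]
    # full pairwise Euclidean distance matrix
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    pair_d = np.sort(D[np.triu_indices(n, k=1)])[::-1]   # descending
    d_c = pair_d[int(t * len(pair_d))]
    rho = (D < d_c).sum(axis=1) - 1                      # exclude self
    return rho, d_c
```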
Definition 2 (core point). Assume that there is an object x_i. If its local density \rho_i is greater than the local density of every object within d_c of it, then x_i is a core point.
Definition 3 (directly density-reachable). Assume that there are two objects x_i and x_j. If their distance d_{ij} < d_c, then x_i is directly density-reachable to x_j and vice versa.
Definition 4 (density-reachable). Assume that D is a dataset and objects p_1, p_2, \dots, p_k belong to it. If, for each i, p_{i+1} is directly density-reachable to p_i, then p_1 is density-reachable to p_k.
Definition 5 (neighbor). Neighbors of an object are those objects that are directly density-reachable or density-reachable to it, denoted as Neighbor(x_i).
Definition 6 (border point). An object is called a border point if it has no neighbors.
The example is shown in Figure 1.
The selection principle of initial cluster centroids in the density-based algorithm is that, usually, a cluster centroid is an object with higher local density, surrounded by neighbors with lower local density than it, and has a relatively large distance from other cluster centroids [4]. The density-based algorithm can automatically select the initial cluster centroids and determine the maximum number of clusters according to local density. The pseudocode is shown in Algorithm 1. The cluster centroids obtained are sorted in descending order of local density. Figure 2 demonstrates the process of the density-based algorithm.
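Algorithm 1 itself is not reproduced here; the following sketch only implements the selection principle just described, picking the core points of Definition 2 and sorting them by decreasing density. The names and the cutoff parameter t are our own assumptions, and the count of selected points serves as the estimate of c_max.

```python
import numpy as np

def density_peaks_init(X, t=0.03):
    """Sketch of density-based initialisation: return objects whose
    local density exceeds that of all objects within d_c (core points,
    Definition 2), sorted by decreasing density. Their count is used
    as the maximum number of clusters c_max."""
    n = X.shape[0]
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    pair_d = np.sort(D[np.triu_indices(n, k=1)])[::-1]
    d_c = pair_d[int(t * len(pair_d))]
    rho = (D < d_c).sum(axis=1) - 1
    centroids = []
    for i in range(n):
        # objects strictly within d_c of x_i, excluding x_i itself
        neigh = np.where((D[i] < d_c) & (np.arange(n) != i))[0]
        if len(neigh) and (rho[i] > rho[neigh]).all():
            centroids.append(i)
    order = sorted(centroids, key=lambda i: -rho[i])
    return X[order], len(order)
```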

(a) Initial data
(b) Iteration 1
(c) Iteration 2
(d) Iteration 3
4.2. Fuzzy Clustering Validity Index
Firstly, based on the geometric structure of the dataset, Xie and Beni put forward the Xie-Beni fuzzy clustering validity index [13], which tries to find a balance point between fuzzy compactness and separation so as to acquire the optimal clustering result. The index is defined as

V_{XB}(U, V; X) = \frac{\sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{2} \|x_j - v_i\|^2}{n \cdot \min_{i \ne k} \|v_i - v_k\|^2}.

In the formula, the numerator is the average distance from the objects to the centroids, used to measure the compactness, and the denominator is the minimum distance between any two centroids, measuring the separation.
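A short sketch of the Xie-Beni index in its standard form, assuming NumPy arrays U of shape (c, n) and V of shape (c, l) as produced by an FCM run:

```python
import numpy as np

def xie_beni(X, U, V):
    """Xie-Beni index: weighted within-cluster scatter divided by
    n times the minimum squared inter-centroid distance.
    Smaller values indicate a better partition."""
    n = X.shape[0]
    c = V.shape[0]
    D2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)  # (c, n)
    compact = ((U ** 2) * D2).sum()
    sep = min(((V[i] - V[k]) ** 2).sum()
              for i in range(c) for k in range(c) if i != k)
    return compact / (n * sep)
```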
However, Bensaid et al. found that the size of each cluster had a large influence on the Xie-Beni index and put forward a new index [14], which is insensitive to the number of objects in each cluster. The Bensaid index is defined as

V_B(U, V; X) = \sum_{i=1}^{c} \frac{\sigma_i}{n_i s_i},

where n_i is the fuzzy cardinality of the ith cluster, defined as

n_i = \sum_{j=1}^{n} u_{ij}.

\sigma_i shows the variation of the ith fuzzy cluster. Then, the compactness \sigma_i / n_i is computed with

\sigma_i = \sum_{j=1}^{n} u_{ij}^{m} \|x_j - v_i\|^2.

s_i denotes the separation of the ith fuzzy cluster, defined as the sum of the distances from its cluster centroid to the centroids of the other clusters:

s_i = \sum_{k=1}^{c} \|v_k - v_i\|^2.
Because its value also tends to zero as the number of clusters approaches the number of objects, this index behaves in the same way as the Xie-Beni index: when c \to n, the index value monotonically decreases, gets close to 0, and loses robustness and judgment function for determining the optimal number of clusters. Thus, this paper improves the Bensaid index and proposes a new index, constructed as follows.
The numerator represents the compactness of the ith cluster, where n_i is its fuzzy cardinality. Its second item, an introduced penalty function, denotes the distance from the centroid of the ith cluster to the average of all cluster centroids, which eliminates the monotonically decreasing tendency as the number of clusters increases toward n. The denominator represents the mean distance from the ith cluster centroid to the other cluster centroids, which is used for measuring the separation. The ratio of the numerator to the denominator represents the clustering effect of the ith cluster, and the clustering validity index is defined as the sum of the clustering effects (the ratios) of all clusters. Obviously, the smaller the index value is, the better the clustering effect of the dataset is, and the c corresponding to the minimum value is the optimal number of clusters.
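Reading the description above literally, one possible realisation of such an index is sketched below. This is a reconstruction under our own assumptions, not the paper's verified formula: the numerator is the fuzzy compactness sigma_i / n_i plus the penalty term (squared distance from v_i to the mean centroid), and the denominator is the mean squared distance to the other centroids.

```python
import numpy as np

def proposed_index(X, U, V, m=2.0):
    """Hypothetical sketch of the proposed compactness-separation index
    (smaller = better); reconstructed from the verbal description only."""
    c, n = U.shape
    D2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
    n_i = U.sum(axis=1)                       # fuzzy cardinality
    sigma = ((U ** m) * D2).sum(axis=1)       # fuzzy variation
    v_bar = V.mean(axis=0)
    penalty = ((V - v_bar) ** 2).sum(axis=1)  # distance to mean centroid
    sep = np.array([sum(((V[k] - V[i]) ** 2).sum()
                        for k in range(c) if k != i) / (c - 1)
                    for i in range(c)])       # mean distance to others
    return ((sigma / n_i + penalty) / sep).sum()
```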
4.3. Self-Adaptive FCM
In this paper, the iterative trial-and-error process [16] is still used to determine the optimal number of clusters by the self-adaptive FCM (SAFCM). The pseudocode of the SAFCM algorithm is described in Algorithm 2.
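Algorithm 2 is not reproduced in the text above; the skeleton below only illustrates the trial-and-error structure it describes. For brevity, a bare-bones FCM with random initialisation and the Xie-Beni index stand in for the paper's density-based initialisation and proposed index; all names are our own.

```python
import numpy as np

def _fcm(X, c, m=2.0, iters=60, rng=None):
    # bare-bones FCM (see Section 3), random initial partition
    rng = np.random.default_rng(rng)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)
        D2 = np.fmax(((X[None] - V[:, None]) ** 2).sum(axis=2), 1e-12)
        inv = D2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=0, keepdims=True)
    return U, V

def _validity(X, U, V):
    # Xie-Beni as a stand-in for the index of Section 4.2
    c, n = U.shape
    D2 = ((X[None] - V[:, None]) ** 2).sum(axis=2)
    sep = min(((V[i] - V[k]) ** 2).sum()
              for i in range(c) for k in range(c) if i != k)
    return ((U ** 2) * D2).sum() / (n * sep)

def safcm(X, c_max, rng=0):
    """Trial-and-error skeleton: run FCM for each c in [2, c_max] and
    keep the c that minimises the validity index."""
    best_c, best_v = None, np.inf
    for c in range(2, c_max + 1):
        U, V = _fcm(X, c, rng=rng)
        v = _validity(X, U, V)
        if v < best_v:
            best_c, best_v = c, v
    return best_c
```

In the paper's actual method, c_max and the initial centroids come from the density-based algorithm of Section 4.1, which both narrows this loop and makes each FCM run deterministic.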

5. Experiment and Analysis
This paper selects 8 experimental datasets: 3 come from the UCI repository, respectively Iris, Wine, and Seeds; one dataset (SubKDD) is randomly sampled from KDD Cup 1999 and is shown in Table 1; and the remaining 4 are synthetic datasets shown in Figure 3. SubKDD includes normal records, 2 kinds of Probe attack (ipsweep and portsweep), and 3 kinds of DoS attack (neptune, smurf, and back). The first synthetic dataset (SD1) consists of 20 two-dimensional Gaussian clusters with 10 samples each, whose covariance matrices are second-order unit matrices. The structural feature of this dataset is that the distance between any two clusters is large and the number of classes is greater than √n. The second synthetic dataset (SD2) consists of 4 two-dimensional Gaussian clusters. The cluster centroids are, respectively, (5,5), (10,10), (15,15), and (20,20), each cluster containing 500 samples and sharing a common covariance matrix. The structural feature of this dataset is a short intercluster distance with a small overlap. The third synthetic dataset (SD3) has a complicated structure. The fourth synthetic dataset (SD4) is nonconvex.

(a) SD1
(b) SD2
(c) SD3
(d) SD4
In this paper, the numeric values are normalized as

x_{jk}' = \frac{x_{jk} - \bar{x}_k}{s_k},

where \bar{x}_k and s_k denote the mean of the kth attribute and its standard deviation, respectively; then

\bar{x}_k = \frac{1}{n} \sum_{j=1}^{n} x_{jk}, \qquad s_k = \sqrt{\frac{1}{n} \sum_{j=1}^{n} (x_{jk} - \bar{x}_k)^2}.
For the categorical attributes of SubKDD, simple matching is used as the dissimilarity measure, that is, 0 for identical values and 1 for different values.
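The normalization and the mixed numeric/categorical dissimilarity can be sketched as follows (a minimal sketch; the function names and the way the two parts are combined are our own assumptions):

```python
import numpy as np

def zscore(X):
    """z-score normalisation of numeric attributes, as above.
    Columns with zero standard deviation are left at zero."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / np.where(std == 0, 1.0, std)

def mixed_dissimilarity(a, b, num_idx, cat_idx):
    """Squared Euclidean distance on the numeric attributes plus
    simple matching (0 if equal, 1 if different) on the categorical
    attributes, as used for SubKDD."""
    num = sum((a[i] - b[i]) ** 2 for i in num_idx)
    cat = sum(1 for i in cat_idx if a[i] != b[i])
    return num + cat
```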
5.1. Experiment of Simulation
In [37], it was proposed that the number of clusters generated by the AP algorithm could be selected as the maximum number of clusters. So this paper estimates the value of c_max by the empirical rule, the AP algorithm, and the density-based algorithm. The specific experimental results are shown in Table 2. In the AP algorithm, the preference parameter is left at its common default (the median of the input similarities). d_c is, respectively, selected as the distance of the first 1%, 2%, 3%, and 5%.

Experimental results show that, for a dataset whose true number of clusters is greater than √n, it is obviously incorrect to let c_max = √n; in other words, the empirical rule is invalid. The number of clusters finally obtained by the AP algorithm is close to the actual number of clusters. For a dataset whose true number of clusters is smaller than √n, the number of clusters generated by the AP algorithm can be used as an appropriate c_max, but, sometimes, the value is greater than the value estimated by the empirical rule, which enlarges the search range of the optimal number of clusters. If c_max is estimated by the proposed density-based algorithm, the results in most cases are appropriate. The method is invalid only on SD1 when the cutoff distance is the first 5% distance. When the cutoff distance is selected as the first 3% distance, the c_max generated by the proposed algorithm is much smaller than that generated by the AP algorithm; it is closer to the true number of clusters and greatly narrows the search range of the optimal number of clusters. Therefore, the cutoff distance is selected as the distance of the first 3% in the later experiments.
5.2. Experiment of Influence of the Initial Cluster Centroids on the Convergence of FCM
To show that the initial cluster centroids obtained by the density-based algorithm can quicken the convergence of the clustering algorithm, the traditional FCM is adopted for verification. The number of clusters is assigned as the true value, with a fixed convergence threshold. Because the random selection of initial cluster centroids has a large influence on FCM, the experiment on each dataset is repeated 50 times, and the rounded means of the algorithmic iteration counts are compared. The specific experimental results are shown in Table 3.

As shown in Table 3, the initial cluster centroids obtained by densitybased algorithm can effectively reduce the iteration of FCM. Particularly, on SD1, the iteration of FCM is far smaller than that with randomly selected initial cluster centroids whose minimum iteration is 24 and maximum iteration reaches 63, with unstable clustering results. Therefore, the proposed method can not only effectively quicken the convergence of FCM, but also obtain a stable clustering result.
5.3. Experiment of Clustering Accuracy Based on Clustering Validity Index
For the 3 UCI datasets and SubKDD, clustering accuracy r is adopted to measure the clustering effect, defined as

r = \frac{1}{n} \sum_{i=1}^{c} a_i.

Here a_i is the number of objects that co-occur in the ith cluster and its corresponding real cluster, and n is the number of objects in the dataset. According to this measurement, the higher the clustering accuracy is, the better the clustering result of FCM is. When r = 1, the clustering result of FCM is totally accurate.
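A small sketch of this accuracy measure follows. The cluster-to-class matching is not spelled out in the text; here each predicted cluster is greedily matched to its majority class, which is our own assumption.

```python
def clustering_accuracy(pred, true, c):
    """r = (1/n) * sum_i a_i, where a_i counts the objects that
    co-occur in predicted cluster i and the class matched to it
    (majority class of the cluster; a simplifying assumption)."""
    n = len(pred)
    total = 0
    for i in range(c):
        members = [t for p, t in zip(pred, true) if p == i]
        if members:
            total += max(members.count(v) for v in set(members))
    return total / n
```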
In the experiments, the true number of clusters is assigned to each dataset and the initial cluster centroids are obtained by densitybased algorithm. Then, experimental results of clustering accuracy are shown in Table 4.

Apparently, the clustering accuracies on the datasets Wine, Seeds, and SubKDD are high, while that on the dataset Iris is relatively low, for the reason that two of its clusters are not linearly separable.
The clustering results of the proposed clustering validity index on the 4 synthetic datasets are shown in Figure 4, which depicts good partitions on the datasets SD1 and SD2 and slightly poorer partitions on SD3 and SD4 because of the complexity of their structures or their nonconvexity. When the number of clusters is set as 4, SD3 is divided into 4 groups, with the right and the left parts each containing 2 groups.
(a) SD1
(b) SD2
(c) SD3
(d) SD4
5.4. Experiment of the Optimal Number of Clusters
At last, the Xie-Beni index, the Bensaid index, the Kwon index, and the proposed index are, respectively, adopted for running SAFCM so as to determine the optimal number of clusters. The results are shown in Table 5, in which the four index columns correspond to the Xie-Beni index, the Bensaid index, the Kwon index, and the proposed index, respectively.

It shows that, for the synthetic datasets SD1 and SD2 with simple structures, these indices can all obtain the optimal number of clusters. For the 3 UCI datasets, SubKDD, SD3, and SD4, the Bensaid index cannot obtain an accurate number of clusters except on SD3. For the dataset Wine, both the Xie-Beni index and the Kwon index can obtain the accurate number of clusters, while for the datasets Iris, Seeds, SubKDD, SD3, and SD4, they only obtain results approximate to the true number of clusters. The proposed index obtains the right result on the datasets Wine and SD4, and a better result compared to the Xie-Beni index and the Kwon index on the datasets SubKDD and SD3, while it obtains the same result as those two indices on the datasets Iris and Seeds. There are two reasons. First, the importance degree of each property of the dataset is not considered in the clustering process but assumed to be the same, thereby affecting the experimental result. Second, SAFCM has a weak capacity to deal with overlapping clusters or nonconvex datasets. These are the authors' subsequent research contents.
Tables 6–13 list the detailed results of the 4 indices on the experimental datasets in each iteration of SAFCM. For the dataset Wine, when the number of clusters is 4, 5, or 6, the result based on the Xie-Beni index or the Kwon index does not converge. However, the proposed index can achieve the desired results.








6. Conclusions
FCM is widely used in many fields. But it needs a preset number of clusters and is greatly influenced by the initial cluster centroids. This paper studies a self-adaptive method for determining the number of clusters using the FCM algorithm. In this method, a density-based algorithm is put forward first, which can estimate the maximum number of clusters to reduce the search range of the optimal number, and is especially fit for datasets on which the empirical rule is inoperative. Besides, it can generate high-quality initial cluster centroids so that the clustering result is stable and the convergence of FCM is quick. Then, a new fuzzy clustering validity index is put forward based on fuzzy compactness and separation so that the clustering result is closer to the global optimum. The index remains robust and interpretable when the number of clusters tends to the number of objects in the dataset. Finally, a self-adaptive FCM algorithm is proposed to determine the optimal number of clusters, run in an iterative trial-and-error process.
The contributions are validated by experimental results. However, in most cases, each property plays a different role in the clustering process. In other words, the weight of properties is not the same. This issue will be focused on in the authors’ future work.
Competing Interests
The authors declare that they have no competing interests.
Acknowledgments
This work is partially supported by the National Natural Science Foundation of China (61373148 and 61502151), Shandong Province Natural Science Foundation (ZR2014FL010), Science Foundation of Ministry of Education of China (14YJC 860042), Shandong Province Outstanding Young Scientist Award Fund (BS2013DX033), Shandong Province Social Sciences Project (15CXWJ13 and 16CFXJ05), and Project of Shandong Province Higher Educational Science and Technology Program (no. J15LN02).
References
[1] S. Sambasivam and N. Theodosopoulos, "Advanced data clustering methods of mining web documents," Issues in Informing Science and Information Technology, vol. 3, pp. 563–579, 2006.
[2] J. Wang, S.-T. Wang, and Z.-H. Deng, "Survey on challenges in clustering analysis research," Control and Decision, vol. 3, 2012.
[3] C.-W. Bong and M. Rajeswari, "Multiobjective nature-inspired clustering and classification techniques for image segmentation," Applied Soft Computing, vol. 11, no. 4, pp. 3271–3282, 2011.
[4] A. Rodriguez and A. Laio, "Clustering by fast search and find of density peaks," Science, vol. 344, no. 6191, pp. 1492–1496, 2014.
[5] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, no. 3, pp. 338–353, 1965.
[6] J. C. Bezdek, "Numerical taxonomy with fuzzy sets," Journal of Mathematical Biology, vol. 1, no. 1, pp. 57–71, 1974.
[7] J. C. Bezdek, "Cluster validity with fuzzy sets," Journal of Cybernetics, vol. 3, no. 3, pp. 58–73, 1974.
[8] M. P. Windham, "Cluster validity for fuzzy clustering algorithms," Fuzzy Sets and Systems, vol. 5, no. 2, pp. 177–185, 1981.
[9] M. Lee, "Fuzzy cluster validity index based on object proximities defined over fuzzy partition matrices," in Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE '08), pp. 336–340, June 2008.
[10] X.-B. Zhang and L. Jiang, "A new validity index of fuzzy c-means clustering," in Proceedings of the International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC '09), vol. 2, pp. 218–221, Hangzhou, China, August 2009.
[11] I. Saha, U. Maulik, and S. Bandyopadhyay, "A new differential evolution based fuzzy clustering for automatic cluster evolution," in Proceedings of the IEEE International Advance Computing Conference (IACC '09), pp. 706–711, IEEE, Patiala, India, March 2009.
[12] S. Yue, J.-S. Wang, T. Wu, and H. Wang, "A new separation measure for improving the effectiveness of validity indices," Information Sciences, vol. 180, no. 5, pp. 748–764, 2010.
[13] X. L. Xie and G. Beni, "A validity measure for fuzzy clustering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 8, pp. 841–847, 1991.
[14] A. M. Bensaid, L. O. Hall, J. C. Bezdek et al., "Validity-guided (re)clustering with applications to image segmentation," IEEE Transactions on Fuzzy Systems, vol. 4, no. 2, pp. 112–123, 1996.
[15] S. H. Kwon, "Cluster validity index for fuzzy clustering," Electronics Letters, vol. 34, no. 22, pp. 2176–2177, 1998.
[16] H. Sun, S. Wang, and Q. Jiang, "FCM-based model selection algorithms for determining the number of clusters," Pattern Recognition, vol. 37, no. 10, pp. 2027–2037, 2004.
[17] Y. Li and F. Yu, "A new validity function for fuzzy clustering," in Proceedings of the International Conference on Computational Intelligence and Natural Computing (CINC '09), vol. 1, pp. 462–465, IEEE, June 2009.
[18] S. Saha and S. Bandyopadhyay, "A new cluster validity index based on fuzzy granulation-degranulation criteria," in Proceedings of the International Conference on Advanced Computing and Communications (ADCOM '07), pp. 353–358, IEEE, Guwahati, India, 2007.
[19] M. Zhang, W. Zhang, H. Sicotte, and P. Yang, "A new validity measure for a correlation-based fuzzy c-means clustering algorithm," in Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '09), pp. 3865–3868, Minneapolis, Minn, USA, September 2009.
[20] Y.-I. Kim, D.-W. Kim, D. Lee, and K. H. Lee, "A cluster validation index for GK cluster analysis based on relative degree of sharing," Information Sciences, vol. 168, no. 1–4, pp. 225–242, 2004.
 B. Rezaee, “A cluster validity index for fuzzy clustering,” Fuzzy Sets and Systems, vol. 161, no. 23, pp. 3014–3025, 2010. View at: Publisher Site  Google Scholar  MathSciNet
 D. Zhang, M. Ji, J. Yang, Y. Zhang, and F. Xie, “A novel cluster validity index for fuzzy clustering based on bipartite modularity,” Fuzzy Sets and Systems, vol. 253, pp. 122–137, 2014. View at: Publisher Site  Google Scholar  MathSciNet
 J. Yu and Q. Cheng, “Search range of the optimal cluster number in fuzzy clustering,” Science in China (Series E), vol. 32, no. 2, pp. 274–280, 2002. View at: Google Scholar
 B. J. Frey and D. Dueck, “Clustering by passing messages between data points,” Science, vol. 315, no. 5814, pp. 972–976, 2007. View at: Publisher Site  Google Scholar  MathSciNet
 D. Pelleg, A. W. Moore et al., “Xmeans: extending Kmeans with efficient estimation of the number of clusters,” in Proceedings of the 17th International Conference on Machine Learning (ICML '00), vol. 1, pp. 727–734, 2000. View at: Google Scholar
 H. Frigui and R. Krishnapuram, “A robust competitive clustering algorithm with applications in computer vision,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 450–465, 1999. View at: Publisher Site  Google Scholar
 M.S. Yang and K.L. Wu, “A similaritybased robust clustering method,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 4, pp. 434–448, 2004. View at: Publisher Site  Google Scholar
 M. Girolami, “Mercer kernelbased clustering in feature space,” IEEE Transactions on Neural Networks, vol. 13, no. 3, pp. 780–784, 2002. View at: Publisher Site  Google Scholar
 L. Yujian, “A clustering algorithm based on maximal θdistant subtrees,” Pattern Recognition, vol. 40, no. 5, pp. 1425–1431, 2007. View at: Publisher Site  Google Scholar
 Y. Shihong, H. Ti, and W. Penglong, “Matrix eigenvalue analysisbased clustering validity index,” Journal of Tianjin University (Science and Technology), vol. 8, article 6, 2014. View at: Google Scholar
 A. JoséGarcía and W. GómezFlores, “Automatic clustering using natureinspired metaheuristics: a survey,” Applied Soft Computing Journal, vol. 41, pp. 192–213, 2016. View at: Publisher Site  Google Scholar
 E. H. Ruspini, “New experimental results in fuzzy clustering,” Information Sciences, vol. 6, pp. 273–284, 1973. View at: Publisher Site  Google Scholar
 J. C. Dunn, “Wellseparated clusters and optimal fuzzy partitions,” Journal of Cybernetics, vol. 4, no. 1, pp. 95–104, 1974. View at: Google Scholar  MathSciNet
 J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Springer Science & Business Media, 2013.
 R. E. Hammah and J. H. Curran, “Fuzzy cluster algorithm for the automatic identification of joint sets,” International Journal of Rock Mechanics and Mining Sciences, vol. 35, no. 7, pp. 889–905, 1998. View at: Publisher Site  Google Scholar
 N. R. Pal and J. C. Bezdek, “On cluster validity for the fuzzy cmeans model,” IEEE Transactions on Fuzzy Systems, vol. 3, no. 3, pp. 370–379, 1995. View at: Publisher Site  Google Scholar
 S. B. Zhou, Z. Y. Xu, and X. Q. Tang, “Method for determining optimal number of clusters based on affinity propagation clustering,” Control and Decision, vol. 26, no. 8, pp. 1147–1152, 2011. View at: Google Scholar  MathSciNet
Copyright
Copyright © 2016 Min Ren et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.