Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2016 |Article ID 2647389 | 12 pages | https://doi.org/10.1155/2016/2647389

A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

Academic Editor: Elio Masciari
Received: 29 Jun 2016
Accepted: 17 Oct 2016
Published: 29 Nov 2016

Abstract

To address the shortcoming of the fuzzy c-means (FCM) algorithm that the number of clusters must be known in advance, this paper proposes a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm is put forward. According to the characteristics of the dataset, it automatically determines the possible maximum number of clusters instead of using the empirical rule, and it obtains high-quality initial cluster centroids, mitigating the limitation of FCM that randomly selected cluster centroids make the result converge to a local minimum. Secondly, by introducing a penalty function, this paper proposes a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensures that, when the number of clusters approaches the number of objects in the dataset, the index value neither decreases monotonically nor approaches zero, so that the index retains its robustness and decision function for determining the optimal number of clusters. Then, based on these studies, a self-adaptive FCM algorithm is put forward to estimate the optimal number of clusters by an iterative trial-and-error process. Finally, experiments are conducted on UCI, KDD Cup 1999, and synthetic datasets, which show that the method not only effectively determines the optimal number of clusters, but also reduces the number of FCM iterations while yielding stable clustering results.

1. Introduction

Cluster analysis has a long research history. Due to its advantage of learning without a priori knowledge, it is widely applied in the fields of pattern recognition, image processing, web mining, spatiotemporal database application, business intelligence, and so forth.

Clustering is often regarded as unsupervised learning and aims to partition the objects in a dataset into several natural groupings, the so-called clusters, such that objects within a cluster tend to be similar while objects belonging to different clusters are dissimilar. Generally, datasets from different application fields vary in their features, and the purposes of clustering are multifarious. Therefore, the best method of cluster analysis depends on the dataset and the purpose of use. There is no universal clustering technology that is applicable to the diverse structures presented by various datasets [1]. According to the accumulation rules of objects in clusters and the methods of applying these rules, clustering algorithms are divided into many types. However, for most clustering algorithms, including partitional clustering and hierarchical clustering, the number of clusters is a parameter that needs to be preset, and the quality of the clustering result is closely related to it. In practical applications, it usually relies on users' experience or background knowledge in related fields, but in most cases the number of clusters is unknown to users. If the number is set too large, the clustering result may become complicated and difficult to interpret. On the contrary, if it is too small, a lot of valuable information in the clustering result may be lost [2]. Thus, determining the optimal number of clusters in a dataset is still a fundamental problem in the research of cluster analysis [3].

Contributions of this paper are as follows. (1) A density-based algorithm is proposed, which can quickly and succinctly generate high-quality initial cluster centroids instead of choosing them randomly, so as to stabilize the clustering result and also quicken the convergence of the clustering algorithm. Besides, this algorithm can automatically estimate the maximum number of clusters based on the features of a dataset, thereby determining the search range for estimating the optimal number of clusters and effectively reducing the iterations of the clustering algorithm. (2) Based on the compactness within a cluster and the separation between clusters, a new fuzzy clustering validity index (CVI) is defined, which prevents the index value from approaching zero as the number of clusters tends to the number of objects, so that the optimal clustering result can be identified. (3) A self-adaptive method that iteratively uses an improved FCM algorithm to estimate the optimal number of clusters is put forward.

2. Related Work

The easiest method of determining the number of clusters is data visualization. For a dataset that can be effectively mapped to a 2-dimensional Euclidean space, the number of clusters can be intuitively acquired through the distribution graph of data points. However, for high-dimensional and complicated data, this method is unserviceable. Rodriguez and Laio proposed a clustering algorithm based on density peaks, declaring that it was able to detect nonspherical clusters and to automatically find the true number of clusters [4]. But, in fact, the number of cluster centroids still needs to be selected manually according to the decision graph. Next, relevant technologies to determine the optimal number of clusters are summarized below.

2.1. Clustering Validity Index Based Method

The clustering validity index is used to evaluate the quality of partitions of a dataset generated by a clustering algorithm. Constructing an appropriate clustering validity index is an effective way to determine the number of clusters. The idea is to assign different values to the number of clusters c within a certain range, run a fuzzy clustering algorithm on the dataset for each c, and finally evaluate the results by the clustering validity index. When the value of the clustering validity index reaches its maximum or minimum, or an obvious inflection point appears, the corresponding value of c is the optimal number of clusters c_opt. So far, researchers have put forward lots of fuzzy clustering validity indices, divided into the following two types.

(1) Clustering Validity Index Based on Fuzzy Partition. These indices reflect the view that, for a well-separated dataset, the smaller the fuzziness of the fuzzy partition is, the more reliable the clustering result is. Based on this, Zadeh, the founder of fuzzy sets, put forward the first clustering validity index, the degree of separation, in 1965 [5]. But its discriminating effect is not ideal. In 1974, Bezdek put forward the concept of the partition coefficient (PC) [6], which was the first practical generic function for measuring the validity of fuzzy clustering, and subsequently proposed another closely related clustering validity function, the partition entropy (PE) [7]. Later, Windham defined the proportion exponent by use of the maximum value of a fuzzy membership function [8]. Lee put forward a fuzzy clustering validity index using the distinguishability of clusters measured by the object proximities [9]. Based on the Shannon entropy and fuzzy variation theory, Zhang and Jiang put forward a new fuzzy clustering validity index taking account of the geometric structure of the dataset [10]. Saha et al. put forward an algorithm based on differential evolution for automatic cluster detection, which evaluated the validity of the clustering result well [11]. Yue et al. partitioned the original data space into a grid-based structure and proposed a cluster separation measure based on grid distances [12]. The clustering validity index based on fuzzy partition is only related to the fuzzy degrees of membership and has the advantages of simplicity and a small amount of calculation, but it lacks a direct relation to the structural features of the dataset.

(2) Clustering Validity Index Based on the Geometric Structure of the Dataset. These indices reflect the view that, for a well-separated dataset, every cluster should be compact and the clusters should be separated from each other as far as possible. The ratio of compactness to separation is used as the standard of clustering validity. Representative indices of this type include the Xie-Beni index [13], the Bensaid index [14], and the Kwon index [15]. Sun et al. proposed a new validity index based on a linear combination of compactness and separation, inspired by Rezaee's validity index [16]. Li and Yu defined new measures of compactness and separation and put forward a new fuzzy clustering validity function [17]. Based on fuzzy granulation-degranulation, Saha and Bandyopadhyay put forward a fuzzy clustering validity function [18]. Zhang et al. adopted the Pearson correlation to measure distance and put forward a validity function [19]. Kim et al. proposed a clustering validity index for the GK algorithm based on the average value of the relative degrees of sharing of all possible pairs of fuzzy clusters [20]. Rezaee proposed a new validity index for the GK algorithm to overcome the shortcomings of Kim's index [21]. Zhang et al. proposed a novel index, WGLI, to detect the optimal cluster number, using the global optimum membership as the global property and the modularity of a bipartite network as the local independent property [22]. The clustering validity index based on the geometric structure of the dataset considers both the fuzzy degrees of membership and the geometric structure, but its membership function is quite complicated and involves a large amount of calculation.

Based on a clustering validity index, the optimal number of clusters is determined through exhaustive search. In order to increase the efficiency of estimating the optimal number of clusters c_opt, the search range of c must be set; that is, a maximum number of clusters c_max is assigned, with 2 ≤ c ≤ c_max. Most researchers use the empirical rule c_max ≤ √n, where n is the number of data points in the dataset. For this problem, theoretical analysis and example verification were conducted in [23], which indicated that the rule is reasonable in a sense. However, this method apparently has the following disadvantages. (1) Each c in the search range must be tried in turn, which causes a huge amount of calculation. (2) For each c, it cannot be guaranteed that the clustering result is the globally optimal solution. (3) When noise and outliers exist, the reliability of the clustering validity index is weak. (4) For some datasets, like FaceImage [24], whose true number of clusters exceeds √n, the empirical rule is invalid. Research has shown that, due to the diversified data types and structures, no universal fuzzy clustering validity index is applicable to all datasets. Such research is, and will remain, urgently needed.

2.2. Heuristic Method

Some new clustering algorithms have been proposed in succession recently. The main idea is to use certain criteria to guide the clustering process, with the number of clusters being adjusted along the way. In this way, when the clustering is completed, the appropriate number of clusters is obtained as well. For example, the X-means algorithm [25], based on splitting hierarchical clustering, is representative. Contrary to the aforesaid process, RCA [26] determined the actual number of clusters by a process of competitive agglomeration. Combining the single-point iterative technology with hierarchical clustering, a similarity-based clustering method (SCM) [27] was proposed. A mercer kernel-based clustering [28] estimated the number of clusters by the eigenvectors of a kernel matrix. A clustering algorithm based on maximal θ-distant subtrees [29] detected any number of well-separated clusters with arbitrary shapes. In addition, Frey and Dueck put forward the affinity propagation (AP) clustering algorithm [24], which generates high-quality cluster centroids via message passing between objects to determine the optimal number of clusters of large-scale data. Shihong et al. proposed a Gerschgorin disk estimation-based criterion to estimate the true number of clusters [30]. José-García and Gómez-Flores presented an up-to-date review of all major nature-inspired metaheuristic algorithms used thus far for automatic clustering, determining the best estimate of the number of clusters [31]. All of these have widened the thinking behind relevant research.

3. Traditional FCM for Determining the Optimal Number of Clusters

Since Ruspini first introduced the theory of fuzzy sets into cluster analysis in 1973 [32], different fuzzy clustering algorithms have been widely discussed, developed, and applied in various areas. FCM [33] is one of the most commonly used algorithms. It was first presented by Dunn in 1974. Subsequently, Bezdek introduced a weighting exponent for the fuzzy degree of membership [34], further developing FCM. FCM divides the dataset into c fuzzy clusters. The algorithm holds that each object belongs to each cluster with some degree of membership; that is, a cluster is considered as a fuzzy subset of the dataset.

Assume that X = {x_1, x_2, …, x_n} is a finite dataset, where x_i is an l-dimensional object and x_ij is the jth property of the ith object. C = {C_1, C_2, …, C_c} denotes c clusters. V = {v_1, v_2, …, v_c} represents the c l-dimensional cluster centroids, where v_i ∈ R^l. U = [u_ij] is a c × n fuzzy partition matrix, and u_ij is the degree of membership of the jth object in the ith cluster, where Σ_{i=1}^{c} u_ij = 1 for every j. The objective function is the quadratic sum of the weighted distances from the samples to the cluster centroid in each cluster; that is,

J_m(U, V) = Σ_{i=1}^{c} Σ_{j=1}^{n} u_ij^m d_ij^2,  (1)

where d_ij = ‖x_j − v_i‖ is the Euclidean distance between the jth object and the ith cluster centroid, and m (m > 1) is a fuzziness index controlling the fuzziness of the memberships. As the value of m becomes progressively higher, the resulting memberships become fuzzier [35]. Pal and Bezdek advised that m should be between 1.5 and 2.5, and m = 2 is usually taken if there is no special requirement [36].

According to the clustering criterion, an appropriate fuzzy partition matrix U and cluster centroids V are obtained to minimize the objective function J_m. Based on the Lagrange multiplier method, u_ij and v_i are, respectively, calculated by the formulas below:

u_ij = 1 / Σ_{k=1}^{c} (d_ij / d_kj)^{2/(m−1)},  (2)

v_i = (Σ_{j=1}^{n} u_ij^m x_j) / (Σ_{j=1}^{n} u_ij^m).  (3)

The FCM algorithm is carried out through an iterative process of minimizing the objective function J_m, with updates of U and V. The specific steps are as follows.

Step 1. Assign the initial value of the number of clusters c, the fuzziness index m, the maximum number of iterations T_max, and the threshold ε.

Step 2. Initialize the fuzzy partition matrix U^(0) randomly, subject to the constraints on the degrees of membership.

Step 3. At the tth step, calculate the cluster centroids V^(t) according to (3).

Step 4. Calculate the objective function J_m^(t) according to (1). If |J_m^(t) − J_m^(t−1)| < ε or t > T_max, then stop; otherwise continue to Step 5.

Step 5. Calculate U^(t+1) according to (2) and return to Step 3.

At last, each object can be assigned to the cluster in which it has the maximum degree of membership. The advantages of FCM may be summarized as a simple algorithm, quick convergence, and easiness of extension. Its disadvantages lie in the selection of the initial cluster centroids, the sensitivity to noise and outliers, and the setting of the number of clusters, all of which have a great impact on the clustering result. As the random selection of initial cluster centroids cannot ensure that FCM converges to an optimal solution, different initial cluster centroids are used for multiple runs of FCM, or else they are determined by other fast algorithms.
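The iterative steps above can be sketched in NumPy. This is an illustrative implementation, not the authors' code; the function name and the defaults (m = 2, threshold 1e-5) are our assumptions:

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, eps=1e-5, V0=None, seed=0):
    """Minimal FCM sketch: returns memberships U (c x n), centroids V, objective J."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    if V0 is None:
        V = X[rng.choice(len(X), size=c, replace=False)]  # random initial centroids
    else:
        V = np.asarray(V0, dtype=float)                   # externally supplied centroids
    J_prev = np.inf
    for _ in range(max_iter):
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=-1)  # d_ij^2, shape (c, n)
        d2 = np.fmax(d2, 1e-12)                                   # avoid division by zero
        w = d2 ** (-1.0 / (m - 1.0))
        U = w / w.sum(axis=0)                                     # membership update, cf. Eq. (2)
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)              # centroid update, cf. Eq. (3)
        J = (Um * d2).sum()                                       # objective J_m, cf. Eq. (1)
        if abs(J_prev - J) < eps:                                 # Step 4 stopping rule
            break
        J_prev = J
    return U, V, J
```

Passing `V0` seeds the run with precomputed centroids, which is exactly where a density-based initialization would plug in.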

The traditional method to determine the optimal number of clusters with FCM is to set the search range of the number of clusters, run FCM to generate a clustering result for each candidate number of clusters, select an appropriate clustering validity index to evaluate these results, and finally obtain the optimal number of clusters according to the evaluation. The method is composed of the following steps.

Step 1. Input the search range [c_min, c_max]; generally, c_min = 2 and c_max = ⌊√n⌋.

Step 2. For each integer c ∈ [c_min, c_max]:
Step 2.1. Run FCM.
Step 2.2. Calculate the clustering validity index.

Step 3. Compare all values of the clustering validity index. The c corresponding to the maximum or minimum value is the optimal number of clusters c_opt.

Step 4. Output c_opt, the optimal value of the clustering validity index, and the clustering result.

4. The Proposed Methods

4.1. Density-Based Algorithm

Considering the large influence of the randomly selected initial cluster centroid on the clustering result, a density-based algorithm is proposed to select the initial cluster centroids, and the maximum number of clusters can be estimated at the same time. Some related terms will be defined at first.

Definition 1 (local density). The local density ρ_i of an object x_i is defined as

ρ_i = Σ_{j ≠ i} χ(d_ij − d_c), with χ(x) = 1 if x < 0 and χ(x) = 0 otherwise,

where d_ij is the distance between the two objects x_i and x_j and d_c is a cutoff distance. The recommended approach is to sort the distances between all pairs of objects in ascending order and then assign d_c the value corresponding to the first p% of the sorted distances (appropriately, p between 1 and 5). Thus, the more objects there are whose distance from x_i is less than d_c, the bigger the value of ρ_i is. The cutoff distance finally decides the number of initial cluster centroids, namely, the maximum number of clusters. The density-based algorithm is robust with respect to the choice of d_c.

Definition 2 (core point). Assume that there is an object x_i. If ρ_i ≥ ρ_j for every object x_j with d_ij ≤ d_c, then x_i is a core point.

Definition 3 (directly density-reachable). Assume that there are two objects x_i and x_j. If their distance d_ij ≤ d_c, then x_i is directly density-reachable to x_j and vice versa.

Definition 4 (density-reachable). Assume that X is a dataset and the objects p_1, p_2, …, p_k belong to it, with p_1 = x_i and p_k = x_j. If, for each t, p_{t+1} is directly density-reachable to p_t, then x_i is density-reachable to x_j.

Definition 5 (neighbor). The neighbors of an object are those objects that are directly density-reachable or density-reachable to it, denoted as Neighbor(x_i).

Definition 6 (border point). An object is called a border point if it has no neighbors.

The example is shown in Figure 1.

The selection principle of the initial cluster centroids in the density-based algorithm is that, usually, a cluster centroid is an object with a higher local density, is surrounded by neighbors with a lower local density than its own, and has a relatively large distance from other cluster centroids [4]. The density-based algorithm can automatically select the initial cluster centroids and determine the maximum number of clusters according to local density. The pseudocode is shown in Algorithm 1. The obtained cluster centroids are sorted in descending order of local density. Figure 2 demonstrates the process of the density-based algorithm.

Input: Dataset X and cutoff percentage p
Output: Initial cluster centroids V and maximum number of clusters c_max
(1) D = dist(X);                // Calculate the distance between every two objects.
(2) dc = cutoffdis(D, p);       // Calculate the cutoff distance dc.
(3) rho = localdensity(D, dc);  // Calculate the local density of each object.
(4) index = argsort(rho, descend);  // The numbers of the objects corresponding to sorted rho.
(5) c_max = 0;
(6) label(i) = −1 for i = 1, …, n;  // The cluster number that each object belongs to, with initial value −1.
(7) for i = 1 to n do
(8)   if label(index(i)) = −1 and |Neighbor(x_index(i))| > 1 then
        // x_index(i) does not belong to any cluster and the number of its neighbors is greater than 1.
(9)     c_max = c_max + 1;
(10)    V = V ∪ {x_index(i)};
(11)    for each x_j in Neighbor(x_index(i)) do
(12)      label(j) = c_max;
(13) return V, c_max
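The initialization can be sketched compactly in NumPy. This is one illustrative reading of the algorithm, with assumed function names; `p` plays the role of the cutoff percentage:

```python
import numpy as np

def density_init(X, p=0.03):
    """Sketch: pick high-density objects as initial centroids; their count is c_max."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))  # pairwise distances
    pair = np.sort(D[np.triu_indices(n, k=1)])           # ascending pairwise distances
    dc = pair[max(int(p * len(pair)) - 1, 0)]            # cutoff distance d_c at first p%
    rho = (D < dc).sum(axis=1) - 1                       # local density, self excluded
    label = np.full(n, -1)                               # cluster of each object, -1 = none
    centroids = []
    for i in np.argsort(-rho):                           # objects by descending density
        neigh = np.flatnonzero(D[i] <= dc)
        if label[i] == -1 and len(neigh) > 1:            # unassigned object with neighbors
            centroids.append(X[i])
            free = neigh[label[neigh] == -1]
            label[free] = len(centroids) - 1             # claim its unassigned neighbors
    return np.array(centroids), len(centroids)           # initial V and c_max
```

Because centroids are appended in descending density order, a caller can simply take the first kn of them when trying kn clusters.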
4.2. Fuzzy Clustering Validity Index

Firstly, based on the geometric structure of the dataset, Xie and Beni put forward the Xie-Beni fuzzy clustering validity index [13], which tried to find a balance point between the fuzzy compactness and the separation so as to acquire the optimal clustering result. The index is defined as

V_XB(U, V) = (Σ_{i=1}^{c} Σ_{j=1}^{n} u_ij^2 ‖x_j − v_i‖^2) / (n · min_{i≠k} ‖v_i − v_k‖^2).

In the formula, the numerator is the average distance from various objects to centroids, used to measure the compactness, and the denominator is the minimum distance between any two centroids, measuring the separation.

However, Bensaid et al. found that the size of each cluster had a large influence on the Xie-Beni index and put forward a new index [14] that is insensitive to the number of objects in each cluster. The Bensaid index is defined as

V_B(U, V) = Σ_{i=1}^{c} [ (Σ_{j=1}^{n} u_ij^m ‖x_j − v_i‖^2) / (n_i · Σ_{k=1}^{c} ‖v_k − v_i‖^2) ],

where n_i is the fuzzy cardinality of the ith cluster, defined as

n_i = Σ_{j=1}^{n} u_ij.

The term Σ_{j=1}^{n} u_ij^m ‖x_j − v_i‖^2 shows the variation of the ith fuzzy cluster. The compactness of the ith cluster is then computed as this variation divided by its fuzzy cardinality n_i.

The term Σ_{k=1}^{c} ‖v_k − v_i‖^2 denotes the separation of the ith fuzzy cluster, defined as the sum of the distances from its cluster centroid to the centroids of the other clusters.

In this respect, the index behaves like the Xie-Beni index: when c → n, the index value monotonically decreases toward 0, and it loses its robustness and judgment function for determining the optimal number of clusters. Thus, this paper improves the Bensaid index and proposes a new index S:

S = Σ_{i=1}^{c} [ (Σ_{j=1}^{n} u_ij^m ‖x_j − v_i‖^2) / n_i + ‖v_i − v̄‖^2 ] / [ (1/(c−1)) Σ_{k=1, k≠i}^{c} ‖v_k − v_i‖^2 ], with v̄ = (1/c) Σ_{i=1}^{c} v_i.

The numerator represents the compactness of the ith cluster, where n_i is its fuzzy cardinality. Its second item, an introduced punishing function, denotes the distance from the centroid of the ith cluster to the average of all cluster centroids, which eliminates the monotonically decreasing tendency as the number of clusters increases to n. The denominator represents the mean distance from the ith cluster centroid to the other cluster centroids, which is used for measuring the separation. The ratio of the numerator to the denominator represents the clustering effect of the ith cluster. The clustering validity index is defined as the sum of the clustering effects (the ratios) of all clusters. Obviously, the smaller the value of S is, the better the clustering effect of the dataset is, and the c corresponding to the minimum value of S is the optimal number of clusters.
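Under one plausible reading of this description (per-cluster compactness plus the centroid-mean penalty, divided by the mean separation; the exact weighting is our assumption, not the paper's verbatim equation), the index can be sketched as:

```python
import numpy as np

def validity_index(U, V, X, m=2.0):
    """Sketch of the proposed index. Requires c >= 2; U is (c, n), V is (c, l), X is (n, l)."""
    U, V, X = np.asarray(U, float), np.asarray(V, float), np.asarray(X, float)
    c = len(V)
    vbar = V.mean(axis=0)                                    # average of all centroids
    d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=-1) # ||x_j - v_i||^2
    Um = U ** m
    s = 0.0
    for i in range(c):
        n_i = U[i].sum()                                     # fuzzy cardinality of cluster i
        compact = (Um[i] * d2[i]).sum() / n_i                # compactness of cluster i
        penalty = ((V[i] - vbar) ** 2).sum()                 # punishing function
        sep = sum(((V[k] - V[i]) ** 2).sum() for k in range(c) if k != i) / (c - 1)
        s += (compact + penalty) / sep                       # clustering effect of cluster i
    return s                                                 # smaller is better
```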

4.3. Self-Adaptive FCM

In this paper, the iterative trial-and-error process [16] is still used to determine the optimal number of clusters, via the self-adaptive FCM (SAFCM). The pseudocode of the SAFCM algorithm is described in Algorithm 2.

Input: Dataset X, maximum number of clusters c_max, initial cluster centroids V, and threshold ε
Output: c_opt, the best value of the clustering validity index, and the clustering result
(1) for kn = 2 to c_max do
(2)   t = 0;                     // The current iteration.
(3)   V^(0) = V(1 : kn);         // The first kn cluster centroids of V.
(4)   compute U^(0) from V^(0);
(5)   while |J^(t) − J^(t−1)| ≥ ε do
(6)     update U^(t);
(7)     update V^(t);
(8)     update J^(t);
(9)     t = t + 1;
(10)  S(kn) = validity(U, V, X);  // Clustering validity index for kn clusters.
(11) S_best = min(S);             // The best value of the clustering validity index.
(12) c_opt = argmin(S);
(13) return c_opt, S_best, and the corresponding clustering result
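The trial-and-error loop can be sketched in a self-contained way, with the FCM iteration and the index computation inlined. Every name here is ours, and the index follows the reading of Section 4.2's description rather than a verbatim equation from the paper:

```python
import numpy as np

def safcm(X, c_max, V0, m=2.0, max_iter=100, eps=1e-5):
    """Try kn = 2..c_max clusters, seed FCM with the first kn density-ranked
    centroids, score each partition, and keep the best."""
    X = np.asarray(X, dtype=float)
    best = (None, np.inf, None)                    # (kn, index value, hard labels)
    for kn in range(2, c_max + 1):
        V = np.asarray(V0[:kn], dtype=float)       # first kn initial centroids
        J_prev = np.inf
        for _ in range(max_iter):                  # standard FCM iteration
            d2 = np.fmax(((X[None] - V[:, None]) ** 2).sum(-1), 1e-12)
            w = d2 ** (-1.0 / (m - 1.0))
            U = w / w.sum(axis=0)
            Um = U ** m
            V = (Um @ X) / Um.sum(axis=1, keepdims=True)
            J = (Um * d2).sum()
            if abs(J_prev - J) < eps:
                break
            J_prev = J
        d2 = np.fmax(((X[None] - V[:, None]) ** 2).sum(-1), 1e-12)  # refresh for final V
        w = d2 ** (-1.0 / (m - 1.0))
        U = w / w.sum(axis=0)
        Um = U ** m
        vbar = V.mean(axis=0)                      # score the partition with the index
        s = 0.0
        for i in range(kn):
            compact = (Um[i] * d2[i]).sum() / U[i].sum()
            penalty = ((V[i] - vbar) ** 2).sum()
            sep = sum(((V[k] - V[i]) ** 2).sum() for k in range(kn) if k != i) / (kn - 1)
            s += (compact + penalty) / sep
        if s < best[1]:                            # keep the minimum index value
            best = (kn, s, U.argmax(axis=0))
    return best
```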

5. Experiment and Analysis

This paper selects 8 experimental datasets. Three come from the UCI repository, namely Iris, Wine, and Seeds; one dataset (SubKDD) is randomly selected from KDD Cup 1999 and is shown in Table 1; the remaining 4 are synthetic datasets shown in Figure 3. SubKDD includes normal records, 2 kinds of Probe attack (ipsweep and portsweep), and 3 kinds of DoS attack (neptune, smurf, and back). The first synthetic dataset (SD1) consists of 20 two-dimensional Gaussian clusters, each with 10 samples; their covariance matrices are second-order unit matrices. The structural feature of this dataset is that the distance between any two clusters is large and the number of clusters is greater than √n. The second synthetic dataset (SD2) consists of 4 two-dimensional Gaussian clusters, with cluster centroids at (5,5), (10,10), (15,15), and (20,20), each cluster containing 500 samples. The structural feature of this dataset is a short intercluster distance with a small overlap. The third synthetic dataset (SD3) has a complicated structure. The fourth synthetic dataset (SD4) is nonconvex.


Table 1: Composition of SubKDD.

Attack behavior | Number of samples
normal | 200
ipsweep | 50
portsweep | 50
neptune | 200
smurf | 300
back | 50

In this paper, the numeric values are normalized as

x'_ij = (x_ij − x̄_j) / s_j,

where x̄_j and s_j denote the mean of the jth attribute value and its standard deviation, respectively; then

x̄_j = (1/n) Σ_{i=1}^{n} x_ij,  s_j = sqrt((1/n) Σ_{i=1}^{n} (x_ij − x̄_j)^2).

For the categorical attribute of SubKDD, the simple matching is used for the dissimilarity measure, that is, 0 for identical values and 1 for different values.
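Both preprocessing rules are short enough to sketch directly (function names are ours):

```python
import numpy as np

def zscore(X):
    """x'_ij = (x_ij - mean_j) / std_j, per numeric attribute j."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    sd = X.std(axis=0)
    sd = np.where(sd == 0, 1.0, sd)    # guard against constant attributes
    return (X - mu) / sd

def simple_matching(a, b):
    """Categorical dissimilarity: 0 for identical values, 1 for different ones, summed."""
    return sum(int(x != y) for x, y in zip(a, b))
```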

5.1. Experiment of Simulation

In [37], it was proposed that the number of clusters generated by the AP algorithm could be selected as the maximum number of clusters. So this paper estimates the value of c_max by the empirical rule, the AP algorithm, and the density-based algorithm. The specific experimental results are shown in Table 2. The cutoff distance d_c is, respectively, selected as the distance of the first 1%, 2%, 3%, and 5%.


Dataset | n | True c | ⌊√n⌋ | AP | d_c: 1% | 2% | 3% | 5%

Iris | 150 | 3 | 12 | 9 | 20 | 14 | 9 | 6
Wine | 178 | 3 | 13 | 15 | 14 | 7 | 6 | 3
Seeds | 210 | 3 | 14 | 13 | 18 | 12 | 5 | 2
SubKDD | 1050 | 6 | 32 | 24 | 21 | 17 | 10 | 7
SD1 | 200 | 20 | 14 | 19 | 38 | 22 | 20 |
SD2 | 2000 | 4 | 44 | 25 | 16 | 3 | 4 | 2
SD3 | 885 | 3 | 29 | 27 | 24 | 19 | 5 | 3
SD4 | 947 | 3 | 30 | 31 | 23 | 13 | 8 | 4

The experimental results show that, for a dataset whose true number of clusters is greater than √n, it is obviously incorrect to let c_max = ⌊√n⌋; in other words, the empirical rule is invalid. The number of clusters obtained by the AP algorithm is close to the actual number of clusters. For a dataset whose true number of clusters is smaller than √n, the number of clusters generated by the AP algorithm can be used as an appropriate c_max, but sometimes this value is greater than the value estimated by the empirical rule, which enlarges the search range of the optimal number of clusters. If c_max is estimated by the proposed density-based algorithm, the results are appropriate in most cases. The method is invalid only on SD1 when the cutoff distance is the first 5% distance. When the cutoff distance is selected as the first 3% distance, the c_max generated by the proposed algorithm is in most cases much smaller than ⌊√n⌋; it is closer to the true number of clusters and greatly narrows the search range of the optimal number of clusters. Therefore, the cutoff distance is selected as the distance of the first 3% in the later experiments.

5.2. Experiment of Influence of the Initial Cluster Centroids on the Convergence of FCM

To show that the initial cluster centroids obtained by the density-based algorithm can quicken the convergence of the clustering algorithm, the traditional FCM is adopted for verification. The number of clusters is assigned its true value, and the convergence threshold ε is held fixed. Because the random selection of initial cluster centroids has a large influence on FCM, the experiment on each dataset is repeated 50 times, and the rounded means of the iteration counts are compared. The specific experimental results are shown in Table 3.


Dataset | Method 1 (random centroids) | Method 2 (density-based centroids)

Iris | 21 | 16
Wine | 27 | 18
Seeds | 19 | 16
SubKDD | 31 | 23
SD1 | 38 | 14
SD2 | 18 | 12
SD3 | 30 | 22
SD4 | 26 | 21

As shown in Table 3, the initial cluster centroids obtained by density-based algorithm can effectively reduce the iteration of FCM. Particularly, on SD1, the iteration of FCM is far smaller than that with randomly selected initial cluster centroids whose minimum iteration is 24 and maximum iteration reaches 63, with unstable clustering results. Therefore, the proposed method can not only effectively quicken the convergence of FCM, but also obtain a stable clustering result.

5.3. Experiment of Clustering Accuracy Based on Clustering Validity Index

For the 3 UCI datasets and SubKDD, the clustering accuracy r is adopted to measure the clustering effect, defined as

r = (1/n) Σ_{i=1}^{c} max_j n_ij.

Here n_ij is the number of objects that co-occur in the ith cluster and the jth real cluster, and n is the number of objects in the dataset. According to this measure, the higher the clustering accuracy is, the better the clustering result of FCM is. When r = 1, the clustering result of FCM is totally accurate.
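This accuracy measure can be computed directly; a short sketch (the function name is ours):

```python
import numpy as np

def clustering_accuracy(labels, truth):
    """r = (1/n) * sum over found clusters of the largest overlap n_ij
    with any real cluster j."""
    labels = np.asarray(labels)
    truth = np.asarray(truth)
    total = 0
    for c in np.unique(labels):
        _, counts = np.unique(truth[labels == c], return_counts=True)
        total += counts.max()          # max_j n_ij for this found cluster
    return total / len(labels)
```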

In the experiments, the true number of clusters is assigned to each dataset and the initial cluster centroids are obtained by density-based algorithm. Then, experimental results of clustering accuracy are shown in Table 4.


Dataset | Iris | Wine | Seeds | SubKDD

Clustering accuracy | 84.00% | 96.63% | 91.90% | 94.35%

Apparently, the clustering accuracies on the datasets Wine, Seeds, and SubKDD are high, while that on the dataset Iris is relatively low, for the reason that two of its clusters are not linearly separable.

The clustering results of the proposed clustering validity index on the 4 synthetic datasets are shown in Figure 4, which depicts good partitions on the datasets SD1 and SD2 and slightly poorer partitions on SD3 and SD4 because of their structural complexity or nonconvexity. When the number of clusters is set to 4, SD3 is divided into 4 groups; that is, the right part and the left part each contain 2 groups.

5.4. Experiment of the Optimal Number of Clusters

At last, the Xie-Beni index, the Bensaid index, the Kwon index, and the proposed index are, respectively, adopted in runs of SAFCM so as to determine the optimal number of clusters. The results are shown in Table 5, where S_XB, S_B, S_K, and S_new represent the Xie-Beni index, the Bensaid index, the Kwon index, and the proposed index, respectively.


Dataset | S_XB | S_B | S_K | S_new

Iris | 2 | 9 | 2 | 2
Wine | 3 | 6 | 3 | 3
Seeds | 2 | 5 | 2 | 2
SubKDD | 10 | 10 | 9 | 4
SD1 | 20 | 20 | 20 | 20
SD2 | 4 | 4 | 4 | 4
SD3 | 5 | 3 | 5 | 4
SD4 | 2 | 8 | 2 | 3

The table shows that, for the synthetic datasets SD1 and SD2 with simple structure, all of these indices obtain the optimal number of clusters. For the 3 UCI datasets, SubKDD, and SD4, the Bensaid index cannot obtain an accurate number of clusters; SD3 is the only dataset on which it succeeds. For the dataset Wine, both the Xie-Beni index and the Kwon index obtain the accurate number of clusters, while for the datasets Iris, Seeds, SubKDD, SD3, and SD4 they obtain only results approximating the true number of clusters. The proposed index obtains the right result on the datasets Wine and SD4, a better result than the Xie-Beni and Kwon indices on the datasets SubKDD and SD3, and the same result as those two indices on the datasets Iris and Seeds. There are two reasons. First, the importance of each property of the dataset is not considered in the clustering process but assumed to be equal, thereby affecting the experimental result. Second, SAFCM has a weak capacity to deal with overlapping clusters or nonconvex datasets. These are the authors' subsequent research topics.

Tables 6–13 list the detailed results of the 4 indices on the experimental datasets in each iteration of SAFCM. For the dataset Wine, when the number of clusters is 4, 5, or 6, the results based on the Xie-Beni index and the Kwon index do not converge. However, the proposed index achieves the desired results.



Table 6: Index values on Iris for each number of clusters c.

c | S_XB | S_B | S_K | S_new
2 | 0.114234 | 0.223604 | 17.384973 | 0.473604
3 | 0.223973 | 0.124598 | 34.572146 | 0.551565
4 | 0.316742 | 0.099103 | 49.279488 | 0.615436
5 | 0.560109 | 0.089108 | 87.540968 | 0.676350
6 | 0.574475 | 0.072201 | 90.563379 | 0.691340
7 | 0.400071 | 0.067005 | 63.328679 | 0.735311
8 | 0.275682 | 0.036283 | 45.736972 | 0.614055
9 | 0.250971 | 0.027735 | 42.868449 | 0.584244



Table 7: Index values on Wine ("-" denotes no convergence).

c | S_XB | S_B | S_K | S_new
2 | 0.663406 | 1.328291 | 118.33902 | 1.578291
3 | 0.468902 | 0.513071 | 83.939058 | 1.346421
4 | - | 0.473254 | - | 1.735791
5 | - | 0.373668 | - | 1.846686
6 | - | 0.256698 | - | 1.683222



Table 8: Index values on Seeds.

c | S_XB | S_B | S_K | S_new
2 | 0.147247 | 0.293609 | 31.170349 | 0.543609
3 | 0.212127 | 0.150899 | 45.326216 | 0.599001
4 | 0.243483 | 0.127720 | 52.215334 | 0.697943
5 | 0.348842 | 0.085477 | 75.493654 | 0.701153



Table 9: Index values on SubKDD.

c | S_XB | S_B | S_K | S_new
2 | 0.646989 | 1.324434 | 550.166676 | 1.574431
3 | 0.260755 | 0.378775 | 222.020838 | 1.090289
4 | 0.133843 | 0.062126 | 119.544560 | 0.511238
5 | 0.234402 | 0.052499 | 202.641204 | 0.537852
6 | 0.180728 | 0.054938 | 156.812271 | 0.583800
7 | 0.134636 | 0.047514 | 119.029265 | 0.619720
8 | 0.104511 | 0.032849 | 91.985274 | 0.690873
9 | 0.129721 | 0.027639 | 74.012847 | 0.562636
10 | 0.068220 | 0.027025 | 91.352856 | 0.528528



Table 10: Index values on SD1.

c | S_XB | S_B | S_K | S_new
2 | 0.221693 | 0.443390 | 44.592968 | 0.693390
3 | 0.206035 | 0.198853 | 40.264251 | 0.726245
4 | 0.127731 | 0.093653 | 26.220200 | 0.655550
5 | 0.130781 | 0.069848 | 27.154867 | 0.651465
6 | 0.144894 | 0.050067 | 22.922121 | 0.639325
7 | 0.136562 | 0.040275 | 29.126152 | 0.636258
8 | 0.112480 | 0.032874 | 24.323625 | 0.627442
9 | 0.115090 | 0.026833 | 24.242580 | 0.624936
10 | 0.141415 | 0.022611 | 28.574579 | 0.616701
11 | 0.126680 | 0.019256 | 28.821707 | 0.611524
12 | 0.103178 | 0.016634 | 23.931865 | 0.605990
13 | 0.110355 | 0.013253 | 26.517065 | 0.588246
14 | 0.095513 | 0.011083 | 23.635022 | 0.576808
15 | 0.075928 | 0.009817 | 19.302095 | 0.562289
16 | 0.066025 | 0.008824 | 17.236138 | 0.557990
17 | 0.054314 | 0.007248 | 14.995284 | 0.544341
18 | 0.045398 | 0.006090 | 13.208810 | 0.534882
19 | 0.039492 | 0.005365 | 11.977437 | 0.527131
20 | 0.034598 | 0.004917 | 10.835952 | 0.517481



Table 11: Index values on SD2.

c | S_XB | S_B | S_K | S_new
2 | 0.066286 | 0.132572 | 132.81503 | 0.382572
3 | 0.068242 | 0.063751 | 137.52535 | 0.394200
4 | 0.060916 | 0.028149 | 123.09967 | 0.378337



Table 12: Index values on SD3.

c | S_XB | S_B | S_K | S_new
2 | 0.148379 | 0.300899 | 131.557269 | 0.570876
3 | 0.195663 | 0.142149 | 173.900551 | 0.599680
4 | 0.127512 | 0.142150 | 113.748947 | 0.557198
5 | 0.126561 | 0.738070 | 113.262975 | 0.589535



Table 13: Index values on SD4.

c | S_XB | S_B | S_K | S_new
2 | 0.104434 | 0.208832 | 99.148271 | 0.473748
3 | 0.170326 | 0.142561 | 162.044450 | 0.458832
4 | 0.221884 | 0.081007 | 211.692699 | 0.583529
5 | 0.156253 | 0.053094 | 157.683921 | 0.603211
6 | 0.123191 | 0.041799 | 118.279116 | 0.575396
7 | 0.165465 | 0.032411 | 107.210082 | 0.592625
8 | 0.145164 | 0.028698 | 139.310969 | 0.606049

6. Conclusions

FCM is widely used in many fields, but it needs the number of clusters to be preset and is greatly influenced by the initial cluster centroids. This paper studies a self-adaptive method for determining the number of clusters using the FCM algorithm. In this method, a density-based algorithm is put forward first, which can estimate the maximum number of clusters to reduce the search range of the optimal number, and is especially fit for datasets on which the empirical rule is inoperative. Besides, it can generate high-quality initial cluster centroids so that the clustering result is stable and the convergence of FCM is quick. Then, a new fuzzy clustering validity index is put forward based on fuzzy compactness and separation so that the clustering result is closer to the global optimum. The index remains robust and interpretable even when the number of clusters tends to the number of objects in the dataset. Finally, a self-adaptive FCM algorithm is proposed to determine the optimal number of clusters through an iterative trial-and-error process.

The contributions are validated by the experimental results. However, in most cases each property plays a different role in the clustering process; in other words, properties should not be weighted equally. This issue will be addressed in the authors' future work.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (61373148 and 61502151), Shandong Province Natural Science Foundation (ZR2014FL010), Science Foundation of Ministry of Education of China (14YJC 860042), Shandong Province Outstanding Young Scientist Award Fund (BS2013DX033), Shandong Province Social Sciences Project (15CXWJ13 and 16CFXJ05), and Project of Shandong Province Higher Educational Science and Technology Program (no. J15LN02).


Copyright © 2016 Min Ren et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

