Abstract

Because of its effectiveness in mitigating the curse of dimensionality in big data, random projection has recently become a popular method for dimensionality reduction. In this paper, a theoretical analysis of the influence of random projection on the variability of the data set and on the dependence among dimensions is presented. Together with the theoretical analysis, a new fuzzy c-means (FCM) clustering algorithm with random projection is proposed. Empirical results verify that the new algorithm not only preserves the accuracy of the original FCM clustering, but also is more efficient than the original clustering and clustering with singular value decomposition. In addition, a new cluster ensemble approach based on FCM clustering with random projection is proposed. The new aggregation method efficiently computes the spectral embedding of the data with a cluster-center-based representation and scales linearly with the data size. Experimental results reveal the efficiency, effectiveness, and robustness of our algorithm compared with state-of-the-art methods.

1. Introduction

With the rapid development of the mobile Internet, cloud computing, the Internet of Things, social network services, and other emerging services, data has recently been growing at an explosive rate. How to analyze data quickly and effectively, and thereby maximize the value extracted from it, has become a focus of attention. The “four Vs” model [1] of big data, variety, volume, velocity, and value, has made traditional methods of data analysis inapplicable. Therefore, new techniques for big data analysis, such as distributed or parallelized computation [2, 3], feature extraction [4, 5], and sampling [6], have attracted wide attention.

Clustering is an essential method of data analysis through which the original data set can be partitioned into several data subsets according to similarities of data points. It has become an underlying tool for outlier detection [7], biology [8], indexing [9], and so on. In the context of fuzzy clustering analysis, each object in the data set no longer belongs to a single group but may belong to any group. The degree of an object belonging to a group is denoted by a value in [0, 1]. Among various methods of fuzzy clustering, fuzzy c-means (FCM) [10] clustering has received particular attention for its special features. In recent years, based on different sampling and extension methods, many modified FCM algorithms [11–13] designed for big data analysis have been proposed. However, these algorithms are inefficient for high dimensional data, since they do not take the problem of the “curse of dimensionality” into account.

In 1984, Johnson and Lindenstrauss [14] used the projection generated by a random orthogonal matrix to reduce the dimensionality of data. This method preserves pairwise distances of the points within a factor of (1 ± ε). Subsequently, [15] stated that such a projection could be produced by a random Gaussian matrix. Moreover, Achlioptas showed that even a projection from a random scaled sign matrix satisfies the property of preserving pairwise distances [16]. These results laid the theoretical foundation for applying random projection to clustering analysis based on pairwise distances. Recently, Boutsidis et al. [17] designed a provably accurate dimensionality reduction method for k-means clustering based on random projection. Since the method above was analyzed for crisp partitions, the effect of random projection on the FCM clustering algorithm is still unknown.

As it can combine multiple base clustering solutions of the same object set into a single consensus solution, cluster ensemble has many attractive properties such as improved quality of solution, robust clustering, and knowledge reuse [18]. Ensemble approaches of fuzzy clustering with random projection have been proposed in [19–21]. These methods are all based on multiple random projections of the original data set, whose fuzzy clustering results are then integrated. Reference [21] pointed out that their method used less memory and ran faster than the ones of [19, 20]. However, to obtain a crisp partition solution, their method still needs to compute and store the product of membership matrices, which requires time and space quadratic in the data size.

Our Contribution. In this paper, our contributions can be divided into two parts: one is the analysis of the impact of random projection on FCM clustering; the other is a cluster ensemble method with random projection that is more efficient, more robust, and suitable for a wider range of geometrical data sets. Concretely, the contributions are as follows:
(i) We theoretically show that random projection preserves the entire variability of the data and prove the effectiveness of random projection for dimensionality reduction from the linear independence of the dimensions of the projected data. Together with the property of preserving pairwise distances of points, we obtain a modified FCM clustering algorithm with random projection. The accuracy and efficiency of the modified algorithm have been verified through experiments on both synthetic and real data sets.
(ii) We propose a new cluster ensemble algorithm for FCM clustering with random projection which obtains the spectral embedding efficiently through singular value decomposition (SVD) of the concatenation of membership matrices. The new method avoids the construction of a similarity or distance matrix, so it is more efficient and space-saving than the method in [21] with respect to crisp partitions and the methods in [19, 20] for large scale data sets. In addition, the improvements in robustness and efficiency of our approach are also verified by the experimental results on both synthetic and real data sets. At the same time, our algorithm is not only as accurate as the existing ones on the Gaussian mixture data set, but also markedly more accurate than the existing ones on the real data set, which indicates that our approach is suitable for a wider range of data sets.

2. Preliminaries

In this section, we present some notations used throughout this paper, introduce the FCM clustering algorithm, and give some traditional cluster ensemble methods using random projection.

2.1. Matrix Notations

We use A ∈ R^{n×d} to denote the data matrix, A_i to denote the ith row vector of A (the ith point), and A_{ij} to denote the (i, j)th element of A. E[X] means the expectation of a random variable X and P(E) denotes the probability of an event E. Let Cov(X, Y) be the covariance of random variables X and Y; let Var(X) be the variance of a random variable X.

We denote the trace of a square matrix M by tr(M) = Σ_i M_{ii}. For any matrices M ∈ R^{n×d} and N ∈ R^{d×n}, we have the following property: tr(MN) = tr(NM).

Singular value decomposition is a popular dimensionality reduction method, through which one can get a projection Ã = A V_k, where V_k contains the top k right singular vectors of the matrix A. The exact SVD of A takes time cubic in the dimension size and quadratic in the data size.
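As a point of reference, the following minimal sketch illustrates this truncated-SVD projection; the sizes and variable names are illustrative choices, not taken from the paper.

```python
# Dimensionality reduction by truncated SVD: project the data onto its
# top-k right singular vectors. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 200))            # n x d data matrix
k = 10

# Economy SVD: A = U * diag(s) * Vt, where the rows of Vt are right singular vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = A @ Vt[:k].T                        # n x k projected data
```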

2.2. Fuzzy c-Means Clustering Algorithm (FCM)

The goal of fuzzy clustering is to get a flexible partition, where each point has membership in more than one cluster with values in [0, 1]. Among the various fuzzy clustering algorithms, the FCM clustering algorithm is widely used on low dimensional data because of its efficiency and effectiveness [22]. We start by giving the definition of the fuzzy c-means clustering problem and then describe the FCM clustering algorithm precisely.

Definition 1 (the fuzzy c-means clustering problem). Given a data set of n points with d features denoted by an n × d matrix A, a positive integer c regarded as the number of clusters, and a fuzzy constant m > 1, find the partition matrix U = [u_{ij}] and the centers of clusters V = {v_1, v_2, ..., v_c} that minimize the objective function J_m(U, V) = Σ_{i=1}^{n} Σ_{j=1}^{c} u_{ij}^m ‖A_i − v_j‖².

Here, ‖·‖ denotes a norm, usually the Euclidean norm; the element u_{ij} of the partition matrix U denotes the membership of point A_i in the cluster j. Moreover, u_{ij} ∈ [0, 1] and Σ_{j=1}^{c} u_{ij} = 1 for any i. The objective function is denoted by J_m(U, V).

FCM clustering algorithm first computes the degree of membership through distances between points and centers of clusters and then updates the center of each cluster based on the membership degree. By means of computing cluster centers and partition matrix iteratively, a solution is obtained. It should be noted that FCM clustering can only get a locally optimal solution and the final clustering result depends on the initialization. The detailed procedure of FCM clustering is shown in Algorithm 1.

Input: data set A (an n × d matrix), number of clusters c, fuzzy constant m;
Output: partition matrix U, centers of clusters V;
Initialize: sample U (or V) randomly from the proper space;
While the change of U between successive iterations exceeds a given tolerance, do
   u_{ij} = ( Σ_{l=1}^{c} ( ‖A_i − v_j‖ / ‖A_i − v_l‖ )^{2/(m−1)} )^{−1}
   v_j = ( Σ_{i=1}^{n} u_{ij}^m A_i ) / ( Σ_{i=1}^{n} u_{ij}^m ).
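The following short sketch, in Python with NumPy, shows one way the iteration in Algorithm 1 can be implemented; the function name, tolerance, and iteration cap are illustrative choices, not the authors' settings.

```python
# Minimal FCM sketch following Algorithm 1: alternate membership and center updates.
import numpy as np

def fcm(A, c, m=2.0, tol=1e-5, max_iter=100, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                   # each row sums to 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um.T @ A) / Um.sum(axis=0)[:, None]        # center update
        D = np.linalg.norm(A[:, None, :] - V[None, :, :], axis=2)
        D = np.fmax(D, 1e-12)                           # avoid division by zero
        U_new = 1.0 / (D ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)       # membership update
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V
```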
2.3. Ensemble Aggregations for Multiple Fuzzy Clustering Solutions with Random Projection

There are several algorithms proposed for aggregating multiple fuzzy clustering results obtained with random projection. The main strategy is to generate membership matrices through multiple fuzzy clustering solutions on differently projected data sets and then to aggregate the resulting membership matrices. Therefore, different methods of generating and aggregating membership matrices lead to different ensemble approaches for fuzzy clustering.

The first cluster ensemble approach using random projection was proposed in [20]. After projecting the data into a low dimensional space with random projection, the membership matrices were calculated through the probabilistic model of a Gaussian mixture obtained by EM clustering. Subsequently, the similarity of points i and j under model t was computed as P^{(t)}_{ij} = Σ_l P^{(t)}(l | i) P^{(t)}(l | j), where P^{(t)}(l | i) denoted the probability of point i belonging to cluster l under model t, so that P^{(t)}_{ij} denoted the probability that points i and j belonged to the same cluster under model t. The aggregated similarity matrix was obtained by averaging across the multiple runs, and the final clustering solution was produced by a hierarchical clustering method called complete linkage. For a mixture model, estimating the cluster number and the values of the unknown parameters is often complicated [23]. In addition, this approach needs O(n²) space for storing the similarity matrix of the data points.

Another approach, which was used to find genes in DNA microarray data, was presented in [19]. Similarly, the data was projected into a low dimensional space with a random matrix. Then the method employed FCM clustering to partition the projected data and generated membership matrices U^(1), ..., U^(r) over multiple runs t = 1, ..., r. For each run t, the similarity matrix was computed as S^(t) = U^(t)(U^(t))^T. Then the combined similarity matrix was calculated by averaging as S = (1/r) Σ_{t=1}^{r} S^(t). A distance matrix was computed from S, and the final partition matrix was obtained by FCM clustering on that distance matrix. Since this method needs to compute the product of a partition matrix and its transpose, its time and space complexities are quadratic in the number of data points.

Considering the large scale data sets arising in the context of big data, [21] proposed a new method for aggregating partition matrices from FCM clustering. They concatenated the partition matrices as B = [U^(1), U^(2), ..., U^(r)], instead of averaging the agreement matrices, and derived the ensemble result from B. This algorithm avoids the products of partition matrices and is more suitable than [19] for large scale data sets. However, it still needs the product of the concatenated partition matrix with its transpose when a crisp partition result is wanted.
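The two aggregation styles above can be contrasted in a few lines of code. This is a hedged sketch, assuming each run returns an n-by-c membership matrix whose rows sum to 1; the function names are illustrative and not taken from [19] or [21].

```python
# Contrast of the two aggregation schemes described above.
import numpy as np

def aggregate_average(memberships):
    """Averaging style of [19]: average the n x n co-association (agreement) matrices."""
    n = memberships[0].shape[0]
    S = np.zeros((n, n))
    for U in memberships:
        S += U @ U.T                      # quadratic in n for every run
    return S / len(memberships)

def aggregate_concatenate(memberships):
    """Concatenation style of [21]: stack the membership matrices column-wise."""
    return np.hstack(memberships)         # n x (c * r), linear in n
```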

3. Random Projection

Dimensionality reduction is a common technique for the analysis of high dimensional data. The most popular technique is SVD (or principal component analysis), where the original features are replaced by a small number of principal components in order to compress the data. But SVD takes time cubic in the number of dimensions. Recently, several works have shown that random projection can be applied to dimensionality reduction while preserving pairwise distances within a small factor [15, 16]. Its low computational complexity and preservation of the metric structure have earned random projection much attention. Lemma 2 indicates that there are three kinds of simple random projections possessing the above properties.

Lemma 2 (see [15, 16]). Let matrix A ∈ R^{n×d} be a data set of n points and d features. Given ε, β > 0, let

k_0 = (4 + 2β)(ε²/2 − ε³/3)^{−1} ln n.

For integer k ≥ k_0, let matrix R ∈ R^{d×k} be a random matrix, wherein the elements R_{ij} are independently identically distributed random variables from either one of the following three probability distributions: R_{ij} ~ N(0, 1); R_{ij} = ±1, each with probability 1/2; or R_{ij} = +√3 with probability 1/6, 0 with probability 2/3, and −√3 with probability 1/6. Let Ã = (1/√k) A R and let f: R^d → R^k map the ith row of A to the ith row of Ã. Then, with probability at least 1 − n^{−β}, for all rows u, v of A it holds that

(1 − ε)‖u − v‖² ≤ ‖f(u) − f(v)‖² ≤ (1 + ε)‖u − v‖².

Lemma 2 implies that if the number of dimensions of the data reduced by random projection is larger than a certain bound, then the pairwise squared Euclidean distances are preserved within a multiplicative factor of (1 ± ε).
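The distance-preservation property can be checked numerically. The sketch below uses the random sign matrix construction; the sizes n, d, k and the tolerance eps are illustrative choices, not values from the paper.

```python
# Numerical check of the distance preservation in Lemma 2 with a sign matrix.
import numpy as np

rng = np.random.default_rng(0)
n, d, k, eps = 200, 1000, 300, 0.3

A = rng.normal(size=(n, d))                      # original data
R = rng.choice([-1.0, 1.0], size=(d, k))         # random sign matrix
A_proj = (A @ R) / np.sqrt(k)                    # projected data, scaled by 1/sqrt(k)

def pdist2(X):
    """Matrix of squared pairwise Euclidean distances."""
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * X @ X.T

mask = ~np.eye(n, dtype=bool)
ratios = pdist2(A_proj)[mask] / pdist2(A)[mask]
print(ratios.min(), ratios.max())   # with high probability inside [1 - eps, 1 + eps]
```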

With the above properties, researchers have checked the feasibility of applying random projection to k-means clustering both in theory and in experiments [17, 24]. However, as membership degrees for FCM clustering and k-means clustering are defined differently, that analysis cannot be directly used to assess the effect of random projection on FCM clustering. Motivated by the idea of principal component analysis, we draw the conclusion that the compressed data retains the whole variability of the original data in a probabilistic sense, based on an analysis of the variance. Besides, the variables corresponding to the dimensions of the projected data are linearly independent. As a result, we can achieve dimensionality reduction by replacing the original data with the compressed data as “principal components.”

Next, we give a useful lemma for proof of the subsequent theorem.

Lemma 3. Let R_1, R_2, ..., R_k be independently distributed random variables from one of the three probability distributions described in Lemma 2; then, with probability 1, (1/k) Σ_{i=1}^{k} R_i² → 1 as k → ∞.

Proof. According to the probability distributions of the random variables R_i, it is easy to verify that E[R_i²] = 1 and Var(R_i²) < ∞. Then (1/k) Σ_{i=1}^{k} R_i² obeys the (strong) law of large numbers; namely, it converges to E[R_i²] = 1 with probability 1.

Since centralization of the data does not change the distance between any two points and the FCM clustering algorithm partitions data points based on pairwise distances, we assume that the expectation of the input data is 0. In practice, the covariance matrix of the population is usually unknown. Therefore, we investigate the effect of random projection on the variability of both the population and the sample.

Theorem 4. Let the data set A consist of independent samples of a d-dimensional random vector X = (X_1, ..., X_d)^T, and let Σ̂_X denote the sample covariance matrix of A. The random projection induced by the random matrix R maps the d-dimensional random vector X to the k-dimensional random vector Y = (1/√k) R^T X, and Σ̂_Y denotes the sample covariance matrix of the projected data. If the elements of the random matrix R obey a distribution demanded by Lemma 2 and are mutually independent of the random vector X, then
(1) the dimensions of the projected data are linearly independent: Cov(Y_i, Y_j) = 0 for i ≠ j;
(2) random projection maintains the whole variability: Σ_{i=1}^{k} Var(Y_i) = Σ_{j=1}^{d} Var(X_j); when k → ∞, with probability 1, tr(Σ̂_Y) → tr(Σ̂_X).

Proof. It is easy to know that the expectation of any element of the random matrix satisfies E[R_{st}] = 0 and E[R_{st}²] = 1. As the elements of the random matrix R and the random vector X are mutually independent, the covariance of the random vector Y induced by the random projection is Cov(Y_i, Y_j) = (1/k) Σ_{s=1}^{d} Σ_{t=1}^{d} E[R_{si} R_{tj}] Cov(X_s, X_t).
(1) If i ≠ j, then E[R_{si} R_{tj}] = E[R_{si}] E[R_{tj}] = 0, so Cov(Y_i, Y_j) = 0.
(2) If i = j, then E[R_{si} R_{ti}] equals 1 when s = t and 0 otherwise, so Var(Y_i) = (1/k) Σ_{s=1}^{d} Var(X_s).
Thus, by the assumption E[X] = 0, we can get Σ_{i=1}^{k} Var(Y_i) = Σ_{s=1}^{d} Var(X_s). We denote the spectral decomposition of the sample covariance matrix by Σ̂_X = PΛP^T, where P is the matrix of eigenvectors and Λ is a diagonal matrix in which the diagonal elements are the eigenvalues λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_d ≥ 0. Supposing the data samples have been centralized, namely, their means are 0, we can get the sample covariance matrix Σ̂_X = (1/n) A^T A. For convenience, we still denote a sample of the random matrix by R. Thus, the projected data are Ã = (1/√k) A R and the sample covariance matrix of the projected data is Σ̂_Y = (1/n) Ã^T Ã = (1/k) R^T Σ̂_X R. Then, we can get tr(Σ̂_Y) = (1/k) tr(R^T PΛP^T R) = (1/k) Σ_{j=1}^{d} λ_j Σ_{i=1}^{k} w_{ji}², where w_{ji} = (P^T R)_{ji} plays the role of a sample of an element of the random matrix.
In practice, the spectrum of a covariance matrix often displays a distinct decay after a few large eigenvalues. So we assume that there exist an integer d_0 and a negligible constant M such that, for all j > d_0, it holds that λ_j ≤ M, so that tr(Σ̂_X) ≈ Σ_{j=1}^{d_0} λ_j. Then, by Lemma 3, with probability 1, (1/k) Σ_{i=1}^{k} w_{ji}² → 1 for each j ≤ d_0. Combining the above arguments, we achieve tr(Σ̂_Y) → tr(Σ̂_X) with probability 1, when k → ∞.

Part (1) of Theorem 4 indicates that the compressed data produced by random projection can carry much information with low dimensionality owing to the linear independence of the reduced dimensions. Part (2) shows that the sum of variances over the dimensions of the original data coincides with that of the projected data; namely, random projection preserves the variability of the original data. Combining the results of Lemma 2 with those of Theorem 4, we conclude that random projection can be employed to improve the efficiency of the FCM clustering algorithm through low dimensionality, while the modified algorithm approximately keeps the accuracy of the partition.
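Theorem 4's trace-preservation claim is easy to probe empirically. The following sketch compares the total sample variance before and after projection; the data sizes, the anisotropic scaling, and the sign-matrix choice are all illustrative assumptions.

```python
# Empirical check of Theorem 4: the projection (1/sqrt(k)) * A @ R approximately
# preserves the total sample variance (the trace of the sample covariance).
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 5000, 800, 400
A = rng.normal(size=(n, d)) * np.linspace(0.1, 3.0, d)   # anisotropic data
A -= A.mean(axis=0)                                       # centralize

R = rng.choice([-1.0, 1.0], size=(d, k))                  # random sign matrix
A_proj = (A @ R) / np.sqrt(k)

tr_orig = np.trace(np.cov(A, rowvar=False))
tr_proj = np.trace(np.cov(A_proj, rowvar=False))
print(tr_orig, tr_proj)    # the two traces should be close for moderately large k
```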

4. FCM Clustering with Random Projection and an Efficient Cluster Ensemble Approach

4.1. FCM Clustering via Random Projection

According to the results of Section 3, we design an improved FCM clustering algorithm that uses random projection for dimensionality reduction. The procedure of the new algorithm is shown in Algorithm 2.

Input: data set A (an n × d matrix), number of clusters c, fuzzy constant m, FCM clustering algorithm;
Output: partition matrix U, centers of clusters V.
(1) sample a d × k random projection matrix R meeting the requirements of Lemma 2;
(2) compute the product Ã = (1/√k) A R;
(3) run the FCM algorithm on Ã and get the partition matrix U;
(4) compute the centers of clusters V through the original data A and U.

Algorithm 2 reduces the dimensionality of the input data by multiplying it with a random matrix. Compared with the O(ndc) time for running each iteration of the original FCM clustering, the new algorithm needs only O(nkc) time per iteration. Thus, the time complexity of the new algorithm decreases markedly for high dimensional data in the case k ≪ d. Another common dimensionality reduction method is SVD. Compared with the time needed to run SVD on the data matrix A, the new algorithm only needs O(dk) time to generate the random matrix R. It indicates that random projection is a cost-effective method of dimensionality reduction for the FCM clustering algorithm.
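A compact sketch of Algorithm 2 follows. It reuses the fcm() sketch from Section 2.2; the Gaussian matrix is one of the choices allowed by Lemma 2, and k is an illustrative reduced dimension rather than a value prescribed by the paper.

```python
# Sketch of Algorithm 2: FCM clustering after random projection.
import numpy as np

def fcm_random_projection(A, c, k, m=2.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = A.shape[1]
    R = rng.normal(size=(d, k))                   # random Gaussian projection matrix (step 1)
    A_tilde = (A @ R) / np.sqrt(k)                # projected data (step 2)
    U, _ = fcm(A_tilde, c, m=m, rng=rng)          # memberships in the reduced space (step 3)
    Um = U ** m
    V = (Um.T @ A) / Um.sum(axis=0)[:, None]      # centers recomputed from original data (step 4)
    return U, V
```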

4.2. Ensemble Approach Based on Graph Partition

As different random projections may result in different clustering solutions [20], it is attractive to design a cluster ensemble framework with random projection for improved and robust clustering performance. Although it uses less memory and runs faster than the ensemble method in [19], the cluster ensemble algorithm in [21] still needs the product of the concatenated partition matrix for crisp grouping, which leads to high time and space costs in the setting of big data.

In this section, we propose a more efficient and effective aggregation method for multiple FCM clustering results. An overview of our new ensemble approach is presented in Figure 1. The new ensemble method is based on partitioning a similarity graph. For each random projection, a new data set is generated. After performing FCM clustering on the new data sets, membership matrices are output. The elements of a membership matrix are treated as the similarity measure between points and the cluster centers. Through SVD on the concatenation of the membership matrices, we get the spectral embedding of the data points efficiently. The detailed procedure of the new cluster ensemble approach is shown in Algorithm 3.

Input: data set A (an n × d matrix), number of clusters c, reduced dimension k, number of
random projections r, FCM clustering algorithm;
Output: cluster label vector L.
(1) at each iteration t = 1, ..., r, run Algorithm 2 and get the membership matrix U^(t);
(2) concatenate the membership matrices B = [U^(1), U^(2), ..., U^(r)];
(3) compute the first c left singular vectors of B, denoted by Z,
through the truncated SVD B ≈ ZΛQ^T, where Λ is a diagonal matrix of the leading singular values and Q contains the corresponding right singular vectors;
(4) treat each row of Z as a data point and apply k-means to obtain the cluster label vector.

In step (3) of the procedure in Algorithm 3, the left singular vectors of B are equivalent to the eigenvectors of BB^T. It implies that we regard the matrix product BB^T as a construction of the affinity matrix of the data points. This method is motivated by the research on landmark-based representation [25, 26]. In our approach, we treat the cluster centers of each FCM clustering run as landmarks and the membership matrix as a landmark-based representation. Thus, the concatenation of membership matrices forms a combined landmark-based representation matrix. In this way, the graph similarity matrix is computed as W = BB^T, which yields the spectral embedding efficiently through step (3). To normalize the graph similarity matrix, we multiply B by the inverse square root of the corresponding degree matrix. As a result, the degree matrix of the normalized graph similarity matrix is an identity matrix.
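The whole ensemble pipeline fits in a few lines. The sketch below reuses fcm_random_projection() from Section 4.1; the degree normalization reflects our reading of the description above, and the use of SciPy's kmeans2 for the final step is an illustrative choice.

```python
# Sketch of Algorithm 3: spectral embedding from the SVD of the concatenated
# membership matrices, without ever forming the n x n similarity graph.
import numpy as np
from scipy.cluster.vq import kmeans2

def ensemble_spectral(A, c, k, r, m=2.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    memberships = [fcm_random_projection(A, c, k, m=m, rng=rng)[0] for _ in range(r)]
    B = np.hstack(memberships)                       # n x (c * r) landmark representation

    # Symmetric degree normalization of the implicit graph W = B B^T,
    # computed without materializing W.
    deg = B @ (B.T @ np.ones(B.shape[0]))            # row sums of W
    B_hat = B / np.sqrt(deg)[:, None]

    # Spectral embedding: top-c left singular vectors of the normalized matrix.
    Z, _, _ = np.linalg.svd(B_hat, full_matrices=False)
    embedding = Z[:, :c]

    _, labels = kmeans2(embedding, c, minit='++')    # final crisp partition
    return labels
```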

There are two perspectives to explain why our approach works. Considering the similarity measure between points and cluster centers defined by the membership degrees in FCM clustering, Proposition 3 in [26] demonstrated that the singular vectors of the landmark-based representation matrix converge to the eigenvectors of the normalized affinity matrix generated in standard spectral clustering as the number of landmarks approaches the number of data points. As a result, the singular vectors of B converge to the eigenvectors of the normalized affinity matrix, and thus our final output will converge to that of standard spectral clustering. Another explanation concerns the similarity measure defined directly between data points: we can treat each row of B as a transformed data point. From this viewpoint, the affinity matrix obtained here is the same as the one of standard spectral embedding, and our output is just the partition result of standard spectral clustering.

To facilitate comparison of different ensemble methods for FCM clustering solutions with random projection, we denote the approach of [19] by EFCM-A (average the products of membership matrices), the algorithm of [21] by EFCM-C (concatenate the membership matrices), and our new method by EFCM-S (spectral clustering on the membership matrices). In the cluster ensemble phase, the main computations of the EFCM-A method are multiplications of membership matrices. Similarly, the EFCM-C algorithm also needs the product of the concatenated membership matrices in order to get the crisp partition result. Thus the above methods both need O(n²) space and time quadratic in the number of data points n. However, the main computations of EFCM-S are the SVD of B and the k-means clustering of the embedding. The overall space is O(ncr), the SVD time is O(n(cr)²), and the k-means clustering time is O(nc²t), where t is the iteration number of k-means. Therefore, the computational complexity of EFCM-S is markedly lower than those of EFCM-A and EFCM-C, considering cr ≪ n and c ≪ n for large scale data sets.

5. Experiments

In this section, we present the experimental evaluations of the new algorithms proposed in Section 4. We implemented the related algorithms in the Matlab computing environment and conducted our experiments on a Windows-based system with an Intel Core 3.6 GHz processor and 16 GB of RAM.

5.1. Data Sets and Parameter Settings

We conducted the experiments on synthetic and real data sets which both have relatively high dimensionality. The synthetic data set had 10000 data points with 1000 dimensions, generated from a mixture of 3 Gaussian components with fixed mixing proportions and component-specific means and standard deviations. The real data set is the daily and sports activities data (ACT) published in the UCI machine learning repository (the ACT data set can be found at http://archive.ics.uci.edu/ml/datasets/Daily+and+Sports+Activities). The data comprise 19 activities recorded by 45 motion sensors for 5 minutes at a 25 Hz sampling frequency. Each activity was performed by 8 subjects in their own styles. To get high dimensional data sets, we treated 1 minute and 5 seconds of activity data as an instance, respectively. As a result, we got a 760-instance (ACT1) and a 9120-instance (ACT2) data matrix whose rows were activity instances and columns were features.

For the parameters of FCM clustering, we let the maximum iteration number be 100, the fuzzy factor m be 2, and the number of clusters be 3 for the synthetic data set and 19 for the ACT data sets. We also normalized the objective function by the Frobenius norm of the data matrix [27]. To minimize the influence introduced by different initializations, we present the average values of the evaluation indices over 20 independent experiments.

In order to compare different dimensionality reduction methods for FCM clustering, we initialized the algorithms by choosing points randomly as the cluster centers and made sure that every algorithm began with the same initialization. In addition, we ran Algorithm 2 over a range of reduced dimensions for the synthetic and ACT1 data sets. Two kinds of random projections (with random variables from the distributions in Lemma 2) were both tested to verify their feasibility. We also compared Algorithm 2 against another popular method of dimensionality reduction, SVD. What calls for special attention is that the number of eigenvectors corresponding to nonzero eigenvalues of the ACT1 data is only 760, so we only considered reduced dimensions up to 760 for FCM clustering with SVD on the ACT1 data set.

Among the comparisons of different cluster ensemble algorithms, we set the dimension number of the projected data to the same value for both the synthetic and ACT2 data sets. In order to meet the requirement of Algorithm 3, the number of random projections was set to 20 for the synthetic data set and 5 for the ACT2 data set, respectively.

5.2. Evaluation Criteria

For clustering algorithms, clustering validation and running time are two important indices for judging their performance. Clustering validation measures evaluate the goodness of clustering results [28] and can be divided into two categories: external clustering validation and internal clustering validation. External validation measures use external information, such as the given class labels, to evaluate the goodness of the solution output by a clustering algorithm. On the contrary, internal measures evaluate the clustering results using only features inherent to the data sets. In this paper, the validity evaluation criteria used are the Rand index and the clustering validation index based on nearest neighbors for crisp partitions, together with the fuzzy Rand index and the Xie-Beni index for fuzzy partitions. Here, the Rand index and the fuzzy Rand index are external validation measures, whereas the clustering validation index based on nearest neighbors and the Xie-Beni index are internal validation measures.

(1) Rand Index (RI) [29]. RI describes the similarity between the clustering solution and the correct labels through pairs of points. It takes into account the numbers of point pairs that are in the same and in different clusters. The RI is defined as RI = (a + b)/c, where a is the number of pairs of points that are in the same cluster in both the clustering result and the given class labels, b is the number of pairs of points that are in different subsets in both the clustering result and the given class labels, and c equals n(n − 1)/2, the total number of pairs. The value of RI ranges from 0 to 1, and a higher value implies a better clustering solution.
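For concreteness, a direct (if unoptimized) computation of RI from two label vectors can look like the following; the function name and the quadratic pair loop are illustrative.

```python
# Plain pair-counting computation of the Rand index defined above.
from itertools import combinations

def rand_index(labels_true, labels_pred):
    n = len(labels_true)
    a = b = 0
    for i, j in combinations(range(n), 2):
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        if same_true and same_pred:
            a += 1                      # together in both partitions
        elif not same_true and not same_pred:
            b += 1                      # apart in both partitions
    return (a + b) / (n * (n - 1) / 2)

# Example: rand_index([0, 0, 1, 1], [1, 1, 0, 0]) == 1.0
```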

(2) Fuzzy Rand Index (FRI) [30]. FRI is a generalization of RI to soft partitions. It also measures the proportion of pairs of points that are in the same and in different clusters in both the clustering solution and the true class labels. It computes the analogous quantities a and b through a contingency table, as described in [30]. Therefore, the range of FRI is also [0, 1], and a larger value means a more accurate clustering solution.

(3) Xie-Beni Index (XB) [31]. XB takes the minimum squared distance between cluster centers as the separation of the partition and the average squared fuzzy deviation of the data points as the compactness of the partition. XB is calculated as XB = J_m / (n · min_{i≠j} ‖v_i − v_j‖²), where J_m is just the objective function of FCM clustering and v_i is the center of cluster i. The smallest XB indicates the optimal cluster partition.

(4) Clustering Validation Index Based on Nearest Neighbors (CVNN) [32]. The separation term of CVNN reflects how the objects of each cluster are situated relative to their nearest neighbors, and the compactness term is the mean pairwise distance between objects in the same cluster. CVNN is computed as CVNN(c, κ) = Sep(c, κ) / max_{c_min ≤ c′ ≤ c_max} Sep(c′, κ) + Com(c) / max_{c_min ≤ c′ ≤ c_max} Com(c′), where Sep(c, κ) = max_{i=1,…,c} (1/n_i) Σ_{j=1}^{n_i} q_j/κ and Com(c) = Σ_{i=1}^{c} [2/(n_i(n_i − 1)) Σ_{x,y ∈ C_i} d(x, y)]. Here, c is the number of clusters in the partition result, c_max is the maximum cluster number given, c_min is the minimum cluster number given, κ is the number of nearest neighbors, n_i is the number of objects in the ith cluster C_i, q_j denotes the number of the κ nearest neighbors of C_i's jth object which are not in C_i, and d(x, y) denotes the distance between x and y. A lower CVNN value indicates a better clustering solution.

Objective function is a special evaluation criterion of validity for FCM clustering algorithm. The smaller objective function indicates that the points inside clusters are more “similar.”

Running time is also an important evaluation criterion often related to the scalability of algorithm. One main target of random projection for dimensionality reduction is to decrease the runtime and enhance the applicability of algorithm in the context of big data.

5.3. Performance of FCM Clustering with Random Projection

The experimental results for FCM clustering with random projection are presented in Figure 2, where (a), (c), (e), and (g) correspond to the synthetic data set and (b), (d), (f), and (h) correspond to the ACT1 data set. The evaluation criteria used to assess the proposed algorithms are FRI ((a) and (b)), XB ((c) and (d)), the objective function ((e) and (f)), and running time ((g) and (h)). “SignRP” denotes the proposed algorithm with a random sign matrix, “GaussRP” denotes FCM clustering with a random Gaussian matrix, “FCM” denotes the original FCM clustering algorithm, and “SVD” denotes FCM clustering with dimensionality reduction through SVD. It should be noted that the true XB value of FCM clustering in subfigure (d) is 4.03e + 12, not 0.

From Figure 2, we can see that FCM clustering with random projection is clearly more efficient than the original FCM clustering. When the number of dimensions is above a certain bound, the validity indices are nearly stable and similar to the ones of naive FCM clustering for both data sets. This verifies the conclusion that “the accuracy of the clustering algorithm can be preserved when the dimensionality exceeds a certain bound.” The effectiveness of the random projection method is also confirmed by how small this bound is compared with the total number of dimensions for both the synthetic and ACT1 data. Besides, the two different kinds of random projection methods have a similar impact on FCM clustering, as their plots are nearly identical.

The higher objective function values and the smaller XB indices of the SVD method for the synthetic data set indicate that the generated clustering solution has a better degree of separation between clusters. The external cluster validation indices also verify that the SVD method has better clustering results for the synthetic data. These observations suggest that the SVD method is more suitable for Gaussian mixture data sets than FCM clustering with random projection and naive FCM clustering.

Although the SVD method has a higher FRI for the synthetic data set, the random projection methods have comparable FRI values for the ACT1 data set and better objective function values for both data sets. In addition, the random projection approaches are clearly more efficient, as SVD needs time cubic in the dimensionality. Hence, these observations indicate that our algorithm is quite encouraging in practice.

5.4. Comparisons of Different Cluster Ensemble Methods

The comparisons of different cluster ensemble approaches are shown in Figure 3 and Table 1. Similarly, (a) and (c) of the figure correspond to the synthetic data set and (b) and (d) correspond to the ACT2 data set. We use RI ((a) and (b)) and running time ((c) and (d)) to present the performance of the ensemble methods. Meanwhile, the meanings of EFCM-A, EFCM-C, and EFCM-S are identical to the ones in Section 4.2. In order to get crisp partitions for EFCM-A and EFCM-C, we used the complete-linkage hierarchical clustering method after obtaining the distance matrix, as in [21]. Since all three cluster ensemble methods get perfect partition results on the synthetic data set, we only compare the CVNN indices of the different ensemble methods on the ACT2 data set, which are presented in Table 1.

In Figure 3, the running time of our algorithm is shorter for both data sets. This verifies the result of the time complexity analysis for the different algorithms in Section 4.2. The three cluster ensemble methods all get the perfect partition for the synthetic data set, whereas our method is more accurate than the other two methods for the ACT2 data set. The perfect partition results suggest that all three ensemble methods are suitable for the Gaussian mixture data set. However, the clear improvement in RI for the ACT2 data set should be attributed to the different grouping ideas. Our method is based on a graph partition such that the edges between different clusters have low weight and the edges within a cluster have high weight. This spectral-embedding way of clustering is more suitable for the ACT2 data set. In Table 1, the smaller CVNN values of our new method also show that the new approach has better partition results on the ACT2 data set. These observations indicate that our algorithm has an advantage in efficiency and adapts to a wider range of geometries.

We also compare the stability of the three ensemble methods, as presented in Table 2. From the table, we can see that the standard deviation of RI for EFCM-S is an order of magnitude lower than those of the other methods. Hence, this result shows that our algorithm is more robust.

To address the situation where the number of clusters is unknown, we also varied the number of clusters in the FCM clustering and the spectral embedding for our new method. We denote this version of the new method by EFCM-SV. Since the number of random projections was set to 5 for the ACT2 data set, we changed the number of clusters from 17 to 21 as the input of the FCM clustering algorithm. In addition, we set the number of clusters from 14 to 24 as the input of the spectral embedding and applied CVNN to estimate the most plausible number of clusters. The experimental results are presented in Table 3.

In Table 3, the values for “EFCM-SV” are the average RI values with the estimated numbers of clusters over 20 individual runs. The values of “+CVNN” are the average numbers of clusters decided by the CVNN cluster validity index. Using the numbers of clusters estimated by CVNN, our method obtains results similar to those of the ensemble method with the correct number of clusters. In addition, the average estimates of the number of clusters are close to the true one. This indicates that our cluster ensemble method EFCM-SV is attractive when the number of clusters is unknown.

6. Conclusion and Future Work

The “curse of dimensionality” in big data has recently posed new challenges for clustering, and feature extraction for dimensionality reduction is a popular way to deal with these challenges. We studied random projection as a feature extraction method for FCM clustering. Through a theoretical analysis of the effects of random projection on the entire variability of the data and empirical verification on both synthetic and real-world data, we designed an enhanced FCM clustering algorithm with random projection. The new algorithm maintains nearly the same clustering solution as the original FCM clustering while being more efficient than the SVD-based feature extraction method. What is more, we also proposed a cluster ensemble approach that is more applicable to large scale data sets than existing ones. The new ensemble approach obtains the spectral embedding efficiently from the SVD of the concatenation of membership matrices. The experiments showed that the new ensemble method runs faster, produces more robust partition solutions, and fits a wider range of geometrical data sets.

A direction for future research is to design provably accurate feature extraction and feature selection methods for FCM clustering. Another remaining question is how to choose the proper number of random projections for the cluster ensemble method in order to get a trade-off between clustering accuracy and efficiency.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported in part by the National Key Basic Research Program (973 programme) under Grant 2012CB315905 and in part by the National Nature Science Foundation of China under Grants 61502527 and 61379150 and in part by the Open Foundation of State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) (no. SKLNST-2013-1-06).