Research Article  Open Access
Spectral Clustering with Local Projection Distance Measurement
Abstract
Constructing a rational affinity matrix is crucial for spectral clustering. In this paper, a novel spectral clustering method via a local projection distance measure (LPDM) is proposed. In this method, the Local-Projection-Neighborhood (LPN) is defined as a region between a pair of data points, and the other data points in the LPN are projected onto the straight line through the data pair. Using the Euclidean distances between the projective points, the local spatial structure of the data can be detected and used to measure the similarity of objects. The affinity matrix is then obtained by a new similarity measurement, which can squeeze or widen the projective distance according to the local spatial structure of the data. Experimental results show that the LPDM algorithm obtains desirable results with high performance on synthetic datasets, real-world datasets, and images.
1. Introduction
As an unsupervised classification technique, clustering has been successfully applied to exploratory data analysis, such as image segmentation [1–3], data mining [4, 5], signal analysis [6], gene expression analysis [7], sport activity analysis [8], and other subjects [9–11]. During the last decade, a number of clustering algorithms have been developed. Nonetheless, many of these algorithms are not effective when applied to nonconvex data spaces. Compared with classical clustering, spectral clustering (SC) [12] has been successfully used to identify irregularly shaped datasets and is supported by linear algebra theory [13].
SC can be regarded as a type of partition problem for undirected graphs [14]. In the partition problem, a set of data points is represented by a similarity graph. Each data point can be considered as a vertex, and the similarities between pairs of data points can be considered as the edge weights in the graph. Through a partition of the graph, the data points can be clustered into different subgraphs such that the edges between different subgraphs have relatively low weights in comparison to those within the same subgraph. It is well known that early graph partition methods, such as min-cut [15], tend to generate unbalanced solutions and are extraordinarily sensitive to noise [16]. In order to overcome these drawbacks, many spectral clustering algorithms were proposed, such as normalized cut [17], ratio cut [18], min-max cut [19], and Ng-Jordan-Weiss (NJW) [20], which employed diverse criteria to optimize the quality of the graph partition. In these methods, as we know, the Gaussian kernel function is chosen as the similarity function, but it cannot capture the local spatial structure of the dataset.
To further improve the performance of spectral clustering algorithms, Chen and Feng [16] presented a semi-supervised SC based on the Near Strangers or Distant Relatives model, which is a generalization of the SC algorithm. In [21], Li and Guo proposed a novel affinity matrix generation approach, which can adaptively adjust the similarity measure of data points based on the spatial structure of the dataset. To reduce sensitivity to the scaling parameter, Zelnik-Manor and Perona [22] developed a self-tuning SC algorithm to estimate the scaling parameter. Via M-estimation statistics, Chang and Yeung [23] proposed a robust path-based SC algorithm. In [24], the local density is employed to adjust the scaling parameter. Nevertheless, the method needs to set the parameter empirically [13]. To solve the problem, Yang et al. [25] proposed a density sensitive function, which can either elongate or shorten the similarity measure in regions of different density.
From a graph-cut perspective, based on minimizing the sum of edge weights between data points, SC can partition an undirected weighted graph into disjoint components. The information about the adjacency relations between data points is contained within the affinity matrix $W$. Most existing methods exploit the local structure of the dataset to construct a rational affinity matrix, which is one of the key issues of SC and greatly affects partition results.
As we know, data points with high similarity should have uniform density and consistent spatial characteristics [26]. Therefore, the key to estimating whether a pair of data points belongs to a specific cluster is how to use the data information between them. The local projection distance measure (LPDM) presented in the current paper can reflect the local spatial structure of the dataset in more depth, by which rational partitions of synthetic datasets, images, and most real-world datasets may be achieved.
The main contributions of the current paper are threefold. The concept of the Local-Projection-Neighborhood (LPN) is introduced, which is a spatial area among data points and an important source for obtaining the local spatial structure of datasets. A measure for the local projection distance is presented, which facilitates embodying an accurate local structure of the dataset. A novel similarity measure is defined that can adaptively adjust the measure of similarity based on the local spatial structure of datasets and is insensitive to its parameters on UCI datasets.
The outline of the rest of this paper is as follows. In Section 2, spectral clustering algorithm is briefly discussed as preliminary. Section 3 introduces the LPDM algorithm. The performance of the presented approach is evaluated in Section 4. Section 5 is the conclusion.
2. Spectral Clustering
SC algorithms can be regarded as solving graph-cut problems, which are extensively applied to exploratory data analysis. In this section, as a preliminary, we briefly review spectral clustering, which is closely related to the LPDM algorithm.
In this paper, SC is considered as an undirected weighted graph-cut problem. For a dataset $X = \{x_1, x_2, \ldots, x_n\}$, the weights of the graph can be collected in the adjacency matrix $W$. Specifically, the element $W_{ij}$ of the adjacency matrix is formulated as
$$W_{ij} = \exp\!\left(-\frac{d^2(x_i, x_j)}{2\sigma^2}\right),\quad i \ne j;\qquad W_{ii} = 0, \tag{1}$$
where $\sigma$ is the scaling parameter determining the neighborhood width and $d(x_i, x_j)$ is the distance between points $x_i$ and $x_j$. If an element $W_{ij} = 0$, there is no link between the corresponding points. The diagonal degree matrix $D$ is constructed as $D_{ii} = \sum_{j} W_{ij}$. SC uses this similarity information to group the data points into a predefined number of clusters.
The steps of the NJW approach are listed as follows.
(1) Calculate the similarity of the data points by (1) to construct the affinity matrix $W$ and the degree matrix $D$.
(2) Compute the normalized affinity matrix $L = D^{-1/2} W D^{-1/2}$.
(3) Compute the first $k$ largest eigenvalues of $L$ and the corresponding eigenvectors to construct the matrix $X_k$ with the eigenvectors as columns.
(4) Normalize each row of the matrix $X_k$ to unit length (i.e., $Y_{ij} = X_{ij} / (\sum_{j} X_{ij}^2)^{1/2}$) to construct the matrix $Y$.
(5) Group the data points by the $k$-means method in the new space spanned by the rows of the matrix $Y$.
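The steps above can be sketched in Python as a minimal illustration (not the paper's implementation; the plain Lloyd-style $k$-means with farthest-point initialization is an assumption made here for simplicity and reproducibility):

```python
import numpy as np

def njw_spectral_clustering(X, k, sigma=1.0, n_iter=100):
    # (1) Gaussian-kernel affinity matrix W (Eq. (1)) and degree vector D.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = W.sum(axis=1)
    # (2) Normalized affinity L = D^{-1/2} W D^{-1/2}.
    dinv = 1.0 / np.sqrt(D)
    L = W * dinv[:, None] * dinv[None, :]
    # (3) Eigenvectors of the k largest eigenvalues as columns.
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    Xk = vecs[:, -k:]
    # (4) Normalize each row of Xk to unit length.
    Y = Xk / np.linalg.norm(Xk, axis=1, keepdims=True)
    # (5) k-means on the rows of Y (plain Lloyd iterations with
    # farthest-point initialization, an assumption for simplicity).
    centers = [Y[0]]
    for _ in range(1, k):
        dist = np.min([((Y - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(Y[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Y[labels == j].mean(axis=0)
    return labels
```

On two well-separated blobs this sketch recovers the blobs as the two clusters.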
Remark 1. Via spectral clustering, data points are mapped into a convex dataset in another space, and then $k$-means clustering can be used to group the mapped data points in the new space. Among SC methods, NJW is widely applied to data analysis; thus, it is adopted in this paper.
The affinity matrix of classical spectral clustering is usually constructed by the Gaussian kernel function, but this function cannot well represent the spatial structure of datasets and may lead to irrational results. To address this problem, the LPDM algorithm is devised in the next section.
3. Affinity Matrix Construction for Spectral Clustering through Local Projection Distance Measure
As the most important part of the paper, this section first briefly discusses some general problems of three existing similarity measurement algorithms. In the second part, to overcome these problems, the novel LPDM algorithm is introduced.
3.1. Similarity Function Analysis
As we know, the Gaussian kernel function is employed in most of the existing SC methods. In most cases, since the scaling parameter $\sigma$ is fixed and has to be set manually, the Gaussian kernel function cannot objectively reflect the local spatial structure of datasets or reasonably compute the similarity between data points, especially when the similarity function is applied to complex datasets. Figure 1 illustrates the high impact of $\sigma$ on the clustering. It is evident that the results of the NJW algorithm are greatly affected by the scaling parameter in the Gaussian kernel function.
Unlike setting a fixed scaling parameter, the parameter in self-tuning spectral clustering (SCST) [22] is calculated from the neighborhood of each point $x_i$ as
$$\sigma_i = d(x_i, x_K), \tag{2}$$
where $x_K$ is the $K$th neighbor of point $x_i$, and the affinity becomes $W_{ij} = \exp(-d^2(x_i, x_j)/(\sigma_i \sigma_j))$. Unfortunately, the affinity matrix in SCST is still constructed by the Gaussian kernel function, which is less valid in many cases [24]. In Figure 2, the method cannot yield a better clustering result, failing to classify the ThreeSpiralArms dataset.
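The self-tuning construction above can be sketched as follows (a minimal illustration; the convention that column $K$ of the sorted distance row skips the point itself, since column 0 is the zero self-distance, is an assumption):

```python
import numpy as np

def self_tuning_affinity(X, K=7):
    # Pairwise Euclidean distances.
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # sigma_i = distance to the K-th neighbor; column 0 of each sorted
    # row is the point itself, so column K is the K-th neighbor (Eq. (2)).
    sigma = np.sort(d, axis=1)[:, K]
    # SCST affinity: W_ij = exp(-d_ij^2 / (sigma_i * sigma_j)).
    W = np.exp(-d ** 2 / (sigma[:, None] * sigma[None, :]))
    np.fill_diagonal(W, 0.0)
    return W
```

The resulting matrix is symmetric with a zero diagonal, and nearby pairs receive larger affinities than distant ones.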
Zhang et al. [24] addressed the Common-Near-Neighbor (CNN) measure for spectral clustering (SCDA) and defined a novel similarity function as
$$W_{ij} = \exp\!\left(-\frac{d^2(x_i, x_j)}{2\sigma^2\,(\mathrm{CNN}(i, j) + 1)}\right), \tag{3}$$
where $\mathrm{CNN}(i, j)$ represents the local density of the overlapped area in data space, i.e., the number of points in the region determined by the data points $x_i$ and $x_j$ with radius $\sigma$. The result of the SCDA algorithm on the ThreeSpiralArms dataset is shown in Figure 3. It is evident that the algorithm produces the correct clustering result.
Utilizing CNN, the scaling parameter can be adjusted adaptively. However, the approach requires setting the radius parameter manually to obtain correct clustering [13]. Figure 4 shows that SCDA fails to classify the dataset, and the unrevealed structures in this dataset cannot be discovered. This reveals that, in some cases, the similarity of data points cannot be properly reflected by the Euclidean distance.
It is generally considered that if data points fall into the same cluster, the distribution of points should have similar patterns and concordant density. Nevertheless, in some cases, CNN cannot correctly estimate the local density of complex datasets. Let us survey the synthetic dataset in Figure 5, where it is easy to find that the four marked data points belong to the same cluster.
For three pairs of these data points, the CNN value, the Euclidean distance $d$, the similarity by (1), and the novel similarity by (3) are calculated, respectively, and the results are summarized in Table 1.
From Table 1, we can find that the Euclidean distances between the first marked data point and the others are approximately the same. That is to say, the similarities between each of the three pairs of points should be similar, as reflected by the distance values. The CNN parameter reflects the local density among data points according to SCDA, and it can be used to estimate the similarity of point pairs. However, as can be seen from Table 1, the CNN value and similarity of one of the pairs are much larger than the others', which implies that those two points probably do not belong to the same cluster, although all four data points are located in the same cluster. CNN can adaptively adjust the scaling parameter in the Gaussian kernel function and reduce the impact of a fixed $\sigma$ to some extent. Nonetheless, CNN merely reflects the local density around the geometric center between two data points. Therefore, the local structure of the dataset cannot be fully described by CNN.
Combining the analysis of the three SC methods in this subsection, it is clear that, in some sense, the similarity of the correlative points cannot be rightly reflected based on the Gaussian kernel function. How to obtain the local spatial structure of datasets and construct an appropriate affinity matrix will be addressed in the next subsection.
3.2. Local Projection Distance Measure
The motivation of the LPDM algorithm originated from the idea that, in order to construct an appropriate affinity matrix, we should know as much as possible about the spatial structure of the neighborhoods of the correlative points. Therefore, in this subsection, we define the Local-Projection-Neighborhood (LPN) and propose a novel density sensitive similarity measure.
Given a dataset in $\mathbb{R}^m$, the LPN of the pair $x_i$ and $x_j$ is the overlapped region, with a specified Euclidean radius $r$, around two center points $c_1$ and $c_2$. The center points of the region can be calculated from
$$d(c, x_i) = d(c, x_j) = d(x_i, x_j) = r, \tag{4}$$
where $c$ is a center point of a sphere region and $r$ is the radius. Here, the three points $x_i$, $x_j$, and $c$ form an equilateral triangle with side length $d(x_i, x_j)$. Therefore, the center points can be obtained by solving (4).
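In two dimensions, the equilateral-triangle construction has a closed form: each center is the midpoint of the segment offset by the triangle height along the perpendicular. A hedged sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def lpn_centers_2d(xi, xj):
    # Two centers c1, c2 forming an equilateral triangle with x_i and x_j,
    # i.e., d(c, x_i) = d(c, x_j) = d(x_i, x_j)  (2-D reading of Eq. (4)).
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    mid = (xi + xj) / 2.0
    diff = xj - xi
    d = np.linalg.norm(diff)
    # Unit vector perpendicular to the segment x_i x_j (2-D only).
    perp = np.array([-diff[1], diff[0]]) / d
    h = np.sqrt(3.0) / 2.0 * d          # height of the equilateral triangle
    return mid + h * perp, mid - h * perp
```

Both returned centers lie at distance $d(x_i, x_j)$ from each endpoint, as (4) requires.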
The idea of the LPDM algorithm is to discover the unrevealed configuration patterns of the local dataset from the spatial structure of the data points in the LPN. Thus, how to obtain the points in the LPN is a problem. Because the data points in the LPN are located in the overlapped area between the two sphere regions, they can be obtained as
$$\mathrm{LPN}(x_i, x_j) = \{\, x_k \mid d(x_k, c_1) \le r \ \text{and}\ d(x_k, c_2) \le r \,\}. \tag{5}$$
Now consider the data points dispersedly located in the LPN. How to derive a similarity measurement from the spatial structure of these points is the key to the LPDM algorithm. In this study, each point $x_k$ in the LPN is projected onto the straight line connecting the points $x_i$ and $x_j$, where the projective point is denoted by $p_k$. The projective point can be calculated coordinate-wise via the standard orthogonal projection
$$p_k = x_i + \frac{(x_k - x_i) \cdot (x_j - x_i)}{\lVert x_j - x_i \rVert^2}\,(x_j - x_i). \tag{6}$$
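The LPN membership test and the projection can be sketched together (an illustrative reading of (5) and (6); the center points and radius are assumed inputs computed beforehand):

```python
import numpy as np

def project_onto_line(x, xi, xj):
    # Orthogonal projection of x onto the line through x_i and x_j (Eq. (6)).
    xi, xj, x = (np.asarray(a, float) for a in (xi, xj, x))
    u = xj - xi
    t = np.dot(x - xi, u) / np.dot(u, u)
    return xi + t * u

def points_in_lpn(X, c1, c2, r):
    # Eq. (5): keep the points inside both spheres of radius r around the
    # two LPN centers c1 and c2.
    X = np.asarray(X, float)
    in1 = np.linalg.norm(X - np.asarray(c1, float), axis=1) <= r
    in2 = np.linalg.norm(X - np.asarray(c2, float), axis=1) <= r
    return X[in1 & in2]
```

For example, a point just off the segment projects onto it, and a faraway point is excluded from the LPN.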
Evidently, the straight line connecting the points $x_i$ and $x_j$ is divided into several line segments by these projective points. As we know, data points that are close in space and of uniform density tend to belong to the same cluster. The Euclidean distances between projective points in the LPN represent the local similarity among the data points: if the data points are located in the same cluster, consistent projection distances exist in the LPN. The structural information of the local dataset is thus reflected by the number and the lengths of the line segments.
Remark 2. We know that this method stresses the local spatial structure and can avoid seeking the shortest path that connects any two nodes in an undirected graph.
It may now seem obvious that the local structure of datasets is reflected by the projection distances in the LPN. How to obtain a meaningful similarity measure among data points from these projective distances is crucially important in the LPDM algorithm. Here, a novel similarity measure is addressed, motivated by the discussions in [25].
The new adjustable projection distance is defined as
$$f\big(d(x_i, x_j)\big) = \rho^{\,d(x_i, x_j)} - 1, \tag{7}$$
where $d(x_i, x_j)$ is the Euclidean distance between the points $x_i$ and $x_j$ and $\rho$ is the flexing factor. The Euclidean distance can be enlarged or shortened by the nonlinear function (7).
As we know, the similarity of a pair of points can be reflected by the distance between them. Nevertheless, a pair of points with a longer distance might still belong to the same cluster when a large number of points are uniformly distributed between them. Therefore, the lengths of the line segments connecting the projective points in the LPN are adjusted by the nonlinear function (7). According to the spatial structure of the local dataset, a new distance measure for the pair of points can be obtained by summing the adjusted lengths of these line segments. Let the projective points be ordered along the segment as $p_1, \ldots, p_m$; then the new distance between two points $x_i$ and $x_j$ can be obtained as
$$D(x_i, x_j) = \sum_{t=0}^{m} f\big(d(p_t, p_{t+1})\big), \tag{8}$$
where $m$ is the number of projective points in the LPN. Notice that the points $p_0$ and $p_{m+1}$ are $x_i$ and $x_j$, respectively.
The similarity of a pair of points is inversely proportional to their distance. Therefore, the similarity can be computed as
$$W_{ij} = \frac{1}{D(x_i, x_j) + 1}. \tag{9}$$
The novel similarity metric can highlight the diversity of the local structure of datasets and avoid seeking the shortest path in the graph.
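The chain (7)–(9) can be sketched as follows. This is a hedged illustration under stated assumptions: the flexing function $f(d) = \rho^{d} - 1$ is the density-sensitive form borrowed from [25], and the inverse similarity $1/(1 + D)$ is one plausible reading of (9); the paper's exact forms may differ.

```python
import numpy as np

def lpdm_similarity(proj_points, xi, xj, rho=2.0):
    # Order x_i, the projective points, and x_j along the segment, flex each
    # segment length with f(d) = rho**d - 1 (assumed form, as in [25]),
    # sum the flexed lengths (Eq. (8)), and invert (assumed Eq. (9)).
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    length = np.linalg.norm(xj - xi)
    u = (xj - xi) / length
    # Scalar positions of the endpoints and projections along the line.
    ts = sorted([0.0, length]
                + [float(np.dot(np.asarray(p, float) - xi, u)) for p in proj_points])
    segments = np.diff(ts)
    D = float(np.sum(rho ** segments - 1.0))   # Eqs. (7)-(8)
    return 1.0 / (1.0 + D)                     # assumed form of Eq. (9)
```

With one projective point at the midpoint of a length-2 segment and $\rho = 2$, the flexed distance is $D = 2$ and the similarity $1/3$; with no intermediate points the same pair gets $D = 3$ and similarity $1/4$, illustrating how uniformly distributed points between a pair raise their similarity.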
Here, the following example illustrates the process of the LPDM algorithm. Consider the four data points shown in Figure 6.
The similarity of the pair of points will be calculated by the following steps.
Step 1. Compute the two centers of the pair of points with the specified Euclidean radius by (4).
Step 2. Find the data points in the LPN by (5). According to the distances, we can find that only one point belongs to the LPN.
Step 3. Compute the projective point of that point by (6).
Step 4. Calculate the distance between the pair of points by (7) and (8), with the parameter set to 1.
Step 5. The similarity of the pair of points can be estimated by (9).
It is worth mentioning that the nearest-neighbor strategy is adopted to construct the affinity matrix in LPDM. Employing the structural information about the neighborhoods of the correlative points and the novel density sensitive similarity measure, the LPDM algorithm achieves high spectral clustering accuracy. The clustering results on the synthetic datasets achieved via the LPDM algorithm are shown in Figure 7. The algorithm obtains the desired clusters for this dataset.
In order to reduce the computational complexity, the nearest-neighbor strategy is used to construct the affinity matrix [27]. According to the neighbor propagation principle [21], it is unnecessary to obtain all affinity relationships among data points, because neighbor propagation can fully describe the structure of the dataset.
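The nearest-neighbor sparsification mentioned above can be sketched as follows (illustrative; the elementwise-maximum symmetrization, which keeps the graph undirected, is an assumption):

```python
import numpy as np

def knn_sparsify(W, k):
    # Keep only each point's k largest affinities, then symmetrize.
    W = np.asarray(W, float)
    S = np.zeros_like(W)
    idx = np.argsort(-W, axis=1)[:, :k]      # indices of the k largest per row
    rows = np.arange(len(W))[:, None]
    S[rows, idx] = W[rows, idx]
    return np.maximum(S, S.T)                # undirected graph: W_ij = W_ji
```

On a small affinity matrix this keeps each point's strongest link and restores symmetry for links kept in only one direction.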
In this subsection, the LPDM algorithm has been presented. Contrary to the classical similarity measures based on the Gaussian kernel function, the similarity among data points achieved via the LPDM algorithm can be constructed directly, and the measured values better reflect the local spatial structure of datasets. The performance of this algorithm will be further illustrated in Section 4.
4. Experimental Results and Analysis
In this section, a number of experiments are conducted to evaluate the performance of the LPDM algorithm, and the sensitivity of its parameters is further analyzed. The experimental results distinctly manifest the advantage of the LPDM algorithm. The experiments are organized as follows. Firstly, four SC algorithms are applied to several synthetic and real-world datasets, and the clustering accuracies of the different algorithms are examined on two small-size datasets. Then, the LPDM algorithm is executed on larger datasets to evaluate the performance of our method. All experiments are implemented in the Matlab 7.12 environment on a PC with an Intel CPU at 1.6 GHz and 4 GB of memory.
In our clustering experiments, clustering accuracy (Acc) [28] and Rand Index (RI) [29] are used to assess the performance of the LPDM algorithm. The Acc is defined as
$$\mathrm{Acc} = \frac{\sum_{i=1}^{n} \delta\big(y_i, \mathrm{map}(c_i)\big)}{n},$$
where $y_i$ and $c_i$ are the true clustering result and the experimental result for the $i$th data point, respectively, $\delta(a, b)$ equals 1 if $a = b$ and 0 otherwise, and $\mathrm{map}(\cdot)$ is a function which maps each cluster label to the corresponding true label.
It is a known fact that there exist $n(n-1)/2$ potential pairwise decisions to estimate whether each pair of data points belongs to the same cluster, where $n$ is the size of the dataset. RI is used to evaluate clustering accuracy, and its value is proportional to the clustering performance. It is defined as
$$\mathrm{RI} = \frac{\mathrm{CD}}{\mathrm{TD}},$$
where CD denotes the quantity of correct decisions and TD denotes the quantity of total decisions.
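The two metrics above can be sketched as follows (the brute-force search over label mappings for Acc is an assumption, adequate for the small cluster counts used in these experiments):

```python
import numpy as np
from itertools import permutations

def acc(true, pred):
    # Clustering accuracy with the best one-to-one label mapping
    # (brute force over permutations; assumes equal label counts).
    true, pred = np.asarray(true), np.asarray(pred)
    pred_labels = sorted(set(pred))
    best = 0
    for perm in permutations(sorted(set(true))):
        mapping = dict(zip(pred_labels, perm))
        best = max(best, sum(mapping[p] == t for p, t in zip(pred, true)))
    return best / len(true)

def rand_index(true, pred):
    # RI = CD / TD over all n(n-1)/2 pairwise same-cluster decisions.
    true, pred = np.asarray(true), np.asarray(pred)
    n = len(true)
    cd = sum((true[i] == true[j]) == (pred[i] == pred[j])
             for i in range(n) for j in range(i + 1, n))
    return cd / (n * (n - 1) / 2)
```

Note that Acc is invariant to a relabeling of the clusters, while RI rewards every correct pairwise same-cluster or different-cluster decision.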
4.1. Parameter Selection
For the NJW, SCDA, SCST, and LPDM algorithms, the parameters need to be set for the above experiments. In order to obtain a reasonable scale parameter, the Iris dataset from the UCI repository is used to evaluate the quality of the scale parameter. In the NJW and SCDA algorithms, the scale parameter $\sigma$ needs to be set. In Figures 8 and 9, we can find that NJW and SCDA gain better performance when the scale parameters are set to 0.2 and 0.01, respectively. The neighbor index is set to 7 in all our experiments, in accordance with SCST in [22]. In LPDM, the flexing factor is set to 12, and 7% of the size of the dataset is adopted as the neighborhood size when LPDM is implemented on each dataset.
4.2. Synthetic Data Experiments
In this subsection, four synthetic datasets of arbitrary shape and various densities are used to test the accuracy of the four SC algorithms. Experimental results are presented in Figure 10.
The first row of synthetic data in Figure 10 is the TwoMoon dataset. It is evident that the data points of each moon should belong to the same cluster. Since the dataset includes nonconvex separate clusters and the two "moons" are very close, the classification of this dataset is a difficult task for SC algorithms. Figure 10 shows that SCST cannot rationally classify the TwoMoon dataset, whereas the other SC approaches correctly identify the genuine clusters. The second row of toy data in Figure 10 includes data of diverse densities, a challenging clustering problem. According to the results, we can find that SC and SCDA cannot classify them effectively, implying that both SC and SCDA are less suitable for the multiscale clustering problem. For the remaining two synthetic datasets, all four SC algorithms obtain the expected clusters. In conclusion, rational classifications can be obtained for all these synthetic datasets by applying LPDM. Thus, the algorithm can handle different clustering problems well.
4.3. Real Datasets Experiments
As we know, the UCI [30] datasets and the MNIST handwritten digits database [31] have been widely used for testing SC algorithms in the clustering problem.
In this subsection, both datasets are used in our experiments to evaluate the performance of the proposed approach. From the UCI databases, we perform experiments on five datasets: Wilt, Wine Quality, Ionosphere, Zoo, and Abalone. The dimension of the data, i.e., the number of attributes, varies from 6 to 34. Table 2 describes the characteristics of these datasets. Unlike the toy data, the dimension of the MNIST database is much higher. Each image of a handwritten digit has been normalized and centered in a 28 × 28 gray-level image. In this experiment, four subsets {6, 9}, {1, 6}, {1, 2, 3}, and {0, 1, 3, 4} are selected to test LPDM, and 200 examples for each digit are randomly chosen from the MNIST training dataset. The basic characteristics of these datasets are summarized in Table 3.
For the UCI datasets, the clustering results are summarized in Figure 11, from which we can find that LPDM outperforms the others in terms of both Acc and RI. Taking the Wine Quality dataset as an example, the method obtains an accuracy of 0.8000 in terms of Acc, while the others achieve 0.3833, 0.5000, and 0.3500, respectively. For the Abalone dataset, the clustering accuracy of LPDM is lower than on the other datasets, but the performance of LPDM is still superior to the other methods.
The experimental results on the MNIST datasets are summarized in Figure 12. As can be seen in the figure, the accuracy of LPDM is higher than that of SC, SCST, and SCDA. For the subset {0, 1, 3, 4}, the accuracies of the four methods are 0.630, 0.6825, 0.5425, and 0.7125 by Acc, respectively. For the four subsets, despite the similar accuracies of SCST and LPDM, the accuracy of SCST is slightly lower than that of LPDM. This indicates that a more reasonable affinity matrix can be constructed by LPDM.
4.4. Image Segmentation Experiments
Image segmentation is one of the applications of SC. The SC algorithm can be easily evaluated by the results of image segmentation: we can learn whether the results "look good," whether an algorithm works only on small-size datasets, and so on. Here, the LPDM algorithm is applied to image segmentation, and its ability can be intuitively evaluated by observation. In Figure 13, two original images (a) and (d) in JPG format are used in this experiment, chosen from [20]. To reduce the cost of computation and memory space, we resize image (a), so that the sizes of the two images (a) and (d) are 12288 pixels and 3072 pixels, respectively. As we know, it is difficult for SC to segment salient objects from a complicated background, especially for images with a large number of pixels. In contrast, as can be seen from Figure 13, the child and the fire hydrant are partitioned successfully from the backgrounds of images (a) and (d).
4.5. Parameter Sensitiveness
In the last part of the experiments, the parameter sensitiveness of the LPDM approach is studied. The stability of the algorithm depends on its two parameters, the flexing factor and the neighborhood size, and setting these two parameters is the crucial problem of LPDM. Here, the Wilt, Wine Quality, Ionosphere, Zoo, and Abalone datasets are applied to evaluate the sensitiveness of the two parameters.
For the flexing factor, the algorithm is evaluated over a range of values. Figures 14(a) and 14(b) show the Acc rate and RI rate of LPDM on the five datasets. We can see that changes within the tested intervals of the parameter have little impact on the Acc rate and RI rate, respectively; apparently, the algorithm works well over the tested interval. Figures 14(c) and 14(d) show that LPDM is insensitive to the neighborhood size, except on the Wine Quality dataset. Hence, it is advisable to adopt a value within the recommended interval. Experimental results show that, in most cases, LPDM is insensitive to the two parameters within the intervals recommended in this subsection.
5. Conclusion
A local projection distance measurement for spectral clustering is proposed in this paper, which utilizes projective data points in the LPN to detect the local spatial structure of the distribution of datasets. Employing a novel density sensitive similarity measure, local spatial structural information of datasets can be exploited and converted into the similarity measure of a pair of data points. Meanwhile, a nearest-neighbor sparsification strategy is adopted to reduce both the computational difficulties and the memory consumption. The numerical results presented show that the local projection distance measure approach is able to correctly cluster many synthetic datasets, UCI datasets, the MNIST handwritten digits database, and images, and is less sensitive to parameters than other classical SC approaches.
Several problems remain open. For instance, how to automatically and effectively set the specific parameters of our algorithm will be dealt with in future work.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grant 81360229), the Research Fund for the Doctoral Program of Higher Education (Grant 20116201110002), the Open Project Program of the National Laboratory of Pattern Recognition (Grant 201407347), and the Natural Science Foundation of Gansu Province (Grants 1308RJZA225 and 145RJ2A065).
References
 A. Rajendran and R. Dhanasekaran, "Enhanced possibilistic fuzzy C-means algorithm for normal and pathological brain tissue segmentation on magnetic resonance brain image," Arabian Journal for Science and Engineering, vol. 38, no. 9, pp. 2375–2388, 2013.
 S. Ghaffarian and S. Ghaffarian, "Automatic histogram-based fuzzy C-means clustering for remote sensing imagery," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 97, pp. 46–57, 2014.
 S. Zeng, R. Huang, Z. Kang, and N. Sang, "Image segmentation using spectral clustering of Gaussian mixture models," Neurocomputing, vol. 144, pp. 346–356, 2014.
 U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth, "From data mining to knowledge discovery in databases," AI Magazine, vol. 17, no. 3, pp. 37–53, 1996.
 I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, Morgan Kaufmann, 1999.
 F. Mohamad, T. S. Cevat, A. Mehdi, and P. Farzad, "Fracture characteristics of AISI D2 tool steel at different tempering temperatures using acoustic emission and fuzzy C-means clustering," Arabian Journal for Science and Engineering, vol. 38, no. 8, pp. 2205–2217, 2013.
 D. Jiang, C. Tang, and A. Zhang, "Cluster analysis for gene expression data: a survey," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 11, pp. 1370–1386, 2004.
 I. Fister Jr., I. Fister, D. Fister, and S. Fong, "Data mining in sporting activities created by sports trackers," in Proceedings of the International Symposium on Computational and Business Intelligence (ISCBI '13), pp. 88–91, August 2013.
 N. Cai, J.-W. Cao, H.-Y. Ma, and C.-X. Wang, "Swarm stability analysis of nonlinear dynamical multi-agent systems via relative Lyapunov function," Arabian Journal for Science and Engineering, vol. 39, no. 3, pp. 2427–2434, 2014.
 N. Cai, J. Cao, M. Liu, and H. Ma, "On controllability problems of high-order dynamical multi-agent systems," Arabian Journal for Science and Engineering, vol. 39, no. 5, pp. 4261–4267, 2014.
 N. Cai, J. Cao, and M. J. Khan, "A controllability synthesis problem for dynamic multi-agent systems with linear high-order protocol," International Journal of Control, Automation and Systems, vol. 12, no. 6, pp. 1366–1371, 2014.
 F. Chung, Spectral Graph Theory, American Mathematical Society, Providence, RI, USA, 1997.
 K. Taşdemir, "Vector quantization based approximate spectral clustering of large datasets," Pattern Recognition, vol. 45, no. 8, pp. 3034–3044, 2012.
 B. Mohar, "The Laplacian spectrum of graphs," in Graph Theory, Combinatorics, and Applications, Y. Alavi, G. Chartrand, O. Ollermann, and A. Schwenk, Eds., vol. 2, pp. 871–898, Wiley, New York, NY, USA, 1991.
 E. L. Johnson, A. Mehrotra, and G. L. Nemhauser, "Min-cut clustering," Mathematical Programming, vol. 62, no. 1–3, pp. 133–151, 1993.
 W. Chen and G. Feng, "Spectral clustering: a semi-supervised approach," Neurocomputing, vol. 77, no. 1, pp. 229–242, 2012.
 J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
 L. Hagen and A. B. Kahng, "New spectral methods for ratio cut partitioning and clustering," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 11, no. 9, pp. 1074–1085, 1992.
 C. Ding, X. He, H. Zha, M. Gu, and H. Simon, "A min-max cut algorithm for graph partitioning and data clustering," in Proceedings of the IEEE International Conference on Data Mining (ICDM '01), pp. 107–114, San Jose, Calif, USA, 2001.
 A. Ng, M. Jordan, and Y. Weiss, "On spectral clustering: analysis and an algorithm," in Advances in Neural Information Processing Systems 14, pp. 849–856, 2001.
 X. Y. Li and L. J. Guo, "Constructing affinity matrix in spectral clustering based on neighbor propagation," Neurocomputing, vol. 97, pp. 125–130, 2012.
 L. Zelnik-Manor and P. Perona, "Self-tuning spectral clustering," in Advances in Neural Information Processing Systems (NIPS), vol. 17, pp. 1601–1608, 2004.
 H. Chang and D.-Y. Yeung, "Robust path-based spectral clustering," Pattern Recognition, vol. 41, no. 1, pp. 191–203, 2008.
 X. Zhang, J. Li, and H. Yu, "Local density adaptive similarity measurement for spectral clustering," Pattern Recognition Letters, vol. 32, no. 2, pp. 352–358, 2011.
 P. Yang, Q. Zhu, and B. Huang, "Spectral clustering with density sensitive similarity function," Knowledge-Based Systems, vol. 24, no. 5, pp. 621–628, 2011.
 D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf, "Learning with local and global consistency," in Proceedings of the Neural Information Processing Systems (NIPS '04), vol. 16, pp. 321–328, 2004.
 F. Zhao, H. Liu, and L. Jiao, "Spectral clustering with fuzzy similarity measure," Digital Signal Processing, vol. 21, no. 6, pp. 701–709, 2011.
 F. Zhao, L. Jiao, H. Liu, X. Gao, and M. Gong, "Spectral clustering with eigenvector selection based on entropy ranking," Neurocomputing, vol. 73, no. 10–12, pp. 1704–1717, 2010.
 W. M. Rand, "Objective criteria for the evaluation of clustering methods," Journal of the American Statistical Association, vol. 66, no. 336, pp. 846–850, 1971.
 C. C. Blake and C. J. Merz, "UCI repository of machine learning databases," http://www.ics.uci.edu/mlearn/MLRepository.html.
 Y. LeCun and C. Cortes, "The MNIST database of handwritten digits," 2009, http://yann.lecun.com/exdb/mnist/.
Copyright
Copyright © 2015 Chen Diao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.