Mathematical Problems in Engineering


Research Article | Open Access

Volume 2020 | Article ID 5175210 | https://doi.org/10.1155/2020/5175210

Yufang Min, Yaonan Zhang, "Exact k-Component Graph Learning for Image Clustering", Mathematical Problems in Engineering, vol. 2020, Article ID 5175210, 10 pages, 2020. https://doi.org/10.1155/2020/5175210

Exact k-Component Graph Learning for Image Clustering

Academic Editor: Gen Q. Xu
Received: 03 Jan 2020
Revised: 14 May 2020
Accepted: 15 May 2020
Published: 06 Aug 2020

Abstract

The performance of graph-based clustering methods depends heavily on the quality of the data affinity graph, since a good affinity graph approximates well the pairwise similarity between data samples. To a large extent, existing graph-based clustering methods construct the affinity graph with a fixed distance metric, which is often not an accurate representation of the underlying data structure. Also, they require postprocessing on the affinity graph to obtain clustering results, so the results are sensitive to the particular graph construction method. To address these two drawbacks, we propose a k-component graph clustering (k-GC) approach that learns an intrinsic affinity graph and obtains clustering results simultaneously. Specifically, k-GC learns the data affinity graph by assigning adaptive and optimal neighbors to each data point based on local distances. Efficient iterative updating algorithms are derived for k-GC, along with proofs of convergence. Experiments on several benchmark datasets demonstrate the effectiveness of k-GC.

1. Introduction

Clustering is one of the most fundamental topics in computer vision and pattern recognition. The objective of clustering is to discover the data structure and partition a group of data points into several clusters, where the similarity of data points within the same cluster is greater than the similarity between data points from different clusters [1–8].

The structure of data is usually characterized by the affinity matrix of a graph whose edges denote the similarities between data points. If the vertices belonging to each cluster are connected into one component, i.e., there is no edge connecting different clusters, the cut value is zero in graph theory. Our purpose is to learn a graph with an exact number k of components, so that the vertices in each connected component of the graph are partitioned into one cluster.

Given a cluster indicator matrix Y, it can be constructed from data labels: y_ij = 1 if data point x_i is assigned to the j-th cluster, and y_ij = 0 otherwise. Since the affinity matrix is strictly block diagonal in the ideal case, many clustering methods are designed to obtain a block diagonal affinity matrix, or they use it as an important prior [9–13]. Actually, we find that a good graph structure results in a good cluster indicator matrix even if the affinity matrix is not strictly block diagonal. For example, in Figure 1, we can obtain an ideal cluster indicator matrix from the graph itself because each connected component contains exactly one cluster, but the affinity matrix (see Figure 1) constructed from the graph is not strictly block diagonal, i.e., some elements on the diagonal blocks are zeros. If the graph has k connected components, we can directly obtain a good cluster indicator matrix from the graph itself, even when the connected edges are relatively sparse in each connected component. This means that the graph structure, rather than a strictly block diagonal affinity matrix, is the intrinsic quality for obtaining a good clustering result.

We propose a novel graph-based clustering approach to exploit k connected components for clustering, called k-component graph clustering (k-GC). Figure 1 shows a schematic of the k-component graph learning. k-GC aims to learn a graph whose connected edges are tuned adaptively until the graph has exactly k connected components [14]. To evaluate the effectiveness of k-GC, we have conducted experiments on six benchmark datasets in comparison to state-of-the-art approaches. Experimental results demonstrate that k-GC consistently performs better than the other approaches. k-GC makes the following contributions:
(1) k-GC learns a graph with exactly k connected components. Since the vertices in each connected component belong to one cluster, labels are obtained directly from the learned graph itself.
(2) The clustering indicators are obtained from the learned graph itself without performing a postprocessing k-means clustering algorithm.
(3) k-GC can be used as an alternative to spectral clustering (SC). Similar to SC, k-GC only needs an initial affinity graph as input, without involving the raw data.

2. Related Work

SC, which exploits the eigenstructure of a data affinity graph to partition data into different groups, has become one of the most fundamental clustering approaches. Standard SC uses the radial basis function to construct the affinity matrix [15], and its performance relies heavily on the eigenstructure of the affinity matrix [15–20].

In SC, the similarity between pairwise data points x_i and x_j is first computed by the radial basis function to construct the affinity matrix W. Then the eigenvectors of the normalized Laplacian matrix corresponding to the k smallest eigenvalues, denoted F, are regarded as the low-dimensional embedding of the raw data X, and finally k-means clustering is performed on F to obtain the labels.
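For reference, the standard SC pipeline just described can be sketched as follows. The kernel width sigma and the choice of the symmetric normalized Laplacian are illustrative assumptions, not details prescribed by this paper:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clustering(X, k, sigma=1.0, seed=0):
    """Standard SC sketch: RBF affinity, normalized Laplacian,
    k bottom eigenvectors as the embedding, then k-means."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))       # radial basis function affinity
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}.
    L = np.eye(n) - (dinv[:, None] * W) * dinv[None, :]
    _, vecs = np.linalg.eigh(L)
    F = vecs[:, :k]                          # low-dimensional embedding
    _, labels = kmeans2(F, k, minit='++', seed=seed)
    return labels
```

Note that the final k-means step is exactly the postprocessing that k-GC is designed to remove.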

However, due to the ambiguity and uncertainty inherent in the data structure, the intrinsic affinity matrix cannot be determined by a single unified function. Since most existing SC methods use a predefined affinity matrix, the cut value is usually minimized but not zeroed, e.g., ratio cut [21], normalized cut [16], and min-max cut [22], which necessitates a postprocessing k-means clustering step to obtain the clustering labels [15, 17, 18].

Many approaches have been proposed to improve the performance of SC. Generally, they can be categorized into three paradigms:
(1) How to improve data clustering using a predefined affinity matrix [12, 15, 23]
(2) How to construct a better affinity matrix to obtain a better result than standard SC [17, 24, 25]
(3) How to learn the affinity matrix and the clustering structure simultaneously [26–29]

This paper is related to the third paradigm. In the third paradigm, the objective functions of [26, 28, 29] usually involve the raw data, but k-GC does not. All of them employ a rank constraint on the Laplacian matrix [26–29]; clustering with adaptive neighbors (CAN) [26] and constrained Laplacian rank (CLR) [27] are the two methods most related to k-GC. CAN learns the graph from the raw data, whereas CLR learns the graph by minimizing the difference between the input initial graph and the learned graph.

In k-GC, the raw data are not involved in the objective function, and k-GC is directly modified from standard SC to tune the structure of the graph so that an intrinsic graph with exactly k connected components is obtained.

3. k-Component Graph Clustering

Suppose that the data matrix is denoted by X = [x_1, ..., x_n], where X_j denotes the submatrix belonging to the j-th cluster, k denotes the cluster number, x_i denotes the i-th data point, n is the number of data points, and d denotes the dimension.

3.1. Preprocessing

We propose a new, simple neighborhood-preserving method to refine the raw data in this section. Assume that a data point x_i and its nearest data point x_j are both generated from the same point, since they may be disturbed by noise. Then, we define equation (1) to replace x_i by a weighted linear combination of x_i and x_j, where the combination weight is defined by equation (2).

Since a data point is generally closer to its nearest neighbor in the same cluster than to any point in a different cluster, it is straightforward to check from equation (2) that the weight tends to 1 if x_i and x_j belong to the same cluster and tends to 0 if they belong to different clusters. This implies that two data points in the same cluster become closer during the iteration, while two data points in different clusters have very small or even no influence on each other after the preprocessing. With the iteration of equation (1), data points in the same cluster become closer and closer, so the clustering is easier to carry out.
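The preprocessing can be illustrated with a minimal sketch. Equation (2) is not reproduced above, so the combination weight alpha below is an ASSUMED surrogate (an RBF of the nearest-neighbor distance, scaled by its mean), chosen only to exhibit the described behavior: same-cluster pairs contract while distant pairs barely interact:

```python
import numpy as np

def preprocess(X, n_iter=10):
    """Neighborhood-preserving refinement sketch. alpha is an assumed
    surrogate for equation (2): large for a very close nearest neighbor,
    small for a distant one."""
    X = X.copy()
    n = len(X)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)
        j = d2.argmin(axis=1)                 # nearest neighbor of each point
        nn2 = d2[np.arange(n), j]
        alpha = np.exp(-nn2 / nn2.mean())     # assumed combination weight
        # Replace x_i by a convex combination of x_i and its neighbor.
        X = (X + alpha[:, None] * X[j]) / (1.0 + alpha)[:, None]
    return X
```

Iterating this update shrinks each cluster around itself while leaving the separation between clusters essentially intact.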

3.2. k-Component Graph

An undirected finite graph is denoted by G = (V, E), and its vertex set is denoted by V = {v_1, ..., v_n}. We consider a weighted graph and assign a non-negative real weight w_ij to each pair of vertices v_i and v_j. The degree of a vertex v_i is defined by d_i = Σ_j w_ij. The Laplacian matrix of the graph [30] is defined by L = D − W, where D is the diagonal matrix whose i-th diagonal element is d_i.

For any vector f, we have f^T L f = (1/2) Σ_{i,j} w_ij (f_i − f_j)^2 ≥ 0, so L is a positive semidefinite matrix. Since L·1 = 0, the smallest eigenvalue of L is equal to zero, and the corresponding eigenvector is the constant one vector 1.

Theorem 1. The number of connected components of G is equal to the multiplicity of 0 as an eigenvalue of L.

Proof. Suppose there are k connected components in the graph G; the i-th subgraph is denoted by G_i, and the corresponding vertex set of G_i is denoted by V_i. Then, we have V = V_1 ∪ ... ∪ V_k and V_i ∩ V_j = ∅ for i ≠ j. For an eigenvector f of L with eigenvalue zero, we have f^T L f = (1/2) Σ_{i,j} w_ij (f_i − f_j)^2 = 0. Within a connected component, every connected edge carries a positive weight w_ij, so each corresponding term w_ij (f_i − f_j)^2 must be zero, i.e., f_i and f_j have to be equal. Propagating this along the edges, f is constant on each connected component. Hence, the eigenspace of the eigenvalue 0 is spanned by the indicator vectors of the k components; these vectors are linearly independent, so the multiplicity of 0 as an eigenvalue equals the number of components of the graph, and vice versa.
A similar proof of Theorem 1 can be found in previous works [13, 18, 31]. We explore Theorem 1 in the context of a specific example. In Figure 1, we show an intrinsic graph with two connected components, and an intrinsic affinity matrix of the graph, consistent with Theorem 1, is given by equation (4), where we constrain the degrees so that the sum of each column of W is equal to one.
The Laplacian matrix of equation (4) has the eigenvalue zero with multiplicity two, and we have the two corresponding eigenvectors f_1 and f_2. They are concatenated into F = [f_1, f_2]; its transposition is shown in equation (5). If the graph in Figure 1 corresponds to the affinity matrix in equation (4), i.e., the Laplacian matrix of equation (4) satisfies Theorem 1, then the clustering labels are easy to obtain without further performing graph-cut or k-means clustering algorithms. Since the vertices in each connected component of the graph are partitioned into one cluster, the clustering labels can be obtained easily from the graph with a strongly connected-component algorithm [32].
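Theorem 1 is easy to verify numerically. The sketch below builds a graph made of two disjoint triangles and checks that the multiplicity of the zero eigenvalue of L = D − W matches the component count returned by a connected-component search:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Two disjoint triangles: a graph with exactly two connected components.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0

L = np.diag(W.sum(axis=1)) - W          # unnormalized Laplacian L = D - W
eigvals = np.linalg.eigvalsh(L)

# Multiplicity of the eigenvalue 0 equals the number of components.
zero_mult = int(np.sum(np.abs(eigvals) < 1e-9))
n_comp, labels = connected_components(csr_matrix(W), directed=False)
print(zero_mult, n_comp)   # both are 2
```

The component labels returned here play exactly the role of the cluster indicators discussed above.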
Since L is a positive semidefinite matrix, it has non-negative eigenvalues 0 = σ_1 ≤ σ_2 ≤ ... ≤ σ_n. Theorem 1 indicates that if Σ_{i=1}^{k} σ_i = 0, then the graph has k connected components and the vertices are already partitioned into k clusters [18, 30, 31, 33]. Then, according to Fan's theorem [14, 34], we have

Σ_{i=1}^{k} σ_i(L) = min_{F, F^T F = I} Tr(F^T L F),   (6)

where Tr denotes the trace operator, I is an identity matrix, L = D − W is the Laplacian matrix, and D is a diagonal matrix whose diagonal elements are the column sums of W.
The proof of Fan's theorem can be found in [13, 35, 36].
It is straightforward to check that the objective function value of equation (6) does not generally tend to zero, because the structure of the graph varies with the graph construction method. In the following section, we propose a method to tune the graph structure adaptively so that the objective function value of equation (6) tends to zero.

3.3. k-GC Algorithm

In this section, we explore how to learn a graph with exactly k connected components [14].

The right-hand side of equation (6) can be solved with respect to F by the eigenvectors of L, but it has a trivial solution with respect to W, i.e., only one entry is assigned a value and the others are zeroed in each column of W. We add ℓ2-norm regularization to smooth the weights in W. Then, we optimize W and F simultaneously:

min_{W, F} Tr(F^T L F) + γ ||W||_F^2,  s.t. w_i^T 1 = 1, w_i ≥ 0, F^T F = I,   (7)

where γ is the trade-off parameter.

If the affinity matrix is consistent with Theorem 1, the procedure of obtaining the clustering labels is named k-GC. In other words, we guarantee that the objective function of standard SC is zeroed in k-GC by tuning the graph structure. We divide problem (7) into two subproblems and solve them alternately.

The first subproblem is to fix W and update F. Then, equation (7) becomes

min_{F^T F = I} Tr(F^T L F).   (8)

Equation (8) is solved by taking the eigenvectors of L corresponding to the k smallest eigenvalues.

The second subproblem is to fix F and update W. Then, equation (7) becomes equation (9), where w_i denotes the i-th column of W.

Each column of W is independent, so solving equation (9) is equivalent to optimizing each column separately, as in problem (10).

Denote by v_i the fixed vector whose j-th entry is v_ij = ||f_i − f_j||^2 when we solve the i-th column w_i. Solving equation (10) is then equivalent to optimizing the following problem:

min_{w_i^T 1 = 1, w_i ≥ 0} ||w_i + v_i / (2γ)||^2,   (11)

where v_i is a constant vector.

Equation (11) is a Euclidean projection problem on the simplex, and there are several algorithms for solving it [37–39]. According to the Karush-Kuhn-Tucker conditions [37], it can be verified that the optimal solution is given by equation (12).
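The simplex projection has a well-known closed-form solution via sorting and thresholding [37–39]. The function below is one standard realization of that KKT solution; the exact notation of equation (12) in the paper may differ:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}
    by the sort-and-threshold method implied by the KKT conditions."""
    u = np.sort(v)[::-1]                  # sort entries in descending order
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    eta = (1.0 - css[rho]) / (rho + 1)    # optimal shift (KKT multiplier)
    return np.maximum(v + eta, 0.0)
```

For example, projecting (0.5, 0.2) shifts both entries up by 0.15, giving (0.65, 0.35), which sums to one.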

Equation (10) is analysed in detail in Section 3.4, where we show how to obtain exactly k connected components.

We alternately optimize equations (9) and (8) until the sum of the k smallest eigenvalues of L becomes zero. The procedure for solving equation (7) is summarized in Algorithm 1.

(1) Input: A data matrix X, the number of neighbors m, and the number of clusters k.
(2) Output: The affinity graph W.
(3) Initialize: The graph W is initialized by equation (9), where we replace ||f_i − f_j||^2 by ||x_i − x_j||^2; F is initialized from the initial graph by solving equation (8).
(4) Iterate equation (1) several times.
(5) While not converged do
(6)  For i = 1, ..., n do
(7)   Update w_i by using equation (12);
(8)  end for
(9)  Update the Laplacian matrix L of the learned graph;
(10) Update F by using equation (8), i.e., F is formed by the eigenvectors corresponding to the k smallest eigenvalues of L.
(11) end while
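Algorithm 1 can be sketched end to end as follows. This is a simplified reading, not the authors' code: the trade-off gamma is held fixed rather than adapted, the initialization of equation (9) is replaced by simple column normalization of the input graph, and the W-update uses the sort-and-threshold simplex projection:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def project_simplex(v):
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v + (1.0 - css[rho]) / (rho + 1), 0.0)

def k_gc(W0, k, gamma=0.05, max_iter=30):
    """Alternate the F-update (k bottom eigenvectors of L) and the
    column-wise W-update (simplex projection) until the k smallest
    eigenvalues of L sum to (near) zero, i.e., exactly k components."""
    n = W0.shape[0]
    W = W0 / W0.sum(axis=0, keepdims=True)   # columns sum to one
    for _ in range(max_iter):
        S = (W + W.T) / 2
        L = np.diag(S.sum(axis=1)) - S       # Laplacian of learned graph
        vals, F = np.linalg.eigh(L)
        if vals[:k].sum() < 1e-10:
            break                            # k components reached
        Fk = F[:, :k]
        V = ((Fk[:, None, :] - Fk[None, :, :]) ** 2).sum(-1)
        for i in range(n):
            # min_w  sum_j V[j, i] w_j + gamma ||w||^2  over the simplex
            W[:, i] = project_simplex(-V[:, i] / (2 * gamma))
    return W

# Two dense 4-node blocks joined by one weak bridge edge.
W0 = np.zeros((8, 8))
W0[:4, :4] = W0[4:, 4:] = 1.0
np.fill_diagonal(W0, 0.0)
W0[3, 4] = W0[4, 3] = 0.05
W = k_gc(W0, k=2)
n_comp, labels = connected_components(csr_matrix(W + W.T), directed=False)
print(n_comp)   # 2: the bridge is cut, the vertices are partitioned
```

On this toy input, the first W-update already zeroes the bridge edge, after which the stopping condition (the k smallest eigenvalues summing to zero) fires.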
3.4. k-Component Graph Learning

We seek a trade-off between two graph structures: in the first case, each vertex is connected with only one other vertex in the vertex set, and in the second case, all vertices are connected with each other by the same weight. The trade-off renders the objective function of equation (6) close to zero.

We optimize each column of W independently; one column of W is denoted by w. The values in the vector w are zero or positive: w_j = 0 means that v_j has no edge with v_i, and w_j > 0 means that v_j and v_i are connected by an edge.

The first case is to optimize the following problem (13):

It returns the minimum distance value, i.e., v_i is connected only to its single nearest vertex.

The second case is the following problem (14):

It returns the uniform weight w_j = 1/n for every vertex.

Thus, an adaptive graph learning objective function is given by equation (15), where γ is the trade-off parameter.

If γ = 0, then optimizing equation (15) is equivalent to solving equation (13); if γ tends to positive infinity, then equation (15) becomes equation (14). If we want to learn a sparse graph, we can tune γ to a small value; if we want to learn a graph with more edges, we can tune γ to a relatively large value.

Solving equation (15) is equivalent to optimizing problem (16).

According to [40], the optimal affinities are given by equation (17).

Actually, it is straightforward to check from equation (17) that the number of neighbors m is also determined by the setting of γ. In practice, the structure of the graph can be tuned coarsely by m and finely by the trade-off parameter γ. The neighbor number m has an explicit meaning, while γ has an implicit relation with the structure of the graph. We tune both of them to obtain an intrinsic affinity matrix.
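A closed-form sparse initial graph in the style of [26, 40] can be sketched as follows; this is an assumed reading of equation (17), not the authors' exact formula. The trade-off parameter is chosen per point so that exactly m neighbors receive nonzero weight, and here rows (rather than columns) are normalized to sum to one:

```python
import numpy as np

def initial_graph(X, m):
    """Sparse affinities with exactly m neighbors per point: weights
    decay linearly with squared distance, and the implicit trade-off
    parameter makes the (m+1)-th neighbor receive exactly zero weight."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # forbid self-loops
    idx = np.argsort(d2, axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        nb = idx[i, :m + 1]                      # m neighbors plus one extra
        d = d2[i, nb]
        # Weight of neighbor j: (d_{m+1} - d_j) / (m d_{m+1} - sum_h d_h).
        W[i, nb[:m]] = (d[m] - d[:m]) / (m * d[m] - d[:m].sum())
    return W
```

By construction each row is non-negative, sums to one, and has at most m nonzero entries, so m directly controls the coarse sparsity of the graph.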

As with standard SC, we can input any affinity matrix to k-GC. There is one parameter γ in equation (7). In Algorithm 1, we use equation (9) to obtain the initial graph by replacing ||f_i − f_j||^2 with ||x_i − x_j||^2. For each i we obtain a different γ_i, so before the iteration, we set γ from these values to preserve the initial structure. The initial graph structure is mainly determined by m.

3.5. Convergence Analysis

Since the second-order derivative of equation (16) with respect to w is positive, equation (16) is a convex problem. Because the Laplacian matrix L is positive semidefinite, equation (8) is also a convex optimization problem. Optimizing W and F alternately, both subproblem objectives decrease monotonically. As a result, the overall objective function value of equation (7) decreases monotonically in each iteration until Algorithm 1 converges.

3.6. Computational Complexity Analysis

The first step of solving the objective function (7) is to solve equation (10); its cost is dominated by computing the embedding distances v_i and performing the n simplex projections of equation (12). The second step is an eigendecomposition procedure: for solving equation (8), we need to calculate the k bottom eigenvectors of L. Thus, the total complexity of solving equation (7) scales linearly with the number of iterations t of the two alternating steps.

4. Experimental Results

In this section, we conduct experiments on six datasets to demonstrate the effectiveness of k-GC in terms of clustering accuracy (ACC) and normalized mutual information (NMI).

4.1. Dataset Description

The six datasets include the following:
(1) Two-moon dataset is a randomly generated synthetic dataset and has two clusters of data distributed in a moon shape. Each cluster has 100 samples, and the noise percentage is set to 0.20.
(2) Path-based dataset [41] is made up of 300 samples that belong to three clusters.
(3) Yale dataset (http://vision.ucsd.edu/content/yale-face-database) contains 165 grayscale images of 15 individuals. Each individual has 11 images taken under different facial expressions or configurations.
(4) ORL dataset (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html) contains ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions, and facial details. All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position.
(5) COIL-20 dataset [42] contains 1440 images of 20 object categories. Each category includes 72 images, and all the images are normalized to 32 × 32 pixel arrays.
(6) Notting-Hill dataset is derived from the movie "Notting Hill" and includes 4660 faces in 76 tracks belonging to five main casts.

The first two datasets are two-dimensional synthetic datasets. For the Yale, ORL, COIL-20, and Notting-Hill datasets, we employ the intensity feature of the images [43, 44]. A small fraction of the samples in these image datasets is shown in Figure 2. The dataset description is summarized in Table 1.


Dataset       Samples  Dimensions  Clusters
Two-moon      200      2           2
Path-based    300      2           3
Yale          165      4096        15
ORL           400      4096        40
COIL-20       1440     1024        20
Notting-Hill  4660     2000        5

4.2. Experimental Setup

We compare k-GC with six methods, and the results are summarized in Table 2. We select three classic methods: k-means clustering [45], ratio cut clustering (R-cut) [21], and self-tuning SC (ST-SC) [17]. Besides, we select three related state-of-the-art methods: robust graph for SC (RG-SC) [24], clustering with adaptive neighbors (CAN) [26], and constrained Laplacian rank (CLR) [27].


ACC (%)
Dataset       k-means       R-cut         ST-SC         RG-SC         CAN           CLR           k-GC
Two-moon      75.50 ± 0.00  73.00 ± 0.00  79.00 ± 0.00  78.00 ± 0.00  93.00 ± 0.00  87.50 ± 0.00  98.50 ± 0.00
Path-based    74.33 ± 0.00  76.85 ± 2.05  79.33 ± 0.00  60.66 ± 0.00  87.00 ± 0.00  87.00 ± 0.00  99.00 ± 0.00
Yale          50.30 ± 2.68  29.75 ± 1.86  57.63 ± 1.49  62.90 ± 0.25  52.73 ± 0.00  59.39 ± 0.00  68.48 ± 0.00
ORL           55.45 ± 3.10  30.12 ± 1.37  68.27 ± 1.78  59.82 ± 0.86  56.50 ± 0.00  63.75 ± 0.00  70.75 ± 0.00
COIL-20       66.79 ± 2.65  45.72 ± 0.99  69.34 ± 0.06  76.65 ± 0.02  90.14 ± 0.00  87.29 ± 0.00  94.03 ± 0.00
Notting-Hill  79.53 ± 0.61  48.92 ± 0.00  78.71 ± 0.00  80.86 ± 0.01  70.62 ± 0.00  74.79 ± 0.00  83.58 ± 0.00

NMI (%)
Dataset       k-means       R-cut         ST-SC         RG-SC         CAN           CLR           k-GC
Two-moon      19.79 ± 0.00  15.92 ± 0.00  25.85 ± 0.00  24.10 ± 0.00  69.37 ± 0.00  54.88 ± 0.00  90.22 ± 0.00
Path-based    51.28 ± 0.01  50.92 ± 8.38  57.29 ± 0.00  27.72 ± 2.35  75.63 ± 0.00  75.63 ± 0.00  95.82 ± 0.00
Yale          56.17 ± 1.98  36.27 ± 2.62  61.72 ± 0.92  65.49 ± 0.27  55.40 ± 0.00  59.21 ± 0.00  68.63 ± 0.00
ORL           74.31 ± 1.56  55.26 ± 1.04  82.60 ± 0.72  77.91 ± 0.43  74.07 ± 0.00  79.98 ± 0.00  83.33 ± 0.00
COIL-20       77.83 ± 1.22  55.99 ± 0.47  78.48 ± 0.30  84.83 ± 0.03  94.60 ± 0.00  94.14 ± 0.00  96.42 ± 0.00
Notting-Hill  68.09 ± 1.08  26.94 ± 0.00  64.19 ± 0.00  67.39 ± 0.01  61.80 ± 0.00  71.05 ± 0.00  77.93 ± 0.00

The default parameters given by the respective authors are adopted.
(1) k-means [45] clusters samples based on the similarity between samples and cluster centroids. There is no parameter to tune in k-means. We run k-means clustering 10 times to evaluate the performance. To reduce the effect of random initialization, we run it 30 times within each evaluation and report the result with the minimum value of the k-means objective function.
(2) R-cut [21] finds the first k eigenvectors of the Laplacian matrix of the graph so as to minimize the similarity between the two parts of the graph. Specifically, R-cut advocates the second eigenvalue of the Laplacian matrix, which gives the optimal solution.
(3) ST-SC [17] constructs a graph on a local scale, where each data point chooses different neighbors. It has a parameter to determine the number of nearest neighbors for graph construction, and we tune the neighbor number to report the best results in terms of the k-means objective value.
(4) RG-SC [24] generates robust affinity graphs for spectral clustering by identifying and exploiting discriminative features based on an unsupervised clustering random forest. The default parameters are used in our experiments.
(5) CAN [26] learns the data similarity matrix and the clustering structure simultaneously and uses a rank constraint so that the clustering structure in the similarity matrix forms several disconnected components. The number of nearest neighbors is searched from 2 to 50 in increments of 2.
(6) CLR [27] imposes a Laplacian rank constraint on the learned graph, which best approximates the input initial affinity graph. In our experiments, the number of nearest neighbors is searched from 2 to 50 with an interval of 2 to obtain the best parameter.

For k-GC, we tune the neighbor number m with an interval of two, and γ is fixed accordingly. For preprocessing, we iterate equation (1) ten times. We select the optimal m in terms of the objective value of k-means clustering performed on F. k-GC obtains the clustering results from the learned graph itself because the data points in each connected component belong to one cluster, and the clustering indicators are directly obtained by Tarjan's strongly connected-component algorithm [32]. In practice, we can determine γ in a heuristic way to accelerate the procedure [26]. As the stopping condition that the sum of the k smallest eigenvalues of L is zeroed, i.e., the first term of equation (7) is zeroed, is stronger than the conventional iteration stopping condition [26], we adopt the former. Because of this stopping condition, we set γ to a constant, then increase γ if the number of connected components of W is larger than k and decrease γ if it is smaller than k during the iteration.
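The γ-adaptation heuristic just described can be written in a few lines. The doubling/halving factor is an assumption, since the text does not specify a step size:

```python
def adapt_gamma(gamma, n_components, k, factor=2.0):
    """Grow gamma (denser graph) when the learned graph has too many
    components, shrink it (sparser graph) when it has too few.
    The factor 2.0 is an assumed step size."""
    if n_components > k:
        return gamma * factor
    if n_components < k:
        return gamma / factor
    return gamma
```

When the component count equals k, γ is left unchanged and the stopping condition can fire.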

For all these methods, we run each method 10 times and report the mean of performance as well as the standard deviation in Table 2.

4.3. Results and Analysis

We choose six datasets to demonstrate the effectiveness of k-GC; the first two are synthetic, while the rest are real-world datasets. Quantitative results of the different methods are shown in Table 2, from which we can see that the performance is improved to a large extent in comparison to the state-of-the-art methods. The clustering results of the compared methods on the two synthetic datasets are visualized in Figure 3, from which we can observe intuitively that k-GC approximates the ground truth most closely. Notably, on the two synthetic datasets, two-moon and path-based, the results approach 100%, meaning almost entirely accurate clustering. This implies the great importance of the intrinsic graph structure, through a comprehensive analysis of which we obtain a significant improvement over the other methods.

Figure 4 shows that the objective value of k-GC, equation (7), is nonincreasing during the iterations. On all datasets, it converges to a fixed value very quickly, within 10 iterations. k-GC only needs a few alternating iterations to obtain a cut-zeroed graph, while conventional SC-based methods need to perform postprocessing with the k-means clustering algorithm. Therefore, the k-GC algorithm is effective overall.

5. Conclusion

In this paper, we have proposed a novel k-component graph clustering method, which learns a graph with exactly k connected components. Since the vertices in each connected component of the intrinsic graph belong to one cluster, labels are obtained directly from the learned graph itself without further graph-cut or k-means clustering algorithms. k-GC learns the affinity matrix and the clustering structure simultaneously. Moreover, k-GC can serve as an alternative to SC due to its simplicity and its effectiveness over standard SC. This paper focuses on the spectral analysis of the graph Laplacian, for which the sum of the k smallest eigenvalues is driven to zero. This forces the graph structure to update until it has exactly k components. An efficient optimization algorithm is presented along with detailed analysis. Experiments on six benchmarks have demonstrated the superiority of k-GC.

Data Availability

All the datasets are available at https://github.com/lzu-cvpr/data_for_clustering.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the 13th Five-Year Informatization Plan of the Chinese Academy of Sciences (grant nos. XXH13506 and XXH13505-220) and Data Sharing Fundamental Program for Construction of the National Science and Technology Infrastructure Platform (grant no. Y719H71006).

References

  1. H. Parvin and B. Minaei-Bidgoli, "A clustering ensemble framework based on elite selection of weighted clusters," Advances in Data Analysis and Classification, vol. 7, no. 2, pp. 181–208, 2013.
  2. H. Parvin and B. Minaei-Bidgoli, "A clustering ensemble framework based on selection of fuzzy weighted clusters in a locally adaptive clustering algorithm," Pattern Analysis and Applications, vol. 18, no. 1, pp. 87–112, 2015.
  3. X. Chang, Y.-L. Yu, Y. Yang, and E. P. Xing, "Semantic pooling for complex event analysis in untrimmed videos," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1617–1632, 2016.
  4. X. Chang, Z. Ma, M. Lin, Y. Yang, and A. G. Hauptmann, "Feature interaction augmented sparse learning for fast kinect motion detection," IEEE Transactions on Image Processing, vol. 26, no. 8, pp. 3911–3920, 2017.
  5. X. Chang, Z. Ma, Y. Yang, Z. Zeng, and A. G. Hauptmann, "Bi-level semantic representation analysis for multimedia event detection," IEEE Transactions on Cybernetics, vol. 47, no. 5, pp. 1180–1197, 2016.
  6. X. Chang and Y. Yang, "Semisupervised feature analysis by mining correlations among multiple tasks," IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2294–2305, 2016.
  7. Z. Li, F. Nie, X. Chang, and Y. Yang, "Beyond trace ratio: weighted harmonic mean of trace ratios for multiclass discriminant analysis," IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 10, pp. 2100–2110, 2017.
  8. A. Almagbile, "Estimation of crowd density from UAVs images based on corner detection procedures and clustering analysis," Geo-spatial Information Science, vol. 22, no. 1, pp. 23–34, 2019.
  9. G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, "Robust recovery of subspace structures by low-rank representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
  10. E. Elhamifar and R. Vidal, "Sparse subspace clustering: algorithm, theory, and applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2765–2781, 2013.
  11. J. Feng, Z. Lin, H. Xu, and S. Yan, "Robust subspace segmentation with block-diagonal prior," in Proceedings of the CVPR, pp. 3818–3825, IEEE, Columbus, OH, USA, June 2014.
  12. C. Lu, S. Yan, and Z. Lin, "Convex sparse spectral clustering: single-view to multi-view," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2833–2843, 2016.
  13. K. Zhan, F. Nie, J. Wang, and Y. Yang, "Multiview consensus graph clustering," IEEE Transactions on Image Processing, vol. 28, no. 3, pp. 1261–1270, 2019.
  14. K. Zhan, C. Zhang, J. Guan, and J. Wang, "Graph learning for multiview clustering," IEEE Transactions on Cybernetics, vol. 48, no. 10, pp. 2887–2895, 2018.
  15. A. Y. Ng, M. I. Jordan, and Y. Weiss, "On spectral clustering: analysis and an algorithm," in Proceedings of the NIPS, vol. 14, pp. 849–856, Vancouver, Canada, December 2002.
  16. J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
  17. L. Zelnik-Manor and P. Perona, "Self-tuning spectral clustering," in Proceedings of the NIPS, pp. 1601–1608, Vancouver, Canada, December 2005.
  18. U. Von Luxburg, "A tutorial on spectral clustering," Statistics and Computing, vol. 17, no. 4, pp. 395–416, 2007.
  19. K. Zhan, C. Niu, C. Chen, F. Nie, C. Zhang, and Y. Yang, "Graph structure fusion for multiview clustering," IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 10, pp. 1984–1993, 2018.
  20. K. Zhan, X. Chang, J. Guan, L. Chen, Z. Ma, and Y. Yang, "Adaptive structure discovery for multimedia analysis using multiple features," IEEE Transactions on Cybernetics, vol. 49, no. 5, pp. 1826–1834, 2019.
  21. L. Hagen and A. B. Kahng, "New spectral methods for ratio cut partitioning and clustering," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 11, no. 9, pp. 1074–1085, 1992.
  22. C. H. Ding, X. He, H. Zha, M. Gu, and H. D. Simon, "A min-max cut algorithm for graph partitioning and data clustering," in Proceedings of the ICDM, pp. 107–114, IEEE, San Jose, CA, USA, November-December 2001.
  23. T. Xiang and S. Gong, "Spectral clustering with eigenvector selection," Pattern Recognition, vol. 41, no. 3, pp. 1012–1029, 2008.
  24. X. Zhu, C. Change Loy, and S. Gong, "Constructing robust affinity graphs for spectral clustering," in Proceedings of the CVPR, pp. 1450–1457, IEEE, Columbus, OH, USA, June 2014.
  25. J. Huang, F. Nie, and H. Huang, "A new simplex sparse learning model to measure data similarity for clustering," in Proceedings of the IJCAI, pp. 3569–3575, Buenos Aires, Argentina, July 2015.
  26. F. Nie, X. Wang, and H. Huang, "Clustering and projected clustering with adaptive neighbors," in Proceedings of the KDD, vol. 20, pp. 977–986, ACM, August 2014.
  27. F. Nie, X. Wang, M. I. Jordan, and H. Huang, "The constrained Laplacian rank algorithm for graph-based clustering," in Proceedings of the AAAI, pp. 1969–1976, AAAI, Phoenix, AZ, USA, February 2016.
  28. X. Zhu, W. He, Y. Li et al., "One-step spectral clustering via dynamically learning affinity matrix and subspace," in Proceedings of the AAAI, pp. 2963–2969, San Francisco, CA, USA, February 2017.
  29. Z. Kang, C. Peng, and Q. Cheng, "Twin learning for similarity and clustering: a unified kernel approach," in Proceedings of the AAAI, pp. 2080–2086, San Francisco, CA, USA, February 2017.
  30. F. R. Chung, Spectral Graph Theory, American Mathematical Society, Providence, Rhode Island, 1997.
  31. B. Mohar, Y. Alavi, G. Chartrand, and O. Oellermann, "The Laplacian spectrum of graphs," Graph Theory, Combinatorics, and Applications, vol. 2, no. 12, pp. 871–898, 1991.
  32. R. Tarjan, "Depth-first search and linear graph algorithms," SIAM Journal on Computing, vol. 1, no. 2, pp. 146–160, 1972.
  33. B. Mohar, "Some applications of Laplace eigenvalues of graphs," in Graph Symmetry, G. Hahn and G. Sabidussi, Eds., pp. 225–275, Kluwer Academic Publishers, Berlin, Germany, 1997.
  34. K. Fan, "On a theorem of Weyl concerning eigenvalues of linear transformations I," Proceedings of the National Academy of Sciences, vol. 35, no. 11, pp. 652–655, 1949.
  35. M. L. Overton and R. S. Womersley, "On the sum of the largest eigenvalues of a symmetric matrix," SIAM Journal on Matrix Analysis and Applications, vol. 13, no. 1, pp. 41–45, 1992.
  36. P. K. Chan, M. D. F. Schlag, and J. Y. Zien, "Spectral K-way ratio-cut partitioning and clustering," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 13, no. 9, pp. 1088–1096, 1994.
  37. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.
  38. J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra, "Efficient projections onto the ℓ1-ball for learning in high dimensions," in Proceedings of the ICML, vol. 25, pp. 272–279, ACM, Helsinki, Finland, July 2008.
  39. L. Condat, "Fast projection onto the simplex and the ℓ1 ball," Mathematical Programming, vol. 158, no. 1-2, pp. 575–585, 2016.
  40. C. Chen, C. Niu, X. Zhan, and K. Zhan, "Generative approach to unsupervised deep local learning," Journal of Electronic Imaging, vol. 28, no. 4, Article ID 043005, 2019.
  41. H. Chang and D.-Y. Yeung, "Robust path-based spectral clustering," Pattern Recognition, vol. 41, no. 1, pp. 191–203, 2008.
  42. S. A. Nene, S. K. Nayar, and H. Murase, Columbia Object Image Library (COIL-20), Columbia University, New York, NY, USA, 1996.
  43. C. Zhang, H. Fu, S. Liu, G. Liu, and X. Cao, "Low-rank tensor constrained multiview subspace clustering," in Proceedings of the ICCV, pp. 1582–1590, Santiago, Chile, December 2015.
  44. X. Cao, C. Zhang, H. Fu, S. Liu, and H. Zhang, "Diversity-induced multi-view subspace clustering," in Proceedings of the CVPR, pp. 586–594, Santiago, Chile, December 2015.
  45. J. MacQueen, "Some methods for classification and analysis of multivariate observations," in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, no. 14, pp. 281–297, Oakland, CA, USA, January 1967.

Copyright © 2020 Yufang Min and Yaonan Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

