Complexity / 2018 / Article

Research Article | Open Access

Volume 2018 |Article ID 2743678 | 12 pages | https://doi.org/10.1155/2018/2743678

Graph Sparse Nonnegative Matrix Factorization Algorithm Based on the Inertial Projection Neural Network

Academic Editor: Pietro De Lellis
Received: 03 May 2017
Accepted: 13 Dec 2017
Published: 19 Mar 2018

Abstract

We present a novel dimensionality reduction method, called graph sparse nonnegative matrix factorization. The affinity graph and a sparse constraint are incorporated into nonnegative matrix factorization, and it is shown that the proposed factorization respects the intrinsic graph structure and provides a sparse representation. Unlike some existing traditional methods, an inertial projection neural network is developed to optimize the proposed matrix factorization problem. By adjusting one parameter of the neural network, the global optimal solution can be searched for. Finally, simulations on numerical examples and clustering on real-world data illustrate the effectiveness and performance of the proposed method.

1. Introduction

Dimensionality reduction plays a fundamental role in image processing, and many researchers have sought effective methods for this problem. A given image database contains many distinct features, but only a small fraction of them are truly informative. Thus, it is of great significance to find useful low-dimensional features to represent the original feature space. For this purpose, matrix factorization techniques have attracted great attention in recent decades [1–3], and many different methods have been developed using different criteria. The most familiar methods include Singular Value Decomposition (SVD) [4], Principal Component Analysis (PCA) [5], and Vector Quantization (VQ) [6]. The main idea of matrix factorization is to find several matrices whose product approximates the original matrix. In dimensionality reduction, the dimensions of the factor matrices are smaller than those of the original matrix. This yields a low-dimensional compact representation of the original data points, which facilitates clustering or classification.

Among these matrix factorization methods, one of the most widely used is nonnegative matrix factorization (NMF) [3], which requires the factor matrices to be nonnegative. The nonnegativity constraint leads NMF to learn a parts-based representation of high-dimensional data, and NMF has been applied in many areas such as signal processing [7], data mining [8, 9], and computer vision [10]. In general, NMF is effective for unsupervised learning problems but not directly applicable to supervised ones. To overcome this limitation, some researchers [11–13] have introduced semi-supervised learning theory to achieve better performance in dimensionality reduction. In the spirit of locality preserving projection, a graph regularized nonnegative matrix factorization method (GNMF) has been proposed to impose geometrical information on the data space; the geometrical structure is encoded by a nearest-neighbor graph [11]. Based on the idea of label propagation, Liu et al. [13] imposed a label information constraint on nonnegative matrix factorization (CNMF). The idea of CNMF is that neighboring data points of the same class should merge together in the low-dimensional representation space.

Motivated by previous research on matrix factorization, in this paper we propose a novel dimensionality reduction method, called graph sparse nonnegative matrix factorization (GSNMF), which can be used for semi-supervised learning problems. In GSNMF, a sparse constraint is imposed on GNMF, which leads the factorization to learn a sparse parts-based representation of the original data space. The sparse constraint makes GSNMF a nonconvex nonsmooth problem, which traditional optimization algorithms cannot solve directly. Recently, neural networks have emerged as a powerful tool for optimization problems [14–27]. For some nonconvex problems, an inertial projection neural network (IPNN) [16] has been proposed that searches different local optimal solutions via its inertial term. In [17], a shuffled frog leaping algorithm (SFLA) was developed using a recurrent neural network; within the SFLA framework, the global optimal solution can be searched for. Moreover, many other neural-network-based optimization methods exist for nonconvex nonsmooth problems [22–27].

It is worth highlighting some advantages of our proposed method:

(i) Traditional algorithms for GNMF [11] and NMF [3] can easily become trapped in local optima and are sensitive to initial values, while our algorithm based on the inertial projection neural network avoids these problems.

(ii) Our algorithm can be initialized with sparser matrices, whereas GNMF and NMF may fail in this case.

(iii) By adjusting one parameter of the neural network, GSNMF achieves a better clustering effect than GNMF and NMF.

The rest of the paper is organized as follows. Section 2 briefly reviews work related to NMF and then introduces GSNMF. Section 3 reviews inertial projection neural network theory and provides a convergence proof for GSNMF. Section 4 presents numerical examples that demonstrate the validity of the proposed algorithm. Experiments on clustering are given in Section 5. Finally, Section 6 presents concluding remarks and future work.

2. Problem Formulation

To find effective features of high-dimensional data, matrix factorization can be used to learn sets of features that represent the data. Given a data matrix X ∈ R^{m×n} and an integer k, matrix factorization is to find two matrices W ∈ R^{m×k} and H ∈ R^{k×n} such that

X ≈ WH. (1)

When k < min(m, n), the matrix factorization can be regarded as a dimensionality reduction method. In image dimensionality reduction, each column of W is a basis vector that captures the original image data, and each column of H is the representation of the corresponding data point with respect to the new basis. The most used measure of the approximation is the Frobenius norm:

min_{W,H} ||X − WH||_F^2. (2)
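The rank-k factorization in (1)–(2) can be illustrated numerically. The sketch below uses the truncated SVD, which is the Frobenius-optimal rank-k factorization, purely as a baseline illustration of dimensionality reduction; the function name and shapes are our own choices, not part of the paper's method.

```python
import numpy as np

def rank_k_factorization(X, k):
    """Factor X (m x n) into W (m x k) and H (k x n) via truncated SVD,
    the Frobenius-optimal rank-k factorization."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    W = U[:, :k] * s[:k]   # basis vectors scaled by singular values
    H = Vt[:k, :]          # k-dimensional representation of each column
    return W, H

rng = np.random.default_rng(0)
X = rng.random((8, 8))
W, H = rank_k_factorization(X, 5)
err = np.linalg.norm(X - W @ H, "fro")  # Frobenius approximation error as in (2)
```

Each column of the original 8-dimensional data is now represented by only 5 coefficients, at the cost of the reconstruction error `err`.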

Different matrix factorization methods impose different constraints on (2) to solve different practical problems. At present, the most used matrix factorization method is nonnegative matrix factorization (NMF) [3], with nonnegativity constraints W ≥ 0 and H ≥ 0. The classic multiplicative update algorithm is summarized as

H ← H ⊙ (W^T X) ⊘ (W^T W H),  W ← W ⊙ (X H^T) ⊘ (W H H^T), (3)

where ⊙ and ⊘ denote entrywise multiplication and division.
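The multiplicative updates of the classic NMF algorithm [3] can be sketched directly; the iteration count, seed, and small constant guarding against division by zero are illustrative assumptions.

```python
import numpy as np

def nmf_multiplicative(X, k, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||X - WH||_F^2, W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update representations
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H

X = np.random.default_rng(1).random((20, 12))
W, H = nmf_multiplicative(X, 4)
```

Because the updates are multiplicative, nonnegative initial factors stay nonnegative throughout, which is exactly why (as noted later) such algorithms break down when initial values contain negative entries.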

Recently, Cai et al. [11] proposed a graph regularized nonnegative matrix factorization method (GNMF), which incorporates geometrical information of the data space. The goal of GNMF is to find effective basis vectors that respect the intrinsic structure. The method rests on the natural assumption that if two data points x_i and x_j of X are close in the intrinsic geometry of the data distribution, then their new representations h_i and h_j are also close to each other. For each data point x_i, we find its p nearest neighbors and put edges between x_i and its neighbors. The edges between data points are encoded by a weight matrix S: if nodes i and j are connected by an edge, then S_{ij} = 1; otherwise S_{ij} = 0. (4)
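The p-nearest-neighbor graph with 0-1 weights described above can be built as follows; this is a minimal sketch with dense distance computation, and the function name and column-per-point layout are our own conventions.

```python
import numpy as np

def knn_graph(X, p):
    """Symmetric 0-1 weight matrix S of the p-nearest-neighbor graph.
    X holds one data point per column."""
    n = X.shape[1]
    # squared Euclidean distances between all pairs of columns
    D = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    S = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(D[i])[1:p + 1]  # skip the point itself
        S[i, neighbors] = 1.0
    return np.maximum(S, S.T)  # an edge exists if either endpoint selects it

rng = np.random.default_rng(3)
points = rng.random((2, 10))   # ten 2-D points, one per column
S = knn_graph(points, 3)
```

Symmetrizing with the maximum keeps the weights in {0, 1}, matching the 0-1 weighting used for S.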

The low-dimensional representation of a data point x_j with respect to the new basis is h_j, the j-th column of H. The Euclidean distance

d(h_i, h_j) = ||h_i − h_j||^2 (5)

is used to measure the dissimilarity between h_i and h_j. With the above analysis, the following term is used to measure the smoothness of the low-dimensional representation:

R = (1/2) Σ_{i,j} ||h_i − h_j||^2 S_{ij} = Tr(H L H^T), (6)

where Tr(·) denotes the trace of a matrix, S is the weight matrix of the neighbor graph, L = D − S is the graph Laplacian, and D is the diagonal matrix with D_{ii} = Σ_j S_{ij}. Combining (6) and (2), the new objective function is defined by the Euclidean distance:

min_{W ≥ 0, H ≥ 0} ||X − WH||_F^2 + λ Tr(H L H^T). (7)

The multiplicative algorithm that solves (7) is

H ← H ⊙ (W^T X + λ H S) ⊘ (W^T W H + λ H D),  W ← W ⊙ (X H^T) ⊘ (W H H^T). (8)
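The identity behind the smoothness term (6), namely that the weighted sum of pairwise squared distances equals 2 Tr(H L H^T) with L = D − S, can be verified numerically; the matrix sizes and random seed below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 3
S = rng.integers(0, 2, (n, n)).astype(float)  # random 0-1 weight matrix
S = np.maximum(S, S.T)                        # symmetrize
np.fill_diagonal(S, 0.0)                      # no self-loops
D = np.diag(S.sum(axis=1))                    # degree matrix
L = D - S                                     # graph Laplacian

H = rng.random((k, n))                        # column h_j represents point j
pairwise = sum(S[i, j] * np.sum((H[:, i] - H[:, j]) ** 2)
               for i in range(n) for j in range(n))
smoothness = np.trace(H @ L @ H.T)
# identity: sum_ij S_ij * ||h_i - h_j||^2 == 2 * Tr(H L H^T)
```

Minimizing Tr(H L H^T) therefore pulls the representations of graph-connected points together, which is exactly the smoothness that GNMF enforces.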

When λ = 0 or the graph weights are all zero, GNMF is equivalent to nonnegative matrix factorization. In representing image data, GNMF and NMF consider only the Euclidean structure of the image space. However, recent research has shown that human-generated images may come from a submanifold of the ambient Euclidean space [28, 29]; in general, human-generated images cannot uniformly fill up the high-dimensional Euclidean space. Therefore, the matrix factorization should respect the intrinsic manifold structure and learn a sparse basis to represent the image data. In the light of sparse coding [30], we impose a sparse constraint on (7), and the optimization problem is transformed into

min_{W ≥ 0, H ≥ 0} ||X − WH||_F^2 + λ Tr(H L H^T) + γ ||H||_1, (9)

where γ > 0 controls the sparsity of the representation.

Because the optimization problem (9) is nonconvex, a block-coordinate update (BCD) [31] scheme is used to optimize GSNMF. Given initial W and H, BCD alternately solves the subproblems

H ← argmin_H ||X − WH||_F^2 + λ Tr(H L H^T) + γ ||H||_1, (10)
W ← argmin_{W ≥ 0} ||X − WH||_F^2, (11)

until convergence. Since (11) and (10) have a similar structure, we only consider how to solve (10); then (11) can be solved accordingly. Problem (10) can be transformed into the vector form

min_x (1/2) x^T A x + b^T x + γ ||x||_1, (12)

where x = vec(H) and the matrix A and vector b collect the quadratic and linear terms of (10). It is evident that the ℓ1-norm is not differentiable. However, [32] has presented a method to handle it. Supposing x = u − v with u = max(x, 0) and v = max(−x, 0), so that u, v ≥ 0 and ||x||_1 = 1^T u + 1^T v, problem (12) can be rewritten as the smooth problem

min_{z ≥ 0} (1/2) z^T Q z + c^T z, (14)

where z = [u; v], Q = [A, −A; −A, A], and c = γ1 + [b; −b]. According to the BCD structure, (14) can be separated into two subproblems: given initial u and v, one alternately solves for u with v fixed (16) and for v with u fixed (17) until convergence. Since (16) and (17) have a similar form, we only consider how to solve (16); then (17) can be solved accordingly. Equation (16) can be transformed into the following convex quadratic program (CQP):

min_{u ≥ 0} (1/2) u^T A u + q^T u, (18)

where q = γ1 + b − Av.
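The variable-splitting idea of [32], x = u − v with u, v ≥ 0, turns an ℓ1-regularized least-squares problem into a smooth nonnegatively constrained QP. The sketch below demonstrates the split on a tiny problem and, as our own choice, solves the resulting QP by plain projected gradient rather than the paper's neural network; the function name and step rule are assumptions.

```python
import numpy as np

def l1_ls_via_nonneg_qp(A, b, lam, iters=5000):
    """Solve min 0.5*||Ax - b||^2 + lam*||x||_1 by splitting x = u - v,
    u, v >= 0 (as in gradient projection for sparse reconstruction),
    then running projected gradient on the nonnegative orthant."""
    m, n = A.shape
    z = np.zeros(2 * n)                      # z = [u; v]
    B = np.hstack([A, -A])                   # so that B @ z = A @ (u - v)
    lr = 1.0 / np.linalg.norm(B, 2) ** 2     # step length below 1/L
    for _ in range(iters):
        grad = B.T @ (B @ z - b) + lam       # smooth gradient; lam*1 from ||x||_1
        z = np.maximum(z - lr * grad, 0.0)   # projection onto z >= 0
    return z[:n] - z[n:]                     # recover x = u - v
```

For A equal to the identity, this reproduces the well-known soft-thresholding solution of the ℓ1 problem, which gives a quick sanity check of the split.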

According to the above analysis, problem (11) can also be transformed into a convex quadratic program. To save space, we omit the derivation. In the following section, we introduce IPNN to optimize (18).

3. Neural Network Model and Analysis

3.1. Inertial Projection Neural Network

To solve problem (18), we establish a neural network model based on IPNN [16]; we denote its state equation by (20), where the feasible set is the nonnegative orthant of (18). Now we are ready to show the convergence and optimality of model (20). With the auxiliary notation fixed for the proofs, we present the following theorems.

Theorem 1. For any initial point satisfying the given initial condition, there exists a unique continuous solution of model (20) on its maximal interval of existence.

Proof. Note that the projection operator is Lipschitz continuous; let l be its Lipschitz constant. Thus, for any x and y, we obtain a Lipschitz bound on the right-hand side of (20), where I and l are the identity matrix and the Lipschitz constant, respectively. Therefore, the right-hand side of (20) is Lipschitz continuous on the feasible set, and there exists a unique solution with the given initial condition by the local existence theorem for ordinary differential equations.

Theorem 2. Suppose the following two conditions hold:
(1) For any x and y, the stated angle condition is satisfied, where θ is the angle between the two vectors involved.
(2) The inertial parameter satisfies the stated bound.
Then the solution of model (20) converges to the optimal solution set of (18).

Proof. Consider the Lyapunov function defined above. Differentiating it along the trajectories of (20), we obtain (23). Since Condition (1) holds, (23) can be rewritten as (24). Then, (24) can be transformed into (25), which indicates that the Lyapunov function is monotone nonincreasing. Thus, (26) holds for any t ≥ 0, from which we obtain (27). Multiplying inequality (27) through, we obtain an estimate which implies (28). Integrating (28) from 0 to t, we conclude that the trajectory of model (20) is bounded.
Since the trajectory is bounded and inequality (26) holds, we obtain (29). Thus, (29) can be rewritten as (30). Since the trajectory is bounded, (30) indicates that its derivative is also bounded. From (25) and (30), the stated limits are obtained.
Assume, toward a contradiction, that the claimed limit fails. Then there exists T such that the stated bound holds for any t ≥ T. Therefore, one obtains (31), and it is easy to see that (32) follows. By elementary calculus, the limit of the integral exists, and therefore the related limit also exists. Combining these facts yields the convergence of the trajectory, and boundedness gives the convergence of its derivative.
It follows from (20) that the derivative satisfies the stated relation. If the derivative vanishes asymptotically, then the state approaches the fixed-point set of the projection dynamics. In the following, we prove that this set coincides with the optimal solution set.
Define the auxiliary variable as above and substitute it into (20). Since Condition (1) holds, an estimate follows, and hence we obtain (35). Integrating (35) shows that the solution of system (20) converges to the optimal set of (18). The proof is completed.

Remark 3. In Theorem 2, Condition (1) must be satisfied. In the following, we discuss the existence of the required parameter. For any x and y, a Lipschitz estimate holds; therefore the gradient mapping is also Lipschitz continuous, which yields a bound involving the angle θ between the two vectors in Condition (1). The admissible range of the parameter then follows easily.

3.2. Algorithms

Based on the above analysis, we summarize Algorithms 1, 2, and 3 to optimize GSNMF. First, the data defining the CQP are computed by Algorithm 1. Second, Algorithm 2 applies IPNN to optimize the CQP problem. Third, the optimization problem (9) is divided into two CQP subproblems, which Algorithm 3 solves alternately. We now analyze the time complexity of the proposed algorithms. The main cost is the calculation of the gradient in Algorithm 2. To optimize subproblem (10), the dominant operation in Algorithm 2 is a matrix product costing O(mnk) per iteration. Similarly, the cost of using Algorithm 2 to optimize subproblem (11) is of the same order, so the overall cost of one iteration round of GSNMF is O(mnk). We summarize the time complexity of one iteration round of GSNMF, GNMF, and NMF in Table 1; each iteration involves two such operations, the same as NMF and GNMF.


Solver | Time complexity (per iteration)

GSNMF  | O(mnk)
GNMF   | O(mnk)
NMF    | O(mnk)

Algorithm 1 (computation of the CQP data).
Input: the data matrix and the parameters of problem (9)
Output: the matrices and vectors defining the CQP (18)
(1) Calculate the number of rows and columns of the data matrix
(2) Calculate the weight matrix of the neighbor graph with (4)
(3) Calculate the quadratic term of the CQP
(4) Calculate the diagonal degree matrix from the weight matrix
(5) Calculate the linear term of the CQP
Algorithm 2 (IPNN iteration for the CQP (18)).
Input: the CQP data, the inertial parameter, and the step length
Output: the optimal solution of (18)
Initialization: the initial state and iteration counter
repeat
(1)–(6) update the state by the discretized IPNN dynamics (20)
until the stopping criterion is met
Algorithm 3 (BCD scheme for GSNMF).
Input: the data matrix and the parameters of problem (9)
Output: the factor matrices
Initialization: the initial factors satisfy (6)
repeat
(1) Form the CQP of subproblem (10) by Algorithm 1 and solve it by Algorithm 2
(2) Form the CQP of subproblem (11) by Algorithm 1 and solve it by Algorithm 2
until convergence
return the factor matrices
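A discrete-time sketch of the inertial projection iteration at the core of Algorithm 2 is shown below for a nonnegatively constrained CQP of the form (18). The momentum coefficient plays the role of the network's inertial parameter; the specific step rule, coefficient values, and iteration count are our own illustrative assumptions, not the paper's settings.

```python
import numpy as np

def inertial_projected_gradient(Q, c, x0, beta=0.3, iters=2000):
    """Sketch of inertial projection dynamics for
    min 0.5*x'Qx + c'x subject to x >= 0.
    beta is the inertial (momentum) coefficient."""
    step = 1.0 / np.linalg.norm(Q, 2)          # step length below 1/L
    x_prev = x = x0.astype(float)
    for _ in range(iters):
        y = x + beta * (x - x_prev)            # inertial extrapolation
        x_prev, x = x, np.maximum(y - step * (Q @ y + c), 0.0)  # project
    return x

# small CQP with known solution x* = (1, 0)
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
c = np.array([-2.0, 4.0])
x_star = inertial_projected_gradient(Q, c, np.array([5.0, 5.0]))
```

Varying the inertial coefficient changes the trajectory of the iteration, which mirrors the paper's observation that different inertial terms can reach different local solutions in the nonconvex setting.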

4. Numerical Examples

In this section, we exhibit the global searching ability of GSNMF. By adjusting the inertial term of the neural network, different local optimal solutions can be searched. The parameter values are fixed in advance for this experiment. To ensure the validity of the experiment, we provide the initial X, W, and H in Tables 2, 3, and 4, respectively. Table 5 shows the comparison among GSNMF, GNMF, and NMF. To investigate whether GSNMF converges, the convergence curve is depicted in Figure 1.


4.345  1.79    2.92   0.852  6.036  6.929  2.803  2.445
0.513  1.164   8.54   8.645  0.879  12.309 4.731  1.326
4.277  3.665   5.619  2.1    6.048  6.164  1.077  4.167
5.658  5.788   0.887  4.599  7.653  2.342  7.534  7.104
2.282  0.31    2.657  3.248  4.33   1.508  1.819  1.26
1.856  7.669   2.113  4.084  7.616  0.287  2.865  1.329
0.589  6.886   2.388  0.446  7.759  2.456  0.486  7.189
0.214  16.156  3.684  1.973  0.25   3.841  2.825  1.482


0.797  0.394  0.001  1.029  0.182
0.3    0.633  0.28   1.14   0.727
0.81   0.078  1.589  1.764  1.152
0.062  0.408  0.265  0.78   0.716
0.073  0.389  0.005  1.768  0.166
1.921  1.45   1.474  0.673  0.016
0.346  1.896  0.284  1.135  0.087
1.849  0.058  0.445  1.574  1.745


0.071  0.242  0.413  1.101  1.345  0.967  0.923  0.533
1.319  1.305  0.329  0.99   0.309  1.173  0.734  1.309
1.377  0.796  0.913  0.377  0.504  1.322  0.378  0.682
0.403  0.608  0.224  0.163  0.612  0.162  0.724  0.849
0.929  1.402  0.773  1.727  0.039  0.187  0.446  0.132


Objective values — GNMF: 20.5172; NMF: 20.5172; GSNMF with two different inertial terms: 20.7534 and 21.6804.

5. Application in Image Clustering

5.1. Databases

To examine the clustering performance of GSNMF, we present experiments on two databases, IRIS and COIL20. Their details are presented in the following (see also Table 6).


Dataset | Size | Dimensionality | Classes

IRIS    | 150  | 4    | 3
COIL20  | 1440 | 256  | 20

(1) IRIS. It includes 150 instances with 4 features. There are 3 classes, Versicolour, Setosa, and Virginica, each with 50 instances.

(2) COIL20. This data set is an image library containing 1440 instances of 16 × 16 gray-scale images (256 features). There are 20 different classes, and each class contains 72 instances.

5.2. Compared Methods

We present the clustering performance on the two databases using GSNMF, GNMF, and NMF. Two metrics, accuracy and normalized mutual information [33], are used to evaluate the clustering performance. To reveal the effect of the sparse constraint, different cardinalities (numbers of zero entries) of the initial factor matrices are considered. Because NMF is a nonconvex problem, different initializations may lead to different local optimal solutions. For a fair comparison, we try 10 different random initializations and report the average results. We compare the methods in two cases: first, the initial factor matrices are randomly generated between 0 and 1; second, they are randomly generated between −1 and 1.
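The two evaluation metrics can be computed as follows. Clustering accuracy requires matching predicted cluster labels to true classes, done here with the Hungarian algorithm from SciPy; the normalized mutual information uses the common square-root normalization. The function names are our own; the definitions follow standard practice in the clustering literature.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Accuracy after the best one-to-one matching between predicted
    cluster labels and true class labels (Hungarian algorithm)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    # cost[i, j] = -(number of points in cluster i that belong to class j)
    cost = np.array([[-np.sum((y_pred == ci) & (y_true == cj))
                      for cj in classes] for ci in clusters])
    rows, cols = linear_sum_assignment(cost)
    return -cost[rows, cols].sum() / len(y_true)

def entropy(labels):
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def nmi(a, b):
    """Normalized mutual information with sqrt normalization."""
    a, b = np.asarray(a), np.asarray(b)
    n, mi = len(a), 0.0
    for ca in np.unique(a):
        for cb in np.unique(b):
            p_ab = np.sum((a == ca) & (b == cb)) / n
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(a == ca) * np.mean(b == cb)))
    return mi / np.sqrt(entropy(a) * entropy(b))
```

A perfect clustering under any permutation of labels scores 1.0 on both metrics, which is why the tables below report label-permutation-invariant values.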

5.3. Compared Results

Tables 7 and 8 present the clustering results on IRIS in the two cases, and Tables 9 and 10 show the clustering performance on COIL20 in the two cases. The parameter values for each data set are fixed in advance. These tables reveal some interesting points:

(i) When the sparse constraint is imposed on GNMF, the clustering performance of GSNMF is better than that of NMF and GNMF.

(ii) When the initial factor matrices have some negative entries, NMF and GNMF fail to cluster, whereas GSNMF is not affected.


Cardinality (%) | Accuracy (%)          | Normalized mutual information (%)
                | GSNMF   GNMF    NMF   | GSNMF   GNMF    NMF

0               | 98.27   91.60   83.87 | 93.09   80.66   70.58
10              | 98.47   77.93   69.47 | 93.97   55.70   48.48
20              | 98.27   71.93   68.67 | 93.09   42.23   38.86
30              | 98.33   61.93   62.87 | 93.39   28.90   28.82
40              | 98.27   56.93   55.80 | 93.15   18.89   17.59
50              | 98.33   53.40   53.27 | 93.39   13.35   12.96


Cardinality (%) | Accuracy (%)          | Normalized mutual information (%)
                | GSNMF   GNMF    NMF   | GSNMF   GNMF    NMF

0               | 98.60   34.00   34.00 | 94.56   1.035   1.035
10              | 98.13   34.00   34.00 | 93.39   1.035   1.035
20              | 98.20   34.00   34.00 | 93.02   1.035   1.035
30              | 98.00   34.00   34.00 | 92.27   1.035   1.035
40              | 98.13   34.00   34.00 | 92.92   1.035   1.035
50              | 98.00   34.00   34.00 | 92.33   1.035   1.035


Cardinality (%) | Accuracy (%)          | Normalized mutual information (%)
                | GSNMF   GNMF    NMF   | GSNMF   GNMF    NMF

0               | 68.92   72.92   64.54 | 77.52   84.88   74.18
10              | 68.24   63.37   57.63 | 77.05   71.78   65.49
20              | 68.15   53.40   48.39 | 77.31   61.90   56.26
30              | 68.38   45.81   43.91 | 77.39   54.34   51.60
40              | 68.06   41.03   38.28 | 77.31   49.45   46.26
50              | 68.67   35.25   32.64 | 77.61   42.92   41.42


Cardinality (%) | Accuracy (%)          | Normalized mutual information (%)
                | GSNMF   GNMF    NMF   | GSNMF   GNMF    NMF

0               | 68.22   5.07    5.07  | 77.05   1.38    1.035
10              | 68.16   5.07    5.07  | 77.36   1.38    1.035
20              | 68.78   5.07    5.07  | 77.11   1.38    1.035
30              | 68.12   5.07    5.07  | 77.08   1.38    1.035
40              | 68.38   5.07    5.07  | 77.33   1.38    1.035
50              | 68.25   5.07    5.07  | 77.67   1.38    1.035

5.4. Parameters Selection

Our GSNMF has one essential parameter: the inertial term. Figures 2 and 3 depict the average performance of GSNMF under different values of this parameter.

5.5. Convergence Study

According to the BCD and IPNN theory, the method for optimizing GSNMF is proved to be convergent. Here we investigate whether the method converges to a stationary point in practice. Figure 4 depicts the convergence curves of GSNMF on the two data sets; in each figure, the x-axis is the iteration number and the y-axis denotes the objective value.

6. Conclusion and Future Work

We propose a dimensionality reduction method that is solved by the inertial projection neural network. The experiments demonstrate three advantages. First, different local solutions can be reached with different inertial terms. Second, the clustering performance is not affected by negative initial values, whereas GNMF and NMF perform poorly in clustering with negative initial values. Third, when the initial values are sparse, our method outperforms GNMF and NMF in clustering.

Several topics remain for future work:

(i) The inertial parameter is what enables the search for the global optimal solution of GSNMF, so a suitable value is critical to our algorithm. How to select it theoretically remains unclear.

(ii) The step length in Algorithm 2 determines the convergence rate. If it is assigned a small value, slow convergence degrades the clustering performance. Thus, an adaptive step length should be considered.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

1. S. Agarwal, A. Awan, and D. Roth, "Learning to detect objects in images via a sparse, part-based representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 11, pp. 1475–1490, 2004.
2. S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman, "Indexing by latent semantic analysis," Journal of the Association for Information Science and Technology, vol. 41, no. 6, pp. 391–407, 1990.
3. D. D. Lee and H. S. Seung, "Learning the parts of objects by non-negative matrix factorization," Nature, vol. 401, no. 6755, pp. 788–791, 1999.
4. D. Kalman, "A singularly valuable decomposition: the SVD of a matrix," The College Mathematics Journal, vol. 27, no. 1, p. 2, 1996.
5. I. T. Jolliffe, Principal Component Analysis, Springer Series in Statistics, Springer, New York, NY, USA, 2nd edition, 2002.
6. A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, Springer, Boston, MA, USA, 1992.
7. A. Cichocki, R. Zdunek, and S.-I. Amari, "New algorithms for non-negative matrix factorization in applications to blind source separation," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '06), vol. 5, pp. V621–V624, Toulouse, France, May 2006.
8. V. P. Pauca, F. Shahnaz, M. W. Berry, and R. J. Plemmons, "Text mining using non-negative matrix factorizations," in Proceedings of the 4th SIAM International Conference on Data Mining, pp. 452–456, 2004.
9. W. Xu, X. Liu, and Y. Gong, "Document clustering based on non-negative matrix factorization," in Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '03), pp. 267–273, Toronto, Canada, August 2003.
10. P. O. Hoyer, "Non-negative sparse coding," in Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, pp. 557–565, 2002.
11. D. Cai, X. He, J. Han, and T. S. Huang, "Graph regularized nonnegative matrix factorization for data representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1548–1560, 2011.
12. W.-S. Chen, Y. Zhao, B. Pan, and B. Chen, "Supervised kernel nonnegative matrix factorization for face recognition," Neurocomputing, vol. 205, pp. 165–181, 2016.
13. H. Liu, Z. Wu, X. Li, D. Cai, and T. S. Huang, "Constrained nonnegative matrix factorization for image representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1299–1311, 2012.
14. J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
15. Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 49, no. 4, pp. 447–458, 2002.
16. X. He, T. Huang, J. Yu, and C. Li, "An inertial projection neural network for solving variational inequalities," IEEE Transactions on Cybernetics, vol. 99, pp. 1–6, 2016.
17. H. Che, C. Li, X. He, and T. Huang, "An intelligent method of swarm neural networks for equalities-constrained nonconvex optimization," Neurocomputing, vol. 167, pp. 569–577, 2015.
18. J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 40, no. 9, pp. 613–618, 1993.
19. X. Hu and B. Zhang, "An alternative recurrent neural network for solving variational inequalities and related optimization problems," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 39, no. 6, pp. 1640–1645, 2009.
20. X. Hu and J. Wang, "A recurrent neural network for solving a class of general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 3, pp. 528–539, 2007.
21. X. Hu and J. Wang, "Design of general projection neural networks for solving monotone linear variational inequalities and linear and quadratic optimization problems," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 5, pp. 1414–1421, 2007.
22. Z. Yan, J. Wang, and G. Li, "A collective neurodynamic optimization approach to bound-constrained nonconvex optimization," Neural Networks, vol. 55, pp. 20–29, 2014.
23. Q. Liu and J. Wang, "A one-layer recurrent neural network with a discontinuous hard-limiting activation function for quadratic programming," IEEE Transactions on Neural Networks and Learning Systems, vol. 19, no. 4, pp. 558–570, 2008.
24. J. Fan and J. Wang, "A collective neurodynamic optimization approach to nonnegative matrix factorization," IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2344–2356, 2017.
25. Z. Yan and J. Wang, "Nonlinear model predictive control based on collective neurodynamic optimization," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 4, pp. 840–850, 2015.
26. J. Shi, X. Ren, G. Dai, J. Wang, and Z. Zhang, "A non-convex relaxation approach to sparse dictionary learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 1809–1816, June 2011.
27. G. Li, Z. Yan, and J. Wang, "A one-layer recurrent neural network for constrained nonconvex optimization," Neural Networks, vol. 61, pp. 10–21, 2015.
28. H. Lee, A. Battle, R. Raina, and A. Ng, "Efficient sparse coding algorithms," in Proceedings of the International Conference on Neural Information Processing Systems, pp. 801–808, 2006.
29. D. Cai, H. Bao, and X. He, "Sparse concept coding for visual analysis," in Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), pp. 2905–2910, USA, June 2011.
30. B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607–609, 1996.
31. P. Tseng, "Convergence of a block coordinate descent method for nondifferentiable minimization," Journal of Optimization Theory and Applications, vol. 109, no. 3, pp. 475–494, 2001.
32. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.
33. D. Cai, X. He, and J. Han, "Document clustering using locality preserving indexing," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 12, pp. 1624–1637, 2005.

Copyright © 2018 Xiangguang Dai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
