Abstract and Applied Analysis
Volume 2013 (2013), Article ID 101974, 9 pages
$s$-Goodness for Low-Rank Matrix Recovery
1Department of Applied Mathematics, Beijing Jiaotong University, Beijing 100044, China
2Department of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, ON, Canada N2L 3G1
Received 21 January 2013; Accepted 17 March 2013
Academic Editor: Jein-Shan Chen
Copyright © 2013 Lingchen Kong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints, and it arises in many fields such as signal and image processing, statistics, computer vision, and system identification and control. This class of optimization problems is generally hard. A popular approach replaces the rank function with the nuclear norm of the matrix variable. In this paper, we extend and characterize the concept of $s$-goodness for a sensing matrix in sparse signal recovery (proposed by Juditsky and Nemirovski (Math Program, 2011)) to linear transformations in LMR. Using the two characteristic $s$-goodness constants, $\gamma_{s}$ and $\hat{\gamma}_{s}$, of a linear transformation, we derive necessary and sufficient conditions for a linear transformation to be $s$-good. Moreover, we establish the equivalence of $s$-goodness and the null space properties. Therefore, $s$-goodness is a necessary and sufficient condition for exact $s$-rank matrix recovery via nuclear norm minimization.
1. Introduction

Low-rank matrix recovery (LMR for short) is a rank minimization problem (RMP) with linear constraints, also called the affine matrix rank minimization problem, which is defined as follows:
$$\min_{X} \operatorname{rank}(X) \quad \text{s.t.} \quad \mathcal{A}(X) = b, \tag{1}$$
where $X \in \mathbb{R}^{m \times n}$ is the matrix variable, $\mathcal{A}: \mathbb{R}^{m \times n} \to \mathbb{R}^{p}$ is a linear transformation, and $b \in \mathbb{R}^{p}$. Although specific instances can often be solved by specialized algorithms, the general LMR is NP-hard. A popular approach for solving LMR in the systems and control community is to minimize the trace of a positive semidefinite matrix variable instead of its rank (see, e.g., [1, 2]). A generalization of this approach to nonsymmetric matrices, introduced by Fazel et al. [3], is the famous convex relaxation of LMR (1) called nuclear norm minimization (NNM):
$$\min_{X} \|X\|_{*} \quad \text{s.t.} \quad \mathcal{A}(X) = b, \tag{2}$$
where $\|X\|_{*}$ is the nuclear norm of $X$, that is, the sum of its singular values. When $m = n$ and the matrix $X = \operatorname{Diag}(x)$, $x \in \mathbb{R}^{n}$, is diagonal, the LMR (1) reduces to sparse signal recovery (SSR), which is the so-called cardinality minimization problem (CMP):
$$\min_{x} \|x\|_{0} \quad \text{s.t.} \quad A x = b, \tag{3}$$
where $\|x\|_{0}$ denotes the number of nonzero entries in the vector $x$ and $A$ is a given sensing matrix. A well-known heuristic for SSR is the $\ell_1$-norm minimization relaxation (basis pursuit problem):
$$\min_{x} \|x\|_{1} \quad \text{s.t.} \quad A x = b, \tag{4}$$
where $\|x\|_{1}$ is the $\ell_1$-norm of $x$, that is, the sum of the absolute values of its entries.
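As a numerical aside (not part of the original paper), the relationship between the rank objective and its nuclear norm surrogate can be illustrated in a few lines of NumPy; the matrix sizes and random seed below are arbitrary choices:

```python
import numpy as np

# Illustration: rank(X) counts the nonzero singular values of X, while the
# nuclear norm ||X||_* sums them.  NNM replaces the nonconvex rank objective
# by this convex surrogate.
rng = np.random.default_rng(0)

# a random 5 x 4 matrix of rank 2
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
sigma = np.linalg.svd(X, compute_uv=False)

rank_X = int(np.sum(sigma > 1e-10))  # rank = number of nonzero singular values
nuclear = float(sigma.sum())         # ||X||_* = sum of all singular values
```

Both quantities are read off the same singular value vector; the nuclear norm is simply the $\ell_1$-norm of that vector, which is why NNM plays the same role for LMR that basis pursuit plays for SSR.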
LMR problems have many applications, and they have appeared in the literature of a diverse set of fields including signal and image processing, statistics, computer vision, and system identification and control; for more details, see the recent paper [4]. LMR and NNM have been the focus of some recent research in the optimization community; see, for example, [4–15]. Although there are many papers dealing with algorithms for NNM, such as interior-point methods, fixed point and Bregman iterative methods, and proximal point methods, there are fewer papers dealing with the conditions that guarantee the success of low-rank matrix recovery via NNM. For instance, following the program laid out in the work of Candès and Tao in compressed sensing (CS; see, e.g., [16–18]), Recht et al. [4] provided a certain restricted isometry property (RIP) condition on the linear transformation which guarantees that the minimum nuclear norm solution is the minimum rank solution. Recht et al. [14, 19] gave the null space property (NSP), which characterizes a particular property of the null space of the linear transformation and is also discussed by Oymak et al. [20, 21]. Note that NSP states a necessary and sufficient condition for exactly recovering the low-rank matrix via nuclear norm minimization. Recently, Chandrasekaran et al. [22] proposed that a fixed $s$-rank matrix $X$ can be recovered if and only if the null space of $\mathcal{A}$ does not intersect the tangent cone of the nuclear norm ball at $X$.
In the setting of CS, there are other characterizations of the sensing matrix, besides RIP and null space properties, under which $\ell_1$-norm minimization can be guaranteed to yield an optimal solution to SSR; see, for example, [23–26]. In particular, Juditsky and Nemirovski [24] established necessary and sufficient conditions for a sensing matrix to be "$s$-good", that is, to allow exact $\ell_1$-recovery of sparse signals with at most $s$ nonzero entries when no measurement noise is present. They also demonstrated that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact SSR and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. Furthermore, they established instructive links between $s$-goodness and RIP in the CS context. One may wonder whether the $s$-goodness concept can be generalized to LMR while maintaining many of the nice properties established in [24]. Here, we deal with this issue. Our approach is based on the singular value decomposition (SVD) of a matrix and the partition technique generalized from CS. In the next section, following Juditsky and Nemirovski's terminology, we propose definitions of $s$-goodness and of the $\gamma$-numbers, $\gamma_{s}$ and $\hat{\gamma}_{s}$, of a linear transformation in LMR, and we provide some basic properties of the $\gamma$-numbers. In Section 3, we characterize $s$-goodness of a linear transformation in LMR via the $\gamma$-numbers. We consider the connections between $s$-goodness, NSP, and RIP in Section 4. We eventually obtain that a linear transformation satisfying a suitable RIP bound is $s$-good.
Let $X \in \mathbb{R}^{m \times n}$ with $\operatorname{rank}(X) = r$, and let $X = U \operatorname{Diag}(\sigma(X)) V^{T}$ be an SVD of $X$, where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal matrices and $\operatorname{Diag}(\sigma(X))$ is the diagonal matrix of $\sigma(X)$, the vector of the singular values of $X$. Also let $\mathcal{O}(X)$ denote the set of pairs of orthogonal matrices $(U, V)$ appearing in SVDs of $X$; that is, $\mathcal{O}(X) = \{(U, V) : X = U \operatorname{Diag}(\sigma(X)) V^{T}\}$. For $s \le \min\{m, n\}$, we say $X$ is an $s$-rank matrix to mean that the rank of $X$ is no more than $s$. For an $s$-rank matrix $X$, it is convenient to take as its SVD $X = U \operatorname{Diag}(\sigma(X)) V^{T}$, where $U \in \mathbb{R}^{m \times s}$ and $V \in \mathbb{R}^{n \times s}$ have orthonormal columns and $\sigma(X) \in \mathbb{R}^{s}$. For a vector $x$, let $\|x\|_{d}$ be the dual norm of $\|x\|$, specified by $\|x\|_{d} = \max\{\langle x, u \rangle : \|u\| \le 1\}$. In particular, the $\ell_\infty$-norm is the dual norm of the $\ell_1$-norm for a vector. Let $\|X\|$ denote the spectral or operator norm of a matrix $X$, that is, the largest singular value of $X$; in fact, $\|X\|$ is the dual norm of $\|X\|_{*}$. Let $\|X\|_{F}$ be the Frobenius norm of $X$, which is equal to the $\ell_2$-norm of the vector of its singular values. We denote by $X^{T}$ the transpose of $X$. For a linear transformation $\mathcal{A}: \mathbb{R}^{m \times n} \to \mathbb{R}^{p}$, we denote by $\mathcal{A}^{*}$ the adjoint of $\mathcal{A}$.
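The norm facts collected above can be sanity-checked numerically. The following sketch (with arbitrary test matrices) verifies that the Frobenius norm equals the $\ell_2$-norm of the singular value vector, that the operator norm is the largest singular value, and the spectral/nuclear duality inequality $|\langle X, Y \rangle| \le \|X\| \, \|Y\|_{*}$:

```python
import numpy as np

# Check the norm identities used in the notation section on random matrices.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))
Y = rng.standard_normal((6, 4))

sx = np.linalg.svd(X, compute_uv=False)
sy = np.linalg.svd(Y, compute_uv=False)

# ||X||_F equals the l2-norm of sigma(X)
fro_ok = np.isclose(np.linalg.norm(X, "fro"), np.linalg.norm(sx, 2))
# the spectral norm ||X|| is the largest singular value
spec_ok = np.isclose(np.linalg.norm(X, 2), sx[0])
# duality pairing: |<X, Y>| <= ||X|| * ||Y||_*  (spectral vs. nuclear)
pairing = abs(np.sum(X * Y))                  # <X, Y> = trace(X^T Y)
dual_ok = pairing <= sx[0] * sy.sum() + 1e-9
```

The last inequality is von Neumann's trace inequality specialized to the spectral/nuclear norm pair; it is the matrix analogue of Hölder's inequality for $\ell_\infty$ and $\ell_1$.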
2. Definitions and Basic Properties
We first go over some concepts related to $s$-goodness of the linear transformation $\mathcal{A}$ in LMR (RMP). These are extensions of those given for SSR (CMP) in [24].
Definition 1. Let $\mathcal{A}: \mathbb{R}^{m \times n} \to \mathbb{R}^{p}$ be a linear transformation and $s \le \min\{m, n\}$. One says that $\mathcal{A}$ is $s$-good if, for every $s$-rank matrix $X$, $X$ is the unique optimal solution to the optimization problem
$$\min_{Z} \|Z\|_{*} \quad \text{s.t.} \quad \mathcal{A}(Z) = \mathcal{A}(X). \tag{6}$$
We denote by $s_{*}(\mathcal{A})$ the largest integer $s$ for which $\mathcal{A}$ is $s$-good. Clearly, $0 \le s_{*}(\mathcal{A}) \le \min\{m, n\}$. To characterize $s$-goodness we introduce two useful $s$-goodness constants, $\gamma_{s}$ and $\hat{\gamma}_{s}$, which we call $\gamma$-numbers.
Definition 2. Let $\mathcal{A}: \mathbb{R}^{m \times n} \to \mathbb{R}^{p}$ be a linear transformation, $s \le \min\{m, n\}$, and $\beta \in [0, +\infty]$. Then we have the following.
(i) The $\gamma$-number $\gamma_{s}(\mathcal{A}, \beta)$ is the infimum of $\gamma \ge 0$ such that for every matrix $X$ with singular value decomposition $X = U V^{T}$ (i.e., $s$ nonzero singular values, all equal to 1), there exists a vector $y \in \mathbb{R}^{p}$ with $\|y\|_{2} \le \beta$ such that $\mathcal{A}^{*} y = X + W$, where $U \in \mathbb{R}^{m \times s}$, $V \in \mathbb{R}^{n \times s}$ are orthogonal matrices, $\|W\| \le \gamma$, and $W$ and $X$ have orthogonal row and column spaces. If there does not exist such a $y$ for some $X$ as above, we set $\gamma_{s}(\mathcal{A}, \beta) = +\infty$.
(ii) The $\gamma$-number $\hat{\gamma}_{s}(\mathcal{A}, \beta)$ is the infimum of $\gamma \ge 0$ such that for every matrix $X$ with $s$ nonzero singular values, all equal to 1, there exists a vector $y \in \mathbb{R}^{p}$ with $\|y\|_{2} \le \beta$ such that $\mathcal{A}^{*} y$ and $X$ share the same orthogonal row and column spaces and $\|\mathcal{A}^{*} y - X\| \le \gamma$. If there does not exist such a $y$ for some $X$ as above, we set $\hat{\gamma}_{s}(\mathcal{A}, \beta) = +\infty$. To be compatible with the special case $\beta = +\infty$ given by [24], we write $\gamma_{s}(\mathcal{A})$, $\hat{\gamma}_{s}(\mathcal{A})$ instead of $\gamma_{s}(\mathcal{A}, +\infty)$, $\hat{\gamma}_{s}(\mathcal{A}, +\infty)$, respectively.
From the above definition, we easily see that the set of values that $\gamma$ takes is closed. Thus, when $\gamma_{s}(\mathcal{A}, \beta) < +\infty$, for every matrix $X$ with $s$ nonzero singular values, all equal to 1, there exists a vector $y$ such that (10) holds. Similarly, when $\hat{\gamma}_{s}(\mathcal{A}, \beta) < +\infty$, for every matrix $X$ with $s$ nonzero singular values, all equal to 1, there exists a vector $y$ such that $\mathcal{A}^{*} y$ and $X$ share the same orthogonal row and column spaces and (11) holds. Observing that the relevant set of matrices is convex, we obtain that for every matrix $X$ with at most $s$ nonzero singular values and $\|X\| \le 1$, there exist vectors $y$ satisfying (10) and vectors $y$ satisfying (11).
2.2. Basic Properties of the $\gamma$-Numbers
In order to characterize the $s$-goodness of a linear transformation $\mathcal{A}$, we study the basic properties of the $\gamma$-numbers. We begin with the result that the $\gamma$-numbers $\gamma_{s}(\mathcal{A}, \beta)$ and $\hat{\gamma}_{s}(\mathcal{A}, \beta)$ are convex nonincreasing functions of $\beta$.
Proposition 3. For every linear transformation $\mathcal{A}$ and every $s \le \min\{m, n\}$, the $\gamma$-numbers $\gamma_{s}(\mathcal{A}, \beta)$ and $\hat{\gamma}_{s}(\mathcal{A}, \beta)$ are convex nonincreasing functions of $\beta$.
Proof. We only need to demonstrate that the quantity $\gamma_{s}(\mathcal{A}, \beta)$ is a convex nonincreasing function of $\beta$. It is evident from the definition that $\gamma_{s}(\mathcal{A}, \beta)$ is nonincreasing in $\beta$ for given $s$. It remains to show that it is a convex function of $\beta$. In other words, for every pair $\beta_{1}$, $\beta_{2}$ and every $\lambda \in [0, 1]$, we need to verify that
$$\gamma_{s}(\mathcal{A}, \lambda \beta_{1} + (1 - \lambda) \beta_{2}) \le \lambda \gamma_{s}(\mathcal{A}, \beta_{1}) + (1 - \lambda) \gamma_{s}(\mathcal{A}, \beta_{2}).$$
The above inequality follows immediately if one of $\gamma_{s}(\mathcal{A}, \beta_{1})$, $\gamma_{s}(\mathcal{A}, \beta_{2})$ is $+\infty$. Thus, we may assume both are finite. In fact, from the argument around (10) and the definition of $\gamma_{s}(\mathcal{A}, \beta)$, we know that for every matrix $X$ with $s$ nonzero singular values, all equal to 1, there exist vectors $y_{1}$, $y_{2}$ such that, for $i = 1, 2$, $\|y_{i}\| \le \beta_{i}$, $\mathcal{A}^{*} y_{i} = X + W_{i}$, $\|W_{i}\| \le \gamma_{s}(\mathcal{A}, \beta_{i})$, and each $W_{i}$ has row and column spaces orthogonal to those of $X$. Set $y = \lambda y_{1} + (1 - \lambda) y_{2}$; it is immediate from (13) that $\|y\| \le \lambda \beta_{1} + (1 - \lambda) \beta_{2}$. Moreover, $\mathcal{A}^{*} y = X + \lambda W_{1} + (1 - \lambda) W_{2}$, and $\lambda W_{1} + (1 - \lambda) W_{2}$ has row and column spaces orthogonal to those of $X$, with $\|\lambda W_{1} + (1 - \lambda) W_{2}\| \le \lambda \gamma_{s}(\mathcal{A}, \beta_{1}) + (1 - \lambda) \gamma_{s}(\mathcal{A}, \beta_{2})$. Combining these facts with the definition of $\gamma_{s}(\mathcal{A}, \beta)$, we obtain the desired conclusion.
The following observation, that the $\gamma$-numbers $\gamma_{s}(\mathcal{A}, \beta)$ and $\hat{\gamma}_{s}(\mathcal{A}, \beta)$ are nondecreasing in $s$, is immediate.
Proposition 4. For every $1 \le s' \le s \le \min\{m, n\}$ and every $\beta \in [0, +\infty]$, one has $\gamma_{s'}(\mathcal{A}, \beta) \le \gamma_{s}(\mathcal{A}, \beta)$ and $\hat{\gamma}_{s'}(\mathcal{A}, \beta) \le \hat{\gamma}_{s}(\mathcal{A}, \beta)$.
We further investigate the relationship between the $\gamma$-numbers $\gamma_{s}$ and $\hat{\gamma}_{s}$.
Proposition 5. Let $\mathcal{A}$ be a linear transformation, $s \le \min\{m, n\}$, and $\beta \in [0, +\infty]$. Then one has
Proof. Let . Then, for every matrix with nonzero singular values, all equal to 1, there exists , , such that , where and and have orthogonal row and column spaces. For a given pair as above, take . Then we have and
where the first term under the maximum comes from the fact that and agree on the subspace corresponding to the nonzero singular values of . Therefore, we obtain
Now, we assume that . Fix orthogonal matrices , . For an -element subset of the index set , we define a set with respect to orthogonal matrices as
In the above, denotes the complement of . It is immediately seen that is a closed convex set in . Moreover, we have the following
Claim 1. contains the -ball of radius centered at the origin in .
Proof. Note that is closed and convex. Moreover, is the direct sum of its projections onto the pair of subspaces Let denote the projection of onto . Then, is closed and convex (because of the direct sum property above and the fact that is closed and convex). Note that can be naturally identified with , and our claim is the image of under this identification that contains the -ball of radius centered at the origin in . For a contradiction, suppose is not contained in . Then there exists . Since is closed and convex, by a separating hyperplane theorem, there exists a vector , such that Let be defined by By definition of , for -rank matrix , there exists such that and where and have the same orthogonal row and column spaces, and . Together with the definitions of and , this means that contains a vector with , . Therefore, By and the definition of , we obtain where the strict inequality follows from the facts that and separates from . The above string of inequalities is a contradiction, and hence the desired claim holds.
Using the above claim, we conclude that for every with cardinality , there exists an such that , for all . From the definition of , we obtain that there exists with such that where if , and if . Thus, we obtain that
To conclude the proof, we need to show that the inequalities we established both hold as equations. This follows by an argument similar to the one in the proof of [24, Theorem 1]; we omit it for the sake of brevity.
We end this section with a simple argument which illustrates that, for a given $s$, $\gamma_{s}(\mathcal{A}, \beta) = \gamma_{s}(\mathcal{A})$ and $\hat{\gamma}_{s}(\mathcal{A}, \beta) = \hat{\gamma}_{s}(\mathcal{A})$ for all $\beta$ large enough.
Proposition 6. Let $\mathcal{A}$ be a linear transformation and $s \le \min\{m, n\}$. Assume that for some $\rho > 0$, the image of the unit nuclear norm ball in $\mathbb{R}^{m \times n}$ under the mapping $\mathcal{A}$ contains the ball $\{u \in \mathbb{R}^{p} : \|u\|_{2} \le \rho\}$. Then, for all $\beta$ sufficiently large, $\gamma_{s}(\mathcal{A}, \beta) = \gamma_{s}(\mathcal{A})$ and $\hat{\gamma}_{s}(\mathcal{A}, \beta) = \hat{\gamma}_{s}(\mathcal{A})$.
Proof. Fix . We only need to show the first implication. Let . Then for every matrix with its SVD , there exists a vector such that where , are orthogonal matrices, and Clearly, . That is, From the inclusion assumption, we obtain that Combining the above two strings of relations, we derive the desired conclusion.
3. $s$-Goodness and $\gamma$-Numbers
We first give the following characterization of $s$-goodness of a linear transformation via the $\gamma$-number $\gamma_{s}$, which explains the importance of $\gamma_{s}$ in LMR.
Theorem 7. Let $\mathcal{A}$ be a linear transformation and let $s$ be an integer with $s \le \min\{m, n\}$. Then $\mathcal{A}$ is $s$-good if and only if $\gamma_{s}(\mathcal{A}) < 1/2$.
Proof. Suppose $\mathcal{A}$ is $s$-good. Let $X$ be a matrix of rank $s$. Without loss of generality, let $X = U \operatorname{Diag}(\sigma(X)) V^{T}$ be its SVD, where $U$, $V$ are orthogonal matrices. By the definition of $s$-goodness of $\mathcal{A}$, $X$ is the unique solution to the optimization problem (6). Using the first-order optimality conditions, we obtain that there exists $y \in \mathbb{R}^{p}$ such that the function $Z \mapsto \|Z\|_{*} - \langle y, \mathcal{A}(Z) \rangle$ attains its minimum value over $Z$ at $X$; that is, $\mathcal{A}^{*} y \in \partial \|X\|_{*}$. Using the following fact about the subdifferential of the nuclear norm (see, e.g., [27]),
it follows that there exist matrices , such that where , are orthogonal matrices and
where and . Therefore, the optimal objective value of the optimization problem
is at most one. For the given $X$ with its SVD, let the corresponding subspace be defined as above. It is easy to see that this is a subspace, and its normal cone (in the sense of variational analysis; see, e.g., [28] for details) is specified by its orthogonal complement. Thus, the above problem (38) is equivalent to the following convex optimization problem with a set constraint:
We will show that the optimal value is less than . For a contradiction, suppose that the optimal value is one. Then, by [28, Theorem 10.1 and Exercise 10.52], there exists a Lagrange multiplier such that the function
has unconstrained minimum in equal to , where is the indicator function of . Let be an optimal solution. Then, by the optimality condition , we obtain that
Direct calculation yields that
Then there exist and such that . Notice that [29, Corollary 6.4] implies that for , and . Therefore, and . Moreover, by the definition of the dual norm of . This together with the facts , and yields
Thus, the minimum value of is attained, , when , . We obtain that . By assumption, . That is, . Without loss of generality, let SVD of the optimal be , where and . From the above arguments, we obtain that(i),
Clearly, for every , the matrices are feasible in (6). Note that Then, . From the above equations, we obtain that for all small enough (since , ). Noting that is the unique optimal solution to (6), we have , which means that for . This is a contradiction, and hence the desired conclusion holds.
We next prove that is -good if . That is, we let be an -rank matrix and we show that is the unique optimal solution to (6). Without loss of generality, let be a matrix of rank and its SVD, where , are orthogonal matrices and . It follows from Proposition 4 that . By the definition of , there exists such that , where , , and Now, we have the optimization problem of minimizing the function over all such that . Note that by and the definition of dual norm. So and this function attains its unconstrained minimum in at . Hence is an optimal solution to (6). It remains to show that this optimal solution is unique. Let be another optimal solution to the problem. Then . This together with the fact implies that there exist SVDs for and such that where and are orthogonal matrices, and if . Thus, for , for all , we must have . By the two forms of SVDs of as above, where , are the corresponding submatrices of , , respectively. Without loss of generality, let where and for the corresponding index . Then we have From , we obtain that Therefore, we deduce Clearly, the rank of is no less than . From the orthogonality property of and , we easily derive that Thus, we obtain , which implies that the rank of the matrix is no more than . Since , there exists such that Therefore, . Then .
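The subdifferential fact for the nuclear norm used in the proof above (a matrix of the form $U_{r} V_{r}^{T} + W$ with $\|W\| \le 1$ and $W$ orthogonal to $X$ is a subgradient of $\|\cdot\|_{*}$ at $X$) can be checked numerically. The sketch below takes the simplest choice $W = 0$ and tests the subgradient inequality on random matrices; the sizes and seed are arbitrary:

```python
import numpy as np

# With X = U S V^T of rank r, the matrix G = U_r V_r^T is a subgradient of
# the nuclear norm at X, so ||Z||_* >= ||X||_* + <G, Z - X> for every Z.
rng = np.random.default_rng(2)
r = 2
X = rng.standard_normal((5, r)) @ rng.standard_normal((r, 4))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
G = U[:, :r] @ Vt[:r, :]                 # the "U_r V_r^T" part (W = 0)

nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()
ok = True
for _ in range(100):
    Z = rng.standard_normal((5, 4))
    # subgradient inequality for the nuclear norm at X
    ok &= nuc(Z) >= nuc(X) + np.sum(G * (Z - X)) - 1e-9
```

The inequality holds because $\langle G, X \rangle = \|X\|_{*}$ while $\|G\| = 1$ forces $\langle G, Z \rangle \le \|Z\|_{*}$ by spectral/nuclear duality.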
Theorem 8. Let $\mathcal{A}$ be a linear transformation and $s \le \min\{m, n\}$. Then $\mathcal{A}$ is $s$-good if and only if $\hat{\gamma}_{s}(\mathcal{A}) < 1/2$.
4. $s$-Goodness, NSP, and RIP
This section deals with the connections between $s$-goodness, the null space property (NSP), and the restricted isometry property (RIP). We start by establishing the equivalence of NSP and a condition on the $\gamma$-number $\gamma_{s}$. Here, we say $\mathcal{A}$ satisfies NSP if for every nonzero matrix $W \in \operatorname{Null}(\mathcal{A})$ with singular values $\sigma_{1}(W) \ge \sigma_{2}(W) \ge \cdots$, we have
$$\sum_{i=1}^{s} \sigma_{i}(W) < \sum_{i=s+1}^{\min\{m, n\}} \sigma_{i}(W).$$
For further details, see, for example, [14, 19–21] and the references therein.
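The NSP condition can be explored numerically by sampling elements of the null space of a randomly drawn measurement operator. This is only a Monte Carlo heuristic (it can refute NSP on the sampled points but never certify it over the whole null space), and the operator, sizes, and the helper `nsp_holds` below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Sample null-space elements of a random operator acting on vectorized 4x4
# matrices and test the rank-NSP inequality pointwise for s = 1.
rng = np.random.default_rng(3)
m, n, s = 4, 4, 1
p = 14                                  # number of measurements, p < m*n
A = rng.standard_normal((p, m * n))     # A acts on vec(X)

# orthonormal basis of Null(A) from the trailing right singular vectors
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[p:]                     # (m*n - p) x (m*n)

def nsp_holds(W, s):
    # sum of the s largest singular values < sum of the remaining ones
    sig = np.linalg.svd(W, compute_uv=False)
    return sig[:s].sum() < sig[s:].sum()

samples = [null_basis.T @ rng.standard_normal(null_basis.shape[0])
           for _ in range(50)]
results = [nsp_holds(w.reshape(m, n), s) for w in samples]
```

By homogeneity, the NSP inequality only needs to be tested on a spanning set of directions in the null space, which is what the random samples approximate.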
Proposition 9. For the linear transformation $\mathcal{A}$, $\gamma_{s}(\mathcal{A}) < 1/2$ if and only if $\mathcal{A}$ satisfies NSP.
Proof. We first give an equivalent representation of the -number . We define a compact convex set first:
Let and . By definition, is the smallest such that the closed convex set contains all matrices with nonzero singular values, all equal to 1. Equivalently, contains the convex hull of these matrices, namely, . Note that satisfies the inclusion if and only if for every
For the above, we adopt the convention that whenever , is defined to be or depending on whether or . Thus, if and only if . Using the homogeneity of this last relation with respect to , the above is equivalent to
Therefore, we obtain . Furthermore,
For $W$ with $\|W\|_{*} \le 1$, let $W = U \operatorname{Diag}(\sigma(W)) V^{T}$ be its SVD. Then, the sum of the $s$ largest singular values of $W$ can be written as in (59). From (59), we immediately obtain that $\gamma_{s}(\mathcal{A})$ is the best upper bound on the sum of the $s$ largest singular values of matrices $W \in \operatorname{Null}(\mathcal{A})$ such that $\|W\|_{*} \le 1$. Therefore, $\gamma_{s}(\mathcal{A}) < 1/2$ implies that the maximum of $\sum_{i=1}^{s} \sigma_{i}(W)$ over such matrices is less than $1/2$. That is, $\sum_{i=1}^{s} \sigma_{i}(W) < \frac{1}{2} \|W\|_{*}$ for every nonzero $W \in \operatorname{Null}(\mathcal{A})$, and hence $\mathcal{A}$ satisfies NSP. Conversely, it is easy to see that if $\mathcal{A}$ satisfies NSP, then $\gamma_{s}(\mathcal{A}) < 1/2$.
Next, we consider the connection between the restricted isometry constants and the $\gamma$-numbers of a linear transformation in LMR. It is well known that, for a nonsingular matrix (transformation) $T$, the RIP constants of $\mathcal{A}$ and $T \circ \mathcal{A}$ can be very different, as shown by Zhang [30] for the vector case. However, the $s$-goodness properties of $\mathcal{A}$ and $T \circ \mathcal{A}$ are always the same for a nonsingular transformation $T$ (i.e., $s$-goodness enjoys scale invariance in this sense). Recall that the $s$-restricted isometry constant $\delta_{s}$ of a linear transformation $\mathcal{A}$ is defined as the smallest constant such that
$$(1 - \delta_{s}) \|X\|_{F}^{2} \le \|\mathcal{A}(X)\|_{2}^{2} \le (1 + \delta_{s}) \|X\|_{F}^{2}$$
holds for all $s$-rank matrices $X$. In this case, we say $\mathcal{A}$ possesses the RIP as in the CS context. For details, see [4, 31–34] and the references therein.
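The RIP constant can likewise be probed empirically: sampling random $r$-rank matrices yields a lower bound on $\delta_{r}$ (never an upper bound, since the supremum runs over all $r$-rank matrices). The Gaussian operator, normalization, and sizes below are illustrative assumptions:

```python
import numpy as np

# Monte Carlo probe of the restricted isometry constant: sample random r-rank
# matrices X and record the worst observed deviation of ||A(X)||_2^2 from
# ||X||_F^2.  This yields an empirical LOWER bound on delta_r only.
rng = np.random.default_rng(4)
m, n, r, p = 6, 6, 1, 30
A = rng.standard_normal((p, m * n)) / np.sqrt(p)   # normalized Gaussian map

deviation = 0.0
for _ in range(200):
    X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    ratio_sq = np.linalg.norm(A @ X.ravel()) ** 2 / np.linalg.norm(X, "fro") ** 2
    deviation = max(deviation, abs(ratio_sq - 1.0))

delta_lower_bound = deviation
```

For Gaussian maps with enough measurements, concentration of measure keeps these ratios near 1, which is the mechanism behind RIP-based recovery guarantees.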
Proposition 10. Let $\mathcal{A}$ be a linear transformation and $s \le \min\{m, n\}$. For any nonsingular transformation $T: \mathbb{R}^{p} \to \mathbb{R}^{p}$, $\gamma_{s}(T \circ \mathcal{A}) = \gamma_{s}(\mathcal{A})$.
Proof. It follows from the nonsingularity of $T$ that $\operatorname{Null}(T \circ \mathcal{A}) = \operatorname{Null}(\mathcal{A})$. Then, by the equivalent representation (59) of the $\gamma$-number, the desired conclusion follows.
For the RIP constant, Oymak et al. [21] gave the current best bound on the restricted isometry constant, via a general technique they proposed for translating results from SSR to LMR. Together with the above arguments, we immediately obtain the following theorem.
Theorem 11. Every linear transformation $\mathcal{A}$ satisfying the restricted isometry bound of [21] is $s$-good.
Recall from the preceding sections that $s$-goodness is a necessary and sufficient condition for recovering the low-rank solution exactly via nuclear norm minimization; the above theorem gives a verifiable RIP-type sufficient condition for it.
5. Conclusion

In this paper, we have shown that $s$-goodness of the linear transformation in LMR is a necessary and sufficient condition for exact $s$-rank matrix recovery via nuclear norm minimization, and that it is equivalent to the null space property. Our analysis is based on the two characteristic $s$-goodness constants, $\gamma_{s}$ and $\hat{\gamma}_{s}$, and the variational properties of matrix norms in convex optimization. This shows that $s$-goodness is an elegant concept for low-rank matrix recovery, although $\gamma_{s}$ and $\hat{\gamma}_{s}$ may not be easy to compute; the development of efficiently computable bounds on these quantities is left to future work. Even though we develop and use techniques based on optimization, convex analysis, and geometry, we do not provide explicit analogues to the results of Donoho [35], where necessary and sufficient conditions for the vector recovery special case were derived based on the geometric notions of face preservation and neighborliness. The corresponding generalization to low-rank recovery is not known; currently, the closest work is [22]. Moreover, it is also important to consider the semidefinite relaxation (SDR) for rank minimization with a positive semidefinite constraint, since the SDR convexifies nonconvex or discrete optimization problems by removing the rank-one constraint. Another future research topic is to extend the main results and techniques of this paper to the SDR.
Acknowledgments

The authors thank two anonymous referees for their very useful comments. The work was supported in part by the National Basic Research Program of China (2010CB732501), the National Natural Science Foundation of China (11171018), a Discovery Grant from NSERC, a research grant from the University of Waterloo, ONR Research Grant N00014-12-10049, and the Fundamental Research Funds for the Central Universities (2011JBM306, 2013JBM095).
References

1. C. Beck and R. D'Andrea, "Computational study and comparisons of LFT reducibility methods," in Proceedings of the American Control Conference, Philadelphia, Pa, USA, June 1998.
2. M. Mesbahi and G. P. Papavassilopoulos, "On the rank minimization problem over a positive semidefinite linear matrix inequality," IEEE Transactions on Automatic Control, vol. 42, no. 2, pp. 239–243, 1997.
3. M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the American Control Conference, pp. 4734–4739, June 2001.
4. B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
5. B. P. W. Ames and S. A. Vavasis, "Nuclear norm minimization for the planted clique and biclique problems," Mathematical Programming, vol. 129, no. 1, pp. 69–89, 2011.
6. J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
7. E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717–772, 2009.
8. C. Ding, D. Sun, and K. C. Toh, "An introduction to a class of matrix cone programming," Tech. Rep., 2010.
9. Z. Lin, M. Chen, L. Wu, and Y. Ma, "The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices," http://arxiv.org/abs/1009.5055.
10. Y. Liu, D. Sun, and K. C. Toh, "An implementable proximal point algorithmic framework for nuclear norm minimization," Mathematical Programming, vol. 133, no. 1-2, pp. 399–436, 2012.
11. Z. Liu and L. Vandenberghe, "Interior-point method for nuclear norm approximation with application to system identification," SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 3, pp. 1235–1256, 2009.
12. S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.
13. Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "RASL: robust alignment by sparse and low-rank decomposition for linearly correlated images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2233–2246, 2012.
14. B. Recht, W. Xu, and B. Hassibi, "Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization," in Proceedings of the 47th IEEE Conference on Decision and Control (CDC '08), pp. 3065–3070, Cancun, Mexico, December 2008.
15. M. Tao and X. Yuan, "Recovering low-rank and sparse components of matrices from incomplete and noisy observations," SIAM Journal on Optimization, vol. 21, no. 1, pp. 57–81, 2011.
16. E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
17. E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
18. D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
19. B. Recht, W. Xu, and B. Hassibi, "Null space conditions and thresholds for rank minimization," Mathematical Programming, vol. 127, no. 1, pp. 175–202, 2011.
20. S. Oymak and B. Hassibi, "New null space results and recovery thresholds for matrix rank minimization," http://arxiv.org/abs/1011.6326.
21. S. Oymak, K. Mohan, M. Fazel, and B. Hassibi, "A simplified approach to recovery conditions for low-rank matrices," in Proceedings of the International Symposium on Information Theory (ISIT '11), pp. 2318–2322, August 2011.
22. V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky, "The convex geometry of linear inverse problems," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing, 2011.
23. A. d'Aspremont and L. El Ghaoui, "Testing the nullspace property using semidefinite programming," Mathematical Programming, vol. 127, no. 1, pp. 123–144, 2011.
24. A. Juditsky and A. Nemirovski, "On verifiable sufficient conditions for sparse signal recovery via $\ell_1$ minimization," Mathematical Programming, vol. 127, no. 1, pp. 57–88, 2011.
25. A. Juditsky, F. Karzan, and A. Nemirovski, "Verifiable conditions of $\ell_1$-recovery for sparse signals with sign restrictions," Mathematical Programming, vol. 127, no. 1, pp. 89–122, 2011.
26. A. Juditsky, F. Karzan, and A. S. Nemirovski, "Accuracy guarantees for $\ell_1$-recovery," IEEE Transactions on Information Theory, vol. 57, no. 12, pp. 7818–7839, 2011.
27. G. A. Watson, "Characterization of the subdifferential of some matrix norms," Linear Algebra and Its Applications, vol. 170, pp. 33–45, 1992.
28. R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, New York, NY, USA, 2nd edition, 2004.
29. A. S. Lewis and H. S. Sendov, "Nonsmooth analysis of singular values. II. Applications," Set-Valued Analysis, vol. 13, no. 3, pp. 243–264, 2005.
30. Y. Zhang, "Theory of compressive sensing via $\ell_1$-minimization: a non-RIP analysis and extensions," Tech. Rep., 2008.
31. E. J. Candès and Y. Plan, "Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2342–2359, 2011.
32. K. Lee and Y. Bresler, "Guaranteed minimum rank approximation from linear observations by nuclear norm minimization with an ellipsoidal constraint," http://arxiv.org/abs/0903.4742.
33. R. Meka, P. Jain, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," http://arxiv.org/abs/0909.5457.
34. K. Mohan and M. Fazel, "New restricted isometry results for noisy low-rank matrix recovery," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '10), Austin, Tex, USA, June 2010.
35. D. L. Donoho, "Neighborly polytopes and sparse solution of underdetermined linear equations," Tech. Rep., 2004.