Mathematical Problems in Engineering
Volume 2015, Article ID 584712, 6 pages
http://dx.doi.org/10.1155/2015/584712
Research Article

The Sparsity of Underdetermined Linear System via $\ell_q$ Minimization for $0<q\leq 1$

1School of Science, Xi’an Polytechnic University, Xi’an 710048, China
2Department of Mathematics, Xi'an Jiaotong University, Xi'an 710049, China
3School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK

Received 7 November 2014; Accepted 14 April 2015

Academic Editor: Kacem Chehdi

Copyright © 2015 Haiyang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The sparsity problem, which aims to find the sparsest solution of a representation or an equation, has attracted a great deal of attention in recent years. In this paper, we mainly study the sparsity of underdetermined linear systems via $\ell_q$ minimization for $0<q\leq 1$. We show, for a given underdetermined linear system of equations $Ax=b$, that although it is not certain that the problem $(P_q)$ (i.e., $\min_x \|x\|_q^q$ subject to $Ax=b$, where $0<q<1$) generates sparser solutions as the value of $q$ decreases, and in particular it is not certain that the problem $(P_q)$ generates sparser solutions than the problem $(P_1)$ (i.e., $\min_x \|x\|_1$ subject to $Ax=b$), there exists a sparse constant $q^*(A,b)>0$ such that the following conclusions hold when $0<q<q^*(A,b)$: (1) the problem $(P_q)$ generates sparser solutions as the value of $q$ decreases; (2) the sparsest optimal solution to the problem $(P_q)$ is unique under the sense of absolute value permutation; (3) if $x^{(1)}$ and $x^{(2)}$ are the sparsest optimal solutions to the problems $(P_{q_1})$ and $(P_{q_2})$ ($q_1<q_2$), respectively, and $x^{(1)}$ is not the absolute value permutation of $x^{(2)}$, then there exist $\bar{q}_1,\bar{q}_2$ with $q_1<\bar{q}_1\leq\bar{q}_2<q_2$ such that $x^{(1)}$ is the sparsest optimal solution to the problem $(P_q)$ for $q\in[q_1,\bar{q}_1)$ and $x^{(2)}$ is the sparsest optimal solution to the problem $(P_q)$ for $q\in(\bar{q}_2,q_2]$.

1. Introduction

Recently, considerable attention has been paid to the following sparsity problem. Given a full-rank matrix $A$ of size $m\times n$ with $m<n$ and an $m$-vector $b$, and knowing that $b=Ax_0$, where $x_0$ is an unknown sparse vector, we expect to recover $x_0$. Although the system of equations $Ax=b$ is underdetermined, and hence not a properly posed problem in linear algebra, the sparsity of $x_0$ is a very useful prior that sometimes allows a unique solution. Accordingly, one naturally proposes the following optimization model to obtain the sparsest solutions: $$(P_0)\qquad \min_x \|x\|_0 \quad \text{subject to } Ax=b,$$ where $\|x\|_0$ denotes the number of nonzero components of $x$ (we call it the $\ell_0$ norm). This is one of the critical problems in compressed sensing research, motivated by data compression, error correcting codes, $n$-term approximation, and so forth (see, e.g., [1]). It is known that the problem $(P_0)$ needs nonpolynomial time to solve (cf. [2]). One natural approach to tackle $(P_0)$ is to solve the following convex minimization problem instead: $$(P_1)\qquad \min_x \|x\|_1 \quad \text{subject to } Ax=b,$$ where $\|x\|_1$ is the standard $\ell_1$ norm. The study of this problem was pioneered by Donoho, Candès, and their collaborators, and many researchers have contributed results on the existence, uniqueness, and other properties of the sparse solution, as well as computational algorithms and their convergence analysis (see the survey papers [3–5]). However, the solutions to the problem $(P_1)$ are often not as sparse as those to the problem $(P_0)$, and many applications require solutions sparser than those of $(P_1)$. A natural attempt for this purpose is to solve the following model: $$(P_q)\qquad \min_x \|x\|_q^q \quad \text{subject to } Ax=b,$$ where $0<q<1$ (we call $\|\cdot\|_q$ the $\ell_q$-norm, though it is no longer a norm for $q<1$, as the triangle inequality is no longer satisfied). Obviously, the problem $(P_q)$ is no longer a convex optimization problem. This minimization is motivated by the following fact: $$\lim_{q\to 0^+}\|x\|_q^q=\|x\|_0.$$ This model was initiated by [6], and many researchers have worked in this direction [1, 2, 7–16].
They demonstrate the following: (1) for a Gaussian random matrix $A$, the restricted $q$-isometry property of order $k$ holds with a number of measurements almost proportional to $k$ as $q\to 0^+$ (cf. [8]); (2) under suitable bounds on the restricted isometry constants of the matrix $A$, the optimal solution to the problem $(P_q)$ is the same as the optimal solution to the problem $(P_0)$ when $q$ is small enough (cf. [7, 10, 13]); and (3) $\ell_q$ minimization can be applied to a wider class of random matrices (cf. [11]). In addition, in [7, 15], the authors show, through phase diagram studies based on a set of experiments, that the problem $(P_q)$ generates sparser solutions than the problem $(P_1)$ and that the problem $(P_q)$ generates sparser solutions as the value of $q$ decreases. Nevertheless, are the conclusions suggested by these phase diagram studies true in theory? In this paper, we answer this question by studying the sparsity of $\ell_q$ minimization. Firstly, using Example 2 we show that, in general, the answer to the question above is negative. Secondly, although the answer is negative in general, we can prove that, for a given underdetermined linear system of equations $Ax=b$, there exists a constant $q^*(A,b)>0$ (we call it the sparse constant) such that the following conclusions hold when $0<q<q^*(A,b)$. (1) The problem $(P_q)$ generates sparser solutions as the value of $q$ decreases (Theorem 7). (2) Let $x^*$ be the sparsest optimal solution to the problem $(P_q)$. Then $x^*$ is the unique sparsest optimal solution to the problem $(P_q)$ under the sense of absolute value permutation (Corollary 6). (3) Let $x^{(1)}$ and $x^{(2)}$ be the sparsest optimal solutions to the problem $(P_{q_1})$ and the problem $(P_{q_2})$ ($q_1<q_2$), respectively, and let $x^{(1)}$ not be the absolute value permutation of $x^{(2)}$. Then there exist $\bar{q}_1,\bar{q}_2$ with $q_1<\bar{q}_1\leq\bar{q}_2<q_2$ such that $x^{(1)}$ is the sparsest optimal solution to the problem $(P_q)$ for $q\in[q_1,\bar{q}_1)$ and $x^{(2)}$ is the sparsest optimal solution to the problem $(P_q)$ for $q\in(\bar{q}_2,q_2]$ (Theorem 8).
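Before proceeding, the limiting fact that motivates $\ell_q$ minimization, $\lim_{q\to 0^+}\|x\|_q^q=\|x\|_0$, is easy to check numerically. The sketch below is illustrative only (the vector `x` and the function names are our own choices, not data from the paper):

```python
def lq_q(x, q):
    """||x||_q^q = sum_i |x_i|^q, with the convention 0^q = 0."""
    return sum(abs(t) ** q for t in x if t != 0)

def l0(x):
    """||x||_0: the number of nonzero components of x."""
    return sum(1 for t in x if t != 0)

x = [3.0, 0.0, -0.25, 0.0, 1.5]   # arbitrary illustrative vector, ||x||_0 = 3
for q in (1.0, 0.5, 0.1, 0.001):
    print(q, lq_q(x, q))          # the values approach l0(x) = 3 as q -> 0+
```

As $q$ shrinks, each nonzero entry contributes $|x_i|^q\to 1$, so the sum tends to the count of nonzeros regardless of their magnitudes.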

2. The Sparsity of Underdetermined Linear System via $\ell_q$ Minimization

Let $S$ denote the set of all solutions to the underdetermined linear system $Ax=b$. For convenience of exposition, we call $y$ the absolute value permutation of $x$ if $(|y_1|,|y_2|,\ldots,|y_n|)$ is a permutation of $(|x_1|,|x_2|,\ldots,|x_n|)$, where $x=(x_1,\ldots,x_n)$ and $y=(y_1,\ldots,y_n)$.
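In code, the absolute value permutation relation amounts to comparing multisets of absolute values; the short helper below makes this concrete (the function name is our own):

```python
def is_abs_value_permutation(x, y):
    """True iff (|y_1|, ..., |y_n|) is a permutation of (|x_1|, ..., |x_n|)."""
    return sorted(abs(t) for t in x) == sorted(abs(t) for t in y)

print(is_abs_value_permutation([1.0, -2.0, 0.0], [2.0, 0.0, -1.0]))  # True
print(is_abs_value_permutation([1.0, -2.0, 0.0], [2.0, 0.0, -3.0]))  # False
```

Note that signs and the ordering of entries are ignored; in particular, two vectors related in this way have the same value of $\|\cdot\|_q^q$ for every $q$.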

Lemma 1 (see [17]). The problem $(P_1)$ may have more than one solution. Nevertheless, even if there are infinitely many possible solutions to this problem, we can claim that (1) these solutions are gathered in a set that is bounded and convex, and (2) among these solutions, there exists at least one with at most $m$ nonzeros, where $m$ is the number of rows of $A$.

The following example shows that, in general, it is not certain that the problem $(P_q)$ generates sparser solutions than the problem $(P_1)$, nor that the problem $(P_q)$ generates sparser solutions as the value of $q$ decreases.

Example 2. We consider an underdetermined linear system of equations $Ax=b$. By Lemma 1, the $\ell_0$-norm of each optimal solution to the problem $(P_1)$ is not more than 3, and hence the optimal solution is one of the following feasible solutions:(1);(2);(3);(4).
Furthermore, we can show that the optimal solution to the problem $(P_q)$ ($0<q<1$) is one of the above feasible solutions, and the values of $\|\cdot\|_q^q$ for the candidates are easy to calculate. The calculation shows that one candidate is optimal for larger values of $q$ while another is optimal for smaller values of $q$, and yet the latter is not sparser than the former. Therefore, the problem $(P_q)$ does not always generate sparser solutions than the problem $(P_1)$, and the problem $(P_q)$ does not always generate sparser solutions as the value of $q$ decreases.
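The mechanism behind such reversals is easy to see on a toy pair of vectors (our own illustrative choice, not the feasible solutions of Example 2): a sparser vector with larger entries can lose under the $\ell_1$ norm yet win under $\|\cdot\|_q^q$ once $q$ is small, so which candidate is optimal can switch with $q$.

```python
def lq_q(x, q):
    """||x||_q^q = sum_i |x_i|^q, with 0^q taken to be 0."""
    return sum(abs(t) ** q for t in x if t != 0)

sparse = [1.5, 1.5, 0.0]   # 2 nonzero entries, larger magnitudes
dense = [0.9, 0.9, 0.9]    # 3 nonzero entries, smaller magnitudes

print(lq_q(sparse, 1.0), lq_q(dense, 1.0))  # 3.0 vs 2.7: the denser vector wins at q = 1
print(lq_q(sparse, 0.2), lq_q(dense, 0.2))  # ~2.17 vs ~2.94: the sparser vector wins at q = 0.2
```

For $q$ close to 1 the magnitudes dominate, while for small $q$ each nonzero entry contributes roughly 1, so the count of nonzeros dominates.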

In the following, we prove the conclusions mentioned in the Introduction.

For $x\in S$, we define two functions $f_x(q)=\|x\|_q^q=\sum_{i=1}^{n}|x_i|^q$ and $g_x(q)=\|x\|_q$, where $q\in(0,1]$ and $0^q$ is taken to be $0$. Then $g_x(q)=f_x(q)^{1/q}$.

Theorem 3. $g_x$ is a monotone decreasing convex function and $$\lim_{q\to 0^+} f_x(q)=\|x\|_0.\qquad(7)$$

Proof. It is easy to show that (7) holds. Without loss of generality, we assume that . Because is monotone decreasing.
Furthermore, it is a convex function. In fact, by the convexity of the function , we have , that is, , and thus , and hence is monotone increasing. Since is monotone decreasing, we know that is monotone increasing. Because is monotone decreasing, is monotone increasing, and , which implies convexity.
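As a numerical spot-check of the monotonicity and convexity just discussed (illustrative only; the sample vector and grid of $q$ values are our own choices), one can verify that $q\mapsto\|x\|_q$ decreases on $(0,1]$ and satisfies the midpoint convexity inequality:

```python
def lq_norm(x, q):
    """||x||_q = (sum_i |x_i|^q)^(1/q), zero entries contributing 0."""
    return sum(abs(t) ** q for t in x if t != 0) ** (1.0 / q)

x = [3.0, 1.0, 0.5]                        # arbitrary sample vector
g = {q: lq_norm(x, q) for q in (0.3, 0.6, 0.9)}

print(g[0.3] > g[0.6] > g[0.9])            # monotone decreasing in q
print(g[0.6] <= (g[0.3] + g[0.9]) / 2)     # midpoint convexity at q = 0.6
```

Such a check on a grid is, of course, no substitute for the proof; it merely illustrates the claimed behavior on one vector.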

Theorem 4. For a given underdetermined linear system of equations $Ax=b$, there exists a constant $q^*>0$ such that, for any $x_1,x_2\in S$ that are not absolute value permutations of each other, either $f_{x_1}(q)<f_{x_2}(q)$ or $f_{x_1}(q)>f_{x_2}(q)$ when $0<q<q^*$.

Proof. Let and ; there exists such that . Clearly, we have , .
Firstly, for any , there exists a constant such that when , either or .
Obviously, for any given , there is a positive number such that when , either or . Hence, it suffices to show . Otherwise, for an arbitrarily small positive number , there exists with , , and such that . Using (7) we obtain , that is, . Since , we have .
Hence, there is a positive integer such that and, for any positive integer with , since, for any positive integer , we obtain, for and mentioned above, the following. We assume, without loss of generality, that . For the mentioned above, (15) becomes (21). For the right-hand side of (21), we obtain , and for the left-hand side of (21), we obtain . This is a contradiction, and thus, when , either or .
Secondly, for any , there exists a constant such that when , either or .
It suffices to show that . Otherwise, for an arbitrarily small positive number , there is with , and () such that . Using (7) again, we also obtain (15).
For the right-hand side of (15), we have , and for the left-hand side of (15), we have . This is a contradiction, and thus .
Thirdly, for any , , , there exists a constant such that when , either or .
We assume, without loss of generality, that . Then So there is a positive number such that when , which implies that In conclusion, we take and thus when , for any , either or .

Obviously, for a given underdetermined linear system of equations $Ax=b$, there are infinitely many constants $q^*$ for which the conclusion of Theorem 4 holds when $0<q<q^*$. The supremum of such $q^*$ is called the sparse constant of the underdetermined linear system of equations $Ax=b$ and is denoted by $q^*(A,b)$.

Corollary 5. Let $Ax=b$ be an underdetermined linear system. Then $f_{x_1}$ and $f_{x_2}$ have at most one intersection in $(0,1]$, where $x_1,x_2\in S$ and $x_1$ is not the absolute value permutation of $x_2$.

Proof. It is easy to prove that the conclusion holds by Theorems 3 and 4.

Corollary 6. Let $x^*$ be the sparsest optimal solution to the problem $(P_q)$ ($0<q<q^*(A,b)$). Then $x^*$ is the unique sparsest optimal solution to the problem $(P_q)$ under the sense of absolute value permutation.

Proof. Suppose that $y$ is another sparsest optimal solution to the problem $(P_q)$ and that $y$ is not the absolute value permutation of $x^*$. By Theorem 4, either $f_{x^*}(q)<f_y(q)$ or $f_{x^*}(q)>f_y(q)$, while the optimality of both $x^*$ and $y$ gives $f_{x^*}(q)=f_y(q)$. This is a contradiction.

Theorem 7. The problem $(P_q)$ generates sparser solutions as the value of $q$ decreases, when $0<q<q^*(A,b)$.

Proof. If the conclusion does not hold, then there exist an optimal solution $x^{(1)}$ to the problem $(P_{q_1})$ and an optimal solution $x^{(2)}$ to the problem $(P_{q_2})$ satisfying $q_1<q_2$ and $\|x^{(1)}\|_0>\|x^{(2)}\|_0$. We consider the following two cases. (1) If , then because of Corollary 5 and . This contradicts the fact that $x^{(1)}$ is an optimal solution to $(P_{q_1})$. (2) If , then $f_{x^{(1)}}$ and $f_{x^{(2)}}$ have at least one intersection in because of . Since , $f_{x^{(1)}}$ and $f_{x^{(2)}}$ have at least one further intersection in . This contradicts Corollary 5.

Theorem 8. Let $x^{(1)}$ and $x^{(2)}$ be the sparsest optimal solutions to the problem $(P_{q_1})$ and the problem $(P_{q_2})$ ($q_1<q_2$), respectively, and suppose that $x^{(1)}$ is not the absolute value permutation of $x^{(2)}$. Then there exist $\bar{q}_1,\bar{q}_2$ with $q_1<\bar{q}_1\leq\bar{q}_2<q_2$ such that, when $q\in[q_1,\bar{q}_1)$, $x^{(1)}$ is the sparsest optimal solution to the problem $(P_q)$ and, when $q\in(\bar{q}_2,q_2]$, $x^{(2)}$ is the sparsest optimal solution to the problem $(P_q)$.

Proof. Firstly, is not the optimal solution to and hence . In fact, if , then by Corollary 5 and , it is the optimal solution to the problem . By Corollary 5 again, we have , which contradicts the fact that it is the sparsest optimal solution to .
We consider the following two cases. (1) If , then, for any , it is the sparsest optimal solution to the problem . Otherwise, there exists such that or and . If , then by Corollary 5 and , which contradicts the hypotheses of Theorem 8. If and , then by Corollary 5, which contradicts the fact that it is the optimal solution to . Therefore, we pick . (2) If , then, by , and have one intersection in , and hence . We assume, without loss of generality, that . Let be the sparsest optimal solution to the problem . Then it is not the absolute value permutation of . Otherwise, we have , that is, it is the optimal solution to the problem . Since , we have , which contradicts the fact that it is the sparsest optimal solution to the problem .
If is the absolute value permutation of , then and thus, by the proof of case (1), for any , is the sparsest optimal solution to the problem . Obviously, for any , is the sparsest optimal solution to the problem . Therefore, we pick .
If is not the absolute value permutation of , then by Corollary 6, and there exist , such that is the intersection of and and is the intersection of and . By the proof above, we have that, for any , is the sparsest optimal solution to the problem and for any , is the sparsest optimal solution to the problem .

3. Conclusion

In this paper, the sparsity of underdetermined linear systems via $\ell_q$ minimization for $0<q\leq 1$ has been studied. Our research reveals that, for a given underdetermined linear system of equations $Ax=b$, there exists a sparse constant $q^*(A,b)$ such that, when $0<q<q^*(A,b)$, the problem $(P_q)$ generates sparser solutions as the value of $q$ decreases, and the sparsest optimal solution to the problem $(P_q)$ is unique under the sense of absolute value permutation; moreover, if $x^{(1)}$ is not the absolute value permutation of $x^{(2)}$, where $x^{(1)}$ and $x^{(2)}$ are the sparsest optimal solutions to the problems $(P_{q_1})$ and $(P_{q_2})$ ($q_1<q_2$), respectively, then there exist $\bar{q}_1,\bar{q}_2$ with $q_1<\bar{q}_1\leq\bar{q}_2<q_2$ such that $x^{(1)}$ is the sparsest optimal solution to the problem $(P_q)$ ($q\in[q_1,\bar{q}_1)$) and $x^{(2)}$ is the sparsest optimal solution to the problem $(P_q)$ ($q\in(\bar{q}_2,q_2]$).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Projects 11271297, 11131006, and 11201362, by the EU FP7 Project EYE2E (269118), and by the National Basic Research Program of China under Project 2013CB329404.

References

  1. M.-J. Lai, "On sparse solutions of underdetermined linear systems," Journal of Concrete and Applicable Mathematics, vol. 8, no. 2, pp. 296–327, 2010.
  2. B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
  3. R. G. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–124, 2007.
  4. A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
  5. E. J. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, vol. 3, pp. 1433–1452, European Mathematical Society, Zürich, Switzerland, 2006.
  6. R. Gribonval and M. Nielsen, "Sparse representations in unions of bases," IEEE Transactions on Information Theory, vol. 49, no. 12, pp. 3320–3325, 2003.
  7. R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.
  8. R. Chartrand and V. Staneva, "Restricted isometry properties and nonconvex compressive sensing," Inverse Problems, vol. 24, no. 3, Article ID 035020, pp. 1–14, 2008.
  9. X. Chen, F. Xu, and Y. Ye, "Lower bound theory of nonzero entries in solutions of $\ell_2$-$\ell_q$ minimization," SIAM Journal on Scientific Computing, vol. 32, no. 5, pp. 2832–2852, 2010.
  10. S. Foucart and M.-J. Lai, "Sparsest solutions of underdetermined linear systems via $\ell_q$-minimization for $0<q\leq 1$," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395–407, 2009.
  11. S. Foucart and M.-J. Lai, "Sparse recovery with pre-Gaussian random matrices," Studia Mathematica, vol. 200, no. 1, pp. 91–102, 2010.
  12. M.-J. Lai and J. Wang, "An unconstrained $\ell_q$ minimization with $0<q\leq 1$ for sparse solution of underdetermined linear systems," SIAM Journal on Optimization, vol. 21, no. 1, pp. 82–101, 2011.
  13. Q. Sun, "Recovery of sparsest signals via $\ell_q$-minimization," Applied and Computational Harmonic Analysis, vol. 32, no. 3, pp. 329–341, 2012.
  14. Z. B. Xu, X. Chang, F. Xu, and H. Zhang, "$L_{1/2}$ regularization: a thresholding representation theory and a fast solver," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 7, pp. 1013–1027, 2012.
  15. Z. B. Xu, H. L. Guo, Y. Wang, and H. Zhang, "Representative of $L_{1/2}$ regularization among $L_q$ ($0<q\leq 1$) regularization: an experimental study based on phase diagram," Acta Automatica Sinica, vol. 38, no. 7, pp. 1225–1228, 2012.
  16. J. Zeng, S. Lin, Y. Wang, and Z. B. Xu, "$L_{1/2}$ regularization: convergence of iterative half thresholding algorithm," IEEE Transactions on Signal Processing, vol. 62, no. 9, pp. 2317–2329, 2014.
  17. M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer, New York, NY, USA, 2010.