
Discrete Dynamics in Nature and Society

Volume 2013 (2013), Article ID 692169, 6 pages

http://dx.doi.org/10.1155/2013/692169

## Restricted $p$-Isometry Properties of Partially Sparse Signal Recovery

Department of Applied Mathematics, Beijing Jiaotong University, Beijing 100044, China

Received 20 January 2013; Revised 3 March 2013; Accepted 5 March 2013

Academic Editor: Binggen Zhang

Copyright © 2013 Haini Bi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

By generalizing the restricted $p$-isometry property to the partially sparse signal recovery problem, we give a sufficient condition for exactly recovering a partially sparse signal via the partial $\ell_p$ minimization (truncated $\ell_p$ minimization) problem with $0<p<1$. Based on this, we establish a simpler sufficient condition which shows how the $p$-RIP bounds vary with different values of $p$.

#### 1. Introduction

Partially sparse signal recovery (PSSR), a term coined by Bandeira et al. [1, 2], is the problem of recovering a partially sparse signal from a certain number of linear measurements when part of the signal is known to be sparse. This type of problem has many applications in signal and image processing, derivative-free optimization, and so on; see, for example, [1–4]. Clearly, PSSR includes *sparse signal recovery (SSR)* as a special case. The latter is a well-known NP-hard problem in compressed sensing (CS) and is also called the *cardinality minimization problem* (CMP, or $\ell_0$-norm minimization problem); see, for example, [5–8]. In particular, Candès and Tao [8] introduced a restricted isometry property (RIP) of a sensing matrix which guarantees recovery of a sparse solution of SSR by minimizing its convex relaxation ($\ell_1$-norm minimization). However, some problems cannot be reformulated as an SSR, but only as a PSSR. PSSR arises naturally in sparse Hessian recovery; see, for example, [2], where Bandeira et al. employed a partially sparse recovery approach for building sparse quadratic interpolation models of functions with sparse Hessians. They successfully applied the $\ell_1$-norm minimization of PSSR in interpolation-based trust-region methods for derivative-free optimization. Vaswani and Lu [3] successfully applied modified CS (partially sparse recovery) to image reconstruction, where the sufficient RIP condition is weaker than the RIP for SSR. Moreover, Bandeira et al. [1] considered the RIP and null space properties (NSP) for PSSR and extended recovery results under noisy measurements to the partially sparse case, where the partial NSP is a necessary and sufficient condition for PSSR. In [4], Jacques also established a partial RIP condition for PSSR with noise via its convex relaxation problem.

Note that in the CS context, the SSR problem can also be relaxed to an $\ell_p$-norm minimization (truncated $\ell_p$-minimization) problem with $0<p<1$; see, for example, [9–19]. It is well known that Chartrand [20] first showed that fewer measurements are required for exact reconstruction if we replace the $\ell_1$-norm with the $\ell_p$-norm ($0<p<1$), and Chartrand and Staneva [10] established $p$-RIP conditions for exact SSR via $\ell_p$-minimization. In particular, numerical experiments in magnetic resonance imaging (MRI) showed that this approach works very efficiently; see [9] for details. Wang et al. [19] studied the performance of $\ell_p$-minimization for strong recovery and *weak recovery*, where one needs to recover all the sparse vectors on one support with one sign pattern. Moreover, Saab et al. [16] provided a sufficient condition for SSR via $\ell_p$-minimization together with a lower bound on the support size up to which $\ell_p$-minimization can recover all such sparse vectors, and Foucart and Lai [14] improved this bound by considering a generalized version of the RIP condition. While SSR and $\ell_p$-minimization have been the focus of much recent research, there is less research related to PSSR and partial $\ell_p$-minimization. One may naturally wonder whether the $p$-RIP conditions introduced in [10] can be generalized from the SSR to the PSSR case. This paper deals with this issue. We give a different $p$-RIP recovery condition for PSSR via its nonconvex relaxation. Furthermore, based on the recent work by Oymak et al. [21], we also extend our result to the matrix setting.

In the next section, we give the PSSR model and review some preliminaries on $p$-RIP conditions. In Section 3, we establish exact partially $p$-RIP recovery conditions for PSSR via its nonconvex $\ell_p$-minimization. In Section 4, we give a sufficient condition for partially low-rank matrix recovery via the partial Schatten-$p$ minimization problem.

#### 2. Preliminaries

In this section, we review some basic concepts and results on the $p$-RIP recovery conditions for SSR and introduce the $p$-RIP definition for PSSR. We begin with the mathematical model of the PSSR problem:

$$\min_{y\in\mathbb{R}^{n_1},\,z\in\mathbb{R}^{n_2}} \|z\|_0 \quad \text{s.t.}\quad By+Cz=b, \tag{1}$$

where the $\ell_0$-norm is defined as $\|z\|_0:=|\{i: z_i\neq 0\}|$ (which is not really a norm since it is not positively homogeneous). For a positive integer $k$, we say $z$ is *$k$-sparse* if $\|z\|_0\le k$. Here $A=[B\ C]\in\mathbb{R}^{m\times n}$ is a sensing matrix with $B\in\mathbb{R}^{m\times n_1}$, $C\in\mathbb{R}^{m\times n_2}$, and $n_1+n_2=n$. That is, the unknown vector $x=(y^\top,z^\top)^\top$ consists of two parts, where $z$ is sparse and $y$ is possibly dense. When $n_1=0$, the previous problem reduces to the following *$\ell_0$-norm minimization problem (sparse signal recovery, SSR)*:

$$\min_{x\in\mathbb{R}^{n}} \|x\|_0 \quad \text{s.t.}\quad Ax=b. \tag{2}$$

The PSSR problem (1) is NP-hard, since its special case SSR (2) is a well-known NP-hard problem in compressed sensing (CS). As mentioned in Section 1, one popular and powerful approach is to solve it via *$\ell_1$-norm minimization* (its convex relaxation), where the $\ell_0$-norm in SSR (2) is replaced by the $\ell_1$-norm. Moreover, one can also use a nonconvex approach for exact reconstruction with fewer measurements than the convex relaxation requires; see, for example, [9, 10]. That is the *$\ell_p$-norm minimization* problem with $0<p<1$, where the $\ell_0$-norm in (2) is replaced by the $\ell_p$-norm:

$$\min_{x\in\mathbb{R}^{n}} \|x\|_p^p \quad \text{s.t.}\quad Ax=b, \qquad 0<p<1, \tag{3}$$

where $\|x\|_p^p=\sum_{i=1}^n |x_i|^p$. Note that $\|\cdot\|_p$ is not a norm when $0<p<1$, but it is much closer to the $\ell_0$-norm than the $\ell_1$-norm is. Moreover, numerical experiments in MRI showed that the approach via $\ell_p$-minimization works very efficiently; see [9] for details. In particular, Chartrand and Staneva [10] introduced the concept of a restricted isometry constant via the $\ell_p$-norm.
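Although this paper is theoretical, a small numerical illustration may help. The following is a minimal sketch (ours, not from the paper) of one common heuristic for the $\ell_p$-minimization problem above: iteratively reweighted least squares (IRLS), in the spirit of Chartrand's algorithms [9, 20]. All names, sizes, and parameter choices here are illustrative assumptions.

```python
import numpy as np

def irls_lp(A, b, p=0.5, iters=50, eps=1.0):
    """Heuristic IRLS for min ||x||_p^p subject to Ax = b (0 < p < 1).

    Each step solves a weighted minimum-norm problem; eps is annealed
    toward zero to approximate the nonsmooth |x_i|^p terms.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # minimum-norm start
    for _ in range(iters):
        w = (x**2 + eps) ** (1.0 - p / 2.0)       # smoothed weights
        Aw = A * w                                 # scales column j by w_j
        x = w * (A.T @ np.linalg.solve(Aw @ A.T, b))
        eps = max(eps / 10.0, 1e-8)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [2.0, -1.5, 3.0]            # 3-sparse ground truth
b = A @ x_true
x_hat = irls_lp(A, b, p=0.5)
print(np.max(np.abs(x_hat - x_true)))             # tiny if recovery succeeds
```

With 30 Gaussian measurements of a 3-sparse vector in $\mathbb{R}^{100}$, this heuristic typically reconstructs $x$ exactly up to numerical precision, illustrating the "fewer measurements" phenomenon behind the $p$-RIP theory; it carries no global optimality guarantee.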

*Definition 1 ($p$-RIC [10]). *Given a matrix $A\in\mathbb{R}^{m\times n}$, where $m<n$, let $0<p\le 1$ and let $k$ be a positive integer. We say that $\delta_k$ is the restricted $p$-isometry constant ($p$-RIC) of order $k$ of the matrix $A$ if $\delta_k$ is the smallest number such that

$$(1-\delta_k)\,\|x\|_2^p \le \|Ax\|_p^p \le (1+\delta_k)\,\|x\|_2^p \tag{4}$$

for all $k$-sparse vectors $x$.
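Computing $\delta_k$ exactly is intractable, but random sampling of $k$-sparse vectors certifies a lower bound on it. The sketch below is our illustration, assuming the two-sided inequality of Definition 1 in the Chartrand–Staneva form; for the gap to be meaningful, $A$ should be normalized so the ratio $\|Ax\|_p^p/\|x\|_2^p$ is near 1 on average.

```python
import numpy as np

def p_ric_lower_bound(A, k, p, trials=2000, seed=0):
    """Monte Carlo lower bound on the order-k restricted p-isometry constant.

    Assumes the form (1 - d)||x||_2^p <= ||Ax||_p^p <= (1 + d)||x||_2^p
    over k-sparse x; sampling can only certify a lower bound on the true d.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    worst = 0.0
    for _ in range(trials):
        x = np.zeros(n)
        idx = rng.choice(n, size=k, replace=False)  # random k-element support
        x[idx] = rng.standard_normal(k)
        ratio = np.sum(np.abs(A @ x) ** p) / np.linalg.norm(x) ** p
        worst = max(worst, abs(ratio - 1.0))
    return worst

# For the identity matrix and 1-sparse vectors the ratio is exactly 1,
# so the certified lower bound is (numerically) zero.
print(p_ric_lower_bound(np.eye(8), k=1, p=0.5))
```

Note that the ratio is scale invariant (both sides scale like $|c|^p$), so sampling unit-norm vectors would give the same bound.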

In the same paper, Chartrand and Staneva gave the following sufficient condition for exact SSR via -minimization.

Theorem 2 (see [10]). *Let $0<p\le 1$, $b=Ax^*$, and let $k$ be the size of the support of $x^*$. Let $a>1$ be chosen so that $ak$ is an integer. If $A$ satisfies
$$\delta_{ak} + a^{2/p-1}\,\delta_{(a+1)k} < a^{2/p-1} - 1, \tag{5}$$
then $x^*$ is the unique minimizer of problem (3).*

Inspired by the previous analysis, it is natural to pose the partial $\ell_p$-norm minimization problem for PSSR (1) as follows:

$$\min_{y\in\mathbb{R}^{n_1},\,z\in\mathbb{R}^{n_2}} \|z\|_p^p \quad \text{s.t.}\quad By+Cz=b, \qquad 0<p<1. \tag{6}$$

In order to establish the link between PSSR (1) and its partial $\ell_p$-norm minimization problem, we need a partially $p$-RIC definition. Here we borrow the idea from Bandeira et al. [1]. Assume that $B$ is of full column rank. For $B$ as mentioned above, let

$$P := I - B(B^\top B)^{-1}B^\top, \tag{7}$$

which is the matrix of the orthogonal projection from $\mathbb{R}^m$ onto the orthogonal complement of the range of $B$.

*Definition 3 (partially $p$-RIC). * Let $A=[B\ C]\in\mathbb{R}^{m\times n}$, where $m<n$ and $B$ is of full column rank. We say that $\delta_k$ is the partially restricted $p$-isometry constant (partially $p$-RIC) of order $k$ of the matrix $A$ if $\delta_k$ is the $p$-RIC of order $k$ of the matrix $PC$; that is, $\delta_k$ is the smallest number such that

$$(1-\delta_k)\,\|z\|_2^p \le \|PCz\|_p^p \le (1+\delta_k)\,\|z\|_2^p \tag{8}$$

for all $k$-sparse vectors $z$, where $P$ is given by (7).
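As a quick sanity check on Definition 3, one can build $P$ from (7) for a random full-column-rank $B$ and verify the projector identities the definition relies on (the sizes below are our toy choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n1, n2 = 12, 3, 20
B = rng.standard_normal((m, n1))   # dense-part block; full column rank a.s.
C = rng.standard_normal((m, n2))   # sparse-part block

# Projector of equation (7) onto the orthogonal complement of range(B)
P = np.eye(m) - B @ np.linalg.inv(B.T @ B) @ B.T

print(np.allclose(P @ B, 0))   # P annihilates range(B)
print(np.allclose(P @ P, P))   # idempotent
print(np.allclose(P, P.T))     # symmetric, hence an orthogonal projector
# Definition 3 then measures the p-RIC of the reduced matrix P @ C.
```

Because $PB=0$, multiplying the constraint $By+Cz=b$ by $P$ removes the dense part entirely, which is why the constant is attached to $PC$ rather than to $A$ itself.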

#### 3. Main Results

We now give our main results, which state sufficient $p$-RIP recovery conditions for exact PSSR via the nonconvex $\ell_p$-norm minimization. We begin with the following useful lemma.

Lemma 4. *For , let and with and . If , then and .*

*Proof. *To prove the lemma, we consider the following two cases.

*Case 1*. In this case, from the fact , we have
If , from the previous inequality we easily obtain
which is a contradiction. Hence .

*Case 2*. Similarly, in this case, from , we obtain
If , then
Combining the previous inequalities we obtain
which is a contradiction. Hence .

Therefore, taking into account the previous two cases, we complete the proof.

Below we propose a general recovery condition for PSSR via its $\ell_p$-norm minimization.

Theorem 5. *Let with and . Suppose that is full column rank, and let with . For , , and , let , with . If
**
then is the unique minimizer of problem (6).*

*Proof. *Note that is a feasible solution to optimization problem (6). It remains to show that the solution set is the singleton . This proof broadly follows that of [10], but under different assumptions. (Specifically, we arrange the elements of in a different way in what follows.) Let be an arbitrary solution to problem (6). We will show that and . We first prove . Taking , we will show that . Let . For , let denote the matrix that equals in the columns whose indices belong to and is zero elsewhere. Similarly, we define the vector . Let be the support of . Then the supports of and are disjoint since . By direct calculation, we obtain
where the first inequality holds because solves (6), and the last one holds by the triangle inequality for . Then we have

Now we arrange the elements of in order of decreasing magnitude of and partition into , where has elements and each has elements (except possibly ). Set . Note that
Direct calculations yield
Now we discuss the relation between -norm and -norm. For each and , it holds . So, we have for ,
Similarly, we obtain that for ,
Applying Hölder's inequality, we obtain
Similarly, we have
Therefore,
Thus by (18) and (23), we have
Clearly, the assumption ensures that the scalar factor is positive, and hence we obtain . That means . Using , we obtain . Therefore, , which means .

It remains to show that . It is obvious that . Since , we have . Then , because is of full column rank.
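To make the reduction behind Theorem 5 concrete, here is a small end-to-end experiment (ours, not the paper's): eliminate the dense part with the projector from (7), recover the sparse part from the projected system with a heuristic IRLS solver for the $\ell_p$ objective, and back-substitute for the dense part. Sizes, seeds, and the solver are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n1, n2 = 25, 3, 60
B = rng.standard_normal((m, n1))                 # dense-part block of A = [B C]
C = rng.standard_normal((m, n2))                 # sparse-part block
y_true = rng.standard_normal(n1)
z_true = np.zeros(n2)
z_true[[5, 20, 41]] = [1.5, -2.0, 1.0]           # 3-sparse part
b = B @ y_true + C @ z_true

# Eliminate y: with P from (7), the constraint By + Cz = b implies P C z = P b.
P = np.eye(m) - B @ np.linalg.inv(B.T @ B) @ B.T
M, c = P @ C, P @ b

# Heuristic IRLS for min ||z||_p^p s.t. Mz = c, with p = 1/2 (exponent 1 - p/2 = 3/4).
z, eps = np.linalg.lstsq(M, c, rcond=None)[0], 1.0
for _ in range(60):
    w = (z**2 + eps) ** 0.75
    lam = np.linalg.lstsq((M * w) @ M.T, c, rcond=None)[0]  # M is rank deficient: use lstsq
    z = w * (M.T @ lam)
    eps = max(eps / 10.0, 1e-8)

# Back-substitute the dense part: y = (B^T B)^{-1} B^T (b - Cz).
y = np.linalg.solve(B.T @ B, B.T @ (b - C @ z))
```

Note that $M = PC$ has rank at most $m - n_1$, so only the projected measurements count toward recovering $z$; this is exactly why the partially $p$-RIC of Definition 3 is stated for $PC$.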

Theorem 5 states a sufficient condition for exact PSSR via its nonconvex relaxation that differs from the existing conditions for SSR.

Theorem 6. *Let with and . Suppose that is full column rank, and let with . For and , if
**
then is the unique minimizer of problem (6). Specifically, for all , if
**
then is the unique minimizer of problem (6).*

*Proof. *Applying Theorem 5, we only need to show that if (25) holds, we can find and such that (14) holds. We consider the following three cases.

*Case i*. In this case, we easily obtain and . Therefore the following condition guarantees inequality (14):
Simplifying the previous inequality, we obtain
In this case, employing , we easily see that gives the maximum value of the right-hand side of the inequality (the strongest result) satisfying condition (14). That is,
*Case ii*. In this case, we get that and . Similarly, the following condition guarantees inequality (14):
Simplifying the previous inequality, we obtain
In this case, employing , we get that gives the maximum value of the right-hand side of the inequality; that is,
*Case iii*. In this case, it is clear that , , and . So the following condition guarantees inequality (14):
Simplifying the previous inequality, we obtain
In this case, employing , we choose to give the maximum value of the right-hand side of the inequality. That is,

It is easy to see that . In fact, . On the other hand, , which means .

Therefore, combining the previous three cases, we conclude that one can choose to get the weakest sufficient condition. It is easy to see that satisfies the assumptions of Theorem 5.

After the previous discussion, using condition (25) and choosing , we can derive condition (14).

Specifically, we consider the following function:
Clearly,
and hence is a decreasing function of . Thus, for all , condition
can guarantee condition (14).

The proof is completed.

Applying Theorem 6, we can see how the $p$-RIP bounds are related to $p$, as in Figure 1. From Figure 1, it is easy to see that a smaller $p$ gives a stronger result (i.e., a weaker sufficient condition). Moreover, by taking different values of $p$ with $0<p<1$, we obtain some interesting $p$-RIP bounds, as in Table 1.

#### 4. Final Remark

In this paper, we studied the restricted $p$-isometry property for the partially sparse signal recovery problem and proposed a sufficient $p$-RIP condition for exactly recovering a partially sparse signal via the partial $\ell_p$-minimization problem with $0<p<1$. It is worth generalizing the $p$-RIP condition from the vector case to the matrix case. Note that the well-known low-rank matrix recovery (LMR) problem has many applications and appears in the literature of a diverse set of fields, including matrix completion, quantum state tomography, face recognition, magnetic resonance imaging (MRI), computer vision, and system identification and control; see, for example, [21, 22] for more details and the references therein. In particular, Oymak et al. [21] showed that several sufficient RIP recovery conditions for sparse vectors are also sufficient for recovery of low-rank matrices via Schatten $p$-norm minimization. Following our approach of extending the $p$-RIP bound from SSR to partially SSR, one can obtain some different restricted $p$-isometry properties for the LMR problem by using the idea in [21].
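For readers unfamiliar with the matrix-setting objective mentioned above: the Schatten-$p$ quasi-norm simply applies the $\ell_p$ sum to the singular values. A two-line sketch (ours, for illustration only):

```python
import numpy as np

def schatten_p(X, p):
    """Schatten-p quasi-norm raised to the p-th power: sum_i sigma_i(X)^p.

    p = 1 gives the nuclear norm; as p decreases toward 0, sigma^p tends to
    the indicator of a nonzero singular value, so the sum approaches the rank.
    """
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s ** p)

X = np.outer([1.0, 2.0], [3.0, 4.0])       # rank-1: one singular value, 5*sqrt(5)
print(round(schatten_p(X, 1.0), 6))        # → 11.18034 (the nuclear norm)
```

Minimizing `schatten_p` subject to linear measurements is the matrix analogue of problem (6)'s objective, which is the setting Oymak et al. [21] transfer vector RIP conditions to.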

#### Acknowledgments

The work was supported in part by the National Basic Research Program of China (2010CB732501), the National Natural Science Foundation of China (11171018), and the Fundamental Research Funds for the Central Universities (2011JBM128, 2013JBM095).

#### References

1. A. Bandeira, K. Scheinberg, and L. N. Vicente, “On partially sparse recovery,” Tech. Rep., Department of Mathematics, University of Coimbra, 2011.
2. A. S. Bandeira, K. Scheinberg, and L. N. Vicente, “Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization,” *Mathematical Programming*, vol. 134, no. 1, pp. 223–257, 2012.
3. N. Vaswani and W. Lu, “Modified-CS: modifying compressive sensing for problems with partially known support,” *IEEE Transactions on Signal Processing*, vol. 58, no. 9, pp. 4595–4607, 2010.
4. L. Jacques, “A short note on compressed sensing with partially known signal support,” *Signal Processing*, vol. 90, no. 12, pp. 3308–3312, 2010.
5. D. L. Donoho, “Compressed sensing,” *IEEE Transactions on Information Theory*, vol. 52, no. 4, pp. 1289–1306, 2006.
6. A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” *SIAM Review*, vol. 51, no. 1, pp. 34–81, 2009.
7. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” *IEEE Transactions on Information Theory*, vol. 52, no. 2, pp. 489–509, 2006.
8. E. J. Candès and T. Tao, “Decoding by linear programming,” *IEEE Transactions on Information Theory*, vol. 51, no. 12, pp. 4203–4215, 2005.
9. R. Chartrand, “Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data,” in *Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI '09)*, pp. 262–265, 2009.
10. R. Chartrand and V. Staneva, “Restricted isometry properties and nonconvex compressive sensing,” *Inverse Problems*, vol. 24, no. 3, pp. 1–14, 2008.
11. X. Chen, F. Xu, and Y. Ye, “Lower bound theory of nonzero entries in solutions of ${\ell}_{2}$-${\ell}_{p}$ minimization,” *SIAM Journal on Scientific Computing*, vol. 32, no. 5, pp. 2832–2852, 2010.
12. X. Chen and W. Zhou, “Convergence of reweighted ${\ell}_{1}$ minimization algorithms and unique solution of truncated ${\ell}_{p}$ minimization,” Tech. Rep., 2010.
13. M. E. Davies and R. Gribonval, “Restricted isometry constants where ${\ell}_{p}$ sparse recovery can fail for $0<p\le 1$,” *IEEE Transactions on Information Theory*, vol. 55, no. 5, pp. 2203–2214, 2009.
14. S. Foucart and M.-J. Lai, “Sparsest solutions of underdetermined linear systems via ${\ell}_{q}$-minimization for $0<q\le 1$,” *Applied and Computational Harmonic Analysis*, vol. 26, no. 3, pp. 395–407, 2009.
15. D. Ge, X. Jiang, and Y. Ye, “A note on the complexity of ${\ell}_{p}$ minimization,” *Mathematical Programming*, vol. 129, no. 2, pp. 285–299, 2011.
16. R. Saab, R. Chartrand, and O. Yılmaz, “Stable sparse approximations via nonconvex optimization,” in *Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing*, pp. 3885–3888, 2008.
17. Y. Shen and S. Li, “Restricted $p$-isometry property and its application for nonconvex compressive sensing,” *Advances in Computational Mathematics*, vol. 37, no. 3, pp. 441–452, 2012.
18. R. Saab and O. Yılmaz, “Sparse recovery by non-convex optimization – instance optimality,” *Applied and Computational Harmonic Analysis*, vol. 29, no. 1, pp. 30–48, 2010.
19. M. Wang, W. Xu, and A. Tang, “On the performance of sparse recovery via ${\ell}_{p}$-minimization $(0\le p\le 1)$,” *IEEE Transactions on Information Theory*, vol. 57, no. 11, pp. 7255–7278, 2011.
20. R. Chartrand, “Exact reconstructions of sparse signals via nonconvex minimization,” *IEEE Signal Processing Letters*, vol. 14, no. 10, pp. 707–710, 2007.
21. S. Oymak, K. Mohan, M. Fazel, and B. Hassibi, “A simplified approach to recovery conditions for low rank matrices,” in *Proceedings of the IEEE International Symposium on Information Theory (ISIT '11)*, pp. 2318–2322, 2011.
22. B. Recht, M. Fazel, and P. A. Parrilo, “Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization,” *SIAM Review*, vol. 52, no. 3, pp. 471–501, 2010.