Abstract

We consider the recovery of high-dimensional sparse signals via $\ell_1$-minimization under the mutual incoherence condition $\mu < \frac{1}{2k-1}$, which is shown to be sufficient for sparse signal recovery in both the noiseless and noisy cases. We study both $\ell_1$-minimization under an $\ell_2$-constraint on the residuals and the Dantzig selector. Using these two $\ell_1$-minimization methods and a technical inequality, we obtain error bounds that improve upon those in the literature and extend to the general setting of reconstructing an arbitrary signal.

1. Introduction

The problem of recovering a high-dimensional sparse signal based on a small number of measurements, possibly corrupted by noise, has attracted much recent attention. In the existing literature on sparse signal recovery and compressed sensing (see [1–13] and references therein), the emphasis is on recovering a sparse signal $\beta \in \mathbb{R}^p$ from an observation

$$y = X\beta + z, \qquad (1)$$

where the matrix $X \in \mathbb{R}^{n \times p}$ with $n < p$ is given and $z \in \mathbb{R}^n$ is a vector of measurement errors. The goal is to reconstruct the unknown vector $\beta$ based on $y$ and $X$. Throughout the paper, we assume that the columns of $X$ are standardized to have unit $\ell_2$-norm.
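As a concrete illustration of the measurement model, the following sketch builds a synthetic instance of (1); the dimensions, sparsity level, and noise scale are illustrative assumptions, not values from the paper.

```python
# Synthetic instance of model (1): y = X*beta + z with column-normalized X.
# All dimensions and the noise level below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 64, 256, 5                          # measurements, ambient dimension, sparsity
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)                # standardize columns to unit l2-norm
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta[support] = rng.standard_normal(k)        # a k-sparse signal
z = 0.01 * rng.standard_normal(n)             # measurement error
y = X @ beta + z                              # observed data
```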

When noise is present, there are two well-known $\ell_1$-minimization methods that are well suited for recovering sparse signals. One is $\ell_1$-minimization under an $\ell_2$-constraint on the residuals:

$$\hat\beta = \arg\min_{\gamma \in \mathbb{R}^p} \|\gamma\|_1 \quad \text{subject to} \quad \|y - X\gamma\|_2 \le \epsilon. \qquad (2)$$
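Problem (2) is a convex (second-order cone) program and can be solved with a generic convex solver. The sketch below uses the third-party cvxpy package, which is a tooling assumption rather than part of the paper; the function name is illustrative.

```python
# l1-minimization under an l2 constraint on the residuals:
#     minimize ||b||_1  subject to  ||y - X b||_2 <= eps.
# Minimal sketch assuming the cvxpy package is available.
import cvxpy as cp

def l1_recover_l2ball(X, y, eps):
    p = X.shape[1]
    b = cp.Variable(p)
    constraints = [cp.norm(y - X @ b, 2) <= eps]
    problem = cp.Problem(cp.Minimize(cp.norm(b, 1)), constraints)
    problem.solve()
    return b.value
```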

Another method, called the Dantzig selector, was proposed by Candes and Tao [4]. The Dantzig selector solves the sparse recovery problem through $\ell_1$-minimization with a constraint on the correlation between the residuals and the column vectors of $X$:

$$\hat\beta = \arg\min_{\gamma \in \mathbb{R}^p} \|\gamma\|_1 \quad \text{subject to} \quad \|X^T (y - X\gamma)\|_\infty \le \eta. \qquad (3)$$
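The Dantzig selector (3) is likewise convex (and can be rewritten as a linear program). The following sketch again relies on cvxpy as an assumed tool and only illustrates the definition above.

```python
# Dantzig selector:
#     minimize ||b||_1  subject to  ||X^T (y - X b)||_inf <= eta.
# Sketch assuming cvxpy; the function name is illustrative.
import cvxpy as cp

def dantzig_selector(X, y, eta):
    p = X.shape[1]
    b = cp.Variable(p)
    constraints = [cp.norm(X.T @ (y - X @ b), 'inf') <= eta]
    problem = cp.Problem(cp.Minimize(cp.norm(b, 1)), constraints)
    problem.solve()
    return b.value
```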

It is clear that regularity conditions are needed in order for these problems to be well behaved. Over the last few years, many interesting results for recovering sparse signals have been obtained in the framework of the mutual incoherence property (MIP) introduced by Donoho and Huo [14]. The MIP requires the pairwise correlations among the column vectors of $X$ to be small. Let

$$\mu = \max_{1 \le i \ne j \le p} |\langle X_i, X_j \rangle|, \qquad (4)$$

where $X_i$ denotes the $i$th column of $X$. See, for example, [8, 10, 14, 15].
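The quantity $\mu$ in (4) is straightforward to compute for a given column-normalized matrix; the helper below is a direct transcription of the definition (the function name is illustrative). For a given sparsity level $k$, one can then check numerically whether $\mu$ is small enough for the MIP-based conditions discussed below.

```python
# Mutual coherence of a matrix with unit-norm columns, as in (4):
# the largest absolute inner product between two distinct columns.
import numpy as np

def mutual_coherence(X):
    G = X.T @ X                  # Gram matrix of the (unit-norm) columns
    np.fill_diagonal(G, 0.0)     # discard the diagonal of ones
    return float(np.max(np.abs(G)))
```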

It was first shown by Donoho and Huo [14], in the noiseless case for the setting where $X$ is a concatenation of two square orthogonal matrices, that the condition

$$\mu < \frac{1}{2k-1} \qquad (5)$$

ensures the exact recovery of $\beta$ when $\beta$ has at most $k$ nonzero entries (such a signal is called $k$-sparse). This result was then extended in the noiseless case in [11, 16] to a general dictionary $X$.

Stronger MIP conditions have been used in the literature to guarantee stable recovery of sparse signals in the noisy case. When the noise is assumed to be bounded in $\ell_2$-norm, Donoho et al. [15] showed that sparse signals can be recovered approximately through $\ell_1$-minimization, with the error at worst proportional to the input noise level, under an MIP condition stronger than (5).

The results in [17] imply that a weaker condition suffices for stable recovery, and Tseng [18] used yet another condition of this type.

Cai et al. [19] showed that the condition $\mu < \frac{1}{2k-1}$ is not only sufficient but in fact sharp for stable recovery with bounded noise as well as Gaussian noise.

In this paper, we consider the problem of recovering a high-dimensional sparse signal via the two well-known $\ell_1$-minimization methods under the condition $\mu < \frac{1}{2k-1}$. We study both $\ell_1$-minimization under the $\ell_2$-constraint (2) and the Dantzig selector (3). Using the two methods and a technical inequality, we give results that slightly improve those in [19]. Moreover, we obtain results for the case in which the unknown vector $\beta$ is not $k$-sparse, in both the noisy and noiseless cases.

The rest of the paper is organized as follows. In Section 2, some basic notation and definitions are reviewed, and an elementary inequality, which allows us to carry out a finer analysis of the sparse recovery problem, is introduced. We begin the analysis of $\ell_1$-minimization methods for sparse signal recovery by considering recovery in the noisy case in Section 3; our results are similar to those in [19] and, to some extent, provide tighter error bounds than the existing results in the literature. In Section 4, we consider the case in which the unknown vector $\beta$ is not $k$-sparse, under the condition $\mu < \frac{1}{2k-1}$. We give some facts and the proofs of the theorems in Section 5.

2. Preliminaries

We begin by introducing basic notation and definitions and then develop an important inequality which will be used in proving our main results.

For a vector $\beta \in \mathbb{R}^p$, we denote by $\operatorname{supp}(\beta) = \{i : \beta_i \ne 0\}$ the support of $\beta$. We use the standard notation $\|\beta\|_q = (\sum_{i=1}^p |\beta_i|^q)^{1/q}$ to denote the $\ell_q$-norm of the vector $\beta$. Moreover, a vector $\beta$ is said to be $k$-sparse if $|\operatorname{supp}(\beta)| \le k$. We also treat a vector $\beta$ as a function $\beta : \{1, 2, \ldots, p\} \to \mathbb{R}$ by assigning $\beta(i) = \beta_i$.

We now introduce a useful elementary inequality, which is used in the proofs of the theorems.

Proposition 1. Let be positive integers. For any descending chain of real numbers One has

Proof. Since for , we have

3. Recovery of k-Sparse Signals

As noted above, the condition (5) has been proved to guarantee the recovery of $k$-sparse signals in the noiseless case. Cai et al. [19] have shown that this condition is also sufficient for stable reconstruction of $k$-sparse signals in the noisy case when the error lies in a bounded set. We also give results for the reconstruction of $k$-sparse signals in both the noiseless and noisy bounded-error cases, proved using methods different from those of [19].

Theorem 2. Consider the model (1) with $z \in \mathcal{B}^{\ell_2}(\epsilon) = \{z : \|z\|_2 \le \epsilon\}$. Suppose that $\beta$ is $k$-sparse and $\hat\beta$ is the solution of the $\ell_1$-minimization problem (2). Then, under the condition $\mu < \frac{1}{2k-1}$,

We now consider recovery of $k$-sparse signals with error in a different bounded set. Candes and Tao [4] treated sparse signal recovery in the Gaussian noise case by solving the $\ell_1$-minimization problem with the bounded set $\mathcal{B}^{DS}(\eta) = \{z : \|X^T z\|_\infty \le \eta\}$ and referred to the solution as the Dantzig selector. The following result shows that the condition $\mu < \frac{1}{2k-1}$ is also sufficient when the error lies in the bounded set $\mathcal{B}^{DS}(\eta)$.

Theorem 3. Consider the model (1) with $z \in \mathcal{B}^{DS}(\eta)$. Suppose that $\beta$ is $k$-sparse and $\hat\beta$ is the solution of the $\ell_1$-minimization problem (3). Then, under the condition $\mu < \frac{1}{2k-1}$,

Remark 4. We consider the stable recovery of sparse signals with the error confined to a bounded set $\mathcal{B}$; $\mathcal{B}$ is taken to be $\{0\}$ in the noiseless case and can be $\mathcal{B}^{\ell_2}(\epsilon)$ or $\mathcal{B}^{DS}(\eta)$ in the noisy case.
Note that, under the condition $\mu < \frac{1}{2k-1}$, the bounds of Theorems 2 and 3 can each be written in an equivalent form.

To some extent, Theorem 2 improves Theorem 2.1 in [19], while Theorem 3 is proved using a different method from that of [19] and yields the same result as Theorem 2.2 in [19].

4. Recovery of Approximately k-Sparse Signals

In the previous section, the focus was on recovering $k$-sparse signals. As discussed in [17, 19, 20], our results can also be stated in the general setting of reconstructing an arbitrary signal $\beta$ under the condition $\mu < \frac{1}{2k-1}$.

We begin in this section by considering the problem of exact recovery of sparse signals when no noise is present. This is an interesting problem in itself and has been considered in a number of papers; see, for example, [9, 11, 17, 21]. More importantly, the solutions to this “clean” problem shed light on the noisy case.

When $\beta$ is not $k$-sparse, $\ell_1$-minimization can still recover $\beta$ with accuracy depending on how well $\beta$ is approximated by $k$-sparse vectors. For a general vector $\beta$, denote by $\beta_{\max(k)}$ the vector $\beta$ with all but the $k$ largest entries (in absolute value) set to zero and by $\beta_{-\max(k)}$ the vector $\beta$ with the $k$ largest entries (in absolute value) set to zero.
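These two vectors are easy to form explicitly; the sketch below is a direct transcription of the definition (the function name is illustrative).

```python
# Best k-term approximation: beta_max(k) keeps the k largest-magnitude entries
# of beta, and beta_-max(k) = beta - beta_max(k) holds the remaining tail.
import numpy as np

def k_term_split(beta, k):
    idx = np.argsort(np.abs(beta))[::-1][:k]   # positions of the k largest entries
    beta_max = np.zeros_like(beta)
    beta_max[idx] = beta[idx]
    return beta_max, beta - beta_max
```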

Theorem 5. Let $y = X\beta$ with no noise. Suppose that $\mu$ satisfies $\mu < \frac{1}{2k-1}$ and $\hat\beta$ is the solution of the following $\ell_1$-minimization problem:

$$\hat\beta = \arg\min_{\gamma \in \mathbb{R}^p} \|\gamma\|_1 \quad \text{subject to} \quad X\gamma = y.$$

Then

We now turn to the noisy case. Suppose that $y = X\beta + z$, where the noise $z$ is bounded. We will specifically consider two cases: $z \in \mathcal{B}^{\ell_2}(\epsilon)$ and $z \in \mathcal{B}^{DS}(\eta)$. We first consider the case $z \in \mathcal{B}^{\ell_2}(\epsilon)$.

Theorem 6. Consider the model (1) with $z$ satisfying $\|z\|_2 \le \epsilon$. Suppose that $\mu$ satisfies $\mu < \frac{1}{2k-1}$ and $\hat\beta$ is the solution of the $\ell_1$-minimization problem (2). Then

We next turn to the case $z \in \mathcal{B}^{DS}(\eta)$, which corresponds to the Dantzig selector.

Theorem 7. Consider the model (1) with $z \in \mathcal{B}^{DS}(\eta)$. Suppose that $\mu$ satisfies $\mu < \frac{1}{2k-1}$ and $\hat\beta$ is the solution of the $\ell_1$-minimization problem (3). Then

We have so far focused on stable recovery with bounded error. The results can be extended directly to the Gaussian noise case. This is due to the fact that Gaussian noise is “essentially bounded.” See, for example, [17, 19, 20].
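The phrase “essentially bounded” refers to standard high-probability bounds on Gaussian noise, such as $\|z\|_2 \le \sigma\sqrt{n + 2\sqrt{n\log n}}$ and $\|X^T z\|_\infty \le \sigma\sqrt{2\log p}$ for $z \sim N(0, \sigma^2 I_n)$; these particular thresholds are the common choices in the cited literature and are quoted here as an assumption. A quick Monte Carlo check:

```python
# Empirical check that Gaussian noise is "essentially bounded".
# The l2 threshold is rarely exceeded; the l_inf threshold fails only with
# probability at most of order 1/sqrt(log p).  Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma, trials = 64, 256, 1.0, 2000
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)
t_l2 = sigma * np.sqrt(n + 2 * np.sqrt(n * np.log(n)))
t_inf = sigma * np.sqrt(2 * np.log(p))
exceed_l2 = exceed_inf = 0
for _ in range(trials):
    z = sigma * rng.standard_normal(n)
    exceed_l2 += np.linalg.norm(z) > t_l2
    exceed_inf += np.max(np.abs(X.T @ z)) > t_inf
print(exceed_l2 / trials, exceed_inf / trials)   # empirical exceedance frequencies
```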

5. The Proofs of the Theorems

Before giving the proofs of the theorems, we introduce three widely used facts, which are useful for the proofs.

(A) The following fact is well known in the recovery of sparse signals. Let $c$ be any $k$-sparse signal; then $\|Xc\|_2^2$ is bounded above and below in terms of $\|c\|_2^2$ and the coherence $\mu$ defined by (4); see, for example, [17, 18, 22, 23].
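In the standard MIP literature, the inequality behind fact (A) is the Gershgorin-type bound $(1-(k-1)\mu)\|c\|_2^2 \le \|Xc\|_2^2 \le (1+(k-1)\mu)\|c\|_2^2$ for every $k$-sparse $c$; that this is the precise form meant here is an inference from the cited references. The sketch below checks the lower bound numerically (it is informative only when $(k-1)\mu < 1$).

```python
# Numerical check of the Gershgorin-type MIP bound for k-sparse vectors:
#     ||X c||_2^2 >= (1 - (k-1)*mu) * ||c||_2^2.
# Dimensions are chosen (illustratively) so that (k-1)*mu < 1.
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 128, 256, 3
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)
G = X.T @ X
mu = np.max(np.abs(G - np.eye(p)))
lower = 1.0 - (k - 1) * mu
for _ in range(1000):
    c = np.zeros(p)
    c[rng.choice(p, size=k, replace=False)] = rng.standard_normal(k)
    assert np.linalg.norm(X @ c) ** 2 >= lower * np.linalg.norm(c) ** 2 - 1e-12
```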

Let $\hat\beta$ be a solution to the $\ell_1$-minimization problem; then, by definition, $\|\hat\beta\|_1 \le \|\beta\|_1$. Let $h = \hat\beta - \beta$, and for an index set $T \subset \{1, \ldots, p\}$ write $h_T = h \cdot 1_T$. Here, $1_T$ denotes the indicator function of a set $T$; that is, $1_T(i) = 1$ if $i \in T$ and $0$ if $i \notin T$.

(B) The following is a widely used fact (see, e.g., [4, 7, 14, 17]):

This follows directly from the triangle inequality.

(C) The following fact, which is based on the minimality of $\|\hat\beta\|_1$, has been widely used; see, for example, [14, 19, 20]:

$$\|h_{T^c}\|_1 \le \|h_T\|_1 + 2\|\beta_{-\max(k)}\|_1,$$

where $T$ is the support of $\beta_{\max(k)}$. This follows directly from the fact that

$$\|\beta\|_1 \ge \|\hat\beta\|_1 = \|\beta + h\|_1 \ge \|\beta_{\max(k)}\|_1 - \|h_T\|_1 + \|h_{T^c}\|_1 - \|\beta_{-\max(k)}\|_1.$$
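As a sanity check, fact (C) can be observed numerically on a solved instance. The sketch below uses the $\ell_2$-constrained program (2) via cvxpy (a tooling assumption) and a $k$-sparse signal, for which the inequality of fact (C) reduces to $\|h_{T^c}\|_1 \le \|h_T\|_1$.

```python
# Numerical illustration of fact (C): for a k-sparse beta with support T and
# the l1 solution hatbeta of (2), the error h = hatbeta - beta satisfies
#     ||h_{T^c}||_1 <= ||h_T||_1.
# Sketch assuming cvxpy; all dimensions and the noise level are illustrative.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n, p, k, eps = 64, 256, 5, 0.5
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)
T = rng.choice(p, size=k, replace=False)
beta = np.zeros(p)
beta[T] = rng.standard_normal(k)
z = rng.standard_normal(n)
z *= eps / np.linalg.norm(z)                 # noise exactly on the l2 ball of radius eps
y = X @ beta + z
b = cp.Variable(p)
cp.Problem(cp.Minimize(cp.norm(b, 1)), [cp.norm(y - X @ b, 2) <= eps]).solve()
h = b.value - beta
on_T = np.zeros(p, dtype=bool)
on_T[T] = True
print(np.sum(np.abs(h[~on_T])) <= np.sum(np.abs(h[on_T])) + 1e-6)   # cone condition
```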

Proof of Theorem 2. The proof makes use of the ideas from [17, 19, 22].
Let . Rearranging the indices if necessary, we assume that
Let and be the support of ; then, from the fact (C),
Note that and both have elements, so we have
We will show that this implies that
In fact
For simplicity, partition into the following sets: where is a positive integer.
Note that is equivalent to or . Now where the second inequality applies the facts (A) and (B).
On the other hand, it follows from the fact (A) that
Note that
From Proposition 1 and the fact that , we get which implies
Putting these together, we get

Proof of Theorem 3. Note that from the fact (A) and the first part of the proof of Theorem 2, we have On the other hand, we also obtain the following relation:
Combining these, we get that Then where the second-to-last inequality uses (37).

Proof of Theorem 5. Let and be the support of . Following the notation and the first part of the proof of Theorem 2, we first establish the following relation:
In fact, since , we have
Since , this yields
Note that
So where the second-to-last inequality follows from (43).
Then
From (37), we have that
The proof is completed.

Proof of Theorem 6. From the proof of Theorem 5, we have
It follows from the fact (A) that
Note that
Combining these, we get
Then, from (37),

Proof of Theorem 7. From the proofs of Theorems 3 and 5, we get Then