Abstract

The paper discusses the relationship between the null space property (NSP) and the $\ell_q$-minimization in compressed sensing. Several versions of the null space property, that is, the stable NSP, the robust NSP, and the robust $\ell_p$ NSP for $\ell_q$ based on the standard NSP, are proposed, and their equivalent forms are derived. Consequently, reconstruction results for the $\ell_q$-minimization can be derived easily under the NSP condition and its equivalent form. Finally, the NSP is extended to the $\ell_q$-synthesis modeling and the mixed $\ell_2/\ell_q$-minimization, which deal with dictionary-based sparse signals and block sparse signals, respectively.

1. Introduction

Compressed sensing has drawn extensive attention ever since it was proposed in 2006 [1–4]. It is known as a new revolution in signal processing because it realizes sampling and compression of a signal at the same time. Its fundamental idea is to recover a high-dimensional signal from a remarkably small number of measurements by exploiting the sparsity of signals. An $N$-dimensional real-world signal $x \in \mathbb{R}^N$ is called sparse if the number of its nonzero coefficients under some representation is much smaller than the signal length $N$. Suppose that $\|x\|_0$ denotes the number of nonzero representation coefficients of the signal $x$; if $\|x\|_0 \le s$, then we say the signal $x$ is $s$-sparse. Here, we tentatively call $\|\cdot\|_0$ the $\ell_0$ norm (in fact, it is not a norm, because it obviously does not satisfy the positive homogeneity of a norm). We use $\Sigma_s$ to denote the set of all $s$-sparse signals: $\Sigma_s = \{x \in \mathbb{R}^N : \|x\|_0 \le s\}$. Suppose the observed data are $y \in \mathbb{R}^m$; we wish to recover the signal $x$ via the linear system
$$y = Ax, \qquad (1)$$
where $A$ is an $m \times N$ real matrix, known as the measurement matrix. Generally speaking, when $m \ll N$, the underdetermined system (1) has infinitely many solutions. In other words, without additional information, it is impossible to recover the signal under this condition. However, compressed sensing focuses on the sparsest solution, and its mathematical modeling is as follows:
$$\min_{x \in \mathbb{R}^N} \|x\|_0 \quad \text{subject to} \quad Ax = y. \qquad (2)$$
We call the minimization (2) the $\ell_0$ modeling. Unfortunately, the $\ell_0$-minimization (2) is a nonconvex and NP-hard optimization problem [5]. To conquer the hardness, one seeks another tractable mathematical modeling to replace the $\ell_0$-minimization. This is called the $\ell_p$ modeling. Consider
$$\min_{x \in \mathbb{R}^N} \|x\|_p \quad \text{subject to} \quad Ax = y, \qquad (3)$$
where
$$\|x\|_p = \left(\sum_{i=1}^{N} |x_i|^p\right)^{1/p}. \qquad (4)$$
For $1 \le p < \infty$, $\|\cdot\|_p$ is a norm, while for $0 < p < 1$ it is a quasinorm, because it does not satisfy the triangle inequality; it only satisfies the $p$-triangle inequality:
$$\|x + z\|_p^p \le \|x\|_p^p + \|z\|_p^p. \qquad (5)$$
It is natural that the $\ell_2$ modeling is to be considered:
$$\min_{x \in \mathbb{R}^N} \|x\|_2 \quad \text{subject to} \quad Ax = y. \qquad (6)$$
To one's disappointment, the $\ell_2$ modeling is unable to exactly recover sparse signals, even $1$-sparse vectors. For example, suppose the measurement matrix is
$$A = \begin{pmatrix} 1 & 2 \end{pmatrix}, \qquad (7)$$
and we wish to recover the signal from the measurement vector $y = 2$. Obviously, the solutions of this equation are $x = (t, (2-t)/2)^T$, $t \in \mathbb{R}$. Let $\hat{x} = A^T(AA^T)^{-1}y$; then $\hat{x} = (2/5, 4/5)^T$ is the solution of the $\ell_2$ modeling (6). But $\hat{x}$ is not sparse. In fact, $x^* = (0, 1)^T$ is the sparsest solution of the above equation. Moreover, for any $p > 1$, although the $\ell_p$ modeling is also a convex optimization problem, it cannot guarantee the sparsest solution; that is, the $\ell_p$ modeling for $p > 1$ is unable to exactly recover the sparse signals. Here, sparsity plays a key role in such recovery of signals. The problem is which $p$ we can select to get the sparsest solution exactly.
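As a quick numerical illustration of the failure of the $\ell_2$ modeling (our sketch, assuming only NumPy; the matrix is the toy example above), the minimum $\ell_2$-norm solution can be computed by the pseudoinverse and compared with a sparse solution:

```python
import numpy as np

# The 1 x 2 measurement matrix and the measurement from the example above.
A = np.array([[1.0, 2.0]])
y = np.array([2.0])

# The minimum l2-norm solution of Ax = y is the pseudoinverse solution.
x_l2 = np.linalg.pinv(A) @ y        # [0.4, 0.8]: no zero entry
x_sparse = np.array([0.0, 1.0])     # a 1-sparse solution of Ax = y

print(x_l2, np.linalg.norm(x_l2))          # smaller l2 norm, not sparse
print(x_sparse, np.linalg.norm(x_sparse))  # sparse, larger l2 norm
```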

Candès et al. gave the following $\ell_1$ modeling [2, 3, 6–8]:
$$\min_{x \in \mathbb{R}^N} \|x\|_1 \quad \text{subject to} \quad Ax = y. \qquad (8)$$
The $\ell_1$-minimization (8) is a convex optimization problem and can be transformed into a linear programming problem to be solved tractably. A natural question is whether the solutions of (8) are equivalent to those of (2), or what kind of measurement matrices can be constructed to guarantee the equivalence of the solutions of the two problems. Candès and Tao [2] proposed the Restricted Isometry Property (RIP), which is a milestone work in sparse information processing. We say that a matrix $A$ satisfies the Restricted Isometry Property of order $s$ if there exists a constant $\delta \in (0, 1)$ such that, for any $x \in \Sigma_s$,
$$(1 - \delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta)\|x\|_2^2. \qquad (9)$$
We call the smallest constant $\delta_s$ satisfying the above inequality the Restricted Isometry Constant (RIC). Candès and Tao showed, for instance, that any $s$-sparse vector is exactly recovered as soon as the measurement matrix satisfies the RIP with $\delta_{2s} < \sqrt{2} - 1$ [9], and the solution of (8) is equivalent to that of (2). Later, this RIC bound has been improved; for example, to $\delta_{2s} < 0.4531$ by Foucart and Lai [10], to $\delta_{2s} < 0.472$ by Cai et al. [11], to $\delta_{2s} < 0.4931$ by Mo and Li [12], and especially by Zhou et al. [13].
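For concreteness, the linear-programming reformulation of (8) can be sketched as follows; this is a minimal illustration assuming SciPy's `linprog`, not code from the cited works. The LP variables are the pair $(x, t)$ with $-t \le x \le t$, so minimizing $\sum_i t_i$ minimizes $\|x\|_1$:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 s.t. Ax = y as an LP over variables (x, t):
    minimize sum(t) subject to -t <= x <= t and Ax = y."""
    m, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    A_ub = np.block([[I, -I],      #  x - t <= 0
                     [-I, -I]])    # -x - t <= 0
    A_eq = np.hstack([A, np.zeros((m, N))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * (2 * N))
    return res.x[:N]

rng = np.random.default_rng(0)
m, N, s = 40, 100, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian matrix
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
print(np.max(np.abs(l1_min(A, A @ x) - x)))    # ~ solver tolerance
```

With a Gaussian matrix of this size, which satisfies the recovery conditions below with high probability, the planted $s$-sparse vector is typically recovered up to solver precision.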

We see that the $\ell_1$ modeling is a seemingly perfect selection because it not only is convex but also can exactly recover sparse signals. Then, what about the $\ell_q$-minimization for $0 < q < 1$? It is amazing that the $\ell_q$ quasinorm can also obtain the sparse solutions on the hyperplane $\{x : Ax = y\}$, and the sparsity requirement weakens as $q$ decreases below $1$ [4]. This naturally arouses strong interest, and some important results have been obtained. Compared with the $\ell_1$ modeling, a sufficiently sparse signal can be recovered perfectly with the $\ell_q$ modeling under less restrictive RIP requirements than those needed to guarantee perfect recovery with $\ell_1$ [14]. Compared with the $\ell_0$ modeling, empirical evidence strongly indicates that solving the $\ell_q$ modeling takes much less time than solving the $\ell_0$ modeling [15]. These surprising phenomena undoubtedly inspire more people to research the $\ell_q$ modeling, although the $\ell_q$-minimization problem for $0 < q < 1$ is also NP-hard in general [16]. Reconstruction of sparse signals via the $\ell_q$ modeling has been considered in a series of papers [10, 17–20]. However, it is not clear if there exists any hidden relationship between the $\ell_q$ modeling and the $\ell_0$ modeling. Recently, [21] showed that the solution of the problem (3) is also equivalent to that of the problem (2) when $q$ is smaller than a definite constant $q^*(A, y) > 0$, where the definite constant $q^*(A, y)$ depends only on $A$ and $y$; this revealed the relationship behind these interesting phenomena.
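The $\ell_q$-minimization itself is nonconvex; a common practical heuristic (sketched below under our own assumptions; it is not the method analyzed in [21]) is iteratively reweighted least squares, in which each iterate solves a weighted minimum-norm problem in closed form and the smoothing parameter `eps` is gradually decreased:

```python
import numpy as np

def irls_lq(A, y, q=0.5, iters=50, eps=1.0):
    """Heuristic IRLS for min ||x||_q^q s.t. Ax = y, 0 < q <= 1.
    Each step solves argmin sum_i w_i x_i^2 s.t. Ax = y, whose closed
    form is x = W^{-1} A^T (A W^{-1} A^T)^{-1} y."""
    x = np.linalg.pinv(A) @ y                  # start from the l2 solution
    for _ in range(iters):
        w_inv = (x**2 + eps) ** (1 - q / 2)    # 1/w_i with smoothing eps
        M = A @ (w_inv[:, None] * A.T)         # A W^{-1} A^T
        x = w_inv * (A.T @ np.linalg.solve(M, y))
        eps = max(eps / 10.0, 1e-12)           # sharpen the smoothing
    return x
```

There is no global-optimality guarantee for such a scheme; it only illustrates how the nonconvex modeling is attacked in practice.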

The null space property (NSP) was introduced when one checked the equivalence of the solutions between the $\ell_0$ modeling and the $\ell_1$ modeling. When we inspect the existence of the solutions of the equation $Ax = y$, $x \in \mathbb{R}^N$, we often make the judgement via the null space $\ker A = \{v \in \mathbb{R}^N : Av = 0\}$ of the matrix $A$ with linear algebra knowledge. Therefore, it is quite natural to propose the null space property. Now, we introduce the definition of the null space property [6, 8, 22]. Given a subset $S$ of $\{1, 2, \ldots, N\}$ and a vector $x \in \mathbb{R}^N$, we denote by $x_S$ the vector that coincides with $x$ on $S$ and that vanishes on the complementary set $S^c = \{1, 2, \ldots, N\} \setminus S$.

Definition 1. For any set $S$ with $|S| \le s$, a matrix $A$ is said to satisfy the null space property of order $s$ if
$$\|v_S\|_1 < \|v_{S^c}\|_1 \qquad (10)$$
for all $v \in \ker A \setminus \{0\}$.

The idea of the null space appeared as early as the study of best $\ell_1$ approximation [23], but the name was first used by Cohen et al. in [22]; Donoho and Huo [6] and Gribonval and Nielsen [8] developed the idea and gave the following theorem.

Theorem 2. Given a matrix $A \in \mathbb{R}^{m \times N}$, every $s$-sparse vector $x \in \mathbb{R}^N$ is the unique solution of the $\ell_1$-minimization (8) with $y = Ax$ if and only if $A$ satisfies the null space property of order $s$.

This theorem not only gives an existence condition for the solution of the $\ell_1$ modeling but also demonstrates that this solution is exactly that of the $\ell_0$ modeling. That is, it implies that for every $y = Ax$ with $s$-sparse $x$, the solution of the $\ell_1$-minimization problem (8) actually solves the $\ell_0$-minimization problem (2) when the measurement matrix $A$ satisfies the null space property of order $s$. Indeed, assume that every $s$-sparse vector is recovered via $\ell_1$-minimization from $y = Ax$. Let $\hat{x}$ be a minimizer of the $\ell_0$-minimization with $A\hat{x} = Ax$; then $\|\hat{x}\|_0 \le \|x\|_0 \le s$, so that $\hat{x}$ also is $s$-sparse. But since every $s$-sparse vector is the unique $\ell_1$-minimizer, it follows that $\hat{x} = x$ [16].

Compared with the RIP, the null space property is a necessary and sufficient condition to exactly reconstruct the signal using the $\ell_1$-minimization, which is very important in theory. We see that many works [7, 8, 16, 24–26], especially [16], focused on the null space property of the $\ell_1$-minimization and gave the stable null space property, the robust null space property, and characterizations of the solutions of the $\ell_1$-minimization problem (8) under these properties. But, as for the null space property of the $\ell_q$-minimization for $0 < q < 1$, only a few papers [18, 27] have touched upon it.

This paper focuses on the different types of the null space property for the $\ell_q$-minimization with $0 < q \le 1$. The remainder of the paper is organized as follows. In Section 2, based on the standard null space property, we give the definition of the stable null space property and derive its equivalent form. Then we discuss the approximation of the solutions of the $\ell_q$-minimization with $y = Ax$. In Section 3, we further consider the robust null space property and the robust $\ell_p$ null space property, respectively. Using these properties, we effectively characterize reconstruction results for the $\ell_q$-minimization when the observations are corrupted by noise. In Section 4, motivated by many practical scenarios, we extend the null space property to the $\ell_q$-synthesis modeling and the mixed $\ell_2/\ell_q$-minimization, which deal with the dictionary-based sparse signals and the block sparse signals, respectively. Finally, we relegate the proofs of the main results, that is, Theorems 7, 13, 18, and 26, to the Appendix.

2. Stable Null Space Property

In this section, we will discuss the stable null space property; for this purpose, we first introduce the $\ell_q$ null space property [8, 16].

Definition 3. For any set $S$ with $|S| \le s$, a matrix $A$ is said to satisfy the $\ell_q$ null space property of order $s$ if
$$\|v_S\|_q^q < \|v_{S^c}\|_q^q \qquad (11)$$
for all $v \in \ker A \setminus \{0\}$; here $0 < q \le 1$. If one does not make a special statement, then one always supposes that $0 < q \le 1$ in what follows.

Definition 3 generalizes Definition 1. This definition is so important that it can characterize the existence and uniqueness of the solution of the $\ell_q$-minimization (3) for $0 < q \le 1$.
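Verifying the $\ell_q$ null space property is itself computationally hard, since it quantifies over the whole null space. The following randomized sketch (our illustration; it can only falsify the property, never certify it) samples vectors from $\ker A$ and, for each sample $v$, checks the worst index set, which consists of the $s$ largest entries of $|v_i|^q$:

```python
import numpy as np
from scipy.linalg import null_space

def lq_nsp_sampled(A, s, q=1.0, trials=10000, seed=0):
    """Randomized falsification test of the lq null space property of
    order s: the NSP holds iff for every nonzero v in ker A the s
    largest values of |v_i|^q sum to strictly less than the rest.
    False refutes the NSP; True only means no violation was found."""
    V = null_space(A)                       # orthonormal basis of ker A
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        v = V @ rng.standard_normal(V.shape[1])
        p = np.sort(np.abs(v) ** q)[::-1]   # |v_i|^q, decreasing
        if p[:s].sum() >= p[s:].sum():      # worst set S: s largest entries
            return False
    return True
```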

Theorem 4. Given a matrix $A$, every $s$-sparse vector $x$ is the unique solution of the $\ell_q$-minimization (3) with $y = Ax$ if and only if $A$ satisfies the $\ell_q$ null space property of order $s$.

The proof of Theorem 4 is implied in [8] and is easily found in [18].

However, in more realistic scenarios, we can only claim that the vectors are close to sparse vectors, not exactly sparse ones. In such cases, we would like to recover a vector with an error controlled by its distance to $s$-sparse vectors. This property is usually referred to as the stability of the reconstruction scheme with respect to a sparsity defect [16]. To better discuss this property, we give the definition of the stable null space property.

Definition 5. For any set $S$ with $|S| \le s$, a matrix $A$ is said to satisfy the stable null space property of order $s$ with constant $0 < \rho < 1$ if
$$\|v_S\|_q^q \le \rho\|v_{S^c}\|_q^q \qquad (12)$$
for all $v \in \ker A$.

Remark 6. Formula (12) is often replaced by $\|v_S\|_q^q \le \tilde{\rho}\|v\|_q^q$ with constant $\tilde{\rho} = \rho/(1+\rho) \in (0, 1/2)$; the two forms are equivalent because $\|v\|_q^q = \|v_S\|_q^q + \|v_{S^c}\|_q^q$.

Obviously, Definition 5 is stronger than Definition 3, since $0 < \rho < 1$. But the following theorem demonstrates that it is also important to introduce Definition 5, because the stable null space property characterizes the distance between two vectors $x$ and $z$ satisfying $Az = Ax$.

Theorem 7. For any set $S$ with $|S| \le s$, the matrix $A$ satisfies the stable null space property of order $s$ with constant $0 < \rho < 1$ if and only if
$$\|z - x\|_q^q \le \frac{1+\rho}{1-\rho}\left(\|z\|_q^q - \|x\|_q^q + 2\|x_{S^c}\|_q^q\right) \qquad (13)$$
for all vectors $x, z \in \mathbb{R}^N$ with $Az = Ax$.

We defer the proof of this theorem to the Appendix. Based on this theorem, we give the main stability result as follows.

Corollary 8. Suppose that a matrix $A$ satisfies the stable null space property of order $s$ with constant $0 < \rho < 1$; then for any $x \in \mathbb{R}^N$, a solution $\hat{x}$ of the $\ell_q$-minimization (3) with $y = Ax$ approximates the vector $x$ with $\ell_q$-error:
$$\|x - \hat{x}\|_q^q \le \frac{2(1+\rho)}{1-\rho}\sigma_s(x)_q^q, \qquad (14)$$
where
$$\sigma_s(x)_q = \inf\{\|x - z\|_q : z \in \Sigma_s\}, \qquad (15)$$
which denotes the error of the best $s$-term approximation to $x$ with respect to the $\ell_q$-quasinorm. Obviously, if $x \in \Sigma_s$, then $\sigma_s(x)_q = 0$.

Proof. Take $S$ to be a set of the $s$ largest absolute coefficients of $x$, so that $\|x_{S^c}\|_q = \sigma_s(x)_q$. If $\hat{x}$ is a minimizer of the $\ell_q$-minimization, then $\|\hat{x}\|_q^q \le \|x\|_q^q$ with $A\hat{x} = Ax$. So we take $z = \hat{x}$ in inequality (13); then
$$\|x - \hat{x}\|_q^q \le \frac{1+\rho}{1-\rho}\left(\|\hat{x}\|_q^q - \|x\|_q^q + 2\|x_{S^c}\|_q^q\right) \le \frac{2(1+\rho)}{1-\rho}\sigma_s(x)_q^q,$$
which is the desired inequality.

By Corollary 8, using $\sigma_s(x)_q = 0$ for $x \in \Sigma_s$, we have the following corollary.

Corollary 9. Suppose that a matrix $A$ satisfies the stable null space property of order $s$ with constant $0 < \rho < 1$; then every $s$-sparse vector $x$ can be exactly recovered by the $\ell_q$-minimization with $y = Ax$.

Compared with Theorem 4, the stable null space property is only a sufficient condition to exactly recover a sparse vector via the $\ell_q$-minimization.

Corollary 10. Suppose that a matrix $A$ satisfies the stable null space property of order $s$ with constant $0 < \rho < 1$; then for any $x \in \mathbb{R}^N$ and any $p$ with $q \le p$, a solution $\hat{x}$ of the $\ell_q$-minimization with $y = Ax$ approximates the vector $x$ with $\ell_p$-error:
$$\|x - \hat{x}\|_p \le C\,\sigma_s(x)_q, \qquad (16)$$
where
$$C = \left(\frac{2(1+\rho)}{1-\rho}\right)^{1/q}. \qquad (17)$$

Proof. This result can be derived from inequality (14) and the following inequality [16]:
$$\|v\|_p \le \|v\|_q \qquad (18)$$
for any $v \in \mathbb{R}^N$ and $0 < q \le p$.

Remark 11. Under the conditions of Corollary 10, we choose $q = 1$ and $p = 2$; let $C = 2(1+\rho)/(1-\rho)$; then
$$\|x - \hat{x}\|_2 \le \frac{2(1+\rho)}{1-\rho}\sigma_s(x)_1. \qquad (19)$$
Let us choose $q = 1/2$ and $p = 2$; let $C = \left(2(1+\rho)/(1-\rho)\right)^2$; then
$$\|x - \hat{x}\|_2 \le \left(\frac{2(1+\rho)}{1-\rho}\right)^2\sigma_s(x)_{1/2}. \qquad (20)$$

3. Robust Null Space Property

In realistic situations, it is also inconceivable to measure a signal with infinite precision. This means that the measurement vector $y \in \mathbb{R}^m$ is only an approximation of the vector $Ax$, with $\|Ax - y\|_2 \le \eta$ for some $\eta \ge 0$. In this case, the reconstruction scheme should be required to output a vector $\hat{x}$ whose distance to the original vector $x$ is controlled by the measurement error $\eta$. This property is usually referred to as the robustness of the reconstruction scheme with respect to measurement error [16]. We are going to investigate the following modeling:
$$\min_{x \in \mathbb{R}^N} \|x\|_q^q \quad \text{subject to} \quad \|Ax - y\|_2 \le \eta. \qquad (21)$$
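For $q = 1$, problem (21) is a second-order cone program; in practice one often solves the closely related unconstrained proxy $\min_x \frac{1}{2}\|Ax - y\|_2^2 + \lambda\|x\|_1$ instead. The following ISTA sketch is our illustration of that proxy (the tuning parameter `lam` is an assumption, not prescribed by the analysis here):

```python
import numpy as np

def ista_l1(A, y, lam=0.05, iters=500):
    """ISTA for min 0.5*||Ax - y||_2^2 + lam*||x||_1, an unconstrained
    proxy of the noise-aware model (21) with q = 1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L  # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x
```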

Definition 12. For any set $S$ with $|S| \le s$, a matrix $A$ is said to satisfy the robust null space property of order $s$ with constants $0 < \rho < 1$ and $\tau > 0$, if
$$\|v_S\|_q^q \le \rho\|v_{S^c}\|_q^q + \tau\|Av\|_2^q \qquad (23)$$
for all $v \in \mathbb{R}^N$.

We see that Definition 12 is broader than Definition 5, because it does not require that $v \in \ker A$. Obviously, for $v \in \ker A$, the robust null space property implies the stable null space property. Similar to Theorem 7, we also give an equivalent form of the robust null space property.

Theorem 13. For any set $S$ with $|S| \le s$, the matrix $A$ satisfies the robust null space property of order $s$ with constants $0 < \rho < 1$ and $\tau > 0$ if and only if
$$\|z - x\|_q^q \le \frac{1+\rho}{1-\rho}\left(\|z\|_q^q - \|x\|_q^q + 2\|x_{S^c}\|_q^q\right) + \frac{2\tau}{1-\rho}\|A(z - x)\|_2^q \qquad (24)$$
for all vectors $x, z \in \mathbb{R}^N$.

The proof of Theorem 13 is deferred to the Appendix, and the spirit of this theorem is captured by the following corollary.

Corollary 14. Suppose that a matrix $A$ satisfies the robust null space property of order $s$ with constants $0 < \rho < 1$ and $\tau > 0$; then for any $x \in \mathbb{R}^N$, a solution $\hat{x}$ of the $\ell_q$-minimization (21) with $y = Ax + e$ and $\|e\|_2 \le \eta$ approximates the vector $x$ with $\ell_q$-error:
$$\|x - \hat{x}\|_q^q \le \frac{2(1+\rho)}{1-\rho}\sigma_s(x)_q^q + \frac{2^{1+q}\tau}{1-\rho}\eta^q. \qquad (25)$$

Proof. Noting that $\|A(\hat{x} - x)\|_2 \le \|A\hat{x} - y\|_2 + \|y - Ax\|_2 \le 2\eta$, the rest of the proof is similar to that of Corollary 8.

Remark 15. When $\eta = 0$, inequality (25) is the conclusion of Corollary 8.

Remark 16. Using the inequality $(a + b)^{1/q} \le 2^{1/q - 1}\left(a^{1/q} + b^{1/q}\right)$ for $a, b \ge 0$ and $0 < q \le 1$, inequality (25) can be modified such that
$$\|x - \hat{x}\|_q \le 2^{1/q - 1}\left(\frac{2(1+\rho)}{1-\rho}\right)^{1/q}\sigma_s(x)_q + 2^{1/q - 1}\left(\frac{2^{1+q}\tau}{1-\rho}\right)^{1/q}\eta, \qquad (26)$$
and if we set $C_1 = 2^{1/q - 1}\left(2(1+\rho)/(1-\rho)\right)^{1/q}$ and $C_2 = 2^{1/q - 1}\left(2^{1+q}\tau/(1-\rho)\right)^{1/q}$, then we can get the following inequality:
$$\|x - \hat{x}\|_q \le C_1\sigma_s(x)_q + C_2\eta, \qquad (27)$$
where $C_1$ and $C_2$ depend only on $\rho$, $\tau$, and $q$.

Inequality (27) or (14) is the same as that of Theorem 3.1 in [10] except for the constants when we only consider the noiseless case, but we used a different approach; the latter is based on the measurement matrix satisfying the following inequality:
$$\gamma_{2t} - 1 < 4(\sqrt{2} - 1)\left(\frac{t}{s}\right)^{1/q - 1/2}$$
for some integer $t \ge s$, where $\gamma_{2t} = \beta_{2t}^2/\alpha_{2t}^2$, while $\alpha_{2t}$ and $\beta_{2t}$ are the best constants such that the measurement matrix satisfies the following inequality:
$$\alpha_{2t}\|u\|_2 \le \|Au\|_2 \le \beta_{2t}\|u\|_2 \quad \text{for all } u \in \Sigma_{2t}.$$
If we add the robust term, inequality (27) is better than that of Theorem 3.1 in [10].

The rest of this section will focus on another robust null space property, that is, the robust $\ell_p$ null space property for $p \ge q$. In the following definition, we will use the two quasinorms $\|\cdot\|_p$ and $\|\cdot\|_q$ rather than the same quasinorm.

Definition 17. Given $p \ge q$, for any set $S$ with $|S| \le s$, a matrix $A$ is said to satisfy the robust $\ell_p$ null space property of order $s$ with constants $0 < \rho < 1$ and $\tau > 0$, if
$$\|v_S\|_p^q \le \frac{\rho}{s^{1 - q/p}}\|v_{S^c}\|_q^q + \tau\|Av\|_2^q$$
for all $v \in \mathbb{R}^N$.

Obviously, when $p = q$, Definition 17 reduces to Definition 12, so we will consider the case $q < p$.

Theorem 18. Given $q < p$, for any set $S$ with $|S| \le s$, suppose that the matrix $A$ satisfies the robust $\ell_p$ null space property of order $s$ with constants $0 < \rho < 1$ and $\tau > 0$; then, for any $x, z \in \mathbb{R}^N$, one has
$$\|z - x\|_p \le \frac{C_1}{s^{1/q - 1/p}}\left(\|z\|_q^q - \|x\|_q^q + 2\sigma_s(x)_q^q\right)^{1/q} + C_2\|A(z - x)\|_2,$$
where $C_1$ and $C_2$ are positive constants depending only on $\rho$, $\tau$, $q$, and $p$.

We defer the proof of this theorem to the Appendix. In the proof, we will see that the condition $q < p$ is necessary in Theorem 18.

Corollary 19. Given $q < p$, suppose that a matrix $A$ satisfies the robust $\ell_p$ null space property of order $s$ with constants $0 < \rho < 1$ and $\tau > 0$; then for any $x \in \mathbb{R}^N$, a solution $\hat{x}$ of the $\ell_q$-minimization (21) with $y = Ax + e$ and $\|e\|_2 \le \eta$ approximates the vector $x$ with $\ell_p$-error:
$$\|x - \hat{x}\|_p \le \frac{2^{1/q}C_1}{s^{1/q - 1/p}}\sigma_s(x)_q + 2C_2\eta,$$
where $C_1$ and $C_2$ are the same as those of Theorem 18.

Remark 20. In Corollary 19, if we set $p = 2$, then the $\ell_2$-error is controlled by $\sigma_s(x)_q$ and $\eta$; that is,
$$\|x - \hat{x}\|_2 \le \frac{2^{1/q}C_1}{s^{1/q - 1/2}}\sigma_s(x)_q + 2C_2\eta.$$
Although the quasinorm satisfies $\sigma_s(x)_q \ge \sigma_s(x)_1$ for $0 < q \le 1$, the term $\sigma_s(x)_q/s^{1/q - 1/2}$ is not worse than $\sigma_s(x)_1/s^{1/2}$, because the reconstruction $\ell_2$-error decays at the rate $s^{1/2 - 1/q}$, which is faster for smaller $q$.

4. Extensions

In this section, we will discuss two extensions of the null space property; one is the null space property of the $\ell_q$-synthesis modeling and the other is the block null space property of the mixed $\ell_2/\ell_q$-minimization.

4.1. Null Space Property of the $\ell_q$-Synthesis Modeling

The techniques above hold for signals which are sparse in the standard coordinate basis or sparse with respect to some other orthonormal basis. However, in practical examples, there are numerous signals which are sparse in an overcomplete dictionary $D$ rather than an orthonormal basis; see [27–32]. Here $D$ is an $n \times N$ matrix ($n \le N$) that is often rather coherent in applications. We also call $D$ a frame in the sense that the columns of $D$ form a finite frame of $\mathbb{R}^n$. In this setting, the signal can be represented as $x = Dz$, where $z$ is an $s$-sparse vector in $\mathbb{R}^N$. Such signals are called dictionary-sparse signals or frame-sparse signals [33]. When the dictionary $D$ is specified, the signals are also called $D$-sparse signals. Many signals naturally possess frame-sparse coefficients, such as radar images (Gabor frames), cartoon-like images (curvelets), and images with directional features (shearlets) [32–35].

The $\ell_q$-synthesis modeling [34] is defined by
$$\hat{z} = \operatorname*{arg\,min}_{z \in \mathbb{R}^N} \|z\|_q^q \quad \text{subject to} \quad \|ADz - y\|_2 \le \eta; \qquad \hat{x} = D\hat{z}.$$
The above method is called the $\ell_q$-synthesis modeling due to the second, synthesizing step. It is related to the $\ell_q$-analysis modeling, which finds the solution directly by solving the problem [32]
$$\hat{x} = \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \|D^T x\|_q^q \quad \text{subject to} \quad \|Ax - y\|_2 \le \eta,$$
where $D^T$ denotes the transpose of the matrix $D$.

Empirical studies show that the $\ell_1$-synthesis modeling often provides good recovery; however, it is fundamentally distinct from the $\ell_1$-analysis modeling. The geometry of the two problems was analyzed in [28], where it was shown that because these geometrical structures exhibit substantially different properties, there is a large gap between the two formulations. This theoretical gap was also demonstrated by numerical simulations in [28], which showed that the two methods perform very differently on large families of signals. Recently, [33] elaborated why the $\ell_1$-synthesis method was introduced and the relationship between the two methods; it appears that the $\ell_1$-synthesis is a more thorough method than the $\ell_1$-analysis, and the $\ell_1$-analysis is a subproblem of the $\ell_1$-synthesis via sparse duals. Besides, [33] built up a framework for the $\ell_1$-synthesis method and proposed a dictionary-based null space property which is the first sufficient and necessary condition for the success of the $\ell_1$-synthesis method.

In this subsection, we only propose the $\ell_q$-synthesis modeling and give the null space property of the $\ell_q$-minimization, but we do not elaborate the relationship between the $\ell_q$-synthesis modeling and the $\ell_q$-analysis modeling. The $\ell_q$-synthesis modeling is as follows:
$$\min_{z \in \mathbb{R}^N} \|z\|_q^q \quad \text{subject to} \quad ADz = y; \qquad \hat{x} = D\hat{z}. \qquad (36)$$
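The two-step structure of (36) is easy to see in code. The sketch below (ours; it takes $q = 1$ so that the coefficient step is the LP reformulation from Section 1 and assumes SciPy's `linprog`) first recovers a sparse coefficient vector under the effective matrix $AD$ and then synthesizes $\hat{x} = D\hat{z}$:

```python
import numpy as np
from scipy.optimize import linprog

def l1_synthesis(A, D, y):
    """l1-synthesis (q = 1): find the sparsest coefficient vector z with
    ADz = y via the LP reformulation, then synthesize x = Dz."""
    B = A @ D                                 # effective measurement matrix
    m, N = B.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])      # -t <= z <= t
    A_eq = np.hstack([B, np.zeros((m, N))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * (2 * N))
    z_hat = res.x[:N]                         # sparse coefficients
    return D @ z_hat                          # the synthesizing step
```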

Given a frame $D$, $D^{-1}(x)$ denotes the preimage of the set $\{x\}$ under the frame $D$. We introduce the following notation [33]:
$$D^{-1}(x) = \{z \in \mathbb{R}^N : Dz = x\}.$$

Definition 21. Fix a dictionary $D$; for any set $S$ with $|S| \le s$, a matrix $A$ is said to satisfy the null space property of frame $D$ of order $s$ if for any $v \in \ker A \setminus \{0\}$ there exists $u \in D^{-1}(v)$, such that
$$\|u_S\|_q^q < \|u_{S^c}\|_q^q.$$

This null space property is abbreviated as $D$-$\mathrm{NSP}_q$. Here, a natural question is why every $u \in D^{-1}(v)$ is not directly required to satisfy the null space property. The major difference is that the $D$-$\mathrm{NSP}_q$ essentially requires the inequality only for the best representation of $v$, that is, for the minimizer of $\|u_S\|_q^q - \|u_{S^c}\|_q^q$ over all $u$ with $Du = v$, and when $D$ is the identity matrix the $D$-$\mathrm{NSP}_q$ degenerates into $A$ having the null space property. Therefore this new condition for the $\ell_q$-synthesis modeling is weaker than $AD$ having the null space property.

The following theorem asserts that the null space property of frame $D$ of order $s$ is a sufficient and necessary condition for the $\ell_q$-synthesis modeling (36) to successfully recover all the $D$-sparse signals with sparsity at most $s$.

Theorem 22. Fix a dictionary $D$ and let $y = Ax$; for any $x = Dz$ with $\|z\|_0 \le s$, one has $\hat{x} = x$, where $\hat{x}$ is the signal reconstructed from $y$ using the $\ell_q$-synthesis modeling (36), if and only if $A$ satisfies the null space property of frame $D$ of order $s$.

Proof. Combining the proof of Theorem 4.2 in [33] with the proof of Theorem 4 (Lemma 2.2 in [18]), the proof adds very little to those works. Here, it is omitted.

Remark 23. The null space property of the $\ell_q$-analysis modeling is implied in [27].

4.2. Block Null Space Property of the Mixed $\ell_2/\ell_q$-Minimization

The conventional compressed sensing only considers the sparsity that the signal has at most $s$ nonzero elements, which can appear anywhere in the vector, and it does not take into account any further structure. However, in many practical scenarios, the unknown signal not only is sparse but also exhibits additional structure in the form that the nonzero elements are aligned to blocks rather than being arbitrarily spread throughout the vector. These signals are referred to as the block sparse signals and arise in various applications, for example, DNA microarrays [36], equalization of sparse communication channels [37], and color imaging [38]. To define the block sparsity, it is necessary to introduce some further notation. Suppose that $x \in \mathbb{R}^N$ is split into $M$ blocks, $x[1], x[2], \ldots, x[M]$, which are of lengths $d_1, d_2, \ldots, d_M$, respectively; that is,
$$x = (\underbrace{x_1, \ldots, x_{d_1}}_{x[1]}, \underbrace{x_{d_1+1}, \ldots, x_{d_1+d_2}}_{x[2]}, \ldots, \underbrace{x_{N-d_M+1}, \ldots, x_N}_{x[M]})^T, \qquad (39)$$
and $N = \sum_{i=1}^{M} d_i$. A vector $x \in \mathbb{R}^N$ is called block $k$-sparse over $\mathcal{I} = \{d_1, \ldots, d_M\}$ if $x[i]$ is nonzero for at most $k$ indices $i$. When $d_i = 1$ for each $i$, the block sparsity reduces to the conventional definition of a sparse vector. Denote
$$\|x\|_{2,0} = \sum_{i=1}^{M} I\left(\|x[i]\|_2 > 0\right),$$
where $I(\cdot)$ is an indicator function that obtains the value $1$ if $\|x[i]\|_2 > 0$ and $0$ otherwise. So a block $k$-sparse vector $x$ can be defined by $\|x\|_{2,0} \le k$. Here, to avoid confusion with the above notation, we especially emphasize that $k$ denotes the number of blocks of the signal $x$ which are nonzero, not the number of nonzero entries, denoted by $s$ in the sections above.
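The block norms above are straightforward to compute; the following small sketch (our illustration, assuming NumPy) evaluates $\|x\|_{2,0}$ and $\|x\|_{2,q}^q$ for a vector split according to given block lengths:

```python
import numpy as np

def block_norms(x, lengths, q=0.5):
    """Return (||x||_{2,0}, ||x||_{2,q}^q) for x split into blocks of the
    given lengths: the number of nonzero blocks and sum_i ||x[i]||_2^q."""
    blocks = np.split(np.asarray(x, dtype=float), np.cumsum(lengths)[:-1])
    b2 = np.array([np.linalg.norm(b) for b in blocks])
    return int(np.count_nonzero(b2)), float(np.sum(b2 ** q))

x = [0, 0, 0, 1.0, -2.0, 0, 0, 3.0]
print(block_norms(x, [3, 2, 2, 1], q=0.5))   # a block 2-sparse vector
```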

To recover a block sparse signal, similar to the standard $\ell_0$-minimization, one will pursue the sparsest block sparse vector with the following mixed $\ell_2/\ell_0$ modeling [39–41]:
$$\min_x \|x\|_{2,0} \quad \text{subject to} \quad Ax = y.$$
But the mixed $\ell_2/\ell_0$-minimization problem is also NP-hard. It is natural that one uses the mixed $\ell_2/\ell_1$-minimization to replace the mixed $\ell_2/\ell_0$ model [39–42]:
$$\min_x \|x\|_{2,1} \quad \text{subject to} \quad Ax = y,$$
where
$$\|x\|_{2,1} = \sum_{i=1}^{M} \|x[i]\|_2.$$
To investigate the performance of this method, Eldar and Mishali [40] proposed the definition of the block Restricted Isometry Property (block RIP) of a measurement matrix $A$. We say that a matrix $A$ satisfies the block RIP over $\mathcal{I}$ of order $k$ with positive constant $\delta_k < 1$ if, for every block $k$-sparse vector $x$ over $\mathcal{I}$,
$$(1 - \delta_k)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_k)\|x\|_2^2.$$
Obviously, the block RIP is a natural extension of the standard RIP, but it is a less stringent requirement compared with the standard RIP. Besides, the required number of measurements for $A$ to satisfy the block RIP is less than that for the standard RIP [43]. Eldar and Mishali [40] proved that the mixed $\ell_2/\ell_1$-minimization can exactly recover any block $k$-sparse signal when the measurement matrix $A$ satisfies the block RIP with $\delta_{2k} < \sqrt{2} - 1$. Recently, Lin and Li [43] improved the sufficient condition to $\delta_{2k} < 0.4931$ and established another sufficient condition $\delta_k < 0.307$ for exact recovery. There are also a number of works based on non-RIP analysis to characterize the theoretical performance of the mixed $\ell_2/\ell_1$-minimization, such as block coherence [39], strong group sparsity [44], and null space characterization [42].

Based on the previous discussion of the performance of the $\ell_q$-minimization, it is natural that one would be interested in making an ongoing effort to extend the $\ell_q$-minimization to the setting of block sparse signal recovery. Therefore, the mixed $\ell_2/\ell_q$-minimization was proposed in [38, 45, 46]. Consider
$$\min_x \|x\|_{2,q}^q \quad \text{subject to} \quad Ax = y, \qquad (45)$$
where
$$\|x\|_{2,q} = \left(\sum_{i=1}^{M} \|x[i]\|_2^q\right)^{1/q}, \quad 0 < q \le 1.$$
In [38, 45, 46], some numerical experiments demonstrated that fewer measurements are needed for exact recovery when $0 < q < 1$ than when $q = 1$. Moreover, exact recovery conditions based on the block restricted $q$-isometry property have also been studied [46].
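As in the scalar case, the mixed $\ell_2/\ell_q$ problem (45) is nonconvex and is solved heuristically in the experimental works cited above. A block variant of the IRLS sketch from Section 1 (again our own hedged illustration, not the algorithm of [38, 45, 46]) assigns one shared weight per block:

```python
import numpy as np

def block_irls(A, y, lengths, q=0.5, iters=50, eps=1.0):
    """Heuristic block IRLS for min ||x||_{2,q}^q s.t. Ax = y: one shared
    weight w_i = (||x[i]||_2^2 + eps)^(q/2 - 1) per block, followed by a
    weighted minimum-norm step in closed form."""
    idx = np.repeat(np.arange(len(lengths)), lengths)  # block id per entry
    x = np.linalg.pinv(A) @ y
    for _ in range(iters):
        e2 = np.bincount(idx, weights=x**2)            # ||x[i]||_2^2 per block
        w_inv = ((e2 + eps) ** (1 - q / 2))[idx]       # 1/w_i, expanded
        x = w_inv * (A.T @ np.linalg.solve(A @ (w_inv[:, None] * A.T), y))
        eps = max(eps / 10.0, 1e-12)
    return x
```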

In this subsection, we will extend the null space property to the block sparse signals. For a block signal $x \in \mathbb{R}^N$ whose structure is like (39), we set $S \subset \{1, 2, \ldots, M\}$, and by $S^c$ we mean the complement of the set $S$ with respect to $\{1, 2, \ldots, M\}$; that is, $S^c = \{1, 2, \ldots, M\} \setminus S$.

Definition 24. For any set $S \subset \{1, \ldots, M\}$ with $|S| \le k$, a matrix $A$ is said to satisfy the block null space property over $\mathcal{I}$ of order $k$, if
$$\|v_S\|_{2,q}^q < \|v_{S^c}\|_{2,q}^q \qquad (47)$$
for all $v \in \ker A \setminus \{0\}$, where $v_S$ denotes the vector equal to $v$ on a block index set $S$ and zero elsewhere.

Remark 25. Inequality (47) can be replaced by $2\|v_S\|_{2,q}^q < \|v\|_{2,q}^q$ or $\|v_S\|_{2,q}^q < \frac{1}{2}\|v\|_{2,q}^q$ for every set $S$ with $|S| \le k$, since $\|v\|_{2,q}^q = \|v_S\|_{2,q}^q + \|v_{S^c}\|_{2,q}^q$.

Using Definition 24, we can easily characterize the existence and uniqueness of the solution of the mixed $\ell_2/\ell_q$-minimization (45).

Theorem 26. Given a matrix $A$, every block $k$-sparse vector $x$ is the unique solution of the mixed $\ell_2/\ell_q$-minimization (45) with $y = Ax$ if and only if $A$ satisfies the block null space property over $\mathcal{I}$ of order $k$.

We defer the proof of Theorem 26 to the Appendix.

So far, we have only extended the standard NSP to the $\ell_q$-synthesis modeling and the mixed $\ell_2/\ell_q$-minimization, respectively. As for the extension of the stable NSP and the robust NSP, we leave it for future work, especially the discussion of the block sparse signals via the mixed $\ell_2/\ell_q$-minimization.

Appendix

In this section, we provide the proofs of Theorems 7, 13, 18, and 26. We will need the following lemma.

Lemma A.1. Given a set $S$ and vectors $x, z \in \mathbb{R}^N$,
$$\|(x - z)_{S^c}\|_q^q \le \|z\|_q^q - \|x\|_q^q + \|(x - z)_S\|_q^q + 2\|x_{S^c}\|_q^q. \qquad (A.1)$$
If $z$ is a solution of the $\ell_q$-minimization with $Az = Ax$, then
$$\|(x - z)_{S^c}\|_q^q \le \|(x - z)_S\|_q^q + 2\|x_{S^c}\|_q^q. \qquad (A.2)$$

Proof. Using the following inequalities, we can easily obtain inequality (A.1):
$$\|(x - z)_{S^c}\|_q^q \le \|x_{S^c}\|_q^q + \|z_{S^c}\|_q^q, \qquad \|x_S\|_q^q \le \|(x - z)_S\|_q^q + \|z_S\|_q^q.$$
Indeed, adding them and using $\|z\|_q^q = \|z_S\|_q^q + \|z_{S^c}\|_q^q$ and $\|x\|_q^q = \|x_S\|_q^q + \|x_{S^c}\|_q^q$ gives (A.1). If $z$ solves the $\ell_q$-minimization, then $\|z\|_q^q \le \|x\|_q^q$, and (A.2) follows.

Proof of Theorem 7. For any set $S$ with $|S| \le s$, let us now assume that the matrix $A$ satisfies the stable null space property of order $s$ with constant $0 < \rho < 1$. For $x, z \in \mathbb{R}^N$ with $Az = Ax$, since $v = x - z \in \ker A$, the stable null space property yields
$$\|v_S\|_q^q \le \rho\|v_{S^c}\|_q^q.$$
From Lemma A.1, we get
$$\|v_{S^c}\|_q^q \le \|z\|_q^q - \|x\|_q^q + \|v_S\|_q^q + 2\|x_{S^c}\|_q^q.$$
Since $\|v_S\|_q^q \le \rho\|v_{S^c}\|_q^q$, so
$$(1 - \rho)\|v_{S^c}\|_q^q \le \|z\|_q^q - \|x\|_q^q + 2\|x_{S^c}\|_q^q.$$
Using (12), we derive
$$\|v\|_q^q = \|v_S\|_q^q + \|v_{S^c}\|_q^q \le (1 + \rho)\|v_{S^c}\|_q^q \le \frac{1+\rho}{1-\rho}\left(\|z\|_q^q - \|x\|_q^q + 2\|x_{S^c}\|_q^q\right),$$
which is the desired inequality.
Conversely, let us assume that the matrix $A$ satisfies (13) for all vectors $x, z \in \mathbb{R}^N$ with $Az = Ax$. Given a vector $v \in \ker A$, since $A(v_S) = A(-v_{S^c})$, let $x = v_S$ and $z = -v_{S^c}$; then $Az = Ax$, so we can get from (13)
$$\|v\|_q^q = \|z - x\|_q^q \le \frac{1+\rho}{1-\rho}\left(\|v_{S^c}\|_q^q - \|v_S\|_q^q\right),$$
and this can be written as
$$(1 - \rho)\left(\|v_S\|_q^q + \|v_{S^c}\|_q^q\right) \le (1 + \rho)\left(\|v_{S^c}\|_q^q - \|v_S\|_q^q\right);$$
therefore, we can get
$$\|v_S\|_q^q \le \rho\|v_{S^c}\|_q^q.$$
That is, $A$ satisfies the stable null space property of order $s$ with constant $\rho$.

Proof of Theorem 13. For any set $S$ with $|S| \le s$, let us assume that the matrix $A$ satisfies the robust null space property of order $s$ with constants $0 < \rho < 1$ and $\tau > 0$. For $x, z \in \mathbb{R}^N$, setting $v = x - z$, the robust null space property and Lemma A.1 yield
$$\|v_S\|_q^q \le \rho\|v_{S^c}\|_q^q + \tau\|Av\|_2^q, \qquad \|v_{S^c}\|_q^q \le \|z\|_q^q - \|x\|_q^q + \|v_S\|_q^q + 2\|x_{S^c}\|_q^q,$$
and combining these two inequalities, we can get
$$(1 - \rho)\|v_{S^c}\|_q^q \le \|z\|_q^q - \|x\|_q^q + 2\|x_{S^c}\|_q^q + \tau\|Av\|_2^q.$$
Using (23), we derive
$$\|v\|_q^q \le (1 + \rho)\|v_{S^c}\|_q^q + \tau\|Av\|_2^q \le \frac{1+\rho}{1-\rho}\left(\|z\|_q^q - \|x\|_q^q + 2\|x_{S^c}\|_q^q\right) + \frac{2\tau}{1-\rho}\|Av\|_2^q,$$
which is the desired inequality.
The proof of sufficiency is similar to that of Theorem 7; here it is omitted.

Proof of Theorem 18. Let us now assume that the matrix $A$ satisfies the robust $\ell_p$ null space property of order $s$ with constants $0 < \rho < 1$ and $\tau > 0$. For $v = z - x$, choose $S$ as an index set of the $s$ largest absolute entries of $v$. Noticing the Stechkin-type estimate $\|v_{S^c}\|_p = \sigma_s(v)_p \le s^{1/p - 1/q}\|v\|_q$ [16], which requires $q < p$, and bounding $\|v_S\|_p$ by Definition 17 together with $\|v_{S^c}\|_q \le \|v\|_q$, we get
$$\|v\|_p \le \|v_S\|_p + \|v_{S^c}\|_p \le C_3 s^{1/p - 1/q}\|v\|_q + C_4\|Av\|_2,$$
with $C_3 = C_3(\rho, q)$ and $C_4 = C_4(\tau, q)$. Moreover, Definition 17 combined with the Hölder inequality $\|v_S\|_q \le s^{1/q - 1/p}\|v_S\|_p$ shows that $A$ also satisfies the robust null space property of Definition 12 with constants $\rho$ and $s^{1 - q/p}\tau$. Applying (24) to $v$, with $S$ now chosen as an index set of the $s$ largest absolute entries of $x$, and taking the $1/q$-th power, we can get
$$\|v\|_q \le C_5\left(\|z\|_q^q - \|x\|_q^q + 2\sigma_s(x)_q^q\right)^{1/q} + C_6 s^{1/q - 1/p}\|Av\|_2,$$
with $C_5 = C_5(\rho, q)$ and $C_6 = C_6(\rho, \tau, q)$. Substituting the latter bound into the former and collecting the constants yields the desired inequality.

Proof of Theorem 26. In order to prove Theorem 26, we will need the following triangle inequality for $\|\cdot\|_{2,q}^q$:
$$\|x + z\|_{2,q}^q \le \|x\|_{2,q}^q + \|z\|_{2,q}^q. \qquad (A.16)$$
Case 1. We consider the simplest case; suppose that both $x$ and $z$ are split into $M$ blocks, $x[i]$ with lengths $d_i$ and $z[i]$ with lengths $d_i'$, respectively. We also assume that corresponding blocks of $x$ and $z$ have the same number of entries; that is, $d_i = d_i'$ for all $i$. Then inequality (A.16) is easily obtained by only noticing that $\|x[i] + z[i]\|_2 \le \|x[i]\|_2 + \|z[i]\|_2$ and using the fact that $(a + b)^q \le a^q + b^q$ holds for any $a, b \ge 0$; that is,
$$\|x[i] + z[i]\|_2^q \le \left(\|x[i]\|_2 + \|z[i]\|_2\right)^q \le \|x[i]\|_2^q + \|z[i]\|_2^q. \qquad (A.17)$$
Case 2. Suppose that $x$ is split into $M$ blocks with lengths $d_i$, but $z$ is split into $M'$ blocks with lengths $d_i'$; we might as well set $M \le M'$; then in the top $M$ blocks, there may exist some blocks in $z$ whose entries are more than those of the corresponding blocks in $x$. Without loss of generality, we might as well suppose that the first block has the above property; that is, $d_1 < d_1'$; then we only supplement $d_1' - d_1$ zeros in the block $x[1]$, such that $x[1]$ and $z[1]$ have the same number of entries. By Case 1, inequality (A.17) also holds and
$$\|x[1] + z[1]\|_2^q \le \|x[1]\|_2^q + \|z[1]\|_2^q. \qquad (A.18)$$
We can use the same way to deal with the case if there exist some blocks in $x$ whose entries are more than those of the corresponding blocks in $z$ for the top $M$ blocks. Finally, we supplement again $M' - M$ zero-blocks in $x$; then, summing inequality (A.18) over all blocks, we obtain
$$\|x + z\|_{2,q}^q = \sum_{i=1}^{M'} \|x[i] + z[i]\|_2^q \le \sum_{i=1}^{M'}\left(\|x[i]\|_2^q + \|z[i]\|_2^q\right) = \|x\|_{2,q}^q + \|z\|_{2,q}^q,$$
which is the desired inequality.
Now we give the proof of Theorem 26.
Let us first assume that every block $k$-sparse vector is the unique solution of the mixed $\ell_2/\ell_q$-minimization with $y = Ax$, and suppose that $v \in \ker A \setminus \{0\}$ is split into $M$ blocks with size set $\mathcal{I} = \{d_1, \ldots, d_M\}$. Given a fixed block index set $S$ with $|S| \le k$, for any $v \in \ker A \setminus \{0\}$, since $v = v_S + v_{S^c}$ satisfies $Av = 0$, we have $Av_S = A(-v_{S^c})$. Since $v_S$ is block $k$-sparse and $v_S \ne -v_{S^c}$ (otherwise $v = 0$), the uniqueness of the minimizer yields
$$\|v_S\|_{2,q}^q < \|v_{S^c}\|_{2,q}^q.$$
Conversely, let us assume that the block null space property relative to $\mathcal{I}$ holds. Suppose that $x$ is a block $k$-sparse vector with $M$ blocks, $x[i]$ with lengths $d_i$; if we let $S = \{i : \|x[i]\|_2 \ne 0\}$, then $|S| \le k$. Let further $z \ne x$ be such that $Az = Ax$; without loss of generality, suppose that $z$ is also split into $M$ blocks in terms of $\mathcal{I}$, with the same lengths as $x$. Then $v = x - z \in \ker A \setminus \{0\}$, and so we have
$$\|x\|_{2,q}^q \le \|v_S\|_{2,q}^q + \|z_S\|_{2,q}^q < \|v_{S^c}\|_{2,q}^q + \|z_S\|_{2,q}^q = \|z_{S^c}\|_{2,q}^q + \|z_S\|_{2,q}^q = \|z\|_{2,q}^q,$$
since $v_{S^c} = -z_{S^c}$. This establishes the required minimality of $\|x\|_{2,q}^q$. Here, we have made use of inequality (A.16).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous reviewers for their insightful comments and valuable suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant no. 11131006 and by a Marie Curie International Research Staff Exchange Scheme Fellowship within the 7th European Community Framework Programme, via EYE2E (269118) and LIVCODE (295151), and in part by the Science Research Project of Ningxia Higher Education Institutions under Grant no. NGY20140147.