Abstract

1-bit compressed sensing (CS) is an important class of sparse optimization problems. This paper focuses on the stability theory for 1-bit CS with a quadratic constraint. The model is rebuilt by reformulating the sign measurements as linear equality and inequality constraints, and the noisy quadratic constraint is approximated by polytopes to any level of accuracy. A new concept, the restricted weak RSP of a transposed sensing matrix with respect to the measurement vector, is introduced. Our results show that this concept is a necessary and sufficient condition for the stability of 1-bit CS without noise and remains a sufficient condition when noise is present.

1. Introduction

The standard noiseless compressed sensing (CS) model is to solve the following optimization problem:where is a sensing (or measurement) matrix and is a sparse signal requiring robust reconstruction from a given nonadaptive measurement vector [14]. The ℓ0-minimization problem is well known to be NP-hard. Hence, to overcome this difficulty, a typical treatment is to resort to the ℓ1-norm. Along this line, a wealth of algorithms is available, e.g., the orthogonal matching pursuit algorithm [5], the basis pursuit algorithm [6], the iterative hard thresholding algorithm [7], and the iteratively reweighted least squares algorithm [8]. Moreover, additional assumptions must be imposed on the measurement matrix to ensure that a sparse solution/signal can be exactly recovered by ℓ1-minimization. These conditions include the restricted isometry property [9–11], the coherence condition [12], the null space property [8, 13, 14], and the range space property [15, 16]. In recent research, some work has been done on robust reconstruction conditions (RRC) based on the above traditional properties and their variants, e.g., the exact reconstruction condition [17], the double null space property [18], and the null space property [19].
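As an illustration of the greedy algorithms cited above, the following is a minimal numpy sketch of orthogonal matching pursuit. It is not the decoding method studied in this paper; the matrix and signal are hypothetical toy data, and the matrix is deliberately chosen with orthonormal columns so that recovery is exact.

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily add the column of A most
    correlated with the current residual, then refit by least squares."""
    n = A.shape[1]
    support, residual = [], b.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Toy example: a matrix with orthonormal columns (via QR), for which
# a 3-sparse signal is recovered exactly because A.T @ b equals x0.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((30, 20)))
x0 = np.zeros(20)
x0[[2, 7, 11]] = [1.5, -2.0, 0.7]
b = A @ x0
x_hat = omp(A, b, 3)
print(np.allclose(x_hat, x0))  # True
```

For a general sensing matrix, exact recovery by OMP is only guaranteed under conditions such as those cited in the text (e.g., coherence or restricted isometry conditions).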

However, the above CS model cannot be adapted in some practical problems; for example, in brain signal processing and sigma-delta converters, only the sign or support of a signal is measured. This motivates one to consider sparse signal recovery through low bits of measurements. An extreme quantization is only one bit per measurement. It gives rise to the theory of 1-bit compressed sensing (see Boufounos and Baraniuk [20]).
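The information loss inherent in 1-bit measurements can be seen directly: multiplying the signal by any positive constant leaves every sign measurement unchanged. A small numpy check, with an arbitrary illustrative matrix and signal:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))   # hypothetical sensing matrix
x = rng.standard_normal(5)        # hypothetical signal

y = np.sign(A @ x)                # 1-bit measurements: only signs are kept
y_scaled = np.sign(A @ (3.0 * x)) # positively rescaled signal

print(np.array_equal(y, y_scaled))  # True: the magnitude of x is lost
```

This is why 1-bit CS can at best aim to recover direction, support, or sign information of the target signal.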

In this paper, we further consider a constrained 1-bit compressed sensing model with a noisy constraint. Precisely, let be two given full-row-rank matrices, let be a given vector, and let be a positive number. The constrained 1-bit compressed sensing model is described as follows:where the last term stands for a noisy constraint. The corresponding convex relaxation via the ℓ1-norm is expressed as

Compared with recovering a given signal, it is equally important to study whether the recovered signal is stable. Stability of recovery means that recovery errors stay under control even if the measurements are slightly inaccurate and the data are not exactly sparse. Recent stability studies for CS can be found in [21–25]. However, few theoretical results are available on the stability of 1-bit CS. In general, it is impossible to exactly reconstruct a sparse signal using only 1-bit information. For example, if , then any sufficiently small perturbation is also positive and hence satisfies the same sign measurement. Hence, we turn our attention to recovering part of the information in 1-bit CS, such as the support set or the sign of a target signal. For this reason, the following criterion,where and and denotes a sufficiently small positive scalar, has been widely used in the 1-bit CS literature. Inspired by this observation, problem (1) is said to be stable for noisy reconstruction if, for any nonzero vector , there is a nonzero solution of (3) such thatwhere and are constants depending on the primal problem data . If and is -sparse, then the right-hand side of (5) is zero and hence , which in turn implies that ; i.e., the sign of the target signal can be exactly recovered.

The main goal of this paper is to study necessary and/or sufficient conditions for (5). First, a new definition, the restricted weak RSP with respect to , is introduced. Our results show that, for 1-bit CS, this condition is necessary and sufficient for stability when there is no noise, while it remains sufficient when noise is present. The analysis is based on the duality theory of linear programming and the fact that the ball constraint can be approximated by polytopes to any level of accuracy.

The notation used in this paper is standard. Let be the set of nonnegative vectors in . Given a set , denotes the cardinality of . The -norm counts the number of nonzero components of , and the -norm of is defined as . Let stand for the vector of ones, i.e., . For a vector , write and . For any two norms and with , the induced matrix norm is defined as . A convex combination of the points and is written as , i.e.,
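The vector notation of this paragraph can be made concrete in a few lines of numpy (an illustrative sketch; the vector below is arbitrary):

```python
import numpy as np

x = np.array([3.0, -1.0, 0.0, 0.5])

l0 = np.count_nonzero(x)           # ||x||_0: number of nonzero components
l1 = np.sum(np.abs(x))             # ||x||_1: sum of absolute values
x_plus = np.maximum(x, 0.0)        # componentwise positive part
x_minus = np.minimum(x, 0.0)       # componentwise negative part

print(l0, l1)                              # 3 4.5
print(np.allclose(x, x_plus + x_minus))    # True: x = x_+ + x_-
```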

Given a vector , let

The sign function is defined as follows, where and . The projection of onto a convex set is denoted by , i.e., . Denote by the complement of in . The error of the best -term approximation of a vector is defined as
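The best k-term approximation error, i.e., the ℓ1-mass of a vector outside its k largest-magnitude entries, can be sketched as follows (toy vector; this assumes the ℓ1 flavor of the error commonly used in stability bounds):

```python
import numpy as np

def sigma_k(x, k):
    """l1-error of the best k-term approximation: the sum of the
    absolute values of all but the k largest-magnitude entries."""
    mags = np.sort(np.abs(x))                     # ascending magnitudes
    return float(np.sum(mags[:max(len(x) - k, 0)]))

x = np.array([3.0, -1.0, 0.5, 0.2])
print(sigma_k(x, 2))   # 0.7: drops the entries 0.5 and 0.2
print(sigma_k(x, 4))   # 0.0: a 4-term approximation of x is exact
```

In particular, the error vanishes exactly when the vector is k-sparse, which is how this quantity enters the stability bounds below.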

The Hausdorff metric of two sets is
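For finite point sets, the Hausdorff metric can be computed directly from its definition (an illustrative sketch; the sets below are arbitrary):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets (one point per row)."""
    # Pairwise Euclidean distances: D[i, j] = ||A[i] - B[j]||
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    d_AB = D.min(axis=1).max()   # sup over A of the distance to B
    d_BA = D.min(axis=0).max()   # sup over B of the distance to A
    return max(d_AB, d_BA)

A = np.array([[0.0]])
B = np.array([[1.0], [2.0]])
print(hausdorff(A, B))  # 2.0: the point 2 in B is at distance 2 from A
```

Note the asymmetry of the two one-sided terms, which is why the metric takes their maximum.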

Robinson’s constant is defined as follows:where

2. Reformulation and Approximation of (3)

The 1-bit CS problem is NP-hard and hence difficult to solve exactly. This motivates us to reformulate the 1-bit CS problem by removing the sign function. The advantage of such a reformulation is that it yields a decoding method based on the theory of linear programming.

Given sign measurements , denote by the submatrices of whose rows correspond to the index sets , , and , respectively. To simplify notation, we simply write , , and for , , and , respectively. In the following analysis, we always assume that , because otherwise and nothing is measured in this case.
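The idea of this reformulation is standard in 1-bit CS: rows with positive sign measurement yield inequality constraints of one orientation, rows with negative sign the other, and rows with zero measurement yield equalities. A hedged numpy check of this equivalence (arbitrary toy data; the strict/non-strict and normalization conventions of the actual model are omitted):

```python
import numpy as np

def satisfies_signs(A, y, x):
    """Check sign(A @ x) == y via the equivalent split constraints:
    rows with y=+1 need a_i x > 0, y=-1 need a_i x < 0, y=0 need a_i x = 0.
    (Strict vs. non-strict inequalities depend on the model's convention.)"""
    z = A @ x
    ok_pos = np.all(z[y > 0] > 0)
    ok_neg = np.all(z[y < 0] < 0)
    ok_zero = np.allclose(z[y == 0], 0.0)
    return bool(ok_pos and ok_neg and ok_zero)

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)
y = np.sign(A @ x)
print(satisfies_signs(A, y, x))  # True by construction
```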

The constraint can be rewritten equivalently as

By rearranging the order of the components of and the order of the associated rows of if necessary, we may assume without loss of generality that

It is clear that

In fact, the inclusion “” is clear. For “,” take satisfying . Define

Clearly, . Thus, for all and for all ; i.e., and . Therefore,

For any fixed , define the following relaxed problem (denoted by -problem for short),

Formula (15) shows that , where and denote the feasible regions of the primal problem and the relaxed problem, respectively. In addition, as long as . Thus,where the limit is in the sense of Painlevé–Kuratowski convergence.

Proposition 1. A vector is an optimal solution of primal problem (P) if and only if is an optimal solution of -problem for all , where .

Proof. “⇒.” The construction of ensures that . Hence, for ,i.e., is a feasible solution of the -problem. Since is an optimal solution of the primal problem, is an optimal solution of the -problem due to by (15).
“⇐.” Let be an optimal solution of the primal problem. Take , where and . Then, due to the monotonicity of with respect to . By assumption, is an optimal solution of the -problem. Since and is an optimal solution of the primal problem, is also an optimal solution of the primal problem.
Denote by and the optimal solution sets of (3) and (18), respectively. Following a similar argument to the one above, we obtain the following result.

Corollary 1. There exists such that for all .

By introducing the slack variables and , the problem (18) can be rewritten equivalently aswhere stands for the unit -ball, i.e., . According to the separation theorem for convex sets, the set can be described as an intersection of infinitely many half-spaces, i.e.,

Define

Notice thatwhere denotes the optimal value of (18). Replacing in (24) by a polytope yields a relaxation of , called , i.e.,

The following lemma asserts that the polytope can approximate to any level of accuracy, as long as is chosen suitably.

Lemma 1. (see [25], Corollary 6.5.2). For any , there exists a polytope approximation of satisfying and
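Lemma 1 is a classical polytope-approximation result. In the plane, for instance, a regular m-gon inscribed in the unit disc has Hausdorff distance 1 − cos(π/m) from the disc, which can be driven below any prescribed ε by taking m large enough. The following is an illustrative sketch of this two-dimensional case, not the construction used in [25]:

```python
import numpy as np

def inscribed_ngon_gap(m):
    """Hausdorff distance between the unit disc and an inscribed regular
    m-gon: the worst point is the midpoint of an edge, which lies at
    distance cos(pi/m) from the origin."""
    return 1.0 - np.cos(np.pi / m)

for m in (8, 32, 128):
    print(m, inscribed_ngon_gap(m))

# The gap shrinks monotonically, so any accuracy eps > 0 is achievable
# by doubling the number of vertices until the gap is small enough.
eps = 1e-3
m = 8
while inscribed_ngon_gap(m) > eps:
    m *= 2
print("m needed for eps=1e-3:", m)
```

The price of higher accuracy is a polytope with more facets, which is why the number of half-spaces in (27) depends on the prescribed accuracy.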

In the remainder of the paper, we fix and choose a polytope such that and (26) is satisfied. The polytope can be described as an intersection of a finite number of half-spaces:where for are unit vectors (i.e., ) and is an integer. For convenience in the following analysis, we further add the half-spacesto , where is the -th column of the identity matrix. This yields the following polytope:

Denote by the collection of the vectors and in , i.e.,

Clearly, still satisfies (26), i.e.,

Let and let be the matrix with column vectors in . Thus, can be written aswhere is the vector of ones in .

By replacing by the above , we obtain the following approximation of (3):and the solution set of (33) iswhere denotes the optimal value of (33). Since , then

3. Stability Analysis

The concept of range space property (RSP for short) was first introduced in [15] to develop a necessary and sufficient condition for uniform recovery of sparse signals via -minimization. It was extended in [26] to the weak RSP for developing the stability theory of convex optimization algorithms. Recently, the restricted RSP (RRSP) was introduced in [16, 25] to develop a sign recovery condition for sparse signals from 1-bit measurements.

Definition 1. (weak RSP). Given a matrix , the transposed matrix is said to possess the weak RSP of order , if for any two disjoint sets with , there exists a vector such thatTo investigate the stability of 1-bit compressed sensing with noisy constraints, the notion of weak RSP needs to be extended to the following restricted weak RSP with respect to .

Definition 2. (restricted weak RSP with respect to ). Given matrices , , and , the pair is said to satisfy the restricted weak RSP of order with respect to , if for any disjoint subsets , of with , there exists such thatwhere , , and

Theorem 1. Let and be given matrices and . Suppose that, for any given vector , the following holds: for any satisfying , there is a solution ofwhere , , and are the submatrices of whose rows correspond to the index sets , , and , such that

Here, is a constant dependent only on the problem data . Then, must satisfy the restricted weak RSP of order with respect to .

Proof. Let be any pair of disjoint subsets of with . To prove that satisfies the restricted weak RSP of order with respect to , it suffices to show that there exists a vector such thatwhere , , andTake a -sparse vector in . DefineLet . By assumption, there is a solution of (39) such thatSince is -sparse, then , which in turn implies . So, . This, together with (43), implies thatSince is a solution of the linear programming problem (39), the KKT conditions hold; i.e., there exist and such thatwhere is the subdifferential of the -norm at , i.e.,Hence, (46) ensures thatThis together with (45) means that satisfies (42). Since and are arbitrary disjoint subsets of with , we conclude that satisfies the restricted weak RSP of order with respect to .
We now show that the restricted weak RSP with respect to is a sufficient condition for (3) to be stable. First, for the approximation problem (33), we introduce variables to obtain the following equivalent form:Recall that the solution set of (49) is given in (34). The above optimization problem is a linear programming problem, and its dual problem can be written asAccording to the duality theory of linear programming, the solutions of (49) can be characterized by the KKT conditions.

Lemma 2. is a solution to the problem (33) if and only if , where

For the convenience of notations, the set in (51) can be written equivalently as

The following two lemmas play a key role in establishing the stability theory for the 1-bit CS problem.

Lemma 3. (Hoffman’s error bound). Let and be given matrices and

For any vector in , there is a point such thatwhere the constant is referred to as Robinson’s constant defined by and .

Hoffman’s error bound indicates that, for a linear system , the distance from any point of the space to can be measured in terms of Robinson’s constant and the amount by which the linear system is violated at this point.
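As a toy illustration of Hoffman's bound, consider a deliberately simple system for which Robinson's constant is 1: for the feasible set {x : x ≥ 0}, the distance from any point to the set equals the norm of the violated part of the constraints.

```python
import numpy as np

# Feasible set P = {x : x >= 0}, written as the linear system -x <= 0.
# The residual of a point z is max(-z, 0) componentwise, and projecting
# z onto P just clips the negative entries, so dist(z, P) equals
# ||max(-z, 0)||_2 exactly: Hoffman's bound holds with constant 1 here.
z = np.array([1.0, -2.0, 0.5, -0.5])

projection = np.maximum(z, 0.0)            # nearest point of P to z
distance = np.linalg.norm(z - projection)  # dist(z, P)
violation = np.linalg.norm(np.maximum(-z, 0.0))

print(distance, violation)  # equal: both are sqrt(4 + 0.25)
```

For general systems the constant depends on the constraint matrices, which is exactly the role played by Robinson's constant in Lemma 3.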

Lemma 4. (see [25], Lemma 6.2.2). Let three convex compact sets , , and satisfy and . Then,

Inspired by [25, 26], we obtain the following result, which states that the restricted weak RSP with respect to is a sufficient condition for the -minimization problem (3) to be stable for sparse vector recovery.

Theorem 2. Let the problem data be given as in (3) with . Let be any prescribed small number, and let be the polytope given in (29) satisfying (26). If satisfies the restricted weak RSP of order with respect to , then for any nonzero , there is an optimal solution of (3) such thatwhere is sufficiently small, , is the Robinson constant with given in (53), andIn particular, if is a feasible solution of (3), then there is an optimal solution of (1) such that

Proof. Let be an arbitrary nonzero vector and be the fixed polytope given in (29) satisfying (26) in Lemma 1. The proof is divided into the following four steps.

Step 1. Constructing . Constructing . LetThe choice of ensuresLet be the support set of the largest components of . DefineClearly, and with . Let be the complement of . Hence, and are disjoint. Since satisfies the restricted weak RSP of order with respect to , there exists a vector such thatfor some , , andNow, we construct a dual feasible solution .
Constructing . Set , , and as follows:Hence, satisfiesConstructing . We assume, without loss of generality, that the first columns in are and the second columns of are . The component of is assigned as follows:It follows from the choice of thatConstructing . Let . Clearly, .
With the above choice of , it follows from (64)–(70) that

Step 2. Calculating , where is a solution of (33) for , and satisfies for all as required in Corollary 1.
DefineFor , where is constructed as above, Lemma 3 ensures the existence of such thatwhere is Robinson's constant determined by , given in (53). Since the vector satisfies (62) and (71), the inequality (73) can be simplified toSincewe have by (61). Therefore,It follows from (69) thatwhere the second step comes from the fact and the last step uses the fact by (64). Hence,We now bound each term on the right-hand side of the above inequality. Recall that . Therefore,where the second equality follows from (65). By using the restricted weak RSP of order with respect to , we haveHence,It then follows from (78)–(81) thatwhich together with (74) and (76) implies

Step 3. Calculating , where is a solution of (3). Recall the three sets , , and , where and are the solution sets of (18) and (33) (cf. (24) and (34)) and is given in (25) with . Clearly, . Let denote the projection of onto , i.e., . Since and by (35), applying Lemma 4 with and , the definition of , and the fact by Corollary 1 yieldswhich together with by Lemma 1 impliesThis combined with inequality (83) gives

Step 4. Calculating . Note first that due to . Consider the following two cases:
(i) If , since , then for some . Hence,
(ii) If , let as . Then,which implies , since the eigenvalues of are 0 and 1 with multiplicity . Thus,where the last inequality is due to the fact that, for any ,Combining (87) and (89) yieldswhereThis together with (86) results in (58).
If is a feasible solution of (3), then andas is sufficiently small, which further impliesWe now show that the restricted weak RSP with respect to is also a sufficient condition for the -minimization problem when no noise exists, i.e., . It should be noticed that, in this case, the constraint is linear, and hence it is unnecessary to introduce a polytope. Thus, problem (3) and its relaxed problem (49) reduce toThe dual problem is given asSimilarly, by the duality theory of linear programming, is a solution to problem (96) if and only if there exists , whereThe set can be written equivalently aswhere , ,Following a similar argument to the one given in Theorem 2, we obtain the following result.

Theorem 3. Let the problem data be given as in (95), with the matrix of full row rank. If satisfies the restricted weak RSP of order with respect to , then for any , there is an optimal solution of (3) such thatwhere is sufficiently small, , and is the Robinson constant with given in (100). In particular, if is a feasible solution of (3), then there is an optimal solution of (3) such that

The following result shows that the restricted weak RSP with respect to is the mildest condition ensuring the stability of the -minimization problem for any given measurement vector .

Corollary 2. Let the problem data be given as in (95) and let be a matrix with full row rank. Then, the 1-bit CS problemis stable for all if and only if satisfies the restricted weak RSP of order with respect to .

Proof. Following the argument given in Theorem 2, we know that the restricted weak RSP of order with respect to is a sufficient condition for the -minimization problem (104) to be stable.
Conversely, Theorem 1 shows that if the -minimization problem is stable for any given , then the matrix must satisfy the restricted weak RSP of order with respect to .

4. Conclusions

In this paper, the stability theory for 1-bit CS with a quadratic constraint is established. The analysis essentially relies on the duality theory of linear programming, Hoffman's error bound, and the fact that the ball constraint in the Euclidean norm can be approximated by polytopes to any level of accuracy. An interesting and challenging topic is to further study the stability theory for 1-bit CS with other norms, e.g., the -norm, particularly as . In this case, the nonconvex structure of the -norm requires us to adopt error bound theory (also called metric subregularity) for nonlinear systems, instead of the linear systems used in this paper.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11771255 and 11801325), Young Innovation Teams of Shandong Province (2019KJI013), Program of Science and Technology Activities for Overseas Students in Henan Province in 2020, and Nanhu Scholars Program for Young Scholars of Xinyang Normal University.