Abstract

The principal component pursuit with reduced linear measurements (PCP_RLM) has attracted great attention in applications such as machine learning, video processing, and the alignment of multiple images. Recent research shows that strongly convex optimization for compressive principal component pursuit can guarantee exact recovery of both the low-rank matrix and the sparse matrix. In this paper, we prove that the operator of PCP_RLM satisfies the restricted isometry property (RIP) with high probability. In addition, based on the RIP, we derive a bound on the penalty parameter that depends only on observed quantities, which guides the choice of suitable parameters in the strongly convex programming.

1. Introduction

Matrix completion is a signal processing technique which recovers an unknown low-rank or approximately low-rank matrix from a sampling of its entries. Recently, this technique has been applied in many fields, such as medicine [1], imaging [2], seismology [3], and computer vision [4–6]. The data produced in practice are often of very high dimension; there is therefore a pressing need for efficient and effective tools to process, analyze, and extract useful information from a small set of linear measurements of the high-dimensional data. This problem has been studied in detail in recent papers [7–13]. Candès et al. [7–9] address the large-scale problem of recovering an unknown low-rank matrix and a sparse matrix from the complete entries of their superposition. As extensions of these works, Ganesh et al. [10, 11] address the problem of decomposing a superposition of a low-rank matrix and a sparse matrix when relatively few linear measurements are available.

These fundamental results have had a great impact on engineering and applied mathematics. However, these results are limited to convex optimization; that is, nuclear norm based convex optimization leads to exact low-rank matrix recovery under suitable conditions. It is well known that strongly convex optimization has many intrinsic advantages, such as the uniqueness of the optimal solution. In particular, strongly convex optimization has been widely used in designing efficient and effective algorithms in the literature on compressive sensing and low-rank matrix recovery. Zhang et al. [12] extended these results, proving that strongly convex optimization can also guarantee exact low-rank matrix recovery. A bound of the penalty parameter $\tau$ of the strongly convex programming for robust principal component analysis was provided in [14].

Pertaining to the strongly convex optimization for compressive principal component pursuit, we studied it in detail in our former work [15], in which we proved that, under suitable conditions, the corresponding strongly convex optimization also recovers the low-rank matrix and the sparse matrix exactly. However, the bound of the parameter $\tau$ given there strongly depends on the Frobenius norm $\|L_0 + S_0\|_F$ of the data, which is not known in practice, because the only data we can use is $\mathcal{P}_Q(L_0 + S_0)$. In this paper, we give an easy bound of $\tau$ depending only on $\|\mathcal{P}_Q(L_0 + S_0)\|_F$.

1.1. Contents and Notations

For convenience, we give here a brief summary of the notations used throughout the paper. We denote the operator norm of a matrix $X$ by $\|X\|$, the Frobenius norm by $\|X\|_F$, the nuclear norm by $\|X\|_*$, and the $\ell_1$ norm (of $X$ seen as a long vector) by $\|X\|_1$. We will bound the Euclidean inner product between two matrices, which is defined by the formula $\langle X, Y \rangle = \operatorname{trace}(X^* Y)$. The Cauchy-Schwarz inequality gives $\langle X, Y \rangle \le \|X\|_F \|Y\|_F$, and in particular $\langle X, Y \rangle \le \|X\| \, \|Y\|_*$ and $\langle X, Y \rangle \le \|X\|_\infty \|Y\|_1$. We will also bound linear transformations which act on the space of matrices, and we denote these operators by calligraphic letters such as $\mathcal{P}_Q$. The norm of an operator $\mathcal{A}$ is denoted by $\|\mathcal{A}\|$; that is, $\|\mathcal{A}\| = \sup_{\|X\|_F = 1} \|\mathcal{A}X\|_F$. We denote by $\Omega$ the support of $S_0$. By a slight abuse of notation, we also represent by $\Omega$ the subspace of matrices whose support is contained in the support of $S_0$. $\Omega \sim \operatorname{Ber}(\rho)$ means that each entry of the matrix belongs to the support of the sparse matrix independently with probability $\rho$.
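To make the notation concrete, the following minimal numpy sketch (ours, not part of the original paper) evaluates these norms for random matrices and checks the quoted Cauchy-Schwarz and duality inequalities; the matrix size 50 is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 50))
Y = rng.standard_normal((50, 50))

op_norm  = np.linalg.norm(X, 2)        # operator norm ||X||: largest singular value
fro_norm = np.linalg.norm(X, "fro")    # Frobenius norm ||X||_F
nuc_norm = np.linalg.norm(X, "nuc")    # nuclear norm ||X||_*: sum of singular values
l1_norm  = np.abs(X).sum()             # entrywise l1 norm ||X||_1

inner = np.trace(X.T @ Y)              # Euclidean inner product <X, Y>
assert inner <= fro_norm * np.linalg.norm(Y, "fro")   # Cauchy-Schwarz
assert inner <= op_norm * np.linalg.norm(Y, "nuc")    # operator / nuclear duality
assert inner <= np.abs(X).max() * np.abs(Y).sum()     # l_inf / l_1 duality
```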

1.2. Basic Problem and Main Result

In this paper, we address only the strongly convex programming of principal component pursuit with reduced linear measurements, which was considered in [10]. This problem has the following formula:
\[
\min_{L, S} \ \|L\|_* + \lambda \|S\|_1 + \frac{1}{2\tau}\|L\|_F^2 + \frac{1}{2\tau}\|S\|_F^2 \quad \text{subject to} \quad \mathcal{P}_Q(L + S) = \mathcal{P}_Q(L_0 + S_0), \tag{1}
\]
where $\tau$ is some positive penalty parameter, $\mathcal{P}_Q$ is the orthogonal projection onto the linear subspace $Q$, and $Q$ is distributed according to the Haar measure on the Grassmannian of $q$-dimensional subspaces of $\mathbb{R}^{n \times n}$. Suppose that $Q_1, \dots, Q_q$ form a standard orthogonal basis of $Q$; that is, $(Q_1, \dots, Q_q)$ is identical in distribution to the orthonormalization of $(G_1, \dots, G_q)$, where $G_i \in \mathbb{R}^{n \times n}$ for $1 \le i \le q$, whose entries are i.i.d. according to the standard normal distribution. More detail is given in [10].
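The random subspace model is easy to simulate. The sketch below (our illustration; the function names and sizes are ours) draws $q$ i.i.d. Gaussian matrices, orthonormalizes them to obtain a basis of $Q$, and applies the projection $\mathcal{P}_Q$; for a Haar-distributed $q$-dimensional subspace, $\mathbb{E}\,\|\mathcal{P}_Q X\|_F^2 = (q/n^2)\,\|X\|_F^2$ for any fixed $X$.

```python
import numpy as np

def random_subspace_basis(n, q, rng):
    """Orthonormal basis of a Haar-distributed q-dim subspace of R^(n x n),
    obtained by orthonormalizing q vectorized i.i.d. Gaussian matrices."""
    G = rng.standard_normal((n * n, q))   # columns are vec(G_1), ..., vec(G_q)
    B, _ = np.linalg.qr(G)                # columns are vec(Q_1), ..., vec(Q_q)
    return B

def project_Q(B, X):
    """Orthogonal projection P_Q(X) onto the subspace spanned by B's columns."""
    x = X.reshape(-1)
    return (B @ (B.T @ x)).reshape(X.shape)

rng = np.random.default_rng(1)
n, q = 40, 600
B = random_subspace_basis(n, q, rng)
X = rng.standard_normal((n, n))
ratio = np.linalg.norm(project_Q(B, X), "fro") ** 2 / np.linalg.norm(X, "fro") ** 2
print(ratio, q / n ** 2)   # the two numbers should be close
```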

When $\tau = +\infty$ in problem (1), so that the strongly convex terms vanish, we have the following existence and uniqueness theorem.

Theorem 1 (see [10]). Fix any $C_Q \ge 1$, and let $Q$ be a $q$-dimensional random subspace of $\mathbb{R}^{n \times n}$; suppose that $L_0$ obeys the incoherence condition with parameter $\mu$ and that $\operatorname{supp}(S_0) \sim \operatorname{Ber}(\rho)$. Then, with high probability, the solution of problem (1) with $\tau = +\infty$ and $\lambda = 1/\sqrt{n}$ is exact; that is, $\hat{L} = L_0$ and $\hat{S} = S_0$, provided that
\[
\operatorname{rank}(L_0) \le c_r \frac{n}{\mu \log^2 n}, \qquad \rho \le \rho_s, \qquad q \ge C_Q \big(\rho n^2 + n \operatorname{rank}(L_0)\big) \log^2 n,
\]
where $c_r$, $\rho_s$, and $C_Q$ are positive numerical constants.

Remark 2. The value of $\rho_s$ is a fixed positive numerical constant and should obey some suitable conditions, one of which is that it is less than $1$. The detailed derivation has been well studied (see, e.g., [8, 10] and the many references therein). "With high probability" means with probability at least $1 - cn^{-10}$, with $c$ a positive numerical constant.
When $\tau < +\infty$ in problem (1), we have the following existence and uniqueness theorem.

Lemma 3 (see [14, 15]). Assuming that $\tau$ is sufficiently large (the bound given in [14, 15] is proportional to $\|L_0 + S_0\|_F$), under the other assumptions of Theorem 1, $(L_0, S_0)$ is the unique solution to the strongly convex programming (1) with high probability.

Pertaining to the projection operator $\mathcal{P}_Q$, we obtain the following restricted isometry property for the operator restricted to superpositions of low-rank matrices and sparse matrices.

Theorem 4. Suppose that $L_0$ is a low-rank matrix with rank $r$, $S_0$ is a sparse matrix with $\operatorname{supp}(S_0) \sim \operatorname{Ber}(\rho)$, and assume that the conditions of Theorem 1 hold. Then, with high probability, one has
\[
c_1 \|L_0 + S_0\|_F \le \|\mathcal{P}_Q(L_0 + S_0)\|_F \le \|L_0 + S_0\|_F,
\]
where $c_1$ is a positive numerical constant.

According to Lemma 3 and Theorem 4, we can easily obtain the following theorem.

Theorem 5. Suppose that $L_0$ is a low-rank matrix with rank $r$ and $S_0$ is a sparse matrix with $\operatorname{supp}(S_0) \sim \operatorname{Ber}(\rho)$. If $\tau \ge c\,\|\mathcal{P}_Q(L_0 + S_0)\|_F$ for a suitable numerical constant $c$, then, under the other assumptions of Theorem 4, $(L_0, S_0)$ is the unique solution to the strongly convex programming (1) with high probability.

It is obvious that this bound of $\tau$ depends only on $\|\mathcal{P}_Q(L_0 + S_0)\|_F$, which is obtained very easily in practice.
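As a purely illustrative sketch of this practical consequence (the numerical constant c below is a placeholder of ours, not the constant established in the paper), choosing $\tau$ from the observed measurements alone could look like this:

```python
import numpy as np

def choose_tau(PQ_M, c=10.0):
    """Set tau from the observed projection P_Q(L0 + S0) only.
    The constant c is hypothetical; the paper's constant is not reproduced here."""
    return c * np.linalg.norm(PQ_M, "fro")
```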

1.3. Contributions and Organization

In this paper, we first prove that, although the operator $\mathcal{P}_Q$ does not satisfy the restricted isometry property (RIP) on the entire space, it obeys the RIP with high probability when restricted to superpositions in which the rank of the low-rank matrix is not large and the sparse matrix is sufficiently sparse. Second, we use the restricted isometry property to improve the bound of the parameter $\tau$ so that it depends only on $\|\mathcal{P}_Q(L_0 + S_0)\|_F$.

The main body of the paper is devoted to the proof of Theorem 4. The rest of the paper is organized as follows. In Section 2, we provide some important lemmas that are key ingredients of our main result and then give the proof of Theorem 4. Conclusions and further work are discussed in Section 3.

2. The Proof of Theorem 4

In this section, we provide the proof of Theorem 4. We first bound the behavior of the operator $\mathcal{P}_Q$ acting on the space of sparse matrices and on the space of low-rank matrices, respectively. Then, combining these two bounds, we obtain Theorem 4.

2.1. Bounding the Behavior of $\mathcal{P}_Q S_0$

We will provide some important lemmas, which are used in the main theorem.

Lemma 6 (see [10, Lemma 6]). Let $Q$ be a linear subspace distributed according to the random subspace model described earlier. Then, for any fixed matrix $X$, the following bound holds with high probability.

Bernstein's Inequality [16, 17]. Assume that $X_1, \dots, X_N$ are independent zero-mean random variables with $|X_i| \le M$ for all $i$. Furthermore, suppose that $\sigma^2 = \sum_{i=1}^N \mathbb{E}[X_i^2]$. Then, for all $t > 0$,
\[
\mathbb{P}\left(\left|\sum_{i=1}^N X_i\right| \ge t\right) \le 2 \exp\left(-\frac{t^2/2}{\sigma^2 + Mt/3}\right).
\]

Lemma 7. If $\Omega \sim \operatorname{Ber}(\rho)$, then, for any $0 < \varepsilon \le 1$, with high probability
\[
|\Omega| \le (1 + \varepsilon)\rho n^2.
\]

Proof. Let $\delta_{ij}$ be the indicator variables with $\delta_{ij} = 1$ if $(i, j) \in \Omega$ and $\delta_{ij} = 0$ otherwise. For $\Omega \sim \operatorname{Ber}(\rho)$, we have $\mathbb{E}[\delta_{ij}] = \rho$. Let $X_{ij} = \delta_{ij} - \rho$ and $\sigma^2 = \sum_{ij} \mathbb{E}[X_{ij}^2] = \rho(1 - \rho)n^2$. For $\mathbb{E}[X_{ij}] = 0$ and $|X_{ij}| \le 1$, it is obvious that $\{X_{ij}\}$ satisfies the assumptions of Bernstein's inequality. Note that
\[
|\Omega| = \sum_{ij} \delta_{ij} = \rho n^2 + \sum_{ij} X_{ij}.
\]
According to Bernstein's inequality, letting $t = \varepsilon \rho n^2$, we have
\[
\mathbb{P}\Big(\big||\Omega| - \rho n^2\big| \ge \varepsilon \rho n^2\Big) \le 2 \exp\left(-\frac{\varepsilon^2 \rho^2 n^4 / 2}{\rho(1-\rho)n^2 + \varepsilon \rho n^2/3}\right) \le 2 \exp\left(-\frac{3}{8}\varepsilon^2 \rho n^2\right).
\]
Combining with $|\Omega| = \rho n^2 + \sum_{ij} X_{ij}$, we have $|\Omega| \le (1 + \varepsilon)\rho n^2$ with high probability.
In other words, $|\Omega| \le (1 + \varepsilon)\rho n^2$ holds with probability at least $1 - 2\exp(-\tfrac{3}{8}\varepsilon^2 \rho n^2)$. Note that this failure probability decays exponentially in $n$. That means $\varepsilon$ can take a very small value when $n$ is very large.
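A quick Monte Carlo check (our sketch; the sizes are arbitrary) of Lemma 7: since $|\Omega|$ is Binomial$(n^2, \rho)$ under the Ber$(\rho)$ model, the bound $|\Omega| \le (1 + \varepsilon)\rho n^2$ should hold in essentially every trial for moderate $n$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho, eps, trials = 200, 0.05, 0.1, 10_000
sizes = rng.binomial(n * n, rho, size=trials)       # draws of |Omega|
print((sizes <= (1 + eps) * rho * n * n).mean())    # fraction of trials obeying the bound
```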

Theorem 8(a). Suppose that $S_0$ is a sparse matrix with $\operatorname{supp}(S_0) \sim \operatorname{Ber}(\rho)$ and $Q$ is a $q$-dimensional random subspace of $\mathbb{R}^{n \times n}$. If $\rho$ is sufficiently small, then, with high probability, one has

Proof. According to the triangle inequality, we obtain a chain of estimates. In the third inequality, we have used Lemma 6. In the fourth inequality, we have used the Cauchy-Schwarz inequality and Lemma 7. Combining these estimates, we can obtain the conclusion of Theorem 8(a).

Theorem 8(b). Suppose that $S_0$ is a sparse matrix with $\operatorname{supp}(S_0) \sim \operatorname{Ber}(\rho)$ and $Q$ is a $q$-dimensional random subspace of $\mathbb{R}^{n \times n}$. If $\rho$ is sufficiently small, then, with high probability, one has

Proof. For the upper bound, we have $\|\mathcal{P}_Q S_0\|_F \le \|S_0\|_F$ directly, since $\mathcal{P}_Q$ is an orthogonal projection. On the other hand, we have

2.2. Bounding the Behavior of $\mathcal{P}_Q L_0$

Lemma 9 (see [7, 18]). Suppose that $X$ is distributed as a chi-squared random variable with $d$ degrees of freedom. Then, for each $t > 0$,
\[
\mathbb{P}\big(X - d \ge 2\sqrt{dt} + 2t\big) \le e^{-t}, \qquad \mathbb{P}\big(d - X \ge 2\sqrt{dt}\big) \le e^{-t}.
\]
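The tail bound is easy to probe numerically. The sketch below (ours; the parameter choices are arbitrary) compares the empirical upper tail of a chi-squared variable with the bound $e^{-t}$, using the first inequality of Lemma 9 as stated above.

```python
import numpy as np

rng = np.random.default_rng(3)
d, t, trials = 100, 3.0, 200_000
X = rng.chisquare(d, size=trials)                      # chi-squared samples, d dof
empirical = (X - d >= 2 * np.sqrt(d * t) + 2 * t).mean()
print(empirical, np.exp(-t))   # the empirical tail should sit below exp(-t)
```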

Lemma 10. With high probability, one has

Proof. According to the definition of $\mathcal{P}_Q$, we obtain the corresponding expansion, in which one term remains to be bounded. It is easy to note that this term is distributed as a chi-squared random variable with the appropriate number of degrees of freedom. Applying Lemma 9 with a suitable choice of $t$, we obtain the stated estimate, where we have used (19). Equation (22) tells us that the bound holds with high probability. Therefore, Lemma 10 is established.

Lemma 11 (see [8, 9]). Assume that $\Omega \sim \operatorname{Ber}(\rho)$. If $1 - \rho \ge C_0 \varepsilon^{-2} \mu r (\log n)/n$ for a numerical constant $C_0$, then with high probability
\[
\|\mathcal{P}_\Omega \mathcal{P}_T\|^2 \le \rho + \varepsilon.
\]

According to Lemma 11, if $\rho$ and $\varepsilon$ are small enough, we have the following formula with high probability:
\[
\|\mathcal{P}_\Omega \mathcal{P}_T\| \le \frac{1}{2}.
\]

Lemma 12. Under the assumptions of Theorem 1, with high probability, one has
\[
\|L_0\|_F^2 + \|S_0\|_F^2 \le 2\,\|L_0 + S_0\|_F^2.
\]

Proof. Note that $L_0 = \mathcal{P}_T L_0$ and $S_0 = \mathcal{P}_\Omega S_0$, and we have
\[
|\langle L_0, S_0 \rangle| = |\langle \mathcal{P}_T L_0, \mathcal{P}_\Omega S_0 \rangle| = |\langle \mathcal{P}_\Omega \mathcal{P}_T L_0, S_0 \rangle| \le \|\mathcal{P}_\Omega \mathcal{P}_T\| \, \|L_0\|_F \, \|S_0\|_F \le \frac{1}{2}\|L_0\|_F \|S_0\|_F.
\]
In the first inequality, we have used Lemma 11. In the second equality, we have used $S_0 = \mathcal{P}_\Omega S_0$, because $S_0 \in \Omega$. It is obvious that
\[
\|L_0 + S_0\|_F^2 = \|L_0\|_F^2 + \|S_0\|_F^2 + 2\langle L_0, S_0 \rangle \ge \|L_0\|_F^2 + \|S_0\|_F^2 - \|L_0\|_F \|S_0\|_F.
\]
Therefore,
\[
\|L_0 + S_0\|_F^2 \ge \frac{1}{2}\big(\|L_0\|_F^2 + \|S_0\|_F^2\big).
\]
And thus, we can obtain $\|L_0\|_F^2 + \|S_0\|_F^2 \le 2\|L_0 + S_0\|_F^2$. Lemma 12 is established.

Next, we will bound the behavior of $\mathcal{P}_Q L_0$, based on Lemmas 9 to 12.

Theorem 13(a). Suppose that $L_0$ is a rank-$r$ matrix and $Q$ is a $q$-dimensional random subspace of $\mathbb{R}^{n \times n}$. If $q$ is sufficiently large with respect to $n$ and $r$, then, with high probability, one has

As pointed out by one reviewer, Lemma 15 in [10] is similar to Theorem 13(a). Lemma 15 in [10] tells us that (30) is established under a condition that strongly depends on the incoherence parameter of the subspace $Q$, which is defined via the existence of an orthonormal basis for $Q$ satisfying a uniform entrywise bound and which is very difficult to compute. The condition of Theorem 13(a) is based only on $n$, $q$, and $r$, which are easy to determine.

Proof. According to the definition of $\mathcal{P}_Q$, we have
\[
\mathcal{P}_Q L_0 = \sum_{i=1}^q \langle Q_i, L_0 \rangle Q_i, \tag{31}
\]
where $Q_1, \dots, Q_q$ are independent and identically distributed to the basis mentioned before. According to (31), it is necessary to bound $\langle Q_i, L_0 \rangle$. Suppose that $U \Sigma V^*$ is the singular value decomposition of the matrix $L_0$; that is, $L_0 = U \Sigma V^*$. Therefore, according to the triangle inequality, we obtain a chain of estimates in which the vectors involved are the $i$th rows of $U$ and $V$, respectively. In the second inequality, we have used the Cauchy-Schwarz inequality. In the last inequality, we have used Lemma 10. Let $Z_i = \langle G_i, L_0 \rangle / \|L_0\|_F$; note that $Z_i$ is a standard normal variable, where we have used the fact that the entries of $G_i$ are i.i.d. standard normal and that a linear form in Gaussian variables is Gaussian. In other words, $Z_i \sim \mathcal{N}(0, 1)$. Let $W = \sum_{i=1}^q Z_i^2$. It is obvious that $W$ is distributed as a chi-squared random variable with $q$ degrees of freedom.
According to Lemma 9, letting $t$ be chosen suitably, we can obtain the corresponding tail estimate. It implies that, with high probability, the deviation of $W$ from $q$ is small. That is, with high probability, (36) holds. Combining (31), (32), and (36), with high probability, we can obtain the stated bound, where we have used the elementary inequality $\sqrt{a + b} \le \sqrt{a} + \sqrt{b}$, valid if $a, b \ge 0$. That is, Theorem 13(a) is established.
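Numerically, the behavior asserted by Theorem 13(a) is easy to observe. The sketch below (ours; the parameter choices are arbitrary) draws a Haar subspace and a random rank-$r$ matrix and prints the ratio $\|\mathcal{P}_Q L_0\|_F / \|L_0\|_F$, reusing the construction from the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n, q, r = 40, 800, 2
B, _ = np.linalg.qr(rng.standard_normal((n * n, q)))             # basis of Q
L0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # random rank-r matrix
PQ_L0 = (B @ (B.T @ L0.reshape(-1))).reshape(n, n)               # P_Q(L0)
print(np.linalg.norm(PQ_L0, "fro") / np.linalg.norm(L0, "fro"))
```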

Theorem 13(b). Suppose that $L_0$ is a $\mu$-incoherent matrix and $Q$ is a $q$-dimensional random subspace of $\mathbb{R}^{n \times n}$. If $q$ is sufficiently large, then, with high probability, one has

Proof. The main idea of the proof follows the arguments of Theorem 8(b).

Proof of Theorem 4. According to Theorems 8 and 13 and the assumption of Lemma 3, we obtain the bounds for the sparse and low-rank parts. Therefore, the claimed two-sided estimate follows. Then, Theorem 4 is established.
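An end-to-end numerical probe (our sketch, not the paper's experiment) of the restricted isometry claim of Theorem 4: for a random low-rank plus sparse pair, the ratio $\|\mathcal{P}_Q(L_0 + S_0)\|_F / \|L_0 + S_0\|_F$ stays bounded away from zero.

```python
import numpy as np

rng = np.random.default_rng(5)
n, q, r, rho = 40, 800, 2, 0.02
B, _ = np.linalg.qr(rng.standard_normal((n * n, q)))              # basis of Q
L0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))    # rank-r component
S0 = rng.standard_normal((n, n)) * (rng.random((n, n)) < rho)     # Ber(rho) sparse component
M = L0 + S0
PQ_M = (B @ (B.T @ M.reshape(-1))).reshape(n, n)                  # P_Q(L0 + S0)
print(np.linalg.norm(PQ_M, "fro") / np.linalg.norm(M, "fro"))
```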

3. Conclusion

In this paper, we address the problem of principal component pursuit with reduced linear measurements (PCP_RLM). In order to obtain an easily handled bound of $\tau$, we prove that the operator $\mathcal{P}_Q$ obeys the RIP with high probability if the low-rank matrix satisfies the incoherence condition and the sparse matrix is sufficiently sparse. Then, based on the RIP of the operator $\mathcal{P}_Q$, we provide a bound on the parameter $\tau$ depending only on observed quantities. In the future, we will try to analyze the robustness of the optimal solution of the strongly convex programming (1) by means of the RIP.

Acknowledgments

The authors would like to thank the anonymous reviewers for their comments that helped to improve the quality of the paper. This research was supported by the National Natural Science Foundation of China (NSFC) under Grant 61172140, and “985” Key Projects for the excellent teaching team support (postgraduate) under Grant A1098522-02. Haiwen Xu is supported by Joint Fund of the National Natural Science Foundation of China and the Civil Aviation Administration of China (Grant no. U1233105).