Abstract and Applied Analysis

Volume 2014, Article ID 765782, 8 pages

http://dx.doi.org/10.1155/2014/765782

## Linear Total Variation Approximate Regularized Nuclear Norm Optimization for Matrix Completion

^{1}Laboratory of Image Science and Technology, Southeast University, Nanjing 210096, China

^{2}Centre de Recherche en Information Médicale Sino-français (CRIBs), France

^{3}INSERM, U1099, Rennes 35000, France

^{4}Université de Rennes 1, LTSI, Rennes 35042, France

Received 15 February 2014; Accepted 7 May 2014; Published 28 May 2014

Academic Editor: Zhiwu Liao

Copyright © 2014 Xu Han et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Matrix completion, which estimates missing values in visual data, is an important topic in computer vision. Most recent studies have focused on low rank matrix approximation via the nuclear norm. However, visual data such as images are rich in texture, which may not be well approximated by a low rank constraint. In this paper, we propose a novel matrix completion method that combines the nuclear norm with a local geometric regularizer to solve the matrix completion problem for redundant texture images. We mainly consider one of the most common graph regularizers: the total variation norm, a widely used measure for enforcing intensity continuity and recovering a piecewise smooth image. Experimental results show that the proposed method obtains encouraging results on real texture images compared with state-of-the-art methods.

#### 1. Introduction

The problem of matrix completion, which can be seen as an extension of the recently developed compressed sensing (CS) theory [1–3], plays an important role in the field of signal and image processing [4–11]. This problem occurs in many real applications in computer vision and pattern recognition, such as image inpainting [12, 13], video denoising [14], and recommender systems [15, 16]. Reconstruction algorithms for matrix completion have received much attention. Cai et al. [17] proposed the singular value thresholding (SVT) algorithm for matrix completion and related nuclear norm minimization problems. In [18], a simple and fast singular value projection (SVP) algorithm for rank minimization with affine constraints is proposed. Keshavan et al. [19] dealt with matrix completion based on singular value decomposition followed by local manifold optimization. In order to achieve a better approximation of the rank of a matrix, Hu et al. [11] presented an approach based on the truncated nuclear norm regularization (TNNR), which is defined by the difference between the nuclear norm and the sum of the largest few singular values. Since most existing matrix completion models aim to solve the low rank optimization via the nuclear norm, we recall this model here. For an incomplete matrix $M \in \mathbb{R}^{m \times n}$, the model can be described as follows:
$$\min_{X} \ \operatorname{rank}(X) \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i, j) \in \Omega, \tag{1}$$
where $X, M \in \mathbb{R}^{m \times n}$ and $\Omega$ is the set of locations corresponding to the observed entries.

Unfortunately, the rank minimization problem in (1) is NP-hard, so the following convex relaxation is widely used:
$$\min_{X} \ \|X\|_{*} \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i, j) \in \Omega, \tag{2}$$
where $\|X\|_{*}$ is the nuclear norm given by
$$\|X\|_{*} = \sum_{i=1}^{\min(m, n)} \sigma_{i}(X), \tag{3}$$
where $\sigma_{i}(X)$ denotes the $i$th largest singular value of $X$.
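As a quick illustration of the nuclear norm in (3), the following sketch (using NumPy; the function name is ours, not the paper's code) computes it as the sum of singular values:

```python
import numpy as np

def nuclear_norm(X):
    """Sum of singular values of X, i.e. the nuclear norm in (3)."""
    return np.linalg.svd(X, compute_uv=False).sum()

# A rank-1 matrix u v^T has a single nonzero singular value ||u|| * ||v||.
u = np.array([[1.0], [2.0]])
v = np.array([[3.0, 4.0]])
X = u @ v
print(nuclear_norm(X))  # equals sqrt(5) * 5
```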

In this paper, our objective is to exploit the intrinsic geometry of the data distribution and incorporate it as an additional regularization term to deal with images that are rich in texture. The total variation (TV) norm has demonstrated its usefulness as a graph regularizer in the field of image processing, so we propose here a method that combines the nuclear norm with the linear TV approximate norm to solve the matrix completion problem. We call it the linear total variation approximate regularized nuclear norm (LTVNN) minimization problem. This combined optimization problem is solved by a simple and efficient scheme based on the alternating direction method of multipliers (ADMM) [20, 21].

The paper is organized as follows. In the next section, we introduce the proposed LTVNN model and we describe the optimization schemes. In Section 3, we establish the convergence results for the iterations given in Section 2. Experimental results on a set of images are provided in Section 4. Finally, we draw some conclusions in Section 5.

#### 2. Proposed Method

##### 2.1. Some Preliminaries

The total variation along the vertical and horizontal directions can be described as
$$TV_{r}(X) = \sum_{i=1}^{m-1} \sum_{j=1}^{n} |X_{i+1,j} - X_{i,j}|, \tag{4}$$
$$TV_{c}(X) = \sum_{i=1}^{m} \sum_{j=1}^{n-1} |X_{i,j+1} - X_{i,j}|. \tag{5}$$
So the total variation of $X$ is the summation of the magnitude of the gradient at each pixel [22]:
$$TV(X) = \sum_{i,j} \sqrt{(X_{i+1,j} - X_{i,j})^{2} + (X_{i,j+1} - X_{i,j})^{2}}. \tag{6}$$
An equivalent (anisotropic) total variation formula is as follows:
$$TV(X) = TV_{r}(X) + TV_{c}(X) = \sum_{i,j} \left( |X_{i+1,j} - X_{i,j}| + |X_{i,j+1} - X_{i,j}| \right). \tag{7}$$
Here, we use the linear total variation approximation of (7) to approximate this second kind of total variation; that is,
$$LTV(X) = \sum_{i,j} \left( (X_{i+1,j} - X_{i,j})^{2} + (X_{i,j+1} - X_{i,j})^{2} \right). \tag{8}$$
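As an illustrative sketch (ours, not the paper's MATLAB implementation), the anisotropic total variation in (7) can be computed with forward differences:

```python
import numpy as np

def tv_aniso(X):
    """Anisotropic TV: sum of absolute vertical and horizontal
    forward differences, as in (7)."""
    dv = np.abs(np.diff(X, axis=0)).sum()  # vertical differences
    dh = np.abs(np.diff(X, axis=1)).sum()  # horizontal differences
    return dv + dh

X = np.array([[0.0, 1.0],
              [2.0, 3.0]])
print(tv_aniso(X))  # |2-0| + |3-1| + |1-0| + |3-2| = 6.0
```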

##### 2.2. Proposed Model

As mentioned above, the key point of the proposed approach is the combination of the nuclear norm and the linear total variation approximate norm; therefore, the optimization problem is described as
$$\min_{X} \ (1 - \lambda) \|X\|_{*} + \lambda \, LTV(X) \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i, j) \in \Omega, \tag{9}$$
where $\lambda \in [0, 1]$ is a penalty parameter balancing the two terms, $\|X\|_{*}$ is the nuclear norm defined in (3), and $LTV(X)$ is the linear total variation norm approximation defined in (8), which can be reformulated as
$$LTV(X) = \operatorname{tr}(X^{T} D_{r}^{T} D_{r} X) + \operatorname{tr}(X D_{c} D_{c}^{T} X^{T}) = \|D_{r} X\|_{F}^{2} + \|X D_{c}\|_{F}^{2}, \tag{10}$$
where "$\operatorname{tr}$" means the trace of the matrix, $\|\cdot\|_{F}$ denotes the Frobenius norm of the matrix, and $D_{r} \in \mathbb{R}^{m \times m}$ and $D_{c} \in \mathbb{R}^{n \times n}$ are, respectively, the row and column difference transform matrices given by
$$D_{r} = \begin{pmatrix} -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \\ & & & 0 \end{pmatrix}, \qquad D_{c} = \begin{pmatrix} -1 & & & \\ 1 & \ddots & & \\ & \ddots & -1 & \\ & & 1 & 0 \end{pmatrix}. \tag{11}$$
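A sketch of one plausible construction of such a forward-difference matrix, checking that the Frobenius-norm form matches the trace form; the function name and the zeroed-last-row convention are ours:

```python
import numpy as np

def diff_matrix(n):
    """Forward-difference operator: (D @ x)[i] = x[i+1] - x[i],
    with the last row zeroed (no difference for the last entry)."""
    D = np.eye(n, k=1) - np.eye(n)
    D[-1, :] = 0.0
    return D

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
D = diff_matrix(4)

# The Frobenius form ||D X||_F^2 equals the trace form tr(X^T D^T D X).
lhs = np.linalg.norm(D @ X, 'fro') ** 2
rhs = np.trace(X.T @ D.T @ D @ X)
print(abs(lhs - rhs) < 1e-10)  # True
```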

So, the problem in (9) can be rewritten as
$$\min_{X} \ (1 - \lambda) \|X\|_{*} + \lambda \left( \|D_{r} X\|_{F}^{2} + \|X D_{c}\|_{F}^{2} \right) \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i, j) \in \Omega. \tag{12}$$

##### 2.3. The Optimization Scheme

The alternating direction method of multipliers (ADMM) [20, 21] is an efficient and scalable optimization method which exploits the structure of the optimization problem. In this section, we use ADMM to deal with the problem in (12), which can be reformulated by introducing auxiliary variables together with indicator functions that enforce the constraints (13). The augmented Lagrange function of (13) involves a penalty parameter and a Lagrange multiplier (14). The solution can be obtained by incorporating the solutions of each regularization problem separately, which are defined as follows. The row TV subproblem (15) gives the optimization result along the vertical direction of the total variation defined in (4). The column TV subproblem (16) gives the optimization result along the horizontal direction of the total variation defined in (5).

We deal with the column linear TV optimization problem in (16) by the following steps in each iteration.

*Step 1 (initial setting).* Initialize the iteration variables and the multiplier, and set the tolerance for the stopping criterion.

*Step 2 (computing the low rank variable).* Fix the other variables and minimize (16) to obtain the update as (17). Ignoring the constant terms, (17) can be rewritten in the form (18). To solve (18), Cai et al. [17] introduced the soft-thresholding (singular value shrinkage) operator, defined for a matrix $Y$ with singular value decomposition $Y = U \Sigma V^{T}$, $\Sigma = \operatorname{diag}(\{\sigma_{i}\})$, as
$$\mathcal{D}_{\tau}(Y) = U \mathcal{D}_{\tau}(\Sigma) V^{T}, \qquad \mathcal{D}_{\tau}(\Sigma) = \operatorname{diag}\left( \max(\sigma_{i} - \tau, 0) \right), \tag{19}$$
where $\tau > 0$ is the threshold. Using the operator in (19), the solution of (18) can be obtained as (20).
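The shrinkage operator in (19) can be sketched in a few lines (an illustrative NumPy version, not the paper's code):

```python
import numpy as np

def svt_shrink(Y, tau):
    """Singular value shrinkage: U diag(max(sigma - tau, 0)) V^T,
    the operator of (19) introduced by Cai et al. [17]."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

Y = np.diag([3.0, 1.0])
print(svt_shrink(Y, 2.0))  # singular values 3, 1 shrink to 1, 0
```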

*Step 3 (computing the data-fidelity variable).* Fix the other variables and minimize (16) with respect to the remaining variable as in (21), which is a quadratic function and can be easily solved by setting its derivative to zero, which gives (22). Then we fix the values at the observed entries as in (23), where $\bar{\Omega}$ denotes the set of the missing entries.
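The "fix the values at the observed entries" step in (23) is a simple projection; a sketch (the boolean-mask convention and function name are ours):

```python
import numpy as np

def project_observed(X, M, observed):
    """Keep current estimates on the missing entries and reset the
    observed entries to the known data M (observed is a boolean mask)."""
    Z = X.copy()
    Z[observed] = M[observed]
    return Z

M = np.array([[1.0, 2.0], [3.0, 4.0]])
observed = np.array([[True, False], [False, True]])
X = np.zeros((2, 2))
print(project_observed(X, M, observed))  # [[1., 0.], [0., 4.]]
```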

*Step 4 (updating the multiplier).* Fix the other variables and update the multiplier as in (24).

These steps are repeated until the stopping condition is satisfied.

The row TV problem defined by (15) can be solved in a similar way to the column TV problem. The only difference lies in the update of the second step, given by (25), with the corresponding stopping condition.

Finally, the completed matrix is obtained as the average of the row TV and column TV results; that is,
$$X = \frac{1}{2} \left( X_{\text{row}} + X_{\text{col}} \right), \tag{26}$$
where $X_{\text{row}}$ and $X_{\text{col}}$ denote the solutions of the row TV and column TV problems, respectively.

#### 3. Convergence Analysis

In this section, we give the proof of the convergence of the column total variation problem (16); the convergence of the row total variation problem can be shown in a similar way. The objective function of (16) for the column variation is as follows:

Lemma 1. *Let $\tau \geq 0$. Then, for any matrices $Y_{1}$ and $Y_{2}$, the shrinkage operator defined in (19) satisfies
$$\|\mathcal{D}_{\tau}(Y_{1}) - \mathcal{D}_{\tau}(Y_{2})\|_{F} \leq \|Y_{1} - Y_{2}\|_{F}.$$
The details of the proof can be found in [17].*

Theorem 2. *Assume that the sequence of step sizes satisfies the required bound. Let $X^{\star}$ denote the optimization result and $X^{k}$ the $k$th iterate; then, by the iteration procedure defined in Section 2.3, we obtain the unique optimization result, that is, $X^{k} \to X^{\star}$. The details of the proof can be found in the Appendix.*

#### 4. Experiments

In this section, we test the proposed method on a set of images. The algorithm was implemented in the MATLAB programming language on a PC running the Microsoft Windows 7 operating system, with an Intel Core i5 CPU at 2.79 GHz and 2 GB of RAM.

We deal with the three channels (R, G, B) of color images separately and combine the results to get the final outcome. We use peak signal-to-noise ratio (PSNR) values to evaluate the performance:
$$\mathrm{PSNR} = 10 \log_{10} \frac{255^{2}}{\mathrm{MSE}},$$
where MSE denotes the mean squared error between the recovered image and the original image.
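For reference, PSNR for 8-bit images can be computed as follows (a standard sketch, not the paper's code):

```python
import numpy as np

def psnr(X, Y, peak=255.0):
    """Peak signal-to-noise ratio: 10 log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(X, float) - np.asarray(Y, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

X = np.zeros((8, 8))
Y = X + 1.0                  # every pixel off by 1 -> MSE = 1
print(round(psnr(X, Y), 2))  # 20 log10(255) ≈ 48.13
```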

In the experiments, we consider two situations: random mask samples and word mask samples. Figure 1 shows the recovered results with a 60% random mask and a word mask for values of the trade-off parameter $\lambda$ between 0 and 1 by LTVNN. Figure 2 shows the recovered PSNR for Pepper under different random sample ratios and under the word mask sample, for $\lambda$ from 0 to 1 with a step of 0.1, by LTVNN. It can be observed from these two figures that the best result is obtained for values of $\lambda$ near 0.5, which corresponds to the case where the two norms (nuclear and LTV) are used with equal weight in (9). For the two extreme cases, $\lambda = 0$ (only the nuclear norm is taken into consideration) and $\lambda = 1$ (only the linear total variation approximate norm is considered), the algorithm loses its efficiency.

We also compare our method (LTVNN) with other matrix completion methods, including TNNR [10, 11], SVT [17], SVP [18], and OptSpace [19]. Figure 3 plots the recovered PSNR for Pepper with different random sample ratios (from 40% to 90%) by LTVNN and the other four methods (TNNR, SVT, SVP, and OptSpace). It can be seen from Figure 3 that the proposed LTVNN method achieves much higher PSNR than the other methods. Figure 4 shows the comparison of the PSNR of the recovery methods for Lena under the word mask by LTVNN and the other methods. Table 1 lists the PSNR results under the word mask sample for different images by LTVNN and the other methods. From Figure 4 and Table 1, we can see that the proposed method outperforms the other matrix completion methods under the word mask for different images.

#### 5. Conclusion

In this paper, we have proposed a new model that combines the nuclear norm and the total variation norm for the matrix completion problem, which was then solved by an ADMM-based scheme. Experimental results demonstrate the effectiveness of the proposed algorithm compared to other methods.

#### Appendix

Before giving the proof of Theorem 2, we first verify a property of the transform matrices. Without loss of generality, we take an example matrix and the corresponding transform matrix defined in (11); a direct computation then shows the stated identity. The proof of Theorem 2 is as follows.

*Proof.* Let the pair be a primal-dual optimum for problem (27). The optimality conditions give (A.1)–(A.3). From (A.3) we deduce (A.4), and combining (A.4) with Lemma 1 yields (A.5). From (23) we observe the relation between successive iterates, which gives (A.6). Setting the corresponding error terms then leads to (A.7).

Based on (A.7), when the step size satisfies the required condition, the error term is nonincreasing and converges to a limit. This condition on the parameter is easy to satisfy when the step size is a small constant. We can also obtain other properties as follows.

Since the error term converges to zero, the corresponding quantity is infinitesimally small and also converges to zero. Now we reconsider (A.2): evidently, the first column converges to zero; the second column converges to the first column and hence to zero; the third column converges to the second column and hence to zero; and so on, so through the iteration every column converges, except the last, because by the definitions in (4) and (5) the last column and the last row are set to zero.

Fortunately, this does not affect the global result, and Theorem 2 is established.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This work was supported by the National Basic Research Program of China under Grant 2011CB707904, by the National Natural Science Foundation of China under Grants 61201344, 61271312, and 61073138, by the Ministry of Education of China under Grants 20110092110023 and 20120092120036, the Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, and by Natural Science Foundation of Jiangsu Province under Grant BK2012329. This work is supported by INSERM postdoctoral fellowship.

#### References

1. E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," *IEEE Transactions on Information Theory*, vol. 52, no. 2, pp. 489–509, 2006.
2. D. L. Donoho, "Compressed sensing," *IEEE Transactions on Information Theory*, vol. 52, no. 4, pp. 1289–1306, 2006.
3. E. J. Candès, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," *Communications on Pure and Applied Mathematics*, vol. 59, no. 8, pp. 1207–1223, 2006.
4. E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," *Foundations of Computational Mathematics*, vol. 9, no. 6, pp. 717–772, 2009.
5. E. J. Candès and T. Tao, "The power of convex relaxation: near-optimal matrix completion," *IEEE Transactions on Information Theory*, vol. 56, no. 5, pp. 2053–2080, 2010.
6. A. Eriksson and A. van den Hengel, "Efficient computation of robust low-rank matrix approximations in the presence of missing data using the L1 norm," in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10)*, pp. 771–778, June 2010.
7. H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10)*, pp. 1791–1798, June 2010.
8. J. Liu, P. Musialski, P. Wonka, and J. Ye, "Tensor completion for estimating missing values in visual data," in *Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '09)*, pp. 2114–2121, Kyoto, Japan, 2009.
9. T. Okatani, T. Yoshida, and K. Deguchi, "Efficient algorithm for low-rank matrix factorization with missing components and performance comparison of latest algorithms," in *Proceedings of the IEEE International Conference on Computer Vision (ICCV '11)*, pp. 842–849, Barcelona, Spain, November 2011.
10. D. Zhang, Y. Hu, J. Ye, X. Li, and X. He, "Matrix completion by truncated nuclear norm regularization," in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12)*, pp. 2192–2199, 2012.
11. Y. Hu, D. Zhang, J. Ye, X. Li, and X. He, "Fast and accurate matrix completion via truncated nuclear norm regularization," *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 35, no. 9, pp. 2117–2130, 2013.
12. N. Komodakis and G. Tziritas, "Image completion using global optimization," in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '06)*, vol. 1, pp. 442–452, 2006.
13. C. Rasmussen and T. Korah, "Spatiotemporal inpainting for recovering texture maps of partially occluded building facades," in *Proceedings of the IEEE International Conference on Image Processing (ICIP '05)*, vol. 3, pp. 125–128, September 2005.
14. H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10)*, pp. 1791–1798, June 2010.
15. Y. Koren, "Factorization meets the neighborhood: a multifaceted collaborative filtering model," in *Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '08)*, pp. 426–434, Las Vegas, Nev, USA, August 2008.
16. H. Steck, "Training and testing of recommender systems on data missing not at random," in *Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '10)*, pp. 713–722, Washington, DC, USA, July 2010.
17. J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," *SIAM Journal on Optimization*, vol. 20, no. 4, pp. 1956–1982, 2010.
18. P. Jain, R. Meka, and I. Dhillon, "Guaranteed rank minimization via singular value projection," in *Proceedings of the 24th Annual Conference on Neural Information Processing Systems (NIPS '10)*, Vancouver, Canada, December 2010.
19. R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," *IEEE Transactions on Information Theory*, vol. 56, no. 6, pp. 2980–2998, 2010.
20. Z. Lin, R. Liu, and Z. Su, "Linearized alternating direction method with adaptive penalty for low-rank representation," in *Proceedings of the 25th Annual Conference on Neural Information Processing Systems (NIPS '11)*, December 2011.
21. J. Yang and X. Yuan, "Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization," *Mathematics of Computation*, vol. 82, no. 281, pp. 301–329, 2013.
22. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," *Physica D: Nonlinear Phenomena*, vol. 60, pp. 259–268, 1992.