Discrete Dynamics in Nature and Society
Volume 2018, Article ID 2598160, 12 pages
https://doi.org/10.1155/2018/2598160
Research Article

Enhancing Matrix Completion Using a Modified Second-Order Total Variation

Wendong Wang and Jianjun Wang

School of Mathematics and Statistics, Southwest University, Chongqing 400715, China

Correspondence should be addressed to Jianjun Wang; wjj@swu.edu.cn

Received 17 March 2018; Revised 20 July 2018; Accepted 14 August 2018; Published 12 September 2018

Academic Editor: Seenith Sivasundaram

Copyright © 2018 Wendong Wang and Jianjun Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we propose a new method to deal with the matrix completion problem. Different from most existing matrix completion methods that only pursue the low rank of underlying matrices, the proposed method simultaneously optimizes their low rank and smoothness such that they mutually help each other and hence yield a better performance. In particular, the proposed method becomes very competitive with the introduction of a modified second-order total variation, even when it is compared with some recently emerged matrix completion methods that also combine the low rank and smoothness priors of matrices together. An efficient algorithm is developed to solve the induced optimization problem. The extensive experiments further confirm the superior performance of the proposed method over many state-of-the-art methods.

1. Introduction

Matrix Completion (MC) refers to the problem of filling in the missing entries of an unknown matrix from a limited sample of its entries. This problem has received widespread attention in many real applications; see, e.g., [1–3]. Generally, it is impossible to exactly recover an arbitrary matrix in this setting without additional information, since the problem is extraordinarily ill-posed [4] and hence has infinitely many solutions. However, in many realistic scenarios, the matrix we wish to recover is very often low-rank or can be well approximated by a low-rank matrix, as illustrated in Figure 1. Therefore, an immediate idea for modeling MC is to pursue the low rank of matrices by solving the following optimization problem:

$$\min_{X\in\mathbb{R}^{m\times n}}\ \operatorname{rank}(X),\qquad \text{s.t. } X_{ij}=M_{ij},\ (i,j)\in\Omega, \tag{1}$$

where $M\in\mathbb{R}^{m\times n}$ is the underlying matrix, whose entries in the location set $\Omega\subseteq\{1,\dots,m\}\times\{1,\dots,n\}$ are given while the remaining entries are missing.

Figure 1: (a) Lenna image. (b) All the singular values of the Lenna image, sorted from largest to smallest. A simple calculation shows that the top 40 singular values contribute more than 80% of the information carried by all singular values.
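The claim in Figure 1(b) can be reproduced with a few lines of Matlab; the file name below is illustrative:

    % Fraction of the singular-value mass carried by the top 40 values.
    A = double(imread('lenna.png'));   % assumed grayscale Lenna image
    s = svd(A);                        % singular values, largest first
    fprintf('top-40 mass: %.1f%%\n', 100 * sum(s(1:40)) / sum(s));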

Unfortunately, problem (1) is of little practical use since it is NP-hard in general. To circumvent this issue, a nuclear norm minimization method was suggested in [4, 5], which solves

$$\min_{X\in\mathbb{R}^{m\times n}}\ \|X\|_*,\qquad \text{s.t. } X_{ij}=M_{ij},\ (i,j)\in\Omega, \tag{2}$$

where $\|X\|_*=\sum_i\sigma_i(X)$ and $\sigma_i(X)$ denotes the $i$th largest singular value of $X$. In particular, Candès and Recht [4] proved theoretically that any low-rank matrix can be exactly recovered by solving problem (2) under some necessary assumptions. Recently, some new and deeper results on problem (2) have been obtained that address the theoretical defects of [4]; see, e.g., [6, 7]. Many algorithms have also been proposed to solve problem (2) and its variants, including the Accelerated Proximal Gradient (APG) algorithms [8, 9], the Iterative Reweighted Least Squares (IRLS) algorithm [10], and the Singular Value Thresholding (SVT) algorithms [11, 12].
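Since the singular value thresholding operator recurs in Section 3, we record a minimal Matlab sketch of it here; this is the standard operator of [11], written out for concreteness rather than taken from any released code:

    function X = svt(A, tau)
    % Singular value thresholding operator from [11]:
    % D_tau(A) = U * diag(max(sigma_i - tau, 0)) * V', where A = U*S*V'.
    [U, S, V] = svd(A, 'econ');
    X = U * diag(max(diag(S) - tau, 0)) * V';
    end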

The nuclear norm is the tightest convex approximation of the rank function, but it is far from the closest one [13]. The relationship between the nuclear norm and the rank function of matrices is similar to that between the $\ell_1$ norm and the $\ell_0$ norm of vectors [4, 14]. To get a much closer approximation to the rank function, many nonconvex functions used to approximate the $\ell_0$ norm (see, e.g., [15–19]) have been successfully extended to replace the nuclear norm [20–22]. Two representative ones are the Schatten-$p$ quasi-norm [20, 23, 24] and the truncated nuclear norm [21, 25], the latter also called the partial sum of singular values [26]. The methods induced by these two surrogate functions have been well investigated for the MC problem [20, 21] and obtain much better performance than the earlier nuclear norm minimization method. Besides the above, many other methods exist for tackling the MC problem; see, e.g., [27, 28] for matrix factorization based methods and [29, 30] for greedy methods.
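For reference, these two surrogates admit simple closed forms; the following are the standard definitions (the exponent $p$ and the truncation level $r$ are method parameters):

$$\|X\|_{S_p}^p=\sum_{i=1}^{\min(m,n)}\sigma_i^p(X)\ \ (0<p<1),\qquad \|X\|_r=\sum_{i=r+1}^{\min(m,n)}\sigma_i(X).$$

As $p\to 0$, the Schatten-$p$ quasi-norm tends to the rank, while minimizing the truncated nuclear norm $\|X\|_r$ penalizes only the smallest singular values and leaves the $r$ dominant ones free.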

Overall, almost all existing MC methods are designed to approach the rank function and thus to induce low rank. To some degree, low rank only characterizes the global prior of a matrix. In some instances, however, matrices possess additional structural priors besides low rank. Take the matrix (grey image) shown in Figure 1(a) as an example: it is rich in smoothness features (priors); in other words, an entry and its neighboring entries in such a matrix often differ little in value. When dealing with such matrices, most existing MC methods cannot capture these smoothness priors well and hence may perform poorly. On the other hand, when the missing ratio of a matrix becomes very high, or when high-quality recoveries are urgently needed, one has no choice but to exploit additional priors, since it becomes very difficult to exactly or robustly recover a matrix from its low rank prior alone. Therefore, how to mine more available priors of underlying matrices and integrate them into MC becomes a crucial problem.

In this paper, we propose a new method that combines the low rank and smoothness priors of underlying matrices to deal with the MC problem. Ours is not the first work on this topic, but by using a modified second-order total variation of matrices it becomes very competitive. In summary, the contributions of this paper are as follows:

(i) A modified second-order total variation and the nuclear norm are combined to characterize the smoothness and low-rank priors of underlying matrices, respectively, which makes our method much more competitive for the MC problem.

(ii) An efficient algorithm is developed to solve the induced optimization problem, and extensive experiments testify to the effectiveness of the proposed method against many state-of-the-art MC methods.

The remainder of this paper is organized as follows. In Section 2, we review MC methods that simultaneously optimize the low rank and smoothness priors of underlying matrices. Since matrices can be regarded as second-order tensors, most tensor based completion methods can also be applied to the MC problem; two related tensor completion methods are therefore also introduced in that section. In Section 3, we present the proposed method and design an efficient algorithm to solve the induced optimization problem. Experimental results are presented in Section 4. Finally, conclusions and future work are given in Section 5.

2. A Review on MC with Smoothness Priors

The low rank and smoothness priors of underlying matrices have been well studied in the MC and visual data processing communities [31, 32], respectively. However, work combining them for MC is rarely reported. To the best of our knowledge, Han et al. [33] gave the first such work and proposed a Linear Total Variation approximation regularized Nuclear Norm (LTVNN) minimization method, which takes the form

$$\min_{X\in\mathbb{R}^{m\times n}}\ \|X\|_*+\lambda\,\mathrm{LTVA}(X),\qquad \text{s.t. } X_{ij}=M_{ij},\ (i,j)\in\Omega, \tag{3}$$

where $\lambda>0$ is a penalty parameter. In the LTVNN method, the smoothness priors of underlying matrices are constrained by a Linear Total Variation Approximation (LTVA), a first-order total variation term built from the absolute differences between each entry and its adjacent entries. It has been shown in [33] that LTVNN is superior to many state-of-the-art MC methods that only pursue the low rank, such as the earlier SVT method [11] and the TNNR method [21]. However, LTVNN may produce a staircase effect due to its use of a linear (first-order) total variation. Moreover, the induced LTVNN algorithm does not solve problem (3) directly: it first splits problem (3) into two subproblems, then solves the two subproblems independently, and finally outputs a result assembled from their solutions. Obviously, such an algorithmic strategy is not consistent with the proposed optimization problem (3).
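For orientation, a standard anisotropic first-order total variation of $X\in\mathbb{R}^{m\times n}$, the quantity that LTVA approximates (this is the textbook definition, not copied from [33]), is

$$\mathrm{TV}(X)=\sum_{i=1}^{m-1}\sum_{j=1}^{n}\big|x_{i+1,j}-x_{i,j}\big|+\sum_{i=1}^{m}\sum_{j=1}^{n-1}\big|x_{i,j+1}-x_{i,j}\big|.$$

Penalizing only first-order differences favors piecewise constant solutions, which is precisely the source of the staircase effect mentioned above.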

Recently, such considerations have been extended to the tensor completion (TC) problem. From the tensor perspective, vectors and matrices can be regarded as first- and second-order tensors, respectively; therefore, most existing TC methods can still be applied to the MC problem. In 2014, Chen et al. [34] proposed a Tucker decomposition based Simultaneous Tensor Decomposition and Completion (STDC) method for TC. STDC optimizes over the Tucker decomposition of the underlying tensor: the first term of its objective characterizes the low rank of the tensor through a unified factor matrix that stacks all Tucker factor matrices, while the second term, built from the Kronecker product of the factor matrices and a preset matrix designed from some priors, enforces similarity between individual components and thereby yields smoothed results. As in the matrix case, the elements of the tensor in a location set $\Omega$ are given while the rest are missing. Thus, STDC can be considered a tensor extension of low-rank and smooth matrix completion. Later, based on the PARAFAC decomposition of tensors, Yokota et al. [35] proposed a Smooth PARAFAC tensor Completion (SPC) method that integrates the low rank and smoothness priors of a tensor into a single term: each PARAFAC factor vector is penalized through a smoothness constraint matrix (a difference operator applied to the factor), so that low rank and smoothness are enforced simultaneously. This design is entirely different from the STDC method, and it has been shown numerically in [35] that SPC performs much better than STDC. Further discussion of tensors and TC problems is beyond the scope of this paper; we refer interested readers to [35, 36] and the references therein.

3. Proposed New Method

In this section, we propose a new method that combines the low rank and smoothness priors of underlying matrices to deal with the MC problem. In our method, the nuclear norm characterizes the low rank of underlying matrices, while their smoothness priors are characterized by a modified second-order total variation (MSTV), defined entrywise through second-order differences:

$$\mathrm{MSTV}(X)=\sum_{i=2}^{m-1}\sum_{j=1}^{n}\big(x_{i-1,j}-2x_{i,j}+x_{i+1,j}\big)^2+\sum_{i=1}^{m}\sum_{j=2}^{n-1}\big(x_{i,j-1}-2x_{i,j}+x_{i,j+1}\big)^2. \tag{9}$$

Such a modified second-order total variation is a new metric for the smoothness priors of matrices, differing both from the classical second-order total variation [37] and from the earlier linear total variation approximation. From a functional perspective, it not only inherits the geometric structure of the second-order total variation, which avoids the staircase effect caused by the linear total variation approximation, but also yields a smooth (differentiable) function that admits more efficient numerical algorithms. Figure 2 shows the geometric differences between the linear total variation approximation and our MSTV. Moreover, (9) can be further written in matrix form as

$$\mathrm{MSTV}(X)=\|D_m X\|_F^2+\|X D_n^{\top}\|_F^2,$$

where $D_k\in\mathbb{R}^{(k-2)\times k}$ denotes the second-order difference matrix whose $i$th row applies the stencil $(1,-2,1)$ at positions $i$, $i+1$, $i+2$.
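To make the matrix form concrete, the following minimal Matlab sketch evaluates MSTV under the second-difference construction above; the helper name second_diff is ours:

    function v = mstv(X)
    % MSTV(X) = ||D1*X||_F^2 + ||X*D2'||_F^2, with D1 and D2 the
    % second-order difference matrices built from the stencil [1 -2 1].
    [m, n] = size(X);
    D1 = second_diff(m);
    D2 = second_diff(n);
    v = norm(D1 * X, 'fro')^2 + norm(X * D2', 'fro')^2;
    end

    function D = second_diff(k)
    % (k-2) x k sparse matrix whose i-th row applies [1 -2 1] at columns i..i+2.
    e = ones(k, 1);
    D = spdiags([e, -2*e, e], 0:2, k-2, k);
    end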

Figure 2: The black points denote local (non-boundary) entries, the red points denote the corresponding neighboring entries involved in LTVA and in our MSTV, and the grey points denote the remaining entries of the matrices. Obviously, our MSTV takes more informative neighboring entries into consideration and hence yields better performance.

Now the proposed method, built on the nuclear norm and the modified second-order total variation, can be modeled as the following optimization problem:

$$\min_{X\in\mathbb{R}^{m\times n}}\ \|X\|_*+\frac{\lambda}{2}\,\mathrm{MSTV}(X),\qquad \text{s.t. } X_{ij}=M_{ij},\ (i,j)\in\Omega, \tag{14}$$

where $\lambda>0$ is a penalty parameter. Although the nuclear norm is used in problem (14) to characterize the low rank of matrices, it could be replaced by other surrogate functions of the rank, such as the Schatten-$p$ quasi-norm or the truncated nuclear norm. Problem (14) and the aforementioned problem (3) share some similarities, but one will see later in the experimental part that the algorithm induced by problem (14) performs much better than that of problem (3).

To solve problem (14), we adopt the Alternating Direction Method of Multipliers (ADMM) [38], which has been widely used in many applications. To use ADMM, we first introduce an auxiliary variable $Z$ and rewrite problem (14) as

$$\min_{X,Z\in\mathbb{R}^{m\times n}}\ \|X\|_*+\frac{\lambda}{2}\,\mathrm{MSTV}(Z),\qquad \text{s.t. } X=Z,\ \ Z_{ij}=M_{ij},\ (i,j)\in\Omega. \tag{15}$$

Then the corresponding augmented Lagrangian function of (15) is

$$\mathcal{L}(X,Z,Y)=\|X\|_*+\frac{\lambda}{2}\,\mathrm{MSTV}(Z)+\langle Y,\,X-Z\rangle+\frac{\mu}{2}\|X-Z\|_F^2,$$

where $Y$ is the dual variable and $\mu>0$ is a penalty parameter associated with the augmentation. According to ADMM, the iterative procedure can be described as follows.

(1) Fixing $Z^k$ and $Y^k$, solve for $X^{k+1}$ by

$$X^{k+1}=\arg\min_X\ \|X\|_*+\frac{\mu}{2}\Big\|X-\Big(Z^k-\frac{Y^k}{\mu}\Big)\Big\|_F^2. \tag{18}$$

According to [11], problem (18) has a closed-form solution, which can be obtained by the singular value thresholding operator $\mathcal{D}_{1/\mu}$, i.e.,

$$X^{k+1}=\mathcal{D}_{1/\mu}\Big(Z^k-\frac{Y^k}{\mu}\Big)=U\,\mathrm{diag}\big(\max(\sigma_i-1/\mu,\,0)\big)\,V^{\top},$$

where $U\,\mathrm{diag}(\sigma_i)\,V^{\top}$ is the singular value decomposition of $Z^k-Y^k/\mu$.

(2) Fixing $X^{k+1}$ and $Y^k$, solve for $Z^{k+1}$ by minimizing

$$g(Z)=\frac{\lambda}{2}\,\mathrm{MSTV}(Z)+\frac{\mu}{2}\Big\|Z-\Big(X^{k+1}+\frac{Y^k}{\mu}\Big)\Big\|_F^2.$$

Setting the derivative of $g$ with respect to $Z$ to zero yields

$$\big(\lambda D_m^{\top}D_m+\mu I\big)Z+Z\big(\lambda D_n^{\top}D_n\big)=\mu X^{k+1}+Y^k, \tag{22}$$

where $I$ is an identity matrix. Equation (22) is the well-known Sylvester equation [39]. In this paper, we resort to the Matlab command sylvester to solve it, i.e., $\tilde{Z}=\texttt{sylvester}\big(\lambda D_m^{\top}D_m+\mu I,\ \lambda D_n^{\top}D_n,\ \mu X^{k+1}+Y^k\big)$. After we obtain $\tilde{Z}$, i.e., the solution to (22), we approximately obtain $Z^{k+1}$ by re-imposing the observed entries:

$$Z^{k+1}_{ij}=\begin{cases} M_{ij}, & (i,j)\in\Omega,\\ \tilde{Z}_{ij}, & \text{otherwise}. \end{cases}$$

(3) Update the dual variable by

$$Y^{k+1}=Y^k+\mu\big(X^{k+1}-Z^{k+1}\big). \tag{25}$$

We summarize the iterative procedure in Algorithm 1. Note that we do not use a fixed $\mu$ but adopt an adaptive updating strategy for it (see the 7th step in Algorithm 1). This setting is mainly inspired by recent studies on ADMM [21, 26, 40]; both theoretical and applied results have shown that it helps ADMM based algorithms converge quickly and hence avoids high computation cost. Without loss of generality, we simply fix the initial value of $\mu$ and its amplification factor throughout for Algorithm 1. It should also be noted that the performance of the proposed algorithm can be further improved if one optimizes these two parameters.

Algorithm 1: Solving problem (14) via ADMM.
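For readers who prefer code, the following minimal Matlab sketch illustrates Algorithm 1 as reconstructed above; the initial penalty value, its growth factor, and the helper second_diff are illustrative choices rather than the exact settings of our released code:

    function X = mstv_mc(M, Omega, lambda, tol, maxit)
    % ADMM sketch for problem (14):
    %   min ||X||_* + (lambda/2)*MSTV(Z)  s.t.  X = Z, Z_ij = M_ij on Omega.
    % M     : m x n matrix with observed entries filled in (zeros elsewhere)
    % Omega : m x n logical mask of observed entries
    [m, n] = size(M);
    mu = 1e-2; rho = 1.15;                    % assumed initial penalty / growth
    A = lambda * full(second_diff(m)' * second_diff(m));   % lambda*Dm'*Dm
    B = lambda * full(second_diff(n)' * second_diff(n));   % lambda*Dn'*Dn
    Z = M; Y = zeros(m, n);
    for k = 1:maxit
        % X-step: singular value thresholding of Z - Y/mu, threshold 1/mu
        [U, S, V] = svd(Z - Y/mu, 'econ');
        X = U * diag(max(diag(S) - 1/mu, 0)) * V';
        % Z-step: solve the Sylvester equation (22), then restore observed entries
        Zold = Z;
        Z = sylvester(A + mu*eye(m), B, mu*X + Y);
        Z(Omega) = M(Omega);
        % dual ascent and adaptive penalty update (7th step of Algorithm 1)
        Y = Y + mu * (X - Z);
        mu = rho * mu;
        % relative neighboring iteration error, cf. the stopping criterion (26)
        if norm(Z - Zold, 'fro') / max(norm(Zold, 'fro'), 1) < tol
            break;
        end
    end
    end

    function D = second_diff(k)
    % (k-2) x k sparse second-order difference matrix; rows apply [1 -2 1].
    e = ones(k, 1);
    D = spdiags([e, -2*e, e], 0:2, k-2, k);
    end

Note that the Matlab function sylvester solves AX + XB = C for dense inputs, which matches the form of (22) exactly.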

4. Numerical Experiments

In this section, experiments are conducted to evaluate the performance of the proposed method. They involve two problems: randomly masked image completion, which completes incomplete images generated by masking their entries at random, and text masked image reconstruction, which reconstructs images overlaid with text. In our experiments, the Lenna image shown in Figure 1(a) and the 20 widely used test images shown in Figure 3 are used to generate the test matrices. To evaluate the quality of the reconstructed images, the Structural SIMilarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR) indices are adopted. We refer readers to [41] for the SSIM index. As for the PSNR index, the Mean Squared Error (MSE) is defined as $\mathrm{MSE}=\|A_{\mathrm{org}}-A_{\mathrm{rec}}\|_F^2/N$, where $N$ counts the number of elements in $A_{\mathrm{org}}$, and $A_{\mathrm{org}}$ and $A_{\mathrm{rec}}$ are the original and reconstructed images, respectively; the PSNR value is then calculated as $\mathrm{PSNR}=10\log_{10}(255^2/\mathrm{MSE})$. The missing ratio of an incomplete image is defined as the fraction of its entries that are missing, i.e., $1-|\Omega|/(mn)$. All experiments are implemented in Matlab, and the codes can be downloaded from https://github.com/DongSylan/LIMC.
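For completeness, the PSNR computation reads as follows in Matlab, assuming 8-bit images with peak value 255:

    % PSNR between the original image A and the reconstruction B,
    % both given as double arrays with values in [0, 255].
    mse = norm(A(:) - B(:))^2 / numel(A);
    psnr_val = 10 * log10(255^2 / mse);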

Figure 3: The test images. They are numbered in order from 1 to 20, from left to right and from top to bottom.
4.1. Convergence Behavior

In this part, we conduct experiments to examine the convergence behavior of Algorithm 1. The test matrices are generated by masking the entries of the Lenna image randomly with a 60% missing ratio. Before the convergence analysis, we must determine a proper penalty parameter $\lambda$, since it plays a vital role in Algorithm 1. Figure 4 plots the PSNR and SSIM results obtained by Algorithm 1 as functions of $\lambda$, from which the best choice of $\lambda$ can be read off directly. In the above experiments, we fix the initialization and use the stopping criterion

$$\mathrm{RNIE}=\frac{\|X^{k+1}-X^k\|_F}{\|X^k\|_F}\le\varepsilon, \tag{26}$$

where, here and throughout, the left-hand side is called the Relative Neighboring Iteration Error (RNIE). Analogously, we define the Relative True Iteration Error (RTIE) as $\|X^k-M\|_F/\|M\|_F$, with $M$ the original complete matrix. In what follows, we always use the same tolerance $\varepsilon$ and the same maximum number of iterations for Algorithm 1.

Figure 4: Penalty parameter selection.

Figure 5 plots the convergence results of Algorithm 1. One can easily see that both the RNIE and the RTIE decrease rapidly and consistently. Specifically, once the iteration count exceeds 200, the RNIE falls below the stopping tolerance, whereas the RTIE levels off to a nearly constant value. These observations indicate, to some degree, that the stopping criterion (26) is reasonable for Algorithm 1.

Figure 5: Convergence behavior of Algorithm 1.
4.2. Randomly Masked Image Completion

To illustrate the effectiveness of our method, we compare Algorithm 1 with 5 other state-of-the-art algorithms: the IRLS algorithm [24], the PSSV algorithm [26], the LTVNN algorithm [33], and the SPC algorithms [35] with TV and QV smoothing, i.e., the SPC-TV and SPC-QV algorithms. In our experiments, the parameters of these five algorithms are tuned following their original papers, and we uniformly set their maximum number of iterations to 500.

We start by completing the Lenna image masked randomly at several different missing ratios. The obtained PSNR and SSIM results are presented in Table 1, where the highest PSNR and SSIM values are underlined and marked in bold, while the second highest values are underlined only. One can see directly that, as the missing ratio decreases, i.e., as the known information increases, all the algorithms tend to perform better. However, Algorithm 1 outperforms the other algorithms in both PSNR and SSIM at every missing ratio. Specifically, compared to the IRLS and PSSV algorithms, which only pursue the low rank of underlying matrices, Algorithm 1 improves the PSNR and SSIM by more than 5 dB and 0.2, respectively. Figure 6 shows the reconstructed images obtained by these algorithms. It is easy to see that Algorithm 1 not only avoids the staircase effect exhibited by the LTVNN algorithm, but also achieves better smoothness than the SPC-TV and SPC-QV algorithms.

Table 1: PSNR/SSIM results on the Lenna image with different missing ratios by different algorithms.
Figure 6: Visual results on Lenna image with 60% missing ratio by different algorithms. (a) Original image. (b) Randomly masked image. (c) IRLS: PSNR=22.91dB, SSIM=0.707. (d) PSSV: PSNR=23.21dB, SSIM=0.696. (e) LTVNN: PSNR=24.91dB, SSIM=0.853. (f) SPC-TV: PSNR=25.24dB, SSIM=0.801. (g) SPC-QV: PSNR=27.50dB, SSIM=0.898. (h) Ours: PSNR=28.96dB, SSIM=0.936. The figure is better viewed in zoomed PDF.

To further confirm the effectiveness of our method, we apply Algorithm 1, together with the other 5 algorithms, to the 20 widely used test images (Figure 3). The obtained results are presented in Table 2. The overall impression from Table 2 is that Algorithm 1 achieves the highest SSIM in all cases and the highest PSNR in almost all cases. Although Algorithm 1 yields a lower PSNR than the SPC-QV algorithm in some cases, these cases occupy only a small proportion (13/80) of all cases, and the PSNR gap in them is also very small (less than 0.8 dB).

Table 2: PSNR/SSIM results on the 20 test images with different missing ratios by different algorithms.
4.3. Text Masked Image Reconstruction

Compared to the randomly masked image completion problem, text masked image reconstruction is a relatively hard task, since the entries masked by the text are not randomly distributed in the matrix and the text may cover some important texture information. Nevertheless, this problem can still be transformed into an MC problem by first locating the text and then regarding the corresponding entries as missing values, as sketched below.
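As a concrete illustration of this transformation, the following sketch assumes a binary stencil T marking the text pixels is available; the variable names and the function mstv_mc from the sketch in Section 3 are ours:

    % I: text-masked image; T: logical stencil, true where text pixels lie.
    Omega = ~T;                       % entries under the text are missing
    M = I;  M(~Omega) = 0;            % zero out the unknown entries
    X = mstv_mc(M, Omega, lambda, tol, maxit);   % recover via Algorithm 1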

Figure 7 shows the visual results obtained by the above-mentioned algorithms when reconstructing the Lenna image with text. In terms of PSNR and SSIM, Algorithm 1 and the SPC-QV algorithm rank first and second, respectively, followed by the remaining algorithms. In terms of visual quality, Algorithm 1 reconstructs the original image well and preserves its local feature structure with no visible text traces. In contrast, the images reconstructed by the IRLS and PSSV algorithms are covered with obvious text traces, the LTVNN algorithm again suffers from the staircase effect, and the image reconstructed by the SPC-QV algorithm is better than that of SPC-TV but still shows faint text traces. Based on the previous 20 test images, we generate another 20 text masked images (Figure 8) to compare Algorithm 1 with the 5 other algorithms on the text masked image reconstruction problem. The obtained PSNR and SSIM results are reported in Table 3, which again confirms the excellent performance of Algorithm 1.

Table 3: PSNR/SSIM results on the reconstruction of the 20 test images with text by different algorithms.
Figure 7: Visual results on Lenna image with text by different algorithms. (a) Original image. (b) Text masked image. (c) IRLS: PSNR=22.14dB, SSIM=0.903. (d) PSSV: PSNR=24.41dB, SSIM=0.929. (e) LTVNN: PSNR=25.33dB, SSIM=0.962. (f) SPC-TV: PSNR=25.33dB, SSIM=0.941. (g) SPC-QV: PSNR=27.07dB, SSIM=0.966. (h) Ours: PSNR=29.12dB, SSIM=0.984. The figure is better viewed in zoomed PDF.
Figure 8: The desired text masked images. They are numbered in the same manner as we stated in Figure 3.

5. Conclusion and Future Work

This paper proposed a new method that combines the low rank and smoothness priors of matrices to tackle the matrix completion problem. Different from the earlier LTVNN method, the proposed method characterizes the smoothness of matrices by a modified second-order total variation, which not only avoids the staircase effect suffered by the LTVNN method but also leads to better performance. Even compared to the recently emerged smooth PARAFAC tensor completion method, our method remains highly competitive in terms of both PSNR and SSIM. Extensive experiments further confirm the excellent performance of the proposed method. Potential future work includes replacing the nuclear norm in our method with other (nonconvex) low-rank promoting functions, integrating more available priors of matrices to enhance existing matrix completion methods, and applying our modified second-order total variation to tensor completion problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of China under Grant Nos. 61273020 and 61673015.

References

  1. N. Komodakis, "Image completion using global optimization," in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 1, pp. 442–452, New York, NY, USA, June 2006.
  2. Y. Koren, R. Bell, and C. Volinsky, "Matrix factorization techniques for recommender systems," Computer, vol. 42, no. 8, pp. 30–37, 2009.
  3. H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791–1798, San Francisco, Calif, USA, June 2010.
  4. E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717–772, 2009.
  5. B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
  6. Y. Chen, "Incoherence-optimal matrix completion," IEEE Transactions on Information Theory, vol. 61, no. 5, pp. 2909–2923, 2015.
  7. G. Liu and P. Li, "Low-rank matrix completion in the presence of high coherence," IEEE Transactions on Signal Processing, vol. 64, no. 21, pp. 5623–5633, 2016.
  8. S. Ji and J. Ye, "An accelerated gradient method for trace norm minimization," in Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09), pp. 457–464, ACM, June 2009.
  9. K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615–640, 2010.
  10. M. Fornasier, H. Rauhut, and R. Ward, "Low-rank matrix recovery via iteratively reweighted least squares minimization," SIAM Journal on Optimization, vol. 21, no. 4, pp. 1614–1640, 2011.
  11. J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
  12. J.-F. Cai and S. Osher, "Fast singular value thresholding without singular value decomposition," Methods and Applications of Analysis, vol. 20, no. 4, pp. 335–351, 2013.
  13. C. Xu, Z. Lin, and H. Zha, "A unified convex surrogate for the Schatten-p norm," in Proceedings of the AAAI Conference on Artificial Intelligence, pp. 926–932, 2017.
  14. S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Springer, Berlin, Heidelberg, Germany, 2013.
  15. R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.
  16. J. Wen, D. Li, and F. Zhu, "Stable recovery of sparse signals via lp-minimization," Applied and Computational Harmonic Analysis, vol. 38, no. 1, pp. 161–176, 2015.
  17. Y. Wang and W. Yin, "Sparse signal reconstruction via iterative support detection," SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 462–491, 2010.
  18. P. Yin, Y. Lou, Q. He, and J. Xin, "Minimization of l1-2 for compressed sensing," SIAM Journal on Scientific Computing, vol. 37, no. 1, pp. A536–A563, 2015.
  19. W. Wang, J. Wang, and Z. Zhang, "Robust signal recovery with highly coherent measurement matrices," IEEE Signal Processing Letters, vol. 24, no. 3, pp. 304–308, 2017.
  20. G. Marjanovic and V. Solo, "On lq optimization and matrix completion," IEEE Transactions on Signal Processing, vol. 60, no. 11, pp. 5714–5724, 2012.
  21. Y. Hu, D. Zhang, J. Ye, X. Li, and X. He, "Fast and accurate matrix completion via truncated nuclear norm regularization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2117–2130, 2013.
  22. T.-H. Ma, Y. Lou, and T.-Z. Huang, "Truncated l1-2 models for sparse recovery and rank minimization," SIAM Journal on Imaging Sciences, vol. 10, no. 3, pp. 1346–1380, 2017.
  23. F. Nie, H. Huang, and C. Ding, "Low-rank matrix recovery via efficient Schatten p-norm minimization," in Proceedings of the 26th AAAI Conference on Artificial Intelligence, pp. 655–661, 2012.
  24. M.-J. Lai, Y. Xu, and W. Yin, "Improved iteratively reweighted least squares for unconstrained smoothed lq minimization," SIAM Journal on Numerical Analysis, vol. 51, no. 2, pp. 927–957, 2013.
  25. F. Cao, J. Chen, H. Ye, J. Zhao, and Z. Zhou, "Recovering low-rank and sparse matrix based on the truncated nuclear norm," Neural Networks, vol. 85, pp. 10–20, 2017.
  26. T.-H. Oh, Y.-W. Tai, J.-C. Bazin, H. Kim, and I. S. Kweon, "Partial sum minimization of singular values in robust PCA: algorithm and applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 4, pp. 744–758, 2016.
  27. R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from a few entries," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980–2998, 2010.
  28. Z. Wen, W. Yin, and Y. Zhang, "Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm," Mathematical Programming Computation, vol. 4, no. 4, pp. 333–361, 2012.
  29. K. Lee and Y. Bresler, "ADMiRA: atomic decomposition for minimum rank approximation," IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4402–4416, 2010.
  30. Z. Wang, M.-J. Lai, Z. Lu, W. Fan, H. Davulcu, and J. Ye, "Orthogonal rank-one matrix pursuit for low rank matrix completion," SIAM Journal on Scientific Computing, vol. 37, no. 1, pp. A488–A514, 2015.
  31. M. A. Davenport and J. Romberg, "An overview of low-rank matrix recovery from incomplete observations," IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 4, pp. 608–622, 2016.
  32. P. Rodriguez, "Total variation regularization algorithms for images corrupted with different noise models: a review," Journal of Electrical and Computer Engineering, vol. 2013, Article ID 217021, 18 pages, 2013.
  33. X. Han, J. Wu, L. Wang, Y. Chen, L. Senhadji, and H. Shu, "Linear total variation approximate regularized nuclear norm optimization for matrix completion," Abstract and Applied Analysis, vol. 2014, Article ID 765782, 8 pages, 2014.
  34. Y.-L. Chen, C.-T. Hsu, and H.-Y. M. Liao, "Simultaneous tensor decomposition and completion using factor priors," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 3, pp. 577–591, 2014.
  35. T. Yokota, Q. Zhao, and A. Cichocki, "Smooth PARAFAC decomposition for tensor completion," IEEE Transactions on Signal Processing, vol. 64, no. 20, pp. 5423–5436, 2016.
  36. T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
  37. K. Papafitsoros, C. B. Schoenlieb, and B. Sengul, "Combined first and second order total variation inpainting using split Bregman," Image Processing On Line, vol. 3, pp. 112–136, 2013.
  38. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
  39. P. Benner, R.-C. Li, and N. Truhar, "On the ADI method for Sylvester equations," Journal of Computational and Applied Mathematics, vol. 233, no. 4, pp. 1035–1045, 2009.
  40. Z. Lin, R. Liu, and Z. Su, "Linearized alternating direction method with adaptive penalty for low-rank representation," in Proceedings of the 25th Annual Conference on Neural Information Processing Systems, pp. 1–7, 2011.
  41. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.