Journal of Applied Mathematics
Volume 2016, Article ID 1659019, 5 pages
http://dx.doi.org/10.1155/2016/1659019
Research Article

A New Algorithm for Positive Semidefinite Matrix Completion

Fangfang Xu and Peng Pan

College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China

Received 29 June 2016; Accepted 22 September 2016

Academic Editor: Qing-Wen Wang

Copyright © 2016 Fangfang Xu and Peng Pan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Positive semidefinite matrix completion (PSDMC) aims to recover a positive semidefinite, low-rank matrix from a subset of its entries. It is widely applicable in many fields, such as statistical analysis and system control. The task can be carried out by solving the nuclear norm regularized linear least squares model with positive semidefinite constraints. We apply the widely used alternating direction method of multipliers to solve the model and obtain a novel algorithm. The applicability and efficiency of the new algorithm are demonstrated in numerical experiments, whose recovery results show that the proposed algorithm is effective.

1. Introduction

Matrix completion (MC) is the process of recovering the unknown or missing entries of a matrix. Under certain assumptions on the matrix, for example, that it is low-rank or approximately low-rank, the incomplete matrix can be reconstructed very well [1, 2]. Matrix completion is widely applicable in many fields, such as machine learning, statistical analysis, system control, and image and video processing [3], where low-rank or approximately low-rank matrices are widely used in model construction.

Recently, there has been extensive research on low-rank matrix completion (LRMC). The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear constraints. It is NP-hard (nondeterministic polynomial-time hard), however, due to the combinatorial nature of the rank function. References [1, 2] showed that the solution of LRMC can be found by solving a nuclear norm minimization problem under some reasonable conditions. The singular value thresholding (SVT) method [4] and the fixed point continuation method using approximate singular value decomposition (FPCA) [5] are two well-known algorithms because of their good recoverability, speed, and robustness. SVT applies linearized Bregman iterations to solve the unconstrained nuclear norm regularized linear least squares problem. FPCA builds on an iterative shrinkage-thresholding algorithm and uses a continuation technique together with an approximate singular value decomposition procedure to accelerate convergence. Reference [6] proposed an accelerated proximal gradient singular value thresholding algorithm. A completely different model was developed in LMaFit [7], a nonlinear successive overrelaxation algorithm that only requires solving a linear least squares problem per iteration. More details on LRMC can be found in [1, 8–10] and the references therein.

In practice, the completed matrix is often required to be positive semidefinite. For example, the covariance matrix and its inverse, the precision matrix, which are central to statistical analysis, are both positive semidefinite. Recently, there has also been extensive research on high-dimensional covariance matrix estimation. These developments motivate the study of positive semidefinite matrix completion (PSDMC). Reference [11] accomplished the matrix completion task under some special conditions and used the alternating direction method of multipliers (ADMM) [12–15] to solve the model. References [16, 17] proposed new models for nonnegative matrix completion and also used ADMM to solve them.

Our main contribution in this work is the development of an efficient algorithm for PSDMC. First of all, we present the nuclear norm regularized linear least squares model with positive semidefinite constraints; because of its robustness, we choose it as the model of PSDMC in this paper. The structure of the model suggests an alternating minimization scheme, which is very suitable for solving large-scale problems. We give an exact ADMM-based algorithm, whose subproblems are solved exactly. We test the new algorithm on two kinds of problems: random matrix completion problems and random low-rank approximation problems. Numerical experiments show that the proposed algorithm produces satisfactory results. The paper is organized as follows. Section 2 presents the model and algorithm for PSDMC. Numerical results are given in Section 3.

The following notation will be used throughout this paper. Uppercase (lowercase) letters are used for matrices (column vectors). All vectors are column vectors; the superscript $T$ denotes matrix and vector transposition. $\mathrm{Diag}(x)$ denotes a diagonal matrix with the vector $x$ on its main diagonal. $0$ is a matrix of all zeros of proper dimension, and $I$ stands for the identity matrix. The trace of $X$, that is, the sum of the diagonal elements of $X$, is denoted by $\mathrm{tr}(X)$. The Frobenius norm of $X$ is defined as $\|X\|_F = \big(\sum_{i,j} X_{ij}^2\big)^{1/2}$. The Euclidean inner product between two matrices $X$ and $Y$ is defined as $\langle X, Y \rangle = \mathrm{tr}(X^T Y)$. The inequality $X \succeq 0$ means that $X$ is positive semidefinite. The equality $X = Y$ means that $X_{ij} = Y_{ij}$ for all entries $(i,j)$.

2. ADMM-Based Methods for PSDMC

2.1. The Model of PSDMC

The matrix completion problem of recovering a positive semidefinite low-rank matrix from a subset of its entries is

$$\min_{X} \ \mathrm{rank}(X) \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i,j) \in \Omega, \ X \succeq 0, \quad (1)$$

where $X \in \mathbb{R}^{n \times n}$ is the decision variable and $\Omega$ is the index set of the known elements of $M$.

Let $P_\Omega$ be the projection onto the subspace of sparse matrices with nonzeros restricted to the index set $\Omega$; that is,

$$\left[P_\Omega(X)\right]_{ij} = \begin{cases} X_{ij}, & (i,j) \in \Omega, \\ 0, & \text{otherwise.} \end{cases}$$
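In code, this projection is simply an entrywise mask. A minimal NumPy sketch (the function name and the boolean `mask` standing in for the index set $\Omega$ are our own conventions, not the paper's):

```python
import numpy as np

def proj_omega(X, mask):
    """Keep the entries of X indexed by the boolean `mask` (the set
    Omega of observed entries) and zero out the rest."""
    return np.where(mask, X, 0.0)
```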

From the definition of $P_\Omega$, we can reformulate the equality constraint in model (1) as $P_\Omega(X) = P_\Omega(M)$. Due to the combinatorial nature of the objective function $\mathrm{rank}(X)$, model (1) is NP-hard in general. Inspired by the success of matrix completion under the nuclear norm in [1, 2, 10], we use the nuclear norm $\|X\|_*$ as an approximation to $\mathrm{rank}(X)$ and estimate the optimal solution of model (1) from the following model:

$$\min_{X} \ \|X\|_* \quad \text{s.t.} \quad P_\Omega(X) = P_\Omega(M), \ X \succeq 0, \quad (3)$$

where the nuclear norm of $X$ is defined as the sum of the singular values of $X$; that is,

$$\|X\|_* = \sum_{i} \sigma_i(X),$$

where $\sigma_i(X)$ is the $i$th largest singular value. Moreover, for a positive semidefinite matrix $X$, $\|X\|_* = \mathrm{tr}(X)$.
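The identity between the nuclear norm and the trace for positive semidefinite matrices is easy to check numerically; the small self-contained sketch below uses a random matrix that is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 2))
X = B @ B.T                      # positive semidefinite by construction

nuclear = np.linalg.svd(X, compute_uv=False).sum()   # sum of singular values
assert np.isclose(nuclear, np.trace(X))              # nuclear norm = trace for PSD X
```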

If the known elements of the matrix are noise-free, that is to say, $P_\Omega(M)$ is reliable, we can directly solve model (3) to conduct PSDMC. On the contrary, if the known elements are contaminated by noise, the constraints must be relaxed, resulting in the following problem:

$$\min_{X} \ \|X\|_* \quad \text{s.t.} \quad \left\|P_\Omega(X) - P_\Omega(M)\right\|_F \le \delta, \ X \succeq 0, \quad (5)$$

or the nuclear norm regularized linear least squares model with positive semidefinite constraints:

$$\min_{X} \ \mu \|X\|_* + \frac{1}{2}\left\|P_\Omega(X) - P_\Omega(M)\right\|_F^2 \quad \text{s.t.} \quad X \succeq 0. \quad (6)$$

Here, $\delta \ge 0$ and $\mu > 0$ are given parameters, whose values should be set according to the noise level. When the values of $\delta$ and $\mu$ are set properly, (5) and (6) are equivalent. Model (6) is usually preferred over (5) in the case of noisy observations. Our algorithm can be extended to treat (5) with minor modifications.

Actually, model (6) is especially useful in practice, because the known information is usually obtained from large surveys and is inevitably contaminated by sampling error. In this paper, we therefore choose model (6) to conduct PSDMC.

2.2. An ADMM-Based Method for Model (6)

In this subsection, we present an algorithm developed for model (6). To facilitate an efficient use of ADMM, we introduce one new matrix (splitting) variable $Y$ and consider an equivalent form of model (6):

$$\min_{X, Y} \ \mu \|X\|_* + \frac{1}{2}\left\|P_\Omega(Y) - P_\Omega(M)\right\|_F^2 \quad \text{s.t.} \quad X = Y, \ X \succeq 0, \quad (7)$$

where $X, Y \in \mathbb{R}^{n \times n}$. The augmented Lagrangian function of model (7) is

$$\mathcal{L}_\rho(X, Y, \Lambda) = \mu \|X\|_* + \frac{1}{2}\left\|P_\Omega(Y) - P_\Omega(M)\right\|_F^2 + \langle \Lambda, X - Y \rangle + \frac{\rho}{2}\left\|X - Y\right\|_F^2, \quad (8)$$

where $\Lambda$ is the Lagrangian multiplier and $\rho > 0$ is a penalty parameter.

The alternating direction method of multipliers for model (7) is derived by successively minimizing $\mathcal{L}_\rho$ with respect to $X$ and $Y$ in an alternating fashion, followed by a multiplier update; namely,

$$X^{k+1} = \arg\min_{X \succeq 0} \ \mathcal{L}_\rho(X, Y^k, \Lambda^k), \quad (9a)$$

$$Y^{k+1} = \arg\min_{Y} \ \mathcal{L}_\rho(X^{k+1}, Y, \Lambda^k), \quad (9b)$$

$$\Lambda^{k+1} = \Lambda^k + \rho\,(X^{k+1} - Y^{k+1}), \quad (9c)$$

where $k$ denotes the iteration counter.

By rearranging the terms of (9a), it is equivalent to

$$X^{k+1} = \arg\min_{X \succeq 0} \ \mu\,\mathrm{tr}(X) + \frac{\rho}{2}\left\|X - W^k\right\|_F^2,$$

where $W^k = Y^k - \Lambda^k/\rho$. Let its eigenvalue decomposition (EVD) be $W^k = Q\,\mathrm{Diag}(\lambda_1, \dots, \lambda_n)\,Q^T$, where $Q^T Q = I$. Define the shrinkage operator as

$$S_\nu(W^k) = Q\,\mathrm{Diag}\big(\max(\lambda_1 - \nu, 0), \dots, \max(\lambda_n - \nu, 0)\big)\,Q^T.$$

Then

$$X^{k+1} = S_{\mu/\rho}(W^k)$$

is an optimal solution of problem (9a).
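As a minimal illustration, the shrinkage operator can be written in a few lines of NumPy (the names are ours; the paper's implementation is in MATLAB):

```python
import numpy as np

def psd_shrink(W, tau):
    """Eigenvalue shrinkage: shift the eigenvalues of the symmetric
    matrix W down by tau and clip them at zero, so the result is
    positive semidefinite."""
    lam, Q = np.linalg.eigh(W)                    # EVD of symmetric W
    return (Q * np.maximum(lam - tau, 0.0)) @ Q.T # Q Diag(max(lam - tau, 0)) Q^T
```

For example, `psd_shrink(np.diag([3.0, 1.0, -2.0]), 1.0)` keeps a single nonzero eigenvalue 2 (shrunk from 3), while the eigenvalues 1 and -2 are clipped to zero.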

By rearranging the terms of (9b), it is equivalent to

$$Y^{k+1} = \arg\min_{Y} \ \frac{1}{2}\left\|P_\Omega(Y) - P_\Omega(M)\right\|_F^2 - \langle \Lambda^k, Y \rangle + \frac{\rho}{2}\left\|X^{k+1} - Y\right\|_F^2. \quad (13)$$

Model (13) can be split into two entrywise subproblems, one over $\Omega$ and one over $\Omega^c$, where $\Omega^c$ is the complement of $\Omega$:

$$Y^{k+1}_{ij} = \frac{M_{ij} + \Lambda^k_{ij} + \rho X^{k+1}_{ij}}{1 + \rho}, \quad (i,j) \in \Omega,$$

$$Y^{k+1}_{ij} = X^{k+1}_{ij} + \frac{\Lambda^k_{ij}}{\rho}, \quad (i,j) \in \Omega^c.$$

Combining the two subproblems gives the solution of (13).

In short, ADMM applied to model (7) yields the iteration

$$X^{k+1} = S_{\mu/\rho}\left(Y^k - \Lambda^k/\rho\right),$$

$$Y^{k+1} = P_\Omega\left(\frac{M + \Lambda^k + \rho X^{k+1}}{1 + \rho}\right) + P_{\Omega^c}\left(X^{k+1} + \frac{\Lambda^k}{\rho}\right),$$

$$\Lambda^{k+1} = \Lambda^k + \rho\,(X^{k+1} - Y^{k+1}).$$
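Putting the three updates together, the iteration can be sketched compactly in NumPy. This is a hedged illustration under our own sign conventions, with illustrative default parameters and variable names; it is not the authors' MATLAB implementation:

```python
import numpy as np

def psd_shrink(W, tau):
    """Eigenvalue shrinkage onto the PSD cone: max(eigenvalue - tau, 0)."""
    lam, Q = np.linalg.eigh(W)
    return (Q * np.maximum(lam - tau, 0.0)) @ Q.T

def admm_psd_complete(M, mask, mu=1e-2, rho=1.0, n_iters=500):
    """ADMM for min mu*tr(X) + 0.5*||P_Omega(Y - M)||_F^2  s.t. X = Y, X PSD.

    `mask` is a boolean array marking the observed entries (Omega);
    unobserved entries of M are ignored."""
    n = M.shape[0]
    X = np.zeros((n, n)); Y = np.zeros((n, n)); Lam = np.zeros((n, n))
    for _ in range(n_iters):
        # X-step: eigenvalue shrinkage of Y - Lam/rho at level mu/rho
        X = psd_shrink(Y - Lam / rho, mu / rho)
        # Y-step: entrywise closed form on Omega and its complement
        Y = np.where(mask, (M + Lam + rho * X) / (1.0 + rho),
                     X + Lam / rho)
        # multiplier update
        Lam = Lam + rho * (X - Y)
    return X
```

On a fully observed matrix, the iterates converge to the eigenvalue shrinkage of $M$ at level $\mu$, which makes the sketch easy to sanity-check.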

From the above considerations, we arrive at Algorithm 1.

Algorithm 1: An exact ADMM-based algorithm for PSDMC.

The convergence of ADMM and its variants for convex problems has been studied extensively. The interested reader is referred to [12–15, 18–20] and the references therein. Since model (6) is convex, the convergence of Algorithm 1 to a global optimal solution is guaranteed.

3. Numerical Results

In this section, we report on the application of our proposed ADMM-based algorithm to a series of matrix problems to demonstrate its ability. To illustrate the performance of our algorithmic approaches combined with different procedures, we test the following two solvers:

(1) ADMM-VI: the algorithm in [11].
(2) ADMM-SDP: Algorithm 1, the exact ADMM-based method for model (6).

We implement our algorithms in MATLAB. All the experiments are performed on a 2.20 GHz Intel Pentium PC with 6.0 GB of memory, running MATLAB 2012b.

3.1. Implementation and Parameters of the Two Solvers

We test the above two solvers on random positive semidefinite matrix problems, using the following procedure. Firstly, we create a low-rank positive semidefinite matrix $M$. Secondly, we select a subset of elements uniformly at random from the elements of $M$ and denote their index set by $\Omega$; the index set of unknown elements is denoted by $\Omega^c$. The ratio between the number of measurements and the total number of entries in the matrix is denoted by "SR" (sampling ratio). We then use $P_\Omega(M)$ to complete $M$; the result is denoted by $X$. Finally, we check the differences between $X$ and the true matrix $M$.
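The final comparison step is typically done with the relative Frobenius error; the metric below is a standard choice and an assumption on our part, since the excerpt does not spell out the exact error measure used in the tables:

```python
import numpy as np

def rel_error(X, M):
    """Relative recovery error: Frobenius norm of (X - M) over that of M."""
    return np.linalg.norm(X - M) / np.linalg.norm(M)
```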

The most important algorithmic parameters in Algorithm 1 are the regularization parameter $\mu$, the penalty parameter $\rho$, the stopping tolerance, and the maximal number of iterations. In our implementation, all of them except $\mu$ are fixed. The value of $\mu$, however, is very difficult to set, since it can be neither too large nor too small; it is usually chosen to be a moderate value. Three sets of results, computed with three different values of $\mu$, are shown in Table 1.

Table 1: Numerical results for different values of $\mu$ (fixed $n$, $r$, and SR).

We can use the continuation technique employed in [5, 6] to accelerate the convergence of ADMM-SDP. If model (6) is to be solved with a target parameter value $\bar{\mu}$, we propose solving a sequence of instances of model (6) defined by a decreasing sequence $\mu_0 > \mu_1 > \cdots > \bar{\mu}$. When a new problem, associated with $\mu_{k+1}$, is to be solved, the approximate solution of the current problem with $\mu_k$ is used as the starting point. In our numerical experiments in Section 3.2, we set an initial value $\mu_0 > \bar{\mu}$ and decrease $\mu_k$ toward $\bar{\mu}$ as the iterations proceed. Algorithm 1 is stopped as soon as three conditions, measuring the relative change of the iterates and the feasibility residual $\|X^k - Y^k\|_F$, all fall below prescribed tolerances.
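The continuation strategy itself is independent of the inner solver and can be sketched generically. The driver below and its parameter names are illustrative; `solve` stands for any single-parameter solver that accepts a warm start:

```python
def continuation(solve, mu_target, mu0=1.0, eta=0.5):
    """Solve a sequence of problems with geometrically decreasing mu,
    feeding each approximate solution in as the next starting point."""
    mu, X = mu0, None
    while True:
        X = solve(mu, X)                  # warm start from the previous solution
        if mu <= mu_target:
            return X
        mu = max(eta * mu, mu_target)     # geometric decrease, floored at target
```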

3.2. Experiments on Random Matrix Completion Problems

The matrix $M$ with rank $r$ in this subsection is created randomly by the following procedure (see also [5, 7]): two random matrices, $B \in \mathbb{R}^{n \times r}$ with i.i.d. standard Gaussian entries and a Gaussian noise matrix $N \in \mathbb{R}^{n \times n}$ with noise level $\sigma$, are first generated. Then $M = BB^T + \sigma (N + N^T)/2$ is assembled. From the creation process of $M$, its rank is no larger than $r$ when $\sigma = 0$.
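A possible reconstruction of this generation procedure in NumPy. The exact noise scaling used in the experiments is not specified in this excerpt, so the symmetrized noise term below is an assumption, as are the function and variable names:

```python
import numpy as np

def make_test_problem(n, r, sr, sigma=0.0, seed=0):
    """Random PSD test matrix of rank at most r (exactly, when sigma == 0),
    plus a uniform random sampling mask with sampling ratio `sr`."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((n, r))
    N = rng.standard_normal((n, n))
    M = B @ B.T + sigma * (N + N.T) / 2.0   # symmetrized noise (assumed form)
    mask = rng.random((n, n)) < sr          # index set Omega of observed entries
    return M, mask
```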

The computational results of positive semidefinite matrix completion are presented in Table 2. We can observe that our proposed solver performs as well as ADMM-VI. Both of them can output satisfactory results.

Table 2: Numerical results on medium randomly created matrix completion problems.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors would like to thank Professor Zaiwen Wen for the discussions on matrix completion. They thank Professor Xiaoming Yuan for offering the original codes of ADMM-VI. The work was supported in part by Scientific Research Foundation of Shandong University of Science and Technology for Recruited Talents (no. 2015RCJJ056).

References

  1. E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717–772, 2009.
  2. B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
  3. H. Ji, C. Liu, Z. Shen, and Y. Xu, "Robust video denoising using low rank matrix completion," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1791–1798, IEEE, San Francisco, Calif, USA, June 2010.
  4. J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
  5. S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.
  6. K.-C. Toh and S. Yun, "An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems," Pacific Journal of Optimization, vol. 6, no. 3, pp. 615–640, 2010.
  7. Z. Wen, W. Yin, and Y. Zhang, "Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm," Mathematical Programming Computation, vol. 4, no. 4, pp. 333–361, 2012.
  8. M. Fazel, Matrix rank minimization with applications [Ph.D. thesis], Stanford University, Stanford, Calif, USA, 2002.
  9. D. Goldfarb, S. Ma, and Z. Wen, "Solving low-rank matrix completion problems efficiently," in Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton '09), pp. 1013–1020, Monticello, Ill, USA, September 2009.
  10. E. J. Candès and T. Tao, "The power of convex relaxation: near-optimal matrix completion," IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2053–2080, 2010.
  11. C. Chen, B. He, and X. Yuan, "Matrix completion via an alternating direction method," IMA Journal of Numerical Analysis, vol. 32, no. 1, pp. 227–245, 2012.
  12. M. Fortin and R. Glowinski, Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary Value Problems, Studies in Mathematics and Its Applications, North-Holland, Amsterdam, The Netherlands, 1983, translated from the French by B. Hunt and D. C. Spicer.
  13. B. S. He, H. Yang, and S. L. Wang, "Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities," Journal of Optimization Theory and Applications, vol. 106, no. 2, pp. 337–356, 2000.
  14. Z. Wen, D. Goldfarb, and W. Yin, "Alternating direction augmented Lagrangian methods for semidefinite programming," Mathematical Programming Computation, vol. 2, no. 3-4, pp. 203–230, 2010.
  15. W. Deng and W. Yin, "On the global and linear convergence of the generalized alternating direction method of multipliers," Tech. Rep., Rice University CAAM, 2012.
  16. Y. Xu, W. Yin, Z. Wen, and Y. Zhang, "An alternating direction algorithm for matrix completion with nonnegative factors," Frontiers of Mathematics in China, vol. 7, no. 2, pp. 365–384, 2012.
  17. F. Xu and G. He, "New algorithms for nonnegative matrix completion," Pacific Journal of Optimization, vol. 11, no. 3, pp. 459–469, 2015.
  18. L. M. Briceño-Arias and P. L. Combettes, "A monotone+skew splitting model for composite monotone inclusions in duality," SIAM Journal on Optimization, vol. 21, no. 4, pp. 1230–1250, 2011.
  19. R. Glowinski and P. Le Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, vol. 9 of SIAM Studies in Applied Mathematics, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1989.
  20. B. He and X. Yuan, "Linearized alternating direction method of multipliers with Gaussian back substitution for separable convex programming," Numerical Algebra, Control and Optimization, vol. 3, no. 2, pp. 247–260, 2013.