Journal of Applied Mathematics

Volume 2016, Article ID 1659019, 5 pages

http://dx.doi.org/10.1155/2016/1659019

## A New Algorithm for Positive Semidefinite Matrix Completion

Fangfang Xu and Peng Pan

College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China

Received 29 June 2016; Accepted 22 September 2016

Academic Editor: Qing-Wen Wang

Copyright © 2016 Fangfang Xu and Peng Pan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Positive semidefinite matrix completion (PSDMC) aims to recover a positive semidefinite, low-rank matrix from a subset of its entries. It is widely applicable in many fields, such as statistical analysis and system control. This task can be conducted by solving the nuclear norm regularized linear least squares model with positive semidefinite constraints. We apply the widely used alternating direction method of multipliers (ADMM) to solve this model and obtain a novel algorithm. The applicability and efficiency of the new algorithm are demonstrated in numerical experiments. Recovery results show that our algorithm is effective.

#### 1. Introduction

Matrix completion (MC) is the process of recovering the unknown or missing elements of a matrix. Under certain assumptions on the matrix, for example, that it is low-rank or approximately low-rank, the incomplete matrix can be reconstructed very well [1, 2]. Matrix completion is widely applicable in many fields, such as machine learning, statistical analysis, system control, and image and video processing [3], where low-rank or approximately low-rank matrices are widely used in model construction.

Recently, there has been extensive research on the problem of low-rank matrix completion (LRMC). The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear constraints. However, it is NP-hard (nondeterministic polynomial-time hard) due to the combinatorial nature of the rank function. References [1, 2] showed that the solution of LRMC can be found by solving a nuclear norm minimization problem under some reasonable conditions. The singular value thresholding (SVT) method [4] and the fixed point continuation method using approximate singular value decomposition (FPCA) [5] are two well-known algorithms, notable for their good recoverability, fast speed, and robustness. SVT applies linearized Bregman iterations to solve the unconstrained nuclear norm regularized linear least squares problem. FPCA uses an iterative shrinkage-thresholding scheme, accelerated by a continuation technique together with an approximate singular value decomposition procedure. Reference [6] proposed an accelerated proximal gradient singular value thresholding algorithm. A completely different model was developed in LMaFit [7], a nonlinear successive overrelaxation algorithm that only requires solving a linear least squares problem per iteration. More details on LRMC can be found in [1, 8–10] and the references therein.

In practice, the completed matrix is often required to be positive semidefinite. For example, in statistical analysis the covariance matrix and its inverse, the precision matrix, are both positive semidefinite. Recently, there has also been extensive research on high-dimensional covariance matrix estimation. These applications motivate the development of positive semidefinite matrix completion (PSDMC). Reference [11] accomplished the matrix completion task under some special conditions and used the alternating direction method of multipliers (ADMM) [12–15] to solve the model. References [16, 17] proposed new models for nonnegative matrix completion and also used ADMM to solve them.

Our main contribution in this work is the development of an efficient algorithm for PSDMC. First, we present the nuclear norm regularized linear least squares model with positive semidefinite constraints; because of its robustness, we choose it as the model of PSDMC in this paper. The structure of the model suggests an alternating minimization scheme, which is very suitable for solving large-scale problems. We give an exact ADMM-based algorithm, whose subproblems are solved exactly. We test the new ADMM-based algorithm on two kinds of problems: random matrix completion problems and random low-rank approximation problems. Numerical experiments show that our proposed algorithm achieves satisfactory results. The paper is organized as follows. Section 2 presents the model and algorithm for PSDMC. Some numerical results are given in Section 3.

The following notation will be used throughout this paper. Uppercase (lowercase) letters are used for matrices (column vectors). All vectors are column vectors; the superscript $T$ denotes matrix and vector transposition. $\operatorname{Diag}(x)$ denotes a diagonal matrix with the vector $x$ on its main diagonal. $0$ is a matrix of all zeros of proper dimension; $I$ stands for the identity matrix. The trace of $X$, that is, the sum of the diagonal elements of $X$, is denoted by $\operatorname{tr}(X)$. The Frobenius norm of $X$ is defined as $\|X\|_F = \sqrt{\sum_{i,j} X_{ij}^2}$. The Euclidean inner product between two matrices $X$ and $Y$ is defined as $\langle X, Y \rangle = \operatorname{tr}(X^T Y)$. The inequality $X \succeq 0$ means that $X$ is positive semidefinite. The equality $X = Y$ means that $X_{ij} = Y_{ij}$ for all entries $(i, j)$.

#### 2. ADMM-Based Methods for PSDMC

##### 2.1. The Model of PSDMC

The matrix completion problem of recovering a positive semidefinite low-rank matrix $M \in \mathbb{R}^{n \times n}$ from a subset of its entries is

$$\min_{X} \ \operatorname{rank}(X) \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i,j) \in \Omega, \quad X \succeq 0, \tag{1}$$

where $X \in \mathbb{R}^{n \times n}$ is the decision variable and $\Omega$ is the index set of known elements of $M$.

Let $\mathcal{P}_\Omega$ be the projection onto the subspace of sparse matrices with nonzeros restricted to the index set $\Omega$; that is,

$$[\mathcal{P}_\Omega(X)]_{ij} = \begin{cases} X_{ij}, & (i,j) \in \Omega, \\ 0, & \text{otherwise.} \end{cases} \tag{2}$$

From the definition of $\mathcal{P}_\Omega$, we can reformulate the equality constraint in model (1) as $\mathcal{P}_\Omega(X) = \mathcal{P}_\Omega(M)$. Due to the combinatorial nature of the objective function $\operatorname{rank}(X)$, model (1) is NP-hard in general. Inspired by the success of matrix completion under the nuclear norm in [1, 2, 10], we use the nuclear norm as a convex approximation to $\operatorname{rank}(X)$ and estimate the optimal solution of model (1) from the following model:

$$\min_{X} \ \|X\|_* \quad \text{s.t.} \quad \mathcal{P}_\Omega(X) = \mathcal{P}_\Omega(M), \quad X \succeq 0, \tag{3}$$

where the nuclear norm of $X$ is defined as the summation of the singular values of $X$; that is,

$$\|X\|_* = \sum_{i} \sigma_i(X), \tag{4}$$

where $\sigma_i(X)$ is the $i$th largest singular value of $X$. Moreover, for a positive semidefinite matrix $X$, $\|X\|_* = \operatorname{tr}(X)$.
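The two ingredients above, the sampling projection $\mathcal{P}_\Omega$ and the nuclear norm, can be sketched in a few lines of NumPy. This is our own illustration (the function `proj_omega` and the boolean `mask` encoding of $\Omega$ are not from the paper); it also checks numerically that the nuclear norm of a positive semidefinite matrix equals its trace:

```python
import numpy as np

def proj_omega(X, mask):
    """P_Omega: keep the entries indexed by the boolean mask, zero out the rest."""
    return np.where(mask, X, 0.0)

# For a positive semidefinite X, all singular values are eigenvalues,
# so the nuclear norm (sum of singular values) equals the trace.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = A @ A.T                            # positive semidefinite by construction
nuc = np.linalg.norm(X, ord='nuc')     # sum of singular values
assert abs(nuc - np.trace(X)) < 1e-8
```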

If the known elements of the matrix are noise-free, that is to say, $\mathcal{P}_\Omega(M)$ is reliable, we can directly solve model (3) to conduct PSDMC. On the contrary, if the set of known elements is contaminated by noise, the equality constraints must be relaxed, resulting in the following problem:

$$\min_{X} \ \|X\|_* \quad \text{s.t.} \quad \|\mathcal{P}_\Omega(X) - \mathcal{P}_\Omega(M)\|_F \le \delta, \quad X \succeq 0, \tag{5}$$

or the nuclear norm regularized linear least squares model with positive semidefinite constraints:

$$\min_{X} \ \mu \|X\|_* + \frac{1}{2} \|\mathcal{P}_\Omega(X) - \mathcal{P}_\Omega(M)\|_F^2 \quad \text{s.t.} \quad X \succeq 0. \tag{6}$$

Here, $\delta \ge 0$ and $\mu > 0$ are given parameters, whose values should be set according to the noise level. When the values of $\delta$ and $\mu$ are set properly, (5) and (6) are equivalent. Model (6) is usually preferred over (5) for the case of noisy observations. Our algorithm can be extended to treat (5) with minor modifications.

Actually, model (6) is especially useful in practice. The reason is that the known information is usually obtained from large surveys and is inevitably contaminated by sampling error. In this paper, we choose model (6) as the model for PSDMC.

##### 2.2. An ADMM-Based Method for Model (6)

In this subsection, we present an algorithm developed for model (6). To facilitate an efficient use of ADMM, we introduce one new (splitting) matrix variable $Y$ and consider an equivalent form of model (6):

$$\min_{X, Y} \ \mu \|X\|_* + \frac{1}{2} \|\mathcal{P}_\Omega(Y) - \mathcal{P}_\Omega(M)\|_F^2 \quad \text{s.t.} \quad X = Y, \quad X \succeq 0, \tag{7}$$

where $X, Y \in \mathbb{R}^{n \times n}$. The augmented Lagrangian function of model (7) is

$$\mathcal{L}(X, Y, \Lambda) = \mu \|X\|_* + \frac{1}{2} \|\mathcal{P}_\Omega(Y) - \mathcal{P}_\Omega(M)\|_F^2 + \langle \Lambda, X - Y \rangle + \frac{\beta}{2} \|X - Y\|_F^2, \tag{8}$$

where $\Lambda \in \mathbb{R}^{n \times n}$ is a Lagrangian multiplier and $\beta > 0$ is a penalty parameter.

The alternating direction method of multipliers for model (7) is derived by successively minimizing $\mathcal{L}(X, Y, \Lambda)$ with respect to $X$ and $Y$ in an alternating fashion and then updating the multiplier; namely,

$$X^{k+1} = \arg\min_{X \succeq 0} \ \mathcal{L}(X, Y^k, \Lambda^k), \tag{9a}$$

$$Y^{k+1} = \arg\min_{Y} \ \mathcal{L}(X^{k+1}, Y, \Lambda^k), \tag{9b}$$

$$\Lambda^{k+1} = \Lambda^k + \beta (X^{k+1} - Y^{k+1}), \tag{9c}$$

where $k$ is the iteration counter.

By rearranging the terms of (9a) and using $\|X\|_* = \operatorname{tr}(X)$ for $X \succeq 0$, it is equivalent to

$$X^{k+1} = \arg\min_{X \succeq 0} \ \mu \operatorname{tr}(X) + \frac{\beta}{2} \|X - B^k\|_F^2, \tag{10}$$

where $B^k = Y^k - \Lambda^k / \beta$. Let its eigenvalue decomposition (EVD) be $B^k = Q \operatorname{Diag}(\lambda) Q^T$, where $Q$ is orthogonal and $\lambda = (\lambda_1, \ldots, \lambda_n)^T$. Define the shrinkage operator, applied componentwise, as

$$s_\nu(\lambda) = \max(\lambda - \nu, 0). \tag{11}$$

Then

$$X^{k+1} = Q \operatorname{Diag}\big(s_{\mu/\beta}(\lambda)\big) Q^T \tag{12}$$

is an optimal solution of problem (9a).
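The eigenvalue-shrinkage solution of the $X$-subproblem can be sketched as follows. This is a minimal illustration assuming all iterates are symmetric (so the EVD exists); the function name `x_update` is our own:

```python
import numpy as np

def x_update(B, mu, beta):
    """Solve min_{X >= 0} mu*tr(X) + (beta/2)*||X - B||_F^2 by eigenvalue shrinkage.
    B is assumed symmetric (in the ADMM iteration, B = Y^k - Lambda^k / beta)."""
    lam, Q = np.linalg.eigh(B)                   # EVD: B = Q diag(lam) Q^T
    lam_shrunk = np.maximum(lam - mu / beta, 0)  # shrink, then project onto the PSD cone
    return (Q * lam_shrunk) @ Q.T                # scale columns of Q by shrunken eigenvalues
```

For example, with `B = diag(2, -1)`, `mu = beta = 1`, the eigenvalues shrink to `(1, 0)`, so the result is `diag(1, 0)`: the negative eigenvalue is removed by the positive semidefinite constraint.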

By rearranging the terms of (9b), it is equivalent to

$$Y^{k+1} = \arg\min_{Y} \ \frac{1}{2} \|\mathcal{P}_\Omega(Y) - \mathcal{P}_\Omega(M)\|_F^2 + \frac{\beta}{2} \|Y - C^k\|_F^2, \tag{13}$$

where $C^k = X^{k+1} + \Lambda^k / \beta$. Model (13) can be split into two entrywise subproblems:

$$\min_{Y_{ij}} \ \frac{1}{2} (Y_{ij} - M_{ij})^2 + \frac{\beta}{2} (Y_{ij} - C_{ij}^k)^2, \quad (i,j) \in \Omega, \tag{14a}$$

$$\min_{Y_{ij}} \ \frac{\beta}{2} (Y_{ij} - C_{ij}^k)^2, \quad (i,j) \in \Omega^c, \tag{14b}$$

where $\Omega^c$ is the complement of $\Omega$. Finally, solving the above two subproblems, we get the solution of (13):

$$Y^{k+1} = \mathcal{P}_\Omega\left(\frac{M + \beta C^k}{1 + \beta}\right) + \mathcal{P}_{\Omega^c}(C^k). \tag{15}$$
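The $Y$-subproblem is separable over the entries and has the closed form above: a weighted average of the observation and the current point on $\Omega$, and a plain copy off $\Omega$. A minimal sketch (the name `y_update` and the boolean-mask encoding of $\Omega$ are our own):

```python
import numpy as np

def y_update(C, M, mask, beta):
    """Closed-form minimizer of 0.5*||P_Omega(Y - M)||_F^2 + (beta/2)*||Y - C||_F^2:
    weighted average (M + beta*C)/(1 + beta) on Omega, copy of C elsewhere."""
    Y = C.copy()
    Y[mask] = (M[mask] + beta * C[mask]) / (1.0 + beta)
    return Y
```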

In short, ADMM applied to model (7) yields the iteration

$$\begin{aligned} X^{k+1} &= Q \operatorname{Diag}\big(s_{\mu/\beta}(\lambda)\big) Q^T, \\ Y^{k+1} &= \mathcal{P}_\Omega\left(\frac{M + \beta C^k}{1 + \beta}\right) + \mathcal{P}_{\Omega^c}(C^k), \\ \Lambda^{k+1} &= \Lambda^k + \beta (X^{k+1} - Y^{k+1}). \end{aligned} \tag{16}$$

From the above considerations, we arrive at Algorithm 1.
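The whole iteration can be assembled into a short NumPy sketch. This is our own illustration of the scheme, not the authors' code: we assume $M$ is symmetric with a symmetric index set $\Omega$ (so all iterates stay symmetric), and the stopping rule, default parameters, and function name `psdmc_admm` are our choices:

```python
import numpy as np

def psdmc_admm(M, mask, mu=1.0, beta=1.0, max_iter=500, tol=1e-6):
    """ADMM sketch for min_{X >= 0} mu*||X||_* + 0.5*||P_Omega(X - M)||_F^2.
    mask is a boolean array marking the observed entries Omega (assumed symmetric)."""
    n = M.shape[0]
    Y = np.where(mask, M, 0.0)       # initialize with the observed entries
    Lam = np.zeros((n, n))
    X = Y.copy()
    for _ in range(max_iter):
        # X-subproblem: eigenvalue shrinkage of B = Y - Lam/beta
        lam, Q = np.linalg.eigh(Y - Lam / beta)
        X = (Q * np.maximum(lam - mu / beta, 0.0)) @ Q.T
        # Y-subproblem: closed form, entrywise, with C = X + Lam/beta
        C = X + Lam / beta
        Y_new = C.copy()
        Y_new[mask] = (M[mask] + beta * C[mask]) / (1.0 + beta)
        # multiplier update
        Lam = Lam + beta * (X - Y_new)
        # simple relative-change stopping rule (our choice)
        if np.linalg.norm(Y_new - Y, 'fro') < tol * max(1.0, np.linalg.norm(Y, 'fro')):
            Y = Y_new
            break
        Y = Y_new
    return X
```

With all entries observed and a small $\mu$, the output is close to the original positive semidefinite matrix, since the minimizer then simply shrinks each eigenvalue by $\mu$.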