Mathematical Problems in Engineering

Volume 2016, Article ID 7473041, 13 pages

http://dx.doi.org/10.1155/2016/7473041

## An Efficient Method for Convex Constrained Rank Minimization Problems Based on DC Programming

School of Economics and Finance, Xi’an Jiaotong University, Xi’an 710061, China

Received 26 January 2016; Revised 19 May 2016; Accepted 2 June 2016

Academic Editor: Srdjan Stankovic

Copyright © 2016 Wanping Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The constrained rank minimization problem has applications in many fields, including machine learning, control, and signal processing. In this paper, we consider the convex constrained rank minimization problem. By introducing a new variable and penalizing the resulting equality constraint in the objective function, we use a closed-form solution to reformulate the convex objective with a rank constraint as a difference of convex functions, that is, as a DC program. A stepwise linear approximation algorithm is provided for solving the reformulated model. The performance of our method is tested by applying it to affine rank minimization problems and max-cut problems. Numerical results demonstrate that the method is effective with high recoverability on affine rank minimization; the max-cut results show that the method is feasible, providing better lower bounds and lower rank solutions than an improved approximation algorithm based on semidefinite programming, and they are close to the results of the latest research.

#### 1. Introduction

In recent decades, with the growth of data acquisition systems and data services, the explosion of data has posed challenges to storage, transmission, and processing, as well as device design. The burgeoning theory of sparse recovery reveals an outstanding ability to sense large-scale, high-dimensional data. Recently, a new theory called compressed sensing (or compressive sampling, CS) has appeared and has attracted wide attention in the signal processing community. This approach acquires signals while properly compressing the data. Its sampling frequency is lower than the Nyquist rate, which makes the collection of high-resolution signals possible. One noticeable merit of this approach is its ability to combine traditional data collection and data compression into a single step when the signals admit sparse representations. As a result, the sampling frequency, the time and computational cost of data processing, and the expense of data storage and transmission are all greatly reduced, leading signal processing into a new age. Like CS, low rank matrix theory can also solve signal reconstruction problems arising in theory and practice: it validly applies dimension-reduction treatments to unknown multidimensional signals and then uses sparsity in a broader sense, reconstructing the original high-dimensional signal array exactly or approximately from a small number of values obtained by sparse (dimension-reducing) sampling. Since both theories exploit the intrinsic sparsity of signals, we generally view them as sparse recovery problems. The main goal of sparse recovery is to recover a high-dimensional sparse signal from few linear measurements. Its core consists of three steps: sparse representation of the signal, linear dimension-reducing measurement, and nonlinear reconstruction of the signal.
The three parts of sparse recovery theory are closely related, and each part has a significant impact on the quality of signal reconstruction. The nonlinear reconstruction step, in particular, can be viewed as a special case of the rank minimization problem.

Constrained rank minimization problems, which aim to find low rank solutions of a system, have attracted a great deal of attention over recent years. The problems are widely applicable in a variety of fields including machine learning [1, 2], control [3, 4], Euclidean embedding [5], image processing [6], and finance [7, 8], to name but a few. There are some typical examples of constrained rank minimization problems. One is the affine matrix rank minimization problem [5, 9, 10]

$$\min_{X \in \mathbb{R}^{m \times n}} \ \operatorname{rank}(X) \quad \text{s.t.} \ \mathcal{A}(X) = b, \tag{1}$$

where $X \in \mathbb{R}^{m \times n}$ is the decision variable and the linear map $\mathcal{A} : \mathbb{R}^{m \times n} \to \mathbb{R}^{p}$ and vector $b \in \mathbb{R}^{p}$ are known. Problem (1) includes matrix completion and compressed sensing as special cases. Matrix completion is widely used in collaborative filtering [11, 12], system identification [13, 14], sensor networks [15, 16], image processing [17, 18], sparse channel estimation [19, 20], spectrum sensing [21, 22], and multimedia coding [23, 24]. Another is the max-cut problem [25] in combinatorial optimization

$$\min_{X} \ \operatorname{tr}(WX) \quad \text{s.t.} \ \operatorname{diag}(X) = e, \ X \succeq 0, \ \operatorname{rank}(X) = 1, \tag{2}$$

where $X = xx^{T}$, $x \in \{-1, 1\}^{n}$, $e = (1, \ldots, 1)^{T}$, $W$ is the weight matrix, and so on. As a matter of fact, these problems can be written in a common convex constrained rank minimization form; that is,

$$\min_{X} \ f(X) \quad \text{s.t.} \ \operatorname{rank}(X) \le r, \ X \in \Omega, \tag{3}$$

where $f$ is a continuously differentiable convex function, $\operatorname{rank}(X)$ is the rank of a matrix $X$, $r$ is a given integer, and $\Omega$ is a closed convex set, more specifically a closed unitarily invariant convex set. According to [26], a set $\mathcal{X}$ is a unitarily invariant set if

$$\left\{ UXV : U \in \mathcal{U}^{m}, \ V \in \mathcal{U}^{n}, \ X \in \mathcal{X} \right\} = \mathcal{X}, \tag{4}$$

where $\mathcal{U}^{m}$ denotes the set of all unitary matrices in $\mathbb{R}^{m \times m}$. Various types of methods have been proposed to solve (3) or its special cases with different types of $\Omega$.

When $\Omega$ is $\mathbb{R}^{m \times n}$, (3) is the unconstrained rank minimization problem. One approach is to solve this case by convex relaxation of the rank function: Cai et al. [9] proposed the singular value thresholding (SVT) algorithm, and Ma et al. [10] proposed the fixed point continuation (FPC) algorithm. Another approach is to handle the rank constraint directly: Jain et al. [27] proposed the simple and efficient Singular Value Projection (SVP) algorithm based on the projected gradient method; Haldar and Hernando [28] proposed an alternating least squares approach; Keshavan et al. [29] proposed an algorithm based on optimization over a Grassmann manifold.

When $\Omega$ is a general convex set, (3) is a convex constrained rank minimization problem. One approach replaces the rank constraint with other constraints, such as a trace constraint or a norm constraint. Based on this, several heuristic methods have been proposed in the literature; for example, Nesterov et al. [30] proposed interior-point polynomial algorithms, and Weimin [31] proposed an adaptive seminuclear norm regularization approach; in addition, semidefinite programming (SDP) has also been applied; see [32]. Another approach handles the rank constraint directly: Grigoriadis and Beran [33] proposed alternating projection algorithms for linear matrix inequality problems; Mohseni et al. [24] proposed the penalty decomposition (PD) method based on penalty functions; Gao and Sun [34] recently proposed a majorized penalty approach; Burer et al. [35] proposed a nonlinear programming (NLP) reformulation approach.

In this paper, we consider problem (3). We introduce a new variable and penalize the resulting equality constraint in the objective function. With this technique, we obtain a closed-form solution of a subproblem and use it to reformulate the objective function of (3) as a difference of convex functions; then (3) becomes a DC program that can be solved via a linear approximation method. A function is called DC if it can be represented as the difference of two convex functions; mathematical programming problems dealing with DC functions are called DC programming problems. Our method differs from the original PD method [26] in two respects: first, we penalize only the equality constraint in the objective function and keep the other constraints, while PD penalizes everything except the rank constraint, including both equality and inequality constraints; second, each subproblem of PD is approximately solved by a block coordinate descent method, whereas we approximate problem (3) by a convex optimization problem that is then solved. Compared with the PD method, our method uses the closed-form solution to remove the rank constraint, while PD must handle the rank constraint in each iteration; the rank constraint is nonconvex and discontinuous, and thus not easy to deal with. Because only convex constraints remain, the reformulated program can be solved as a sequence of convex programs: in each step we replace the second convex function by its linearization, which yields a convex program. We test the performance of our method by applying it to affine rank minimization problems and max-cut problems. Numerical results show that the algorithm is feasible and effective.

The rest of this paper is organized as follows. In Section 1.1, we introduce the notation that is used throughout the paper. Our method is shown in Section 2. In Section 3, we apply it to some practical problems to test its performance.

##### 1.1. Notations

In this paper, the symbol $\mathbb{R}^{n}$ denotes the $n$-dimensional Euclidean space, and the set of all $m \times n$ matrices with real entries is denoted by $\mathbb{R}^{m \times n}$. Given matrices $X$ and $Y$ in $\mathbb{R}^{m \times n}$, the standard inner product is defined by $\langle X, Y \rangle = \operatorname{tr}(X^{T}Y)$, where $\operatorname{tr}(\cdot)$ denotes the trace of a matrix. The Frobenius norm of a real matrix $X$ is defined as $\|X\|_{F} = \sqrt{\operatorname{tr}(X^{T}X)}$. $\|X\|_{*}$ denotes the nuclear norm of $X$, that is, the sum of the singular values of $X$. The rank of a matrix $X$ is denoted by $\operatorname{rank}(X)$. We use $I_{n}$ to denote the identity matrix of dimension $n$. Throughout this paper, we always assume that the singular values are arranged in nonincreasing order, that is, $\sigma_{1} \ge \sigma_{2} \ge \cdots \ge \sigma_{\min\{m,n\}} \ge 0$. $\partial f$ denotes the subdifferential of the function $f$. We denote the adjoint of the linear map $\mathcal{A}$ by $\mathcal{A}^{*}$. $\|\cdot\|_{(k)}$ is used to denote the Ky Fan $k$-norm. Let $P_{\Omega}(\cdot)$ be the projection onto the closed convex set $\Omega$.

#### 2. An Efficient Algorithm Based on DC Programming

In this section, we first reformulate problem (3) as a DC programming problem over the convex constraint set, which can be solved via a stepwise linear approximation method. Then, the convergence analysis is provided.

##### 2.1. Model Transformation

Introduce a new variable $Y$ with $X = Y$. Problem (3) is rewritten as

$$\min_{X, Y} \ f(X) \quad \text{s.t.} \ X = Y, \ \operatorname{rank}(Y) \le r, \ X \in \Omega. \tag{5}$$

Adopting the penalty function method to penalize the equality constraint $X = Y$ in the objective function and choosing a proper penalty parameter $\rho > 0$, (5) can be reformulated as

$$\min_{X \in \Omega, \ \operatorname{rank}(Y) \le r} \ f(X) + \rho \left\| X - Y \right\|_{F}^{2}. \tag{6}$$

Solving for $Y$ with $X$ fixed, (6) can be treated as

$$\min_{X \in \Omega} \ f(X) + \rho \min_{\operatorname{rank}(Y) \le r} \left\| X - Y \right\|_{F}^{2}. \tag{7}$$

In fact, given the singular value decomposition $X = U \Sigma V^{T}$ with singular values $\sigma_{1}(X) \ge \sigma_{2}(X) \ge \cdots \ge \sigma_{\min\{m,n\}}(X)$, where $U$ and $V$ are orthogonal matrices, the minimization with respect to $Y$ with $X$ fixed in (7), that is, $\min_{\operatorname{rank}(Y) \le r} \|X - Y\|_{F}^{2}$, has the closed-form solution

$$Y^{*} = \sum_{i=1}^{r} \sigma_{i}(X) \, u_{i} v_{i}^{T}, \tag{8}$$

where $u_{i}$ and $v_{i}$ are the $i$th columns of $U$ and $V$, respectively. Thus, by the closed-form solution and the unitary invariance of the Frobenius norm, we have

$$\min_{\operatorname{rank}(Y) \le r} \left\| X - Y \right\|_{F}^{2} = \sum_{i = r+1}^{\min\{m,n\}} \sigma_{i}^{2}(X) = \left\| X \right\|_{F}^{2} - \sum_{i=1}^{r} \sigma_{i}^{2}(X). \tag{9}$$

Substituting the above into problem (7), problem (3) is reformulated as

$$\min_{X \in \Omega} \ f(X) + \rho \left( \left\| X \right\|_{F}^{2} - \sum_{i=1}^{r} \sigma_{i}^{2}(X) \right). \tag{10}$$
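The closed-form solution of the inner minimization is the classical best rank-$r$ (Eckart-Young) approximation, obtained by keeping the $r$ largest singular triplets. The following NumPy sketch (Python rather than the paper's MATLAB, purely for illustration; function names are ours) verifies numerically that the residual equals the sum of the squared discarded singular values.

```python
import numpy as np

def best_rank_r(X, r):
    # Best rank-r approximation (Eckart-Young): keep the r largest
    # singular triplets of X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))
r = 2
Y = best_rank_r(X, r)
s = np.linalg.svd(X, compute_uv=False)
# The minimal value of ||X - Y||_F^2 over rank(Y) <= r equals the
# sum of the squared discarded singular values.
assert np.isclose(np.linalg.norm(X - Y, 'fro') ** 2, np.sum(s[r:] ** 2))
```

This identity is exactly what allows the rank constraint to be eliminated from the reformulated objective.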

Clearly, $f(X) + \rho \|X\|_{F}^{2}$ is a convex function of $X$. Define the function $g(X) = \sum_{i=1}^{r} \sigma_{i}^{2}(X)$ for $X \in \mathbb{R}^{m \times n}$. Next, we will study the properties of $g$. Problem (3) would be reformulated as a DC programming problem with convex constraints if $g$ is a convex function, and such a problem can be solved via a stepwise linear approximation method. In fact, this does hold, and $g$ equals a trace function.

Theorem 1. *Let $g(X) = \sum_{i=1}^{r} \sigma_{i}^{2}(X)$ for $X \in \mathbb{R}^{m \times n}$; then, for any $X$, the following hold:* (1) *$g$ is a convex function;* (2) *$g(X) = \max\{\operatorname{tr}(P^{T} X^{T} X P) : P \in \mathbb{R}^{n \times r}, \ P^{T}P = I_{r}\}$.*

*Proof.* (1) Since $\sigma_{i}^{2}(X)$ are the eigenvalues of $X^{T}X$, we have $g(X) = \|X^{T}X\|_{(r)}$, where $\|\cdot\|_{(r)}$ is the Ky Fan $r$-norm (defined in [36]). For $X, Y \in \mathbb{R}^{m \times n}$ and $\lambda \in [0, 1]$, let $Z = \lambda X + (1 - \lambda) Y$. Note that $X^{T}X$ and $XX^{T}$ have the same nonzero eigenvalues, so $\|X^{T}X\|_{(r)} = \|XX^{T}\|_{(r)}$ holds. It is known that, for arbitrary Hermitian matrices $A$ and $B$, the Ky Fan $r$-norm is subadditive, $\|A + B\|_{(r)} \le \|A\|_{(r)} + \|B\|_{(r)}$ (see Exercise II.1.15 in [24]), and it is monotone with respect to the positive semidefinite order. Since $\lambda (1-\lambda)(X - Y)^{T}(X - Y) \succeq 0$, we have $Z^{T}Z \preceq \lambda X^{T}X + (1 - \lambda) Y^{T}Y$, and therefore $g(Z) = \|Z^{T}Z\|_{(r)} \le \lambda \|X^{T}X\|_{(r)} + (1 - \lambda)\|Y^{T}Y\|_{(r)} = \lambda g(X) + (1 - \lambda) g(Y)$. Hence, $g$ is a convex function.

(2) Let $X = U \Sigma V^{T}$ be a singular value decomposition of $X$ and let $k$ be the rank of $X$. For any $P \in \mathbb{R}^{n \times r}$ with $P^{T}P = I_{r}$, we have $\operatorname{tr}(P^{T} X^{T} X P) \le \sum_{i=1}^{r} \lambda_{i}(X^{T}X) = \sum_{i=1}^{r} \sigma_{i}^{2}(X)$. Moreover, the bound is attained by taking the columns of $P$ to be the first $r$ columns of $V$.

Thus, $g(X) = \max\{\operatorname{tr}(P^{T} X^{T} X P) : P^{T}P = I_{r}\}$ holds.
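The convexity established above can be sanity-checked numerically. The NumPy sketch below (names are ours) evaluates the sum of the $r$ largest squared singular values and tests the convexity inequality on random pairs of matrices.

```python
import numpy as np

def g(X, r):
    # Sum of the r largest squared singular values of X
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[:r] ** 2))

rng = np.random.default_rng(1)
r, ok = 3, True
for _ in range(100):
    A = rng.standard_normal((7, 5))
    B = rng.standard_normal((7, 5))
    t = rng.uniform()
    # Convexity: g(tA + (1-t)B) <= t g(A) + (1-t) g(B)
    if g(t * A + (1 - t) * B, r) > t * g(A, r) + (1 - t) * g(B, r) + 1e-9:
        ok = False
assert ok
```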

According to the above theorem, (10) can be rewritten as

$$\min_{X \in \Omega} \ f(X) + \rho \left\| X \right\|_{F}^{2} - \rho \, g(X). \tag{23}$$

Since the objective function of (23) is a difference of convex functions and the constraint set is a closed convex set, (23) is a DC program. Next, we will use the stepwise linear approximation method to solve it.

##### 2.2. Stepwise Linear Approximative Algorithm

In this subsection, we use the stepwise linear approximation method [37] to solve the reformulated model (23). Let $F(X)$ represent the objective function of model (23); that is,

$$F(X) = f(X) + \rho \left\| X \right\|_{F}^{2} - \rho \, g(X), \tag{24}$$

and the linear approximation of $F$ at a point $X^{k}$ can be defined as

$$F(X, X^{k}) = f(X) + \rho \left\| X \right\|_{F}^{2} - \rho \, g(X^{k}) - \rho \left\langle G^{k}, X - X^{k} \right\rangle, \tag{25}$$

where $G^{k} \in \partial g(X^{k})$ (see 5.2.2 in [36]).

The stepwise linear approximative algorithm for solving model (23) can be described as follows.

Given an initial matrix $X^{0}$, penalty parameter $\rho > 0$, rank $r$, and tolerance $\epsilon > 0$, set $k = 0$.

*Step 1.* Compute the singular value decomposition of $X^{k}$, $X^{k} = U^{k} \Sigma^{k} (V^{k})^{T}$.

*Step 2.* Update

$$X^{k+1} = \arg\min_{X \in \Omega} \ F(X, X^{k}). \tag{26}$$

*Step 3.* If the stopping criterion is satisfied, stop; otherwise, set $k = k + 1$ and go to Step 1.

*Remark.* (1) Subproblem (26) can be solved as a series of convex subproblems, for which many efficient methods with sound theoretical guarantees exist. (a) When $\Omega$ is $\mathbb{R}^{m \times n}$, closed-form solutions can be obtained; for example, when solving model (1), the iterates are obtained from the first-order optimality conditions. (b) When $\Omega$ is a general convex set, a closed-form solution is hard to get, but subproblem (26) can be solved by convex optimization software (such as CVX and CPLEX). Depending on the closed convex constraint set, a suitable convex programming algorithm can be chosen and combined with our method; the toolboxes listed in this paper are just one possible choice. (2) The stopping rule adopted here is that the change between successive iterates is smaller than some small enough tolerance.
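For the unconstrained affine case of part (a), the iteration can be sketched as follows in NumPy (illustrative only: the function name, the least squares objective $\frac{1}{2}\|A\,\mathrm{vec}(X) - b\|^{2}$, and the choice $\rho = 1$ are our assumptions, not the paper's exact setup). Each linearized subproblem is a strongly convex quadratic solved in closed form from its first-order optimality conditions.

```python
import numpy as np

def dc_affine_rank_min(A, b, m, n, r, rho=1.0, iters=200, tol=1e-8):
    # Stepwise linear approximation for
    #   min 0.5 * ||A vec(X) - b||^2 + rho * (||X||_F^2 - g(X)),
    # where g(X) is the sum of the r largest squared singular values.
    # Each step linearizes g at X^k via the subgradient
    #   G_k = 2 U_r S_r V_r^T
    # and solves the first-order condition of the convex subproblem:
    #   (A^T A + 2 rho I) x = A^T b + rho vec(G_k).
    x = np.zeros(m * n)
    H = A.T @ A + 2.0 * rho * np.eye(m * n)
    Atb = A.T @ b
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(x.reshape(m, n), full_matrices=False)
        Gk = 2.0 * U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
        x_new = np.linalg.solve(H, Atb + rho * Gk.ravel())
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_new
            break
        x = x_new
    return x.reshape(m, n)

# Try to recover a small rank-2 matrix from random Gaussian measurements.
rng = np.random.default_rng(2)
m, n, r = 6, 6, 2
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((30, m * n))
b = A @ X_true.ravel()
X_hat = dc_affine_rank_min(A, b, m, n, r)
```

Because the linearization of the concave part gives an upper bound on the objective that is tight at $X^{k}$, each step cannot increase the objective value.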

##### 2.3. Convergence Analysis

In this subsection, we analyze the convergence properties of the sequence generated by the above algorithm. Before proving the main convergence result, we give the following lemma, which is analogous to Lemma 1 in [10].

*Lemma 2. For any $X$ and $Y$, the following inequality holds, where $X_{r}$ denotes the rank-$r$ approximation of $X$ formed from its $r$ largest singular values.*

*Proof.* Without loss of generality, assume the singular value decompositions of $X$ and $Y$ are given in block form, with the orthogonal factors partitioned conformally. Note that the products of these orthogonal factors are again orthogonal matrices.

Next we estimate an upper bound for the resulting trace terms. According to Theorem 7.4.9 in [38], an orthogonal matrix $U$ is a maximizing matrix for the problem $\max_{U} \operatorname{tr}(AU)$ if and only if $AU$ is a positive semidefinite matrix. Thus, the three trace terms achieve their maxima if and only if the corresponding products are all positive semidefinite. It is known that when $AU$ is positive semidefinite, the trace equals the sum of the singular values of $A$. Applying (32) to the above three terms, we obtain (33). Hence, without loss of generality, we have (34), which implies that the lemma holds.

Next, we claim that the sequence of objective values produced by the stepwise linear approximative algorithm is nonincreasing.

*Theorem 3. Let $\{X^{k}\}$ be the iterative sequence generated by the stepwise linear approximative algorithm. Then $\{F(X^{k})\}$ is monotonically nonincreasing.*

*Proof.* Since $X^{k+1} = \arg\min_{X \in \Omega} F(X, X^{k})$, we have $F(X^{k+1}, X^{k}) \le F(X^{k}, X^{k}) = F(X^{k})$, and $G^{k}$ is a subgradient of the convex function $g$ at $X^{k}$, which implies $g(X) \ge g(X^{k}) + \langle G^{k}, X - X^{k} \rangle$ for all $X$. Letting $X = X^{k+1}$ in the above inequality, we have $-\rho \, g(X^{k+1}) \le -\rho \, g(X^{k}) - \rho \langle G^{k}, X^{k+1} - X^{k} \rangle$. Therefore, $F(X^{k+1}) \le F(X^{k+1}, X^{k})$. Consequently, $F(X^{k+1}) \le F(X^{k+1}, X^{k}) \le F(X^{k}, X^{k}) = F(X^{k})$. We immediately obtain the conclusion as desired.

Next, we show the convergence of the iterative sequence generated by the stepwise linear approximative algorithm when solving (23). Theorem 4 shows the convergence when solving (23) with $\Omega = \mathbb{R}^{m \times n}$.

*Theorem 4. Let $\mathcal{X}^{*}$ be the set of stable points of problem (23) with no constraints; then the iterative sequence $\{X^{k}\}$ generated by the linear approximative algorithm converges to some $X^{*} \in \mathcal{X}^{*}$ as $k \to \infty$.*

*Proof.* Assume that $X^{*}$ is any optimal solution of problem (23) with $\Omega = \mathbb{R}^{m \times n}$; according to the first-order optimality conditions, we have (40). Since $X^{k+1}$ minimizes the linearized subproblem, we have (41). Subtracting (40) from (41) yields (42).

By Lemma 2 and taking norms on both sides of (42), the triangle inequality property of norms implies that the sequence $\{\|X^{k} - X^{*}\|_{F}\}$ is monotonically nonincreasing. We then observe that this sequence is bounded, and hence its limit exists; denote the limit point of $\{X^{k}\}$ by $\bar{X}$. Letting $k \to \infty$ in (41), by continuity we get that $\bar{X}$ satisfies the first-order optimality condition, which implies that $\bar{X} \in \mathcal{X}^{*}$. By setting $X^{*} = \bar{X}$, we obtain the convergence of the whole sequence, which completes the proof.

Now, we are ready to give the convergence of the iterative sequence when solving (23) with a general closed convex $\Omega$.

*Theorem 5. Let $\mathcal{X}^{*}$ be the set of optimal solutions of problem (23); then the iterative sequence $\{X^{k}\}$ generated by the stepwise linear approximative algorithm converges to some $X^{*} \in \mathcal{X}^{*}$ as $k \to \infty$.*

*Proof.* Generally speaking, subproblem (26) can be solved by computing the solution of the corresponding unconstrained problem and projecting it onto the closed convex set; that is, $X^{k+1} = P_{\Omega}(\hat{X}^{k+1})$, where $\hat{X}^{k+1}$ denotes the unconstrained minimizer. According to Theorem 4, the sequence $\{\hat{X}^{k}\}$ is convergent as $k \to \infty$; together with the well-known fact that $P_{\Omega}$ is a nonexpansive operator, the sequence $\{X^{k}\}$ is also convergent, where $X^{*}$ is the projection of the limit of $\{\hat{X}^{k}\}$ onto $\Omega$. Clearly, $X^{*} \in \Omega$.

Hence, $\{X^{k}\}$ converges to $X^{*}$, and we immediately obtain the conclusion.
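The nonexpansiveness of the projection used above is easy to verify for concrete constraint sets. As a toy illustration (our choice, not from the paper), consider the affine set $\{X : X_{ii} = 1\}$ of the kind that appears in max-cut formulations; its Frobenius-norm projection simply overwrites the diagonal with ones.

```python
import numpy as np

def proj_fixed_diag(X):
    # Frobenius-norm projection onto {X : X_ii = 1}: overwrite the diagonal.
    Y = X.copy()
    np.fill_diagonal(Y, 1.0)
    return Y

rng = np.random.default_rng(3)
ok = True
for _ in range(100):
    X = rng.standard_normal((5, 5))
    Z = rng.standard_normal((5, 5))
    # Nonexpansiveness: ||P(X) - P(Z)||_F <= ||X - Z||_F
    lhs = np.linalg.norm(proj_fixed_diag(X) - proj_fixed_diag(Z), 'fro')
    if lhs > np.linalg.norm(X - Z, 'fro') + 1e-12:
        ok = False
assert ok
```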

#### 3. Numerical Results

In this section, we demonstrate the performance of the method proposed in Section 2 by applying it to solve affine rank minimization problems (image reconstruction and matrix completion) and max-cut problems. The codes implemented in this section are written in MATLAB, and all experiments are performed in MATLAB R2013a running on Windows 7.

##### 3.1. Image Reconstruction Problems

In this subsection, we apply our method to solve one case of affine rank minimization problems. It can be formulated as

$$\min_{X} \ \frac{1}{2} \left\| \mathcal{A}(X) - b \right\|_{2}^{2} \quad \text{s.t.} \ \operatorname{rank}(X) \le r, \tag{47}$$

where $X \in \mathbb{R}^{m \times n}$ is the decision variable and the linear map $\mathcal{A} : \mathbb{R}^{m \times n} \to \mathbb{R}^{p}$ and vector $b \in \mathbb{R}^{p}$ are known. Clearly, (47) is a special case of problem (3). Hence, our method can be suitably applied to (47).

Recht et al. [39] have demonstrated that linear maps sampled from certain classes of probability distributions obey the Restricted Isometry Property. There are two ingredients for a random linear map to be nearly isometric: first, it must be isometric in expectation; second, the probability of large distortions of length must be exponentially small. In our numerical experiments, we use four nearly isometric measurement ensembles. The first is the ensemble with independent, identically distributed (i.i.d.) Gaussian entries. The second has entries sampled from an i.i.d. symmetric Bernoulli distribution, which we call Bernoulli. The third has zeros in two-thirds of the entries and the others sampled from an i.i.d. symmetric Bernoulli distribution, which we call Sparse Bernoulli. The last has orthonormal rows, which we call Random Projection.
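The four ensembles can be generated as in the following NumPy sketch (illustrative only: the scalings are our assumption, chosen so that each ensemble is isometric in expectation; the paper's experiments use MATLAB).

```python
import numpy as np

rng = np.random.default_rng(4)
p, N = 700, 46 * 81  # p linear measurements of the vectorized 46x81 image

# (i) Gaussian: i.i.d. N(0, 1/p) entries
A_gauss = rng.standard_normal((p, N)) / np.sqrt(p)

# (ii) Bernoulli: i.i.d. symmetric +-1 entries, scaled by 1/sqrt(p)
A_bern = rng.choice([-1.0, 1.0], size=(p, N)) / np.sqrt(p)

# (iii) Sparse Bernoulli: zero with probability 2/3, else symmetric +-1,
# scaled so that each entry has variance 1/p
A_sparse = rng.choice([0.0, 1.0, -1.0], size=(p, N),
                      p=[2 / 3, 1 / 6, 1 / 6]) * np.sqrt(3.0 / p)

# (iv) Random Projection: p orthonormal rows from the QR factorization
# of a Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((N, p)))
A_proj = Q.T
```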

Now we conduct numerical experiments to test the performance of our method for solving (47). To illustrate the scaling of low rank recovery for a particular matrix, consider the MIT logo presented in [39]. The image has 46 rows and 81 columns (the total number of pixels is 46 × 81 = 3726). Since the logo only has 5 distinct rows, it has rank 5. We sample it using Gaussian i.i.d., Bernoulli i.i.d., Sparse Bernoulli i.i.d., and Random Projection measurement matrices with the number of linear constraints ranging between 700 and 2400. We declared the MIT logo to be recovered if the relative Frobenius error of the reconstruction was at most $10^{-3}$; that is, SNR $\ge$ 60 dB. Figures 1 and 2 show the results of our numerical experiments.
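The recovery criterion amounts to a simple SNR computation; in this sketch (function name ours) a reconstruction is declared recovered when the relative Frobenius error is at most $10^{-3}$, equivalently SNR of at least 60 dB.

```python
import numpy as np

def snr_db(X_true, X_hat):
    # SNR (dB) of a reconstruction; recovery is declared when the relative
    # Frobenius error is <= 1e-3, i.e., SNR >= 60 dB.
    rel = np.linalg.norm(X_hat - X_true, 'fro') / np.linalg.norm(X_true, 'fro')
    return -20.0 * np.log10(rel)

rng = np.random.default_rng(5)
X = rng.standard_normal((46, 81))
X_noisy = X + 1e-4 * rng.standard_normal((46, 81))
assert snr_db(X, X_noisy) >= 60.0  # ~1e-4 relative error, well above 60 dB
```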