Journal of Applied Mathematics


Research Article | Open Access

Volume 2014 |Article ID 613205 | 9 pages | https://doi.org/10.1155/2014/613205

A Novel Approach for Solving Semidefinite Programs

Academic Editor: Ram N. Mohapatra
Received: 02 Apr 2014
Accepted: 03 Aug 2014
Published: 18 Aug 2014

Abstract

A novel linearizing alternating direction augmented Lagrangian approach is proposed for effectively solving semidefinite programs (SDP). At every iteration, fixing the other variables, the proposed approach alternately optimizes the dual variables and the dual slack variables; the primal variables, that is, the Lagrange multipliers, are then updated. Moreover, the proposed approach updates all the variables in closed form, without solving any system of linear equations. Global convergence of the proposed approach is proved under mild conditions, and two numerical problems are given to demonstrate its effectiveness.

1. Introduction

Minimizing a linear function of a symmetric positive semidefinite matrix subject to linear equality constraints is called a semidefinite program (SDP), which can be written as
\[
\min_{X \in \mathbb{S}^n} \ \langle C, X\rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b, \quad X \succeq 0, \tag{1}
\]
where \(\mathcal{A}: \mathbb{S}^n \to \mathbb{R}^m\) is a linear operator, which can be expressed as
\[
\mathcal{A}(X) = \bigl(\langle A_1, X\rangle, \ldots, \langle A_m, X\rangle\bigr)^{\top}, \tag{2}
\]
\(C, A_1, \ldots, A_m \in \mathbb{S}^n\) are all matrices, \(b \in \mathbb{R}^m\) is a vector, and \(X \succeq 0\) stands for the fact that \(X\) is a symmetric positive semidefinite matrix. Here, \(\mathbb{S}^n\) stands for the space of \(n \times n\) symmetric matrices and \(\langle \cdot, \cdot \rangle\) stands for the standard inner product in \(\mathbb{S}^n\). \(\mathcal{A}^*\) stands for the adjoint operator of \(\mathcal{A}\). The dual problem of (1) is
\[
\max_{y \in \mathbb{R}^m,\, S \in \mathbb{S}^n} \ b^{\top}y \quad \text{s.t.} \quad \mathcal{A}^*(y) + S = C, \quad S \succeq 0, \tag{3}
\]
where \(y \in \mathbb{R}^m\) and \(S \in \mathbb{S}^n\).
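As a quick sanity check on this primal-dual pair, weak duality follows in one line (a standard argument, not taken from the paper):

```latex
\langle C, X\rangle - b^{\top}y
  = \langle C, X\rangle - \langle \mathcal{A}(X), y\rangle
  = \langle C - \mathcal{A}^{*}(y), X\rangle
  = \langle S, X\rangle \ \ge\ 0,
```

for any primal-feasible \(X\) and dual-feasible \((y, S)\), since the inner product of two positive semidefinite matrices is nonnegative.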

The SDP problem has been a very active area of optimization research for many years. It has broad applications in many areas, for example, system and control theory [1], combinatorial optimization [2], nonconvex quadratic programs [3], and matrix completion problems [4]. We refer to the reference book [5] for the theory and applications of SDP. Interior point methods (IPMs) have been very successful for solving SDP in polynomial time [6–9]. For small- and medium-sized SDP problems, IPMs are generally efficient and robust. However, for large-scale SDP problems with many constraints, IPMs become very slow due to the need to compute and factorize the Schur complement matrix. To address this shortcoming, by using an iterative solver to compute a search direction at each iteration, [10, 11] proposed inexact IPMs which manage to solve certain types of SDP problems with up to 125,000 constraints. Based on the augmented Lagrangian approach, many variants for SDP were proposed. For example, [12] introduced the so-called boundary point approach; using an eigenvalue decomposition to maintain complementarity, [13] presented a dual augmented Lagrangian approach. More recently, Huang and Xu [14] proposed a trust region algorithm for SDP problems that performs a number of conjugate gradient iterations to solve the subproblems. Zhao et al. [15] designed a Newton-CG augmented Lagrangian approach for solving SDP problems from the perspective of approximate semismooth Newton methods. Wen et al. [16] presented an alternating direction dual augmented Lagrangian approach for SDP. In [17], Wen et al. proposed a row-by-row approach for solving SDP problems based on solving a sequence of problems obtained by restricting the \(n\)-dimensional positive semidefinite constraint on the matrix \(X\). In addition to the methods reviewed above, some related research works on the subject are as follows. Xu et al.
[18] presented a new algorithm for the box-constrained SDP based on the feasible direction method. Zhadan and Orlov [19] presented a dual interior point method for the linear SDP problem. Lin [20] proposed an inexact spectral bundle method for convex quadratic SDP. Sun and Zhang [21] proposed a modified alternating direction method for solving convex quadratically constrained quadratic SDP, which requires much less computational effort per iteration than the second-order approaches. In [22], the authors presented penalty and barrier methods for convex SDP. In [23], Zhang et al. proposed an alternating direction method for solving convex SDP problems, which only computes several metric projections at each iteration. In [24], Huang et al. presented a lower-order penalization approach to solve nonlinear SDP. In [25], Aroztegui et al. presented a feasible direction interior point algorithm for solving nonlinear SDP. Yang and Yu [26] proposed a homotopy method for nonlinear SDP. Kanzow et al. [27] presented successive linearization methods for solving nonlinear SDP. Yamashita et al. [28] presented a primal-dual interior point method for nonlinear SDP. Lu et al. [29] presented a saddle point mirror-prox algorithm for solving large-scale SDPs. In [30], Monteiro et al. presented a first-order block-decomposition method for solving two-easy-block structured SDP. In [31], an efficient low-rank stochastic gradient descent method is proposed for solving a class of SDP problems, which has clear computational advantages over the standard stochastic gradient descent method. Based on a new technique for finding the search direction and the strategy of the central path, Wang and Bai [32] presented a new primal-dual path-following interior-point algorithm for solving the SDP problem. By reformulating the complementarity conditions in the primal-dual optimality conditions as a projection equation, Yu [33] presented an alternating direction algorithm for the solution of SDP problems.
However, most of these existing methods need to solve a system of linear equations to update the variables, which is time-consuming, especially in the large-scale case.

In this paper, we present a novel linearizing alternating direction dual augmented Lagrangian approach for solving SDP problems. At every iteration, the proposed algorithm works on the augmented Lagrangian function of the dual SDP problem. Specifically, at every iteration, fixing the other variables, the proposed algorithm alternately optimizes the dual variables and the dual slack variables; the primal variables, that is, the Lagrange multipliers, are then updated. The proposed algorithm is closely related to the alternating direction augmented Lagrangian approach in [16] except for the update of the dual variables. In particular, the proposed algorithm updates the dual variables without solving any system of linear equations. Moreover, the proposed algorithm updates all the variables in closed form. Numerical results demonstrate that the performance of the proposed approach can be significantly better than that reported in [16].

The remainder of this paper is organized as follows. In Section 2, a novel linearizing alternating direction augmented Lagrangian approach is proposed for solving SDP problems. The convergence of the proposed approach is proved in Section 3. In Section 4, some implementation issues of the proposed approach are discussed. In Section 5, two numerical examples, the frequency assignment problem and binary integer quadratic programming problems, are used to demonstrate the performance of the proposed approach.

Some notation: \(\mathbb{S}^n_+\) represents the set of symmetric positive semidefinite matrices. \(X \succ 0\) represents the fact that \(X\) is positive definite. The notation \(\|\cdot\|_2\) stands for the Euclidean norm and \(\|\cdot\|_F\) stands for the Frobenius norm. \(\operatorname{vec}(X)\) denotes the vector obtained by stacking the columns of \(X\) one by one. \(I\) denotes the identity matrix of appropriate order.

2. Linearizing Alternating Direction Augmented Lagrangian Approach

In this section, a linearizing alternating direction augmented Lagrangian approach is proposed for solving (1) and (3).

Let \(A := (\operatorname{vec}(A_1), \ldots, \operatorname{vec}(A_m))^{\top}\). The expression \(\mathcal{A}(X)\) is then equal to \(A\operatorname{vec}(X)\), and \(\mathcal{A}^{*}(y) = \operatorname{mat}(A^{\top}y)\), where \(\operatorname{mat}\) is the inverse of the operator \(\operatorname{vec}\).
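The vec-based representation of the operator and its adjoint can be checked numerically; a minimal numpy sketch (all names here are illustrative, not from the paper's code):

```python
import numpy as np

def make_operator(As):
    """Stack vec(A_i)^T row by row, so that A(X) = A @ vec(X)."""
    return np.vstack([Ai.flatten() for Ai in As])        # m x n^2 matrix

def A_op(A, X):
    return A @ X.flatten()                               # X -> A vec(X)

def A_adj(A, y, n):
    return (A.T @ y).reshape(n, n)                       # y -> mat(A^T y) = sum_i y_i A_i

rng = np.random.default_rng(0)
n, m = 3, 2
As = [(lambda M: (M + M.T) / 2)(rng.standard_normal((n, n))) for _ in range(m)]
A = make_operator(As)
X = rng.standard_normal((n, n)); X = X @ X.T             # a symmetric PSD test matrix
y = rng.standard_normal(m)

# A(X) agrees with the inner products <A_i, X>
assert np.allclose(A_op(A, X), [np.sum(Ai * X) for Ai in As])
# adjoint identity <A(X), y> = <X, A*(y)>
assert np.isclose(A_op(A, X) @ y, np.sum(X * A_adj(A, y, n)))
```

The same flattening convention must be used on both sides; any consistent choice of `vec` makes the adjoint identity hold.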

Without loss of generality, we assume that the matrix \(A\) has full row rank and that there exists a matrix \(\bar{X} \succeq 0\) such that \(\mathcal{A}(\bar{X}) = b\). It is well known that, under the above assumption, a point \((X, y, S)\) is optimal for the SDP problems (1) and (3) if and only if
\[
\mathcal{A}(X) = b, \quad X \succeq 0, \quad \mathcal{A}^{*}(y) + S = C, \quad S \succeq 0, \quad XS = 0. \tag{4}
\]

Given a penalty parameter \(\mu > 0\), the augmented Lagrangian function for the dual SDP (3) is defined as
\[
\mathcal{L}_{\mu}(y, S, X) = -b^{\top}y + \langle X, \mathcal{A}^{*}(y) + S - C\rangle + \frac{1}{2\mu}\bigl\|\mathcal{A}^{*}(y) + S - C\bigr\|_F^2, \tag{5}
\]
where \(X \in \mathbb{S}^n\). For given \((X^k, S^k)\), the alternating direction augmented Lagrangian approach for solving problems (1) and (3) generates the sequences \(\{y^k\}\), \(\{S^k\}\), and \(\{X^k\}\) as follows:
\[
y^{k+1} = \arg\min_{y \in \mathbb{R}^m} \mathcal{L}_{\mu}\bigl(y, S^k, X^k\bigr), \tag{6}
\]
\[
S^{k+1} = \arg\min_{S \in \mathbb{S}^n_+} \mathcal{L}_{\mu}\bigl(y^{k+1}, S, X^k\bigr), \tag{7}
\]
\[
X^{k+1} = X^k + \frac{\mathcal{A}^{*}(y^{k+1}) + S^{k+1} - C}{\mu}. \tag{8}
\]

We can obtain \(y^{k+1}\) by solving the first-order optimality conditions for (6), which form a system of linear equations with coefficient matrix \(\mathcal{A}\mathcal{A}^{*} = AA^{\top}\). Since \(AA^{\top}\) is an \(m \times m\) matrix, it is difficult to obtain \(y^{k+1}\) exactly when \(m\) is large. In order to alleviate this difficulty, we use the following quadratic approximation of \(\mathcal{L}_{\mu}(y, S^k, X^k)\) in (6) around \(y^k\):
\[
\mathcal{L}_{\mu}\bigl(y, S^k, X^k\bigr) \approx \mathcal{L}_{\mu}\bigl(y^k, S^k, X^k\bigr) + \bigl\langle g^k, y - y^k\bigr\rangle + \frac{\tau_k}{2\mu}\bigl\|y - y^k\bigr\|^2, \tag{9}
\]
where \(\tau_k > 0\) and
\[
g^k = \nabla_y \mathcal{L}_{\mu}\bigl(y^k, S^k, X^k\bigr) = -b + \mathcal{A}\Bigl(X^k + \frac{\mathcal{A}^{*}(y^k) + S^k - C}{\mu}\Bigr). \tag{10}
\]
We replace step (6) by minimizing this quadratic approximation. Then, we have the closed-form update
\[
y^{k+1} = y^k - \frac{\mu}{\tau_k}\,g^k. \tag{11}
\]

As pointed out in [16], problem (7) is equivalent to
\[
\min_{S \in \mathbb{S}^n_+} \ \bigl\|S - V^{k+1}\bigr\|_F^2, \tag{12}
\]
where \(V^{k+1} := C - \mathcal{A}^{*}(y^{k+1}) - \mu X^k\). Denote the spectral decomposition of the matrix \(V^{k+1}\) by
\[
V^{k+1} = Q \begin{pmatrix} \Sigma_+ & 0 \\ 0 & \Sigma_- \end{pmatrix} Q^{\top} = Q_+\Sigma_+Q_+^{\top} + Q_-\Sigma_-Q_-^{\top},
\]
where \(\Sigma_+\) and \(\Sigma_-\) are the nonnegative and negative eigenvalues of \(V^{k+1}\) and \(Q = (Q_+, Q_-)\). We then obtain the fact that \(S^{k+1} = Q_+\Sigma_+Q_+^{\top} =: V^{k+1}_+\). It follows from (8) that
\[
X^{k+1} = \frac{1}{\mu}\bigl(S^{k+1} - V^{k+1}\bigr) = -\frac{1}{\mu}\,Q_-\Sigma_-Q_-^{\top},
\]
where \(V^{k+1}_- := Q_-\Sigma_-Q_-^{\top}\). Now we present the linearizing alternating direction augmented Lagrangian approach in Algorithm 1.
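The S-update above is a Euclidean projection onto the PSD cone, computed from a single eigenvalue decomposition; a minimal numpy sketch (function names are ours, with \(\mu = 1\) for illustration):

```python
import numpy as np

def psd_project(V):
    """Projection onto the PSD cone: keep nonnegative eigenvalues, V_+ = Q_+ Sigma_+ Q_+^T."""
    w, Q = np.linalg.eigh((V + V.T) / 2)      # symmetrize for numerical safety
    return (Q * np.maximum(w, 0.0)) @ Q.T

rng = np.random.default_rng(1)
V = rng.standard_normal((4, 4)); V = (V + V.T) / 2
S = psd_project(V)                            # S = V_+
X = S - V                                     # X = -V_-  (here mu = 1)

assert np.all(np.linalg.eigvalsh(S) >= -1e-10)   # S is PSD
assert np.allclose(S - X, V)                      # the two parts recombine to V
assert np.isclose(np.sum(S * X), 0.0)             # complementarity <S, X> = 0
```

The last assertion mirrors the complementarity condition \(XS = 0\): the positive and negative eigenspaces of \(V\) are orthogonal, so the iterates satisfy \(\langle S^{k+1}, X^{k+1}\rangle = 0\) automatically.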

Initialize \(X^0\), \(S^0\), and \(y^0\). Choose an initial step size \(\tau_0\) greater than the
maximum eigenvalue of the matrix \(AA^{\top}\).
For \(k = 0, 1, \ldots\) do
 Compute \(g^k\) and \(y^{k+1} = y^k - (\mu/\tau_k)\,g^k\).
  Compute \(V^{k+1}\) and its eigenvalue decomposition, and set \(S^{k+1} = V^{k+1}_+\).
  Compute \(X^{k+1} = \bigl(S^{k+1} - V^{k+1}\bigr)/\mu\).
  Choose \(\tau_{k+1}\) greater than the maximum eigenvalue of the matrix \(AA^{\top}\).
end
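The loop of Algorithm 1 can be sketched end to end on a toy SDP, \(\min \langle C, X\rangle\) s.t. \(\operatorname{trace}(X) = 1\), \(X \succeq 0\), whose optimal value is the smallest eigenvalue of \(C\). This is a hedged numpy sketch of the update rules as we read them, not the authors' code; the problem data and parameter values are ours:

```python
import numpy as np

# Toy SDP: min <C, X> s.t. trace(X) = 1, X >= 0.
# Optimal value = lambda_min(C); dual optimum y* = lambda_min(C).
n = 3
C = np.diag([3.0, 1.0, 2.0]); C[0, 1] = C[1, 0] = 0.5
A = np.eye(n).flatten()[None, :]      # one constraint row: A(X) = trace(X)
b = np.array([1.0])

mu = 1.0
tau = n + 1.0                          # > lambda_max(A A^T) = n, as Algorithm 1 requires
y = np.zeros(1)
S = np.zeros((n, n))
X = np.eye(n) / n

def psd_part(V):
    w, Q = np.linalg.eigh(V)
    return (Q * np.maximum(w, 0.0)) @ Q.T

for _ in range(5000):
    R = (A.T @ y).reshape(n, n) + S - C          # residual A*(y) + S - C
    g = -b + A @ (X + R / mu).flatten()          # gradient of the augmented Lagrangian in y
    y = y - (mu / tau) * g                       # linearized y-step (closed form)
    V = C - (A.T @ y).reshape(n, n) - mu * X
    S = psd_part(V)                              # S-step: projection onto the PSD cone
    X = (S - V) / mu                             # multiplier update

lam_min = np.min(np.linalg.eigvalsh(C))
assert abs(b @ y - lam_min) < 1e-2               # dual objective near the optimum
assert abs(np.trace(X) - 1.0) < 1e-2             # primal feasibility is recovered
```

Note that every step is a matrix-vector product or one eigendecomposition; no linear system is solved, which is the point of the linearized y-step.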

Remark 1. We can choose \(\tau_k\) to satisfy the condition of Algorithm 1. If \(AA^{\top} = I\) and \(\tau_k = 1\) for all \(k\), then Algorithm 1 is the same as the approach proposed in [16].

3. The Convergence of the Proposed Approach

In this section, we prove the convergence of Algorithm 1 using an argument similar to the one in [34]. We first have the following lemma.

Lemma 2. Let be generated by Algorithm 1 and let be an optimal solution of (1) and (3); then one has where

Proof. From (8), there holds By (12), we know that That is, By substituting (8) into the above equality, using the fact , and rearranging the terms, one has Since , we have By substituting into (22), we get By adding (18), (21), and (23) together, we obtain where the last inequality comes from (8) and (22). Note that , , and are positive semidefinite matrices; then It follows from (25) that By (26) and the fact that we have which completes the proof.

Theorem 3. Let be generated by Algorithm 1; then it converges to a solution of problems (1) and (3).

Proof. By (16), we know that the sequence is bounded and the sequence is monotonically nonincreasing. Therefore, where can be any limit point of . It follows that Since \(\tau_k\) is greater than the maximum eigenvalue of the matrix \(AA^{\top}\), the matrix \(\tau_k I - AA^{\top}\) is positive definite. By the definition of , we obtain From the update formula (8), we have By (12) and the definition of , one has which, together with (32), implies that By combining (32), (34), , and for all , we know that any limit point of , say , satisfies which means is a solution of problems (1) and (3). By Lemma 2, converges to a solution of problems (1) and (3).

4. Implementation Issues

The proposed algorithm is implemented by modifying the code of the alternating direction approach in [16], which is referred to as SDPAD. Before presenting the numerical results, we discuss some implementation issues of Algorithm 1 in this section.

In order to improve the computational performance of Algorithm 1, following a strategy similar to that of many alternating direction approaches [35–37], we replace step (8) by
\[
X^{k+1} = X^k + \gamma\,\frac{\mathcal{A}^{*}(y^{k+1}) + S^{k+1} - C}{\mu},
\]
where \(\gamma > 0\) is a relaxation factor. We can use an argument similar to the one in [34] to prove the following theorem.

Theorem 4. Let    be an optimal solution of (1) and (3). For , it holds that For , it holds that
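The relaxed multiplier step of Section 4 is a one-line change to the plain update (8); a small illustrative sketch (matrix values and names are ours, and \(\gamma = 1.618\) is the value used later in this section):

```python
import numpy as np

def x_update(X, Astar_y, S, C, mu, gamma=1.618):
    """Relaxed multiplier update: gamma = 1 recovers the plain step (8)."""
    return X + gamma * (Astar_y + S - C) / mu

n = 2
X = np.eye(n); S = np.eye(n); C = 2.0 * np.eye(n); Astar_y = 0.5 * np.eye(n)
X_new = x_update(X, Astar_y, S, C, mu=1.0)

# residual A*(y) + S - C = -0.5 I, so the step moves X by gamma * (-0.5) I
assert np.allclose(X_new, X + 1.618 * (-0.5) * np.eye(n))
```

The over-relaxation typically shortens the iteration count without changing the per-iteration cost, which is why the tables in Section 5 use \(\gamma = 1.618\).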

Based on Theorem 4, it is not difficult to show that

In our numerical experiments, we stop the algorithm when where We set the maximum number of iterations allowed in Algorithm 1 and SDPAD to 20,000.

We use the same strategy as SDPAD for updating the parameter \(\mu\). In particular, given some integer , let For , if , then set . Otherwise, if , then set . Here .

We set ,  ,  , and for our test problems. The relaxation parameter \(\gamma\) for updating \(X\) is set to 1.618. We choose the initial iterate ,  , and .

Let . Since the other parts of are linear, the choice of \(\tau_k\) mainly depends on . We set and choose \(\tau_k\) as the Barzilai-Borwein step size [38] of with the following safeguard: where , and . Clearly, this choice of \(\tau_k\) ensures that the matrix \(\tau_k I - AA^{\top}\) is positive definite for any \(k\).
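A safeguarded Barzilai-Borwein step of this kind can be sketched as follows; the clamp values and function names are illustrative assumptions, and the floor is chosen to keep \(\tau I - AA^{\top}\) positive definite as the text requires:

```python
import numpy as np

def bb_step(dy, dg, tau_floor, tau_cap=1e8):
    """Barzilai-Borwein step size with a safeguard (clamp values are hypothetical).
    tau_floor should exceed lambda_max(A A^T) so that tau*I - A A^T stays positive definite."""
    sy = float(dy @ dg)
    tau_bb = float(dg @ dg) / sy if sy > 0 else tau_floor   # BB formula ||dg||^2 / <dy, dg>
    return min(max(tau_bb, tau_floor), tau_cap)

rng = np.random.default_rng(2)
AAt = np.array([[2.0, 0.3], [0.3, 1.0]])
lam_max = np.max(np.linalg.eigvalsh(AAt))
dy = rng.standard_normal(2)
dg = AAt @ dy                        # for a quadratic model, dg = A A^T dy

tau = bb_step(dy, dg, tau_floor=lam_max + 1e-8)
# safeguarded tau keeps tau*I - A A^T positive definite
assert np.all(np.linalg.eigvalsh(tau * np.eye(2) - AAt) > 0)
```

For a quadratic, the raw BB value lies between the extreme eigenvalues of \(AA^{\top}\), so the floor is what actually enforces the step-size condition of Algorithm 1.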

5. Numerical Results

In this section, we report our numerical results. We compare solutions obtained from Algorithm 1 and SDPAD on the SDP relaxations of frequency assignment problems and binary integer quadratic programming problems. All the procedures were carried out in MATLAB 2011b on a 3.10 GHz Core i5 PC with 4 GB of RAM under Windows 7.

In Tables 1 and 2, the first column gives the problem name; some notation is also used in the column headers: \(n\): the size of the matrix \(X\); \(m\): the total number of equality and inequality constraints; "itr": the number of iterations; "cpu": the CPU time in the format of hours, minutes, and seconds.


Name    n    m    Algorithm 1 in this paper (itr, cpu)    SDPAD (itr, cpu)

fap01 52 1378 622 1.13 653 0.67
fap02 61 1866 1399 1.67 1720 1.83
fap03 65 2145 914 1.06 870 0.98
fap04 81 3321 852 1.70 874 1.71
fap05 84 3570 1155 2.38 1198 2.37
fap06 93 4371 653 1.48 653 1.46
fap07 98 4851 648 1.50 667 1.52
fap08 120 7260 725 2.51 725 2.46
fap09 174 15225 438 3.05 464 3.17
fap10 183 14479 2278 21.55 2313 21.87
fap11 252 24292 2462 52.68 2585 54.54
fap12 369 26462 3197 2:28 3394 2:35
fap25 2118 322924 5152 7:54:54 5495 8:20:12
fap36 4110 1154467 4256 47:54:26 5000 116:28:06


Name    n    m    Algorithm 1 in this paper (itr, cpu)    SDPAD (itr, cpu)

be100.1 101 1464 3.76 2012 4.79
be100.2 101 1322 3.24 1744 4.24
be120.3.1 121 2214 6.82 2447 7.45
be120.3.2 121 1968 6.20 2405 7.53
be120.8.1 121 1618 4.78 2006 5.87
be120.8.2 121 3033 9.56 3415 10.64
be150.3.1 151 2030 9.08 2557 11.36
be150.3.2 151 2244 10.16 3143 14.01
be150.8.1 151 1694 7.33 2255 9.63
be150.8.2 151 1829 8.10 2386 10.28
be200.3.1 201 2031 13.90 2840 19.19
be200.3.2 201 2254 16.03 3276 23.20
be200.8.1 201