Research Article  Open Access
HongWei Jiao, YaKui Huang, Jing Chen, "A Novel Approach for Solving Semidefinite Programs", Journal of Applied Mathematics, vol. 2014, Article ID 613205, 9 pages, 2014. https://doi.org/10.1155/2014/613205
A Novel Approach for Solving Semidefinite Programs
Abstract
A novel linearizing alternating direction augmented Lagrangian approach is proposed for effectively solving semidefinite programs (SDP). At each iteration, the proposed approach alternately optimizes the dual variables and the dual slack variables with the other variables fixed; the primal variables, that is, the Lagrange multipliers, are then updated. In addition, the proposed approach updates all the variables in closed form without solving any system of linear equations. Global convergence of the proposed approach is proved under mild conditions, and two numerical problems are used to demonstrate the effectiveness of the presented approach.
1. Introduction
Minimizing a linear function of a symmetric positive semidefinite matrix subject to linear equality constraints is called a semidefinite program (SDP), which can be written as follows:
$$\min_{X}\ \langle C, X\rangle \quad \text{s.t.} \quad \mathcal{A}(X)=b,\ X\succeq 0, \tag{1}$$
where $\mathcal{A}:\mathcal{S}^n \to \mathbb{R}^m$ is a linear operator, which can be expressed as
$$\mathcal{A}(X) = \big(\langle A_1, X\rangle, \ldots, \langle A_m, X\rangle\big)^T, \tag{2}$$
$C, A_1, \ldots, A_m \in \mathcal{S}^n$ are all symmetric matrices, $b \in \mathbb{R}^m$ is a vector, and $X \succeq 0$ stands for the fact that $X$ is a symmetric positive semidefinite matrix. Here, $\mathcal{S}^n$ stands for the space of $n \times n$ symmetric matrices and $\langle C, X\rangle = \operatorname{tr}(C^T X)$ stands for the standard inner product in $\mathcal{S}^n$. $\mathcal{A}^*$ stands for the adjoint operator of $\mathcal{A}$. The dual problem of (1) is
$$\max_{y, S}\ b^T y \quad \text{s.t.} \quad \mathcal{A}^*(y) + S = C,\ S \succeq 0, \tag{3}$$
where $y \in \mathbb{R}^m$ and $\mathcal{A}^*(y) = \sum_{i=1}^m y_i A_i$.
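To make the operator notation concrete, the following NumPy sketch (a random illustrative instance; all names are ours) implements $\mathcal{A}$ and $\mathcal{A}^*$ and checks the defining adjoint identity $\langle \mathcal{A}(X), y \rangle = \langle X, \mathcal{A}^*(y) \rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3  # small illustrative sizes

def sym(M):
    """Symmetrize a square matrix."""
    return (M + M.T) / 2

# Random symmetric data matrices A_1, ..., A_m
As = [sym(rng.standard_normal((n, n))) for _ in range(m)]

def A_op(X):
    """A(X) = (<A_1, X>, ..., <A_m, X>)^T."""
    return np.array([np.sum(Ai * X) for Ai in As])

def A_adj(y):
    """A*(y) = sum_i y_i A_i."""
    return sum(yi * Ai for yi, Ai in zip(y, As))

# Adjoint identity <A(X), y> = <X, A*(y)> for arbitrary X, y
X = sym(rng.standard_normal((n, n)))
y = rng.standard_normal(m)
print(abs(A_op(X) @ y - np.sum(X * A_adj(y))))  # essentially zero (rounding only)
```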
The SDP problem has been a very active area of optimization research for many years. It has broad applications in many areas, for example, system and control theory [1], combinatorial optimization [2], nonconvex quadratic programs [3], and matrix completion problems [4]. We refer to the reference book [5] for the theory and applications of SDP. Interior point methods (IPMs) have been very successful at solving SDP in polynomial time [6–9]. For small- and medium-sized SDP problems, IPMs are generally efficient and robust. However, for large-scale SDP problems with a large number of constraints $m$ and a moderate matrix dimension $n$, IPMs become very slow due to the need to compute and factorize the Schur complement matrix. To mitigate this shortcoming, by using an iterative solver to compute a search direction at each iteration, [10, 11] proposed inexact IPMs which manage to solve certain types of SDP problems with $m$ up to 125,000. Many variants based on the augmented Lagrangian approach have been proposed for SDP. For example, [12] introduced the so-called boundary point approach; using an eigenvalue decomposition to maintain complementarity, [13] presented a dual augmented Lagrangian approach. More recently, Huang and Xu [14] proposed a trust region algorithm for SDP problems that performs a number of conjugate gradient iterations to solve the subproblems. Zhao et al. [15] designed a Newton-CG augmented Lagrangian approach for solving SDP problems from the perspective of approximate semismooth Newton methods. Wen et al. [16] presented an alternating direction dual augmented Lagrangian approach for SDP. In [17], Wen et al. proposed a row-by-row approach for solving SDP problems based on solving a sequence of problems obtained by restricting the $n$-dimensional positive semidefinite constraint on the matrix $X$. In addition to the methods reviewed above, some related research works on the subject are as follows. Xu et al.
[18] presented a new algorithm for box-constrained SDP based on the feasible direction method. Zhadan and Orlov [19] presented a dual interior point method for the linear SDP problem. Lin [20] proposed an inexact spectral bundle method for convex quadratic SDP. Sun and Zhang [21] proposed a modified alternating direction method for solving convex quadratically constrained quadratic SDP, which requires much less computational effort per iteration than second-order approaches. In [22], the authors presented penalty and barrier methods for convex SDP. In [23], an alternating direction method was proposed for solving convex SDP problems by Zhang et al., which only computes several metric projections at each iteration. In [24], Huang et al. presented a lower-order penalization approach for solving nonlinear SDP. In [25], Aroztegui et al. presented a feasible direction interior point algorithm for solving nonlinear SDP. Yang and Yu [26] proposed a homotopy method for nonlinear SDP. Kanzow et al. [27] presented successive linearization methods for solving nonlinear SDP. Yamashita et al. [28] presented a primal-dual interior point method for nonlinear SDP. Lu et al. [29] presented a saddle point mirror-prox algorithm for solving large-scale SDP. In [30], Monteiro et al. presented a first-order block-decomposition method for solving two-easy-block structured SDP. In [31], an efficient low-rank stochastic gradient descent method was proposed for solving a class of SDP problems, which has clear computational advantages over the standard stochastic gradient descent method. Based on a new technique for finding the search direction and the strategy of the central path, Wang and Bai [32] presented a new primal-dual path-following interior-point algorithm for solving SDP. By reformulating the complementarity conditions in the primal-dual optimality conditions as a projection equation, Yu [33] presented an alternating direction algorithm for the solution of SDP problems.
However, most of these existing methods need to solve a system of linear equations to update the variables, which is time-consuming, especially in the large-scale case.
In this paper, we present a novel linearizing alternating direction dual augmented Lagrangian approach for solving SDP problems. At each iteration, the proposed algorithm works on the augmented Lagrangian function of the dual SDP problem. Specifically, at each iteration, the proposed algorithm alternately optimizes the dual variables and the dual slack variables with the other variables fixed; the primal variables, that is, the Lagrange multipliers, are then updated. The proposed algorithm is closely related to the alternating direction augmented Lagrangian approach in [16] except in the updating of the dual variables. In particular, the proposed algorithm updates the dual variables without solving any system of linear equations. Moreover, the proposed algorithm updates all the variables in closed form. Numerical experimental results demonstrate that the performance of the proposed approach can be significantly better than that reported in [16].
The remainder of this paper is organized as follows. In Section 2, a novel linearizing alternating direction augmented Lagrangian approach is proposed for solving SDP problems. The convergence of the proposed approach is proved in Section 3. In Section 4, some implementation issues of the proposed approach are discussed. In Section 5, two numerical examples, the frequency assignment problem and binary integer quadratic programming problems, are used to demonstrate the performance of the proposed approach.
Some notation: $\mathcal{S}^n_+$ represents the set of $n \times n$ symmetric positive semidefinite matrices. $X \succ 0$ represents the fact that $X$ is positive definite. The notation $\|\cdot\|$ stands for the Euclidean norm and $\|\cdot\|_F$ stands for the Frobenius norm. $\operatorname{vec}(X)$ denotes the vector obtained by stacking the columns of $X$ one by one. $I$ denotes the identity matrix of appropriate order.
2. Linearizing Alternating Direction Augmented Lagrangian Approach
In this section, a linearizing alternating direction augmented Lagrangian approach is proposed for solving (1) and (3).
Let $x = \operatorname{vec}(X)$. The expression $\mathcal{A}(X) = b$ is equal to $Ax = b$, where $A = [\operatorname{vec}(A_1), \ldots, \operatorname{vec}(A_m)]^T$ is called the matrix representation of the operator $\mathcal{A}$.
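This matrix representation can be sketched as follows (NumPy; `ravel` plays the role of $\operatorname{vec}$ up to the row/column ordering convention, which does not matter here because all matrices are symmetric):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
sym = lambda M: (M + M.T) / 2
As = [sym(rng.standard_normal((n, n))) for _ in range(m)]

# Row i of the matrix representation is vec(A_i)^T, so A(X) = A_mat @ vec(X)
A_mat = np.stack([Ai.ravel() for Ai in As])    # shape (m, n*n)

X = sym(rng.standard_normal((n, n)))
lhs = A_mat @ X.ravel()                        # matrix form
rhs = np.array([np.sum(Ai * X) for Ai in As])  # operator form (<A_i, X>)_i
print(np.allclose(lhs, rhs))  # True
```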
Without loss of generality, we assume that the matrix $A$ has full row rank and that there exists a matrix $\hat{X} \succ 0$ such that $\mathcal{A}(\hat{X}) = b$. It is well known that, under the above assumption, a point $(X, y, S)$ is optimal for the SDP problems (1) and (3) if and only if
$$\mathcal{A}(X) = b, \quad X \succeq 0, \quad \mathcal{A}^*(y) + S = C, \quad S \succeq 0, \quad XS = 0. \tag{4}$$
Given a penalty parameter $\mu > 0$, the augmented Lagrangian function for the dual SDP (3) is defined as
$$\mathcal{L}_\mu(y, S, X) = -b^T y + \langle X, \mathcal{A}^*(y) + S - C \rangle + \frac{1}{2\mu}\,\|\mathcal{A}^*(y) + S - C\|_F^2, \tag{5}$$
where $X \in \mathcal{S}^n$ is the Lagrange multiplier. For given $(X^k, y^k, S^k)$, the alternating direction augmented Lagrangian approach for solving problems (1) and (3) generates the sequences $\{y^k\}$, $\{S^k\}$, and $\{X^k\}$ as follows:
$$y^{k+1} = \arg\min_{y}\ \mathcal{L}_\mu(y, S^k, X^k), \tag{6}$$
$$S^{k+1} = \arg\min_{S \succeq 0}\ \mathcal{L}_\mu(y^{k+1}, S, X^k), \tag{7}$$
$$X^{k+1} = X^k + \frac{\mathcal{A}^*(y^{k+1}) + S^{k+1} - C}{\mu}. \tag{8}$$
Apparently, we can obtain $y^{k+1}$ by solving the first-order optimality conditions for (6), which form a system of linear equations associated with $\mathcal{A}\mathcal{A}^*$. Since $\mathcal{A}\mathcal{A}^*$ is an $m \times m$ matrix, it is difficult to obtain $y^{k+1}$ exactly when $m$ is large. In order to alleviate this difficulty, we use the quadratic approximation of $\mathcal{L}_\mu(y, S^k, X^k)$ in (6) around $y^k$ as follows:
$$Q_k(y) = \mathcal{L}_\mu(y^k, S^k, X^k) + \langle g^k, y - y^k \rangle + \frac{\tau_k}{2}\,\|y - y^k\|^2, \tag{9}$$
where $\tau_k > 0$ and
$$g^k = \nabla_y \mathcal{L}_\mu(y^k, S^k, X^k) = -b + \mathcal{A}\Big(X^k + \frac{\mathcal{A}^*(y^k) + S^k - C}{\mu}\Big). \tag{10}$$
We replace step (6) by
$$y^{k+1} = \arg\min_{y}\ Q_k(y). \tag{11}$$
Then, we have
$$y^{k+1} = y^k - \frac{g^k}{\tau_k}. \tag{12}$$
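The linearized step is a single gradient step on the augmented Lagrangian in $y$. The sketch below (our assumed penalty form and constants, not the authors' code) verifies that one such step decreases the augmented Lagrangian whenever the proximal parameter exceeds the curvature bound $\lambda_{\max}(\mathcal{A}\mathcal{A}^*)/\mu$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, mu = 4, 3, 1.0
sym = lambda M: (M + M.T) / 2
As = [sym(rng.standard_normal((n, n))) for _ in range(m)]
C = sym(rng.standard_normal((n, n)))
b = rng.standard_normal(m)
A_op  = lambda X: np.array([np.sum(Ai * X) for Ai in As])
A_adj = lambda y: sum(yi * Ai for yi, Ai in zip(y, As))

def L_mu(y, S, X):
    """Augmented Lagrangian -b^T y + <X, A*(y)+S-C> + ||A*(y)+S-C||_F^2 / (2 mu)."""
    R = A_adj(y) + S - C
    return -b @ y + np.sum(X * R) + np.sum(R * R) / (2 * mu)

def grad_y(y, S, X):
    """Gradient of L_mu with respect to y."""
    return -b + A_op(X + (A_adj(y) + S - C) / mu)

# Proximal parameter tau strictly above lambda_max(A A*) / mu
A_mat = np.stack([Ai.ravel() for Ai in As])
tau = 1.01 * np.linalg.eigvalsh(A_mat @ A_mat.T).max() / mu

y, S, X = np.zeros(m), np.zeros((n, n)), np.eye(n)
y_new = y - grad_y(y, S, X) / tau   # the linearized y-update
print(L_mu(y_new, S, X) < L_mu(y, S, X))  # True (strict decrease for nonzero gradient)
```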
As pointed out in [16], problem (7) is equivalent to
$$\min_{S \succeq 0}\ \|S - V^{k+1}\|_F^2, \tag{13}$$
where $V^{k+1} = C - \mathcal{A}^*(y^{k+1}) - \mu X^k$. Denote the spectral decomposition of the matrix $V^{k+1}$ by
$$V^{k+1} = Q \begin{pmatrix} \Sigma_+ & 0 \\ 0 & \Sigma_- \end{pmatrix} Q^T, \qquad Q = (Q_+, Q_-), \tag{14}$$
where $\Sigma_+$ and $\Sigma_-$ are the nonnegative and negative eigenvalues of $V^{k+1}$. We then obtain the fact that $S^{k+1} = Q_+ \Sigma_+ Q_+^T$. It follows from (8) that
$$X^{k+1} = \frac{S^{k+1} - V^{k+1}}{\mu} = -\frac{Q_- \Sigma_- Q_-^T}{\mu} \succeq 0. \tag{15}$$
Now we present the linearizing alternating direction augmented Lagrangian approach in Algorithm 1.
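The closed-form updates rest on splitting a symmetric matrix into its positive and negative spectral parts; the positive part is exactly the projection onto the positive semidefinite cone. A small NumPy sketch (illustrative names only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
V = (lambda M: (M + M.T) / 2)(rng.standard_normal((n, n)))

# Spectral decomposition V = Q diag(w) Q^T (eigh returns ascending eigenvalues)
w, Q = np.linalg.eigh(V)
S = (Q * np.maximum(w, 0)) @ Q.T   # PSD part: projection of V onto the PSD cone
N = (Q * np.minimum(w, 0)) @ Q.T   # complementary negative part

# S and -N are both PSD, S + N recovers V, and S N = 0 (complementarity)
print(np.allclose(S + N, V), np.allclose(S @ N, 0))  # True True
```

In the algorithm, the dual slack variable is the positive part of this split and the multiplier is $-1/\mu$ times the negative part, so it is positive semidefinite automatically.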

Remark 1. We can choose $\tau_k$ to satisfy the condition of Algorithm 1. If $\mathcal{A}\mathcal{A}^* = I$ and $\tau_k = 1/\mu$ for all $k$, then Algorithm 1 is the same as the approach proposed in [16].
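Putting the three updates together, the following self-contained sketch runs our reconstruction of the scheme (linearized $y$-step, eigenvalue projection for the slack, closed-form multiplier update) on a tiny random feasible instance; every constant here is an illustrative choice of ours, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, mu = 6, 4, 1.0
sym = lambda M: (M + M.T) / 2
As = [sym(rng.standard_normal((n, n))) for _ in range(m)]
A_op  = lambda X: np.array([np.sum(Ai * X) for Ai in As])
A_adj = lambda y: sum(yi * Ai for yi, Ai in zip(y, As))

# Build a strictly feasible instance so an optimal solution exists
b = A_op(np.eye(n))                            # X = I is primal feasible
C = A_adj(rng.standard_normal(m)) + np.eye(n)  # dual strictly feasible

# Proximal parameter above lambda_max(A A*) / mu, as the convergence theory requires
A_mat = np.stack([Ai.ravel() for Ai in As])
tau = 1.01 * np.linalg.eigvalsh(A_mat @ A_mat.T).max() / mu

X, y, S = np.eye(n), np.zeros(m), np.zeros((n, n))
for k in range(5000):
    # (i) linearized y-update: one gradient step, no linear system solved
    g = -b + A_op(X + (A_adj(y) + S - C) / mu)
    y = y - g / tau
    # (ii) S-update: project V = C - A*(y) - mu X onto the PSD cone
    w, Q = np.linalg.eigh(C - A_adj(y) - mu * X)
    S = (Q * np.maximum(w, 0)) @ Q.T
    # (iii) multiplier update in closed form; X stays PSD by construction
    X = X + (A_adj(y) + S - C) / mu

pinf = np.linalg.norm(A_op(X) - b) / (1 + np.linalg.norm(b))
dinf = np.linalg.norm(A_adj(y) + S - C) / (1 + np.linalg.norm(C))
print(pinf, dinf)  # both residuals shrink toward zero as the iteration proceeds
```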
3. The Convergence of the Proposed Approach
In this section, we prove the convergence of Algorithm 1 using an argument similar to the one in [34]. Let $u^k = (y^k, X^k)$; then we have the following proposition.
Lemma 2. Let $\{u^k\}$ be generated by Algorithm 1 and let $u^* = (y^*, X^*)$ be given by an optimal solution of (1) and (3); then one has
$$\|u^{k+1} - u^*\|_M^2 \le \|u^k - u^*\|_M^2 - \|u^{k+1} - u^k\|_M^2, \tag{16}$$
where
$$\|u\|_M^2 = \Big\langle y, \Big(\tau_k I - \frac{\mathcal{A}\mathcal{A}^*}{\mu}\Big) y \Big\rangle + \mu\,\|X\|_F^2. \tag{17}$$
Proof. From (8), there holds By (12), we know that That is, By substituting (8) into the above equality, using the fact , and rearranging the terms, one has Since , we have By substituting into (22), we get By adding (18), (21), and (23) together, we obtain where the last inequality comes from (8) and (22). Note that , , and are positive semidefinite matrices; then It follows from (25) that By (26) and the fact that we have which completes the proof.
Theorem 3. Let $\{(X^k, y^k, S^k)\}$ be generated by Algorithm 1; then it converges to a solution of problems (1) and (3).
Proof. By (16), we know that the sequence $\{u^k\}$ is bounded and the sequence $\{\|u^k - u^*\|_M\}$ is monotonically nonincreasing. Therefore,
$$\lim_{k \to \infty} \|u^k - \bar{u}\|_M \ \text{exists}, \tag{29}$$
where $\bar{u}$ can be any limit point of $\{u^k\}$. It follows that
$$\lim_{k \to \infty} \|u^{k+1} - u^k\|_M = 0. \tag{30}$$
Since $\mu\tau_k$ is greater than the maximum eigenvalue of the matrix $\mathcal{A}\mathcal{A}^*$, the matrix $\tau_k I - \mathcal{A}\mathcal{A}^*/\mu$ is positive definite. By the definition of $\|\cdot\|_M$, we obtain
$$\lim_{k \to \infty} \big(y^{k+1} - y^k\big) = 0, \qquad \lim_{k \to \infty} \big(X^{k+1} - X^k\big) = 0. \tag{31}$$
From the update formula (8), we have
$$\lim_{k \to \infty} \big(\mathcal{A}^*(y^{k+1}) + S^{k+1} - C\big) = \mu \lim_{k \to \infty} \big(X^{k+1} - X^k\big) = 0. \tag{32}$$
By (12) and the definition of $g^k$, one has
$$\lim_{k \to \infty} g^k = \lim_{k \to \infty} \tau_k \big(y^k - y^{k+1}\big) = 0, \tag{33}$$
which together with (32) implies that
$$\lim_{k \to \infty} \big(\mathcal{A}(X^k) - b\big) = 0. \tag{34}$$
By combining (32), (34), $X^k S^k = 0$, and $X^k, S^k \succeq 0$ for all $k$, we know that any limit point of $\{(X^k, y^k, S^k)\}$, say $(\bar{X}, \bar{y}, \bar{S})$, satisfies
$$\mathcal{A}(\bar{X}) = b, \quad \bar{X} \succeq 0, \quad \mathcal{A}^*(\bar{y}) + \bar{S} = C, \quad \bar{S} \succeq 0, \quad \bar{X}\bar{S} = 0, \tag{35}$$
which means $(\bar{X}, \bar{y}, \bar{S})$ is a solution of problems (1) and (3). By Lemma 2, the sequence converges to a solution of problems (1) and (3).
4. Implementation Issues
The proposed algorithm is implemented by modifying the code of the alternating direction approach in [16], which is referred to as SDPAD. Before presenting the numerical results, we discuss some implementation issues of Algorithm 1 in this section.
In order to improve the computational performance of Algorithm 1, following a strategy similar to that of many alternating direction approaches [35–37], we replace step (8) by
$$X^{k+1} = X^k + \gamma\,\frac{\mathcal{A}^*(y^{k+1}) + S^{k+1} - C}{\mu}, \tag{36}$$
where $\gamma > 0$ is a step size. We can use an argument similar to the one in [34] to prove the following theorem.
Theorem 4. Let $u^*$ be given by an optimal solution of (1) and (3). For $\gamma \in (0, 1]$, it holds that For $\gamma \in \big(1, (1+\sqrt{5})/2\big)$, it holds that
Based on Theorem 4, it is not difficult to show that Algorithm 1 with the update step (36) remains convergent for $\gamma \in \big(0, (1+\sqrt{5})/2\big)$.
In our numerical experiments, we stop the algorithm when
$$\max\{\mathrm{pinf}, \mathrm{dinf}\} \le \epsilon,$$
where
$$\mathrm{pinf} = \frac{\|\mathcal{A}(X) - b\|}{1 + \|b\|}, \qquad \mathrm{dinf} = \frac{\|\mathcal{A}^*(y) + S - C\|_F}{1 + \|C\|_F}.$$
We set the maximum number of iterations allowed in Algorithm 1 and SDPAD to 20,000.
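These residuals can be computed as follows; the normalizations shown are our assumption of the usual relative-infeasibility form, not a quotation of the source:

```python
import numpy as np

def residuals(X, y, S, C, b, A_op, A_adj):
    """Relative primal and dual infeasibilities (assumed normalization)."""
    pinf = np.linalg.norm(A_op(X) - b) / (1 + np.linalg.norm(b))
    dinf = np.linalg.norm(A_adj(y) + S - C, 'fro') / (1 + np.linalg.norm(C, 'fro'))
    return float(pinf), float(dinf)

# Sanity check: at an exact KKT point both residuals vanish
n, m = 3, 2
C, b = np.eye(n), np.full(m, 3.0)
A_op  = lambda X: np.array([np.trace(X)] * m)   # toy operator: m copies of tr(X)
A_adj = lambda y: float(np.sum(y)) * np.eye(n)
X, y, S = np.eye(n), np.zeros(m), np.eye(n)
print(residuals(X, y, S, C, b, A_op, A_adj))  # (0.0, 0.0)
```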
We use the same strategy as SDPAD for updating the penalty parameter $\mu$. In particular, given some integer $h$, for every $h$ iterations, if $\mathrm{pinf}/\mathrm{dinf} \le \eta_1$, then set $\mu = \gamma_1 \mu$. Otherwise, if $\mathrm{pinf}/\mathrm{dinf} > \eta_2$, then set $\mu = \gamma_2 \mu$. Here $\eta_1 \le \eta_2$ and $\gamma_1 < 1 < \gamma_2$.
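A sketch of such a periodic tuning rule; the thresholds, factors, and the direction of each adjustment are illustrative placeholders of ours, not the values used in the reported experiments:

```python
def update_mu(mu, pinf, dinf, eta1=0.5, eta2=2.0, gamma1=0.9, gamma2=1.1):
    """Rescale the penalty parameter according to the ratio of primal to dual
    infeasibility (all constants here are illustrative placeholders)."""
    ratio = pinf / max(dinf, 1e-16)
    if ratio <= eta1:        # primal residual lags far behind the dual one
        return gamma1 * mu
    if ratio > eta2:         # dual residual lags far behind the primal one
        return gamma2 * mu
    return mu                # residuals balanced: keep mu unchanged

print(update_mu(1.0, 0.1, 1.0))  # 0.9
print(update_mu(1.0, 1.0, 1.0))  # 1.0
```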
We use the same values of $\epsilon$, $h$, $\eta_1$, and $\eta_2$ for all our test problems. The parameter $\gamma$ for updating $X$ is set to 1.618. We choose the initial iterates $X^0$, $y^0$, and $S^0$.
Let $g(y) = \nabla_y \mathcal{L}_\mu(y, S^k, X^k)$. Since the other parts of $\mathcal{L}_\mu$ are linear in $y$, the choice of $\tau_k$ mainly depends on the quadratic term $\frac{1}{2\mu}\|\mathcal{A}^*(y)\|_F^2$. We choose $\tau_k$ as the Barzilai-Borwein step size [38] of $g$ with the following safeguard:
$$\tau_k = \min\left\{\max\left\{\frac{\langle \Delta y^k, \Delta g^k \rangle}{\langle \Delta y^k, \Delta y^k \rangle},\ \tau_{\min}\right\},\ \tau_{\max}\right\},$$
where $\Delta y^k = y^k - y^{k-1}$, $\Delta g^k = g^k - g^{k-1}$, and $\tau_{\min} > \lambda_{\max}(\mathcal{A}\mathcal{A}^*)/\mu$. Clearly, this choice of $\tau_k$ ensures that the matrix $\tau_k I - \mathcal{A}\mathcal{A}^*/\mu$ is positive definite for any $k$.
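The safeguarded Barzilai-Borwein choice can be sketched as follows (the clipping bounds and names are ours; in the algorithm, the lower bound would be chosen large enough to keep $\tau_k I - \mathcal{A}\mathcal{A}^*/\mu$ positive definite):

```python
import numpy as np

def bb_step(y, y_prev, g, g_prev, tau_min, tau_max):
    """Barzilai-Borwein curvature estimate tau = <dy, dg> / <dy, dy>,
    clipped to the safeguard interval [tau_min, tau_max]."""
    dy, dg = y - y_prev, g - g_prev
    denom = dy @ dy
    tau = (dy @ dg) / denom if denom > 0 else tau_max
    return min(max(tau, tau_min), tau_max)

# For a quadratic gradient g(y) = H y, the BB value is a Rayleigh quotient of H
H = np.diag([1.0, 4.0])
g = lambda y: H @ y
y0, y1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(bb_step(y1, y0, g(y1), g(y0), 0.5, 10.0))  # 2.5
```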
5. Numerical Results
In this section, we report our numerical results. We compare solutions obtained from Algorithm 1 and SDPAD on the SDP relaxations of frequency assignment problems and binary integer quadratic programming problems. All the procedures were carried out in MATLAB 2011b on a 3.10 GHz Core i5 PC with 4 GB of RAM under Windows 7.
In Tables 1 and 2, the first column gives the problem name; the following notation is used in the column headers: $n$, the size of the matrix $C$; $m$, the total number of equality and inequality constraints; "itr", the number of iterations; "cpu", the CPU time in the format of hours, minutes, and seconds.

