Journal of Applied Mathematics

Volume 2014, Article ID 613205, 9 pages

http://dx.doi.org/10.1155/2014/613205
Research Article

A Novel Approach for Solving Semidefinite Programs

1Department of Mathematics, Henan Institute of Science and Technology, Xinxiang 453003, China

2School of Mathematics and Statistics, Xidian University, Xi’an 710071, China

3Department of Basic Science, Henan Mechanical and Electrical Engineering College, Xinxiang 453002, China

Received 2 April 2014; Accepted 3 August 2014; Published 18 August 2014

Academic Editor: Ram N. Mohapatra

Copyright © 2014 Hong-Wei Jiao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A novel linearizing alternating direction augmented Lagrangian approach is proposed for effectively solving semidefinite programs (SDP). At every iteration, by fixing the other variables, the proposed approach alternately optimizes the dual variables and the dual slack variables; then the primal variables, that is, the Lagrange multipliers, are updated. In addition, the proposed approach updates all the variables in closed form without solving any system of linear equations. Global convergence of the proposed approach is proved under mild conditions, and two numerical problems are given to demonstrate the effectiveness of the presented approach.

1. Introduction

Minimizing a linear function of a symmetric positive semidefinite matrix subject to linear equality constraints is called a semidefinite program (SDP), whose form can be given as follows:

$\min_{X} \ \langle C, X\rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b, \ X \succeq 0,$ (1)

where $\mathcal{A}: \mathcal{S}^n \to \mathbb{R}^m$ is a linear operator, which can be expressed as

$\mathcal{A}(X) = \big(\langle A_1, X\rangle, \ldots, \langle A_m, X\rangle\big)^T,$ (2)

$A_1, \ldots, A_m, C \in \mathcal{S}^n$ are all matrices, $b \in \mathbb{R}^m$ is a vector, and $X \succeq 0$ stands for the fact that $X$ is a symmetric positive semidefinite matrix. Here, $\mathcal{S}^n$ stands for the space of $n \times n$ symmetric matrices and $\langle X, Y\rangle = \operatorname{tr}(X^T Y)$ stands for the standard inner product in $\mathcal{S}^n$. $\mathcal{A}^*$ stands for the adjoint operator of $\mathcal{A}$. The dual problem of (1) is

$\max_{y, S} \ b^T y \quad \text{s.t.} \quad \mathcal{A}^*(y) + S = C, \ S \succeq 0,$ (3)

where $\mathcal{A}^*(y) = \sum_{i=1}^{m} y_i A_i$ and $y \in \mathbb{R}^m$, $S \in \mathcal{S}^n$.
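To make the roles of the operator $\mathcal{A}$ and its adjoint $\mathcal{A}^*$ concrete, the following sketch builds both maps from a list of constraint matrices and checks the adjoint identity $\langle \mathcal{A}(X), y\rangle = \langle X, \mathcal{A}^*(y)\rangle$. The paper's experiments use MATLAB; this is a Python/numpy illustration, and all names are ours, not the paper's:

```python
import numpy as np

def make_operator(A_mats):
    """Build the linear map A(X) = (<A_1,X>, ..., <A_m,X>)^T and its adjoint
    A*(y) = sum_i y_i A_i from a list of symmetric constraint matrices."""
    def A_op(X):
        return np.array([np.sum(Ai * X) for Ai in A_mats])  # entrywise <A_i, X>
    def A_adj(y):
        return sum(yi * Ai for yi, Ai in zip(y, A_mats))
    return A_op, A_adj

# Tiny example: verify the adjoint identity <A(X), y> = <X, A*(y)>
rng = np.random.default_rng(0)
A_mats = [(M + M.T) / 2 for M in rng.standard_normal((3, 4, 4))]
A_op, A_adj = make_operator(A_mats)
X = rng.standard_normal((4, 4)); X = (X + X.T) / 2
y = rng.standard_normal(3)
lhs = A_op(X) @ y              # <A(X), y>
rhs = np.sum(X * A_adj(y))     # <X, A*(y)>
assert abs(lhs - rhs) < 1e-10
```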

The SDP problem has been a very active area of optimization research for many years. It has broad applications in many areas, for example, system and control theory [1], combinatorial optimization [2], nonconvex quadratic programs [3], and matrix completion problems [4]. We refer to the reference book [5] for the theory and applications of SDP. Interior point methods (IPMs) have been very successful in solving SDP in polynomial time [6–9]. For small- and medium-sized SDP problems, IPMs are generally efficient and robust. However, for large-scale SDP problems with large $m$ and moderate $n$, IPMs become very slow due to the need to compute and factorize the $m \times m$ Schur complement matrix. To overcome this shortcoming, by using an iterative solver to compute a search direction at each iteration, [10, 11] proposed inexact IPMs which manage to solve certain types of SDP problems with $m$ up to 125,000. Based on the augmented Lagrangian approach, many variants for SDP have been proposed. For example, [12] introduced the so-called boundary point approach; using an eigenvalue decomposition to maintain complementarity, [13] presented a dual augmented Lagrangian approach. More recently, Huang and Xu [14] proposed a trust region algorithm for SDP problems that performs a number of conjugate gradient iterations to solve the subproblems. Zhao et al. [15] designed a Newton-CG augmented Lagrangian approach for solving SDP problems from the perspective of approximate semismooth Newton methods. Wen et al. [16] presented an alternating direction dual augmented Lagrangian approach for SDP. In [17], Wen et al. proposed a row-by-row approach for solving SDP problems based on solving a sequence of problems obtained by restricting the $n$-dimensional positive semidefinite constraint on the matrix $X$. In addition to the methods reviewed above, some related research on the subject is as follows. Xu et al. 
[18] presented a new algorithm for the box-constrained SDP based on the feasible direction method. Zhadan and Orlov [19] presented a dual interior point method for linear SDP problem. Lin [20] proposed an inexact spectral bundle method for convex quadratic SDP. Sun and Zhang [21] proposed a modified alternating direction method for solving convex quadratically constrained quadratic SDP, which requires much less computational effort per iteration than the second-order approaches. In [22], the authors presented penalty and barrier methods for convex SDP. In [23], an alternating direction method is proposed for solving convex SDP problems by Zhang et al., which only computes several metric projections at each iteration. In [24], Huang et al. presented a lower-order penalization approach to solve nonlinear SDP. In [25], Aroztegui et al. presented a feasible direction interior point algorithm for solving nonlinear SDP. Yang and Yu [26] proposed a homotopy method for nonlinear SDP. Kanzow et al. [27] presented successive linearization methods for solving nonlinear SDP. Yamashita et al. [28] presented a primal-dual interior point method for nonlinear SDP. Lu et al. [29] presented a saddle point mirror-prox algorithm for solving a large-scale SDP. In [30], Monteiro et al. presented a first-order block-decomposition method for solving two-easy-block structured SDP. In [31], an efficient low-rank stochastic gradient descent method is proposed for solving a class of SDP problems, which has clear computational advantages over the standard stochastic gradient descent method. Based on a new technique for finding the search direction and the strategy of the central path, Wang and Bai [32] presented a new primal-dual path-following interior-point algorithm for solving SDP problem. By reformulating the complementary conditions in the primal-dual optimality conditions as a projection equation, Yu [33] presented an alternating direction algorithm for the solution of SDP problems. 
However, most of these existing methods need to solve a system of linear equations to update the variables, which is time-consuming, especially in the large-scale case.

In this paper, we present a novel linearizing alternating direction dual augmented Lagrangian approach for solving SDP problems. At every iteration, the proposed algorithm works on the augmented Lagrangian function for the dual SDP problem. Specifically, at every iteration, by fixing the other variables, the proposed algorithm alternately optimizes the dual variables and the dual slack variables; then the primal variables, that is, the Lagrange multipliers, are updated. The proposed algorithm is closely related to the alternating direction augmented Lagrangian approach in [16], except for the way the dual variables are updated. In particular, the proposed algorithm updates the dual variables without solving any system of linear equations. Moreover, the proposed algorithm updates all the variables in closed form. Numerical experimental results demonstrate that the performance of the proposed approach can be significantly better than that reported in [16].

The remainder of this paper is organized as follows. In Section 2, a novel linearizing alternating direction augmented Lagrangian approach is proposed for solving SDP problems. The convergence of the proposed approach is proved in Section 3. In Section 4, some implementation issues of the proposed approach are discussed. In Section 5, two numerical examples, the frequency assignment problem and binary integer quadratic programming problems, are used to demonstrate the performance of the proposed approach.

Some notations: $\mathcal{S}^n_+$ represents the set of $n \times n$ symmetric positive semidefinite matrices. $X \succ 0$ represents the fact that $X$ is positive definite. The notation $\|\cdot\|$ stands for the Euclidean norm and $\|\cdot\|_F$ stands for the Frobenius norm. $\operatorname{vec}(X)$ denotes the vector obtained by stacking $X$'s columns one by one. $I$ denotes the identity matrix of proper order.

2. Linearizing Alternating Direction Augmented Lagrangian Approach

In this section, a linearizing alternating direction augmented Lagrangian approach is proposed for solving (1) and (3).

Let $A = [\operatorname{vec}(A_1), \ldots, \operatorname{vec}(A_m)]^T$. The expression $\mathcal{A}(X)$ is then equal to $A \operatorname{vec}(X)$; that is, $A$ is the matrix representation of the operator $\mathcal{A}$.

Without loss of generality, we assume that the matrix $A$ has full row rank and that there exists a matrix $\bar{X} \succ 0$ such that $\mathcal{A}(\bar{X}) = b$. It is well known that, under the above assumption, a point $(X, y, S)$ is optimal for the SDP problems (1) and (3) if and only if

$\mathcal{A}(X) = b, \quad X \succeq 0, \quad \mathcal{A}^*(y) + S = C, \quad S \succeq 0, \quad XS = 0.$ (4)

Given a penalty parameter $\mu > 0$, the augmented Lagrangian function for the dual SDP (3) is defined as

$L_{\mu}(X, y, S) = -b^T y + \langle X, \mathcal{A}^*(y) + S - C\rangle + \frac{1}{2\mu}\|\mathcal{A}^*(y) + S - C\|_F^2,$ (5)

where $X \in \mathcal{S}^n$ plays the role of the Lagrange multiplier. For given $(X^k, y^k, S^k)$, the alternating direction augmented Lagrangian approach for solving problems (1) and (3) generates the sequences $\{X^k\}$, $\{y^k\}$, and $\{S^k\}$ as follows:

$y^{k+1} = \arg\min_{y \in \mathbb{R}^m} L_{\mu}(X^k, y, S^k),$ (6)
$S^{k+1} = \arg\min_{S \succeq 0} L_{\mu}(X^k, y^{k+1}, S),$ (7)
$X^{k+1} = X^k + \frac{1}{\mu}\big(\mathcal{A}^*(y^{k+1}) + S^{k+1} - C\big).$ (8)

Apparently, we can obtain $y^{k+1}$ by solving the first-order optimality conditions of (6), which form a system of linear equations associated with $\mathcal{A}\mathcal{A}^*$. Since $\mathcal{A}\mathcal{A}^*$ is an $m \times m$ matrix, it is difficult to obtain $y^{k+1}$ exactly when $m$ is large. In order to alleviate this difficulty, we use a quadratic approximation of $L_{\mu}(X^k, y, S^k)$ in (6) around $y^k$, governed by a proximal parameter $\tau_k$, and replace step (6) by the minimization of this approximation. Then $y^{k+1}$ is available in closed form as a single gradient-type step, without solving any system of linear equations.

As pointed out in [16], problem (7) is equivalent to projecting the matrix $V^k = C - \mathcal{A}^*(y^{k+1}) - \mu X^k$ onto the positive semidefinite cone. Denote the spectral decomposition of the matrix $V^k$ by $V^k = Q_+\Sigma_+ Q_+^T + Q_-\Sigma_- Q_-^T$, where $\Sigma_+$ and $\Sigma_-$ contain the nonnegative and the negative eigenvalues of $V^k$, respectively. We then obtain the fact that $S^{k+1} = Q_+\Sigma_+ Q_+^T$. It follows from (8) that $X^{k+1} = -\frac{1}{\mu} Q_-\Sigma_- Q_-^T$. Now we present the linearizing alternating direction augmented Lagrangian approach in Algorithm 1.
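The $S$- and $X$-updates above reduce to splitting a symmetric matrix into its positive and negative spectral parts, that is, projecting onto the positive semidefinite cone. A minimal numpy sketch of this splitting (variable names are illustrative):

```python
import numpy as np

def psd_split(V):
    """Split a symmetric matrix V into its positive part V_+ (the projection
    of V onto the PSD cone) and its negative part V_- via a spectral
    decomposition, so that V = V_+ + V_- and <V_+, V_-> = 0."""
    w, Q = np.linalg.eigh(V)                      # eigenvalues ascending
    V_plus = (Q * np.maximum(w, 0)) @ Q.T         # keep nonnegative eigenvalues
    V_minus = (Q * np.minimum(w, 0)) @ Q.T        # keep negative eigenvalues
    return V_plus, V_minus

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5)); V = (M + M.T) / 2
Vp, Vm = psd_split(V)
assert np.allclose(Vp + Vm, V)                    # exact split
assert np.min(np.linalg.eigvalsh(Vp)) >= -1e-10   # V_+ is PSD
assert abs(np.sum(Vp * Vm)) < 1e-8                # complementary parts
```

In the scheme of [16], the dual slack update keeps the positive part and the primal update keeps (a rescaling of) the negative part, so one eigendecomposition serves both steps.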

alg1
Algorithm 1: Linearizing alternating direction augmented Lagrangian algorithm for SDP.

Remark 1. We can choose $\tau_k$ to satisfy the condition of Algorithm 1. If the linearization is not active for all $k$, that is, step (6) is solved exactly at every iteration, then Algorithm 1 is the same as the approach proposed in [16].

3. The Convergence of the Proposed Approach

In this section, we prove the convergence of Algorithm 1 using an argument similar to the one in [34]. We have the following lemma.

Lemma 2. Let be generated by Algorithm 1 and let be an optimal solution of (1) and (3); then one has where

Proof. From (8), there holds By (12), we know that That is, By substituting (8) into the above equality, using the fact , and rearranging the terms, one has Since , we have By substituting into (22), we get By adding (18), (21), and (23) together, we obtain where the last inequality comes from (8) and (22). Note that , , and are positive semidefinite matrices; then It follows from (25) that By (26) and the fact that we have which completes the proof.

Theorem 3. Let be generated by Algorithm 1; then it converges to a solution of problems (1) and (3).

Proof. By (16), we know that the sequence is bounded and the sequence is monotonically nonincreasing. Therefore, where can be any limit point of . It follows that Since are greater than the maximum eigenvalue of the matrix , the matrix is positive definite. By the definition of , we obtain From the update formula (8), we have By (12) and the definition of , one has which together with (32) imply that By combining (32), (34), , and for all , we know that any limit point of , say , satisfies which means is a solution of problems (1) and (3). By Lemma 2, converges to a solution of problems (1) and (3).

4. Implementation Issues

The proposed algorithm is implemented by modifying the code of the alternating direction approach in [16], which is referred to as SDPAD. Before presenting the numerical results, we discuss some implementation issues of Algorithm 1 in this section.

In order to improve the computational performance of Algorithm 1, following a strategy similar to that of many alternating direction approaches [35–37], we replace step (8) by the same update scaled by a relaxation factor. An argument similar to the one in [34] proves the following theorem.

Theorem 4. Let    be an optimal solution of (1) and (3). For , it holds that For , it holds that

Based on Theorem 4, it is not difficult to show that

In our numerical experiments, we stop the algorithm when where We set the maximum number of iterations allowed in Algorithm 1 and SDPAD to 20,000.
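The concrete stopping quantities are elided in the text above. As a hedged illustration only, relative infeasibility and duality-gap measures of the kind commonly used to terminate ADMM-type SDP solvers can be assembled as follows (the formulas and names are ours, not necessarily the paper's exact criteria):

```python
import numpy as np

def residuals(A_op, A_adj, b, C, X, y, S):
    """Hypothetical relative primal/dual infeasibility and duality-gap
    measures for stopping an ADMM-type SDP solver (illustrative only)."""
    pinf = np.linalg.norm(A_op(X) - b) / (1 + np.linalg.norm(b))
    dinf = np.linalg.norm(A_adj(y) + S - C) / (1 + np.linalg.norm(C))
    pobj, dobj = np.sum(C * X), b @ y
    gap = abs(pobj - dobj) / (1 + abs(pobj) + abs(dobj))
    return max(pinf, dinf, gap)

# Sanity check on a trivially optimal point (X = 0, y = 0, S = C, b = 0)
A1 = np.eye(2)
A_op = lambda X: np.array([np.sum(A1 * X)])
A_adj = lambda y: y[0] * A1
C = np.eye(2)
r = residuals(A_op, A_adj, np.zeros(1), C, np.zeros((2, 2)), np.zeros(1), C)
assert r == 0.0
```

The algorithm would then stop once this maximum residual falls below the prescribed tolerance.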

We use the same strategy as SDPAD for updating the penalty parameter. In particular, given some integer , let For , if , then set . Otherwise, if , then set . Here .
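The exact thresholds and factors of this update rule are elided above. Purely as an illustration of the residual-balancing idea behind such rules (cf. [37]), a generic sketch might look like the following; every constant and name here is hypothetical, not the paper's:

```python
def update_mu(mu, primal_res, dual_res, gamma=2.0, ratio=10.0,
              mu_min=1e-4, mu_max=1e4):
    """Hypothetical adaptive penalty rule: shrink mu when the primal
    residual dominates, grow it when the dual residual dominates.
    All constants are illustrative, not the paper's exact values."""
    if primal_res > ratio * dual_res:
        mu = max(mu / gamma, mu_min)      # favor primal feasibility
    elif dual_res > ratio * primal_res:
        mu = min(mu * gamma, mu_max)      # favor dual feasibility
    return mu
```

Keeping the two residuals of comparable size is the usual motivation: a badly scaled penalty makes one subproblem converge much faster than the other.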

We set ,  ,  , and for our test problems. The relaxation parameter used in the modified update of step (8) is set to 1.618. We choose the initial iterate ,  , and .

Let . Since the other parts of are linear, the choice of depends mainly on . We set and choose as the Barzilai-Borwein step size [38] of with the following safeguard: where , and . Clearly, this choice of ensures that the matrix is positive definite for any .
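As a hedged illustration of the Barzilai-Borwein choice with a safeguard, the curvature estimate $\tau = \langle s, t\rangle / \langle s, s\rangle$, built from successive iterates and gradients, can be clipped into a fixed interval. The bound values and names below are illustrative, not the paper's:

```python
import numpy as np

def bb_stepsize(y_new, y_old, g_new, g_old, tau_min=1e-4, tau_max=1e4):
    """Barzilai-Borwein curvature estimate tau = <s, t> / <s, s>, where
    s is the change in the iterate and t the change in the gradient,
    safeguarded into [tau_min, tau_max] (bounds are illustrative)."""
    s = y_new - y_old          # change in the dual iterate
    t = g_new - g_old          # change in the gradient
    ss = float(s @ s)
    if ss == 0.0:
        return tau_max         # degenerate step: fall back to the safeguard
    tau = float(s @ t) / ss
    return min(max(tau, tau_min), tau_max)
```

For a quadratic with Hessian $H$, this recovers a Rayleigh-quotient estimate of the curvature of $H$ along the last step, which is why it serves as a cheap stand-in for the exact eigenvalue bound.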

5. Numerical Results

In this section, we report our numerical results. We compare the solutions obtained by Algorithm 1 and SDPAD on the SDP relaxations of frequency assignment problems and binary integer quadratic programming problems. All the procedures were implemented in MATLAB 2011b on a 3.10 GHz Core i5 PC with 4 GB of RAM under Windows 7.

In Tables 1 and 2, the first column gives the problem name; the following notations are used in the column headers: the size of the matrix ; the total number of equality and inequality constraints; "itr": the number of iterations; "cpu": the CPU time in the format of hours, minutes, and seconds.

Table 1: Numerical results compared with [16] for computing frequency assignment problems.
Table 2: Numerical results compared with [16] for computing binary integer quadratic programming problems.
5.1. Frequency Assignment Relaxation

In this subsection, we consider SDPs arising from the semidefinite relaxation of frequency assignment problems (FAP) [39]. The explicit description of the SDP form is given in [40]. For a given undirected graph with vertex set and edge set , assume is a weight matrix for . If the edge , we suppose . For a given edge subset , we can formulate the problem as where is the vector of all ones, is a diagonal matrix with as the diagonal entries, and is the vector of the diagonal entries of the matrix . The constraints were replaced by and by , so we have . We set to 50 for updating the penalty parameter .

We did not run SDPAD on our own computer for the problem "fap36"; the corresponding results presented here were taken from Table 1 in [16]. From Table 1, it can be observed that Algorithm 1 is often faster than SDPAD in achieving a duality gap of the same order. The infeasibility achieved by Algorithm 1 is satisfactory as well.

5.2. Binary Integer Quadratic Programming Problems

In this subsection, we present numerical results of Algorithm 1 and SDPAD on binary integer quadratic (BIQ) problems [41] through SDP relaxations, which have the following form: where . The constraints were replaced by , and the matrix was scaled by its Frobenius norm. We set to 50 for updating the penalty parameter .

Table 2 lists the results of Algorithm 1 and SDPAD on the BIQ problems. Comparing the results in Table 2, we can conclude that Algorithm 1 is superior to SDPAD on the BIQ problems in terms of CPU time and number of iterations. In addition, the accuracy of the approximate optimal solutions computed by Algorithm 1 is as good as that obtained by SDPAD.

Figure 1 shows the performance profiles [42] of Algorithm 1 and SDPAD for the number of iterations (Figure 1(a)) and the CPU time (Figure 1(b)). We observe that Algorithm 1 outperforms SDPAD in both the number of iterations and the CPU time.
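For readers unfamiliar with performance profiles [42], the plotted quantity is, for each solver, the fraction of test problems it solves within a factor $\tau$ of the best solver on that problem. A small numpy sketch (illustrative only; the data below are made up):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile: T is an (n_problems, n_solvers) array
    of positive costs (e.g., CPU time); returns, for each solver, the
    fraction of problems solved within a factor tau of the best solver."""
    ratios = T / T.min(axis=1, keepdims=True)   # per-problem performance ratios
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Toy data: solver 0 is fastest on 2 of 3 problems
T = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [4.0, 2.0]])
rho = performance_profile(T, taus=[1.0, 2.0])
# rho[s][j] = fraction of problems solver s solves within factor taus[j]
```

A solver whose curve sits higher and to the left dominates: it is both the fastest on more problems ($\tau = 1$) and robust at larger $\tau$.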

Figure 1: Performance profiles for SDPAD and the present method for number of iterations (a) and CPU time (b).

6. Conclusion

In this paper, a novel linearizing alternating direction augmented Lagrangian approach is proposed for solving semidefinite programs (SDP). The algorithm updates the dual variables without solving any system of linear equations. Moreover, all the variables are updated in closed form. Preliminary numerical results show the efficiency of the proposed algorithm. However, some implementation issues remain unsettled; for example, efficient strategies for updating the penalty parameter and choosing the step size deserve further work in applications of the algorithm.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their sincere thanks to the anonymous referees and the editors for their careful review of the paper and their valuable comments, which have greatly improved the earlier version of this paper. This work was supported by the Science and Technology Key Project of the Education Department of Henan Province (14A110024).

References

  1. F. Alizadeh, “Interior point methods in semidefinite programming with applications to combinatorial optimization,” SIAM Journal on Optimization, vol. 5, no. 1, pp. 13–51, 1995.
  2. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, Pa, USA, 1994.
  3. T. Fujie and M. Kojima, “Semidefinite programming relaxation for nonconvex quadratic programs,” Journal of Global Optimization, vol. 10, no. 4, pp. 367–380, 1997.
  4. A. Y. Alfakih, A. Khandani, and H. Wolkowicz, “Solving Euclidean distance matrix completion problems via semidefinite programming,” Computational Optimization and Applications, vol. 12, no. 1-3, pp. 13–30, 1999.
  5. M. Anjos and J. B. Lasserre, Handbook on Semidefinite, Conic and Polynomial Optimization, Springer, New York, NY, USA, 2012.
  6. Y. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming, SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
  7. R. D. C. Monteiro, “Primal-dual path-following algorithms for semidefinite programming,” SIAM Journal on Optimization, vol. 7, no. 3, pp. 663–678, 1997.
  8. C. Helmberg, F. Rendl, R. J. Vanderbei, and H. Wolkowicz, “An interior-point method for semidefinite programming,” SIAM Journal on Optimization, vol. 6, no. 2, pp. 342–361, 1996.
  9. L. Vandenberghe and S. Boyd, “Semidefinite programming,” SIAM Review, vol. 38, no. 1, pp. 49–95, 1996.
  10. K. Toh and M. Kojima, “Solving some large scale semidefinite programs via the conjugate residual method,” SIAM Journal on Optimization, vol. 12, no. 3, pp. 669–691, 2002.
  11. K. Toh, “Solving large scale semidefinite programs via an iterative solver on the augmented systems,” SIAM Journal on Optimization, vol. 14, no. 3, pp. 670–698, 2003.
  12. J. Povh, F. Rendl, and A. Wiegele, “A boundary point method to solve semidefinite programs,” Computing, vol. 78, no. 3, pp. 277–286, 2006.
  13. J. Malick, J. Povh, F. Rendl, and A. Wiegele, “Regularization methods for semidefinite programming,” SIAM Journal on Optimization, vol. 20, no. 1, pp. 336–356, 2009.
  14. A. Huang and C. Xu, “A trust region method for solving semidefinite programs,” Computational Optimization and Applications, vol. 55, no. 1, pp. 49–71, 2013.
  15. X. Zhao, D. Sun, and K. Toh, “A Newton-CG augmented Lagrangian method for semidefinite programming,” SIAM Journal on Optimization, vol. 20, no. 4, pp. 1737–1765, 2010.
  16. Z. Wen, D. Goldfarb, and W. Yin, “Alternating direction augmented Lagrangian methods for semidefinite programming,” Mathematical Programming Computation, vol. 2, no. 3-4, pp. 203–230, 2010.
  17. Z. Wen, D. Goldfarb, S. Ma, and K. Scheinberg, “Row by row methods for semidefinite programming,” Tech. Rep., Department of IEOR, Columbia University, 2009.
  18. Y. Xu, W. Sun, and L. Qi, “A feasible direction method for the semidefinite program with box constraints,” Applied Mathematics Letters, vol. 24, no. 11, pp. 1874–1881, 2011.
  19. V. G. Zhadan and A. A. Orlov, “Dual interior point methods for linear semidefinite programming problems,” Computational Mathematics and Mathematical Physics, vol. 51, no. 12, pp. 2031–2051, 2011.
  20. H. Lin, “An inexact spectral bundle method for convex quadratic semidefinite programming,” Computational Optimization and Applications, vol. 53, no. 1, pp. 45–89, 2011.
  21. J. Sun and S. Zhang, “A modified alternating direction method for convex quadratically constrained quadratic semidefinite programs,” European Journal of Operational Research, vol. 207, no. 3, pp. 1210–1220, 2010.
  22. A. Auslender and C. H. Ramírez, “Penalty and barrier methods for convex semidefinite programming,” Mathematical Methods of Operations Research, vol. 63, no. 2, pp. 195–219, 2006.
  23. S. Zhang, J. Ang, and J. Sun, “An alternating direction method for solving convex nonlinear semidefinite programming problems,” Optimization, vol. 62, no. 4, pp. 527–543, 2013.
  24. X. X. Huang, X. Q. Yang, and K. L. Teo, “Lower-order penalization approach to nonlinear semidefinite programming,” Journal of Optimization Theory and Applications, vol. 132, no. 1, pp. 1–20, 2007.
  25. M. Aroztegui, J. Herskovits, J. R. Roche, and E. Bazán, “A feasible direction interior point algorithm for nonlinear semidefinite programming,” Structural and Multidisciplinary Optimization, vol. 132, no. 1, pp. 1–20, 2007.
  26. L. Yang and B. Yu, “A homotopy method for nonlinear semidefinite programming,” Computational Optimization and Applications, vol. 56, no. 1, pp. 81–96, 2013.
  27. C. Kanzow, C. Nagel, H. Kato, and M. Fukushima, “Successive linearization methods for nonlinear semidefinite programs,” Computational Optimization and Applications, vol. 31, no. 3, pp. 251–273, 2005.
  28. H. Yamashita, H. Yabe, and K. Harada, “A primal-dual interior point method for nonlinear semidefinite programming,” Mathematical Programming, vol. 135, no. 1-2, pp. 89–121, 2012.
  29. Z. Lu, A. Nemirovski, and R. D. C. Monteiro, “Large-scale semidefinite programming via a saddle point mirror-prox algorithm,” Mathematical Programming, vol. 109, no. 2-3, pp. 211–237, 2006.
  30. R. D. C. Monteiro, C. Ortiz, and B. F. Svaiter, “A first-order block-decomposition method for solving two-easy-block structured semidefinite programs,” Mathematical Programming Computation, vol. 6, no. 2, pp. 103–150, 2014.
  31. J. H. Chen, T. B. Yang, and S. H. Zhu, “Efficient low-rank stochastic gradient descent methods for solving semidefinite programs,” in Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, pp. 122–130, 2014.
  32. G. Q. Wang and Y. Q. Bai, “A new primal-dual path-following interior-point algorithm for semidefinite optimization,” Journal of Mathematical Analysis and Applications, vol. 353, no. 1, pp. 339–349, 2009.
  33. Z. Yu, “Solving semidefinite programming problems via alternating direction methods,” Journal of Computational and Applied Mathematics, vol. 193, no. 2, pp. 437–445, 2006.
  34. M. H. Xu and T. Wu, “A class of linearized proximal alternating direction methods,” Journal of Optimization Theory and Applications, vol. 151, no. 2, pp. 321–337, 2011.
  35. P. Tseng, “Alternating projection-proximal methods for convex programming and variational inequalities,” SIAM Journal on Optimization, vol. 7, no. 4, pp. 951–965, 1997.
  36. K. C. Kiwiel, C. H. Rosa, and A. Ruszczynski, “Proximal decomposition via alternating linearization,” SIAM Journal on Optimization, vol. 9, no. 3, pp. 668–689, 1999.
  37. B. S. He, H. Yang, and S. L. Wang, “Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities,” Journal of Optimization Theory and Applications, vol. 106, no. 2, pp. 337–356, 2000.
  38. J. Barzilai and J. M. Borwein, “Two-point step size gradient methods,” IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141–148, 1988.
  39. A. Eisenblätter and M. A. Grötschel, “Frequency planning and ramifications of coloring,” Discussiones Mathematicae Graph Theory, vol. 22, no. 1, pp. 51–88, 2002.
  40. S. Burer, R. D. C. Monteiro, and Y. Zhang, “A computational study of a gradient-based log-barrier algorithm for a class of large-scale SDPs,” Mathematical Programming, vol. 95, no. 2, pp. 359–379, 2003.
  41. A. Wiegele, “Biq Mac Library—a collection of Max-Cut and quadratic 0-1 programming instances of medium size,” Tech. Rep., 2007.
  42. E. D. Dolan and J. J. Moré, “Benchmarking optimization software with performance profiles,” Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.