Mathematical Problems in Engineering

Volume 2012 (2012), Article ID 819607, 13 pages

http://dx.doi.org/10.1155/2012/819607

## A Globally Convergent Filter-Type Trust Region Method for Semidefinite Programming

Aiqun Huang^{1} and Chengxian Xu^{2}

^{1}School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, Shaanxi, China
^{2}School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China

Received 17 May 2012; Accepted 29 June 2012

Academic Editor: Soohee Han

Copyright © 2012 Aiqun Huang and Chengxian Xu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

When using interior methods for solving semidefinite programming (SDP), one needs to solve a system of linear equations at each iteration. For problems of large size, solving the system of linear equations can be very expensive. In this paper, based on a semismooth equation reformulation using Fischer's function, we propose a filter method with trust region for solving large-scale SDP problems. At each iteration we perform a number of conjugate gradient iterations, but do not need to solve a system of linear equations. Under mild assumptions, the convergence of this algorithm is established. Numerical examples are given to illustrate the convergence results obtained.

#### 1. Introduction

Semidefinite programming (SDP) is convex programming over positive semidefinite matrices. Since its early applications, SDP has been widely used in control theory and combinatorial optimization (see, e.g., [1–3]). Moreover, because many algorithms for linear optimization can be extended to general SDP problems, SDP has attracted much interest. In the past decade, many algorithms have been proposed for solving SDP, including interior-point methods (IPMs) [4–7], augmented Lagrangian methods [8–10], new Newton-type methods [11], modified barrier methods [12], and regularization approaches [13].

For small and medium sized SDP problems, IPMs are generally efficient, but for large-scale SDP problems they become very slow. To overcome this shortcoming, [9, 14] proposed inexact IPMs that use an iterative solver to compute the search direction at each iteration. More recently, [13] applied regularization approaches to solve SDP problems. All of these methods are either first-order methods based on gradients or inexact second-order methods based on approximations of the Hessian matrix [15].

In this paper, we extend the filter-trust-region method for systems of nonlinear equations and nonlinear least-squares [16] to large-scale SDP problems, exploiting the Lipschitz continuity of the mappings involved. Furthermore, the accuracy of this method is controlled by a forcing parameter. It is shown that, under mild assumptions, the algorithm is convergent.

The paper is organized as follows. Some preliminaries are introduced in Section 2. In Section 3, we propose a filter-trust-region method for solving SDP problems, and we study the convergence of this method in Section 4. In Section 5, some numerical examples are presented to demonstrate the convergence results obtained in this paper. Finally, we give some conclusions in Section 6.

In this paper, we use the following common notation for SDP problems: and denote the space of real symmetric matrices and the space of -dimensional vectors, respectively; means that is positive semidefinite (positive definite), and means that is negative semidefinite (negative definite). A superscript denotes the transpose of a matrix or vector. For , the standard scalar product on is defined by . If and , then denotes the Frobenius norm of , that is, , and denotes the 2-norm of , that is, . Let be a matrix. We denote by the vector obtained by stacking the columns of one by one, and the operator is the inverse of , that is, . Finally, denotes the identity matrix.
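The vec/mat operators and the norms described above can be illustrated concretely. A minimal NumPy sketch (the helper names `vec` and `mat` mirror the notation above; column-by-column stacking is assumed, as the text describes):

```python
import numpy as np

def vec(A):
    """Stack the columns of A into a single vector (column-major order)."""
    return A.flatten(order="F")

def mat(v, m, n):
    """Inverse of vec: reshape an m*n vector back into an m-by-n matrix."""
    return v.reshape((m, n), order="F")

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# vec stacks the columns one by one: [1, 3, 2, 4]
v = vec(A)
assert np.array_equal(v, np.array([1.0, 3.0, 2.0, 4.0]))

# mat undoes vec exactly
assert np.array_equal(mat(v, 2, 2), A)

# The Frobenius norm of A equals the 2-norm of vec(A)
assert np.isclose(np.linalg.norm(A, "fro"), np.linalg.norm(v))
```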

#### 2. Preliminaries

We consider a SDP problem of the form where , , , and are given data; is a linear map from to given by The dual of problem (2.1) is given by where is the adjoint operator of , given by Obviously, and are the primal and dual variables, respectively.
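The displayed formulas for the primal-dual pair did not survive extraction. For orientation, the standard-form pair that this setup describes is usually written as follows (this is an assumption based on the surrounding text, not a reconstruction of the original display):

```latex
% Primal SDP, standard form:
\min_{X \in \mathbb{S}^n} \; \langle C, X \rangle
\quad \text{s.t.} \quad \mathcal{A}(X) = b, \;\; X \succeq 0,
\qquad \mathcal{A}(X) = \bigl( \langle A_1, X \rangle, \ldots, \langle A_m, X \rangle \bigr)^{T}.

% Dual SDP:
\max_{y \in \mathbb{R}^m,\; S \in \mathbb{S}^n} \; b^{T} y
\quad \text{s.t.} \quad \mathcal{A}^{*}(y) + S = C, \;\; S \succeq 0,
\qquad \mathcal{A}^{*}(y) = \sum_{i=1}^{m} y_i A_i.
```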

It is easily verified that the SDP problem (2.1) is convex. When (2.1) and (2.3) both have strictly feasible points, strong duality holds; see [5, 12]. In this case, a point is optimal for SDP problems (2.1) and (2.3) if and only if , in the sense that solves SDP problems (2.1) and (2.3) if and only if solves (2.5), provided both problems have strictly feasible points.

We now introduce some lemmas which will be used in the sequel.

Lemma 2.1 (see [17]). *Let and let . Then if and only if . *

For , we define a mapping given by , known as the Fischer-Burmeister function (see [18, 19]). This function is nondifferentiable and has the following basic property.
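In the scalar case the Fischer-Burmeister function is phi(a, b) = sqrt(a^2 + b^2) - a - b, and it vanishes exactly when a >= 0, b >= 0, and a*b = 0, which is what makes it useful for encoding complementarity conditions. A small sketch of this scalar characterization (the paper works with the matrix-valued analogue on symmetric matrices):

```python
import math

def fischer_burmeister(a, b):
    """Scalar Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0."""
    return math.sqrt(a * a + b * b) - a - b

# Complementary pairs are roots ...
assert abs(fischer_burmeister(0.0, 2.0)) < 1e-12
assert abs(fischer_burmeister(3.0, 0.0)) < 1e-12

# ... while a non-complementary pair is not
assert fischer_burmeister(1.0, 1.0) < 0.0
```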

Lemma 2.2 (see [20, Lemma 6.1]). *Let be the Fischer-Burmeister function defined in (2.6). Then
*

In addition, for and , we define a mapping by which is differentiable and has the following properties.

Lemma 2.3 (see [11, Proposition 2.3]). *Let be any positive number and let be defined by (2.8). Then
*

Lemma 2.4. *Let be any positive number, and let be defined by (2.8). If , then
*

*Proof. *The proof can be obtained from Lemmas 2.2 and 2.3.
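A smoothed variant that is common in this literature replaces phi(a, b) by phi_mu(a, b) = sqrt(a^2 + b^2 + 2*mu^2) - a - b, which is differentiable for mu > 0 and recovers phi as mu -> 0. Whether this is exactly the map defined in (2.8) cannot be confirmed from the extracted text, so the following scalar sketch is illustrative only:

```python
import math

def phi(a, b):
    """Fischer-Burmeister function (nondifferentiable at the origin)."""
    return math.sqrt(a * a + b * b) - a - b

def phi_mu(a, b, mu):
    """Smoothed variant: differentiable everywhere for mu > 0."""
    return math.sqrt(a * a + b * b + 2.0 * mu * mu) - a - b

# phi_mu approaches phi as mu -> 0
assert abs(phi_mu(1.0, 1.0, 1e-8) - phi(1.0, 1.0)) < 1e-12

# Unlike phi, phi_mu is smooth at the origin; its value there is sqrt(2)*mu
assert abs(phi_mu(0.0, 0.0, 0.5) - math.sqrt(2.0) * 0.5) < 1e-12
```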

Lemma 2.5 (see [20, pages 170–171]). *For any , define the linear operator by
**
Then is strictly monotone and so has an inverse . *

Lemma 2.6 (see [21, Lemma 2]). *Let , and let be defined by (2.8). For any , we have that is Fréchet-differentiable and
**
where . *

Lemma 2.7 (see [22, Corollary 2.7]). *Let be a map from to . If is locally Lipschitzian on , then is almost everywhere Fréchet-differentiable on .*

#### 3. The Algorithm

In this section, we present a filter-trust-region method for solving SDP problems (2.1) and (2.3). First, for a parameter , we construct the function where .

According to Lemmas 2.1, 2.3 and 2.4, the following theorem is obvious.

Theorem 3.1. *Let and let be defined by (3.1). If SDP problems (2.1) and (2.3) have strictly feasible points, then
*

In what follows, we will study properties of the function . For simplicity, in the remaining sections of this paper, we denote , and .

Theorem 3.2. *Let be defined by (3.1). For any and , then is Fréchet-differentiable and
**
where and . *

*Proof. *For any , since and are linear functions and continuously differentiable, they are also locally Lipschitz continuous. Then, from Lemma 2.7, and are Fréchet-differentiable. Furthermore, is Fréchet-differentiable by Lemma 2.6. Thus, is Fréchet-differentiable and has the form (3.3). This completes the proof.

We endow the variable with the following norm: In addition, we set where We also define the function and the vector with the following norm:

Now, for any , we define the merit function by

Lemma 3.3. *For any and , if and are nonsingular, then is locally Lipschitz continuous and twice Fréchet-differentiable at every . *

*Proof. *For any , since is convex and continuously differentiable, it follows that is also locally Lipschitz continuous.

In addition, for any , from [20, pages 173–175], is twice Fréchet-differentiable. Furthermore, , , and are continuous at every when , which, together with Lemma 2.7, implies that is twice Fréchet-differentiable. The proof is completed.

Lemma 3.4. *Let and be defined by (3.1) and (3.8), respectively. For any , we have
*

*Proof. *The proof can be immediately obtained from the definition of and .

We follow the classical approach for solving , which consists in minimizing some norm of the residual. For any , we consider where . Thus, for any , we seek a minimizer of . Furthermore, if , then is also a solution of .
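The least-squares reformulation can be sketched on a toy system: minimize theta(x) = (1/2)||F(x)||^2 and observe that a zero-residual minimizer solves F(x) = 0. The residual map F below is hypothetical, chosen only to make the idea concrete, and the solver is a plain Gauss-Newton iteration rather than the paper's method:

```python
import numpy as np

def F(x):
    """Hypothetical residual map; F(x) = 0 at x = (1, 2)."""
    return np.array([x[0] - 1.0, x[0] * x[1] - 2.0])

def J(x):
    """Jacobian of F."""
    return np.array([[1.0, 0.0],
                     [x[1], x[0]]])

def theta(x):
    """Merit function: half the squared residual norm."""
    return 0.5 * np.dot(F(x), F(x))

# Gauss-Newton on theta: each step solves J^T J d = -J^T F
x = np.array([2.0, 3.0])
for _ in range(20):
    Jx, Fx = J(x), F(x)
    d = np.linalg.solve(Jx.T @ Jx, -Jx.T @ Fx)
    x = x + d

# A zero-residual minimizer of theta solves F(x) = 0
assert theta(x) < 1e-12
assert np.allclose(x, [1.0, 2.0])
```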

In order to state our method for solving (3.10), we use a filter mechanism to accept a new point. As in [16, pages 19–20], the notion of a filter is based on that of dominance.

*Definition 3.5. *For any and any , a point dominates a point if and only if

Thus, if iterate dominates iterate , the latter is of no real interest to us, since is at least as good as in each component of . All we need to do is remember the iterates that are not dominated by other iterates, using a structure called a filter.
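The dominance test underlying the filter can be written down directly. A minimal sketch with two-component filter entries (the paper's filter stores 4-tuples and includes a small acceptance margin as in Definition 3.7; both are simplified away here):

```python
def dominates(p, q):
    """p dominates q if p is at least as good in every component."""
    return all(pi <= qi for pi, qi in zip(p, q))

class Filter:
    def __init__(self):
        self.entries = []

    def acceptable(self, p):
        """A trial point is acceptable if no stored entry dominates it."""
        return not any(dominates(e, p) for e in self.entries)

    def add(self, p):
        """Insert p and remove every stored entry that p dominates."""
        self.entries = [e for e in self.entries if not dominates(p, e)]
        self.entries.append(p)

f = Filter()
f.add((1.0, 4.0))
f.add((3.0, 2.0))                     # incomparable with (1, 4): both kept
assert not f.acceptable((2.0, 5.0))   # dominated by (1, 4) -> rejected
assert f.acceptable((0.5, 3.0))       # dominated by nothing -> acceptable
f.add((0.5, 3.0))
assert (1.0, 4.0) not in f.entries    # removed: dominated by the new entry
```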

*Definition 3.6. *Let be a set of 4-tuples of the following form:
We define as a filter if, whenever and belong to and , then

*Definition 3.7. *A new point is acceptable for the filter if and only if
where is a small constant.

Now, we formally present our trust region algorithm by using filter techniques.

*Algorithm 3.8. *The Filter-Trust-Region Algorithm*Step *0. Choose an initial point , , and . The constants , , , , , , and are also given and satisfy

Compute , set , and put only in the filter .*Step *1*.* If , stop.*Step *2*.* Compute by solving the following problem:
where

If , stop.

Otherwise, compute the trial point .*Step *3*.* Compute and define the following ratio:
*Step *4*.* If , set .

If but satisfies (3.14), then add to the filter and remove all points from dominated by . At the same time, set .

Else, set .*Step *5*.* Update by choosing
and update trust-region radius by choosing
*Step *6*.* Set and go to Step 1.

*Remark 3.9. *Algorithm 3.8 can be started from any . In fact, to greatly increase the convergence speed, we always choose . In addition, in this algorithm, we first fix , then search for to update . Finally, we update and repeat.
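The overall flow of Steps 0-6 can be sketched as a generic trust-region loop on a smooth toy objective. The filter bookkeeping is omitted, and the Cauchy (steepest-descent) step, the acceptance threshold eta = 0.1, and the radius factors 0.5 and 2 are illustrative choices rather than the paper's constants:

```python
import numpy as np

def trust_region(f, grad, x, delta=1.0, eta=0.1, tol=1e-6, max_iter=1000):
    """Generic trust-region loop with a Cauchy (steepest-descent) step."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:            # Step 1: stop near stationarity
            break
        d = -delta * g / np.linalg.norm(g)     # Step 2: trial step on the boundary
        pred = -np.dot(g, d)                   # predicted decrease (linear model)
        ared = f(x) - f(x + d)                 # Step 3: actual decrease
        rho = ared / pred                      # ratio of actual to predicted
        if rho >= eta:                         # Step 4: accept the trial point
            x = x + d
            delta *= 2.0                       # Step 5: success -> enlarge radius
        else:
            delta *= 0.5                       # failure -> shrink radius
    return x

# Toy test problem: strictly convex quadratic with minimizer (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])
x_star = trust_region(f, grad, np.array([5.0, 5.0]))
assert np.allclose(x_star, [1.0, -2.0], atol=1e-5)
```

Replacing the Cauchy step by the subproblem of Step 2 and adding the filter acceptance test of Step 4 would recover the structure of Algorithm 3.8.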

The following lemma is a generalized case of Proposition 3.1 in [23].

Lemma 3.10. *Algorithm 3.8 is well defined, that is, the inner iteration (Step 2) terminates finitely. *

For the purpose of our analysis, in the sequence of points generated by Algorithm 3.8, we denote , and . It is clear that .

*Remark 3.11. *Lemma 3.3 implies that there exists a constant such that
for all and . The second of the above inequalities ensures that the constant can also be chosen such that

#### 4. Convergence Analysis

In this section, we present a proof of global convergence of Algorithm 3.8. First, we make the following assumptions.

(S1) , where is a solution of (3.16). (S2) The iterates generated by Algorithm 3.8 remain in a closed, bounded domain. Some lemmas to be used in the subsequent analysis are now presented.

Lemma 4.1 (see [24]). *Let assumptions (S1) and (S2) hold. If there exists such that for all , then there exists such that . *

Lemma 4.2. *Let be the infinite sequence generated by Algorithm 3.8. Then
*

*Proof. *Since , from Steps 4 and 5 of Algorithm 3.8, and . Therefore, . Moreover,
for , which completes the proof.

Theorem 4.3. *Suppose that and that assumptions (S1) and (S2) hold. Then there exists such that
*

*Proof. *Suppose that for all . Then there exists such that
From Lemma 4.1, there exists such that

On the other hand, , let be the last successful iteration, then are unsuccessful iterations. From Steps 4 and 5 of Algorithm 3.8, , for sufficiently large , we have
which contradicts (4.5). The proof is completed.

We now consider what happens if the set is infinite in the course of Algorithm 3.8.

Theorem 4.4. *Suppose that and that assumptions (S1) and (S2) hold. For any and , if and are nonsingular, then each accumulation point of the infinite sequence generated by Algorithm 3.8 is a stationary point of . *

*Proof. *The proof is by contradiction. Suppose that is an infinite sequence generated by Algorithm 3.8 and that some accumulation point of is not a stationary point of . Suppose furthermore that and are the corresponding accumulation points of and , respectively. Since is not a stationary point of , then
and there exists such that
For some , let be a neighborhood of . From (4.8), there exists such that
where .

For , because
we obtain that

From (4.10), we know that is monotone decreasing and bounded below, which implies that for . Thus,
As a result, we have
By the update rule of , there exists an infinite subsequence , and we have that
which contradicts . This completes the proof.

In what follows, we investigate the case where the number of iterations added to the filter in the course of Algorithm 3.8 is infinite.

Theorem 4.5. *Suppose that but , and that SDP problems (2.1) and (2.3) have strictly feasible points. Suppose furthermore that assumptions (S1) and (S2) hold. For any and , if and are nonsingular, then
*

*Proof. *First let be the sequence generated by Algorithm 3.8. From Lemma 4.2, we have
which, together with assumption (S2) and [16, Lemma 3.1], yields the desired result.

#### 5. Numerical Experiments

In this section, we describe the results of some numerical experiments with Algorithm 3.8 on the random sparse SDP problems considered in [13]. All programs are written in Matlab, and all computations were carried out under Matlab 7.1 on a Pentium 4.

In addition, in the computations, the following values are assigned to the parameters in the Algorithm: , , , , , , and . We also use the stopping criterion .

In Table 1, the first two columns give the size of the matrix and the dimension of the variable . In the middle columns, “-time” denotes the computing time (in seconds), “-it.” denotes the number of iterations, and “-obj.” denotes the value of when our stopping criterion is satisfied. Some numerical results of [13] are shown in the last two columns.

As shown in Table 1, all test problems were solved in just a few iterations compared with [13]. Furthermore, the algorithm is less sensitive to the size of the SDP problems. Comparatively speaking, our method is attractive and suitable for solving large-scale SDP problems.

#### 6. Conclusions

In this paper, we have proposed a filter-trust-region method for SDP problems. Such a method offers a trade-off between the accuracy of solving the subproblems and the amount of work for solving them. Furthermore, numerical results show that our algorithm is attractive for large-scale SDP problems.

#### Acknowledgments

The authors would like to thank Professor Florian Jarre for his advice and guidance, Thomas David and Li Luo for their generous help, and the referees for their helpful comments. This work is supported by the National Natural Science Foundation of China (Grant no. 10971162).

#### References

1. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, *Linear Matrix Inequalities in System and Control Theory*, vol. 15 of *SIAM Studies in Applied Mathematics*, SIAM, Philadelphia, Pa, USA, 1994.
2. M. X. Goemans, “Semidefinite programming in combinatorial optimization,” *Mathematical Programming*, vol. 79, no. 1–3, pp. 143–161, 1997.
3. H. Wolkowicz, R. Saigal, and L. Vandenberghe, *Handbook of Semidefinite Programming*, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000.
4. Y. Nesterov and A. Nemirovskii, *Interior-Point Polynomial Algorithms in Convex Programming*, SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
5. F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton, “A new primal-dual interior-point method for semidefinite programming,” in *Proceedings of the 5th SIAM Conference on Applied Linear Algebra*, pp. 113–117, SIAM, Philadelphia, Pa, USA, 1994.
6. R. D. C. Monteiro, “Primal-dual path-following algorithms for semidefinite programming,” *SIAM Journal on Optimization*, vol. 7, no. 3, pp. 663–678, 1997.
7. C. Helmberg, F. Rendl, R. J. Vanderbei, and H. Wolkowicz, “An interior-point method for semidefinite programming,” *SIAM Journal on Optimization*, vol. 6, no. 2, pp. 342–361, 1996.
8. X.-Y. Zhao, D. Sun, and K.-C. Toh, “A Newton-CG augmented Lagrangian method for semidefinite programming,” *SIAM Journal on Optimization*, vol. 20, no. 4, pp. 1737–1765, 2010.
9. K.-C. Toh, “Solving large scale semidefinite programs via an iterative solver on the augmented systems,” *SIAM Journal on Optimization*, vol. 14, no. 3, pp. 670–698, 2004.
10. F. Jarre and F. Rendl, “An augmented primal-dual method for linear conic programs,” *SIAM Journal on Optimization*, vol. 19, no. 2, pp. 808–823, 2008.
11. C. Kanzow and C. Nagel, “Semidefinite programs: new search directions, smoothing-type methods, and numerical results,” *SIAM Journal on Optimization*, vol. 13, no. 1, pp. 1–23, 2002.
12. M. Kočvara and M. Stingl, “On the solution of large-scale SDP problems by the modified barrier method using iterative solvers,” *Mathematical Programming*, vol. 109, no. 2-3, pp. 413–444, 2007.
13. J. Malick, J. Povh, F. Rendl, and A. Wiegele, “Regularization methods for semidefinite programming,” *SIAM Journal on Optimization*, vol. 20, no. 1, pp. 336–356, 2009.
14. K.-C. Toh and M. Kojima, “Solving some large scale semidefinite programs via the conjugate residual method,” *SIAM Journal on Optimization*, vol. 12, no. 3, pp. 669–691, 2002.
15. F. Leibfritz and E. M. E. Mostafa, “An interior point constrained trust region method for a special class of nonlinear semidefinite programming problems,” *SIAM Journal on Optimization*, vol. 12, no. 4, pp. 1048–1074, 2002.
16. N. I. M. Gould, S. Leyffer, and P. L. Toint, “A multidimensional filter algorithm for nonlinear equations and nonlinear least-squares,” *SIAM Journal on Optimization*, vol. 15, no. 1, pp. 17–38, 2004.
17. C. Helmberg, *Semidefinite Programming for Combinatorial Optimization*, Konrad-Zuse-Zentrum für Informationstechnik, Berlin, Germany, 2000.
18. A. Fischer, “A special Newton-type optimization method,” *Optimization*, vol. 24, no. 3-4, pp. 269–284, 1992.
19. A. Fischer, “A Newton-type method for positive-semidefinite linear complementarity problems,” *Journal of Optimization Theory and Applications*, vol. 86, no. 3, pp. 585–608, 1995.
20. P. Tseng, “Merit functions for semi-definite complementarity problems,” *Mathematical Programming*, vol. 83, no. 2, pp. 159–185, 1998.
21. X. Chen and P. Tseng, “Non-interior continuation methods for solving semidefinite complementarity problems,” *Mathematical Programming*, vol. 95, no. 3, pp. 431–474, 2003.
22. D. Sun and J. Sun, “Semismooth matrix-valued functions,” *Mathematics of Operations Research*, vol. 27, no. 1, pp. 150–169, 2002.
23. H. Jiang, M. Fukushima, L. Qi, and D. Sun, “A trust region method for solving generalized complementarity problems,” *SIAM Journal on Optimization*, vol. 8, no. 1, pp. 140–157, 1998.
24. W. Sun and Y. Yuan, *Optimization Theory and Methods: Nonlinear Programming*, Springer, New York, NY, USA, 2006.