*Journal of Applied Mathematics*, Volume 2012 (2012), Article ID 145083, 11 pages. http://dx.doi.org/10.1155/2012/145083
Research Article

## An Augmented Lagrangian Algorithm for Solving Semiinfinite Programming

Qian Liu and Changyu Wang

1Department of Mathematics, Shandong Normal University, Jinan, China
2Institute for Operations Research, Qufu Normal University, Qufu, China

Received 16 July 2012; Accepted 12 September 2012

Academic Editor: Jian-Wen Peng

Copyright © 2012 Qian Liu and Changyu Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We present a smooth augmented Lagrangian algorithm for semi-infinite programming (SIP). For this algorithm, we establish a perturbation theorem under mild conditions. As a corollary of the perturbation theorem, we obtain a global convergence result: any accumulation point of the sequence generated by the algorithm is a solution of SIP. This global convergence result requires no boundedness or coercivity condition. Another corollary of the perturbation theorem shows that the perturbation function is lower semi-continuous at the origin if and only if the algorithm forces the sequence of objective function values to converge to the optimal value of SIP. Finally, numerical results are given.

#### 1. Introduction

We consider the semi-infinite programming problem (SIP)
$$\min\ f(x) \quad \text{s.t.}\quad g(x,y)\le 0,\ \ \forall y\in Y, \tag{1.1}$$
where $x\in\mathbb{R}^n$, the functions $f$ and $g$ are continuously differentiable, and $Y$ is a nonempty bounded and closed domain. In this paper, we assume that $f$ is bounded below on $\mathbb{R}^n$. This assumption is very mild, because the objective function $f(x)$ can be replaced by $e^{f(x)}$ if the assumption is not satisfied.

Semi-infinite programming has wide applications in engineering technology, optimal control, eigenvalue computation, and statistical design. Many methods have been proposed to solve semi-infinite programming (see [1–4]). As is well known, the main difficulty in solving SIP is that it has infinitely many constraints. By transforming the infinite constraints into an integral function, SIP (1.1) becomes equivalent to a nonlinear programming problem with finitely many constraints.

For any given $x\in\mathbb{R}^n$ and $y\in Y$, let $g_+(x,y)=\max\{g(x,y),0\}$. Define $G:\mathbb{R}^n\to\mathbb{R}$ by
$$G(x)=\int_Y g_+(x,y)\,d\mu(y),$$
where $\mu$ is a given probability measure on $Y$, that is, $\mu(Y)=1$. Thus SIP (1.1) can be reformulated as the following nonlinear programming problem (NP) with one equality constraint:
$$\min\ f(x) \quad \text{s.t.}\quad G(x)=0. \tag{1.5}$$
Then the nonlinear programming problem (1.5) has the same optimal solutions and optimal value as SIP (1.1).
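As an illustration of this reformulation (a sketch, not the authors' code, assuming the common choice $G(x)=\int_Y \max\{g(x,y),0\}\,d\mu(y)$ with $\mu$ uniform on $Y$), the integral constraint function can be evaluated numerically:

```python
import numpy as np

def G(x, g, grid):
    # Approximate G(x) = ∫_Y max{g(x, y), 0} dμ(y) for a uniform probability
    # measure μ on Y, by averaging over an equispaced grid on Y = [0, 1].
    return np.maximum(g(x, grid), 0.0).mean()

# Toy constraint g(x, y) = y - x on Y = [0, 1]: x is feasible iff x >= 1.
g = lambda x, y: y - x
grid = np.linspace(0.0, 1.0, 1001)
print(G(1.0, g, grid))  # 0.0: x = 1 satisfies all constraints
print(G(0.0, g, grid))  # ≈ 0.5 = ∫_0^1 y dy: x = 0 is infeasible
```

Here $G(x)=0$ exactly characterizes feasibility, which is what makes the single equality constraint in (1.5) equivalent to the infinitely many constraints of (1.1).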

For nonlinear programming with finitely many equality constraints, Hestenes [5] and Powell [6] independently proposed an augmented Lagrangian function by incorporating a quadratic penalty term into the conventional Lagrangian function. This augmented Lagrangian function avoids the shortcoming that the conventional Lagrangian function is suitable only for convex problems, so it can be applied to nonconvex optimization problems. Later, the augmented Lagrangian function was extended to inequality-constrained optimization problems and thoroughly investigated by Rockafellar [7]. Recently, Yang and Teo [8] and Rückmann and Shapiro [9] introduced augmented Lagrangian functions for SIP (1.1). In [9], necessary and sufficient conditions for the existence of the corresponding augmented Lagrange multipliers were presented. In [8], a nonlinear Lagrangian method was proposed, and it was established that the sequence of optimal values of the nonlinear penalty problems converges to that of SIP (1.1), under the assumption that the level sets of the objective function are bounded. In this paper, using the equivalence of semi-infinite programming (1.1) and nonlinear programming (1.5), we present an augmented Lagrangian algorithm for SIP (1.1) without any boundedness condition.

We note that although the constraints of NP (1.5) are finite in number, the constraint function is nonsmooth. Therefore, existing gradient-based optimization methods cannot be used to solve NP (1.5) directly. To overcome this inconvenience, we have to smooth the constraint function. For SIP (1.1), [10–13] presented semismooth Newton methods and smoothing Newton methods and proved that each accumulation point is a generalized stationary point of SIP (1.1). However, at each iteration of these methods a Hessian matrix needs to be computed, which is very expensive when the problem is large. Based on an exact penalty function approximated by a family of smoothing functions, a smoothed-penalty algorithm for solving NP (1.5) was proposed in [14], where it was proved that if the constraint set is bounded or the objective function is coercive, the algorithm generates a sequence whose accumulation points are solutions of SIP (1.1).

In this paper, we present a smooth augmented Lagrangian algorithm for SIP (1.1) by smoothing the classical augmented Lagrangian function [7]. In this algorithm, we do not need an exact global optimal solution of the unconstrained subproblem at each iteration; it suffices to find an inexact solution, which is not difficult to obtain whenever evaluating the integral function is not very expensive. For this algorithm, we establish a perturbation theorem under mild conditions. As a corollary of the perturbation theorem, we obtain a global convergence result: any accumulation point of the sequence generated by the algorithm is a solution of SIP (1.1). We obtain this global convergence result without any boundedness or coercivity condition. It is noteworthy that boundedness of the multiplier sequence is a standing assumption in much of the literature on Lagrangian methods (see [15–17]); in our algorithm, however, the multiplier sequence may be unbounded. Another corollary of the perturbation theorem shows that the perturbation function is lower semi-continuous at the origin if and only if the algorithm forces the sequence of objective function values to converge to the optimal value of SIP (1.1).

The paper is organized as follows. In the next section, we present a smooth augmented Lagrangian algorithm. In Section 3, we establish the perturbation theorem of the algorithm, from which we obtain a global convergence property and a necessary and sufficient condition under which the algorithm forces the sequence of objective function values to converge to the optimal value of SIP (1.1). Finally, we give some numerical results in Section 4.

#### 2. Smooth Augmented Lagrangian Algorithm

Before introducing the algorithm, we need some definitions and notation. For $\varepsilon\ge 0$, we define the relaxed feasible set of SIP (1.1) as
$$X_\varepsilon=\{x\in\mathbb{R}^n : G(x)\le\varepsilon\}.$$
Then $X_0$ is the feasible set of SIP (1.1). Let $X^{*}$ be the set of optimal solutions of SIP (1.1). We assume in this paper that $X^{*}\neq\emptyset$.

The perturbation function is defined as follows:
$$\beta(\varepsilon)=\inf\{f(x) : G(x)\le\varepsilon\}.$$
Thus the optimal value of SIP (1.1) is $\beta(0)$. It is easy to show that $\beta$ is upper semi-continuous at the point $\varepsilon=0$.

For problem (1.5), the corresponding classical augmented Lagrangian function [7] is
$$L_r(x,\lambda)=f(x)+\lambda G(x)+\frac{r}{2}\,G(x)^2,$$
where $\lambda$ is the Lagrangian multiplier and $r>0$ is the penalty parameter. On the basis of it, we introduce a class of smooth augmented Lagrangian functions
$$\widetilde{L}(x,\lambda,r,\varepsilon)=f(x)+\lambda\,\widetilde{G}_\varepsilon(x)+\frac{r}{2}\,\widetilde{G}_\varepsilon(x)^2,\qquad \widetilde{G}_\varepsilon(x)=\int_Y \varepsilon\, p\!\left(\frac{g(x,y)}{\varepsilon}\right)d\mu(y).$$
Here $\varepsilon>0$ is the approximation parameter.

In the following, we suppose that the continuously differentiable function $p$ satisfies:
(a) $p$ is nonnegative and monotone increasing;
(b) for any $t\in\mathbb{R}$, $p(t)\ge\max\{t,0\}$;
(c) $\lim_{t\to-\infty}p(t)=0$ and $\lim_{t\to+\infty}\bigl(p(t)-t\bigr)=0$.

It is easy to check that many continuously differentiable functions satisfy conditions (a), (b), and (c), for example,
$$p(t)=\ln\bigl(1+e^{t}\bigr),\qquad p(t)=\frac{t+\sqrt{t^{2}+4}}{2}.$$
Using conditions (a)–(c), for any $t\in\mathbb{R}$ and $\varepsilon>0$, we have
$$0\le \varepsilon\, p\!\left(\frac{t}{\varepsilon}\right)-\max\{t,0\}\le c\,\varepsilon,\qquad c=\sup_{s\in\mathbb{R}}\bigl(p(s)-\max\{s,0\}\bigr)<\infty.$$
From the above inequality, under conditions (a)–(c), the smooth function approximates the classical augmented Lagrangian function as $\varepsilon$ approaches zero, that is, $\lim_{\varepsilon\to 0^{+}}\widetilde{L}(x,\lambda,r,\varepsilon)=L_r(x,\lambda)$.
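As a quick numerical check (assuming $p(t)=\ln(1+e^{t})$, one standard smoothing function of this kind), $\varepsilon\,p(t/\varepsilon)$ converges uniformly to $\max\{t,0\}$, with the gap shrinking linearly in $\varepsilon$:

```python
import numpy as np

def p(t):
    # p(t) = log(1 + e^t), computed stably for large |t|
    return np.logaddexp(0.0, t)

def smooth_plus(t, eps):
    # Smooth approximation of max{t, 0}: eps * p(t / eps)
    return eps * p(t / eps)

t = np.linspace(-2.0, 2.0, 401)
for eps in (1.0, 0.1, 0.01):
    err = np.max(np.abs(smooth_plus(t, eps) - np.maximum(t, 0.0)))
    print(eps, err)  # maximal gap is eps * log 2, attained at t = 0
```

For this particular $p$, the approximation error is exactly $\varepsilon\ln 2$ at $t=0$ and decays to zero away from the kink, consistent with the uniform bound above.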

Based on the smooth augmented Lagrangian function , we present the following smooth augmented Lagrangian algorithm.

Algorithm 2.1. *Initialization*. Choose the starting point, the initial multiplier $\lambda_1$, the initial penalty parameter $r_1$, and the initial approximation parameter $\varepsilon_1$, and set $k:=1$.
Step  1. Compute a global optimal solution of the smooth augmented Lagrangian subproblem if one is available. Otherwise, seek an inexact global optimal solution satisfying the inexactness condition (2.10).
Step  2. Update the multiplier $\lambda_{k+1}$ and the parameters $r_{k+1}$ and $\varepsilon_{k+1}$.
Step  3. Set $k:=k+1$ and return to Step 1.

Since the objective function is bounded below and the smoothed constraint function is nonnegative, an inexact solution satisfying (2.10) always exists. Thus Algorithm 2.1 is well defined.
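To make the overall scheme concrete, here is a minimal Python sketch of a smooth augmented Lagrangian loop of this kind on a toy SIP ($\min x^2$ subject to $y-x\le 0$ for all $y\in[0,1]$, whose solution is $x^{*}=1$). The grid-search inner solver, the smoothing function $p(t)=\ln(1+e^{t})$, and the update rules $\lambda\leftarrow\lambda+r\widetilde{G}$, $r\leftarrow 4r$, $\varepsilon\leftarrow\varepsilon/2$ are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

# Toy SIP: min f(x) = x^2  s.t.  g(x, y) = y - x <= 0 for all y in [0, 1].
# Its solution is x* = 1 with optimal value f(x*) = 1.
f = lambda x: x ** 2
g = lambda x, y: y - x

ygrid = np.linspace(0.0, 1.0, 201)            # uniform measure on Y = [0, 1]
xgrid = np.linspace(0.0, 2.0, 2001)[:, None]  # candidate x values (column)

def G_smooth(x, eps):
    # Smoothed integral constraint ∫_Y eps * p(g(x, y) / eps) dμ(y),
    # with p(t) = log(1 + e^t) evaluated stably via logaddexp.
    return (eps * np.logaddexp(0.0, g(x, ygrid) / eps)).mean(axis=-1)

lam, r, eps = 0.0, 1.0, 0.5
for k in range(20):
    # "Inexact global minimization" of the smooth augmented Lagrangian,
    # done crudely here by searching over the grid of candidate x values.
    Gx = G_smooth(xgrid, eps)
    L = f(xgrid[:, 0]) + lam * Gx + 0.5 * r * Gx ** 2
    xk = xgrid[np.argmin(L), 0]
    # Illustrative multiplier and parameter updates (not the paper's rules).
    lam += r * G_smooth(xk, eps)
    r *= 4.0
    eps *= 0.5

print(xk, f(xk))  # both approach 1.0 as r grows and eps shrinks
```

Despite the crude inner solver, the iterates are driven to the boundary $x=1$ of the feasible set, mirroring the global convergence behavior stated for Algorithm 2.1.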

#### 3. Convergence Properties

In this section, using a perturbation theorem for Algorithm 2.1, we obtain a global convergence property and a necessary and sufficient condition under which Algorithm 2.1 forces the sequence of objective function values to converge to the optimal value of SIP (1.1). To prove the perturbation theorem, we first give the following two lemmas.

Let , , .

Lemma 3.1. Suppose that the point sequence is generated by Algorithm 2.1. Then for any , there exists a positive integer such that , for all .

Proof.
Case  1. When , tends to a finite number. From Algorithm 2.1, there exists a positive integer such that for all . Notice that , so for any , there exists a positive integer such that for all . Therefore, when , we have .
Case  2. When , . Suppose that the conclusion does not hold. Then for , there exists an infinite subsequence such that for all , that is, Since , for the above , there exists a positive integer such that for all . Then, using (2.10) in Algorithm 2.1, we have Therefore, by (3.1), (3.2), and the fact that satisfies (a)-(b), for any , , we derive that Noting that is bounded below and , we obtain that that is, . On the other hand, since , we can choose ; by the choice of , , in Algorithm 2.1 and the properties of , we obtain that This indicates that has an upper bound, which contradicts .

By using Lemma 3.1, we have the following Lemma 3.2.

Lemma 3.2. Suppose that the point sequence is generated by Algorithm 2.1. Then for every accumulation point of , one has .

Theorem 3.3. Suppose that the sequence is generated by Algorithm 2.1, then(i);(ii);(iii).

Proof. Since is monotonically decreasing with respect to and is bounded below, we know that exists and is finite. By Algorithm 2.1, we have . Then Taking and , by the definition of the infimum, there exists such that Since , that is, .
On the other hand, by Lemma 3.1, for any , when is sufficiently large, we have Since satisfies conditions (a) and (c), we obtain Therefore, there exists such that for any . As stated previously, by the choice of , , , and in Algorithm 2.1, it follows from (3.7), (3.8), and (3.10) that, for any , From the above inequalities and (3.6), noticing that , for any , we have Then . So conclusions (i)–(iii) hold.

Now, we prove the global convergence of Algorithm 2.1.

Corollary 3.4. Suppose that the point sequence is generated by Algorithm 2.1. Then every accumulation point of is an optimal solution of problem (1.1).

Proof. Let be an accumulation point of ; from Lemma 3.2, we have By conclusion (i) of Theorem 3.3 and (3.13), we obtain Then we get , by (3.14) and the upper semi-continuity of at the point .

By using Theorem 3.3, we have the following Corollary 3.5.

Corollary 3.5. if and only if is lower semi-continuous at the point .

#### 4. Numerical Results

To give some insight into the behavior of the algorithm presented in this paper, we implemented it in Matlab 7.0.4; runs were made on an AMD Athlon(tm) Dual Core Processor 4800+ (2.50 GHz CPU, 1.87 GB memory). Tables 1 and 2 show the computational results for the corresponding problems with the following items: : number of iterations; : starting point; : smoothing function; : the final iterate; : the final Lagrangian multiplier; : the value of the objective function at the final iterate.

Table 1: Numerical results of Example 4.1.
Table 2: Numerical results of Example 4.2.

The parameters used in Algorithm 2.1 are specified as follows:

Example 4.1 (see [18]). Consider the following: We choose the starting point . This example has the optimal solution .

Example 4.2 (see [18]). Consider the following: for , 6, and . We choose zero vectors as the starting points.

Throughout the computational experiments, we use a trust region method to solve the unconstrained optimization subproblem at each step. For the corresponding trust region subproblem, we directly use the trust function in the Matlab toolbox. The test results for Example 4.1 are summarized in Table 1. We test three cases, with , , and used as the smoothing approximation functions; denotes the number of iterations, denotes the approximate Lagrangian multiplier at the final iteration, and and are the approximate solution and the objective function value at the final iteration. For Example 4.2, we report the results for , , and in Table 2. The numerical results demonstrate that the augmented Lagrangian algorithm established in this paper is a practical and effective method for solving semi-infinite programming problems.

#### Acknowledgments

This work was supported by National Natural Science Foundation under Grants 10971118, 10901096, and 11271226, the Scientific Research Fund for the Excellent Middle-Aged and Youth Scientists of Shandong Province under Grant BS2012SF027, and the Natural Science Foundation of Shandong Province under Grant ZR2009AL019.

#### References

1. B. Bhattacharjee, P. Lemonidis, W. H. Green Jr., and P. I. Barton, “Global solution of semi-infinite programs,” Mathematical Programming, vol. 103, no. 2, pp. 283–307, 2005.
2. R. Hettich and K. O. Kortanek, “Semi-infinite programming: theory, methods, and applications,” SIAM Review, vol. 35, no. 3, pp. 380–429, 1993.
3. G. Still, “Discretization in semi-infinite programming: the rate of convergence,” Mathematical Programming, vol. 91, no. 1, pp. 53–69, 2001.
4. Y. Tanaka, M. Fukushima, and T. Ibaraki, “A globally convergent SQP method for semi-infinite nonlinear optimization,” Journal of Computational and Applied Mathematics, vol. 23, no. 2, pp. 141–153, 1988.
5. M. R. Hestenes, “Multiplier and gradient methods,” Journal of Optimization Theory and Applications, vol. 4, pp. 303–320, 1969.
6. M. J. D. Powell, “A method for nonlinear constraints in minimization problems,” in Optimization, R. Fletcher, Ed., pp. 283–298, Academic Press, London, UK, 1969.
7. R. T. Rockafellar, “Augmented Lagrange multiplier functions and duality in nonconvex programming,” SIAM Journal on Control and Optimization, vol. 12, pp. 268–285, 1974.
8. X. Q. Yang and K. L. Teo, “Nonlinear Lagrangian functions and applications to semi-infinite programs,” Annals of Operations Research, vol. 103, pp. 235–250, 2001.
9. J.-J. Rückmann and A. Shapiro, “Augmented Lagrangians in semi-infinite programming,” Mathematical Programming, vol. 116, no. 1-2, pp. 499–512, 2009.
10. D.-H. Li, L. Qi, J. Tam, and S.-Y. Wu, “A smoothing Newton method for semi-infinite programming,” Journal of Global Optimization, vol. 30, no. 2-3, pp. 169–194, 2004.
11. L. Qi, C. Ling, X. Tong, and G. Zhou, “A smoothing projected Newton-type algorithm for semi-infinite programming,” Computational Optimization and Applications, vol. 42, no. 1, pp. 1–30, 2009.
12. L. Qi, A. Shapiro, and C. Ling, “Differentiability and semismoothness properties of integral functions and their applications,” Mathematical Programming, vol. 102, no. 2, pp. 223–248, 2005.
13. L. Qi, S.-Y. Wu, and G. Zhou, “Semismooth Newton methods for solving semi-infinite programming problems,” Journal of Global Optimization, vol. 27, no. 2-3, pp. 215–232, 2003.
14. M. Gugat and M. Herty, “The smoothed-penalty algorithm for state constrained optimal control problems for partial differential equations,” Optimization Methods & Software, vol. 25, no. 4–6, pp. 573–599, 2010.
15. E. G. Birgin, R. A. Castillo, and J. M. Martínez, “Numerical comparison of augmented Lagrangian algorithms for nonconvex problems,” Computational Optimization and Applications, vol. 31, no. 1, pp. 31–55, 2005.
16. A. R. Conn, N. I. M. Gould, and P. L. Toint, “A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds,” SIAM Journal on Numerical Analysis, vol. 28, no. 2, pp. 545–572, 1991.
17. P. Tseng and D. P. Bertsekas, “On the convergence of the exponential multiplier method for convex programming,” Mathematical Programming, vol. 60, no. 1, pp. 1–19, 1993.
18. A. R. Conn and N. I. M. Gould, “An exact penalty function for semi-infinite programming,” Mathematical Programming, vol. 37, no. 1, pp. 19–40, 1987.