Abstract

We present a smooth augmented Lagrangian algorithm for semi-infinite programming (SIP). For this algorithm, we establish a perturbation theorem under mild conditions. As a corollary of the perturbation theorem, we obtain a global convergence result: any accumulation point of the sequence generated by the algorithm is a solution of SIP. We obtain this global convergence result without any boundedness or coercivity condition. Another corollary of the perturbation theorem shows that the perturbation function is lower semi-continuous at zero if and only if the algorithm forces the sequence of objective function values to converge to the optimal value of SIP. Finally, numerical results are given.

1. Introduction

We consider the semi-infinite programming problem (SIP)
$$\min_{x\in\mathbb{R}^n} f(x) \quad \text{s.t.} \quad g(x,v)\le 0 \quad \forall v\in V, \tag{1.1}$$
where $x\in\mathbb{R}^n$, the functions $f:\mathbb{R}^n\to\mathbb{R}$ and $g:\mathbb{R}^n\times V\to\mathbb{R}$ are continuously differentiable, and $V\subset\mathbb{R}^m$ is a nonempty bounded and closed domain. In this paper, we assume that $f(x)\ge 0$ for all $x\in\mathbb{R}^n$. This assumption is very mild, because the objective function can be replaced by $e^{f(x)}$ if it is not satisfied.
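To fix ideas, here is a minimal concrete instance of the form (1.1); it is our illustration, not an example from the paper:

```latex
% A toy SIP instance (illustrative only):
\min_{x\in\mathbb{R}} \;(x-2)^2
\quad\text{s.t.}\quad g(x,v)=vx-1\le 0 \quad \forall\, v\in V=[0,1].
% The feasible set is \{x : x\le 1\}; the unique solution is x^*=1,
% and among the infinitely many constraints only the one indexed
% by v=1 is active at x^*.
```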

Semi-infinite programming has wide applications in, for example, engineering design, optimal control, eigenvalue computation, and statistical design. Many methods have been proposed to solve semi-infinite programming (see [1–4]). As is well known, the main difficulty in solving SIP is that it has infinitely many constraints. By transforming the infinite constraints into an integral function, SIP (1.1) becomes equivalent to a nonlinear programming problem with finitely many constraints.

For any given $x\in\mathbb{R}^n$ and $v\in V$, let
$$g_+(x,v)=\max\{g(x,v),0\}.$$
Define $G:\mathbb{R}^n\to\mathbb{R}$ by
$$G(x)=\int_V g_+(x,v)\,d\mu(v),$$
where $\mu$ is a given probability measure on $V$, that is, $\mu(V)=1$. Thus SIP (1.1) can be reformulated as the following nonlinear programming problem (NP) with one equality constraint:
$$\min f(x) \quad \text{s.t.} \quad G(x)=0. \tag{1.5}$$
Then the nonlinear programming problem (1.5) has the same optimal solutions and optimal value as SIP (1.1).
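To make the reformulation concrete, the following sketch approximates $G$ by midpoint quadrature for a toy constraint on $V=[0,1]$ with $\mu$ the uniform probability measure; the function `g_toy`, the grid size, and the two-dimensional $x$ are our illustrative assumptions, not data from the paper.

```python
import numpy as np

def G(x, g, V=(0.0, 1.0), n_nodes=200):
    r"""Approximate G(x) = \int_V max(g(x, v), 0) d\mu(v), \mu uniform on V.

    Midpoint quadrature; n_nodes is an arbitrary accuracy choice."""
    a, b = V
    v = a + (b - a) * (np.arange(n_nodes) + 0.5) / n_nodes
    return np.mean(np.maximum(g(x, v), 0.0))  # mean <-> probability measure

# Toy constraint: g(x, v) = x1 + x2*v - 1 must be <= 0 for all v in [0, 1].
g_toy = lambda x, v: x[0] + x[1] * v - 1.0
print(G(np.array([0.5, 0.2]), g_toy))   # feasible point: G = 0
print(G(np.array([0.9, 0.5]), g_toy))   # infeasible point: G > 0
```

Since $g_+\ge 0$, $G(x)=0$ precisely when $g(x,v)\le 0$ for ($\mu$-almost) every $v\in V$, which is why (1.5) reproduces the feasible set of (1.1).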

For nonlinear programming with finitely many equality constraints, Hestenes [5] and Powell [6] independently proposed an augmented Lagrangian function obtained by incorporating a quadratic penalty term into the conventional Lagrangian function. The augmented Lagrangian function avoids the shortcoming that the conventional Lagrangian function is suitable only for convex problems, so it can be applied to nonconvex optimization problems. Later, the augmented Lagrangian function was extended to inequality-constrained optimization problems and thoroughly investigated by Rockafellar [7]. Recently, Yang and Teo [8] and Rückmann and Shapiro [9] introduced augmented Lagrangian functions for SIP (1.1). In [9], necessary and sufficient conditions for the existence of corresponding augmented Lagrange multipliers were presented. In [8], a nonlinear Lagrangian method was proposed, and it was established that the sequence of optimal values of the nonlinear penalty problems converges to that of SIP (1.1), under the assumption that the level sets of the objective function are bounded. In this paper, using the equivalence of semi-infinite programming (1.1) and nonlinear programming (1.5), we present an augmented Lagrangian algorithm for SIP (1.1) without any boundedness condition.

We notice that although NP (1.5) has only finitely many constraints, the constraint function is nonsmooth. Therefore, existing gradient-based optimization methods cannot be applied to NP (1.5) directly. To overcome this inconvenience, we have to smooth the constraint function. For SIP (1.1), semismooth Newton methods and smoothing Newton methods were presented in [10–13], where it was proved that each accumulation point is a generalized stationary point of SIP (1.1). However, at each iteration of these methods, a Hessian matrix needs to be computed; when the size of the problem is large, computing a Hessian matrix is very expensive. Based on an exact penalty function approximated by a family of smoothing functions, a smoothed penalty algorithm for solving NP (1.5) was proposed in [14], where it was proved that if the constraint set is bounded or the objective function is coercive, the algorithm generates a sequence whose accumulation points are solutions of SIP (1.1).

In this paper, we present a smooth augmented Lagrangian algorithm for SIP (1.1) by smoothing the classical augmented Lagrangian function [7]. In this algorithm, we do not need an exact global optimal solution of the unconstrained subproblem at each iteration; it is sufficient to find an inexact solution, which is not difficult to obtain whenever the evaluation of the integral function is not very expensive. For this algorithm, we establish a perturbation theorem under mild conditions. As a corollary of the perturbation theorem, we obtain a global convergence result: any accumulation point of the sequence generated by the algorithm is a solution of SIP (1.1). We obtain this global convergence result without any boundedness or coercivity condition. It is noteworthy that boundedness of the multiplier sequence is a standing assumption in much of the literature on Lagrangian methods (see [15–17]); in our algorithm, however, the multiplier sequence may be unbounded. Another corollary of the perturbation theorem shows that the perturbation function is lower semi-continuous at zero if and only if the algorithm forces the sequence of objective function values to converge to the optimal value of SIP (1.1).

The paper is organized as follows. In the next section, we present a smooth augmented Lagrangian algorithm. In Section 3, we establish the perturbation theorem for the algorithm; from this theorem, we obtain a global convergence property and a necessary and sufficient condition under which the algorithm forces the sequence of objective function values to converge to the optimal value of SIP (1.1). Finally, we give some numerical results in Section 4.

2. Smooth Augmented Lagrangian Algorithm

Before we introduce the algorithm, some definitions and symbols need to be given. For $\gamma\ge 0$, we define the relaxed feasible set of SIP (1.1) as follows:
$$X_\gamma=\{x\in\mathbb{R}^n : G(x)\le\gamma\}.$$
Then $X_0$ is the feasible set of SIP (1.1). Let $X^*$ be the set of optimal solutions of SIP (1.1). We assume that $X^*\ne\emptyset$ in this paper.

The perturbation function $\beta:[0,+\infty)\to\mathbb{R}$ is defined as follows:
$$\beta(\gamma)=\inf\{f(x) : x\in X_\gamma\}.$$
Thus the optimal value of SIP (1.1) is $\beta(0)$. It is easy to show that $\beta$ is upper semi-continuous at the point $\gamma=0$.
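For completeness, here is the one-line argument behind the upper semi-continuity claim, written with the notation assumed above:

```latex
% X_0 \subseteq X_\gamma for every \gamma > 0, and taking the infimum of f
% over a larger set can only decrease it, so \beta(\gamma) \le \beta(0).
% Hence
\limsup_{\gamma\to 0^+} \beta(\gamma) \;\le\; \beta(0),
% which is precisely upper semi-continuity of \beta at \gamma = 0.
```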

For problem (1.5), the corresponding classical augmented Lagrangian function [7] is
$$L(x,\lambda,c)=f(x)+\lambda G(x)+\frac{c}{2}G(x)^2,$$
where $\lambda\in\mathbb{R}$ is the Lagrange multiplier and $c>0$ is the penalty parameter. On the basis of it, we introduce a class of smooth augmented Lagrangian functions:
$$L_\epsilon(x,\lambda,c)=f(x)+\lambda G_\epsilon(x)+\frac{c}{2}G_\epsilon(x)^2,\qquad
G_\epsilon(x)=\int_V \epsilon\, p\!\left(\frac{g(x,v)}{\epsilon}\right)d\mu(v).$$
Here $\epsilon>0$ is the approximation parameter.
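As a sketch of how the smoothed function can be evaluated in practice, the code below uses the soft-plus function $p(t)=\ln(1+e^t)$ (one of the admissible choices listed below) together with the quadrature approximation of $G$ from the sketch in Section 1; it is our illustration, not the authors' implementation.

```python
import numpy as np

def softplus(t):
    """p(t) = ln(1 + e^t), a smooth upper approximation of max(t, 0)."""
    return np.logaddexp(0.0, t)  # numerically stable form

def G_eps(x, g, eps, V=(0.0, 1.0), n_nodes=200):
    r"""Smoothed constraint functional \int_V eps * p(g(x,v)/eps) d\mu(v)."""
    a, b = V
    v = a + (b - a) * (np.arange(n_nodes) + 0.5) / n_nodes
    return np.mean(eps * softplus(g(x, v) / eps))

def L_eps(x, lam, c, g, f, eps):
    """Smooth augmented Lagrangian f + lam*G_eps + (c/2)*G_eps^2."""
    Ge = G_eps(x, g, eps)
    return f(x) + lam * Ge + 0.5 * c * Ge ** 2
```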

In the following, we suppose that the continuously differentiable function $p:\mathbb{R}\to\mathbb{R}$ satisfies (a) $p$ is nonnegative and monotonically increasing; (b) for any $t\in\mathbb{R}$, $p(t)\ge\max\{t,0\}$; (c) $\lim_{t\to-\infty}p(t)=0$ and $\lim_{t\to+\infty}(p(t)-t)=0$.

It is easy to check that there are many continuously differentiable functions satisfying conditions (a), (b), and (c); for example, $p(t)=\ln(1+e^t)$ and $p(t)=\tfrac{1}{2}\bigl(t+\sqrt{t^2+4}\bigr)$. Using conditions (a) and (c), for any $t\in\mathbb{R}$, we have
$$\lim_{\epsilon\to 0^+}\epsilon\, p\!\left(\frac{t}{\epsilon}\right)=\max\{t,0\}.$$
From the above equation, under conditions (a)–(c), the smooth function $L_\epsilon$ approximates the classical augmented Lagrangian function as $\epsilon$ approaches zero, that is,
$$\lim_{\epsilon\to 0^+}L_\epsilon(x,\lambda,c)=L(x,\lambda,c).$$
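A quick numerical check of this limit for the soft-plus choice of $p$ (our illustration; the error bound $\epsilon\ln 2$ mentioned in the comment is specific to this $p$):

```python
import numpy as np

p = lambda t: np.logaddexp(0.0, t)      # p(t) = ln(1 + e^t)
for eps in (1.0, 0.1, 0.01):
    t = np.array([-1.0, 0.0, 0.5])
    approx = eps * p(t / eps)           # smoothed value eps * p(t/eps)
    exact = np.maximum(t, 0.0)          # the plus function max(t, 0)
    print(eps, np.max(np.abs(approx - exact)))
# The gap shrinks linearly with eps: for this p it is eps * ln 2 at t = 0.
```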

Based on the smooth augmented Lagrangian function $L_\epsilon$, we present the following smooth augmented Lagrangian algorithm.

Algorithm 2.1 (initialization). Choose the initial data (starting point, initial multiplier, penalty parameter, smoothing parameter, and tolerance parameters) and set the iteration counter to zero.
Step 1. Compute Otherwise, seek an inexact global optimal solution satisfying
Step 2. Set , ,
Step 3. Set , and return to Step 1.

Since $f$ is bounded below and $G_\epsilon$ is nonnegative, an inexact solution satisfying (2.10) always exists. Thus Algorithm 2.1 is well defined.
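Since the exact update rules of Algorithm 2.1 did not survive extraction here, the following sketch should be read with care: it continues the Python sketches above (reusing `L_eps`, `G_eps`, and `g_toy`) and fills Steps 2 and 3 with the classical Hestenes-Powell multiplier update and geometric parameter schedules, which are our placeholder assumptions rather than the paper's rules.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_al_sip(f, g, x0, lam0=0.0, c0=1.0, eps0=0.1,
                  rho=2.0, sigma=0.5, tol=1e-6, max_iter=30):
    """Sketch of a smooth augmented Lagrangian loop for SIP.

    The update lam <- lam + c * G_eps(x) and the geometric schedules for
    c and eps are standard placeholders, NOT the exact rules of
    Algorithm 2.1."""
    x, lam, c, eps = np.asarray(x0, dtype=float), lam0, c0, eps0
    for _ in range(max_iter):
        # Step 1: (inexactly) minimize the smooth augmented Lagrangian.
        res = minimize(lambda z: L_eps(z, lam, c, g, f, eps), x, method="BFGS")
        x = res.x
        Ge = G_eps(x, g, eps)
        if Ge < tol and eps < tol:
            break
        # Step 2: Hestenes-Powell multiplier update (assumed form).
        lam = lam + c * Ge
        # Step 3: increase the penalty, tighten the smoothing parameter.
        c, eps = rho * c, sigma * eps
    return x, lam

# Usage on the toy instance built in the earlier sketches:
f_toy = lambda x: (x[0] - 2.0) ** 2 + x[1] ** 2
x_star, lam_star = smooth_al_sip(f_toy, g_toy, x0=[0.0, 0.0])
print(x_star)  # expect approximately [1, 0]
```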

3. Convergence Properties

In this section, by using a perturbation theorem for Algorithm 2.1, we obtain a global convergence property and a necessary and sufficient condition under which Algorithm 2.1 forces the sequence of objective function values to converge to the optimal value of SIP (1.1). To prove the perturbation theorem, we first give the following two lemmas.

Let , , .

Lemma 3.1. Suppose that the point sequence is generated by Algorithm 2.1. Then for any , there exists a positive integer such that , for all .

Proof.    
Case  1. When , tends to a finite number. From Algorithm 2.1, there exists a positive integer such that for all . Notice that , so for any , there exists a positive integer such that for all . Therefore, when , we have .
Case 2. When , . We suppose that the conclusion does not hold. Then for , there exists an infinite subsequence such that for all , that is, Since , for the above , there exists a positive integer such that for all . Then using (2.10) in Algorithm 2.1, we have Therefore, by (3.1), (3.2), and the fact that satisfies (a) and (b), for any , , we derive that Since is bounded below and , we obtain that that is, . On the other hand, since , we can choose ; by the choice of , , in Algorithm 2.1 and the properties of , we obtain that This indicates that has an upper bound, which contradicts .

By using Lemma 3.1, we have the following Lemma 3.2.

Lemma 3.2. Suppose that the point sequence is generated by Algorithm 2.1. Then for every accumulation point of , one has .

Theorem 3.3. Suppose that the sequence is generated by Algorithm 2.1, then(i);(ii);(iii).

Proof. Since is monotonically decreasing with respect to and is bounded below, we know that exists and is finite. By Algorithm 2.1, we have . Then Taking and , by the definition of infimum, there exists such that Since , that is, .
On the other hand, by Lemma 3.1, for any , when is sufficiently large, we have Since satisfies conditions (a) and (c), we obtain Therefore, there exists such that for any . As stated previously, by the choice of , , , and in Algorithm 2.1, relations (3.7), (3.8), and (3.10) imply that for any , From the above inequalities and (3.6), noticing that , for any , we have Then . So conclusions (i)–(iii) hold.

Now, we prove the global convergence of Algorithm 2.1.

Corollary 3.4. Suppose that the point sequence is generated by Algorithm 2.1. Then every accumulation point of is an optimal solution of SIP (1.1).

Proof. Let be an accumulation point of ; from Lemma 3.2, we have By conclusion (i) of Theorem 3.3 and (3.13), we obtain Then we get , because of (3.14) and the upper semi-continuity of the perturbation function at the point .

By using Theorem 3.3, we have the following Corollary 3.5.

Corollary 3.5. The sequence of objective function values generated by Algorithm 2.1 converges to the optimal value $\beta(0)$ if and only if $\beta$ is lower semi-continuous at the point $0$.

4. Numerical Results

To give some insight into the behavior of the algorithm presented in this paper, we implemented it in Matlab 7.0.4; the runs were made on an AMD Athlon(tm) Dual Core Processor 4800+ with a 2.50 GHz CPU and 1.87 GB of memory. Tables 1 and 2 show the computational results for the corresponding problems with the following items: the number of iterations, the starting point, the smoothing function, the final iterate, the final Lagrange multiplier, and the objective function value at the final iterate.

The parameters used in Algorithm 2.1 are specified as follows:

Example 4.1 (see [18]). Consider the following: We choose the starting point . This example has the optimal solution .

Example 4.2 (see [18]). Consider the following: for , 6, and . We choose zero vectors as the starting points.

Throughout the computational experiments, we use a trust region method to solve the unconstrained optimization subproblem at each step; for the corresponding trust region subproblem, we directly use the trust function in the Matlab Optimization Toolbox. The test results for Example 4.1 are summarized in Table 1. We test the three cases of , , and , which are used, respectively, as the smoothing approximation functions. In the tables, denotes the number of iterations, denotes the approximate Lagrange multiplier at the final iteration, and and are the approximate solution and the objective function value at the final iteration. For Example 4.2, the results for , , and are reported in Table 2. The numerical results demonstrate that the augmented Lagrangian algorithm established in this paper is a practical and effective method for solving semi-infinite programming problems.
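The inner solver in our earlier sketch used BFGS; to mirror the trust region choice described above, one could swap in SciPy's trust-region method instead (again an illustrative stand-in for the Matlab routine, reusing `L_eps`, `g_toy`, and `f_toy` from the earlier sketches):

```python
from scipy.optimize import minimize

# Trust-region variant of the inner solve from the Section 2 sketch;
# eps, lam, and c are frozen at illustrative values.
eps, lam, c = 1e-2, 0.0, 10.0
res = minimize(lambda z: L_eps(z, lam, c, g_toy, f_toy, eps),
               x0=[0.0, 0.0], method="trust-constr")
print(res.x)  # approximate minimizer of the smooth augmented Lagrangian
```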

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 10971118, 10901096, and 11271226, the Scientific Research Fund for the Excellent Middle-Aged and Youth Scientists of Shandong Province under Grant BS2012SF027, and the Natural Science Foundation of Shandong Province under Grant ZR2009AL019.