## Nonlinear Analysis: Algorithm, Convergence, and Applications 2014

Research Article | Open Access

# Second-Order Multiplier Iteration Based on a Class of Nonlinear Lagrangians

**Academic Editor:** Gaohang Yu

#### Abstract

The nonlinear Lagrangian algorithm plays an important role in solving constrained optimization problems. It is known that, under appropriate conditions, the sequence generated by the first-order multiplier iteration converges superlinearly. This paper analyzes the second-order multiplier iteration based on a class of nonlinear Lagrangians for solving nonlinear programming problems with inequality constraints. It is shown that the sequence generated by the second-order multiplier iteration converges superlinearly, with order at least two, if in addition the Hessians of the functions involved in the problem are Lipschitz continuous.

#### 1. Introduction

Lagrangians play an important role in solving constrained optimization problems. Hestenes [1] and Powell [2] introduced the proximal augmented Lagrangian for problems with equality constraints, and Rockafellar [3] developed the proximal augmented Lagrangian for problems with both equality and inequality constraints.

Based on the above Lagrangians, Bertsekas [4, 5] discussed the convergence of the sequence generated by the second-order multiplier iteration, and in 1982 he further improved the convergence and the convergence rate of the second-order multiplier iteration using Newton’s method. Brusch [6] and Fletcher [7] independently proposed second-order multiplier iterations based on quasi-Newton methods, and Bertsekas [8] developed a new quasi-Newton framework in 1982.

Consider the following inequality constrained optimization problem: where , are continuously differentiable functions.
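The displayed formulation did not survive extraction, but the problem class can be illustrated with a tiny hypothetical instance (the data, names, and solution below are illustrative only, not from the paper):

```python
import numpy as np

# Hypothetical instance of an inequality constrained problem:
#   minimize f(x) = x1^2 + x2^2   subject to   g(x) = x1 + x2 - 1 >= 0
# The minimizer is x* = (0.5, 0.5) with Lagrange multiplier u* = 1.
grad_f = lambda x: 2.0 * x
grad_g = lambda x: np.array([1.0, 1.0])
g = lambda x: x[0] + x[1] - 1.0

x_star, u_star = np.array([0.5, 0.5]), 1.0

# Kuhn-Tucker conditions: stationarity, feasibility, complementarity
stationarity = grad_f(x_star) - u_star * grad_g(x_star)
assert np.allclose(stationarity, 0.0)
assert g(x_star) >= -1e-12 and abs(u_star * g(x_star)) < 1e-12
```

The same instance is reused in later sketches so that the first- and second-order multiplier iterations can be compared on one problem.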

Since nonlinear Lagrangians can be used to develop dual algorithms for nonlinear programming that require no restrictions on primal feasibility, many authors have made important contributions on this topic.

Polyak and Teboulle [9] discussed a class of Lagrange functions of the form for solving , where is a penalty parameter and is a twice continuously differentiable function. Furthermore, Polyak and Griva [10] proposed a general primal-dual nonlinear rescaling (PDNR) method for convex optimization with inequality constraints, and Griva and Polyak [11] developed a general primal-dual nonlinear rescaling method with a dynamic scaling parameter update. Besides the works by Polyak and his coauthors, Auslender et al. [12] and Ben-Tal and Zibulevsky [13] studied other nonlinear Lagrangians and also obtained convergence results for convex programming problems. Under appropriate conditions, the sequence generated by the first-order multiplier iteration converges superlinearly.

Ren and Zhang [14] analysed the following nonlinear Lagrangian: and constructed the dual algorithm based on minimizing as follows.

*D-Algorithm*

*Step 1.* Given large enough, small enough, , and , set .

*Step 2.* Solve (approximately)
and obtain its (approximate) solution .

*Step 3.* If , , stop; otherwise go to Step 4.

*Step 4.* Update the Lagrange multiplier

*Step 5.* Set and return to Step 2.

It was shown that, under a set of conditions, the dual algorithm based on this class of Lagrangians is locally convergent when the penalty parameter is larger than a threshold.
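The concrete form (2) of the nonlinear Lagrangian is not reproduced above, so the following sketch of the D-Algorithm substitutes a Polyak-style nonlinear rescaling Lagrangian with ψ(t) = log(1 + t); the instance, all function names, and all parameter values are assumptions for illustration, not the paper’s:

```python
import numpy as np

# Sketch of the D-Algorithm on the hypothetical instance
#   minimize x1^2 + x2^2   s.t.   g(x) = x1 + x2 - 1 >= 0,
# using the stand-in nonlinear rescaling Lagrangian
#   F(x, u, k) = f(x) - (u/k) * psi(k * g(x)),
# where psi(t) = log(1 + t) satisfies psi(0) = 0 and psi'(0) = 1.
psi, dpsi = np.log1p, lambda t: 1.0 / (1.0 + t)

f  = lambda x: x @ x
gf = lambda x: 2.0 * x
g  = lambda x: x[0] + x[1] - 1.0
gg = lambda x: np.array([1.0, 1.0])

def F(x, u, k):
    t = k * g(x)
    return np.inf if t <= -1.0 else f(x) - (u / k) * psi(t)

def argmin_F(x, u, k, iters=200):
    """Step 2: minimize F(., u, k) by gradient descent with backtracking."""
    for _ in range(iters):
        d = gf(x) - u * dpsi(k * g(x)) * gg(x)          # grad_x F
        step = 1.0                                       # Armijo backtracking
        while F(x - step * d, u, k) > F(x, u, k) - 1e-4 * step * (d @ d):
            step *= 0.5
        x = x - step * d
    return x

x, u, k = np.array([1.0, 1.0]), 0.3, 5.0
for _ in range(30):                                      # Steps 2-5
    x = argmin_F(x, u, k)
    u = u * dpsi(k * g(x))                               # first-order update
```

Starting from the deliberately wrong multiplier u = 0.3, the iterates approach x* = (0.5, 0.5) and u* = 1 linearly, which is the first-order behavior that the second-order iteration below is designed to improve.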

In view of the interpretation of the multiplier iteration as a steepest ascent method, it is natural to consider Newton’s method for maximizing the dual functional. Using known results for Newton’s method, we expect that the second-order iteration will yield a vector which is closer to than . This paper discusses the second-order multiplier iteration based on nonlinear Lagrangians of the form (2). It is shown that the sequence generated by the second-order multiplier iteration converges superlinearly, with order at least two, if are Lipschitz continuous.

We introduce the following notation to end this section:

#### 2. Preliminaries

Consider the inequality constrained optimization problem . Let denote the Lagrange function for problem and , .

For the convenience of the description in the sequel, we list the following assumptions, some of which will be used later.
(a) Functions are twice continuously differentiable.
(b) For convenience of statement, we assume , .
(c) Let satisfy the Kuhn-Tucker conditions
(d) The strict complementarity condition holds; that is,
(e) The set of vectors is linearly independent.
(f) For all satisfying , , the following inequality holds:

Let the function in defined in (2) and its derivatives satisfy the following conditions:
(H1) ;
(H2) , for all , with , and ;
(H3) , for all , with ;
(H4) is bounded, where , with , and for large enough.

The following proposition concerns properties of at a Kuhn-Tucker point .

Proposition 1 (see [14]). *Assume that (a)–(f) and (H1)–(H3) hold. For any and any Kuhn-Tucker point , the following properties are valid:*
(i) ;
(ii) ;
(iii) , *where* ;
(iv) *there exist* *and* *such that* , *for any* .

Let be small enough, , and large enough satisfying (iv) of Proposition 1. For any fixed , define For any , we denote Let , is the identity matrix, and is the zero matrix.

Theorem 2 (see [14]). *Assume that (a)–(f) and (H1)–(H4) hold. Then there exists large enough such that, for any , there exist , satisfying that, for any , the following statements hold.*
(i) *There exists a vector*
(ii) *For in (i) and , the following estimate is valid:*
where is a scalar independent of and .
(iii) *Function is strongly convex in a neighborhood of .*

#### 3. The Second-Order Multiplier Iteration

Based on the nonlinear Lagrange function , we consider the dual function defined on as follows: where is the indicator function of .

Lemma 3. *Assume that conditions (a)–(f) and (H1)–(H4) hold; then, for any fixed , the function is twice continuously differentiable and concave on .*

*Proof. *Obviously, for , the function is concave. In view of Theorem 2, for any , the function is strongly convex in a neighborhood of . So is the unique minimizer of with respect to in the neighborhood of the point , and is smooth in ; that is, the Jacobian of exists, and
For , the matrix is positive definite, and the system generates a unique vector-valued function satisfying and
In view of , we have
It follows from (18) that
which means
Thus,
So,

Let be the minimizer of in a neighborhood of ; then we obtain that

In view of the interpretation of the multiplier iteration as a steepest ascent method, it is natural to consider Newton’s method for maximizing the dual functional, which is given by In view of (23), this iteration can be written as where We will provide a convergence and rate-of-convergence result for iteration (25) and (26).
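Since formulas (23)–(26) are not reproduced above, the second-order (Newton) iteration on the dual can only be sketched on the same hypothetical instance with the stand-in Lagrangian F(x, u, k) = f(x) − (u/k)·log(1 + k·g(x)); the closed-form inner minimizer, the envelope-theorem formula for the dual derivative, and the finite-difference second derivative are all illustrative assumptions:

```python
import numpy as np

# Newton's method for maximizing the dual functional d(u) = min_x F(x, u, k)
# on the instance  min x1^2 + x2^2  s.t.  g(x) = x1 + x2 - 1 >= 0.
# By symmetry the inner minimizer has x1 = x2 = t, where t solves the
# stationarity condition  4k t^2 + 2(1 - k) t - u = 0,  so t and hence
# d'(u) = -(1/k) * log(1 + k * g(x(u)))  (envelope theorem) are closed-form.
K = 5.0

def d_prime(u, k=K):
    t = ((k - 1.0) + np.sqrt((1.0 - k) ** 2 + 4.0 * k * u)) / (4.0 * k)
    return -np.log1p(k * (2.0 * t - 1.0)) / k

u, h = 0.3, 1e-6
for _ in range(8):
    d2 = (d_prime(u + h) - d_prime(u - h)) / (2.0 * h)  # numerical d''(u)
    u = u - d_prime(u) / d2                             # Newton (ascent) step
```

In this run the error |u − 1| roughly squares from one step to the next, the second-order behavior contrasted with the linear rate of the first-order update.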

For and , we define For a given , assume (by reordering indices if necessary) that contains the first indices, where is an integer with . Define We note that and depend on , but to simplify notation we do not show this dependence explicitly. Now, we consider Newton’s method for solving the system of necessary conditions

Considering the extension of Newton’s method, given , we denote the next iterate by where . We also write The iteration, roughly speaking, consists of setting the multipliers of the inactive constraints () to zero and treating the remaining constraints as equalities. More precisely, we set and obtain by solving the system where .

If is invertible and has rank , we can solve system (31) explicitly. It follows from (31) that Premultiplying (32) by and using (33), we obtain from which we have Substituting into (32) yields Returning to (25) and (26), and using the fact that , we see that iteration (25) and (26) is of the form (35).
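The elimination just described (premultiplying the first block equation and back-substituting) is the standard Schur-complement solve of a saddle-point system. A small numerical check on hypothetical random data, with H and A standing in for the two blocks of (31):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)            # symmetric positive definite block
A = rng.standard_normal((m, n))        # full-rank constraint block
b1, b2 = rng.standard_normal(n), rng.standard_normal(m)

# Schur-complement solve of  [[H, A^T], [A, 0]] [u; v] = [b1; b2]:
Hinv_AT = np.linalg.solve(H, A.T)
Hinv_b1 = np.linalg.solve(H, b1)
S = A @ Hinv_AT                        # Schur complement  A H^{-1} A^T
v = np.linalg.solve(S, A @ Hinv_b1 - b2)   # multiplier step first
u = Hinv_b1 - Hinv_AT @ v                  # then back-substitute

# agrees with solving the full block system directly
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
uv = np.linalg.solve(K, np.concatenate([b1, b2]))
assert np.allclose(np.concatenate([u, v]), uv)
```

Solving for the multiplier step first only requires factorizing H and the small m × m Schur complement, which is the practical advantage of the explicit formulas derived from (31).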

For a triple for which the matrix on the left-hand side of (31) is invertible, we denote by , the unique solution of (31) and say that , are well defined.

Define

Proposition 4. *Let be a scalar. For every triple , if satisfies
then the vectors , are well defined if and only if the vectors , are well defined. Furthermore,*

*Proof. *By calculating, we have
As a result, the system (31) can be written as
The second equation yields
If we form the second-order Taylor series expansion of around ,
we obtain
Take , , , and it follows that

Substituting (46) into (43), we have
which, when substituted into the first equation in (42), yields
Thus, in view of condition , system (42) is equivalent to
This shows (39) and (40).

In view of (40), we can write (37) as where This means that one can carry out the second-order multiplier iteration (25), (26) in two stages: first execute the first-order iteration (51), and then the second-order iteration (50), which is part of Newton’s iteration at for solving the system of necessary conditions (29).

Now, we know that is close to for in an appropriate region of . Therefore, using known results for Newton’s method, we expect that (50) will yield a vector which is closer to than . This argument is the basis for the proof of the following proposition.

Proposition 5. *Assume (a)–(f) hold, and let , be as in Theorem 2. Then, given any scalar , there exists a scalar with such that, for all , there holds
where
where
If, in addition, , are Lipschitz continuous in a neighborhood of , there exists a scalar such that, for all , there holds*

*Proof. *In view of Theorem 2, given any , there exist and such that if and , there holds
(compare with Proposition 1.17, Bertsekas [8]). Take sufficiently small so that, for all , we have , , and
From (50), we have
If are Lipschitz continuous, then there exists a such that for and , we have

From the above analysis, we conclude that the sequence generated by the second-order multiplier iteration converges superlinearly, with order at least two, if the Hessians of the functions involved in the problem are Lipschitz continuous.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

This project is supported by the National Natural Science Foundation of China (Grant no. 11171138).

#### References

1. M. R. Hestenes, “Multiplier and gradient methods,” *Journal of Optimization Theory and Applications*, vol. 4, no. 5, pp. 303–320, 1969.
2. M. J. D. Powell, “A method for nonlinear constraints in minimization problems,” in *Optimization*, pp. 283–298, Academic Press, New York, NY, USA, 1969.
3. R. T. Rockafellar, “Augmented Lagrange multiplier functions and duality in nonconvex programming,” *SIAM Journal on Control*, vol. 12, pp. 268–285, 1974.
4. D. P. Bertsekas, “Multiplier methods: a survey,” *Automatica*, vol. 12, no. 2, pp. 133–145, 1976.
5. D. P. Bertsekas, “On the convergence properties of second-order multiplier methods,” *Journal of Optimization Theory and Applications*, vol. 25, no. 3, pp. 443–449, 1978.
6. R. B. Brusch, “A rapidly convergent method for equality constrained function minimization,” in *Proceedings of the IEEE Conference on Decision and Control*, pp. 80–81, San Diego, Calif, USA, 1973.
7. R. Fletcher, “An ideal penalty function for constrained optimization,” in *Nonlinear Programming 2*, pp. 121–163, Academic Press, New York, NY, USA, 1975.
8. D. P. Bertsekas, *Constrained Optimization and Lagrange Multiplier Methods*, Academic Press, New York, NY, USA, 1982.
9. R. A. Polyak and M. Teboulle, “Nonlinear rescaling and proximal-like methods in convex optimization,” *Mathematical Programming, Series B*, vol. 76, no. 2, pp. 265–284, 1997.
10. R. A. Polyak and I. Griva, “Primal-dual nonlinear rescaling method for convex optimization,” *Journal of Optimization Theory and Applications*, vol. 122, no. 1, pp. 111–156, 2004.
11. I. Griva and R. A. Polyak, “Primal-dual nonlinear rescaling method with dynamic scaling parameter update,” *Mathematical Programming*, vol. 106, no. 2, pp. 237–259, 2006.
12. A. Auslender, R. Cominetti, and M. Haddou, “Asymptotic analysis for penalty and barrier methods in convex and linear programming,” *Mathematics of Operations Research*, vol. 22, no. 1, pp. 43–62, 1997.
13. A. Ben-Tal and M. Zibulevsky, “Penalty/barrier multiplier methods for convex programming problems,” *SIAM Journal on Optimization*, vol. 7, no. 2, pp. 347–366, 1997.
14. Y.-H. Ren and L.-W. Zhang, “The dual algorithm based on a class of nonlinear Lagrangians for nonlinear programming,” in *Proceedings of the 6th World Congress on Intelligent Control and Automation (WCICA '06)*, pp. 934–938, 2006.

#### Copyright

Copyright © 2014 Yong-Hong Ren. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.