
Abstract and Applied Analysis

Volume 2013 (2013), Article ID 780107, 5 pages

http://dx.doi.org/10.1155/2013/780107

## A New Smoothing Nonlinear Conjugate Gradient Method for Nonsmooth Equations with Finitely Many Maximum Functions

^{1}College of Mathematics, Qingdao University, Qingdao 266071, China
^{2}School of Management, University of Shanghai for Science and Technology, Shanghai 200093, China

Received 8 March 2013; Accepted 25 March 2013

Academic Editor: Yisheng Song

Copyright © 2013 Yuan-yuan Chen and Shou-qiang Du. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The nonlinear conjugate gradient method is of particular importance for solving unconstrained optimization problems. Nonsmooth equations with finitely many maximum functions form a very useful class of nonsmooth equations, arising in the study of complementarity problems, constrained nonlinear programming problems, and many problems in engineering and mechanics. Smoothing methods for solving nonsmooth equations, complementarity problems, and stochastic complementarity problems have been studied for decades. In this paper, we present a new smoothing nonlinear conjugate gradient method for nonsmooth equations with finitely many maximum functions. The new method guarantees that any accumulation point of the sequence of iterates it generates is a Clarke stationary point of the merit function for nonsmooth equations with finitely many maximum functions.

#### 1. Introduction

In this paper, we consider the following nonsmooth equations with finitely many maximum functions:
$$\max_{j \in J_i} f_{ij}(x) = 0, \quad i = 1, \ldots, n, \tag{1}$$
where the $f_{ij}\colon \mathbb{R}^n \to \mathbb{R}$, for $i = 1, \ldots, n$ and $j \in J_i$, are continuously differentiable functions and the $J_i$ are finite index sets. Denote
$$F_i(x) = \max_{j \in J_i} f_{ij}(x), \quad i = 1, \ldots, n; \tag{2}$$
then (1) can be reformulated as the system of nonsmooth equations
$$F(x) = 0, \tag{3}$$
where $F(x) = (F_1(x), \ldots, F_n(x))^T$. Nonsmooth equations have been studied for decades; they arise in the study of optimal control, variational inequality and complementarity problems, equilibrium problems, and engineering mechanics [1–3], because many practical problems, such as stochastic complementarity problems, variational inequality problems, KKT systems of constrained nonlinear programming problems, and many equilibrium problems, can be reformulated as (1). In the past few years, there has been growing interest in the study of (1) (see, e.g., [4, 5]). Owing to their simplicity and global convergence, iterative methods for nonsmooth equations, such as the nonsmooth Levenberg-Marquardt method, the generalized Newton method, and smoothing methods, have been widely studied [4–11].

In this paper, we give a new smoothing nonlinear conjugate gradient method for (1). In the following section, we recall some definitions and some background on nonlinear conjugate gradient methods. We also give the new smoothing nonlinear conjugate gradient method for (1), which guarantees that any accumulation point of the sequence of iterates is a Clarke stationary point of the merit function for (1). In the last section, some discussions are given.

*Notation*. In the following, a quantity with subscript $k$ denotes that quantity evaluated at $x_k$, $\|\cdot\|$ denotes the Euclidean norm, and $g_k$ denotes the gradient of the objective function evaluated at $x_k$.

#### 2. Preliminaries and New Method

In this section, we first give some definitions and some background on the nonlinear conjugate gradient method. We then propose the new methods for (1) and give the convergence analysis.

In the following, we give two definitions, which will be used in this paper.

*Definition 1. *Let $f\colon \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz function. Then, by Rademacher's theorem, $f$ is almost everywhere differentiable. Denote by $D_f$ the set of points where $f$ is differentiable; then the generalized gradient of $f$ at $x$ in the sense of Clarke is
$$\partial f(x) = \operatorname{conv}\Bigl\{\lim_{x_k \to x,\ x_k \in D_f} \nabla f(x_k)\Bigr\}, \tag{4}$$
where conv denotes the convex hull.

*Definition 2. *Let $f\colon \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function. We call $\tilde{f}\colon \mathbb{R}^n \times \mathbb{R}_{+} \to \mathbb{R}$ a smoothing function of $f$ if $\tilde{f}(\cdot, \mu)$ is continuously differentiable for any fixed $\mu > 0$ and
$$\lim_{z \to x,\ \mu \downarrow 0} \tilde{f}(z, \mu) = f(x) \quad \text{for any } x \in \mathbb{R}^n. \tag{5}$$
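To make Definition 2 concrete, the following is a minimal Python sketch (not from the paper) of one classical smoothing function: the log-sum-exp smoothing of a finite maximum. It is continuously differentiable for each fixed $\mu > 0$ and converges to the maximum as $\mu \downarrow 0$.

```python
import math

def smooth_max(values, mu):
    """Log-sum-exp smoothing of max(values), a classical smoothing function.
    Satisfies max(values) <= smooth_max(values, mu)
                          <= max(values) + mu * log(len(values)),
    so smooth_max -> max as mu -> 0."""
    m = max(values)  # shift the exponents for numerical stability
    return m + mu * math.log(sum(math.exp((v - m) / mu) for v in values))

vals = [0.3, -1.2, 0.9]
for mu in (1.0, 0.1, 0.01):
    gap = smooth_max(vals, mu) - max(vals)  # nonnegative, shrinks with mu
```

The uniform error bound $\mu \log |J_i|$ makes this a convenient smoothing for each component $\max_{j \in J_i} f_{ij}(x)$ of (1), though the paper itself does not commit to a specific smoothing function.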

In the following, we give the new method for (1). In order to describe the method clearly, we divide this section into two parts. In Case 1, we give the new nonlinear conjugate gradient method for a smooth objective function. In Case 2, we give the new smoothing nonlinear conjugate gradient method for a nonsmooth objective function.

Denote
$$\theta(x) = \frac{1}{2}\|F(x)\|^2, \tag{6}$$
where $F$ is defined in (3). Then (1) is equivalent to the following unconstrained minimization problem with zero optimal value:
$$\min_{x \in \mathbb{R}^n} \theta(x). \tag{7}$$
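The merit-function reformulation can be sketched on a toy instance of (1). The functions below are illustrative choices, not from the paper; the point is only that $\theta(x) = 0$ exactly at solutions of (1).

```python
# Toy instance of (1)-(3) with n = 2: each F_i is a max of smooth pieces.
#   F_1(x) = max(x1^2 - 1, x1 + x2),  F_2(x) = max(x2, x1*x2 - 2)
def F(x):
    x1, x2 = x
    return [max(x1**2 - 1.0, x1 + x2), max(x2, x1 * x2 - 2.0)]

def theta(x):
    """Merit function theta(x) = 0.5 * ||F(x)||^2; x solves (1) iff theta(x) = 0."""
    return 0.5 * sum(v * v for v in F(x))

# x = (-1, 0) solves this instance: both max expressions evaluate to 0 there.
root_value = theta((-1.0, 0.0))      # 0.0
nonroot_value = theta((2.0, 0.0))    # 0.5 * (3^2 + 0^2) = 4.5
```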

*Case 1. *In this section, we assume that $\theta$ is a continuously differentiable function. Then (7) is a standard unconstrained optimization problem. There are many methods for solving unconstrained optimization problems, such as the Newton method, the nonlinear conjugate gradient method, and quasi-Newton methods [12–16]. Here, based on [13, 14], we give a new nonlinear conjugate gradient method to solve (7). The iterates for solving (7) are given by
$$x_{k+1} = x_k + \alpha_k d_k, \tag{8}$$
where $d_k$ is the search direction and $\alpha_k$ is a step size; in this paper, we use the Wolfe-type line search [14]. Compute $\alpha_k$ such that
$$\theta(x_k + \alpha_k d_k) \le \theta(x_k) + \delta \alpha_k g_k^T d_k, \qquad g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \tag{9}$$
where $0 < \delta < \sigma < 1$ and $g_k$ is the gradient of $\theta$ at $x_k$. The direction $d_k$ is defined by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{10}$$
where $\beta_k$ is the modified Hestenes-Stiefel conjugacy parameter of [13], chosen so that the sufficient descent property of Lemma 4 below holds.

Now we give the new method for (7) as follows.

*Method 1. *Consider the following steps. *Step 0*. Given $x_0 \in \mathbb{R}^n$, set $d_0 = -g_0$ and $k := 0$; if $\|g_0\| = 0$, then stop. *Step 1*. Find $\alpha_k$ satisfying (9); $x_{k+1}$ is given by (8). *Step 2*. Compute $d_{k+1}$ by (10). Set $k := k + 1$, and go to Step 1.
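The structure of Method 1 can be sketched in Python. This is a hedged illustration, not the paper's method: the Polak-Ribière-plus parameter stands in for the modified Hestenes-Stiefel parameter of [13], simple backtracking on the sufficient-decrease condition stands in for the full Wolfe-type search (9), and gradients are approximated by finite differences.

```python
def grad(f, x, h=1e-6):
    # central finite differences, for illustration only
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def cg_minimize(f, x0, tol=1e-6, max_iter=500):
    x = list(x0)
    g = grad(f, x)
    d = [-gi for gi in g]                      # Step 0: d_0 = -g_0
    for _ in range(max_iter):
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 ** 0.5 < tol:
            break
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                          # safeguard: restart with -g
            d = [-gi for gi in g]
            slope = -gnorm2
        alpha = 1.0                             # Step 1: backtracking line search
        while f([xi + alpha * di for xi, di in zip(x, d)]) > f(x) + 1e-4 * alpha * slope:
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(f, x)
        # Step 2: Polak-Ribiere-plus beta (illustrative stand-in for [13])
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g)) / gnorm2)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# minimize a smooth quadratic with minimizer (1, -2)
sol = cg_minimize(lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2, [0.0, 0.0])
```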

We also need the following assumptions.

*Assumption 3. *Consider the following: (i) $\theta$ is level bounded; that is, the level set $\Omega = \{x : \theta(x) \le \theta(x_0)\}$ is bounded; (ii) in a neighborhood $N$ of $\Omega$, there exists $L > 0$ such that
$$\|g(x) - g(y)\| \le L\|x - y\|, \quad \forall x, y \in N; \tag{11}$$
that is, the gradient $g$ is Lipschitz continuous on $N$.

In the following, we will give the global convergence analysis about Method 1. Firstly, we give some lemmas.

Lemma 4. *Let $\{d_k\}$ be generated by Method 1; then
$$g_k^T d_k \le -c\|g_k\|^2,$$
where $c > 0$ is a constant.*

By [13, Theorem 2.1], we have Lemma 4.

Lemma 5 (see [14]). *Suppose that Assumption 3 holds and $\alpha_k$ is computed by (9). Then
$$\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty.$$*

By Lemmas 4 and 5, we have Lemma 6.

Lemma 6. *Suppose that Assumption 3 holds and $\alpha_k$ is determined by (9). Then
$$\sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty.$$*

Now, we give the global convergence theorem for Method 1.

Theorem 7. *Suppose that Assumption 3 holds and $\{x_k\}$ is generated by Method 1. Then
$$\liminf_{k \to \infty} \|g_k\| = 0.$$*

*Proof. *Suppose, by contradiction, that there exists $\varepsilon > 0$ such that
$$\|g_k\| \ge \varepsilon$$
holds for all $k$. From (10) we have $d_k + g_k = \beta_k d_{k-1}$, which together with (11) yields a bound of the form $\|d_k\|^2 \le c_1 + c_2 k$ for suitable constants $c_1, c_2 > 0$. Then we have
$$\sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} \ge \varepsilon^4 \sum_{k \ge 0} \frac{1}{c_1 + c_2 k} = \infty,$$
which contradicts Lemma 6. So we get the theorem.

*Case 2. *In this section, we assume that $\theta$ is a nonsmooth function, so (7) is a nonsmooth unconstrained optimization problem. Denote by $\tilde{\theta}(x, \mu)$ a smoothing function of $\theta$ in the sense of Definition 2, and write $\tilde{g}_k = \nabla_x \tilde{\theta}(x_k, \mu_k)$. Then $\alpha_k$ is computed by the Wolfe-type conditions applied to $\tilde{\theta}(\cdot, \mu_k)$:
$$\tilde{\theta}(x_k + \alpha_k d_k, \mu_k) \le \tilde{\theta}(x_k, \mu_k) + \delta \alpha_k \tilde{g}_k^T d_k, \qquad \nabla_x \tilde{\theta}(x_k + \alpha_k d_k, \mu_k)^T d_k \ge \sigma \tilde{g}_k^T d_k, \tag{21}$$
where $0 < \delta < \sigma < 1$. The direction $d_k$ is defined by
$$d_k = \begin{cases} -\tilde{g}_k, & k = 0, \\ -\tilde{g}_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{22}$$
where $\beta_k$ is the same conjugacy parameter as in (10), now computed with the smoothed gradients $\tilde{g}_k$.

We give the following smoothing nonlinear conjugate gradient method.

*Method 2. *Given $x_0 \in \mathbb{R}^n$, $\mu_0 > 0$, $\gamma \in (0, 1)$, and $\sigma_1 \in (0, 1)$; set $k := 0$. *Step 1*. Find $\alpha_k$ satisfying (21); $x_{k+1}$ is given by (8). *Step 2*. If $\|\nabla_x \tilde{\theta}(x_{k+1}, \mu_k)\| \ge \gamma \mu_k$, then set $\mu_{k+1} = \mu_k$; otherwise, let $\mu_{k+1} = \sigma_1 \mu_k$. *Step 3*. Compute $d_{k+1}$ by (22). Set $k := k + 1$, and go to Step 1.
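The outer smoothing loop of Method 2 can be sketched as follows. This is a hedged illustration under simplifying assumptions: the inner conjugate gradient step is replaced by a single backtracking gradient step, the parameter names `gamma1` and `sigma1` are illustrative, and the smoothing function chosen is $\tilde{\theta}(x, \mu) = \sqrt{x^2 + \mu^2}$ for the one-dimensional $\theta(x) = |x|$.

```python
import math

def smooth_theta(x, mu):
    # smoothing function of theta(x) = |x| in the sense of Definition 2
    return math.sqrt(x * x + mu * mu)

def smooth_grad(x, mu):
    return x / math.sqrt(x * x + mu * mu)

def method2(x, mu=1.0, gamma1=0.5, sigma1=0.5, iters=200):
    for _ in range(iters):
        g = smooth_grad(x, mu)
        # a single backtracking gradient step stands in for (8), (21), (22)
        alpha = 1.0
        while smooth_theta(x - alpha * g, mu) > smooth_theta(x, mu) - 1e-4 * alpha * g * g:
            alpha *= 0.5
        x = x - alpha * g
        # Step 2: shrink mu only when the smoothed gradient has become small
        if abs(smooth_grad(x, mu)) < gamma1 * mu:
            mu *= sigma1
    return x, mu

x_star, mu_final = method2(5.0)  # drives x toward 0, the minimizer of |x|
```

The key design point mirrors Step 2 of Method 2: the smoothing parameter is reduced only after the current smoothed problem has been solved to accuracy proportional to $\mu_k$, which is what forces $\mu_k \to 0$ in Theorem 8.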

Theorem 8. *Suppose that $\tilde{\theta}(\cdot, \mu)$ satisfies Assumption 3 for each fixed $\mu > 0$. Then the sequence $\{x_k\}$ generated by Method 2 satisfies
$$\lim_{k \to \infty} \mu_k = 0 \quad \text{and} \quad \liminf_{k \to \infty} \|\nabla_x \tilde{\theta}(x_k, \mu_k)\| = 0.$$*

*Proof. *If the set $K = \{k : \mu_{k+1} = \sigma_1 \mu_k\}$ is finite, then there is a fixed $\bar{\mu} > 0$ such that $\mu_k = \bar{\mu}$ for all sufficiently large $k$. Since $\tilde{\theta}(\cdot, \bar{\mu})$ is a smooth function, the method then reduces to Method 1 applied to $\tilde{\theta}(\cdot, \bar{\mu})$. By Theorem 7, we get
$$\liminf_{k \to \infty} \|\nabla_x \tilde{\theta}(x_k, \bar{\mu})\| = 0,$$
so $\|\nabla_x \tilde{\theta}(x_{k+1}, \bar{\mu})\| \ge \gamma \bar{\mu}$ for all large $k$ is impossible. Hence $K$ is infinite, and therefore
$$\lim_{k \to \infty} \mu_k = 0.$$
By the infinity of $K$, we may write $K = \{k_0, k_1, k_2, \ldots\}$. For each $k_i \in K$, Step 2 gives
$$\|\nabla_x \tilde{\theta}(x_{k_i + 1}, \mu_{k_i})\| < \gamma \mu_{k_i},$$
and since $\mu_{k_i} \to 0$, we have
$$\liminf_{k \to \infty} \|\nabla_x \tilde{\theta}(x_k, \mu_k)\| = 0.$$
So we complete the proof.

*Remark 9. *From the result of Theorem 8, we know that for some kinds of smoothing functions [9, 10], any accumulation point $\bar{x}$ of $\{x_k\}$ generated by Method 2 is a Clarke stationary point of $\theta$; that is, $0 \in \partial \theta(\bar{x})$.

#### 3. Some Discussions

In this paper, we give a new smoothing nonlinear conjugate gradient method for (1). The new method guarantees that any accumulation point of the sequence of iterates is a Clarke stationary point of the merit function for (1).

*Discussion 1. *In our Methods 1 and 2, we can use any line search, which is well defined under the condition that the search directions are descent directions.

*Discussion 2. *There are some kinds of smoothing functions which satisfy Assumption 3 for each fixed $\mu > 0$ (see [9, 10]). When the smoothing function of $\theta$ has the gradient consistency property, any accumulation point of the sequence generated by Method 2 is a Clarke stationary point. Under some assumptions, we can also use the methods in [15] to solve the nonsmooth equations with finitely many maximum functions (1).

*Discussion 3. *The new method can also be used for solving the nonlinear complementarity problem (NCP). By the F-B (Fischer-Burmeister) function
$$\phi(a, b) = \sqrt{a^2 + b^2} - a - b,$$
we know that the NCP is equivalent to the system $\Phi(x) = 0$ with $\Phi_i(x) = \phi(x_i, F_i(x))$, where $F$ is a continuously differentiable function; since $\frac{1}{2}\|\Phi(x)\|^2$ is continuously differentiable, we can use the smooth version, Method 1, to solve it. Method 2 can also be used to solve the vertical complementarity problems
$$\min\{F_1(x), \ldots, F_m(x)\} = 0,$$
where $F_1, \ldots, F_m$ are continuously differentiable functions and the minimum is taken componentwise.
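The defining property of the Fischer-Burmeister function, that it vanishes exactly on complementary pairs, is easy to verify directly; the check below is an illustration, not part of the paper.

```python
import math

def fb(a, b):
    """Fischer-Burmeister function: fb(a, b) = sqrt(a^2 + b^2) - a - b.
    fb(a, b) = 0  if and only if  a >= 0, b >= 0, and a * b = 0."""
    return math.sqrt(a * a + b * b) - a - b

# complementary pairs map to zero; violations do not
on_boundary = fb(0.0, 3.0)    # 0.0: a = 0, b >= 0
infeasible = fb(-1.0, 1.0)    # positive: a < 0 violates feasibility
interior = fb(1.0, 1.0)       # nonzero: a * b = 1 violates complementarity
```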

*Discussion 4. *The new method can also be used for solving Hamilton-Jacobi-Bellman (HJB) equations (see [17]). The HJB equation is to find $u$ such that
$$\max_{1 \le i \le m}(L_i u - f_i) = 0 \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,$$
where $\Omega$ is a bounded domain in $\mathbb{R}^n$ and $L_i$, $i = 1, \ldots, m$, are elliptic operators of second order. HJB equations arise in stochastic control problems and are often used to solve finance and control problems. By the finite element method, we can obtain the following discrete HJB equation: find $u \in \mathbb{R}^N$ such that
$$\max_{1 \le i \le m}(A_i u - f_i) = 0,$$
where $A_i \in \mathbb{R}^{N \times N}$ and $f_i \in \mathbb{R}^N$, $i = 1, \ldots, m$, and the maximum is taken componentwise.
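The discrete HJB system is a direct instance of (1): each component is a maximum over finitely many affine, hence smooth, functions. The sketch below evaluates that residual on illustrative toy data (not from [17]).

```python
def hjb_residual(u, As, fs):
    """Componentwise residual of the discrete HJB equation
    max_i (A_i u - f_i) = 0, an instance of (1) with affine pieces."""
    n = len(u)
    res = []
    for row in range(n):
        vals = [sum(A[row][c] * u[c] for c in range(n)) - f[row]
                for A, f in zip(As, fs)]
        res.append(max(vals))
    return res

# toy data: two 2x2 operators; u = (1, 1) solves the system, since
# componentwise max(u_i - 1, 2*u_i - 3) = max(0, -1) = 0 at u_i = 1
A1, f1 = [[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0]
A2, f2 = [[2.0, 0.0], [0.0, 2.0]], [3.0, 3.0]
r = hjb_residual([1.0, 1.0], [A1, A2], [f1, f2])  # [0.0, 0.0]
```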

*Discussion 5. *We can also consider using the new method to solve the general variational inequality problem (see [18]). The general variational inequality problem is to compute $x^* \in \mathbb{R}^n$ such that $H(x^*) \in K$ and
$$(y - H(x^*))^T G(x^*) \ge 0, \quad \forall y \in K, \tag{34}$$
where $H, G\colon \mathbb{R}^n \to \mathbb{R}^n$ are two continuously differentiable functions and $K \subseteq \mathbb{R}^n$ is a closed convex set. Problem (34) is a generalization of complementarity problems, nonlinear variational inequality problems, and general nonlinear complementarity problems. The variational inequality problem is to compute $x^* \in K$ such that
$$(y - x^*)^T G(x^*) \ge 0, \quad \forall y \in K,$$
which corresponds to $H(x) = x$. The general nonlinear complementarity problem is to compute $x^* \in \mathbb{R}^n$ such that
$$H(x^*) \ge 0, \qquad G(x^*) \ge 0, \qquad H(x^*)^T G(x^*) = 0.$$
We can rewrite (34) as the following nonsmooth equation:
$$H(x) - \Pi_K\bigl(H(x) - G(x)\bigr) = 0, \tag{37}$$
where $\Pi_K$ is the projection operator onto $K$ under the Euclidean norm. The general nonlinear complementarity problems are widely used in solving engineering and economic problems; for example, under some conditions, the $n$-person noncooperative game problem can be reformulated as (37) (see [19]). Therefore, how to use our new method to solve the general nonlinear complementarity problems and the $n$-person noncooperative game problem would be an interesting topic and deserves further investigation.
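The projection-residual reformulation (37) can be sketched for the special case $H(x) = x$ with $K$ the nonnegative orthant, where the projection is the componentwise positive part; the example data are illustrative, not from [18].

```python
# Natural residual for VI(G, K) with K = nonnegative orthant:
#   r(x) = x - Pi_K(x - G(x)),  and r(x) = 0 iff x solves the VI.
def project_orthant(z):
    return [max(zi, 0.0) for zi in z]  # Pi_K is componentwise max(., 0)

def natural_residual(x, G):
    z = [xi - gi for xi, gi in zip(x, G(x))]
    p = project_orthant(z)
    return [xi - pi for xi, pi in zip(x, p)]

# G(x) = x - 1 componentwise: the complementarity solution is x = (1, 1)
G = lambda x: [xi - 1.0 for xi in x]
res_at_solution = natural_residual([1.0, 1.0], G)   # [0.0, 0.0]
res_elsewhere = natural_residual([0.0, 0.0], G)     # nonzero
```

The residual is nonsmooth precisely because of the max in the projection, which is why it fits the framework of (1) and Method 2.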

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (11101231, 70971070), a Project of Shandong Province Higher Educational Science and Technology Program (J10LA05), and the International Cooperation Program for Excellent Lecturers of 2011 of the Shandong Provincial Education Department.

#### References

1. L. Qi and P. Tseng, “On almost smooth functions and piecewise smooth functions,” *Nonlinear Analysis*, vol. 67, no. 3, pp. 773–794, 2007.
2. F. Facchinei and J.-S. Pang, *Finite-Dimensional Variational Inequalities and Complementarity Problems*, vol. 1, Springer, New York, NY, USA, 2003.
3. D. Sun and J. Han, “Newton and quasi-Newton methods for a class of nonsmooth equations and related problems,” *SIAM Journal on Optimization*, vol. 7, no. 2, pp. 463–480, 1997.
4. Y. Gao, “Newton methods for solving two classes of nonsmooth equations,” *Applications of Mathematics*, vol. 46, no. 3, pp. 215–229, 2001.
5. S.-Q. Du and Y. Gao, “A parametrized Newton method for nonsmooth equations with finitely many maximum functions,” *Applications of Mathematics*, vol. 54, no. 5, pp. 381–390, 2009.
6. N. Yamashita and M. Fukushima, “Modified Newton methods for solving a semismooth reformulation of monotone complementarity problems,” *Mathematical Programming B*, vol. 76, no. 3, pp. 469–491, 1997.
7. L. Q. Qi and J. Sun, “A nonsmooth version of Newton's method,” *Mathematical Programming A*, vol. 58, no. 3, pp. 353–367, 1993.
8. Y. Elfoutayeni and M. Khaladi, “Using vector divisions in solving the linear complementarity problem,” *Journal of Computational and Applied Mathematics*, vol. 236, no. 7, pp. 1919–1925, 2012.
9. X. Chen and W. Zhou, “Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization,” *SIAM Journal on Imaging Sciences*, vol. 3, no. 4, pp. 765–790, 2010.
10. X. Chen, “Smoothing methods for nonsmooth, nonconvex minimization,” *Mathematical Programming B*, vol. 134, no. 1, pp. 71–99, 2012.
11. M. C. Ferris, O. L. Mangasarian, and J. S. Pang, *Complementarity: Applications, Algorithms and Extensions*, Kluwer Academic, Dordrecht, The Netherlands, 2001.
12. Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” *SIAM Journal on Optimization*, vol. 10, no. 1, pp. 177–182, 1999.
13. Z. Dai and F. Wen, “Global convergence of a modified Hestenes-Stiefel nonlinear conjugate gradient method with Armijo line search,” *Numerical Algorithms*, vol. 59, no. 1, pp. 79–93, 2012.
14. C.-Y. Wang, Y.-Y. Chen, and S.-Q. Du, “Further insight into the Shamanskii modification of Newton method,” *Applied Mathematics and Computation*, vol. 180, no. 1, pp. 46–52, 2006.
15. Y. Y. Chen and S. Q. Du, “Nonlinear conjugate gradient methods with Wolfe type line search,” *Abstract and Applied Analysis*, vol. 2013, Article ID 742815, 5 pages, 2013.
16. Y. Xiao, H. Song, and Z. Wang, “A modified conjugate gradient algorithm with cyclic Barzilai-Borwein steplength for unconstrained optimization,” *Journal of Computational and Applied Mathematics*, vol. 236, no. 13, pp. 3101–3110, 2012.
17. S. Zhou and Z. Zou, “A relaxation scheme for Hamilton-Jacobi-Bellman equations,” *Applied Mathematics and Computation*, vol. 186, no. 1, pp. 806–813, 2007.
18. X. Chen, L. Qi, and D. Sun, “Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities,” *Mathematics of Computation*, vol. 67, no. 222, pp. 519–540, 1998.
19. M. C. Ferris and J. S. Pang, “Engineering and economic applications of complementarity problems,” *SIAM Review*, vol. 39, no. 4, pp. 669–713, 1997.