Abstract

The nonlinear conjugate gradient method is of particular importance for solving unconstrained optimization problems. Nonsmooth equations with finitely many maximum functions form a very useful class of nonsmooth equations, arising in the study of complementarity problems, constrained nonlinear programming problems, and many problems in engineering and mechanics. Smoothing methods for solving nonsmooth equations, complementarity problems, and stochastic complementarity problems have been studied for decades. In this paper, we present a new smoothing nonlinear conjugate gradient method for nonsmooth equations with finitely many maximum functions. The new method guarantees that any accumulation point of the iterative sequence it generates is a Clarke stationary point of the merit function for nonsmooth equations with finitely many maximum functions.

1. Introduction

In this paper, we consider the following nonsmooth equations with finitely many maximum functions:
\[
\max_{j\in J_i} f_{ij}(x)=0,\qquad i=1,2,\ldots,n, \tag{1}
\]
where \(f_{ij}:\mathbb{R}^n\to\mathbb{R}\), for \(i=1,\ldots,n\) and \(j\in J_i\), are continuously differentiable functions and \(J_1,\ldots,J_n\) are finite index sets. Denote
\[
F_i(x)=\max_{j\in J_i} f_{ij}(x),\qquad i=1,\ldots,n; \tag{2}
\]
then (1) can be reformulated as
\[
F(x)=0, \tag{3}
\]
where \(F(x)=(F_1(x),\ldots,F_n(x))^{T}\) and \(F:\mathbb{R}^n\to\mathbb{R}^n\) is a system of nonsmooth equations. Nonsmooth equations have been studied for decades; they arise in the study of optimal control, variational inequality and complementarity problems, equilibrium problems, and engineering mechanics [1-3], because many practical problems, such as stochastic complementarity problems, variational inequality problems, KKT systems of constrained nonlinear programming problems, and many equilibrium problems, can be reformulated as (1). In the past few years, there has been a growing interest in the study of (1) (see, e.g., [4, 5]). Due to their simplicity and global convergence properties, iterative methods for solving nonsmooth equations, such as the nonsmooth Levenberg-Marquardt method, the generalized Newton method, and smoothing methods, have been widely studied [4-11].

In this paper, we give a new smoothing nonlinear conjugate gradient method for (1). In the following section, we recall some definitions and some background on nonlinear conjugate gradient methods. We also give the new smoothing nonlinear conjugate gradient method for (1), which guarantees that any accumulation point of the iterative sequence is a Clarke stationary point of the merit function for (1). In the last section, some discussions are given.

Notation. In the following, a quantity with a subscript \(k\) denotes that quantity evaluated at \(x_k\), the norm \(\|\cdot\|\) is the Euclidean 2-norm, and \(g_k=\nabla f(x_k)\).

2. Preliminaries and New Method

In this section, we first give some definitions and some background on the nonlinear conjugate gradient method. We then propose the new methods for (1) and give the convergence analysis.

In the following, we give two definitions, which will be used in this paper.

Definition 1. Let \(f:\mathbb{R}^n\to\mathbb{R}\) be a locally Lipschitz function. Then \(f\) is almost everywhere differentiable. Denote by \(D_f\) the set of points at which \(f\) is differentiable; then the generalized gradient of \(f\) at \(x\) in the sense of Clarke is
\[
\partial f(x)=\operatorname{conv}\Bigl\{\lim_{x_i\to x,\;x_i\in D_f}\nabla f(x_i)\Bigr\}, \tag{4}
\]
where \(\operatorname{conv}\) denotes the convex hull.

Definition 2. Let \(f:\mathbb{R}^n\to\mathbb{R}\) be a locally Lipschitz continuous function. We call \(\tilde f:\mathbb{R}^n\times\mathbb{R}_{+}\to\mathbb{R}\) a smoothing function of \(f\) if \(\tilde f(\cdot,\mu)\) is continuously differentiable for any fixed \(\mu>0\) and
\[
\lim_{z\to x,\;\mu\downarrow 0}\tilde f(z,\mu)=f(x)\qquad\text{for any }x\in\mathbb{R}^n. \tag{5}
\]
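As an illustration (and not necessarily the smoothing function used in [9, 10]), a finite maximum \(F_i(x)=\max_{j\in J_i}f_{ij}(x)\) admits the exponential (aggregate) smoothing
\[
\tilde F_i(x,\mu)=\mu\ln\sum_{j\in J_i}\exp\!\left(\frac{f_{ij}(x)}{\mu}\right),\qquad
0\le\tilde F_i(x,\mu)-F_i(x)\le\mu\ln|J_i|,
\]
which is continuously differentiable in \(x\) for every fixed \(\mu>0\) and, by the bound above, satisfies the limit condition (5) as \(\mu\downarrow 0\).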

In the following, we give the new method for (1). In order to describe the method clearly, we divide this section into two parts. In Case 1, we give the new nonlinear conjugate gradient method for a smooth objective function. In Case 2, we give the new smoothing nonlinear conjugate gradient method for a nonsmooth objective function.

Denote
\[
f(x)=\tfrac{1}{2}\|F(x)\|^{2}, \tag{6}
\]
where \(F\) is defined in (3). Then (1) is equivalent to the following unconstrained minimization problem with zero optimal value:
\[
\min_{x\in\mathbb{R}^n} f(x). \tag{7}
\]
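To make the reformulation (6)-(7) concrete, the following minimal Python sketch evaluates \(F\) in (3) and the merit function \(f\) in (6) for a toy instance of (1); the component functions \(f_{ij}\) and index sets below are illustrative and are not taken from this paper.

```python
# Toy instance of (1): two max-type equations in two unknowns,
# with J_1 = {1, 2} and J_2 = {1, 2, 3}.
import numpy as np

f_components = [
    [lambda x: x[0] ** 2 - 1.0, lambda x: x[0] + x[1] - 2.0],
    [lambda x: x[1] - 1.0, lambda x: np.sin(x[1]) - x[0], lambda x: -x[0]],
]

def F(x):
    """F_i(x) = max_{j in J_i} f_ij(x), as in (2)-(3)."""
    return np.array([max(fij(x) for fij in Ji) for Ji in f_components])

def merit(x):
    """f(x) = 0.5 * ||F(x)||^2, the merit function (6)."""
    Fx = F(x)
    return 0.5 * Fx.dot(Fx)

x = np.array([1.0, 1.0])          # solves this toy instance of (1)
print(F(x), merit(x))             # the merit value is zero exactly at a solution
```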

Case 1. In this section, we assume that \(f\) is a continuously differentiable function. Then (7) is a standard unconstrained optimization problem. There are many methods for solving unconstrained optimization problems, such as the Newton method, nonlinear conjugate gradient methods, and quasi-Newton methods [12-16]. Here, based on [13, 14], we give a new nonlinear conjugate gradient method to solve (7). The iteration for solving (7) is
\[
x_{k+1}=x_k+\alpha_k d_k, \tag{8}
\]
where \(d_k\) is the search direction and \(\alpha_k\) is a step size; in this paper, we use the Wolfe-type line search [14]: compute \(\alpha_k>0\) such that
\[
f(x_k+\alpha_k d_k)\le f(x_k)+\delta\alpha_k g_k^{T}d_k,\qquad
g(x_k+\alpha_k d_k)^{T}d_k\ge\sigma g_k^{T}d_k, \tag{9}
\]
where \(0<\delta<\sigma<1\) and \(g_k=\nabla f(x_k)\) is the gradient of \(f\) at \(x_k\). The direction \(d_k\) is defined by
\[
d_k=\begin{cases}-g_k, & k=0,\\ -g_k+\beta_k d_{k-1}-\theta_k y_{k-1}, & k\ge 1,\end{cases} \tag{10}
\]
where \(y_{k-1}=g_k-g_{k-1}\) and the parameters \(\beta_k\) and \(\theta_k\) are chosen, following [13, 14], so that the sufficient descent property of Lemma 4 below holds.

Now we give the new method for (7) as follows.

Method 1. Consider the following steps.
Step 0. Given \(x_0\in\mathbb{R}^n\), \(\varepsilon\ge 0\), and \(k:=0\), set \(d_0=-g_0\); if \(\|g_0\|\le\varepsilon\), then stop.
Step 1. Find \(\alpha_k\) satisfying (9); \(x_{k+1}\) is given by (8).
Step 2. Compute \(d_{k+1}\) by (10). Set \(k:=k+1\) and go to Step 1.
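The following minimal Python sketch implements Method 1, using the modified Polak-Ribière-Polyak choice of \(\beta_k\) and \(\theta_k\) recalled after Lemma 4 below as one concrete instance of (10); scipy's line_search enforces the strong Wolfe conditions, which imply the Wolfe-type conditions (9). The quadratic test function is illustrative only.

```python
# Minimal sketch of Method 1 for the smooth case (7).
import numpy as np
from scipy.optimize import line_search

def cg_method1(f, grad, x0, tol=1e-8, max_iter=500, delta=1e-4, sigma=0.4):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # Step 0: d_0 = -g_0
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:         # stopping test
            break
        # Step 1: Wolfe-type line search (9); strong Wolfe implies (9).
        alpha = line_search(f, grad, x, d, gfk=g, c1=delta, c2=sigma)[0]
        if alpha is None:                    # fall back to a small fixed step
            alpha = 1e-4
        x_new = x + alpha * d                # iterate update (8)
        g_new = grad(x_new)
        # Step 2: three-term direction (10) with modified PRP parameters
        y = g_new - g
        beta = g_new.dot(y) / g.dot(g)
        theta = g_new.dot(d) / g.dot(g)
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    A = np.diag([1.0, 4.0, 9.0])             # illustrative strictly convex quadratic
    f = lambda x: 0.5 * x.dot(A).dot(x)
    grad = lambda x: A.dot(x)
    print(cg_method1(f, grad, np.array([1.0, 1.0, 1.0])))
```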

We also need the following assumptions.

Assumption 3. Consider the following:
(i) \(f\) is level bounded; that is, the level set \(\Omega=\{x\in\mathbb{R}^n: f(x)\le f(x_0)\}\) is bounded;
(ii) in a neighborhood \(N\) of \(\Omega\), the gradient of \(f\) is Lipschitz continuous; that is, there exists \(L>0\) such that
\[
\|g(x)-g(y)\|\le L\|x-y\|\qquad\text{for all }x,y\in N, \tag{11}
\]
where \(g=\nabla f\) and \(N\) is a neighborhood of \(\Omega\).

In the following, we give the global convergence analysis of Method 1. First, we give some lemmas.

Lemma 4. Let \(d_k\) be generated by Method 1. Then
\[
g_k^{T}d_k=-\|g_k\|^{2},
\]
where \(g_k=\nabla f(x_k)\).

By [13, Theorem  2.1], we have Lemma 4.
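One well-known choice of the parameters in (10) with this property is the modified Polak-Ribière-Polyak rule
\[
\beta_k=\frac{g_k^{T}y_{k-1}}{\|g_{k-1}\|^{2}},\qquad
\theta_k=\frac{g_k^{T}d_{k-1}}{\|g_{k-1}\|^{2}},\qquad
y_{k-1}=g_k-g_{k-1},
\]
which may be the choice intended in [13, 14]. With it, the identity of Lemma 4 follows by direct computation, independently of the line search: for \(k\ge 1\),
\[
g_k^{T}d_k=-\|g_k\|^{2}+\beta_k\,g_k^{T}d_{k-1}-\theta_k\,g_k^{T}y_{k-1}
=-\|g_k\|^{2}+\frac{(g_k^{T}y_{k-1})(g_k^{T}d_{k-1})-(g_k^{T}d_{k-1})(g_k^{T}y_{k-1})}{\|g_{k-1}\|^{2}}
=-\|g_k\|^{2},
\]
while for \(k=0\) it holds trivially since \(d_0=-g_0\).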

Lemma 5 (see [14]). Suppose that Assumption 3 holds and that \(\alpha_k\) is computed by (9). Then
\[
\sum_{k\ge 0}\frac{(g_k^{T}d_k)^{2}}{\|d_k\|^{2}}<+\infty.
\]

By Lemmas 4 and 5, we have Lemma 6.

Lemma 6. Suppose that Assumption 3 holds and that \(\alpha_k\) is determined by (9). Then
\[
\sum_{k\ge 0}\frac{\|g_k\|^{4}}{\|d_k\|^{2}}<+\infty.
\]

Now, we give the global convergence theorem for Method 1.

Theorem 7. Suppose that Assumption 3 holds and that \(\{x_k\}\) is generated by Method 1. Then
\[
\liminf_{k\to\infty}\|g_k\|=0.
\]

Proof. Suppose, by contradiction, that there exists \(\varepsilon>0\) such that \(\|g_k\|\ge\varepsilon\) holds for all \(k\). By \(\|g_k\|\ge\varepsilon\), (10), and (11), the sequence \(\{\|d_k\|\}\) is bounded above. Denoting \(M:=\sup_{k}\|d_k\|<+\infty\), we get \(\|d_k\|\le M\) for all \(k\). Then we have
\[
\sum_{k\ge 0}\frac{\|g_k\|^{4}}{\|d_k\|^{2}}\ge\sum_{k\ge 0}\frac{\varepsilon^{4}}{M^{2}}=+\infty,
\]
which contradicts Lemma 6. So we get the theorem.

Case 2. In this section, we assume that \(f\) is a nonsmooth function; then (7) is a nonsmooth unconstrained optimization problem. Denote by \(\tilde f(x,\mu)\) a smoothing function of \(f\), with smoothing parameter \(\mu>0\), and write \(\bar g_k=\nabla_x\tilde f(x_k,\mu_k)\). Then \(\alpha_k\) is computed by the Wolfe-type line search
\[
\tilde f(x_k+\alpha_k d_k,\mu_k)\le\tilde f(x_k,\mu_k)+\delta\alpha_k\bar g_k^{T}d_k,\qquad
\nabla_x\tilde f(x_k+\alpha_k d_k,\mu_k)^{T}d_k\ge\sigma\bar g_k^{T}d_k, \tag{21}
\]
where \(0<\delta<\sigma<1\). The direction \(d_k\) is defined by
\[
d_k=\begin{cases}-\bar g_k, & k=0,\\ -\bar g_k+\beta_k d_{k-1}-\theta_k\bar y_{k-1}, & k\ge 1,\end{cases} \tag{22}
\]
where \(\bar y_{k-1}=\bar g_k-\bar g_{k-1}\) and \(\beta_k\), \(\theta_k\) are chosen as in (10) with \(g\) replaced by \(\bar g\).
We give the following smoothing nonlinear conjugate gradient method.

Method 2. Given \(x_0\in\mathbb{R}^n\), \(\mu_0>0\), \(\gamma>0\), \(\gamma_1\in(0,1)\), and \(k:=0\).
Step 1. Find \(\alpha_k\) satisfying (21); \(x_{k+1}\) is given by (8).
Step 2. If \(\|\nabla_x\tilde f(x_{k+1},\mu_k)\|\ge\gamma\mu_k\), then set \(\mu_{k+1}=\mu_k\); otherwise, let \(\mu_{k+1}=\gamma_1\mu_k\).
Step 3. Compute \(d_{k+1}\) by (22). Set \(k:=k+1\) and go to Step 1.
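The following minimal Python sketch implements Method 2 under the same assumed choice of \(\beta_k\) and \(\theta_k\) as the sketch of Method 1, with the update of \(\mu_k\) as in Step 2. The one-dimensional test instance uses the exponential smoothing of a finite maximum illustrated after Definition 2; re-evaluating the gradient after \(\mu\) is reduced is a design choice of this sketch rather than part of the stated method.

```python
# Minimal sketch of Method 2 for the nonsmooth case.
import numpy as np
from scipy.optimize import line_search
from scipy.special import logsumexp, softmax

def smoothing_cg_method2(f_smooth, grad_smooth, x0, mu0=1.0, gamma=1.0,
                         gamma1=0.5, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    mu = mu0
    g = grad_smooth(x, mu)
    d = -g
    for k in range(max_iter):
        if mu <= tol or np.linalg.norm(g) == 0.0:
            break
        fmu = lambda z: f_smooth(z, mu)          # freeze mu for the line search (21)
        gmu = lambda z: grad_smooth(z, mu)
        alpha = line_search(fmu, gmu, x, d, gfk=g, c1=1e-4, c2=0.4)[0]
        if alpha is None:
            alpha = 1e-4
        x = x + alpha * d                        # iterate update (8)
        g_new = grad_smooth(x, mu)
        # Step 2: reduce mu only once the smoothed gradient is small relative to mu
        if np.linalg.norm(g_new) < gamma * mu:
            mu = gamma1 * mu
            g_new = grad_smooth(x, mu)           # re-evaluate at the new smoothing level
        # Step 3: three-term direction (22)
        y = g_new - g
        beta = g_new.dot(y) / g.dot(g)
        theta = g_new.dot(d) / g.dot(g)
        d = -g_new + beta * d - theta * y
        g = g_new
    return x, mu

if __name__ == "__main__":
    # Illustrative 1-D instance: F(x) = max(x, 1 - x); its merit function
    # 0.5 * F(x)^2 is minimized at x = 0.5 (F has no root here).
    def f_smooth(x, mu):
        t = np.array([x[0], 1.0 - x[0]])
        return 0.5 * (mu * logsumexp(t / mu)) ** 2
    def grad_smooth(x, mu):
        t = np.array([x[0], 1.0 - x[0]])
        Ft = mu * logsumexp(t / mu)
        w = softmax(t / mu)
        return np.array([Ft * (w[0] - w[1])])
    print(smoothing_cg_method2(f_smooth, grad_smooth, np.array([3.0])))
```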

Theorem 8. Suppose that \(\tilde f(\cdot,\mu)\) satisfies Assumption 3 for any fixed \(\mu>0\). Then the sequence \(\{x_k\}\) generated by Method 2 satisfies
\[
\lim_{k\to\infty}\mu_k=0\qquad\text{and}\qquad\liminf_{k\to\infty}\|\nabla_x\tilde f(x_k,\mu_k)\|=0.
\]

Proof. Define \(K=\{k:\mu_{k+1}=\gamma_1\mu_k\}\). If the set \(K\) is finite, then there exist a fixed \(\bar\mu>0\) and an index \(\bar k\) such that \(\mu_k=\bar\mu\) for all \(k\ge\bar k\). Denoting \(\tilde f(x):=\tilde f(x,\bar\mu)\), because \(\tilde f(\cdot,\bar\mu)\) is a smooth function, the previous method reduces to Method 1 applied to \(\tilde f(\cdot,\bar\mu)\). By Theorem 7, we get
\[
\liminf_{k\to\infty}\|\nabla_x\tilde f(x_k,\bar\mu)\|=0.
\]
So we know that \(\|\nabla_x\tilde f(x_{k+1},\mu_k)\|\ge\gamma\bar\mu\) for all \(k\ge\bar k\) is impossible. Then the set \(K\) must be infinite, and therefore
\[
\lim_{k\to\infty}\mu_k=0.
\]
By the infinity of \(K\), we can write \(K=\{k_0,k_1,k_2,\ldots\}\). Then we have
\[
\|\nabla_x\tilde f(x_{k_i+1},\mu_{k_i})\|<\gamma\mu_{k_i}\qquad\text{for every }i.
\]
By \(\mu_{k_i}\to 0\), we have
\[
\liminf_{k\to\infty}\|\nabla_x\tilde f(x_k,\mu_k)\|=0.
\]
So we complete the proof.

Remark 9. From the result of Theorem 8, we know that, for some kinds of smoothing functions [9, 10], any accumulation point \(x^{*}\) of \(\{x_k\}\) generated by Method 2 is a Clarke stationary point of \(f\); that is, \(0\in\partial f(x^{*})\).

3. Some Discussions

In this paper, we give a new smoothing nonlinear conjugate gradient method for (1). The new method guarantees that any accumulation point of the iterative sequence is a Clarke stationary point of the merit function for (1).

Discussion 1. In Methods 1 and 2, we can use any line search that is well defined, provided the search directions are descent directions.

Discussion 2. There are some kinds of smoothing functions which satisfy Assumption 3 for any fixed \(\mu>0\) (see [9, 10]). When the smoothing function of \(f\) has the gradient consistency property, any accumulation point of the sequence generated by Method 2 is a Clarke stationary point of \(f\). Under some assumptions, we can also use the methods in [15] to solve nonsmooth equations with finitely many maximum functions (1).

Discussion 3. The new method can also be used for solving the nonlinear complementarity problem (NCP), which is to find \(x\in\mathbb{R}^n\) such that \(x\ge 0\), \(F(x)\ge 0\), and \(x^{T}F(x)=0\). By the Fischer-Burmeister (F-B) function \(\phi(a,b)=\sqrt{a^{2}+b^{2}}-a-b\), we know that the NCP is equivalent to the system \(\Phi(x)=0\) with \(\Phi_i(x)=\phi(x_i,F_i(x))\), where \(F\) is a continuously differentiable function; although \(\Phi\) is nonsmooth, the merit function \(\frac12\|\Phi(x)\|^{2}\) is continuously differentiable, so we can use the smooth version, Method 1, to solve it. Method 2 can also be used to solve the vertical complementarity problem
\[
\min\{F^{1}(x),\ldots,F^{m}(x)\}=0\quad\text{(componentwise)},
\]
where \(F^{1},\ldots,F^{m}:\mathbb{R}^n\to\mathbb{R}^n\) are continuously differentiable functions, since the componentwise minimum can be written in the form (1).
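The following minimal Python sketch illustrates the F-B reformulation mentioned above on a toy linear complementarity problem; the data M and q are illustrative.

```python
# Fischer-Burmeister reformulation of a toy NCP with F(x) = M x + q.
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function: phi(a, b) = sqrt(a^2 + b^2) - a - b."""
    return np.sqrt(a ** 2 + b ** 2) - a - b

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M.dot(x) + q

def Phi(x):
    """Phi_i(x) = phi(x_i, F_i(x)); the NCP is equivalent to Phi(x) = 0."""
    return fb(x, F(x))

def merit(x):
    """0.5 * ||Phi(x)||^2 is continuously differentiable even though Phi is not,
    so the smooth Method 1 can in principle be applied to it."""
    p = Phi(x)
    return 0.5 * p.dot(p)

x_star = np.array([1.0 / 3.0, 1.0 / 3.0])   # solves x >= 0, F(x) >= 0, x^T F(x) = 0
print(Phi(x_star), merit(x_star))           # both (numerically) zero
```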

Discussion 4. The new method can also be used for solving Hamilton-Jacobi-Bellman (HJB) equations (see [17]). The HJB equation is to find \(u\) such that
\[
\max_{1\le i\le m}\bigl(L^{i}u-f^{i}\bigr)=0\quad\text{in }\Omega,
\]
where \(\Omega\) is a bounded domain in \(\mathbb{R}^{d}\), and \(L^{i}\) \((1\le i\le m)\) are elliptic operators of second order. HJB equations arise in stochastic control problems and are often used to solve finance and control problems. By the finite element method, we can obtain the following discrete HJB equation: find \(u\in\mathbb{R}^{n}\) such that
\[
\max_{1\le i\le m}\bigl(A^{i}u-f^{i}\bigr)=0,
\]
where \(A^{i}\in\mathbb{R}^{n\times n}\), \(f^{i}\in\mathbb{R}^{n}\), \(1\le i\le m\), and the maximum is taken componentwise.
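The following minimal Python sketch shows that the discrete HJB equation has exactly the form (1): each component is a maximum of finitely many smooth (here affine) functions. The matrices and vectors below are illustrative and not taken from [17].

```python
# Discrete HJB residual: componentwise max over finitely many affine maps.
import numpy as np

A = [np.array([[2.0, -1.0], [-1.0, 2.0]]),      # A^1
     np.array([[3.0, 0.0], [0.0, 3.0]])]        # A^2
f = [np.array([1.0, 1.0]), np.array([1.5, 0.5])]

def hjb_residual(u):
    """max_{1<=i<=m} (A^i u - f^i); a solution u makes this residual zero."""
    return np.maximum.reduce([Ai.dot(u) - fi for Ai, fi in zip(A, f)])

u = np.array([0.5, 1.0 / 6.0])   # solves this toy instance, so the residual vanishes
print(hjb_residual(u))           # the merit function 0.5*||residual||^2 fits (6)-(7)
```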

Discussion 5. We can also consider using the new method to solve the general variational inequality problem (see [18]). The general variational inequality problem is to compute \(x^{*}\in\mathbb{R}^{n}\) such that
\[
g(x^{*})\in K,\qquad (v-g(x^{*}))^{T}F(x^{*})\ge 0\quad\text{for all }v\in K, \tag{34}
\]
where \(F,g:\mathbb{R}^{n}\to\mathbb{R}^{n}\) are two continuously differentiable functions and \(K\subseteq\mathbb{R}^{n}\) is a closed convex set. Problem (34) is denoted by \(\mathrm{GVI}(F,g,K)\) in [18]. \(\mathrm{GVI}(F,g,K)\) is a generalization of complementarity problems, nonlinear variational inequality problems, and general nonlinear complementarity problems. The variational inequality problem is to compute \(x^{*}\in K\) such that
\[
(v-x^{*})^{T}F(x^{*})\ge 0\quad\text{for all }v\in K,
\]
which corresponds to \(g(x)=x\). The general nonlinear complementarity problem is to compute \(x^{*}\in\mathbb{R}^{n}\) such that
\[
g(x^{*})\ge 0,\qquad F(x^{*})\ge 0,\qquad g(x^{*})^{T}F(x^{*})=0.
\]
We can rewrite (34) as the following nonsmooth equation:
\[
g(x)=P_{K}\bigl[g(x)-F(x)\bigr], \tag{37}
\]
where \(P_{K}\) is the projection operator onto \(K\) under the Euclidean norm. The general nonlinear complementarity problems are widely used in solving engineering and economic problems; for example, under some conditions, the \(n\)-person noncooperative game problem can be reformulated as (37) (see [19]). Therefore, how to use our new method to solve the general nonlinear complementarity problems and the \(n\)-person noncooperative game problem would be an interesting topic and deserves further investigation.
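The following minimal Python sketch evaluates the projection-equation residual corresponding to (37) for the complementarity case \(K=\mathbb{R}^{n}_{+}\), where the Euclidean projection is the componentwise positive part; the maps \(F\) and \(g\) below are illustrative.

```python
# Projection-equation residual for the GVI with K = R^n_+.
import numpy as np

def proj_nonneg(z):
    """Euclidean projection onto K = R^n_+."""
    return np.maximum(z, 0.0)

g = lambda x: x ** 3                    # illustrative g
F = lambda x: x - np.array([1.0, 2.0])  # illustrative F

def residual(x):
    """r(x) = g(x) - P_K[g(x) - F(x)]; x solves (34) iff r(x) = 0, cf. (37)."""
    return g(x) - proj_nonneg(g(x) - F(x))

print(residual(np.array([1.0, 2.0])))   # F vanishes here, so the residual is zero
```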

Acknowledgments

This work was supported by National Science Foundation of China (11101231, 70971070), a Project of Shandong Province Higher Educational Science and Technology Program (J10LA05), and International Cooperation Program for Excellent Lecturers of 2011 by Shandong Provincial Education Department.