Abstract

Based on the scaled conjugate gradient (SCALCG) method presented by Andrei (2007) and the projection method presented by Solodov and Svaiter, we propose a SCALCG method for solving monotone nonlinear equations with convex constraints. The SCALCG method can be regarded as a combination of the conjugate gradient method and a Newton-type method for solving unconstrained optimization problems, so it enjoys the advantages of both methods and is suitable for large-scale problems. It can therefore be applied to large-scale monotone nonlinear equations with convex constraints. Under reasonable conditions, we prove its global convergence. We also report numerical experiments showing that the proposed method is efficient and promising.

1. Introduction

In this paper, we consider the following convex constrained monotone equations: find $x \in \Omega$ such that
$$F(x) = 0, \tag{1}$$
where $F: \mathbb{R}^n \to \mathbb{R}^n$ is a continuous and monotone function and the feasible region $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set. Monotone means that
$$\big(F(x) - F(y)\big)^T (x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^n. \tag{2}$$

Algorithms for solving monotone nonlinear equations are closely related to algorithms for solving optimization problems. It is known that a differentiable function $f$ is strictly convex if and only if its gradient $\nabla f$ is strictly monotone, which means
$$\big(\nabla f(x) - \nabla f(y)\big)^T (x - y) > 0, \quad \forall x \ne y,$$
and the definition (2) of monotone nonlinear equations is the nonstrict form of this condition. A strictly convex function has a unique minimum point, and this minimum point is the stationary point of the function, namely, the point at which the gradient vector satisfies $\nabla f(x) = 0$. A monotone vector function that arises as a gradient field can thus be seen as the gradient of some convex function: if there exists a strictly convex function $f$ satisfying $\nabla f(x) = F(x)$, then solving $\min_{x} f(x)$ is equivalent to solving $F(x) = 0$.
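To make this connection concrete, the following minimal Python sketch (our illustration, not part of the original analysis) takes $F(x) = \exp(x)$, the gradient of the strictly convex function $f(x) = \sum_i e^{x_i}$, and spot-checks the monotonicity condition (2) on random pairs:

```python
import numpy as np

# Numeric spot-check of (2) for F = grad f with f(x) = sum(exp(x_i)),
# a strictly convex function; its gradient F(x) = exp(x) is monotone.
rng = np.random.default_rng(0)
F = np.exp
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    assert (F(x) - F(y)) @ (x - y) >= 0.0
print("monotonicity check passed")
```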

Nonlinear monotone equations arise in a wide variety of applications, such as subproblems in the generalized proximal algorithms with Bregman distances [1]. In power engineering, the operations of a power system are described by a system of nonlinear equations, called the power flow equations, which are constrained by some operating constraints.

Unconstrained nonlinear monotone equations have received much attention [2-5]. Solodov and Svaiter [2] proposed a Newton-type method; a good property of the method is that the whole sequence of iterates converges to a solution of the system without any regularity assumptions. Under some weaker conditions, Zhou and Toh [4] showed that Solodov and Svaiter's method converges superlinearly. Zhou and Li [5, 6] extended Solodov and Svaiter's projection method to the BFGS method and the limited memory BFGS method. Zhang and Zhou [3] combined the spectral gradient method with the projection method of Solodov and Svaiter and proposed a spectral gradient projection method. Wang et al. [7] extended Solodov and Svaiter's projection method to solve monotone equations with convex constraints. Yu et al. [8] proposed a spectral gradient projection algorithm for monotone nonlinear equations with convex constraints by combining a modified spectral gradient method with the projection method; a good property of that method is that no linear system has to be solved at any iteration. Xiao and Zhu [9] extended CG_DESCENT to solve large-scale nonlinear convex constrained monotone equations in compressive sensing by combining it with the projection method of Solodov and Svaiter; at each iteration, the method needs neither Jacobian information nor the storage of any matrix.

This paper is organized as follows. In Section 2, we propose a SCALCG method for solving monotone nonlinear equations with convex constraints. Under reasonable conditions, we prove its global convergence in Section 3. In Section 4, we report numerical experiments showing that our method is efficient and promising.

2. The Method

In this section, we propose our method. At first, we briefly review the SCALCG method presented by Andrei [10] for the unconstrained optimization problem
$$\min_{x \in \mathbb{R}^n} f(x), \tag{3}$$
where $f: \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function and $g(x) = \nabla f(x)$ is its gradient at the point $x$.

The method of Andrei generates a sequence $\{x_k\}$ of approximations to the minimum of $f$ by
$$x_{k+1} = x_k + \alpha_k d_k, \tag{4}$$
$$d_{k+1} = -\theta_{k+1} g_{k+1} + \beta_k s_k, \qquad \beta_k = \frac{(\theta_{k+1} y_k - s_k)^T g_{k+1}}{y_k^T s_k}, \tag{5}$$
where $g_k = g(x_k)$, $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, $\alpha_k$ is a step length obtained by line search, and $\theta_{k+1} = s_k^T s_k / (y_k^T s_k)$ is a spectral scaling parameter.

Based on the SCALCG method, we now introduce our method for solving (1). Inspired by (5), we define the search direction $d_k$ as
$$d_k = \begin{cases} -F(x_k), & \text{if } k = 0, \\ -\theta_k F(x_k) + \beta_k s_{k-1}, & \text{if } k \ge 1, \end{cases} \tag{6}$$
where
$$\beta_k = \frac{(\theta_k y_{k-1} - s_{k-1})^T F(x_k)}{y_{k-1}^T s_{k-1}}, \qquad \theta_k = \frac{s_{k-1}^T s_{k-1}}{y_{k-1}^T s_{k-1}},$$
$$s_{k-1} = z_{k-1} - x_{k-1} = \alpha_{k-1} d_{k-1}, \qquad y_{k-1} = F(z_{k-1}) - F(x_{k-1}) + r s_{k-1} \quad (r > 0),$$
and $\alpha_{k-1}$ is a step length which will be defined later. The definition of $y_{k-1}$ is similar to the one in [9]; by the monotonicity of $F$, it guarantees $y_{k-1}^T s_{k-1} \ge r \|s_{k-1}\|^2 > 0$ whenever $s_{k-1} \ne 0$.
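The direction (6) can be formed with a handful of vector operations and no matrix storage. The following Python sketch (ours, using NumPy; the function name `direction` and the default `r=0.1` are illustrative choices, not from the paper) computes $d_k$ as reconstructed above:

```python
import numpy as np

def direction(Fx, Fx_prev, Fz_prev, s_prev, r=0.1):
    """Search direction (6). Inputs are 1-D NumPy arrays:
    Fx = F(x_k), Fx_prev = F(x_{k-1}), Fz_prev = F(z_{k-1}),
    s_prev = z_{k-1} - x_{k-1}; pass s_prev=None for k = 0."""
    if s_prev is None:                       # k = 0
        return -Fx
    y = Fz_prev - Fx_prev + r * s_prev       # y_{k-1}
    ys = y @ s_prev                          # >= r ||s||^2 > 0 by monotonicity
    theta = (s_prev @ s_prev) / ys           # spectral scaling theta_k
    beta = ((theta * y - s_prev) @ Fx) / ys  # beta_k
    return -theta * Fx + beta * s_prev
```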

Lemma 1. Let $d_k$ be generated by (6) with $r > L$, and suppose Condition A(1) of Section 3 holds. Then there exists a constant $c > 0$ such that for any $k \ge 0$ we have
$$F(x_k)^T d_k \le -c \|F(x_k)\|^2. \tag{7}$$

Proof. If $k = 0$, we have $F(x_0)^T d_0 = -\|F(x_0)\|^2$.
If $k \ge 1$, substituting the definition of $\beta_k$ into (6), we obtain
$$F(x_k)^T d_k = -\theta_k \|F(x_k)\|^2 + \frac{\theta_k \big(y_{k-1}^T F(x_k)\big)\big(s_{k-1}^T F(x_k)\big) - \big(s_{k-1}^T F(x_k)\big)^2}{y_{k-1}^T s_{k-1}},$$
where $s_{k-1}$ and $y_{k-1}$ are as in (6). Applying the inequality $ab \le \tfrac{1}{4}a^2 + b^2$ with $a = \theta_k\, y_{k-1}^T F(x_k)$ and $b = s_{k-1}^T F(x_k)$, we have
$$F(x_k)^T d_k \le -\theta_k \|F(x_k)\|^2 + \frac{\theta_k^2 \big(y_{k-1}^T F(x_k)\big)^2}{4\, y_{k-1}^T s_{k-1}}.$$
By the definition of $y_{k-1}$ and the monotonicity (2), the following inequality holds:
$$y_{k-1}^T s_{k-1} \ge r \|s_{k-1}\|^2.$$
So, using the Cauchy-Schwarz inequality $\big(y_{k-1}^T F(x_k)\big)^2 \le \|y_{k-1}\|^2 \|F(x_k)\|^2$ and the definition of $\theta_k$, we obtain
$$F(x_k)^T d_k \le -\theta_k \left(1 - \frac{\|s_{k-1}\|^2 \|y_{k-1}\|^2}{4 \big(y_{k-1}^T s_{k-1}\big)^2}\right) \|F(x_k)\|^2.$$
It can be seen that, under Condition A(1), $\|y_{k-1}\| \le (L + r)\|s_{k-1}\|$ and $\theta_k \ge 1/(L + r)$, so since $r > L$, (7) holds with
$$c = \frac{1}{L + r}\left(1 - \frac{(L + r)^2}{4 r^2}\right) \in (0, 1].$$
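The inequality $y_{k-1}^T s_{k-1} \ge r\|s_{k-1}\|^2$ is the workhorse of this proof, and it relies only on monotonicity. A quick numeric spot-check (our sketch; the toy monotone map $F(x) = e^x - 1$ and the value $r = 0.1$ are assumptions for illustration):

```python
import numpy as np

# Spot-check of y^T s >= r ||s||^2, which keeps theta_k well defined.
rng = np.random.default_rng(1)
F = lambda x: np.exp(x) - 1.0   # toy monotone map
r = 0.1
for _ in range(1000):
    x, z = rng.standard_normal(4), rng.standard_normal(4)
    s = z - x
    y = F(z) - F(x) + r * s     # (F(z) - F(x))^T s >= 0 by monotonicity
    assert y @ s >= r * (s @ s) - 1e-12
print("y^T s >= r ||s||^2 verified on random samples")
```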

The steps of our method are stated as follows.

Algorithm 2. Consider the following steps.
Step 0. Choose an initial point $x_0 \in \Omega$ and constants $\beta > 0$, $\sigma > 0$, $\rho \in (0, 1)$, $r > 0$. Set $k := 0$.
Step 1. Stop if $\|F(x_k)\| = 0$. Otherwise, compute $d_k$ by (6).
Step 2. Find the step length $\alpha_k = \max\{\beta \rho^i : i = 0, 1, 2, \ldots\}$ which satisfies
$$-F(x_k + \alpha_k d_k)^T d_k \ge \sigma \alpha_k \|d_k\|^2. \tag{8}$$
Let $z_k = x_k + \alpha_k d_k$.
Step 3. Compute
$$x_{k+1} = P_\Omega\big[x_k - \lambda_k F(z_k)\big], \tag{9}$$
where
$$\lambda_k = \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2}$$
and $P_\Omega[\cdot]$ denotes the projection onto $\Omega$, that is, $P_\Omega[x] = \arg\min\{\|y - x\| : y \in \Omega\}$.
Step 4. Let $k := k + 1$. Go to Step 1.
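For concreteness, the following Python sketch assembles Steps 0-4 into a single routine. It is our reconstruction of Algorithm 2, not the authors' code; the projection `proj`, the parameter defaults, and the line-search safeguard are illustrative choices.

```python
import numpy as np

def algorithm2(F, proj, x0, beta=1.0, sigma=1e-4, rho=0.5, r=0.1,
               eps=1e-5, max_iter=2000):
    """Projection-based SCALCG sketch. F: monotone map R^n -> R^n;
    proj: projection onto the closed convex set Omega."""
    x = proj(np.asarray(x0, dtype=float))
    Fx = F(x)
    s_prev = Fx_prev = Fz_prev = None
    for k in range(max_iter):
        if np.linalg.norm(Fx) <= eps:                 # Step 1: stopping test
            return x, k
        if s_prev is None:                            # Step 1: direction (6)
            d = -Fx
        else:
            y = Fz_prev - Fx_prev + r * s_prev
            ys = y @ s_prev
            theta = (s_prev @ s_prev) / ys
            d = -theta * Fx + (((theta * y - s_prev) @ Fx) / ys) * s_prev
        alpha = beta                                  # Step 2: line search (8)
        while True:
            z = x + alpha * d
            Fz = F(z)
            if -Fz @ d >= sigma * alpha * (d @ d):
                break
            alpha *= rho
            if alpha < 1e-16:                         # safeguard (ours)
                raise RuntimeError("line search failed")
        if np.linalg.norm(Fz) <= eps:
            return z, k + 1
        lam = (Fz @ (x - z)) / (Fz @ Fz)              # Step 3: projection (9)
        x_new = proj(x - lam * Fz)
        s_prev, Fx_prev, Fz_prev = z - x, Fx, Fz      # Step 4: update
        x, Fx = x_new, F(x_new)
    return x, max_iter
```

For instance, with `F = lambda x: np.exp(x) - 1.0` and `proj = lambda v: np.maximum(v, 0.0)` (projection onto the nonnegative orthant), the iterates should drive $\|F(x_k)\|$ below the tolerance, with the solution at the origin.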

3. Convergence Analysis

In this section, we establish the global convergence of Algorithm 2. For this purpose, we assume that $F$ satisfies the following conditions.

Condition A. Consider the following.
(1) The mapping $F$ is Lipschitz continuous; that is, there exists a constant $L > 0$ such that
$$\|F(x) - F(y)\| \le L \|x - y\|, \quad \forall x, y \in \mathbb{R}^n. \tag{10}$$
(2) The solution set of (1), denoted by $S$, is nonempty.
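Condition A(1) can be probed numerically for a given map. The sketch below (ours; the toy map $F(x) = e^x - 1$ and the sampling box $[-1,1]^3$ are assumptions for illustration) estimates $L$ by sampling difference quotients:

```python
import numpy as np

# Rough empirical probe of the Lipschitz constant in (10) on a box.
rng = np.random.default_rng(2)
F = lambda x: np.exp(x) - 1.0
L_est = 0.0
for _ in range(10000):
    x, y = rng.uniform(-1, 1, 3), rng.uniform(-1, 1, 3)
    L_est = max(L_est, np.linalg.norm(F(x) - F(y)) / np.linalg.norm(x - y))
print(f"empirical L on [-1,1]^3: {L_est:.3f}")  # true value is e = 2.718...
```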

Lemma 3. Algorithm 2 is well defined.

Proof. We only need to prove that Step 2 is well defined in Algorithm 2. Suppose, on the contrary, that the line search fails at some iteration $k$; then for all $i = 0, 1, 2, \ldots$,
$$-F(x_k + \beta \rho^i d_k)^T d_k < \sigma \beta \rho^i \|d_k\|^2.$$
Taking the limit as $i \to \infty$ on both sides and using the continuity of $F$, we have $-F(x_k)^T d_k \le 0$, which contradicts (7) since $F(x_k) \ne 0$. So Algorithm 2 is well defined.

Lemma 4. Suppose Condition A holds. Then the step length $\alpha_k$ satisfies
$$\alpha_k \ge \min\left\{\beta, \frac{\rho c \|F(x_k)\|^2}{(L + \sigma)\|d_k\|^2}\right\}. \tag{11}$$

Proof. If the algorithm stops at some iteration $k$, then $\|F(x_k)\| = 0$, so that $x_k$ is a solution of (1). From now on, we assume that $\|F(x_k)\| \ne 0$ for any $k$. It is easy to see from (7) that $d_k \ne 0$.
If $\alpha_k \ne \beta$, by the line search process, we know that $\alpha_k' = \alpha_k / \rho$ does not satisfy (8); that is,
$$-F(x_k + \alpha_k' d_k)^T d_k < \sigma \alpha_k' \|d_k\|^2.$$
From (7) and Condition A(1), we have
$$c\|F(x_k)\|^2 \le -F(x_k)^T d_k = \big(F(x_k + \alpha_k' d_k) - F(x_k)\big)^T d_k - F(x_k + \alpha_k' d_k)^T d_k \le L \alpha_k' \|d_k\|^2 + \sigma \alpha_k' \|d_k\|^2.$$
So we get
$$\alpha_k = \rho\, \alpha_k' \ge \frac{\rho c \|F(x_k)\|^2}{(L + \sigma)\|d_k\|^2},$$
which, together with the case $\alpha_k = \beta$, establishes (11).

Lemma 5. Suppose Condition A holds, let $\bar{x} \in S$, and let the sequence $\{x_k\}$ be generated by Algorithm 2. Then the sequence $\{x_k\}$ is bounded, and there exists a positive constant $M$ such that
$$\|F(x_k)\| \le M, \quad \forall k \ge 0. \tag{12}$$

Proof. From (2), (8), and $F(\bar{x}) = 0$, we have $\lambda_k \ge 0$ and
$$F(z_k)^T (x_k - \bar{x}) = F(z_k)^T (x_k - z_k) + F(z_k)^T (z_k - \bar{x}) \ge F(z_k)^T (x_k - z_k) \ge \sigma \alpha_k^2 \|d_k\|^2 \ge 0.$$
From the non-expansiveness of the projection operator and $\bar{x} = P_\Omega[\bar{x}]$, it holds that
$$\|x_{k+1} - \bar{x}\|^2 \le \|x_k - \lambda_k F(z_k) - \bar{x}\|^2 = \|x_k - \bar{x}\|^2 - 2\lambda_k F(z_k)^T (x_k - \bar{x}) + \lambda_k^2 \|F(z_k)\|^2.$$
It is easy to see that
$$\|x_{k+1} - \bar{x}\|^2 \le \|x_k - \bar{x}\|^2 - \frac{\big(F(z_k)^T (x_k - z_k)\big)^2}{\|F(z_k)\|^2} \le \|x_k - \bar{x}\|^2, \tag{13}$$
so $\|x_k - \bar{x}\| \le \|x_0 - \bar{x}\|$ for all $k$ and $\{x_k\}$ is bounded. Since $F$ is Lipschitz continuous, we get
$$\|F(x_k)\| = \|F(x_k) - F(\bar{x})\| \le L \|x_k - \bar{x}\| \le L \|x_0 - \bar{x}\|.$$
Let $M = L \|x_0 - \bar{x}\|$; then (12) is established.
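The non-expansiveness $\|P_\Omega[x] - P_\Omega[y]\| \le \|x - y\|$ used in this proof holds for the projection onto any nonempty closed convex set. A minimal numeric illustration with the box $\Omega = [0,1]^n$ (our choice for the example):

```python
import numpy as np

# Non-expansiveness of the projection onto the box [0,1]^n.
rng = np.random.default_rng(3)
P = lambda v: np.clip(v, 0.0, 1.0)   # projection onto [0,1]^n
for _ in range(1000):
    x, y = rng.standard_normal(6), rng.standard_normal(6)
    assert np.linalg.norm(P(x) - P(y)) <= np.linalg.norm(x - y) + 1e-12
print("non-expansiveness verified on random samples")
```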

Lemma 6. Suppose Condition A holds, and let the sequences $\{x_k\}$ and $\{z_k\}$ be generated by Algorithm 2. Then $-F(z_k)$ is a descent direction of the function $\varphi(x) = \frac{1}{2}\|x - \bar{x}\|^2$ at the point $x_k$, where $\bar{x} \in S$.

Proof. The gradient of the function $\varphi$ is $\nabla\varphi(x) = x - \bar{x}$.
From (2), it can be seen that
$$F(z_k)^T (z_k - \bar{x}) \ge F(\bar{x})^T (z_k - \bar{x}) = 0.$$
So, using (8), we obtain
$$-F(z_k)^T \nabla\varphi(x_k) = -F(z_k)^T (x_k - z_k) - F(z_k)^T (z_k - \bar{x}) \le -F(z_k)^T (x_k - z_k) \le -\sigma \alpha_k^2 \|d_k\|^2 < 0.$$

Lemma 7. Suppose Condition A holds, and let the sequences $\{x_k\}$ and $\{z_k\}$ be generated by Algorithm 2. Then we have the following:
(1) $\{d_k\}$ and $\{z_k\}$ are bounded.
(2) $\sum_{k=0}^{\infty} \big(F(z_k)^T (x_k - z_k)\big)^2 / \|F(z_k)\|^2 < \infty$. Particularly, we have
$$\lim_{k \to \infty} \alpha_k \|d_k\| = 0. \tag{14}$$
(3) $\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0$.

Proof. (1) From (12), the sequence $\{F(x_k)\}$ is bounded.
From (2), (10), and the definition of $y_{k-1}$ in (6), we get $y_{k-1}^T s_{k-1} \ge r \|s_{k-1}\|^2$ and $\|y_{k-1}\| \le (L + r)\|s_{k-1}\|$. So, the following inequalities hold:
$$\theta_k \le \frac{1}{r}, \qquad |\beta_k|\, \|s_{k-1}\| \le \frac{\big(\theta_k \|y_{k-1}\| + \|s_{k-1}\|\big) \|F(x_k)\|\, \|s_{k-1}\|}{r \|s_{k-1}\|^2} \le \frac{L + 2r}{r^2}\, \|F(x_k)\|.$$
That is,
$$\|d_k\| \le \theta_k \|F(x_k)\| + |\beta_k|\, \|s_{k-1}\| \le \frac{L + 3r}{r^2}\, \|F(x_k)\| \le \frac{(L + 3r)M}{r^2} =: M_1.$$
So, the sequence $\{d_k\}$ is bounded, and since $z_k = x_k + \alpha_k d_k$ with $\alpha_k \le \beta$, the sequence $\{z_k\}$ is bounded as well.

(2) From (13), we obtain
$$\sum_{k=0}^{\infty} \frac{\big(F(z_k)^T (x_k - z_k)\big)^2}{\|F(z_k)\|^2} \le \sum_{k=0}^{\infty} \big(\|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2\big) \le \|x_0 - \bar{x}\|^2 < \infty.$$

Since the function $F$ is continuous and the sequence $\{z_k\}$ is bounded, the sequence $\{F(z_k)\}$ is bounded; that is, there exists a positive constant $M_2$ such that $\|F(z_k)\| \le M_2$ for all $k$. Then, from (8), we get
$$\frac{\big(F(z_k)^T (x_k - z_k)\big)^2}{\|F(z_k)\|^2} \ge \frac{\sigma^2 \alpha_k^4 \|d_k\|^4}{M_2^2}.$$

So, we have $\sum_{k=0}^{\infty} \alpha_k^4 \|d_k\|^4 < \infty$. Particularly, we obtain (14).

(3) From the non-expansiveness of the projection operator, $x_k = P_\Omega[x_k]$, and the Cauchy-Schwarz inequality, it holds that
$$\|x_{k+1} - x_k\| = \big\|P_\Omega[x_k - \lambda_k F(z_k)] - P_\Omega[x_k]\big\| \le \lambda_k \|F(z_k)\| = \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|} \le \|x_k - z_k\| = \alpha_k \|d_k\|.$$
So, we obtain $\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0$ from (14).

Theorem 8. Suppose Condition A holds, and let the sequence $\{x_k\}$ be generated by Algorithm 2. Then, we have
$$\liminf_{k \to \infty} \|F(x_k)\| = 0. \tag{15}$$

Proof. If (15) does not hold, then there exists $\varepsilon > 0$ such that
$$\|F(x_k)\| \ge \varepsilon, \quad \forall k \ge 0. \tag{16}$$
By (7) and the Cauchy-Schwarz inequality, we have
$$c \|F(x_k)\|^2 \le -F(x_k)^T d_k \le \|F(x_k)\|\, \|d_k\|,$$
so that, together with (16),
$$\|d_k\| \ge c \|F(x_k)\| \ge c \varepsilon, \quad \forall k \ge 0.$$
On the other hand, from Lemma 7(1), $\|d_k\| \le M_1$ for all $k$. From Lemma 4, (16), and these bounds, we obtain
$$\alpha_k \ge \min\left\{\beta, \frac{\rho c \varepsilon^2}{(L + \sigma) M_1^2}\right\} > 0,$$
and hence
$$\alpha_k \|d_k\| \ge \min\left\{\beta, \frac{\rho c \varepsilon^2}{(L + \sigma) M_1^2}\right\} c \varepsilon > 0, \quad \forall k \ge 0.$$
The last inequality yields a contradiction with (14), so (15) holds.

4. Numerical Experiments

In this section, we report numerical experiments that test the performance of Algorithm 2 on the following two problems. The algorithm was coded in Matlab and run on a personal computer with a 2.3 GHz CPU, 2 GB of memory, and the Windows XP operating system.

For each test problem, the termination condition is that $\|F(x_k)\|$ falls below a prescribed tolerance $\epsilon$. The parameters $\beta$, $\sigma$, $\rho$, and $r$ are fixed for all runs. We test both problems with the number of variables $n = 100$, 500, 1000, 2000, and 5000 and start from different initial points. The meaning of the columns in Tables 1 and 2 is as follows: "Dim" means the dimension of the problem, "Init" means the initial point, "Iter" means the number of iterations, "Time" stands for the CPU time in seconds, and "Fn" stands for the final norm of the equations.
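A driver in the spirit of this setup is sketched below in Python (hypothetical, since the experiments were run in Matlab; it reuses the `algorithm2` routine sketched in Section 2, and the toy map and orthant projection are stand-ins for the actual test problems):

```python
import time
import numpy as np

# Hypothetical experiment driver mirroring the Dim/Iter/Time/Fn columns.
F = lambda x: np.exp(x) - 1.0          # assumed toy monotone test map
proj = lambda v: np.maximum(v, 0.0)    # Omega = nonnegative orthant

for n in (100, 500, 1000, 2000, 5000):
    for init in (np.ones(n), 0.1 * np.ones(n)):
        t0 = time.perf_counter()
        x, iters = algorithm2(F, proj, init)
        elapsed = time.perf_counter() - t0
        print(f"Dim={n:5d}  Iter={iters:4d}  Time={elapsed:.3f}s  "
              f"Fn={np.linalg.norm(F(x)):.2e}")
```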

Problem 9. The mapping $F(x)$ is taken componentwise as $F(x) = \big(f_1(x), f_2(x), \ldots, f_n(x)\big)^T$, with $\Omega$ a closed convex subset of $\mathbb{R}^n$.

Problem 10. The mapping $F(x)$ is taken as another monotone map of the same componentwise form, over a closed convex set $\Omega$.
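Test problems in this literature are typically built from simple componentwise monotone maps over simple convex sets. The following stand-ins (our illustrations, not necessarily the authors' Problems 9 and 10) are of that kind:

```python
import numpy as np

# Two illustrative monotone test maps (stand-ins for Problems 9 and 10).
def problem_a(x):            # componentwise exponential map; solution x = 0
    return np.exp(x) - 1.0

def problem_b(x):            # strictly monotone: derivative 2 + cos(x) >= 1
    return 2.0 * x + np.sin(x)

proj_orthant = lambda v: np.maximum(v, 0.0)   # Omega = {x : x >= 0}
```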

Tables 1 and 2 show that our method is efficient and suitable for solving large-scale monotone equations with convex constraints.

5. Conclusions

In this paper, we have proposed a SCALCG method for solving nonlinear monotone equations with convex constraints. Under some mild conditions, we proved its global convergence.

Preliminary numerical experiments have illustrated that the proposed method works well for Problems 9 and 10.

Acknowledgment

This work has been supported by the Scientific Research Fund of Hunan Provincial Education Department [12C0664].