
Research Article | Open Access


Sheng Wang, Hongbo Guan, "A Scaled Conjugate Gradient Method for Solving Monotone Nonlinear Equations with Convex Constraints", Journal of Applied Mathematics, vol. 2013, Article ID 286486, 7 pages, 2013. https://doi.org/10.1155/2013/286486

# A Scaled Conjugate Gradient Method for Solving Monotone Nonlinear Equations with Convex Constraints

Revised: 05 Nov 2013
Accepted: 05 Nov 2013
Published: 25 Nov 2013

#### Abstract

Based on the scaled conjugate gradient (SCALCG) method presented by Andrei (2007) and the projection method presented by Solodov and Svaiter, we propose a SCALCG method for solving monotone nonlinear equations with convex constraints. The SCALCG method can be regarded as a combination of the conjugate gradient method and a Newton-type method for solving unconstrained optimization problems, so it enjoys the advantages of both methods and is suitable for solving large-scale problems. Consequently, it can be applied to large-scale monotone nonlinear equations with convex constraints. Under reasonable conditions, we prove its global convergence. Numerical experiments show that the proposed method is efficient and promising.

#### 1. Introduction

In this paper, we consider the following convex constrained monotone equations: find $x \in C$ such that $F(x) = 0$, where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a continuous and monotone function. The feasible region $C$ is a nonempty closed convex set. Monotone means that $\langle F(x) - F(y),\, x - y \rangle \ge 0$ for all $x, y \in \mathbb{R}^n$.

Algorithms for solving monotone nonlinear equations are closely related to algorithms for solving optimization problems. It is known that a differentiable function $f$ is strictly convex if and only if its gradient mapping $\nabla f$ is strictly monotone, that is, $\langle \nabla f(x) - \nabla f(y),\, x - y \rangle > 0$ for all $x \ne y$; the definition of a monotone mapping is analogous, with the strict inequality relaxed. A strictly convex function has a unique minimum point, and this minimizer is exactly the stationary point of the function, namely, the point at which the gradient vector vanishes. Conversely, a monotone vector function $F$ can be viewed as the gradient of some convex function $f$ satisfying $\nabla f(x) = F(x)$. Therefore, solving $F(x) = 0$ is equivalent to minimizing $f$.
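The gradient-monotonicity connection above can be checked numerically. The following sketch uses the illustrative convex function $f(x) = \|x\|^4/4$ (not from the paper), whose gradient is $F(x) = \|x\|^2 x$, and verifies $\langle F(x) - F(y),\, x - y \rangle \ge 0$ on random pairs of points.

```python
import numpy as np

# Illustrative example (not from the paper): f(x) = ||x||^4 / 4 is convex,
# so its gradient F(x) = ||x||^2 * x must be a monotone mapping.
def F(x):
    return np.dot(x, x) * x  # gradient of f(x) = ||x||^4 / 4

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    # monotonicity: <F(x) - F(y), x - y> >= 0 (small tolerance for rounding)
    assert np.dot(F(x) - F(y), x - y) >= -1e-9
print("monotonicity verified on random samples")
```

Any convex differentiable $f$ would serve equally well here; the quartic is chosen only so that $F$ is genuinely nonlinear.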

Nonlinear monotone equations arise in a wide variety of applications, such as subproblems in the generalized proximal algorithms with Bregman distances [1]. In power engineering, the operations of a power system are described by a system of nonlinear equations, called the power flow equations, which are constrained by some operating constraints.

Unconstrained nonlinear monotone equations have received much attention. Solodov and Svaiter [2] proposed a Newton-type method; a good property of this method is that the whole sequence of iterates converges to a solution of the system without any regularity assumptions. Under some weaker conditions, Zhou and Toh [4] showed that Solodov and Svaiter's method converges superlinearly. Zhou and Li [5, 6] extended Solodov and Svaiter's projection method to the BFGS method and the limited memory BFGS method. Zhang and Zhou [3] combined the spectral gradient method with the projection method of Solodov and Svaiter and proposed a spectral gradient projection method. Wang et al. [7] extended Solodov and Svaiter's projection method to solve monotone equations with convex constraints. Yu et al. [8] proposed a spectral gradient projection algorithm for monotone nonlinear equations with convex constraints by combining a modified spectral gradient method and the projection method; a good property of that method is that no linear system needs to be solved at each iteration. Xiao and Zhu [9] extended CG_DESCENT to solve large-scale nonlinear convex constrained monotone equations in compressive sensing by combining it with the projection method of Solodov and Svaiter. At each iteration, their method needs neither the Jacobian information nor the storage of any matrix.
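All of the projection methods surveyed above rely on the Euclidean projection operator $P_C$ onto the closed convex feasible set and on its non-expansiveness, $\|P_C(x) - P_C(y)\| \le \|x - y\|$. A minimal sketch, assuming the common special case where $C$ is a box (the function name `project_box` is illustrative, not from the paper):

```python
import numpy as np

# Euclidean projection onto a box C = {x : lo <= x <= hi}; for a box,
# the projection is simply componentwise clipping.
def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

# Numerically check non-expansiveness: ||P_C(x) - P_C(y)|| <= ||x - y||.
rng = np.random.default_rng(1)
lo, hi = -1.0, 1.0
for _ in range(1000):
    x, y = 3 * rng.standard_normal(4), 3 * rng.standard_normal(4)
    px, py = project_box(x, lo, hi), project_box(y, lo, hi)
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
print("non-expansiveness verified")
```

For general closed convex sets the projection may itself require an optimization subroutine; the box case keeps each iteration cheap, which is why it is common in large-scale tests.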

This paper is organized as follows. In Section 2, we propose a SCALCG method for solving monotone nonlinear equations with convex constraints. Under reasonable conditions, we prove its global convergence in Section 3. In Section 4, we report numerical experiments showing that our method is efficient and promising.

#### 2. The Method

In this section, we propose our method. First, we briefly review the SCALCG method presented by Andrei [10] for the following unconstrained optimization problem: $\min_{x \in \mathbb{R}^n} f(x)$, where $f$ is a continuously differentiable function and $g(x) = \nabla f(x)$ is its gradient at the point $x$.

Andrei's method generates a sequence $\{x_k\}$ of approximations to the minimum of $f$, in which $x_{k+1} = x_k + \alpha_k d_k$, where $\alpha_k$ is a step length and $d_k$ is the search direction.

Based on the SCALCG method, we now introduce our method for solving (1). Inspired by (5), we define $d_k$ as where , , , , and is a step length which will be defined later. The definition of $d_k$ is similar to the one in [10].

Lemma 1. Let $d_k$ be generated by (6); then, for any $k$, we have

Proof. If , we have .
If , we obtain where So, we have By the definition of , the following inequality holds So, we obtain It can be seen that

The steps of our method are stated as follows.

Algorithm 2. Consider the following steps.
Step 0. Choose an initial point and constants , , , . Set .
Step 1. Stop if . Otherwise, compute by (6).
Step 2. Let which satisfies Let .
Step 3. Compute where
Step 4. Let . Go to Step 1.
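The steps above can be sketched as runnable code. Since the paper's search direction (6) and line search (14) are not reproduced here, this sketch substitutes the simple placeholder direction $d_k = -F(x_k)$ and the standard backtracking test $-F(x_k + \alpha d_k)^T d_k \ge \sigma \alpha \|d_k\|^2$ used in related projection methods; both are assumptions, not the paper's formulas. Step 3 is the Solodov-Svaiter hyperplane projection $x_{k+1} = P_C[x_k - \lambda_k F(z_k)]$ with $\lambda_k = \langle F(z_k), x_k - z_k \rangle / \|F(z_k)\|^2$.

```python
import numpy as np

def solve_projected(F, project, x0, sigma=1e-4, beta=1.0, rho=0.5,
                    eps=1e-6, max_iter=10_000):
    """Projection framework of Algorithm 2 with a placeholder direction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= eps:            # Step 1: stopping test
            return x
        d = -Fx                                   # placeholder for (6)
        a = beta                                  # Step 2: backtracking
        while -F(x + a * d) @ d < sigma * a * np.dot(d, d):
            a *= rho
        z = x + a * d
        Fz = F(z)
        nz = np.dot(Fz, Fz)
        if nz <= eps ** 2:                        # z is already a solution
            return z
        lam = (Fz @ (x - z)) / nz                 # Step 3: hyperplane step
        x = project(x - lam * Fz)                 # Step 4: project onto C
    return x

# Usage on an illustrative monotone system F(x) = x + sin(x), C = [-2, 2]^n.
F = lambda x: x + np.sin(x)
project = lambda x: np.clip(x, -2.0, 2.0)
x_star = solve_projected(F, project, x0=np.ones(5))
print(np.linalg.norm(F(x_star)))  # residual norm at the returned point
```

The test function $F(x) = x + \sin x$ is monotone because its Jacobian $I + \mathrm{diag}(\cos x_i)$ is positive semidefinite; the iteration drives the residual below the tolerance and stays feasible at every step.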

#### 3. Convergence Analysis

In this section, we establish the global convergence of Algorithm 2. For our purpose, we assume that satisfies the following assumptions.

Condition A. Consider the following.
(1) The mapping $F$ is Lipschitz continuous; that is, there exists a constant $L > 0$ such that $\|F(x) - F(y)\| \le L \|x - y\|$ for all $x, y$.
(2) The solution set of (1), denoted by , is nonempty.

Lemma 3. Algorithm 2 is well defined.

Proof. We only need to prove that Step 2 is well defined in Algorithm 2. Taking the limit on both sides of (14), we have So Algorithm 2 is well defined.

Lemma 4. Suppose Condition A holds. Then the step length satisfies

Proof. If the algorithm stops at some iteration, then , so that is a solution of (1). From now on, we assume that for any . From (7), it is easy to see that .
If , then by the line search process we know that does not satisfy (14); that is, where .
From (7), we know So, for any , there exists a positive number , such that .
From (7) and condition (1), we have So we get

Lemma 5. Suppose Condition A holds and , and let the sequence be generated by Algorithm 2. Then the sequence is bounded; that is, for all , there exists a positive constant , such that

Proof. From (2), we have From the non-expansiveness of the projection operator, it holds that It is easy to see Since is Lipschitz continuous, we get Let ; then (46) is established.

Lemma 6. Suppose Condition A holds, and let the sequences and be generated by Algorithm 2. Then is a descent direction of the function at the point , where .

Proof. The gradient of the function is .
From (2), it can be seen that So, we obtain

Lemma 7. Suppose Condition A holds, and the sequences and are generated by Algorithm 2. Then we have the following:
(1) and are bounded;
(2) ; in particular, we have ;
(3) .

Proof. (1) From (26), we have So the sequence is bounded.
From (2), (14), and (24), we get So, the following inequality holds That is, So, the sequence is bounded.

(2) From (26), we obtain

Since the function is continuous and the sequence is bounded, the sequence is bounded; that is, for all , there exists a positive constant , such that . Then we get So we have

Particularly, we obtain

(3) From the non-expansiveness of the projection operator, it holds that So we obtain

Theorem 8. Suppose Condition A holds, and the sequence is generated by Algorithm 2. Then we have

Proof. If (42) does not hold, then for any , there exists , such that From the non-expansiveness of the projection operator, it holds that By the definition of and the Cauchy-Schwarz inequality, we have By the definition of , assumption (1), and (45), we obtain
From (12), we get
From (46), we have
From (7), we get So, we obtain That is, From (6), (24), (43), and (47), we have
Let ; then, for all , we have From (19), (43), and (53), it can be seen that The last inequality contradicts (31), so (42) holds.

#### 4. Numerical Experiments

In this section, we report numerical experiments testing the performance of Algorithm 2 on the following two problems. The algorithm was coded in MATLAB and run on a personal computer with a 2.3 GHz CPU, 2 GB of memory, and the Windows XP operating system.

For each test problem, the termination condition is We set , , . We test both problems with the number of variables 100, 500, 1000, 2000, and 5000, starting from different initial points. The meanings of the columns in Tables 1 and 2 are as follows: "Dim" is the dimension of the problem, "Init" the initial point, "Iter" the number of iterations, "Time" the CPU time in seconds, and "Fn" the final norm of the equations.

Table 1

| Init | Dim | Iter | Time | Fn | Iter | Time | Fn |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | 100 | 53 | 0.041288 | | 60 | 0.046265 | |
| | 500 | 117 | 0.168041 | | 130 | 0.253245 | |
| | 1000 | 166 | 0.709354 | | 184 | 0.663908 | |
| | 2000 | 237 | 4.222668 | | 262 | 2.370548 | |
| | 5000 | 382 | 42.01209 | | 421 | 21.02392 | |
| | 100 | 77 | 0.060589 | | 55 | 0.042016 | |
| | 500 | 155 | 0.279761 | | 120 | 0.149927 | |
| | 1000 | 216 | 0.813048 | | 171 | 0.594531 | |
| | 2000 | 303 | 2.733373 | | 244 | 2.273943 | |
| | 5000 | 479 | 23.90998 | | 393 | 19.72807 | |
Table 2

| Init | Dim | Iter | Time | Fn | Iter | Time | Fn |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | 100 | 26 | 0.026912 | | 32 | 0.025124 | |
| | 500 | 51 | 0.088748 | | 63 | 0.149986 | |
| | 1000 | 69 | 0.2554104 | | 86 | 0.356989 | |
| | 2000 | 96 | 0.986898 | | 120 | 1.278378 | |
| | 5000 | 151 | 13.81959 | | 187 | 20.48943 | |
| | 100 | 68 | 0.054123 | | 24 | 0.018994 | |
| | 500 | 140 | 0.241559 | | 47 | 0.128392 | |
| | 1000 | 194 | 0.712503 | | 65 | 0.282163 | |
| | 2000 | 271 | 2.927190 | | 90 | 0.889466 | |
| | 5000 | 425 | 46.31353 | | 141 | 7.183196 | |

Problem 9. The mapping $F$ is taken as , where

Problem 10. The mapping $F$ is taken as , where

Tables 1 and 2 show that our method is efficient. It is suitable for solving large-scale monotone equations with convex constraints.

#### 5. Conclusions

In this paper, we have proposed a SCALCG method for solving nonlinear monotone equations with convex constraints. Under some mild conditions, we proved its global convergence.

Preliminary numerical experiments have illustrated that the proposed method works well for Problems 9 and 10.

#### Acknowledgment

This work was supported by the Scientific Research Fund of Hunan Provincial Education Department (Grant no. 12C0664).

1. M. V. Solodov and A. N. Iusem, “Newton-type methods with generalized distances for constrained optimization,” Optimization, vol. 41, no. 3, pp. 257–278, 1997.
2. M. V. Solodov and B. F. Svaiter, "A globally convergent inexact Newton method for systems of monotone equations," in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1998.
3. L. Zhang and W. Zhou, “Spectral gradient projection method for solving nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 478–484, 2006.
4. G. Zhou and K. C. Toh, “Superlinear convergence of a Newton-type algorithm for monotone equations,” Journal of Optimization Theory and Applications, vol. 125, no. 1, pp. 205–221, 2005.
5. W. Zhou and D. Li, “A globally convergent BFGS method for nonlinear monotone equations without any merit functions,” Mathematics of Computation, vol. 77, no. 264, pp. 2231–2240, 2008.
6. W. Zhou and D. Li, “Limited memory BFGS method for nonlinear monotone equations,” Journal of Computational Mathematics, vol. 25, no. 1, pp. 89–96, 2007.
7. C. Wang, Y. Wang, and C. Xu, “A projection method for a system of nonlinear monotone equations with convex constraints,” Mathematical Methods of Operations Research, vol. 66, no. 1, pp. 33–46, 2007.
8. Z. Yu, J. Lin, J. Sun, Y. Xiao, L. Liu, and Z. Li, “Spectral gradient projection method for monotone nonlinear equations with convex constraints,” Applied Numerical Mathematics, vol. 59, no. 10, pp. 2416–2423, 2009.
9. Y. H. Xiao and H. Zhu, “A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing,” Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
10. N. Andrei, “A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization,” Applied Mathematics Letters, vol. 20, no. 6, pp. 645–650, 2007.
