Journal of Applied Mathematics
Volume 2012, Article ID 348654, 9 pages
Research Article

A Two-Step Matrix-Free Secant Method for Solving Large-Scale Systems of Nonlinear Equations

1Department of Mathematics, Faculty of Science, University Putra Malaysia, 43400 Serdang, Malaysia
2Department of Mathematics, Faculty of Science, Bayero University Kano, Kano 2340, Nigeria
3Department of Mathematics, Faculty of Science and Technology, University Malaysia Terengganu, 21030 Kuala Terengganu, Terengganu, Malaysia

Received 9 November 2011; Revised 11 January 2012; Accepted 16 January 2012

Academic Editor: Renat Zhdanov

Copyright © 2012 M. Y. Waziri et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We propose an approach to enhance the performance of a diagonal variant of the secant method for solving large-scale systems of nonlinear equations. In this approach, the diagonal secant update is built from data over two preceding steps, rather than a single step, via a weak secant equation, which improves the updated approximate Jacobian in diagonal form. The numerical results verify that the proposed approach yields a clear enhancement in numerical performance.

1. Introduction

Solving systems of nonlinear equations is becoming more essential in the analysis of complex problems in many research areas. The problem considered is to find the solution of the nonlinear system
$$F(x) = 0, \quad (1.1)$$
where $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable in an open neighborhood $\Phi \subset \mathbb{R}^n$ of a solution $x^* = (x_1^*, \dots, x_n^*)$ of the system (1.1). We assume that there exists $x^*$ with $F(x^*) = 0$ and $F'(x^*) \neq 0$, where $F'(x_k)$ is the Jacobian of $F$ at $x_k$, which is assumed to be locally Lipschitz continuous at $x^*$. The most prominent method for finding a solution of (1.1) is Newton's method, which generates a sequence of iterates $\{x_k\}$ from a given initial guess $x_0$ via
$$x_{k+1} = x_k - F'(x_k)^{-1} F(x_k), \quad k = 0, 1, 2, \dots \quad (1.2)$$
However, Newton's method requires computing the matrix of first-order derivatives of the system at every iteration. In practice, computing some of these derivatives is quite costly, and sometimes they are not available or cannot be obtained precisely. In such cases, Newton's method cannot be applied directly.
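The Newton iteration (1.2) can be sketched as follows. This is a minimal illustration, not the authors' code; the two-equation test system at the bottom is our own, chosen only to show the iteration converging.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0: x_{k+1} = x_k - F'(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        # Solve F'(x_k) d = F(x_k) rather than forming the inverse explicitly
        x = x - np.linalg.solve(J(x), Fx)
    return x

# Hypothetical 2x2 test system: F(x) = (x1^2 + x2^2 - 1, x1 - x2)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton_system(F, J, [1.0, 0.5])
```

Note that each iteration requires the full Jacobian $J$ and an $O(n^3)$ linear solve, which is exactly the cost the diagonal updating scheme of this paper avoids.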

It is imperative to mention that some efforts have already been made to eliminate the well-known shortcomings of Newton's method for solving systems of nonlinear equations, particularly large-scale systems. These so-called revised Newton-type methods include the Chord Newton method, inexact Newton methods, quasi-Newton methods, and so forth (see, e.g., [1–4]). On the other hand, most of these variants of Newton's method still share some of its shortcomings. For example, Broyden's method [5] and the Chord Newton method need to store an $n \times n$ matrix, and their floating-point cost per iteration is $O(n^2)$.

To deal with these disadvantages, a diagonal Newton-like method was suggested by Leong et al. [6], in which the Jacobian is approximated by a nonsingular diagonal matrix that is updated at every iteration. Building on the weak secant equation of Dennis and Wolkowicz [7], Leong et al. [6] showed that their algorithm is appreciably cheaper than Newton's method and some of its variants. It is worth noting that they employ a standard one-step (two-point) approach to Jacobian approximation, as is common in most Newton-like methods. In contrast, this paper presents a new diagonal-updating Newton method that extends the procedure of [6] by employing a two-step (multipoint) scheme to increase the accuracy of the Jacobian approximation. We organize the rest of this paper as follows. In the next section, we present the details of our method. Some numerical results are reported in Section 3. Finally, conclusions are drawn in Section 4.

2. Two-Step Diagonal Approximation

Here, we define our new diagonal variant of Newton's method, which generates a sequence of points $\{x_k\}$ via
$$x_{k+1} = x_k - Q_k^{-1} F(x_k), \quad (2.1)$$
where $Q_k$ is a diagonal approximation of the Jacobian matrix, updated at each iteration. Our plan is to build a matrix $Q_k$ through a diagonal updating scheme such that $Q_k$ is a good approximation of the Jacobian in some sense. For this purpose, we make use of an interpolating curve in the variable space to develop a weak secant equation, derived originally by Dennis and Wolkowicz [7]. This is made possible by considering some of the most successful two-step methods (see [8–11] for more detail). By integrating this two-step information, we can present an improved weak secant equation as follows:
$$\left(s_k - \alpha_k s_{k-1}\right)^T Q_{k+1} \left(s_k - \alpha_k s_{k-1}\right) = \left(s_k - \alpha_k s_{k-1}\right)^T \left(y_k - \alpha_k y_{k-1}\right). \quad (2.2)$$
By letting $\rho_k = s_k - \alpha_k s_{k-1}$ and $\mu_k = y_k - \alpha_k y_{k-1}$ in (2.2), we have
$$\rho_k^T Q_{k+1} \rho_k = \rho_k^T \mu_k. \quad (2.3)$$
Since we use information from the last two steps instead of one previous step in (2.2) and (2.3), we need to build interpolating quadratic curves $x(\nu)$ and $y(\nu)$, where $x(\nu)$ interpolates the two preceding iterates $x_{k-1}$ and $x_k$, and $y(\nu)$ interpolates the two preceding function evaluations $F_{k-1}$ and $F_k$ (which are assumed to be available).

Using the approach introduced in [8], we can determine the value of $\alpha_k$ in (2.2) by computing the values of $\nu_0$, $\nu_1$, and $\nu_2$. If $\nu_2 = 0$, the set $\{\nu_j\}_{j=0}^{2}$ can be computed as follows:
$$\nu_2 - \nu_1 = \left\|x(\nu_2) - x(\nu_1)\right\|_{Q_k} = \left\|x_{k+1} - x_k\right\|_{Q_k} = \left\|s_k\right\|_{Q_k} = \left(s_k^T Q_k s_k\right)^{1/2}, \quad (2.4)$$
$$\nu_2 - \nu_0 = \left\|x(\nu_2) - x(\nu_0)\right\|_{Q_k} = \left\|x_{k+1} - x_{k-1}\right\|_{Q_k} = \left\|s_k + s_{k-1}\right\|_{Q_k} = \left(\left(s_k + s_{k-1}\right)^T Q_k \left(s_k + s_{k-1}\right)\right)^{1/2}. \quad (2.5)$$

Let us define $\beta$ by
$$\beta = \frac{\nu_2 - \nu_0}{\nu_1 - \nu_0}; \quad (2.6)$$
then $\rho_k$ and $\mu_k$ are given as
$$\rho_k = s_k - \frac{\beta^2}{1 + 2\beta} s_{k-1}, \quad (2.7)$$
$$\mu_k = y_k - \frac{\beta^2}{1 + 2\beta} y_{k-1}, \quad (2.8)$$
that is, $\alpha_k$ in (2.2) is given by $\alpha_k = \beta^2 / (1 + 2\beta)$.
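Taking the convention $\nu_2 = 0$, so that (2.4) and (2.5) give $\nu_1$ and $\nu_0$ as negative $Q_k$-norms, the quantities $\beta$, $\alpha_k$, $\rho_k$, and $\mu_k$ can be computed as in the following sketch. Storing the diagonal matrix $Q_k$ as a vector `q` of its diagonal entries is our own convention, not the paper's.

```python
import numpy as np

def two_step_vectors(s_k, s_km1, y_k, y_km1, q):
    """Compute rho_k and mu_k from (2.4)-(2.8); q holds the diagonal of Q_k."""
    # (2.4)-(2.5) with nu_2 = 0: nu_1 = -||s_k||_{Q_k}, nu_0 = -||s_k + s_{k-1}||_{Q_k}
    nu1 = -np.sqrt(s_k @ (q * s_k))
    nu0 = -np.sqrt((s_k + s_km1) @ (q * (s_k + s_km1)))
    beta = (0.0 - nu0) / (nu1 - nu0)        # (2.6) with nu_2 = 0
    alpha = beta**2 / (1.0 + 2.0 * beta)    # alpha_k = beta^2 / (1 + 2 beta)
    rho = s_k - alpha * s_km1               # (2.7)
    mu = y_k - alpha * y_km1                # (2.8)
    return rho, mu

# Illustrative call with made-up step and yield vectors
rho, mu = two_step_vectors(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                           np.array([0.5, 0.0]), np.array([0.0, 0.5]),
                           np.ones(2))
```

Because $Q_k$ is diagonal, each $Q_k$-norm costs only $O(n)$ operations, so the two-step quantities add no matrix storage or $O(n^2)$ work.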

Assume that $Q_k$ is a nonsingular diagonal matrix; the updated version of $Q_k$, that is, $Q_{k+1}$, is then defined by
$$Q_{k+1} = Q_k + \Lambda_k, \quad (2.9)$$
where $\Lambda_k$, the deviation between $Q_k$ and $Q_{k+1}$, is also a diagonal matrix. To preserve an accurate Jacobian approximation, we update $Q_{k+1}$ in such a way that the following condition is satisfied:
$$\rho_k^T Q_{k+1} \rho_k = \rho_k^T \mu_k. \quad (2.10)$$
We proceed by controlling the growth of the error $\Lambda_k$ through minimizing its size under the Frobenius norm, subject to (2.10). Consequently, we consider the following problem:
$$\min_{\Lambda_k^{(i)} \in \mathbb{R}} \; \frac{1}{2} \sum_{i=1}^{n} \left(\Lambda_k^{(i)}\right)^2 \quad \text{s.t.} \quad \sum_{i=1}^{n} \Lambda_k^{(i)} \left(\rho_k^{(i)}\right)^2 = \rho_k^T \mu_k - \rho_k^T Q_k \rho_k, \quad (2.11)$$
where $\Lambda_k^{(i)}$, $i = 1, 2, \dots, n$, are the diagonal elements of $\Lambda_k$ and $\rho_k^{(i)}$, $i = 1, 2, \dots, n$, are the components of the vector $\rho_k$.

In view of the fact that the objective function in (2.11) is convex and the feasible set is also convex, problem (2.11) has a unique solution. Hence, its Lagrangian function can be expressed as follows:
$$L\left(\Lambda_k, \lambda\right) = \frac{1}{2} \sum_{i=1}^{n} \left(\Lambda_k^{(i)}\right)^2 + \lambda \left( \sum_{i=1}^{n} \Lambda_k^{(i)} \left(\rho_k^{(i)}\right)^2 - \rho_k^T \mu_k + \rho_k^T Q_k \rho_k \right), \quad (2.12)$$
where $\lambda$ is the Lagrange multiplier associated with the constraint.

Taking the partial derivatives of (2.12) with respect to each component of $\Lambda_k$ and setting them equal to zero, then multiplying each relation by $(\rho_k^{(i)})^2$ and summing over all $i$, yields
$$\Lambda_k^{(i)} = -\lambda \left(\rho_k^{(i)}\right)^2, \quad (2.13)$$
$$\sum_{i=1}^{n} \Lambda_k^{(i)} \left(\rho_k^{(i)}\right)^2 = -\lambda \sum_{i=1}^{n} \left(\rho_k^{(i)}\right)^4, \quad \text{for each } i = 1, 2, \dots, n. \quad (2.14)$$
Using (2.14) and the constraint, we obtain after some simplification
$$\lambda = -\frac{\rho_k^T \mu_k - \rho_k^T Q_k \rho_k}{\sum_{i=1}^{n} \left(\rho_k^{(i)}\right)^4}. \quad (2.15)$$
Next, by substituting (2.15) into (2.13) and performing a little algebra, we obtain
$$\Lambda_k^{(i)} = \frac{\rho_k^T \mu_k - \rho_k^T Q_k \rho_k}{\sum_{i=1}^{n} \left(\rho_k^{(i)}\right)^4} \left(\rho_k^{(i)}\right)^2, \quad i = 1, 2, \dots, n. \quad (2.16)$$
Letting $H_k = \operatorname{diag}\big((\rho_k^{(1)})^2, (\rho_k^{(2)})^2, \dots, (\rho_k^{(n)})^2\big)$ and noting that $\sum_{i=1}^{n} (\rho_k^{(i)})^4 = \operatorname{Tr}(H_k^2)$, where $\operatorname{Tr}$ is the trace operator, we can finally present the updating formula for $Q$ as follows:
$$Q_{k+1} = Q_k + \frac{\rho_k^T \mu_k - \rho_k^T Q_k \rho_k}{\operatorname{Tr}\left(H_k^2\right)} H_k. \quad (2.17)$$
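Because $H_k$ is diagonal, the update (2.17) is a simple componentwise operation. The sketch below is our own, with $Q_k$ again stored as the vector `q` of its diagonal; one can check directly that the result satisfies the weak secant condition (2.10) exactly.

```python
import numpy as np

def update_diagonal(q, rho, mu):
    """Least-change diagonal update (2.17); q is the diagonal of Q_k."""
    h = rho**2                               # diagonal of H_k
    tr_h2 = np.sum(rho**4)                   # Tr(H_k^2)
    scale = (rho @ mu - rho @ (q * rho)) / tr_h2
    return q + scale * h

# Illustrative data: after the update, rho^T Q_{k+1} rho equals rho^T mu
rho = np.array([1.0, 2.0, 3.0])
mu = np.array([0.5, 1.0, 1.5])
q_new = update_diagonal(np.ones(3), rho, mu)
```

The whole update is $O(n)$ in both time and storage, which is the main computational advantage over storing and updating a full $n \times n$ matrix.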

To safeguard against the possibility of generating an undefined $Q_{k+1}$, we propose the following updating scheme for $Q_{k+1}$:
$$Q_{k+1} = \begin{cases} Q_k + \dfrac{\rho_k^T \mu_k - \rho_k^T Q_k \rho_k}{\operatorname{Tr}\left(H_k^2\right)} H_k, & \left\|\rho_k\right\| > 10^{-4}, \\ Q_k, & \text{otherwise}, \end{cases} \quad (2.18)$$
where $\|\cdot\|$ denotes the Euclidean norm of a vector. Now, we can describe the algorithm for our proposed method as follows.

Algorithm 2.1 (2-MFDN).
Step 1. Choose an initial guess $x_0$ and $Q_0 = I$; let $k = 0$.
Step 2. Compute $F(x_k)$. If $\|F(x_k)\| \le \epsilon_1$, stop, where $\epsilon_1 = 10^{-4}$.
Step 3. If $k = 0$, define $x_1 = x_0 - Q_0^{-1} F(x_0)$. Else, if $k = 1$, set $\rho_k = s_k$ and $\mu_k = y_k$ and go to Step 5.
Step 4. If $k \ge 2$, compute $\nu_1$, $\nu_0$, and $\beta$ via (2.4)–(2.6), respectively, and find $\rho_k$ and $\mu_k$ using (2.7) and (2.8), respectively. If $\rho_k^T \mu_k \le 10^{-4} \|\rho_k\|_2 \|\mu_k\|_2$, set $\rho_k = s_k$ and $\mu_k = y_k$.
Step 5. Let $x_{k+1} = x_k - Q_k^{-1} F(x_k)$ and update $Q_{k+1}$ as defined by (2.17).
Step 6. Check whether $\|\rho_k\|_2 \ge \epsilon_1$; if so, retain $Q_{k+1}$ as computed in Step 5. Else, set $Q_{k+1} = Q_k$.
Step 7. Set $k = k + 1$ and go to Step 2.
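Algorithm 2.1 can be rendered compactly as below. This is our own sketch, under the usual conventions $s_k = x_{k+1} - x_k$ and $y_k = F(x_{k+1}) - F(x_k)$ (the paper does not restate them), with $Q_k$ stored as the vector `q` of its diagonal and $\nu_2 = 0$; the exponential test system at the bottom is hypothetical, not one of the paper's benchmarks.

```python
import numpy as np

def two_mfdn(F, x0, eps=1e-4, max_iter=500):
    """Sketch of Algorithm 2.1 (2-MFDN); the Jacobian approximation is diagonal."""
    x = np.asarray(x0, dtype=float)
    q = np.ones_like(x)                        # Step 1: Q_0 = I
    Fx = F(x)
    s_prev = y_prev = None
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= eps:          # Step 2: stopping test
            break
        x_new = x - Fx / q                     # Step 5: x_{k+1} = x_k - Q_k^{-1} F(x_k)
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        if s_prev is None:                     # Step 3: one-step data on the first step
            rho, mu = s, y
        else:                                  # Step 4: two-step data via (2.4)-(2.8)
            nu1 = -np.sqrt(s @ (q * s))
            nu0 = -np.sqrt((s + s_prev) @ (q * (s + s_prev)))
            beta = -nu0 / (nu1 - nu0)
            alpha = beta**2 / (1.0 + 2.0 * beta)
            rho, mu = s - alpha * s_prev, y - alpha * y_prev
            if rho @ mu <= 1e-4 * np.linalg.norm(rho) * np.linalg.norm(mu):
                rho, mu = s, y                 # fall back to one-step data
        if np.linalg.norm(rho) > 1e-4:         # safeguard (2.18) / Step 6
            q = q + ((rho @ mu - rho @ (q * rho)) / np.sum(rho**4)) * rho**2
        x, Fx, s_prev, y_prev = x_new, F_new, s, y
    return x

# Hypothetical diagonal test system: F_i(x) = exp(x_i) - 1, solution x* = 0
sol = two_mfdn(lambda x: np.exp(x) - 1.0, np.full(3, 0.5))
```

Note that the method is matrix-free in the stated sense: only vectors of length $n$ are stored, and no linear system is solved.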

3. Numerical Results

In this section, we analyze the performance of the 2-MFDN method compared with four Newton-like methods. The codes are written in MATLAB 7.0 in double precision, and the stopping criterion used is
$$\left\|\rho_k\right\| + \left\|F\left(x_k\right)\right\| \le 10^{-4}. \quad (3.1)$$
The methods that we consider are:
(i) Newton's method (NM);
(ii) the Chord (fixed) Newton method (FN);
(iii) the Broyden method (BM);
(iv) MFDN, the method proposed by Leong et al. [6].

The identity matrix has been chosen as the initial approximate Jacobian. The benchmark problems are run at six different dimensions, namely, 25, 50, 100, 500, 1000, and 250000.

The symbol “−” is used to indicate a failure due to one of the following:
(i) the number of iterations reaches 500 without obtaining a point $x_k$ that satisfies (3.1);
(ii) the CPU time reaches 500 seconds;
(iii) there is insufficient memory to initiate the run.

In the following, we give some details of the benchmark test problems.

Problem 1. Trigonometric system of Shin et al. [12]:
$$f_i(x) = \cos x_i - 1, \quad i = 1, 2, \dots, n, \qquad x_0 = (0.87, 0.87, \dots, 0.87). \quad (3.2)$$

Problem 2. Artificial function:
$$f_i(x) = \ln x_i - \cos\left(\frac{1}{1 + x^T x}\right) - \exp\left(\frac{1}{1 + x^T x}\right), \quad i = 1, 2, \dots, n, \qquad x_0 = (2.5, 2.5, \dots, 2.5). \quad (3.3)$$

Problem 3. Artificial function:
$$f_1(x) = \cos x_1 - 9 + 3x_1 + 8\exp\left(x_2\right), \qquad f_i(x) = \cos x_i - 9 + 3x_i + 8\exp\left(x_{i-1}\right), \quad i = 2, \dots, n, \qquad x_0 = (5, 5, \dots, 5). \quad (3.4)$$

Problem 4. Trigonometric function of Spedicato [13]:
$$f_i(x) = n - \sum_{j=1}^{n} \cos x_j + i\left(1 - \cos x_i\right) - \sin x_i, \quad i = 1, \dots, n, \qquad x_0 = \left(\frac{1}{n}, \frac{1}{n}, \dots, \frac{1}{n}\right). \quad (3.5)$$

Problem 5. Sparse system of Shin et al. [12]:
$$f_i(x) = x_i x_{i+1} - 1, \quad i = 1, 2, \dots, n-1, \qquad f_n(x) = x_n x_1 - 1, \qquad x_0 = (0.5, 0.5, \dots, 0.5). \quad (3.6)$$

Problem 6. Artificial function:
$$f_i(x) = \frac{x_i^{3/2}}{n} + \cos\left(x_i^{3/2}\right) - 2x_i - \exp\left(x_i - 3\right) + \log\left(x_i^2 + 1\right), \quad i = 1, 2, \dots, n, \qquad x_0 = (3, 3, 3, \dots, 3). \quad (3.7)$$

Problem 7 (see [14]).
$$f_1(x) = x_1, \qquad f_i(x) = \cos x_{i-1} + x_i - 1, \quad i = 2, 3, \dots, n, \qquad x_0 = (0.5, 0.5, \dots, 0.5). \quad (3.8)$$
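For illustration, Problems 1 and 5 are straightforward to express in vectorized form. This is a sketch; the function names and the helper starting-point vectors are ours, not the paper's.

```python
import numpy as np

def problem1(x):
    """Trigonometric system of Shin et al. [12]: f_i(x) = cos(x_i) - 1."""
    return np.cos(x) - 1.0

def problem5(x):
    """Sparse system of Shin et al. [12]: f_i = x_i x_{i+1} - 1, f_n = x_n x_1 - 1."""
    # np.roll(x, -1) pairs each x_i with its cyclic successor x_{i+1}
    return x * np.roll(x, -1) - 1.0

# Starting points as given in (3.2) and (3.6), here at dimension n = 100
x0_1 = np.full(100, 0.87)
x0_5 = np.full(100, 0.5)
```

Both residual functions cost $O(n)$ per evaluation, so they scale to the largest dimension (250000) used in the experiments.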

From Table 1, it is noticeable that using the two-step approach to build the diagonal updating scheme significantly improves the performance of the one-step diagonal variant of Newton's method (MFDN). This observation is most pronounced in CPU time and number of iterations, particularly as the system dimension increases. In addition, it is worth mentioning that the result of the 2-MFDN method on Problem 3 shows that the method can be a good solver even when the Jacobian is nearly singular.

Table 1: Numerical results of NM, FN, BM, MFDN, and 2-MFDN methods.

The numerical results presented in this paper show that the 2-MFDN method is a good alternative to the MFDN method, especially for extremely large systems.

4. Conclusions

In this paper, a new variant of the secant method (2-MFDN) for solving large-scale systems of nonlinear equations has been developed. Unlike the single-step method, it employs a two-step, three-point scheme to update the Jacobian approximation as a nonsingular diagonal matrix. The motivation behind this approach is the enhanced accuracy of the Jacobian approximation. Our method requires considerably less computational cost and fewer iterations than the MFDN method of Leong et al. [6], and this advantage becomes more noticeable as the dimension of the system increases. Therefore, from the numerical results presented, we conclude that our method (2-MFDN) is a superior algorithm compared with the NM, FN, BM, and MFDN methods for handling large-scale systems of nonlinear equations.


References

  1. J. E. Dennis, Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice Hall, Englewood Cliffs, NJ, USA, 1983.
  2. R. S. Dembo, S. C. Eisenstat, and T. Steihaug, “Inexact Newton methods,” SIAM Journal on Numerical Analysis, vol. 19, no. 2, pp. 400–408, 1982.
  3. K. Natasa and L. Zorna, “Newton-like method with modification of the right-hand vector,” Journal of Computational Mathematics, vol. 71, pp. 237–250, 2001.
  4. M. Y. Waziri, W. J. Leong, M. A. Hassan, and M. Monsi, “A low memory solver for integral equations of Chandrasekhar type in the radiative transfer problems,” Mathematical Problems in Engineering, vol. 2011, Article ID 467017, 12 pages, 2011.
  5. B. Lam, “On the convergence of a quasi-Newton method for sparse nonlinear systems,” Mathematics of Computation, vol. 32, no. 142, pp. 447–451, 1978.
  6. W. J. Leong, M. A. Hassan, and M. Waziri Yusuf, “A matrix-free quasi-Newton method for solving large-scale nonlinear systems,” Computers & Mathematics with Applications, vol. 62, no. 5, pp. 2354–2363, 2011.
  7. J. E. Dennis, Jr. and H. Wolkowicz, “Sizing and least-change secant methods,” SIAM Journal on Numerical Analysis, vol. 30, no. 5, pp. 1291–1314, 1993.
  8. J. A. Ford and I. A. Moghrabi, “Alternating multi-step quasi-Newton methods for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 82, no. 1-2, pp. 105–116, 1997, 7th ICCAM 96 Congress (Leuven).
  9. J. A. Ford and I. A. Moghrabi, “Multi-step quasi-Newton methods for optimization,” Journal of Computational and Applied Mathematics, vol. 50, no. 1–3, pp. 305–323, 1994.
  10. M. Farid, W. J. Leong, and M. A. Hassan, “A new two-step gradient-type method for large-scale unconstrained optimization,” Computers & Mathematics with Applications, vol. 59, no. 10, pp. 3301–3307, 2010.
  11. J. A. Ford and S. Thrmlikit, “New implicit updates in multi-step quasi-Newton methods for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 152, pp. 133–146, 2003.
  12. B.-C. Shin, M. T. Darvishi, and C.-H. Kim, “A comparison of the Newton-Krylov method with high order Newton-like methods to solve nonlinear systems,” Applied Mathematics and Computation, vol. 217, no. 7, pp. 3190–3198, 2010.
  13. E. Spedicato, “Computational experience with quasi-Newton algorithms for minimization problems of moderately large size,” Tech. Rep. CISE-N-175 3, pp. 10–41, 1975.
  14. A. Roose, V. L. M. Kulla, and T. Meressoo, Test Examples of Systems of Nonlinear Equations, Estonian Software and Computer Service Company, Tallinn, Estonia, 1990.