
Journal of Applied Mathematics

Volume 2012 (2012), Article ID 165452, 12 pages

http://dx.doi.org/10.1155/2012/165452

## On a Novel Fourth-Order Algorithm for Solving Systems of Nonlinear Equations

^{1}Department of Applied Mathematical Sciences, School of Innovative Technologies and Engineering, University of Technology, Mauritius, La Tour Koenig, Pointe aux Sables, Mauritius

^{2}Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera, s/n, 40022 Valencia, Spain

^{3}Department of Mathematics, Islamic Azad University, Zahedan Branch, P.O. Box 987-98138 Zahedan, Iran

Received 23 July 2012; Revised 9 October 2012; Accepted 10 October 2012

Academic Editor: Changbum Chun

Copyright © 2012 Diyashvir K. R. Babajee et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper focuses on solving systems of nonlinear equations numerically. We propose an efficient two-step iterative scheme with fourth order of convergence. The proposed method reaches fourth order without requiring the evaluation of second- or higher-order Fréchet derivatives at any iteration. Finally, numerical results illustrate the efficiency of the method.

#### 1. Introduction

Many relationships in nature are inherently nonlinear: the effects are not in direct proportion to their causes. A large number of real-world applications therefore reduce to solving a system of nonlinear equations numerically. Solving such systems is one of the most challenging problems in numerical analysis (see [1–4]); examples arise in chaos theory, which studies dynamical systems that are highly sensitive to initial conditions, and in climate studies, where the weather is famously chaotic and small changes in one part of the system produce complex effects throughout.

Several robust and efficient methods for solving nonlinear systems have been brought forward (see, e.g., [5–7]). Newton's iteration for nonlinear equations is the basic method; it converges quadratically under some conditions and has also been extended to systems of nonlinear equations. In this method, if the Jacobian of the multivariate function at an iterate is singular or almost singular, the iteration may be aborted due to overflow or may diverge, which restricts its applications to some extent. Recently, some modified Newton's methods with cubic convergence for systems of nonlinear equations have been proposed in order to overcome this flaw; see, for example, [8]. In fact, there exist fourth-order schemes that avoid the condition of a nonsingular Jacobian matrix. For example, Noor et al. [9] designed different algorithms for solving nonlinear systems of equations, obtained by means of variational techniques. One of the iterative schemes proposed in that paper, which we use in the numerical section, has the following expression: denoted by , where are the coordinate functions of .
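As a baseline for the methods discussed below, Newton's iteration for a system amounts to solving a linear system with the Jacobian at each step, and breaks down when that Jacobian is (nearly) singular. The following minimal sketch applies it to a hypothetical 2×2 system; the test function, starting point, and tolerance are illustrative choices, not taken from the paper, and the 2×2 Jacobian is inverted explicitly via Cramer's rule:

```python
def newton_2x2(F, J, x, y, tol=1e-12, max_iter=50):
    """Newton iteration x_{k+1} = x_k - J(x_k)^{-1} F(x_k) for n = 2."""
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        a, b, c, d = J(x, y)              # Jacobian entries [a b; c d]
        det = a * d - b * c
        if abs(det) < 1e-14:              # the failure mode noted above
            raise ZeroDivisionError("(nearly) singular Jacobian")
        x -= (d * f1 - b * f2) / det      # J^{-1} F via Cramer's rule
        y -= (a * f2 - c * f1) / det
    return x, y

# Hypothetical test system: x^2 + y^2 = 4, x*y = 1
F = lambda x, y: (x ** 2 + y ** 2 - 4.0, x * y - 1.0)
J = lambda x, y: (2 * x, 2 * y, y, x)

root = newton_2x2(F, J, 2.0, 0.5)
```

From the starting point (2, 0.5) the iterates converge quadratically to the root near (1.932, 0.518).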

However, the condition of a nonsingular Jacobian cannot always be avoided, although, as discussed in [10], the generalized inverse can be used instead, and the convergence remains quadratic. Moreover, the efficiency index of these solvers is not satisfactory for practical problems, which motivates us to suggest higher-order iterative methods with a higher efficiency index.

In this paper, we suggest a novel two-step iterative scheme for finding the solution of nonlinear systems whose Jacobian matrix is required to be nonsingular. The main merit of the proposed scheme is that it does not require the evaluation of second- or higher-order Fréchet derivatives, while fourth order of convergence is attained using only one function evaluation and two first-order Fréchet derivative evaluations per computing step.

The paper is organized as follows. In the next section, we give a short review of basic notions in this field of study; Section 3 proposes the new iterative algorithm for solving systems of nonlinear equations and analyzes its local convergence. In Section 4, numerical examples are considered to show that the new method performs well; the detailed results are analyzed and discussed there. Finally, some conclusions are drawn in Section 5.

#### 2. Preliminary Notes

Let us consider the problem of finding a real zero of a multivariate nonlinear function , where is a smooth map and is an open and convex set. We assume that is a zero of the system and that is an initial guess sufficiently close to .

Consider the Taylor expansion of about as follows: where . Replacing by in (2.1) gives Solving (2.2) for yields If is an approximation of , we denote it by and obtain that is,

Waziri et al. [11] proposed a technique to approximate the Jacobian inverse by a diagonal matrix , which is updated at each iteration, where the th diagonal entry of the matrix is defined as where is the th component of the vector , for all and . We remark that this procedure is similar to the secant method in the one-dimensional case. The approximation is valid only if is not equal to zero or , where is a positive real number for all . Otherwise, set ; for more details, please refer to [11].
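The diagonal-updating idea can be sketched as follows. The test system, the starting pair of iterates, and the fallback of simply keeping the previous diagonal entry when the denominator is too small are our assumptions for illustration; they are not the exact safeguard of [11]:

```python
def diagonal_newton(F, x0, x1, tol=1e-10, max_iter=200, eps=1e-4):
    """Newton-like iteration where the Jacobian inverse is replaced by
    a diagonal matrix of secant-like difference quotients."""
    n = len(x0)
    d = [1.0] * n                      # current diagonal approximation
    prev, cur = list(x0), list(x1)
    for _ in range(max_iter):
        f_prev, f_cur = F(prev), F(cur)
        for i in range(n):
            denom = cur[i] - prev[i]
            if abs(denom) >= eps:      # secant-like update of entry i;
                d[i] = (f_cur[i] - f_prev[i]) / denom
            # else: keep the previous entry (assumed fallback)
        if max(abs(v) for v in f_cur) < tol:
            return cur
        nxt = [cur[i] - f_cur[i] / d[i] for i in range(n)]
        prev, cur = cur, nxt
    return cur

# Hypothetical weakly coupled system, so each F_i is dominated by x_i
F = lambda x: [x[0] ** 2 - 2.0 + 0.01 * x[1],
               x[1] ** 2 - 3.0 + 0.01 * x[0]]
sol = diagonal_newton(F, [1.0, 1.5], [1.6, 1.8])
```

Because each diagonal entry is a one-dimensional difference quotient, the scheme works best when the system is diagonally dominant, as in this toy example.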

The order of convergence and efficiency index of such solvers are not good enough for practical problems in which highly efficient iterations are required. This motivated the development of higher-order iterative solvers for nonlinear systems. Third-order methods free from second derivatives were derived from quadrature rules for solving systems of nonlinear equations. These methods require one function evaluation and two evaluations of first derivatives at two different points. An example of such iterations is the Arithmetic Mean Newton method (3rd AM), derived from the trapezoidal rule [12]: where , wherein is the Newton iterate.
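For a single equation, the Arithmetic Mean Newton step described above can be written in a few lines. This scalar sketch is illustrative only: for systems, the division becomes a linear solve with the averaged Jacobian (J(x_k) + J(y_k))/2, and the test equation below is a hypothetical choice:

```python
def am_newton_step(f, df, x):
    """One Arithmetic Mean Newton (trapezoidal-rule) step for f(x) = 0."""
    y = x - f(x) / df(x)                     # inner Newton predictor
    return x - 2.0 * f(x) / (df(x) + df(y))  # arithmetic-mean corrector

f  = lambda x: x ** 3 - 2.0   # hypothetical test equation, root 2**(1/3)
df = lambda x: 3.0 * x ** 2

x = 1.5
for _ in range(5):
    x = am_newton_step(f, df, x)
```

With third-order convergence, five steps from 1.5 already reach the cube root of 2 to machine precision.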

The convergence analysis of these methods using point of attraction theory can be found in [13]. This third-order Newton-like method is more efficient than Halley’s method because it does not require the evaluation of a third-order tensor of values.

In the next section, we present our new algorithm which reaches the highest possible order four by using only three functional evaluations of the multivariate function for solving systems of nonlinear equations.

#### 3. A Novel Computational Iterative Algorithm

Let be the order of a method and be defined as where and represent the number of times and are evaluated, respectively. The Logarithmic Informational Efficiency, or Efficiency Index, for nonlinear systems [1] is given by The efficiency indices of Newton's method (2nd NM) and of the third-order methods free from second derivatives (3rd AM) are given by respectively. We observe that if . That is, the third-order methods free from second derivatives are less efficient than Newton's method for systems of nonlinear equations. Let us also note that scheme has the same efficiency index as Newton's method. Thus, it is important to develop fourth-order methods from these third-order methods in order to improve the efficiency. Soleymani et al. [5] have recently improved the 3rd AM method to obtain a class of fourth-order Jarratt-type methods for solving the nonlinear equation . A simplified example is
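Assuming, as is usual for this index, that one evaluation of the function costs n scalar evaluations and one Jacobian costs n² of them, the comparison of efficiency indices can be reproduced numerically. The helper function is an illustrative sketch, not notation from the paper:

```python
def efficiency_index(p, n_f_evals, n_jac_evals, n):
    """EI = p**(1/C), where C = (#F)*n + (#J)*n**2 scalar evaluations."""
    cost = n_f_evals * n + n_jac_evals * n * n
    return p ** (1.0 / cost)

for n in (2, 5, 10):
    ei_nm  = efficiency_index(2, 1, 1, n)   # Newton: one F, one Jacobian
    ei_3am = efficiency_index(3, 1, 2, n)   # 3rd AM: one F, two Jacobians
    ei_4am = efficiency_index(4, 1, 2, n)   # 4th AM: one F, two Jacobians
    assert ei_4am > ei_nm > ei_3am          # ordering for systems (n >= 2)
```

For n = 1 the ordering of 2nd NM and 3rd AM reverses (3^(1/3) > 2^(1/2) is false, but 3^(1/3) > 2^(1/2) fails only for systems); this is why the text restricts the comparison to systems of equations.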

We extend this method to the multivariate case. The improved fourth-order Arithmetic Mean Newton method (4th AM) for systems of nonlinear equations can now be stated as follows: where , is the identity matrix, and . Thus, by simplifying, we obtain the following fourth-order convergent algorithm.

*Proposed Algorithm*

*Step 1.* Solve the linear system .

*Step 2.* Calculate (i) , (ii) .

*Step 3.* Solve the linear system , where .

*Step 4.* Calculate .

The main theorem will be demonstrated by means of the -dimensional Taylor expansion of the functions involved. Let be sufficiently Fréchet differentiable in . Using the notation introduced in [14], the th derivative of at is the -linear function such that . It is easy to observe that (1) ; (2) for all permutations of . So, in the following, we will denote: (a) , (b) .

It is well known that, for lying in a neighborhood of a solution of the nonlinear system , Taylor's expansion can be applied (assuming that the Jacobian matrix is nonsingular): where . We observe that since and .
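For concreteness, the standard expansions used in this kind of proof, written with the operators C_q introduced above and consistent with the framework of [14], read:

```latex
F\bigl(x^{(k)}\bigr) = F'(\alpha)\left[ e_k + C_2 e_k^2 + C_3 e_k^3 + C_4 e_k^4 \right] + O\bigl(e_k^5\bigr),
\qquad
F'\bigl(x^{(k)}\bigr) = F'(\alpha)\left[ I + 2C_2 e_k + 3C_3 e_k^2 + 4C_4 e_k^3 \right] + O\bigl(e_k^4\bigr),
```

where $e_k = x^{(k)} - \alpha$ and $C_q = \frac{1}{q!}\,[F'(\alpha)]^{-1} F^{(q)}(\alpha)$ for $q \ge 2$. This is a sketch of the generic expansion, not a reconstruction of the paper's exact numbered equations.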

In addition, we can express as
where is the identity matrix. Therefore, . From (3.7), we obtain
where
We denote as the error in the th iteration. The equation
where is a -linear function , is called the *error equation*, and is the *order of convergence*. Observe that is .
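Written out, the error equation referred to above takes the standard form

```latex
e_{k+1} = L\, e_k^{\,p} + O\!\left(e_k^{\,p+1}\right),
```

where $L$ is a $p$-linear function and $e_k^{\,p}$ abbreviates $(e_k, \overset{p}{\ldots}, e_k)$; the method has order of convergence $p$. This restates the generic definition, not the method-specific equation that was lost from this passage.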

Theorem 3.1. *Let be sufficiently Fréchet differentiable at each point of an open convex neighborhood of , which is a solution of the system . Let us suppose that is continuous and nonsingular in , and that is close enough to . Then the sequence obtained using the iterative expression (3.5) converges to with order 4, the error equation being
*

*Proof.* From (3.6) and (3.7), we obtain
where , and .

From (3.8), we have
where and .

Then,
and the expression for is

The Taylor expansion of Jacobian matrix is
where
Therefore,
and then,

On the other hand, the arithmetic mean of the Jacobian matrices can be expressed as
From an analogous reasoning as in (3.8), we obtain
where
So,
and finally, by using (3.19) and (3.23), the error equation can be expressed as
So, taking into account (3.24), it can be concluded that the order of convergence of the proposed method is four.

The efficiency index of the fourth-order method free from second derivatives (4th AM) is given by This shows that the 4th AM method is more efficient than the 2nd NM and 3rd AM methods. We next conduct numerical experiments to compare the methods.

#### 4. Numerical Examples

In this section, we compare the performance of the contributed method with Newton's scheme and the Arithmetic Mean Newton method (2.7). The algorithms have been written in MATLAB 7.6 and tested on the examples given below. For the following test problems, the approximate solutions are calculated correct to 500 digits by using variable-precision arithmetic. We use the following stopping criterion: We have used the approximated computational order of convergence given by (see [15]) Let be the number of iterations required before convergence is reached and be the minimum residual.
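The approximated computational order of convergence (ACOC) of [15] is obtained from three consecutive differences of iterates. The sketch below computes it for plain scalar Newton on a hypothetical equation, where the reported value should be close to 2; the function name and test setup are illustrative choices:

```python
from math import log

def acoc(x3, x2, x1, x0):
    """rho ~ ln(|x3-x2| / |x2-x1|) / ln(|x2-x1| / |x1-x0|)."""
    return log(abs(x3 - x2) / abs(x2 - x1)) / log(abs(x2 - x1) / abs(x1 - x0))

# Newton iterates for f(x) = x^2 - 2 starting at x = 2 (hypothetical demo)
xs = [2.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

rho = acoc(xs[4], xs[3], xs[2], xs[1])   # close to 2 for Newton's method
```

The estimate degrades once the differences hit machine precision, so it is computed from the last few iterates before full convergence.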

We check the mentioned method by solving the following test problems.

*Test Problem 1 (TP1).* Consider , where , and
The Jacobian matrix is given by . The starting vector is and the exact solution is . In this case, the comparison of efficiencies is

*Test Problem 2 (TP2).* Consider

The solution is

We choose the starting vector . The Jacobian matrix is given by and has nonzero elements. In this case, the comparison of efficiencies is:

*Test Problem 3 (TP3).* Consider
We solve this system using the initial approximation . The solution is
The Jacobian is a matrix given by
with nonzero elements. In this case, the comparison of efficiencies is:

Table 1 gives the results for Test Problems 1, 2, and 3. For all problems, the fourth-order method converges in the fewest iterations. The computational order of convergence agrees with the theory. The 4th AM method gives the best results in terms of the smallest residual, and it is the most efficient method compared with 2nd NM and 3rd AM. It can compete with the 4th NR method and has the advantage of a higher efficiency index.

#### 5. Concluding Remarks

The efficiency of quadratically and cubically convergent multidimensional methods is not satisfactory for most practical problems. Thus, in this paper, we have extended a third-order method for systems of nonlinear equations to a fourth-order one. We have shown that the fourth-order iterative method is more efficient than the second-order Newton method and the third-order methods. Numerical experiments have also confirmed the efficiency of the fourth-order method.

#### Acknowledgments

The authors would like to thank the referees for their valuable comments and suggestions to improve the readability of the paper. This research was supported by Ministerio de Ciencia y Tecnología MTM2011-28636-C02-02.

#### References

1. D. K. R. Babajee, M. Z. Dauhoo, M. T. Darvishi, A. Karami, and A. Barati, “Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations,” *Journal of Computational and Applied Mathematics*, vol. 233, no. 8, pp. 2002–2012, 2010.
2. M. T. Darvishi and A. Barati, “A fourth-order method from quadrature formulae to solve systems of nonlinear equations,” *Applied Mathematics and Computation*, vol. 188, no. 1, pp. 257–261, 2007.
3. J. A. Ezquerro, J. M. Gutiérrez, M. A. Hernández, and M. A. Salanova, “A biparametric family of inverse-free multipoint iterations,” *Computational & Applied Mathematics*, vol. 19, no. 1, pp. 109–124, 2000.
4. J. F. Traub, *Iterative Methods for the Solution of Equations*, Prentice Hall, Englewood Cliffs, NJ, USA, 1964.
5. F. Soleymani, S. K. Khattri, and S. Karimi Vanani, “Two new classes of optimal Jarratt-type fourth-order methods,” *Applied Mathematics Letters*, vol. 25, no. 5, pp. 847–853, 2012.
6. A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, “Accelerated methods for order 2p for systems of nonlinear equations,” *Journal of Computational and Applied Mathematics*, vol. 233, no. 10, pp. 2696–2702, 2010.
7. B. H. Dayton, T.-Y. Li, and Z. Zeng, “Multiple zeros of nonlinear systems,” *Mathematics of Computation*, vol. 80, no. 276, pp. 2143–2168, 2011.
8. W. Haijun, “New third-order method for solving systems of nonlinear equations,” *Numerical Algorithms*, vol. 50, no. 3, pp. 271–282, 2009.
9. M. A. Noor, M. Waseem, K. I. Noor, and E. Al-Said, “Variational iteration technique for solving a system of nonlinear equations,” *Optimization Letters*. In press.
10. A. Ben-Israel and T. N. E. Greville, *Generalized Inverses*, Springer, New York, NY, USA, 2nd edition, 2003.
11. M. Y. Waziri, W. J. Leong, M. A. Hassan, and M. Monsi, “An efficient solver for systems of nonlinear equations with singular Jacobian via diagonal updating,” *Applied Mathematical Sciences*, vol. 4, no. 69–72, pp. 3403–3412, 2010.
12. M. Frontini and E. Sormani, “Third-order methods from quadrature formulae for solving systems of nonlinear equations,” *Applied Mathematics and Computation*, vol. 149, no. 3, pp. 771–782, 2004.
13. D. K. R. Babajee, *Analysis of higher order variants of Newton’s method and their applications to differential and integral equations and in ocean acidification*, Ph.D. thesis, University of Mauritius, 2010.
14. A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, “A modified Newton-Jarratt's composition,” *Numerical Algorithms*, vol. 55, no. 1, pp. 87–99, 2010.
15. A. Cordero and J. R. Torregrosa, “Variants of Newton's method using fifth-order quadrature formulas,” *Applied Mathematics and Computation*, vol. 190, no. 1, pp. 686–698, 2007.