
International Journal of Engineering Mathematics

Volume 2014 (2014), Article ID 828409, 11 pages

http://dx.doi.org/10.1155/2014/828409

## Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear Equations

Department of Mathematics, Maulana Azad National Institute of Technology, Bhopal 462051, India

Received 17 August 2013; Revised 12 December 2013; Accepted 31 December 2013; Published 23 February 2014

Academic Editor: Viktor Popov

Copyright © 2014 Anuradha Singh and J. P. Jaiswal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In order to find the zeros of nonlinear equations, we propose in this paper a family of third-order and optimal fourth-order iterative methods, constructed through the concept of weight functions, and we derive some particular cases of these methods. The multivariate case of these methods has also been discussed. The numerical results show that the proposed methods are more efficient than some existing third- and fourth-order methods.

#### 1. Introduction

Newton’s iterative method is one of the most eminent methods for finding roots of a nonlinear equation f(x) = 0:

x_{n+1} = x_n − f(x_n)/f′(x_n), n = 0, 1, 2, ….

Recently, researchers have focused on improving the order of convergence by evaluating additional functions and first derivatives per iteration. In order to improve the order of convergence and the efficiency index, many modified third-order methods have been obtained by different approaches (see [1–3]). Kung and Traub [4] conjectured that an iterative method without memory based on n function evaluations per iteration can attain at most the optimal order 2^(n−1). By this criterion, Newton’s iteration, with two function evaluations per iteration, is optimal, with efficiency index 2^(1/2) ≈ 1.414. Using the optimality concept, many researchers have tried to construct iterative methods of optimal higher order of convergence. The third-order methods mentioned above require three function evaluations per full iteration, so their efficiency index is 3^(1/3) ≈ 1.442, which is not optimal. Very recently, the concept of weight functions has been used to obtain different classes of third- and fourth-order methods; see [5–7] and the references therein.
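The classical Newton iteration above can be sketched in a few lines. This is an illustrative implementation only, not the authors' code (the paper's computations are done in MATHEMATICA); the function names and tolerance are chosen for demonstration:

```python
# Illustrative sketch of Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Return an approximate root of f starting from x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# Example: the root of f(x) = x^2 - 2 is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Each step costs one evaluation of f and one of f′, which is what makes the method optimal in the Kung–Traub sense.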

This paper is organized as follows. In Section 2, we present a new class of third-order and fourth-order iterative methods by using the concept of weight functions, which includes some existing methods and also provides some new methods. We have extended some of these methods for multivariate case. Finally, we employ some numerical examples and compare the performance of our proposed methods with some existing third- and fourth-order methods.

#### 2. Methods and Convergence Analysis

First we give some definitions which we will use later.

*Definition 1. *Let f be a real-valued function with a simple root α, and let {x_n} be a sequence of real numbers converging to α. The sequence is said to converge with order p if

lim_{n→∞} (x_{n+1} − α)/(x_n − α)^p = C,

where C ≠ 0 is the asymptotic error constant and p ≥ 1.

*Definition 2. *Let n be the number of function evaluations required per iteration of a method. The efficiency of the method is measured by the concept of efficiency index [8, 9], defined as

E = p^(1/n),

where p is the order of convergence of the method.
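The efficiency index of Definition 2 is easy to tabulate; the following short sketch (illustrative only) compares the methods discussed in this paper:

```python
# Efficiency index E = p**(1/n), with p the order of convergence and
# n the number of function evaluations per iteration (Definition 2).
def efficiency_index(p, n):
    return p ** (1.0 / n)

newton_ei = efficiency_index(2, 2)   # Newton: order 2, two evaluations
cubic_ei = efficiency_index(3, 3)    # third-order methods of Section 2.1
quartic_ei = efficiency_index(4, 3)  # optimal fourth-order methods of Section 2.2
```

This reproduces the figures quoted in the text: 2^(1/2) ≈ 1.414 for Newton, 3^(1/3) ≈ 1.442 for the third-order family, and 4^(1/3) ≈ 1.587 for the optimal fourth-order family.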

##### 2.1. Third-Order Iterative Methods

To improve the order of convergence of Newton’s method, modified methods have been given by Grau-Sánchez and Díaz-Barrero [10], Weerakoon and Fernando [1], Homeier [2], Chun and Kim [3], and others. Motivated by these papers, we consider the two-step iterative method (4), which involves a weight function and a real constant. We now determine under what conditions it is of order three.

Theorem 3. *Let α be a simple root of the function f, and let f have a sufficient number of continuous derivatives in a neighborhood of α. Then the method (4) has third-order convergence when the weight function satisfies the following conditions:
*

*Proof. *Let e_n = x_n − α denote the error at the nth iteration, and set c_k = f^(k)(α)/(k! f′(α)) for k = 2, 3, …. Expanding f(x_n) and f′(x_n) about the simple root α in Taylor series, we have
Now it can be easily found that
By using (7) in the first step of (4), we obtain
At this stage, we expand around the root by taking (8) into consideration. We have
Furthermore, we have
By virtue of (10) and (4), we get

Hence, from (11) and (4) we obtain the following general equation, which has third-order convergence:
This proves the theorem.

*Particular Cases*. To obtain different third-order methods, we make particular choices in (4).

*Case 1. *If we take in (4), then we get the formula:
and its error equation is given by

*Case 2. *If we take in (4), then we get the formula:
and its error equation is given by

*Case 3. *If we take in (4), then we get the formula:
and its error equation is given by

*Case 4. *If we take in (4), then we get the formula:
and its error equation is given by

*Case 5. *If we take in (4), then we get the formula:
which is Heun’s formula [11].
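Heun’s third-order scheme admits a compact implementation. The following sketch uses the classical form y_n = x_n − (2/3) f(x_n)/f′(x_n), x_{n+1} = x_n − (f(x_n)/4)(1/f′(x_n) + 3/f′(y_n)); this form is an assumption on our part, since the displayed formula of Case 5 is not reproduced here:

```python
# Heun's third-order root-finding scheme (classical form, assumed):
#   y_n     = x_n - (2/3) f(x_n)/f'(x_n)
#   x_{n+1} = x_n - (f(x_n)/4) * (1/f'(x_n) + 3/f'(y_n))
def heun(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = fprime(x)
        y = x - (2.0 / 3.0) * fx / dfx
        x -= (fx / 4.0) * (1.0 / dfx + 3.0 / fprime(y))
    return x

# Example: cube root of 2 from f(x) = x^3 - 2
cbrt2 = heun(lambda x: x ** 3 - 2.0, lambda x: 3.0 * x * x, 1.0)
```

Each step uses one evaluation of f and two of f′, in line with the 3^(1/3) efficiency index discussed above.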

*Remark 4. *By taking different values of the constant and of the weight function in (4), one can obtain a number of third-order iterative methods.

##### 2.2. Optimal Fourth-Order Iterative Methods

The methods obtained in the previous subsection have order three with three function evaluations (one function and two derivatives) per step, so their efficiency index is 3^(1/3) ≈ 1.442, which is not optimal. To obtain optimal fourth-order methods, we consider the two-step scheme (22), which involves two real-valued weight functions and a real constant. The weight functions should be chosen in such a way that the order of convergence reaches the optimal level four without additional function evaluations. The following theorem indicates the conditions required on the weight functions and the constant in (22) for optimal fourth-order convergence.

Theorem 5. *Let α be a simple root of the function f, and let f have a sufficient number of continuous derivatives in a neighborhood of α. Then the method (22) has fourth-order convergence when the constant and the weight functions satisfy the following conditions:
*

*Proof. *Using (6) and putting in the first step of (22), we have
Now we expand around the root by taking (24) into consideration. Thus, we have
Furthermore, we have
By virtue of (26) and (22), we obtain
Finally, from (27) and (22) we can have the following general equation, which reveals the fourth-order convergence:
This proves the theorem.

*Particular Cases*

*Method 1. *If we take and , where , then the iterative method is given by
and its error equation is given by

*Method 2. *If we take and , where , then the iterative method is given by
and its error equation is given by

*Method 3. *If we take and , where , then the iterative method is given by
and its error equation is given by

*Method 4. *If we take and , where , then the iterative method is given by
and its error equation is

*Method 5. *If we take and , where , then the iterative method is given by
which is the same as formula (11) of [12].

*Method 6. *If we take and , where , then the iterative method is given by
and its error equation is

*Remark 6. *By taking different values of the constant and of the weight functions in (22), one can obtain a number of fourth-order iterative methods.
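One concrete optimal fourth-order scheme of this Jarratt type, used later as the comparison method JM4 in Section 4, is Jarratt's method. The sketch below uses its classical form, stated here as an assumption since it need not match any particular member of family (22):

```python
# Jarratt's optimal fourth-order method (classical form, assumed):
#   y_n     = x_n - (2/3) f(x_n)/f'(x_n)
#   x_{n+1} = x_n - (1/2) * (3 f'(y_n) + f'(x_n)) / (3 f'(y_n) - f'(x_n)) * f(x_n)/f'(x_n)
# Three evaluations per step (f, f', f') give efficiency index 4**(1/3).
def jarratt(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = fprime(x)
        y = x - (2.0 / 3.0) * fx / dfx
        dfy = fprime(y)
        x -= 0.5 * (3.0 * dfy + dfx) / (3.0 * dfy - dfx) * fx / dfx
    return x

# Example: sqrt(2) from f(x) = x^2 - 2
root4 = jarratt(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```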

#### 3. Further Extension to Multivariate Case

In this section, we extend some of the proposed third- and fourth-order methods to systems of nonlinear equations; the remaining methods can be extended similarly. The multivariate version of our third-order method (15) is given by (40), in which the scalar iterates are replaced by vectors, the identity matrix takes the place of the unit constant, and the Jacobian matrix of the system evaluated at the current iterate replaces the derivative. Let the initial guess be any point in a neighborhood of the exact solution of the nonlinear system. If the Jacobian matrix is nonsingular, then the Taylor series expansion for the multivariate case is given by (41)–(43), with the error at the kth iteration denoted as before. The order of convergence of method (40) is established in the following theorem.

Theorem 7. *Let F be sufficiently Fréchet differentiable in a convex set D containing a root of F. Suppose that the Jacobian of F is continuous and nonsingular in D and that the initial guess is close to the root. Then the sequence obtained by the iterative expression (40) converges to the root with order three.*

*Proof. *For the convenience of calculation, we make the appropriate substitution in the first step of (40). From (41), (42), and (43), we have
where , . Now from (46) and (44), we can obtain

where

By virtue of (47) the first step of the method (40) becomes
Taylor’s series expansion for Jacobian matrix can be given as
Now
where
Taking inverse of both sides of (51), we get
where
By multiplying (53) and (50), we get
where
and the values of the remaining coefficients are given below:
Multiplying (47) and (55), we obtain
Substituting the above expression into the second step of (40), we get
The final error equation of method (40) is given by
This completes the proof of Theorem 7.

The multivariate case of (33) is given by (61). The following theorem shows that this method has fourth-order convergence.

Theorem 8. *Let F be sufficiently Fréchet differentiable in a convex set D containing a root of F. Suppose that the Jacobian of F is continuous and nonsingular in D and that the initial guess is close to the root. Then the sequence obtained by the iterative expression (61) converges to the root with order four.*

*Proof. *For the convenience of calculation, we make the corresponding substitutions in (61). From (46) and (50), we have
From the above equation we have
With the help of (62) and (63), we can obtain
Multiplying (64) by (58), we have
where
The final error equation of method (61) is given by
which confirms the theorem.
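The multivariate methods above are built from Newton-type steps in which a linear system with the Jacobian is solved at each iteration. As a baseline for comparison, the classical multivariate Newton iteration can be sketched as follows; this is an illustration with an assumed toy system, not the paper's method (40) or (61):

```python
import numpy as np

# Classical multivariate Newton step: solve J(x) d = F(x), then x <- x - d.
def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(J(x), Fx)
    return x

# Toy system (assumed for demonstration): x^2 + y^2 = 2 and x = y,
# with solution (1, 1).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
sol = newton_system(F, J, [2.0, 0.5])
```

The third- and fourth-order variants differ only in adding a second (and third) evaluation of F or of the Jacobian per step, reusing the same linear solves.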

#### 4. Numerical Testing

##### 4.1. Single Variate Case

In this section, ten different test functions, listed in Table 1, are considered for the single variate case to illustrate the accuracy of the proposed iterative methods. The root of each nonlinear test function is also listed. All computations presented here have been performed in *MATHEMATICA 8*. Many areas of science and engineering require very high-precision computations, so we use 1000-digit floating-point arithmetic via the “*SetAccuracy*” command. We compare the performance of our proposed methods with some well-established third-order and fourth-order iterative methods. In Table 2, Heun’s method is denoted by HN3, our proposed third-order method (15) by M3, the fourth-order method (17) of [5] by SL4, fourth-order Jarratt’s method by JM4, and our proposed fourth-order method by M4. The results are listed in Table 2.
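High-precision arithmetic of the kind used for Table 2 is not specific to MATHEMATICA; for instance, Python's standard `decimal` module can run a Newton iteration at arbitrary precision. A sketch at 60 digits rather than the paper's 1000, for brevity:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # 60 significant digits (the paper uses 1000)

def newton_decimal(f, fprime, x0, iters=10):
    """Newton iteration carried out in arbitrary-precision decimal arithmetic."""
    x = Decimal(x0)
    for _ in range(iters):
        x = x - f(x) / fprime(x)
    return x

# Example: sqrt(2) to roughly 60 digits from f(x) = x^2 - 2
root_hp = newton_decimal(lambda x: x * x - 2, lambda x: 2 * x, 1)
```

Since Newton's method doubles the number of correct digits per step, ten iterations from x0 = 1 saturate the 60-digit working precision.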

An effective way to compare the efficiency of the methods is the CPU time used to execute the program. In the present work, the CPU time has been computed with the “*TimeUsed*” command in *MATHEMATICA*. It is well known that CPU time is not unique and depends on the specification of the computer. All timings in this paper were taken on a machine running Microsoft Windows 8 with an Intel(R) Core(TM) i5-3210M CPU @ 2.50 GHz and 4.00 GB of RAM (64-bit operating system). The mean CPU time is obtained by averaging 10 runs of the program. The mean CPU time (in seconds) for the different methods is given in Table 3.

##### 4.2. Multivariate Case

Further, six nonlinear systems (Examples 9–14) are considered for the numerical testing of systems of nonlinear equations. We compare our proposed third-order method (40) (MM3) with Algorithms NR1 and NR2 of [13], and our fourth-order method (61) (MM4) with the method SH4 of [14] and the method BB4 of [15]. The comparison of the norm of the function over successive iterations is given in Table 4.

*Example 9. *Consider
with initial guess , and one of its solutions is .

*Example 10. *Consider
with initial guess , and one of its solutions is .

*Example 11. *Consider
with initial guess = , and one of its solutions is .

*Example 12. *Consider
with initial guess , and one of its solutions is .

*Example 13. *Consider
with initial guess , and one of its solutions is .

*Example 14. *Consider
with initial guess , and one of its solutions is .

#### 5. Conclusion

In the present work, we have provided a family of third-order and optimal fourth-order iterative methods, which yields some existing as well as many new third- and fourth-order iterative methods. The multivariate case of these methods has also been considered. The efficiency of our methods is supported by the results in Tables 2 and 4.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors would like to express their sincerest thanks to the editor and reviewer for their constructive suggestions, which significantly improved the quality of this paper. The authors would also like to record their sincere thanks to Dr. F. Soleymani for providing his efficient cooperation.

#### References

1. S. Weerakoon and T. G. I. Fernando, “A variant of Newton's method with accelerated third-order convergence,” *Applied Mathematics Letters*, vol. 13, no. 8, pp. 87–93, 2000.
2. H. H. H. Homeier, “On Newton-type methods with cubic convergence,” *Journal of Computational and Applied Mathematics*, vol. 176, no. 2, pp. 425–432, 2005.
3. C. Chun and Y. Kim, “Several new third-order iterative methods for solving nonlinear equations,” *Acta Applicandae Mathematicae*, vol. 109, no. 3, pp. 1053–1063, 2010.
4. H. T. Kung and J. F. Traub, “Optimal order of one-point and multipoint iteration,” *Journal of the ACM*, vol. 21, no. 4, pp. 643–651, 1974.
5. F. Soleymani, “Two new classes of optimal Jarratt-type fourth-order methods,” *Applied Mathematics Letters*, vol. 25, no. 5, pp. 847–853, 2012.
6. M. Sharifi, D. K. R. Babajee, and F. Soleymani, “Finding the solution of nonlinear equations by a class of optimal methods,” *Computers and Mathematics with Applications*, vol. 63, no. 4, pp. 764–774, 2012.
7. S. K. Khattri and S. Abbasbandy, “Optimal fourth order family of iterative methods,” *Matematički Vesnik*, vol. 63, no. 1, pp. 67–72, 2011.
8. W. Gautschi, *Numerical Analysis: An Introduction*, Birkhäuser, Boston, Mass, USA, 1997.
9. J. F. Traub, *Iterative Methods for the Solution of Equations*, Chelsea Publishing, New York, NY, USA, 1997.
10. M. Grau-Sánchez and J. L. Díaz-Barrero, “Zero-finder methods derived using Runge–Kutta techniques,” *Applied Mathematics and Computation*, vol. 217, no. 12, pp. 5366–5376, 2011.
11. K. Heun, “Neue Methode zur approximativen Integration der Differentialgleichungen einer unabhängigen Veränderlichen,” *Zeitschrift für Mathematik und Physik*, vol. 45, pp. 23–38, 1900.
12. F. Soleymani and D. K. R. Babajee, “Computing multiple zeros using a class of quartically convergent methods,” *Alexandria Engineering Journal*, vol. 52, pp. 531–541, 2013.
13. M. A. Noor and M. Waseem, “Some iterative methods for solving a system of nonlinear equations,” *Computers and Mathematics with Applications*, vol. 57, no. 1, pp. 101–106, 2009.
14. J. R. Sharma, R. K. Guha, and R. Sharma, “An efficient fourth-order weighted-Newton method for systems of nonlinear equations,” *Numerical Algorithms*, vol. 62, pp. 307–323, 2013.
15. D. K. R. Babajee, A. Cordero, F. Soleymani, and J. R. Torregrosa, “On a novel fourth-order algorithm for solving systems of nonlinear equations,” *Journal of Applied Mathematics*, vol. 2012, Article ID 165452, 12 pages, 2012.