Journal of Applied Mathematics

Volume 2012, Article ID 294086, 22 pages

http://dx.doi.org/10.1155/2012/294086

## Another Simple Way of Deriving Several Iterative Functions to Solve Nonlinear Equations

^{1}Department of Mathematics, Panjab University, Chandigarh 160 014, India
^{2}University Institute of Engineering and Technology, Panjab University, Chandigarh 160 014, India

Received 25 June 2012; Revised 20 September 2012; Accepted 4 October 2012

Academic Editor: Alicia Cordero

Copyright © 2012 Ramandeep Behl et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We present another simple way of deriving several iterative methods for solving nonlinear equations numerically. The presented approach of deriving these methods is based on exponentially fitted osculating straight line. These methods are the modifications of Newton's method. Also, we obtain well-known methods as special cases, for example, Halley's method, super-Halley method, Ostrowski's square-root method, Chebyshev's method, and so forth. Further, new classes of third-order multipoint iterative methods free from a second-order derivative are derived by semidiscrete modifications of cubically convergent iterative methods. Furthermore, a simple linear combination of two third-order multipoint iterative methods is used for designing new optimal methods of order four.

#### 1. Introduction

Various problems arising in mathematical and engineering science can be formulated in terms of a nonlinear equation of the form
f(x) = 0. (1.1)
To solve (1.1), we can use iterative methods such as Newton's method [1–19] and its variants, namely Halley's method [1–3, 5, 6, 8, 9], Euler's method (irrational Halley's method) [1, 3], Chebyshev's method [1, 2], the super-Halley method [2, 4], Ostrowski's square-root method [5, 6], and so forth, available in the literature.

Among these iterative methods, Newton's method is probably the best known and most widely used algorithm for solving such problems. It converges quadratically to a simple root and linearly to a multiple root. Its geometric construction consists in considering the straight line y = a(x − x_n) + b and then determining the unknowns a and b by imposing the tangency conditions y(x_n) = f(x_n) and y'(x_n) = f'(x_n), thereby obtaining the tangent line to the graph of f(x) at x_n.

The point of intersection of this tangent line with the x-axis gives the celebrated Newton's method
x_{n+1} = x_n − f(x_n)/f'(x_n). (1.5)
Newton's method for multiple roots appears in the work of Schröder [19] and is given as
x_{n+1} = x_n − f(x_n) f'(x_n) / (f'(x_n)^2 − f(x_n) f''(x_n)).
This method has second-order convergence, including the case of multiple roots. It may be obtained by applying Newton's method to the function u(x) = f(x)/f'(x), which has a simple root at each multiple root of f(x). The well-known third-order methods which entail the evaluation of f''(x) are close relatives of Newton's method and can be obtained by geometric derivation [1, 2, 5] from different quadratic curves, for example a parabola, hyperbola, circle, or ellipse.
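The two iterations just named have standard closed forms, and a minimal Python sketch of both may help fix ideas. The formulas are the classical ones from the literature; the test equations x^3 − 2 = 0 (simple root) and x(x − 1)^2 = 0 (double root) are illustrative choices, not examples from the paper.

```python
def newton(f, df, x, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

def schroder(f, df, d2f, x, tol=1e-12, max_iter=50):
    """Schroder's method (Newton applied to u = f/f'):
    x_{n+1} = x_n - f f' / (f'^2 - f f'')."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        x -= fx * dfx / (dfx**2 - fx * d2f(x))
    return x

# Illustrative problems (not from the paper): a simple root and a double root.
r = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5)
s = schroder(lambda x: (x - 1)**2 * x,          # double root at x = 1
             lambda x: (x - 1) * (3 * x - 1),
             lambda x: 6 * x - 4,
             0.6)
```

As the text notes, Newton's method slows to linear convergence at the double root, while Schröder's correction restores quadratic convergence there.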

The purpose of the present work is to provide some alternative derivations to the existing third-order methods through an exponentially fitted osculating straight line. Some other new formulas are also presented. Here, we will make use of symbolic computation in the programming package Mathematica 7 to derive the error equations of various iterative methods.

Before starting the development of iterative scheme, we would like to introduce some basic definitions.

*Definition 1.1. *A sequence of iterates {x_n} is said to converge with order p ≥ 1 to a point x* if
|x_{n+1} − x*| ≤ c |x_n − x*|^p, n ≥ 0,
for some c > 0. If p = 1, the sequence is said to converge linearly to x*. In that case, we require c < 1; the constant c is called the rate of linear convergence of x_n to x*.

*Definition 1.2. *Let e_n = x_n − x* be the error in the nth iteration; one calls the relation
e_{n+1} = c e_n^p + O(e_n^{p+1})
the error equation. If we can obtain the error equation for any iterative method, then the value of p is its order of convergence.

*Definition 1.3. *Let d be the number of new pieces of information required by a method. A piece of information typically is any evaluation of a function or one of its derivatives. The efficiency of the method is measured by the efficiency index [6], defined by
E = p^{1/d}, (1.9)
where p is the order of the method.

#### 2. Development of the Methods

Two equivalent derivations for the third-order iterative methods through exponentially fitted osculating straight line are presented below.

*Case I. *Consider an exponentially fitted osculating straight line in the following form:
where , , and and are arbitrary constants. These constants will be determined by using the tangency conditions at the point .

If an exponentially fitted osculating straight line given by (2.1) is tangent to the graph of the equation in question, that is, at , then we have
Therefore, we obtain
and a quadratic equation in as follows:
Suppose that the straight line (2.1) meets the x-axis at ; then
and it follows from (2.1) that
From (2.6) and if , we get
or
This is the well-known one-parameter family of Newton’s method [2]. This family converges quadratically under the condition , while is permitted in some points. For , we obtain Newton’s method. The error equation of Scheme (2.8) is given by
where , , , and is the root of the nonlinear equation (1.1). In order to obtain quadratic convergence, the entity in the denominator should be the largest in magnitude. It is straightforward to see from the above error equation (2.9) that for , we obtain the well-known third-order Halley's method.

If we apply the well-known Newton’s method (1.5) to the modified function , we get another iterative method as
This is a new one-parameter modified family of Schröder's method [19] for an equation having multiple roots of unknown multiplicity. It is interesting to note that by ignoring the term , method (2.10) reduces to Schröder's method. It is easy to verify that this method is also of order two, including the case of multiple zeros. Theorem 2.1 indicates for which choices of the disposable parameter in family (2.10) the order of convergence reaches at least second and third order, respectively.

Theorem 2.1. *Let have at least three continuous derivatives defined on an open interval , enclosing a simple zero of (say ). Assume that the initial guess is sufficiently close to and in . Then the iteration scheme defined by formula (2.10) has at least second-order convergence and will have third-order convergence when . It satisfies the following error equation
**
where is a free disposable parameter, , and , .*

*Proof. * Let be a simple zero of . Expanding and about by Taylor series expansion, we have
respectively.

Furthermore, we have
From (2.12) and (2.13), we have
Finally, using the above equation (2.14) in our proposed scheme (2.10), we get
This reveals that the one-point family of methods (2.10) reaches at least second order of convergence by using only three functional evaluations (i.e., , , and ) per full iteration. It is straightforward to see that family (2.10) reaches the third order of convergence when . This completes the proof of Theorem 2.1.

Now we wish to construct different third-order iterative methods, which depend on the different values of and are given below.

*Special Cases*

(i) When , then can be neglected in (2.4), and we get
(a) Inserting this value of in (2.8), we get
This is the well-known third-order Halley’s method [1–3, 5, 6, 8, 9]. It satisfies the following error equation:
(b) Inserting this value of in (2.10), we get
This is the well-known third-order super-Halley method [1, 2, 4]. It satisfies the following error equation:
(ii) When we take as an implicit function, then the values of are based on the idea of successive approximations. Therefore, we get another value of from (2.4) as
Here, the function which occurs on the right cannot be computed until is known. To get around this difficulty, we substitute the value of from a previously obtained value in (2.16). Therefore, the new modified value of is
(a) Inserting this value of in (2.8), we get
This is a new third-order iterative method. It satisfies the following error equation:
(b) Again inserting this value of in (2.10), we get
This is again a new third-order iterative method. It satisfies the following error equation:
(iii) Similarly, from (2.4), one can get another value of as
Again inserting the previously obtained value of in the right-hand side of (2.16), we get another modified value of as
(a) Inserting this value of in (2.8), we get
This is a new third-order iterative method. It satisfies the following error equation:
(b) Again inserting this value of in (2.10), we obtain
This is again a new third-order iterative method. It satisfies the following error equation:
(iv) Now we solve the quadratic equation (2.4) for general values of . Hence, we get
(a) Inserting these values of either in (2.8) or (2.10), we get the well-known third-order Ostrowski's square-root method [5] as
It satisfies the following error equation:
(v) When we rationalize the numerator of (2.33), we get other values of as
Inserting these values of either in (2.8) or (2.10), we get
This is a new third-order iterative method. It satisfies the following error equation:
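The three classical methods recovered in Case I (Halley, super-Halley, and Ostrowski's square-root method) have standard closed forms in the literature, and can be sketched in Python as one-step maps. The displayed equations above did not survive extraction, so the forms below are the usual textbook ones, and the test equation x^3 − 2 = 0 is an illustrative choice, not from the paper.

```python
import math

# Illustrative test equation (root 2^(1/3)), not an example from the paper.
f, df, d2f = (lambda x: x**3 - 2), (lambda x: 3 * x**2), (lambda x: 6 * x)
root = 2 ** (1 / 3)

def halley_step(x):
    # Halley: x - 2 f f' / (2 f'^2 - f f'')
    fx, dfx, d2fx = f(x), df(x), d2f(x)
    return x - 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)

def super_halley_step(x):
    # super-Halley: x - (1 + L / (2(1 - L))) f/f',  with L = f f'' / f'^2
    fx, dfx = f(x), df(x)
    L = fx * d2f(x) / dfx**2
    return x - (1 + L / (2 * (1 - L))) * fx / dfx

def ostrowski_sqrt_step(x):
    # Ostrowski's square-root method: x - f / sqrt(f'^2 - f f'')
    fx, dfx = f(x), df(x)
    return x - fx / math.sqrt(dfx**2 - fx * d2f(x))

def run(step, x, n=6):
    for _ in range(n):
        x = step(x)
    return x

r_halley = run(halley_step, 1.5)
r_super  = run(super_halley_step, 1.5)
r_sqrt   = run(ostrowski_sqrt_step, 1.5)
```

All three are cubically convergent, so a handful of iterations from a reasonable starting point already reaches machine precision.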

*Case II. *In the second case, we have now considered an exponentially fitted osculating straight line in the following form:
where , , and and are arbitrary constants. Adopting the same procedure as above, we get
From (2.40) and (2.41), we get another family of iterative methods given by
This is another new one-parameter family of Newton's method. In order to obtain quadratic convergence, the entity in the denominator should be the largest in magnitude. Again note that for , we obtain Newton's method.

Now we apply the well-known Newton's method to the modified function , and we obtain another iterative method as This is another new one-parameter modified family of Schröder's method for an equation having multiple roots of unknown multiplicity. It is interesting to note that by ignoring the term , (2.43) reduces to Schröder's method. It is easy to verify that this method is also of order two, including the case of multiple zeros. Theorems 2.2 and 2.3 indicate for which choices of the disposable parameter in families (2.42) and (2.43) the order of convergence reaches at least second order.

Theorem 2.2. *Let have at least three continuous derivatives defined on an open interval , enclosing a simple zero of (say ). Assume that initial guess is sufficiently close to and in . Then the family of iterative methods defined by (2.42) has at least a second-order convergence and will have a third-order convergence when . It satisfies the following error equation:
*

Theorem 2.3. *Let have at least three continuous derivatives defined on an open interval , enclosing a simple zero of (say ). Assume that initial guess is sufficiently close to and in . Then the family of iterative methods defined by (2.43) has at least a second-order convergence and will have a third-order convergence when . It satisfies the following error equation:
*

*Proof. *The proofs of these theorems are similar to the proof of Theorem 2.1. Hence, these are omitted here.

Now we wish to construct different third-order iterative methods, which are dependent on the different values of and are given as follows.

*Special Cases*

(i) When , then can be neglected in (2.41), and we get
(a) Inserting this value of in (2.42), we get
This is the well-known cubically convergent Chebyshev’s method [1–4, 9]. It satisfies the following error equation:
(b) Again inserting this value of in (2.43), we get
This is the already derived well-known cubically convergent Halley's method.
(ii) Adopting the same procedure as in (2.21), we get another value of from (2.41) as
Now we substitute to get another modified value of from (2.46) as
(a) Inserting this value of in (2.43), we get
This is a new third-order iterative method. It satisfies the following error equation:
(b) Again inserting this value of in (2.43), we get
This is again a new third-order iterative method. It satisfies the following error equation:
(iii) From (2.41), one can get another value of as
Inserting this previously obtained value of in (2.51), we get another modified value of as
(a) Inserting this value of in (2.42), we get
This is a new third-order iterative method. It satisfies the following error equation:
(b) Again inserting this value of in (2.43), we get
This is again a new third-order iterative method. It satisfies the following error equation:
(iv) Now we solve the quadratic equation (2.41) for the general value of , and we get other values of as
By inserting the above values of either in (2.42) or (2.43), we get
This is a new third-order iterative method. It satisfies the following error equation:
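Case II recovers Chebyshev's method as its first special case. Its standard closed form (the usual textbook form, since the displayed equation above was lost in extraction) can be sketched as follows; the test equation x^3 − 2 = 0 is again an illustrative choice.

```python
# Illustrative test equation (root 2^(1/3)), not an example from the paper.
f, df, d2f = (lambda x: x**3 - 2), (lambda x: 3 * x**2), (lambda x: 6 * x)
root = 2 ** (1 / 3)

def chebyshev_step(x):
    # Chebyshev: x - (1 + L/2) f/f',  with L = f f'' / f'^2
    fx, dfx = f(x), df(x)
    L = fx * d2f(x) / dfx**2
    return x - (1 + 0.5 * L) * fx / dfx

x = 1.5
for _ in range(6):        # cubic convergence: a few steps suffice
    x = chebyshev_step(x)
```

Compared with Halley's method, Chebyshev's correction factor 1 + L/2 is the first-order truncation of 1/(1 − L/2), which explains why both are third-order variants of the same family.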

#### 3. Third-Order Multipoint Iterative Methods and Their Error Equations

The practical difficulty associated with the above-mentioned cubically convergent methods may be the evaluation of the second-order derivative. Recently, some new variants of Newton's method free from the second-order derivative have been developed in [3, 10, 11] and the references cited therein, by discretization of the second-order derivative, by the predictor-corrector approach, or by considering different quadrature formulae for the computation of the integral arising from Newton's theorem. These multipoint iteration methods calculate new approximations to a zero of a function by sampling the function and possibly its derivatives for a number of values of the independent variable at each step.

Here, we also intend to develop new third-order multipoint methods free from the second-order derivative. The main idea of proposed methods lies in the discretization of the second-order derivative involved in the above-mentioned methods.

Expanding the function about the point by Taylor’s expansion, we have Therefore, we obtain where .

Using this approximate value of into the previously obtained formulae, we get different multipoint iterative methods free from second-order derivative.

*Special Cases*

(i) Inserting this approximate value of (from (3.2)) either in (2.17) or (2.19), we get
This is the well-known third-order Newton-Secant method [3]. It satisfies the following error equation:
(ii) Inserting this approximate value of (from (3.2)) in (2.23), we get
This is a new third-order multipoint iterative method having the error equation
(iii) Inserting this approximate value of (from (3.2)) in (2.25), we get
This is a new third-order multipoint iterative method having the error equation
(iv) Inserting this approximate value of (from (3.2)) in (2.29), we get
This is a new third-order multipoint iterative method having the error equation
(v) Inserting this approximate value of (from (3.2)) in (2.31), we get
This is a new third-order multipoint iterative method having the error equation
(vi) Inserting this approximate value of (from (3.2)) in (2.47), we get
This is the well-known third-order Potra-Pták's method [12]. It satisfies the following error equation:
(vii) Inserting this approximate value of (from (3.2)) in (2.52), we get
This is a new third-order multipoint iterative method having the error equation
(viii) Inserting this approximate value of (from (3.2)) in (2.54), we get
This is a new third-order multipoint iterative method having the error equation
(ix) Inserting this approximate value of (from (3.2)) in (2.58), we get
This is a new third-order multipoint iterative method having the error equation
(x) Inserting this approximate value of (from (3.2)) in (2.63), we get
This is a new third-order multipoint iterative method having the error equation
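Two of the second-derivative-free schemes recovered above, the Newton-Secant method (case (i)) and Potra-Pták's method (case (vi)), have well-known closed forms. A minimal Python sketch, using the standard textbook forms (the displayed formulas did not survive extraction) and the illustrative test equation x^3 − 2 = 0:

```python
def newton_secant(f, df, x, tol=1e-12, max_iter=50):
    """Newton-Secant: y = x - f(x)/f'(x),
    x_new = x - f(x)^2 / (f'(x) * (f(x) - f(y)))."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx
        x = x - fx * fx / (dfx * (fx - f(y)))
    return x

def potra_ptak(f, df, x, tol=1e-12, max_iter=50):
    """Potra-Ptak: y = x - f(x)/f'(x),  x_new = x - (f(x) + f(y)) / f'(x)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx
        x = x - (fx + f(y)) / dfx
    return x

# Illustrative test equation (root 2^(1/3)), not an example from the paper.
f, df = (lambda x: x**3 - 2), (lambda x: 3 * x**2)
r1 = newton_secant(f, df, 1.5)
r2 = potra_ptak(f, df, 1.5)
```

Both schemes use two function evaluations and one derivative evaluation per step, exactly the pattern the efficiency discussion in Section 4 relies on.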

#### 4. Optimal Fourth-Order Multipoint Iterative Methods and Their Error Equations

Now we intend to develop new fourth-order optimal multipoint iterative methods [10, 11, 15–17] for solving nonlinear equations numerically. These multipoint iterative methods are of great practical importance, since they overcome the theoretical limits of one-point methods concerning convergence order and computational efficiency. For such multipoint methods, Kung and Traub [13] conjectured that the order of convergence of any multipoint method without memory, consuming n function evaluations per iteration, cannot exceed the bound 2^(n-1) (called the optimal order). Multipoint methods with this property are called optimal methods. Traub-Ostrowski's method [3], Jarratt's method [14], King's method [11], a family of Traub-Ostrowski's method [10], and so forth are famous optimal fourth-order methods, because they require only three function evaluations per step. Traub-Ostrowski's method, Jarratt's method, and King's family are the most efficient fourth-order multipoint iterative methods to date. Nowadays, obtaining new optimal methods of order four is still important, because they have a very high efficiency index. To this end, we will take a linear combination of the Newton-Secant method and the newly developed third-order multipoint iterative methods. Let us denote the Newton-Secant method by and the methods (3.5) to (3.17) by , respectively; taking the linear combination of (the Newton-Secant method) and (the newly developed multipoint methods) as follows: For some particular values of and , we get many new fourth-order optimal multipoint iterative methods as follows.
(i) When we take as method (3.5) and in (4.1), we get
This fourth-order optimal multipoint method was derived independently by Ghanbari [18]. It satisfies the following error equation:
(ii) When we take as method (3.7) and in (4.1), we get
This is a new fourth-order optimal multipoint iterative method. It satisfies the following error equation:

*Program Code in Mathematica 7 for the Order of Convergence of Method (4.4)*
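The order of convergence claimed for these methods can also be checked numerically (a Python stand-in, not the authors' Mathematica listing, using the standard computational order of convergence p ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n−1})). Here the check is applied to Newton's method, whose order 2 is known, on the illustrative equation x^3 − 2 = 0; the same driver applies to any of the fourth-order methods above once their iteration formula is substituted.

```python
import math

def coc(xs, root):
    """Computational order of convergence from the last three iterates:
    p ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    e = [abs(x - root) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# Newton's method on the illustrative equation x^3 - 2 = 0 (order 2 expected).
f, df = (lambda x: x**3 - 2), (lambda x: 3 * x**2)
root = 2 ** (1 / 3)
xs, x = [1.5], 1.5
for _ in range(4):
    x = x - f(x) / df(x)
    xs.append(x)
p = coc(xs, root)   # should be close to 2
```

Only a few iterates should be used: once the error reaches machine precision, the ratio of successive errors is dominated by rounding and the estimate degrades.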

(iii) When we take as method (3.9) and in (4.1), we get
This is a new fourth-order optimal multipoint iterative method. It satisfies the following error equation:
(iv) When we take as method (3.11) and in (4.1), we get
This is a new fourth-order optimal multipoint iterative method. It satisfies the following error equation:
(v) When we take as method (3.13) and in (4.1), we get
This is a particular case of the well-known fourth-order King's family [11] of multipoint iterative methods for . It satisfies the following error equation:
(vi) When we take as method (3.15) and in (4.1), we get
This is a new fourth-order optimal multipoint iterative method. It satisfies the following error equation:
(vii) When we take as method (3.17) and in (4.1), we get
This is a new fourth-order optimal multipoint iterative method. It satisfies the following error equation:
(viii) Using the approximate value of (from (3.2)) in (2.19), we get
This is the well-known Traub-Ostrowski’s [3] fourth-order optimal multipoint iterative method. It satisfies the following error equation:
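Traub-Ostrowski's method recovered in case (viii), and the King's family mentioned in case (v), admit a single Python sketch, since Traub-Ostrowski's method is the King member with parameter beta = 0. The forms below are the standard ones from the literature (the displayed formulas were lost in extraction), and x^3 − 2 = 0 is an illustrative test equation.

```python
def king(f, df, x, beta=0.0, tol=1e-12, max_iter=20):
    """King's family of optimal fourth-order methods; beta = 0 recovers
    Traub-Ostrowski's method:
        y     = x - f(x)/f'(x)
        x_new = y - (f(x) + beta*f(y)) / (f(x) + (beta - 2)*f(y)) * f(y)/f'(x)
    """
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx
        fy = f(y)
        x = y - (fx + beta * fy) / (fx + (beta - 2) * fy) * fy / dfx
    return x

# Illustrative test equation (root 2^(1/3)), not an example from the paper.
f, df = (lambda x: x**3 - 2), (lambda x: 3 * x**2)
r_ostrowski = king(f, df, 1.5, beta=0.0)   # Traub-Ostrowski's method
r_king      = king(f, df, 1.5, beta=1.0)   # another King-family member
```

Each step costs two evaluations of f and one of f', so by the Kung-Traub conjecture the observed fourth order is optimal for this evaluation count.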

*Some Other Formulae*

In a similar fashion, let us denote Potra-Pták's method by and the methods, namely (3.5), (3.9), and (3.19) to (3.21), respectively, by , and take the linear combination of (Potra-Pták's method) and (the newly developed multipoint methods) as follows:
For some particular values of and , we get many other new fourth-order optimal multipoint iterative methods as follows.
(ix) When we take as method (3.9) and in (4.19), we get
This is a new fourth-order optimal multipoint iterative method. It satisfies the following error equation:
(x) When we take method (3.5) − method (3.15), we get
This is a new fourth-order optimal multipoint iterative method. It satisfies the following error equation:
Similarly, we can obtain many other new optimal multipoint fourth-order iterative methods for solving nonlinear equations numerically.

It is straightforward to see that per step these methods require three function evaluations, namely two evaluations of the function and one of its first-order derivative. In order to obtain an assessment of the efficiency of our methods, we shall make use of the efficiency index defined by (1.9). For our proposed third-order multipoint iterative methods, we find p = 3 and d = 3 to get E = 3^(1/3) ≈ 1.442, which is better than E = 2^(1/2) ≈ 1.414, the efficiency index of Newton's method. For the fourth-order optimal multipoint iterative methods, we find p = 4 and d = 3 to get E = 4^(1/3) ≈ 1.587, which is better than those of most of the third-order methods and Newton's method.
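The efficiency comparison above follows directly from the definition E = p^(1/d) in (1.9) and can be checked in a few lines:

```python
# Efficiency index E = p**(1/d): p is the order of convergence,
# d the number of function/derivative evaluations per step (formula (1.9)).
def efficiency_index(p, d):
    return p ** (1 / d)

newton_E = efficiency_index(2, 2)   # Newton: one f and one f' per step
third_E  = efficiency_index(3, 3)   # third-order multipoint methods of Section 3
fourth_E = efficiency_index(4, 3)   # optimal fourth-order methods of Section 4
```

The ordering newton_E < third_E < fourth_E is exactly the ranking claimed in the text.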

#### 5. Numerical Experiments

In this section, we shall check the effectiveness of the new optimal methods. We employ the present methods, namely (4.2), (4.9), (4.15), and (4.22), respectively, to solve the nonlinear equations given in Table 1. We compare them with Newton's method (NM), Traub-Ostrowski's method (also known as Ostrowski's method) (4.17) (TOM), Jarratt's method (JM), and King's method (KM) for and , respectively. We have also shown the comparison of all the methods mentioned above in Table 2. Computations have been performed using C++ in double-precision arithmetic. We use as a tolerable error. The following stopping criteria are used for the computer programs:

(i) , (ii) .
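A minimal driver combining both kinds of stopping tests can be sketched as follows. As an assumption (the displayed criteria did not survive extraction), the standard pair |x_{n+1} − x_n| < eps and |f(x_{n+1})| < eps is used, with Newton's method and x^3 − 2 = 0 as illustrative stand-ins for the compared schemes.

```python
def solve(step, f, x, eps=1e-14, max_iter=100):
    """Generic driver: stop when |x_new - x| < eps or |f(x_new)| < eps
    (assumed standard criteria; the paper's exact tolerance is not
    reproduced here)."""
    for _ in range(max_iter):
        x_new = step(x)
        if abs(x_new - x) < eps or abs(f(x_new)) < eps:
            return x_new
        x = x_new
    return x

# Newton's method as an illustrative corrector on the test equation x^3 - 2 = 0.
f, df = (lambda x: x**3 - 2), (lambda x: 3 * x**2)
newton_step = lambda x: x - f(x) / df(x)
r = solve(newton_step, f, 1.5)
```

Any of the one-step maps from the earlier sections can be passed as `step`, which keeps the comparison between methods confined to the iteration formula itself.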

#### 6. Conclusion

In this paper, we have presented another simple and elegant way of deriving different iterative functions to solve nonlinear equations numerically. This study presents several formulae of third and fourth order and has a well-known geometric derivation. Multipoint iterative methods belong to the class of the most powerful methods, since they overcome the theoretical limits of one-point methods concerning convergence order and computational efficiency. The most important class of multipoint methods is that of optimal methods, which attain convergence order 2^(n-1) using n function evaluations per iteration. Therefore, the fourth-order multipoint iterative methods are the main findings of the present paper in terms of speed and efficiency index. According to the Kung-Traub conjecture, the different methods presented in this paper have the maximal efficiency index 4^(1/3) ≈ 1.587, because only three function values are needed per step. The numerical results presented in Table 2 overwhelmingly support the conclusion that these different methods are equally competent with Traub-Ostrowski's method, Jarratt's method, and King's family. By using the same idea, one can obtain other iterative processes by considering different exponentially fitted osculating curves.

#### Acknowledgments

The authors would like to record their sincerest thanks to the Editor, Professor Alicia Cordero, and the anonymous reviewers for their constructive suggestions and remarks, which have considerably contributed to the readability of this paper. R. Behl further acknowledges the financial support of CSIR, New Delhi, India.

#### References

- [1] S. Amat, S. Busquier, and J. M. Gutiérrez, "Geometric constructions of iterative functions to solve nonlinear equations," *Journal of Computational and Applied Mathematics*, vol. 157, no. 1, pp. 197–205, 2003.
- [2] V. Kanwar, S. Singh, and S. Bakshi, "Simple geometric constructions of quadratically and cubically convergent iterative functions to solve nonlinear equations," *Numerical Algorithms*, vol. 47, no. 1, pp. 95–107, 2008.
- [3] J. F. Traub, *Iterative Methods for the Solution of Equations*, Prentice-Hall Series in Automatic Computation, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
- [4] J. M. Gutiérrez and M. A. Hernández, "An acceleration of Newton's method: super-Halley method," *Applied Mathematics and Computation*, vol. 117, no. 2-3, pp. 223–239, 2001.
- [5] E. Hansen and M. Patrick, "A family of root finding methods," *Numerische Mathematik*, vol. 27, no. 3, pp. 257–269, 1976/77.
- [6] A. M. Ostrowski, *Solution of Equations and Systems of Equations*, Academic Press, New York, NY, USA, 1960.
- [7] W. Werner, "Some improvements of classical iterative methods for the solution of nonlinear equations," in *Numerical Solution of Nonlinear Equations*, vol. 878 of *Lecture Notes in Mathematics*, pp. 426–440, Springer, Berlin, Germany, 1981.
- [8] A. M. Ostrowski, *Solution of Equations in Euclidean and Banach Spaces*, Academic Press, London, UK, 1973.
- [9] L. W. Johnson and R. D. Riess, *Numerical Analysis*, Addison-Wesley, Reading, Mass, USA, 1977.
- [10] V. Kanwar, R. Behl, and K. K. Sharma, "Simply constructed family of a Ostrowski's method with optimal order of convergence," *Computers & Mathematics with Applications*, vol. 62, no. 11, pp. 4021–4027, 2011.
- [11] R. F. King, "A family of fourth order methods for nonlinear equations," *SIAM Journal on Numerical Analysis*, vol. 10, pp. 876–879, 1973.
- [12] F. A. Potra and V. Pták, *Nondiscrete Induction and Iterative Processes*, Research Notes in Mathematics, Pitman, Boston, Mass, USA, 1984.
- [13] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," *Journal of the Association for Computing Machinery*, vol. 21, pp. 643–651, 1974.
- [14] P. Jarratt, "Some fourth-order multipoint methods for solving equations," *BIT*, vol. 9, pp. 434–437, 1965.
- [15] F. Soleymani, "Optimal fourth-order iterative methods free from derivatives," *Miskolc Mathematical Notes*, vol. 12, no. 2, pp. 255–264, 2011.
- [16] M. Sharifi, D. K. R. Babajee, and F. Soleymani, "Finding the solution of nonlinear equations by a class of optimal methods," *Computers & Mathematics with Applications*, vol. 63, no. 4, pp. 764–774, 2012.
- [17] F. Soleymani, S. K. Khattri, and S. Karimi Vanani, "Two new classes of optimal Jarratt-type fourth-order methods," *Applied Mathematics Letters*, vol. 25, no. 5, pp. 847–853, 2012.
- [18] B. Ghanbari, "A new general fourth-order family of methods for finding simple roots of nonlinear equations," *Journal of King Saud University - Science*, vol. 23, pp. 395–398, 2011.
- [19] E. Schröder, "Über unendlich viele Algorithmen zur Auflösung der Gleichungen," *Mathematische Annalen*, vol. 2, pp. 317–365, 1870.