
Advances in Numerical Analysis

Volume 2011 (2011), Article ID 270903, 10 pages

http://dx.doi.org/10.1155/2011/270903

## Novel Computational Iterative Methods with Optimal Order for Nonlinear Equations

Department of Mathematics, Islamic Azad University, Zahedan Branch, 98168 Zahedan, Iran

Received 21 August 2011; Accepted 17 October 2011

Academic Editor: Michele Benzi

Copyright © 2011 F. Soleymani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper contributes a very general class of two-point iterative methods without memory for solving nonlinear equations. The class is developed using the weight function approach. Per iteration, each method of the class requires two evaluations of the function and one of its first-order derivative. The analytical study of the main theorem is presented in detail to establish the fourth order of convergence. Furthermore, it is shown that many of the existing fourth-order methods without memory are members of this developed class. Finally, numerical examples are provided to demonstrate the accuracy of the derived methods.

#### 1. Prerequisites

One of the important and challenging problems in numerical analysis is finding the solution of nonlinear equations. In recent years, several numerical methods for finding roots of nonlinear equations have been developed using several different techniques; see, for example, [1, 2]. We herein consider nonlinear equations of the general form
$$f(x) = 0, \tag{1.1}$$
where $f\colon D \subseteq \mathbb{R} \to \mathbb{R}$ is a real-valued function on an open neighborhood $D$ and $\alpha$ is a simple root of (1.1). Many relationships in nature are inherently nonlinear, meaning that effects are not directly proportional to their causes. Accordingly, the need to solve nonlinear scalar equations occurs frequently in scientific work. Many robust and efficient methods for solving such equations have been brought forward by many authors; see [3–5] and the references therein. Note that Newton's method is an important and fundamental one for nonlinear equations.
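The classical Newton scheme mentioned above can be sketched as follows; this is a generic illustration, with the test function $x^2 - 2$ and starting point chosen here for demonstration only:

```python
# Newton's method x_{n+1} = x_n - f(x_n)/f'(x_n) for solving f(x) = 0.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:      # residual small enough: accept x as the root
            break
        x -= fx / df(x)
    return x

# Approximate the simple root sqrt(2) of f(x) = x^2 - 2 from x0 = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Each cycle costs one function and one derivative evaluation and yields quadratic convergence near a simple root, which is the baseline the multipoint methods below improve upon.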

In developing iterations with better efficiency and order of convergence, the following technique is mostly used: the composition of two iterative methods of orders $p$ and $q$, respectively, results in a method of order $pq$ [6]. Usually, new evaluations of the derivative or of the nonlinear function are needed in order to increase the order of convergence. On the other hand, one well-known technique for bringing generality is to use weight functions carefully, so that the order does not drop but the error equation becomes more general. This is the approach used in this paper.

*Definition 1.1. *The efficiency of a method is measured by the concept of the *efficiency index,* which is given by
$$E = p^{1/n},$$
where $p$ is the convergence order of the method and $n$ is the whole number of evaluations per one computing process. Meanwhile, we should keep in mind the conjecture of Kung and Traub [7]: an iterative multipoint scheme without memory for solving nonlinear equations, consuming $n$ evaluations per cycle, has the optimal rate of convergence $2^{n-1}$ and the optimal efficiency index $2^{(n-1)/n}$.
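These indices are straightforward to evaluate; the short sketch below (an illustration, not taken from the paper) computes them for Newton's method and for an optimal three-evaluation scheme:

```python
# Efficiency index E = p**(1/n): p = convergence order,
# n = number of function/derivative evaluations per cycle.
def efficiency_index(p, n):
    return p ** (1.0 / n)

# Newton: order 2 with 2 evaluations per cycle.
newton_ei = efficiency_index(2, 2)     # about 1.414
# Optimal two-point scheme: order 2**(3-1) = 4 with 3 evaluations.
optimal_ei = efficiency_index(4, 3)    # about 1.587
```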

Higher-order methods are widely referenced in the literature; see, for example, [8, 9] and the references therein. They are useful in applications: for instance, numerical solutions of quadratic equations and nonlinear integral equations are needed in the study of dynamical models of chemical reactors or in radiative transfer. Moreover, many of these numerical applications require high precision in their computations; the results of such experiments show that high-order methods combined with multiprecision floating-point arithmetic are very useful, because they yield a clear reduction in the number of iterations. This underlines the importance of multipoint methods in solving nonlinear scalar equations.

The two-step family of Geum and Kim, given recently in [10], is one of the most significant two-point optimal fourth-order families; it includes many of the existing fourth-order methods as its special elements. It satisfies a quartic error equation in terms of $e_n = x_n - \alpha$ and $c_k = f^{(k)}(\alpha)/(k!\,f'(\alpha))$, $k \geq 2$. Note that (1.3) is in fact the first two steps of the three-step scheme given in [10]. Clearly, a suitable choice of its parameters ends in the well-known optimal fourth-order family of King [11]:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2) f(y_n)}\,\frac{f(y_n)}{f'(x_n)}, \tag{1.4}$$
which satisfies a similar quartic error equation and contains Ostrowski's fourth-order method as its special element for $\beta = 0$.
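King's family (1.4) can be sketched as follows; the cubic test function and starting point are illustrative choices, not taken from the paper:

```python
# One full cycle of King's fourth-order family (1.4); beta is the free
# parameter, and beta = 0 recovers Ostrowski's method.
def king_step(f, df, x, beta):
    fx = f(x)
    if fx == 0.0:                      # already at the root
        return x
    y = x - fx / df(x)                 # Newton substep
    fy = f(y)
    return y - (fx + beta * fy) / (fx + (beta - 2.0) * fy) * fy / df(x)

# Approximate the real cube root of 2 with Ostrowski's choice beta = 0.
x = 1.0
for _ in range(4):
    x = king_step(lambda t: t ** 3 - 2.0, lambda t: 3.0 * t ** 2, x, beta=0.0)
```

Each cycle costs two evaluations of $f$ and one of $f'$, which is exactly the three-evaluation budget that makes order four optimal under the Kung-Traub conjecture.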

Motivated and inspired by the recent activities in this direction, in this paper we construct a very general class of new iterative methods, free from second- or higher-order derivatives in the computing process, based on (1.3) and grounded in the use of weight functions in the second step of the proposed class.

#### 2. Main Contribution

This section contains our novel contributed general class. According to the conjecture of Kung and Traub, to construct optimal without-memory iterations we must use only three evaluations per full cycle to reach convergence order four. Therefore, we consider the following very general two-step, two-point without-memory iteration (2.1), in which three real-valued weight functions, written without the index $n$, should be chosen such that the order of convergence reaches the optimal level four. This is done in Theorem 2.1.

Theorem 2.1. *Let $\alpha$ be a simple zero of a sufficiently differentiable function $f$ on an open interval $D$, which contains $x_0$ as an initial approximation of $\alpha$. If the three weight functions satisfy the conditions (2.2), then the class of iterative without-memory methods defined by (2.1) is of optimal order four.*

*Proof. *By defining $e_n = x_n - \alpha$ as the error of the iterative scheme in the $n$th iterate, applying Taylor's expansion, and taking into account $f(\alpha) = 0$, we obtain the expansion (2.3) of $f(x_n)$,
where $c_k = f^{(k)}(\alpha)/(k!\,f'(\alpha))$. Furthermore, we have the corresponding expansion (2.4) of $f'(x_n)$.
Dividing (2.3) by (2.4) gives us (2.5).
By substituting (2.5) into the first step of (2.1), we obtain (2.6),
and similarly the expansion of $f(y_n)$. Again, by Taylor series expansion around the simple root and using the formulas attained, we arrive at (2.7).
Now, by taking (2.6), (2.7), and the conditions (2.2) on the weight functions into the last step of (2.1), we attain the follow-up error equation (2.8) for the whole iteration (2.1) per computing process.
This shows that the iterative class (2.1)-(2.2) arrives at the optimal local order of convergence four, which concludes the proof.

From the standpoint of computational efficiency, each derived member of our class requires three evaluations per full cycle, that is, one evaluation of the first-order derivative and two evaluations of the function. Therefore, the resulting methods are optimal and consistent with the optimality conjecture of Kung and Traub for multipoint without-memory iterations. The class possesses the optimal efficiency index $4^{1/3} \approx 1.587$, which is much better than Newton's efficiency index $2^{1/2} \approx 1.414$. Furthermore, the error equation (2.8) completely reveals the generality of our class: choosing any desired values for the three parameters and the three real-valued weight functions, subject to (2.2), will result in new methods. In what follows, we briefly present some of the well-known methods in the literature as special members of our class of iterations.

*Case 1. *Choosing the parameters and weight functions accordingly will result in the family of Geum and Kim (1.3).

*Case 2. *Another choice of the weight functions will result in the family of King (1.4).

*Case 3. *A further choice will result in the method of Khattri et al. in [12], with the same error equation.

Some typical forms of the weight functions, which make the order of our general class optimal according to (2.2), are listed in Table 1.

According to Table 1, we can produce any desired method of optimal order four by using only three functional evaluations per full cycle. Hence, as contributed examples from our class, we give the new methods (2.10) and (2.11) together with their error relations, as well as the efficient method (2.12) with its follow-up error equation.

As positively pointed out by the reviewer, the novel fourth-order methods can be applied in building higher-order convergent methods. That is to say, in order to fulfill the optimality conjecture of Kung and Traub (1974), optimal eighth- and sixteenth-order derivative-involved methods can only be built upon optimal quartically convergent methods. Hence, according to the class contributed in this paper, very general eighth- and sixteenth-order optimal iterations without memory can be constructed by using (2.1)-(2.2) as the first two steps of a three- or four-step cycle.

The error relation (2.8) relies fully on the first, second, and third derivatives of the given nonlinear function, as well as on the chosen weight functions. Thus, in order to save space while giving some other optimal fourth-order methods according to (2.1) and (2.2), we list the interesting ones in Table 2.

#### 3. Computational Aspects

Here, to demonstrate the performance of the new fourth-order methods, we consider sixteen nonlinear test functions, (i)–(xvi), each with a known simple root.

We determine the consistency and stability of the results by examining the convergence of the new second-derivative-free iterative methods. The findings illustrate the effectiveness of the fourth-order methods for determining the simple root of a nonlinear equation. Consequently, we can give estimates of the approximate solutions produced by the fourth-order methods. The numerical computations listed in Table 3 were performed with MATLAB 7.6. For comparison, we have used the fourth-order derivative-free method of Kung and Traub (KTM), which is constructed with divided differences, and Ostrowski's method (OM):
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(x_n)}{f(x_n) - 2 f(y_n)}\,\frac{f(y_n)}{f'(x_n)}.$$

We also used King's family with $\beta = -1/2$, denoted K(−1/2), in comparisons with our novel methods (2.10), (2.11), and (2.12) from the suggested class. For convergence, it is required that the distance between two consecutive approximations, $|x_{n+1} - x_n|$, be less than the prescribed tolerance, and that the absolute value of the function, $|f(x_n)|$, also referred to as the residual, be less than the tolerance. The residuals are listed in Table 3 for each starting point, with the total number of evaluations fixed at 12. We accept an approximate solution rather than the exact root, depending on the precision of the computer. The test results in Table 3 show that the order of convergence and accuracy of the proposed methods are in accordance with the theory developed in the previous section. For most of the functions we tested, the methods introduced in the present work compare well with the other fourth-order methods. An important characteristic of the novel methods is that they do not require the computation of second- or higher-order derivatives of the function to carry out iterations. However, it should be emphasized that the order of convergence is a property of the iteration formula near the root: the order of convergence is one thing; the total number of iterations is another. In general, for a given iteration formula, the total number of iterations depends not only on the order of convergence but also on the initial approximation $x_0$.
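As an illustration of this stopping rule, Ostrowski's method (OM) can be implemented as below; the tolerance value and the quadratic test function are illustrative assumptions, not the paper's actual settings:

```python
# Ostrowski's fourth-order method with the double stopping test described
# above: |x_{n+1} - x_n| < tol and |f(x_{n+1})| < tol (the residual).
# The tolerance here is an illustrative choice.
def ostrowski(f, df, x0, tol=1e-13, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:                   # exact root hit
            return x
        y = x - fx / df(x)              # Newton substep
        fy = f(y)
        x_new = y - fx / (fx - 2.0 * fy) * fy / df(x)
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new
        x = x_new
    return x

# Approximate sqrt(3) as the positive root of f(t) = t^2 - 3.
root = ostrowski(lambda t: t * t - 3.0, lambda t: 2.0 * t, 2.0)
```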

In Table 3, as an instance, 0.3*e*−172 indicates that the absolute value of the corresponding test function after 4 full iterations is zero up to 172 decimal places.

#### 4. Concluding Remarks

In numerical analysis, many methods produce sequences of real numbers, for instance the iterative methods for solving nonlinear equations. Sometimes the convergence of these sequences is slow, and their utility in solving practical problems is quite limited; convergence acceleration methods try to transform a slowly converging sequence into a fast one. With this in mind, this paper has aimed to give a rapidly convergent two-point class for approximating simple roots. The highest possible order of convergence was attained using the smallest possible number of evaluations per full cycle. The local order of our class of iterations was established theoretically, and it has been seen that our class supports the optimality conjecture of Kung and Traub (1974). It was shown that choosing appropriate forms of the weight functions ends up in both existing and new optimal iterative root solvers without memory. Clearly, our contribution in this paper unifies the existing quartically convergent methods available in the literature. Subsequently, numerical examples were used to show the efficiency and accuracy of the novel methods from our suggested second-derivative-free class. Finally, it should be noted that, like all other iterative methods, the new methods from the class (2.1)-(2.2) have their own domains of validity and in certain circumstances should not be used.

#### Acknowledgment

The author is very thankful to Sedigheh Faramarzpour for the support and excellent research facilities provided during the preparation of this paper.

#### References

1. A. Iliev and N. Kyurkchiev, *Nontrivial Methods in Numerical Analysis (Selected Topics in Numerical Analysis)*, Lambert Academic Publishing, 2010.
2. F. Soleymani, M. Sharifi, and B. S. Mousavi, “An improvement of Ostrowski's and King's techniques with optimal convergence order eight,” *Journal of Optimization Theory and Applications*. In press.
3. N. A. Mir, R. Muneer, and I. Jabeen, “Some families of two-step simultaneous methods for determining zeros of nonlinear equations,” *ISRN Applied Mathematics*, vol. 2011, Article ID 817174, 11 pages, 2011.
4. F. Soleymani, “Two classes of iterative schemes for approximating simple roots,” *Journal of Applied Sciences*, vol. 11, no. 19, pp. 3442–3446, 2011.
5. F. Soleymani, S. Karimi Vanani, and A. Afghani, “A general three-step class of optimal iterations for nonlinear equations,” *Mathematical Problems in Engineering*, vol. 2011, Article ID 469512, pp. 1–10, 2011.
6. J. F. Traub, *Iterative Methods for the Solution of Equations*, Chelsea Publishing Company, New York, NY, USA, 1976.
7. H. T. Kung and J. F. Traub, “Optimal order of one-point and multipoint iteration,” *Journal of the ACM*, vol. 21, no. 4, pp. 643–651, 1974.
8. P. Sargolzaei and F. Soleymani, “Accurate fourteenth-order methods for solving nonlinear equations,” *Numerical Algorithms*, vol. 58, pp. 513–527, 2011.
9. F. Soleymani and M. Sharifi, “On a class of fifteenth-order iterative formulas for simple roots,” *International Electronic Journal of Pure and Applied Mathematics*, vol. 3, pp. 245–252, 2011.
10. Y. H. Geum and Y. I. Kim, “A multi-parameter family of three-step eighth-order iterative methods locating a simple root,” *Applied Mathematics and Computation*, vol. 215, no. 9, pp. 3375–3382, 2010.
11. R. F. King, “A family of fourth-order methods for nonlinear equations,” *SIAM Journal on Numerical Analysis*, vol. 10, no. 5, pp. 876–879, 1973.
12. S. K. Khattri, M. A. Noor, and E. Al-Said, “Unifying fourth-order family of iterative methods,” *Applied Mathematics Letters*, vol. 24, no. 8, pp. 1295–1300, 2011.