Journal of Applied Mathematics
Volume 2014, Article ID 737305, 7 pages
http://dx.doi.org/10.1155/2014/737305
Research Article

A New Biparametric Family of Two-Point Optimal Fourth-Order Multiple-Root Finders

Department of Applied Mathematics, Dankook University, Cheonan 330-714, Republic of Korea

Received 21 February 2014; Revised 17 June 2014; Accepted 20 June 2014; Published 14 September 2014

Academic Editor: Juan R. Torregrosa

Copyright © 2014 Young Ik Kim and Young Hee Geum. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We construct a biparametric family of fourth-order iterative methods to compute multiple roots of nonlinear equations. The methods are verified to be optimally convergent. Numerical experiments on a variety of nonlinear equations confirm the fourth-order convergence of the proposed methods and show that the computed asymptotic error constant agrees with the theoretical one.

1. Introduction

It is not surprising that modified Newton's method [1], in the simple form x_{n+1} = x_n − m f(x_n)/f′(x_n), is the most widely used method for approximating a multiple root of known multiplicity m of a given nonlinear equation f(x) = 0. Recall that this scheme is a one-point optimal method with quadratic convergence. To find numerical solutions for multiple roots of nonlinear equations more accurately, many researchers have made enormous efforts to develop higher-order methods with improved convergence.
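As a minimal sketch of the iteration above, the following Python routine applies the modified Newton step x ← x − m f(x)/f′(x); the test function, multiplicity, starting point, and tolerance are illustrative choices, not examples from the paper.

```python
def modified_newton(f, fprime, x0, m, tol=1e-12, max_iter=50):
    """Iterate x <- x - m*f(x)/f'(x) until the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:  # landed exactly on the root in floating point
            break
        step = m * fx / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative example: f(x) = (x - 1)^3 (x + 2) has a root x = 1 of multiplicity 3.
f = lambda x: (x - 1.0) ** 3 * (x + 2.0)
fp = lambda x: 3.0 * (x - 1.0) ** 2 * (x + 2.0) + (x - 1.0) ** 3
root = modified_newton(f, fp, x0=1.5, m=3)
```

With the multiplicity supplied (m = 3 here), the iteration recovers the quadratic convergence that plain Newton's method loses at a multiple root.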

In this paper, we extend modified Newton's method and propose two-point optimal fourth-order multiple-root finders that require two derivative evaluations and one function evaluation per iteration. Optimality is pursued in the sense of Kung-Traub's conjecture [2], which states that the convergence order of any multipoint method [3] without memory can reach at most 2^{d−1} for d evaluations of functions or derivatives.
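The optimality bound just cited amounts to a one-line formula; the small sketch below (an illustration, not part of the paper) makes the arithmetic explicit.

```python
def kung_traub_optimal_order(d):
    """Kung-Traub conjecture: a multipoint method without memory using d
    functional evaluations per iteration has order at most 2**(d - 1)."""
    return 2 ** (d - 1)

# The proposed methods use d = 3 evaluations (one f, two f'),
# so the maximal attainable order is 4, which Theorem 2 below attains.
optimal_for_proposed = kung_traub_optimal_order(3)
```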

The remainder of this paper is organized as follows. Section 2 reviews previous studies on multiple-root finders. Section 3 proposes a new biparametric family of two-point optimal fourth-order multiple-root finders and fully treats method development and convergence analysis; deriving the error equations of the proposed schemes is an essential task for establishing their convergence behavior. In Section 4, numerical examples are presented for a wide selection of test functions, and the convergence behavior of the proposed schemes is compared with that of existing methods. We confirm that the proposed methods exhibit the convergence behavior predicted by the developed theory.

2. Preliminary Review of Previous Studies

A number of interesting fourth-order multiple-root finders can be found in [4–16]. Among these, we introduce five studies in particular. Shengguo et al. [14] introduced a fourth-order method that needs evaluations of one function and two derivatives per iteration, for an initial guess x_0 chosen in a neighborhood of the sought zero of f with known multiplicity m.

J. R. Sharma and R. Sharma [12] constructed another fourth-order scheme, Li et al. [15] presented a fourth-order method, Zhou et al. [16] proposed a further fourth-order iterative scheme, and Kanwar et al. [8] developed a fourth-order optimal multipoint iterative method for multiple zeros.

3. Method Development and Convergence Analysis

We first suppose that a function f has a multiple root α of integer multiplicity m and is analytic in a small neighborhood of α. A new iteration method (7), free of second derivatives, is then proposed to find an approximate root of multiplicity m, given an initial guess x_0 sufficiently close to α; the scheme contains parameters to be chosen for maximal order of convergence [17, 18]. We establish a main theorem describing the convergence analysis of proposed scheme (7) and show how to select the parameters for optimal fourth-order convergence.

Definition 1 (error equation, asymptotic error constant, and order of convergence). Let {x_n} be a sequence converging to α and let e_n = x_n − α be the nth iterate error. If there exist real numbers p and b ≠ 0 such that the error equation e_{n+1} = b e_n^p + O(e_n^{p+1}) holds, then b is called the asymptotic error constant and p is called the order of convergence [17, 18].

Theorem 2. Let f have a zero α of integer multiplicity m and be analytic in a small neighborhood of α. Let x_0 be an initial guess chosen in a sufficiently small neighborhood of α, and let two of the parameters in (7) be free constants with the remaining parameters fixed as stated. Then iterative methods (7) are optimal, of order four, and possess the stated error equation.

Proof. The optimality of the convergence order of proposed scheme (7) is clear in the sense of Kung-Traub, since it uses three functional evaluations. Hence, it suffices to determine the constant parameters yielding fourth-order convergence.
Applying Taylor series expansion about α, we get relations (11) and (12), whose coefficients are expressed in terms of the derivatives of f at α.
Dividing (11) by (12), we obtain (13).
For algebraic convenience, we introduce an auxiliary parameter to obtain (14). Evaluating the derivative from (11), with its argument replaced by the intermediate point in (14), we find (15). Substituting (11)–(15) into (7), we obtain the error equation (16), whose coefficients depend on the chosen parameters and on the function f.
Solving the lower-order coefficient equations for two of the parameters, we get (17). Substituting these into the next coefficient and solving it independently of the remaining quantities, we obtain (18). Substituting back into (17) and (18), we get relations (19). With the aid of symbolic computation in Mathematica [19], we arrive at relation (20). As a result, the proof is completed.

Remark 3. We observe that error equation (20) contains only one free parameter, being independent of the other. Table 1 shows typically chosen parameters and defines the various methods Y1–Y4.

tab1
Table 1: Typical methods with interesting parameters and constants.

4. Numerical Examples and Conclusion

We have performed a variety of numerical experiments with Mathematica Version 5 [19] to confirm the theory developed in Section 3. In these experiments, we assign 300, via the Mathematica command $MinPrecision = 300, as the minimum number of precision digits, to achieve the specified accuracy; computing with high accuracy is crucial for the desired numerical results. When a zero is not exactly known, it is replaced by a highly accurate value that has a larger number of significant digits than the assigned minimum number of precision digits. To display numerical results properly, we define the nth computational error e_n = x_n − α, together with the further terminology defined below.

Definition 4 (computational asymptotic error constant and computational convergence order). Assume that the theoretical asymptotic error constant η and convergence order p are known, usually via the main theorem. The computational asymptotic error constant and the computational convergence order are defined from successive computational errors; we then find that the former is equal or close to η, while the latter is equal or close to p.
If x_n attained the same accuracy $MinPrecision as that of α, then e_n would be nearly zero and computing the convergence order would unfavorably break down. Computed values of x_n are accurate up to 300 significant digits. For the current experiments, α is found to be accurate enough, up to about 400 significant digits; a set of Mathematica commands is used to supply such an α. Although e_n and the asymptotic error constant are computed to many more significant digits, the limited paper space allows us to list both of them only up to 15 significant digits. A small error bound is set as the stopping criterion for the iteration.
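A rough standard-library analogue of the paper's 300-digit working precision ($MinPrecision = 300 in Mathematica) can be sketched with Python's decimal module; the test function, its double root, and the starting value below are illustrative assumptions, not examples from the paper.

```python
from decimal import Decimal, getcontext

getcontext().prec = 300  # analogue of $MinPrecision = 300: 300 significant digits

def f(x):
    # Illustrative test function with a double root at x = 3.
    return (x - 3) ** 2 * (x + 1)

def fp(x):
    return 2 * (x - 3) * (x + 1) + (x - 3) ** 2

x, m = Decimal("3.4"), 2
for _ in range(4):
    x -= m * f(x) / fp(x)  # modified Newton step at 300-digit precision
```

At double precision the error e_n would be swallowed by rounding after a few iterations; the high-precision context keeps many correct digits of e_n available for estimating the convergence order.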

As a first example, we select a function having a multiple zero of known multiplicity and choose a suitable initial guess. We take another function with a known root and select a corresponding initial value. The order of convergence and the asymptotic error constant are clearly shown in Tables 2 and 3, revealing good agreement with the theory in Section 3. Taking a third function with a root of known multiplicity, we again select an initial value; in this example we also find that the order of convergence is four and that the computational asymptotic error constant approaches the theoretical value well. The computational convergence order and the computational asymptotic error constant shown in Tables 2–4 reach good agreement with the theory. These methods require one evaluation of the function f and two evaluations of the first derivative f′ per iteration.
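The computational convergence order reported in such tables can be estimated from successive errors via p_n = log|e_{n+1}/e_n| / log|e_n/e_{n−1}|. Since scheme (7) is defined symbolically in the paper, the sketch below applies the estimator to modified Newton's method (order two) on an illustrative triple root; the function and starting point are assumptions for demonstration only.

```python
import math

def convergence_orders(errors):
    """Estimate p_n = log(e_{n+1}/e_n) / log(e_n/e_{n-1}) from successive errors."""
    return [math.log(errors[i + 1] / errors[i]) /
            math.log(errors[i] / errors[i - 1])
            for i in range(1, len(errors) - 1)]

# Illustrative run: f(x) = (x - 2)^3 e^x has a triple root at x = 2.
f = lambda x: (x - 2.0) ** 3 * math.exp(x)
fp = lambda x: (3.0 * (x - 2.0) ** 2 + (x - 2.0) ** 3) * math.exp(x)
x, m, alpha = 2.5, 3, 2.0

errs = []
for _ in range(5):
    errs.append(abs(x - alpha))
    x -= m * f(x) / fp(x)  # modified Newton step with multiplicity m = 3
```

The last estimate approaches 2, the theoretical order of modified Newton; applied to a fourth-order method, the same estimator should approach 4.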

tab2
Table 2: Convergence behavior for the first test function.
tab3
Table 3: Convergence behavior for the second test function.
tab4
Table 4: Convergence behavior for the third test function.

Additional test functions are used below to display the convergence behavior of proposed scheme (7). In Table 5, we compare the numerical errors of proposed methods Y1–Y4 with those of existing optimal fourth-order multiple-root finders. Abbreviations S, J, L, Z, and K denote the optimal fourth-order multiple-root finders of Shengguo et al., Sharma and Sharma, Li et al., Zhou et al., and Kanwar et al., respectively.

tab5
Table 5: Comparison of numerical errors for various multiple-root finders.

The least errors within the prescribed error bound are highlighted in boldface. Method Y2 shows the best convergence for some of the test functions, while method Y4 does for others. It must be kept in mind that such favorable performance is observed only in the current numerical experiments with this particular choice of test functions; for a different choice of test functions and initial values, no listed method can be expected always to yield the best convergence behavior, and no numerical method outperforms all others on every test function. Even at the same order of convergence, the speed of local convergence depends on the function f, the initial value x_0, and the multiple zero itself.

To check the efficiency of multipoint iterative methods, we calculate the efficiency index [17] defined by EI = p^{1/d}, where p is the order of convergence and d is the number of distinct functional or derivative evaluations per iteration. The proposed iterative methods (7) have EI = 4^{1/3} ≈ 1.587, describing their optimality, which is the same as that of every other listed method. This paper confirms optimal fourth-order convergence and derives the correct error equation for the proposed iterative methods, which use a weighted harmonic mean of two derivatives to find approximate multiple zeros of nonlinear equations. The proposed optimal fourth-order schemes efficiently solve the given problems, with a wide selection of the free parameters. In future work, we will pursue higher-order optimal methods by extending the methods developed here.
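The efficiency comparison above is a one-line computation; the values below follow directly from p = 4, d = 3 for the proposed methods and p = 2, d = 2 for modified Newton.

```python
def efficiency_index(p, d):
    """Traub's efficiency index EI = p**(1/d) for order p and d evaluations."""
    return p ** (1.0 / d)

ei_proposed = efficiency_index(4, 3)  # optimal two-point methods: 4**(1/3) ~ 1.587
ei_newton = efficiency_index(2, 2)    # modified Newton: 2**(1/2) ~ 1.414
```

The comparison shows why optimal fourth-order two-point methods are preferred: they extract a higher order per evaluation than the quadratically convergent one-point scheme.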

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to give special thanks to an anonymous referee for valuable suggestions and comments that improved this paper. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (Project no. 2011-0014638).

References

  1. L. B. Rall, "Convergence of the Newton process to multiple solutions," Numerische Mathematik, vol. 9, pp. 23–37, 1966.
  2. H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery, vol. 21, pp. 643–651, 1974.
  3. P. Jarratt, "Some efficient fourth order multipoint methods for solving equations," BIT, vol. 9, pp. 119–124, 1969.
  4. C. Dong, "A family of multipoint iterative functions for finding multiple roots of equations," International Journal of Computer Mathematics, vol. 21, pp. 363–367, 1987.
  5. Y. H. Geum and Y. I. Kim, "Cubic convergence of parameter-controlled Newton-secant method for multiple zeros," Journal of Computational and Applied Mathematics, vol. 233, no. 4, pp. 931–937, 2009.
  6. Y. I. Kim and Y. H. Geum, "A two-parameter family of fourth-order iterative methods with optimal convergence for multiple zeros," Journal of Applied Mathematics, vol. 2013, Article ID 369067, 7 pages, 2013.
  7. A. N. Johnson and B. Neta, "High-order nonlinear solver for multiple roots," Computers & Mathematics with Applications, vol. 55, no. 9, pp. 2012–2017, 2008.
  8. V. Kanwar, S. Bhatia, and M. Kansal, "New optimal class of higher-order methods for multiple roots, permitting f′(x_n) = 0," Applied Mathematics and Computation, vol. 222, pp. 564–574, 2013.
  9. X. Li, C. Mu, J. Ma, and L. Hou, "Fifth-order iterative method for finding multiple roots of nonlinear equations," Numerical Algorithms, vol. 57, no. 3, pp. 389–398, 2011.
  10. B. Neta, "Extension of Murakami's high-order non-linear solver to multiple roots," International Journal of Computer Mathematics, vol. 87, no. 5, pp. 1023–1031, 2010.
  11. M. Petkovic, L. Petkovic, and J. Dzunic, "Accelerating generators of iterative methods for finding multiple roots of nonlinear equations," Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2784–2793, 2010.
  12. J. R. Sharma and R. Sharma, "Modified Jarratt method for computing multiple roots," Applied Mathematics and Computation, vol. 217, no. 2, pp. 878–881, 2010.
  13. J. R. Sharma and R. Sharma, "Modified Chebyshev-Halley type method and its variants for computing multiple roots," Numerical Algorithms, vol. 61, no. 4, pp. 567–578, 2012.
  14. L. Shengguo, L. Xiangke, and C. Lizhi, "A new fourth-order iterative method for finding multiple roots of nonlinear equations," Applied Mathematics and Computation, vol. 215, no. 3, pp. 1288–1292, 2009.
  15. S. G. Li, L. Z. Cheng, and B. Neta, "Some fourth-order nonlinear solvers with closed formulae for multiple roots," Computers & Mathematics with Applications, vol. 59, no. 1, pp. 126–135, 2010.
  16. X. Zhou, X. Chen, and Y. Song, "Constructing higher-order methods for obtaining the multiple roots of nonlinear equations," Journal of Computational and Applied Mathematics, vol. 235, no. 14, pp. 4199–4206, 2011.
  17. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
  18. B. Neta, Numerical Methods for the Solution of Equations, Net-A-Sof, Monterey, Calif, USA, 1983.
  19. S. Wolfram, The Mathematica Book, Cambridge University Press, Cambridge, UK, 4th edition, 1999.