Iterative Methods for Nonlinear Equations or Systems and Their Applications
A Two-Parameter Family of Fourth-Order Iterative Methods with Optimal Convergence for Multiple Zeros
Abstract
We develop a family of fourth-order iterative methods using the weighted harmonic mean of two derivative functions to compute approximate multiple roots of nonlinear equations. The methods are proved to be optimally convergent in the sense of Kung-Traub's optimal order. Numerical experiments for various test equations confirm the validity of the convergence order and asymptotic error constants of the developed methods.
1. Introduction
The development of new iterative methods for locating multiple roots of a given nonlinear equation deserves special attention from both theoretical and numerical standpoints, although prior knowledge about the multiplicity of the sought zero is required [1]. Traub [2] discussed the theoretical importance of multiple-root finders, even though the multiplicity is not known a priori, stating: “since the multiplicity of a zero is often not known a priori, the results are of limited value as far as practical problems are concerned. The study is, however, of considerable theoretical interest and leads to some surprising results.” This motivates our analysis of the multiple-root finders presented in this paper. In case the multiplicity is not known, interested readers should refer to the methods suggested by Wu and Fu [3] and Yun [4, 5].
Various iterative schemes for finding multiple roots of a nonlinear equation with known multiplicity have been proposed and investigated by many researchers [6–12]. Neta and Johnson [13] presented a fourth-order method extending Jarratt's method. Neta [14] also developed a fourth-order method requiring one function and three derivative evaluations per iteration, based on Murakami's method [15]. Shengguo et al. [16] proposed the following fourth-order method, which needs evaluations of one function and two derivatives per iteration for an initial guess chosen in a neighborhood of the sought zero with known multiplicity: where , , , and with the following error equation: where − + , , and for .
Based on the Jarratt scheme [17] for simple roots, J. R. Sharma and R. Sharma [18] developed the following fourth-order convergent scheme: where , , , and and derived the error equation below: where for .
The above error equation can be expressed in terms of as follows: where − + .
We now proceed to develop a new iterative method for finding an approximate root of a nonlinear equation , assuming that the multiplicity of is known. To do so, we first suppose that a function has a multiple root with integer multiplicity and is analytic in a small neighborhood of . We then propose the following new iterative method, free of second derivatives, with an initial guess sufficiently close to : where with , and as parameters to be chosen for maximal order of convergence [2, 19]. One should note that is obtained from the Taylor expansion of about up to the first-order terms, with a weighted harmonic mean [20] of and .
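The weighted harmonic mean that drives the second substep combines two derivative values u and v (the derivatives at the current point and at the intermediate point) with positive weights a and b as H = (a + b)/(a/u + b/v). The weights below are generic placeholders, since in scheme (6) they are tied to the parameters of the method; a small sketch:

```python
def weighted_harmonic_mean(u, v, a=1.0, b=1.0):
    """Weighted harmonic mean of u and v with positive weights a, b:
    H = (a + b) / (a/u + b/v) = (a + b) * u * v / (a*v + b*u)."""
    return (a + b) * u * v / (a * v + b * u)
```

Replacing the single derivative of Newton's method by such a mean of two first-derivative values is what lets the scheme reach fourth order while staying free of second derivatives.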
Theorem 1 shows that proposed method (6) possesses two free parameters and . These free parameters give iterative scheme (6) the advantage of generating a variety of numerical methods, and one is often free to select the parameters and best suited to a sought zero . Several interesting choices of and further motivate our current analysis. As seen in Table 1, we consider five methods Y1, Y2, Y3, Y4, and Y5, listing the selected parameters , and the corresponding values , respectively.

If and are selected, then we obtain , , and , in which case iterative scheme (6) becomes method Y5 mentioned above and reduces to iterative scheme (1) developed by Shengguo et al. [16].
In this paper, we investigate the optimal convergence of the fourth-order methods for multiple-root finders with known multiplicity, in the sense of the optimal order claimed by Kung-Traub [21], and derive the error equation. We find that our proposed schemes require one evaluation of the function and two evaluations of the first derivative and satisfy the optimal order. In addition, through a variety of numerical experiments we confirm that the proposed methods exhibit the convergence behavior predicted by the developed theory.
2. Convergence Analysis
In this section, we describe a choice of parameters , and in terms of and to obtain fourth-order convergence for our proposed scheme (6).
Theorem 1. Let have a zero with integer multiplicity and be analytic in a small neighborhood of . Let , . Let be an initial guess chosen in a sufficiently small neighborhood of . Let , , , and . Let be two free constant parameters. Then iterative method (6) is of order four and defines a two-parameter family of iterative methods with the following error equation: where and
Proof. Using the Taylor series expansion about , we have the following relations:
where , , and for .
Dividing (10) by (11), we obtain
where , , and .
Expressing in terms of a new parameter for algebraic simplicity, we get
Since can be expressed from in (11) with substituted by from (13), we get
With the aid of symbolic computation of Mathematica [22], we substitute (10)–(14) into proposed method (6) to obtain the error equation as
where , and the coefficient may depend on parameters , and .
Solving and for and , respectively, we get after simplifications
Putting , we have
where and .
Observe that is satisfied with . Solving for , we get
Substituting into (16) and (19) with , we can rearrange these expressions to obtain
Calculating by the aid of symbolic computation of Mathematica [22], we arrive at the error equation below:
where + with .
It is interesting to observe that error equation (23) has only one free parameter , being independent of . Table 1 shows typically chosen parameters and and defines various methods derived from (6). Method Y5 results in the iterative scheme (1) that Shengguo et al. [16] suggested.
3. Numerical Examples and Conclusion
In this section, we perform numerical experiments using the Mathematica Version 5 program to confirm that the optimal order of convergence is four and that the computed asymptotic error constant agrees well with the theoretical value . To achieve the specified sufficient accuracy and to handle the small-number divisions appearing in asymptotic error constants, we have assigned as the minimum number of digits of precision by the command and set the error bound to for . We have chosen initial values close to the sought zero to obtain fourth-order convergence. Although computed values of are truncated to be accurate up to 250 significant digits and the inexact value of is approximated to be accurate to about 400 significant digits (with the command , , ), we list them only up to 15 significant digits because of the limited space.
As a first example with a double zero and an initial guess , we select a test function . As a second experiment, we take another test function with a root of multiplicity and with an initial value .
Taking another test function with a root of multiplicity , we select as an initial value.
Throughout these examples, we confirm that the order of convergence is four and that the computed asymptotic error constant agrees well with the theoretical value . The order of convergence and the asymptotic error constant are clearly shown in Tables 2, 3, and 4, in good agreement with the theory developed in Section 2.
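The computational order of convergence reported in such tables can be estimated from three consecutive errors e_{n-1}, e_n, e_{n+1} as p ≈ ln(e_{n+1}/e_n)/ln(e_n/e_{n-1}). A sketch in double precision, using the classical modified Newton iteration (order two) as a stand-in since scheme (6) itself is not reproduced here; the test function is our own illustration:

```python
import math

def coc(errors):
    """Computational order of convergence from the last three errors:
    p ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Generate an error sequence with modified Newton (second order) on
# f(x) = (x - 2)^2 (x + 1), which has a double root at x = 2.
f = lambda x: (x - 2) ** 2 * (x + 1)
fp = lambda x: (x - 2) * (3 * x)
x, alpha, errors = 2.5, 2.0, []
for _ in range(3):                 # three iterates give e1, e2, e3
    x -= 2 * f(x) / fp(x)          # m = 2
    errors.append(abs(x - alpha))

p = coc(errors)                    # should be close to 2 for this method
```

For the fourth-order methods of this paper the same estimator approaches 4, but verifying that, as in the tables above, requires multiprecision arithmetic because the errors shrink below double precision within a few iterations.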



The additional test functions listed below further confirm the convergence behavior of our proposed method (6).
Table 5 shows the convergence behavior of among methods S, J, Y1, Y2, Y3, and Y4, where S denotes the method proposed by Shengguo et al. [16], J denotes the method proposed by J. R. Sharma and R. Sharma [18], and methods Y1 to Y4 are described in Table 1. Proposed method (6) needs one evaluation of the function and two evaluations of the first derivative per iteration. Consequently, the corresponding efficiency index [2] is found to be , which is optimal in the sense of the Kung-Traub conjecture [21]. For the test functions chosen in these numerical experiments, methods Y1 to Y4 have shown better accuracy than methods S and J.
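Traub's efficiency index is p^{1/d} for a method of order p costing d function or derivative evaluations per iteration; with p = 4 and d = 3 as stated above, scheme (6) attains 4^{1/3} ≈ 1.587, versus 2^{1/2} ≈ 1.414 for classical Newton. A quick check of these values:

```python
def efficiency_index(order, evals):
    """Traub's efficiency index: order^(1 / evaluations per iteration)."""
    return order ** (1.0 / evals)

newton = efficiency_index(2, 2)    # classical Newton: one f, one f'
proposed = efficiency_index(4, 3)  # scheme (6): one f, two f' evaluations
```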
