Abstract

A class of three-step eighth-order root solvers is constructed in this study. Our aim is fulfilled by using an interpolatory rational function in the third step of a three-step cycle. Each method of the class reaches the optimal efficiency index according to the Kung-Traub conjecture concerning multipoint iterative methods without memory. Moreover, the class requires no derivative calculation per full iteration, which is important in engineering problems. One method of the class is established analytically. To test the derived methods, we apply them to a set of nonlinear scalar equations. Numerical examples suggest that the novel class of derivative-free methods outperforms the existing methods of the same type in the literature.

1. Introduction

A common problem encountered in engineering is the following: given a single-variable function f(x), find the values of x for which f(x) = 0. The solutions (values of x) are known as the roots of the equation f(x) = 0, or the zeros of the function f(x). The roots of such nonlinear equations may be real or complex. In general, an equation may have any number of (real) roots or no roots at all. There are two general types of methods available for finding the roots of algebraic and transcendental equations. First, direct methods, which are not always applicable; and second, iterative methods, which are based on the concept of successive approximations. In the latter case, the general procedure is to start with one or more initial approximation(s) to the root and obtain a sequence of iterates that, in the limit, converges to the true solution [1].

Here, we focus on finding simple roots of nonlinear scalar equations by iterative methods. The prominent one-point (or one-step) Newton's method of order two, which is a basic tool in numerical analysis and numerous applications, has been widely applied and discussed in the literature; see, for example, [2–7]. Newton's iteration and all of its variants include a derivative calculation per full cycle, which is a drawback in engineering problems, since the calculation of derivatives often takes considerable time.

To remedy this, Steffensen first proposed the following quadratically convergent scheme:

$$x_{n+1} = x_n - \frac{f(x_n)^2}{f(x_n + f(x_n)) - f(x_n)}.$$

Inspired by this method, many techniques with better orders of convergence have been developed through two- or three-step cycles; see [8] and the bibliographies therein. In this context, the concept of optimality, introduced by Kung and Traub [9], also plays a crucial role: a multipoint method without memory for solving nonlinear scalar equations can attain at most the optimal order $2^{n-1}$, where $n$ is the total number of evaluations per full cycle. In what follows, we review some of the significant derivative-free iterations.
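For concreteness, Steffensen's scheme can be sketched in a few lines of Python; the test function, starting point, and tolerances below are illustrative choices, not taken from the paper.

```python
# A minimal sketch of Steffensen's derivative-free iteration:
#   x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n)).

def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Iterate until |f(x)| < tol or the iteration budget is exhausted."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx  # derivative-free surrogate for f'(x) * f(x)
        x = x - fx * fx / denom
    return x

# Example: the positive root of x^2 - 2 = 0.
root = steffensen(lambda x: x * x - 2.0, 1.5)
```

Like Newton's method, the scheme is quadratically convergent, but it trades the derivative evaluation for a second function evaluation per cycle.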

Peng et al. [10] investigated an optimal two-step derivative-free technique as follows:

$$w_n = x_n - \frac{f(x_n)}{g(x_n)},\qquad
x_{n+1} = x_n - \frac{f(x_n)}{g(x_n)}\left[1 + \frac{f(w_n)}{f(x_n)} + \left(1 + \frac{1}{1 - t_n\, g(x_n)}\right)\left(\frac{f(w_n)}{f(x_n)}\right)^{2}\right],\tag{1.1}$$

where $g(x_n) = \dfrac{f(x_n) - f(x_n - t_n f(x_n))}{t_n f(x_n)}$ and $t_n$ is adaptively determined.

Ren et al. [11] furnished an optimal quartic scheme using divided differences as follows:

$$y_n = x_n - \frac{f(x_n)^2}{f(x_n + f(x_n)) - f(x_n)},\qquad
x_{n+1} = y_n - \frac{f(y_n)}{f[x_n,y_n] + f[y_n,w_n] - f[x_n,w_n] + a\,(y_n - x_n)(y_n - w_n)},\tag{1.2}$$

where $w_n = x_n + f(x_n)$ and $a \in \mathbb{R}$.

Also using divided differences, [12] contributed the following two-step optimal method:

$$y_n = x_n - \frac{f(x_n)^2}{f(x_n + f(x_n)) - f(x_n)},\qquad
x_{n+1} = y_n - \frac{f[x_n,y_n] - f[y_n,w_n] + f[x_n,w_n]}{f[x_n,y_n]^2}\, f(y_n),\tag{1.3}$$

where $w_n = x_n + f(x_n)$. The notation of divided differences will be used throughout this paper; that is, $f[x_i,x_j] = \dfrac{f(x_i) - f(x_j)}{x_i - x_j}$ for all $i \neq j$.
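The two-step scheme (1.3) is easy to code with the divided-difference notation just introduced; the following is a sketch assuming a well-behaved f and a reasonably close starting point (the test equation is our own choice).

```python
# A sketch of the fourth-order two-step method (1.3),
# built entirely from first-order divided differences.

def dd(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(a) - f(b)) / (a - b)

def fourth_order(f, x, tol=1e-12, max_iter=20):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + fx                       # w_n = x_n + f(x_n)
        y = x - fx * fx / (f(w) - fx)    # Steffensen-type first step
        x = y - f(y) * (dd(f, x, y) - dd(f, y, w) + dd(f, x, w)) / dd(f, x, y) ** 2
    return x

root = fourth_order(lambda x: x * x - 2.0, 1.5)
```

Per cycle this uses three function evaluations (f(x_n), f(w_n), f(y_n)) for fourth order, so it is optimal in the Kung-Traub sense.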

In 2011, Khattri and Argyros [13] formulated a sixth-order method as follows:

$$y_n = x_n - \frac{f(x_n)^2}{f(x_n) - f(x_n - f(x_n))},$$
$$z_n = y_n - \frac{f(x_n)\, f(y_n)}{f(x_n) - f(x_n - f(x_n))}\left[1 + \frac{f(y_n)}{f(x_n)} + \frac{f(y_n)}{f(x_n) - f(x_n - f(x_n))}\right],$$
$$x_{n+1} = z_n - \frac{f(x_n)\, f(z_n)}{f(x_n) - f(x_n - f(x_n))}\left[1 + \frac{f(y_n)}{f(x_n)} + \frac{f(y_n)}{f(x_n) - f(x_n - f(x_n))}\right].\tag{1.4}$$

Very recently, Thukral [14] gave the following optimal eighth-order derivative-free iterations without memory:

$$w_n = x_n + f(x_n),\qquad y_n = x_n - \frac{f(x_n)^2}{f(w_n) - f(x_n)},\qquad
z_n = y_n - \frac{f[x_n,w_n]\, f(y_n)}{f[x_n,y_n]\, f[w_n,y_n]},$$
$$x_{n+1} = z_n - \frac{f(z_n)}{f[y_n,z_n] - f[x_n,y_n] + f[x_n,z_n]}\left[1 - \frac{f(z_n)}{f(w_n)}\right]^{-1}\left[1 + \frac{2 f(y_n)^3}{f(w_n)^2 f(x_n)}\right]^{-1},\tag{1.5}$$

and also

$$w_n = x_n + \beta f(x_n),\quad \beta \in \mathbb{R}\setminus\{0\},\qquad y_n = x_n - \frac{f(x_n)^2}{f(w_n) - f(x_n)},$$
$$z_n = y_n - \frac{f[x_n,y_n] - f[y_n,w_n] + f[x_n,w_n]}{f[x_n,y_n]^2}\, f(y_n),$$
$$x_{n+1} = z_n - \frac{f(z_n)}{f[y_n,z_n] - f[x_n,y_n] + f[x_n,z_n]}\left[1 - \frac{f(z_n)}{f(w_n)}\right]^{-1}\left[1 - \left(\frac{f(y_n)}{f(z_n)}\right)^{3}\right]\left[1 + \frac{f(y_n)^3}{f(z_n)^2 f(x_n)}\right]^{-1}.\tag{1.6}$$

Unfortunately, by error analysis we have found that (1.6) (relation (2.32) in [14]) does not possess convergence order eight. Thus, it is not optimal in the sense of the Kung-Traub conjecture. Hence, (1.6) is excluded from our list of optimal eighth-order derivative-free methods. Note that (2.27) of [14] also contains a conspicuous typo in its structure, which prevents it from producing the optimal order.

Derivative-free methods [15–17] have many applications in which derivative-involved methods are impractical. Nevertheless, factors other than freedom from derivatives or a high order of convergence also matter when choosing a root-finding method; for example, we refer the reader to [18] for the importance of initial guesses in this subject. For further reading, one may refer to [19–26].

This research contributes a general class of three-step methods without memory using four points (that is, the function must be evaluated four times per step), hence four evaluations per full cycle, for solving single-variable nonlinear equations. The contributed class has the following important features. First, it reaches the optimal efficiency index 1.682. Second, it is free from derivative calculation per full cycle, which is particularly fruitful for engineering and optimization problems. Third, using any optimal quartically convergent derivative-free two-step method in its first two steps yields a new optimal eighth-order derivative-free iteration without memory in the sense of the Kung-Traub conjecture [9]. Finally, we will see that the new eighth-order derivative-free methods are fast and convergent.

2. Main Contribution

Consider a nonlinear scalar function $f\colon D \subseteq \mathbb{R} \to \mathbb{R}$ that is sufficiently smooth in a neighborhood $D$ of a simple zero $\alpha$. To construct derivative-free methods of optimal order eight, we consider a three-step cycle in which the first two steps are any optimal two-step derivative-free scheme without memory:

$$\begin{cases}\text{an optimal two-step derivative-free method ($x_n$, $w_n$, $y_n$, and $z_n$ are available)},\\[4pt] x_{n+1} = z_n - \dfrac{f(z_n)}{f'(z_n)}.\end{cases}\tag{2.1}$$

Please note that $w_n = x_n + \beta f(x_n)$ or $w_n = x_n - \beta f(x_n)$, where $\beta \in \mathbb{R}\setminus\{0\}$ is specified by the user. The form of $w_n$ depends entirely on the two-point method placed in the first two steps of (2.1). The purpose of this paper is to establish new derivative-free methods of optimal order; hence, we reduce the number of evaluations from five to four by suitably approximating the newly appearing derivative. To attain a class of optimal eighth-order derivative-free techniques, we approximate $f'(z_n)$ using the known values $f(x_n)$, $f(w_n)$, $f(y_n)$, and $f(z_n)$. We do this by applying a nonlinear rational fraction inspired by the Padé approximant:

$$p(t) = \frac{a_0 + a_1 (t - x_n) + a_2 (t - x_n)^2}{1 + a_3 (t - x_n)},\tag{2.2}$$

where $a_1 - a_0 a_3 \neq 0$.

The general setup in approximation theory is that a function f is given and one wants to estimate it with a simpler function g, in such a way that the difference between f and g is small. The advantage is that the simpler function g can be handled without too many difficulties. Polynomials are not always a good class of functions when one wishes to estimate scalar functions with singularities, because polynomials are entire functions without singularities; they are only useful up to the first singularity of f. Rational functions (the concept behind the Padé approximant) are the simplest functions with singularities. The idea is that the poles of the rational function move toward the singularities of f, and hence the domain of convergence can be enlarged. The [m,n] Padé approximant of f is the rational function Q_m/P_n, where Q_m is a polynomial of degree at most m and P_n is a polynomial of degree at most n, for which the interpolation conditions are satisfied. As a matter of fact, (2.2) is an interpolatory rational function inspired by the Padé approximant.

Hence, by imposing the interpolation conditions $f(x_n) = p(x_n)$, $f(w_n) = p(w_n)$, $f(y_n) = p(y_n)$, and $f(z_n) = p(z_n)$ in (2.2), we obtain the following system of three linear equations in three unknowns (clearly $a_0 = f(x_n)$):

$$\begin{pmatrix}
w_n - x_n & (w_n - x_n)^2 & -f(w_n)(w_n - x_n)\\
y_n - x_n & (y_n - x_n)^2 & -f(y_n)(y_n - x_n)\\
z_n - x_n & (z_n - x_n)^2 & -f(z_n)(z_n - x_n)
\end{pmatrix}
\begin{pmatrix} a_1\\ a_2\\ a_3 \end{pmatrix} =
\begin{pmatrix} f(w_n) - f(x_n)\\ f(y_n) - f(x_n)\\ f(z_n) - f(x_n) \end{pmatrix},\tag{2.3}$$

whose solution, after simplification, is

$$a_3 = \frac{w_n\big(f[z_n,x_n] - f[y_n,x_n]\big) - f[z_n,x_n]\, y_n + f[w_n,x_n](y_n - z_n) + f[y_n,x_n]\, z_n}{(z_n - y_n)\, f(w_n) + (w_n - z_n)\, f(y_n) + (y_n - w_n)\, f(z_n)},$$
$$a_2 = \frac{f[w_n,x_n] - f[y_n,x_n] + a_3\big(f(w_n) - f(y_n)\big)}{w_n - y_n} = f[w_n,x_n,y_n] + a_3\, f[w_n,y_n],$$
$$a_1 = f[w_n,x_n] + f(w_n)\, a_3 - (w_n - x_n)\, a_2.\tag{2.4}$$
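The closed-form coefficients (2.4) can be checked numerically: the short script below (whose test function and pairwise-distinct nodes are arbitrary illustrative choices) builds a_0, ..., a_3 and verifies that the rational function (2.2) reproduces the four data values.

```python
# Numerical check of (2.4): the rational interpolant p of (2.2), built from
# the closed-form coefficients, reproduces f at the four nodes x, w, y, z.
import math

def coefficients(f, x, w, y, z):
    dd = lambda a, b: (f(a) - f(b)) / (a - b)   # divided difference f[a, b]
    a0 = f(x)
    a3 = (w * (dd(z, x) - dd(y, x)) - dd(z, x) * y
          + dd(w, x) * (y - z) + dd(y, x) * z) / (
         (z - y) * f(w) + (w - z) * f(y) + (y - w) * f(z))
    a2 = (dd(w, x) - dd(y, x)) / (w - y) + a3 * dd(w, y)
    a1 = dd(w, x) + f(w) * a3 - (w - x) * a2
    return a0, a1, a2, a3

def p(t, x, coeffs):
    a0, a1, a2, a3 = coeffs
    s = t - x
    return (a0 + a1 * s + a2 * s ** 2) / (1 + a3 * s)

f = lambda t: math.exp(t) - 2.0
x, w, y, z = 0.5, 0.9, 0.75, 0.7      # illustrative, pairwise-distinct nodes
c = coefficients(f, x, w, y, z)
residuals = [abs(p(t, x, c) - f(t)) for t in (x, w, y, z)]
```

The residuals are at rounding level, confirming that (2.4) solves the system (2.3).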

Now, by using (2.1) and (2.4), we have the following efficient and accurate class:

$$\begin{cases}\text{an optimal two-step derivative-free method ($x_n$, $w_n$, $y_n$, and $z_n$ are available)},\\[4pt]
x_{n+1} = z_n - \dfrac{\big(1 + a_3 (z_n - x_n)\big)^2 f(z_n)}{a_1 - a_0 a_3 + 2 a_2 (z_n - x_n) + a_2 a_3 (z_n - x_n)^2}.\end{cases}\tag{2.5}$$

By considering any optimal two-step derivative-free method in the first two steps of (2.5), we attain a new optimal derivative-free eighth-order technique. Applying (1.3), we have

$$w_n = x_n + f(x_n),\qquad y_n = x_n - \frac{f(x_n)^2}{f(w_n) - f(x_n)},$$
$$z_n = y_n - \frac{f[x_n,y_n] - f[y_n,w_n] + f[x_n,w_n]}{f[x_n,y_n]^2}\, f(y_n),$$
$$x_{n+1} = z_n - \frac{\big(1 + a_3 (z_n - x_n)\big)^2 f(z_n)}{a_1 - a_0 a_3 + 2 a_2 (z_n - x_n) + a_2 a_3 (z_n - x_n)^2}.\tag{2.6}$$
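A compact double-precision sketch of (2.6) reads as follows; the test equation, tolerances, and the early-exit guard (needed once the iterates collide at machine precision) are our own illustrative choices.

```python
# A sketch of the optimal eighth-order derivative-free method (2.6):
# a Steffensen-type first step, the second step of (1.3), and the
# rational (Pade-type) correction with the coefficients (2.4).

def eighth_order(f, x, tol=1e-12, max_iter=10):
    dd = lambda a, b: (f(a) - f(b)) / (a - b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + fx
        y = x - fx * fx / (f(w) - fx)
        fy = f(y)
        z = y - fy * (dd(x, y) - dd(y, w) + dd(x, w)) / dd(x, y) ** 2
        if z == y or f(z) == 0.0:
            return z                      # already at a (numerical) root
        a0 = fx                           # coefficients (2.4)
        a3 = (w * (dd(z, x) - dd(y, x)) - dd(z, x) * y
              + dd(w, x) * (y - z) + dd(y, x) * z) / (
             (z - y) * f(w) + (w - z) * f(y) + (y - w) * f(z))
        a2 = (dd(w, x) - dd(y, x)) / (w - y) + a3 * dd(w, y)
        a1 = dd(w, x) + f(w) * a3 - (w - x) * a2
        s = z - x
        x = z - (1 + a3 * s) ** 2 * f(z) / (a1 - a0 * a3 + 2 * a2 * s + a2 * a3 * s ** 2)
    return x

root = eighth_order(lambda x: x * x - 2.0, 1.5)
```

Each pass through the loop uses exactly the four evaluations f(x_n), f(w_n), f(y_n), f(z_n), in agreement with the optimal efficiency index 8^(1/4).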

To obtain the solution of a nonlinear scalar equation by the new derivative-free methods, we must choose a particular initial approximation x_0, ideally close to the simple root. In numerical analysis, it is essential to know the convergence behavior of an approximate method. Therefore, we now prove the convergence order of (2.6).

Theorem 2.1. Let $\alpha$ be a simple root of the sufficiently differentiable function $f$ in an open interval $D$. If $x_0$ is sufficiently close to $\alpha$, then (2.6) is of order eight and satisfies the error equation

$$e_{n+1} = \frac{(1+c_1)^2 c_2 \big((2+c_1)c_2^2 - c_1(1+c_1)c_3\big)\big((2+c_1)c_2^4 - c_1(1+c_1)c_2^2 c_3 - c_1^2(1+c_1)c_3^2 + c_1^2(1+c_1)c_2 c_4\big)}{c_1^7}\, e_n^8 + O(e_n^9),\tag{2.7}$$

where $e_n = x_n - \alpha$ and $c_k = \dfrac{f^{(k)}(\alpha)}{k!}$, $k = 1, 2, 3, \ldots$.

Proof. We provide the Taylor series expansion of each term involved in (2.6). Expanding $f$ about the simple root at the $n$th iterate, we have

$$f(x_n) = c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + c_6 e_n^6 + c_7 e_n^7 + c_8 e_n^8 + O(e_n^9).$$

Furthermore, it is easy to find

$$\frac{f(x_n)^2}{f(w_n) - f(x_n)} = e_n - \frac{(1+c_1)c_2}{c_1}\, e_n^2 + \frac{\big(2 + c_1(2+c_1)\big)c_2^2 - c_1(1+c_1)(2+c_1)c_3}{c_1^2}\, e_n^3 + \cdots + O(e_n^9).\tag{2.8}$$

By considering this relation and the first step of (2.6), we obtain

$$y_n = x_n - \frac{f(x_n)}{f[x_n,w_n]} = \alpha + \frac{(1+c_1)c_2}{c_1}\, e_n^2 + \frac{c_1(1+c_1)(2+c_1)c_3 - \big(2 + c_1(2+c_1)\big)c_2^2}{c_1^2}\, e_n^3 + \cdots + O(e_n^9).\tag{2.9}$$

We now expand $f(y_n)$ about the root, taking (2.9) into account. Accordingly, we have

$$f(y_n) = (1+c_1)c_2\, e_n^2 + \left((1+c_1)(2+c_1)c_3 - \frac{\big(2+c_1(2+c_1)\big)c_2^2}{c_1}\right) e_n^3 + \cdots + O(e_n^9).\tag{2.10}$$

Using (2.9) and (2.10) in the second step of (2.6) results in

$$z_n - \alpha = \frac{(1+c_1)c_2\big((2+c_1)c_2^2 - c_1(1+c_1)c_3\big)}{c_1^3}\, e_n^4 - \frac{1}{c_1^4}\Big(\big(10 + c_1(16 + c_1(9+2c_1))\big)c_2^4 - c_1(1+c_1)\big(14 + c_1(13+4c_1)\big)c_2^2 c_3 + c_1^2(1+c_1)^2(2+c_1)c_3^2 + c_1^2(1+c_1)^2(2+c_1)c_2 c_4\Big) e_n^5 + \frac{1}{c_1^5}\Big(\big(31 + c_1(53 + c_1(39 + c_1(16+3c_1)))\big)c_2^5 - c_1(4+3c_1)\big(18 + c_1(22 + 3c_1(4+c_1))\big)c_2^3 c_3 + c_1^2(1+c_1)\big(21 + 4c_1(7 + c_1(4+c_1))\big)c_2^2 c_4 - c_1^3(1+c_1)^2\big(7 + c_1(7+2c_1)\big)c_3 c_4 + c_1^2(1+c_1)c_2\Big((2+c_1)\big(15 + c_1(13+5c_1)\big)c_3^2 - c_1(1+c_1)\big(3 + c_1(3+c_1)\big)c_5\Big)\Big) e_n^6 + \cdots + O(e_n^9).\tag{2.11}$$

On the other hand, we have

$$f(z_n) = \frac{(1+c_1)c_2\big((2+c_1)c_2^2 - c_1(1+c_1)c_3\big)}{c_1^2}\, e_n^4 + \cdots + O(e_n^9).\tag{2.12}$$

We also have the following Taylor expansion for the approximation of $f'(z_n)$ in the third step, using (2.11):

$$\frac{a_1 - a_0 a_3 + 2a_2(z_n - x_n) + a_2 a_3 (z_n - x_n)^2}{\big(1 + a_3(z_n - x_n)\big)^2} = c_1 + \frac{(1+c_1)^2}{c_1^3}\Big((2+c_1)c_2^4 - 2c_1(1+c_1)c_2^2 c_3 - c_1^2(1+c_1)c_3^2 + c_1^2(1+c_1)c_2 c_4\Big) e_n^4 + \cdots + O(e_n^9).\tag{2.13}$$

Now, by applying (2.12) and (2.13), we attain

$$\frac{\big(1 + a_3(z_n - x_n)\big)^2 f(z_n)}{a_1 - a_0 a_3 + 2a_2(z_n - x_n) + a_2 a_3 (z_n - x_n)^2} = \frac{(1+c_1)c_2\big((2+c_1)c_2^2 - c_1(1+c_1)c_3\big)}{c_1^3}\, e_n^4 - \frac{1}{c_1^4}\Big(\big(10 + c_1(16 + c_1(9+2c_1))\big)c_2^4 - c_1(1+c_1)\big(14 + c_1(13+4c_1)\big)c_2^2 c_3 + c_1^2(1+c_1)^2(2+c_1)c_3^2 + c_1^2(1+c_1)^2(2+c_1)c_2 c_4\Big) e_n^5 + \cdots + O(e_n^9).\tag{2.14}$$

Finally, by applying (2.14) in (2.6), we obtain

$$e_{n+1} = \frac{(1+c_1)^2 c_2 \big((2+c_1)c_2^2 - c_1(1+c_1)c_3\big)\big((2+c_1)c_2^4 - c_1(1+c_1)c_2^2 c_3 - c_1^2(1+c_1)c_3^2 + c_1^2(1+c_1)c_2 c_4\big)}{c_1^7}\, e_n^8 + O(e_n^9).\tag{2.15}$$

This ends the proof and shows that (2.6) is an optimal eighth-order derivative-free scheme without memory.

As a consequence, using (1.2) we attain the following biparametric eighth-order family:

$$y_n = x_n - \frac{f(x_n)}{f[x_n,w_n]},\qquad w_n = x_n + \beta f(x_n),$$
$$z_n = y_n - \frac{f(y_n)}{f[x_n,y_n] + f[y_n,w_n] - f[x_n,w_n] + a\,(y_n - x_n)(y_n - w_n)},$$
$$x_{n+1} = z_n - \frac{\big(1 + a_3(z_n - x_n)\big)^2 f(z_n)}{a_1 - a_0 a_3 + 2a_2(z_n - x_n) + a_2 a_3 (z_n - x_n)^2},\tag{2.16}$$

which satisfies the following error equation:

$$e_{n+1} = \frac{(1 + \beta c_1)^4 c_2 \big(c_2^2 + c_1(a - c_3)\big)\big(c_2^4 - c_1^2 c_3^2 + c_1^2 c_2 c_4 + c_1 c_2^2 (a - c_3)\big)}{c_1^7}\, e_n^8 + O(e_n^9),\tag{2.17}$$

with $a \in \mathbb{R}$ and $\beta \in \mathbb{R}\setminus\{0\}$. In what follows, we give other methods that can easily be constructed by the approach above, just by varying the first two steps and using optimal derivative-free quartic iterations. As further examples, we have

$$y_n = x_n - \frac{f(x_n)}{f[x_n,w_n]},\qquad w_n = x_n - \beta f(x_n),$$
$$z_n = y_n - \frac{f(y_n)}{f[x_n,y_n] + f[y_n,w_n] - f[x_n,w_n] + a\,(y_n - x_n)(y_n - w_n)},$$
$$x_{n+1} = z_n - \frac{\big(1 + a_3(z_n - x_n)\big)^2 f(z_n)}{a_1 - a_0 a_3 + 2a_2(z_n - x_n) + a_2 a_3 (z_n - x_n)^2},\tag{2.18}$$

which satisfies the following error equation:

$$e_{n+1} = \frac{(-1 + \beta c_1)^4 c_2 \big(c_2^2 + c_1(a - c_3)\big)\big(c_2^4 - c_1^2 c_3^2 + c_1^2 c_2 c_4 + c_1 c_2^2 (a - c_3)\big)}{c_1^7}\, e_n^8 + O(e_n^9).\tag{2.19}$$

By considering $G(0) = 1$, $G'(0) = \dfrac{2 + \beta f[x_n,w_n]}{1 + \beta f[x_n,w_n]}$, and $|G''(0)| < \infty$ as in [17], we can have

$$y_n = x_n - \frac{f(x_n)}{f[x_n,w_n]},\qquad w_n = x_n + \beta f(x_n),$$
$$z_n = y_n - \frac{f(y_n)}{f[x_n,w_n]}\, G(t_n),\qquad t_n = \frac{f(y_n)}{f(x_n)},$$
$$x_{n+1} = z_n - \frac{\big(1 + a_3(z_n - x_n)\big)^2 f(z_n)}{a_1 - a_0 a_3 + 2a_2(z_n - x_n) + a_2 a_3 (z_n - x_n)^2},\tag{2.20}$$

with the follow-up error relation

$$e_{n+1} = \frac{c_2 (1 + c_1\beta)^2}{4 c_1^7}\Big[2 c_1 c_3 (1 + c_1\beta) + c_2^2\Big(-2\big(5 + c_1\beta(5 + c_1\beta)\big) + (1 + c_1\beta)^2 G''(0)\Big)\Big] \times \Big[2 c_1 c_2^2 c_3 (1 + c_1\beta) + 2 c_1^2 c_3^2 (1 + c_1\beta) - 2 c_1^2 c_2 c_4 (1 + c_1\beta) + c_2^4\Big(-2\big(5 + c_1\beta(5 + c_1\beta)\big) + (1 + c_1\beta)^2 G''(0)\Big)\Big]\, e_n^8 + O(e_n^9).\tag{2.21}$$

We can also present, by considering $G(0) = 1$, $G'(0) = \dfrac{2 - \beta f[x_n,w_n]}{1 - \beta f[x_n,w_n]}$, and $|G''(0)| < \infty$,

$$y_n = x_n - \frac{f(x_n)}{f[x_n,w_n]},\qquad w_n = x_n - \beta f(x_n),$$
$$z_n = y_n - \frac{f(y_n)}{f[x_n,w_n]}\, G(t_n),\qquad t_n = \frac{f(y_n)}{f(x_n)},$$
$$x_{n+1} = z_n - \frac{\big(1 + a_3(z_n - x_n)\big)^2 f(z_n)}{a_1 - a_0 a_3 + 2a_2(z_n - x_n) + a_2 a_3 (z_n - x_n)^2},\tag{2.22}$$

where $\beta \in \mathbb{R}\setminus\{0\}$, and

$$e_{n+1} = \frac{c_2 (-1 + c_1\beta)^2}{4 c_1^7}\Big[-2\Big(2 c_1^2 c_2 c_4 (1 - c_1\beta) + c_1 c_2^2 c_3 (-1 + c_1\beta) + c_1^2 c_3^2 (-1 + c_1\beta) + c_2^4\big(5 + c_1\beta(-5 + c_1\beta)\big)\Big) + c_2^4 (-1 + c_1\beta)^2 G''(0)\Big] \times \Big[-2 c_1 c_3 (-1 + c_1\beta) + c_2^2\Big(-2\big(5 + c_1\beta(-5 + c_1\beta)\big) + (-1 + c_1\beta)^2 G''(0)\Big)\Big]\, e_n^8 + O(e_n^9)\tag{2.23}$$

is its error equation.

In Table 1, a comparison of efficiency indices for derivative-free methods of various orders is given. Equation (2.6), (2.16), or any optimal eighth-order scheme resulting from our class reaches the efficiency index $8^{1/4} \approx 1.682$, which is optimal according to the Kung-Traub conjecture for multipoint iterative methods without memory for solving nonlinear scalar equations.

Remark 2.2. The introduced approximation for $f'(z_n)$ in the third step of (2.5) always doubles the convergence rate of the two-step method without memory given in its first two steps. That is, using any optimal fourth-order derivative-free method in its first two steps yields a novel three-step optimal eighth-order derivative-free method without memory. This makes our class interesting and accurate. Note that using any cubically convergent derivative-free method without memory consuming three evaluations in the first two steps of (2.5) results in a sixth-order derivative-free technique. This again shows that the introduced approximation for $f'(z_n)$ doubles the convergence rate.
The free nonzero parameter $\beta$ plays a crucial role in obtaining new numerical and theoretical results. In fact, if the user approximates $\beta$ per cycle by another iteration, using only the data available at the first step, the convergence behavior of the refined method(s) will improve. That is to say, a more efficient refined version of the attained optimal eighth-order derivative-free methods can be constructed by embedding an iteration for $\beta$ inside the iterative scheme itself, at the cost of additional computation. Methods obtained in this way are called "with memory" iterations, and such developments can be considered future work in this field. We mention here that, according to the experimental results, choosing a very small value for $\beta$, such as $10^{-10}$, mostly decreases the asymptotic error constant, so the numerical results are more accurate and reliable without much additional computational load.
In what follows, the findings are supported by illustrating the effectiveness of the eighth-order methods in determining the simple root of a nonlinear equation.

3. Numerical Testing

The results given in Section 2 are supported here by numerical experiments. We also include a well-known derivative-involved method in the comparisons to show the reliability of our derivative-free methods.

Wang and Liu [27] suggested an optimal derivative-involved eighth-order method as follows:

$$y_n = x_n - \frac{f(x_n)}{f'(x_n)},\qquad
z_n = x_n - \frac{f(x_n)}{f'(x_n)}\cdot\frac{f(x_n) - f(y_n)}{f(x_n) - 2 f(y_n)},$$
$$x_{n+1} = z_n - \frac{f(z_n)}{f'(x_n)}\left[\frac{5 f(x_n)^2 - 2 f(x_n) f(y_n) + f(y_n)^2}{5 f(x_n)^2 - 12 f(x_n) f(y_n)} + \left(1 + \frac{4 f(y_n)}{f(x_n)}\right)\frac{f(z_n)}{f(y_n)}\right].\tag{3.1}$$

Iteration (3.1) requires a derivative evaluation per full cycle, which is a drawback in engineering problems. Three methods from our class, namely (2.6), (2.16) with β = 1 and a = 0, and (2.16) with β = 0.01 and a = 0, are compared with the quartic schemes of Liu et al. (1.3) and Ren et al. (1.2) with a = 0, the sixth-order scheme of Khattri and Argyros (1.4), and the optimal eighth-order method of Thukral (1.5). The comparison was also made with the derivative-involved method (3.1).

All the computations reported here were done in MATLAB 7.6 using the VPA command, where for convergence we require the distance between two consecutive approximations to be less than ε = 10^{-6000}; that is, |x_{n+1} - x_n| ≤ 10^{-6000} or |f(x_n)| ≤ 10^{-6000}. Scientific computations in many branches of science and technology demand a very high degree of numerical precision. The test nonlinear scalar functions are listed in Table 2.
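The need for very high precision is easy to see: the eighth-order behavior is visible only at extreme accuracies, since double precision bottoms out after one or two cycles. The computational order of convergence of (2.6) can be estimated with Python's decimal module in place of MATLAB's VPA; the test equation f(t) = t² − 2, the starting point, the precision, and the acceptance band are illustrative choices of ours.

```python
# Estimating the computational order of convergence of (2.6) on f(t) = t^2 - 2:
# with d_n = |x_n - x_{n-1}|, the estimate rho ≈ ln(d_3/d_2)/ln(d_2/d_1)
# should approach 8 as the iterates home in on sqrt(2).
from decimal import Decimal, getcontext
import math

getcontext().prec = 900          # enough digits to resolve errors near 1e-786

def f(t):
    return t * t - 2

def dd(a, b):
    return (f(a) - f(b)) / (a - b)

def cycle(x):
    """One full step of the eighth-order scheme (2.6)."""
    fx = f(x)
    w = x + fx
    y = x - fx * fx / (f(w) - fx)
    z = y - f(y) * (dd(x, y) - dd(y, w) + dd(x, w)) / dd(x, y) ** 2
    a0 = fx                      # coefficients (2.4)
    a3 = (w * (dd(z, x) - dd(y, x)) - dd(z, x) * y
          + dd(w, x) * (y - z) + dd(y, x) * z) / (
         (z - y) * f(w) + (w - z) * f(y) + (y - w) * f(z))
    a2 = (dd(w, x) - dd(y, x)) / (w - y) + a3 * dd(w, y)
    a1 = dd(w, x) + f(w) * a3 - (w - x) * a2
    s = z - x
    return z - (1 + a3 * s) ** 2 * f(z) / (a1 - a0 * a3 + 2 * a2 * s + a2 * a3 * s ** 2)

xs = [Decimal("1.45")]
for _ in range(3):
    xs.append(cycle(xs[-1]))
d = [float(abs(xs[k + 1] - xs[k])) for k in range(3)]
rho = math.log(d[2] / d[1]) / math.log(d[1] / d[0])
```

Three cycles already drive the error far below 10^{-500}, and the estimated order comes out close to 8.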

The results of the comparisons for the test functions are provided in Table 3. It can be seen that the methods resulting from our class are accurate and efficient in terms of the number of accurate decimal places produced after several iterations. In terms of computational cost, our class is much better than the compared methods: it requires four function evaluations per full iteration and reaches the efficiency index 1.682.

An important problem in the practical application of multipoint methods is that their quick convergence, one of their merits, is attained only if the initial approximation is sufficiently close to the sought root; otherwise, the expected convergence speed cannot be realized in practice. For this reason, when applying multipoint root-finding methods, special attention should be paid to finding good initial approximations. Yun [28] outlined a noniterative way to do this:

$$x_0 \approx \frac{1}{2}\left[a + b + \operatorname{sign}\big(f(a)\big) \int_a^b \tanh\big(\delta f(x)\big)\, dx\right],\tag{3.2}$$

where $x_0$ approximates a simple root of $f(x) = 0$ on the interval $[a,b]$ with $f(a) f(b) < 0$, and $\delta$ is a positive number (e.g., a natural number).
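Yun's formula (3.2) is straightforward to evaluate with any quadrature rule; the sketch below uses a composite trapezoidal rule, and the value of δ, the node count, and the test equation are illustrative choices of ours.

```python
# A sketch of Yun's noniterative initial guess (3.2):
#   x0 ≈ (1/2) [ a + b + sign(f(a)) * integral_a^b tanh(delta * f(x)) dx ].
import math

def yun_initial_guess(f, a, b, delta=10.0, nodes=2000):
    h = (b - a) / nodes
    # composite trapezoidal approximation of the integral
    integral = 0.5 * (math.tanh(delta * f(a)) + math.tanh(delta * f(b)))
    integral += sum(math.tanh(delta * f(a + k * h)) for k in range(1, nodes))
    integral *= h
    return 0.5 * (a + b + math.copysign(1.0, f(a)) * integral)

# Example: f(x) = x^2 - 2 changes sign on [0, 2]; the guess lands near sqrt(2).
x0 = yun_initial_guess(lambda x: x * x - 2.0, 0.0, 2.0)
```

Intuitively, tanh(δf) saturates to −1 on one side of the root and +1 on the other, so the integral measures where the sign change occurs; larger δ sharpens the transition and the guess.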

If the cost of evaluating derivatives is greater than that of function evaluations, then (3.1), or any optimal eighth-order derivative-involved method, is less effective. To compare (2.5) with optimal eighth-order derivative-involved methods, we can use the (original) computational efficiency definition due to Traub [29]:

$$CE_T = p^{1/\sum_{j} \Lambda_j},\tag{3.3}$$

where $p$ is the convergence order and $\Lambda_j$ is the cost of evaluating $f^{(j)}$, $j \geq 0$, summed over all the evaluations of one full cycle. If we take $\Lambda_0 = 1$, we have $CE_T(2.5) \approx 8^{1/4}$ and $CE_T(3.1) \approx 8^{1/(3+\Lambda_1)}$, so that $CE_T(3.1) < CE_T(2.5)$ if $\Lambda_1 > 1$.
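The comparison in (3.3) amounts to a one-line computation; the sample derivative costs below are hypothetical values chosen only to illustrate the crossover at Λ₁ = 1.

```python
# Traub-type computational efficiency: CE = p^(1 / total cost per cycle).
# Our class: order 8 with four f-evaluations of unit cost each.
ce_free = 8.0 ** (1.0 / 4.0)

# A derivative-involved eighth-order method such as (3.1): three f-evaluations
# plus one f'-evaluation of hypothetical relative cost L1.
def ce_deriv(L1):
    return 8.0 ** (1.0 / (3.0 + L1))

cheap, equal, costly = ce_deriv(0.5), ce_deriv(1.0), ce_deriv(2.0)
```

When the derivative is cheaper than a function evaluation (Λ₁ < 1) the derivative-involved method wins, at Λ₁ = 1 the two efficiencies coincide, and for Λ₁ > 1 the derivative-free class is more efficient.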

4. Concluding Notes

The design of iterative formulas for solving nonlinear scalar equations is an interesting task in mathematics. On the other hand, analytic methods for solving such equations are almost nonexistent, and therefore it is only possible to obtain approximate solutions by relying on numerical methods based upon iterative procedures. In this work, we have contributed a simple yet powerful class of iterative methods for the solution of nonlinear scalar equations. The class was obtained by using the concept of Padé approximation (an interpolatory rational function) in the third step of a three-step cycle, in which the first two steps are given by any existing optimal fourth-order derivative-free method without memory. We have seen that the introduced approximation of the first derivative of the function in the third step doubles the convergence rate. The analytical proof for one method of our novel class was given. Per full cycle, our class consists of four function evaluations and reaches the optimal order eight. Hence, the efficiency index of the class is 8^{1/4} ≈ 1.682, that is, the optimal efficiency index. The effectiveness of the developed methods was illustrated by solving some test functions and comparing with well-known derivative-free and derivative-involved methods; the results of the comparisons were given in Table 3. Finally, it can be concluded that the novel class is accurate and efficient in contrast to the existing methods in the literature.

Acknowledgment

The authors would like to record their cordial thanks to the reviewer for his/her constructive remarks, which have considerably contributed to the readability of this paper.