Abstract

Many engineering problems reduce to the numerical solution of a nonlinear equation, and consequently considerable attention has been given in the literature to the design of efficient and accurate root solvers. Inspired and motivated by the ongoing research in this area, this paper establishes an efficient general class of root solvers in which, per computing step, three evaluations of the function and one evaluation of the first-order derivative are used to achieve the optimal order of convergence eight. The without-memory methods from the developed class possess the optimal efficiency index 1.682. Some numerical examples are discussed to show the applicability and validity of the class.

1. Introduction

Numerical solution of nonlinear scalar equations plays a crucial role in many optimization and engineering problems. For example, many engineering systems can be modeled as neutral delay differential equations (NDDEs), which involve a time delay in the highest-order derivative, in contrast to retarded delay differential equations (RDDEs), which do not. To illustrate, a system consisting of a mass mounted on a linear spring, to which a pendulum is attached via a hinged massless rod, is used to predict the dynamic response of structures to external forces by means of a set of actuators; it is modeled as an NDDE when the delay in the actuators is taken into consideration [1]. The stability of such a delay differential equation can then be investigated on the basis of the root location of its characteristic function. This simple example shows the importance of numerical root solvers in engineering problems.

There are numerical methods that find one root at a time, such as Newton's iteration and its variants, and schemes that find all the roots at once, namely, simultaneous methods such as the Weierstrass method. Recently, many journals, such as Numerical Algorithms, Mathematical Problems in Engineering, and Applied Mathematics and Computation, have published new findings in this active topic of study; see, for example, [2–5] and the references therein. To briefly review some of the newest findings in this field, we mention the following.

Noor et al. in [3] developed the following quartically convergent iterative scheme, consisting of three steps and eight evaluations per full iteration:
\[
y_n = x_n - \frac{f(x_n)}{f'(x_n)},\qquad
z_n = y_n - \frac{4f(y_n)}{f'(x_n)+2f'\!\left(\frac{x_n+y_n}{2}\right)+f'(y_n)},\qquad
x_{n+1} = z_n - \frac{4f(z_n)}{f'(x_n)+2f'\!\left(\frac{x_n+z_n}{2}\right)+f'(z_n)}.
\tag{1.1}
\]
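For concreteness, the following is a minimal double-precision sketch of one pass of scheme (1.1) as reconstructed above; the test function f(x) = x^3 - 2 and the starting guess x0 = 1.5 are hypothetical choices used only for illustration, not the test problems of this paper.

```python
# A sketch of one pass of scheme (1.1), following the reconstruction above.
# The eight evaluations per iteration are f(x), f'(x), f'((x+y)/2), f(y),
# f'(y), f(z), f'((x+z)/2), and f'(z).
def noor_step(f, df, x):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                                         # Newton predictor
    fy, dfy = f(y), df(y)
    z = y - 4.0 * fy / (dfx + 2.0 * df((x + y) / 2.0) + dfy)
    fz, dfz = f(z), df(z)
    return z - 4.0 * fz / (dfx + 2.0 * df((x + z) / 2.0) + dfz)

f, df = (lambda x: x ** 3 - 2.0), (lambda x: 3.0 * x ** 2)   # illustrative only
x = 1.5
for _ in range(3):
    x = noor_step(f, df, x)
print(x)   # approaches 2**(1/3) = 1.2599210498948732...
```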

In 2010, an eighth-order method was provided in [6], using Ostrowski's method in the first two steps of a three-step cycle, as follows:
\[
y_n = x_n - \frac{f(x_n)}{f'(x_n)},\qquad
z_n = y_n - \frac{f(x_n)}{f(x_n)-2f(y_n)}\,\frac{f(y_n)}{f'(x_n)},\qquad
x_{n+1} = z_n - \left[1+\frac{f(z_n)}{f(x_n)}+\left(\frac{f(z_n)}{f(x_n)}\right)^{2}\right]
\frac{f[x_n,y_n]\,f(z_n)}{f[x_n,z_n]\,f[y_n,z_n]},
\tag{1.2}
\]
wherein $f[x_0,x_1,\dots,x_k]$ denotes the divided difference of the function $f$.

Soleymani and Mousavi in [7] suggested a without-memory iterative scheme including three steps and only four functional evaluations per iteration, as follows:
\[
\begin{aligned}
y_n &= x_n - \frac{f(x_n)}{f'(x_n)},\qquad
z_n = x_n + \frac{f(x_n)+f(y_n)}{f'(x_n)} - \frac{2f(x_n)}{f'(x_n)}\,\frac{f(x_n)}{f(x_n)-f(y_n)},\\
x_{n+1} &= z_n - f(z_n)\,
\frac{\left(1+\left(\frac{f(z_n)}{f(y_n)}\right)^{2}\right)\left(1+2\,\frac{f(z_n)}{f(x_n)}\right)\left(1-6\left(\frac{f(y_n)}{f(x_n)}\right)^{3}\right)-\mathcal{A}\times\mathcal{B}}
{f[z_n,y_n]+f[z_n,x_n,x_n]\left(z_n-y_n\right)},
\end{aligned}
\tag{1.3}
\]
where $\mathcal{A}$ denotes $9\left(f(y_n)/f(x_n)\right)^{4}$, and $\mathcal{B}$ denotes $\left(1+\left(f(z_n)/f'(x_n)\right)^{2}\right)\left(1+\left(f(y_n)/f'(x_n)\right)^{3}\right)$.

For further reading, one may consult [8], where a complete review of the methods given in the literature from 2000 to 2010 was furnished, and also [9] for a background on the applications of such root solvers. We remark here that the efficiency of different methods can be assessed by the efficiency index, defined as $p^{1/n}$, wherein $p$ is the order of convergence and $n$ is the total number of evaluations per iteration. We should also recall that Kung and Traub in [10] conjectured that an iterative scheme without memory using $n$ evaluations per cycle can attain at most the order of convergence $2^{n-1}$. Any without-memory iteration that attains this bound is called an optimal method in the literature.
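As a quick numerical illustration of these two definitions, the snippet below (in Python, used here purely for illustration; the paper's own computations were carried out in MATLAB) evaluates the efficiency index $p^{1/n}$ for Newton's method and for an optimal three-step method of the kind developed later in this paper.

```python
# Efficiency index p**(1/n): p = order of convergence, n = evaluations per step.
methods = {
    "Newton (p = 2, n = 2)":             (2, 2),
    "optimal three-step (p = 8, n = 4)": (8, 4),   # the class derived in Section 2
}
for name, (p, n) in methods.items():
    print(f"{name}: efficiency index = {p ** (1.0 / n):.3f}")
# expected output: 1.414 for Newton and 1.682 for the optimal eighth-order class
```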

After providing a short background of this research in this section, we give the main contribution in Section 2. The convergence study of our general three-step class is also furnished therein. We will also produce different optimal three-step iterations from the contributed class. Section 3 discusses some numerical comparisons with the existing methods in literature, and finally Section 4 draws a conclusion of this research paper.

2. New Class of Iteration Methods

In order to construct a general class of methods consistent with the optimality conjecture of Kung and Traub, an eighth-order iterative scheme without memory should be built in this section using only four evaluations per computing step. Such schemes are also known as predictor-corrector methods, in which the first step (Newton's step) is the predictor, while the other two steps correct the obtained solution. To achieve our goal, we consider the following three-step scheme, whose first two steps are King's fourth-order family with one free real parameter $\beta\in\mathbb{R}$:
\[
y_n = x_n - \frac{f(x_n)}{f'(x_n)},\qquad
z_n = y_n - \frac{f(y_n)}{f'(x_n)}\,\frac{f(x_n)+\beta f(y_n)}{f(x_n)+(\beta-2) f(y_n)},\qquad
x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}.
\tag{2.1}
\]

Clearly, in (2.1) the evaluation $f'(z_n)$ should be removed in such a way that the order of convergence remains at the highest level while the number of evaluations per iteration stays as small as possible. Toward this end, we approximate it by a polynomial of degree two that fits $f'(x_n)$, $f(y_n)$, and $f(z_n)$. Therefore, we take into account $f(t)\approx A(t)=a_0+a_1(t-y_n)+a_2(t-y_n)^2$, where $A'(t)=a_1+2a_2(t-y_n)$. Subsequently, by imposing $f'(x_n)=A'(x_n)$, $f(y_n)=A(y_n)$, and $f(z_n)=A(z_n)$, we attain $a_0=f(y_n)$ and
\[
a_1+2a_2\left(x_n-y_n\right)=f'(x_n),\qquad
a_1+a_2\left(z_n-y_n\right)=\frac{f(z_n)-f(y_n)}{z_n-y_n}=f[z_n,y_n].
\tag{2.2}
\]

Solving the system (2.2) of two linear equations in two unknowns gives us $a_1$ and $a_2$. Using the obtained relations for the unknowns in the approximation $f'(z_n)\approx A'(z_n)=a_1+2a_2(z_n-y_n)$ and simplifying, we have
\[
f'(z_n)\approx \frac{2f[z_n,y_n]\left(x_n-z_n\right)+\left(z_n-y_n\right)f'(x_n)}{2x_n-z_n-y_n}.
\tag{2.3}
\]
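The derivation of (2.3) can be checked symbolically. The following sketch (a hypothetical verification script, not part of the paper) fits the quadratic $A(t)$ to $f'(x_n)$, $f(y_n)$, and $f(z_n)$ with sympy and confirms that $A'(z_n)$ reduces to the right-hand side of (2.3).

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
fpx, fy, fz = sp.symbols('fpx fy fz')          # stand for f'(x_n), f(y_n), f(z_n)
a0, a1, a2 = sp.symbols('a0 a1 a2')

A = a0 + a1*(t - y) + a2*(t - y)**2            # interpolating quadratic A(t)
dA = sp.diff(A, t)

# Impose A(y) = f(y), A'(x) = f'(x), A(z) = f(z) and solve for a0, a1, a2.
sol = sp.solve([sp.Eq(A.subs(t, y), fy),
                sp.Eq(dA.subs(t, x), fpx),
                sp.Eq(A.subs(t, z), fz)], [a0, a1, a2], dict=True)[0]

approx = dA.subs(t, z).subs(sol)               # A'(z_n), the proxy for f'(z_n)
fzy = (fz - fy)/(z - y)                        # divided difference f[z_n, y_n]
target = (2*fzy*(x - z) + (z - y)*fpx)/(2*x - z - y)
print(sp.simplify(approx - target))            # prints 0 if (2.3) is reproduced
```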

Substituting (2.3) into (2.1) and using a weight-function approach, we obtain the following general class of three-step without-memory iterations:
\[
\begin{aligned}
y_n &= x_n - \frac{f(x_n)}{f'(x_n)},\qquad
z_n = y_n - \frac{f(y_n)}{f'(x_n)}\,\frac{f(x_n)+\beta f(y_n)}{f(x_n)+(\beta-2) f(y_n)},\\
x_{n+1} &= z_n - \frac{f(z_n)\left(2x_n-z_n-y_n\right)}{2f[z_n,y_n]\left(x_n-z_n\right)+\left(z_n-y_n\right)f'(x_n)}\,\bigl\{G(t)+H(\tau)+Q(\gamma)\bigr\},
\end{aligned}
\tag{2.4}
\]
wherein $G(t)$, $H(\tau)$, and $Q(\gamma)$ are three real-valued weight functions with $t=f(z)/f(y)$, $\tau=f(z)/f(x)$, and $\gamma=f(y)/f(x)$ (the index $n$ is dropped), which should be chosen such that the order of convergence reaches the optimal level eight. We summarize this in the following theorem.
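A compact sketch of one pass of the class (2.4) is given below; the caller supplies the weight functions G, H, Q (assumed to satisfy the conditions (2.5) of the next theorem) and the King parameter beta, so this is a template rather than a specific member of the class.

```python
# A minimal sketch of one iteration of the general class (2.4); the weight
# functions and beta are assumptions supplied by the caller.
def general_step(f, df, x, G, H, Q, beta):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                                   # Newton predictor
    fy = f(y)
    z = y - fy / dfx * (fx + beta * fy) / (fx + (beta - 2.0) * fy)  # King step
    fz = f(z)
    fzy = (fz - fy) / (z - y)                          # divided difference f[z, y]
    dfz = (2.0 * fzy * (x - z) + (z - y) * dfx) / (2.0 * x - z - y)  # (2.3)
    w = G(fz / fy) + H(fz / fx) + Q(fy / fx)           # weight {G(t)+H(tau)+Q(gamma)}
    return z - fz / dfz * w
```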

Theorem 2.1. Let $\alpha\in D$ be a simple zero of a sufficiently differentiable function $f:D\subset\mathbb{R}\to\mathbb{R}$ for an open interval $D$, which contains $x_0$ as an initial approximation of $\alpha$. Then the three-step iteration (2.4), which includes four evaluations per full cycle, has the optimal convergence rate eight when
\[
\begin{gathered}
G(0)=1,\quad G'(0)=0,\quad \left|G''(0)\right|<\infty,\qquad
H(0)=0,\quad H'(0)=\tfrac{9}{6},\quad \left|H''(0)\right|<\infty,\\
Q(0)=Q'(0)=Q''(0)=0,\quad Q^{(3)}(0)=-(9+18\beta),\quad \left|Q^{(4)}(0)\right|<\infty,
\end{gathered}
\tag{2.5}
\]
and it then satisfies the error equation below:
\[
e_{n+1}=-\frac{1}{24}\,c_2\left(-c_3+c_2^2(1+2\beta)\right)
\Bigl(6c_2\bigl(9c_2c_3-4c_4+4c_2^3\left(5+(4-3\beta)\beta\right)\bigr)
+12\left(c_3-c_2^2(1+2\beta)\right)^{2}G''(0)+c_2^4\,Q^{(4)}(0)\Bigr)e_n^{8}+O\!\left(e_n^{9}\right).
\tag{2.6}
\]

Proof. Define $e_n=x_n-\alpha$ as the error of the iterative scheme at the $n$th iterate. Applying Taylor's series expansion to (2.4) and taking into account $f(\alpha)=0$, we have
\[
f(x_n)=f'(\alpha)\left[e_n+c_2e_n^2+c_3e_n^3+c_4e_n^4+c_5e_n^5+c_6e_n^6+c_7e_n^7+c_8e_n^8\right]+O\!\left(e_n^{9}\right),
\tag{2.7}
\]
where $c_k=(1/k!)\,f^{(k)}(\alpha)/f'(\alpha)$, $k\ge 2$. Furthermore, we have
\[
f'(x_n)=f'(\alpha)\left[1+2c_2e_n+3c_3e_n^2+4c_4e_n^3+5c_5e_n^4+6c_6e_n^5+7c_7e_n^6+8c_8e_n^7\right]+O\!\left(e_n^{8}\right).
\tag{2.8}
\]
Dividing (2.7) by (2.8) gives us
\[
\frac{f(x_n)}{f'(x_n)}=e_n-c_2e_n^2+2\left(c_2^2-c_3\right)e_n^3+\left(7c_2c_3-4c_2^3-3c_4\right)e_n^4+\cdots+O\!\left(e_n^{8}\right).
\]
Substituting this relation into the first step of (2.4) and writing the Taylor series expansion of $f(y_n)$, we obtain, respectively,
\[
\begin{gathered}
y_n=\alpha+c_2e_n^2+2\left(-c_2^2+c_3\right)e_n^3+\left(-7c_2c_3+4c_2^3+3c_4\right)e_n^4+\cdots+O\!\left(e_n^{8}\right),\\
f(y_n)=f'(\alpha)\left[c_2e_n^2+2\left(-c_2^2+c_3\right)e_n^3+\left(-7c_2c_3+4c_2^3+3c_4\right)e_n^4+\cdots+O\!\left(e_n^{8}\right)\right].
\end{gathered}
\tag{2.9}
\]
Furthermore, we find
\[
z_n-\alpha=\left(-c_2c_3+c_2^3(1+2\beta)\right)e_n^4
-2\left(c_3^2+c_2c_4-2c_2^2c_3(2+3\beta)+c_2^4\left(2+\beta(6+\beta)\right)\right)e_n^5+\cdots+O\!\left(e_n^{8}\right).
\tag{2.10}
\]
Similarly, we have
\[
\begin{aligned}
&\frac{f(z_n)\left(2x_n-z_n-y_n\right)}{2f[z_n,y_n]\left(x_n-z_n\right)+\left(z_n-y_n\right)f'(x_n)}
=\left(-c_2c_3+c_2^3(1+2\beta)\right)e_n^4\\
&\quad-2\left(c_3^2+c_2c_4-2c_2^2c_3(2+3\beta)+c_2^4\left(2+\beta(6+\beta)\right)\right)e_n^5\\
&\quad+\Bigl(-7c_3c_4+6c_2^2c_4(2+3\beta)-2c_2^3c_3\left(15+42\beta+8\beta^2\right)
+3c_2\left(-c_5+c_3^2(6+8\beta)\right)\\
&\qquad+2c_2^5\left(5+\beta(22+\beta(7+\beta))\right)\Bigr)e_n^6+\cdots+O\!\left(e_n^{8}\right),
\end{aligned}
\tag{2.11}
\]
\[
\begin{aligned}
&z_n-\frac{f(z_n)\left(2x_n-z_n-y_n\right)}{2f[z_n,y_n]\left(x_n-z_n\right)+\left(z_n-y_n\right)f'(x_n)}
=\alpha-\frac{3}{2}c_2c_3\left(-c_2c_3+c_2^3(1+2\beta)\right)e_n^7\\
&\quad+\Bigl(6c_2c_3^3+5c_2^2c_3c_4+c_2^7(1+2\beta)^2-\frac{3}{4}c_2^3c_3^2(23+32\beta)
-2c_2^4c_4(1+2\beta)\\
&\qquad+\frac{1}{4}c_2^5c_3\left(29+2\beta(41+6\beta)\right)\Bigr)e_n^8+O\!\left(e_n^{9}\right).
\end{aligned}
\tag{2.12}
\]
Moreover, by using (2.11) and (2.5), we attain
\[
\begin{aligned}
&\frac{f(z_n)\left(2x_n-z_n-y_n\right)}{2f[z_n,y_n]\left(x_n-z_n\right)+\left(z_n-y_n\right)f'(x_n)}
\left\{G\!\left(\frac{f(z_n)}{f(y_n)}\right)+H\!\left(\frac{f(z_n)}{f(x_n)}\right)+Q\!\left(\frac{f(y_n)}{f(x_n)}\right)\right\}\\
&\quad=\left(-c_2c_3+c_2^3(1+2\beta)\right)e_n^4
-2\left(c_3^2+c_2c_4-2c_2^2c_3(2+3\beta)+c_2^4\left(2+\beta(6+\beta)\right)\right)e_n^5\\
&\qquad+\Bigl(-7c_3c_4+6c_2^2c_4(2+3\beta)-2c_2^3c_3\left(15+42\beta+8\beta^2\right)
+3c_2\left(-c_5+c_3^2(6+8\beta)\right)\\
&\qquad\quad+2c_2^5\left(5+\beta(22+\beta(7+\beta))\right)\Bigr)e_n^6+\cdots+O\!\left(e_n^{8}\right).
\end{aligned}
\]
Considering this relation together with (2.12) and (2.5) in the last step of (2.4) results in
\[
\begin{aligned}
e_{n+1}=x_{n+1}-\alpha
&=-\frac{1}{24}\,c_2\left(-c_3+c_2^2(1+2\beta)\right)
\Bigl(6c_2\bigl(9c_2c_3-4c_4+4c_2^3\left(5+(4-3\beta)\beta\right)\bigr)\\
&\qquad+12\left(c_3-c_2^2(1+2\beta)\right)^{2}G''(0)+c_2^4\,Q^{(4)}(0)\Bigr)e_n^{8}+O\!\left(e_n^{9}\right).
\end{aligned}
\tag{2.13}
\]
This concludes the proof and shows that our suggested general class of three-step without-memory methods (2.4)-(2.5) possesses the eighth order of convergence.
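As an informal numerical check of Theorem 2.1, the sketch below estimates the computational order of convergence of the particular member (2.15) given in Remark 2.2 below, using the illustrative equation $x^3-2=0$, whose root $2^{1/3}$ is known; the test equation and the starting point $x_0=1.5$ are chosen here purely for illustration.

```python
# Computational order of convergence log|e_{n+1}/e_n| / log|e_n/e_{n-1}|
# for method (2.15), which should approach 8.
from mpmath import mp, mpf, log, cbrt

mp.dps = 2000                                   # enough digits to see three steps
f  = lambda x: x**3 - 2                         # illustrative test equation
df = lambda x: 3*x**2
alpha = cbrt(mpf(2))                            # known exact root

def step(x):                                    # one iteration of (2.15)
    fx, dfx = f(x), df(x)
    y = x - fx/dfx
    fy = f(y)
    z = y - fy/dfx * fx/(fx - 2*fy)
    fz = f(z)
    fzy = (fz - fy)/(z - y)
    dfz = (2*fzy*(x - z) + (z - y)*dfx)/(2*x - z - y)
    w = 1 + (fz/fy)**3 + mpf(9)/6*(fz/fx) - mpf(9)/6*(fy/fx)**3 - 5*(fy/fx)**4
    return z - fz/dfz*w

xs = [mpf('1.5')]
for _ in range(3):
    xs.append(step(xs[-1]))
e = [abs(x - alpha) for x in xs]
for n in range(1, len(e) - 1):
    print(log(e[n+1]/e[n]) / log(e[n]/e[n-1]))  # estimates should approach 8
```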

Remark 2.2. The class of three-step methods (2.4)-(2.5) requires four evaluations per iteration and has the order of convergence eight. Therefore, this class is of optimal order and supports the Kung-Traub conjecture [10]. Hence, the efficiency index of the eighth-order derivative-involved methods from the class is $\sqrt[4]{8}\approx 1.682$.

Some efficient methods from the contributed optimal three-step class are given below. Per computing step, these methods are free from second- or higher-order derivative evaluations. The new contributed methods are
\[
\begin{aligned}
y_n &= x_n - \frac{f(x_n)}{f'(x_n)},\qquad
z_n = y_n - \frac{f(y_n)}{f'(x_n)}\,\frac{2f(x_n)-f(y_n)}{2f(x_n)-5f(y_n)},\\
x_{n+1} &= z_n - \frac{f(z_n)\left(2x_n-z_n-y_n\right)}{2f[z_n,y_n]\left(x_n-z_n\right)+\left(z_n-y_n\right)f'(x_n)}
\left\{1+\left(\frac{f(z_n)}{f(y_n)}\right)^{3}+\frac{9}{6}\,\frac{f(z_n)}{f(x_n)}-\frac{9}{4}\left(\frac{f(y_n)}{f(x_n)}\right)^{4}\right\},
\end{aligned}
\tag{2.14}
\]
whose error equation is $e_{n+1}=\frac{1}{4}c_2^2c_3\left(9c_2c_3-4c_4\right)e_n^8+O\!\left(e_n^9\right)$, and
\[
\begin{aligned}
y_n &= x_n - \frac{f(x_n)}{f'(x_n)},\qquad
z_n = y_n - \frac{f(y_n)}{f'(x_n)}\,\frac{f(x_n)}{f(x_n)-2f(y_n)},\\
x_{n+1} &= z_n - \frac{f(z_n)\left(2x_n-z_n-y_n\right)}{2f[z_n,y_n]\left(x_n-z_n\right)+\left(z_n-y_n\right)f'(x_n)}
\left\{1+\left(\frac{f(z_n)}{f(y_n)}\right)^{3}+\frac{9}{6}\,\frac{f(z_n)}{f(x_n)}-\frac{9}{6}\left(\frac{f(y_n)}{f(x_n)}\right)^{3}-5\left(\frac{f(y_n)}{f(x_n)}\right)^{4}\right\},
\end{aligned}
\tag{2.15}
\]
whose error equation is $e_{n+1}=-\frac{1}{4}c_2^2\left(c_2^2-c_3\right)\left(9c_2c_3-4c_4\right)e_n^8+O\!\left(e_n^9\right)$.

We also mention some typical forms of the weight functions $G(t)$, $H(\tau)$, and $Q(\gamma)$ in iteration (2.4) that satisfy (2.5) and thus make the order optimal; these forms are listed in Table 1. Other than the very efficient methods (2.14) and (2.15) of optimal order eight, many more three-step without-memory iterations can be constructed from Table 1, that is, by using (2.5) in (2.4), and also with different values of the free parameter $\beta$. Thus, in order to save space while still presenting some of the other optimal eighth-order methods obtainable from (2.4) and (2.5), we list the most interesting ones in Table 2. Note that we first require the weight functions to satisfy (2.5), and then we derive the corresponding error equations from the data available in (2.13).
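For readers who wish to experiment, the following is a minimal double-precision sketch of method (2.14); the test equation $x^3-2=0$ and the initial guess $x_0=1.5$ are hypothetical choices, not the test problems of Table 3.

```python
# A sketch of method (2.14): beta = -1/2, G(t) = 1 + t**3, H(tau) = (9/6)*tau,
# Q(gamma) = -(9/4)*gamma**4.
def method_2_14(f, df, x, iters=2):
    for _ in range(iters):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx
        fy = f(y)
        z = y - fy / dfx * (2.0 * fx - fy) / (2.0 * fx - 5.0 * fy)
        fz = f(z)
        fzy = (fz - fy) / (z - y)
        dfz = (2.0 * fzy * (x - z) + (z - y) * dfx) / (2.0 * x - z - y)
        w = 1.0 + (fz / fy) ** 3 + (9.0 / 6.0) * (fz / fx) \
            - (9.0 / 4.0) * (fy / fx) ** 4
        x = z - fz / dfz * w
    return x

root = method_2_14(lambda x: x ** 3 - 2.0, lambda x: 3.0 * x ** 2, 1.5)
print(root)   # should agree with 2**(1/3) to (essentially) machine precision
```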
Future research in this field can now turn to finding optimal sixteenth-order four-step without-memory iterations based on the general class (2.4)-(2.5). Furthermore, producing with-memory iterations from this class can also be of interest for future studies.

3. Computational Examples

The contribution given in Section 2 is supported here through numerical experiments. We check the effectiveness of the novel methods (2.14) and (2.15) from our class of methods. For this reason, we have compared the new methods with Newton's method (NM), (1.1), (1.2), and (1.3). The nonlinear test functions are furnished in Table 3. The results of the comparisons are given in Table 4 in terms of the number of significant digits for each test function after the specified number of iterations.

All computations in this paper were performed in MATLAB 7.6 using variable precision arithmetic (VPA) to increase the number of significant digits. We have considered the stopping criterion $|f(x_n)|\le 10^{-800}$. In Table 4, an entry such as 0.2e-448 indicates that the absolute value of the given nonlinear function after three iterations is zero up to 448 decimal places. In Table 4, IN and TNE stand for the iteration number and the total number of evaluations, respectively. As shown in Table 4, the proposed method (2.14) is preferable to Newton's method and to several methods of fourth and eighth order of convergence. It is evident that (2.14) is more robust than the other competing methods of various orders. We also recall an important concern in using multipoint iterations: high-order root solvers are very sensitive to initial guesses far from the root, whereas they are very powerful for starting points in the vicinity of the sought zero.
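The experimental setting can be reproduced approximately with any multiprecision library; the sketch below uses Python's mpmath in place of MATLAB's VPA, applies method (2.15) with the stopping rule $|f(x_n)|\le 10^{-800}$, and works on a stand-in test function and starting point rather than the ones of Table 3.

```python
# High-precision driver mimicking the VPA setup; f and x0 are stand-ins.
from mpmath import mp, mpf, nstr, sin, cos, exp

mp.dps = 1000                                   # about 1000 significant digits

f  = lambda x: exp(x) * sin(x) + x - 2          # hypothetical test function
df = lambda x: exp(x) * (sin(x) + cos(x)) + 1   # its derivative

def method_2_15(x):
    """One step of the optimal eighth-order method (2.15)."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    z = y - fy / dfx * fx / (fx - 2 * fy)
    fz = f(z)
    fzy = (fz - fy) / (z - y)
    dfz = (2 * fzy * (x - z) + (z - y) * dfx) / (2 * x - z - y)
    w = 1 + (fz / fy) ** 3 + mpf(9) / 6 * (fz / fx) \
        - mpf(9) / 6 * (fy / fx) ** 3 - 5 * (fy / fx) ** 4
    return z - fz / dfz * w

x, it = mpf('1.0'), 0
while abs(f(x)) > mpf('1e-800'):                # stopping rule used in the paper
    x, it = method_2_15(x), it + 1
print(it, nstr(abs(f(x)), 5))                   # iteration count and residual
```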

Remark 3.1. If a large number of equations must be solved, for instance those arising from a large system of boundary-value problems, then the cost of function evaluations becomes important. Therefore, the proposed class (2.4)-(2.5) is valuable for solving such problems.

4. Concluding Remarks

In recent years, numerous works have focused on the development of more advanced and efficient methods for nonlinear scalar equations. Many methods have been developed that improve the convergence rate of Newton's method, yet a practical drawback of many existing methods remains their slow rate of convergence. This paper has developed and established a rapid class of eighth-order iterative methods. Per iteration, the methods from our class require three evaluations of the function and one evaluation of its first derivative; therefore, the efficiency index of the methods equals $\sqrt[4]{8}\approx 1.682$, which is better than that of the classical Newton's method. Kung and Traub [10] conjectured that a multipoint iteration without memory based on $n$ evaluations of $f$ or its derivatives could achieve the optimal convergence order $2^{n-1}$. Newton's method is an example that agrees with the Kung-Traub conjecture for $n=2$, and the class of methods (2.4)-(2.5) is another example that agrees with this hypothesis for $n=4$. Thus, the suggested class (2.4)-(2.5) is effective and should attract the attention of researchers.