Abstract

Steffensen-type methods are practical for solving nonlinear equations, since such schemes do not need any derivative evaluation per iteration. Hence, this work contributes two new multistep classes of Steffensen-type methods for finding solutions of the nonlinear equation f(x) = 0. The new techniques can be viewed as generalizations of the one-step method of Steffensen. Theoretical proofs of the main theorems are furnished to establish the eighth-order convergence. Per computing step, the derived methods require only four function evaluations. Experimental results are also given to support the underlying theory of this paper and to allow conclusions on the efficiency of the developed classes.

1. Introduction

Finding rapidly and accurately the zeros of nonlinear functions is a common problem in various areas of numerical analysis. This problem has fascinated mathematicians for centuries, and the literature is full of ingenious methods and of discussions of their merits and shortcomings [1-13].

Over the last decades, a large number of different methods have been devised for finding the solution of nonlinear equations, either iteratively or simultaneously. Thus far, the better modified methods for finding roots of nonlinear equations build mainly on the pioneering work of Kung and Traub [14] or on the efficient procedures for constructing high-order methods discussed in [15-18].

In this paper, we consider the problem of constructing a numerical procedure for approximating a simple root α of the nonlinear equation f(x) = 0, (1.1) by derivative-free high-order iterations without memory. We note here that higher orders of convergence are only possible through employing multistep methods. In what follows, we first give two definitions concerning iterative processes.

Definition 1.1. Let the sequence {x_n} tend to α such that lim_{n→∞} (x_{n+1} - α)/(x_n - α)^p = C ≠ 0. (1.2) Then p is called the order of convergence of the sequence {x_n}, and C is known as the asymptotic error constant. If p = 1, p = 2, or p = 3, the sequence is said to converge linearly, quadratically, or cubically, respectively.

Definition 1.2. The efficiency of a method [2] is measured by the concept of efficiency index, defined by EI = p^(1/β), (1.3) where p is the order of the method and β is the whole number of (functional) evaluations per iteration. Note that in (1.3) we consider all function and derivative evaluations to have the same computational cost. Moreover, we should recall the Kung-Traub conjecture [14]: an iterative scheme without memory for solving nonlinear equations has at most the optimal efficiency index 2^((β-1)/β) and the optimal rate (speed) of convergence 2^(β-1).
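As a quick, hedged illustration of (1.3) and the Kung-Traub conjecture (a small sketch of ours, not part of the paper's computations), the efficiency indices for β = 2, 3, 4 evaluations can be tabulated:

```python
# Efficiency index EI = p**(1/beta) from (1.3), evaluated at the optimal orders
# p = 2**(beta - 1) conjectured by Kung and Traub for beta evaluations per step.
for beta in (2, 3, 4):
    p = 2**(beta - 1)            # optimal order: 2, 4, 8
    ei = p**(1.0/beta)           # identical to 2**((beta - 1)/beta)
    print(beta, p, round(ei, 3)) # prints 2 2 1.414, then 3 4 1.587, then 4 8 1.682
```

In particular, a method with p = 8 and β = 4 attains 8^(1/4) ≈ 1.682, the optimal index referenced throughout this paper.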

The so-called method of Newton is basically the first scheme taken into account to solve (1.1). This scheme is an optimal one-step one-point iteration without memory, and it is in fact the essence of all subsequent improvements of root solvers. We should recall that iterations are divided into two main categories: derivative-involved methods (in which at least one derivative evaluation is needed per iteration; see, e.g., [19, 20]) and derivative-free methods (with no direct derivative evaluation per iteration [21-23]), which are more economical in terms of derivative evaluation. Clearly, the real problems arising in science and engineering do not normally allow users to calculate first or second derivatives, since they involve hard structures that make the procedure of finding the derivatives difficult or time consuming. Due to this, derivative-free methods have come to attention for solving the related problems as easily as possible. For further reading, one may refer to [24-28].

This paper addresses the matter of derivative evaluation in nonlinear solvers by giving two general classes of three-step eighth-order methods, which have the optimal efficiency index, optimal order, and accurate performance on numerical examples, and which are totally free of derivative calculation per full cycle. Hence, after this short introduction, the rest of the paper is organized as follows. Section 2 briefly describes one of the most applicable uses of root solvers in mechanical engineering in order to demonstrate their applicability. It is followed by Section 3, where some of the available derivative-free root-finding schemes are discussed and presented. Section 4 gives the heart of this paper by contributing two novel classes of three-step derivative-free iterations without memory. The theoretical proofs of the main theorems are given therein to show that each member of our classes without memory attains the highest possible efficiency index using the smallest possible number of evaluations. In Section 5, we report a large number of numerical experiments to reveal the accuracy of the methods obtained from the proposed classes. Finally, Section 6 contains a short conclusion of this study.

2. Describing an Application of Nonlinear Equations Solvers

In Mechanical Engineering, a trunnion (a cylindrical protrusion used as a mounting and/or pivoting point; in a cannon, the trunnions are two projections cast just forward of the center of mass of the cannon and fixed to a two-wheeled movable gun carriage) has to be cooled before it is shrink fitted into a steel hub.

The equation that gives the temperature T_f to which the trunnion has to be cooled to attain the desired contraction is given by f(T_f) = -0.50598×10^-10 T_f^3 + 0.38292×10^-7 T_f^2 + 0.74363×10^-4 T_f + 0.88318×10^-2 = 0. (2.1)

Clearly, (2.1) can be solved using nonlinear equation solvers. This is one application of solving nonlinear equations by iterative processes in the scalar case. Now consider that the nonlinear scalar equations obtained in other problems are complicated, so that their first and second derivatives are not at hand, and that, moreover, better accuracy at low elapsed time is needed. Consequently, a close look should be paid to derivative-free high-order multistep iterations. Also note that such iterative schemes can be extended to solve systems of nonlinear equations, which have many applications in engineering problems; see [29] for more.

3. Available Derivative-Free Methods in the Literature

For the first time, Steffensen in [30] coined the following derivative-free form of Newton's method: x_{n+1} = x_n - f(x_n)/f[x_n, w_n], w_n = x_n + f(x_n), (3.1) where f[x_n, w_n] = (f(w_n) - f(x_n))/(w_n - x_n) is the two-point divided difference. This scheme possesses the same rate of convergence and efficiency index as Newton's.
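A minimal sketch of (3.1) in Python, applied to the trunnion equation (2.1) of Section 2 (the polynomial coefficients and their signs are as reconstructed in (2.1) above, so the computed temperature should be read as illustrative):

```python
def steffensen(f, x, tol=1e-10, itmax=100):
    """Steffensen's method (3.1): x <- x - f(x)/f[x, w], with w = x + f(x)."""
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + fx
        # divided difference f[x, w] = (f(w) - f(x))/(w - x)
        x = x - fx * (w - x) / (f(w) - fx)
    return x

# Trunnion cooling model (2.1), signs as reconstructed here
f = lambda T: -0.50598e-10*T**3 + 0.38292e-7*T**2 + 0.74363e-4*T + 0.88318e-2

Tf = steffensen(f, -100.0)
print(Tf)   # about -128.8
```

Quadratic convergence is reached without any derivative of f, which is exactly the practical appeal of Steffensen-type schemes stressed in this paper.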

We here recall the well-known family of derivative-free multistep methods, which was given by Kung and Traub [14] as comes next: y_n = x_n + βf(x_n), z_n = y_n - βf(x_n)f(y_n)/(f(y_n) - f(x_n)), w_n = z_n - (f(x_n)f(y_n)/(f(z_n) - f(x_n)))(1/f[y_n, x_n] - 1/f[z_n, y_n]), x_{n+1} = w_n - (f(x_n)f(y_n)f(z_n)/(f(w_n) - f(x_n))) × {(1/(f(w_n) - f(y_n)))(1/f[w_n, z_n] - 1/f[z_n, y_n]) - (1/(f(z_n) - f(x_n)))(1/f[z_n, y_n] - 1/f[y_n, x_n])}, (3.2) where β ∈ ℝ\{0}. This family of one-parameter methods possesses eighth-order convergence utilizing four pieces of information, namely f(x_n), f(y_n), f(z_n), and f(w_n); therefore, its classical efficiency index is 1.682. Note that the first two steps of (3.2) form an optimal fourth-order derivative-free uniparametric family, which can also be given as comes next: w_n = x_n + βf(x_n), β ∈ ℝ\{0}, y_n = x_n - f(x_n)/f[x_n, w_n], x_{n+1} = y_n - f(y_n)f(w_n)/((f(w_n) - f(y_n)) f[x_n, y_n]). (3.3)
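The two-step family (3.3) translates into very little code; the following is a hedged sketch (the function name `kt4` and its default parameters are ours, not from the original source):

```python
def kt4(f, x, beta=0.01, tol=1e-12, itmax=50):
    """Fourth-order derivative-free family (3.3); beta must be nonzero."""
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + beta*fx
        fw = f(w)
        y = x - fx*(w - x)/(fw - fx)      # x - f(x)/f[x, w]
        fy = f(y)
        fxy = (fx - fy)/(x - y)           # divided difference f[x, y]
        x = y - fy*fw/((fw - fy)*fxy)     # the fourth-order correction
    return x

root = kt4(lambda t: t*t - 2.0, 1.5)
print(root)   # approximately 1.41421356...
```

Only the three values f(x_n), f(w_n), f(y_n) are used per cycle, giving the optimal index 4^(1/3) ≈ 1.587 noted in Definition 1.2.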

Another uniparametric three-step derivative-free iteration was presented recently by Zheng et al. in [21] as follows: w_n = x_n + βf(x_n), y_n = x_n - f(x_n)/f[x_n, w_n], z_n = y_n - f(y_n)/(f[x_n, y_n] + f[y_n, w_n] - f[x_n, w_n]), x_{n+1} = z_n - f(z_n)/(f[z_n, y_n] + f[z_n, y_n, x_n](z_n - y_n) + f[z_n, y_n, x_n, w_n](z_n - y_n)(z_n - x_n)), (3.4) where f[x_n, x_{n-1}, …, x_{n-i}] denotes the divided difference of f(x). We recall that divided differences can be defined recursively via f[x_i] = f(x_i); f[x_i, x_j] = (f(x_i) - f(x_j))/(x_i - x_j), x_i ≠ x_j, (3.5) and, for m > i + 1, via f[x_i, x_{i+1}, …, x_m] = (f[x_i, x_{i+1}, …, x_{m-1}] - f[x_{i+1}, x_{i+2}, …, x_m])/(x_i - x_m), x_i ≠ x_m. (3.6)
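The recursion (3.5)-(3.6) maps directly onto code; a small sketch (the helper name `dd` is ours):

```python
def dd(f, xs):
    """Divided difference f[x_i, ..., x_m] via the recursion (3.6); f[x_i] = f(x_i)."""
    if len(xs) == 1:
        return f(xs[0])
    return (dd(f, xs[:-1]) - dd(f, xs[1:])) / (xs[0] - xs[-1])

# For f(x) = x^2, the second divided difference equals the leading coefficient, 1:
print(dd(lambda x: x*x, [1.0, 2.0, 3.0]))   # -> 1.0
```

The naive recursion recomputes lower-order differences exponentially, which is harmless for the at most four nodes x_n, y_n, z_n, w_n appearing in (3.4).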

4. Main Contributions

As usual, to build high-order iterations we must consider a multipoint cycle. In order to contribute, we first pay heed to the following three-step scheme, in which the first step is Steffensen's, while the second and third steps are Newton's iterations. This procedure is completely inefficient, since it possesses 8^(1/6) ≈ 1.414 as its efficiency index, which is the same as Steffensen's and Newton's: y_n = x_n - f(x_n)/f[x_n, w_n], w_n = x_n + f(x_n), z_n = y_n - f(y_n)/f′(y_n), x_{n+1} = z_n - f(z_n)/f′(z_n). (4.1)

To annihilate the derivative evaluations of the structure (4.1) and also keep the order at eight, we must rebuild the structure so that the first two steps provide the optimal order four using three evaluations, and the third step subsequently yields the eighth-order convergence by consuming four function evaluations in total. Toward this end, we make use of the weight function approach (and also replace f′(y_n) and f′(z_n) by f[x_n, w_n]) as comes next: y_n = x_n - f(x_n)/f[x_n, w_n], w_n = x_n + βf(x_n), z_n = y_n - (f(y_n)/f[x_n, w_n]){G(A) × H(B)}, x_{n+1} = z_n - (f(z_n)/f[x_n, w_n]){K(Γ) × L(Δ) × P(E) × Q(B) × J(A)}, (4.2) wherein β ∈ ℝ\{0}, and G(A), H(B), K(Γ), L(Δ), P(E), Q(B), and J(A) are seven real-valued weight functions, with A = f(y)/f(x), B = f(y)/f(w), Γ = f(z)/f(x), Δ = f(z)/f(w), and E = f(z)/f(y) (written without the index n), which should be chosen such that the order of convergence arrives at the optimal level eight. Theorem 4.1 indicates the way of selecting the weight functions in order to reach the optimal efficiency index by using the smallest possible number of function evaluations.

Theorem 4.1. Let α be a simple root of the nonlinear equation f(x) = 0 in the domain D, and assume that f(x) is sufficiently smooth in a neighborhood of the root. Then the derivative-free iterative class without memory defined by (4.2) is of optimal order eight when the weight functions satisfy G(0) = G′(0) = 1, G″(0) = G‴(0) = 0, |G^(4)(0)| < ∞; H(0) = H′(0) = 1, H″(0) = 0, H‴(0) = (18 + 6βf[x_n, w_n](5 + βf[x_n, w_n]))/(4 + βf[x_n, w_n]), |H^(4)(0)| < ∞; K(0) = K′(0) = 1; L(0) = L′(0) = 1; P(0) = P′(0) = 1, |P″(0)| < ∞; Q(0) = Q′(0) = 1, Q″(0) = 2 + 2βf[x_n, w_n], Q‴(0) = 0, |Q^(4)(0)| < ∞; J(0) = J′(0) = 1, J″(0) = J‴(0) = 0, |J^(4)(0)| < ∞. (4.3)

Proof. Using Taylor's series and symbolic computations, we can determine the asymptotic error constant of the three-step uniparametric class (4.2)-(4.3). Let e_n = x_n - α be the error in the nth iterate, and take into account f(α) = 0 and c_k = f^(k)(α)/k!, for all k = 1, 2, 3, …. Expanding f(x_n) around the simple zero α, we have f(x_n) = c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + c_6 e_n^6 + c_7 e_n^7 + c_8 e_n^8 + O(e_n^9). (4.4) By considering (4.4) and the first step of (4.2), we attain y_n - α = (c_2(1 + c_1 β)/c_1) e_n^2 + ⋯ + O(e_n^9). (4.5) In the same vein, by considering (4.3) and (4.5), we obtain for the second step z_n - α = (c_2(1 + c_1 β)(c_1 c_3(1 + c_1 β) + c_2^2(2 + c_1 β)^2)/c_1^3) e_n^4 + ⋯ + O(e_n^9), (4.6) and f(z_n) = (c_2(1 + c_1 β)(c_1 c_3(1 + c_1 β) + c_2^2(2 + c_1 β)^2)/c_1^2) e_n^4 + ⋯ + O(e_n^9). We also have f(z_n)/f[x_n, w_n] = (1/c_1^3) c_2(1 + c_1 β)(c_1 c_3(1 + c_1 β) + c_2^2(2 + c_1 β)^2) e_n^4 + ⋯ + O(e_n^9). Using symbolic computation in the last step of (4.2) and considering (4.3) and (4.6), we have z_n - f(z_n)/f[x_n, w_n] - α = (c_2^2(1 + c_1 β)(2 + c_1 β)(c_1 c_3(1 + c_1 β) + c_2^2(2 + c_1 β)^2)/c_1^4) e_n^5 + ⋯ + O(e_n^9). (4.7) Furthermore, using (4.3), we have (f(z_n)/f[x_n, w_n]){K(Γ) × L(Δ) × P(E) × Q(B) × J(A)} = (c_2(1 + c_1 β)(c_1 c_3(1 + c_1 β) + c_2^2(2 + c_1 β)^2)/c_1^3) e_n^4 + (1/c_1^4)(c_1^2 c_3^2(1 + c_1 β)^2(2 + c_1 β) + c_1^2 c_2 c_4(1 + c_1 β)^2(2 + c_1 β) - c_1 c_2^2 c_3(1 + c_1 β)(26 + c_1 β(37 + c_1 β(19 + 3c_1 β))) + c_2^4(29 + c_1 β(65 + c_1 β(57 + 4c_1 β(6 + c_1 β))))) e_n^5 + ⋯ + O(e_n^9). (4.8) Finally, a Taylor series expansion around the simple root in the last iterate, using (4.3) and the above relation (the values of the higher-order derivatives of K(Γ) and L(Δ), not explicitly given in (4.9), can be arbitrary at the point 0), results in e_{n+1} = (1/(24 c_1^7)) c_2(1 + c_1 β)(c_1 c_3(1 + c_1 β) + c_2^2(2 + c_1 β)^2) × {24 c_1^2 c_2 c_4(1 + c_1 β)^2 + 12 c_1^2 c_3^2(1 + c_1 β)^2(2 + P″(0)) + 24 c_1 c_2^2 c_3(1 + c_1 β)^2(6 + c_1 β(6 + c_1 β) + (2 + c_1 β)^2 P″(0)) + c_2^4[504 + 192P″(0) - G^(4)(0) - H^(4)(0) + J^(4)(0) + c_1 β(4(270 + 96P″(0) - G^(4)(0) + J^(4)(0)) + c_1 β(6(144 + 48P″(0) - G^(4)(0) + J^(4)(0)) + c_1 β(c_1 β(48 + 12P″(0) - G^(4)(0) + J^(4)(0)) + 4(78 + 24P″(0) - G^(4)(0) + J^(4)(0))))) + Q^(4)(0)]} e_n^8 + O(e_n^9), (4.9) which shows that the three-step class without memory (4.2)-(4.3) reaches the order of convergence eight by using only four pieces of information. This completes the proof.
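The leading term of (4.5) can be checked independently of symbolic software. The sketch below uses exact rational arithmetic on the test function f(t) = t + t^2 + t^3 (so that c_1 = c_2 = c_3 = 1 about the root α = 0) with β = 1; both choices are illustrative assumptions of ours:

```python
from fractions import Fraction

def f(t):
    # f(t) = t + t^2 + t^3, a simple function with root alpha = 0
    return t + t*t + t*t*t

beta = Fraction(1)
e = Fraction(1, 10**8)            # error of the current iterate x_n = alpha + e
w = e + beta*f(e)                 # w_n = x_n + beta f(x_n)
d = (f(w) - f(e)) / (w - e)       # divided difference f[x_n, w_n]
e_y = e - f(e)/d                  # error of y_n after the first step of (4.2)

# Predicted constant c_2 (1 + beta c_1)/c_1 = 2; the ratio e_y/e^2 should approach it.
print(float(e_y / e**2))          # close to 2
```

Shrinking e drives the ratio to the asymptotic constant, in agreement with the quadratic first step claimed in (4.5).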

A simple computational example from the class (4.2)-(4.3) can be y_n = x_n - f(x_n)/f[x_n, w_n], w_n = x_n + f(x_n), z_n = y_n - (f(y_n)/f[x_n, w_n]) × {1 + f(y_n)/f(x_n) + (f(y_n)/f(x_n))^5} × {1 + f(y_n)/f(w_n) + ((3 + f[x_n, w_n](5 + f[x_n, w_n]))/(4 + f[x_n, w_n]))(f(y_n)/f(w_n))^3}, x_{n+1} = z_n - (f(z_n)/f[x_n, w_n]) W_1, (4.10) where W_1 = {1 + f(z_n)/f(x_n) + (f(z_n)/f(x_n))^3}{1 + f(z_n)/f(w_n) + (f(z_n)/f(w_n))^3}{1 + f(z_n)/f(y_n) + (f(z_n)/f(y_n))^2} × {1 + f(y_n)/f(w_n) + (1 + f[x_n, w_n])(f(y_n)/f(w_n))^2}{1 + f(y_n)/f(x_n) + (f(y_n)/f(x_n))^5}, (4.11) and its error equation is as comes next e_{n+1} = ((1 + c_1)^3 c_2^2((2 + c_1)^2 c_2^2 + c_1(1 + c_1)c_3)((5 + (3 + c_1)c_1)c_2^3 - 4c_1 c_2 c_3 + c_1^2 c_4)/c_1^7) e_n^8 + O(e_n^9). (4.12)

Remark 4.2. Although the structure of (4.10) looks unwieldy, it can be coded easily, because f[x_n, w_n] and the factors f(y_n)/f(x_n), f(y_n)/f(w_n), f(z_n)/f(x_n), f(z_n)/f(w_n), and f(z_n)/f(y_n) need to be computed only once per computing step and their values can be reused throughout; thus only a few arithmetic operations are required to implement (4.10).
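Following Remark 4.2, scheme (4.10)-(4.11) can be coded with each ratio evaluated once per cycle. The sketch below transcribes the weights as reconstructed above, so the exact coefficients should be treated as our reading of (4.10) rather than a definitive implementation:

```python
import math

def method410(f, x, tol=1e-12, itmax=25):
    """Sketch of the eighth-order scheme (4.10)-(4.11), for which beta = 1."""
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + fx
        fw = f(w)
        d = (fw - fx) / (w - x)            # f[x_n, w_n], reused throughout the cycle
        y = x - fx / d
        fy = f(y)
        if abs(fy) < tol:
            return y
        A, B = fy/fx, fy/fw                # f(y)/f(x), f(y)/f(w)
        G = 1 + A + A**5
        H = 1 + B + ((3 + d*(5 + d))/(4 + d))*B**3
        z = y - (fy/d)*G*H
        fz = f(z)
        if abs(fz) < tol:
            return z
        r, s, t = fz/fx, fz/fw, fz/fy      # f(z)/f(x), f(z)/f(w), f(z)/f(y)
        W1 = ((1 + r + r**3)*(1 + s + s**3)*(1 + t + t**2)
              *(1 + B + (1 + d)*B**2)*(1 + A + A**5))
        x = z - (fz/d)*W1
    return x

root = method410(lambda u: math.cos(u) - u, 0.5)
print(root)   # near 0.739085, the fixed point of cos
```

Note that all weight factors tend to 1 as the iterate approaches the root, so the last two steps reduce to Steffensen-like corrections with the frozen divided difference f[x_n, w_n].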

The contributed class (4.2)-(4.3) uses the forward finite difference approximation in its first step. If we use the backward finite difference estimation instead, then another novel three-step eighth-order derivative-free iteration without memory can be constructed. For this reason, our second general derivative-free class can be given as y_n = x_n - f(x_n)/f[x_n, w_n], w_n = x_n - βf(x_n), z_n = y_n - (f(y_n)/f[x_n, w_n]){A(t) × B(i)}, x_{n+1} = z_n - (f(z_n)/f[x_n, w_n]){P(r) + Q(i) + J(t) + L(k)}, (4.13) where β ∈ ℝ\{0}, and A(t), B(i), P(r), Q(i), J(t), and L(k) are six real-valued weight functions, with t = f(y)/f(x), i = f(y)/f(w), r = f(z)/f(y), k = f(z)/f(w) (written without the index n), and they must read A(0) = A′(0) = 1, A″(0) = 2, A‴(0) = 0, |A^(4)(0)| < ∞; B(0) = B′(0) = 1, B″(0) = 6 - 4βf[x_n, w_n], B‴(0) = 0, |B^(4)(0)| < ∞; P(0) = P′(0) = 1, |P″(0)| < ∞; Q(0) = 0, Q′(0) = 1, Q″(0) = 10 - 8βf[x_n, w_n], Q‴(0) = (60 + 12βf[x_n, w_n])(8 + 3βf[x_n, w_n]), |Q^(4)(0)| < ∞; J(0) = 0, J′(0) = 1, J″(0) = 2, J‴(0) = 0, |J^(4)(0)| < ∞; L(0) = 0, L′(0) = 4 - 2βf[x_n, w_n]. (4.14)

The scheme (4.13)-(4.14) defines a new family of multipoint methods. To obtain the solution of (1.1) by the new derivative-free class, we must choose a particular initial guess x_0, ideally close to the simple root. In numerical analysis, it is very useful and essential to know the behavior of an approximate method; therefore, we now prove the order of convergence of the new eighth-order class.

Theorem 4.3. Let α be a simple root of the nonlinear equation f(x) = 0 in the domain D, and assume that f(x) is sufficiently smooth in a neighborhood of the root. Then the derivative-free iterative class defined by (4.13)-(4.14) is of optimal order eight.

Proof. Applying Taylor series and symbolic computations, we can determine the asymptotic error constant of the three-step uniparametric family (4.13)-(4.14). Let e_n = x_n - α be the error in the nth iterate, and take into account f(α) = 0 and c_k = f^(k)(α)/k!, for all k = 1, 2, 3, …. The procedure of this proof is similar to that of Theorem 4.1; thus, we give below only the final error equation of (4.13)-(4.14): e_{n+1} = (1/(24 c_1^6)) c_2 c_3(1 + c_1 β)^2 × {24 c_1^2 c_2 c_4(1 + c_1 β)^2 + 48 c_1 c_2^2 c_3(1 + c_1 β)(7 + c_1 β)^2 - 12 c_1^2 c_3^2(1 + c_1 β)^2(2 + P″(0)) + c_2^4[264 + A^(4)(0) + B^(4)(0) - J^(4)(0) + c_1 β(4(156 + A^(4)(0) - J^(4)(0)) + c_1 β(6(60 + A^(4)(0) - J^(4)(0)) + c_1 β(4A^(4)(0) + c_1 β(24 + A^(4)(0) - J^(4)(0)) + 4J^(4)(0)))) - Q^(4)(0)]} e_n^8 + O(e_n^9). (4.15) This shows that the contributed class (4.13)-(4.14) achieves the optimal order eight by using only four pieces of information. The proof is complete.

A very efficient example from our novel class (4.13)-(4.14) can be the following iteration without memory: y_n = x_n - f(x_n)/f[x_n, w_n], w_n = x_n - f(x_n), z_n = y_n - (f(y_n)/f[x_n, w_n]) × {1 + f(y_n)/f(x_n) + (f(y_n)/f(x_n))^2} × {1 + f(y_n)/f(w_n) + (3 - 2f[x_n, w_n])(f(y_n)/f(w_n))^2}, x_{n+1} = z_n - (f(z_n)/f[x_n, w_n]) W_2, (4.16) where W_2 = 1 + f(z_n)/f(y_n) + (f(z_n)/f(y_n))^2 + f(y_n)/f(w_n) + (5 - 4f[x_n, w_n])(f(y_n)/f(w_n))^2 + (10 + 2f[x_n, w_n])(8 + 3f[x_n, w_n])(f(y_n)/f(w_n))^3 + ((11 - f[x_n, w_n])(26 - 15f[x_n, w_n]) + f[x_n, w_n]^3)(f(y_n)/f(w_n))^4 + f(y_n)/f(x_n) + (f(y_n)/f(x_n))^2 + (4 - 2f[x_n, w_n])(f(z_n)/f(w_n)), (4.17) and its very simple error equation comes next as e_{n+1} = ((1 + c_1)^3 c_2^2(c_2^3 - (7 + (7 + c_1)c_1)c_2 c_3 + (1 + c_1)c_1 c_4)/c_1^5) e_n^8 + O(e_n^9). (4.18) Mostly, and according to the assumptions made in the proof of Theorem 4.1, c_1^7 should be in the denominator of the asymptotic error constant of optimal eighth-order methods, and if one obtains forms like c_1^6 or c_1^5, then the derived methods will mostly be finer than the other existing forms of optimal eighth-order methods.

Remark 4.4. Each method from the proposed derivative-free classes in this paper reaches the efficiency index 8^(1/4) ≈ 1.682, which is greater than 4^(1/3) ≈ 1.587 of the optimal fourth-order techniques and 2^(1/2) ≈ 1.414 of optimal one-point methods without memory.

Some other examples from the class (4.13)-(4.14) can easily be constructed by changing the involved weight functions in (4.14). For example, we can have y_n = x_n - f(x_n)/f[x_n, w_n], w_n = x_n - f(x_n), z_n = y_n - (f(y_n)/f[x_n, w_n]) × {1 + f(y_n)/f(x_n) + (f(y_n)/f(x_n))^2} × {1 + f(y_n)/f(w_n) + (3 - 2f[x_n, w_n])(f(y_n)/f(w_n))^2}, x_{n+1} = z_n - (f(z_n)/f[x_n, w_n]) W_3, (4.19) with W_3 = 1 + f(z_n)/f(y_n) + f(y_n)/f(w_n) + (5 - 4f[x_n, w_n])(f(y_n)/f(w_n))^2 + (10 + 2f[x_n, w_n])(8 + 3f[x_n, w_n])(f(y_n)/f(w_n))^3 + ((11 - f[x_n, w_n])(26 - 15f[x_n, w_n]) + f[x_n, w_n]^3)(f(y_n)/f(w_n))^4 + f(y_n)/f(x_n) + (f(y_n)/f(x_n))^2 + (4 - 2f[x_n, w_n])(f(z_n)/f(w_n)), (4.20) where its error equation comes next as e_{n+1} = ((1 + c_1)^3 c_2(c_2^3 - (7 + (7 + c_1)c_1)c_2^2 c_3 + (1 + c_1)c_1 c_3^2 + (1 + c_1)c_1 c_2 c_4)/c_1^5) e_n^8 + O(e_n^9). (4.21) And we can also construct y_n = x_n - f(x_n)/f[x_n, w_n], w_n = x_n - f(x_n), z_n = y_n - (f(y_n)/f[x_n, w_n]) × {1 + f(y_n)/f(x_n) + (f(y_n)/f(x_n))^2} × {1 + f(y_n)/f(w_n) + (3 - 2f[x_n, w_n])(f(y_n)/f(w_n))^2}, x_{n+1} = z_n - (f(z_n)/f[x_n, w_n]) W_4, (4.22) where W_4 = 1 + f(z_n)/f(y_n) + (f(z_n)/f(y_n))^2 + f(y_n)/f(w_n) + (5 - 4f[x_n, w_n])(f(y_n)/f(w_n))^2 + (10 + 2f[x_n, w_n])(8 + 3f[x_n, w_n])(f(y_n)/f(w_n))^3 + ((11 - f[x_n, w_n])(26 - 15f[x_n, w_n]) + f[x_n, w_n]^3)(f(y_n)/f(w_n))^4 + f(y_n)/f(x_n) + (f(y_n)/f(x_n))^2 + (f(y_n)/f(x_n))^4 + (4 - 2f[x_n, w_n])(f(z_n)/f(w_n)), (4.23) and its error equation comes next as e_{n+1} = ((1 + c_1)^3 c_2^2 c_3((1 + c_1)^3 c_2^3 - 2c_1(7 + (7 + c_1)c_1)c_2 c_3 + (1 + c_1)c_1^2 c_4)/c_1^6) e_n^8 + O(e_n^9). (4.24)

5. Numerical Examples

We check the effectiveness of the novel derivative-free classes of iterative methods (4.2)-(4.3) and (4.13)-(4.14) in this section. In order to do this, we choose (4.10) as the representative from the optimal class (4.2)-(4.3) and (4.16) as the representative of the novel three-step class (4.13)-(4.14). We have compared (4.10) and (4.16) with Steffensen's method (3.1), the fourth-order family of Kung and Traub (3.3) with 𝛽=0.01, the eighth-order technique of Kung and Traub (3.2) with 𝛽=1, and the optimal eighth-order family of Zheng et al. (3.4) with 𝛽=1, using the examples listed in Table 1.

The results of comparisons are given in Table 2 in terms of the number of significant digits for each test function after the specified number of iterations; for example, 0.1×10^-1190 shows that the absolute value of the given nonlinear function f_1 after three iterations is zero up to 1190 decimal places. For the numerical comparisons, the stopping criterion is |f(x_n)| < 10^-1200. In Table 2, IT and TNE stand for the number of iterations and the total number of (function) evaluations. We used four different initial guesses to analyze the behaviors of the methods fully. In Table 2, F stands for failure, for instance when the iteration for the particular initial guess diverges, finds another root, or needs more numerical computing steps to find an acceptable solution of the nonlinear equation.

It can be observed from Table 2 that in most cases our contributed methods from the classes of derivative-free iterations without memory are superior in solving nonlinear equations. Numerical computations have been carried out using variable precision arithmetic in MATLAB 7.6.

We here remark that eighth-order iterative methods such as (4.10) and (4.16) improve the number of correct digits in the convergence phase for the simple roots of (1.1) by a factor of eight per iteration; in order to show this, and also the asymptotic error constant, we have applied 1200-digit floating point arithmetic.

Note that experimental results show that the smaller the value of β ≠ 0, the more reliable the output results of solving nonlinear equations. As a matter of fact, the methods contributed from the three-step derivative-free classes (4.2)-(4.3) and (4.13)-(4.14) give better feedback in most cases; for instance, they provide even better accuracy than that illustrated in Table 2 when very small values of β are chosen, since this narrows the error equation. Also note that if we approximate β by an iteration through the data of the first step per cycle, then with-memory iterations from the suggested classes will be attained, which is the topic of forthcoming papers in this field.

A simple glance at Table 2 reveals that (4.16) mostly outperforms its competitors. The reason is that the finer the error equation, the better the numerical results attained. The error equation corresponding to (4.16), namely (4.18), is very small; in fact, its denominator contains c_1^5, which clearly shows this refinement. In general, and according to the assumptions made in the proof of Theorem 4.1, c_1^7 should be in the denominator of the asymptotic error constant of optimal eighth-order methods, and if one obtains forms like c_1^6 or c_1^5, then the derived methods will mostly be finer than the other existing forms of optimal eighth-order methods.

6. Conclusions

Multipoint methods without memory are methods that use new information at a number of points. Much literature on multipoint Newton-like methods for functions of one variable, together with their convergence analysis, can be found in Traub [2].

In this paper, two novel classes of iterations without memory were discussed fully. We have shown that each member of our contributions reaches the optimal order of convergence eight by consuming only four function evaluations per full cycle. Thus, our classes support the optimality conjecture of Kung and Traub for building optimal iterations without memory. Our classes can be regarded as generalizations of the well-cited derivative-free method of Steffensen. We have given many numerical examples in Section 5 to support the underlying theory developed in this paper. The numerical results were completely in harmony with the developed theory, and accordingly the contributions hit the target.

Acknowledgment

The author cheerfully acknowledges the interesting comments of the reviewer, which have led to the improvement of this paper.