Abstract

We construct a new biparametric three-point method with memory that greatly improves the computational efficiency of its original partner, without adding new functional evaluations. To this end, an existing seventh-order method has been modified by introducing different estimations of two self-accelerating parameters. These parameters are defined by means of Hermite interpolating polynomials, which provide the accelerating effect. In particular, the R-order of the proposed iterative method with memory is increased from seven to ten. A real multidimensional analysis of the stability of this method with memory is carried out in order to study its dependence on the initial estimations. Taking into account that iterative methods with memory are usually more stable than their derivative-free partners, and in view of the results obtained in this study, the behavior of this scheme turns out to be excellent, except for a small domain. Numerical examples and comparisons are also provided, confirming the theoretical results.

1. Introduction

In recent times, solving nonlinear problems of the form $f(x)=0$ has become a pressing difficulty in real-world phenomena. In this direction, numerous iterative methods have been proposed (see, e.g., [1–7]). These iterative methods constitute a noteworthy area of research in numerical analysis, as they can be applied in several areas of pure and applied sciences. Among them, the most eminent one-point iterative method without memory is the Newton-Raphson scheme, given by $x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$, $k = 0, 1, 2, \ldots$, which has second-order convergence. One drawback of this method is the requirement $f'(x_k) \neq 0$, which confines its applications.
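For illustration, a minimal sketch of the Newton-Raphson iteration just recalled; the test function, starting guess, tolerance, and iteration cap are illustrative choices not taken from the paper, and the guard on the derivative makes the aforementioned drawback explicit:

% Newton-Raphson iteration for a scalar nonlinear equation f(x) = 0.
f  = @(x) x.^3 - 2;             % illustrative test function
df = @(x) 3*x.^2;               % its derivative
x = 1.5;                        % illustrative starting guess
for k = 1:20
    fx = f(x); dfx = df(x);
    if dfx == 0                 % the scheme breaks down if f'(x_k) = 0
        error('Derivative vanished at the current iterate.');
    end
    x_new = x - fx/dfx;         % Newton step: x_{k+1} = x_k - f(x_k)/f'(x_k)
    if abs(x_new - x) < 1e-12, x = x_new; break; end
    x = x_new;
end
x                               % approximation of the root 2^(1/3)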

The first objective and inspiration when designing iterative methods for solving this kind of problem is to attain the highest order of convergence with the least computational cost. Therefore, many researchers have paid much attention to the construction of optimal multipoint methods without memory, in the sense of the Kung-Traub conjecture [8], using $n+1$ functional evaluations per iteration, which can reach the optimal order $2^{n}$. This paper follows three main goals: primarily, to avoid the aforementioned restriction in practice; subsequently, to extend the proposed families to methods with memory in such a way that an R-order of convergence 10 is achieved without any new functional evaluation; and finally, to analyze the dependence of the resulting scheme on the set of initial estimations used.
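Although it is not used explicitly in what follows, the trade-off between order and computational cost mentioned here is commonly quantified by Ostrowski's efficiency index of a method of order $p$ that uses $d$ functional evaluations per iteration,
\[
I = p^{1/d},
\]
so that, under the Kung-Traub conjecture, an optimal scheme with $d = n+1$ evaluations per iteration attains at most $I = (2^{n})^{1/(n+1)}$, whereas methods with memory can exceed this bound without increasing $d$.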

Multipoint schemes have enormous practical significance, as they overcome the theoretical limits of one-point schemes regarding order and efficiency. They also generate approximations of higher precision; moreover, improved computer arithmetic and symbolic calculation have allowed the efficient implementation of multipoint methods.

On the other hand, iterative schemes with memory use information from the current and the preceding iterations. Traub, in [9], designed the first method with memory by a small change in the well-known Steffensen scheme, where the initial guesses are given and a self-accelerating parameter is defined by means of Newton's interpolating polynomial of first degree; a standard presentation of this scheme (2) is recalled below. The order of convergence of (2) was $1+\sqrt{2}\approx 2.41$.
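For completeness, the form in which Traub's scheme with memory is usually written in the literature (the notation below is the commonly used one and may differ slightly in its presentation from display (2)):
\[
w_k = x_k + \gamma_k f(x_k), \qquad x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \qquad k = 0, 1, 2, \ldots,
\]
where $x_0$ and $\gamma_0$ are given and the self-accelerating parameter is updated as
\[
\gamma_k = -\frac{1}{N_1'(x_k)} = -\frac{1}{f[x_k, x_{k-1}]}, \qquad k \geq 1,
\]
$f[\cdot,\cdot]$ denoting the usual divided difference and $N_1$ Newton's interpolating polynomial of first degree built on $x_k$ and $x_{k-1}$.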

More recently, some authors have constructed iterative schemes with memory from optimal procedures of different orders, mainly four (see, e.g., [10–12]), eight ([13–15], among others), or even general n-point schemes [16, 17]. Some good reviews regarding the acceleration of the convergence order by using memory are [18, 19].

The convergence order of a method with memory is improved with respect to its without-memory partner; this is achieved by adding self-accelerating parameters while keeping the derivatives in the iterative expression. The accelerated convergence rate is obtained without additional functional evaluations, which results in higher computational efficiency. However, another important aspect of an iterative method to be considered is its numerical stability, that is, the analysis that tells us how dependent the scheme is on the initial guesses used. The dynamical behavior of the rational functions associated to iterative schemes is a very useful tool to study this dependence on the initial estimations. In recent years, complex discrete dynamics has been widely applied to iterative methods without memory (see, e.g., [20–23]). Nevertheless, it is known that iterative schemes with memory cannot be analyzed by means of these techniques. This is the reason why the authors of [24–26] focused their qualitative study on transforming them into multidimensional dynamical systems, whose qualitative properties can be analyzed by using standard tools, as is done for the classical Hénon map (see [27, 28]).

In this manuscript, we obtain a multipoint method with memory to solve nonlinear equations, followed by its convergence and stability analysis. The rest of the paper is organized as follows: in Section 2, we design a biparametric three-point iterative scheme with memory. The method is obtained by employing two self-accelerating parameters, which use current and previous data from the available information. These parameters are recalculated at each iteration by using Hermite interpolating polynomials, increasing the order of convergence from seven to ten. Section 3 is devoted to the qualitative analysis of the proposed scheme in terms of the real discrete dynamical system involved. After introducing some basic concepts, the stability of the method on a generic low-degree polynomial is studied. In general, it presents a very stable behavior, but also a chaotic one for values of the parameter in a small domain. In Section 4, some numerical examples are presented, confirming the results proven in Section 2.

2. Convergence Analysis

In this section, we demonstrate the improvement of the convergence order of the method given by Khattri and Argyros [29] by adding two different parameters in the first and third steps. Firstly, we consider the seventh-order without-memory scheme (4) presented in [29], where and . The error equation for the last step of (4) is where $c_j = \frac{f^{(j)}(\alpha)}{j!\,f'(\alpha)}$, for $j \geq 2$, and $\alpha$ is the searched simple root of $f$.

By using two parameters and in (4) and replacing and , we obtain the biparametric family (6).

The error equations of each step of (6) are

and

From (9), we can assure that the order of convergence of (6) is still seven, independently of the values of the parameters and . This order can be improved from seven to ten by taking and , but the root $\alpha$ is not known.

So, to improve the rate of convergence of (6), we recalculate the values of parameters and at each iteration by taking and , as , , and are not available. We denote by and these estimations, which are computed by using the current and previous iterations and satisfy and . We consider the following formulas for and , using Hermite interpolating polynomials of different degrees:

Theorem 1. Let be a sufficiently differentiable function and let be the Hermite interpolating polynomial of degree that interpolates at nodes belonging to an interval , such that the derivative is continuous in and the Hermite interpolating polynomial satisfies the conditions , , , and .
Let us define and , and assume that (i) all the nodes are close enough to the zero and (ii) the conditions and hold. Then,

Proof. It is known that the error expression of can be obtained by
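In its generic form, the Hermite interpolation error for a polynomial $H$ of degree $m$ built on distinct nodes $t_0, \ldots, t_s$ with multiplicities $m_0, \ldots, m_s$ (the concrete nodes and multiplicities used in (12) are those fixed by the interpolation conditions in Theorem 1; the notation here is generic) reads
\[
f(x) - H(x) = \frac{f^{(m+1)}(\xi_x)}{(m+1)!}\,\prod_{i=0}^{s}(x - t_i)^{m_i},
\qquad \sum_{i=0}^{s} m_i = m + 1,
\]
for some $\xi_x$ in the smallest interval containing $x$ and the nodes.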

After differentiating (12) at , we get

The Taylor expansion of the derivative of at and about provides

and where . Substituting (14) and (15) into (13), we obtain which implies that is, or

The concept of the R-order of convergence [30] and the following statement (see [4], p. 287) are used to estimate the order of convergence of (6).

Then, we can prove the following result.

Theorem 2. Let be a sufficiently differentiable function and let and be the varying parameters in the iterative scheme (6), computed by means of (10). If the initial estimate is close enough to a simple root of , then the R-order of convergence of the iterative method is at least 10. We denote this scheme by OM10.

Proof. Let be the sequence generated by scheme OM10. As it converges to the root of with R-order , we can write where . If , then . Therefore,

Let us now consider the sequences and , with R-orders and , respectively; then,

and

The following error expressions of the method with memory (6) can be obtained from (7), (8), and (9) and the varying parameters and .

and

Here, higher-order terms in (24), (25), and (26) have been excluded.

In method OM10, is calculated by means of (10); the Hermite interpolating polynomial satisfies the conditions , , , , , and . After differentiating twice the error expression of at the point , we get

By using Taylor’s expansion of and at and about ,

and

From (30), we have where . By substituting (29) and (31) into (27), we obtain

Now, dividing (32) by (28),

or

As is calculated by (10), the third derivative of Hermite interpolating polynomial satisfies

From (30), we have

Substituting (14) and (36) into (35), we obtain

and hence,

or

According to (24), (25), (26), (34), and (39), we get

and

By comparing the exponents of appearing in the three pairs of relations (22)-(40), (23)-(41), and (21)-(42), the following system is obtained.

The unique positive solution of (43) is , , and . So, the R-order of method (6), with and calculated by (10), is at least 10.
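To illustrate the technique just used with a generic sketch (this is not the actual system (43), whose coefficients come from the error equations (24)–(26)): if a sequence of errors satisfies both $e_{k+1} \sim e_k^{\,p}$, by the definition of R-order $p$, and $e_{k+1} \sim e_{k-1}^{\,a}\, e_k^{\,b}$, from the error analysis, then
\[
e_{k+1} \sim e_k^{\,p} \sim e_{k-1}^{\,p^{2}}
\quad\text{and}\quad
e_{k+1} \sim e_{k-1}^{\,a}\, e_k^{\,b} \sim e_{k-1}^{\,a + b p},
\]
so the R-order must solve $p^{2} = a + b\,p$. System (43) results from imposing this matching of exponents simultaneously on the error sequences of the three substeps of (6).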

3. Multidimensional Dynamical Study

In this section, we build the discrete dynamical system associated to OM10 in order to carry out its qualitative analysis. The general iterative expression of a fixed-point procedure that employs two previous iterations to obtain the following one is where and are the initial approximations. However, if the scheme is a three-step one, as OM10 is, not only but also the subsequent intermediate points and must be used to calculate the following iterate with a high order of convergence. Therefore, the fixed point iteration is expressed as

To obtain the fixed points, we define the fixed point multidimensional function associated to by means of

Therefore, a fixed point of satisfies and .

We define a discrete dynamical system in from the function , given by where the steps of the th and ()th iterations are denoted by , , , , , and . Fixed points of satisfy and . By imposing these conditions on the rational operator , it can be reduced to a real-valued function , in order to study the asymptotic stability of these points.
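As a minimal illustration of this construction, consider the simplest case of a scheme with memory that uses only the two previous iterates (the operator actually associated to OM10 also involves the intermediate points of each step, so it has more components, but the idea is the same). If the iteration is $x_{k+1} = \phi(x_{k-1}, x_k)$, the associated multidimensional fixed point function is
\[
G(x_{k-1}, x_k) = \bigl(x_k,\ \phi(x_{k-1}, x_k)\bigr),
\qquad
G(\bar{x}, \bar{x}) = (\bar{x}, \bar{x}) \iff \phi(\bar{x}, \bar{x}) = \bar{x},
\]
so the fixed points of $G$ have equal components, which is precisely the property used to reduce the operator to a real-valued function.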

If all the components of a fixed point of are different from a zero of the nonlinear scalar function , then it is called a strange fixed point.

Let be a vectorial function. The orbit of a point is defined as .

A point is a $k$-periodic point if and , for .

The stability of fixed points in multivariate operators (see, e.g., [31]) is analyzed as follows.

Theorem 3. Let from to be of class . Assume is a k-periodic point and let be the eigenvalues of . (a) If all the eigenvalues have modulus lower than one, then is attracting. (b) If one eigenvalue has modulus greater than one, then is unstable, that is, repelling or a saddle. (c) If all the eigenvalues have modulus greater than one, then is repelling.

In addition, a fixed point is called hyperbolic if all the eigenvalues of the associated Jacobian matrix have modulus different from one. In particular, if there exist eigenvalues with modulus greater than one and others with modulus lower than one, then the hyperbolic point is a saddle point.

Let us remark the difference between the stability of a fixed point in one-dimensional dynamics and that in multidimensional dynamics: in the scalar case, the stability of the fixed point depends on the value of the derivative of the rational operator at the point (the point is attracting if the absolute value of this derivative is lower than one, superattracting if the derivative vanishes, and repulsive if its absolute value is greater than one, the rational function being the one associated to the iterative method applied on a polynomial ); in the multidimensional case, the eigenvalues of the Jacobian matrix associated to the fixed point operator are the elements that determine the character of the fixed points.
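A minimal numerical sketch of this classification criterion (the operator G is a generic placeholder for the multidimensional fixed point function, passed as a function handle acting on column vectors; the Jacobian is approximated by central finite differences with an illustrative step, so this is not the symbolic procedure used in the paper):

function tag = classify_fixed_point(G, xf)
% Classify a fixed point xf of a vectorial operator G : R^n -> R^n by the
% moduli of the eigenvalues of its Jacobian matrix (Theorem 3).
% Example (Newton's method on x^2 - 2, written with a dummy memory component):
%   classify_fixed_point(@(v) [v(2); v(2) - (v(2)^2 - 2)/(2*v(2))], [sqrt(2); sqrt(2)])
    n = numel(xf); h = 1e-8; J = zeros(n);
    for j = 1:n
        e = zeros(n, 1); e(j) = h;
        J(:, j) = (G(xf + e) - G(xf - e)) / (2*h);   % central finite differences
    end
    m = abs(eig(J));                                 % moduli of the eigenvalues
    if all(m < 1)
        tag = 'attracting';
    elseif all(m > 1)
        tag = 'repelling';
    elseif any(m > 1)
        tag = 'saddle or unstable';
    else
        tag = 'non-hyperbolic';                      % some modulus equal to one
    end
end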

Moreover, a point is called critical if the determinant of the associated Jacobian matrix vanishes at it (in the one-dimensional case, a critical point is the one that makes the derivative of the fixed point operator vanish). One way to calculate critical points, for iterative methods of convergence order higher than two, is to find those points with null eigenvalues , . In the case that they are also fixed points, they are called superattracting, as an extension of the scalar case.

Thus, if is an attracting fixed point of , its basin of attraction is defined as

A key result from Julia and Fatou [32] establishes the relationship between the existence of free critical points (called in this way when they are different from the roots of ) and the set of initial points converging to an attracting periodic point: there is at least one critical point associated with each invariant Fatou component. So, by observing the orbits of the critical points, all the attracting behaviors can be found. This result holds both for complex and for real iterative functions. The main drawback is that often the analytic expression of the critical points cannot be found for high-order methods, because of the elevated degree of the polynomials involved in the rational function.

The union of all the basins of attraction defines the Fatou set, usually represented by means of a dynamical plane. It is numerically constructed by starting with a mesh of initial guesses, iterating the method on each of them, and assigning different colors depending on the basin they converge to.

3.1. Dynamical Analysis of OM10

Now, we study the behavior of the operator associated to scheme OM10 on quadratic polynomials. In order to achieve this aim, the asymptotic stability of the fixed points of the associated rational vector function applied on , with , must be analyzed. It is clear that, if , has two simple real roots; it has one double root at if , and it does not have real roots if .

We need to calculate the fixed points of the fixed point operator associated to method OM10 on , , that is, a high-degree rational function depending on the parameter of the polynomial . However, it is not viable to analyze the fixed points of this operator directly, as indeterminations usually appear when we calculate the fixed points, due to cancelations of factors such as , , and in the denominator of . In order to avoid this, and using the fact that all the fixed points have equal components, we firstly force and, after simplifying, . The resulting one-dimensional reduced operator is

Moreover, there are two values of which reduce the rational function: (i)If , then,(ii)If , then,

Proposition 1. The number of real fixed points of (and their stability) depends on the value of the parameter : (a) If , has no real roots, so the fixed points and are strange, both of them being repulsive. (b) If , the only (double) root of , is an attracting fixed point of , and also is a repulsive strange fixed point. (c) When or , the fixed points and are superattracting (corresponding to the roots of ). Moreover, and are strange fixed points if ; is always repulsive, and is attracting if and superattracting if . For , there is only one (superattracting) fixed point that is a root of , and one strange fixed point, , which is repulsive.

Proof. It is straightforward that identity leads us to the fixed points. Specifically, provided that . That is, the fixed points of are the roots of or the real roots of polynomial . Then, real strange fixed points and appear when .

Regarding their stability, it can be checked that the eigenvalues of the reduced Jacobian matrix coincide with the derivative of : .

and then, their stability is easily stated.

and where for . This is graphically checked in Figure 1.

An example of the behavior stated in Proposition 1 is presented in Figure 2, where the basins of attraction for different values of are shown. These pictures have been generated by means of a real mesh of points, following the routines appearing in [33]. For each initial pair , the intermediate values and have been generated from by adding a small random value (in MATLAB code, and ). When the method is executed on each initial estimation under these conditions, the point of the mesh is represented in orange or blue color if the process has converged to one of the roots or , at a distance lower than ; other colors correspond to convergence to strange fixed points, and black points correspond to initial guesses for which the method diverges or does not converge after the maximum number of iterations, . A sketch of this construction is given below.
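A minimal sketch of the dynamical-plane construction just described; OM10_step (one iteration of the scheme with memory, acting on a state vector whose last component is the current iterate), the parameter value, the hypothetical roots r1 and r2, the mesh limits, the perturbation size, the tolerance, and the maximum number of iterations are illustrative placeholders rather than the actual values behind Figure 2:

% Dynamical plane: color each point of a mesh of initial pairs by the root
% (if any) to which the method with memory converges.
c = 1;                         % hypothetical value of the polynomial parameter
r1 = 1; r2 = -1;               % hypothetical roots of the quadratic polynomial
tol = 1e-3; maxiter = 40; pert = 1e-6;
[X, Y] = meshgrid(linspace(-3, 3, 400));   % mesh of initial pairs (x_{k-1}, x_k)
basin = zeros(size(X));                    % 0: black (no convergence)
for i = 1:numel(X)
    x_old = X(i); x_new = Y(i);
    % intermediate points of the previous step, generated by a small random
    % perturbation as described above
    y_old = x_old + pert*rand;  z_old = x_old + pert*rand;
    state = [x_old; y_old; z_old; x_new];
    for k = 1:maxiter
        state = OM10_step(state, c);       % placeholder: one step of OM10
        if abs(state(end) - r1) < tol
            basin(i) = 1; break            % orange basin
        elseif abs(state(end) - r2) < tol
            basin(i) = 2; break            % blue basin
        end
    end
end
imagesc([-3 3], [-3 3], basin); axis xy equal tight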

Let us remark that is the value of the parameter that makes the strange fixed point superattracting; even in this situation, it is difficult to find initial estimations converging to it (a converging orbit to is plotted in Figure 2(b)), as most of them converge to the roots (orange and blue areas of Figure 2(a)), but some of them converge to 2-periodic orbits (black area appearing in Figure 2(c), where the initial estimation of the orbit is selected). Also, is used and convergence to both roots is observed (Figure 2(d)), with black areas of slow convergence. No other attractors appear. In Figure 2(e), the most frequent behavior is the convergence to the roots, but a black line of convergence to a 2-periodic orbit near also appears. In the case of the double root, corresponding to (see Figure 2(f)), the convergence to is very slow.

On the other hand, critical points play a key role in the stability of the system, as they always appear in some basin of attraction. Therefore, it is necessary not only to calculate them but also to analyze their asymptotic behavior, in order to detect all the attracting elements: attracting fixed points, periodic orbits, strange attractors, and so on. This is the reason why, in the following result, we calculate the free critical points of with equal components. Other critical points of the multivariate rational function may exist, but their calculation is not feasible due to the high degree of the polynomials (of several variables) involved.

Proposition 2. The number of real critical points of depends on the parameter : and are always critical points and, in addition, (a) if , and are free critical points; (b) if , there also exist two more free critical points, and .

The proof of this result follows directly from solving or, equivalently, from forcing the eigenvalues of the reduced Jacobian matrix to be zero.

Let us remark that, although the dynamics of the method seems to be stable from the dynamical planes plotted and the analysis of the strange fixed points, the presence of any of these critical points can make them bifurcate into periodic orbits or generate chaos for specific values of . In what follows, we analyze how the behavior changes depending on and whether there exists any other kind of bifurcation leading to repulsive or attracting points, or to periodic or strange attractors.

3.2. Feigenbaum Diagrams

In order to analyze bifurcations, we employ Feigenbaum diagrams of the multidimensional rational function related to OM10 on , using as initial estimations the real critical points described in Proposition 2 and searching for the ranges where changes of behavior happen. The critical points are not independent but conjugated two by two: and . Nevertheless, it is necessary to analyze the bifurcation diagram corresponding to each one of them, as their performance is different. A sketch of how such a diagram can be produced is shown below.
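A minimal sketch of the procedure, under the following assumptions: the polynomial parameter is denoted by c, critical_point(c) stands for the expression of the chosen free critical point as a function of c, init_state builds the initial state from it (as in the previous sketch), and the parameter range, transient length, and number of plotted iterates are illustrative placeholders:

% Feigenbaum (bifurcation) diagram: for each parameter value, iterate the
% scheme starting at a free critical point, discard a transient, and plot
% the remaining iterates against the parameter.
c_values = linspace(-0.4, 0.1, 2000);       % hypothetical parameter range
n_transient = 500; n_plot = 200;
figure; hold on
for c = c_values
    state = init_state(critical_point(c), c);    % placeholder initial state
    for k = 1:n_transient
        state = OM10_step(state, c);             % discard transient behavior
    end
    orbit = zeros(1, n_plot);
    for k = 1:n_plot
        state = OM10_step(state, c);
        orbit(k) = state(end);                   % last component: current iterate
    end
    plot(c*ones(1, n_plot), orbit, '.', 'MarkerSize', 1)
end
xlabel('c'), ylabel('x_k')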

By using the critical point as the starting guess, the bifurcation diagram shown in Figure 3(a) is obtained. In this case, it is observed that this critical point is in the basin of attraction of the root , as it converges to it for . A symmetric behavior is obtained for as the initial estimation, belonging to the basin of attraction of the root . In Figure 3(b), critical point also shows convergence to one of the roots, but exhibits a chaotic behavior for , with period-doubling bifurcation cascades, as can be observed in Figures 3(c) and 3(d).

We plot the iterates of for values of in one of the blue regions of Figure 3(d), in the space. In the sequence shown in Figure 4, several strange attractors appear, separated or unified depending on very close values of . These plots have been generated by fixing a value of , taking an initial estimation at , and applying OM10 on it, plotting one point per iteration (blue for the first 2000 iterations, green for iterations 2001 to 4000, and magenta for iterations 4001 to 10,000). The resulting images show that several attracting points join into wandering areas that are unified and separated depending on . A sketch of this kind of orbit plot is shown below.
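A minimal sketch of such an orbit plot, assuming the orbit is drawn in the plane of two consecutive iterates; OM10_step, init_state, critical_point, and the fixed parameter value are the same placeholders as in the previous sketch, and the color ranges follow the description above:

% Orbit of a critical point for a fixed parameter value, colored by
% iteration range as in Figure 4 (blue, green, magenta).
c = -0.3;                                     % hypothetical fixed parameter value
state = init_state(critical_point(c), c);     % placeholder initial state
N = 10000; orbit = zeros(N, 2);
for k = 1:N
    prev = state(end);
    state = OM10_step(state, c);
    orbit(k, :) = [prev, state(end)];         % point (x_{k-1}, x_k) of the orbit
end
figure; hold on
plot(orbit(1:2000, 1),     orbit(1:2000, 2),     'b.', 'MarkerSize', 2)
plot(orbit(2001:4000, 1),  orbit(2001:4000, 2),  'g.', 'MarkerSize', 2)
plot(orbit(4001:10000, 1), orbit(4001:10000, 2), 'm.', 'MarkerSize', 2)
xlabel('x_{k-1}'), ylabel('x_k')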

4. Numerical Comparison

In this section, after reviewing several two- and three-step methods with memory, we select some schemes to be compared with the proposed biparametric three-step method OM10, (6). This method is compared with methods XW41 (16–18), XW42 (16–19), XW43 (16–20), XW44 (17-18), XW45 (17–19), and XW46 (17–20) presented in [5], XW5 (18–24) given in [1], method SK7 in [29], and methods XW81 (37–34), XW82 (37–35), XW83 (37–36), XW84 (38–34), XW85 (38–35), and XW86 (38–36) given in [6], by using the test functions shown in Table 1.

A numerical test of methods with memory is usually made by using a starting point and also starting values of the accelerating parameters: to calculate and , and thereafter in order to estimate . We check the performance of the methods by using this strategy, in order to see whether the unstable behavior is avoided or, on the contrary, increased. For this purpose, we use and . It can be observed in Figure 5 that the stability of the method is greatly improved when appropriate initial values are considered, not only but also the steps and and, subsequently, the second iterate . The areas of convergence around the searched roots are wide in spite of the complexity of the nonlinear functions involved, and the behavior of the method is stable.

Table 1 shows the considered nonlinear test functions together with their roots (). Although these roots have infinitely many digits after the decimal point, only four digits are displayed (the nonlinear functions are taken from [6, 7]). In Table 2, the absolute errors are given for the presented method OM10. The computational order of convergence COC is approximated using the expression (see [34])

to check the theoretical order. All the numerical results shown in Table 2 agree with the theoretical ones; for this, we have considered up to 2000 significant digits by using the SetAccuracy command in Mathematica 8. OM10 has been used to solve the nonlinear functions, and the results obtained are compared with those of the other methods XW41 (16–18), XW42 (16–19), XW43 (16–20), XW44 (17-18), XW45 (17–19), XW46 (17–20), XW5 (18–24), SK7, XW81 (37–34), XW82 (37–35), XW83 (37–36), XW84 (38–34), XW85 (38–35), and XW86 (38–36). In Table 2, "NC" means not convergent. From Table 2, it is easy to see that the results obtained by the proposed method are quite superior to those of the other two- and three-step schemes.
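A commonly used approximation of the computational order of convergence, based on consecutive iterates and not requiring the exact root (this standard formula may differ in minor details from the exact expression of [34]), is
\[
\text{COC} \approx \frac{\ln\bigl(|x_{k+1}-x_k| \,/\, |x_k-x_{k-1}|\bigr)}{\ln\bigl(|x_k-x_{k-1}| \,/\, |x_{k-1}-x_{k-2}|\bigr)}.
\]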

5. Conclusion

In this work, we have presented a new family of biparametric three-step schemes with memory to solve nonlinear equations. To this end, we have introduced two self-accelerating parameters, designed by means of Hermite interpolating polynomials, into a seventh-order method, achieving a higher order of convergence without any additional functional evaluation. The R-order of OM10 increases up to 10. The stability analysis of this family has been carried out by transforming it into a discrete dynamical system and studying the asymptotic behavior of its fixed and critical points. The method has been shown to be very stable except in a few cases. These results have been checked by using dynamical planes, and the proposed method has been compared, in performance and computational efficiency, with several existing methods by means of numerical examples. This confirms the validity of the theoretical results.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research has been partially supported by grants from Spanish Ministerio de Economía y Competitividad (MTM2014-52016-C2-2-P) and by Generalitat Valenciana (PROMETEO/2016/089).