Deterministic Discrete Dynamical Systems: Advances in Regular and Chaotic Behavior with Applications
Farahnaz Soleimani, Fazlollah Soleymani, Stanford Shateyi, "Some Iterative Methods Free from Derivatives and Their Basins of Attraction for Nonlinear Equations", Discrete Dynamics in Nature and Society, vol. 2013, Article ID 301718, 10 pages, 2013. https://doi.org/10.1155/2013/301718
Some Iterative Methods Free from Derivatives and Their Basins of Attraction for Nonlinear Equations
Abstract
First, we make Jain's derivative-free method optimal and thereby increase its efficiency index from 1.442 to 1.587. Then, a novel three-step computational family of iterative schemes for solving single-variable nonlinear equations is given. The schemes are free from derivative calculation per full iteration. The optimal family is constructed by applying the weight function approach alongside an approximation for the first derivative of the function in the last step, in which the first two steps are the optimized derivative-free form of Jain's method. The convergence rate of the proposed optimal method and the optimal family is studied. The efficiency index of each method of the family is 1.682. The superiority of the proposed contributions is illustrated by solving numerical examples and comparing them with some of the existing methods in the literature. In the end, we provide the basins of attraction for some methods to observe the beauty of iterative nonlinear solvers in producing fractals and also to identify the method with the largest attraction basins.
1. Introduction
In order to approximate the solutions of nonlinear equations, it is suitable to use iterative methods which lead to monotone sequences. The construction of iterative methods for estimating the solution of nonlinear equations or systems is an interesting task in numerical analysis [1]. In recent years, a large number of papers devoted to iterative methods have appeared in many journals; see, for example, [2–5] and their bibliographies.
The first famous iterative method is attributed to Newton: x_{n+1} = x_n - f(x_n)/f'(x_n). Steffensen approximated f'(x_n) by the forward finite difference of order one, (f(x_n + f(x_n)) - f(x_n))/f(x_n), to obtain its derivative-free form x_{n+1} = x_n - f(x_n)^2/(f(x_n + f(x_n)) - f(x_n)). Both methods converge quadratically while consuming two evaluations per cycle [6].
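To make the derivative-free idea concrete, the following is a minimal sketch of Steffensen's iteration as written above, applied to the illustrative equation f(x) = x^2 - 2 (the test function and tolerances are chosen here for illustration and are not from the paper):

```python
# Steffensen's derivative-free iteration:
#   x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n)),
# i.e., Newton's method with f'(x_n) replaced by a forward difference.

def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Iterate Steffensen's method until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx
        if denom == 0:  # guard against breakdown of the finite difference
            break
        x = x - fx * fx / denom
    return x

root = steffensen(lambda x: x * x - 2.0, 1.5)  # approximates sqrt(2)
```

Like Newton's method, the iteration is only locally convergent, so the seed must be reasonably close to the sought zero.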
A very important aspect of an iterative process is the rate of convergence of the sequence {x_n} which approximates a solution of f(x) = 0. This concept, along with the cost associated with the technique, allows establishing the index of efficiency of an iterative process. In this way, the classical efficiency index of an iterative process [6] is defined by the value p^{1/n}, where p is the convergence rate and n is the total number of evaluations per cycle. Consequently, the Newton and Steffensen schemes both have the same efficiency index 2^{1/2} ≈ 1.414.
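The efficiency indices quoted throughout the paper follow directly from this definition; a one-line sketch makes the arithmetic explicit:

```python
# Classical efficiency index E = p^(1/n): p = convergence rate,
# n = evaluations per cycle. Newton and Steffensen have (p, n) = (2, 2);
# the values 1.587 and 1.682 quoted later correspond to (4, 3) and (8, 4).

def efficiency_index(p, n):
    return p ** (1.0 / n)

indices = {name: round(efficiency_index(p, n), 3)
           for name, (p, n) in {
               "Newton/Steffensen": (2, 2),
               "optimal 4th order": (4, 3),
               "optimal 8th order": (8, 4),
           }.items()}
```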
In addition, Kung and Traub [7] conjectured that a multipoint iteration without memory consuming n evaluations per full iteration can reach the maximum convergence rate 2^{n-1}. Taking all this into account, many researchers in this area have been trying to construct robust optimal methods; see, for example, [8, 9] and their bibliographies.
The remainder of this study is organized as follows. In Section 2, we present an optimized form of the well-known cubically convergent Jain's method [10] with quartic convergence. Moreover, considering the Kung–Traub conjecture, we build a family of three-step without-memory iterative methods of optimal convergence order eight. To achieve this, we use an approximation of the first derivative of the function in the last step of a three-step cycle alongside a carefully chosen weight function. Analyses of convergence are given. A comparison with existing without-memory methods of various orders is provided in Section 3. We also investigate the basins of attraction of some of the derivative-free methods to exhibit the fractal behavior of such schemes in Section 4. Section 5 gives the concluding remarks of this research and outlines future work.
2. Main Results
The main idea of this work is first to present a generalization of the well-known Jain's method with optimal order four and efficiency index 1.587, and then to construct a three-step family of derivative-free eighth-order methods with the optimal efficiency index 1.682.
Let us take into consideration Jain's derivative-free method [10] as follows:
Equation (1) is a cubically convergent technique using three function evaluations per iteration, with 3^{1/3} ≈ 1.442 as its efficiency index. This index of efficiency is not optimal in the sense of the Kung–Traub conjecture. Therefore, in order to improve the index of efficiency and make (1) optimal, we consider the following iteration:
wherein and . If we use divided differences and define , then a novel modification of Jain's method (1) in a simpler format than (2) can be obtained as follows: where its convergence order and efficiency index are optimal. Theorem 1 illustrates this fact.
Theorem 1. Let α be a simple zero of a sufficiently differentiable function f in an open interval. If the initial guess x_0 is sufficiently close to α, then the method defined by (3) has the optimal convergence order four using only three function evaluations.
Proof. We expand the terms of (3) around the simple zero α in the nth iterate, where e_n = x_n - α and c_k = f^{(k)}(α)/(k! f'(α)), k ≥ 2. This gives the expansion of f(x_n); accordingly, by Taylor series expansion for the first step of (3), we get
Now, we expand f(y_n) around the simple root by using (4). We have
Note that throughout this paper we omit many terms of the Taylor expansions of the error equations for the sake of simplicity. Additionally, by Taylor series expansion in the second step of (3), we have
Using (4), (6), and the second step of (3), we attain
This shows that (3) is an optimal fourth-order derivative-free method with three evaluations per cycle. Hence, the proof is complete.
Remark 2. It should be remarked that if one uses the backward finite difference of order one in (3), that is, another variant of Steffensen's method, then a similar optimal method of quartic convergence will be attained. To illustrate, using this variant yields
wherein with and its error equation is as follows:
Therefore, we have given a modification of the well-known cubically convergent Jain's method with fourth order of convergence by using the same number of evaluations as Jain's scheme, while the orders are independent of the free nonzero parameter. The derivative-free iterations (3) and (8) satisfy the Kung–Traub conjecture for constructing optimal high-order multipoint iterations without memory. This was the first contribution of this research.
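The first steps of (3) and (8) rest on Steffensen-type difference approximations of the derivative: forward, using w = x + b f(x), or backward, using w = x - b f(x), with a free nonzero parameter b. The following sketch shows only the classical quadratic parametric Steffensen iteration built from either variant (it is not the full fourth-order scheme (3); the test equation and parameter value are illustrative choices):

```python
# One-step parametric Steffensen iteration with forward or backward
# difference node w; f[x, w] = (f(w) - f(x)) / (w - x) replaces f'(x).

def parametric_steffensen(f, x0, b=1.0, backward=False, tol=1e-12, max_iter=60):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x - b * fx if backward else x + b * fx
        slope = (f(w) - fx) / (w - x)  # first-order divided difference f[x, w]
        x = x - fx / slope
    return x

f = lambda x: x ** 3 - x - 2.0          # one simple real root near 1.52
r_fwd = parametric_steffensen(f, 1.5, b=0.1)
r_bwd = parametric_steffensen(f, 1.5, b=0.1, backward=True)
```

Both variants reach the same root; the parameter b leaves the order at two but, as Section 4 discusses for the fourth-order methods, it strongly influences the basin of attraction.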
Remark 3. Although we provide the sketch of the proofs of the main theorems over the real domain, the proposed Steffensen-type methods of this paper can be applied for finding complex zeros as well. Toward such a goal, a complex initial approximation (seed) is needed.
Now, in order to further improve the convergence rate and the index of efficiency, we append a Newton step as the third step of a three-step cycle in which the first two steps are those of (3):
Obviously, (10) is an eighth-order method with five evaluations (four function evaluations and one first-derivative evaluation) per full iteration, reaching the efficiency index 8^{1/5} ≈ 1.516. This index of efficiency is lower than that of (3). For this reason, we should approximate the derivative by a combination of the already known data, so that the order of (10) stays at eight but its number of evaluations drops from five to four. First, we consider that the newly appearing first derivative of the function at this step can be approximated as follows:
In fact, (11) is a linear combination of two divided differences in which the best choice of the parameter is zero, to attain a better order of convergence. However, (11) with this choice alone does not preserve the convergence rate of (10). As a result, the three-step method obtained so far (by only using (11)) is of order six, which is not optimal. In order to reach optimality, we use a weight function in the third step as well. Thus, we consider the following three-step without-memory family of derivative-free methods with a free parameter. Theorem 4 illustrates that (12) reaches eighth-order convergence using only four function evaluations per full iteration:
where and
By combining these two ideas, that is, an approximation of the newly appearing first derivative in the last step and a weight function, we have furnished a novel family of iterations.
Theorem 4. Let α be a simple zero of a sufficiently differentiable function f in an open interval. If the initial guess x_0 is sufficiently close to α, then the method defined by (12) has the optimal local convergence order eight.
Proof. Using the same definitions and symbolic computations as in the proof of Theorem 1, we obtain
We also obtain by using (5) and (14) that
Note that all of these symbolic computations can be carried out using a simple Mathematica code, as given in Algorithm 1.
Additionally, applying (14) and (15) in the last step of (12) results in the following error equation:
This ends the proof and shows that (12) is an optimal eighth-order family using four function evaluations per iteration.

Remark 5. The index of efficiency for (3) and (8) is 4^{1/3} ≈ 1.587 and for (12) is 8^{1/4} ≈ 1.682, which are optimal according to the conjecture of Kung and Traub.
A question might arise as to how the weight functions in (12) were chosen to attain as high a convergence order as possible with as few functional evaluations as possible. Although we have tried to suggest a simple family of iterations in (12), the weight functions can in general be chosen as follows: where the three weight functions satisfy the conditions below, so as to produce the following error equation:
3. Numerical Reports
The objective of this section is to provide a robust comparison between the presented schemes and some of the already known methods in the literature. For the numerical reports here, we have used the second-order Newton's method (NM), the quadratically convergent scheme of Steffensen (SM), our proposed optimal fourth-order technique (3), denoted by PM4, the optimal derivative-free eighth-order uniparametric family of iterative methods given by Kung and Traub in [7] (KT1), and our novel derivative-free eighth-order family (12), denoted by PM8. Due to the similarity of (3) and (8), we give the numerical reports of (3) only. The considered nonlinear test functions, their zeros, and the initial guesses in the neighborhood of the simple zeros are furnished in Table 1.

The results are summarized in Table 2 after three full iterations. As they show, the novel schemes are competitive with all of the well-known methods. All numerical computations were performed using 700-digit floating-point arithmetic. We have computed the root of each test function from the listed initial guess. As can be seen, the results in Table 2 are in harmony with the analytical discussion given in Section 2.
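The agreement between the observed and theoretical orders reported in Table 2 can be checked with the standard computational order of convergence (COC), COC ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}) with e_n = |x_n - α|. The sketch below applies it to Newton's method on f(x) = x^2 - 2 (an illustrative function, not one of the Table 1 tests, and in double precision rather than the paper's 700-digit arithmetic):

```python
import math

def newton_iterates(x0, steps):
    """Collect Newton iterates for f(x) = x^2 - 2."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - (x * x - 2.0) / (2.0 * x))  # Newton step
    return xs

root = math.sqrt(2.0)
xs = newton_iterates(1.5, 3)
e = [abs(x - root) for x in xs]
# COC from three consecutive errors; should be close to 2 for Newton.
coc = math.log(e[3] / e[2]) / math.log(e[2] / e[1])
```

The same three-error formula applied to the fourth- and eighth-order schemes would return values close to 4 and 8, provided the working precision is high enough to resolve the tiny errors.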

The proposed optimal fourth-order modification of Jain's method performs well in contrast to the classical one-step methods. We should remark that, in terms of computational complexity, our constructed derivative-free family (12) is more economical, due to its optimal order attained with only four function evaluations per full cycle.
In terms of the classical efficiency index for the without-memory methods compared in Table 2, NM and SM possess 1.414, (3) reaches 1.587, while (KT1) and (12) reach 1.682.
An important aspect in the study of iterative processes is the choice of a good initial approximation. Moreover, it is known that the set of all starting points from which an iterative process converges to a solution of the equation can be visualized by means of the attraction basins.
Thus, in the numerical examples we have considered initial approximations close enough to the sought zeros to reach convergence. A clear hybrid algorithm written in Mathematica [11] has recently been given in [12] to provide robust initial guesses for all the real zeros of a nonlinear function in an interval. Thus, the convergence of such iterative methods can be guaranteed by following such hybrid algorithms for providing robust initial approximations.
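The hybrid algorithm of [12] is not reproduced here; as a simple stand-in illustrating the idea, a uniform scan for sign changes followed by a few bisection steps already yields a robust starting point for every bracketed real zero in an interval (function, interval, and counts below are illustrative choices):

```python
def seed_guesses(f, a, b, subdivisions=200, bisections=20):
    """Return one rough approximation per bracketed real zero of f on [a, b]."""
    h = (b - a) / subdivisions
    pts = [a + i * h for i in range(subdivisions + 1)]
    guesses = []
    for lo, hi in zip(pts, pts[1:]):
        flo, fhi = f(lo), f(hi)
        if flo == 0.0:                  # grid point is already a zero
            guesses.append(lo)
        elif flo * fhi < 0:             # a zero is bracketed in (lo, hi)
            for _ in range(bisections):  # sharpen the bracket
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            guesses.append(0.5 * (lo + hi))
    return guesses

seeds = seed_guesses(lambda x: x ** 3 - x, -2.0, 2.0)  # zeros at -1, 0, 1
```

Each returned seed can then be handed to one of the fast Steffensen-type methods, which only need local convergence.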
In what follows, we give an application of the new scheme in Chemistry [13].
Application. An exothermic first-order irreversible reaction is carried out in an adiabatic reactor. Upon combining the kinetic and energy-balance equations, the following equation is obtained for computing the final temperature in K: where the temperature is in K. The following logarithmic transformation improves the scaling of the problem. This nonlinear equation has only one real root. A starting temperature of zero is not feasible; instead, we arbitrarily select a starting value in K. The results are given in Table 3, where each entry records the number of iterations together with the absolute value of the function at the final approximation; the approximations agree with the true solution.

In the next section, we investigate the beauty of such zero-finding iterative methods in the complex plane, alongside obtaining the fractal behavior of the schemes.
4. Finding the Basins
The basin of attraction of the complex Newton's method was first considered by Cayley [14]. The purpose of this section is to use this graphical tool for showing the basins of the different methods. In order to view the basins of attraction for complex functions, we make use of the efficient programming package Mathematica [15] using double-precision arithmetic. We take a rectangular region of the complex plane and assign light-to-dark colors (based on the number of iterations) to each seed point according to the simple zero at which the corresponding iterative method, starting from that seed, converges. See, for more details, [16, 17].
The Julia set will be denoted by white-like colors. In this section, we take as stopping criterion for convergence that the residual falls below a fixed tolerance, with a maximum of 30 iterations and a grid of 400 × 400 points. In fact, the colors we used are based on Figure 1.
We compare the results of Steffensen's method, the third-order method of Jain, and the quartic method (3) for two values of the free parameter in Figures 2, 3, 4, 5, and 6 for five polynomials with simple solutions. We do not include the optimal eighth-order methods, since their high order gives them very large basins of attraction.
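A minimal sketch of the basin computation described above follows: each seed z0 on a grid is iterated with the parametric Steffensen step z ← z - b f(z)^2 / (f(z + b f(z)) - f(z)) and labeled by the root it reaches together with the number of iterations used. The polynomial p(z) = z^2 - 1, the parameter b = 0.01, and the coarse 41 × 41 grid are illustrative stand-ins for the paper's 400 × 400 Mathematica plots:

```python
ROOTS = [1.0 + 0.0j, -1.0 + 0.0j]  # simple zeros of p(z) = z^2 - 1

def basin_label(z, b=0.01, tol=1e-6, max_iter=30):
    """Return (root index, iterations), or (-1, max_iter) on non-convergence."""
    f = lambda t: t * t - 1.0
    for k in range(max_iter):
        for i, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return i, k
        fz = f(z)
        denom = f(z + b * fz) - fz
        if denom == 0:
            break
        z = z - b * fz * fz / denom  # parametric Steffensen step
    return -1, max_iter

# Seeds on [-2, 2] x [-2, 2] with step 0.1; each entry is (label, count).
grid = [basin_label(complex(x, y) / 10.0)
        for y in range(-20, 21) for x in range(-20, 21)]
```

Mapping the labels to colors and shading by the iteration count reproduces, in miniature, the kind of pictures shown in Figures 2-6; non-converged points (label -1) correspond to the red/white chaotic regions.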
As was stated in [18–20], known derivative-free schemes do not satisfy a scaling theorem, so dynamical conclusions drawn on a set of polynomials cannot be extended to other polynomials of the same degree; they are only particular cases. Indeed, comparing the behavior of the methods analyzed in those papers with the dynamical planes obtained in this paper, it is clear that the introduction of the free parameter plays an important role in the analysis.
Note that imposing tighter conditions in our codes may produce pictures of even higher quality. In Figures 2–6, lighter blue areas indicate a low number of iterations, darker blue areas require more iterations to converge, and red areas mean no convergence or that a huge number of iterations is needed.
Based on Figures 2–6, we can see that method (3) with the smaller parameter value is the best method in terms of less chaotic behavior in obtaining the solutions. It also has the largest basins for the solutions and is faster than the other methods. This clearly shows the significance of the free nonzero parameter: whenever its magnitude is smaller (closer to zero), larger basins along with less chaotic behavior are attained.
In order to summarize these results, we have attached a weight to the quality of the fractals obtained by each method: a weight of 1 for the smallest Julia set and a weight of 4 for the scheme with the most chaotic behavior. We then averaged those results, so that the smallest value identifies the best method overall and the highest the worst. These data are presented in Table 4. The results show that (3) with the smaller parameter value is the best one.
5. Concluding Remarks
Many problems in scientific computing can be formulated in terms of finding zeros of nonlinear equations. This is the reason why solving nonlinear equations or systems is important. In this work, we have presented some novel schemes of fourth- and eighth-order convergence. The fourth-order derivative-free methods possess 1.587 as their efficiency index, and the eighth-order derivative-free methods have 1.682 as theirs. Per full cycle, the proposed techniques are free from derivative calculation. We have also given the fractal behavior of some of the derivative-free methods along with some numerical tests to clearly show the acceptable behavior of the new schemes. We have concluded that the free nonzero parameter has a very important effect on the convergence radius and the speed of convergence of Steffensen-type methods. With-memory extensions of the obtained fourth- and eighth-order families could be considered in future studies.
Acknowledgments
The authors sincerely thank the two referees for their fruitful suggestions and corrections which led to the improved version of the present paper. The research of the first author (Farahnaz Soleimani) is financially supported by “Roudehen Branch, Islamic Azad University, Roudehen, Iran.”
References
[1] A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, "A family of derivative-free methods with high order of convergence and its application to nonsmooth equations," Abstract and Applied Analysis, vol. 2012, Article ID 836901, 15 pages, 2012.
[2] F. Soleymani and S. Shateyi, "Two optimal eighth-order derivative-free classes of iterative methods," Abstract and Applied Analysis, vol. 2012, Article ID 318165, 14 pages, 2012.
[3] A. Iliev and N. Kyurkchiev, Methods in Numerical Analysis: Selected Topics in Numerical Analysis, LAP LAMBERT Academic Publishing, 2010.
[4] A. T. Tiruneh, W. N. Ndlela, and S. J. Nkambule, "A three point formula for finding roots of equations by the method of least squares," Journal of Applied Mathematics and Bioinformatics, vol. 2, pp. 213–233, 2012.
[5] B. H. Dayton, T.-Y. Li, and Z. Zeng, "Multiple zeros of nonlinear systems," Mathematics of Computation, vol. 80, no. 276, pp. 2143–2168, 2011.
[6] J. F. Traub, Iterative Methods for the Solution of Equations, Chelsea Publishing, London, UK, 2nd edition, 1982.
[7] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery, vol. 21, pp. 643–651, 1974.
[8] F. Soleymani, S. K. Vanani, and A. Afghani, "A general three-step class of optimal iterations for nonlinear equations," Mathematical Problems in Engineering, vol. 2011, Article ID 469512, 10 pages, 2011.
[9] F. Soleymani, "Optimized Steffensen-type methods with eighth-order convergence and high efficiency index," International Journal of Mathematics and Mathematical Sciences, vol. 2012, Article ID 932420, 18 pages, 2012.
[10] P. Jain, "Steffensen type methods for solving nonlinear equations," Applied Mathematics and Computation, vol. 194, no. 2, pp. 527–533, 2007.
[11] S. Wagon, Mathematica in Action, Springer, Berlin, Germany, 3rd edition, 2010.
[12] F. Soleymani, "An efficient twelfth-order iterative method for finding all the solutions of nonlinear equations," Journal of Computational Methods in Sciences and Engineering, 2012.
[13] S. K. Rahimian, F. Jalali, J. D. Seader, and R. E. White, "A new homotopy for seeking all real roots of a nonlinear equation," Computers and Chemical Engineering, vol. 35, no. 3, pp. 403–411, 2011.
[14] A. Cayley, "The Newton-Fourier imaginary problem," American Journal of Mathematics, vol. 2, article 97, 1879.
[15] M. Trott, The Mathematica Guidebook for Numerics, Springer, New York, NY, USA, 2006.
[16] M. L. Sahari and I. Djellit, "Fractal Newton basins," Discrete Dynamics in Nature and Society, vol. 2006, Article ID 28756, 16 pages, 2006.
[17] J. L. Varona, "Graphic and numerical comparison between iterative methods," The Mathematical Intelligencer, vol. 24, no. 1, pp. 37–46, 2002.
[18] F. Chicharro, A. Cordero, J. M. Gutiérrez, and J. R. Torregrosa, "Complex dynamics of derivative-free methods for nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 12, pp. 7023–7035, 2013.
[19] J. M. Gutiérrez, M. A. Hernández, and N. Romero, "Dynamics of a new family of iterative processes for quadratic polynomials," Journal of Computational and Applied Mathematics, vol. 233, no. 10, pp. 2688–2695, 2010.
[20] S. Artidiello, F. Chicharro, A. Cordero, and J. R. Torregrosa, "Local convergence and dynamical analysis of a new family of optimal fourth-order iterative methods," International Journal of Computer Mathematics, 2013.
Copyright
Copyright © 2013 Farahnaz Soleimani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.