Discrete Dynamics in Nature and Society
Volume 2015, Article ID 938606, 7 pages
http://dx.doi.org/10.1155/2015/938606
Research Article

Two Bi-Accelerator Improved with Memory Schemes for Solving Nonlinear Equations

J. P. Jaiswal

Department of Mathematics, Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh 462051, India

Received 16 October 2014; Accepted 27 December 2014

Academic Editor: Giuseppe Izzo

Copyright © 2015 J. P. Jaiswal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The present paper is devoted to improving the R-order of convergence of the with-memory derivative-free methods presented by Lotfi et al. (2014) without any new function evaluation. To achieve this aim, one more self-accelerating parameter is inserted, which is calculated with the help of Newton's interpolatory polynomial. It is first proved theoretically that the R-orders of convergence of the proposed schemes increase from 6 to 7 and from 12 to 14, respectively, without any extra evaluation. Smooth as well as nonsmooth examples are discussed to confirm the theoretical results and the superiority of the proposed schemes.

1. Introduction

Finding the root of a nonlinear equation frequently occurs in scientific computation. Newton's method is the best-known method for solving nonlinear equations and has quadratic convergence. However, the existence of the derivative in a neighborhood of the required root is compulsory for the convergence of Newton's method, which restricts its applications in practice. To overcome this difficulty, Steffensen replaced the first derivative of the function in Newton's iterate by a forward finite-difference approximation. This method also possesses quadratic convergence and the same efficiency as Newton's method. Kung and Traub were pioneers in constructing optimal general multistep methods without memory. Moreover, they conjectured that any multistep method without memory using n function evaluations may reach a convergence order of at most 2^(n-1) [1]. Thus both Newton's and Steffensen's methods are optimal in the sense of Kung and Traub. But the superiority of Steffensen's method over Newton's method is that it is derivative free, so it can also be applied to nondifferentiable equations. To compare iterative methods theoretically, Ostrowski [2] introduced the idea of the efficiency index, given by E = p^(1/n), where p is the order of convergence and n is the number of function evaluations per iteration. In other words, an iterative method with a higher efficiency index is more efficient. To improve the convergence order as well as the efficiency index without adding any new function evaluations, Traub in his book introduced the method with memory. In fact, he changed Steffensen's method slightly, as follows (see [3, pp. 185–187]).
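Steffensen's replacement of the derivative by a forward difference can be sketched as follows (an illustrative sketch, not code from the paper; the test function and starting point are chosen only for demonstration):

```python
# Steffensen's derivative-free iteration: f'(x_k) in Newton's method is
# replaced by the forward-difference approximation
#   f'(x_k) ≈ (f(x_k + f(x_k)) - f(x_k)) / f(x_k),
# so only function values are needed.

def steffensen(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx          # forward-difference numerator
        if denom == 0:
            break
        x = x - fx * fx / denom         # x_{k+1} = x_k - f(x_k)^2 / (f(x_k + f(x_k)) - f(x_k))
    return x

# Example: the simple root of f(x) = x^2 - 2 near x0 = 1.5
root = steffensen(lambda x: x * x - 2.0, 1.5)
```

Because only function values appear, the same iteration applies unchanged to nondifferentiable equations.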

Given suitable x_0 and γ_0, the iteration reads

w_k = x_k + γ_k f(x_k),  x_{k+1} = x_k - f(x_k)/f[x_k, w_k],  γ_{k+1} = -1/f[x_k, x_{k+1}],  k = 0, 1, 2, …, (1)

where f[·,·] denotes the divided difference. The parameter γ_k is called a self-accelerator, and method (1) has R-order of convergence 2.414. The possibility of increasing the convergence order further by using more suitable parameters cannot be denied. Over the last few years many researchers have tried to construct iterative methods without memory which support the Kung and Traub conjecture; see, for example, [4–13]. Although the construction of optimal methods without memory is still an active field, over the last year many authors have shifted their attention to developing more efficient methods with memory.
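A minimal sketch of the self-accelerating idea in Traub's method (1), assuming the standard update γ_{k+1} = -1/f[x_k, x_{k+1}] (the concrete formulas here are reconstructions of the classical scheme, not taken verbatim from the text):

```python
# Self-accelerating Steffensen variant: the free parameter gamma is updated
# each step from the two most recent iterates, which lifts the R-order from
# 2 to 1 + sqrt(2) ≈ 2.414 without any extra function evaluation.

def traub_steffensen(f, x0, gamma0=-0.01, tol=1e-14, max_iter=50):
    x, gamma = x0, gamma0
    fx = f(x)
    for _ in range(max_iter):
        if abs(fx) < tol:
            break
        w = x + gamma * fx                     # w_k = x_k + gamma_k f(x_k)
        dd = (f(w) - fx) / (w - x)             # divided difference f[x_k, w_k]
        x_new = x - fx / dd                    # Steffensen-type step
        if x_new == x:                         # fully converged in floating point
            break
        fx_new = f(x_new)
        gamma = -(x_new - x) / (fx_new - fx)   # gamma_{k+1} = -1/f[x_k, x_{k+1}]
        x, fx = x_new, fx_new
    return x

# Example: the real root of f(x) = x^3 - x - 1 near x0 = 1.5
root = traub_steffensen(lambda x: x ** 3 - x - 1.0, 1.5)
```

The update costs nothing extra: it reuses the already computed values f(x_k) and f(x_{k+1}).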

In the convergence analysis of the new methods, we employ the notation used in Traub's book [3]: if {f_k} and {g_k} are null sequences and f_k/g_k → C, where C is a nonzero constant, we will write f_k = O(g_k) or f_k ~ C g_k. We also use the concept of R-order of convergence introduced by Ortega and Rheinboldt [14]. Let {x_k} be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a zero ζ of the function f with R-order at least r, we will write e_{k+1} ~ D_{k,r} e_k^r, where e_k = x_k - ζ and D_{k,r} tends to the asymptotic error constant of the iterative method (IM) when k → ∞.

The rest of the paper is organized as follows. In Section 2 we describe the existing two- and three-point with-memory derivative-free schemes and then accelerate their convergence orders from six to seven and from twelve to fourteen, respectively, without any extra evaluation. The proposed methods are obtained by imposing one more suitable iterative parameter, which is calculated using Newton's interpolatory polynomial.

The numerical study is also presented in the next section to confirm the theoretical results. Finally, we give the concluding remarks.

2. Brief Literature Review and Improving with Memory Schemes

The two-step (double) and three-step (triple) Newton methods can, respectively, be written as

y_k = x_k - f(x_k)/f'(x_k),  x_{k+1} = y_k - f(y_k)/f'(y_k),  (3)

y_k = x_k - f(x_k)/f'(x_k),  z_k = y_k - f(y_k)/f'(y_k),  x_{k+1} = z_k - f(z_k)/f'(z_k).  (4)

The orders of convergence of schemes (3) and (4) rise to four and eight, respectively, but neither improves on the efficiency index of the original Newton method. One major drawback of these schemes is that they also involve derivatives. To obtain schemes that are both efficient and derivative free, Lotfi et al. [15] approximated the derivatives appearing in (3) and (4) by Lagrange interpolatory polynomials of degrees one, two, and three, respectively; the resulting modified versions of schemes (3) and (4) are the methods labeled (7) and (8). The authors showed that the without-memory methods (7) and (8) preserve the convergence orders four and eight with a reduced number of function evaluations, with corresponding error expressions (9) and (10). These two without-memory schemes are optimal in the sense of Kung and Traub.

Traub first showed that the convergence of without-memory methods can be increased, without adding any evaluation, by using information from the current and previous iterations; such methods are known as with-memory methods. To obtain an increased order of convergence, the authors of the same paper [15] replaced the fixed parameter by a self-accelerating one updated at every step: ideally it equals -1/f'(ζ), where ζ is the exact root, and since ζ is unknown it is approximated using Newton's interpolatory polynomials of degrees three and four for methods (7) and (8), respectively. Here a single prime denotes the first derivative and a double prime will later denote the second derivative. The resulting one-parametric with-memory versions of methods (7) and (8) are (14) and (15). The authors showed that the convergence orders of methods (14) and (15) increase from 4 to 6 and from 8 to 12, respectively. The aim of this paper is to find still more efficient methods using the same number of evaluations.
For this purpose we introduce one more iterative parameter into the above methods; the modified with-memory methods are given by (16) and (18), together with their respective error expressions. Since these error equations contain both iterative parameters, we should approximate the parameters in such a way that they increase the convergence order. To this end, we approximate the two parameters as follows.

For method (16) the parameters are computed by (20), and for method (18) by (21), where the interpolating polynomials involved are Newton's interpolatory polynomials of degrees three, four, four, and five, respectively. We also write e_k = x_k - ζ, where ζ is the exact root. Before proving the main result, we state the following two lemmas, which can be obtained by using the error expression of Newton's interpolation, in the same manner as in [16].

Lemma 1. If and , then the estimates (i), (ii) hold.

Lemma 2. If and , then the estimates (i), (ii) hold.
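The divided-difference machinery behind these Newton-interpolation estimates can be sketched as follows. A Newton interpolatory polynomial is built through recently computed nodes, and its first and second derivatives at the newest node supply the quantities used in updates such as (20) and (21). The helper names below (`newton_coeffs`, `poly_derivs`) are illustrative, not from the paper:

```python
def newton_coeffs(xs, fs):
    """Divided-difference coefficients of Newton's interpolatory polynomial."""
    n = len(xs)
    coef = list(fs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef  # coef[k] = f[x_0, ..., x_k]

def poly_derivs(xs, coef, t):
    """Value, first, and second derivative of the Newton form at t."""
    p, dp, d2p = coef[-1], 0.0, 0.0
    for k in range(len(xs) - 2, -1, -1):      # Horner-like backward recurrence
        d2p = d2p * (t - xs[k]) + 2.0 * dp    # uses previous dp
        dp = dp * (t - xs[k]) + p             # uses previous p
        p = p * (t - xs[k]) + coef[k]
    return p, dp, d2p

# Sanity check: four nodes reproduce the cubic x^3 exactly
xs = [0.0, 1.0, 2.0, 3.0]
coef = newton_coeffs(xs, [x ** 3 for x in xs])
p, dp, d2p = poly_derivs(xs, coef, 2.0)       # exact values: 8, 12, 12
```

The backward recurrence evaluates the nested Newton form and differentiates it on the fly, so no symbolic expansion of the polynomial is ever needed.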

The theoretical proof of the order of convergence of the proposed methods is given by the following theorem.

Theorem 3. If an initial approximation is sufficiently close to a simple zero of f and the parameters in the iterative schemes (16) and (18) are recursively calculated by the forms given in (20) and (21), respectively, then the R-orders of convergence of the with-memory schemes (16) and (18) are at least seven and fourteen, respectively.

Proof. First, we assume that the R-orders of convergence of the sequences involved are at least certain exponents, to be determined below. Hence
Similarly for the remaining sequences. We now prove the result in two parts: first for method (16) and then for method (18).
Modified Method I. For method (16), it can be derived that
Using the results of Lemma 1 in (28), we obtain (29)–(31). Comparing the equal powers of the error in the three pairs (29)-(25), (30)-(26), and (31)-(24), we get the following nonlinear system. After solving these equations, we find that the R-order of convergence of method (16) is at least seven, which proves the first part.
Modified Method II. For method (18), it can be derived that (33) holds. Now using the results of Lemma 2 in (33), we have
Comparing the equal powers of the error in the four pairs (34)-(25), (35)-(26), (36)-(27), and (37)-(24), we get the following nonlinear system. After solving these equations, we find that the R-order of convergence of method (18) is at least fourteen, and thus the proof is completed.
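The power-comparison technique used in this proof can be illustrated on the simpler Traub scheme (1): its error recurrence behaves like e_{k+1} ~ C e_k^2 e_{k-1}, so substituting e_{k+1} ~ e_k^r gives r = 2 + 1/r, i.e. r^2 - 2r - 1 = 0, whose positive root is 1 + √2 ≈ 2.414, the R-order quoted earlier. A numerical sketch (the seed errors are illustrative):

```python
from math import log, sqrt

# Iterate the error recurrence e_{k+1} = e_k^2 * e_{k-1} in log form
# (log-errors avoid floating-point underflow) and watch the exponent
# ratio log(e_{k+1}) / log(e_k) approach the R-order 1 + sqrt(2).

L_prev, L = log(1e-1), log(1e-2)    # logs of two seed errors
for _ in range(6):
    L_prev, L = L, 2 * L + L_prev   # log of e_{k+1} = e_k^2 e_{k-1}

order_estimate = L / L_prev          # tends to 1 + sqrt(2) ≈ 2.414
```

The same fixed-point argument, applied to the larger systems above, yields the exponents 7 and 14.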

Note 1. The efficiency index of the proposed method (16) along with (20) is 7^(1/3) ≈ 1.913, which is more than the 6^(1/3) ≈ 1.817 of method (14).

Note 2. The efficiency index of the proposed method (18) along with (21) is 14^(1/4) ≈ 1.934, which is more than the 12^(1/4) ≈ 1.861 of method (15).
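These indices follow directly from E = p^(1/n); a quick numerical check, assuming n = 3 function evaluations per iteration for the two-step schemes and n = 4 for the three-step schemes (standard for these optimal derivative-free families):

```python
# Efficiency index E = p**(1/n): p = R-order, n = function evaluations
# per iteration (3 for two-step schemes, 4 for three-step schemes).
pairs = {
    "method (16), order 7":  7 ** (1 / 3),   # ≈ 1.913
    "method (14), order 6":  6 ** (1 / 3),   # ≈ 1.817
    "method (18), order 14": 14 ** (1 / 4),  # ≈ 1.934
    "method (15), order 12": 12 ** (1 / 4),  # ≈ 1.861
}
```

Both proposed methods therefore beat their predecessors at identical evaluation cost.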

3. Numerical Examples and Conclusion

In this section the proposed derivative-free methods are applied to solve smooth as well as nonsmooth nonlinear equations and are compared with the existing with-memory methods. High-order methods matter because numerical applications increasingly use high-precision computation; for this reason the numerical tests have been carried out with variable-precision arithmetic in MATHEMATICA 8 using 700 significant digits. The computational order of convergence (COC) is defined by [17, 18]

COC ≈ ln|(x_{k+1} - x_k)/(x_k - x_{k-1})| / ln|(x_k - x_{k-1})/(x_{k-1} - x_{k-2})|.

To test the performance of the new methods, we consider three nonlinear functions, (1), (2), and (3), taken from [5, 15]. The absolute errors for the first three iterations are given in Table 1. Note that although a large number of three-step derivative-free methods (with and without memory) are available in the literature, methods that have been tested on nonsmooth functions are rare, which underlines the significance of this paper.
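The COC can be estimated from the last four iterates using the standard derivative-free formula from [17, 18]; a sketch, assuming ordinary double precision rather than the 700-digit arithmetic used in the paper:

```python
from math import log

# Approximate computational order of convergence (COC) from four
# successive iterates x0..x3, using successive differences in place
# of the exact (unknown) errors.

def coc(x0, x1, x2, x3):
    return (log(abs((x3 - x2) / (x2 - x1)))
            / log(abs((x2 - x1) / (x1 - x0))))

# Synthetic quadratically convergent iterates toward the root 0
# (errors 1e-1, 1e-2, 1e-4, 1e-8): the estimate should be close to 2.
rho = coc(1e-1, 1e-2, 1e-4, 1e-8)
```

In practice the estimate stabilizes only once the iterates are close enough to the root, which is why high-precision arithmetic is used for the tabulated results.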

Table 1: Comparison of the absolute error in first, second, and third iteration.

The effectiveness of the newly proposed derivative-free with-memory methods is confirmed by comparing them with the existing with-memory family. The numerical results shown in Table 1 are in concordance with the theory developed here. From the theoretical results we conclude that the order of convergence of the existing with-memory family can be raised further by imposing one more self-accelerating parameter without any additional calculations, and that the computational efficiency of the presented with-memory methods is higher. The R-orders of convergence increase from 6 to 7 and from 12 to 14, in accordance with the accelerating technique proposed in this paper. The self-accelerating parameters thus play a key role in increasing the order of convergence of the iterative methods.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author is grateful to the editor and the reviewers for their significant suggestions for the improvement of the paper.

References

  1. H. T. Kung and J. F. Traub, “Optimal order of one-point and multipoint iteration,” Journal of the Association for Computing Machinery, vol. 21, pp. 643–651, 1974.
  2. A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, NY, USA, 1960.
  3. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
  4. S. Artidiello, F. Chicharro, A. Cordero, and J. R. Torregrosa, “Local convergence and dynamical analysis of a new family of optimal fourth-order iterative methods,” International Journal of Computer Mathematics, vol. 90, no. 10, pp. 2049–2060, 2013.
  5. M. A. Hafiz and M. S. Bahgat, “Solving nonsmooth equations using family of derivative-free optimal methods,” Journal of the Egyptian Mathematical Society, vol. 21, no. 1, pp. 38–43, 2013.
  6. J. P. Jaiswal, “Some class of third- and fourth-order iterative methods for solving nonlinear equations,” Journal of Applied Mathematics, vol. 2014, Article ID 817656, 17 pages, 2014.
  7. J. R. Sharma and R. K. Guha, “Second-derivative free methods of third and fourth order for solving nonlinear equations,” International Journal of Computer Mathematics, vol. 88, no. 1, pp. 163–170, 2011.
  8. A. Cordero and J. R. Torregrosa, “A class of Steffensen type methods with optimal order of convergence,” Applied Mathematics and Computation, vol. 217, no. 19, pp. 7653–7659, 2011.
  9. C. Chun and B. Neta, “An analysis of a new family of eighth-order optimal methods,” Applied Mathematics and Computation, vol. 245, pp. 86–107, 2014.
  10. M. A. Hafiz, “Solving nonlinear equations using Steffensen-type methods with optimal order of convergence,” Palestine Journal of Mathematics, vol. 3, no. 1, pp. 113–119, 2014.
  11. Y. I. Kim, “A triparametric family of three-step optimal eighth-order methods for solving nonlinear equations,” International Journal of Computer Mathematics, vol. 89, no. 8, pp. 1051–1059, 2012.
  12. J. R. Sharma and H. Arora, “An efficient family of weighted-Newton methods with optimal eighth order convergence,” Applied Mathematics Letters, vol. 29, pp. 1–6, 2014.
  13. A. Singh and J. P. Jaiswal, “An efficient family of optimal eighth-order iterative methods for solving nonlinear equations and its dynamics,” Journal of Mathematics, vol. 2014, Article ID 569719, 14 pages, 2014.
  14. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
  15. T. Lotfi, F. Soleymani, Z. Noori, A. Kılıçman, and F. Khaksar Haghani, “Efficient iterative methods with and without memory possessing high efficiency indices,” Discrete Dynamics in Nature and Society, vol. 2014, Article ID 912796, 9 pages, 2014.
  16. J. Džunić, “On efficient two-parameter methods for solving nonlinear equations,” Numerical Algorithms, vol. 63, no. 3, pp. 549–569, 2013.
  17. M. S. Petković, “Remarks on ‘On a general class of multipoint root-finding methods of high computational efficiency’,” SIAM Journal on Numerical Analysis, vol. 49, no. 3, pp. 1317–1319, 2011.
  18. M. S. Petković, B. Neta, L. D. Petković, and J. Džunić, Multipoint Methods for Solving Nonlinear Equations, Elsevier, New York, NY, USA, 2012.