Mathematical Problems in Engineering
Volume 2014, Article ID 795628, 6 pages
http://dx.doi.org/10.1155/2014/795628
Research Article

Solving Nondifferentiable Nonlinear Equations by New Steffensen-Type Iterative Methods with Memory

J. P. Jaiswal

Department of Mathematics, Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh 462051, India

Received 5 August 2014; Revised 15 November 2014; Accepted 26 November 2014; Published 21 December 2014

Academic Editor: Alessandro Palmeri

Copyright © 2014 J. P. Jaiswal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents two derivative-free Steffensen-type methods with memory for solving nonlinear equations. By introducing a suitable self-accelerator parameter into existing optimal fourth- and eighth-order methods without memory, the order of convergence is increased without any extra function evaluation; consequently, the efficiency index also increases, which is the main contribution of this paper. The self-accelerator parameters are estimated using Newton's interpolation. To show the applicability of the proposed methods, some numerical illustrations are presented.

1. Introduction

Finding the root of a nonlinear equation occurs frequently in scientific computation. Newton's method is the best-known method for solving nonlinear equations and has quadratic convergence. However, the existence of the derivative in a neighborhood of the required root is compulsory for the convergence of Newton's method, which restricts its applications in practice. To overcome this difficulty, Steffensen replaced the first derivative of the function in Newton's iterate by the forward finite difference approximation

$$f'(x_k) \approx \frac{f\bigl(x_k + f(x_k)\bigr) - f(x_k)}{f(x_k)}.$$

This method also possesses quadratic convergence and the same efficiency as Newton's method. Some nice applications of iterative methods can be found in the literature; one can see [1–8].
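For illustration, the following minimal Python sketch implements Steffensen's iteration as described above; the function name, tolerance, and iteration cap are illustrative choices, not part of the original text:

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free iteration: Newton's method with
    f'(x) replaced by the forward difference (f(x + f(x)) - f(x)) / f(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # forward finite-difference approximation of f'(x)
        d = (f(x + fx) - fx) / fx
        x = x - fx / d
    return x

# usage on a smooth test function
root = steffensen(lambda t: t**3 - t - 2.0, 1.5)
print(root)  # ~1.5213797...
```

No derivative of f is ever evaluated, which is exactly why the scheme extends to nondifferentiable equations.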

Kung and Traub pioneered the construction of optimal general multistep methods without memory. They discussed two general $n$-step methods based on interpolation. Moreover, they conjectured that any multistep method without memory using $n$ function evaluations per iteration may reach a convergence order of at most $2^{n-1}$ [9]. Thus, both Newton's and Steffensen's methods are optimal in the sense of Kung and Traub. The superiority of Steffensen's method over Newton's method is that it is derivative-free, so it can also be applied to nondifferentiable equations. To compare iterative methods theoretically, Ostrowski [10] introduced the idea of the efficiency index, given by $E = p^{1/n}$, where $p$ is the order of convergence and $n$ is the number of function evaluations per iteration. In other words, an iterative method with a higher efficiency index is more efficient. To improve the convergence order as well as the efficiency index without any new function evaluations, Traub introduced in his book the method with memory. In fact, he changed Steffensen's method slightly as follows (see [11, pp. 185–187]):

$$w_k = x_k + \gamma_k f(x_k), \qquad x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \qquad \gamma_{k+1} = -\frac{1}{f[x_k, w_k]}, \tag{1}$$

where $x_0$ and $\gamma_0$ are given suitably and $f[x_k, w_k] = \bigl(f(w_k) - f(x_k)\bigr)/(w_k - x_k)$ denotes a divided difference. The parameter $\gamma_k$ is called a self-accelerator, and method (1) has $R$-order of convergence $2.414$. The possibility of increasing the convergence order by using more suitable parameters cannot be denied. During the last few years, many authors have tried to construct iterative methods without memory that support this conjecture with optimal order; one can see [12–21] and many more. Although the construction of optimal methods without memory is still an active field, authors have recently been shifting their attention to developing more efficient methods with memory; for example, see [22–25].
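The following Python sketch shows the self-accelerating idea behind method (1); the starting value gamma0 = -0.01 and the stopping rule are illustrative assumptions:

```python
def traub_steffensen_memory(f, x0, gamma0=-0.01, tol=1e-12, max_iter=50):
    """Steffensen's method with memory in the spirit of Traub [11]:
    the parameter gamma is recomputed from the latest divided difference,
    which raises the R-order from 2 to about 2.414 (= 1 + sqrt(2))
    with no extra function evaluations per step."""
    x, gamma = x0, gamma0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx             # w_k = x_k + gamma_k f(x_k)
        dd = (f(w) - fx) / (w - x)     # divided difference f[x_k, w_k]
        x = x - fx / dd                # Steffensen step
        gamma = -1.0 / dd              # self-accelerating update (free of cost)
    return x
```

Since $f[x_k, w_k] \to f'(\alpha)$, the update drives $1 + \gamma_k f'(\alpha) \to 0$, which is precisely the factor in the error equation that the acceleration exploits.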

In the convergence analysis of the new methods, we employ the notation used in Traub's book [11]: if $\{f_k\}$ and $\{g_k\}$ are null sequences and $f_k/g_k \to C$, where $C$ is a nonzero constant, we will write $f_k = O(g_k)$ or $f_k \sim C g_k$. We also use the concept of $R$-order of convergence introduced by Ortega and Rheinboldt [5]. Let $\{x_k\}$ be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a zero $\alpha$ of the function $f$ with $R$-order at least $r$, we will write

$$e_{k+1} \sim D_{k,r}\, e_k^{\,r}, \qquad e_k = x_k - \alpha,$$

where $D_{k,r}$ tends to the asymptotic error constant $D_r$ of the iterative method (IM) when $k \to \infty$.

The rest of the paper is organized as follows. In Section 2, we describe the existing two- and three-point Steffensen-type iterative schemes and then accelerate their convergence orders from four to six and from eight to twelve, respectively, without any extra evaluation. The proposed methods use previously computed values together with a suitable parameter, which is calculated by Newton's interpolation. A numerical study is presented in Section 3 to confirm the theoretical results. Finally, we give concluding remarks.

2. Development and Construction with Memory

The two-step and three-step repeated Newton's methods are given by

$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)} \tag{3}$$

and

$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = y_n - \frac{f(y_n)}{f'(y_n)}, \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}. \tag{4}$$

The orders of convergence of methods (3) and (4) are four and eight, respectively. By repeating in the same way, one can obtain still higher-order methods. These methods certainly have higher convergence order than the standard Newton's method, but there is no improvement in the efficiency index. For example, the efficiency indexes of the above two methods are $4^{1/4} \approx 1.414$ and $8^{1/6} \approx 1.414$, which are the same as the efficiency index of the original Newton's method, $2^{1/2} \approx 1.414$. To improve the efficiency index, Cordero et al. [26] reduced the number of function evaluations by approximating the derivatives in terms of function values. First, they approximated $f'(x_n)$ by the forward difference

$$f'(x_n) \approx f[x_n, w_n] = \frac{f(w_n) - f(x_n)}{w_n - x_n}, \qquad w_n = x_n + f(x_n). \tag{5}$$

Then, to approximate the other two derivatives, they used rational approximations: $f'(y_n)$ is approximated by the derivative of a rational approximation of the first degree and $f'(z_n)$ by that of a rational approximation of the second degree, which leads to the approximations (6), where the quantities involved are defined in (7). By using these approximations of the derivatives in (3) and (4), they showed that the resulting methods have the same orders of convergence with a reduced number of function evaluations, and thus the efficiency index increases: it becomes $4^{1/3} \approx 1.587$ and $8^{1/4} \approx 1.682$, respectively. For more detail, one can see [26]. A further advantage of these methods is that they can also be applied to nonsmooth functions. Now a natural question arises: is it possible to find still more efficient methods? The main aim of this paper is to answer this question.
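For reference, a direct Python transcription of the repeated-Newton schemes (3) and (4); the derivative fp must be supplied explicitly here, which is exactly the cost the derivative-free variants avoid:

```python
def two_step_newton(f, fp, x, iters=5):
    """Scheme (3): two Newton substeps per iteration; order four at the
    cost of four evaluations (f and f' at both x_n and y_n),
    so the efficiency index is 4**(1/4) ~ 1.414."""
    for _ in range(iters):
        y = x - f(x) / fp(x)
        x = y - f(y) / fp(y)
    return x

def three_step_newton(f, fp, x, iters=4):
    """Scheme (4): three Newton substeps per iteration; order eight at
    the cost of six evaluations, so the index is 8**(1/6) ~ 1.414."""
    for _ in range(iters):
        y = x - f(x) / fp(x)
        z = y - f(y) / fp(y)
        x = z - f(z) / fp(z)
    return x
```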

For this purpose, we first introduce a nonzero real parameter $\gamma$ in $w_n = x_n + \gamma f(x_n)$ and consider the following two methods together with their error expressions.

Method I. For suitably given $x_0$ and $\gamma$, replacing the derivatives in (3) by the approximations described above, with $w_n = x_n + \gamma f(x_n)$, gives the two-step scheme (8), and its error expression is given by (9).
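Since the display of scheme (8) is not reproduced here, the sketch below shows a representative two-step Steffensen-type method of this family: the forward difference (5) with the parameter $\gamma$, together with one commonly used first-degree divided-difference approximation of $f'(y_n)$; the exact corrector of (8) is the one derived in [26], so this block is an assumption-labeled stand-in rather than the paper's own formula:

```python
def two_step_steffensen_type(f, x, gamma=-0.01, iters=6):
    """A representative derivative-free two-step scheme: order four with
    only three function evaluations f(x_n), f(w_n), f(y_n) per step.
    f'(y) ~ f[x,y] * f[y,w] / f[x,w] is a standard first-degree choice;
    see [26] for the exact variant used in scheme (8)."""
    for _ in range(iters):
        fx = f(x)
        w = x + gamma * fx
        fw = f(w)
        fxw = (fw - fx) / (w - x)        # f[x_n, w_n], cf. (5)
        y = x - fx / fxw                 # Steffensen-type predictor
        fy = f(y)
        fxy = (fy - fx) / (y - x)        # f[x_n, y_n]
        fyw = (fw - fy) / (w - y)        # f[y_n, w_n]
        x = y - fy * fxw / (fxy * fyw)   # corrector with approximated f'(y_n)
    return x
```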

Method II. For suitably given $x_0$ and $\gamma$, the corresponding three-step scheme obtained from (4) is (10), and its error expression is given by (11), where the quantities involved are defined as in (7).

Now we are concerned with the extension of the above schemes to methods with memory, since their error equations contain the parameter $\gamma$, which can be approximated in such a way that the local order of convergence increases. For this purpose, we put $\gamma = \gamma_n$ in (8) and (10), where

$$\gamma_n = \frac{-1}{N_3'(x_n)} \tag{12}$$

and

$$\gamma_n = \frac{-1}{N_4'(x_n)}, \tag{13}$$

respectively. Here, $N_3'(x_n)$ and $N_4'(x_n)$ are two different approximations of $f'(\alpha)$, where $N_3$ and $N_4$ are Newton's interpolatory polynomials of degrees three and four, respectively, built on the approximations available from the current and previous iterations. Now, the theoretical order of convergence of the methods is given by the following theorem.
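A sketch of how the self-accelerator can be computed in practice: build the divided-difference table of Newton's interpolating polynomial through the best available approximations and differentiate it at the newest point. The particular choice of nodes shown in the trailing comment is the usual one for the degree-three case and is an assumption here, not a statement of the paper's exact node set:

```python
def newton_poly_derivative(nodes, fvals, t):
    """Derivative at t of Newton's interpolating polynomial through the
    points (nodes[i], fvals[i]); used to update the self-accelerator as
    gamma_n = -1 / N'_m(x_n), cf. (12) and (13)."""
    n = len(nodes)
    # divided-difference table: coef[j] = f[nodes[0], ..., nodes[j]]
    coef = list(fvals)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (nodes[i] - nodes[i - j])
    # differentiate N(t) = sum_j coef[j] * prod_{i<j} (t - nodes[i])
    # term by term with the product rule
    deriv = 0.0
    for j in range(1, n):
        term = 0.0
        for k in range(j):
            prod = 1.0
            for i in range(j):
                if i != k:
                    prod *= t - nodes[i]
            term += prod
        deriv += coef[j] * term
    return deriv

# assumed node set for the degree-three accelerator (12):
# gamma_n = -1.0 / newton_poly_derivative(
#     [x_n, y_prev, w_prev, x_prev],
#     [f(x_n), f(y_prev), f(w_prev), f(x_prev)], x_n)
```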

Theorem 1. If an initial approximation $x_0$ is sufficiently close to a simple zero $\alpha$ of $f$ and the parameters $\gamma_n$ in the iterative schemes (8) and (10) are recursively calculated by the forms given in (12) and (13), respectively, then the $R$-order of convergence of methods (8) and (10) with memory is at least six and twelve, respectively.

Proof. Before proving the main result, we first establish the following two results, which will be used later.
Claim I. Consider the asymptotic estimate (14) for the self-accelerator (12). Claim II. Consider the analogous estimate (15) for the self-accelerator (13). To prove these, suppose that there are $m+1$ nodes $t_0, t_1, \ldots, t_m$ in the interval $[a, b]$, where $a$ is the minimum and $b$ is the maximum of these nodes, respectively. Then, for some $\xi \in (a, b)$, the error of Newton's interpolation polynomial of degree $m$ is given by

$$f(t) - N_m(t) = \frac{f^{(m+1)}(\xi)}{(m+1)!} \prod_{j=0}^{m} (t - t_j). \tag{17}$$

For $m = 3$, the above equation assumes the form (18) (keeping in mind the four interpolation nodes used for $N_3$). Differentiating (18) with respect to $t$ and putting $t = x_n$, we get (19). Expanding the factors of (19) about the root gives (20) and, similarly, (21). Using these relations in (19) and simplifying, we get (22) and thus (23), or equivalently (24), which shows the first part. Similarly, taking $m = 4$ in (17) and proceeding in the above manner, we can prove the second relation. Now, we prove the main result. To do this, we first assume that the $R$-orders of convergence of the sequences $\{x_n\}$, $\{w_n\}$, $\{y_n\}$, and $\{z_n\}$ are at least $p$, $q$, $s$, and $t$, respectively. Hence, we have the error relation (25) and, similarly, (26)–(28).
Method I. For method (8), it can be derived that the errors of $w_n$, $y_n$, and $x_{n+1}$ satisfy (29)–(32). Using the expression of Claim I in (29), we express these relations in powers of $e_{n-1}$. Now, comparing the equal powers of $e_{n-1}$ in the pairs (26)–(30), (27)–(31), and (25)–(32), we get a nonlinear system in $p$, $q$, and $s$. After solving these equations, we get $p = 6$, together with the corresponding auxiliary orders $q$ and $s$, which confirms the sixth-order convergence of method (8) with memory.
Method II. For method (10), it can be derived that the corresponding error relations are (34)–(38). Using the expression of Claim II in (34), we again express these relations in powers of $e_{n-1}$. Comparing the equal powers of $e_{n-1}$ in the pairs (26)–(35), (27)–(36), (28)–(37), and (25)–(38), we get a nonlinear system in $p$, $q$, $s$, and $t$. After solving these equations, we get $p = 12$, together with the corresponding auxiliary orders $q$, $s$, and $t$, and hence we have the second part. Thus, the proof is completed.

3. Application to Nonlinear Equations

In this section, we apply the proposed methods to solve some smooth as well as nonsmooth nonlinear equations and demonstrate the convergence behavior of the methods with and without memory. The numerical computations reported here have been carried out in a Mathematica 8.0 environment. Table 1 shows the absolute value of the difference $|x_n - \alpha|$ between the approximate root $x_n$ and the exact root $\alpha$, where the exact root is computed with 1000 significant digits. To check the theoretical order of convergence, we calculate the computational order of convergence (COC) using the formula

$$\mathrm{COC} \approx \frac{\ln\bigl|(x_{n+1} - \alpha)/(x_n - \alpha)\bigr|}{\ln\bigl|(x_n - \alpha)/(x_{n-1} - \alpha)\bigr|}.$$

For this purpose, we consider three test functions taken from [25, 26]. In the table, abbreviated exponential notation stands for the corresponding power of ten, and a value is marked as indeterminate where it cannot be computed. It is evident from the table that the proposed methods give better results.
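A sketch of the COC computation used to verify the theoretical order, with mpmath standing in for the high-precision Mathematica arithmetic; the iterate history xs and the exact root alpha are assumed to be available from a run of one of the methods above:

```python
from mpmath import mp, log, fabs

mp.dps = 1000  # 1000 significant digits, matching the reported precision

def coc(xs, alpha):
    """Computational order of convergence from the last three iterates:
    COC ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|, where e_k = x_k - alpha."""
    e_old, e_mid, e_new = (fabs(x - alpha) for x in xs[-3:])
    return log(e_new / e_mid) / log(e_mid / e_old)
```

For a method of theoretical order $p$, the returned value should approach $p$ as the iterates close in on the root.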

Table 1: Numerical results for nonlinear equations.

4. Conclusion

In this paper, the efficiency of existing methods has been improved by employing information from the current and previous iterations without any additional function evaluations. The efficiency indexes of the proposed methods are $6^{1/3} \approx 1.817$ and $12^{1/4} \approx 1.861$, which are higher than the efficiency indexes $4^{1/3} \approx 1.587$ and $8^{1/4} \approx 1.682$ of the existing methods, respectively. The proposed methods have also been tested on some numerical examples. The numerical results show that the proposed methods are very useful for finding acceptable approximations of the exact solutions of nonlinear equations.
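The efficiency-index comparison above reduces to the arithmetic $E = p^{1/n}$; a one-line check (labels are ours):

```python
# efficiency index E = p**(1/n): order p with n function evaluations
for name, p, n in [("existing 4th order", 4, 3), ("existing 8th order", 8, 4),
                   ("proposed 6th order", 6, 3), ("proposed 12th order", 12, 4)]:
    print(f"{name}: {p ** (1 / n):.3f}")
# existing 4th order: 1.587   existing 8th order: 1.682
# proposed 6th order: 1.817   proposed 12th order: 1.861
```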

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The author thanks the referees and the editor for their useful technical comments and valuable suggestions to improve the readability of the paper, which led to its significant improvement. The author is also grateful to the National Board for Higher Mathematics, Department of Atomic Energy, Mumbai, India, for sanctioning the research project.

References

1. A. S. Al-Fhaid, S. Shateyi, M. Z. Ullah, and F. Soleymani, “A matrix iteration for finding Drazin inverse with ninth-order convergence,” Abstract and Applied Analysis, vol. 2014, Article ID 137486, 7 pages, 2014.
2. F. Patrulescu, “Steffensen type methods for approximating solutions of differential equations,” Studia Universitatis Babes-Bolyai—Series Informatica, vol. 56, no. 2, pp. 505–513, 2011.
3. F. Soleymani, “On a fast iterative method for approximate inverse of matrices,” Communications of the Korean Mathematical Society, vol. 28, no. 2, pp. 407–418, 2013.
4. F. Soleymani, T. Lotfi, and P. Bakhtiari, “A multi-step class of iterative methods for nonlinear systems,” Optimization Letters, vol. 8, no. 3, pp. 1001–1015, 2014.
5. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
6. J. R. Torregrosa, I. K. Argyros, C. Chun, A. Cordero, and F. Soleymani, “Iterative methods for nonlinear equations or systems and their applications,” Journal of Applied Mathematics, vol. 2013, Article ID 656953, 2 pages, 2013.
7. M. Grau-Sánchez, A. Grau, and M. Noguera, “Ostrowski type methods for solving systems of nonlinear equations,” Applied Mathematics and Computation, vol. 218, no. 6, pp. 2377–2385, 2011.
8. R. L. Burden and J. D. Faires, Numerical Analysis, PWS Publishing Company, Boston, Mass, USA, 2001.
9. H. T. Kung and J. F. Traub, “Optimal order of one-point and multipoint iteration,” Journal of the Association for Computing Machinery, vol. 21, pp. 643–651, 1974.
10. A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, NY, USA, 1960.
11. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New York, NY, USA, 1964.
12. A. Singh and J. P. Jaiswal, “Several new third-order and fourth-order iterative methods for solving nonlinear equations,” International Journal of Engineering Mathematics, vol. 2014, Article ID 828409, 11 pages, 2014.
13. D. Jain, “Families of Newton-like methods with fourth-order convergence,” International Journal of Computer Mathematics, vol. 90, no. 5, pp. 1072–1082, 2013.
14. F. Soleymani, S. K. Khattri, and S. K. Vanani, “Two new classes of optimal Jarratt-type fourth-order methods,” Applied Mathematics Letters, vol. 25, no. 5, pp. 847–853, 2012.
15. J. P. Jaiswal, “Some class of third- and fourth-order iterative methods for solving nonlinear equations,” Journal of Applied Mathematics, vol. 2014, Article ID 817656, 17 pages, 2014.
16. M. A. Hafiz and M. S. Bahgat, “Solving nonsmooth equations using family of derivative-free optimal methods,” Journal of the Egyptian Mathematical Society, vol. 21, no. 1, pp. 38–43, 2013.
17. R. Behl, V. Kanwar, and K. K. Sharma, “Optimal equi-scaled families of Jarratt's method,” International Journal of Computer Mathematics, vol. 90, no. 2, pp. 408–422, 2013.
18. A. Cordero and J. R. Torregrosa, “A class of Steffensen type methods with optimal order of convergence,” Applied Mathematics and Computation, vol. 217, no. 19, pp. 7653–7659, 2011.
19. F. Soleymani, “Efficient optimal eighth-order derivative-free methods for nonlinear equations,” Japan Journal of Industrial and Applied Mathematics, vol. 30, no. 2, pp. 287–306, 2013.
20. J. P. Jaiswal and S. Panday, “An efficient optimal-eighth order iterative method for solving nonlinear equations,” Universal Journal of Computational Mathematics, vol. 1, no. 3, pp. 83–95, 2013.
21. M. A. Hafiz, “Solving nonlinear equations using Steffensen-type methods with optimal order of convergence,” Palestine Journal of Mathematics, vol. 3, no. 1, pp. 113–119, 2014.
22. A. Cordero, T. Lotfi, P. Bakhtiari, and J. R. Torregrosa, “An efficient two-parametric family with memory for nonlinear equations,” Numerical Algorithms, 2014.
23. J. Džunić, “On efficient two-parameter methods for solving nonlinear equations,” Numerical Algorithms, vol. 63, no. 3, pp. 549–569, 2013.
24. J. R. Sharma and P. Gupta, “On some efficient derivative free methods with and without memory for solving nonlinear equations,” International Journal of Computational Methods, vol. 12, no. 1, Article ID 1350093, 28 pages, 2014.
25. T. Lotfi and E. Tavakoli, “On a new efficient Steffensen-like iterative class by applying a suitable self-accelerator parameter,” The Scientific World Journal, vol. 2014, Article ID 769758, 9 pages, 2014.
26. A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, “A new technique to obtain derivative-free optimal iterative methods for solving nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 252, pp. 95–102, 2013.