Abstract and Applied Analysis

Volume 2014 (2014), Article ID 705674, 8 pages

http://dx.doi.org/10.1155/2014/705674
Research Article

New Mono- and Biaccelerator Iterative Methods with Memory for Nonlinear Equations

1Department of Mathematics, Islamic Azad University, Hamedan Branch, Hamedan, Iran

2Department of Mathematics, Islamic Azad University, Shahrekord Branch, Shahrekord, Iran

3Department of Mathematics and Applied Mathematics, School of Mathematical and Natural Sciences, University of Venda, P. Bag X5050, Thohoyandou 0950, South Africa

Received 23 May 2014; Accepted 4 July 2014; Published 24 July 2014

Academic Editor: Alicia Cordero

Copyright © 2014 T. Lotfi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Acceleration of convergence is discussed for some families of iterative methods for solving scalar nonlinear equations. In fact, we construct mono- and biparametric methods with memory and study their orders. It is shown that convergence orders 12 and 14 can be attained using only 4 functional evaluations, which provides high computational efficiency indices. Some illustrations are also given to verify the theoretical discussion.

1. Introduction

Finding the zeros of nonlinear functions using iterative methods is a challenging problem in computational mathematics with many applications (see, e.g., [1, 2]). The solution can be obtained as a fixed point of some function $g$ by means of the following fixed-point iteration: $x_{k+1} = g(x_k)$, $k = 0, 1, 2, \ldots$.

The most widely used method for this purpose is the classical Newton's method, $x_{k+1} = x_k - f(x_k)/f'(x_k)$, and its derivative-free form known as Steffensen's scheme [3], in which $f'(x_k)$ is replaced by the divided difference $f[x_k, x_k + f(x_k)]$. These methods converge quadratically under the conditions that the function is continuously differentiable and a good initial approximation is given [4].
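For reference, the following is a minimal sketch of the classical Steffensen iteration recalled above (not one of the schemes of this paper); the test function, starting point, and tolerance are illustrative choices.

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's scheme: derivative-free, quadratically convergent
    near a simple root, two f-evaluations per cycle."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        denom = f(x + fx) - fx          # equals f[x, x+f(x)] * f(x)
        if denom == 0.0:
            break                       # breakdown; return current iterate
        x_new = x - fx * fx / denom     # x - f(x)^2 / (f(x + f(x)) - f(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: a simple zero of f(x) = x^3 - 2
print(steffensen(lambda x: x**3 - 2.0, 1.2))  # ~1.2599210498948732
```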

Considering these fundamental methods, many iterative methods without memory possessing optimal convergence order in the sense of the Kung–Traub conjecture [5] have been constructed in the literature; see, for example, [6, 7] and the references therein. For applications, refer to [8, 9].

Following the recent trend of research on this topic, iterative methods with memory (also known as self-accelerating schemes) are worth studying. A method with memory can improve the convergence order of a method without memory at no additional functional evaluations, and as a result it attains a very high computational efficiency index. There are two kinds of iterative methods with memory, namely, Steffensen-type and Newton-type methods. In this paper, we consider only the Steffensen-type methods with memory.
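To make the idea of self-acceleration concrete before deriving the new schemes, the following sketch shows the classical one-parameter Steffensen-type method with memory; the secant-based update of $\gamma$ is the textbook choice, known to raise the $R$-order from 2 to $1+\sqrt{2} \approx 2.41$ at no extra function evaluations (the function names and defaults are ours).

```python
def steffensen_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
    """One-parameter Steffensen-type iteration with memory: the
    accelerator gamma approximates -1/f'(alpha) and is recomputed
    from already available values, at no extra f-evaluations."""
    x, gamma = x0, gamma0
    fx = f(x)
    for _ in range(max_iter):
        w = x + gamma * fx                   # auxiliary node
        dd = (f(w) - fx) / (w - x)           # divided difference f[x, w]
        x_new = x - fx / dd                  # parametrized Steffensen step
        f_new = f(x_new)
        if abs(x_new - x) < tol:
            return x_new
        gamma = -(x_new - x) / (f_new - fx)  # secant-based accelerator update
        x, fx = x_new, f_new
    return x
```

The methods studied below follow the same pattern but approximate $f'(\alpha)$ by derivatives of Newton interpolating polynomials through more nodes, which yields much faster acceleration.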

To review the literature briefly, we remark that optimal Steffensen-type families without memory for solving nonlinear equations were introduced in [10] in a general form; two-step self-accelerating Steffensen-type methods and their applications in the solution of nonlinear systems and nonlinear differential equations were discussed in [11].

In 2012, Soleymani et al. [12] proposed some multiparametric multistep optimal iterative methods without memory for nonlinear equations. For instance, they proposed the three-step scheme (2), stated in a shorthand notation that is used throughout, which possesses the error equation (3). They also proposed the scheme (4), with the error equation (5), and the scheme (6), with the error equation (7).

Error relations (3), (5), and (7) play key roles in our study of convergence acceleration in the subsequent sections.

The purpose of this paper is to extend the results of [12] by providing with-memory variants of the above three-step schemes. We contribute two types of memorization, namely, variants using one accelerator and variants using two accelerators. For background on such accelerations, one may refer to [13].

The remaining sections of this paper are organized as follows. Section 2 is devoted to the derivation of new root solvers with memory using one accelerator. Section 3 derives some new methods without and with memory possessing a very high computational efficiency index; the computational efficiency index is also discussed to reveal the applicability and efficacy of the proposed approaches. The performance is tested through numerical examples in Section 4, where the theoretical results concerning order of convergence and computational efficiency are confirmed, and it is shown that the presented methods are more efficient than their existing counterparts. Finally, concluding remarks are given in Section 5.

2. Development of Some Monoaccelerator Methods with Memory

Our motivation for constructing methods with memory is directly connected to the basic tenet of numerical analysis that an algorithm should produce output as accurate as possible at minimal computational cost. In other words, it is necessary to search for algorithms of high computational efficiency.

Subsequently, we propose the following monoaccelerator methods, in which the free parameter is replaced by an iteratively recomputed approximation so as to accelerate convergence: the scheme (8), the variant (9), and also the scheme (10).

Note that, throughout this work, Newton's interpolating polynomial is set through the available approximations (nodes) from the current and previous iteration(s).

In fact, the main idea in constructing methods with memory consists of recalculating the self-accelerating parameter as the iteration proceeds by formulas of the type $\gamma_k = -1/N'(x_k)$, or similar ones, where $N'(x_k)$ is an approximation to $f'(\alpha)$ obtained from the interpolating polynomial. In essence, in this way we minimize the factors involved in the final error equation of the families without memory. This automatically increases the speed of convergence by rescaling the method and is known as memorization.
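As a concrete illustration of such an update (a sketch under the assumption that the accelerator has the form $\gamma_k = -1/N'_m(x_k)$; the node list in the usage comment is hypothetical), the derivative of the Newton interpolating polynomial can be evaluated from a divided-difference table:

```python
import numpy as np

def newton_poly_derivative(nodes, fvals, t):
    """Evaluate N'_m(t), the derivative of the Newton interpolating
    polynomial through the given nodes, via divided differences."""
    nodes = np.asarray(nodes, dtype=float)
    dd = np.asarray(fvals, dtype=float)
    coeffs = [dd[0]]                      # f[n0], f[n0,n1], f[n0,n1,n2], ...
    for j in range(1, len(nodes)):
        dd = (dd[1:] - dd[:-1]) / (nodes[j:] - nodes[:-j])
        coeffs.append(dd[0])
    # N(t) = sum_j coeffs[j] * prod_{i<j} (t - nodes[i]);
    # differentiate term by term with the product rule.
    deriv = 0.0
    for j in range(1, len(nodes)):
        s = 0.0
        for k in range(j):
            prod = 1.0
            for i in range(j):
                if i != k:
                    prod *= t - nodes[i]
            s += prod
        deriv += coeffs[j] * s
    return deriv

# Hypothetical accelerator update from nodes of the current and previous step:
# gamma = -1.0 / newton_poly_derivative([x_prev, w_prev, x_k], [f0, f1, f2], x_k)
```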

It is also assumed that an initial estimate of the self-accelerating parameter (or of both parameters, for the biparametric methods with memory) is chosen before starting the iterative process.

Let us now recall an important lemma from Traub [4].

Lemma 1. If , , then the estimate holds, where is an asymptotic constant.

Theorem 2 determines the $R$-order of the three-step iterative methods with memory (8), (9), and (10). Note that the degree of the interpolating polynomial is chosen in this paper so as to obtain as high a convergence order as possible. Obviously, if fewer nodes are used for the interpolating polynomials, slower acceleration is achieved.

Theorem 2. If an initial estimate $x_0$ is close enough to a simple root $\alpha$ of $f$, $f$ being a sufficiently differentiable function, then the $R$-order of convergence of the three-step methods with memory (8), (9), and (10) is at least 12.

Proof. Let $\{x_k\}$ be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a root $\alpha$ of $f$ with the $R$-order $r$, we will write $e_{k+1} \sim D_{k,r}\, e_k^{r}$, $e_k = x_k - \alpha$, where $D_{k,r}$ tends to the asymptotic error constant of (IM) when $k \to \infty$. Hence $e_{k+1} \sim D_{k,r}\bigl(D_{k-1,r}\, e_{k-1}^{r}\bigr)^{r} = D_{k,r} D_{k-1,r}^{r}\, e_{k-1}^{r^{2}}$. We assume that the $R$-orders of the iterative sequences $\{y_k\}$, $\{z_k\}$, and $\{x_k\}$ are at least $p$, $q$, and $r$, respectively. By (14) and Lemma 1, we obtain the corresponding error estimates; substituting these into the error relations for $y_k$, $z_k$, and $x_{k+1}$ and equating the exponents of $e_{k-1}$ in the resulting pairs of relations gives the following system of linear equations:

This system has the solution $r = 12$, together with the corresponding values of the intermediate orders $p$ and $q$, which specifies the $R$-order of convergence twelve for the derivative-free schemes with memory (8) and (10). Similar results are valid for (9). The proof is now complete.

Significant acceleration of convergence is thus attained without any additional functional evaluations, which yields the very high computational efficiency of the proposed methods. We remark that a further advantage is that the proposed methods do not use derivatives.

Following the definition of the efficiency index, $E = p^{1/\theta}$, where $p$ and $\theta$ stand for the rate of convergence and the number of functional evaluations per cycle, the computational efficiency index of the proposed variants with memory reaches $12^{1/4} \approx 1.861$, which is higher than the $8^{1/4} \approx 1.682$ of the families (2), (4), and (6).
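These indices are easy to reproduce; the following small computation (the labels are ours) recovers the figures quoted above and in Section 3:

```python
# Efficiency index E = p**(1/theta), with p the convergence order and
# theta the number of functional evaluations per cycle (4 throughout).
for label, p in [("optimal without memory, (2)/(4)/(6)", 8),
                 ("monoaccelerator with memory, (8)/(9)/(10)", 12),
                 ("biaccelerator with memory, (22)", 14)]:
    print(f"{label}: E = {p ** (1 / 4):.4f}")
# -> 1.6818, 1.8612, 1.9343
```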

3. Biaccelerator Methods with Memory

An accelerating approach similar to that used in the previous section will be applied to construct three-step methods with memory. The calculation of two parameters is more involved since more information is needed per iteration.

3.1. The Development of Some Families without Memory

We first introduce a free parameter in the denominator of the first substep of (4) to yield the following more general family (18) of eighth-order methods without memory:

Theorem 3. Assume that the function $f$ has a simple root $\alpha \in D$, where $D$ is an open interval. Assume furthermore that $f$ is sufficiently differentiable in a neighborhood of $\alpha$. Then, the order of convergence of the iterative family without memory defined by (18) is eight.

Proof. The proof of this theorem is similar to the proofs in [10]. Hence it is omitted, and we only give the error equation (19) for the family (18), in which, as usual, $c_j = f^{(j)}(\alpha)/(j!\, f'(\alpha))$.

Similarly, we have the following new multiparametric family of methods, (20):

For this family, we have the following theorem.

Theorem 4. Assume that the function $f$ has a simple root $\alpha \in D$, where $D$ is an open interval. Assume furthermore that $f$ is sufficiently differentiable in a neighborhood of $\alpha$. Then, the order of convergence of the iterative family without memory defined by (20) is eight.

Proof. The proof of this theorem is similar to the proofs in [10]. Hence it is omitted, and we only give the error equation (21) for the family (20), in which, as usual, $c_j = f^{(j)}(\alpha)/(j!\, f'(\alpha))$.

In the next subsection, we extend these schemes to methods with memory for solving scalar nonlinear equations.

3.2. The Development of Some Biaccelerator Methods with Memory

Now, using suitable Newton interpolatory polynomials of appropriate degree passing through all the available nodes, we can propose the following method with memory (22) with two accelerators:
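For illustration only, here is a sketch of how both parameters might be recomputed from a single interpolating polynomial per step; the update rules and every name in the usage comment are our assumptions, in the spirit of biparametric Steffensen-type schemes, not the exact formulas of (22):

```python
import numpy as np

def interp_derivatives(nodes, fvals, t):
    """First and second derivatives at t of the polynomial interpolating
    (nodes, fvals); for the handful of nodes used here, polyfit is adequate."""
    p = np.poly1d(np.polyfit(nodes, fvals, len(nodes) - 1))
    return p.deriv(1)(t), p.deriv(2)(t)

# Hypothetical biaccelerator update from the nodes of the current and
# previous iteration:
# d1, d2 = interp_derivatives([x_prev, y_prev, z_prev, x_k],
#                             [f0, f1, f2, f3], x_k)
# gamma_k = -1.0 / d1        # approximates -1/f'(alpha)
# p_k     = d2 / (2.0 * d1)  # approximates f''(alpha) / (2 f'(alpha))
```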

Similarly, we have the following lemma.

Lemma 5. If and , , then the estimates hold, where and are some asymptotic constants.

Subsequently, the following theorem determines the convergence order of the three-step iterative method with memory (22).

Theorem 6. If an initial estimate $x_0$ is close enough to a simple root $\alpha$ of $f$, $f$ being a sufficiently differentiable function, then the $R$-order of convergence of the three-step method with memory (22) is at least 14.

Proof. Let $\{x_k\}$ be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a root $\alpha$ of $f$ with the $R$-order $r$, we will write $e_{k+1} \sim D_{k,r}\, e_k^{r}$, where $D_{k,r}$ tends to the asymptotic error constant of (IM) when $k \to \infty$. Hence $e_{k+1} \sim D_{k,r}\bigl(D_{k-1,r}\, e_{k-1}^{r}\bigr)^{r} = D_{k,r} D_{k-1,r}^{r}\, e_{k-1}^{r^{2}}$. We assume that the $R$-orders of the iterative sequences $\{y_k\}$, $\{z_k\}$, and $\{x_k\}$ are at least $p$, $q$, and $r$, respectively. By (27), (28), (29), and Lemma 5, we obtain the corresponding error estimates. Substituting these into the error relations, we obtain (31)–(34), whose coefficients are some asymptotic constants. Equating the powers of the error exponents of $e_{k-1}$ in the pairs of relations (27)–(31), (28)–(32), (29)–(33), and (26)–(34) gives a system of linear equations whose solution yields $r = 14$, together with the corresponding intermediate orders, which specifies the $R$-order of convergence fourteen of the derivative-free scheme with memory (22). The proof is now complete.

The computational efficiency index of the proposed variant with memory (22) is $14^{1/4} \approx 1.934$, which is higher than the $12^{1/4} \approx 1.861$ of the families (8), (9), and (10). Note that a biparametric acceleration technique based on self-correcting parameters had not previously been applied to three-step iterative methods, which underlines the originality of this study.

4. Numerical Reports

The numerical results presented in this section point to the very high computational efficiency and demonstrate the fast convergence of the proposed methods.

In the tables, the errors $|x_k - \alpha|$ of the approximations to the sought zeros are displayed, where $A(-h)$ stands for $A \times 10^{-h}$. Moreover, $r_c$ indicates the computational order of convergence (COC) and is computed by $r_c \approx \dfrac{\ln\bigl(|x_{k+1}-\alpha|/|x_k-\alpha|\bigr)}{\ln\bigl(|x_k-\alpha|/|x_{k-1}-\alpha|\bigr)}$.
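A minimal sketch of this computation (the error sequence in the example is illustrative; actual runs require multiprecision arithmetic, as noted below):

```python
import math

def coc(errors):
    """Computational order of convergence from the last three errors
    |x_{k-1} - alpha|, |x_k - alpha|, |x_{k+1} - alpha|."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Errors shrinking with order ~12 give a COC close to 12:
print(coc([1e-2, 1e-24, 1e-288]))  # -> 12.0
```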

Note that the package Mathematica 9 with multiprecision arithmetic was used.

For comparison, in our numerical experiments we also tested the following three-step iterative methods.

Kung and Traub’s method [5] is

Sharma et al.’s method [14] is where and .

Lotfi and Tavakoli’s method [15] is where , , , and .

Lotfi and Tavakoli’s method [15] is

Zheng et al.’s method [16] is

In Tables 1, 2, and 3, different test functions are examined, as indicated in their captions. Results of the second and third iterations are given only to demonstrate the convergence speed of the tested methods; in most practical problems such accuracy is not required. From the tables, we observe the extraordinary accuracy of the produced approximations, obtained using only a few function evaluations. Such accuracy is not needed in practice but is of theoretical importance. We emphasize that our primary aim was to construct very efficient three-step methods with memory.

Table 1.
Table 2.
Table 3.

5. Summary

In this paper, we have shown that the three-step families in [12] can be additionally accelerated without increasing the computational cost, which directly improves the computational efficiency of the modified methods. The main idea in constructing the higher-order methods consists of the introduction of a second self-accelerating parameter and the improvement of the accelerating technique for the first one.

It is evident from Tables 1 to 3 that approximations to the roots possess great accuracy when the proposed methods with memory are applied.

Further research could extend the presented methods to systems of nonlinear equations or propose with-memory versions using three or four accelerators. These directions are left for future studies.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The first author, Taher Lotfi, is thankful to the Hamedan Branch of Islamic Azad University for providing excellent research facilities and partial financial support.

References

  1. T. Lotfi, F. Soleymani, S. Sharifi, S. Shateyi, and F. Khaksar Haghani, "Multipoint iterative methods for finding all the simple zeros in an interval," Journal of Applied Mathematics, vol. 2014, Article ID 601205, 14 pages, 2014.
  2. F. Soleymani, E. Tohidi, S. Shateyi, and F. Khaksar Haghani, "Some matrix iterations for computing matrix sign function," Journal of Applied Mathematics, vol. 2014, Article ID 425654, 9 pages, 2014.
  3. J. F. Steffensen, "Remarks on iteration," Skandinavisk Aktuarietidskrift, vol. 16, pp. 64–72, 1933.
  4. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
  5. H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery, vol. 21, pp. 643–651, 1974.
  6. V. Kanwar, S. Bhatia, and M. Kansal, "New optimal class of higher-order methods for multiple roots, permitting f′(xn) = 0," Applied Mathematics and Computation, vol. 222, pp. 564–574, 2013.
  7. T. Lotfi, S. Shateyi, and S. Hadadi, "Potra–Pták iterative method with memory," ISRN Mathematical Analysis, vol. 2014, Article ID 697642, 6 pages, 2014.
  8. F. Soleymani, M. Sharifi, and S. Shateyi, "Approximating the inverse of a square matrix with application in computation of the Moore–Penrose inverse," Journal of Applied Mathematics, vol. 2014, Article ID 731562, 8 pages, 2014.
  9. Y. Wei, Q. He, Y. Sun, and C. Ji, "Improved power flow algorithm for VSC-HVDC system based on high-order Newton-type method," Mathematical Problems in Engineering, vol. 2013, Article ID 235316, 10 pages, 2013.
  10. A. Cordero and J. R. Torregrosa, "Low-complexity root-finding iteration functions with no derivatives of any order of convergence," Journal of Computational and Applied Mathematics, 2014.
  11. Q. Zheng, F. Huang, X. Guo, and X. Feng, "Doubly-accelerated Steffensen's methods with memory and their applications on solving nonlinear ODEs," Journal of Computational Analysis and Applications, vol. 15, no. 5, pp. 886–891, 2013.
  12. F. Soleymani, D. K. R. Babajee, S. Shateyi, and S. S. Motsa, "Construction of optimal derivative-free techniques without memory," Journal of Applied Mathematics, vol. 2012, Article ID 497023, 24 pages, 2012.
  13. F. Soleymani, "Some optimal iterative methods and their with memory variants," Journal of the Egyptian Mathematical Society, vol. 21, no. 2, pp. 133–141, 2013.
  14. J. R. Sharma, R. K. Guha, and P. Gupta, "Some efficient derivative free methods with memory for solving nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 2, pp. 699–707, 2012.
  15. T. Lotfi and E. Tavakoli, "On a new efficient Steffensen-like iterative class by applying a suitable self-accelerator parameter," The Scientific World Journal, vol. 2014, Article ID 769758, 9 pages, 2014.
  16. Q. Zheng, J. Li, and F. Huang, "An optimal Steffensen-type family for solving nonlinear equations," Applied Mathematics and Computation, vol. 217, no. 23, pp. 9592–9597, 2011.