Abstract and Applied Analysis, Volume 2012, Article ID 782170, 14 pages. http://dx.doi.org/10.1155/2012/782170
Research Article

On a Family of High-Order Iterative Methods under Kantorovich Conditions and Some Applications

Departamento de Matemáticas, Facultad de Ciencias, Universidad de Santiago de Chile, Casilla 307, Correo 2, Santiago, Chile

Received 23 February 2012; Revised 4 June 2012; Accepted 4 June 2012

Copyright © 2012 S. Amat et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper is devoted to the study of a class of high-order iterative methods for nonlinear equations on Banach spaces. An analysis of the convergence under Kantorovich-type conditions is proposed. Some numerical experiments, where the analyzed methods present better behavior than some classical schemes, are presented. These applications include the approximation of some quadratic and integral equations.

1. Introduction

This paper deals with the approximation of nonlinear equations F(x) = 0, where F: X → Y is a nonlinear operator between Banach spaces X and Y, using a family of high-order iterative methods in which each iteration applies a correction built from the identity operator I on X and, for each x ∈ X, a linear operator on X defined through a given nonlinear operator G (usually depending on the operator F and its derivatives), assuming that F′(x)⁻¹ ∈ L(Y, X) exists; here L(Y, X) denotes the space of bounded linear operators from Y to X.

The second step can be interpreted as an acceleration of the initial one (in our case, Newton's method). Indeed, this family was introduced for scalar equations in [1]; for any initial scheme, Traub's theorem reads as follows.

Theorem 1.1. For every sufficiently smooth function, the resulting iterative method has order of convergence at least p + 2, where p is the order of the initial scheme.

In this paper, we consider the classical Newton method as the initial scheme. We have mainly three reasons. First, we can recover many well-known high-order iterative methods. Second, the domain of convergence of Newton's method is bigger than that of higher-order schemes [2]. Finally, in practice it is a good strategy to start with a simple method when we are not sufficiently close to the solution [3].

On the other hand, conditions are imposed on the operator and on the initial guess in order to ensure the convergence of the iterates to a solution of the equation. This analysis, usually known as Kantorovich type, is based on a relationship between the problem in a Banach space and a single nonlinear scalar equation which governs the behavior of the problem. A priori error estimates, depending only on the initial conditions, and hence the order of convergence, can be obtained by using Kantorovich-type theorems.

A review of the literature on high-order iterative methods over the last two decades (see for instance [4] and its references, or the incomplete list of recent papers [5–16]) reveals the importance of high-order schemes. The main practical difficulty of the classical third-order iterative methods is the evaluation of the second-order derivative. For a nonlinear system of n equations and n unknowns, the first Fréchet derivative is a matrix with n² entries, while the second Fréchet derivative has n³ entries. This implies a huge number of operations per iteration. However, in some cases the second derivative is easy to evaluate. Clear examples are the approximation of Hammerstein equations, where the second Fréchet derivative is diagonal by blocks, and quadratic equations, where it is constant.

The structure of this paper is as follows: in Section 2 we present some particular examples of methods included in the family; in Section 3 we establish convergence and uniqueness theorems (of Kantorovich type); finally, some numerical experiments are presented in Section 4. These applications include quadratic (Riccati) equations and integral (Hammerstein) equations. In all these problems the proposed methods seem more efficient than second-order methods.

2. A Family of High-Order Iterative Methods

As indicated in the introduction, we are interested in the study of the following family of iterative methods:

Note that method (2.1) is equivalent to iterating a fixed-point function determined by the operator and the chosen auxiliary function.

Particular examples of schemes included in the family with nonsmooth functions are:
(1) Halley.
(2) Super-Halley.
(3) Chebyshev.
(4) Chebyshev-like methods: for a given parameter, we consider the corresponding one-parameter family of methods.
(5) Two-step.

These methods have order of convergence three, which is smaller than the estimate in Traub's theorem (since the associated function is nonsmooth). For instance, the above two-step method admits such a representation; indeed, all these methods have a nonsmooth function in the denominator.
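For a scalar equation f(x) = 0, the two-step instance can be illustrated by the classical frozen-derivative scheme: a full Newton step followed by a correction that reuses f′(xₙ), which is third-order convergent. This is a minimal sketch under that assumed concrete form; the paper states the method in operator form.

```python
import math

def two_step_newton(f, fp, x0, tol=1e-14, max_iter=50):
    """Two-step scheme: a Newton step (the initial method) followed by a
    'frozen-derivative' correction that reuses f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        d = fp(x)
        y = x - f(x) / d          # Newton step
        x_new = y - f(y) / d      # acceleration step with frozen derivative
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = x^3 - 2 has the root 2^(1/3)
root = two_step_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.5)
print(abs(root - 2 ** (1 / 3)))  # essentially machine precision
```

Each iteration needs two function evaluations but only one derivative evaluation, which is where the efficiency gain over two full Newton steps comes from.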

On the other hand, considering different smooth functions, the following schemes are also particular examples in the family:
(1) A two-step method of order four.
(2) A two-step method of order five.
(3) One can also start with other iterative functions and develop a similar analysis. For instance, starting with Chebyshev's method we obtain a method that has order six [3]. We use this scheme only in the numerical section.

3. Semilocal Convergence

Several techniques are usually considered to study the convergence of iterative methods, as can be seen in [4, 17–20]. Among these, the two most common are those based on the majorant principle and on recurrence relations.

In this section, we analyze the semilocal convergence of the introduced family (1.2) under a generalization of Kantorovich conditions.

Namely, we assume that:
(C1) there exists x₀ ∈ X such that Γ₀ = F′(x₀)⁻¹ exists and ‖Γ₀‖ ≤ β;
(C2) ‖Γ₀F(x₀)‖ ≤ η;
(C3) ‖F″(x)‖ ≤ M for all x in the domain;
(C4) ‖F″(x) − F″(y)‖ ≤ K‖x − y‖ for all x, y in the domain.

Under these hypotheses it is possible to find a cubic polynomial in an interval such that , , and in , with the unique simple solution of , and verifying the following hypotheses:

For and :
(H1) ,
(H2) ,
(H3) for all , ,
(H4) , with , and .

Some immediate properties of the polynomial may be obtained from the conditions imposed above:
(1) is decreasing in the interval , since in that interval.
(2) in .
(3) is increasing and is convex in , since we have in .
(4) is increasing in , since in that interval.

From these properties, the following facts hold:
(a) The Newton map associated to , , is increasing in , and .
(b) The function in , since and are strictly positive in that interval. Furthermore, , since and .

In this paper, as in [21, page 43], we consider the following polynomial as the majorizing function, assuming

If this last condition holds, then the cubic polynomial has two positive roots r₁ and r₂ (r₁ ≤ r₂). We can then choose the parameters so that the hypotheses above are satisfied.

Moreover, we need some extra conditions associated to the operator and the function. We assume:
(Hg1) , for ,
(Hg2) ,
(Hg3) in , where

All the methods considered in the previous section have associated functions that verify the last three conditions. With the last two hypotheses on the function and the definition of the scalar sequence, following [21, Corollary on page 31], the next result holds.

Proposition 3.1. The scalar sequence starting from the point above converges monotonically to the real simple solution of the scalar equation in the corresponding interval.
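Proposition 3.1 can be illustrated numerically for a scalar polynomial: on an interval where the polynomial is positive, decreasing, and convex, Newton iterates started to the left of the smallest root increase monotonically toward it. A sketch with an illustrative cubic (not the paper's majorizing polynomial, whose coefficients are not reproduced here):

```python
def newton_scalar(p, dp, t0, n_steps=8):
    """Return the Newton iterates t_{n+1} = t_n - p(t_n)/p'(t_n)."""
    ts = [t0]
    for _ in range(n_steps):
        t = ts[-1]
        ts.append(t - p(t) / dp(t))
    return ts

# Illustrative cubic: p > 0, p' < 0 and p convex on [0, r1], r1 ~ 0.34730,
# so Newton from t0 = 0 increases monotonically toward the smallest root.
p  = lambda t: t**3 - 3 * t + 1
dp = lambda t: 3 * t**2 - 3
ts = newton_scalar(p, dp, 0.0)
assert all(b >= a - 1e-12 for a, b in zip(ts, ts[1:]))  # monotone increase
assert abs(p(ts[-1])) < 1e-10                           # converged to the root
```

Convexity is what keeps every tangent below the graph, so each Newton iterate stays to the left of the root and the sequence is monotone.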

We are now ready to prove the desired semilocal convergence.

Theorem 3.2. Let us assume that the hypotheses (H1)–(H4) and (Hg1)–(Hg3) hold with the constants introduced above.
Then the sequence (1.2) is well defined and converges to the unique solution of the equation in the corresponding domain.
Moreover, a priori error estimates, depending only on the initial data, hold.

Proof. By an induction process, it is possible to verify that:
(i) ,
(ii) ,
and then
(iii) ,
(iv) .
The case n = 0 follows from the initial conditions on and .
We now assume that the conditions are valid for n and we check them for n + 1.
(i) Applying Taylor's theorem: , because is increasing. By applying the general invertibility criterion, is invertible, and .
(ii) Using the following Taylor expansion , and by the definition of the method, we obtain , and since , we conclude that . Similarly, from the expansion , the definition of the method, the main hypotheses on , and the induction process, we obtain, using that and that , the desired inequality.
In this situation, the theorem holds by applying the previous estimates directly to the formulas that describe the methods; we refer to [21, pages 41-42] for more details.

The estimates given in the present paper are optimal in the sense that the scalar sequence associated to the majorizing problem verifies the inequalities with equalities.

4. Numerical Experiments

We consider several problems where the presented high-order methods can be considered as a good alternative to second-order methods.

4.1. Approximation of Riccati’s Equations

In this first example, we consider quadratic equations, for which the second Fréchet derivative is constant. Particular cases of this type of equation, which appear in many applications, are Riccati equations [22–24]. For instance, if we consider the problem of calculating feedback controls for systems modeled by partial differential or delay differential equations, a classical controller-design objective is to find a control for the state such that a given objective function is minimized, where the weighting matrix is positive definite. In practice, the control is calculated through approximation. This leads to solving an algebraic Riccati equation for a feedback operator; see [25, 26] for more details.

In the general case, an algebraic Riccati equation is given by [27], where the coefficient matrices are given and symmetric, and the matrix is the unknown.

In this case, the Fréchet derivatives are easily computed.

In particular, the second derivative is constant. In this case, the Kantorovich conditions for Newton's method take a compact form. Moreover, this hypothesis also gives the convergence of the high-order methods [22].

Then, using a matrix norm,

Given a symmetric initial guess, to obtain the next iterate we solve a linear matrix equation. This equation has a solution if the associated matrix is stable [27], that is, if all its eigenvalues have negative real part.
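The Newton step for a symmetric algebraic Riccati equation of the form XA + AᵀX − XBX + C = 0 reduces to a Lyapunov-type linear equation in the next iterate. The sketch below assumes this standard form and solves the linear equation via the Kronecker (vec) identity; the toy coefficients are an illustrative assumption, and a production code would use a dedicated Sylvester/Lyapunov solver.

```python
import numpy as np

def lyap_solve(M, Q):
    """Solve X M + M^T X = Q via (M^T (x) I + I (x) M^T) vec(X) = vec(Q),
    using column-major (Fortran-order) vec."""
    n = M.shape[0]
    I = np.eye(n)
    K = np.kron(M.T, I) + np.kron(I, M.T)
    x = np.linalg.solve(K, Q.flatten(order="F"))
    return x.reshape((n, n), order="F")

def newton_riccati(A, B, C, X0, n_iter=20):
    """Newton's method for F(X) = X A + A^T X - X B X + C = 0; each step
    solves X_{k+1}(A - B X_k) + (A - B X_k)^T X_{k+1} = -(C + X_k B X_k)."""
    X = X0
    for _ in range(n_iter):
        M = A - B @ X
        X = lyap_solve(M, -(C + X @ B @ X))
    return X

# toy data (an assumption for illustration): A = 0, B = C = I, so the
# equation reduces to -X^2 + I = 0 with stabilizing solution X = I;
# X0 = 2I makes A - B X0 = -2I stable, as the convergence theory requires
n = 2
A, B, C = np.zeros((n, n)), np.eye(n), np.eye(n)
X = newton_riccati(A, B, C, X0=2 * np.eye(n))
print(np.allclose(X, np.eye(n)))  # True
```

The vec-based solve costs O(n⁶) and is only sensible for small examples; its purpose here is to make the Lyapunov structure of the Newton step explicit.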

Next, to illustrate the previous results, we consider the algebraic Riccati equation (4.4) with given coefficient matrices and starting point. In this case, the algebraic Riccati equation has a known exact solution. Besides, from the aforesaid starting point it follows that the associated matrix is stable.

Now, considering the stopping criterion, we obtain the errors reported in Table 1. If we now analyze the computational order of convergence [28], ρₙ = ln(eₙ₊₁/eₙ)/ln(eₙ/eₙ₋₁), where eₙ denotes the error at step n, we observe that the method M6 has, computationally, order of convergence at least six. See Table 2, where the columns denote, respectively, the computational order of convergence of the three methods.
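The computational order of convergence can be estimated from any three consecutive errors; a small sketch of the estimate of [28], checked on a synthetic error sequence of known order:

```python
import math

def computational_order(errors):
    """Computational order of convergence from consecutive errors
    e_{n-1}, e_n, e_{n+1}: rho_n = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    rhos = []
    for e_prev, e, e_next in zip(errors, errors[1:], errors[2:]):
        rhos.append(math.log(e_next / e) / math.log(e / e_prev))
    return rhos

# synthetic error sequence decaying with order 3: e_{n+1} = e_n^3
errs = [1e-1]
for _ in range(3):
    errs.append(errs[-1] ** 3)
print(computational_order(errs))  # each entry close to 3.0
```

In practice the unknown error eₙ is replaced by the computable difference ‖xₙ₊₁ − xₙ‖ when no exact solution is available.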

Table 1: Errors for the Newton, Chebyshev and M6 methods.
Table 2: The computational order of convergence for the Newton, Chebyshev and M6 methods.

In comparison with the classical Newton method, the extra computational cost per iteration of method M6 is only two new evaluations of the operator and two extra matrix-vector multiplications. Moreover, as in Newton's method, only one decomposition per iteration is necessary. Thus, M6 is more efficient.

See [29] for more details.

4.2. Approximation of Hammerstein Equations

We consider an important special case of integral equation, the Hammerstein equation. These equations are related to boundary value problems for differential equations. For some of them, high-order methods using second derivatives are useful for their effective (discretized) solution.

The discrete version of (4.14) is obtained by applying a quadrature formula, with the grid points and weights of the formula defining the discrete nonlinear system.
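The discretization described above can be sketched for an illustrative Hammerstein equation; the kernel s·t and the quadratic nonlinearity below are assumptions for illustration, not the equation used in the experiments. The trapezoidal rule turns the integral equation into a finite nonlinear system, which we solve here by Newton's method.

```python
import numpy as np

def hammerstein_newton(m=41, n_iter=20):
    """Discretize x(s) = 1 + (1/2) * int_0^1 s*t*x(t)^2 dt (illustrative
    kernel and nonlinearity) by the trapezoidal rule; solve by Newton."""
    t = np.linspace(0.0, 1.0, m)
    w = np.full(m, 1.0 / (m - 1))
    w[0] = w[-1] = 0.5 / (m - 1)              # trapezoidal weights
    W = 0.5 * np.outer(t, t) * w              # W[i, j] = (1/2) s_i t_j w_j
    x = np.ones(m)                            # initial guess x_0 == 1
    for _ in range(n_iter):
        G = x - 1.0 - W @ x**2                # residual of the discrete system
        J = np.eye(m) - W * (2.0 * x)         # Jacobian: I - W diag(2 x_j)
        x -= np.linalg.solve(J, G)
    res = np.max(np.abs(x - 1.0 - W @ x**2))
    return t, x, res

t, x, res = hammerstein_newton()
print(res)  # residual near machine precision
```

The Jacobian is the identity plus a rank-structured correction whose columns are scaled by the derivative of the nonlinearity at the current iterate, which is exactly the block-diagonal structure of the second derivative exploited in the text.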

The second Fréchet derivative of the associated discrete system is diagonal by blocks.

Consider the following Hammerstein equation:

The discretization of this equation verifies the Lipschitz condition of our Kantorovich theorem [4].

We use the trapezoidal quadrature formula and take as the exact solution the one obtained numerically by Newton's method. In Table 3, we summarize the numerical results for different methods in the family: Newton, Halley, and M4. We consider the same initial guess for the three methods.

Table 3: Errors for the Newton, Halley, and M4 methods.

Since the second derivative is diagonal by blocks, its application has a computational cost of lower order than that of a full second derivative. Thus, the computational cost of each iteration of the three schemes is, for a sufficiently large number of nodes, of the same order (dominated by the decomposition). Note that only one factorization is needed in each iteration of the three schemes. In conclusion, the scheme M4 (order four) is the most efficient for sufficiently large systems.

See [30] for other-related problems.

5. Conclusions

Summing up, in this paper we have studied a family of high-order iterative methods. The theoretical analysis allows us to ensure convergence of all these schemes under Kantorovich-type conditions. We established a priori error bounds for them and, consequently, their order of convergence. We have presented different applications in which the analyzed high-order methods are more efficient than simpler second-order methods.

Acknowledgment

S. Amat, C. Bermúdez, and S. Busquier were supported in part by MINECO-FEDER MTM2010-17508 and 08662/PI/08. S. Plaza was supported in part by Fondecyt (Grant no. 1095025).

References

1. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall Series in Automatic Computation, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
2. J. A. Ezquerro and M. A. Hernández, “An improvement of the region of accessibility of Chebyshev's method from Newton's method,” Mathematics of Computation, vol. 78, no. 267, pp. 1613–1627, 2009.
3. S. Amat, M. A. Hernández, and N. Romero, “A modified Chebyshev's iterative method with at least sixth order of convergence,” Applied Mathematics and Computation, vol. 206, no. 1, pp. 164–174, 2008.
4. S. Amat and S. Busquier, “Third-order iterative methods under Kantorovich conditions,” Journal of Mathematical Analysis and Applications, vol. 336, no. 1, pp. 243–261, 2007.
5. M. Dehghan and M. Hajarian, “On derivative free cubic convergence iterative methods for solving nonlinear equations,” Journal of Computational Mathematics and Mathematical Physics, vol. 51, no. 4, pp. 555–561, 2011.
6. J. Džunić, M. S. Petković, and L. D. Petković, “Three-point methods with and without memory for solving nonlinear equations,” Applied Mathematics and Computation, vol. 218, no. 9, pp. 4917–4927, 2012.
7. J. A. Ezquerro, M. Grau-Sánchez, A. Grau, M. A. Hernández, M. Noguera, and N. Romero, “On iterative methods with accelerated convergence for solving systems of nonlinear equations,” Journal of Optimization Theory and Applications, vol. 151, no. 1, pp. 163–174, 2011.
8. L. Fang, “A cubically convergent iterative method for solving nonlinear equations,” Advances and Applications in Mathematical Sciences, vol. 10, no. 2, pp. 117–119, 2011.
9. M. Grau-Sánchez, À. Grau, and M. Noguera, “Frozen divided difference scheme for solving systems of nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 235, no. 6, pp. 1739–1743, 2011.
10. M. Grau-Sánchez, À. Grau, and M. Noguera, “On the computational efficiency index and some iterative methods for solving systems of nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 236, no. 6, pp. 1259–1266, 2011.
11. Y. I. Kim and Y. H. Geum, “A cubic-order variant of Newton's method for finding multiple roots of nonlinear equations,” Computers & Mathematics with Applications, vol. 62, no. 4, pp. 1634–1640, 2011.
12. W. Li and H. Chen, “A unified framework for the construction of higher-order methods for nonlinear equations,” Open Numerical Methods Journal, vol. 2, pp. 6–11, 2010.
13. M. S. Petković, J. Džunić, and B. Neta, “Interpolatory multipoint methods with memory for solving nonlinear equations,” Applied Mathematics and Computation, vol. 218, no. 6, pp. 2533–2541, 2011.
14. F. Sha and X. Tan, “A class of iterative methods of third order for solving nonlinear equations,” International Journal of Nonlinear Science, vol. 11, no. 2, pp. 165–167, 2011.
15. P. Wang, “A third-order family of Newton-like iteration methods for solving nonlinear equations,” Journal of Numerical Mathematics and Stochastics, vol. 3, no. 1, pp. 13–19, 2011.
16. X. Zhou, X. Chen, and Y. Song, “Constructing higher-order methods for obtaining the multiple roots of nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 235, no. 14, pp. 4199–4206, 2011.
17. I. K. Argyros, “Improving the order and rates of convergence for the super-Halley method in Banach spaces,” The Korean Journal of Computational & Applied Mathematics, vol. 5, no. 2, pp. 465–474, 1998.
18. I. K. Argyros, “The convergence of a Halley-Chebysheff-type method under Newton-Kantorovich hypotheses,” Applied Mathematics Letters, vol. 6, no. 5, pp. 71–74, 1993.
19. J. A. Ezquerro and M. A. Hernández, “New Kantorovich-type conditions for Halley's method,” Applied Numerical Analysis and Computational Mathematics, vol. 2, no. 1, pp. 70–77, 2005.
20. M. A. Hernández and M. A. Salanova, “Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method,” Journal of Computational and Applied Mathematics, vol. 126, no. 1-2, pp. 131–143, 2000.
21. N. Romero, Familias paramétricas de procesos iterativos de alto orden de convergencia [Ph.D. thesis], 2006, http://dialnet.unirioja.es/.
22. S. Amat, S. Busquier, and J. M. Gutiérrez, “An adaptive version of a fourth-order iterative method for quadratic equations,” Journal of Computational and Applied Mathematics, vol. 191, no. 2, pp. 259–268, 2006.
23. C.-H. Guo and N. J. Higham, “Iterative solution of a nonsymmetric algebraic Riccati equation,” SIAM Journal on Matrix Analysis and Applications, vol. 29, no. 2, pp. 396–412, 2007.
24. C.-H. Guo and A. J. Laub, “On the iterative solution of a class of nonsymmetric algebraic Riccati equations,” SIAM Journal on Matrix Analysis and Applications, vol. 22, no. 2, pp. 376–391, 2000.
25. I. Lasiecka and R. Triggiani, Control Theory for Partial Differential Equations: Continuous and Approximation Theories, Part 1, Cambridge University Press, 2000.
26. I. Lasiecka and R. Triggiani, Control Theory for Partial Differential Equations: Continuous and Approximation Theories, Part 2, Cambridge University Press, 2000.
27. P. Lancaster and L. Rodman, Algebraic Riccati Equations, The Clarendon Press, New York, NY, USA, 1995.
28. M. Grau and M. Noguera, “A variant of Cauchy's method with accelerated fifth-order convergence,” Applied Mathematics Letters, vol. 17, no. 5, pp. 509–517, 2004.
29. S. Amat, M. A. Hernández, and N. Romero, “Semilocal convergence of a sixth order iterative method for Riccati’s equations,” to appear in Applied Numerical Mathematics.
30. S. Amat, S. Busquier, and J. M. Gutiérrez, “Geometric constructions of iterative functions to solve nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 157, no. 1, pp. 197–205, 2003.