Advances in Numerical Analysis
Volume 2014 (2014), Article ID 152187, 11 pages
An Efficient Family of Traub-Steffensen-Type Methods for Solving Systems of Nonlinear Equations
Janak Raj Sharma1 and Puneet Gupta2
1Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Punjab 148 106, India
2Department of Mathematics, Government Ranbir College, Sangrur, Punjab 148 001, India
Received 6 February 2014; Accepted 11 June 2014; Published 2 July 2014
Academic Editor: Zhangxin Chen
Copyright © 2014 Janak Raj Sharma and Puneet Gupta. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Based on the Traub-Steffensen method, we present a derivative-free three-step family of sixth-order methods for solving systems of nonlinear equations. The local convergence order of the family is determined using the first-order divided difference operator for functions of several variables and direct computation by Taylor's expansion. Computational efficiency is discussed, and a comparison between the efficiencies of the proposed techniques and the existing ones is made. Numerical tests are performed to compare the methods of the proposed family with the existing methods and to confirm the theoretical results. It is shown that the new family is especially efficient in solving large systems.
The problem of finding a solution of the system of nonlinear equations $F(x) = 0$, where $F : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ and $D$ is an open convex domain in $\mathbb{R}^n$, by iterative methods is an important and challenging task in numerical analysis and many applied scientific branches. One of the basic procedures for solving nonlinear equations is the quadratically convergent Newton method (see [1, 2]):
$$x^{(k+1)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)}), \quad k = 0, 1, 2, \ldots, \tag{1}$$
where $F'(x^{(k)})^{-1}$ is the inverse of the first Fréchet derivative $F'(x^{(k)})$ of the function $F(x)$.
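For concreteness, here is a minimal sketch of the Newton iteration (1) in Python with NumPy; the 2 x 2 test system and its Jacobian are illustrative assumptions, not one of the paper's test problems.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method (1): x_{k+1} = x_k - F'(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        # Solve F'(x) d = F(x) rather than forming the inverse explicitly.
        x = x - np.linalg.solve(J(x), Fx)
    return x

# Illustrative system: x^2 + y^2 = 1, x = y (an assumption for the sketch).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0*x[0], 2.0*x[1]], [1.0, -1.0]])
print(newton_system(F, J, [1.0, 0.5]))  # -> (1/sqrt(2), 1/sqrt(2))
```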
In many practical situations, it is desirable to avoid the calculation of the derivative of the function $F$. In such situations, one uses only the computed values of $F$ and approximates the derivative by employing the values of $F$ at suitable points. For example, a basic derivative-free iterative method is the Traub-Steffensen method [3], which also converges quadratically and follows the scheme
$$x^{(k+1)} = x^{(k)} - [w^{(k)}, x^{(k)}; F]^{-1} F(x^{(k)}), \tag{2}$$
where $w^{(k)} = x^{(k)} + \beta F(x^{(k)})$, $[w^{(k)}, x^{(k)}; F]^{-1}$ is the inverse of the first-order divided difference of $F$ on the points $w^{(k)}$ and $x^{(k)}$, and $\beta$ is an arbitrary nonzero constant. Throughout this paper, $\phi_p^{(i)}$ is used to denote the $i$th iteration function of convergence order $p$. For $\beta = 1$, the scheme (2) reduces to the well-known Steffensen method [4].
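A sketch of the Traub-Steffensen iteration (2), again in Python; the divided difference matrix is formed columnwise by the formula recalled in Section 3, and the test system is the same illustrative assumption as above.

```python
import numpy as np

def divided_difference(F, y, x):
    """First-order divided difference [y, x; F], formed columnwise by
    [y,x;F]_ij = (F_i(y_1..y_j, x_{j+1}..x_n) - F_i(y_1..y_{j-1}, x_j..x_n)) / (y_j - x_j)."""
    n = len(x)
    D = np.empty((n, n))
    for j in range(n):
        upper = np.concatenate((y[:j + 1], x[j + 1:]))
        lower = np.concatenate((y[:j], x[j:]))
        D[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return D

def traub_steffensen(F, x0, beta=0.01, tol=1e-12, max_iter=100):
    """Scheme (2): x_{k+1} = x_k - [w_k, x_k; F]^{-1} F(x_k), w_k = x_k + beta*F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        w = x + beta * Fx
        x = x - np.linalg.solve(divided_difference(F, w, x), Fx)
    return x

F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])  # assumed test system
print(traub_steffensen(F, np.array([1.0, 0.5])))
```

Note that the columns of this matrix telescope: summing the $j$th column times $(y_j - x_j)$ over $j$ gives $F(y) - F(x)$, as required of a first-order divided difference.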
In recent years, many derivative-free higher-order methods of great efficiency have been developed for solving the scalar equation $f(x) = 0$; see [5–14] and the references therein. For systems of nonlinear equations, however, the construction of efficient higher-order derivative-free methods is a difficult task, and therefore not many such methods can be found in the literature. Recently, based on Steffensen's scheme, that is, with $\beta = 1$ in (2), a family of seventh-order methods has been proposed in [13]. Some important members of this family, as shown in [13], are given by the schemes (3) and (4). Per iteration, both methods use four function evaluations, five first-order divided differences, and three matrix inversions. The notable feature of these algorithms is their simple design, which makes them easy to implement for solving systems of nonlinear equations. Here, the fourth-order method $\phi_4^{(1)}$ contained in (3) is a generalization of the method proposed by Ren et al. in [5], and $\phi_4^{(2)}$ contained in (4) is a generalization of the method by Liu et al. [6].
In this paper, our aim is to develop derivative-free iterative methods that satisfy the basic requirements of quality numerical algorithms, that is, algorithms with (i) high convergence speed, (ii) minimum computational cost, and (iii) simple design. To this end, we propose a derivative-free family of sixth-order methods. The scheme is composed of three steps, of which the first two steps consist of any derivative-free fourth-order method with the Traub-Steffensen iteration (2) as its base, whereas the third step is a weighted Traub-Steffensen iteration. The algorithm of the present contribution is as simple as the methods (3) and (4) but with the additional advantage of high computational efficiency, especially when applied to large systems of equations.
The rest of the paper is organized as follows. The sixth-order scheme, with its convergence analysis, is presented in Section 2. In Section 3, the computational efficiency of the new methods is discussed and compared with that of methods in the same category. Various numerical examples are considered in Section 4 to show the consistent convergence behavior of the methods and to verify the theoretical results. Section 5 contains concluding remarks.
2. The Method and Its Convergence
Based on the above considerations of a quality numerical algorithm, we begin with the three-step iteration scheme (5), in which $\phi_4$ denotes any derivative-free fourth-order scheme and the third, weighted Traub-Steffensen step contains three parameters to be determined.
In order to find the convergence order of scheme (5), we first define the divided difference operator for a function of several variables (see [18]). The divided difference operator of $F$ is a mapping $[\cdot, \cdot; F] : D \times D \to L(\mathbb{R}^n)$ defined by
$$[x + h, x; F] = \int_0^1 F'(x + t h)\, dt, \quad \forall (x, h) \in \mathbb{R}^n \times \mathbb{R}^n. \tag{6}$$
Expanding $F'(x + t h)$ in a Taylor series at the point $x$ and then integrating, we have
$$[x + h, x; F] = F'(x) + \tfrac{1}{2} F''(x)\, h + \tfrac{1}{6} F'''(x)\, h^2 + O(h^3), \tag{7}$$
where $h^i = (h, \ldots, h) \in \mathbb{R}^n \times \cdots \times \mathbb{R}^n$.
Let $r \in D$ be a zero of $F$. Assuming that $\Gamma = F'(r)^{-1}$ exists and then developing $F(x^{(k)})$ and its first three derivatives in a neighborhood of $r$, we have
$$F(x^{(k)}) = F'(r)\big(e^{(k)} + A_2 (e^{(k)})^2 + A_3 (e^{(k)})^3 + A_4 (e^{(k)})^4\big) + O\big((e^{(k)})^5\big), \tag{8}$$
where $e^{(k)} = x^{(k)} - r$, $A_i = \frac{1}{i!} \Gamma F^{(i)}(r)$, and $F^{(i)}(r) \in L(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$. The $A_i$ are symmetric operators that are used later on.
We can now analyze the behavior of the scheme (5) through the following theorem.
Theorem 1. Let the function $F$ be sufficiently differentiable in an open neighborhood $D$ of its zero $r$, and let $\phi_4$ be a fourth-order iteration function satisfying the error relation (9), with the notation of (8). If an initial approximation $x^{(0)}$ is sufficiently close to $r$, then the local order of convergence of method (5) is at least 6, provided the three parameters assume the values determined in the proof below.
Proof. Let $e^{(k)} = x^{(k)} - r$. Then, using (8), it follows that
Employing (7) with $x = x^{(k)}$ and $h = w^{(k)} - x^{(k)}$ and then using (9)–(11), we write
Expanding in formal power series, the inverse of the preceding operator can be written as
Using (8) and (15) up to the required terms in the first step of (5), we find that
Equation (7), with the arguments corresponding to the second step of (5), yields
Similarly, substituting the corresponding arguments in (7), we obtain
Then, using (15), (17), and (18) up to the required terms, we find that
Postmultiplying (19) by the appropriate divided difference and simplifying, we get
A Taylor series expansion of $F(z^{(k)})$ about $r$ yields
Then, using (20) and (21) in the third step of (5), we obtain the error equation as
In order to find the three parameters, it is sufficient to equate the factors of the lower-order error terms to zero; then, solving the resulting system of equations, we obtain the required values of the parameters.
Thus, for this set of values, the above error equation reduces to $e^{(k+1)} = O\big((e^{(k)})^6\big)$, which shows the sixth order of convergence, and hence the result follows.
Finally, with this set of parameter values, the sixth-order family of methods is expressed by the scheme (25). Thus, the scheme (25) defines a new three-step family of derivative-free sixth-order methods whose first two steps are any fourth-order scheme based on the Traub-Steffensen method (2). Some simple members of this family are as follows.
Method-I. The first method, which we denote by $\phi_6^{(1)}$, is obtained by choosing in (25) the fourth-order method $\phi_4^{(1)}$ as given in formula (3). It is clear that this method uses four function evaluations, three first-order divided differences, and two matrix inversions per iteration.
Method-II. The second method, which we denote by $\phi_6^{(2)}$, is obtained by choosing in (25) the fourth-order method $\phi_4^{(2)}$ shown in (4). This method requires the same evaluations per iteration as Method-I.
3. Computational Efficiency
Here, we estimate the computational efficiency of the proposed methods and compare it with that of the existing methods. To do this, we use the efficiency index, according to which the efficiency of an iterative method is given by $E = p^{1/C}$, where $p$ is the order of convergence and $C$ is the computational cost per iteration. For a system of $n$ nonlinear equations in $n$ unknowns, the computational cost per iteration is given by (see [17])
$$C(\mu, n, l) = P_1(n)\,\mu + P_2(n, l). \tag{29}$$
Here, $P_1(n)$ denotes the number of evaluations of scalar functions used in the evaluations of $F$ and $[y, x; F]$, and $P_2(n, l)$ denotes the number of products needed per iteration. The divided difference $[y, x; F]$ of $F$ is an $n \times n$ matrix with elements given as (see [17, 18])
$$[y, x; F]_{ij} = \frac{F_i(y_1, \ldots, y_j, x_{j+1}, \ldots, x_n) - F_i(y_1, \ldots, y_{j-1}, x_j, \ldots, x_n)}{y_j - x_j}, \quad 1 \le i, j \le n. \tag{30}$$
In order to express the value of $C(\mu, n, l)$ in terms of products, a ratio $\mu > 0$ between products and evaluations of scalar functions and a ratio $l \ge 1$ between products and quotients are required.
To compute $F$ in any iterative function, we evaluate $n$ scalar functions $F_i$ ($1 \le i \le n$), and if we compute a divided difference $[y, x; F]$, then we evaluate $n(n-1)$ additional scalar functions, where $F(x)$ and $F(y)$ are computed separately. We must add $n^2$ quotients from any divided difference, $n^2$ products for the multiplication of a matrix with a vector or of a matrix by a scalar, and $n$ products for the multiplication of a vector by a scalar. In order to compute an inverse linear operator, we solve a linear system, where we have $n(n-1)(2n-1)/6$ products and $n(n-1)/2$ quotients in the LU decomposition and $n(n-1)$ products and $n$ quotients in the resolution of the two triangular linear systems.
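These counting rules can be turned into a small cost calculator; the sketch below (Python) tallies the per-iteration cost in product units and the efficiency index $E = p^{1/C}$. The per-method counts follow the text, while the matrix-vector counts and the values of mu and l used in the example are assumptions for illustration.

```python
import math

def lu_solve_cost(n, ell):
    """Cost, in product units, of one LU decomposition plus two triangular solves
    (quotients weighted by the ratio ell)."""
    products  = n*(n - 1)*(2*n - 1)/6 + n*(n - 1)
    quotients = n*(n - 1)/2 + n
    return products + ell*quotients

def iteration_cost(n, mu, ell, F_evals, div_diffs, inversions, matvecs):
    """Per-iteration cost in product units under the counting rules above."""
    scalar_evals = F_evals*n + div_diffs*n*(n - 1)   # evaluations of scalar functions
    quotients    = div_diffs*n**2                    # divisions inside divided differences
    return mu*scalar_evals + ell*quotients + inversions*lu_solve_cost(n, ell) + matvecs*n**2

def efficiency_index(p, C):
    """E = p^(1/C)."""
    return p**(1.0/C)

# Illustrative comparison for n = 50 (mu = 1, ell = 2.5, and matvecs = 2 are assumptions):
n, mu, ell = 50, 1.0, 2.5
C6 = iteration_cost(n, mu, ell, F_evals=4, div_diffs=3, inversions=2, matvecs=2)  # phi_6
C7 = iteration_cost(n, mu, ell, F_evals=4, div_diffs=5, inversions=3, matvecs=2)  # phi_7
print(efficiency_index(6, C6) > efficiency_index(7, C7))  # True: phi_6 wins for this n
```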
The computational efficiency of the present sixth-order methods $\phi_6^{(1)}$ and $\phi_6^{(2)}$ is compared with that of the existing fourth-order methods $\phi_4^{(1)}$ and $\phi_4^{(2)}$ and of the seventh-order methods $\phi_7^{(1)}$ and $\phi_7^{(2)}$. In addition, we also compare the present methods with each other. Let us denote the efficiency indices of these methods by $E_p^{(i)}$ and the computational costs by $C_p^{(i)}$. Then, taking into account the above and previous considerations, we have the cost expressions (31)–(36).
3.1. Comparison between Efficiencies
To compare the computational efficiencies of two iterative methods, say $\phi_p^{(i)}$ against $\phi_q^{(j)}$, we consider the ratio
$$R = \frac{\log E_p^{(i)}}{\log E_q^{(j)}} = \frac{C_q^{(j)} \log p}{C_p^{(i)} \log q}. \tag{37}$$
It is clear that if $R > 1$, the iterative method $\phi_p^{(i)}$ is more efficient than $\phi_q^{(j)}$.
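The ratio (37) follows directly from the cost calculator sketched above; the helper names and the n = 50 figures C6 and C7 are the same assumptions as in the previous sketch.

```python
import math

def efficiency_ratio(p_i, C_i, p_j, C_j):
    """Ratio (37): R > 1 means the first method is the more efficient one."""
    return (C_j*math.log(p_i)) / (C_i*math.log(p_j))

# C6 and C7 as computed in the previous sketch:
print(efficiency_ratio(6, C6, 7, C7))  # > 1 for the n = 50 example above
```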
Comparing the methods pairwise by means of the ratio (37), one finds in each case that $R > 1$ for suitable ranges of $n$, $\mu$, and $l$. We summarize the results of these comparisons in the following theorem.
Theorem 2. For suitable ranges of $\mu$ and $l$, each of the proposed sixth-order methods $\phi_6^{(1)}$ and $\phi_6^{(2)}$ is more efficient than the fourth-order methods $\phi_4^{(1)}$ and $\phi_4^{(2)}$ and the seventh-order methods $\phi_7^{(1)}$ and $\phi_7^{(2)}$, in particular for large $n$. Otherwise, the comparison depends on $\mu$, $l$, and $n$.
4. Numerical Results
In this section, some numerical problems are considered to illustrate the convergence behavior and computational efficiency of the proposed methods. The performance is compared with that of the existing methods $\phi_4^{(1)}$, $\phi_4^{(2)}$, $\phi_7^{(1)}$, and $\phi_7^{(2)}$. All computations are performed in the programming package Mathematica [19] using multiple-precision arithmetic with 4096 digits. For every method, we analyze the number of iterations $k$ needed to converge to the solution for a prescribed error tolerance. In the numerical results, we also include the CPU time used in the execution of the program, which is computed by the Mathematica command "TimeUsed". In order to verify the theoretical order of convergence, we calculate the computational order of convergence $\rho$ using the formula (see [20])
$$\rho \approx \frac{\ln\big(\|x^{(k+1)} - r\| \,/\, \|x^{(k)} - r\|\big)}{\ln\big(\|x^{(k)} - r\| \,/\, \|x^{(k-1)} - r\|\big)},$$
taking into consideration the last three approximations in the iterative process.
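A sketch of this computation in Python; the artificial error sequence in the demonstration is an assumption used only to check the formula.

```python
import numpy as np

def computational_order(iterates, r):
    """Computational order of convergence rho from the last three approximations
    x_{k-1}, x_k, x_{k+1} and the known solution r."""
    e = [np.linalg.norm(np.asarray(x, dtype=float) - r) for x in iterates[-3:]]
    return np.log(e[2]/e[1]) / np.log(e[1]/e[0])

# Check against errors behaving like a second-order process (|e_{k+1}| ~ |e_k|^2):
r = np.zeros(2)
xs = [r + 1e-2, r + 1e-4, r + 1e-8]
print(computational_order(xs, r))  # -> 2.0
```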
To connect the analysis of computational efficiency with the numerical examples, the definition of computational cost (29) is applied, for which an estimation of the factor $\mu$ is required. For this, we express the cost of the evaluation of the elementary functions in terms of products, which depends on the computer, the software, and the arithmetic used (see [21, 22]). In Table 1, the elapsed CPU time (measured in milliseconds) in the computation of the elementary functions and an estimation of their cost in product units are displayed. The programs are executed on an Intel(R) Core(TM) i5-480M CPU @ 2.67 GHz (64-bit machine) running Microsoft Windows 7 Home Basic (2009) and are compiled in Mathematica 7.0 using multiple-precision arithmetic. From Table 1, the computational cost $l$ of a quotient with respect to a product can be estimated for this hardware and software.
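The spirit of this estimation can be mimicked with a rough timing loop. The sketch below is in Python rather than the paper's Mathematica, and in CPython the measured ratios are dominated by interpreter overhead, so the numbers are only indicative of the idea, not of the paper's Table 1.

```python
import math
import time

def unit_cost(f, reps=200000):
    """Rough per-call wall time of a zero-argument callable, in seconds."""
    t0 = time.perf_counter()
    for _ in range(reps):
        f()
    return (time.perf_counter() - t0)/reps

a, b = 0.7853981, 1.2599210
product  = unit_cost(lambda: a*b)
quotient = unit_cost(lambda: a/b)
exp_eval = unit_cost(lambda: math.exp(a))

print("l  (quotient vs. product):", quotient/product)
print("mu (exp vs. product):     ", exp_eval/product)
```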
The present methods $\phi_6^{(1)}$ and $\phi_6^{(2)}$ are tested using several values of the parameter $\beta$. The following problems are chosen for the numerical tests.
Problem 1. Consider a system of two equations. In this problem, the computed values of $\mu$ and $l$ are used in (31)–(36) for calculating the computational costs and efficiency indices of the considered methods. The initial approximation is chosen close to the solution.
Problem 3. Next, consider a two-point boundary value problem on the interval $[0, 1]$. Assume the partitioning of the interval
$$0 = x_0 < x_1 < x_2 < \cdots < x_{m-1} < x_m = 1, \quad x_{j+1} = x_j + h, \quad h = 1/m,$$
and let us define $y_0 = y(x_0), y_1 = y(x_1), \ldots, y_m = y(x_m)$. If we discretize the problem by using the standard numerical formula for the second derivative,
$$y''_j \approx \frac{y_{j-1} - 2 y_j + y_{j+1}}{h^2}, \quad j = 1, 2, \ldots, m-1,$$
we obtain a system of nonlinear equations in $n = m - 1$ variables. In particular, we solve this problem for a fixed $m$, selecting a suitable initial vector; the solution of this problem and the concrete values of the parameters $\mu$ and $l$ are obtained accordingly.
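A sketch of this discretization in Python. The nonlinearity g, the boundary values, and the mesh size below are illustrative assumptions (the paper's concrete boundary value problem is not reproduced here), and the solver reuses the traub_steffensen sketch from the introduction.

```python
import numpy as np

def bvp_residual(y, g, A, B, m):
    """Residual of the central-difference discretization of y'' = g(x, y) on [0, 1]
    with y(0) = A, y(1) = B:  F_j(y) = y_{j-1} - 2 y_j + y_{j+1} - h^2 g(x_j, y_j)."""
    h = 1.0/m
    x = np.linspace(0.0, 1.0, m + 1)
    Y = np.concatenate(([A], y, [B]))    # pad interior unknowns with boundary values
    return Y[:-2] - 2.0*Y[1:-1] + Y[2:] - h*h*g(x[1:-1], Y[1:-1])

g = lambda x, y: y**3                    # assumed nonlinearity, for illustration only
m = 10                                   # so the system has n = m - 1 = 9 unknowns
F = lambda y: bvp_residual(y, g, A=0.0, B=1.0, m=m)
y0 = np.linspace(0.0, 1.0, m + 1)[1:-1]  # initial vector: linear interpolant of the BCs
print(traub_steffensen(F, y0, beta=0.01))
```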
Problem 4. Consider a system of fifteen equations. In this problem, the concrete values of the parameters $\mu$ and $l$ are computed as before, and an initial approximation close to the solution is assumed.
Problem 5. Consider a system of fifty equations. In this problem, the computed values of $\mu$ and $l$ are used in (31)–(36) for calculating the computational costs and efficiency indices, and an initial approximation is assumed for obtaining the solution.
Problem 6. Lastly, consider the nonlinear and nondifferentiable integral equation of mixed Hammerstein type (see [23]), in which the unknown is $x \in C[0, 1]$, $s, t \in [0, 1]$, and the kernel $G$ is given as
$$G(s, t) = \begin{cases} (1 - s)\,t, & t \le s, \\ s\,(1 - t), & s \le t. \end{cases}$$
We transform the above equation into a finite-dimensional problem by using the Gauss-Legendre quadrature formula
$$\int_0^1 f(t)\, dt \approx \sum_{j=1}^{m} \varpi_j f(t_j),$$
where the abscissas $t_j$ and the weights $\varpi_j$ are determined for a chosen $m$ by the Gauss-Legendre quadrature formula. Denoting the approximation of $x(t_i)$ by $x_i$ ($i = 1, 2, \ldots, m$), we obtain a system of $m$ nonlinear equations in $m$ unknowns. In this problem, the computed values of $\mu$ and $l$ are used in (31)–(36) for calculating the computational costs and efficiency indices, and an initial approximation is assumed for obtaining the solution.
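A sketch of this discretization in Python. The specific Hammerstein form below, with the constant 1/2 and the nondifferentiable nonlinearity $x^2 + |x|$, is an assumed illustration of the "nonlinear and nondifferentiable" structure and is not necessarily the paper's exact equation; the solver again reuses the traub_steffensen sketch from the introduction.

```python
import numpy as np

# Gauss-Legendre discretization of a mixed Hammerstein-type equation
#   x(s) = 1 + c * integral_0^1 G(s, t) N(x(t)) dt,
# with the kernel G above. The constant c = 1/2 and the nondifferentiable
# nonlinearity N(u) = u^2 + |u| are illustrative assumptions.
m = 8
nodes, weights = np.polynomial.legendre.leggauss(m)
t = 0.5*(nodes + 1.0)                    # map abscissas from [-1, 1] to [0, 1]
w = 0.5*weights

G = lambda s, tt: np.where(tt <= s, (1.0 - s)*tt, s*(1.0 - tt))
A = G(t[:, None], t[None, :]) * w[None, :]   # a_ij = w_j * G(t_i, t_j)

def F(x):
    # Residual of the discretized system: x_i - 1 - c * sum_j a_ij N(x_j) = 0
    return x - 1.0 - 0.5*A.dot(x**2 + np.abs(x))

x0 = np.ones(m)
print(traub_steffensen(F, x0, beta=0.01))    # derivative-free solve of the system
```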
Table 2 shows the numerical results obtained for the considered problems by the various methods. Displayed in this table are the number of iterations $k$, the computational order of convergence $\rho$, the computational cost $C$ in terms of products, the computational efficiency $E$, and the mean elapsed CPU time (e-time). The computational cost and efficiency are calculated according to the corresponding expressions (31)–(36), using the values of the parameters $\mu$ and $l$ as estimated in each problem. The mean elapsed CPU time is calculated by taking the mean of 50 runs of the program, with the same stopping criterion used in each single run.
From the numerical results, we can observe that, like the existing methods, the present methods show consistent convergence behavior. It is seen that some methods do not preserve their theoretical order of convergence, especially when applied to some typical kinds of nonlinear systems. This can be observed in the last problem, the nondifferentiable mixed Hammerstein integral equation, where the seventh-order methods $\phi_7^{(1)}$ and $\phi_7^{(2)}$ yield eighth order of convergence. However, for the present methods, the computational order of convergence overwhelmingly supports the theoretical order. A comparison of the numerical values of the computational efficiencies, exhibited in the second-to-last column of Table 2, verifies the theoretical results of Theorem 2. As we know, computational efficiency is inversely proportional to the total CPU time necessary to complete the iterative process; this means that a method with higher efficiency utilizes less CPU time than a method with lower efficiency. This is confirmed by the numerical values of computational efficiency and elapsed CPU time displayed in the last two columns of Table 2, which are in complete agreement with this notion.
5. Concluding Remarks
In the foregoing study, we have proposed iterative methods of sixth order of convergence for solving systems of nonlinear equations. The schemes are totally derivative-free and are therefore particularly suited to problems in which derivatives require lengthy computation. A development of the first-order divided difference operator for functions of several variables and direct computation by Taylor's expansion are used to prove the local convergence order of the new methods. A comparison of the efficiencies of the new schemes with those of the existing schemes has been presented. It is observed that the present methods have an edge over similar existing methods, especially when applied to large systems of equations. Six numerical examples have been presented, and the relevant performances have been compared with those of the existing methods. The computational results confirm the robust and efficient character of the proposed techniques. Similar numerical experiments have been carried out for a number of other problems, with results on a par with those presented here.
We conclude the paper with the remark that many numerical applications require multiprecision computations. The results of the numerical experiments justify that high-order efficient methods combined with multiple-precision floating-point arithmetic are very useful, because they yield a clear reduction in the number of iterations needed to achieve the required solution.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
1. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
2. C. T. Kelley, Solving Nonlinear Equations with Newton's Method, SIAM, Philadelphia, Pa, USA, 2003.
3. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
4. J. F. Steffensen, "Remarks on iteration," Skandinavisk Aktuarietidskrift, vol. 16, pp. 64–72, 1933.
5. H. Ren, Q. Wu, and W. Bi, "A class of two-step Steffensen type methods with fourth-order convergence," Applied Mathematics and Computation, vol. 209, no. 2, pp. 206–210, 2009.
6. Z. Liu, Q. Zheng, and P. Zhao, "A variant of Steffensen's method of fourth-order convergence and its applications," Applied Mathematics and Computation, vol. 216, no. 7, pp. 1978–1983, 2010.
7. M. S. Petković, S. Ilić, and J. Džunić, "Derivative free two-point methods with and without memory for solving nonlinear equations," Applied Mathematics and Computation, vol. 217, no. 5, pp. 1887–1895, 2010.
8. M. S. Petković and L. D. Petković, "Families of optimal multipoint methods for solving nonlinear equations: a survey," Applicable Analysis and Discrete Mathematics, vol. 4, no. 1, pp. 1–22, 2010.
9. M. S. Petković, J. Džunić, and L. D. Petković, "A family of two-point methods with memory for solving nonlinear equations," Applicable Analysis and Discrete Mathematics, vol. 5, no. 2, pp. 298–317, 2011.
10. Q. Zheng, J. Li, and F. Huang, "An optimal Steffensen-type family for solving nonlinear equations," Applied Mathematics and Computation, vol. 217, no. 23, pp. 9592–9597, 2011.
11. J. Džunić and M. S. Petković, "On generalized multipoint root-solvers with memory," Journal of Computational and Applied Mathematics, vol. 236, no. 11, pp. 2909–2920, 2012.
12. J. R. Sharma, R. K. Guha, and P. Gupta, "Some efficient derivative free methods with memory for solving nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 2, pp. 699–707, 2012.
13. X. Wang and T. Zhang, "A family of Steffensen type methods with seventh-order convergence," Numerical Algorithms, vol. 62, no. 3, pp. 429–444, 2013.
14. M. S. Petković, B. Neta, L. D. Petković, and J. Džunić, Multipoint Methods for Solving Nonlinear Equations, Elsevier, Boston, Mass, USA, 2013.
15. M. Grau-Sánchez, À. Grau, and M. Noguera, "Frozen divided difference scheme for solving systems of nonlinear equations," Journal of Computational and Applied Mathematics, vol. 235, no. 6, pp. 1739–1743, 2011.
16. M. Grau-Sánchez and M. Noguera, "A technique to choose the most efficient method between secant method and some variants," Applied Mathematics and Computation, vol. 218, no. 11, pp. 6415–6426, 2012.
17. M. Grau-Sánchez, À. Grau, and M. Noguera, "On the computational efficiency index and some iterative methods for solving systems of nonlinear equations," Journal of Computational and Applied Mathematics, vol. 236, no. 6, pp. 1259–1266, 2011.
18. F. A. Potra and V. Pták, Nondiscrete Induction and Iterative Processes, Pitman, Boston, Mass, USA, 1984.
19. S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.
20. L. O. Jay, "A note on Q-order of convergence," BIT Numerical Mathematics, vol. 41, no. 2, pp. 422–429, 2001.
21. L. Fousse, G. Hanrot, V. Lefèvre, P. Pélissier, and P. Zimmermann, "MPFR: a multiple-precision binary floating-point library with correct rounding," ACM Transactions on Mathematical Software, vol. 33, no. 2, article 13, 15 pages, 2007.
22. M. Grau-Sánchez, J. M. Peris, and J. M. Gutiérrez, "Accelerated iterative methods for finding solutions of a system of nonlinear equations," Applied Mathematics and Computation, vol. 190, no. 2, pp. 1815–1823, 2007.
23. J. A. Ezquerro, M. Grau-Sánchez, and M. Hernández, "Solving non-differentiable equations by a new one-point iterative method with memory," Journal of Complexity, vol. 28, no. 1, pp. 48–58, 2012.