Approximation Methods: Theory and Applications
Some Novel Sixth-Order Iteration Schemes for Computing Zeros of Nonlinear Scalar Equations and Their Applications in Engineering
In this paper, we propose two novel iteration schemes for computing zeros of nonlinear equations in one dimension. We develop these iteration schemes with the help of Taylor's series expansion, the generalized Newton-Raphson method, and an interpolation technique. The convergence analysis of the proposed iteration schemes is discussed, and it is established that the newly developed schemes have sixth order of convergence. Several numerical examples have been solved to illustrate the applicability and validity of the suggested schemes. These problems include some real-life applications from chemical and civil engineering, such as the adiabatic flame temperature equation, the conversion of a nitrogen-hydrogen feed to ammonia, the van der Waals equation, and the open channel flow problem. The numerical results demonstrate the better efficiency of these methods compared with other well-known existing iterative methods of the same kind.
The solution of nonlinear scalar equations plays a vital role in many fields of applied science such as engineering, physics, and mathematics. Analytical methods often fail to solve such equations, and we therefore need iterative methods to approximate the solution. In an iterative process, the first step is to choose an initial guess, which is then improved step by step until the approximate solution is achieved with the required accuracy. Some basic iterative methods are given in the literature [1–8] and the references therein. In the last few years, many researchers have worked on iterative methods and their applications and have proposed new iterative schemes that possess either a high convergence rate or a small number of functional evaluations per iteration; see [9–21] and the references therein. The convergence rate of an iterative method can be increased by introducing predictor and corrector steps, which results in multistep iterative methods, whereas the number of functional evaluations can be reduced by removing second and higher derivatives from the considered method using different mathematical techniques. Raising the convergence rate of an iterative scheme usually requires more functional evaluations per iteration, and, conversely, reducing the number of functional evaluations per iteration tends to lower the order of convergence, which is the main drawback. It is difficult to balance the two, i.e., the convergence rate and the functional evaluations per iteration, as there seems to be an inverse relation between them. In the twenty-first century, many mathematicians have tried to modify existing methods toward fewer functional evaluations per iteration and higher convergence order by applying different techniques such as the predictor-corrector technique, finite difference schemes, interpolation, Taylor's series, and quadrature formulas.
In 2007, Noor et al. introduced a two-step Halley's method with sextic convergence and then approximated its second derivative by a finite difference scheme, suggesting a novel second-derivative-free iterative algorithm with fifth-order convergence. In 2012, Hafiz and Al-Goria suggested two new algorithms, of order seven and nine, respectively, based on a weighted combination of the midpoint and Simpson quadrature formulas together with the predictor-corrector technique. Nazeer et al. in 2016 proposed a new second-derivative-free generalized Newton-Raphson method with fifth-order convergence by means of a finite difference scheme. In 2017, Kumar et al. suggested a sixth-order parameter-based family of algorithms for solving nonlinear equations. In the same year, Salimi et al. proposed an optimal class of eighth-order methods using weight functions and the Newton interpolation technique. Very recently, Naseem et al. presented some new sixth-order algorithms for finding zeros of nonlinear equations, investigated their dynamics by means of polynomiography, and presented some novel mathematical art through the execution of the presented algorithms.
In this paper, we suggest two novel iteration schemes in the form of predictor-corrector type numerical methods, namely, Algorithms 1 and 2, taking Newton's iteration method as the predictor step. The derivation of the first iteration scheme is based purely on Taylor's series expansion and the generalized Newton-Raphson method, whereas in the second we use an interpolation technique to remove the second derivative, which results in a higher efficiency index. We examine the convergence of the suggested schemes and prove that they have sextic convergence and are superior to other well-known methods of a similar nature. The efficiency indices of the presented schemes have been compared with those of other similar existing two-step iteration schemes. The proposed iteration schemes have been applied to solve some real-life problems, along with arbitrary transcendental and algebraic equations, in order to assess their applicability, validity, and accuracy.
2. Main Results
Consider the nonlinear algebraic equation
We assume that (1) has a simple zero and that the initial guess is sufficiently close to it. Using the Taylor's series expansion of (1) about the initial guess, we have
Provided the first derivative does not vanish at the initial guess, we can evaluate the above expression as follows:
If we choose the next approximation as the root of this equation, then we have
This is the quadratically convergent Newton's method [2–4] for root-finding of nonlinear functions, and it needs two computations per iteration for its execution. From (2), one can evaluate
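The Newton step described above can be sketched in a few lines of Python. The function, its derivative, and the tolerance here are illustrative choices, not the paper's test problems:

```python
import math

def newton(f, fprime, x0, tol=1e-14, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n), iterated until
    two successive approximations agree to within tol."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:                      # the derivation assumes f'(x) != 0
            raise ZeroDivisionError("derivative vanished")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative example: f(x) = x^2 - 2, whose positive root is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
print(root)  # close to 1.4142135623730951
```

Each iteration uses two evaluations (one of f, one of f'), which is the count quoted in the text.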
In iterative form: which is the cubically convergent generalized Newton-Raphson method and requires three functional evaluations per iteration for its execution. After simplification of (2), one obtains:
Now, from the generalized Newton-Raphson method in (5),
Rewriting the above equality in general form and inserting Newton's iteration method as a predictor, we arrive at a new algorithm of the form:
Algorithm 1. For a given initial guess, compute the approximate solution by the following iteration scheme:
which is a modification of the generalized Newton-Raphson method for determining the approximate roots of nonlinear algebraic equations. To find the approximate root of a given nonlinear equation by the algorithm described above, one has to find both the first and the second derivative of the given function. In several cases, however, we have to deal with functions whose second derivative does not exist, and the proposed algorithm fails to find an approximate root in that situation. To resolve this issue, we apply an interpolation technique to approximate the second derivative as follows:
Consider an interpolating polynomial with four unknown coefficients, whose values can be found by applying the following interpolation conditions:
From the above conditions, we obtain a system of four linear equations in four unknowns, the solution of which gives the following equality:
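The extraction above omits both the interpolation conditions and the resulting formula. As a minimal sketch of the idea, and not the paper's exact construction, one can fit a quadratic through the values f(x), f(y) and the slope f'(y); its constant second derivative then approximates f''(y):

```python
import math

def second_derivative_estimate(f, fprime, x, y):
    """Approximate f''(y) from f(x), f(y), f'(y) via interpolation.

    Fit p(t) = f(y) + f'(y)(t - y) + c(t - y)^2 through the three data
    items; then p''(t) = 2c approximates f''(y).  This quadratic fit is an
    illustrative stand-in for the paper's four-condition interpolation,
    whose details are not reproduced here.
    """
    h = x - y
    c = (f(x) - f(y) - fprime(y) * h) / (h * h)
    return 2.0 * c

# Check against f(t) = sin(t), where f''(t) = -sin(t).
est = second_derivative_estimate(math.sin, math.cos, x=1.0, y=1.1)
print(est, -math.sin(1.1))
```

The estimate uses only values of f and f', which is exactly why such interpolation removes the second-derivative evaluation from the iteration.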
Substituting this value of the second derivative from the above equality into Algorithm 1, we obtain a novel second-derivative-free algorithm as follows:
Algorithm 2. For a given initial guess, compute the approximate solution by the following iteration scheme: which is a novel second-derivative-free iterative algorithm for computing the approximate solutions of nonlinear algebraic equations. One of the main features of the suggested algorithm is that it can be applied to all those nonlinear functions whose second derivative does not exist. The removal of the second derivative reduces the number of functional evaluations per iteration, which yields a better efficiency index than that of methods requiring the second derivative. The results of the given test examples confirm its superior performance in comparison with other similar methods existing in the literature.
3. Convergence Analysis
This section includes the discussion regarding the convergence criteria of the suggested iteration schemes.
Theorem 3. Assume that the given equation has a simple zero and that the function is sufficiently smooth in a neighborhood of this zero. Then the convergence orders of Algorithms 1 and 2 are at least six.
Proof. To prove the convergence of Algorithms 1 and 2, we assume that the equation has a simple root and let the error at the nth iteration be the difference between the nth approximation and this root; then, by using the Taylor series about the root, we have where
With the help of equations (16) and (17), we get
With the help of equations (16)–(21), we have
Using equations (19)–(23) in Algorithms 1 and 2, we get the following equalities, which imply that
Equations (25) and (26) show that the orders of convergence of Algorithms 1 and 2 are at least six.
4. Comparison of Efficiency Index
In numerical analysis, the efficiency index of an algorithm provides information about the speed and performance of the algorithm under consideration. It is a numerical quantity that reflects the computational resources needed to execute the algorithm, and it can be thought of as analogous to engineering productivity for a process that involves iterations. The term efficiency index is used to analyze the numerical behavior of different algorithms. For iterative algorithms, this quantity depends on two factors: the first is the convergence order of the algorithm, whereas the second is the number of computations per iteration, i.e., the number of function and derivative evaluations required to execute the algorithm for root-finding of nonlinear functions. If the convergence order is denoted by p and the number of computations per iteration by m, then the efficiency index can be written mathematically as p^(1/m).
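With convergence order p and m evaluations per iteration, the index p^(1/m) can be tabulated directly; the orders and evaluation counts below are the ones stated in the surrounding text:

```python
def efficiency_index(p, m):
    """Efficiency index p**(1/m): order p, m evaluations per iteration."""
    return p ** (1.0 / m)

# (order, evaluations per iteration) as stated in the text.
methods = {
    "Noor's method one (NM1)": (2, 3),
    "Noor's method two (NM2)": (3, 3),
    "Traub's method (TM)":     (4, 4),
    "Modified Halley (MHM)":   (5, 4),
    "Algorithm 1":             (6, 5),
    "Algorithm 2":             (6, 4),
}
for name, (p, m) in sorted(methods.items(),
                           key=lambda kv: efficiency_index(*kv[1])):
    print(f"{name}: {efficiency_index(p, m):.4f}")
# Algorithm 2, with 6**(1/4) ≈ 1.5651, tops the list.
```

Note the trade-off the text describes: Algorithm 1's extra evaluation drops its index to 6^(1/5) ≈ 1.4310, below the fifth-order modified Halley's method, while removing the second derivative lifts Algorithm 2 to the best value.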
Since Noor's method one has quadratic convergence and requires three computations per iteration for its execution, its efficiency index is 2^(1/3) ≈ 1.2599. In the same way, the cubically convergent Noor's method two requires three computations per iteration and has an efficiency index of 3^(1/3) ≈ 1.4422. Similarly, the efficiency index of Traub's method is 4^(1/4) ≈ 1.4142, because it possesses convergence of order four with four computations per iteration. Since the modified Halley's method has fifth-order convergence with four computations per iteration, its efficiency index is 5^(1/4) ≈ 1.4953. Now, we calculate the efficiency indices of the suggested algorithms. Both algorithms have convergence of order six. The number of computations per iteration for the first algorithm is five, whereas the second proposed algorithm requires only four evaluations per iteration, so their efficiency indices are 6^(1/5) ≈ 1.4310 and 6^(1/4) ≈ 1.5651, respectively. The efficiency indices of the different iterative methods discussed above are summarized in Table 1.
Table 1 clearly shows that the presented method, namely, Algorithm 2, has the best efficiency index among the compared methods.
5. Numerical Comparisons and Applications
In this section, we include four real-life engineering problems and seven arbitrary problems in the form of transcendental and algebraic equations to illustrate the applicability and efficiency of our newly developed iterative methods. We compare these methods with the following similar existing two-step iteration schemes:
5.1. Noor’s Method One (NM1)
For a given initial guess, determine the approximate root with the iteration scheme given below: which is the quadratically convergent Noor's method one for root-finding of nonlinear equations.
5.2. Noor’s Method Two (NM2)
For a given initial guess, determine the approximate root with the iteration scheme given below: which is the cubically convergent Noor's method two for root-finding of nonlinear equations.
5.3. Traub’s Method (TM)
For a given initial guess, determine the approximate root with the iteration scheme given below: which is the two-step fourth-order Traub's method for root-finding of nonlinear equations.
5.4. Modified Halley’s Method (MHM)
For a given initial guess, determine the approximate root with the iteration scheme given below: which is the two-step modified Halley's method for root-finding of nonlinear equations and has fifth-order convergence. In order to compare the methods defined above numerically with the presented algorithms, we consider the following test Examples 1–5.
The general algorithm for finding the approximate solution of the given nonlinear functions is given as:
In Algorithm 3, we fix the accuracy used in the stopping criterion. All calculations for the numerical examples were performed with the computer program Maple 13, and the numerical results can be seen in Tables 2–6.
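The general driver amounts to the following loop. The tolerance value is an assumption here, since the extraction drops the figure used in Algorithm 3, and the corrector shown is a plain Newton placeholder rather than the paper's Algorithm 1 or 2:

```python
def solve(f, fprime, x0, tol=1e-15, max_iter=100):
    """Generic predictor-corrector driver with stopping criterion
    |x_{n+1} - x_n| < tol.  The corrector below is a second Newton step
    used only as a placeholder for the proposed schemes."""
    x = x0
    for n in range(1, max_iter + 1):
        y = x - f(x) / fprime(x)          # predictor: Newton's step
        x_new = y - f(y) / fprime(y)      # corrector (placeholder)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Example: the classical test equation x^3 + 4x^2 - 10 = 0.
root, iters = solve(lambda x: x**3 + 4 * x**2 - 10,
                    lambda x: 3 * x**2 + 8 * x, x0=1.5)
print(root, iters)  # root near 1.36523, in a handful of iterations
```

Swapping the corrector line for one of the proposed sixth-order steps would reproduce the structure of the paper's Algorithm 3.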
Example 1. Adiabatic flame temperature equation. The adiabatic flame temperature equation is represented by the following relation: where and For further details, see [29, 30] and the references therein. The above function is a polynomial of degree three, and by the fundamental theorem of algebra it must have exactly three roots. Among these roots, one is simple; we approximated it through the proposed methods starting from a suitable initial guess, and the numerical results are shown in Table 2.
Example 2. Fraction conversion of nitrogen-hydrogen to ammonia. We take this example from the literature; it describes the fraction conversion of a nitrogen-hydrogen feed to ammonia, usually known as the fractional conversion. In this problem, the temperature and pressure are taken as 500°C and 250 atm, respectively. The problem has the following nonlinear form: which can easily be reduced to the following polynomial: Since the degree of the above polynomial is four, it must have exactly four roots. By definition, the fraction conversion lies in the interval (0, 1), so only one real root exists in this interval, namely 0.2777595428; the other three roots have no physical meaning. We started the iteration process from a suitable initial guess. The numerical results for the different methods are shown in Table 3.
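The reduced quartic's coefficients are lost in this extraction. The polynomial below is the form commonly quoted for this problem in the numerical-methods literature at 500°C and 250 atm; it is used here only as an illustrative sketch, with plain Newton standing in for the proposed schemes:

```python
def f(x):
    # Reduced fractional-conversion quartic as commonly quoted in the
    # literature (assumed here, not taken verbatim from this paper).
    return x**4 - 7.79075 * x**3 + 14.7445 * x**2 + 2.511 * x - 1.674

def fprime(x):
    return 4 * x**3 - 23.37225 * x**2 + 29.489 * x + 2.511

x = 0.3                                   # initial guess in (0, 1)
for _ in range(50):
    x = x - f(x) / fprime(x)              # Newton iteration
print(x)  # the physically meaningful root, about 0.27776
```

The iterate settles on the root inside (0, 1); the quartic's other three roots are never approached from this starting point.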
Example 3. Finding volume from the van der Waals equation. In chemical engineering, the van der Waals equation has been used for interpreting real and ideal gas behavior, and it has the following form: By taking specific values of the parameters of the above equation, we can easily convert it to the following nonlinear function: whose solution gives the volume. Since the degree of the polynomial is three, it must possess three roots. Among these roots, there is only one positive real root that is feasible, because the volume of a gas can never be negative. We start the iteration process with a suitable initial guess, and the results can be seen in Table 4.
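The paper's specific parameter values are lost in this extraction. As an illustration of the same computation, the sketch below uses the standard van der Waals constants for CO2 (in L, atm, K units) at an assumed state, and a simple bisection in place of the proposed schemes:

```python
# Van der Waals equation (P + a/V^2)(V - b) = R*T for one mole,
# rearranged to the cubic  P*V^3 - (P*b + R*T)*V^2 + a*V - a*b = 0.
# Constants below are the standard CO2 values; the state (P, T) is an
# assumption for illustration, not taken from this paper.
a, b = 3.592, 0.04267          # CO2 van der Waals constants (L, atm units)
R = 0.082057                   # L*atm/(mol*K)
P, T = 10.0, 300.0             # assumed state: 10 atm, 300 K

def f(V):
    return P * V**3 - (P * b + R * T) * V**2 + a * V - a * b

# Bisection on a bracket around the ideal-gas estimate V = R*T/P.
lo, hi = 1.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
V = 0.5 * (lo + hi)
print(V)  # slightly below the ideal-gas value R*T/P ≈ 2.46 L
```

Only this positive real root is physically meaningful, matching the remark in the text that a gas volume can never be negative.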
Example 4. Open channel flow problem. The water flow in an open channel under uniform flow conditions is given by Manning's equation, which has the following standard form: where the symbols denote the slope, area, and hydraulic radius of the corresponding channel and Manning's roughness coefficient. For a rectangular channel with given width and depth of water, we may write: Using these values in (37), we obtain: To find the depth of water in the channel for a given quantity of water, the above equation may be written as a nonlinear function: We take specific values of the parameters (discharge, channel width, roughness coefficient, and slope) and choose an initial guess to start the iteration process; the corresponding results for the different iteration schemes are given in Table 5.
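The parameter values are dropped in this extraction, so the sketch below uses figures from a standard textbook instance of this problem (an assumption, not this paper's data), with bisection standing in for the proposed schemes:

```python
import math

# Manning's equation Q = (sqrt(S)/n) * A * R**(2/3) for a rectangular
# channel: area A = B*y, wetted perimeter B + 2y, hydraulic radius
# R = A/(B + 2y).  Parameter values are assumed for illustration.
Q, B, n, S = 14.15, 4.572, 0.017, 0.0015   # m^3/s, m, -, -

def f(y):
    A = B * y
    R = A / (B + 2 * y)
    return Q - (math.sqrt(S) / n) * A * R ** (2.0 / 3.0)

# Bisection for the water depth y.
lo, hi = 0.5, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
y = 0.5 * (lo + hi)
print(y)  # depth of roughly 1.47 m for these assumed parameters
```

Since the discharge grows monotonically with depth, the residual f(y) changes sign exactly once on the bracket, so any root-finder of the kind compared in the paper applies directly.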
Example 5. Transcendental and algebraic problems. To numerically analyze the suggested algorithms, we consider the following seven transcendental and algebraic equations: and their numerical results can be seen in Table 6.
Tables 2–6 exhibit the numerical comparison of the suggested algorithms with other existing algorithms of a similar nature. The columns of these tables report the number of iterations consumed by each algorithm, the absolute value of the function at the final approximation, the final approximated root, the absolute distance between two consecutive approximations, and the computational order of convergence (COC), which has the following approximate formula: COC ≈ ln|(x_{n+1} − x_n)/(x_n − x_{n−1})| / ln|(x_n − x_{n−1})/(x_{n−1} − x_{n−2})|.
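The COC estimate of Weerakoon and Fernando needs only three consecutive differences of iterates. A quick sanity check on Newton's method, whose theoretical order is two, is shown below; the test function is an illustrative choice:

```python
import math

def coc(xs):
    """Computational order of convergence from the last four iterates:
    COC ~ ln|d3/d2| / ln|d2/d1|, where d_k are successive differences."""
    d1 = abs(xs[-3] - xs[-4])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-1] - xs[-2])
    return math.log(d3 / d2) / math.log(d2 / d1)

# Newton iterates for f(x) = x^2 - 2 starting from x0 = 1.0.
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
print(coc(xs))  # close to the theoretical order 2
```

This is how the COC column of Tables 2–6 can be reproduced from any method's iterate sequence, without knowing the exact root.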
This approximation was first suggested in 2000 by Weerakoon and Fernando. Looking at the numerical results in Tables 2–6, we see that the presented methods show the best performance compared with the other ones. For example, in the second, fourth, fifth, tenth, and eleventh test examples, Algorithm 1 is the best, as it took the fewest iterations among all compared methods while maintaining great precision. In the seventh test example, Algorithm 2 shows the best performance, whereas in the first, third, sixth, eighth, and ninth test examples both proposed algorithms behave alike and perform better than all the others. In short, the proposed algorithms are superior to the other well-known existing iteration schemes in terms of accuracy, speed, number of iterations, and computational order of convergence.
Table 7 exhibits a comparison of the iterations consumed by different algorithms and by the newly proposed methods for root-finding of nonlinear algebraic functions with the stated accuracy. The columns of the table give the number of iterations for the various test functions together with the corresponding initial guesses. The numerical results in Table 7 again confirm the fast performance of the presented algorithms in terms of the number of iterations for the above-defined stopping criterion with the given accuracy. In all test examples, the proposed algorithms consumed fewer iterations than the other iterative algorithms. All calculations were performed with the computer program Maple 13.
6. Concluding Remarks
In this work, two novel iteration schemes with sextic convergence for computing the zeros of nonlinear functions have been established. The first iteration scheme is derived using Taylor's series expansion and the generalized Newton-Raphson method, whereas in the second we apply the basic idea of interpolation to approximate the second derivative, which results in a higher efficiency index. A comparison table of the efficiency indices of different methods of a similar nature has been presented, and it shows that the presented method has the highest efficiency index among the compared methods. By solving some engineering and arbitrary test problems with the aid of a computer program, the validity and applicability of the suggested iteration schemes have been analyzed. The numerical results in Tables 1–7 confirm the superiority of the suggested iteration schemes over the other existing two-step iteration schemes of a similar nature. Using the basic idea of interpolation, one can derive a broad range of new iteration schemes for computing zeros of one-dimensional nonlinear equations.
Data Availability
All data required for this paper are included within the paper.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
All authors contributed equally to this paper.
References
[1] C. Chun, "Construction of Newton-like iteration methods for solving nonlinear equations," Numerische Mathematik, vol. 104, no. 3, pp. 297–315, 2006.
[2] R. L. Burden and J. D. Faires, Numerical Analysis, Brooks/Cole Publishing Company, California, USA, 6th edition, 1997.
[3] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, New York, USA, 3rd edition, 2002.
[4] A. Quarteroni, R. Sacco, and F. Saleri, Numerical Mathematics, Springer-Verlag, New York, USA, 2000.
[5] I. K. Argyros, "A note on the Halley method in Banach spaces," Applied Mathematics and Computation, vol. 58, pp. 215–224, 1993.
[6] J. F. Traub, Iterative Methods for the Solution of Equations, Chelsea Publishing Company, New York, USA, 1982.
[7] J. M. Gutierrez and M. A. Hernandez, "A family of Chebyshev-Halley type methods in Banach spaces," Bulletin of the Australian Mathematical Society, vol. 55, no. 1, pp. 113–130, 1997.
[8] A. S. Householder, The Numerical Treatment of a Single Nonlinear Equation, McGraw-Hill, New York, USA, 1970.
[9] A. Amiri, A. Cordero, M. T. Darvishi, and J. R. Torregrosa, "A fast algorithm to solve systems of nonlinear equations," Journal of Computational and Applied Mathematics, vol. 354, pp. 242–258, 2019.
[10] W. Nazeer, S. M. Kang, M. Tanveer, and A. A. Shahid, "Fixed point results in the generation of Julia and Mandelbrot sets," Journal of Inequalities and Applications, vol. 2015, no. 1, 2015.
[11] M. A. Noor, K. I. Noor, and K. Aftab, "Some new iterative methods for solving nonlinear equations," World Applied Sciences Journal, vol. 20, no. 6, pp. 870–874, 2012.
[12] A. Özyapici, "Effective numerical methods for non-linear equations," International Journal of Applied and Computational Mathematics, vol. 6, no. 2, p. 35, 2020.
[13] D. Kumar, J. R. Sharma, and I. K. Argyros, "Optimal one-point iterative function free from derivatives for multiple roots," Mathematics, vol. 8, no. 5, p. 709, 2020.
[14] Y. C. Kwun, A. A. Shahid, W. Nazeer, M. Abbas, and S. M. Kang, "Fractal generation via CR iteration scheme with S-convexity," IEEE Access, vol. 7, pp. 69986–69997, 2019.
[15] A. Naseem, M. A. Rehman, and T. Abdeljawad, "Higher-order root-finding algorithms and their basins of attraction," Journal of Mathematics, vol. 2020, Article ID 5070363, 11 pages, 2020.
[16] W. Nazeer, M. Tanveer, S. M. Kang, and A. Naseem, "A new Househölder's method free from second derivatives for solving nonlinear equations and polynomiography," Journal of Nonlinear Sciences and Applications, vol. 9, no. 3, pp. 998–1007, 2016.
[17] S. M. Kang, W. Nazeer, M. Tanveer, and A. A. Shahid, "New fixed point results for fractal generation in Jungck Noor orbit with s-convexity," Journal of Function Spaces, vol. 2015, Article ID 963016, 7 pages, 2015.
[18] A. Naseem, M. A. Rehman, and T. Abdeljawad, "Some new iterative algorithms for solving one-dimensional non-linear equations and their graphical representation," IEEE Access, vol. 9, pp. 8615–8624, 2021.
[19] P. B. Chand, F. I. Chicharro, N. Garrido, and P. Jain, "Design and complex dynamics of Potra-Pták-type optimal methods for solving nonlinear equations and its applications," Mathematics, vol. 7, no. 10, p. 942, 2019.
[20] R. Behl, M. Salimi, M. Ferrara, S. Sharifi, and S. K. Alharbi, "Some real-life applications of a newly constructed derivative free iterative scheme," Symmetry, vol. 11, no. 2, p. 239, 2019.
[21] R. Imin and A. Iminjan, "A new SPH iterative method for solving nonlinear equations," International Journal of Computational Methods, vol. 15, no. 3, pp. 1–9, 2018.
[22] M. A. Noor, W. A. Khan, and A. Hussain, "A new modified Halley method without second derivatives for nonlinear equation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1268–1273, 2007.
[23] M. A. Hafiz and S. M. H. Al-Goria, "New ninth and seventh order methods for solving nonlinear equations," European Scientific Journal, vol. 8, no. 27, pp. 83–95, 2012.
[24] W. Nazeer, A. Naseem, S. M. Kang, and Y. C. Kwun, "Generalized Newton Raphson's method free from second derivative," Journal of Nonlinear Sciences and Applications, vol. 9, no. 5, pp. 2823–2831, 2016.
[25] A. Kumar, P. Maroju, R. Behl, D. K. Gupta, and S. S. Motsa, "A family of higher order iterations free from second derivative for nonlinear equations in R," Journal of Computational and Applied Mathematics, vol. 330, pp. 215–224, 2018.
[26] M. Salimi, N. M. A. Nik Long, S. Sharifi, and B. A. Pansera, "A multi-point iterative method for solving nonlinear equations with optimal order of convergence," Japan Journal of Industrial and Applied Mathematics, vol. 35, no. 2, pp. 497–509, 2018.
[27] A. Naseem, M. A. Rehman, and T. Abdeljawad, "Numerical algorithms for finding zeros of nonlinear equations and their dynamical aspects," Journal of Mathematics, vol. 2020, Article ID 2816843, 11 pages, 2020.
[28] P. Sebah and X. Gourdon, "Newton's method and high order iterations," technical report, 2001, http://numbers.computation.free.fr.
[29] M. Shacham, "An improved memory method for the solution of a nonlinear equation," Chemical Engineering Science, vol. 44, no. 7, pp. 1495–1501, 1989.
[30] M. Shacham and E. Kehat, "Converging interval methods for the iterative solution of a non-linear equation," Chemical Engineering Science, vol. 28, no. 12, pp. 2187–2193, 1973.
[31] G. V. Balaji and J. D. Seader, "Application of interval Newton's method to chemical engineering problems," Reliable Computing, vol. 1, no. 3, pp. 215–223, 1995.
[32] J. D. van der Waals, Over de Continuiteit van den Gas- en Vloeistoftoestand (On the Continuity of the Gaseous and Liquid States), Ph.D. thesis, Leiden, The Netherlands, 1873.
[33] R. Manning, "On the flow of water in open channels and pipes," Transactions of the Institution of Civil Engineers of Ireland, vol. 20, pp. 161–207, 1891.
[34] S. Weerakoon and T. G. I. Fernando, "A variant of Newton's method with accelerated third-order convergence," Applied Mathematics Letters, vol. 13, no. 8, pp. 87–93, 2000.