Special Issue

## Approximation Methods: Theory and Applications


Research Article | Open Access

Volume 2021 | Article ID 5566379 | https://doi.org/10.1155/2021/5566379

M. A. Rehman, Amir Naseem, Thabet Abdeljawad, "Some Novel Sixth-Order Iteration Schemes for Computing Zeros of Nonlinear Scalar Equations and Their Applications in Engineering", Journal of Function Spaces, vol. 2021, Article ID 5566379, 11 pages, 2021. https://doi.org/10.1155/2021/5566379

# Some Novel Sixth-Order Iteration Schemes for Computing Zeros of Nonlinear Scalar Equations and Their Applications in Engineering

Revised: 03 Mar 2021
Accepted: 11 Mar 2021
Published: 13 Apr 2021

#### Abstract

In this paper, we propose two novel iteration schemes for computing zeros of nonlinear equations in one dimension. We develop these iteration schemes with the help of Taylor’s series expansion, the generalized Newton-Raphson method, and an interpolation technique. The convergence analysis of the proposed iteration schemes is discussed, and it is established that the newly developed iteration schemes have sixth order of convergence. Several numerical examples are solved to illustrate the applicability and validity of the suggested schemes. These include some real-life applications from chemical and civil engineering, such as the adiabatic flame temperature equation, the conversion of a nitrogen-hydrogen feed to ammonia, the van der Waals equation, and the open channel flow problem; the numerical results demonstrate the better efficiency of these methods compared with other well-known existing iterative methods of the same kind.

#### 1. Introduction

The solution of nonlinear scalar equations plays a vital role in many fields of applied science such as engineering, physics, and mathematics. Analytical methods often fail to solve such equations, and therefore we need iterative methods to approximate the solution. In an iterative process, the first step is to choose an initial guess, which is then improved step by step by means of iterations until the approximate solution is achieved with the required accuracy. Some basic iterative methods are given in the literature and the references therein. In the last few years, many researchers have worked on iterative methods and their applications and have proposed new iterative schemes that possess either a high convergence rate or a small number of functional evaluations per iteration; see the references therein. The convergence rate of an iterative method can be increased by introducing predictor and corrector steps, which results in multistep iterative methods, whereas the number of functional evaluations can be reduced by removing second and higher derivatives from the considered iterative method using different mathematical techniques. When we try to raise the convergence rate of an iterative scheme, we have to use more functional evaluations per iteration; similarly, a smaller number of functional evaluations per iteration causes a lower order of convergence, which is the main drawback. It is quite difficult to balance both quantities, i.e., the convergence rate and the functional evaluations per iteration, as there appears to be an inverse relation between them. In the twenty-first century, many mathematicians have tried to modify the existing methods to obtain fewer functional evaluations per iteration and a higher convergence order by applying different techniques such as the predictor-corrector technique, finite difference schemes, interpolation techniques, Taylor’s series, and quadrature formulas. In 2007, Noor et al.
introduced a two-step Halley’s method with sextic convergence and then approximated its second derivative by means of a finite difference scheme, suggesting a novel second-derivative-free iterative algorithm with fifth-order convergence. In 2012, Hafiz and Al-Goria suggested two new algorithms of orders seven and nine, respectively, based on weighted combinations of the midpoint and Simpson quadrature formulas together with the predictor-corrector technique. Nazeer et al. in 2016 proposed a new second-derivative-free generalized Newton-Raphson method with fifth-order convergence by means of a finite difference scheme. In 2017, Kumar et al. suggested a sixth-order parameter-based family of algorithms for solving nonlinear equations. In the same year, Salimi et al. proposed an optimal class of eighth-order methods using weight functions and the Newton interpolation technique. Very recently, Naseem et al. presented some new sixth-order algorithms for finding zeros of nonlinear equations, investigated their dynamics by means of polynomiography, and presented some novel mathematical art through the execution of the presented algorithms.

In this paper, we suggest two novel iteration schemes in the form of predictor-corrector-type numerical methods, namely, Algorithms 1 and 2, taking Newton’s iteration method as the predictor step. The derivation of the first iteration scheme is based purely on Taylor’s series expansion and the generalized Newton-Raphson method, whereas in the second one we use an interpolation technique to remove its second derivative, which results in a higher efficiency index. We examine the convergence of the suggested schemes and prove that these iteration schemes possess sextic convergence and are superior to other well-known methods of a similar nature. The efficiency indices of the presented schemes are compared with those of other similar existing two-step iteration schemes. The proposed iteration schemes are applied to some real-life problems, along with arbitrary transcendental and algebraic equations, in order to assess their applicability, validity, and accuracy.

#### 2. Main Results

Consider the nonlinear algebraic equation

f(x) = 0. (1)

We assume that α is a simple zero of (1) and x_0 is an initial guess sufficiently close to α. Using the Taylor series of f around x_0 in (1), we have

f(x_0) + (x - x_0) f'(x_0) + ((x - x_0)^2 / 2!) f''(x_0) + ⋯ = 0. (2)

If f'(x_0) ≠ 0, we can evaluate the above expression as follows:

Choosing this value as the next approximation to the root and writing the process in iterative form, we have

x_{n+1} = x_n - f(x_n)/f'(x_n),  n = 0, 1, 2, ….

This is the quadratically convergent Newton’s method for root-finding of nonlinear functions, and it needs two computations per iteration for its execution. From (2), one can also evaluate

In iterative form: which is the cubically convergent generalized Newton-Raphson method and requires three functional evaluations per iteration for its execution. After simplification of (2), one can obtain:

Now, from the generalized Newton-Raphson method in (5),

Using (8) in (7), we obtain

After rewriting the above obtained equality in the general form with the insertion of Newton’s iteration method as a predictor, we arrive at a new algorithm of the form:

Algorithm 1. For a given x_0, compute the approximate solution x_{n+1} by the following iterative scheme: which is a modification of the generalized Newton-Raphson method for determining the approximate roots of nonlinear algebraic equations. To find the approximate root of a given nonlinear equation by means of the above-described algorithm, one has to evaluate the first as well as the second derivative of the given function. However, in several cases we have to deal with functions whose second derivative does not exist, and the proposed algorithm fails to find the approximate root in that situation. To resolve this issue, we apply an interpolation technique for the approximation of the second derivative as follows:
Consider the cubic polynomial p(x) = a + bx + cx^2 + dx^3, where the values of the unknowns a, b, c, and d can be found by applying the following interpolation conditions:

From the above conditions, we obtain a system of four linear equations in four unknowns, whose solution gives the following equality:

After substituting the value of the second derivative from the above equality into Algorithm 1, we obtain a novel second-derivative-free algorithm as follows:

Algorithm 2. For a given x_0, compute the approximate solution x_{n+1} by the following iterative scheme: which is a novel second-derivative-free iterative algorithm for computing the approximate solutions of nonlinear algebraic equations. One of the main features of the suggested algorithm is that it can be applied to all those nonlinear functions whose second derivative does not exist. The removal of the second derivative requires fewer functional evaluations per iteration, which yields a better efficiency index compared with methods that require the second derivative. The results of the given test examples certify its performance in comparison with other similar existing methods in the literature.
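The interpolation step described above can be reproduced numerically. The following Python sketch (an illustration, not the paper’s code) fits a cubic p(t) = a + bt + ct^2 + dt^3 to four conditions of the form p(x_n) = f(x_n), p(y_n) = f(y_n), p'(x_n) = f'(x_n), p'(y_n) = f'(y_n), and uses p''(y_n) as the second-derivative estimate; the cubic form and the node names are assumptions consistent with the four interpolation conditions stated in the text:

```python
import math

def second_derivative_via_cubic(f, fp, x, y):
    """Fit p(t) = a + b*t + c*t**2 + d*t**3 to the four conditions
    p(x) = f(x), p(y) = f(y), p'(x) = fp(x), p'(y) = fp(y),
    then return p''(y) = 2*c + 6*d*y as an estimate of f''(y)."""
    A = [
        [1.0, x, x * x, x ** 3],       # p(x)  = f(x)
        [1.0, y, y * y, y ** 3],       # p(y)  = f(y)
        [0.0, 1.0, 2 * x, 3 * x * x],  # p'(x) = fp(x)
        [0.0, 1.0, 2 * y, 3 * y * y],  # p'(y) = fp(y)
    ]
    rhs = [f(x), f(y), fp(x), fp(y)]
    n = 4
    # Gaussian elimination with partial pivoting on the 4x4 system.
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for col in range(i, n):
                A[r][col] -= m * A[i][col]
            rhs[r] -= m * rhs[i]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = rhs[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))
        coef[i] = s / A[i][i]
    a, b, c, d = coef
    return 2 * c + 6 * d * y

# Exact for cubics: f(t) = t**3 has f''(y) = 6*y.
print(second_derivative_via_cubic(lambda t: t ** 3, lambda t: 3 * t * t, 0.5, 1.2))
```

Because the interpolant is cubic, the estimate is exact whenever f itself is a polynomial of degree at most three, and it is accurate to O(h^2) for smooth f when the two nodes are close.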

#### 3. Convergence Analysis

This section discusses the convergence criteria of the suggested iteration schemes.

Theorem 3. Assume that α is a simple zero of the equation f(x) = 0, where f is sufficiently smooth in a neighborhood of α; then the convergence orders of Algorithms 1 and 2 are at least six.

Proof. To prove the convergence of Algorithms 1 and 2, we assume that α is the simple root of the equation f(x) = 0 and e_n = x_n - α is the error at the nth iteration; then, by using the Taylor series about α, we have where

With the help of equations (16) and (17), we get

With the help of equations (16)–(21), we have

Using equations (19)–(23) in Algorithms 1 and 2, we get the following equalities which imply that

Equations (25) and (26) show that the orders of convergence of Algorithms 1 and 2 are at least six.
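The order of convergence established above can also be checked numerically: a standard estimate of the computational order of convergence (COC) from three consecutive errors is ρ ≈ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|. The Python sketch below (not from the paper) applies this estimate to the classical Newton’s method, for which ρ ≈ 2 is expected; the same check can be run on any iteration scheme:

```python
import math

# COC estimate: rho ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|.
f = lambda x: x ** 3 - 2
fp = lambda x: 3 * x ** 2
alpha = 2 ** (1 / 3)          # exact simple root of f

xs = [1.5]                    # initial guess
for _ in range(4):            # four Newton steps
    x = xs[-1]
    xs.append(x - f(x) / fp(x))

errs = [abs(x - alpha) for x in xs]
rho = math.log(errs[-1] / errs[-2]) / math.log(errs[-2] / errs[-3])
print(rho)   # close to 2 for Newton's method
```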

#### 4. Comparison of Efficiency Index

In numerical analysis, the efficiency index of an algorithm provides information about the speed and performance of the algorithm under consideration. It is a numerical quantity that relates to the computational resources needed to execute the considered algorithm. The efficiency of an algorithm can be thought of as analogous to engineering productivity for a process that includes iterations. The term efficiency index is used to analyze the numerical behavior of different algorithms. For iterative algorithms, this quantity depends upon two factors: the first is the convergence order of the algorithm, whereas the second is the number of computations per iteration, i.e., the number of function and derivative evaluations required to execute the algorithm for root-finding of nonlinear functions. If the convergence order is denoted by p and the number of computations per iteration by m, then the efficiency index can be written mathematically as

EI = p^(1/m).

Since Noor’s method one has quadratic convergence and requires three computations per iteration for execution, its efficiency index is 2^(1/3) ≈ 1.2599. In the same way, the cubically convergent Noor’s method two requires three computations per iteration and has 3^(1/3) ≈ 1.4422 as its efficiency index. Similarly, the efficiency index of Traub’s method is 4^(1/4) ≈ 1.4142 because it possesses fourth-order convergence with four computations per iteration. Since the modified Halley’s method has fifth-order convergence with four computations per iteration, its efficiency index is 5^(1/4) ≈ 1.4953. Now we calculate the efficiency indices of the suggested algorithms. Both algorithms possess sixth-order convergence. The number of computations per iteration for the first algorithm is five, whereas the second proposed algorithm requires only four evaluations per iteration, so their efficiency indices are 6^(1/5) ≈ 1.4310 and 6^(1/4) ≈ 1.5651, respectively. The efficiency indices of the different iterative methods discussed above are summarized in Table 1.

| Method | Convergence order | No. of required computations | Efficiency index |
| --- | --- | --- | --- |
| Noor’s method one | 2 | 3 | 1.2599 |
| Noor’s method two | 3 | 3 | 1.4422 |
| Traub’s method | 4 | 4 | 1.4142 |
| Modified Halley’s method | 5 | 4 | 1.4953 |
| Algorithm 1 | 6 | 5 | 1.4310 |
| Algorithm 2 | 6 | 4 | 1.5651 |

Table 1 clearly shows that the presented method, namely, Algorithm 2, has the best efficiency index among the compared methods.
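The entries of Table 1 can be reproduced directly from the convergence order p and the number of computations m via p^(1/m); a quick Python check:

```python
# Efficiency index EI = p**(1/m): p = convergence order,
# m = function/derivative evaluations per iteration.
methods = {
    "Noor's method one": (2, 3),
    "Noor's method two": (3, 3),
    "Traub's method": (4, 4),
    "Modified Halley's method": (5, 4),
    "Algorithm 1": (6, 5),
    "Algorithm 2": (6, 4),
}
ei = {name: round(p ** (1 / m), 4) for name, (p, m) in methods.items()}
for name, value in ei.items():
    print(f"{name}: {value}")
```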

#### 5. Numerical Comparisons and Applications

In this section, we include four real-life engineering problems and seven arbitrary problems in the form of transcendental and algebraic equations to illustrate the applicability and efficiency of our newly developed iterative methods. We compare these methods with the following similar existing two-step iteration schemes:

##### 5.1. Noor’s Method One (NM1)

For a provided initial guess x_0, determine the approximate root with the iteration scheme given below: which is the quadratically convergent Noor’s method one for root-finding of nonlinear equations.

##### 5.2. Noor’s Method Two (NM2)

For a provided initial guess x_0, determine the approximate root with the iteration scheme given below: which is the cubically convergent Noor’s method two for root-finding of nonlinear equations.

##### 5.3. Traub’s Method (TM)

For a provided initial guess x_0, determine the approximate root with the iteration scheme given below: which is the two-step fourth-order Traub’s method for root-finding of nonlinear equations.

##### 5.4. Modified Halley’s Method (MHM)

For a provided initial guess x_0, determine the approximate root with the iteration scheme given below: which is the two-step modified Halley’s method for root-finding of nonlinear equations and has fifth-order convergence. In order to compare the above-defined methods numerically with the presented algorithms, we consider the following test Examples 1–5.

The general algorithm for finding the approximate solution of the given nonlinear functions is given as:

    Input:  f (nonlinear function), k (maximum number of iterations),
            M (iteration method), ε (accuracy)
    Output: approximated root of the given nonlinear function
    for n = 0, 1, 2, ..., k do
        x_{n+1} ← M(x_n)
        if |x_{n+1} - x_n| < ε then
            break
    x_{n+1} is the required solution

In Algorithm 3, we take the accuracy ε in the stopping criterion |x_{n+1} - x_n| < ε. All the calculations for the numerical examples were done with the aid of the computer program Maple 13, and the numerical results can be seen in Tables 2–6.
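The driver loop of Algorithm 3 can be sketched in Python as follows. This is an illustration, not the paper’s code: the classical Newton step stands in for the iteration method M, and the test function f(x) = x - exp(-x) is chosen here only because its root matches the value 0.56714329… reported in Table 6:

```python
import math

def newton_step(f, fp, x):
    # Classical Newton predictor: x - f(x)/f'(x).
    return x - f(x) / fp(x)

def solve(f, fp, x0, method, k=100, eps=1e-15):
    """Driver loop of Algorithm 3: iterate until |x_{n+1} - x_n| < eps
    or the maximum number of iterations k is exhausted."""
    x = x0
    for _ in range(k):
        x_new = method(f, fp, x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

root = solve(lambda x: x - math.exp(-x),
             lambda x: 1 + math.exp(-x),
             0.5, newton_step)
print(root)
```

Any of the two-step schemes compared in this section can be plugged in as `method` in place of `newton_step`.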

Example 1. Adiabatic flame temperature equation. The adiabatic flame temperature equation is represented by the following relation: where and For further details, see [29, 30] and the references therein. The above function is actually a polynomial of degree three, and by the fundamental theorem of algebra, it must have exactly three roots. Among these roots, one is a simple root, which we approximated through the proposed methods by choosing an initial guess; the numerical results are shown in Table 2.

| Method | Iterations | Approximate root | COC |
| --- | --- | --- | --- |
| NR1 | 9 | 4305.30991366612556300000 | 2 |
| NR2 | 4 | 4305.30991366612556300000 | 3 |
| TM | 3 | 4305.30991366612556300000 | 4 |
| MHM | 3 | 4305.30991366612556300000 | 5 |
| Algorithm 1 | 2 | 4305.30991366612556300000 | 6 |
| Algorithm 2 | 2 | 4305.30991366612556300000 | 6 |

Example 2. Fraction conversion of nitrogen-hydrogen to ammonia. We take this example from , which describes the fraction conversion of a nitrogen-hydrogen feed to ammonia, usually known as fractional conversion. In this problem, the values of temperature and pressure are taken as 500°C and 250 atm, respectively. The problem has the following nonlinear form: which can easily be reduced to the following polynomial: Since the degree of the above polynomial is four, it must have exactly four roots. By definition, the fraction conversion lies in the interval (0, 1), so only one real root exists in this interval, namely 0.2777595428. The other three roots have no physical meaning. We started the iteration process with an initial guess. The numerical results obtained by the different methods are shown in Table 3.

| Method | Iterations | Approximate root | COC |
| --- | --- | --- | --- |
| NR1 | 7 | 0.27775954284172065910 | 2 |
| NR2 | 3 | 0.27775954284172065910 | 3 |
| TM | 3 | 0.27775954284172065910 | 4 |
| MHM | 2 | 0.27775954284172065910 | 5 |
| Algorithm 1 | 2 | 0.27775954284172065910 | 6 |
| Algorithm 2 | 2 | 0.27775954284172065910 | 6 |

Example 3. Finding volume from the van der Waals equation. In chemical engineering, the van der Waals equation has been used for interpreting real and ideal gas behavior , having the following form: By taking specific values of the parameters of the above equation, we can easily convert it to the following nonlinear function: where the unknown represents the volume, which can easily be found by solving the function. Since the degree of the polynomial is three, it must possess three roots. Among these roots, there is only one positive real root, which is the feasible one because the volume of a gas can never be negative. We start the iteration process with an initial guess, and the results can be seen in Table 4.

| Method | Iterations | Approximate root | COC |
| --- | --- | --- | --- |
| NR1 | 4 | 1.92984624284786221696 | 2 |
| NR2 | 3 | 1.92984624284786221696 | 3 |
| TM | 3 | 1.92984624284786221696 | 4 |
| MHM | 2 | 1.92984624284786221696 | 5 |
| Algorithm 1 | 2 | 1.92984624284786221696 | 6 |
| Algorithm 2 | 2 | 1.92984624284786221696 | 6 |

Example 4. Open channel flow problem. The water flow in an open channel under the uniform flow condition is given by Manning’s equation , having the following standard form: where S, A, and R represent the slope, area, and hydraulic radius of the corresponding channel, respectively, and n denotes Manning’s roughness coefficient. For a rectangular channel of width b with depth of water y, we may write: Using these values in (37), we obtain: To find the depth of water in the channel for a given quantity of water, the above equation may be written in the form of a nonlinear function as: We take fixed values of the parameters: the discharge in m³/s, the width in m, the roughness coefficient, and the slope. We choose an initial guess to start the iteration process, and the corresponding results obtained by the different iteration schemes are given in Table 5.
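The open channel problem can be sketched numerically with a simple bracketing method. Since the paper’s parameter values are not legible in this copy, the Python sketch below uses hypothetical channel data (the values of Q, b, n, and S are illustrative only, not the paper’s), solving Q = (√S/n)·A·R^(2/3) with A = b·y and R = b·y/(b + 2y) for the depth y by bisection:

```python
import math

# Hypothetical channel data (NOT the paper's values): discharge Q [m^3/s],
# width b [m], Manning roughness coefficient n, slope S.
Q, b, n, S = 20.0, 5.0, 0.03, 0.001

def residual(y):
    """f(y) = Q - (sqrt(S)/n) * A * R**(2/3), A = b*y, R = b*y/(b + 2*y)."""
    A = b * y
    R = A / (b + 2 * y)
    return Q - (math.sqrt(S) / n) * A * R ** (2 / 3)

# Bisection: the residual changes sign on [lo, hi] for this data.
lo, hi = 1e-6, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
depth = 0.5 * (lo + hi)
print(depth)
```

Bisection is used here only because it needs no derivatives; any of the iteration schemes compared in this section could be applied to the same residual function.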

| Method | Iterations | Approximate root | COC |
| --- | --- | --- | --- |
| NR1 | 6 | 1.46509122029582464238 | 2 |
| NR2 | 3 | 1.46509122029582464238 | 3 |
| TM | 3 | 1.46509122029582464238 | 4 |
| MHM | 3 | 1.46509122029582464238 | 5 |
| Algorithm 1 | 2 | 1.46509122029582464238 | 6 |
| Algorithm 2 | 2 | 1.46509122029582464238 | 6 |

Example 5. Transcendental and algebraic problems. To numerically analyze the suggested algorithms, we consider the following seven transcendental and algebraic equations, whose numerical results can be seen in Table 6.

Test function 1:

| Method | Iterations | Approximate root | COC |
| --- | --- | --- | --- |
| NR1 | 9 | −0.52248077281054548914 | 2 |
| NR2 | 7 | −0.52248077281054548914 | 3 |
| TM | 25 | −0.52248077281054548914 | 4 |
| MHM | 3 | −0.52248077281054548914 | 5 |
| Algorithm 1 | 2 | −0.52248077281054548914 | 6 |
| Algorithm 2 | 2 | −0.52248077281054548914 | 6 |

Test function 2:

| Method | Iterations | Approximate root | COC |
| --- | --- | --- | --- |
| NR1 | 5 | 0.40999201798913713162 | 2 |
| NR2 | 45 | 0.40999201798913713162 | 3 |
| TM | 4 | 0.40999201798913713162 | 4 |
| MHM | 3 | 0.40999201798913713162 | 5 |
| Algorithm 1 | 2 | 0.40999201798913713162 | 6 |
| Algorithm 2 | 2 | 0.40999201798913713162 | 6 |

Test function 3:

| Method | Iterations | Approximate root | COC |
| --- | --- | --- | --- |
| NR1 | 7 | 0.56714329040978387300 | 2 |
| NR2 | 4 | 0.56714329040978387300 | 3 |
| TM | 3 | 0.56714329040978387300 | 4 |
| MHM | 3 | 0.56714329040978387300 | 5 |
| Algorithm 1 | 2 | 0.56714329040978387300 | 6 |
| Algorithm 2 | 2 | 0.56714329040978387300 | 6 |

Test function 4:

| Method | Iterations | Approximate root | COC |
| --- | --- | --- | --- |
| NR1 | 141 | 2.15443469003188372180 | 2 |
| NR2 | 4 | 2.15443469003188372180 | 3 |
| TM | 3 | 2.15443469003188372180 | 4 |
| MHM | 3 | 2.15443469003188372180 | |