Abstract

In this paper, we propose two novel iteration schemes for computing zeros of nonlinear equations in one dimension. We develop these iteration schemes with the help of Taylor's series expansion, the generalized Newton-Raphson's method, and an interpolation technique. The convergence analysis of the proposed iteration schemes is discussed, and it is established that the newly developed iteration schemes have sixth order of convergence. Several numerical examples have been solved to illustrate the applicability and validity of the suggested schemes. These problems include some real-life applications from chemical and civil engineering, such as the adiabatic flame temperature equation, the conversion of nitrogen-hydrogen feed to ammonia, the van der Waals equation, and the open channel flow problem. The numerical results demonstrate the better efficiency of these methods compared with other well-known existing iterative methods of the same kind.

1. Introduction

The solution of nonlinear scalar equations plays a vital role in many fields of applied science, such as engineering, physics, and mathematics. Analytical methods do not help us to solve such equations, and therefore we need iterative methods to approximate the solution. In an iterative process, the first step is to choose an initial guess, which is improved step by step by means of iterations until the approximate solution is achieved with the required accuracy. Some basic iterative methods are given in the literature [1–8] and the references therein. In the last few years, many researchers have worked on iterative methods and their applications and have proposed new iterative schemes which either possess a high convergence rate or require fewer functional evaluations per iteration; see [9–21] and the references therein. The convergence rate of an iterative method can be increased by introducing predictor and corrector steps, which results in multistep iterative methods, whereas the number of functional evaluations can be reduced by removing second and higher derivatives from the considered iterative method using different mathematical techniques. When we try to raise the convergence rate of an iterative scheme, we have to use more functional evaluations per iteration; similarly, fewer functional evaluations per iteration cause a low order of convergence, which is the main drawback. It is quite difficult to manage both terms, i.e., the convergence rate and the functional evaluations per iteration, as there seems to be an inverse relation between them. In the twenty-first century, many mathematicians have tried to modify the existing methods toward fewer functional evaluations per iteration and higher convergence order by applying different techniques such as the predictor-corrector technique, finite difference schemes, interpolation techniques, Taylor's series, and quadrature formulas. In 2007, Noor et al. 
[22] introduced a two-step Halley's method with sextic convergence, then approximated its second derivative by means of a finite difference scheme and suggested a novel second-derivative-free iterative algorithm with fifth convergence order. In 2012, Hafiz and Al-Goria [23] suggested two new algorithms, of orders seven and nine, respectively, based on a weighted combination of the midpoint and Simpson quadrature formulas together with the predictor-corrector technique. Nazeer et al. [24] in 2016 proposed a new second-derivative-free generalized Newton-Raphson's method with convergence of order five by means of a finite difference scheme. In 2017, Kumar et al. [25] suggested a sixth-order parameter-based family of algorithms for solving nonlinear equations. In the same year, Salimi et al. [26] proposed an optimal class of eighth-order methods using weight functions and the Newton interpolation technique. Very recently, Naseem et al. [27] presented some new sixth-order algorithms for finding zeros of nonlinear equations, investigated their dynamics by means of polynomiography, and presented some novel mathematical art through the execution of the presented algorithms.

In this paper, we suggest two novel iteration schemes in the form of predictor-corrector type numerical methods, namely, Algorithms 1 and 2, taking Newton's iteration method as the predictor step. The derivation of the first iteration scheme is based purely on Taylor's series expansion and the generalized Newton-Raphson's method, whereas in the second one we use an interpolation technique to remove the second derivative, which results in a higher efficiency index. We examine the convergence criteria of the suggested schemes and prove that these iteration schemes have sextic convergence and are superior to other well-known methods of a similar nature. The efficiency indices of the presented schemes have been compared with those of other similar existing two-step iteration schemes. The proposed iteration schemes have been applied to solve some real-life problems, along with arbitrary transcendental and algebraic equations, in order to assess their applicability, validity, and accuracy.

2. Main Results

Consider the nonlinear algebraic equation

We assume that (1) has a simple zero and that the initial guess is sufficiently close to it. Expanding (1) in a Taylor's series about the initial guess, we have

Provided the first derivative does not vanish, we can evaluate the above expression as follows:

If we choose this as the approximate root of the equation, then we have

This is the quadratically convergent Newton's method [24] for root-finding of nonlinear functions, and it needs two computations per iteration for its execution. From (2), one can evaluate

In iterative form: which is the cubically convergent generalized Newton-Raphson's method [28] and requires three functional evaluations per iteration for its execution. After simplification of (2), one can obtain:

Now, from the generalized Newton-Raphson's method in (5),

Using (8) in (7), we obtain

Rewriting the above equality in general form, with Newton's iteration method inserted as a predictor, we arrive at a new algorithm of the following form:

Algorithm 1. For a given initial guess, compute the approximate solution by the following iterative schemes: which is a modification of the generalized Newton-Raphson's method for determining approximate roots of nonlinear algebraic equations. To find the approximate root of a given nonlinear equation by means of the algorithm described above, one has to find both the first and the second derivative of the given function. In several cases, however, we have to deal with functions whose second derivative does not exist, and our proposed algorithm fails to find an approximate root in that situation. To resolve this issue, we apply an interpolation technique to approximate the second derivative as follows:
Consider an interpolating function whose four unknown coefficients can be found by applying the following interpolation conditions:

From the above conditions, we obtain a system of four linear equations in four unknowns, whose solution gives the following equality:
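The paper's exact interpolation conditions and the resulting equality are not reproduced in this text. A standard choice in such derivations is the cubic interpolant that matches both f and f' at the two available points, which gives p''(y) = (2 f'(x) + 4 f'(y))/(y − x) + 6 (f(x) − f(y))/(y − x)^2. A minimal Python sketch under that assumption (the function name is ours, and the paper's own conditions may differ):

```python
def second_derivative_estimate(fx, fy, dfx, dfy, x, y):
    """Approximate f''(y) from the cubic interpolant that matches f and f'
    at the two points x and y. This is an assumed, standard choice of
    interpolation conditions; the paper's own conditions may differ."""
    h = y - x
    return (2.0 * dfx + 4.0 * dfy) / h + 6.0 * (fx - fy) / h ** 2

# The estimate is exact for cubics: f(u) = u^3 has f''(2) = 12
approx = second_derivative_estimate(1.0, 8.0, 3.0, 12.0, 1.0, 2.0)
```

Because a cubic interpolant reproduces any cubic exactly, the estimate above recovers the true second derivative for cubic test functions, which is a quick sanity check on the formula.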

After substituting the value of the second derivative from the above equality into Algorithm 1, we obtain a novel second-derivative-free algorithm as follows:

Algorithm 2. For a given initial guess, compute the approximate solution by the following iterative schemes: which is a novel second-derivative-free iterative algorithm for computing approximate solutions of nonlinear algebraic equations. One of the main features of the suggested algorithm is that it can be applied to all those nonlinear functions whose second derivative does not exist. The removal of the second derivative means fewer functional evaluations per iteration, which yields a better efficiency index compared with methods that require the second derivative. The results of the given test examples confirm its superior performance in comparison with other similar existing methods in the literature.
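Both algorithms take the classical Newton iteration x_{n+1} = x_n − f(x_n)/f'(x_n) as their predictor step. A minimal, self-contained Python sketch of that predictor (the function names are ours, not the paper's):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Quadratically convergent Newton's method:
    x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)  # two evaluations per iteration: f and f'
        x = x - step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2, i.e., sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

In the proposed two-step schemes, the corrector step would then be evaluated at the predictor's output in each pass of the loop.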

3. Convergence Analysis

This section includes the discussion regarding the convergence criteria of the suggested iteration schemes.

Theorem 3. Assume that the given equation has a simple zero and that the function is sufficiently smooth in a neighborhood of this zero; then, the convergence orders of Algorithms 1 and 2 are at least six.

Proof. To prove the convergence of Algorithms 1 and 2, we assume that the equation has a simple root and let the error at the nth iteration be the difference between the nth approximation and this root; then, by using the Taylor series about the root, we have where

With the help of equations (16) and (17), we get

With the help of equations (16)–(21), we have

Using equations (19)–(23) in Algorithms 1 and 2, we get the following equalities which imply that

Equations (25) and (26) show that the orders of convergence of Algorithms 1 and 2 are at least six.

4. Comparison of Efficiency Index

In numerical analysis, the efficiency index of an algorithm provides information about the speed and performance of the algorithm under consideration. It is a numerical quantity that reflects the computational resources needed to execute the algorithm. The efficiency of an algorithm can be thought of as analogous to engineering productivity for a process that includes iterations. The term efficiency index is used to analyze the numerical behavior of different algorithms. In iterative algorithms, this quantity depends entirely upon two factors. The first is the convergence order of the algorithm, whereas the second is the number of computations per iteration, i.e., the number of function and derivative evaluations required to execute the algorithm for root-finding of nonlinear functions. In terms of the convergence order and the number of computations per iteration, the efficiency index can be written mathematically as:

Since Noor's method one [11] has quadratic convergence and requires three computations per iteration for execution, its efficiency index is 2^(1/3) ≈ 1.260. In the same way, the cubically convergent Noor's method two [11] requires three computations per iteration and has 3^(1/3) ≈ 1.442 as its efficiency index. Similarly, the efficiency index of Traub's method [6] is 4^(1/4) ≈ 1.414, because it possesses convergence of order four with four computations for execution. Since the modified Halley's method [22] has fifth convergence order with four computations per iteration, its efficiency index is 5^(1/4) ≈ 1.495. Now, we calculate the efficiency indices of the suggested algorithms. Both algorithms have convergence of order six. The number of computations per iteration for the execution of the first algorithm is five, whereas the second proposed algorithm requires only four evaluations per iteration. Their efficiency indices are therefore 6^(1/5) ≈ 1.431 and 6^(1/4) ≈ 1.565, respectively. The efficiency indices of the different iterative methods discussed above are summarized in Table 1.
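These values follow directly from the efficiency index p^(1/c), where p is the convergence order and c the number of computations per iteration. A quick Python check (the method labels are ours):

```python
# Efficiency index: (convergence order p) ** (1 / computations per iteration c)
methods = {
    "Noor's method one": (2, 3),
    "Noor's method two": (3, 3),
    "Traub's method": (4, 4),
    "Modified Halley": (5, 4),
    "Algorithm 1": (6, 5),
    "Algorithm 2": (6, 4),
}
indices = {name: p ** (1.0 / c) for name, (p, c) in methods.items()}
best = max(indices, key=indices.get)  # -> "Algorithm 2"
```

The ranking confirms that removing the second derivative (four evaluations instead of five at the same sixth order) is what lifts Algorithm 2 to the top.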

Table 1 clearly shows that the presented method, namely Algorithm 2, has the best efficiency index among the compared methods.

5. Numerical Comparisons and Applications

In this section, we include four real-life engineering problems and seven arbitrary problems in the form of transcendental and algebraic equations to illustrate the applicability and efficiency of our newly developed iterative methods. We compare these methods with the following similar existing two-step iteration schemes:

5.1. Noor’s Method One (NM1)

For a given initial guess, determine the approximate root with the iteration schemes given below: which is the quadratically convergent Noor's method one [11] for root-finding of nonlinear equations.

5.2. Noor’s Method Two (NM2)

For a given initial guess, determine the approximate root with the iteration schemes given below: which is the cubically convergent Noor's method two [11] for root-finding of nonlinear equations.

5.3. Traub’s Method (TM)

For a given initial guess, determine the approximate root with the iteration schemes given below: which is the two-step fourth-order Traub's method [6] for root-finding of nonlinear equations.

5.4. Modified Halley’s Method (MHM)

For a given initial guess, determine the approximate root with the iteration schemes given below: which is the two-step modified Halley's method [22] for root-finding of nonlinear equations, with convergence of fifth order. In order to make a numerical comparison of the above-defined methods with the presented algorithms, we consider the following test Examples 1–5.

The general algorithm for finding the approximate solution of a given nonlinear function is given as:

Input: the nonlinear function, k — maximum number of iterations, the chosen iteration method, and the required accuracy.
Output: Approximated root of the given nonlinear function.
for n = 0, 1, 2, …, k do
   compute the next approximation from the current one by the chosen iteration method
   if the absolute difference between two consecutive approximations is less than the required accuracy then
      break
The last computed approximation is the required solution.

In Algorithm 3, we take the stated accuracy in the stopping criterion. We did all the calculations of the numerical examples with the aid of the computer program Maple 13, and their numerical results can be seen in Tables 2–6.
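The same driver loop is easy to reproduce in any environment. A minimal Python sketch of Algorithm 3's stopping rule (the helper names and the sample tolerance are ours):

```python
import math

def iterate(update, x0, eps=1e-15, max_iter=50):
    """Apply a one-step iteration method `update` until two consecutive
    approximations differ by less than `eps`, or the budget is exhausted."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = update(x)
        if abs(x_new - x) < eps:
            return x_new, n  # root and number of iterations consumed
        x = x_new
    return x, max_iter

# Newton's update as the iteration method on f(x) = cos(x) - x
newton_update = lambda x: x - (math.cos(x) - x) / (-math.sin(x) - 1.0)
root, iters = iterate(newton_update, 1.0)
```

Any of the two-step schemes compared in this section can be plugged in as `update`, which is how the iteration counts in the tables would be gathered.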

Example 1. Adiabatic flame temperature equation. The adiabatic flame temperature equation is represented by the following relation: where and For further details, see [29, 30] and the references therein. The above function is actually a polynomial of degree three, and by the fundamental theorem of algebra, it must have exactly three roots. Among these roots there is a simple one, which we approximated through the proposed methods by choosing the initial guess, and the numerical results are shown in Table 2.

Example 2. Fraction conversion of nitrogen-hydrogen to ammonia. We take this example from [31]; it describes the fraction conversion of nitrogen-hydrogen feed to ammonia, usually known as fractional conversion. In this problem, the values of temperature and pressure have been taken as 500°C and 250 atm, respectively. The problem has the following nonlinear form: which can easily be reduced to the following polynomial: Since the degree of the above polynomial is four, it must have exactly four roots. By definition, the fraction conversion lies in the interval (0, 1), so only one real root exists in this interval, which is 0.2777595428. The other three roots have no physical meaning. We started the iteration process with the chosen initial guess. The numerical results obtained through different methods are shown in Table 3.

Example 3. Finding volume from the van der Waals equation. In chemical engineering, the van der Waals equation has been used for interpreting real and ideal gas behavior [32]; it has the following form: By taking specific values of the parameters of the above equation, we can easily convert it to the following nonlinear function: where the unknown represents the volume, which can easily be found by solving the function. Since the degree of the polynomial is three, it must possess three roots. Among these roots, there is only one positive real root, which is the feasible one because the volume of a gas can never be negative. We start the iteration process with the chosen initial guess, and the results can be seen in Table 4.
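The paper's specific parameter values are not reproduced in this text. To illustrate the setup, the following sketch solves the van der Waals equation (P + a n²/V²)(V − n b) = n R T for the volume with a plain Newton iteration, using hypothetical constants of our own choosing (roughly CO2-like values):

```python
# Hypothetical parameter values (roughly CO2 near 300 K); the paper's own
# specific values are not reproduced here.
a, b = 3.592, 0.04267  # van der Waals constants: L^2*atm/mol^2 and L/mol
R, T, P, n = 0.08206, 300.0, 10.0, 1.0

def g(V):
    """Residual of the van der Waals equation (P + a n^2/V^2)(V - n b) = n R T."""
    return (P + a * n ** 2 / V ** 2) * (V - n * b) - n * R * T

def dg(V):
    return (P + a * n ** 2 / V ** 2) - (V - n * b) * 2.0 * a * n ** 2 / V ** 3

V = n * R * T / P  # ideal-gas volume as the initial guess
for _ in range(50):
    step = g(V) / dg(V)
    V -= step
    if abs(step) < 1e-12:
        break
```

Starting from the ideal-gas volume keeps the iteration on the physically meaningful positive branch of the cubic, which mirrors the feasibility argument in the text.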

Example 4. Open channel flow problem. The flow of water in an open channel under the uniform flow condition is given by Manning's equation [33], which has the following standard form: where the symbols represent the slope, area, and hydraulic radius of the corresponding channel, respectively, together with Manning's roughness coefficient. For a rectangular channel with a given width and depth of water, we may write: Using these values in (37), we obtain: To find the depth of water in the channel for a given quantity of water, the above equation may be written in the form of a nonlinear function as: We assign specific values to the discharge (in m³/s), the channel width (in m), Manning's roughness coefficient, and the slope. We choose an initial guess to start the iteration process, and the corresponding results obtained through different iteration schemes are given in Table 5.
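Since the paper's parameter values are not reproduced here, the following sketch shows the structure of the problem with hypothetical data of our own: Manning's formula Q = (1/n) A R^(2/3) √S for a rectangular channel, with A = b·y and R = b·y/(b + 2y), solved for the depth y by bisection for robustness:

```python
# Hypothetical channel data (the paper's specific values are not reproduced):
Q, b, nm, S = 10.0, 5.0, 0.013, 0.001  # discharge, width, Manning n, slope

def residual(y):
    """Manning's equation for a rectangular channel:
    Q = (1/n) * A * R^(2/3) * sqrt(S), with A = b*y and R = b*y/(b + 2*y)."""
    A = b * y
    Rh = A / (b + 2.0 * y)
    return (A * Rh ** (2.0 / 3.0)) * S ** 0.5 / nm - Q

# Bisection: the residual is negative for tiny depths, positive for large ones
lo, hi = 1e-6, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0.0:
        hi = mid
    else:
        lo = mid
depth = 0.5 * (lo + hi)
```

The residual is monotone in the depth, so any bracketing method converges; the faster schemes compared in the tables would replace the bisection loop.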

Example 5. Transcendental and algebraic problems. To analyze the suggested algorithms numerically, we consider the following seven transcendental and algebraic equations: Their numerical results can be seen in Table 6.

Tables 2–6 exhibit the numerical comparison of the suggested algorithms with other existing algorithms of a similar nature. In the columns of the above tables, we report the number of iterations consumed by the different algorithms, the absolute value of the function at the final approximation, the final approximated root, the absolute distance between two consecutive approximations, and the computational order of convergence (COC), which is approximated by the following formula:
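The Weerakoon-Fernando approximation estimates the COC from the last few iterates as ln(|x_{n+1} − x_n| / |x_n − x_{n−1}|) / ln(|x_n − x_{n−1}| / |x_{n−1} − x_{n−2}|). A small Python sketch, demonstrated on Newton iterates (which should give a value near 2):

```python
import math

def coc(xs):
    """Computational order of convergence estimated from the last four
    iterates, following the Weerakoon-Fernando approximation [34]."""
    x0, x1, x2, x3 = xs[-4], xs[-3], xs[-2], xs[-1]
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))

# Newton iterates for f(x) = x^2 - 2 (quadratic convergence expected)
xs = [1.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
order = coc(xs)
```

In practice the iterates must be carried at high precision (as Maple allows), since in double precision the differences underflow after only a few sixth-order steps.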

The above approximation was first suggested in 2000 by Weerakoon and Fernando [34]. The numerical results in Tables 2–6 show that the presented methods perform best in comparison with the other ones. For example, in the second, fourth, fifth, tenth, and eleventh test examples, Algorithm 1 is the best, as it took the fewest iterations among all the compared methods while attaining great precision. In the seventh test example, Algorithm 2 shows the best performance, whereas in the first, third, sixth, eighth, and ninth test examples, both proposed algorithms behave alike and perform better than all the others. In short, we can say that the proposed algorithms are superior to the other well-known existing iteration schemes in terms of accuracy, speed, number of iterations, and computational order of convergence.

Table 7 exhibits a comparison of the iterations consumed by different algorithms and by the newly proposed methods for root-finding of nonlinear algebraic functions with the stated accuracy. The columns of the table give the number of iterations for the various test functions together with the corresponding initial guesses. The numerical results shown in Table 7 again confirm the fast and superior performance of the presented algorithms in terms of the number of iterations for the above-defined stopping criterion with the given accuracy. In all test examples, the proposed algorithms required fewer iterations than the other iterative algorithms. All the calculations were carried out with the aid of the computer program Maple 13.

6. Concluding Remarks

In this work, two novel iteration schemes for computing the zeros of nonlinear functions have been established, both of which possess sextic convergence. The first iteration scheme is derived using Taylor's series expansion and the generalized Newton-Raphson's method, whereas in the second one we apply the basic idea of an interpolation technique to approximate the second derivative, which results in a higher efficiency index. A comparison table of the efficiency indices of different methods of a similar nature has been presented, which shows that the presented method has the highest efficiency index among the compared methods. By solving some engineering and arbitrary test problems with the aid of a computer program, the validity and applicability of the suggested iteration schemes have been analyzed. The numerical results in Tables 1–7 confirm the superiority of the suggested iteration schemes over the other existing two-step iteration schemes of a similar nature. Using the basic idea of the interpolation technique, one can derive a broad range of new iteration schemes for computing zeros of one-dimensional nonlinear equations.

Data Availability

All data required for this paper are included within the paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All authors contributed equally to this paper.