Efficacy of Optimal Methods for Nonlinear Equations with Chemical Engineering Applications
In this study, we propose a modified predictor-corrector Newton-Halley (MPCNH) method for solving nonlinear equations. The proposed sixteenth-order MPCNH method is free of second derivatives and has a high efficiency index. The convergence analysis of the modified method is discussed. Different problems were tested to demonstrate the applicability of the proposed method, some of them real-life problems such as a chemical equilibrium problem, conversion in a chemical reactor, the azeotropic point of a binary solution, and the volume from the van der Waals equation. Several comparisons with other optimal and nonoptimal iterative techniques of equal order are presented to show the efficiency of the modified method and to address the question: are optimal methods always the best choice for solving nonlinear equations?
Finding a solution of $f(x) = 0$, where $f$ is a nonlinear function, is highly significant in mathematics, because equations of this type are common in the applied sciences and in real-life problems. Newton's iterative technique for solving such equations is defined as
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad (1)$$
which has second-order convergence. Many researchers have improved Newton's method to obtain better results and higher convergence orders; see, for instance, [2–4] and the references therein. One of the most famous improvements of Newton's scheme is Halley's third-order method
$$x_{n+1} = x_n - \frac{2f(x_n)f'(x_n)}{2[f'(x_n)]^2 - f(x_n)f''(x_n)}, \qquad (2)$$
and another well-known improvement of Newton's technique is the third-order iterative method proposed by Householder:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{[f(x_n)]^2 f''(x_n)}{2[f'(x_n)]^3}. \qquad (3)$$
Recently, many researchers have applied the technique of updating the solution to improve the convergence order of iterative schemes. In this technique, first suggested by Traub, composing two iterative schemes of orders $p$ and $q$ yields a new scheme of order $pq$. Traub showed that the two-step Newton technique has fourth-order convergence. In the same manner, by combining three methods, namely Newton, Halley, and Householder, Bahgat and Hafiz presented a three-step iterative method of eighteenth-order convergence; they call their method the predictor-corrector Newton-Halley (PCNH) method. Many examples of iterative methods created using the same technique can be found in [8–11] and the references therein.
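To make the behavior of Newton's scheme (1) concrete, the following is a minimal runnable sketch (not code from the paper; function names are illustrative) that iterates (1) on the test function $f(x) = x^2 - 2$ and records the error sequence, whose roughly squaring decay illustrates second-order convergence.

```python
# Minimal sketch of Newton's iteration (1); illustrative only.
def newton(f, df, x0, n_steps=5):
    """Return the list of iterates produced by Newton's method."""
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x - f(x) / df(x))  # the Newton step (1)
    return xs

# Example: f(x) = x^2 - 2, whose positive root is sqrt(2).
iterates = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
errors = [abs(x - 2 ** 0.5) for x in iterates]
# The error is roughly squared at each step (second-order convergence).
```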
Besides, one of the most common ways to compare the power of iterative methods is the efficiency index, defined as $p^{1/n}$, where $p$ is the convergence order of the iterative scheme and $n$ is the number of functional evaluations per iteration. For example, the PCNH method mentioned above has an efficiency index equal to $18^{1/8} \approx 1.4352$. Some problems can occur when the technique of updating the solution is applied. The main problem is that the number of functional evaluations per iteration increases, and consequently the efficiency index decreases. The second problem, which is usually faced, is the appearance of the second derivative in the numerical scheme. Kung and Traub conjectured that an iterative scheme with $n + 1$ functional evaluations per iteration is optimal if its order of convergence equals $2^n$. Many authors have proposed optimal iterative methods of different orders. The standard way of constructing an optimal method is the composition technique together with interpolations and approximations that minimize the number of functional evaluations. Several optimal fourth-order iterative methods have been constructed; see, for example, [9–11]. Optimal eighth-order convergence was reached by many authors, as presented in [12–14]. Also, many sixteenth-order iterative methods have been proposed; for instance, see [15–17].
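The efficiency indices quoted in this paper follow directly from $p^{1/n}$; a quick check (orders and evaluation counts taken from the text):

```python
# Efficiency index p**(1/n): p = convergence order,
# n = functional evaluations per iteration (values from the text).
methods = {
    "Newton (p=2, n=2)": 2 ** (1 / 2),
    "Halley/Householder (p=3, n=3)": 3 ** (1 / 3),
    "PCNH (p=18, n=8)": 18 ** (1 / 8),
    "MPCNH (p=16, n=6)": 16 ** (1 / 6),
}
for name, ei in methods.items():
    print(f"{name}: {ei:.4f}")
```

This confirms the ordering discussed later: MPCNH (about 1.587) beats PCNH (about 1.435), Newton (about 1.414), and Halley/Householder (about 1.442).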
As mentioned above, to obtain optimal methods and reduce the functional evaluations at each iteration, researchers usually use different approximations and interpolations. This process can bring a disadvantage: even if the number of functional evaluations is reduced to the minimum, the number of algebraic operations is increased. We shall find the answers to the following questions:
(i) Are optimal iterative techniques always the best for solving nonlinear equations?
(ii) Does minimizing the number of functional evaluations at each iteration enlarge the area of convergence of the iterative scheme?
(iii) Do nonoptimal methods always take longer computational time?
(iv) Do optimal methods of equal order have the same dynamical behavior?
In this study we answer the above questions by constructing a very simple nonoptimal method of order sixteen through some modifications of PCNH. The work is arranged as follows: the derivation of the modified method is carried out in Section 2. The convergence analysis of the proposed method is discussed in Section 3. Different comparisons with other optimal schemes of equal order are given in Section 4. A comparison using the basins of attraction (dynamics) of the proposed technique and other techniques of equal order is shown in Section 5. Finally, Section 6 presents the conclusion.
2. Modified Predictor-Corrector Newton-Halley (MPCNH) Method
Let $f(x) = 0$ be an equation such that $f$ is a nonlinear function defined on some open interval $I$ and sufficiently differentiable. Let $\alpha \in I$ be a simple root of $f(x) = 0$, and let $x_0$ be an initial guess sufficiently close to $\alpha$. Using the technique of updating the solution, that is, using Newton's scheme (1) as a predictor and both Halley's scheme (2) and Householder's scheme (3) as correctors, Bahgat and Hafiz proposed the following three-step iterative method:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)},$$
$$z_n = y_n - \frac{2f(y_n)f'(y_n)}{2[f'(y_n)]^2 - f(y_n)f''(y_n)}, \qquad (4)$$
$$x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)} - \frac{[f(z_n)]^2 f''(z_n)}{2[f'(z_n)]^3}.$$
Scheme (4) is called the predictor-corrector Newton-Halley (PCNH) method. Bahgat and Hafiz proved that this iterative technique has eighteenth-order convergence. Per iteration, PCNH requires the evaluation of three functions, three first derivatives, and two second derivatives. Therefore, the efficiency index of this scheme equals $18^{1/8} \approx 1.4352$. This index is better than $2^{1/2} \approx 1.4142$ for Newton's method, but worse than that of the classical Halley and Householder methods, which is $3^{1/3} \approx 1.4422$.
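A runnable sketch of one PCNH iteration (4), composing the Newton predictor with the Halley and Householder correctors, may clarify the three-step structure. This is an illustrative implementation under the assumption that (4) composes (1)–(3) exactly as described above; names are not from the paper.

```python
# One PCNH iteration (4): Newton predictor, then Halley and
# Householder correctors, using explicit second derivatives.
def pcnh_step(f, df, d2f, x):
    y = x - f(x) / df(x)  # Newton predictor (1)
    z = y - 2 * f(y) * df(y) / (2 * df(y) ** 2 - f(y) * d2f(y))  # Halley corrector (2)
    # Householder corrector (3):
    return z - f(z) / df(z) - f(z) ** 2 * d2f(z) / (2 * df(z) ** 3)

# Example: a single step on f(x) = x^2 - 2 from x0 = 1.5 already
# lands extremely close to sqrt(2), reflecting the very high order.
f, df, d2f = (lambda x: x * x - 2), (lambda x: 2 * x), (lambda x: 2.0)
x1 = pcnh_step(f, df, d2f, 1.5)
```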
In order to improve the efficiency index of PCNH, we approximate the second derivatives $f''(y_n)$ and $f''(z_n)$ using Hermite interpolating polynomials of degree three. To do that, let $H(t) = a_0 + a_1(t - x_n) + a_2(t - x_n)^2 + a_3(t - x_n)^3$, where the coefficients $a_0, \ldots, a_3$ can be found from the following conditions:
$$H(x_n) = f(x_n), \quad H'(x_n) = f'(x_n), \quad H(y_n) = f(y_n), \quad H'(y_n) = f'(y_n). \qquad (5)$$
By solving the system of linear equations resulting from the above conditions, and after substituting the obtained coefficients, one can write
$$f''(y_n) \approx H''(y_n) = \frac{2\big(f'(x_n) + 2f'(y_n)\big)}{y_n - x_n} - \frac{6\big(f(y_n) - f(x_n)\big)}{(y_n - x_n)^2}. \qquad (6)$$
In the same manner, interpolating at $y_n$ and $z_n$, we can obtain an approximation for $f''(z_n)$:
$$f''(z_n) \approx \frac{2\big(f'(y_n) + 2f'(z_n)\big)}{z_n - y_n} - \frac{6\big(f(z_n) - f(y_n)\big)}{(z_n - y_n)^2}. \qquad (7)$$
After substituting (6) and (7) into (4), we have the new modified scheme (MPCNH):
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)},$$
$$z_n = y_n - \frac{2f(y_n)f'(y_n)}{2[f'(y_n)]^2 - f(y_n)\,\widehat{f''}(y_n)}, \qquad (8)$$
$$x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)} - \frac{[f(z_n)]^2\,\widehat{f''}(z_n)}{2[f'(z_n)]^3},$$
where $\widehat{f''}(y_n)$ and $\widehat{f''}(z_n)$ denote the approximations (6) and (7), respectively. This method has sixteenth-order convergence, as will be shown in the next section. At each iteration, MPCNH requires the evaluation of three functions and three first derivatives. Therefore, the proposed scheme has efficiency index $16^{1/6} \approx 1.5874$, which is better than that of PCNH and of both Halley's and Householder's methods. Another advantage of this modified method is that it is a second-derivative-free scheme. Note that the modified method is not optimal, since it does not satisfy the Kung-Traub conjecture (with six evaluations, the optimal order would be $2^5 = 32 > 16$).
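The full MPCNH iteration (8) can be sketched as follows. This is an illustrative implementation under the assumption that the Hermite-based approximations (6) and (7) take the cubic-interpolant form given above; only $f$ and $f'$ are evaluated (three times each per iteration), and a small guard handles the degenerate case of an already-converged iterate.

```python
# Runnable sketch of the three-step MPCNH iteration (8);
# second derivatives are replaced by the Hermite approximations (6)-(7).
def hermite_d2(f, df, u, v):
    """H''(v) for the cubic Hermite interpolant matching f, f' at u and v."""
    h = v - u
    if h == 0:
        # Degenerate step (iterate already converged to machine precision);
        # returning 0 makes the correctors fall back to Newton-type steps.
        return 0.0
    return 2 * (df(u) + 2 * df(v)) / h - 6 * (f(v) - f(u)) / h ** 2

def mpcnh_step(f, df, x):
    y = x - f(x) / df(x)  # Newton predictor
    d2y = hermite_d2(f, df, x, y)  # approximation (6) of f''(y)
    z = y - 2 * f(y) * df(y) / (2 * df(y) ** 2 - f(y) * d2y)  # Halley corrector
    d2z = hermite_d2(f, df, y, z)  # approximation (7) of f''(z)
    return z - f(z) / df(z) - f(z) ** 2 * d2z / (2 * df(z) ** 3)  # Householder corrector

def mpcnh(f, df, x0, tol=1e-12, max_iter=20):
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        x = mpcnh_step(f, df, x)
    return x

# Example: cube root of 2.
root = mpcnh(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, 1.5)
```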
3. Convergence Analysis
Now, we consider the convergence analysis of MPCNH.
Theorem 1. Let $\alpha \in I$ be a simple zero of a sufficiently differentiable function $f \colon I \subseteq \mathbb{R} \to \mathbb{R}$ on an open interval $I$. If $x_0$ is an initial guess close enough to $\alpha$, then the MPCNH technique has at least sixteenth-order convergence.
Proof. Let $\alpha$ be a zero of $f$, and let $e_n = x_n - \alpha$ be the error at the $n$th iteration. Using the Taylor series about $\alpha$, we get
$$f(x_n) = f'(\alpha)\big[e_n + c_2 e_n^2 + c_3 e_n^3 + \cdots\big], \qquad (9)$$
where $c_k = f^{(k)}(\alpha)/(k!\,f'(\alpha))$, $k = 2, 3, \ldots$. Differentiating (9), one obtains
$$f'(x_n) = f'(\alpha)\big[1 + 2c_2 e_n + 3c_3 e_n^2 + \cdots\big]. \qquad (10)$$
Dividing (9) by (10) and substituting into the first step of (8) shows that the predictor error satisfies $y_n - \alpha = c_2 e_n^2 + O(e_n^3)$. Expanding $f(y_n)$ and $f'(y_n)$ about $\alpha$ in the same manner, together with the Hermite approximation (6), and substituting into the second step of (8) yields the error of $z_n$ as a power series in $e_n$. Repeating the process with the approximation (7) in the third step of (8), all terms up to $e_n^{15}$ cancel, and the error equation reduces to
$$e_{n+1} = C\,e_n^{16} + O(e_n^{17}),$$
where the constant $C$ depends only on the $c_k$. Hence, the MPCNH technique has at least sixteenth-order convergence.
4. Test Problems and Comparisons
In this section we test the presented technique by applying it to some real-life problems arising in chemical engineering (Examples 1–4) and to seven additional nonlinear functions (Example 5). We also show, using different comparisons, that optimal methods are not always the best choice for nonlinear equations. To this end, we compare the proposed MPCNH method with five other iterative methods of order sixteen; one of them is nonoptimal and the remaining four are optimal. The methods used in the comparison, with their abbreviations, are the nonoptimal LMMW method, the optimal SSSL1 method, the optimal GK method, the optimal SAK method, and the optimal SK method.
We select two convergence conditions for the computer programs. The first stopping criterion is $|x_{n+1} - x_n| < \epsilon$, while the second stopping criterion is $|f(x_n)| < \epsilon$, for a prescribed tolerance $\epsilon$. All calculations have been performed under the same conditions on an Intel Core i3-2330M CPU @ 2.20 GHz with 4 GB RAM, running Microsoft Windows 10, 64-bit, on an x64-based processor. The software used is Mathematica 9 with 10000 significant digits. Now, consider the following examples.
Example 1 (a chemical equilibrium problem). Consider the equation describing the fraction of the nitrogen-hydrogen feed that gets converted to ammonia (this fraction is called the fractional conversion) at the given pressure (in atm) and temperature (in °C); the original problem can be reduced to polynomial form. Of the four roots of this polynomial, only the first real root is acceptable and physically meaningful, since by definition the fractional conversion must lie between 0 and 1. We started from an initial guess $x_0$ close to this root.
Example 2 (azeotropic point of a binary solution). Consider the problem of determining the azeotropic point of a binary solution, where $A$ and $B$ are coefficients in the van Laar equation, which describes the phase equilibria of liquid solutions; fixed values of $A$ and $B$ are used for this problem. We computed the root of the resulting equation starting from a nearby initial approximation $x_0$.
Example 3 (conversion in a chemical reactor). In this example, a nonlinear equation is to be solved in which $x$ is the fractional conversion of a species in a chemical reactor; therefore, $x$ must be bounded between 0 and 1, and only a solution in this interval is physically meaningful. As an initial guess, we selected a value $x_0$ close to the solution.
Example 4 (volume from the van der Waals equation). Van der Waals' equation is given by
$$\left(P + \frac{an^2}{V^2}\right)(V - nb) = nRT,$$
where $P$, $V$, $T$, and $n$ are the pressure, volume, temperature in Kelvin, and number of moles of the gas, and $R$ is the gas constant. Finally, $a$ and $b$ are called the van der Waals constants, and they depend on the type of gas. The above equation is clearly nonlinear in $V$. It can be reduced to the following cubic in $V$:
$$PV^3 - n(bP + RT)V^2 + an^2V - abn^3 = 0.$$
For instance, if one has to find the volume of a given number of moles of benzene vapor under a stated pressure and a temperature of 500°C, using the van der Waals constants for benzene, then the problem is to find the roots of this polynomial. The equation has three roots: one real root and a pair of complex-conjugate roots. As $V$ is a volume, only the positive real root is physically meaningful. We considered an initial approximation $V_0$ close to it.
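To illustrate how the cubic above is solved in practice, here is a sketch using a plain Newton iteration started from the ideal-gas volume. The constants are illustrative textbook van der Waals values for CO2 (a ≈ 3.592 L²·atm/mol², b ≈ 0.04267 L/mol), not the benzene data of the example, and any of the higher-order methods in this paper could replace the Newton step.

```python
# Solve P*V^3 - n*(b*P + R*T)*V^2 + a*n^2*V - a*b*n^3 = 0 for V.
R = 0.08206  # gas constant, L·atm/(mol·K)

def vdw_volume(P, T, n, a, b, tol=1e-12):
    """Vapor-phase volume of a van der Waals gas via Newton's method."""
    f = lambda V: P * V ** 3 - n * (b * P + R * T) * V ** 2 + a * n ** 2 * V - a * b * n ** 3
    df = lambda V: 3 * P * V ** 2 - 2 * n * (b * P + R * T) * V + a * n ** 2
    V = n * R * T / P  # ideal-gas volume as the starting guess
    for _ in range(100):
        V_new = V - f(V) / df(V)
        if abs(V_new - V) < tol:
            return V_new
        V = V_new
    return V

# Illustrative run: 1 mol of CO2 at 10 atm and 300 K.
V = vdw_volume(P=10.0, T=300.0, n=1.0, a=3.592, b=0.04267)
```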
Example 5. To study the proposed method on transcendental equations, consider the following seven test examples:
Tables 1–5 illustrate the comparisons between the iterative methods for Examples 1–5, respectively, where $N$ denotes the number of iterations needed to satisfy the first stopping criterion, $x_N$ is the approximate root, $|x_N - x_{N-1}|$ is the absolute difference between the last two successive approximations of the root, $|f(x_N)|$ is the value of the function at the approximate root, and ACOC is the approximated computational order of convergence, which can be estimated as
$$\rho \approx \frac{\ln\big|(x_{n+1} - x_n)/(x_n - x_{n-1})\big|}{\ln\big|(x_n - x_{n-1})/(x_{n-1} - x_{n-2})\big|}.$$
Finally, CPU time is the time in seconds required to satisfy the stopping criterion, measured with the built-in function "TimeUsed" in the Mathematica 9 software. From Tables 1–5 we can see clearly that the approximate solutions obtained by MPCNH are more accurate than the estimates obtained by the other five methods. It is also clear that MPCNH needs a number of iterations less than or equal to that of the other techniques to meet the stopping criterion. In addition, MPCNH needs less CPU time than the other methods to achieve the convergence criterion, even when they need the same number of iterations. Also note that MPCNH has an ACOC equal to 18 for two of the test functions, while the other methods have an ACOC equal to 16.
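The ACOC can be estimated from the last four iterates of any run; the following is a sketch assuming the standard three-ratio formula shown above. It is illustrative only: for sixteenth-order methods the differences underflow in double precision almost immediately, which is why the paper's computations use high-precision Mathematica arithmetic. The sketch therefore demonstrates the estimator on plain Newton iterates, where an ACOC near 2 is expected.

```python
import math

def acoc(xs):
    """Approximated computational order of convergence from the
    last four successive iterates x_{n-2}, x_{n-1}, x_n, x_{n+1}."""
    d0 = xs[-3] - xs[-4]
    d1 = xs[-2] - xs[-3]
    d2 = xs[-1] - xs[-2]
    return math.log(abs(d2 / d1)) / math.log(abs(d1 / d0))

# Newton iterates for x^2 - 2 from x0 = 1.5; ACOC should be close to 2.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
rho = acoc(xs)
print(rho)
```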
For the second convergence condition, we tested the seven functions given in Example 5. Table 6 shows that the number of iterations MPCNH needs to satisfy this stopping criterion is less than or equal to that of the other methods, with a noticeable difference in the CPU time needed to satisfy the condition.
Although the four optimal methods compared are of the same order as MPCNH, their results are overall not as accurate as those of MPCNH, and they need more CPU time to achieve the stopping criteria under the same conditions. In some cases, the optimal methods also need more iterations than MPCNH to satisfy the convergence conditions.
5. Dynamical Comparison
In this section we present the dynamics of the iterative schemes in order to compare the area of convergence of each method and to see whether or not optimal methods always perform better than nonoptimal ones.
Firstly, let us start with some preliminaries related to basins of attraction (the dynamics of iterative methods). A point $x^*$ is called a fixed point of a map $R$ if $R(x^*) = x^*$. Let $\hat{\mathbb{C}}$ be the Riemann sphere; then for $z_0 \in \hat{\mathbb{C}}$, we define the orbit of $z_0$ as $\mathrm{orb}(z_0) = \{z_0, R(z_0), R^2(z_0), \ldots\}$, where $R^k$ is the $k$th iterate of $R$. The point $z_0$ is called a periodic point of period $m$ if $m$ is the smallest natural number such that $R^m(z_0) = z_0$. If $z_0$ is periodic of period $m$, then it is a fixed point of $R^m$. A fixed point $z_0$ is said to be super-attracting if $|R'(z_0)| = 0$, attracting if $|R'(z_0)| < 1$, repelling if $|R'(z_0)| > 1$, and neutral if $|R'(z_0)| = 1$.
The closure of the set of repelling periodic points of a nonlinear map $R$ is called the Julia set of $R$; the Fatou set is defined to be its complement. If $O$ is an attracting periodic orbit of period $m$, then the basin of attraction of $O$ is the open set of all points whose successive iterates converge to some point of $O$. In symbols, the basin of attraction of any root $\alpha$ of $f(x) = 0$ is $B(\alpha) = \{z_0 \in \hat{\mathbb{C}} : R^k(z_0) \to \alpha \text{ as } k \to \infty\}$. The basin of attraction of a periodic orbit may have infinitely many components. The Fatou set contains the basins of attraction of the attracting fixed points, while the Julia set contains the boundaries of these basins.
We compare the dynamics of the proposed root-finding method MPCNH with those of the iterative methods used in the previous section, together with the CPU time in seconds needed to produce the plots. We choose four test polynomials, all with simple roots, to visualize the basins of attraction. The dynamics of the four polynomials are shown in Figures 1–4, respectively.
In all test examples, a 4-by-4 region centered at the origin contains all the roots of the tested functions. A uniform grid over this square provides the initial points for the iterative schemes. Each grid point is colored according to the root it converges to and shaded according to the number of iterations needed for convergence. Black areas denote points where the method diverges, and the exact roots are marked as black points on the graphs. Darker regions indicate that the scheme needs fewer iterations.
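The grid-coloring procedure just described can be sketched as follows. This is illustrative only: Newton's iteration on $p(z) = z^3 - 1$ stands in for the compared sixteenth-order methods, and the grid resolution and iteration cap are far smaller than in the paper's figures.

```python
# Basin-of-attraction sketch on the 4-by-4 square centered at the origin.
def basin_color(z, roots, max_iter=50, tol=1e-8):
    """Return (root index or -1 for divergence, iterations used)."""
    for k in range(max_iter):
        dz = 3 * z * z
        if dz == 0:
            return -1, k  # derivative vanishes: treat as divergent (black)
        z = z - (z ** 3 - 1) / dz  # Newton step for p(z) = z^3 - 1
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i, k  # color by root i, shade by iteration count k
    return -1, max_iter  # no convergence within the cap: black

roots = [1, complex(-0.5, 3 ** 0.5 / 2), complex(-0.5, -3 ** 0.5 / 2)]
N = 40  # grid resolution per axis (the paper's grids are much finer)
grid = [[basin_color(complex(-2 + 4 * x / N, -2 + 4 * y / N), roots)[0]
         for x in range(N + 1)] for y in range(N + 1)]
```

In a full implementation, each `(i, k)` pair is mapped to a color and shade, reproducing plots like Figures 1–4.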
Figures 1 and 2 show that the MPCNH, LMMW, SAK, and SK iterative methods give good basins of attraction, with areas of convergence larger than those of the optimal SSSL1 and GK methods. Those two methods show large black areas, which represent points of divergence.
In Figure 3, the SAK and SK methods exhibit better dynamics than MPCNH, showing little chaos, while LMMW, SSSL1, and GK show larger divergent areas. Finally, the dynamics illustrated in Figure 4 show that all the considered methods contain divergent points, with MPCNH and SAK having the fewest such points.
Table 7 presents the CPU time needed to obtain the basins of attraction of the zeros of the examples considered. Even though MPCNH gives better basins of attraction than LMMW, it needs more CPU time in all cases. In general, the area of convergence of a root-finding method depends on many factors, and optimal methods do not necessarily have better dynamics than nonoptimal ones. Moreover, a method that shows better dynamics does not necessarily need less CPU time to produce them.
6. Conclusion
We have developed in this study a new modified predictor-corrector technique for solving nonlinear equations. We approximated the second derivatives using Hermite interpolation, obtaining a new technique of sixteenth-order convergence. The efficiency index of the new method is 1.587, which is better than that of the original method. The convergence of the new algorithm has been discussed and its convergence order established. Four real-life problems from chemical engineering, in addition to seven arbitrary nonlinear equations, were tested. Numerical and dynamical comparisons have been made against several iterative methods of the same order, four of which are optimal. Overall, the implemented method exhibits faster and more accurate convergence to the exact solution than the other methods, and its dynamics show a larger area of convergence than some of the optimal methods used in the comparison. We have demonstrated that optimal methods are not always better than nonoptimal methods: factors other than the number of functional evaluations per iteration, such as the number of algebraic operations and the number of steps of the iterative method, affect both the efficiency and the dynamics of iterative methods. As future work, we see a need for an alternative efficiency index that reflects the true cost of an iteration, since the current index does not capture the real efficiency, as this study has shown.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Authors' Contributions
Both authors contributed equally to this study.
Acknowledgments
Obadah Said Solaiman acknowledges the Deanship of Scientific Research at King Faisal University for the financial support under Nasher Track (Grant No. 186066).
References
J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
E. Halley, "A new, exact and easy method of finding the roots of equations generally, and that without any previous reduction," Philosophical Transactions of the Royal Society of London, vol. 18, pp. 136–148, 1694.
A. S. Householder, The Numerical Treatment of a Single Nonlinear Equation, McGraw-Hill, New York, NY, USA, 1970.
Y. H. Geum, Y. I. Kim, and B. Neta, "Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points," Journal of Computational and Applied Mathematics, vol. 333, pp. 131–156, 2018.