Applied Mathematics for Engineering Problems in Biomechanics and Robotics 2020
Mudassir Shams, Nazir Ahmad Mir, Naila Rafiq, A. Othman Almatroud, Saima Akram, "On Dynamics of Iterative Techniques for Nonlinear Equation with Applications in Engineering", Mathematical Problems in Engineering, vol. 2020, Article ID 5853296, 17 pages, 2020. https://doi.org/10.1155/2020/5853296
On Dynamics of Iterative Techniques for Nonlinear Equation with Applications in Engineering
Abstract
In this article, we construct an optimal family of iterative methods for finding a single root and then extend this family to determine all distinct roots, as well as multiple roots, of single-variable nonlinear equations simultaneously. Convergence analysis is presented for both cases, showing that the optimal order of convergence is 4 for the single root finding methods and 6 for the simultaneous determination of all distinct and multiple roots of a nonlinear equation. The computational cost, basins of attraction, efficiency, log of residual, and numerical test examples show that the newly constructed methods are more efficient than the existing methods in the literature.
1. Introduction
Solving a nonlinear equation
f(x) = 0, (1)
is the oldest problem of science in general and of mathematics in particular. Nonlinear equations have diverse applications in many areas of science and engineering. In general, to find the roots of (1), we turn to iterative schemes, which can be classified into those that approximate a single root of (1) and those that approximate all of its roots. In this article, we work on both types of iterative methods. Many iterative methods of different convergence orders already exist in the literature (see [1–11]) to approximate the roots of (1). Ostrowski [12] defined the efficiency index I = k^(1/i) to classify these iterative methods in terms of their convergence order k and the number i of evaluations of the function or its derivatives per iteration.
An iterative method is said to be optimal according to the Kung–Traub conjecture [13] if k = 2^(i-1) holds, i.e., if its convergence order attains the optimal order 2^(i-1). The aforementioned methods approximate one root at a time, but mathematicians are also interested in finding all the roots of (1) simultaneously. Simultaneous iterative methods are very popular because of their wider region of convergence, they are more stable than single root finding methods, and they can be implemented for parallel computing as well. More detail on single as well as simultaneous determination of all roots can be found in [1, 10–28] and the references cited therein.
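The efficiency index and the optimality check above can be computed directly. The following is a minimal sketch of our own (the function names are illustrative, not from the paper):

```python
def efficiency_index(order, evals):
    """Ostrowski's efficiency index I = k**(1/i) for a method of
    convergence order k using i function/derivative evaluations per step."""
    return order ** (1.0 / evals)

def is_optimal(order, evals):
    """Kung-Traub conjecture: a multipoint method without memory using i
    evaluations per iteration is optimal if its order equals 2**(i - 1)."""
    return order == 2 ** (evals - 1)

# Newton's method: order 2 with two evaluations (f and f'), so I ~ 1.414.
print(efficiency_index(2, 2))
# An optimal fourth-order method needs only three evaluations per iteration.
print(is_optimal(4, 3))
```

By this measure, a fourth-order method with three evaluations has index 4^(1/3) ≈ 1.587, which is why optimal fourth-order families are attractive.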
The most famous of the single root finding methods is the classical Newton–Raphson method:
x_{n+1} = x_n - f(x_n)/f'(x_n), n = 0, 1, 2, …. (4)
Method (4) is optimal, with an efficiency index of about 1.41, by the Kung–Traub conjecture. If we use Weierstrass' correction [26]
w(x_i) = f(x_i) / ∏_{j≠i} (x_i - x_j), i, j = 1, …, n, (5)
in (4) in place of the derivative, then we get the classical Weierstrass–Dochev method to approximate all roots of nonlinear equation (1) simultaneously:
x_i^(k+1) = x_i^(k) - w(x_i^(k)), i = 1, …, n. (6)
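Both classical schemes can be sketched in a few lines. This is a minimal illustration of ours, assuming the standard formulations above (Newton's step for a single root, and the Weierstrass correction for a monic polynomial), not the paper's implementation:

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Classical Newton-Raphson iteration (4) for a single root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def weierstrass_dochev(coeffs, xs, sweeps=60):
    """Weierstrass-Dochev iteration (6): second-order simultaneous method
    for all roots of a monic polynomial given by its coefficients
    (highest degree first); xs holds distinct initial guesses."""
    def p(z):
        r = 0j
        for c in coeffs:          # Horner evaluation of the polynomial
            r = r * z + c
        return r
    xs = list(xs)
    for _ in range(sweeps):
        updated = []
        for i, xi in enumerate(xs):
            w = p(xi)
            for j, xj in enumerate(xs):
                if j != i:
                    w /= xi - xj  # Weierstrass correction w_i
            updated.append(xi - w)
        xs = updated
    return xs

print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))   # ~ sqrt(2)
roots = weierstrass_dochev([1, 0, 0, -1],                  # p(z) = z^3 - 1
                           [0.8 + 0.3j, -0.6 + 0.7j, -0.4 - 0.9j])
```

With one distinct initial guess per root, all three cube roots of unity are recovered at once, which is the appeal of simultaneous methods.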
Method (6) has convergence order 2. Later, Aberth [29] presented the third-order simultaneous method
x_i^(k+1) = x_i^(k) - 1 / ( f'(x_i^(k))/f(x_i^(k)) - Σ_{j≠i} 1/(x_i^(k) - x_j^(k)) ), i = 1, …, n. (7)
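Aberth's iteration (7) can likewise be sketched directly. The code below is our own illustration of the standard form, written via the equivalent update x - N/(1 - N·S) with N = f/f' the Newton correction:

```python
def aberth(f, df, xs, sweeps=30):
    """Aberth's third-order simultaneous iteration (7) for the simple
    roots of a polynomial f; xs holds distinct initial guesses."""
    xs = list(xs)
    for _ in range(sweeps):
        updated = []
        for i, xi in enumerate(xs):
            ratio = f(xi) / df(xi)                # Newton correction f/f'
            s = sum(1 / (xi - xj)
                    for j, xj in enumerate(xs) if j != i)
            updated.append(xi - ratio / (1 - ratio * s))
        xs = updated
    return xs

# All three cube roots of unity at once, for f(z) = z^3 - 1.
roots = aberth(lambda z: z ** 3 - 1, lambda z: 3 * z ** 2,
               [0.9 + 0.4j, -0.7 + 0.6j, -0.3 - 0.8j])
```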
First, we construct a family of optimal fourth-order methods using a weight-function procedure, and then we convert it into simultaneous iterative methods for finding all distinct as well as multiple roots of nonlinear equation (1).
2. Construction of Methods and Convergence Analysis
King [9] presented the following optimal fourth-order method (abbreviated as E1):
Cordero et al. [7] gave the following fourth-order optimal method (abbreviated as E2):
Chun [4] in 2008 gave the following fourth-order optimal method (abbreviated as E3):
Maheshwari [11] gave the following fourth-order optimal method (abbreviated as E4):
Chun [3] gave the following fourth-order optimal method (abbreviated as E5):
Kou et al. [10] gave the following fourth-order optimal method (abbreviated as E6):
Ghanbari [5] gave the following fourth-order optimal method (abbreviated as E7):
Chun [2] in 2006 gave the following fourth-order optimal method (abbreviated as E8):
Ostrowski [12] gave the following fourth-order optimal method (abbreviated as E9):
Here, we propose the following two families of iterative methods: where and is a real number.
For the iteration schemes (17), we have the following convergence theorem; the error relation of the iterative schemes defined in (17) is obtained using the CAS Maple 18.
Theorem 1. Let be a simple root of a sufficiently differentiable function in an open interval I. If is sufficiently close to and is a real-valued function satisfying , and , then the convergence order of the family of iterative methods (17) is four, and the error equation is given by where .
Proof. Let be a simple root of and . By Taylor's series expansion of around , taking , we get and Dividing (19) by (20), we have so Now, Expanding about the origin, we have By putting in (25), we have This proves the theorem.
2.1. The Concrete Fourth-Order Methods
We now construct some concrete members of the family of methods described by (17). Let the function satisfy the conditions of Theorem 1.
Therefore, we obtain the following three new iterative methods, with an arbitrary constant, by choosing the different weight functions given in Table 1:
Concrete method 1 (abbreviated as Q1):
Concrete method 2 (abbreviated as Q2):
Concrete method 3 (abbreviated as Q3): where and
2.2. Complex Dynamical Study of Families of Iterative Methods
Here, we discuss the dynamical study of the iterative methods Q1–Q3 and E1–E9. We investigate the regions from which the initial estimates must be taken in order to reach the roots of the nonlinear equation; that is, we numerically approximate the domains of attraction of the roots as a qualitative measure of how the iterative methods depend on the choice of initial estimates. To answer this question about the dynamical behavior of the iterative methods, we investigate the dynamics of the methods Q1–Q3 and compare them with E1–E9. Let us first recall some basic concepts of complex dynamics; for more details on the dynamical behavior of iterative methods, one can consult [6, 30, 31]. For a rational map R on the complex plane, the orbit of a point is the set of its successive images under R, and the orbit converges if its limit exists. A point is periodic with minimal period m if its m-th image under R returns to it, where m is the smallest such positive integer; a periodic point of period one is called fixed. A fixed point is attracting if the modulus of the derivative of R there is less than one, repelling if it is greater than one, and neutral otherwise. An attracting point defines its basin of attraction as the set of starting points whose orbits tend to it. The closure of the set of repelling periodic points of a rational map is the Julia set, and its complement is the Fatou set. An iterative method applied to find the roots of (1) provides such a rational map, and we are interested in the basins of attraction of the roots of (1). It is well known that the Fatou set contains these basins of attraction, while the Julia set, a fractal, is the region where the rational map behaves unstably. For the graphical study, we take a grid of points of a square region of the complex plane. To each root of (1) we assign a color, and each starting point receives the color of the root to which the corresponding orbit of the iterative method converges; the colormap Jet is used.
We use as the stopping criterion, and the maximum number of iterations is taken as 20. We mark a point dark blue if the orbit of the iterative method does not converge to any root after 20 iterations, i.e., if it remains at a distance greater than the tolerance from every root. A different color is used for each root, so the basins of attraction of the iterative methods are distinguished by their colors; within a basin, brighter color indicates fewer iterations needed to reach the root of (1), while the darkest blue regions denote lack of convergence to any root. Finally, in Tables 2–5, we present the elapsed time for computing the basins of attraction of the iterative maps Q1–Q3 and E1–E9, measured with the tic-toc command in MATLAB (R2011b). Figures 1–4 show the basins of attraction of the iterative methods Q1–Q3 and E1–E9 for four nonlinear test functions, respectively. By observing the basins of attraction, we can judge the stability of the iterative methods: the elapsed times, divergent regions, and brightness of color show that Q1–Q3 perform better than E1–E9.
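The procedure above can be sketched as follows. This is our own coarse illustration (not the paper's MATLAB code): it classifies a small grid over the square [-2, 2] x [-2, 2] by the root of p(z) = z^3 - 1 that Newton's map reaches, with the same 20-iteration cap; a real study would use a much finer grid and map (root index, iteration count) pairs to colors:

```python
def basin(z, roots, tol=1e-3, max_iter=20):
    """Return (root index, iterations used), or (-1, max_iter) for the
    'dark blue' points whose orbit reaches no root within max_iter steps."""
    for k in range(max_iter):
        dz = 3 * z * z
        if dz == 0:                      # derivative vanishes: no Newton step
            return -1, max_iter
        z = z - (z ** 3 - 1) / dz        # Newton's map for p(z) = z^3 - 1
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i, k + 1
    return -1, max_iter

cube_roots = [1, complex(-0.5, 3 ** 0.5 / 2), complex(-0.5, -3 ** 0.5 / 2)]
n = 40                                   # coarse grid resolution, for speed
grid = [[basin(complex(-2 + 4 * x / (n - 1), -2 + 4 * y / (n - 1)),
               cube_roots)[0]
         for x in range(n)]
        for y in range(n)]
```

Replacing the Newton step with any of the maps Q1–Q3 or E1–E9 produces the corresponding basin pictures; only the update line changes.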
Figures 1(a)–1(l) present the basins of attraction of the methods Q1–Q3 and E1–E9 for the first nonlinear test function; in these figures, brighter color within a basin indicates that fewer iterations were needed for convergence. Table 2 shows the elapsed times of Q1–Q3 and E1–E9.
Figures 2(a)–2(l) present the basins of attraction of the methods Q1–Q3 and E1–E9 for the second nonlinear test function; again, brighter color within a basin indicates fewer iterations for convergence. Table 3 shows the elapsed times of Q1–Q3 and E1–E9.
Figures 3(a)–3(l) present the corresponding basins of attraction for the third nonlinear test function, with Table 4 giving the elapsed times of Q1–Q3 and E1–E9.
Figures 4(a)–4(l) present the basins of attraction for the fourth nonlinear test function, with Table 5 giving the elapsed times of Q1–Q3 and E1–E9.
3. Generalization to Simultaneous Methods
Suppose nonlinear equation (1) has n roots. Then, the function and its derivative can be approximated as
This implies
This gives the Aberth–Ehrlich method (7):
Now from (31), an approximation of is formed by replacing with as follows:
Using (33) in (4), we have the following method for finding all the distinct roots:
In the case of multiple roots, we have the following method: where is the multiplicity of the root and in which and . Using the correction , we get the following new family of simultaneous iterative methods for extracting multiple roots of nonlinear equation (1):
Thus, we have constructed three new simultaneous iterative methods (37), abbreviated as SM1–SM3.
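The corrections in (37) are specific to SM1–SM3 and are not reproduced here. As a structural illustration only, the classical Ehrlich–Aberth-type generalization for known multiplicities, x_i ← x_i - m_i / ( f'(x_i)/f(x_i) - Σ_{j≠i} m_j/(x_i - x_j) ), shows how multiplicity-aware simultaneous iterations are organized; the sketch below is ours, under that assumption:

```python
def aberth_multiplicity(f, df, xs, ms, sweeps=40):
    """Simultaneous iteration for the distinct roots of f when the
    multiplicity ms[i] of each root is known; xs holds one initial
    guess per DISTINCT root."""
    xs = list(xs)
    for _ in range(sweeps):
        updated = []
        for i, xi in enumerate(xs):
            fi = f(xi)
            if fi == 0:                  # already exactly at a root
                updated.append(xi)
                continue
            denom = df(xi) / fi - sum(ms[j] / (xi - xj)
                                      for j, xj in enumerate(xs) if j != i)
            updated.append(xi - ms[i] / denom)
        xs = updated
    return xs

# f(z) = (z - 1)^2 (z + 2) = z^3 - 3z + 2: double root 1, simple root -2.
roots = aberth_multiplicity(lambda z: z ** 3 - 3 * z + 2,
                            lambda z: 3 * z ** 2 - 3,
                            [0.5 + 0.3j, -1.5 - 0.2j], [2, 1])
```

Note that the multiplicities scale both the repulsion terms and the step length, which is what restores fast convergence at the double root.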
3.1. Convergence Analysis
In this section, we discuss the convergence analysis of the family of simultaneous methods SM1–SM3, which is given in the form of the following theorem. Convergence of the method (34) follows from Theorem 2 for the methods SM1–SM3 when the multiplicities of the roots are all one.
Theorem 2. Let be the n simple roots of nonlinear equation (1). If , , , …, are initial approximations of the respective roots and are sufficiently close to the actual roots, then the order of convergence of the methods SM1–SM3 equals six.
Proof. Let and be the errors in the and approximations, respectively. Then, for distinct roots, Thus, for multiple roots, we have from (37) that where from (18) and . Thus, If it is assumed that the absolute values of all errors are of the same order, say , then from (40), we have Hence, the theorem is proved.
4. Computational Aspect
Here, we compare the computational efficiencies of the Petković et al. [21] method and the new methods SM1–SM3. As presented in [21], the efficiency of an iterative method can be estimated using the efficiency index given by where is the computational cost and is the order of convergence of the iterative method. The computational cost is evaluated from the arithmetic operations per iteration, each weighted by its execution time: divisions, multiplications, and additions plus subtractions receive the weights , respectively. For a given polynomial of degree with roots, the numbers of divisions, multiplications, and additions/subtractions per iteration for all roots are denoted by , and . The cost of computation can then be calculated as
Thus, (42) becomes
The number of operations on a complex polynomial with real and complex roots reduces to operations in real arithmetic, which is given in Table 6 for a polynomial of degree m in terms of the dominant terms of order . Applying (44) and the data given in Table 6, we calculate the percentage ratio [21] given by where is the Petković method [21] of order 4. Figures 5(a)–5(d) illustrate these percentage ratios graphically: Figures 5(a)–5(c) show the computational efficiency of the methods SM1–SM3 with respect to the method PJ6, and Figure 5(d) shows the computational efficiency of the method PJ6 with respect to SM1–SM3. It is evident from Figures 5(a)–5(d) that the newly constructed simultaneous methods SM1–SM3 are more efficient than the method of [21].
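To make the cost model concrete, here is a hedged sketch: the weights and operation counts below are placeholders of our own (the paper's actual execution-time weights and the counts of its Table 6 should be substituted), and we assume the Petković-style index E = log(r)/C:

```python
from math import log

# Placeholder weights for division, multiplication, addition/subtraction;
# the paper assigns its own execution-time-based weights.
W_DIV, W_MUL, W_AS = 2.5, 1.0, 0.3

def comp_cost(n_div, n_mul, n_as):
    """Weighted operation count per iteration, in the spirit of (43)."""
    return W_DIV * n_div + W_MUL * n_mul + W_AS * n_as

def efficiency(order, n_div, n_mul, n_as):
    """Assumed efficiency index E = log(r) / C, r the convergence order."""
    return log(order) / comp_cost(n_div, n_mul, n_as)

def percentage_ratio(e_new, e_ref):
    """Percentage gain of one method over another, in the spirit of (45)."""
    return (e_new / e_ref - 1) * 100

# With identical (hypothetical) operation counts, a sixth-order method
# beats a fourth-order one, so the percentage ratio is positive.
gain = percentage_ratio(efficiency(6, 10, 20, 30), efficiency(4, 10, 20, 30))
```

When the costs are equal, the ratio reduces to (log 6 / log 4 - 1) × 100 ≈ 29%, independent of the weights; the weights matter only when the operation counts of the two methods differ.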
5. Numerical Results
Here, some numerical examples are considered in order to demonstrate the performance of our family of one-step fourth-order single root finding methods Q1–Q3 and of the sixth-order simultaneous methods SM1–SM3, respectively. We compare our family of optimal fourth-order single root finding methods Q1–Q3 with the methods E1–E9, and the family of simultaneous methods SM1–SM3 of order six with the method of [21] of the same order. All computations are performed using CAS Maple 18 with 9000 significant digits (64-digit floating point arithmetic in the case of simultaneous methods). For the single root finding methods, the stopping criterion is as follows: whereas for the simultaneous methods: We take for the single root finding methods and for the simultaneous determination of all roots of nonlinear equation (1).
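The computational order of convergence reported in the tables is typically estimated from three consecutive errors as ρ ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1}). A minimal sketch of this estimate, using Newton's method on our own toy problem (not one of the paper's test examples):

```python
from math import log, sqrt

def coc(e0, e1, e2):
    """Computational order of convergence from three consecutive errors."""
    return log(e2 / e1) / log(e1 / e0)

# Newton's method on f(x) = x^2 - 2, tracking the error |x_n - sqrt(2)|.
xs = [1.2]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
errs = [abs(x - sqrt(2)) for x in xs]
rho = coc(errs[1], errs[2], errs[3])   # close to 2 for Newton's method
```

In practice only a few early iterates can be used in double precision, since the errors quickly reach machine accuracy; the paper's tables avoid this by working in multiprecision Maple arithmetic.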
Numerical test examples from [32–34] are provided in Tables 7–13. Tables 7, 9, 11, and 12 present the numerical results for the simultaneous determination of all roots, while Tables 8, 10, and 13 present the results for the single root finding methods. In all tables, CO represents the convergence order; , the number of iterations; , the computational order of convergence; and CPU, the computational time in seconds. Table 14 shows the values of the arbitrary parameters and used in the iterative methods Q1–Q3 for test Examples 1–3.
We also record the CPU execution time; all calculations are done using Maple 18 on an Intel(R) Core(TM) i3-3110M CPU @ 2.4 GHz with a 64-bit operating system. We observe from the tables that the CPU time of the methods SM1–SM3 is comparable to or better than that of the method of [21], which shows the efficiency of our methods SM1–SM3 in comparison.
6. Applications in Engineering
In this section, we consider two examples from engineering.
Example 1 (beam designing model, see [34]).
An engineer considers the problem of the embedment of a sheet-pile wall, which results in the nonlinear function The exact roots of (48) are , , and , as shown in Figure 6.
The initial estimates for are taken as
Example 2 (fractional conversion, see [32]).
The expression described in [14, 35], is the fractional conversion of nitrogen, with hydrogen feed at 250 atm and 227 K. The exact roots of (50) are . The real roots of (50) are shown in Figure 7. The initial estimates are taken as
Example 3 (see [33]; simultaneous determination of distinct and multiple roots).
Here, we consider another standard test function to demonstrate the convergence behavior of the newly constructed methods.
Consider with exact roots as shown in Figure 8. The initial guesses have been taken as For distinct roots, we use method (34), and for multiple roots, method (37). Figures 9(e)–9(g) show the residual fall of the iterative methods Q1–Q3 and E1–E9; Figures 9(a), 9(b), and 9(d) present the residual fall of the simultaneous iterative methods SM1–SM3 and PJ6 for three nonlinear functions when all roots are simple; and Figure 9(c) presents the residual fall for the multiple roots of a nonlinear function.
Table 13 shows the values of parameters used in iterative methods Q1–Q3 for test Examples 1–3.
7. Conclusion
We have developed three families of single-step single root finding methods of optimal convergence order four and three families of simultaneous methods of order six, respectively. From Tables 7–13 and Figures 1–5 and 8, we observe that our methods Q1–Q3 and SM1–SM3 are superior in terms of efficiency, stability, CPU time, and residual error as compared to the methods E1–E9 and the PJ6 method, respectively.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
Authors’ Contributions
All authors contributed equally in the preparation of this manuscript.
Acknowledgments
This research work was supported by all the authors of this manuscript.
References
 B. Bradie, A Friendly Introduction to Numerical Analysis, Pearson Education Inc., New Delhi, India, 2006.
 C. Chun, "Construction of Newton-like iteration methods for solving nonlinear equations," Numerische Mathematik, vol. 104, no. 3, pp. 297–315, 2006.
 C. Chun, "Some variants of King's fourth-order family of methods for nonlinear equations," Applied Mathematics and Computation, vol. 190, no. 1, pp. 57–62, 2007.
 C. Chun, "Some fourth-order iterative methods for solving nonlinear equations," Applied Mathematics and Computation, vol. 195, no. 2, pp. 454–459, 2008.
 B. Ghanbari, "A new general fourth-order family of methods for finding simple roots of nonlinear equations," Journal of King Saud University - Science, vol. 23, no. 2, pp. 395–398, 2011.
 F. Chicharro, A. Cordero, and J. R. Torregrosa, "Drawing dynamical and parameters planes of iterative families and methods," The Scientific World Journal, vol. 2013, Article ID 780153, 11 pages, 2013.
 A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, "New modifications of Potra–Pták's method with optimal fourth and eighth orders of convergence," Journal of Computational and Applied Mathematics, vol. 234, no. 10, pp. 2969–2976, 2010.
 L. O. Jay, "A note on Q-order of convergence," BIT Numerical Mathematics, vol. 41, no. 2, pp. 422–429, 2001.
 R. F. King, "A family of fourth order methods for nonlinear equations," SIAM Journal on Numerical Analysis, vol. 10, no. 5, pp. 876–879, 1973.
 J. Kou, Y. Li, and X. Wang, "A composite fourth-order iterative method for solving nonlinear equations," Applied Mathematics and Computation, vol. 184, no. 2, pp. 471–475, 2007.
 A. K. Maheshwari, "A fourth order iterative method for solving nonlinear equations," Applied Mathematics and Computation, vol. 211, no. 2, pp. 383–391, 2009.
 A. M. Ostrowski, Solution of Equations and Systems of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
 H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the ACM, vol. 21, no. 4, pp. 643–651, 1974.
 I. K. Argyros, Á. A. Magreñán, and L. Orcos, "Local convergence and a chemical application of derivative free root finding methods with one parameter based on interpolation," Journal of Mathematical Chemistry, vol. 54, no. 7, pp. 1404–1416, 2016.
 S. I. Cholakov, "Local and semilocal convergence of Wang–Zheng's method for simultaneous finding polynomial zeros," Symmetry, vol. 11, no. 6, p. 736, 2019.
 M. Cosnard and P. Fraigniaud, "Finding the roots of a polynomial on an MIMD multicomputer," Parallel Computing, vol. 15, no. 1–3, pp. 75–85, 1990.
 S. Kanno, N. Kjurkchiev, and T. Yamamoto, "On some methods for the simultaneous determination of polynomial zeros," Japan Journal of Industrial and Applied Mathematics, vol. 13, no. 2, pp. 267–288, 1995.
 N. A. Mir, R. Muneer, and I. Jabeen, "Some families of two-step simultaneous methods for determining zeros of nonlinear equations," ISRN Applied Mathematics, vol. 2011, Article ID 817174, 11 pages, 2011.
 G. H. Nedzhibov, "Iterative methods for simultaneous computing arbitrary number of multiple zeros of nonlinear equations," International Journal of Computer Mathematics, vol. 90, no. 5, pp. 994–1007, 2013.