## Applied Mathematics for Engineering Problems in Biomechanics and Robotics 2020


Research Article | Open Access

Volume 2020 | Article ID 5853296 | https://doi.org/10.1155/2020/5853296

Mudassir Shams, Nazir Ahmad Mir, Naila Rafiq, A. Othman Almatroud, Saima Akram, "On Dynamics of Iterative Techniques for Nonlinear Equation with Applications in Engineering", Mathematical Problems in Engineering, vol. 2020, Article ID 5853296, 17 pages, 2020. https://doi.org/10.1155/2020/5853296

# On Dynamics of Iterative Techniques for Nonlinear Equation with Applications in Engineering

Revised: 06 May 2020
Accepted: 27 May 2020
Published: 25 Jun 2020

#### Abstract

In this article, we construct an optimal family of iterative methods for finding the single root and then extend this family for determining all the distinct as well as multiple roots of single-variable nonlinear equations simultaneously. Convergence analysis is presented for both the cases to show that the optimal order of convergence is 4 in the case of single root finding methods and 6 for simultaneous determination of all distinct as well as multiple roots of a nonlinear equation. The computational cost, basins of attraction, efficiency, log of residual, and numerical test examples show that the newly constructed methods are more efficient as compared to the existing methods in the literature.

#### 1. Introduction

Solving a nonlinear equation is one of the oldest problems of science in general, and of mathematics in particular. Nonlinear equations have diverse applications in many areas of science and engineering. In general, the roots of (1) are found by iterative schemes, which can be classified into methods that approximate a single root and methods that approximate all roots of (1). In this article, we work on both types of iterative methods. Many iterative methods of different convergence orders already exist in the literature to approximate the roots of (1). Ostrowski defined the efficiency index I to classify these iterative methods in terms of their convergence order k and the number i of evaluations of the function or its derivatives per iteration, i.e.,

An iterative method is said to be optimal, according to the Kung–Traub conjecture, if its convergence order attains the conjectured bound, which gives the optimal order of convergence for its number of function evaluations. The aforementioned methods approximate one root at a time, but mathematicians are also interested in finding all the roots of (1) simultaneously. Simultaneous iterative methods are popular because of their wider region of convergence, are more stable than single root finding methods, and can be implemented for parallel computing as well. More detail on single as well as simultaneous determination of all roots can be found in [1, 10–28] and the references cited therein.
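For concreteness, Ostrowski's efficiency index and the Kung–Traub optimality bound take the standard form

```latex
I = k^{1/i}, \qquad k_{\mathrm{opt}} = 2^{\,i-1},
```

so a method using $i$ function (or derivative) evaluations per iteration is optimal when its order reaches $2^{i-1}$. For Newton's method, $k = 2$ and $i = 2$, giving $I = 2^{1/2} \approx 1.41$, the value quoted below.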

The most famous of the single root finding methods is the classical Newton–Raphson method:
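The Newton–Raphson iteration, $x_{k+1} = x_k - f(x_k)/f'(x_k)$, can be sketched in a few lines; the tolerance and test function below are illustrative choices, not those of the paper:

```python
# Newton-Raphson iteration for f(x) = 0: x_{k+1} = x_k - f(x_k)/f'(x_k)
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stop once the residual is small enough
            break
        x -= fx / df(x)
    return x

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)   # ~ 1.4142135623730951
```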

Method (4) is optimal, with an efficiency index of 1.41, according to the Kung–Traub conjecture. If we use Weierstrass' correction in (4), we get the classical Weierstrass–Dochev method for approximating all roots of nonlinear equation (1) simultaneously:
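A minimal sketch of the Weierstrass–Dochev (Durand–Kerner) iteration for a monic polynomial follows; the test polynomial and initial guesses are illustrative, not those used in the paper:

```python
# Weierstrass-Dochev (Durand-Kerner) iteration: each approximation z_i is
# corrected by W(z_i) = p(z_i) / prod_{j != i} (z_i - z_j), for monic p.
def weierstrass_step(p, z):
    new_z = []
    for i, zi in enumerate(z):
        denom = 1.0
        for j, zj in enumerate(z):
            if j != i:
                denom *= (zi - zj)
        new_z.append(zi - p(zi) / denom)
    return new_z

# Approximate all three cube roots of unity for p(z) = z^3 - 1
p = lambda z: z ** 3 - 1
z = [0.4 + 0.9j, -0.6 + 0.5j, -0.2 - 0.7j]   # rough, distinct initial guesses
for _ in range(30):
    z = weierstrass_step(p, z)
```

Because the correction divides only by differences of the (distinct) approximations, the iteration needs no derivative evaluations, which is one reason the method is convenient for simultaneous root finding.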

Method (6) has convergence order 2. Later, Aberth presented the following third-order simultaneous method:
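Aberth's third-order simultaneous iteration admits a similarly compact sketch (its standard form corrects each approximation by the Newton correction modified with a sum over the other approximations); the test polynomial and starting values are again illustrative:

```python
# Aberth-Ehrlich third-order simultaneous iteration (standard form):
#   z_i <- z_i - 1 / ( f'(z_i)/f(z_i) - sum_{j != i} 1/(z_i - z_j) )
def aberth_step(f, df, z):
    out = []
    for i, zi in enumerate(z):
        s = sum(1.0 / (zi - zj) for j, zj in enumerate(z) if j != i)
        newton_corr = f(zi) / df(zi)           # Newton's correction f/f'
        out.append(zi - 1.0 / (1.0 / newton_corr - s))
    return out

f = lambda z: z ** 3 - 1
df = lambda z: 3 * z ** 2
z = [0.4 + 0.9j, -0.6 + 0.5j, -0.2 - 0.7j]    # distinct initial guesses
for _ in range(20):
    if max(abs(f(zi)) for zi in z) < 1e-12:   # stop before dividing by ~0
        break
    z = aberth_step(f, df, z)
```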

First of all, we construct a family of optimal fourth-order methods using the procedure of weight function and then convert it into simultaneous iterative methods for finding all distinct as well as multiple roots of nonlinear equation (1).

#### 2. Construction of Methods and Convergence Analysis

King  presented the following optimal fourth-order method (abbreviated as E1):

Cordero et al.  gave the fourth-order optimal method as follows (abbreviated as E2):

Chun  in 2008 gave the fourth-order optimal method as follows (abbreviated as E3):

Maheshwari  gave the fourth-order optimal method as follows (abbreviated as E4):

Chun  gave the fourth-order optimal method as follows (abbreviated as E5):

Kou et al.  gave the fourth-order optimal method as follows (abbreviated as E6):

Behzad gave the fourth-order optimal method as follows (abbreviated as E7):

Chun  in 2006 gave the fourth-order optimal method as follows (abbreviated as E8):

Ostrowski gave the fourth-order optimal method as follows (abbreviated as E9):
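For reference, the classical two-step Ostrowski scheme (the standard form of this fourth-order optimal method, which we take E9 to be) can be sketched as follows; the test function f(x) = x³ − 2 is an illustrative choice:

```python
# Classical Ostrowski two-step fourth-order method:
#   y     = x - f(x)/f'(x)                       (Newton substep)
#   x_new = y - f(y)/f'(x) * f(x)/(f(x) - 2 f(y))
def ostrowski_step(f, df, x):
    fx = f(x)
    y = x - fx / df(x)
    fy = f(y)
    return y - (fy / df(x)) * fx / (fx - 2.0 * fy)

f = lambda x: x ** 3 - 2.0       # real root: 2^(1/3) ~ 1.259921
df = lambda x: 3.0 * x ** 2
x = 1.0
for _ in range(5):
    if abs(f(x)) < 1e-14:        # stop before the denominator degenerates
        break
    x = ostrowski_step(f, df, x)
```

Note that the scheme uses three evaluations per iteration (f twice, f' once) for order four, so it is optimal in the Kung–Traub sense.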

Here, we propose the following two families of iterative methods, in which the weight function is to be chosen and the free parameter is a real number:

For the iteration scheme (17), the following convergence theorem holds; the error relation of (17) is derived using the CAS Maple 18.

Theorem 1. Let the sought zero be a simple root of a sufficiently differentiable function in an open interval I. If the initial guess is sufficiently close to this root and the weight function is a real-valued function satisfying the stated conditions, then the convergence order of the family of iterative methods (17) is four, and the error equation is as given below.

Proof. Let the sought zero be a simple root and denote the error of the current approximation accordingly. By Taylor's series expansion of the function about the root, we obtain the expansions (19) and (20). Dividing (19) by (20) and simplifying gives the expression for the first substep. Expanding the weight function about the origin and substituting the result into (25), we obtain the stated error equation. Hence, the theorem is proved.

##### 2.1. The Concrete Fourth-Order Methods

We now construct some concrete forms of the family of methods described by (17). Let us take the function satisfying the conditions of Theorem 1.

Therefore, we obtain the following three new iterative methods, with an arbitrary constant and with the different weight functions given in Table 1:

Table 1: the three weight functions (S. no. 1–3) defining the concrete methods Q1–Q3.

Concrete method 1 (abbreviated as Q1):

Concrete method 2 (abbreviated as Q2):

Concrete method 3 (abbreviated as Q3), with the quantities involved as defined above:

##### 2.2. Complex Dynamical Study of Families of Iterative Methods

Here, we discuss the dynamical behavior of the iterative methods Q1–Q3 and E1–E9. We investigate the regions from which initial estimates must be taken in order to reach the roots of the nonlinear equation; that is, we numerically approximate the basins of attraction of the roots as a qualitative measure of how the iterative methods depend on the choice of initial estimates. To answer this question, we study the dynamics of the methods Q1–Q3 and compare them with E1–E9. Let us first recall some basic concepts of complex dynamics; for more details on the dynamical behavior of iterative methods, one can consult [6, 30, 31]. For a rational map on the complex plane, the orbit of a point is the set of its successive images, and the orbit is said to converge if its limit exists. A point is periodic with minimal period p if its p-th image returns to it and p is the smallest such positive integer; a periodic point with p = 1 is fixed. A fixed point is attracting if the modulus of the derivative of the map there is less than 1, repelling if it is greater than 1, and neutral otherwise. An attracting fixed point defines its basin of attraction as the set of starting points whose orbits tend to it. The closure of the set of repelling periodic points of a rational map is the Julia set, and its complement is the Fatou set. An iterative method applied to (1) induces such a rational map, and we are interested in the basins of attraction of the roots of (1). It is well known that the Fatou set contains the basins of attraction of the roots, whereas on the Julia set, a fractal, the map behaves unstably. For the graphical study, we take a grid of points over a square region of the complex plane and assign a color to each root of (1); each starting point receives the color of the fixed point to which the corresponding orbit of the iterative method converges. The color map Jet is used.
We use a residual-based stopping criterion, and the maximum number of iterations is taken as 20. A point is marked dark blue if the orbit of the iterative method does not converge to any root after 20 iterations, i.e., it remains at a distance greater than the tolerance from every root. A different color is used for each root, so the basins of attraction of the iterative methods are distinguished by their colors; brightness within a basin reflects the number of iterations needed to reach the root of (1), and the darkest blue regions denote lack of convergence to any root. Finally, in Tables 2–5, we present the elapsed time for computing the basins of attraction of the iterative maps Q1–Q3 and E1–E9, measured with the tic-toc command in MATLAB (R2011b). Figures 1–4 show the basins of attraction of the methods Q1–Q3 and E1–E9 for four nonlinear test functions, respectively. By observing the basins of attraction, the stability of the methods can be judged readily; the elapsed times, divergent regions, and brightness of color indicate that Q1–Q3 perform better than E1–E9.
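The basin-classification procedure described above can be sketched for Newton's method on z³ − 1; the 20-iteration cap mirrors the setup in the text, while the coarse 40 × 40 grid, tolerance, and test function are illustrative simplifications:

```python
# Coarse basin-of-attraction sketch for Newton's method on z^3 - 1.
import cmath

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of 1

def newton_basin(z, tol=1e-3, max_iter=20):
    """Return the index of the root the Newton orbit of z reaches, or -1."""
    for _ in range(max_iter):
        if abs(z) < 1e-12:            # derivative 3z^2 vanishes at the origin
            return -1
        z = z - (z ** 3 - 1) / (3 * z ** 2)
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k
    return -1                          # "dark blue": no root reached

# Classify every point of a 40 x 40 grid over [-2, 2] x [-2, 2];
# index -1 maps non-convergent points to the last slot of `counts`.
n = 40
counts = [0, 0, 0, 0]
for a in range(n):
    for b in range(n):
        z0 = complex(-2 + 4 * a / (n - 1), -2 + 4 * b / (n - 1))
        counts[newton_basin(z0)] += 1
```

In a full implementation, each grid point would be colored by its root index and shaded by the iteration count, reproducing the pictures described above.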

Table 2: elapsed time (in seconds) for the basins of attraction in Figure 1.

| Method | Q1 | Q2 | Q3 | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Time (s) | 17.4013 | 12.1801 | 11.8321 | 13.745 | 11.8612 | 11.6603 | 23.2042 | 12.5878 | 15.1456 | 12.6226 | 17.9360 | 12.9451 |

Table 3: elapsed time (in seconds) for the basins of attraction in Figure 2.

| Method | Q1 | Q2 | Q3 | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Time (s) | 12.5548 | 10.4487 | 9.4843 | 10.5134 | 9.0246 | 11.4686 | 22.6099 | 12.3507 | 12.5471 | 11.23315 | 8.8094 | 8.6127 |

Table 4: elapsed time (in seconds) for the basins of attraction in Figure 3.

| Method | Q1 | Q2 | Q3 | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Time (s) | 20.2058 | 14.0826 | 12.7465 | 14.6167 | 13.4429 | 14.4602 | 22.1014 | 15.6955 | 19.3235 | 15.0689 | 20.4771 | 15.9503 |

Table 5: elapsed time (in seconds) for the basins of attraction in Figure 4.

| Method | Q1 | Q2 | Q3 | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Time (s) | 35.6621 | 39.6453 | 40.0639 | 33.8435 | 39.1895 | 38.9645 | 67.9233 | 46.3621 | 56.9810 | 33.9914 | 42.2391 | 28.6589 |

Figures 1(a)–1(l) present the basins of attraction of the methods Q1–Q3 and E1–E9 for the first nonlinear test function; brighter colors within a basin indicate fewer iterations to convergence. Table 2 reports the corresponding elapsed times.

Figures 2(a)–2(l) present the basins of attraction of the methods Q1–Q3 and E1–E9 for the second nonlinear test function, with brightness again indicating fewer iterations. Table 3 reports the corresponding elapsed times.

Figures 3(a)–3(l) present the basins of attraction of the methods Q1–Q3 and E1–E9 for the third nonlinear test function, with brightness again indicating fewer iterations. Table 4 reports the corresponding elapsed times.

Figures 4(a)–4(l) present the basins of attraction of the methods Q1–Q3 and E1–E9 for the fourth nonlinear test function, with brightness again indicating fewer iterations. Table 5 reports the corresponding elapsed times.

#### 3. Generalization to Simultaneous Methods

Suppose nonlinear equation (1) has n roots. Then the following approximations hold:

This implies

This gives the Aberth–Ehrlich method (7):

Now from (31), an approximation of is formed by replacing with as follows:

Using (33) in (4), we have the following method for finding all the distinct roots:

In the case of multiple roots, we have the following method, in which the multiplicity of each root appears in the correction term. Using this correction, we obtain the following new family of simultaneous iterative methods for extracting multiple roots of nonlinear equation (1):

Thus, we have constructed three new simultaneous iterative methods (37), abbreviated as SM1–SM3.

##### 3.1. Convergence Analysis

In this section, we discuss the convergence analysis of the family of simultaneous methods SM1–SM3, given in the form of the following theorem. The convergence of method (34) follows from Theorem 2 for SM1–SM3 when the multiplicities of the roots are one.

Theorem 2. Let nonlinear equation (1) have n simple roots. If the initial approximations of the roots are sufficiently close to the actual roots, then the order of convergence of the methods SM1–SM3 equals six.

Proof. Let the errors in the current and the new approximations be denoted accordingly. Then, for distinct roots, the stated error relation holds. For multiple roots, we have from (37) the relation below, in which the factor from (18) enters. If the absolute values of all errors are assumed to be of the same order, then from (40) the error after one iteration is of sixth order. Hence, the theorem is proved.

#### 4. Computational Aspect

Here, we compare the computational efficiency of the Petkovic et al. method and the new methods SM1–SM3. As presented in the literature, the efficiency of an iterative method can be estimated by the efficiency index below, which depends on the computational cost and the order of convergence of the method. The computational cost is evaluated from the arithmetic operations per iteration, each weighted by the execution time of the operation; separate weights are used for division, multiplication, and addition plus subtraction. For a given polynomial, the numbers of divisions, multiplications, and additions/subtractions per iteration for all roots are counted, and the cost of computation is calculated as

Thus, (42) becomes

The number of operations for a complex polynomial with real and complex roots reduces to operations in real arithmetic, as given in Table 6 for a polynomial of degree m, keeping the dominant term. Applying (44) and the data given in Table 6, we calculate the percentage ratio defined below, the reference being the Petkovic method PJ6. Figures 5(a)–5(d) illustrate these percentage ratios graphically: Figures 5(a)–5(c) show the computational efficiency of the methods SM1–SM3 with respect to PJ6, and Figure 5(d) shows the computational efficiency of PJ6 with respect to SM1–SM3. It is evident from Figures 5(a)–5(d) that the newly constructed simultaneous methods SM1–SM3 are more efficient than PJ6.
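The cost model and percentage ratio can be sketched as follows; the operation weights and counts below are hypothetical placeholders (the actual values appear in Table 6 and the cited literature):

```python
# Efficiency index E = r^(1/C) with cost C = w_as*AS + w_m*M + w_d*D,
# and the log-based percentage ratio between two methods.
# The weights W_AS, W_M, W_D are HYPOTHETICAL placeholder values.
import math

W_AS, W_M, W_D = 1.0, 2.5, 4.0        # assumed weights for +/-, *, /

def cost(as_m, m_m, d_m):
    return W_AS * as_m + W_M * m_m + W_D * d_m

def efficiency(order, as_m, m_m, d_m):
    return order ** (1.0 / cost(as_m, m_m, d_m))

def pct_ratio(e_new, e_ref):
    """Percentage gain of the new method over the reference, via log E."""
    return 100.0 * (math.log(e_new) / math.log(e_ref) - 1.0)

# With identical (hypothetical) operation counts, a sixth-order method beats
# a fourth-order one:
gain = pct_ratio(efficiency(6, 10, 5, 2), efficiency(4, 10, 5, 2))
```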

Table 6: convergence order (CO) and dominant per-iteration operation counts.

| Method | CO | ASm | Mm | Dm |
|---|---|---|---|---|
| SM1 | 6 | - | - | - |
| SM2 | 6 | - | - | - |
| SM3 | 6 | - | - | - |
| PJ6 | 6 | - | - | - |

#### 5. Numerical Results

Here, some numerical examples are considered in order to demonstrate the performance of our family of one-step fourth-order single root finding methods Q1–Q3 and of the sixth-order simultaneous methods SM1–SM3, respectively. We compare the optimal fourth-order single root finding methods Q1–Q3 with the methods E1–E9, and the family of simultaneous methods SM1–SM3 of order six with the method PJ6 of the same order. All computations are performed using CAS Maple 18 with 9000 significant digits (64-digit floating-point arithmetic in the case of simultaneous methods). For single root finding methods, the stopping criterion is the smallness of the residual, whereas for simultaneous methods it is the smallness of the maximum correction over all roots; different tolerances are used for single root finding and for the simultaneous determination of all roots of nonlinear equation (1).

Numerical test examples are provided in Tables 7–13. Tables 7, 9, 11, and 12 present the numerical results for simultaneous determination of all roots, while Tables 8, 10, and 13 present those for single root finding methods. In all tables, CO denotes the convergence order; n, the number of iterations; ρ, the computational order of convergence; and CPU, the computational time in seconds. Table 14 shows the values of the arbitrary parameters used in the iterative methods Q1–Q3 for test Examples 1–3.
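The computational order of convergence ρ reported in the tables is conventionally estimated from successive iterates; a minimal sketch, using Newton's method (order two) as a sanity check:

```python
# Computational order of convergence (COC) from four successive iterates:
#   rho ~ ln(|x3 - x2| / |x2 - x1|) / ln(|x2 - x1| / |x1 - x0|)
import math

def coc(x0, x1, x2, x3):
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))

# Sanity check with Newton's method (order 2) on f(x) = x^2 - 2
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
rho = coc(*xs)   # should be close to 2
```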

Table 7: simultaneous methods with initial estimates s1,2^(0) = 2.5, s3^(0) = −7.4641, and s4^(0) = −0.5359.

| Method | CO | CPU | n | e1 | e2 | e3 |
|---|---|---|---|---|---|---|
| PJ6 | 6 | 0.032 | 5 | 2.0e−40 | 1.1e−38 | 1.5e−39 |
| SM1–SM3 | 6 | 0.015 | 5 | 0.0 | 3.2e−27 | 0.0 |

Table 8: single root finding methods.

| Method | e1 | e2 | CPU | ρ |
|---|---|---|---|---|
| Q1 | 2.0e−58 | 1.0e−76 | 0.062 | 4.0 |
| Q2 | 3.8e−63 | 2.0e−81 | 0.031 | 4.0 |
| Q3 | 2.1e−60 | 1.1e−78 | 0.031 | 4.0 |
| E1 | 1.6e−67 | 8.5e−86 | 0.062 | 4.0 |
| E2 | 4.8e−66 | 2.4e−84 | 0.031 | 4.0 |
| E3 | 1.8e−67 | 9.5e−86 | 0.031 | 4.0 |
| E4 | 3.6e−66 | 1.8e−84 | 0.063 | 4.0 |
| E5 | 3.2e−65 | 1.6e−83 | 0.531 | 4.0 |
| E6 | 3.1e−67 | 1.9e−85 | 0.078 | 4.0 |
| E7 | 1.3e−64 | 7.1e−83 | 0.046 | 4.0 |
| E8 | 3.2e−65 | 1.6e−83 | 0.047 | 4.0 |
| E9 | 6.9e−69 | 3.5e−87 | 0.062 | 4.0 |

Table 9: simultaneous methods.

| Method | CO | CPU | n | e1 | e2 | e3 | e4 |
|---|---|---|---|---|---|---|---|
| PJ6 | 6 | 0.031 | 4 | 4.6e−62 | 7.0e−62 | 3.7e−93 | 0.0 |
| SM1–SM3 | 6 | 0.015 | 4 | 0.0 | 0.0 | 1.0e−137 | 1.0e−133 |

Table 10: single root finding methods.

| Method | e1 | e2 | CPU | ρ |
|---|---|---|---|---|
| Q1 | 9.5e−1947 | 3.0e−7783 | 0.079 | 4.0 |
| Q2 | 1.3e−2120 | 2.9e−8477 | 0.074 | 4.0 |
| Q3 | 5.9e−2002 | 3.2e−8004 | 0.073 | 4.0 |
| E1 | 4.1e−2016 | 6.5e−8061 | 0.078 | 4.0 |
| E2 | 1.4e−1934 | 1.6e−7734 | 0.062 | 4.0 |
| E3 | 2.8e−2016 | 1.5e−8061 | 0.047 | 4.0 |
| E4 | 2.1e−1934 | 9.1e−7734 | 0.063 | 4.0 |
| E5 | 8.6e−1906 | 2.6e−7619 | 0.062 | 4.0 |
| E6 | 7.4e−1970 | 8.4e−7876 | 0.078 | 4.0 |
| E7 | 1.0e−1881 | 6.7e−7523 | 0.079 | 4.0 |
| E8 | 8.6e−1906 | 2.6e−7619 | 0.047 | 4.0 |
| E9 | 2.9e−2083 | 1.1e−8329 | 0.078 | 4.0 |

Table 11: simultaneous methods.

| Method | CO | CPU | n | e1 | e2 | e3 |
|---|---|---|---|---|---|---|
| PJ6 | 6 | 0.047 | 3 | 0.01 | 0.01 | 0.2e−5 |
| SM1–SM3 | 6 | 0.031 | 3 | 8.8e−3 | 5.1e−5 | 4.3e−4 |

Table 12: simultaneous methods.

| Method | CO | CPU | n | e1 | e2 | e3 |
|---|---|---|---|---|---|---|
| PJ6 | 6 | 0.047 | 3 | 5.5e−16 | 2.7e−8 | 2.2e−17 |
| SM1–SM3 | 6 | 0.031 | 3 | 1.8e−21 | 1.7e−13 | 3.9e−18 |

Table 13: single root finding methods.

| Method | e1 | e2 | CPU | ρ |
|---|---|---|---|---|
| Q1 | 6.7e−519 | 7.8e−2074 | 0.422 | 4.0 |
| Q2 | 4.2e−557 | 5.1e−2227 | 0.734 | 4.0 |
| Q3 | 8.8e−533 | 1.7e−2129 | 1.328 | 4.0 |
| E1 | 1.2e−536 | 7.5e−2145 | 1.328 | 4.0 |
| E2 | 9.7e−516 | 3.5e−2061 | 1.344 | 4.0 |
| E3 | 1.0e−536 | 2.7e−2145 | 1.344 | 4.0 |
| E4 | 1.6e−575 | 3.1e−2050 | 1.375 | 4.0 |
| E5 | 8.0e−507 | 1.9e−2025 | 0.906 | 4.0 |
| E6 | 2.2e−525 | 8.4e−2100 | 1.375 | 4.0 |
| E7 | 8.2e−499 | 2.4e−1993 | 0.891 | 4.0 |
| E8 | 8.0e−507 | 1.9e−2025 | 1.1328 | 4.0 |
| E9 | 2.5e−550 | 8.3e−2200 | 1.329 | 4.0 |

Table 14: parameter values used in Q1–Q3 for Examples 1–3.

| Method | Example 1 β | Example 1 α | Example 2 β | Example 2 α | Example 3 β | Example 3 α |
|---|---|---|---|---|---|---|
| Q1 | 10.005 | — | 10.005 | — | 10.005 | — |
| Q2 | 5.005 | 1.001 | 5.005 | 1.001 | 5.005 | 1.001 |
| Q3 | −5.005 | — | 15.0005 | — | −5.005 | — |

We also calculated the CPU execution time; all calculations were done using Maple 18 on an Intel(R) Core(TM) i3-3110M CPU @ 2.4 GHz with a 64-bit operating system. We observe from the tables that the CPU time of the methods SM1–SM3 is comparable to or better than that of the method PJ6, showing the efficiency of our methods SM1–SM3.

#### 6. Applications in Engineering

In this section, we consider two examples from engineering.

Example 1. (beam designing model, see ).
An engineer considers a problem in which the embedment s of a sheet-pile wall results in a nonlinear function given below. The exact roots of (48) are as shown in Figure 6.
The initial estimates for are taken as

Example 2. (fractional conversion, see ).
As described in [14, 35], the expression below gives the fractional conversion of nitrogen, with hydrogen feed at 250 atm and 227 K. The real roots of (50) are shown in Figure 7. The initial estimates are taken as

Example 3. (see , for simultaneous determination of distinct and multiple roots).
Here, we consider another standard test function for the demonstration of convergence behavior of newly constructed methods.
Consider the test function, with exact roots as shown in Figure 8. The initial guessed values have been taken accordingly. For distinct roots, we use method (34), and for multiple roots, method (37). Figures 9(e)–9(g) show the residual fall of the iterative methods Q1–Q3 and E1–E9; Figures 9(a), 9(b), and 9(d) present the residual fall of the simultaneous iterative methods SM1–SM3 and PJ6 for three nonlinear functions with simple roots; and Figure 9(c) presents the residual fall for multiple roots.
Table 14 shows the values of the parameters used in the iterative methods Q1–Q3 for test Examples 1–3.

#### 7. Conclusion

We have developed three families of single-step single root finding methods of optimal convergence order four and three families of simultaneous methods of order six, respectively. From Tables 7–13 and Figures 1–5 and 8, we observe that our methods Q1–Q3 and SM1–SM3 are superior in terms of efficiency, stability, CPU time, and residual error as compared to the methods E1–E9 and PJ6, respectively.

#### Data Availability

No data were used to support this study.

#### Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

#### Authors’ Contributions

All authors contributed equally in the preparation of this manuscript.

#### Acknowledgments

This research work was supported by all the authors of this manuscript.

#### References

1. B. Bradie, A Friendly Introduction to Numerical Analysis, Pearson Education Inc., New Delhi, India, 2006.
2. C. Chun, “Construction of Newton-like iteration methods for solving nonlinear equations,” Numerische Mathematik, vol. 104, no. 3, pp. 297–315, 2006.
3. C. Chun, “Some variants of King’s fourth-order family of methods for nonlinear equations,” Applied Mathematics and Computation, vol. 190, no. 1, pp. 57–62, 2007.
4. C. Chun, “Some fourth-order iterative methods for solving nonlinear equations,” Applied Mathematics and Computation, vol. 195, no. 2, pp. 454–459, 2008.
5. B. Ghanbari, “A new general fourth-order family of methods for finding simple roots of non-linear equations,” Journal of King Saud University-Science, vol. 23, no. 2, pp. 395–398, 2011.
6. F. Chicharro, A. Cordero, and J. R. Torregrosa, “Drawing dynamical and parameters planes of iterative families and methods,” The Scientific World Journal, vol. 2013, Article ID 780153, 11 pages, 2013.
7. A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, “New modifications of Potra-Pták’s method with optimal fourth and eighth orders of convergence,” Journal of Computational and Applied Mathematics, vol. 234, no. 10, pp. 2969–2976, 2010.
8. L. O. Jay, “A note on Q-order of convergence,” BIT Numerical Mathematics, vol. 41, no. 2, pp. 422–429, 2001.
9. R. F. King, “A family of fourth order methods for nonlinear equations,” SIAM Journal on Numerical Analysis, vol. 10, no. 5, pp. 876–879, 1973.
10. J. Kou, Y. Li, and X. Wang, “A composite fourth-order iterative method for solving non-linear equations,” Applied Mathematics and Computation, vol. 184, no. 2, pp. 471–475, 2007.
11. A. K. Maheshwari, “A fourth order iterative method for solving nonlinear equations,” Applied Mathematics and Computation, vol. 211, no. 2, pp. 383–391, 2009.
12. A. M. Ostrowski, Solution of Equations and Systems of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
13. H. T. Kung and J. F. Traub, “Optimal order of one-point and multipoint iteration,” Journal of the ACM, vol. 21, no. 4, pp. 643–651, 1974.
14. I. K. Argyros, Á. A. Magreñán, and L. Orcos, “Local convergence and a chemical application of derivative free root finding methods with one parameter based on interpolation,” Journal of Mathematical Chemistry, vol. 54, no. 7, pp. 1404–1416, 2016.
15. S. I. Cholakov, “Local and semilocal convergence of Wang-Zheng’s method for simultaneous finding polynomial zeros,” Symmetry, vol. 11, no. 6, p. 736, 2019.
16. M. Cosnard and P. Fraigniaud, “Finding the roots of a polynomial on an MIMD multicomputer,” Parallel Computing, vol. 15, no. 1–3, pp. 75–85, 1990.
17. S. Kanno, N. Kjurkchiev, and T. Yamamoto, “On some methods for the simultaneous determination of polynomial zeros,” Japan Journal of Industrial and Applied Mathematics, vol. 13, no. 2, pp. 267–288, 1995.
18. N. A. Mir, R. Muneer, and I. Jabeen, “Some families of two-step simultaneous methods for determining zeros of nonlinear equations,” ISRN Applied Mathematics, vol. 2011, Article ID 817174, 11 pages, 2011.
19. G. H. Nedzhibov, “Iterative methods for simultaneous computing arbitrary number of multiple zeros of nonlinear equations,” International Journal of Computer Mathematics, vol. 90, no. 5, pp. 994–1007, 2013.