Special Issue

## Applied Mathematics for Engineering Problems in Biomechanics and Robotics 2020


Research Article | Open Access

Volume 2020 | Article ID 3524324 | https://doi.org/10.1155/2020/3524324

Naila Rafiq, Saima Akram, Nazir Ahmad Mir, Mudassir Shams, "Study of Dynamical Behavior and Stability of Iterative Methods for Nonlinear Equation with Applications in Engineering", Mathematical Problems in Engineering, vol. 2020, Article ID 3524324, 20 pages, 2020. https://doi.org/10.1155/2020/3524324

# Study of Dynamical Behavior and Stability of Iterative Methods for Nonlinear Equation with Applications in Engineering

Guest Editor: Carlos Llopis-Albert
Revised: 14 May 2020
Accepted: 20 May 2020
Published: 27 Jul 2020

#### Abstract

In this article, we first construct a family of optimal 2-step iterative methods for finding a single root of the nonlinear equation using the procedure of weight function. We then extend these methods for determining all roots simultaneously. Convergence analysis is presented for both cases to show that the order of convergence is 4 in case of the single-root finding method and is 6 for simultaneous determination of all distinct as well as multiple roots of a nonlinear equation. The dynamical behavior is presented to analyze the stability of fixed and critical points of the rational operator of one-point iterative methods. The computational cost, basins of attraction, efficiency, log of the residual, and numerical test examples show that the newly constructed methods are more efficient as compared with the existing methods in the literature.

#### 1. Introduction

Solving the nonlinear equation

f(x) = 0 (1)

is one of the oldest problems of engineering in general and of mathematics in particular. Such nonlinear equations have diverse applications in many areas of science and engineering. To find the roots of (1), we turn to iterative schemes, which can be classified as those approximating a single root and those approximating all roots of (1). In this article, we work on both types of iterative schemes. Many iterative methods of different convergence orders already exist in the literature to approximate the roots of (1). Ostrowski defined the efficiency index I of an iterative method in terms of its convergence order k and the number of function evaluations per iteration, say m, i.e.,

I = k^(1/m). (2)

An iterative method is said to be optimal according to the Kung–Traub conjecture if

k = 2^(m−1) (3)

holds. The aforementioned methods approximate one root at a time. However, mathematicians are also interested in finding all roots of (1) simultaneously, since simultaneous iterative methods have a wider region of convergence, are more stable than single-root finding methods, and can be implemented for parallel computing as well. More detail on single as well as simultaneous determination of all roots can be found in [1, 12–24] and the references cited therein.
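The efficiency index and the Kung–Traub optimality condition above can be sketched in a few lines; the numeric comparison of Newton's method against an optimal two-step method is an illustration, not part of the paper:

```python
# Ostrowski efficiency index I = k**(1/m): order k, m evaluations per
# iteration, and the Kung-Traub optimality test k == 2**(m - 1).

def efficiency_index(k, m):
    return k ** (1.0 / m)

def is_kung_traub_optimal(k, m):
    # Kung-Traub conjecture: an m-evaluation method is optimal if k = 2**(m-1)
    return k == 2 ** (m - 1)

# Newton: order 2 with 2 evaluations; an optimal two-step method: order 4
# with 3 evaluations -- the latter has the higher efficiency index.
print(efficiency_index(2, 2))        # ~1.414
print(efficiency_index(4, 3))        # ~1.587
print(is_kung_traub_optimal(4, 3))   # True
```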

The most famous single-root finding method is the classical Newton–Raphson method:

x_{k+1} = x_k − f(x_k)/f'(x_k), k = 0, 1, 2, …. (4)
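A minimal sketch of the classical Newton–Raphson method (4); the example function f(x) = x² − 2 and the tolerance are hypothetical choices for illustration:

```python
# Newton-Raphson iteration: x_{k+1} = x_k - f(x_k)/f'(x_k).

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:      # stop once the update is below tolerance
            break
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)   # converges to sqrt(2) ~ 1.41421356
```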

Method (4) requires one evaluation of the function and one of its first derivative per iteration to achieve the optimal order 2, with efficiency index 2^(1/2) ≈ 1.414 according to the Kung–Traub conjecture. Using the Weierstrass correction

W(x_i) = f(x_i) / ∏_{j≠i} (x_i − x_j), i = 1, …, n, (5)

in place of the Newton correction in (4), we get the classical Weierstrass–Dochev method to approximate all roots of nonlinear equation (1), given as

x_i^(k+1) = x_i^(k) − W(x_i^(k)), i = 1, …, n, k = 0, 1, 2, …. (6)

Method (6) has convergence order 2. Later, Aberth and Ehrlich presented the third-order simultaneous method given as

x_i^(k+1) = x_i^(k) − N_i^(k) / (1 − N_i^(k) · Σ_{j≠i} 1/(x_i^(k) − x_j^(k))), i = 1, …, n, (7)

where N_i^(k) = f(x_i^(k))/f'(x_i^(k)) is the Newton correction.
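Methods (6) and (7) can be sketched together; the cubic p(x) = x³ − 6x² + 11x − 6 (roots 1, 2, 3) and the starting guesses below are hypothetical test data:

```python
# Simultaneous root finders sketched from methods (6) and (7) above:
# Weierstrass-Dochev (order 2) and Aberth-Ehrlich (order 3).

def weierstrass_step(p, xs):
    new = []
    for i, xi in enumerate(xs):
        denom = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                denom *= (xi - xj)
        new.append(xi - p(xi) / denom)   # Weierstrass correction W(x_i)
    return new

def aberth_step(p, dp, xs):
    new = []
    for i, xi in enumerate(xs):
        s = sum(1.0 / (xi - xj) for j, xj in enumerate(xs) if j != i)
        n_over = p(xi) / dp(xi)          # Newton correction N_i
        new.append(xi - n_over / (1.0 - n_over * s))
    return new

p = lambda x: x ** 3 - 6 * x ** 2 + 11 * x - 6
dp = lambda x: 3 * x ** 2 - 12 * x + 11

xs = [0.9, 2.2, 3.3]                     # distinct starting guesses
for _ in range(8):
    xs = aberth_step(p, dp, xs)
print(sorted(round(x, 8) for x in xs))   # -> [1.0, 2.0, 3.0]
```

Both steps update all approximations at once from the previous sweep, which is what makes these methods suitable for parallel computation.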

The main aim of this paper is to construct a family of optimal fourth-order single-root finding methods using the procedure of weight functions and then to convert them into simultaneous iterative methods for finding all distinct as well as multiple roots of nonlinear equation (1). Using complex dynamics, we will be able to choose those values of the parameters of the iterative methods which give a wider convergence region for the initial approximations.

#### 2. Construction of the Method and Convergence Analysis

King presented the following optimal fourth-order method (abbreviated as MM1):

Chun gave the following fourth-order optimal method (abbreviated as MM2):

Cordero et al. proposed the following fourth-order optimal method (abbreviated as MM3):

Chun introduced the following fourth-order optimal method (abbreviated as MM4):

Here, we propose the following family of iterative methods:

For iterative scheme (12), we have the following convergence theorem:

Theorem 1. Let α be a simple root of a sufficiently differentiable function f in an open interval I. If x_0 is sufficiently close to α and the weight function is real-valued and satisfies the required conditions, then the convergence order of the family of iterative methods (12) is 4, with an error equation of the form e_{k+1} = C e_k^4 + O(e_k^5), where C depends on the derivatives of f at α.

Proof. Let α be a simple root of f, and write e = x − α. By Taylor series expansion of f about α, we obtain (14). Dividing (14) by (10) and simplifying, we obtain the required expansions. Substituting these into equation (12) and simplifying gives the stated error equation. Hence, fourth-order convergence is proved.

##### 2.1. The Concrete Fourth-Order Family of Methods

We now construct some concrete forms of the family of fourth-order methods (12) by choosing weight functions containing an arbitrary real number β, as provided in Table 1, satisfying the conditions of Theorem 1. We thus obtain the following five new families of iterative methods:

- Method 1 (abbreviated as BB1):
- Method 2 (abbreviated as BB2):
- Method 3 (abbreviated as BB3):
- Method 4 (abbreviated as BB4):
- Method 5 (abbreviated as BB5):

Table 1: Weight functions B1–B5, each containing the real parameter β, corresponding to methods BB1–BB5.
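The paper's own weight functions (Table 1) are not reproduced above, so the following sketch of a two-step scheme with the shape suggested by (12) substitutes H(t) = 1/(1 − 2t), Ostrowski's classical fourth-order choice; the test function is likewise an assumption for illustration:

```python
# Hedged sketch of a two-step fourth-order scheme:
#   y = x - f(x)/f'(x),  x_new = y - H(t) * f(y)/f'(x),  t = f(y)/f(x),
# with the assumed weight H(t) = 1/(1 - 2t) (Ostrowski's choice).

def two_step_fourth_order(f, fprime, x0, iters=6):
    x = x0
    for _ in range(iters):
        fx = f(x)
        if fx == 0.0:            # already at a root
            break
        dfx = fprime(x)
        y = x - fx / dfx         # Newton predictor step
        fy = f(y)
        t = fy / fx
        x = y - (1.0 / (1.0 - 2.0 * t)) * fy / dfx   # weighted corrector
    return x

root = two_step_fourth_order(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, 1.0)
print(abs(root ** 3 - 2))   # residual near machine precision
```

Per iteration this uses two function evaluations and one derivative evaluation, so order 4 with three evaluations is optimal in the Kung–Traub sense.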

#### 3. Construction of Simultaneous Methods

Suppose nonlinear equation (1) has n roots. Then, the function and its derivative can be approximated as

This implies that

Now, an approximation is formed by replacing the exact roots with their current approximations as follows:

Using (27) in (4), we have

Using corrections from BB1 to BB5, we get the following five simultaneous iterative methods for extracting all distinct as well as multiple roots of nonlinear equation (1):

Thus, we have constructed five new simultaneous iterative methods (29), abbreviated as M1–M5.

##### 3.1. Convergence Analysis

In this section, the convergence analysis of the family of simultaneous methods (M1–M5) is given in the form of the following theorem. Obviously, convergence of methods (28) follows from the convergence of methods (29) by Theorem 2 when the multiplicities of the roots are one.

Theorem 2. Let the roots of nonlinear equation (1) be simple. If the initial approximations are sufficiently close to the respective roots, then the order of convergence of methods (29) is six.

Proof. Let e_i and e_i' denote the errors in the current and new approximations of the i-th root, respectively. Then, for distinct roots, we obtain (30); for multiple roots, the corresponding relation follows from (29). If the absolute values of all errors are assumed to be of the same order, say |e_i| = O(|e|), then from (30) we obtain (32). Thus, (32) shows that the convergence order of methods M1–M5 is six. Hence, the theorem is proved.

#### 4. Complex Dynamical Study of Families of Iterative Methods

Here, we discuss the stability of the family of iterative methods (BB1) only, within the framework of complex dynamics. The rational map arising from iterative method (BB1) is given by (33).

We recall some basic concepts of this theory; detailed information can be found in [2, 4, 6, 8]. Take a rational function R: Ĉ → Ĉ, where Ĉ denotes the Riemann sphere. The orbit of a point z_0 is the set {z_0, R(z_0), R^2(z_0), …}. A point z* is called a fixed point of R if R(z*) = z*. A T-periodic point is a point z satisfying R^T(z) = z but R^t(z) ≠ z for t < T. If z* is a fixed point of R, then it is
(i) superattracting if |R'(z*)| = 0
(ii) attracting if |R'(z*)| < 1
(iii) repulsive if |R'(z*)| > 1
(iv) neutral if |R'(z*)| = 1
(v) a strange fixed point if it is not associated with any root of nonlinear equation (1)

An attracting fixed point z* defines a basin of attraction, A(z*), as the set of starting points whose orbits tend to z*.
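The classification of fixed points above can be checked numerically by estimating |R'| at a candidate point; the use of Newton's operator for the hypothetical f(z) = z² − 1 is an assumed example, not one of the paper's operators:

```python
# Classify a fixed point of a rational operator R by |R'(z)|:
# superattracting = 0, attracting < 1, repulsive > 1, neutral = 1.

def classify(Rprime_abs, tol=1e-9):
    if Rprime_abs < tol:
        return "superattracting"
    if Rprime_abs < 1.0 - tol:
        return "attracting"
    if abs(Rprime_abs - 1.0) <= tol:
        return "neutral"
    return "repulsive"

def numeric_abs_derivative(R, z, h=1e-5):
    # central-difference estimate of |R'(z)|
    return abs((R(z + h) - R(z - h)) / (2 * h))

N = lambda z: z - (z * z - 1) / (2 * z)   # Newton operator for z**2 - 1

print(classify(numeric_abs_derivative(N, 1.0)))    # roots are superattracting
print(classify(numeric_abs_derivative(N, -1.0)))
```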

Furthermore, to draw the dynamical plane of the rational operator corresponding to an iterative method, the complex plane is divided into a mesh, with the real part of each value along the x-axis and the imaginary part along the y-axis. The initial estimates are colored depending on where their orbits converge, and thus the basins of attraction of the corresponding iterative method are obtained. The scaling theorem allows a suitable change of coordinates, reducing the study of the dynamics of a general map to that of a specific family of conjugate maps.

Theorem 3 (scaling theorem). Let f be an analytic function on the Riemann sphere, and let T(z) = αz + β, α ≠ 0, be an affine map. Take g(z) = (f ∘ T)(z); then, the fixed-point operator R_f is affine conjugate to R_g by T, i.e., (T ∘ R_g ∘ T^(−1))(z) = R_f(z).

Since iterative method (33) satisfies the scaling theorem, the dynamical study of iterative function (BB1) can be carried out for the polynomial p(z) = (z − a)(z − b), where a ≠ b. One-point iterative method (BB1) has a universal Julia set if there exists a rational map which is conjugate to it by a Möbius transformation.
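The scaling theorem can be verified numerically for a concrete operator; here Newton's operator stands in for the paper's methods, and f, T, and the test points are hypothetical data:

```python
# Numerical check of the scaling theorem: with g = f o T and T(z) = a*z + b
# affine, the Newton operators satisfy T o N_g o T^{-1} = N_f.

a, b = 2.0, 3.0
f  = lambda z: z ** 3 - z + 1
df = lambda z: 3 * z ** 2 - 1
T     = lambda z: a * z + b
T_inv = lambda w: (w - b) / a
g  = lambda z: f(T(z))
dg = lambda z: a * df(T(z))          # chain rule

N_f = lambda z: z - f(z) / df(z)     # Newton fixed-point operators
N_g = lambda z: z - g(z) / dg(z)

w = 1.7 + 0.4j                       # arbitrary test point
lhs = T(N_g(T_inv(w)))
rhs = N_f(w)
print(abs(lhs - rhs))                # ~0: affine conjugacy holds
```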

Theorem 4. The rational map arising from (33) applied to p(z) = (z − a)(z − b), a ≠ b, is conjugate via the Möbius transformation h(z) = (z − a)/(z − b) to the rational operator (36).

Proof. Consider the Möbius transformation h(z) = (z − a)/(z − b), regarded as a map from the Riemann sphere to itself, which satisfies h(a) = 0, h(b) = ∞, and h(∞) = 1. Then, we obtain the conjugate operator (36).
Similarly, we obtain the following conclusions.

Theorem 5. The rational maps arising from (BB2) to (BB5) applied to p(z) = (z − a)(z − b), a ≠ b, are conjugate via the Möbius transformation h(z) to the corresponding rational operators.

Theorem 6. The rational maps arising from (MM1) to (MM4) applied to p(z) = (z − a)(z − b), a ≠ b, are conjugate via the Möbius transformation h(z) to the following:

The unions of the respective stability functions of all the strange fixed points of rational maps (36)–(41) are graphed in Figures 1(a)–1(f).

The fixed points of rational function (36) are z = 0, z = ∞, and a number of strange fixed points. For the stability of the fixed points of iterative method (36), we calculate the derivative of the rational operator, given in (45).

It is evident from (36) that z = 0 and z = ∞ are always superattracting fixed points, but the stability of the other fixed points depends on the value of the parameter. Evaluating the operator at the strange fixed point gives (46).

Analyzing (46), as the parameter tends to infinity, we obtain horizontal asymptotes.

In the following result, we present the stability of the strange fixed point.

Theorem 7. The character of the strange fixed point is as follows:
(i) If …, then the point is an attractor, and it can be a superattractor for …
(ii) When …, it is a parabolic point
(iii) If … and …, then it is repulsive

Proof. From (45), we evaluate the derivative at the strange fixed point. Considering an arbitrary complex value of the parameter, the stated bounds follow. Finally, as the parameter varies, the point is repulsive, except for the value at which it is not a fixed point.
The stability functions used to examine the stability of iterative method (36) are given as

##### 4.1. Analysis of Critical Points

The critical points of (36) satisfy the vanishing of the derivative of the rational operator; besides z = 0 and z = ∞, there are free critical points whose expressions depend on the parameter. Figure 2 presents the zones of stability of the strange fixed points. The fixed points are represented by black dotted lines, while the critical points are represented by black, red, blue, green, and orange dotted lines, respectively (see Figure 3).

Theorem 8. The only member of the family of iterative methods whose operator is always conjugate to the rational map is the element corresponding to a unique value of the parameter.

Proof. From (51), we represent the operator in the stated form; there is a unique value of the parameter for which the conjugacy holds.

##### 4.2. Parametric Planes

Parametric planes are obtained by letting the parameter range over a mesh of values in the complex plane. A critical point is taken as the initial approximation. The method is then iterated until it reaches the maximum number of iterations or converges to a fixed point, with a fixed tolerance used as the stopping criterion. The complex value of the parameter is painted red if the method converges to any of the roots and black otherwise.

In Figures 4(a)–4(d) and 5(a)–5(h), the red color shows the convergence region, and all parameter values there exhibit stable behavior, while parameter values taken from the black region (the divergence region) exhibit unstable behavior of the iterative map (36). Stable and unstable behavior is shown in Figures 6 and 7, respectively.

##### 4.3. Dynamical Planes

The generation of the dynamical planes is similar to that of the parametric planes. To draw them, the real and imaginary parts of the starting approximation are represented on the two axes over a mesh in the complex plane. The stopping criteria are the same as for the parametric planes, but different colors are assigned to indicate to which root the method converges, and black is used otherwise.

Let us note that the iterative methods BB1–BB5 and MM1–MM4 satisfy Cayley's test for all values of the parameter. It can be observed from Figure 8 that the iterative methods BB1–BB5 and MM1–MM4, verifying Cayley's test, have the same dynamical properties as Newton's method. Figures 8(a)–8(e) clearly show the more stable behavior of iterative methods BB1–BB5 compared with MM1–MM4.

For unstable behavior, the value of the parameter is chosen from the black region. To generate the basins of attraction, we take a square box in the complex plane. To each root of (1), we assign the color to which the corresponding orbit of the iterative methods (BB1–BB5 and MM1–MM4) converges; the HSV color map is used. We use a fixed tolerance as the stopping criterion, and the maximum number of iterations is taken as 30. Points whose orbits do not converge to a root within 30 iterations are marked dark red. Different colors are used for different roots, so the iterative methods have basins of attraction distinguished by their colors. Within a basin, brighter color represents a smaller number of iterations needed to reach a root of (1). Figures 9 and 10 show the basins of attraction of the iterative methods (BB1–BB5 and MM1–MM4) for two given nonlinear functions. The divergent regions and the brightness of color in Figures 9 and 10 show that BB1–BB5 perform better than MM1–MM4.
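The basin-of-attraction computation described above can be sketched without any plotting library: iterate a method over a grid of starting points and record which root each orbit reaches. Newton's method on z³ − 1 and the grid parameters are assumptions for illustration, not the paper's own operators:

```python
# Basin computation: index of the root reached from each grid point,
# with -1 marking non-convergent ("dark") points, as described above.
import cmath

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of 1

def basin_index(z, tol=1e-6, max_iter=30):
    for _ in range(max_iter):
        if abs(z) < 1e-12:            # derivative 3z**2 vanishes at 0
            return -1
        z = z - (z ** 3 - 1) / (3 * z ** 2)   # Newton step
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k              # converged to root k
    return -1                         # non-convergent within max_iter

# a coarse mesh over the square [-2, 2] x [-2, 2]
n = 40
grid = [basin_index(complex(-2 + 4 * x / (n - 1), -2 + 4 * y / (n - 1)))
        for y in range(n) for x in range(n)]
print(len(grid), sorted(set(grid)))
```

Coloring each grid cell by its index (and shading by iteration count) reproduces the kind of pictures shown in Figures 9 and 10.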

#### 5. Computational Aspect

Here, we compare the computational efficiency of the Petković method and the new methods (M1–M5) given by (28). The efficiency of an iterative method can be estimated using the efficiency index

E = log r / C, (52)

where C is the computational cost and r is the order of convergence of the iterative method. The computational cost is evaluated from the arithmetic operations per iteration, with each operation weighted according to its execution time. For a given polynomial of degree m, the numbers of divisions, multiplications, and additions plus subtractions per iteration for all roots are denoted by D_m, M_m, and AS_m, respectively. The cost of computation can then be calculated as

Thus, (52) becomes

The number of operations for a complex polynomial of degree m with real and complex roots is reduced to operations of real arithmetic, as given in Table 2. Applying the efficiency index above and the data given in Table 2, we calculate the percentage ratio of the efficiency indices, where PJ6M denotes the Petković method of order six. Figures 11(a)–11(d) graphically illustrate these percentage ratios. It is evident from Figures 11(a)–11(d) that the newly constructed simultaneous methods (M1–M5) are more efficient than the Petković method.

Table 2: Convergence order (CO) and operation counts per iteration for the simultaneous methods.

| Methods | CO | AS_m | M_m | D_m |
|---|---|---|---|---|
| M1 | 6 | | | |
| M2 | 6 | | | |
| M3 | 6 | | | |
| M4 | 6 | | | |
| M5 | 6 | | | |
| PJ6M | 6 | | | |

We also calculated the CPU execution time; all calculations were done using Maple 18 on an Intel(R) Core(TM) i3-3110M CPU @ 2.4 GHz with a 64-bit operating system. We observe that the CPU time of methods M1–M5 is less than that of the Petković method, showing the superior efficiency of our methods (M1–M5).

#### 6. Numerical Results

Here, some numerical examples are considered in order to demonstrate the performance of our family of two-step fourth-order single-root finding methods (BB1–BB5) and our sixth-order simultaneous methods (M1–M5), respectively. We compare the family of optimal fourth-order single-root finding methods (BB1–BB5) with the methods MM1–MM4. The family of simultaneous methods (M1–M5) of convergence order six is compared with the Petković method of the same order (abbreviated as the PJ6M method). All computations are performed using CAS Maple 18 with 2500 significant digits (64-digit floating-point arithmetic in the case of the simultaneous methods). For the single-root finding methods, the stopping criteria are as follows:
(i) …
(ii) …
whereas a corresponding criterion is used for the simultaneous methods.

We take a fixed tolerance for the single-root finding methods and another for the simultaneous determination of all roots of nonlinear equation (1).

Numerical test examples from [22, 33–35] are provided in Tables 3–12. In Tables 3, 4, 6, 7, 9, and 11, we present the numerical results for the simultaneous determination of all roots, while Tables 5, 8, 10, and 12 present the results for the single-root finding methods. In all tables, CO represents the convergence order; n, the number of iterations; ρ, the computational order of convergence; and CPU, the computational time in seconds. The value of the arbitrary parameter used in the iterative methods BB1–BB5 and M1–M5 is 1.5 for test Examples 1–4. We observe that the numerical results of the single-root finding methods (BB1–BB5) as well as the all-root finding methods (M1–M5) are better than those of MM1–MM4 and PJ6M, respectively, for the same number of iterations. Figures 12(a)–12(d) represent the residual fall for the iterative methods (M1–M5 and PJ6M), and Figures 12(e)–12(h) represent the residual fall of the iterative methods (BB1–BB5 and MM1–MM4) for the given nonlinear functions. Tables 3–12 show that the families of methods BB1–BB5 and M1–M5 are more efficient than MM1–MM4 and PJ6M, respectively.
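The computational order of convergence reported in the tables can be estimated from three consecutive error magnitudes; the sketch below uses Newton's method (order 2) on the hypothetical f(x) = x² − 3 rather than the paper's methods:

```python
# Estimate the computational order of convergence (COC) from three
# consecutive errors: rho ~ log(e2/e1) / log(e1/e0).
import math

def coc(e0, e1, e2):
    return math.log(e2 / e1) / math.log(e1 / e0)

root = math.sqrt(3.0)
x = 2.0
errs = []
for _ in range(4):
    x = x - (x * x - 3.0) / (2.0 * x)   # Newton step for x**2 - 3
    errs.append(abs(x - root))

print(round(coc(errs[0], errs[1], errs[2]), 2))   # ~2.0 for Newton
```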

| Method | CO | CPU | n | e1 | e2 | e3 | e4 |
|---|---|---|---|---|---|---|---|
| PJ6M | 6 | 0.031 | 4 | 3.5e−2 | 3.5e−2 | 6.0e−11 | 0.0 |
| M1–M5 | 6 | 0.015 | 4 | 5.0e−13 | 5.0e−13 | 0.0 | 0.0 |
| Methods | CO | CPU | n | e1 | e2 | e3 | e4 |
|---|---|---|---|---|---|---|---|
| PJ6M | 6 | 0.032 | 4 | 3.1e−10 | 3.1e−10 | 3.2e−6 | 0.0 |
| M1–M5 | 6 | 0.016 | 4 | 0.0 | 0.0 | 0.0 | 0.0 |
| Method | | | CPU | p |
|---|---|---|---|---|
| BB1 | 1.4e−1399 | 3.5e−5594 | 0.032 | 4.00 |
| BB2 | 1.9e−1825 | 1.7e−7299 | 0.063 | 4.00 |
| BB3 | 1.8e−1829 | 1.5e−7315 | 0.047 | 4.01 |
| BB4 | 6.4e−1420 | 9.9e−5676 | 0.031 | 4.00 |
| BB5 | 8.9e−1420 | 3.7e−5675 | 0.047 | 4.01 |
| MM1 | 4.4e−1295 | 5.3e−5176 | 0.016 | 4.00 |
| MM2 | 4.9e−1569 | 1.3e−6272 | 0.031 | 4.00 |
| MM3 | 1.7e−1458 | 4.7e−5830 | 0.063 | 4.01 |
| MM4 | 1.0e−1458 | 4.0e−5830 | 0.032 | 4.00 |