
Research Article | Open Access


Amir Naseem, M. A. Rehman, Thabet Abdeljawad, "Numerical Algorithms for Finding Zeros of Nonlinear Equations and Their Dynamical Aspects", Journal of Mathematics, vol. 2020, Article ID 2816843, 11 pages, 2020. https://doi.org/10.1155/2020/2816843

# Numerical Algorithms for Finding Zeros of Nonlinear Equations and Their Dynamical Aspects

Accepted: 25 Jul 2020
Published: 30 Sep 2020

#### Abstract

In this paper, we develop two new numerical algorithms for finding zeros of nonlinear equations in one dimension; one of them is free of the second derivative, which is removed by means of an interpolation technique. We derive these algorithms with the help of Taylor’s series expansion and Golbabai and Javidi’s method. The convergence analysis of the algorithms is discussed, and it is established that both have sixth order of convergence. Several numerical examples are solved which demonstrate the better efficiency of these algorithms compared with other well-known iterative methods of the same kind. Finally, the polynomiographs generated by our algorithms are compared with those of other well-known iterative methods, which reflects their dynamical aspects.

#### 1. Introduction

One of the major problems in applied mathematics and engineering sciences is to solve nonlinear equations of the form

f(x) = 0, (1)

where f is a real-valued function whose domain is an open connected set.

The solutions of such nonlinear equations cannot be found directly except in special cases. Therefore, we have to adopt iterative methods for solving such equations. In an iterative method, we start with an initial guess x0, which is improved step by step by means of iterations.
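To make this idea concrete, the sketch below shows the simplest such scheme, a fixed-point iteration. The contraction g(x) = cos(x) is an illustrative choice, not one of the methods developed in this paper:

```python
import math

def fixed_point_iteration(g, x0, tol=1e-12, max_iter=200):
    """Improve the initial guess x0 step by step via x_{k+1} = g(x_k),
    stopping once two successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = cos(x) contracts near its fixed point, so the iterates converge
# to the solution of cos(x) = x, approximately 0.739085
root = fixed_point_iteration(math.cos, 1.0)
```

The methods discussed next follow the same pattern but choose the iteration function so that the fixed point is a zero of f and convergence is much faster.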

We assume that α is a simple zero of equation (1) and x0 is an initial guess sufficiently close to α.

Using Taylor’s series expansion around x0 for equation (1), we have

f(x0) + (x − x0)f′(x0) + ((x − x0)²/2!)f″(x0) + ⋯ = 0. (2)

If f′(x0) ≠ 0, truncating equation (2) after the linear term and solving for x, we can evaluate the above expression as follows:

x = x0 − f(x0)/f′(x0).

If we choose xn+1 as the root of this linearized equation, then we have

xn+1 = xn − f(xn)/f′(xn), n = 0, 1, 2, …. (3)

This is the so-called Newton’s method [1, 2] for root-finding of nonlinear functions, which converges quadratically. From equation (2), retaining the second-order term as well, one can evaluate

xn+1 = xn − 2f(xn)f′(xn)/(2[f′(xn)]² − f(xn)f″(xn)). (4)

This is the so-called Halley’s method [3] for root-finding of nonlinear functions, which converges cubically. Simplification of equation (2) yields yet another iterative method:

xn+1 = xn − f(xn)/f′(xn) − [f(xn)]²f″(xn)/(2[f′(xn)]³). (5)

This is known as Householder’s method [4] for solving nonlinear equations in one variable, which also converges cubically.
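The three classical updates above can be sketched in Python. The step formulas are the standard Newton, Halley, and Householder iterations; the test function cos(x) − x, with a simple zero near 0.7390851332, is chosen here purely for illustration:

```python
import math

def newton_step(f, df, d2f, x):
    # x_{n+1} = x_n - f/f'  (d2f is unused; kept for a uniform signature)
    return x - f(x) / df(x)

def halley_step(f, df, d2f, x):
    # x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')
    fx, dfx, d2fx = f(x), df(x), d2f(x)
    return x - 2 * fx * dfx / (2 * dfx ** 2 - fx * d2fx)

def householder_step(f, df, d2f, x):
    # x_{n+1} = x_n - f/f' - f^2 f'' / (2 f'^3)
    fx, dfx, d2fx = f(x), df(x), d2f(x)
    return x - fx / dfx - fx ** 2 * d2fx / (2 * dfx ** 3)

def solve(step, f, df, d2f, x0, eps=1e-14, max_iter=50):
    """Iterate one of the steps above until |x_{n+1} - x_n| < eps."""
    x = x0
    for _ in range(max_iter):
        x_new = step(f, df, d2f, x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# f(x) = cos(x) - x with simple zero near 0.7390851332
f   = lambda x: math.cos(x) - x
df  = lambda x: -math.sin(x) - 1
d2f = lambda x: -math.cos(x)
roots = [solve(s, f, df, d2f, 1.0)
         for s in (newton_step, halley_step, householder_step)]
```

All three steps drive the iterates to the same zero; they differ in how many correct digits each iteration gains (two for Newton, three for Halley and Householder).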

In recent years, a large number of iterative methods have been developed using different techniques, such as the decomposition method, Taylor’s series, the perturbation method, quadrature formulae, and variational iteration techniques, and the references therein.

For improving convergence, various modified methods have been developed in the literature; some of them are given in the cited works and the references therein.

In the twentieth century, Ostrowski [19] took Newton’s method as a predictor step and proposed a two-step iterative method with fourth order of convergence. After that, Traub [12] suggested an iterative scheme in which Newton’s method was used in both the predictor and corrector steps and proved that the proposed method has fourth order of convergence. Noor and Noor, in 2007, developed a two-step Halley method in which Newton’s method was taken as the predictor step and Halley’s method as the corrector step; they proved that the suggested method has sixth order of convergence and, by removing its second derivative using a finite difference scheme, established a new algorithm with fifth order of convergence. In 2018, Kumar et al. [20] established a parameter-based family of algorithms with sixth order of convergence for solving nonlinear equations.

In this paper, we propose and analyze two new predictor-corrector-type iterative methods, namely, Algorithms 1 and 2, in which we take Newton’s method as the predictor step. We prove that these newly developed algorithms have sixth order of convergence and are more efficient than other well-known iterative methods of the same kind. The proposed algorithms are applied to some test examples in order to assess their validity and accuracy. In the last section, we generate polynomiographs of complex polynomials of different degrees through our proposed algorithms and compare them with those of other methods of the same category. The presented polynomiographs have very interesting and aesthetic patterns which reflect different properties of the polynomials.

#### 2. Main Results

Let f: X ⟶ R, X ⊆ R, be a scalar function. Then, using the basic idea of the modified homotopy perturbation method, Golbabai and Javidi [21] derived the following method:

In iterative form, this is known as Golbabai and Javidi’s method [21] (equation (7)), having cubic order of convergence.

After simplification of equation (2), one can obtain

Now from Golbabai and Javidi’s method in equation (7),

With the help of equations (9) and (10), we obtain

Rewriting the above equation with Newton’s method as a predictor gives us a new algorithm as follows.

Algorithm 1. For a given x0, compute the approximate solution xn+1 by the following iterative schemes, which are a modification of Golbabai and Javidi’s method with Newton’s method as a predictor.
In order to find the solution of the given nonlinear equation, we have to calculate the first as well as the second derivative of the function f(x). However, in several cases the second derivative of the function does not exist, and the method then fails to find the solution.
To overcome this difficulty, we use interpolation technique for approximating the second derivative as follows:
Consider a cubic function whose coefficients a, b, c, and d are unknowns to be determined from four interpolation conditions. These conditions yield a system of four linear equations in the four unknowns, and solving this system gives an approximation to the second derivative. Using this approximation in Algorithm 1, we derive a new algorithm free from the second derivative as follows.

Algorithm 2. For a given x0, compute the approximate solution xn+1 by the following iterative schemes, which constitute a new two-step iterative method, free from the second derivative, with Newton’s method as the predictor step. With the help of this method, we can solve nonlinear equations whose second derivative does not exist. Moreover, each iteration requires only two evaluations of the function and one evaluation of its first derivative, which shows the better efficiency of this method compared with those that require the second derivative. Several examples are given below which demonstrate the strong performance of this method compared with other well-known iterative methods of the same kind.
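The exact interpolation conditions are not reproduced above, so the sketch below adopts one common choice in second-derivative-free methods: matching the cubic to f and f′ at both the current iterate and the predictor point. This is a plausible variant, not necessarily the authors’ exact construction (note it uses one derivative evaluation more than the operation count quoted for Algorithm 2):

```python
def approx_second_derivative(f, df, x, y):
    """Fit a cubic p(t) = a + b*t + c*t^2 + d*t^3 to the four conditions
    p(x) = f(x), p'(x) = f'(x), p(y) = f(y), p'(y) = f'(y); solving the
    resulting 4x4 linear system and differentiating twice gives the
    closed form for p''(y) below."""
    h = y - x
    return (2 * df(x) + 4 * df(y)) / h - 6 * (f(y) - f(x)) / h ** 2

# The formula is exact for cubics: for f(t) = t^3, f''(y) = 6y
f, df = (lambda t: t ** 3), (lambda t: 3 * t ** 2)
approx = approx_second_derivative(f, df, 1.0, 2.0)   # 6 * 2 = 12
```

Because the fitted polynomial has degree three, the approximation reproduces f″ exactly whenever f itself is a cubic, and agrees with f″(y) up to O((y − x)²) for smooth f.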

#### 3. Convergence Analysis

In this section, we discuss the convergence order of the proposed iterative methods.

Theorem 1. Suppose that α is a root of the equation f(x) = 0. If f(x) is sufficiently smooth in the neighborhood of α, then the order of convergence of Algorithm 1 is six.

Proof. To analyze the convergence of Algorithm 1, suppose that α is a root of the equation f(x) = 0 and let en = xn − α be the error at the nth iteration. By using Taylor series expansion about α, we have

f(xn) = f′(α)[en + c2en² + c3en³ + c4en⁴ + ⋯], (17)

f′(xn) = f′(α)[1 + 2c2en + 3c3en² + 4c4en³ + ⋯], (18)

where ck = f^(k)(α)/(k! f′(α)), k = 2, 3, ….
With the help of equations (17) and (18), we obtain the corresponding expansions (19)–(22) for the predictor step and for f and its derivatives evaluated there. Using equations (19)–(22) in Algorithm 1 and simplifying, we obtain an error equation of the form en+1 = O(en⁶), which shows that the order of convergence of Algorithm 1 is six.

Theorem 2. Suppose that α is a root of the equation f(x) = 0. If f(x) is sufficiently smooth in the neighborhood of α, then the order of convergence of Algorithm 2 is six.

Proof. With the help of equations (17)–(21), along with the same assumptions as in the previous theorem, we obtain the expansion (25) of the interpolation-based approximation to the second derivative. Using equations (20), (21), and (25) in Algorithm 2 and simplifying, we obtain an error equation of the form en+1 = O(en⁶), which shows that the order of convergence of Algorithm 2 is six.

#### 4. Numerical Examples

In this section, we include some nonlinear functions to illustrate the efficiency of our newly developed numerical algorithms. We compare these algorithms with Noor’s method one (NR1), Noor’s method two (NR2), Ostrowski’s method (OM) [19], Traub’s method (TM) [12], and the modified Halley method (MHM) [13].

For this purpose, the following numerical examples have been solved:

Here, we use the stopping criterion |xn+1 − xn| < ε. The numerical examples have been solved using the computer program Maple 13.

Tables 1–7 show the numerical comparison of our developed algorithms with Noor’s method one (NR1), Noor’s method two (NR2), Ostrowski’s method (OM), Traub’s method (TM), and the modified Halley method (MHM). The columns represent the number of iterations N, the magnitude |f(xn)| at the final estimate, the approximate root xn, the difference |xn+1 − xn| between two consecutive approximations, and the computational order of convergence (COC), which can be approximated using the formula

COC ≈ ln|(xn+1 − α)/(xn − α)| / ln|(xn − α)/(xn−1 − α)|,

introduced by Weerakoon and Fernando in 2000 [23].
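The COC can be estimated directly from the iterates. A minimal sketch, using the difference-based variant of the formula (successive differences of iterates standing in for the unknown errors xn − α):

```python
import math

def coc(xs):
    """Difference-based estimate of the computational order of convergence
    from the last four iterates x_{n-2}, x_{n-1}, x_n, x_{n+1}."""
    x0, x1, x2, x3 = xs[-4], xs[-3], xs[-2], xs[-1]
    return (math.log(abs((x3 - x2) / (x2 - x1)))
            / math.log(abs((x2 - x1) / (x1 - x0))))

# Example: Newton iterates for f(x) = cos(x) - x starting from x0 = 1;
# the estimate should be close to Newton's theoretical order 2
iterates = [1.0]
for _ in range(4):
    x = iterates[-1]
    iterates.append(x - (math.cos(x) - x) / (-math.sin(x) - 1))
order = coc(iterates)
```

Applied to iterates produced by a sixth-order method, the same estimate approaches 6, which is how the COC columns in the tables below are obtained.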

Table 1: Numerical comparison for the first test function; approximate root 1.36523001341409684580.

| Method | N | \|f(xn)\| | \|xn+1 − xn\| | COC |
|---|---|---|---|---|
| NR1 | 27 | 3.710177 | 6.769715 | 2 |
| NR2 | 4 | 1.813409 | 7.403402 | 3 |
| OM | 5 | 7.395989 | 8.442974 | 4 |
| TM | 84 | 1.071491 | 1.531881 | 4 |
| MHM | 23 | 7.392887 | 4.595974 | 5 |
| Algorithm 1 | 4 | 3.518283 | 3.794720 | 6 |
| Algorithm 2 | 4 | 3.518283 | 3.794720 | 6 |

Table 2: Numerical comparison for the second test function; approximate root −0.52248077281054548914.

| Method | N | \|f(xn)\| | \|xn+1 − xn\| | COC |
|---|---|---|---|---|
| NR1 | 17 | 3.818521 | 7.695727 | 2 |
| NR2 | 90 | 9.891015 | 3.435165 | 3 |
| OM | 88 | 7.511035 | 1.017176 | 4 |
| TM | 18 | 8.259938 | 5.124588 | 4 |
| MHM | 9 | 1.268356 | 4.262132 | 5 |
| Algorithm 1 | 8 | 1.013204 | 1.409549 | 6 |
| Algorithm 2 | 3 | 1.837741 | 5.335861 | 6 |

Table 3: Numerical comparison for the third test function; approximate root 0.40999201798913713162.

| Method | N | \|f(xn)\| | \|xn+1 − xn\| | COC |
|---|---|---|---|---|
| NR1 | 5 | 3.163807 | 5.629386 | 2 |
| NR2 | 45 | 3.382231 | 1.512669 | 3 |
| OM | 4 | 6.783183 | 1.630792 | 4 |
| TM | 4 | 6.063382 | 1.586231 | 4 |
| MHM | 3 | 4.499472 | 1.880726 | 5 |
| Algorithm 1 | 2 | 1.658501 | 1.059288 | 6 |
| Algorithm 2 | 2 | 1.670194 | 1.059787 | 6 |

Table 4: Numerical comparison for the fourth test function; approximate root 0.56714329040978387300.

| Method | N | \|f(xn)\| | \|xn+1 − xn\| | COC |
|---|---|---|---|---|
| NR1 | 4 | 4.418933 | 4.418705 | 2 |
| NR2 | 4 | 9.674661 | 1.596435 | 3 |
| OM | 3 | 8.984315 | 1.921047 | 4 |
| TM | 3 | 2.130596 | 6.120512 | 4 |
| MHM | 3 | 1.116440 | 2.540557 | 5 |
| Algorithm 1 | 2 | 5.068654 | 6.667718 | 6 |
| Algorithm 2 | 2 | 2.535508 | 4.396326 | 6 |

Table 5: Numerical comparison for the fifth test function; approximate root 2.15443469003188372180.

| Method | N | \|f(xn)\| | \|xn+1 − xn\| | COC |
|---|---|---|---|---|
| NR1 | 100 | 7.261452 | 1.059948 | 2 |
| NR2 | 11 | 4.753110 | 2.385814 | 3 |
| OM | 5 | 5.537625 | 4.942045 | 4 |
| TM | 5 | 4.893426 | 1.369225 | 4 |
| MHM | 4 | 4.922706 | 2.732126 | 5 |
| Algorithm 1 | 3 | 2.129060 | 1.663953 | 6 |
| Algorithm 2 | 3 | 2.129060 | 1.663953 | 6 |

Table 6: Numerical comparison for the sixth test function; approximate root 0.73908513321516064166.

| Method | N | \|f(xn)\| | \|xn+1 − xn\| | COC |
|---|---|---|---|---|
| NR1 | 6 | 4.872692 | 6.769715 | 2 |
| NR2 | 4 | 3.229578 | 7.403402 | 3 |
| OM | 3 | 2.576412 | 8.442974 | 4 |
| TM | 3 | 2.617193 | 1.531881 | 4 |
| MHM | 3 | 1.154792 | 4.595974 | 5 |
| Algorithm 1 | 2 | 8.918884 | 3.794720 | 6 |
| Algorithm 2 | 2 | 1.374491 | 3.794720 | 6 |

Table 7: Numerical comparison for the seventh test function; approximate root 0.00000000000000000000.

| Method | N | \|f(xn)\| | \|xn+1 − xn\| | COC |
|---|---|---|---|---|
| NR1 | 4 | 2.455690 | 9.031605 | 2 |
| NR2 | 4 | 3.424681 | 1.271327 | 3 |
| OM | 3 | 2.704322 | 1.194704 | 4 |
| TM | 3 | 7.907266 | 5.282400 | 4 |
| MHM | 3 | 1.556811 | | 5 |
| Algorithm 1 | 2 | 2.674407 | | 6 |
| Algorithm 2 | 2 | 3.197382 | | 6 |

Table 8 shows the comparison of the number of iterations required by the different iterative methods and our developed algorithms to approximate the root of the given nonlinear function under the stopping criterion |xn+1 − xn| < ε. The columns represent the number of iterations for the different test functions along with the corresponding initial guess x0. All calculations have been carried out using the computer program Maple 13.

Table 8: Number of iterations required for each test function.

| Method | f1 | f2 | f3 | f4 | f5 | f6 | f7 |
|---|---|---|---|---|---|---|---|
| NR1 | 29 | 20 | 8 | 7 | 103 | 9 | 6 |
| NR2 | 5 | 91 | 6 | 5 | 12 | 6 | 5 |
| OM | 6 | 89 | 5 | 4 | 6 | 4 | 4 |
| TM | 85 | 20 | 5 | 4 | 7 | 4 | 3 |
| MHM | 24 | 10 | 4 | 4 | 6 | 4 | 4 |
| Algorithm 1 | 5 | 9 | 4 | 3 | 4 | 3 | 3 |
| Algorithm 2 | 5 | 4 | 4 | 3 | 4 | 3 | 3 |

#### 5. Polynomiography

Polynomials are among the most significant objects in many fields of mathematics, and polynomial root-finding has played a key role in the history of mathematics. It is one of the oldest and most deeply studied mathematical problems. The most recent interesting contribution to the history of polynomial root-finding was made by Kalantari [25], who introduced polynomiography. As a method that generates nice-looking graphics, it was patented by Kalantari in the USA in 2005 [24]. Polynomiography is defined to be “the art and science of visualization in approximation of the zeros of complex polynomials, via fractal, and nonfractal images created using the mathematical convergence properties of iteration functions” [25]. An individual image is called a “polynomiograph.” Polynomiography combines both art and science aspects; it gives a new way to approach this ancient problem using new algorithms and computer technology. Polynomiography is based on the use of one or an infinite number of iteration methods formulated for the purpose of approximating the roots of polynomials, e.g., Newton’s method and Halley’s method.

The word “fractal,” which appears in the definition of polynomiography, was coined by the famous mathematician Benoit Mandelbrot [26]. Both fractal images and polynomiographs can be obtained via different iterative schemes. Fractals are self-similar, have a typical structure, and are independent of scale. Polynomiographs, on the other hand, are quite different: the “polynomiographer” can control their shape and design in a more predictable way by applying different iteration methods to the infinite variety of complex polynomials. Generally, fractals and polynomiographs belong to different classes of graphical objects. Polynomiography has diverse applications in mathematics, science, education, art, and design.

According to the fundamental theorem of algebra, any complex polynomial of degree n, defined either by its coefficients, p(z) = anz^n + an−1z^(n−1) + ⋯ + a1z + a0, or by its zeros, p(z) = (z − z1)(z − z2)⋯(z − zn), has exactly n zeros, which may or may not be distinct. The degree of the polynomial determines the number of basins of attraction, and the localization of the basins can be controlled by placing the roots on the complex plane manually. Usually, polynomiographs are colored based on the number of iterations needed to obtain an approximation of some polynomial root with a given accuracy and a chosen iteration method. The description of polynomiography, its theoretical background, and its artistic applications are given in [7, 8, 24–29].

#### 6. Applications

In numerical algorithms that are based on iterative processes, we need a stopping criterion, that is, a test that tells us that the process has converged or is very near to the solution. This type of test is called a convergence test. Usually, in iterative processes that use feedback, like the root-finding methods, the standard convergence test has the following form:

|zn+1 − zn| < ε, (32)

where zn and zn+1 are two successive points in the iteration process and ε > 0 is a given accuracy. In this paper, we also use stopping criterion (32). The different colors of an image depend on the number of iterations needed to reach a root with the given accuracy ε. One can obtain infinitely many nice-looking polynomiographs by changing the upper bound K on the number of iterations.
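The coloring procedure described above can be sketched as follows, taking Newton’s iteration for p(z) = z³ − 1 as the iteration function; the grid size, K, and ε here are illustrative choices, not the values used for the figures:

```python
def iteration_counts(step, xs, ys, K=30, eps=1e-6):
    """For every grid point z0 = x + iy, count the iterations of `step`
    until the convergence test |z_{n+1} - z_n| < eps holds; points that
    do not converge within K steps are assigned K (rendered black)."""
    counts = []
    for y in ys:
        row = []
        for x in xs:
            z, n = complex(x, y), 0
            while n < K:
                try:
                    z_new = step(z)
                except ZeroDivisionError:   # e.g. p'(z) = 0 at z = 0
                    n = K
                    break
                n += 1
                if abs(z_new - z) < eps:
                    break
                z = z_new
            row.append(n)
        counts.append(row)
    return counts

# Newton's iteration as the iteration function for p(z) = z^3 - 1,
# sampled on a 200 x 200 grid over [-2, 2] x [-2, 2]
newton_step = lambda z: z - (z**3 - 1) / (3 * z**2)
xs = [i / 50.0 - 2.0 for i in range(200)]
ys = [i / 50.0 - 2.0 for i in range(200)]
grid = iteration_counts(newton_step, xs, ys)
```

Mapping each count in `grid` to a color (with the value K mapped to black) yields the polynomiograph; substituting a different one-point iteration for `newton_step` produces the images for the other methods.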

Here, we present polynomiographs of some complex polynomials generated using our developed algorithms and compare them with the polynomiographs obtained using other well-known two-step iterative methods.

In Figures 1–6, polynomiographs of different complex polynomials for Noor’s method one (NR1), Noor’s method two (NR2), Ostrowski’s method (OM), Traub’s method (TM), the modified Halley method (MHM), and our developed algorithms are shown, which describe the regions of convergence of these polynomials. When we look at the generated images, we can read off two important characteristics. The first is the speed of convergence of the algorithm: the color of each point tells us how many iterations the algorithm performed to reach the root. The second characteristic is the dynamics of the algorithm: the dynamics is low in areas where the variation of colors is small and high in areas with a large variation of colors. The black color in the images marks the places where the solution cannot be reached within the given number of iterations. Areas of the same color indicate the same number of iterations required to determine the solution, and they look similar to contour lines on a map.

All these figures have been generated using the computer program Mathematica, where ε denotes the accuracy of the given root and K represents the upper bound on the number of iterations.

#### 7. Conclusions

Two new numerical algorithms for finding zeros of nonlinear equations have been developed, both having convergence of order six. The performance of the proposed algorithms has been discussed on several test examples, and the numerical results uphold the convergence analysis, as can be seen in Tables 1–8. Both algorithms show better results in terms of the number of iterations, efficiency, and convergence order in comparison with other well-known two-step iterative methods of the same kind. Polynomiographs of complex polynomials of different degrees have been generated using two-step iterative methods and our proposed algorithms. The presented polynomiographs are rich and colorful and have very interesting and aesthetic patterns, which reflect the dynamical aspects of our proposed algorithms.

#### Data Availability

All data required for this paper are included within this paper.

#### Conflicts of Interest

The authors do not have any conflicts of interest.

#### Authors’ Contributions

All authors contributed equally to this paper.

1. D. E. Kincaid and E. W. Cheney, Numerical Analysis, Brooks/Cole Publishing Company, Pacific Grove, CA, USA, 1990.
2. R. L. Burden and J. D. Faires, Numerical Analysis, Brooks/Cole Publishing Company, Pacific Grove, CA, USA, 6th edition, 1997.
3. I. K. Argyros, “A note on the Halley method in Banach spaces,” Applied Mathematics and Computation, vol. 58, no. 2-3, pp. 215–224, 1993.
4. A. S. Householder, The Numerical Treatment of a Single Nonlinear Equation, McGraw-Hill, New York, NY, USA, 1970.
5. M. Nawaz, A. Naseem, and W. Nazeer, “New twelfth order algorithms for solving nonlinear equations by using variational iteration technique,” Journal of Prime Research in Mathematics, vol. 14, pp. 24–36, 2018.
6. W. Nazeer, A. Naseem, S. M. Kang, and Y. C. Kwun, “Generalized Newton-Raphson’s method free from second derivative,” Journal of Nonlinear Sciences and Applications, vol. 9, no. 5, pp. 2823–2831, 2016.
7. Y. C. Kwun, Z. Majeed, A. Naseem, W. Nazeer, and S. M. Kang, “New iterative methods using variational iteration technique and their dynamical behavior,” International Journal of Pure and Applied Mathematics, vol. 116, no. 4, pp. 1093–1113, 2017.
8. S. M. Kang, A. Naseem, W. Nazeer, and A. Jan, “Modification of Abbasbandy’s method and polynomiography,” International Journal of Mathematical Analysis, vol. 10, no. 24, pp. 1197–1210, 2016.
9. W. Nazeer, M. Tanveer, S. M. Kang, and A. Naseem, “A new Householder’s method free from second derivatives for solving nonlinear equations and polynomiography,” Journal of Nonlinear Sciences and Applications, vol. 9, no. 3, pp. 998–1007, 2016.
10. I. Muhammad, H. Ma, A. Naseem, S. Ali, and A. Nizami, “New algorithms for nonlinear equations,” International Journal of Advanced and Applied Sciences, vol. 5, pp. 28–32, 2018.
11. O. A. Taiwo and O. S. Odetunde, “Approximate solution of variational problems by an iterative decomposition method,” Maejo International Journal of Science and Technology, vol. 3, no. 3, pp. 426–433, 2009.
12. J. F. Traub, Iterative Methods for the Solution of Equations, Chelsea Publishing Company, New York, NY, USA, 1982.
13. M. A. Noor, W. A. Khan, and A. Hussain, “A new modified Halley method without second derivatives for nonlinear equation,” Applied Mathematics and Computation, vol. 189, pp. 1268–1273, 2007.
14. J.-H. He, “Homotopy perturbation technique,” Computer Methods in Applied Mechanics and Engineering, vol. 178, no. 3-4, pp. 257–262, 1999.
15. J. Kou, “The improvements of modified Newton’s method,” Applied Mathematics and Computation, vol. 189, pp. 602–609, 2007.
16. J.-H. He, “Variational iteration method - a kind of non-linear analytical technique: some examples,” International Journal of Non-Linear Mechanics, vol. 34, no. 4, pp. 699–708, 1999.
17. J. M. Gutiérrez and M. A. Hernández, “A family of Chebyshev-Halley type methods in Banach spaces,” Bulletin of the Australian Mathematical Society, vol. 55, pp. 113–130, 1997.
18. F. Shah, M. A. Noor, and M. Waseem, “Some second-derivative-free sixth-order convergent iterative methods for non-linear equations,” Maejo International Journal of Science and Technology, vol. 10, no. 1, pp. 79–87, 2016.
19. A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, Cambridge, MA, USA, 2nd edition, 1966.
20. A. Kumar, P. Maroju, R. Behl, D. K. Gupta, and S. S. Motsa, “A family of higher order iterations free from second derivative for nonlinear equations in R,” Journal of Computational and Applied Mathematics, vol. 330, pp. 215–224, 2018.
21. A. Golbabai and M. Javidi, “A third-order Newton type method for nonlinear equations based on modified homotopy perturbation method,” Applied Mathematics and Computation, vol. 191, no. 1, pp. 199–205, 2007.
22. M. A. Noor, K. I. Noor, and K. Aftab, “Some new iterative methods for solving nonlinear equations,” World Applied Sciences Journal, vol. 20, no. 6, pp. 870–874, 2012.
23. S. Weerakoon and T. G. I. Fernando, “A variant of Newton’s method with accelerated third-order convergence,” Applied Mathematics Letters, vol. 13, no. 8, pp. 87–93, 2000.
24. B. Kalantari, “Method of creating graphical works based on polynomials,” U.S. Patent 6,894,705, 2005.
25. B. Kalantari, “Polynomiography: from the fundamental theorem of algebra to art,” Leonardo, vol. 38, no. 3, pp. 233–238, 2005.
26. B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman and Company, New York, NY, USA, 1983.
27. B. Kalantari and E. H. Lee, “Newton-Ellipsoid polynomiography,” Journal of Mathematics and the Arts, vol. 13, no. 4, pp. 336–352, 2019.
28. K. Gdawiec, “Fractal patterns from the dynamics of combined polynomial root finding methods,” Nonlinear Dynamics, vol. 90, no. 4, pp. 2457–2479, 2017.
29. S. M. Kang, A. Naseem, W. Nazeer, M. Munir, and C. Y. Jung, “Polynomiography of some iterative methods,” International Journal of Mathematical Analysis, vol. 11, no. 3, pp. 133–149, 2017.
