Journal of Mathematics


Research Article | Open Access

Volume 2020 | Article ID 2816843 | https://doi.org/10.1155/2020/2816843

Amir Naseem, M. A. Rehman, Thabet Abdeljawad, "Numerical Algorithms for Finding Zeros of Nonlinear Equations and Their Dynamical Aspects", Journal of Mathematics, vol. 2020, Article ID 2816843, 11 pages, 2020. https://doi.org/10.1155/2020/2816843

Numerical Algorithms for Finding Zeros of Nonlinear Equations and Their Dynamical Aspects

Academic Editor: Francisco Balibrea
Received: 20 Apr 2020
Accepted: 25 Jul 2020
Published: 30 Sep 2020

Abstract

In this paper, we develop two new numerical algorithms for finding zeros of nonlinear equations in one dimension; one of them is second-derivative-free, the second derivative having been eliminated by an interpolation technique. We derive these algorithms with the help of Taylor’s series expansion and Golbabai and Javidi’s method. The convergence analysis of the algorithms is discussed, and it is established that both have sixth order of convergence. Several numerical examples have been solved which demonstrate the better efficiency of these algorithms as compared to other well-known iterative methods of the same kind. Finally, polynomiographs generated by our developed algorithms are compared with those of other well-known iterative methods, reflecting their dynamical aspects.

1. Introduction

One of the major problems in applied mathematics and the engineering sciences is to solve a nonlinear equation of the form

f(x) = 0, (1)

where f is a real-valued function whose domain is an open connected set.

The solution of such nonlinear equations cannot generally be found directly, except in special cases; therefore, we have to adopt iterative methods. In an iterative method, we start with an initial guess x0 which is improved step by step by means of iterations.

We assume that α is a simple zero of equation (1) and x0 is an initial guess sufficiently close to α.

Using Taylor’s series around x0 for equation (1), we have

f(x0) + (x − x0)f ′(x0) + ((x − x0)²/2!)f ″(x0) + ⋯ = 0. (2)

If f ′(x0) ≠ 0, we can rearrange the above expression, neglecting the higher-order terms, as follows:

x = x0 − f(x0)/f ′(x0).

If we take xk+1 as this approximation to the root at the kth step, then we have

xk+1 = xk − f(xk)/f ′(xk).

This is the so-called Newton’s method [1, 2] for root-finding of nonlinear functions, which converges quadratically. From equation (2), one can evaluate

xk+1 = xk − 2f(xk)f ′(xk)/(2f ′(xk)² − f(xk)f ″(xk)).

This is the so-called Halley’s method [3] for root-finding of nonlinear functions, which converges cubically. Simplification of equation (2) yields another iterative method:

xk+1 = xk − f(xk)/f ′(xk) − f(xk)²f ″(xk)/(2f ′(xk)³).

This is known as Househölder’s method [4] for solving nonlinear equations in one variable, which converges cubically.
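The three classical iterations above can be sketched in code. This is a minimal illustration (function and variable names are ours), not the implementation used in the paper:

```python
# Classical one-point iterations: Newton (order 2), Halley and Householder (order 3).

def newton(f, df, x0, tol=1e-12, kmax=100):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(kmax):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def halley(f, df, d2f, x0, tol=1e-12, kmax=100):
    """Halley's method: x_{k+1} = x_k - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(kmax):
        fx, dfx = f(x), df(x)
        step = 2 * fx * dfx / (2 * dfx ** 2 - fx * d2f(x))
        x -= step
        if abs(step) < tol:
            break
    return x

def householder(f, df, d2f, x0, tol=1e-12, kmax=100):
    """Householder's method: x_{k+1} = x_k - f/f' - f^2 f'' / (2 f'^3)."""
    x = x0
    for _ in range(kmax):
        fx, dfx = f(x), df(x)
        step = fx / dfx + fx ** 2 * d2f(x) / (2 * dfx ** 3)
        x -= step
        if abs(step) < tol:
            break
    return x
```

For example, for f(x) = x³ − x − 1 with x0 = 1.5, all three iterations converge to the simple root near 1.32472.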

In recent years, a large number of iterative methods have been developed using different techniques such as the decomposition method, Taylor’s series, the perturbation method, quadrature formulas, and variational iteration techniques; see [5–13] and the references therein.

For improving convergence, various modified methods have been developed in the literature; some of them are given in [14–18] and the references therein.

In the twentieth century, Ostrowski [19] took Newton’s method as a predictor step and proposed a two-step iterative method of fourth convergence order. After that, Traub [12] suggested an iterative scheme in which Newton’s method served as both the predictor and the corrector step and proved that the proposed method has fourth-order convergence. In 2007, Noor et al. [13] developed the two-step Halley’s method, in which Newton’s method was taken as the predictor step and Halley’s method as the corrector step, and proved that the suggested method has sixth order of convergence; they then removed its second derivative using a finite difference scheme and established a new algorithm of fifth-order convergence. In 2018, Kumar et al. [20] established a parameter-based family of sixth-order algorithms for solving nonlinear equations.
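The two-step construction described above, a Newton predictor followed by a Halley corrector evaluated at the predicted point, can be sketched as follows. This is our illustrative reading of the predictor-corrector pattern, not code from [13]:

```python
def two_step_halley(f, df, d2f, x0, tol=1e-13, kmax=50):
    """Two-step scheme: a Newton predictor followed by a Halley corrector
    evaluated at the predicted point (an illustrative sketch)."""
    x = x0
    for _ in range(kmax):
        y = x - f(x) / df(x)                                     # predictor: Newton step
        fy, dfy = f(y), df(y)
        x_new = y - 2 * fy * dfy / (2 * dfy ** 2 - fy * d2f(y))  # corrector: Halley step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Only a few iterations are typically needed; for f(x) = x³ − x − 1 with x0 = 1.5, the scheme reaches machine precision in three iterations.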

In this paper, we propose and analyze two new predictor-corrector-type iterative methods, namely, Algorithms 1 and 2, in which we take Newton’s method as a predictor step. We prove that these newly developed algorithms have sixth order of convergence and are more efficient than other well-known iterative methods of the same kind. The proposed algorithms are applied to some test examples in order to assess their validity and accuracy. In the last section, we generate polynomiographs of complex polynomials of different degrees through our proposed algorithms and compare them with other methods of the same category. The presented polynomiographs have very interesting and aesthetic patterns which reflect different properties of the polynomials.

2. Main Results

Let f: X ⟶ R, X ⊆ R, be a scalar function. Then, using the basic idea of the modified homotopy perturbation method, Golbabai and Javidi [21] derived the following method.

In iterative form, this yields equation (7), which is known as Golbabai and Javidi’s method [21] and has cubic order of convergence.

After simplification of equation (2), one can obtain

Now from Golbabai and Javidi’s method in equation (7),

With the help of equations (9) and (10), we obtain

Rewriting the above equation with Newton’s method as a predictor gives us a new algorithm as follows.

Algorithm 1. For a given x0, compute the approximate solution xn+1 by the following iterative scheme, which is the modification of Golbabai and Javidi's method with Newton’s method as a predictor.
In order to find the solution of the given nonlinear equation, we have to calculate the first as well as the second derivative of the function f(x). However, in several cases the second derivative of the function does not exist, and the method then fails to find the solution.
To overcome this difficulty, we use an interpolation technique to approximate the second derivative as follows.
Consider a cubic interpolating function whose unknowns a, b, c, and d are determined from the following interpolation conditions. A system of four linear equations in four unknowns is obtained from these conditions, and solving this system gives the required approximation to the second derivative. Using this approximation in Algorithm 1, we derive a new algorithm free from the second derivative as follows.

Algorithm 2. For a given x0, compute the approximate solution xn+1 by the following iterative scheme, which is a new two-step iterative method free from the second derivative, with Newton’s method as the predictor step. With the help of this method, we can solve nonlinear equations whose second derivative does not exist. Moreover, it requires only two evaluations of the function and one evaluation of its first derivative, which shows its better efficiency as compared to methods that require the second derivative. Several examples are given which demonstrate the better performance of this method as compared to other well-known iterative methods of the same kind.
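Since the paper's exact interpolation formulas are not reproduced above, the following sketch only illustrates the general idea of a second-derivative-free two-step scheme: a Newton predictor, followed by a Halley-type corrector in which f″ is replaced by a quadratic-interpolation estimate built from f(xn), f(yn), and f ′(xn). The helper names and the particular estimate are our assumptions, not Algorithm 2 itself:

```python
def d2_estimate(f, df, x, y):
    """Estimate f''(y) by quadratic interpolation through f(x), f'(x), f(y):
    f''(y) ~ 2 [f(y) - f(x) - f'(x)(y - x)] / (y - x)**2."""
    h = y - x
    return 2.0 * (f(y) - f(x) - df(x) * h) / h ** 2

def two_step_derivative_free(f, df, x0, tol=1e-13, kmax=50):
    """Newton predictor + Halley-type corrector, with f'' replaced by the
    interpolation estimate above (illustrative, not the paper's Algorithm 2)."""
    x = x0
    for _ in range(kmax):
        y = x - f(x) / df(x)          # predictor: Newton step
        if abs(y - x) < tol:          # already converged; also avoids h -> 0 below
            return y
        fy, dfy = f(y), df(y)
        d2 = d2_estimate(f, df, x, y)
        x_new = y - 2 * fy * dfy / (2 * dfy ** 2 - fy * d2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Note that no second derivative of f appears anywhere in the scheme: per iteration it evaluates f twice (at xn and yn) and f ′ twice, with the interpolation estimate standing in for f ″.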

3. Convergence Analysis

In this section, we discuss the convergence order of the proposed iterative methods.

Theorem 1. Suppose that α is a root of the equation f(x) = 0. If f(x) is sufficiently smooth in the neighborhood of α, then the order of convergence of Algorithm 1 is six.

Proof. To analyze the convergence of Algorithm 1, suppose that α is a root of the equation f(x) = 0 and let en = xn − α be the error at the nth iteration. By using Taylor series expansion of f(xn) and f ′(xn) about α, we obtain equations (17) and (18), where ck = f^(k)(α)/(k!f ′(α)) for k = 2, 3, ….
With the help of equations (17) and (18), we obtain equations (19)–(22). Using equations (19)–(22) in Algorithm 1 yields an error relation of the form en+1 = O(en^6), which shows that the order of convergence of Algorithm 1 is six.

Theorem 2. Suppose that α is a root of the equation f(x) = 0. If f(x) is sufficiently smooth in the neighborhood of α, then the order of convergence of Algorithm 2 is six.

Proof. With the help of equations (17)–(21), along with the same assumptions as in the previous theorem, we obtain equation (25). Using equations (20), (21), and (25) in Algorithm 2 yields an error relation of the form en+1 = O(en^6), which shows that the order of convergence of Algorithm 2 is six.

4. Numerical Examples

In this section, we include some nonlinear functions to illustrate the efficiency of our newly developed numerical algorithms. We compare these algorithms with Noor’s method one (NR1) [22], Noor’s method two (NR2) [22], Ostrowski’s method (OM) [19], Traub’s method (TM) [12], and modified Halley’s method (MHM) [13].

For this purpose, the following numerical examples have been solved:

Here, we use the stopping criterion |xn+1 − xn| < ε. The numerical examples have been solved using the computer program Maple 13.

Tables 1–7 show the numerical comparisons of our developed algorithms with Noor's method one (NR1), Noor’s method two (NR2), Ostrowski's method (OM), Traub's method (TM), and modified Halley’s method (MHM). The columns represent the number of iterations N, the magnitude |f(xn+1)| at the final estimate, the approximate root xn+1, the difference |xn+1 − xn| between two consecutive approximations, and the computational order of convergence (COC), which can be approximated using the formula

COC ≈ ln|(xn+1 − α)/(xn − α)| / ln|(xn − α)/(xn−1 − α)|,

introduced by Weerakoon and Fernando in 2000 [23].
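When the root α is known, the COC formula can be evaluated directly from the computed iterates. A small sketch (function names are ours):

```python
import math

def coc(xs, alpha):
    """Computational order of convergence (Weerakoon-Fernando style):
    rho_n = ln|(x_{n+1} - a)/(x_n - a)| / ln|(x_n - a)/(x_{n-1} - a)|."""
    e = [abs(x - alpha) for x in xs]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(xs) - 1)]

# Demo: Newton iterates for f(x) = x^2 - 2; the COC should approach 2.
xs = [2.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
rates = coc(xs, math.sqrt(2))
```

With the five iterates above, the successive COC estimates climb toward the theoretical order 2 of Newton's method.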


[Tables 1–7: numerical comparison (N, |f(xn+1)|, xn+1, |xn+1 − xn|, COC) of NR1, NR2, OM, TM, MHM, Algorithm 1, and Algorithm 2 on the test functions; Algorithms 1 and 2 attain COC 6 with the fewest iterations.]

Table 8 shows the comparison of the number of iterations required by the different iterative methods and our developed algorithms to approximate the root of each given nonlinear function, for the stopping criterion |xn+1 − xn| < ε. The columns represent the number of iterations for the different test functions along with the initial guess x0. All calculations have been carried out using the computer program Maple 13.


[Table 8: number of iterations required by NR1, NR2, OM, TM, MHM, Algorithm 1, and Algorithm 2 for each test function and initial guess; Algorithms 1 and 2 require the fewest iterations throughout.]

5. Polynomiography

Polynomials are among the most significant objects in many fields of mathematics, and polynomial root-finding has played a key role in the history of mathematics; it is one of the oldest and most deeply studied mathematical problems. One of the latest interesting contributions to the history of polynomial root-finding was made by Kalantari [24], who introduced polynomiography. As a method which generates nice-looking graphics, it was patented by Kalantari in the USA in 2005 [25]. Polynomiography is defined to be “the art and science of visualization in approximation of the zeros of complex polynomials, via fractal and nonfractal images created using the mathematical convergence properties of iteration functions” [24]. An individual image is called a “polynomiograph”. Polynomiography combines both art and science aspects: it gives a new way to approach this ancient problem using new algorithms and computer technology, and it is based on the use of one or an infinite number of iteration methods formulated for the purpose of approximating the roots of polynomials, e.g., Newton’s method and Halley’s method.

The word “fractal,” which appears in the definition of polynomiography, was coined by the famous mathematician Benoit Mandelbrot [26]. Both fractal images and polynomiographs can be obtained via different iterative schemes. Fractals are self-similar, have an intricate structure, and are independent of scale. Polynomiographs, on the other hand, are quite different: the “polynomiographer” can control their shape and design in a more predictable way by applying different iteration methods to the infinite variety of complex polynomials. Generally, fractals and polynomiographs belong to different classes of graphical objects. Polynomiography has diverse applications in mathematics, science, education, art, and design.

According to the fundamental theorem of algebra, any complex polynomial of degree n, characterized either by its complex coefficients or by its zeros, has exactly n zeros, which may or may not be distinct. The degree of the polynomial determines the number of basins of attraction, and the localization of the basins can be controlled by placing roots on the complex plane manually. Usually, polynomiographs are colored based on the number of iterations needed to obtain an approximation of some polynomial root with a given accuracy and a chosen iteration method. The description of polynomiography, its theoretical background, and its artistic applications are given in [7, 8, 24–29].

6. Applications

In numerical algorithms based on iterative processes, we need a stopping criterion for the process, that is, a test that tells us that the process has converged or is very near to the solution. This type of test is called a convergence test. Usually, in iterative processes that use feedback, like the root-finding methods, the standard convergence test has the following form:

|zn+1 − zn| < ε, (32)

where zn+1 and zn are two successive points in the iteration process and ε > 0 is a given accuracy. In this paper, we also use stopping criterion (32). The different colors of an image depend on the number of iterations needed to reach a root with the given accuracy ε. One can obtain infinitely many nice-looking polynomiographs by changing the parameter K, where K is the upper bound on the number of iterations.
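A polynomiograph can thus be produced by recording, for every starting point on a grid in the complex plane, the number of iterations needed to satisfy the convergence test. The sketch below uses Newton's method for illustration (the paper renders its own algorithms); the function name, grid extent, ε, and K values are our choices:

```python
import numpy as np

def newton_iteration_counts(coeffs, xlim=(-2.0, 2.0), ylim=(-2.0, 2.0),
                            size=200, eps=1e-6, kmax=30):
    """For every grid point z0 in the complex plane, count the Newton
    iterations until the step size drops below eps; points still active
    after kmax iterations keep the value kmax (the 'black' regions)."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    xs = np.linspace(xlim[0], xlim[1], size)
    ys = np.linspace(ylim[0], ylim[1], size)
    z = xs[None, :] + 1j * ys[:, None]          # size-by-size grid of start points
    counts = np.full(z.shape, kmax, dtype=int)
    active = np.ones(z.shape, dtype=bool)
    for k in range(kmax):
        dz = p(z) / dp(z)                        # Newton step
        z = np.where(active, z - dz, z)
        done = active & (np.abs(dz) < eps)       # converged on this iteration
        counts[done] = k
        active &= ~done
    return counts

# z^3 - 1: three roots, hence three basins of attraction
counts = newton_iteration_counts([1, 0, 0, -1])
```

Mapping the `counts` array through a color map (e.g., matplotlib's `imshow`) yields the polynomiograph, with the non-converged points rendered black.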

Here, we present polynomiographs of some complex polynomials generated using our developed algorithms and compare them with the polynomiographs obtained using other well-known two-step iterative methods.

In Figures 1–6, polynomiographs of different complex polynomials for Noor’s method one (NR1), Noor’s method two (NR2), Ostrowski’s method (OM), Traub’s method (TM), modified Halley’s method (MHM), and our developed algorithms are shown, which describe the regions of convergence for these polynomials. When we look at the generated images, we can read off two important characteristics. The first is the speed of convergence of the algorithm, i.e., the color of each point tells us how many iterations the algorithm performed to reach the root. The second is the dynamics of the algorithm: the dynamics is low in areas where the variation of colors is small and high in areas with a large variation of colors. The black color shows the places where the solution cannot be reached within the given number of iterations. Areas of the same color indicate the same number of iterations required to determine the solution, and they look similar to contour lines on a map.

All these figures have been generated using the computer program Mathematica, with a fixed accuracy ε for the given root and a fixed upper bound K on the number of iterations.

7. Conclusions

Two new numerical algorithms for finding zeros of nonlinear equations have been developed, each having sixth order of convergence. The performance of the proposed algorithms has been assessed on several test examples, and the numerical results uphold the convergence analysis, as can be seen in Tables 1–8. Both algorithms show better results in terms of the number of iterations, efficiency, and convergence order in comparison with other well-known two-step iterative methods of the same kind. Polynomiographs of complex polynomials of different degrees have been generated using two-step iterative methods and our proposed algorithms. The presented polynomiographs are rich and colorful and have very interesting and aesthetic patterns, which reflect the dynamical aspects of our proposed algorithms.

Data Availability

All data required for this paper are included within this paper.

Conflicts of Interest

The authors do not have any conflicts of interest.

Authors’ Contributions

All authors contributed equally to this paper.

References

  1. D. E. Kincaid and E. W. Cheney, Numerical Analysis, Brooks/Cole Publishing Company, Pacific Grove, CA, USA, 1990.
  2. R. L. Burden and J. D. Faires, Numerical Analysis, Brooks/Cole Publishing Company, Pacific Grove, CA, USA, 6th edition, 1997.
  3. I. K. Argyros, “A note on the Halley method in Banach spaces,” Applied Mathematics and Computation, vol. 58, no. 2-3, pp. 215–224, 1993. View at: Publisher Site | Google Scholar
  4. A. S. Householder, The Numerical Treatment of a Single Nonlinear Equation, McGraw-Hill, New York, NY, USA, 1970.
  5. M. Nawaz, A. Naseem, and W. Nazeer, “New twelfth order algorithms for solving nonlinear equations by using variational iteration technique,” Journal of Prime Research in Mathematics, vol. 14, pp. 24–36, 2018. View at: Google Scholar
  6. W. Nazeer, A. Naseem, S. M. Kang, and Y. C. Kwun, “Generalized newton raphsons method free from second derivative,” Journal of Nonlinear Sciences and Applications, vol. 09, no. 5, pp. 2823–2831, 2016. View at: Publisher Site | Google Scholar
  7. Y. C. Kwun, Z. Majeed, A. Naseem, W. Nazeer, and S. M. Kang, “New iterative methods using variational iteration technique and their dynamical behavior,” International Journal of Pure and Applied Mathematics, vol. 116, no. 4, pp. 1093–1113, 2017. View at: Google Scholar
  8. S. M. Kang, A. Naseem, W. Nazeer, and A. Jan, “Modification of abbasbandy’s method and polynomigraphy,” International Journal of Mathematical Analysis, vol. 10, no. 24, pp. 1197–1210, 2016. View at: Publisher Site | Google Scholar
  9. W. Nazeer, M. Tanveer, S. M. Kang, and A. Naseem, “A new householders method free from second derivatives for solving nonlinear equations and polynomiography,” Journal of Nonlinear Sciences and Applications, vol. 09, no. 03, pp. 998–1007, 2016. View at: Publisher Site | Google Scholar
  10. I. Muhammad, H. Ma, A. Naseem, S. Ali, and A. Nizami, “New algorithms for nonlinear equations,” International Journal of Advanced and Applied Sciences, vol. 5, pp. 28–32, 2018. View at: Publisher Site | Google Scholar
  11. O. A. Taiwo and O. S. Odetunde, “Approximate solution of variational problems by an iterative decomposition method,” Maejo International Journal of Science and Technology, vol. 3, no. 03, pp. 426–433, 2009. View at: Google Scholar
  12. J. F. Traub, Iterative Methods for the Solution of Equations, Chelsea Publishing company, New York, NY, USA, 1982.
  13. M. A. Noor, W. A. Khan, and A. Hussain, “A new modified Halley method without second derivatives for nonlinear equation,” Applied Mathematics and Computation, vol. 189, pp. 1268–1273, 2007. View at: Google Scholar
  14. J.-H. He, “Homotopy perturbation technique,” Computer Methods in Applied Mechanics and Engineering, vol. 178, no. 3-4, pp. 257–262, 1999. View at: Publisher Site | Google Scholar
  15. J. Kuo, “The improvements of modified Newton’s method,” Applied Mathematics and Computation, vol. 189, pp. 602–609, 2007. View at: Google Scholar
  16. J.-H. He, “Variational iteration method - a kind of non-linear analytical technique: some examples,” International Journal of Non-linear Mechanics, vol. 34, no. 4, pp. 699–708, 1999. View at: Publisher Site | Google Scholar
  17. J. M. Gutiérrez and M. A. Hernández, “A family of Chebyshev-Halley type methods in Banach spaces,” Bulletin of the Australian Mathematical Society, vol. 55, pp. 113–130, 1997. View at: Google Scholar
  18. F. Shah, M. A. Noor, and M. Waseem, “Some second-derivative-free sixth-order convergent iterative methods for non-linear equations,” Maejo International Journal of Science and Technology, vol. 10, no. 01, pp. 79–87, 2016. View at: Google Scholar
  19. A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, Cambridge, MA, USA, 2nd edition, 1966.
  20. A. Kumar, P. Maroju, R. Behl, D. K. Gupta, and S. S. Motsa, “A family of higher order iterations free from second derivative for nonlinear equations in R,” Journal of Computational and Applied Mathematics, vol. 330, pp. 215–224, 2018. View at: Publisher Site | Google Scholar
  21. A. Golbabai and M. Javidi, “A third-order Newton type method for nonlinear equations based on modified homotopy perturbation method,” Applied Mathematics and Computation, vol. 191, no. 1, pp. 199–205, 2007. View at: Publisher Site | Google Scholar
  22. M. A. Noor, K. I. Noor, and K. Aftab, “Some new iterative methods for solving nonlinear equations,” World Applied Sciences Journal, vol. 20, no. 6, pp. 870–874, 2012. View at: Google Scholar
  23. S. Weerakoon and T. G. I. Fernando, “A variant of Newton’s method with accelerated third-order convergence,” Applied Mathematics Letters, vol. 13, no. 8, pp. 87–93, 2000. View at: Publisher Site | Google Scholar
  24. B. Kalantari, “Method of creating graphical works based on polynomials,” U.S. Patent 6,894,705, 2005. View at: Google Scholar
  25. B. Kalantari, “Polynomiography: from the fundamental theorem of Algebra to art,” Leonardo, vol. 38, no. 3, pp. 233–238, 2005. View at: Publisher Site | Google Scholar
  26. B. Mandelbrot, The Fractal Geometry of Nature, W.H. Freeman and Company, New York, NY, USA, 1983.
  27. B. Kalantari and E. H. Lee, “Newton-Ellipsoid polynomiography,” Journal of Mathematics and the Arts, vol. 13, no. 4, pp. 336–352, 2019. View at: Publisher Site | Google Scholar
  28. K. Gdawiec, “Fractal patterns from the dynamics of combined polynomial root finding methods,” Nonlinear Dynamics, vol. 90, no. 4, pp. 2457–2479, 2017. View at: Publisher Site | Google Scholar
  29. S. M. Kang, A. Naseem, W. Nazeer, M. Munir, and C. Y. Jung, “Polynomiography of some iterative methods,” International Journal of Mathematical Analysis, vol. 11, no. 3, pp. 133–149, 2017. View at: Publisher Site | Google Scholar

Copyright © 2020 Amir Naseem et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

