Abstract

In this paper, we propose and analyze three new root-finding algorithms for solving nonlinear equations in one variable. We derive these algorithms with the help of the variational iteration technique and discuss their convergence criteria. The dominance of the proposed algorithms is illustrated by solving several test examples and comparing the results with those of other well-known iterative methods from the literature. Finally, we present the basins of attraction for complex polynomials of different degrees to observe the fractal behavior and dynamical aspects of the proposed algorithms.

1. Introduction

Most problems in mathematics, physics, and the engineering sciences are linked with the solution of nonlinear equations of the following form:

f(x) = 0, (1)

where f is a scalar function defined on an open connected set D.

In most cases, the roots of such equations cannot be found directly, and therefore we need to adopt an iterative method to approximate them. In an iterative method, we start the process by choosing an initial guess, which is refined sequentially by means of iterations until the approximate solution is reached. Some basic and classical methods are given in [1–9] and the references therein.

The most famous and well-known method for finding roots of nonlinear equations has the following form:

x_{n+1} = x_n − f(x_n)/f'(x_n), n = 0, 1, 2, …, (2)

which is the quadratically convergent Newton's method [10] for the solution of nonlinear equations.
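The Newton iteration described above can be sketched in a few lines of Python; the function names, tolerance, and sample function below are illustrative choices, not the paper's.

```python
# Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Iterate until two successive approximations agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the root of f(x) = x**2 - 2 near x0 = 1 is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Quadratic convergence means the number of correct digits roughly doubles per step, which is why only a handful of iterations are needed here.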

To improve its convergence, a large number of iteration schemes have been proposed by means of various techniques, such as the Adomian decomposition method, Taylor's series, perturbation methods, quadrature formulas, interpolation, finite difference techniques for removing higher-order derivatives, and variational iteration techniques; see [10–22] and the references therein.

In this paper, we propose three new algorithms using the variational iteration technique by considering two auxiliary iteration functions g and h. The first one, g, behaves like a predictor function of convergence order p, where p ≥ 1, and helps to attain iterative methods of convergence order p + q, where q is the order of convergence of the second auxiliary function h. Using the variational iteration technique, we develop new higher-order root-finding algorithms with good performance and efficiency. The variational iteration technique was introduced by Inokuti et al. [15] in 1978. Using this technique, Noor and Shah [20, 21] proposed some iterative methods for the solution of nonlinear equations. The technique has been used to solve a variety of diverse problems [12–14].

We now apply the described technique to obtain higher-order root-finding algorithms. The new algorithms are very fast, requiring fewer iterations to reach the required root; they are free of third- and higher-order derivatives and have ninth-order convergence, which raises their efficiency index.

The rest of the paper is divided as follows. The new iteration schemes are described in Section 2. In Section 3, the convergence criteria of the proposed algorithms have been discussed. In Section 4, various test examples have been solved to show their performance as compared to the other similar existing methods in the literature. The basins of attraction for some complex polynomials have been presented in Section 5 which shows the dynamical and fractal behavior of the proposed algorithms. Finally, the conclusion of the paper is given in Section 6.

2. Construction of Root-Finding Algorithms Using Variational Iteration Technique

In this section, we construct some new root-finding algorithms with the help of the variational iteration technique. These algorithms are multistep iterative methods involving predictor and corrector steps, and they possess a higher order of convergence than one-step methods. By applying the variational iteration technique, we derive new root-finding algorithms of order p + q, where p and q are the orders of convergence of the auxiliary iteration functions g and h. Now, consider the nonlinear equation of the following form:

f(x) = 0. (3)

Suppose that α is a simple root of (3) and x_0 is an initial guess sufficiently close to α. For better understanding and to deliver the basic idea, we suppose the approximate solution of (3) is such that

We consider g and h as two iteration functions of order p and q, respectively. Then we obtain a recurrence relation (5) which generates iterative methods of order p + q, in which one auxiliary function is arbitrary and is converted later to a suitable form, and λ is a parameter, called the Lagrange multiplier, which can be determined from (5) by using the optimality criterion as follows:

From (5) and (6), we get

Now, we apply (7) to construct a general scheme for iterative methods. For this, suppose that

g(x_n) = x_n − f(x_n)/f'(x_n) − f(x_n)² f''(x_n) / (2 f'(x_n)³), (8)

which is the well-known Householder's method with cubic convergence [7]. With the help of (7) and (8), we can write
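Householder's cubically convergent step, used here as the predictor, can be sketched in Python as follows; the helper name and the sample function are our own illustrative choices, not the paper's.

```python
def householder_step(f, df, d2f, x):
    """One step of Householder's cubically convergent method:
    x - f/f' - f**2 * f'' / (2 * f'**3)."""
    fx, d1, d2 = f(x), df(x), d2f(x)
    return x - fx / d1 - fx * fx * d2 / (2 * d1**3)

# Example: f(x) = x**2 - 2; a few steps from x0 = 1.0 approach sqrt(2).
x = 1.0
for _ in range(4):
    x = householder_step(lambda t: t * t - 2, lambda t: 2 * t, lambda t: 2.0, x)
```

With cubic convergence the error is roughly cubed at each step, so four steps from a nearby guess already reach machine precision.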

Let (10) denote the resulting two-step iterative method, which has convergence of order six. Differentiating equation (10) with respect to x, we have (11), and from Taylor's series, we can write

The last expression has been obtained by putting the value of from (10).

From (11) and (12), we obtain (13). With the help of (9), (10), and (13), we arrive at (14), in accordance with the above-described technique. Then, equation (14) becomes

Relation (15) is the main and general iterative scheme, which we use to deduce some new root-finding algorithms by considering particular cases of the auxiliary function, keeping in mind that it should not be zero or nearly zero at any iteration. If the auxiliary function takes the value zero, then (15) reduces to a scheme similar to (10), and we are unable to derive new algorithms.

2.1. Case 1

Let , then . Using these values in (15), we obtain the following algorithm.

Algorithm 1. For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes:

2.2. Case 2

Let , then . Using these values in (15), we obtain the following algorithm.

Algorithm 2. For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes:

2.3. Case 3

Let , then . Using these values in (15), we obtain the following algorithm.

Algorithm 3. For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes:To obtain the best results with all of the above algorithms, always choose the value of the parameter that makes the denominator nonzero and largest in magnitude.

3. Convergence Analysis

In this section, we discuss the convergence criteria of the general iteration scheme described in relation (15).

Theorem 1. Let α be a simple root of the differentiable function f on an open interval I. If the initial guess x_0 is sufficiently close to α, then the convergence order of the main and general iteration scheme described in relation (15) is at least nine.

Proof. To prove the convergence of the main and general iteration scheme described in relation (15), we assume that α is a simple root of f(x) = 0 and that e_n = x_n − α is the error at the nth iteration. Using the Taylor series of f about α, we have (20), with the coefficients defined in (21)–(23). With the help of (20)–(23), we get (24)–(26), and with the help of (24)–(26), we obtain the subsequent expansions. Using equations (20)–(38) in the general iteration scheme (15), we obtain an error relation which implies that e_{n+1} = O(e_n^9). This shows that the main and general iteration scheme (15) has ninth-order convergence, and all algorithms deduced from it have the same order of convergence.

4. Numerical Results

In this section, we include some nonlinear functions to demonstrate the performance of the newly proposed algorithms. We compare these algorithms with the following well-known iterative methods:

4.1. Newton’s Method (NM)

For a given x_0, compute the approximate solution x_{n+1} by the following iterative scheme:

x_{n+1} = x_n − f(x_n)/f'(x_n), n = 0, 1, 2, …,

which is the well-known Newton's method [10] for finding zeros of nonlinear functions, having quadratic order of convergence.

4.2. Halley’s Method (HM)

For a given x_0, compute the approximate solution x_{n+1} by the following iterative scheme:

x_{n+1} = x_n − 2 f(x_n) f'(x_n) / (2 f'(x_n)² − f(x_n) f''(x_n)), n = 0, 1, 2, …

This is the so-called Halley's method [2] for root-finding of nonlinear functions, which converges cubically.
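One Halley step can be illustrated with a short Python sketch; the helper name and the sample function are illustrative choices, not the paper's.

```python
def halley_step(f, df, d2f, x):
    """One Halley step: x - 2*f*f' / (2*f'**2 - f*f'')."""
    fx, d1, d2 = f(x), df(x), d2f(x)
    return x - 2 * fx * d1 / (2 * d1 * d1 - fx * d2)

# Example: f(x) = x**2 - 2, starting from x0 = 1.0.
x = 1.0
for _ in range(4):
    x = halley_step(lambda t: t * t - 2, lambda t: 2 * t, lambda t: 2.0, x)
```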

4.3. Traub’s Method (TM)

For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes:

y_n = x_n − f(x_n)/f'(x_n),
x_{n+1} = y_n − f(y_n)/f'(y_n), n = 0, 1, 2, …,

which is known as Traub's method [10] for finding roots of nonlinear functions and possesses fourth order of convergence.
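Traub's method is a Newton predictor followed by a Newton corrector, which a short Python sketch makes explicit (the helper name and sample function are illustrative, not from the paper):

```python
def traub_step(f, df, x):
    """One Traub iteration: Newton predictor y, then Newton corrector from y."""
    y = x - f(x) / df(x)        # predictor step
    return y - f(y) / df(y)     # corrector step

# Example: f(x) = x**3 - 2, whose real root is 2**(1/3).
x = 1.5
for _ in range(3):
    x = traub_step(lambda t: t**3 - 2, lambda t: 3 * t * t, x)
```

Composing two quadratically convergent Newton steps per iteration is what yields the fourth-order convergence.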

4.4. Modified Halley’s Method (MHM)

For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes, which constitute the modified Halley's method [23] for finding the solution of nonlinear functions, having fifth order of convergence.

In order to make a numerical comparison of the above-described methods with the newly developed algorithms, the following test examples have been solved, as shown in Table 1.

Table 1 shows the numerical comparison of our developed algorithms with Newton's method, Halley's method, Traub's method, and the modified Halley's method. The columns represent the number of iterations n, the magnitude |f(x_n)| at the final estimate, the approximate root x_n, and the computational order of convergence (COC), which can be expressed by the following formula:

COC ≈ ln(|x_{n+1} − x_n| / |x_n − x_{n−1}|) / ln(|x_n − x_{n−1}| / |x_{n−1} − x_{n−2}|),

which was suggested by Cordero and Torregrosa (2007) [24].
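The COC can be estimated directly from the last four iterates, as the following Python sketch shows for plain Newton iterates (the helper name `coc` is ours, not the paper's):

```python
from math import log

def coc(xs):
    """Computational order of convergence estimated from the last four
    iterates, using only differences of consecutive approximations."""
    e1 = abs(xs[-1] - xs[-2])
    e2 = abs(xs[-2] - xs[-3])
    e3 = abs(xs[-3] - xs[-4])
    return log(e1 / e2) / log(e2 / e3)

# Newton iterates for f(x) = x**2 - 2 starting from x0 = 1.0.
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))

order = coc(xs)  # close to 2, Newton's theoretical order
```

Because the formula uses only consecutive iterates, no knowledge of the exact root is needed.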

Looking at the numerical results in Table 1, we find that the proposed algorithms show the best performance compared with the other methods. For example, in the first test example of Table 1, Algorithm 3 is the best, as it took the fewest iterations among all compared methods with great precision. In the second, third, and fifth test examples, Algorithm 1 performed better than the others. In the fourth one, Algorithm 3 is superior to the other methods. In short, the proposed algorithms are best in terms of accuracy, speed, number of iterations, and computational order of convergence compared with the other well-known iterative methods. All numerical examples have been solved using the computer program Maple 13 by taking the accuracy ε for the following stopping criterion:

|x_{n+1} − x_n| < ε.

Table 2 shows a comparison of the number of iterations required by different iterative methods and by our developed algorithms to approximate the root of each given nonlinear function to the prescribed accuracy. The columns represent the number of iterations for the different functions along with the initial guess x_0.

The numerical results shown in Table 2 again confirm the fast and superior performance of the proposed algorithms in terms of the number of iterations for the above-defined stopping criterion with the given accuracy. In all test examples, the proposed algorithms required fewer iterations than the other iterative methods. All calculations have been carried out using the computer program Maple 13.

Table 3 shows the effect of the parameter on the proposed algorithms by taking three different values of it. We applied the proposed algorithms to different test examples, and the obtained results show that the numerical behavior of the proposed algorithms changes with the parameter. A major change can be seen in the first test example of Table 3: here, Algorithm 1 took six iterations for one value of the parameter, thirty-two iterations for another, and thirteen iterations for the third. Similar changes can be observed in the other examples of Table 3. Looking at the overall results of Table 3, we conclude that the value already used in Tables 1 and 2 is the best choice of the parameter for the proposed algorithms.

5. Basins of Attraction

An attractor’s basin of attraction is the region of the phase space, over which iterations are defined, such that any point (any initial condition) in that region will eventually be iterated into the attractor. Nonlinear equations or systems can give rise to a richer variety of behavior than linear systems can. Basins of attraction describe the dynamical aspects and characteristics of the iterative method under consideration over a large number of examples and sets of parameter values; see [25, 26] and the references cited therein. The basin of attraction for complex Newton's method was first considered by Cayley [27]. The aim of this section is to present the basins of the proposed algorithms using graphical tools. To render the basins of attraction with a computer program, we choose an initial rectangle R containing the roots of the considered polynomial. Then, for every point in the region, we run an iterative method and color the point according to the root to which its truncated orbit approximately converges, or lack thereof. The resolution of the image depends on our discretization of the rectangle R; for example, if we discretize R into a 2000 by 2000 grid, the result is a high-resolution image.
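The grid-coloring procedure just described can be sketched in Python; here plain Newton iteration on p(z) = z**3 − 1 stands in for the proposed algorithms, and the grid size, tolerance, and iteration cap are illustrative assumptions, not the paper's settings.

```python
import cmath

def basin_index(z, roots, tol=1e-6, max_iter=50):
    """Return (root_index, iterations) for Newton's method on p(z) = z**3 - 1,
    or (-1, max_iter) if the orbit does not settle within max_iter steps."""
    for k in range(max_iter):
        dp = 3 * z * z
        if dp == 0:            # derivative vanishes; the orbit cannot proceed
            break
        z = z - (z**3 - 1) / dp
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i, k
    return -1, max_iter

# The three cube roots of unity: one color per root in a real rendering.
roots = [cmath.exp(2j * cmath.pi * i / 3) for i in range(3)]

# A coarse 5x5 sample of the rectangle [-2, 2] x [-2, 2]; an actual image
# would use a fine grid, e.g. 2000 x 2000 points.
grid = [[basin_index(complex(x, y), roots)[0]
         for x in (-2, -1, 0.5, 1, 2)]
        for y in (-2, -1, 0.5, 1, 2)]
```

In a full rendering, the root index selects the hue and the iteration count selects the shade, exactly as described for the figures below.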

Basins of attraction generated by different algorithms have various applications in fields such as mathematics, science, education, art, and design. By the Fundamental Theorem of Algebra, any complex polynomial p with complex coefficients a_0, a_1, …, a_n,

p(z) = a_n z^n + a_{n−1} z^{n−1} + ⋯ + a_1 z + a_0,

or, written in terms of its zeros (roots) z_1, z_2, …, z_n,

p(z) = a_n (z − z_1)(z − z_2) ⋯ (z − z_n),

of degree n has exactly n roots, which may be distinct or repeated. The polynomial's degree determines the number of basins of attraction and the placement of the roots on the complex plane, and the localization of the basins can be controlled manually. Usually, the colors of the basins of attraction depend on the considered iterative method and the total number of iterations required to attain the approximate solution of the polynomial with a given accuracy. A detailed study of basins of attraction, their theoretical background, and interesting applications is given in [28–36].

5.1. Applications

In a numerical algorithm based purely on an iterative process, we always require a stopping criterion, i.e., a test which tells us that the process has converged or is very close to the solution. Such a test is called a convergence test and has the following standard form:

|z_{n+1} − z_n| < ε, (42)

where z_n and z_{n+1} are two consecutive points in the iteration process and ε is a given accuracy. In this paper, we also use the stopping criterion (42). Using the newly developed algorithms, we present basins of attraction of various polynomials that are attractive, colorful, and aesthetic. The different colors of an image are based on the number of iterations required to approximate the root with the defined accuracy ε. A large number of such images can be generated by varying the upper bound on the number of iterations.

Here, we present the basins of attractions using the following complex polynomials of different degrees:

All the figures have been generated using the computer program Mathematica 10.0 by taking , , and , where shows the accuracy of the given root, represents the area in which we draw the basins of attraction, and represents the upper bound of the number of iterations.

Example 1. Basins of attraction using the complex polynomial for proposed algorithms.

Example 2. Basins of attraction using the complex polynomial for proposed algorithms.

Example 3. Basins of attraction using the complex polynomial for proposed algorithms.

Example 4. Basins of attraction using the complex polynomial for proposed algorithms.

Example 5. Basins of attraction using the complex polynomial for proposed algorithms.

Example 6. Basins of attraction using the complex polynomial for proposed algorithms.
In Examples 1–6, basins of attraction for complex polynomials of different degrees have been shown for our proposed algorithms.
In the first experiment, we ran all the proposed algorithms to obtain the simple zeros of the cubic polynomial. The resulting basins of attraction are given in Figure 1. For each distinct root of the considered polynomial, there is a unique color on the corresponding basins of attraction, so there are three unique colors, namely, brown, yellow, and red, which can easily be seen in Figure 1. In the next experiment, we consider the polynomial which has three distinct roots, each with multiplicity 2. The basins of attraction are presented in Figure 2. Three unique colors corresponding to the distinct roots can be seen in Figure 2; the repeated roots appear with the same colors on the basins of attraction. In Examples 3 and 4, we ran the proposed algorithms for two further complex polynomials. The results are given in Figures 3 and 4. Both polynomials have four distinct roots, and the corresponding four colors can easily be seen in Figures 3 and 4, respectively; the roots of the latter polynomial all have multiplicity 2. The basins of attraction for the remaining two complex polynomials through our proposed algorithms are presented in Figures 5 and 6, respectively. These polynomials have five distinct roots, but the zeros of the latter polynomial are not simple and have multiplicity 2. The five unique colors corresponding to these roots appear on the basins of attraction and can be seen in Figures 5 and 6.
When we look at the generated images, we can read off two important characteristics. One is the convergence speed of the considered algorithm, which is depicted by the shade of the color: darker colors indicate fewer iterations of the considered algorithm, and vice versa. The second is the dynamical behavior of the algorithm: the dynamics are low in areas containing small variations of color, whereas in areas with large variations of color, the dynamics are high. The black color locates those places where the solution cannot be reached within the given number of iterations. Areas with the same color indicate the same number of iterations needed to approximate the solution and give a look similar to contour lines on a map.

6. Concluding Remarks

Using the variational iteration technique, three new root-finding algorithms with ninth-order convergence have been established for the solution of nonlinear equations in one variable. The performance and efficiency of the proposed algorithms have been analyzed on a set of test examples. Tables 1 and 2 show the superior performance of the proposed algorithms in terms of accuracy, speed, number of iterations, and computational order of convergence compared with other well-known iterative methods. We have also presented basins of attraction for some complex polynomials generated by the newly developed algorithms, which describe their fractal behavior and dynamical aspects. The variational iteration technique can be applied to derive a broad range of new algorithms for solving nonlinear equations in one variable.

Data Availability

All data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All the authors have contributed equally to this paper.

Acknowledgments

The corresponding author Thabet Abdeljawad would like to thank Prince Sultan University for funding this work through research group Nonlinear Analysis Methods in Applied Mathematics (NAMAM) (group number RG-DES-2017-01-17).