
Research Article | Open Access

Volume 2014 | Article ID 601205 | https://doi.org/10.1155/2014/601205

T. Lotfi, F. Soleymani, S. Sharifi, S. Shateyi, F. Khaksar Haghani, "Multipoint Iterative Methods for Finding All the Simple Zeros in an Interval", Journal of Applied Mathematics, vol. 2014, Article ID 601205, 13 pages, 2014. https://doi.org/10.1155/2014/601205

# Multipoint Iterative Methods for Finding All the Simple Zeros in an Interval

Revised: 28 May 2014
Accepted: 02 Jun 2014
Published: 24 Jul 2014

#### Abstract

Two new families of multipoint iterative methods without memory, of eighth and sixteenth order, are constructed using the symbolic software Mathematica. The key idea in constructing such methods is to produce generic suitable functions that reduce the number of functional evaluations and increase the order of convergence along with the computational efficiency. Again applying Mathematica, we design a hybrid algorithm to capture all the simple real solutions of nonlinear equations in an interval. An application of the new schemes to producing fractal pictures is also furnished.

#### 1. Introduction

In this work, we are concerned with the numerical solution of nonlinear equations and its applications. As usual, let the scalar function f be sufficiently differentiable in an open interval and have a zero in it; that is, there exists α such that f(α) = 0. If f'(α) ≠ 0, we categorize the solution as a simple one, while if f(α) = 0 and f'(α) = 0, then the solution is multiple. The applications of such schemes, from finding the Moore-Penrose inverse to calculating the p-adic inverse of a number with respect to a given modulus, have motivated researchers to construct multipoint schemes possessing a much more reliable computational efficiency; see, for example, the related literature.

Basically, in the literature (see, e.g., [5, 6]), two important features determine the choice of an iterative method for solving such equations: first, the total number of iterations and, second, the computational cost. The former is measured by the order of convergence and the latter by the necessary number of evaluations of the function and its derivatives per full cycle. These two characteristics are linked by the concept of the computational efficiency index, defined by Traub as E = p^(1/n), wherein p is the order and n is the number of functional evaluations per iteration, assuming that function and derivative evaluations carry the same computational cost. It should be remarked that, based on the still unproven Kung-Traub conjecture, an iterative method without memory with n functional evaluations per iteration has maximum order of convergence 2^(n-1) (called the optimal order).
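As a quick sanity check, the efficiency values quoted later in the paper (1.682 for the eighth-order scheme and 1.741 for the sixteenth-order scheme) follow directly from Traub's formula; a minimal sketch, with the orders and evaluation counts taken from the text:

```python
def efficiency_index(p, n):
    """Traub's computational efficiency index E = p**(1/n), where p is the
    convergence order and n the number of functional evaluations per cycle."""
    return p ** (1.0 / n)

# Newton's method: order 2 with 2 evaluations per cycle -> ~1.414.
newton = efficiency_index(2, 2)
# Optimal eighth-order scheme: order 8 = 2**(4-1) with 4 evaluations -> ~1.682.
eighth = efficiency_index(8, 4)
# Optimal sixteenth-order scheme: order 16 = 2**(5-1) with 5 evaluations -> ~1.741.
sixteenth = efficiency_index(16, 5)
```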

The aim of this work is threefold. The first goal is to apply the programming package Mathematica so as to obtain some new multipoint iterative methods of optimal orders eight and sixteen, consisting of three and four steps, respectively.

The second is the application of the new schemes in computer graphics. In fact, each iterative fixed-point type method produces unique basins of attraction and fractal behavior, which results in pictures useful in art, as discussed fully by Kalantari.

The only drawback of such iterative schemes is the choice of the initial guess (seed) needed to start the process and guarantee convergence. In this paper, an algorithm will be constructed to provide a list of initial approximations and then to obtain all the simple real zeros up to any desired tolerance, when high-precision computing is needed. This is the third goal of the present work.

The rest of the paper unfolds as follows. Section 2 gives a generic three-step family of methods, while Section 3 is devoted to proposing a new general family of four-step optimal methods along with its analysis of convergence. The fractal behavior of one of the new methods in producing self-similar pictures will be given in Section 4. In Section 5, we recall the approach of Wagon for extracting a list of initial approximations to the simple real zeros, along with some notes and a hybrid algorithm for computing all the simple real zeros; some numerical examples are also reported there. Section 6 concludes the paper.

#### 2. A New Optimal Eighth-Order Method with Two Generic Functions

In this section, we first aim at constructing an optimal eighth-order method without memory, which will serve as the first three steps of the four-step optimal sixteenth-order family constructed in the next section.

To this end, consider the following three-point method (without the iteration index):

The first two steps are optimal, since they constitute Ostrowski's fourth-order two-step iterative method, which reaches order four using three functional evaluations. The third step, however, is not efficient enough: the method performs five functional evaluations per cycle, which does not agree with the Kung-Traub conjecture for iterations without memory. The number of functional evaluations must therefore be reduced from five to four.
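For reference, the first two (Ostrowski) steps can be sketched in a few lines. The following is an illustrative Python translation (the paper itself works in Mathematica), with the function f and its derivative fp supplied by the caller:

```python
def ostrowski(f, fp, x0, tol=1e-13, maxit=50):
    """Ostrowski's optimal fourth-order two-step method: a Newton step
    followed by the correction y - f(x)/(f(x) - 2 f(y)) * f(y)/f'(x).
    Uses three functional evaluations (f(x), f'(x), f(y)) per cycle."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = fp(x)
        y = x - fx / d                               # Newton predictor
        fy = f(y)
        x = y - (fx / (fx - 2.0 * fy)) * (fy / d)    # Ostrowski corrector
    return x
```

For instance, `ostrowski(lambda x: x*x - 2, lambda x: 2*x, 1.5)` converges to the square root of 2 within a couple of iterations.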

For this purpose, we try to approximate the derivative appearing in the third step in terms of the already computed values , and ; that is, where , , and . Note that we omit the iteration index for simplicity only. Substituting (2) into (1), we have

Although iteration (3) uses four functional evaluations per full cycle, in general it does not have convergence order eight. It is necessary to find suitable conditions on the generic functions and so that iteration (3) attains optimal eighth order. The method of undetermined coefficients, Maclaurin series, and Mathematica are our tools. For simplicity, we consider the following truncated Maclaurin series: where for , and wherein for . Note that we defined the truncated Maclaurin series in this way purposely, so as to construct the most general iterative methods without memory in the programming package Mathematica.

Theorem 1. Let α be a simple zero of a sufficiently differentiable function f. If the initial guess is sufficiently close to α, then the order of convergence for (3) is eight if conditions (6) hold. Its error equation reads

Proof. Let , , , and denote , . Using Taylor's expansion and taking into account that f(α) = 0, we have ; also, by differentiation, we obtain . Combining the last two equations gives . Hence,
Similarly, we get . Now, we have
Moreover, we obtain
From (9) and (10), it follows that
Also, . We now need , , and . Then, while .
It is remarked that, although we have not presented the higher order terms in the above expressions, one can obtain them simply with a computer algebra system such as Mathematica. In the third step of (3), we need to consider Taylor's series of and about and , respectively. For our purposes, it suffices to use the following: , and .
So, we get the following general error equation when is subtracted from both sides of (3): where , and
Moreover, note that the first two steps are optimal because they form Ostrowski's method. Therefore, we seek necessary conditions on the generic functions and such that all the coefficients of in the general error equation vanish for . To this end, we proceed as follows.
We have . Then, to make this coefficient vanish, it is enough to choose and . Under this condition, . Again, to make this coefficient vanish, we choose . Repeating this argument leads to , which yields the conditions and . Finally, , for which we obtain and .
Making the coefficient of vanish shows that, under the above conditions on the weight functions (a.k.a. generic functions), we obtain an error equation of the form . This reveals that we have constructed an optimal eighth-order scheme, and the proof is complete.

We now introduce specific sets of generic functions satisfying the obtained conditions (6) and apply them to (3) to construct concrete methods. For example, we could take which satisfy the conditions of Theorem 1. Thus, one iterative scheme can be defined as

Other admissible choices include , or , as well as .
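To make the construction concrete, the full three-step cycle can be sketched with one particular weight: the expression 1 + 2t + 4s + 5t^2 + 12t^3 + u/(1 + u^2), with t = f(y)/f(x), s = f(z)/f(x), u = f(z)/f(y), which appears verbatim in the Mathematica corrector code of Section 5 (Algorithm 3). The following is an illustrative floating-point Python translation, not the paper's high-precision implementation:

```python
def eighth_step(f, fp, x):
    """One cycle of the eighth-order scheme: Ostrowski's two steps plus a
    weighted third step, using four functional evaluations in total."""
    fx = f(x)
    d = fp(x)
    y = x - fx / d                               # Newton step
    fy = f(y)
    if fy == 0.0:                                # already at a zero
        return y
    z = y - (fx / (fx - 2.0 * fy)) * (fy / d)    # Ostrowski step
    fz = f(z)
    if fz == 0.0:
        return z
    t, s, u = fy / fx, fz / fx, fz / fy
    w = 1 + 2 * t + 4 * s + 5 * t ** 2 + 12 * t ** 3 + u / (1 + u ** 2)
    return z - w * (fz / d)                      # weighted third step

def solve8(f, fp, x0, tol=1e-13, maxit=10):
    """Iterate eighth_step until the residual is below tol."""
    x = x0
    for _ in range(maxit):
        if abs(f(x)) < tol:
            break
        x = eighth_step(f, fp, x)
    return x
```

Starting from x0 = 1.2, `solve8(lambda x: x**3 - 2, lambda x: 3*x*x, 1.2)` reaches the cube root of 2 to machine precision in two cycles.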

#### 3. A New Optimal Sixteenth-Order Family

This section is concerned with the construction of a new sixteenth-order family. We add Newton's step to the concrete method (18) obtained in the previous section as follows:

Although the first three steps are optimal and iteration (19) has convergence order sixteen, the whole procedure is not optimal, since it does not satisfy the Kung-Traub conjecture: iteration (19) reaches order 16 with 6 functional evaluations. To obtain an optimal method, we must remove one functional evaluation. To this end, our goal is to approximate the new derivative evaluation in terms of the other available function values, that is, , and , similarly to the case of Section 2. Finding such an approximation sounds easy at first, but it proved far from simple; only after trying many generic functions did we succeed.

In addition, the general approach is very similar to that of the previous section, and to save space we avoid repeating some cumbersome details. Let where ; the generic functions , , , and are to be determined in such a way that scheme (19) has optimal order sixteen. We can express the following relations: where , , ,

Finding the required conditions on the new generic functions , and is very similar to the above approach for determining the functions and . Due to limited space, we therefore provide the corresponding Mathematica source in the Appendix. We summarize the obtained results in the following theorem.

Theorem 2. Let α be a simple zero of a sufficiently differentiable function f. If the initial guess is sufficiently close to α, then the order of convergence for (19)-(20) is sixteen if conditions (23) hold. Its error equation is

Proof. See the Appendix.

Now we introduce specific sets of generic functions satisfying the obtained conditions (23) and apply them in (19) to construct concrete methods. Hence, we attain

Each method derived from the four-step scheme with suitable weight functions achieves the optimal efficiency index 16^(1/5) ≈ 1.741, which matches that of and is greater than 2^(1/2) ≈ 1.414 of Newton's method, 4^(1/3) ≈ 1.587 of the optimal fourth-order methods (see, e.g., ), and 8^(1/4) ≈ 1.682 of the optimal three-point methods such as (18) or the methods given in , and even greater than that of the with-memory methods recently developed in . This shows that the contribution of this work is quite effective.

Note that although we have performed all the developments for a real function on a real interval, all the theorems can be extended to the case in which the function is defined in the complex plane or has a complex zero. In such a case, a complex seed (initial guess) is needed for convergence.

#### 4. Application in Art

Kalantari coined the term polynomiography as a clear application of fixed-point iterative methods in producing beautiful fractal pictures, which are used in computer graphics and art. Polynomiography is defined to be the art and science of visualization in the approximation of the zeros of complex polynomials, via fractal and nonfractal images created using the mathematical convergence properties of iteration functions.

As is known, according to the Fundamental Theorem of Algebra, a polynomial of degree n with real or complex coefficients has n zeros (roots), which may or may not be distinct.

In this section, using machine precision and the computer programming package Mathematica 8, we produce some of these beautiful fractals obtained from our new methods. In fact, an iteration function is a mapping of the plane into itself; that is, given any point in the plane, it is a rule that provides another point in the plane. This section shows how the new schemes can be used in polynomiography.

Consider a polynomial and a fixed natural number of iterations. The basin of attraction of a root of the polynomial with respect to an iteration function is the region in the complex plane such that, given an initial point within it, the corresponding sequence of iterates will converge to that root (see [17, 18] for more details).

It turns out that the boundary of the basins of attraction of any of the polynomial roots is the same set. This boundary is known as the Julia set and its complement is known as the Fatou set.

From now on, a complex rectangle will be considered, and we subsequently assign a color to each complex point according to the root at which the corresponding method, starting from that point, converges. We assign a color to each attraction basin of a root, and we further make the color lighter or darker according to the number of iterations needed to reach the root with the required fixed precision.
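The coloring procedure just described can be sketched independently of Mathematica. The following illustrative Python fragment classifies the points of a complex rectangle by the root of p(z) = z^3 - 1 to which Newton's iteration (used here only as the simplest example of an iteration function, not one of the paper's schemes) converges, together with the iteration count that drives the lighter/darker shading:

```python
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # roots of z^3 - 1

def basin(z, maxit=50, tol=1e-10):
    """Return (root_index, iterations); root_index is -1 on non-convergence.
    The iteration count is what drives the shading described above."""
    for k in range(maxit):
        for i, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return i, k
        if z == 0:                        # derivative 3z^2 vanishes; give up
            return -1, maxit
        z = z - (z ** 3 - 1) / (3 * z ** 2)   # Newton's iteration function
    return -1, maxit

# Classify a coarse grid over the rectangle [-2, 2] x [-2, 2];
# each entry is the index of the attracting root (the "color").
grid = [[basin(complex(re, im) / 10.0)[0]
         for re in range(-20, 21)]
        for im in range(-20, 21)]
```

Mapping root indices to colors and iteration counts to brightness over a much finer grid yields the familiar three-lobed Newton fractal; the same loop with (18) as the iteration function produces the pictures of Figures 2-4.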

An important aspect of the work of Kalantari, which emerged in his carpet designs, is the application of iteration functions to higher order polynomials with various styles of coloring. We herein use six different polynomials with different coloring schemes in Mathematica 8. Toward this, we used three different colorings, as indicated in Figure 1. The resulting basins of attraction for (18), which provide beautiful pictures, are given in Figures 2, 3, and 4, via the colorings described in Figure 1.

#### 5. All the Simple Real Zeros

The fixed-point type iterative methods might diverge if the conditions of the main theorems given in Sections 2 and 3 fail. Actually, an important shortcoming of multipoint optimal schemes, such as the ones in Sections 2 and 3, is that their convergence cannot be achieved without a robust approximation of the position of the zeros. This is sometimes referred to as the most important difficulty of iterative methods in practical problems. Besides, in practice one wants to find all the real solutions of a nonlinear function at the same time.

To remedy such shortcomings, only a few approaches, such as interval methods, have been presented in the literature up to now. For instance, Yun applied a numerical-integration-based technique toward this goal. His technique is robust, but it might be time consuming when there are many zeros in an interval.

Herein we provide a predictor-corrector algorithm, based on the Mathematica 8 software, consisting of two parts. In the first part, we apply the procedure given by Wagon for extracting robust initial approximations to all the simple real zeros. The second part applies any of the optimal methods presented in this paper as a corrector to increase the accuracy of the solution in a short amount of time, when high-precision computing is needed. To clarify the scope of the procedure, we attempt to find all the axis crossings; that is to say, we will not attempt to capture simple zeros at which there is no axis crossing.

Toward this end, and as Wagon did, we can use the Mesh options of Plot to find the crossings: one then picks out the mesh points from the normal graphics form of the plot. Note that the setting of MeshFunctions must be a pure function, such as f[#] &, and cannot be just f.
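Outside Mathematica, the same predictor idea, namely sample the function densely and record where it changes sign, can be sketched as follows. This is only an illustration: the sine example is not one of the paper's test problems, and n plays the role of PlotPoints:

```python
import math

def sign_change_seeds(f, a, b, n=1000):
    """Return midpoints of the sample subintervals on [a, b] where f changes
    sign; these serve as initial guesses for the simple real zeros."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    seeds = []
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0) * f(x1) < 0.0:
            seeds.append(0.5 * (x0 + x1))
    return seeds

# The zeros of sin on [1, 10] are pi, 2*pi, and 3*pi.
seeds = sign_change_seeds(math.sin, 1.0, 10.0)
```

Each seed is then handed to an optimal corrector, exactly as the mesh points extracted below are handed to scheme (18).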

To further illustrate, consider the following example in which the whole of the procedure is to extract all the simple real zeros of a nonlinear equation in a given interval (see Algorithm 1).

Clear["Global`*"]
f[x_] := ChebyshevT[20, x] - ChebyshevU[40, x] + Sin[x + 1] - 11/10;
a = -1.; b = +1.;
p = Plot[f[x], {x, a, b}, Mesh -> {{0}}, MeshFunctions -> {f[#] &},
   PlotPoints -> 1000, PerformanceGoal -> "Quality",
   WorkingPrecision -> 128];

We used PlotPoints -> 1000 and WorkingPrecision -> 128 in order to obtain highly reliable initial approximations for the crossings of the function with the x-axis. Note that such settings could also be adapted to particular problems to provide better results. In what follows, we extract the initial guesses, sort them, and obtain the (estimated) number of zeros in the considered interval (see Algorithm 2).

seeds = Cases[Normal[p], Point[z_] :> z[[1]], Infinity];
setInitial = Sort[seeds]
NumberOfGuesses = Length[setInitial]

The plot of the function is given in Figure 5, which clearly reveals the difficulty in obtaining all the real solutions. However, the hybrid algorithm described in this section can be applied to find all the simple real solutions. For this test, there are 40 zeros which must be found to high accuracy. The initial list of approximations is {−0.995857, −0.989387, −0.972279, −0.953544, −0.92376, −0.905373, −0.852924, −0.817623, −0.763259, −0.740905, −0.652616, −0.602838, −0.53862, −0.502713, −0.392391, −0.328099, −0.271123, −0.212284, −0.096817, −0.0213889, 0.0204006, 0.0989106, 0.207534, 0.282518, 0.316156, 0.399215, 0.492675, 0.553758, 0.590388, 0.660032, 0.731315, 0.774491, 0.81035, 0.857463, 0.901219, 0.927616, 0.951482, 0.973356, 0.988923, 0.996117}.

Now the corrector part of our algorithm could be written in what follows for the optimal scheme (18) (see Algorithm 3).

digits = 2000;
For[i = 1, i <= NumberOfGuesses, i++,
  k = 0; X = SetAccuracy[setInitial[[i]], digits];
  While[Abs[f[X]] > 10^-1000 && k <= 20,
   k = k + 1;
   fX = SetAccuracy[f[X], digits];
   f1X = SetAccuracy[f'[X], digits];
   Y = SetAccuracy[X - fX/f1X, digits];
   fY = SetAccuracy[f[Y], digits];
   Z = SetAccuracy[Y - (fX/(fX - 2 fY)) (fY/f1X), digits];
   fZ = SetAccuracy[f[Z], digits];
   t = fY/fX; s = fZ/fX; u = fZ/fY;
   Weight = SetAccuracy[1 + 2 t + 4 s + 5 t^2 + 12 t^3 + u/(1 + u^2), digits];
   X = SetAccuracy[Z - Weight (fZ/f1X), digits];];
  Print[Column[{Row[{"The number of full iterations is: ", k}],
     Row[{"The zero is: ", N[X, 64]}],
     Row[{"The residual norm of the approximate zero is: ", N[Abs[f[X]], 5]}]},
    Background -> {LightRed, LightBrown, LightBlue}, Frame -> All]];
  ] // AbsoluteTiming

In the above piece of code, each member of the list of initial approximations obtained from the predictor step of our algorithm is corrected until its residual satisfies |f(X)| <= 10^-1000, while we work with 2000 digits of fixed accuracy. Note that the other iterative solvers could be coded in a similar way.

An interesting question is whether a higher optimal order implies a higher computational load, which would make the whole procedure inefficient in terms of computational time. This is not the case for optimal schemes, especially when they are combined with a convergence-guaranteed algorithm; that is to say, the cost of the functional evaluations dominates the operational cost when working in high-precision computing. This, alongside the efficiency of our new schemes, is revealed in Table 1, where the total elapsed times of three different correctors are compared for the above numerical example. One may easily observe that the new optimal 16th-order method (by considering , , , and ) beats the other solvers in terms of computational time (in seconds). Note that, due to page limitation, we do not include the approximations of the 40 zeros in the considered interval, which are correct up to at least 1500 decimal places.

Table 1: Total computational time of the correctors for the first test problem.

| Iterative methods | Newton | (18) | New sixteenth-order method |
| --- | --- | --- | --- |
| Computational time (in seconds) | 6.06 | 4.82 | 3.37 |

Note that the computer specifications used in this paper are an Intel(R) Core(TM) 2 Quad CPU Q9550, 2.83 GHz, with 2.00 GB of RAM. The theoretical number of real solutions in an interval is important for the approach shown in this paper to succeed in finding all the simple real zeros in the desired interval.

The theoretical number of real zeros in the desired interval must be known in advance. Let us consider the above test equation in the given interval. We can show that it has exactly 40 zeros, all real and contained in the interval, by using elementary properties of Chebyshev polynomials of either kind and the Weierstrass approximation theorem. We first observe that the smooth part of the function can be approximated by a cubic polynomial, with a maximum absolute interpolation error of 0.0151886, via the Weierstrass approximation theorem. Hence, there exist at most 40 real zeros in the interval.

Now, we compose , where and . Then, by the properties of Chebyshev polynomials of either kind, has only 40 real zeros in the interval. Indeed, has double zeros at the endpoints, since and , while all the remaining 38 simple real zeros lie inside the interval. Since moves downward slightly, each double zero splits into two nearby simple zeros, while in the remaining locations the overall downward movement of does not affect the number of real zeros. Consequently, the theoretical number of real roots is 40.

We also compare the different optimal methods in finding all the simple real zeros of a nonlinear oscillatory function, with the same stopping criterion as in the above test. The graph of this function is illustrated in Figure 6, and the comparison of elapsed times is reported in Table 2. The list of initial approximations is {−0.951071, −0.929957, −0.910096, −0.873815, −0.819725, −0.804222, −0.767296, −0.749607, −0.703525, −0.603417, −0.561856, −0.538937, −0.508479, −0.484731, −0.445715, −0.338982, −0.301488, −0.275968, −0.24768, −0.221955, −0.185071, −0.0766472, −0.0399471, −0.0139904, 0.0139903, 0.0399483, 0.0766347, 0.185246, 0.221749, 0.247938, 0.275612, 0.302007, 0.33791, 0.448186, 0.48259, 0.510735, 0.536286, 0.565254, 0.597367, 0.713616, 0.741522, 0.775409, 0.795057, 0.831029, 0.853649, 0.984546, 0.995607}. We have 47 zeros in this interval, which shows the difficulty of finding all the simple real zeros. Fortunately, the hybrid algorithm given above works well in this case for finding all the simple real zeros in the interval. Again, the new optimal sixteenth-order method beats the other schemes.

Table 2: Total computational time of the correctors for the oscillatory test problem.

| Iterative methods | Newton | (18) | New sixteenth-order method |
| --- | --- | --- | --- |
| Computational time (in seconds) | 9.26 | 6.10 | 5.04 |

Without the theoretical number of real zeros in the desired interval, it would also be difficult to choose the number of initial guesses. It must be noted that this approach should only be applied for finding the simple real zeros of nonlinear equations. In fact, if the nonlinear function has multiple zeros, then the fast approach of Wagon must be replaced by a verified method, such as the ones discussed in the literature, to extract all the simple and multiple real solutions of the nonlinear function as seeds for simple and multiple zero-finders.

#### 6. Concluding Remarks

This paper has contributed two optimal eighth- and sixteenth-order methods for solving nonlinear equations, constructed using generic functions (weight functions) in the computer programming package Mathematica 8. Any method derived from the new optimal families possesses the optimal computational efficiency index 1.682 or 1.741, respectively.

From Tables 1 and 2 and the tested examples, we conclude that all the implemented methods converge in less time than the lower-order schemes when a list of powerful initial guesses is available for all the simple real zeros in the interval.

The application of one of the new schemes, namely (18), was illustrated by producing beautiful fractal pictures useful in computer graphics, as Kalantari mentioned. We note that tighter convergence conditions will produce fractals of higher quality and reliability. For further applications, refer to [4, 23, 24].

We also used the programming package Mathematica 8 in our calculations and gave the necessary cautions and pieces of code so that users can implement them in their own problems as easily as possible. We designed a hybrid algorithm to capture all the simple real solutions of nonlinear equations as rapidly as possible, using a technique similar to the one introduced by Stan Wagon. The algorithm worked efficiently on very hard test problems, and thus the proposed algorithm could easily be used in practical problems.

#### Appendix: Proof of Theorem 2

Consider the following:

The first three steps are optimal according to Section 2. Now, our goal is to add another step and identify appropriate conditions such that the result is an optimal four-step iterative method. Thus, we have

The following command produces the coefficient of in the error equation:

To make the coefficient of vanish, it is sufficient to set . Similarly, we find the following conditions: set .