Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 394654, 9 pages
A Generalized Regula Falsi Method for Finding Zeros and Extrema of Real Functions
Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal
Received 18 November 2012; Revised 18 June 2013; Accepted 2 July 2013
Academic Editor: Hung Nguyen-Xuan
Copyright © 2013 Abel Gomes and José Morgado. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Many zero-finding numerical methods are based on the Intermediate Value Theorem, which states that a zero of a real function f is bracketed in a given interval [a, b] if f(a) and f(b) have opposite signs, that is, if f(a) · f(b) < 0. But some zeros cannot be bracketed this way because they do not satisfy the precondition f(a) · f(b) < 0. For example, local minima and maxima that annihilate f may not be bracketed by the Intermediate Value Theorem. In this case, we can always use a numerical method for bracketing extrema and then check whether the extremum is a zero of f or not. Instead, this paper introduces a single numerical method, called the generalized regula falsi (GRF) method, to determine both zeros and extrema of a function. Consequently, it differs from the standard regula falsi method in that it is capable of finding any function zero in a given interval even when the Intermediate Value Theorem is not satisfied.
Numerical methods such as the bisection method [1, 2], secant method [3–6], regula falsi method [7, 8], and Newton-Raphson method [9, 10], amongst others, are known as zero-finding algorithms. They work by bracketing a zero. We say that a zero is bracketed in the interval [a, b] if f(a) and f(b) have opposite signs, that is, if f(a) · f(b) < 0. In this paper, a zero of this sort is called a crossing zero, as illustrated in Figure 1(a).
Although this class of methods satisfying the Intermediate Value Theorem succeeds in finding most zeros (i.e., crossing zeros), they fail to bracket zeros that are also extrema, here called extremal zeros. We say that an extremum is bracketed in the interval [a, b] if f(a) and f(b), but not their derivatives, have identical signs; that is, f(a) · f(b) > 0 and f′(a) · f′(b) < 0. If f(a) > 0 and f′(a) < 0, "go downhill, taking steps of increasing size, until your function starts back uphill". At this point, we have only to check whether such a minimum is a zero. Analogously, if f(a) < 0 and f′(a) > 0, basically, "go uphill, taking steps of increasing size, until your function starts back downhill". Then, we have only to check whether such a maximum is a zero.
Despite the existence of algorithms for finding extrema (maxima and minima) of functions in the numerical analysis literature, there is no single iterative formula to find both crossing and extremal zeros of a real function. Such a method is described in the next section and is called the generalized regula falsi method.
2. Generalized Regula Falsi Method
The generalized regula falsi (GRF) method is based on the ratio of similar triangles. Its main novelty is that it can be used to compute both zeros and extrema through a single interpolation formula.
Let us consider the interval [a, b], as illustrated in Figures 1(a) and 1(b), with f_a and f_b the absolute values of the function at a and b, respectively; that is, f_a = |f(a)| and f_b = |f(b)|. The endpoints of the interval work as two distinct estimates for a zero of f. The next estimate x is the intersection point of the straight-line segments joining (a, f_a) to (b, 0) and (a, 0) to (b, f_b), which is given by

x = (a · f_b + b · f_a) / (f_a + f_b). (1)
As is well known, the vectorial equation of the line that passes through the points P and Q is given by L(t) = P + t · (Q − P), with t ∈ [0, 1]. Thus, the next estimate x occurs at the intersection of the two segments, a result also due to the following similarity of triangles:

(b − x) / (x − a) = f_a / f_b.
Remarkably, as explained below, the iterative formula (1) applies to both zeros and extrema.
The computation of a zero is illustrated in Figure 1(a). Let us assume that f(a) < 0 and f(b) > 0, so that f_a = −f(a) and f_b = f(b). By replacing f_a and f_b into (1), we obtain

x = (a · f(b) − b · f(a)) / (f(b) − f(a)), (4)

that is, the interpolation formula of the well-known regula falsi method. It is used to determine crossing zeros.
The computation of a local extremum is done in exactly the same way for minima and maxima. Let us first consider a local minimum, as illustrated in Figure 1(b). In this case, f(a) > 0 and f(b) > 0, so that f_a = f(a) and f_b = f(b). By replacing f_a and f_b into (1), we obtain

x = (a · f(b) + b · f(a)) / (f(a) + f(b)), (5)

that is, the next estimate of a local minimum.
Let us now consider that a reflection takes place across the x-axis in Figure 1(b). In this case, we have a function with a local maximum; that is, f(a) < 0 and f(b) < 0, so that f_a = −f(a) and f_b = −f(b). Again, by replacing f_a and f_b into (1), we obtain the next estimate through (5).
Thus, formula (5) works equally well for local minima and maxima, independently of whether f(a) and f(b) are positive or negative. In fact, looking at (4) and (5), we see that it is enough to change the sign of either f(a) or f(b) to determine the next estimate of an extremum through the regula falsi technique.
However, the condition f(a) · f(b) > 0 only indicates that there may be an extremum between a and b. We have to evaluate the finite difference Δf(x) = (f(x + h) − f(x)) / h, with an infinitesimal step h, at a and b and test whether the two values have opposite signs, that is, Δf(a) · Δf(b) < 0, to guarantee that an extremum exists in [a, b]. We use the finite difference (the discrete counterpart of the derivative) of a function at a given point for two main reasons.
(i) Consistency: the regula falsi method does not use derivatives, so, for consistency, the generalized regula falsi method does not use them either.
(ii) Differentiability: there are zeros and extrema at which derivatives do not exist. For example, the function in Figure 3(f) has a cusp that is also a minimum. Derivatives cannot be used in this case because the program would break down.
Therefore, we have to check whether the finite difference at the estimate x is very close to zero. If so, we have an extremum at approximately x. Besides, if |f(x)| < ε for a prescribed accuracy ε, then x is also a zero.
In short, the literature offers separate numerical methods for calculating zeros and extrema. The main novelty of the method introduced in this paper is its ability to compute both zeros and extrema by means of a single interpolation formula (cf. formula (1)).
2.3. Convergence Analysis
The convergence of the GRF method is guaranteed by the following theorem.
Theorem 1. Let f be a continuous real function on [a, b] such that either f(a) · f(b) < 0 or f(a) · f(b) > 0 and Δf(a) · Δf(b) < 0. If c is the only zero or extremum in [a, b], then the GRF method generates a sequence (x_n) converging to c for the two initial approximations a and b.
Proof. Without loss of generality, it is equivalent to prove the existence of a sequence of intervals [a_n, b_n] converging to c, with c ∈ [a_n, b_n] for all n.
Let L_n = b_n − a_n be the length of the interval [a_n, b_n], so that a_0 = a, b_0 = b, and L_0 = b − a.
Now, from (1) we obtain

x − a_n = (f_a / (f_a + f_b)) · L_n or b_n − x = (f_b / (f_a + f_b)) · L_n,

depending on which endpoint is replaced by the new estimate x. In general, we can write

L_{n+1} = λ_n · L_n,

with λ_n = f_a / (f_a + f_b) or λ_n = f_b / (f_a + f_b). Taking into account that

0 < λ_n < 1,

we have L_{n+1} < L_n for all n. Thus, the sequence (L_n) is monotone decreasing. In addition, it is bounded because 0 ≤ L_n ≤ b − a. So, by the Monotone Convergence Theorem we can conclude that (L_n) is convergent. Taking into account that c ∈ [a_n, b_n] for all n, we conclude that the GRF method converges to c, with c being a zero or an extremum of f.
It is known that the zero-finding regula falsi method converges linearly. Intuitively, we expect the GRF method to converge linearly as well because it uses the same interpolation formula and bracketing strategy for both zeros and extrema.
Theorem 2. The GRF method has linear convergence.
Proof. Let (x_n) be a sequence of approximations to either a zero or an extremum c produced by the GRF method, where x_n → c. Let e_n = x_n − c be the error in the nth iterate.
By (1), with b denoting the retained endpoint, the successive iterations are

x_{n+1} = (x_n · f_b + b · f_{x_n}) / (f_{x_n} + f_b),

where f_{x_n} = |f(x_n)|. To determine the speed of convergence, subtract c from both sides and divide by e_n to get

e_{n+1} / e_n = (f_b + (b − c) · f_{x_n} / e_n) / (f_{x_n} + f_b).

Since f_{x_n} → 0 and f_{x_n} / e_n tends to a finite nonzero limit as x_n → c, we obtain e_{n+1} / e_n → C, with 0 < |C| < 1, from where we conclude that the convergence is linear. This concludes the proof.
2.4. GRF Implementation
The C function that implements the generalized regula falsi method returns either a zero or an extremum bracketed in a given interval (see Algorithm 1).
Note that, for the sake of convenience, we have used a fixed small step h for evaluating finite differences, but this is clearly prone to cancellation errors. A better choice for h would be of the order of √ε (suitably scaled), where ε denotes the machine precision, which is of the order of 10⁻¹⁶ for doubles.
3. Moving GRF Method
As illustrated in Figure 2(a), the regula falsi method may converge slowly to a zero, spending many iterations pulling the distant bound closer. For example, the crossing zero of the function in Figure 3(d) was found only after 222 iterations. Likewise, the crossing zero of the function in Figure 3(e) was found only after 999 iterations. The slow convergence to some zeros is due to the fact that the GRF method always retains a fixed endpoint of the interval [a, b].
The leading idea for speeding up the convergence of the GRF method is then to move the fixed point as well. To achieve that, we replace the fixed endpoint of the interval by the abscissa x̄ of the point that results from intersecting the tangents at the bound points (a, f(a)) and (b, f(b)), as shown in Figures 2(b) and 2(c). That is, the abscissa x̄ is the solution of the following system of equations:

y = f(a) + Δf(a) · (x − a),
y = f(b) + Δf(b) · (x − b),

which yields

x̄ = (f(b) − f(a) + a · Δf(a) − b · Δf(b)) / (Δf(a) − Δf(b)), (15)

where Δf(a) and Δf(b) are the finite differences at a and b, respectively. Note that the estimate x̄ may be closer to the solution than the estimate produced by Newton's method, which results from the intersection between a single tangent and the x-axis (Algorithm 2).
So, the C function that implements the moving generalized regula falsi (mGRF) method differs from GRF in that we have to include statements for the calculation of the moving fixed point x̄ that results from intersecting the tangents at (a, f(a)) and (b, f(b)), as well as some statements to make sure that the solution remains bracketed in the interval.
3.1. Convergence of mGRF Method
The convergence of the mGRF method is stated by the following theorem.
Theorem 3. Let f be a continuous real function on [a, b] such that either f(a) · f(b) < 0 or f(a) · f(b) > 0 and Δf(a) · Δf(b) < 0. If c is the only zero or extremum in [a, b], then the mGRF method generates two sequences (x_n) and (x̄_n) converging to c for the two initial approximations a and b.
Proof. The mGRF method uses two preliminary estimates a and b (i.e., the endpoints of the interval [a, b]) to generate two sequences of estimates, (x_n) and (x̄_n), that march towards the solution c.
Without loss of generality, we can say that the GRF method generates the sequence (x_n) converging to c, as proved by Theorem 1.
It remains to prove that the sequence (x̄_n) generated by the computation of x̄ (cf. formula (15)) also converges to c. Without loss of generality, let us assume that Δf(a) · Δf(b) < 0, so that the tangents at (a, f(a)) and (b, f(b)) have distinct slopes and do intersect; formula (15) then places x̄ at the abscissa of that intersection point, which can equivalently be rewritten as

x̄ = a + (f(b) − f(a) − (b − a) · Δf(b)) / (Δf(a) − Δf(b)).

Let L̄_n = b_n − a_n be the length of the interval [a_n, b_n], so that c ∈ [a_n, b_n] for all n.
But, as said before, the new estimate x̄_n is only taken into account in the convergence process if it lies in the interval [a_n, b_n]; otherwise, x̄_n equals the estimate determined by the GRF method, that is, x̄_n = x_n. This implies that L̄_{n+1} < L̄_n for all n. Thus, the sequence (L̄_n) is monotone decreasing. In addition, it is bounded because 0 ≤ L̄_n ≤ b − a. So, by the Monotone Convergence Theorem we can conclude that (L̄_n) is convergent. Taking into account that c ∈ [a_n, b_n] for all n, we can also conclude that the mGRF method converges because the sequence (x̄_n) converges to c, with c being a zero or an extremum of f.
4. Experimental Results
We have carried out a number of convergence tests in order to assess the convergence of the generalized regula falsi (GRF) method and of its superlinear variant (mGRF). For that purpose, we have used a MacBook Pro laptop powered by a 2.33 GHz Intel Core 2 Duo processor, with 3 GB of 667 MHz DDR2 SDRAM, running Mac OS X (version 10.6.8).
4.1. Functions, Zeros, and Extrema
We have used a number of real analytic functions in testing the convergence of the method, including algebraic (i.e., polynomial) and transcendental functions, some of which are depicted in Figure 3. These functions have been chosen because they cover most types of zeros and (local) extrema, namely, the following.
(i) Crossing Zeros (cZ): a crossing zero is a point at which the function graph crosses the x-axis. We find 5 crossing zeros in Figure 3, distributed over four of the test functions.
(ii) Minima (m): we find 1 (nonzero) local minimum in Figure 3.
(iii) Maxima (M): we have 4 local maxima in Figure 3, two belonging to one function and two to another.
(iv) Zeroed Minima (Zm): a zeroed minimum is a point at which the function takes on the value 0 and has a local minimum; that is, the function graph touches but does not cross the x-axis. We have 5 zeroed minima in Figure 3, one of which is a cusp (Figure 3(f)).
(v) Zeroed Maxima (ZM): a zeroed maximum is a point at which the function takes on the value 0 and has a local maximum.
For the purpose of testing, we carried out the convergence experiments for crossing zeros and extrema separately, as described below.
4.2. Convergence to Crossing Zeros
The experimental results shown in Table 1 put in evidence how differently the GRF and mGRF methods behave in terms of convergence to crossing zeros. According to Theorem 2, GRF has linear convergence. But, judging from the results shown in Figure 4 (top), mGRF seems to have superlinear convergence.
As known, GRF converges very slowly to a crossing zero when the slope of the function graph changes very fast near such a zero, as is the case of those shown in Figures 3(d)-3(e). For example, the crossing zero of the function in Figure 3(d) was found after 222 iterations over a narrow interval, while that of the function in Figure 3(e) was found after 999 iterations, though over a bigger interval (cf. Table 1).
As explained above, mGRF is faster than GRF because both endpoints of the bracketing interval converge to the solution, while GRF keeps one of the endpoints fixed. The mGRF convergence rate seems to be at least quadratic because the estimates produced by (15) are usually closer to the solution than those produced by Newton's method. This is confirmed by the semilogarithmic graphs of error versus number of iterations shown in Figure 4 (top), whose four convergence curves (black, red, blue, and green) represent the convergence to the crossing zeros listed in Table 1.
4.3. Convergence to Extrema
Unlike the numerical methods found in the literature, the GRF and mGRF methods are capable of computing both zeros and extrema of a real function through a single interpolation formula.
However, as shown in Figure 4 (bottom), the convergence rate for extrema is lower than for crossing zeros and tends to be linear. This is explained by the fact that some estimates of the lateral sequence (x̄_n) jump to the sequence (x_n), and vice versa.
The data shown in Table 2 concern some of the extrema whose convergence curves are plotted in Figure 4 (bottom): the red curve concerns the convergence to a minimum, the magenta curve to a maximum, and the cyan curve to the zeroed minimum (cusp) of the function in Figure 3(f). This latter case shows that the method proposed in this paper is also capable of calculating a zero at which f is not differentiable. These semilogarithmic graphs suggest that mGRF has linear convergence for extrema. Nevertheless, one of the convergence curves suggests that superlinear convergence for extrema might also be attainable, but that would require redesigning the algorithm somehow.
The experimental results presented in the previous section show us the following.
(i) It is feasible to have a single numerical method to determine both zeros and extrema of a real function.
(ii) The (m)GRF algorithm works well even under lack of differentiability, so singular zeros such as cusps and corners can also be computed.
(iii) It is also feasible to compute inflection points. For that, the algorithm must include the computation of second finite differences.
This algorithm has been designed and implemented to solve a difficult problem in computer graphics and geometric modeling, namely, rendering sign-invariant components of implicit curves. For example, a circle described by an implicit function that is positive everywhere, except on the curve points where the function takes on the value 0, cannot be sampled and, thus, rendered by algorithms that make use of the Intermediate Value Theorem; that is, all the circle points are extrema.
The authors would like to acknowledge the helpful suggestions of anonymous reviewers.
- S. D. Conte and C. de Boor, Elementary Numerical Analysis: An Algorithmic Approach, McGraw-Hill, New York, NY, USA, 1965.
- G. Corliss, “Which root does the bisection algorithm find?” SIAM Review, vol. 19, no. 2, pp. 325–327, 1977.
- R. P. Brent, “An algorithm with guaranteed convergence for finding a zero of a function,” The Computer Journal, vol. 14, pp. 422–425, 1971.
- W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C, Cambridge University Press, Cambridge, UK, 1992.
- S. A. Stage, “Comments on an improvement to the Brent's method,” International Journal of Experimental Algorithms, vol. 4, no. 1, pp. 1–16, 2013.
- Z. Zhang, “An improvement to the Brent's method,” International Journal of Experimental Algorithms, vol. 2, no. 1, pp. 21–26, 2011.
- A. Ralston and P. Rabinowitz, A First Course in Numerical Analysis, International Series in Pure and Applied Mathematics, McGraw-Hill, New York, NY, USA, 1978.
- C. F. Ridders, “A new algorithm for computing a single root of a real continuous function,” IEEE Transactions on Circuits and Systems, vol. 26, no. 11, pp. 979–980, 1979.
- J. Kou, Y. Li, and X. Wang, “On modified Newton methods with cubic convergence,” Applied Mathematics and Computation, vol. 176, no. 1, pp. 123–127, 2006.
- J. R. Sharma, “A family of Newton-like methods based on an exponential model,” International Journal of Computer Mathematics, vol. 84, no. 3, pp. 297–304, 2007.