Abstract
This paper presents a compensated algorithm for the accurate evaluation of a polynomial in Legendre basis. Since the coefficients involved in the evaluation are fractions, we propose to store each of them as the unevaluated sum of two floating point numbers (a double-double-like format) to reduce the effect of coefficient perturbation. The proposed algorithm is obtained by applying error-free transformations to the Clenshaw algorithm. It yields full working-precision accuracy even for ill-conditioned polynomial evaluations. A forward error analysis and numerical experiments illustrate the accuracy and efficiency of the algorithm.
1. Introduction
Legendre polynomials are often used in numerical analysis [1–3], for example in approximation theory, quadrature, and differential equations. The Legendre polynomials satisfy a three-term recurrence relation; that is, for the Legendre polynomials $P_k(x)$,
$$(k+1)P_{k+1}(x) = (2k+1)\,x\,P_k(x) - k\,P_{k-1}(x), \qquad P_0(x)=1,\quad P_1(x)=x.$$
A polynomial represented in Legendre basis is $p(x) = \sum_{i=0}^{n} a_i P_i(x)$, where the $a_i$ are its coefficients and $P_i(x)$ is the Legendre polynomial of degree $i$.
The Clenshaw algorithm [4, 5] is usually used to evaluate a linear combination of Chebyshev polynomials, but it applies to any class of functions defined by a three-term recurrence relation. Therefore the Clenshaw algorithm can evaluate a polynomial in Legendre basis. The error analysis of the Clenshaw algorithm has been considered in the literature [6–10]. The relative accuracy of the value computed by the Clenshaw algorithm satisfies a bound of the form
$$\frac{|p(x) - \widehat{p}(x)|}{|p(x)|} \le \mathcal{O}(u)\,\mathrm{cond}(p,x),$$
where $u$ is the round-off unit and $\mathrm{cond}(p,x)$ is the condition number of the evaluation.
For ill-conditioned problems, several researchers applied error-free transformations [11] to derive accurate compensated algorithms [12–15] for evaluating polynomials in the monomial, Bernstein, and Chebyshev bases with the Horner, de Casteljau, and Clenshaw algorithms, respectively. Some recent applications of high-precision arithmetic are given in [16].
Motivated by these works, we apply error-free transformations to analyze the effect of round-off errors and then compensate the original result of the Clenshaw algorithm. Since the coefficients of the Legendre polynomial are fractions, coefficient perturbations may arise when the coefficients are truncated to floating point numbers. We store the coefficients that are not floating point numbers in double-double format, that is, with twice the working precision, to capture this perturbation. We also compensate the approximate perturbation errors to the original result of the Clenshaw algorithm. Based on the above, we construct a compensated Clenshaw algorithm for evaluating a linear combination of Legendre polynomials, which yields full working-precision accuracy and whose relative accuracy bound satisfies
$$\frac{|p(x) - \widehat{p}(x)|}{|p(x)|} \le u + \mathcal{O}(u^2)\,\mathrm{cond}(p,x).$$
The paper is organized as follows. Section 2 reviews basic notation for error analysis, floating point arithmetic, error-free transformations, compensated algorithms, the Clenshaw algorithm, and condition numbers. Section 3 presents the compensated algorithm and its error bound. Section 4 gives several numerical experiments to illustrate the efficiency and accuracy of the compensated algorithm for polynomials in Legendre basis.
2. Mathematical and Arithmetical Preliminaries
2.1. Basic Notations and Definitions
Throughout this paper, we assume a floating point arithmetic adhering to the IEEE-754 floating point standard with rounding to nearest, and that no overflow or underflow occurs. Let $\mathrm{fl}(\cdot)$ represent floating point computation; then the basic operations obey the model
$$\mathrm{fl}(a \circ b) = (a \circ b)(1+\varepsilon), \qquad |\varepsilon| \le u, \qquad \circ \in \{+,-,\times,/\},$$
where $u$ is the round-off unit (for IEEE-754 double precision, $u = 2^{-53}$). The computed result of an expression in floating point arithmetic is denoted by $\mathrm{fl}(\cdot)$, and the set of all floating point numbers is denoted by $\mathbb{F}$. The following definition will be used in the error analysis (see more details in [17]).
Definition 1. One defines
$$1+\theta_k := \prod_{i=1}^{k}(1+\varepsilon_i)^{\rho_i}, \qquad \gamma_k := \frac{ku}{1-ku},$$
where $\rho_i = \pm 1$, $|\varepsilon_i| \le u$, and $ku < 1$; then $|\theta_k| \le \gamma_k$.
There are three classic properties of this notation which will also be used in the error analysis:
(i) $(1+\theta_k)(1+\theta_j) = 1+\theta_{k+j}$;
(ii) $\gamma_k \le \gamma_{k+1}$;
(iii) $\gamma_k + \gamma_j + \gamma_k\gamma_j \le \gamma_{k+j}$.
2.2. Accurate Sum and Product
Let $a, b \in \mathbb{F}$, let $\circ \in \{+,-,\times\}$, and assume that no overflow or underflow occurs. The transformation $(a, b) \mapsto (x, y)$ is regarded as an error-free transformation (EFT) if there exists $y \in \mathbb{F}$ such that $a \circ b = x + y$ with $x = \mathrm{fl}(a \circ b)$.
Let us show the error-free transformations of the sum and product of two floating point numbers in Algorithms 1–3, namely the TwoSum algorithm by Knuth [18] and the Split and TwoProd algorithms by Dekker [19], respectively.
Algorithms 1–3: TwoSum, Split, and TwoProd.
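For reference, here is a minimal C sketch of these transformations (the function and variable names are ours; the listing follows the standard Knuth and Dekker formulations and assumes no fused multiply-add is used):

/* Error-free transformation of a sum (Knuth): x = fl(a+b) and a+b = x+y. */
void two_sum(double a, double b, double *x, double *y) {
    *x = a + b;
    double z = *x - a;
    *y = (a - (*x - z)) + (b - z);
}

/* Veltkamp/Dekker splitting: a = hi + lo with two non-overlapping halves. */
void split(double a, double *hi, double *lo) {
    double c = 134217729.0 * a;   /* factor 2^27 + 1 for binary64 */
    *hi = c - (c - a);
    *lo = a - *hi;
}

/* Error-free transformation of a product (Dekker): x = fl(a*b), a*b = x+y. */
void two_prod(double a, double b, double *x, double *y) {
    double ah, al, bh, bl;
    *x = a * b;
    split(a, &ah, &al);
    split(b, &bh, &bl);
    *y = al * bl - (((*x - ah * bh) - al * bh) - ah * bl);
}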
Algorithms 1–3 satisfy Theorem 2.
Theorem 2 (see [11]). For $a, b \in \mathbb{F}$, $[x, y] = \mathrm{TwoSum}(a, b)$ and $[x, y] = \mathrm{TwoProd}(a, b)$ verify, respectively,
$$a + b = x + y, \quad x = \mathrm{fl}(a + b), \quad |y| \le u|x|, \quad |y| \le u|a + b|,$$
$$a \times b = x + y, \quad x = \mathrm{fl}(a \times b), \quad |y| \le u|x|, \quad |y| \le u|a \times b|.$$
We present the compensated algorithm for the product of three floating point numbers in Algorithm 4, which follows [20].
Algorithm 4: Compensated product of three floating point numbers.
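A common way to realize such a compensated triple product is sketched below in C (the function and variable names are ours, and the exact organization of Algorithm 4 in [20] may differ); it reuses two_prod from the sketch above:

/* Compensated product of three doubles (sketch): returns x = fl((a*b)*c) and
   an approximate error y such that a*b*c ~= x + y. */
void three_prod(double a, double b, double c, double *x, double *y) {
    double p, e1, e2;
    two_prod(a, b, &p, &e1);   /* a*b = p + e1 exactly  */
    two_prod(p, c, x, &e2);    /* p*c = x + e2 exactly  */
    *y = e1 * c + e2;          /* a*b*c = x + e2 + e1*c */
}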
According to Theorem 2, the two products computed by TwoProd in Algorithm 4 are error-free: $ab = p + e_1$ and $pc = x + e_2$. Hence $abc = x + e_2 + e_1 c$; then $y = \mathrm{fl}(e_1 c + e_2)$ approximates the error of $x$. According to Lemma 3.2 in [20], we can state the error bound of Algorithm 4 in Theorem 3.
Theorem 3. For $a, b, c \in \mathbb{F}$ and $[x, y]$ the pair returned by Algorithm 4, $x + y$ verifies
Proof. According to Lemma 3.2 in [20], we have In Algorithm 4, ; then
2.3. The Clenshaw Algorithm and Condition Number
The standard general algorithm for evaluating a polynomial in Legendre basis is the Clenshaw algorithm [5]. We recall it in Algorithm 5.
Algorithm 5: The Clenshaw algorithm for evaluating $p(x) = \sum_{i=0}^{n} a_i P_i(x)$.
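For concreteness, here is a C sketch of the Clenshaw recurrence for a Legendre series (a minimal illustration; the function name and the coefficient layout a[0..n] are assumptions, and the recurrence fractions are formed in working precision):

#include <stddef.h>

/* Clenshaw evaluation of p(x) = sum_{k=0}^{n} a[k] * P_k(x), where the
   Legendre polynomials satisfy (k+1)P_{k+1}(x) = (2k+1)x P_k(x) - k P_{k-1}(x). */
double clenshaw_legendre(const double *a, size_t n, double x) {
    double b1 = 0.0, b2 = 0.0;              /* b_{k+1}, b_{k+2} */
    for (size_t k = n + 1; k-- > 0; ) {     /* k = n, n-1, ..., 0 */
        double Ak = (2.0 * k + 1.0) / (k + 1.0);  /* fraction (2k+1)/(k+1) */
        double Bk = (k + 1.0) / (k + 2.0);        /* fraction (k+1)/(k+2) */
        double b  = a[k] + Ak * x * b1 - Bk * b2;
        b2 = b1;
        b1 = b;
    }
    return b1;                              /* b_0 = p(x) */
}

Note that fractions such as $(2k+1)/(k+1)$ and $(k+1)/(k+2)$ are in general not exactly representable in binary floating point, which relates to the coefficient perturbations discussed in Section 3.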
Barrio et al. [21] proposed a general polynomial condition number for any polynomial basis defined by a linear recurrence and used this new condition number to give error bounds for the Clenshaw algorithm. Based on this general condition number, we introduce the absolute Legendre polynomial, similar to the absolute polynomial mentioned in [7, 21].
Definition 4. Let $P_j(x)$ be the Legendre polynomial of degree $j$. We define the absolute Legendre polynomial $\widetilde{P}_j(x)$ associated with $P_j(x)$ as the polynomial satisfying
$$(j+1)\widetilde{P}_{j+1}(x) = (2j+1)\,|x|\,\widetilde{P}_j(x) + j\,\widetilde{P}_{j-1}(x),$$
where $\widetilde{P}_0(x)=1$, $\widetilde{P}_1(x)=|x|$.
The absolute Legendre polynomial satisfies the following property.
Lemma 5. Let $\widetilde{P}_j(x)$ be the absolute Legendre polynomial. Then we have
Proof . Let and . From Definition 4, we have then Thus the equivalent form of (11) is Since is increasing with , we obtain So we finally obtain .
Following Definition 4, we introduce the condition number for the evaluation of polynomials in Legendre basis [21].
Definition 6. Let $p(x) = \sum_{i=0}^{n} a_i P_i(x)$, where $P_i(x)$ is the Legendre polynomial of degree $i$. Let $\widetilde{P}_i(x)$ be the absolute Legendre polynomial. Then the absolute condition number is
$$\mathrm{cond}_{\mathrm{abs}}(p, x) = \sum_{i=0}^{n} |a_i|\,\widetilde{P}_i(x),$$
and the relative condition number is
$$\mathrm{cond}(p, x) = \frac{\sum_{i=0}^{n} |a_i|\,\widetilde{P}_i(x)}{|p(x)|}.$$
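As an illustration, the relative condition number can be computed with a short C routine, assuming (as in the absolute-polynomial construction of [7, 21]) that the absolute Legendre polynomials follow the Legendre recurrence with the argument and coefficients replaced by their absolute values; the function name and argument layout are ours, and px denotes an accurately computed value of $p(x)$:

#include <math.h>
#include <stddef.h>

/* cond(p, x) = sum_i |a[i]| * ~P_i(x) / |p(x)| for a Legendre series. */
double cond_legendre(const double *a, size_t n, double x, double px) {
    double p0 = 1.0;               /* ~P_0(x) */
    double p1 = fabs(x);           /* ~P_1(x) */
    double s = fabs(a[0]);
    if (n >= 1) s += fabs(a[1]) * p1;
    for (size_t k = 1; k < n; ++k) {
        /* (k+1) ~P_{k+1} = (2k+1)|x| ~P_k + k ~P_{k-1} */
        double p2 = ((2.0 * k + 1.0) * fabs(x) * p1 + (double)k * p0) / (k + 1.0);
        s += fabs(a[k + 1]) * p2;
        p0 = p1;
        p1 = p2;
    }
    return s / fabs(px);
}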
3. Compensated Algorithm for Evaluating Polynomials
In this section, we exhibit the round-off errors generated by the Clenshaw algorithm using error-free transformations. We also analyze the perturbations generated by truncating the fractions in Algorithm 5. We then propose a compensated Clenshaw algorithm to evaluate finite Legendre series and present its error bound.
Firstly, in order to analyze the perturbations, we split each coefficient into three parts as in (18), where the first two parts are floating point numbers and the third part is the remaining truncation error. Then we describe the recurrence relation at the $k$th step of Algorithm 5 for the theoretical (exact) computation as (19). For the numerical computation associated with (19), the corresponding relation is (20).
Remark 7. We split a coefficient as in (18); the representation of this splitting is then unique, and the second and third parts are of order $u$ and $u^2$, respectively, relative to the coefficient.
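One possible way to obtain such a two-part (double-double-like) representation of a rational coefficient $p/q$ is sketched below in C; this is an illustration under our own naming and is not necessarily the algorithm of Appendix B. It reuses two_prod from Section 2 and assumes $p$ and $q$ are exactly representable:

/* Split p/q into hi + lo, two floating point numbers whose unevaluated
   sum approximates p/q to roughly twice the working precision. */
void split_fraction(double p, double q, double *hi, double *lo) {
    double t, e;
    *hi = p / q;                /* first (rounded) part                */
    two_prod(*hi, q, &t, &e);   /* hi * q = t + e exactly              */
    *lo = ((p - t) - e) / q;    /* remainder of p - hi*q, divided by q */
}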
With the notation of Algorithm 5, every elementary floating point operation causes a round-off error in the numerical computation, so we apply the error-free transformations (the TwoSum, TwoProd, and ThreeProd algorithms) to record all round-off errors and obtain (21). The sum of the perturbation and the round-off errors of the recurrence relation at the $k$th step is given in (22), where the error term of the triple product is defined as in Theorem 3. Then we obtain the following theorem.
Theorem 8. Let $p(x) = \sum_{i=0}^{n} a_i P_i(x)$ be a Legendre series of degree $n$, and let $x$ be a floating point value. One assumes the coefficient splitting of (18) in Algorithm 5, denotes the computed result by $\widehat{p}(x)$, and lets the per-step error terms be as described in (22) for $k = 0, \dots, n$. Then one obtains
Proof. The perturbation of Algorithm 5 is
With the notation of Algorithm 5, from Theorems 2 and 3 and (21), we can easily obtain
Let for ; then
From (18), (19), (22), and (24), we have
Thus
then
To devise a compensated algorithm of Algorithm 5 and give its error bound, we need the following lemmas.
Lemma 9. Let $p(x) = \sum_{i=0}^{n} a_i P_i(x)$ be a Legendre series of degree $n$, $x$ a floating point value, and $\widehat{p}(x)$ the numerical result of Algorithm 5. Then
Proof. Applying the standard model of floating point arithmetic, from Algorithm 5 and Remark 7, we have
Thus we obtain
Then, from Definition 1, we get
Lemma 10. Let $p(x) = \sum_{i=0}^{n} a_i P_i(x)$ be a Legendre series of degree $n$ and $x$ a floating point value. We assume a perturbed recurrence of the form (31), where the perturbation terms are real numbers; then
Proof. According to (31), we get Since , from Definition 1, we have then According to Lemma 5, we obtain
Among the perturbation and round-off errors in Algorithm 5, we deem that some errors do not influence the numerical result in working precision. Now we can give the perturbation error bounds and the round-off error bounds, respectively. At first we analyze the perturbation error in (24). According to Remark 7, we obtain the following bound; then, applying Lemma 10 to it, we have
This bound is of first order in $u$, so this coefficient perturbation may influence the accuracy; we need to consider it in our compensated algorithm.
Similarly, we let ; then
This bound is of order $u^2$, so this coefficient perturbation does not influence the accuracy.
Remark 11. When , the th step of Algorithm 5 is , so that we only need to consider the perturbation of coefficient .
Next we deduce the round-off error bound. Let . According to Theorems 2 and 3 and Remark 7, we obtain then
Applying Lemma 10 to this quantity, we have
This bound is of first order in $u$, so this round-off error may influence the accuracy; we also need to consider it in our compensated algorithm.
From Theorem 3 we know the round-off error of the compensated triple product. Applying Lemma 10 to it, we have
This bound is of order $u^2$, so this round-off error does not influence the accuracy.
Observing the error bounds described above, the perturbation generated by the third part of the coefficients in (18) does not influence the accuracy. Thanks to the algorithm in Appendix B (see Algorithm 13), we only need to split the coefficients into two floating point numbers.
Applying the TwoSum, TwoProd, and ThreeProd algorithms, and compensating for all errors that may influence the numerical result in working precision in Algorithm 5, we obtain the compensated algorithm in Algorithm 6.
Algorithm 6: The compensated Clenshaw algorithm.
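To make the structure concrete, here is a C sketch of a compensated Clenshaw evaluation in the spirit of Algorithm 6 (an illustration under our own notation, reusing the routines and headers from the sketches above; the exact organization of Algorithm 6 may differ). Each step records the rounding errors of the main recurrence with EFTs, adds the perturbations coming from the second parts of the split recurrence fractions, propagates the accumulated error through a second Clenshaw-like recurrence, and finally adds the correction to the original result:

/* Compensated Clenshaw evaluation of a Legendre series (sketch). */
double comp_clenshaw_legendre(const double *a, size_t n, double x) {
    double b1 = 0.0, b2 = 0.0;   /* main recurrence  */
    double c1 = 0.0, c2 = 0.0;   /* error recurrence */
    for (size_t k = n + 1; k-- > 0; ) {
        double Ahi, Alo, Bhi, Blo;
        split_fraction(2.0 * k + 1.0, k + 1.0, &Ahi, &Alo);  /* (2k+1)/(k+1) */
        split_fraction(k + 1.0, k + 2.0, &Bhi, &Blo);        /* (k+1)/(k+2)  */

        double s, e1, t, e2, w, e3, b, e4;
        three_prod(Ahi, x, b1, &s, &e1);   /* Ahi*x*b1 = s + e1 (approx.) */
        two_prod(Bhi, b2, &t, &e2);        /* Bhi*b2   = t + e2 exactly   */
        two_sum(a[k], s, &w, &e3);         /* a[k] + s = w + e3 exactly   */
        two_sum(w, -t, &b, &e4);           /* w - t    = b + e4 exactly   */

        /* local error: rounding errors + perturbation of the fractions */
        double err = ((e1 - e2) + (e3 + e4)) + (Alo * x * b1 - Blo * b2);

        double c = err + Ahi * x * c1 - Bhi * c2;   /* error recurrence */

        b2 = b1; b1 = b;
        c2 = c1; c1 = c;
    }
    return b1 + c1;   /* compensated result */
}

The errors committed while computing err and c themselves are of second order and are neglected, consistently with the analysis above.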
Here we give the error bound of Algorithm 6.
Theorem 12. Let $p(x) = \sum_{i=0}^{n} a_i P_i(x)$ be a Legendre series of degree $n$ and $x$ a floating point value. The forward error bound of the compensated Clenshaw algorithm (Algorithm 6) is
Proof. From Algorithm 6, we obtain
where .
According to Theorem 8, we have ; then
Next we analyze the bound of .
Let ; then
Thus we obtain
According to Lemma 9, we get
From (50), (51) and , we derive
According to and Remark 7, we obtain
Applying (53) in Lemma 10 and taking (52) into account, we get
Next we let ; then
thus
According to Lemma 9, we obtain
From (56), (57), and , we derive
From (43), applying Lemma 10 and taking the above into account, we have
Combining (54) and (59), we get
From Definitions 4 and 6 and Theorem 12, we easily get the following corollary.
Corollary 13. Let $p(x) = \sum_{i=0}^{n} a_i P_i(x)$ be a Legendre series of degree $n$ and $x$ a floating point value. The relative error bound of the compensated Clenshaw algorithm is
4. Numerical Results
All our experiments are performed using IEEE-754 double precision as working precision. Here, we consider polynomials in Legendre basis with real coefficients and a floating point argument $x$. All programs for accuracy measurements were written in MATLAB R2012b, and those for timing measurements were written in C, on a 2.53 GHz Intel Core i5 laptop.
4.1. Evaluation of the Polynomial in Legendre Basis
In order to construct an ill-conditioned problem, we consider the evaluation, in the neighborhood of its multiple roots 0.75 and 1, of a polynomial converted to Legendre basis. We use the Clenshaw algorithm, the compensated algorithm, and the Symbolic Toolbox to evaluate the polynomial in Legendre basis. In order to observe the perturbation of the polynomial coefficients clearly, we propose the algorithm in Appendix A (see Algorithm 7) to obtain the coefficients of the Legendre series. To decrease the perturbation of the coefficients we also propose the algorithm in Appendix B (see Algorithm 16) to store the coefficients in double-double format. That is, the coefficients of the evaluated polynomial, obtained by these two algorithms, are stored in double format and in double-double format, respectively. In this experiment we evaluate the polynomials at 400 equally spaced points in each of the three intervals considered.
From Figures 1 and 2 we observe that the polynomial evaluated by the Clenshaw algorithm (top of each figure) oscillates, while the curve produced by the compensated algorithm is much smoother. The polynomials evaluated by the Symbolic Toolbox (bottom of Figures 1 and 2) differ from each other because the perturbations of the coefficients stored in double-double format are smaller than those of the coefficients stored in double format. We can see in Figure 1 that the accuracy of the polynomial evaluated by the compensated algorithm is the same as that evaluated by the Symbolic Toolbox. In Figure 2 the polynomials in Legendre basis evaluated by the compensated algorithm are much smoother, with only a little oscillation in the intervals shown. In fact, if we used the Symbolic Toolbox to obtain the polynomial coefficients, the oscillation would be even smaller than in Figure 2; however, this method is expensive. It suffices to obtain the coefficients with the algorithm of Appendix B: the result it produces is almost the same as that obtained with the Symbolic Toolbox in working precision.
Figure 1: Evaluation of the ill-conditioned polynomial in Legendre basis.
Figure 2: Evaluation of the ill-conditioned polynomial in Legendre basis (continued).
4.2. Accuracy of the Compensated Algorithm
The closer the argument is to a root, the larger the condition number. Thus, in this experiment, the evaluation is performed at 120 points near the root 0.75 (two families of 40 and 80 points, respectively). We compare the compensated algorithm with multiple precision arithmetic. Since the working precision is double precision, we choose double-double arithmetic [22] (see Appendix B), which is the most efficient way to obtain full-precision accuracy when evaluating the polynomial in Legendre basis, as the comparison for the compensated algorithm. We evaluate the polynomials by the Clenshaw algorithm, the compensated algorithm, the double-double algorithm in Appendix B (see Algorithm 15), and the Symbolic Toolbox, respectively, so that the relative forward errors can be obtained against the Symbolic Toolbox result, and the relative error bounds are those described in Corollaries 13 and A.2 in Appendix A. The relative forward errors of the evaluation of the polynomial in Legendre basis are presented in Figure 3. As we can see, the relative errors of the compensated algorithm and of double-double arithmetic are both smaller than the round-off unit $u$ as long as the condition number is less than about $1/u \approx 10^{16}$, and the accuracy of both algorithms decreases linearly for condition numbers larger than this threshold. However, the Clenshaw algorithm cannot reach working precision; its accuracy already decreases linearly while the condition number is still below this threshold. When the condition number is larger than about $10^{16}$, the Clenshaw algorithm cannot obtain even one significant bit.
Figure 3: Relative forward errors of the evaluation with respect to the condition number.
4.3. Time Performances
The error-free transformations used as building blocks in Appendix B (Algorithm 8) require 6, 17, and 3 flops, respectively. We then obtain the computational cost of the three evaluation algorithms:
(i) the Clenshaw algorithm: 5n − 2 flops;
(ii) the compensated algorithm: 79n − 29 flops;
(iii) the double-double algorithm: 110n − 44 flops.