Abstract

A Chebyshev tensor product surface is widely used in image analysis and numerical approximation. This article presents an accurate algorithm for evaluating surfaces in Chebyshev tensor product form. The algorithm applies error-free transformations to improve the traditional Clenshaw Chebyshev tensor product algorithm. Our error analysis shows that the error bound of the compensated algorithm is of order $u + u^2\,\mathrm{cond}$, in contrast to $u\,\mathrm{cond}$ for the classic scheme, where $u$ is the working precision and $\mathrm{cond}$ is the condition number of the bivariate polynomial evaluation. This means that the accuracy of the computed result is similar to that produced by the classical approach carried out in twice the working precision. Numerical experiments verify that the proposed algorithm is stable and efficient.

1. Introduction

Chebyshev polynomials appear in almost all mathematical and physical disciplines, including spectral methods, approximation theory, and the representation of potentials [14]. Bivariate Chebyshev polynomials have gained the attention of computer vision researchers [5, 6]. Over the years, researchers have focused on the implementation of Chebyshev tensor product series in image analysis [5–7]. The Chebyshev tensor product series can be used to approximate an image, which is essentially regarded as a two-dimensional spatial function [8]. Two separable univariate Chebyshev polynomials that are discrete and orthogonal can approximate a two-dimensional signal. Mukundan et al. [5] introduce a new discrete Chebyshev tensor product transform based on Chebyshev polynomials; these discrete Chebyshev tensor product functions are effective as feature descriptors. Rahmalan et al. [6] propose a novel approach based on the discrete orthogonal Chebyshev tensor product for efficient image compression. Recently, Omar et al. [7] propose a novel method for fusing images using Chebyshev tensor product series. All of the above require image reconstruction from a finite Chebyshev tensor product surface. Thus, developing fast and reliable algorithms to evaluate Chebyshev tensor product series is of considerable interest [9]. The Clenshaw tensor product algorithm (CTP) [10] is one of the algorithms used to evaluate Chebyshev tensor product series.

In order to get a high-precision approximation of an image, it is essential to evaluate the series accurately. In particular, we require more accurate numerical results for ill-conditioned cases. Li et al.'s double-double arithmetic [11] (double-double numbers are represented as an unevaluated sum of a leading double and a trailing double) is a library used to improve the accuracy of numerical computation. However, this approach is time-consuming when the input image becomes large.

Error-free transformation, studied by Ogita et al. [12], is another direct way to improve accuracy without increasing the working precision. Compensated algorithms to evaluate univariate polynomials in different bases have been proposed in [12–15]. Inspired by their work, we extend the univariate compensated algorithm to the tensor product case using the compensated Clenshaw algorithm [16] for the evaluation of Chebyshev series [15]. We propose a compensated Clenshaw tensor product algorithm (for simplicity we denote it by CompCTP) to evaluate polynomials expressed in Chebyshev tensor product form. The proposed algorithm produces the same accuracy as computing with twice the working precision.

Since an image is fundamentally treated as a two-dimensional spatial function, we use a general two-dimensional function to illustrate our algorithm in the sequel. This paper has the following layout. In Section 2, we introduce some preliminaries and the basic algorithms underlying our method. In Section 3, we propose the compensated algorithm to compute a surface in Chebyshev tensor product form. In Section 4, we analyze the forward error bound of the algorithm. In Section 5, a series of numerical experiments illustrates the accuracy and efficiency of the proposed algorithm.

2. Preliminaries and Error-Free Transformations

2.1. Basic Notations

At the present time, the IEEE 64-bit floating-point arithmetic standard is widely implemented and is sufficiently accurate for most scientific applications. Throughout the paper, we assume that all computations are performed in IEEE 754 [17] double precision and that neither overflow nor underflow occurs.

We assume that the computations are carried out in a floating-point arithmetic obeying the standard model
$$\mathrm{fl}(a \circ b) = (a \circ b)(1+\delta), \qquad |\delta| \le u, \qquad \circ \in \{+, -, \times, \div\},$$
where $u$ is the working precision. For brevity we denote $\gamma_n := nu/(1-nu)$ for $nu < 1$, and write $\widetilde{\gamma}_n$ for a quantity of the same order with a moderately larger constant [18]. We use $\mathrm{fl}(\cdot)$ and a hat, as in $\hat{x}$, to denote the computed value of an expression $x$.

Finally, we recall the Chebyshev polynomials and the analogous Chebyshev polynomials [15, 19]. On the interval $[-1, 1]$ the Chebyshev polynomials satisfy the three-term recurrence
$$T_0(x) = 1, \quad T_1(x) = x, \quad T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x), \tag{2}$$
and the analogous Chebyshev polynomials, used in the error analysis, satisfy the corresponding recurrence with nonnegative terms,
$$\widetilde{T}_0(x) = 1, \quad \widetilde{T}_1(x) = x, \quad \widetilde{T}_{n+1}(x) = 2x\widetilde{T}_n(x) + \widetilde{T}_{n-1}(x), \qquad x \ge 0. \tag{3}$$

2.2. Error-Free Transformations

Rounding errors are an unavoidable consequence of working in finite precision arithmetic [18]. An error-free transformation (EFT) of a floating-point operation $\circ$ transforms a pair of floating-point numbers $(a, b)$ into a new pair $(x, y)$ with $x = \mathrm{fl}(a \circ b)$ and $x + y = a \circ b$, so that the rounding error $y$ is captured exactly. Two basic EFT algorithms are Knuth's TwoSum [20] (error-free summation of two floating-point numbers) and Dekker's TwoProd algorithm [21] (error-free product of two floating-point numbers).
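For concreteness, the two EFTs can be sketched in a few lines. The following Python version (the paper's own experiments use MATLAB) implements Knuth's branch-free TwoSum and Dekker's splitting-based TwoProd, assuming IEEE 754 binary64 arithmetic; the function names are ours.

```python
_SPLIT = 134217729.0  # 2**27 + 1, Dekker's splitting constant for binary64

def two_sum(a, b):
    # Knuth's branch-free TwoSum: returns (s, e) with s = fl(a + b) and s + e = a + b
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def split(a):
    # Dekker's split of a double into two non-overlapping halves: hi + lo = a
    t = _SPLIT * a
    hi = t - (t - a)
    return hi, a - hi

def two_prod(a, b):
    # Dekker's TwoProd: returns (p, e) with p = fl(a * b) and p + e = a * b
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e
```

For instance, `two_sum(1.0, 2.0**-60)` returns `(1.0, 2.0**-60)`: the low-order part that ordinary addition discards is recovered exactly.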

The Clenshaw algorithm [16] is a recursive method to compute a linear combination of Chebyshev polynomials $p(x) = \sum_{j=0}^{n} a_j T_j(x)$. Reviewing the work [15], the forward error bound of the Clenshaw algorithm satisfies (4).
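As a reference point, the univariate Clenshaw recurrence for a Chebyshev series can be sketched as follows (a Python illustration; the variable names are ours, not the paper's):

```python
def clenshaw(a, x):
    """Evaluate p(x) = a[0] + sum_{k=1..n} a[k] * T_k(x) by the Clenshaw recurrence."""
    b1 = b2 = 0.0  # b_{k+1} and b_{k+2} of the backward recurrence
    for k in range(len(a) - 1, 0, -1):
        b1, b2 = a[k] + 2.0 * x * b1 - b2, b1
    return a[0] + x * b1 - b2

clenshaw([1.0, 2.0, 3.0], 0.5)  # evaluates 1 + 2*T_1(0.5) + 3*T_2(0.5) -> 0.5
```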

Theorem 1 (see [15]). Let $p(x) = \sum_{j=0}^{n} a_j T_j(x)$ be a polynomial evaluated at a point $x \in [-1, 1]$ and let $\mathrm{Clenshaw}(p, x)$ denote the numerical result of the Clenshaw algorithm; then
$$\bigl|p(x) - \mathrm{Clenshaw}(p, x)\bigr| \le \widetilde{\gamma}_{2n} \sum_{j=0}^{n} |a_j|\,\widetilde{T}_j(|x|). \tag{4}$$

Combining EFTs with the Clenshaw algorithm, [15] proposes a compensated Clenshaw algorithm (CompClenshaw, in Algorithms 3–7) to evaluate univariate finite Chebyshev series. This algorithm has a smaller forward error bound than the Clenshaw algorithm.
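To make the construction concrete, here is a minimal Python sketch of a compensated Clenshaw evaluation in the spirit of [15]: each recurrence step is carried out with EFTs, the recorded rounding errors are propagated through a parallel recurrence, and the accumulated correction is added at the end. The helper names and the exact error-propagation details are our illustrative choices, not the paper's listing.

```python
_SPLIT = 134217729.0  # 2**27 + 1, Dekker's splitting constant for binary64

def two_sum(a, b):
    # Knuth's TwoSum: s + e == a + b exactly
    s = a + b
    bb = s - a
    return s, (a - (s - bb)) + (b - bb)

def two_prod(a, b):
    # Dekker's TwoProd: p + e == a * b exactly
    p = a * b
    t = _SPLIT * a; ah = t - (t - a); al = a - ah
    t = _SPLIT * b; bh = t - (t - b); bl = b - bh
    return p, ((ah * bh - p) + ah * bl + al * bh) + al * bl

def comp_clenshaw(a, x):
    """Compensated evaluation of p(x) = a[0] + sum_{k=1..n} a[k] * T_k(x)."""
    b1 = b2 = 0.0  # main recurrence, identical to the uncompensated Clenshaw
    d1 = d2 = 0.0  # correction recurrence fed by the recorded rounding errors
    for k in range(len(a) - 1, 0, -1):
        p, e1 = two_prod(2.0 * x, b1)   # 2*x is exact in binary arithmetic
        s, e2 = two_sum(p, -b2)
        b, e3 = two_sum(s, a[k])
        d = (e1 + e2 + e3) + (2.0 * x * d1 - d2)
        b1, b2, d1, d2 = b, b1, d, d1
    p, e1 = two_prod(x, b1)
    s, e2 = two_sum(p, -b2)
    r, e3 = two_sum(s, a[0])
    return r + ((e1 + e2 + e3) + (x * d1 - d2))
```

The correction recurrence costs only a constant factor more than the plain recurrence, which is why the compensated scheme is far cheaper than switching to double-double arithmetic.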

Theorem 2 (see [15]). Let $p(x) = \sum_{j=0}^{n} a_j T_j(x)$ be a finite Chebyshev series. The forward error bound of the compensated Clenshaw algorithm (CompClenshaw) verifies
$$\bigl|p(x) - \mathrm{CompClenshaw}(p, x)\bigr| \le u\,|p(x)| + \widetilde{\gamma}_{2n}^{\,2} \sum_{j=0}^{n} |a_j|\,\widetilde{T}_j(|x|). \tag{5}$$

Since the factor $\widetilde{\gamma}_{2n}^{\,2}$ is of order $u^2$, comparing (5) to (4) shows that CompClenshaw is much more likely to produce an accurate result.

3. Accurate Algorithm to Evaluate Chebyshev Tensor Product Surface

In this section, we derive a compensated algorithm to evaluate Chebyshev tensor product series based on EFTs. The idea is to extend the compensated Clenshaw algorithm to polynomials expressed in Chebyshev tensor product form. In order to extend the Clenshaw algorithm to the tensor product case, we express the series as
$$p(x, y) = \sum_{i=0}^{n}\sum_{j=0}^{m} a_{ij}\,T_i(x)\,T_j(y) = \sum_{i=0}^{n}\Bigl(\sum_{j=0}^{m} a_{ij}\,T_j(y)\Bigr) T_i(x). \tag{6}$$
Therefore, we obtain the Clenshaw tensor product algorithm, which evaluates the Chebyshev tensor product series with a nested Clenshaw algorithm (Algorithm 1).

function CTP(A, x, y)  % A is the coefficient matrix of the polynomial
for i = 0 : n do
  c(i) = Clenshaw(A(i, :), y)  % i-th row series evaluated at y by the Clenshaw algorithm
end for
CTP = Clenshaw(c, x)  % CTP is the numerical result of p(x, y) using the Clenshaw algorithm
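The nested scheme of Algorithm 1 can be sketched in Python as follows (an illustration of the row-then-column structure, with our own variable names):

```python
def clenshaw_1d(a, t):
    # Clenshaw recurrence for a[0] + sum_{k=1..n} a[k] * T_k(t)
    b1 = b2 = 0.0
    for k in range(len(a) - 1, 0, -1):
        b1, b2 = a[k] + 2.0 * t * b1 - b2, b1
    return a[0] + t * b1 - b2

def clenshaw_tensor(A, x, y):
    # A[i][j] multiplies T_i(x) * T_j(y)
    inner = [clenshaw_1d(row, y) for row in A]  # collapse the y-direction
    return clenshaw_1d(inner, x)                # then the x-direction

clenshaw_tensor([[1.0, 0.0], [0.0, 1.0]], 0.5, 0.25)  # 1 + T_1(x)*T_1(y) -> 1.125
```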

Substituting the compensated Clenshaw algorithm for the plain Clenshaw algorithm in the nested scheme, we put forward a compensated Clenshaw tensor product algorithm to improve the CTP algorithm. We call this compensated algorithm CompCTP (Algorithm 2).

function CompCTP(A, x, y)  % A is the coefficient matrix of the polynomial
for i = 0 : n do
  [c(i), e(i)] = CompClenshaw(A(i, :), y)  % floating-point row value and its recorded rounding error
end for
[r, d] = CompClenshaw(c, x)
CompCTP = r + (d + Clenshaw(e, x))  % floating-point result plus the correcting term

Algorithms 3–7 list the auxiliary routines: the error-free transformations TwoSum and TwoProd, the EFT version of the Clenshaw recurrence used by CompClenshaw, the CTP algorithm carried out in double-double arithmetic, and a generator of ill-conditioned test polynomials. The generator takes the expected evaluation of the polynomial, the expected condition number, and the coordinates as input, chooses random coefficients of the surface accordingly, and returns the actual evaluation of the polynomial together with the actual condition number.
According to Algorithm 2, combining (6) we have that the exact value satisfies
$$p(x, y) = \mathrm{CTP}(A, x, y) + \sum_{i=0}^{n} \varepsilon_i\,T_i(x) + \eta,$$
where $\varepsilon_i$ is the theoretical error produced by the inner Clenshaw algorithm at the $i$th step and $\eta$ is the theoretical error generated by the outer Clenshaw algorithm.

Based on the previous analysis, the correcting term computed by CompCTP is an approximation of the theoretical global error. So the result of the CompCTP algorithm is more accurate than the plain floating-point result of Algorithm 1.
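A simplified Python sketch of this construction is given below: the inner recurrences return each row's floating-point value together with an estimate of its rounding error, and the correcting term combines the outer error with the row-error series evaluated at $x$. The function names and the error-propagation details are our illustrative choices; the paper's Algorithm 2 may differ.

```python
_SPLIT = 134217729.0  # 2**27 + 1, Dekker's splitting constant

def two_sum(a, b):
    s = a + b
    bb = s - a
    return s, (a - (s - bb)) + (b - bb)

def two_prod(a, b):
    p = a * b
    t = _SPLIT * a; ah = t - (t - a); al = a - ah
    t = _SPLIT * b; bh = t - (t - b); bl = b - bh
    return p, ((ah * bh - p) + ah * bl + al * bh) + al * bl

def clenshaw(a, x):
    b1 = b2 = 0.0
    for k in range(len(a) - 1, 0, -1):
        b1, b2 = a[k] + 2.0 * x * b1 - b2, b1
    return a[0] + x * b1 - b2

def eft_clenshaw(a, x):
    # Clenshaw recurrence carried out with EFTs: returns the floating-point
    # result of the recurrence plus an estimate of its accumulated rounding error.
    b1 = b2 = 0.0
    d1 = d2 = 0.0
    for k in range(len(a) - 1, 0, -1):
        p, e1 = two_prod(2.0 * x, b1)
        s, e2 = two_sum(p, -b2)
        b, e3 = two_sum(s, a[k])
        d = (e1 + e2 + e3) + (2.0 * x * d1 - d2)
        b1, b2, d1, d2 = b, b1, d, d1
    p, e1 = two_prod(x, b1)
    s, e2 = two_sum(p, -b2)
    r, e3 = two_sum(s, a[0])
    return r, (e1 + e2 + e3) + (x * d1 - d2)

def comp_ctp(A, x, y):
    rows = [eft_clenshaw(row, y) for row in A]
    c = [v for v, _ in rows]    # row values, as an uncompensated CTP would compute them
    err = [e for _, e in rows]  # estimated rounding error of each row
    r, d = eft_clenshaw(c, x)
    # correcting term: outer error plus the row-error series evaluated at x
    return r + (d + clenshaw(err, x))
```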

4. Error Analysis of the CompCTP Algorithm

In this section, we carry out an error bound of the CompCTP algorithm for Chebyshev tensor product surfaces. Firstly, we consider the error bound of the CTP algorithm. According to Theorem 1, we deduce Theorem 3.

Theorem 3. Let us consider a Chebyshev tensor product surface $p(x, y) = \sum_{i=0}^{n}\sum_{j=0}^{m} a_{ij}\,T_i(x)\,T_j(y)$ and suppose that $2(n+m)u < 1$, where $u$ is the working precision. Then the value computed in floating-point arithmetic through Algorithm 1 satisfies
$$\bigl|p(x, y) - \mathrm{CTP}(A, x, y)\bigr| \le \widetilde{\gamma}_{n+m} \sum_{i=0}^{n}\sum_{j=0}^{m} |a_{ij}|\,\widetilde{T}_i(|x|)\,\widetilde{T}_j(|y|).$$

Proof. Suppose $c_i = \mathrm{Clenshaw}(A(i, :), y)$ is the numerical result of the Clenshaw algorithm applied to the $i$th row. Using Theorem 1 for the inner recurrences and again for the outer recurrence, and accumulating the resulting perturbations, we obtain the claimed bound.

In order to analyze the error bound of the CompCTP algorithm, we need a lemma (Lemma 5). Firstly, we review a lemma from [15].

Lemma 4 (see [15]). Let $\pi$ denote the sum of the round-off errors recorded by the EFTs in the compensated Clenshaw recurrence (Algorithm 5). One obtains that $\pi$ is bounded by a term of order $u$ times $\sum_{j=0}^{n} |a_j|\,\widetilde{T}_j(|x|)$.

Lemma 5. Let $\varepsilon$ and $\hat{\varepsilon}$ be the theoretical error and the numerically computed error of the compensated Clenshaw algorithm, respectively. One can get relation (13):
$$|\varepsilon - \hat{\varepsilon}| \le \mathcal{O}(u^2) \sum_{j=0}^{n} |a_j|\,\widetilde{T}_j(|x|).$$

Proof. Obviously, the computed error $\hat{\varepsilon}$ is itself obtained by the Clenshaw algorithm applied to the recorded rounding errors as coefficients (Algorithm 5). Using (4) we obtain a first-order bound (14) on the evaluation error of this correcting term. Applying Lemma 4 to (14), we have that the recorded coefficients are themselves of order $u$. We obtain that the overall discrepancy is of order $u^2$, and then we deduce relation (13).

Finally, we derive the forward error bound of the CompCTP algorithm using the previous analysis.

Theorem 6. Let $p(x, y) = \sum_{i=0}^{n}\sum_{j=0}^{m} a_{ij}\,T_i(x)\,T_j(y)$ be a Chebyshev tensor product surface. The CompCTP algorithm satisfies
$$\bigl|p(x, y) - \mathrm{CompCTP}(A, x, y)\bigr| \le u\,|p(x, y)| + \widetilde{\gamma}_{n+m}^{\,2} \sum_{i=0}^{n}\sum_{j=0}^{m} |a_{ij}|\,\widetilde{T}_i(|x|)\,\widetilde{T}_j(|y|). \tag{16}$$

Proof. According to Algorithm 2, we first write down the numerical result of the CompCTP algorithm. Using relation (9) we obtain a decomposition of the total error. Next, we consider the bound of the inner evaluations: for each row, applying Theorem 2 gives bounds (20) and (21) on the computed row values. Then, propagating these perturbations through the outer recurrence, we get (24). Let us consider the computed correcting term. Because it is the error delivered by the compensated inner algorithm, according to Lemma 5 we obtain a second-order bound; using this relation together with (22), we have (27). Finally, we focus on the bound of the remaining term. Using the triangle inequality we split it into two parts: the evaluation error of the outer Clenshaw recurrence, which we bound by combining Theorem 1, and the propagated inner errors; combining (13), (28), and (29) we obtain (30). Synthetically, combining (27) and (30) we derive (31). According to (25), (26), and (31) we obtain (32); using (20), (21), (24), and (32) the error bound (33) follows. By the properties of the $\gamma_n$ factors [18], we deduce some elementary inequalities; combining (19) with (33), we then obtain relation (16).

Let $p(x, y)$ be a polynomial in Chebyshev tensor product form. If the condition number for polynomial evaluation of $p$ at the entry $(x, y)$ is defined by
$$\mathrm{cond}(p, x, y) = \frac{\sum_{i=0}^{n}\sum_{j=0}^{m} |a_{ij}|\,\widetilde{T}_i(|x|)\,\widetilde{T}_j(|y|)}{|p(x, y)|},$$
we show the relative error bounds of the CTP and the CompCTP algorithm in Corollary 7.
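A condition number of this kind can be computed numerically as sketched below; for simplicity this illustration uses the plain Chebyshev polynomials $|T_i(x)|$ in the numerator rather than the analogous ones, and the function names are ours.

```python
def cheb_T(n, t):
    # T_n(t) by the three-term recurrence T_{k+1} = 2*t*T_k - T_{k-1}
    t0, t1 = 1.0, t
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * t * t1 - t0
    return t1

def cond_tensor(A, x, y, p_xy):
    # condition number: sum_ij |a_ij| * |T_i(x)| * |T_j(y)| divided by |p(x, y)|
    num = sum(abs(A[i][j]) * abs(cheb_T(i, x)) * abs(cheb_T(j, y))
              for i in range(len(A)) for j in range(len(A[0])))
    return num / abs(p_xy)
```

A large value signals heavy cancellation in the evaluation, which is exactly the regime where the compensated algorithm pays off.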

Corollary 7. Let $p(x, y)$ be a polynomial in Chebyshev tensor product form. The relative error bounds of the CTP algorithm and the CompCTP algorithm yield
$$\frac{\bigl|\mathrm{CTP}(A, x, y) - p(x, y)\bigr|}{|p(x, y)|} \le \widetilde{\gamma}_{n+m}\,\mathrm{cond}(p, x, y), \tag{36}$$
$$\frac{\bigl|\mathrm{CompCTP}(A, x, y) - p(x, y)\bigr|}{|p(x, y)|} \le u + \widetilde{\gamma}_{n+m}^{\,2}\,\mathrm{cond}(p, x, y). \tag{37}$$

As Corollary 7 illustrates, if $\mathrm{cond}(p, x, y) \le u^{-1}$, then the relative error of the result evaluated by CompCTP is bounded by a modest multiple of the working precision $u$. In other words, the computed result generated by the compensated scheme is as accurate as if it were evaluated in double-double precision.

5. Experimental Results

In this section, we perform a series of numerical experiments. The programs are written in MATLAB and run under MATLAB R2015b. We consider polynomials in Chebyshev tensor product form with floating-point coefficients evaluated at floating-point arguments. Considering efficiency, we use the CTP algorithm with quad-double arithmetic [22], instead of the symbolic toolbox, to compute accurate reference values of the polynomials.

Generally, we can get an accurate result using Algorithm 1 when the Chebyshev tensor product series is well-conditioned. But we need a more accurate algorithm when the problem is ill-conditioned. We consider a bivariate polynomial proposed in [14] and convert it to Chebyshev tensor product form.

We extend the univariate conversion algorithm [23] to change a bivariate polynomial in power form into Chebyshev tensor product form. We transform the coefficients via the MATLAB symbolic toolbox. Since the coefficients of the Chebyshev tensor product polynomials that we evaluate must be floating-point numbers, they are rounded to the nearest floating-point numbers.
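A basis conversion in this style can be sketched in pure Python: multiplication by $x$ in the Chebyshev basis uses $xT_0 = T_1$ and $xT_n = (T_{n+1} + T_{n-1})/2$, a Horner-like loop converts power coefficients to Chebyshev coefficients, and the bivariate case applies the univariate conversion first along the rows and then along the columns. This is our illustrative reconstruction, not the algorithm of [23].

```python
def _mulx(a):
    # multiply a Chebyshev series by x: x*T_0 = T_1, x*T_n = (T_{n+1} + T_{n-1}) / 2
    out = [0.0] * (len(a) + 1)
    out[1] += a[0]
    for n in range(1, len(a)):
        out[n + 1] += 0.5 * a[n]
        out[n - 1] += 0.5 * a[n]
    return out

def poly2cheb_1d(c):
    # power-basis coefficients c[k] (for x**k) -> Chebyshev coefficients
    a = [0.0]
    for ck in reversed(c):  # Horner's scheme carried out in the Chebyshev basis
        a = _mulx(a)
        a[0] += ck
    return a[:len(c)]       # the trailing entry is always zero; keep the input length

def poly2cheb_2d(C):
    # C[i][j] multiplies x**i * y**j; result[i][j] multiplies T_i(x) * T_j(y)
    rows = [poly2cheb_1d(r) for r in C]                        # convert the y-direction
    cols = [poly2cheb_1d([rows[i][j] for i in range(len(rows))])
            for j in range(len(rows[0]))]                      # then the x-direction
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(rows))]
```

For example, `poly2cheb_2d([[0.0], [0.0], [1.0]])` converts $x^2$ into $\tfrac12 T_0 + \tfrac12 T_2$, giving `[[0.5], [0.0], [0.5]]`.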

We use CTP, CompCTP, and CTP along with quad-double arithmetic [22] to compute the value of the Chebyshev tensor product polynomial at grid points near the multiple root. Figure 1 shows the surfaces generated by the different algorithms. It is obvious that the compensated algorithm approximates the expected smooth surface as accurately as the CTP algorithm with quad-double arithmetic does, once the results are rounded to the working precision. Observe that the surface computed by plain CTP is a folded interface and varies only slightly in one coordinate direction. The reason is that the algorithm first runs the inner Clenshaw recurrences row by row and then the outer recurrence, so the round-off errors show more strongly along the outer direction. Thus, the CompCTP algorithm computes the desired result.

Figure 2 shows the absolute forward errors of the CTP and CompCTP algorithms at the sample points. It is clear that CompCTP reduces the error much further than the CTP algorithm. Besides, we observe that the relative errors of the CompCTP algorithm stay below the working precision even near the multiple root. Therefore, the experiments verify our estimate in relation (37).

Next we consider the relative error bounds for ill-conditioned polynomials. We produce a series of ill-conditioned polynomials in Chebyshev tensor product form and evaluate them using CTP, CompCTP, and CTP with double-double precision (Algorithms 3–7). We choose the degree (a parameter from (6)) so that the condition numbers range over many orders of magnitude. The algorithm generating the ill-conditioned polynomials is shown in Algorithms 3–7; it is similar to the generator in [24]. Considering the size of the problem and computational efficiency, we choose moderate degrees. We plot the results in Figure 3. Obviously, CompCTP behaves as expected and is more stable and accurate than CTP: its relative errors are of the order of the working precision as long as the condition number does not exceed $u^{-1}$, and beyond that threshold the relative errors increase almost linearly with the condition number. Meanwhile, CompCTP keeps its high accuracy because the recorded rounding errors closely approximate the true errors even in ill-conditioned cases. Besides, we also compute the ill-conditioned polynomials using CTP based on Bailey's quad-double arithmetic [2, 25]. Comparing the two, we find that the results of the CompCTP algorithm are as accurate as those computed using double-double arithmetic, as illustrated by our numerical experiment.

Finally, we focus on the computational complexity of all the algorithms, counting the flops required by CTP, CompCTP, and CTP carried out in double-double and quad-double arithmetic.

Considering the previous comparisons of accuracy, we can confirm that the CompCTP algorithm is as accurate as the CTP computation with double-double arithmetic (Algorithms 3–7). However, CompCTP requires on average only a fraction of the flops. So our algorithm is more efficient than CTP with double-double arithmetic.

6. Conclusion

We present an accurate and efficient algorithm for the evaluation of Chebyshev tensor product surfaces, based on the Clenshaw algorithm and error-free transformations. The error analysis shows that the CompCTP algorithm attains the same accuracy as the traditional CTP algorithm computed with twice the working precision. Besides, this compensated algorithm runs more efficiently than the CTP algorithm with double-double precision. Experiments illustrate that our algorithm is stable and accurate even in some ill-conditioned cases.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

Hao Jiang is partially supported by the National Natural Science Foundation of China (no. 61402495, no. 61602166, no. 61303189, and no. 61402496). Chongwen Duan is partially supported by the National Natural Science Foundation of China (no. 61401515). Hongxia Wang is partially supported by the National Natural Science Foundation of China (no. 61571008). Lizhi Cheng is partially supported by Science Project of National University of Defense Technology (JC120201) and National Natural Science Foundation of Hunan Province in China (13JJ2001).