Mathematical Problems in Engineering
Volume 2017 (2017), Article ID 1729287, 10 pages
https://doi.org/10.1155/2017/1729287
Research Article

Accurate and Efficient Evaluation of Chebyshev Tensor Product Surface

1School of Science, National University of Defense Technology, Changsha, China
2College of Computer, National University of Defense Technology, Changsha, China
3State Key Laboratory of Astronautic Dynamics, Xi’an, China

Correspondence should be addressed to Hao Jiang; haojiang@nudt.edu.cn

Received 10 March 2017; Accepted 3 July 2017; Published 27 September 2017

Academic Editor: Elisa Francomano

Copyright © 2017 Keshan He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Chebyshev tensor product surfaces are widely used in image analysis and numerical approximation. This article presents an accurate algorithm to evaluate surfaces in Chebyshev tensor product form, based on the application of error-free transformations to the traditional Clenshaw Chebyshev tensor product algorithm. Our error analysis shows that the relative error bound of the compensated algorithm is essentially u + O(u^2) x cond, in contrast to the classic scheme's O(u) x cond, where u is the working precision and cond is a condition number of the bivariate polynomial. This means that the accuracy of the computed result is similar to that produced by the classical approach with twice the working precision. Numerical experiments verify that the proposed algorithm is stable and efficient.

1. Introduction

Chebyshev polynomials appear in almost all mathematical and physical disciplines, including spectral methods, approximation theory, and the representation of potentials [1–4]. Bivariate Chebyshev polynomials have gained the attention of computer vision researchers [5, 6]. Over the years, researchers have focused on the application of Chebyshev tensor product series in image analysis [5–7]. A Chebyshev tensor product series can be used to approximate an image, which is essentially regarded as a two-dimensional spatial function [8]: two separable univariate Chebyshev polynomials that are discrete and orthogonal can approximate a two-dimensional signal. Mukundan et al. [5] introduce a discrete Chebyshev tensor product based on Chebyshev polynomials; these discrete Chebyshev tensor product functions prove effective as feature descriptors. Rahmalan et al. [6] propose a novel approach based on discrete orthogonal Chebyshev tensor products for efficient image compression. Recently, Omar et al. [7] propose a novel method for fusing images using Chebyshev tensor product series. All of the above require reconstructing an image from a finite Chebyshev tensor product surface. Thus, developing fast and reliable algorithms to evaluate Chebyshev tensor product series is of considerable interest [9]. The Clenshaw tensor product algorithm (CTP) [10] is one of the algorithms used to evaluate Chebyshev tensor product series.

To get a high-precision approximation of an image, it is essential to evaluate the series accurately; in particular, ill-conditioned cases require highly accurate numerical results. Li et al.'s double-double library [11] (double-double numbers are represented as an unevaluated sum of a leading double and a trailing double) can be used to improve the accuracy of numerical computation. However, such an approach becomes time-consuming as the input image grows larger.

Error-free transformation, studied by Ogita et al. [12], is another direct method to improve accuracy without increasing the working precision. Compensated algorithms to evaluate univariate polynomials in different bases have been proposed in [12–15]. Inspired by their work, we extend the univariate compensated approach to the tensor product case, using the compensated Clenshaw algorithm [15] for the evaluation of Chebyshev series based on the Clenshaw recurrence [16]. We derive a compensated Clenshaw tensor product algorithm (for simplicity we denote it by CompCTP) to evaluate polynomials expressed in Chebyshev tensor product form. The proposed algorithm produces the same accuracy as evaluation with twice the working precision.

Since an image is fundamentally treated as a two-dimensional spatial function, we use a general two-dimensional function to illustrate our algorithm in the sequel. This paper has the following layout. In Section 2, we introduce some preliminaries and the basic algorithms underlying our method. In Section 3, we propose the compensated algorithm to compute surfaces in Chebyshev tensor product form. In Section 4, we analyze the forward error bound of the algorithm. In Section 5, a series of numerical experiments illustrates the accuracy and efficiency of the proposed algorithm.

2. Preliminaries and Error-Free Transformations

2.1. Basic Notations

At the present time, IEEE 64-bit floating-point arithmetic is the standard implemented on most machines and is sufficiently accurate for most scientific applications. Throughout the paper, we assume that all computations are performed in IEEE 754 [17] double precision and that neither overflow nor underflow occurs.

We assume that the computations are performed in a floating-point arithmetic obeying the standard model fl(a o b) = (a o b)(1 + d), where o is one of +, -, x, /, |d| <= u, and u is the working precision (u = 2^-53 in double precision). For brevity we use the classical notation gamma_n = nu/(1 - nu) for nu < 1 [18], and we write fl(.) or a hat accent for the computed value of an expression.

Finally, we recall the Chebyshev polynomials and the analogous Chebyshev polynomials [15, 19]. Both families are defined on the interval [-1, 1] by three-term recurrences, shown in (2) and (3), respectively.

2.2. Error-Free Transformations

Rounding errors are an unavoidable consequence of working in finite-precision arithmetic [18]. Error-free transformations (EFTs) are techniques that transform a floating-point operation on a pair of floating-point numbers (a, b) into a new pair (x, y) with x = fl(a o b) and a o b = x + y, so that the rounding error y is captured exactly. Two classical EFTs are the TwoSum algorithm [20] (error-free summation of two floating-point numbers) and Dekker's TwoProd algorithm [21] (error-free product of two floating-point numbers).
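The two EFTs can be sketched in a few lines. The following Python sketch (the names `two_sum`, `split`, and `two_prod` are ours, not from the paper) works in IEEE double precision, which Python floats implement:

```python
def two_sum(a, b):
    """Knuth's TwoSum: return (x, y) with x = fl(a + b) and a + b = x + y exactly."""
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)
    return x, y

def split(a):
    """Dekker's splitting: a = hi + lo, each half representable in 26 bits."""
    c = 134217729.0 * a        # 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    """Dekker's TwoProd: return (x, y) with x = fl(a * b) and a * b = x + y exactly."""
    x = a * b
    ah, al = split(a)
    bh, bl = split(b)
    y = al * bl - (((x - ah * bh) - al * bh) - ah * bl)
    return x, y
```

For example, `two_sum(1.0, 2**-60)` returns `(1.0, 2**-60)`: the tiny addend, lost in the rounded sum, is recovered exactly in the error term.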

The Clenshaw algorithm [16] is a recursive method to compute a finite linear combination of Chebyshev polynomials. Following [15], the forward error bound of the Clenshaw algorithm satisfies (4).
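As a reference point for the compensated version below, here is a minimal Python sketch of the Clenshaw recurrence for a Chebyshev series (the name `clenshaw` and the coefficient-list convention are ours):

```python
def clenshaw(coeffs, x):
    """Evaluate sum_{j=0}^{n} coeffs[j] * T_j(x) by the Clenshaw recurrence.

    Backward recurrence: b_j = 2*x*b_{j+1} - b_{j+2} + a_j, with the
    final combination p(x) = x*b_1 - b_2 + a_0.
    """
    b1 = b2 = 0.0
    for a in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + a, b1
    return x * b1 - b2 + coeffs[0]
```

For instance, `clenshaw([0.0, 0.0, 0.0, 1.0], 0.5)` evaluates T_3(0.5) = 4(0.5)^3 - 3(0.5) = -1.0.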

Theorem 1 (see [15]). Let p(x) be a Chebyshev series evaluated at a point x and let Clenshaw(p, x) denote the numerical result of the Clenshaw algorithm; then the forward error satisfies the bound (4).

Combining EFTs with the Clenshaw algorithm, [15] proposes a compensated Clenshaw algorithm (Algorithm 5) to evaluate finite univariate Chebyshev series. This algorithm has a smaller forward error bound than the Clenshaw algorithm.

Theorem 2 (see [15]). Let p(x) be a finite Chebyshev series. The forward error of the compensated Clenshaw algorithm verifies the bound (5).

Since the second term of this bound is of order O(u^2), compared with (4), the compensated Clenshaw algorithm is more likely to produce an accurate result.
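One plausible realization of the compensated Clenshaw recurrence, in self-contained Python (our sketch of the idea in [15], not the paper's exact listing): the main recurrence is executed with EFTs, the local rounding errors are propagated through the same recurrence, and the accumulated correction is added back at the end.

```python
def two_sum(a, b):
    # error-free sum: a + b = x + y exactly
    x = a + b
    z = x - a
    return x, (a - (x - z)) + (b - z)

def two_prod(a, b):
    # error-free product via Dekker splitting: a * b = x + y exactly
    x = a * b
    c = 134217729.0 * a
    ah = c - (c - a); al = a - ah
    c = 134217729.0 * b
    bh = c - (c - b); bl = b - bh
    return x, al * bl - (((x - ah * bh) - al * bh) - ah * bl)

def comp_clenshaw(coeffs, x):
    """Compensated Clenshaw for sum_j coeffs[j]*T_j(x) (sketch after [15])."""
    b1 = b2 = 0.0      # main recurrence
    c1 = c2 = 0.0      # correction recurrence, fed by the local EFT errors
    for a in reversed(coeffs[1:]):
        p, ep = two_prod(2.0 * x, b1)    # 2*x is exact in binary arithmetic
        s, es = two_sum(p, -b2)
        t, et = two_sum(s, a)
        b1, b2 = t, b1
        c1, c2 = 2.0 * x * c1 - c2 + (ep + es + et), c1
    p, ep = two_prod(x, b1)
    s, es = two_sum(p, -b2)
    t, et = two_sum(s, coeffs[0])
    return t + (x * c1 - c2 + ep + es + et)
```

On ill-conditioned series with heavy cancellation, the plain recurrence loses many digits while this compensated variant typically returns a result accurate to near the working precision.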

3. Accurate Algorithm to Evaluate Chebyshev Tensor Product Surface

In this section, we derive a compensated algorithm to evaluate Chebyshev tensor product series based on EFTs. The technique is to extend the compensated Clenshaw algorithm to polynomials expressed in Chebyshev tensor product form. In order to extend the Clenshaw algorithm to the tensor product case, we express the series as p(x, y) = sum_i b_i(y) T_i(x), where b_i(y) = sum_j a_ij T_j(y). Therefore, we obtain the Clenshaw tensor product algorithm, which evaluates the Chebyshev tensor product series with a nested Clenshaw algorithm (Algorithm 1).

Algorithm 1: Clenshaw algorithm for evaluation of Chebyshev tensor product surface.
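The nesting described above amounts to one inner Clenshaw sweep per coefficient row, then one outer sweep. A minimal Python sketch (our naming, assuming `A[i][j]` holds the coefficient of T_i(x)*T_j(y)):

```python
def clenshaw(coeffs, x):
    # Clenshaw recurrence for sum_j coeffs[j] * T_j(x)
    b1 = b2 = 0.0
    for a in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + a, b1
    return x * b1 - b2 + coeffs[0]

def ctp(A, x, y):
    """Clenshaw tensor product: evaluate sum_{i,j} A[i][j] * T_i(x) * T_j(y)."""
    inner = [clenshaw(row, y) for row in A]   # b_i(y) for each row i
    return clenshaw(inner, x)                 # outer sweep in x
```

For example, `ctp([[1.0, 0.0], [0.0, 1.0]], 0.5, 0.25)` evaluates 1 + x*y = 1.125.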

Substituting the compensated Clenshaw algorithm for the plain Clenshaw recurrences in Algorithm 1, we put forward a compensated Clenshaw tensor product algorithm to improve the CTP algorithm. We call this compensated algorithm CompCTP (Algorithm 2).

Algorithm 2: Compensated Clenshaw algorithm for evaluation of Chebyshev tensor product surface.
Algorithm 3: Generate polynomial in the form of Chebyshev tensor product.
Algorithm 4: [16] Clenshaw algorithm to evaluate finite Chebyshev series.
Algorithm 5: [15] compensated Clenshaw algorithm to evaluate Chebyshev series accurately.
Algorithm 6: [15] Clenshaw algorithm in double-double format.
Algorithm 7: Clenshaw Chebyshev tensor product in double-double format.

According to Algorithm 2, combining the recurrences we can write the computed result as the CTP value plus a correction, where the correction collects the theoretical error produced by the compensated inner Clenshaw at each step of the loop and the theoretical error generated by the outer recurrence.

Based on the previous analysis, the correction computed by CompCTP is an approximation of this global rounding error. So the result of the CompCTP algorithm is more accurate than the floating-point result of Algorithm 1.
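One plausible self-contained Python sketch of this scheme (our construction mirroring the idea above, not the paper's exact Algorithm 2): each inner compensated sweep returns its leading result and its correction separately, the outer sweep is compensated as well, and the row corrections are folded in through one plain Clenshaw sweep at the end.

```python
def two_sum(a, b):
    x = a + b
    z = x - a
    return x, (a - (x - z)) + (b - z)

def two_prod(a, b):
    x = a * b
    c = 134217729.0 * a
    ah = c - (c - a); al = a - ah
    c = 134217729.0 * b
    bh = c - (c - b); bl = b - bh
    return x, al * bl - (((x - ah * bh) - al * bh) - ah * bl)

def clenshaw(coeffs, x):
    b1 = b2 = 0.0
    for a in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + a, b1
    return x * b1 - b2 + coeffs[0]

def comp_clenshaw_split(coeffs, x):
    """Compensated Clenshaw returning (leading value, correction) separately."""
    b1 = b2 = 0.0
    c1 = c2 = 0.0
    for a in reversed(coeffs[1:]):
        p, ep = two_prod(2.0 * x, b1)
        s, es = two_sum(p, -b2)
        t, et = two_sum(s, a)
        b1, b2 = t, b1
        c1, c2 = 2.0 * x * c1 - c2 + (ep + es + et), c1
    p, ep = two_prod(x, b1)
    s, es = two_sum(p, -b2)
    t, et = two_sum(s, coeffs[0])
    return t, x * c1 - c2 + (ep + es + et)

def comp_ctp(A, x, y):
    """Compensated CTP sketch for a rectangular coefficient matrix A."""
    inner = [comp_clenshaw_split(row, y) for row in A]
    v = [t for t, _ in inner]          # leading parts of b_i(y)
    c = [e for _, e in inner]          # correction parts of b_i(y)
    t, e = comp_clenshaw_split(v, x)   # outer compensated sweep
    return t + (e + clenshaw(c, x))    # fold all corrections in at the end
```

Keeping the inner corrections unrounded until the end preserves cancellation across rows, which is what makes the result behave as if computed in doubled precision.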

4. Error Analysis of Algorithm

In this section, we derive an error bound of the CompCTP algorithm for Chebyshev tensor product surfaces. First, we consider the error bound of the CTP algorithm. According to Theorem 1, we deduce Theorem 3.

Theorem 3. Let us consider a Chebyshev tensor product surface p(x, y) and suppose that the degrees are small enough that the gamma-quantities involved are well defined, where u is the working precision. Then the value computed in floating-point arithmetic through Algorithm 1 satisfies the bound (9).

Proof. Let the inner values b_i(y) be the numerical results of the Clenshaw algorithm at the i-th step. Applying Theorem 1 to the inner and outer sweeps, we obtain the stated bound.

In order to analyze the error bound of the CompCTP algorithm, we need a lemma (Lemma 5). First, we review a lemma from [15].

Lemma 4 (see [15]). Given a Chebyshev series and a point, with the round-off errors of the EFTs in the compensated Clenshaw algorithm (Algorithm 5), one obtains the corresponding bound.

Lemma 5. Let the theoretical error and the numerically computed error of the compensated inner sweep be given, respectively. One can get relation (13).

Proof. Obviously, the computed correction is itself a Clenshaw recurrence whose coefficients are the EFT error terms (Algorithm 5). Using (4) we obtain a bound for (14); applying Lemma 4 to (14), we obtain an intermediate estimate, and hence we deduce relation (13).

Finally, we establish the forward error bound of the CompCTP algorithm using the previous analysis.

Theorem 6. Let p(x, y) be a Chebyshev tensor product surface. The CompCTP algorithm satisfies the bound (16).

Proof. According to Algorithm 2, we write the numerical result of the CompCTP algorithm as the CTP value plus the computed correction. Using relation (9) we obtain a first estimate of the main term. Next, we consider the bound of the correction. For the inner compensated sweeps we first bound the recurrence quantities; then, according to the recurrence relation, we bound the outer sweep. Let us next consider the inner error term: because it is the error from the compensated inner algorithm, according to Lemma 5 we obtain a second-order bound; using this relation together with (22), we refine the estimate. Finally, we focus on the bound of the outer correction. Using the triangle inequality we obtain an upper bound, and since each term acts as the theoretical error of the inner sweep at the corresponding step, combining Theorem 1 we bound it as well. Combining (13), (28), and (29) we obtain (30); synthetically, combining (27) and (30) we derive (31). According to (25), (26), and (31), we obtain (32); using (20), (21), (24), and (32), the error bound (33) follows. By the properties of the gamma-quantities [18], we deduce the required inequalities, and combining (19) with (33) we obtain relation (16).

Let p(x, y) be a polynomial in Chebyshev tensor product form. If the condition number for the evaluation of p at the entry (x, y) is defined in the usual way, as the weighted sum of the absolute coefficient terms divided by |p(x, y)|, we show the relative error bounds of the CTP and CompCTP algorithms in Corollary 7.

Corollary 7. Let p(x, y) be a polynomial in Chebyshev tensor product form. The relative error bounds of the CTP algorithm and the CompCTP algorithm yield (37).

As Corollary 7 illustrates, as long as the condition number does not exceed roughly 1/u, the error of the result evaluated by CompCTP is bounded by a modest multiple of the working precision u. Moreover, the computed result generated by the CompCTP scheme is as accurate as if it were evaluated in double-double precision.
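For concreteness, a standard condition number of this type (the absolute-value sum over the exact value, in the style of [19]; the function names and list conventions are ours) can be computed as follows:

```python
def cheb_vals(n, t):
    """T_0(t), ..., T_n(t) via the three-term recurrence."""
    v = [1.0, t][: n + 1]
    while len(v) < n + 1:
        v.append(2.0 * t * v[-1] - v[-2])
    return v

def cond_ctp(A, x, y):
    """Condition number sum |a_ij * T_i(x) * T_j(y)| / |p(x, y)|."""
    Tx = cheb_vals(len(A) - 1, x)
    Ty = cheb_vals(len(A[0]) - 1, y)
    num = sum(abs(a * Tx[i] * Ty[j])
              for i, row in enumerate(A) for j, a in enumerate(row))
    den = abs(sum(a * Tx[i] * Ty[j]
                  for i, row in enumerate(A) for j, a in enumerate(row)))
    return num / den
```

A value near 1 indicates a well-conditioned evaluation with no cancellation; the larger the ratio, the more digits a plain evaluation is expected to lose.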

5. Experimental Results

In this section, we perform a series of numerical experiments. The programs are written in MATLAB and run under MATLAB R2015b. We consider polynomials in Chebyshev tensor product form with floating-point coefficients evaluated at floating-point arguments. For efficiency, we use CTP with quad-double arithmetic [22] instead of the symbolic toolbox to compute accurate reference values.

Generally, Algorithm 1 yields an accurate result when the Chebyshev tensor product series is well-conditioned, but a more accurate algorithm is needed when the problem is ill-conditioned. We consider a bivariate polynomial proposed in [14] and convert it to Chebyshev tensor product form.

We extend the univariate conversion algorithm [23] to change a bivariate polynomial from power form to Chebyshev tensor product form, transforming the coefficients via the MATLAB symbolic toolbox. Since the evaluated polynomials must have floating-point coefficients, the exact converted coefficients are rounded to the nearest floating-point numbers.
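A univariate conversion of this kind can be sketched with a Horner-style loop that uses the identities x*T_0 = T_1 and x*T_k = (T_{k+1} + T_{k-1})/2 (Python sketch with our naming; the bivariate case applies it along each row and then each column of the coefficient matrix, and exact rational arithmetic would be used before the final rounding):

```python
def mul_x(a):
    """Multiply a Chebyshev series with coefficients a[0..n] by x."""
    res = [0.0] * (len(a) + 1)
    res[1] += a[0]                    # x*T_0 = T_1
    for k in range(1, len(a)):
        res[k + 1] += 0.5 * a[k]      # x*T_k = (T_{k+1} + T_{k-1})/2
        res[k - 1] += 0.5 * a[k]
    return res

def power_to_cheb(c):
    """Convert sum_k c[k]*x**k (ascending powers) to Chebyshev coefficients."""
    a = [c[-1]]
    for ck in reversed(c[:-1]):       # Horner step: a <- x*a + ck
        a = mul_x(a)
        a[0] += ck
    return a
```

For example, `power_to_cheb([0.0, 0.0, 1.0])` returns `[0.5, 0.0, 0.5]`, reflecting the identity x^2 = (T_0 + T_2)/2.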

We use CTP, CompCTP, and CTP with quad-double arithmetic [22] to compute the value of the Chebyshev tensor product polynomial at grid points near the multiple root. Figure 1 shows the surfaces generated by the different algorithms. It is obvious that the compensated algorithm approximates the expected smooth surface as accurately as CTP with quad-double arithmetic when the results are rounded to the working precision. Observe that the surface produced by CTP is jagged and varies only slightly in one direction; the reason is that the algorithm first runs the inner Clenshaw recurrence at each step of the loop and then the outer recurrence, so the round-off errors show up more visibly along the outer direction. Thus, the CompCTP algorithm computes the desired result.

Figure 1: The surfaces obtained using the CTP, CompCTP, and quad-double CTP algorithms to evaluate the Chebyshev tensor product series.

Figure 2 shows the absolute forward errors of the CTP and CompCTP algorithms at the grid points. It is clear that CompCTP reduces the error much more effectively than CTP. Besides, we observe that the relative errors of the CompCTP algorithm stay below the working precision even near the multiple root. Therefore, the experiments verify our estimate (37).

Figure 2: Absolute errors of the CTP algorithm (a) and the CompCTP algorithm (b).

Next we consider the relative error bounds for ill-conditioned polynomials. We generate a series of ill-conditioned polynomials in Chebyshev tensor product form and evaluate them using CTP, CompCTP, and CTP with double-double precision (Algorithm 7). The degrees are chosen so that the condition numbers vary over a wide range. The algorithm generating the ill-conditioned polynomials is shown in Algorithm 3, which is similar to the algorithm in [24]. Considering the size of the problem and computational efficiency, we choose moderate degrees. We plot the results in Figure 3. Obviously, CompCTP shows the expected behavior: it is more stable and accurate than CTP. Its relative errors are equal to or smaller than the working precision while the condition number stays below roughly 1/u; beyond that point, the relative errors increase almost linearly. Meanwhile, CompCTP retains high accuracy even under ill-conditioning because the recorded rounding errors closely approximate the true errors. Besides, we also compute the ill-conditioned polynomials using CTP based on Bailey's quad-double [2, 25] arithmetic as a reference. The comparison shows that the results of the CompCTP algorithm are as accurate as those computed using double-double arithmetic, as illustrated by our numerical experiment.

Figure 3: The relative forward errors of CTP, CompCTP, and double-double CTP on ill-conditioned polynomials.

Finally, we focus on the computational complexity (flop counts) of all the algorithms.

Considering the previous comparisons of accuracy, we can confirm that the CompCTP algorithm is as accurate as CTP with double-double arithmetic (Algorithm 7). However, CompCTP requires on average only a fraction of the flops. So our algorithm is more efficient than CTP with double-double arithmetic.

6. Conclusion

We present an accurate and efficient algorithm for the evaluation of Chebyshev tensor product surfaces, based on the Clenshaw algorithm and error-free transformations. The error analysis shows that the CompCTP algorithm achieves the same accuracy as the traditional CTP algorithm executed with twice the working precision. Moreover, the compensated algorithm runs faster than the CTP algorithm in double-double precision. Experiments illustrate that our algorithm is stable and accurate even in some ill-conditioned cases.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

Hao Jiang is partially supported by the National Natural Science Foundation of China (no. 61402495, no. 61602166, no. 61303189, and no. 61402496). Chongwen Duan is partially supported by the National Natural Science Foundation of China (no. 61401515). Hongxia Wang is partially supported by the National Natural Science Foundation of China (no. 61571008). Lizhi Cheng is partially supported by Science Project of National University of Defense Technology (JC120201) and National Natural Science Foundation of Hunan Province in China (13JJ2001).

References

1. L. Fox and I. Parker, Chebyshev Polynomials in Numerical Analysis, Oxford University Press, 1968.
2. D. H. Bailey, R. Barrio, and J. M. Borwein, “High-precision computation: mathematical physics and dynamics,” Applied Mathematics and Computation, vol. 218, no. 20, pp. 10106–10121, 2012.
3. T. J. Rivlin, The Chebyshev Polynomials, 1974.
4. R. Barrio and A. Elipe, “Integration of orbital motions with Chebyshev polynomials,” in Dynamics and Astrometry of Natural and Artificial Celestial Bodies, pp. 419–424, Springer, 1997.
5. R. Mukundan, S. H. Ong, and P. A. Lee, “Image analysis by Tchebichef moments,” IEEE Transactions on Image Processing, vol. 10, no. 9, pp. 1357–1364, 2001.
6. H. Rahmalan, N. A. Abu, and S. L. Wong, “Using Tchebichef moments for fast and efficient image compression,” Pattern Recognition and Image Analysis, vol. 20, no. 4, pp. 505–512, 2010.
7. Z. Omar, N. Mitianoudis, and T. Stathaki, “Two-dimensional Chebyshev polynomials for image fusion,” in Proceedings of the 28th Picture Coding Symposium (PCS '10), pp. 426–429, Japan, December 2010.
8. N. A. Abu, N. Suryana, and R. Mukundan, “Perfect image reconstruction using discrete orthogonal moments,” in Proceedings of the Fourth IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP '04), pp. 903–907, September 2004.
9. N. A. Abu and M. R. K. Ariffin, “A novel psychovisual model on an independent video frame for an almost lossless compression,” in Proceedings of the 2014 10th International Conference on Information Assurance and Security (IAS '14), pp. 60–65, Japan, November 2014.
10. W. Guobao and W. Shigang, “Recursive computation of Tchebichef moment and its inverse transform,” Pattern Recognition, vol. 39, no. 1, pp. 47–56, 2006.
11. X. S. Li, J. W. Demmel, D. H. Bailey et al., “Design, implementation and testing of extended and mixed precision BLAS,” ACM Transactions on Mathematical Software, vol. 28, no. 2, pp. 152–205, 2002.
12. T. Ogita, S. M. Rump, and S. Oishi, “Accurate sum and dot product,” SIAM Journal on Scientific Computing, vol. 26, no. 6, pp. 1955–1988, 2005.
13. S. Graillat, P. Langlois, and N. Louvet, “Compensated Horner scheme,” in Algebraic and Numerical Algorithms and Computer-Assisted Proofs, B. Buchberger, S. Oishi, M. Plum, and S. M. Rump, Eds., p. 05391, 2005.
14. H. Jiang, S. Li, L. Cheng, and F. Su, “Accurate evaluation of a polynomial and its derivative in Bernstein form,” Computers & Mathematics with Applications, vol. 60, no. 3, pp. 744–755, 2010.
15. H. Jiang, R. Barrio, H. Li, X. Liao, L. Cheng, and F. Su, “Accurate evaluation of a polynomial in Chebyshev form,” Applied Mathematics and Computation, vol. 217, no. 23, pp. 9702–9716, 2011.
16. C. W. Clenshaw, “A note on the summation of Chebyshev series,” Mathematics of Computation, vol. 9, no. 51, pp. 118–120, 1955.
17. IEEE Standards Committee, 754-2008 IEEE Standard for Floating-Point Arithmetic, IEEE Computer Society, 2008.
18. N. J. Higham, Accuracy and Stability of Numerical Algorithms, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, USA, 2002.
19. R. Barrio, H. Jiang, and S. Serrano, “A general condition number for polynomials,” SIAM Journal on Numerical Analysis, vol. 51, no. 2, pp. 1280–1294, 2013.
20. D. E. Knuth, The Art of Computer Programming, vol. 3: Sorting and Searching, pp. 426–458, 1999.
21. T. J. Dekker, “A floating-point technique for extending the available precision,” Numerische Mathematik, vol. 18, pp. 224–242, 1971/72.
22. Y. Hida, X. S. Li, and D. H. Bailey, “Algorithms for quad-double precision floating point arithmetic,” in Proceedings of the 15th IEEE Symposium on Computer Arithmetic, pp. 155–162, Vail, Colo, USA, June 2001.
23. P. Langlois and N. Louvet, “More instruction level parallelism explains the actual efficiency of compensated algorithms,” Tech. Rep., DALI Research Team (2007), http://hal.archives-ouvertes.fr/hal-00165020.
24. N. Louvet, Compensated algorithms in floating point arithmetic: accuracy, validation, performances [Ph.D. thesis], Université de Perpignan Via Domitia, 2007.
25. D. H. Bailey, High-precision software directory: QD library, double-double library, Berkeley National Laboratory, 2008, http://www.nersc.gov/dhbailey/mpdist/mpdist.html.