Abstract and Applied Analysis
Volume 2013 (2013), Article ID 154637, 9 pages
http://dx.doi.org/10.1155/2013/154637
Research Article

Uniform Bounds of Aliasing and Truncated Errors in Sampling Series of Functions from Anisotropic Besov Class

Peixin Ye and Yongjie Han
School of Mathematics and LPMC, Nankai University, Tianjin 300071, China

Received 1 May 2013; Accepted 11 June 2013

Academic Editor: Yiming Ying

Copyright © 2013 Peixin Ye and Yongjie Han. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Errors appear when the Shannon sampling series is applied to approximate a signal in real life. This is because a signal may not be bandlimited, the sampling series may have to be truncated, and the sampled values may not be exact and may have to be quantized. In this paper, we truncate the multidimensional Shannon sampling series via localized sampling and obtain the uniform bounds of aliasing and truncation errors for functions from anisotropic Besov class without any decay assumption. The bounds are optimal up to a logarithmic factor. Moreover, we derive the corresponding results for the case that the sampled values are given by a linear functional and its integer translations. Finally we give some applications.

1. Introduction

Since Shannon introduced the sampling series in the landmark paper [1], the Shannon sampling theorem has been a fundamental result in information theory, in particular in telecommunications and signal processing; see [2–7] and the references therein. The theorem states that a bandlimited signal can be exactly recovered from an infinite sequence of its samples if the bandlimit is no greater than half the sampling rate. The theorem also leads to a formula for reconstructing the original function from its samples. When the function is not bandlimited, the reconstruction exhibits imperfections known as aliasing. Moreover, in practice the available sampled values are not the exact functional values. So several types of errors, such as aliasing errors, truncation errors, jitter errors, and amplitude errors, appear when the Shannon sampling series is applied to approximate a signal in real life. These types of errors have been widely studied under the assumption that signals satisfy some decay conditions at infinity; see [8–13]. On the other hand, one can avoid assumptions on the decay rate of the initial signals by using localized sampling; see [14–19]. Recently, uniform bounds for truncated Shannon series based on local sampling have been derived for nonbandlimited functions from Sobolev classes without any decay assumption; see [18, 19]. In this paper we study errors in truncated multivariable Shannon sampling series via localized sampling by considering nonbandlimited functions from anisotropic Besov classes.

It is well known that the sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. The multivariable sampling theorem can be used in the reconstruction of some types of images such as gray-scale images.

We begin our discussion with the definitions of some function spaces. Let $L_p(\mathbb{R}^d)$, $1 \le p \le \infty$, be the space of all $p$th power Lebesgue integrable functions on $\mathbb{R}^d$ equipped with the usual norm, $\|f\|_p = (\int_{\mathbb{R}^d} |f(x)|^p\,dx)^{1/p}$ for $1 \le p < \infty$ and $\|f\|_\infty = \operatorname{ess\,sup}_{x \in \mathbb{R}^d} |f(x)|$. For any vector $\sigma = (\sigma_1, \ldots, \sigma_d)$ with positive coordinates, we say an entire function $f$ is of exponential type $\sigma$ provided that for every $\varepsilon > 0$ there exists a positive number $A_\varepsilon$ such that for all complex vectors $z = (z_1, \ldots, z_d)$ we have the bound $|f(z)| \le A_\varepsilon \exp(\sum_{j=1}^d (\sigma_j + \varepsilon)|z_j|)$. Denote by $E_\sigma$ the space of all entire functions of exponential type $\sigma$, and let $B_\sigma$ be the subset of $E_\sigma$ consisting of functions which are bounded on $\mathbb{R}^d$. Set $B_{\sigma,p} = B_\sigma \cap L_p(\mathbb{R}^d)$. Every vector $\sigma$ determines the rectangle $I_\sigma = \{x \in \mathbb{R}^d : |x_j| \le \sigma_j,\ j = 1, \ldots, d\}$. According to the Schwartz theorem [20], $B_{\sigma,p} = \{f \in L_p(\mathbb{R}^d) : \operatorname{supp} \hat f \subset I_\sigma\}$, where $\hat f$ is the Fourier transform of $f$ in the sense of distributions. For the case $p = 2$, this is the classical Paley-Wiener theorem.

Now we define the anisotropic Besov space. Suppose that $f \in L_p(\mathbb{R}^d)$ and $h \in \mathbb{R}$. For $l \in \mathbb{N}$, we define the $l$th partial difference of $f$ in the $j$th coordinate direction at the point $x$ with step $h$ by the formula
$$\Delta_{h,j}^{l} f(x) = \sum_{i=0}^{l} (-1)^{l-i} \binom{l}{i} f(x + i h e_j),$$
where $e_j$ is the $j$th coordinate unit vector. Let $r = (r_1, \ldots, r_d)$ with $r_j > 0$, $1 \le p, \theta \le \infty$, and $l_j = \lfloor r_j \rfloor + 1$ for $j = 1, \ldots, d$. We say $f \in B^{r}_{p,\theta}(\mathbb{R}^d)$ if $f \in L_p(\mathbb{R}^d)$ and the following seminorm is finite:
$$|f|_{B^{r}_{p,\theta}} = \sum_{j=1}^{d} \left( \int_{0}^{\infty} \left( t^{-r_j} \sup_{|h| \le t} \|\Delta_{h,j}^{l_j} f\|_p \right)^{\theta} \frac{dt}{t} \right)^{1/\theta},$$
with the usual modification for $\theta = \infty$. The linear space $B^{r}_{p,\theta}(\mathbb{R}^d)$ is a Banach space with the norm $\|f\|_{B^{r}_{p,\theta}} = \|f\|_p + |f|_{B^{r}_{p,\theta}}$ and is called an anisotropic Besov space. We introduce the notation $g(r)$, defined by $1/g(r) = \sum_{j=1}^{d} 1/r_j$, which plays an important role in our error estimates. In this paper, we assume $g(r) > 1/p$, which ensures that $B^{r}_{p,\theta}(\mathbb{R}^d)$ is embedded into the space of bounded continuous functions on $\mathbb{R}^d$ by a Sobolev-type embedding theorem, and therefore function values are well defined; see [20].

Now we explain why we choose Besov spaces as the hypothesis function spaces, that is, why we assume the signals come from Besov spaces. Firstly, in studying the aliasing errors for nonbandlimited functions, one often uses Lipschitz or Sobolev regularity in place of the strong bandlimitedness assumption. In this way one can derive reasonable convergence rates as the sampling spacing tends to zero. However, the aliasing and truncation errors by local sampling for these nonbandlimited functions have not been thoroughly studied. In particular, errors of localized sampling approximation for these spaces of functions with measured sampled values have never been considered. In this paper, using tools from the study of mean dimension widths of Besov classes and the related imbedding theorems, we can consider anisotropic Besov spaces, which include Lipschitz and Sobolev spaces as special cases. Thus our results immediately lead to results for these two types of hypothesis function spaces. Of course, the results on these two spaces are also novel. On the other hand, from the viewpoint of approximation theory it is worth studying the Besov class, since the best possible orders of approximation by bandlimited functions are known for Besov classes from the corresponding results of mean dimension width theory. Note that a convergent Shannon series is a bandlimited function. So it is natural to ask whether one can use the Shannon interpolation formula to realize the best approximation for these spaces. In what follows we give an affirmative answer to this question.

For later use we also recall the classical Sobolev space $W^{m}_{p}(\mathbb{R}^d)$, which consists of functions $f \in L_p(\mathbb{R}^d)$ such that for every multi-index vector $\alpha = (\alpha_1, \ldots, \alpha_d)$ with $|\alpha| \le m$, the distributional partial derivative $D^{\alpha} f$ belongs to $L_p(\mathbb{R}^d)$.

The remaining part of this paper is organized as follows. In Section 2, we consider errors in the truncated Shannon sampling series with exact functional values based on localized sampling. In Section 3 we first generalize part of the results of Section 2 to sampling series with measured sampled values and then give some applications.

In what follows, let $x$, $t$, and so forth denote vector variables living in $\mathbb{R}^d$, and write $x = (x_1, \ldots, x_d)$ and $t = (t_1, \ldots, t_d)$. We use the same symbol $C$ for possibly different positive constants; these constants are independent of the function being approximated and of the truncation parameters. Denote by $\lfloor u \rfloor$ the largest integer not exceeding $u$.

2. The Exact Functional Values Case

The famous Shannon sampling theorem states that every function $f \in B_{\sigma,p}$, $1 < p < \infty$, can be completely reconstructed from its sampled values taken at the instances $\{k\pi/\sigma : k \in \mathbb{Z}\}$ (cf. [1]). In this case the representation of $f$ is given by
$$f(t) = \sum_{k \in \mathbb{Z}} f\!\left(\frac{k\pi}{\sigma}\right) \operatorname{sinc}\!\left(\frac{\sigma t}{\pi} - k\right), \qquad (11)$$
where $\operatorname{sinc}(t) = \sin(\pi t)/(\pi t)$ for $t \neq 0$, and $\operatorname{sinc}(0) = 1$. Series (11) converges absolutely and uniformly on $\mathbb{R}$.
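As a minimal numerical illustration of formula (11), the following Python sketch evaluates the Shannon series for a bandlimited test signal, assuming $\sigma = \pi$ so that the samples are taken at the integers; the function names and the test signal are chosen only for illustration, and the infinite series is necessarily replaced by a finite sum.

import numpy as np

def sinc_reconstruct(samples, k_indices, t, sigma=np.pi):
    # Evaluate the Shannon series (11): sum_k f(k*pi/sigma) * sinc(sigma*t/pi - k),
    # using only the finitely many supplied samples. np.sinc(u) = sin(pi*u)/(pi*u).
    t = np.atleast_1d(np.asarray(t, dtype=float))
    phi = np.sinc(sigma * t[:, None] / np.pi - k_indices[None, :])
    return phi @ samples

# Bandlimited test signal of type pi, sampled at the integers (sigma = pi).
f = lambda u: np.sinc(u - 0.3) + 0.5 * np.sinc(u + 2.0)
k = np.arange(-200, 201)
t = np.linspace(-5.0, 5.0, 11)
print(np.max(np.abs(sinc_reconstruct(f(k), k, t) - f(t))))  # close to zero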

In [10], the authors established a multidimensional Shannon sampling theorem by extending (11) to functions of $d$ variables, with a type vector $\sigma = (\sigma_1, \ldots, \sigma_d)$ and multi-indices $k \in \mathbb{Z}^d$. They obtained the following theorem.

Theorem A. Let $f \in B_{\sigma,p}$, $1 < p < \infty$. Then for any $x \in \mathbb{R}^d$,
$$f(x) = \sum_{k \in \mathbb{Z}^d} f\!\left(\frac{k\pi}{\sigma}\right) \prod_{j=1}^{d} \operatorname{sinc}\!\left(\frac{\sigma_j x_j}{\pi} - k_j\right), \qquad (12)$$
where $k\pi/\sigma = (k_1\pi/\sigma_1, \ldots, k_d\pi/\sigma_d)$. The series on the right-hand side of (12) converges absolutely and uniformly on $\mathbb{R}^d$.

Shannon's expansion requires us to know the exact values of a signal at infinitely many points and to sum an infinite series. In practice, only finitely many samples are available, and hence the symmetric truncation error has been widely studied under the assumption that $f$ satisfies some decay condition. Among others, in [11] uniform truncation error bounds were determined for functions satisfying a decay condition. In [12] uniform bounds for the truncation error and the aliasing error were derived for functions belonging to the Besov class under the same decay condition as in [11]. Since their results are the motivation for our work, we restate them as follows. Throughout the paper we denote the unit ball of the space $B^{r}_{p,\theta}(\mathbb{R}^d)$ by $SB^{r}_{p,\theta}$.

Theorem B (see [12]). Let , , and satisfy the decay condition inequality where and are constants and . For define the associated by setting for . If for , then

Theorem C (see [12]). Let , , satisfy the decay condition (14). Then for any with , , one has

Now we truncate the series on the right-hand side of (12) based on localized sampling. That is, if we want to estimate $f(x)$, we only sum over the values of $f$ at sample points lying in a part of the grid near $x$. Thus for any $x$ we consider the finite sum (17) as an approximation to $f(x)$. In this way we can derive uniform bounds for the associated truncation error and aliasing error without any assumption on the decay of $f$.
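The following Python sketch illustrates the localized truncation idea in the multidimensional setting of (12): to approximate $f(x)$ it sums only over sample indices within a window of half-width N (an illustrative truncation parameter) around $x$ in each coordinate. The window shape and the test function are assumptions made for illustration and are not the precise truncation used in the theorems below.

import numpy as np
from itertools import product

def truncated_local_sum(f, x, sigma, N):
    # Localized truncation of the multidimensional series (12): sum only over
    # indices k with k_j within N of sigma_j * x_j / pi, i.e. over samples near x.
    d = len(x)
    centers = [int(np.floor(sigma[j] * x[j] / np.pi)) for j in range(d)]
    total = 0.0
    for k in product(*[range(c - N, c + N + 1) for c in centers]):
        node = np.array([k[j] * np.pi / sigma[j] for j in range(d)])
        weight = np.prod([np.sinc(sigma[j] * x[j] / np.pi - k[j]) for j in range(d)])
        total += f(node) * weight
    return total

# Two-dimensional separable bandlimited test function, sigma = (pi, pi).
f = lambda u: np.sinc(u[0] - 0.2) * np.sinc(u[1] + 0.7)
x, sigma = np.array([1.3, -0.4]), np.array([np.pi, np.pi])
for N in (5, 20, 80):
    print(N, abs(truncated_local_sum(f, x, sigma, N) - f(x)))  # error shrinks with N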

The main result of this section is the following uniform bound for the aliasing error.

Theorem 1. Let with , , and for . For , define in the same manner as in Theorem B; then one has

We first note that, due to the localized sampling, the function $f$ in Theorem 1 does not need to satisfy any decay assumption at infinity. Next we comment on the bound. It is known from the results on mean dimension Kolmogorov widths of Besov classes that the bound in Theorem 1 is optimal up to a logarithmic factor; see [21]. As a consequence of Theorem 1, we show that, using the truncated sampling series (17), we can still achieve this near-optimal bound.

Theorem 2. Let , with the same , , and as in Theorem 1. For , define as in Theorem B. Then for with , one has

To prove Theorem 1 we will choose an intermediate function which is a good approximation for both and . Now we describe how to choose this function. For more details, one can see [21, 22].

For any positive real number , we define the function where the constant is taken such that .

Suppose , . For any , set where .

When and , we let and observe from formulas (25) and (26) that has the alternative representation We define the value of a kernel at by and introduce the operator Consequently, is given by It is known from [20] that . We will exploit the following properties of in the proof of Theorem 1.

Lemma 3. Let , , and . For , define with for ; then one has

Proof. When , the inequality was proved in [12]. By the imbedding relationship where (see [20] for more details) we can derive the corresponding inequalities for the case from that of .

Lemma 4 (see [22]). If , , and , then where .
For , let be the Banach space of all infinite bounded -summable sequences such that the norm is finite.

Lemma 5 (see [10]). Let , . Then the series converges uniformly on to a function in .

We also need the following bound for sinc series: .

Lemma 6 (see [11]). Let , , and . Then for any ,
For , one has the following Marcinkiewicz-type inequality.

Lemma 7 (see [20, 23]). Let , . Then one has

The next lemma presents a Marcinkiewicz-type inequality for functions from Sobolev spaces.

Lemma 8 (see [10]). Let , , and . Then

Lemma 9 (see [20, 24]). Let , , ,, and . For , it follows that there exists a constant depending on , and but independent of , such that

Proof of Theorem 1. It is known from Lemma 9 (letting for ) that the fact with implies ; therefore by Lemma 8 And hence by Lemma 5, converges uniformly on .
Set and . So as mentioned above. By Theorem A we have . Thus Using the triangle inequality we obtain By Lemma 3, It is clear that Applying Hölder's inequality with exponent , we get where , and the second inequality follows from (44) and Lemma 6.
Next we estimate . By Hölder's inequality, where .
By Lemma 7 and Lemma 4, we obtain It follows from (40), (48), and the Minkowski inequality that Set . Note that , for all and . Thus, to give an upper estimate for on , we only need to bound it on . Note that A straightforward computation shows that for Therefore It follows from (46) and (52) that We choose and . It is easy to see that and . A simple computation gives Note that . Thus we have Collecting the above results, we obtain Combining (44) and (56), we prove the theorem.

Proof of Theorem 2. By the triangle inequality, we have By arguments similar to those used in the proof of Theorem 1, we obtain where we use in the last inequality.
Combining Theorem 1 and (58), we complete the proof of Theorem 2.

3. The Measured Sampled Values Case

In practice, the sampled values of a signal may not be exactly the functional values and may have to be quantized. Typical errors arising from these facts are jitter errors and amplitude errors. Following the key idea of quasi-interpolation, which uses integer translations of a basic function together with integer translations of a linear functional to approximate functions (see [8, 25] and the references therein), we may consider sampled values that are the results of a linear functional and its integer translations acting on the underlying signal [4, 25]. Such sampled values are called measured sampled values because they are closer to the true measurements taken from a signal. The sampling series with the measured sampled values is defined to be where is any sequence of continuous linear functionals , with being the set of all continuous functions defined on and tending to zero at infinity.

Similar to the definition of and , we have the finite sum and the truncation error

To establish our theorems we need the error modulus We write for if no confusion arises. The error modulus provides a quantitative measure of the quality of the signal's measured sampled values. When the functionals in are concrete, we may obtain some reasonable estimates for .
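As an illustration, and under the assumption (in the spirit of [8, 13]) that the error modulus measures the largest deviation between the measured sampled values and the exact functional values at the sample points, a simple numerical sketch is as follows; the perturbation model is purely illustrative.

import numpy as np

def error_modulus(measured, exact):
    # Largest deviation between the measured sampled values and the exact
    # functional values at the sample points (assumed form of the modulus).
    return float(np.max(np.abs(np.asarray(measured) - np.asarray(exact))))

# Toy usage: exact integer samples of f versus measured values contaminated
# by a small bounded perturbation standing in for a concrete functional.
f = lambda u: np.sinc(u - 0.3)
k = np.arange(-50, 51)
exact = f(k)
measured = exact + 1e-3 * np.cos(k)    # deviation bounded by 1e-3
print(error_modulus(measured, exact))  # approximately 1e-3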

Sampling series with measured sampled values have been studied in [8] for bandlimited functions, but without truncation. Truncation errors were considered for functions from the Lipschitz class satisfying a decay condition in [13]. Now we recall a typical result from [13].

Denote by the set of all continuous functions satisfying for all and . Set , , and .

Theorem D. Let satisfy the decay condition for some . Let be any sequence of continuous linear functionals. For each one has for the truncation error at provided , where is the smallest integer that is greater than or equal to a given , and .

In [9] the author obtained a uniform bound for the symmetric truncation error for functions from an isotropic Besov space under a similar decay condition. Now we provide an estimate for the truncation error without any assumption on the decay of $f$.

Theorem 10. Let , with the same , , and as in Theorem 1. For , define as in Theorem B. Let be any sequence of continuous linear functionals. For one has provided and for some constant .

Proof. By the triangle inequality, we have where Similar to (52), we have Using Hölder's inequality we obtain where . Now we select the same and as in the proof of Theorem 1. Similar to (55), we have . Thus A simple computation gives and . Notice that . Collecting these results, we obtain It follows from Theorem 1, (72), and (73) that which completes the proof.

Finally we apply Theorem 10 to some practical examples. The first is the case in which the measured sampled values are given by local averages of the function. For we define the modulus of continuity where may be any positive number.

Corollary 11. Let , with the same , , and as in Theorem 1. For , define as in Theorem 10. Suppose the sampled values of are obtained by the rule where are numbers satisfying for all and . If and , then

Proof. Let be the sequence of the linear functionals on and a continuous function on . Define Then . Clearly, . Therefore Note that the function is monotonically increasing for . The corollary follows from Theorem 10.
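A numerical sketch of the setting of Corollary 11, under the assumption that each measured sampled value is a normalized average of the function over a small neighborhood of the corresponding sample point: the one-dimensional case with $\sigma = \pi$ is shown, and the window half-width delta and the truncation parameter N are illustrative.

import numpy as np

def averaged_sample(f, t, delta, m=21):
    # Measured value at node t: normalized average of f over [t - delta, t + delta],
    # an illustrative realization of the averaging rule of Corollary 11.
    grid = np.linspace(t - delta, t + delta, m)
    return float(np.mean(f(grid)))

def truncated_sum_measured(f, x, N, delta):
    # One-dimensional localized truncated series with sigma = pi (integer nodes),
    # evaluated with the averaged (measured) sampled values.
    c = int(np.floor(x))
    k = np.arange(c - N, c + N + 1)
    values = np.array([averaged_sample(f, kk, delta) for kk in k])
    return float(values @ np.sinc(x - k))

f = lambda u: np.sinc(u - 0.3)
x = 0.8
for delta in (0.1, 0.01):
    print(delta, abs(truncated_sum_measured(f, x, N=50, delta=delta) - f(x)))

In this sketch, shrinking the averaging window reduces the measurement error, which is consistent with the appearance of the modulus of continuity in Corollary 11.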
The second example is an estimate for the combination of all four errors existing in sampling series: the amplitude error, the time-jitter error, the truncation errors, and the aliasing errors. We give some explanation for the amplitude error and the time-jitter error.
We assume that the amplitude error results from quantization, which means that the functional value of a function at a moment is replaced by the nearest discrete value or machine number . The quantization size is often known beforehand or can be chosen arbitrarily. We may assume that the local error at any moment is bounded by a constant ; that is, . The time-jitter error arises if the sampling instances are not met correctly but might differ from the exact ones by ; we assume for all and some constant . The combined error is defined to be
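The following sketch shows, again in an illustrative one-dimensional setting with $\sigma = \pi$, how the two error sources just described enter the truncated series: the sample locations are perturbed by at most a jitter bound and the resulting values are rounded to a quantization grid of step eps; all parameter names and values are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def jittered_quantized_samples(f, k, eps, jitter):
    # Each node k is read off at a slightly wrong instance (time-jitter error,
    # bounded by `jitter`) and the value is rounded to the grid eps * Z
    # (amplitude/quantization error, bounded by eps / 2).
    t = k + rng.uniform(-jitter, jitter, size=k.shape)
    return eps * np.round(f(t) / eps)

def truncated_local_sum_1d(values, k, x):
    # One-dimensional localized truncated series with sigma = pi (integer nodes).
    return float(values @ np.sinc(x - k))

f = lambda u: np.sinc(u - 0.3)
x, N = 0.8, 50
k = np.arange(int(np.floor(x)) - N, int(np.floor(x)) + N + 1)
values = jittered_quantized_samples(f, k, eps=1e-3, jitter=1e-3)
print(abs(truncated_local_sum_1d(values, k, x) - f(x)))  # small combined error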

Corollary 12. Let , , , , and . Then provided , , and , where are positive constants, and .

Proof. We define where is the Dirac distribution. Then is a sequence of linear functionals on . It is clear that . Then Thus . By Theorem 10 we get the desired result.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 10971251, 11101220, and 11271199) and the Program for New Century Excellent Talents at University of China (NCET-10-0513).

References

  1. C. E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal, vol. 27, pp. 379–423, 1948.
  2. P. L. Butzer, W. Engels, and U. Scheben, “Magnitude of the truncation error in sampling expansions of bandlimited signals,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 30, no. 6, pp. 906–912, 1982.
  3. A. I. Zayed, Advances in Shannon's Sampling Theory, CRC Press, Boca Raton, Fla, USA, 1993.
  4. S. D. Casey and D. F. Walnut, “Systems of convolution equations, deconvolution, Shannon sampling, and the wavelet and Gabor transforms,” SIAM Review, vol. 36, no. 4, pp. 537–577, 1994.
  5. P. L. Butzer, G. Schmeisser, and R. L. Stens, “An introduction to sampling analysis,” in Nonuniform Sampling, Theory and Practice, F. Marvasti, Ed., pp. 17–121, Kluwer Academic, New York, NY, USA, 2001.
  6. S. Smale and D.-X. Zhou, “Shannon sampling and function reconstruction from point values,” Bulletin of the American Mathematical Society, vol. 41, no. 3, pp. 279–305, 2004.
  7. S. Smale and D.-X. Zhou, “Shannon sampling. II. Connections to learning theory,” Applied and Computational Harmonic Analysis, vol. 19, no. 3, pp. 285–302, 2005.
  8. P. L. Butzer and J. Lei, “Approximation of signals using measured sampled values and error analysis,” Communications in Applied Analysis, vol. 4, no. 2, pp. 245–255, 2000.
  9. P. X. Ye, “Error analysis for Shannon sampling series approximation with measured sampled values,” Research Journal of Applied Sciences, Engineering and Technology, vol. 5, no. 3, pp. 858–864, 2013.
  10. J. J. Wang and G. S. Fang, “A multidimensional sampling theorem and an estimate of the aliasing error,” Acta Mathematicae Applicatae Sinica, vol. 19, no. 4, pp. 481–488, 1996.
  11. X. M. Li, “Uniform bounds for sampling expansions,” Journal of Approximation Theory, vol. 93, no. 1, pp. 100–113, 1998.
  12. L. Jingfan and F. Gensun, “On uniform truncation error bounds and aliasing error for multidimensional sampling expansion,” Sampling Theory in Signal and Image Processing, vol. 2, no. 2, pp. 103–115, 2003.
  13. P. L. Butzer and J. Lei, “Errors in truncated sampling series with measured sampled values for not-necessarily bandlimited functions,” Functiones et Approximatio, vol. 26, pp. 25–39, 1998.
  14. H. D. Helms and J. B. Thomas, “Truncation error of sampling-theorem expansions,” Proceedings of the IRE, vol. 50, no. 2, pp. 179–184, 1962.
  15. D. Jagerman, “Bounds for truncation error of the sampling expansion,” SIAM Journal on Applied Mathematics, vol. 14, no. 4, pp. 714–723, 1966.
  16. C. A. Micchelli, Y. Xu, and H. Zhang, “Optimal learning of bandlimited functions from localized sampling,” Journal of Complexity, vol. 25, no. 2, pp. 85–114, 2009.
  17. A. Ya. Olenko and T. K. Pogány, “Universal truncation error upper bounds in sampling restoration,” Georgian Mathematical Journal, vol. 17, no. 4, pp. 765–786, 2010.
  18. P.-X. Ye and Z.-H. Song, “Truncation and aliasing errors for Whittaker-Kotelnikov-Shannon sampling expansion,” Applied Mathematics B, vol. 27, no. 4, pp. 412–418, 2012.
  19. P. X. Ye, B. H. Sheng, and X. H. Yuan, “Optimal order of truncation and aliasing errors for multi-dimensional Whittaker-Shannon sampling expansion,” International Journal of Wireless and Mobile Computing, vol. 5, no. 4, pp. 327–333, 2012.
  20. S. M. Nikolskii, Approximation of Functions of Several Variables and Imbedding Theorems, Springer, New York, NY, USA, 1975.
  21. Y. Jiang and Y. Liu, “Average widths and optimal recovery of multivariate Besov classes in Lp(Rd),” Journal of Approximation Theory, vol. 102, no. 1, pp. 155–170, 2000.
  22. C. A. Micchelli, Y. S. Xu, and P. X. Ye, “Cucker-Smale learning theory in Besov spaces,” in Advances in Learning Theory: Methods, Models and Applications, J. Suykens, G. Horvath, S. Basu, et al., Eds., pp. 47–68, IOS Press, Amsterdam, The Netherlands, 2003.
  23. R. P. Boas, Jr., Entire Functions, Academic Press, New York, NY, USA, 1954.
  24. G. Fang, F. J. Hickernell, and H. Li, “Approximation on anisotropic Besov classes with mixed norms by standard information,” Journal of Complexity, vol. 21, no. 3, pp. 294–313, 2005.
  25. H. G. Burchard and J. Lei, “Coordinate order of approximation by functional-based approximation operators,” Journal of Approximation Theory, vol. 82, no. 2, pp. 240–256, 1995.