Abstract

Errors appear when the Shannon sampling series is used to approximate a signal in practice: the signal may not be bandlimited, the sampling series has to be truncated, and the sampled values may be inexact or quantized. In this paper, we truncate the multidimensional Shannon sampling series via localized sampling and obtain uniform bounds for the aliasing and truncation errors for functions from an anisotropic Besov class, without any decay assumption. The bounds are optimal up to a logarithmic factor. Moreover, we derive the corresponding results for the case in which the sampled values are given by a linear functional and its integer translations. Finally, we give some applications.

1. Introduction

Since Shannon introduced the sampling series in the landmark paper [1], the Shannon sampling theorem has been a fundamental result in information theory, in particular in telecommunications and signal processing; see [2–7] and the references therein. The theorem states that a bandlimited signal can be exactly recovered from an infinite sequence of its samples if the bandlimit is no greater than half the sampling rate. The theorem also leads to a formula for reconstructing the original function from its samples. When the function is not bandlimited, the reconstruction exhibits imperfections known as aliasing. Moreover, in practice the available sampled values are not the exact functional values. Hence several types of errors, such as aliasing errors, truncation errors, jitter errors, and amplitude errors, appear when the Shannon sampling series is applied to approximate a signal in real life. These types of errors have been widely studied under the assumption that the signals satisfy some decay conditions at infinity; see [8–13]. On the other hand, one can avoid assumptions on the decay rate of the initial signals by using localized sampling; see [14–19]. Recently, uniform bounds for the truncated Shannon series based on localized sampling were derived for nonbandlimited functions from Sobolev classes without any decay assumption; see [18, 19]. In this paper we study errors in the truncated multivariable Shannon sampling series via localized sampling by considering nonbandlimited functions from anisotropic Besov classes.

It is well known that the sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. The multivariable sampling theorem can be used in the reconstruction of some types of images such as gray-scale images.

We begin our discussion with the definitions of some function spaces. Let , , be the space of all th power Lebesgue integrable functions on equipped with the usual norm for and Set . For any vector with positive coordinates we say an entire function is of exponential type provided that for every there exists a positive number such that for all complex vectors we have the bound Denote by the space of all entire functions of exponential type . Let be the subset of consisting of those functions that are bounded on . Set Every vector determines the rectangle According to the Schwartz theorem [20], where is the Fourier transform of in the sense of distributions. For the case , this is the classical Paley-Wiener theorem.
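For orientation, we record the familiar one-dimensional statement behind the last sentence; the symbol $B^{2}_{\sigma}$ is chosen here only for illustration and need not match the notation used elsewhere in the paper:
\[
B^{2}_{\sigma}
=\bigl\{f\in L^{2}(\mathbb{R}):\ f \text{ extends to an entire function of exponential type } \sigma\bigr\}
=\bigl\{f\in L^{2}(\mathbb{R}):\ \operatorname{supp}\widehat{f}\subseteq[-\sigma,\sigma]\bigr\}.
\]
The Schwartz theorem quoted above replaces $[-\sigma,\sigma]$ by the rectangle $\prod_{j=1}^{d}[-\sigma_{j},\sigma_{j}]$ and $L^{2}$ by $L^{p}$, with the Fourier transform taken in the sense of distributions.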

Now we define the anisotropic Besov space. Suppose that and . For , we define the th partial difference of in the th coordinate direction at the point with step by the formula Let , and for , , and . We say if , and the following seminorm is finite: The linear space is a Banach space with the norm and is called an anisotropic Besov space. We introduce the notation which plays an important role in our error estimates. In this paper, we assume , which ensures that is embedded into by a Sobolev-type embedding theorem, and therefore function values are well defined; see [20].
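For the reader's convenience, we also record one common way of writing the quantities entering this definition; the normalization below is ours and may differ from the paper's displayed formulas. With the $k$-th difference in the $j$-th coordinate direction and the corresponding directional modulus of smoothness given by
\[
\Delta^{k}_{h,j}f(x)=\sum_{i=0}^{k}(-1)^{k-i}\binom{k}{i}f(x+ihe_{j}),\qquad
\omega_{k,j}(f,t)_{p}=\sup_{|h|\le t}\bigl\|\Delta^{k}_{h,j}f\bigr\|_{p},
\]
an anisotropic Besov seminorm with smoothness vector $r=(r_{1},\dots,r_{d})$ typically takes the form
\[
|f|_{B^{r}_{p,\theta}}=\sum_{j=1}^{d}\Bigl(\int_{0}^{1}\bigl(t^{-r_{j}}\,\omega_{k_{j},j}(f,t)_{p}\bigr)^{\theta}\,\frac{dt}{t}\Bigr)^{1/\theta},
\qquad k_{j}>r_{j}.
\]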

Now we explain why we choose Besov spaces as the hypothesis function spaces, that is, why we assume the signals come from Besov spaces. First, in studying the aliasing errors for nonbandlimited functions, one often uses Lipschitz or Sobolev regularity to replace the strong bandlimitedness assumption. In this way one can derive reasonable convergence rates as the sampling step tends to zero. However, the aliasing and truncation errors under localized sampling have not been thoroughly studied for these nonbandlimited functions. In particular, localized sampling approximation with measured sampled values has never been considered for these spaces of functions. In this paper, using the tools from the study of mean -dimension width for Besov classes and the related embedding theorems, we can consider anisotropic Besov spaces, which include Lipschitz and Sobolev spaces as special cases. Thus our results immediately yield corresponding results for these two types of hypothesis function spaces, and the results on these two spaces are also new. On the other hand, from the viewpoint of approximation theory it is worth studying the Besov class, since the best possible orders of approximation by bandlimited functions are known for Besov classes from the corresponding results of mean -dimension width theory. Note that a convergent Shannon series is a bandlimited function, so it is natural to ask whether one can use the Shannon interpolation formula to realize the best approximation for these spaces. In what follows we give an affirmative answer to this question.

For later use, we also recall the classical Sobolev space which consists of functions such that for every multi-index , with , the distributional partial derivative belongs to .

The remaining part of this paper is organized as follows. In Section 2, we consider errors in the truncated Shannon sampling series with exact functional values based on localized sampling. In Section 3 we first generalize part of the results of Section 2 to sampling series with measured sampled values and then give some applications.

In what follows, let , , and so forth denote vector variables living in , and write and . We use the same symbol for possibly different positive constants. These constants are independent of and . Denote by the largest integer not exceeding .

2. The Exact Functional Values Case

The famous Shannon sampling theorem states that every function can be completely reconstructed from its sampled values taken at the instances (cf. [1]). In this case the representation of is given by where , and . Series (11) converges absolutely and uniformly on .
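In the classical one-dimensional case the expansion referred to in (11) has the familiar form (stated here with the normalized sinc function; the paper's normalization may differ):
\[
f(t)=\sum_{k\in\mathbb{Z}}f\Bigl(\frac{k\pi}{\sigma}\Bigr)\,
\operatorname{sinc}\Bigl(\frac{\sigma t}{\pi}-k\Bigr),\qquad
\operatorname{sinc}(u)=\frac{\sin(\pi u)}{\pi u},\ \operatorname{sinc}(0)=1,
\]
valid for every $f\in L^{2}(\mathbb{R})$ whose Fourier transform is supported in $[-\sigma,\sigma]$, with the series converging absolutely and uniformly on $\mathbb{R}$.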

In [10], the authors established a multidimensional Shannon sampling theorem by extending (11) to the case , , and . They obtained the following theorem.

Theorem A. Let ,. Then for any where . The series on the right-hand side of (12) converges absolutely and uniformly on .

Shannon's expansion requires us to know the exact values of a signal at infinitely many points and to sum an infinite series. In practice, only finitely many samples are available, and hence the symmetric truncation error has been widely studied under the assumption that satisfies some decay condition. Among others, in [11] uniform truncation error bounds are determined for with a decay condition. In [12] uniform bounds for the truncation error and the aliasing error are derived for functions belonging to the Besov class with the same decay condition as in [11]. Since their results motivate our work, we restate them as follows. Throughout the paper we denote the unit ball of the space by .

Theorem B (see [12]). Let , , and satisfy the decay condition inequality where and are constants and . For define the associated by setting for . If for , then

Theorem C (see [12]). Let , , satisfy the decay condition (14). Then for any with , , one has

Now we truncate the series on the right-hand side of (12) based on localized sampling. That is, to estimate , we sum only over the values of on a part of near . Thus for any we consider the finite sum as an approximation to . In this way we can derive uniform bounds for the associated truncation error and aliasing error without any assumption on the decay of .
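To make the localized truncation concrete, the following is a minimal numerical sketch in one variable; the function names, the window size N, and the Gaussian test signal are illustrative choices of ours, not objects defined in the paper.

import numpy as np

def localized_shannon(sample, sigma, x, N):
    # Localized truncation of the Shannon series in dimension one:
    # only the 2N+1 samples whose indices lie nearest to x*sigma/pi are summed.
    k0 = int(np.floor(x * sigma / np.pi))      # sample index closest to x
    ks = np.arange(k0 - N, k0 + N + 1)         # local index window around x
    vals = np.array([sample(k) for k in ks])   # sampled values f(k*pi/sigma)
    # np.sinc uses the normalized convention sinc(u) = sin(pi*u)/(pi*u)
    return np.sum(vals * np.sinc(sigma * x / np.pi - ks))

# Usage sketch: approximate a smooth, non-bandlimited test signal at one point.
sigma = 8.0
f = lambda t: np.exp(-t ** 2)
value = localized_shannon(lambda k: f(k * np.pi / sigma), sigma, x=0.3, N=20)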

The main result of this section is the following uniform bound for the aliasing error

Theorem 1. Let with , , and for . For , define in the same manner as in Theorem B; then one has

We first note that, owing to the localized sampling, the function in Theorem 1 need not satisfy any decay assumption at infinity. Next we comment on the bound . It is known from the results on mean -dimension Kolmogorov widths for the Besov class that Thus the bound in Theorem 1 is optimal up to the logarithmic factor ; see [21]. As a consequence of Theorem 1, we show that, using the truncated sampling series (17), we can still achieve this near-optimal bound.

Theorem 2. Let , with the same , , and as in Theorem 1. For , define as in Theorem B. Then for with , one has

To prove Theorem 1 we choose an intermediate function that is a good approximation to both and . We now describe how to choose this function; for more details, see [21, 22].

For any positive real number , we define the function where the constant is taken such that .

Suppose , . For any , set where .

When and , we let and observe from formulas (25) and (26) that has the alternative representation We define the value of a kernel at by and introduce the operator Consequently, is given by It is known from [20] that . We will exploit the following properties of in the proof of Theorem 1.

Lemma 3. Let , , and . For , define with for ; then one has

Proof. When , the inequality was proved in [12]. By the embedding relationship where (see [20] for more details), we can derive the corresponding inequalities for the case from those for .

Lemma 4 (see [22]). If , , and , then where .
For , let be the Banach space of all infinite bounded -summable sequences such that the norm is finite.

Lemma 5 (see [10]). Let , . Then the series converges uniformly on to a function in .

We also need the following bound for the sinc series: .

Lemma 6 (see [11]). Let , , and . Then for any ,
For , one has the following Marcinkiewicz-type inequality.

Lemma 7 (see [20, 23]). Let , . Then one has

The next lemma presents a Marcinkiewicz-type inequality for functions from Sobolev spaces.

Lemma 8 (see [10]). Let , , and . Then

Lemma 9 (see [20, 24]). Let , , ,, and . For , it follows that there exists a constant depending on , and but independent of , such that

Proof of Theorem 1. It is known from Lemma 9 (letting for ) that the fact with implies ; therefore by Lemma 8 And hence by Lemma 5, converges uniformly on .
Set and . So as mentioned above. By Theorem A we have . Thus Using the triangle inequality we obtain By Lemma 3, It is clear that Applying Hölder's inequality with exponent , we get where , and the second inequality follows from (44) and Lemma 6.
Next we estimate . By Hölder's inequality, where .
By Lemma 7 and Lemma 4, we obtain It follows from (40), (48), and the Minkowski inequality that Set . Note that , for all and . Thus, to give an upper estimate for on , we only need to bound it on . Note that A straightforward computation shows that for Therefore It follows from (46) and (52) that We choose and . It is easy to see that and . A simple computation gives Note that . Thus we have Collecting the above results we obtain Combining (44) and (56) proves the theorem.
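For reference, the discrete Hölder inequality invoked repeatedly in the preceding proof has the form
\[
\sum_{k}|a_{k}b_{k}|\le\Bigl(\sum_{k}|a_{k}|^{p}\Bigr)^{1/p}\Bigl(\sum_{k}|b_{k}|^{q}\Bigr)^{1/q},
\qquad \frac1p+\frac1q=1,\quad 1<p<\infty,
\]
applied with one factor carrying the sampled values and the other carrying the sinc weights, so that the sinc factor can be controlled by Lemma 6.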

Proof of Theorem 2. By the triangle inequality, we have By arguments similar to those used in the proof of Theorem 1, we obtain where we use in the last inequality.
Combining Theorem 1 and (58), we complete the proof of Theorem 2.

3. The Measured Sampled Values Case

In practice, the sampled values of a signal may not be exactly the functional values and may have to be quantized. Typical errors arising from these facts are jitter errors and amplitude errors. Following the key idea of quasi-interpolation, which uses integer translations of a basic function and integer translations of a linear functional to approximate functions (see [8, 25] and the references therein), we may consider sampled values that are the results of a linear functional and its integer translations acting on the underlying signal [4, 25]. Such sampled values are called measured sampled values because they are closer to the true measurements taken from a signal. The sampling series with measured sampled values is defined to be where is any sequence of continuous linear functionals , with being the set of all continuous functions defined on and tending to zero at infinity.
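As a concrete illustration of measured sampled values, the sketch below takes each value to be a local average of the signal around the sampling instant, one simple choice of a sequence of continuous linear functionals and its translations; the helper names and the averaging window are our assumptions, not constructions from the paper.

import numpy as np

def averaged_samples(f, sigma, ks, delta):
    # Measured sampled values: local averages of f over [t_k - delta, t_k + delta],
    # a simple instance of a continuous linear functional and its integer translations.
    vals = []
    for k in ks:
        t = k * np.pi / sigma
        u = np.linspace(t - delta, t + delta, 201)
        vals.append(np.trapz(f(u), u) / (2 * delta))
    return np.array(vals)

def measured_series(f, sigma, x, N, delta):
    # Truncated, localized sampling series built from the measured (averaged) values.
    k0 = int(np.floor(x * sigma / np.pi))
    ks = np.arange(k0 - N, k0 + N + 1)
    return np.sum(averaged_samples(f, sigma, ks, delta) * np.sinc(sigma * x / np.pi - ks))

# Usage sketch with a smooth test signal.
approx = measured_series(lambda t: np.exp(-t ** 2), 8.0, x=0.3, N=20, delta=0.05)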

Similar to the definition of and , we have the finite sum and the truncation error

To establish our theorems we need the error modulus We write for if no confusion arises. The error modulus quantifies the quality of the signal's measured sampled values. When the functionals in are given concretely, we may obtain reasonable estimates for .

Sampling series with measured sampled values were studied in [8] for bandlimited functions, but without truncation. Truncation errors were considered in [13] for functions from a Lipschitz class satisfying a decay condition. We now recall a typical result from [13].

Denote by the set of all continuous functions satisfying for all and . Set , , and .

Theorem D. Let satisfy the decay condition for some . Let be any sequence of continuous linear functionals. For each one has for the truncation error at provided , where is the smallest integer greater than or equal to a given , and .

In [9] the author obtains a uniform bound for the symmetric truncation error for functions from an isotropic Besov space under a similar decay condition. We now provide an estimate for the truncation error without any assumption on the decay of .

Theorem 10. Let , with the same , , and as in Theorem 1. For , define as in Theorem B. Let be any sequence of continuous linear functionals. For one has provided and for some constant .

Proof. By the triangle inequality, we have where Similar to (52), we have Using Hölder's inequality we obtain where . Now we select the same and as in the proof of Theorem 1. Similar to (55), we have . Thus A simple computation gives and . Notice that . Collecting these results, we obtain It follows from Theorem 1, (72), and (73) that which completes the proof.

Finally, we apply Theorem 10 to some practical examples. In the first one, the measured sampled values are given by averages of the function. For we define the modulus of continuity where may be any positive number.
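In one common normalization (ours, which may differ from the paper's displayed formula), the uniform modulus of continuity reads
\[
\omega(g,\delta)=\sup_{\|h\|\le\delta}\ \sup_{t}\bigl|g(t+h)-g(t)\bigr|,\qquad \delta>0,
\]
which is nondecreasing in $\delta$ and tends to zero as $\delta\to 0^{+}$ whenever $g$ is uniformly continuous.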

Corollary 11. Let , with the same , , and as in Theorem 1. For , define as in Theorem 10. Suppose the sampled values of are obtained by the rule where are numbers satisfying for all and . If and , then

Proof. Let be the sequence of linear functionals on and a continuous function on . Define Then . Clearly, . Therefore Note that the function is monotonically increasing for . The corollary follows from Theorem 10.
The second example is an estimate for the combination of all four errors arising in sampling series: the amplitude error, the time-jitter error, the truncation error, and the aliasing error. We first give some explanation of the amplitude error and the time-jitter error.
We assume that the amplitude error results from quantization, which means that the functional value of a function at the moment is replaced by the nearest discrete value or machine number . The quantization size is often known beforehand or can be chosen arbitrarily. We may assume that the local error at any moment is bounded by a constant ; that is, . The time-jitter error arises if the sampling instances are not met exactly but differ from the exact ones by ; we assume for all and some constant . The combined error is defined to be
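A minimal sketch of how quantized and time-jittered samples can be modeled numerically; the uniform-jitter model, the grid-rounding quantizer, and all names below are illustrative assumptions of ours, not the paper's definitions.

import numpy as np

def quantize(values, eps):
    # Replace each value by the nearest machine number on a grid of step eps,
    # so the local amplitude error is at most eps / 2.
    return eps * np.round(values / eps)

def jittered_quantized_samples(f, sigma, ks, eps, delta, rng):
    # Samples read off at perturbed instants t_k + jitter with |jitter| <= delta,
    # then quantized: a simple model of the combined amplitude/time-jitter errors.
    jitter = rng.uniform(-delta, delta, size=len(ks))
    return quantize(f(ks * np.pi / sigma + jitter), eps)

# Usage sketch on a smooth test signal.
rng = np.random.default_rng(0)
f = lambda t: np.cos(t) * np.exp(-0.1 * t ** 2)
samples = jittered_quantized_samples(f, 8.0, np.arange(-20, 21), eps=1e-3, delta=1e-2, rng=rng)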

Corollary 12. Let , , , , and . Then provided , , and , where are positive constants, and .

Proof. We define where is the Dirac distribution. Then is a sequence of linear functionals on . It is clear that . Then Thus . By Theorem 10 we get the desired result.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 10971251, 11101220, and 11271199) and the Program for New Century Excellent Talents in University of China (NCET-10-0513).