Abstract

This paper considers a multichannel deconvolution model with Gaussian white noise. The goal is to estimate the $m$-th derivative of an unknown function in the model. For the super-smooth case, we construct an adaptive linear wavelet estimator by the wavelet projection method. For the regular-smooth case, we provide an adaptive nonlinear wavelet estimator by the hard-thresholding method. In order to measure the global performance of our estimators, we show upper bounds on the convergence rates under $L^p$-risk ($1 \le p < \infty$).

1. Introduction

We consider a multichannel deconvolution problem with Gaussian white noise. The signal $f \in L^2([0,1])$ is observed indirectly through $M$ noisy channels,
$$dY_l(x) = (f * g_l)(x)\,dx + \varepsilon\,\sigma_l\,dB_l(x), \quad x \in [0,1],\ l = 1, \dots, M, \qquad (1)$$
where $\sigma_1, \dots, \sigma_M$ are known positive constants, $\varepsilon \in (0,1)$ is the noise level, and $M$ is the number of channels. Suppose that $f$ and $g_l$ are 1-periodic on $[0,1]$, the blurring functions $g_l$ are known, and
$$(f * g_l)(x) = \int_0^1 f(x - t)\,g_l(t)\,dt.$$
Additive noises are present, with $B_l$ ($l = 1, \dots, M$) denoting independent standard Brownian motions. The goal is to estimate the $m$-th derivative $f^{(m)}$ of the signal $f$ from the data $Y_1, \dots, Y_M$.
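Before proceeding, a minimal numerical sketch of a discretized analogue of model (1) may help fix ideas; the grid size, the kernels, and the noise level below are illustrative choices of ours, not quantities from the paper.

```python
import numpy as np

# Minimal sketch of a discretized analogue of model (1); N, the kernels g_l,
# sigma_l, and eps are illustrative choices only.
rng = np.random.default_rng(0)
N, M, eps = 1024, 3, 0.01                      # grid points, channels, noise level
x = np.arange(N) / N
f = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)   # a 1-periodic signal

sigma = np.array([1.0, 0.8, 1.2])              # known positive constants sigma_l
# 1-periodic blurring kernels, normalized so each sums to one on the grid
g = np.stack([np.exp(-0.5 * ((x - 0.5) / w) ** 2) for w in (0.01, 0.02, 0.04)])
g = g / g.sum(axis=1, keepdims=True)

Y = np.empty((M, N))
for l in range(M):
    f_conv_g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g[l])))  # circular convolution
    # eps * sigma_l * sqrt(N) * z_i is the usual grid discretization of eps * sigma_l * dB_l
    Y[l] = f_conv_g + eps * sigma[l] * np.sqrt(N) * rng.standard_normal(N)
```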

The notation $A \lesssim B$ indicates $A \le CB$ with a positive constant $C$ which is independent of $\varepsilon$ and $k$; $A \gtrsim B$ means $B \lesssim A$; $A \sim B$ stands for $A \lesssim B$ and $B \lesssim A$.

For $h \in L^2([0,1])$, its Fourier coefficients are defined by
$$\tilde{h}(k) = \int_0^1 h(x)\,e^{-2\pi i k x}\,dx, \quad k \in \mathbb{Z}.$$

In the Fourier domain, the smooth-type convolution functions are of the form
$$|\tilde{g}_l(k)| \sim (1 + |k|)^{-\alpha_l}\,e^{-\gamma_l |k|^{\beta}}, \qquad (4)$$
where $\alpha_l \ge 0$, $\gamma_l \ge 0$ ($l = 1, \dots, M$), and $\beta > 0$. The so-called super-smooth convolution or exponential decay occurs when $\gamma_l > 0$ ($l = 1, \dots, M$), and the regular-smooth convolution or polynomial decay occurs when $\gamma_l = 0$ ($l = 1, \dots, M$).
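For illustration, consider two standard examples (ours, not drawn from the references). The periodized Gaussian kernel with width $\nu$ has Fourier coefficients
$$\tilde{g}(k) = e^{-2\pi^2 \nu^2 k^2},$$
a super-smooth convolution with $\beta = 2$ and $\alpha = 0$, while the periodization of the one-sided exponential kernel $e^{-x}\mathbf{1}_{\{x \ge 0\}}$ has
$$\tilde{g}(k) = \frac{1}{1 + 2\pi i k}, \qquad |\tilde{g}(k)| \sim (1 + |k|)^{-1},$$
a regular-smooth convolution with $\alpha = 1$ and $\gamma = 0$.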

There are many works on the deconvolution Gaussian white noise model, see [1–5]. In the single-channel deconvolution case ($M = 1$), [6] constructs a kernel estimator and obtains the optimal convergence rates over Sobolev spaces. Furthermore, [7, 8] provide wavelet estimation under a general noise assumption. In the multichannel deconvolution case ($M \ge 2$), [9] considers the model in a periodic setting, proposes adaptive wavelet-thresholding estimators by a weighting method, and obtains upper bounds on the $L^p$-risk ($1 \le p < \infty$). Moreover, [10] generalizes the results in [9] by relaxing the noise condition. As an example of a multichannel deconvolution model, multiple light detection and ranging (LIDAR) devices are used to receive the return signals in order to determine the distance and location of the reflecting material (see [10]). The effective accuracy loss caused by radar signal propagation can be corrected by deconvolution.

We focus on a more general problem: estimating the $m$-th derivative $f^{(m)}$ of $f$. This is of interest for detecting possible bumps, concavity, or convexity properties of $f$. The problem of estimating derivatives arises in many scientific settings (see [11–15]), such as signal processing, chemistry, and geophysics. For example, higher-order derivatives can detect important features in restoring distorted and noise-contaminated digital images; these features can be used to reconstruct and restore digital images, see [13]. Derivatives can be used for raw spectral detection in the field of chemistry, see [11]. Evaluation of higher derivatives (gradients) of potential fields plays an important role in geophysical interpretation, see [12].

Derivative estimation has been studied in many statistical models, see [16–21]. For example, [17] develops an adaptive estimator of the $m$-th derivative of an unknown function in the standard deconvolution model ($M = 1$) and proves that it achieves near-optimal rates of convergence under the mean integrated squared error (MISE) over a wide range of smoothness classes. [18] studies the heteroscedastic multichannel deconvolution model, in which the number of channels equals the number of samples and tends to infinity. Under the assumption that the Fourier decay of the blurring functions is at the same polynomial rate in the regular-smooth case, they construct an adaptive wavelet block-thresholding estimator of the $m$-th derivative function and measure the estimation accuracy by MISE over Besov spaces. Recently, [20] generalizes the results of [18] from the $L^2$-risk to the $L^p$-risk ($1 \le p < \infty$) case.

Wavelet methods have been a popular choice in deconvolution problems due to their localization in both the time and frequency domains, see [22, 23]. Motivated by [9, 18, 20], we construct wavelet estimators of $f^{(m)}$ from model (1) and examine the estimation accuracy under $L^p$-risk ($1 \le p < \infty$).

The paper is organized in the following way. Section 2 introduces the periodized Meyer wavelets and Besov spaces. We construct the wavelet estimators of the $m$-th derivative in Section 3. The convergence rates of the linear and nonlinear estimators are given in Sections 4 and 5, respectively. For both the super-smooth and regular-smooth cases, we obtain adaptive wavelet estimators and measure their performance under $L^p$-risk ($1 \le p < \infty$). The last part provides some conclusions.

2. Wavelets and Besov Spaces

In order to establish the convergence rates, we use the periodized Meyer wavelet basis on the unit interval. Let $\phi$ and $\psi$ be the Meyer scaling and wavelet functions (see [24]), respectively. As usual, for any $j \ge 0$ and $k \in \mathbb{Z}$,
$$\phi_{j,k}(x) = 2^{j/2}\,\phi(2^j x - k), \qquad \psi_{j,k}(x) = 2^{j/2}\,\psi(2^j x - k).$$

The periodic scaling and wavelet functions are defined by
$$\phi^{per}_{j,k}(x) = \sum_{u \in \mathbb{Z}} \phi_{j,k}(x + u), \qquad \psi^{per}_{j,k}(x) = \sum_{u \in \mathbb{Z}} \psi_{j,k}(x + u),$$
with $j \ge 0$ and $k = 0, 1, \dots, 2^j - 1$. For an integer $j_0 \ge 0$, the collection
$$\{\phi^{per}_{j_0,k},\ 0 \le k \le 2^{j_0} - 1;\ \psi^{per}_{j,k},\ j \ge j_0,\ 0 \le k \le 2^j - 1\}$$

constitutes an orthonormal basis of $L^2([0,1])$ (see [25]).
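As a numerical illustration of this multiresolution structure (not part of the paper's construction), one can decompose a periodic signal with PyWavelets; its 'dmey' filter is only a finite-impulse-response approximation of the Meyer wavelet, and mode='periodization' realizes the periodized transform.

```python
import numpy as np
import pywt

# Illustration only: 'dmey' approximates the Meyer wavelet by a finite filter,
# and mode='periodization' gives the periodized transform on [0,1].
N = 1024
x = np.arange(N) / N
h = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)   # 1-periodic signal

coeffs = pywt.wavedec(h, 'dmey', mode='periodization', level=4)
# coeffs[0]: scaling coefficients at the coarsest level (length N / 2**4);
# coeffs[1:]: wavelet (detail) coefficients at successively finer levels.
h_rec = pywt.waverec(coeffs, 'dmey', mode='periodization')
print(np.max(np.abs(h - h_rec)))   # tiny: the expansion is numerically orthonormal
```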

In the periodic setting, Besov spaces can be defined by the behavior of the wavelet coefficients (see [26]). Denote the inner product $\langle f, g \rangle := \int_0^1 f(x)\,\overline{g(x)}\,dx$, where $\overline{g(x)}$ denotes the complex conjugate of $g(x)$.

Definition 1. Let $0 < s < \infty$ and $1 \le r, q \le \infty$. The periodic Besov space $B^s_{r,q}([0,1])$ is defined via periodic wavelets as
$$B^s_{r,q}([0,1]) = \Big\{ f \in L^r([0,1]) : \|\alpha_{j_0 \cdot}\|_{\ell^r} + \Big(\sum_{j \ge j_0} \big(2^{j(s + 1/2 - 1/r)}\,\|\beta_{j \cdot}\|_{\ell^r}\big)^q\Big)^{1/q} < \infty \Big\},$$
where $\alpha_{j_0 k} = \langle f, \phi^{per}_{j_0,k} \rangle$, $\beta_{j k} = \langle f, \psi^{per}_{j,k} \rangle$, and the usual modification is made for $q = \infty$. By the above definition, the following embedding results can be easily obtained:
$$B^s_{r,q}([0,1]) \hookrightarrow B^{s - 1/r + 1/p}_{p,q}([0,1]) \ \ (r \le p), \qquad B^s_{r,q}([0,1]) \hookrightarrow B^s_{p,q}([0,1]) \ \ (r > p). \qquad (10)$$
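Under our reading of Definition 1 (parameters $s$, $r$, $q$ as above), the sequence-space norm can be computed directly from a periodized wavelet decomposition; the function below is an illustrative sketch for finite $r$ and $q$, with a pywt.wavedec-style coefficient list as input.

```python
import numpy as np

def besov_seq_norm(coeffs, s, r, q):
    # Sequence-space Besov norm from a periodized wavelet decomposition
    # coeffs = [alpha_j0, beta_j0, beta_{j0+1}, ...] (pywt.wavedec-style).
    # Our reading of Definition 1; finite r and q only.
    alpha = np.asarray(coeffs[0])
    j0 = int(np.log2(alpha.size))
    norm = (np.abs(alpha) ** r).sum() ** (1.0 / r)
    levels = []
    for i, beta in enumerate(coeffs[1:]):
        j = j0 + i
        level_norm = (np.abs(np.asarray(beta)) ** r).sum() ** (1.0 / r)
        levels.append((2.0 ** (j * (s + 0.5 - 1.0 / r)) * level_norm) ** q)
    return norm + sum(levels) ** (1.0 / q)
```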

3. Wavelet Estimators

The estimation is carried out in the Fourier domain, which reduces the convolution operator to a product of Fourier coefficients.

Let the Fourier basis functions be $e_k(x) = e^{2\pi i k x}$, $k \in \mathbb{Z}$. Denote the relevant Fourier coefficients by $\tilde{y}_l(k) = \int_0^1 \overline{e_k(x)}\,dY_l(x)$, $\tilde{f}(k) = \langle f, e_k \rangle$, and $\tilde{g}_l(k) = \langle g_l, e_k \rangle$.

We consider model (1) in the Fourier domain. Then,
$$\tilde{y}_l(k) = \tilde{f}(k)\,\tilde{g}_l(k) + \varepsilon\,\sigma_l\,z_{l,k}, \qquad l = 1, \dots, M,\ k \in \mathbb{Z},$$
where the $z_{l,k}$ are standard complex Gaussian random variables. Following a similar procedure to [9], (14) leads to the following expression for the Fourier coefficients of the target function,
$$\tilde{f}(k) = \sum_{l=1}^{M} \omega_l(k)\,\frac{\tilde{y}_l(k)}{\tilde{g}_l(k)} - \varepsilon \sum_{l=1}^{M} \omega_l(k)\,\frac{\sigma_l\,z_{l,k}}{\tilde{g}_l(k)},$$

where the $\omega_l(k)$ are weights satisfying $\sum_{l=1}^{M} \omega_l(k) = 1$, to be specified later.

Assume that $f^{(m)} \in L^2([0,1])$. Then $f^{(m)}$ can be expanded into a wavelet series as
$$f^{(m)}(x) = \sum_{k=0}^{2^{j_0}-1} \alpha_{j_0,k}\,\phi^{per}_{j_0,k}(x) + \sum_{j \ge j_0} \sum_{k=0}^{2^j-1} \beta_{j,k}\,\psi^{per}_{j,k}(x),$$
where $\alpha_{j_0,k} = \langle f^{(m)}, \phi^{per}_{j_0,k} \rangle$ and $\beta_{j,k} = \langle f^{(m)}, \psi^{per}_{j,k} \rangle$.

Define the following collection,
$$C_j = \{ k \in \mathbb{Z} : \tilde{\psi}^{per}_{j,0}(k) \neq 0 \},$$
which, by the band-limitedness of the Meyer wavelet, satisfies $|C_j| \lesssim 2^j$; the analogous set for the scaling function is denoted by $D_j$.

Let us now investigate the estimation of $f^{(m)}$. Assume that $f$ is $m$-times ($m \in \mathbb{N}$) differentiable on $[0,1]$ and $f^{(m)} \in L^2([0,1])$. Then, for any $k \in \mathbb{Z}$, $\widetilde{f^{(m)}}(k) = (2\pi i k)^m\,\tilde{f}(k)$. This with the Parseval identity shows
$$\beta_{j,k} = \langle f^{(m)}, \psi^{per}_{j,k} \rangle = \sum_{\ell \in C_j} (2\pi i \ell)^m\,\tilde{f}(\ell)\,\overline{\tilde{\psi}^{per}_{j,k}(\ell)}.$$
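The first identity is just $m$-fold integration by parts: since $f$ and its derivatives are 1-periodic, the boundary terms vanish at each step, and
$$\widetilde{f^{(m)}}(k) = \int_0^1 f^{(m)}(x)\,e^{-2\pi i k x}\,dx = (2\pi i k)\int_0^1 f^{(m-1)}(x)\,e^{-2\pi i k x}\,dx = \cdots = (2\pi i k)^m\,\tilde{f}(k).$$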

According to (13), the wavelet coefficient $\beta_{j,k}$ can be estimated with
$$\hat{\beta}_{j,k} = \sum_{\ell \in C_j} (2\pi i \ell)^m\,\overline{\tilde{\psi}^{per}_{j,k}(\ell)} \sum_{l=1}^{M} \omega_l(\ell)\,\frac{\tilde{y}_l(\ell)}{\tilde{g}_l(\ell)}.$$

Similarly, the estimators $\hat{\alpha}_{j,k}$ are defined by
$$\hat{\alpha}_{j,k} = \sum_{\ell \in D_j} (2\pi i \ell)^m\,\overline{\tilde{\phi}^{per}_{j,k}(\ell)} \sum_{l=1}^{M} \omega_l(\ell)\,\frac{\tilde{y}_l(\ell)}{\tilde{g}_l(\ell)}.$$

We now present the considered procedure for the estimation of $f^{(m)}$; the tuning parameters are specified in the following.

The linear wavelet estimator is defined by
$$\hat{f}^{lin}_{m}(x) = \sum_{k=0}^{2^{j_0}-1} \hat{\alpha}_{j_0,k}\,\phi^{per}_{j_0,k}(x), \qquad (22)$$
where the projection level $j_0$ is specified below, separately for the case of regular-smooth convolution and the case of super-smooth convolution.
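As a concrete, deliberately simplified sketch of this procedure, the function below deconvolves by a weighted average over channels, differentiates in the Fourier domain, and projects onto frequencies $|k| \le 2^{j_0}$. The Fourier cutoff stands in for the Meyer projection, and the weights anticipate the variance-minimizing choice discussed later (Remark 3); the function name and the discretization are our assumptions.

```python
import numpy as np

def linear_deriv_estimate(Y, g, sigma, m, j0):
    # Sketch of the linear procedure: deconvolve by weighted averaging over
    # channels, differentiate in the Fourier domain, project onto low
    # frequencies. A Fourier cutoff at |k| <= 2**j0 stands in for the Meyer
    # wavelet projection; weights w_l(k) ~ |g_l~(k)|^2 / sigma_l^2 (Remark 3).
    M, N = Y.shape
    Yf = np.fft.fft(Y, axis=1) / N              # empirical Fourier coefficients
    gf = np.fft.fft(g, axis=1)                  # kernel Fourier coefficients
    k = np.fft.fftfreq(N, d=1.0 / N)            # integer frequencies
    keep = np.abs(k) <= 2 ** j0

    w = np.abs(gf[:, keep]) ** 2 / sigma[:, None] ** 2
    w = w / w.sum(axis=0, keepdims=True)        # weights sum to one per frequency
    f_hat = np.zeros(N, dtype=complex)
    f_hat[keep] = (w * Yf[:, keep] / gf[:, keep]).sum(axis=0)

    deriv_hat = (2j * np.pi * k) ** m * f_hat   # multiply by (2*pi*i*k)^m
    return np.real(np.fft.ifft(deriv_hat) * N)  # back to the grid on [0,1]
```

With the simulated data from the sketch in the Introduction, `linear_deriv_estimate(Y, g, sigma, m=1, j0=3)` returns an estimate of $f'$ on the grid.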

In the regular-smooth case, $j_0$ depends on the smoothness parameter $s$, which makes the linear estimator nonadaptive. Hence, we construct the following adaptive nonlinear wavelet estimator by the hard-thresholding method,
$$\hat{f}^{non}_{m}(x) = \sum_{k=0}^{2^{j_0}-1} \hat{\alpha}_{j_0,k}\,\phi^{per}_{j_0,k}(x) + \sum_{j=j_0}^{j_1} \sum_{k=0}^{2^j-1} \hat{\beta}_{j,k}\,\mathbf{1}_{\{|\hat{\beta}_{j,k}| \ge \lambda_j\}}\,\psi^{per}_{j,k}(x), \qquad (25)$$

where $\mathbf{1}_{A}$ denotes the indicator function of the set $A$. Here, the levels $j_0$, $j_1$ and the thresholds $\lambda_j$ are given by (26)–(28).
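The thresholding rule itself is simple to state in code. The sketch below applies level-wise hard thresholds to empirical detail coefficients; since the paper's thresholds $\lambda_j$ in (27)–(28) depend on $j$, $\varepsilon$, $m$, and the kernel decay, we leave them as inputs rather than guessing their form.

```python
import numpy as np

def hard_threshold(beta, lam):
    # Keep an empirical coefficient only if its modulus clears the threshold.
    return beta * (np.abs(beta) >= lam)

def threshold_coeffs(coeffs, lambdas):
    # coeffs: pywt.wavedec-style list [alpha_j0, beta_j0, beta_{j0+1}, ...];
    # lambdas: one threshold per detail level. Scaling coefficients are kept.
    out = [np.asarray(coeffs[0])]
    for beta, lam in zip(coeffs[1:], lambdas):
        out.append(hard_threshold(np.asarray(beta), lam))
    return out
```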

4. Convergence Rates of the Linear Estimators

We will show the convergence rates of the linear wavelet estimator for the regular-smooth and super-smooth cases, respectively.

We first state two lemmas, which will be used in the later discussion.

Lemma 2. Consider model (1) with Condition (4). Suppose $f$ is $m$-times differentiable on $[0,1]$ and $f^{(m)} \in L^2([0,1])$. If $\hat{\alpha}_{j,k}$ and $\hat{\beta}_{j,k}$ are defined as (21) and (20), respectively, then (i) $\hat{\alpha}_{j,k}$ is complex Gaussian with
$$E\hat{\alpha}_{j,k} = \alpha_{j,k}, \qquad \mathrm{Var}(\hat{\alpha}_{j,k}) = \varepsilon^2 \sum_{\ell \in D_j} |2\pi \ell|^{2m}\,|\tilde{\phi}^{per}_{j,k}(\ell)|^2 \Big(\sum_{l=1}^{M} \sigma_l^{-2}\,|\tilde{g}_l(\ell)|^2\Big)^{-1}.$$
Furthermore, $\hat{\alpha}_{j,k}$ is independent of $\hat{\alpha}_{j,k'}$ for $k \neq k'$; (ii) $\hat{\beta}_{j,k}$ is complex Gaussian with
$$E\hat{\beta}_{j,k} = \beta_{j,k}, \qquad \mathrm{Var}(\hat{\beta}_{j,k}) = \varepsilon^2 \sum_{\ell \in C_j} |2\pi \ell|^{2m}\,|\tilde{\psi}^{per}_{j,k}(\ell)|^2 \Big(\sum_{l=1}^{M} \sigma_l^{-2}\,|\tilde{g}_l(\ell)|^2\Big)^{-1}.$$

Similarly, $\hat{\beta}_{j,k}$ is independent of $\hat{\beta}_{j,k'}$ for $k \neq k'$.

Proof. We only show that Lemma 2 (ii) is true; Conclusion (i) can be proved similarly.
According to Formulas (13), (19), and (20), we can write $\hat{\beta}_{j,k}$ as $\beta_{j,k}$ plus a weighted sum of the noise terms. The noise terms in (11) are complex Gaussian with mean zero. Hence, $E\hat{\beta}_{j,k} = \beta_{j,k}$, which means $\hat{\beta}_{j,k}$ is unbiased.
For each pair of indices, $E[z_{l,k}\,\overline{z_{l',k'}}] = \delta_{l,l'}\,\delta_{k,k'}$, where $\delta$ is the Kronecker delta function. Since $z_{l,k}$ is independent of $z_{l',k'}$ for $(l,k) \neq (l',k')$, the cross terms vanish. Thus, for each $j$ and $k$, we obtain the second-moment identity. Now, we compute the variance of $\hat{\beta}_{j,k}$. By the Cauchy-Schwarz inequality, one can obtain a lower bound for the variance, in which equality holds only for the variance-minimizing weights. Using these optimal weights $\omega_l(k)$, one has the variance expression in Lemma 2 (ii). By [27], the $z_{l,k}$ are independent and identically distributed complex Gaussian random variables with mean zero and unit variance. Hence, $\hat{\beta}_{j,k}$ is complex Gaussian. It is easy to obtain that $\hat{\beta}_{j,k}$ is independent of $\hat{\beta}_{j,k'}$ for $k \neq k'$.

Remark 3. The channel number $M$ is fixed here, while in [18, 20] it tends to infinity. To obtain Lemma 2, we construct the wavelet coefficient estimators (20) and (21) by a weighted summation method, where the weights are determined by the Cauchy-Schwarz inequality.
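For completeness, the variance-minimization step presumably runs as follows (our rendering of the standard argument): for each $k$, one minimizes $\sum_l \omega_l^2(k)\,\sigma_l^2\,|\tilde{g}_l(k)|^{-2}$ subject to $\sum_l \omega_l(k) = 1$. The Cauchy-Schwarz inequality gives
$$1 = \Big(\sum_{l=1}^{M} \omega_l(k)\Big)^2 \le \Big(\sum_{l=1}^{M} \frac{\omega_l^2(k)\,\sigma_l^2}{|\tilde{g}_l(k)|^2}\Big)\Big(\sum_{l=1}^{M} \frac{|\tilde{g}_l(k)|^2}{\sigma_l^2}\Big),$$
with equality if and only if $\omega_l(k) \propto \sigma_l^{-2}\,|\tilde{g}_l(k)|^2$, so that the minimal variance factor equals $\big(\sum_{l=1}^{M} \sigma_l^{-2}\,|\tilde{g}_l(k)|^2\big)^{-1}$.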

Lemma 4. Consider model (1) with Condition (4). Suppose $f$ is $m$-times differentiable on $[0,1]$ and $f^{(m)} \in L^2([0,1])$. Then for any $j \ge j_0$, the moments of $\hat{\alpha}_{j,k} - \alpha_{j,k}$ and $\hat{\beta}_{j,k} - \beta_{j,k}$ satisfy the stated bounds in (i) the super-smooth case and (ii) the regular-smooth case.

Proof. We only prove the conclusions for $\hat{\alpha}_{j,k}$; the results for $\hat{\beta}_{j,k}$ can be obtained similarly. For the super-smooth case, we start from Lemma 2 (i). It follows from the Parseval identity and the band-limited support of the Meyer wavelet that the variance is a sum over finitely many frequencies. According to (4), the exponential decay of the $\tilde{g}_l$ controls this sum. By Lemma 2 (i), the centered coefficient is complex Gaussian and independent across $k$; hence, combining the above with (43), one obtains the moment bound of the super-smooth case. The regular-smooth case corresponds to $\gamma_l = 0$, so the variance is bounded by a polynomial factor in $2^j$, which yields the stated bound for the regular-smooth case.

Remark 5. Now, we explain the regular-smooth case. In our setting, the Fourier decay of the blurring function is at a polynomial rate $\alpha_l$, which changes with the channel index $l$. In the regular-smooth case, the convergence rate is governed by $\min_{1 \le l \le M} \alpha_l$, which stands for the optimal channel. However, [18, 20] assume that the Fourier decay of the blurring functions is at the same polynomial rate in the regular-smooth case. If all the $\alpha_l$ are the same, our condition on the blurring functions reduces to that of [18, 20].

Now we are in a position to state the main result of this section.

Theorem 6. Consider model (1) with Condition (4). Suppose $f$ is $m$-times differentiable on $[0,1]$ and $f^{(m)} \in B^s_{r,q}([0,1])$ with $s > 1/r$ and $1 \le r, q \le \infty$. The linear wavelet estimator is defined by (22).
(i) In the case of regular-smooth convolution, the estimator with $j_0$ chosen as in (48) satisfies the rate bound stated there. (ii) In the case of super-smooth convolution, the estimator with $j_0$ chosen as in (50) satisfies the corresponding rate bound.

Proof. By (16) and (22), we get a decomposition of the $L^p$-risk into a stochastic term and an approximation term. Firstly, we consider the regular-smooth case.
We estimate the stochastic term by Lemma 4 and (48). For the approximation term, the Besov space embedding result (10) and the definition of Besov spaces yield the decay of the neglected wavelet coefficients; for the remaining range of $p$, one argues by the Hölder inequality together with (56). By (48), we can then balance the two terms. Hence, (53), (54), and the above inequality lead to the stated rate in the regular-smooth case.
Next, we consider the super-smooth case.
We estimate the stochastic term by Lemma 4 and (50). The approximation term is handled as in the regular-smooth case. This with (53) and (61) shows the stated rate in the super-smooth case.

Remark 7. Theorem 6 shows that the regular-smooth case enjoys a better convergence rate than the super-smooth case. The convergence rate of the regular-smooth case depends on the smoothness parameter $s$, while that of the super-smooth case does not. In the super-smooth case, (50) implies that $j_0$ does not depend on $s$, which makes the linear wavelet estimator adaptive. However, the linear wavelet estimator in the regular-smooth case is nonadaptive, because (48) depends on $s$. We construct an adaptive wavelet estimator for the regular-smooth case in the next section.

5. Convergence Rates of the Nonlinear Estimator

In this section, we give the convergence rates of the nonlinear wavelet estimator for the regular-smooth case. We provide a useful lemma at the beginning.

Lemma 8. Consider model (1) with Condition (4). Suppose $f$ is $m$-times differentiable on $[0,1]$ and $f^{(m)} \in L^2([0,1])$. If the threshold $\lambda_j$ is defined as in (27), then the deviation probability of $\hat{\beta}_{j,k}$ from $\beta_{j,k}$ beyond a multiple of $\lambda_j$ satisfies the stated bound.

Proof. Denote the centered coefficient $\hat{\beta}_{j,k} - \beta_{j,k}$. By Lemma 2 (ii), it is complex Gaussian with mean zero and the variance given there; furthermore, the coefficients at a fixed level are mutually independent. The conclusion then follows from the tail bound of the complex Gaussian distribution.
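The Gaussian tail computation behind such bounds is elementary: if $z$ is a standard complex Gaussian variable under the convention $E|z|^2 = 1$, then $|z|^2$ is exponentially distributed with mean one, so for any $t > 0$,
$$P(|z| \ge t) = P(|z|^2 \ge t^2) = e^{-t^2},$$
and hence a centered complex Gaussian coefficient with standard deviation $v$ satisfies $P(|\hat{\beta}_{j,k} - \beta_{j,k}| \ge t) = e^{-t^2/v^2}$.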

Theorem 9. Consider model (1) with Condition (4). Suppose $f$ is $m$-times differentiable on $[0,1]$ and $f^{(m)} \in B^s_{r,q}([0,1])$ with $s > 1/r$ and $1 \le r, q \le \infty$. For the case of regular-smooth convolution, the nonlinear estimator defined by (25)–(28) satisfies the stated rate bound, whose exponent depends on the parameter zone.

Proof. When $p < 2$, the space $L^2([0,1])$ is continuously embedded into the space $L^p([0,1])$; then Jensen's inequality implies that the $L^p$-risk is controlled by the $L^2$-risk. Therefore, we only need to show the conclusion for $p \ge 2$.
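Concretely, the reduction presumably used is: for $p < 2$ and any random function $h$ on $[0,1]$, the Hölder inequality gives $\|h\|_p \le \|h\|_2$, and the Jensen inequality (concavity of $x \mapsto x^{p/2}$) yields
$$E\|h\|_p^p \le E\|h\|_2^p = E\big(\|h\|_2^2\big)^{p/2} \le \big(E\|h\|_2^2\big)^{p/2},$$
so an $L^2$-risk bound transfers to all $p < 2$.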
By (16) and (25), we obtain a decomposition of the $L^p$-risk into several terms. According to Lemma 4 (ii), we derive the bound for the scaling-coefficient term with the choice of $j_0$ given above. Similarly to (55) and (56), we bound the approximation term; taking the finest level $j_1$ as specified, this term obeys the stated rate. Finally, we consider the detail term. Utilizing properties of convex functions, we split it according to the size of the empirical coefficients relative to the threshold, whose definition is shown in (27). Therefore, we need to estimate the resulting items. The cardinality of each index set at level $j$ is at most of order $2^j$, and this with the Hölder inequality controls the first items. Then, we obtain the corresponding bounds by Lemma 4 and Lemma 8.
Choosing the threshold constant large enough, and arguing similarly to the previous discussions in (71)–(73), we conclude the bounds for these items. The main part that remains is estimating the last term. (1) In the first parameter zone, for any admissible $j$, by (27), (28), and Lemma 2 (ii), one bounds the deviation term; it then follows from (86) and (87) that, with a suitable choice of the level, the stated rate holds. (2) In the second parameter zone, with the level chosen as in (72), we distinguish two cases. Case 1: by Lemma 4 and the bound on the number of large coefficients, according to (72), we obtain the first estimate. Case 2: by Lemma 4 and the definition of the Besov norm, we bound the remaining sum; taking the level as in (72), we find the desired estimate. This with (93) shows the conclusion of Theorem 9.

Remark 10. Theorem 9 shows that the convergence rate of the nonlinear estimator changes with the parameter configuration. Now, we compare the convergence rates in the regular-smooth case from Theorem 6 (i) and Theorem 9. In one parameter zone, the nonlinear estimator has the same convergence rate as the linear one up to a logarithmic factor; in the other, the nonlinear estimator performs better than the linear one. Moreover, the nonlinear wavelet estimator is adaptive.

6. Conclusions

In this paper, we consider the wavelet estimation of function derivatives in a multichannel deconvolution model. We construct a linear wavelet estimator by the wavelet projection method and prove upper bounds on its $L^p$-risk. It turns out that the regular-smooth case enjoys a better convergence rate than the super-smooth one. Note that the linear estimator is only adaptive in the super-smooth case. We therefore construct an adaptive nonlinear wavelet estimator by the hard-thresholding method and provide upper bounds on its $L^p$-risk. If $m = 0$, our convergence rates for the nonlinear estimator agree with the results of [9] up to a logarithmic factor. In addition, our result generalizes Theorem 9 of [17] from the $L^2$- to the $L^p$-risk case up to a logarithmic factor when $M = 1$.

We consider both the regular-smooth and super-smooth cases, while [18, 20] focus on the regular-smooth one. They mainly consider nonlinear wavelet estimators; we also provide a linear wavelet estimator, which is simple and easy to compute. We want to point out that this paper is a theoretical exploration; it would be interesting to conduct numerical experiments or pursue applications. We will investigate these aspects in future work.

Data Availability

No data were used in this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Nos. 12001132 and 12001133), the Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, and the Center for Applied Mathematics of Guangxi (GUET).