Research Article | Open Access

Christophe Chesneau, "A Note on Wavelet Estimation of the Derivatives of a Regression Function in a Random Design Setting", *International Journal of Mathematics and Mathematical Sciences*, vol. 2014, Article ID 195765, 8 pages, 2014. https://doi.org/10.1155/2014/195765

# A Note on Wavelet Estimation of the Derivatives of a Regression Function in a Random Design Setting

**Academic Editor:** A. Zayed

#### Abstract

We investigate the estimation of the derivatives of a regression function in the nonparametric regression model with random design. New wavelet estimators are developed. Their performances are evaluated via the mean integrated squared error. Fast rates of convergence are obtained for a wide class of unknown functions.

#### 1. Introduction

We consider the nonparametric regression model with random design described as follows. Let $(Y_1, X_1), \ldots, (Y_n, X_n)$ be $n$ random variables defined on a probability space $(\Omega, \mathcal{A}, \mathbb{P})$, where
$$Y_i = f(X_i) + \xi_i, \quad i \in \{1, \ldots, n\}, \qquad (1)$$
$\xi_1, \ldots, \xi_n$ are i.i.d. random variables such that $\mathbb{E}(\xi_1) = 0$ and $\mathbb{E}(\xi_1^2) < \infty$, $X_1, \ldots, X_n$ are i.i.d. random variables with common density $g$, and $f$ is an unknown regression function. It is assumed that $X_i$ and $\xi_i$ are independent for any $i \in \{1, \ldots, n\}$. We aim to estimate $f^{(d)}$, that is, the $d$th derivative of $f$, for a given integer $d$, from $(Y_1, X_1), \ldots, (Y_n, X_n)$.

In the literature, various estimation methods have been proposed and studied. The main ones are the kernel methods (see, e.g., [1–5]), the smoothing splines, and local polynomial methods (see, e.g., [6–9]). The object of this note is to introduce new efficient estimators based on wavelet methods. Contrary to the others, they have the benefit of enjoying local adaptivity against discontinuities thanks to the use of a multiresolution analysis. Reviews on wavelet methods can be found in, for example, Antoniadis [10], Härdle et al. [11], and Vidakovic [12]. To the best of our knowledge, only Cai [13] and Petsa and Sapatinas [14] have proposed wavelet estimators for $f^{(d)}$ from (1), but defined with a deterministic equidistant design, that is, $X_i = i/n$. The consideration of a random design complicates the problem significantly, and no wavelet estimators exist in this case. This motivates our study.

In the first part, assuming that $g$ is known, we propose two wavelet estimators: the first one is linear and nonadaptive, and the second one is nonlinear and adaptive. Both use the approach of Prakasa Rao [15], initially developed in the context of the density estimation problem. We then determine their rates of convergence under the mean integrated squared error (MISE), assuming that $f^{(d)}$ belongs to Besov balls. In the second part, we develop a linear wavelet estimator for the case where $g$ is unknown. It is derived from the one introduced by Pensky and Vidakovic [16] for the estimation of $f$ from (1). We evaluate its rate of convergence, again under the MISE over Besov balls. The obtained rates of convergence are similar to those attained by wavelet estimators for the derivatives of a density (see, e.g., [15, 17, 18]).

The organization of this note is as follows. The next section describes some basics on wavelets and Besov balls. Our estimators and their rates of convergence are presented in Section 3. The proofs are carried out in Section 4.

#### 2. Preliminaries

This section is devoted to the presentation of the considered wavelet basis and the Besov balls.

##### 2.1. Wavelet Basis

We consider the wavelet basis on the interval $[0, 1]$ introduced by Cohen et al. [19]. Let $\phi$ and $\psi$ be the initial wavelet functions of the Daubechies wavelets family db2N (see, e.g., [20]). These functions have the distinction of being compactly supported and of belonging to a smoothness class $\mathcal{C}^{a}$, with $a$ increasing with $N$. For any $j \geq 0$ and any $k \in \{0, \ldots, 2^j - 1\}$, we set
$$\phi_{j,k}(x) = 2^{j/2} \phi(2^j x - k), \qquad \psi_{j,k}(x) = 2^{j/2} \psi(2^j x - k).$$

With appropriate treatments at the boundaries, there exists an integer $\tau$ such that, for any integer $j_0 \geq \tau$, the collection $\{\phi_{j_0,k},\ k \in \{0, \ldots, 2^{j_0} - 1\};\ \psi_{j,k},\ j \geq j_0,\ k \in \{0, \ldots, 2^{j} - 1\}\}$ forms an orthonormal basis of $L^2([0, 1])$. For any integer $j_0 \geq \tau$ and any $h \in L^2([0, 1])$, we have the following wavelet expansion:
$$h(x) = \sum_{k=0}^{2^{j_0}-1} c_{j_0,k}\, \phi_{j_0,k}(x) + \sum_{j=j_0}^{\infty} \sum_{k=0}^{2^{j}-1} \beta_{j,k}\, \psi_{j,k}(x),$$
where
$$c_{j_0,k} = \int_0^1 h(x)\, \phi_{j_0,k}(x)\, dx, \qquad \beta_{j,k} = \int_0^1 h(x)\, \psi_{j,k}(x)\, dx.$$
These quantities are called the wavelet coefficients of $h$. See, for example, Cohen et al. [19] and Mallat [21].
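To make the multiresolution machinery concrete, here is a minimal numerical sketch using the Haar wavelet (the simplest member of the Daubechies family) as a stand-in for the basis above; the function names and the grid check are illustrative, not part of the paper.

```python
import numpy as np

# Haar scaling function phi and mother wavelet psi on [0, 1); the simplest
# stand-in for the compactly supported Daubechies basis of the paper.
def phi(x):
    return np.where((x >= 0) & (x < 1), 1.0, 0.0)

def psi(x):
    return (np.where((x >= 0) & (x < 0.5), 1.0, 0.0)
            - np.where((x >= 0.5) & (x < 1), 1.0, 0.0))

def phi_jk(x, j, k):
    # Dilation-translation: phi_{j,k}(x) = 2^{j/2} phi(2^j x - k)
    return 2.0 ** (j / 2) * phi(2**j * x - k)

def psi_jk(x, j, k):
    return 2.0 ** (j / 2) * psi(2**j * x - k)

# Numerical orthonormality check on a fine grid of [0, 1).
x = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)
dx = x[1] - x[0]
ip = np.sum(phi_jk(x, 2, 1) ** 2) * dx                   # close to 1 (unit norm)
cross = np.sum(phi_jk(x, 2, 1) * psi_jk(x, 2, 1)) * dx   # close to 0 (orthogonality)
```

The Riemann sums confirm that the dilated-translated family is orthonormal, which is the property underpinning the expansion above.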

##### 2.2. Besov Balls

We consider the following wavelet sequential definition of the Besov balls. We say that $h \in B^s_{p,r}(M)$ with $M > 0$, $s > 0$, $p \geq 1$, and $r \geq 1$ if there exists a constant $M^* > 0$ (depending on $M$) such that the wavelet coefficients of $h$ satisfy
$$\left( \sum_{k=0}^{2^{\tau}-1} |c_{\tau,k}|^p \right)^{1/p} + \left( \sum_{j=\tau}^{\infty} \left( 2^{j(s+1/2-1/p)} \left( \sum_{k=0}^{2^{j}-1} |\beta_{j,k}|^p \right)^{1/p} \right)^{r} \right)^{1/r} \leq M^*,$$
with the usual modifications if $p = \infty$ or $r = \infty$.

The interest of Besov balls is that they contain various kinds of homogeneous and inhomogeneous functions. For particular choices of $s$, $p$, and $r$, the balls $B^s_{p,r}(M)$ correspond to standard balls of function spaces, such as the Hölder and Sobolev balls (see, e.g., [11, 22]).
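The sequential definition can be sketched in code: given the detail coefficients level by level, the Besov seminorm weights each level's $\ell^p$ norm by $2^{j(s+1/2-1/p)}$ and takes an $\ell^r$ norm over levels. A minimal sketch, in which the function name and input layout are assumptions:

```python
import numpy as np

def besov_seminorm(beta_levels, s, p, r):
    """Sequential Besov seminorm computed from detail coefficients.

    beta_levels[j] holds the level-j detail coefficients beta_{j,k}; returns
        ( sum_j [ 2^{j(s + 1/2 - 1/p)} ||beta_j||_p ]^r )^{1/r},
    with the sup over j when r is infinite (the "usual modification").
    """
    terms = []
    for j, beta in enumerate(beta_levels):
        lp_norm = np.sum(np.abs(np.asarray(beta)) ** p) ** (1.0 / p)
        terms.append(2.0 ** (j * (s + 0.5 - 1.0 / p)) * lp_norm)
    if np.isinf(r):
        return max(terms)
    return sum(t**r for t in terms) ** (1.0 / r)

# Coefficients decaying like 2^{-j(s + 1/2)} keep every weighted level term
# equal to 1, so the seminorm stays finite level by level.
levels = [np.full(2**j, 2.0 ** (-j * 1.5)) for j in range(10)]
norm_2 = besov_seminorm(levels, s=1.0, p=2.0, r=2.0)      # sqrt(10)
norm_inf = besov_seminorm(levels, s=1.0, p=2.0, r=np.inf) # 1.0
```

The decay rate of the wavelet coefficients across levels is exactly what the smoothness index $s$ measures.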

#### 3. Results

In this section, we set the assumptions on the model, present our wavelet estimators, and determine their rates of convergence under the MISE over Besov balls.

##### 3.1. Assumptions

We formulate the following assumptions.

(**K1**) We have for any .
(**K2**) There exists a constant such that
(**K3**) There exists a constant $c_* > 0$ such that $g(x) \geq c_*$ for any $x \in [0, 1]$.
(**K4**) There exists a constant such that

##### 3.2. Wavelet Estimators: When $g$ Is Known

We consider the wavelet basis with $N$ large enough to ensure that $\phi$ and $\psi$ belong to the class $\mathcal{C}^{d}$.

*Linear Wavelet Estimator*. We define the linear wavelet estimator by
$$\hat{f}_d(x) = \sum_{k=0}^{2^{j_0}-1} \hat{c}_{j_0,k}\, \phi_{j_0,k}(x), \qquad (11)$$
where
$$\hat{c}_{j_0,k} = \frac{(-1)^d}{n} \sum_{i=1}^{n} \frac{Y_i}{g(X_i)}\, (\phi_{j_0,k})^{(d)}(X_i), \qquad (12)$$
and $j_0$ is an integer chosen a posteriori.

The definition of $\hat{c}_{j_0,k}$ is motivated by the following unbiasedness property: using the independence between $X_i$ and $\xi_i$, and $d$ integrations by parts with (**K1**), we obtain
$$\mathbb{E}(\hat{c}_{j_0,k}) = (-1)^d \int_0^1 f(x)\, (\phi_{j_0,k})^{(d)}(x)\, dx = \int_0^1 f^{(d)}(x)\, \phi_{j_0,k}(x)\, dx, \qquad (13)$$
which is the wavelet coefficient of $f^{(d)}$ associated with $\phi_{j_0,k}$.

This approach was initially introduced by Prakasa Rao [15] for the estimation of the derivatives of a density. Its adaptation to (1) gives a suitable alternative to the wavelet methods developed by Cai [13] and Petsa and Sapatinas [14], especially in the treatment of the random design.

Note that, for the standard case $d = 0$, this estimator has been considered and studied in Chesneau [23].
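For intuition, here is a toy implementation of the linear estimator in the standard case $d = 0$, with a known uniform design density and the Haar basis as a stand-in; all names, the test function $\sin(2\pi x)$, and the level $j_0 = 4$ are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi_jk(x, j, k):
    # Haar scaling function phi_{j,k}(x) = 2^{j/2} on [k 2^{-j}, (k+1) 2^{-j})
    u = 2**j * x - k
    return np.where((u >= 0) & (u < 1), 2.0 ** (j / 2), 0.0)

def linear_estimator(x_grid, X, Y, g, j0):
    # hat c_{j0,k} = (1/n) sum_i Y_i phi_{j0,k}(X_i) / g(X_i), an unbiased
    # estimate of the coefficient of f when d = 0, projected back onto the basis.
    fhat = np.zeros_like(x_grid)
    for k in range(2**j0):
        c_hat = np.mean(Y * phi_jk(X, j0, k) / g(X))
        fhat += c_hat * phi_jk(x_grid, j0, k)
    return fhat

# Toy model: uniform design (g = 1 on [0, 1]), f(x) = sin(2 pi x), small noise.
n = 5000
X = rng.uniform(0.0, 1.0, n)
f = lambda x: np.sin(2 * np.pi * x)
Y = f(X) + 0.1 * rng.normal(size=n)

x_grid = np.linspace(0.0, 1.0, 512, endpoint=False)
fhat = linear_estimator(x_grid, X, Y, lambda x: np.ones_like(x), j0=4)
mise = np.mean((fhat - f(x_grid)) ** 2)  # small: the projection tracks f
```

Dividing by $g(X_i)$ is what removes the design bias; with a non-uniform known density the same code applies with the true $g$ plugged in.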

Theorem 1 investigates the rate of convergence attained by the linear estimator under the MISE, assuming that $f^{(d)}$ belongs to Besov balls.

Theorem 1. *Suppose that (K1), (K2), and (K3) are satisfied and that $f^{(d)} \in B^s_{p,r}(M)$ with $M > 0$, $s > 0$, $p \geq 1$, and $r \geq 1$. Let the linear estimator be defined by (11) with $j_0$ such that
$$2^{j_0} = 2^{\lfloor \log_2(n^{1/(2s+2d+1)}) \rfloor}, \qquad (14)$$
where $\lfloor a \rfloor$ denotes the integer part of $a$. Then there exists a constant $C > 0$ such that its MISE satisfies*
$$\mathbb{E}\left( \int_0^1 \big( \hat{f}_d(x) - f^{(d)}(x) \big)^2\, dx \right) \leq C\, n^{-2s/(2s+2d+1)}.$$

The rate of convergence $n^{-2s/(2s+2d+1)}$ corresponds to the one obtained in the derivatives density estimation framework. See, for example, Prakasa Rao [15] and Chaubey et al. [17, 18]. For $d = 0$, Theorem 1 reduces to [23, Theorem ].

In the rest of the study, the rate of convergence $n^{-2s/(2s+2d+1)}$ will be taken as the benchmark. We do not claim that it is optimal in the minimax sense, since the corresponding lower bounds have not been determined; nevertheless, by analogy with the density derivative estimation problem, it is a serious candidate.
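To make the benchmark concrete, the derivative order enters the exponent: writing the rate as $n^{-r(s,d)}$, each additional derivative slows the attainable rate, exactly as in the density derivative setting [15, 17, 18]. For instance, with $s = 2$:

```latex
r(s,d) = \frac{2s}{2s + 2d + 1},
\qquad
r(2,0) = \tfrac{4}{5}, \quad
r(2,1) = \tfrac{4}{7}, \quad
r(2,2) = \tfrac{4}{9}.
```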

*Hard Thresholding Wavelet Estimator*. We define the hard thresholding wavelet estimator by (16), where the empirical wavelet coefficients are those of (12), $\mathbf{1}$ is the indicator function appearing in the thresholding (17), the threshold constant is chosen large enough, and the finest resolution level is the integer determined by a calibration with $n$.

The construction of this estimator is an adaptation of the hard thresholding wavelet estimator introduced by Delyon and Juditsky [24] to the estimation of $f^{(d)}$ from (1). It uses the modern version developed by Chaubey et al. [25]. The advantage of (16) over (11) is that it is adaptive: thanks to the thresholding in (17), its performance does not depend on the knowledge of the smoothness of $f^{(d)}$. The second thresholding in (17) enables us to relax some assumptions on the model and, in particular, to impose only a moment condition on $\xi_1$ (its density can be unknown). Basics and important results on hard thresholding wavelet estimators can be found in, for example, Donoho and Johnstone [26, 27], Donoho et al. [28, 29], and Delyon and Juditsky [24].
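The thresholding rule itself is simple to sketch: an empirical coefficient is kept only if it exceeds a threshold of order $\sqrt{\ln n / n}$. A minimal illustration, in which the constant `kappa` and the coefficient values are assumptions:

```python
import numpy as np

def hard_threshold(coeffs, n, kappa=1.0):
    # Keep an empirical coefficient only if it exceeds the universal-type
    # threshold kappa * sqrt(ln n / n); kappa is an illustrative constant
    # standing in for the "large enough" constant of the estimator.
    lam = kappa * np.sqrt(np.log(n) / n)
    return np.where(np.abs(coeffs) >= lam, coeffs, 0.0)

beta_hat = np.array([0.8, 0.005, -0.3, 0.001])
kept = hard_threshold(beta_hat, n=1000)  # zeroes the two tiny coefficients
```

Coefficients below the noise level are set to zero, which is what delivers adaptivity: no smoothness index of the target needs to be known in advance.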

Theorem 2 determines the rate of convergence attained by the hard thresholding estimator under the MISE, assuming that $f^{(d)}$ belongs to Besov balls.

Theorem 2. *Suppose that (K1), (K2), and (K3) are satisfied and that $f^{(d)} \in B^s_{p,r}(M)$ with $M > 0$, $r \geq 1$, and either $\{p \geq 2,\ s > 0\}$ or $\{p \in [1, 2),\ s > 1/p\}$. Let the hard thresholding estimator be defined by (16). Then there exists a constant $C > 0$ such that its MISE is bounded by*
$$C \left( \frac{\ln n}{n} \right)^{2s/(2s+2d+1)}.$$

The proof is based on a general result proved in [25, Theorem 6.1]. Let us observe that, for the case $p \geq 2$, the rate $(\ln n / n)^{2s/(2s+2d+1)}$ is equal to the rate of convergence attained by the linear estimator up to a logarithmic factor (see Theorem 1). However, for the case $p \in [1, 2)$, it is significantly better in terms of power.

##### 3.3. Wavelet Estimators: When $g$ Is Unknown

In the case where $g$ is unknown, we propose the linear wavelet estimator defined by (20), where the empirical coefficients (21) are computed with $g$ replaced by an estimator $\hat{g}$, the level is an integer chosen a posteriori, the truncation constant refers to (**K3**), and $\hat{g}$ is an estimator of $g$ constructed from the random variables $X_1, \ldots, X_n$. For instance, we can consider the linear wavelet density estimator defined by (22), with empirical wavelet coefficients and a level chosen a posteriori.

This estimator is close to the "NES linear wavelet estimator" proposed by Pensky and Vidakovic [16] for $d = 0$. However, there are notable differences in the thresholding in (21), the partitioning of the variables, and the definition of $\hat{g}$, making the study of its performance under the MISE simpler (see the proof of Theorem 3 below).
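A crude numerical sketch of the plug-in idea: estimate $g$ from one half of the sample (here by a histogram, standing in for the linear wavelet density estimator (22)), truncate the estimate away from zero, and use the other half for the regression coefficient. All names, the Beta(2, 2) design, and the truncation floor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def plug_in_coeff(X, Y, test_fn, g_hat, floor):
    # Empirical coefficient against test_fn with an estimated design density;
    # hat g is truncated away from zero (an illustrative stand-in for the
    # thresholding of small hat g values in (21)).
    g_vals = np.maximum(g_hat(X), floor)
    return np.mean(Y * test_fn(X) / g_vals)

# Split the sample: estimate g from the first half (histogram as a crude
# linear density estimator), compute the coefficient from the second half.
n = 4000
X = rng.beta(2.0, 2.0, n)              # non-uniform design density
Y = X**2 + 0.1 * rng.normal(size=n)    # f(x) = x^2 plus noise
X1, X2 = X[: n // 2], X[n // 2 :]
Y2 = Y[n // 2 :]

bins = 32
hist, _ = np.histogram(X1, bins=bins, range=(0.0, 1.0), density=True)
g_hat = lambda x: hist[np.clip((x * bins).astype(int), 0, bins - 1)]

# Coefficient of f against phi_{0,0} = 1 on [0, 1]: close to
# the integral of x^2 over [0, 1], i.e., 1/3.
c00 = plug_in_coeff(X2, Y2, lambda x: np.ones_like(x), g_hat, floor=0.05)
```

The sample split keeps $\hat{g}$ independent of the observations used for the coefficients, which is what makes the bias-variance analysis tractable.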

Theorem 3 determines an upper bound on the MISE of this estimator and then exhibits its rate of convergence when $f^{(d)}$ belongs to Besov balls.

Theorem 3. *Suppose that (K1), (K2), and (K3) are satisfied and that $f^{(d)} \in B^s_{p,r}(M)$ with $M > 0$, $s > 0$, $p \geq 1$, and $r \geq 1$. Let the estimator be defined by (20). Then there exists a constant $C > 0$ such that its MISE is bounded by a sum of the bias and stochastic terms of Theorem 1 and an additional term driven by the MISE of $\hat{g}$.*

*In addition, suppose that (K4) is satisfied and that $g$ belongs to a Besov ball $B^{s_g}_{p_g,r_g}(M_g)$ with $M_g > 0$, $s_g > 0$, $p_g \geq 1$, and $r_g \geq 1$; consider the estimator $\hat{g}$ defined by (22) with its level chosen as in (25), and choose the level of (20) as in (26).*

*Then there exists a constant such that*

The first point of Theorem 3 is proved for any estimator $\hat{g}$ of $g$ constructed from $X_1, \ldots, X_n$. Taking $\hat{g} = g$, it corresponds to the upper bound on the MISE established in the proof of Theorem 1. Note that the rate of convergence described in the second point is slower than the one attained in Theorem 1. The fact that the smoothness of $g$ influences the performance of $\hat{g}$ and, a fortiori, of the plug-in estimator seems natural. This phenomenon also appears in [16, Theorem 2.1] for $d = 0$.

*Remark 4.* If the constant in (**K3**) exists but is unknown, we can define the estimator as (20) with a suitable substitute in the threshold of (21). The impact of this modification is a logarithmic term in Theorem 3; that is,
Moreover, choosing such that
there exists a constant such that

*Remark 5.* Note that the assumption (**K4**) has only been used in the second point of Theorem 3.

*Conclusion and Perspectives*. We have explored the estimation of $f^{(d)}$ from (1). Distinguishing the cases where $g$ is known or unknown, we have proposed wavelet methods and proved that they attain fast rates of convergence under the MISE, assuming that $f^{(d)}$ belongs to Besov balls.

Perspectives of this work are

(i) to develop an adaptive wavelet estimator, such as the hard thresholding one, for the estimation of $f^{(d)}$ in the case where $g$ is unknown;
(ii) to relax assumptions on the model. Indeed, several techniques exist to relax (**K3**), that is, to allow $g$ to have potential zeros. See, for example, Kerkyacharian and Picard [30], Gaïffas [31], and Antoniadis et al. [32]. However, their adaptations to the estimation of $f^{(d)}$ are more difficult than they appear at first glance;
(iii) to consider dependent observations.

These aspects need further investigations that we leave for a future work.

#### 4. Proofs

In this section, $C$ denotes any constant that does not depend on $j$, $k$, or $n$. Its value may change from one term to another and may depend on $\phi$ or $\psi$.

*Proof of Theorem 1.* First of all, we expand the function $f^{(d)}$ at the level $j_0$ given by (14):
where and .

Since forms an orthonormal basis of , we get
Using the fact that the empirical coefficient is an unbiased estimator of the corresponding wavelet coefficient (see (13)), that the observations are i.i.d., standard moment inequalities, and (**K2**) and (**K3**), we have
Using , the change of variables , and the fact that is compactly supported, we obtain
Therefore
and, for satisfying (14), it holds that
On the other hand, we have (see [11, Corollary 9.2]), which implies
It follows from (32), (36), and (37) that
Theorem 1 is proved.

*Proof of Theorem 2.* Observe that, for any integer $j$ and any $k$:

(i) using arguments similar to (13), we obtain
(ii) using arguments similar to (33) and (34), we have
with .

Applying [25, Theorem 6.1] (presented in the Appendix), with $M > 0$, $r \geq 1$, and either $\{p \geq 2,\ s > 0\}$ or $\{p \in [1, 2),\ s > 1/p\}$, we prove the existence of a constant $C > 0$ such that
Theorem 2 is proved.

*Proof of Theorem 3.* As in the proof of Theorem 1, we first expand the function $f^{(d)}$ at the level given by (26):

Since forms an orthonormal basis of , we get
Using (see [11, Corollary 9.2]), we have
Let be (12) with and (26). The elementary inequality , , yields
where
Proceeding as in (36), we get
Let us now investigate the upper bound for .

The triangle inequality gives
Moreover, we have
It follows from the triangle inequality, the indicator function, (**K3**), and the Markov inequality that
Hence
where
Let us now consider the remaining term. For any random variable $Z$, we have the equality
where the expectation and the variance are taken conditionally on $X_1, \ldots, X_n$. Therefore
where
Let us now observe that, owing to the independence of the observations, the random variables involved are, conditionally on $X_1, \ldots, X_n$, independent. This remark, combined with standard moment inequalities, the independence between $X_i$ and $\xi_i$, and (**K2**) and (**K3**), yields
Thanks to the compact support of the wavelets, we have . Therefore, using ,
On the other hand, by the Hölder inequality for conditional expectations and arguments similar to (33) and (34), we get
Hence
It follows from (55), (58), and (60) that
Putting (46), (48), and (61) together, we get

Combining (44), (45), and (62), we obtain

A slight adaptation of [29, Proposition 1] gives the following result. Suppose that (**K4**) is satisfied and that $g$ belongs to a Besov ball $B^{s_g}_{p_g,r_g}(M_g)$ with $M_g > 0$, $s_g > 0$, $p_g \geq 1$, and $r_g \geq 1$. Let $\hat{g}$ be defined by (22) with its level chosen as in (25). Then there exists a constant $C > 0$ such that

Therefore, choosing as (26) and using (63), we have
Theorem 3 is proved.

#### Appendix

Let us now present in detail [25, Theorem 6.1], used in the proof of Theorem 2.

We consider a general form of the hard thresholding wavelet estimator, denoted by (A.1), for estimating an unknown function from independent random variables, where the finest level is the integer satisfying the calibration given there. Here, we suppose that there exist

(i) functions with for any ;
(ii) two sequences of real numbers and satisfying and ;

such that, for :

(A1) for any integer and any , ;
(A2) there exist two constants, and , such that, for any integer and any , .

Let the estimator be (A.1) under (A1) and (A2). Suppose that the target function belongs to $B^s_{p,r}(M)$ with $M > 0$, $r \geq 1$, and either $\{p \geq 2,\ s > 0\}$ or $\{p \in [1, 2),\ s > 1/p\}$. Then there exists a constant $C > 0$ such that

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

The author is thankful to the reviewers for their comments which have helped in improving the presentation.

#### References

1. T. Gasser and H.-G. Müller, "Estimating regression functions and their derivatives by the kernel method," *Scandinavian Journal of Statistics*, vol. 11, no. 3, pp. 171–185, 1984.
2. W. Härdle and T. Gasser, "On robust kernel estimation of derivatives of regression functions," *Scandinavian Journal of Statistics*, vol. 12, no. 3, pp. 233–240, 1985.
3. Y. P. Mack and H.-G. Müller, "Derivative estimation in nonparametric regression with random predictor variable," *Sankhya: The Indian Journal of Statistics, Series A*, vol. 51, no. 1, pp. 59–72, 1989.
4. D. Ruppert and M. P. Wand, "Multivariate locally weighted least squares regression," *The Annals of Statistics*, vol. 22, no. 3, pp. 1346–1370, 1994.
5. M. P. Wand and M. C. Jones, *Kernel Smoothing*, Chapman and Hall, London, UK, 1995.
6. C. Stone, "Additive regression and other nonparametric models," *The Annals of Statistics*, vol. 13, no. 2, pp. 689–705, 1985.
7. G. Wahba and Y. H. Wang, "When is the optimal regularization parameter insensitive to the choice of the loss function?" *Communications in Statistics: Theory and Methods*, vol. 19, no. 5, pp. 1685–1700, 1990.
8. S. Zhou and D. A. Wolfe, "On derivative estimation in spline regression," *Statistica Sinica*, vol. 10, no. 1, pp. 93–108, 2000.
9. R. Jarrow, D. Ruppert, and Y. Yu, "Estimating the interest rate term structure of corporate debt with a semiparametric penalized spline model," *Journal of the American Statistical Association*, vol. 99, no. 465, pp. 57–66, 2004.
10. A. Antoniadis, "Wavelets in statistics: a review," *Journal of the Italian Statistical Society, Series B*, vol. 6, no. 2, pp. 97–130, 1997.
11. W. Härdle, G. Kerkyacharian, D. Picard, and A. Tsybakov, *Wavelets, Approximation, and Statistical Applications*, vol. 129 of *Lecture Notes in Statistics*, Springer, New York, NY, USA, 1998.
12. B. Vidakovic, *Statistical Modeling by Wavelets*, John Wiley & Sons, New York, NY, USA, 1999.
13. T. Cai, "On adaptive wavelet estimation of a derivative and other related linear inverse problems," *Journal of Statistical Planning and Inference*, vol. 108, no. 1-2, pp. 329–349, 2002.
14. A. Petsa and T. Sapatinas, "On the estimation of the function and its derivatives in nonparametric regression: a Bayesian testimation approach," *Sankhya, Series A*, vol. 73, no. 2, pp. 231–244, 2011.
15. B. L. S. Prakasa Rao, "Nonparametric estimation of the derivatives of a density by the method of wavelets," *Bulletin of Informatics and Cybernetics*, vol. 28, no. 1, pp. 91–100, 1996.
16. M. Pensky and B. Vidakovic, "On non-equally spaced wavelet regression," *Annals of the Institute of Statistical Mathematics*, vol. 53, no. 4, pp. 681–690, 2001.
17. Y. P. Chaubey, H. Doosti, and B. L. S. P. Rao, "Wavelet based estimation of the derivatives of a density with associated variables," *International Journal of Pure and Applied Mathematics*, vol. 27, no. 1, pp. 97–106, 2006.
18. Y. P. Chaubey, H. Doosti, and B. L. S. P. Rao, "Wavelet based estimation of the derivatives of a density for a negatively associated process," *Journal of Statistical Theory and Practice*, vol. 2, no. 3, pp. 453–463, 2008.
19. A. Cohen, I. Daubechies, and P. Vial, "Wavelets on the interval and fast wavelet transforms," *Applied and Computational Harmonic Analysis*, vol. 1, no. 1, pp. 54–81, 1993.
20. I. Daubechies, *Ten Lectures on Wavelets*, Society for Industrial and Applied Mathematics, 1992.
21. S. Mallat, *A Wavelet Tour of Signal Processing: The Sparse Way*, with contributions from Gabriel Peyré, Elsevier/Academic Press, Amsterdam, The Netherlands, 3rd edition, 2009.
22. Y. Meyer, *Wavelets and Operators*, Cambridge University Press, Cambridge, UK, 1992.
23. C. Chesneau, "Regression with random design: a minimax study," *Statistics and Probability Letters*, vol. 77, no. 1, pp. 40–53, 2007.
24. B. Delyon and A. Juditsky, "On minimax wavelet estimators," *Applied and Computational Harmonic Analysis*, vol. 3, no. 3, pp. 215–228, 1996.
25. Y. P. Chaubey, C. Chesneau, and H. Doosti, "Adaptive wavelet estimation of a density from mixtures under multiplicative censoring," http://hal.archives-ouvertes.fr/hal-00918069.
26. D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation by wavelet shrinkage," *Biometrika*, vol. 81, no. 3, pp. 425–455, 1994.
27. D. L. Donoho and I. M. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," *Journal of the American Statistical Association*, vol. 90, no. 432, pp. 1200–1224, 1995.
28. D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, "Wavelet shrinkage: asymptopia?" *Journal of the Royal Statistical Society, Series B*, vol. 57, no. 2, pp. 301–369, 1995.
29. D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, "Density estimation by wavelet thresholding," *Annals of Statistics*, vol. 24, no. 2, pp. 508–539, 1996.
30. G. Kerkyacharian and D. Picard, "Regression in random design and warped wavelets," *Bernoulli*, vol. 10, no. 6, pp. 1053–1105, 2004.
31. S. Gaïffas, "Sharp estimation in sup norm with random design," *Statistics and Probability Letters*, vol. 77, no. 8, pp. 782–794, 2007.
32. A. Antoniadis, M. Pensky, and T. Sapatinas, "Nonparametric regression estimation based on spatially inhomogeneous data: minimax global convergence rates and adaptivity," *ESAIM: Probability and Statistics*, vol. 18, pp. 1–41, 2014.

#### Copyright

Copyright © 2014 Christophe Chesneau. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.