Abstract

We find asymptotically sufficient statistics that could help simplify inference in nonparametric regression problems with correlated errors. These statistics are derived from a wavelet decomposition that is used to whiten the noise process and to effectively separate high-resolution and low-resolution components. The lower-resolution components contain nearly all the available information about the mean function, and the higher-resolution components can be used to estimate the error covariances. The strength of the correlation among the errors is related to the speed at which the variance of the higher-resolution components shrinks, and this is considered an additional nuisance parameter in the model. We show that the NPR experiment with correlated noise is asymptotically equivalent to an experiment that observes the mean function in the presence of a continuous Gaussian process that is similar to a fractional Brownian motion. These results provide a theoretical motivation for some commonly proposed wavelet estimation techniques.

1. Introduction

A nonparametric regression (NPR) problem consists of estimating an unknown mean function that varies smoothly across the design points. There are observations of the form

where is the unknown smooth mean function on and the errors are observations from a zero-mean Gaussian process. For NPR problems that have a particular long memory structure to the covariance of the error terms, we will find a continuous Gaussian experiment approximation to the problem of estimating the mean.

Brown and Low [1] showed that the NPR experiment is asymptotically equivalent to the white-noise model where the mean function is observed in the presence of a Brownian motion process. This result paralleled the work by Nussbaum [2] in showing that asymptotic results in nonparametric function estimation problems can be simplified using approximations by the continuous white-noise experiments that Pinsker [3] studied. The original asymptotic equivalence results for NPR experiments were extended by Brown et al. [4] and Carter [5, 6] along with refinements in the approximations from Rohde [7] and Reiss [8].

All of these results assume that the errors in (1.1) are all independent, and this assumption is critical in establishing the appropriateness of a white-noise model that also has independent increments. We want to consider the effect of correlation between the observations on these approximations. Presumably, if the correlation is weak, then the effect washes out asymptotically. However, we wish to consider cases where there is sufficient long-range correlation to affect the form of the approximation. In particular, we will show that the appropriate approximation is by a continuous Gaussian process experiment that is no longer white noise but is closer to a fractional Brownian motion.

Our approach is motivated by the work by Johnstone and Silverman [9] and Johnstone [10]. They investigated the wavelet decomposition of data of this type and used a fractional Brownian motion approximation in the limit:

They argued that the wavelet decomposition resulted in nearly independent coefficients which simplified the inference significantly. We will assume that the process is decorrelated by a wavelet decomposition and then show that this continuous model is asymptotically equivalent to the NPR experiment with the same covariance structure.
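As an informal numerical illustration of this decorrelation effect (a sketch only, with an illustrative Hurst parameter and wavelet filter that are not taken from the references), one can simulate fractional Gaussian noise and compare the lag-one correlation of the raw noise to that of its finest-level wavelet coefficients:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
n, H = 1024, 0.8  # illustrative sample size and Hurst parameter

# Fractional Gaussian noise covariance and its Cholesky factor.
k = np.arange(n, dtype=float)
acov = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
Sigma = acov[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]
L = np.linalg.cholesky(Sigma)

# Compare the lag-one correlation of the raw noise with that of the
# finest-level wavelet coefficients, averaged over replications.
lag1 = []
for _ in range(200):
    xi = L @ rng.normal(size=n)
    d = pywt.wavedec(xi, 'db4', mode='periodization')[-1]  # finest detail level
    lag1.append(np.corrcoef(d[:-1], d[1:])[0, 1])
print("raw noise lag-1 correlation:", round(2.0 ** (2 * H - 1) - 1, 3))
print("wavelet detail lag-1 correlation:", round(float(np.mean(lag1)), 3))
```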

Theorem 1.1. The nonparametric regression experiment observes as in (1.1) for an unknown mean function from a parameter set defined in Section 1.2 and a known covariance structure as described in Section 1.3. This experiment is asymptotically equivalent to the experiment that observes where is a Brownian motion with covariance kernel .

This will be proven in two steps. First, Lemma 2.1 proves that the first wavelet coefficients in a decomposition of are asymptotically sufficient in for estimating . For the second step, Lemma 3.1 shows that a discrete wavelet transform of the observations from produces observations with nearly the same distribution as these asymptotically sufficient statistics.

Furthermore, in both experiments the lower-frequency terms in the wavelet decomposition are sufficient for estimating the means, allowing the higher-frequency terms to be used to give information about the variance process. This leads to Theorem 1.2, which proposes an experiment that allows some flexibility in the error structure.

Theorem 1.2. The NPR experiment observes the as in (1.1), where the covariance structure depends on the parameters and and is such that the variance of the wavelet coefficients is .
The experiment observes the pair (where and are defined in Section 4) and then observes the continuous Gaussian process conditionally on and : where the covariance is such that . The estimators and are the same as but truncated so that , and .
For , , and for some constant , the experiments and are asymptotically equivalent.

This theorem can be seen as an extension of Theorem 1.1 in Carter [6] from a case where there is a single unknown variance for all the wavelet coefficients to a case where the variance changes as a log-linear function of the resolution level (or frequency).

Wang [11] addressed the issue of asymptotically sufficient statistics in the fractional Brownian motion process. In Section 3 of that article there is an argument that bounds the difference between minimax errors in an NPR experiment with correlated errors and an experiment that observes the mean in the presence of fractional Brownian motion error. This result extends the sort of approximation by Donoho and Johnstone [12] to correlated errors and is very much in the spirit of our Theorem 1.1 here. Our results differ from Wang [11] in that we have made a stronger assumption on the covariance structure of the errors in order to obtain the full asymptotic equivalence of the experiments as discussed in Section 1.1.

Lemma 2.1 is presented and proven in Section 2. Section 3 presents Lemma 3.1 and the proof of Theorem 1.1. The proof for Theorem 1.2 is in Section 4 with some relevant bounds in Sections 5 and 6.

1.1. Asymptotic Sufficiency

Instead of focusing on single estimation techniques, we will consider approximations of the entire statistical experiment. For large sample sizes, there is often a simpler statistical experiment that can approximate the problem at hand. One benefit of finding an approximating experiment is that it may have convenient sufficient statistics even when they are not available in the original experiment.

Our approximations will therefore be of experiments rather than particular distributions. A statistical experiment that observes data consists of a set of probability distributions indexed by the parameter set . We wish to compare the information about in to another experiment that observes data from among the set of distributions that are indexed by the same parameter . Implicitly, we are concerned with two sequences of experiments and where roughly denotes the increasing sample size, but generally, we will leave off the subscript . It will always be understood that the distributions depend on the “sample size.”

The NPR experiment will be approximated using Le Cam's notion of asymptotically equivalent experiments [13, 14] and asymptotically sufficient statistics [15]. Asymptotically equivalent experiments have corresponding inference procedures (such as estimators or tests) in each experiment that perform nearly as well. Specifically, if there is an estimator in with risk , then, for bounded loss functions, there is an estimator such that

as . These asymptotic equivalence results are stronger than the equivalence of minimax rates that is derived under a similar model by, for example, Wang [11]. Our results imply a correspondence over a range of bounded loss functions. Thus, the equivalence holds for a global error as well as local error measurements or other distances.

Asymptotic sufficiency is a stronger notion, where if is a sufficient statistic for inference about in , then is asymptotically sufficient for when the total-variation distance between and is negligible. These asymptotically sufficient statistics generate experiments that are all asymptotically equivalent. In particular, and are asymptotically equivalent, and they are also asymptotically equivalent to the experiments generated by the distributions of and . As a result, an estimator in should generally be of the form and there is a corresponding estimator that performs nearly as well in the experiment. There is a basic transitive property to the asymptotic equivalence that implies if is asymptotically equivalent to , and is asymptotically equivalent to , then is asymptotically equivalent to .

Le Cam's asymptotic equivalence is characterized using the total-variation distance between the distributions. We will abuse this notation a bit by writing . It will often be more convenient to use the Kullback–Leibler divergence () to bound the total variation distance:

See [16]. The divergence is convenient for product experiments because .
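As a concrete sketch of how these bounds are used (illustrative only; the total-variation convention here is the supremum over events, and the constant changes under the L1 convention), the divergence between two product Gaussian experiments with shifted means adds over coordinates and then converts into a bound on the total-variation distance:

```python
import numpy as np

def kl_shifted_gaussians(mu_p, mu_q, sigma2):
    """KL divergence between product Gaussians N(mu_p, diag(sigma2)) and
    N(mu_q, diag(sigma2)); it adds over the independent coordinates."""
    mu_p, mu_q, sigma2 = map(np.asarray, (mu_p, mu_q, sigma2))
    return np.sum((mu_p - mu_q) ** 2 / (2.0 * sigma2))

def tv_bound(kl):
    """Pinsker-type bound: total variation <= sqrt(KL / 2)."""
    return np.sqrt(kl / 2.0)

# Two mean vectors differing only in a few high-frequency coordinates.
mu_p = np.zeros(16)
mu_q = np.concatenate([np.zeros(12), 0.05 * np.ones(4)])
sigma2 = np.full(16, 0.1)
print(tv_bound(kl_shifted_gaussians(mu_p, mu_q, sigma2)))
```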

1.2. Wavelet Basis

We will use orthonormal wavelet bases to characterize the function space and to simplify the covariance structure of the errors.

Assuming we are considering periodic functions on the interval , we can construct periodic wavelet bases as in Daubechies [17, Section 9.3]. We start with a space which consists of functions of the form

where is an orthonormal set of periodic functions generated via

We will work with a function having finite support , and at the boundaries of the interval the are given the proper periodic extensions. This space generates wavelet functions that span the difference between the and and can be written with the proper periodic adjustment at the boundary (e.g., for a small ). This periodic adjustment has a small effect at the high resolution levels but is a larger factor for small values of . In particular, the scaling function at level 0 is .

The mean functions will be assumed to be constructed from this wavelet basis:

We will restrict the mean functions to those that belong to a Hölder class of functions. Specifically, the class of periodic mean functions is :

for some and . This smoothness condition on the functions bounds the rate of growth of the higher-frequency terms in the orthonormal expansion. A result originally from Meyer [18], stated in Daubechies [17, page 299, equation (9.2.1)], gives the bound

for a constant related to . This bound implies a useful bound on the squared error in the tail of the basis expansion:

where .
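A small numerical sketch of this decay (assuming the PyWavelets 'db4' filter and a toy smooth periodic function, neither of which is prescribed by the text) shows the per-level coefficient magnitudes shrinking geometrically across resolution levels, which is what drives the tail bound above:

```python
import numpy as np
import pywt  # PyWavelets

n = 2 ** 12
t = np.arange(n) / n
f = np.sin(2 * np.pi * t) + 0.3 * np.cos(6 * np.pi * t)  # toy smooth periodic mean

# Periodized discrete wavelet transform; coeffs = [cA, cD_coarse, ..., cD_fine].
coeffs = pywt.wavedec(f, 'db4', mode='periodization')

# For a Holder-smooth function the maximal coefficient magnitude at a level
# shrinks roughly geometrically as the level gets finer (up to the
# normalization of the discrete transform).
for j, d in enumerate(coeffs[1:], start=1):
    print(f"detail level {j} (coarse to fine): max |coef| = {np.max(np.abs(d)):.2e}")
```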

1.3. Error Structure

These results rely on a specific structure to the covariance matrix of the errors in the NPR experiment. As in Johnstone [10], the fractional Brownian motion is the motivating example for our continuous Gaussian model. However, this model does not necessarily provide the independent coefficients that would simplify the inference. Instead, we will consider an error structure that shares some of the properties of the fractional Brownian motion.

Traditionally, the asymptotics of the NPR experiment have assumed independent noise. This white-noise model is especially convenient because all of the eigenvalues of the covariance operator are equal. Thus, any orthonormal basis generates a set of independent standard normal coefficients. With a more general covariance function, the eigenvalues are different and only particular decompositions lead to independent coefficients. Thus there is much less flexibility in the choice of basis, and this basis determines some of the structure of the covariance.

Following Johnstone [10], Johnstone and Silverman [9], Zhang and Walter [19], Wang [11], and Cavalier [20], among others, we will assume a covariance structure that is whitened by a wavelet decomposition. When there is a long-range positive correlation between the observations, the wavelet decomposition tends to decorrelate the error process because the wavelet functions act like bandpass filters.

We will assume that there exists an orthonormal basis and for and such that the decomposition of the error process generates independent normal coefficients. In other words, the error process is a zero-mean Gaussian process that is roughly

in the distributional sense where the are independent normals. The will be assumed to depend on and not as a sort of stationarity condition. In particular, we will assume that and then for some in the interval . If , then this is the white-noise process.

This is a convenient form for the error, but not completely unrealistic. Wavelet decompositions nearly whiten the fractional Brownian motion process. Wornell [21] argued that long-memory processes can be constructed via a wavelet basis with variances at resolution level shrinking like for . McCoy and Walden [22] showed that the discrete wavelet transform nearly decorrelates the noise in fractionally differenced white-noise processes. Alternatively, Wang [11] used a wavelet-vaguelette decomposition [23] to find a decomposition of the fractional Brownian motion that results in independent coefficients for a nearly orthonormal basis.
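The following sketch builds such an error process directly from its wavelet coefficients (Haar synthesis through PyWavelets; the value of the decay parameter is illustrative) and checks that the resulting autocovariance decays slowly rather than vanishing beyond lag one:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
n, delta = 2 ** 12, 0.4  # illustrative sample size and decay parameter
levels = int(np.log2(n))

# Independent wavelet coefficients with variance shrinking like 2**(-j*delta)
# at finer resolution levels j, plus a single scaling coefficient of variance 1.
coeffs = [rng.normal(size=1)]
coeffs += [2.0 ** (-j * delta / 2.0) * rng.normal(size=2 ** (j - 1))
           for j in range(1, levels + 1)]
xi = pywt.waverec(coeffs, 'haar', mode='periodization')

# Slowly decaying, on-average positive autocovariances indicate long memory.
print([round(float(np.mean(xi[:-k] * xi[k:])), 4) for k in (1, 2, 4, 8, 16)])
```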

Section 7 demonstrates some properties of the specific Gaussian process generated by using the Haar basis as the wavelet basis. These properties are consistent with the sort of behavior that we want in the covariances of our observations. The correlation between observations decreases like for where measures the distance between the locations of the coefficients.

A well-established method for estimating the parameter in these long-range dependent models is to fit a linear function to the log of an estimate of the variances of the coefficients at each resolution level. This idea goes back to at least Abry and Veitch [24] and is now a standard approach that has been improved upon in subsequent work; see Veitch and Abry [25] and Stoev et al. [26], among others. This motivates the asymptotically sufficient statistics in Theorem 1.2, which are least-squares estimates from the fitted line.
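A minimal sketch of that log-variance regression (with simulated coefficients and illustrative parameter values) fits a line to the log of the per-level sample variances; the negated slope estimates the decay parameter and the intercept the overall scale:

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, delta = 1.0, 0.4          # illustrative true scale and decay parameters
levels = np.arange(4, 13)        # resolution levels used in the fit

# log2 of the per-level sample variance is roughly linear in the level j,
# with slope -delta and intercept log2(gamma).
log_vars = []
for j in levels:
    d = np.sqrt(gamma * 2.0 ** (-j * delta)) * rng.normal(size=2 ** j)
    log_vars.append(np.log2(np.mean(d ** 2)))

slope, intercept = np.polyfit(levels, log_vars, 1)
print("estimated delta:", round(-slope, 3), "estimated gamma:", round(2.0 ** intercept, 3))
```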

The assumptions in Theorem 1.1 on the covariance structure of the errors are strong and could limit the applicability of the result. However, if we allow the variances at different scales to have a range of linear relationships, we could then have a sufficiently rich class of error models. Theorem 1.2 allows for this somewhat larger class of models, and it seems likely that the changing magnitude of the variances over the different resolution levels will have a greater effect on the distribution of the observed errors than the underlying basis.

2. Approximate Sufficiency in the Gaussian Sequence Experiment

The first step in the proof of Theorem 1.1 is to establish that a truncated wavelet decomposition is asymptotically sufficient for the continuous Gaussian experiment.

Lemma 2.1. The experiment is a Gaussian process experiment that observes where is a zero-mean continuous Gaussian process with covariance . There are asymptotically sufficient statistics for estimating : for as long as

In the Gaussian sequence experiment where only the mean is to be estimated, the likelihood is

where is the distribution of and is the distribution of the version with mean 0 which would just be .

We want to approximate this experiment by a similar experiment where the mean is projected onto the first resolution levels, that is, is replaced by :

The likelihood becomes

Therefore, this experiment has sufficient statistics and for . These observations are approximately sufficient in the experiment if the distance between the distributions in the two experiments is small.

By (5.3), the distance between these two sets of experiments is

For the parameter space , (5.4) bounds the distance between the two experiments as

which is negligible as when the dimension of the sufficient statistics increases like

for . The worst case is when , and thus we have proved Lemma 2.1.

3. Approximating the NPR Experiment

Theorem 1.1 can now be proven by approximating the sufficient statistics from Lemma 2.1 using the observations from the NPR experiment .

We suppose that we have observations from the NPR experiment as in (1.1) where the are Gaussian random variables with a specified covariance function. Specifically, let be the matrix that performs the discrete wavelet transform, and let be its inverse. The vector of random wavelet coefficients from Lemma 2.1, where , can be transformed via the discrete wavelet transform to create .

The expected value of this transformed vector is

For a function that is nearly constant around , ; so we can approximate by .

In the original NPR experiment, the variances are for a constant that depends on and the basis we will be using. The covariance matrix for these will be assumed to be where is the diagonal matrix of . The variance of the should be the same as that of , and in the model described, . Therefore, should be set to .
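An informal sketch of this transformation (with a toy mean function, Haar filters, and an illustrative decay parameter) applies the discrete wavelet transform to the regression observations; because the same orthonormal transform whitens the error covariance, the resulting coefficients are nearly independent, with the low-frequency levels carrying the mean and the high-frequency levels carrying mostly noise:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(3)
n, delta = 2 ** 10, 0.4                 # illustrative sample size and decay
t = np.arange(n) / n
f = np.sin(2 * np.pi * t)               # toy mean function

# Errors synthesized from independent wavelet coefficients whose variances
# shrink like 2**(-j*delta) at the finer levels j (illustrative construction).
levels = int(np.log2(n))
err = [rng.normal(size=1)]
err += [2.0 ** (-j * delta / 2.0) * rng.normal(size=2 ** (j - 1))
        for j in range(1, levels + 1)]
xi = pywt.waverec(err, 'haar', mode='periodization')

# Discrete wavelet transform of the observations.
y = f + xi
coeffs = pywt.wavedec(y, 'haar', mode='periodization')
for j, d in enumerate(coeffs[1:], start=1):
    print(f"level {j}: sample variance = {np.var(d):.4f}")
```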

Lemma 3.1. If the mean function is in then the total variation distance between the distribution of the vector and the distribution of the vector is

This lemma essentially goes back to the original work of Mallat [27] and parallels some of what was done by Brown and Low [1], Johnstone [10], Rohde [7], and Carter [5].

The NPR observations are such that the covariance matrix of is also , and therefore the total variation distance between the distributions is bounded in (5.2) by

with

A standard calculation bounds the difference between the means when with support on and the are Hölder for :

The covariance matrix is a positive definite matrix such that . The first column of the wavelet transform matrix is where is the vector of 1's. Therefore,

which is negligible for large and .

3.1. Proof of Theorem 1.1

The theorem follows from the fact that the observations for are asymptotically sufficient for the continuous process in (1.2). Then a linear function of these sufficient statistics is still approximately sufficient. Thus, the experiment that seeks to draw inference about from the observations is asymptotically equivalent to the experiment that observes (1.2) by Lemma 2.1.

Furthermore, by Lemma 3.1, the original NPR experiment that has the same covariance structure as is asymptotically equivalent to that experiment and thus, by transitivity, to the experiment that observes the process (1.2) as well. This proves Theorem 1.1.

3.2. Remarks on the Covariance Structure

This result is restrictive in that it requires a specific known covariance structure. We are working under the assumption that the covariance matrix has eigenfunctions that correspond to a wavelet basis. This does not generally lead to a typical covariance structure. It does not even necessarily lead to a stationary Gaussian process; see the Haar basis example below.

The difficulty is that the requirement for having asymptotically equivalent experiments is quite strict, and the total variation distance between the processes with even small differences in the structure of the covariance is not negligible. For two multivariate Gaussian distributions with the same means but where one covariance matrix is and the other is , a diagonal matrix with the same diagonal elements as , the Kullback–Leibler divergence between the distributions is .

If the correlation between the highest level coefficients is , then the contribution to the difference of the log determinants is on the order of . The dimension of the problem is growing while the correlations are generally not going to 0 sufficiently quickly. For instance, in a typical wavelet basis decomposition of the true fractional Brownian motion where is a constant that depends on but not or .

Thus, the difference will not go to 0 as the sample size increases. Therefore, for the sort of long-range correlation structures that we are considering here, the eigenfunctions of the kernel need to be known or else the experiments will not be asymptotically equivalent.
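A numerical check of this log-determinant calculation (using the covariance of fractional Gaussian noise as an illustrative long-memory kernel) shows the divergence between the correlated distribution and its diagonalized version growing with the dimension rather than vanishing:

```python
import numpy as np

def kl_vs_diagonal(Sigma):
    """KL( N(0, Sigma) || N(0, D) ) with D = diag(Sigma).  Because the
    diagonals match, the trace term equals the dimension, leaving
    (log det D - log det Sigma) / 2."""
    D = np.diag(np.diag(Sigma))
    return 0.5 * (np.linalg.slogdet(D)[1] - np.linalg.slogdet(Sigma)[1])

def fgn_covariance(n, H):
    """Covariance matrix of fractional Gaussian noise with Hurst index H."""
    k = np.arange(n, dtype=float)
    acov = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    return acov[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

for n in (64, 256, 1024):
    print(n, round(kl_vs_diagonal(fgn_covariance(n, 0.8)), 2))  # grows with n
```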

4. Estimating the Covariance of the Increments

The key limitation of Theorem 1.1 is that it supposes that the covariance structure of the errors is known to the experimenter. To make the approximation more useful, it would help if the covariance structure was more flexible. A strategy similar to that used by Carter [6] can be used to estimate the variances of the coefficients.

In Carter [6], I showed that a model with a variance that changes slowly over time can still be approximated by the Gaussian process as long as all of the observations are independent. Our result here is that for correlated observations, if the log of the variance is a linear function of the resolution level, then a similar technique can be used to establish a set of asymptotically sufficient statistics.

Flexibility with regard to the covariance structure is added by allowing the magnitude of the to depend on the resolution level . The variances will be described by two parameters and , which characterize the size of the error and the speed that it shrinks at higher resolution levels. These nuisance parameters can be estimated using part of the data, and then the inference can be carried out conditionally on the estimates.

Specifically, the experiment observes independent components and

where the factor is included to match up with the scaling functions at the th resolution level. These observations form a new experiment with a parameter set that includes where , , and is bounded below by a constant .

This experiment with the parametric variances is no longer an approximately sufficient statistic for the experiment that observes all of the . That experiment has too much information about the variance. If we observed the entire sequence at all resolution levels , then and could be estimated exactly. We need to adopt another approximating experiment as in Carter [6]. Many of the bounds in this section follow arguments from that paper.

4.1. Proof of Theorem 1.2

The theorem can be proven by applying Lemma 3.1 and then a version of Lemma 2.1 that uses only a small proportion of the low-frequency wavelet coefficients. The rest of the coefficients can be used to fix the parameters in the covariance of the observations.

The first step is to decompose the nonparametric regression into a set of wavelet coefficients. The NPR observations can be transformed by dividing by and then performing the discrete wavelet transformation as in Lemma 3.1. The result is that a sequence of wavelet coefficients and for is equivalent to the original NPR observations with a total-variation distance between the distributions of

The supremum of this bound over all and is

which will be negligible for .

The key strategy is to break the observations from this wavelet decomposition into pieces starting at level , where observations on are assumed to be informative about the means, and the higher resolution levels are used to estimate the covariance structure.

For each resolution level with , we generate the approximately sufficient statistics . Along with the for , the collection of is exactly sufficient if the means are for , because then there is no information about the means in the higher-frequency terms, and we have a piece of the experiment that is like a normal scale family. This new experiment is asymptotically equivalent to our .

The error in approximating by , where the means of the higher-frequency coefficients are 0, is bounded by (5.3):

For in the space , (5.4) bounds the distance as

which is negligible when .

This experiment has sufficient statistics , for , and

Furthermore, there are approximately sufficient statistics in this experiment where and are the weighted least-squares estimates of and from the data . These are exactly sufficient statistics in the experiment that observes the and for the lower resolution levels as before, in addition to the observations for where

The distance between and depends on the distance between the distribution of the log of the Gamma variables and the normal approximation to this distribution. The calculation in [6, Section ] gives a bound on the Kullback–Leibler divergence of where is the distribution of , and is the distribution of . Therefore, the total error between the two experiments is

Therefore, the observations in are asymptotically sufficient for and thus also (as long as with ).

In the experiment , the sufficient statistics for estimating and are the weighted least-squares estimators and :

where is the diagonal matrix with for along its diagonal, is the design matrix with rows , and is the column of observations . The vector of estimators is normal with mean and covariance .
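The following sketch carries out such a weighted least-squares fit on simulated per-level statistics (the design matrix, weights, and parameter values are illustrative stand-ins for the quantities defined above):

```python
import numpy as np

rng = np.random.default_rng(4)
gamma, delta = 0.8, 0.35                       # illustrative true parameters
levels = np.arange(6, 14).astype(float)        # high-frequency levels used

# Responses: log2 of the average squared coefficient at each level, which is
# roughly linear in the level with slope -delta and intercept log2(gamma).
y = []
for j in levels:
    d = np.sqrt(gamma * 2.0 ** (-j * delta)) * rng.normal(size=int(2 ** j))
    y.append(np.log2(np.mean(d ** 2)))
y = np.asarray(y)

# Weighted least squares: the response variance at level j shrinks like 2**(-j),
# so each level is weighted proportionally to its number of coefficients.
X = np.column_stack([np.ones_like(levels), levels])
W = np.diag(2.0 ** levels)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
cov_beta = np.linalg.inv(X.T @ W @ X)          # covariance up to a constant factor
print("gamma_hat:", round(2.0 ** beta[0], 3), "delta_hat:", round(-beta[1], 3))
```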

Therefore, we can compare this experiment to an experiment that observes the same , but the and for are replaced by Gaussian random variables with variances (conditional on ) that are . The error in this approximation depends on the distance between the two sets of independent normal experiments with different variances. Letting be the distribution of and the distribution of , the bound (6.12) in Section 6 gives

There are independent normals for so that the total divergence is

Therefore, the experiments and are asymptotically equivalent for

for some .

We can improve this approximation by replacing the estimators and in by using

and to match up with the bounds on the parameter space. The new version of this experiment therefore observes and the normal coordinates for . The error between and this new version of is smaller because , which makes the bound in (6.2) uniformly smaller.

Finally, we create a continuous Gaussian version of the experiment. This approximation observes all the for with means and variances . The are actually sufficient statistics for an experiment that observes and for and for :

The difference between the experiments and conditional on and is, as in Section 2 and (5.4), less than . The expectation of this bound when averaged over the possible values of is a bound on the unconditional error. Furthermore, this expectation is less than the minimum over possible values of (this is the real advantage that comes from going from to ). Thus,

As before, this is asymptotically negligible when .

All that is left to do is to choose the level at which to split the data so that the requirements from (4.5) and (4.15) that and from (4.12) that are all fulfilled. We could choose so that

which is greater than for . This choice of plugged into the bound in (4.5) gives us

as for . At the same time, the bound in (4.11) becomes

Thus, Theorem 1.2 is established.

5. Bounding the Total Variation Distance

We need a bound on the distance between two multivariate normal distributions with different means in order to bound the error in many of our approximations.

For shifted Gaussian processes, the total-variation distance between the distributions is

where . The expression in (5.1) for the total variation distance is concave for positive , so a simple expansion gives
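As an informal numerical check (using the supremum-over-events convention for total variation; the constant doubles under the L1 convention), the exact distance for a standardized mean shift can be compared with the linear bound that concavity provides:

```python
from math import erf, sqrt, pi

def tv_shifted_normals(delta):
    """Total-variation distance (sup-over-events convention) between two
    Gaussians with common covariance and standardized mean shift delta."""
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return 2.0 * Phi(delta / 2.0) - 1.0

# Concavity in delta > 0 gives the simple linear bound delta / sqrt(2 * pi).
for delta in (0.01, 0.1, 0.5, 1.0):
    print(delta, round(tv_shifted_normals(delta), 4), round(delta / sqrt(2.0 * pi), 4))
```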

For the Gaussian process with correlated components, we will assume that the variance of each wavelet coefficient is of the form where the variance is calibrated so that . A bound on the error in the projection onto the span of for comes from (5.2) which depends on

where the upper bound in (5.4) follows from , the definition of and the bound in (1.13), and . This error is negligible as whenever

6. Bounds from the Estimated Variances

In order to expand our asymptotically sufficient statistics out into a continuous Gaussian experiment, we need a bound on the total-variation distance between which, for , observes a sequence of normals with variances and , which observes a similar set of normals with variances .

For two normal distributions with the same means , and variances and , respectively, the Kullback–Leibler divergence is

Thus, for (the distribution of the ) and (the distribution of the ), the divergence between the conditional distributions given and is

This divergence between conditional distributions can be used to bound the joint divergence:

where the expectation is taken over the estimators and .
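For reference, a sketch of the closed form being used here (the standard divergence between two normals that share a mean but have different variances; the ordering of the arguments is illustrative) and its second-order behavior in the relative variance error:

```python
import numpy as np

def kl_normal_same_mean(var_p, var_q):
    """KL( N(mu, var_p) || N(mu, var_q) ) for a common mean mu:
    0.5 * (var_p / var_q - 1 - log(var_p / var_q))."""
    r = var_p / var_q
    return 0.5 * (r - 1.0 - np.log(r))

# The divergence is second order in the relative error of the variance, which
# is why plugging in estimated variances close to the truth costs little.
for eps in (0.1, 0.01, 0.001):
    print(eps, kl_normal_same_mean(1.0 + eps, 1.0))
```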

To bound the expected value of the divergence in (6.2), we need the distribution of the estimators:

This implies that

and therefore

Via elementary linear algebra calculations we get that for

As a result,

To simplify this expression, let be a random variable with probability mass function proportional to for . Thus, the is equal to . Similarly, the main factor in (6.9) is equal to

A simple bound of leads to . Furthermore, the variance of is decreasing in , and thus it is greater than when . Therefore,

because .

Thus,

Analogously, the expected divergence between and is bounded by

If we add up these errors over the observations in the experiment, we get that the error in the approximation is less than , which is negligible for sufficiently small.

7. Haar Basis Covariance

The Haar basis is a simple enough wavelet basis that we can make explicit calculations of the properties of the error distribution. We will show that the resulting errors will have variances of approximately as we expected, and the correlation between and will decrease at about a rate of .

The scaling functions for the Haar basis are constant on dyadic intervals at the resolution level . The assumption is that we have a single scaling function coefficient with , and then every wavelet coefficient is independent and has variance . Then the covariances can be calculated from the synthesis formula for the Haar basis.

The formula for synthesizing the scaling function coefficients from the wavelet decomposition is

where is the index such that has support that includes the support of . The is either 1 or depending on whether sits in the positive or negative half of the function.

Using the covariance structure described above, the variance of is

for . For , the variance of each scaling function coefficient is 1 as in white noise. For , direct calculation leads to a variance of .

To find the covariance between two variables and , we need which is the highest resolution level such that the support of includes the support of both scaling functions and . The covariance is thus

For large the correlation is on the order of where is a proxy for the distance between the observations. For , all of these covariances are 0. For , the correlation is .
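These calculations can be verified numerically with a direct implementation of the Haar synthesis (the number of levels and the decay parameter below are illustrative):

```python
import numpy as np

def inverse_haar(coeffs, J):
    """Inverse orthonormal Haar transform of a length 2**J vector ordered as
    [scaling coefficient, level-1 details, level-2 details, ..., level-J details]."""
    x = np.array([coeffs[0]], dtype=float)
    pos = 1
    for j in range(1, J + 1):
        d = np.asarray(coeffs[pos:pos + 2 ** (j - 1)], dtype=float)
        pos += 2 ** (j - 1)
        up = np.repeat(x, 2) / np.sqrt(2.0)
        det = np.repeat(d, 2) * np.tile([1.0, -1.0], 2 ** (j - 1)) / np.sqrt(2.0)
        x = up + det
    return x

def haar_error_covariance(J, delta):
    """Covariance of the 2**J level-J scaling coefficients when the wavelet
    coefficients are independent with variance 2**(-j*delta) at level j and
    the single level-0 scaling coefficient has variance 1."""
    n = 2 ** J
    variances = [1.0] + [2.0 ** (-j * delta)
                         for j in range(1, J + 1) for _ in range(2 ** (j - 1))]
    # Columns of the synthesis matrix are inverse transforms of unit vectors.
    B = np.column_stack([inverse_haar(np.eye(n)[i], J) for i in range(n)])
    return B @ np.diag(variances) @ B.T

Sigma = haar_error_covariance(J=6, delta=0.5)
corr = Sigma / np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))
print(np.round(np.diag(Sigma)[:4], 3))         # per-observation variances
print(np.round(corr[0, [1, 2, 4, 8, 16]], 3))  # correlation falls off with distance
```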

Acknowledgment

This work is supported by NSF Grant no. DMS-08-05481.