Abstract

We consider a transfer function model with time-varying coefficients. We propose an estimation procedure, based on the least squares method and wavelet expansions of the time-varying coefficients. We discuss some statistical properties of the estimators and assess the validity of the methodology through a simulation study. We also present an application of the proposed procedure to a real pair of series.

1. Introduction

In this paper, we consider the transfer function model
\[ Y_{t,T}=\sum_{i=1}^{m}\delta_i\Bigl(\frac{t}{T}\Bigr)Y_{t-i,T}+\sum_{j=0}^{n}\omega_j\Bigl(\frac{t}{T}\Bigr)X_{t-j,T}+\varepsilon_t,\qquad t=1,\ldots,T, \qquad (1.1) \]
where $T$ is the number of observations and the $\varepsilon_t$ are i.i.d. $(0,\sigma^2)$. We assume that the error and the input series are independent. The functions $\omega_j(u)$, $j=0,1,\ldots,n$, and $\delta_j(u)$, $j=1,\ldots,m$, are supported on the interval $[0,1]$ and connected to the underlying series by an appropriate rescaling of time, $u=t/T$.
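As a concrete illustration, model (1.1) with $m=1$ and $n=0$ can be simulated by direct recursion. The sketch below is ours: the white-noise input and the particular coefficient functions are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_model(T, delta1, omega0, seed=0):
    """Simulate Y_t = delta1(t/T) Y_{t-1} + omega0(t/T) X_t + eps_t,
    a special case of model (1.1) with m = 1 and n = 0."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(T)      # illustrative white-noise input series
    eps = rng.standard_normal(T)    # i.i.d. (0, 1) errors
    Y = np.zeros(T)
    for t in range(1, T):
        u = (t + 1) / T             # rescaled time u = t/T (1-based t)
        Y[t] = delta1(u) * Y[t - 1] + omega0(u) * X[t] + eps[t]
    return X, Y

X, Y = simulate_model(
    512,
    delta1=lambda u: 0.5 * np.sin(2 * np.pi * u),
    omega0=lambda u: 0.9 * np.cos(2 * np.pi * u + np.pi),
)
```

Since $|\delta_1(u)|\le 0.5$ here, the recursion is stable and the simulated output remains bounded in probability.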

As noted by Nason and Sapatinas [1], when both time series 𝑋𝑡 and 𝑌𝑡 belong to the class of ARMA models, the transfer function models made popular by Box and Jenkins are appropriate. See Box et al. [2].

Model (1.1) is important because it can be used when either or both time series are not stationary, but it is perhaps more suitable when the series exhibit a locally stationary behaviour, for example, in the sense of Dahlhaus [3] or of Nason et al. [4]. In several areas, such as signal analysis (speech analysis in particular) and economics, empirical and theoretical works suggest that models with time-varying coefficients are useful, reflecting, for example, the evolution of the economy over time.

Chiann and Morettin [5, 6] considered a special case of (1.1), namely,
\[ Y_{t,T}=\sum_{j=0}^{\infty}a_j\Bigl(\frac{t}{T}\Bigr)X_{t-j,T}+\sigma\Bigl(\frac{t}{T}\Bigr)\varepsilon_t, \qquad (1.2) \]
while Dahlhaus et al. [7] entertained the model
\[ X_{t,T}=\sum_{j=1}^{p}a_j\Bigl(\frac{t}{T}\Bigr)X_{t-j,T}+\sigma\Bigl(\frac{t}{T}\Bigr)\varepsilon_t. \qquad (1.3) \]

These authors used wavelet expansions of the time-varying coefficients in order to obtain estimates.

Related references are Brockwell et al. [8, 9], who used a state-space approach to transfer function modelling, allowing for nonstationary input and output sequences, and Nason and Sapatinas [1], who show how a nondecimated wavelet packet transform (NDWPT) can be used to model a response time series $Y_t$ in terms of an explanatory time series $X_t$, both assumed to be nonstationary. Rather than building a model directly between $X_t$ and $Y_t$, a regression-type model is built between $Y_t$ and an NDWPT version of $X_t$.

We consider the problem of estimating $\omega_j(u)$, $j=0,1,\ldots,n$, and $\delta_j(u)$, $j=1,\ldots,m$, in the time domain, using wavelet expansions. We use least squares to obtain the estimators of the wavelet coefficients. The empirical detail coefficients are then shrunk before the inverse wavelet transform is applied to obtain the final estimates of $\omega_j(u)$ and $\delta_j(u)$. This results in a nonlinear smoothing procedure. See Section 2 for further details.

We will apply the methodology to real data, namely, a pair of hourly wind speeds recorded at two Welsh Meteorological Office stations, Valley and Aberporth. Here the idea is to relate the target wind speeds (Valley) to wind speeds measured at a nearby reference site (Aberporth). See Nason and Sapatinas [1] for a complete description of this data and Section 6.

The plan of the paper is as follows. In Section 2, we introduce some background material on wavelets. In Section 3, we present the estimation procedure, and in Section 4, we give some properties of the empirical wavelet coefficients. In Section 5, we perform some simulations and in Section 6, we apply the methodology to the data described above. The paper ends with some remarks in Section 7.

2. Wavelets

In this section, we discuss some basic ideas on wavelets. For more details, see the books by Vidakovic [10], Percival and Walden [11], and Nason [12]. From two basic functions, the scaling function $\phi(x)$ and the wavelet $\psi(x)$, we define infinite collections of translated and scaled versions, $\phi_{j,k}(x)=2^{j/2}\phi(2^j x-k)$ and $\psi_{j,k}(x)=2^{j/2}\psi(2^j x-k)$, for $j,k\in\mathbb{Z}$. We assume that $\{\phi_{\ell,k}(\cdot)\}_{k}\cup\{\psi_{j,k}(\cdot)\}_{j\ge\ell;\,k}$ forms an orthonormal basis of $L^2(\mathbb{R})$, for some coarse scale $\ell$. A key point [13] is that it is possible to construct compactly supported $\phi$ and $\psi$ that generate an orthonormal system and have space-frequency localization, which allows parsimonious wavelet-series representations of wide classes of functions.

In some applications, the functions involved are defined on a compact interval, such as $[0,1]$. This is the case for our functions $\omega_j(u)$ and $\delta_j(u)$ in (1.1), so it is necessary to consider an orthonormal system that spans $L^2([0,1])$. Several solutions have been proposed, the most satisfactory being that of Cohen et al. [14]. As in any such proposal, the basis functions are modified close to the boundaries in order to preserve orthogonality; Cohen et al. [14] use a preconditioning step. Periodized wavelets are defined by
\[ \tilde\phi_{j,k}(x)=\sum_{n}\phi_{j,k}(x-n),\qquad \tilde\psi_{j,k}(x)=\sum_{n}\psi_{j,k}(x-n), \qquad (2.1) \]
and these generate a multiresolution ladder $V_0\subset V_1\subset\cdots$, in which the space $V_j$ is spanned by the $\tilde\phi_{j,k}$. Negative values of $j$ are not necessary, since $\tilde\phi_{0,0}=1$ and, for $j<0$, the $\tilde\psi_{j,k}$ reduce to constants. See Vidakovic [10] for details and Walter and Cai [15] for a different approach. Other instances are the boundary-corrected Meyer wavelets [16]. From now on, we denote the periodized wavelets simply by $\psi_{j,k}$.

Accordingly, any function $f\in L^2([0,1])$ can be expanded in an orthogonal series
\[ f(x)=\alpha_{0,0}\phi(x)+\sum_{j\ge 0}\sum_{k\in I_j}\beta_{j,k}\psi_{j,k}(x), \qquad (2.2) \]
where we take $\ell=0$ and $I_j=\{k: k=0,\ldots,2^j-1\}$. For each $j$, the set $I_j$ collects the values of $k$ such that $\beta_{j,k}$ belongs to scale $2^j$. For example, for $j=3$ we have $8$ wavelet coefficients in scale $2^3$, while for $j=2$ we have half this number, namely $4$ coefficients in scale $2^2$. The wavelet coefficients are given by
\[ \alpha_{0,0}=\int f(x)\phi(x)\,dx,\qquad \beta_{j,k}=\int f(x)\psi_{j,k}(x)\,dx. \qquad (2.3) \]

Often we consider the sum in (2.2) only up to a maximum level $J$,
\[ f(x)\approx\alpha_{0,0}\phi(x)+\sum_{j=0}^{J-1}\sum_{k\in I_j}\beta_{j,k}\psi_{j,k}(x), \qquad (2.4) \]
in such a way that we approximate $f$ in the space $V_J$.
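For the Haar system, a truncated expansion such as (2.4) is easy to compute explicitly. The sketch below (ours, with arbitrary grid size and test function) approximates the inner products (2.3) by Riemann sums on a fine grid and evaluates the $V_J$ projection:

```python
import numpy as np

def haar_phi(x):
    # Haar scaling function: indicator of [0, 1)
    return ((x >= 0) & (x < 1)).astype(float)

def haar_psi(x):
    # Haar wavelet: +1 on [0, 0.5), -1 on [0.5, 1)
    return haar_phi(2 * x) - haar_phi(2 * x - 1)

def haar_approx(f, J, x):
    """Truncated Haar expansion (2.4): project f onto V_J, with the
    coefficients (2.3) approximated by Riemann sums on a fine grid."""
    g = np.linspace(0, 1, 8192, endpoint=False)
    fg = f(g)
    out = fg.mean() * haar_phi(x)                  # alpha_{00} * phi(x)
    for j in range(J):
        for k in range(2 ** j):
            psi_g = 2 ** (j / 2) * haar_psi(2 ** j * g - k)
            beta = (fg * psi_g).mean()             # beta_{jk} = <f, psi_{jk}>
            out += beta * 2 ** (j / 2) * haar_psi(2 ** j * x - k)
    return out

x = np.linspace(0, 1, 256, endpoint=False)
f = lambda u: np.sin(2 * np.pi * u)
max_err = np.max(np.abs(haar_approx(f, 6, x) - f(x)))  # small: V_6 averages over cells of width 1/64
```

The projection onto $V_J$ replaces $f$ by its average over dyadic cells of width $2^{-J}$, so the approximation error shrinks as $J$ grows.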

Wavelet shrinkage provides a simple tool for nonparametric function estimation. It is an active research area in which the methodology is based on optimal shrinkage estimators for location parameters. Some references are Donoho and Johnstone [17, 18] and Donoho et al. [19]. In this paper, we focus on the simplest, yet most important, shrinkage strategy: wavelet thresholding. Earlier models considered nonparametric regression with i.i.d. normal errors; these were later extended by Johnstone and Silverman [20] to correlated errors. In this context, the paper by Opsomer et al. [21] is a good survey of methods using kernels, splines, and wavelets.

The thresholding technique consists of reducing the noise in a signal by applying a threshold to the estimated wavelet coefficients $\hat\beta_{j,k}$. Two commonly used forms are the soft and hard thresholds, given by
\[ \delta^{(s)}\bigl(\hat\beta_{j,k},\lambda\bigr)=\operatorname{sgn}\bigl(\hat\beta_{j,k}\bigr)\bigl(|\hat\beta_{j,k}|-\lambda\bigr)_{+},\qquad \delta^{(h)}\bigl(\hat\beta_{j,k},\lambda\bigr)=\hat\beta_{j,k}\,I\bigl(|\hat\beta_{j,k}|>\lambda\bigr), \qquad (2.5) \]
respectively. One crucial issue is the choice of the parameter $\lambda$. If the empirical wavelet coefficients $\hat\beta_{j,k}$ are assumed to be Gaussian with mean $\beta_{j,k}$ and constant variance, a popular choice is the so-called universal threshold $\lambda=\sigma\sqrt{2\log 2^{J}}$, where $\sigma^2$ is the noise variance. Other methods include the SURE procedure, cross-validation, and Bayesian approaches. See Percival and Walden [11] and Vidakovic [10] for details.
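The two rules in (2.5) and the universal threshold translate directly into code. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

def soft_threshold(b, lam):
    """Soft rule delta^(s) in (2.5): shrink each coefficient toward zero by lam."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def hard_threshold(b, lam):
    """Hard rule delta^(h) in (2.5): keep coefficients with |b| > lam, zero the rest."""
    return b * (np.abs(b) > lam)

def universal_threshold(sigma, J):
    """Universal threshold lambda = sigma * sqrt(2 log 2^J)."""
    return sigma * np.sqrt(2.0 * np.log(2.0 ** J))

b = np.array([-3.0, -0.5, 0.2, 1.5])
lam = 1.0
# hard thresholding keeps the surviving coefficients unchanged;
# soft thresholding additionally shrinks the survivors by lam
```

The hard rule is the one used later in this paper; the soft rule trades a small bias on large coefficients for continuity of the shrinkage map.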

But despite the good properties of these procedures, which are spatially adaptive and nearly optimal over wide classes of function spaces, they are not appropriate when the design is nonregular or random. Recently, several approaches dealing with nonregular designs have been proposed. Delouille [22] introduced the concept of design-adapted wavelets, and Kerkyacharian and Picard [23] introduced warped wavelets. Cai and Brown [24, 25] consider wavelet estimation for models with nonequispaced or random uniform designs, and Sweldens [26] introduced the so-called lifting scheme, in which wavelets are no longer obtained as dilations and translations of some mother wavelet. See also Jansen and Oonincx [27]. Some applications and extensions of these approaches are given in Porto et al. [28, 29] and Morettin and Chiann [30].

In this paper, we will use ordinary wavelets, as in Dahlhaus et al. [7], since these wavelets work well for our purposes. See also a related discussion in Morettin and Chiann [30]. Concerning the choice of the mother wavelet, there are no established rules. In our experience, we agree with Nason and Sapatinas [1] when they say that "how the choice of wavelets affects the final model and its interpretation is an area of further research." We will make particular use of the Daubechies compactly supported wavelets in the simulations and the application. Concerning thresholding, we use hard thresholds, with the parameter $\lambda$ chosen as explained in Sections 4 and 5.

3. Estimators

The aim is to estimate the functions $\delta_i(u)$, $i=1,\ldots,m$, and $\omega_i(u)$, $i=0,1,\ldots,n$, appearing in model (1.1), given $T$ observations of the series $X_{t,T}$ and $Y_{t,T}$. We assume that the orders $m$ and $n$ are fixed and known. The idea is to expand these functions in wavelet series
\[ \delta_i(u)=\alpha^{(\delta_i)}_{00}\phi(u)+\sum_{j=0}^{J-1}\sum_{k\in I_j}\beta^{(\delta_i)}_{jk}\psi_{jk}(u),\quad i=1,\ldots,m, \]
\[ \omega_i(u)=\alpha^{(\omega_i)}_{00}\phi(u)+\sum_{j=0}^{J-1}\sum_{k\in I_j}\beta^{(\omega_i)}_{jk}\psi_{jk}(u),\quad i=0,\ldots,n. \qquad (3.1) \]

The empirical wavelet coefficients are obtained by minimizing
\[ \sum_{t=v+1}^{T}\Bigl[Y_{t,T}-\sum_{i=1}^{m}\delta_i\Bigl(\frac{t}{T}\Bigr)Y_{t-i,T}-\sum_{j=0}^{n}\omega_j\Bigl(\frac{t}{T}\Bigr)X_{t-j,T}\Bigr]^2, \qquad (3.2) \]
with $\delta_i(u)$ and $\omega_j(u)$ replaced by their expansions (3.1) and $v=\max(m,n)$. These empirical wavelet coefficients are then modified using a hard threshold, and finally we build estimators of $\delta_i(u)$ and $\omega_j(u)$ by applying the inverse wavelet transform to the thresholded coefficients.

For ease of exposition, we restrict our attention from now on to the simple model with $m=1$ and $n=0$, namely,
\[ Y_{t,T}=\delta_1\Bigl(\frac{t}{T}\Bigr)Y_{t-1,T}+\omega_0\Bigl(\frac{t}{T}\Bigr)X_{t,T}+\varepsilon_t. \qquad (3.3) \]

So we regress $Y_{t,T}$ on $X_{t,T}$ and $Y_{t-1,T}$, using the expansions (3.1) for $\delta_1(t/T)$ and $\omega_0(t/T)$. Let $\Delta=2^{J-1}-1$. In matrix notation, we have
\[
\begin{pmatrix} Y_{2,T}\\ Y_{3,T}\\ \vdots\\ Y_{T,T} \end{pmatrix}
=
\begin{pmatrix}
\phi_{00}\bigl(\tfrac{2}{T}\bigr)Y_{1,T} & \psi_{00}\bigl(\tfrac{2}{T}\bigr)Y_{1,T} & \cdots & \psi_{J-1,\Delta}\bigl(\tfrac{2}{T}\bigr)Y_{1,T}\\
\phi_{00}\bigl(\tfrac{3}{T}\bigr)Y_{2,T} & \psi_{00}\bigl(\tfrac{3}{T}\bigr)Y_{2,T} & \cdots & \psi_{J-1,\Delta}\bigl(\tfrac{3}{T}\bigr)Y_{2,T}\\
\vdots & \vdots & & \vdots\\
\phi_{00}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T} & \psi_{00}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T} & \cdots & \psi_{J-1,\Delta}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T}
\end{pmatrix}
\begin{pmatrix} \alpha^{(\delta_1)}_{00}\\ \beta^{(\delta_1)}_{00}\\ \vdots\\ \beta^{(\delta_1)}_{J-1,\Delta} \end{pmatrix}
\]
\[
{}+
\begin{pmatrix}
\phi_{00}\bigl(\tfrac{2}{T}\bigr)X_{2,T} & \psi_{00}\bigl(\tfrac{2}{T}\bigr)X_{2,T} & \cdots & \psi_{J-1,\Delta}\bigl(\tfrac{2}{T}\bigr)X_{2,T}\\
\phi_{00}\bigl(\tfrac{3}{T}\bigr)X_{3,T} & \psi_{00}\bigl(\tfrac{3}{T}\bigr)X_{3,T} & \cdots & \psi_{J-1,\Delta}\bigl(\tfrac{3}{T}\bigr)X_{3,T}\\
\vdots & \vdots & & \vdots\\
\phi_{00}\bigl(\tfrac{T}{T}\bigr)X_{T,T} & \psi_{00}\bigl(\tfrac{T}{T}\bigr)X_{T,T} & \cdots & \psi_{J-1,\Delta}\bigl(\tfrac{T}{T}\bigr)X_{T,T}
\end{pmatrix}
\begin{pmatrix} \alpha^{(\omega_0)}_{00}\\ \beta^{(\omega_0)}_{00}\\ \vdots\\ \beta^{(\omega_0)}_{J-1,\Delta} \end{pmatrix}
+
\begin{pmatrix} \varepsilon_2\\ \varepsilon_3\\ \vdots\\ \varepsilon_T \end{pmatrix}
\qquad (3.4)
\]

or
\[ \mathbf{Y}=\bigl[\boldsymbol{\Phi}_Y\ \boldsymbol{\Psi}_Y(0)\ \boldsymbol{\Psi}_Y(1)\ \cdots\ \boldsymbol{\Psi}_Y(J-1)\bigr]\boldsymbol{\beta}^{(\delta_1)}+\bigl[\boldsymbol{\Phi}_X\ \boldsymbol{\Psi}_X(0)\ \boldsymbol{\Psi}_X(1)\ \cdots\ \boldsymbol{\Psi}_X(J-1)\bigr]\boldsymbol{\beta}^{(\omega_0)}+\boldsymbol{\varepsilon}, \qquad (3.5) \]
where
\[ \mathbf{Y}=\bigl(Y_{2,T},Y_{3,T},\ldots,Y_{T,T}\bigr)',\qquad \boldsymbol{\varepsilon}=\bigl(\varepsilon_2,\varepsilon_3,\ldots,\varepsilon_T\bigr)', \]
\[ \boldsymbol{\beta}^{(\delta_1)}=\bigl(\alpha^{(\delta_1)}_{00},\beta^{(\delta_1)}_{00},\beta^{(\delta_1)}_{10},\beta^{(\delta_1)}_{11},\ldots,\beta^{(\delta_1)}_{J-1,0},\ldots,\beta^{(\delta_1)}_{J-1,\Delta}\bigr)', \]
\[ \boldsymbol{\beta}^{(\omega_0)}=\bigl(\alpha^{(\omega_0)}_{00},\beta^{(\omega_0)}_{00},\beta^{(\omega_0)}_{10},\beta^{(\omega_0)}_{11},\ldots,\beta^{(\omega_0)}_{J-1,0},\ldots,\beta^{(\omega_0)}_{J-1,\Delta}\bigr)'. \qquad (3.6) \]

The vector $\boldsymbol{\Phi}_Y$ is given by
\[ \boldsymbol{\Phi}_Y=\Bigl(\phi_{00}\bigl(\tfrac{2}{T}\bigr)Y_{1,T},\ \phi_{00}\bigl(\tfrac{3}{T}\bigr)Y_{2,T},\ \ldots,\ \phi_{00}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T}\Bigr)', \qquad (3.7) \]
and the matrices $\boldsymbol{\Psi}_Y(m)$, for $0\le m\le J-1$, by
\[ \boldsymbol{\Psi}_Y(m)=\begin{pmatrix}
\psi_{m,0}\bigl(\tfrac{2}{T}\bigr)Y_{1,T} & \psi_{m,1}\bigl(\tfrac{2}{T}\bigr)Y_{1,T} & \cdots & \psi_{m,2^m-1}\bigl(\tfrac{2}{T}\bigr)Y_{1,T}\\
\psi_{m,0}\bigl(\tfrac{3}{T}\bigr)Y_{2,T} & \psi_{m,1}\bigl(\tfrac{3}{T}\bigr)Y_{2,T} & \cdots & \psi_{m,2^m-1}\bigl(\tfrac{3}{T}\bigr)Y_{2,T}\\
\vdots & \vdots & & \vdots\\
\psi_{m,0}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T} & \psi_{m,1}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T} & \cdots & \psi_{m,2^m-1}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T}
\end{pmatrix}. \qquad (3.8) \]

The vector $\boldsymbol{\Phi}_X$ is given by
\[ \boldsymbol{\Phi}_X=\Bigl(\phi_{00}\bigl(\tfrac{2}{T}\bigr)X_{2,T},\ \phi_{00}\bigl(\tfrac{3}{T}\bigr)X_{3,T},\ \ldots,\ \phi_{00}\bigl(\tfrac{T}{T}\bigr)X_{T,T}\Bigr)', \qquad (3.9) \]
and the matrices $\boldsymbol{\Psi}_X(m)$, for $0\le m\le J-1$, by
\[ \boldsymbol{\Psi}_X(m)=\begin{pmatrix}
\psi_{m,0}\bigl(\tfrac{2}{T}\bigr)X_{2,T} & \psi_{m,1}\bigl(\tfrac{2}{T}\bigr)X_{2,T} & \cdots & \psi_{m,2^m-1}\bigl(\tfrac{2}{T}\bigr)X_{2,T}\\
\psi_{m,0}\bigl(\tfrac{3}{T}\bigr)X_{3,T} & \psi_{m,1}\bigl(\tfrac{3}{T}\bigr)X_{3,T} & \cdots & \psi_{m,2^m-1}\bigl(\tfrac{3}{T}\bigr)X_{3,T}\\
\vdots & \vdots & & \vdots\\
\psi_{m,0}\bigl(\tfrac{T}{T}\bigr)X_{T,T} & \psi_{m,1}\bigl(\tfrac{T}{T}\bigr)X_{T,T} & \cdots & \psi_{m,2^m-1}\bigl(\tfrac{T}{T}\bigr)X_{T,T}
\end{pmatrix}. \qquad (3.10) \]

Let $\boldsymbol{\Psi}_X=[\boldsymbol{\Phi}_X\ \boldsymbol{\Psi}_X(0)\ \cdots\ \boldsymbol{\Psi}_X(J-1)]$ and $\boldsymbol{\Psi}_Y=[\boldsymbol{\Phi}_Y\ \boldsymbol{\Psi}_Y(0)\ \cdots\ \boldsymbol{\Psi}_Y(J-1)]$.

It follows easily that the least squares estimators of the coefficients are given by
\[ \begin{pmatrix}\hat{\boldsymbol{\beta}}^{(\delta_1)}\\ \hat{\boldsymbol{\beta}}^{(\omega_0)}\end{pmatrix}
=\begin{pmatrix}\boldsymbol{\Psi}_Y'\boldsymbol{\Psi}_Y & \boldsymbol{\Psi}_Y'\boldsymbol{\Psi}_X\\ \boldsymbol{\Psi}_X'\boldsymbol{\Psi}_Y & \boldsymbol{\Psi}_X'\boldsymbol{\Psi}_X\end{pmatrix}^{-1}
\begin{pmatrix}\boldsymbol{\Psi}_Y'\mathbf{Y}\\ \boldsymbol{\Psi}_X'\mathbf{Y}\end{pmatrix}. \qquad (3.11) \]

Having obtained the estimates in (3.11), we plug them into (3.1), resulting in the linear estimates $\hat\delta_i(u)$ and $\hat\omega_i(u)$. Finally, the nonlinear smoothed estimators $\tilde\delta_i(u)$ and $\tilde\omega_i(u)$, respectively, are obtained by applying a threshold to the empirical detail coefficients $\hat\beta^{(i)}_{jk}$ and taking the inverse wavelet transform.
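To make the construction (3.4)-(3.11) concrete, the sketch below fits model (3.3) with a Haar basis, solving the stacked least squares problem $[\boldsymbol{\Psi}_Y\ \boldsymbol{\Psi}_X]$ directly. All settings ($T$, $J$, the true coefficient curves, the check at the end) are our illustrative choices:

```python
import numpy as np

def haar_phi(x):
    return ((x >= 0) & (x < 1)).astype(float)

def haar_psi(x):
    return haar_phi(2 * x) - haar_phi(2 * x - 1)

def basis(u, J):
    """Columns [phi_00(u), psi_00(u), ..., psi_{J-1,Delta}(u)], ordered as in (3.6)."""
    cols = [haar_phi(u)]
    for j in range(J):
        for k in range(2 ** j):
            cols.append(2 ** (j / 2) * haar_psi(2 ** j * u - k))
    return np.column_stack(cols)

def fit_tvtf(Y, X, J):
    """Least squares estimates (3.11) for model (3.3)."""
    T = len(Y)
    u = np.arange(2, T + 1) / T          # rescaled times for t = 2, ..., T
    B = basis(u, J)
    Psi_Y = B * Y[:-1, None]             # regressors multiplying Y_{t-1,T}
    Psi_X = B * X[1:, None]              # regressors multiplying X_{t,T}
    D = np.hstack([Psi_Y, Psi_X])
    coef, *_ = np.linalg.lstsq(D, Y[1:], rcond=None)
    p = B.shape[1]
    return coef[:p], coef[p:]            # beta^(delta_1), beta^(omega_0)

# illustrative check with constant true coefficients delta_1 = 0.5, omega_0 = 0.8
rng = np.random.default_rng(1)
T, J = 2048, 4
X = rng.standard_normal(T)
Y = np.zeros(T)
for t in range(1, T):
    Y[t] = 0.5 * Y[t - 1] + 0.8 * X[t] + rng.standard_normal()
b_d, b_w = fit_tvtf(Y, X, J)
u = np.arange(2, T + 1) / T
delta_hat = basis(u, J) @ b_d            # linear estimate of delta_1(u)
omega_hat = basis(u, J) @ b_w            # linear estimate of omega_0(u)
```

Thresholding the detail entries of `b_d` and `b_w` before the final reconstruction would give the nonlinear smoothed estimates described above.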

We might think of using other procedures, such as the LASSO (least absolute shrinkage and selection operator) of Tibshirani [31] or the nonnegative garrote of Breiman [32]. For an orthogonal design, the LASSO is related to the soft shrinkage procedure of Donoho and coworkers. But using these procedures would make the derivation of the properties of the estimators above considerably more difficult, or even impossible.

4. Properties of Empirical Coefficients

Now we present some properties of the linear empirical wavelet coefficients. The techniques used to prove the results are quite involved and are based on function space theory. Basically we adapt the results of Dahlhaus et al. [7] for the transfer function model (1.1). The proofs may be obtained from the authors upon request or from the site of the journal.

We assume that the functions $\omega_i(u)$, $i=0,1,\ldots,n$, and $\delta_i(u)$, $i=1,\ldots,m$, belong to function spaces $\mathcal{F}_i$ given by
\[ \mathcal{F}_i=\Bigl\{f_i=\alpha^{(i)}_{00}\phi+\sum_{j,k}\beta^{(i)}_{jk}\psi_{jk}:\ \bigl|\alpha^{(i)}_{00}\bigr|\le C_{i1},\ \bigl\|\beta^{(i)}_{\cdot\cdot}\bigr\|_{s_i,p_i,q_i}\le C_{i2}\Bigr\}, \qquad (4.1) \]
where
\[ \bigl\|\beta^{(i)}_{\cdot\cdot}\bigr\|_{s_i,p_i,q_i}=\Biggl(\sum_{j\ge 0}\Bigl[2^{j s_i p_i}\sum_{k\in I_j}\bigl|\beta^{(i)}_{jk}\bigr|^{p_i}\Bigr]^{q_i/p_i}\Biggr)^{1/q_i}, \qquad (4.2) \]

and $s_i=\ell_i+1/2-1/p_i$. Here, $\ell_i$ is the smoothness degree of each space, $p_i$ and $q_i$ ($1\le p_i,q_i\le\infty$) specify the norm, and $C_{i1}$ and $C_{i2}$ are positive constants. For these function spaces, the following result holds [19]:
\[ \sup_{f_i\in\mathcal{F}_i}\sum_{j\ge J}\sum_{k}\bigl|\beta^{(i)}_{jk}\bigr|^2=O\bigl(2^{-2J\tilde s_i}\bigr), \qquad (4.3) \]
where $\tilde s_i=\ell_i+1/2-1/\tilde p_i$, with $\tilde p_i=\min\{p_i,2\}$.

These classes contain Besov, Hölder, and $L^2$-Sobolev spaces; see, for example, Vidakovic [10], Dahlhaus et al. [7], and Triebel [33].

It can be shown (see Donoho et al. [19]) that the loss incurred by truncating at level $J$ is of order $T^{-2\ell_i/(2\ell_i+1)}$ if we choose $J$ such that $2^{J-1}\le T^{1/2}\le 2^{J}$. This is achieved if $\tilde s_i>1$.

Concerning the wavelets, we assume that $\phi$ and $\psi$ are compactly supported on $[0,1]$ and have continuous derivatives up to order $r>\ell$, with $\ell=\max_i\ell_i$. We denote the spectral norm by $\|\cdot\|_2$ and the sup norm by $\|\cdot\|_\infty$.

In order to analyze the statistical behavior of the estimated coefficients, it is convenient to expand the coefficient functions as linear combinations of functions in the space $V_J$, generated by $\{\phi_{J,1},\phi_{J,2},\ldots,\phi_{J,2^J}\}$. With this basis, we can write (3.3) as
\[ Y_{t,T}=\sum_{i=1}^{2^J}\zeta^{(\delta_1)}_{J,i}\phi_{J,i}\Bigl(\frac{t}{T}\Bigr)Y_{t-1,T}+\sum_{i=1}^{2^J}\zeta^{(\omega_0)}_{J,i}\phi_{J,i}\Bigl(\frac{t}{T}\Bigr)X_{t,T}+\gamma_{t,T}, \qquad (4.4) \]
where
\[ \gamma_{t,T}=\sum_{j\ge J}\sum_{k\in I_j}\beta^{(\delta_1)}_{jk}\psi_{jk}\Bigl(\frac{t}{T}\Bigr)Y_{t-1,T}+\sum_{j\ge J}\sum_{k\in I_j}\beta^{(\omega_0)}_{jk}\psi_{jk}\Bigl(\frac{t}{T}\Bigr)X_{t,T}+\varepsilon_t. \qquad (4.5) \]

Equation (4.4) in matrix form becomes
\[ \mathbf{Y}=\bigl[\boldsymbol{\varphi}_Y\ \boldsymbol{\varphi}_X\bigr]\begin{pmatrix}\boldsymbol{\zeta}^{(\delta_1)}\\ \boldsymbol{\zeta}^{(\omega_0)}\end{pmatrix}+\boldsymbol{\gamma}, \qquad (4.6) \]
with
\[ \boldsymbol{\varphi}_Y=\begin{pmatrix}
\phi_{J,1}\bigl(\tfrac{2}{T}\bigr)Y_{1,T} & \phi_{J,2}\bigl(\tfrac{2}{T}\bigr)Y_{1,T} & \cdots & \phi_{J,2^J}\bigl(\tfrac{2}{T}\bigr)Y_{1,T}\\
\phi_{J,1}\bigl(\tfrac{3}{T}\bigr)Y_{2,T} & \phi_{J,2}\bigl(\tfrac{3}{T}\bigr)Y_{2,T} & \cdots & \phi_{J,2^J}\bigl(\tfrac{3}{T}\bigr)Y_{2,T}\\
\vdots & \vdots & & \vdots\\
\phi_{J,1}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T} & \phi_{J,2}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T} & \cdots & \phi_{J,2^J}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T}
\end{pmatrix},
\]
\[ \boldsymbol{\varphi}_X=\begin{pmatrix}
\phi_{J,1}\bigl(\tfrac{2}{T}\bigr)X_{2,T} & \phi_{J,2}\bigl(\tfrac{2}{T}\bigr)X_{2,T} & \cdots & \phi_{J,2^J}\bigl(\tfrac{2}{T}\bigr)X_{2,T}\\
\phi_{J,1}\bigl(\tfrac{3}{T}\bigr)X_{3,T} & \phi_{J,2}\bigl(\tfrac{3}{T}\bigr)X_{3,T} & \cdots & \phi_{J,2^J}\bigl(\tfrac{3}{T}\bigr)X_{3,T}\\
\vdots & \vdots & & \vdots\\
\phi_{J,1}\bigl(\tfrac{T}{T}\bigr)X_{T,T} & \phi_{J,2}\bigl(\tfrac{T}{T}\bigr)X_{T,T} & \cdots & \phi_{J,2^J}\bigl(\tfrac{T}{T}\bigr)X_{T,T}
\end{pmatrix}. \qquad (4.7) \]

In (4.6), $\boldsymbol{\zeta}^{(\delta_1)}$ and $\boldsymbol{\zeta}^{(\omega_0)}$ contain the coefficients of the expansion (4.4), and the entries of $\boldsymbol{\gamma}$ are given by (4.5).

Analogously to (3.11), we have the least squares estimates
\[ \begin{pmatrix}\hat{\boldsymbol{\zeta}}^{(\delta_1)}\\ \hat{\boldsymbol{\zeta}}^{(\omega_0)}\end{pmatrix}
=\begin{pmatrix}\boldsymbol{\varphi}_Y'\boldsymbol{\varphi}_Y & \boldsymbol{\varphi}_Y'\boldsymbol{\varphi}_X\\ \boldsymbol{\varphi}_X'\boldsymbol{\varphi}_Y & \boldsymbol{\varphi}_X'\boldsymbol{\varphi}_X\end{pmatrix}^{-1}
\begin{pmatrix}\boldsymbol{\varphi}_Y'\mathbf{Y}\\ \boldsymbol{\varphi}_X'\mathbf{Y}\end{pmatrix}. \qquad (4.8) \]

The relationship between $(\hat{\boldsymbol{\beta}}^{(\delta_1)\prime},\hat{\boldsymbol{\beta}}^{(\omega_0)\prime})'$ and $(\hat{\boldsymbol{\zeta}}^{(\delta_1)\prime},\hat{\boldsymbol{\zeta}}^{(\omega_0)\prime})'$ is
\[ \begin{pmatrix}\hat{\boldsymbol{\beta}}^{(\delta_1)}\\ \hat{\boldsymbol{\beta}}^{(\omega_0)}\end{pmatrix}=\boldsymbol{\Gamma}\begin{pmatrix}\hat{\boldsymbol{\zeta}}^{(\delta_1)}\\ \hat{\boldsymbol{\zeta}}^{(\omega_0)}\end{pmatrix}, \qquad (4.9) \]
where $\boldsymbol{\Gamma}$ is a $(2^{J+1}\times 2^{J+1})$ block diagonal matrix. The matrix $\boldsymbol{\Gamma}$ performs the transformation $(\hat\alpha^{(i)}_{00},\hat\beta^{(i)}_{00},\hat\beta^{(i)}_{10},\hat\beta^{(i)}_{11},\ldots,\hat\beta^{(i)}_{J-1,0},\ldots,\hat\beta^{(i)}_{J-1,\Delta})'=\boldsymbol{\Gamma}(\hat\zeta^{(i)}_{J,1},\ldots,\hat\zeta^{(i)}_{J,2^J})'$; so each coefficient estimate $\hat\beta^{(i)}_{jk}$, $i=\delta_1,\omega_0$, in (3.11) can be written as $\hat\beta^{(i)}_{jk}=\boldsymbol{\Gamma}_{i,jk}\hat{\boldsymbol{\zeta}}$, with $\|\boldsymbol{\Gamma}_{\delta_1,jk}\|_{L^2}=\|\boldsymbol{\Gamma}_{\omega_0,jk}\|_{L^2}=1$.

Equation (4.6) can be written in the form
\[ \mathbf{Y}=\boldsymbol{\Upsilon}\boldsymbol{\zeta}+\boldsymbol{\gamma}, \qquad (4.10) \]
with $\boldsymbol{\Upsilon}=[\boldsymbol{\varphi}_Y\ \boldsymbol{\varphi}_X]$, $\boldsymbol{\zeta}=(\boldsymbol{\zeta}^{(\delta_1)\prime},\boldsymbol{\zeta}^{(\omega_0)\prime})'$, and the vector $\boldsymbol{\gamma}$ as defined above. The least squares estimator of $\boldsymbol{\zeta}$ is given by
\[ \hat{\boldsymbol{\zeta}}=(\boldsymbol{\Upsilon}'\boldsymbol{\Upsilon})^{-1}\boldsymbol{\Upsilon}'\mathbf{Y}. \qquad (4.11) \]

We notice that the error term $\boldsymbol{\gamma}$ is not independent of the regressors $\boldsymbol{\Upsilon}$, which means that the estimator is biased, and its squared error is of order $O_p(2^J T^{-1/2}+\text{bias})$. This order can be improved, as shown below. To take care of the bias, we might use some robust procedure, as in Martin et al. [34], for example.

From (4.10) and (4.11), we obtain
\[ \hat{\boldsymbol{\zeta}}=(\boldsymbol{\Upsilon}'\boldsymbol{\Upsilon})^{-1}\boldsymbol{\Upsilon}'(\boldsymbol{\Upsilon}\boldsymbol{\zeta}+\boldsymbol{\gamma}). \qquad (4.12) \]
If we write $\boldsymbol{\gamma}=\boldsymbol{\varepsilon}+\mathbf{S}$, we get
\[ \hat{\boldsymbol{\zeta}}=(\boldsymbol{\Upsilon}'\boldsymbol{\Upsilon})^{-1}\boldsymbol{\Upsilon}'(\boldsymbol{\Upsilon}\boldsymbol{\zeta}+\boldsymbol{\varepsilon}+\mathbf{S}), \qquad (4.13) \]
where $\mathbf{S}$ is the $((T-1)\times 1)$ vector
\[ \mathbf{S}=\begin{pmatrix}
\sum_{j\ge J}\sum_{k\in I_j}\beta^{(\delta_1)}_{jk}\psi_{jk}\bigl(\tfrac{2}{T}\bigr)Y_{1,T}+\sum_{j\ge J}\sum_{k\in I_j}\beta^{(\omega_0)}_{jk}\psi_{jk}\bigl(\tfrac{2}{T}\bigr)X_{2,T}\\
\sum_{j\ge J}\sum_{k\in I_j}\beta^{(\delta_1)}_{jk}\psi_{jk}\bigl(\tfrac{3}{T}\bigr)Y_{2,T}+\sum_{j\ge J}\sum_{k\in I_j}\beta^{(\omega_0)}_{jk}\psi_{jk}\bigl(\tfrac{3}{T}\bigr)X_{3,T}\\
\vdots\\
\sum_{j\ge J}\sum_{k\in I_j}\beta^{(\delta_1)}_{jk}\psi_{jk}\bigl(\tfrac{T}{T}\bigr)Y_{T-1,T}+\sum_{j\ge J}\sum_{k\in I_j}\beta^{(\omega_0)}_{jk}\psi_{jk}\bigl(\tfrac{T}{T}\bigr)X_{T,T}
\end{pmatrix}. \qquad (4.14) \]

We now state results on the squared error and mean squared error of the linear empirical wavelet coefficients. The following assumptions are needed.
(A1) The wavelets $\phi$ and $\psi$ are compactly supported on $[0,1]$ and have continuous derivatives up to order $r>\ell$, with $\ell=\max_i\ell_i$.
(A2) In the estimation procedure, we first use a linear estimator, truncating the wavelet expansion at scale $2^J$. In order to obtain functions with the appropriate smoothness, $J$ is chosen such that $\tilde s_i>1$.
(A3) The matrix $\boldsymbol{\Upsilon}$ in (4.10) satisfies $E\|(\boldsymbol{\Upsilon}'\boldsymbol{\Upsilon})^{-1}\|_2^{2+\delta}=O(T^{-(2+\delta)})$, for some $\delta>0$.

The Daubechies Extremal Phase wavelets $D2N$, for an appropriate choice of $N$, satisfy (A1); see Härdle et al. [35] for details. The number of continuous derivatives is connected with the number of null moments of $\psi$. Also, the support of $\psi$ is contained in the interval $[-N+1,N]$.

Since $\beta_{jk}$ is the coefficient of the regressor $\psi_{jk}$, the squared error of the least squares estimator is $\|\hat\beta_{jk}-\beta_{jk}\|^2=O_p(T^{-1})$, which implies, for example, that the squared error $\|\hat\delta(u)-\delta(u)\|^2=O_p(2^J T^{-1/2}+\text{bias})$. We can improve this rate, as in the next result.

Proposition 4.1. Under Assumptions (A1)-(A3), we have that $\|\hat{\boldsymbol{\zeta}}-\boldsymbol{\zeta}\|^2=O_p\bigl(2^J T^{-1}\log T\bigr)$.

This proposition allows us to derive the rate of convergence of the mean square error of the original linear estimators of the wavelet coefficients.

Proposition 4.2. If Assumptions (A1)-(A3) hold, then $E\bigl(\hat\beta^{(i)}_{jk}-\beta^{(i)}_{jk}\bigr)^2=O(T^{-1})$, uniformly in $i$, $k$, and $j<J$.

5. Some Simulations

We now present three simulation examples for the linear estimates $\hat\delta_i(\cdot)$, $\hat\omega_i(\cdot)$ and the nonlinear estimates $\tilde\delta_i(\cdot)$, $\tilde\omega_i(\cdot)$, respectively. In the first, the coefficients are very smooth functions; the second presents an intermediate case; and in the third, the functions are piecewise constant.

The performance of the estimators is assessed via the square root of the mean squared error (RMSE), namely,
\[ \mathrm{RMSE}(\hat\delta_j)=\Biggl[\frac{1}{T-v}\sum_{t=v+1}^{T}\Bigl(\hat\delta_j\Bigl(\frac{t}{T}\Bigr)-\delta_j\Bigl(\frac{t}{T}\Bigr)\Bigr)^2\Biggr]^{1/2},\qquad \mathrm{RMSE}(\hat\omega_j)=\Biggl[\frac{1}{T-v}\sum_{t=v+1}^{T}\Bigl(\hat\omega_j\Bigl(\frac{t}{T}\Bigr)-\omega_j\Bigl(\frac{t}{T}\Bigr)\Bigr)^2\Biggr]^{1/2}, \qquad (5.1) \]
and similarly for the nonlinear estimators. Recall that $v=\max(m,n)$.
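Criterion (5.1) is straightforward to code. The function below (names are ours) evaluates an estimated coefficient curve and the true curve on the grid $t=v+1,\ldots,T$:

```python
import numpy as np

def rmse(est_fun, true_fun, T, v):
    """Root mean squared error (5.1) of an estimated coefficient curve
    against the true curve over t = v+1, ..., T."""
    u = np.arange(v + 1, T + 1) / T
    return np.sqrt(np.mean((est_fun(u) - true_fun(u)) ** 2))

# identical curves give RMSE 0; a constant offset c gives RMSE |c|
f = lambda u: np.sin(2 * np.pi * u)
g = lambda u: f(u) + 0.3
```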

Concerning the choice of the wavelet, we tried the Daubechies Extremal Phase D8 and D12, the Daubechies Least Asymmetric S8 and S12, and the Haar wavelets. The best results were obtained with the wavelet mentioned in each example. Of course, in a real problem, this cannot be done, since we do not know the true coefficient functions. In our experience, the Haar and D8 wavelets have proven useful.

Other important issues are the choice of $J$, the estimation of the noise variance $\sigma^2$, and the choice of the threshold parameter $\lambda$. Concerning $J$, we used the rule specified in Section 4, below (4.3), resulting in $J=6$ for all examples. The variance was estimated using the empirical wavelet coefficients at all scales. Hard thresholding was used, with the parameter chosen as $\lambda=\hat\sigma\sqrt{2\log 2^{J}}$.

In all cases, the input time series was generated from a time-varying AR(2) model, $X_{t,T}=a_1(t/T)X_{t-1,T}+a_2(t/T)X_{t-2,T}+\varsigma_t$, where the $\varsigma_t$ are independent, normally distributed random variables with zero mean and variance one, $t=1,2,\ldots,T$, and the coefficients are given by
\[ a_1(u)=\begin{cases}1.69, & 0<u\le 0.6,\\ 0.3, & 0.6<u\le 1,\end{cases}\qquad a_2(u)=-0.81,\quad 0<u\le 1. \qquad (5.2) \]
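The input series can be generated by direct recursion. In the sketch below, which is ours, we take $a_2(u)=-0.81$ throughout, the sign that makes the AR(2) recursion stable (characteristic roots of modulus 0.9):

```python
import numpy as np

def a1(u):
    # piecewise a_1(u) from (5.2): 1.69 on (0, 0.6], 0.3 on (0.6, 1]
    return np.where(u <= 0.6, 1.69, 0.3)

def simulate_input(T, seed=0):
    """Time-varying AR(2) input: X_t = a1(t/T) X_{t-1} - 0.81 X_{t-2} + zeta_t."""
    rng = np.random.default_rng(seed)
    zeta = rng.standard_normal(T)
    X = np.zeros(T)                 # zero initial conditions (our choice)
    for t in range(2, T):
        u = (t + 1) / T
        X[t] = a1(u) * X[t - 1] - 0.81 * X[t - 2] + zeta[t]
    return X

X = simulate_input(2048)
```

Because the complex characteristic roots have modulus $\sqrt{0.81}=0.9<1$ in both regimes, the simulated input stays bounded in probability.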

Example 5.1. Let $m=2$ and $n=0$, so the transfer function model is
\[ Y_{t,T}=\delta_1\Bigl(\frac{t}{T}\Bigr)Y_{t-1,T}+\delta_2\Bigl(\frac{t}{T}\Bigr)Y_{t-2,T}+\omega_0\Bigl(\frac{t}{T}\Bigr)X_{t,T}+\epsilon_t,\qquad t=1,2,\ldots,T. \qquad (5.3) \]

We choose $\omega_0(u)=0.9\cos(2\pi u+\pi)$, $\delta_1(u)=0.5\sin(2\pi u)$, and $\delta_2(u)=0.4\cos(2\pi u+\pi/4)$.

For the simulation, we generated 𝑇=2048 data values for 𝑋𝑡,𝑇 and 𝑌𝑡,𝑇 with 5000 replicates. For this example, the Daubechies D12 wavelet is employed for our estimation procedure.

Figure 1 presents the input and output series, and Figure 2 presents the true and estimated coefficients from a typical sample. The boxplots of the 5000 RMSE values are presented in Figure 3. We see that both types of estimators perform well, but the nonlinear estimator clearly performs better, both visually and in the RMSE sense.

In Figure 4, we use bootstrap with 𝐵=300 replications to see that the true transfer function coefficients lie within the 95% confidence regions. See Section 6 for the description of the bootstrap procedure.

In Figures 17 and 18 of the appendix, we show histograms of the linear and nonlinear estimators of the parameter $\delta_1$ at some points of the interval $[0,1]$. Notice that a normal approximation for the estimators is a possibility to be considered from a theoretical point of view.

Example 5.2. Let $m=1$ and $n=0$, so the transfer function model is
\[ Y_{t,T}=\delta_1\Bigl(\frac{t}{T}\Bigr)Y_{t-1,T}+\omega_0\Bigl(\frac{t}{T}\Bigr)X_{t,T}+\epsilon_t,\qquad t=1,2,\ldots,T. \qquad (5.4) \]

We choose $\omega_0(u)=9\cos(2\pi u+\pi)$, $u\in(0,1]$, and
\[ \delta_1(u)=\begin{cases}0.8\cos\bigl(1.5\cos(4\pi u+\pi)\bigr), & 0<u\le 0.25 \text{ or } 0.75<u\le 1,\\ 0.8\cos\bigl(3\cos(4\pi u+\pi/2)\bigr), & 0.25<u\le 0.75.\end{cases} \]

As in Example 5.1, we generated $T=2048$ data values for $X_{t,T}$ and $Y_{t,T}$ with 5000 replicates; here, the Daubechies Extremal Phase wavelet D8 is employed.

From a typical sample, the input and output series are shown in Figure 5 and the estimated coefficients in Figure 6. The boxplots of the 5000 RMSE values are presented in Figure 7. We see that, visually, both types of estimators perform well, but the thresholded estimator is better for the smoother parameter $\omega_0$ in terms of RMSE. This is a well-known fact; see, for example, Percival and Walden [11]. In Figure 8, we use the bootstrap with $B=300$ replications to see that the true transfer function coefficients lie within the 95% confidence regions.

In Figures 6(c) and 6(e), we see some unwanted artifacts near the discontinuity points. These do not disappear, even with expansions with many terms, but this Gibbs effect is less severe than in Fourier expansions. See Vidakovic [10] and Percival and Walden [36] for details.

In Figures 19 and 20 of the appendix, we show histograms for linear and nonlinear estimators of the parameter 𝛿1 at some points of the interval [0,1].

Example 5.3. Let $m=1$ and $n=0$, so the transfer function model is as in Example 5.2.

We choose the coefficients $\omega_0(u)$ and $\delta_1(u)$ as piecewise constant functions: $\omega_0(u)=2$ for $0<u\le 0.5$ and $\omega_0(u)=-2$ for $0.5<u\le 1$; $\delta_1(u)=0.6$ for $0<u\le 0.25$ or $0.5<u\le 0.75$, and $\delta_1(u)=-0.5$ for $0.25<u\le 0.5$ or $0.75<u\le 1$.

We simulated 𝑇=2048 data values for 𝑋𝑡,𝑇 and 𝑌𝑡,𝑇 with 5000 replicates, and, due to the nature of the functions, the Haar wavelet is appropriate in this situation.

Figure 9 presents the input and output series, and Figure 10 presents the true and estimated coefficients from a typical sample. The boxplots of the 5000 RMSE values are presented in Figure 11. We see that both types of estimators perform well, but the nonlinear estimator clearly performs better, both visually and in the RMSE sense.

In Figure 12, we use bootstrap with 𝐵=300 replications to see that the true transfer function coefficients lie within the 95% confidence regions.

In Figures 21 and 22 of the appendix, we present histograms for linear and nonlinear estimators of 𝛿1 at some points of the interval [0,1].

6. An Application

The data that we analyze are hourly wind speeds recorded from 00:00 on 6th April 1995 at two Welsh Meteorological Office stations: Valley and Aberporth, with 2048 observations. Valley is located approximately 120 km north of Aberporth, and they are mostly separated by Cardigan Bay. Part of these data were analysed by Nason and Sapatinas [1]. Figure 13 shows the data we analyze. We are grateful to N&M Wind Power for providing the data.

In this application, our aim is to model Valley's wind speeds, $\{Y_{t,T}\}$, in terms of its lagged values and of present and lagged values of the wind speeds at Aberporth, $\{X_{t,T}\}$. We should remember that both series are nonstationary, so classical methods should not be used. We fitted four potential models, namely,
\[ M_1:\ y_{t,T}=\delta_1\Bigl(\frac{t}{T}\Bigr)y_{t-1,T}+\omega_0\Bigl(\frac{t}{T}\Bigr)x_{t,T}+\epsilon_t, \]
\[ M_2:\ y_{t,T}=\delta_1\Bigl(\frac{t}{T}\Bigr)y_{t-1,T}+\omega_0\Bigl(\frac{t}{T}\Bigr)x_{t,T}+\omega_1\Bigl(\frac{t}{T}\Bigr)x_{t-1,T}+\epsilon_t, \]
\[ M_3:\ y_{t,T}=\delta_1\Bigl(\frac{t}{T}\Bigr)y_{t-1,T}+\delta_2\Bigl(\frac{t}{T}\Bigr)y_{t-2,T}+\omega_0\Bigl(\frac{t}{T}\Bigr)x_{t,T}+\epsilon_t, \]
\[ M_4:\ y_{t,T}=\delta_1\Bigl(\frac{t}{T}\Bigr)y_{t-1,T}+\delta_2\Bigl(\frac{t}{T}\Bigr)y_{t-2,T}+\omega_0\Bigl(\frac{t}{T}\Bigr)x_{t,T}+\omega_1\Bigl(\frac{t}{T}\Bigr)x_{t-1,T}+\epsilon_t. \qquad (6.1) \]

The Daubechies Extremal Phase D8 wavelet was used in the computations. Model $M_4$ was only marginally superior (in terms of RMSE) to model $M_3$, so it was discarded. The mean residual sums of squares for $M_1$, $M_2$, and $M_3$ are 0.02258, 0.02186, and 0.02177, respectively, for linear estimation; for nonlinear estimation, the figures are 0.02391, 0.02359, and 0.02178. The final coefficient estimates for model $M_3$ are shown in Figure 14.

For Model 𝑀3, Figures 15 and 16 show the 95% confidence regions based on bootstrap, using linear and nonlinear estimates, respectively.

The bootstrap procedure works as follows:
(a) using the real series, we obtain the estimated transfer function coefficients;
(b) we obtain the fitted series $\hat Y_{t,T}$ using these estimated coefficients;
(c) we obtain the residuals from the fitted model;
(d) we resample these residuals and obtain bootstrap samples of the $Y_{t,T}$ series;
(e) we obtain bootstrap estimators of the transfer function coefficients from each bootstrap sample.
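A minimal version of steps (a)-(e) can be sketched for a constant-coefficient analogue of $M_1$ (our simplification; the paper bootstraps the wavelet-based fit, and all names below are ours):

```python
import numpy as np

def fit_m1(Y, X):
    """Steps (a)-(b): least squares fit of y_t = delta1*y_{t-1} + omega0*x_t + eps_t
    (constant coefficients, for illustration); returns estimates and fitted values."""
    D = np.column_stack([Y[:-1], X[1:]])
    theta, *_ = np.linalg.lstsq(D, Y[1:], rcond=None)
    return theta, D @ theta

def residual_bootstrap(Y, X, B=200, seed=0):
    """Steps (c)-(e): residuals, resampling, rebuilt series, refitted coefficients."""
    rng = np.random.default_rng(seed)
    theta, fitted = fit_m1(Y, X)
    resid = Y[1:] - fitted
    resid = resid - resid.mean()              # centre the residuals
    draws = np.empty((B, 2))
    for b in range(B):
        e = rng.choice(resid, size=len(resid), replace=True)
        Ys = np.zeros_like(Y)
        Ys[0] = Y[0]
        for t in range(1, len(Y)):            # rebuild Y* from the fitted model
            Ys[t] = theta[0] * Ys[t - 1] + theta[1] * X[t] + e[t - 1]
        draws[b] = fit_m1(Ys, X)[0]
    return theta, draws

# simulate illustrative data, then form percentile confidence intervals
rng = np.random.default_rng(3)
T = 512
X = rng.standard_normal(T)
Y = np.zeros(T)
for t in range(1, T):
    Y[t] = 0.5 * Y[t - 1] + 0.8 * X[t] + rng.standard_normal()
theta, draws = residual_bootstrap(Y, X, B=100)
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
```

The percentile bounds `lo` and `hi` give a pointwise 95% confidence interval for each coefficient, the constant-coefficient analogue of the confidence regions in Figures 15 and 16.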

We used $B=300$ bootstrap replications of $\hat\delta_1$, $\hat\delta_2$, $\hat\omega_0$ and of $\tilde\delta_1$, $\tilde\delta_2$, $\tilde\omega_0$ to obtain bootstrap standard errors and the 95% confidence regions. We also fitted model $M_3$ with constant coefficients, giving an MSE value of 0.02354 and indicating that the model with varying coefficients is better. We see that it is not possible to draw constant lines lying entirely within these regions, showing that the coefficients are, in fact, time-varying.

7. Further Remarks

In this paper, we have proposed an estimation procedure for a transfer function model with time-varying coefficients. Basically it is a least squares procedure, with the use of wavelets to expand the function coefficients. Firstly, linear estimators for the time-varying coefficients are obtained, truncating the wavelet expansion at an appropriate scale. Then thresholds are applied to the empirical wavelet coefficients, and inverse wavelet transformation gives the nonlinear smoothed estimators for the function coefficients.

Simulations have shown that this procedure leads to estimators with a good performance. Some statistical properties for the empirical wavelet coefficients in the case of linear estimators were presented.

Further studies are needed on the estimation of the variance and asymptotic normality of the empirical wavelet coefficients in the linear case, on the rate of convergence for the risk of the nonlinear estimator over the smoothness classes 𝑖 and on issues related to the identification, diagnostics, and forecasting for these nonstationary models.

The issue of forecasting has been discussed in the literature for some special and related models. Fryzlewicz et al. [37] addressed the problem of how to forecast locally stationary wavelet processes, in the sense of Nason et al. [4], by means of nondecimated wavelets. Van Bellegem and von Sachs [38] consider a special modulated locally stationary process, where a stationary process is modulated by a time-varying variance, so the problem reduces to forecasting this variance.

Concerning the asymptotic distribution of the empirical wavelet estimators in the linear case, we believe that this can be obtained under further assumptions, namely, on the cumulants of the $\varepsilon_t$, on the thresholds used, and on the roots of the polynomials associated with model (1.1). This will be the subject of further research.

Appendix

For more details, see Figures 17, 18, 19, 20, 21, and 22.

Acknowledgments

The authors are grateful to two anonymous referees, whose comments substantially improved a previous version of the paper. The first author acknowledges the support of CNPq, and the last three authors acknowledge the support of Fapesp (Grant 2008/51097-6) and Capes/Procad (Grant 177/2007).