Journal of Probability and Statistics
Volume 2012 (2012), Article ID 738636, 15 pages
http://dx.doi.org/10.1155/2012/738636
Research Article

G-Filtering Nonstationary Time Series

1Biostatistics Branch, NIH/NIEHS (National Institutes of Health/National Institute of Environmental Health Sciences), Research Triangle Park, NC 27709, USA
2Department of Mathematics, Odessa College, Odessa, TX 79764, USA
3Department of Statistical Science, Southern Methodist University, Dallas, TX 75205, USA

Received 16 August 2011; Revised 15 November 2011; Accepted 15 November 2011

Academic Editor: Shein-Chung Chow

Copyright © 2012 Mengyuan Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The classical linear filter can successfully extract components from a time series whose frequency content does not change with time, as well as from nonstationary time series whose time-varying frequency (TVF) components do not overlap in time. However, for many types of nonstationary time series, the TVF components often overlap in time. In such a situation, the classical linear filtering method fails to extract components from the original process. In this paper, we introduce and theoretically develop the G-filter, which is based on a time-deformation technique. Simulation examples and a real bat echolocation example illustrate that the G-filter can successfully filter a G-stationary process whose TVF components overlap in time.

1. Introduction and Background

In this paper we develop filters for G-stationary processes, which are nonstationary processes with time-varying frequency (TVF) behavior. We begin with a very brief discussion of linear filters since the filters we develop here are generalizations of the linear filter. The traditional linear filter is defined as $Y(t) = X * a(t) = \int_{-\infty}^{+\infty} X(t-u)\,a(u)\,du$, where $t \in (-\infty, \infty)$. Letting $P_X(f)$ and $P_Y(f)$ denote the power spectra of the stationary input and output processes $X(t)$ and $Y(t)$, respectively, the fundamental filtering theorem states that $P_Y(f) = |A(f)|^2 P_X(f)$, where $A(f) = \int_{-\infty}^{\infty} a(t)\,e^{-2\pi i f t}\,dt$ is the frequency response function [1]. The importance of this result is that it provides information concerning how the filter, as represented by the squared frequency response function $|A(f)|^2$, impacts the frequency behavior of the output signal. This can be useful in designing filters with specific properties. Linear filters are commonly used to “filter out” certain frequencies from a time series realization. In this discussion we will consider low-pass and high-pass filters, which are designed to remove the high-frequency and the low-frequency behavior, respectively. We will use the popular Butterworth filter (see [2]). The squared frequency response function of an $N$th-order low-pass Butterworth filter is given by
$|A(f)|^2 = \frac{1}{1 + (f/f_c)^{2N}},$ (1.1)
where $f_c$ is the cutoff frequency and $N$ is the order of the filter. The frequency response function for a high-pass Butterworth filter has a similar form. Traditional linear filters such as the Butterworth filter can successfully extract components from stationary processes, where the frequency behavior of the data does not change with time. However, for many nonstationary time series with time-varying frequency (TVF) behavior, the time-invariant assumption of traditional filters often results in a failure to extract the TVF components from such processes. Example 1.1 examines the application of the Butterworth filter to data containing time-varying frequencies.
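For concreteness, the squared frequency response in (1.1) can be evaluated directly. The short sketch below (an illustration only, with frequency expressed in cycles per sampling interval so that $f$ ranges over $(0, .5]$) shows how increasing the order $N$ sharpens the transition at the cutoff $f_c$.

```python
# Minimal sketch of (1.1): squared frequency response of an Nth-order
# low-pass Butterworth filter.
import numpy as np

def butterworth_sq_response(f, fc, N):
    """|A(f)|^2 = 1 / (1 + (f / fc)^(2N))."""
    return 1.0 / (1.0 + (f / fc) ** (2 * N))

f = np.array([0.02, 0.06, 0.10, 0.15, 0.22, 0.40])
for N in (1, 3, 6):
    print(f"N = {N}:", np.round(butterworth_sq_response(f, fc=0.15, N=N), 3))
# As N increases, |A(f)|^2 stays near 1 below f_c = 0.15 and drops more
# sharply toward 0 above it; at f = f_c it equals 0.5 for every N.
```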

Example 1.1 (linear filter applied to TVF data). Figure 1(a) shows a realization of length $n = 400$ from the model
$X(t) = \cos(36\pi \ln(t + 175) + \psi_1) + .5\cos(85\pi \ln(t + 175) + \psi_2),$ (1.2)
where $\psi_1 = 1$ and $\psi_2 = 1.59$. The TVF behavior is clear, both in the data in Figure 1(a) and in the two components shown in Figures 1(b) and 1(c). However, it should be noted that the frequency content at the beginning of the “low-frequency” component (Figure 1(b)) is about the same as that toward the end of the “high-frequency” component seen in Figure 1(c). For this reason, a Butterworth-type filter will not be able to completely separate the components. In Figures 1(d) and 1(e) we show the results of 3rd-order low-pass Butterworth filters applied to the data in Figure 1(a) with $f_c = .15$ and $.06$, respectively. In Figure 1(d) the cutoff frequency $f_c = .15$ does a good job of extracting the low-frequency behavior for time $t \leq 100$, but toward the end of the signal, both components pass through the filter. This occurs because the frequency behavior (which decreases with time) in the “high-frequency” component has decreased to the point that it passes through the filter with $f_c = .15$. Figure 1(e) shows the effects of lowering the cutoff frequency to $f_c = .06$. For $t \geq 250$ the filter does a reasonable job of extracting only the low-frequency component. However, the effect of lowering the cutoff frequency is that neither the high- nor the low-frequency component passes through the filter toward the beginning of the data. In fact, because of the overlap in frequency content between the first part of the signal in Figure 1(b) and the latter part of the signal in Figure 1(c), no cutoff frequency will be able to separate these two components using a filter such as the Butterworth.
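A sketch of this experiment is given below. It assumes a unit sampling interval and uses SciPy's standard Butterworth design; the zero-phase filtering via filtfilt is a convenience choice here and may differ in detail from the filtering used to produce Figure 1.

```python
# Sketch of Example 1.1: generate the signal in (1.2) and apply 3rd-order
# low-pass Butterworth filters with cutoffs 0.15 and 0.06 (cycles per sample).
import numpy as np
from scipy.signal import butter, filtfilt

t = np.arange(1, 401)                                      # n = 400
x = (np.cos(36 * np.pi * np.log(t + 175) + 1.0)            # psi_1 = 1
     + 0.5 * np.cos(85 * np.pi * np.log(t + 175) + 1.59))  # psi_2 = 1.59

def lowpass(x, fc, order=3, fs=1.0):
    b, a = butter(order, fc, btype="low", fs=fs)
    return filtfilt(b, a, x)                               # zero-phase filtering

y_15 = lowpass(x, fc=0.15)   # passes both components toward the end of the record
y_06 = lowpass(x, fc=0.06)   # blocks both components toward the beginning
```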
The data set in Example 1.1 is a realization from an M-stationary process, which is a special case of the G-stationary processes that have been introduced to generalize the concept of stationarity [3]. As illustrated above, current time-invariant linear filtering would have to be adjusted in order to filter data from a G-stationary process. Baraniuk and Jones [4] used the theory of unitary equivalence systems for designing generic classes of signal analysis and processing systems based on alternate coordinate systems. Applying the principle of unitary equivalence systems, the filtering procedure is to (1) preprocess the data, (2) apply a traditional time-invariant filtering method, and (3) convert the results to the original time scale. To solve the filtering problem for G-stationary processes, in this paper we define the G-filter and illustrate its application on signals that originate from G-stationary processes.
The paper is organized as follows. In Section 2 we discuss G-stationary models, with focus on strategies for fitting the $G(\lambda)$-stationary models of Jiang et al. [3]. In Section 3 we present results concerning G-filters designed for filtering data from G-stationary models, and in Section 4 we introduce a straightforward implementation of the G-filter for extracting components such as those shown in Figures 1(b) and 1(c).

Figure 1: (a) Realization from signal in (1.2), (b) true low-frequency component, (c) true high-frequency component, (d) output from low-pass filter with 𝑓𝑐=.15, and (e) output from low-pass filter with 𝑓𝑐=.06.

2. Time Deformation Method and the G-Stationary Process

A stationary process is a stochastic process whose distribution does not change with time. A large volume of statistical theory and methods has been developed for stationary time series. However, many processes are nonstationary, in that the distribution changes with time. Several techniques have been proposed for analyzing such nonstationary processes using time deformation. For example, Girardin and Senoussi [5] presented a framework based on harmonic analysis for semigroups, in which they defined semigroup stationary processes, a class that includes locally stationary processes and time-warped processes. Clerc and Mallat [6] derived an estimator of the deformation by showing that the deformed autocovariance satisfies a transport equation at small scales. Among the different types of nonstationary processes, special attention has been given to warped nonstationary processes, which are obtained by deforming the index set of stationary processes. For example, self-similar processes are obtained by applying the Lamperti operator to the time scale of a stationary process [7]. Hlawatsch and Tauböck [8] and Hlawatsch et al. [9] discuss the use of time-frequency displacement operators to displace signals in the time-frequency plane. See also Sampson and Guttorp [10] and Perrin and Senoussi [11].

Gray and Zhang [12] base time deformation on a log transformation of the time axis. They refer to processes that are stationary on the log scale as M-stationary processes and show that the resulting spectral representation is based on the Mellin transform. Flandrin et al. [7] point out that the log transformation of Gray and Zhang [12] is a special case of the Lamperti transform. Gray et al. [13] extend the M-stationary model to analyze data collected at discrete time points. Jiang et al. [3] defined a more general family of nonstationary processes called G-stationary processes whose frequencies monotonically change with time. See also Woodward et al. [14, Chapter  13]. In this paper, we investigate the use of time deformation based on G-stationary models to filter TVF data. We refer to the resulting filters as G-filters.

2.1. G-Stationary Processes

Definition 2.1. Let $\{X(t); t \in S\}$ be a stochastic process defined on $S$, let $u = g(t)$ be a mapping of $S$ onto a set $R_g$, and let $g^{-1}$ denote an inverse. Then $X(t)$ is said to be a G-stationary process if (i) $E[X(t)] = \mu$, (ii) $\mathrm{Var}[X(t)] = \sigma^2 < \infty$, and (iii) $E[(X(t) - \mu)(X(g^{-1}(g(t) + g(\tau))) - \mu)] = R_X(\tau)$.

Although not immediately obvious, this definition basically says the following. Let $g$ be a transformation of the time axis and let $g(t) = u$ and $t = g^{-1}(u)$. Consequently, $X(t) = X(g^{-1}(u)) = Y(u)$, where $Y = X \circ g^{-1}$. Letting $\xi = g(\tau)$, then
$X(g^{-1}(g(t) + g(\tau))) = Y(g(t) + g(\tau)) = Y(u + \xi).$ (2.1)
So, it follows that
$R_X(\tau) = E[(X(t) - \mu)(X(g^{-1}(g(t) + g(\tau))) - \mu)] = E[(Y(u) - \mu)(Y(u + \xi) - \mu)],$ (2.2)
that is, Definition 2.1 gives the conditions on $X(t)$ and $g(t)$ that are necessary and sufficient for the dual process, $Y(u)$, to be weakly stationary. The implication is that, whereas $X(t)$ may not be stationary on the original index set (on $t$), it can be mapped onto a new (deformed) time scale on which it is stationary. We will refer to $Y(u)$ as the stationary dual of $X(t)$, and we let $C_Y(\xi) = E[(Y(u) - \mu)(Y(u + \xi) - \mu)]$ denote the autocovariance of the dual process. $R_X(\tau)$ is called the G-stationary autocovariance, and clearly $C_Y(\xi) = R_X(\tau)$. The G-spectrum is defined in Definition 2.2.
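As a concrete illustration of condition (iii), consider the logarithmic deformation used for M-stationary processes: with $g(t) = \ln t$ on $t \in (0, \infty)$ we have $g^{-1}(u) = e^u$, so that $g^{-1}(g(t) + g(\tau)) = e^{\ln t + \ln \tau} = t\tau$, and condition (iii) reads $E[(X(t) - \mu)(X(t\tau) - \mu)] = R_X(\tau)$. That is, the covariance depends only on the multiplicative lag $\tau$, which is the defining property of an M-stationary process [12].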

Definition 2.2. Let $\{X(t), t \in (a,b)\}$ be a G-stationary process and let $\{Y(u), u \in (-\infty, \infty)\}$ be the corresponding dual with respect to the time-deformation function $g(t)$. The G-spectrum of the G-stationary process $X(t)$ is defined as the spectrum of the dual process, that is,
$G_X(f; g) = \int_{-\infty}^{\infty} e^{-2\pi i f \xi}\,R_Y(\xi)\,d\xi.$ (2.3)
If the mapping $g(t)$ from the original space $t \in (a,b)$ to the dual space $u \in (-\infty, \infty)$ is onto, then
$G_X(f; g) = \int_{-\infty}^{\infty} e^{-2\pi i f \xi}\,R_Y(\xi)\,d\xi = \int_{-\infty}^{\infty} e^{-2\pi i f \xi}\,C_X(g^{-1}(\xi); g)\,d\xi = \int_a^b e^{-2\pi i f g(\tau)}\,C_X(\tau; g)\,g'(\tau)\,d\tau.$ (2.4)
When $g(t) = at + b$, $t \in (-\infty, \infty)$, the G-stationary process $X(t)$ is simply the traditional weakly stationary process, and when $g(t) = \ln(t)$, $t \in (0, \infty)$, $X(t)$ is called an M-stationary process [13]. When $g(t)$ is the Box-Cox transformation
$g(t) = \frac{t^{\lambda} - 1}{\lambda}, \quad t \in (0, \infty),$ (2.5)
then $X(t)$ is called a $G(\lambda)$-stationary process [3]. When $g(t) = at^2 + bt$, $t \in (0, \infty)$, with $a > 0$ and $b \geq 0$, then $X(t)$ is called a linear chirp stationary process. See Liu [15] and Robertson et al. [16].
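Note that the logarithmic case is the limiting case of (2.5): since $\lim_{\lambda \to 0}(t^{\lambda} - 1)/\lambda = \ln t$, the choice $\lambda = 0$ in the $G(\lambda)$ family corresponds to the logarithmic deformation and hence to M-stationarity (this is the convention used in Example 2.3 below, where the fitted value $\lambda = 0$ is labeled M-stationary).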

2.2. General Strategy for Analyzing G-Stationary Processes

We give the following outline for analyzing G-stationary processes.
(1) Transform the time axis to obtain a stationary dual realization (we discuss this below).
(2) Analyze the transformed (dual) realization using methods for stationary time series. For example, sometimes this is done by fitting an AR($p$) or an ARMA($p,q$) model to the dual data and then computing forecasts, spectral estimates, and so forth, as desired for the problem at hand (in the current setting we will be filtering the dual data).
(3) Transform back to the original time scale as needed.

Step 1 (transform the time axis to obtain a stationary dual realization). Finding a time transformation, $u = g(t)$, is equivalent to determining a sampling scheme such that $X(t_k) = X(g^{-1}(u_k)) = Y_k$, where $Y_k$ is stationary. That is, for a G-stationary process, the stationary dual process is obtained theoretically by sampling $X(t)$ at the points $t = t_k = g^{-1}(u_k)$. In general, however, the observed data will have been obtained at equally spaced points and are not available at the points $t_k$. Interpolation has primarily been used to deal with this problem. See Gray et al. [13], Jiang et al. [3], and Woodward et al. [14]. For an approach based on Kalman filtering, see Wang et al. [17].
In the following discussion we describe techniques for fitting a $G(\lambda)$-stationary model to observed TVF data. The $G(\lambda)$ model is sufficiently general to include TVF data with either increasing or decreasing frequency behavior. The values, $t_k$, needed for obtaining the stationary dual based on the discrete Box-Cox time transformation, $k = (t_k^{\lambda} - 1)/(\Delta_n \lambda) - \xi$, are $t_k = ((k + \xi)\Delta_n \lambda + 1)^{1/\lambda}$, where $(\xi \Delta_n \lambda + 1)^{1/\lambda}$ is called the realization offset and $\Delta_n$ is the sampling increment. Jiang et al. [3] employ a search routine to determine the values of $\lambda$ and offset that produce the “most stationary” dual. For each set of $\lambda$ and offset values considered in the search, the data, $X(t_k)$, $k = 1, 2, \ldots$, are approximated at the $t_k$'s using interpolation. By then indexing on $k$, the dual realization associated with the given $\lambda$ and offset is obtained, and for each combination of $\lambda$ and offset, the resulting dual realization is checked for stationarity. Jiang et al. [3] suggest measuring stationarity by examining the characteristics (e.g., sample autocorrelations) of the first and second halves of the transformed data. They employ a Q-statistic for measuring the difference between the sample autocorrelations of the two halves. This measure is based on the fact that the correlation structure under stationarity stays constant across the realization. The software, GWS, written in S+, can be used to perform this search procedure, and it is available from the authors at the website http://www.texasoft.com/atsa. This software can be used to fit a $G(\lambda)$ model to a set of data, and it provides methods for spectral analysis, forecasting, and so forth. In the examples here involving analysis of TVF data, we use the GWS software package.
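The sketch below illustrates Step 1 under the Box-Cox deformation. It is a simplification and not the GWS procedure: instead of the $(\Delta_n, \xi)$ parameterization above, it places an equally spaced grid on the deformed time scale spanning the observed record, assumes that the offset enters as a shift of the time origin (process time = observed time + offset), and uses linear interpolation. For instance, g_lambda_dual(x, lam=0.0, offset=175.0) would give an approximation to the dual realization discussed in Example 2.3 below.

```python
# Sketch of Step 1 for a G(lambda) model: deform time with the Box-Cox
# transformation and interpolate the equally spaced data at the deformed
# sampling points to obtain an approximate stationary dual realization.
import numpy as np

def g_lambda_dual(x, lam, offset, n_dual=None):
    """x[0..n-1] observed at t = 1, ..., n; returns (dual series Y_k, t_k)."""
    n = len(x)
    n_dual = n if n_dual is None else n_dual
    t_obs = np.arange(1, n + 1, dtype=float)
    if abs(lam) < 1e-12:                        # lambda = 0: logarithmic (M-stationary) case
        g, g_inv = np.log, np.exp
    else:
        g = lambda t: (t ** lam - 1.0) / lam
        g_inv = lambda u: (lam * u + 1.0) ** (1.0 / lam)
    # equally spaced grid on the deformed (dual) time scale
    u = np.linspace(g(t_obs[0] + offset), g(t_obs[-1] + offset), n_dual)
    t_k = g_inv(u) - offset                     # deformed sampling points, original scale
    y_dual = np.interp(t_k, t_obs, x)           # interpolated dual realization
    return y_dual, t_k
```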

Example 2.3 (G($\lambda$)-analysis of the TVF data in Example 1.1). In this example we perform $G(\lambda)$ analysis on the time series discussed in Example 1.1. We also use Wigner-Ville plots, which display the time-frequency behavior in the data by computing inverse Fourier transforms of windowed versions of the autocovariance function (see [14, 18, 19]). As previously noted, $X(t)$ in (1.2) is an M-stationary process. Figure 2(a) shows the data previously shown in Figure 1(a), and in Figure 2(c) we show the associated Parzen spectral density. Figure 2(c) is a so-called “spread spectrum” showing a wide range of frequency behavior between about $f = .06$ and $f = .22$, which is caused by the frequency changes in the data that were noted in Example 1.1. The frequency change can be visualized in the Wigner-Ville plot in Figure 2(e), where darker shading corresponds to locations of stronger frequency behavior. For example, the very pronounced lower TVF component seen in Figure 2(a) appears as the lower “strip,” which is at about $f = .1$ at the beginning of the data, whereas by $t = 300$ the period length associated with the lower TVF component is about 20 ($f = .05$), which is visually consistent with the component shown in Figure 1(b). The higher TVF component, represented by a lighter strip, indicates a frequency of about $f = .22$ (periods of about 5) early in the data, which decreases to about $f = .1$ (periods of about 10) by about $t = 300$. Again, this is consistent with Figure 1(c). The checkered pattern between these two strips is interference and is not indicative of strong frequency behavior. The Wigner-Ville plot visually illustrates the fact mentioned in Example 1.1, that the lower-frequency behavior (the bottom strip) has higher frequency at the beginning of the data than does the higher-frequency component (upper strip) at the end of the data set. That is, a horizontal line cannot be drawn that stays entirely within the two strips. Using the GWS software, it is seen that a $G(\lambda)$ transformation with $\lambda = 0$ (M-stationary) and offset 175 is a reasonable choice for producing a stationary dual. The dual data based on this transformation are given in Figure 2(b). The high-frequency and low-frequency behavior is similar to that seen in Figure 2(a), except that neither frequency displays time-varying behavior. The corresponding spectral density in Figure 2(d) shows two distinct peaks (one near .07 and the other near .13). That is, the spectral density of the transformed data (i.e., the G-spectral density) clearly shows that there are two dominant “frequency” components in the data. The Wigner-Ville plot in Figure 2(f) shows two parallel lines, one at about $f = .07$ and another at about $f = .13$. This plot conveys the fact that the frequency behavior is not changing with time, which is consistent with stationary data.
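For a quick look at this time-frequency structure without specialized software, an ordinary spectrogram can serve as a rough stand-in for the Wigner-Ville plots of Figure 2 (a stand-in only; the paper's plots are Wigner-Ville plots computed from windowed autocovariances). The sketch below reuses the g_lambda_dual function sketched in Section 2.2.

```python
# Rough time-frequency check: spectrograms of the TVF signal in (1.2) and of
# its approximate dual (lambda = 0, offset 175). The original should show
# drifting ridges; the dual should show two roughly constant frequencies.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

t = np.arange(1, 401)
x = (np.cos(36 * np.pi * np.log(t + 175) + 1.0)
     + 0.5 * np.cos(85 * np.pi * np.log(t + 175) + 1.59))
y_dual, _ = g_lambda_dual(x, lam=0.0, offset=175.0)

fig, axes = plt.subplots(1, 2, figsize=(10, 3))
for ax, series, title in zip(axes, (x, y_dual), ("original", "dual")):
    f, tt, S = spectrogram(series, fs=1.0, nperseg=64, noverlap=56)
    ax.pcolormesh(tt, f, S, shading="auto")
    ax.set_title(title); ax.set_xlabel("time index"); ax.set_ylabel("f")
plt.tight_layout(); plt.show()
```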

Figure 2: (a) Realization from signal in (1.2), (b) dual data based on 𝜆=0 and offset 200, (c) Parzen spectral density for data in (a), (d) Parzen spectral density for dual data in (b), (e) Wigner-Ville plot for data in (a), and (f) Wigner-Ville plot for data in (b).

3. The G-Filter

In this section we define the G-filter for filtering G-stationary data. The G-filter can be viewed as a specific example of a unitary equivalence system for filtering G-stationary signals (see [4]).

Definition 3.1. Given a stochastic process $\{X(t), t \in (a,b)\}$ and an impulse response function $h(t)$, $t \in (a,b)$, the G-filter or G-convolution, denoted $h \circledast X(t)$, is defined as
$Y(t) = h \circledast X(t) = X_d * h_d(g(t)),$ (3.1)
where $X_d(u) = X(g^{-1}(u))$ and $h_d(u) = h(g^{-1}(u))$, $u \in (-\infty, \infty)$, are the duals of $X(t)$ and $h(t)$ with respect to the time-deformation function $g(t)$, and $X_d * h_d(t)$ is the usual convolution, $X_d * h_d(t) = \int_{-\infty}^{\infty} X_d(\tau)\,h_d(t - \tau)\,d\tau$.

Theorem 3.2. If the mapping $g(t)$ from the original space $t \in (a,b)$ to the dual space $u \in (-\infty, \infty)$ is onto, then
$Y(t) = h \circledast X(t) = \int_a^b X(v)\,h(g^{-1}(g(t) - g(v)))\,g'(v)\,dv.$ (3.2)

Proof. Consider the following:
$Y(t) = h \circledast X(t) = \int_{-\infty}^{\infty} X_d(u)\,h_d(g(t) - u)\,du = \int_a^b X_d(g(v))\,h_d(g(t) - g(v))\,dg(v) = \int_a^b X(v)\,h(g^{-1}(g(t) - g(v)))\,dg(v) = \int_a^b X(v)\,h(g^{-1}(g(t) - g(v)))\,g'(v)\,dv.$ (3.3)

Corollary 3.3. Since $X_d * h_d(u) = h_d * X_d(u)$, where $u = g(t)$, it follows immediately that $h \circledast X(t) = X \circledast h(t)$.

Theorem 3.4. Let $\{X(t), t \in (a,b)\}$ be a G-stationary input process with time-deformation function $g(t)$. Then
(a) the output process $Y(t) = h \circledast X(t)$, $t \in (a,b)$, where $h(t)$ is the impulse response function, is also G-stationary;
(b) the G-spectra of $X(t)$ and $Y(t)$ satisfy
$G_Y(f; g) = |W(f; g)|^2\,G_X(f; g),$ (3.4)
where $W(f; g) = A_d(f) = \int_{-\infty}^{\infty} e^{-2\pi i f u}\,h_d(u)\,du$ is the frequency response function of $h_d(u)$.

Proof. (a) Let $\{X_d(u), u \in (-\infty, \infty)\}$ be the stationary dual process of $X(t)$ and $\{h_d(u), u \in (-\infty, \infty)\}$ be the dual of $h(t)$. Since $X_d(u)$ is stationary, $Y_d(u) = X_d * h_d(u)$ is stationary. Thus
$Y(t) = h \circledast X(t) = X_d * h_d(g(t)) = Y_d(g(t)), \quad t \in (a,b),$ (3.5)
is G-stationary.
(b) Let $P_{X_d}(f)$ and $P_{Y_d}(f)$ be the spectra of the stationary dual processes $X_d(u)$ and $Y_d(u)$, that is, the G-spectra of $X(t)$ and $Y(t)$. Since $A_d(f)$ is the frequency response function of $h_d(u)$, it follows from the fundamental linear filtering theorem that
$P_{Y_d}(f) = |A_d(f)|^2\,P_{X_d}(f), \quad \text{that is,} \quad G_Y(f; g) = |W(f; g)|^2\,G_X(f; g).$ (3.6)
If the mapping $g(t)$ from the original space $t \in (a,b)$ to the dual space $u \in (-\infty, \infty)$ is onto, then $W(f; g) = A_d(f) = \int_{-\infty}^{\infty} e^{-2\pi i f u}\,h_d(u)\,du = \int_a^b e^{-2\pi i f g(v)}\,h(v)\,g'(v)\,dv$.

3.1. The M-Filter

Gray and Zhang [12] introduced the M-convolution or M-filter for filtering the M-stationary process. When $g(t) = \ln(t)$, $t \in (0, \infty)$, it follows that
$Y(t) = \int_0^{\infty} X(v)\,h(g^{-1}(g(t) - g(v)))\,dg(v) = \int_0^{\infty} X(v)\,h(t/v)\,d\ln(v) = X \,\#\, h(t),$ (3.7)
which is the M-convolution defined by Gray and Zhang [12].
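As a rough numerical illustration of (3.7) (a sketch only: X_fun and h_fun are hypothetical callables standing in for $X(v)$ and $h(\cdot)$, and the semi-infinite range is truncated to $(a, b)$), the M-convolution can be evaluated directly on a grid.

```python
# Numerical sketch of the M-convolution
#   Y(t) = int_0^inf X(v) h(t / v) (1 / v) dv,
# approximated on a truncated grid (a, b) by a simple Riemann sum.
import numpy as np

def m_convolve(X_fun, h_fun, t, a=1e-3, b=1e3, n_grid=200_000):
    v = np.linspace(a, b, n_grid)
    dv = v[1] - v[0]
    return float(np.sum(X_fun(v) * h_fun(t / v) / v) * dv)
```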

4. Filtering Data Using the G-Filter

Definition 3.1 shows that the G-filter is not a linear filter if viewed from the original space, but it is a linear filter if viewed from the dual space. Consequently, based on the definitions and results concerning G-filters in Section 3, we use the strategy proposed by Baraniuk and Jones [4]:
(1) Make an appropriate time deformation, $u = g(t)$, on the original time space to obtain a stationary dual process.
(2) Apply a traditional linear filter on the dual space to extract components.
(3) Transform the filtered dual components back to the original time space.
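A schematic sketch of these three steps is given below (an illustration only, not the GWS implementation): the deformation g and its inverse g_inv are supplied by the user, the dual is built as in the sketch of Section 2.2 (with the same assumed offset convention), and an ordinary Butterworth filter with cutoff fc, in cycles per dual sample, plays the role of the traditional linear filter in step (2).

```python
# Schematic three-step G-filter: deform time, filter on the dual scale with an
# ordinary Butterworth filter, and interpolate back to the original times.
import numpy as np
from scipy.signal import butter, filtfilt

def g_filter(x, g, g_inv, offset, fc, btype="low", order=3):
    n = len(x)
    t_obs = np.arange(1, n + 1, dtype=float)
    # Step 1: stationary dual via an equally spaced grid in deformed time
    u = np.linspace(g(t_obs[0] + offset), g(t_obs[-1] + offset), n)
    t_k = g_inv(u) - offset
    y_dual = np.interp(t_k, t_obs, x)
    # Step 2: ordinary time-invariant (Butterworth) filtering on the dual scale
    b, a = butter(order, fc, btype=btype, fs=1.0)
    y_dual_filt = filtfilt(b, a, y_dual)
    # Step 3: map the filtered dual back to the equally spaced original times
    return np.interp(t_obs, t_k, y_dual_filt)
```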

In the following examples we illustrate the implementation of the G-filter using the steps outlined above.

Example 4.1 (Examples 1.1 and 2.3 revisited). In this example we revisit the TVF data set shown in Figure 1(a) that was discussed in Examples 1.1 and 2.3. We will G-filter the data (based on a G(𝜆) model fit to the data) following the steps outlined above.
(1) The first step was done in Example 2.3, yielding the dual data in Figure 2(b).
(2) Based on the discussion in Example 2.3 and the Wigner-Ville plot in Figure 2(f), it seems that the low- and high-frequency components in the dual data can be separated using a cutoff frequency of about $f_c = .1$. Figures 3(a) and 3(b) show the results of low-pass and high-pass Butterworth filters, respectively, applied to the dual data in Figure 2(b). It can be seen that these filters do a good job of separating the low- and high-frequency components in the dual data.
(3) Figures 3(c) and 3(d) show the data in 3(a) and 3(b), respectively, plotted on the original time scale (using linear interpolation). Comparing Figures 3(c) and 3(d) with Figures 1(b) and 1(c), respectively, shows that the procedure did a good job of separating the two TVF components that we were previously unable to separate in Example 1.1 using standard methods. The somewhat jagged behavior of the peak heights is due to the interpolation. Alternatives to linear interpolation are under current investigation.
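In terms of the schematic g_filter sketch given at the beginning of this section, this example corresponds roughly to the call below (illustrative only: the offset convention is the assumption made in the Section 2.2 sketch, and the dual-frequency axis of the sketch depends on its grid spacing, so the cutoff need not coincide numerically with the GWS dual frequencies).

```python
# Illustrative call of the g_filter sketch for the data of Example 1.1
# (lambda = 0, i.e., logarithmic deformation, offset 175 as in Example 2.3).
import numpy as np

t = np.arange(1, 401)
x = (np.cos(36 * np.pi * np.log(t + 175) + 1.0)
     + 0.5 * np.cos(85 * np.pi * np.log(t + 175) + 1.59))

low = g_filter(x, g=np.log, g_inv=np.exp, offset=175.0, fc=0.1, btype="low")
high = g_filter(x, g=np.log, g_inv=np.exp, offset=175.0, fc=0.1, btype="high")
```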

Figure 3: (a, b) Outputs from low-pass and high-pass filters with 𝑓𝑐=.1 applied to the dual data in Figure 2(b); (c, d) are the filtered dual components after transforming back to the original time scale.

Example 4.2 (filtering example). In this example we consider the $G(\lambda)$-stationary model
$(1 - 1.62B + 0.995B^2)(1 - 1.96B + 0.99B^2)\,X\!\left(\frac{(t+30)^3 - 1}{3}\right) = a\!\left(\frac{(t+30)^3 - 1}{3}\right).$ (4.1)
[$X(t)$ is referred to by Jiang et al. [3] as a G(4;3) process.] Clearly, after the time transformation $g(t) = ((t+30)^3 - 1)/3 = u$, the dual model for $Y(u) = X(g^{-1}(u))$ is the stationary AR(4) model
$(1 - 1.62B + 0.995B^2)(1 - 1.96B + 0.99B^2)\,Y_t = a_t,$ (4.2)
whose characteristic equation $(1 - 1.62z + 0.995z^2)(1 - 1.96z + 0.99z^2) = 0$ has two pairs of complex conjugate roots quite close to the unit circle. Associated with each factor is a system frequency. Realizations from this model will be characterized by frequency behavior in the neighborhoods of the two system frequencies, and consequently spectral estimates will have peaks close to these two frequency values (see [14, 17]). Figure 4(a) shows a realization of length $n = 200$ from the model in (4.1), and it is seen that there is a dramatic increase in frequency with time (i.e., period lengths decrease). There is also an indication of two TVFs. Using the GWS code, a time transformation with $\lambda = 2.9$ and offset 14 was selected. Based on this time transformation, the dual data are obtained as in Figure 4(b), where the data appear stationary, again with two underlying frequencies. The extreme increase in frequencies in the original data set is illustrated in the Wigner-Ville plot in Figure 4(c). The lower strip goes from near zero frequency for small values of $t$ to about $f = .1$ at $t = 200$. The upper (less visible) strip is associated with a higher-frequency TVF component that also begins near zero frequency and increases to about $f = .25$ toward $t = 200$. The Wigner-Ville plot for the dual data indicates that the two frequencies have been stabilized and further supports the stationary nature of the dual. The dual model is dominated by two frequencies, a lower one at about $f = .03$ and an upper (again less visible) frequency at about $f = .1$. The results of applying 3rd-order low-pass and high-pass Butterworth filters with a cutoff frequency $f_c = .065$ are shown in Figures 4(e) and 4(f), respectively. Figures 4(g) and 4(h) show the filtered data sets plotted on the original time axis.
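The claim about the roots and system frequencies of (4.2) can be checked numerically. The short sketch below takes the system frequency of each second-order factor to be $|\arg(z)|/(2\pi)$, where $z$ is a complex root of that factor's characteristic equation; the resulting values, about .028 and .099, match the two dominant dual frequencies (roughly .03 and .1) noted above.

```python
# Numeric check: roots and system frequencies of the two second-order factors
# of the dual AR(4) characteristic equation in Example 4.2.
import numpy as np

factors = [(1.62, 0.995), (1.96, 0.99)]      # coefficients of 1 - c1*z + c2*z^2
for c1, c2 in factors:
    roots = np.roots([c2, -c1, 1.0])         # solve c2*z^2 - c1*z + 1 = 0
    r = np.abs(roots[0])                     # modulus (close to 1 => near unit circle)
    freq = np.abs(np.angle(roots[0])) / (2 * np.pi)
    print(f"|root| = {r:.4f}, system frequency = {freq:.3f}")
# Expected (approximately):
#   |root| = 1.0025, system frequency = 0.099
#   |root| = 1.0050, system frequency = 0.028
```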
The behavior of the two filtered components is consistent with that in the original data as is shown in Figures 5(a) and 5(b). Also, the TVF components are close to those in the original data as seen by comparing Figures 5(c) and 5(d) with Figure 4(c).

Figure 4: (a) The signal, (b) dual data, (c) Wigner-Ville plot for (a), (d) Wigner-Ville plot for (b), (e, f) results of low-pass and high-pass filters on the dual data, and (g, h) filtered components in (e, f) plotted on the original time scale.
Figure 5: (a, b) Two filtered components (solid lines) with the original signal (dashed lines), (c, d) Wigner-Ville plot for the two extracted components.

Example 4.3 (bat echolocation data). In this example, we consider echolocation data from a large brown bat. The data were obtained courtesy of Al Feng of the Beckman Institute at the University of Illinois. The data shown in Figure 6(a) consist of 381 data points taken at 7-microsecond intervals, for a total duration of .00266 seconds. This signal is quite complex and appears to be made up of possibly two different signals. The Wigner-Ville plot in Figure 6(c) suggests that the data contain two TVF components, with an indication of possibly another TVF component associated with higher frequencies than the two main components. Gray et al. [13] analyzed this data set as an M-stationary model with offset 203. Their analysis suggests that there are three dominant TVF components and that the highest component has sufficiently high frequency at the beginning of the time series that it is too rapid for the sampling rate to detect until about $t = 100$. In our analysis here we use the time transformation suggested by Gray et al. [13] to compute the dual data. Using this time transformation produces the dual data in Figure 6(b). The Wigner-Ville plot in Figure 6(c) shows overlap between the TVF strips; for example, the frequency associated with the lower TVF strip near $t = 0$ is similar to that of the upper strip at around $t = 250$. The Wigner-Ville plot of the dual data is given in Figure 6(d), where it can be seen that the two dominant frequencies have been stabilized, and that the two dominant dual frequencies are well separated, falling at about $f = .18$ and $f = .3$. Figures 6(e) and 6(f) show the low-pass and high-pass filter results, respectively, plotted on the original time axis.

Figure 6: (a) Brown bat echolocation data, (b) dual of data in (a), (c) Wigner-Ville plot for (a), (d) Wigner-Ville plot for (b), (e) low-pass filtered data, and (f) high-pass filtered data.

5. Concluding Remarks

In this paper, we have shown that classical linear filtering methods cannot be directly applied to TVF data in which the TVF components overlap over time. We discussed a family of models, G-stationary models, which have proven to be a useful extension of the usual stationary models and which can be used to model a range of nonstationary time series with TVF behavior. We then introduced a straightforward method for filtering data from G-stationary models. This G-filtering method extends the standard linear filter and provides techniques for filtering data with time-varying frequencies that overlap in time. Simulation examples and a real data example are given to demonstrate the effectiveness of the G-filter in filtering data which are from, or can be approximated by, G-stationary models.

References

  1. A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, NY, USA, 2nd edition, 1984.
  2. S. Butterworth, “On the theory of filter amplifiers,” Experimental Wireless and the Wireless Engineer, vol. 7, pp. 536–541, 1930.
  3. H. Jiang, H. L. Gray, and W. A. Woodward, “Time-frequency analysis—Gλ stationary processes,” Computational Statistics & Data Analysis, vol. 51, no. 3, pp. 1997–2028, 2006.
  4. R. G. Baraniuk and D. L. Jones, “Unitary equivalence: a new twist on signal processing,” IEEE Transactions on Signal Processing, vol. 43, no. 10, pp. 2269–2282, 1995.
  5. V. Girardin and R. Senoussi, “Semigroup stationary processes and spectral representation,” Bernoulli, vol. 9, no. 5, pp. 857–876, 2003.
  6. M. Clerc and S. Mallat, “Estimating deformations of stationary processes,” The Annals of Statistics, vol. 31, no. 6, pp. 1772–1821, 2003.
  7. P. Flandrin, P. Borgnat, and P. O. Amblard, “From stationarity to self-similarity, and back: variations on the Lamperti transformation,” in Processes with Long-Range Correlations, G. Rangarajan and M. Ding, Eds., pp. 88–117, Springer, 2003.
  8. F. Hlawatsch and G. Tauböck, “The covariance theory of time-frequency analysis,” in Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, B. Boashash, Ed., chapter 4.3, pp. 102–113, Elsevier, Oxford, UK, 2003.
  9. F. Hlawatsch, G. Tauböck, and T. Twaroch, “Covariant time-frequency analysis,” in Wavelets and Signal Processing, pp. 177–231, Birkhäuser Boston, Boston, Mass, USA, 2003.
  10. P. D. Sampson and P. Guttorp, “Nonparametric estimation of nonstationary spatial covariance structure,” Journal of the American Statistical Association, vol. 87, pp. 108–119, 1992.
  11. O. Perrin and R. Senoussi, “Reducing non-stationary stochastic processes to stationarity by a time deformation,” Statistics & Probability Letters, vol. 43, no. 4, pp. 393–397, 1999.
  12. H. L. Gray and N. F. Zhang, “On a class of nonstationary processes,” Journal of Time Series Analysis, vol. 9, no. 2, pp. 133–154, 1988.
  13. H. L. Gray, C. C. Vijverberg, and W. A. Woodward, “Nonstationary data analysis by time deformation,” Communications in Statistics, vol. 34, no. 1, pp. 163–192, 2005.
  14. W. A. Woodward, H. L. Gray, and A. C. Elliott, Applied Time Series Analysis, Chapman and Hall/CRC, 2012.
  15. L. Liu, Spectral analysis with time-varying frequency, Ph.D. dissertation, Southern Methodist University, Department of Statistical Science, 2004.
  16. S. D. Robertson, H. L. Gray, and W. A. Woodward, “The generalized linear chirp process,” Journal of Statistical Planning and Inference, vol. 140, no. 12, pp. 3676–3687, 2010.
  17. Z. Wang, W. A. Woodward, and H. L. Gray, “The application of the Kalman filter to nonstationary time series through time deformation,” Journal of Time Series Analysis, vol. 30, no. 5, pp. 559–574, 2009.
  18. W. Martin and P. Flandrin, “Wigner-Ville spectral analysis of nonstationary processes,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 33, no. 6, pp. 1461–1470, 1985.
  19. B. Boashash, Time Frequency Analysis, Elsevier, Oxford, UK, 2003.