Advances in Decision Sciences
Volume 2012, Article ID 704693, 22 pages
http://dx.doi.org/10.1155/2012/704693
Research Article

Estimation for Non-Gaussian Locally Stationary Processes with Empirical Likelihood Method

Hiroaki Ogata

Waseda University, Tokyo 169-8050, Japan

Received 28 January 2012; Revised 28 March 2012; Accepted 30 March 2012

Academic Editor: David Veredas

Copyright © 2012 Hiroaki Ogata. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An application of the empirical likelihood method to non-Gaussian locally stationary processes is presented. Based on the central limit theorem for locally stationary processes, we give the asymptotic distributions of the maximum empirical likelihood estimator and the empirical likelihood ratio statistic, respectively. It is shown that the empirical likelihood method enables us to make inferences on various important indices in time series analysis. Furthermore, we give a numerical study and investigate finite sample properties.

1. Introduction

The empirical likelihood is a nonparametric method of statistical inference proposed by Owen [1, 2]. It is used to construct confidence regions for a mean, for a class of M-estimates that includes quantiles, and for differentiable statistical functionals. The empirical likelihood method has been applied to various problems because of its good properties: the generality of a nonparametric method and the effectiveness of a likelihood method. For example, we can name applications to general estimating equations [3], regression models [4-6], biased sample models [7], and so forth. Applications have also been extended to dependent observations. Kitamura [8] developed the blockwise empirical likelihood for estimating equations and for smooth functions of means. Monti [9] applied the empirical likelihood method to linear processes, essentially under the circular Gaussian assumption, using a spectral method. For short- and long-range dependence, Nordman and Lahiri [10] gave the asymptotic properties of the frequency domain empirical likelihood. As listed above, some applications to time series analysis can be found, but they have mainly been for stationary processes. Although stationarity is the most fundamental assumption in time series analysis, it is also known that real time series data are generally nonstationary (e.g., in economic analysis). Therefore we need nonstationary models in order to describe the real world. Recently Dahlhaus [11-13] proposed an important class of nonstationary processes, called locally stationary processes. They have so-called time-varying spectral densities whose spectral structures change smoothly in time.

In this paper we extend the empirical likelihood method to non-Gaussian locally stationary processes with time-varying spectra. First, we derive the asymptotic normality of the maximum empirical likelihood estimator based on the central limit theorem for locally stationary processes, which is stated in Dahlhaus [13, Theorem A.2]. Next, we show that the empirical likelihood ratio converges to a sum of Gamma distributions. In particular, when we consider the stationary case, that is, when the time-varying spectral density is independent of the time parameter, the asymptotic distribution becomes chi-square.

As an application of this method, we can estimate an extended autocorrelation for locally stationary processes. Moreover, we can consider the Whittle estimation stated in Dahlhaus [13].

This paper is organized as follows. Section 2 briefly reviews stationary processes and introduces locally stationary processes. In Section 3, we propose the empirical likelihood method for non-Gaussian locally stationary processes and give its asymptotic properties. In Section 4 we give numerical studies on confidence intervals of the autocorrelation for locally stationary processes. Proofs of the theorems are given in Section 5.

2. Locally Stationary Processes

The stationary process is a fundamental setting in time series analysis. If the process $\{X_t\}$ is stationary with mean zero, it is known to have the spectral representation
$$X_t = \int_{-\pi}^{\pi} \exp(i\lambda t)\, A(\lambda)\, d\xi(\lambda), \tag{2.1}$$
where $A(\lambda)$ is a $2\pi$-periodic complex-valued function with $A(-\lambda) = \overline{A(\lambda)}$, called the transfer function, and $\xi(\lambda)$ is a stochastic process on $[-\pi, \pi]$ with $\overline{\xi(\lambda)} = \xi(-\lambda)$ and
$$\operatorname{cum}\{d\xi(\lambda_1), d\xi(\lambda_2)\} = \eta(\lambda_1 + \lambda_2)\, d\lambda_1\, d\lambda_2,$$
where $\eta(\lambda) = \sum_{j=-\infty}^{\infty} \delta(\lambda + 2\pi j)$ is the $2\pi$-periodic extension of the Dirac delta function. If the process is stationary, the covariance between $X_t$ and $X_{t+k}$ is independent of the time $t$ and is a function of the time lag $k$ only. We denote it by $\gamma(k)$. The Fourier transform of the autocovariance function,
$$f(\lambda) := \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \gamma(k) \exp(-i\lambda k),$$
is called the spectral density function. In the expression (2.1), the spectral density function is written as $f(\lambda) = |A(\lambda)|^2$. It is estimated by the periodogram, defined by
$$I_T(\lambda) := \frac{1}{2\pi T} \left| \sum_{t=1}^{T} X_t \exp(-i\lambda t) \right|^2.$$
If one wants to change the weight of each observation, one can insert a function $h$ defined on $[0,1]$ into the periodogram:
$$I_T(\lambda) := \frac{1}{2\pi \sum_{t=1}^{T} h(t/T)^2} \left| \sum_{t=1}^{T} h\!\left(\frac{t}{T}\right) X_t \exp(-i\lambda t) \right|^2.$$
The function $h$ is called a data taper. Now, we give a simple example of a stationary process below.
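As a concrete illustration of the two estimators above, the following is a minimal numerical sketch of the plain and tapered periodogram, assuming a zero-mean series stored in a NumPy array. The function name and the cosine-bell taper are illustrative choices, not taken from the paper.

```python
import numpy as np

def tapered_periodogram(x, taper=None):
    """Periodogram of a zero-mean series x at the Fourier frequencies
    2*pi*t/T; taper=None reduces to (1/(2*pi*T))|sum_t x_t e^{-i lambda t}|^2."""
    T = len(x)
    h = np.ones(T) if taper is None else taper(np.arange(1, T + 1) / T)
    H2 = np.sum(h ** 2)                   # sum_t h(t/T)^2 (= T when untapered)
    d = np.fft.fft(h * x)                 # sum_t h(t/T) x_t e^{-i lambda t}
    lam = 2 * np.pi * np.arange(T) / T    # Fourier frequencies
    return lam, np.abs(d) ** 2 / (2 * np.pi * H2)

# An illustrative cosine-bell data taper h(x) = (1 - cos(2*pi*x)) / 2:
cos_bell = lambda x: 0.5 * (1.0 - np.cos(2.0 * np.pi * x))
```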

Example 2.1. Consider the following AR($p$) process:
$$X_t = \sum_{j=1}^{p} a_j X_{t-j} + e_t,$$
where $\{e_t\}$ are independent random variables with mean zero and variance 1. In the form of (2.1), this is obtained by letting
$$A(\lambda) = \frac{1}{\sqrt{2\pi}} \left( 1 - \sum_{j=1}^{p} a_j \exp(-i\lambda j) \right)^{-1}.$$

As an extension of the stationary process, Dahlhaus [13] introduced the concept of local stationarity. An example of a locally stationary process is the following time-varying AR($p$) process:
$$X_{t,T} = \sum_{j=1}^{p} a_j\!\left(\frac{t}{T}\right) X_{t-j,T} + e_t, \tag{2.6}$$
where each $a_j(\cdot)$ is a function defined on $[0, 1]$ and $\{e_t\}$ are independent random variables with mean zero and variance 1. If all the $a_j(\cdot)$ are constant, the process (2.6) reduces to a stationary one. To define a general class of locally stationary processes, we can naturally consider the time-varying spectral representation
$$X_{t,T} = \int_{-\pi}^{\pi} \exp(i\lambda t)\, A\!\left(\frac{t}{T}, \lambda\right) d\xi(\lambda). \tag{2.7}$$
However, it turns out that (2.6) has not exactly but only approximately a solution of the form (2.7). Therefore, we only require that (2.7) holds approximately. The following is the definition of locally stationary processes given by Dahlhaus [13].

Definition 2.2. A sequence of stochastic processes $\{X_{t,T}\}$ $(t = 1, \ldots, T)$ is called locally stationary with mean zero and transfer function $A^0$ if there exists a representation
$$X_{t,T} = \int_{-\pi}^{\pi} \exp(i\lambda t)\, A^0_{t,T}(\lambda)\, d\xi(\lambda), \tag{2.8}$$
where the following holds.

(i) $\xi(\lambda)$ is a stochastic process on $[-\pi, \pi]$ with $\overline{\xi(\lambda)} = \xi(-\lambda)$ and
$$\operatorname{cum}\{d\xi(\lambda_1), \ldots, d\xi(\lambda_k)\} = \eta\!\left(\sum_{j=1}^{k} \lambda_j\right) g_k(\lambda_1, \ldots, \lambda_{k-1})\, d\lambda_1 \cdots d\lambda_k,$$
where $\operatorname{cum}\{\cdots\}$ denotes the cumulant of $k$th order, $g_1 = 0$, $g_2(\lambda) = 1$, $|g_k(\lambda_1, \ldots, \lambda_{k-1})| \le \mathrm{const}_k$ for all $k$, and $\eta(\lambda) = \sum_{j=-\infty}^{\infty} \delta(\lambda + 2\pi j)$ is the $2\pi$-periodic extension of the Dirac delta function.

(ii) There exist a constant $K$ and a $2\pi$-periodic function $A : [0,1] \times \mathbb{R} \to \mathbb{C}$ with $A(u, -\lambda) = \overline{A(u, \lambda)}$ which satisfies
$$\sup_{t, \lambda} \left| A^0_{t,T}(\lambda) - A\!\left(\frac{t}{T}, \lambda\right) \right| \le K T^{-1}$$
for all $T$; $A(u, \lambda)$ is assumed to be continuous in $u$.
The time-varying spectral density is defined by $f(u, \lambda) := |A(u, \lambda)|^2$. As an estimator of $f(u, \lambda)$, we define the local periodogram (for even $N$) as follows:
$$I_N(u, \lambda) := \frac{1}{2\pi H_{2,N}} \left| \sum_{s=0}^{N-1} h\!\left(\frac{s}{N}\right) X_{[uT]-N/2+s+1,\,T} \exp(-i\lambda s) \right|^2.$$
Here, $H_{k,N} := \sum_{s=0}^{N-1} h(s/N)^k$ and $h$ is a data taper with $h(x) = h(1-x)$ for $x \in [0,1]$. Thus, $I_N(u, \lambda)$ is nothing but the periodogram over a segment of length $N$ with midpoint $[uT]$. The shift from segment to segment is denoted by $S$, which means we calculate $I_N$ with midpoints $t_j := S(j-1) + N/2$ $(j = 1, \ldots, M)$, where $T = S(M-1) + N$, or, written in rescaled time, at the time points $u_j := t_j / T$. Hereafter we set $S = 1$ rather than $S = N$. That means the segments overlap each other.
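To make the segmentation scheme concrete, here is a minimal sketch of computing the local periodogram with shift S = 1, under the notation above (segment length N, midpoints t_j, rescaled times u_j = t_j/T). It reuses NumPy as imported in the previous sketch; the boundary handling and output layout are illustrative.

```python
def local_periodogram(x, N, taper):
    """Local periodogram I_N(u_j, lambda_t) over segments of even length N
    with shift S = 1, evaluated at the Fourier frequencies lambda_t = 2*pi*t/N."""
    T = len(x)
    h = taper(np.arange(N) / N)               # taper values h(s/N), s = 0, ..., N-1
    H2 = np.sum(h ** 2)                       # H_{2,N}
    mids = np.arange(N // 2, T - N // 2 + 1)  # midpoints t_j, j = 1, ..., M = T - N + 1
    I = np.empty((len(mids), N))
    for j, t in enumerate(mids):
        seg = x[t - N // 2 : t + N // 2]      # the N observations around t_j
        I[j] = np.abs(np.fft.fft(h * seg)) ** 2 / (2 * np.pi * H2)
    return mids / T, 2 * np.pi * np.arange(N) / N, I   # u_j, lambda_t, I_N
```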

3. Empirical Likelihood Approach for Non-Gaussian Locally Stationary Processes

Consider an inference on a parameter $\theta \in \Theta \subset \mathbb{R}^q$ based on an observed stretch $X_{1,T}, \ldots, X_{T,T}$. We suppose that information about $\theta$ exists through a system of general estimating equations. For short- or long-memory processes, Nordman and Lahiri [10] supposed that $\theta_0$, the true value of $\theta$, is specified by the following spectral moment condition:
$$\int_{-\pi}^{\pi} \phi(\lambda; \theta_0)\, f(\lambda)\, d\lambda = 0, \tag{3.1}$$
where $\phi$ is an appropriate function depending on $\theta$. Following this manner, in the locally stationary setting we naturally suppose that $\theta_0$ satisfies the following time-varying spectral moment condition:
$$\int_0^1 \int_{-\pi}^{\pi} \phi(u, \lambda; \theta_0)\, f(u, \lambda)\, d\lambda\, du = 0. \tag{3.2}$$
Here $\phi(u, \lambda; \theta)$ is a function depending on $\theta$ which satisfies Assumption 3.4(i). We give brief examples of $\phi$ and the corresponding $\theta$.

Example 3.1 (autocorrelation). Let us set
$$\phi(u, \lambda; \theta) = \cos(k\lambda) - \theta. \tag{3.3}$$
Then (3.2) leads to
$$\theta_0 = \frac{\int_0^1 \int_{-\pi}^{\pi} \cos(k\lambda)\, f(u, \lambda)\, d\lambda\, du}{\int_0^1 \int_{-\pi}^{\pi} f(u, \lambda)\, d\lambda\, du}. \tag{3.4}$$
When we consider the stationary case, that is, when $f(u, \lambda) \equiv f(\lambda)$ is independent of the time parameter $u$, (3.4) becomes
$$\theta_0 = \frac{\int_{-\pi}^{\pi} \cos(k\lambda)\, f(\lambda)\, d\lambda}{\int_{-\pi}^{\pi} f(\lambda)\, d\lambda} = \frac{\gamma(k)}{\gamma(0)}, \tag{3.5}$$
which corresponds to the autocorrelation with lag $k$. So, (3.4) can be interpreted as a kind of autocorrelation with lag $k$ for locally stationary processes.
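Under this choice of phi, a natural plug-in estimate of (3.4) replaces f(u, lambda) by the local periodogram and the double integrals by averages over the grid (u_j, lambda_t). A minimal sketch, reusing local_periodogram from the previous sketch (the helper name and the averaging scheme are my own):

```python
def local_autocorrelation(x, N, k, taper):
    """Plug-in version of (3.4): the double integrals are approximated by
    averages of cos(k*lambda_t) * I_N(u_j, lambda_t) over the grid."""
    u, lam, I = local_periodogram(x, N, taper)
    num = np.mean(np.cos(k * lam) * I)   # ~ int int cos(k*lambda) f(u,lambda) dlambda du
    den = np.mean(I)                     # ~ int int f(u,lambda) dlambda du
    return num / den
```

The constant grid weights (dlambda du factors) cancel in the ratio, so plain averages suffice here.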

Example 3.2 (Whittle estimation). Consider the problem of fitting a parametric spectral model $f_\theta(u, \lambda)$ to the true time-varying spectral density $f(u, \lambda)$ by minimizing the disparity between them. For stationary processes, this problem is considered in Hosoya and Taniguchi [14] and Kakizawa [15]. For locally stationary processes, the disparity between the parametric model and the true spectral density is measured by
$$D(f_\theta, f) := \frac{1}{4\pi} \int_0^1 \int_{-\pi}^{\pi} \left\{ \log f_\theta(u, \lambda) + \frac{f(u, \lambda)}{f_\theta(u, \lambda)} \right\} d\lambda\, du, \tag{3.6}$$
and we seek the minimizer
$$\theta_0 := \arg\min_{\theta \in \Theta} D(f_\theta, f). \tag{3.7}$$
Under appropriate conditions, $\theta_0$ in (3.7) is obtained by solving the equation $\partial D(f_\theta, f)/\partial\theta = 0$. Suppose that the fitting model is described so that $\theta$ is free from the innovation part $\sigma^2$. Then, by Kolmogorov's formula (Dahlhaus [11, Theorem 3.2]) we can see that $\int_0^1 \int_{-\pi}^{\pi} \log f_\theta(u, \lambda)\, d\lambda\, du$ is independent of $\theta$. So the differential condition on $\theta_0$ becomes
$$\int_0^1 \int_{-\pi}^{\pi} \left\{ \frac{\partial}{\partial \theta} \frac{1}{f_\theta(u, \lambda)} \right\} f(u, \lambda)\, d\lambda\, du = 0. \tag{3.8}$$
This is the case when we set
$$\phi(u, \lambda; \theta) = \frac{\partial}{\partial \theta} \frac{1}{f_\theta(u, \lambda)}. \tag{3.9}$$
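As a rough illustration of this example, one can minimize an empirical version of the disparity (3.6), with f(u, lambda) replaced by the local periodogram, over a parametric family. The tvAR(1) parametrization a(u) = theta[0] + theta[1]*u and the use of SciPy's Nelder-Mead minimizer below are assumptions made for this sketch, not the paper's procedure:

```python
from scipy.optimize import minimize

def tvar1_spectrum(theta, u, lam):
    """Illustrative tvAR(1) spectral model f_theta(u, lambda) with coefficient
    a(u) = theta[0] + theta[1] * u and unit innovation variance."""
    a = (theta[0] + theta[1] * u)[:, None]
    return 1.0 / (2 * np.pi * np.abs(1.0 - a * np.exp(-1j * lam[None, :])) ** 2)

def whittle_fit(x, N, taper, theta0):
    """Minimize the grid average of log f_theta + I_N / f_theta over theta,
    an empirical version of the disparity (3.6)."""
    u, lam, I = local_periodogram(x, N, taper)
    def disparity(theta):
        f = tvar1_spectrum(theta, u, lam)
        return np.mean(np.log(f) + I / f)
    return minimize(disparity, theta0, method="Nelder-Mead").x
```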

Now, we set
$$m(u_j, \lambda_t; \theta) := \phi(u_j, \lambda_t; \theta)\, I_N(u_j, \lambda_t), \qquad \lambda_t := \frac{2\pi t}{N}, \tag{3.10}$$
as an estimating function and use the following empirical likelihood ratio function defined by
$$R(\theta) := \max\left\{ \prod_{j=1}^{M} \prod_{t=1}^{N} MN\, w_{jt} \;\middle|\; \sum_{j=1}^{M} \sum_{t=1}^{N} w_{jt}\, m(u_j, \lambda_t; \theta) = 0,\ w_{jt} \ge 0,\ \sum_{j=1}^{M} \sum_{t=1}^{N} w_{jt} = 1 \right\}. \tag{3.11}$$
Denote the maximum empirical likelihood estimator by $\hat{\theta}$, which maximizes the empirical likelihood ratio function $R(\theta)$.
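For a scalar parameter, the inner maximization defining R(theta) has the familiar empirical likelihood solution: the optimal weights are w_i = 1/(n(1 + lam*m_i)) with the Lagrange multiplier lam solving sum_i m_i/(1 + lam*m_i) = 0 (this is also the computation carried out in Section 5.2). A minimal numerical sketch, where the bracketing interval for lam is a standard device assumed here:

```python
from scipy.optimize import brentq

def log_el_ratio(m):
    """log R for scalar moment values m_i: maximize sum_i log(n * w_i) subject
    to w_i >= 0, sum_i w_i = 1, sum_i w_i m_i = 0.  The solution has
    w_i = 1 / (n * (1 + lam * m_i)) with lam solving sum_i m_i / (1 + lam * m_i) = 0."""
    m = np.asarray(m, dtype=float).ravel()
    if m.min() >= 0.0 or m.max() <= 0.0:
        return -np.inf                     # zero lies outside the convex hull of {m_i}
    eps = 1e-10
    lo = (-1.0 + eps) / m.max()            # keeps every 1 + lam * m_i positive
    hi = (-1.0 + eps) / m.min()
    lam = brentq(lambda l: np.sum(m / (1.0 + l * m)), lo, hi)
    return -np.sum(np.log1p(lam * m))      # = sum_i log(n * w_i)
```

Evaluated on the grid of moment values m(u_j, lambda_t; theta), the quantity -2 * log_el_ratio(...) then plays the role of the test statistic in Theorem 3.7 below.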

Remark 3.3. We can also use an alternative estimating function $\tilde{m}(u_j, \lambda_t; \theta)$ instead of $m(u_j, \lambda_t; \theta)$ in (3.10). The asymptotic equivalence of $m$ and $\tilde{m}$ can be proven if an appropriate integrability condition is satisfied for any $\theta$, and this is shown by straightforward calculation.

To show the asymptotic properties of $\hat{\theta}$ and $R(\theta)$, we impose the following assumption.

Assumption 3.4. (i) The functions $\phi(u, \lambda; \theta)$ and $A(u, \lambda)$ are $2\pi$-periodic in $\lambda$, and the periodic extensions are differentiable in $u$ and $\lambda$ with uniformly bounded derivatives (uniformly in $\theta$, resp.).
(ii) The parameters $N$ and $T$ fulfill the relations required in Assumption A.1 of Dahlhaus [13]; in particular, $N \to \infty$, $T \to \infty$, and $N/T \to 0$.
(iii) The data taper $h : [0,1] \to \mathbb{R}$ with $h(x) = h(1-x)$ for all $x \in [0,1]$ is continuous on $[0,1]$ and twice differentiable at all $x \notin P$, where $P$ is a finite set, and $\sup_{x \notin P} |h''(x)| < \infty$.
(iv) For $j, k = 1, \ldots, q$, the limit of the covariance in Lemma 5.2 exists; the limit matrix is denoted by $W$ and is assumed to be nonsingular.

Remark 3.5. Assumption 3.4(ii) may seem restrictive. However, it is required in order to use the central limit theorem for locally stationary processes (cf. Assumption A.1 and Theorem A.2 of Dahlhaus [13]); most of the restrictions on $N$ result from the $\sqrt{T}$-unbiasedness in the central limit theorem. See also the remarks in Section A.3 of Dahlhaus [13] for details.

Now we give the following theorem.

Theorem 3.6. Suppose that Assumption 3.4 holds and $\{X_{t,T}\}$ is a realization of the locally stationary process which has the representation (2.8). Then, as $T \to \infty$,
$$\sqrt{T}\,(\hat{\theta} - \theta_0) \xrightarrow{d} N\bigl(0,\ D^{-1} W (D^{-1})'\bigr),$$
where $W$ and $V$ are the $q \times q$ matrices whose elements are the limits given in Lemmas 5.2 and 5.3, respectively, and $D$ is the $q \times q$ matrix defined as
$$D := \int_0^1 \int_{-\pi}^{\pi} \left\{ \frac{\partial}{\partial \theta'}\, \phi(u, \lambda; \theta_0) \right\} f(u, \lambda)\, d\lambda\, du.$$

In addition, we give the following theorem on the asymptotic property of the empirical likelihood ratio $R(\theta_0)$.

Theorem 3.7. Suppose that Assumption 3.4 holds and $\{X_{t,T}\}$ is a realization of a locally stationary process which has the representation (2.8). Then, as $T \to \infty$,
$$-2 \log R(\theta_0) \xrightarrow{d} Z' W^{1/2} V^{-1} W^{1/2} Z,$$
where $Z$ is a $q$-dimensional normal random vector with zero mean vector and covariance matrix $I_q$ (the identity matrix). Here $W$ and $V$ are the same matrices as in Theorem 3.6.

Remark 3.8. Denote the eigenvalues of $V^{-1} W$ by $\alpha_1, \ldots, \alpha_q$; then we can write
$$Z' W^{1/2} V^{-1} W^{1/2} Z = \sum_{i=1}^{q} \alpha_i \zeta_i,$$
where each $\zeta_i$ is distributed as $\chi^2_1$, independently.
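Percentiles of this weighted chi-square limit are easy to approximate by Monte Carlo once estimates of the eigenvalues alpha_i are available. A minimal sketch, assuming the alpha_i are given:

```python
def weighted_chisq_quantile(alpha, q, n_sim=200_000, seed=0):
    """Monte Carlo q-quantile of sum_i alpha_i * Z_i^2 with Z_i iid N(0,1),
    the limit law of the empirical likelihood ratio statistic."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_sim, len(alpha)))
    return np.quantile((np.asarray(alpha) * z ** 2).sum(axis=1), q)
```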

Remark 3.9. If the process is stationary, that is, the time-varying spectral density $f(u, \lambda) \equiv f(\lambda)$ is independent of the time parameter $u$, we can easily see that $W = V$, so that every $\alpha_i = 1$, and the asymptotic distribution becomes the chi-square with $q$ degrees of freedom.

Remark 3.10. In our setting, the number of estimating equations and the number of parameters are equal. In that case, the empirical likelihood ratio at the maximum empirical likelihood estimator, $R(\hat{\theta})$, becomes one (cf. [3, page 305]). That means the test statistic in Theorem 3.7 becomes zero when we evaluate it at the maximum empirical likelihood estimator.

4. Numerical Example

In this section, we present simulation results for the estimation of the autocorrelation in locally stationary processes stated in Example 3.1. Consider the following time-varying AR(1) process:
$$X_{t,T} = a\!\left(\frac{t}{T}\right) X_{t-1,T} + e_t, \tag{4.1}$$
where $a(\cdot)$ is a smooth coefficient function on $[0,1]$ and $\{e_t\}$ are independent random variables with mean zero and variance 1. The observations $X_{1,T}, \ldots, X_{T,T}$ are generated from the process (4.1), and we construct confidence intervals for the autocorrelation (3.4) with a fixed lag, based on the result of Theorem 3.7. Several combinations of the sample size $T$ and the window length $N$ are chosen, and the data taper is set as in Assumption 3.4(iii). Then we calculate the values of the test statistic at numerous points $\theta$ and obtain confidence intervals by collecting the points which satisfy
$$-2 \log R(\theta) \le c_{\alpha},$$
where $c_{\alpha}$ is the $(1-\alpha)$-percentile of the asymptotic distribution in Theorem 3.7. We admit that Assumption 3.4(ii) is hard to satisfy in a finite-sample experiment, but this Monte Carlo simulation is purely illustrative, intended only to investigate how the sample size and the window length affect the resulting confidence intervals.
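A minimal sketch of generating data from a tvAR(1) recursion of the form (4.1). Since the paper's exact coefficient function is not reproduced here, the a(.) below is purely illustrative, and standard normal innovations are assumed:

```python
def simulate_tvar1(T, a, seed=0):
    """Generate X_{t,T} from X_{t,T} = a(t/T) X_{t-1,T} + e_t with
    e_t iid N(0,1) (the Gaussian choice is an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = a(t / T) * x[t - 1] + rng.standard_normal()
    return x

# Illustrative coefficient function with |a(u)| < 1 on [0, 1]:
x = simulate_tvar1(512, lambda u: 0.5 * np.cos(2 * np.pi * u))
```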

We set the confidence level at 90% and carry out the above procedure 1000 times for each case. Table 1 shows the averages of the lower and upper bounds, the lengths of the intervals, and the success rates. Looking at the results, we find that a larger sample size gives a shorter interval, as expected. Furthermore, the results indicate that a larger window length leads to a worse success rate. We can predict that the best ratio of $N$ to $T$ lies around 0.02, because that combination seems to give the best result among all.

Table 1: 90% confidence intervals of the autocorrelation.

5. Proofs

5.1. Some Lemmas

In this subsection we give three lemmas used to prove Theorems 3.6 and 3.7. First of all, we introduce the function $L_T : \mathbb{R} \to \mathbb{R}$, which is defined by the $2\pi$-periodic extension of
$$L_T(\alpha) := \begin{cases} T, & |\alpha| \le 1/T, \\ 1/|\alpha|, & 1/T \le |\alpha| \le \pi. \end{cases}$$
The properties of the function $L_T$ are described in Lemma A.4 of Dahlhaus [13].

Lemma 5.1. Suppose (3.2) and Assumption 3.4 hold. Then for ,

Proof. Write the quantity under study as a second-order cumulant of tapered finite Fourier transforms. As in the proof of Theorem 2.2 of Dahlhaus [12], we replace the exact transfer function $A^0_{s,T}$ by its limit $A(s/T, \cdot)$; the replacement error is controlled, up to a constant, by the function $L_N$. In the same way we replace the remaining transfer functions, and then evaluate the resulting integral. Since the kernel concentrates, we only have to consider the range of frequencies satisfying the corresponding constraint, and a Taylor expansion around the midpoint gives the first equation of the desired result. Moreover, in the same manner as the proof of Lemma A.5 of Dahlhaus [13], we obtain a bound which leads to the second equation.

Lemma 5.2. Suppose (3.2) and Assumption 3.4 hold. Then,

Proof. We set the normalized sum of the estimating function and henceforth denote it by $S_T$ for simplicity. This lemma is proved by showing the convergence of cumulants of all orders. Due to Lemma A.8 of Dahlhaus [13], the expectation of $S_T$ equals the time-varying spectral moment up to an error term; by (3.2) and the unbiasedness rate, this converges to zero.
Next, we calculate the covariance of $S_T$. Using the relation between the local periodogram and the finite Fourier transform, we can rewrite each element, and the $(j,k)$ element of the covariance matrix of $S_T$ can then be computed directly. Due to Lemma A.9 of Dahlhaus [13], this converges to a limit, and by Assumption 3.4(iv) the covariance tends to $W$.
The $k$th cumulant for $k \ge 3$ tends to zero due to Lemma A.10 of Dahlhaus [13]. Then we obtain the desired result.

Lemma 5.3. Suppose (3.2) and Assumption 3.4 hold. Then,

Proof. First we calculate the mean of the $(j,k)$ element of the matrix in question. Due to Dahlhaus [12, Theorem 2.2(i)], the second term of (5.19) can be evaluated directly. Next we consider the remaining expression and calculate its three terms separately. From Lemma 5.1, the first term of (5.21) either converges to zero or equals the limit in (5.23), depending on the indices; similarly, the second term of (5.21) converges to zero or equals (5.23). We can also apply Lemma 5.1 to the third term of (5.21), and an analogous calculation shows that it converges to zero. In total, we see that (5.19) converges to the corresponding element of $V$.
Next we calculate the second-order cumulant. Using the product theorem for cumulants (cf. [16, Theorem 2.3.2]), we have to sum over all indecomposable partitions of the corresponding two-row scheme. We can apply Lemma 5.1 to all of the cumulants appearing in (5.25); the dominant term is of lower order, so (5.25) tends to zero. Then we obtain the desired result.

5.2. Proof of Theorem 3.6

Using the lemmas in Section 5.1, we prove Theorem 3.6. To find the maximizing weights in (3.11), we proceed by the Lagrange multiplier method. Write
$$G := \sum_{j=1}^{M} \sum_{t=1}^{N} \log(MN\, w_{jt}) - \mu \left( \sum_{j=1}^{M} \sum_{t=1}^{N} w_{jt} - 1 \right) - MN\, \nu' \sum_{j=1}^{M} \sum_{t=1}^{N} w_{jt}\, m(u_j, \lambda_t; \theta),$$
where $\mu$ and $\nu$ are Lagrange multipliers. Setting $\partial G / \partial w_{jt} = 0$ gives $w_{jt}^{-1} = \mu + MN\, \nu' m(u_j, \lambda_t; \theta)$, so the equation $\sum_{j,t} w_{jt}\, \partial G / \partial w_{jt} = 0$ gives $\mu = MN$. Then, we may write
$$w_{jt} = \frac{1}{MN} \cdot \frac{1}{1 + \nu' m(u_j, \lambda_t; \theta)},$$
where the vector $\nu = \nu(\theta)$ satisfies the $q$ equations given by
$$\frac{1}{MN} \sum_{j=1}^{M} \sum_{t=1}^{N} \frac{m(u_j, \lambda_t; \theta)}{1 + \nu' m(u_j, \lambda_t; \theta)} = 0. \tag{5.30}$$
Therefore, $\hat{\theta}$ is a minimizer of the following (minus) empirical log likelihood ratio function
$$l(\theta) := \sum_{j=1}^{M} \sum_{t=1}^{N} \log\{1 + \nu(\theta)' m(u_j, \lambda_t; \theta)\}$$
and satisfies the corresponding first-order condition (5.32). Then, from (5.30) and (5.32), we obtain an expansion involving four derivative terms. Let us examine the asymptotic properties of these four derivatives. First, from Lemmas A.8 and A.9 of Dahlhaus [13], we obtain the limits which lead to (5.38); similarly, we obtain (5.39). Next, from Lemma 5.3, we obtain (5.40). Finally, we obtain (5.41). Now, (5.34), (5.35), and (5.38)-(5.41) give the linearization (5.42)-(5.43). Because of Lemma 5.2 and the relations (5.42) and (5.43), we can see that $\sqrt{T}(\hat{\theta} - \theta_0)$ is asymptotically normal. Again, from (5.42), (5.43), and Lemma 5.2, direct calculation gives the asymptotic covariance matrix $D^{-1} W (D^{-1})'$ in Theorem 3.6.

5.3. Proof of Theorem 3.7

Using the lemmas in Section 5.1, we prove Theorem 3.7. The proof is the same as that of Theorem 3.6 up to (5.30). Write the normalized sum of the estimating function and its largest summand; note that, from (5.30), the Lagrange multiplier satisfies the bound (5.47). Every weight $w_{jt}$ is positive, so every $1 + \nu' m(u_j, \lambda_t; \theta_0)$ is positive, and therefore by (5.47) we get (5.48), where $W$ and $V$ are defined in Lemmas 5.2 and 5.3. Then by (5.48), we get (5.49). From Lemmas 5.2 and 5.3 we can see that (5.50) holds. We next evaluate the order of the largest summand. For any $\epsilon > 0$, the tail probability in (5.52) is bounded by an expectation which, from Lemma 5.1, is of bounded order, so (5.52) tends to zero, which leads to (5.54). From (5.49), (5.50), and (5.54), it is seen that $\nu(\theta_0) = O_p(T^{-1/2})$. Now we have, from (5.54) and from (5.30), an expansion whose final term in (5.58) has a norm bounded by a vanishing quantity. Hence, we can write $\nu(\theta_0)$ explicitly up to a negligible remainder. By (5.57), we may write the log empirical likelihood ratio as a quadratic form plus a remainder which, for some finite constant, is negligible. Here it is seen that the quadratic form converges, and finally, from Lemmas 5.2 and 5.3, we can show the stated limit. Then we obtain the desired result.

Acknowledgments

The author is grateful to Professors M. Taniguchi, J. Hirukawa, and H. Shiraishi for their instructive advice and helpful comments. Thanks are also extended to the two referees whose comments were helpful. This work was supported by a Grant-in-Aid for Young Scientists (B) (22700291).

References

  1. A. B. Owen, "Empirical likelihood ratio confidence intervals for a single functional," Biometrika, vol. 75, no. 2, pp. 237-249, 1988.
  2. A. B. Owen, "Empirical likelihood ratio confidence regions," The Annals of Statistics, vol. 18, no. 1, pp. 90-120, 1990.
  3. J. Qin and J. Lawless, "Empirical likelihood and general estimating equations," The Annals of Statistics, vol. 22, no. 1, pp. 300-325, 1994.
  4. A. B. Owen, "Empirical likelihood for linear models," The Annals of Statistics, vol. 19, no. 4, pp. 1725-1747, 1991.
  5. S. X. Chen, "On the accuracy of empirical likelihood confidence regions for linear regression model," Annals of the Institute of Statistical Mathematics, vol. 45, no. 4, pp. 621-637, 1993.
  6. S. X. Chen, "Empirical likelihood confidence intervals for linear regression coefficients," Journal of Multivariate Analysis, vol. 49, no. 1, pp. 24-40, 1994.
  7. J. Qin, "Empirical likelihood in biased sample problems," The Annals of Statistics, vol. 21, no. 3, pp. 1182-1196, 1993.
  8. Y. Kitamura, "Empirical likelihood methods with weakly dependent processes," The Annals of Statistics, vol. 25, no. 5, pp. 2084-2102, 1997.
  9. A. C. Monti, "Empirical likelihood confidence regions in time series models," Biometrika, vol. 84, no. 2, pp. 395-405, 1997.
  10. D. J. Nordman and S. N. Lahiri, "A frequency domain empirical likelihood for short- and long-range dependence," The Annals of Statistics, vol. 34, no. 6, pp. 3019-3050, 2006.
  11. R. Dahlhaus, "On the Kullback-Leibler information divergence of locally stationary processes," Stochastic Processes and Their Applications, vol. 62, no. 1, pp. 139-168, 1996.
  12. R. Dahlhaus, "Asymptotic statistical inference for nonstationary processes with evolutionary spectra," in Proceedings of the Athens Conference on Applied Probability and Time Series Analysis, vol. 115 of Lecture Notes in Statistics, pp. 145-159, Springer, 1996.
  13. R. Dahlhaus, "Fitting time series models to nonstationary processes," The Annals of Statistics, vol. 25, no. 1, pp. 1-37, 1997.
  14. Y. Hosoya and M. Taniguchi, "A central limit theorem for stationary processes and the parameter estimation of linear processes," The Annals of Statistics, vol. 10, no. 1, pp. 132-153, 1982; correction: vol. 21, pp. 1115-1117, 1993.
  15. Y. Kakizawa, "Parameter estimation and hypothesis testing in stationary vector time series," Statistics & Probability Letters, vol. 33, no. 3, pp. 225-234, 1997.
  16. D. R. Brillinger, Time Series: Data Analysis and Theory, Holden-Day, San Francisco, Calif, USA, 2001.