Advances in Decision Sciences
Volume 2012 (2012), Article ID 261707, 15 pages
http://dx.doi.org/10.1155/2012/261707
Research Article

On the Causality between Multiple Locally Stationary Processes

Junichi Hirukawa

Faculty of Science, Niigata University, 8050 Ikarashi 2-no-cho, Nishi-ku, Niigata 950-2181, Japan

Received 14 January 2012; Accepted 25 March 2012

Academic Editor: Kenichiro Tamaki

Copyright © 2012 Junichi Hirukawa. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

When one would like to describe the relations between multivariate time series, the concepts of dependence and causality are of importance. These concepts also appear to be useful when one is describing the properties of an engineering or econometric model. Although the measures of dependence and causality under the stationarity assumption are well established, empirical studies show that these measures are not constant in time. Recently one of the most important classes of nonstationary processes, called locally stationary processes, has been formulated in a rigorous asymptotic framework by Dahlhaus (1996, 1997, 2000). Locally stationary processes have time-varying spectral densities whose spectral structures change smoothly in time. Here, we generalize measures of linear dependence and causality to multiple locally stationary processes. We give the measures of linear dependence, linear causality from one series to the other, and instantaneous linear feedback, at time t and frequency λ.

1. Introduction

In discussions of the relations between time series, the concepts of dependence and causality are frequently invoked. Geweke [1] and Hosoya [2] have proposed measures of dependence and causality for multiple stationary processes (see also Taniguchi et al. [3]). They have also shown that these measures can be additively decomposed into frequency-wise components. However, it is restrictive that these measures remain constant over time. Priestley [4] developed extensions of prediction and filtering theory to nonstationary processes which have evolutionary spectra. Alternatively, in this paper we generalize measures of dependence and causality to multiple locally stationary processes.

When we deal with nonstationary processes, one of the difficult problems to solve is how to set up an adequate asymptotic theory. To meet this, Dahlhaus [5–7] introduced an important class of nonstationary processes and developed statistical inference for it. We give the precise definition of multivariate locally stationary processes, which is due to Dahlhaus [8].

Definition 1.1. A sequence of multivariate stochastic processes $X_{t,T} = (X_{t,T}^{(1)}, \ldots, X_{t,T}^{(d)})'$, $t = 1, \ldots, T$, is called locally stationary with mean vector $\mu$ and transfer function matrix $A^{\circ}$ if there exists a representation
$$X_{t,T} = \mu\!\left(\frac{t}{T}\right) + \int_{-\pi}^{\pi} \exp(i\lambda t)\, A_{t,T}^{\circ}(\lambda)\, d\xi(\lambda),$$
where
(i) $\xi(\lambda)$ is a complex-valued stochastic vector process on $[-\pi, \pi]$ with $\overline{\xi_{a}(\lambda)} = \xi_{a}(-\lambda)$ and
$$\operatorname{cum}\{d\xi_{a_1}(\lambda_1), \ldots, d\xi_{a_k}(\lambda_k)\} = \eta\!\left(\sum_{j=1}^{k} \lambda_j\right) g_{a_1, \ldots, a_k}(\lambda_1, \ldots, \lambda_{k-1})\, d\lambda_1 \cdots d\lambda_k$$
for $k \geq 2$ and $a_1, \ldots, a_k \in \{1, \ldots, d\}$, where $\operatorname{cum}\{\cdots\}$ denotes the cumulant of $k$th order, and $\eta(\lambda) = \sum_{j=-\infty}^{\infty} \delta(\lambda + 2\pi j)$ is the period $2\pi$ extension of the Dirac delta function.
(ii) There exists a constant $K$ and a $2\pi$-periodic matrix-valued function $A : [0,1] \times \mathbb{R} \to \mathbb{C}^{d \times d}$ with $A(u, -\lambda) = \overline{A(u, \lambda)}$ and
$$\sup_{t, \lambda} \left| A_{t,T}^{\circ}(\lambda) - A\!\left(\frac{t}{T}, \lambda\right) \right| \leq K T^{-1}$$
for all $T$. $A(u, \lambda)$ is assumed to be continuous in $u$.

We call $f(u, \lambda) := A(u, \lambda)\, A(u, \lambda)^{*}$ the time-varying spectral density matrix of the process, where $A^{*}$ denotes the complex conjugate transpose of $A$. Write $\varepsilon(t) := \int_{-\pi}^{\pi} \exp(i\lambda t)\, d\xi(\lambda)$; then $\{\varepsilon(t)\}$ becomes a white noise process with $E\{\varepsilon(t)\} = 0$ and $E\{\varepsilon(s)\, \varepsilon(t)'\} = \delta_{s,t}\, I_d$.
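To fix ideas, the following is a minimal numerical sketch (not from the paper) of a locally stationary process: a bivariate time-varying VAR(1) whose coefficient matrix evolves smoothly in rescaled time $u = t/T$, together with the associated time-varying spectral density $f(u, \lambda) = (2\pi)^{-1} A(u, \lambda)\, \Sigma\, A(u, \lambda)^{*}$. The function names and the particular coefficient path are illustrative assumptions, not the paper's.

```python
import numpy as np

def tv_var1_spectrum(Phi, Sigma, lam):
    """Time-varying spectral density of a TV-VAR(1) at one (u, lambda):
    f = (1/2pi) A Sigma A^*, with A = (I - Phi(u) e^{-i lam})^{-1}."""
    d = Phi.shape[0]
    A = np.linalg.inv(np.eye(d) - Phi * np.exp(-1j * lam))
    return A @ Sigma @ A.conj().T / (2.0 * np.pi)

def simulate_tv_var1(T, phi_fun, rng):
    """Simulate X_{t,T} = Phi(t/T) X_{t-1,T} + eps_t: the coefficients
    depend on rescaled time u = t/T, as in the Dahlhaus framework."""
    d = phi_fun(0.0).shape[0]
    X = np.zeros((T, d))
    for t in range(1, T):
        X[t] = phi_fun(t / T) @ X[t - 1] + rng.standard_normal(d)
    return X

# Illustrative smooth coefficient path; eigenvalues stay inside the unit circle.
phi_fun = lambda u: np.array([[0.5 * np.cos(np.pi * u), 0.2],
                              [0.0, -0.3 + 0.6 * u]])
rng = np.random.default_rng(0)
X = simulate_tv_var1(1000, phi_fun, rng)
f_mid = tv_var1_spectrum(phi_fun(0.5), np.eye(2), lam=0.3)  # f(0.5, 0.3)
```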

Our objective is the generalization of dependence and causality measures to locally stationary processes and the construction of test statistics which can examine the nonstationary effect in actual time series data. The paper is organized as follows. Section 2 explains the generalization of causality measures to multiple locally stationary processes. Since this extension is natural, we do not repeat the original idea of the causality measures in the stationary case; we refer the reader to Geweke [1] and Hosoya [2] for it. In Section 3 we introduce the nonparametric spectral estimator of multivariate locally stationary processes and explain its asymptotic properties. Finally, we propose the test statistics for linear dependence and show their performance in an empirical numerical example in Section 4.

2. Measurements of Linear Dependence and Causality for Nonstationary Processes

Here, we generalize measures of dependence and causality to multiple locally stationary processes. The assumptions and results of this section are straightforward extensions of the original ideas in the stationary case. To avoid repetition, we refer to Geweke [1] and Hosoya [2] for the original formulation of causality.

For the $d$-dimensional locally stationary process $\{X_{t,T}\}$, we introduce $H_T$, the Hilbert space spanned by $X_{s,T}^{(a)}$, $s \in \mathbb{Z}$, $a = 1, \ldots, d$, and call $H_{t,T}$ the closed subspace spanned by $X_{s,T}^{(a)}$, $s \leq t$, $a = 1, \ldots, d$. We obtain the best one-step linear predictor of $X_{t,T}$ by projecting the components of the vector onto $H_{t-1,T}$, so here projection implies component-wise projection. We denote the error of prediction by $\varepsilon_{t,T}$. Then, for the locally stationary process we have
$$E\{\varepsilon_{s,T}\, \varepsilon_{t,T}'\} = \delta_{s,t}\, \Sigma\!\left(\frac{t}{T}\right), \qquad (2.1)$$
where $\delta_{s,t}$ is the Kronecker delta function. Note that the $\varepsilon_{t,T}$'s are uncorrelated but do not have identical covariance matrices; namely, the $\Sigma(t/T)$ are time-dependent. Now, we impose the following assumption on $\{X_{t,T}\}$.

Assumption 2.1. The covariance matrices $\Sigma(t/T)$ of the errors $\varepsilon_{t,T}$ are nonsingular for all $t$ and $T$.

Define
$$U_{t,T} = \sum_{j=0}^{\infty} B_{t,T}(j)\, \varepsilon_{t-j,T} \qquad (2.2)$$
as a one-sided linear process and
$$V_{t,T} = X_{t,T} - U_{t,T}, \qquad (2.3)$$
where the coefficient matrices are
$$B_{t,T}(j) = E\left\{ X_{t,T}\, \varepsilon_{t-j,T}' \right\} \Sigma\!\left(\frac{t-j}{T}\right)^{-1}. \qquad (2.4)$$

Note that each $B_{t,T}(j)\, \varepsilon_{t-j,T}$, $j = 0, 1, 2, \ldots$, is the projection of $X_{t,T}$ onto the closed subspace spanned by $\varepsilon_{t-j,T}$. Now, we have the following Wold decomposition for locally stationary processes.

Lemma 2.2 (Wold decomposition). If $\{X_{t,T}\}$ is a locally stationary vector process of $d$ components, then $X_{t,T} = U_{t,T} + V_{t,T}$, where $U_{t,T}$ is given by (2.1), (2.2), and (2.4), $V_{t,T}$ is deterministic, and $E\{U_{s,T}\, V_{t,T}'\} = 0$ for all $s$ and $t$.

If only $X_{t,T} = U_{t,T}$ occurs, we say that $\{X_{t,T}\}$ is purely nondeterministic.

Assumption 2.3. $\{X_{t,T}\}$ is purely nondeterministic.

In view of Lemma 2.2, we can see that under Assumptions 2.1 and 2.3, $X_{t,T}$ becomes a one-sided linear process given by (2.2). For the locally stationary process, if we choose an orthonormal basis $\{\eta_t\}$, $t \in \mathbb{Z}$, in the closed subspace spanned by $\{\varepsilon_{s,T},\, s \leq t\}$, then $\{\eta_t\}$ will be an uncorrelated stationary process. We call $\{\eta_t\}$ a fundamental process of $\{X_{t,T}\}$ and denote by $\Gamma_{t,T}(j)$ the corresponding coefficients; that is,
$$X_{t,T} = \sum_{j=0}^{\infty} \Gamma_{t,T}(j)\, \eta_{t-j}. \qquad (2.5)$$

Let $f(u, \lambda)$ be the time-varying spectral density matrix of $\{X_{t,T}\}$. A process is said to have maximal rank if it has a nondegenerate spectral density matrix a.e.

Assumption 2.4. The locally stationary process $\{X_{t,T}\}$ has maximal rank for all $u$ and $\lambda$. In particular,
$$\int_{-\pi}^{\pi} \log \det f(u, \lambda)\, d\lambda > -\infty, \qquad (2.6)$$
where $\det f$ denotes the determinant of the matrix $f$.

We will say that a function $\gamma(z)$, analytic in the unit disc, belongs to the class $H_2$ if
$$\sup_{0 < r < 1} \int_{-\pi}^{\pi} \left| \gamma\!\left(r e^{i\lambda}\right) \right|^2 d\lambda < \infty. \qquad (2.7)$$

Under Assumptions 2.1–2.4, it follows that $\{X_{t,T}\}$ has a time-varying spectral density $f(u, \lambda)$ which has rank $d$ for almost all $\lambda$ and is representable in the form
$$f(u, \lambda) = \frac{1}{2\pi}\, \Gamma\!\left(u, e^{-i\lambda}\right) \Gamma\!\left(u, e^{-i\lambda}\right)^{*}, \qquad (2.8)$$
where $\Gamma^{*}$ denotes the complex conjugate transpose of the matrix $\Gamma$ and $\Gamma(u, e^{-i\lambda})$ is the boundary value of an analytic function $\Gamma(u, z) = \sum_{j=0}^{\infty} \Gamma(u, j)\, z^{j}$ in the unit disc whose components belong to the class $H_2$.

Now, we introduce measures of linear dependence, linear causality, and instantaneous linear feedback at time $t$. Let $\{(X_{t,T}',\, Y_{t,T}')'\}$ be a $d$-dimensional locally stationary process, where $X_{t,T}$ and $Y_{t,T}$ are $r$- and $s$-dimensional ($r + s = d$), with time-varying spectral density matrix
$$f(u, \lambda) = \begin{pmatrix} f_{11}(u, \lambda) & f_{12}(u, \lambda) \\ f_{21}(u, \lambda) & f_{22}(u, \lambda) \end{pmatrix}. \qquad (2.9)$$

We will find the partitions
$$\varepsilon_{t,T} = \begin{pmatrix} \varepsilon_{1,t,T} \\ \varepsilon_{2,t,T} \end{pmatrix}, \qquad \Sigma\!\left(\frac{t}{T}\right) = \begin{pmatrix} \Sigma_{11}(t/T) & \Sigma_{12}(t/T) \\ \Sigma_{21}(t/T) & \Sigma_{22}(t/T) \end{pmatrix} \qquad (2.10)$$
useful. Meanwhile, $\Sigma_{1}(t/T)$ and $\Sigma_{2}(t/T)$ denote the covariance matrices of the one-step-ahead errors when $X_{t,T}$ and $Y_{t,T}$ are forecast from their own pasts alone; namely, these errors are the residuals of the projections of $X_{t,T}$ and $Y_{t,T}$ onto $H_{t-1,T}^{X}$ and $H_{t-1,T}^{Y}$, the closed subspaces spanned by the past of each series alone, respectively.

We define the measures of linear dependence, linear causality from $Y$ to $X$, from $X$ to $Y$, and instantaneous linear feedback, at time $t$, as
$$M_{X,Y}\!\left(\frac{t}{T}\right) = \log \frac{\det \Sigma_{1}(t/T)\, \det \Sigma_{2}(t/T)}{\det \Sigma(t/T)}, \qquad M_{Y \to X}\!\left(\frac{t}{T}\right) = \log \frac{\det \Sigma_{1}(t/T)}{\det \Sigma_{11}(t/T)},$$
$$M_{X \to Y}\!\left(\frac{t}{T}\right) = \log \frac{\det \Sigma_{2}(t/T)}{\det \Sigma_{22}(t/T)}, \qquad M_{X \cdot Y}\!\left(\frac{t}{T}\right) = \log \frac{\det \Sigma_{11}(t/T)\, \det \Sigma_{22}(t/T)}{\det \Sigma(t/T)}, \qquad (2.11)$$
respectively; then we have
$$M_{X,Y}\!\left(\frac{t}{T}\right) = M_{Y \to X}\!\left(\frac{t}{T}\right) + M_{X \to Y}\!\left(\frac{t}{T}\right) + M_{X \cdot Y}\!\left(\frac{t}{T}\right). \qquad (2.12)$$
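As a concrete illustration, the following sketch evaluates these four measures from given one-step prediction error covariances at a fixed time point. It assumes the covariances have already been estimated by some means; the function and variable names are ours, not the paper's.

```python
import numpy as np

def geweke_measures(Sigma, Sigma1, Sigma2, r):
    """Time-t measures (2.11) from one-step prediction error covariances.
    Sigma  : joint error covariance of (X', Y')' forecast from the joint past,
    Sigma1 : error covariance of X forecast from its own past alone,
    Sigma2 : error covariance of Y forecast from its own past alone,
    r      : dimension of the first block X."""
    S11, S22 = Sigma[:r, :r], Sigma[r:, r:]
    det = np.linalg.det
    M_Y_to_X = np.log(det(Sigma1) / det(S11))            # causality Y -> X
    M_X_to_Y = np.log(det(Sigma2) / det(S22))            # causality X -> Y
    M_inst = np.log(det(S11) * det(S22) / det(Sigma))    # instantaneous feedback
    M_dep = M_Y_to_X + M_X_to_Y + M_inst                 # total linear dependence
    return M_dep, M_Y_to_X, M_X_to_Y, M_inst
```

By construction, the four returned quantities satisfy the additive decomposition (2.12) exactly.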

Next, we decompose the measures of linear causality frequency-wise. To define the frequency-wise measures of causality, we introduce the following analytic facts.

Lemma 2.5. The analytic matrix $\Gamma(u, z)$ corresponding to a fundamental process of $\{X_{t,T}\}$ is maximal among analytic matrices with components from the class $H_2$ satisfying the boundary condition (2.8); that is, for any such matrix $\Phi(u, z)$,
$$\left| \det \Phi(u, 0) \right| \leq \left| \det \Gamma(u, 0) \right|.$$

Although the following assumption is a natural extension of Kolmogorov's formula in the stationary case (see, e.g., [9]), it is not straightforward, and so far we have not been able to derive it from simpler assumptions; we expect the proof to require a separate, purely technical paper.

Assumption 2.6 (Kolmogorov's formula). Under Assumptions 2.1–2.4, an analytic matrix $\Phi(u, z)$ satisfying the boundary condition (2.8) is maximal if and only if
$$\left| \det \Phi(u, 0) \right|^{2} = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \log \det\{2\pi f(u, \lambda)\}\, d\lambda \right\}.$$

Now we define the process $\tilde{\varepsilon}_{2,t,T}$ as
$$\tilde{\varepsilon}_{2,t,T} = \varepsilon_{2,t,T} - \Sigma_{21}\!\left(\frac{t}{T}\right) \Sigma_{11}\!\left(\frac{t}{T}\right)^{-1} \varepsilon_{1,t,T};$$
then $\tilde{\varepsilon}_{2,t,T}$ is the residual of the projection of $\varepsilon_{2,t,T}$ onto $\varepsilon_{1,t,T}$, whereas, equivalently, $\tilde{\varepsilon}_{2,t,T}$ is the residual of the projection of $Y_{t,T}$ onto the space spanned by $H_{t-1,T}$ and $X_{t,T}$.

Furthermore, we have
$$E\left\{ \varepsilon_{1,t,T}\, \tilde{\varepsilon}_{2,t,T}' \right\} = 0,$$
so we can see that $\tilde{\varepsilon}_{2,t,T}$ is orthogonal to $\varepsilon_{1,t,T}$. For the matrix
$$C\!\left(\frac{t}{T}\right) = \begin{pmatrix} I_r & 0 \\ -\Sigma_{21}(t/T)\, \Sigma_{11}(t/T)^{-1} & I_s \end{pmatrix}$$
we have $\det C(t/T) = 1$, and the covariance matrix of $(\varepsilon_{1,t,T}',\, \tilde{\varepsilon}_{2,t,T}')' = C(t/T)\, \varepsilon_{t,T}$ is the block diagonal matrix $\tilde{\Sigma}(t/T)$ with diagonal blocks $\Sigma_{11}(t/T)$ and $\tilde{\Sigma}_{22}(t/T) = \Sigma_{22}(t/T) - \Sigma_{21}(t/T)\, \Sigma_{11}(t/T)^{-1}\, \Sigma_{12}(t/T)$.

If we set
$$\tilde{\Gamma}(u, z) = B(u, z)\, C(u)^{-1},$$
where $B(u, z)$ denotes the transfer function corresponding, in rescaled time, to the coefficient matrices (2.4), we have the following lemma.

Lemma 2.7. $\tilde{\Gamma}(u, z)$ is an analytic matrix function in the unit disc such that $\tilde{\Gamma}(u, z)\, \tilde{\Sigma}(u)^{1/2}$ satisfies the boundary condition (2.8), and is thus maximal; the time-varying spectral density has the factorization
$$f(u, \lambda) = \frac{1}{2\pi}\, \tilde{\Gamma}\!\left(u, e^{-i\lambda}\right) \tilde{\Sigma}(u)\, \tilde{\Gamma}\!\left(u, e^{-i\lambda}\right)^{*}.$$

From this lemma, it is seen that the time-varying spectral density is decomposed into two parts:
$$f_{11}(u, \lambda) = \frac{1}{2\pi} \left\{ \tilde{\Gamma}_{11}\!\left(u, e^{-i\lambda}\right) \Sigma_{11}(u)\, \tilde{\Gamma}_{11}\!\left(u, e^{-i\lambda}\right)^{*} + \tilde{\Gamma}_{12}\!\left(u, e^{-i\lambda}\right) \tilde{\Sigma}_{22}(u)\, \tilde{\Gamma}_{12}\!\left(u, e^{-i\lambda}\right)^{*} \right\},$$
where $\tilde{\Gamma}_{11}$ is the left-upper submatrix of $\tilde{\Gamma}$. The former part is related to the process $\varepsilon_{1,t,T}$, whereas the latter part is related to the process $\tilde{\varepsilon}_{2,t,T}$, which is orthogonal to $\varepsilon_{1,t,T}$. This relation suggests the frequency-wise measure of causality from $Y$ to $X$ at time $t$:
$$M_{Y \to X}\!\left(\frac{t}{T}; \lambda\right) = \log \frac{\det f_{11}(t/T, \lambda)}{\det\left\{ (2\pi)^{-1}\, \tilde{\Gamma}_{11}\!\left(t/T, e^{-i\lambda}\right) \Sigma_{11}(t/T)\, \tilde{\Gamma}_{11}\!\left(t/T, e^{-i\lambda}\right)^{*} \right\}}. \qquad (2.13)$$

Similarly, we propose $M_{X \to Y}(t/T; \lambda)$, where $\tilde{\Gamma}_{22}$ and the residual process $\tilde{\varepsilon}_{1,t,T}$ are defined in the same manner as $\tilde{\Gamma}_{11}$ and $\tilde{\varepsilon}_{2,t,T}$, with the roles of $X$ and $Y$ interchanged.
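In code, the frequency-wise measure $M_{Y \to X}(u; \lambda)$ can be evaluated once the blocks of $\tilde{\Gamma}(u, e^{-i\lambda})$ and the innovation covariances are available. The sketch below implements only the decomposition of $f_{11}$ described above; the inputs (G11, G12, S11, S22_tilde) are assumed to come from a spectral factorization, which is not shown, and the names are ours.

```python
import numpy as np

def freq_causality_y_to_x(G11, G12, S11, S22_tilde):
    """Frequency-wise causality M_{Y->X}(u; lambda), from the decomposition
    f_11 = (1/2pi)(G11 S11 G11^* + G12 S22~ G12^*): the first term is the
    part of f_11 carried by X's own innovations eps_1, the second the part
    carried by the orthogonal residual process eps~_2."""
    own = G11 @ S11 @ G11.conj().T          # contribution of eps_1
    cross = G12 @ S22_tilde @ G12.conj().T  # contribution of eps~_2
    f11 = (own + cross) / (2.0 * np.pi)
    det = np.linalg.det
    return float(np.log(det(f11).real / det(own / (2.0 * np.pi)).real))
```

Since both determinants are those of Hermitian nonnegative definite matrices, the measure is nonnegative and vanishes exactly when the cross term is zero.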

Now, we introduce the following assumption.

Assumption 2.8. The roots of $\det \tilde{\Gamma}_{11}(u, z)$ and $\det \tilde{\Gamma}_{22}(u, z)$ all lie outside the unit circle.

The relation of the frequency-wise measures to the overall measures is addressed in the following result.

Theorem 2.9. Under Assumptions 2.1–2.8, we have
$$M_{Y \to X}\!\left(\frac{t}{T}\right) = \frac{1}{2\pi} \int_{-\pi}^{\pi} M_{Y \to X}\!\left(\frac{t}{T}; \lambda\right) d\lambda.$$
If Assumptions 2.1–2.6 hold, but Assumption 2.8 does not hold, then
$$M_{Y \to X}\!\left(\frac{t}{T}\right) \geq \frac{1}{2\pi} \int_{-\pi}^{\pi} M_{Y \to X}\!\left(\frac{t}{T}; \lambda\right) d\lambda.$$

Remark 2.10. Since $E\{\varepsilon_{1,t,T}\, \tilde{\varepsilon}_{2,t,T}'\} = 0$ and $\det C(t/T) = 1$, we can see that $(\varepsilon_{1,t,T}',\, \tilde{\varepsilon}_{2,t,T}')' = C(t/T)\, \varepsilon_{t,T}$ spans the same innovation space as $\varepsilon_{t,T}$. Therefore, the best one-step prediction error of the process $(X_{t,T}',\, \tilde{\varepsilon}_{2,t,T}')'$ is given by $(\varepsilon_{1,t,T}',\, \tilde{\varepsilon}_{2,t,T}')'$. Let $g(u, \lambda)$ be a time-varying spectral density matrix of the process $(X_{t,T}',\, \tilde{\varepsilon}_{2,t,T}')'$ and denote the partition by
$$g(u, \lambda) = \begin{pmatrix} g_{11}(u, \lambda) & g_{12}(u, \lambda) \\ g_{21}(u, \lambda) & g_{22}(u, \lambda) \end{pmatrix}.$$
Then, we obtain another representation of the frequency-wise measure of causality from $Y$ to $X$ at time $t$:
$$M_{Y \to X}\!\left(\frac{t}{T}; \lambda\right) = \log \frac{\det g_{11}(t/T, \lambda)}{\det\left\{ g_{11}(t/T, \lambda) - g_{12}(t/T, \lambda)\, g_{22}(t/T, \lambda)^{-1}\, g_{21}(t/T, \lambda) \right\}}.$$
This relation suggests that we apply the nonparametric time-varying spectral density estimator to the residual process $\tilde{\varepsilon}_{2,t,T}$. However, this problem requires another paper; we leave it as future work.

3. Nonparametric Spectral Estimator of Multivariate Locally Stationary Processes

In this section we introduce the nonparametric spectral estimator of multivariate locally stationary processes. First, we make the following assumption on the transfer function matrix $A(u, \lambda)$.

Assumption 3.1. (i) The transfer function matrix $A(u, \lambda)$ is $2\pi$-periodic in $\lambda$, and the periodic extension is twice differentiable in $u$ and $\lambda$ with uniformly bounded continuous derivatives $\frac{\partial^2}{\partial u^2} A$, $\frac{\partial^2}{\partial \lambda^2} A$, and $\frac{\partial^2}{\partial u\, \partial \lambda} A$. Furthermore, the uniformly bounded continuous derivative $\frac{\partial^3}{\partial u^2\, \partial \lambda} A$ also exists.
(ii) All the eigenvalues of $f(u, \lambda)$ are bounded from below and above by some constants uniformly in $u$ and $\lambda$.

As an estimator of $f(u, \lambda)$, we use the nonparametric estimator of kernel type defined by
$$\hat{f}(u, \lambda) = \int_{-\pi}^{\pi} W_T(\lambda - \mu)\, I_N(u, \mu)\, d\mu,$$
where $W_T$ is the weight function and depends on $T$, and $I_N(u, \lambda)$ is the localized periodogram matrix over the segment $\{[uT] - N/2 + 1, \ldots, [uT] + N/2\}$ defined as
$$I_N(u, \lambda) = \frac{1}{2\pi H_{2,N}(0)}\, d_N(u, \lambda)\, d_N(u, \lambda)^{*}, \qquad d_N(u, \lambda) = \sum_{s=0}^{N-1} h\!\left(\frac{s}{N}\right) X_{[uT] - N/2 + s + 1,\, T}\, e^{-i\lambda s}.$$
Here $h$ is a data taper and $H_{k,N}(\lambda) = \sum_{s=0}^{N-1} h(s/N)^k\, e^{-i\lambda s}$. It should be noted that $I_N(u, \lambda)$ is not a consistent estimator of the time-varying spectral density. To make a consistent estimator of $f(u, \lambda)$ we have to smooth it over neighbouring frequencies.
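A direct transcription of the localized tapered periodogram and its smoothing over neighbouring frequencies might look as follows; the box-shaped weight, the Hann-type taper, and the grid-based averaging are simplifying assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def local_periodogram(X, u, N, lam, taper):
    """Localized tapered periodogram matrix I_N(u, lambda) over the
    length-N segment centred at time [uT]."""
    T, d = X.shape
    t0 = int(u * T) - N // 2           # left end of the segment
    seg = X[t0:t0 + N]                 # assumes the segment lies inside the sample
    h = taper(np.arange(N) / N)        # data taper values h(s/N)
    H2 = np.sum(h ** 2)                # H_{2,N}(0)
    phase = np.exp(-1j * lam * np.arange(N))
    dN = np.sum(h[:, None] * seg * phase[:, None], axis=0)
    return np.outer(dN, dN.conj()) / (2.0 * np.pi * H2)

def smoothed_spectrum(X, u, N, lam, taper, bf, n_grid=101):
    """Consistent estimate f_hat(u, lambda): average I_N(u, mu) over
    frequencies mu near lambda with a box weight of bandwidth bf."""
    mus = lam + bf * np.linspace(-1.0, 1.0, n_grid)
    vals = [local_periodogram(X, u, N, mu, taper) for mu in mus]
    return np.mean(vals, axis=0)

taper = lambda x: np.sin(np.pi * x) ** 2   # a smooth symmetric taper on [0, 1]
```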

Now we impose the following assumptions on $W_T$ and $h$.

Assumption 3.2. The weight function satisfies
$$W_T(\lambda) = \frac{1}{b_f}\, W\!\left(\frac{\lambda}{b_f}\right) \quad \text{for } \lambda \in [-\pi, \pi),$$
where $b_f = b_f(T)$ is a bandwidth, and $W$ is a continuous and even function satisfying $W(x) = 0$ for $x \notin [-1, 1]$, $\int_{-1}^{1} W(x)\, dx = 1$, and $\int_{-1}^{1} W(x)^2\, dx < \infty$.

Assumption 3.3. The data taper $h : [0, 1] \to \mathbb{R}$ satisfies (i) $h(x) = h(1 - x)$ for all $x \in [0, 1]$; (ii) $h$ is continuous on $[0, 1]$ and twice differentiable at all $x \notin P$, where $P$ is a finite set of $[0, 1]$, with $\sup_{x \notin P} |h''(x)| < \infty$. Write
$$K_t(x) := \frac{h(x + 1/2)^2}{\int_0^1 h(y)^2\, dy} \quad \text{for } x \in \left[-\frac{1}{2}, \frac{1}{2}\right],$$
which plays the role of a kernel in the time domain.

Furthermore, we assume the following.

Assumption 3.4. $N = N(T)$ and $b_f = b_f(T)$ satisfy, as $T \to \infty$,
$$N \to \infty, \qquad b_f \to 0, \qquad b_f N \to \infty, \qquad \sqrt{T}\, b_f^2 \to 0, \qquad \sqrt{T}\, \frac{N^2}{T^2} \to 0, \qquad \sqrt{T}\, \frac{\log N}{N} \to 0.$$

The following lemmas are multivariate versions of Theorem 2.2 of Dahlhaus [10] and a theorem of Dahlhaus [7] (see also [11]).

Lemma 3.5. Assume that Assumptions 3.1–3.4 hold. Then
(i) $E\, \hat{f}(u, \lambda) = f(u, \lambda) + \frac{1}{2}\, b_f^2 \int_{-1}^{1} x^2 W(x)\, dx\, \frac{\partial^2}{\partial \lambda^2} f(u, \lambda) + \frac{1}{2} \left(\frac{N}{T}\right)^{2} \int_{-1/2}^{1/2} x^2 K_t(x)\, dx\, \frac{\partial^2}{\partial u^2} f(u, \lambda) + o\!\left(b_f^2 + \frac{N^2}{T^2}\right) + O\!\left(\frac{\log N}{N}\right)$;
(ii) $\operatorname{Var}\{\hat{f}_{ab}(u, \lambda)\} = O\!\left(\frac{1}{b_f N}\right)$, $a, b = 1, \ldots, d$;
(iii) $\operatorname{Cov}\{\hat{f}_{ab}(u, \lambda),\, \hat{f}_{cd}(u, \mu)\} = O\!\left(\frac{1}{b_f N}\right)$ uniformly in $\lambda$ and $\mu$. Hence, we have
$$E\left\| \hat{f}(u, \lambda) - f(u, \lambda) \right\|_{E}^{2} = O\!\left(b_f^4 + \frac{N^4}{T^4} + \frac{1}{b_f N}\right),$$
where $\|\cdot\|_{E}$ is the Euclidean norm of the matrix and the $O$-term is uniform in $u$ and $\lambda$.

Lemma 3.6. Assume that Assumptions 3.1–3.4 hold. Let $\phi_j(u, \lambda)$, $j = 1, \ldots, p$, be matrix-valued continuous functions on $[0, 1] \times [-\pi, \pi]$ which satisfy the same conditions as the transfer function matrix $A(u, \lambda)$ in Assumption 3.1, and set
$$L_T(\phi_j) := \int_0^1 \int_{-\pi}^{\pi} \operatorname{tr}\left\{ \phi_j(u, \lambda)\, \hat{f}(u, \lambda) \right\} d\lambda\, du, \qquad L(\phi_j) := \int_0^1 \int_{-\pi}^{\pi} \operatorname{tr}\left\{ \phi_j(u, \lambda)\, f(u, \lambda) \right\} d\lambda\, du.$$
Then $\sqrt{T}\left\{ L_T(\phi_j) - L(\phi_j) \right\}$, $j = 1, \ldots, p$, have, asymptotically, a normal distribution with zero mean vector and covariance matrix whose $(j, k)$th element is
$$4\pi \int_0^1 \int_{-\pi}^{\pi} \operatorname{tr}\left\{ f(u, \lambda)\, \phi_j(u, \lambda)\, f(u, \lambda)\, \phi_k(u, \lambda) \right\} d\lambda\, du$$
plus a term involving the fourth-order cumulants of the innovation process, which vanishes in the Gaussian case.

Assumption 3.4 does not coincide with Assumption A.1(ii) of Dahlhaus [7]. As mentioned in the A.3 Remarks of Dahlhaus [7, page 27], Assumption A.1(ii) of Dahlhaus [7] is required because of the $\sqrt{T}$-unbiasedness at the boundaries 0 and 1. If we assume that the observations with $t \leq 0$ and $t > T$ are also available, then together with Assumption 3.4 we obtain from Lemma 3.5 (i)
$$\sqrt{T} \left\{ E\, \hat{f}(u, \lambda) - f(u, \lambda) \right\} \to 0.$$

4. Testing Problem for Linear Dependence

In this section we discuss the testing problem for linear dependence. The average measure of linear dependence is given by the following integral functional of the time-varying spectral density:
$$M_{X,Y} = \int_0^1 \frac{1}{2\pi} \int_{-\pi}^{\pi} M_{X,Y}(u; \lambda)\, d\lambda\, du, \qquad (4.1)$$
where
$$M_{X,Y}(u; \lambda) = \log \frac{\det f_{11}(u, \lambda)\, \det f_{22}(u, \lambda)}{\det f(u, \lambda)}. \qquad (4.2)$$

We consider the testing problem for the existence of linear dependence:
$$H_0 : M_{X,Y} = 0 \quad \text{against} \quad H_1 : M_{X,Y} > 0.$$

For this testing problem, we define the test statistic as
$$\hat{M}_{X,Y} = \int_0^1 \frac{1}{2\pi} \int_{-\pi}^{\pi} \log \frac{\det \hat{f}_{11}(u, \lambda)\, \det \hat{f}_{22}(u, \lambda)}{\det \hat{f}(u, \lambda)}\, d\lambda\, du;$$
then, we have the following result.

Theorem 4.1. Under $H_0$,
$$\sqrt{T}\, \hat{M}_{X,Y} \xrightarrow{d} N(0, \sigma^2),$$
where the asymptotic variance $\sigma^2$ of $\hat{M}_{X,Y}$ is given by Lemma 3.6 with
$$\phi(u, \lambda) = \frac{1}{2\pi}\, K'\big(f(u, \lambda)\big), \qquad K(f) = \log \frac{\det f_{11}\, \det f_{22}}{\det f},$$
and $K'$ is the first derivative of $K$.

To simplify, the process is assumed to be a Gaussian locally stationary process. Then, the asymptotic variance of $\hat{M}_{X,Y}$ becomes an integral functional of the time-varying spectral density:
$$\sigma^2 = \frac{1}{\pi} \int_0^1 \int_{-\pi}^{\pi} \operatorname{tr}\left[ \left\{ f(u, \lambda)\, K'\big(f(u, \lambda)\big) \right\}^{2} \right] d\lambda\, du.$$

If we take
$$\hat{\sigma}^2 = \frac{1}{\pi} \int_0^1 \int_{-\pi}^{\pi} \operatorname{tr}\left[ \left\{ \hat{f}(u, \lambda)\, K'\big(\hat{f}(u, \lambda)\big) \right\}^{2} \right] d\lambda\, du,$$
then $\hat{\sigma}^2$ is a consistent estimator of the asymptotic variance, so we have, under $H_0$,
$$\frac{\sqrt{T}\, \hat{M}_{X,Y}}{\hat{\sigma}} \xrightarrow{d} N(0, 1). \qquad (4.10)$$
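Putting the pieces together, a plug-in approximation of the average dependence measure underlying (4.10) could be computed as below. The sketch reuses the smoothed_spectrum function from the Section 3 sketch, evaluates the log-determinant ratio on a coarse time-frequency grid, and omits the variance standardization for brevity; the grid sizes and parameter names are our assumptions.

```python
import numpy as np

def dependence_statistic(x, y, N, bf, n_time=20, n_freq=64, taper=None):
    """Approximate the average linear-dependence measure (4.1)-(4.2) for the
    bivariate series (x, y) by averaging log{det f11 * det f22 / det f}
    over a time-frequency grid of kernel spectral estimates."""
    if taper is None:
        taper = lambda v: np.sin(np.pi * v) ** 2
    Z = np.column_stack([x, y])
    T = len(Z)
    # Keep every length-N segment strictly inside the observed sample.
    us = np.linspace(N / (2 * T), 1 - N / (2 * T), n_time)
    lams = np.linspace(-np.pi, np.pi, n_freq, endpoint=False)
    total = 0.0
    for u in us:
        for lam in lams:
            f = smoothed_spectrum(Z, u, N, lam, taper, bf)  # from Section 3 sketch
            f11, f22 = f[0, 0].real, f[1, 1].real
            total += np.log(f11 * f22 / np.linalg.det(f).real)
    # Grid average approximates the double integral up to normalization.
    return total / (n_time * n_freq)
```

Each summand is nonnegative, so the statistic is zero only in the absence of estimated cross-spectral structure, matching the one-sided nature of the test.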

Next, we introduce a measure of goodness of our test. Consider a sequence of alternative spectral density matrices:
$$f_T(u, \lambda) = f_0(u, \lambda) + \frac{1}{\sqrt{T}}\, h(u, \lambda),$$
where $h(u, \lambda)$ is a matrix whose entries are square-integrable functions on $[0, 1] \times [-\pi, \pi]$.

Let $E_{f_T}$ and $V_{f_0}$ denote the expectation under $f_T$ and the variance under $f_0$, respectively. It is natural to define an efficacy of $\hat{M}_{X,Y}$ by
$$\operatorname{eff}\big(\hat{M}_{X,Y}\big) = \lim_{T \to \infty} \frac{E_{f_T}\big(\sqrt{T}\, \hat{M}_{X,Y}\big)}{\sqrt{V_{f_0}\big(\sqrt{T}\, \hat{M}_{X,Y}\big)}}$$
in line with the usual definition for a sequence of "parametric alternatives." Then we see that
$$\operatorname{eff}\big(\hat{M}_{X,Y}\big) = \frac{1}{\sigma} \int_0^1 \frac{1}{2\pi} \int_{-\pi}^{\pi} \operatorname{tr}\left\{ K'\big(f_0(u, \lambda)\big)\, h(u, \lambda) \right\} d\lambda\, du.$$

For another test $\tilde{M}_{X,Y}$ we can define an asymptotic relative efficiency (ARE) of $\hat{M}_{X,Y}$ relative to $\tilde{M}_{X,Y}$ by
$$\operatorname{ARE}\big(\hat{M}_{X,Y},\, \tilde{M}_{X,Y}\big) = \left\{ \frac{\operatorname{eff}\big(\hat{M}_{X,Y}\big)}{\operatorname{eff}\big(\tilde{M}_{X,Y}\big)} \right\}^{2}.$$

If we take a test statistic based on the stationarity assumption as the other test $\tilde{M}_{X,Y}$, we can measure the effect of nonstationarity when the process concerned is locally stationary.

Finally, we discuss a testing problem of linear dependence for stock prices on the Tokyo Stock Exchange. The data are daily log returns of 7 companies: 1: HITACHI, 2: MATSUSHITA, 3: SHARP, 4: SONY, 5: HONDA, 6: NISSAN, 7: TOYOTA. Each individual time series consists of 1174 data points from December 28, 1999 to October 1, 2004. We compute the statistic in (4.10) for each pair of companies. The selected parameters are $N$, the length of the segment over which the localized periodogram is taken, and $b_f$, the bandwidth of the weight function.

The results are listed in Table 1. It shows that the values for every pair of companies are large. Since under the null hypothesis the limit distribution of the statistic in (4.10) is standard normal, we conclude that the hypothesis $H_0$ is rejected; namely, linear dependence exists within each pair of companies. In particular, the values both among electric appliance companies and among automobile companies are significantly large. Therefore, we can see that companies in the same business have strong dependence.

Table 1: The statistic in (4.10) for each pair of companies.

In Figures 1 and 2, the daily linear dependence between HONDA and TOYOTA and between HITACHI and SHARP is plotted. They show that the daily dependencies are not constant but change over time. Thus, it seems reasonable to use the test statistic based on the nonstationarity assumption.

Figure 1: The daily linear dependence between HONDA and TOYOTA.
Figure 2: The daily linear dependence between HITACHI and SHARP.

References

  1. J. Geweke, "Measurement of linear dependence and feedback between multiple time series," Journal of the American Statistical Association, vol. 77, no. 378, pp. 304–313, 1982.
  2. Y. Hosoya, "The decomposition and measurement of the interdependency between second-order stationary processes," Probability Theory and Related Fields, vol. 88, no. 4, pp. 429–444, 1991.
  3. M. Taniguchi, M. L. Puri, and M. Kondo, "Nonparametric approach for non-Gaussian vector stationary processes," Journal of Multivariate Analysis, vol. 56, no. 2, pp. 259–283, 1996.
  4. M. B. Priestley, Spectral Analysis and Time Series, Academic Press, London, UK, 1981.
  5. R. Dahlhaus, "On the Kullback-Leibler information divergence of locally stationary processes," Stochastic Processes and Their Applications, vol. 62, no. 1, pp. 139–168, 1996.
  6. R. Dahlhaus, "Maximum likelihood estimation and model selection for locally stationary processes," Journal of Nonparametric Statistics, vol. 6, no. 2-3, pp. 171–191, 1996.
  7. R. Dahlhaus, "Fitting time series models to nonstationary processes," The Annals of Statistics, vol. 25, no. 1, pp. 1–37, 1997.
  8. R. Dahlhaus, "A likelihood approximation for locally stationary processes," The Annals of Statistics, vol. 28, no. 6, pp. 1762–1794, 2000.
  9. Yu. A. Rozanov, Stationary Random Processes, Holden-Day, San Francisco, Calif, USA, 1967.
  10. R. Dahlhaus, "Asymptotic statistical inference for nonstationary processes with evolutionary spectra," in Athens Conference on Applied Probability and Time Series 2, vol. 115 of Lecture Notes in Statistics, pp. 145–159, Springer, New York, NY, USA, 1996.
  11. K. Sakiyama and M. Taniguchi, "Discriminant analysis for locally stationary processes," Journal of Multivariate Analysis, vol. 90, no. 2, pp. 282–300, 2004.