Abstract
We consider a nonparametric CUSUM test for a change in the mean of multivariate time series with time varying covariance. We prove that under the null hypothesis the test statistic has a Kolmogorov limiting distribution. We establish the asymptotic consistency of the test against a large class of alternatives containing abrupt, smooth, and continuous changes. We also perform a simulation study to analyze the size distortion and the power of the proposed test.
1. Introduction
In the statistical literature there is a vast body of work on testing for a change in the mean of univariate time series. Sen and Srivastava [1, 2], Hawkins [3], Worsley [4], and James et al. [5] considered tests for mean shifts of normal i.i.d. sequences. Extensions to dependent univariate time series have been studied by many authors; see Tang and MacNeill [6], Antoch et al. [7], Shao and Zhang [8], and the references therein. Since the paper of Srivastava and Worsley [9] there have been a few works on testing for a change in the mean of multivariate time series. In their paper they considered likelihood ratio tests for a change in the multivariate i.i.d. normal mean. Tests for a change in mean with dependent but stationary error terms have been considered by Horváth et al. [10]. In the more general context of regression, Qu and Perron [11] considered a model where changes in the covariance matrix of the errors occur at the same time as changes in the regression coefficients, and hence the covariance matrix of the errors is a step function of time. To our knowledge there are no results on testing for a change in the mean of multivariate models when the covariance matrix of the errors is time varying with unknown form. The main objective of this paper is to handle this problem. More precisely, we consider the d-dimensional model (1.1), where the errors form an i.i.d. sequence of random vectors (not necessarily normal) with zero mean and covariance equal to the identity matrix, and the sequence of scale matrices is deterministic with unknown form. The null and the alternative hypotheses are as follows. In practice, some particular cases of model (1.1) have been considered in many areas. For instance, in the univariate case (d = 1), Starica and Granger [12] show that an appropriate model for the logarithm of the absolute returns of the S&P500 index is given by (1.1) where the mean and the scale are step functions.
They also show that models (1.1) and (1.3) give forecasts superior to those based on a stationary GARCH(1,1) model. In the multivariate case (d > 1), Horváth et al. [10] considered model (1.1) where the mean is subject to change and the covariance is constant; they applied such a model to temperature data to provide evidence for the global warming theory. For financial data, it is well known that asset returns have a time varying covariance. Therefore, for example, in portfolio management, our test can be used to indicate whether the means of one or more asset returns are subject to change. If so, then taking such a change into account is very useful in computing portfolio risk measures such as the Value at Risk (VaR) or the expected shortfall (ES) (see Artzner et al. [13] and Holton [14] for more details).
2. The Test Statistic and the Assumptions
In order to construct the test statistic let where is a square root of , are the empirical covariance and mean of the sample , respectively, is the integer part of , and is the transpose of .
The CUSUM test statistic we will consider is given by where
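Since the displayed formulas did not survive extraction, the following is only a sketch of a standardized multivariate CUSUM statistic of this general type; the function name, the use of the max norm, and the exact normalization are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def cusum_statistic(X):
    """Standardized CUSUM statistic for an (n, d) sample X.

    Sketch: center the sample, standardize the partial sums by an inverse
    square root of the empirical covariance, and take the maximum over all
    split points and components.  The paper's exact normalization may differ.
    """
    n, d = X.shape
    Xbar = X.mean(axis=0)
    Gamma = np.cov(X, rowvar=False, bias=True)    # empirical covariance
    # symmetric inverse square root via eigendecomposition
    w, V = np.linalg.eigh(Gamma)
    Gamma_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    S = np.cumsum(X - Xbar, axis=0) / np.sqrt(n)  # partial-sum process
    Z = S @ Gamma_inv_sqrt                        # standardized components
    return np.abs(Z).max()                        # sup over k and components
```

Under the null the statistic stays moderate, while a mean shift inflates the centered partial sums and hence the maximum.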
Assumption 1. The sequence of matrices is bounded and satisfies
Assumption 2. There exists such that , where denotes the Euclidean norm of .
3. Limiting Distribution of under the Null
Theorem 3.1. Suppose that Assumptions 1 and 2 hold. Then, under ,
Here denotes convergence in distribution and is a multivariate Brownian bridge with independent components.
Moreover, the cumulative distribution function of is given by
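The limiting law can be evaluated numerically. A minimal sketch, assuming the limit is the supremum of the max-norm of d independent Brownian bridges, so that the CDF is the d-th power of the classical Kolmogorov distribution; the function name and the series truncation are illustration choices:

```python
import math

def kolmogorov_cdf(x, d=1, terms=100):
    """P(sup_t max_j |B_j(t)| <= x) for d independent Brownian bridges.

    Uses the classical Kolmogorov series
    K(x) = 1 - 2 * sum_{k>=1} (-1)^{k-1} exp(-2 k^2 x^2);
    with d independent components the CDF is K(x)**d (an assumption
    matching the independent-components statement above).
    """
    if x <= 0:
        return 0.0
    K = 1.0 - 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * x * x)
                        for k in range(1, terms + 1))
    return K ** d
```

For d = 1 this reproduces the familiar critical value: the 95% quantile is approximately 1.3581.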
To prove Theorem 3.1 we will first establish a functional central limit theorem for random sequences with time varying covariance. Such a theorem is of independent interest. Let be the space of random functions that are right-continuous and have left limits, endowed with the Skorohod topology. For a given , let be the product space. The weak convergence of a sequence of random elements in to a random element in will be denoted by .
For two random vectors and means that has the same distribution as .
Consider an i.i.d. sequence of random vectors with zero mean and identity covariance. Let satisfy (2.5) and set where is a square root of . Many functional central limit theorems have been established for covariance stationary random sequences; see Boutahar [15] and the references therein. Note that the sequence we consider here is not covariance stationary.
There are two sufficient conditions to prove that (see Billingsley [16] and Iglehart [17]), namely:
(i) the finite-dimensional distributions of converge to the finite-dimensional distributions of ;
(ii) is tight for all , if .
Theorem 3.2. Assume that is an i.i.d. sequence of random vectors such that , and that Assumptions 1 and 2 hold. Then where is a standard multivariate Brownian motion.
Proof. Write for the -th entry of the matrix . To prove that the finite-dimensional distributions of converge to those of , it is sufficient to show that for all integers , for all , and for all , ,
Denote by the characteristic function of and by a generic positive constant, not necessarily the same at each occurrence. We have
where
Since is an i.i.d. sequence of random vectors we have
Hence
Let the indicator function equal 1 if its argument is true and 0 otherwise, and let be the filtration spanned by .
Then is a zero-mean square-integrable martingale array with differences . Observe that
Now using Assumption 1 we obtain that uniformly on for some positive constant , hence Assumption 2 implies that for all ,
where
consequently (see Hall and Heyde [18], Theorem 3.2)
where is a normal random variable with zero mean and variance . Therefore
which together with (3.6) implies that
the last equality holds since, with ,
For , fixed, in order to obtain the tightness of it suffices to show the following inequality (Billingsley [16], Theorem 15.6):
for some , , where is a nondecreasing continuous function on and .
We have
where
Now observe that
Likewise . Since , the inequality (3.18) holds with , .
In order to prove Theorem 3.1 we need also the following lemma.
Lemma 3.3. Assume that is given by (1.1), where is an i.i.d. sequence of random vectors such that , and that satisfies (2.5). Then, under the null , the empirical covariance of satisfies where denotes almost sure convergence.
Proof. Let and for fixed, .
Then is a martingale difference sequence with respect to . Since and the matrix is bounded, it follows that
since by using Assumptions 1 and 2 we get
Therefore, Theorem 5 of Chow [19] implies that
or
where denotes the -th entry of the matrix . Hence
Lemma 2 of Lai and Wei [20], page 157, implies that, with probability one, or , which implies that . Note that ; hence, combining (3.27) and (3.30), we obtain
Proof of Theorem 3.1. Under the null we have , thus recalling (3.3) we can write Therefore the result (3.1) holds by applying Theorem 3.2, Lemma 3.3, and the continuous mapping theorem.
4. Consistency of
We assume that under the alternative the means are bounded and satisfy the following.
Assumption H1. There exists a function from into such that
Assumption H2. There exists such that
Assumption H3. There exists such that where .
Theorem 4.1. Suppose that Assumptions 1 and 2 hold. If is given by (1.1) and the means satisfy the Assumptions H1, H2, and H3, then the test based on is consistent against , that is, where denotes the convergence in probability.
Proof. We have
where
Straightforward computation leads to
Therefore
where is a square root of , that is, , and
Hence
which implies that
4.1. Consistency of against Abrupt Change
Without loss of generality we assume that under the alternative hypothesis there is a single break date, that is, is given by (1.1) where
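Since the displayed specification did not survive extraction, a standard single-break parametrization consistent with the surrounding text would be the following (the symbols for the two mean levels and the break fraction are assumptions):

```latex
\mu_t \;=\;
\begin{cases}
\mu^{(0)}, & 1 \le t \le [n\tau],\\
\mu^{(1)}, & [n\tau] < t \le n,
\end{cases}
\qquad \tau \in (0,1),\quad \mu^{(0)} \neq \mu^{(1)}.
```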
Corollary 4.2. Suppose that Assumptions 1 and 2 hold. If is given by (1.1) and the means satisfy (4.12), then the test based on is consistent against .
Proof. It is easy to show that (4.1)–(4.3) are satisfied with Note that (4.2) is satisfied for all since .
Remark 4.3. The result of Corollary 4.2 remains valid if under the alternative hypothesis there are multiple breaks in the mean.
4.2. Consistency of against Smooth Change
In this subsection we assume that the break in the mean does not happen suddenly; rather, the transition from one value to another is continuous with slow variation. A well-known dynamic is the smooth threshold model (see Teräsvirta [21]), in which the mean is time varying as follows, where is the smooth transition function, assumed to be continuous from into , and are the values of the mean in the two extreme regimes, that is, when and . The slope parameter indicates how rapid the transition between the two extreme regimes is, and is the location parameter.
Two choices for the function are frequently evoked: the logistic function, given by , and the exponential one, . For example, for the logistic function with , the extreme regimes are obtained as follows:
(i) if and is large, then and thus ;
(ii) if and is large, then and thus .
This means that at the beginning of the sample is close to and then moves towards and becomes close to it at the end of the sample.
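The two transition functions above can be sketched as follows; the parameter names (slope gamma, location c) and the parametrization of the mean path are assumptions, since the displayed formulas did not survive extraction.

```python
import math

def logistic_transition(u, gamma, c):
    """Logistic transition F(u) = 1 / (1 + exp(-gamma * (u - c))), u in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-gamma * (u - c)))

def exponential_transition(u, gamma, c):
    """Exponential transition F(u) = 1 - exp(-gamma * (u - c) ** 2)."""
    return 1.0 - math.exp(-gamma * (u - c) ** 2)

def smooth_mean(u, mu0, mu1, F=logistic_transition, gamma=20.0, c=0.5):
    """Mean path mu(u) = mu0 + (mu1 - mu0) * F(u): moves from mu0 towards mu1."""
    return mu0 + (mu1 - mu0) * F(u, gamma, c)
```

With a large slope gamma, the logistic path is close to mu0 at the start of the sample and close to mu1 at the end, which is the behaviour described above.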
Corollary 4.4. Suppose that Assumptions 1 and 2 hold. If is given by (1.1) and the means satisfy (4.14), then the test based on is consistent against .
Proof. The assumptions (4.1) and (4.3) are satisfied with
where
Since , to prove (4.2) it suffices to show that there exists such that .
Assume, on the contrary, that for all ; then
which implies that for all or
and this contradicts the alternative hypothesis .
4.3. Consistency of against Continuous Change
In this subsection we will examine the behaviour under the alternative where the mean varies at each time, and hence can take an infinite number of values. As an example we consider a polynomial evolution for :
Corollary 4.5. Suppose that Assumptions 1 and 2 hold. If is given by (1.1) and the means satisfy (4.21), then the test based on is consistent against .
Proof. The assumptions H1βH3 are satisfied with Note that (4.2) is satisfied for all , provided that there exist and such that .
5. Finite Sample Performance
All models are driven by an i.i.d. sequence of bivariate errors in which the first component has a standard normal distribution, the second component a Student distribution with 3 degrees of freedom, and the two components are independent for all . Simulations were performed using the software R. We carry out an experiment of 1000 replications for seven models, and we use three different sample sizes, , , and . The empirical sizes and powers are calculated at the nominal levels 1%, 5%, and 10%, in both cases.
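As a concrete illustration of this simulation design, here is a sketch of a generator for model (1.1); the function names, the parametrization by t/n, and the variance rescaling of the Student errors are assumptions for illustration, since the paper's exact model parameters did not survive extraction.

```python
import numpy as np

def simulate_model(n, mean_fn, cov_sqrt_fn, df=None, seed=None):
    """Draw X_t = mu(t/n) + H(t/n) @ eps_t for t = 1..n (sketch of model (1.1)).

    eps_t is i.i.d. with identity covariance: standard normal components,
    or Student-t with df > 2 degrees of freedom rescaled to unit variance.
    mean_fn and cov_sqrt_fn map u in (0, 1] to a mean vector and to a
    square root of the covariance matrix, respectively.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(mean_fn(1.0 / n)).shape[0]
    X = np.empty((n, d))
    for t in range(1, n + 1):
        u = t / n
        if df is None:
            eps = rng.standard_normal(d)
        else:
            # rescale so that var(eps) = 1 (t with df dof has variance df/(df-2))
            eps = rng.standard_t(df, size=d) * np.sqrt((df - 2) / df)
        X[t - 1] = mean_fn(u) + cov_sqrt_fn(u) @ eps
    return X
```

For example, a bivariate sample with constant zero mean and identity covariance is obtained with `simulate_model(n, lambda u: np.zeros(2), lambda u: np.eye(2))`; a time varying covariance or a changing mean is introduced by making the corresponding function depend on u.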
5.1. Study of the Size
In order to evaluate the size distortion of the test statistic we consider the following two bivariate models.
Model 1 (constant covariance).
Model 2 (time varying covariance).
From Table 1, we observe that for a small sample size () the test statistic has a severe size distortion, but as the sample size increases the distortion decreases: the empirical size becomes closer to (while always remaining below) the nominal level. The distortion in the nonstationary Model 2 (time varying covariance) is somewhat greater than that in the stationary Model 1 (constant covariance). However, the test seems to be conservative in both cases.
5.2. Study of the Power
In order to assess the power of the test statistic we consider the following five bivariate models.
5.2.1. Abrupt Change in the Mean
Model 3 (constant covariance).
Model 4. In this model the mean and the covariance are subject to an abrupt change at the same time:
Model 5. The mean is subject to an abrupt change and the covariance is time varying (see Figure 1):
5.2.2. Smooth Change in the Mean
Model 6. We consider a logistic smooth transition for the mean and a time varying covariance (see Figure 1):
5.2.3. Continuous Change in the Mean
Model 7. In this model the mean is a polynomial of order two and the covariance matrix is also time varying as in the preceding Models 5 and 6 (see Figure 1):
From Table 2, we observe that for a small sample size (), the test statistic has low power. However, for all five models, the power improves as the sample size increases. The powers in the nonstationary models are always smaller than those in the stationary models. This is not surprising since, from Table 1, the test statistic is more conservative in nonstationary models. We also observe that the power is almost the same for abrupt and logistic smooth changes (compare Models 5 and 6). However, for the polynomial change (Model 7) the power is lower than for Models 5 and 6. To explain this underperformance, we can see in Figure 1 that for the polynomial change the time intervals where the mean stays near the extreme values 0 and 1 are very short compared to those for the abrupt and smooth changes. We have also simulated other continuous changes: linear and cubic polynomials, trigonometric functions, and many other functions. As in Model 7, such changes are hardly detected in small samples, and the test performs well only in large samples.
Acknowledgment
The author would like to thank the anonymous referees for their constructive comments.