Journal of Probability and Statistics
Volume 2012 (2012), Article ID 969753, 17 pages
http://dx.doi.org/10.1155/2012/969753
Research Article

Testing for Change in Mean of Independent Multivariate Observations with Time Varying Covariance

Mohamed Boutahar

Institute of Mathematics of Luminy, 163 Avenue de Luminy, 13288 Marseille Cedex 9, France

Received 28 August 2011; Accepted 24 November 2011

Academic Editor: Man Lai Tang

Copyright © 2012 Mohamed Boutahar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We consider a nonparametric CUSUM test for change in the mean of multivariate time series with time varying covariance. We prove that, under the null, the test statistic has a Kolmogorov limiting distribution. The asymptotic consistency of the test against a large class of alternatives, which contains abrupt, smooth, and continuous changes, is established. We also perform a simulation study to analyze the size distortion and the power of the proposed test.

1. Introduction

In the statistical literature there is a vast body of work on testing for change in the mean of univariate time series. Sen and Srivastava [1, 2], Hawkins [3], Worsley [4], and James et al. [5] considered tests for mean shifts of normal i.i.d. sequences. Extensions to dependent univariate time series have been studied by many authors; see Tang and MacNeill [6], Antoch et al. [7], Shao and Zhang [8], and the references therein. Since the paper of Srivastava and Worsley [9] there have been only a few works on testing for change in the mean of multivariate time series. In their paper they considered likelihood ratio tests for a change in the multivariate i.i.d. normal mean. Tests for a change in mean with dependent but stationary error terms have been considered by Horváth et al. [10]. In the more general context of regression, Qu and Perron [11] considered a model where changes in the covariance matrix of the errors occur at the same times as changes in the regression coefficients, so that the covariance matrix of the errors is a step function of time. To our knowledge, there are no results on testing for change in the mean of multivariate models when the covariance matrix of the errors is time varying with unknown form. The main objective of this paper is to handle this problem. More precisely, we consider the $d$-dimensional model
$$Y_t=\mu_t+\Gamma_t\varepsilon_t,\qquad t=1,\ldots,n,\tag{1.1}$$
where $(\varepsilon_t)$ is an i.i.d. sequence of random vectors (not necessarily normal) with zero mean and covariance $I_d$, the identity matrix. The sequence of matrices $(\Gamma_t)$ is deterministic with unknown form. The null and the alternative hypotheses are as follows:
$$H_0:\ \mu_t=\mu\quad\forall\,t\ge 1\qquad\text{against}\qquad H_1:\ \text{there exist } t\ne s\ \text{such that}\ \mu_t\ne\mu_s.\tag{1.2}$$
In practice, some particular cases of (1.1) have been considered in many areas. For instance, in the univariate case ($d=1$), Starica and Granger [12] show that an appropriate model for the logarithm of the absolute returns of the S&P 500 index is given by (1.1), where the mean $\mu_t$ and the volatility $\sigma_t$ are step functions, that is,
$$\mu_t=\mu^{(j)}\ \text{if } t=n_{j-1}+1,\ldots,n_j,\quad n_j=[\lambda_j n],\ 0<\lambda_1<\cdots<\lambda_{m_1}<1,$$
$$\sigma_t=\sigma^{(j)}\ \text{if } t=t_{j-1}+1,\ldots,t_j,\quad t_j=[\tau_j n],\ 0<\tau_1<\cdots<\tau_{m_2}<1,\tag{1.3}$$
for some integers $m_1$ and $m_2$. They also show that the model (1.1) with (1.3) gives forecasts superior to those based on a stationary GARCH(1,1) model. In the multivariate case ($d>1$), Horváth et al. [10] considered model (1.1) where $\mu_t$ is subject to change and $\Sigma_t=\Sigma$ is constant; they applied this model to temperature data to provide evidence for the global warming theory. For financial data it is well known that asset returns have a time varying covariance. Therefore, for example, in portfolio management our test can be used to indicate whether the means of one or more asset returns are subject to change. If so, taking such a change into account is very useful in computing portfolio risk measures such as the value at risk (VaR) or the expected shortfall (ES) (see Artzner et al. [13] and Holton [14] for more details).
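To fix ideas, the following Python sketch simulates a bivariate instance of model (1.1) with a step mean as in (1.3) and a deterministic, time varying $\Gamma_t$. It is an illustration added here for convenience (the paper contains no code); all numerical values, the helper names, and the choice of Gaussian errors are arbitrary, the model only requiring i.i.d. errors with zero mean and identity covariance.

```python
import numpy as np

def simulate_model(n, mus, break_fracs, gamma_fn, rng=None):
    """Simulate Y_t = mu_t + Gamma_t eps_t (model (1.1)) with a step mean as in (1.3).

    mus         : list of regime means mu^(1), ..., mu^(m)
    break_fracs : sorted break fractions lambda_1 < ... < lambda_{m-1} in (0, 1)
    gamma_fn    : function t -> Gamma_t (d x d matrix), t = 1, ..., n
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(mus[0])
    edges = [0] + [int(lam * n) for lam in break_fracs] + [n]   # regime boundaries
    Y = np.empty((n, d))
    for t in range(1, n + 1):
        j = next(k for k in range(len(mus)) if edges[k] < t <= edges[k + 1])
        eps = rng.standard_normal(d)          # i.i.d. errors, mean 0, covariance I_d
        Y[t - 1] = np.asarray(mus[j]) + gamma_fn(t) @ eps
    return Y

# example: one break in the mean at lambda_1 = 0.5 and a sinusoidal Gamma_t
n = 200
gamma = lambda t: np.array([[2 * np.sin(t * np.pi / 4), 1.0],
                            [1.0, 2 * np.cos(t * np.pi / 4)]])
Y = simulate_model(n, mus=[[0.0, 1.0], [1.0, 0.0]], break_fracs=[0.5], gamma_fn=gamma)
print(Y.shape)  # (200, 2)
```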

2. The Test Statistic and the Assumptions

In order to construct the test statistic, let
$$B_n(\tau)=\widehat{\Gamma}^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{[n\tau]}\bigl(Y_t-\bar{Y}\bigr),\qquad \tau\in[0,1],\tag{2.1}$$
where $\widehat{\Gamma}$ is a square root of $\widehat{\Sigma}$, that is, $\widehat{\Sigma}=\widehat{\Gamma}\widehat{\Gamma}'$, and
$$\widehat{\Sigma}=\frac{1}{n}\sum_{t=1}^{n}\bigl(Y_t-\bar{Y}\bigr)\bigl(Y_t-\bar{Y}\bigr)',\qquad \bar{Y}=\frac{1}{n}\sum_{t=1}^{n}Y_t\tag{2.2}$$
are the empirical covariance and mean of the sample $(Y_1,\ldots,Y_n)$, respectively, $[x]$ is the integer part of $x$, and $X'$ is the transpose of $X$.

The CUSUM test statistic we will consider is given by
$$\mathcal{M}_n=\sup_{\tau\in[0,1]}\bigl\|B_n(\tau)\bigr\|_{\infty},\tag{2.3}$$
where
$$\|X\|_{\infty}=\max_{1\le i\le d}\bigl|X^{(i)}\bigr|\quad\text{if } X=\bigl(X^{(1)},\ldots,X^{(d)}\bigr)'.\tag{2.4}$$
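As an illustration of how (2.1)–(2.4) translate into computation, here is a minimal Python sketch of the statistic $\mathcal{M}_n$; the helper name and the use of the symmetric (eigenvalue) matrix square root are choices of this sketch, not prescribed by the paper.

```python
import numpy as np

def cusum_stat(Y):
    """Compute M_n = sup_tau ||B_n(tau)||_inf from (2.1)-(2.4) for an (n x d) sample."""
    n, d = Y.shape
    Ybar = Y.mean(axis=0)
    # empirical covariance Sigma_hat as in (2.2) (no Bessel correction)
    Sigma_hat = (Y - Ybar).T @ (Y - Ybar) / n
    # Gamma_hat: a square root of Sigma_hat, taken here as the symmetric root
    eigval, eigvec = np.linalg.eigh(Sigma_hat)
    Gamma_hat = eigvec @ np.diag(np.sqrt(eigval)) @ eigvec.T
    Gamma_inv = np.linalg.inv(Gamma_hat)
    # partial sums of (Y_t - Ybar): rows are B_n(k/n) for k = 1, ..., n
    S = np.cumsum(Y - Ybar, axis=0) / np.sqrt(n)
    B = S @ Gamma_inv.T
    return np.abs(B).max()   # sup over tau of the max-norm
```

For an (n x d) array Y, cusum_stat(Y) returns the observed value of $\mathcal{M}_n$, which can then be compared with a critical value obtained from the limiting distribution given in Section 3.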

Assumption 1. The sequence of matrices $(\Gamma_t)$ is bounded and satisfies
$$\frac{1}{n}\sum_{t=1}^{n}\Gamma_t\Gamma_t'\longrightarrow \Sigma>0\quad\text{as } n\to\infty.\tag{2.5}$$

Assumption 2. There exists $\delta>0$ such that $E\bigl(\|\varepsilon_1\|^{2+\delta}\bigr)<\infty$, where $\|X\|$ denotes the Euclidean norm of $X$.

3. Limiting Distribution of $\mathcal{M}_n$ under the Null

Theorem 3.1. Suppose that Assumptions 1 and 2 hold. Then, under $H_0$,
$$\mathcal{M}_n\stackrel{\mathcal{D}}{\longrightarrow}\mathcal{M}=\sup_{\tau\in[0,1]}\bigl\|B(\tau)\bigr\|_{\infty},\tag{3.1}$$
where $\stackrel{\mathcal{D}}{\longrightarrow}$ denotes convergence in distribution and $B(\tau)$ is a multivariate Brownian bridge with independent components.
Moreover, the cumulative distribution function of $\mathcal{M}$ is given by
$$F_{\mathcal{M}}(z)=\Biggl(1+2\sum_{k=1}^{\infty}(-1)^{k}\exp\bigl(-2k^{2}z^{2}\bigr)\Biggr)^{d}.\tag{3.2}$$
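Since (3.2) gives the limiting distribution in closed form, asymptotic critical values can be obtained numerically by truncating the series and inverting the distribution function. The following Python sketch (added for illustration; the helper names, truncation level, and bisection bounds are arbitrary choices) does this for a given dimension $d$.

```python
import numpy as np

def limit_cdf(z, d, n_terms=100):
    """F(z) = (1 + 2 * sum_{k>=1} (-1)^k exp(-2 k^2 z^2))^d  -- equation (3.2), truncated."""
    k = np.arange(1, n_terms + 1)
    kolmogorov = 1.0 + 2.0 * np.sum((-1.0) ** k * np.exp(-2.0 * k**2 * z**2))
    return max(kolmogorov, 0.0) ** d

def critical_value(alpha, d, lo=0.05, hi=10.0, tol=1e-8):
    """Solve F(c) = 1 - alpha by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if limit_cdf(mid, d) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. asymptotic 5% critical value for bivariate data (d = 2)
print(round(critical_value(0.05, d=2), 3))
```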

To prove Theorem 3.1 we will first establish a functional central limit theorem for random sequences with time varying covariance. Such a theorem is of independent interest. Let $D=D[0,1]$ be the space of functions on $[0,1]$ that are right-continuous and have left limits, endowed with the Skorohod topology. For a given $d$, let $D^{d}=D^{d}[0,1]$ be the corresponding product space. The weak convergence of a sequence of random elements $X_n$ in $D^{d}$ to a random element $X$ in $D^{d}$ will be denoted by $X_n\Longrightarrow X$.

For two random vectors $X$ and $Y$, $X\stackrel{\text{law}}{=}Y$ means that $X$ has the same distribution as $Y$.

Consider an i.i.d. sequence $(\varepsilon_t)$ of random vectors such that $E(\varepsilon_t)=0$ and $\operatorname{var}(\varepsilon_t)=I_d$. Let $(\Gamma_t)$ satisfy (2.5) and set
$$W_n(\tau)=\Gamma^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{[n\tau]}\Gamma_t\varepsilon_t,\qquad \tau\in[0,1],\tag{3.3}$$
where $\Gamma$ is a square root of $\Sigma$, that is, $\Sigma=\Gamma\Gamma'$. Many functional central limit theorems have been established for covariance stationary random sequences; see Boutahar [15] and the references therein. Note that the sequence $(\Gamma_t\varepsilon_t)$ considered here is not covariance stationary.

There are two sufficient conditions for proving that $W_n\Longrightarrow W$ (see Billingsley [16] and Iglehart [17]), namely, (i) the finite-dimensional distributions of $W_n$ converge to the finite-dimensional distributions of $W$; (ii) $W_n^{(i)}$ is tight for all $1\le i\le d$, where $W_n=(W_n^{(1)},\ldots,W_n^{(d)})'$.

Theorem 3.2. Assume that $(\varepsilon_t)$ is an i.i.d. sequence of random vectors such that $E(\varepsilon_t)=0$ and $\operatorname{var}(\varepsilon_t)=I_d$, and that Assumptions 1 and 2 hold. Then
$$W_n\Longrightarrow W,\tag{3.4}$$
where $W$ is a standard multivariate Brownian motion.

Proof. Write $F_t=\Gamma^{-1}\Gamma_t$, let $F_t(i,j)$ denote the $(i,j)$th entry of the matrix $F_t$, and write $\varepsilon_t=(\varepsilon_t(1),\ldots,\varepsilon_t(d))'$. To prove that the finite-dimensional distributions of $W_n$ converge to those of $W$ it is sufficient to show that, for every integer $r\ge 1$, all $0\le\tau_1<\cdots<\tau_r\le 1$, and all $\alpha_i\in\mathbb{R}^d$, $1\le i\le r$,
$$Z_n=\sum_{i=1}^{r}\alpha_i' W_n(\tau_i)\Longrightarrow Z=\sum_{i=1}^{r}\alpha_i' W(\tau_i).\tag{3.5}$$
Denote by $\Phi_{Z_n}(u)=E(\exp(iuZ_n))$ the characteristic function of $Z_n$ and by $C$ a generic positive constant, not necessarily the same at each occurrence. We have
$$\Phi_{Z_n}(u)=E\exp\Biggl(\frac{iu}{\sqrt{n}}\sum_{k=1}^{r}\alpha_k'\sum_{t=1}^{[n\tau_k]}F_t\varepsilon_t\Biggr)=E\exp\Biggl(\frac{iu}{\sqrt{n}}\sum_{k=1}^{r}\Bigl(\sum_{j=k}^{r}\alpha_j\Bigr)'\sum_{t=[n\tau_{k-1}]+1}^{[n\tau_k]}F_t\varepsilon_t\Biggr)=\prod_{k=1}^{r}\Phi_{k,n}(u),\qquad(\tau_0=0),\tag{3.6}$$
where
$$\Phi_{k,n}(u)=E\exp\Biggl(\frac{iu}{\sqrt{n}}\Bigl(\sum_{j=k}^{r}\alpha_j\Bigr)'\sum_{t=[n\tau_{k-1}]+1}^{[n\tau_k]}F_t\varepsilon_t\Biggr).\tag{3.7}$$
Since $(\varepsilon_t)$ is an i.i.d. sequence of random vectors we have
$$\bigl(\varepsilon_{[n\tau_{k-1}]+1},\ldots,\varepsilon_{[n\tau_k]}\bigr)\stackrel{\text{law}}{=}\bigl(\varepsilon_1,\ldots,\varepsilon_{[n\tau_k]-[n\tau_{k-1}]}\bigr).\tag{3.8}$$
Hence
$$\Phi_{k,n}(u)=E\exp\Biggl(\frac{iu}{\sqrt{n}}\Bigl(\sum_{j=k}^{r}\alpha_j\Bigr)'\sum_{t=1}^{[n\tau_k]-[n\tau_{k-1}]}F_{[n\tau_{k-1}]+t}\,\varepsilon_t\Biggr).\tag{3.9}$$
Let $\mathbf{I}(A)=1$ if the argument $A$ is true and $0$ otherwise, $k_n=[n\tau_k]-[n\tau_{k-1}]$,
$$\xi_{n,i}=\frac{1}{\sqrt{n}}\Bigl(\sum_{j=k}^{r}\alpha_j\Bigr)'F_{[n\tau_{k-1}]+i}\,\varepsilon_i,\qquad M_{n,k_n}=\sum_{i=1}^{k_n}\xi_{n,i},\tag{3.10}$$
and let $\mathcal{F}_{n,t}=\sigma(\varepsilon_1,\ldots,\varepsilon_t)$, $t\le k_n$, be the filtration spanned by $\varepsilon_1,\ldots,\varepsilon_t$.
Then $(M_{n,i},\mathcal{F}_{n,i},\,1\le i\le k_n,\ n\ge 1)$ is a zero-mean, square-integrable martingale array with differences $\xi_{n,i}$. Observe that
$$\sum_{i=1}^{k_n}E\bigl(\xi_{n,i}^{2}\mid\mathcal{F}_{n,i-1}\bigr)=\frac{1}{n}\Bigl(\sum_{j=k}^{r}\alpha_j\Bigr)'\Biggl(\sum_{i=1}^{k_n}F_{[n\tau_{k-1}]+i}F_{[n\tau_{k-1}]+i}'\Biggr)\Bigl(\sum_{j=k}^{r}\alpha_j\Bigr)\longrightarrow \sigma_k^{2}=\bigl(\tau_k-\tau_{k-1}\bigr)\Bigl\|\sum_{j=k}^{r}\alpha_j\Bigr\|^{2}\quad\text{as } n\to\infty.\tag{3.11}$$
Now using Assumption 1 we obtain that $\|\Gamma_t\|<K$ uniformly in $t$ for some positive constant $K$; hence Assumption 2 implies that, for all $\varepsilon>0$,
$$\sum_{i=1}^{k_n}E\bigl(\xi_{n,i}^{2}\,\mathbf{I}\bigl(|\xi_{n,i}|>\varepsilon\bigr)\mid\mathcal{F}_{n,i-1}\bigr)\le\frac{1}{\varepsilon^{\delta}}\sum_{i=1}^{k_n}E\bigl(|\xi_{n,i}|^{2+\delta}\mid\mathcal{F}_{n,i-1}\bigr)\le C\,\frac{k_n}{n^{1+\delta/2}}\longrightarrow 0\quad\text{as } n\to\infty,\tag{3.12}$$
where
$$C=\frac{E\|\varepsilon_1\|^{2+\delta}}{\varepsilon^{\delta}}\Bigl(K\,\|\Gamma^{-1}\|\,\Bigl\|\sum_{j=k}^{r}\alpha_j\Bigr\|\Bigr)^{2+\delta};\tag{3.13}$$
consequently (see Hall and Heyde [18], Theorem 3.2),
$$M_{n,k_n}\Longrightarrow Z_k,\tag{3.14}$$
where $Z_k$ is a normal random variable with zero mean and variance $\sigma_k^{2}$. Therefore
$$\Phi_{k,n}(u)=E\exp\bigl(iuM_{n,k_n}\bigr)\longrightarrow\exp\Bigl(-\frac{1}{2}\sigma_k^{2}u^{2}\Bigr)\quad\text{as } n\to\infty,\tag{3.15}$$
which together with (3.6) implies that
$$\Phi_{Z_n}(u)\longrightarrow\exp\Biggl(-\frac{1}{2}\sum_{k=1}^{r}\sigma_k^{2}u^{2}\Biggr)=\Phi_Z(u)\quad\text{as } n\to\infty;\tag{3.16}$$
the last equality holds since, with $\tau_0=0$,
$$\sum_{k=1}^{r}\bigl(\tau_k-\tau_{k-1}\bigr)\Bigl\|\sum_{j=k}^{r}\alpha_j\Bigr\|^{2}=\sum_{1\le i,j\le r}\alpha_i'\alpha_j\min\bigl(\tau_i,\tau_j\bigr).\tag{3.17}$$
For fixed $1\le i\le d$, in order to obtain the tightness of $W_n^{(i)}$ it suffices to establish the following inequality (Billingsley [16], Theorem 15.6):
$$E\Bigl(\bigl|W_n^{(i)}(\tau)-W_n^{(i)}(\tau_1)\bigr|^{\gamma}\bigl|W_n^{(i)}(\tau_2)-W_n^{(i)}(\tau)\bigr|^{\gamma}\Bigr)\le\bigl(F(\tau_2)-F(\tau_1)\bigr)^{\alpha},\tag{3.18}$$
for some $\gamma>0$, $\alpha>1$, where $F$ is a nondecreasing continuous function on $[0,1]$ and $0<\tau_1<\tau<\tau_2<1$.
We have
$$E\Bigl(\bigl|W_n^{(i)}(\tau)-W_n^{(i)}(\tau_1)\bigr|^{2}\bigl|W_n^{(i)}(\tau_2)-W_n^{(i)}(\tau)\bigr|^{2}\Bigr)=T_1T_2,\tag{3.19}$$
where
$$T_1=\frac{1}{n}E\Biggl|\sum_{t=[n\tau_1]+1}^{[n\tau]}\sum_{j=1}^{d}F_t(i,j)\varepsilon_t(j)\Biggr|^{2},\qquad T_2=\frac{1}{n}E\Biggl|\sum_{t=[n\tau]+1}^{[n\tau_2]}\sum_{j=1}^{d}F_t(i,j)\varepsilon_t(j)\Biggr|^{2}.\tag{3.20}$$
Now observe that
$$T_1=\frac{1}{n}\sum_{t,s}\operatorname{cov}\Biggl(\sum_{j=1}^{d}F_t(i,j)\varepsilon_t(j),\sum_{j=1}^{d}F_s(i,j)\varepsilon_s(j)\Biggr)=\frac{1}{n}\sum_{t=[n\tau_1]+1}^{[n\tau]}\sum_{j=1}^{d}F_t(i,j)^{2}\le C\bigl(\tau-\tau_1\bigr)\quad\text{for some constant } C>0.\tag{3.21}$$
Likewise $T_2\le C(\tau_2-\tau)$. Since $(\tau-\tau_1)(\tau_2-\tau)\le(\tau_2-\tau_1)^{2}/2$, the inequality (3.18) holds with $\gamma=\alpha=2$ and $F(t)=Ct/\sqrt{2}$.

In order to prove Theorem 3.1 we also need the following lemma.

Lemma 3.3. Assume that $(Y_t)$ is given by (1.1), where $(\varepsilon_t)$ is an i.i.d. sequence of random vectors such that $E(\varepsilon_t)=0$ and $\operatorname{var}(\varepsilon_t)=I_d$, and that $(\Gamma_t)$ satisfies (2.5). Then, under the null $H_0$, the empirical covariance of $Y_t$ satisfies
$$\widehat{\Sigma}\stackrel{\text{a.s.}}{\longrightarrow}\Sigma,\tag{3.22}$$
where a.s. denotes almost sure convergence.

Proof. Let $W_t=\Gamma_t\varepsilon_t$, $\mathcal{F}_t=\sigma(\varepsilon_1,\ldots,\varepsilon_t)$, and, for $i,j$ fixed with $1\le i\le d$, $1\le j\le d$, let $e_t=W_t(i)W_t(j)-E\bigl(W_t(i)W_t(j)\mid\mathcal{F}_{t-1}\bigr)$.
Then $(e_t)$ is a martingale difference sequence with respect to $\mathcal{F}_t$. Since $e_t=W_t(i)W_t(j)-\sum_{k=1}^{d}\Gamma_t(i,k)\Gamma_t(j,k)$ and the matrix $\Gamma_t$ is bounded, it follows that
$$E|e_t|^{(2+\delta)/2}\le C+E\bigl|W_t(i)W_t(j)\bigr|^{(2+\delta)/2}\le C+\Bigl(E\bigl|W_t(i)\bigr|^{2+\delta}\Bigr)^{1/2}\Bigl(E\bigl|W_t(j)\bigr|^{2+\delta}\Bigr)^{1/2}\le C,\tag{3.23}$$
since by using Assumptions 1 and 2 we get
$$E\bigl|W_t(i)\bigr|^{2+\delta}\le\Biggl(\sum_{k=1}^{d}\Bigl(E\bigl|\Gamma_t(i,k)\varepsilon_t(k)\bigr|^{2+\delta}\Bigr)^{1/(2+\delta)}\Biggr)^{2+\delta}\le C.\tag{3.24}$$
Therefore, Theorem 5 of Chow [19] implies that
$$\sum_{t=1}^{n}e_t=o(n)\quad\text{almost surely},\tag{3.25}$$
or
$$\frac{1}{n}\sum_{t=1}^{n}W_t(i)W_t(j)=\frac{1}{n}\sum_{t=1}^{n}\bigl(\Gamma_t\Gamma_t'\bigr)(i,j)+o(1)\quad\text{almost surely},\tag{3.26}$$
where $(\Gamma_t\Gamma_t')(i,j)$ denotes the $(i,j)$th entry of the matrix $\Gamma_t\Gamma_t'$. Hence
$$\frac{1}{n}\sum_{t=1}^{n}W_tW_t'\stackrel{\text{a.s.}}{\longrightarrow}\Sigma.\tag{3.27}$$

Lemma 2 of Lai and Wei [20], page 157, implies that, with probability one,
$$\sum_{t=1}^{n}\Gamma_t(i,k)\varepsilon_t(k)=o\Biggl(\sum_{t=1}^{n}\Gamma_t(i,k)^{2}\Biggr)+O(1)=o(n)+O(1),\qquad 1\le i\le d,\tag{3.28}$$
or
$$\frac{1}{n}\sum_{t=1}^{n}\Gamma_t(i,k)\varepsilon_t(k)=o(1)+O\Bigl(\frac{1}{n}\Bigr)\quad\text{almost surely},\tag{3.29}$$
which implies that
$$\frac{1}{n}\sum_{t=1}^{n}W_t=\Biggl(\sum_{k=1}^{d}\frac{1}{n}\sum_{t=1}^{n}\Gamma_t(1,k)\varepsilon_t(k),\ldots,\sum_{k=1}^{d}\frac{1}{n}\sum_{t=1}^{n}\Gamma_t(d,k)\varepsilon_t(k)\Biggr)'\stackrel{\text{a.s.}}{\longrightarrow}0.\tag{3.30}$$
Note that, under $H_0$, $Y_t=\mu+W_t$; hence, combining (3.27) and (3.30), we obtain
$$\widehat{\Sigma}=\frac{1}{n}\sum_{t=1}^{n}Y_tY_t'-\bar{Y}\bar{Y}'=\mu\Biggl(\frac{1}{n}\sum_{t=1}^{n}W_t\Biggr)'+\Biggl(\frac{1}{n}\sum_{t=1}^{n}W_t\Biggr)\mu'+\mu\mu'+\frac{1}{n}\sum_{t=1}^{n}W_tW_t'-\bar{Y}\bar{Y}'\stackrel{\text{a.s.}}{\longrightarrow}\Sigma.\tag{3.31}$$

Proof of Theorem 3.1. Under the null $H_0$ we have $Y_t=\mu+\Gamma_t\varepsilon_t$; thus, recalling (3.3), we can write
$$B_n(\tau)=\widehat{\Gamma}^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{[n\tau]}\bigl(Y_t-\bar{Y}\bigr)=\widehat{\Gamma}^{-1}\Gamma\Biggl(\Gamma^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{[n\tau]}\bigl(Y_t-\mu\bigr)-\Gamma^{-1}\frac{[n\tau]}{\sqrt{n}}\bigl(\bar{Y}-\mu\bigr)\Biggr)=\widehat{\Gamma}^{-1}\Gamma\Bigl(W_n(\tau)-\frac{[n\tau]}{n}W_n(1)\Bigr).\tag{3.32}$$
Therefore the result (3.1) follows by applying Theorem 3.2, Lemma 3.3, and the continuous mapping theorem.

4. Consistency of $\mathcal{M}_n$

We assume that under the alternative 𝐻1 the means (𝜇𝑡) are bounded and satisfy the following.

Assumption H1. There exists a function $U$ from $[0,1]$ into $\mathbb{R}^d$ such that, for all $\tau\in[0,1]$,
$$\frac{1}{n}\sum_{t=1}^{[n\tau]}\mu_t\longrightarrow U(\tau)\quad\text{as } n\to\infty.\tag{4.1}$$

Assumption H2. There exists $\tau\in(0,1)$ such that
$$\mathcal{U}(\tau)=U(\tau)-\tau U(1)\ne 0.\tag{4.2}$$

Assumption H3. There exists a matrix $\Sigma_{\mu}$ such that
$$\frac{1}{n}\sum_{t=1}^{n}\bigl(\mu_t-\bar{\mu}\bigr)\bigl(\mu_t-\bar{\mu}\bigr)'\longrightarrow\Sigma_{\mu}\quad\text{as } n\to\infty,\tag{4.3}$$
where $\bar{\mu}=(1/n)\sum_{t=1}^{n}\mu_t$.

Theorem 4.1. Suppose that Assumptions 1 and 2 hold. If $(Y_t)$ is given by (1.1) and the means $(\mu_t)$ satisfy Assumptions H1, H2, and H3, then the test based on $\mathcal{M}_n$ is consistent against $H_1$, that is,
$$\mathcal{M}_n\stackrel{P}{\longrightarrow}+\infty,\tag{4.4}$$
where $\stackrel{P}{\longrightarrow}$ denotes convergence in probability.

Proof. We have
$$B_n(\tau)=B_n^{0}(\tau)+B_n^{1}(\tau),\tag{4.5}$$
where
$$B_n^{0}(\tau)=\widehat{\Gamma}^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{[n\tau]}\bigl(W_t-\bar{W}\bigr),\quad W_t=\Gamma_t\varepsilon_t,\quad \bar{W}=\frac{1}{n}\sum_{t=1}^{n}W_t,\qquad B_n^{1}(\tau)=\widehat{\Gamma}^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{[n\tau]}\bigl(\mu_t-\bar{\mu}\bigr).\tag{4.6}$$
Straightforward computation leads to
$$\widehat{\Sigma}\stackrel{\text{a.s.}}{\longrightarrow}\Sigma^{*}=\Sigma+\Sigma_{\mu}.\tag{4.7}$$
Therefore
$$B_n^{0}(\tau)\Longrightarrow\Gamma^{*-1}\Gamma B(\tau),\tag{4.8}$$
where $\Gamma^{*}$ is a square root of $\Sigma^{*}$, that is, $\Sigma^{*}=\Gamma^{*}\Gamma^{*\prime}$, and $\Gamma$ is, as before, a square root of $\Sigma$, and
$$\frac{B_n^{1}(\tau)}{\sqrt{n}}\stackrel{\text{a.s.}}{\longrightarrow}\Gamma^{*-1}\mathcal{U}(\tau).\tag{4.9}$$
Hence, for $\tau$ satisfying (4.2),
$$\bigl\|B_n(\tau)\bigr\|_{\infty}\stackrel{P}{\longrightarrow}+\infty,\tag{4.10}$$
which implies that
$$\mathcal{M}_n\stackrel{P}{\longrightarrow}+\infty.\tag{4.11}$$

4.1. Consistency of $\mathcal{M}_n$ against Abrupt Change

Without loss of generality, we assume that under the alternative hypothesis $H_1$ there is a single break date; that is, $(Y_t)$ is given by (1.1) where
$$\mu_t=\begin{cases}\mu^{(1)}&\text{if } 1\le t\le[n\tau_1],\\ \mu^{(2)}&\text{if } [n\tau_1]+1\le t\le n,\end{cases}\qquad\text{for some } \tau_1\in(0,1)\ \text{and}\ \mu^{(1)}\ne\mu^{(2)}.\tag{4.12}$$

Corollary 4.2. Suppose that Assumptions 1 and 2 hold. If $(Y_t)$ is given by (1.1) and the means $(\mu_t)$ satisfy (4.12), then the test based on $\mathcal{M}_n$ is consistent against $H_1$.

Proof. It is easy to show that (4.1)–(4.3) are satisfied with
$$\mathcal{U}(\tau)=\begin{cases}\tau\bigl(1-\tau_1\bigr)\bigl(\mu^{(1)}-\mu^{(2)}\bigr)&\text{if } \tau\le\tau_1,\\ \tau_1\bigl(1-\tau\bigr)\bigl(\mu^{(1)}-\mu^{(2)}\bigr)&\text{if } \tau>\tau_1,\end{cases}\qquad \Sigma_{\mu}=\tau_1\bigl(1-\tau_1\bigr)\bigl(\mu^{(1)}-\mu^{(2)}\bigr)\bigl(\mu^{(1)}-\mu^{(2)}\bigr)'.\tag{4.13}$$
Note that (4.2) is satisfied for all $0<\tau<\tau_1$ since $\mu^{(1)}\ne\mu^{(2)}$.
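For completeness, the expression for $\mathcal{U}(\tau)$ in (4.13) follows directly from the definition (4.1); a short derivation reads as follows.
$$
U(\tau)=\lim_{n\to\infty}\frac{1}{n}\sum_{t=1}^{[n\tau]}\mu_t=
\begin{cases}
\tau\,\mu^{(1)}, & \tau\le\tau_1,\\[2pt]
\tau_1\,\mu^{(1)}+(\tau-\tau_1)\,\mu^{(2)}, & \tau>\tau_1,
\end{cases}
$$
so that, with $U(1)=\tau_1\mu^{(1)}+(1-\tau_1)\mu^{(2)}$,
$$
\mathcal{U}(\tau)=U(\tau)-\tau U(1)=
\begin{cases}
\tau\bigl(1-\tau_1\bigr)\bigl(\mu^{(1)}-\mu^{(2)}\bigr), & \tau\le\tau_1,\\[2pt]
\tau_1\bigl(1-\tau\bigr)\bigl(\mu^{(1)}-\mu^{(2)}\bigr), & \tau>\tau_1,
\end{cases}
$$
which is the expression given in (4.13).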

Remark 4.3. The result of Corollary 4.2 remains valid if under the alternative hypothesis there are multiple breaks in the mean.

4.2. Consistency of $\mathcal{M}_n$ against Smooth Change

In this subsection we assume that the break in the mean does not happen suddenly; instead, the transition from one value to the other is continuous, with slow variation. A well-known dynamic is the smooth threshold model (see Teräsvirta [21]), in which the mean $\mu_t$ is time varying as follows:
$$\mu_t=\mu^{(1)}+\bigl(\mu^{(2)}-\mu^{(1)}\bigr)F\Bigl(\frac{t}{n},\tau_1,\gamma\Bigr),\qquad 1\le t\le n,\quad \mu^{(1)}\ne\mu^{(2)},\tag{4.14}$$
where $F(x,\tau_1,\gamma)$ is the smooth transition function, assumed to be continuous from $[0,1]$ into $[0,1]$, and $\mu^{(1)}$ and $\mu^{(2)}$ are the values of the mean in the two extreme regimes, that is, when $F\to 0$ and $F\to 1$. The slope parameter $\gamma$ indicates how rapid the transition between the two extreme regimes is, and $\tau_1$ is the location parameter.

Two choices for the function $F$ are frequently evoked: the logistic function, given by
$$F_L\bigl(x,\tau_1,\gamma\bigr)=\bigl(1+\exp\bigl(-\gamma\bigl(x-\tau_1\bigr)\bigr)\bigr)^{-1},\tag{4.15}$$
and the exponential one,
$$F_e\bigl(x,\tau_1,\gamma\bigr)=1-\exp\bigl(-\gamma\bigl(x-\tau_1\bigr)^{2}\bigr).\tag{4.16}$$
For example, for the logistic function with $\gamma>0$, the extreme regimes are obtained as follows: (i) if $x\approx 0$ and $\gamma$ is large, then $F\approx 0$ and thus $\mu_t\approx\mu^{(1)}$; (ii) if $x\approx 1$ and $\gamma$ is large, then $F\approx 1$ and thus $\mu_t\approx\mu^{(2)}$.

This means that at the beginning of the sample 𝜇𝑡 is close to 𝜇(1) and then moves towards 𝜇(2) and becomes close to it at the end of the sample.
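The construction (4.14)–(4.16) is straightforward to implement; the following Python sketch (illustrative only, with the values $\gamma=30$ and $\tau_1=1/2$ borrowed from Model 6 of Section 5 and all helper names chosen here) generates a smooth mean path under the logistic transition.

```python
import numpy as np

def logistic_F(x, tau1, gamma):
    """Logistic transition function F_L(x, tau1, gamma) of (4.15)."""
    return 1.0 / (1.0 + np.exp(-gamma * (x - tau1)))

def exponential_F(x, tau1, gamma):
    """Exponential transition function F_e(x, tau1, gamma) of (4.16)."""
    return 1.0 - np.exp(-gamma * (x - tau1) ** 2)

def smooth_mean_path(n, mu1, mu2, tau1=0.5, gamma=30.0, F=logistic_F):
    """mu_t = mu1 + (mu2 - mu1) F(t/n, tau1, gamma), t = 1, ..., n  -- equation (4.14)."""
    x = np.arange(1, n + 1) / n
    w = F(x, tau1, gamma)                      # transition weights in [0, 1]
    return np.outer(1.0 - w, mu1) + np.outer(w, mu2)

mu_path = smooth_mean_path(200, mu1=np.array([0.0, 1.0]), mu2=np.array([1.0, 0.0]))
print(mu_path[0], mu_path[-1])   # starts near mu1, ends near mu2
```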

Corollary 4.4. Suppose that Assumptions 1 and 2 hold. If $(Y_t)$ is given by (1.1) and the means $(\mu_t)$ satisfy (4.14), then the test based on $\mathcal{M}_n$ is consistent against $H_1$.

Proof. Assumptions (4.1) and (4.3) are satisfied with
$$\mathcal{U}(\tau)=\bigl(\mu^{(2)}-\mu^{(1)}\bigr)T(\tau),\tag{4.17}$$
where
$$T(\tau)=\int_{0}^{\tau}F\bigl(x,\tau_1,\gamma\bigr)\,dx-\tau\int_{0}^{1}F\bigl(x,\tau_1,\gamma\bigr)\,dx,\qquad \Sigma_{\mu}=\bigl(\mu^{(2)}-\mu^{(1)}\bigr)\bigl(\mu^{(2)}-\mu^{(1)}\bigr)'\Biggl(\int_{0}^{1}F^{2}\bigl(x,\tau_1,\gamma\bigr)\,dx-\Bigl(\int_{0}^{1}F\bigl(x,\tau_1,\gamma\bigr)\,dx\Bigr)^{2}\Biggr).\tag{4.18}$$
Since $\mu^{(2)}-\mu^{(1)}\ne 0$, to prove (4.2) it suffices to show that there exists $\tau$ such that $T(\tau)\ne 0$.
Assume that $T(\tau)=0$ for all $\tau\in(0,1)$; then
$$\frac{dT(\tau)}{d\tau}=F\bigl(\tau,\tau_1,\gamma\bigr)-\int_{0}^{1}F\bigl(x,\tau_1,\gamma\bigr)\,dx=0\qquad\forall\,\tau\in(0,1),\tag{4.19}$$
which implies that $F(\tau,\tau_1,\gamma)=\int_{0}^{1}F(x,\tau_1,\gamma)\,dx=C$ for all $\tau\in(0,1)$, or
$$\mu_t=\mu^{(1)}+\bigl(\mu^{(2)}-\mu^{(1)}\bigr)C=\mu\quad\text{for all } t\ge 1,\tag{4.20}$$
and this contradicts the alternative hypothesis $H_1$.

4.3. Consistency of $\mathcal{M}_n$ against Continuous Change

In this subsection we examine the behaviour of $\mathcal{M}_n$ under an alternative where the mean $\mu_t$ varies at each time and hence can take an infinite number of values. As an example we consider a polynomial evolution for $\mu_t$:
$$\mu_t=\Bigl(P_1\Bigl(\frac{t}{n}\Bigr),\ldots,P_d\Bigl(\frac{t}{n}\Bigr)\Bigr)',\qquad P_j(x)=\sum_{k=0}^{p_j}\alpha_{j,k}x^{k},\quad 1\le j\le d.\tag{4.21}$$

Corollary 4.5. Suppose that Assumptions 1 and 2 hold. If $(Y_t)$ is given by (1.1) and the means $(\mu_t)$ satisfy (4.21), then the test based on $\mathcal{M}_n$ is consistent against $H_1$.

Proof. Assumptions H1–H3 are satisfied with
$$\mathcal{U}(\tau)=\Biggl(\sum_{k=0}^{p_1}\alpha_{1,k}\frac{\tau^{k+1}-\tau}{k+1},\ldots,\sum_{k=0}^{p_d}\alpha_{d,k}\frac{\tau^{k+1}-\tau}{k+1}\Biggr)',\qquad \Sigma_{\mu}(i,j)=\sum_{k=0}^{p_i}\sum_{l=0}^{p_j}\frac{\alpha_{i,k}\alpha_{j,l}}{l+k+1}-\Biggl(\sum_{k=0}^{p_i}\frac{\alpha_{i,k}}{k+1}\Biggr)\Biggl(\sum_{k=0}^{p_j}\frac{\alpha_{j,k}}{k+1}\Biggr).\tag{4.22}$$
Note that (4.2) is satisfied for all $0<\tau<1$, provided that there exist $i$, $1\le i\le d$, and $k$, $1\le k\le p_i$, such that $\alpha_{i,k}\ne 0$.

5. Finite Sample Performance

All models are driven by an i.i.d. sequence $\varepsilon_t=(\varepsilon_t(1),\ldots,\varepsilon_t(d))'$, where each $\varepsilon_t(j)$, $1\le j\le d$, has a $t(3)$ distribution (a Student distribution with 3 degrees of freedom) and $\varepsilon_t(i)$ and $\varepsilon_t(j)$ are independent for all $i\ne j$. Simulations were performed using the software R. We carry out an experiment of 1000 samples for seven models, using three different sample sizes: $n=30$, $n=100$, and $n=500$. The empirical sizes and powers are calculated at the nominal levels $\alpha=1\%$, 5%, and 10%, in both cases.

5.1. Study of the Size

In order to evaluate the size distortion of the test statistic $\mathcal{M}_n$, we consider two bivariate models $Y_t=\mu_t+\Gamma_t\varepsilon_t$ with the following specifications.

Model 1 (constant covariance).
$$\mu_t=\begin{pmatrix}1\\1\end{pmatrix},\qquad \Gamma_t=\begin{pmatrix}2&1\\1&2\end{pmatrix}.\tag{5.1}$$

Model 2 (time varying covariance).
$$\mu_t=\begin{pmatrix}1\\1\end{pmatrix},\qquad \Gamma_t=\begin{pmatrix}2\sin(t\omega)&1\\1&2\cos(t\omega)\end{pmatrix},\qquad \omega=\frac{\pi}{4}.\tag{5.2}$$
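The following Python sketch shows how the empirical size for Model 2 can be estimated (the paper's own simulations were carried out in R, so this is only an illustrative reimplementation with a deliberately small number of replications). The statistic computation is restated so that the snippet is self-contained, and the critical value 1.48 is the approximate asymptotic 5% value for $d=2$ obtained from (3.2).

```python
import numpy as np

rng = np.random.default_rng(0)

def cusum_stat(Y):
    # M_n = sup_tau ||Gamma_hat^{-1} n^{-1/2} sum_{t<=[n tau]} (Y_t - Ybar)||_inf, cf. (2.1)-(2.4)
    n, d = Y.shape
    Yc = Y - Y.mean(axis=0)
    val, vec = np.linalg.eigh(Yc.T @ Yc / n)
    Gamma_inv = vec @ np.diag(1.0 / np.sqrt(val)) @ vec.T
    return np.abs(np.cumsum(Yc, axis=0) / np.sqrt(n) @ Gamma_inv.T).max()

def simulate_model2(n):
    # Model 2: constant mean, time varying Gamma_t, independent t(3) errors as in Section 5
    t = np.arange(1, n + 1)
    eps = rng.standard_t(df=3, size=(n, 2))
    g11, g22 = 2 * np.sin(t * np.pi / 4), 2 * np.cos(t * np.pi / 4)
    noise = np.column_stack([g11 * eps[:, 0] + eps[:, 1], eps[:, 0] + g22 * eps[:, 1]])
    return np.array([1.0, 1.0]) + noise

# empirical size at the 5% level (small replication count, for illustration only)
n, reps, crit = 500, 200, 1.48
rejections = sum(cusum_stat(simulate_model2(n)) > crit for _ in range(reps))
print(rejections / reps)
```

The power experiments for Models 3–7 below can be reproduced in the same way by adding the corresponding change in the mean to the simulated sample.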

From Table 1, we observe that for the small sample size ($n=30$) the test statistic $\mathcal{M}_n$ has a severe size distortion. However, as the sample size $n$ increases, the distortion decreases and the empirical size becomes closer to (but always lower than) the nominal level. The distortion in the nonstationary Model 2 (time varying covariance) is somewhat greater than that in the stationary Model 1 (constant covariance). The test therefore appears to be conservative in both cases.

Table 1: Empirical sizes (%).
5.2. Study of the Power

In order to assess the power of the test statistic $\mathcal{M}_n$, we consider five bivariate models $Y_t=\mu_t+\Gamma_t\varepsilon_t$ with the following specifications.

5.2.1. Abrupt Change in the Mean

Model 3 (constant covariance).
$$\mu_t=\begin{cases}(0,1)'&\text{if } 1\le t\le n/2,\\ (1,0)'&\text{if } n/2+1\le t\le n,\end{cases}\qquad \Gamma_t=\begin{pmatrix}2&1\\1&2\end{pmatrix}.\tag{5.3}$$

Model 4. In this model the mean and the covariance are subject to an abrupt change at the same time:
$$\mu_t=\begin{cases}(0,1)'&\text{if } 1\le t\le n/2,\\ (1,0)'&\text{if } n/2+1\le t\le n,\end{cases}\qquad \Gamma_t=\begin{cases}\begin{pmatrix}1&0\\0&1\end{pmatrix}&\text{if } 1\le t\le n/2,\\[6pt] \begin{pmatrix}2&1\\0&2\end{pmatrix}&\text{if } n/2+1\le t\le n.\end{cases}\tag{5.4}$$

Model 5. The mean is subject to an abrupt change and the covariance is time varying (see Figure 1):
$$\mu_t=\begin{cases}(0,1)'&\text{if } 1\le t\le n/2,\\ (1,0)'&\text{if } n/2+1\le t\le n,\end{cases}\qquad \Gamma_t=\begin{pmatrix}2\sin(t\omega)&1\\1&2\cos(t\omega)\end{pmatrix},\qquad \omega=\frac{\pi}{4}.\tag{5.5}$$

Figure 1: The three kinds of change in the mean.
5.2.2. Smooth Change in the Mean

Model 6. We consider a logistic smooth transition for the mean and a time varying covariance (see Figure 1):
$$\mu_t=\mu^{(1)}+\frac{\mu^{(2)}-\mu^{(1)}}{1+\exp\bigl(-30\bigl(t/n-1/2\bigr)\bigr)},\quad 1\le t\le n,\qquad \mu^{(1)}=\begin{pmatrix}0\\1\end{pmatrix},\quad \mu^{(2)}=\begin{pmatrix}1\\0\end{pmatrix},\qquad \Gamma_t=\begin{pmatrix}2\sin(t\omega)&1\\1&2\cos(t\omega)\end{pmatrix},\quad \omega=\frac{\pi}{4}.\tag{5.6}$$

5.2.3. Continuous Change in the Mean

Model 7. In this model the mean is a polynomial of order two and the covariance matrix is time varying as in the preceding Models 5 and 6 (see Figure 1):
$$\mu_t=\begin{pmatrix}t/n-(t/n)^{2}\\ t/n-(t/n)^{2}\end{pmatrix},\qquad \Gamma_t=\begin{pmatrix}2\sin(t\omega)&1\\1&2\cos(t\omega)\end{pmatrix},\qquad \omega=\frac{\pi}{4}.\tag{5.7}$$

From Table 2, we observe that for the small sample size ($n=30$) the test statistic $\mathcal{M}_n$ has low power. However, for all five models the power becomes good as the sample size $n$ increases. The powers in the nonstationary models are always smaller than those in the stationary models. This is not surprising since, from Table 1, the test statistic $\mathcal{M}_n$ is more conservative in nonstationary models. We also observe that the power is almost the same for abrupt and logistic smooth changes (compare Models 5 and 6). However, for the polynomial change (Model 7) the power is lower than for Models 5 and 6. To explain this underperformance, note from Figure 1 that, for the polynomial change, the time intervals where the mean stays near its extreme values are very short compared with those for the abrupt and smooth changes. We have also simulated other continuous changes: linear and cubic polynomials, trigonometric functions, and many other functions. As in Model 7, such changes are hardly detected for small values of $n$, and the test based on $\mathcal{M}_n$ performs well only in large samples.

Table 2: Empirical powers (%).

Acknowledgment

The author would like to thank the anonymous referees for their constructive comments.

References

  1. A. Sen and M. S. Srivastava, “On tests for detecting change in mean,” The Annals of Statistics, vol. 3, pp. 98–108, 1975.
  2. A. Sen and M. S. Srivastava, “Some one-sided tests for change in level,” Technometrics, vol. 17, pp. 61–64, 1975.
  3. D. M. Hawkins, “Testing a sequence of observations for a shift in location,” Journal of the American Statistical Association, vol. 72, no. 357, pp. 180–186, 1977.
  4. K. J. Worsley, “On the likelihood ratio test for a shift in location of normal populations,” Journal of the American Statistical Association, vol. 74, no. 366, pp. 365–367, 1979.
  5. B. James, K. L. James, and D. Siegmund, “Tests for a change-point,” Biometrika, vol. 74, no. 1, pp. 71–83, 1987.
  6. S. M. Tang and I. B. MacNeill, “The effect of serial correlation on tests for parameter change at unknown time,” The Annals of Statistics, vol. 21, no. 1, pp. 552–575, 1993.
  7. J. Antoch, M. Hušková, and Z. Prášková, “Effect of dependence on statistics for determination of change,” Journal of Statistical Planning and Inference, vol. 60, no. 2, pp. 291–310, 1997.
  8. X. Shao and X. Zhang, “Testing for change points in time series,” Journal of the American Statistical Association, vol. 105, no. 491, pp. 1228–1240, 2010.
  9. M. S. Srivastava and K. J. Worsley, “Likelihood ratio tests for a change in the multivariate normal mean,” Journal of the American Statistical Association, vol. 81, no. 393, pp. 199–204, 1986.
  10. L. Horváth, P. Kokoszka, and J. Steinebach, “Testing for changes in multivariate dependent observations with an application to temperature changes,” Journal of Multivariate Analysis, vol. 68, no. 1, pp. 96–119, 1999.
  11. Z. Qu and P. Perron, “Estimating and testing structural changes in multivariate regressions,” Econometrica, vol. 75, no. 2, pp. 459–502, 2007.
  12. C. Starica and C. W. J. Granger, “Nonstationarities in stock returns,” The Review of Economics and Statistics, vol. 87, no. 3, pp. 503–522, 2005.
  13. P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath, “Coherent measures of risk,” Mathematical Finance, vol. 9, no. 3, pp. 203–228, 1999.
  14. A. G. Holton, Value-at-Risk: Theory and Practice, Academic Press, 2003.
  15. M. Boutahar, “Identification of persistent cycles in non-Gaussian long-memory time series,” Journal of Time Series Analysis, vol. 29, no. 4, pp. 653–672, 2008.
  16. P. Billingsley, Convergence of Probability Measures, Wiley, New York, NY, USA, 1968.
  17. D. L. Iglehart, “Weak convergence of probability measures on product spaces with application to sums of random vectors,” Technical Report, Department of Statistics, Stanford University, Stanford, Calif, USA, 1968.
  18. P. Hall and C. C. Heyde, Martingale Limit Theory and Its Application, Academic Press, New York, NY, USA, 1980.
  19. Y. S. Chow, “Local convergence of martingales and the law of large numbers,” Annals of Mathematical Statistics, vol. 36, no. 2, pp. 552–558, 1965.
  20. T. L. Lai and C. Z. Wei, “Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems,” The Annals of Statistics, vol. 10, no. 1, pp. 154–166, 1982.
  21. T. Teräsvirta, “Specification, estimation, and evaluation of smooth transition autoregressive models,” Journal of the American Statistical Association, vol. 89, pp. 208–218, 1994.