Journal of Probability and Statistics

Volume 2012, Article ID 969753, 17 pages

http://dx.doi.org/10.1155/2012/969753

## Testing for Change in Mean of Independent Multivariate Observations with Time Varying Covariance

Institute of Mathematics of Luminy, 163 Avenue de Luminy, 13288 Marseille Cedex 9, France

Received 28 August 2011; Accepted 24 November 2011

Academic Editor: Man Lai Tang

Copyright © 2012 Mohamed Boutahar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider a nonparametric CUSUM test for change in the mean of multivariate time series with time varying covariance. We prove that under the null, the test statistic has a Kolmogorov limiting distribution. The asymptotic consistency of the test against a large class of alternatives, which contains abrupt, smooth, and continuous changes, is established. We also perform a simulation study to analyze the size distortion and the power of the proposed test.

#### 1. Introduction

In the statistical literature there is a vast body of work on testing for change in the mean of univariate time series. Sen and Srivastava [1, 2], Hawkins [3], Worsley [4], and James et al. [5] considered tests for mean shifts in normal i.i.d. sequences. Extensions to dependent univariate time series have been studied by many authors; see Tang and MacNeill [6], Antoch et al. [7], Shao and Zhang [8], and the references therein. Since the paper of Srivastava and Worsley [9], who considered likelihood ratio tests for a change in the multivariate i.i.d. normal mean, there have been only a few works on testing for change in the mean of multivariate time series. Tests for change in mean with dependent but stationary error terms were considered by Horváth et al. [10]. In the more general context of regression, Qu and Perron [11] considered a model in which changes in the covariance matrix of the errors occur at the same times as changes in the regression coefficients, so that the covariance matrix of the errors is a step function of time. To our knowledge there are no results on testing for change in the mean of multivariate models when the covariance matrix of the errors is time varying with unknown form; the main objective of this paper is to handle this problem. More precisely, we consider the multivariate model (1.1), in which the innovations form an i.i.d. sequence of random vectors (not necessarily normal) with zero mean and covariance equal to the identity matrix, and the sequence of scale matrices is deterministic with unknown form. The null hypothesis is that the mean is constant over the sample, against the alternative that it changes at some time.

In practice, particular cases of model (1.1) have been considered in many areas. For instance, in the univariate case, Starica and Granger [12] show that an appropriate model for the logarithm of the absolute returns of the S&P500 index is given by (1.1) where the mean and the scale are step functions of time, that is, piecewise constant with a finite number of break dates.
They also show that model (1.1) with the step-function specification (1.3) gives forecasts superior to those based on a stationary GARCH(1,1) model. In the multivariate case, Horváth et al. [10] considered model (1.1) where the mean is subject to change and the covariance is constant; they applied such a model to temperature data to provide evidence for the global warming theory. For financial data, it is well known that asset returns have a time varying covariance. Therefore, in portfolio management for example, our test can be used to indicate whether the means of one or more asset returns are subject to change. If so, taking such a change into account is very useful in computing portfolio risk measures such as the value at risk (VaR) or the expected shortfall (ES) (see Artzner et al. [13] and Holton [14] for more details).
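The displayed equations for the model are not reproduced above; in illustrative notation (the symbols $y_t$, $\mu_t$, $\Gamma_t$, $\varepsilon_t$, and $d$ are assumptions, not necessarily the paper's own), a model of the type described, together with the null and alternative hypotheses, can be sketched as:

```latex
% Illustrative sketch of a d-dimensional model of type (1.1); notation assumed.
y_t = \mu_t + \Gamma_t \varepsilon_t, \qquad t = 1, \dots, n, \qquad
\varepsilon_t \ \text{i.i.d.}, \quad \mathbb{E}(\varepsilon_t) = 0, \quad
\operatorname{var}(\varepsilon_t) = I_d,
\qquad
H_0 : \mu_1 = \cdots = \mu_n
\quad \text{against} \quad
H_1 : \mu_t \ne \mu_s \ \text{for some } t \ne s,
```

with the matrices $\Gamma_t$ deterministic and of unknown form.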

#### 2. The Test Statistic and the Assumptions

In order to construct the test statistic, the centered observations are standardized by a square root of the empirical covariance matrix, where the empirical covariance and mean are computed from the whole sample; the integer part of the rescaled time index and the transpose are denoted in the usual way.

The CUSUM test statistic we will consider is given by where
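As a hedged illustration of a CUSUM statistic of this type, the following Python sketch computes a sup-norm CUSUM of partial sums standardized by a square root of the empirical covariance. The function name, the choice of Cholesky square root, and the sup-norm convention are assumptions for illustration, not the paper's exact normalization.

```python
import numpy as np

def cusum_statistic(y):
    """Sup-norm CUSUM statistic for an (n, d) sample y.

    Centers by the empirical mean, standardizes by a (Cholesky) square
    root of the empirical covariance, and takes the maximum absolute
    normalized partial sum over time and components.
    """
    n, d = y.shape
    ybar = y.mean(axis=0)
    centered = y - ybar
    cov = centered.T @ centered / n              # empirical covariance
    root = np.linalg.cholesky(cov)               # one square root of cov
    z = np.linalg.solve(root, centered.T).T      # standardized residuals
    partial = np.cumsum(z, axis=0) / np.sqrt(n)  # normalized partial sums
    return np.abs(partial).max()                 # sup over time and components
```

Under a constant mean the statistic stays stochastically bounded, while a mean shift inflates the partial sums and hence the statistic.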

*Assumption 1. *The sequence of matrices is bounded and satisfies

*Assumption 2. *There exists such that , where denotes the Euclidean norm of .

#### 3. Limiting Distribution of under the Null

Theorem 3.1. *Suppose that Assumptions 1 and 2 hold. Then, under the null hypothesis, the test statistic converges in distribution to the supremum of a multivariate Brownian bridge with independent components. Moreover, the cumulative distribution function of this limit is of Kolmogorov type.*
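Since the limit in Theorem 3.1 is the supremum of a multivariate Brownian bridge with independent components, its distribution function is presumably the $d$-th power of the Kolmogorov distribution; a hedged reconstruction in illustrative notation ($B_1, \dots, B_d$ independent standard Brownian bridges, $d$ the dimension):

```latex
% Hypothetical reconstruction of the limiting distribution function.
\mathbb{P}\Bigl(\sup_{0 \le s \le 1} \max_{1 \le j \le d} |B_j(s)| \le x\Bigr)
  = \Bigl( \sum_{k=-\infty}^{\infty} (-1)^k e^{-2 k^2 x^2} \Bigr)^{d},
  \qquad x > 0 .
```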

To prove Theorem 3.1 we first establish a functional central limit theorem for random sequences with time varying covariance. Such a theorem is of independent interest. Let D = D[0,1] be the space of random functions that are right-continuous and have left limits, endowed with the Skorohod topology, and consider the corresponding product space. The weak convergence of a sequence of random elements of this space to a limiting random element will be denoted in the usual way.

For two random vectors and means that has the same distribution as .

Consider an i.i.d. sequence of random vectors with zero mean and identity covariance. Let the scale matrices satisfy (2.5) and define the normalized partial sums from a square root of the covariance. Many functional central limit theorems have been established for covariance stationary random sequences; see Boutahar [15] and the references therein. Note that the sequence we consider here is not covariance stationary.

There are two sufficient conditions for the weak convergence (see Billingsley [16] and Iglehart [17]), namely:

(i) the finite-dimensional distributions of the partial-sum process converge to the finite-dimensional distributions of the limit process;

(ii) each component of the process is tight.

Theorem 3.2. *Assume that the innovations form an i.i.d. sequence of random vectors with zero mean and identity covariance, and that Assumptions 1 and 2 hold. Then the normalized partial-sum process converges weakly to a standard multivariate Brownian motion.*

*Proof. *Write , the -th entry of the matrix . To prove that the finite-dimensional distributions of converge to those of it is sufficient to show that for all integers , for all , and for all , ,
Denote by the characteristic function of and by a generic positive constant, not necessarily the same at each occurrence. We have
where
Since is an i.i.d. sequence of random vectors we have
Hence
Let the indicator function equal 1 if its argument is true and 0 otherwise, and consider the filtration spanned by the underlying innovations.

Then is a zero-mean square-integrable martingale array with differences . Observe that
Now using Assumption 1 we obtain that uniformly on for some positive constant , hence Assumption 2 implies that for all ,
where
consequently (see Hall and Heyde [18], Theorem 3.2)
where is a normal random variable with zero mean and variance . Therefore
which together with (3.6) implies that
the last equality holds since, with ,

For , fixed, in order to obtain the tightness of it suffices to show the following inequality (Billingsley [16], Theorem 15.6):
for some , , where is a nondecreasing continuous function on and .

We have
where
Now observe that
Likewise . Since , the inequality (3.18) holds with , .

In order to prove Theorem 3.1 we also need the following lemma.

Lemma 3.3. *Assume that the observations are given by (1.1), where the innovations form an i.i.d. sequence of random vectors with zero mean and identity covariance, and that the scale matrices satisfy (2.5). Then, under the null hypothesis, the empirical covariance of the sample converges almost surely.*

*Proof. *Let and for fixed, .

Then is a martingale difference sequence with respect to . Since and the matrix is bounded, it follows that
since by using Assumptions 1 and 2 we get
Therefore, Theorem 5 of Chow [19] implies that
or
where denotes the -th entry of the matrix . Hence

Lemma 2 of Lai and Wei [20], page 157, implies that the convergence holds with probability one, which yields the intermediate limit. Hence, combining (3.27) and (3.30), we obtain the conclusion of the lemma.

*Proof of Theorem 3.1. *Under the null hypothesis the mean is constant over the sample; thus, recalling (3.3), we can write
Therefore the result (3.1) holds by applying Theorem 3.2, Lemma 3.3, and the continuous mapping theorem.

#### 4. Consistency of the Test

We assume that under the alternative the means are bounded and satisfy the following.

*Assumption H1. *There exists a function from into such that

*Assumption H2. *There exists such that

*Assumption H3. *There exists such that
where .

Theorem 4.1. *Suppose that Assumptions 1 and 2 hold. If the observations are given by (1.1) and the means satisfy Assumptions H1, H2, and H3, then the test based on the statistic is consistent against the alternative; that is, the statistic tends to infinity in probability.*

*Proof. *We have
where

Straightforward computation leads to
Therefore
where is a square root of , that is, , and
Hence
which implies that

##### 4.1. Consistency of the Test against Abrupt Change

Without loss of generality we assume that under the alternative hypothesis there is a single break date, that is, is given by (1.1) where
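With a single break date at a fixed fraction of the sample, a mean specification of the kind in (4.12) typically reads as follows, in illustrative notation ($\tau$ denoting the break fraction and $[\,\cdot\,]$ the integer part; the symbols are assumptions, not the paper's):

```latex
% Hypothetical reconstruction of a single-break mean, \tau \in (0,1).
\mu_t = \mu^{(1)} \, \mathbf{1}\{ t \le [n\tau] \}
      + \mu^{(2)} \, \mathbf{1}\{ t > [n\tau] \},
\qquad \mu^{(1)} \ne \mu^{(2)} .
```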

Corollary 4.2. *Suppose that Assumptions 1 and 2 hold. If the observations are given by (1.1) and the means satisfy (4.12), then the test based on the statistic is consistent against the alternative.*

*Proof. *It is easy to show that (4.1)–(4.3) are satisfied with
Note that (4.2) is satisfied for all since .

*Remark 4.3. *The result of Corollary 4.2 remains valid if under the alternative hypothesis there are multiple breaks in the mean.

##### 4.2. Consistency of the Test against Smooth Change

In this subsection we assume that the break in the mean does not happen suddenly; rather, the transition from one value to another is continuous with slow variation. A well-known dynamic is the smooth threshold model (see Teräsvirta [21]), in which the mean varies over time through a smooth transition function, assumed continuous with values in the unit interval, between the values of the mean in the two extreme regimes. The slope parameter indicates how rapid the transition between the two extreme regimes is, and the location parameter determines where in the sample the transition occurs.

Two choices for the transition function are frequently used: the logistic function and the exponential one. For example, for the logistic function with a large slope parameter, the extreme regimes are obtained as follows:

(i) at the beginning of the sample the transition function is close to 0, so the mean is close to its value in the first extreme regime;

(ii) at the end of the sample the transition function is close to 1, so the mean is close to its value in the second extreme regime.

This means that at the beginning of the sample is close to and then moves towards and becomes close to it at the end of the sample.
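A minimal Python sketch of the logistic smooth transition just described; the function names, default parameter values, and the rescaling of time to [0, 1] are illustrative assumptions, not the paper's specification.

```python
import math

def logistic_transition(t, n, gamma=20.0, c=0.5):
    """Logistic transition function evaluated at rescaled time t/n.

    gamma is the slope parameter (speed of transition) and c the
    location parameter; values returned lie in (0, 1).
    """
    return 1.0 / (1.0 + math.exp(-gamma * (t / n - c)))

def smooth_mean(t, n, mu1=0.0, mu2=1.0, gamma=20.0, c=0.5):
    # Mean moves smoothly from mu1 (start of sample) to mu2 (end of sample).
    f = logistic_transition(t, n, gamma=gamma, c=c)
    return (1.0 - f) * mu1 + f * mu2
```

With a large slope parameter the transition is sharp and the mean path is close to an abrupt change, which is consistent with the similar powers reported for Models 5 and 6 in Section 5.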

Corollary 4.4. *Suppose that Assumptions 1 and 2 hold. If the observations are given by (1.1) and the means satisfy (4.14), then the test based on the statistic is consistent against the alternative.*

*Proof. *The assumptions (4.1) and (4.3) are satisfied with
where

Since the integrand is continuous, to prove (4.2) it suffices to show that there exists a point at which the limiting quantity is nonzero.

Assume, on the contrary, that the quantity vanishes at every point; then the mean would take the same value in both extreme regimes, and this contradicts the alternative hypothesis.

##### 4.3. Consistency of the Test against Continuous Change

In this subsection we examine the behaviour of the test under alternatives where the mean varies at each time and hence can take an infinite number of values. As an example we consider a polynomial evolution for the mean:

Corollary 4.5. *Suppose that Assumptions 1 and 2 hold. If the observations are given by (1.1) and the means satisfy (4.21), then the test based on the statistic is consistent against the alternative.*

*Proof. *The assumptions H1–H3 are satisfied with
Note that (4.2) is satisfied for all , provided that there exist and such that .

#### 5. Finite Sample Performance

All models are driven by i.i.d. innovation sequences; the marginal distributions include a Student distribution with 3 degrees of freedom, and the components are independent for all times. Simulations were performed using the software R. We carry out 1000 replications for seven models and use three different sample sizes. The empirical sizes and powers are calculated at the nominal levels 1%, 5%, and 10% in both cases.
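A hedged Monte Carlo sketch of the replication loop for a size study of this kind. The sup-norm CUSUM form, the Student t(3) innovations for both components, and the placeholder critical value are assumptions for illustration; the paper's exact statistic and critical values are not reproduced here.

```python
import numpy as np

def empirical_size(n=100, d=2, reps=200, crit=1.36, seed=1):
    """Monte Carlo rejection frequency under H0 (no change in mean).

    crit is a placeholder critical value, not the paper's tabulated one;
    the statistic below is a sup-norm CUSUM of standardized partial sums.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        y = rng.standard_t(df=3, size=(n, d))   # Student t(3) innovations
        centered = y - y.mean(axis=0)
        cov = centered.T @ centered / n          # empirical covariance
        z = np.linalg.solve(np.linalg.cholesky(cov), centered.T).T
        stat = np.abs(np.cumsum(z, axis=0) / np.sqrt(n)).max()
        rejections += stat > crit
    return rejections / reps
```

The empirical size is the fraction of replications in which the statistic exceeds the critical value; repeating the loop with a mean shift injected into the simulated sample gives the corresponding empirical power.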

##### 5.1. Study of the Size

In order to evaluate the size distortion of the test statistic we consider the following two bivariate models.

*Model 1 (constant covariance). *

*Model 2 (time varying covariance). *

From Table 1, we observe that for the smallest sample size the test statistic has a severe size distortion, but as the sample size increases the distortion decreases: the empirical size becomes closer to (though always lower than) the nominal level. The distortion in the nonstationary Model 2 (time varying covariance) is somewhat greater than in the stationary Model 1 (constant covariance). However, the test appears to be conservative in both cases.

##### 5.2. Study of the Power

In order to assess the power of the test statistic we consider the following five bivariate models.

###### 5.2.1. Abrupt Change in the Mean

*Model 3 (constant covariance). *

*Model 4. *In this model the mean and the covariance are subject to an abrupt change at the same time:

*Model 5. * The mean is subject to an abrupt change and the covariance is time varying (see Figure 1):

###### 5.2.2. Smooth Change in the Mean

*Model 6. *We consider a logistic smooth transition for the mean and a time varying covariance (see Figure 1):

###### 5.2.3. Continuous Change in the Mean

*Model 7. *In this model the mean is a polynomial of order two and the covariance matrix is also time varying as in the preceding Models 5 and 6 (see Figure 1):

From Table 2, we observe that for the smallest sample size the test statistic has low power. However, for all five models the power becomes good as the sample size increases. The powers in nonstationary models are always smaller than those in stationary models; this is not surprising since, from Table 1, the test statistic is more conservative in nonstationary models. We also observe that the power is almost the same for abrupt and logistic smooth changes (compare Models 5 and 6). However, for the polynomial change (Model 7) the power is lower than for Models 5 and 6. To explain this underperformance, note from Figure 1 that for the polynomial change the time intervals during which the mean stays near the extreme values 0 and 1 are very short compared to those for abrupt and smooth changes. We have simulated other continuous changes (linear and cubic polynomial, trigonometric, and many other functions); as in Model 7, such changes are hardly detected in small samples, and the test performs well only in large samples.

#### Acknowledgment

The author would like to thank the anonymous referees for their constructive comments.

#### References

1. A. Sen and M. S. Srivastava, “On tests for detecting change in mean,” *The Annals of Statistics*, vol. 3, pp. 98–108, 1975.
2. A. Sen and M. S. Srivastava, “Some one-sided tests for change in level,” *Technometrics*, vol. 17, pp. 61–64, 1975.
3. D. M. Hawkins, “Testing a sequence of observations for a shift in location,” *Journal of the American Statistical Association*, vol. 72, no. 357, pp. 180–186, 1977.
4. K. J. Worsley, “On the likelihood ratio test for a shift in location of normal populations,” *Journal of the American Statistical Association*, vol. 74, no. 366, pp. 365–367, 1979.
5. B. James, K. L. James, and D. Siegmund, “Tests for a change-point,” *Biometrika*, vol. 74, no. 1, pp. 71–83, 1987.
6. S. M. Tang and I. B. MacNeill, “The effect of serial correlation on tests for parameter change at unknown time,” *The Annals of Statistics*, vol. 21, no. 1, pp. 552–575, 1993.
7. J. Antoch, M. Hušková, and Z. Prášková, “Effect of dependence on statistics for determination of change,” *Journal of Statistical Planning and Inference*, vol. 60, no. 2, pp. 291–310, 1997.
8. X. Shao and X. Zhang, “Testing for change points in time series,” *Journal of the American Statistical Association*, vol. 105, no. 491, pp. 1228–1240, 2010.
9. M. S. Srivastava and K. J. Worsley, “Likelihood ratio tests for a change in the multivariate normal mean,” *Journal of the American Statistical Association*, vol. 81, no. 393, pp. 199–204, 1986.
10. L. Horváth, P. Kokoszka, and J. Steinebach, “Testing for changes in multivariate dependent observations with an application to temperature changes,” *Journal of Multivariate Analysis*, vol. 68, no. 1, pp. 96–119, 1999.
11. Z. Qu and P. Perron, “Estimating and testing structural changes in multivariate regressions,” *Econometrica*, vol. 75, no. 2, pp. 459–502, 2007.
12. C. Starica and C. W. J. Granger, “Nonstationarities in stock returns,” *The Review of Economics and Statistics*, vol. 87, no. 3, pp. 503–522, 2005.
13. P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath, “Coherent measures of risk,” *Mathematical Finance*, vol. 9, no. 3, pp. 203–228, 1999.
14. A. G. Holton, *Value-at-Risk: Theory and Practice*, Academic Press, 2003.
15. M. Boutahar, “Identification of persistent cycles in non-Gaussian long-memory time series,” *Journal of Time Series Analysis*, vol. 29, no. 4, pp. 653–672, 2008.
16. P. Billingsley, *Convergence of Probability Measures*, Wiley, New York, NY, USA, 1968.
17. D. L. Iglehart, “Weak convergence of probability measures on product spaces with application to sums of random vectors,” Technical Report, Department of Statistics, Stanford University, Stanford, Calif, USA, 1968.
18. P. Hall and C. C. Heyde, *Martingale Limit Theory and Its Application*, Academic Press, New York, NY, USA, 1980.
19. Y. S. Chow, “Local convergence of martingales and the law of large numbers,” *Annals of Mathematical Statistics*, vol. 36, no. 2, pp. 552–558, 1965.
20. T. L. Lai and C. Z. Wei, “Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems,” *The Annals of Statistics*, vol. 10, no. 1, pp. 154–166, 1982.
21. T. Teräsvirta, “Specification, estimation, and evaluation of smooth transition autoregressive models,” *Journal of the American Statistical Association*, vol. 89, pp. 208–218, 1994.