Abstract

This paper proposes several test statistics to detect additive or innovative outliers in adaptive functional-coefficient autoregressive (AFAR) models based on extreme value theory and likelihood ratio tests. All the test statistics follow a tractable asymptotic Gumbel distribution. Also, we derive an asymptotic critical value at a fixed significance level and an asymptotic $p$-value for testing, which are used to detect outliers in time series. Simulation studies indicate that the extreme value method for detecting outliers in AFAR models is effective both for AO and IO, for a single outlier and multiple outliers, and for separate outliers and outlier patches. Furthermore, it is shown that our procedure can reduce the possible effects of masking and swamping.

1. Introduction

Outlier detection and analysis play important roles in practical applications. For instance, outlier detection can be applied to anomaly detection in computer networks, financial time series, and data series in geosciences, as can be seen from [1, Chap. 1] and [2]. Other examples include the study of customer loss in the commercial field and the detection and tracking of financial crimes such as credit card fraud, all of which exploit the useful information provided by the presence of outliers. On the other hand, outliers in dynamic systems or engineering time series [3] can have adverse effects on model identification and parameter estimation, so eliminating outliers is necessary in the statistical modeling of time series for the purpose of preprocessing data; see, for example, [4]. Some studies have even shown that the emergence of outliers generates certain nonlinear time series. Several procedures are available in the literature to deal with problems related to outliers. Chen [5] developed a method for detecting additive outliers in bilinear time series, which belong to the family of fractal time series models [6]. Cai et al. [7] studied functional-coefficient regression models for nonlinear time series. Battaglia [8] proposed a way to identify and estimate outliers in functional autoregressive time series. Battaglia and Orfei [9] addressed the issue of outlier detection and estimation in nonlinear time series. Chen et al. [10] and Chen et al. [11] discussed the detection of outlier patches, change points, and outliers in bilinear models.

Extreme value theory and likelihood ratio tests have been used in outlier detection and time series analysis. For instance, Martin [12] conducted an extreme value analysis of optimal level cross-prediction for linear Gaussian processes. Zhu and Ling [13] developed likelihood ratio tests for a structural change from an AR(p) model to a threshold AR(p) model. Furthermore, based on extreme value theory, Chareka et al. [14] proposed an alternative test for the detection of additive outliers in Gaussian time series. On the other hand, some scholars, such as Fung et al. [15] and Río [16], focused their studies on special cases of outlier detection. It is commonly agreed that the key to outlier detection lies in determining whether the test statistic exceeds a critical value, that is, the threshold under a given significance level. However, explanations for the selection of the threshold in much of the literature are ambiguous, and the threshold itself can hardly be controlled at a given level of significance. In this paper, we propose an asymptotic critical value at a fixed significance level, which is used to detect additive and innovative outliers in adaptive functional-coefficient autoregressive (AFAR) models (see, e.g., Fan and Yao [17]).

This paper is structured as follows. In Section 2, we consider AFAR models with additive or innovative outliers and their estimation. In Section 3, several procedures are proposed to detect outliers in AFAR models based on extreme value theory and likelihood ratio tests. In Section 4, we present simulation studies and demonstrate the effectiveness of the proposed method empirically. Concluding remarks are given in Section 5.

2. Outliers Models and Test Statistics

We now consider the AFAR model
\[
y_t = f_1(\beta^\top \mathbf{Y}_{t-1})\, y_{t-1} + f_2(\beta^\top \mathbf{Y}_{t-1})\, y_{t-2} + \varepsilon_t, \qquad (1)
\]
which can be written as
\[
y_t = F(\mathbf{Y}_{t-1}) + \varepsilon_t, \qquad (2)
\]
where $\{\varepsilon_t\}$ is a Gaussian white noise with mean zero and variance $\sigma^2$, $\mathbf{Y}_{t-1} = (y_{t-1}, y_{t-2})^\top$, and $\|\beta\| = 1$.
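To fix ideas, the following minimal Python sketch simulates a process of the form (1)-(2). The coefficient functions $f_1$, $f_2$, the direction $\beta$, and all numerical values are illustrative assumptions for this sketch only, not the specification used in the paper's simulations.

```python
import numpy as np

# Minimal sketch: simulate an AFAR(2) process of the assumed form
#   y_t = f1(u_t) y_{t-1} + f2(u_t) y_{t-2} + eps_t,
# with u_t = beta' (y_{t-1}, y_{t-2})'. f1, f2, and beta are hypothetical
# choices, kept small enough for stability.
def simulate_afar(n, beta=(0.8, 0.6), sigma=1.0, burn=200, seed=0):
    rng = np.random.default_rng(seed)
    f1 = lambda u: 0.5 * np.exp(-u ** 2)   # illustrative coefficient function
    f2 = lambda u: -0.3 * np.exp(-u ** 2)  # illustrative coefficient function
    y = np.zeros(n + burn)
    eps = rng.normal(0.0, sigma, n + burn)
    for t in range(2, n + burn):
        u = beta[0] * y[t - 1] + beta[1] * y[t - 2]  # model-dependent direction
        y[t] = f1(u) * y[t - 1] + f2(u) * y[t - 2] + eps[t]
    return y[burn:]  # drop the burn-in so start-up effects are negligible

y = simulate_afar(500)
```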

Suppose that $\{y_t\}$ is a zero-mean stationary process following model (2), and that there is an outlier at time $h$ whose influence magnitude is $\omega$; then the observed series $\{z_t\}$ may be represented as follows.

(1) Innovative outlier (IO) model:
\[
z_t = F(\mathbf{Z}_{t-1}) + \omega\, \delta_t^{(h)} + \varepsilon_t, \qquad (3)
\]
where $\mathbf{Z}_{t-1} = (z_{t-1}, z_{t-2})^\top$.

(2) Additive outlier (AO) model:
\[
z_t = y_t + \omega\, \delta_t^{(h)}, \qquad (4)
\]
where $\delta_t^{(h)}$ is the Kronecker symbol: if $t = h$, then $\delta_t^{(h)} = 1$; else $\delta_t^{(h)} = 0$.
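The difference between the two contamination mechanisms can be made concrete with a short sketch: an AO perturbs a single observation after the series is generated, while an IO perturbs one innovation and then propagates through the recursion. The functions and parameters below are the same illustrative assumptions as in the previous sketch.

```python
import numpy as np

# Sketch of generating an observed series {z_t} with one outlier at time h,
# following (3) (IO: the shock enters the innovation and propagates) or
# (4) (AO: the shock is added to a single observation).
def simulate_with_outlier(n, h, omega, kind="AO", beta=(0.8, 0.6), seed=0):
    rng = np.random.default_rng(seed)
    f1 = lambda u: 0.5 * np.exp(-u ** 2)   # illustrative, as before
    f2 = lambda u: -0.3 * np.exp(-u ** 2)
    eps = rng.normal(size=n)
    if kind == "IO":
        eps[h] += omega        # model (3): shock enters the noise at time h
    z = np.zeros(n)
    for t in range(2, n):
        u = beta[0] * z[t - 1] + beta[1] * z[t - 2]
        z[t] = f1(u) * z[t - 1] + f2(u) * z[t - 2] + eps[t]
    if kind == "AO":
        z[h] += omega          # model (4): shock affects only observation h
    return z
```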

It is obvious that the detection of an IO or an AO may be transformed into testing whether the null hypothesis $H_{0I}: \omega = 0$ or $H_{0A}: \omega = 0$ is true at a certain level of significance. This can be solved once the distributions of $\hat{\omega}_I$ and $\hat{\omega}_A$ under the null hypothesis are known. We estimate $\omega$ by the maximum likelihood method. The likelihood function of the observations is proportional to the likelihood function of the residuals. For a given $h$, assume that $F(\cdot)$ and the corresponding parameters are known; then we can obtain the estimate of $\omega$ by maximizing the likelihood function. For convenience, suppose that there is no outlier in the first two observations. Denote the residual of the observations by $e_t = z_t - F(\mathbf{Z}_{t-1})$ for any $2 < t \le n$, and let the initial residuals be obtained by setting $e_1 = e_2 = 0$. Under the initial conditions mentioned above, the conditional likelihood function of the observations is given by
\[
L(\omega, \sigma^2) = (2\pi\sigma^2)^{-n/2} \exp\Big\{-\frac{1}{2\sigma^2}\sum_{t=1}^{n} e_t^2\Big\}. \qquad (5)
\]

We can obtain the maximum likelihood estimate of $\omega$ at the minimum of $\sum_{t=1}^{n} e_t^2$. This estimate is exact for a linear model; however, it is only an approximation for a nonlinear model. For $t > 2$, the residual of the observations is given by $e_t = z_t - F(\mathbf{Z}_{t-1})$. We now discuss the estimation of $\omega$ under the IO and AO versions of model (2).

(1) Assume that there is an IO at time $h$ and its influence magnitude is $\omega$. From (3), it follows that $e_h = \omega + \varepsilon_h$ and $e_t = \varepsilon_t$ ($t \neq h$). So the maximum likelihood estimate of $\omega$ is given by
\[
\hat{\omega}_{I,h} = e_h. \qquad (6)
\]
For known $F(\cdot)$ and $\sigma^2$, we have $\hat{\omega}_{I,h} \sim N(\omega, \sigma^2)$, which is similar to Battaglia [8]. Under the null hypothesis $H_{0I}: \omega = 0$, we obtain the likelihood ratio test statistic by standardizing $\hat{\omega}_{I,h}$:
\[
\lambda_{I,h} = \frac{\hat{\omega}_{I,h}}{\sigma} \sim N(0, 1). \qquad (7)
\]
When $F(\cdot)$ and $\sigma^2$ are unknown, they can be replaced by their consistent estimates $\hat{F}(\cdot)$ and $\hat{\sigma}^2$. Similarly, we also have that $\hat{\lambda}_{I,h} = \hat{\omega}_{I,h}/\hat{\sigma}$ is asymptotically $N(0, 1)$.
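Under the reconstruction above, the IO statistic at each time point is simply the standardized residual. The sketch below assumes a one-step predictor `predict(z, t)` standing in for a fitted $\hat{F}(\mathbf{Z}_{t-1})$; both the name and the interface are hypothetical.

```python
import numpy as np

# Sketch of computing hat(lambda)_{I,t} = e_t / hat(sigma) at every time point.
def io_statistics(z, predict):
    e = np.zeros_like(z)
    for t in range(2, len(z)):
        e[t] = z[t] - predict(z, t)        # residual e_t = z_t - F(Z_{t-1})
    sigma_hat = np.std(e[2:], ddof=1)      # consistent estimate of sigma
    return e / sigma_hat                   # hat(lambda)_{I,t}
```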

(2) Assume that there is an AO at time $h$ and its influence magnitude is $\omega$. From (4), it follows that $e_t = \varepsilon_t$ when $t < h$. If $t = h$, that is, $z_h = y_h + \omega$, then $e_h = \omega + \varepsilon_h$; if $t = h + 1$, we have that $e_{h+1} = \varepsilon_{h+1} - \omega \pi_{1,h}$; and if $t = h + 2$, we have that $e_{h+2} = \varepsilon_{h+2} - \omega \pi_{2,h}$, where $\pi_{1,h}$ and $\pi_{2,h}$ denote the one-step and two-step effects of the outlier propagated through the autoregressive structure. If $t > h + 2$, then $e_t = \varepsilon_t$, from which it follows that
\[
\sum_{t=1}^{n} e_t^2 = \sum_{t \notin \{h, h+1, h+2\}} \varepsilon_t^2 + (\omega + \varepsilon_h)^2 + (\varepsilon_{h+1} - \omega \pi_{1,h})^2 + (\varepsilon_{h+2} - \omega \pi_{2,h})^2.
\]
By minimizing the above expression, we obtain
\[
\hat{\omega}_{A,h} = \frac{e_h - \pi_{1,h}\, e_{h+1} - \pi_{2,h}\, e_{h+2}}{1 + \pi_{1,h}^2 + \pi_{2,h}^2}, \qquad (13)
\]
where $\pi_{1,h}$ may be obtained from the estimate of $f_1(\cdot)$, while $\pi_{2,h}$ is complicated and difficult to confirm. Nevertheless, Battaglia [8] indicated that we can estimate $\pi_{2,h}$ by $f_2(\cdot)$, which is convenient and effective. If $F(\cdot)$ and $\sigma^2$ are known, then, similar to Battaglia [8], we have
\[
\hat{\omega}_{A,h} \sim N\Big(\omega,\ \frac{\sigma^2}{1 + \pi_{1,h}^2 + \pi_{2,h}^2}\Big).
\]
Under the null hypothesis $H_{0A}: \omega = 0$, we obtain the likelihood ratio test statistic by standardizing $\hat{\omega}_{A,h}$:
\[
\lambda_{A,h} = \frac{\hat{\omega}_{A,h}\sqrt{1 + \pi_{1,h}^2 + \pi_{2,h}^2}}{\sigma} \sim N(0, 1). \qquad (16)
\]
In practice, if $F(\cdot)$ and $\sigma^2$ are unknown, they can be replaced by their consistent estimates, and we likewise obtain that $\hat{\lambda}_{A,h}$ is asymptotically $N(0, 1)$.
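A corresponding sketch of the AO statistic follows directly from (13) and (16). The arrays `pi1` and `pi2` are assumed to hold the propagation coefficients $\pi_{1,t}$ and $\pi_{2,t}$ at each time point, approximated by the fitted coefficient functions in the spirit of Battaglia [8]; this interface is an assumption of the sketch.

```python
import numpy as np

# Sketch of computing hat(lambda)_{A,t} from residuals e_t via (13) and (16).
def ao_statistics(e, pi1, pi2, sigma_hat):
    n = len(e)
    lam = np.zeros(n)
    for t in range(2, n - 2):
        s2 = 1.0 + pi1[t] ** 2 + pi2[t] ** 2
        omega_hat = (e[t] - pi1[t] * e[t + 1] - pi2[t] * e[t + 2]) / s2  # (13)
        lam[t] = omega_hat * np.sqrt(s2) / sigma_hat                     # (16)
    return lam
```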

It is indicated by (6) and (13) that the best estimate of $\omega$ is the residual $e_h$ at time $h$ in the case of IO, while the best estimate of $\omega$ is a linear combination of the errors $e_h$, $e_{h+1}$, and $e_{h+2}$ in the case of AO. If the location $h$ of the outlier is known, the distributions of the likelihood ratio test statistics $\lambda_{I,h}$ and $\lambda_{A,h}$ are given by (7) and (16). However, in practice the location of the outlier is unknown, so we should inspect the magnitude of the test statistic at every time point. Replacing the fixed location $h$ by a generic time $t$, the following theorem indicates that the processes $\{\hat{\omega}_{I,t}\}$ and $\{\hat{\omega}_{A,t}\}$ are both Gaussian processes.

Theorem 1. (1) Under the hypothesis $H_{0I}$, that is, there is no IO at any time, $\{\hat{\omega}_{I,t}\}$ is a stationary zero-mean Gaussian process with variance $\sigma^2$, and its autocovariance function is $\gamma_I(k) = \sigma^2 I(k = 0)$, where $I(\cdot)$ is the index function: if $k = 0$, then $I(k = 0) = 1$, and else $I(k = 0) = 0$.
(2) Under the hypothesis $H_{0A}$, that is, there is no AO at any time, $\{\hat{\omega}_{A,t}\}$ is a zero-mean Gaussian process with variance $\sigma^2 \kappa_t^2$, where $\kappa_t^2 = (1 + \pi_{1,t}^2 + \pi_{2,t}^2)^{-1}$, and its autocovariance function is
\[
\operatorname{Cov}(\hat{\omega}_{A,t}, \hat{\omega}_{A,t+1}) = \sigma^2 \kappa_t^2 \kappa_{t+1}^2 (\pi_{2,t}\pi_{1,t+1} - \pi_{1,t}), \qquad \operatorname{Cov}(\hat{\omega}_{A,t}, \hat{\omega}_{A,t+2}) = -\sigma^2 \kappa_t^2 \kappa_{t+2}^2 \pi_{2,t},
\]
with $\operatorname{Cov}(\hat{\omega}_{A,t}, \hat{\omega}_{A,t+k}) = 0$ for $k > 2$.

Proof. The Gaussian character follows from (7) and (16). We derive the autocovariance functions as follows. (1) Under the null hypothesis $H_{0I}$, we have $\hat{\omega}_{I,t} = e_t = \varepsilon_t$, and hence $\operatorname{Cov}(\hat{\omega}_{I,t}, \hat{\omega}_{I,t+k}) = \sigma^2 I(k = 0)$. (2) Under the null hypothesis $H_{0A}$, we have $\hat{\omega}_{A,t} = \kappa_t^2 (\varepsilon_t - \pi_{1,t}\varepsilon_{t+1} - \pi_{2,t}\varepsilon_{t+2})$, and a direct calculation of the covariances of these linear combinations of $\{\varepsilon_t\}$ gives the stated autocovariance function.

We know that $\{\hat{\omega}_{I,t}\}$ is not only Gaussian but also stationary by Theorem 1. Also, it is serially independent under the null hypothesis $H_{0I}$. However, although $\{\hat{\omega}_{A,t}\}$ is Gaussian, it is not stationary, because its autocovariance function depends on $t$. Besides, its autocovariance function is truncated at finitely many lags, which means that its maximum correlation range is two steps; that is, $\hat{\omega}_{A,t}$ is independent of $\hat{\omega}_{A,s}$ when $|t - s| > 2$. By standardizing $\hat{\omega}_{I,t}$ and $\hat{\omega}_{A,t}$, we can obtain $\lambda_{I,t}$ and $\lambda_{A,t}$. It is obvious that $\{\lambda_{I,t}\}$ is a standardized stationary Gaussian process under the hypothesis $H_{0I}$. Also, $\{\lambda_{A,t}\}$ is a standardized Gaussian process under the hypothesis $H_{0A}$, but it is not stationary, although its autocovariance function is truncated at finitely many lags.

3. Detection of Outliers Based on Extreme Value Theory

Similar to Chareka et al. [14] and Leadbetter and Rootzén [18], we obtain the following.

Lemma 2 (see [18]). Assume that $\{X_t\}$ is a stationary zero-mean Gaussian time series with variance 1 and autocorrelation function $r_k$. Let $M_n = \max_{1 \le t \le n} X_t$, and if the Berman condition $r_n \log n \to 0$ ($n \to \infty$) is satisfied, then one has that $\lim_{n\to\infty} P\{a_n (M_n - b_n) \le x\} = \exp(-e^{-x})$, $x \in \mathbb{R}$, where
\[
a_n = \sqrt{2 \log n}, \qquad b_n = \sqrt{2 \log n} - \frac{\log\log n + \log 4\pi}{2\sqrt{2\log n}}.
\]
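For reference, the normalizing constants of Lemma 2, as reconstructed here, can be computed as follows.

```python
import numpy as np

# Normalizing constants a_n and b_n of Lemma 2 (as reconstructed above):
#   a_n = sqrt(2 log n),  b_n = a_n - (log log n + log 4*pi) / (2 a_n).
def gumbel_constants(n):
    a = np.sqrt(2.0 * np.log(n))
    b = a - (np.log(np.log(n)) + np.log(4.0 * np.pi)) / (2.0 * a)
    return a, b
```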

Lemma 3 (see [18]). Assume that $\{X_t\}$ is a stationary zero-mean Gaussian time series with variance 1, and let $M_n = \max_{1\le t\le n} X_t$ and $W_n = \max_{1\le t\le n} |X_t|$, and if $r_k$ satisfies the Berman condition, then $W_n$ and $M_{2n}$ have the same asymptotic distribution.

Theorem 4. Let $\Lambda_{I,n} = \max_{1\le t\le n} |\lambda_{I,t}|$ and $\Lambda_{A,n} = \max_{1\le t\le n} |\lambda_{A,t}|$, and under the hypotheses $H_{0I}$ and $H_{0A}$, one has that
\[
\lim_{n\to\infty} P\{a_{2n}(\Lambda_{I,n} - b_{2n}) \le x\} = \exp(-e^{-x}), \qquad \lim_{n\to\infty} P\{a_{2n}(\Lambda_{A,n} - b_{2n}) \le x\} = \exp(-e^{-x}),
\]
where $a_n$ and $b_n$ are given in Lemma 2.

Proof. It follows from Theorem 1 that the autocorrelation function of $\{\lambda_{I,t}\}$ under the hypothesis $H_{0I}$ is $r_k = I(k = 0)$, so $r_n \log n \to 0$; that is, the Berman condition is satisfied. Again, $\{\lambda_{I,t}\}$ is a stationary zero-mean Gaussian time series. Hence, from Lemma 2, we have that $\lim_{n\to\infty} P\{a_n(\max_{1\le t\le n} \lambda_{I,t} - b_n) \le x\} = \exp(-e^{-x})$. Again from Lemma 3, we know that the asymptotic distribution of $\Lambda_{I,n}$ is the same as that of $\max_{1\le t\le 2n} \lambda_{I,t}$, so we have that $\lim_{n\to\infty} P\{a_{2n}(\Lambda_{I,n} - b_{2n}) \le x\} = \exp(-e^{-x})$. Similarly, under the hypothesis $H_{0A}$, from Theorem 1 we know that the autocorrelation function of $\{\lambda_{A,t}\}$ vanishes for lags greater than 2, so $r_n \log n \to 0$; that is, the Berman condition is satisfied. Therefore, similar to the statements above, it must hold that $\lim_{n\to\infty} P\{a_{2n}(\Lambda_{A,n} - b_{2n}) \le x\} = \exp(-e^{-x})$.

Lemma 5 (see [14]). Assume that $\{X_t\}$ is a stationary zero-mean Gaussian time series with variance 1 and autocorrelation function $r_k$, and let $Q_n = \max_{1\le t\le n} X_t^2$, and if $r_k$ satisfies the Berman condition, then one has $\lim_{n\to\infty} P\{Q_n \le c_n + 2x\} = \exp(-e^{-x})$, $x \in \mathbb{R}$, where $c_n = 2\log n - \log\log n - \log\pi$.

Theorem 6. Let $Q_{I,n} = \max_{1\le t\le n} \lambda_{I,t}^2$ and $Q_{A,n} = \max_{1\le t\le n} \lambda_{A,t}^2$, under the hypotheses $H_{0I}$ and $H_{0A}$, and if the corresponding Berman conditions for $\{\lambda_{I,t}\}$ and $\{\lambda_{A,t}\}$ are satisfied, then one has
\[
\lim_{n\to\infty} P\{Q_{I,n} \le c_n + 2x\} = \exp(-e^{-x}), \qquad \lim_{n\to\infty} P\{Q_{A,n} \le c_n + 2x\} = \exp(-e^{-x}),
\]
where $c_n = 2\log n - \log\log n - \log\pi$.

Proof. We know that the Berman conditions for $\{\lambda_{I,t}\}$ and $\{\lambda_{A,t}\}$ are satisfied under the hypotheses $H_{0I}$ and $H_{0A}$ from the proof of Theorem 4. Thus, the conclusion follows from Lemma 5.

Let $\alpha$ be the test significance level and $x_\alpha = -\log(-\log(1 - \alpha))$. We denote the $\chi^2$-distribution function with one degree of freedom by $F_{\chi_1^2}(\cdot)$. Let $C_{1,\alpha} = b_{2n} + x_\alpha / a_{2n}$ and $C_{2,\alpha} = c_n + 2 x_\alpha$, where $x_\alpha$ is the $(1-\alpha)$ quantile of the Gumbel distribution. Similarly, we have the following.

Theorem 7. Under the conditions above, one has that
\[
\lim_{n\to\infty} P\{\Lambda_{I,n} > C_{1,\alpha}\} = \alpha, \qquad \lim_{n\to\infty} P\{Q_{I,n} > C_{2,\alpha}\} = \alpha
\]
under $H_{0I}$, and the same limits hold for $\Lambda_{A,n}$ and $Q_{A,n}$ under $H_{0A}$.
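Under the reconstruction above, the critical values $C_{1,\alpha}$ and $C_{2,\alpha}$ of Theorem 7 can be computed directly; note that the absolute value statistics use the constants at $2n$ because of Lemma 3.

```python
import numpy as np

# Sketch of the asymptotic critical values at significance level alpha:
#   C_{1,alpha} = b_{2n} + x_alpha / a_{2n}   (absolute value statistics)
#   C_{2,alpha} = c_n + 2 x_alpha             (square statistics)
def critical_values(n, alpha=0.05):
    x_alpha = -np.log(-np.log(1.0 - alpha))    # (1 - alpha) Gumbel quantile
    a2n = np.sqrt(2.0 * np.log(2 * n))
    b2n = a2n - (np.log(np.log(2 * n)) + np.log(4.0 * np.pi)) / (2.0 * a2n)
    c1 = b2n + x_alpha / a2n
    c2 = 2.0 * np.log(n) - np.log(np.log(n)) - np.log(np.pi) + 2.0 * x_alpha
    return c1, c2
```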

At this point, it is convenient to introduce some more notation: $\Lambda_{I,n}$ and $\Lambda_{A,n}$, which we call absolute value test statistics; $Q_{I,n}$ and $Q_{A,n}$, which we call square test statistics; and $\tilde{Q}_{I,n}$ and $\tilde{Q}_{A,n}$, which we call adjusted square test statistics, computed from the exact $\chi_1^2$ tail probability of the squared statistics instead of its asymptotic expansion. We likewise obtain the asymptotic $p$-values
\[
p_1 = 1 - \exp\{-e^{-a_{2n}(\Lambda_n - b_{2n})}\}, \qquad p_2 = 1 - \exp\{-e^{-(Q_n - c_n)/2}\}, \qquad p_3 = 1 - \exp\{-n[1 - F_{\chi_1^2}(Q_n)]\},
\]
where $\Lambda_n$ and $Q_n$ stand for either the IO or the AO version of the corresponding statistic.

If there are several outliers, the main idea is first to detect the largest outlier by using our method and to obtain new test statistics by deleting its effect. Then we go on to detect the next outlier and repeat the procedure until no outlier remains. When the two types of outliers may both appear, the test is
\[
H_0: \text{there is no outlier at any time} \quad \text{versus} \quad H_1: \text{there is an IO or an AO at some time } h.
\]

For details, we provide the following steps, using the absolute value test statistics, to detect AO and IO in the AFAR model (a minimal sketch of this loop is given after the list).
(a) Form the test statistics $\hat{\lambda}_{I,t}$ and $\hat{\lambda}_{A,t}$.
(b) Calculate $\hat{\lambda}_{I,t}$ and $\hat{\lambda}_{A,t}$ at every time point, and calculate the maxima $\Lambda_{I,n}$ and $\Lambda_{A,n}$ of the absolute value test statistics.
(c) Let $h$ be the time point at which the maximum is attained. If $\Lambda_{I,n} > C_{1,\alpha}$, then we believe that the observation at $h$ is an IO, and if $\Lambda_{A,n} > C_{1,\alpha}$, then we believe that the observation at $h$ is an AO; otherwise, we believe there is no IO or AO.
(d) Calculate the $p$-values $p_I$ and $p_A$, and if $p_I < \alpha$ or $p_A < \alpha$, then reject the hypothesis $H_0$, believing there is an IO or an AO. Furthermore, we decide whether it is an IO or an AO by the minimal $p$-value.
(e) Delete the effect of the detected outlier and detect the next one. Repeat the above steps until no outlier is detected.
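A minimal sketch of this loop, reusing the earlier sketches (`io_statistics`, `ao_statistics`, `critical_values`), is given below. The effect-removal step is model specific, so it is taken as a user-supplied function `remove_effect`; this helper and the fixed `pi1`, `pi2` arrays are assumptions of the sketch.

```python
import numpy as np

# Sketch of steps (a)-(e): repeatedly locate the largest absolute value
# statistic, compare it with C_{1,alpha}, record the outlier, remove its
# effect, and iterate until nothing exceeds the critical value.
def detect_outliers(z, predict, pi1, pi2, remove_effect, alpha=0.05, max_iter=10):
    found = []
    for _ in range(max_iter):
        e = np.zeros_like(z)
        for t in range(2, len(z)):
            e[t] = z[t] - predict(z, t)            # residuals e_t
        sigma_hat = np.std(e[2:], ddof=1)
        lam_i = np.abs(e / sigma_hat)              # |hat(lambda)_{I,t}|
        lam_a = np.abs(ao_statistics(e, pi1, pi2, sigma_hat))
        c1, _ = critical_values(len(z), alpha)
        kind, s = max((("IO", lam_i), ("AO", lam_a)),
                      key=lambda kv: kv[1].max())  # larger of the two maxima
        h = int(s.argmax())
        if s[h] <= c1:                             # step (c): nothing exceeds C_{1,alpha}
            break
        found.append((h, kind))                    # steps (c)-(d)
        z = remove_effect(z, h, kind)              # step (e): model-specific correction
    return found
```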

4. Simulation Studies

Example 8. In this simulation, we consider an AFAR model of the form (2) that contains only one IO, generated according to (3). The experiment is run at four sample sizes with a fixed significance level.
For the smallest sample size, all three test statistics detect an outlier at the preset time. Because the $p$-values for believing it is an IO are smaller than the corresponding $p$-values for believing it is an AO, we believe it is an IO, and its influence magnitude is estimated as 3.1994. The results are similar for the other sample sizes, so we omit the details here. For the different test statistics, their $p$-values for believing it is an IO are summarized in Table 1.
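As an illustration only (the model, parameter values, and helpers are the assumptions of the earlier sketches, not the paper's settings), one run of a single-IO experiment in the spirit of Example 8 can be checked as follows.

```python
import numpy as np

# Inject one IO and check that the IO statistic peaks at the preset position,
# using the true coefficient functions in place of fitted estimates.
n, h, omega = 500, 250, 5.0
z = simulate_with_outlier(n, h, omega, kind="IO", seed=1)

beta = (0.8, 0.6)
f1 = lambda u: 0.5 * np.exp(-u ** 2)
f2 = lambda u: -0.3 * np.exp(-u ** 2)

def predict(z, t):
    u = beta[0] * z[t - 1] + beta[1] * z[t - 2]
    return f1(u) * z[t - 1] + f2(u) * z[t - 2]

lam_i = io_statistics(z, predict)
c1, _ = critical_values(n)
t_hat = int(np.abs(lam_i).argmax())
print(t_hat, abs(lam_i[t_hat]), c1)   # expect t_hat == h and a statistic above c1
```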

Example 9. There are four IOs and one AO in the time series. The IOs appear sequentially at four preset times, and their sizes are 3, 6, 5, and 4, respectively. The AO appears alone at another time, and its size is 7. The observed series is obtained by (3) or (4).
First, we detect an AO, whose estimated size is 6.8904. The three test statistics and their corresponding critical values are shown in Figure 1, where Figure 1(a) corresponds to the absolute value test statistics, Figure 1(b) to the square test statistics, and Figure 1(c) to the adjusted square test statistics. One plotting symbol denotes IO and another denotes AO, and the horizontal dashed line denotes the critical value, which is the same in all the figures. Deleting the effect of this AO and continuing the detection, we find an IO whose size is 6.0285; see the details in Figure 2. Deleting the effects of these two outliers and continuing, we find an IO whose size is 4.8119. Deleting the effects of the three detected outliers and continuing, we find an IO whose size is 4.0024. Deleting the effects of the four detected outliers and continuing, we detect an IO whose size is 2.9046; see Figure 3. After deleting the effects of all five outliers, no further outlier is detected; see Figure 4. To save space, we omit some of the figures here. The result is consistent with the preset configuration.

5. Conclusions

The FAR model is mainly characterized by its model-dependent variable, which to some extent limits the scope of its applications. As a generalization of this class of models, the AFAR model covers a wider range of processes than the FAR model, which makes it possible to reduce modeling biases [17] by choosing a proper model-dependent direction. This paper is concerned with detecting AO and IO in AFAR models using extreme value methods. We derive the asymptotic distribution of the test statistics and provide control of the significance level, which serves as an extension and improvement of existing methods. Based on several simulation studies, we give concluding remarks as follows. (a) The extreme value method for detecting outliers in AFAR models is tractable and effective not only for IO and AO, but also for separate outliers and outlier patches. Furthermore, it is shown that our method can reduce the possible effects of masking and swamping. (b) When applying extreme value theory to detect outliers with relatively small samples, the square test statistics work better than the adjusted square test statistics and the absolute value test statistics; as the sample size increases, the performance of the adjusted square test statistics improves relative to that of the square and absolute value test statistics (e.g., see Table 1). (c) The selection of model parameters and the magnitude of the outliers have a strong influence on detection performance.

Acknowledgments

The authors sincerely wish to thank the editor and the three referees for their insightful suggestions, which have led to improvements over the earlier version of the paper. The research is supported by the National Natural Science Foundation of China (11171065), the Natural Science Foundation of Jiangsu Province (BK2011058), and the Research Fund for the Doctoral Program of Higher Education of China (20120092110021).