Abstract

This paper points out that the predictability analysis of conventional time series may in general be invalid for long-range dependent (LRD) series, since the conventional mean-square error (MSE) may not exist for predicting LRD series. To make the MSE of LRD series prediction exist, we introduce a generalized MSE. With it, a proof of the predictability of LRD series is presented in Hilbert space.

1. Introduction

Let $x(t)$ be a realization, which is a second-order random function for $t \in (-\infty, \infty)$. Let $x(t_i)$ be a given sample of $x(t)$ for $i = 1, 2, \ldots, n$. Then, one of the important problems in time series is to predict or forecast $x(t_{n+k})$ for $k \geq 1$ based on the known realizations of $x(t_i)$; see, for example, Clements and Hendry [1], Box et al. [2], and Fuller [3].

A well-known case in the field of time series prediction refers to Yule's work on the analysis of Wolfer's sunspot numbers (Yule [4]). The early basic theory of predicting a second-order stationary random function in the conventional sense refers to the work of Wiener [5] and Kolmogorov [6]. By conventional sense, we mean that the stationary random functions Wiener and Kolmogorov considered are not long-range dependent (LRD). In other words, the time series they studied have finite mean and variance and, consequently, are in general not heavy tailed, as can be seen from Zadeh and Ragazzini [7], Bhansali [8], and Robinson [9].

The predictability of conventional time series has been well studied; see, for example, Papoulis [10], Vaidyanathan [11], Bhansali [12], Lyman et al. [13], Lyman and Edmonson [14], and Dokuchaev [15]. The basic idea in this regard is to use the mean-square error (MSE) as a constraint to obtain a prediction; see, for example, Harrison et al. [16], Bellegem and Sachs [17], Man [18], as well as Clements and Hendry [19]. We shall note in the next section that the conventional MSE may in general be unsuitable for predicting LRD series.

LRD processes find increasing application in various fields of science and technology; see, for example, Beran [20], Mandelbrot [21], Cattani et al. [22], and Li et al. [23–29]. Consequently, prediction of LRD series is desired. The literature on the prediction of LRD series appears to be increasing; see, for example, Brodsky and Hurvich [30], Reisen and Lopes [31], Bisaglia and Bordignon [32], Bhansali and Kokoszka [33], Man [34], Bayraktar et al. [35], Man and Tiao [36], Bisaglia and Gerolimetto [37], Godet [38], as well as Gooijer and Hyndman [39]. Unfortunately, however, the question of a suitable MSE for predicting LRD series may be overlooked, leaving a pitfall in this respect. We shall present a generalized MSE in the domain of generalized functions for the purpose of proving the existence of LRD series prediction.

The rest of this article is organized as follows. Section 2 points out the pitfall of predicting time series with the traditional MSE. The proof of the predictability of LRD series is proposed in Section 3, followed by discussions and conclusions.

2. Problem Statement

Denote the autocorrelation function (ACF) of $x(t)$ by $r(\tau) = \mathrm{E}[x(t)x(t+\tau)]$, where $\tau$ denotes the time lag. Then, $x(t)$ is called a short-range dependent (SRD) series if $r(\tau)$ is integrable (Beran [20]), that is,

$$\int_{0}^{\infty} r(\tau)\, d\tau < \infty. \qquad (2.1)$$

On the other hand, $x(t)$ is long-range dependent (LRD) if $r(\tau)$ is nonintegrable, that is,

$$\int_{0}^{\infty} r(\tau)\, d\tau = \infty. \qquad (2.2)$$

A typical form of such an ACF has the following asymptotic expression:

$$r(\tau) \sim c\, \tau^{-\beta} \quad (\tau \to \infty), \qquad (2.3)$$

where $c > 0$ is a constant and $0 < \beta < 1$.
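As a numerical aside (not part of the original analysis; the constant $c = 1$ and the lag grids are arbitrary choices), the following Python sketch sums the power-law tail in (2.3) and shows the partial sums growing without bound for $0 < \beta < 1$, in contrast to the convergent case $\beta > 1$:

```python
import numpy as np

# Partial sums of the power-law ACF tail r(tau) = c * tau**(-beta) of (2.3).
# For 0 < beta < 1 the partial sums diverge, matching the LRD condition
# (2.2); for beta > 1 they converge, matching the SRD condition (2.1).
c = 1.0
for beta in (0.5, 1.5):
    for n_lags in (10**3, 10**5, 10**7):
        tau = np.arange(1, n_lags + 1, dtype=np.float64)
        partial_sum = c * np.sum(tau ** -beta)
        print(f"beta={beta}, lags={n_lags:>9}: partial sum = {partial_sum:.3f}")
```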

Denote the probability density function (PDF) of $x(t)$ by $p(x)$. Then, the ACF of $x(t)$ can be expressed by

$$r(\tau) = \mathrm{E}[x(t)x(t+\tau)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2\, p(x_1, x_2; \tau)\, dx_1\, dx_2, \qquad (2.4)$$

where $p(x_1, x_2; \tau)$ is the joint PDF of $x(t)$ and $x(t+\tau)$.
Considering that $r(\tau)$ is nonintegrable, we see that a heavy-tailed PDF is a consequence of an LRD series; see, for example, Resnick [40], Heath et al. [41], Paxson and Floyd [42], Li [23, 24, 43], Abry et al. [44], as well as Adler et al. [45].

Denote by $\mu$ the mean of $x(t)$. Then,

$$\mu = \mathrm{E}[x(t)] = \int_{-\infty}^{\infty} x\, p(x)\, dx. \qquad (2.5)$$

The variance of $x(t)$ is given by

$$\sigma^{2} = \mathrm{E}\left[ (x(t) - \mu)^{2} \right] = \int_{-\infty}^{\infty} (x - \mu)^{2}\, p(x)\, dx. \qquad (2.6)$$
One remarkable thing about LRD series is that the tail of $p(x)$ may be so heavy that the above integral, either (2.5) or (2.6), does not exist (Bassingthwaighte et al. [46], Doukhan et al. [47], Li [48]). To explain this, we utilize the Pareto distribution. Denote by $p(x)$ the PDF of the Pareto distribution. Then (G. A. Korn and T. M. Korn [49]),

$$p(x) = \frac{a b^{a}}{x^{a+1}}, \qquad (2.7)$$
where $x \geq b > 0$ and $a > 0$. The mean and variance of $x$ that follows the Pareto distribution are, respectively, given by

$$\mu = \frac{a b}{a - 1} \quad (a > 1), \qquad (2.8)$$

$$\sigma^{2} = \frac{a b^{2}}{(a - 1)^{2} (a - 2)} \quad (a > 2). \qquad (2.9)$$
It can be easily seen that $\mu$ and $\sigma^{2}$ do not exist if $a \leq 1$ and $a \leq 2$, respectively.
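To see this numerically (an illustrative simulation with assumed parameters, not part of the analysis above), one may sample the Pareto distribution and observe that the sample mean and sample variance fail to settle as the sample size grows when $a \leq 1$ and $a \leq 2$, respectively:

```python
import numpy as np

# Sample moments of Pareto(a, b) data. For a <= 1 the mean does not exist
# and for a <= 2 the variance does not exist, so the corresponding sample
# statistics keep jumping instead of converging as n grows.
rng = np.random.default_rng(0)
b = 1.0
for a in (0.9, 1.5, 3.0):
    for n in (10**3, 10**5, 10**7):
        # numpy's pareto() draws the Lomax form; shift and scale to get the
        # classical Pareto on [b, infinity) with PDF (2.7).
        x = b * (1.0 + rng.pareto(a, size=n))
        print(f"a={a}, n={n:>9}: mean={x.mean():10.2f}, var={x.var():14.2f}")
```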

Following the work of Kolmogorov, a linear prediction can be expressed as follows. Given the samples $x(t_i)$ for $i = 1, \ldots, n$ and a step $k \geq 1$, the selection of proper real coefficients $a_i$ ($i = 1, \ldots, n$) is such that the following linear combination of random variables, given by

$$L = \sum_{i=1}^{n} a_i x(t_i), \qquad (2.10)$$

can approximate $x(t_{n+k})$ as accurately as possible (Kolmogorov [6]). The following MSE is usually chosen as the prediction criterion of (2.10):

$$J = \mathrm{E}\left[ \left( x(t_{n+k}) - L \right)^{2} \right]. \qquad (2.11)$$
By minimizing (2.11), one obtains the desired $a_i$ in (2.10). Wiener studied this criterion in depth for both prediction and filtering; see, for example, Levinson [50, 51]. A predictor following (2.10) and (2.11) can be regarded as belonging to the class of Wiener-Kolmogorov predictors.
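For a concrete instance, the sketch below solves the normal equations that result from minimizing (2.11), assuming a known exponentially decaying (hence SRD) ACF $r(\tau) = \rho^{|\tau|}$; the model, the function name, and the parameter values are illustrative assumptions rather than anything prescribed above:

```python
import numpy as np

def wk_coefficients(r, n, k=1):
    """Minimize the MSE (2.11) over the coefficients a_i.

    Setting the gradient to zero yields the normal equations R a = r_vec,
    with R[i, j] = r(i - j) and r_vec[i] = r((n - 1 + k) - i), the samples
    sitting on the integer grid t_i = i for i = 0, ..., n - 1 and the
    target being x(n - 1 + k).
    """
    R = np.array([[r(i - j) for j in range(n)] for i in range(n)])
    r_vec = np.array([r(n - 1 + k - i) for i in range(n)])
    return np.linalg.solve(R, r_vec)

rho = 0.7
a = wk_coefficients(lambda tau: rho ** abs(tau), n=5, k=1)
print(a)  # for this AR(1)-type ACF, all weight falls on the latest sample
```

For the assumed ACF the solution puts the coefficient $\rho$ on the most recent sample and zero elsewhere, recovering the familiar one-step predictor of an AR(1) series.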

Various forms of linear combination in terms of (2.10) have been developed, such as the autoregressive moving average (ARMA) model, autoregressive (AR) model, moving average (MA) model, and autoregressive integrated moving average (ARIMA) model; see, for example, Lyman et al. [13], Lyman and Edmonson [14], Wolff et al. [52], Bhansali [12, 53], Makhoul [54], Kohn and Ansley [55], Zimmerman and Cressie [56], Peiris and Perera [57], Kudritskii [58], Bisaglia and Bordignon [59], Kim [60], Cai [61], Harvill and Ray [62], Atal [63], Huang [64], Schick and Wefelmeyer [65], Jamalizadeh and Balakrishnan [66], Clements and Hendry [1], and Box et al. [2]. However, one thing the different forms of predictors have in common is that they minimize a prediction error which, in principle, usually follows the form of (2.11).

Note that a necessary condition for the above-described Wiener-Kolmogorov predictor to be valid is that $\mathrm{E}[x(t)]$ exists (Kolmogorov [6]). For LRD series, however, this condition may not always be satisfied. For instance, if an LRD series obeys the Pareto distribution, its mean does not exist for $a \leq 1$; see (2.8).

In addition to the fact that the mean of an LRD series may not exist, its variance may not exist either. The error in (2.11) can be expressed by

$$J = \mathrm{E}\left[ \left( x(t_{n+k}) - \sum_{i=1}^{n} a_i x(t_i) \right)^{2} \right]. \qquad (2.12)$$
Kolmogorov stated that the above $J$ does not increase as $n$ increases [6]. However, that statement may be untrue if $x(t)$ is LRD.

It is worth noting that errors may be heavy tailed; see, for example, Peng and Yao [67] as well as Hall and Yao [68]. For instance, LRD teletraffic is heavy tailed, with the Pareto distribution as a possible heavy-tail model (Resnick [69], Michiel and Laevens [70]), while being Gaussian at large time scales (Paxson and Floyd [71], Scherrer et al. [72]). Therefore, it is quite reasonable to assume that the error follows a heavy-tailed distribution, for example, the Pareto distribution, for the purpose of this presentation. If it obeys the Pareto distribution, then the above expression approaches infinity for $a \leq 2$ (see (2.9)), no matter how large $n$ is.
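A hypothetical simulation makes the consequence visible: with Pareto-tailed errors of index $a \leq 2$, a running estimate of $\mathrm{E}[e^{2}]$ keeps jumping as the record grows instead of converging:

```python
import numpy as np

# Running estimate of E[e^2] for Pareto-tailed prediction errors with tail
# index a <= 2, for which the second moment is infinite. The estimate is
# repeatedly dragged upward by rare huge errors and never settles.
rng = np.random.default_rng(1)
a = 1.5
e = 1.0 + rng.pareto(a, size=10**7)  # heavy-tailed "errors"
for n in (10**3, 10**4, 10**5, 10**6, 10**7):
    print(f"n={n:>9}: running mean of e^2 = {np.mean(e[:n] ** 2):12.1f}")
```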

From the above discussion, we see that it may be unsuitable to use the conventional MSE, as used in the class of conventional Wiener-Kolmogorov predictors, to infer that an LRD series is predictable. In the next section, we shall give the proof of the predictability of LRD series.

3. Predictability of LRD Series

Let $B = \{x(t) : x(t) \in \mathrm{LRD}\}$, where LRD is the set of LRD processes. Let $M = \{L : L = \sum_{i=1}^{n} a_i x(t_i),\ a_i \in \mathbb{R}\}$ be the set of linear combinations of the form (2.10). Then, $M \subset B$. We now consider the norms and inner products in $B$ and $M$.

Definition 3.1 (see [73]). A function of rapid decay is a smooth function $\varphi: \mathbb{R} \to \mathbb{C}$ such that $t^{m} \varphi^{(n)}(t) \to 0$ as $|t| \to \infty$ for all nonnegative integers $m$ and $n$, where $\mathbb{C}$ is the space of complex numbers. The set of all functions of rapid decay is denoted by $S$.

In the discrete case, the function of rapid decay is denoted by $\varphi(k)$, $k \in \mathbb{Z}$, and we still use the symbol $S$ to specify the space it belongs to, for simplicity and without confusion.

Lemma 3.2 (see [73]). Every function belonging to $S$ is absolutely integrable in the continuous case or absolutely summable in the discrete case.

Now, define the norm of $x \in B$ by

$$\|x\| = \left[ \int_{-\infty}^{\infty} |x(t)|^{2} \varphi(t)\, dt \right]^{1/2}, \qquad (3.1)$$

where $\varphi \in S$ is nonnegative. Define the inner product of $x, y \in B$ by

$$\langle x, y \rangle = \int_{-\infty}^{\infty} x(t)\, \overline{y(t)}\, \varphi(t)\, dt, \qquad (3.2)$$

so that $\|x\| = \langle x, x \rangle^{1/2}$; in the discrete case, the integrals are replaced by sums weighted by $\varphi(k)$. Since $\varphi$ decays rapidly (Lemma 3.2), these weighted quantities can exist even when the ordinary second moment of $x$ does not.
Then, combining any Cauchy sequence in $B$ with its limit makes $B$ a Hilbert space.
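A small numerical sketch may clarify why the weighting by $\varphi \in S$ matters; under the discrete reading of (3.1), with a Gaussian taken as the function of rapid decay and all parameters chosen only for illustration, the weighted sum of squares settles even though the plain sum of squares of a heavy-tailed sequence grows without bound:

```python
import numpy as np

# Discrete form of the weighted norm (3.1): ||x||^2 = sum_k |x(k)|^2 phi(k),
# with phi in S. The Gaussian weight decays rapidly, so the weighted sum
# converges even for a heavy-tailed sequence whose plain sum of squares
# diverges as more terms are included.
rng = np.random.default_rng(2)
N = 10**6
k = np.arange(N, dtype=np.float64)
x = 1.0 + rng.pareto(1.5, size=N)   # heavy-tailed sequence, E[x^2] infinite
phi = np.exp(-((k / 100.0) ** 2))   # a function of rapid decay
for n in (10**2, 10**4, 10**6):
    plain = np.sum(x[:n] ** 2)
    weighted = np.sum(x[:n] ** 2 * phi[:n])
    print(f"n={n:>7}: plain sum = {plain:12.3e}, weighted norm^2 = {weighted:.4f}")
```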

Note that $\overline{M} = M$; that is, $M$ contains all of its limit points in the norm (3.1). Then, $M$ is a closed subset of $B$.

Lemma 3.3 (see [73–75]; existence of a unique minimizing element in Hilbert space). Let $H$ be a Hilbert space and let $M$ be a closed convex subset of $H$. Let $x \in H$. Then, there exists a unique element $y_{0} \in M$ satisfying

$$\|x - y_{0}\| = \inf_{y \in M} \|x - y\|. \qquad (3.3)$$
Theorem 3.4. Let $L$ be a linear combination of the past values of $x(t)$ according to (2.10). Then, there exists a unique $L_{0} \in M$ such that

$$\left\| x(t_{n+k}) - L_{0} \right\| = \inf_{L \in M} \left\| x(t_{n+k}) - L \right\|. \qquad (3.4)$$
Proof. $B$ is a Hilbert space. $M$ is its closed subset and it is obviously convex. According to Lemma 3.3, for any $x(t_{n+k}) \in B$ there exists a unique $L_{0} \in M$ such that (3.4) holds. This completes the proof.
The above theorem exhibits that LRD series are predictable in the sense that the mean-square error expressed by (2.12) is in general generalized to the following for $\varphi \in S$:

$$\left\| x(t_{n+k}) - L \right\|^{2} = \int_{-\infty}^{\infty} \left| x(t_{n+k}) - L \right|^{2} \varphi(t)\, dt. \qquad (3.5)$$
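In finite dimensions, minimizing (3.5) amounts to an ordinary weighted least-squares projection, which makes the uniqueness asserted by Lemma 3.3 and Theorem 3.4 easy to check numerically; the sketch below (whose dimensions, weight, and data are illustrative assumptions) solves the $\varphi$-weighted normal equations:

```python
import numpy as np

# Projection in the phi-weighted inner product <u, v> = sum_k u(k) v(k) phi(k).
# The minimizer of the generalized MSE (3.5) over the span of the columns of
# X is the unique solution of the weighted normal equations G a = rhs.
rng = np.random.default_rng(3)
N, n = 512, 4
phi = np.exp(-((np.arange(N) / 64.0) ** 2))   # weight phi in S
X = 1.0 + rng.pareto(1.5, size=(N, n))        # columns play the past samples
target = 1.0 + rng.pareto(1.5, size=N)        # element to be approximated
G = X.T @ (phi[:, None] * X)                  # Gram matrix <x_i, x_j>
rhs = X.T @ (phi * target)                    # <x_i, target>
a_opt = np.linalg.solve(G, rhs)               # the unique minimizer
residual = target - X @ a_opt
print("coefficients:", a_opt)
print("generalized MSE (3.5):", np.sum(residual ** 2 * phi))
```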

4. Discussions and Conclusions

LRD series differ considerably from conventional series; see, for example, Beran [20, 76], Adler et al. [45], Doukhan et al. [47], as well as Künsch et al. [77]. Examples mentioned in this regard are regressions for fitting LRD models (Peng and Yao [67], Beran [78], and Beran et al. [79]), variance analysis of autocorrelation estimation (Li and Zhao [80]), stationarity tests (Li et al. [81]), power spectra (Li and Lim [82, 83]), and [84–91]. This paper addresses the particularity of the predictability of LRD series. We have given a proof that LRD series are predictable. As a side product of the proof procedure, the mean-square error used by Kolmogorov as a criterion of LRD series prediction has been generalized to the form of (3.5).

Acknowledgments

This work was partly supported by the National Natural Science Foundation of China (NSFC) under the project Grant numbers 60573125, 60873264, 60703112, and 60873168 and the National High Technology Research and Development Program (863) of China under Grant no. 2009AA01Z418.