Abstract

This paper mainly discusses some asymptotic dynamical properties of autoregressive processes. By using the m-dependence of random variables, we prove that the least squares (LS) estimators of the unknown parameters satisfy the law of iterated logarithm.

1. Introduction

The autoregressive process is a basic model in engineering, insurance, and business. It is a representation of a type of random process; as such, it describes certain time-varying processes in nature, engineering, and so forth. The autoregressive model specifies that the output variable depends linearly on its own previous values. It is a special case of the more general autoregressive moving average (ARMA) model of time series. For example, in engineering one considers a dam, with inputs of random amounts at random times, and a steady withdrawal of water for irrigation or power usage. This model has a Markovian representation. It is well known that the stability problem is a topic of great interest; it concerns why such stochastic systems stay within “reasonable values”: the dam does not overflow. The parameters of the process are directly related to the stability of the system, so the estimation of fixed coefficients and its asymptotic behavior are very useful in engineering. A rich literature has been devoted to this problem (see Mann and Wald [1], Anderson [2], Menneteau [3], Hwang and Choi [4], Hwang and Baek [5], and Miao and Shen [6]). Recently, some attention has been directed to random coefficient autoregressive models. This way of handling the data allows for large shocks in the dynamic structure of the model and also for some flexibility in the features of the volatility of the series, which are not available in fixed coefficient autoregressive models.

For simplicity, in this paper we are concerned with some dynamical behavior (in probabilistic language, the law of iterated logarithm) of the parameter estimation for the linear autoregressive model, which is of practical importance, by a technique different from the standard method. Define the first order autoregressive (AR(1)) process by
\[
y_t = \theta y_{t-1} + \varepsilon_t, \qquad t = 1, 2, \ldots,
\]
where the initial value $y_0$ is not necessarily zero and $\{\varepsilon_t, t \ge 1\}$ is a sequence of independent and identically distributed (i.i.d.) random errors with mean zero and variance $\sigma^2$. Based on the observations $y_0, y_1, \ldots, y_n$, the least squares (LS) estimator of $\theta$ is given by
\[
\hat{\theta}_n = \frac{\sum_{t=1}^{n} y_{t-1} y_t}{\sum_{t=1}^{n} y_{t-1}^{2}}.
\]
By a simple calculation, we have
\[
\hat{\theta}_n - \theta = \frac{\sum_{t=1}^{n} y_{t-1} \varepsilon_t}{\sum_{t=1}^{n} y_{t-1}^{2}}.
\]
It is well known that, in the stable (or, in other words, asymptotically stationary) case, when $|\theta| < 1$, $\hat{\theta}_n$ is asymptotically normal under the assumption that $\{\varepsilon_t\}$ is a sequence of i.i.d. random variables with mean zero and variance 1; namely,
\[
\sqrt{n}\,\bigl(\hat{\theta}_n - \theta\bigr) \xrightarrow{d} N\bigl(0,\, 1 - \theta^{2}\bigr),
\]
where “$\xrightarrow{d}$” denotes convergence in distribution, although this convergence does not hold uniformly in $\theta$ (see Anderson [2] and Meyn and Tweedie [7]). In the unstable (or, in other words, unit root) case, when $\theta = 1$, for the sequence $n(\hat{\theta}_n - 1)$ we have
\[
n\,\bigl(\hat{\theta}_n - 1\bigr) \xrightarrow{d} \frac{W(1)^{2} - 1}{2\int_{0}^{1} W(t)^{2}\,dt},
\]
where $W$ denotes a standard Wiener process. In the explosive case, when $|\theta| > 1$, the sequence $\hat{\theta}_n$ is again not asymptotically normal. If, for example, the errors are standard normal, then
\[
\frac{\theta^{n}}{\theta^{2} - 1}\,\bigl(\hat{\theta}_n - \theta\bigr) \xrightarrow{d} C,
\]
where $C$ has a standard Cauchy distribution. In general, the limit distribution depends on the distribution of $\varepsilon_1$ (see Basawa and Brockwell [8]).
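To make the setup concrete, the following minimal simulation sketch (the values of $\theta$, $\sigma$, and the sample size are illustrative choices only, not values from the paper) generates a stable AR(1) path and computes the LS estimator $\hat{\theta}_n$ defined above, comparing the estimation error with the $\sqrt{(1-\theta^2)/n}$ scale suggested by the central limit theorem.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper).
theta, sigma, n = 0.6, 1.0, 100_000

# Simulate y_t = theta * y_{t-1} + eps_t with y_0 = 0 and i.i.d. N(0, sigma^2) errors.
eps = rng.normal(0.0, sigma, size=n)
y = np.empty(n + 1)
y[0] = 0.0
for t in range(1, n + 1):
    y[t] = theta * y[t - 1] + eps[t - 1]

# Least squares estimator:  theta_hat = sum(y_{t-1} y_t) / sum(y_{t-1}^2).
theta_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

# In the stable case the CLT suggests sqrt(n)*(theta_hat - theta) ~ N(0, 1 - theta^2),
# so the estimation error should be of order sqrt((1 - theta^2) / n).
print(theta_hat, np.sqrt((1 - theta**2) / n))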

In this paper, we consider the convergence rates of and under some conditions. By constructing an m-dependent sequence of random variables and applying an approximation method together with a central limit theorem for m-dependent random variables, we prove that the LS estimators of and satisfy the law of iterated logarithm. Our results can be regarded as an embodiment of those in Miao and Shen [6].
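For $|\theta| < 1$, iterating the recursion gives the causal representation $y_t = \theta^{t} y_0 + \sum_{j=0}^{t-1} \theta^{j} \varepsilon_{t-j}$. Truncating this series is a natural way to obtain an m-dependent approximation; the sketch below illustrates the type of construction (the exact truncation used in the proofs may differ):
\[
y_t^{(m)} \;=\; \sum_{j=0}^{m} \theta^{j}\,\varepsilon_{t-j}, \qquad t > m,
\]
\[
\mathbb{E}\Bigl( y_t - \theta^{t} y_0 - y_t^{(m)} \Bigr)^{2}
\;=\; \sigma^{2} \sum_{j=m+1}^{t-1} \theta^{2j}
\;\le\; \frac{\sigma^{2}\,\theta^{2(m+1)}}{1-\theta^{2}} .
\]
Since $y_t^{(m)}$ depends only on $\varepsilon_{t-m}, \ldots, \varepsilon_t$, the sequence $\{y_t^{(m)}\}_{t > m}$ is strictly stationary and m-dependent, and the truncation error is geometrically small in $m$.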

In what follows, we make the following assumption: (C.1) there exists a constant such that for any and .

2. Main Results

The main result is as follows.

Theorem 1. Assume that . Then the law of iterated logarithm for holds; namely,
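For orientation, the stable-case central limit theorem $\sqrt{n}(\hat{\theta}_n - \theta) \xrightarrow{d} N(0, 1-\theta^2)$ recalled in the introduction suggests the classical form of the law of iterated logarithm displayed below; this is a sketch of the expected normalization, and the precise assumptions and constants are those of Theorem 1 itself.
\[
\limsup_{n\to\infty}\; \frac{\sqrt{n}\,\bigl(\hat{\theta}_n - \theta\bigr)}{\sqrt{2\log\log n}} \;=\; \sqrt{1-\theta^{2}} \qquad \text{a.s.},
\]
\[
\liminf_{n\to\infty}\; \frac{\sqrt{n}\,\bigl(\hat{\theta}_n - \theta\bigr)}{\sqrt{2\log\log n}} \;=\; -\sqrt{1-\theta^{2}} \qquad \text{a.s.}
\]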

3. Preliminary Lemmas

In order to prove Theorem 1, we need the following lemmas.

Lemma 2. Let be a sequence of i.i.d. random variables with and . Assume that there exists a positive constant such that . Then, for any ,

Proof. Let and as . By virtue of the moderate deviation principle for (see Dembo and Zeitouni [9, page 109]), we have for any closed set , and for any open set , where . Let and . Set ; we easily get the claim by (9) and (10).
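For reference, the moderate deviation principle invoked here can be stated in the following one-dimensional form (a sketch; cf. Dembo and Zeitouni [9], and note that the scaling sequence employed in the proof may differ): for i.i.d. $\xi_1, \xi_2, \ldots$ with $\mathbb{E}\xi_1 = 0$, $\operatorname{Var}\xi_1 = \sigma^2 > 0$, and a finite exponential moment in a neighbourhood of zero, and for any $a_n \to 0$ with $n a_n \to \infty$, the variables $Z_n = \sqrt{a_n/n}\,\sum_{i=1}^{n}\xi_i$ satisfy
\[
\limsup_{n\to\infty}\, a_n \log \mathbb{P}\bigl(Z_n \in F\bigr) \;\le\; -\inf_{x \in F} \frac{x^{2}}{2\sigma^{2}} \quad \text{for every closed } F \subset \mathbb{R},
\]
\[
\liminf_{n\to\infty}\, a_n \log \mathbb{P}\bigl(Z_n \in G\bigr) \;\ge\; -\inf_{x \in G} \frac{x^{2}}{2\sigma^{2}} \quad \text{for every open } G \subset \mathbb{R}.
\]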
The following lemma is Lévy’s inequality. For completeness, we give its proof.

Lemma 3. Let be a sequence of independent random variables taking their values in . Let . Then for any , where denotes the median of r.v. .

Proof. Let , , , and . From the definition of median, it can be seen that for any . Further,

Remark 4. A real number $m$ is usually called a median of the r.v. $X$ if $\mathbb{P}(X \ge m) \ge 1/2$ and $\mathbb{P}(X \le m) \ge 1/2$.
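In the notation of Remark 4, the standard form of Lévy’s inequality (a sketch of the version usually stated; Lemma 3 is an inequality of this type) is as follows: for independent $X_1, \ldots, X_n$ with partial sums $S_k = X_1 + \cdots + X_k$ and with $m(\cdot)$ denoting a median,
\[
\mathbb{P}\Bigl( \max_{1 \le k \le n} \bigl[ S_k - m(S_k - S_n) \bigr] \ge x \Bigr) \;\le\; 2\,\mathbb{P}\bigl( S_n \ge x \bigr), \qquad x \in \mathbb{R},
\]
\[
\mathbb{P}\Bigl( \max_{1 \le k \le n} \bigl| S_k - m(S_k - S_n) \bigr| \ge x \Bigr) \;\le\; 2\,\mathbb{P}\bigl( |S_n| \ge x \bigr), \qquad x > 0 .
\]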

Lemma 5. Let be a sequence of independent random variables with . Let and . Assume that the following conditions hold: and there exists a constant satisfying . Then satisfies the law of iterated logarithm; that is,

Proof. See Theorem 1.2 of Wittmann [10].
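For orientation, the conclusion of Wittmann’s general law of iterated logarithm takes the normalized form below (a sketch; the moment conditions are those listed in Lemma 5 and in [10, Theorem 1.2]), where $S_n = X_1 + \cdots + X_n$ and $B_n = \sum_{k=1}^{n} \mathbb{E}X_k^{2} \to \infty$:
\[
\limsup_{n\to\infty} \frac{S_n}{\sqrt{2\,B_n \log\log B_n}} \;=\; 1 \quad \text{a.s.},
\qquad
\liminf_{n\to\infty} \frac{S_n}{\sqrt{2\,B_n \log\log B_n}} \;=\; -1 \quad \text{a.s.}
\]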

Lemma 6. Let , . Then

Proof. First, note that and for any . So . Furthermore, for , Then it is not difficult to see that is a strictly stationary m-dependent sequence of random variables. By (20) and Theorem 1 of Chen [11], one can prove the following equation: For the proof of (18), the method is similar.
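For context (this is not a restatement of Theorem 1 of Chen [11]), recall the classical Hoeffding–Robbins central limit theorem for a strictly stationary m-dependent sequence $\{Z_t\}$ with $\mathbb{E}Z_1 = 0$ and $\mathbb{E}Z_1^{2} < \infty$:
\[
\frac{1}{\sqrt{n}} \sum_{t=1}^{n} Z_t \;\xrightarrow{d}\; N\bigl(0, \tau^{2}\bigr),
\qquad
\tau^{2} = \operatorname{Var}(Z_1) + 2 \sum_{j=1}^{m}\operatorname{Cov}(Z_1, Z_{1+j}).
\]
It is results of this type, applied to the m-dependent approximation described in the introduction, that drive the approximation argument.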

Lemma 7. Let Then,

Proof. In order to prove (23), we only need to show that, for any and sufficiently large , where i.o. denotes that the event occurs infinitely often. Note that, for any , there exists an increasing integer sequence satisfying So , where denotes . Furthermore, for any , we have the following inequality: Also note that, for sufficiently large, So Set and let satisfy . Thus Since , one has Consequently, For any positive integers , , and , define It is easy to see that and are two sets of i.i.d. random variables. Let be an i.i.d. sequence of random variables with the same law as . We have Note that for any random variable with mean zero, . In fact, By Lemma 3, where the second inequality holds for any fixed constant and sufficiently large . By Lemma 2, we further have From (25), as . So By virtue of and , we easily see that . Thus Applying the Borel–Cantelli lemma, we get (24). This completes the proof.
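For completeness, the direction of the Borel–Cantelli lemma used at the end of this proof reads: if $\{A_n\}$ is a sequence of events with $\sum_{n} \mathbb{P}(A_n) < \infty$, then
\[
\mathbb{P}\bigl( A_n \ \text{i.o.} \bigr) \;=\; \mathbb{P}\Bigl( \bigcap_{N\ge 1}\bigcup_{n\ge N} A_n \Bigr) \;=\; 0 .
\]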

Lemma 8. Let and . Then

Proof. From (1), one can easily calculate Consequently, Now, for any , , so the bracketed term in (41) divided by converges a.e. to 0, thanks to a theorem of Chung [12]. Furthermore, The former equality of (39) is obtained.
As for the latter one, can be written as Noting that , one can see that The next-to-last term of (44) equals , whose mean is zero and whose variance is bounded by a constant times . Thus The last step is to prove By a simple calculation, one has Note that is a sequence of i.i.d. random variables and By virtue of the strong law of large numbers for independent random variables (see Chung [12]), one has The mean of the third term in (48) is zero and its variance is less than , which yields So the proof of the lemma is completed.
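As a numerical sanity check of the two averages controlled in this proof, written here in the standard stable-case form (an assumption about the exact display, in the notation of the introduction): $n^{-1}\sum_{t=1}^{n} y_{t-1}\varepsilon_t \to 0$ and $n^{-1}\sum_{t=1}^{n} y_{t-1}^{2} \to \sigma^{2}/(1-\theta^{2})$ almost surely. The parameter values below are illustrative only.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not taken from the paper).
theta, sigma, n = 0.6, 1.0, 1_000_000

eps = rng.normal(0.0, sigma, size=n)
y = np.empty(n + 1)
y[0] = 0.0
for t in range(1, n + 1):
    y[t] = theta * y[t - 1] + eps[t - 1]

# (1/n) * sum y_{t-1} * eps_t  should be close to 0,
# (1/n) * sum y_{t-1}^2        should be close to sigma^2 / (1 - theta^2).
avg_cross = np.dot(y[:-1], eps) / n
avg_square = np.dot(y[:-1], y[:-1]) / n
print(avg_cross, avg_square, sigma**2 / (1 - theta**2))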

4. Proof of Theorem 1

The proof is divided into four steps as follows.

Step 1. Let Then can be calculated by It is easily seen that Now we prove Indeed, fix a natural number and define Note that is a linear combination of terms and for any ; we easily obtain . Furthermore, from Lemma 7, Next, we let Then the limit distribution of is the same as that of . Further, Combining (58) and (60) with Lemma 6, we get (56).

Step 2. We claim that Note that and ; we have Since is a sequence of i.i.d. random variables, one easily checks that the sequence satisfies the conditions of Lemma 5. Thus, (61) is obtained.
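As a side remark, when Lemma 5 is applied to an i.i.d. sequence with mean zero and finite variance $v$, one has $B_n = n v$, and its conclusion reduces to the Hartman–Wintner law of iterated logarithm:
\[
\limsup_{n\to\infty} \frac{S_n}{\sqrt{2\,n\,v\,\log\log n}} \;=\; 1 \quad \text{a.s.}
\]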

Step 3. We claim that By (54), (55), and (61), we easily get (64). By virtue of Lemma 8, (65) is equivalent to From (63), we can see that So, from (58) and (60), (65) is also equivalent to Similarly to the proof of Lemma 6, (68) can be easily derived.

Step 4. Since , by Lemma 8, together with (64) and (65), we obtain (7). This completes the proof of the theorem.
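Under the standard stable-case forms assumed in the sketches above (the decomposition $\hat{\theta}_n - \theta = \sum_{t=1}^{n} y_{t-1}\varepsilon_t / \sum_{t=1}^{n} y_{t-1}^{2}$, together with the limits recalled after Lemma 8), the way the final constant arises can be seen heuristically as follows:
\[
\limsup_{n\to\infty} \frac{\sum_{t=1}^{n} y_{t-1}\varepsilon_t}{\sqrt{2 n \log\log n}}
\;=\; \frac{\sigma^{2}}{\sqrt{1-\theta^{2}}} \quad \text{a.s.},
\qquad
\frac{1}{n}\sum_{t=1}^{n} y_{t-1}^{2} \;\to\; \frac{\sigma^{2}}{1-\theta^{2}} \quad \text{a.s.},
\]
so that
\[
\limsup_{n\to\infty} \frac{\sqrt{n}\,\bigl(\hat{\theta}_n - \theta\bigr)}{\sqrt{2\log\log n}}
\;=\; \frac{\sigma^{2}/\sqrt{1-\theta^{2}}}{\sigma^{2}/(1-\theta^{2})}
\;=\; \sqrt{1-\theta^{2}} \quad \text{a.s.}
\]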

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the Major Project of the National Social Science Fund (12&ZD223), by the Program of Introducing Talents of Discipline to Universities (B07042), and in part by NSF IIP 1160960 and NSF IIP 1332024.