Abstract

Consider an Ornstein-Uhlenbeck process driven by a fractional Brownian motion. It is an interesting problem to find criteria for whether the process is stable or has a unit root, given a finite sample of observations. Recently, various asymptotic distributions for estimators of the drift parameter have been developed. We illustrate, through computer simulations and through a Stein-type bound, that these asymptotic distributions are inadequate approximations of the finite-sample distribution for moderate values of the drift and the sample size. We propose a new model to obtain asymptotic distributions near zero and compute the limiting distribution. We show applications to regression analysis and obtain hypothesis tests together with their asymptotic power.

1. Introduction

Stability properties of the ordinary differential equation $\dot{x}(t) = \theta x(t)$ depend on the sign of the parameter $\theta$: the equation is asymptotically stable if $\theta < 0$, neutrally stable if $\theta = 0$, and unstable if $\theta > 0$. These stability results carry over to the stochastic process
\[
X(t) = X(0) + \theta \int_0^t X(s)\,ds + W(t), \quad t \ge 0, \tag{1.1}
\]
driven by the noise $W$. When the value of $\theta$ is not known and a trajectory of $X$ is observed over a finite time interval $[0, T]$, a natural problem is to develop the zero-root test, that is, a statistical procedure for testing the hypothesis $\theta = 0$ versus one of the possible alternatives $\theta < 0$, $\theta \neq 0$, or $\theta > 0$. While the classical solution to this problem is well known (use the maximum likelihood estimator (MLE) of the parameter $\theta$ as the test statistic), further analysis is necessary because the exact distribution of the MLE is usually too complicated to allow an explicit computation of either the critical region or the power of the test. More specifically, an approximate distribution of the MLE must be introduced and investigated, both in the finite-sample asymptotic and in the limit $T \to \infty$. There are other potential complications, such as when the MLE is not available (e.g., if $W$ is a stable Lévy process, see [1]) or when the MLE is difficult to implement numerically (e.g., if $W$ is a fractional Brownian motion, see [2]).

The objective of this work is the analysis and implementation of the zero-root test for (1.1) when $W = W^H$, a fractional Brownian motion with Hurst parameter $H$, and $H \ge 1/2$. When $H < 1/2$, the integral transformation of Jost [3, Corollary 5.2] reduces the corresponding model to one with Hurst parameter larger than $1/2$ (see [2]).

Recall that the fractional Brownian motion $W^H = (W^H(t))_{t \ge 0}$, $H \in (0,1)$, is a Gaussian process with $W^H(0) = 0$, mean zero, and covariance
\[
\mathbb{E}\big[W^H(t)\,W^H(s)\big] = \frac{1}{2}\big(t^{2H} + s^{2H} - |t - s|^{2H}\big).
\]
Direct computations show that, for every continuous process $W$, (1.1) has a closed-form solution that does not involve stochastic integration:
\[
X(t) = X(0)\,e^{\theta t} + W(t) + \theta \int_0^t e^{\theta(t-s)}\,W(s)\,ds.
\]
When $W = W^H$, let $X^H$ denote the corresponding fractional Ornstein-Uhlenbeck process (1.4):
\[
X^H(t) = X(0)\,e^{\theta t} + W^H(t) + \theta \int_0^t e^{\theta(t-s)}\,W^H(s)\,ds,
\]
and let $X^H_b$ (now with explicit dependence on the parameter) denote the particular case of (1.4) with zero initial condition:
\[
X^H_b(t) = W^H(t) + b \int_0^t e^{b(t-s)}\,W^H(s)\,ds.
\]
Define random variables
\[
\hat\theta_T = \frac{\big(X^H(T)\big)^2 - \big(X^H(0)\big)^2 - T^{2H}}{2\int_0^T \big(X^H(t)\big)^2\,dt}, \qquad
\xi_b = \frac{\big(X^H_b(1)\big)^2 - 1}{2\int_0^1 \big(X^H_b(t)\big)^2\,dt}
\]
(these are (1.6) and (1.7), respectively), where $b \in \mathbb{R}$. To motivate the main result of this paper, note that if $H = 1/2$, then $\hat\theta_T$ is the maximum likelihood estimator of $\theta$ based on the observations $X(t)$, $0 \le t \le T$ (see [4], Section 17.3):
\[
\hat\theta_T = \frac{\int_0^T X(t)\,dX(t)}{\int_0^T X^2(t)\,dt}.
\]
While the exact distribution of $\hat\theta_T$ is not known, the following asymptotic relations hold as $T \to \infty$: the normalized error $\sqrt{T}\,(\hat\theta_T - \theta)$ converges to a normal distribution when $\theta < 0$ (1.9); $T\hat\theta_T$ converges in distribution to $\xi_0$, that is, to (1.7) with $b = 0$, when $\theta = 0$ (1.10); and a suitably normalized $\hat\theta_T - \theta$ converges to the standard Cauchy distribution, with probability density function $1/\big(\pi(1 + x^2)\big)$, $x \in \mathbb{R}$, when $\theta > 0$ (1.11).
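
For the reader who wants to experiment, the quantities above are easy to simulate. The following sketch generates $W^H$ by a Cholesky factorization of the fBm covariance matrix, builds the fractional Ornstein-Uhlenbeck path from the closed-form solution, and evaluates the ratio form of $\hat\theta_T$; all grid sizes and parameter values are illustrative.

```python
import numpy as np

def fbm_cholesky(n, H, T, rng):
    """Sample W^H at n grid points in (0, T] via Cholesky factorization of the
    fBm covariance (1/2)(t^{2H} + s^{2H} - |t - s|^{2H})."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n), t

def fou_path(W, t, theta, x0=0.0):
    """Fractional OU path from the closed-form solution
    X(t) = x0 e^{theta t} + W(t) + theta int_0^t e^{theta (t-s)} W(s) ds."""
    dt = t[1] - t[0]
    X = np.empty_like(W)
    for i, ti in enumerate(t):
        X[i] = (x0 * np.exp(theta * ti) + W[i]
                + theta * np.sum(np.exp(theta * (ti - t[:i + 1])) * W[:i + 1]) * dt)
    return X

def theta_hat(X, t, H, x0=0.0):
    """Ratio-form estimator (X(T)^2 - X(0)^2 - T^{2H}) / (2 int_0^T X^2 dt)."""
    dt = t[1] - t[0]
    return (X[-1]**2 - x0**2 - t[-1]**(2 * H)) / (2.0 * np.sum(X**2) * dt)

rng = np.random.default_rng(0)
W, t = fbm_cholesky(n=1000, H=0.7, T=10.0, rng=rng)
X = fou_path(W, t, theta=-0.5)
print(theta_hat(X, t, H=0.7))   # close to -0.5 only when |theta| * T is large
```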

While (1.10) suggests that the distribution of $\xi_0$ can be used to construct an asymptotic zero-root test, it turns out that neither (1.9) nor (1.11) is a good choice for analyzing the power of the resulting test for small values of the product $|\theta| T$. There are two reasons: (a) both (1.9) and (1.11) suggest that the product $|\theta| T$ should be sufficiently large for the corresponding approximation to work; (b) it follows from (1.9)-(1.11) that the limit distribution of the appropriately normalized residual $\hat\theta_T - \theta$ has a discontinuity near $\theta = 0$, while, by (1.6), the distribution of $\hat\theta_T$ is a continuous function of $\theta$ for each fixed $T$. Further discussion of the finite-sample statistical inference is in Sections 2 and 3. In particular, Table 1 and Figure 1 in Section 3 provide some numerical results.

It is therefore natural to derive a different family of asymptotic distributions, one that depends continuously on the parameter near $\theta = 0$. To this end, let $\theta = \theta_T$ depend on the observation time $T$. Then, for each fixed $T$, equality (1.4) still defines the fractional Ornstein-Uhlenbeck process, but the asymptotic behavior of the estimator (1.8) changes.

The following is the main result of the paper.

Theorem 1.1. Assume that $H \in [1/2, 1)$, and let $(\theta_T)_{T > 0}$ be a family of parameters such that $\lim_{T\to\infty} T\theta_T = b$ for some $b \in \mathbb{R}$. Then, as $T \to \infty$, $\lim_{T\to\infty}\hat\theta_T = 0$ with probability one, and the convergence in distribution (1.12) holds:
\[
T\hat\theta_T \xrightarrow{\;d\;} \xi_b.
\]

The proof is given in Section 3. The almost sure convergence can be shown using the law of the iterated logarithm. To prove (1.12), we first establish the asymptotic properties of the individual terms of the estimator $\hat\theta_T$ and then combine these results using the continuous mapping theorem.
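
The source of the limit can be previewed with a short self-similarity computation; the following is a sketch, assuming the ratio form of $\hat\theta_T$ from (1.6) and a zero initial condition. Setting $Y(t) = T^{-H} X(Tt)$ and substituting $s = Tu$ in (1.5),
\[
Y(t) \stackrel{d}{=} W^H(t) + \theta_T T \int_0^t e^{\theta_T T (t-u)}\,W^H(u)\,du, \quad 0 \le t \le 1,
\]
so that
\[
T\hat\theta_T = T\,\frac{X^2(T) - T^{2H}}{2\int_0^T X^2(t)\,dt}
= \frac{T^{2H}\big(Y^2(1) - 1\big)}{2\,T^{2H}\int_0^1 Y^2(u)\,du}
= \frac{Y^2(1) - 1}{2\int_0^1 Y^2(u)\,du}.
\]
In distribution, $Y$ is the process from (1.5) with parameter $\theta_T T$, and letting $T\theta_T \to b$ identifies the limit law as that of $\xi_b$.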

An alternative least-squares-type estimator $\tilde\theta_T$ has been considered in [5] and is given by
\[
\tilde\theta_T = \frac{\int_0^T X(t)\,\delta X(t)}{\int_0^T X^2(t)\,dt}. \tag{1.13}
\]
The stochastic integral in (1.13) is understood as a divergence integral for $H > 1/2$ (see Section 3) and as an Itô integral for $H = 1/2$. A serious drawback of this estimator is that, unless $H = 1/2$, there is no computable representation of $\tilde\theta_T$ given the observations $X(t)$, $0 \le t \le T$, because there is no known way to compute the divergence integral. If $H = 1/2$, then $\tilde\theta_T = \hat\theta_T$, and $\tilde\theta_T$ is computable.
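
For $H = 1/2$, the identification of the two estimators is a one-line application of Itô's formula to $X^2$:
\[
\int_0^T X(t)\,dX(t) = \frac{1}{2}\Big(X^2(T) - X^2(0) - T\Big),
\]
since the quadratic variation of $X$ on $[0, T]$ equals $T$; dividing both sides by $\int_0^T X^2(t)\,dt$ shows that the least-squares ratio coincides with the ratio form of $\hat\theta_T$ in (1.6).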

Nonetheless, there is an analogue of Theorem 1.1 for all $H \in [1/2, 1)$. To state the result, define the random variable $\tilde\xi_b$ by (1.14), where $b \in \mathbb{R}$ and ${}_1F_1$ denotes Kummer's hypergeometric function; see Section 3 for details.

Theorem 1.2. Assume that $H \in [1/2, 1)$, and let $(\theta_T)_{T > 0}$ be a family of parameters such that $\lim_{T\to\infty} T\theta_T = b$ for some $b \in \mathbb{R}$. Then, as $T \to \infty$, $\lim_{T\to\infty}\tilde\theta_T = 0$ with probability one, and the convergence in distribution (1.15) holds:
\[
T\tilde\theta_T \xrightarrow{\;d\;} \tilde\xi_b.
\]

The proof is given in Section 3. The almost sure convergence can be shown using the law of the iterated logarithm. To prove (1.15), we first find a different representation of $\tilde\theta_T$; then we use the asymptotic distributions of the individual terms and combine the results using the continuous mapping theorem.

2. Strong Consistency and Large-Sample Asymptotic

In this section, we study the asymptotic behavior, as $T \to \infty$, of the estimator $\hat\theta_T$ given by (1.6). First, we show that it is a strongly consistent estimator of the parameter $\theta$; consistency is a minimal requirement for any statistic to be of practical use in estimating a parameter. Moreover, we derive its rate of convergence and the corresponding limit theorems. Finally, we illustrate, through a Stein-type bound and computer simulations, the estimation problems that arise when $|\theta| T$ is small.

2.1. Strong Consistency

We show that the estimator $\hat\theta_T$ is a strongly consistent estimator of $\theta$, that is, for all $\theta \in \mathbb{R}$, $\lim_{T\to\infty}\hat\theta_T = \theta$ with probability one (2.1).

Theorem 2.1. Let $H \in [1/2, 1)$ and let $X$ be the fractional Ornstein-Uhlenbeck process defined in (1.4). Then, $\hat\theta_T$ defined in (1.8) is a strongly consistent estimator of $\theta$.

Proof. If $\theta < 0$ and $X(0) = 0$, then, by [6, Lemma 3.3], (2.2) holds with probability one; analysis of the proof shows that (2.2) also holds when $X(0) \neq 0$. Also, (2.3) holds with probability one, see ([6], the remark after the proof of Theorem 3.4). Then, (2.1) follows from (2.2) and (2.3).
If $\theta > 0$ and $X(0) = 0$, then, by [7, Theorem 1], (2.4) holds with probability one; analysis of the proof shows that (2.4) also holds when $X(0) \neq 0$. Also, by [7, Lemmas 2 and 3], the finite limit (2.5) exists with probability one. Therefore, (2.6) holds with probability one, and (2.1) follows.
If $\theta = 0$, then $X(t) = X(0) + W^H(t)$, and the law of the iterated logarithm for self-similar Gaussian processes [8, Corollary 3.1] implies that, with probability one, for every $\varepsilon > 0$ and all sufficiently large $T$, the numerator of $\hat\theta_T$ grows more slowly than the denominator. Since the ratio then vanishes, equality (2.1) follows.

2.2. Convergence Rates and Asymptotic Distributions

We have the following asymptotic distributions and convergence rates of $\hat\theta_T$.

Theorem 2.2. As $T \to \infty$, the convergences (2.9), (2.10), and (2.11) hold, corresponding to the cases $\theta < 0$, $\theta = 0$, and $\theta > 0$, respectively, where the Cauchy limit in (2.11) arises as the ratio of two independent standard normally distributed random variables.

Proof. Define the normalized quantities as in (2.12), through which $\hat\theta_T$ is expressed, and treat the three cases separately.
Case $\theta < 0$: by [5, Theorem 4.1], (2.13) holds if $X(0) = 0$. Hence, for (2.9), it is enough to show that the contribution of the initial condition is negligible as $T \to \infty$. By ([5], (3.7)) combined with [5, Corollary 5.2] in the web-only appendix, it follows that this contribution vanishes almost surely, and the result follows.
Case $\theta > 0$: from [7, Theorem 5], with the obvious modification for a nonzero initial condition, (2.11) holds with the normalization defined in (2.12). Moreover, using the first convergence in (2.6), the remainder terms vanish almost surely as $T \to \infty$.
Case $\theta = 0$: the convergence in (2.10) follows from Theorem 1.1 (with $b = 0$), which is proved later.

2.3. A Stein-Type Bound

While both (2.9) and (2.11) suggest that the rate of convergence is determined by the product $|\theta| T$, more precise estimates are possible when $H = 1/2$ and $\theta < 0$.

If $H = 1/2$, then (1.8) implies (2.16), where $W$ is the standard Brownian motion, $X$ is the corresponding Ornstein-Uhlenbeck process, and $\theta < 0$. In the following, we show that the rate of convergence of (a) the numerator of (2.16) to the normal distribution and (b) the denominator to a constant indeed depends on how large the term $|\theta| T$ is. For (a), we use elements of Stein's method on Wiener chaos (see [9]). To simplify the notation, we replace $\theta$ by $-\theta$ (so that now $\theta > 0$) and assume a zero initial condition, that is, we consider (2.17), where the process now solves $dX(t) = -\theta X(t)\,dt + dW(t)$, $X(0) = 0$.

For random variables $\xi$ and $\eta$ on $(\Omega, \mathcal{F}, \mathbb{P})$, we define the total variation distance
\[
d_{TV}(\xi, \eta) = \sup_{A \in \mathcal{B}(\mathbb{R})} \big| \mathbb{P}(\xi \in A) - \mathbb{P}(\eta \in A) \big|.
\]

Note that the numerator can be written as an element of the second Wiener chaos (2.19), where $I_2(f)$ denotes the iterated Wiener integral of a symmetric square-integrable function $f$. Then one can see that the numerator converges to a normal distribution and that the denominator converges to a constant, both almost surely and in mean square.

By an estimate from [10, Section 2] and an application of Chebyshev's inequality, we get (2.21) and (2.22). Thus, the convergence of the denominator depends on the size of $\theta T$. For the numerator, some additional computations are needed to get the rate of convergence to the normal distribution.

Note that the numerator is an element of the second Wiener chaos of $W$; see ([5], Section 1.1.2) for Wiener chaos in the white-noise case. Hence, by [9, Theorem 1.5], we have (2.23), where $Z$ is a generic standard normally distributed random variable.
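
For orientation, a bound of this type can be sketched as follows (the exact constants in [9, Theorem 1.5] may differ): if $F = I_2(f)$ belongs to the second Wiener chaos and $\mathbb{E}F^2 = \sigma^2$, then
\[
d_{TV}\big(F, \mathcal{N}(0, \sigma^2)\big) \le \frac{2}{\sigma^2}\,\sqrt{\operatorname{Var}\Big(\frac{1}{2}\,\|DF\|_{\mathfrak{H}}^2\Big)},
\]
where $D$ is the Malliavin derivative introduced in Section 3. The variance on the right-hand side is an explicit functional of the kernel $f$, and the computations below evaluate it for the numerator.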

We have (2.24) and, therefore, (2.25), that is, (2.26).

We also obtain (2.27), that is, (2.28).

Combining (2.26), (2.28), and (2.23), we finally conclude (2.29). An explicit value of the constant can be recovered from the above computations.

2.4. Computer Simulations

Both (2.22) and (2.29) suggest that the distribution of $\hat\theta_T$ will be rather different from normal for moderate values of $\theta T$. This conclusion is consistent with Monte Carlo simulations: for the values of $\theta$ and $T$ used there (so that $\theta T$ is moderate indeed), the normality assumption for $\hat\theta_T$ is rejected at the stated significance level; see Table 1.
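
A minimal Monte Carlo check of this kind, written for $H = 1/2$ so that the paths can be generated by an Euler scheme, might look as follows; the parameter values are illustrative and are not those of Table 1.

```python
import numpy as np
from scipy import stats

def ou_theta_hat(theta, T, n, rng):
    """Euler-Maruyama path of dX = theta X dt + dW, X(0) = 0, and the
    estimator (X(T)^2 - T) / (2 int_0^T X^2 dt)."""
    dt = T / n
    X = np.zeros(n + 1)
    dW = rng.standard_normal(n) * np.sqrt(dt)
    for k in range(n):
        X[k + 1] = X[k] + theta * X[k] * dt + dW[k]
    return (X[-1]**2 - T) / (2.0 * np.sum(X[:-1]**2) * dt)

rng = np.random.default_rng(1)
theta, T = -0.5, 10.0   # moderate |theta| * T = 5
sample = np.array([ou_theta_hat(theta, T, 500, rng) for _ in range(2000)])
z = np.sqrt(T) * (sample - theta)   # normalization suggested by (2.9)
print(stats.normaltest(z))          # normality is typically rejected here
```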

In the next section, we study the asymptotic distribution of the statistic $T\hat\theta_T$ and obtain a better approximation of the finite-sample distribution of $\hat\theta_T$.

3. Finite-Sample Approximation and Hypothesis Testing

In this section, we develop approximations of the finite-sample distribution of the estimator $\hat\theta_T$ which are different from (2.9) and (2.11). The approximate distribution is continuous as a function of the suitable parameter and, according to Monte Carlo simulations, works well when (2.9) and (2.11) do not.

As a motivation, recall an analogous result for the first-order stochastic difference equation
\[
x_k = \rho\, x_{k-1} + \epsilon_k, \quad k = 1, 2, \dots, n,
\]
where the $\epsilon_k$ are i.i.d. normally distributed random variables with mean zero and variance $\sigma^2$, and $\rho$ is an unknown parameter. The maximum likelihood estimator is the least-squares estimator (see [11]):
\[
\hat\rho_n = \frac{\sum_{k=1}^n x_{k-1}\, x_k}{\sum_{k=1}^n x_{k-1}^2}.
\]
It is known that $\hat\rho_n$ is a consistent estimator of $\rho$, that is, $\hat\rho_n \to \rho$ in probability. Moreover, the asymptotic distribution, as $n \to \infty$, of the suitably normalized residual $\hat\rho_n - \rho$ is given by (3.3)-(3.5). Equation (3.3) has been proven in [11], and (3.4) and (3.5) in [12]. Several authors deal with asymptotic distributions in the case where $\rho$ is near $1$, which has been extensively studied in [13-15]. The idea is to choose the parameter according to
\[
\rho = \rho_n = 1 + \frac{b}{n},
\]
where $n$ is the sample size (this is (3.6)). Note that this family of parameters satisfies $\rho_n \to 1$ and $n(\rho_n - 1) \to b$, where $b < 0$ corresponds to the stationary case, $b = 0$ corresponds to the unit root, and $b > 0$ corresponds to the explosive case. The distribution of $n(\hat\rho_n - \rho_n)$ converges to a functional of the Ornstein-Uhlenbeck process, see [13, Theorem 1(b)]: the limit (3.7) is the random variable given in (1.7) with $H = 1/2$, leading to a better asymptotic distribution than (3.3) and (3.5) for moderate values of $n$ and $|b|$.
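
The near-unit-root regime is easy to reproduce numerically; in the following sketch, $\rho_n = 1 + b/n$ as above, and the sample size and the value of $b$ are illustrative.

```python
import numpy as np

def ar1_path(rho, n, rng):
    """AR(1) path x_k = rho x_{k-1} + eps_k with x_0 = 0 and N(0, 1) noise."""
    x = np.zeros(n + 1)
    eps = rng.standard_normal(n)
    for k in range(n):
        x[k + 1] = rho * x[k] + eps[k]
    return x

def lse_rho(x):
    """Least-squares estimator sum x_{k-1} x_k / sum x_{k-1}^2."""
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

rng = np.random.default_rng(2)
n, b = 500, -2.0
rho = 1.0 + b / n   # local-to-unity parametrization, as in (3.6)
stat = np.array([n * (lse_rho(ar1_path(rho, n, rng)) - rho) for _ in range(2000)])
print(np.quantile(stat, [0.05, 0.5, 0.95]))   # empirical law of n (rho_hat - rho)
```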

The results in Section 2 suggest that, whenever the product $|\theta| T$ is small, continuous-time analogues of (3.6) and (3.7) are necessary, which requires a better understanding of stochastic integration with respect to the fractional Brownian motion. This understanding is necessary both to further analyze (1.6) and to establish the connection between (1.6) and (1.13). Similar to [5, 16, 17], we follow the Malliavin calculus approach.

3.1. Stochastic Integration with respect to Fractional Brownian Motion

As before, $W^H$ denotes a fractional Brownian motion with index $H \in (0, 1)$. It can be shown that $W^H$ has stationary increments and is self-similar, in the sense that, for every $a > 0$, the process $(W^H(at))_{t \ge 0}$ has the same distribution as $(a^H W^H(t))_{t \ge 0}$. Assume furthermore that the sigma-field $\mathcal{F}$ is generated by $W^H$. Let $\mathcal{E}$ be the set of real-valued step functions on $[0, T]$, and let $\mathfrak{H}$ be the real separable Hilbert space defined as the closure of $\mathcal{E}$ with respect to the scalar product
\[
\big\langle \mathbf{1}_{[0,t]}, \mathbf{1}_{[0,s]} \big\rangle_{\mathfrak{H}} = \mathbb{E}\big[W^H(t)\,W^H(s)\big],
\]
and denote by $W^H(\varphi)$ the image of an element $\varphi \in \mathfrak{H}$ under the map $\mathbf{1}_{[0,t]} \mapsto W^H(t)$. The space $\mathfrak{H}$ is not only a space of functions, but it also contains distributions, see [18].

Let $\mathcal{S}$ denote the space of smooth cylindrical random variables of the form
\[
F = f\big(W^H(\varphi_1), \dots, W^H(\varphi_n)\big),
\]
where $\varphi_i \in \mathfrak{H}$ and $f \in C^\infty_p(\mathbb{R}^n)$, the space of infinitely differentiable functions $f$ such that $f$ and all its derivatives have at most polynomial growth.

Define the derivative operator of $F \in \mathcal{S}$ as the $\mathfrak{H}$-valued random variable
\[
DF = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\big(W^H(\varphi_1), \dots, W^H(\varphi_n)\big)\,\varphi_i.
\]

The derivative operator $D$ is a closable unbounded operator from $L^p(\Omega)$ to $L^p(\Omega; \mathfrak{H})$ for any $p \ge 1$. Define $\mathbb{D}^{1,p}$, $p \ge 1$, as the closure of $\mathcal{S}$ with respect to the norm
\[
\|F\|_{1,p}^p = \mathbb{E}\,|F|^p + \mathbb{E}\,\|DF\|_{\mathfrak{H}}^p.
\]

Denote by $\delta$ the adjoint of the operator $D$. The domain $\operatorname{Dom}\delta$ consists of all $u \in L^2(\Omega; \mathfrak{H})$ such that there exists a constant $c$ with
\[
\big|\mathbb{E}\,\langle DF, u \rangle_{\mathfrak{H}}\big| \le c\,\big(\mathbb{E}\,F^2\big)^{1/2} \quad \text{for all } F \in \mathbb{D}^{1,2}.
\]
For an element $u \in \operatorname{Dom}\delta$, we can define $\delta(u)$ through the duality relationship
\[
\mathbb{E}\big[F\,\delta(u)\big] = \mathbb{E}\,\langle DF, u \rangle_{\mathfrak{H}}, \quad F \in \mathbb{D}^{1,2}.
\]
In [5, Proposition 1.3.1], it is shown that $\mathbb{D}^{1,2}(\mathfrak{H}) \subseteq \operatorname{Dom}\delta$, and, for any simple $\mathfrak{H}$-valued function, $\delta(u)$ can be computed explicitly.

For any $u \in \operatorname{Dom}\delta$, we call $\delta(u)$ the divergence integral and write
\[
\delta(u) = \int_0^T u(t)\,\delta W^H(t).
\]

Let $f$ and $g$ be Hölder continuous functions of order $\alpha$ and $\beta$, respectively, with $\alpha + \beta > 1$. Young [19] proved that the Riemann-Stieltjes integral $\int_0^T f\,dg$ (now known as the Young integral) exists. Accordingly, for $H > 1/2$, we can define the pathwise Young integral for any process $u$ that has Hölder-continuous paths of order $\alpha > 1 - H$ by
\[
\int_0^T u(t)\,dW^H(t) = \lim_{|\pi| \to 0} \sum_{i} u(t_{i+1})\big(W^H(t_{i+1}) - W^H(t_i)\big),
\]
where $\pi = \{0 = t_0 < t_1 < \dots < t_n = T\}$ is any partition such that its mesh $|\pi| \to 0$. In [17, Theorem 12], it is shown that using the right endpoints, as in (3.19), does not change the limit.
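
The Riemann-sum definition can be checked numerically: for $H > 1/2$, the pathwise integral $\int_0^T W^H\,dW^H$ equals $\frac{1}{2}\big(W^H(T)\big)^2$ because $W^H$ has zero quadratic variation. A sketch (Cholesky simulation of the fBm; the grid size is illustrative):

```python
import numpy as np

def fbm(n, H, T, rng):
    """fBm at n grid points in (0, T] via Cholesky of the covariance matrix."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(3)
W = np.concatenate(([0.0], fbm(2000, H=0.7, T=1.0, rng=rng)))
# Young integral with right endpoints: sum W(t_{i+1}) (W(t_{i+1}) - W(t_i)).
young = np.sum(W[1:] * np.diff(W))
# The difference from W(T)^2 / 2 is half the sum of squared increments,
# which vanishes as the mesh goes to 0 when H > 1/2.
print(young, 0.5 * W[-1]**2)
```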

Suppose additionally that $u = (u(t))_{0 \le t \le T}$ is a stochastic process in the space $\mathbb{D}^{1,2}(\mathfrak{H})$, and suppose that
\[
\int_0^T \int_0^T |D_s u(t)|\,|t - s|^{2H-2}\,ds\,dt < \infty.
\]
Then, as in [6, 7], we have the following relation between the two integrals and the Malliavin derivative:
\[
\int_0^T u(t)\,dW^H(t) = \int_0^T u(t)\,\delta W^H(t) + H(2H-1)\int_0^T \int_0^T D_s u(t)\,|t - s|^{2H-2}\,ds\,dt.
\]

3.2. Asymptotic Distribution of the Statistics

Let $\theta = \theta_T$ depend on the observation time $T$. In analogy to the discrete-time case (3.6), we make the assumption that $\theta_T$ depends on the observation time interval $[0, T]$, so that
\[
\lim_{T\to\infty} T\,\theta_T = b
\]
for some real number $b$ (condition (3.22)). The parameter $b$ plays the same role as in (3.6). The particular form of $\theta_T$, for example, $\theta_T = b/T$, will affect the finite-sample distribution of the least-squares estimator but, as long as (3.22) holds, it will not matter in the limit $T \to \infty$.

Equation (1.4) for the process becomes (3.23). Strictly speaking, we should now be writing $X_T(t)$, but, for the sake of simplicity of notation, we will omit the explicit dependence of $X$ on $T$.

Lemma 3.1. As $T \to \infty$, one has the asymptotic distributions (3.24) and (3.25), where the limiting process is defined in (1.4).

Proof. Rewriting (3.23) yields (3.26). Using the self-similarity of $W^H$ and the continuous mapping theorem, (3.24) follows. Moreover, (3.27) and (3.28) hold. Using self-similarity again, it follows that (3.29) and, therefore, (3.25) follows.

In Theorem 2.1, we used the law of the iterated logarithm for self-similar Gaussian processes:
\[
\limsup_{t\to\infty} \frac{|W^H(t)|}{t^H \sqrt{2\log\log t}} = c
\]
with probability one, where $c$ is a suitable constant ($0 < c < \infty$). In the following lemma, we show that a similar result holds after replacing $W^H$ with $X$, and we then use the result to prove the almost sure convergence of $\hat\theta_T$.

Lemma 3.2. As $T \to \infty$, one has (3.30) with probability one and, hence, (3.31) holds with probability one for every $\varepsilon > 0$ and all sufficiently large $T$.

Proof. Fix $\varepsilon > 0$. Then, for $T$ sufficiently large, (3.32) holds. Recalling that $\lim_{T\to\infty}\theta_T = 0$, it follows that the first term vanishes with probability one as $T \to \infty$. The second term, in absolute value, converges to the constant given in (3.30). For the last term, define a sequence as in (3.33). By the reverse of Fatou's lemma, we only need to show that there exists an integrable function that dominates this sequence for all $n$. Since (3.34) holds for all sufficiently large $n$, the result follows.

Proof of Theorem 1.1. The almost sure convergence follows from the law of the iterated logarithm in Lemma 3.2, similarly to the proof of Theorem 2.1. The convergence in distribution (1.12) is a consequence of Lemma 3.1 and the continuous mapping theorem.

Comparing Figures 1 and 2 suggests that the distribution in (1.12) is a better candidate for the finite-sample distribution of $T\hat\theta_T$, for moderate values of $\theta$ and $T$, than the Gaussian distribution in Figure 1.

3.3. Analysis of Estimator (1.13)

Recall that if $H = 1/2$, then both (1.6) and (1.13) become (1.8), which also happens to be the maximum likelihood estimator. If $H > 1/2$, then the maximum likelihood estimator also exists: it is given by (3.37), where (3.38) holds; see [20, 21] for details. If $H > 1/2$, then the MLE is different from $\hat\theta_T$ and $\tilde\theta_T$ and, despite a number of very desirable properties, is not easily computable. In fact, only $\hat\theta_T$ is computable, and $\tilde\theta_T$, originally introduced in [5], is necessary to study $\hat\theta_T$.

The estimator $\tilde\theta_T$, as defined in (1.13), is motivated by (formally) minimizing the least-squares functional
\[
\theta \mapsto \int_0^T \big|\dot{X}(t) - \theta X(t)\big|^2\,dt.
\]
It is shown in [6] that $\tilde\theta_T$ is a strongly consistent estimator of $\theta < 0$, that is, $\lim_{T\to\infty}\tilde\theta_T = \theta$ with probability one. Reference [7] implies strong consistency in the case $\theta > 0$ after a slight modification of the estimator.

A feature of much interest is the asymptotic distribution of $\tilde\theta_T$ as $T \to \infty$. The following is a summary of the results: (3.40) and (3.41), where the limit in (3.41) is the standard Cauchy distribution with probability density function $1/\big(\pi(1+x^2)\big)$, $x \in \mathbb{R}$. Results (3.40) and (3.41) have been shown in [6, 7], respectively. Moreover, for $H = 1/2$ and $\theta < 0$, Bishwal [10] obtained the rate of convergence in (3.40).

In the case $H = 1/2$, estimator (1.6) is also the maximum likelihood estimator of the parameter $\theta$. Denote by $\mathbf{P}^\theta_T$ the measure generated by the Ornstein-Uhlenbeck process $X(t)$, $0 \le t \le T$, in the space of continuous functions $\mathcal{C}([0, T])$. Then, the measures $\mathbf{P}^\theta_T$ and $\mathbf{P}^0_T$ are equivalent, and the likelihood function is given by
\[
\frac{d\mathbf{P}^\theta_T}{d\mathbf{P}^0_T}(X) = \exp\left( \theta \int_0^T X(t)\,dX(t) - \frac{\theta^2}{2} \int_0^T X^2(t)\,dt \right).
\]
Maximizing this density with respect to $\theta$ leads to (1.6). An extension of this result to second-order differential equations is available in [22].

We cannot use $\tilde\theta_T$ in practice for two reasons. First, there is no way to compute the divergence integral given the observations $X(t)$, $0 \le t \le T$. Second, the alternative representation we obtain in this section (see Lemma 3.3) depends on the unknown parameter $\theta$ and, therefore, cannot be used to compute the value of $\tilde\theta_T$. Nonetheless, the finite-sample asymptotic for this estimator is an interesting subject to investigate, and we can also see similarities to $\hat\theta_T$.

In the following, let ${}_1F_1(a; b; z)$ be Kummer's confluent hypergeometric function (see [23, Chapter 13]), which is given by
\[
{}_1F_1(a; b; z) = \sum_{n=0}^{\infty} \frac{(a)_n}{(b)_n}\,\frac{z^n}{n!},
\]
where $(a)_n = a(a+1)\cdots(a+n-1)$, $(a)_0 = 1$, denotes the rising factorial.

It is an analytic function of the complex variables $a$, $b$, and $z$, except for poles at $b = 0, -1, -2, \dots$. From the series representation, we get that ${}_1F_1(a; b; 0) = 1$. If $\Re b > \Re a > 0$, then ${}_1F_1$ can be represented as an integral:
\[
{}_1F_1(a; b; z) = \frac{\Gamma(b)}{\Gamma(a)\,\Gamma(b-a)} \int_0^1 e^{zt}\,t^{a-1}(1-t)^{b-a-1}\,dt.
\]
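
Kummer's function is available in standard numerical libraries, so the quantities entering (1.14) are straightforward to evaluate; for instance, scipy.special.hyp1f1 implements ${}_1F_1$, and both the truncated series and the integral representation above reproduce it (the chosen arguments are illustrative).

```python
import numpy as np
from scipy.special import hyp1f1, gamma

def kummer_series(a, b, z, n_terms=60):
    """Truncated series sum_n (a)_n / (b)_n * z^n / n! for 1F1(a; b; z)."""
    total, term = 1.0, 1.0
    for n in range(n_terms):
        term *= (a + n) / (b + n) * z / (n + 1)
        total += term
    return total

a, b, z = 0.5, 1.5, 2.0
print(hyp1f1(a, b, z), kummer_series(a, b, z))   # the two values agree

# Integral representation, valid for b > a > 0 (midpoint rule on (0, 1)):
t = (np.arange(100_000) + 0.5) / 100_000
integral = np.mean(np.exp(z * t) * t**(a - 1) * (1 - t)**(b - a - 1))
print(gamma(b) / (gamma(a) * gamma(b - a)) * integral)
```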

Lemma 3.3. The estimator $\tilde\theta_T$ defined in (1.13) has the representation (3.47).

Proof. Assume $H > 1/2$. From the relation between the divergence integral and the pathwise Riemann-Stieltjes integral, (3.48) holds, where $D$ denotes the Malliavin derivative. The latter integral simplifies to (3.49). Moreover, (3.50) holds; since the remaining term can be evaluated explicitly, (3.51) follows. If $H = 1/2$, then the result follows by Itô's formula.

Proof of Theorem 1.2. The almost sure convergence follows from the law of the iterated logarithm in Lemma 3.2 and the representation in Lemma 3.3. The asymptotic distribution is a consequence of Lemmas 3.1 and 3.3 and the continuous mapping theorem.

3.4. Hypothesis Testing

Following the original motivation from the introduction, we now consider the problem of testing for the zero root,
\[
H_0: \theta = 0 \quad \text{versus} \quad H_1: \theta \neq 0,
\]
and for stability,
\[
H_0: \theta = 0 \quad \text{versus} \quad H_1: \theta < 0
\]
(tests (3.52) and (3.53), respectively). Using the main result in Theorem 1.1, we can easily construct a statistical decision function for these problems. For an introduction to the statistics of random processes, see [4, 24].

Define the statistical decision function $\psi$, which equals $0$ if $H_0$ is accepted and $1$ if not.

Denote by $\mathbf{P}^\theta_T$ the measure generated by the Ornstein-Uhlenbeck process $X(t)$, $0 \le t \le T$, in the space of continuous functions $\mathcal{C}([0, T])$.

Fix a number $\alpha \in (0, 1)$, the so-called level of significance, and define by $K_\alpha$ the class of tests of asymptotic significance level smaller than $\alpha$, that is, tests satisfying (3.54), where the expectation is the integral taken with respect to the measure $\mathbf{P}^0_T$. Denote by $q_\gamma$ the quantiles of the distribution of $\xi_0$ from Theorem 1.1 (with $b = 0$), which can be obtained by Monte Carlo simulation.

Given the observations $X(t)$, $0 \le t \le T$, the statistical decision function for test (3.53) is given by (3.55): it equals $1$ (rejection of $H_0$) if the statistic $T\hat\theta_T$ falls below the corresponding quantile of $\xi_0$ and $0$ otherwise. Hence, $\psi \in K_\alpha$.
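
A Monte Carlo sketch of the quantile computation and the resulting decision function, written for $H = 1/2$ only (so that $\xi_0$ can be sampled from Brownian paths); the grid sizes, the significance level, and the coding of the decision (1 for rejection of $H_0$) match the convention above but are otherwise illustrative.

```python
import numpy as np

def xi0_sample(n_paths, n_steps, rng):
    """Monte Carlo sample of xi_0 = (W(1)^2 - 1) / (2 int_0^1 W(t)^2 dt)
    for H = 1/2 (standard Brownian motion) on a uniform grid."""
    dt = 1.0 / n_steps
    W = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)
    denom = 2.0 * np.sum(W**2, axis=1) * dt
    return (W[:, -1]**2 - 1.0) / denom

rng = np.random.default_rng(4)
alpha = 0.05
q_lo = np.quantile(xi0_sample(10_000, 500, rng), alpha)   # lower alpha-quantile

def psi_stability(theta_hat, T):
    """Decision for H0: theta = 0 versus H1: theta < 0; reject (return 1)
    when T * theta_hat falls below the alpha-quantile of xi_0."""
    return 1 if T * theta_hat < q_lo else 0

print(q_lo)
```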

Likewise, the statistical decision function for test (3.52) is given by (3.56), with a two-sided critical region determined by the corresponding quantiles of $\xi_0$.

Next, we analyze the power of the zero-root test, that is, of the test of a unit root against a simple alternative:
\[
H_0: \theta = 0 \quad \text{versus} \quad H_1: \theta = \theta_1 \neq 0.
\]
This is test (3.58).

Define by $\beta(\psi)$ the probability of the true decision under $H_1$, that is, (3.59). The value $\beta(\psi)$ is called the power of the test $\psi$.

The asymptotic power of $\psi$ can be computed using the asymptotic distributions in (2.9) and (2.11). Consider the hypothesis test (3.58). If $\theta_1 < 0$ and $T \to \infty$, the asymptotic power of the test is $1$, recalling that $\lim_{T\to\infty}\hat\theta_T = \theta_1$ with probability one. A similar calculation shows that, for $\theta_1 > 0$ and $T \to \infty$, the asymptotic power of the test is $1$ as well.

This is not a very informative result: for every reasonable test, one would expect full power in the large-sample asymptotic. More interesting is the power of the test when the sample is finite. Here, the result of Theorem 1.1 helps. We get, for any $b$, the approximation (3.61), where the law of $\xi_b$ is the asymptotic distribution of $T\hat\theta_T$ as stated in Theorem 1.1. Thus, we get a better approximation of the actual power under finite samples.
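
Continuing the $H = 1/2$ Monte Carlo sketch above, the finite-sample power approximation amounts to evaluating the rejection probability under the law of $\xi_b$; all values below are illustrative.

```python
import numpy as np

def xi_b_sample(b, n_paths, n_steps, rng):
    """Monte Carlo sample of xi_b for H = 1/2: the zero-root statistic
    evaluated on Ornstein-Uhlenbeck paths with parameter b on [0, 1]."""
    dt = 1.0 / n_steps
    X = np.zeros((n_paths, n_steps + 1))
    noise = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    for k in range(n_steps):
        X[:, k + 1] = X[:, k] + b * X[:, k] * dt + noise[:, k]
    denom = 2.0 * np.sum(X[:, :-1]**2, axis=1) * dt
    return (X[:, -1]**2 - 1.0) / denom

rng = np.random.default_rng(5)
q_lo = np.quantile(xi_b_sample(0.0, 10_000, 400, rng), 0.05)   # null quantile
power = np.mean(xi_b_sample(-2.0, 10_000, 400, rng) < q_lo)    # power at b = -2
print(q_lo, power)
```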

Acknowledgments

The author is indebted to Sergey Lototsky who helped with many valuable suggestions and comments to improve the paper. He also would like to thank the reviewer for constructive and helpful suggestions.