#### Abstract

We first present two convergence results about the second-order quadratic variations of the subfractional Brownian motion: the first is a deterministic asymptotic expansion; the second is a central limit theorem. Next we combine these results and concentration inequalities to build confidence intervals for the self-similarity parameter associated with one-dimensional subfractional Brownian motion.

#### 1. Introduction

A fundamental assumption in many statistical and stochastic models is that of independent observations. Moreover, many models that do not make this assumption enjoy the convenient Markov property, according to which the future of the system depends only on its current state, not on its previous ones.

The *long-range dependence* property has become an important aspect of stochastic models in various scientific areas including hydrology, telecommunication, turbulence, image processing, and finance. The best known and most widely used process that exhibits long-range dependence is *fractional Brownian motion* (fBm for short), a suitable generalization of standard Brownian motion. The reader is referred, for example, to Alòs et al. [1] and Nualart [2] for a comprehensive introduction to fractional Brownian motion. On the other hand, many authors have proposed to use more general self-similar Gaussian processes and random fields as stochastic models. Such applications have raised many interesting theoretical questions about self-similar Gaussian processes.

As a generalization of Brownian motion, Bojdecki et al. [3, 4] recently introduced and studied a rather special class of self-similar Gaussian processes which preserves many properties of fractional Brownian motion. This process arises from occupation time fluctuations of branching particle systems with Poisson initial condition and is called the *subfractional Brownian motion*. The so-called subfractional Brownian motion (sub-fBm for short) with index $H\in(0,1)$ is a mean zero Gaussian process $S^H=\{S^H_t,\ t\geq 0\}$ with $S^H_0=0$ and the covariance

$$C_H(s,t)=s^{2H}+t^{2H}-\frac{1}{2}\left[(s+t)^{2H}+|t-s|^{2H}\right]$$

for all $s,t\geq 0$. For $H=1/2$, $S^H$ coincides with the standard Brownian motion. $S^H$ is neither a semimartingale nor a Markov process unless $H=1/2$, so many of the powerful techniques from stochastic analysis are not available when dealing with $S^H$. The sub-fBm has properties analogous to those of fBm (self-similarity, long-range dependence, Hölder paths) and, for all $0\leq s\leq t$, satisfies the following estimates:

$$\left[(2-2^{2H-1})\wedge 1\right](t-s)^{2H}\leq E\left[(S^H_t-S^H_s)^2\right]\leq\left[(2-2^{2H-1})\vee 1\right](t-s)^{2H}.$$

Thus, the Kolmogorov continuity criterion implies that the subfractional Brownian motion is Hölder continuous of order $\gamma$ for any $\gamma<H$. But its increments are not stationary. More works on sub-fBm can be found in Bojdecki et al. [3, 4], Liu and Yan [5, 6], Liu [7], Tudor [8–12], Yan and Shen [13, 14], and others.
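As a quick numerical illustration (not part of the arguments of this note), the covariance of sub-fBm is straightforward to code; the following Python sketch (the function name `sub_fbm_cov` is ours) assumes the standard covariance form $C_H(s,t)=s^{2H}+t^{2H}-\frac12[(s+t)^{2H}+|t-s|^{2H}]$ and checks that for $H=1/2$ it reduces to $\min(s,t)$, the covariance of standard Brownian motion.

```python
def sub_fbm_cov(H: float, s: float, t: float) -> float:
    """Covariance E[S^H_s S^H_t] of sub-fBm, assuming the standard form
    C_H(s, t) = s^{2H} + t^{2H} - ((s + t)^{2H} + |t - s|^{2H}) / 2."""
    return s ** (2 * H) + t ** (2 * H) - ((s + t) ** (2 * H) + abs(t - s) ** (2 * H)) / 2

# For H = 1/2 the formula collapses to min(s, t), i.e., standard Brownian motion:
# s + t - ((s + t) + |t - s|) / 2 = (s + t - |t - s|) / 2 = min(s, t).
assert abs(sub_fbm_cov(0.5, 2.0, 3.0) - 2.0) < 1e-12
```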

The problem of the statistical estimation of the self-similarity parameter is of great importance. The self-similarity parameter characterizes all of the important properties of a self-similar process and consequently describes the behavior of the underlying physical system; properly estimating it is therefore of the utmost importance. Several statistics have been introduced to this end, such as wavelets, variations, variograms, maximum likelihood estimators, and spectral methods. This issue has generated a vast literature; see Chronopoulou et al. [15, 16], Liu [7], Tudor and Viens [17, 18], and the references therein for more details. Recently, Breton et al. [19] first obtained a *nonasymptotic* construction of confidence intervals for the Hurst parameter of fractional Brownian motion. Observe that the knowledge of explicit nonasymptotic confidence intervals may be of great practical value, for instance in order to evaluate the accuracy of a given estimation of $H$ when only a fixed number of observations are available.

Motivated by all these results, in the present note we construct confidence intervals for the self-similarity parameter associated with the so-called subfractional Brownian motion. It is well known that, in contrast to the extensive studies on fractional Brownian motion, there has been little systematic investigation of other self-similar Gaussian processes. The main reasons are the complexity of the dependence structures and the nonavailability of convenient stochastic integral representations for self-similar Gaussian processes which do not have stationary increments. In comparison with fractional Brownian motion, the subfractional Brownian motion has nonstationary increments, and its increments over nonoverlapping intervals are more weakly correlated, their covariance decaying polynomially at a higher rate (for this reason it is called subfractional Brownian motion in Bojdecki et al. [3]). These properties make subfractional Brownian motion a possible candidate for models which involve long-range dependence, self-similarity, and nonstationary increments. Therefore, it seems interesting to construct confidence intervals for the self-similarity parameter of subfractional Brownian motion. Moreover, because of the nonstationary increments, we need more precise estimates to prove our results.

The first aim of this note is to prove a deterministic asymptotic expansion and a central limit theorem for the so-called second-order quadratic variation

$$V_n=\sum_{i=1}^{n-1}\left(S^H_{(i+1)/n}-2S^H_{i/n}+S^H_{(i-1)/n}\right)^2,\quad n\geq 2,\tag{1.3}$$

because the standard quadratic variation does not satisfy a central limit theorem in general. The second aim is to exploit the concentration inequality proved by Nourdin and Viens [20] in order to derive an exact (i.e., nonasymptotic) confidence interval for the self-similarity parameter $H$ of the subfractional Brownian motion $S^H$. Our formula hinges on the class of statistics $V_n$, $n\geq 2$.
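In code, the second-order quadratic variation is the sum of the squared second-order increments $S^H_{(i+1)/n}-2S^H_{i/n}+S^H_{(i-1)/n}$ of a path sampled at the points $i/n$. A minimal Python sketch (our notation), with a deterministic check on the smooth path $t\mapsto t^2$, all of whose second-order increments equal $2/n^2$:

```python
def second_order_quadratic_variation(path):
    """V_n for a path sampled at i/n, i = 0, ..., n: the sum over the
    interior points of the squared second-order increments."""
    n = len(path) - 1
    return sum((path[i + 1] - 2 * path[i] + path[i - 1]) ** 2 for i in range(1, n))

# Deterministic check: for the path t -> t^2 every second-order increment
# equals 2/n^2, hence V_n = (n - 1) * 4 / n^4.
n = 10
path = [(i / n) ** 2 for i in range(n + 1)]
assert abs(second_order_quadratic_variation(path) - (n - 1) * 4 / n ** 4) < 1e-12
```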

This note is organized as follows. In Section 2 we present some preliminaries on the concentration inequality and two convergence results about the quadratic variations of some Gaussian processes. In Section 3 we prove the asymptotic expansion and the central limit theorem for the second-order quadratic variations of the subfractional Brownian motion. In Section 4 we state and prove the main result of this note.

*Notation*. Most of the estimates in this paper contain unspecified constants. An unspecified positive and finite constant will be denoted by $c$ or $C$, which may not be the same in each occurrence. Sometimes we will emphasize the dependence of these constants upon parameters.

#### 2. Preliminaries

Consider a finite centered Gaussian family $X=\{X_i: i\in I\}$ and write $c(i,j)=E[X_iX_j]$. In what follows, we will consider two quadratic forms associated with $X$ and with some real coefficients. The first is obtained by summing up the squares of the elements of $X$ and by subtracting the corresponding variances; the second quadratic form is

The following result, whose proof relies on the Malliavin calculus techniques developed in Nourdin and Peccati [21] and Nourdin and Viens [20], characterizes the corresponding tail behavior.

Theorem 2.1 (Theorem 2.1 in Breton et al. [19]). *Suppose that the above assumptions are satisfied, that is not a.s. zero, and fix and . Assume that a.s. Then, for all , one has*

*in particular,*

On the other hand, to be sure that the second-order quadratic variation converges almost surely to a deterministic limit, we need to normalize this quantity. A normalized convergence result, see (2.5) below, is expected, with the normalization rate related to the regularity of the paths of the subfractional Brownian motion and the limit involving the so-called singularity function of the process, which reflects the nondifferentiability of the covariance on the diagonal. Begyn [22] considered a class of processes for which a more general normalization is needed. Moreover, he obtained a sharper result, namely an asymptotic expansion of the left-hand side of (2.5), and proved a central limit theorem. Because Theorems 1 and 2 in Begyn [22] are crucial in the proofs of Theorems 3.1 and 3.2, it is useful to recall them.

We define the second-order increments of the covariance function of a Gaussian process as follows. First, we recall the asymptotic expansion of the second-order quadratic variation under certain conditions on the covariance function.

Theorem 2.2 (Theorem 1 in Begyn [22]). *Assume that the Gaussian process satisfies the following statements.*

*(1) has a bounded first derivative in .*

*(2) The covariance function of has the following properties:*

*(a) is continuous in ;*

*(b) the derivative exists and is continuous in , and there exist a constant , a real number and a positive slowly varying function such that*

*(c) there exist functions from to , real numbers and a function such that (i) if , then is Lipschitz on ; (ii) is bounded on ; (iii) one has*

*where the symbol "∘" denotes the composition of functions, and if , then and ; else if , then .*

*(3) If , we assume that*

*(4) If is not centered, we make the additional assumption*

*where if , then .*

*Then, for all , one has almost surely*

Second, let us recall the central limit theorem.

Theorem 2.3 (Theorem 2 in Begyn [22]). *Assume that the Gaussian process is centered and satisfies the following statements.*

*(1) The covariance function of is continuous in .*

*(2) Let . We assume that the derivative exists in and that there exist a continuous function , a real number and a positive slowly varying function such that*

*where denotes the interior of .*

*(3) We assume that there exist functions from to , real numbers and a function such that*

*(a) if , then for all , is Lipschitz on ;*

*(b) is -Hölderian on with ;*

*(c) there exists such that ;*

*(d) one has*

*where if , then , and if , then ;*

*(e) there exists a bounded function such that*

*Then one has*

*in distribution as $n$ tends to infinity, where*

*and with, if ,*

*if ,*

#### 3. Asymptotic Expansion and Central Limit Theorem

In the following theorem the almost sure convergence of the second-order quadratic variations is proved.

Theorem 3.1. *For all $H\in(0,1)$, one has almost surely
*

*Proof.* It is clear that the derivative exists on . Moreover, we can check that, for all ,
Therefore the assumption 2(b) in Theorem 2.2 is satisfied with and .

For the assumption 2(c) in Theorem 2.2, standard computations yield
with
and one can check that . Taylor's formula then yields
Therefore, we have
which yields

Therefore, the assumption 2(c) in Theorem 2.2 is fulfilled with
Consequently, we can apply Theorem 2.2 to and obtain the desired result.
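The diagonal behavior exploited in this proof can also be checked numerically. Assuming the standard sub-fBm covariance $C_H(s,t)=s^{2H}+t^{2H}-\frac12[(s+t)^{2H}+|t-s|^{2H}]$, the variance of a second-order increment at step $1/n$ agrees, away from the origin and up to an $O(n^{-4})$ contribution from the nonstationary $(s+t)^{2H}$ term, with the fBm value $(4-2^{2H})n^{-2H}$. The Python sketch below (the helper names are ours) verifies this:

```python
def cov(H, s, t):
    """Sub-fBm covariance C_H(s, t) = s^{2H} + t^{2H} - ((s+t)^{2H} + |t-s|^{2H}) / 2."""
    return s ** (2 * H) + t ** (2 * H) - ((s + t) ** (2 * H) + abs(t - s) ** (2 * H)) / 2

def second_increment_variance(H, n, i):
    """Var(S_{(i+1)/n} - 2 S_{i/n} + S_{(i-1)/n}), computed exactly from the
    covariance via the filter coefficients (1, -2, 1)."""
    pts = [(i - 1) / n, i / n, (i + 1) / n]
    w = [1.0, -2.0, 1.0]
    return sum(w[a] * w[b] * cov(H, pts[a], pts[b]) for a in range(3) for b in range(3))

# In the middle of [0, 1] the fBm value (4 - 2^{2H}) n^{-2H} dominates:
H, n = 0.7, 1000
v = second_increment_variance(H, n, n // 2)
assert abs(v / ((4 - 2 ** (2 * H)) * n ** (-2.0 * H)) - 1) < 1e-3
```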

Next we study the weak convergence.

Theorem 3.2. *One has the following weak convergence
**
where
*

*Proof.* We apply Theorem 2.3 to . As in the proof of Theorem 3.1, we only need to show that assumptions 2 and 3 in Theorem 2.3 are satisfied.

For assumption 2, the previous computation showed that, for all ,
Therefore
This means that the assumption 2 in Theorem 2.3 is satisfied with , and defined by (3.12).

For assumption 3 in Theorem 2.3, the expression (3.7) shows that the assumption 3 in Theorem 2.3 is fulfilled with , and . Moreover, one can check that
Using the same arguments as those used for in the previous proof, we obtain
This shows that the assumption 3(e) in Theorem 2.3 is satisfied with
Consequently, we can apply Theorem 2.3 to to obtain the desired result.

#### 4. Confidence Intervals

Let $S^H$ be a subfractional Brownian motion with unknown Hurst parameter $H$, with known. The following result is the main finding of the present note.

Theorem 4.1. *For $V_n$ defined in (1.3), fix and a real number such that . For , set . Then, with probability at least
**
where is a positive constant depending only on and stands for the positive part function; the unknown quantity $H$ belongs to the following confidence interval
*

*Proof.* The idea used here is essentially due to Breton et al. [19]. Define , where
One can prove by standard computations that the covariance structure of Gaussian family is described by the relation
where
Now let , where is defined in (1.3). It is easy to see that
where
On the other hand
with
Since , Theorem 2.1 yields
Now let us find bounds on . Using
We denote by
where
The second term has been bounded by Breton et al. [19]. They proved that
Now let us bound the first term . We denote by
We can write for any ,
Note that the sign of is the same as that of , and
Hence we can write, for any ,
One can easily check that , if . Moreover,
Then we have, for any ,
Consequently, we get
and the positive constant does not depend on the unknown parameter $H$. Putting this bound in (4.10) yields
Now we can construct the confidence interval for $H$. First observe that . Using the assumption on the one hand and (4.22) on the other hand, we get
where , and the positive constant does not depend on the unknown parameter $H$. This is the desired result.
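Although the exact constants of Theorem 4.1 are what make the interval nonasymptotic, the first-order behavior $V_n\approx(4-2^{2H})\,n^{1-2H}$ suggested by the asymptotics of Section 3 (the same first-order rate as for fractional Brownian motion) already yields a natural point estimator of $H$: solve $n^{2H-1}v=4-2^{2H}$ in $H$ for the observed value $v$ of $V_n$. A hypothetical Python sketch by bisection (the function `estimate_H` is ours, not the estimator of this note; the map below is strictly increasing in $H$ for $v>0$ and $n\geq 2$):

```python
def estimate_H(v, n, lo=0.01, hi=0.99, iters=80):
    """Solve n^{2H-1} * v = 4 - 2^{2H} for H by bisection.

    Both n^{2H-1} * v (for v > 0, n >= 2) and 2^{2H} are strictly
    increasing in H, so the root, when bracketed by (lo, hi), is unique."""
    def g(H):
        return n ** (2 * H - 1) * v - (4 - 2 ** (2 * H))
    a, b = lo, hi
    for _ in range(iters):
        m = (a + b) / 2
        if g(m) < 0:
            a = m
        else:
            b = m
    return (a + b) / 2

# Consistency check: feeding the first-order value of V_n for a known H
# recovers that H.
H0, n = 0.3, 1024
v = (4 - 2 ** (2 * H0)) * n ** (1 - 2 * H0)
assert abs(estimate_H(v, n) - H0) < 1e-6
```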

#### Acknowledgments

The authors wish to thank the academic editor and the anonymous referee, whose remarks and suggestions greatly improved the presentation of this paper. The project is sponsored by NSFC (10871041), NSFC (81001288), NSRC (10023), the Innovation Program of Shanghai Municipal Education Commission (12ZZ063), and the NSF of Jiangsu Educational Committee (11KJD11002).