Abstract

The Hurst exponent and variance are two quantities that often characterize real-life, high-frequency observations. Such real-life signals are generally measured in noisy environments. We develop a multiscale statistical method for the simultaneous estimation of a time-changing Hurst exponent and a variance parameter in a multifractional Brownian motion model in the presence of white noise. The method is based on the asymptotic behavior of the local variation of sample paths, applied at coarse scales of the sample paths. This work provides stable, simultaneous estimators of both parameters when independent white noise is present. We also discuss the accuracy of the simultaneous estimators compared with a few selected methods and the stability of the computations with respect to adapted wavelet filters.

1. Introduction

Fractional Brownian motion (fBm) has been commonly used to characterize a wide range of complex signals in natural phenomena that exhibit self-similarity and long-range dependence since the pioneering work of Mandelbrot and Van Ness [1]. Examples of such complex signals in time are abundant in medicine, economics, and geoscience, to list a few. The fBm model is characterized by two parameters: the regularity level and the variance level of a signal. The regularity attribute, also called the Hurst exponent, expresses the strength of statistical similarity across many different frequencies, and the variance attribute describes the order of magnitude of the energy.

To model path regularity varying with time, multifractional Brownian motion (mBm) has been proposed as a generalization of fractional Brownian motion (fBm). The theory and applications of both fBm and mBm models have attracted the interest of researchers in numerous problems, for example, sea level fluctuations [2], currency exchange rates [3], and network traffic [4–6]. To model mBm, Lévy-Véhel and Peltier [7] proposed a moving average approach, and Benassi et al. [8] introduced a spectral approach. Lim and Muniandy [9, 10] also proposed an mBm model based on the fBm defined by the Riemann-Liouville type of fractional integral. These models represent mBm as a Gaussian process whose covariance function involves the Hurst exponent as a function of time, H(t), and a variance parameter σ. Specifically, a Gaussian process is called mBm with Hurst function H(t) and variance level (scaling factor) σ if its covariance function takes the form in (1), where the time points lie in [0, 1] and H(·) takes values in (0, 1). The process is well defined, or square-integrable, if the function H(·) is Hölderian of a suitable order on [0, 1]. Clearly, the process is not weakly stationary since the covariance function does not depend on the time lag only. From (1), the variance of the process at time t is governed by σ and H(t); in this sense, σ is called the variance level of the process.
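For concreteness, the covariance in (1) can be evaluated numerically. The following is a minimal Python sketch, assuming the commonly used mBm normalization in which the covariance at times t and s equals a normalizing factor D(H(t), H(s)) times sigma^2 (t^(H(t)+H(s)) + s^(H(t)+H(s)) - |t-s|^(H(t)+H(s))); the form of D below, as well as the function names mbm_cov and D, are our assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.special import gamma

def D(h1, h2):
    """Normalizing constant often used in the mBm covariance (assumed form)."""
    num = np.sqrt(gamma(2 * h1 + 1) * gamma(2 * h2 + 1)
                  * np.sin(np.pi * h1) * np.sin(np.pi * h2))
    den = 2 * gamma(h1 + h2 + 1) * np.sin(np.pi * (h1 + h2) / 2)
    return num / den

def mbm_cov(t, s, H, sigma=1.0):
    """Covariance of mBm at times t and s for a Hurst function H and variance level sigma."""
    ht, hs = H(t), H(s)
    hsum = ht + hs
    return sigma**2 * D(ht, hs) * (t**hsum + s**hsum - abs(t - s)**hsum)

# Example: straight-line Hurst function on [0, 1]
H_line = lambda t: 0.3 + 0.4 * t
print(mbm_cov(0.25, 0.75, H_line))
```

Note that for a constant Hurst function equal to 1/2 this assumed form reduces to the covariance of standard Brownian motion, which is a quick sanity check on the normalization.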

The time-changing Hurst exponent H(t) characterizes the path regularity of the process at time t: sample paths near times with small H(t), close to 0, are space filling and highly irregular, while paths with large H(t), close to 1, are very smooth. The variance constant σ determines the energy level of the process. Alternatively, a spectral representation of mBm is available, in which a constant scale (variance) parameter multiplies a stochastic integral driven by standard Brownian motion [7].

Several approaches have been proposed to estimate the time-changing Hurst exponent and the variance from sample paths of mBm signals. Benassi et al. [8] investigated estimation of a continuously differentiable Hurst function without direct estimation of the variance level. A local version of quadratic variations was used in several studies to estimate a constant Hurst exponent [11–13]. Recently, Fhima et al. [14] adopted the increment ratio statistic method for the estimation of the Hurst function only. For an overview of estimating a constant Hurst exponent, the reader is also referred to Beran [15], covering various statistical methods, or Bardet and Bertrand [16], concentrating on wavelet approaches. Estimation of both the Hurst function and the variance parameter has received little attention from the statistics community, as the variance is mostly treated as a nuisance parameter. When a signal is modeled with mBm, the estimation of the Hurst function can be improved by accurate estimation of the variance from covariance structures involving both quantities. For that purpose, the application of a local version of quadratic variations for estimating the Hurst function and the variance in mBm was discussed in Coeurjolly [17], in which the variance was, however, estimated locally in each sample path. Moreover, the presence of noise in mBm signals has not been dealt with in the literature, though real-life signals are commonly measured in noisy environments.

The main objective of this paper is to develop a stable and accurate estimation procedure for the unknown parameters given a sample path of mBm in the presence of independent white noise. Previous approaches by Coeurjolly [17] relied on local sample paths in the absence of white noise, which resulted in variance estimators that are sensitive to the sampled paths. It is widely accepted that noise arises from a variety of sources, such as measurement devices.

In this paper, we assume that mBm signals are contaminated by a moderate amount of noise. We extend the quadratic variations method to estimate the Hurst function and the variance simultaneously for mBm by applying dilated high-pass filters to all sampled paths (all subsample paths from a given sample path) and aggregating all local conditions from the filtering step. The method filters all sampled paths with a dilated filter possessing a sufficient number of vanishing moments, capturing regularity conditions at the associated coarse scales and generating stationary filtered signals. It then calculates empirical moments of the filtered signals and estimates the Hurst function and the variance simultaneously, together with a noise level, in a regression setup specified by the empirical moments.

This paper is organized as follows. Section 2 introduces local variations in an mBm setting and discusses the procedure and justification for the simultaneous estimators of the unknown parameters. Section 3 discusses numerical simulations and computational issues with adapted wavelet filters. The appendix presents proofs of the propositions in the preceding sections.

2. Multiscale Local Variations of Multifractional Brownian Motion

Let us consider a case in which a discretized sample path is observed as in (3), namely, the mBm signal plus independent white noise scaled by a noise level. The Hurst function H(t) is assumed to be a Hölderian function on [0, 1]. In addition, the noise magnitude is assumed to be sufficiently small compared to the variance of the mBm. The covariance function of the observed process is given in (4), where an indicator term accounts for the white noise contribution and the remaining part is the mBm covariance defined above. Noticeably, the estimation of the variance level is nontrivial because of the dependence structure in this covariance function; that is to say, the sample variance of a sample path does not lead to a direct expression of the variance level alone but to an expression mixed with all the unknown parameters. The entries in (4) generate a covariance matrix that depends on the unknown parameters. Owing to symmetry, its distinct entries can be organized into a vector, and model (4) is locally identifiable almost everywhere if the Jacobian matrix of this vector with respect to the unknown parameters has full column rank [18].
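To make the structure of (4) concrete, the sketch below assembles the covariance matrix of the discretized noisy observations as the mBm covariance plus the white-noise variance on the diagonal (the indicator term). It reuses the hypothetical mbm_cov function sketched above; noise_sd is our placeholder for the noise level.

```python
import numpy as np

def noisy_mbm_cov_matrix(n, H, sigma=1.0, noise_sd=0.1):
    """Covariance matrix of the noisy observations at t = 1/n, ..., 1: mBm covariance
    plus the white-noise variance on the diagonal (the indicator term in (4))."""
    t = np.arange(1, n + 1) / n
    C = np.array([[mbm_cov(ti, tj, H, sigma) for tj in t] for ti in t])
    C += noise_sd**2 * np.eye(n)  # independent white noise contributes only on the diagonal
    return C
```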

In order to weaken the dependence among the observations in (3), a differencing filter of finite length and of a given order (its number of vanishing moments) is applied. The filter is defined by its taps, which satisfy the vanishing-moment conditions in (5). Let us also introduce the dilated filter, built from the base filter as defined in (6). We observe that the dilated filter focuses on a resolution at a lower frequency, corresponding to a coarser scale, as the dilation level increases; at the smallest dilation, it captures the finest level of detail. For example, the second-order difference filter has taps (1, -2, 1), and its dilated versions spread these taps over a wider support. Furthermore, we can choose high-pass wavelet filters corresponding to orthogonal wavelets such as Daubechies and Symlet wavelets. A detailed discussion of wavelet filters can be found in Daubechies [19] and Vidakovic [20].
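As an illustration of the dilation step, the sketch below spreads the taps of a base filter by inserting zeros between them, a construction commonly used for dilated filters in the quadratic-variations literature; whether the paper dilates by an integer factor or by powers of two is not recoverable from the text here, so the factor is left as a parameter.

```python
import numpy as np

def dilate_filter(a, m):
    """Dilate filter a by a factor m: keep the taps and insert m-1 zeros between them.
    For m = 1 the original (finest-scale) filter is returned."""
    a = np.asarray(a, dtype=float)
    out = np.zeros(m * (len(a) - 1) + 1)
    out[::m] = a
    return out

print(dilate_filter([1, -2, 1], 1))   # [ 1. -2.  1.]
print(dilate_filter([1, -2, 1], 2))   # [ 1.  0. -2.  0.  1.]
```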

Let the filtered process be the result of applying the dilated filter to the observed path, as in (7). For example, when the filter has taps (1, -2, 1), the filter is of order 2, and at the finest scale the filtered process consists of the second-order differences of the observations. The filtered version of the noise-free mBm is defined similarly, with the mBm in place of the observed path. Filtering with the dilated filter breaks the dependence structure between observations; specifically, the filtered process is stationary due to the vanishing-moment property of the filter. To verify this, we need to introduce a sufficiently small neighborhood covering each time point. Let the index set of a neighborhood of a time point be defined as in (8) for a tuning parameter that controls the neighborhood width. We set this width to be a function of the sample size in such a way that, for a sufficiently large sample size, the width of one neighborhood becomes sufficiently small while the number of observations it contains remains sufficiently large; in addition, it is possible to make the width converge to zero faster than the neighborhood sample size grows. Then we derive the covariance of the filtered process as follows.
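A minimal sketch of the filtering and neighborhood steps in (7) and (8), assuming the filtered value at a position is the inner product of the dilated filter with the observations starting at that position and that a neighborhood collects the indices within a fixed half-width; both conventions are our assumptions for illustration.

```python
import numpy as np

def filter_path(y, a_dilated):
    """Filtered process: inner product of the dilated filter with successive
    windows of the sampled path (valid-mode filtering)."""
    L = len(a_dilated)
    return np.array([np.dot(a_dilated, y[k:k + L]) for k in range(len(y) - L + 1)])

def neighborhood(i, half_width, n):
    """Index set of the neighborhood of position i, truncated at the boundaries."""
    return np.arange(max(0, i - half_width), min(n, i + half_width + 1))
```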

Proposition 1. Let two time points belong to neighborhoods as defined above. Then the covariance of the filtered process in (7) depends on the dilation level, the Hurst function, and the variance level as stated in the displayed formula, with the constant term determined by the filter taps and the Hurst function.

The above proposition states that the filtered process is weakly stationary and Gaussian. In particular, when the two time points coincide, the expression simplifies, as the sample size goes to infinity, to the displayed variance formula. Note that the proposition deals with two pointwise positions in two neighborhoods; the aggregate behavior within each neighborhood is analyzed via the following setup.

Let us define the second empirical moment of the filtered signal as in (12), which represents the average squared energy of the filtered signal in the neighborhood of a time point. We note that this empirical moment is random because the filtered signal is random, and that its expectation equals the second moment of the filtered signal at that time point because the filtered process is weakly stationary; that is, (13) holds. Now, to relate the empirical moment to the unknown parameters more specifically, we define a statistic, called the m-scale local variation, as in (14), where the normalization is by the cardinality of the neighborhood index set. The statistic captures the amount of deviation of the filtered signal from the standard normal distribution near the time point because the filtered signal is standardized by its standard deviation, the square root of its second moment. The definition based on the second order can be extended to higher-order Hermite polynomials in the summation of (14); the second Hermite polynomial is defined by H_2(x) = x^2 - 1. In this paper, we use local variations based on the second Hermite polynomial, which yield the minimum asymptotic variance among such estimators, as shown in Coeurjolly [13] for fBm settings. Next, we connect the m-scale local variation with the empirical moment through the following relationship.
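The second empirical moment (12) and the local variation (14) can be computed as sketched below, using the second Hermite polynomial x^2 - 1 and standardizing by the square root of the theoretical second moment, so that the expected local variation is zero; the exact standardization used in (14) is our assumption.

```python
import numpy as np

def second_empirical_moment(d, idx):
    """Average squared energy of the filtered signal d over the neighborhood idx (cf. (12))."""
    return np.mean(d[idx] ** 2)

def local_variation(d, idx, theo_moment):
    """Local variation (cf. (14)): average of the second Hermite polynomial x^2 - 1
    applied to the filtered values standardized by the square root of the theoretical
    second moment, so that the empirical moment equals theo_moment * (1 + local variation)."""
    z = d[idx] / np.sqrt(theo_moment)
    return np.mean(z ** 2 - 1.0)
```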

Proposition 2. Let the m-scale local variation and the empirical moment be defined by (14) and (12), respectively, with the neighborhoods given as above and a filter of sufficient order. Then the displayed relation between the log of the empirical moment, the log of its expectation, and the local variation holds.

The proposition connects the empirical moment and the log of its expectation through the m-scale local variation. Since the m-scale local variation converges almost surely to zero and its distribution is asymptotically normal [17], the above relationship establishes a regression setup. We also note that a filter of order at least 2 ensures asymptotic normality for all values of the Hurst function. For a filter of order 1, this convergence holds if and only if the Hurst function stays below 3/4.

Next, the relationship between the log of the expectation of the empirical moment and the parameters of interest (the Hurst function, the variance level, and the noise level) follows naturally from Proposition 2 and (13). Thus, for each time point, we obtain the regression model across scales given in (16), which is nonlinear with respect to the Hurst function, the variance level, and the noise level. In particular, when the noise level is taken to be zero, the regression model simplifies, for all scales and time points, to a form that is linear in the log of the scale with an intercept involving the variance level, provided the remainder term is negligible. This simplified regression model possesses a computational advantage, though it ignores the presence of noise.
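In the noiseless case, the simplified regression can be fitted by ordinary least squares of the log empirical moments on the log scales. The sketch below assumes the usual structure of such variation-based regressions, in which the slope equals twice the local Hurst exponent; this slope convention, and the reuse of the filter_path and dilate_filter helpers sketched earlier, are assumptions for illustration rather than the paper's exact displayed model.

```python
import numpy as np

def estimate_H_noiseless(y, base_filter, scales, idx):
    """Noiseless sketch: regress log empirical moments on log scales around one
    time point; under the assumed slope convention, H_hat = slope / 2."""
    log_moment, log_scale = [], []
    for m in scales:
        d = filter_path(y, dilate_filter(base_filter, m))
        valid = idx[idx < len(d)]   # keep neighborhood indices still valid after filtering
        log_moment.append(np.log(np.mean(d[valid] ** 2)))
        log_scale.append(np.log(m))
    slope, _intercept = np.polyfit(log_scale, log_moment, 1)
    return slope / 2.0
```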

When the noise level is nonzero, the following least-squares estimator of the parameters is introduced. The computation of the least-squares estimator is feasible because, based on (16), for fixed values of the variance and noise levels, the computation separates across the time points, so the Hurst value at each time point can be solved for individually. Numerical approaches such as the bisection method can be used for this step, which is nonlinear in the local Hurst value. Since the Hurst value lies in (0, 1), the bisection method achieves a desired precision with a number of iterations of order the base-2 logarithm of the reciprocal of that precision; 10 iterations, for example, result in a precision of about 2^-10, roughly 0.001.
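The per-time-point step that solves for the local Hurst value with the other parameters held fixed is a one-dimensional problem on (0, 1), so a generic bisection applies. The sketch below is a generic bisection routine illustrating the stated precision bound (the bracket halves each iteration, so 10 iterations give a bracket of width about 2^-10); the objective g is a placeholder for the estimating equation implied by (16), not the paper's exact formula.

```python
import math

def bisect_unit_interval(g, tol=1e-3, lo=1e-6, hi=1 - 1e-6):
    """Generic bisection for a sign-changing function g on (0, 1).
    The bracket halves each iteration, so ceil(log2((hi - lo) / tol)) iterations suffice."""
    n_iter = math.ceil(math.log2((hi - lo) / tol))
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy estimating equation (placeholder for the one implied by (16))
print(bisect_unit_interval(lambda h: h**2 - 0.3))   # ~ sqrt(0.3) = 0.5477...
```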

3. Simulations and Comparisons

We present here a simulation study of the performance of the approach suggested in this paper, denoted by S-K-var. The simulation is done with a known true Hurst function, a controlled signal variance, and a controlled signal-to-noise ratio (SNR). Test functions are shown in Figure 1, with a step function for the Hurst exponent in Figure 1(a) and a straight-line function in Figure 1(c); corresponding sample paths are illustrated in Figures 1(b) and 1(d). For the sake of comparison, we chose several popular methods: the local spectral slope, summarized in Gao [21] and denoted by LSS; the variance-uncorrected K-variation, denoted by K-var; and the variance-corrected K-variation of Coeurjolly [13], denoted by K-var-VC. The average mean squared error (AMSE) was used as a performance measure to capture the difference between the true and estimated Hurst functions. To simulate a sample path from an fBm on [0, 1], we used the method of Wood and Chan [22]. A standard mBm can be simulated by generating a Gaussian vector with the covariance matrix in (1). This method is exact in theory and sufficiently fast for a reasonable sample size.
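Because the covariance matrix of a standard mBm is available in closed form from (1), exact Gaussian simulation for moderate sample sizes can also be done by a Cholesky factorization, as sketched below; this is offered as a generic description of exact covariance-based simulation (reusing the hypothetical mbm_cov above), not as the Wood and Chan circulant-embedding algorithm used in the paper.

```python
import numpy as np

def simulate_mbm(n, H, sigma=1.0, rng=None):
    """Draw one exact sample path of mBm at t = 1/n, ..., 1 by Cholesky factorization
    of its covariance matrix (built with the hypothetical mbm_cov above)."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(1, n + 1) / n
    C = np.array([[mbm_cov(ti, tj, H, sigma) for tj in t] for ti in t])
    L = np.linalg.cholesky(C + 1e-12 * np.eye(n))  # small jitter for numerical stability
    return t, L @ rng.standard_normal(n)

t, x = simulate_mbm(512, lambda u: 0.3 + 0.4 * u)
```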

In this section we use the following notation for filters: Diff.i denotes the difference filter of order i, Db.i denotes a Daubechies wavelet filter of order i, and Sym.i denotes a Symlet wavelet filter of order i. We generated 1,000 series for the step-function case and for the straight-line case. A simple difference filter (Diff.2) was used for the straight-line Hurst function, and Db.6 was used for the step function. For the local spectral slope (LSS), the length of the subsignal was set to a value sufficient for numerical stability, and the spectral slopes were calculated between two fixed levels. The size of a neighborhood in (8) is set to 50 for S-K-var, K-var, and K-var-VC.
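For reference, the AMSE used as the performance measure here is simply the squared estimation error averaged over time points and replications; a minimal sketch:

```python
import numpy as np

def amse(H_hat, H_true):
    """Average mean squared error between estimated and true Hurst functions.
    H_hat has shape (replications, time points); H_true has shape (time points,)."""
    return np.mean((H_hat - H_true[np.newaxis, :]) ** 2)
```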

Illustrations of the estimators under no noise are shown in Figure 2. Estimation by K-var-VC follows the true Hurst function most accurately among the tested methods. Estimation results by K-var, which ignores the scale parameter, deviate notably from the true function; we note that the distance between the K-var estimates and the truth relates to the variance level. Estimates by LSS are bumpy because LSS assumes that the subsignals used in its computation follow an fBm, without accounting for the variability of the Hurst function. We also observe that K-var-VC is more unstable than S-K-var.

Regarding the estimation of the Hurst function, the comparison between K-var-VC and S-K-var is shown in Figure 3, in which empirical confidence intervals around the true Hurst function are shown, with the upper panels for no noise and the lower panels for SNR 10. We sampled 1,000 series with the straight-line Hurst function under white noise of SNR 10. Consistently, the estimation results by S-K-var in the right panels are more accurate, and their confidence intervals are sharper, than those by K-var-VC. We also note that the confidence intervals by S-K-var in Figures 3(b) and 3(d) are constant in time since S-K-var employs a global variance constant in regression model (16). A noise level of SNR 10 heavily worsened the estimation results of K-var-VC, while those of S-K-var showed only a slight widening of the confidence intervals. Accurate estimation of the variance level leads to accurate estimation of the Hurst function, as will be demonstrated in the following tests.

We compared S-K-var with K-var-VC and K-var in terms of AMSE in various settings. The method LSS was dropped because of its clearly poor performance, as shown in Figure 2. We varied the variance level over a range of values and considered several SNR levels, including the noiseless (infinite-SNR) case. The number of sample paths was 1,000 in each setting, and the AMSE was computed for each method. The results are shown in Table 1 for the step-function and straight-line cases. We observe that our proposed method S-K-var consistently outperforms the other methods except in a few settings, while overall there was little difference in performance between K-var-VC and K-var. This experimental result is not surprising, since S-K-var accounts for the presence of white noise and includes the variance constant globally.

The effects of the adapted filters are summarized in Figure 4. The experiments were done with the straight-line Hurst function, a fixed variance level, and SNR = 7. We observe that the performance of S-K-var in estimating the Hurst function does not change much with the filter used. However, the variance of the AMSEs tends to increase with the filter length.

4. Conclusion

To conclude, we proposed joint estimators of the time-changing Hurst exponent and the variance coefficient for mBm under white noise. The proposed method is based on filtering sampled paths with dilated high-pass filters to derive regularity conditions at the associated scales. The second empirical moment (the average squared energy) of the filtered signals near a time position is connected to its theoretical expectation and used to establish a regression setup through the asymptotic distribution of multiscale local variation statistics. The effectiveness of the approach was verified through numerical experiments comparing it with several other approaches. Simulation results show that the proposed approach yields more precise and stable estimates of the Hurst exponent and the variance constant under both noiseless and noisy conditions.

Appendices

A. Proof of Proposition 1

For the sake of simplicity, write the covariance of the filtered process in shorthand. By the definition of the filtered process, the covariance becomes a double sum over the filter taps of the mBm covariance evaluated at the corresponding sample points.

By Taylor's expansion and the Hölderian order of the Hurst function, for points in the neighborhood of a fixed time we approximate the local Hurst values by the value at that time, up to a remainder that vanishes with the neighborhood width; the values near the second time point are approximated similarly. In addition, by Taylor's expansion of the power function around the central point, we obtain the leading term of the covariance. Using the fact that the neighborhood width vanishes as the sample size goes to infinity, the covariance can be written as in (A.4). Since the order of the filter is at least 1, the filter taps sum to zero, and (A.4) reduces to the displayed expression. Since the approximation holds uniformly over the neighborhood, replacing the local Hurst values by the value at the central time point completes the proof.
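The cancellation used above rests on the vanishing-moment property of the filter: a filter of order p annihilates polynomials of degree less than p, so the low-order terms of the Taylor expansions drop out. A compact statement of this standard fact, in our own notation (a_q for the filter taps), is:

```latex
% A filter a = (a_0,\dots,a_\ell) of order p satisfies
\sum_{q=0}^{\ell} a_q\, q^{r} = 0, \qquad r = 0,1,\dots,p-1,
% hence, for any polynomial P of degree less than p,
\sum_{q=0}^{\ell} a_q\, P(q) = 0 .
% In particular, p \ge 1 gives \sum_{q} a_q = 0, which removes the constant
% (zeroth-order) Taylor terms in the covariance expansion leading to (A.4).
```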

B. Proof of Proposition 2

Let the two quantities under comparison be defined by the displayed expressions. Then the first can be written, by Taylor's expansion of the logarithm near its expectation, as in (B.2). Similarly, the second is expressed as in (B.3). Using the independence of the mBm and the white noise, the properties of white noise, and the almost sure convergence of the local variation to zero as the sample size grows, we approximate the corresponding terms. Then the difference between the expressions in (B.2) and (B.3) reduces to the displayed quantity, which completes the proof.
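The logarithmic expansion invoked here is the usual first-order Taylor expansion of the logarithm around the expectation, which ties the log empirical moment to the local variation; in our own shorthand (e for the expectation and \widehat{e} for the empirical moment):

```latex
\log \widehat{e}
  \;=\; \log e + \log\!\Bigl(1 + \frac{\widehat{e}-e}{e}\Bigr)
  \;\approx\; \log e + \frac{\widehat{e}-e}{e},
\qquad \text{since } \frac{\widehat{e}-e}{e} \to 0 \ \text{almost surely}.
```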

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by Hanyang University (201300000001465). The authors give special thanks to Brani Vidakovic and Jean-Francois Coeurjolly for their careful comments.