Mathematical Problems in Engineering
Volume 2018, Article ID 3132048, 14 pages
https://doi.org/10.1155/2018/3132048
Research Article

The Structure of Autocovariance Matrix of Discrete Time Subfractional Brownian Motion

School of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China

Correspondence should be addressed to Guo Jiang; gjiang@hbnu.edu.cn

Received 7 October 2017; Revised 11 December 2017; Accepted 19 December 2017; Published 20 May 2018

Academic Editor: Marcello Vasta

Copyright © 2018 Guo Jiang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This article explores the structure of the autocovariance matrix of discrete time subfractional Brownian motion and obtains an approximation theorem and a structure theorem for the autocovariance matrix of this stochastic process. Moreover, we give an expression for the unique time varying eigenvalue of the autocovariance matrix in the asymptotic sense and prove that the increments of subfractional Brownian motion are asymptotically stationary. Finally, we illustrate these results with numerical experiments and discuss some possible applications to finite impulse response filters.

1. Introduction

Fractional Brownian motion (fBm) with Hurst parameter H ∈ (0, 1) is a continuous centered Gaussian process starting from zero with covariance R_H(s, t) = (1/2)(s^{2H} + t^{2H} − |t − s|^{2H}), where s, t ≥ 0. When H = 1/2, fBm is a standard Brownian motion (see [1, 2] and their references).

Subfractional Brownian motion (sfBm) was introduced by Bojdecki et al. [3]; it is also a continuous centered Gaussian process starting from zero, with covariance C_H(s, t) = s^{2H} + t^{2H} − (1/2)[(s + t)^{2H} + |t − s|^{2H}], s, t ≥ 0. When H > 1/2, it arises from the occupation time fluctuations of branching particle systems (see [3–5]). Moreover, sfBm has some properties analogous to those of fBm, such as self-similarity, long-range dependence, and Hölder continuity (for details we refer to [6–9]). However, sfBm has nonstationary increments, its increments over nonoverlapping intervals are more weakly correlated, and their covariance decays polynomially at a higher rate than that of fBm; this is why it is called subfractional in [3].
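The covariance formulas above (written here in the standard form from [3]) can be checked numerically. The following sketch, with illustrative function names, verifies the sfBm variance identity and the weaker correlation of distant increments compared with fBm:

```python
import numpy as np

def fbm_cov(s, t, H):
    """Covariance of fractional Brownian motion."""
    return 0.5 * (s**(2*H) + t**(2*H) - abs(t - s)**(2*H))

def sfbm_cov(s, t, H):
    """Covariance of subfractional Brownian motion."""
    return s**(2*H) + t**(2*H) - 0.5 * ((s + t)**(2*H) + abs(t - s)**(2*H))

H = 0.7
# The sfBm variance at time t is (2 - 2^(2H-1)) t^(2H).
t = 3.0
assert np.isclose(sfbm_cov(t, t, H), (2 - 2**(2*H - 1)) * t**(2*H))

def incr_cov(cov, a, b, H):
    # Cov(X(a+1) - X(a), X(b+1) - X(b)) by bilinearity of the covariance.
    return (cov(a + 1, b + 1, H) - cov(a + 1, b, H)
            - cov(a, b + 1, H) + cov(a, b, H))

# Correlations of increments over distant nonoverlapping unit intervals
# are weaker for sfBm than for fBm.
for lag in (10, 100):
    assert abs(incr_cov(sfbm_cov, 0.0, float(lag), H)) < abs(incr_cov(fbm_cov, 0.0, float(lag), H))
```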

It is well known that fBm has been successfully applied in engineering fields such as the characterization of texture in bone radiographs, image processing and segmentation, terrain surface modeling, medical image analysis, and network traffic analysis. However, in contrast to the extensive applications of fBm, there have been fewer systematic investigations of sfBm. The main reason for this is the complexity of the dependence structure of a process that does not have stationary increments. Therefore, it seems meaningful to study the structure of the autocovariance matrix of discrete time sfBm.

In recent years, Tudor [7] studied some new properties of sfBm, such as strong variation, renormalized power variation, the Dirichlet property, and short- and long-memory properties. Moreover, in [8, 9], Tudor characterized the domain of the Wiener integral with respect to sfBm. Yan et al. [10] defined a stochastic integral with respect to sfBm and obtained Itô and Tanaka formulas. Shen and Chen [6] defined the extended divergence integral with respect to sfBm with H ∈ (0, 1/2) and established the corresponding versions of the Itô and Tanaka formulas. Shen and Yan [11] and Rao [12] gave efficient estimators for the drift parameter of models driven by sfBm. Shen and Yan [13] obtained an approximation theorem for sfBm using martingale differences. Liu et al. [14, 15] studied estimators for the self-similarity parameter of sfBm.

On the other hand, the covariance matrix is a very important feature of a stochastic process, and its study has long been a significant and interesting topic in probability and statistics. There is a large literature on the theory and applications of covariance matrices (see [16–23], etc.). In fact, one usually works with discrete time stochastic processes for practical purposes (e.g., [17, 24–26]). In particular, Ayache et al. [24] presented the explicit covariance formula of multifractional Brownian motion and briefly reported some applications to the synthesis problem and the long-term structure. In [17], Gupta and Joshi studied the structure of the covariance matrix of discrete time fBm and gave an application to M-band wavelet systems. Yet no corresponding results exist for sfBm, which has nonstationary increments.

Inspired by these works, we explore the structure of the autocovariance matrix of discrete time sfBm (dtsfBm), and we obtain an approximation theorem and a structure theorem for this autocovariance matrix. At the same time, we give an expression for the unique time varying eigenvalue of the autocovariance matrix in the asymptotic sense and prove that the increments of dtsfBm are asymptotically stationary. Finally, we illustrate these results with numerical experiments and give some possible applications to finite impulse response filters. These results help bridge the gap between the theory and the applications of sfBm.

This paper is organized as follows: In Section 2, we present some preliminaries about sfBm and matrix theory. In Section 3, we give the approximation to the autocovariance matrix of dtsfBm, derive the unique time varying eigenvalue in the asymptotic sense, and prove that the increments of dtsfBm are asymptotically stationary. Then, we establish the structure theorem (Theorem 17) for the autocovariance matrix of dtsfBm. In Section 4, we illustrate the results obtained in Section 3 by numerical experiments and give some possible applications. In Section 5, we conclude the paper.

2. Preliminaries

2.1. Discrete Time Subfractional Brownian Motion

In practice, one often observes discrete time signals, and sfBm can be used to model some random phenomena. This article concerns the statistical properties of dtsfBm and presents some results on the structure of its autocovariance matrix. DtsfBm is defined as S_H[n] = S_H(nT_s), where n ∈ N and T_s is the sampling period. For convenience, one usually lets T_s = 1. Since sfBm is self-similar, S_H(nT_s) and T_s^H S_H(n) have the same distribution. Moreover, the mean value, variance, and autocovariance function of dtsfBm are given as follows: E[S_H[n]] = 0, Var(S_H[n]) = (2 − 2^{2H−1}) n^{2H}, and C(m, n) = m^{2H} + n^{2H} − (1/2)[(m + n)^{2H} + |m − n|^{2H}].
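As a sketch (assuming unit sampling period and the autocovariance function above; the window indexing is an illustrative choice, since the paper's exact vector definition was lost in extraction), the autocovariance matrix of a window of dtsfBm samples can be built and checked for symmetry and positive definiteness:

```python
import numpy as np

def dtsfbm_cov(m, n, H):
    # Autocovariance of dtsfBm with T_s = 1.
    return m**(2*H) + n**(2*H) - 0.5 * ((m + n)**(2*H) + abs(m - n)**(2*H))

def autocov_matrix(n, N, H):
    """N x N autocovariance matrix of the window [S_H[n], ..., S_H[n+N-1]]."""
    idx = np.arange(n, n + N)
    return np.array([[dtsfbm_cov(i, j, H) for j in idx] for i in idx], dtype=float)

C = autocov_matrix(10, 4, 0.7)
assert np.allclose(C, C.T)                # a valid covariance matrix is symmetric
assert np.all(np.linalg.eigvalsh(C) > 0)  # and positive definite for distinct times
```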

Let S be an N-dimensional random vector of dtsfBm. In the following, we mainly study the autocovariance matrix of this vector, whose entries are given by the autocovariance function C(m, n) above evaluated at the corresponding pairs of time indices.

Obviously, sfBm is nonstationary and the autocovariance matrix is a function of the time index n.

2.2. Some Results of the Matrix Theory

In this section, we recall some useful results of the matrix theory (for details see also [27]).

Let A be an n × n matrix, let λ_1, …, λ_s be the distinct eigenvalues of A, and let m(λ) be the minimal polynomial of A, with degree m.

Theorem 1 (see [27, P 314]). If A is an n × n matrix, a given function f is defined on the spectrum of A, and f_{jk} denotes the value of the kth derivative of f at the eigenvalue λ_j, then there exist component matrices Z_{jk} which are independent of f such that f(A) = Σ_{j=1}^{s} Σ_{k=0}^{m_j − 1} f_{jk} Z_{jk}, where m_j is the index of λ_j. Moreover, the matrices Z_{jk} are linearly independent and commute with A and with each other.
In particular, for f(μ) = (λ − μ)^{−1} with λ not in the spectrum of A, the resolvent of A can be expressed as (λI − A)^{−1} = Σ_{j=1}^{s} Σ_{k=0}^{m_j − 1} k! (λ − λ_j)^{−(k+1)} Z_{jk}.

Theorem 2 (see [27, P 315]). The components of the matrix satisfy the following conditions: where, for and ,

Theorem 3 (see [27, P 319]). For and , We now present some relevant results on analytic perturbation of linear operator.

Lemma 4 (see [27, P 392]). Let be an unperturbed matrix with eigenvalues . is an matrix whose elements are analytic functions of complex number in a neighborhood of such that the eigenvalues of depend continuously on , and as for , where and the superscripts do not denote derivatives in this paper; the following results are established:
(i) if is an unrepeated eigenvalue of , then is analytic in a neighborhood of ;
(ii) if has algebraic multiplicity and for , then is an analytic function of in a neighborhood of , where and is one of the branches of the function . Thus, has a power series expansion in .

Remark 5. The perturbed eigenvalues remain real for a Hermitian matrix; the component matrices of associated with perturbed eigenvalues will be Hermitian and analytic in , whatever the multiplicity of the unperturbed eigenvalues may be. The eigenvectors of are analytic and orthonormal throughout a neighborhood of .

Lemma 6 (see [27, P 396]). Let be a matrix that is analytic in on a neighborhood of , and . Let be an unrepeated eigenvalue of with index one; then, for sufficiently small , there is an eigenvalue of such that Moreover, there are right and left eigenvectors and , respectively, associated with for which

Remark 7 (see [27, P 399]). (I) The first-order perturbation coefficients are as follows: where the matrix is defined as (II) In particular, if the perturbation of is linear in , that is, , then the perturbation coefficients for all orders are given by

Lemma 8 (see [27, P 402]). Let be a matrix that is analytic in on a neighborhood of , and . is an eigenvalue of with index one and multiplicity ; then the eigenvalue splits into for sufficiently small . Let be the eigenvalues of and . Then there is a number and a positive integer such that where is an eigenvalue of , and with being a simple closed contour that encircles and no other eigenvalues of or .
Moreover, there is at least one eigenvector which corresponds to each such that with and .

Lemma 9 (see [27, P 403]). If is an eigenvalue of with index one, then there exists such that where is an eigenvalue of with right eigenvector .

3. Main Results

In this section, we study the autocovariance matrix of dtsfBm and give the approximation theorem for this matrix. At the same time, we give an expression for the unique time varying eigenvalue of the autocovariance matrix in the asymptotic sense (i.e., for large enough n). Though the increments of sfBm are nonstationary, we show that the increments of dtsfBm are asymptotically stationary and obtain a structure theorem for the autocovariance matrix.

Theorem 10 (approximation theorem). Let be a random vector of length of dtsfBm (see (7)) and let be autocovariance matrix of the vector ; then can be approximated as for large enough , and where , ,
Moreover, for a positive , if the normalized approximation error is where is the Frobenius norm, then

Proof. By virtue of Taylor's theorem, we have For , , the terms such as on the right-hand side of (32) are negligible for large enough .
Notice that and are the largest and smallest term in the autocovariance , respectively; is strictly increasing in indexes and . The normalized approximation error is defined as (30); that is, where . Therefore, the normalized approximation error decays in .
Given as the upper bound of the error , we can obtain the minimum value of as follows: and it tends to zero as .
Thus, for large enough , the approximate matrix of autocovariance matrix can be shown as follows: where and .
That is, which completes the proof.

Remark 11. The matrix has two different eigenvalues, with index one and multiplicity , and with index one and multiplicity one and the corresponding eigenvector . And, the minimal polynomial of is .

Proposition 12. Let , , , , be the same as in Theorem 10, and let be the unique nonzero eigenvalue of associated with the eigenvector . If is a small perturbation of the matrix , then the perturbed eigenvalue of is denoted as follows: (i) when , (ii) when ,

Proof. In terms of Theorem 10, and then the perturbation of is linear in with . By virtue of (19), the corresponding first-order perturbation in is that is, Notice that the component matrices and of corresponding to and , respectively, can be written as By (20) and (21), the first-order perturbation in the eigenvector is given as follows: where ; then By (22), the second-order perturbation coefficient can be expressed as that is, If , and , as ; if , and , as . Therefore,(1)if ,(2)if , The third- and higher-order perturbations in are sufficiently small for large enough and the quantity of (49) is also sufficiently small; then denote the perturbed as follows: (i)when ,(ii)when ,Substituting , into (51) and (52), the proof is completed.

Proposition 13. Assume the autocovariance matrix where is an orthogonal matrix and is a diagonal matrix; then can be approximated as for , where is characterized as (28) and is a diagonal matrix corresponding to .

Proof. Since approaches for large enough , we have The rest is obvious, so we omit it.

Proposition 14. Suppose that is given as in Proposition 12; then has a unique time varying eigenvalue in the asymptotic sense (i.e., for large enough n) as follows: ()when ,()when ,

Proof. By virtue of Theorem 10 and Proposition 12, it is obvious that is an eigenvalue of . Now we prove that it is the unique time varying eigenvalue in the asymptotic sense (i.e., for large enough n). Firstly, we study the effect of the perturbation on the eigenvalue of which has index one and multiplicity . According to Lemma 8, can be split into for sufficiently small , , and is an eigenvalue of , , as , .
Thus, we express the eigenvalues of as where .
By (40) and (43), Notice that the matrix is a constant matrix independent of ; then is an eigenvalue of constant matrix and is also independent of . On the other hand, the rank of is while is full rank; then which means that there are eigenvalues of index one for Hermitian matrix . According to Lemma 9, there exists a number corresponding to such that and is an eigenvalue of with right eigenvector , where and .
By (21), (40), (43), and Theorem 10, we have and Here, it makes no difference to designate the unique nonzero eigenvalue of as . It is obvious that (for ).
Therefore, we obtain For large enough , we have Since the eigenvalues are independent of time in the asymptotic sense, decreases as increases and it can be considered independent of time; then we simply denote in the above formula by , and the diagonal matrix associated with is expressed as follows: The proof is completed.
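Proposition 14 can be probed numerically. The sketch below (illustrative code under the covariance formula assumed earlier, not the paper's Maple experiments) doubles the time index and checks that only the largest eigenvalue of the autocovariance matrix moves appreciably:

```python
import numpy as np

def dtsfbm_cov(m, n, H):
    return m**(2*H) + n**(2*H) - 0.5 * ((m + n)**(2*H) + abs(m - n)**(2*H))

def autocov_matrix(n, N, H):
    idx = np.arange(n, n + N)
    return np.array([[dtsfbm_cov(i, j, H) for j in idx] for i in idx], dtype=float)

H, N = 0.7, 8
e1 = np.linalg.eigvalsh(autocov_matrix(200, N, H))   # eigenvalues in ascending order
e2 = np.linalg.eigvalsh(autocov_matrix(400, N, H))

top_growth = (e2[-1] - e1[-1]) / e1[-1]
others_drift = np.max(np.abs(e2[:-1] - e1[:-1]))

assert top_growth > 1.0                          # the largest eigenvalue is time varying
assert others_drift < 0.01 * (e2[-1] - e1[-1])   # the rest are essentially static
```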

Remark 15. In view of Theorem 10 and Propositions 13 and 14, we can conclude that the unique eigenvalue of is time varying in the asymptotic sense (i.e., for large enough n).

Proposition 16. Let be the increment of dtsfBm and let be the autocovariance matrix of . Then is an asymptotically stationary process (i.e., for large enough n) and the approximate autocovariance matrix of the vector is simultaneously diagonalizable with in the asymptotic sense (i.e., for large enough n).

Proof. It is easy to see that Considering (6), we have where . By (32), for large enough , According to Theorem 10, we obtain where is the same as in Theorem 10 and Hence, the autocovariance matrix is independent of the (large enough) time index ; that is, is an asymptotically stationary process.
Denote for simplicity. Assume that the diagonalization of is as follows:where is a constant orthogonal matrix independent of . For , it is easy to verify that which means that the matrices and can be simultaneously diagonalized in asymptotic sense.
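The asymptotic stationarity of the increments can likewise be illustrated. In the sketch below (under the same assumed covariance; the comparison with the fractional Gaussian noise covariance is our own illustration, not a statement from the proof), the increment covariance matrices far from the origin no longer depend on the starting index:

```python
import numpy as np

def dtsfbm_cov(m, n, H):
    return m**(2*H) + n**(2*H) - 0.5 * ((m + n)**(2*H) + abs(m - n)**(2*H))

def incr_cov(a, b, H):
    # Covariance of unit increments S_H(a+1) - S_H(a) and S_H(b+1) - S_H(b).
    return (dtsfbm_cov(a + 1, b + 1, H) - dtsfbm_cov(a + 1, b, H)
            - dtsfbm_cov(a, b + 1, H) + dtsfbm_cov(a, b, H))

H, N = 0.7, 6

def incr_matrix(n):
    idx = np.arange(n, n + N)
    return np.array([[incr_cov(i, j, H) for j in idx] for i in idx], dtype=float)

G1, G2 = incr_matrix(2000), incr_matrix(4000)
# Far from the origin the increment covariance matrix no longer depends on n:
assert np.allclose(G1, G2, atol=1e-2)

# It approaches the stationary fractional Gaussian noise covariance:
def fgn_cov(k, H):
    return 0.5 * (abs(k + 1)**(2*H) - 2 * abs(k)**(2*H) + abs(k - 1)**(2*H))

limit = np.array([[fgn_cov(i - j, H) for j in range(N)] for i in range(N)])
assert np.allclose(G1, limit, atol=1e-2)
```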

Now, on the basis of the above discussion, we obtain the second main result.

Theorem 17 (structure theorem). The autocovariance matrix can be diagonalizable as and be approximated as , where

4. Simulation and Illustration

In this section, we simulate the eigenvalues and eigenvectors of the actual autocovariance matrices with different and by numerical experiments. The relative errors of the time varying eigenvalues are given. Moreover, we analyze certain interesting behaviors of the autocovariance matrices (the data of Tables 1–7 were obtained with Maple).

Table 1: The eigenvalues and eigenvectors of actual autocovariance matrices of dtsfBm with and .
Table 2: The eigenvalues and eigenvectors of actual autocovariance matrices of dtsfBm with and .
Table 3: The eigenvalues for and .
Table 4: The time varying eigenvalues and corresponding eigenvectors of actual autocovariance matrices of dtsfBm with , , and .
Table 5: The relative errors between the actual eigenvalues and the approximate values .
Table 6: The values of with , , and .
Table 7: The values of with , , and .

In Section 3, we show that the actual autocovariance matrices can be approximated as . In Tables 1 and 2, we express the constant orthogonal matrices obtained by decomposing the actual (; ; ). At the same time, Tables 1 and 2 provide useful data for those who need without further computation. Table 3 contains all eigenvalues of for , , and . In fact, these tables illustrate that all eigenvalues except for the largest one are time invariant and depend only on . Table 4 shows the time varying eigenvalues and corresponding eigenvectors of the actual autocovariance matrices of dtsfBm with , , and . On the other hand, the largest time varying eigenvalues can be approximated by (55) or (56). Table 5 reveals the relative error () between the actual eigenvalues and the approximate values in (55) or (56). Given as the upper bound of the error , we can obtain the minimum values of , that is, in (34). Tables 6 and 7 list all for , , , and , respectively. In the following, we give some comments on Tables 1–7.

(I) In Tables 1 and 2, the eigenvectors corresponding to the largest time varying eigenvalues always have nearly equal entries which hardly vary for . In fact, this also holds for , as Tables 1, 2, and 4 show. In signal processing, if we consider these eigenvectors as the impulse response of a finite impulse response (FIR) filter, the eigenvectors associated with the largest time varying eigenvalues correspond to a low-pass filter and point in the direction of the average information of the signal.
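The low-pass interpretation in (I) can be sketched as follows (illustrative code under the covariance formula assumed earlier): the top eigenvector is close to the averaging filter, so its gain at DC dominates its gain at the Nyquist frequency.

```python
import numpy as np

def dtsfbm_cov(m, n, H):
    return m**(2*H) + n**(2*H) - 0.5 * ((m + n)**(2*H) + abs(m - n)**(2*H))

H, N, n0 = 0.7, 8, 100
idx = np.arange(n0, n0 + N)
C = np.array([[dtsfbm_cov(i, j, H) for j in idx] for i in idx], dtype=float)

w, V = np.linalg.eigh(C)
h = V[:, -1]                  # eigenvector of the largest eigenvalue
h = h * np.sign(h.sum())      # fix the overall sign

# Nearly equal entries: close to the normalized averaging filter 1/sqrt(N).
assert np.allclose(h, np.ones(N) / np.sqrt(N), atol=0.05)

# As an FIR impulse response it is low-pass: compare the frequency response
# magnitude at DC (z = 1) and at the Nyquist frequency (z = -1).
dc = abs(h.sum())
nyquist = abs((h * (-1.0)**np.arange(N)).sum())
assert dc > 10 * nyquist
```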

(II) Table 5 shows that the approximation of the largest time varying eigenvalue by (55) or (56) is very accurate. Tables 3 and 4 also verify Proposition 14 with numerical data.

(III) Tables 6 and 7 illustrate that the approximation of the autocovariance matrices in Theorems 10 and 17 is very good for . When H = 1/2, sfBm is a standard Brownian motion, the value of is zero, and this implies that the two theorems above hold for all values of the time index .

5. Conclusion

This article analyzes the autocovariance matrix of discrete time subfractional Brownian motion. The approximation and structure theorems for the autocovariance matrix are given, and the unique largest time varying eigenvalue in the asymptotic sense is obtained. The eigenvectors corresponding to the largest time varying eigenvalues always have nearly equal entries which hardly vary for different . With numerical experiments, we inspect and verify the theoretical results. Though the increments of sfBm are nonstationary, we prove that the increments of dtsfBm are asymptotically stationary, which fills a gap concerning Gaussian processes with nonstationary increments. It is believed that these results will be important for further applications of sfBm in engineering, electronics, networks, and so forth. These are also our future works.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by NSF Grants 11471105 and 71561017 of China, NSF Grant 2016CFB526 of Hubei Province, and the Innovation Team of the Educational Department of Hubei Province T201412.

References

  1. F. Biagini, Y. Hu, B. Øksendal, and T. Zhang, Stochastic Calculus for Fractional Brownian Motion and Applications, Springer, London, UK, 2008.
  2. Y. S. Mishura, Stochastic Calculus for Fractional Brownian Motion and Related Processes, vol. 1929 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 2008.
  3. T. Bojdecki, L. G. Gorostiza, and A. Talarczyk, “Sub-fractional Brownian motion and its relation to occupation times,” Statistics & Probability Letters, vol. 69, no. 4, pp. 405–419, 2004.
  4. T. Bojdecki, L. G. Gorostiza, and A. Talarczyk, “Fractional Brownian density process and its self-intersection local time of order k,” Journal of Theoretical Probability, vol. 17, no. 3, pp. 717–739, 2004.
  5. T. Bojdecki, L. G. Gorostiza, and A. Talarczyk, “Limit theorems for occupation time fluctuations of branching systems. I. Long-range dependence,” Stochastic Processes and Their Applications, vol. 116, no. 1, pp. 1–18, 2006.
  6. G. Shen and C. Chen, “Stochastic integration with respect to the sub-fractional Brownian motion with H ∈ (0, 1/2),” Statistics & Probability Letters, vol. 82, no. 2, pp. 240–251, 2012.
  7. C. Tudor, “Some properties of the sub-fractional Brownian motion,” Stochastics, vol. 79, no. 5, pp. 431–448, 2007.
  8. C. Tudor, “Inner product spaces of integrands associated to subfractional Brownian motion,” Statistics & Probability Letters, vol. 78, no. 14, pp. 2201–2209, 2008.
  9. C. Tudor, “On the Wiener integral with respect to a sub-fractional Brownian motion on an interval,” Journal of Mathematical Analysis and Applications, vol. 351, no. 1, pp. 456–468, 2009.
  10. L. Yan, G. Shen, and K. He, “Itô's formula for a sub-fractional Brownian motion,” Communications on Stochastic Analysis, vol. 5, no. 1, pp. 135–159, 2011.
  11. G. Shen and L. Yan, “Estimators for the drift of subfractional Brownian motion,” Communications in Statistics—Theory and Methods, vol. 43, no. 8, pp. 1601–1612, 2014.
  12. B. P. Rao, “Optimal estimation of a signal perturbed by a sub-fractional Brownian motion,” Stochastic Analysis and Applications, vol. 35, no. 3, pp. 1–9, 2017.
  13. G. Shen and L. Yan, “An approximation of subfractional Brownian motion,” Communications in Statistics—Theory and Methods, vol. 43, no. 9, pp. 1873–1886, 2014.
  14. J. Liu, D. Tang, and Y. Cang, “Variations and estimators for self-similarity parameter of sub-fractional Brownian motion via Malliavin calculus,” Communications in Statistics—Theory and Methods, vol. 46, no. 7, pp. 3276–3289, 2017.
  15. J. Liu, L. Yan, Z. Peng, and D. Wang, “Remarks on confidence intervals for self-similarity parameter of a subfractional Brownian motion,” Abstract and Applied Analysis, vol. 2012, Article ID 804942, 14 pages, 2012.
  16. T. J. Fisher, X. Sun, and C. M. Gallagher, “A new test for sphericity of the covariance matrix for high dimensional data,” Journal of Multivariate Analysis, vol. 101, no. 10, pp. 2554–2570, 2010.
  17. A. Gupta and S. Joshi, “Some studies on the structure of covariance matrix of discrete-time fBm,” IEEE Transactions on Signal Processing, vol. 56, no. 10, part 1, pp. 4635–4650, 2008.
  18. T. Ma, L. Jia, and Y. Su, “A new estimator of covariance matrix,” Journal of Statistical Planning and Inference, vol. 142, no. 2, pp. 529–536, 2012.
  19. M. Packalen and T. S. Wirjanto, “Inference about clustering and parametric assumptions in covariance matrix estimation,” Computational Statistics & Data Analysis, vol. 56, no. 1, pp. 1–14, 2012.
  20. Y. Sheena and A. Takemura, “Admissible estimator of the eigenvalues of the variance-covariance matrix for multivariate normal distributions,” Journal of Multivariate Analysis, vol. 102, no. 4, pp. 801–815, 2011.
  21. N. Su and R. Lund, “Multivariate versions of Bartlett's formula,” Journal of Multivariate Analysis, vol. 105, pp. 18–31, 2012.
  22. T.-H. Sun, C.-S. Liu, and F.-C. Tien, “Invariant 2D object recognition using eigenvalues of covariance matrices, re-sampling and autocorrelation,” Expert Systems with Applications, vol. 35, no. 4, pp. 1966–1977, 2008.
  23. W. B. Wu and M. Pourahmadi, “Banding sample autocovariance matrices of stationary processes,” Statistica Sinica, vol. 19, no. 4, pp. 1755–1768, 2009.
  24. A. Ayache, S. Cohen, and J. L. Véhel, “The covariance structure of multifractional Brownian motion, with application to long range dependence,” in Proceedings of the 25th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '00), pp. 3810–3813, June 2000.
  25. E. Perrin, R. Harba, C. Berzin-Joseph, I. Iribarren, and A. Bonami, “nth-order fractional Brownian motion and fractional Gaussian noises,” IEEE Transactions on Signal Processing, vol. 49, no. 5, pp. 1049–1059, 2001.
  26. E. Perrin, R. Harba, R. Jennane, and I. Iribarren, “Fast and exact synthesis for 1-D fractional Brownian motion and fractional Gaussian noises,” IEEE Signal Processing Letters, vol. 9, no. 11, pp. 382–384, 2002.
  27. P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press, New York, NY, USA, 2nd edition, 1985.