Mathematical Problems in Engineering, Volume 2012 (2012), Article ID 342705, 15 pages. http://dx.doi.org/10.1155/2012/342705
Research Article

## Asymptotic Parameter Estimation for a Class of Linear Stochastic Systems Using Kalman-Bucy Filtering

1School of Information Science and Technology, Donghua University, Shanghai 200051, China
2School of Science, Donghua University, Shanghai 200051, China

Received 21 June 2012; Accepted 21 July 2012

Copyright © 2012 Xiu Kan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Asymptotic parameter estimation is investigated for a class of linear stochastic systems with an unknown parameter θ. Continuous-time Kalman-Bucy linear filtering theory is first used to estimate θ via a Bayesian analysis. Then, some sufficient conditions on the coefficients are given under which the estimator converges asymptotically. Finally, the strong consistency of the estimator is established by means of a comparison theorem.

#### 1. Introduction

Stochastic differential equations (SDEs) are a natural choice to model the time evolution of dynamic systems subject to random influences. Such models have been used with great success in a variety of application areas, including biology, mechanics, economics, geophysics, oceanography, and finance; see, for instance, [1–8]. In reality, it is unavoidable that a stochastic system contains unknown parameters. In 1962, Arato et al. [10] first applied parameter estimation to a geophysical problem. Since then, parameter estimation for SDEs has attracted the close attention of many researchers, and many parameter estimation methods for various advanced models have been studied, such as maximum likelihood estimation (MLE), Bayes estimation (BE), maximum probability estimation (MPE), minimum distance estimation (MDE), minimum contrast estimation (MCE), and M-estimation (ME). See [10–15] for details.

In practice, most stochastic systems cannot be observed completely, but the development of filtering theory provides an effective way to address this problem. Over the past few decades, many effective approaches have been proposed to overcome the difficulties of parameter estimation for stochastic models by filtering methods, which turn out to be helpful both for computability and for asymptotic studies. See [9, 16–26]. In particular, parameter estimation based on filtering observations has been studied, and the strong consistency property has been shown in [27, 28]. In [29], a large deviation inequality has been obtained which implies strong consistency, local asymptotic normality, and the convergence of moments. The asymptotic properties of estimators have been studied for a class of special Gaussian Itô processes with noisy observations in [30]. It should be pointed out that, so far, although the parameter estimation problem has been widely investigated for SDEs, the parameter estimation problem for stock price models has gained much less research attention, probably due to the mathematical complexity.

The stock return volatility process is an important topic in options pricing theory. During the past decades, many SDEs have been proposed to model financial problems; see, for instance, [2, 31–35]. In particular, the so-called Hull-White model was established by Hull and White [34] in 1987 to analyze European call option prices under stochastic volatility. Using a Taylor series expansion, an accurate formula for call options was derived in the case where stock returns and stock volatilities are uncorrelated. In addition, the Hull-White model readily lends itself to the estimation of the underlying stochastic process parameters. Since the Hull-White formula is an effective options pricing model, it has been widely used to model practical stock price problems. Therefore, it is reasonable to study the parameter estimation problem for the Hull-White model with an unknown parameter. Unfortunately, to the best of the authors' knowledge, parameter estimation for the Hull-White model with an unknown parameter based on Kalman-Bucy linear filtering theory has not been fully studied despite its potential in practical applications, and this situation motivates our present investigation.

Summarizing the above discussion, in this paper we aim to investigate the parameter estimation problem for a general class of linear stochastic systems. The main contributions of this paper lie in the following aspects. (1) Kalman-Bucy linear filtering is used to solve the parameter estimation problem. (2) The asymptotic convergence of the estimator is investigated by analyzing a Riccati equation. (3) The strong consistency property is studied by a comparison theorem. The rest of this paper is organized as follows. In Section 2, we formulate the problem and state well-known facts that will be used later. In Section 3, we study the asymptotic convergence of the estimator. In Section 4, the strong consistency of the estimator is established. In Section 5, some conclusions are drawn.

Notation. The notation used here is fairly standard except where otherwise stated. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space and $\mathbb{R}^{n\times m}$ the set of all $n\times m$ real matrices. For a vector $x$, $|x|$ is the Euclidean norm (or $L_2$ norm) with $|x|^2 = x^{\mathsf T}x$. $A^{\mathsf T}$ and $A^{-1}$ represent the transpose and inverse of the matrix $A$. $\det(A)$ denotes the determinant of the matrix $A$. $I$ denotes the identity matrix of compatible dimension. Moreover, let $(\Omega, \mathcal{F}, P)$ be a complete probability space with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., it is right continuous, and $\mathcal{F}_0$ contains all $P$-null sets). $E[x]$ stands for the expectation of the stochastic variable $x$ with respect to the given probability measure $P$. $C([0,\infty))$ denotes the class of all continuous functions on $[0,\infty)$.

#### 2. Problem Statement

The Hull-White model is a continuous-time, real-valued stochastic process of the following form: with initial value a Gaussian random variable, where are deterministic continuous functions of time , and is a Brownian motion independent of the initial value . Obviously, the Hull-White model (2.1) is a general continuous-time linear SDE for , and we assume that the coefficient contains an unknown parameter θ as follows: and we observe the process through the following filtering observations: where are deterministic bounded continuous functions of time , and is a Brownian motion independent of .

Now, our aim is to estimate θ in (2.2) based on the observations (2.3). First, we use Bayesian analysis to deal with the unknown parameter θ: we model θ as a random variable, denoted , which we assume to be normally distributed and independent of . Then, we can rewrite (2.2) as a two-component system for as follows: Similarly, the filtering observation system (2.3) can be expressed as follows: Therefore, we can use Kalman-Bucy linear filtering theory to estimate as follows: and moreover, we also have .
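As a concrete illustration, the augmented-state construction can be simulated numerically. The sketch below uses a hypothetical scalar system dx = (a·x + θ) dt + b dW with observations dy = A·x dt + B dV; the constants a, b, A, B, the discretisation step, and the prior are illustrative assumptions only, not the paper's coefficients. The unknown θ is appended to the state (with zero dynamics) and the Kalman-Bucy equations are Euler-discretised.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar system: dx = (a*x + theta) dt + b dW, observed via
# dy = A*x dt + B dV.  All constants below are illustrative assumptions.
a, b = -1.0, 0.3
A, B = 1.0, 0.5
theta_true = 2.0            # the unknown parameter the filter must recover

dt, T = 1e-3, 20.0

# Augmented state z = (x, theta) with d(theta) = 0, as in the Bayesian setup.
F = np.array([[a, 1.0], [0.0, 0.0]])
G = np.array([[b], [0.0]])
H = np.array([[A, 0.0]])

x = 0.0
z_hat = np.zeros(2)          # prior mean of (x, theta)
S = np.eye(2)                # prior error covariance

for _ in range(int(T / dt)):
    # simulate the true state and the observation increment
    dW, dV = rng.normal(0.0, np.sqrt(dt), size=2)
    x += (a * x + theta_true) * dt + b * dW
    dy = A * x * dt + B * dV

    # Euler-discretised Kalman-Bucy filter: gain K = S H^T / B^2,
    # Riccati update dS = (F S + S F^T + G G^T - S H^T H S / B^2) dt
    K = S @ H.T / B**2
    z_hat = z_hat + F @ z_hat * dt + (K * (dy - H @ z_hat * dt)).ravel()
    S = S + (F @ S + S @ F.T + G @ G.T - S @ H.T @ H @ S / B**2) * dt

print(z_hat[1])   # estimate of theta
print(S[1, 1])    # its error variance, which decreases monotonically
```

Because the parameter block has no diffusion term, its error variance S[1, 1] is nonincreasing, in line with Remark 2.2 below.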

For given Gaussian initial conditions and , it is well known from Kalman-Bucy linear filtering theory that the error covariance matrix satisfies the following Riccati equation: where , and the error covariance matrix is defined as follows: Set , and . From the Riccati equation (2.7), one can get the following system:
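For reference, the matrix Riccati equation of Kalman-Bucy filtering has the following generic form, written here with drift matrix $F(t)$, diffusion matrix $G(t)$, observation matrix $H(t)$, and observation noise covariance $R(t)$; this is the standard textbook form, stated as a stand-in since the paper's displayed equation (2.7) is not reproduced in this copy:

```latex
\dot S(t) = F(t)\,S(t) + S(t)\,F(t)^{\mathsf T} + G(t)\,G(t)^{\mathsf T}
          - S(t)\,H(t)^{\mathsf T} R(t)^{-1} H(t)\,S(t),
\qquad S(0) = \operatorname{Cov}(z_0).
```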

Remark 2.1. Equation (2.9) is a nontrivial nonlinear ordinary differential equation system, and it is well known from the Kalman-Bucy linear filtering theory that such Riccati equations have unique solutions for all .

Remark 2.2. From the equation , we can see that the error variance is monotonically decreasing.

#### 3. Asymptotic Convergence Analysis

Assume that the initial conditions and are independent and have nonvanishing variances, so that and ; thus, is a regular matrix. By the continuity of , exists at least for small times. In order to obtain the rate of convergence of the estimator, should satisfy the regularity conditions. The following theorem establishes the regularity of .

Theorem 3.1. (a1) Assume the initial conditions and for system (2.2) are independent and have nonvanishing variances.
(a2) Let .
Then, the error covariance matrix S(t) is regular for all , and

Proof. By Kalman-Bucy linear filtering theory, we know that for all . Furthermore, it is not difficult to show that (3.1) holds for all .
Since , it follows that exists. Set As implies , one easily obtains It follows readily from (2.9) and (3.3) that Using a similar computation as for (2.9), we can get Condition (a1) shows that , and , which implies that , and . Since the Riccati equations (2.9) have unique solutions on , the nonlinear system (3.5) has a unique solution on . Furthermore, the first equation with initial condition has a unique solution on a maximal time interval , where . Assume that there exists a smallest time such that . By the continuity of , we have for . This contradicts for all . Therefore, for .
As long as for all and are bounded, we have , where is a constant. Hence is bounded below by 0 and above by , which implies that cannot explode in finite time; thus . This shows that system (3.5) has a unique solution on , because the second equation is linear in and can be solved analytically on , and can then be obtained by integration.
Define . Since for all , we have for all , and moreover for all . Finally, assume that there exists such that ; then , so that , which contradicts . Hence, for all .
The proof is complete.

In order to obtain the convergence rate, the Riccati equation must be solved, and we just need the solution of (3.5). Now, we solve the equation when the coefficients are constant.

In the case , we get where , , .

In the other case , the solution shows that for all .

Thus, for each , , , , , the solution obviously satisfies
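A minimal numerical sketch of the constant-coefficient case: when the parameter block is observed directly and contributes no extra dynamics (drift and diffusion of the parameter are zero), the Riccati equation reduces to dS/dt = −(A/B)²S², whose closed-form solution S(t) = S₀/(1 + S₀(A/B)²t) decays like O(1/t). The constants A, B, S₀ below are hypothetical, chosen only to check the Euler-discretised solution against the closed form.

```python
# Constant-coefficient scalar Riccati equation for a directly observed
# constant parameter:
#     dS/dt = -(A/B)**2 * S**2
# A, B, S0 are hypothetical illustration constants.
A, B, S0 = 1.0, 0.5, 1.0
k = (A / B) ** 2

dt, T = 1e-4, 10.0
S = S0
for _ in range(int(T / dt)):
    S += -k * S**2 * dt          # forward Euler step

closed_form = S0 / (1.0 + S0 * k * T)   # exact solution, O(1/t) decay
print(S, closed_form)
```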

The convergence rate of the estimator is given by the following theorem.

Theorem 3.2. Assume that , are all bounded, and there are constants , , , , , , , , , , and , such that
(b1) for all ;
(b2) for all ;
(b3) for all ;
(b4) for all ;
(b5) for all ;
(b6) where .
Then, for arbitrary and , we have where is a positive constant independent of and .

Proof. Let be the solution to , and .
Since for all , by the comparison theorem [2, 36], we obtain that It follows from (3.7) that is bounded, and for any given , there is a such that For , we can obtain from (3.5) and that As holds for all , thus, the first term in (3.12) goes to 0 as . For the second term in (3.12), we have By similar arguments, we obtain that Therefore, for any , there exists such that For all , we can get from (3.5) that By assumption (b6), we get for a sufficiently small . This implies that goes to infinity at least as a linear function. Thus, there exists a constant , such that Hence, for arbitrary and all , it follows from Chebyshevβs inequality that
The proof is complete.
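The Chebyshev step at the end of the proof above has the standard form: writing $S_{\theta\theta}(t) = E|\hat\theta_t - \theta|^2$ for the parameter-block error variance (notation assumed here for illustration), for arbitrary $\epsilon > 0$,

```latex
P\bigl(|\hat\theta_t - \theta| \ge \epsilon\bigr)
  \le \frac{E|\hat\theta_t - \theta|^2}{\epsilon^2}
  = \frac{S_{\theta\theta}(t)}{\epsilon^2},
```

so any bound showing $S_{\theta\theta}(t) \to 0$ immediately yields convergence in probability.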

Remark 3.3. From the proof of Theorem 3.2, we can see that goes to 0 in -sense under the given conditions. In other words, is asymptotically unbiased.

Remark 3.4. It is well known that Kalman-Bucy linear filtering theory remains valid if one replaces the Brownian motions in systems (2.2) and (2.3) by an arbitrary centered orthogonal-increment process with the same covariance structure. Thus, Theorem 3.2 remains valid under this replacement.

#### 4. Strong Consistency

In the last section, we gave conditions for the convergence rate of the estimator. In this section, we use the comparison theorem to prove strong consistency. As is well known, if the parameter is a genuine Gaussian random variable, then the convergence rate has a clear statistical interpretation. First, we pick at random; second, we let system (2.2) run up to time and simultaneously observe it through system (2.3); finally, we compute in the following form.
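The scalar comparison theorem invoked in this section is the standard one: if $f(t,u)$ is locally Lipschitz in $u$ and

```latex
u'(t) \le f\bigl(t, u(t)\bigr), \qquad
v'(t) = f\bigl(t, v(t)\bigr), \qquad
u(0) \le v(0),
```

then $u(t) \le v(t)$ for all $t \ge 0$ on the common interval of existence. This is what allows the Riccati-type quantities below to be bounded by solutions of simpler comparison equations.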

The Kalman-Bucy linear filtering theory gives with initial conditions and . If denotes the matrix fundamental solution of the deterministic linear system then the solution to (4.1) is given by For every particular experiment , the quantity would be the squared estimation error.
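The representation via the fundamental matrix is the usual variation-of-constants formula: for a deterministic linear system $\dot z(t) = A(t)z(t) + f(t)$ with fundamental solution $\Phi(t)$ satisfying $\dot\Phi(t) = A(t)\Phi(t)$, $\Phi(0) = I$ (generic notation, not the paper's specific coefficients),

```latex
z(t) = \Phi(t)\,z(0) + \Phi(t)\int_0^t \Phi(s)^{-1} f(s)\,ds.
```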

But in this paper is a fixed parameter, so we can only choose , and then the statistical mean over different values of has no experimental meaning. The true estimation error is given by , not . It is therefore desirable that the estimator converges to for all fixed values of , a.s. To establish such an assertion, we work with a product space , where denotes the law of , and is the underlying probability space for the Brownian motion . This space is most appropriate because one can make a.s. statements for fixed . Notice that in this representation we have for all . Assuming this underlying probability space, we use the comparison theorem to obtain the following consistency result.

In the proof of Theorem 3.2, we saw that is bounded and is monotonically increasing; moreover, and . Thus, there exist positive constants , and such that and .

Theorem 4.1. Assume that the following conditions are satisfied:
(c1) converges to in ;
(c2) ;
(c3) .
Then, for all fixed , we have

Proof. We will show that (4.4) holds for all , where .
By Kalman-Bucy linear filtering theory, we know with initial conditions .
Since the following linear equations: are equivalent to it follows from (c1)–(c3) that For the linear equations: if we let and be the matrix fundamental solutions of (4.9), we can obtain from the comparison theorem that
It is not difficult to solve (4.9) and obtain where , , , , and .
By assumptions (c2) and (c3), we know that and .
By ODE theory [37, 38] and the above discussion, the solution of (4.1) is given by Using a similar method, we can also obtain the solutions of the following two equations: where and .
The solutions of the two equations take the following form:
For (4.14), we have which yields Since and , it is easy to get For (4.13), we can also get Hence, for (4.1), we obtain the following result: The proof is complete.

Remark 4.2. Under the probability space used in this paper, we can see that Theorem 3.2 is a particular case of Theorem 4.1, obtained by applying Chebyshev's inequality to the result of Theorem 4.1.

Remark 4.3. The strong consistency result in Deck [30] requires that is a martingale, whereas in our result need not be a martingale. Furthermore, when is a martingale, our result is stronger than Deck's, so in that case Deck's conditions can be relaxed.

#### 5. Conclusions

In this paper, we have investigated the parameter estimation problem for a class of linear stochastic systems, namely, Hull-White stochastic differential equations, which are important models in finance. First, a Bayesian viewpoint is adopted to analyze the parameter estimation problem based on Kalman-Bucy linear filtering theory. Second, some sufficient conditions on the coefficients are given under which the estimator converges asymptotically. Finally, the strong consistency of the estimator is established by Kalman-Bucy linear filtering theory and a comparison theorem.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 60974030 and the Science and Technology Project of the Education Department of Fujian Province under Grant JA11211.

#### References

1. B. M. Bibby and M. Sørensen, "Martingale estimation functions for discretely observed diffusion processes," Bernoulli, vol. 1, no. 1-2, pp. 17–39, 1995.
2. C. Corrado and T. Su, "An empirical test of the Hull-White option pricing model," Journal of Futures Markets, vol. 18, no. 4, pp. 363–378, 1998.
3. H. Dong, Z. Wang, and H. Gao, "${H}_{\infty }$ fuzzy control for systems with repeated scalar nonlinearities and random packet losses," IEEE Transactions on Fuzzy Systems, vol. 17, no. 2, pp. 440–450, 2009.
4. J. Hu, Z. Wang, and H. Gao, "A delay fractioning approach to robust sliding mode control for discrete-time stochastic systems with randomly occurring non-linearities," IMA Journal of Mathematical Control and Information, vol. 28, no. 3, pp. 345–363, 2011.
5. J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, "Robust sliding mode control for discrete stochastic systems with mixed time delays, randomly occurring uncertainties, and randomly occurring nonlinearities," IEEE Transactions on Industrial Electronics, vol. 59, no. 7, pp. 3008–3015, 2012.
6. J. Hu, Z. Wang, and H. Gao, "Robust ${H}_{\infty }$ sliding mode control for discrete time-delay systems with stochastic nonlinearities," Journal of the Franklin Institute, vol. 349, no. 4, pp. 1459–1479, 2012.
7. E. Ince, Ordinary Differential Equations, Dover, New York, NY, USA, 1956.
8. C. R. Rao, Linear Statistical Inference and its Applications, John Wiley & Sons, New York, NY, USA, 1973.
9. M. H. A. Davis, Linear Estimation and Stochastic Control, Chapman and Hall, London, UK, 1977.
10. M. Arato, A. Kolmogorov, and Y. Sinai, On Parameter Estimation of a Complex Stationary Gaussian Process, vol. 146, Doklady Academy, USSR, 1962.
11. H. Frydman and P. Lakner, "Maximum likelihood estimation of hidden Markov processes," The Annals of Applied Probability, vol. 13, no. 4, pp. 1296–1312, 2003.
12. L. Galtchouk and V. Konev, "On sequential estimation of parameters in semimartingale regression models with continuous time parameter," The Annals of Statistics, vol. 29, no. 5, pp. 1508–1536, 2001.
13. N. R. Kristensen, H. Madsen, and S. B. Jørgensen, "Parameter estimation in stochastic grey-box models," Automatica, vol. 40, no. 2, pp. 225–237, 2004.
14. B. L. S. Prakasa Rao, Statistical Inference for Diffusion Type Processes, vol. 8, Edward Arnold, London, UK, 1999.
15. I. Shoji and T. Ozaki, "Comparative study of estimation methods for continuous time stochastic processes," Journal of Time Series Analysis, vol. 18, no. 5, pp. 485–506, 1997.
16. H. Dong, Z. Wang, and H. Gao, "Robust ${H}_{\infty }$ filtering for a class of nonlinear networked systems with multiple stochastic communication delays and packet dropouts," IEEE Transactions on Signal Processing, vol. 58, no. 4, pp. 1957–1966, 2010.
17. H. Dong, Z. Wang, and H. Gao, "Distributed filtering for a class of time-varying systems over sensor networks with quantization errors and successive packet dropouts," IEEE Transactions on Signal Processing, vol. 60, no. 6, pp. 3164–3173, 2012.
18. J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, βProbability-guaranteed H finite-horizon filtering for a class of nonlinear time-varying systems with sensor saturations,β Systems and Control Letters, vol. 61, no. 4, pp. 477β484, 2012.
19. J. Hu, Z. Wang, Y. Niu, and L. K. Stergioulas, β${H}_{\infty }$ sliding mode observer design for a class of nonlinear discrete time-delay systems: a delay-fractioning approach,β International Journal of Robust and Nonlinear Control. In press.
20. J. Picard, "Asymptotic study of estimation problems with small observation noise," in Stochastic Modelling and Filtering (Rome, 1984), vol. 91 of Lecture Notes in Control and Information Sciences, Springer, 1987.
21. B. Shen, Z. Wang, and Y. S. Hung, "Distributed ${H}_{\infty }$-consensus filtering in sensor networks with multiple missing measurements: the finite-horizon case," Automatica, vol. 46, no. 10, pp. 1682–1688, 2010.
22. B. Shen, Z. Wang, and X. Liu, "Bounded ${H}_{\infty }$ synchronization and state estimation for discrete time-varying stochastic complex networks over a finite-horizon," IEEE Transactions on Neural Networks, vol. 22, no. 1, pp. 145–157, 2011.
23. B. Shen, Z. Wang, Y. Hung, and G. Chesi, "Distributed ${H}_{\infty }$ filtering for polynomial nonlinear stochastic systems in sensor networks," IEEE Transactions on Industrial Electronics, vol. 58, no. 5, pp. 1971–1979, 2011.
24. Z. Wang, J. Lam, and X. Liu, "Filtering for a class of nonlinear discrete-time stochastic systems with state delays," Journal of Computational and Applied Mathematics, vol. 201, no. 1, pp. 153–163, 2007.
25. Z. Wang, X. Liu, Y. Liu, J. Liang, and V. Vinciotti, "An extended Kalman filtering approach to modelling nonlinear dynamic gene regulatory networks via short gene expression time series," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 6, no. 3, pp. 410–419, 2009.
26. G. Wei and H. Shu, "${H}_{\infty }$ filtering on nonlinear stochastic systems with delay," Chaos, Solitons and Fractals, vol. 33, no. 2, pp. 663–670, 2007.
27. M. R. James and F. Le Gland, "Consistent parameter estimation for partially observed diffusions with small noise," Applied Mathematics and Optimization, vol. 32, no. 1, pp. 47–72, 1995.
28. R. S. Liptser and A. N. Shiryayev, Statistics of Random Processes. I, Springer, New York, NY, USA, 1977.
29. Yu. A. Kutoyants, "Parameter estimation for diffusion type processes of observation," Mathematische Operationsforschung und Statistik Series Statistics, vol. 15, no. 4, pp. 541–551, 1984.
30. T. Deck, "Asymptotic properties of Bayes estimators for Gaussian Itô-processes with noisy observations," Journal of Multivariate Analysis, vol. 97, no. 2, pp. 563–573, 2006.
31. J. P. N. Bishwal, Parameter Estimation in Stochastic Differential Equations, Springer, Berlin, Germany, 2008.
32. A. Brace, D. Gatarek, and M. Musiela, "The market model of interest rate dynamics," Mathematical Finance, vol. 7, no. 2, pp. 127–155, 1997.
33. D. Heath, R. Jarrow, and A. Morton, "Bond pricing and the term structure of interest-rates: a new methodology," Econometrica, vol. 60, no. 1, pp. 77–105, 1992.
34. J. Hull and A. White, "The pricing of options on assets with stochastic volatilities," The Journal of Finance, vol. 42, no. 2, pp. 281–300, 1987.
35. J. Hull and A. White, "The general Hull-White model and super calibration," Financial Analysts Journal, vol. 57, no. 6, pp. 34–43, 2001.
36. G. Kallianpur and R. S. Selukar, "Parameter estimation in linear filtering," Journal of Multivariate Analysis, vol. 39, no. 2, pp. 284–304, 1991.
37. V. I. Arnold, Geometrical Methods in the Theory of Ordinary Differential Equations, Springer, New York, NY, USA, 1983.
38. E. L. Ince, Ordinary Differential Equations, Dover, New York, NY, USA, 1944.