Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 342705, 15 pages
Research Article

Asymptotic Parameter Estimation for a Class of Linear Stochastic Systems Using Kalman-Bucy Filtering

1School of Information Science and Technology, Donghua University, Shanghai 200051, China
2School of Science, Donghua University, Shanghai 200051, China

Received 21 June 2012; Accepted 21 July 2012

Academic Editor: Jun Hu

Copyright © 2012 Xiu Kan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The asymptotic parameter estimation is investigated for a class of linear stochastic systems with an unknown parameter $\theta$: $dX_t=(\theta\alpha(t)+\beta(t)X_t)\,dt+\sigma(t)\,dW_t$. Continuous-time Kalman-Bucy linear filtering theory is first used to estimate the unknown parameter $\theta$ based on Bayesian analysis. Then, sufficient conditions on the coefficients are given under which the estimator converges asymptotically. Finally, the strong consistency of the estimator is established by the comparison theorem.

1. Introduction

Stochastic differential equations (SDEs) are a natural choice for modeling the time evolution of dynamic systems subject to random influences. Such models have been used with great success in a variety of application areas, including biology, mechanics, economics, geophysics, oceanography, and finance; see, for instance, [1–8]. In reality, it is unavoidable that a stochastic system contains unknown parameters. In 1962, Arato et al. [10] first applied parameter estimation to a geophysical problem. Since then, parameter estimation for SDEs has attracted the close attention of many researchers, and many estimation methods for various advanced models have been studied, such as maximum likelihood estimation (MLE), Bayes estimation (BE), maximum probability estimation (MPE), minimum distance estimation (MDE), minimum contrast estimation (MCE), and M-estimation (ME); see [10–15] for details.

In practice, most stochastic systems cannot be observed completely, but the development of filtering theory provides an effective way to deal with this problem. Over the past few decades, many effective approaches have been proposed to overcome the difficulties of parameter estimation in stochastic models by filtering methods, which turn out to be helpful both for computation and for asymptotic analysis; see [9, 16–26]. In particular, parameter estimation based on filtering observations has been studied, and the strong consistency property has been shown in [27, 28]. In [29], a large deviation inequality has been obtained which implies strong consistency, local asymptotic normality, and the convergence of moments. The asymptotic properties of estimators for a class of special Gaussian Itô processes with noisy observations have been studied in [30]. It should be pointed out that, although the parameter estimation problem has been widely investigated for SDEs, the corresponding problem for stock price models has received much less research attention, due probably to the mathematical complexity.

The stock return volatility process is an important topic in option pricing theory. Over the past decades, many SDEs have been proposed to model financial problems; see, for instance, [2, 31–35]. In particular, the so-called Hull-White model was established by Hull and White [34] in 1987 to analyze European call option prices under stochastic volatility. Using a Taylor series expansion, an accurate formula for call options was derived in the case where stock returns and stock volatilities are uncorrelated. In addition, the Hull-White model readily lends itself to the estimation of the parameters of the underlying stochastic process. Since the Hull-White formula is an effective option pricing model, it has been widely used to model practical stock price problems. Therefore, it is reasonable to study the parameter estimation problem for the Hull-White model with an unknown parameter. Unfortunately, to the best of the authors' knowledge, parameter estimation for the Hull-White model based on Kalman-Bucy linear filtering theory has not been fully studied despite its potential in practical applications, and this situation motivates our present investigation.

Summarizing the above discussion, in this paper we aim to investigate the parameter estimation problem for a general class of linear stochastic systems. The main contributions of this paper lie in the following aspects. (1) Kalman-Bucy linear filtering is used to solve the parameter estimation problem. (2) The asymptotic convergence of the estimator is investigated by analyzing the Riccati equation. (3) The strong consistency property is studied by the comparison theorem. The rest of this paper is organized as follows. In Section 2, we formulate the problem and state well-known facts that will be used later. In Section 3, we study the asymptotic convergence of the estimator. In Section 4, the strong consistency of the estimator is established. In Section 5, some conclusions are drawn.

Notation. The notation used here is fairly standard except where otherwise stated. $\mathbb{R}=(-\infty,+\infty)$ and $\mathbb{R}_+=[0,+\infty)$. For a vector $x$, $|x|$ is the Euclidean norm (or $L_2$ norm), that is, $|x|=\sqrt{x\cdot x}$. $M^T$ and $M^{-1}$ represent the transpose and inverse of a matrix $M$, and $\det(M)$ denotes its determinant. $I$ denotes the identity matrix of compatible dimension. Moreover, let $(\Omega,\mathcal{F},\mathbf{P})$ be a complete probability space with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., it is right continuous, and $\mathcal{F}_0$ contains all $\mathbf{P}$-null sets). $\mathbb{E}[x]$ stands for the expectation of the random variable $x$ with respect to the given probability measure $\mathbf{P}$. $C(\mathbb{R}_+)$ denotes the class of all continuous functions on $t\in\mathbb{R}_+$.

2. Problem Statement

The Hull-White model is a continuous-time real stochastic process of the form
$$X_t = X_0 + \int_0^t \big(\alpha(s)+\beta(s)X_s\big)\,ds + \int_0^t \sigma(s)\,dW_s, \tag{2.1}$$
with initial value $X_0$ a Gaussian random variable, where $\alpha,\beta,\sigma$ are deterministic continuous functions of time $t$, and $W_t$ is a Brownian motion independent of the initial value $X_0$. Obviously, the Hull-White model (2.1) is a general continuous-time linear SDE for $X_t$. We assume that the coefficient $\alpha$ carries an unknown parameter $\theta\in\mathbb{R}$ as follows:
$$dX_t = \big(\theta\alpha(t)+\beta(t)X_t\big)\,dt + \sigma(t)\,dW_t, \quad t\ge 0, \tag{2.2}$$
and we observe the process $X_t$ through the following filtering observations:
$$dY_t = \mu(t)X_t\,dt + \gamma(t)\,dV_t, \quad t\ge 0, \tag{2.3}$$
where $\mu,\gamma$ are deterministic bounded continuous functions of time $t$, and $V_t$ is a Brownian motion independent of $W_t$.
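As a concrete illustration, the signal-observation pair (2.2)-(2.3) can be simulated with the Euler-Maruyama scheme. This is only a minimal sketch; the parameter value and the constant coefficient functions below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Euler-Maruyama simulation of the signal (2.2) and the observation (2.3).
# theta and the coefficient functions are illustrative choices.
rng = np.random.default_rng(0)

theta = 1.5                       # unknown parameter (known only to the simulator)
alpha = lambda t: 1.0             # deterministic continuous coefficients
beta  = lambda t: -0.5
sigma = lambda t: 0.3
mu    = lambda t: 1.0
gamma = lambda t: 0.2

T, n = 10.0, 10_000
dt = T / n
X = np.empty(n + 1)               # signal path X_t
Y = np.empty(n + 1)               # observation path Y_t
X[0], Y[0] = 0.0, 0.0
for k in range(n):
    t = k * dt
    dW = rng.normal(0.0, np.sqrt(dt))   # independent Brownian increments
    dV = rng.normal(0.0, np.sqrt(dt))
    X[k + 1] = X[k] + (theta * alpha(t) + beta(t) * X[k]) * dt + sigma(t) * dW
    Y[k + 1] = Y[k] + mu(t) * X[k] * dt + gamma(t) * dV
```

The array `Y` plays the role of the observation path that the filter in the following sections is allowed to see, while `X` stays hidden.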

Now, our aim is to estimate $\theta$ in (2.2) based on the observation (2.3). First, we use Bayesian analysis to deal with the unknown parameter $\theta$: we model $\theta$ as a random variable, denoted $\theta_0$, which we assume to be normally distributed and independent of $\sigma(W_t,V_t,\,t\ge 0)$. Then we can rewrite (2.2) as a two-component system for $(X_t,\theta_t)$ as follows:
$$d\begin{pmatrix} X_t \\ \theta_t \end{pmatrix} = \begin{pmatrix} \beta(t) & \alpha(t) \\ 0 & 0 \end{pmatrix}\begin{pmatrix} X_t \\ \theta_t \end{pmatrix}dt + \begin{pmatrix} \sigma(t) \\ 0 \end{pmatrix}dW_t, \quad t\ge 0. \tag{2.4}$$
Similarly, the filtering observation system (2.3) can be expressed as
$$dY_t = \begin{pmatrix} \mu(t) & 0 \end{pmatrix}\begin{pmatrix} X_t \\ \theta_t \end{pmatrix}dt + \gamma(t)\,dV_t, \quad t\ge 0. \tag{2.5}$$
Therefore, we can use Kalman-Bucy linear filtering theory to estimate $\theta_0$ as
$$\hat{\theta}_t = \mathbb{E}\big[\theta_0 \mid Y_s,\, 0\le s\le t\big], \tag{2.6}$$
and, moreover, we also have $\hat{X}_t = \mathbb{E}[X_t \mid Y_s,\, 0\le s\le t]$.

For given Gaussian initial conditions $X_0$ and $\theta_0$, it is well known from Kalman-Bucy linear filtering theory that the error covariance matrix $S(t)$ satisfies the following Riccati equation:
$$\dot{S}(t) = AS + SA^T - SC^T\big(DD^T\big)^{-1}CS + BB^T, \tag{2.7}$$
where
$$A = \begin{pmatrix} \beta(t) & \alpha(t) \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} \sigma(t) \\ 0 \end{pmatrix}, \quad C = \begin{pmatrix} \mu(t) & 0 \end{pmatrix}, \quad D = \gamma(t),$$
and the error covariance matrix $S(t)$ is defined by
$$S(t) = \begin{pmatrix} S_{xx}(t) & S_{x\theta}(t) \\ S_{\theta x}(t) & S_{\theta\theta}(t) \end{pmatrix} = \begin{pmatrix} \mathbb{E}\big[(X_t-\hat{X}_t)^2\big] & \mathbb{E}\big[(X_t-\hat{X}_t)(\theta_0-\hat{\theta}_t)\big] \\ \mathbb{E}\big[(X_t-\hat{X}_t)(\theta_0-\hat{\theta}_t)\big] & \mathbb{E}\big[(\theta_0-\hat{\theta}_t)^2\big] \end{pmatrix}. \tag{2.8}$$
Set $a=S_{xx}$, $b=S_{x\theta}=S_{\theta x}$, and $c=S_{\theta\theta}$. From the Riccati equation (2.7), one can obtain the following system:
$$\dot{a} = 2\beta a + 2\alpha b + \sigma^2 - \frac{\mu^2}{\gamma^2}a^2, \qquad \dot{b} = \beta b + \alpha c - \frac{\mu^2}{\gamma^2}ab, \qquad \dot{c} = -\frac{\mu^2}{\gamma^2}b^2. \tag{2.9}$$
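The scalar system (2.9) is straightforward to integrate numerically. The sketch below uses a forward-Euler scheme with illustrative constant coefficients (an assumption made for the demonstration) and checks the monotone decrease of $c$ along the way.

```python
import numpy as np

# Forward-Euler integration of the Riccati system (2.9) for
# a = S_xx, b = S_xtheta, c = S_thetatheta.  Constant coefficients are
# an illustrative assumption; b(0) = 0 encodes independent X_0, theta_0.
alpha, beta, sigma, mu, gamma = 1.0, -0.5, 0.3, 1.0, 0.2
k2 = mu**2 / gamma**2

dt, n = 1e-4, 200_000             # integrate on [0, 20]
a, b, c = 1.0, 0.0, 1.0
cs = [c]
for _ in range(n):
    da = 2*beta*a + 2*alpha*b + sigma**2 - k2*a**2
    db = beta*b + alpha*c - k2*a*b
    dc = -k2*b**2
    a, b, c = a + da*dt, b + db*dt, c + dc*dt
    cs.append(c)

# c(t) = E[(theta_0 - hat{theta}_t)^2] never increases, since dc <= 0
assert all(c_next <= c_prev + 1e-12 for c_prev, c_next in zip(cs, cs[1:]))
```

The assertion reflects the structure of the third equation in (2.9): its right-hand side is nonpositive, so the error variance of the parameter estimate can only shrink.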

Remark 2.1. Equation (2.9) is a nontrivial nonlinear system of ordinary differential equations, and it is well known from Kalman-Bucy linear filtering theory that such Riccati equations have unique solutions for all $t\in\mathbb{R}_+$.

Remark 2.2. From the equation $\dot{c}=-(\mu^2/\gamma^2)b^2$, we can see that the error variance $\mathbb{E}[(\theta_0-\hat{\theta}_t)^2]$ is monotonically decreasing.

3. Asymptotic Convergence Analysis

Assume that the initial conditions $X_0$ and $\theta_0$ are independent and have nonvanishing variances, so that $b(0)=0$ and $a(0)=\mathbb{E}[X_0^2]>0$, $c(0)=\mathbb{E}[\theta_0^2]>0$; thus $S(0)$ is a regular matrix. By the continuity of $S(t)$, $S^{-1}(t)$ exists at least for small times. In order to obtain the rate of convergence of the estimator, $S(t)$ should satisfy regularity conditions. The following theorem establishes the regularity of $S(t)$.

Theorem 3.1. Assume that
 (a1) the initial conditions $X_0$ and $\theta_0$ for system (2.2) are independent and have nonvanishing variances;
 (a2) $\alpha(t),\beta(t),\sigma(t),\mu(t),\gamma(t)\in C(\mathbb{R}_+)$.
Then the error covariance matrix $S(t)$ satisfies $\det(S(t))>0$ for all $t\ge 0$, and
$$S_{xx}(t)>0, \quad S_{\theta\theta}(t)>0 \quad \forall t\ge 0. \tag{3.1}$$

Proof. By Kalman-Bucy linear filtering theory, we know that $\det(S(t))>0$ for all $t\ge 0$; it remains to show that (3.1) holds for all $t\ge 0$.
Since $\det(S(t))>0$, $S^{-1}(t)$ exists. Set
$$R(t) = S^{-1}(t) = \begin{pmatrix} e(t) & f(t) \\ f(t) & g(t) \end{pmatrix}. \tag{3.2}$$
Just as, in the scalar case, $R=1/S$ implies $\dot{R}=-(1/S^2)\dot{S}$, differentiating $R(t)S(t)=I$ yields
$$\dot{R} = -R\dot{S}R. \tag{3.3}$$
It follows readily from (2.7) and (3.3) that
$$\dot{R} = -RA - A^TR + C^T\big(DD^T\big)^{-1}C - RBB^TR. \tag{3.4}$$
By a computation similar to that leading to (2.9), we get
$$\dot{e} = \frac{\mu^2}{\gamma^2} - 2\beta e - \sigma^2 e^2, \qquad \dot{f} = -\alpha e - \beta f - \sigma^2 ef, \qquad \dot{g} = -2\alpha f - \sigma^2 f^2. \tag{3.5}$$
Condition (a1) shows that $a(0)>0$, $b(0)=0$, and $c(0)>0$, which implies that $e(0)>0$, $f(0)=0$, and $g(0)>0$. Since the Riccati equations (2.9) have unique solutions on $\mathbb{R}_+$, the nonlinear system (3.5) has a unique solution on $\mathbb{R}_+$. Furthermore, the first equation $\dot{e}=\mu^2/\gamma^2-2\beta e-\sigma^2e^2$ with initial condition $e(0)>0$ has a unique solution on a maximal time interval $[0,T)$, where $T\in\mathbb{R}_+$. Assume that there exists a smallest time $\bar{t}\in(0,T)$ such that $e(\bar{t})=0$. By the continuity of $e(t)$, we have $e(t)>0$ for $0\le t<\bar{t}$. Thus,
$$\dot{e}(\bar{t}) = \lim_{\Delta t\to 0}\frac{e(\bar{t})-e(\bar{t}-\Delta t)}{\Delta t} \le 0, \tag{3.6}$$
which contradicts $\dot{e}(\bar{t})=\mu^2(\bar{t})/\gamma^2(\bar{t})-2\beta(\bar{t})e(\bar{t})-\sigma^2(\bar{t})e^2(\bar{t})=\mu^2(\bar{t})/\gamma^2(\bar{t})>0$. Therefore, $e(t)>0$ for $t\in[0,T)$.
Since $\dot{e}(t)=\mu^2(t)/\gamma^2(t)-2\beta(t)e(t)-\sigma^2(t)e^2(t)\le\mu^2(t)/\gamma^2(t)$ for all $t\in[0,T)$ and $\mu(t),\gamma(t)$ are bounded, we have $\dot{e}(t)\le C$ for some constant $C$. Hence $e(t)$ is bounded from below by $0$ and from above by $e(0)+Ct$, which implies that $e(t)$ cannot explode in finite time; thus $T=+\infty$. This shows that system (3.5) has a unique solution on $\mathbb{R}_+$, because the second equation is a linear equation for $f$, which can be solved analytically on $\mathbb{R}_+$, and $g$ can then be obtained by integration.
Define $h(t):=\det(R(t))=e(t)g(t)-f^2(t)$. Since $\det(S(t))>0$ for all $t\ge 0$, we have $h(t)=\det(R(t))=1/\det(S(t))>0$ for all $t\ge 0$; moreover, $S_{\theta\theta}=e/h>0$ for all $t\ge 0$. Finally, assume that there exists $t_0$ such that $S_{xx}(t_0)=0$; then $g(t_0)=S_{xx}(t_0)h(t_0)=0$, so that $h(t_0)=e(t_0)g(t_0)-f^2(t_0)\le 0$, which contradicts $h(t_0)>0$. Hence, $S_{xx}>0$ for all $t\ge 0$.
The proof is complete.
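The relation between the two systems used in the proof can be checked numerically: integrating (2.9) for $S$ and (3.5) for $R$ from matching initial data, the product $R(t)S(t)$ should stay close to the identity. The constant coefficients below are an illustrative assumption.

```python
import numpy as np

# Side-by-side integration of (2.9) for S = [[a, b], [b, c]] and of (3.5)
# for R = [[e, f], [f, g]]: starting from R(0) = S(0)^{-1}, the product
# R(t) S(t) should remain close to the identity matrix.
alpha, beta, sigma, mu, gamma = 1.0, -0.5, 0.3, 1.0, 0.2
k2 = mu**2 / gamma**2

dt, n = 1e-5, 100_000             # integrate on [0, 1]
a, b, c = 1.0, 0.0, 1.0           # S(0) = I, regular since b(0) = 0
e, f, g = 1.0, 0.0, 1.0           # R(0) = S(0)^{-1} = I
for _ in range(n):
    da = 2*beta*a + 2*alpha*b + sigma**2 - k2*a**2
    db = beta*b + alpha*c - k2*a*b
    dc = -k2*b**2
    de = k2 - 2*beta*e - sigma**2*e**2
    df = -alpha*e - beta*f - sigma**2*e*f
    dg = -2*alpha*f - sigma**2*f**2
    a, b, c = a + da*dt, b + db*dt, c + dc*dt
    e, f, g = e + de*dt, f + df*dt, g + dg*dt

S = np.array([[a, b], [b, c]])
R = np.array([[e, f], [f, g]])
```

Analytically the identity $R(t)S(t)=I$ is exact, since (3.5) is derived from $\dot R=-R\dot S R$; any deviation observed here is pure discretization error of the Euler steps.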

In order to obtain the convergence rate, the Riccati equation must be solved; in fact, we only need the solution of (3.5). We now solve the equation $\dot{e}=\mu^2/\gamma^2-2\beta e-\sigma^2e^2$ in the case where $\beta,\sigma,\mu,\gamma$ are constants.

In the case $e(0)\ne l_2$, we get
$$e(t) = \frac{l_1 + l_2 L\exp\big[(l_1+l_2)\sigma^2 t\big]}{L\exp\big[(l_1+l_2)\sigma^2 t\big]-1}, \tag{3.7}$$
where $L=(e(0)+l_1)/(e(0)-l_2)$, $l_1=\Big(2\beta/\sigma^2+\sqrt{4\beta^2/\sigma^4+4\mu^2/(\sigma^2\gamma^2)}\Big)\big/2$, and $l_2=\Big(-2\beta/\sigma^2+\sqrt{4\beta^2/\sigma^4+4\mu^2/(\sigma^2\gamma^2)}\Big)\big/2$.

In the other case, $e(0)=l_2$, the solution is $e(t)=l_2$ for all $t\ge 0$.

Thus, for each $\alpha>0$, $\beta>0$, $\sigma>0$, $\mu>0$, $\gamma>0$, the solution $e(t)$ obviously satisfies
$$e(t)\longrightarrow l_2 \quad \text{as } t\longrightarrow+\infty. \tag{3.8}$$
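The closed form (3.7) and the limit (3.8) can be verified numerically against a direct integration of the scalar Riccati equation; the constants below are illustrative choices, not values from the paper.

```python
import numpy as np

# Check the closed-form Riccati solution (3.7) for
#   e' = mu^2/gamma^2 - 2*beta*e - sigma^2*e^2
# against forward-Euler integration, and check the limit e(t) -> l2 in (3.8).
beta, sigma, mu, gamma = 0.5, 1.0, 1.0, 1.0

disc = np.sqrt(4*beta**2/sigma**4 + 4*mu**2/(sigma**2*gamma**2))
l1 = ( 2*beta/sigma**2 + disc) / 2
l2 = (-2*beta/sigma**2 + disc) / 2

e0 = 2.0                                   # case e(0) != l2
L = (e0 + l1) / (e0 - l2)

def e_closed(t):
    """Closed-form solution (3.7)."""
    E = L * np.exp((l1 + l2) * sigma**2 * t)
    return (l1 + l2 * E) / (E - 1)

# forward-Euler integration of the same ODE on [0, 5]
dt, n = 1e-5, 500_000
e = e0
for _ in range(n):
    e += (mu**2/gamma**2 - 2*beta*e - sigma**2*e**2) * dt

assert abs(e - e_closed(5.0)) < 1e-3       # closed form matches the ODE
assert abs(e_closed(50.0) - l2) < 1e-6     # and converges to l2
```

Note that $e_{\text{closed}}(0)=e_0$ by construction of $L$, and the sign pattern $-\sigma^2(e+l_1)(e-l_2)$ of the right-hand side makes $l_2$ the attracting root.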

The convergence rate of the estimator is given by the following theorem.

Theorem 3.2. Assume that $\alpha,\beta,\sigma,\mu,\gamma\in C(\mathbb{R}_+)$ are all bounded, and that there are constants $\alpha_1$, $\alpha_2$, $\beta_1$, $\beta_2$, $\sigma_1$, $\sigma_2$, $\mu_1$, $\mu_2$, $\gamma_1$, $\gamma_2$, and $t_0$ such that
 (b1) $0<\alpha_1\le|\alpha(t)|\le\alpha_2$ for all $t\ge t_0$;
 (b2) $0<\beta_1\le|\beta(t)|\le\beta_2$ for all $t\ge t_0$;
 (b3) $0<\sigma_1\le|\sigma(t)|\le\sigma_2$ for all $t\ge t_0$;
 (b4) $0<\mu_2\le|\mu(t)|\le\mu_1$ for all $t\ge t_0$;
 (b5) $0<\gamma_1\le|\gamma(t)|\le\gamma_2$ for all $t\ge t_0$;
 (b6) $2\alpha_1\big(\beta_1+\sigma_1^2l_{22}\big)>\sigma_2^2l_{21}$, where $l_{2i}=\Big(-2\beta_i/\sigma_i^2+\sqrt{4\beta_i^2/\sigma_i^4+4\mu_i^2/(\sigma_i^2\gamma_i^2)}\Big)\big/2$, $i=1,2$.
Then, for arbitrary $\epsilon>0$ and $T>0$, we have
$$P\Big(\big|\theta_0-\hat{\theta}_T\big|>\epsilon\Big) \le \frac{1}{\epsilon^2}\,CT^{-1}, \tag{3.9}$$
where $C$ is a positive constant independent of $\epsilon$ and $T$.

Proof. Let $e_i$ be the solution to $\dot{e}_i=\mu_i^2/\gamma_i^2-2\beta_ie_i-\sigma_i^2e_i^2$, $i=1,2$, with $e_i(t_0)=e(t_0)$.
Since
$$\frac{\mu_2^2}{\gamma_2^2}-2\beta_2e-\sigma_2^2e^2 \;\le\; \dot{e}=\frac{\mu^2}{\gamma^2}-2\beta e-\sigma^2e^2 \;\le\; \frac{\mu_1^2}{\gamma_1^2}-2\beta_1e-\sigma_1^2e^2$$
for all $t\ge t_0$, by the comparison theorem [2, 36] we obtain
$$e_2(t)\le e(t)\le e_1(t) \quad \forall t\ge t_0. \tag{3.10}$$
It follows from (3.7) and (3.10) that $e$ is bounded, and, since $e_i(t)\to l_{2i}$, for any given $\delta\in(0,1)$ there is a $t_1\ge t_0$ such that
$$0<l_{22}(1-\delta)\le e(r)\le l_{21}(1+\delta) \quad \forall r\ge t_1. \tag{3.11}$$
For $t\ge t_1$, we can obtain from (3.5) and $f(0)=0$ that
$$\begin{aligned} f(t) &= -\int_0^t \exp\Big[-\int_s^t\big(\beta(r)+\sigma^2(r)e(r)\big)dr\Big]\alpha(s)e(s)\,ds \\ &= -\exp\Big[-\int_0^t\big(\beta(r)+\sigma^2(r)e(r)\big)dr\Big]\int_0^{t_1}\exp\Big[\int_0^s\big(\beta(r)+\sigma^2(r)e(r)\big)dr\Big]\alpha(s)e(s)\,ds \\ &\quad - \int_{t_1}^t \exp\Big[-\int_s^t\big(\beta(r)+\sigma^2(r)e(r)\big)dr\Big]\alpha(s)e(s)\,ds. \end{aligned} \tag{3.12}$$
Since $\beta(r)+\sigma^2(r)e(r)\ge\beta_1+\sigma_1^2l_{22}(1-\delta)$ holds for all $r\ge t_1$, the first term in (3.12) goes to $0$ as $t\to\infty$. For the second term in (3.12), we have
$$\begin{aligned} \bigg|\int_{t_1}^t \exp\Big[-\int_s^t\big(\beta(r)+\sigma^2(r)e(r)\big)dr\Big]\alpha(s)e(s)\,ds\bigg| &\le \int_0^t \exp\big[-\big(\beta_1+\sigma_1^2l_{22}(1-\delta)\big)(t-s)\big]\,l_{21}(1+\delta)\,ds \\ &= \frac{l_{21}(1+\delta)}{\beta_1+\sigma_1^2l_{22}(1-\delta)}\Big(1-\exp\big[-\big(\beta_1+\sigma_1^2l_{22}(1-\delta)\big)t\big]\Big) \\ &\le \frac{l_{21}(1+\delta)}{\beta_1+\sigma_1^2l_{22}(1-\delta)}. \end{aligned} \tag{3.13}$$
By similar arguments, we obtain
$$\bigg|\int_{t_1}^t \exp\Big[-\int_s^t\big(\beta(r)+\sigma^2(r)e(r)\big)dr\Big]\alpha(s)e(s)\,ds\bigg| \ge \frac{l_{22}(1-\delta)}{\beta_2+\sigma_2^2l_{21}(1+\delta)}. \tag{3.14}$$
Therefore, there exists $t(\delta)>0$ such that
$$\frac{l_{22}(1-\delta)}{\beta_2+\sigma_2^2l_{21}(1+\delta)} \le |f(t)| \le \frac{l_{21}(1+\delta)}{\beta_1+\sigma_1^2l_{22}(1-\delta)} \quad \forall t\ge t(\delta). \tag{3.15}$$
For all $t\ge t(\delta)$, we can get from (3.5) that
$$\begin{aligned} \dot{g} &= \big(2|\alpha|-\sigma^2|f|\big)|f| \ge \bigg(2\alpha_1-\sigma_2^2\frac{l_{21}(1+\delta)}{\beta_1+\sigma_1^2l_{22}(1-\delta)}\bigg)\frac{l_{22}(1-\delta)}{\beta_2+\sigma_2^2l_{21}(1+\delta)} \\ &= \frac{2\alpha_1\big(\beta_1+\sigma_1^2l_{22}(1-\delta)\big)-\sigma_2^2l_{21}(1+\delta)}{\beta_1+\sigma_1^2l_{22}(1-\delta)}\cdot\frac{l_{22}(1-\delta)}{\beta_2+\sigma_2^2l_{21}(1+\delta)}. \end{aligned} \tag{3.16}$$
By assumption (b6), $\dot{g}>0$ for sufficiently small $\delta>0$. This implies that $g(t)$ goes to infinity at least as fast as a linear function.
Thus, there exists a constant $C>0$ such that
$$\mathbb{E}\big[(\theta_0-\hat{\theta}_t)^2\big] = S_{\theta\theta} = \frac{e}{h} \le Ct^{-1}. \tag{3.17}$$
Hence, for arbitrary $\epsilon>0$ and all $T>0$, it follows from Chebyshev's inequality that
$$P\Big(\big|\theta_0-\hat{\theta}_T\big|>\epsilon\Big) \le \frac{1}{\epsilon^2}\,CT^{-1}. \tag{3.18}$$
The proof is complete.
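The $O(1/t)$ decay of $S_{\theta\theta}$ established above can also be observed numerically by integrating (2.9) with constant coefficients. The values below are illustrative choices, and the check simply compares $c(t)$ one decade of time apart.

```python
import numpy as np

# Empirical check of the O(1/t) decay of S_thetatheta from Theorem 3.2:
# integrate the Riccati system (2.9) with constant coefficients and
# record c at t = 10 and t = 100.
alpha, beta, sigma, mu, gamma = 1.0, 0.5, 1.0, 1.0, 1.0
k2 = mu**2 / gamma**2

dt = 1e-4
steps = int(100.0 / dt)
a, b, c = 1.0, 0.0, 1.0
c10 = None
for k in range(1, steps + 1):
    da = 2*beta*a + 2*alpha*b + sigma**2 - k2*a**2
    db = beta*b + alpha*c - k2*a*b
    dc = -k2*b**2
    a, b, c = a + da*dt, b + db*dt, c + dc*dt
    if k == int(10.0 / dt):
        c10 = c                   # c(10)
c100 = c                          # c(100)

# in the C/t regime, ten times more data shrinks the error variance
# by roughly a factor of ten
print(c10, c100)
```

Plotting $t\cdot c(t)$ instead of $c(t)$ makes the constant $C$ of (3.17) visible as a plateau.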

Remark 3.3. From the proof of Theorem 3.2, we can see that $\theta_0-\hat{\theta}_t$ goes to $0$ in the $L^2$ sense under the given conditions. In other words, $\hat{\theta}_t$ is asymptotically unbiased.

Remark 3.4. It is well known that Kalman-Bucy linear filtering theory remains valid if one replaces the Brownian motions $(W_t,V_t)$ in systems (2.2) and (2.3) by an arbitrary centered orthogonal-increment process with the same covariance structure. Thus, Theorem 3.2 remains valid under this replacement.

4. Strong Consistency

In the last section, we gave conditions for the convergence rate of the estimator. In this section, we use the comparison theorem to prove strong consistency. If the parameter $\theta$ is a genuine Gaussian random variable, then the convergence rate has a clear statistical interpretation: first, pick $\theta_0$ at random; second, let system (2.2) run up to time $t$ and simultaneously observe $Y$ through system (2.3); finally, compute $\hat{\theta}_t$ in the following form.

Kalman-Bucy linear filtering theory shows that
$$d\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix} = \Big[A(t)-\frac{S(t)C^T(t)C(t)}{D^2(t)}\Big]\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix}dt + \frac{S(t)C^T(t)}{D^2(t)}\,dY_t = \begin{pmatrix}\beta(t)-\dfrac{\mu^2(t)}{\gamma^2(t)}S_{xx}(t) & \alpha(t)\\[1mm] -\dfrac{\mu^2(t)}{\gamma^2(t)}S_{\theta x}(t) & 0\end{pmatrix}\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix}dt + \frac{\mu(t)}{\gamma^2(t)}\begin{pmatrix}S_{xx}(t)\\S_{\theta x}(t)\end{pmatrix}dY_t \tag{4.1}$$
with initial conditions $\hat{X}_0=\mathbb{E}[X_0]$ and $\hat{\theta}_0=\mathbb{E}[\theta_0]$. If we denote by $\Phi(t)$ the fundamental matrix solution of the deterministic linear system
$$\begin{pmatrix}\dot{x}_t\\\dot{y}_t\end{pmatrix} = \begin{pmatrix}\beta(t)-\dfrac{\mu^2(t)}{\gamma^2(t)}S_{xx}(t) & \alpha(t)\\[1mm] -\dfrac{\mu^2(t)}{\gamma^2(t)}S_{\theta x}(t) & 0\end{pmatrix}\begin{pmatrix}x(t)\\y(t)\end{pmatrix}, \tag{4.2}$$
then the solution to (4.1) is given by
$$\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix} = \Phi(t)\Phi^{-1}(0)\begin{pmatrix}\mathbb{E}[X_0]\\\mathbb{E}[\theta_0]\end{pmatrix} + \int_0^t\Phi(t)\Phi^{-1}(s)\frac{\mu(s)}{\gamma^2(s)}\begin{pmatrix}S_{xx}(s)\\S_{\theta x}(s)\end{pmatrix}dY_s. \tag{4.3}$$
For every particular experiment $\omega$, the quantity $\big(\theta_0(\omega)-\hat{\theta}_t(\omega)\big)^2$ would be the squared estimation error.
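The filter (4.1), together with the Riccati system (2.9), gives a complete recursive estimator that can be run on simulated data. The following is a minimal sketch under illustrative, assumed coefficient values; the true parameter value is available only to the simulator, never to the filter.

```python
import numpy as np

# Minimal sketch of the filter (4.1): simulate (2.2)-(2.3) with a fixed
# true theta, propagate the Riccati system (2.9) and the conditional
# means by forward Euler, and watch hat{theta}_t settle near theta.
rng = np.random.default_rng(1)

theta_true = 1.5
alpha, beta, sigma, mu, gamma = 1.0, -0.5, 0.3, 1.0, 0.2
k2 = mu**2 / gamma**2

T, dt = 50.0, 1e-3
X = 0.0                            # hidden signal
xh, th = 0.0, 0.0                  # filter means E[X_0], E[theta_0]
a, b, c = 1.0, 0.0, 1.0            # error covariance S(0)

for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt))
    dV = rng.normal(0.0, np.sqrt(dt))
    dY = mu * X * dt + gamma * dV              # observation increment (2.3)
    X += (theta_true * alpha + beta * X) * dt + sigma * dW

    innov = dY - mu * xh * dt                  # innovation process
    xh += (beta * xh + alpha * th) * dt + (mu * a / gamma**2) * innov
    th += (mu * b / gamma**2) * innov
    da = 2*beta*a + 2*alpha*b + sigma**2 - k2*a**2
    db = beta*b + alpha*c - k2*a*b
    dc = -k2*b**2
    a, b, c = a + da*dt, b + db*dt, c + dc*dt

print("estimate:", th, "posterior variance:", c)
```

Here `c` tracks the posterior variance $S_{\theta\theta}(t)$, so the spread of `th` around the true value shrinks as Theorem 3.2 predicts.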

But in this paper $\theta$ is a fixed parameter, so we can only choose $\theta_0(\omega)=\theta$, and then the statistical mean over different values of $\theta_0(\omega)$ has no experimental meaning. The true estimation error is given by $\theta-\hat{\theta}_t$, not $\theta_0-\hat{\theta}_t$. It is therefore desirable that the estimator $\hat{\theta}_t$ converge to $\theta_0$ for "all fixed values $\upsilon=\theta_0$" a.s. To establish such an assertion, we work with a product space $(\mathbb{R}\times\Omega,\mathcal{B}(\mathbb{R})\otimes\mathcal{F},\eta\otimes P)$, where $\eta$ denotes the law of $\theta_0$ and $(\Omega,\mathcal{F},P)$ is the underlying probability space for the Brownian motions $(W_t,V_t)_{t\ge0}$. This space is most appropriate because one can make $P$-a.s. statements for fixed $\upsilon\in\mathbb{R}$. Notice that in this representation we have $\theta_0(\upsilon,\omega)=\upsilon$ for all $(\upsilon,\omega)\in\mathbb{R}\times\Omega$. Assuming this underlying probability space, we use the comparison theorem to obtain the following consistency result.

From the proof of Theorem 3.2, we know that $e$ and $f$ are bounded and $g$ is monotonically increasing; moreover,
$$S_{xx}(t)=a=\frac{g}{h}=\frac{g}{eg-f^2}=\frac{1}{e}+\frac{f^2}{e(eg-f^2)}, \qquad S_{\theta x}(t)=b=\frac{-f}{h}=\frac{-f}{eg-f^2}.$$
Thus, there exist positive constants $a_1,a_2,b_1,b_2$ such that $a_1\le a\le a_2$ and $b_1\le b\le b_2$.

Theorem 4.1. Assume that the following conditions are satisfied:
 (c1) $\hat{\theta}_t$ converges to $\theta_0$ in $L^2(\eta\otimes P)$;
 (c2) $\beta_2-\mu_2^2/\gamma_2^2<0$;
 (c3) $\big(\beta_2-(\mu_2^2/\gamma_2^2)a_2\big)^2-4\alpha_2(\mu_2^2/\gamma_2^2)b_2<0$.
Then, for all fixed $\upsilon\in\mathbb{R}$, we have
$$\hat{\theta}_t(\upsilon,\cdot)\longrightarrow\upsilon, \quad P\text{-a.s., as } t\longrightarrow\infty. \tag{4.4}$$

Proof. We will show that (4.4) holds for all $\upsilon\in N^c$, where $\eta(N)=0$.
By Kalman-Bucy linear filtering theory, we know that
$$d\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix} = \Big[A(t)-\frac{S(t)C^T(t)C(t)}{D^2(t)}\Big]\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix}dt + \frac{S(t)C^T(t)}{D^2(t)}\,dY_t = \begin{pmatrix}\beta(t)-\dfrac{\mu^2(t)}{\gamma^2(t)}S_{xx}(t) & \alpha(t)\\[1mm] -\dfrac{\mu^2(t)}{\gamma^2(t)}S_{\theta x}(t) & 0\end{pmatrix}\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix}dt + \frac{\mu(t)}{\gamma^2(t)}\begin{pmatrix}S_{xx}(t)\\S_{\theta x}(t)\end{pmatrix}dY_t \tag{4.5}$$
with initial conditions $\hat{X}_0=\mathbb{E}[X_0]$ and $\hat{\theta}_0=\mathbb{E}[\theta_0]=\mathbb{E}[\upsilon]=\upsilon$.
The linear system
$$\begin{pmatrix}\dot{x}_t\\\dot{y}_t\end{pmatrix} = \begin{pmatrix}\beta(t)-\dfrac{\mu^2(t)}{\gamma^2(t)}S_{xx}(t) & \alpha(t)\\[1mm] -\dfrac{\mu^2(t)}{\gamma^2(t)}S_{\theta x}(t) & 0\end{pmatrix}\begin{pmatrix}x(t)\\y(t)\end{pmatrix} \tag{4.6}$$
is equivalent to
$$\dot{x}_t = \Big[\beta(t)-\frac{\mu^2(t)}{\gamma^2(t)}S_{xx}(t)\Big]x(t)+\alpha(t)y(t), \qquad \dot{y}_t = -\frac{\mu^2(t)}{\gamma^2(t)}S_{\theta x}(t)\,x(t). \tag{4.7}$$
It follows from (c1)–(c3) that
$$\beta_1-\frac{\mu_1^2}{\gamma_1^2}a_1 \le \beta(t)-\frac{\mu^2(t)}{\gamma^2(t)}S_{xx}(t) \le \beta_2-\frac{\mu_2^2}{\gamma_2^2}a_2<0, \qquad \alpha_1\le\alpha(t)\le\alpha_2, \qquad -\frac{\mu_1^2}{\gamma_1^2}b_1 \le -\frac{\mu^2(t)}{\gamma^2(t)}S_{\theta x}(t) \le -\frac{\mu_2^2}{\gamma_2^2}b_2. \tag{4.8}$$
Consider the constant-coefficient linear systems
$$\begin{pmatrix}\dot{x}_t\\\dot{y}_t\end{pmatrix} = \begin{pmatrix}\beta_1-\dfrac{\mu_1^2}{\gamma_1^2}a_1 & \alpha_1\\[1mm] -\dfrac{\mu_1^2}{\gamma_1^2}b_1 & 0\end{pmatrix}\begin{pmatrix}x(t)\\y(t)\end{pmatrix}, \qquad \begin{pmatrix}\dot{x}_t\\\dot{y}_t\end{pmatrix} = \begin{pmatrix}\beta_2-\dfrac{\mu_2^2}{\gamma_2^2}a_2 & \alpha_2\\[1mm] -\dfrac{\mu_2^2}{\gamma_2^2}b_2 & 0\end{pmatrix}\begin{pmatrix}x(t)\\y(t)\end{pmatrix}. \tag{4.9}$$
If we let $\Phi_1(t)$ and $\Phi_2(t)$ be the fundamental matrix solutions of the two systems in (4.9), we can obtain from the comparison theorem that
$$\Phi_1(t)\le\Phi(t)\le\Phi_2(t). \tag{4.10}$$
It is not difficult to solve (4.9) explicitly and get
$$\Phi_1(t)=\begin{pmatrix}-\dfrac{\lambda'_1}{N_{21}}e^{\lambda'_1t} & -\dfrac{\lambda'_2}{N_{21}}e^{\lambda'_2t}\\[1mm] e^{\lambda'_1t} & e^{\lambda'_2t}\end{pmatrix}, \qquad \Phi_2(t)=\begin{pmatrix}-\dfrac{\lambda_1}{M_{21}}e^{\lambda_1t} & -\dfrac{\lambda_2}{M_{21}}e^{\lambda_2t}\\[1mm] e^{\lambda_1t} & e^{\lambda_2t}\end{pmatrix},$$
$$\Phi_1^{-1}(t)=\begin{pmatrix}-\dfrac{N_{21}}{\lambda'_1-\lambda'_2}e^{-\lambda'_1t} & -\dfrac{\lambda'_2}{\lambda'_1-\lambda'_2}e^{-\lambda'_1t}\\[1mm] \dfrac{N_{21}}{\lambda'_1-\lambda'_2}e^{-\lambda'_2t} & \dfrac{\lambda'_1}{\lambda'_1-\lambda'_2}e^{-\lambda'_2t}\end{pmatrix}, \qquad \Phi_2^{-1}(t)=\begin{pmatrix}-\dfrac{M_{21}}{\lambda_1-\lambda_2}e^{-\lambda_1t} & -\dfrac{\lambda_2}{\lambda_1-\lambda_2}e^{-\lambda_1t}\\[1mm] \dfrac{M_{21}}{\lambda_1-\lambda_2}e^{-\lambda_2t} & \dfrac{\lambda_1}{\lambda_1-\lambda_2}e^{-\lambda_2t}\end{pmatrix}, \tag{4.11}$$
where $N_{11}=\beta_1-(\mu_1^2/\gamma_1^2)a_1$, $N_{12}=\alpha_1$, $N_{21}=(\mu_1^2/\gamma_1^2)b_1$, $\lambda'_1=\big(N_{11}+\sqrt{N_{11}^2-4N_{12}N_{21}}\big)/2$, $\lambda'_2=\big(N_{11}-\sqrt{N_{11}^2-4N_{12}N_{21}}\big)/2$, $M_{11}=\beta_2-(\mu_2^2/\gamma_2^2)a_2$, $M_{12}=\alpha_2$, $M_{21}=(\mu_2^2/\gamma_2^2)b_2$, $\lambda_1=\big(M_{11}+\sqrt{M_{11}^2-4M_{12}M_{21}}\big)/2$, and $\lambda_2=\big(M_{11}-\sqrt{M_{11}^2-4M_{12}M_{21}}\big)/2$.
By assumptions (c2) and (c3), we know that $\lambda'_1<0$, $\lambda'_2<0$, $\lambda_1<0$, and $\lambda_2<0$.
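The fundamental matrices in (4.11) can be checked directly: each column of $\Phi_1$ must solve the constant-coefficient system $\dot{x}=Mx$, and the displayed inverse must satisfy $\Phi_1(t)\Phi_1^{-1}(t)=I$. The values of $N_{11},N_{12},N_{21}$ below are illustrative and chosen so that the discriminant $N_{11}^2-4N_{12}N_{21}$ is positive, making both eigenvalues real and negative.

```python
import numpy as np

# Check of the fundamental matrix Phi_1 in (4.11): its columns
# (-lambda'_i/N21 * e^{lambda'_i t}, e^{lambda'_i t}) solve x' = M x for
# M = [[N11, N12], [-N21, 0]], and the displayed inverse really inverts it.
N11, N12, N21 = -2.0, 0.5, 1.0
disc = np.sqrt(N11**2 - 4*N12*N21)
lam1 = (N11 + disc) / 2          # both eigenvalues negative here
lam2 = (N11 - disc) / 2
M = np.array([[N11, N12], [-N21, 0.0]])

def Phi(t):
    return np.array([[-lam1/N21*np.exp(lam1*t), -lam2/N21*np.exp(lam2*t)],
                     [np.exp(lam1*t),           np.exp(lam2*t)]])

def Phi_inv(t):
    d = lam1 - lam2
    return np.array([[-N21/d*np.exp(-lam1*t), -lam2/d*np.exp(-lam1*t)],
                     [ N21/d*np.exp(-lam2*t),  lam1/d*np.exp(-lam2*t)]])

t, h = 0.7, 1e-6
dPhi = (Phi(t + h) - Phi(t - h)) / (2*h)     # central-difference derivative
assert np.allclose(dPhi, M @ Phi(t), atol=1e-5)        # Phi' = M Phi
assert np.allclose(Phi(t) @ Phi_inv(t), np.eye(2), atol=1e-10)
```

The first assertion is just the characteristic equation $\lambda_i^2-N_{11}\lambda_i+N_{12}N_{21}=0$ in disguise; the second verifies the inverse formula entry by entry.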
By ODE theory [37, 38] and the above discussion, we know that the solution of (4.1) is given by
$$\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix} = \Phi(t)\Phi^{-1}(0)\begin{pmatrix}\mathbb{E}[X_0]\\\mathbb{E}[\theta_0]\end{pmatrix} + \int_0^t\Phi(t)\Phi^{-1}(s)\frac{\mu(s)}{\gamma^2(s)}\begin{pmatrix}S_{xx}(s)\\S_{\theta x}(s)\end{pmatrix}dY_s. \tag{4.12}$$
Using the same method, we can also obtain the solutions of the following two comparison equations:
$$d\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix} = \begin{pmatrix}\beta_1-\dfrac{\mu_1^2}{\gamma_1^2}a_1 & \alpha_1\\[1mm] -\dfrac{\mu_1^2}{\gamma_1^2}b_1 & 0\end{pmatrix}\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix}dt + \frac{\mu_1}{\gamma_1^2}\begin{pmatrix}a_1\\b_1\end{pmatrix}dY_t, \tag{4.13}$$
$$d\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix} = \begin{pmatrix}\beta_2-\dfrac{\mu_2^2}{\gamma_2^2}a_2 & \alpha_2\\[1mm] -\dfrac{\mu_2^2}{\gamma_2^2}b_2 & 0\end{pmatrix}\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix}dt + \frac{\mu_2}{\gamma_2^2}\begin{pmatrix}a_2\\b_2\end{pmatrix}dY_t, \tag{4.14}$$
where $\hat{X}_0=\mathbb{E}[X_0]$ and $\hat{\theta}_0=\mathbb{E}[\theta_0]=\mathbb{E}[\upsilon]=\upsilon$.
The solutions of these two equations take the form
$$\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix} = \Phi_1(t)\Phi_1^{-1}(0)\begin{pmatrix}\mathbb{E}[X_0]\\\mathbb{E}[\theta_0]\end{pmatrix} + \int_0^t\Phi_1(t)\Phi_1^{-1}(s)\frac{\mu_1}{\gamma_1^2}\begin{pmatrix}a_1\\b_1\end{pmatrix}dY_s, \qquad \begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix} = \Phi_2(t)\Phi_2^{-1}(0)\begin{pmatrix}\mathbb{E}[X_0]\\\mathbb{E}[\theta_0]\end{pmatrix} + \int_0^t\Phi_2(t)\Phi_2^{-1}(s)\frac{\mu_2}{\gamma_2^2}\begin{pmatrix}a_2\\b_2\end{pmatrix}dY_s. \tag{4.15}$$
For (4.14), writing out the $\hat{\theta}$-component of
$$\begin{pmatrix}\hat{X}_t\\\hat{\theta}_t\end{pmatrix} = \Phi_2(t)\Phi_2^{-1}(0)\begin{pmatrix}\mathbb{E}[X_0]\\\mathbb{E}[\theta_0]\end{pmatrix} + \int_0^t\Phi_2(t)\Phi_2^{-1}(s)\frac{\mu_2}{\gamma_2^2}\begin{pmatrix}a_2\\b_2\end{pmatrix}dY_s \tag{4.16}$$
yields
$$\hat{\theta}_t = \int_0^t\frac{\mu_2}{\gamma_2^2}\bigg[a_2\frac{M_{21}}{\lambda_1-\lambda_2}\Big(e^{\lambda_2(t-s)}-e^{\lambda_1(t-s)}\Big) + b_2\frac{\lambda_1e^{\lambda_2(t-s)}-\lambda_2e^{\lambda_1(t-s)}}{\lambda_1-\lambda_2}\bigg]dY_s + \frac{M_{21}}{\lambda_1-\lambda_2}\Big(e^{\lambda_2t}-e^{\lambda_1t}\Big)\hat{X}_0 + \frac{\lambda_1e^{\lambda_2t}-\lambda_2e^{\lambda_1t}}{\lambda_1-\lambda_2}\,\theta_0. \tag{4.17}$$
Since $\lambda_1<0$ and $\lambda_2<0$, it is easy to get
$$\hat{\theta}_t(\upsilon,\cdot)\longrightarrow\upsilon, \quad P\text{-a.s., as } t\longrightarrow\infty. \tag{4.18}$$
For (4.13), we can similarly get
$$\hat{\theta}_t(\upsilon,\cdot)\longrightarrow\upsilon, \quad P\text{-a.s., as } t\longrightarrow\infty. \tag{4.19}$$
Hence, for (4.1), we obtain
$$\hat{\theta}_t(\upsilon,\cdot)\longrightarrow\upsilon, \quad P\text{-a.s., as } t\longrightarrow\infty. \tag{4.20}$$
The proof is complete.

Remark 4.2. Under the probability space used in this paper, we can see that Theorem 3.2 is a particular consequence of Theorem 4.1, obtained by applying Chebyshev's inequality to the result of Theorem 4.1.

Remark 4.3. The strong consistency result in Deck [30] requires that $\hat{\theta}_t$ be a martingale, while our result does not require $\hat{\theta}_t$ to be a martingale. Furthermore, when $\hat{\theta}_t$ is a martingale, our result is stronger than Deck's, so in that case the conditions can be relaxed relative to those of Deck.

5. Conclusions

In this paper, we have investigated the parameter estimation problem for a class of linear stochastic systems known as Hull-White stochastic differential equations, which are important models in finance. First, a Bayesian viewpoint was adopted to analyze the parameter estimation problem based on Kalman-Bucy linear filtering theory. Second, sufficient conditions on the coefficients were given for studying the asymptotic convergence problem. Finally, the strong consistency of the estimator was discussed by means of Kalman-Bucy linear filtering theory and the comparison theorem.


Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 60974030 and the Science and Technology Project of the Education Department of Fujian Province under Grant JA11211.


References

  1. B. M. Bibby and M. Sørensen, "Martingale estimation functions for discretely observed diffusion processes," Bernoulli, vol. 1, no. 1-2, pp. 17–39, 1995.
  2. C. Corrado and T. Su, "An empirical test of the Hull-White option pricing model," Journal of Futures Markets, vol. 18, no. 4, pp. 363–378, 1998.
  3. H. Dong, Z. Wang, and H. Gao, "H∞ fuzzy control for systems with repeated scalar nonlinearities and random packet losses," IEEE Transactions on Fuzzy Systems, vol. 17, no. 2, pp. 440–450, 2009.
  4. J. Hu, Z. Wang, and H. Gao, "A delay fractioning approach to robust sliding mode control for discrete-time stochastic systems with randomly occurring non-linearities," IMA Journal of Mathematical Control and Information, vol. 28, no. 3, pp. 345–363, 2011.
  5. J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, "Robust sliding mode control for discrete stochastic systems with mixed time delays, randomly occurring uncertainties, and randomly occurring nonlinearities," IEEE Transactions on Industrial Electronics, vol. 59, no. 7, pp. 3008–3015, 2012.
  6. J. Hu, Z. Wang, and H. Gao, "Robust H∞ sliding mode control for discrete time-delay systems with stochastic nonlinearities," Journal of the Franklin Institute, vol. 349, no. 4, pp. 1459–1479, 2012.
  7. E. Ince, Ordinary Differential Equations, Dover, New York, NY, USA, 1956.
  8. C. R. Rao, Linear Statistical Inference and its Applications, John Wiley & Sons, New York, NY, USA, 1973.
  9. M. H. A. Davis, Linear Estimation and Stochastic Control, Chapman and Hall, London, UK, 1977.
  10. M. Arato, A. Kolmogorov, and Y. Sinai, On Parameter Estimation of a Complex Stationary Gaussian Process, vol. 146, Doklady Academy, USSR, 1962.
  11. H. Frydman and P. Lakner, "Maximum likelihood estimation of hidden Markov processes," The Annals of Applied Probability, vol. 13, no. 4, pp. 1296–1312, 2003.
  12. L. Galtchouk and V. Konev, "On sequential estimation of parameters in semimartingale regression models with continuous time parameter," The Annals of Statistics, vol. 29, no. 5, pp. 1508–1536, 2001.
  13. N. R. Kristensen, H. Madsen, and S. B. Jørgensen, "Parameter estimation in stochastic grey-box models," Automatica, vol. 40, no. 2, pp. 225–237, 2004.
  14. B. L. S. Prakasa Rao, Statistical Inference for Diffusion Type Processes, vol. 8, Edward Arnold, London, UK, 1999.
  15. I. Shoji and T. Ozaki, "Comparative study of estimation methods for continuous time stochastic processes," Journal of Time Series Analysis, vol. 18, no. 5, pp. 485–506, 1997.
  16. H. Dong, Z. Wang, and H. Gao, "Robust H∞ filtering for a class of nonlinear networked systems with multiple stochastic communication delays and packet dropouts," IEEE Transactions on Signal Processing, vol. 58, no. 4, pp. 1957–1966, 2010.
  17. H. Dong, Z. Wang, and H. Gao, "Distributed filtering for a class of time-varying systems over sensor networks with quantization errors and successive packet dropouts," IEEE Transactions on Signal Processing, vol. 60, no. 6, pp. 3164–3173, 2012.
  18. J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, "Probability-guaranteed H∞ finite-horizon filtering for a class of nonlinear time-varying systems with sensor saturations," Systems and Control Letters, vol. 61, no. 4, pp. 477–484, 2012.
  19. J. Hu, Z. Wang, Y. Niu, and L. K. Stergioulas, "H∞ sliding mode observer design for a class of nonlinear discrete time-delay systems: a delay-fractioning approach," International Journal of Robust and Nonlinear Control. In press.
  20. J. Picard, "Asymptotic study of estimation problems with small observation noise," in Stochastic Modelling and Filtering (Rome, 1984), vol. 91 of Lecture Notes in Control and Information Sciences, Springer, 1987.
  21. B. Shen, Z. Wang, and Y. S. Hung, "Distributed H∞-consensus filtering in sensor networks with multiple missing measurements: the finite-horizon case," Automatica, vol. 46, no. 10, pp. 1682–1688, 2010.
  22. B. Shen, Z. Wang, and X. Liu, "Bounded H∞ synchronization and state estimation for discrete time-varying stochastic complex networks over a finite-horizon," IEEE Transactions on Neural Networks, vol. 22, no. 1, pp. 145–157, 2011.
  23. B. Shen, Z. Wang, Y. Hung, and G. Chesi, "Distributed H∞ filtering for polynomial nonlinear stochastic systems in sensor networks," IEEE Transactions on Industrial Electronics, vol. 58, no. 5, pp. 1971–1979, 2011.
  24. Z. Wang, J. Lam, and X. Liu, "Filtering for a class of nonlinear discrete-time stochastic systems with state delays," Journal of Computational and Applied Mathematics, vol. 201, no. 1, pp. 153–163, 2007.
  25. Z. Wang, X. Liu, Y. Liu, J. Liang, and V. Vinciotti, "An extended Kalman filtering approach to modelling nonlinear dynamic gene regulatory networks via short gene expression time series," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 6, no. 3, pp. 410–419, 2009.
  26. G. Wei and H. Shu, "H∞ filtering on nonlinear stochastic systems with delay," Chaos, Solitons and Fractals, vol. 33, no. 2, pp. 663–670, 2007.
  27. M. R. James and F. Le Gland, "Consistent parameter estimation for partially observed diffusions with small noise," Applied Mathematics and Optimization, vol. 32, no. 1, pp. 47–72, 1995.
  28. R. S. Liptser and A. N. Shiryayev, Statistics of Random Processes. I, Springer, New York, NY, USA, 1977.
  29. Yu. A. Kutoyants, "Parameter estimation for diffusion type processes of observation," Mathematische Operationsforschung und Statistik Series Statistics, vol. 15, no. 4, pp. 541–551, 1984.
  30. T. Deck, "Asymptotic properties of Bayes estimators for Gaussian Itô-processes with noisy observations," Journal of Multivariate Analysis, vol. 97, no. 2, pp. 563–573, 2006.
  31. J. P. N. Bishwal, Parameter Estimation in Stochastic Differential Equations, Springer, Berlin, Germany, 2008.
  32. A. Brace, D. Gatarek, and M. Musiela, "The market model of interest rate dynamics," Mathematical Finance, vol. 7, no. 2, pp. 127–155, 1997.
  33. D. Heath, R. Jarrow, and A. Morton, "Bond pricing and the term structure of interest rates: a new methodology," Econometrica, vol. 60, no. 1, pp. 77–105, 1992.
  34. J. Hull and A. White, "The pricing of options on assets with stochastic volatilities," The Journal of Finance, vol. 42, no. 2, pp. 281–300, 1987.
  35. J. Hull and A. White, "The general Hull-White model and super calibration," Financial Analysts Journal, vol. 57, no. 6, pp. 34–43, 2001.
  36. G. Kallianpur and R. S. Selukar, "Parameter estimation in linear filtering," Journal of Multivariate Analysis, vol. 39, no. 2, pp. 284–304, 1991.
  37. V. I. Arnold, Geometrical Methods in the Theory of Ordinary Differential Equations, Springer, New York, NY, USA, 1983.
  38. E. L. Ince, Ordinary Differential Equations, Dover, New York, NY, USA, 1944.