Journal of Probability and Statistics, Volume 2011, Article ID 259091, 15 pages. http://dx.doi.org/10.1155/2011/259091
Research Article

## A Bayes Formula for Nonlinear Filtering with Gaussian and Cox Noise

1Department of Statistics and Probability, Michigan State University, East Lansing, MI 48824, USA
2Department of Mathematics, University of Munich, Theresienstrasse 39, 80333 Munich, Germany
3Center of Mathematics for Applications, University of Oslo, P.O. Box 1053, Blindern, 0316 Oslo, Norway

Received 27 May 2011; Accepted 6 September 2011

#### Abstract

A Bayes-type formula is derived for the nonlinear filter in which the observation contains both general Gaussian noise and Cox noise whose jump intensity depends on the signal. This formula extends the well-known Kallianpur-Striebel formula of the classical nonlinear filtering setting. We also discuss Zakai-type equations for both the unnormalized conditional distribution and the unnormalized conditional density in the case where the signal is a Markovian jump diffusion.

#### 1. Introduction

The general filtering setting can be described as follows. Assume a partially observable process defined on a probability space . The real-valued process stands for the unobservable component, referred to as the signal process or system process, whereas is the observable part, called the observation process. Information about can thus only be obtained by extracting, in the best possible way, the information about that is contained in the observation . In filter theory this is done by determining the conditional distribution of given the information -field generated by . Stated equivalently, the objective is to compute the optimal filter as the conditional expectation

for a rich enough class of functions .

In the classical nonlinear filtering setting, the dynamics of the observation process is assumed to follow the Itô process

where is a Brownian motion independent of . Under certain conditions on the drift (see [1, 2]), Kallianpur and Striebel derived a Bayes-type formula for the conditional distribution, expressed in terms of the so-called unnormalized conditional distribution. In the special case when the dynamics of the signal follows an Itô diffusion

for a second Brownian motion , Zakai [3] showed under certain conditions that the unnormalized conditional density is the solution of an associated stochastic partial differential equation, the so-called Zakai equation.
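To illustrate the Kallianpur-Striebel ratio in the simplest possible setting, the following sketch uses a hypothetical toy model (not taken from the references): a static Gaussian signal with constant information drift observed in additive Brownian noise. The posterior mean is written as a ratio of reference-measure expectations with the Girsanov likelihood and estimated by Monte Carlo, then compared with the exact conjugate-Gaussian answer.

```python
import numpy as np

# Assumed toy model: signal X ~ N(0,1), observation dY_t = X dt + dW_t on [0, T].
# The Kallianpur-Striebel formula gives the posterior mean as a ratio of
# expectations under the reference measure: E[X L] / E[L], with likelihood
# L = exp(X Y_T - X^2 T / 2) (Girsanov density for constant drift X).
rng = np.random.default_rng(0)
T, y_T = 1.0, 0.8                      # horizon and observed endpoint of Y
x = rng.standard_normal(400_000)       # samples from the prior of X
log_w = x * y_T - 0.5 * x**2 * T       # log-likelihood
w = np.exp(log_w - log_w.max())        # numerically stabilized weights
ks_mean = np.sum(w * x) / np.sum(w)    # Kallianpur-Striebel ratio estimate

# For this conjugate Gaussian model the exact posterior mean is y_T / (1 + T).
exact_mean = y_T / (1.0 + T)
```

The ratio structure of the estimator is exactly the normalized/unnormalized split discussed below; only the model is a stand-in.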

In this paper we extend the classical filter model to the following more general setting. For a general signal process we suppose that the observation model is given as where (i) is a general Gaussian process with zero mean and continuous covariance function , which is independent of the signal process ; (ii) letting (resp. ) denote the -algebra generated by (resp. ) augmented by the null sets, and defining the filtration through , we assume that the process is a pure jump -semimartingale determined through the integer-valued random measure , which has an -predictable compensator of the form for a Lévy measure and a functional ; in particular, and are independent; (iii) the function is such that is -measurable and is in for almost all , where denotes the Hilbert space generated by (see Section 2).

The observation dynamics thus consist of an information drift of the signal disturbed by some Gaussian noise, plus a pure jump part whose jump intensity depends on the signal. Note that a jump process of the form given above is also referred to as a Cox process.
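A jump process of this kind is easy to simulate on a time grid. The following sketch (an assumed toy parametrization, with an Ornstein-Uhlenbeck signal and the illustrative intensity lam(x) = 1 + x^2) generates a Cox counting process by Bernoulli thinning, and sanity-checks the mechanism against the known mean count for a constant intensity.

```python
import numpy as np

# Sketch: Cox process whose jump intensity lam(X_t) depends on a signal path,
# simulated on a grid by thinning: a jump occurs in [t, t+dt) w.p. lam dt.
def simulate_cox(lam, x_path, dt, rng):
    """Return the number of jumps along one signal path."""
    probs = np.clip(lam(x_path) * dt, 0.0, 1.0)
    return int(np.sum(rng.random(probs.shape) < probs))

rng = np.random.default_rng(1)
T, dt = 5.0, 0.01
n = round(T / dt)

# Assumed OU signal path: dX = -X dt + dB (Euler scheme).
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] - x[k] * dt + np.sqrt(dt) * rng.standard_normal()

count = simulate_cox(lambda s: 1.0 + s**2, x, dt, rng)

# Sanity check: with a constant intensity lam = 2 the mean count is lam * T = 10.
const_counts = [simulate_cox(lambda s: 2.0 + 0.0 * s, x, dt, rng)
                for _ in range(2000)]
mean_const = float(np.mean(const_counts))
```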

The first objective of the paper is to extend the Kallianpur-Striebel Bayes-type formula to the generalized filter setting from above. When no jumps are present in the observation dynamics (1.4), the corresponding formula was developed in [4]. We extend that line of reasoning to the situation including Cox noise.

In a second step we then derive a Zakai-type measure-valued stochastic differential equation for the unnormalized conditional distribution of the filter. For this purpose we assume the signal process to be a Markov process with generator given as

with the coefficients and being in for every . Here, is the space of continuous functions with compact support and bounded derivatives up to order 2. Further, we develop a Zakai-type stochastic parabolic integro-partial differential equation for the unnormalized conditional density, given that it exists. In the case where the dynamics of contains no jumps and the Gaussian noise in the observation is Brownian motion, the corresponding Zakai equation was also studied in [5]. We further refer to [6], where nonlinear filtering for jump diffusions is considered. For further information on Zakai equations in a semimartingale setting we refer to [7, 8].

The remaining part of the paper is organized as follows. In Section 2 we briefly recall some theory of reproducing kernel Hilbert spaces. In Section 3 we obtain the Kallianpur-Striebel formula, before we discuss the Zakai-type equations in Section 4.

#### 2. Reproducing Kernel Hilbert Space and Stochastic Processes

A Hilbert space consisting of real-valued functions on some set is said to be a reproducing kernel Hilbert space (RKHS) if there exists a function on with the following two properties: for every in and in , (i) ; (ii) (the reproducing property).

is called the reproducing kernel of . The following basic properties can be found in [9]. (1) If a reproducing kernel exists, then it is unique. (2) If is the reproducing kernel of a Hilbert space , then spans . (3) If is the reproducing kernel of a Hilbert space , then it is nonnegative definite in the sense that for all in and
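Property (3) can be verified numerically for a concrete kernel. The sketch below (an illustration, using the Brownian-motion covariance min(s, t) as the example kernel) builds a Gram matrix on a time grid and confirms that all eigenvalues are nonnegative up to floating-point error.

```python
import numpy as np

# Nonnegative definiteness of the kernel K(s, t) = min(s, t): every Gram
# matrix [K(t_i, t_j)] must have nonnegative eigenvalues.
t = np.linspace(0.1, 2.0, 40)
gram = np.minimum.outer(t, t)          # K(t_i, t_j) = min(t_i, t_j)
eigvals = np.linalg.eigvalsh(gram)     # eigenvalues of the symmetric matrix
min_eig = float(eigvals.min())
```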

The converse of (3), stated in Theorem 2.1 below, is fundamental towards understanding the RKHS representation of Gaussian processes. A proof of the theorem can be found in [9].

Theorem 2.1 (E. H. Moore). A symmetric nonnegative definite function on generates a unique Hilbert space, which we denote by or sometimes by , of which is the reproducing kernel.

Now suppose , is a nonnegative definite function. Then, by Theorem 2.1, there is an RKHS, , with as its reproducing kernel. If we restrict to , where , then is still a nonnegative definite function. Hence restricted to will also correspond to a reproducing kernel Hilbert space of functions defined on . The following result from [9, p. 351] explains the relationship between these two spaces.

Theorem 2.2. Suppose , defined on , is the reproducing kernel of the Hilbert space with the norm . Let and be the restriction of on . Then consists of all in restricted to . Further, for such a restriction the norm is the minimum of for all whose restriction to is .

If is the covariance function for some zero mean process , then, by Theorem 2.1, there exists a unique RKHS, , for which is the reproducing kernel. It is also easy to see (e.g., see [10, Theorem 3D]) that there exists a congruence (linear, one-to-one, inner product preserving map) between and which takes to . Let us denote by , the image of under the congruence.

We conclude the section with an important special case.

##### 2.1. A Useful Example

Suppose the stochastic process is a Gaussian process given by

where for all and is Brownian motion. Then the covariance function and the corresponding RKHS is given by for some (necessarily unique) , with the inner product

where

For , by taking to be , we see, from (2.3) and (2.4), that . To check the reproducing property suppose . Then

Also, in this case, it is very easy to check (cf. [11], Theorem 4D) that the congruence between and is given by
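The reproducing property in this example can also be checked numerically. The sketch below takes the simplest instance F = 1 (standard Brownian motion, so R(s, t) = min(s, t)) and the illustrative RKHS element g(t) = t^2; the inner product of R(., s) with g, computed by quadrature from the derivatives as in (2.3), should return g(s).

```python
import numpy as np

# Reproducing property check for F = 1: R(s, t) = min(s, t), an RKHS element
# g(t) = int_0^t g'(u) du, and <R(., s), g> = int_0^s g'(u) du = g(s),
# since the "derivative" of t -> min(t, s) is the indicator of [0, s].
u = np.linspace(0.0, 1.0, 100_001)
du = u[1] - u[0]
g_star = 2.0 * u                        # derivative of g(t) = t^2
s = 0.6
kernel_star = (u <= s).astype(float)    # derivative of t -> min(t, s)
inner = float(np.sum(kernel_star * g_star) * du)   # <R(., s), g>
g_at_s = s**2
```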

#### 3. The Filter Setting and a Bayes Formula

Assume a partially observable process defined on a probability space . The real-valued process stands for the unobservable component, referred to as the signal process, whereas is the observable part, called the observation process. In particular, we assume that the dynamics of the observation process is given as follows: where (i) is a Gaussian process with zero mean and continuous covariance function , which is independent of the signal process ; (ii) the function is such that is -measurable and is in for almost all , where denotes the Hilbert space generated by (see Section 2); (iii) letting (resp. ) denote the -algebra generated by (resp. ) augmented by the null sets, and defining the filtration through , we assume that the process is a pure jump -semimartingale determined through the integer-valued random measure , which has an -predictable compensator of the form for a Lévy measure and a functional ; (iv) the functional is assumed to be strictly positive and such that

is a well-defined -martingale. Here stands for the compensated jump measure

Remark 3.1. Note that the specific form of the predictable compensator implies that is a process with conditionally independent increments with respect to the -algebra , that is, for all bounded measurable functions , , and (see, e.g., [12, Theorem 6.6]). Also, it follows that the process is independent of the random measure .

Given a Borel measurable function , our nonlinear filtering problem then comes down to determining the least squares estimate of , given the observations up to time . In other words, the problem consists in evaluating the optimal filter. In this section we want to derive a Bayes formula for the optimal filter (3.8) by an extension of the reference measure method presented in [4] for the purely Gaussian case. For this purpose, define for each with

Then the main tool is the following extension of Theorem 3.1 in [4].

Lemma 3.2. Define Then is a probability measure, and under we have that where , , is a Gaussian process with zero mean and covariance function , , , is a pure jump Lévy process with Lévy measure , and the process , has the same distribution as under . Further, the processes , , and are independent under .

Proof. Fix . First note that since almost surely, we have by Theorem 2.2 that almost surely. Further, by the independence of the Gaussian process from and from the random measure it follows that Since for the random variable is Gaussian with zero mean and variance , it follows again by the independence of from and the martingale property of that , and is a probability measure.
Now take and real numbers , , and consider the joint characteristic function Here, for computational convenience, the part of the characteristic function that concerns is formulated in terms of increments of (where we set ). Now, as in [4, Theorem 3.1], we get by the independence of from that which is the characteristic function of a Gaussian process with mean zero and covariance function .
Further, by the conditionally independent increments of we get, as in the proof of [12, Theorem 6.6], that for . For one increment one thus obtains The generalization to a sum of increments is straightforward, and one obtains the characteristic function of the finite-dimensional distribution of a Lévy process (of finite variation): Altogether we end up with which completes the proof.

Remark 3.3. Note that in the case where is Brownian motion, Lemma 3.2 is just the usual Girsanov theorem for Brownian motion and random measures. In this case, it follows from the Cameron-Martin result and the fact that is independent of that is a martingale and is a probability measure.

is -a.s. by condition (3.4) and an argument as in [4, p. 857], given through

Here

is now a compensated Poisson random measure under . Then, by the Bayes formula for conditional expectations, we have for any -measurable integrable function

From Lemma 3.2 we know that the processes , , and are independent under and that the distribution of is the same under as under . Hence conditional expectations of the form can be computed as

where and the index denotes integration with respect to . Consequently, we get the following Bayes formula for the optimal filter.

Theorem 3.4. Under the above specified conditions, for any -measurable integrable function where
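To make the structure of Theorem 3.4 concrete on the Cox-noise side, the following sketch uses a hypothetical toy model: a static signal X ~ N(0,1) observed only through a counting process with intensity lam(X) = exp(X) on [0, T]. The filter is computed as a ratio of reference-measure expectations weighted by the jump likelihood (X-independent factors cancel in the ratio), and the Monte Carlo estimate is checked against deterministic quadrature.

```python
import numpy as np

# Assumed toy model for the Bayes formula with Cox noise: X ~ N(0,1),
# observed jump count N_T over [0, T] with intensity lam(X) = exp(X).
# Jump likelihood: L ∝ lam(X)^{N_T} exp(-lam(X) T), so
# log L = N_T * X - exp(X) * T up to an X-independent constant.
rng = np.random.default_rng(2)
T, n_obs = 5.0, 8                       # horizon and observed jump count

x = rng.standard_normal(400_000)        # prior samples of X
log_w = n_obs * x - np.exp(x) * T       # log-likelihood, constants dropped
w = np.exp(log_w - log_w.max())
mc_mean = np.sum(w * x) / np.sum(w)     # Bayes-formula estimate of E[X | N_T]

# Independent check: deterministic quadrature over the prior density.
grid = np.linspace(-8.0, 8.0, 20_001)
dens = np.exp(-grid**2 / 2) * np.exp(n_obs * grid - np.exp(grid) * T)
quad_mean = np.sum(grid * dens) / np.sum(dens)
```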

#### 4. Zakai-Type Equations

Using the Bayes formula from above, we now want to derive a Zakai-type equation for the unnormalized filter. This equation is fundamental for computing the filter recursively. To this end we have to impose certain restrictions on both the signal process and the Gaussian part of the observation process.

Regarding the signal process , we assume its dynamics to be Markovian. More precisely, we consider the parabolic integro-differential operator , where

for . Here, is the space of continuous functions with compact support and bounded derivatives up to order 2. Further, we suppose that , , and are in for every and that is a Lévy measure with second moment. The signal process , is then assumed to be a solution of the martingale problem corresponding to , that is,

is a -martingale with respect to for every .

Further, we restrict the Gaussian process of the observation process in (3.1) to belong to the special case presented in Section 2.1, that is,

where is Brownian motion and is a deterministic function such that , . Note that this class of processes includes both Ornstein-Uhlenbeck processes and fractional Brownian motion. Then will be of the form

Further, with

we get and , and in Theorem 3.4 becomes

Note that in this case , is a Brownian motion under .

For we now define the unnormalized filter by

Then this unnormalized filter obeys the following dynamics.

Theorem 4.1 (Zakai equation I). Under the above specified assumptions, the unnormalized filter satisfies the equation

Proof. Set Then, by our assumptions on the coefficients , , and on the Lévy measure , we have for some constant . Since is a martingale, we obtain If we denote then, because is -measurable for each , (4.10) implies that By definition of , Also, is the Doléans-Dade solution of the following linear SDE: So we get The first term on the right-hand side equals , and for the second one we can invoke Fubini's theorem to get For the third term we employ the stochastic Fubini theorem for Brownian motion (see, e.g., Example 5.14 in [13]) in order to get Further, one easily sees that the analogous stochastic Fubini theorem for compensated Poisson random measures holds, and we get analogously for the last term which completes the proof.

If one further assumes that the filter has a so-called unnormalized conditional density , then we can derive a stochastic integro-PDE determining , which for the Brownian motion case was first established in [3] and is usually referred to as the Zakai equation.

Definition 4.2. We say that a process is the unnormalized conditional density of the filter if for all bounded continuous functions .

From now on we restrict the integro part of the operator to be that of a pure jump Lévy process, that is, , and we assume the initial value of the signal process to possess a density, denoted by . Then the following holds.

Theorem 4.3 (Zakai equation II). Suppose the unnormalized conditional density of our filter exists. Then, provided that a solution exists, solves the following stochastic integro-PDE: Here is the adjoint operator of given through for .

For sufficient conditions on the coefficients under which there exists a classical solution of (4.20) see, for example, [5]; in [14] the existence of solutions in a generalized sense of stochastic distributions is treated.

Proof. By (4.8) and (4.19) we have for all Now, using integration by parts, we get Further, again by integration by parts and by substitution, it holds that Fubini's theorem together with (4.23) and (4.24) then yields Since this holds for all , we get (4.20).

#### Acknowledgments

This work was started when V. Mandrekar was at the CMA, University of Oslo. He thanks the CMA for support and Professor Øksendal for interesting discussions and hospitality.

#### References

1. G. Kallianpur and C. Striebel, “Estimation of stochastic systems: arbitrary system process with additive white noise observation errors,” Annals of Mathematical Statistics, vol. 39, pp. 785–801, 1968.
2. G. Kallianpur, Stochastic Filtering Theory, Springer, New York, NY, USA, 1980.
3. M. Zakai, “On the optimal filtering of diffusion processes,” Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 11, no. 3, pp. 230–243, 1969.
4. P. Mandal and V. Mandrekar, “A Bayes formula for Gaussian noise processes and its applications,” SIAM Journal on Control and Optimization, vol. 39, no. 3, pp. 852–871, 2000.
5. T. Meyer-Brandis and F. Proske, “Explicit solution of a non-linear filtering problem for Lévy processes with application to finance,” Applied Mathematics and Optimization, vol. 50, no. 2, pp. 119–134, 2004.
6. D. Poklukar, “Nonlinear filtering for jump-diffusions,” Journal of Computational and Applied Mathematics, vol. 197, no. 2, pp. 558–567, 2006.
7. B. Grigelionis, “Stochastic non-linear filtering equations and semimartingales,” in Nonlinear Filtering and Stochastic Control, S. K. Mitter and A. Moro, Eds., vol. 972 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1982.
8. B. Grigelionis, “Stochastic evolution equations and densities of the conditional distributions,” in Theory and Application of Random Fields, vol. 49 of Lecture Notes in Control and Information Sciences, pp. 49–88, Springer, Berlin, Germany, 1983.
9. N. Aronszajn, “Theory of reproducing kernels,” Transactions of the American Mathematical Society, vol. 68, pp. 337–404, 1950.
10. E. Parzen, “Regression analysis of continuous parameter time series,” in Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 469–489, University of California Press, 1961.
11. E. Parzen, “Statistical inference on time series by Hilbert space methods,” in Time Series Analysis Papers, pp. 251–382, Holden-Day, London, UK, 1967.
12. J. Jacod and A. N. Shiryaev, Limit Theorems for Stochastic Processes, vol. 288, Springer, Berlin, Germany, 2nd edition, 2002.
13. R. S. Liptser and A. N. Shiryaev, Statistics of Random Processes, vol. 1, Springer, New York, NY, USA, 1977.
14. T. Meyer-Brandis, “Stochastic Feynman-Kac equations associated to Lévy-Itô diffusions,” Stochastic Analysis and Applications, vol. 25, no. 5, pp. 913–932, 2007.