We show that every separable Gaussian process with integrable variance function admits a Fredholm representation with respect to a Brownian motion. We extend the Fredholm representation to a transfer principle and develop stochastic analysis by using it. We show the convenience of the Fredholm representation by giving applications to equivalence in law, bridges, series expansions, stochastic differential equations, and maximum likelihood estimation.

1. Introduction

The stochastic analysis of Gaussian processes that are not semimartingales is challenging. One way to overcome the challenge is to represent the Gaussian process under consideration, $X$ say, in terms of a Brownian motion, and then develop a transfer principle so that the stochastic analysis can be done at the "Brownian level" and then transferred back to the level of $X$.

One of the most studied representations in terms of a Brownian motion is the so-called Volterra representation. A Gaussian Volterra process is a process $X = (X_t)_{t\in[0,T]}$ that can be represented as
$$X_t = \int_0^t K(t,s)\,\mathrm{d}W_s, \tag{1}$$
where $W$ is a Brownian motion and $K\in L^2([0,T]^2)$. Here, the integration goes only up to $t$, hence the name "Volterra." This Volterra nature is very convenient: it means that the filtration of $X$ is included in the filtration of the underlying Brownian motion $W$. Gaussian Volterra processes and their stochastic analysis have been studied, for example, in [1, 2], just to mention a few. Apparently, the most famous Gaussian process admitting a Volterra representation is the fractional Brownian motion, and its stochastic analysis has indeed been developed mostly by using its Volterra representation; see, for example, the monographs [3, 4] and references therein.

In discrete finite time the Volterra representation (1) is nothing but the Cholesky lower-triangular factorization of the covariance of $X$, and hence every Gaussian process is a Volterra process. In continuous time this is not true; see Example 16 in Section 3.
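The discrete-time claim can be checked directly in code: the Cholesky factor of the covariance matrix is exactly the discrete Volterra kernel. The grid and the fractional-Brownian-motion covariance below are illustrative choices, not taken from the text:

```python
import numpy as np

# Covariance matrix of fractional Brownian motion (H = 0.75, an illustrative
# choice) on a grid: R(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2.
H = 0.75
t = np.linspace(0.1, 1.0, 10)
R = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
           - np.abs(t[:, None] - t[None, :])**(2*H))

# Cholesky factorization R = L L^T with L lower-triangular: the discrete
# analogue of the Volterra representation X_i = sum_{j <= i} L[i, j] * xi_j.
L = np.linalg.cholesky(R)

# X = L @ xi has covariance L L^T = R for iid standard normals xi, and the
# lower-triangularity means X_i uses only xi_1, ..., xi_i ("Volterra" nature).
assert np.allclose(L @ L.T, R)
assert np.allclose(L, np.tril(L))
```

The same construction works for any positive-definite covariance matrix, which is the sense in which "every Gaussian process is a Volterra process" in discrete time.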

There is a more general representation than (1) due to Hida; see [5, Theorem 4.1]. However, this Hida representation possibly includes an infinite number of Brownian motions. Consequently, it seems very difficult to apply the Hida representation to build the transfer principle needed by stochastic analysis. Moreover, the Hida representation is not quite general. Indeed, it requires, among other things, that the Gaussian process be purely nondeterministic. The Fredholm representation (2) does not require pure nondeterminism. Example 16 in Section 3, which admits a Fredholm representation, does not admit a Hida representation, and the reason is the lack of pure nondeterminism.

The problem with the Volterra representation (1), as far as generality is concerned, is the Volterra nature of the kernel $K$. Indeed, if one considers Fredholm kernels, that is, kernels where the integration is over the entire interval under consideration, one obtains generality. A Gaussian Fredholm process is a process $X = (X_t)_{t\in[0,T]}$ that admits the Fredholm representation
$$X_t = \int_0^T K_T(t,s)\,\mathrm{d}W_s, \tag{2}$$
where $W$ is a Brownian motion and $K_T\in L^2([0,T]^2)$. In this paper we show that every separable Gaussian process with integrable variance function admits representation (2). The price we have to pay for this generality is twofold:
(i) The process $X$ is generated, in principle, from the entire path of the underlying Brownian motion $W$. Consequently, $X$ and $W$ do not necessarily generate the same filtration. This is unfortunate in many applications.
(ii) In general the kernel $K_T$ depends on $T$ even if the covariance $R$ does not, and consequently the derived operators also depend on $T$. This is why we use the cumbersome notation of explicitly stating the dependence on $T$ when there is one. In stochastic analysis this dependence on $T$ seems to be a minor inconvenience, however. Indeed, even in the Volterra case as examined, for example, by Alòs et al. [1], one cannot avoid the dependence on $T$ in the transfer principle. Of course, for statistics, where one would like to let $T$ tend to infinity, this is a major inconvenience.

Let us note that the Fredholm representation has already been used, without proof, in [6], where the Hölder continuity of Gaussian processes was studied.

Let us mention a few papers that study stochastic analysis of Gaussian processes here. Indeed, several different approaches have been proposed in the literature. In particular, fractional Brownian motion has been a subject of active study (see the monographs [3, 4] and references therein). More general Gaussian processes have been studied in the already mentioned work by Alòs et al. [1]. They considered Gaussian Volterra processes where the kernel $K$ satisfies certain technical conditions. In particular, their results cover fractional Brownian motion with Hurst parameter $H > \frac14$. Later Cheridito and Nualart [7] introduced an approach based on the covariance function itself rather than on the Volterra kernel $K$. Kruk et al. [8] developed stochastic calculus for processes having finite quadratic planar variation, especially covering fractional Brownian motion with $H \ge \frac12$. Moreover, Kruk and Russo [9] extended the approach to cover singular covariances, hence covering fractional Brownian motion with $H < \frac12$. Furthermore, Mocioalca and Viens [10] studied processes which are close to processes with stationary increments. More precisely, their results cover cases where $\mathbb{E}\big[(X_t-X_s)^2\big] = \gamma^2(|t-s|)$, where $\gamma$ satisfies some minimal regularity conditions. In particular, their results cover some processes which are not even continuous. Finally, the latest development we are aware of is a paper by Lei and Nualart [11], who developed stochastic calculus for processes having absolutely continuous covariance by using the extended domain of the divergence introduced in [9]. Finally, we would like to mention Lebovits [12], who used the S-transform approach and obtained results similar to ours, although his notion of integral is not as elementary as ours.

The results presented in this paper give a unified approach to stochastic calculus for Gaussian processes; only integrability of the variance function is required. In particular, our results cover processes that are not continuous.

The paper is organized as follows.

Section 2 contains some preliminaries on Gaussian processes and isonormal Gaussian processes and related Hilbert spaces.

Section 3 provides the proof of the main theorem of the paper: the Fredholm representation.

In Section 4 we extend the Fredholm representation to a transfer principle in three contexts of growing generality: First we prove the transfer principle for Wiener integrals in Section 4.1, then we use the transfer principle to define the multiple Wiener integral in Section 4.2, and, finally, in Section 4.3 we prove the transfer principle for Malliavin calculus, thus showing that the definition of multiple Wiener integral via the transfer principle done in Section 4.2 is consistent with the classical definitions involving Brownian motion or other Gaussian martingales. Indeed, classically one defines the multiple Wiener integrals either by building an isometry with removed diagonals or by spanning higher chaos by using the Hermite polynomials. In the general Gaussian case one cannot of course remove the diagonals, but the Hermite polynomial approach is still valid. We show that this approach is equivalent to the transfer principle. In Section 4.3 we also prove an Itô formula for general Gaussian processes and in Section 4.4 we extend the Itô formula even further by using the technique of extended domain in the spirit of [7]. This Itô formula is, as far as we know, the most general version for Gaussian processes existing in the literature so far.

Finally, in Section 5 we show the power of the transfer principle in some applications. In Section 5.1 the transfer principle is applied to the question of equivalence of law of general Gaussian processes. In Section 5.2 we show how one can construct a canonical-type representation for generalized Gaussian bridges, that is, for Gaussian processes conditioned on multiple linear functionals of their paths. In Section 5.3 the transfer principle is used to provide series expansions for general Gaussian processes.

2. Preliminaries

Our general setting is as follows: let $T>0$ be a fixed finite time-horizon and let $X = (X_t)_{t\in[0,T]}$ be a Gaussian process with covariance $R$ that may or may not depend on $T$. Without loss of any interesting generality we assume that $X$ is centered. We also make the very weak assumption that $X$ is separable in the sense of the following definition.

Definition 1 (separability). The Gaussian process $X$ is separable if the Hilbert space $L^2(\Omega, \sigma(X), \mathbb{P})$ is separable.

Example 2. If the covariance $R$ is continuous, then $X$ is separable. In particular, all continuous Gaussian processes are separable.

Definition 3 (associated operator). For a kernel $\Gamma\in L^2([0,T]^2)$ one associates an operator on $L^2([0,T])$, also denoted by $\Gamma$, as
$$\Gamma f(t) = \int_0^T f(s)\,\Gamma(t,s)\,\mathrm{d}s.$$

Definition 4 (isonormal process). The isonormal process associated with $X$, also denoted by $X$, is the Gaussian family $(X(h))_{h\in\mathcal{H}_T}$, where the Hilbert space $\mathcal{H}_T$ is generated by the covariance $R$ as follows:
(i) the indicators $\mathbf{1}_t := \mathbf{1}_{[0,t)}$, $t\le T$, belong to $\mathcal{H}_T$;
(ii) $\mathcal{H}_T$ is endowed with the inner product $\langle \mathbf{1}_t, \mathbf{1}_s\rangle_{\mathcal{H}_T} := R(t,s)$.

Definition 4 states that $X(h)$ is the image of $h\in\mathcal{H}_T$ under the isometry that extends the relation $X(\mathbf{1}_t) := X_t$ linearly. Consequently, we can have the following definition.

Definition 5 (Wiener integral). $X(h)$ is the Wiener integral of the element $h\in\mathcal{H}_T$ with respect to $X$. One will also denote
$$\int_0^T h(t)\,\mathrm{d}X_t := X(h).$$

Remark 6. Eventually, all the following will mean the same:
$$X(h) = \int_0^T h(t)\,\mathrm{d}X_t = \int_0^T h\,\mathrm{d}X = \int h\,\mathrm{d}X.$$

Remark 7. The Hilbert space $\mathcal{H}_T$ is separable if and only if $X$ is separable.

Remark 8. Due to the completion under the inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$ it may happen that the space $\mathcal{H}_T$ is not a space of functions but contains distributions; compare [13] for the case of fractional Brownian motion with Hurst index bigger than half.

Definition 9. The function space $\mathcal{H}_T^0 \subset \mathcal{H}_T$ is the space of functions that can be approximated by step-functions on $[0,T]$ in the inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$.

Example 10. If the covariance $R$ is of bounded variation, then $\mathcal{H}_T^0$ is the space of functions $f$ satisfying
$$\int_0^T\int_0^T |f(t)|\,|f(s)|\,|R|(\mathrm{d}t,\mathrm{d}s) < \infty.$$

Remark 11. Note that it may be that $f\in\mathcal{H}_T$ but for some $T' < T$ we have $f\mathbf{1}_{[0,T']}\notin\mathcal{H}_{T'}$; compare [14] for an example with fractional Brownian motion with Hurst index less than half. For this reason we keep the notation $\mathcal{H}_T$ instead of simply writing $\mathcal{H}$. For the same reason we include the dependence on $T$ whenever there is one.

3. Fredholm Representation

Theorem 12 (Fredholm representation). Let $X = (X_t)_{t\in[0,T]}$ be a separable centered Gaussian process. Then there exist a kernel $K_T\in L^2([0,T]^2)$ and a Brownian motion $W = (W_t)_{t\ge0}$, independent of $X$, such that
$$X_t = \int_0^T K_T(t,s)\,\mathrm{d}W_s \tag{8}$$
in law if and only if the covariance $R$ of $X$ satisfies the trace condition
$$\int_0^T R(t,t)\,\mathrm{d}t < \infty. \tag{9}$$
Representation (8) is unique in the sense that any other representation with kernel $\widetilde K_T$, say, is connected to (8) by a unitary operator $U$ on $L^2([0,T])$ such that $\widetilde K_T = K_T U$. Moreover, one may assume that $K_T$ is symmetric.

Proof. Let us first remark that condition (9) is precisely what we need to invoke Mercer's theorem and take the square root in the resulting expansion.
Now, by Mercer's theorem we can expand the covariance function $R$ on $[0,T]^2$ as
$$R(t,s) = \sum_{i=1}^\infty \lambda_i\, e_i(t)\, e_i(s), \tag{10}$$
where the $\lambda_i$ and $e_i$ are the eigenvalues and the eigenfunctions of the covariance operator
$$Rf(t) = \int_0^T f(s)\,R(t,s)\,\mathrm{d}s.$$
Moreover, $(e_i)_{i=1}^\infty$ is an orthonormal system on $L^2([0,T])$.
Now, $R$, being a covariance operator, admits a square root operator $K_T$ defined by the relation
$$\int_0^T K_T(t,u)\,K_T(s,u)\,\mathrm{d}u = R(t,s) \tag{12}$$
for all $s,t\in[0,T]$. Now, condition (9) means that $R$ is trace class and, consequently, $K_T$ is a Hilbert-Schmidt operator. In particular, $K_T$ is a compact operator. Therefore, it admits a kernel. Indeed, a kernel can be defined by using the Mercer expansion (10) as
$$K_T(t,s) = \sum_{i=1}^\infty \sqrt{\lambda_i}\, e_i(t)\, e_i(s).$$
This kernel is obviously symmetric. Now, it follows by the Itô isometry that
$$\mathbb{E}\left[\int_0^T K_T(t,u)\,\mathrm{d}W_u\int_0^T K_T(s,u)\,\mathrm{d}W_u\right] = \int_0^T K_T(t,u)\,K_T(s,u)\,\mathrm{d}u = R(t,s),$$
and representation (8) follows from this.
Finally, let us note that the uniqueness up to a unitary transformation is obvious from the square root relation (12).
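The construction in the proof can be imitated on a grid, where the Mercer expansion becomes the eigendecomposition of the covariance matrix and the square-root relation (12) becomes a matrix identity. A hedged numpy sketch; the Brownian-bridge covariance and the grid size are illustrative assumptions:

```python
import numpy as np

# Discretized covariance of a Brownian bridge on [0, 1] (illustrative choice):
# R(t, s) = min(t, s) - t * s.
n = 50
t = np.linspace(0.02, 0.98, n)
R = np.minimum(t[:, None], t[None, :]) - t[:, None] * t[None, :]

# Discrete Mercer expansion: eigendecomposition of the symmetric matrix R.
lam, e = np.linalg.eigh(R)
lam = np.clip(lam, 0.0, None)          # kill tiny negative round-off

# Symmetric square root K = sum_i sqrt(lam_i) e_i e_i^T, the discrete
# analogue of K_T(t, s) = sum_i sqrt(lambda_i) e_i(t) e_i(s).
K = (e * np.sqrt(lam)) @ e.T

# Square-root relation: K K^T = R, and K is symmetric.
assert np.allclose(K @ K.T, R, atol=1e-10)
assert np.allclose(K, K.T)
```

Simulating `K @ xi` for iid standard normals `xi` then produces a Gaussian vector with covariance `R`, which is the discrete counterpart of representation (8).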

Remark 13. The Fredholm representation (8) also holds for infinite intervals, that is, for $T = \infty$, if the trace condition (9) holds. Unfortunately, this is seldom the case.

Remark 14. The above proof shows that the Fredholm representation (8) holds in law. However, one can also construct the process $X$ via (8) for a given Brownian motion $W$. In this case, representation (8) holds of course in $L^2(\Omega)$. Finally, note that in general it is not possible to construct the Brownian motion in representation (8) from the process $X$. Indeed, there might not be enough randomness in $X$. To construct $W$ from $X$ one needs that the indicators $\mathbf{1}_t$, $t\in[0,T]$, belong to the range of the operator $K_T$.

Remark 15. We remark that the separability of $X$ ensures a representation of form (8) where the kernel $K_T$ merely satisfies $K_T(t,\cdot)\in L^2([0,T])$ for all $t$, which may happen if the trace condition (9) fails. In this case, however, the kernel $K_T$ does not belong to $L^2([0,T]^2)$, which may be undesirable.

Example 16. Let us consider the following very degenerate case: suppose $X_t = f(t)\xi$, where $f$ is deterministic and $\xi$ is a standard normal random variable. Suppose $f$ is not identically zero. Then the covariance of $X$ is
$$R(t,s) = f(t)f(s).$$
So, $X$ admits the representation
$$X_t = \frac{f(t)}{\sqrt{T}}\int_0^T \mathrm{d}W_s \tag{15}$$
with kernel $K_T(t,s) = f(t)/\sqrt{T}$. Now, if $f\in L^2([0,T])$, then condition (9) is satisfied and $K_T\in L^2([0,T]^2)$. On the other hand, even if $f\notin L^2([0,T])$, we can still write $X$ in form (15). However, in this case the kernel $K_T$ does not belong to $L^2([0,T]^2)$.

Example 17. Consider a truncated series expansion
$$X_t = \sum_{k=1}^n \xi_k\int_0^t e_k(s)\,\mathrm{d}s,$$
where the $\xi_k$ are independent standard normal random variables and $(e_k)_{k=1}^\infty$ is an orthonormal basis in $L^2([0,T])$. Now it is straightforward to check that this process is not purely nondeterministic (see [15] for the definition) and, consequently, $X$ cannot have a Volterra representation, while it is clear that $X$ admits a Fredholm representation. On the other hand, by choosing the functions $e_k$ to be the trigonometric basis of $L^2([0,T])$, $X$ is a finite-rank approximation of the Karhunen-Loève representation of standard Brownian motion on $[0,T]$. Hence by letting $n$ tend to infinity we obtain the standard Brownian motion and hence a Volterra process.

Example 18. Let $W$ be a standard Brownian motion on $[0,1]$ and consider the Brownian bridge $B$. Now, there are two representations of the Brownian bridge (see [16] and references therein on the representations of Gaussian bridges). The orthogonal representation is
$$B_t = W_t - t W_1.$$
Consequently, $B$ has a Fredholm representation with kernel $K(t,s) = \mathbf{1}_{[0,t)}(s) - t$. The canonical representation of the Brownian bridge is
$$B_t = (1-t)\int_0^t \frac{\mathrm{d}W_s}{1-s}.$$
Consequently, the Brownian bridge also has a Volterra-type representation with kernel $K(t,s) = \frac{1-t}{1-s}\,\mathbf{1}_{[0,t)}(s)$.
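With $T = 1$, the Fredholm kernel of the orthogonal representation integrates over the whole interval, and one can verify numerically that it reproduces the bridge covariance $\min(t,s) - ts$. A sketch; the grids below are illustrative assumptions:

```python
import numpy as np

# Integration grid for the dW variable (midpoints of [0, 1]) and
# evaluation times; both grid sizes are illustrative.
n = 1000
u = (np.arange(n) + 0.5) / n
t = np.arange(1, 10) / 10              # 0.1, ..., 0.9

# Orthogonal (Fredholm) kernel of the Brownian bridge on [0, 1]:
# K(t, s) = 1_{[0, t)}(s) - t.  Note: integration runs over all of [0, 1].
K = (u[None, :] < t[:, None]).astype(float) - t[:, None]

# The representation B_t = int_0^1 K(t, s) dW_s has covariance
# int_0^1 K(t, u) K(s, u) du = min(t, s) - t * s, the bridge covariance.
cov = (K @ K.T) / n
R = np.minimum(t[:, None], t[None, :]) - t[:, None] * t[None, :]
assert np.allclose(cov, R)
```

The midpoint grid makes the indicator integrals exact here, so the check passes to machine precision rather than only up to discretization error.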

4. Transfer Principle and Stochastic Analysis

4.1. Wiener Integrals

Theorem 22 below is the transfer principle in the context of Wiener integrals. The same principle extends to multiple Wiener integrals and Malliavin calculus in the following subsections.

Recall that for any kernel $\Gamma\in L^2([0,T]^2)$ its associated operator on $L^2([0,T])$ is
$$\Gamma f(t) = \int_0^T f(s)\,\Gamma(t,s)\,\mathrm{d}s.$$

Definition 19 (adjoint associated operator). The adjoint associated operator $\Gamma^*$ of a kernel $\Gamma\in L^2([0,T]^2)$ is defined by linearly extending the relation
$$\Gamma^*\mathbf{1}_t = \Gamma(t,\cdot).$$

Remark 20. The name and notation of "adjoint" for $\Gamma^*$ come from Alòs et al. [1], where it was shown that in their Volterra context $\Gamma^*$ admits a kernel and is an adjoint of $\Gamma$ in the sense that
$$\int_0^T \Gamma^* f(t)\,g(t)\,\mathrm{d}t = \int_0^T f(t)\,(\Gamma g)(\mathrm{d}t)$$
for step-functions $f$ and $g$ belonging to $L^2([0,T])$. It is straightforward to check that this statement is valid also in our case.

Example 21. Suppose the kernel $\Gamma(\cdot,s)$ is of bounded variation for all $s$ and that $f$ is nice enough. Then
$$\Gamma^* f(s) = \int_0^T f(t)\,\Gamma(\mathrm{d}t, s).$$

Theorem 22 (transfer principle for Wiener integrals). Let $X$ be a separable centered Gaussian process with representation (8) and let $f\in\mathcal{H}_T$. Then
$$\int_0^T f(t)\,\mathrm{d}X_t = \int_0^T K_T^* f(t)\,\mathrm{d}W_t.$$

Proof. Assume first that $f$ is an elementary function of form
$$f(t) = \sum_{k=1}^n a_k\,\mathbf{1}_{A_k}(t)$$
for some disjoint intervals $A_k = (t_{k-1}, t_k]$. Then the claim follows from the very definition of the operator $K_T^*$ and the Wiener integral with respect to $X$, together with representation (8). Furthermore, this shows that $K_T^*$ provides an isometry between $\mathcal{H}_T$ and $L^2([0,T])$. Hence $\mathcal{H}_T$ can be viewed as the closure of the elementary functions with respect to $\|f\|_{\mathcal{H}_T} = \|K_T^* f\|_{L^2([0,T])}$, which proves the claim.
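Discretely, the isometry in the proof is the matrix identity $f^\top R f = \|K^\top f\|^2$ for a symmetric square root $K$ of the covariance matrix $R$: the variance of the Wiener integral computed via the covariance equals the $L^2$-norm of the transferred integrand. A hedged sketch; the fBm covariance, grid, and test integrand are illustrative assumptions:

```python
import numpy as np

# Discretized covariance of fBm with H = 0.6 (illustrative choice).
H, n = 0.6, 40
t = np.linspace(1/n, 1.0, n)
R = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
           - np.abs(t[:, None] - t[None, :])**(2*H))

# Symmetric square root of R, i.e. the discrete Fredholm kernel K_T.
lam, e = np.linalg.eigh(R)
K = (e * np.sqrt(np.clip(lam, 0.0, None))) @ e.T

f = np.sin(np.pi * t)                  # an arbitrary "integrand"

# Wiener-integral variance two ways: via the covariance R, and via the
# discrete adjoint associated operator K^T applied to f.
var_via_R = f @ R @ f
var_via_K = np.sum((K.T @ f)**2)
assert np.allclose(var_via_R, var_via_K)
```

Both sides use the same grid, so the Riemann-measure weights cancel and the identity holds exactly up to floating-point error.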

4.2. Multiple Wiener Integrals

The study of multiple Wiener integrals goes back to Itô [17] who studied the case of Brownian motion. Later Huang and Cambanis [18] extended the notion to general Gaussian processes. Dasgupta and Kallianpur [19, 20] and Perez-Abreu and Tudor [21] studied multiple Wiener integrals in the context of fractional Brownian motion. In [19, 20] a method that involved a prior control measure was used and in [21] a transfer principle was used. Our approach here extends the transfer principle method used in [21].

We begin by recalling multiple Wiener integrals with respect to Brownian motion, and then we apply the transfer principle to generalize the theory to arbitrary Gaussian processes.

Let $f$ be an elementary function on $[0,T]^n$ that vanishes on the diagonals; that is,
$$f = \sum_{i_1,\ldots,i_n=1}^m a_{i_1\cdots i_n}\,\mathbf{1}_{\Delta_{i_1}\times\cdots\times\Delta_{i_n}},$$
where $\Delta_k := (t_{k-1}, t_k]$ and $a_{i_1\cdots i_n} = 0$ whenever $i_k = i_\ell$ for some $k\ne\ell$. For such $f$ we define the multiple Wiener integral as
$$I_n^W(f) := \int_0^T\cdots\int_0^T f(t_1,\ldots,t_n)\,\mathrm{d}W_{t_1}\cdots\mathrm{d}W_{t_n} := \sum_{i_1,\ldots,i_n=1}^m a_{i_1\cdots i_n}\,\Delta W_{i_1}\cdots\Delta W_{i_n},$$
where we have denoted $\Delta W_k := W_{t_k} - W_{t_{k-1}}$. For $n=0$ we set $I_0^W(f) := f$. Now, it can be shown that the elementary functions that vanish on the diagonals are dense in $L^2([0,T]^n)$. Thus, one can extend the operator $I_n^W$ to the space $L^2([0,T]^n)$. This extension is called the multiple Wiener integral with respect to the Brownian motion.
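The isometry underlying this extension, $\mathbb{E}[I_2^W(f)^2] = 2\|\tilde f\|_{L^2}^2$ for symmetric $f$, can be sanity-checked by Monte Carlo in the simplest case of two increments. Grid, seed, and tolerances below are illustrative assumptions:

```python
import numpy as np

# Two increments of length dt = 0.5 and the symmetric elementary function
# f = 1_{D1 x D2} + 1_{D2 x D1}, which vanishes on the diagonal.  Then
# I_2^W(f) = 2 * dW_1 * dW_2 and ||f||_{L^2}^2 = 2 * dt^2.
rng = np.random.default_rng(0)
dt = 0.5
N = 200_000
dW = rng.normal(0.0, np.sqrt(dt), size=(N, 2))   # iid Brownian increments

I2 = 2.0 * dW[:, 0] * dW[:, 1]
lhs = np.mean(I2**2)          # Monte Carlo estimate of E[I_2(f)^2]
rhs = 2.0 * (2.0 * dt**2)     # 2 * ||f||_{L^2}^2 = 1.0

assert abs(float(np.mean(I2))) < 0.02   # the integral is centered
assert abs(lhs - rhs) < 0.05            # isometry, up to Monte Carlo error
```

Because the diagonal terms $\Delta W_k^2$ are excluded, the integral is centered and the second moment matches the (symmetrized) $L^2$-norm, which is exactly what the density argument extends to general $f$.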

Remark 23. It is well-known that $I_n^W(f)$ can be understood as a multiple or iterated Itô integral if and only if $f(t_1,\ldots,t_n) = 0$ unless $t_1\le\cdots\le t_n$. In this case we have
$$I_n^W(f) = \int_0^T\int_0^{t_n}\cdots\int_0^{t_2} f(t_1,\ldots,t_n)\,\mathrm{d}W_{t_1}\cdots\mathrm{d}W_{t_n}.$$
For Gaussian processes that are not martingales this fact is totally useless.

For a general Gaussian process $X$, recall first the Hermite polynomials:
$$H_n(x) := \frac{(-1)^n}{n!}\,e^{x^2/2}\,\frac{\mathrm{d}^n}{\mathrm{d}x^n}\,e^{-x^2/2}.$$
For any $n\ge1$ let the $n$th Wiener chaos of $X$ be the closed linear subspace of $L^2(\Omega)$ generated by the random variables $\{H_n(X(h)) : h\in\mathcal{H}_T,\ \|h\|_{\mathcal{H}_T}=1\}$. It is well-known that the mapping $I_n^X(h^{\otimes n}) := n!\,H_n(X(h))$ provides a linear isometry between the symmetric tensor product $\mathcal{H}_T^{\odot n}$ and the $n$th Wiener chaos. The random variables $I_n^X(h^{\otimes n})$ are called multiple Wiener integrals of order $n$ with respect to the Gaussian process $X$.
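The orthogonality that makes the chaos decomposition work can be checked numerically with Gauss-Hermite quadrature. Note that numpy's `hermite_e` module uses the probabilists' polynomials $\mathrm{He}_n$ without the $1/n!$ normalization used above, so the orthogonality relation reads $\mathbb{E}[\mathrm{He}_n(\xi)\mathrm{He}_m(\xi)] = n!\,\mathbf{1}\{n=m\}$ for $\xi\sim N(0,1)$. A minimal sketch:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite He_n

# Gauss-Hermite-e quadrature integrates against the weight exp(-x^2 / 2);
# dividing by sqrt(2*pi) turns it into an expectation under N(0, 1).
x, w = He.hermegauss(60)
E = lambda vals: float(w @ vals) / np.sqrt(2.0 * np.pi)

def He_n(n, x):
    """Evaluate the n-th probabilists' Hermite polynomial at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return He.hermeval(x, c)

# Orthogonality behind the Wiener chaos: distinct chaoses are orthogonal
# in L^2, and the n-th one carries the factor n!.
assert np.isclose(E(He_n(3, x) * He_n(3, x)), math.factorial(3))
assert abs(E(He_n(2, x) * He_n(3, x))) < 1e-8
```

The quadrature is exact for polynomials of this degree, so the assertions hold to floating-point accuracy rather than only approximately.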

Let us now consider the multiple Wiener integrals $I_n^X$ for a general Gaussian process $X$. We define the multiple integral $I_n^X$ by using the transfer principle in Definition 25 and later argue that this is the "correct" way of defining them. So, let $X$ be a centered Gaussian process on $[0,T]$ with covariance $R$ and representation (8) with kernel $K_T$.

Definition 24 ($n$-fold adjoint associated operator). Let $K_T$ be the kernel in (8) and let $K_T^*$ be its adjoint associated operator. Define
$$(K_T^*)^{\otimes n}(h_1\otimes\cdots\otimes h_n) := K_T^* h_1\otimes\cdots\otimes K_T^* h_n.$$
In the same way, define the operator on the symmetric tensor products,
$$(K_T^*)^{\odot n}\colon \mathcal{H}_T^{\odot n}\to L^2([0,T]^n).$$
Here the tensor products are understood in the sense of Hilbert spaces; that is, they are closed under the inner product corresponding to the $n$-fold product of the underlying inner product.

Definition 25. Let $X$ be a centered Gaussian process with representation (8) and let $f\in\mathcal{H}_T^{\otimes n}$. Then
$$I_n^X(f) := I_n^W\big((K_T^*)^{\otimes n} f\big).$$

The following example should convince the reader that this is indeed the correct definition.

Example 26. Let $n=2$ and let $f = h_1\otimes h_2$, where both $h_1$ and $h_2$ are step-functions. Then
$$I_2^X(f) = I_2^W\big(K_T^* h_1\otimes K_T^* h_2\big) = W(K_T^* h_1)\,W(K_T^* h_2) - \langle K_T^* h_1, K_T^* h_2\rangle_{L^2([0,T])} = X(h_1)\,X(h_2) - \langle h_1, h_2\rangle_{\mathcal{H}_T}.$$

The following proposition shows that our approach to defining multiple Wiener integrals is consistent with the traditional approach, where multiple Wiener integrals for a more general Gaussian process are defined as elements of the closed linear space generated by Hermite polynomials.

Proposition 27. Let $H_n$ be the $n$th Hermite polynomial and let $h\in\mathcal{H}_T$. Then
$$I_n^X(h^{\otimes n}) = n!\,\|h\|_{\mathcal{H}_T}^n\, H_n\!\left(\frac{X(h)}{\|h\|_{\mathcal{H}_T}}\right).$$

Proof. First note that without loss of generality we can assume $\|h\|_{\mathcal{H}_T} = 1$. Now by the definition of the multiple Wiener integral with respect to $X$ we have
$$I_n^X(h^{\otimes n}) = I_n^W\big((K_T^*)^{\otimes n} h^{\otimes n}\big) = I_n^W\big((K_T^* h)^{\otimes n}\big),$$
where
$$\|K_T^* h\|_{L^2([0,T])} = \|h\|_{\mathcal{H}_T} = 1.$$
Consequently, by [22, Proposition 1.1.4] we obtain
$$I_n^W\big((K_T^* h)^{\otimes n}\big) = n!\,H_n\big(W(K_T^* h)\big),$$
which together with Theorem 22 implies the result.

Proposition 27 extends to the following product formula, which is also well-known in the Gaussian martingale case but apparently new for general Gaussian processes. Again, the proof is a straightforward application of the transfer principle.

Proposition 28. Let $f\in\mathcal{H}_T^{\odot p}$ and $g\in\mathcal{H}_T^{\odot q}$. Then
$$I_p^X(f)\,I_q^X(g) = \sum_{r=0}^{p\wedge q} r!\binom{p}{r}\binom{q}{r}\, I_{p+q-2r}^X(f\otimes_r g),$$
where
$$f\otimes_r g := \big((K_T^*)^{\otimes(p+q-2r)}\big)^{-1}\big((K_T^*)^{\otimes p} f\otimes_r (K_T^*)^{\otimes q} g\big)$$
and $\big((K_T^*)^{\otimes(p+q-2r)}\big)^{-1}$ denotes the preimage of $(K_T^*)^{\otimes(p+q-2r)}$.

Proof. The proof follows directly from the definition of $I_n^X$ and [22, Proposition 1.1.3].

Example 29. Let $f$ and $g$ be of forms $f = h_1\otimes h_2$ and $g = h_3\otimes h_4$, where the $h_i$ are step-functions. Then, by Proposition 28,
$$I_2^X(f)\,I_2^X(g) = I_4^X(f\otimes g) + \sum_{r=1}^2 r!\binom{2}{r}^2 I_{4-2r}^X(f\otimes_r g).$$
Hence the product of two multiple Wiener integrals of order $2$ expands into chaoses of orders $4$, $2$, and $0$.

Remark 30. In the literature multiple Wiener integrals are usually defined as elements of the closed linear space spanned by Hermite polynomials. In such a case Proposition 27 is clearly true by the very definition. Furthermore, one has a multiplication formula (see, e.g., [23]):
$$I_p^X(f)\,I_q^X(g) = \sum_{r=0}^{p\wedge q} r!\binom{p}{r}\binom{q}{r}\, I_{p+q-2r}^X(f\,\widetilde\otimes_r\, g),$$
where the contraction is given by
$$f\otimes_r g := \sum_{i_1,\ldots,i_r=1}^\infty \big\langle f, e_{i_1}\otimes\cdots\otimes e_{i_r}\big\rangle_{\mathcal{H}_T^{\otimes r}}\otimes\big\langle g, e_{i_1}\otimes\cdots\otimes e_{i_r}\big\rangle_{\mathcal{H}_T^{\otimes r}},$$
$\widetilde\otimes$ denotes the symmetrized tensor product, and $(e_i)_{i=1}^\infty$ is a complete orthonormal basis of the Hilbert space $\mathcal{H}_T$. Clearly, by Proposition 27, both formulas coincide. This also shows that (39) is well-defined.

4.3. Malliavin Calculus and Skorohod Integrals

We begin by recalling some basic facts on Malliavin calculus.

Definition 31. Denote by $\mathcal{S}$ the space of all smooth random variables of the form
$$F = f\big(X(h_1),\ldots,X(h_n)\big), \qquad h_1,\ldots,h_n\in\mathcal{H}_T,$$
where $f\in C_b^\infty(\mathbb{R}^n)$; that is, $f$ and all its derivatives are bounded. The Malliavin derivative $D_T F$ of $F\in\mathcal{S}$ is an element of $L^2(\Omega;\mathcal{H}_T)$ defined by
$$D_T F := \sum_{i=1}^n \partial_i f\big(X(h_1),\ldots,X(h_n)\big)\,h_i.$$
In particular, $D_T X_t = \mathbf{1}_t$.

Definition 32. Let $\mathbb{D}^{1,2}$ be the Hilbert space of all square integrable Malliavin differentiable random variables defined as the closure of $\mathcal{S}$ with respect to the norm
$$\|F\|_{1,2}^2 := \mathbb{E}\big[|F|^2\big] + \mathbb{E}\big[\|D_T F\|_{\mathcal{H}_T}^2\big].$$
The divergence operator $\delta_T$ is defined as the adjoint operator of the Malliavin derivative $D_T$.

Definition 33. The domain $\operatorname{Dom}\delta_T$ of the operator $\delta_T$ is the set of random variables $u\in L^2(\Omega;\mathcal{H}_T)$ satisfying
$$\big|\mathbb{E}\langle D_T F, u\rangle_{\mathcal{H}_T}\big| \le c_u\,\|F\|_{L^2(\Omega)}$$
for any $F\in\mathbb{D}^{1,2}$ and some constant $c_u$ depending only on $u$. For $u\in\operatorname{Dom}\delta_T$ the divergence $\delta_T(u)$ is a square integrable random variable defined by the duality relation
$$\mathbb{E}\big[F\,\delta_T(u)\big] = \mathbb{E}\langle D_T F, u\rangle_{\mathcal{H}_T} \tag{47}$$
for all $F\in\mathbb{D}^{1,2}$.

Remark 34. It is well-known that $\mathbb{D}^{1,2}\subset\operatorname{Dom}\delta_T$.

We use the notation
$$\delta_T(u) = \int_0^T u_s\,\delta X_s.$$

Theorem 35 (transfer principle for Malliavin calculus). Let $X$ be a separable centered Gaussian process with Fredholm representation (8). Let $D_T$ and $\delta_T$ be the Malliavin derivative and the Skorohod integral with respect to $X$ on $[0,T]$. Similarly, let $D_T^W$ and $\delta_T^W$ be the Malliavin derivative and the Skorohod integral with respect to the Brownian motion $W$ of (8) restricted on $[0,T]$. Then
$$\delta_T = \delta_T^W K_T^*, \qquad K_T^* D_T = D_T^W.$$

Proof. The proof follows directly from the transfer principle and the isometry provided by $K_T^*$, with the same arguments as in [1]. Indeed, by isometry we have
$$\mathcal{H}_T = (K_T^*)^{-1}\big(L^2([0,T])\big),$$
where $(K_T^*)^{-1}$ denotes the preimage, which implies that
$$\mathbb{D}^{1,2} = (K_T^*)^{-1}\big(\mathbb{D}^{1,2}_W\big),$$
which justifies $K_T^* D_T = D_T^W$. Furthermore, we have the relation
$$\mathbb{E}\langle u, D_T F\rangle_{\mathcal{H}_T} = \mathbb{E}\langle K_T^* u, D_T^W F\rangle_{L^2([0,T])}$$
for any smooth random variable $F$ and any $u\in L^2(\Omega;\mathcal{H}_T)$. Hence, by the very definition of the divergence and the transfer principle, we obtain
$$\mathbb{E}\big[F\,\delta_T(u)\big] = \mathbb{E}\langle u, D_T F\rangle_{\mathcal{H}_T} = \mathbb{E}\langle K_T^* u, D_T^W F\rangle_{L^2([0,T])} = \mathbb{E}\big[F\,\delta_T^W(K_T^* u)\big],$$
proving the claim $\delta_T = \delta_T^W K_T^*$.

Now we are ready to show that the definition of the multiple Wiener integral in Section 4.2 is correct in the sense that it agrees with the iterated Skorohod integral.

Proposition 36. Let $f$ be of form $f = f_1\otimes\cdots\otimes f_n$, where each $f_k$ is a step-function. Then $f$ is iteratively $n$ times Skorohod integrable and
$$\int_0^T\cdots\int_0^T f(t_1,\ldots,t_n)\,\delta X_{t_1}\cdots\delta X_{t_n} = I_n^X(f). \tag{55}$$
Moreover, if $f\in\mathcal{H}_T^{\otimes n}$ is such that it is $n$ times iteratively Skorohod integrable, then (55) still holds.

Proof. Again the idea is to use the transfer principle together with induction. Note first that the statement is true for $n=1$ by definition, and assume next that the statement is valid for $k = 1,\ldots,n-1$. We denote $\tilde f = f_1\otimes\cdots\otimes f_{n-1}$. Hence, by the induction assumption, we have
$$\int_0^T\cdots\int_0^T \tilde f(t_1,\ldots,t_{n-1})\,\delta X_{t_1}\cdots\delta X_{t_{n-1}} = I_{n-1}^X(\tilde f).$$
Put now $F = I_{n-1}^X(\tilde f)$ and $u = f_n$. Hence by [22, Proposition 1.3.3] and by applying the transfer principle we obtain that $Fu$ belongs to $\operatorname{Dom}\delta_T$ and
$$\delta_T(Fu) = F\,\delta_T(u) - \langle D_T F, u\rangle_{\mathcal{H}_T}.$$
Hence the result is valid also for $n$ by Proposition 28 with $q = 1$.
The claim for general $f\in\mathcal{H}_T^{\otimes n}$ follows by approximating with products of simple functions.

Remark 37. Note that (55) does not hold for arbitrary $f\in\mathcal{H}_T^{\otimes n}$ in general without the a priori assumption of $n$ times iterative Skorohod integrability. For example, let $n=2$, let $X = B^H$ be a fractional Brownian motion with $H\le\frac14$, and define $f_t(s,v) = \mathbf{1}_{[0,t]}(s)\,\mathbf{1}_{[0,t]}(v)$ for some fixed $t\in(0,T]$. Then
$$I_2^X(f_t) = X_t^2 - t^{2H}.$$
But $X_\cdot\mathbf{1}_{[0,t]}$ does not belong to $\operatorname{Dom}\delta_T$ (see [7]).

We end this section by providing an extension of the Itô formulas of Alòs et al. [1]. They considered Gaussian Volterra processes; that is, they assumed the representation
$$X_t = \int_0^t K(t,s)\,\mathrm{d}W_s,$$
where the kernel $K$ satisfied certain technical assumptions. In [1] it was proved that in the case of Volterra processes one has
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\,\delta X_s + \frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s) \tag{60}$$
if $f$ satisfies the growth condition
$$\max\big(|f(x)|, |f'(x)|, |f''(x)|\big) \le c\,e^{\lambda x^2} \tag{61}$$
for some $c>0$ and $\lambda < \frac14\big(\sup_{s\le T}\mathbb{E}X_s^2\big)^{-1}$. In the following we will consider a different approach which enables us to
(i) prove that such a formula holds with minimal requirements,
(ii) give a more instructive proof of such a result,
(iii) extend the result from the Volterra context to more general Gaussian processes,
(iv) drop some technical assumptions posed in [1].
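As a sanity check of the Itô formula above in the simplest special case: for standard Brownian motion $R(t,s) = \min(t,s)$, so $\mathrm{d}R(s,s) = \mathrm{d}s$, and for an adapted integrand the Skorohod integral coincides with the Itô integral. A minimal pathwise simulation for $f(x)=x^2$; grid size, seed, and tolerance are illustrative assumptions:

```python
import numpy as np

# One Brownian path on [0, T]; the Ito formula for f(x) = x^2 reads
# W_T^2 = 2 * int_0^T W dW + T, with the correction (1/2) f'' dR(s,s) = ds.
rng = np.random.default_rng(1)
T, n = 1.0, 100_000
dW = rng.normal(0.0, np.sqrt(T / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))       # W_0, ..., W_T on the grid

ito_integral = np.sum(W[:-1] * dW)               # forward (Ito) Riemann sums
lhs = W[-1]**2
rhs = 2.0 * ito_integral + T                     # f' = 2x, (1/2) f'' = 1

assert abs(lhs - rhs) < 0.05
```

The residual is exactly $\sum(\Delta W_i)^2 - T$, the quadratic-variation error, which vanishes as the grid is refined; this is the classical case that the general formula reduces to.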

For simplicity, we assume that the variance of $X$ is of bounded variation to guarantee the existence of the integral
$$\int_0^t f''(X_s)\,\mathrm{d}R(s,s). \tag{62}$$
If the variance is not of bounded variation, then integral (62) may be understood by integration by parts if $f''$ is smooth enough or, in the general case, via the inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$. In Theorem 40 we also have to assume that the variance of $X$ is bounded.

The result for polynomials is straightforward, once we assume that the paths of polynomials of $X$ belong to $L^2(\Omega;\mathcal{H}_T)$.

Proposition 38 (Itô formula for polynomials). Let be a separable centered Gaussian process with covariance and assume that is a polynomial. Furthermore, assume that for each polynomial one has . Then for each one has if and only if belongs to .

Remark 39. The message of the above result is that once the processes $q(X_\cdot)\mathbf{1}_{[0,t]}$ belong to $L^2(\Omega;\mathcal{H}_T)$, then they automatically belong to the domain of $\delta_T$, which is a subspace of $L^2(\Omega;\mathcal{H}_T)$. However, in order to check $q(X_\cdot)\mathbf{1}_{[0,t]}\in L^2(\Omega;\mathcal{H}_T)$ one needs more information on the kernel $K_T$. A sufficient condition is provided in Corollary 43, which covers many cases of interest.

Proof. By definition and applying the transfer principle, we have to prove that $p'(X_\cdot)\mathbf{1}_{[0,t]}$ belongs to the domain of $\delta_T$ and that
$$\mathbb{E}\left[\left(p(X_t)-p(X_0)-\frac12\int_0^t p''(X_s)\,\mathrm{d}R(s,s)\right)\xi\right] = \mathbb{E}\left[\big\langle p'(X_\cdot)\mathbf{1}_{[0,t]},\,D_T\xi\big\rangle_{\mathcal{H}_T}\right]$$
for every random variable $\xi$ from a total subset of $L^2(\Omega)$. In other words, it is sufficient to show that (69) is valid for random variables of form $\xi = I_n(h^{\otimes n})$, where $h$ is a step-function.
Note first that it is sufficient to prove the claim only for Hermite polynomials $H_k$. Indeed, it is well-known that any polynomial can be expressed as a linear combination of Hermite polynomials and, consequently, the result for an arbitrary polynomial follows by linearity.
We proceed by induction. First it is clear that the first two polynomials $H_0$ and $H_1$ satisfy (64). Furthermore, by assumption $X_\cdot\mathbf{1}_{[0,t]}$ belongs to $\operatorname{Dom}\delta_T$, from which (64) is easily deduced by [22, Proposition 1.3.3]. Assume next that the result is valid for Hermite polynomials $H_k$, $k = 0,1,\ldots,n$. Then, recall the well-known recursion formulas
$$H_{n+1}(x) = \frac{x}{n+1}\,H_n(x) - \frac{1}{n+1}\,H_{n-1}(x), \qquad H_n'(x) = H_{n-1}(x).$$
The induction step follows with straightforward calculations by using the recursion formulas above and [22, Proposition 1.3.3]. We leave the details to the reader.

We will now illustrate how the result can be generalized to functions satisfying the growth condition (61) by using Proposition 38. First note that the growth condition (61) is indeed natural since it guarantees that the left side of (60) is square integrable. Consequently, since the operator $\delta_T$ is a mapping from $L^2(\Omega;\mathcal{H}_T)$ into $L^2(\Omega)$, functions satisfying (61) are the largest class of functions for which (60) can hold. However, it is not clear in general whether $f'(X_\cdot)\mathbf{1}_{[0,t]}$ belongs to $\operatorname{Dom}\delta_T$. Indeed, for example, in [1] the authors posed additional conditions on the Volterra kernel to guarantee this. As our main result we show that $f'(X_\cdot)\mathbf{1}_{[0,t]}\in L^2(\Omega;\mathcal{H}_T)$ implies that (60) holds. In other words, not only is the Itô formula (60) natural but it is also the only possibility.

Theorem 40 (Itô formula for Skorohod integrals). Let $X$ be a separable centered Gaussian process with covariance $R$ such that all the polynomials $p$ satisfy $p(X_\cdot)\mathbf{1}_{[0,t]}\in L^2(\Omega;\mathcal{H}_T)$. Assume that $f\in C^2$ satisfies the growth condition (61) and that the variance of $X$ is bounded and of bounded variation. If
$$f'(X_\cdot)\mathbf{1}_{[0,t]}\in L^2(\Omega;\mathcal{H}_T) \tag{66}$$
for any $t\in[0,T]$, then
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\,\delta X_s + \frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s).$$

Proof. In this proof we assume, for notational simplicity and with no loss of generality, that $\sup_{s\le T}\mathbb{E}X_s^2 = 1$.
First it is clear that (66) implies that $f'(X_\cdot)\mathbf{1}_{[0,t]}$ belongs to the domain of $\delta_T$. Hence we only have to prove that
$$\mathbb{E}\left[\left(f(X_t)-f(X_0)-\frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s)\right)\xi\right] = \mathbb{E}\left[\big\langle f'(X_\cdot)\mathbf{1}_{[0,t]},\,D_T\xi\big\rangle_{\mathcal{H}_T}\right]$$
for every random variable $\xi = I_n(h^{\otimes n})$.
Now, it is well-known that the Hermite polynomials, when properly scaled, form an orthogonal system in $L^2(\mathbb{R})$ equipped with the Gaussian measure. Now each $f$ satisfying the growth condition (61) has a series representation
$$f(x) = \sum_{k=0}^\infty \alpha_k H_k(x).$$
Indeed, the growth condition (61) implies that
$$\sum_{k=0}^\infty \frac{\alpha_k^2}{k!} < \infty.$$
Furthermore, we have
$$f(X_s) = \sum_{k=0}^\infty \alpha_k H_k(X_s),$$
where the series converges almost surely and in $L^2(\Omega)$, and similar conclusions are valid for the derivatives $f'(X_s)$ and $f''(X_s)$.
Then, by applying (66) we obtain that for any $\varepsilon>0$ there exists $N\in\mathbb{N}$ such that for all $n\ge N$ we have
$$\big\|f_n'(X_\cdot)\mathbf{1}_{[0,t]} - f'(X_\cdot)\mathbf{1}_{[0,t]}\big\|_{L^2(\Omega;\mathcal{H}_T)} < \varepsilon,$$
where
$$f_n(x) = \sum_{k=0}^n \alpha_k H_k(x)$$
denotes the partial sums. Consequently, for random variables of form $\xi = I_m(h^{\otimes m})$ we obtain, by choosing $n$ large enough and applying Proposition 38, that
$$\left|\mathbb{E}\left[\left(f(X_t)-f(X_0)-\frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s)\right)\xi\right] - \mathbb{E}\left[\big\langle f'(X_\cdot)\mathbf{1}_{[0,t]},\,D_T\xi\big\rangle_{\mathcal{H}_T}\right]\right| \le C\varepsilon.$$
Now the left side does not depend on $n$, which concludes the proof.

Remark 41. Note that actually it is sufficient to have the convergence $f_n'(X_\cdot)\mathbf{1}_{[0,t]}\to f'(X_\cdot)\mathbf{1}_{[0,t]}$ in $L^2(\Omega;\mathcal{H}_T)$ for the Hermite partial sums $f_n$, from which the result follows by Proposition 38. Furthermore, taking into account the growth condition (61), this is actually a sufficient and necessary condition for formula (60) to hold. Consequently, our method can also be used to obtain Itô formulas by considering the extended domain of $\delta_T$ (see [9] or [11]). This is the topic of Section 4.4.

Example 42. It is known that if $X = B^H$ is a fractional Brownian motion with $H>\frac14$, then $f'(X_\cdot)\mathbf{1}_{[0,t]}$ satisfies condition (66), while for $H\le\frac14$ it does not (see [22, Chapter 5]). Consequently, a simple application of Theorem 40 covers fractional Brownian motion with $H>\frac14$. For the case $H\le\frac14$ one has to consider the extended domain of $\delta_T$, which is proved in [9]. Consequently, in this case we have (75) for any $t\in[0,T]$.

We end this section by illustrating the power of our method with the following simple corollary which is an extension of [1, Theorem  1].

Corollary 43. Let $X$ be a separable centered continuous Gaussian process with covariance $R$ that is bounded, such that the Fredholm kernel $K_T$ is of bounded variation and
$$\int_0^T\left(\int_0^T |K_T|(\mathrm{d}t,s)\right)^2\mathrm{d}s < \infty.$$
Then, for any $f$ satisfying the growth condition (61), one has
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\,\delta X_s + \frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s).$$

Proof. Note that the assumption is a Fredholm version of condition (K2) in [1], which implies condition (66). Hence, the result follows by Theorem 40.

4.4. Extended Divergence Operator

As shown in Section 4.3, the Itô formula (60) is the only possibility. However, the problem is that the space $L^2(\Omega;\mathcal{H}_T)$ may be too small to contain the elements $f'(X_\cdot)\mathbf{1}_{[0,t]}$. In particular, it may happen that not even the process $X$ itself belongs to $L^2(\Omega;\mathcal{H}_T)$ (see, e.g., [7] for the case of fractional Brownian motion with $H\le\frac14$). This problem can be overcome by considering an extended domain of $\delta_T$. The idea of the extended domain is to extend the inner product $\langle u,\mathbf{1}_t\rangle_{\mathcal{H}_T}$ from simple processes $u$ to more general processes and then define the extended domain by (47) with a restricted class of test variables $F$. This also gives another intuitive reason why the extended domain of $\delta_T$ can be useful; indeed, here we have proved that the Itô formula (60) is the only possibility, and what one essentially needs for such a result is the following:
(i) $f'(X_\cdot)\mathbf{1}_{[0,t]}$ belongs to the domain of $\delta_T$.
(ii) Equation (75) is valid for functions satisfying (61).

Consequently, one should look for extensions of the operator $\delta_T$ such that these two things are satisfied.

To facilitate the extension of the domain, we make the following relatively moderate assumption:
(H) The function $t\mapsto R(t,t)$ is of bounded variation on $[0,T]$ and
$$\sup_{t\in[0,T]}\int_0^T |R|(\mathrm{d}s,t) < \infty.$$

Remark 44. Note that we are making the assumption on the covariance $R$, not the kernel $K_T$. Hence, our case is different from that of [1]. Also, [11] assumed absolute continuity in $R$; we are satisfied with bounded variation.

We will follow the idea of Lei and Nualart [11] and extend the inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$ beyond $\mathcal{H}_T$.

Consider a step-function $u$. Then, on the one hand, by the isometry property we have
$$\langle u,\mathbf{1}_t\rangle_{\mathcal{H}_T} = \int_0^T K_T^* u(s)\,K_T^*\mathbf{1}_t(s)\,\mathrm{d}s = \int_0^T K_T^* u(s)\,K_T(t,s)\,\mathrm{d}s,$$
where $K_T^*\mathbf{1}_t = K_T(t,\cdot)$. On the other hand, by using the adjoint property (see Remark 20) we obtain
$$\int_0^T K_T^* u(s)\,K_T(t,s)\,\mathrm{d}s = \int_0^T u(s)\,\big(K_T K_T(t,\cdot)\big)(\mathrm{d}s),$$
where, computing formally, we have
$$\big(K_T K_T(t,\cdot)\big)(s) = \int_0^T K_T(s,u)\,K_T(t,u)\,\mathrm{d}u = R(t,s).$$
Consequently,
$$\langle u,\mathbf{1}_t\rangle_{\mathcal{H}_T} = \int_0^T u(s)\,R(\mathrm{d}s,t).$$
This gives motivation to the following definition, similar to that of [11, Definition 2.1].

Definition 45. Denote by $\mathcal{T}_T$ the space of measurable functions $u$ satisfying
$$\sup_{t\in[0,T]}\int_0^T |u(s)|\,|R|(\mathrm{d}s,t) < \infty,$$
and let $v$ be a step-function of form
$$v = \sum_{j=1}^m b_j\,\mathbf{1}_{t_j}.$$
Then we extend $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$ to $\mathcal{T}_T$ by defining
$$\langle u,v\rangle_{\mathcal{H}_T} := \sum_{j=1}^m b_j\int_0^T u(s)\,R(\mathrm{d}s,t_j).$$
In particular, this implies that, for $u$ and $v$ as above, we have
$$\langle u,\mathbf{1}_t\rangle_{\mathcal{H}_T} = \int_0^T u(s)\,R(\mathrm{d}s,t).$$
We define the extended domain $\operatorname{Dom}_E\delta_T$ similarly as in [11].

Definition 46. A process $u\in L^1(\Omega;\mathcal{T}_T)$ belongs to the extended domain $\operatorname{Dom}_E\delta_T$ if
$$\big|\mathbb{E}\langle u, D_T F\rangle_{\mathcal{H}_T}\big| \le c_u\,\|F\|_{L^2(\Omega)}$$
for any smooth random variable $F\in\mathcal{S}$. In this case, $\delta_T(u)\in L^2(\Omega)$ is defined by the duality relationship
$$\mathbb{E}\big[F\,\delta_T(u)\big] = \mathbb{E}\langle u, D_T F\rangle_{\mathcal{H}_T}.$$

Remark 47. Note that in general $\operatorname{Dom}\delta_T$ and the extended domain $\operatorname{Dom}_E\delta_T$ are not comparable. See [11] for discussion.

Note now that if a function $f$ satisfies the growth condition (61), then $f'(X_\cdot)\mathbf{1}_{[0,t]}$ belongs to the space of Definition 45, since (61) implies $\sup_{s\in[0,T]}\mathbb{E}\,|f'(X_s)|^2 < \infty$. Consequently, with this definition we are able to get rid of the problem that the processes might not belong to the corresponding $L^2$-spaces. Furthermore, this implies that the series expansion (69) converges in the norm induced by the extended inner product, which in turn implies (75). Hence, it is straightforward to obtain the following by first showing the result for polynomials and then approximating in a manner similar to Section 4.3, but using the extended domain instead.

Theorem 48 (Itô formula for extended Skorohod integrals). Let $X$ be a separable centered Gaussian process with covariance $R$ and assume that $f$ satisfies the growth condition (61). Furthermore, assume that (H) holds and that the variance of $X$ is bounded and of bounded variation. Then for any $t\in[0,T]$ the process $f'(X_\cdot)\mathbf{1}_{[0,t]}$ belongs to $\operatorname{Dom}_E\delta_T$ and
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\,\delta X_s + \frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s).$$

Remark 49. As an application of Theorem 48 it is straightforward to derive a version of the Itô-Tanaka formula under additional conditions which guarantee that for a certain sequence $(f_n)$ of functions we have the convergence of the term $\frac12\int_0^t f_n''(X_s)\,\mathrm{d}R(s,s)$ to the local time. For details we refer to [11], where the authors derived such a formula under their assumptions.

Finally, let us note that the extension to functions $f(t,x)$ is straightforward, where $f$ satisfies the following growth condition:
$$\max\big(|f(t,x)|, |\partial_t f(t,x)|, |\partial_x f(t,x)|, |\partial_{xx} f(t,x)|\big) \le c\,e^{\lambda x^2} \tag{91}$$
for some $c>0$ and $\lambda < \frac14\big(\sup_{s\le T}\mathbb{E}X_s^2\big)^{-1}$.

Theorem 50 (Itô formula for extended Skorohod integrals, II). Let $X$ be a separable centered Gaussian process with covariance $R$ and assume that $f$ satisfies the growth condition (91). Furthermore, assume that (H) holds and that the variance of $X$ is bounded and of bounded variation. Then for any $t\in[0,T]$ the process $\partial_x f(\cdot, X_\cdot)\mathbf{1}_{[0,t]}$ belongs to $\operatorname{Dom}_E\delta_T$ and
$$f(t,X_t) = f(0,X_0) + \int_0^t \partial_t f(s,X_s)\,\mathrm{d}s + \int_0^t \partial_x f(s,X_s)\,\delta X_s + \frac12\int_0^t \partial_{xx} f(s,X_s)\,\mathrm{d}R(s,s).$$

Proof. Taking into account that we have no problems concerning the processes belonging to the required spaces, the formula follows by approximating with polynomials of form $p(t)\,q(x)$ and following the proof of Theorem 40.

5. Applications

We illustrate how some results transfer easily from the Brownian case to the Gaussian Fredholm processes.

5.1. Equivalence in Law

The transfer principle has already been used in connection with the equivalence in law of Gaussian processes in, for example, [24] in the context of fractional Brownian motions and in [2] in the context of Gaussian Volterra processes satisfying certain nondegeneracy conditions. The following proposition uses the Fredholm representation (8) to give a sufficient condition for the equivalence of general Gaussian processes in terms of their Fredholm kernels.

Proposition 51. Let and be two Gaussian processes with Fredholm kernels and , respectively. If there exists a Volterra kernel such that then and are equivalent in law.

Proof. Recall that, by the Hitsuda representation theorem [5, Theorem 6.3], a centered Gaussian process is equivalent in law to a Brownian motion on if and only if there exists a kernel and a Brownian motion such that admits the representation Let have the Fredholm representations Then, is equivalent to if it admits, in law, the representation where is connected to of (95) by (94).
In order to show (96), let be the Fredholm representation of , where is some Brownian motion. Then, by using connection (93) and the Fubini theorem, we obtain Thus, we have shown representation (96) and, consequently, the equivalence of and .

5.2. Generalized Bridges

We consider the conditioning, or bridging, of on linear functionals of its paths: We assume, without any loss of generality, that the functions are linearly independent. Also, without loss of generality, we assume that and that the conditioning is on the set instead of the apparently more general conditioning on the set ; see [16] for how to obtain the more general conditioning from this one.

The rigorous definition of a bridge is as follows.

Definition 52. The generalized bridge measure is the regular conditional law A representation of the generalized Gaussian bridge is any process satisfying

We refer to [16] for more details on generalized Gaussian bridges.

There are many different representations for bridges. A very general one is the so-called orthogonal representation, given by where, by the transfer principle, A more interesting representation is the so-called canonical representation, in which the filtrations of the bridge and of the original process coincide. In [16] such representations were constructed for the so-called prediction-invertible Gaussian processes. In this subsection we show how the transfer principle can be used to construct a canonical-type bridge representation for all Gaussian Fredholm processes. We start with an example that should make it clear how the transfer principle is used.

Example 53. We construct a canonical-type representation for , the bridge of conditioned on . Assume . Now, by the Fredholm representation of , we can write the conditioning as Let us then denote by the canonical representation of the Brownian bridge with conditioning (104). Then, by [16, Theorem 4.12], Now, by integrating against the kernel , we obtain from this that This canonical-type bridge representation seems to be new.
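In the Brownian special case of Example 53 (indicator Fredholm kernel, conditioning the endpoint to zero), the canonical representation is the classical Brownian bridge dynamics, in which the drift pulls the path back to zero at the terminal time. The following is a minimal Euler sketch, checking the classical bridge variance t(T - t)/T at the midpoint (the step and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, n_paths = 1.0, 400, 10_000
dt = T / n
t = np.linspace(0.0, T, n + 1)

x = np.zeros(n_paths)              # all bridge paths start at 0
x_mid = None
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    # canonical bridge dynamics: drift -x/(T - t) pins the path to 0 at T
    x = x - x / (T - t[i]) * dt + dW
    if i + 1 == n // 2:
        x_mid = x.copy()           # record the paths at t = T/2

# Brownian bridge variance at time t is t*(T - t)/T, i.e. 0.25 at the midpoint.
print(x_mid.mean(), x_mid.var())
```

The same recipe, with the Brownian bridge integrated against a discretized Fredholm kernel, gives a numerical version of the canonical-type bridge of the example.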

Let us then denote Then, in the same way as in Example 53, by applying the transfer principle to [16, Theorem 4.12], we obtain the following canonical-type bridge representation for general Gaussian Fredholm processes.

Proposition 54. Let be a Gaussian process with Fredholm kernel such that . Then the bridge admits the canonical-type representation where .

5.3. Series Expansions

The Mercer square root (13) can be used to build the Karhunen-Loève expansion for the Gaussian process . But the Mercer form (13) is seldom known. However, if one can find some kernel such that representation (8) holds, then one can construct a series expansion for by using the transfer principle of Theorem 22 as follows.
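In discrete time, finding a kernel for which representation (8) holds is easy: any square-root factor of the covariance matrix will do, in particular the Cholesky factor mentioned in the Introduction. A small numerical sketch (the fractional-Brownian-motion covariance with Hurst index 0.75 is only an illustrative choice):

```python
import numpy as np

H = 0.75                                # illustrative Hurst index
t = np.linspace(0.02, 1.0, 50)          # strictly positive times: covariance is positive definite
# fBm covariance R(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2
R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))

# Lower-triangular square root: the discrete analogue of a (Volterra-type)
# Fredholm kernel, since K @ K.T recovers the covariance R.
K = np.linalg.cholesky(R)

# X = K @ Z then has covariance R: a discrete analogue of representation (8).
rng = np.random.default_rng(2)
X = K @ rng.normal(size=t.size)         # one sample path of the process
```

In continuous time the Cholesky construction is not available in general (as the Introduction notes), but any kernel with the same square-root property serves in Proposition 55 below.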

Proposition 55 (series expansion). Let be a separable Gaussian process with representation (8). Let be any orthonormal basis on . Then admits the series expansion where is a sequence of independent standard normal random variables. The series (109) converges in , and it converges almost surely uniformly if and only if is continuous.

The proof below uses the reproducing kernel Hilbert space technique. For more details on this we refer to [25], where the series expansion is constructed for the fractional Brownian motion by using the transfer principle.

Proof. The Fredholm representation (8) implies immediately that the reproducing kernel Hilbert space of is the image and that is actually an isometry from to the reproducing kernel Hilbert space of . The -expansion (109) follows from this by [26, Theorem 3.7], and the equivalence of the almost sure convergence of (109) and the continuity of follows from [26, Theorem 3.8].
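As a concrete check of the series expansion, take the Brownian case (indicator Fredholm kernel on [0, 1]) and the orthonormal cosine basis consisting of the constant function together with sqrt(2) cos(k pi s). The expansion coefficients are then the integrals of the basis functions up to time t, and the truncated sum of products of these coefficients should reproduce the Brownian covariance min(s, t). This basis choice and all numerical parameters are illustrative:

```python
import numpy as np

def phi(k, t):
    # phi_k(t) = integral of e_k over [0, t], for e_0 = 1, e_k(s) = sqrt(2) cos(k pi s)
    return t if k == 0 else np.sqrt(2.0) * np.sin(k * np.pi * t) / (k * np.pi)

N = 4000                                  # truncation level
s_pt, t_pt = 0.3, 0.7
# Truncated covariance sum; it should be close to min(s, t) = 0.3.
cov = sum(phi(k, s_pt) * phi(k, t_pt) for k in range(N + 1))

# One approximate sample path of the process via the series expansion.
rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, 101)
xi = rng.normal(size=N + 1)               # i.i.d. standard normal coefficients
path = sum(xi[k] * phi(k, grid) for k in range(N + 1))
```

The tail of the covariance sum is bounded by a constant times 1/N, which is why a few thousand terms suffice for three-digit accuracy here.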

5.4. Stochastic Differential Equations and Maximum Likelihood Estimators

Let us briefly discuss the following generalized Langevin equation: with some Gaussian noise , parameter , and initial condition . This can be written in the integral form Here the integral can be understood in a pathwise sense or in a Skorohod sense, and both integrals coincide. Suppose now that the Gaussian noise has the Fredholm representation By applying the transfer principle we can write (111) as This equation can be interpreted as a stochastic differential equation with some anticipating Gaussian perturbation term . Now the unique solution to (111) with initial condition is given by By using integration by parts and by applying the Fredholm representation of , this can be written as which, thanks to the stochastic Fubini theorem, can be written as