Abstract

This article aims to give a formula for differentiating, with respect to $T$, an expression of the form
$$\lambda(T,x)=\mathbb{E}_x\Big[f(X_T)\,e^{\int_0^T V(X_s)\,ds}\,\big(\det(I+K_{X,T})\big)^{p}\Big],$$
where $p\ge 0$ and $X$ is a diffusion process starting from $x$, taking values in a manifold, and the expectation is taken with respect to the law of this process. Here $K_{X,T}\colon L^2([0,T]\to\mathbb{R}^N)\to L^2([0,T]\to\mathbb{R}^N)$ is a trace class operator defined by
$$K_{X,T}f(s)=\int_0^T H(s\wedge t)\,\Gamma(X(t))\,f(t)\,dt,$$
where $H$, $\Gamma$ are locally Lipschitz, positive $N\times N$ matrices.

1. Introduction

Suppose we have a differentiable manifold $M$ of dimension $d$. By Whitney's embedding theorem, there exists an embedding $i\colon M\to\mathbb{R}^N$ such that $i(M)$ is a closed subset of $\mathbb{R}^N$; it turns out that $N=2d+1$ will do. We identify $M$ with its image $i(M)$ and assume that $M$ itself is a closed submanifold of $\mathbb{R}^N$. We also assume that $M$ has no boundary. Let $M\cup\{\partial_M\}$ be the one-point compactification of $M$.

Definition 1.1. An $M$-valued path $\omega$ with explosion time $e=e(\omega)>0$ is a continuous map $\omega\colon[0,\infty)\to M\cup\{\partial_M\}$ such that $\omega_t\in M$ for $0\le t<e$ and $\omega_t=\partial_M$ for all $t\ge e$ if $e<\infty$. The space of $M$-valued paths with explosion time is called the path space of $M$ and is denoted by $W(M)$.

Let $(\Omega,\mathcal{F}_*,\mathbb{P})$ be a filtered probability space and let $L$ be a smooth second-order elliptic operator on $M$. Using the coordinates $\{x_1,\dots,x_N\}$ of the ambient space and extending $L$ smoothly to an operator $\tilde L$ on the ambient space, we may write
$$\tilde L=\frac{1}{2}\sum_{i,j=1}^N A_{ij}\frac{\partial^2}{\partial x_i\,\partial x_j}+\sum_{i=1}^N b_i\frac{\partial}{\partial x_i},\tag{1.1}$$
with $\sigma=\sqrt{A}$, $A$ a positive matrix. Since $A$ is smooth, its square root is locally Lipschitz. Construct a time-homogeneous Itô diffusion process $X\colon\Omega\to W(M)$ which solves the following stochastic differential equation:
$$dX_s=b(X_s)\,ds+\sigma(X_s)\,dB_s,\quad s\ge t;\qquad X_t=x,\tag{1.2}$$
in the ambient space $\mathbb{R}^N$, where $B_s$ is an $N$-dimensional Euclidean Brownian motion and $b\colon\mathbb{R}^N\to\mathbb{R}^N$, $\sigma\colon\mathbb{R}^N\to\mathbb{R}^{N\times N}$ satisfy
$$|b(x)-b(y)|+\|\sigma(x)-\sigma(y)\|\le D(R)\,|x-y|\tag{1.3}$$
for some constant $D(R)$ depending on the open ball of radius $R$ centered at $x$. Up to the explosion time $e(X)$, $X_s\in M$ for $0\le s<e(X)$. On $M$, $\tilde L=L$. Furthermore, $\mu_X=\mathbb{P}\circ X^{-1}$ is an $L$-diffusion measure on $W(M)$. As a result, we use $\mu_X$ as the probability measure on $W(M)$. Refer to [1] for a more detailed description.

Fix some positive integer $N$. Let $M_N(\mathbb{R})$ and $\mathcal{S}_N(\mathbb{R})$ be the spaces of $N\times N$ real-valued matrices and symmetric $N\times N$ real-valued matrices, respectively. Also let $M_N^+(\mathbb{R})\subseteq\mathcal{S}_N(\mathbb{R})$ be the space of nonnegative matrices. Suppose that $H\colon[0,\infty)\to M_N^+(\mathbb{R})$ is locally Lipschitz with $0\le H(s)<H(t)$ if $s<t$, $H(0)=0$, and $\Gamma\colon M\to M_N^+(\mathbb{R})$ is locally Lipschitz. Also assume that $\sup_{x\in M}\|\Gamma(x)\|$ is bounded, where $\|\cdot\|$ is the operator norm. Define $M_{\Gamma(X)}$ as the multiplication operator by $\Gamma(X)$ and $\Upsilon_T$ as the integral operator with kernel $H(s\wedge t)$; that is, for any $f\in L^2([0,T]\to\mathbb{R}^N)$,
$$\Upsilon_T f(s)=\int_0^T H(s\wedge t)\,f(t)\,dt,\tag{1.4}$$
where $s\wedge t$ is the minimum of $s$ and $t$. Note that under the assumptions on $H$, $\Upsilon_T$ is a positive operator and is trace class (see Proposition 3.1).

Consider the following integral operator $K_{X,T}\colon L^2([0,T]\to\mathbb{R}^N)\to L^2([0,T]\to\mathbb{R}^N)$:
$$K_{X,T}f(s)=\Upsilon_T M_{\Gamma(X)}f(s)=\int_0^T H(s\wedge t)\,\Gamma(X(t))\,f(t)\,dt.\tag{1.5}$$
It is a fact that for any trace class operator $K$, if we let $\|K\|_1$ denote the trace of $|K|$, then
$$\|K_{X,T}\|_1\le\sup_{x\in M}\|\Gamma(x)\|\,\|\Upsilon_T\|_1\le C\int_0^T\operatorname{tr}H(s)\,ds\tag{1.6}$$
for some constant $C$. Here, $\operatorname{tr}$ means taking the trace of a matrix. Thus $K_{X,T}$ is trace class. Therefore,
$$\Big|\det\big(I+K_{X,T}\big)\Big|\le\exp\big(\|K_{X,T}\|_1\big)\le\exp\Big(C\int_0^T\operatorname{tr}H(s)\,ds\Big).\tag{1.7}$$
Hence the Fredholm determinant is bounded for each $T$.
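For intuition, (1.5)–(1.7) can be explored numerically by a Nyström (quadrature) discretization: replace the integral by a midpoint sum, so that $K_{X,T}$ becomes an $nN\times nN$ block matrix with $(i,j)$ block $H(s_i\wedge s_j)\Gamma(x(s_j))\,h$. The sketch below is only an illustration; the particular $H$, $\Gamma$, the frozen path $x(\cdot)$, and the grid size are arbitrary choices of mine, not from the paper.

```python
import numpy as np

# --- illustrative choices (not from the paper) --------------------------------
N = 2                                      # matrix dimension
T = 1.0
n = 200                                    # quadrature points
h = T / n
s = (np.arange(n) + 0.5) * h               # midpoint grid on [0, T]

A0 = np.array([[2.0, 1.0], [1.0, 2.0]])    # positive definite
H = lambda t: t * A0                       # H(0) = 0, nondecreasing, positive
x = lambda t: np.sin(t)                    # a frozen (deterministic) sample path
Gamma = lambda xt: np.array([[1.0 + xt**2, 0.3], [0.3, 1.5]])   # positive matrices

# Nystrom discretization of K_{X,T} f(s) = int_0^T H(s ^ t) Gamma(x(t)) f(t) dt:
# an (nN) x (nN) block matrix with (i, j) block H(s_i ^ s_j) Gamma(x(s_j)) h.
K = np.zeros((n * N, n * N))
for i in range(n):
    for j in range(n):
        K[i*N:(i+1)*N, j*N:(j+1)*N] = H(min(s[i], s[j])) @ Gamma(x(s[j])) * h

trace_K = np.trace(K)                        # ~ Tr K_{X,T}
fred_det = np.linalg.det(np.eye(n * N) + K)  # ~ det(I + K_{X,T})

# Bound (1.7): det(I + K) <= exp(C * int_0^T tr H(s) ds), with C = sup ||Gamma||.
C = max(np.linalg.norm(Gamma(x(t)), 2) for t in s)
bound = np.exp(C * sum(np.trace(H(t)) for t in s) * h)

print("Tr K             ~", trace_K)
print("det(I + K)       ~", fred_det)
print("bound from (1.7) ~", bound)
```

On such a run the discretized determinant stays below the bound from (1.7), as expected.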

Let $f,V$ be continuous bounded functions on $M\cup\{\partial_M\}$. Fix some number $p\ge0$. Define a function $\lambda\colon[0,\infty)\times M\to\mathbb{R}$ by
$$\lambda(T,x)=\mathbb{E}_x\Big[f(X_T)\,e^{\int_0^TV(X_s)\,ds}\,\big(\det(I+K_{X,T})\big)^{p}\Big],\tag{1.8}$$
where the expectation is taken with respect to $\mu_X$ and the paths start from $x$. Note that $\lambda(T,x)$ is finite for any $x,T$ by the above discussion. Let $\mathcal{H}\colon[0,\infty)\times M\times M_N(\mathbb{R})\to M_N(\mathbb{R})$ be such that
$$\mathcal{H}(t,x,v)=\big(v-H(t)\big)\,\Gamma(x)\,\big(v-H(t)\big).\tag{1.9}$$
The main result is as follows.

Theorem 1.2. Let $(t,x,v)\in[0,\infty)\times M\times\mathcal{S}_N(\mathbb{R})$. Then
$$\lambda(T,x)=e^{T\mathcal{L}_H}f(0,x,0),\tag{1.10}$$
where
$$\mathcal{L}_H=L+\frac{\partial}{\partial t}+\sum_{i,j=1}^N\mathcal{H}(t,x,v)_{ij}\frac{\partial}{\partial v_{ij}}+V(x)+p\operatorname{tr}\big[H(t)\Gamma(x)-v\Gamma(x)\big].\tag{1.11}$$
Here, $\mathcal{H}(t,x,v)_{ij}$ is the $(i,j)$ component of the matrix $\mathcal{H}(t,x,v)$.

Clearly, $\lambda$ is not in Feynman-Kac form with respect to the process $X$. The idea is to construct a diffusion process $W_s=(s,X_s,Z_s)\in[0,\infty)\times M\times\mathcal{S}_N(\mathbb{R})$ such that
$$\det\big(I+K_{X,T}\big)=\exp\Big(\int_0^TG(W_s)\,ds\Big),\tag{1.12}$$
where $G$ is given by
$$G(t,x,v)=\operatorname{tr}\big[H(t)\Gamma(x)-v\Gamma(x)\big].\tag{1.13}$$
If we can achieve this, then our result follows from a simple application of the Feynman-Kac formula. Proving (1.12) requires the following two steps.

First, we have to prove an essential formula for the derivative of $\log\det(I+zK_{X,T})$ with respect to $T$, namely
$$\frac{d}{dT}\log\det\big(I+zK_{X,T}\big)=\operatorname{tr}\big[zH(T)\Gamma(X_T)-Z_{z,T}\Gamma(X_T)\big],\tag{1.14}$$
where $z$ is a complex number and $Z_{z,T}$ is some adapted process. For a precise definition of $Z_{z,T}$, see (4.1), with $K_{X,s}$ replaced by $zK_{X,s}$. When $z=1$, $Z_{1,T}=Z_T$. The goal is to show that the formula holds for $z=1$ by analytic continuation.

Fix a time $T$. By making $|z|$ small enough that $\|zK_{X,T}\|<1$, we can apply the perturbation formula to the determinant; see (2.3). Differentiating that equation with respect to $T$ gives (1.14). By analytic continuation, we can extend the formula to some domain $O$ containing the origin, provided we avoid the poles of the resolvent of $zK_{X,s}$ for all $s\le T$. If $1\in O$, then (1.14) holds with $z=1$. Integrating both sides and exponentiating yields (1.12). Note that if $\|K_{X,T}\|<1$ for some time $T$, then (1.14) and hence (1.12) hold. The details are given in Sections 2 and 3.

Now assume that (1.14) holds with $z=1$. The second step consists of constructing a diffusion process $W$ from $X$ by means of a stochastic differential equation. To do this, we differentiate $Z_T$ with respect to $T$ and show that it satisfies the differential equation
$$dZ_T=\mathcal{H}\big(T,X_T,Z_T\big)\,dT=\big(Z_T-H(T)\big)\Gamma(X_T)\big(Z_T-H(T)\big)\,dT,\tag{1.15}$$
and hence $W_T=(T,X_T,Z_T)$ satisfies the following stochastic differential equation:
$$dW_T=\big(1,b(X_T),\mathcal{H}(W_T)\big)\,dT+\big(0,\sigma(X_T),0\big)\,dB_T,\tag{1.16}$$
with explosion time $e(W)$. From this stochastic differential equation, it is clear that $W$ is a diffusion process, and by replacing the Fredholm determinant with the expression in (1.12), $\lambda(T,x)$ can be written in Feynman-Kac form using the process $W$. However, if $e(W)<T<e(X)$, then (1.12) may fail to hold.

The positivity of $H$ and $\Gamma$ is used to show that $Z_{1,T}=Z_T$ exists for all times $T$, so that (1.14) holds at $z=1$. This also implies that $e(W)=e(X)$. In particular, only the positivity of $H$ is required to show that $K_{X,T}$ is a trace class operator. To avoid $e(W)<e(X)$, we can restrict ourselves to small times $T$ such that (2.24) holds true.

We can weaken our assumptions on $H$ and $\Gamma$ by not insisting that they are symmetric matrices. If we only assume that $K_{X,T}$ is trace class, then we can replace $\mathcal{S}_N(\mathbb{R})$ with $M_N(\mathbb{R})$. Under these weaker assumptions, we have the following result.

Theorem 1.3. Suppose that, for given locally Lipschitz $H$ and $\Gamma$, $K_{X,T}$ is trace class. Assume that there exists some constant $C$ such that
$$\sup_{x\in M}\|\Gamma(x)\|\le C<\infty,\qquad\|H(s)\|\le C,\quad s\ge0.\tag{1.17}$$
Let $(t,x,v)\in[0,\infty)\times M\times M_N(\mathbb{R})$. Then for all $T<1/C^2$,
$$\lambda(T,x)=e^{T\mathcal{L}_H}f(0,x,0),\tag{1.18}$$
where
$$\mathcal{L}_H=L+\frac{\partial}{\partial t}+\sum_{i,j=1}^N\mathcal{H}(t,x,v)_{ij}\frac{\partial}{\partial v_{ij}}+V(x)+p\operatorname{tr}\big[H(t)\Gamma(x)-v\Gamma(x)\big].\tag{1.19}$$

By Lemma 2.4 and the assumptions on $H$ and $\Gamma$, $\|K_{X,T}\|\le C^2T$. If $T<1/C^2$, then this norm is less than $1$. Hence (1.14) holds, and thus (1.12) holds true.

2. Functional Analytic Tools

Notation 2.1. Suppose that $K$ is an integral operator acting on $L^2([0,T]\to\mathbb{R}^N)\to L^2([0,T]\to\mathbb{R}^N)$. We will write $Kf$ to mean
$$(Kf)(s)=\int_0^T K(s,t)\,f(t)\,dt,\tag{2.1}$$
where $T<\infty$. To distinguish the operator $K$ from its kernel, we will write $K(s,t)$ to refer to its kernel. This may be confusing, but it is used to avoid too many symbols. In this article, our integral operators are always trace class and the kernels are continuous $N\times N$ matrix-valued functions. By abuse of notation, $K^n(s,t)$ refers to the kernel of the integral operator $K^n$.

Notation 2.2. We will use $\operatorname{tr}$ to denote the trace of a matrix and $\operatorname{Tr}$ to denote the trace of a trace class operator. $\|\cdot\|$ will denote the operator norm.

It is well known that for a trace class operator $A$ and $z\in\mathbb{C}$, $\operatorname{Tr}\log(I+zA)$ is a meromorphic function with singularities at points $z$ such that $-z^{-1}\in\sigma(A)$. Define the determinant $\det(I+zA)$ by
$$\det(I+zA)=e^{\operatorname{Tr}\log(I+zA)}.\tag{2.2}$$
However, this determinant, also known as the Fredholm determinant of $A$, is analytic in $z$ because the singularities at points $z$ with $-z^{-1}\in\sigma(A)$ are removable; see [2, Lemma 16].

We want to differentiate the function $\log\det(I+K_T)$ with respect to $T$, where we write $K_T$ to emphasize the dependence on the domain $[0,T]$. If the kernel of $K_T$ is given by $K(s,t)$, then for small $z$ such that $\|zK_T\|<1$, the perturbation formula gives
$$\det\big(I+zK_T\big)=\exp\Big(\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\operatorname{Tr}\big(zK_T\big)^n\Big).\tag{2.3}$$
If we let $r=\|K_T\|<1$, then
$$\sum_{n=1}^\infty\Big|\frac{(-1)^{n+1}}{n}\operatorname{Tr}K_T^n\Big|\le\sum_{n=1}^\infty\frac{r^{n-1}}{n}\,\|K_T\|_1\le\frac{\|K_T\|_1}{1-r}.\tag{2.4}$$
Thus the series in the exponent converges absolutely.
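As a quick numerical sanity check of (2.3), one can discretize a kernel as before and compare $\log\det(I+zK_T)$ with a truncation of the trace power series when $\|zK_T\|<1$. The scalar kernel and grid below are arbitrary illustrative choices, not part of the paper's argument.

```python
import numpy as np

n, T = 200, 1.0
h = T / n
s = (np.arange(n) + 0.5) * h

# Scalar (N = 1) kernel K(s, t) = (s ^ t) * (1 + 0.5 sin t), discretized by midpoints.
K = np.minimum.outer(s, s) * (1.0 + 0.5 * np.sin(s))[None, :] * h

z = 0.8 / np.linalg.norm(K, 2)            # choose z so that ||z K|| = 0.8 < 1
lhs = np.log(np.linalg.det(np.eye(n) + z * K))

rhs, P = 0.0, np.eye(n)                   # truncated sum_{m>=1} (-1)^{m+1}/m Tr (zK)^m
for m in range(1, 60):
    P = P @ (z * K)
    rhs += (-1) ** (m + 1) / m * np.trace(P)

print(lhs, rhs)                           # the two values agree closely
```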

We will define the resolvent $R_T$ by $R_T=K_T(I+K_T)^{-1}$. Since we can write $K_T(I+K_T)^{-1}=K_T-K_T(I+K_T)^{-1}K_T$, the kernel of $R_T$ can be written as
$$R_T(s,t)=K(s,t)-\big[K_T\big(I+K_T\big)^{-1}K_T\big](s,t).\tag{2.5}$$
When we write $[K_T(I+K_T)^{-1}K_T](s,t)$, we mean
$$\big[K_T\big(I+K_T\big)^{-1}K_T\big](s,t)=\int_0^T K(s,u)\,\big[\big(I+K_T\big)^{-1}K(\cdot,t)\big](u)\,du.\tag{2.6}$$
We will also write the resolvent of the operator $zK_T$, $z\in\mathbb{C}$, as $R_{T,z}$. One more point to note is that we assume that $K(\cdot,0)=K(0,\cdot)=0$.

Now the operators $K_s$ are defined on different Hilbert spaces $L^2[0,s]$. Therefore, we will from now on think of $K_s$ as acting on $L^2[0,T]$, defined by
$$K_sf(u)=\int_0^T K(u,v)\,\chi_{[0,s]}(v)\,f(v)\,dv,\tag{2.7}$$
where $\chi$ is the characteristic function. Hence our operator $K_s$ has kernel $K(u,v)\chi_{[0,s]}(v)$, depending on the parameter $s$. Note that this kernel is continuous in the $u$ variable but is discontinuous at $v=s$. Thus when we write $K_s(u,s)$, we mean
$$K_s(u,s)=\lim_{v\uparrow s}K_s(u,v).\tag{2.8}$$

Definition 2.3. Let $K(\cdot,\cdot)$ be a continuous matrix-valued function and let $\|K(\cdot,\cdot)\|$ be the matrix norm of $K(\cdot,\cdot)$. Define $C_T$ to be the maximum value of $\|K(\cdot,\cdot)\|$ on $[0,T]\times[0,T]$.

The next lemma allows us to control the operator norm of the operator by controlling the sup norm of the kernel.

Lemma 2.4. For $0\le s<s'\le T$,
$$\big\|K_{s'}-K_s\big\|\le C_T\sqrt{T}\sqrt{s'-s}.\tag{2.9}$$

Proof. For $f,g\in L^2$ and any $s\in[0,T]$,
$$\begin{aligned}
\big|\big\langle\big(K_{s'}-K_s\big)f,g\big\rangle\big|
&\le\int_0^T\|g(u)\|\int_0^T\chi_{[s,s']}(v)\,\|K(u,v)\|\,\|f(v)\|\,dv\,du\\
&\le C_T\int_0^T\|g(u)\|\,du\int_0^T\chi_{[s,s']}(v)\,\|f(v)\|\,dv\\
&\le C_T\Big(\int_0^T\|g(u)\|^2du\Big)^{1/2}\Big(\int_0^T1\,du\Big)^{1/2}\Big(\int_0^T\|f(v)\|^2dv\Big)^{1/2}\Big(\int_0^T\chi_{[s,s']}(v)\,dv\Big)^{1/2}\\
&\le C_T\sqrt{T}\sqrt{s'-s}\,\|f\|_2\,\|g\|_2.
\end{aligned}\tag{2.10}$$
Hence,
$$\big\|K_{s'}-K_s\big\|\le C_T\sqrt{T}\sqrt{s'-s}\tag{2.11}$$
for all $0\le s<s'\le T$.
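The bound of Lemma 2.4 is easy to test on a discretized kernel. The following is only a sketch; the scalar kernel, the grid, and the sample pairs $(s,s')$ are arbitrary choices of mine.

```python
import numpy as np

n, T = 400, 1.0
h = T / n
u = (np.arange(n) + 0.5) * h

# Scalar kernel K(u, v) = (u ^ v) * (1 + 0.5 sin v); C_T = sup ||K|| over [0,T]^2.
Kfull = np.minimum.outer(u, u) * (1.0 + 0.5 * np.sin(u))[None, :]
C_T = np.abs(Kfull).max()

def K_s(s):
    """Discretized K_s on L^2[0,T]: kernel K(u, v) * chi_[0,s](v), as in (2.7)."""
    return Kfull * (u <= s)[None, :] * h

for s1, s2 in [(0.2, 0.3), (0.5, 0.9)]:
    lhs = np.linalg.norm(K_s(s2) - K_s(s1), 2)        # operator norm
    rhs = C_T * np.sqrt(T) * np.sqrt(s2 - s1)
    print(f"{lhs:.4f} <= {rhs:.4f}")
```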

Lemma 2.5. Fix $z\in\mathbb{C}$. For any $T$ such that $|z|C_TT<1$ and if $K(s,t)$ is continuous, then
$$\frac{d}{dT}\log\det\big(I+zK_T\big)=\operatorname{tr}R_{T,z}(T,T).\tag{2.12}$$

Proof. Since $z$ is fixed, we will replace $zK_T$ by $K_T$ and hence assume that $C_TT<1$. Lemma 2.4 tells us that $\|K_T\|<1$ and thus (2.3) holds true. Taking logarithms on both sides of (2.3), we have
$$\log\det\big(I+K_T\big)=\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\operatorname{Tr}K_T^n.\tag{2.13}$$
Now
$$\operatorname{Tr}K_T^n=\int_0^T\!\!\cdots\!\int_0^T\operatorname{tr}\Big[\prod_{i=1}^nK\big(s_i,s_{i+1}\big)\Big]\,ds_1\cdots ds_n,\tag{2.14}$$
where $s_{n+1}=s_1$. Differentiating with respect to $T$ and using the fundamental theorem of calculus, we get
$$\frac{d}{dT}\int_0^T\!\!\cdots\!\int_0^T\operatorname{tr}\Big[\prod_{i=1}^nK\big(s_i,s_{i+1}\big)\Big]\,ds_1\cdots ds_n
=n\operatorname{tr}\int_0^T\!\!\cdots\!\int_0^TK\big(T,s_2\big)\Big(\prod_{i=2}^{n-1}K\big(s_i,s_{i+1}\big)\Big)K\big(s_n,T\big)\,ds_2\cdots ds_n
=n\operatorname{tr}K_T^n(T,T).\tag{2.15}$$
Let $\alpha=C_TT$. Thus,
$$\sum_{n=1}^\infty\big|\operatorname{tr}K_T^n(T,T)\big|\le N\sum_{n=1}^\infty\big\|K_T^n(T,T)\big\|\le N\sum_{n=1}^\infty C_T^nT^{n-1}=NC_T\sum_{n=0}^\infty\alpha^n=\frac{NC_T}{1-\alpha}<\infty.\tag{2.16}$$
Thus
$$\frac{d}{dT}\log\det\big(I+K_T\big)=\sum_{n=1}^\infty\frac{d}{dT}\Big(\frac{(-1)^{n+1}}{n}\operatorname{tr}\int_0^T\!\!\cdots\!\int_0^T\prod_{i=1}^nK\big(s_i,s_{i+1}\big)\,ds_1\cdots ds_n\Big)
=\operatorname{tr}\sum_{n=1}^\infty(-1)^{n+1}K_T^n(T,T)
=\operatorname{tr}\big[K_T\big(I+K_T\big)^{-1}\big](T,T)=\operatorname{tr}R_T(T,T).\tag{2.17}$$
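Lemma 2.5 can also be illustrated numerically in the scalar case: on a Nyström grid, a finite difference of $\log\det(I+K_T)$ in $T$ matches the resolvent kernel value $R_T(T,T)$. The kernel, grid, and the helper `logdet` below are arbitrary assumptions of mine, used only for this sketch.

```python
import numpy as np

n, Tmax = 400, 1.0
h = Tmax / n
s = (np.arange(n) + 0.5) * h
k = lambda a, b: np.minimum(a, b) * (1.0 + 0.5 * np.sin(b))   # illustrative kernel

Kfull = k(s[:, None], s[None, :]) * h       # Nystrom matrix of K on [0, Tmax]

def logdet(m):
    """log det(I + K_T) with T ~ s[m], using the first m+1 grid points."""
    A = Kfull[:m + 1, :m + 1]
    return np.linalg.slogdet(np.eye(m + 1) + A)[1]

m = 300                                      # T ~ s[300]
A = Kfull[:m + 1, :m + 1]
R = A @ np.linalg.inv(np.eye(m + 1) + A)     # discretized resolvent K_T (I + K_T)^{-1}

fd  = (logdet(m) - logdet(m - 1)) / h        # d/dT log det(I + K_T), finite difference
rTT = R[m, m] / h                            # kernel value R_T(T, T) (undo the weight h)
print(fd, rTT)                               # close to each other
```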

Lemma 2.6. Let $K(\cdot,\cdot)$ be continuous. If $z$ is such that $-z^{-1}\notin\sigma(K_s)$ for all $s\in[0,T]$, then $\operatorname{tr}R_{s,z}(s,s)$ is continuous in $s$.

Proof. Fix $z$ and write $zK_s$ as $K_s$. By assumption, $I+K_s$ is invertible for all $s$. By Lemma 2.4, $\|K_s-K_{s_0}\|\to0$ as $s\to s_0$. Note that
$$I+K_s=I+K_{s_0}+K_s-K_{s_0}=\big(I+K_{s_0}\big)\Big(I+\big(I+K_{s_0}\big)^{-1}\big(K_s-K_{s_0}\big)\Big),\tag{2.18}$$
and if we let $G_s=K_s-K_{s_0}$, then
$$\big(I+K_s\big)^{-1}=\Big(I+\big(I+K_{s_0}\big)^{-1}G_s\Big)^{-1}\big(I+K_{s_0}\big)^{-1}.\tag{2.19}$$
By the open mapping theorem, because $I+K_{s_0}$ is a surjective continuous map, it is an open map; therefore its inverse is a bounded operator. Since $\|G_s\|=\|K_s-K_{s_0}\|\to0$, we have $(I+(I+K_{s_0})^{-1}G_s)^{-1}\to I$ and hence $(I+K_s)^{-1}$ converges to $(I+K_{s_0})^{-1}$ as $s\to s_0$. This shows that $(I+K_s)^{-1}$ is continuous in $s$. Note that
$$R_s(s,s)=K(s,s)-\big[K_s\big(I+K_s\big)^{-1}K_s\big](s,s).\tag{2.20}$$
Since $K(s,s)$, $K_s$, and $(I+K_s)^{-1}$ are continuous in $s$, $R_s(s,s)$ is continuous in $s$.

Lemma 2.7. Let $K(\cdot,\cdot)$ be continuous. If there exists an open connected set $O$ containing $0$ such that for $z\in O$, $(I+zK_s)^{-1}$ is analytic for all $s\in[0,T]$, then
$$\log\det\big(I+zK_T\big)=\int_0^T\operatorname{tr}R_{s,z}(s,s)\,ds\tag{2.21}$$
for all $z\in O$.

Proof. For all $z\in O$, $I+zK_s$ is invertible for all $s\in[0,T]$ and hence $\operatorname{tr}R_{s,z}(s,s)$ is analytic in $O$. Therefore, $\int_0^T\operatorname{tr}R_{s,z}(s,s)\,ds$ is analytic in $O$, because
$$\frac{d}{dz}\int_0^T\operatorname{tr}R_{s,z}(s,s)\,ds=\int_0^T\frac{d}{dz}\operatorname{tr}R_{s,z}(s,s)\,ds.\tag{2.22}$$
By Lemma 2.4, $\|K_s\|\le TC_T$ for all $s\in[0,T]$. Thus if we choose $U=\{z\mid|z|<1/(TC_T)\}$, then $U$ is an open set containing $0$, and for $z\in U$, $\|zK_s\|<1$ for all $s\in[0,T]$. From Lemma 2.5, for $z\in U$,
$$\log\det\big(I+zK_T\big)=\int_0^T\operatorname{tr}R_{s,z}(s,s)\,ds.\tag{2.23}$$
Since $\log\det(I+zK_T)$ is also analytic in $O$ and agrees with $\int_0^T\operatorname{tr}R_{s,z}(s,s)\,ds$ on $U$, the two functions are equal for all $z\in O$.

The proof of the previous lemma gives us the existence of a small neighbourhood of $0$ on which (2.21) holds. Hence we have the following corollary.

Corollary 2.8. Fix $T>0$ and let $C_T=\sup_{s,t\in[0,T]}\|K(s,t)\|$. There exists an open set $U_T=\{z\mid|z|<1/(TC_T)\}$ such that (2.21) holds for all $z\in U_T$.

Corollary 2.9. Let $O$ be an open connected set as in Lemma 2.7 such that $1\in O$. Then for $s\in(0,T)$,
$$\frac{d}{ds}\log\det\big(I+K_s\big)=\operatorname{tr}R_s(s,s).\tag{2.24}$$

Proof. The corollary follows by differentiating (2.21). By Lemma 2.6, $\operatorname{tr}R_s(s,s)$ is continuous and hence the fundamental theorem of calculus applies.

3. Fredholm Determinant

The kernel we are interested in is $K_{X,T}(s,t)=H(s\wedge t)\Gamma(X(t))$, for some process $X$. More generally, we consider kernels of the form $K_T(s,t)=H(s\wedge t)\Lambda(t)$ for some continuous matrix-valued $\Lambda$. The Hilbert space is $L^2([0,T]\to\mathbb{R}^N)$ for some positive number $T$. Without ambiguity, we will in future write this space as $L^2$. We will also use $\|\cdot\|_2$ to denote the $L^2$ norm.

Proposition 3.1. If $H$ is continuous, $0\le H(s)\le H(t)$ for any $s\le t$, and $\Lambda$ is continuous, then $\Upsilon_T$ as defined in Section 1 is a trace class operator.

To prove this result, we need the following theorem, which is [3, Theorem 2.12].

Theorem 3.2. Let $\mu$ be a Baire measure on a locally compact space $X$. Let $K$ be a matrix-valued function on $X\times X$ which is continuous and Hermitian positive, that is,
$$\sum_{i,j=1}^J\big\langle K\big(x_i,x_j\big)z_j,z_i\big\rangle\ge0\tag{3.1}$$
for any $x_1,\dots,x_J\in X$, any complex vectors $z_1,\dots,z_J$, and any $J$. Then $K(x,x)\ge0$ for all $x$. Suppose that, in addition,
$$\int\operatorname{tr}K(x,x)\,d\mu(x)<\infty.\tag{3.2}$$
Then there exists a unique trace class integral operator $A$ such that
$$(Af)(x)=\int K(x,y)\,f(y)\,d\mu(y),\qquad\|A\|_1=\int\operatorname{tr}K(x,x)\,d\mu.\tag{3.3}$$

Proof of Proposition 3.1. Let $X=[0,T]$ and let $\mu$ be Lebesgue measure. Using Theorem 3.2, it suffices to show that $H(s\wedge t)$ is Hermitian positive. Let $z_1,z_2,\dots,z_J$ be any complex column vectors; note that there are $N$ entries in each column and the entries are complex valued. Let $s_1,\dots,s_J\in[0,T]$. The proof is by induction on $J$. When $J=1$, the claim is trivial. Suppose it is true for all values $k=1,2,\dots,J-1$. By relabelling, we can assume that $s_1\le s_k$ for $k=2,\dots,J$; hence $s_1\wedge s_k=s_1$ for any $k$. Let $\langle\cdot,\cdot\rangle$ be the usual dot product. Then
$$\begin{aligned}
\sum_{j=1}^J\big\langle H\big(s_1\wedge s_j\big)z_j,z_1\big\rangle+\sum_{j=2}^J\big\langle H\big(s_j\wedge s_1\big)z_1,z_j\big\rangle
&=\sum_{j=1}^J\big\langle H\big(s_1\big)z_j,z_1\big\rangle+\sum_{j=2}^J\big\langle H\big(s_1\big)z_1,z_j\big\rangle\\
&=\Big\langle H\big(s_1\big)\sum_{j=1}^Jz_j,\sum_{j=1}^Jz_j\Big\rangle-\sum_{i,j=2}^J\big\langle H\big(s_1\big)z_j,z_i\big\rangle.
\end{aligned}\tag{3.4}$$
Therefore,
$$\begin{aligned}
\sum_{i,j=1}^J\big\langle H\big(s_i\wedge s_j\big)z_j,z_i\big\rangle
&=\Big\langle H\big(s_1\big)\sum_{j=1}^Jz_j,\sum_{j=1}^Jz_j\Big\rangle-\sum_{i,j=2}^J\big\langle H\big(s_1\big)z_j,z_i\big\rangle+\sum_{i,j=2}^J\big\langle H\big(s_i\wedge s_j\big)z_j,z_i\big\rangle\\
&=\Big\langle H\big(s_1\big)\sum_{j=1}^Jz_j,\sum_{j=1}^Jz_j\Big\rangle+\sum_{i,j=2}^J\big\langle\big[H\big(s_i\wedge s_j\big)-H\big(s_1\big)\big]z_j,z_i\big\rangle.
\end{aligned}\tag{3.5}$$
Since $s_i\wedge s_j\ge s_1$ for $i,j=2,\dots,J$, we have $H(s_i\wedge s_j)\ge H(s_1)$ by the assumption on $H$. Thus by the induction hypothesis (applied with $H$ replaced by $H(\cdot)-H(s_1)$),
$$\sum_{i,j=2}^J\big\langle\big[H\big(s_i\wedge s_j\big)-H\big(s_1\big)\big]z_j,z_i\big\rangle\ge0,\tag{3.6}$$
and hence
$$\sum_{i,j=1}^J\big\langle H\big(s_i\wedge s_j\big)z_j,z_i\big\rangle\ge0.\tag{3.7}$$
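The Hermitian positivity established above is easy to spot-check numerically (a sketch, not a proof): for one arbitrary nondecreasing $H$ of my choosing, the block matrix $[H(s_i\wedge s_j)]_{i,j}$ has no negative eigenvalues.

```python
import numpy as np

N, J = 2, 6
A0 = np.array([[2.0, 1.0], [1.0, 2.0]])      # positive definite
H = lambda t: t * A0                         # H(0) = 0, H nondecreasing

rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0.0, 1.0, size=J))

M = np.zeros((J * N, J * N))                 # block matrix [H(s_i ^ s_j)]
for i in range(J):
    for j in range(J):
        M[i*N:(i+1)*N, j*N:(j+1)*N] = H(min(times[i], times[j]))

print(np.linalg.eigvalsh(M).min())           # >= 0 up to rounding
```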

Notation 3.3. By abuse of notation, $\langle\cdot,\cdot\rangle_T$ will denote integration over $[0,T]$,
$$\langle f,g\rangle_T=\int_0^T f(u)\cdot g(u)\,du,\tag{3.8}$$
where $\cdot$ should be interpreted as matrix multiplication or as an inner product, depending on the context. To ease the notation, we will in future write $K(T)=K(T,T)$.

Remark 3.4. If we assume that $H(s)-H(t)$ is strictly positive for $s>t$, which is the case of interest in this article, then the proof of Proposition 3.1 shows that the operator $\Upsilon_T$ with kernel $H(s\wedge t)$ is strictly positive, that is, $\langle\Upsilon_Tf,f\rangle_T>0$ if $\langle f,f\rangle_T>0$. This follows using a Riemann lower sum approximation of a double integral and the fact that for any complex vectors $z_1,\dots,z_J$,
$$\sum_{i,j=1}^J\big\langle H\big(s_i\wedge s_j\big)z_j,z_i\big\rangle\,\frac{T^2}{J^2}>0\tag{3.9}$$
under the strict positivity assumptions.

The next proposition is a crucial statement. For the time being, we will assume that $(I+K_T)^{-1}$ exists for any time $T$ without further justification; later on, we will prove that this is true for our operator $K_{X,T}$ (see Proposition 5.2). Writing in our new notation, we obtain the next proposition from Corollary 2.9.

Proposition 3.5. Let $K_T$ be an integral operator with kernel $K_T(s,t)=H(s\wedge t)\Lambda(t)$ for some continuous matrix-valued $\Lambda$. Then
$$\frac{d}{dT}\log\det\big(I+K_T\big)=\operatorname{tr}\Big[K(T)-\big\langle H\Lambda,\big(I+K_T\big)^{-1}H\big\rangle_T\Lambda(T)\Big].\tag{3.10}$$

Proof. We will use (2.5). So
$$\begin{aligned}
R_T(T,T)&=K(T,T)-\big\langle K(T,\cdot),\big(I+K_T\big)^{-1}K(\cdot,T)\big\rangle_T\\
&=K(T,T)-\big\langle H(\cdot)\Lambda(\cdot),\big(I+K_T\big)^{-1}H(\cdot)\Lambda(T)\big\rangle_T\\
&=K(T)-\big\langle H\Lambda,\big(I+K_T\big)^{-1}H\big\rangle_T\Lambda(T).
\end{aligned}\tag{3.11}$$
Taking the trace completes the proof.

Definition 3.6. Let
$$Z_T=\big\langle H\Lambda,\big(I+K_T\big)^{-1}H\big\rangle_T.\tag{3.12}$$

To ease the notation, we will now write $L=(I+K_T)^{-1}$ and $L(s,t)$ for the kernel of $L$. In what follows we will drop the subscript $T$ from the operator $K$, with the understanding that $K$ depends on $T$. A prime on an operator denotes its derivative with respect to $T$. Our task now is to differentiate $Z_T$.

Define a distributional kernel
$$\rho(s,t)=R(s,t)-\delta(s-t),\tag{3.13}$$
where $\delta$ is the Dirac delta function and $R$ is the resolvent.

For an operator depending smoothly on a parameter $T$, we have the differentiation formula
$$L'=\frac{d}{dT}\big(I+K_T\big)^{-1}=-\big(I+K_T\big)^{-1}K_T'\big(I+K_T\big)^{-1}.\tag{3.14}$$
For the integral operator $K$, its kernel derivative is given, by the fundamental theorem of calculus, by
$$K'(s,t)=K(s,T)\,\delta(t-T);\tag{3.15}$$
hence, combining with (3.14), we have
$$L'(s,t)=-\big[LK'L\big](s,t)=-\big[LK'\big](s,t)+\big\langle\big[LK'\big](s,\cdot),\big[KL\big](\cdot,t)\big\rangle_T=-R(s,T)\,\delta(t-T)+R(s,T)\,R(T,t)=R(s,T)\,\rho(T,t).\tag{3.16}$$
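The operator identity (3.14) is the standard derivative-of-the-inverse formula and can be checked by a finite difference on any smooth matrix family; the random family below is a generic illustration of mine, unrelated to the specific kernels of this paper.

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

K  = lambda T: T * A + np.sin(T) * B        # a smooth matrix family K(T)
Kp = lambda T: A + np.cos(T) * B            # its derivative K'(T)
L  = lambda T: np.linalg.inv(np.eye(4) + K(T))

T, dT = 0.3, 1e-6
lhs = (L(T + dT) - L(T - dT)) / (2 * dT)    # d/dT (I + K_T)^{-1}
rhs = -L(T) @ Kp(T) @ L(T)                  # right-hand side of (3.14)
print(np.abs(lhs - rhs).max())              # ~ 0
```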

Notation 3.7. Let $K$ be an integral operator with kernel $K(s,t)$. We define the adjoint $K^*$ by
$$\big(K^*f\big)(t)=\int_0^Tf(s)\,K(s,t)\,ds.\tag{3.17}$$
Here, $f$ is an $N\times N$ matrix-valued function. We will also write $\Lambda_T=\Lambda(T)$ and $H_T=H(T)$.

The following lemma defines the relationship between 𝑅 and 𝐿.

Lemma 3.8. It holds that
$$R(s,T)=(LH)(s)\,\Lambda_T,\qquad R(T,t)=\big(L^*H\Lambda\big)(t).\tag{3.18}$$

Proof. We write the identity operator $I$ as $If(s)=f(s)=\int_0^T\delta(s-t)f(t)\,dt$, where $\delta$ is the Dirac delta function. Then
$$\begin{aligned}
R(s,T)&=\big\langle L(s,\cdot),K(\cdot,T)\big\rangle_T=\big\langle L(s,\cdot),H(\cdot\wedge T)\Lambda_T\big\rangle_T=\big\langle L(s,\cdot),H(\cdot)\big\rangle_T\Lambda_T=(LH)(s)\,\Lambda_T,\\
R(T,t)&=\big\langle K(T,\cdot),L(\cdot,t)\big\rangle_T=\big\langle H(T\wedge\cdot)\Lambda(\cdot),L(\cdot,t)\big\rangle_T=\big(L^*H\Lambda\big)(t).
\end{aligned}\tag{3.19}$$

Theorem 3.9. $Z_T$ satisfies the following differential equation:
$$Z_T'=\big(Z_T-H_T\big)\Lambda_T\big(Z_T-H_T\big).\tag{3.20}$$

Proof. Now by the definition of $Z_T$,
$$Z_T=\big\langle H\Lambda,\big(I+K_T\big)^{-1}H\big\rangle_T=\big\langle H\Lambda,LH\big\rangle_T=\big\langle L^*H\Lambda,H\big\rangle_T.\tag{3.21}$$
Using (3.16) and Lemma 3.8,
$$\begin{aligned}
\big(L'H\big)(s)&=\big\langle R(s,T)\rho(T,\cdot),H(\cdot)\big\rangle_T=R(s,T)\big\langle R(T,\cdot),H(\cdot)\big\rangle_T-R(s,T)H_T\\
&=(LH)(s)\,\Lambda_T\big\langle L^*H\Lambda,H\big\rangle_T-(LH)(s)\,\Lambda_TH_T\\
&=(LH)(s)\,\Lambda_TZ_T-(LH)(s)\,\Lambda_TH_T.
\end{aligned}\tag{3.22}$$
Hence, differentiating with respect to $T$, using the fundamental theorem of calculus and (3.22), gives
$$\begin{aligned}
\frac{d}{dT}Z_T&=\big(H\Lambda\,LH\big)(T)+\big\langle H\Lambda,L'H\big\rangle_T\\
&=\big(H\Lambda\,LH\big)(T)+\big\langle(H\Lambda)(\cdot),(LH)(\cdot)\Lambda_TZ_T-(LH)(\cdot)\Lambda_TH_T\big\rangle_T\\
&=\big(H\Lambda\,LH\big)(T)-Z_T\Lambda_TH_T+Z_T\Lambda_TZ_T.
\end{aligned}\tag{3.23}$$
But
$$\big(KLH\big)(T)=\big(K(I+K)^{-1}H\big)(T)=\big\langle H(T\wedge\cdot)\Lambda(\cdot),(LH)(\cdot)\big\rangle_T=\big\langle H\Lambda,LH\big\rangle_T=Z_T.\tag{3.24}$$
Hence
$$\big(H\Lambda\,LH\big)(T)=\big(H\Lambda H\big)(T)-\big(H\Lambda\,KLH\big)(T)=\big(H\Lambda H\big)(T)-(H\Lambda)(T)\,Z_T.\tag{3.25}$$
Therefore
$$\frac{d}{dT}Z_T=\big(H\Lambda H\big)(T)-(H\Lambda)(T)\,Z_T-Z_T\Lambda_TH_T+Z_T\Lambda_TZ_T=\big(Z_T-H_T\big)\Lambda_T\big(Z_T-H_T\big).\tag{3.26}$$
This completes the proof.
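Theorem 3.9 can be illustrated numerically in the scalar case: discretize $Z_T$ from Definition 3.6, finite-difference it in $T$, and compare with $(Z_T-H_T)\Lambda_T(Z_T-H_T)$. The sketch below takes $H(t)=t$ and $\Lambda\equiv1$ (the Brownian covariance kernel $s\wedge t$); these choices, the grid size, and the helper `Z` are my own illustrative assumptions. For this particular kernel the computed $Z_T$ also comes out numerically close to $T-\tanh T$.

```python
import numpy as np

def Z(T, n=2000, H=lambda t: t, Lam=lambda t: 1.0 + 0.0 * t):
    """Discretized Z_T = <H Lam, (I + K_T)^{-1} H>_T for scalar H, Lambda."""
    h = T / n
    u = (np.arange(n) + 0.5) * h
    K = H(np.minimum.outer(u, u)) * Lam(u)[None, :] * h   # kernel H(s ^ t) Lam(t)
    g = np.linalg.solve(np.eye(n) + K, H(u))              # (I + K_T)^{-1} H
    return np.sum(H(u) * Lam(u) * g) * h

T, dT = 1.0, 1e-3
ZT  = Z(T)
lhs = (Z(T + dT) - Z(T - dT)) / (2 * dT)                  # dZ_T / dT
rhs = (ZT - T) * 1.0 * (ZT - T)                           # (Z_T - H_T) Lam_T (Z_T - H_T)
print(lhs, rhs)                                           # close to each other
print(ZT, T - np.tanh(T))                                 # Z_T for this particular kernel
```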

4. Integral Operator Driven by a Diffusion Process

Now back to the integral operator $K_{X,T}$ defined in Section 1. Define a process $Z_s\colon\Omega\to M_N(\mathbb{R})$,
$$Z_s=\big\langle H\Gamma_X,\big(I+K_{X,s}\big)^{-1}H\big\rangle_s,\tag{4.1}$$
where $\Gamma_X=\Gamma(X)$ (see Notation 3.3 for the definition of the angle brackets). From the definition, it is clear that $Z_s$ is adapted. In fact, $Z_s$ is a symmetric matrix under the usual assumptions on $H$ and $\Gamma$.

Proposition 4.1. If $H$ and $\Gamma$ are symmetric matrices, then $Z$ is a symmetric matrix.

Proof. Since $s$ is fixed, we will drop the subscript $s$; we also fix an $\omega\in\Omega$ and drop the subscript $X$. Let $K^*$ be the adjoint of $K$, with kernel $\Gamma(s)H(s\wedge t)$. Write $g=(I+K)^{-1}H$, so that $g+Kg=H$. By the assumed symmetry of $H$ and $\Gamma$ and the definition of $Z$,
$$Z^{\mathrm T}=\big\langle H\Gamma,(I+K)^{-1}H\big\rangle^{\mathrm T}=\big\langle g^{\mathrm T},\Gamma H\big\rangle=\big\langle g^{\mathrm T},\Gamma g\big\rangle+\big\langle g^{\mathrm T},\Gamma Kg\big\rangle,\tag{4.2}$$
where $g^{\mathrm T}$ denotes the pointwise transpose. Both terms on the right are symmetric matrices:
$$\big\langle g^{\mathrm T},\Gamma g\big\rangle=\int_0^sg(u)^{\mathrm T}\Gamma(u)g(u)\,du,\qquad\big\langle g^{\mathrm T},\Gamma Kg\big\rangle=\int_0^s\!\!\int_0^sg(u)^{\mathrm T}\Gamma(u)H(u\wedge v)\Gamma(v)g(v)\,dv\,du.$$
Hence $Z^{\mathrm T}$ is symmetric, and therefore $Z=(Z^{\mathrm T})^{\mathrm T}=Z^{\mathrm T}$.

Theorem 4.2. Let $X_s$ be an $L$-diffusion process satisfying (1.2), let $H\colon[0,\infty)\to M_N^+(\mathbb{R})$ and $\Gamma\colon M\to M_N^+(\mathbb{R})$ be continuous, and assume further that $H(s)-H(t)\ge0$ for $s\ge t$. Let $K_{X,s}$ be the integral operator defined by (1.5) and
$$Z_s=\big\langle H\Gamma_X,\big(I+K_{X,s}\big)^{-1}H\big\rangle_s.\tag{4.3}$$
Let $e(Z)$ be the explosion time of $Z$. Then for $s<e(Z)$, $W_s=(s,X_s,Z_s)\colon\Omega\to[0,\infty)\times M\times\mathcal{S}_N(\mathbb{R})$ satisfies the following stochastic differential equation:
$$dW_s=\big(1,b(X_s),\mathcal{H}(W_s)\big)\,ds+\big(0,\sigma(X_s),0\big)\,dB_s,\quad s\ge t;\qquad W_t=(t,x,v),\tag{4.4}$$
where
$$\mathcal{H}(t,x,v)=\big(v-H(t)\big)\Gamma(x)\big(v-H(t)\big).\tag{4.5}$$

Proof. In the ambient space $\mathbb{R}^N$, $(s,X_s)$ is a diffusion satisfying a stochastic differential equation of the form
$$d\big(s,X_s\big)=\big(1,b(X_s)\big)\,ds+\big(0,\sigma(X_s)\big)\,dB_s.\tag{4.6}$$
Now by Theorem 3.9, $Z_s$ satisfies the following differential equation:
$$dZ_s=\mathcal{H}\big(s,X_s,Z_s\big)\,ds,\tag{4.7}$$
where $\mathcal{H}$ is defined by (4.5), and by Proposition 4.1, $Z_s\in\mathcal{S}_N(\mathbb{R})$. Thus we can write
$$d\big(s,X_s,Z_s\big)=\big(1,b(X_s),\mathcal{H}(s,X_s,Z_s)\big)\,ds+\big(0,\sigma(X_s),0\big)\,dB_s,\tag{4.8}$$
which is (4.4). The existence of $Z_s$ for small times is guaranteed by Lemma 2.4.
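A minimal Euler-Maruyama sketch of the process $W_s=(s,X_s,Z_s)$ in (4.4): take $M=\mathbb{R}$ ($d=N=1$) with an Ornstein-Uhlenbeck-type diffusion and scalar $H$, $\Gamma$ (all of these, plus the step count, are illustrative assumptions of mine, not from the paper). The $Z$ component is advanced only by the drift (1.15), and the running integral of $G(W_s)$ from (1.12)-(1.13) is accumulated along the way.

```python
import numpy as np

rng = np.random.default_rng(1)

H     = lambda t: t                     # H(0) = 0, nondecreasing
Gamma = lambda x: 1.0 / (1.0 + x**2)    # bounded, positive, locally Lipschitz
b     = lambda x: -x                    # drift of X
sigma = lambda x: 1.0                   # diffusion coefficient of X

T, n = 1.0, 1000
h = T / n
X, Z = 0.5, 0.0                         # W_0 = (0, x, 0)
logdet = 0.0                            # accumulates int_0^T G(W_s) ds, cf. (1.12)

for k in range(n):
    s = k * h
    logdet += (H(s) - Z) * Gamma(X) * h               # G(W_s) = tr[(H(s) - Z_s) Gamma(X_s)]
    Z += (Z - H(s)) * Gamma(X) * (Z - H(s)) * h       # drift (1.15); no noise in Z
    X += b(X) * h + sigma(X) * np.sqrt(h) * rng.standard_normal()  # Euler-Maruyama step

print("X_T, Z_T, log det(I + K_{X,T}) ~", X, Z, logdet)
```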

Lemma 4.3. If $\Gamma$ is locally Lipschitz, then $W_s$ is the unique (pathwise) solution to (4.4).

Proof. By the construction of $X_s$, $b$ and $\sigma$ are locally Lipschitz. Moreover, since $\Gamma$ is locally Lipschitz on the manifold $M$ with bounded operator norm, it follows that $\mathcal{H}$ is locally Lipschitz. Therefore (4.4) has a unique solution, and it is given by $W_s$.

5. Long-Time Existence of $W_s$

We have addressed the existence and uniqueness of the solution to (4.4), given by $W_s=(s,X_s,Z_s)$, with $e(W)\le e(X)$. We will now give sufficient conditions for $e(W)=e(X)$.

Proposition 5.1. Suppose that the integral operator $\Upsilon_T$ with kernel $H(s\wedge t)$ is a strictly positive operator and $\Gamma$ is a symmetric nonnegative matrix. Then for $z$ such that $\operatorname{Re}(z)\ge0$, $(I+zK_{X,T})^{-1}$ exists for all $T<e(X)$.

Proof. The case $z=0$ is trivial, so assume $z\ne0$. Fix an $\omega\in\Omega$ and any $T<e(X(\omega))$. Since $K_X=K_{X,T}$ is a compact operator, it suffices to show that the kernel of $I+zK_X$ is $\{0\}$. Write $\Gamma(X(\omega))=\Gamma$ and $K_{X(\omega)}=K$. Let $0\ne v\in L^2$, so that $\langle v,v\rangle>0$. If $\Gamma v=0$, then $(I+zK_X)v=v$ is nonzero; hence we may assume that $\langle\Gamma v,\Gamma v\rangle>0$. Note that $K=\Upsilon M_\Gamma$ and that $M_\Gamma+M_\Gamma\Upsilon M_\Gamma$ is a symmetric operator (see Section 1 for the definitions of $\Upsilon$ and $M_\Gamma$). Therefore,
$$\big\langle\big(M_\Gamma+zM_\Gamma\Upsilon M_\Gamma\big)v,v\big\rangle=\big\langle M_\Gamma v,v\big\rangle+z\big\langle\Upsilon M_\Gamma v,M_\Gamma v\big\rangle=\big\langle\Gamma v,v\big\rangle+z\big\langle\Upsilon\Gamma v,\Gamma v\big\rangle.\tag{5.1}$$
Since $\Gamma$ is a nonnegative matrix, $\langle\Gamma v,v\rangle\ge0$, and because $\Upsilon$ is a strictly positive operator, $\langle\Upsilon\Gamma v,\Gamma v\rangle>0$. If $\operatorname{Re}(z)>0$, then
$$\operatorname{Re}\Big(\big\langle\Gamma v,v\big\rangle+z\big\langle\Upsilon M_\Gamma v,M_\Gamma v\big\rangle\Big)>0.\tag{5.2}$$
Otherwise, $\operatorname{Im}(z)\ne0$ and hence
$$\operatorname{Im}(z)\,\big\langle\Upsilon M_\Gamma v,M_\Gamma v\big\rangle\ne0.\tag{5.3}$$
Either way, if $\operatorname{Re}(z)\ge0$,
$$\big(M_\Gamma+zM_\Gamma\Upsilon M_\Gamma\big)v=M_\Gamma\big(I+z\Upsilon M_\Gamma\big)v\tag{5.4}$$
is nonzero, and therefore $(I+z\Upsilon M_\Gamma)v$ is nonzero. Thus for any nonzero $v\in L^2[0,T]$, $(I+z\Upsilon M_\Gamma)v$ is never zero, and since $\omega$ is arbitrary, $I+zK_X$ is invertible for every $\omega\in\Omega$.
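Numerically, the spectral picture behind Proposition 5.1 is easy to see on a discretization (a sketch only; the scalar kernel and the frozen positive path standing in for $\Gamma(X(t))$ are arbitrary choices of mine): the eigenvalues of the discretized $K_X=\Upsilon M_\Gamma$ come out real and nonnegative, so $-1/z$ is never an eigenvalue when $\operatorname{Re}(z)\ge0$.

```python
import numpy as np

n, T = 300, 1.0
h = T / n
s = (np.arange(n) + 0.5) * h

Gamma = 1.0 / (1.0 + np.sin(s) ** 2)               # a positive "frozen path" Gamma(X(t))
K = np.minimum.outer(s, s) * Gamma[None, :] * h    # kernel H(s ^ t) Gamma(t), H(t) = t

eig = np.linalg.eigvals(K)
print(eig.real.min(), np.abs(eig.imag).max())      # >= 0 and ~ 0, up to rounding
```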

Proposition 5.2. Suppose that the usual assumptions on $\Gamma$ and $H$ hold. Then $(I+K_{X,s})^{-1}$ exists for all $0\le s<e(X)$. Furthermore, (3.10) holds for all $0\le s<e(X)$.

Proof. Under the assumptions on $\Gamma$ and $H$, Proposition 3.1 and Remark 3.4 imply that Proposition 5.1 applies. Fix an $\omega\in\Omega$ and a $T<e(X(\omega))$, and let $C_T$ be as defined in Lemma 2.4. Then on $U=\{z\mid|z|<1/(TC_T)\}$, $I+zK_{X,s}$ is invertible for all $s\in[0,T]$. Hence $O=U\cup\{z\mid\operatorname{Re}(z)>0\}$ is an open connected set containing $0$ on which $(I+zK_{X,s})^{-1}$ exists for all $s\in[0,T]$; in particular, this holds at $z=1$. The assumptions of Corollary 2.9 are therefore met, and hence (3.10) holds.

6. Proof of Main Result

The proof of Theorem 1.2 now follows from Theorem 4.2 and Proposition 5.2. Integrating (3.10), we have, for $T<e(X)$,
$$\log\det\big(I+K_{X,T}\big)=\int_0^T\operatorname{tr}\big[H(s)\Gamma(X_s)-Z_s\Gamma(X_s)\big]\,ds.\tag{6.1}$$
For $(t,x,v)\in[0,\infty)\times M\times\mathcal{S}_N(\mathbb{R})$, define
$$\Psi(T,t,x,v)=\mathbb{E}_{(t,x,v)}\Big[f(X_T)\exp\Big(\int_0^T\Big(V(X_s)+p\operatorname{tr}\big[H(s)\Gamma(X_s)-Z_s\Gamma(X_s)\big]\Big)\,ds\Big)\Big]\tag{6.2}$$
and observe that
$$\Psi(T,0,x,0)=\mathbb{E}_x\Big[f(X_T)\,e^{\int_0^TV(X_s)\,ds}\,\big(\det(I+K_{X,T})\big)^{p}\Big]=\lambda(T,x).\tag{6.3}$$
By the Feynman-Kac formula, $\Psi$ satisfies the following partial differential equation:
$$\frac{\partial}{\partial T}\Psi(T,t,x,v)=\mathcal{L}_H\Psi(T,t,x,v).\tag{6.4}$$
Thus
$$\Psi(T,t,x,v)=e^{T\mathcal{L}_H}f(t,x,v),\tag{6.5}$$
and therefore, from (6.3),
$$\lambda(T,x)=e^{T\mathcal{L}_H}f(0,x,0).\tag{6.6}$$
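To close, a small end-to-end numerical sketch in the scalar case. The choices of $H$, $\Gamma$, $V$, $f$, $p$, the grid, and the helper `sample_path` are all illustrative assumptions of mine, not from the paper. Part (a) checks (6.1) on a single simulated path by comparing a direct Nyström evaluation of $\log\det(I+K_{X,T})$ with the integral of $\operatorname{tr}[(H(s)-Z_s)\Gamma(X_s)]$, where $Z_s$ is advanced by (1.15); part (b) uses the resulting Feynman-Kac form to estimate $\lambda(T,x)$ by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)

H, Gamma = (lambda t: t), (lambda x: 1.0 / (1.0 + x**2))
V, f, p  = (lambda x: 0.1 * np.cos(x)), (lambda x: np.exp(-x**2)), 0.5
b, sigma = (lambda x: -x), (lambda x: 1.0)
T, n, x0 = 1.0, 400, 0.5
h = T / n
t = np.arange(n) * h                              # time grid 0, h, ..., T - h

def sample_path():
    X = np.empty(n + 1); X[0] = x0
    for k in range(n):
        X[k+1] = X[k] + b(X[k]) * h + sigma(X[k]) * np.sqrt(h) * rng.standard_normal()
    return X

# (a) Check (6.1) on one path: direct determinant vs. the Z_s integral.
X = sample_path()
K = np.minimum.outer(t, t) * Gamma(X[:n])[None, :] * h      # kernel H(s ^ u) Gamma(X_u)
logdet_direct = np.linalg.slogdet(np.eye(n) + K)[1]
Z, logdet_fk = 0.0, 0.0
for k in range(n):
    logdet_fk += (H(t[k]) - Z) * Gamma(X[k]) * h
    Z += (Z - H(t[k])) * Gamma(X[k]) * (Z - H(t[k])) * h
print(logdet_direct, logdet_fk)                             # close for fine grids

# (b) Monte Carlo estimate of lambda(T, x0) in the Feynman-Kac form of this section.
vals = []
for _ in range(2000):
    X = sample_path()
    Z, expo = 0.0, 0.0
    for k in range(n):
        expo += (V(X[k]) + p * (H(t[k]) - Z) * Gamma(X[k])) * h
        Z += (Z - H(t[k])) * Gamma(X[k]) * (Z - H(t[k])) * h
    vals.append(f(X[n]) * np.exp(expo))
print("lambda(T, x0) ~", np.mean(vals))
```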