Journal of Applied Mathematics

Volume 2012, Article ID 921038, 25 pages

http://dx.doi.org/10.1155/2012/921038

## An Inverse Problem for a Class of Linear Stochastic Evolution Equations

College of Science, Civil Aviation University of China, Tianjin 300300, China

Received 8 May 2012; Revised 7 August 2012; Accepted 9 August 2012

Academic Editor: Chong Lin

Copyright © 2012 Yuhuan Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

An inverse problem for a linear stochastic evolution equation is studied. The stochastic evolution equation contains a parameter with values in a Hilbert space. The solution of the evolution equation depends continuously on the parameter and is Fréchet differentiable with respect to the parameter. An optimization method is provided to estimate the parameter. A sufficient condition to ensure the existence of an optimal parameter is presented, and a necessary condition that the optimal parameter, if it exists, must satisfy is also presented. Finally, two examples are given to show the applications of the above results.

#### 1. Introduction

The purpose of this paper is to study an inverse problem for the following linear stochastic evolution equation: where , is a parameter to be determined, and is a convex domain in . The solution of (1.1) corresponding to can be denoted as to explicitly show the dependence of on .
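In the standard variational setting for such problems, a linear stochastic evolution equation with an unknown parameter takes, in generic notation (the symbols below are illustrative, not necessarily those of (1.1)), the form

```latex
dy(t) = \bigl( A(q)\,y(t) + f(t) \bigr)\,dt + B(t)\,dW(t), \qquad y(0) = y_0,
```

where $q$ is the parameter, $A(q)$ a parameter-dependent linear operator, and $W$ a Wiener process.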

The problem of this paper is to determine the unknown parameter based on the measurement , which is defined by the following: where , , , , and are Hilbert spaces.

There are many papers dealing with parameter estimation problems for stochastic partial differential equations, for instance, see [1–7], but only a few directly estimate parameters involved in stochastic evolution equations in infinite-dimensional spaces, for example, [8, 9]. In particular, Lototsky and Rozovskii [9] consider the problem of estimating a constant parameter and obtain an estimate that is consistent and asymptotically normal.
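The consistency and asymptotic normality mentioned for [9] can be illustrated in the simplest finite-dimensional setting. The following hypothetical sketch (not the estimator of [9]) computes the classical maximum-likelihood drift estimator for the scalar SDE $dX_t = \theta X_t\,dt + dW_t$, namely $\hat\theta = \int_0^T X\,dX \big/ \int_0^T X^2\,dt$, from an Euler–Maruyama path; the drift value, step size, and horizon are made up.

```python
import numpy as np

# Simulate dX = theta * X dt + dW by Euler-Maruyama, then recover theta
# with the drift MLE  theta_hat = sum(X dX) / sum(X^2 dt).
rng = np.random.default_rng(0)
theta_true = -0.5            # hypothetical drift parameter to recover
dt, n_steps = 0.01, 20_000   # time horizon T = 200

x = np.empty(n_steps + 1)
x[0] = 1.0
dw = rng.normal(0.0, np.sqrt(dt), n_steps)   # Brownian increments
for k in range(n_steps):
    x[k + 1] = x[k] + theta_true * x[k] * dt + dw[k]

dx = np.diff(x)
theta_hat = np.sum(x[:-1] * dx) / np.sum(x[:-1] ** 2 * dt)
print(f"estimated drift: {theta_hat:.3f}")
```

As the horizon $T$ grows, the estimation error shrinks like $T^{-1/2}$, which is the finite-dimensional counterpart of the asymptotic normality cited above.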

Denote by the linear continuous operator space on to , by the inner product of , and by the dual product of and , where is the dual of .

, , and make up an evolution triple, namely, they should satisfy where each space is dense in the following space and has a continuous injection, , and

For any and , , , , , , and where the constant is independent of and . Let be a complete probability space and an increasing family of sub-σ-algebras of .
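In generic notation (an illustrative assumption, since the spaces are only named abstractly here), the evolution-triple and coercivity/boundedness conditions of this kind read

```latex
V \subset H \subset V' \ \text{(dense, continuous embeddings)}, \qquad
\langle A v, v \rangle + \lambda\,|v|_H^2 \;\ge\; \alpha\,\|v\|_V^2, \qquad
\|A v\|_{V'} \;\le\; c\,\|v\|_V,
```

with constants $\alpha > 0$, $\lambda \in \mathbb{R}$, and $c > 0$ independent of $v \in V$.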

denotes the space of random variables with values in a Hilbert space . is a Wiener process with values in a separable Hilbert space , which is adapted to , that is, for all, is a real Wiener process and an -martingale, with the correlation function where , and is a positive self-adjoint nuclear operator almost everywhere on . is called the covariance operator. Moreover, assume that satisfies where , is independent of .
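For a $Q$-Wiener process on a separable Hilbert space $K$ (illustrative symbols, not necessarily the paper's), the correlation structure described above is usually written

```latex
\mathbb{E}\,\bigl[ \langle W(t), a \rangle_K \, \langle W(s), b \rangle_K \bigr]
\;=\; (t \wedge s)\,\langle Q a, b \rangle_K, \qquad a, b \in K,
```

with $Q$ positive, self-adjoint, and nuclear, so that $\operatorname{tr} Q < \infty$.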

We now determine the parameter in the system (1.1) and (1.2). As is standard, this is transformed into an optimization problem: seek an optimal parameter such that the cost functional reaches its minimum over the admissible parameter set at , that is, where is a conditional expectation, that is, and is the sub-σ-algebra induced by the stochastic process , , which is adapted to .
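A cost functional of the kind described is typically a least-squares mismatch between the model output and the measurement; in illustrative notation (the symbols $C$, $z$, $T$, $Q_{ad}$ here are assumptions, not the paper's fixed notation),

```latex
J(q) \;=\; \mathbb{E} \int_0^T \bigl| C\,y(t; q) - z(t) \bigr|^2 \, dt,
\qquad q^\ast \;=\; \operatorname*{arg\,min}_{q \in Q_{ad}} J(q),
```

where $C$ is the observation operator, $z$ the measurement, and $Q_{ad}$ the admissible parameter set.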

If there exists a neighbourhood of such that then is called a relative optimal parameter.

In Section 2, the foundation of this paper is laid: under certain conditions the solution map is continuous and continuously Fréchet differentiable with respect to the parameter.

In Section 3, the main results of this paper are presented. The problem of estimating the parameter is transformed into an optimization problem, and questions such as the existence of an optimal parameter and necessary conditions for optimality are studied.

In Section 4, the results in Sections 2 and 3 are applied to parabolic stochastic partial differential equations to identify certain parameters involved in those equations.

#### 2. Continuity and Differentiability with Respect to a Parameter

In this section the continuity and the differentiability of the solution of the system (1.1) with respect to the parameter are studied.

Before studying the properties of the solution to (1.1), it must be shown that the system (1.1) is well behaved in some sense under certain conditions. There are many papers dealing with the solvability of the stochastic evolution equation (1.1); for example, see [10–12]. The following lemma, from Bensoussan [10], is useful.

Lemma 2.1. *Besides the assumptions for , , , , , and in Section 1, one assumes that, for any ,
**
Then there exists a unique generalized solution, , in the Ladyzenskaja sense of (1.1) almost every such that
** is adapted to (as a process with values in ), and is measurable with values in . *

In the above lemma the space with , consists of all continuous functions that have continuous Fréchet derivatives up to order on , with the norm

*Remark 2.2. *The generalized solution in the Ladyzenskaja sense is the solution of the following variational equation:
where the space is a Hilbert space that is defined by
and it is well known that .

We can now state the main results of this section.

Theorem 2.3. *If the assumptions of Lemma 2.1 are satisfied and
**
are continuous, the solution of (1.1)
**
is continuous, that is, or the following equalities are true:
*

Before proving Theorem 2.3, we quote from Bensoussan [10] the following formula in a Hilbert space.

Lemma 2.4. *Let be a functional on , which is twice continuously Fréchet differentiable in and continuously differentiable in . Assume has the stochastic differential:
**
where is a stochastic process with values in , which is adapted and satisfies the condition
**
where is an adapted process with values in such that , is measurable and
**
then one has the following formula in the Hilbert space:
**
where is the adjoint of and the symbol “tr” is the trace operator, of which definition for a nuclear operator is as follows:
**
where is an orthonormal basis of . *
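The trace referred to in Lemma 2.4 is the standard one: for a nuclear operator $Q$ on a separable Hilbert space with orthonormal basis $(e_i)$,

```latex
\operatorname{tr} Q \;=\; \sum_{i=1}^{\infty} \langle Q\,e_i, \, e_i \rangle,
```

and the value is independent of the chosen basis.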

The following two lemmas are obvious, so their proofs are omitted.

Lemma 2.5. *Suppose that is a nuclear operator and ; then
*

Lemma 2.6. * If ,
*

Lemma 2.7. *If ,
*

*Proof. *First, suppose that is a step function, that is,
where and .

Let be an orthonormal basis of . Obviously,
and is a Wiener process. Furthermore,
So (2.18) is proved.

For the general case, (2.18) can be obtained by the usual approximation argument for stochastic integrals; the details are omitted.

Now the proof of Theorem 2.3 can be given as follows.

*Proof of Theorem 2.3. *Denote and , and then
Let , and then from the above equalities, it follows

Setting according to Lemma 2.4, we get
which is

Taking the expectation of the above and using Lemma 2.6, we have

From the assumptions of the spaces and , there exists a constant such that
Furthermore, according to the assumptions on the operator and using (2.28) and , we can obtain
By the assumptions of Theorem 2.3 there is a constant such that
Obviously,
Using the Gronwall inequality (see [13]) and the assumptions of Theorem 2.3, letting , we have
which is (2.8). Furthermore, from (2.31), it follows
which is just (2.10).
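The Gronwall inequality invoked above is the standard integral form, stated here for reference:

```latex
u(t) \;\le\; a + \int_0^t b(s)\,u(s)\,ds \quad (0 \le t \le T)
\;\;\Longrightarrow\;\;
u(t) \;\le\; a \exp\!\Bigl( \int_0^t b(s)\,ds \Bigr),
```

for continuous $u \ge 0$, constant $a \ge 0$, and integrable $b \ge 0$.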

From (2.26), we have the following estimate:
Similarly, we also obtain
On the other hand, the indefinite stochastic integral
is a continuous martingale, so
where the following martingale inequality is used:
with .
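The martingale inequality referred to is Doob's maximal inequality: for a continuous martingale $M$ and $p > 1$,

```latex
\mathbb{E} \sup_{0 \le t \le T} |M_t|^p
\;\le\; \Bigl( \frac{p}{p-1} \Bigr)^{\!p} \, \mathbb{E}\,|M_T|^p,
```

in particular the constant is $4$ for $p = 2$.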

Combining (2.35) with (2.37), we have
Letting in , by the assumptions of Theorem 2.3, we immediately obtain
which is (2.9).

Theorem 2.8. *Besides the assumptions of Theorem 2.3, suppose that the mappings
**
are continuously Fréchet differentiable, then the solution of (1.1)
**
is continuously Fréchet differentiable and its Fréchet derivative operators at , , are determined by the following system:
**
where , , is determined by (1.1), and , , , and are the Fréchet derivative operators of , , , and , at , respectively.*

*Proof. *By Lemma 2.1, there exists a unique solution to (2.43), . Taking , for any , setting and , where and are defined by (2.22) and (2.23), respectively, we have
where , , , , , and .

Letting or , according to the definition of Fréchet differentiability, we have

Moreover, by Theorem 2.3 we get and .

Using a deduction similar to that in the proof of Theorem 2.3, we obtain
which is just .

So, is Fréchet differentiable at and its Fréchet derivative operator is determined by (2.43).

The continuity of can be proved in a way similar to the proof of Theorem 2.3; the details are omitted here.

#### 3. Existence and Necessary Conditions for Optimality

In this section the optimization problem (1.10) is studied. First, we prove that the cost functional is continuous and continuously Fréchet differentiable. Next, we prove that under certain sufficient conditions there exists an optimal parameter , at which the cost functional reaches its minimum over the admissible parameter set , and derive necessary conditions for optimality, namely, inequalities that the optimal parameter must satisfy.

Theorem 3.1. *Let the assumptions of Theorem 2.3 be satisfied, , and let
**
be continuous, then the mapping
**
is continuous. *

*Proof. *Take , and set , , , , and ; then we have
Letting , that is, , using Theorem 2.3 and the assumptions of Theorem 3.1, we obtain at once
Hence, is continuous.

From Zeidler [14] the following lemma is quoted.

Lemma 3.2. *The minimum problem
**
has a solution, where , , if one of the following conditions is fulfilled: *(a)* is a topological space and is lower semicompact; *(b)* is a topological space, is compact, and is continuous. *

Lemma 3.3. *The extreme problem
**
has a solution, , if one of the following conditions is fulfilled: *(a)* is a closed subspace of ; *(b)* is closed and convex in . *

*Proof. *Obviously, condition (a) is a special case of condition (b). So, we need only prove that the conclusion holds in case (b).

Assume . Obviously, is closed and convex.

Suppose is a minimizing sequence, that is, .

In order to prove that is a Cauchy sequence, we prove the following inequality:
Obviously, we have the following inequality:
Therefore,
which is exactly (3.7).
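In generic Hilbert-space notation (illustrative symbols), the chain of inequalities above is the parallelogram-law argument: for a minimizing sequence $(x_n)$ in a convex set with $d = \inf_n \|x_n\|$,

```latex
\Bigl\| \frac{x_n - x_m}{2} \Bigr\|^2
\;=\; \frac{\|x_n\|^2 + \|x_m\|^2}{2} - \Bigl\| \frac{x_n + x_m}{2} \Bigr\|^2
\;\le\; \frac{\|x_n\|^2 + \|x_m\|^2}{2} - d^2,
```

since by convexity the midpoint $(x_n + x_m)/2$ stays in the set and so has norm at least $d$; the right-hand side tends to $0$, which makes the sequence Cauchy.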

For any , , from (3.7) we have
Letting , we obtain
Hence, is a Cauchy sequence. Since is closed, there exists such that
By the definition of , there exists such that . Obviously, we have

Theorem 3.4. *Let the assumptions of Theorem 3.1 be true and let one of the following be fulfilled: *(a)* is a finite-dimensional Euclidean space and is convex, bounded, and closed; *(b)* the function is linear, is independent of , and is closed and convex; *(c)* is compact in , **then the optimization problem (1.9) has a solution, , namely,
*

*Proof. *Obviously, the result can be obtained by Lemma 3.2 and Theorem 3.1 when (a) or (c) is fulfilled.

If condition (b) holds, there is an operator , , such that , where is the solution of (1.1). Therefore, satisfies the assumptions of Lemma 3.3. So, by Lemma 3.3 the result is obtained immediately.

Furthermore, we can also obtain the smoothness of the mapping .

Theorem 3.5. *Let the assumptions of Theorem 2.8 be satisfied, , and let
**
be Fréchet differentiable, then
**
is continuously Fréchet differentiable and the Fréchet differential of at along the direction is determined by the following:
**
where and is determined by (2.43). *

*Proof. *Take , , set , then we have
where and . Letting , that is, , we have
So, the functional is Fréchet differentiable and is determined by (3.17); obviously, is continuous.

We are now in a position to state necessary conditions for the optimization problem (1.9).

Theorem 3.6. *Let the assumptions of Theorem 3.5 be satisfied. If a point is a relative optimal parameter for (1.9), then is characterized by
**
where is a neighbourhood of . *

*Proof. *First, let be the relative optimal parameter; then we have
from which (3.20) follows immediately.

Conversely, suppose (3.20) is true. Using the Taylor formula with the Peano remainder,
we obtain at once

Theorem 3.7. *Let the assumptions of Theorem 3.5 be satisfied and let the functional be convex. If is an extreme point, then is an optimal parameter and is characterized by the following inequality
**
In particular, if is linear, (3.24) is true.**
(The results of this theorem are obvious, so the proof is omitted here.) *
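The variational inequality characterization in Theorems 3.6 and 3.7 can be checked numerically in a hypothetical finite-dimensional sketch (not the paper's setting): minimize a quadratic cost $J(q) = \|Aq - b\|^2$ over a convex box by projected gradient descent and verify that $\langle J'(q^\ast), q - q^\ast \rangle \ge 0$ at sampled admissible points. The matrix $A$, data $b$, and bounds below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))               # made-up linear model
b = rng.normal(size=8)                    # made-up data
lo, hi = np.zeros(3), np.ones(3)          # admissible set: the unit box

def grad(q):
    # Frechet derivative of J(q) = ||A q - b||^2:  J'(q) = 2 A^T (A q - b)
    return 2.0 * A.T @ (A @ q - b)

q = np.full(3, 0.5)                       # start in the interior
step = 0.01
for _ in range(5000):
    # gradient step followed by projection onto the box
    q = np.clip(q - step * grad(q), lo, hi)

# Check the necessary condition <J'(q*), q - q*> >= 0 on random admissible points.
probes = rng.uniform(lo, hi, size=(100, 3))
min_pairing = min(float(grad(q) @ (p - q)) for p in probes)
print(f"min <J'(q*), q - q*> over probes: {min_pairing:.6f}")
```

At the projected-gradient fixed point the pairing is nonnegative for every admissible direction, which is exactly the inequality of Theorem 3.6 in this toy setting.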

Theorem 3.8. *Let the assumptions of Theorem 3.5 be satisfied and let the observation operator be independent of . Then the optimal parameter that minimizes over is characterized by the following optimization system:
**, where .*

*Proof. *First, reversing the flow of time (changing to ) and applying Lemma 2.1, it is easy to show that the problem (3.26) is well posed. By Theorems 3.5 and 3.6
Setting and using (3.26) and (2.43), we have

#### 4. Applications

In this section the above results are applied to systems governed by stochastic partial differential equations. The following symbols are used:

: a bounded open set;

: the boundary of , which is smooth;

: the partial derivative with respect to ;

;

: Sobolev space, its definition can be found in [15].

##### 4.1. The System Governed by a Stochastic Parabolic Partial Differential Equation

Consider the following stochastic parabolic partial differential equation: where with , , , , for all , with , , , is a Wiener process in .

The problem we shall deal with is to determine the unknown coefficients and based on the measurement

Suppose and , which is the subspace of consisting of all elements that vanish on the boundary; then . Note that . , , and make up the evolution triple.

Take the unknown parameter and define the parameter space by with the norm then is a Hilbert space.

In order for the problem (4.1) to make sense, we take the admissible parameter set as follows: where and are given constants.

Obviously, is a convex and closed set in . By the Sobolev imbedding theorem (see [15], Chapter 6), the imbedding is compact, where is the Hölder space and .

Next, define the operator where ; then, obviously, . Using the generalized Green formula, for all , , we have Since , we have By the Poincaré inequality we have which, along with (4.8), shows that satisfies (1.5).
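The Poincaré inequality used here is, in generic notation,

```latex
\| u \|_{L^2(D)} \;\le\; C_D \, \| \nabla u \|_{L^2(D)}, \qquad u \in H_0^1(D),
```

with a constant $C_D$ depending only on the bounded domain $D$; it is what upgrades the Gårding-type lower bound to the coercivity required by (1.5).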

Next, for all Therefore, , for all .

Finally, set , , and . Thus, (4.1) can be written as (3.25).

Summing up the above reasoning, we have the following theorem.

Theorem 4.1. *If , then (4.1) has a unique solution : **,
** is adapted to (as a process with values in ),** is measurable with values in .*

Theorem 4.2. *If , then for any , the mapping is continuously Fréchet differentiable and its Fréchet differential is determined by the following system:
**
where and is determined by (4.1). Furthermore, the mapping is infinitely Fréchet differentiable. *

Because , . So, we can use the following cost functional in order to determine : The operator obviously satisfies .

Theorem 4.3. *The mapping defined by (4.13) is continuously Fréchet differentiable and its Fréchet differential is determined by
**
where , the gradient operator is
** is the solution of (4.1), and is defined by the following system:
*

##### 4.2. Example 2

Consider the following system: where with .

The problem addressed is to identify the unknown parameter , which varies in an admissible parameter set based on the approximate point measurement: