Journal of Applied Mathematics, Volume 2014, Article ID 524860, 7 pages. http://dx.doi.org/10.1155/2014/524860
Research Article

## Approximate Controllability of a 3D Nonlinear Stochastic Wave Equation

Institute of Mathematics, Jilin University, Changchun 130012, China

Received 19 October 2013; Revised 21 December 2013; Accepted 4 January 2014; Published 23 February 2014

Academic Editor: Debasish Roy

Copyright © 2014 Peng Gao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We study the well-posedness of a 3D nonlinear stochastic wave equation, derived from the Maxwell system, by the Galerkin method. We then study the approximate controllability of this system by the Hilbert uniqueness method.

#### 1. Introduction

Many physical models are described by wave equations, such as the scalar wave equations, the elastic wave equations, and Maxwell's equations. Therefore, the well-posedness and controllability of wave equations are important issues in both theory and applications.

The well-posedness of partial differential equations (PDEs) has been studied by many authors using several methods: semigroup theory, the fixed point method [5, 6], the variational method, and the regularization method. In this paper, we study the well-posedness of a 3D nonlinear stochastic equation by the Galerkin method. In order to use this method, we first have to construct a suitable basis via the Hilbert-Schmidt theorem.
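The Galerkin strategy is easiest to see in a toy setting. The sketch below is purely illustrative (it is not the equation studied in this paper): a spectral Galerkin solution of the deterministic 1D wave equation u_tt = u_xx on (0, π) with Dirichlet boundary conditions. In the Laplacian eigenbasis sin(kx), which is the kind of orthonormal basis the Hilbert-Schmidt theorem supplies, the Galerkin system decouples into independent oscillators.

```python
import numpy as np

# Spectral Galerkin sketch for u_tt = u_xx on (0, pi), u = 0 on the boundary.
# Projecting onto the eigenfunctions sin(k x) decouples the Galerkin system
# into independent oscillators a_k'' = -k^2 a_k, which can be solved exactly.

N = 8                               # number of Galerkin modes
x = np.linspace(0.0, np.pi, 201)    # spatial grid for evaluating the solution

def galerkin_solution(coeffs0, t):
    """Solution with initial displacement sum_k coeffs0[k-1] sin(kx), zero velocity."""
    u = np.zeros_like(x)
    for k, a0 in enumerate(coeffs0, start=1):
        u += a0 * np.cos(k * t) * np.sin(k * x)   # mode k oscillates at frequency k
    return u

coeffs0 = [1.0] + [0.0] * (N - 1)            # pure first mode: u(x,0) = sin(x)
u_later = galerkin_solution(coeffs0, np.pi)  # at t = pi the first mode flips sign
print(np.allclose(u_later, -np.sin(x)))      # True
```

Because the basis diagonalizes the Laplacian, each coefficient can be solved in closed form; for the stochastic system considered in this paper, the same projection produces a finite system of stochastic differential equations instead.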

The controllability of PDEs has been a very active research field since the 1960s; the definitions of exact controllability, null controllability, and approximate controllability are by now standard. By the definition of controllability, we should consider a final-condition problem for the PDE rather than an initial-condition problem. In fact, the problem of controllability is related to the solution of a backward adjoint problem. The connection between these two problems is the famous Hilbert uniqueness method (HUM), proposed by Lions [11]. From Lagnese's retrospective [10], we know that “the theoretical basis of HUM is, roughly speaking, the observation that if one has uniqueness of solutions for a linear evolutionary system in a Hilbert space it is possible to introduce a Hilbert space norm based on the uniqueness property in such a way that the dual system is exactly controllable to the dual space. The exact controllability problem is thereby transferred to the problem of identifying (or otherwise characterizing) the couple. The latter is essentially a problem in PDEs when the original evolutionary system is a distributed parameter system: can a priori estimates be obtained in terms of norms in spaces which are both intrinsic to the given problem and readily identifiable?”

For a detailed discussion of HUM, we refer to the retrospective [10]. Through this adjoint problem, one may connect controllability problems with minimization problems for appropriate functionals. The existence of solutions to these minimization problems is in turn related to certain inequalities, the so-called observability inequalities, for the dynamical system in question, which show that certain observed quantities of the solution uniquely determine the solution of the dynamical system. Observability inequalities are related to continuous-dependence questions for the solution in terms of the initial or the final data. To obtain the observation estimates for the adjoint problem, one can use the multiplier method and microlocal analysis.
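As a rough illustration, an observability inequality for a wave equation observed through the velocity on a subdomain typically takes the following shape; the symbols here (adjoint state φ, observation region ω, horizon T, constant C) are generic placeholders, not notation from this paper:

```latex
% Generic observability inequality: the initial energy of the adjoint state
% \varphi is controlled by what is observed on \omega over the window (0,T),
% with a constant independent of the data.
\|(\varphi(0), \varphi_t(0))\|_{H_0^1(\Omega) \times L^2(\Omega)}^2
  \;\le\; C(T,\omega) \int_0^T \!\! \int_{\omega} |\varphi_t|^2 \, dx \, dt .
```

Such an estimate says precisely that the observed quantity determines the whole solution continuously, which is the continuous-dependence statement referred to above.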

In a number of applications, the systems in question are subject to stochastic fluctuations arising as a result of either uncertain forcing (stochastic external forcing) or uncertainty of the governing laws of the system. Noise due to experimental uncertainties, intrinsic randomness of the media, and so forth plays an important role in wave propagation, thus calling for the inclusion of stochastic terms in the wave equations. When a physical state is modeled, external random noise can be a serious disturbance, and this clearly affects the controllability of the model. In this paper we discuss the controllability of a vector wave equation with random noise.

The wave equation with a dissipative term has been extensively studied by many authors, and the physical background can be found in many works in the literature.

In this paper, we consider a nonlinear stochastic wave equation (1) for 3D stochastic processes, in which the nonlinearity is a map satisfying suitable conditions, the noise is a 1D stochastic process, ∇ is the gradient operator, Δ is the Laplace operator, and div is the divergence operator; the remaining notation will be defined in Section 2.

System (1) can be derived from the Maxwell system. Indeed, consider the classical Maxwell system (2), where ∇× is the curl operator. Differentiating the second equation in (2) with respect to time, and combining the result with the first equation in (2), we obtain (4).

Noting the vector identity ∇ × (∇ × u) = ∇(div u) − Δu, (4) becomes a vector wave equation. This motivates us to consider (1); this idea can also be found in [5, 6]. For convenience, we write (1) in the following equivalent form.

Question. Given an initial condition (in an appropriately chosen Hilbert space) and a final state, can we find an adapted control such that the system (1) is driven ε-close to the final condition in the chosen time period?
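For the reader's convenience, the reduction just described can be written out as follows for the source-free, normalized Maxwell system; E and H denote the electric and magnetic fields, and physical constants are suppressed, so this is a sketch of the standard computation rather than the exact system (2):

```latex
\begin{aligned}
E_t &= \nabla \times H, \qquad H_t = -\,\nabla \times E
  && \text{(Maxwell system)} \\
E_{tt} &= \nabla \times H_t = -\,\nabla \times (\nabla \times E)
  && \text{(differentiate in } t\text{)} \\
E_{tt} &= \Delta E - \nabla(\operatorname{div} E)
  && \text{(identity } \nabla \times (\nabla \times E)
     = \nabla(\operatorname{div} E) - \Delta E\text{)}
\end{aligned}
```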

Theorem 4 is a positive answer to the above question.

For deterministic wave equations, exact controllability has been extensively investigated over the past few decades. References [9, 15] study stochastic wave equations, and the method we use to deal with our problem is inspired by the ideas of those papers.

This paper is organized as follows. In Section 2, we establish the existence and uniqueness theorem for the initial-boundary value problem. In Section 3, the approximate controllability of (1) is stated and proved.

#### 2. The Initial-Boundary Value Problem

Throughout this paper, let be standard 1D Brownian motions over the stochastic basis , where is the augmentation of the filtration generated by the Brownian motions under the probability measure (see ). is always assumed to be a bounded and simply connected domain in , its boundary is smooth, and is the outward normal vector on . Set

The definitions and properties of these spaces can be found in [16, 17]. stands for the inner product in .

Definition 1. A stochastic process is said to be a solution of (1) if holds for all and all , for a.s. .

First, we consider the following system:

By using the Galerkin method, we can obtain the following theorem.

Theorem 2. Let , let and be -valued predictable processes, and let , . There is a solution to the system (10). Furthermore, the solution is pathwise unique, and for every .

Next, we consider the system (1).

Theorem 3. Assume the following conditions:
(i) is an -valued predictable process and ;
(ii) is an -valued predictable process and, for some nonnegative constant , for all and a.s. ;
(iii) ;
(iv) satisfies, for all , where is the Euclidean norm and is a positive constant;
(v) is an -valued predictable process and .
Then there is a solution to problem (1). Furthermore, the solution is pathwise unique.

Proof. We formally write for all .
Let be the solution of the following linear problem:
Since is -valued -measurable, is independent of time and -valued -measurable by (12). Also, is -valued -measurable, and
If is adapted to , is an -valued predictable process and
Therefore, we apply Theorem 2 to problem (13) for .
By virtue of (12), we have for all and a.s. .
We have
Then by (11), (16), and (17) we know for every . In the last step, we have used the fact that for every .
Let for every and each .
Then we can derive that, for some positive constant , for all and all .
By (11) and (14) we can obtain some positive constant such that
By induction, it follows that then
Consequently, is a Cauchy sequence in .
Then namely, as ,
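The successive-approximation argument above has a simple deterministic prototype that may help fix ideas. The sketch below is illustrative only (it is not the paper's system): Picard iteration for the ODE y' = y, y(0) = 1 on [0, 1]. Each iterate solves a "linear problem" with the previous iterate frozen in the right-hand side, and the Lipschitz property of the nonlinearity makes the increments shrink like T^n/n!, so the sequence is Cauchy.

```python
import numpy as np

# Picard iteration for y' = y, y(0) = 1: the analogue of building the
# solution as a limit of solutions of linear problems.  The (n+1)-st
# iterate is y_{n+1}(t) = 1 + \int_0^t y_n(s) ds, evaluated here with the
# composite trapezoid rule on a fine grid.

T = 1.0
t = np.linspace(0.0, T, 1001)
dt = t[1] - t[0]

def picard_step(y_prev):
    # cumulative trapezoid rule: integral[i] ~ \int_0^{t_i} y_prev(s) ds
    integral = np.concatenate(
        ([0.0], np.cumsum((y_prev[1:] + y_prev[:-1]) / 2) * dt))
    return 1.0 + integral

y = np.ones_like(t)          # y^0: the constant initial datum
errors = []                  # sup-norm increments sup |y^{n+1} - y^n|
for n in range(8):
    y_new = picard_step(y)
    errors.append(np.max(np.abs(y_new - y)))
    y = y_new

print(np.max(np.abs(y - np.exp(t))) < 1e-3)                        # iterates approach e^t
print(all(errors[i + 1] < errors[i] for i in range(len(errors) - 1)))  # increments shrink
```

The shrinking increments are the deterministic counterpart of the Cauchy estimate obtained above; in the stochastic setting the sup-norm is replaced by an expectation of the energy norm.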
From (13), we can get that, for every , holds for every , every , and a.s. .
By the diagonal process and taking limits, we get that holds for every , every , and a.s. .
In the end, we prove the uniqueness of the solution.
Let and be two solutions. We define and for .
Let be the solution of the following linear problem:
By virtue of (11), we see that, for all , satisfies
In the meantime, by the uniqueness of solutions to the linear problem, we can obtain for . Then, by using assumptions (i) and (iv), it follows from (20) that, for all ,
From Gronwall’s inequality, we obtain that for all and a.s. , which proves the pathwise uniqueness.
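The form of Gronwall's inequality used here is the integral one, stated below with generic symbols φ, a, C (not this paper's notation):

```latex
% Integral form of Gronwall's inequality: a self-bounding integral
% estimate upgrades to an exponential bound.
\phi(t) \;\le\; a + C \int_0^t \phi(s)\,ds \quad \text{for } t \in [0,T]
  \qquad \Longrightarrow \qquad
\phi(t) \;\le\; a\, e^{Ct}.
```

With a = 0, as happens when two solutions share the same data, the bound forces φ ≡ 0, which is exactly the uniqueness conclusion.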

#### 3. Approximate Controllability of (1)

Now, we can answer the question in Section 1.

Theorem 4. Suppose that assumptions (i)–(iv) of Theorem 3 hold and is given. For each , there is a control that satisfies the following: is an -valued predictable process over and where is the solution of (1).

In the first part we derive fundamental estimates. The proof of Theorem 4 is given in the second part.

By using a method similar to that in [9, 15], we can establish the following lemmas.

Lemma 5. Let be the solution of the system (34) such that . Then, for every , it holds that (35), where is a positive constant depending only on .

Lemma 6. Let be the solution of the system (36), where is the solution of the system (34). Let . Then it holds that (37), where is a positive constant depending only on and .

Remark 7. The estimates (35) and (37) naturally carry over to the case where is an -valued random variable.
If , then and Also, .

Lemma 8. Consider the following initial-boundary value problem:
Let be given and let satisfy (ii) of Lemma 6. Then there is a unique solution of (39) such that, for each , is -valued and -measurable and, for almost all . Furthermore, it holds that for every , where is a positive constant independent of and .

Let be given and let , and be the solutions of (34), (39), and (36), respectively. Set in (39). We will estimate .

Lemma 9. Assume that . Set . Let be a positive constant such that . Then it holds that where is a positive constant independent of .

Consider the map

The following property of is essential for Theorem 4.

Lemma 10. For every -measurable random variable with values in , there exists an such that

In fact, Lemma 10 can be obtained from Lemmas 5 and 6 and the Lax-Milgram lemma.

The major technical result, essential to our work, is a generalization of the martingale representation theorem (see ) that is summarized in the following lemma from [9, 15].

Lemma 11. Given an -measurable random variable , for any , there exist and an -measurable random variable such that, for some , and .
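For orientation, the classical martingale representation theorem that Lemma 11 generalizes states that every square-integrable random variable measurable with respect to a Brownian filtration is its mean plus a stochastic integral; the notation below is the standard one:

```latex
% Classical martingale representation: for \xi \in L^2(\Omega, \mathcal{F}_T, P),
% with \{\mathcal{F}_t\} generated by the Brownian motion W, there is a
% predictable integrand \phi with E \int_0^T \phi_s^2 \, ds < \infty such that
\xi \;=\; E[\xi] \;+\; \int_0^T \phi_s \, dW_s .
```

Roughly speaking, Lemma 11 replaces the exact representation by an ε-approximation measurable at an earlier time, which is what makes the construction of an adapted control possible.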

Remark 12. The proof of Lemma 11 is similar to [15, Lemma 1.2] and [9, Lemma 1].

Now, we can prove our main result.

Proof of Theorem 4. The following procedure shows the construction of the approximate control; we use the Hilbert uniqueness method.
(1) Choose an and a desired final state for the system (1).
(2) Solve the forward uncontrolled system and find the solution ; set . This is an -measurable random variable.
(3) Find the distance of the uncontrolled solution from the desired state and set . This is an -measurable random variable.
(4) According to Lemma 11, we can approximate by an -measurable random variable such that . We can take sufficiently small that , where is the same as in (41) and .
(5) From Lemma 10, we can find the solution of the equation .
(6) Find the solution of (34) with .
(7) According to Lemma 8, we can obtain the solution of (39) with .
(8) Extend to so that for . Then and is -measurable for each . Furthermore, is a pathwise unique solution of the following problem (44), where is the characteristic function of the time interval .
(9) Define , where is the solution of (44).
Now we verify that is indeed a desired control for system (1). It is obvious that is an -valued predictable process over and . We set . Then is a solution of (1). Furthermore, where is the solution of (36) with being replaced by such that , and . Hence, by virtue of (41), (45)–(48), we have
This completes the proof of Theorem 4.
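The nine-step construction has a transparent finite-dimensional analogue. The sketch below is illustrative, with invented names, and is not the infinite-dimensional argument of the paper: for a controlled ODE x' = Ax + Bu it carries out the same plan — propagate the uncontrolled system, measure the defect from the target, invert the controllability Gramian (the finite-dimensional stand-in for the Lax-Milgram step of Lemma 10), and read the control off the adjoint flow.

```python
import numpy as np

# HUM-style steering for x' = Ax + Bu on [0, T]:
# (1) propagate the uncontrolled system, (2) measure the defect from the
# target, (3) invert the controllability Gramian, (4) build the control
# from the adjoint flow: u(t) = B^T e^{A^T (T - t)} eta.

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator
B = np.array([[0.0], [1.0]])              # force acts on the velocity only
T, n_steps = 2.0, 20000
dt = T / n_steps

def expA(t):
    # e^{At} is a rotation matrix for this particular A
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

x0 = np.array([1.0, 0.0])
target = np.array([0.0, 0.0])

# Steps (2)-(3): defect of the free flow, then Gramian inversion.
defect = target - expA(T) @ x0
s = (np.arange(n_steps) + 0.5) * dt                       # midpoint quadrature nodes
W = sum(expA(T - si) @ B @ B.T @ expA(T - si).T * dt for si in s)
eta = np.linalg.solve(W, defect)

# Step (4): simulate the controlled system with the HUM control.
x = x0.copy()
for si in s:
    u = (B.T @ expA(T - si).T @ eta).item()
    x = x + dt * (A @ x + B.flatten() * u)                # explicit Euler step

print(np.linalg.norm(x - target) < 1e-2)                  # steered close to the target
```

In the paper, the Gramian inversion is replaced by Lemmas 5, 6, and 10, and the defect must additionally be approximated by an earlier-measurable random variable (Lemma 11) so that the resulting control is adapted.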

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

The author would like to sincerely thank Professor Yong Li for many useful suggestions and help.

#### References

1. S. Nicaise and C. Pignotti, “Internal and boundary observability estimates for the heterogeneous Maxwell's system,” Applied Mathematics and Optimization, vol. 54, no. 1, pp. 47–70, 2006.
2. H. Barucq and M. Fontes, “Well-posedness and exponential stability of Maxwell-like systems coupled with strongly absorbing layers,” Journal de Mathématiques Pures et Appliquées, vol. 87, no. 3, pp. 253–273, 2007.
3. S. Nicaise, “Exact boundary controllability of Maxwell's equations in heterogeneous media and an application to an inverse source problem,” SIAM Journal on Control and Optimization, vol. 38, no. 4, pp. 1145–1170, 2000.
4. B. V. Kapitonov, “Stabilization and exact boundary controllability for Maxwell's equations,” SIAM Journal on Control and Optimization, vol. 32, no. 2, pp. 408–420, 1994.
5. H.-M. Yin, “On Maxwell's equations in an electromagnetic field with the temperature effect,” SIAM Journal on Mathematical Analysis, vol. 29, no. 3, pp. 637–651, 1998.
6. H.-M. Yin, “Existence and regularity of a weak solution to Maxwell's equations with a thermal effect,” Mathematical Methods in the Applied Sciences, vol. 29, no. 10, pp. 1199–1213, 2006.
7. J. E. Lagnese, “Exact boundary controllability of Maxwell's equations in a general region,” SIAM Journal on Control and Optimization, vol. 27, no. 2, pp. 374–388, 1989.
8. H. M. Yin, “Global solution of Maxwell's equations in an electromagnetic field with the temperature-dependent electrical conductivity,” European Journal of Applied Mathematics, vol. 5, pp. 57–71, 1994.
9. T. Horsin, I. G. Stratis, and A. N. Yannacopoulos, “On the approximate controllability of the stochastic Maxwell equations,” IMA Journal of Mathematical Control and Information, vol. 27, no. 1, pp. 103–118, 2010.
10. J. E. Lagnese, “The Hilbert uniqueness method: a retrospective,” Optimal Control of Partial Differential Equations, vol. 149, pp. 158–181, 1991.
11. J. L. Lions, “Exact controllability, stabilization and perturbations for distributed systems,” SIAM Review, vol. 30, no. 1, pp. 1–68, 1988.
12. Z. J. Yang, “Global existence, asymptotic behavior and blowup of solutions for a class of nonlinear wave equations with dissipative term,” Journal of Differential Equations, vol. 187, no. 2, pp. 520–540, 2003.
13. J. Vancostenoble, “Weak asymptotic decay for a wave equation with gradient dependent damping,” Asymptotic Analysis, vol. 26, no. 1, pp. 1–20, 2001.
14. M. Slemrod, “Weak asymptotic decay via a “relaxed invariance principle” for a wave equation with nonlinear, nonmonotone damping,” Proceedings of the Royal Society of Edinburgh A, vol. 113, pp. 87–97, 1989.
15. J. U. Kim, “Approximate controllability of a stochastic wave equation,” Applied Mathematics and Optimization, vol. 49, no. 1, pp. 81–98, 2004.
16. M. Cessenat, Mathematical Methods in Electromagnetism: Linear Theory and Applications, World Scientific, Singapore, 1996.
17. R. Dautray and J. L. Lions, Mathematical Analysis and Numerical Methods for Science and Technology, vol. 3, Springer, New York, NY, USA, 1990.
18. I. Karatzas and S. Shreve, Brownian Motion and Stochastic Calculus, Springer, New York, NY, USA, 2nd edition, 1997.