Abstract

We introduce and study a type of (one-dimensional) wave equation with noisy point sources. We first study the existence and uniqueness problem for these equations. Then, we assume that the locations of the point sources are unknown but that we can observe the solution at some other location continuously in time. We propose an estimator to identify the point source locations and prove its convergence.

1. Introduction

Assume that there are a certain number of objects in a certain area of the ocean or another medium. The total number of the objects and the location of each object are unknown. We need to identify the total number and the precise locations of the objects. The objects are also assumed to emit (point source) sound waves, and we are able to measure these sound waves received at some known locations. The objective is to use these measurements for this identification problem.

This type of problem has been studied mathematically in the framework of inverse problems for partial differential equations (wave equations). The sound travels according to the following second-order wave equation with point sources: where , , , and are some given points in , is the Dirac delta function, and , , are some known constants. The solution is supposed to be known for some space points and for some interval . The total number and each location of the point sources are estimated from . We refer to, for example, [1] and in particular the references therein for some recent studies in this area. This theory has found substantial applications in determining the heat sources in heat conduction, the magnetic sources in the brain, the sources of seismic waves in earthquakes, and so on.

In practice, the sound wave inevitably travels under the influence of noise. Voluntarily or involuntarily, the point sources themselves may also emit noise to avoid being detected. Thus, we are led to the following stochastic wave equations: where , , , and are some given points in , is the Dirac delta function, and are two given deterministic functions, and , , are independent Gaussian noises which are white in time and correlated in space.

When , the stochastic wave equation (2) has been studied for a long time. Let us mention the early lecture notes [2] and the more recent lecture notes [3]. Many properties, such as the sample path Hölder continuity of the solution, have been obtained (see [4, 5] and the references therein).

However, when , , are not all zero, (2) is highly singular because of the presence of the Dirac delta functions multiplied by the Gaussian noises. Such an equation has not been studied yet. The first objective of this paper is to give the definition of the solution to such an equation and to show the existence and uniqueness of the solution under appropriate conditions. This will be done in Section 2. Since the case when has been well studied, we will now assume to simplify our presentation. However, since our objective is the identification of the and the point source positions , we will not spend too much effort here. For this reason, we restrict ourselves to the one space dimension case. We will present the higher space dimension case in another project.

To explain our identification approach clearly, we will further restrict our model. We will consider the special case of (2) where the coefficients are independent of and and . Namely, we will concentrate on the stochastic wave equation of the form In this case, we will write down the explicit expression of the solution. This will be done in Section 3. In that section, we also obtain some properties of the solution which will be useful in later sections of the paper.

Now, we assume that, in (3), the total number of point sources and the positions , , are unknown. However, we are able to observe the sound signal received at some known locations , continuously in the time interval . Namely, we assume that are known. We would like to use this information to identify and , . In Section 4, we will develop a new approach to obtain statistical estimators and to estimate the total number and the locations of the point sources. The approach combines the reciprocity gap functional approach from the theory of partial differential equations with techniques from the theory of stochastic processes. We show the almost sure convergence of our estimators and to the true parameters and .

2. Stochastic Wave Equations with Noisy Point Sources

Let be a basic probability space with a right continuous filtration of -algebras satisfying the usual conditions. Let be dimensional Gaussian random fields. The formal derivatives , , are called the Gaussian noises. We assume that these Gaussian noises are white in time and correlated in space with covariance (see [5]). Namely, we assume that where denotes the expectation on , , and () are some symmetric positive definite functions of and . This is interpreted as For any (deterministic) smooth function in , the stochastic integral is well-defined in the sense of Walsh ([2]). The following fact is well-known: for any two smooth functions and in , we have We also call the covariance functional of , denoted by .

Sometimes, we also use Fourier transform theory to study the stochastic integral and the stochastic equations. We use to denote the Fourier transform of and .

If we assume that the noise is spatially homogeneous, that is, , then there exists a nonnegative tempered measure which is the Fourier transform of . With this notation, we can also write From the general theory of stochastic integration (see, e.g., [3]), we see that if is a real valued -adapted process such that then the stochastic integral is well-defined and Now, let be a given fixed point in and let be a real valued -adapted process. We want to define the stochastic integral , where is the Dirac delta function. To this end, we will use smooth functions to approximate the Dirac delta function.

Let be a smooth function with compact support . Set . It is clear that converges to in the sense of distributions as . For each , the stochastic integral is well-defined. We want to know whether has a limit in or not. First, we have the following computations: Passing to the difference of and , we see that can be written as Now, we let be a continuous function of and . We also assume that there is a such that We write , where and are defined and estimated as follows. For , we have For , we have (recall ) Thus, converges to .

In the same way, we can show that converges to under the same conditions.
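The approximation of the Dirac mass by rescaled smooth bumps can also be checked numerically. The following Python sketch is purely illustrative (the bump function, the test function, and the location a = 0.3 are our own choices, not taken from the paper); it verifies that the integral of the rescaled bump against a smooth test function approaches the value of that function at the concentration point as the scale tends to zero, which is exactly the convergence in the sense of distributions used above.

```python
import numpy as np

def bump(x):
    """Smooth bump supported on [-1, 1] (not yet normalized)."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

# Normalize once so the bump integrates to one over the real line.
_grid = np.linspace(-1.0, 1.0, 200001)
_mass = np.sum(bump(_grid)) * (_grid[1] - _grid[0])

def delta_approx(x, a, eps):
    """phi_eps(x - a) = eps^{-1} phi((x - a)/eps), concentrating at the point a."""
    return bump((x - a) / eps) / (_mass * eps)

a = 0.3                      # location of the point mass (illustrative)
f = lambda x: np.cos(2 * x)  # smooth test function (illustrative)

x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
for eps in [0.5, 0.1, 0.02, 0.004]:
    val = np.sum(delta_approx(x, a, eps) * f(x)) * dx
    print(f"eps = {eps:7.3f}   integral = {val:.6f}   f(a) = {f(a):.6f}")
```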

Theorem 1. Let be a Gaussian noise which is white in time and correlated in space with covariance . Assume that is a continuous function of and . Let be an -adapted process such that conditions (14) hold. Then, the stochastic integral exists and

Proof. The above argument shows the existence of the stochastic integral . We have Now, Fatou's lemma yields (17).

We also need to bound general moments of the stochastic integral . We have the following Burkholder-type inequality.

Theorem 2. Let be a Gaussian noise which is white in time and correlated in space with covariance . Assume that is a continuous function of and . Let be an -adapted process such that Then, the stochastic integral exists and

Now, we turn to the existence and uniqueness of the solution to the stochastic wave equation (2). We will follow the idea of mild solutions. Since the Green function is more delicate to study in dimensions higher than , we will only study the one space dimensional wave equation in this paper. The higher dimensional case needs much more care. Consider In the one-dimensional case, the associated fundamental solution (Green's function) of the wave operator is First, we give the following definition of the solution.

Definition 3. A random field is called a solution to the stochastic wave equation if it satisfies the following identity:
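For orientation, and only under the normalization of unit wave speed (an assumption made here because the constant is elided in this copy), the one-dimensional kernel has the standard d'Alembert form, and the classical Duhamel representation for a deterministic forcing F with vanishing initial data reads

```latex
G(t,x) \;=\; \tfrac12\,\mathbf 1_{\{|x|\le t\}},
\qquad
u(t,x) \;=\; \tfrac12 \int_0^t \!\! \int_{|x-y|\le t-s} F(s,y)\,\mathrm{d}y\,\mathrm{d}s .
```

The identity in Definition 3 is the stochastic analogue of this representation, integrating the drift and noise terms of (21) against the same kernel.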

Theorem 4. Assume that and , , satisfy the global Lipschitz condition and the linear growth condition in uniformly in and . Assume that is a bounded continuous function in and is a bounded continuously differentiable function in with bounded derivative. Then, there is a unique solution to (21).

Proof. Since the solution of the wave equation has the past light cone property (see [6], p. 63), we can study the solution on a bounded domain. To simplify the presentation, we assume .
Let us define as the set of all mappings such that is continuous in for almost all and It is clear that is a Banach space with the norm We will prove the existence and uniqueness of the solution in to (21) by Picard iteration. We define recursively by the following for any . We also define .
First, we show the well-posedness of the above stochastic integral for every .
For any , let . By Burkholder's inequality, we have By Hölder's inequality, we have where is the conjugate exponent of .
In a similar way, we can show for any . Thus, by induction, we have for all and . Let denote the set of all functions of and which are Hölder continuous in both and on any compact subinterval of . Hence, by Kolmogorov's theorem, we have, for any (fixed) , for every . It is easy to check that . Now, we easily verify Moreover, for any . Thus, , satisfy conditions (14). Therefore, (26) is well-defined for all .
Next, we show that is a Cauchy sequence in for any . For , by Burkholder’s inequality and Jensen’s inequality, we have Let us denote by the set of all mappings such that is continuous in for almost all and We also denote Thus, (32) can be written as Now, a routine argument shows that is a Cauchy sequence in . The limit of this sequence is denoted by .
Letting in (29), we have and . This implies that the stochastic integral is well-defined. It is easy to see that as . Now, letting tend to infinity on both sides of (26), we see that satisfies (23). The uniqueness can be proved in a similar way. This completes the proof of the theorem.
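The fixed-point structure of the Picard scheme used in this proof can be illustrated on a much simpler deterministic integral equation. The following Python sketch is illustrative only and does not simulate the stochastic integrals of the theorem: it iterates the map u ↦ 1 + ∫_0^t (−u(s)) ds, whose unique fixed point is the exact solution e^{−t}, and reports the sup-norm error of each iterate.

```python
import numpy as np

# Picard iteration for the toy integral equation u(t) = 1 - \int_0^t u(s) ds,
# whose exact solution is u(t) = exp(-t).  This only illustrates the fixed-point
# scheme used in the proof; the stochastic terms are not simulated here.
T, n_grid = 1.0, 10001
t = np.linspace(0.0, T, n_grid)
dt = t[1] - t[0]

u = np.ones_like(t)                       # u_0(t) = 1 (the initial iterate)
for n in range(12):
    integrand = -u
    # cumulative trapezoidal integral of the integrand from 0 to t
    integral = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) * dt / 2)))
    u_next = 1.0 + integral
    err = np.max(np.abs(u_next - np.exp(-t)))
    print(f"iteration {n + 1:2d}:  sup-norm error = {err:.2e}")
    u = u_next
```

The errors decay factorially until they reach the level of the discretization error of the trapezoidal rule, which is the usual behaviour of a Picard iteration on a finite time interval.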

3. Some Preliminaries for Estimation of the Point Sources

To simplify the presentation of the estimation method, we assume in (21). We also assume that the noises are space independent. Without loss of generality, we also assume . Moreover, we assume that . This means that we will consider the following stochastic wave equation:

From the Duhamel principle, we know that the solution of the one-dimensional stochastic wave equation (38) is given by

In the remainder of this paper, we assume that the parameters , are unknown. However, at some fixed location , we can observe the sound wave signal , continuously over the time interval . We would like to use to identify , , and . If one can observe the sound wave signals at some other locations , we can use a similar approach to (better) estimate , from all the observations .

Put . We arrange the positive real numbers in increasing order. For example, we can assume .

If , then, for any and , cannot hold, and hence .

If and , then ; hence Similarly, we have, for , for , and, for ,

Now, for any process , we define its quadratic variation process (if it exists) as where is a partition of the interval and . For dimensional Brownian motion , we know that, for and any , , where if , and otherwise.
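The defining property used here, namely that the quadratic variation of a standard Brownian motion over [0, T] equals T, is easy to check by simulation. The following Python sketch (illustrative only) samples a Brownian path on a fine grid and evaluates the sum of squared increments over partitions of decreasing mesh.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 2.0
n_fine = 2 ** 20                     # fine grid on which the Brownian path is sampled
dt = T / n_fine
increments = rng.normal(0.0, np.sqrt(dt), size=n_fine)
B = np.concatenate(([0.0], np.cumsum(increments)))   # Brownian path on the fine grid

# Quadratic variation over coarser dyadic partitions of [0, T].
for k in [6, 10, 14, 18, 20]:
    step = n_fine // 2 ** k          # take every `step`-th point of the fine path
    coarse = B[::step]
    qv = np.sum(np.diff(coarse) ** 2)
    print(f"partition with 2^{k:2d} intervals:  QV = {qv:.4f}   (theory: T = {T})")
```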

The quadratic variation process of is given by Since we can observe at the space location continuously over the time interval , we know that the quadratic variation process , is also observable.

Let be large enough; for example, . We denote . On , has its second-order distributional derivative given by For with , we can define a linear operator , called the reciprocity gap functional, as follows: By the integration by parts formula, we have For any function independent of the unknown parameters, we know from the definition that is also independent of the unknown parameters. Namely, is observable.

To obtain our estimators for the parameters, we take Then, where .

Clearly, since are constructed from , we see that they are observable. Furthermore, once we know , we can obtain from the identity .

The can be obtained from in the following way. We let be the matrices of the following forms:

The following result is from [7, 8]; for more details, see these references and the references therein.

Theorem 5. Let be defined as above; then are the eigenvalues of the Hermite matrix .

Proof. First, we will introduce some intermediate quantities as follows. For , we define We denote the diagonal matrix We define the vectors and vector .
It is obvious that, for all , we have Thus, for all , we have Denote . Then, and the matrix has the as its eigenvalue corresponding to the eigenvector for .
On the other hand, we have Because is a Vandermonde matrix and are assumed to be distinct, we can also show that the vectors are independent, which means that is invertible. Hence, . The conclusion follows.

Remark 6. Let be integers. Defining ( denotes the transpose), one can see that the vectors are images of under the action of the matrix , from which, together with the independence of the vectors , one can deduce that the determinant of the matrix is equal to zero. For more details, see [7].

Remark 7. One can see from the proof that the vector is the solution of the linear system .
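The following Python sketch illustrates the type of recovery described by Theorem 5 and Remark 7, under the assumption that the quantities involved take the classical power-moment form μ_k = Σ_j λ_j a_j^k, which is consistent with the Vandermonde and diagonal factorization used in the proof. Since the paper's exact definitions are elided in this copy, all symbols and numerical values below are purely illustrative.

```python
import numpy as np

# Ground truth used only to synthesize the "observable" moments (illustrative values).
locations = np.array([0.7, 1.3, 2.1])      # a_1 < a_2 < a_3, assumed distinct
intensities = np.array([1.0, 0.5, 2.0])    # lambda_1, ..., lambda_N > 0
N = len(locations)

# Moments of the assumed form mu_k = sum_j lambda_j * a_j^k, k = 0, 1, ..., 2N-1.
mu = np.array([np.sum(intensities * locations ** k) for k in range(2 * N)])

# Hankel matrices A0 = [mu_{i+j}] and A1 = [mu_{i+j+1}], i, j = 0, ..., N-1.
A0 = np.array([[mu[i + j] for j in range(N)] for i in range(N)])
A1 = np.array([[mu[i + j + 1] for j in range(N)] for i in range(N)])

# The locations are the eigenvalues of A0^{-1} A1
# (equivalently, the generalized eigenvalue problem A1 v = a A0 v).
a_hat = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(A0, A1))))

# The intensities solve the Vandermonde system sum_j lambda_j a_hat_j^k = mu_k, k = 0, ..., N-1.
V = np.vander(a_hat, N, increasing=True).T   # row k contains a_hat_1^k, ..., a_hat_N^k
lam_hat = np.linalg.solve(V, mu[:N])

print("recovered locations  :", a_hat)
print("recovered intensities:", lam_hat)
```

In exact arithmetic the eigenvalue step returns the locations and the Vandermonde solve returns the intensities, which is the content of Theorem 5 and Remark 7 in this simplified setting.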

4. Estimations for Point Sources from Discrete Time Observations

By Theorem 5, we know that the parameters we want to estimate are contained in the eigenvalues of the Hermite matrix .

We assume in this section that the wave signals are observed at the location but at discrete time instants . We denote .

We define an approximation of the quadratic variation of the solution process for any . Assume that there are integers such that for all . For any , since there is an integer such that , one defines This is also an approximation of the quadratic variation process of the solution process based on the observation time instants . We know that, for fixed , as with . In fact, we can show that the convergence holds in the almost sure sense (see Lemma 10).

To use to obtain estimators of the parameters, we compute the following. For any twice differentiable function , by the Abel summation formula, we have where , . Thus, we can define For any nonnegative integer , we introduce

First, we estimate . It is known that when , will be degenerate. However, there is a difference between and , and there may also be errors from the numerical computation. To make the method robust, we introduce a small number . We propose to estimate by the following estimator:

As in the case of continuous time observation, we let be the eigenvalues (in increasing order) of the Hermite matrix . Then, we obtain the estimates of the locations of the point sources by for . The estimate of follows from (64). Note that , are given by (49).

Similarly to Theorem 5, we define As in Remark 7, the estimates of the strengths of the point sources can be obtained by solving the following linear system:

Now, we can summarize the estimation procedure as follows.

Step 1. Compute on and according to transformation (61).

Step 2. Identify the value of as the maximum such that

Step 3. Compute the eigenvalues of matrix .

Step 4. Compute the intensities by formulae .
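A possible numerical realization of the thresholding test in Step 2 is sketched below in Python; the smallest-singular-value criterion and all numerical values are our own illustrative choices and are not the paper's exact statistic. Steps 3 and 4 then proceed as in the sketch following Remark 7.

```python
import numpy as np

def estimate_N(mu_hat, eps, n_max):
    """
    Select the largest n for which the n x n Hankel matrix built from the
    estimated quantities is numerically non-degenerate.  The smallest-singular-
    value test is one possible realization of the thresholding rule in Step 2;
    it is not the paper's exact statistic.
    """
    N_hat = 0
    for n in range(1, n_max + 1):
        H = np.array([[mu_hat[i + j] for j in range(n)] for i in range(n)])
        if np.linalg.svd(H, compute_uv=False)[-1] > eps:
            N_hat = n
    return N_hat

# Illustration: noiseless quantities for the three sources used in the previous
# sketch, plus a tiny perturbation mimicking discretization error.
a = np.array([0.7, 1.3, 2.1])
lam = np.array([1.0, 0.5, 2.0])
mu_hat = np.array([np.sum(lam * a ** k) for k in range(10)])
mu_hat = mu_hat + 1e-9 * np.random.default_rng(1).normal(size=mu_hat.size)

print(estimate_N(mu_hat, eps=1e-6, n_max=5))   # expected output: 3
```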

5. Convergence of Estimations

In this section, we will show that the estimators obtained in the previous section converge to the true values a.s. as the time spacing tends to zero.

Theorem 8. For any function with , we have converging to a.s. as .

This will be established by the following two lemmas.

Lemma 9. For any function , converges to in the sense of as .

Proof. It is sufficient to show that, for any function , converges to in the sense of as . By Hölder's inequality, one has, for any function , where For , we have Let be a standard normal random variable. Denote the constant by (actually ). By the independent increments property of Brownian motion and the independence between and for , we have where the constant depends only on , , , and as . In a similar way, we can obtain the same estimates for the first and last terms in . Since the total number of terms in is less than , as . This completes the proof of the lemma.

Lemma 10. For any function , converges to a.s. as .

Proof. By integration by parts, we know where . Therefore, for any , the square integrable functional of dimensional Brownian motion belongs to the direct sum of and the second chaos.
To this end, we define the second quantization operator (for ) from to by where .
Setting , by the hypercontractivity of the Ornstein-Uhlenbeck (OU) semigroup (see [9, 10]), one has Letting , we have Thus, the hypercontractivity justifies the following form: In the last inequality, we used the facts appearing at the end of the proof of Lemma 9. Therefore, Now, we identify the with as . Denote the set by . Then, . Choose large enough such that holds. Thus, the Borel-Cantelli lemma can be applied to show that a.s. as tends to zero.

Proof of Theorem 8. As in the proof of Lemma 10, we just need to show that the limit holds in the sense of . Indeed, as , one has Hence, the result of Theorem 8 follows.

Since Weyl's perturbation theorem (see [11, 12]) says that by Theorem 8, we can obtain the convergence of the estimators of the locations in an obvious way, which is stated as follows.

Theorem 11. For any , one has a.s. as .
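Recall that Weyl's perturbation theorem bounds the change of each ordered eigenvalue of a symmetric (Hermitian) matrix by the operator norm of the perturbation. The following Python sketch (illustrative only; the matrix and perturbation are randomly generated) checks this inequality numerically.

```python
import numpy as np

rng = np.random.default_rng(42)

# Weyl's perturbation theorem for symmetric matrices:
# max_k |lambda_k(A) - lambda_k(B)| <= ||A - B||  (operator norm),
# where the eigenvalues of each matrix are sorted in increasing order.
n = 6
M = rng.normal(size=(n, n))
A = (M + M.T) / 2                       # a symmetric matrix
E = rng.normal(scale=1e-3, size=(n, n))
B = A + (E + E.T) / 2                   # a small symmetric perturbation of A

eig_gap = np.max(np.abs(np.linalg.eigvalsh(A) - np.linalg.eigvalsh(B)))
op_norm = np.linalg.norm(A - B, 2)      # spectral norm of the perturbation

print(f"max eigenvalue gap : {eig_gap:.3e}")
print(f"operator norm bound: {op_norm:.3e}")
assert eig_gap <= op_norm + 1e-12
```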

Next, we will give the convergence of estimators of the intensities of the point sources.

Theorem 12. Let be the solution of linear equations ; that is, . Then, one has a.s. as .

Proof. Notice that solves the equation . By Theorems 8 and 11, one obtains the conclusion of the theorem.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Y. Hu was partially supported by Grant no. 209206 from the Simons Foundation and a General Research Fund (GRF) of University of Kansas. G. Rang was supported by NSFC-11171262 in China and by Post 70s Project of Wuhan University.