#### Abstract

We consider the problem of identification of an unknown source in a heat equation. The problem is ill posed in the sense that the solution (if it exists) does not depend continuously on the data. Meyer wavelets have the property that their Fourier transforms have compact support. Therefore, by expanding the data and the solution in the basis of Meyer wavelets, high-frequency components can be filtered away. Under additional assumptions concerning the smoothness of the solution, we discuss the stability and convergence of a wavelet-Galerkin method for the source identification problem. Numerical examples are presented to verify the efficiency and accuracy of the method.

#### 1. Introduction

Inverse source identification problems are important in many branches of the engineering sciences; for example, an accurate estimate of a pollutant source is crucial to environmental safeguards in cities with large populations. This inverse problem has been investigated in a number of theoretical papers concerned with the conditional stability and data compatibility of the solution, and the optimal error bound has also been derived. Several numerical methods have been proposed for the inverse source identification problem. In the present paper, building on these ideas, we propose a wavelet-Galerkin method to solve the inverse source problem.

We consider the following initial value problem for the nonhomogeneous heat equation: where represents state variable and denotes the source (sink) term.

The problem (1.1) is a classical direct problem which has been extensively studied in the past decades. Unfortunately, in many practical situations the characteristics of the source (sink) term are unknown. The problem is then mathematically underdetermined, and additional data must be supplied to fully determine the physical process. Our task is to determine the heat source from the usual initial conditions with the assistance of the additionally supplied data. This is an inverse problem, and it is usually ill posed in the sense that small perturbations in the data may result in enormous deviations in the solution.

In this paper we consider the inverse problem of determining the unknown function in (1.1) from the overspecified condition: This means that our purpose is to determine the pair of functions from the following problem: As shown in [17, 18], this problem has a unique solution, but the solution is very sensitive to small data perturbations; hence it is ill posed. Since the data can only be measured in practice, measurement errors are unavoidable, and we actually have a noisy data function which satisfies where denotes the -norm and the constant represents the noise level. The ill posedness of problem (1.3) can be seen by solving it in the frequency domain. Let denote the Fourier transform of the function . Problem (1.3) can then be formulated in frequency space as follows:

It is easy to see that the function in (1.6) can be given by Since the multiplier in this formula grows without bound as the frequency tends to infinity, the existence of a solution depends on a rapid decay of the data at high frequencies. However, the measured data function is merely square integrable and in general does not possess such a decay property; high-frequency components in the error are magnified and can destroy the solution. It is therefore impossible to solve the problem using classical numerical methods, and special regularization techniques must be employed.
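The amplification mechanism can be made explicit under a standard version of this problem; the model, symbols, and domain below ($u_t - u_{xx} = f(x)$ on $\mathbb{R}\times(0,1)$ with $u(x,0)=0$ and data $g(x)=u(x,1)$) are assumptions chosen for illustration, not necessarily the paper's exact notation:

```latex
% Fourier transform in x of  u_t - u_{xx} = f(x),  u(x,0)=0,  gives the
% ODE  \hat{u}_t(\xi,t) = -\xi^2\,\hat{u}(\xi,t) + \hat{f}(\xi).
% Solving it in t and evaluating at the measurement time t = 1:
\[
\hat{g}(\xi) \;=\; \hat{u}(\xi,1)
  \;=\; \frac{1-e^{-\xi^{2}}}{\xi^{2}}\,\hat{f}(\xi)
\qquad\Longrightarrow\qquad
\hat{f}(\xi) \;=\; \frac{\xi^{2}}{1-e^{-\xi^{2}}}\,\hat{g}(\xi).
\]
% The multiplier grows like \xi^2 as |\xi| -> infinity, so any
% high-frequency error in g is amplified quadratically.
```

This quadratic amplification is exactly why a solution exists only when the data decay rapidly at high frequencies, and why noisy data destroy naive reconstructions.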

Since convergence rates can only be given under a priori assumptions on the exact solution , we formulate such an a priori assumption in terms of the exact solution by considering the bound (1.8), where denotes the norm in the Sobolev space defined by

Meyer wavelets are special because, unlike most other wavelets, they have compact support in the frequency domain but not in the time domain (they do, however, decay very fast). The wavelet-Galerkin method for approximating solutions of the sideways heat equation has been used in [15, 16], among others. It was demonstrated there that with this method the sideways heat equation can be solved efficiently and in a numerically stable way.

The purpose of this paper is to demonstrate that, using a wavelet-Galerkin approach, we can solve problem (1.3) efficiently. By using the method, we give an error estimate between the exact solution and its approximation, as well as the rule for choosing an appropriate wavelet subspace, depending on the noise level of data.

The outline of the paper is as follows. In Section 2 we describe Meyer wavelets and discuss the properties that make them useful for solving ill-posed problems. In Section 3 we describe the wavelet-Galerkin method and give an error estimate which shows the continuous dependence of the approximate solution on the data.

#### 2. The Meyer Wavelets

Let , be the Meyer scaling and wavelet functions, defined by their Fourier transforms, which satisfy The function family constitutes an orthonormal basis of and The multiresolution analysis (MRA) of the Meyer wavelet is generated by The functions constitute the orthogonal complement of in ; that is, . Let and denote the orthogonal projections of onto and , respectively. Then the orthogonal projections of any function onto the spaces and can be expressed by respectively.
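For reference, the standard Fourier-domain definition of the Meyer scaling function (the formulas above were lost in extraction) reads as follows, where $\nu$ is a smooth function with $\nu(x)=0$ for $x\le 0$, $\nu(x)=1$ for $x\ge 1$, and $\nu(x)+\nu(1-x)=1$:

```latex
\[
\hat{\varphi}(\xi) =
\begin{cases}
(2\pi)^{-1/2}, & |\xi| \le \frac{2\pi}{3},\\[4pt]
(2\pi)^{-1/2}\cos\!\left[\frac{\pi}{2}\,
  \nu\!\left(\frac{3|\xi|}{2\pi}-1\right)\right],
  & \frac{2\pi}{3} \le |\xi| \le \frac{4\pi}{3},\\[4pt]
0, & \text{otherwise}.
\end{cases}
\]
```

Hence $\operatorname{supp}\hat{\varphi} \subset \{|\xi|\le \frac{4\pi}{3}\}$ and $\operatorname{supp}\hat{\psi} \subset \{\frac{2\pi}{3}\le|\xi|\le\frac{8\pi}{3}\}$; consequently the dilated functions satisfy $\operatorname{supp}\hat{\psi}_{jk} \subset \{2^{j}\frac{2\pi}{3}\le|\xi|\le 2^{j}\frac{8\pi}{3}\}$, which is the compact frequency support exploited throughout.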

It is easy to see from (2.3) and (2.5) that

From (2.7) we see that the projection can be considered as a low-pass filter: frequencies higher than the cutoff will be filtered away.
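The low-pass action of the projection can be illustrated with a plain FFT cutoff; the sharp cutoff and the names below are a stand-in for the (smoother) Meyer projection, chosen only to show the filtering effect:

```python
import numpy as np

def lowpass(f, cutoff, dx=1.0):
    """Ideal low-pass filter: zero out Fourier modes with |xi| > cutoff.

    A sketch of the filtering role played by the projection onto a
    coarse wavelet subspace; an ideal sharp cutoff is used here purely
    for illustration (the Meyer projection has a smooth transition band).
    """
    F = np.fft.fft(f)
    xi = 2.0 * np.pi * np.fft.fftfreq(len(f), d=dx)  # angular frequencies
    F[np.abs(xi) > cutoff] = 0.0                     # kill high frequencies
    return np.real(np.fft.ifft(F))

# A smooth signal plus high-frequency noise: the filter removes the noise.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
clean = np.sin(x)
noisy = clean + 0.1 * np.sin(40 * x)
filtered = lowpass(noisy, cutoff=5.0, dx=x[1] - x[0])
```

On this grid the noisy mode sits at angular frequency 40, far above the cutoff, so the filtered signal coincides with the clean one up to round-off.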

#### 3. A Galerkin Method in

We now introduce the Galerkin method for approximating solutions of problem (1.3), based on separation of variables and using wavelet approximation in the space variable.

Consider the weak formulation of the differential equation with test functions from : for all . Problem (3.1) can be rewritten in the equivalent form: find such that Then with the Ansatz we get the infinite-dimensional system of ordinary differential equations for the vectors of coefficients and , where ; that is, and the matrix is given by The matrix represents the second-order differentiation operator in the wavelet basis, and the following boundedness result guarantees the well-posedness of the Galerkin equation (3.1).

Proposition 3.1. The infinite matrix is symmetric and has Toeplitz structure. Its norm satisfies Moreover, if is a continuous function, then

The proof is similar to known arguments and is given in the appendix.

Let us denote

We are interested in estimating the norm of the distance between the Galerkin solution of problem (3.1) for the noisy data and the unknown solution of problem (1.3) for the exact data . Let denote the projection of onto the space ; by the triangle inequality we have (3.10), where the first term on the right-hand side describes the approximation of the exact solution in the scaling subspaces , the second represents the norm of the “error” function, and the last corresponds to the stability of the Galerkin method. We now consider the three terms separately.

First, let us consider the problem of stability of the Galerkin solution with respect to perturbations of the data function .

Theorem 3.2. Let be the measured data which satisfies (1.4). Let and be the solution of the Galerkin problem (3.2) with data and , respectively. Then

Proof. Let and be given by where and are the solutions of the Galerkin equations (3.4) with data and , respectively (with the obvious definition of ). Then, by the Parseval formula, Due to Proposition 3.1, we have Combining this with (1.4), we get (3.12).

Before investigating the relation between the exact solution and the corresponding Galerkin solution with the exact data , we state a useful lemma and a corollary, whose proofs are similar to known arguments and are given in the appendix.

Lemma 3.3. If the exact solution satisfies a priori condition (1.8) for , then there holds

Corollary 3.4. Suppose that the condition of Lemma 3.3 holds and that is the exact temperature distribution of problem (1.1) and (1.2); then there holds

Theorem 3.5. If (1.8) is satisfied for a certain , then

Proof. Due to (2.7) and (2.8), we know Taking into account the assumption (1.8), we can estimate the second term of the right-hand side of (3.10) as follows: Combining (3.20) with Lemma 3.3, we complete the proof.

It remains to reckon with the second term in the right-hand side of (3.10).

Theorem 3.6. Let and be the solution of problems (1.3) and (3.2), respectively, for exact data . If (1.8) is satisfied, then there holds

Proof. Since the Galerkin equation (3.1) for satisfies (3.2), and the pair of functions satisfies if we denote , , then the error functions satisfy the equation Let be the representations in the wavelet basis of the functions and , respectively; that is, Then (3.24) is equivalent to the infinite system of first-order differential equations for and : Then we have and, taking into account (3.25), we get hence the claimed estimate follows; in the last inequality we have used Proposition 3.1 and Corollary 3.4.

Theorems 3.2, 3.5, and 3.6 give estimates of the three terms appearing in the error bound (3.10); combining the three results, we obtain a Hölder-type error estimate for the wavelet-Galerkin method in the following theorem.

Theorem 3.7. Let be the exact solution of (1.1) and (1.2) satisfying (1.8) for , and let be the Galerkin solution of (3.1) for the measured data such that (1.4) holds. If satisfies then there holds where is a positive constant independent of and .

#### 4. Numerical Implementation

In this section, we describe the numerical implementation of the proposed method.

Note that problem (1.3) is essentially local: a strong source at some position will influence the solution nearby but have limited impact farther away. This local property of problem (1.3) allows us to truncate the problem to a finite interval and still obtain reasonable solutions.

##### 4.1. Solve from the Direct Problem

We select and a known for , suppose that (this is to agree with the compatibility condition), and compute the data functions , and hence , by solving a well-posed initial-boundary value problem on the interval , using the Crank-Nicolson implicit scheme. The scheme is described as follows: let and be the step lengths in the time and space coordinates, let , , and denote equidistant partitions of the domain, and define and . The finite difference approximation is Then we can easily obtain the data and .
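A minimal sketch of the forward Crank-Nicolson solver is given below, assuming the model $u_t = u_{xx} + f(x)$ with zero initial data and homogeneous Dirichlet boundary conditions on the truncated interval; all names and parameter choices are ours, not the paper's:

```python
import numpy as np

def crank_nicolson(f, L=1.0, T=1.0, nx=64, nt=64):
    """Solve u_t = u_xx + f(x) on (0, L) x (0, T], with u(x, 0) = 0 and
    u(0, t) = u(L, t) = 0, using the Crank-Nicolson scheme.

    A sketch of the forward solver of Section 4.1; the model, boundary
    conditions, and names are illustrative assumptions.  Returns the grid
    x and the final-time solution u(., T).
    """
    dx = L / nx
    dt = T / nt
    x = np.linspace(0.0, L, nx + 1)
    r = dt / (2.0 * dx * dx)

    n = nx - 1                                  # interior points
    main = np.full(n, 1.0 + 2.0 * r)
    off = np.full(n - 1, -r)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)  # implicit part
    B = 2.0 * np.eye(n) - A                                 # explicit part

    u = np.zeros(n)
    fi = f(x[1:-1])
    for _ in range(nt):
        # (I - r*d2) u^{n+1} = (I + r*d2) u^n + dt*f
        u = np.linalg.solve(A, B @ u + dt * fi)
    return x, np.concatenate(([0.0], u, [0.0]))

# With f(x) = pi^2 sin(pi x), the solution tends to the steady state
# sin(pi x) as T grows, which gives a simple accuracy check.
x, u = crank_nicolson(lambda s: np.pi**2 * np.sin(np.pi * s),
                      T=2.0, nx=64, nt=128)
```

For production use one would replace the dense solve by a tridiagonal solver, but the dense version keeps the sketch short.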

##### 4.2. Discrete Wavelet Transform

In the numerical solution of (3.4) by an ODE solver, we need to evaluate matrix-vector products . The representation of differentiation operators in bases of compactly supported wavelets is described in the literature. In our context of Meyer wavelets, which do not have compact support, the situation is different; the proof of Proposition 3.1 actually provides a fast algorithm. From the definition of , it is easily shown that . Thus, we can compute approximations of the elements of by first sampling the function equidistantly and then computing its discrete Fourier transform.
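The sample-then-DFT construction can be sketched generically: the Fourier coefficients of a $2\pi$-periodic symbol, computed by FFT, fill the diagonals of a Toeplitz matrix. The stand-in symbol in the demo is illustrative only; the paper's actual symbol involves the Meyer scaling function:

```python
import numpy as np

def toeplitz_from_symbol(m, N):
    """Build the N x N Toeplitz matrix whose diagonals are (approximate)
    Fourier coefficients of a 2*pi-periodic symbol m(xi).

    Sketch of the fast construction mentioned in Section 4.2: sample m
    equidistantly on [0, 2*pi), take a DFT to get coefficients c_k, and
    set T[j, l] = c_{(j - l) mod N}, which is constant along diagonals.
    """
    xi = 2.0 * np.pi * np.arange(N) / N
    c = np.fft.fft(m(xi)) / N          # c[k] ~ k-th Fourier coefficient
    T = np.empty((N, N), dtype=complex)
    for j in range(N):
        for l in range(N):
            T[j, l] = c[(j - l) % N]
    return T

# Demo with a real, even (about 0 mod 2*pi) stand-in symbol, which
# yields a real symmetric Toeplitz matrix.
T = toeplitz_from_symbol(lambda xi: -np.minimum(xi, 2 * np.pi - xi) ** 2, 32)
```

Because the sampled symbol is real and even, its DFT is real and even, so the resulting matrix is real and symmetric, mirroring the structure stated in Proposition 3.1.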

We will use DMT as shorthand for the “discrete Meyer (wavelet) transform.” Algorithms for implementing the Meyer wavelet transform discretely are described in the literature. These algorithms are based on the fast Fourier transform (FFT), so the cost of computing the DMT of a vector is dominated by the FFT. The algorithms presuppose that the vector to be transformed represents a periodic function, so we first need to periodize the vector. A discussion of how to make a function “periodic” can be found in the literature.

##### 4.3. Solve from Problem (3.4)

In the solution of problem (1.3) in , we replace the infinite-dimensional ODE system (3.4) by the finite-dimensional one where represents the approximation of the solution in and is chosen according to Theorem 3.7. For simplicity we suppress the dependence in ; since we are dealing with functions for which only a finite number of coefficients are nonzero, is a finite portion of the infinite matrix (we use a superscript to indicate this). We also denote by the step size on the time axis and define , . Then, using a modified Euler scheme, we have that is, where By the initial condition, . We know that if , there holds where , are the eigenvalues of . Hence we obtain
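The modified Euler (Heun) time stepping can be sketched for a generic linear system $c'(t) = Dc(t) + b$, which matches the structure of the finite-dimensional Galerkin ODE; the constant forcing $b$ and the names are illustrative assumptions:

```python
import numpy as np

def modified_euler(D, b, c0, T, nt):
    """Integrate c'(t) = D @ c(t) + b with the modified Euler (Heun) scheme.

    A sketch of the time stepping in Section 4.3, assuming a linear
    constant-coefficient system; the actual right-hand side of the
    Galerkin ODE is not reproduced here.
    """
    dt = T / nt
    c = np.array(c0, dtype=float)
    for _ in range(nt):
        k1 = D @ c + b                 # predictor slope (explicit Euler)
        k2 = D @ (c + dt * k1) + b     # corrector slope at predicted point
        c = c + 0.5 * dt * (k1 + k2)   # average: second-order accurate
    return c

# Scalar sanity check: c' = -c + 1, c(0) = 0 has solution c(t) = 1 - e^{-t}.
c = modified_euler(np.array([[-1.0]]), np.array([1.0]), [0.0], T=1.0, nt=200)
```

The scheme is explicit, so the step size restriction involving the eigenvalues of the matrix (mentioned above) is what guarantees stability.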

#### 5. Numerical Examples

In this section some numerical examples are presented to demonstrate the usefulness of the approach. The tests were performed using Matlab and the wavelet package WaveLab 850.

Suppose that the sequence represents samples of the function on an equidistant grid and that is even. We add a random perturbation to each data point and obtain the perturbed data, where The total noise can then be measured by the root mean square error according to where the perturbation entries are normally distributed random variables with zero mean and unit standard deviation, and the parameter dictates the level of noise.
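The perturbation and the root-mean-square noise level can be sketched as follows, with NumPy's normal generator standing in for the Matlab routine used in the paper; the function and parameter names are ours:

```python
import numpy as np

def perturb(g, eps, seed=None):
    """Add noise eps * r to the samples g, where r is standard normal,
    and return the perturbed data together with the RMS noise level.

    Mirrors the construction in Section 5: 'eps' dictates the level of
    noise, and delta measures the total noise in the root-mean-square
    sense.
    """
    rng = np.random.default_rng(seed)
    g = np.asarray(g, dtype=float)
    noise = eps * rng.standard_normal(g.shape)     # zero mean, unit std dev
    g_delta = g + noise
    delta = np.sqrt(np.mean((g_delta - g) ** 2))   # RMS noise level
    return g_delta, delta

# Perturb 512 samples of a smooth data function at noise level 1e-3.
g = np.sin(np.linspace(0, np.pi, 512))
g_delta, delta = perturb(g, eps=1e-3, seed=0)
```

Since the noise entries have unit standard deviation, the measured RMS level delta is close to eps.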

The numerical examples were constructed in the following way. First we selected a function , for , and computed , and hence , by solving a well-posed initial-boundary value problem on the domain , using the Crank-Nicolson implicit scheme (see Section 4.1). Then we added a normally distributed perturbation to the data function, giving vectors . From the perturbed data functions, we reconstructed and compared the result with the known solution.

Example 5.1. It is easy to verify that the function is the exact solution of problem (1.3) with data

Example 5.2. We examine the reconstruction of a Gaussian normal distribution, where is the mean and is the standard deviation. Note that when the standard deviation is small, expression (5.6) mimics a Dirac delta distribution . Since the direct problem with the source given by (5.6) does not have an analytical solution, the data are obtained by solving the direct problem using finite differences.

Example 5.3. Consider a continuous piecewise smooth heat source; namely,

Example 5.4. This example involves reconstructing a discontinuous heat source given by

The results from these examples are given in Figures 1, 2, 3, and 4. In all cases, the length of the data vector was 512, and the regularization parameters were selected according to the recipe given in Theorem 3.7. In all cases the number of steps in the ODE solver was ; that is, . Before presenting the results, we recomputed our coarse-level approximation on the fine scale using the inverse Meyer wavelet transform.

Figures 1–4 show that the proposed approach is useful; moreover, the smaller the error , the better the approximation result . The scheme also handles piecewise smooth and discontinuous heat sources. To illustrate this, the numerical results obtained for Examples 5.3 and 5.4 are presented in Figures 3 and 4. From these figures, it can be seen that the numerical solutions are less accurate than those of Examples 5.1 and 5.2: the well-known Gibbs phenomenon appears, and the recovered data near the nonsmooth and discontinuous points are not accurate. Note that the same situation occurs for the iterative methods of [17, 18]. Taking the ill posedness of the problems into consideration, the results presented here are quite satisfactory.

#### A. Proof of Proposition 3.1

For the proof we use the following two lemmas.

Lemma A.1. The matrix is symmetric and has the Toeplitz structure.

Proof. It can be easily shown by integration by parts that is symmetric. Moreover, hence is constant along diagonals; that is, the matrix has Toeplitz structure. Denote by the element of the th diagonal of the matrix ; then

Lemma A.2. For , define the function extend it periodically, and expand it in the Fourier series Then for all , , where is the element in diagonal of .

Proof. The Fourier coefficients are given by where we have used the three terms in the definition (A.3) of . As a result of periodicity, we can rewrite the first term as Rewriting the remaining term similarly, combining the resulting expressions, and noting that for , we get From the definition of , we now see that .

We can now prove Proposition 3.1. From the lemmas above we know that, since , we only need to estimate .

First, since is an even function, we only need to consider the interval . Here is identically zero, and and are nonnegative. Since we finally get The estimate (3.7) for is proved. Since is a symmetric matrix, it can be written as where is a family of orthogonal projections; see Engl et al. It follows that if is a continuous function, then Thus we get

#### B. Proof of Lemma 3.3

Since we have On the other hand, each coefficient can be written as where Thus, Since , we have and it follows that Hence,

#### C. Proof of Corollary 3.4

Since we have Since , or equivalently, , similar to the proof of Lemma 3.3, we get (3.17).

#### Acknowledgments

The work described in this paper was partially supported by a grant from the Fundamental Research Funds for the Central Universities of China (Project no. ZYGX2009J099) and the National Natural Science Funds of China (Project no. 11171054).