#### Abstract

We consider an ill-posed initial boundary value problem for the Helmholtz equation. This problem is reduced to the inverse continuation problem for the Helmholtz equation. We prove the well-posedness of the direct problem and obtain a stability estimate for its solution. The inverse problem is solved numerically using Tikhonov regularization, the Godunov approach, and Landweber iteration. A comparative analysis of these methods is presented.

#### 1. Introduction

Let us consider the initial boundary value problem (continuation problem) for the Helmholtz equation in the domain : where is a given constant. It is required to find the function in from .

The continuation problem is an ill-posed problem: its solution is unique, but it does not depend continuously on the Cauchy data . The problem has been studied by many authors. For example, Tuan and Quan  considered the case and proposed a regularization technique that allows one to obtain a stable solution in a two-dimensional domain. Regińska and Regiński  showed that if satisfies a certain condition, then the Cauchy problem for the Helmholtz equation has a stable solution in a three-dimensional domain. Isakov and Kindermann  used the singular value decomposition to prove that in a simple domain the considered problem becomes more stable with increasing . The same result was obtained numerically in the general case. The uniqueness of the solution of the investigated problem was proved, for example, by Arendt and Regińska , who introduced the concept of the weak normal derivative in formulating the problem. In [15, 16] singular values of the continuation problem were obtained for the two-dimensional Helmholtz equation with a complex wave number and a simple geometry.

We consider two approaches to the numerical solution of problem (1). The first consists of formulating problem (1) in operator form and minimizing the cost functional by Landweber iteration . In the second approach, problem (1) is reduced to a system of linear algebraic equations, which is solved using Tikhonov regularization and the Godunov approach. In this work we present a comparative analysis of the proposed methods for the numerical solution of problem (1).

#### 2. The Direct and Inverse Problems

Let us consider the direct (well-posed) problem of finding the function from the relations . Note that the continuation problem (1) can be reduced to the inverse problem of finding the function from (2)–(5) using the additional information . Let us consider some theoretical results [7, 17].

Definition 1. A function is called a generalized solution of the direct problem (2)–(5) if, for any such that , the following equality holds:

Theorem 2 (existence of a generalized solution of the direct problem). If and , then the direct problem (2)–(5) has a unique generalized solution and the following estimate is true:

Proof. Let us introduce the auxiliary problem . Integrating the identity over the domain and considering (10)–(13), we obtain , whence . Taking into account (12) and the equality , we have . Combining (16) and (18) yields . From the identities it follows that . Integrating (21) over , we get
Due to (8), we obtain . Hence,
Thus, we have proved the well-posedness of the direct problem, which allows us to apply well-developed computational methods. Also, a stability estimate has been obtained in .

#### 3. Landweber Iteration

##### 3.1. Formulation in the Operator Form and Description of the Algorithm

Let us reduce the inverse problem (2)–(6) to an operator equation. Consider the operator such that , where is the solution of the direct problem (2)–(5). Then the inverse problem (2)–(6) takes the form . We find the solution of problem (26) by minimizing the following functional [7, 18–20]: using the Landweber iteration , where is the descent parameter .

Let us describe the iterative algorithm. First we choose an initial approximation , and successive approximations are calculated by formula (28). Assuming that has been found, the next approximation is computed as follows:

1. Solve the direct problem (2)–(5) with the known .
2. Calculate by formula (27).
3. Check the stopping criterion ; finish if the inequality is met.
4. Solve the adjoint problem
5. Calculate the gradient by the formula
6. Calculate the next approximation and proceed to step 1.
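The gradient-descent structure of this algorithm can be sketched for a generic linear problem. The following is a minimal illustration, assuming the operator and its adjoint are available as a matrix `A` and its transpose; in the paper, applying the operator requires solving the direct problem (2)–(5) and computing the gradient requires solving the adjoint problem (29):

```python
import numpy as np

def landweber(A, f, q0, alpha, eps, max_iter=10000):
    """Minimize J(q) = ||A q - f||^2 by the Landweber iteration
    q_{k+1} = q_k - alpha * A^T (A q_k - f).
    The descent parameter alpha must satisfy alpha < 1 / ||A||^2."""
    q = q0.copy()
    for _ in range(max_iter):
        r = A @ q - f               # residual; J(q) = r . r
        if r @ r < eps:             # stopping criterion J(q) < eps
            break
        q = q - alpha * (A.T @ r)   # A^T r plays the role of the adjoint-based gradient
    return q
```

For a well-conditioned matrix the iteration converges to the exact solution; for an ill-conditioned one, the iteration number itself acts as a regularization parameter, which is why a stopping criterion tied to the noise level is needed.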

##### 3.2. The Numerical Solution of the Direct and Adjoint Problems

The direct and adjoint problems are solved using the finite-difference method. To discretize the direct problem, we construct a grid in with steps , , where are positive integers. Let us denote the grid by . After replacing the derivatives by second-order finite-difference analogues, we obtain the following discrete analogue of the direct problem (2)–(5): . By introducing the parameters , , we get

Thus, we obtain the system of algebraic equations where is a matrix of size , is an unknown vector of the form and is the data vector (boundary and additional conditions).
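As an illustration of how such a finite-difference system is assembled, here is a minimal one-dimensional sketch for the Helmholtz operator with second-order central differences; the function name and the Dirichlet setting are assumptions, and the paper's actual matrix is larger since it encodes the full multidimensional problem together with its boundary and additional conditions:

```python
import numpy as np

def helmholtz_matrix_1d(n, h, k):
    """Second-order finite-difference matrix for u'' + k^2 u = 0
    on n interior grid points with step h (Dirichlet boundary values).
    Row i encodes (u_{i-1} - 2 u_i + u_{i+1}) / h^2 + k^2 u_i."""
    main = (-2.0 / h**2 + k**2) * np.ones(n)
    off = (1.0 / h**2) * np.ones(n - 1)
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
```

The resulting tridiagonal matrix is the 1D analogue of the matrix above; stacking the boundary and additional conditions as extra rows yields the rectangular system solved in Section 4.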

Similarly, the discrete adjoint problem (29) has the form As above, this problem can be reformulated in a matrix form where is an unknown vector and is the data vector (boundary and additional conditions of the adjoint problem).

##### 3.3. Results of the Numerical Experiment

Let , . We choose the parameter . To test the algorithm, we assume that the exact solution has the form and calculate the corresponding additional information . Then, taking as the initial approximation, we try to recover the exact solution using the Landweber iteration with . If the data are given with an error , we choose the following stopping criterion: .
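The noisy data for such an experiment can be modeled, for instance, as a relative perturbation of the exact data; this is a sketch of one common recipe, not necessarily the paper's exact noise model:

```python
import numpy as np

def add_noise(f, delta, rng=None):
    """Perturb exact data f with relative noise level delta,
    so that ||f_delta - f|| = delta * ||f|| exactly."""
    rng = np.random.default_rng(0) if rng is None else rng
    e = rng.standard_normal(f.shape)
    return f + delta * np.linalg.norm(f) * e / np.linalg.norm(e)
```

With data generated this way, the stopping tolerance in the discrepancy-type criterion is naturally tied to the known noise level delta.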

The computational experiment was carried out for different noise levels. Tables 1, 2, and 3 show the results obtained on a PC with an Intel(R) Core(TM) i7 processor at 3.9 GHz.

The approximate solution in the case of is shown in Figure 1.

We observe that in the noise-free case the functional decreases monotonically, while in the other cases the decrease stops after about 100 iterations. This phenomenon can be explained by the error that arises in solving the direct and adjoint problems. Note that in the presence of noise the stopping criterion does not guarantee the minimal error in the solution of the inverse problem. However, the criterion ensures that the error is of the same order as the minimal one, since further iteration leads to an increase in the error.

#### 4. Regularization Methods

In this section we consider a discrete analog of problem (1) and study the stability of its solution. The Tikhonov regularization method and the Godunov approach are applied.

##### 4.1. The Discretization of Problem (1)

We reduce the continuation problem (1) to a system of linear algebraic equations as follows : where is a matrix of size , is the data vector, and is the desired vector of the form

Assuming , , , and , we calculate the norm and the condition number of the matrices and corresponding to the original problem (1) and the direct problem (2)–(5), respectively.

The matrix is ill-conditioned  (see the decay of its singular values in Figure 4). The condition number and the matrix norm for the discrete direct problem are presented in Table 4 and Figure 5. We see that the direct problem is well-posed.
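The diagnostics used here are standard: the spectral condition number is the ratio of the largest to the smallest singular value, and a rapid decay of the singular values signals ill-conditioning. A minimal sketch of this check:

```python
import numpy as np

def condition_number(A):
    """Spectral condition number sigma_max / sigma_min computed via the SVD.
    np.linalg.svd returns the singular values in descending order."""
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]
```

For the matrix of the continuation problem this number grows rapidly with the grid size, while for the direct problem it stays moderate, matching the well-posedness result of Section 2.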

In view of the ill-conditioning of the matrix , that is, the ill-posedness of the original problem, we will use regularization methods.

##### 4.2. Tikhonov Regularization

Tikhonov regularization consists of replacing the system by the system . We choose the regularization parameter by minimizing the discrepancy according to .
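In matrix form, Tikhonov regularization replaces the normal equations by a shifted, well-conditioned system. The following sketch shows the standard construction; the parameter-selection helper implements a discrepancy-principle recipe as an assumption, since the paper's exact selection rule is not reproduced here:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized solution: minimize ||A x - b||^2 + alpha ||x||^2,
    i.e. solve the shifted normal equations (A^T A + alpha I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def choose_alpha(A, b, alphas, delta):
    """Assumed selection rule (discrepancy principle): among the candidate
    values, pick the alpha whose residual norm is closest to the noise level."""
    return min(alphas, key=lambda a: abs(np.linalg.norm(A @ tikhonov(A, b, a) - b) - delta))
```

As alpha decreases the residual shrinks but the solution becomes noise-dominated; the selected alpha balances the two effects.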

As above, we put , and and calculate with , , and . The approximate solution in the case of is shown in Figure 2.

##### 4.3. Godunov Approach

S. K. Godunov proposed considering the extended system , where contains some a priori information about the inverse problem solution. As a priori information we take the existence of the second derivative of the solution .

We choose to minimize .
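The extended system can be sketched as follows: the original matrix is stacked on top of a weighted second-difference block, and the combined rectangular system is solved in the least-squares sense. Function names and the exact weighting are assumptions of this illustration:

```python
import numpy as np

def second_difference(n):
    """Second-difference matrix L encoding the a priori smoothness
    (existence of the second derivative) of the solution."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def godunov_solve(A, b, alpha):
    """Solve the extended least-squares system
        [ A            ] x  ~  [ b ]
        [ sqrt(alpha) L]       [ 0 ]
    which penalizes large discrete second derivatives of x."""
    n = A.shape[1]
    L = second_difference(n)
    A_ext = np.vstack([A, np.sqrt(alpha) * L])
    b_ext = np.concatenate([b, np.zeros(n - 2)])
    return np.linalg.lstsq(A_ext, b_ext, rcond=None)[0]
```

When the true solution is already smooth (its second differences vanish), the penalty block costs nothing and the data are fitted exactly; for noisy data the smoothness block suppresses the oscillatory components that the ill-conditioned matrix would otherwise amplify.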

For , , and , we calculate with different values of . The result is shown in Figure 3.

#### 5. Comparative Analysis of Methods

Tables 5, 6, and 7 present the results of the numerical solution of problem (1) with different levels of noise. We see that the Godunov method is more accurate than the other methods. The same is demonstrated in Figure 6.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The work was partially supported by the Ministry of Education and Science of the Russian Federation, joint Project SB RAS and NAS of Ukraine, 2013, no. 12, and RFBR Grant 14-01-00208.