Inverse Free Iterative Methods for Nonlinear Ill-Posed Operator Equations
We present a new iterative method that does not involve inversion of operators for obtaining an approximate solution of the nonlinear ill-posed operator equation F(x) = y. The proposed method is a modified form of the Tikhonov gradient (TIGRA) method considered by Ramlau (2003). The regularization parameter is chosen according to the balancing principle considered by Pereverzev and Schock (2005). The error estimate is derived under a general source condition and is of optimal order. Some numerical examples involving integral equations are also given in this paper.
1. Introduction
This paper is devoted to the study of the nonlinear ill-posed problem F(x) = y, where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. We assume that F is a Fréchet-differentiable nonlinear operator acting between infinite-dimensional Hilbert spaces X and Y with corresponding inner products ⟨·,·⟩ and norms ‖·‖, respectively. Further it is assumed that (1) has a solution x̂ for exact data, that is, F(x̂) = y, but due to the nonlinearity of F this solution need not be unique. Therefore we consider an x₀-minimal norm solution of (1). Recall that [1–3] a solution x̂ of (1) is said to be an x₀-minimal norm (x₀-MNS) solution of (1) if ‖x̂ − x₀‖ = min{‖x − x₀‖ : x ∈ D(F), F(x) = y}.
Since (1) is ill-posed, regularization techniques are required to obtain an approximation for x̂. Tikhonov regularization has been investigated by many authors (see, e.g., [2, 4, 5]) to solve nonlinear ill-posed problems in a stable manner. In Tikhonov regularization, a solution of problem (1) is approximated by a solution of the minimization problem min ‖F(x) − y^δ‖² + α‖x − x₀‖², where α > 0 is a small regularization parameter and y^δ is the available noisy data, for which we have the additional information that ‖y − y^δ‖ ≤ δ. It is known that the minimizer of this functional satisfies the Euler equation F'(x)*(F(x) − y^δ) + α(x − x₀) = 0 of the Tikhonov functional. Here F'(x)* denotes the adjoint of the Fréchet derivative F'(x). It is also known that, for a properly chosen regularization parameter α, the minimizer of the functional is a good approximation to a solution with minimal distance from x₀. Thus the main focus is to find a minimizing element of the Tikhonov functional (4). But the Tikhonov functional with a nonlinear operator might have several minima, so to ensure the convergence of any optimization algorithm to a global minimizer of the Tikhonov functional (3), one has to impose stronger restrictions on the operator F.
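As a concrete illustration of the minimization problem above, the following sketch applies plain gradient descent to the Tikhonov functional for a toy componentwise operator on R². The operator F, the noise values, the step size, and all parameter values are assumptions made for this example only; they are not taken from the paper.

```python
import numpy as np

# Toy nonlinear operator F(x) = x + x^3/3 acting componentwise on R^2
# (an assumption for this illustration; the paper treats general
# operators between Hilbert spaces).
def F(x):
    return x + x**3 / 3.0

def Fprime(x):
    # Frechet derivative of F; here simply a diagonal Jacobian.
    return np.diag(1.0 + x**2)

x_hat = np.array([1.0, 0.5])                  # "exact" solution
y_delta = F(x_hat) + np.array([0.01, -0.01])  # noisy data, delta ~ 1e-2

x0 = np.zeros(2)      # a priori guess x_0
alpha = 0.01          # regularization parameter
x = x0.copy()
for _ in range(2000):
    # Gradient of the Tikhonov functional (1/2)||F(x)-y^delta||^2 + (alpha/2)||x-x0||^2;
    # its stationary points satisfy the Euler equation quoted in the text.
    grad = Fprime(x).T @ (F(x) - y_delta) + alpha * (x - x0)
    x = x - 0.1 * grad
```

For this smooth, monotone toy operator the functional has a single minimizer near x_hat; for a genuinely nonlinear ill-posed operator the functional may have several minima, which is exactly the difficulty the text points out.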
In the last few years many authors have considered iterative methods, for example, the Landweber method [7, 8], the Levenberg-Marquardt method, Gauss-Newton methods [10, 11], the conjugate gradient method, Newton-like methods [13, 14], and TIGRA (the Tikhonov gradient method). For the Landweber and TIGRA methods, one has to evaluate the operator F and the adjoint of the Fréchet derivative of F. For all other methods, one additionally has to solve a linear equation.
Ramlau considered the TIGRA method, defined iteratively by (7), where β is a scaling parameter and α is a regularization parameter that changes during the iteration, and obtained a convergence rate estimate for the TIGRA algorithm under the following assumptions: (1) F is twice Fréchet differentiable with a continuous second derivative; (2) the first derivative F' is Lipschitz continuous; (3) a source condition holds for x̂ with a source element of sufficiently small norm; and (4) smallness conditions on the remaining parameters. Scherzer considered a similar iteration procedure under a much stronger condition.
In this paper we consider a modified form of iteration (7). Precisely, we consider the sequence {x_n} defined iteratively by (11), where x₀ is an initial guess and α > 0 is the regularization parameter. The regularization parameter α is chosen from a finite set using the adaptive method considered by Pereverzev and Schock. We prove that {x_n} converges to the unique solution (see Theorem 2) of equation (12) and that this solution is a good approximation of x̂. Our approach, in the convergence analysis of the method as well as in the choice of the parameters, is different from that of earlier work.
Note that in the TIGRA method (i.e., (7)) both the scaling parameter and the regularization parameter change during the iteration, whereas in the proposed method no scaling parameter is used and the regularization parameter does not change during the iteration. Also, in the proposed method one needs to compute the Fréchet derivative only at one point, the initial guess x₀. These are the main advantages of the proposed method.
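The frozen-derivative structure described above can be sketched numerically. The toy operator, the data, the damping factor beta, and the explicit update formula below are illustrative assumptions, not the paper's exact iteration (11): they only demonstrate a gradient step in which the Fréchet derivative is evaluated once, at x0, and the regularization parameter alpha stays fixed throughout.

```python
import numpy as np

def F(x):
    # Same toy componentwise operator as before (an assumption for illustration).
    return x + x**3 / 3.0

x_hat = np.array([1.0, 0.5])
y_delta = F(x_hat) + np.array([0.005, -0.005])   # noisy data

x0 = np.array([0.8, 0.4])     # initial guess close to x_hat (semilocal setting)
J0 = np.diag(1.0 + x0**2)     # Frechet derivative F'(x0), computed only once
alpha = 0.01                  # fixed regularization parameter
beta = 0.2                    # damping factor added for this un-normalized toy operator

x = x0.copy()
for _ in range(500):
    # Frozen-derivative gradient step: only F is re-evaluated, never F'.
    x = x - beta * (J0.T @ (F(x) - y_delta) + alpha * (x - x0))
```

The iteration converges to the fixed point of the bracketed map, which plays the role of the regularized solution; no linear system is solved and no operator is inverted at any step.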
The organization of this paper is as follows. In Section 2 we provide some preparatory results, and Section 3 deals with the convergence analysis of the proposed method. Error bounds under an a priori parameter choice and under the balancing principle are given in Section 4. Numerical examples involving integral equations are given in Section 5. Finally, the paper ends with a conclusion in Section 6.
2. Preparatory Results
Throughout this paper we assume that the operator F satisfies the following assumptions.
Assumption 1. (a) There exists a constant k₀ > 0 such that for every x ∈ D(F) and v ∈ X, there exists an element Φ(x, x₀, v) ∈ X satisfying [F'(x) − F'(x₀)]v = F'(x₀)Φ(x, x₀, v), ‖Φ(x, x₀, v)‖ ≤ k₀‖v‖‖x − x₀‖.
(b) for all .
Notice that in the literature the Lipschitz continuity of the Fréchet derivative, ‖F'(x) − F'(y)‖ ≤ k‖x − y‖, a condition stronger than (a), is used for some k > 0. However, (a) holds in general, and the constant involved can be arbitrarily large. It is also worth noticing that the Lipschitz condition implies (a) but not necessarily vice versa, and the element required by the Lipschitz condition is less accurate and more difficult to find than the element Φ in (a) (see also the numerical example).
The next result shows that (12) has a unique solution in a ball around x₀.
Proof. For , let . If is invertible, then has a unique solution in . Observe that and hence Therefore by (17) has a unique solution in . So it remains to show that is invertible. Note that by Assumption 1, we have So is invertible. Now from the relation it follows that is invertible.
Assumption 3. There exists a continuous, strictly monotonically increasing function φ : (0, a] → (0, ∞) with a ≥ ‖F'(x₀)‖² satisfying: (i) lim_{λ→0} φ(λ) = 0; (ii) sup_{λ>0} αφ(λ)/(λ + α) ≤ φ(α) for all α ∈ (0, a]; (iii) there exists v ∈ X with ‖v‖ ≤ 1 such that x̂ − x₀ = φ(F'(x₀)*F'(x₀))v.
One of the crucial results we are going to use to prove our main result is the following theorem.
Theorem 4. Let and be the solution of (12). Then
Proof. By the fundamental theorem of calculus and by (12), we obtain a decomposition into three terms. The first is bounded by Assumption 3 and the remaining ones by Assumption 1. The result now follows from (26), (27), (28), and (29).
3. Convergence Analysis
In this paper we present a semilocal convergence analysis under the following conditions.(C1)Suppose . Let Let with and let where and .(C2)Suppose . Let Let with and let where and .Let and let
We will make extensive use of the following lemma.
Proof. By the fundamental theorem of calculus, we have the integral representation; hence, by Assumption 1, the stated estimate follows. This completes the proof.
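The integral representation invoked in this proof (and again in the proof of Theorem 4) is the standard fundamental theorem of calculus for Fréchet-differentiable operators; written out, it reads:

```latex
\begin{aligned}
F(x) - F(x_0)
  &= \int_0^1 F'\bigl(x_0 + t(x - x_0)\bigr)(x - x_0)\,dt,\\
F(x) - F(x_0) - F'(x_0)(x - x_0)
  &= \int_0^1 \bigl[F'\bigl(x_0 + t(x - x_0)\bigr) - F'(x_0)\bigr](x - x_0)\,dt.
\end{aligned}
```

The integrand in the second identity is exactly the difference of derivatives that Assumption 1(a) allows one to factor through F'(x₀) and bound.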
Remark 6. Note that we need for the convergence of the sequence to . This in turn forces us to assume that ; that is, (see ) and ; that is, (see ), .
For convenience, we use the notation for . Let
Remark 7. Observe that and by the choice of (this can be seen by substituting into and solving the inequality for ).
Proof. Suppose for all . Then
and hence by Lemma 5, we have
By Remark 7, . Now suppose for some . Then
that is, the bound holds for n + 1. Thus, by induction, it holds for all n. Now we prove that {x_n} is a Cauchy sequence. We have
Thus {x_n} is a Cauchy sequence and hence it converges, say, to a limit.
By letting n → ∞ in (11), we obtain the limit equation. Now the result follows by letting n → ∞ in (48).
Remark 9. (a) If, instead of Assumption 1, we use a center-Lipschitz condition, that is,
holds for all x ∈ D(F), then one can obtain the estimate
(b) Note that for small enough values, conditions (31) and (34) are not severe restrictions.
Remark 10. If Assumption 1 is fulfilled only for all x in a convex closed a priori set Q for which x̂ ∈ Q, then we can modify process (11) in the following way to obtain the same estimate as in Theorem 11: each iterate is composed with the metric projection onto the set Q applied to the step operator in (11).
4. Error Bounds under Source Conditions
4.1. A Priori Choice of the Parameter
Observe that the estimate in Theorem 11 is of optimal order for the choice α := α(δ) which satisfies φ(α(δ)) = δ/√α(δ). Now, using the function ψ(λ) := λ√(φ⁻¹(λ)), 0 < λ ≤ a, we have δ = √α φ(α) = ψ(φ(α)), so that α = φ⁻¹(ψ⁻¹(δ)).
In view of the above observation, Theorem 11 leads to the following.
4.2. Balancing Principle
Note that the a priori choice of the parameter can be achieved only in the ideal situation when the function φ is known. The point is that the best function φ measuring the rate of convergence in Theorem 11 is usually unknown. Therefore, in practical applications, different parameters are often selected from some finite set and the corresponding regularized elements are studied online. Then from Theorem 11, we have
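The selection from a finite set can be sketched as follows, assuming the standard form of the balancing rule (keep the largest parameter whose reconstruction stays within 4δ/√α_j of every reconstruction computed for a smaller parameter). The geometric grid and the synthetic error profile below are illustrative assumptions, not data from the paper.

```python
import numpy as np

delta = 0.01                             # known noise level
alpha = delta**2 * 1.5**np.arange(26)    # geometric grid, smallest value delta^2

# Stand-in reconstructions: in practice x[i] is the regularized solution
# computed for alpha[i]; here its deviation from the exact solution is
# modeled by the profile phi(a) - delta/sqrt(a) with phi(a) = a
# (a hypothetical smoothness index, chosen only for illustration).
x = alpha - delta / np.sqrt(alpha)

def balancing_index(x, alpha, delta):
    # Keep the largest index whose reconstruction stays within
    # 4*delta/sqrt(alpha[j]) of every reconstruction with a smaller parameter.
    k = 0
    for i in range(len(alpha)):
        if all(abs(x[i] - x[j]) <= 4 * delta / np.sqrt(alpha[j]) for j in range(i)):
            k = i
    return k

k = balancing_index(x, alpha, delta)   # alpha[k] balances the two error terms
```

The rule requires no knowledge of φ: it compares computed reconstructions pairwise and stops enlarging α once the regularization error starts to dominate the propagated data error.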
To obtain a conclusion from this parameter choice we consider all possible functions satisfying Assumption 3. Any such function is called admissible for x̂, and it can be used as a measure for the rate of convergence.
The main result of this section is the following theorem, the proof of which is analogous to the proof of Theorem 4.4 of Pereverzev and Schock.
5. Numerical Examples
Let the half-space be modeled by two layers of constant but different densities, separated by a surface to be determined. In the Cartesian coordinate system, whose xy-plane coincides with the ground surface and whose z-axis is directed downward, the inverse gravimetry problem has the form (63) (see the cited literature and the references therein). Here the equation involves the gravitational constant and the density jump at the interface, which is described by the function to be evaluated, and the right-hand side is the anomalous gravitational field caused by the deviation of the interface from a horizontal asymptotic plane; that is, for the actual solution the corresponding relation holds on the domain on which the data are given.
The derivative of the operator is given by an explicit formula. Applying to the integral equation (63) a two-dimensional analogue of the rectangle rule with a uniform grid in each variable, we obtain a system of nonlinear equations for the unknown vector of surface values; in vector-matrix form, the unknowns and the data are vectors of the corresponding dimension.
The discrete variant of the derivative has the same form, with a constant factor and a symmetric matrix whose components are evaluated by formula (67).
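The quadrature used in this discretization can be illustrated as follows; the integrand and the domain are stand-ins chosen only to show the two-dimensional rectangle (midpoint) rule on a uniform grid, not the paper's gravimetry kernel.

```python
import numpy as np

# Two-dimensional rectangle (midpoint) rule on a uniform grid, the type of
# quadrature used to discretize the integral operator; the integrand below
# is a stand-in with known integral 1/4 over the unit square.
n = 64
h = 1.0 / n
pts = (np.arange(n) + 0.5) * h           # midpoints of the n subintervals
X, Y = np.meshgrid(pts, pts, indexing="ij")

f = X * Y                                 # toy integrand
approx = h * h * f.sum()                  # quadrature value for the double integral
```

Replacing the toy integrand with the gravimetry kernel evaluated at the grid nodes turns the same double sum into one row of the discretized nonlinear system.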
Let us define the exact solution as where is given on the domain . Let , , , .
This means that, for the matrix approximating the operator, we have taken the corresponding initial guess.
Observe that, since the matrix involved is self-adjoint, Assumption 3 is satisfied for the corresponding source function.
The results of the numerical experiments are presented in Table 1, which lists the numerical solution obtained by our method, the relative error of the solution, and the residual for a noisy right-hand side.
Next we present an example of a nonlinear equation for which Assumption 1(a) is satisfied but the stronger Lipschitz condition is not.
Example 1. Let , , and define function on by
where the coefficients are real parameters and the exponent is an integer. Then the derivative is not Lipschitz on the domain; hence the Lipschitz condition is not satisfied. However, the central Lipschitz condition, Assumption 1(a), holds.
Indeed, we have where and where .
6. Conclusion
We have considered an iterative method that does not involve inversion of an operator for obtaining an approximate solution of a nonlinear ill-posed operator equation F(x) = y when the available data y^δ are noisy, in place of the exact data y. It is assumed that F is Fréchet differentiable. The procedure involves finding the fixed point of a function in an iterative manner. For choosing the regularization parameter we made use of the adaptive method suggested by Pereverzev and Schock.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, The Netherlands, 1996.
A. B. Bakushinskii, “The problem of the convergence of the iteratively regularized Gauss-Newton method,” Computational Mathematics and Mathematical Physics, vol. 32, no. 9, pp. 1353–1359, 1992.
I. K. Argyros, Convergence and Applications of Newton-Type Iterations, Springer, New York, NY, USA, 2008.
S. Lu and S. V. Pereverzev, “Sparsity reconstruction by the standard Tikhonov method,” RICAM-Report, 2008.
V. V. Vasin, I. I. Prutkin, M. Timerkhanova, and L. Yu, “Retrieval of a three-dimensional relief of geological boundary from gravity data,” Izvestiya, Physics of the Solid Earth, vol. 32, no. 11, pp. 58–62, 1996.
V. V. Vasin, “Modified processes of Newton type generating Fejer approximations of regularized solutions of nonlinear equations,” Proceedings in Mathematics and Mechanics, vol. 19, no. 2, pp. 85–97, 2013 (Russian).