Research Article  Open Access
Inverse Free Iterative Methods for Nonlinear Ill-Posed Operator Equations
Abstract
We present a new iterative method, which does not involve inversion of the operators, for obtaining an approximate solution of the nonlinear ill-posed operator equation F(x) = y. The proposed method is a modified form of the Tikhonov gradient (TIGRA) method considered by Ramlau (2003). The regularization parameter is chosen according to the balancing principle considered by Pereverzev and Schock (2005). The error estimate is derived under a general source condition and is of optimal order. Some numerical examples involving integral equations are also given in this paper.
1. Introduction
This paper is devoted to the study of the nonlinear ill-posed problem

F(x) = y, (1)

where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. We assume that F is a Fréchet-differentiable nonlinear operator acting between infinite dimensional Hilbert spaces X and Y with corresponding inner products ⟨·, ·⟩ and norms ‖·‖, respectively. Further it is assumed that (1) has a solution x̂ for exact data y; that is, F(x̂) = y; but, due to the nonlinearity of F, this solution need not be unique. Therefore we consider an x₀-minimal norm solution of (1). Recall that [1–3] a solution x̂ of (1) is said to be an x₀-minimal norm (x₀-MNS) solution of (1) if

‖x̂ − x₀‖ = min{‖x − x₀‖ : x ∈ D(F), F(x) = y}. (3)
In the following, we always assume the existence of an x₀-MNS x̂ for exact data y. The element x₀ in (3) plays the role of a selection criterion [4] and is assumed to be known.
Since (1) is ill-posed, regularization techniques are required to obtain a stable approximation for x̂. Tikhonov regularization has been investigated by many authors (see, e.g., [2, 4, 5]) to solve nonlinear ill-posed problems in a stable manner. In Tikhonov regularization, a solution of the problem (1) is approximated by a solution of the minimization problem

min_{x ∈ D(F)} ‖F(x) − y^δ‖² + α‖x − x₀‖², (4)

where α > 0 is a small regularization parameter and y^δ is the available noisy data, for which we have the additional information that

‖y^δ − y‖ ≤ δ. (5)

It is known [3] that a minimizer x of the functional in (4) satisfies the Euler equation

F′(x)*(F(x) − y^δ) + α(x − x₀) = 0

of the Tikhonov functional. Here F′(x)* denotes the adjoint of the Fréchet derivative F′(x). It is also known [1] that, for a properly chosen regularization parameter α, the minimizer of the functional in (4) is a good approximation to a solution with minimal distance from x₀. Thus the main focus is to find a minimizing element of the Tikhonov functional (4). But the Tikhonov functional with a nonlinear operator F might have several minima, so, to ensure the convergence of any optimization algorithm to a global minimizer of the Tikhonov functional (4), one has to employ stronger restrictions on the operator F [6].
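The minimization step (4) can be illustrated numerically. The following sketch runs plain gradient descent on a Tikhonov functional for a small synthetic problem; the forward map F, the smoothing kernel, the noise level, and the step size below are all hypothetical stand-ins for illustration, not the paper's operator or data.

```python
import numpy as np

# Hedged illustration: gradient descent on the Tikhonov functional
#   J(x) = ||F(x) - y_delta||^2 + alpha * ||x - x0||^2
# for a hypothetical, mildly nonlinear smoothing map F.
n = 50
t = np.linspace(0.0, 1.0, n)
K = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2) / n  # smoothing kernel

def F(x):
    return K @ x + 0.1 * x ** 3          # toy nonlinear forward map

def Fprime(x):
    return K + np.diag(0.3 * x ** 2)     # its Frechet derivative (Jacobian)

x_true = np.sin(np.pi * t)
rng = np.random.default_rng(0)
delta = 1e-3
y_delta = F(x_true) + delta * rng.standard_normal(n)

alpha, x0, step = 1e-4, np.zeros(n), 0.5
x = x0.copy()
for _ in range(2000):
    # gradient of J at x: 2 F'(x)^* (F(x) - y_delta) + 2 alpha (x - x0)
    grad = 2.0 * Fprime(x).T @ (F(x) - y_delta) + 2.0 * alpha * (x - x0)
    x -= step * grad

print(np.linalg.norm(F(x) - y_delta))    # residual after descent
```

As the paper notes, for nonlinear F such a descent scheme is only guaranteed to reach a local minimizer; this is precisely the difficulty that motivates the analysis below.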
In the last few years many authors have considered iterative methods, for example, the Landweber method [7, 8], the Levenberg-Marquardt method [9], Gauss-Newton [10, 11], Conjugate Gradient [12], Newton-like methods [13, 14], and TIGRA (Tikhonov gradient method) [6]. For the Landweber and TIGRA methods, one has to evaluate only the operator F and the adjoint of the Fréchet derivative of F. For all other methods, one additionally has to solve a linear equation.
In [6], Ramlau considered the TIGRA method defined iteratively by

x_{k+1} = x_k − β(F′(x_k)*(F(x_k) − y^δ) + α_k(x_k − x₀)), (7)

where β > 0 is a scaling parameter and α_k is a regularization parameter, which will change during the iteration, and obtained a convergence rate estimate for the TIGRA algorithm under the following assumptions:
(1) F is twice Fréchet differentiable with a continuous second derivative;
(2) the first derivative is Lipschitz continuous: ‖F′(x) − F′(u)‖ ≤ L‖x − u‖;
(3) there exists w with x̂ − x₀ = F′(x̂)*w and ‖w‖ sufficiently small;
(4) the scaling parameter β and the starting value of α_k satisfy certain smallness conditions.
In [15], Scherzer considered a similar iteration procedure under a much stronger condition on the nonlinearity of F.
In this paper we consider a modified form of iteration (7). Precisely, we consider the sequence {x_n} defined iteratively by

x_{n+1} = x_n − (F′(x₀)*(F(x_n) − y^δ) + α(x_n − x₀)), (11)

where x₀ is the starting point of the iteration and α > 0 is the regularization parameter. The regularization parameter α is chosen from a finite set using the adaptive method considered by Pereverzev and Schock in [16]. We prove that {x_n} converges to the unique solution x_α^δ (see Theorem 2) of the equation

F′(x₀)*(F(x) − y^δ) + α(x − x₀) = 0 (12)

and that x_α^δ is a good approximation for x̂. Our approach, in the convergence analysis of the method as well as in the choice of the parameters, is different from that of [6].
Note that in the TIGRA method (7) the scaling parameter β and the regularization parameter α_k change during the iteration, but in the proposed method no scaling parameter is used and the regularization parameter does not change during the iteration. Also, in the proposed method one needs to compute the Fréchet derivative only at one point, x₀. These are the main advantages of the proposed method.
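A toy sketch of an inverse-free iteration of this kind is given below: the Fréchet derivative is assembled once at x₀ and never inverted, and the regularization parameter is fixed throughout. The particular update rule, the map F, and all numeric values (in particular the unusually large α, chosen only so the sketch contracts quickly) are illustrative assumptions, not the paper's scheme or data.

```python
import numpy as np

# Sketch of an inverse-free fixed-point iteration of the form
#   x_{n+1} = x_n - (F'(x0)^* (F(x_n) - y_delta) + alpha (x_n - x0)),
# assumed here for illustration; F is a hypothetical toy map.
n = 50
t = np.linspace(0.0, 1.0, n)
K = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2) / n

def F(x):
    return K @ x + 0.1 * x ** 3

x_true = np.sin(np.pi * t)
rng = np.random.default_rng(0)
y_delta = F(x_true) + 1e-3 * rng.standard_normal(n)

x0 = np.zeros(n)
A0 = K + np.diag(0.3 * x0 ** 2)   # F'(x0): computed once, never inverted
alpha = 0.5                       # large alpha purely so the sketch contracts fast

x = x0.copy()
for _ in range(200):
    x = x - (A0.T @ (F(x) - y_delta) + alpha * (x - x0))

# x now approximates the fixed point, i.e. a solution of
#   F'(x0)^* (F(x) - y_delta) + alpha (x - x0) = 0
residual = np.linalg.norm(A0.T @ (F(x) - y_delta) + alpha * (x - x0))
print(residual)
```

Each step costs one evaluation of F and one multiplication by the fixed adjoint matrix; no linear system is solved, which is the "inverse free" feature emphasized above.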
The organization of this paper is as follows. In Section 2 we provide some preparatory results, and Section 3 deals with the convergence analysis of the proposed method. Error bounds under an a priori parameter choice and under the balancing principle are given in Section 4. Numerical examples involving integral equations are given in Section 5. Finally, the paper ends with a conclusion in Section 6.
2. Preparatory Results
Throughout this paper we assume that the operator F satisfies the following assumptions.
Assumption 1. (a) There exists a constant k₀ > 0 such that for every x, u ∈ D(F) and v ∈ X there exists an element Φ(x, u, v) ∈ X satisfying

[F′(x) − F′(u)]v = F′(u)Φ(x, u, v), ‖Φ(x, u, v)‖ ≤ k₀‖v‖‖x − u‖.
(b) The Fréchet derivative is uniformly bounded, ‖F′(x)‖ ≤ C_F,
for all x ∈ D(F).
Notice that in the literature the condition

‖F′(x) − F′(u)‖ ≤ L‖x − u‖, x, u ∈ D(F),

stronger than (a), is used for some L > 0. However, k₀ ≤ L holds in general, and L/k₀ can be arbitrarily large [17]. It is also worth noticing that the Lipschitz condition implies (a) but not necessarily vice versa, and the constant L is less accurate and more difficult to find than k₀ (see also the numerical example).
The next result shows that (12) has a unique solution in B_r(x₀).
Theorem 2. Let x̂ be a solution of (1), let Assumption 1 be satisfied, and let F be Fréchet differentiable in a ball B_r(x₀) ⊆ D(F) with radius r < 1/k₀. Then the regularized problem (12) possesses a unique solution in B_r(x₀).
Proof. For x ∈ B_r(x₀), let T(x) := F′(x₀)*(F(x) − y^δ) + α(x − x₀), so that (12) takes the form T(x) = 0. If the derivative T′(x) = F′(x₀)*F′(x) + αI is invertible for all x ∈ B_r(x₀), then (12) has a unique solution in B_r(x₀). So it remains to show that T′(x) is invertible. Observe that, by Assumption 1, F′(x₀)*(F′(x) − F′(x₀))v = F′(x₀)*F′(x₀)Φ(x, x₀, v) with ‖Φ(x, x₀, v)‖ ≤ k₀‖x − x₀‖‖v‖, and hence

‖(F′(x₀)*F′(x₀) + αI)⁻¹F′(x₀)*(F′(x) − F′(x₀))‖ ≤ k₀‖x − x₀‖ ≤ k₀r < 1.

Since α > 0 and F′(x₀)*F′(x₀) is self-adjoint and nonnegative, F′(x₀)*F′(x₀) + αI is invertible. Now from the relation

T′(x) = (F′(x₀)*F′(x₀) + αI)(I + (F′(x₀)*F′(x₀) + αI)⁻¹F′(x₀)*(F′(x) − F′(x₀)))

it follows that T′(x) is invertible.
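The proof rests on the standard facts that, for α > 0, an operator of the form αI + A*A is invertible, with ‖(αI + A*A)⁻¹‖ ≤ 1/α and ‖(αI + A*A)⁻¹A*A‖ ≤ 1. These bounds are easy to check numerically; the random matrix below is purely illustrative.

```python
import numpy as np

# Numerical sanity check (random matrix, not the paper's operator):
# for alpha > 0, alpha*I + B^T B is invertible,
#   ||(alpha*I + B^T B)^{-1}||       <= 1/alpha,
#   ||(alpha*I + B^T B)^{-1} B^T B|| <= 1.
rng = np.random.default_rng(2)
B = rng.standard_normal((30, 20))
alpha = 0.05
M = alpha * np.eye(20) + B.T @ B
Minv = np.linalg.inv(M)
print(np.linalg.norm(Minv, 2) <= 1.0 / alpha)
print(np.linalg.norm(Minv @ B.T @ B, 2) <= 1.0)
```

Both checks hold because B^T B is symmetric positive semidefinite: in its eigenbasis the two operators have eigenvalues 1/(α + σ²) and σ²/(α + σ²), respectively.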
Assumption 3. There exists a continuous, strictly monotonically increasing function φ : (0, a] → (0, ∞) with a ≥ ‖F′(x₀)‖² satisfying:
(i) lim_{λ→0} φ(λ) = 0;
(ii) sup_{λ≥0} αφ(λ)/(λ + α) ≤ φ(α) for all α ∈ (0, a];
(iii) there exists v ∈ X such that x₀ − x̂ = φ(F′(x₀)*F′(x₀))v.
One of the crucial results we are going to use to prove our main result is the following theorem.
Theorem 4. Let Assumptions 1 and 3 hold, and let x_α^δ be the solution of (12). Then

‖x_α^δ − x̂‖ ≤ C(φ(α) + δ/√α)

for a constant C > 0 independent of α and δ.
Proof. By the fundamental theorem of integration, F(x_α^δ) − F(x̂) = ∫₀¹ F′(x̂ + t(x_α^δ − x̂))(x_α^δ − x̂) dt, and hence, by (12), the difference x_α^δ − x̂ can be decomposed into the terms (26)-(29); the term involving x₀ − x̂ is estimated by Assumption 3, and the remaining terms are estimated by Assumption 1. The result now follows from (26), (27), (28), and (29).
3. Convergence Analysis
In this paper we present a semilocal convergence analysis under two sets of conditions, (C1) and (C2). Each of them fixes the radius r of the ball B_r(x₀), imposes smallness conditions (see (31) and (34)) relating the noise level δ, the regularization parameter α, the constant k₀, and the initial error ‖x₁ − x₀‖, and defines the contraction factor q in (37) and the iteration map G in (38) used in the sequel.
In due course we will make use of the following lemma extensively.
Lemma 5. Let G be as in (38) and q as in (37). Suppose Assumption 1 holds. Then, for all x, u ∈ B_r(x₀),

‖G(x) − G(u)‖ ≤ q‖x − u‖.
Proof. Let x, u ∈ B_r(x₀). Then, by the fundamental theorem of integration, G(x) − G(u) = ∫₀¹ G′(u + t(x − u))(x − u) dt. Hence, by Assumption 1, we have ‖G(x) − G(u)‖ ≤ q‖x − u‖. This completes the proof.
Remark 6. Note that we need q < 1 for the convergence of the sequence {x_n} to x_α^δ. This in turn forces us to assume the smallness conditions (31) (see (C1)) and (34) (see (C2)).
For convenience, we use the notation x_n for the iterates defined in (11).
Remark 7. Observe that x₁ ∈ B_r(x₀) and q < 1 by the choice of the parameters in (C1) and (C2) (this can be seen by substituting the definitions into the corresponding inequality and solving it for the parameter).
Theorem 8. Let q and G be as in (37) and (38), respectively. Then the sequence {x_n} defined in (11) is a Cauchy sequence in B_r(x₀) and converges to x_α^δ ∈ B̄_r(x₀). Further, x_α^δ solves (12) and

‖x_n − x_α^δ‖ ≤ (q^n/(1 − q))‖x₁ − x₀‖.
Proof. Suppose x_m ∈ B_r(x₀) for all m ≤ n. Then, since x_{m+1} = G(x_m), by Lemma 5 we have

‖x_{m+1} − x_m‖ = ‖G(x_m) − G(x_{m−1})‖ ≤ q‖x_m − x_{m−1}‖ ≤ q^m‖x₁ − x₀‖.

By Remark 7, x₁ ∈ B_r(x₀). Now suppose x_m ∈ B_r(x₀) for some m. Then

‖x_{m+1} − x₀‖ ≤ Σ_{i=0}^{m} ‖x_{i+1} − x_i‖ ≤ (1 + q + ⋯ + q^m)‖x₁ − x₀‖ ≤ ‖x₁ − x₀‖/(1 − q) ≤ r;

that is, x_{m+1} ∈ B_r(x₀). Thus, by induction, x_n ∈ B_r(x₀) for all n. Now we will prove that {x_n} is a Cauchy sequence in B_r(x₀). We have

‖x_{n+m} − x_n‖ ≤ Σ_{i=n}^{n+m−1} ‖x_{i+1} − x_i‖ ≤ (q^n/(1 − q))‖x₁ − x₀‖. (48)

Thus {x_n} is a Cauchy sequence in B_r(x₀) and hence it converges, say to x_α^δ ∈ B̄_r(x₀). By letting n → ∞ in (11), we obtain F′(x₀)*(F(x_α^δ) − y^δ) + α(x_α^δ − x₀) = 0; that is, x_α^δ solves (12). Now the estimate in the theorem follows by letting m → ∞ in (48).
Remark 9. (a) Instead of Assumption 1, if we use the center-Lipschitz condition

‖F′(x) − F′(x₀)‖ ≤ L₀‖x − x₀‖

for all x ∈ D(F), then one can obtain an analogous estimate.
(b) Note that, for δ small enough, conditions (31) and (34) are not severe restrictions on x₀.
Remark 10. If Assumption 1 is fulfilled only for all x ∈ Q, where Q is a convex closed a priori set for which x̂ ∈ Q, then we can modify the process (11) in the following way:

x_{n+1} = P_Q(G(x_n)),

to obtain the same estimate as in Theorem 11; here P_Q is the metric projection onto the set Q and G is the step operator in (11).
4. Error Bounds under Source Conditions
Combining the estimates in Theorems 4 and 8 we obtain the following.
Theorem 11. Let the assumptions of Theorems 4 and 8 hold and let x_n be as in (11). Then

‖x_n − x̂‖ ≤ ‖x_n − x_α^δ‖ + ‖x_α^δ − x̂‖ ≤ (q^n/(1 − q))‖x₁ − x₀‖ + C(φ(α) + δ/√α).

Further, if n is chosen such that the first term does not exceed δ/√α, then

‖x_n − x̂‖ ≤ C(φ(α) + δ/√α),

where C > 0 is a constant independent of α, δ, and n.
4.1. A Priori Choice of the Parameter
Observe that the estimate φ(α) + δ/√α in Theorem 11 is of optimal order for the choice α := α_δ which satisfies φ(α_δ)√α_δ = δ. Now, using the function ψ(λ) := λ√(φ⁻¹(λ)), 0 < λ ≤ a, we have δ = √α_δ φ(α_δ) = ψ(φ(α_δ)), so that α_δ = φ⁻¹(ψ⁻¹(δ)).
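For a concrete Hölder-type choice φ(λ) = λ^ν (an assumption made here only for illustration), the relation φ(α_δ)√α_δ = δ can be solved in closed form: α_δ = δ^{2/(2ν+1)}, with the corresponding rate ψ⁻¹(δ) = δ^{2ν/(2ν+1)}. A quick numerical check:

```python
import numpy as np

# Worked instance of the a priori choice under the (assumed) Holder-type
# source function phi(lambda) = lambda**nu. Then
#   sqrt(alpha) * phi(alpha) = delta  =>  alpha_delta = delta**(2/(2*nu + 1)),
# with rate psi^{-1}(delta) = delta**(2*nu/(2*nu + 1)).
nu, delta = 1.0, 1e-4
alpha = delta ** (2.0 / (2.0 * nu + 1.0))
lhs = np.sqrt(alpha) * alpha ** nu        # sqrt(alpha) * phi(alpha)
rate = delta ** (2.0 * nu / (2.0 * nu + 1.0))
print(alpha, lhs, rate)
```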
In view of the above observation, Theorem 11 leads to the following.
Theorem 12. Let ψ(λ) := λ√(φ⁻¹(λ)), 0 < λ ≤ a, and let the assumptions of Theorem 11 hold. For δ > 0, let α := α_δ = φ⁻¹(ψ⁻¹(δ)) and let n be as in Theorem 11. Then

‖x_n − x̂‖ = O(ψ⁻¹(δ)).
4.2. Balancing Principle
Note that the a priori choice of the parameter α could be achieved only in the ideal situation when the function φ is known. The point is that the best function φ measuring the rate of convergence in Theorem 11 is usually unknown. Therefore, in practical applications, different parameters α = α_i are often selected from some finite set {α_i : 0 ≤ i ≤ N} and the corresponding elements x_{n,α_i}^δ, i = 0, 1, ..., N, are studied online. Let x_i := x_{n,α_i}^δ. Then from Theorem 11 we have

‖x_i − x̂‖ ≤ C(φ(α_i) + δ/√α_i), i = 0, 1, ..., N.
We consider the balancing principle suggested by Pereverzev and Schock [16] for choosing the regularization parameter α from the set

D_N := {α_i : α_i = μ^{2i}α₀, i = 0, 1, ..., N},

where μ > 1 is a fixed constant (see [18]) and α₀ = δ².
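One possible implementation of such a grid-based balancing choice is sketched below for a linear toy problem, where the Tikhonov minimizers are available in closed form; the kernel, the grid parameters, and the constant 4 in the discrepancy test are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

# Hedged sketch of a balancing-type parameter choice over the geometric grid
# alpha_i = mu^(2i) * delta^2, for linear Tikhonov regularization
#   x_alpha = (A^T A + alpha I)^{-1} A^T y_delta.
rng = np.random.default_rng(1)
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2) / n   # ill-conditioned map
x_true = t * (1.0 - t)
delta = 1e-4
noise = rng.standard_normal(n)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)  # ||y - y_d|| = delta

mu, N = 1.5, 25
alphas = [delta ** 2 * mu ** (2 * i) for i in range(N)]
xs = [np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ y_delta) for a in alphas]

# Keep the largest index k whose solution stays within 4*delta/sqrt(alpha_j)
# of every solution computed with a smaller parameter.
k = 0
for i in range(N):
    if all(np.linalg.norm(xs[i] - xs[j]) <= 4.0 * delta / np.sqrt(alphas[j])
           for j in range(i)):
        k = i
print(k, alphas[k])
```

The attraction of this strategy, as the text notes, is that it uses only computable quantities: no knowledge of the function φ is required.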
To obtain a conclusion from this parameter choice, we consider all possible functions φ satisfying Assumption 3 with φ(α_i) ≤ δ/√α_i for some i. Any such function is called admissible for x̂, and it can be used as a measure for the convergence of x_α^δ to x̂ (see [19]).
The main result of this section is the following theorem, whose proof is analogous to the proof of Theorem 4.4 in [13].
Theorem 13. Assume that there exists i ∈ {0, 1, ..., N} such that φ(α_i) ≤ δ/√α_i. Let the assumptions of Theorem 11 be satisfied, and let

l := max{i : φ(α_i) ≤ δ/√α_i} < N,
k := max{i : ‖x_i − x_j‖ ≤ 4C(δ/√α_j), j = 0, 1, ..., i},

where C is as in Theorem 11. Then l ≤ k and

‖x_k − x̂‖ ≤ 6Cμψ⁻¹(δ).
5. Numerical Examples
Let the half-space z ≥ 0 be modeled by two layers of constant different densities σ₁ and σ₂, separated by a surface S to be determined. In the Cartesian coordinate system, whose xy-plane coincides with the ground surface and whose z-axis is directed downward, the inverse gravimetry problem takes the form of a nonlinear integral equation of the first kind, (61) (see [20] and the references therein); here the kernel involves the gravitational constant, Δσ = σ₂ − σ₁ is the density jump at the interface S, described by the function u = u(x, y) to be evaluated, and the right-hand side f is the anomalous gravitational field caused by the deviation of the interface S from the horizontal asymptotic plane z = H. The field f is given on a domain D.
Since the first term in (61) does not depend on the unknown function u, (61) can be rewritten in the form A(u) = f₀ with a modified right-hand side, where A denotes the resulting nonlinear integral operator.
The derivative of the operator A at the point u is expressed by an explicit integral formula. Applying to the integral equation (63) the two-dimensional analogue of the rectangle formula with a uniform grid in each variable, we obtain a system of nonlinear equations for the unknown vector of grid values of u; in vector-matrix form this system takes the form A_n(u) = b, where u and b are vectors whose dimension equals the number of grid nodes.
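The discretization step can be illustrated on a one-dimensional analogue; the nonlinear kernel below is a hypothetical stand-in for the gravimetry kernel, which is not reproduced here. The sketch also checks the resulting Jacobian against a finite difference.

```python
import numpy as np

# One-dimensional hedged analogue of the discretization: a nonlinear
# integral operator F(u)(x) = int_0^1 k(x, s, u(s)) ds approximated by the
# rectangle rule on a uniform grid; k is an illustrative stand-in kernel.
n = 64
s = (np.arange(n) + 0.5) / n                  # uniform midpoint grid on [0, 1]
X, S = np.meshgrid(s, s, indexing="ij")

def F(u):
    # F(u)_i ~ (1/n) * sum_j k(x_i, s_j, u_j)
    return np.log(1.0 + (X - S) ** 2 + u[None, :] ** 2).sum(axis=1) / n

def Fprime(u):
    # Jacobian of the discretized operator: (1/n) * dk/du at (x_i, s_j, u_j)
    return (2.0 * u[None, :] / (1.0 + (X - S) ** 2 + u[None, :] ** 2)) / n

u = np.sin(2.0 * np.pi * s)
v = np.cos(2.0 * np.pi * s)
eps = 1e-6
fd = (F(u + eps * v) - F(u)) / eps            # finite-difference directional derivative
print(np.max(np.abs(fd - Fprime(u) @ v)))     # small, O(eps)
```

In the two-dimensional problem the same construction applies with a double sum over the grid, and the Jacobian becomes a dense matrix of the size of the grid.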
The discrete variant of the derivative has the form (66), where the scalar factor is constant and the matrix is symmetric; its entries are evaluated by formula (67).
Let us define the exact solution û by formula (68) on the domain D; the remaining problem parameters (the densities, the depth H of the asymptotic plane, and the grid sizes) are fixed for the numerical experiments.
Note that on the set Q (see Remarks 9 and 10) the required Lipschitz-type condition is satisfied (see [20, 21]). Besides, for our data, the matrix approximating the derivative of the operator is symmetric and positive definite, with a small minimal eigenvalue and a large condition number, reflecting the ill-posedness of the problem. In view of this, the initial guess for the iteration was chosen accordingly.
Observe that, since the matrix approximating the derivative is self-adjoint and positive definite, the source condition can be verified directly for our data, and hence Assumption 3 is satisfied for a suitable function φ.
The results of the numerical experiments are presented in Table 1. Here we report the numerical solution obtained by our method, the relative error of the solution, and the residual for a noisy right-hand side.

Next we present an example of a nonlinear equation where Assumption 1(a) is satisfied but the Lipschitz condition on F′ is not.
Example 1. Let X = Y = ℝ, D(F) = [0, ∞), and define the function F on D(F) by

F(x) = c₁x^{1+1/i} + c₂x,

where c₁ > 0 and c₂ > 0 are real parameters and i > 2 is an integer. Then F′(x) = c₁(1 + 1/i)x^{1/i} + c₂ is not Lipschitz on D(F), since the function x ↦ x^{1/i} has unbounded difference quotients near the origin. Hence, the Lipschitz condition on F′ is not satisfied. However, the central Lipschitz condition, Assumption 1(a), holds. Indeed, since F′(u) ≥ c₂ > 0 on D(F), we may take

Φ(x, u, v) = [F′(x) − F′(u)]v / F′(u),

which satisfies [F′(x) − F′(u)]v = F′(u)Φ(x, u, v), and the required bound on Φ follows from the explicit form of F′.
6. Conclusion
We have considered an iterative method, which does not involve inversion of an operator, for obtaining an approximate solution x_α^δ of a nonlinear ill-posed operator equation F(x) = y when the available data are y^δ in place of the exact data y. It is assumed that F is Fréchet differentiable. The procedure involves finding, in an iterative manner, the fixed point of the function

G(x) := x − (F′(x₀)*(F(x) − y^δ) + α(x − x₀)).

For choosing the regularization parameter α we made use of the adaptive method suggested in [16].
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
1. H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, The Netherlands, 1996.
2. Q.-N. Jin and Z.-Y. Hou, "On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems," Numerische Mathematik, vol. 83, no. 1, pp. 139–159, 1999.
3. U. Tautenhahn and Q.-N. Jin, "Tikhonov regularization and a posteriori rules for solving nonlinear ill-posed problems," Inverse Problems, vol. 19, no. 1, pp. 1–21, 2003.
4. H. W. Engl, K. Kunisch, and A. Neubauer, "Convergence rates for Tikhonov regularisation of nonlinear ill-posed problems," Inverse Problems, vol. 5, no. 4, pp. 523–540, 1989.
5. A. Neubauer, "Tikhonov regularisation for nonlinear ill-posed problems: optimal convergence rates and finite-dimensional approximation," Inverse Problems, vol. 5, no. 4, pp. 541–557, 1989.
6. R. Ramlau, "TIGRA—an iterative algorithm for regularizing nonlinear ill-posed problems," Inverse Problems, vol. 19, no. 2, pp. 433–465, 2003.
7. M. Hanke, A. Neubauer, and O. Scherzer, "A convergence analysis of the Landweber iteration for nonlinear ill-posed problems," Numerische Mathematik, vol. 72, pp. 21–37, 1995.
8. R. Ramlau, "Modified Landweber method for inverse problems," Numerical Functional Analysis and Optimization, vol. 20, no. 1, pp. 79–98, 1999.
9. M. Hanke, "A regularizing Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems," Inverse Problems, vol. 13, no. 1, pp. 79–95, 1997.
10. A. B. Bakushinskii, "The problem of the convergence of the iteratively regularized Gauss-Newton method," Computational Mathematics and Mathematical Physics, vol. 32, no. 9, pp. 1353–1359, 1992.
11. B. Blaschke, A. Neubauer, and O. Scherzer, "On convergence rates for the iteratively regularized Gauss-Newton method," IMA Journal of Numerical Analysis, vol. 17, no. 3, pp. 421–436, 1997.
12. M. Hanke, "Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems," Numerical Functional Analysis and Optimization, vol. 18, no. 9-10, pp. 971–993, 1997.
13. S. George, "On convergence of regularized modified Newton's method for nonlinear ill-posed problems," Journal of Inverse and Ill-Posed Problems, vol. 18, no. 2, pp. 133–146, 2010.
14. B. Kaltenbacher, "Some Newton-type methods for the regularization of nonlinear ill-posed problems," Inverse Problems, vol. 13, no. 3, pp. 729–753, 1997.
15. O. Scherzer, "A convergence analysis of a method of steepest descent and a two-step algorithm for nonlinear ill-posed problems," Numerical Functional Analysis and Optimization, vol. 17, no. 1-2, pp. 197–214, 1996.
16. S. Pereverzev and E. Schock, "On the adaptive selection of the parameter in regularization of ill-posed problems," SIAM Journal on Numerical Analysis, vol. 43, no. 5, pp. 2060–2076, 2005.
17. I. K. Argyros, Convergence and Applications of Newton-Type Iterations, Springer, New York, NY, USA, 2008.
18. E. V. Semenova, "Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators," Computational Methods in Applied Mathematics, vol. 10, no. 4, pp. 444–454, 2010.
19. S. Lu and S. V. Pereverzev, "Sparsity reconstruction by the standard Tikhonov method," RICAM Report, 2008.
20. V. V. Vasin, I. I. Prutkin, and L. Yu. Timerkhanova, "Retrieval of a three-dimensional relief of geological boundary from gravity data," Izvestiya, Physics of the Solid Earth, vol. 32, no. 11, pp. 58–62, 1996.
21. V. V. Vasin, "Modified processes of Newton type generating Fejér approximations of regularized solutions of nonlinear equations," Proceedings in Mathematics and Mechanics, vol. 19, no. 2, pp. 85–97, 2013 (Russian).
Copyright
Copyright © 2014 Ioannis K. Argyros et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.