Journal of Mathematics
Volume 2013 (2013), Article ID 439316, 9 pages
Newton-Type Iteration for Tikhonov Regularization of Nonlinear Ill-Posed Problems
Department of Mathematical and Computational Sciences, National Institute of Technology, Mangalore, Karnataka 575 025, India
Received 14 August 2012; Accepted 15 October 2012
Academic Editor: De-Xing Kong
Copyright © 2013 Santhosh George. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Recently, in the work of George, 2010, we considered a modified Gauss-Newton method for the approximate solution of a nonlinear ill-posed operator equation F(x) = y, where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. The analysis in George, 2010 was carried out using a majorizing sequence. In this paper, we again consider the modified Gauss-Newton method, but the convergence analysis and the error estimate are obtained by analyzing the odd and even terms of the sequence separately. We use the adaptive method of Pereverzev and Schock, 2005 for choosing the regularization parameter. The optimality of this method is proved under a general source condition. A numerical example involving a nonlinear integral equation demonstrates the performance of the procedure.
“Dedicated to Prof. Ulrich Tautenhahn”
Inverse problems have been one of the fastest growing research areas in applied mathematics in the last decades. It is well known that these problems typically lead to mathematical models that are ill-posed (according to Hadamard's definition [1]) in the sense that it is not possible, in general, to provide a unique solution.
In this paper, we consider the task of approximately solving the nonlinear ill-posed equation

F(x) = y. (1)

This equation and the task of solving it make sense only when placed in an appropriate framework. Throughout this paper, we assume that F : D(F) ⊆ X → Y is a nonlinear operator between Hilbert spaces X and Y, whose inner products and norms are denoted by ⟨·,·⟩ and ‖·‖, respectively (the space in question will be clear from the context). We assume that (1) has a unique solution x̂. For δ > 0, let y^δ be the available data with

‖y − y^δ‖ ≤ δ. (2)

Since (1) is ill-posed, regularization methods are used to obtain stable approximate solutions [2, 3]. Iterative regularization methods are one such class of regularization methods [4–8].
Bakushinskii [4] proposed an iterative method, namely, the iteratively regularized Gauss-Newton method, in which the iterations are defined by

x_{n+1} = x_n − (A_n* A_n + α_n I)^{−1} [A_n* (F(x_n) − y^δ) + α_n (x_n − x_0)], (3)

where A_n := F′(x_n) (here and in the following F′(x) denotes the Fréchet derivative of F at x), x_0 is an initial guess, and (α_n) is a sequence of positive real numbers with α_n → 0 and α_n/α_{n+1} ≤ μ for some constant μ > 1. For the convergence analysis, Bakushinskii used a Hölder-type source condition on the exact solution x̂ of (1), namely x_0 − x̂ ∈ R((F′(x̂)* F′(x̂))^ν) for some ν > 0. Later, Hohage [9, 10] and Langer and Hohage [11] considered the iteratively regularized Gauss-Newton method under different source conditions and stopping rules.
In [5], Bakushinskii generalized the procedure in [4] by considering a generalized form of the regularized Gauss-Newton method in which the regularizing operator (A_n* A_n + α_n I)^{−1} in (3) is replaced by a piecewise continuous filter function of A_n* A_n, with x_0 and (α_n) as in (3). It should be noted that the convergence of (3) was also shown by Bakushinsky and Smirnova in [12]. In [6], Blaschke et al. considered the above generalized procedure with a suitable stopping index and obtained the error estimate under a Hölder-type source condition.
Recently, Mahale and Nair [8] considered a simplified generalized Gauss-Newton iteration in which the Fréchet derivative is evaluated only at the initial guess x_0, the regularization again being realized through a positive real-valued piecewise continuous filter function. In [8], the stopping index n_δ for the iteration is chosen by a discrepancy-type criterion involving a sufficiently large constant C not depending on δ.
To prove the results in [8], Mahale and Nair considered the following general source condition: x_0 − x̂ = φ(F′(x_0)* F′(x_0)) w for some w ∈ X. Here, φ is a continuous, strictly monotonically increasing function defined on (0, a] for some a > 0 and satisfying lim_{λ→0} φ(λ) = 0.
In [7], Kaltenbacher considered a Newton-type iteration procedure and proved that the resulting sequence converges to the critical point of the Tikhonov functional, characterized by the first-order optimality condition. To obtain an error estimate in [7], two kinds of structural conditions on the Fréchet derivative of F are used, each with a corresponding smallness restriction on the constants involved.
In [13], the author considered a particular case of the method (9), namely the iteration

x_{n+1} = x_n − (A_0* A_0 + α I)^{−1} [A_0* (F(x_n) − y^δ) + α (x_n − x_0)], (17)

with A_0 := F′(x_0), for approximately solving (1). The analysis in [13] was carried out using a suitably constructed majorizing sequence, and the stopping rule in [13] was based on this majorizing sequence.
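Assuming (17) denotes the frozen-derivative (modified) Gauss-Newton iteration with Tikhonov regularization as written above, a minimal numerical sketch on a two-dimensional toy problem might look as follows. The operator F, the data, and all parameter values here are illustrative assumptions, not taken from the paper's example.

```python
import numpy as np

def modified_gauss_newton(F, A0, y_delta, x0, alpha, n_iter=50):
    """Frozen-derivative Gauss-Newton with Tikhonov regularization:
    x_{n+1} = x_n - (A0^T A0 + alpha I)^{-1} (A0^T (F(x_n) - y_delta)
                                              + alpha (x_n - x0)).
    A0 is the (fixed) Jacobian of F at the initial guess x0."""
    x = x0.copy()
    M = A0.T @ A0 + alpha * np.eye(len(x0))   # fixed linear system matrix
    for _ in range(n_iter):
        rhs = A0.T @ (F(x) - y_delta) + alpha * (x - x0)
        x = x - np.linalg.solve(M, rhs)
    return x

# Illustrative toy problem (assumption): F(x) = (x1 + x2^2, x1 * x2).
F = lambda x: np.array([x[0] + x[1]**2, x[0] * x[1]])
x_true = np.array([1.0, 2.0])
delta = 1e-3
y_delta = F(x_true) + delta * np.array([1.0, -1.0]) / np.sqrt(2)  # noisy data

x0 = np.array([0.8, 1.8])                     # initial guess near x_true
A0 = np.array([[1.0, 2 * x0[1]],              # Jacobian of F at x0
               [x0[1], x0[0]]])
x_alpha = modified_gauss_newton(F, A0, y_delta, x0, alpha=1e-4)
print(x_alpha)
```

Note the design point: since the derivative is frozen at x_0, the matrix M is factored (or inverted) only once, which is what distinguishes this modified method from the iteratively regularized Gauss-Newton method (3).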
Recall from [14] that a nonnegative, increasing sequence (t_n) (i.e., t_n ≤ t_{n+1}) is said to be a majorizing sequence of a sequence (x_n) in X if ‖x_{n+1} − x_n‖ ≤ t_{n+1} − t_n for all n ≥ 0.
In this paper, we consider the sequence (17) and analyze it by considering its even and odd terms separately, obtaining the optimal order of the error. The regularization parameter is chosen according to the balancing principle considered by Pereverzev and Schock in [15].
The organization of this paper is as follows. The proposed method and its convergence analysis are given in Section 2. Error analysis and the parameter choice strategy are discussed in Section 3. Section 4 deals with the implementation of the method, a numerical example is given in Section 5, and the paper ends with a conclusion in Section 6.
2. Convergence of the Method (17)
Let where and is the initial guess.
Remark 1. It can be seen that and , where is defined as in (17).
Assumption 2. There exists a constant k_0 > 0 such that for every x, u ∈ D(F) and v ∈ X, there exists an element Φ(x, u, v) ∈ X satisfying [F′(x) − F′(u)] v = F′(u) Φ(x, u, v) and ‖Φ(x, u, v)‖ ≤ k_0 ‖v‖ ‖x − u‖.
Hereafter, for convenience, we use the notation , , and for , , and , respectively.
The regularization parameter α is selected from some finite set of positive real numbers. Throughout this paper, we assume that the operator F is Fréchet differentiable at all x ∈ D(F).
Remark 3. By Assumption 2, one obtains the inequality (24) used in the sequel.
Using the inequality (24), we prove the following.
Proof. By Assumption 2, we obtain the first estimate. This proves (a). Again, by Assumption 2 and (27), we obtain the second estimate. This proves (b).
Thus, (c) follows from (a) and (b). Next, we prove by induction that the required bound holds for all n. The first two cases follow from (27), (29), and Remark 3. Suppose the bound holds for some n. By (a), (b), and (c), together with (32) and Remark 3, the bound holds for n + 1 as well. Again by (a), (b), and (34), the companion estimate also propagates, and hence by induction both hold for all n. This completes the proof of the theorem.
The main result of Section 2 is the following.
Proof. Using the relation (33), we obtain
Thus, the sequence is a Cauchy sequence in X, and hence it converges, say to x_α^δ. Observe that the even and odd subsequences have the same limit, and hence the whole sequence converges to x_α^δ.
Now, letting n → ∞ in (20), we obtain the equation satisfied by the limit x_α^δ.
This completes the proof.
3. Error Analysis
The next assumption on the source condition is based on a source function φ and a structural property of φ. We will use this assumption to obtain an error estimate for the regularized approximation.
Assumption 6. There exists a continuous, strictly monotonically increasing function φ : (0, a] → (0, ∞) satisfying (i) lim_{λ→0} φ(λ) = 0, (ii) sup_{λ>0} α φ(λ)/(λ + α) ≤ φ(α) for all α ∈ (0, a], and (iii) there exists v ∈ X such that x_0 − x̂ = φ(A_0* A_0) v.
Theorem 7. Let be as in (38). Then,
Proof. Let . Then, and hence by (38), we have . Thus, where , , and . Note that by Assumption 6 and by Assumption 2 The result now follows from (42), (43), (44), and (45). This completes the proof of the theorem.
3.1. Error Bounds under Source Conditions
3.2. A Priori Choice of the Parameter
Observe that the upper bound in Theorem 8 is of optimal order for the choice α := α_δ which satisfies φ(α_δ) √α_δ = δ. Now, using the function ψ(λ) := λ √(φ^{−1}(λ)), 0 < λ ≤ a, we have δ = √α_δ φ(α_δ) = ψ(φ(α_δ)), so that α_δ = φ^{−1}(ψ^{−1}(δ)). Here, ψ^{−1} means the inverse function of ψ.
3.3. Adaptive Choice of the Parameter
In the balancing principle considered by Pereverzev and Schock in [15], the regularization parameter is selected from some finite set

D_N := {α_i = μ^i α_0 : i = 0, 1, …, N}, μ > 1, α_0 > 0.

Let x_i := x_{α_i}^δ, where x_α^δ is defined as in (20) with α = α_i and i = 0, 1, …, N. Then, from Theorem 8, we have

‖x̂ − x_i‖ ≤ C (φ(α_i) + δ/√α_i), i = 0, 1, …, N.

Precisely, we choose the regularization parameter α = α_l from the set D defined by

D := {α_i ∈ D_N : ‖x_i − x_j‖ ≤ 4Cδ/√α_j, j = 0, 1, …, i},

where l := max{i : α_i ∈ D}.
To obtain a conclusion from this parameter choice, we consider all possible functions φ satisfying Assumptions 2 and 6. Any such function is called admissible for x̂, and it can be used as a measure of the rate of convergence of the regularized approximation to x̂.
The main result of Section 3 is the following.
Proof. The proof is analogous to the proof of Theorem 4.4 in [15] and is therefore omitted here.
4. Implementation of the Method
Finally, the balancing algorithm associated with the choice of the parameter specified in Theorem 10 involves the following steps.(i) Choose μ > 1 and an initial value α_0 > 0.(ii) Choose N big enough but not too large, and set α_i := μ^i α_0, i = 0, 1, …, N.(iii) Choose l := max{i : α_i ∈ D} and take α := α_l.
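The steps above can be sketched in code. The geometric grid, the constant 4 in the comparison, and the surrogate "reconstructions" used here are illustrative assumptions; in practice each x_i would be computed by iterating (17) with α = α_i.

```python
import numpy as np

def balancing_choice(x, alphas, delta, c=4.0):
    """Pereverzev-Schock balancing principle (sketch): given regularized
    reconstructions x[i] on a geometric grid alphas[i], return the largest i
    such that ||x[i] - x[j]|| <= c * delta / sqrt(alphas[j]) for all j <= i."""
    l = 0
    for i in range(len(alphas)):
        if all(np.linalg.norm(x[i] - x[j]) <= c * delta / np.sqrt(alphas[j])
               for j in range(i + 1)):
            l = i
        else:
            break
    return l

# Illustrative surrogate (assumption): scalar "reconstructions" whose error
# behaves like phi(alpha) + delta/sqrt(alpha) with phi(alpha) = alpha.
delta, mu, alpha0, N = 1e-3, 1.5, 1e-6, 40
alphas = alpha0 * mu ** np.arange(N)
x = [np.array([1.0 + a + delta / np.sqrt(a)]) for a in alphas]
l = balancing_choice(x, alphas, delta)
print(l, alphas[l])
```

For this surrogate, the selected α_l lands near the balance point where φ(α) ≈ δ/√α, i.e., near δ^{2/3}, without the principle ever needing to know φ.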
5. Numerical Example
We apply the algorithm by choosing a sequence of finite-dimensional subspaces V_n of X with dim V_n = n + 1. Precisely, we choose V_n as the linear span of {v_0, v_1, …, v_n}, where the v_i, i = 0, 1, …, n, are the linear splines on a uniform grid of n + 1 points in [0, 1].
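A sketch of such a basis follows, assuming X = L²(0, 1) and the standard piecewise-linear "hat" functions on a uniform grid; the grid size and the test function are illustrative.

```python
import numpy as np

def hat_basis(n):
    """Return the n+1 linear-spline (hat) basis functions v_0, ..., v_n
    on the uniform grid t_i = i/n of [0, 1]."""
    t = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    def make(i):
        def vi(s):
            s = np.asarray(s, dtype=float)
            return np.clip(1.0 - np.abs(s - t[i]) / h, 0.0, None)
        return vi
    return [make(i) for i in range(n + 1)]

# Because the hats form a nodal basis, the spline interpolant of a function
# has the nodal values as its coefficients.
n = 16
basis = hat_basis(n)
grid = np.linspace(0.0, 1.0, n + 1)
f = lambda s: np.sin(np.pi * s)
coeff = f(grid)                          # nodal values = spline coefficients
s = np.linspace(0.0, 1.0, 200)
interp = sum(c * v(s) for c, v in zip(coeff, basis))
err = np.max(np.abs(interp - f(s)))
print(err)
```

The interpolation error for a smooth function is O(h²) with h = 1/n, which is consistent with the second-order accuracy expected of linear splines.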
We consider the same example of a nonlinear integral operator as in [17, Section 4.3]. Let F : D(F) ⊆ L²(0, 1) → L²(0, 1) be defined by

F(u)(t) := ∫₀¹ k(t, s) u(s)³ ds,

where

k(t, s) = (1 − t) s if s ≤ t, and k(t, s) = (1 − s) t if s ≥ t.

The Fréchet derivative of F is given by

(F′(u)w)(t) = 3 ∫₀¹ k(t, s) u(s)² w(s) ds.
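For an operator of the Hammerstein form F(u)(t) = ∫₀¹ k(t, s) u(s)³ ds, a simple quadrature discretization of F and its Fréchet derivative can be sketched as follows; the Green's-function kernel, the cubic nonlinearity, and the grid size here are illustrative assumptions, not necessarily the exact data of [17].

```python
import numpy as np

# Assumed illustrative kernel: Green's function of -u'' with zero boundary
# values, k(t, s) = (1 - t) s for s <= t and (1 - s) t otherwise.
def kernel(t, s):
    return np.where(s <= t, (1 - t) * s, (1 - s) * t)

def discretize(n):
    """Trapezoidal discretization of F(u)(t) = int_0^1 k(t,s) u(s)^3 ds."""
    s = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] = w[-1] = 0.5 / (n - 1)                    # trapezoid weights
    K = kernel(s[:, None], s[None, :]) * w          # quadrature matrix
    F = lambda u: K @ u**3                          # nonlinear operator
    Fprime = lambda u: K * (3 * u**2)[None, :]      # Frechet derivative matrix
    return s, F, Fprime

s, F, Fprime = discretize(101)
u = s * (1 - s)        # sample input function on the grid
y = F(u)
print(y.shape)
```

A quick finite-difference check confirms that Fprime is the linearization of F, mirroring the formula F′(u)w = 3 ∫ k u² w ds.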
A direct computation with this form of the Fréchet derivative shows that Assumption 2 is satisfied.
In our computation, we take known data y and a noise level δ. The exact solution x̂ and the initial guess x_0 are chosen so that x_0 − x̂ satisfies a source condition of the form required in Assumption 6.
Observe that while performing numerical computations on a finite-dimensional subspace V_n of X, one has to consider the operator P_n F instead of F, where P_n is the orthogonal projection onto V_n. Thus, one incurs an additional discretization error ‖F(x) − P_n F(x)‖.
In this paper, we considered a modified Gauss-Newton method for approximately solving the nonlinear ill-posed operator equation F(x) = y, where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. The same method was considered in [13] by the author, but the analysis there was based on a suitably constructed majorizing sequence. In this paper, we analyzed the sequence by considering its even and odd terms separately; this analysis is simpler than that of [13]. We used the adaptive method of Pereverzev and Schock [15] for choosing the regularization parameter. The optimality of the method was proved under a general source condition. Finally, a numerical example involving a nonlinear integral equation demonstrated the performance of the method.
- J. Hadamard, Lectures on Cauchy’s Problem in Linear Partial Differential Equations, Dover Publications, New York, NY, USA, 1952.
- H. W. Engl, K. Kunisch, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic, Dordrecht, The Netherlands, 1996.
- H. W. Engl, K. Kunisch, and A. Neubauer, “Convergence rates for Tikhonov regularisation of non-linear ill-posed problems,” Inverse Problems, vol. 5, no. 4, pp. 523–540, 1989.
- A. B. Bakushinskii, “The problem of the convergence of the iteratively regularized Gauss-Newton method,” Computational Mathematics and Mathematical Physics, vol. 32, no. 9, pp. 1353–1359, 1992.
- A. B. Bakushinskii, “Iterative methods without saturation for solving degenerate nonlinear operator equations,” Doklady Akademii Nauk, vol. 344, pp. 7–8, 1995.
- B. Blaschke, A. Neubauer, and O. Scherzer, “On convergence rates for the iteratively regularized Gauss-Newton method,” IMA Journal of Numerical Analysis, vol. 17, no. 3, pp. 421–436, 1997.
- B. Kaltenbacher, “A note on logarithmic convergence rates for nonlinear Tikhonov regularization,” Journal of Inverse and Ill-Posed Problems, vol. 16, no. 1, pp. 79–88, 2008.
- P. A. Mahale and M. T. Nair, “A simplified generalized Gauss-Newton method for nonlinear ill-posed problems,” Mathematics of Computation, vol. 78, no. 265, pp. 171–184, 2009.
- T. Hohage, “Logarithmic convergence rates of the iteratively regularized Gauss-Newton method for an inverse potential and an inverse scattering problem,” Inverse Problems, vol. 13, no. 5, pp. 1279–1299, 1997.
- T. Hohage, “Regularization of exponentially ill-posed problems,” Numerical Functional Analysis and Optimization, vol. 21, no. 3, pp. 439–464, 2000.
- S. Langer and T. Hohage, “Convergence analysis of an inexact iteratively regularized Gauss-Newton method under general source conditions,” Journal of Inverse and Ill-Posed Problems, vol. 15, no. 3, pp. 311–327, 2007.
- A. Bakushinsky and A. Smirnova, “On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems,” Numerical Functional Analysis and Optimization, vol. 26, no. 1, pp. 35–48, 2005.
- S. George, “On convergence of regularized modified Newton's method for nonlinear ill-posed problems,” Journal of Inverse and Ill-Posed Problems, vol. 18, no. 2, pp. 133–146, 2010.
- I. K. Argyros, Convergence and Applications of Newton-Type Iterations, Springer, New York, NY, USA, 2008.
- S. Pereverzev and E. Schock, “On the adaptive selection of the parameter in regularization of ill-posed problems,” SIAM Journal on Numerical Analysis, vol. 43, no. 5, pp. 2060–2076, 2005.
- S. Lu and S. V. Pereverzev, “Sparsity reconstruction by the standard Tikhonov method,” RICAM-Report 2008-17, 2008.
- E. V. Semenova, “Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators,” Computational Methods in Applied Mathematics, vol. 4, no. 4, pp. 444–454, 2010.
- C. W. Groetsch, J. T. King, and D. Murio, “Asymptotic analysis of a finite element method for Fredholm equations of the first kind,” in Treatment of Integral Equations by Numerical Methods, C. T. H. Baker and G. F. Miller, Eds., pp. 1–11, Academic Press, London, UK, 1982.