Journal of Mathematics
Volume 2013 (2013), Article ID 439316, 9 pages
http://dx.doi.org/10.1155/2013/439316
Research Article

Newton-Type Iteration for Tikhonov Regularization of Nonlinear Ill-Posed Problems

Santhosh George

Department of Mathematical and Computational Sciences, National Institute of Technology, Mangalore, Karnataka 575 025, India

Received 14 August 2012; Accepted 15 October 2012

Academic Editor: De-Xing Kong

Copyright © 2013 Santhosh George. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Recently, in the work of George, 2010, we considered a modified Gauss-Newton method for the approximate solution of a nonlinear ill-posed operator equation F(x) = y, where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. The analysis in George, 2010, was carried out using a majorizing sequence. In this paper, we again consider the modified Gauss-Newton method, but the convergence analysis and the error estimate are obtained by analyzing the odd and even terms of the iteration sequence separately. We use the adaptive method in the work of Pereverzev and Schock, 2005, for choosing the regularization parameter. The optimality of this method is proved under a general source condition. A numerical example involving a nonlinear integral equation illustrates the performance of the procedure.

“Dedicated to Prof. Ulrich Tautenhahn”

1. Introduction

Inverse problems have been one of the fastest growing research areas in applied mathematics in recent decades. It is well known that these problems typically lead to mathematical models that are ill-posed (according to Hadamard’s definition [1]) in the sense that they need not admit a unique solution that depends continuously on the data.

In this paper, we consider the task of approximately solving the nonlinear ill-posed equation F(x) = y. This equation and the task of solving it make sense only when placed in an appropriate framework. Throughout this paper, we assume that F : D(F) ⊆ X → Y is a nonlinear operator between Hilbert spaces X and Y, with inner products and corresponding norms denoted by ⟨·, ·⟩ and ‖·‖, respectively, and that y ∈ Y. We assume that (1) has a unique solution x̂. For δ > 0, let y^δ ∈ Y be the available noisy data with ‖y − y^δ‖ ≤ δ. Since (1) is ill-posed, regularization methods are used to obtain stable approximate solutions [2, 3]. Iterative regularization methods are one such class of regularization methods [4–8].

In [4], Bakushinskii proposed an iterative method, namely, the iteratively regularized Gauss-Newton method, in which the iterations are defined by (3), where x_0 is an initial guess (here and in the following, F'(x) denotes the Fréchet derivative of F at x) and (α_k) is a sequence of positive real numbers decreasing to zero and satisfying α_k ≤ μ α_{k+1} for some constant μ > 1. For the convergence analysis, Bakushinskii used the Hölder-type source condition (5) on the exact solution x̂ of (1), which requires x_0 − x̂ to lie in the range of a fractional power of F'(x̂)*F'(x̂). Later, Hohage [9, 10] and Langer and Hohage [11] considered the iteratively regularized Gauss-Newton method under different source conditions and stopping rules.
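For orientation, the iteratively regularized Gauss-Newton update of [4] is usually stated in the following form (quoted here from the standard literature; the notation may differ slightly from that of (3)):

\[
x_{k+1}^{\delta} \;=\; x_{k}^{\delta} \;-\; \bigl(F'(x_{k}^{\delta})^{*}F'(x_{k}^{\delta}) + \alpha_{k} I\bigr)^{-1}
\Bigl[F'(x_{k}^{\delta})^{*}\bigl(F(x_{k}^{\delta}) - y^{\delta}\bigr) + \alpha_{k}\bigl(x_{k}^{\delta} - x_{0}\bigr)\Bigr],
\qquad k = 0, 1, 2, \ldots,
\]

with the starting value x_0^δ = x_0.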

In [5], Bakushinskii generalized the procedure in [4] by considering a generalized form of the regularized Gauss-Newton method, in which the iterations are defined with x_0 and (α_k) as in (3) and with piecewise continuous generating functions. It should be noted that the convergence of (3) was also established by Bakushinsky and Smirnova in [12]. In [6], Blaschke et al. considered the above generalized procedure with a stopping index chosen by a discrepancy-type rule, and the error estimate was obtained under the Hölder-type source condition (8).

Recently, Mahale and Nair [8] considered an iteration procedure in which x_0 is the initial guess, (α_k) is as in (3), and the generating function is a positive real-valued piecewise continuous function defined on a suitable interval. In [8], the stopping index k_δ for the iteration is chosen so that a discrepancy-type inequality holds, where the constant involved is sufficiently large and does not depend on δ.
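For orientation, the discrepancy principle underlying such stopping rules selects the first index at which the residual drops below a fixed multiple of the noise level; in its basic form (the rules actually used in [6, 8] add further conditions),

\[
k_{\delta} \;:=\; \min\bigl\{\, k \in \mathbb{N} \,:\, \|F(x_{k}^{\delta}) - y^{\delta}\| \le \tau\,\delta \,\bigr\},
\qquad \tau > 1 .
\]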

To prove the results in [8], Mahale and Nair considered a general source condition (12) of the form x_0 − x̂ = φ(F'(x_0)*F'(x_0))v for some element v with suitably bounded norm. Here, φ is a continuous, strictly monotonically increasing function satisfying lim_{λ→0} φ(λ) = 0.

Note that the source conditions (5) and (8) involve the Fréchet derivative of F at the exact solution x̂, which is unknown in practice. In contrast, the source condition (12) depends on the Fréchet derivative of F at the initial guess x_0.

In [7], Kaltenbacher considered a Newton-type iteration procedure and proved that the resulting sequence converges to the critical point of the Tikhonov functional, characterized by the corresponding first-order optimality condition. In order to obtain an error estimate in [7], two kinds of structural conditions on F, denoted (a) and (b), are used, each combined with a corresponding smallness assumption.
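Here, the Tikhonov functional in question is the standard one with the initial guess x_0 as reference element,

\[
J_{\alpha}(x) \;:=\; \|F(x) - y^{\delta}\|^{2} + \alpha\,\|x - x_{0}\|^{2},
\]

and its critical points are characterized by the first-order optimality condition

\[
F'(x)^{*}\bigl(F(x) - y^{\delta}\bigr) + \alpha\,(x - x_{0}) \;=\; 0 .
\]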

In [13], the author considered a particular case of the method (9), namely, the modified Gauss-Newton iteration considered below as (17), for approximately solving (1). The analysis in [13] was carried out using a suitably constructed majorizing sequence, and the stopping rule in [13] was based on this majorizing sequence.
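For orientation, a modified (frozen-derivative) Gauss-Newton-Tikhonov update of the type considered in [13] typically reads as follows; the precise form of (17) is as given in [13]:

\[
x_{n+1} \;=\; x_{n} \;-\; \bigl(F'(x_{0})^{*}F'(x_{0}) + \alpha I\bigr)^{-1}
\Bigl[F'(x_{0})^{*}\bigl(F(x_{n}) - y^{\delta}\bigr) + \alpha\,(x_{n} - x_{0})\Bigr],
\qquad n = 0, 1, 2, \ldots .
\]

Freezing the Fréchet derivative at the initial guess x_0 has the practical advantage that only one linear operator has to be regularized and inverted throughout the iteration.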

Recall (see [14]) that a nonnegative increasing sequence (t_n) (i.e., t_n ≤ t_{n+1} for all n ≥ 0) is said to be a majorizing sequence of a sequence (x_n) in X if ‖x_{n+1} − x_n‖ ≤ t_{n+1} − t_n for all n ≥ 0.

The majorizing sequence in [13] depends on the initial guess x_0, and the conditions imposed on it in [13] are restrictive, so the method is not well suited for practical consideration.

In this paper, we consider the sequence (17) and analyze it by treating its even and odd terms separately, thereby obtaining an error estimate of optimal order. The regularization parameter is chosen according to the balancing principle considered by Pereverzev and Schock in [15].

The organization of this paper is as follows. The proposed method and its convergence analysis are given in Section 2. The error analysis and the parameter choice strategy are discussed in Section 3. Section 4 deals with the implementation of the method, a numerical example is given in Section 5, and the paper ends with a conclusion in Section 6.

2. Convergence of the Method (17)

Let the iterates be defined by (19) and (20), where x_0 is the initial guess.

Remark 1. It can be seen that the sequences defined by (19) and (20) correspond to the even and odd terms of the sequence defined in (17).

Assumption 2. There exists a constant k_0 > 0 such that, for every x, u in a suitable ball around x_0 contained in D(F) and every v ∈ X, there exists an element Φ(x, u, v) ∈ X satisfying [F'(x) − F'(u)]v = F'(u)Φ(x, u, v) and ‖Φ(x, u, v)‖ ≤ k_0 ‖v‖ ‖x − u‖.

Let

Hereafter, for convenience, we use the notation , , and for , , and , respectively.

The regularization parameter α is selected from the finite set introduced in Section 3.3 below. Throughout this paper, we assume that the operator F is Fréchet differentiable at all points under consideration.

Remark 3. Note that if , then by Assumption 2, we have This can be seen as follows:

Using the inequality (24), we prove the following.

Theorem 4. Let , , where . Let be as in (22), and let and be as in (19) and (20), respectively, with and . Then, we have the following:(a), (b),(c),(d).

Proof. Observe that if , for all , then by Assumption 2, we have and hence This proves (a).
Again observe that if , for all , and hence by Assumption 2 and (27), we have This proves (b).
Thus, if , for all , then (c) follows from (a) and (b). Now, we will prove using induction that , for all . Note that , and hence by (27) and Remark 3, that is, , again by (29) and Remark 3, that is, . Suppose that for some . Then, since we shall first find an estimate for . Note that by (a), (b), and (c), we have Therefore by (32) and Remark 3, we have that is, . So by induction for all . Again by (a), (b) and (34) we have Thus, , and hence by induction, for all . This completes the proof of the theorem.

The main result of Section 2 is the following.

Theorem 5. Let the sequences be as in (19) and (20), respectively, and let the assumptions of Theorem 4 hold. Then, the iteration defines a Cauchy sequence in X which converges, and the stated error estimate holds for its limit.

Proof. Using the relation (33), we obtain Thus, is a Cauchy sequence in , and hence it converges, say to . Observe that and . Hence, also converges to .
Now, by in (20), we obtain , that is,
This completes the proof.

3. Error Analysis

The next assumption is a source condition based on a source function φ and a property of φ. We will use this assumption to obtain an error estimate.

Assumption 6. There exists a continuous, strictly monotonically increasing function with satisfying(i),(ii), for all ,(iii)there exists such that

Theorem 7. Let be as in (38). Then,

Proof. Let . Then, and hence by (38), we have . Thus, where , , and . Note that by Assumption 6 and by Assumption 2 The result now follows from (42), (43), (44), and (45). This completes the proof of the theorem.

3.1. Error Bounds under Source Conditions

Combining the estimates in Theorems 5 and 7, we obtain the following.

Theorem 8. Let be defined as in (20). If all assumptions of Theorems 5 and 7 are fulfilled, then Further, if , then where .

3.2. A Priori Choice of the Parameter

Observe that the upper bound in Theorem 8 is of optimal order for the choice α := α_δ which satisfies √(α_δ) φ(α_δ) = δ. Now, using the function ψ(λ) := λ √(φ^{-1}(λ)), we have δ = √(α_δ) φ(α_δ) = ψ(φ(α_δ)), so that α_δ = φ^{-1}(ψ^{-1}(δ)). Here, ψ^{-1} means the inverse function of ψ.
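As an illustration, under the additional assumption that the source function is of Hölder type, φ(λ) = λ^ν with 0 < ν ≤ 1, the above choice works out explicitly:

\[
\psi(\lambda) = \lambda\sqrt{\varphi^{-1}(\lambda)} = \lambda^{\frac{2\nu+1}{2\nu}},
\qquad
\alpha_{\delta} = \varphi^{-1}\bigl(\psi^{-1}(\delta)\bigr) = \delta^{\frac{2}{2\nu+1}},
\qquad
\varphi(\alpha_{\delta}) = \psi^{-1}(\delta) = \delta^{\frac{2\nu}{2\nu+1}},
\]

which is the classical Hölder-type convergence rate.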

Theorem 9. Suppose that all assumptions of Theorems 5 and 7 are fulfilled. For , let , and let be as in Theorem 8 with . Then,

3.3. Adaptive Choice of the Parameter

In the balancing principle considered by Pereverzev and Schock in [15], the regularization parameter α is selected from a finite set of candidate values α_0 < α_1 < ⋯ < α_M. For each α_i, let x_i denote the corresponding approximation, defined as in (20) with α = α_i and the data y^δ. Then, from Theorem 8, each x_i satisfies the error bound of Theorem 8 with α replaced by α_i. Precisely, we choose the regularization parameter α as the largest candidate value for which a pairwise balancing inequality between the computed approximations holds (see the formulation below).
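For orientation, the balancing principle of [15] is usually implemented with a geometric grid of parameters and the selection rule below; the constant 4 is the one commonly used in this context and is stated here only as an assumption about the precise constant involved:

\[
D_M := \{\alpha_i = \mu^{i}\alpha_0 : i = 0, 1, \ldots, M\},\ \mu > 1,
\qquad
\alpha := \max\Bigl\{\alpha_i \in D_M : \ \|x_{\alpha_i}^{\delta} - x_{\alpha_j}^{\delta}\| \le \frac{4\,\delta}{\sqrt{\alpha_j}}\ \ \text{for all } j = 0, 1, \ldots, i\Bigr\},
\]

where x_{α_i}^δ denotes the approximation computed with the parameter α_i.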

To obtain a conclusion from this parameter choice, we consider all possible functions φ satisfying Assumption 6 (together with Assumption 2 on F). Any such function is called admissible for x̂, and it can be used as a measure for the convergence of the regularized approximations (see [16]).

The main result of Section 3 is the following.

Theorem 10. Assume that there exists such that . Let the assumptions of Theorems 5 and 7 be fulfilled, and let where is as in Theorem 8. Then, and

Proof. The proof is analogous to the proof of Theorem 4.4 in [13] and is therefore omitted here.

4. Implementation of the Method

Finally, the balancing algorithm associated with the choice of the parameter specified in Theorem 10 involves the following steps:
(i) choose α_0 > 0 satisfying the conditions required in Theorem 10;
(ii) choose M big enough, but not too large, and set α_i := μ^i α_0, i = 0, 1, …, M, with μ > 1;
(iii) choose the initial guess x_0.

4.1. Algorithm

(1) Set i = 0.
(2) Choose α_i = μ^i α_0.
(3) Solve for the iterate x_i by using the iteration in (19) and (20) with α = α_i and the data y^δ.
(4) If the balancing inequality defining the parameter set in Section 3.3 fails for some j ≤ i, then take α = α_{i−1} and return x_{i−1}.
(5) Else, set i = i + 1 and return to step (2).
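A minimal numerical sketch of this adaptive loop is given below. It is not the author's code: the callable solve_regularized (which should implement the inner iteration (19) and (20) for a fixed α), the constant 4.0 in the balancing test, and the geometric grid α_i = μ^i α_0 are assumptions introduced for illustration.

import numpy as np

def adaptive_parameter_choice(solve_regularized, delta, alpha0, mu, M, c=4.0):
    # solve_regularized(alpha) -> regularized approximation (NumPy array) for parameter alpha,
    # e.g., computed by the iteration (19)-(20); delta is the noise level.
    alphas = [alpha0 * mu**i for i in range(M + 1)]
    accepted = []                                   # iterates accepted so far
    for i, alpha_i in enumerate(alphas):
        x_i = solve_regularized(alpha_i)            # step (3)
        for j, x_j in enumerate(accepted):          # step (4): balancing test against previous iterates
            if np.linalg.norm(x_i - x_j) > c * delta / np.sqrt(alphas[j]):
                return alphas[i - 1], accepted[-1]  # test fails: keep the previous parameter and iterate
        accepted.append(x_i)                        # step (5): accept alpha_i and continue
    return alphas[-1], accepted[-1]

The loop terminates either when the balancing test first fails, returning the last accepted parameter and iterate, or after the grid is exhausted.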

5. Numerical Example

We apply the algorithm by choosing a sequence of finite-dimensional subspaces of X, namely, the linear spans of the linear splines (hat functions) on a uniform grid of points in [0, 1].

We consider the same example of a nonlinear integral operator as in [17, Section 4.3]; the operator F, its kernel, and its Fréchet derivative F' are as defined there.
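The following sketch only illustrates how an operator of this general type, a Fredholm integral operator with a pointwise nonlinearity as in [17, Section 4.3], and its Fréchet derivative can be discretized by quadrature on a uniform grid. The Green's-function-type kernel k(t, s) and the cubic nonlinearity u ↦ u³ used below are assumptions chosen for illustration and need not coincide with the exact example of the paper.

import numpy as np

# Uniform grid on [0, 1] and trapezoidal quadrature weights (assumed discretization).
n = 101
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5

# Assumed kernel k(t, s): the Green's function of -u'' with zero boundary values.
S, T = np.meshgrid(t, t)                     # T[i, j] = t_i, S[i, j] = s_j
K = np.where(S <= T, (1.0 - T) * S, (1.0 - S) * T)

def F(u):
    # F(u)(t_i) ~ sum_j w_j k(t_i, s_j) u(s_j)**3  (assumed cubic nonlinearity)
    return K @ (w * u**3)

def F_prime(u):
    # Matrix of the Fréchet derivative at u: (F'(u)h)(t_i) ~ sum_j w_j k(t_i, s_j) 3 u(s_j)**2 h(s_j)
    return K * (3.0 * w * u**2)

With A0 = F_prime(x0) for a fixed initial guess x0, one step of a frozen-derivative Newton-Tikhonov iteration then amounts to solving the linear system (A0.T @ A0 + alpha * np.eye(n)) applied to the Newton-Tikhonov right-hand side A0.T @ (F(x) - y_delta) + alpha * (x - x0).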

Note that for , where .

Observe that, by a direct computation, Assumption 2 is satisfied for this operator with an explicitly computable constant k_0.

In our computation, we fix the data and the noise level. The exact solution and the initial guess are chosen so that the corresponding source function satisfies the source condition of Assumption 6.

Observe that, while performing the numerical computation on a finite-dimensional subspace of X, one has to consider the composition of F with the orthogonal projection onto that subspace instead of F itself. Thus, the computation incurs an additional discretization error.

Let . For the operator defined in (57), (cf. [18]). Thus, we expect to obtain the rate of convergence .

We choose specific values of the parameters involved. The results of the computation are presented in Table 1. The plots of the exact solution and the approximate solution obtained are given in Figures 1 and 2.

Table 1: Iterations and corresponding error estimates.
Figure 1: Curves of the exact and approximate solutions.
Figure 2: Curves of the exact and approximate solutions.

6. Conclusion

In this paper, we considered a modified Gauss-Newton method for approximately solving the nonlinear ill-posed operator equation F(x) = y, where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. The same method was considered in [13] by the author, but the analysis in [13] was based on a suitably constructed majorizing sequence. In this paper, we analyzed the sequence by considering its even and odd terms separately; this analysis is simpler than that of [13]. We used the adaptive method considered by Pereverzev and Schock in [15] for choosing the regularization parameter. The optimality of this method is proved under a general source condition. Finally, a numerical example involving a nonlinear integral equation illustrates the performance of the method.

References

  1. J. Hadamard, Lectures on Cauchy’s Problem in Linear Partial Differential Equations, Dover Publications, New York, NY, USA, 1952.
  2. H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic, Dordrecht, The Netherlands, 1996.
  3. H. W. Engl, K. Kunisch, and A. Neubauer, “Convergence rates for Tikhonov regularisation of non-linear ill-posed problems,” Inverse Problems, vol. 5, no. 4, pp. 523–540, 1989.
  4. A. B. Bakushinskii, “The problem of the convergence of the iteratively regularized Gauss-Newton method,” Computational Mathematics and Mathematical Physics, vol. 32, no. 9, pp. 1353–1359, 1992.
  5. A. B. Bakushinskii, “Iterative methods without saturation for solving degenerate nonlinear operator equations,” Doklady Akademii Nauk, vol. 344, pp. 7–8, 1995.
  6. B. Blaschke, A. Neubauer, and O. Scherzer, “On convergence rates for the iteratively regularized Gauss-Newton method,” IMA Journal of Numerical Analysis, vol. 17, no. 3, pp. 421–436, 1997.
  7. B. Kaltenbacher, “A note on logarithmic convergence rates for nonlinear Tikhonov regularization,” Journal of Inverse and Ill-Posed Problems, vol. 16, no. 1, pp. 79–88, 2008.
  8. P. A. Mahale and M. T. Nair, “A simplified generalized Gauss-Newton method for nonlinear ill-posed problems,” Mathematics of Computation, vol. 78, no. 265, pp. 171–184, 2009.
  9. T. Hohage, “Logarithmic convergence rates of the iteratively regularized Gauss-Newton method for an inverse potential and an inverse scattering problem,” Inverse Problems, vol. 13, no. 5, pp. 1279–1299, 1997.
  10. T. Hohage, “Regularization of exponentially ill-posed problems,” Numerical Functional Analysis and Optimization, vol. 21, no. 3, pp. 439–464, 2000.
  11. S. Langer and T. Hohage, “Convergence analysis of an inexact iteratively regularized Gauss-Newton method under general source conditions,” Journal of Inverse and Ill-Posed Problems, vol. 15, no. 3, pp. 311–327, 2007.
  12. A. Bakushinsky and A. Smirnova, “On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems,” Numerical Functional Analysis and Optimization, vol. 26, no. 1, pp. 35–48, 2005.
  13. S. George, “On convergence of regularized modified Newton's method for nonlinear ill-posed problems,” Journal of Inverse and Ill-Posed Problems, vol. 18, no. 2, pp. 133–146, 2010.
  14. I. K. Argyros, Convergence and Applications of Newton-Type Iterations, Springer, New York, NY, USA, 2008.
  15. S. Pereverzev and E. Schock, “On the adaptive selection of the parameter in regularization of ill-posed problems,” SIAM Journal on Numerical Analysis, vol. 43, no. 5, pp. 2060–2076, 2005.
  16. S. Lu and S. V. Pereverzev, “Sparsity reconstruction by the standard Tikhonov method,” RICAM-Report 2008-17, 2008.
  17. E. V. Semenova, “Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators,” Computational Methods in Applied Mathematics, vol. 4, no. 4, pp. 444–454, 2010.
  18. C. W. Groetsch, J. T. King, and D. Murio, “Asymptotic analysis of a finite element method for Fredholm equations of the first kind,” in Treatment of Integral Equations by Numerical Methods, C. T. H. Baker and G. F. Miller, Eds., pp. 1–11, Academic Press, London, UK, 1982.