Newton Type Iteration for Tikhonov Regularization of Nonlinear Ill-Posed Problems in Hilbert Scales
Recently, Vasin and George (2013) considered an iterative scheme for approximately solving an ill-posed operator equation F(x) = y. In order to improve the error estimate obtained by Vasin and George (2013), in the present paper we extend their iterative method to the setting of Hilbert scales. The error estimates, obtained under a general source condition on x₀ − x̂ (where x₀ is the initial guess and x̂ is the actual solution) using the adaptive scheme proposed by Pereverzev and Schock (2005), are of optimal order. The algorithm is applied to the numerical solution of an integral equation in the Numerical Example section.
In this study, we are interested in approximately solving a nonlinear ill-posed operator equation F(x) = y, where F : D(F) ⊂ X → Y is a nonlinear operator. Here D(F) is the domain of F, and ⟨·,·⟩ denotes the inner product, with corresponding norm ‖·‖, on the Hilbert spaces X and Y. Throughout this paper, B_r(x) denotes the ball of radius r centered at x, F′(x) denotes the Fréchet derivative of F at x, and F′(x)* denotes the adjoint of F′(x). We assume that y^δ are the available noisy data satisfying ‖y − y^δ‖ ≤ δ, where δ > 0 is the noise level. Equation (1) is, in general, ill-posed, in the sense that a unique solution that depends continuously on the data does not exist. Since the available data is y^δ, one has to solve (approximately) the perturbed equation F(x) = y^δ instead of (1).
To solve ill-posed operator equations, various regularization methods are used, for example, Tikhonov regularization, Landweber iterative regularization, the Levenberg-Marquardt method, Lavrentiev regularization, Newton-type iterative methods, and so forth (see, e.g., [1–16]).
Vasin and George (2013) considered an iteration (a modified form of an earlier method), where x₀ is the initial guess and α > 0 is the regularization parameter. Iteration (4) was used to obtain an approximation for the zero of the regularized equation, and it was proved that the resulting iterate is an approximate solution of (1). The regularization parameter α was chosen appropriately from a finite set, depending on the inexact data y^δ and the noise level δ satisfying (2), using the adaptive parameter selection procedure suggested by Pereverzev and Schock (2005).
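As a concrete illustration of this class of methods, the following sketch implements a frozen-derivative Newton-Tikhonov step in finite dimensions. The update formula is a standard one and is an assumption here; the exact scheme of Vasin and George (2013) may differ in detail.

```python
import numpy as np

def newton_tikhonov(F, A0, x0, y_delta, alpha, n_iter=10):
    """Frozen-derivative Newton-Tikhonov sketch (an assumed standard
    form, not necessarily the exact scheme of Vasin and George (2013)):
        x_{n+1} = x_n - (A0* A0 + alpha I)^{-1}
                        [A0* (F(x_n) - y_delta) + alpha (x_n - x0)],
    where A0 = F'(x0) is kept fixed over all iterations."""
    x = np.asarray(x0, dtype=float).copy()
    # Regularized normal operator, factored once since A0 is frozen
    M = A0.T @ A0 + alpha * np.eye(len(x))
    for _ in range(n_iter):
        rhs = A0.T @ (F(x) - y_delta) + alpha * (x - x0)
        x = x - np.linalg.solve(M, rhs)
    return x
```

For a linear F the fixed point of this update is exactly the Tikhonov-regularized solution (A0*A0 + αI)^{-1}(A0*y^δ + αx0), which makes the sketch easy to sanity-check.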
In order to improve the rate of convergence, many authors have considered the Hilbert scale variant of regularization methods for solving ill-posed operator equations; see, for example, [18–26]. In this study, we present the Hilbert scale variant of (4).
We consider the Hilbert scale (H_t)_{t∈ℝ} (see [14, 18, 23, 26–29]) generated by a strictly positive self-adjoint operator L : D(L) ⊂ X → X, with domain D(L) dense in X, satisfying ‖Lx‖ ≥ ‖x‖ for all x ∈ D(L). Recall [19, 28] that the space H_t is the completion of D := ⋂_{k≥0} D(L^k) with respect to the norm ‖x‖_t induced by the inner product ⟨·,·⟩_t.
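In symbols, this standard construction reads (written out here since the displayed formulas are standard):

```latex
% Hilbert scale (H_t)_{t \in \mathbb{R}} generated by L:
% H_t is the completion of \mathcal{D} := \bigcap_{k \ge 0} D(L^k)
% with respect to the norm \|x\|_t, where
\|x\|_t := \|L^t x\|, \qquad
\langle u, v \rangle_t := \langle L^t u, \, L^t v \rangle .
```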
As in earlier work, we use the following center-type Lipschitz condition for the convergence of the iterative scheme.
Assumption 1. Let r > 0 be fixed. There exists a constant K > 0 such that, for every x, u ∈ B_r(x₀) and v ∈ X, there exists an element Φ(x, u, v) ∈ X satisfying [F′(x) − F′(u)]v = F′(u)Φ(x, u, v) with ‖Φ(x, u, v)‖ ≤ K‖v‖‖x − u‖.
The error estimates in this work are obtained under a source condition on x₀ − x̂. In addition to the advantages listed in [16, page 3], the method considered in this paper gives optimal order for a range of smoothness assumptions on x₀ − x̂. The regularization parameter α is chosen from some finite set using the balancing principle of Pereverzev and Schock (2005).
The paper is organized as follows. In Section 2, we give the analysis of the method for regularization of (6) in the setting of Hilbert scales. The error analysis and the adaptive scheme for choosing the parameter α are given in Section 3. In Section 4, the implementation of the method along with a numerical example is presented to validate the efficiency of the proposed method, and we conclude the paper in Section 5.
2. The Method
We make use of a relation which follows from the spectral properties of the positive self-adjoint operator L. Usually, for the analysis of regularization methods in Hilbert scales, an assumption of the form (10) on the degree of ill-posedness is used (cf. [18, 24]). In this paper, instead of (10), we require only the weaker assumption (11), for some positive reals a and b.
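The spectral relation invoked here rests on the standard interpolation inequality in Hilbert scales, ‖x‖_s ≤ ‖x‖_r^θ ‖x‖_t^{1−θ} with θ = (t − s)/(t − r) for r ≤ s ≤ t. A finite-dimensional numerical check, with a diagonal surrogate for L (the spectrum below is an arbitrary choice for illustration):

```python
import numpy as np

# Check the interpolation inequality
#   ||x||_s <= ||x||_r^theta * ||x||_t^(1-theta),  theta = (t-s)/(t-r),
# for the scale norms ||x||_t = ||L^t x|| with a diagonal L > 0.
rng = np.random.default_rng(0)
lam = rng.uniform(0.5, 5.0, size=20)   # spectrum of a strictly positive L

def scale_norm(x, t):
    # ||x||_t = ||L^t x||; for diagonal L this is an elementwise power
    return np.linalg.norm(lam ** t * x)

r, s, t = -1.0, 0.5, 2.0
theta = (t - s) / (t - r)
max_ratio = 0.0
for _ in range(200):
    x = rng.standard_normal(20)
    ratio = scale_norm(x, s) / (scale_norm(x, r) ** theta
                                * scale_norm(x, t) ** (1 - theta))
    max_ratio = max(max_ratio, ratio)
```

The inequality is exactly Hölder's inequality applied to the spectral decomposition, so the ratio never exceeds 1 (and equals 1 on eigenvectors of L).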
Proposition 2 (cf. [29, Proposition 2.1]). For and ,
We now define a few parameters, essential for the convergence analysis, through the displayed relations of this section.
Lemma 3. Let Proposition 2 hold. Then for all , the following hold: (a), (b).
Proof. Observe that, by Proposition 2,
This completes the proof of the lemma.
Proof. If , then by Assumption 1,
and , and hence by Assumption 1 and Lemma 3(a), we have
and by Lemma 3(b),
Hence, by (19), (21), and (22), we have
Next we show the second estimate, using Assumption 1 and Lemma 3. Hence, (a) follows from (23) and (24).
To prove (b), we argue by induction: the estimate holds for the initial index, and if it holds for some n, then it holds for n + 1. Thus, by induction, it holds for all n. This proves (b).
We now state the main result of this section.
The following assumption on the source function and the source condition is required to obtain the error estimates.
Assumption 6. There exists a continuous, strictly monotonically increasing function such that the following conditions hold:(i),(ii) for all , and(iii)there exists with , such that
Remark 7. If , for example, , for some positive constant and , then, we have , where , , and .
Proof. Let . Then
Since , one can see that
Note that by Assumption 1 and Lemma 3,
by Proposition 2,
and by Assumption 6,
Hence, by (34)–(36) and (32), the claimed estimate follows. This completes the proof of the theorem.
2.1. Error Bounds under Source Conditions
2.2. A Priori Choice of the Parameter
The error estimate in Theorem 9 attains its minimum for the choice of α which satisfies the stated balance relation, where
Thus, we have the following theorem.
2.3. Adaptive Scheme and Stopping Rule
In this subsection, we consider the adaptive scheme suggested by Pereverzev and Schock (2005), suitably modified for choosing the parameter α; the scheme does not involve the regularization method in an explicit manner.
3. Implementation of the Method
Finally, the balancing algorithm associated with the choice of the parameter specified in Theorem 11 involves the following steps:
(i) choose the starting value α₀ so that the required conditions hold;
(ii) choose N big enough but not too large, together with the grid values α_i, i = 1, 2, …, N;
(iii) choose the parameter as specified in Theorem 11.
Algorithm 1.
(1) Set i = 0.
(2) Choose α_i.
(3) Solve for the iterate by using the iteration (6).
(4) If the stopping criterion of Theorem 11 is met, then take the current index.
(5) Else set i = i + 1 and return to Step (2).
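The adaptive steps above can be sketched as follows. This is a simplified form of the Pereverzev-Schock balancing principle: the comparison bound c·δ/√α_j used below is an assumption standing in for the paper's exact criterion, and `solve(alpha)` stands for running iteration (6) to convergence at a fixed α.

```python
import numpy as np

def balancing_choice(solve, alphas, delta, c=4.0):
    """Simplified Pereverzev-Schock balancing principle (2005).
    Assumes (in place of the paper's exact criterion) the bound
        ||x_i - x_j|| <= c * delta / sqrt(alpha_j)  for all j < i,
    and returns the largest alpha_k for which all comparisons hold.
    `alphas` must be listed in increasing order."""
    xs = [solve(a) for a in alphas]
    k = 0
    for i in range(1, len(alphas)):
        ok = all(np.linalg.norm(xs[i] - xs[j]) <= c * delta / np.sqrt(alphas[j])
                 for j in range(i))
        if not ok:
            break          # first violation: stop, keep the last good index
        k = i
    return alphas[k], xs[k]
```

On a diagonal linear test problem with Tikhonov solutions, this choice lands near the error-balancing α without using any knowledge of the solution's smoothness, which is the point of the adaptive scheme.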
4. Numerical Example
Example 1. In this example, we consider a nonlinear integral operator F defined by
The Fréchet derivative of F is given by
In our computation, we take the data y and y^δ = y + δe, where e is a random vector with ‖e‖ = 1 and δ is a constant noise level. Then the exact solution
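Since the displayed operator is not reproduced here, the following sketch discretizes a generic nonlinear integral operator of Hammerstein type, F(u)(t) = ∫₀¹ k(t, s) u(s)³ ds, by the trapezoidal rule; the Green's-function kernel and the cubic nonlinearity are assumptions for illustration and may differ from the paper's example.

```python
import numpy as np

# Trapezoidal discretization of F(u)(t) = \int_0^1 k(t, s) u(s)^3 ds
# (kernel and nonlinearity assumed for illustration).
n = 64
s = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] *= 0.5
w[-1] *= 0.5                                   # trapezoid quadrature weights
K = np.minimum.outer(s, s) * (1.0 - np.maximum.outer(s, s))  # Green's kernel

def F(u):
    # F(u)(t_i) ~ sum_j k(t_i, s_j) u(s_j)^3 w_j
    return K @ (w * u ** 3)

def F_prime(u):
    # Frechet derivative: (F'(u)h)(t) = \int_0^1 k(t, s) 3 u(s)^2 h(s) ds;
    # broadcasting over columns builds the Jacobian matrix
    return K * (3.0 * w * u ** 2)
```

A finite-difference check of F′ against F confirms that the Jacobian is the correct linearization, which is the property the Newton-type iteration relies on.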
We choose the initial guess x₀ so that the function x₀ − x̂ satisfies the source condition (see [20, Proposition 5.3]). Thus, we expect to have an accuracy of at least the order guaranteed by the source condition.
As in earlier work, we use a matrix as a discrete approximation of the first-order differential operator (50).
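One common such discretization is the scaled forward-difference matrix below; this is a sketch, as the exact matrix (50) used in the paper is not reproduced here.

```python
import numpy as np

def first_difference_matrix(n, h):
    """A common discrete first-order differential operator (an assumed
    form; the exact matrix (50) of the paper may differ):
        (L u)_i = (u_i - u_{i-1}) / h,  with the convention u_0 = 0."""
    return (np.eye(n) - np.eye(n, k=-1)) / h
```

This L is lower bidiagonal with nonzero diagonal, hence invertible, so LᵀL is strictly positive definite, matching the strict positivity the Hilbert-scale framework requires of the generating operator.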
The parameter values used in the computation and the corresponding results are presented in Tables 1 and 2. The plots of the exact and the approximate solutions obtained are given in Figures 1 and 2.
The last column of Tables 1 and 2 shows that the error is of the expected order.
In this paper, we presented an iterative regularization method for obtaining an approximate solution of a nonlinear ill-posed operator equation F(x) = y in the Hilbert scale setting. Here F : D(F) ⊂ X → Y is a nonlinear operator, and we assume that the available data is y^δ in place of the exact data y. The convergence analysis was based on a center-type Lipschitz condition. We considered a Hilbert scale (H_t)_{t∈ℝ} generated by L, where L is a linear, unbounded, self-adjoint, densely defined, and strictly positive operator on X. For choosing the regularization parameter α, the adaptive scheme considered by Pereverzev and Schock (2005) was used. Finally, a numerical example was presented in support of our method, which is found to be efficient.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Ms. Monnanda Erappa Shobha thanks NBHM, DAE, Government of India, for the financial support.
I. K. Argyros, Y. J. Cho, and S. Hilout, Numerical Methods for Equations and its Applications, CRC Press, Taylor and Francis, New York, NY, USA, 2012.
A. B. Bakushinsky and M. Y. Kokurin, Iterative Methods for Approximate Solution of Inverse Problems, Springer, Dordrecht, The Netherlands, 2004.
H. W. Engl, K. Kunisch, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.
B. Kaltenbacher, A. Neubauer, and O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems, de Gruyter, Berlin, Germany, 2008.
C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, Pa, USA, 1995.