Abstract
A two step modified Newton method is considered for obtaining an approximate solution of the nonlinear ill-posed equation $F(x) = y$ when the available data are $y^\delta$ with $\|y - y^\delta\| \le \delta$ and the operator $F$ is monotone. The derived error estimate, under a general source condition on $x_0 - \hat{x}$, is of optimal order; here $x_0$ is the initial guess and $\hat{x}$ is the actual solution. The regularization parameter is chosen according to the adaptive method considered by Pereverzev and Schock (2005). The computational results provided endorse the reliability and effectiveness of our method.
1. Introduction
Recently George and Elmahdy [1] studied iterative methods for solving the ill-posed operator equation $F(x) = y$ (1.1), where $F : D(F) \subseteq X \to X$ is a nonlinear monotone operator (i.e., $\langle F(u) - F(v), u - v \rangle \ge 0$ for all $u, v \in D(F)$) and $X$ is a real Hilbert space. The convergence analysis in [1] is based on suitably constructed majorizing sequences. Recall that a nondecreasing sequence $\{t_n\}$ in $\mathbb{R}$ is said to be a majorizing sequence of a sequence $\{x_n\}$ in $X$ if $\|x_{n+1} - x_n\| \le t_{n+1} - t_n$ for all $n \ge 0$.
Throughout this paper, the inner product and the corresponding norm on the Hilbert space $X$ are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively; $D(F)$ is the domain of $F$ and $F'(\cdot)$ is the Fréchet derivative of $F$.
In applications, usually only noisy data $y^\delta$ are available, such that $\|y - y^\delta\| \le \delta$. Then the problem of recovering $\hat{x}$ from the noisy equation $F(x) = y^\delta$ is ill posed, in the sense that a small perturbation in the data can cause a large deviation in the solution.
For monotone operators, one usually uses the Lavrentiev regularization method (see [2–5]) for solving (1.1). In this method, the regularized approximation $x_\alpha^\delta$ is obtained by solving the operator equation $F(x) + \alpha(x - x_0) = y^\delta$ (1.4). It is known (cf. [5], Theorem 1.1) that (1.4) has a unique solution $x_\alpha^\delta$ for any $\alpha > 0$, provided that $F$ is monotone and hemicontinuous.
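As a concrete illustration of (1.4), the sketch below solves the regularized equation by Newton's method for a hypothetical scalar monotone operator (not the operator treated later in Section 5). Since $F$ is monotone, the derivative of the regularized map is bounded below by $\alpha > 0$, so every Newton step is well defined.

```python
def lavrentiev_solve(F, dF, x0, y_delta, alpha, tol=1e-12, max_iter=100):
    """Solve F(x) + alpha*(x - x0) = y_delta (equation (1.4)) by Newton's
    method on the real line, starting from the initial guess x0."""
    x = x0
    for _ in range(max_iter):
        residual = F(x) + alpha * (x - x0) - y_delta
        slope = dF(x) + alpha          # >= alpha > 0 because F is monotone
        step = residual / slope
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical monotone operator F(x) = x**3 with exact solution x_hat = 2.
F, dF = (lambda x: x**3), (lambda x: 3 * x**2)
y_delta = F(2.0) + 0.01                # noisy data, delta = 0.01
x_reg = lavrentiev_solve(F, dF, x0=1.0, y_delta=y_delta, alpha=0.05)
```

For small $\delta$ and a correspondingly small $\alpha$, the computed `x_reg` lies close to the exact solution, illustrating why (1.4) is a stable substitute for the ill-posed equation.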
The optimality of the Lavrentiev method was proved in [5] under a general source condition on $x_0 - \hat{x}$. However, the main drawback here is that the regularized equation (1.4) remains nonlinear, and one may have difficulties in solving it numerically.
Thus in the last few years, more emphasis has been put on the investigation of iterative regularization methods (see [6–12]). In this paper, we consider a modified form of the method studied in [1], but we analyse it as a two step modified Newton Lavrentiev method (TSMNLM). The proposed analysis is motivated by the two step directional Newton method (TSDNM) considered in [13], by Argyros and Hilout, for approximating a zero of a differentiable function defined on a convex subset of a Hilbert space with values in $\mathbb{R}$. The TSMNLM for approximating the zero $x_\alpha^\delta$ of $F(x) + \alpha(x - x_0) - y^\delta = 0$ is defined by the two coupled iterations (1.5) and (1.6). Here the regularization parameter $\alpha$ is chosen from a finite set of positive parameters.
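Since the displays (1.5) and (1.6) are not reproduced above, the sketch below assumes the form such two step methods commonly take, in which both half-steps reuse the frozen linearization $R_\alpha(x_0) = F'(x_0) + \alpha I$; the paper's displays remain the authoritative formulas.

```python
import numpy as np

def tsmnlm(F, dF, x0, y_delta, alpha, n_steps=20):
    """Two step modified Newton Lavrentiev sketch: both half-steps reuse
    the frozen operator R = F'(x0) + alpha*I (an assumed form of
    (1.5)-(1.6), factored only once)."""
    R = dF(x0) + alpha * np.eye(x0.size)
    solve = lambda rhs: np.linalg.solve(R, rhs)
    x = x0.copy()
    for _ in range(n_steps):
        y = x - solve(F(x) + alpha * (x - x0) - y_delta)   # first step
        x = y - solve(F(y) + alpha * (y - x0) - y_delta)   # second step
    return x

# Hypothetical monotone operator on R^2: F(x) = x + x**3 (componentwise).
F = lambda x: x + x**3
dF = lambda x: np.diag(1 + 3 * x**2)
x_hat = np.array([0.5, -0.3])
y_delta = F(x_hat) + 1e-3              # noise level delta ~ 1e-3
x_alpha = tsmnlm(F, dF, x0=np.zeros(2), y_delta=y_delta, alpha=1e-2)
```

Freezing the linearization at $x_0$ means only one matrix factorization is needed for the whole iteration, which is the practical appeal of the modified (as opposed to full) Newton scheme.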
The organization of this paper is as follows. The convergence analysis of the TSMNLM is given in Section 2; error bounds under an a priori and under an adaptive choice of the regularization parameter are given in Section 3; and Section 4 deals with the implementation of the adaptive choice rule. An example and the results of computational experiments are given in Section 5. Finally, the paper ends with a conclusion in Section 6.
2. Convergence Analysis for TSMNLM
We need the following assumptions for the convergence analysis of TSMNLM.
Assumption 2.1 (see [4]). $F$ possesses a locally uniformly bounded Fréchet derivative $F'(x)$ at all $x$ in the domain $D(F)$.
Assumption 2.2 (cf. [4], Assumption 3). There exists a constant $k_0 > 0$ such that for every $x, y \in D(F)$ and $v \in X$, there exists an element $\Phi(x, y, v) \in X$ satisfying $[F'(x) - F'(y)]v = F'(y)\Phi(x, y, v)$ and $\|\Phi(x, y, v)\| \le k_0 \|v\| \|x - y\|$.
Let . Hereafter, for convenience, we use the notations , and for , and , respectively.
Remark 2.3. Note that if , then by Assumption 2.2 it follows that
Let Then , if
Theorem 2.4. Let q and r be as in (2.5) and (2.6), respectively, and let be as in (2.3). Let be as in (2.2) and let and be as in (1.5) and (1.6), respectively, with and . Then(a); (b); (c); (d).
Proof. Observe that if , then by Assumption 2.2 we have and hence Again observe that if , by Assumption 2.2 and (2.8), we have and hence Thus if , then (a) and (b) follow from (2.8) and (2.10), respectively. Now using induction we will prove that . Note that and hence by (2.8) that is, ; again by (2.10) that is, . Suppose for some . Then since we will first find an estimate for . Note that by (a) and (b) we have Therefore by (2.13) we have So by induction for all . Again by (a), (b), and (2.15) we have Thus and hence by induction for all . This completes the proof of the theorem.
The main result of this section is the following theorem.
Theorem 2.5. Let the iterates be as in (1.5) and (1.6), and let the assumptions of Theorem 2.4 hold. Then $\{x_n\}$ is a Cauchy sequence in $D(F)$ and converges to the solution $x_\alpha^\delta$ of (1.4).
Proof. Using relations (b) and (c) of Theorem 2.4, we obtain Thus $\{x_n\}$ is a Cauchy sequence in $D(F)$ and hence it converges, say, to $x_\alpha^\delta$. Observe that Now by letting $n \to \infty$ in (2.19), we obtain $F(x_\alpha^\delta) + \alpha(x_\alpha^\delta - x_0) = y^\delta$. This completes the proof.
3. Error Bounds under Source Conditions
The objective of this section is to obtain an error estimate for $\|x_n - \hat{x}\|$ under the following assumption on $x_0 - \hat{x}$.
Assumption 3.1 (see [4]). There exists a continuous, strictly monotonically increasing function $\varphi : (0, a] \to (0, \infty)$ with $a \ge \|F'(\hat{x})\|$ satisfying $\lim_{\lambda \to 0} \varphi(\lambda) = 0$ and an element $v \in X$ with $\|v\| \le 1$ such that $x_0 - \hat{x} = \varphi(F'(\hat{x}))\,v$.
Remark 3.2. It can be seen that the functions $\varphi(\lambda) = \lambda^{\nu}$ for $0 < \nu \le 1$ and $\varphi(\lambda) = \left(\ln\frac{1}{\lambda}\right)^{-p}$ for $p > 0$ satisfy Assumption 3.1 (see [14]).
We will be using the error estimates in the following proposition, which can be found in [5], for our error analysis.
Proposition 3.3 (cf. [5], Proposition 3.1). Let $\hat{x}$ be a solution of (1.1) and let $F$ be a monotone operator in $D(F)$. Let $x_\alpha$ be the unique solution of (1.4) with $y$ in place of $y^\delta$, and let $x_\alpha^\delta$ be the unique solution of (1.4). Then $\|x_\alpha^\delta - x_\alpha\| \le \delta/\alpha$.
The following theorem can be found in [1].
Theorem 3.4 (see [1], Theorem 3.1). Let $x_\alpha$ be the unique solution of (1.4) with $y$ in place of $y^\delta$. Let the assumptions in Proposition 3.3 and Assumptions 2.1, 2.2, and 3.1 be satisfied. Then
Combining the estimates in Proposition 3.3 and Theorems 2.5 and 3.4, we obtain the following.
Theorem 3.5. Let be as in (1.6) and let the assumptions in Proposition 3.3 and Theorems 2.5 and 3.4 be satisfied. Then
Let and let
Theorem 3.6. Let be the unique solution of (1.4) and let be as in (1.6). Let the assumptions in Theorem 3.5 be satisfied. Let be as in (3.7) and let be as in (3.8). Then
3.1. A Priori Choice of the Parameter
Note that the error estimate in (3.9) is of optimal order if $\alpha_\delta$ satisfies $\varphi(\alpha_\delta)\,\alpha_\delta = \delta$.
Now using the function $\psi(\lambda) := \lambda\,\varphi^{-1}(\lambda)$, we have $\delta = \psi(\varphi(\alpha_\delta))$, so that $\alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$. In view of the above observations and (3.9), we have the following.
Theorem 3.7. Let $\varphi(\lambda) = \lambda^{\nu}$ for $0 < \nu \le 1$, and let the assumptions in Theorem 3.6 hold. For $\delta > 0$, let $\alpha_\delta = \delta^{1/(\nu + 1)}$. Let be as in (3.8). Then
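For the Hölder-type index function $\varphi(\lambda) = \lambda^{\nu}$ the a priori choice has an explicit form. The following balancing calculation is the standard one, under the assumption (consistent with Proposition 3.3) that the noise-propagation term in the error bound is $\delta/\alpha$:

```latex
% Balancing \varphi(\alpha) = \alpha^{\nu} against the noise term \delta/\alpha:
\alpha_\delta^{\nu} = \frac{\delta}{\alpha_\delta}
\;\Longrightarrow\;
\alpha_\delta = \delta^{\frac{1}{\nu + 1}},
\qquad
\varphi(\alpha_\delta) = \delta^{\frac{\nu}{\nu + 1}} .
```

Thus the resulting a priori rate is $O\bigl(\delta^{\nu/(\nu+1)}\bigr)$, which degrades gracefully as the smoothness index $\nu$ decreases.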
3.2. An Adaptive Choice of the Parameter
In this subsection, we will present a parameter choice rule based on the adaptive method studied in [3, 9]. Let where , , and let Then for , we have
Let . In this paper, we select from for computing for each .
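In the form the balancing principle of [3, 9] usually takes for Lavrentiev regularization (the geometric grid and the constant 4 are the customary choices, not necessarily the paper's exact display), the selected index is

```latex
k := \max\left\{ i \le N \;:\;
      \left\| x_{\alpha_i}^{\delta} - x_{\alpha_j}^{\delta} \right\|
      \le \frac{4\delta}{\alpha_j}, \;\; j = 0, 1, \ldots, i \right\},
\qquad
\alpha_i = \mu^{i} \alpha_0, \quad \mu > 1 .
```

The point of the rule is that it requires no knowledge of the index function $\varphi$, yet (as Theorem 3.8 asserts) still yields the order-optimal rate up to a constant.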
Theorem 3.8 (cf. [9], Theorem 4.3). Assume that there exists such that . Let the assumptions of Theorems 3.6 and 3.7 hold and let Then and where .
4. Implementation of Adaptive Choice Rule
The following steps are involved in implementing the adaptive choice rule. (i) Choose $\alpha_0 > 0$ such that and . (ii) Choose $\alpha_i = \mu^{i} \alpha_0$, $i = 1, \ldots, N$, with $\mu > 1$.
Finally the adaptive algorithm associated with the choice of the parameter specified in Theorem 3.8 involves the following steps.
4.1. Algorithm
Step 1. Set .
Step 2. Choose .
Step 3. Solve by using the iteration (1.5) and (1.6).
Step 4. If , then take and return .
Step 5. Else set and go to Step 2.
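Steps 1–5 can be sketched as follows. Here `solve_for_alpha` stands for the inner TSMNLM iteration of Step 3, and the drift test $\|x_i - x_j\| \le 4\delta/\alpha_j$ is the customary form of the comparison in Step 4 (an assumption, since the paper's display is not reproduced above).

```python
def adapt_alpha(solve_for_alpha, alphas, delta, norm=abs):
    """Adaptive choice: increase i until the new approximation drifts
    more than 4*delta/alpha_j away from some earlier one (Steps 1-5);
    return the last accepted pair (alpha_k, x_k)."""
    kept = []                                   # accepted (alpha_j, x_j)
    for alpha_i in alphas:                      # Steps 1 and 5: i = 0, 1, ...
        x_i = solve_for_alpha(alpha_i)          # Steps 2 and 3
        if any(norm(x_i - x_j) > 4 * delta / a_j for a_j, x_j in kept):
            break                               # Step 4: stop, keep previous
        kept.append((alpha_i, x_i))
    return kept[-1]

# Toy use: scalar Lavrentiev regularization of F(x) = x**3, x_hat = 2,
# solved by Newton's method for each alpha on a geometric grid.
def solve_for_alpha(alpha, x0=1.0, y_delta=8.01):
    x = x0
    for _ in range(100):
        x -= (x**3 + alpha * (x - x0) - y_delta) / (3 * x**2 + alpha)
    return x

alphas = [0.01 * (1.5 ** i) for i in range(15)]   # mu = 1.5, alpha_0 = 0.01
alpha_k, x_k = adapt_alpha(solve_for_alpha, alphas, delta=0.01)
```

For vector-valued problems one would pass `np.linalg.norm` as `norm`; the selection logic itself is independent of the inner solver.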
5. Numerical Example
In this section, we consider the example considered in [4] for illustrating the algorithm of Section 4.1. We apply the algorithm by choosing a sequence of finite dimensional subspaces of with . Precisely, we choose as the linear span of , where are the linear splines on a uniform grid of points in .
Example 5.1 (see [4], Section ). Let be defined by
where
Then for all ,
Thus the operator is monotone. The Fréchet derivative of is given by
Note that for ,
where .
Observe that
So Assumption 2.2 is satisfied with .
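Since the formulas of Example 5.1 are not reproduced above, the sketch below uses a hypothetical stand-in of the same flavour, $F(u)(t) = u(t)^3 + \int_0^1 k(t,s)\,u(s)\,ds$ with the Green's-type kernel $k(t,s) = (1-t)s$ for $s \le t$ and $(1-s)t$ otherwise, and checks the monotonicity inequality $\langle F(u) - F(v), u - v \rangle \ge 0$ numerically on random pairs. (The cubic part is monotone pointwise, and this kernel is positive semidefinite, so the sum is monotone.)

```python
import numpy as np

def make_operator(n=64):
    """Hypothetical monotone operator on L^2(0,1):
    F(u)(t) = u(t)**3 + int_0^1 k(t,s) u(s) ds, discretized by the
    midpoint rule; k(t,s) = (1-t)s for s <= t, (1-s)t for s > t."""
    t = (np.arange(n) + 0.5) / n                     # midpoint grid
    T, S = np.meshgrid(t, t, indexing="ij")
    K = np.where(S <= T, (1 - T) * S, (1 - S) * T)   # symmetric PSD kernel
    F = lambda u: u**3 + (K @ u) / n                 # quadrature weight 1/n
    return t, F

t, F = make_operator()
rng = np.random.default_rng(0)
for _ in range(5):
    u, v = rng.standard_normal(64), rng.standard_normal(64)
    # monotonicity in the discrete L^2 inner product (up to rounding)
    assert np.dot(F(u) - F(v), u - v) / 64 >= -1e-9
```

The same discretization pattern (grid, kernel matrix, quadrature weights) is what one would feed to the TSMNLM and the adaptive rule in an actual computation.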
In our computation, we take and . Then the exact solutions
We use
as our initial guess, so that the function satisfies the source condition:
where . Thus we expect to obtain the rate of convergence .
Observe that while performing numerical computations on a finite dimensional subspace of , one has to consider the operator instead of , where is the orthogonal projection onto . Thus one incurs an additional error .
Let . For the operator defined in (5.4), (cf. [15]). Thus we expect to obtain the rate of convergence .
We choose , and . The results of the computation (four decimal places) are presented in Table 1. The plots of the exact solution and the approximate solution obtained are given in Figures 1 and 2.
The last column of Table 1 shows that the error is of the expected order.
6. Conclusion
We considered a two step method for obtaining an approximate solution of a nonlinear ill-posed operator equation $F(x) = y$, where $F$ is a monotone operator defined on a real Hilbert space, when the available data are $y^\delta$ with $\|y - y^\delta\| \le \delta$. The proposed method converges to a solution of the normal equation and hence is an approximation for the solution $\hat{x}$, for a properly chosen parameter $\alpha$. We obtained an optimal order error estimate by choosing the regularization parameter according to the adaptive method considered by Pereverzev and Schock [3]. The computational results provided endorse the reliability and effectiveness of our method.
Acknowledgment
S. Pareth thanks the National Institute of Technology Karnataka, Surathkal for the financial support.