ISRN Applied Mathematics, Volume 2012, Article ID 728627, 14 pages. http://dx.doi.org/10.5402/2012/728627
Research Article

## Two-Step Modified Newton Method for Nonlinear Lavrentiev Regularization

Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Surathkal 575025, India

Received 15 December 2011; Accepted 1 February 2012

Copyright © 2012 Santhosh George and Suresan Pareth. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A two-step modified Newton method is considered for obtaining an approximate solution of the nonlinear ill-posed equation $F(x)=f$ when the available data are $f^\delta$ with $\|f-f^\delta\|\le\delta$ and the operator $F$ is monotone. The derived error estimate under a general source condition on $x_0-\hat{x}$ is of optimal order; here $x_0$ is the initial guess and $\hat{x}$ is the actual solution. The regularization parameter is chosen according to the adaptive method considered by Pereverzev and Schock (2005). The computational results provided endorse the reliability and effectiveness of our method.

#### 1. Introduction

Recently George and Elmahdy [1] studied iterative methods for solving the ill-posed operator equation
$$F(x)=f, \tag{1.1}$$
where $F:D(F)\subseteq X\to X$ is a nonlinear monotone operator (i.e., $\langle F(x)-F(y),x-y\rangle\ge 0$ for all $x,y\in D(F)$) and $X$ is a real Hilbert space. The convergence analysis in [1] is based on suitably constructed majorizing sequences. Recall that a sequence $(t_n)$ of nonnegative real numbers is said to be a majorizing sequence of a sequence $(x_n)$ in $X$ if $\|x_{n+1}-x_n\|\le t_{n+1}-t_n$ for all $n\ge 0$.
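For instance (our illustration, not from [1]): if the steps of $(x_n)$ decay geometrically, an explicit majorizing sequence is available:

```latex
\|x_{n+1}-x_n\| \le r\,q^{n}\quad (0<q<1,\ r>0)
\quad\Longrightarrow\quad
t_n := r\,\frac{1-q^{n}}{1-q}\ \text{majorizes}\ (x_n),
```

since $t_{n+1}-t_n=r\,q^{n}\ge\|x_{n+1}-x_n\|$ and $t_n\uparrow r/(1-q)$; in particular $(x_n)$ is then a Cauchy sequence and stays in the closed ball of radius $r/(1-q)$ about $x_0$.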

Throughout this paper, the inner product and the corresponding norm on the Hilbert space $X$ are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively; $D(F)$ is the domain of $F$ and $F'(\cdot)$ is the Fréchet derivative of $F$.

In applications, usually only noisy data $f^\delta$ are available, such that $\|f-f^\delta\|\le\delta$. Then the problem of recovery of $\hat{x}$ from the noisy equation $F(x)=f^\delta$ is ill posed, in the sense that a small perturbation in the data can cause a large deviation in the solution.

For monotone operators, one usually uses the Lavrentiev regularization method (see [2–5]) for solving (1.1). In this method, the regularized approximation $x_\alpha^\delta$ is obtained by solving the operator equation
$$F(x)+\alpha(x-x_0)=f^\delta. \tag{1.4}$$
It is known (cf. [5], Theorem 1.1) that (1.4) has a unique solution $x_\alpha^\delta$ for any $\alpha>0$, provided that $F$ is Fréchet differentiable and monotone in the ball $B_r(\hat{x})\subseteq D(F)$ with radius $r=\|\hat{x}-x_0\|+\delta/\alpha$.

The optimality of the Lavrentiev method was proved in [5] under a general source condition on $x_0-\hat{x}$. However, the main drawback here is that the regularized equation (1.4) remains nonlinear, and one may have difficulties in solving it numerically.

Thus in the last few years, more emphasis has been put on the investigation of iterative regularization methods (see [6–12]). In this paper, we consider a modified form of the method studied in [1], but we analyse it as a two-step modified Newton Lavrentiev method (TSMNLM). The proposed analysis is motivated by the two-step directional Newton method (TSDNM) considered in [13] by Argyros and Hilout for approximating a zero of a differentiable operator defined on a convex subset of a Hilbert space with values in $\mathbb{R}$. The TSMNLM for approximating the zero $x_\alpha^\delta$ of $F(x)+\alpha(x-x_0)=f^\delta$ is defined as
$$y_n^\delta = x_n^\delta - R_\alpha^{-1}\big[F(x_n^\delta)-f^\delta+\alpha(x_n^\delta-x_0)\big], \tag{1.5}$$
$$x_{n+1}^\delta = y_n^\delta - R_\alpha^{-1}\big[F(y_n^\delta)-f^\delta+\alpha(y_n^\delta-x_0)\big], \tag{1.6}$$
where $R_\alpha:=F'(x_0)+\alpha I$ and $x_0^\delta:=x_0$. Here the regularization parameter $\alpha$ is chosen from the finite set $D_N:=\{\alpha_i:0<\alpha_0<\alpha_1<\cdots<\alpha_N\}$.
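As a concrete illustration of the two steps (1.5) and (1.6), the following sketch runs the iteration on a toy one-dimensional monotone problem; the operator $F(x)=x+x^3$, the data, and all parameter values are our own illustrative choices, not the paper's.

```python
# Toy illustration of the two-step iteration (1.5)-(1.6) in one dimension.
# F(x) = x + x^3 is monotone since F'(x) = 1 + 3x^2 > 0.

def F(x):
    return x + x**3

def Fprime(x):
    return 1.0 + 3.0 * x**2

x_hat = 0.5                    # exact solution of F(x) = f
delta = 0.005                  # noise level
f_delta = F(x_hat) + delta     # noisy data f^delta
alpha = 0.01                   # regularization parameter
x0 = 0.0                       # initial guess

R = Fprime(x0) + alpha         # frozen operator R_alpha = F'(x0) + alpha

x = x0
for _ in range(50):
    # step (1.5): Newton-type step with the frozen linearization R
    y = x - (F(x) - f_delta + alpha * (x - x0)) / R
    # step (1.6): repeat the same step from the intermediate point y
    x = y - (F(y) - f_delta + alpha * (y - x0)) / R

# x now approximates the regularized solution of F(x) + alpha*(x - x0) = f_delta
residual = F(x) + alpha * (x - x0) - f_delta
```

Note that $R_\alpha$ is assembled once at $x_0$ and reused in both steps, so each iteration costs two residual evaluations but only one linearization; this is the "modified" aspect of the method.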

The organization of this paper is as follows. Convergence analysis of the TSMNLM is given in Section 2; error bounds under an a priori and under an adaptive choice of the regularization parameter are given in Section 3; and Section 4 deals with the implementation of the adaptive choice rule. An example and the results of computational experiments are given in Section 5. Finally, the paper ends with a conclusion in Section 6.

#### 2. Convergence Analysis for TSMNLM

We need the following assumptions for the convergence analysis of TSMNLM.

Assumption 2.1 (see [4]). $F$ possesses a locally uniformly bounded Fréchet derivative $F'(x)$ at all $x$ in the domain $D(F)$.

Assumption 2.2 (cf. [4], Assumption 3). There exists a constant $k_0>0$ such that for every $x,u\in D(F)$ and $v\in X$, there exists an element $\Phi(x,u,v)\in X$ satisfying
$$[F'(x)-F'(u)]v = F'(u)\Phi(x,u,v), \qquad \|\Phi(x,u,v)\|\le k_0\|v\|\,\|x-u\|.$$

Hereafter, for convenience, we write $x_n$, $y_n$, and $R$ for $x_n^\delta$, $y_n^\delta$, and $R_\alpha$, respectively.

Remark 2.3. Note that if , then by Assumption 2.2 it follows that

Let Then , if

Theorem 2.4. Let q and r be as in (2.5) and (2.6), respectively, and let be as in (2.3). Let be as in (2.2) and let and be as in (1.5) and (1.6), respectively, with and . Then: (a) ; (b) ; (c) ; (d) .

Proof. Observe that if , then by Assumption 2.2 we have and hence Again observe that if , by Assumption 2.2 and (2.8), we have and hence Thus if , then (a) and (b) follow from (2.8) and (2.10), respectively. Now using induction we will prove that . Note that and hence by (2.8) that is, ; again by (2.10) that is, . Suppose for some . Then since we will first find an estimate for . Note that by (a) and (b) we have Therefore by (2.13) we have So by induction for all . Again by (a), (b), and (2.15) we have Thus and hence by induction for all . This completes the proof of the theorem.

The main result of this section is the following theorem.

Theorem 2.5. Let $y_n$ and $x_n$ be as in (1.5) and (1.6), respectively, and let the assumptions of Theorem 2.4 hold. Then $(x_n)$ is a Cauchy sequence and converges to the solution $x_\alpha^\delta$ of (1.4). Further, $F(x_\alpha^\delta)+\alpha(x_\alpha^\delta-x_0)=f^\delta$.

Proof. Using the relations (b) and (c) of Theorem 2.4, we obtain that $(x_n)$ is a Cauchy sequence and hence it converges, say, to $x_\alpha^\delta$. Now by letting $n\to\infty$ in (2.19), we obtain $F(x_\alpha^\delta)+\alpha(x_\alpha^\delta-x_0)=f^\delta$. This completes the proof.

#### 3. Error Bounds under Source Conditions

The objective of this section is to obtain an error estimate for $\|x_n-\hat{x}\|$ under the following assumption on $x_0-\hat{x}$.

Assumption 3.1 (see [4]). There exists a continuous, strictly monotonically increasing function $\varphi:(0,a]\to(0,\infty)$ with $a\ge\|F'(\hat{x})\|$ satisfying $\lim_{\lambda\to 0}\varphi(\lambda)=0$ and
$$\sup_{\lambda\ge 0}\frac{\alpha\varphi(\lambda)}{\lambda+\alpha}\le\varphi(\alpha)\quad\text{for all }\alpha\in(0,a],$$
and an element $v\in X$ with $\|v\|\le 1$ such that $x_0-\hat{x}=\varphi(F'(\hat{x}))v$.

Remark 3.2. It can be seen that the functions $\varphi(\lambda)=\lambda^{\nu}$ for $0<\nu\le 1$ and $\varphi(\lambda)=\big(\ln\frac{1}{\lambda}\big)^{-\nu}$ for $\nu>0$ satisfy Assumption 3.1 (see [14]).

We will be using the error estimates in the following proposition, which can be found in [5], for our error analysis.

Proposition 3.3 (cf. [5], Proposition 3.1). Let $\hat{x}$ be a solution of (1.1) and let $F:D(F)\subseteq X\to X$ be a monotone operator in $X$. Let $x_\alpha$ be the unique solution of (1.4) with $f$ in place of $f^\delta$ and let $x_\alpha^\delta$ be the unique solution of (1.4). Then
$$\|x_\alpha^\delta-x_\alpha\|\le\frac{\delta}{\alpha}.$$

The following theorem can be found in [1].

Theorem 3.4 (see [1], Theorem 3.1). Let $x_\alpha$ be the unique solution of (1.4) with $f$ in place of $f^\delta$. Let the assumptions in Proposition 3.3 and Assumptions 2.1, 2.2, and 3.1 be satisfied. Then
$$\|x_\alpha-\hat{x}\|\le c\,\varphi(\alpha)$$
for some constant $c>0$.

Combining the estimates in Proposition 3.3 and Theorems 2.5 and 3.4, we obtain the following.

Theorem 3.5. Let $x_n$ be as in (1.6) and let the assumptions in Proposition 3.3 and Theorems 2.5 and 3.4 be satisfied. Then
$$\|x_n-\hat{x}\|\le\|x_n-x_\alpha^\delta\|+\frac{\delta}{\alpha}+c\,\varphi(\alpha).$$

Let and let

Theorem 3.6. Let be the unique solution of (1.4) and let be as in (1.6). Let the assumptions in Theorem 3.5 be satisfied. Let be as in (3.7) and let be as in (3.8). Then

##### 3.1. A Priori Choice of the Parameter

Note that the error estimate in (3.9) is of optimal order if $\alpha:=\alpha_\delta$ satisfies $\varphi(\alpha_\delta)=\delta/\alpha_\delta$.

Now using the function $\psi(\lambda):=\lambda\,\varphi^{-1}(\lambda)$, $0<\lambda\le a$, we have $\delta=\alpha_\delta\varphi(\alpha_\delta)=\psi(\varphi(\alpha_\delta))$, so that $\alpha_\delta=\varphi^{-1}(\psi^{-1}(\delta))$. In view of the above observations and (3.9), we have the following.

Theorem 3.7. Let $\psi(\lambda):=\lambda\,\varphi^{-1}(\lambda)$ for $0<\lambda\le a$, and let the assumptions in Theorem 3.6 hold. For $\delta>0$, let $\alpha_\delta=\varphi^{-1}(\psi^{-1}(\delta))$. Let $n_\delta$ be as in (3.8). Then
$$\|x_{n_\delta}-\hat{x}\|=O\big(\psi^{-1}(\delta)\big).$$
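As a worked instance (our illustration), take the Hölder-type source function $\varphi(\lambda)=\lambda^{\nu}$, $0<\nu\le 1$, of Remark 3.2:

```latex
\varphi(\lambda)=\lambda^{\nu}
\quad\Longrightarrow\quad
\varphi^{-1}(\lambda)=\lambda^{1/\nu},\qquad
\psi(\lambda)=\lambda\,\varphi^{-1}(\lambda)=\lambda^{(\nu+1)/\nu},\qquad
\psi^{-1}(\delta)=\delta^{\nu/(\nu+1)}.
```

Hence $\alpha_\delta=\varphi^{-1}(\psi^{-1}(\delta))=\delta^{1/(\nu+1)}$, which indeed balances the two error terms since $\varphi(\alpha_\delta)=\delta^{\nu/(\nu+1)}=\delta/\alpha_\delta$, and the resulting rate is $O\big(\delta^{\nu/(\nu+1)}\big)$.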

##### 3.2. An Adaptive Choice of the Parameter

In this subsection, we present a parameter choice rule based on the adaptive method studied in [3, 9]. Let $D_N:=\{\alpha_i:0<\alpha_0<\alpha_1<\cdots<\alpha_N\}$, where $\alpha_i=\mu^i\alpha_0$ with $\mu>1$ and $\alpha_0>0$. Then for $i\le j$, we have $\delta/\alpha_j\le\delta/\alpha_i$.

Let . In this paper, we select from for computing for each .

Theorem 3.8 (cf. [9], Theorem 4.3). Assume that there exists $i\in\{0,1,\dots,N\}$ such that $\varphi(\alpha_i)\le\delta/\alpha_i$. Let the assumptions of Theorems 3.6 and 3.7 hold, and let
$$l:=\max\{i:\varphi(\alpha_i)\le\delta/\alpha_i\},\qquad k:=\max\{i:\|x_{n_i}-x_{n_j}\|\le 4C\,\delta/\alpha_j,\ j=0,1,\dots,i\}.$$
Then $l\le k$ and
$$\|\hat{x}-x_{n_k}\|\le C\mu\,\psi^{-1}(\delta),$$
where $C>0$ is a constant independent of $\delta$.

#### 4. Implementation of Adaptive Choice Rule

The following steps are involved in implementing the adaptive choice rule. (i) Choose $\alpha_0>0$ and $\mu>1$. (ii) Choose $\alpha_i:=\mu^i\alpha_0$, $i=0,1,\dots,N$.

Finally the adaptive algorithm associated with the choice of the parameter specified in Theorem 3.8 involves the following steps.

##### 4.1. Algorithm

Step 1. Set $i=0$.

Step 2. Choose the stopping index $n_i$ corresponding to $\alpha_i$.

Step 3. Solve for $x_{n_i}$ by using the iterations (1.5) and (1.6) with $\alpha=\alpha_i$.

Step 4. If $\|x_{n_i}-x_{n_j}\|>4C\,\delta/\alpha_j$ for some $j<i$, then take $k=i-1$ and return $x_{n_k}$.

Step 5. Otherwise, set $i=i+1$ and go to Step 2.
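The steps above can be sketched on a toy one-dimensional monotone problem ($F(x)=x+x^3$); the inner solver, the geometric grid, the constant $C=1$ in the discrepancy test of Step 4, and all numeric values are our illustrative assumptions.

```python
# Hypothetical sketch of the adaptive (balancing) loop of Section 4.1
# on a toy 1-D monotone problem; all concrete values are illustrative.

def F(x):
    return x + x**3  # monotone toy operator, F'(x) = 1 + 3x^2 > 0

def solve_regularized(alpha, f_delta, x0=0.0, iters=60):
    """Two-step modified Newton iterates for F(x) + alpha*(x - x0) = f_delta."""
    R = 1.0 + 3.0 * x0**2 + alpha      # frozen F'(x0) + alpha
    x = x0
    for _ in range(iters):
        y = x - (F(x) - f_delta + alpha * (x - x0)) / R
        x = y - (F(y) - f_delta + alpha * (y - x0)) / R
    return x

delta = 1e-3
f_delta = F(0.5) + delta               # noisy data for exact solution 0.5
alpha0, mu, N = delta, 1.5, 40         # geometric grid alpha_i = mu**i * alpha0

xs, alphas = [], []
k = N
for i in range(N + 1):
    alphas.append(alpha0 * mu**i)
    xs.append(solve_regularized(alphas[i], f_delta))
    # discrepancy test of Step 4 (with C = 1): stop once the new
    # iterate drifts too far from an earlier one
    if any(abs(xs[i] - xs[j]) > 4.0 * delta / alphas[j] for j in range(i)):
        k = i - 1
        break

x_adaptive = xs[k]
```

The loop solves one regularized problem per grid level and terminates at the first level that violates the pairwise discrepancy bound, returning the previous iterate; no knowledge of the source function $\varphi$ is needed.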

#### 5. Numerical Example

In this section, we consider the example from [4] to illustrate the algorithm of Section 4.1. We apply the algorithm by choosing a sequence $(V_n)$ of finite-dimensional subspaces of $X$ with $V_n\subseteq V_{n+1}$. Precisely, we choose $V_n$ as the linear span of $\{v_1,v_2,\dots,v_n\}$, where the $v_i$ are the linear splines on a uniform grid of $n+1$ points in $[0,1]$.

Example 5.1 (see [4]). Let $F:D(F)\subseteq L^2(0,1)\to L^2(0,1)$ be defined by
$$F(u)(t):=\int_0^1 k(t,s)\,u^3(s)\,ds,$$
where
$$k(t,s)=\begin{cases}(1-t)s, & 0\le s\le t\le 1,\\ (1-s)t, & 0\le t\le s\le 1.\end{cases}$$
Then for all $u,v$,
$$\langle F(u)-F(v),u-v\rangle=\int_0^1\Big(\int_0^1 k(t,s)\big(u^3(s)-v^3(s)\big)\,ds\Big)\big(u(t)-v(t)\big)\,dt\ge 0.$$
Thus the operator $F$ is monotone. The Fréchet derivative of $F$ is given by
$$F'(u)w(t)=3\int_0^1 k(t,s)\,u^2(s)\,w(s)\,ds.$$
Note that for , where .
Observe that Assumption 2.2 is satisfied.
In our computation, we take and . Then the exact solution is We use as our initial guess, so that the function satisfies the source condition where . Thus we expect to obtain the rate of convergence .
Observe that while performing numerical computations on a finite-dimensional subspace $V_n$ of $X$, one has to consider the operator $P_nF$ instead of $F$, where $P_n$ is the orthogonal projection onto $V_n$. Thus one incurs an additional error $\|F(x)-P_nF(x)\|$.
Let . For the operator defined in (5.4), (cf. [15]). Thus we expect to obtain the rate of convergence .
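The finite-dimensional computation described above can be sketched as follows. The kernel $k(t,s)=(1-t)s$ for $s\le t$ and $(1-s)t$ otherwise is our reconstruction of the kernel in Example 5.1 (the Green's function of $-d^2/dt^2$ with zero boundary values); the grid size and the test function are illustrative choices.

```python
# Sketch of evaluating the example operator F(u)(t) = \int_0^1 k(t,s) u(s)^3 ds
# on a uniform grid, assuming the Green's-function kernel described above.

n = 65                      # number of uniform grid points on [0, 1]
h = 1.0 / (n - 1)
ts = [i * h for i in range(n)]

def k(t, s):
    # symmetric kernel, vanishing at t = 0 and t = 1
    return (1.0 - t) * s if s <= t else (1.0 - s) * t

def apply_F(u):
    """Evaluate F(u) at the grid points with the composite trapezoidal rule."""
    out = []
    for t in ts:
        acc = 0.0
        for j, s in enumerate(ts):
            w = 0.5 * h if j in (0, n - 1) else h
            acc += w * k(t, s) * u[j] ** 3
        out.append(acc)
    return out

# Sanity check: for u(t) = t, integrating k against s^3 solves -v'' = t^3 with
# v(0) = v(1) = 0, whose closed form is v(t) = (t - t^5) / 20.
u = ts[:]
Fu = apply_F(u)
exact = [(t - t**5) / 20.0 for t in ts]
err = max(abs(a - b) for a, b in zip(Fu, exact))
```

The kink of $k(t,\cdot)$ at $s=t$ lies on a grid node, so the trapezoidal rule retains its $O(h^2)$ accuracy; in an actual run of the algorithm, `apply_F` would be combined with the projection $P_n$ and the iteration (1.5)-(1.6).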
We choose , and . The results of the computation (to four decimal places) are presented in Table 1. The plots of the exact solution and the approximate solution obtained are given in Figures 1 and 2.

Table 1
Figure 1: Curves of the exact and approximate solutions.
Figure 2: Curves of the exact and approximate solutions.

The last column of Table 1 shows that the error is of the expected order.

#### 6. Conclusion

We considered a two-step method for obtaining an approximate solution of a nonlinear ill-posed operator equation $F(x)=f$, where $F$ is a monotone operator defined on a real Hilbert space, when the available data are $f^\delta$ with $\|f-f^\delta\|\le\delta$. The proposed method converges to the solution $x_\alpha^\delta$ of the regularized equation $F(x)+\alpha(x-x_0)=f^\delta$, which in turn is an approximation of the solution $\hat{x}$, for a properly chosen parameter $\alpha$. We obtained an optimal-order error estimate by choosing the regularization parameter according to the adaptive method considered by Pereverzev and Schock [3]. The computational results provided endorse the reliability and effectiveness of our method.

#### Acknowledgment

S. Pareth thanks the National Institute of Technology Karnataka, Surathkal for the financial support.

#### References

1. S. George and A. I. Elmahdy, “A quadratic convergence yielding iterative method for nonlinear ill-posed operator equations,” Computational Methods in Applied Mathematics, vol. 12, no. 1, pp. 32–45, 2012.
2. J. Janno and U. Tautenhahn, “On Lavrentiev regularization for ill-posed problems in Hilbert scales,” Numerical Functional Analysis and Optimization, vol. 24, no. 5-6, pp. 531–555, 2003.
3. S. Pereverzev and E. Schock, “On the adaptive selection of the parameter in regularization of ill-posed problems,” SIAM Journal on Numerical Analysis, vol. 43, no. 5, pp. 2060–2076, 2005.
4. E. V. Semenova, “Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators,” Computational Methods in Applied Mathematics, vol. 10, no. 4, pp. 444–454, 2010.
5. U. Tautenhahn, “On the method of Lavrentiev regularization for nonlinear ill-posed problems,” Inverse Problems, vol. 18, no. 1, pp. 191–207, 2002.
6. B. Blaschke, A. Neubauer, and O. Scherzer, “On convergence rates for the iteratively regularized Gauss-Newton method,” IMA Journal of Numerical Analysis, vol. 17, no. 3, pp. 421–436, 1997.
7. P. Deuflhard, H. W. Engl, and O. Scherzer, “A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions,” Inverse Problems, vol. 14, no. 5, pp. 1081–1106, 1998.
8. S. George, “Newton-Lavrentiev regularization of ill-posed Hammerstein type operator equation,” Journal of Inverse and Ill-Posed Problems, vol. 14, no. 6, pp. 573–582, 2006.
9. S. George and M. T. Nair, “A modified Newton-Lavrentiev regularization for nonlinear ill-posed Hammerstein-type operator equations,” Journal of Complexity, vol. 24, no. 2, pp. 228–240, 2008.
10. Q.-N. Jin, “Error estimates of some Newton-type methods for solving nonlinear inverse problems in Hilbert scales,” Inverse Problems, vol. 16, no. 1, pp. 187–197, 2000.
11. Q.-N. Jin, “On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems,” Mathematics of Computation, vol. 69, no. 232, pp. 1603–1623, 2000.
12. P. Mahale and M. T. Nair, “Iterated Lavrentiev regularization for nonlinear ill-posed problems,” The ANZIAM Journal, vol. 51, no. 2, pp. 191–217, 2009.
13. I. K. Argyros and S. Hilout, “A convergence analysis for directional two-step Newton methods,” Numerical Algorithms, vol. 55, no. 4, pp. 503–528, 2010.
14. M. T. Nair and P. Ravishankar, “Regularized versions of continuous Newton's method and continuous modified Newton's method under general source conditions,” Numerical Functional Analysis and Optimization, vol. 29, no. 9-10, pp. 1140–1165, 2008.
15. C. W. Groetsch, J. T. King, and D. Murio, “Asymptotic analysis of a finite element method for Fredholm equations of the first kind,” in Treatment of Integral Equations by Numerical Methods, pp. 1–11, Academic Press, London, UK, 1982.