ISRN Applied Mathematics
Volume 2012 (2012), Article ID 728627, 14 pages
http://dx.doi.org/10.5402/2012/728627
Research Article

Two-Step Modified Newton Method for Nonlinear Lavrentiev Regularization

Santhosh George and Suresan Pareth

Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Surathkal 575025, India

Received 15 December 2011; Accepted 1 February 2012

Academic Editor: K. Djidjeli

Copyright © 2012 Santhosh George and Suresan Pareth. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A two-step modified Newton method is considered for obtaining an approximate solution of the nonlinear ill-posed equation $F(x) = f$ when the available data are $f^\delta$ with $\|f - f^\delta\| \le \delta$ and the operator $F$ is monotone. The error estimate derived under a general source condition on $x_0 - \hat{x}$ is of optimal order; here $x_0$ is the initial guess and $\hat{x}$ is the actual solution. The regularization parameter is chosen according to the adaptive method considered by Pereverzev and Schock (2005). The computational results provided endorse the reliability and effectiveness of our method.

1. Introduction

Recently George and Elmahdy [1] studied iterative methods for solving the ill-posed operator equation
$$F(x) = f, \tag{1.1}$$
where $F : D(F) \subseteq X \to X$ is a nonlinear monotone operator (i.e., $\langle F(x) - F(y), x - y \rangle \ge 0$ for all $x, y \in D(F)$) and $X$ is a real Hilbert space. The convergence analysis in [1] is based on suitably constructed majorizing sequences. Recall that a sequence $(t_n)$ in $\mathbb{R}$ is said to be a majorizing sequence of a sequence $(x_n)$ in $X$ if
$$\|x_{n+1} - x_n\| \le t_{n+1} - t_n, \quad \forall n \ge 0. \tag{1.2}$$

Throughout this paper, the inner product and the corresponding norm on the Hilbert space $X$ are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively; $D(F)$ is the domain of $F$, and $F'(\cdot)$ is the Fréchet derivative of $F$.

In applications, usually only noisy data $f^\delta$ are available, such that
$$\|f - f^\delta\| \le \delta. \tag{1.3}$$
Then the problem of recovering $\hat{x}$ from the noisy equation $F(x) = f^\delta$ is ill posed, in the sense that a small perturbation in the data can cause a large deviation in the solution.

For monotone operators, one usually uses the Lavrentiev regularization method (see [2–5]) for solving (1.1). In this method, the regularized approximation $x_\alpha^\delta$ is obtained by solving the operator equation
$$F(x) + \alpha(x - x_0) = f^\delta. \tag{1.4}$$
It is known (cf. [5], Theorem 1.1) that (1.4) has a unique solution $x_\alpha^\delta \in B_r(\hat{x}) := \{x \in X : \|x - \hat{x}\| < r\} \subset D(F)$ for any $\alpha > 0$, provided that $r = \|x_0 - \hat{x}\| + \delta/\alpha$.
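The linear special case may help fix ideas (this remark is ours, not from the paper): for a bounded linear monotone operator $A$, equation (1.4) is solvable in closed form, and monotonicity gives a uniform bound on the resolvent.

```latex
% Linear special case F = A (bounded, linear, monotone): (1.4) reads
A x + \alpha (x - x_0) = f^{\delta}
\quad\Longrightarrow\quad
x_{\alpha}^{\delta} = (A + \alpha I)^{-1}\bigl(f^{\delta} + \alpha x_0\bigr),
% well defined for every \alpha > 0, since monotonicity of A implies
% \|(A + \alpha I)^{-1}\| \le 1/\alpha.
```

For nonlinear $F$, no such closed form is available, which is what motivates the iterative scheme below.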

The optimality of the Lavrentiev method was proved in [5] under a general source condition on $x_0 - \hat{x}$. The main drawback, however, is that the regularized equation (1.4) remains nonlinear, and one may have difficulties in solving it numerically.

Thus in the last few years, more emphasis has been put on the investigation of iterative regularization methods (see [6–12]). In this paper, we consider a modified form of the method studied in [1], but we analyse it as a two-step modified Newton Lavrentiev method (TSMNLM). The proposed analysis is motivated by the two-step directional Newton method (TSDNM) considered by Argyros and Hilout [13] for approximating a zero $x^*$ of a differentiable function $F$ defined on a convex subset $\mathcal{D}$ of a Hilbert space $H$ with values in $\mathbb{R}$. The TSMNLM for approximating the zero $x_\alpha^\delta$ of $F(x) - f^\delta + \alpha(x - x_0) = 0$ is defined as
$$y_{n,\alpha}^\delta = x_{n,\alpha}^\delta - R_\alpha(x_0)^{-1}\left[F\left(x_{n,\alpha}^\delta\right) - f^\delta + \alpha\left(x_{n,\alpha}^\delta - x_0\right)\right], \tag{1.5}$$
$$x_{n+1,\alpha}^\delta = y_{n,\alpha}^\delta - R_\alpha(x_0)^{-1}\left[F\left(y_{n,\alpha}^\delta\right) - f^\delta + \alpha\left(y_{n,\alpha}^\delta - x_0\right)\right], \tag{1.6}$$
where $x_{0,\alpha}^\delta := x_0$ and $R_\alpha(x_0) = F'(x_0) + \alpha I$. Here the regularization parameter $\alpha$ is chosen from the finite set $D_M = \{\alpha_i : 0 < \alpha_0 < \alpha_1 < \cdots < \alpha_M\}$.
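Since the linearization point $x_0$ is fixed, the operator $R_\alpha(x_0)$ is the same in both steps and in every iteration, so in a finite-dimensional computation it needs to be factorized only once. A minimal Python sketch (our own function names; $F$ is assumed to be supplied as a callable on $\mathbb{R}^m$ and $F'(x_0)$ as a matrix):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def tsmnlm(F, Fprime_x0, x0, f_delta, alpha, n_steps):
    """Finite-dimensional sketch of the two-step iteration (1.5)-(1.6).

    F         : callable R^m -> R^m, discretization of the operator
    Fprime_x0 : (m, m) array, discretization of F'(x0)

    R_alpha(x0) = F'(x0) + alpha*I is fixed throughout, so it is
    factorized once and reused in both half-steps of every iteration.
    """
    R_lu = lu_factor(Fprime_x0 + alpha * np.eye(x0.size))
    x = x0.copy()
    for _ in range(n_steps):
        # (1.5): y_n = x_n - R^{-1}[F(x_n) - f_delta + alpha*(x_n - x0)]
        y = x - lu_solve(R_lu, F(x) - f_delta + alpha * (x - x0))
        # (1.6): x_{n+1} = y_n - R^{-1}[F(y_n) - f_delta + alpha*(y_n - x0)]
        x = y - lu_solve(R_lu, F(y) - f_delta + alpha * (y - x0))
    return x
```

Note that each iteration costs only two operator evaluations and two linear solves with the already-factorized matrix; no Fréchet derivative is recomputed.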

The organization of this paper is as follows. Convergence analysis of the TSMNLM is given in Section 2, error bounds under an a priori and under an adaptive choice of the regularization parameter are given in Section 3, and Section 4 deals with the implementation of the adaptive choice rule. An example and the results of computational experiments are given in Section 5. Finally, the paper ends with a conclusion in Section 6.

2. Convergence Analysis for TSMNLM

We need the following assumptions for the convergence analysis of TSMNLM.

Assumption 2.1 (see [4]). $F$ possesses a locally uniformly bounded Fréchet derivative $F'(\cdot)$ at all $x$ in the domain $D(F)$.

Assumption 2.2 (cf. [4], Assumption 3). There exists a constant $k_0 > 0$ such that for every $x, u \in D(F)$ and $v \in X$, there exists an element $\Phi(x, u, v) \in X$ satisfying
$$\left[F'(x) - F'(u)\right]v = F'(u)\Phi(x, u, v), \qquad \|\Phi(x, u, v)\| \le k_0\|v\|\|x - u\| \tag{2.1}$$
for all $x, u \in D(F)$ and $v \in X$.

Let
$$e_{n,\alpha}^\delta := \left\|y_{n,\alpha}^\delta - x_{n,\alpha}^\delta\right\|, \quad \forall n \ge 0, \qquad \delta_0 < \frac{3\alpha_0}{4k_0}, \qquad \rho < \frac{\sqrt{3/2 - 2k_0\delta_0/\alpha_0} - 1}{k_0}, \tag{2.2}$$
$$\gamma_\rho := \frac{\delta_0}{\alpha_0} + \frac{k_0}{2}\rho^2 + \rho. \tag{2.3}$$
Hereafter, for convenience, we use the notations $x_n$, $y_n$, and $e_n$ for $x_{n,\alpha}^\delta$, $y_{n,\alpha}^\delta$, and $e_{n,\alpha}^\delta$, respectively.

Remark 2.3. Note that if $\|x_0 - \hat{x}\| \le \rho$, then by Assumption 2.2 it follows that
$$\begin{aligned}
e_0 = \left\|y_0 - x_0\right\| &= \left\|R_\alpha(x_0)^{-1}\left[f^\delta - F(x_0)\right]\right\| \\
&= \left\|R_\alpha(x_0)^{-1}\left[f^\delta - f + F(\hat{x}) - F(x_0) - F'(x_0)(\hat{x} - x_0) + F'(x_0)(\hat{x} - x_0)\right]\right\| \\
&\le \left\|R_\alpha(x_0)^{-1}\left(f^\delta - f\right)\right\| + \left\|R_\alpha(x_0)^{-1}\int_0^1\left[F'\left(x_0 + t(\hat{x} - x_0)\right) - F'(x_0)\right](\hat{x} - x_0)\,dt\right\| \\
&\quad + \left\|R_\alpha(x_0)^{-1}F'(x_0)(\hat{x} - x_0)\right\| \\
&\le \frac{\delta}{\alpha} + \frac{k_0}{2}\rho^2 + \rho \le \frac{\delta_0}{\alpha_0} + \frac{k_0}{2}\rho^2 + \rho = \gamma_\rho. \tag{2.4}
\end{aligned}$$

Let
$$q = k_0 r. \tag{2.5}$$
Then $\gamma_\rho/(1 - q) < r$ if
$$r \in \left(\frac{1 - \sqrt{1 - 4k_0\gamma_\rho}}{2k_0}, \frac{1 + \sqrt{1 - 4k_0\gamma_\rho}}{2k_0}\right). \tag{2.6}$$
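The interval (2.6) is simply where a quadratic in $r$ is negative; the following short derivation (ours) makes this explicit.

```latex
% With q = k_0 r, the condition \gamma_\rho/(1 - q) < r is equivalent to
\frac{\gamma_\rho}{1 - k_0 r} < r
\;\Longleftrightarrow\;
k_0 r^2 - r + \gamma_\rho < 0,
% which holds exactly for r strictly between the roots
r_{\pm} = \frac{1 \pm \sqrt{1 - 4 k_0 \gamma_\rho}}{2 k_0}.
% The interval is nonempty precisely when 4 k_0 \gamma_\rho < 1, which
% the restrictions on \delta_0 and \rho in (2.2) are designed to ensure.
```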

Theorem 2.4. Let $q$ and $r$ be as in (2.5) and (2.6), respectively, and let $\gamma_\rho$ be as in (2.3). Let $e_n$ be as in (2.2) and let $y_n$ and $x_n$ be as in (1.5) and (1.6), respectively, with $\delta \in [0, \delta_0)$ and $\alpha \in D_M$. Then
(a) $\|x_n - y_{n-1}\| \le q\|y_{n-1} - x_{n-1}\|$;
(b) $\|y_n - x_n\| \le q^2\|y_{n-1} - x_{n-1}\|$;
(c) $e_n \le q^{2n}\gamma_\rho$;
(d) $x_n, y_n \in B_r(x_0)$.

Proof. Observe that if $x_n, y_n \in B_r(x_0)$, then by Assumption 2.2 we have
$$\begin{aligned}
x_n - y_{n-1} &= y_{n-1} - x_{n-1} - R_\alpha(x_0)^{-1}\left[F(y_{n-1}) - F(x_{n-1}) + \alpha(y_{n-1} - x_{n-1})\right] \\
&= R_\alpha(x_0)^{-1}\left[R_\alpha(x_0)(y_{n-1} - x_{n-1}) - \left(F(y_{n-1}) - F(x_{n-1})\right) - \alpha(y_{n-1} - x_{n-1})\right] \\
&= R_\alpha(x_0)^{-1}\int_0^1\left[F'(x_0) - F'\left(x_{n-1} + t(y_{n-1} - x_{n-1})\right)\right](y_{n-1} - x_{n-1})\,dt \\
&= R_\alpha(x_0)^{-1}F'(x_0)\int_0^1\Phi\left(x_0, x_{n-1} + t(y_{n-1} - x_{n-1}), y_{n-1} - x_{n-1}\right)dt, \tag{2.7}
\end{aligned}$$
and hence
$$\left\|x_n - y_{n-1}\right\| \le k_0 r\left\|y_{n-1} - x_{n-1}\right\|. \tag{2.8}$$
Again observe that if $x_n, y_n \in B_r(x_0)$, then by Assumption 2.2 and (2.8) we have
$$\begin{aligned}
y_n - x_n &= x_n - y_{n-1} - R_\alpha(x_0)^{-1}\left[F(x_n) - F(y_{n-1}) + \alpha(x_n - y_{n-1})\right] \\
&= R_\alpha(x_0)^{-1}\left[R_\alpha(x_0)(x_n - y_{n-1}) - \left(F(x_n) - F(y_{n-1})\right) - \alpha(x_n - y_{n-1})\right] \\
&= R_\alpha(x_0)^{-1}\int_0^1\left[F'(x_0) - F'\left(y_{n-1} + t(x_n - y_{n-1})\right)\right](x_n - y_{n-1})\,dt \\
&= R_\alpha(x_0)^{-1}F'(x_0)\int_0^1\Phi\left(x_0, y_{n-1} + t(x_n - y_{n-1}), x_n - y_{n-1}\right)dt, \tag{2.9}
\end{aligned}$$
and hence
$$\left\|y_n - x_n\right\| \le k_0 r\left\|x_n - y_{n-1}\right\| \le q^2\left\|y_{n-1} - x_{n-1}\right\|. \tag{2.10}$$
Thus, if $x_n, y_n \in B_r(x_0)$, then (a) and (b) follow from (2.8) and (2.10), respectively, and (c) follows by iterating (b): $e_n \le q^2 e_{n-1} \le \cdots \le q^{2n}e_0 \le q^{2n}\gamma_\rho$, by Remark 2.3. Now using induction we prove that $x_n, y_n \in B_r(x_0)$. Note that $x_0, y_0 \in B_r(x_0)$, and hence by (2.8)
$$\left\|x_1 - x_0\right\| \le \left\|x_1 - y_0\right\| + \left\|y_0 - x_0\right\| \le (1 + q)e_0 \le \frac{e_0}{1 - q} \le \frac{\gamma_\rho}{1 - q} < r, \tag{2.11}$$
that is, $x_1 \in B_r(x_0)$; again by (2.10)
$$\left\|y_1 - x_0\right\| \le \left\|y_1 - x_1\right\| + \left\|x_1 - x_0\right\| \le q^2 e_0 + (1 + q)e_0 \le \frac{e_0}{1 - q} \le \frac{\gamma_\rho}{1 - q} < r, \tag{2.12}$$
that is, $y_1 \in B_r(x_0)$. Suppose $x_k, y_k \in B_r(x_0)$ for some $k > 1$. Then since
$$\left\|x_{k+1} - x_0\right\| \le \left\|x_{k+1} - x_k\right\| + \left\|x_k - x_{k-1}\right\| + \cdots + \left\|x_1 - x_0\right\|, \tag{2.13}$$
we first find an estimate for $\|x_{k+1} - x_k\|$. Note that by (a) and (b) we have
$$\left\|x_{k+1} - x_k\right\| \le \left\|x_{k+1} - y_k\right\| + \left\|y_k - x_k\right\| \le (1 + q)\left\|y_k - x_k\right\| \le (1 + q)q^{2k}e_0. \tag{2.14}$$
Therefore by (2.13) we have
$$\left\|x_{k+1} - x_0\right\| \le (1 + q)\left[q^{2k} + q^{2(k-1)} + \cdots + 1\right]e_0 \le (1 + q)\frac{1 - q^{2k+2}}{1 - q^2}e_0 \le \frac{e_0}{1 - q} \le \frac{\gamma_\rho}{1 - q} < r. \tag{2.15}$$
So by induction $x_n \in B_r(x_0)$ for all $n \ge 0$. Again, by (a), (b), and (2.15) we have
$$\left\|y_{k+1} - x_0\right\| \le \left\|y_{k+1} - x_{k+1}\right\| + \left\|x_{k+1} - x_0\right\| \le q^{2k+2}e_0 + (1 + q)\left[q^{2k} + q^{2(k-1)} + \cdots + 1\right]e_0 \le (1 + q)\frac{1 - q^{2k+4}}{1 - q^2}e_0 \le \frac{e_0}{1 - q} \le \frac{\gamma_\rho}{1 - q} < r. \tag{2.16}$$
Thus $y_{k+1} \in B_r(x_0)$, and hence by induction $y_n \in B_r(x_0)$ for all $n \ge 0$. This completes the proof of the theorem.

The main result of this section is the following theorem.

Theorem 2.5. Let $y_n$ and $x_n$ be as in (1.5) and (1.6), respectively, and let the assumptions of Theorem 2.4 hold. Then $(x_n)$ is a Cauchy sequence in $B_r(x_0)$ and converges to $x_\alpha^\delta \in B_r(x_0)$. Further, $F(x_\alpha^\delta) + \alpha(x_\alpha^\delta - x_0) = f^\delta$ and
$$\left\|x_n - x_\alpha^\delta\right\| \le \frac{q^{2n}\gamma_\rho}{1 - q}. \tag{2.17}$$

Proof. Using relations (b) and (c) of Theorem 2.4, we obtain
$$\left\|x_{n+m} - x_n\right\| \le \sum_{i=0}^{m-1}\left\|x_{n+i+1} - x_{n+i}\right\| \le \sum_{i=0}^{m-1}(1 + q)e_{n+i} \le \sum_{i=0}^{m-1}(1 + q)q^{2(n+i)}e_0 \le (1 + q)\frac{q^{2n} - q^{2(n+m)}}{1 - q^2}e_0 \le \frac{q^{2n}}{1 - q}e_0 \le \frac{q^{2n}}{1 - q}\gamma_\rho. \tag{2.18}$$
Thus $(x_n)$ is a Cauchy sequence in $B_r(x_0)$, and hence it converges, say, to $x_\alpha^\delta \in B_r(x_0)$. Observe that
$$\left\|F(x_n) - f^\delta + \alpha(x_n - x_0)\right\| = \left\|R_\alpha(x_0)(x_n - y_n)\right\| \le \left\|R_\alpha(x_0)\right\|q^{2n}\gamma_\rho. \tag{2.19}$$
Now by letting $n \to \infty$ in (2.19), we obtain $F(x_\alpha^\delta) - f^\delta + \alpha(x_\alpha^\delta - x_0) = 0$. This completes the proof.

3. Error Bounds under Source Conditions

The objective of this section is to obtain an error estimate for $\|x_n - \hat{x}\|$ under the following assumption on $x_0 - \hat{x}$.

Assumption 3.1 (see [4]). There exists a continuous, strictly monotonically increasing function $\varphi : (0, a] \to (0, \infty)$ with $a \ge \|F'(\hat{x})\|$ satisfying $\lim_{\lambda \to 0}\varphi(\lambda) = 0$, and an element $v \in X$ with $\|v\| \le 1$, such that
$$x_0 - \hat{x} = \varphi\left(F'(\hat{x})\right)v, \qquad \sup_{\lambda \in (0, a]}\frac{\alpha\varphi(\lambda)}{\lambda + \alpha} \le c_\varphi\varphi(\alpha), \quad \forall\alpha > 0. \tag{3.1}$$

Remark 3.2. It can be seen that the functions
$$\varphi(\lambda) = \lambda^\nu, \quad \lambda > 0, \tag{3.2}$$
for $0 < \nu \le 1$, and
$$\varphi(\lambda) = \begin{cases}\left(\ln\dfrac{1}{\lambda}\right)^{-p}, & 0 < \lambda \le e^{-(p+1)},\\ 0, & \text{otherwise},\end{cases} \tag{3.3}$$
for $p \ge 0$, satisfy Assumption 3.1 (see [14]).

We will be using the error estimates in the following proposition, which can be found in [5], for our error analysis.

Proposition 3.3 (cf. [5], Proposition 3.1). Let $\hat{x} \in D(F)$ be a solution of (1.1) and let $F : D(F) \subseteq X \to X$ be a monotone operator in $X$. Let $x_\alpha$ be the unique solution of (1.4) with $f$ in place of $f^\delta$, and let $x_\alpha^\delta$ be the unique solution of (1.4). Then
$$\left\|x_\alpha^\delta - x_\alpha\right\| \le \frac{\delta}{\alpha}, \qquad \left\|x_\alpha - \hat{x}\right\| \le \left\|x_0 - \hat{x}\right\|. \tag{3.4}$$

The following theorem can be found in [1].

Theorem 3.4 (see [1], Theorem 3.1). Let $x_\alpha$ be the unique solution of (1.4) with $f$ in place of $f^\delta$. Let the assumptions of Proposition 3.3 and Assumptions 2.1, 2.2, and 3.1 be satisfied. Then
$$\left\|x_\alpha - \hat{x}\right\| \le \left(k_0 r + c_\varphi + 1\right)\varphi(\alpha). \tag{3.5}$$

Combining the estimates in Proposition 3.3 and Theorems 2.5 and 3.4, we obtain the following.

Theorem 3.5. Let $x_n$ be as in (1.6), and let the assumptions of Proposition 3.3 and Theorems 2.5 and 3.4 be satisfied. Then
$$\left\|x_n - \hat{x}\right\| \le \frac{q^{2n}\gamma_\rho}{1 - q} + \max\left\{1, k_0 r + c_\varphi + 1\right\}\left(\varphi(\alpha) + \frac{\delta}{\alpha}\right). \tag{3.6}$$

Let
$$C := \frac{\gamma_\rho}{1 - q} + \max\left\{1, k_0 r + c_\varphi + 1\right\}, \tag{3.7}$$
and let
$$n_\delta := \min\left\{n : q^{2n} \le \frac{\delta}{\alpha}\right\}. \tag{3.8}$$
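Since $q < 1$, the index (3.8) is available in closed form by taking logarithms. A small helper (our naming), reused in the algorithm of Section 4:

```python
import math

def stopping_index(q, delta, alpha):
    """Smallest n >= 0 with q**(2*n) <= delta/alpha, i.e. n_delta in (3.8).
    For 0 < q < 1 and delta/alpha < 1 this is
    ceil(log(delta/alpha) / (2*log(q))); both logarithms are negative."""
    ratio = delta / alpha
    if ratio >= 1.0:          # the condition already holds at n = 0
        return 0
    return math.ceil(math.log(ratio) / (2.0 * math.log(q)))
```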

Theorem 3.6. Let $x_\alpha^\delta$ be the unique solution of (1.4) and let $x_n$ be as in (1.6). Let the assumptions of Theorem 3.5 be satisfied. Let $C$ be as in (3.7) and let $n_\delta$ be as in (3.8). Then
$$\left\|x_{n_\delta} - \hat{x}\right\| \le C\left(\varphi(\alpha) + \frac{\delta}{\alpha}\right). \tag{3.9}$$

3.1. A Priori Choice of the Parameter

Note that the error estimate $\varphi(\alpha) + \delta/\alpha$ in (3.9) is of optimal order if $\alpha := \alpha_\delta$ satisfies $\varphi(\alpha_\delta)\alpha_\delta = \delta$.

Now using the function $\psi(\lambda) := \lambda\varphi^{-1}(\lambda)$, $0 < \lambda \le a$, we have $\delta = \alpha_\delta\varphi(\alpha_\delta) = \psi(\varphi(\alpha_\delta))$, so that $\alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$. In view of the above observations and (3.9), we have the following.

Theorem 3.7. Let $\psi(\lambda) := \lambda\varphi^{-1}(\lambda)$ for $0 < \lambda \le a$, and let the assumptions of Theorem 3.6 hold. For $\delta > 0$, let $\alpha := \alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$, and let $n_\delta$ be as in (3.8). Then
$$\left\|x_{n_\delta} - \hat{x}\right\| = O\left(\psi^{-1}(\delta)\right). \tag{3.10}$$
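A worked instance (ours) for the Hölder-type source function of Remark 3.2 makes the rate concrete.

```latex
% Take \varphi(\lambda) = \lambda^{\nu}, 0 < \nu \le 1, as in (3.2). Then
\psi(\lambda) = \lambda\,\varphi^{-1}(\lambda) = \lambda^{(\nu+1)/\nu},
\qquad
\psi^{-1}(\delta) = \delta^{\nu/(\nu+1)},
\qquad
\alpha_{\delta} = \varphi^{-1}\bigl(\psi^{-1}(\delta)\bigr) = \delta^{1/(\nu+1)},
% so Theorem 3.7 gives the H\"older rate
\|x_{n_\delta} - \hat{x}\| = O\bigl(\delta^{\nu/(\nu+1)}\bigr).
% For \nu = 1 this means \alpha_\delta = \sqrt{\delta} and the rate
% O(\delta^{1/2}) expected in the numerical example of Section 5.
```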

3.2. An Adaptive Choice of the Parameter

In this subsection, we present a parameter choice rule based on the adaptive method studied in [3, 9]. Let
$$D_M(\alpha) := \left\{\alpha_i = \mu^i\alpha_0, \; i = 0, 1, \ldots, M\right\}, \tag{3.11}$$
where $\mu > 1$ and $\alpha_0 > 0$, and let
$$n_i := \min\left\{n : q^{2n} \le \frac{\delta}{\alpha_i}\right\}. \tag{3.12}$$
Then we have
$$\left\|x_{n_i} - x_{\alpha_i}^\delta\right\| \le \frac{\delta}{\alpha_i}, \quad \forall i = 0, 1, \ldots, M. \tag{3.13}$$

Let $x_i := x_{n_i,\alpha_i}^\delta$. In this paper, we select $\alpha = \alpha_i$ from $D_M(\alpha)$ for computing $x_i$ for each $i = 0, 1, \ldots, M$.

Theorem 3.8 (cf. [9], Theorem 4.3). Assume that there exists $i \in \{0, 1, 2, \ldots, M\}$ such that $\varphi(\alpha_i) \le \delta/\alpha_i$. Let the assumptions of Theorems 3.6 and 3.7 hold, and let
$$l := \max\left\{i : \varphi(\alpha_i) \le \frac{\delta}{\alpha_i}\right\} < M, \qquad k := \max\left\{i : \left\|x_i - x_j\right\| \le 4C\frac{\delta}{\alpha_j}, \; j = 0, 1, 2, \ldots, i - 1\right\}. \tag{3.14}$$
Then $l \le k$ and
$$\left\|\hat{x} - x_k\right\| \le c\psi^{-1}(\delta), \tag{3.15}$$
where $c = 6C\mu$.

4. Implementation of Adaptive Choice Rule

The following steps are involved in implementing the adaptive choice rule.
(i) Choose $\alpha_0 > 0$ such that $\delta_0 < 3\alpha_0/(4k_0)$ and $\mu > 1$.
(ii) Choose $\alpha_i := \mu^i\alpha_0$, $i = 0, 1, 2, \ldots$.

Finally the adaptive algorithm associated with the choice of the parameter specified in Theorem 3.8 involves the following steps.

4.1. Algorithm

Step 1. Set $i = 0$.

Step 2. Choose $n_i = \min\left\{n : q^{2n} \le \delta/\alpha_i\right\}$.

Step 3. Compute $x_i := x_{n_i,\alpha_i}^\delta$ using the iteration (1.5)-(1.6).

Step 4. If $\left\|x_i - x_j\right\| > 4C\delta/\alpha_j$ for some $j < i$, then take $k = i - 1$ and return $x_k$.

Step 5. Otherwise, set $i = i + 1$ and go to Step 2.
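Steps 1–5 translate directly into a driver around the iteration sketched in Section 1. The following Python fragment is one way to organize it (our own code, reusing the hypothetical tsmnlm and stopping_index helpers from the earlier sketches):

```python
import numpy as np

def adaptive_tsmnlm(F, Fprime_x0, x0, f_delta, delta,
                    alpha0, mu, q, C, M=50):
    """Sketch of the adaptive rule of Section 4.1 / Theorem 3.8.

    Walks the grid alpha_i = mu**i * alpha0 and stops at the first i
    whose iterate strays more than 4*C*delta/alpha_j from some earlier
    iterate x_j, returning x_k with k = i - 1."""
    iterates, alphas = [], []
    for i in range(M + 1):
        alpha_i = mu**i * alpha0                               # grid (3.11)
        n_i = stopping_index(q, delta, alpha_i)                # Step 2
        x_i = tsmnlm(F, Fprime_x0, x0, f_delta, alpha_i, n_i)  # Step 3
        for x_j, alpha_j in zip(iterates, alphas):             # Step 4
            if np.linalg.norm(x_i - x_j) > 4.0 * C * delta / alpha_j:
                return iterates[-1]                            # k = i - 1
        iterates.append(x_i)                                   # Step 5
        alphas.append(alpha_i)
    return iterates[-1]
```

Note that the discrepancy test compares $x_i$ against all earlier iterates, so the $x_j$ and $\alpha_j$ must be retained as the grid is traversed.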

5. Numerical Example

In this section, we use the example given in [4] to illustrate the algorithm of Section 4.1. We apply the algorithm by choosing a sequence of finite-dimensional subspaces $(V_n)$ of $X$ with $\dim V_n = n + 1$. Precisely, we choose $V_n$ as the linear span of $\{v_1, v_2, \ldots, v_{n+1}\}$, where $v_i$, $i = 1, 2, \ldots, n + 1$, are the linear splines on a uniform grid of $n + 1$ points in $[0, 1]$.

Example 5.1 (see [4], Section 4.3). Let $F : D(F) \subseteq L^2(0, 1) \to L^2(0, 1)$ be defined by
$$F(u) := \int_0^1 k(t, s)u^3(s)\,ds, \tag{5.1}$$
where
$$k(t, s) = \begin{cases}(1 - t)s, & 0 \le s \le t \le 1,\\ (1 - s)t, & 0 \le t \le s \le 1.\end{cases} \tag{5.2}$$
Then for all $x(t), y(t)$ with $x(t) > y(t)$,
$$\langle F(x) - F(y), x - y\rangle = \int_0^1\left[\int_0^1 k(t, s)\left(x^3 - y^3\right)(s)\,ds\right](x - y)(t)\,dt \ge 0. \tag{5.3}$$
Thus the operator $F$ is monotone. The Fréchet derivative of $F$ is given by
$$F'(u)w = 3\int_0^1 k(t, s)(u(s))^2 w(s)\,ds. \tag{5.4}$$
Note that for $u, v > 0$,
$$\left(F'(v) - F'(u)\right)w = \left(3\int_0^1 k(t, s)(u(s))^2\,ds\right)\frac{\int_0^1 k(t, s)\left[(v(s))^2 - (u(s))^2\right]w(s)\,ds}{\int_0^1 k(t, s)(u(s))^2\,ds} =: F'(u)\Phi(v, u, w), \tag{5.5}$$
where $\Phi(v, u, w) = \left(\int_0^1 k(t, s)\left[(v(s))^2 - (u(s))^2\right]w(s)\,ds\right)\big/\left(\int_0^1 k(t, s)(u(s))^2\,ds\right)$.
Observe that
$$\Phi(v, u, w) = \frac{\int_0^1 k(t, s)\left[(v(s))^2 - (u(s))^2\right]w(s)\,ds}{\int_0^1 k(t, s)(u(s))^2\,ds} = \frac{\int_0^1 k(t, s)(u(s) + v(s))(v(s) - u(s))w(s)\,ds}{\int_0^1 k(t, s)(u(s))^2\,ds}. \tag{5.6}$$
So Assumption 2.2 is satisfied with $k_0 \ge \left\|\left(\int_0^1 k(t, s)(u(s) + v(s))\,ds\right)\big/\left(\int_0^1 k(t, s)(u(s))^2\,ds\right)\right\|$.
In our computation, we take $y(t) = (t - t^{11})/110$ and $y^\delta = y + \delta v_{n+1}$. Then the exact solution is
$$\hat{x}(t) = t^3. \tag{5.7}$$
We use
$$x_0(t) = t^3 + \frac{3}{56}\left(t - t^8\right) \tag{5.8}$$
as our initial guess, so that the function $x_0 - \hat{x}$ satisfies the source condition
$$x_0 - \hat{x} = \varphi\left(F'(\hat{x})\right)1, \tag{5.9}$$
where $\varphi(\lambda) = \lambda$. Thus we expect to obtain the rate of convergence $O(\delta^{1/2})$.
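This example is straightforward to discretize with trapezoidal quadrature on the uniform grid. The sketch below (our own code, not from the paper) also checks the data: since $k$ is the Green's function of $-u'' = g$, $u(0) = u(1) = 0$, applying $F$ to $\hat{x}(t) = t^3$ must reproduce $y(t) = (t - t^{11})/110$ up to quadrature error.

```python
import numpy as np

def example_51(n):
    """Discretization of Example 5.1 on a uniform grid of n + 1 points."""
    t = np.linspace(0.0, 1.0, n + 1)
    S, T = np.meshgrid(t, t)                   # T[i, j] = t_i, S[i, j] = s_j
    K = np.where(S <= T, (1.0 - T) * S, (1.0 - S) * T)   # kernel (5.2)
    w = np.full(n + 1, 1.0 / n)                # trapezoidal weights
    w[0] = w[-1] = 0.5 / n
    F = lambda u: K @ (w * u**3)               # F(u), cf. (5.1)
    Fprime = lambda u: K * (3.0 * w * u**2)    # matrix of F'(u), cf. (5.4)
    return t, F, Fprime

# Consistency check: -y'' = (t^3)^3 with y(0) = y(1) = 0 has the
# solution y(t) = (t - t^11)/110, so F(t^3) should match it to O(h^2).
t, F, Fprime = example_51(200)
assert np.abs(F(t**3) - (t - t**11) / 110.0).max() < 1e-3
```

With these pieces, a call such as adaptive_tsmnlm(F, Fprime(x0_vals), x0_vals, y_delta, delta, alpha0, mu, q, C) reproduces the structure of the experiment; the constants actually used in Table 1 are those quoted below.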
Observe that while performing the numerical computation on the finite-dimensional subspaces $(V_n)$ of $X$, one has to consider the operator $P_n F'(x_0)P_n$ instead of $F'(x_0)$, where $P_n$ is the orthogonal projection onto $V_n$. This incurs an additional error $\|P_n F'(x_0)P_n - F'(x_0)\| = O(\|F'(x_0)(I - P_n)\|)$.
Let $\|F'(x_0)(I - P_n)\| \le \varepsilon_n$. For the operator $F'(x_0)$ defined in (5.4), $\varepsilon_n = O(n^{-2})$ (cf. [15]). Thus we expect to obtain the rate of convergence $O((\delta + \varepsilon_n)^{1/2})$.
We choose $\alpha_0 = 1.5\delta$, $\mu = 1.5$, and $q = 0.51$. The results of the computation (to four decimal places) are presented in Table 1. Plots of the exact and the approximate solutions are given in Figures 1 and 2.

Table 1
Figure 1: Curves of the exact and approximate solutions.
Figure 2: Curves of the exact and approximate solutions.

The last column of Table 1 shows that the error $\|x_k - \hat{x}\|$ is of order $O((\delta + \varepsilon_n)^{1/2})$.

6. Conclusion

We considered a two-step method for obtaining an approximate solution of a nonlinear ill-posed operator equation $F(x) = f$, where $F : D(F) \subseteq X \to X$ is a monotone operator defined on a real Hilbert space $X$, when the available data are $f^\delta$ with $\|f - f^\delta\| \le \delta$. The proposed method converges to a solution of the regularized equation $F(x) + \alpha(x - x_0) = f^\delta$ and hence yields an approximation of the solution $\hat{x}$ for a properly chosen parameter $\alpha$. We obtained an optimal-order error estimate by choosing the regularization parameter $\alpha$ according to the adaptive method of Pereverzev and Schock [3]. The computational results provided endorse the reliability and effectiveness of our method.

Acknowledgment

S. Pareth thanks the National Institute of Technology Karnataka, Surathkal for the financial support.

References

  1. S. George and A. I. Elmahdy, "A quadratic convergence yielding iterative method for nonlinear ill-posed operator equations," Computational Methods in Applied Mathematics, vol. 12, no. 1, pp. 32–45, 2012.
  2. J. Janno and U. Tautenhahn, "On Lavrentiev regularization for ill-posed problems in Hilbert scales," Numerical Functional Analysis and Optimization, vol. 24, no. 5-6, pp. 531–555, 2003.
  3. S. Pereverzev and E. Schock, "On the adaptive selection of the parameter in regularization of ill-posed problems," SIAM Journal on Numerical Analysis, vol. 43, no. 5, pp. 2060–2076, 2005.
  4. E. V. Semenova, "Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators," Computational Methods in Applied Mathematics, vol. 10, no. 4, pp. 444–454, 2010.
  5. U. Tautenhahn, "On the method of Lavrentiev regularization for nonlinear ill-posed problems," Inverse Problems, vol. 18, no. 1, pp. 191–207, 2002.
  6. B. Blaschke, A. Neubauer, and O. Scherzer, "On convergence rates for the iteratively regularized Gauss-Newton method," IMA Journal of Numerical Analysis, vol. 17, no. 3, pp. 421–436, 1997.
  7. P. Deuflhard, H. W. Engl, and O. Scherzer, "A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions," Inverse Problems, vol. 14, no. 5, pp. 1081–1106, 1998.
  8. S. George, "Newton-Lavrentiev regularization of ill-posed Hammerstein type operator equation," Journal of Inverse and Ill-Posed Problems, vol. 14, no. 6, pp. 573–582, 2006.
  9. S. George and M. T. Nair, "A modified Newton-Lavrentiev regularization for nonlinear ill-posed Hammerstein-type operator equations," Journal of Complexity, vol. 24, no. 2, pp. 228–240, 2008.
  10. Q.-N. Jin, "Error estimates of some Newton-type methods for solving nonlinear inverse problems in Hilbert scales," Inverse Problems, vol. 16, no. 1, pp. 187–197, 2000.
  11. Q.-N. Jin, "On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems," Mathematics of Computation, vol. 69, no. 232, pp. 1603–1623, 2000.
  12. P. Mahale and M. T. Nair, "Iterated Lavrentiev regularization for nonlinear ill-posed problems," The ANZIAM Journal, vol. 51, no. 2, pp. 191–217, 2009.
  13. I. K. Argyros and S. Hilout, "A convergence analysis for directional two-step Newton methods," Numerical Algorithms, vol. 55, no. 4, pp. 503–528, 2010.
  14. M. T. Nair and P. Ravishankar, "Regularized versions of continuous Newton's method and continuous modified Newton's method under general source conditions," Numerical Functional Analysis and Optimization, vol. 29, no. 9-10, pp. 1140–1165, 2008.
  15. C. W. Groetsch, J. T. King, and D. Murio, "Asymptotic analysis of a finite element method for Fredholm equations of the first kind," in Treatment of Integral Equations by Numerical Methods, pp. 1–11, Academic Press, London, UK, 1982.