Abstract

A two step modified Newton method is considered for obtaining an approximate solution of the nonlinear ill-posed equation $F(x)=f$ when the available data are $f^\delta$ with $\|f-f^\delta\|\le\delta$ and the operator $F$ is monotone. The derived error estimate under a general source condition on $x_0-\hat x$ is of optimal order; here $x_0$ is the initial guess and $\hat x$ is the actual solution. The regularization parameter is chosen according to the adaptive method considered by Pereverzev and Schock (2005). The computational results provided endorse the reliability and effectiveness of our method.

1. Introduction

Recently George and Elmahdy [1] studied iterative methods for solving the ill-posed operator equation
$$F(x)=f,\tag{1.1}$$
where $F\colon D(F)\subseteq X\to X$ is a nonlinear monotone operator (i.e., $\langle F(x)-F(y),x-y\rangle\ge 0$ for all $x,y\in D(F)$) and $X$ is a real Hilbert space. The convergence analysis in [1] is based on suitably constructed majorizing sequences. Recall that a sequence $(t_n)$ in $\mathbb{R}$ is said to be a majorizing sequence of a sequence $(x_n)\subseteq X$ if
$$\|x_{n+1}-x_n\|\le t_{n+1}-t_n,\quad n\ge 0.\tag{1.2}$$

Throughout this paper, the inner product and the corresponding norm on the Hilbert space $X$ are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively, $D(F)$ is the domain of $F$, and $F'(\cdot)$ is the Fréchet derivative of $F$.

In applications, usually only noisy data $f^\delta$ are available, such that
$$\|f-f^\delta\|\le\delta.\tag{1.3}$$
Then the problem of recovering $\hat x$ from the noisy equation $F(x)=f^\delta$ is ill posed, in the sense that a small perturbation in the data can cause a large deviation in the solution.

For monotone operators, one usually uses the Lavrentiev regularization method (see [2–5]) for solving (1.1). In this method, the regularized approximation $x_\alpha^\delta$ is obtained by solving the operator equation
$$F(x)+\alpha(x-x_0)=f^\delta.\tag{1.4}$$
It is known (cf. [5], Theorem 1.1) that (1.4) has a unique solution $x_\alpha^\delta\in B_r(\hat x)=\{x\in X:\|x-\hat x\|<r\}\subseteq D(F)$ for any $\alpha>0$, provided that $r=\|x_0-\hat x\|+\delta/\alpha$.

The optimality of the Lavrentiev method was proved in [5] under a general source condition on $x_0-\hat x$. However, the main drawback here is that the regularized equation (1.4) remains nonlinear, and one may have difficulties in solving it numerically.

Thus in the last few years, more emphasis was put on the investigation of iterative regularization methods (see [6–12]). In this paper, we consider a modified form of the method considered in [1], but we analyse the method as a two step modified Newton Lavrentiev method (TSMNLM). The proposed analysis is motivated by the two step directional Newton method (TSDNM) considered in [13] by Argyros and Hilout for approximating a zero $x^*$ of a differentiable function $F$ defined on a convex subset $\mathcal{D}$ of a Hilbert space $H$ with values in $\mathbb{R}$. The TSMNLM for approximating the zero $x_\alpha^\delta$ of $F(x)-f^\delta+\alpha(x-x_0)=0$ is defined as
$$y_{n,\alpha}^\delta=x_{n,\alpha}^\delta-R_\alpha(x_0)^{-1}\bigl[F(x_{n,\alpha}^\delta)-f^\delta+\alpha(x_{n,\alpha}^\delta-x_0)\bigr],\tag{1.5}$$
$$x_{n+1,\alpha}^\delta=y_{n,\alpha}^\delta-R_\alpha(x_0)^{-1}\bigl[F(y_{n,\alpha}^\delta)-f^\delta+\alpha(y_{n,\alpha}^\delta-x_0)\bigr],\tag{1.6}$$
where $x_{0,\alpha}^\delta=x_0$ and $R_\alpha(x_0)=F'(x_0)+\alpha I$. Here the regularization parameter $\alpha$ is chosen from the finite set $D_M=\{\alpha_i:0<\alpha_0<\alpha_1<\cdots<\alpha_M\}$.
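To make the iteration concrete, the following Python sketch runs (1.5)–(1.6) for a discretized problem. It is only an illustration under our own assumptions: a finite-dimensional approximation of $F$ and of $F'$ is supplied by the caller, and the names tsmnlm, F, Fprime, and n_iter are ours, not from the paper.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def tsmnlm(F, Fprime, f_delta, x0, alpha, n_iter):
    # Two step modified Newton-Lavrentiev iteration (1.5)-(1.6), sketch.
    # F(x) and Fprime(x) are user-supplied discretizations of the operator
    # and its Frechet derivative; x0 is the initial guess.
    R = Fprime(x0) + alpha * np.eye(x0.size)   # R_alpha(x0) = F'(x0) + alpha*I
    lu = lu_factor(R)                          # factor once, reuse in every step
    x = x0.copy()
    for _ in range(n_iter):
        y = x - lu_solve(lu, F(x) - f_delta + alpha * (x - x0))  # step (1.5)
        x = y - lu_solve(lu, F(y) - f_delta + alpha * (y - x0))  # step (1.6)
    return x

Note that $R_\alpha(x_0)$ is kept fixed throughout the iteration, so a single factorization serves every linear solve; this is the computational advantage of the modified (frozen-derivative) Newton scheme.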

The organization of this paper is as follows. Convergence analysis of the TSMNLM is given in Section 2, error bounds under an a priori and under an adaptive choice of the regularization parameter are given in Section 3, and Section 4 deals with the implementation of the adaptive choice rule. An example and the results of computational experiments are given in Section 5. Finally, the paper ends with a conclusion in Section 6.

2. Convergence Analysis for TSMNLM

We need the following assumptions for the convergence analysis of TSMNLM.

Assumption 2.1 (see [4]). $F$ possesses a locally uniformly bounded Fréchet derivative $F'(\cdot)$ at all $x$ in the domain $D(F)$.

Assumption 2.2 (cf. [4], Assumption 3). There exists a constant $k_0>0$ such that for every $x,u\in D(F)$ and $v\in X$, there exists an element $\Phi(x,u,v)\in X$ satisfying
$$[F'(x)-F'(u)]v=F'(u)\Phi(x,u,v),\qquad \|\Phi(x,u,v)\|\le k_0\|v\|\,\|x-u\|\tag{2.1}$$
for all $x,u\in D(F)$ and $v\in X$.

Let
$$e_{n,\alpha}^\delta=\|y_{n,\alpha}^\delta-x_{n,\alpha}^\delta\|,\quad n\ge 0,\qquad 0<\delta_0<\frac{3\alpha_0}{4k_0},\qquad \rho<\Bigl(\frac{3}{2}-\frac{2k_0\delta_0}{\alpha_0}\Bigr)\frac{1}{k_0},\tag{2.2}$$
$$\gamma_\rho=\frac{\delta_0}{\alpha_0}+\frac{k_0}{2}\rho^2+\rho.\tag{2.3}$$
Hereafter, for convenience, we use the notations $x_n$, $y_n$, and $e_n$ for $x_{n,\alpha}^\delta$, $y_{n,\alpha}^\delta$, and $e_{n,\alpha}^\delta$, respectively.

Remark 2.3. Note that if $\|x_0-\hat x\|\le\rho$, then by Assumption 2.2 it follows that
$$\begin{aligned}
e_0=\|y_0-x_0\|&=\bigl\|R_\alpha(x_0)^{-1}\bigl[f^\delta-F(x_0)\bigr]\bigr\|\\
&=\bigl\|R_\alpha(x_0)^{-1}\bigl[f^\delta-f+F(\hat x)-F(x_0)-F'(x_0)(\hat x-x_0)+F'(x_0)(\hat x-x_0)\bigr]\bigr\|\\
&\le\bigl\|R_\alpha(x_0)^{-1}(f^\delta-f)\bigr\|+\Bigl\|R_\alpha(x_0)^{-1}\int_0^1\bigl[F'(x_0+t(\hat x-x_0))-F'(x_0)\bigr](\hat x-x_0)\,dt\Bigr\|+\bigl\|R_\alpha(x_0)^{-1}F'(x_0)(\hat x-x_0)\bigr\|\\
&\le\frac{\delta}{\alpha}+\frac{k_0}{2}\rho^2+\rho\le\frac{\delta_0}{\alpha_0}+\frac{k_0}{2}\rho^2+\rho=\gamma_\rho.
\end{aligned}\tag{2.4}$$

Let
$$q=k_0r.\tag{2.5}$$
Then $\gamma_\rho/(1-q)<r$ (equivalently, $k_0r^2-r+\gamma_\rho<0$) if
$$r\in\Bigl(\frac{1-\sqrt{1-4k_0\gamma_\rho}}{2k_0},\,\frac{1+\sqrt{1-4k_0\gamma_\rho}}{2k_0}\Bigr).\tag{2.6}$$

Theorem 2.4. Let $q$ and $r$ be as in (2.5) and (2.6), respectively, and let $\gamma_\rho$ be as in (2.3). Let $e_n$ be as in (2.2) and let $y_n$ and $x_n$ be as in (1.5) and (1.6), respectively, with $\delta\in[0,\delta_0)$ and $\alpha\in D_M$. Then
(a) $\|x_n-y_{n-1}\|\le q\|y_{n-1}-x_{n-1}\|$;
(b) $\|y_n-x_n\|\le q^2\|y_{n-1}-x_{n-1}\|$;
(c) $e_n\le q^{2n}\gamma_\rho$;
(d) $x_n,y_n\in B_r(x_0)$.

Proof. Observe that if $x_{n-1},y_{n-1}\in B_r(x_0)$, then by Assumption 2.2 we have
$$\begin{aligned}
x_n-y_{n-1}&=y_{n-1}-x_{n-1}-R_\alpha(x_0)^{-1}\bigl[F(y_{n-1})-F(x_{n-1})+\alpha(y_{n-1}-x_{n-1})\bigr]\\
&=R_\alpha(x_0)^{-1}\bigl[R_\alpha(x_0)(y_{n-1}-x_{n-1})-F(y_{n-1})+F(x_{n-1})-\alpha(y_{n-1}-x_{n-1})\bigr]\\
&=R_\alpha(x_0)^{-1}\int_0^1\bigl[F'(x_0)-F'(x_{n-1}+t(y_{n-1}-x_{n-1}))\bigr](y_{n-1}-x_{n-1})\,dt\\
&=R_\alpha(x_0)^{-1}F'(x_0)\int_0^1\Phi\bigl(x_0,x_{n-1}+t(y_{n-1}-x_{n-1}),y_{n-1}-x_{n-1}\bigr)\,dt,
\end{aligned}\tag{2.7}$$
and hence
$$\|x_n-y_{n-1}\|\le k_0r\|y_{n-1}-x_{n-1}\|.\tag{2.8}$$
Again observe that if $y_{n-1},x_n\in B_r(x_0)$, by Assumption 2.2 and (2.8) we have
$$\begin{aligned}
y_n-x_n&=x_n-y_{n-1}-R_\alpha(x_0)^{-1}\bigl[F(x_n)-f^\delta+\alpha(x_n-x_0)\bigr]\\
&=R_\alpha(x_0)^{-1}\bigl[R_\alpha(x_0)(x_n-y_{n-1})-\bigl(F(x_n)-F(y_{n-1})\bigr)-\alpha(x_n-y_{n-1})\bigr]\\
&=R_\alpha(x_0)^{-1}\int_0^1\bigl[F'(x_0)-F'(y_{n-1}+t(x_n-y_{n-1}))\bigr](x_n-y_{n-1})\,dt\\
&=R_\alpha(x_0)^{-1}F'(x_0)\int_0^1\Phi\bigl(x_0,y_{n-1}+t(x_n-y_{n-1}),x_n-y_{n-1}\bigr)\,dt,
\end{aligned}\tag{2.9}$$
and hence
$$\|y_n-x_n\|\le k_0r\|x_n-y_{n-1}\|\le q^2\|y_{n-1}-x_{n-1}\|.\tag{2.10}$$
Thus, once it is known that the iterates remain in $B_r(x_0)$, (a) and (b) follow from (2.8) and (2.10), respectively. Now using induction we will prove that $x_n,y_n\in B_r(x_0)$. Note that $x_0,y_0\in B_r(x_0)$ and hence by (2.8)
$$\|x_1-x_0\|\le\|x_1-y_0\|+\|y_0-x_0\|\le(1+q)e_0\le\frac{e_0}{1-q}\le\frac{\gamma_\rho}{1-q}<r,\tag{2.11}$$
that is, $x_1\in B_r(x_0)$; again by (2.10)
$$\|y_1-x_0\|\le\|y_1-x_1\|+\|x_1-x_0\|\le q^2e_0+(1+q)e_0\le\frac{e_0}{1-q}\le\frac{\gamma_\rho}{1-q}<r,\tag{2.12}$$
that is, $y_1\in B_r(x_0)$. Suppose $x_k,y_k\in B_r(x_0)$ for some $k>1$. Then since
$$\|x_{k+1}-x_0\|\le\|x_{k+1}-x_k\|+\|x_k-x_{k-1}\|+\cdots+\|x_1-x_0\|,\tag{2.13}$$
we will first find an estimate for $\|x_{k+1}-x_k\|$. Note that by (a) and (b) we have
$$\|x_{k+1}-x_k\|\le\|x_{k+1}-y_k\|+\|y_k-x_k\|\le(q+1)\|y_k-x_k\|\le(1+q)q^{2k}e_0.\tag{2.14}$$
Therefore by (2.13) we have
$$\|x_{k+1}-x_0\|\le(1+q)\bigl(q^{2k}+q^{2(k-1)}+\cdots+1\bigr)e_0\le(1+q)\frac{1-q^{2k+2}}{1-q^2}e_0\le\frac{e_0}{1-q}\le\frac{\gamma_\rho}{1-q}<r.\tag{2.15}$$
So by induction $x_n\in B_r(x_0)$ for all $n\ge 0$. Again by (a), (b), and (2.15) we have
$$\|y_{k+1}-x_0\|\le\|y_{k+1}-x_{k+1}\|+\|x_{k+1}-x_0\|\le q^{2k+2}e_0+(1+q)\bigl(q^{2k}+q^{2(k-1)}+\cdots+1\bigr)e_0\le(1+q)\frac{1-q^{2k+3}}{1-q^2}e_0\le\frac{e_0}{1-q}\le\frac{\gamma_\rho}{1-q}<r.\tag{2.16}$$
Thus $y_{k+1}\in B_r(x_0)$ and hence by induction $y_n\in B_r(x_0)$ for all $n\ge 0$. This completes the proof of the theorem.

The main result of this section is the following theorem.

Theorem 2.5. Let $y_n$ and $x_n$ be as in (1.5) and (1.6), respectively, and let the assumptions of Theorem 2.4 hold. Then $(x_n)$ is a Cauchy sequence in $B_r(x_0)$ and converges to $x_\alpha^\delta\in B_r(x_0)$. Further, $F(x_\alpha^\delta)+\alpha(x_\alpha^\delta-x_0)=f^\delta$ and
$$\|x_n-x_\alpha^\delta\|\le\frac{q^{2n}\gamma_\rho}{1-q}.\tag{2.17}$$

Proof. Using the relations (b) and (c) of Theorem 2.4, we obtain
$$\|x_{n+m}-x_n\|\le\sum_{i=0}^{m-1}\|x_{n+i+1}-x_{n+i}\|\le\sum_{i=0}^{m-1}(1+q)e_{n+i}\le\sum_{i=0}^{m-1}(1+q)q^{2(n+i)}e_0\le(1+q)\frac{q^{2n}-q^{2n+2m}}{1-q^2}e_0\le\frac{q^{2n}}{1-q}e_0\le\frac{q^{2n}}{1-q}\gamma_\rho.\tag{2.18}$$
Thus $(x_n)$ is a Cauchy sequence in $B_r(x_0)$ and hence it converges, say, to $x_\alpha^\delta\in B_r(x_0)$. Observe that
$$\bigl\|F(x_n)-f^\delta+\alpha(x_n-x_0)\bigr\|=\bigl\|R_\alpha(x_0)(x_n-y_n)\bigr\|\le\|R_\alpha(x_0)\|\,q^{2n}\gamma_\rho.\tag{2.19}$$
Now by letting $n\to\infty$ in (2.19), we obtain $F(x_\alpha^\delta)-f^\delta+\alpha(x_\alpha^\delta-x_0)=0$. This completes the proof.

3. Error Bounds under Source Conditions

The objective of this section is to obtain an error estimate for $\|x_n-\hat x\|$ under the following assumption on $x_0-\hat x$.

Assumption 3.1 (see [4]). There exists a continuous, strictly monotonically increasing function $\varphi\colon(0,a]\to(0,\infty)$ with $a\ge\|F'(\hat x)\|$ satisfying $\lim_{\lambda\to 0}\varphi(\lambda)=0$, and there exists $v\in X$ with $\|v\|\le 1$ such that
$$x_0-\hat x=\varphi(F'(\hat x))v,\qquad \sup_{\lambda\ge 0}\frac{\alpha\varphi(\lambda)}{\lambda+\alpha}\le c_\varphi\varphi(\alpha),\quad\lambda\in(0,a].\tag{3.1}$$

Remark 3.2. It can be seen that the functions
$$\varphi(\lambda)=\lambda^\nu,\quad\lambda>0,\tag{3.2}$$
for $0<\nu\le 1$ and
$$\varphi(\lambda)=\begin{cases}\bigl(\ln\frac{1}{\lambda}\bigr)^{-p},&0<\lambda\le e^{-(p+1)},\\[2pt] 0,&\text{otherwise},\end{cases}\tag{3.3}$$
for $p\ge 0$ satisfy Assumption 3.1 (see [14]).
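As a quick check for the power-type function (3.2) with $0<\nu<1$ (this computation is ours and is meant only as an illustration), maximizing $\lambda\mapsto\alpha\lambda^\nu/(\lambda+\alpha)$ over $\lambda>0$ gives the maximizer $\lambda=\nu\alpha/(1-\nu)$ and
$$\sup_{\lambda>0}\frac{\alpha\lambda^\nu}{\lambda+\alpha}=\nu^\nu(1-\nu)^{1-\nu}\alpha^\nu\le\alpha^\nu=\varphi(\alpha),$$
so the second part of Assumption 3.1 holds with $c_\varphi\le 1$; for $\nu=1$ the supremum equals $\alpha$, again giving $c_\varphi=1$.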

We will be using the error estimates in the following proposition, which can be found in [5], for our error analysis.

Proposition 3.3 (cf. [5], Proposition 3.1). Let $\hat x\in D(F)$ be a solution of (1.1) and let $F\colon D(F)\subseteq X\to X$ be a monotone operator in $X$. Let $x_\alpha$ be the unique solution of (1.4) with $f$ in place of $f^\delta$ and let $x_\alpha^\delta$ be the unique solution of (1.4). Then
$$\|x_\alpha^\delta-x_\alpha\|\le\frac{\delta}{\alpha},\qquad\|x_\alpha-\hat x\|\le\|x_0-\hat x\|.\tag{3.4}$$

The following theorem can be found in [1].

Theorem 3.4 (see [1], Theorem 3.1). Let $x_\alpha$ be the unique solution of (1.4) with $f$ in place of $f^\delta$. Let the assumptions of Proposition 3.3 and Assumptions 2.1, 2.2, and 3.1 be satisfied. Then
$$\|x_\alpha-\hat x\|\le(k_0r+1)c_\varphi\varphi(\alpha).\tag{3.5}$$

Combining the estimates in Proposition 3.3 and Theorems 2.5 and 3.4, we obtain the following.

Theorem 3.5. Let $x_n$ be as in (1.6) and let the assumptions of Proposition 3.3 and Theorems 2.5 and 3.4 be satisfied. Then
$$\|x_n-\hat x\|\le\frac{q^{2n}\gamma_\rho}{1-q}+\max\bigl\{1,(k_0r+1)c_\varphi\bigr\}\Bigl(\varphi(\alpha)+\frac{\delta}{\alpha}\Bigr).\tag{3.6}$$

Let
$$C=\frac{\gamma_\rho}{1-q}+\max\bigl\{1,(k_0r+1)c_\varphi\bigr\},\tag{3.7}$$
and let
$$n_\delta=\min\Bigl\{n:q^{2n}\le\frac{\delta}{\alpha}\Bigr\}.\tag{3.8}$$
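In practice (this observation is not made explicitly in the paper, but follows directly from (3.8) since $0<q<1$), the stopping index can be evaluated in closed form: $q^{2n}\le\delta/\alpha$ is equivalent to $n\ge\ln(\delta/\alpha)/(2\ln q)$, so $n_\delta=\lceil\ln(\delta/\alpha)/(2\ln q)\rceil$ whenever $\delta<\alpha$, and $n_\delta=0$ otherwise.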

Theorem 3.6. Let $x_\alpha^\delta$ be the unique solution of (1.4) and let $x_n$ be as in (1.6). Let the assumptions of Theorem 3.5 be satisfied. Let $C$ be as in (3.7) and let $n_\delta$ be as in (3.8). Then
$$\|x_{n_\delta}-\hat x\|\le C\Bigl(\varphi(\alpha)+\frac{\delta}{\alpha}\Bigr).\tag{3.9}$$

3.1. A Priori Choice of the Parameter

Note that the error estimate $\varphi(\alpha)+\delta/\alpha$ in (3.9) is of optimal order if $\alpha=\alpha_\delta$ satisfies $\varphi(\alpha_\delta)\alpha_\delta=\delta$.

Now using the function $\psi(\lambda)=\lambda\varphi^{-1}(\lambda)$, $0<\lambda\le a$, we have $\delta=\alpha_\delta\varphi(\alpha_\delta)=\psi(\varphi(\alpha_\delta))$, so that $\alpha_\delta=\varphi^{-1}(\psi^{-1}(\delta))$. In view of the above observations and (3.9), we have the following.

Theorem 3.7. Let $\psi(\lambda)=\lambda\varphi^{-1}(\lambda)$ for $0<\lambda\le a$, and let the assumptions of Theorem 3.6 hold. For $\delta>0$, let $\alpha=\alpha_\delta=\varphi^{-1}(\psi^{-1}(\delta))$. Let $n_\delta$ be as in (3.8). Then
$$\|x_{n_\delta}-\hat x\|=O\bigl(\psi^{-1}(\delta)\bigr).\tag{3.10}$$
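As a concrete instance (using the power-type function of Remark 3.2; this worked example is ours): for $\varphi(\lambda)=\lambda^\nu$ with $0<\nu\le 1$ one gets $\psi(\lambda)=\lambda^{(\nu+1)/\nu}$, hence $\psi^{-1}(\delta)=\delta^{\nu/(\nu+1)}$ and $\alpha_\delta=\varphi^{-1}(\psi^{-1}(\delta))=\delta^{1/(\nu+1)}$, for which indeed $\varphi(\alpha_\delta)=\delta/\alpha_\delta=\delta^{\nu/(\nu+1)}$. Theorem 3.7 then gives the familiar Hölder rate $\|x_{n_\delta}-\hat x\|=O(\delta^{\nu/(\nu+1)})$, which for $\nu=1$ is the rate $O(\delta^{1/2})$ observed in Section 5.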

3.2. An Adaptive Choice of the Parameter

In this subsection, we will present a parameter choice rule based on the adaptive method studied in [3, 9]. Let
$$D_M=\bigl\{\alpha_i=\mu^i\alpha_0,\;i=0,1,\ldots,M\bigr\},\tag{3.11}$$
where $\mu>1$ and $\alpha_0>0$, and let
$$n_i=\min\Bigl\{n:q^{2n}\le\frac{\delta}{\alpha_i}\Bigr\}.\tag{3.12}$$
Then for $i=0,1,\ldots,M$, we have
$$\|x_{n_i,\alpha_i}^\delta-x_{\alpha_i}^\delta\|\le\frac{\delta}{\alpha_i},\quad i=0,1,\ldots,M.\tag{3.13}$$

Let $x_i:=x_{n_i,\alpha_i}^\delta$. In this paper, we select $\alpha=\alpha_i$ from the set $D_M$ and compute the corresponding $x_i$ for each $i=0,1,\ldots,M$.

Theorem 3.8 (cf. [9], Theorem 4.3). Assume that there exists $i\in\{0,1,2,\ldots,M\}$ such that $\varphi(\alpha_i)\le\delta/\alpha_i$. Let the assumptions of Theorems 3.6 and 3.7 hold and let
$$l:=\max\Bigl\{i:\varphi(\alpha_i)\le\frac{\delta}{\alpha_i}\Bigr\}<M,\qquad k:=\max\Bigl\{i:\|x_i-x_j\|\le 4C\frac{\delta}{\alpha_j},\;j=0,1,2,\ldots,i-1\Bigr\}.\tag{3.14}$$
Then $l\le k$ and
$$\|\hat x-x_k\|\le c\,\psi^{-1}(\delta),\tag{3.15}$$
where $c=6C\mu$.

4. Implementation of Adaptive Choice Rule

The following steps are involved in implementing the adaptive choice rule.
(i) Choose $\alpha_0>0$ such that $\delta_0<3\alpha_0/(4k_0)$ and $\mu>1$.
(ii) Choose $\alpha_i=\mu^i\alpha_0$, $i=0,1,2,\ldots$.

Finally the adaptive algorithm associated with the choice of the parameter specified in Theorem 3.8 involves the following steps.

4.1. Algorithm

Step 1. Set 𝑖=0.

Step 2. Choose $n_i=\min\{n:q^{2n}\le\delta/\alpha_i\}$.

Step 3. Compute $x_i=x_{n_i,\alpha_i}^\delta$ using the iterations (1.5) and (1.6).

Step 4. If $\|x_i-x_j\|>4C\delta/\alpha_j$ for some $j<i$, then take $k=i-1$ and return $x_k$.

Step 5. Else set 𝑖=𝑖+1 and go to Step 2.
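A compact sketch of this loop (our own illustrative code, reusing the tsmnlm routine sketched in Section 1; the names adaptive_tsmnlm and i_max are ours, and C is the constant from (3.7)):

import numpy as np

def adaptive_tsmnlm(F, Fprime, f_delta, x0, delta, alpha0, mu, q, C, i_max=25):
    # Adaptive choice of alpha (Section 4.1), sketch: alpha_i = mu**i * alpha0.
    iterates, alphas = [], []
    for i in range(i_max):
        alpha_i = mu**i * alpha0
        # Step 2: n_i = min{ n : q^(2n) <= delta/alpha_i }
        n_i = max(0, int(np.ceil(np.log(delta / alpha_i) / (2.0 * np.log(q)))))
        # Step 3: run the two step iteration (1.5)-(1.6) with n_i steps
        x_i = tsmnlm(F, Fprime, f_delta, x0, alpha_i, n_i)
        # Step 4: stop once the discrepancy with an earlier iterate is too large
        for x_j, alpha_j in zip(iterates, alphas):
            if np.linalg.norm(x_i - x_j) > 4.0 * C * delta / alpha_j:
                return iterates[-1]          # x_k with k = i - 1
        iterates.append(x_i)
        alphas.append(alpha_i)
    return iterates[-1]

The inner comparison implements Step 4: as soon as the discrepancy $4C\delta/\alpha_j$ is exceeded for some earlier index $j$, the previous iterate is returned as $x_k$.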

5. Numerical Example

In this section, we consider the example from [4] to illustrate the algorithm given in Section 4.1. We apply the algorithm by choosing a sequence of finite dimensional subspaces $(V_n)$ of $X$ with $\dim V_n=n+1$. Precisely, we choose $V_n$ as the linear span of $\{v_1,v_2,\ldots,v_{n+1}\}$, where $v_i$, $i=1,2,\ldots,n+1$, are the linear splines on a uniform grid of $n+1$ points in $[0,1]$.

Example 5.1 (see [4], Section 4.3). Let $F\colon D(F)\subseteq L^2(0,1)\to L^2(0,1)$ be defined by
$$F(u)(t)=\int_0^1 k(t,s)\,u^3(s)\,ds,\tag{5.1}$$
where
$$k(t,s)=\begin{cases}(1-t)s,&0\le s\le t\le 1,\\ (1-s)t,&0\le t\le s\le 1.\end{cases}\tag{5.2}$$
Then for all $x(t),y(t)$ with $x(t)>y(t)$,
$$\langle F(x)-F(y),x-y\rangle=\int_0^1\Bigl(\int_0^1 k(t,s)\bigl(x^3-y^3\bigr)(s)\,ds\Bigr)(x-y)(t)\,dt\ge 0.\tag{5.3}$$
Thus the operator $F$ is monotone. The Fréchet derivative of $F$ is given by
$$F'(u)w=3\int_0^1 k(t,s)(u(s))^2w(s)\,ds.\tag{5.4}$$
Note that for $u,v>0$,
$$[F'(v)-F'(u)]w=3\int_0^1 k(t,s)\bigl((v(s))^2-(u(s))^2\bigr)w(s)\,ds=\Bigl(3\int_0^1 k(t,s)(u(s))^2\,ds\Bigr)\frac{\int_0^1 k(t,s)\bigl((v(s))^2-(u(s))^2\bigr)w(s)\,ds}{\int_0^1 k(t,s)(u(s))^2\,ds}=F'(u)\Phi(v,u,w),\tag{5.5}$$
where $\Phi(v,u,w)=\bigl(\int_0^1 k(t,s)((v(s))^2-(u(s))^2)w(s)\,ds\bigr)/\bigl(\int_0^1 k(t,s)(u(s))^2\,ds\bigr)$.
Observe that
$$\Phi(v,u,w)=\frac{\int_0^1 k(t,s)\bigl((v(s))^2-(u(s))^2\bigr)w(s)\,ds}{\int_0^1 k(t,s)(u(s))^2\,ds}=\frac{\int_0^1 k(t,s)(u(s)+v(s))(v(s)-u(s))w(s)\,ds}{\int_0^1 k(t,s)(u(s))^2\,ds}.\tag{5.6}$$
So Assumption 2.2 is satisfied with $k_0\ge\bigl(\int_0^1 k(t,s)(u(s)+v(s))\,ds\bigr)/\bigl(\int_0^1 k(t,s)(u(s))^2\,ds\bigr)$.
In our computation, we take $y(t)=(t-t^{11})/110$ and $y^\delta=y+\delta v_{n+1}$. Then the exact solution is
$$\hat x(t)=t^3.\tag{5.7}$$
We use
$$x_0(t)=t^3+\frac{3}{56}\bigl(t-t^8\bigr)\tag{5.8}$$
as our initial guess, so that the function $x_0-\hat x$ satisfies the source condition
$$x_0-\hat x=\varphi(F'(\hat x))1,\tag{5.9}$$
where $\varphi(\lambda)=\lambda$. Thus we expect to obtain the rate of convergence $O(\delta^{1/2})$.
Observe that while performing numerical computations on the finite dimensional subspaces $(V_n)$ of $X$, one has to consider the operator $P_nF'(x_0)P_n$ instead of $F'(x_0)$, where $P_n$ is the orthogonal projection onto $V_n$. This incurs an additional error $\|P_nF'(x_0)P_n-F'(x_0)\|=O(\|F'(x_0)(I-P_n)\|)$.
Let $\|F'(x_0)(I-P_n)\|\le\varepsilon_n$. For the operator $F'(x_0)$ defined in (5.4), $\varepsilon_n=O(n^{-2})$ (cf. [15]). Thus we expect to obtain the rate of convergence $O((\delta+\varepsilon_n)^{1/2})$.
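For readers who wish to reproduce a computation of this type, the following sketch assembles a simple quadrature discretization of (5.1) and (5.4) on a uniform grid. This particular discretization, the trapezoidal weights, and the name setup_example are our choices for illustration; the paper itself works with the projections $P_n$ onto the spline spaces $V_n$.

import numpy as np

def setup_example(m=101):
    # Uniform grid on [0,1] and trapezoidal quadrature weights
    t = np.linspace(0.0, 1.0, m)
    h = t[1] - t[0]
    w = np.full(m, h)
    w[0] = w[-1] = h / 2.0
    # Green's function kernel k(t,s) from (5.2)
    T, S = np.meshgrid(t, t, indexing="ij")
    K = np.where(S <= T, (1.0 - T) * S, (1.0 - S) * T)

    def F(u):
        # F(u)(t_i) ~ sum_j k(t_i, s_j) u(s_j)^3 w_j, cf. (5.1)
        return K @ (w * u**3)

    def Fprime(u):
        # matrix of F'(u)w = 3 * int k(t,s) u(s)^2 w(s) ds, cf. (5.4)
        return 3.0 * K * (w * u**2)[None, :]

    return t, F, Fprime

# example usage (our illustration), with the data and initial guess of this section:
# t, F, Fprime = setup_example()
# x_hat = t**3
# y = (t - t**11) / 110.0          # exact data; add a perturbation of size delta
# x0 = t**3 + (3.0 / 56.0) * (t - t**8)

The returned F and Fprime can then be passed to the tsmnlm and adaptive_tsmnlm sketches given earlier.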
We choose 𝛼0=(1.5)𝛿, 𝜇=1.5 and 𝑞=0.51. The results of the computation (four decimal places) are presented in Table 1. The plots of the exact solution and the approximate solution obtained are given in Figures 1 and 2.

The last column of Table 1 shows that the error $\|x_k-\hat x\|$ is of order $O((\delta+\varepsilon_n)^{1/2})$.

6. Conclusion

We considered a two step method for obtaining an approximate solution of a nonlinear ill-posed operator equation $F(x)=f$, where $F\colon D(F)\subseteq X\to X$ is a monotone operator defined on a real Hilbert space $X$, when the available data are $f^\delta$ with $\|f-f^\delta\|\le\delta$. The proposed method converges to a solution of the regularized equation $F(x)+\alpha(x-x_0)=f^\delta$ and hence provides an approximation of the solution $\hat x$ for a properly chosen parameter $\alpha$. We obtained an optimal order error estimate by choosing the regularization parameter $\alpha$ according to the adaptive method considered by Pereverzev and Schock [3]. The computational results provided endorse the reliability and effectiveness of our method.

Acknowledgment

S. Pareth thanks the National Institute of Technology Karnataka, Surathkal for the financial support.