Two-Step Newton-Tikhonov Method for Hammerstein-Type Equations: Finite-Dimensional Realization
Santhosh George and Monnanda Erappa Shobha
Academic Editor: A. J. Kearsley, J. Míguez, A. Bellouquid, J. Shen, A. El-Sayed
Received: 26 Dec 2011
Accepted: 26 Jan 2012
Published: 08 May 2012
Abstract
Finite-dimensional realization of a Two-Step Newton-Tikhonov method is considered for obtaining a stable approximate solution to nonlinear ill-posed Hammerstein-type operator equations KF(x) = f. Here F : D(F) ⊆ X → X is a nonlinear monotone operator, K : X → Y is a bounded linear operator, X is a real Hilbert space, and Y is a Hilbert space. The error analysis for this method is carried out under two general source conditions: the first involves the operator K, and the second involves the Fréchet derivative of F at an initial approximation x₀ of the solution x̂. The balancing principle of Pereverzev and Schock (2005) is employed in choosing the regularization parameter, and order-optimal error bounds are established. A numerical illustration confirms the reliability of our approach.
1. Introduction
Tikhonov's regularization method (e.g., [1]) has been used extensively to stabilize the approximate solution of nonlinear ill-posed problems. In recent years, increased emphasis has been placed on iterative regularization procedures [2, 3] for obtaining the approximate solution of such problems. In this paper, we examine the use of iterative regularization procedures for Hammerstein-type equations [4, 5] of the form
KF(x) = f, (1.1)
where F : D(F) ⊆ X → X is a nonlinear monotone operator, F′(·)⁻¹ does not exist, and K : X → Y is a bounded linear operator. Throughout this paper, D(F) is the domain of F, F′(·) is the Fréchet derivative of F, X is a real Hilbert space, and Y is a Hilbert space. The inner product and the corresponding norm are denoted by ⟨·,·⟩ and ‖·‖, respectively.
Recall [6] that the operator F is said to be a monotone operator if ⟨F(x₁) − F(x₂), x₁ − x₂⟩ ≥ 0 for all x₁, x₂ ∈ D(F).
It is assumed throughout that f^δ ∈ Y are the available noisy data with ‖f − f^δ‖ ≤ δ,
and that F possesses a uniformly bounded Fréchet derivative for each x ∈ D(F) (cf. [7]), that is,
‖F′(x)‖ ≤ M
for some M > 0.
Observe that the solution x of (1.1) with f^δ in place of f can be obtained by first solving
Kz = f^δ (1.4)
for z and then solving the nonlinear problem
F(x) = z. (1.5)
This splitting was exploited in [4, 5, 8]. In [4], z is approximated by the Tikhonov-regularized solution
z_α^δ := (K*K + αI)⁻¹K*f^δ,
where α > 0 is the regularization parameter, and (1.5) is then solved iteratively using the following Newton-type procedure:
x_{n+1} = x_n − F′(x₀)⁻¹(F(x_n) − z_α^δ),
with x₀ as the starting point; local linear convergence was obtained. Here and below, x₀ is a known initial approximation of the solution x̂ of (1.1).
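A frozen-derivative Newton iteration of this kind is easy to sketch in finite dimensions. The following snippet is only an illustration of the idea (the map F, its Jacobian dF, and the data z are toy stand-ins, not the paper's operators): the Jacobian is assembled once at x₀ and reused in every step, which is what yields linear rather than quadratic convergence.

```python
import numpy as np

# Illustrative sketch only: a Newton-type iteration with the Frechet
# derivative frozen at the initial guess x0, applied to a toy
# finite-dimensional problem F(x) = z.  The map F, its Jacobian dF, and
# the data z are hypothetical stand-ins, not the paper's operators.

def frozen_newton(F, dF, x0, z, n_iter=25):
    """Iterate x_{n+1} = x_n - F'(x0)^{-1} (F(x_n) - z)."""
    J0 = dF(x0)                  # Jacobian assembled once, at x0
    x = x0.copy()
    for _ in range(n_iter):
        x = x - np.linalg.solve(J0, F(x) - z)
    return x

# Toy monotone map: F(x) = x + x^3 componentwise.
F = lambda v: v + v**3
dF = lambda v: np.eye(len(v)) + 3.0 * np.diag(v**2)
x_true = np.array([0.2, 0.4, 0.6])
z = F(x_true)                    # exact data generated from a known solution
x = frozen_newton(F, dF, np.full(3, 0.5), z)
```

Because the derivative is frozen, each step costs one linear solve with the same matrix; the price is linear (rather than quadratic) convergence, matching the rate reported for [4].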
In [8], to solve (1.5), George and Kunhanandan used the iteration
where
and obtained local quadratic convergence.
A sequence (x_n) in X with lim_{n→∞} x_n = x* is said to be convergent of order p > 1 if there exist positive reals β and γ such that, for all n,
‖x_n − x*‖ ≤ β e^{−γ pⁿ}.
If the sequence (x_n) satisfies ‖x_n − x*‖ ≤ β e^{−γ n}, n > 0, then (x_n) is said to be linearly convergent.
As in [8], it is assumed that the solution of (1.1) satisfies
The regularization parameter is chosen from a finite set
using the adaptive method considered by Pereverzev and Schock in [9].
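The adaptive choice can be sketched as follows. This is a hedged illustration of the Pereverzev-Schock balancing principle, with a geometric grid and the constant 4 chosen for a Tikhonov-type δ/√α noise bound; the solver `x_of_alpha`, the grid ratio `mu`, and the grid size `N` are placeholders, not the paper's exact specification.

```python
import numpy as np

# Hedged sketch of the balancing (adaptive) principle of Pereverzev and
# Schock [9]: alpha is picked from a geometric grid alpha_i = mu**(2*i)
# * delta**2, enlarging alpha while successive regularized solutions stay
# within the noise-propagation radius 4*delta/sqrt(alpha_j).  The constant
# 4, the ratio mu, the grid size N, and the solver x_of_alpha are
# illustrative choices, not the paper's exact specification.

def balancing_parameter(x_of_alpha, delta, mu=1.5, N=15):
    alphas = [mu**(2 * i) * delta**2 for i in range(N + 1)]
    xs = [x_of_alpha(a) for a in alphas]
    k = 0
    for i in range(1, N + 1):
        ok = all(np.linalg.norm(xs[i] - xs[j]) <= 4.0 * delta / np.sqrt(alphas[j])
                 for j in range(i))
        if not ok:
            break
        k = i
    return alphas[k], xs[k]

# With a constant solution map every comparison passes, so the largest
# alpha on the grid is returned.
alpha, x_bal = balancing_parameter(lambda a: np.array([1.0]), delta=1e-3)
```

The attraction of this rule is that it needs no knowledge of the source-condition function: it only compares already-computed regularized solutions.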
In [10], Argyros and Hilout considered a method called the Two-Step Directional Newton Method (TSDNM) for approximating a zero of a differentiable function defined on a convex subset of a Hilbert space with values in ℝ. Motivated by TSDNM, in [11] we proposed a Two-Step Newton-Tikhonov Method (TSNTM) for solving (1.1).
In fact, in [11] we considered two cases of F: in the first case we assumed that F′(x₀)⁻¹ exists, and in the second case we assumed that F is monotone. In this paper we consider the finite-dimensional realization of the second case, that is, F monotone. The finite-dimensional realization of the method and the associated algorithm are proposed, for which local cubic convergence is established theoretically and validated numerically.
The organization of this paper is as follows. Section 2 deals with discretized Tikhonov regularization, and Section 3 investigates the convergence of the discretized TSNTM. Section 4 discusses the algorithm, and the paper ends with a numerical example in Section 5.
2. Discretized Tikhonov Regularization
This section deals with the discretized Tikhonov regularized solution of (1.4) and provides an a priori and an a posteriori error estimate for it, using an error estimate from [8].
The following assumption is used in [8] to obtain the error estimate.
Assumption 2.1. There exists a continuous, strictly monotonically increasing function with satisfying
(i) ,
(ii) ,
(iii) for some such that .
Remark 2.2. The functions
for and
for satisfy the above assumption (see [12]).
Theorem 2.3 (cf. [8], Theorem 4.3). Let be as in (1.9) and let Assumption 2.1 hold. Then
Let be a family of orthogonal projections on . Let
and is such that , and . We assume that and as . The above assumption is satisfied if P_h → I pointwise and if K and F′(x₀) are compact operators. Further, we assume that and , where . The discretized Tikhonov regularization method for the regularized equation (1.4) consists of solving the equation
Theorem 2.4. Suppose assumptions in Theorem 2.3 hold. Let be as in (2.7) and . Then
where .
Proof. Let . Then
Now the result follows from (2.9), Theorem 2.3 and the following triangle inequality:
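At the matrix level, the discretized Tikhonov step of this section can be sketched as follows; this is a hedged illustration in which `K`, `f_delta`, the projection rank `m`, and `alpha` are toy placeholders for the paper's operators and data.

```python
import numpy as np

# Matrix-level sketch of the discretized Tikhonov step: project the
# linear equation K z = f onto the range of an orthogonal projection P_h
# (here simply the span of the first m coordinate vectors) and solve the
# regularized normal equations.  K, f_delta, m, and alpha are
# illustrative placeholders, not the paper's exact operators.

def discretized_tikhonov(K, f_delta, m, alpha):
    n = K.shape[1]
    P = np.zeros((n, n))
    P[:m, :m] = np.eye(m)                 # orthogonal projection P_h
    KP = K @ P
    # solve (P_h K^T K P_h + alpha I) z = P_h K^T f_delta
    return np.linalg.solve(KP.T @ KP + alpha * np.eye(n), KP.T @ f_delta)

K = np.diag([1.0, 0.5, 0.25, 0.125])      # toy ill-conditioned operator
z_true = np.ones(4)
f = K @ z_true                            # noise-free data
z = discretized_tikhonov(K, f, m=4, alpha=1e-10)
```

For small alpha and exact data the regularized solution is close to the true one; with noisy data, alpha must be chosen by the a priori or adaptive rules discussed next.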
2.1. A Priori Choice of the Parameter
Note that the estimate in (2.8) is of optimal order for the choice α := α_δ which satisfies φ(α_δ) = δ/√α_δ. Let ψ(λ) := λ√(φ⁻¹(λ)). Then we have δ = √α_δ φ(α_δ) = ψ(φ(α_δ)), and hence α_δ = φ⁻¹(ψ⁻¹(δ)).
So the relation (2.8) leads to an order-optimal error bound O(ψ⁻¹(δ)).
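Assuming the bound in (2.8) has the standard two-term form, the a priori balance just described can be written out explicitly (e(α) denotes the total error bound; C is a generic constant):

```latex
e(\alpha) \le C\left(\varphi(\alpha) + \frac{\delta}{\sqrt{\alpha}}\right),
\qquad
\varphi(\alpha_\delta) = \frac{\delta}{\sqrt{\alpha_\delta}} .
% With \psi(\lambda) := \lambda \sqrt{\varphi^{-1}(\lambda)} one gets
\psi(\varphi(\alpha_\delta)) = \varphi(\alpha_\delta)\sqrt{\alpha_\delta} = \delta ,
\qquad\text{so}\qquad
\varphi(\alpha_\delta) = \psi^{-1}(\delta),
\quad
e(\alpha_\delta) \le 2C\,\psi^{-1}(\delta).
```

Balancing the two terms is exactly what the adaptive principle of the next subsection achieves without knowing φ explicitly.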
2.2. An Adaptive Choice of the Parameter
In this subsection, we consider the balancing principle established by Pereverzev and Schock [9] for choosing the parameter . Let
be the set of possible values of the parameter .
Let
where .
We use the following theorem, whose proof is analogous to that of Theorem 4.3 in [8], for our error analysis.
Theorem 2.5 (cf. [8], Theorem 4.3). Let be as in (2.13), let be as in (2.14), and let be as in (2.7) with . Then and
3. Discretized Two-Step Newton Method (DTSNM)
We need the following assumptions for the convergence of DTSNM and to obtain the error estimate.
Assumption 3.1 (cf. [7], Assumption 3 (A3)). There exists a constant such that for every and there exists an element such that , .
Assumption 3.2. There exists a continuous, strictly monotonically increasing function with satisfying
(i) ,
(ii) ,
(iii) there exists with (cf. [6]) such that ,
(iv) for each there exists a bounded linear operator (cf. [13]) such that
with . First we consider a DTSNM for approximating the zero of
and then we show that is an approximation to the solution of (1.1) where . For an initial guess and for , the DTSNM is defined as
where . Note that with the above notation
Let
and let be such that
Remark 3.3. Note that the above assumption is satisfied if we choose . Let be the function defined by
Let , with
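A finite-dimensional sketch of the two-step iteration can be given as follows. We assume, for the purpose of this hedged illustration, that the regularized equation has the Lavrentiev-type form F(x) + q(x − x₀) = z, with q playing the role of the regularization quotient and the derivative frozen at x₀ as in (3.5)-(3.6); the map, data, and parameters below are toy stand-ins, not the paper's exact scheme.

```python
import numpy as np

# Hedged sketch of a two-step (predictor-corrector) Newton iteration with
# the derivative frozen at x0, in the spirit of (3.5)-(3.6).  We assume a
# Lavrentiev-type regularized equation F(x) + q*(x - x0) = z; F, dF, q,
# and the data below are illustrative stand-ins.

def two_step_newton(F, dF, x0, z, q=0.0, n_iter=10):
    R = dF(x0) + q * np.eye(len(x0))       # frozen, regularized derivative
    G = lambda v: F(v) + q * (v - x0) - z  # regularized residual
    x = x0.copy()
    for _ in range(n_iter):
        y = x - np.linalg.solve(R, G(x))   # first step (predictor)
        x = y - np.linalg.solve(R, G(y))   # second step (corrector)
    return x

F = lambda v: v + v**3                      # toy monotone map
dF = lambda v: np.eye(len(v)) + 3.0 * np.diag(v**2)
x_star = np.array([0.3, 0.5, 0.2])
z = F(x_star)                               # exact data for a known solution
x = two_step_newton(F, dF, np.full(3, 0.4), z)
```

The two inner steps reuse the same frozen matrix R, so one iteration costs two residual evaluations and two solves with a single factorization.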
Theorem 3.4. Let and be as in (3.8) and (3.10), respectively, and let and be as in (3.6) and (3.5), respectively, with and . Then the following hold: (a) (b) (c) (d) (e)
Proof. Observe that
Now by Assumption 3.1 and (3.7) we have
This proves (a). Now (b) follows from (a) and the triangle inequality
To prove (c) we observe that
where
Note that
The last but one step follows from Assumption 3.1 and (3.7). Similarly one can prove that
Thus from (3.20), (3.22), (3.23), (a) and (b) we have
Again since for , for all , by (3.24) we have
provided , for all n. From (3.26) it is clear that, if . This can be seen as follows:
Therefore by (3.27) and (2.9) we have
Now since is monotonically increasing and , we have . This completes the proof of the theorem.
Theorem 3.5. Let and the assumptions of Theorem 3.4 hold. Then , for all .
Proof. Note that by (b) of Theorem 3.4 we have
that is, . Again note that by (3.29) and (c) of Theorem 3.4 we have
that is, . Further by (3.29) and (b) of Theorem 3.4 we have
The last but one step follows from the monotonicity of and (3.28). And by (3.31) and (c) of Theorem 3.4 we have
that is, . Continuing this way one can prove that , for all . This completes the proof.
The main result of this section is the following theorem.
Theorem 3.6. Let and be as in (3.5) and (3.6), respectively, and let assumptions of Theorem 3.5 hold. Then is a Cauchy sequence in and converges to . Further and
where and .
Proof. Using the relation (b) and (e) of Theorem 3.4 and (3.28), we obtain
Thus is a Cauchy sequence in and hence it converges, say, to . Observe that from (3.5)
Now by letting in (3.35) we obtain . This completes the proof.
Hereafter we assume that and . The proof of the following theorem is analogous to the proof of Theorem 3.14 in [11], but for the sake of completeness we give the proof.
Theorem 3.7 (cf. [11], Theorem 3.14). Suppose is the solution of
and Assumptions 3.1 and 3.2 hold; then
Proof. Note that , so
Thus
where . So by Assumption 3.2, we obtain
and hence by (3.39) and (3.40) we have
This completes the proof of the theorem.
Theorem 3.8. Suppose is the solution of (3.4) and Assumption 2.1 and Theorem 3.7 hold. In addition if , then
Proof. Suppose and are the solutions of (3.36) and (3.4), respectively. Then by (3.36) we have
So from (3.4) and (3.43),
Let . Then by (3.44) we have
and hence
Thus
Now the result follows from (2.9), (3.47) and the relation
The following theorem is a consequence of Theorems 3.6, 3.7, and 3.8.
Theorem 3.9. Let be as in (3.6) and let assumptions in Theorems 3.6, 3.7 and 3.8 hold. Then
where and are as in Theorem 3.6.
Theorem 3.10. Let be as in (3.6) and let assumptions in Theorem 3.9 hold. Further let and
Then
4. Algorithm
Note that for ,
Therefore the balancing principle algorithm associated with the choice of the parameter specified in Section 2 involves the following steps:
In the next section we consider an example to illustrate the above algorithm. The computational results provided endorse the reliability and effectiveness of our method.
5. Example
In this section we consider an example satisfying the assumptions made in this paper and give the numerical illustration. Consider the operator where is defined by
and defined by
where
Then for all (see [7]):
Thus the operator is monotone. The Fréchet derivative of is given by
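Since the displayed formulas of this example did not survive extraction, the following hedged sketch shows how a generic Fredholm integral operator of the kind used here can be discretized; the Green's-function kernel is an illustrative choice, not necessarily the kernel of this example.

```python
import numpy as np

# Hedged sketch: discretize a generic Fredholm integral operator
# (K u)(t) = integral_0^1 k(t, s) u(s) ds by the trapezoidal rule on a
# uniform grid.  The kernel k(t, s) = min(t, s) * (1 - max(t, s)) (the
# Green's function of -u'' with zero boundary values) is an illustrative
# choice, not necessarily the kernel of this example.

n = 100
s = np.linspace(0.0, 1.0, n + 1)
T, S = np.meshgrid(s, s, indexing="ij")
kernel = np.minimum(T, S) * (1.0 - np.maximum(T, S))

w = np.full(n + 1, 1.0 / n)            # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 / n
K = kernel * w                         # (K u)_i ~ sum_j k(t_i, s_j) w_j u_j

u = np.sin(np.pi * s)
Ku = K @ u                             # for this kernel, close to sin(pi t)/pi^2
```

The resulting matrix K is severely ill-conditioned as the grid is refined, which is precisely why the regularization of the preceding sections is needed.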
Let be a sequence of finite-dimensional subspaces of and let denote the orthogonal projection on with range . We assume that and as for all . We choose the linear splines in a uniform grid of points in as a basis of .
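The linear-spline basis mentioned above can be realized as "hat" functions on a uniform grid. The following sketch (grid size, quadrature nodes, and sample function are illustrative choices) assembles the basis and computes the L²-orthogonal projection onto its span via the Gram matrix.

```python
import numpy as np

# Sketch of the linear-spline ("hat function") basis on a uniform grid,
# together with the L^2-orthogonal projection onto its span via the Gram
# (mass) matrix.  The grid size, quadrature nodes, and sample function
# are illustrative choices.

def hat(j, t, grid):
    """Piecewise-linear basis function: 1 at grid[j], 0 at the other nodes."""
    h = grid[1] - grid[0]
    return np.clip(1.0 - np.abs(t - grid[j]) / h, 0.0, None)

m = 8
grid = np.linspace(0.0, 1.0, m + 1)
t = np.linspace(0.0, 1.0, 2001)            # fine quadrature nodes
dt = t[1] - t[0]
V = np.array([hat(j, t, grid) for j in range(m + 1)])  # sampled basis

G = (V @ V.T) * dt                         # Gram matrix, G_ij = <v_i, v_j>
f = t * (1.0 - t)                          # smooth sample function
b = (V @ f) * dt                           # moments <v_j, f>
coef = np.linalg.solve(G, b)               # projection coefficients
Pf = coef @ V                              # projected function, sampled on t
err = np.sqrt(np.sum((Pf - f) ** 2) * dt)  # approximate L^2 error
```

For smooth functions the projection error decays like the square of the mesh width, which is the pointwise convergence of the projections assumed in this section.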
Since is of the form for some scalars , it can be seen that is a solution of (4.2) if and only if is the unique solution of
where
Observe that in Step 4 of the algorithm is again in and hence for some . One can see that for is a solution of
if and only if is the unique solution of
where
Compute till and fix . Now choose .
Let , , and . Then from (3.5) we have
where are the grid points.
Observe that is a solution of (3.5) if and only if is the unique solution of
where
and .
Further from (3.6) it follows that
Thus is a solution of (5.14) if and only if is the unique solution of
where .
5.1. Numerical Example
Example 5.1. To illustrate the method discussed in the above sections, we consider the space and the Fredholm integral operator . The algorithm in Section 4 is applied by choosing as the space of linear splines on a uniform grid of points in . In our computation, we take and . Then the exact solution
We use
as our initial guess, so that the function satisfies the source condition
where . Thus we expect to have an accuracy of order at least . We choose , and approximately. For all , the number of iterations in this example . The results of the computation are presented in Table 1. The plots of the exact and approximate solutions obtained are given in Figures 1 and 2.
Figure 1: panels (a)–(d). Figure 2: panels (a)–(d).
Acknowledgment
M. E. Shobha thanks the National Institute of Technology Karnataka, India, for financial support.
References
H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic, Dordrecht, The Netherlands, 1993.
S. George and A. I. Elmahdy, โA quadratic convergence yielding iterative method for nonlinear ill-posed operator equations,โ Computational Methods in Applied Mathematics, vol. 12, p. 32, 2012.
B. Kaltenbacher, A. Neubauer, and O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems, Walter de Gruyter, Berlin, Germany, 2008.
S. George, โNewton-Tikhonov regularization of ill-posed Hammerstein operator equation,โ Journal of Inverse and Ill-Posed Problems, vol. 14, no. 2, pp. 135โ145, 2006.
S. George and M. T. Nair, โA modified Newton-Lavrentiev regularization for nonlinear ill-posed Hammerstein-type operator equations,โ Journal of Complexity, vol. 24, no. 2, pp. 228โ240, 2008.
P. Mahale and M. T. Nair, โA simplified generalized Gauss-Newton method for nonlinear ill-posed problems,โ Mathematics of Computation, vol. 78, no. 265, pp. 171โ184, 2009.
E. V. Semenova, โLavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators,โ Computational Methods in Applied Mathematics, vol. 10, no. 4, pp. 444โ454, 2010.
S. George and M. Kunhanandan, โAn iterative regularization method for ill-posed Hammerstein type operator equation,โ Journal of Inverse and Ill-Posed Problems, vol. 17, no. 9, pp. 831โ844, 2009.
S. Pereverzev and E. Schock, โOn the adaptive selection of the parameter in regularization of ill-posed problems,โ SIAM Journal on Numerical Analysis, vol. 43, no. 5, pp. 2060โ2076, 2005.
I. K. Argyros and S. Hilout, โA convergence analysis for directional two-step Newton methods,โ Numerical Algorithms, vol. 55, no. 4, pp. 503โ528, 2010.
S. George and M. E. Shobha, Two Step Newton Tikhonov Methods for Hammerstein Equations (communicated).
M. T. Nair and P. Ravishankar, “Regularized versions of continuous Newton's method and continuous modified Newton's method under general source conditions,” Numerical Functional Analysis and Optimization, vol. 29, no. 9-10, pp. 1140–1165, 2008.
A. G. Ramm, A. B. Smirnova, and A. Favini, โContinuous modified Newton's-type method for nonlinear operator equations,โ Annali di Matematica Pura ed Applicata. Series IV, vol. 182, no. 1, pp. 37โ52, 2003.
Copyright © 2012 Santhosh George and Monnanda Erappa Shobha. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.