#### Abstract

We present a new iterative method, which does not involve inversion of an operator, for obtaining an approximate solution of the nonlinear ill-posed operator equation $F(x) = y$. The proposed method is a modified form of the Tikhonov gradient (TIGRA) method considered by Ramlau (2003). The regularization parameter is chosen according to the balancing principle considered by Pereverzev and Schock (2005). The error estimate is derived under a general source condition and is of optimal order. Some numerical examples involving integral equations are also given.

#### 1. Introduction

This paper is devoted to the study of the nonlinear ill-posed problem $F(x) = y$, where $F : D(F) \subseteq X \to Y$ is a nonlinear operator between the Hilbert spaces $X$ and $Y$. We assume that $F$ is a Fréchet-differentiable nonlinear operator acting between infinite-dimensional Hilbert spaces $X$ and $Y$ with corresponding inner products $\langle \cdot, \cdot \rangle$ and norms $\|\cdot\|$, respectively. Further it is assumed that (1) has a solution $\hat{x}$ for exact data $y$; that is, $F(\hat{x}) = y$; but due to the nonlinearity of $F$ this solution need not be unique. Therefore we consider an $x_0$-minimal norm solution of (1). Recall that [1–3] a solution $\hat{x}$ of (1) is said to be an $x_0$-minimal norm ($x_0$-MNS) solution of (1) if

$$\|\hat{x} - x_0\| = \min\{\|x - x_0\| : x \in D(F),\ F(x) = y\}.$$
In the following, we always assume the existence of an $x_0$-MNS $\hat{x}$ for exact data $y$. The element $x_0$ in (3) plays the role of a selection criterion [4] and is assumed to be known.

Since (1) is ill-posed, regularization techniques are required to obtain an approximation for $\hat{x}$. Tikhonov regularization has been investigated by many authors (see, e.g., [2, 4, 5]) to solve nonlinear ill-posed problems in a stable manner. In Tikhonov regularization, a solution of the problem (1) is approximated by a solution of the minimization problem $$\min_{x \in D(F)} J_\alpha(x) := \|F(x) - y^\delta\|^2 + \alpha \|x - x_0\|^2,$$ where $\alpha > 0$ is a small regularization parameter and $y^\delta$ is the available noisy data, for which we have the additional information that $\|y - y^\delta\| \le \delta$. It is known [3] that the minimizer $x_\alpha^\delta$ of the functional $J_\alpha$ satisfies the Euler equation $$F'(x)^*\bigl(F(x) - y^\delta\bigr) + \alpha (x - x_0) = 0$$ of the Tikhonov functional $J_\alpha$. Here $F'(x)^*$ denotes the adjoint of the Fréchet derivative $F'(x)$. It is also known [1] that, for a properly chosen regularization parameter $\alpha$, the minimizer $x_\alpha^\delta$ of the functional $J_\alpha$ is a good approximation to a solution of (1) with minimal distance from $x_0$. Thus the main focus is to find a minimizing element of the Tikhonov functional (4). But the Tikhonov functional with a nonlinear operator might have several minima, so to ensure the convergence of an optimization algorithm to a global minimizer of the Tikhonov functional (4), one has to impose stronger restrictions on the operator $F$ [6].
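As a purely illustrative sketch of Tikhonov regularization, the following minimizes a functional of the form $\|F(x) - y^\delta\|^2 + \alpha\|x - x_0\|^2$ by plain gradient descent for a one-dimensional forward map; the map $F(x) = x^3$, the noise level, and all parameters are toy assumptions, not taken from the paper.

```python
# Illustrative sketch: Tikhonov regularization for a 1-D nonlinear problem,
# minimized by plain gradient descent.  The forward map F, the data, the
# noise level delta, and the parameter alpha are all toy assumptions.

def F(x):
    return x ** 3          # simple smooth nonlinear forward map (assumed)

def F_prime(x):
    return 3 * x ** 2      # its derivative

def tikhonov_minimize(y_delta, alpha, x0, steps=2000, lr=1e-2):
    """Gradient descent on J(x) = (F(x) - y_delta)^2 + alpha*(x - x0)^2."""
    x = x0
    for _ in range(steps):
        grad = 2 * F_prime(x) * (F(x) - y_delta) + 2 * alpha * (x - x0)
        x -= lr * grad
    return x

x_hat = 1.0                          # "exact" solution: F(x_hat) = 1
y_delta = F(x_hat) + 0.01            # noisy data with delta = 0.01
x_alpha = tikhonov_minimize(y_delta, alpha=0.01, x0=0.8)
```

For $\alpha$ of the order of $\delta$, the computed minimizer stays close to the exact solution; choosing $\alpha$ well is exactly the parameter-choice problem discussed in Section 4.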

In the last few years many authors have considered iterative methods, for example, the Landweber method [7, 8], the Levenberg-Marquardt method [9], Gauss-Newton methods [10, 11], the conjugate gradient method [12], Newton-like methods [13, 14], and TIGRA (Tikhonov gradient method) [6]. For the Landweber and TIGRA methods, one has to evaluate the operator $F$ and the adjoint of the Fréchet derivative of $F$. For all other methods, one additionally has to solve a linear equation.

In [6], Ramlau considered the TIGRA method defined iteratively by $$x_{k+1} = x_k - \beta_k \bigl( F'(x_k)^*(F(x_k) - y^\delta) + \alpha_k (x_k - x_0) \bigr),$$ where $\beta_k$ is a scaling parameter and $\alpha_k$ is a regularization parameter, which will change during the iteration, and obtained a convergence rate estimate for the TIGRA algorithm under the following assumptions: (1) $F$ is twice Fréchet differentiable with a continuous second derivative; (2) the first derivative is Lipschitz continuous, $\|F'(x) - F'(\bar{x})\| \le L \|x - \bar{x}\|$; (3) a source condition of the form $\hat{x} - x_0 = F'(\hat{x})^* w$ holds with $\|w\|$ sufficiently small; and (4) smallness conditions on the remaining parameters. In [15], Scherzer considered a similar iteration procedure under a much stronger condition.

In this paper we consider a modified form of iteration (7). Precisely, we consider the sequence $(x_n)$ defined iteratively by $$x_{n+1} = x_n - \bigl( F'(x_0)^*(F(x_n) - y^\delta) + \alpha (x_n - x_0) \bigr),$$ where $x_0$ is an initial guess and $\alpha > 0$ is the regularization parameter. The regularization parameter $\alpha$ is chosen from a finite set using the adaptive method considered by Pereverzev and Schock in [16]. We prove that $(x_n)$ converges to the unique solution $x_\alpha^\delta$ (see Theorem 2) of the equation $$F'(x_0)^*\bigl(F(x) - y^\delta\bigr) + \alpha (x - x_0) = 0$$ and that $x_\alpha^\delta$ is a good approximation for $\hat{x}$. Our approach, in the convergence analysis of the method as well as in the choice of the parameters, is different from that of [6].

Note that in the TIGRA method (i.e., (7)) the scaling parameter $\beta_k$ and the regularization parameter $\alpha_k$ change during the iteration, but in the proposed method no scaling parameter is used and the regularization parameter does not change during the iteration. Also, in the proposed method one needs to compute the Fréchet derivative only at one point, $x_0$. These are the main advantages of the proposed method.
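The advantage of freezing the derivative at the initial guess can be sketched numerically. The update below, $x_{n+1} = x_n - (F'(x_0)^*(F(x_n) - y^\delta) + \alpha(x_n - x_0))$, is only an assumed stand-in of the same general shape as the proposed iteration; the forward map and every parameter are illustrative.

```python
import numpy as np

# Toy sketch of a frozen-derivative iteration: the adjoint of the Frechet
# derivative is computed once, at the initial guess x0, and the
# regularization parameter alpha is kept fixed during the iteration.
# The forward map F and all parameters below are illustrative assumptions.

def F(x):
    return np.array([x[0] + 0.1 * x[1] ** 2, 0.1 * x[0] ** 2 + x[1]])

def F_prime(x):
    return np.array([[1.0, 0.2 * x[1]],
                     [0.2 * x[0], 1.0]])

def frozen_iteration(y_delta, x0, alpha, n_iter=200):
    A = F_prime(x0).T                 # adjoint of F'(x0), computed once
    x = x0.copy()
    for _ in range(n_iter):
        x = x - (A @ (F(x) - y_delta) + alpha * (x - x0))
    return x

x_exact = np.array([0.5, 0.5])
y_delta = F(x_exact) + 1e-3           # noisy data, delta = 1e-3
x0 = np.array([0.4, 0.4])             # initial guess near x_exact
x_n = frozen_iteration(y_delta, x0, alpha=1e-2)
```

Note that no linear system is solved and no scaling parameter is tuned; each step costs one evaluation of $F$ and one matrix-vector product with the fixed adjoint.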

The organization of this paper is as follows. In Section 2 we provide some preparatory results, and Section 3 deals with the convergence analysis of the proposed method. Error bounds under an a priori parameter choice and under the balancing principle are given in Section 4. Numerical examples involving integral equations are given in Section 5. Finally, the paper ends with a conclusion in Section 6.

#### 2. Preparatory Results

Throughout this paper we assume that the operator $F$ satisfies the following assumptions.

*Assumption 1. *(a) There exists a constant $k_0 > 0$ such that for every $x, u \in D(F)$ and $v \in X$, there exists an element $\Phi(x, u, v) \in X$ satisfying $$[F'(x) - F'(u)]v = F'(u)\,\Phi(x, u, v), \qquad \|\Phi(x, u, v)\| \le k_0 \|v\| \|x - u\|.$$

(b)
for all .

Notice that in the literature the stronger condition

$$\|F'(x) - F'(u)\| \le L\,\|x - u\|, \quad x, u \in D(F),$$

is used instead of (a), for some $L > 0$. However, $k_0 \le L$ holds in general and $L / k_0$ can be arbitrarily large [17]. It is also worth noticing that the Lipschitz condition implies (a) but not necessarily vice versa, and the element furnished by it is less accurate and more difficult to find than the one in (a) (see also the numerical example).

The next result shows that (12) has a unique solution in a neighborhood of $\hat{x}$.

Theorem 2. *Let $\hat{x}$ be a solution of (1), let Assumption 1 be satisfied, and let $F$ be Fréchet differentiable in a ball $B_r(\hat{x})$ with radius $r > 0$. Then the regularized problem (12) possesses a unique solution in $B_r(\hat{x})$.*

*Proof. *For , let . If is invertible, then
has a unique solution in . Observe that
and hence
Therefore by (17) has a unique solution in . So it remains to show that is invertible. Note that by Assumption 1, we have
So is invertible. Now from the relation
it follows that is invertible.

*Assumption 3. *There exists a continuous, strictly monotonically increasing function $\varphi : (0, a] \to (0, \infty)$ with $a \ge \|F'(x_0)\|^2$ satisfying:
(i) $\lim_{\lambda \to 0} \varphi(\lambda) = 0$;
(ii) $\sup_{\lambda \ge 0} \dfrac{\alpha \varphi(\lambda)}{\lambda + \alpha} \le \varphi(\alpha)$ for all $\alpha \in (0, a]$;
(iii) there exists $w \in X$ such that $x_0 - \hat{x} = \varphi\bigl(F'(x_0)^* F'(x_0)\bigr) w$.

One of the crucial results we are going to use to prove our main result is the following theorem.

Theorem 4. *Let $\alpha > 0$, let Assumption 3 hold, and let $x_\alpha^\delta$ be the solution of (12). Then

$$\|x_\alpha^\delta - \hat{x}\| = O\!\left(\varphi(\alpha) + \frac{\delta}{\sqrt{\alpha}}\right).$$*

*Proof. *Let . Then, by the fundamental theorem of calculus,
and hence by (12), we have . Thus
where , , and . Note that
by Assumption 3
and by Assumption 1
The result now follows from (26), (27), (28), and (29).

#### 3. Convergence Analysis

In this paper we present a semilocal convergence analysis under the following conditions.

(C1) Suppose . Let . Let with and let where and .

(C2) Suppose . Let . Let with and let where and .

Let and let

In due course we will make use of the following lemma extensively.

Lemma 5. *Let be as in (38) and be as in (37). Suppose Assumption 1 holds. Then for *

*Proof. *Let . Then, by the fundamental theorem of calculus, . Hence,
so by Assumption 1, we have that
This completes the proof.

*Remark 6. *Note that we need for the convergence of the sequence to . This in turn forces us to assume that ; that is, (see ) and ; that is, (see ), .

For convenience, we use the notation for . Let

*Remark 7. *Observe that
and by the choice of (this can be seen by substituting into and solving the inequality for ).

Theorem 8. *Let and be as in (37) and (38), respectively. Then the sequence $(x_n)$ defined in (11) is a Cauchy sequence and converges to $x_\alpha^\delta$. Further,
*

*Proof. *Suppose for all . Then
and hence by Lemma 5, we have
By Remark 7, . Now suppose for some . Then
that is, . Thus by induction for all . Now we will prove that is a Cauchy sequence in . We have
Thus is a Cauchy sequence in and hence it converges, say to .

By letting $n \to \infty$ in (11), we obtain . Now the result follows by letting $m \to \infty$ in (48).

*Remark 9. *(a) If, instead of Assumption 1, we use the center-Lipschitz condition, that is,
holds for all , then one can obtain the estimate

(b) Note that, for small enough , conditions (31) and (34) are not a severe restriction on .

*Remark 10. *If Assumption 1 is fulfilled only for all $x \in Q$, where $Q$ is a convex closed a priori set for which $\hat{x} \in Q$, then we can modify process (11) in the following way:
to obtain the same estimate as in the following Theorem 11; here $P_Q$ is the metric projection onto the set $Q$ and is the step operator in (11).
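A minimal sketch of this projected variant, assuming $Q$ is a box (so the metric projection is a componentwise clip) and using a toy contraction in place of the step operator of (11):

```python
import numpy as np

# Projected iteration sketch: each step of a (toy) step operator G is
# followed by the metric projection onto a closed convex set Q.
# Here Q = [lo, hi]^n is a box, whose metric projection is np.clip;
# G below is an assumed stand-in, not the step operator of the paper.

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)         # metric projection onto the box Q

def projected_iteration(G, x0, lo, hi, n_iter=100):
    x = x0.copy()
    for _ in range(n_iter):
        x = project_box(G(x), lo, hi)
    return x

G = lambda x: 0.5 * x + 0.35          # toy contraction, fixed point (0.7, 0.7)
x = projected_iteration(G, np.zeros(2), lo=0.0, hi=1.0)
```

Since the projection onto a closed convex set is nonexpansive, composing it with a contractive step preserves the contraction, which is why the same error estimate can be expected.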

#### 4. Error Bounds under Source Conditions

Combining the estimates in Theorems 4 and 8 we obtain the following.

Theorem 11. *Let the assumptions in Theorems 4 and 8 hold and let be as in (11). Then
**
Further if , then
**
where .*

##### 4.1. A Priori Choice of the Parameter

Observe that the estimate in Theorem 11 is of optimal order for the choice $\alpha := \alpha_\delta$ which satisfies $\delta = \sqrt{\alpha_\delta}\,\varphi(\alpha_\delta)$. Now, using the function $\psi(\lambda) := \lambda \sqrt{\varphi^{-1}(\lambda)}$, $\lambda \in (0, a]$, we have $\delta = \sqrt{\alpha_\delta}\,\varphi(\alpha_\delta) = \psi(\varphi(\alpha_\delta))$, so that $\alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$.
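Assuming the a priori choice balances the two error terms via $\delta = \sqrt{\alpha_\delta}\,\varphi(\alpha_\delta)$ (the standard relation in this framework), the Hölder-type special case $\varphi(\lambda) = \lambda^{\nu}$, $0 < \nu \le 1$, is illustrative, since the relation can then be solved in closed form:

```latex
% Hoelder-type example \varphi(\lambda) = \lambda^{\nu}: illustrative only.
\delta = \sqrt{\alpha_\delta}\,\varphi(\alpha_\delta)
       = \alpha_\delta^{\,\nu + 1/2}
\quad\Longrightarrow\quad
\alpha_\delta = \delta^{2/(2\nu+1)},
\qquad
\varphi(\alpha_\delta) = \frac{\delta}{\sqrt{\alpha_\delta}}
                       = \delta^{2\nu/(2\nu+1)},
```

so the resulting error bound is of the well-known optimal order $O(\delta^{2\nu/(2\nu+1)})$.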

In view of the above observation, Theorem 11 leads to the following.

Theorem 12. *Let $\psi(\lambda) := \lambda \sqrt{\varphi^{-1}(\lambda)}$, $\lambda \in (0, a]$, and let the assumptions in Theorem 11 hold. For $\delta > 0$, let $\alpha := \alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$ and let be as in Theorem 11. Then the error bound of Theorem 11 is of order $O(\psi^{-1}(\delta))$.*

##### 4.2. Balancing Principle

Note that the a priori choice of the parameter can be achieved only in the ideal situation when the function $\varphi$ is known. The point is that the best function $\varphi$ measuring the rate of convergence in Theorem 11 is usually unknown. Therefore, in practical applications, different parameters are often selected from some finite set and the corresponding elements , are studied online. Let and let . Then from Theorem 11, we have

We consider the balancing principle suggested by Pereverzev and Schock [16] for choosing the regularization parameter $\alpha$ from the set defined by where for some constant (see [18]) and .
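A balancing-principle selection of this kind can be sketched as follows; the discrepancy test, the constant $\kappa$, the geometric grid, and the scalar error model standing in for the regularized solutions are all illustrative assumptions, not the exact constants of [16].

```python
import numpy as np

# Illustrative sketch of a balancing-principle parameter choice over a
# geometric grid.  The test constant kappa, the grid, and the scalar
# stand-in for the regularized solutions are toy assumptions.

def balancing_choice(x_of_alpha, alphas, delta, kappa=2.0):
    """Largest alpha_k with |x(alpha_i) - x(alpha_k)| <= kappa*delta/sqrt(alpha_i)
    for every i <= k."""
    xs = [x_of_alpha(a) for a in alphas]
    chosen = 0
    for k in range(len(alphas)):
        if all(abs(xs[i] - xs[k]) <= kappa * delta / np.sqrt(alphas[i])
               for i in range(k + 1)):
            chosen = k
    return alphas[chosen]

# Toy error model: the total error behaves like phi(alpha) + delta/sqrt(alpha)
# with phi(alpha) = alpha, so the balance point sits near delta**(2/3).
delta = 1e-4
x_of_alpha = lambda a: a + delta / np.sqrt(a)
alphas = [1e-6 * 2 ** i for i in range(20)]
alpha_star = balancing_choice(x_of_alpha, alphas, delta)
```

With this model the balance point $\varphi(\alpha) = \delta/\sqrt{\alpha}$ lies near $\delta^{2/3} \approx 2 \times 10^{-3}$, and the chosen parameter lands within a constant factor of it, which is all the principle guarantees.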

To obtain a conclusion from this parameter choice we consider all possible functions $\varphi$ satisfying Assumption 3 and . Any such function is called admissible for $\hat{x}$ and can be used as a measure for the convergence of (see [19]).

The main result of this section is the following theorem, the proof of which is analogous to the proof of Theorem 4.4 in [13].

Theorem 13. *Assume that there exists such that . Let the assumptions of Theorem 11 be satisfied and let
**
where is as in Theorem 11. Then and
*

#### 5. Numerical Examples

Let the half-space be modeled by two layers of constant but different densities , , separated by a surface to be determined. In the Cartesian coordinate system, whose horizontal plane coincides with the ground surface and whose vertical axis is directed downward, the inverse gravimetry problem has the form (see [20] and the references therein) where denotes the gravitational constant, is the density jump at the interface described by the function to be evaluated, and is the anomalous gravitational field caused by the deviation of the interface from the horizontal asymptotic plane ; that is, for the actual solution the following relation holds: . The right-hand side is given on the domain .

Since the first term in (61) does not depend on , (61) can be written as where .

The derivative of the operator at the point is expressed by the formula . Applying to the integral equation (63) the two-dimensional analogue of the rectangle rule with a uniform grid in each variable, we obtain the following system of nonlinear equations , , for the unknown vector , ; in vector-matrix form this system takes the form , where are vectors of dimension .

The discrete variant of the derivative has the form where is a constant and is a symmetric matrix whose entries are evaluated by formula (67).
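As a simpler one-dimensional analogue of this discretization (the kernel, grid, and depth constant below are illustrative assumptions, not the gravimetry kernel of (61)), the rectangle rule replaces the integral by a finite sum, and the derivative matrix is obtained by differentiating the kernel with respect to the unknown:

```python
import numpy as np

# One-dimensional analogue of a rectangle-rule discretization of a
# nonlinear integral operator and of its derivative matrix.  The kernel
# K and the grid are illustrative assumptions.

n = 64
t = (np.arange(n) + 0.5) / n          # midpoints of a uniform grid on [0, 1]
h = 1.0 / n                           # rectangle width

def K(s, tt, z):
    """Smooth nonlinear kernel K(s, t, z(t)) (illustrative choice)."""
    return 1.0 / np.sqrt((s - tt) ** 2 + (1.0 + z) ** 2)

def A(z):
    """Discrete operator: (A z)_j = h * sum_k K(t_j, t_k, z_k)."""
    return h * K(t[:, None], t[None, :], z[None, :]).sum(axis=1)

def A_jacobian(z):
    """n x n derivative matrix dA_j/dz_k = h * dK/dz(t_j, t_k, z_k)."""
    s, tt = t[:, None], t[None, :]
    dK = -(1.0 + z[None, :]) / ((s - tt) ** 2 + (1.0 + z[None, :]) ** 2) ** 1.5
    return h * dK

z = np.zeros(n)
J = A_jacobian(z)                     # symmetric for constant z
```

For a constant profile $z$ the matrix is symmetric, in line with the structure reported for the gravimetry data.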

Let us define the exact solution as where is given on the domain . Let , , , .

Note that on the set (see Remarks 9 and 10) is satisfied (see [20, 21]). Moreover, for our data, is a symmetric and positive definite matrix with minimal eigenvalue and condition number .

This means that, for the matrix approximating the operator , we have taken the initial guess .

Observe that where . Now, since is self-adjoint, we have , and hence Assumption 3 is satisfied for .

The results of the numerical experiments are presented in Table 1. Here denotes the numerical solution obtained by our method; the relative error of the solution; and the residual for a noisy right-hand side.

Next we present an example of a nonlinear equation for which Assumption 1(a) is satisfied but the stronger Lipschitz-type condition is not.

*Example 1. *Let , , and define the function on by
where , are real parameters and is an integer. Then is not Lipschitz on . Hence the stronger Lipschitz-type condition is not satisfied. However, the center-Lipschitz condition, Assumption 1(a), holds.

Indeed, we have
where and
where .

#### 6. Conclusion

We have considered an iterative method, which does not involve inversion of an operator, for obtaining an approximate solution of a nonlinear ill-posed operator equation $F(x) = y$ when the available data is $y^\delta$ in place of the exact data $y$. It is assumed that $F$ is Fréchet differentiable. The procedure involves finding the fixed point of a function in an iterative manner. For choosing the regularization parameter $\alpha$ we made use of the adaptive method suggested in [16].

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.