Rate of Convergence of Tikhonov Method of Regularization for Constrained Linear Equations with Operators Having Closed Ranges
We derive estimates of the rate of convergence of the Tikhonov regularization method for a constrained linear operator equation. When the range of the corresponding operator is closed, the estimate is of the same order as the known estimates for a linear equation without constraints.
Let $H$ and $F$ be real Hilbert spaces, $A: H \to F$ a linear bounded operator, $f \in F$ a fixed element, and $Q \subseteq H$ a closed convex set. We will deal with the equation
$$Au = f, \quad u \in Q. \eqno(1)$$
Even when the constraint $u \in Q$ is absent, this problem can be ill-posed; that is, it is possible that there is an element $u$ which is far from the set of solutions of (1) and for which the residual $\|Au - f\|$ is nevertheless small. In the case of infinite-dimensional spaces $H$ and $F$, the ill-posedness of the equation obviously comes from the fact that the range $R(A)$ of the operator $A$ is nonclosed. However, if the operator is known only approximately, then this equation can be ill-posed even in the case of a closed range. In this case, in order to solve the given problem, one has to use methods of regularization, for example, the Tikhonov regularization method (see [1, 2]). Note that the presence of the constraint significantly complicates all issues concerning the ill-posedness.
Let us suppose that the set $Q$ is given by
$$Q = \{u \in H : g(u) \le 0\}, \eqno(2)$$
where $g: H \to \mathbb{R}$ is a continuously differentiable convex function satisfying the Lipschitz and Slater conditions:
$$\|g'(u) - g'(v)\| \le L \|u - v\| \quad \text{for all } u, v \in H, \eqno(3)$$
$$g(\bar{u}) < 0 \quad \text{for some } \bar{u} \in H. \eqno(4)$$
We also suppose that the sets $U$ and $U_0$ of the solutions of (1) and of the corresponding equation without constraint,
$$Au = f, \eqno(5)$$
are nonempty. In this case, the minimization problems
$$\min\{\|u\| : u \in U\}, \qquad \min\{\|u\| : u \in U_0\}$$
have unique solutions, which will be denoted by $u^*$ and $u_0^*$ and called the normal solutions of (1) and (5), respectively.
Let us present one prominent example of an equation with the previously mentioned type of constraint, with infinite-dimensional spaces $H$ and/or $F$.
Let $0 < T < \infty$, and let $A(t)$, $B(t)$, and $C(t)$ be matrices whose elements $a_{ij}(t)$, $b_{ij}(t)$, and $c_{ij}(t)$ are piecewise continuous functions on $[0, T]$. Let us consider a system in the so-called state-variable (or state-space) form:
$$\dot{x}(t) = A(t) x(t) + B(t) u(t), \qquad y(t) = C(t) x(t), \qquad x(0) = x_0,$$
where $u$ is a control from the space $L_2[0, T]$. Such a control system is usually analyzed under some constraints on the control or on the trajectory. For example, the problem "given $x_1$, find a control $u$ such that $x(T) = x_1$" is a problem of type (1), (2): one has to solve a linear operator equation relating the control $u$ to the terminal state $x(T)$ on a set of admissible controls of the form (2).
For example, electrical networks with a finite number of interconnected resistors, capacitors, and inductors can be described in state-variable form (see [3]).
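To make the control example concrete, the following Python sketch (entirely illustrative: the double-integrator system, the grid, and the target state are our own choices, not from the paper) discretizes the controllability map and computes a Tikhonov-regularized minimal-norm control:

```python
import numpy as np

# Double integrator: x'(t) = A x(t) + B u(t) with A = [[0, 1], [0, 0]], B = [0, 1]^T.
# Its state transition matrix is known in closed form: Phi(t) = [[1, t], [0, 1]].
T, N = 1.0, 200
s = np.linspace(0.0, T, N)
ds = s[1] - s[0]

# Discretize the map u -> int_0^T Phi(T - s) B u(s) ds; here Phi(T - s) B = [T - s, 1]^T.
G = np.vstack([T - s, np.ones(N)]) * ds        # shape (2, N)

x1 = np.array([1.0, 0.0])                      # steer from x0 = 0 to position 1, velocity 0
b = x1                                          # right-hand side (x0 = 0, so Phi(T) x0 = 0)

alpha = 1e-8
# Tikhonov-regularized minimal-norm control: u = G^T (G G^T + alpha I)^{-1} b.
u = G.T @ np.linalg.solve(G @ G.T + alpha * np.eye(2), b)

print(np.linalg.norm(G @ u - b))               # residual of the terminal constraint (small)
```

Here the discretized terminal-state equation plays the role of (1); adding a pointwise bound on the control would turn the computation into a constrained problem of the form (1), (2).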
In the literature, problems of type (1) are usually considered under the assumption that, instead of the exact operator $A$ and the element $f$, one actually deals with their approximations $A_h$ and $f_\delta$ such that
$$\|A_h - A\| \le h, \qquad \|f_\delta - f\| \le \delta, \eqno(12)$$
where $h$ and $\delta$ are small positive real numbers and $A_h$ is a linear bounded operator from $H$ to $F$.
As approximations of the normal solutions $u^*$ and $u_0^*$, one can use [1, 4] the so-called Tikhonov regularized approximate solutions $u_\alpha$ and $u_\alpha^0$, that is, the unique solutions of the regularized problems: find $u_\alpha \in Q$ such that
$$(A_h^*(A_h u_\alpha - f_\delta) + \alpha u_\alpha,\; u - u_\alpha) \ge 0 \quad \text{for all } u \in Q, \eqno(13)$$
and find $u_\alpha^0 \in H$ such that
$$A_h^*(A_h u_\alpha^0 - f_\delta) + \alpha u_\alpha^0 = 0. \eqno(14)$$
Let us observe that the above variational inequality (13) is a necessary and sufficient condition for the minimization of the Tikhonov function
$$T_\alpha(u) = \|A_h u - f_\delta\|^2 + \alpha \|u\|^2$$
on the set $Q$, while (14) corresponds to its minimization on the whole space $H$. The unique point of minimum $u_\alpha$ of the function $T_\alpha$ on the set $Q$ is said to be a regularized solution of (1).
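For the unconstrained problem, the minimizer of the Tikhonov function is given in closed form by $u_\alpha^0 = (A_h^* A_h + \alpha I)^{-1} A_h^* f_\delta$. The following Python sketch (the Hilbert matrix, the perturbation, and the parameter values are our own illustrative choices, not from the paper) shows the stabilizing effect of this formula on an ill-conditioned system:

```python
import numpy as np

def tikhonov_solution(A, f, alpha):
    """Minimizer of ||A u - f||^2 + alpha * ||u||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f)

# 8x8 Hilbert matrix: severely ill-conditioned, a finite-dimensional stand-in
# for an operator whose range is "almost nonclosed".
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
u_true = np.ones(n)
f = A @ u_true

# Perturb the data along the worst (smallest) singular direction of A.
U, sigma, Vt = np.linalg.svd(A)
f_delta = f + 1e-6 * U[:, -1]

u_naive = np.linalg.solve(A, f_delta)        # unregularized: error blows up
u_reg = tikhonov_solution(A, f_delta, 1e-4)  # regularized: error stays moderate

print(np.linalg.norm(u_naive - u_true))      # huge (order of thousands)
print(np.linalg.norm(u_reg - u_true))        # bounded by ||u_true||
```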
The main aim of this paper is to establish whether there exists a choice of the parameter of regularization $\alpha = \alpha(h, \delta)$ which implies a convergence rate of the regularized solutions $u_\alpha$ to $u^*$. For (5) without constraints, such convergence rates were proved under the assumption that $u_0^* \in R(A^*)$. Note also that, even in the case of exact data, the rate of convergence, depending on the boundary of the set $Q$, can be arbitrarily slow.
Consequently, only if we require additional conditions on $A$ and $Q$ can we eventually expect a convergence rate of the regularized solutions $u_\alpha$ to the normal solution $u^*$. It was proved that, if $u^*$ and a suitable projection of $u^*$ satisfy sourcewise conditions, then an appropriate choice of $\alpha$ yields a rate-of-convergence estimate.
Estimates of the rate of convergence of the regularized solutions to the normal solution have also been obtained under conditions related to the smoothness of the boundary of the set $Q$ and to the properties of the projection of the normal solution onto a linear subspace of codimension one which is parallel to the tangent plane to the boundary of $Q$.
Let us emphasize that the results of this paper are inspired by earlier results in the literature. However, our results on the rate of convergence of the regularized solutions were obtained under somewhat different general conditions and under the assumption that, instead of the exact operator $A$ and the element $f$, we only know their approximations $A_h$ and $f_\delta$.
The paper is organized as follows.
In Section 2, we describe the Tikhonov regularization method adapted to the case when the set of constraints is given by (2). This constraint is taken into account by Lagrange multipliers. Lemma 1 is proved under a condition of sourcewise representability type, that is, under the assumption that the normal solution belongs to the projection of $R(A^*)$ onto $Q$ [2, 6, 8]. In this section, we also present some results concerning the properties of the operators $A^* A$ and $A_h^* A_h$.
In Section 3, we suppose that the parameter of regularization is chosen in such a way that $\alpha \to 0$ and $(h + \delta)/\alpha \to 0$ as $h, \delta \to 0$. Under this assumption, we derive the estimates of the rate of convergence of the Lagrange multipliers of the regularized problems. The main result of the paper is contained in Theorem 11, which refers to the rate of convergence of the regularized solutions.
2. Constrained Tikhonov Regularization and Previous Results
By applying the Kuhn-Tucker theorem to the minimization problem of $T_\alpha$ on $Q$, we obtain that, for its unique solution $u_\alpha$, there is a multiplier $\lambda_\alpha \ge 0$ such that
$$A_h^*(A_h u_\alpha - f_\delta) + \alpha u_\alpha + \lambda_\alpha g'(u_\alpha) = 0, \qquad \lambda_\alpha g(u_\alpha) = 0. \eqno(17)$$
Using the Tikhonov function $T_\alpha(u) = \|A_h u - f_\delta\|^2 + \alpha \|u\|^2$ (in what follows, we will use this notation), the first equality in (17) can be written as
$$\tfrac{1}{2} T_\alpha'(u_\alpha) + \lambda_\alpha g'(u_\alpha) = 0 \eqno(18)$$
or as
$$(A_h^* A_h + \alpha I)(u_\alpha - u_\alpha^0) + \lambda_\alpha g'(u_\alpha) = 0, \eqno(19)$$
where $u_\alpha^0$ is the point of the minimum of the Tikhonov function $T_\alpha$ on $H$.
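The relation between the constrained and unconstrained regularized solutions implicit in (17) can be sketched as follows (a reconstruction under our notational assumptions: $u_\alpha$ solves the constrained regularized problem, $u_\alpha^0$ minimizes the Tikhonov function on all of $H$, and $\lambda_\alpha$ is the multiplier from (17)):

```latex
\begin{aligned}
& A_h^*(A_h u_\alpha - f_\delta) + \alpha u_\alpha + \lambda_\alpha g'(u_\alpha) = 0
  && \text{(first equality in (17)),}\\
& A_h^*(A_h u_\alpha^0 - f_\delta) + \alpha u_\alpha^0 = 0
  && \text{(optimality of } u_\alpha^0 \text{ on } H\text{),}\\
& (A_h^* A_h + \alpha I)(u_\alpha - u_\alpha^0) = -\lambda_\alpha\, g'(u_\alpha)
  && \text{(difference of the two lines).}
\end{aligned}
```

The last line shows that the deviation of $u_\alpha$ from the unconstrained regularized solution is controlled by the multiplier $\lambda_\alpha$, which is why the estimates of Section 3 concentrate on the behavior of $\lambda_\alpha$.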
In what follows, we assume that the parameter $\alpha = \alpha(h, \delta)$ is chosen such that $\alpha \to 0$ and $(h + \delta)/\alpha \to 0$ as $h, \delta \to 0$.
Note that if $u_\alpha^0 \in Q$, then $u_\alpha = u_\alpha^0$, and for such $\alpha$, $h$, $\delta$, the corresponding estimates concerning the accuracy of the regularized solutions of the equation without constraints can be found, for example, in [4]. (Note that in [4], page 100, it was proved that, under a sourcewise representation of $u_0^*$, an appropriate choice of $\alpha$ yields a corresponding rate estimate.) For this reason, in what follows, we will suppose that $u_\alpha^0 \notin Q$ for all sufficiently small $\alpha$, $h$, and $\delta$. In this case, we have $\lambda_\alpha > 0$ and, consequently, $g(u_\alpha) = 0$ for all such $\alpha$, $h$, and $\delta$. Therefore, bearing in mind that $u_\alpha \to u^*$ as $\alpha, h, \delta \to 0$, we conclude that $g(u^*) = 0$. The Slater condition (4) then implies that $u^*$ is not a point of minimum of $g$, and, consequently, $g'(u^*) \ne 0$.
Further estimates of the rate of convergence of the regularized solutions are based on a sourcewise-type property of the normal solution $u^*$, which for an operator $A$ with closed range takes an especially simple form. As argued in the literature, this condition does not have the same meaning as it has in the case of the problem without constraints, because the normal solution of the constrained problem does not, in general, satisfy the condition $u^* \in R(A^*)$. However, it is easy to prove a related representation involving the operator $P_Q$ of the metric projection onto the closed convex set $Q$.
Lemma 1. One has $u^* \in P_Q(R(A^*))$ if and only if there exist $v \in F$ and $\lambda \ge 0$ such that $u^* + \lambda g'(u^*) = A^* v$.
Proof. Suppose first that $u^* \in P_Q(R(A^*))$. Then there is $v \in F$ such that $u^* = P_Q(A^* v)$; that is, $u^*$ is a solution of the minimization problem
$$\min\{\|u - A^* v\|^2 : u \in Q\}.$$
According to the Kuhn-Tucker theorem, there exists a real number $\lambda \ge 0$ such that $u^* - A^* v + \lambda g'(u^*) = 0$. From here, we obtain the desired representation $u^* + \lambda g'(u^*) = A^* v$.
Now, let us suppose that there exist $v \in F$ and $\lambda \ge 0$ such that
$$u^* + \lambda g'(u^*) = A^* v.$$
This equality can be written in the form $u^* - A^* v = -\lambda g'(u^*)$. Multiplying both sides of this equality by $u - u^*$, with $u \in Q$ arbitrary, and bearing in mind (21), it comes out that
$$(u^* - A^* v, u - u^*) = -\lambda (g'(u^*), u - u^*) \ge -\lambda (g(u) - g(u^*)) \ge 0.$$
The last inequality is the variational characterization of the metric projection, so $u^* = P_Q(A^* v)$. Having in mind that $A^* v \in R(A^*)$, it follows that $u^* \in P_Q(R(A^*))$.
As an immediate consequence of this lemma, we have that if the range of the operator $A$ is a closed subspace of $F$, then $u^* \in P_Q(R(A^*))$ if and only if there exist $w \in H$ and $\lambda \ge 0$ such that $u^* + \lambda g'(u^*) = A^* A w$.
Finally, let us remark that it is easy to construct an example of a convex set $Q$ and an operator $A$ such that $u^* \notin P_Q(R(A^*))$.
Further, we will continue with three lemmas related to the properties of the operators $A^* A$ and $A_h^* A_h$, whose proofs can be found in [4], pages 99, 153, 154, and 156.
Lemma 2. If $B$ is a bounded, self-adjoint, nonnegative operator on $H$, then $\|(B + \alpha I)^{-1} B\| \le 1$ for all $\alpha > 0$.
Lemma 3. (a) The range of the bounded linear operator $A$ is a closed subspace of $F$ if and only if, for some $r > 0$,
$$\sigma(A^* A) \subset \{0\} \cup [r, +\infty).$$
(b) If the bounded linear operators $A$ and $A_h$ satisfy the conditions
$$\sigma(A^* A) \subset \{0\} \cup [r, +\infty), \qquad \|A_h - A\| \le h, \eqno(34)$$
with $h$ sufficiently small, then $\sigma(A_h^* A_h)$ lies in a correspondingly small neighborhood of $\sigma(A^* A)$, where $\sigma(A^* A)$ and $\sigma(A_h^* A_h)$ are the spectrums of the operators $A^* A$ and $A_h^* A_h$.
By $P$, we will denote the operator of orthogonal projection of the space $H$ onto the invariant subspace of the operator $A^* A$ that corresponds to the part of the spectrum belonging to $\{0\}$. Then, $P_h$ is the operator of orthogonal projection of the space $H$ onto the invariant subspace of the operator $A_h^* A_h$ that corresponds to the part of its spectrum belonging to a neighborhood of zero.
Lemma 4. If the conditions (34) are satisfied, then $\|P_h - P\| \le c\,h$ for some constant $c > 0$ independent of $h$.
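In finite dimensions, the content of Lemmas 3 and 4 can be checked numerically: for a matrix whose spectrum has a gap at zero, the spectral projector associated with the part of $\sigma(A_h^* A_h)$ near zero converges to $P$ at the rate $O(h)$. A Python sketch (the operator, the perturbation, and the helper `spectral_projector_near_zero` are our own choices, not from the paper):

```python
import numpy as np

def spectral_projector_near_zero(B, gap):
    """Orthogonal projector onto the invariant subspace of the symmetric
    matrix B spanned by eigenvectors with eigenvalues below the gap."""
    w, V = np.linalg.eigh(B)
    Vsmall = V[:, w < gap]
    return Vsmall @ Vsmall.T

A = np.diag([1.0, 2.0, 0.0])          # rank 2 -> closed range; spectral gap r = 1
P = spectral_projector_near_zero(A.T @ A, 0.5)

rng = np.random.default_rng(1)
E = rng.standard_normal((3, 3))
E /= np.linalg.norm(E, 2)             # perturbation direction with ||E|| = 1

errs = []
for h in (1e-2, 1e-3, 1e-4):
    A_h = A + h * E
    P_h = spectral_projector_near_zero(A_h.T @ A_h, 0.5)
    errs.append(np.linalg.norm(P_h - P, 2))

print(errs)  # decreases roughly linearly with h
```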
As consequences of the previous lemmas, the following estimates are easy to prove.
Lemma 5. Let the conditions (34) be satisfied. Then the operator $A_h^* A_h$ is boundedly invertible on the range of $I - P_h$, uniformly in $h$. If, additionally, the parameter of regularization is chosen such that $\alpha \to 0$ as $h, \delta \to 0$, then the corresponding resolvent estimates hold, uniformly for all sufficiently small $h$ and $\delta$.
Theorem 6. If the conditions (12) are satisfied and the parameter $\alpha = \alpha(h, \delta)$ is chosen such that $\alpha \to 0$ and $(h + \delta)/\alpha \to 0$ as $h, \delta \to 0$, then the regularized solutions of (14) converge to the normal solution of (5) as $h, \delta \to 0$.
If, additionally, the range of the operator $A$ is closed, then, for all sufficiently small $h$, $\delta$, and $\alpha$, the following estimate holds:
$$\|u_\alpha^0 - u_0^*\| = O(h + \delta + \alpha).$$
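In finite dimensions, every range is closed, and this behavior can be observed directly; for exact data ($h = \delta = 0$), the error of $u_\alpha^0$ decays linearly in $\alpha$. A Python sketch (the diagonal operator is an illustrative choice of ours, not from the paper):

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])   # full rank, so R(A) is closed; exact data below
u_star = np.ones(3)
f = A @ u_star

def u_alpha(alpha):
    """Unconstrained Tikhonov solution (A^T A + alpha I)^{-1} A^T f."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ f)

errs = [np.linalg.norm(u_alpha(a) - u_star) for a in (1e-3, 1e-4, 1e-5)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]
print(ratios)  # both close to 10, i.e., the error is O(alpha)
```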
3. Convergence and Rate of Convergence
In order to derive an estimate for the solutions of equation (1), we will begin this section with lemmas concerning the properties of the Lagrange multipliers for problem (17), which will be used in the proof of the estimates of the rate of convergence. Note that, throughout the section, we will suppose that the parameter of regularization is chosen such that $\alpha \to 0$ and $(h + \delta)/\alpha \to 0$ as $h, \delta \to 0$.
Proof. By (39) and Lemma 4, from the equality we get that Furthermore, since , we have Consequently, for all sufficiently small and . Multiplying both sides of (19) by and taking into consideration the last inequality, we obtain Now, multiplying the equality (see Lemma 1) by and taking into account that , we find that In this way, we obtain the equality Let us observe that Now, having in mind condition (3), estimate (39), Lemma 4, and Theorem 6, we get
In the following theorem, we will present a result concerning the estimates of the corresponding discrepancies.
Proof. First, let us consider the case when . Then, based on (19) and Lemma 1, we have that and
Multiplying both sides of this equality by and taking into account (22), we obtain
The last inequality can be written in the form
In this case, from here and from Theorem 6, it follows that
Now, let us consider the second case. In this case, from the preceding equality, by Theorem 6 and Lemma 5, and having in mind (37) and condition (12), we obtain the estimate
The third estimate in the theorem is an immediate consequence of (12) and the equality
Before presenting the proof of the main result, we will prove two lemmas: the first regarding the convergence of the Lagrange multipliers from equality (17) and the second concerning the projection of the normal solution onto the corresponding subspace.
Proof. From , it follows that and Then, there is such that and consequently Adding and subtracting on the right side of this equality and then multiplying both sides by , we obtain Observe that remains bounded when , and . Now, using (37), we have that Besides, (Lemma 2) and Now, from (63), it follows that It means that for all sufficiently small and . Multiplying (19) by and taking into account the last inequality, we obtain If we replace and in the previous equality by and , it becomes Based on Theorem 8, the boundedness of and as follows. Now, it is clear that also remains bounded as .
Proof. From we have that Consequently, The value remains bounded as the parameter converges to zero, in case of , as well as in case of . Now, for , the estimate follows from
The following theorem is the main result of the paper.
Proof. From (17), we obtain the following equality:
From here, taking into account the equality , we have
Multiplying both sides of this equality by and taking into account (13), we obtain
Now, adding and subtracting on the right side of this equality, it becomes
Observe that in case of , we can take . Then, . From here and (78), we obtain Finally, from here, by Lemmas 9 and 10, Theorem 8, and conditions (12) and (3), the estimate (74) follows.
In case of , the estimate (74) is an immediate consequence of Lemmas 7 and 9, Theorem 8, and conditions (12).
The authors would like to thank the reviewers for their valuable suggestions and comments.
References
[1] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.
[2] A. B. Bakushinsky and M. Y. Kokurin, Iterative Methods for Approximate Solution of Inverse Problems, vol. 577 of Mathematics and Its Applications, Springer, Dordrecht, The Netherlands, 2004.
[3] B. D. O. Anderson and J. B. Moore, Optimal Control: Linear Quadratic Methods, Prentice Hall, Englewood Cliffs, NJ, USA, 1990.
[4] G. M. Vaĭnikko and A. Yu. Veretennikov, Iterative Procedures in Ill-Posed Problems, Nauka, Moscow, Russia, 1986 (Russian).