Abstract
We construct two new methods for finding the minimum-norm fixed point of nonexpansive mappings in Hilbert spaces. Some applications are also included.
1. Introduction
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Recall that a mapping $T : C \to C$ is nonexpansive if
$$\|Tx - Ty\| \le \|x - y\| \quad \forall x, y \in C.$$
Iterative algorithms for finding fixed points of nonexpansive mappings are a topic of considerable interest, due to the fact that many nonlinear problems can be reformulated as fixed point equations of nonexpansive mappings. Related works can be found in [1–32].
On the other hand, we notice that it is quite often that one seeks a particular solution of a given nonlinear problem, in particular, the minimum-norm solution. In an abstract way, we may formulate such problems as finding a point $x^\dagger$ with the property
$$x^\dagger \in C, \qquad \|x^\dagger\| = \min\{\|x\| : x \in C\},$$
where $C$ is a nonempty closed convex subset of a real Hilbert space $H$. In other words, $x^\dagger = P_C(0)$ is the (nearest point or metric) projection of the origin onto $C$, where $P_C$ is the metric (or nearest point) projection from $H$ onto $C$.
A typical example is the least-squares solution to the constrained linear inverse problem
$$Ax = b, \quad x \in K, \tag{1.4}$$
where $A$ is a bounded linear operator from $H$ to another real Hilbert space $H_1$, $K$ is a nonempty closed convex subset of $H$, and $b$ is a given point in $H_1$. The least-squares solutions to (1.4) are precisely the minimizers of the minimization problem
$$\min_{x \in K} \frac{1}{2}\|Ax - b\|^2. \tag{1.5}$$
Let $S_b$ denote the (closed convex) solution set of (1.4) (or, equivalently, of (1.5)). It is known that $S_b$ is nonempty if and only if $P_{\overline{A(K)}}(b) \in A(K)$. In this case, $S_b$ has a unique element with minimum norm (equivalently, (1.4) has a unique minimum-norm least-squares solution); that is, there exists a unique point $\hat x \in S_b$ satisfying
$$\|\hat x\| = \min\{\|x\| : x \in S_b\}. \tag{1.6}$$
The so-called $K$-constrained pseudoinverse of $A$ is then defined as the operator $A_K^\dagger$ with domain $D(A_K^\dagger) = \{b \in H_1 : P_{\overline{A(K)}}(b) \in A(K)\}$ and values given by
$$A_K^\dagger(b) = \hat x, \quad b \in D(A_K^\dagger), \tag{1.7}$$
where $\hat x \in S_b$ is the unique solution to (1.6).
Note that the optimality condition for the minimization (1.5) is the following variational inequality (VI): $\hat x \in K$ solves (1.5) if and only if
$$\langle A^*(A\hat x - b), x - \hat x\rangle \ge 0 \quad \forall x \in K, \tag{1.8}$$
where $A^*$ is the adjoint of $A$.
If $b \in D(A_K^\dagger)$, then (1.5) is consistent and its solution set coincides with the solution set of VI (1.8). On the other hand, VI (1.8) can be rewritten as
$$\langle \hat x - (\hat x - \lambda A^*(A\hat x - b)), x - \hat x\rangle \ge 0 \quad \forall x \in K, \tag{1.10}$$
where $\lambda$ is any positive scalar. In the terminology of projections, (1.10) is equivalent to the fixed point equation
$$\hat x = P_K(\hat x - \lambda A^*(A\hat x - b)).$$
It is not hard to see that for $0 < \lambda \le 2/\|A\|^2$, the mapping $x \mapsto P_K(x - \lambda A^*(Ax - b))$ is nonexpansive. Therefore, finding the least-squares solution of the constrained linear inverse problem (1.4) is equivalent to finding the minimum-norm fixed point of this nonexpansive mapping.
Motivated by the above least-squares solution to constrained linear inverse problems, we will study the general case of finding the minimum-norm fixed point of a nonexpansive mapping $T : C \to C$: find $x^\dagger \in \mathrm{Fix}(T)$ such that
$$\|x^\dagger\| = \min\{\|x\| : x \in \mathrm{Fix}(T)\}, \tag{1.11}$$
where $\mathrm{Fix}(T)$ denotes the set of fixed points of $T$ (throughout we always assume that $\mathrm{Fix}(T) \ne \emptyset$).
We next briefly review two historical approaches that relate to the minimum-norm fixed point problem (1.11).
Browder [1] introduced an implicit scheme as follows. Fix $u \in C$, and for each $t \in (0,1)$, let $x_t$ be the unique fixed point in $C$ of the contraction $x \mapsto tu + (1-t)Tx$, which maps $C$ into $C$:
$$x_t = tu + (1-t)Tx_t. \tag{1.12}$$
Browder proved that
$$\lim_{t \to 0^+} x_t = P_{\mathrm{Fix}(T)}(u). \tag{1.13}$$
That is, the strong limit of $x_t$ as $t \to 0^+$ is the point of $\mathrm{Fix}(T)$ nearest to $u$.
Halpern [4], on the other hand, introduced an explicit scheme. Again fix $u \in C$. Then with a sequence $(\alpha_n) \subset (0,1)$ and an arbitrary initial guess $x_0 \in C$, we can define a sequence $(x_n)$ through the recursive formula
$$x_{n+1} = \alpha_n u + (1-\alpha_n)Tx_n, \quad n \ge 0. \tag{1.14}$$
It is now known that this sequence converges in norm to the same limit $P_{\mathrm{Fix}(T)}(u)$ as Browder's implicit scheme (1.12) if the sequence $(\alpha_n)$ satisfies assumptions (A1), (A2), and (A3) as follows: (A1) $\lim_{n\to\infty}\alpha_n = 0$; (A2) $\sum_{n=0}^\infty \alpha_n = \infty$; (A3) either $\sum_{n=0}^\infty |\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty} \alpha_n/\alpha_{n+1} = 1$.
Some more progress on the investigation of the implicit and explicit schemes (1.12) and (1.14) can be found in [33–42]. We notice that the above two methods do find the minimum-norm fixed point $x^\dagger$ of $T$ if $0 \in C$. However, if $0 \notin C$, then neither Browder's nor Halpern's method works to find the minimum-norm element $x^\dagger$. The reason is simple: if $0 \notin C$, we cannot take $u = 0$ in either (1.12) or (1.14), since the contraction $x \mapsto (1-t)Tx$ is no longer a self-mapping of $C$ (hence (1.12) may fail to have a fixed point), or $(1-\alpha_n)Tx_n$ may not belong to $C$, and consequently, the sequence $(x_n)$ may be undefined. In order to overcome the difficulties caused by the possible exclusion of the origin from $C$, we introduce the following two remedies.
For Browder's method, we consider the contraction $x \mapsto P_C[(1-t)Tx]$ for some $t \in (0,1)$. Since this contraction clearly maps $C$ into $C$, it has a unique fixed point, which is still denoted by $x_t$; that is, $x_t = P_C[(1-t)Tx_t]$. For Halpern's method, we consider the following iterative algorithm: $x_{n+1} = P_C[(1-\alpha_n)Tx_n]$, $n \ge 0$. It is easily seen that the net $(x_t)$ and the sequence $(x_n)$ are well defined (i.e., $x_t \in C$ and $x_n \in C$ for all $n$).
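To make the remedied explicit scheme concrete, the following Python sketch runs the iteration $x_{n+1} = P_C[(1-\alpha_n)Tx_n]$ on a toy example of our own choosing (not from the paper): $C$ is the closed ball of radius 1 centered at $(2,0)$ in $\mathbb{R}^2$, so $0 \notin C$, and $T$ is the identity map, whence $\mathrm{Fix}(T) = C$ and the minimum-norm fixed point is $P_C(0) = (1,0)$. All function names are ours.

```python
import math

def proj_ball(y, center=(2.0, 0.0), radius=1.0):
    """Metric projection onto the closed ball B(center, radius)."""
    dx, dy = y[0] - center[0], y[1] - center[1]
    d = math.hypot(dx, dy)
    if d <= radius:
        return y
    s = radius / d
    return (center[0] + s * dx, center[1] + s * dy)

def modified_halpern(T, proj, x0, n_iters=2000):
    """Run x_{n+1} = P_C[(1 - alpha_n) T x_n] with alpha_n = 1/(n+2)."""
    x = x0
    for n in range(n_iters):
        alpha = 1.0 / (n + 2)
        tx = T(x)
        x = proj(((1 - alpha) * tx[0], (1 - alpha) * tx[1]))
    return x

# T = identity, so every point of C is a fixed point
x = modified_halpern(lambda p: p, proj_ball, (2.0, 1.0))
# x approaches (1, 0), the minimum-norm point of C
```

The choice $\alpha_n = 1/(n+2)$ satisfies the usual conditions ($\alpha_n \to 0$, $\sum_n \alpha_n = \infty$, $\sum_n |\alpha_{n+1} - \alpha_n| < \infty$).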
The purpose of this paper is to prove that both the implicit and the explicit methods above converge strongly to the minimum-norm fixed point of the nonexpansive mapping $T$. Some applications are also included.
2. Preliminaries
Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. Recall that the nearest point (or metric) projection $P_C$ from $H$ onto $C$ is defined as follows: for each point $x \in H$, $P_C x$ is the unique point in $C$ with the property
$$\|x - P_C x\| \le \|x - y\| \quad \forall y \in C. \tag{2.1}$$
Note that $P_C x$ is characterized by the inequality
$$\langle x - P_C x, y - P_C x\rangle \le 0 \quad \forall y \in C. \tag{2.2}$$
Consequently, $P_C$ is nonexpansive (indeed, (2.2) even implies that $P_C$ is firmly nonexpansive).
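The characterizing inequality (2.2) and the nonexpansiveness of $P_C$ can be checked numerically on a simple convex set. The sketch below (our own toy example: a box in $\mathbb{R}^2$, whose projection is componentwise clipping) samples random pairs of points.

```python
import random

def proj_box(x, lo=(1.0, -1.0), hi=(3.0, 2.0)):
    """Metric projection onto the box [lo, hi]: componentwise clipping."""
    return tuple(min(max(xi, l), h) for xi, l, h in zip(x, lo, hi))

def norm(u):
    return sum(c * c for c in u) ** 0.5

random.seed(0)
ok = True
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    px, py = proj_box(x), proj_box(y)
    # nonexpansiveness: ||P x - P y|| <= ||x - y|| (up to rounding)
    if norm((px[0] - py[0], px[1] - py[1])) > norm((x[0] - y[0], x[1] - y[1])) + 1e-12:
        ok = False
    # characterization (2.2): <x - P x, c - P x> <= 0 for every c in C
    c = (random.uniform(1, 3), random.uniform(-1, 2))
    if (x[0] - px[0]) * (c[0] - px[0]) + (x[1] - px[1]) * (c[1] - px[1]) > 1e-12:
        ok = False
```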
Below is the so-called demiclosedness principle for nonexpansive mappings.
Lemma 2.1 (cf. [7]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping with fixed points. If $(x_n)$ is a sequence in $C$ such that $x_n \to x$ weakly and $(I - T)x_n \to 0$ strongly, then $x \in \mathrm{Fix}(T)$.
Finally we state the following elementary result on convergence of real sequences.
Lemma 2.2 (see [19]). Let $(a_n)$ be a sequence of nonnegative real numbers satisfying
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n, \quad n \ge 0,$$
where $(\gamma_n) \subset (0,1)$ and $(\delta_n)$ are such that (i) $\sum_{n=0}^\infty \gamma_n = \infty$; (ii) either $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^\infty |\gamma_n\delta_n| < \infty$. Then $(a_n)$ converges to 0.
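A quick numerical illustration of Lemma 2.2, with the hypothetical choices $\gamma_n = \delta_n = 1/(n+1)$ (these satisfy condition (i) and, since $\delta_n \to 0$, the first alternative of condition (ii)):

```python
# Iterate a_{n+1} = (1 - gamma_n) a_n + gamma_n * delta_n from a_0 = 1.
a = 1.0
for n in range(1, 200001):
    gamma = 1.0 / (n + 1)   # sum of gamma_n diverges
    delta = 1.0 / (n + 1)   # limsup delta_n <= 0
    a = (1 - gamma) * a + gamma * delta
# a decays roughly like (log n)/n, so it is now very small
```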
We use the following notation: (i) $\mathrm{Fix}(T)$ stands for the set of fixed points of $T$; (ii) $x_n \rightharpoonup x$ stands for the weak convergence of $(x_n)$ to $x$; (iii) $x_n \to x$ stands for the strong convergence of $(x_n)$ to $x$.
3. Main Results
The aim of this section is to introduce some methods for finding the minimum-norm fixed point of a nonexpansive mapping $T$. First, we prove the following theorem by using an implicit method.
Theorem 3.1. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and $T : C \to C$ a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. For each $t \in (0,1)$, let $x_t$ be defined as the unique solution of the fixed point equation
$$x_t = P_C[(1-t)Tx_t]. \tag{3.1}$$
Then the net $(x_t)$ converges in norm, as $t \to 0^+$, to the minimum-norm fixed point of $T$.
Proof. First observe that, for each $t \in (0,1)$, $x_t$ is well defined. Indeed, we define a mapping $W_t : C \to C$ by
$$W_t x := P_C[(1-t)Tx], \quad x \in C. \tag{3.2}$$
For $x, y \in C$, we have
$$\|W_t x - W_t y\| = \|P_C[(1-t)Tx] - P_C[(1-t)Ty]\| \le (1-t)\|Tx - Ty\| \le (1-t)\|x - y\|, \tag{3.3}$$
which implies that $W_t$ is a self-contraction of $C$. Hence $W_t$ has a unique fixed point $x_t$, which is the unique solution of the fixed point equation (3.1).
Next we prove that $(x_t)$ is bounded. Take $z \in \mathrm{Fix}(T)$. From (3.1), we have
$$\|x_t - z\| = \|P_C[(1-t)Tx_t] - P_C z\| \le \|(1-t)(Tx_t - z) - tz\| \le (1-t)\|x_t - z\| + t\|z\|, \tag{3.4}$$
that is,
$$\|x_t - z\| \le \|z\|. \tag{3.5}$$
Hence, $(x_t)$ is bounded and so is $(Tx_t)$.
From (3.1), we have
$$\|x_t - Tx_t\| = \|P_C[(1-t)Tx_t] - P_C(Tx_t)\| \le \|(1-t)Tx_t - Tx_t\| = t\|Tx_t\|, \tag{3.6}$$
that is,
$$\|x_t - Tx_t\| \to 0 \quad \text{as } t \to 0^+. \tag{3.7}$$
Next we show that $(x_t)$ is relatively norm-compact as $t \to 0^+$. Let $(t_n) \subset (0,1)$ be a sequence such that $t_n \to 0^+$ as $n \to \infty$. Put $x_n := x_{t_n}$. From (3.7), we have
$$\|x_n - Tx_n\| \to 0 \quad \text{as } n \to \infty. \tag{3.8}$$
Again from (3.1) and the projection characterization (2.2), we get, for any $z \in \mathrm{Fix}(T)$,
$$\langle (1-t)Tx_t - x_t, x_t - z\rangle \ge 0. \tag{3.9}$$
It turns out that
$$\|x_t - z\|^2 \le \langle (1-t)Tx_t - z, x_t - z\rangle = (1-t)\langle Tx_t - Tz, x_t - z\rangle + t\langle z, z - x_t\rangle \le (1-t)\|x_t - z\|^2 + t\langle z, z - x_t\rangle \le (1-t)\|x_t - z\|^2 + tM, \tag{3.10}$$
where $M$ is some constant such that $M \ge \sup\{|\langle z, z - x_t\rangle| : t \in (0,1)\}$, which is finite since $(x_t)$ is bounded. In particular, we get from (3.10)
$$\|x_{t_n} - z\|^2 \le \langle z, z - x_{t_n}\rangle, \quad z \in \mathrm{Fix}(T). \tag{3.11}$$
Since $(x_{t_n})$ is bounded, without loss of generality, we may assume that $(x_{t_n})$ converges weakly to a point $\tilde x \in C$. Noticing (3.8), we can use Lemma 2.1 to get $\tilde x \in \mathrm{Fix}(T)$. Therefore we can substitute $\tilde x$ for $z$ in (3.11) to get
$$\|x_{t_n} - \tilde x\|^2 \le \langle \tilde x, \tilde x - x_{t_n}\rangle. \tag{3.12}$$
However, $x_{t_n} \rightharpoonup \tilde x$. This together with (3.12) guarantees that $x_{t_n} \to \tilde x$ in norm. The net $(x_t)$ is therefore relatively compact, as $t \to 0^+$, in the norm topology.
Now we return to (3.11) and take the limit as $n \to \infty$ to get
$$\|\tilde x - z\|^2 \le \langle z, z - \tilde x\rangle, \quad z \in \mathrm{Fix}(T). \tag{3.13}$$
This is equivalent to
$$\langle \tilde x, \tilde x - z\rangle \le 0, \quad z \in \mathrm{Fix}(T). \tag{3.14}$$
By (2.2), this means $\tilde x = P_{\mathrm{Fix}(T)}(0)$. Therefore, every norm cluster point of $(x_t)$ equals $P_{\mathrm{Fix}(T)}(0)$. This is sufficient to conclude that the entire net $(x_t)$ converges in norm to $\tilde x$, and $\tilde x$ is the minimum-norm fixed point of $T$. This completes the proof.
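The implicit scheme of Theorem 3.1 can be tested numerically: since $W_t = P_C[(1-t)T\,\cdot\,]$ is a $(1-t)$-contraction, $x_t$ can be computed by Picard iteration. The sketch below uses a toy example of our own (not from the paper): $C = [1,3]^2$ (so $0 \notin C$) and $T = P_D$, the projection onto the segment $D$ from $(1,3)$ to $(3,1)$; then $T$ is nonexpansive, $\mathrm{Fix}(T) = D$, and the minimum-norm fixed point is $(2,2)$.

```python
def proj_box(x):
    """Projection onto C = [1,3] x [1,3]."""
    return tuple(min(max(c, 1.0), 3.0) for c in x)

def proj_segment(x):
    """Projection onto the segment D from (1,3) to (3,1); this is the
    nonexpansive mapping T, with Fix(T) = D."""
    # parametrize D as (1,3) + s*(2,-2) with s in [0,1]; |(2,-2)|^2 = 8
    s = ((x[0] - 1.0) * 2.0 + (x[1] - 3.0) * (-2.0)) / 8.0
    s = min(max(s, 0.0), 1.0)
    return (1.0 + 2.0 * s, 3.0 - 2.0 * s)

def implicit_point(t, inner_iters=20000):
    """Solve x_t = P_C[(1-t) T x_t] by Picard iteration on the
    (1-t)-contraction W_t = P_C((1-t) T(.))."""
    x = (3.0, 1.0)
    for _ in range(inner_iters):
        tx = proj_segment(x)
        x = proj_box(((1.0 - t) * tx[0], (1.0 - t) * tx[1]))
    return x

x = implicit_point(1e-3)
# as t -> 0+, x_t approaches (2,2), the minimum-norm point of D
```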
Next, we introduce an explicit algorithm for finding the minimum-norm fixed point of nonexpansive mappings.
Theorem 3.2. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. For given $x_0 \in C$, define a sequence $(x_n)$ iteratively by
$$x_{n+1} = P_C[(1-\alpha_n)Tx_n], \quad n \ge 0, \tag{3.15}$$
where $(\alpha_n) \subset (0,1)$ satisfies the following conditions: (C1) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^\infty \alpha_n = \infty$; (C2) $\sum_{n=0}^\infty |\alpha_{n+1} - \alpha_n| < \infty$. Then the sequence $(x_n)$ converges strongly to the minimum-norm fixed point of $T$.
Proof. First we prove that the sequence $(x_n)$ is bounded. Pick $z \in \mathrm{Fix}(T)$. Then, we have
$$\|x_{n+1} - z\| = \|P_C[(1-\alpha_n)Tx_n] - P_C z\| \le \|(1-\alpha_n)(Tx_n - z) - \alpha_n z\| \le (1-\alpha_n)\|x_n - z\| + \alpha_n\|z\|. \tag{3.16}$$
By induction,
$$\|x_n - z\| \le \max\{\|x_0 - z\|, \|z\|\}, \quad n \ge 0, \tag{3.17}$$
so $(x_n)$ and $(Tx_n)$ are bounded. Next, we estimate $\|x_{n+1} - x_n\|$. From (3.15), we have
$$\|x_{n+1} - x_n\| \le \|(1-\alpha_n)Tx_n - (1-\alpha_{n-1})Tx_{n-1}\| \le (1-\alpha_n)\|x_n - x_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\|Tx_{n-1}\|. \tag{3.18}$$
This together with Lemma 2.2 implies that
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0. \tag{3.19}$$
Note that
$$\|x_n - Tx_n\| \le \|x_n - x_{n+1}\| + \|P_C[(1-\alpha_n)Tx_n] - P_C(Tx_n)\| \le \|x_n - x_{n+1}\| + \alpha_n\|Tx_n\|. \tag{3.20}$$
Thus,
$$\lim_{n\to\infty}\|x_n - Tx_n\| = 0. \tag{3.21}$$
We next show that
$$\limsup_{n\to\infty}\langle x^\dagger, x^\dagger - x_n\rangle \le 0, \tag{3.22}$$
where $x^\dagger = P_{\mathrm{Fix}(T)}(0)$, the minimum-norm fixed point of $T$. To see this, we can take a subsequence $(x_{n_k})$ of $(x_n)$ satisfying the properties
$$\lim_{k\to\infty}\langle x^\dagger, x^\dagger - x_{n_k}\rangle = \limsup_{n\to\infty}\langle x^\dagger, x^\dagger - x_n\rangle, \qquad x_{n_k} \rightharpoonup \tilde x. \tag{3.23}$$
Now since $\tilde x \in \mathrm{Fix}(T)$ (this is a consequence of Lemma 2.1 and (3.21)), we get, by combining (3.23) and the projection characterization (2.2),
$$\limsup_{n\to\infty}\langle x^\dagger, x^\dagger - x_n\rangle = \langle x^\dagger, x^\dagger - \tilde x\rangle \le 0, \tag{3.24}$$
which proves (3.22). Finally, we show that $x_n \to x^\dagger$. As a matter of fact, we have
$$\|x_{n+1} - x^\dagger\|^2 \le \|(1-\alpha_n)(Tx_n - x^\dagger) - \alpha_n x^\dagger\|^2 \le (1-\alpha_n)^2\|x_n - x^\dagger\|^2 + 2\alpha_n\langle x^\dagger, x^\dagger - (1-\alpha_n)Tx_n\rangle, \tag{3.25}$$
that is,
$$\|x_{n+1} - x^\dagger\|^2 \le (1-\alpha_n)\|x_n - x^\dagger\|^2 + \alpha_n\delta_n, \quad \delta_n := 2\langle x^\dagger, x^\dagger - (1-\alpha_n)Tx_n\rangle. \tag{3.26}$$
By (C1) and (3.22) (together with (3.21) and the boundedness of $(x_n)$), it is easily found that $\sum_{n=0}^\infty \alpha_n = \infty$ and $\limsup_{n\to\infty}\delta_n \le 0$. We can therefore apply Lemma 2.2 to (3.26) and conclude that $x_n \to x^\dagger$ as $n \to \infty$. This completes the proof.
4. Applications
We consider the following minimization problem:
$$\min_{x \in C} f(x), \tag{4.1}$$
where $C$ is a closed convex subset of a real Hilbert space $H$ and $f : C \to \mathbb{R}$ is a continuously Fréchet differentiable convex function. Denote by $S$ the solution set of (4.1); that is,
$$S = \Big\{x^* \in C : f(x^*) = \min_{x \in C} f(x)\Big\}. \tag{4.2}$$
Assume $S \ne \emptyset$. It is known that a point $x^* \in C$ is a solution of (4.1) if and only if the following optimality condition holds:
$$\langle \nabla f(x^*), x - x^*\rangle \ge 0 \quad \forall x \in C. \tag{4.3}$$
(Here $\nabla f(x^*)$ denotes the gradient of $f$ at $x^*$.) It is also known that the optimality condition (4.3) is equivalent to the following fixed point problem:
$$x^* = P_C(x^* - \lambda\nabla f(x^*)), \tag{4.4}$$
where $\lambda$ is any positive number. Note that the solution set $S$ of (4.1) coincides with the set of fixed points of the mapping $T_\lambda := P_C(I - \lambda\nabla f)$ (for any $\lambda > 0$).
If the gradient $\nabla f$ is $L$-Lipschitz continuous on $C$, then it is not hard to see that the mapping $T_\lambda = P_C(I - \lambda\nabla f)$ is nonexpansive if $0 < \lambda \le 2/L$.
Using Theorems 3.1 and 3.2, we immediately obtain the following result.
Theorem 4.1. Assume $f : C \to \mathbb{R}$ is continuously (Fréchet) differentiable and convex, and its gradient $\nabla f$ is $L$-Lipschitz. Assume the solution set $S$ of the minimization (4.1) is nonempty. Fix $\lambda$ such that $0 < \lambda \le 2/L$.
(i) For each $t \in (0,1)$, let $x_t$ be the unique solution of the fixed point equation
$$x_t = P_C[(1-t)T_\lambda x_t], \quad \text{where } T_\lambda x := P_C(x - \lambda\nabla f(x)). \tag{4.5}$$
Then the net $(x_t)$ converges in norm, as $t \to 0^+$, to the minimum-norm solution of the minimization (4.1).
(ii) Define a sequence $(x_n)$ via the recursive algorithm
$$x_{n+1} = P_C[(1-\alpha_n)T_\lambda x_n], \quad n \ge 0, \tag{4.6}$$
where the sequence $(\alpha_n)$ satisfies conditions (C1) and (C2) in Theorem 3.2. Then $(x_n)$ converges in norm to the minimum-norm solution of the minimization (4.1).
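A sketch of the explicit algorithm (ii) on a hypothetical instance of our own: $f(x) = (x_1 - 2)^2$ on $C = [1,3] \times [-1,1]$, so $\nabla f$ is Lipschitz with $L = 2$, the solution set is $S = \{2\} \times [-1,1]$, and the minimum-norm solution is $(2,0)$. We take $\lambda = 0.5 \le 2/L$ and $\alpha_n = 1/(n+2)$.

```python
def grad_f(x):
    """Gradient of f(x) = (x1 - 2)^2; Lipschitz with constant L = 2."""
    return (2.0 * (x[0] - 2.0), 0.0)

def proj_C(x):
    """Projection onto C = [1,3] x [-1,1]."""
    return (min(max(x[0], 1.0), 3.0), min(max(x[1], -1.0), 1.0))

lam = 0.5  # 0 < lam <= 2/L, so T_lam = P_C(I - lam*grad f) is nonexpansive
x = (3.0, 1.0)
for n in range(5000):
    alpha = 1.0 / (n + 2)
    g = grad_f(x)
    t_lam = proj_C((x[0] - lam * g[0], x[1] - lam * g[1]))  # T_lam x
    x = proj_C(((1 - alpha) * t_lam[0], (1 - alpha) * t_lam[1]))
# x approaches (2, 0), the minimum-norm minimizer of f over C
```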
We next turn to consider a convexly constrained linear inverse problem
$$Ax = b, \quad x \in K, \tag{4.7}$$
where $A$ is a bounded linear operator with nonclosed range from a real Hilbert space $H$ to another real Hilbert space $H_1$, $K$ is a nonempty closed convex subset of $H$, and $b \in H_1$ is given.
Problem (4.7) models many applied problems arising from image reconstruction, learning theory, and so on.
Due to errors, noise, and other causes, (4.7) is often ill-posed and inconsistent; thus regularization and least squares are taken into consideration; that is, we look for a solution to the minimization problem
$$\min_{x \in K} \frac{1}{2}\|Ax - b\|^2. \tag{4.8}$$
Let $S_b$ denote the solution set of (4.8). It is always closed and convex (but possibly empty). It is known that $S_b$ is nonempty if and only if $P_{\overline{A(K)}}(b) \in A(K)$. In this case, $S_b$ has a unique element with minimum norm; that is, there exists a unique point $x^\dagger \in S_b$ satisfying
$$\|x^\dagger\| = \min\{\|x\| : x \in S_b\}. \tag{4.9}$$
The $K$-constrained pseudoinverse of $A$, $A_K^\dagger$, is defined as
$$D(A_K^\dagger) = \{b \in H_1 : P_{\overline{A(K)}}(b) \in A(K)\}, \qquad A_K^\dagger(b) = x^\dagger, \quad b \in D(A_K^\dagger), \tag{4.10}$$
where $x^\dagger \in S_b$ is the unique solution to (4.9).
Set
$$f(x) = \frac{1}{2}\|Ax - b\|^2. \tag{4.11}$$
Then $f$ is quadratic with gradient
$$\nabla f(x) = A^*(Ax - b), \tag{4.12}$$
where $A^*$ is the adjoint of $A$. Clearly $\nabla f$ is Lipschitz with constant $L = \|A\|^2$. Therefore, applying Theorem 4.1, we obtain the following result.
Theorem 4.2. Let $b \in D(A_K^\dagger)$. Fix $\lambda$ such that $0 < \lambda \le 2/\|A\|^2$.
(i) For each $t \in (0,1)$, let $x_t$ be the unique solution of the fixed point equation
$$x_t = P_K[(1-t)T_\lambda x_t], \quad \text{where } T_\lambda x := P_K(x - \lambda A^*(Ax - b)). \tag{4.13}$$
Then the net $(x_t)$ converges in norm, as $t \to 0^+$, to $A_K^\dagger(b)$.
(ii) Define a sequence $(x_n)$ via the recursive algorithm
$$x_{n+1} = P_K[(1-\alpha_n)T_\lambda x_n], \quad n \ge 0, \tag{4.14}$$
where the sequence $(\alpha_n)$ satisfies conditions (C1) and (C2) in Theorem 3.2. Then $(x_n)$ converges in norm to $A_K^\dagger(b)$.
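As a concrete (hypothetical) instance of Theorem 4.2(ii), chosen by us for illustration: take $A = [1\ \ 1]$ (a $1\times 2$ matrix, so $\|A\|^2 = 2$), $b = 2$, and $K = [0.5, 3] \times [0, 3]$. The least-squares solution set is the segment $\{x_1 + x_2 = 2\} \cap K$, and its minimum-norm element is $A_K^\dagger(b) = (1,1)$. We take $\lambda = 0.5 \le 2/\|A\|^2$ and $\alpha_n = 1/(n+2)$.

```python
def proj_K(x):
    """Projection onto K = [0.5, 3] x [0, 3]."""
    return (min(max(x[0], 0.5), 3.0), min(max(x[1], 0.0), 3.0))

def T_lam(x, lam=0.5, b=2.0):
    """T_lam x = P_K(x - lam * A^T (A x - b)) with A = [1, 1]."""
    r = x[0] + x[1] - b          # residual A x - b (a scalar here)
    return proj_K((x[0] - lam * r, x[1] - lam * r))

x = (3.0, 0.0)
for n in range(5000):
    alpha = 1.0 / (n + 2)
    tx = T_lam(x)
    x = proj_K(((1 - alpha) * tx[0], (1 - alpha) * tx[1]))
# x approaches (1, 1), the minimum-norm least-squares solution A_K^dagger(b)
```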
Acknowledgments
The authors are very grateful to the referees for their comments and suggestions, which improved the presentation of this paper. Y.-C. Liou was supported in part by NSC 99-2221-E-230-006. Y. Yao was supported in part by the Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin and by NSFC 11071279.