Abstract

Let $H$ be a real Hilbert space and $C \subset H$ a closed convex subset. Let $T: C \to C$ be a nonexpansive mapping with a nonempty set of fixed points $\operatorname{Fix}(T)$. Kim and Xu (2005) introduced a modified Mann iteration $x_0 = x \in C$, $y_n = \alpha_n x_n + (1 - \alpha_n) T x_n$, $x_{n+1} = \beta_n u + (1 - \beta_n) y_n$, where $u \in C$ is an arbitrary (but fixed) element, and $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0, 1)$. In the case where $0 \in C$, the minimum-norm fixed point of $T$ can be obtained by taking $u = 0$. But in the case where $0 \notin C$, this iteration process becomes invalid because $x_{n+1}$ may not belong to $C$. In order to overcome this weakness, we introduce a new modified Mann iteration by a boundary point method (see Section 3 for details) for finding the minimum-norm fixed point of $T$ and prove its strong convergence under some assumptions. Since our algorithm does not involve the computation of the metric projection $P_C$, which is often used to guarantee strong convergence, it is easy to implement. Our results improve and extend the results of Kim, Xu, and some others.

1. Introduction

Let $C$ be a subset of a real Hilbert space $H$, whose inner product and induced norm are denoted by $\langle \cdot, \cdot \rangle$ and $\| \cdot \|$, respectively. A mapping $T: C \to C$ is called nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$. A point $x \in C$ is called a fixed point of $T$ if $Tx = x$. Denote by $\operatorname{Fix}(T)$ the set of fixed points of $T$. Throughout this paper, $\operatorname{Fix}(T)$ is always assumed to be nonempty.

The iterative approximation of nonexpansive mappings has been extensively investigated by many authors (see [1–12] and the references therein). A classical iterative scheme was introduced by Mann [13], which is defined as follows: take an initial guess $x_0 \in C$ arbitrarily and define $\{x_n\}$ recursively by
\[
x_{n+1} = \alpha_n x_n + (1 - \alpha_n) T x_n, \quad n \ge 0, \qquad (1)
\]
where $\{\alpha_n\}$ is a sequence in the interval $[0, 1]$. It is well known that, under certain conditions, the sequence $\{x_n\}$ generated by (1) converges weakly to a fixed point of $T$, and that the Mann iteration may fail to converge strongly even in the setting of infinite-dimensional Hilbert spaces [14].

Some attempts to modify the Mann iteration method (1) so that strong convergence is guaranteed have been made. Nakajo and Takahashi [1] proposed the following modification of the Mann iteration method (1):
\[
\begin{aligned}
& x_0 \in C \ \text{chosen arbitrarily}, \\
& y_n = \alpha_n x_n + (1 - \alpha_n) T x_n, \\
& C_n = \{ z \in C : \| y_n - z \| \le \| x_n - z \| \}, \\
& Q_n = \{ z \in C : \langle x_n - z, \, x_0 - x_n \rangle \ge 0 \}, \\
& x_{n+1} = P_{C_n \cap Q_n} x_0, \quad n \ge 0,
\end{aligned}
\qquad (2)
\]
where $P_K$ denotes the metric projection from $H$ onto a closed convex subset $K$ of $H$. They proved that if the sequence $\{\alpha_n\}$ is bounded away from one, then $\{x_n\}$ defined by (2) converges strongly to $P_{\operatorname{Fix}(T)} x_0$. However, at each iteration step, an additional projection needs to be computed, which is not easy in general. To overcome this weakness, Kim and Xu [15] proposed a simpler modification of Mann's iteration scheme, which generates the iteration sequence $\{x_n\}$ via the following formula:
\[
\begin{aligned}
& x_0 = x \in C \ \text{chosen arbitrarily}, \\
& y_n = \alpha_n x_n + (1 - \alpha_n) T x_n, \\
& x_{n+1} = \beta_n u + (1 - \beta_n) y_n, \quad n \ge 0,
\end{aligned}
\qquad (3)
\]
where $u$ is an arbitrary (but fixed) element in $C$, and $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0, 1)$. In the setting of Banach spaces, Kim and Xu proved that the sequence $\{x_n\}$ generated by (3) converges strongly to a fixed point of $T$ under certain appropriate assumptions on the sequences $\{\alpha_n\}$ and $\{\beta_n\}$.
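For concreteness, the following minimal sketch (in Python with NumPy) implements scheme (3); the nonexpansive mapping, the anchor $u$, the starting point, and the parameter sequences are illustrative choices made for this sketch and are not taken from [15].

    import numpy as np

    def kim_xu(T, u, x0, alpha, beta, n_iter=1000):
        # Scheme (3): y_n = a_n x_n + (1 - a_n) T x_n,  x_{n+1} = b_n u + (1 - b_n) y_n.
        x = x0
        for n in range(n_iter):
            a, b = alpha(n), beta(n)
            y = a * x + (1 - a) * T(x)
            x = b * u + (1 - b) * y
        return x

    # Illustrative data: T is the metric projection onto the ball B((2, 0), 1),
    # which is nonexpansive and whose fixed point set is the ball itself.
    center, radius = np.array([2.0, 0.0]), 1.0
    def T(z):
        d = z - center
        nd = np.linalg.norm(d)
        return z if nd <= radius else center + radius * d / nd

    u = np.zeros(2)  # taking u = 0 targets the minimum-norm fixed point (here (1, 0))
    x_approx = kim_xu(T, u, np.array([5.0, 3.0]),
                      alpha=lambda n: 1.0 / (n + 2), beta=lambda n: 1.0 / (n + 2))
    print(x_approx)  # expected to approach (1, 0) as n_iter grows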

In many practical problems, such as optimization problems, finding the minimum-norm fixed point of nonexpansive mappings is quite important. In the case where $0 \in C$, taking $u = 0$ in (3), the sequence $\{x_n\}$ generated by (3) converges strongly to the minimum-norm fixed point of $T$ [15]. But, in the case where $0 \notin C$, the iteration scheme (3) becomes invalid because $x_{n+1}$ may not belong to $C$.

To overcome this weakness, a natural way to modify algorithm (3) is to adopt the metric projection $P_C$ so that the iteration sequence belongs to $C$; that is, one may consider the following scheme:
\[
\begin{aligned}
& y_n = \alpha_n x_n + (1 - \alpha_n) T x_n, \\
& x_{n+1} = P_C \bigl[ (1 - \beta_n) y_n \bigr], \quad n \ge 0.
\end{aligned}
\qquad (4)
\]
However, since the computation of a projection onto a closed convex subset is generally difficult, algorithm (4) may not be a good choice.

The main purpose of this paper is to introduce a new modified Mann iteration for finding the minimum-norm fixed point of $T$, which not only converges strongly under some assumptions but also involves no projection operators. At each iteration step, a point in $\partial C$ (the boundary of $C$) is determined in a particular way, so our modification is called the boundary point method (see Section 3 for details). Moreover, since our algorithm does not involve the computation of the metric projection, it is very easy to implement.

The rest of this paper is organized as follows. Some useful lemmas are listed in the next section. In the last section, a function $h$ defined on $C$ is given first, which is important for constructing our algorithm; then our algorithm is introduced and the strong convergence theorem is proved.

2. Preliminaries

Throughout this paper, we adopt the following notations:
(1) $x_n \to x$ means that $\{x_n\}$ converges strongly to $x$;
(2) $x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$;
(3) $\omega_w(x_n)$ denotes the set of weak cluster points of $\{x_n\}$ (i.e., $\omega_w(x_n) = \{x \in H : x_{n_j} \rightharpoonup x \ \text{for some subsequence} \ \{x_{n_j}\} \ \text{of} \ \{x_n\}\}$);
(4) $\partial C$ denotes the boundary of $C$.

We need some lemmas and facts listed as follows.

Lemma 1 (see [16]). Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $P_C$ be the (metric or nearest point) projection from $H$ onto $C$ (i.e., for $x \in H$, $P_C x$ is the only point in $C$ such that $\|x - P_C x\| = \inf\{\|x - z\| : z \in C\}$). Given $x \in H$ and $z \in C$. Then $z = P_C x$ if and only if there holds the following relation:
\[
\langle x - z, \, y - z \rangle \le 0 \quad \text{for all } y \in C.
\]

Since $\operatorname{Fix}(T)$ is a closed convex subset of the real Hilbert space $H$, the metric projection $P_{\operatorname{Fix}(T)}$ is well defined, and thus there exists a unique element in $\operatorname{Fix}(T)$, denoted by $x^{\dagger}$, such that $\|x^{\dagger}\| = \min\{\|x\| : x \in \operatorname{Fix}(T)\}$; that is, $x^{\dagger} = P_{\operatorname{Fix}(T)}(0)$. The point $x^{\dagger}$ is called the minimum-norm fixed point of $T$.

Lemma 2 (see [17]). Let $H$ be a real Hilbert space. Then the following well-known results hold:
(G1) $\|x + y\|^2 \le \|x\|^2 + 2 \langle y, \, x + y \rangle$ for all $x, y \in H$;
(G2) $\|\lambda x + (1 - \lambda) y\|^2 = \lambda \|x\|^2 + (1 - \lambda) \|y\|^2 - \lambda (1 - \lambda) \|x - y\|^2$ for all $x, y \in H$ and $\lambda \in [0, 1]$.

We will give a definition in order to introduce the next lemma. A set $D \subset H$ is said to be weakly closed if, for any sequence $\{x_n\} \subset D$ such that $x_n \rightharpoonup x$, there holds $x \in D$.

Lemma 3 (see [18, 19]). If $D \subset H$ is convex, then $D$ is weakly closed if and only if $D$ is closed.

Assume that $D \subset H$ is weakly closed. A function $f: D \to \mathbb{R}$ is called weakly lower semicontinuous at $x \in D$ if, for any sequence $\{x_n\} \subset D$ such that $x_n \rightharpoonup x$, there holds $f(x) \le \liminf_{n \to \infty} f(x_n)$. Generally, $f$ is called weakly lower semicontinuous over $D$ if it is weakly lower semicontinuous at each point of $D$.

Lemma 4 (see [18, 19]). Let $D$ be a weakly closed subset of a real Hilbert space $H$ and let $f: D \to \mathbb{R}$ be a real function. Then $f$ is weakly lower semicontinuous over $D$ if and only if the set $\{x \in D : f(x) \le a\}$ is a weakly closed subset of $D$ for every $a \in \mathbb{R}$.

Lemma 5 (see [20]). Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $T: C \to C$ be a nonexpansive mapping such that $\operatorname{Fix}(T) \ne \emptyset$. If a sequence $\{x_n\}$ in $C$ is such that $x_n \rightharpoonup z$ and $\|x_n - T x_n\| \to 0$, then $z = Tz$.

The following is a sufficient condition for a real sequence to converge to zero.

Lemma 6 (see [21, 22]). Let $\{a_n\}$ be a nonnegative real sequence satisfying
\[
a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n + \sigma_n, \quad n \ge 0.
\]
If $\{\gamma_n\} \subset (0, 1)$, $\{\delta_n\}$, and $\{\sigma_n\}$ satisfy the conditions:
(A1) $\sum_{n=0}^{\infty} \gamma_n = \infty$;
(A2) either $\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} |\gamma_n \delta_n| < \infty$;
(A3) $\sum_{n=0}^{\infty} \sigma_n < \infty$;
then $a_n \to 0$ as $n \to \infty$.
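To illustrate the type of recursion treated in Lemma 6, the following short snippet (in Python) iterates a concrete instance in which conditions (A1)-(A3) hold; the particular sequences are illustrative choices, not taken from [21, 22].

    # a_{n+1} = (1 - g_n) a_n + g_n d_n + s_n with
    # g_n = 1/n (not summable), d_n = 1/n -> 0, s_n = 1/n^2 (summable).
    a = 1.0
    for n in range(1, 200001):
        g, d, s = 1.0 / n, 1.0 / n, 1.0 / n ** 2
        a = (1 - g) * a + g * d + s
    print(a)  # becomes arbitrarily small as the number of iterations grows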

3. Iterative Algorithm

Let $C$ be a closed convex subset of a real Hilbert space $H$. In order to give our main results, we first introduce a function $h: C \to [0, 1]$ by the following definition:
\[
h(x) = \min \{ \lambda \in [0, 1] : \lambda x \in C \}, \quad x \in C.
\]
Since $C$ is closed and convex, it is easy to see that $h$ is well defined. Obviously, $h(x) = 0$ for all $x \in C$ in the case where $0 \in C$. In the case where $0 \notin C$, it is also easy to see that $h(x) > 0$ and $h(x) x \in \partial C$ for every $x \in C$ (otherwise, $h(x) = 0$; we have $0 = h(x) x \in C$; this is a contradiction).
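As a simple illustration (a hypothetical example, not taken from the original text), let $C = \{ y \in H : \langle w, y \rangle \ge \beta \}$ be a closed half-space with $w \ne 0$ and $\beta > 0$, so that $0 \notin C$. For $x \in C$ we have $\langle w, x \rangle \ge \beta > 0$, and $\lambda x \in C$ exactly when $\lambda \langle w, x \rangle \ge \beta$; hence $h(x) = \beta / \langle w, x \rangle \in (0, 1]$, and the point $h(x) x$ lies on the hyperplane $\langle w, y \rangle = \beta$, that is, on $\partial C$.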

An important property of $h$ is given as follows.

Lemma 7. $h$ is weakly lower semicontinuous over $C$.

Proof. If $0 \in C$, then $h(x) = 0$ for all $x \in C$ and the conclusion is clear. For the case $0 \notin C$, using Lemma 4, in order to show that $h$ is weakly lower semicontinuous, it suffices to verify that $D_a = \{x \in C : h(x) \le a\}$ is a weakly closed subset of $C$ for every $a \in \mathbb{R}$; that is, if $\{x_n\} \subset D_a$ is such that $x_n \rightharpoonup x$, then $x \in D_a$ (i.e., $h(x) \le a$). Without loss of generality, we assume that $0 < a < 1$ (otherwise, there hold $D_a = \emptyset$ for $a \le 0$ and $D_a = C$ for $a \ge 1$, resp., and the conclusion holds obviously). Noting that $C$ is convex, we have from the definition of $h$ that, for each $x_n$, $\lambda x_n \in C$ holds for all $\lambda \in [h(x_n), 1]$. Clearly, $a x_n \in C$. Since $x_n \rightharpoonup x$, we have $a x_n \rightharpoonup a x$; using Lemma 3, $C$ is weakly closed, and then $a x \in C$. This implies that $h(x) \le a$. Consequently, $x \in D_a$, and this completes the proof.

Since the function $h$ will be important for constructing the algorithm of this paper below, it is necessary to explain how to calculate $h(x)$ for any given $x \in C$ in actual computing programs. In fact, in practical problems, $C$ is often a level set of a convex function $c: H \to \mathbb{R}$; that is, $C$ is of the form $C = \{x \in H : c(x) \le b\}$, where $b$ is a real constant. Without loss of generality, we assume that $b = 0$ and $0 \notin C$ (i.e., $c(0) > 0$). Then it is easy to see that, for a given $x \in C$, $h(x)$ is the smallest solution of the equation $c(\lambda x) = 0$ in the interval $(0, 1]$. Thus, in order to get the value $h(x)$, we only need to solve an algebraic equation with a single variable $\lambda$, which can be solved easily by many methods, for example, the dichotomy (bisection) method on the interval $[0, 1]$. In general, solving the algebraic equation above is much easier than calculating the metric projection $P_C x$. To illustrate this viewpoint, we give the following simple example.

Example 8. Let $A: H \to H$ be a strongly positive bounded linear operator with coefficient $\bar{\gamma} > 0$; that is, there is a constant $\bar{\gamma} > 0$ with the property $\langle A x, x \rangle \ge \bar{\gamma} \|x\|^2$ for all $x \in H$. Define a convex function $c$ by where is a given point in $H$ and is the only solution of the equation . (Notice that $A$ is one-to-one.) Setting $C = \{x \in H : c(x) \le 0\}$, it is easy to show that $C$ is a nonempty closed convex subset of $H$ such that $0 \notin C$. For a given $x \in C$, in order to get $h(x)$, we let $c(\lambda x) = 0$, where $\lambda$ is an unknown number. Thus we obtain an algebraic equation in $\lambda$, from which an explicit expression for $h(x)$ follows.
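Complementing the discussion above, the following sketch (an illustration, not code from the paper) computes $h(x)$ by the dichotomy method when $C$ is given as a level set $\{x : c(x) \le 0\}$ of a convex function $c$ with $c(0) > 0$; the concrete function $c$ used in the demo is a hypothetical choice.

    def h_of(x, c, iters=60):
        # h(x) = min{lam in [0, 1] : c(lam * x) <= 0}, found by bisection.
        # Assumes c is convex with c(0) > 0 (0 lies outside C) and c(x) <= 0 (x lies in C).
        lo, hi = 0.0, 1.0  # invariant: c(lo * x) > 0 and c(hi * x) <= 0
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if c(mid * x) <= 0.0:
                hi = mid  # mid * x already lies in C; the smallest feasible factor is <= mid
            else:
                lo = mid  # mid * x is still outside C
        return hi

    # Hypothetical one-dimensional level set: C = {x : (x - 3)^2 - 1 <= 0} = [2, 4], with 0 outside C.
    c = lambda x: (x - 3.0) ** 2 - 1.0
    x = 4.0  # a point of C
    lam = h_of(x, c)
    print(lam, lam * x)  # lam is close to 0.5, and lam * x is close to 2.0, the boundary point of C on [0, x]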

Now we give a new modified Mann iteration by the boundary point method.

Algorithm 9. Define $\{x_n\}$ in the following way: where $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0, 1)$, $n \ge 0$.
Since $C$ is closed and convex, we assert by the definition of $h$ that, for any given $x \in C$, $\lambda x \in C$ holds for every $\lambda \in [h(x), 1]$, and hence $x_{n+1} \in C$ is guaranteed, where $\{x_n\}$ is generated by Algorithm 9. Obviously, $x_{n+1} = (1 - \beta_n) y_n$ for all $n \ge 0$ if $0 \in C$. If $0 \notin C$, calculating the value $h(y_n)$ amounts to determining $h(y_n) y_n$, a boundary point of $C$, and thus our algorithm is called the boundary point method.
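As a small numerical check of the feasibility property asserted above (every $\lambda x$ with $\lambda \in [h(x), 1]$ stays in $C$, which is what keeps the iterates of Algorithm 9 in $C$ without any projection), the following snippet uses a hypothetical two-dimensional level set; it is an illustration, not code from the paper.

    import numpy as np

    # Hypothetical C = {z : ||z - p||^2 - r^2 <= 0}, a closed ball that misses the origin.
    p, r = np.array([3.0, 0.0]), 1.0
    c = lambda z: np.linalg.norm(z - p) ** 2 - r ** 2

    def h(x, iters=60):
        # Bisection for h(x) = min{lam in [0, 1] : c(lam * x) <= 0}.
        lo, hi = 0.0, 1.0
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if c(mid * x) <= 0.0 else (mid, hi)
        return hi

    x = np.array([3.5, 0.5])  # a point of C, since c(x) = 0.25 + 0.25 - 1 < 0
    lam_min = h(x)
    # Scaling x by any factor in [h(x), 1] keeps it inside C, so an update of the
    # form described above never needs to be projected back onto C:
    print(all(c(lam * x) <= 1e-9 for lam in np.linspace(lam_min, 1.0, 50)))  # True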

Theorem 10. Assume that $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy the following conditions: (D1); (D2); (D3). Then $\{x_n\}$ generated by (17) converges strongly to $x^{\dagger}$, the minimum-norm fixed point of $T$.

Proof. We first show that $\{x_n\}$ is bounded. Taking $p \in \operatorname{Fix}(T)$ arbitrarily, we have By induction, Thus, $\{x_n\}$ is bounded and so are $\{y_n\}$ and $\{T x_n\}$. As a result, we obtain by condition (D1) that
We next show that It suffices to show that Using (17), it follows by direct calculation that Substituting (24) into (23), we obtain Note the fact that (since is monotone increasing) and conditions (D1)–(D3); we then conclude by Lemma 6 that . Noting (20) and (25), we obtain Using Lemma 5, we deduce that .
Then we show that Indeed, take a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that Without loss of generality, we may assume that $x_{n_j} \rightharpoonup \tilde{x}$. Noticing $\tilde{x} \in \operatorname{Fix}(T)$, we obtain from and Lemma 1 that
Finally, we show that $x_n \to x^{\dagger}$. Using Lemma 2 and (17), it is easy to verify that Hence, where It is not hard to prove that , by conditions (D1) and (D2), and by (29). By Lemma 6, we conclude that $x_n \to x^{\dagger}$, and the proof is finished.

Finally, we point out that a more general algorithm can be given for calculating the fixed point $P_{\operatorname{Fix}(T)} u$ (the fixed point of $T$ nearest to $u$) for any given $u \in H$. In fact, it suffices to modify the definition of the function $h$ to the following form:

Algorithm 11. Define $\{x_n\}$ in the following way: where $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0, 1)$, and $h$ is defined by (33).

By an argument similar to the proof of Theorem 10, it is easy to obtain the result below.

Theorem 12. Assume that $u \in H$, and that $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy the same conditions as in Theorem 10; then $\{x_n\}$ generated by (34) converges strongly to $P_{\operatorname{Fix}(T)} u$.

Acknowledgments

This work was supported in part by the Fundamental Research Funds for the Central Universities (ZXH2012K001) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.