Special Issue: Theory and Algorithms of Variational Inequality and Equilibrium Problems, and Their Applications
A Strong Convergence Algorithm for the Two-Operator Split Common Fixed Point Problem in Hilbert Spaces
The two-operator split common fixed point problem (two-operator SCFP) with firmly nonexpansive mappings is investigated in this paper. This problem covers the problems of split feasibility, convex feasibility, and equilibrium, and it can be used to model significant real-world problems such as intensity-modulated radiation therapy, computed tomography, and sensor networks. An iterative scheme is presented to approximate the minimum norm solution of the two-operator SCFP. The performance of the presented algorithm is compared with that of a recent algorithm for the two-operator SCFP, and the advantage of the presented algorithm is shown through numerical results.
Throughout this paper, $H$ denotes a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and its induced norm $\| \cdot \|$, $I$ the identity mapping on $H$, $\mathbb{N}$ the set of all natural numbers, $\mathbb{R}$ the set of all real numbers, and $P_C$ the metric projection onto a nonempty closed convex set $C$. $\limsup_{n} a_n$ is the limit superior of a sequence $\{a_n\}$, while $\liminf_{n} a_n$ is the limit inferior. For a self-mapping $T$ on $H$, $\mathrm{Fix}(T)$ denotes the set of all fixed points of $T$.
It has been an interesting topic to find zero points of maximal monotone operators. A set-valued map $A : H \to 2^H$ with domain $D(A) = \{x \in H : Ax \neq \emptyset\}$ is called monotone if $\langle x - y, u - v \rangle \ge 0$ for all $x, y \in D(A)$ and for any $u \in Ax$ and $v \in Ay$. A monotone operator $A$ is said to be maximal monotone if its graph $G(A) = \{(x, u) : u \in Ax\}$ is not properly contained in the graph of any other monotone operator. For a positive real number $\lambda$, we denote by $J_\lambda = (I + \lambda A)^{-1}$ the resolvent of a monotone operator $A$; that is, $J_\lambda x = (I + \lambda A)^{-1} x$ for any $x \in H$. A point $v \in H$ is called a zero point of a maximal monotone operator $A$ if $0 \in Av$. In the sequel, we will denote the set of all zero points of $A$ by $A^{-1}0$, which is equal to $\mathrm{Fix}(J_\lambda)$ for any $\lambda > 0$. A well-known method to solve this problem is the proximal point algorithm, which starts with any initial point $x_1 \in H$ and then generates the sequence $\{x_n\}$ in $H$ by $x_{n+1} = J_{\lambda_n} x_n$, $n \in \mathbb{N}$, where $\{\lambda_n\}$ is a sequence of positive real numbers. This algorithm was first introduced by Martinet  and then generally studied by Rockafellar , who devised the iterative sequence by $x_{n+1} = J_{\lambda_n}(x_n + e_n)$, $n \in \mathbb{N}$, (4) where $\{e_n\}$ is an error sequence in $H$. Rockafellar showed that the sequence generated by (4) converges weakly to an element of $A^{-1}0$ provided that $\sum_{n=1}^{\infty} \|e_n\| < \infty$ and $\liminf_{n \to \infty} \lambda_n > 0$. Since then, many authors have conducted research on modifying the sequence in (4) so that strong convergence is guaranteed; compare [3–12] and the references therein.
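As a concrete illustration of the proximal point iteration $x_{n+1} = J_{\lambda_n} x_n$, the following sketch uses the toy operator $A(x) = x$ on the real line (an illustrative assumption, not an operator from this paper), whose resolvent is $J_\lambda(x) = x/(1+\lambda)$ and whose unique zero is $0$.

```python
# Proximal point iteration x_{n+1} = J_{lambda}(x_n) for the toy maximal
# monotone operator A(x) = x on the real line.  The resolvent solves
# y + lam*y = x, so J_lambda(x) = x / (1 + lam); the unique zero of A is 0.

def resolvent(x, lam):
    """Resolvent (I + lam*A)^{-1} of A(x) = x: solve y + lam*y = x."""
    return x / (1.0 + lam)

def proximal_point(x0, lam=1.0, n_iter=50):
    """Run the proximal point algorithm from x0 with a constant step lam."""
    x = x0
    for _ in range(n_iter):
        x = resolvent(x, lam)
    return x

x_star = proximal_point(10.0)   # each step halves the distance to the zero
```

Each iteration contracts the distance to the zero point by the factor $1/(1+\lambda)$, so the iterates converge geometrically in this toy case.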
On the other hand, let $C$ and $Q$ be nonempty closed convex subsets of two Hilbert spaces $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear mapping. The split feasibility problem (SFP) is the problem of finding a point $x^*$ with the property $x^* \in C$ and $Ax^* \in Q$. (5) The SFP was first introduced by Censor and Elfving  for modeling inverse problems which arise from phase retrievals and medical image reconstruction. Recently, it has been found that the SFP can also be used to model intensity-modulated radiation therapy. The most popular algorithm for the SFP is the CQ algorithm introduced by Byrne [14, 15]. The sequence generated by the CQ algorithm converges weakly to a solution of SFP (5); compare [14–16]. Under the assumption that SFP (5) has a solution, there are many algorithms designed to approximate a solution of the SFP; compare [16–23] and the references therein.
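Byrne's CQ algorithm iterates $x_{n+1} = P_C\bigl(x_n - \gamma A^{\top}(I - P_Q)Ax_n\bigr)$ with step size $0 < \gamma < 2/\|A\|^2$. A minimal sketch, assuming hypothetical box-shaped sets $C$ and $Q$ in $\mathbb{R}^2$ (not data from this paper):

```python
import numpy as np

# CQ algorithm for the SFP: find x in C with Ax in Q.
# Illustrative data (not from the paper): C and Q are boxes in R^2, so their
# metric projections are coordinatewise clipping.

def proj_box(x, lo, hi):
    """Metric projection onto the box [lo, hi] (componentwise)."""
    return np.clip(x, lo, hi)

A = np.array([[2.0, 0.0], [0.0, 1.0]])
C_lo, C_hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
Q_lo, Q_hi = np.array([1.0, 0.5]), np.array([3.0, 2.0])

gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # safely below 2 / ||A||^2
x = np.array([5.0, -5.0])                  # arbitrary starting point
for _ in range(500):
    Ax = A @ x
    # gradient of 0.5 * ||(I - P_Q) A x||^2
    grad = A.T @ (Ax - proj_box(Ax, Q_lo, Q_hi))
    x = proj_box(x - gamma * grad, C_lo, C_hi)
```

After the loop, `x` should lie in $C$ with $Ax$ in $Q$ up to numerical tolerance, illustrating the weak-convergence behavior described above in a finite-dimensional setting.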
Later, Censor and Segal  extended the SFP to the split common fixed point problem (SCFP), which is to find a point $x^*$ with the property $x^* \in \bigcap_{i=1}^{p} \mathrm{Fix}(U_i)$ and $Ax^* \in \bigcap_{j=1}^{r} \mathrm{Fix}(T_j)$, (6) where $U_i$, $i = 1, \dots, p$, and $T_j$, $j = 1, \dots, r$, are directed operators in Hilbert spaces. Censor and Segal  gave an algorithm for SCFP (6) in finite-dimensional spaces. Then, Moudafi  called SCFP (6) with $p = r = 1$ the two-operator SCFP and gave an algorithm which generates a sequence converging weakly to a solution of the two-operator SCFP. Very recently, Cui et al.  provided a damped projection algorithm, shown below, to approach a solution of SCFP (6).
Assume that the solution set of the SCFP is nonempty. Start with any initial point and generate a sequence through the iteration: where , , and are sequences satisfying that (i) and ; (ii) ; (iii) . Then, the sequence converges strongly to .
Inspired by the work of [25, 26], this paper presents another algorithm to find the minimum norm solution of the two-operator SCFP. We note that the two-operator SCFP contains the SFP and the zero point problem of maximal monotone operators. Let $P_C$ and $P_Q$ be the metric projections onto $C$ and $Q$, respectively. Putting $U = P_C$ and $T = P_Q$, the two-operator SCFP (6) is reduced to SFP (5). Let $B_1$ and $B_2$ be two maximal monotone operators on $H_1$ and $H_2$, respectively. Replacing $U$ and $T$ with the resolvents $J_\lambda^{B_1}$ and $J_\lambda^{B_2}$, respectively, in (6), the SFP becomes a two-operator SCFP: find $x^* \in \mathrm{Fix}(J_\lambda^{B_1}) = B_1^{-1}0$ with $Ax^* \in \mathrm{Fix}(J_\lambda^{B_2}) = B_2^{-1}0$. Putting $A = I$, the above two-operator SCFP is reduced to the common zero point problem of two maximal monotone operators $B_1$ and $B_2$: find $x^* \in B_1^{-1}0 \cap B_2^{-1}0$.
Consider SCFP (6) with $p = r = 1$, that is, the two-operator SCFP with operators $U$ and $T$. The two-operator SCFP (6) can be viewed as the problem of finding a fixed point of a directed operator. However, since the definition of a directed operator is based on its fixed point set, it may be difficult to verify that an operator is directed before the two-operator SCFP is solved. Therefore, $U$ and $T$ are only assumed to be firmly nonexpansive mappings in our presented algorithm. The main result of this paper is as follows.
Let $U$ and $T$ be two firmly nonexpansive self-mappings on $H_1$ and $H_2$, respectively. Assume that the solution set of the two-operator SCFP is nonempty. For any $u \in H_1$, start with any $x_1 \in H_1$ and define the sequence $\{x_n\}$ by the iteration, where the parameter sequences satisfy that (i) and ; (ii) . Then the sequence $\{x_n\}$ converges strongly to .
The two-operator SCFP covers the problems of split feasibility, convex feasibility, and equilibrium as special cases. The presented algorithm can thus be considered a unified methodology for solving the aforementioned problems. In Section 4, numerical results show that the presented algorithm is more efficient and more consistent than the recent damped projection algorithm .
In order to facilitate our investigation in this paper, we recall some basic facts. A mapping $T : H \to H$ is said to be (i) nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$; (ii) firmly nonexpansive if $\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty \rangle$ for all $x, y \in H$; (iii) directed if $\langle Tx - x, Tx - z \rangle \le 0$ for all $x \in H$ and $z \in \mathrm{Fix}(T)$. It is well known that the fixed point set of a nonexpansive mapping is closed and convex; compare .
Let $C$ be a nonempty closed convex subset of $H$. The metric projection $P_C$ from $H$ onto $C$ is the mapping that assigns to each $x \in H$ the unique point $P_C x \in C$ with the property $\|x - P_C x\| = \min_{y \in C} \|x - y\|$. It is known that $P_C$ is firmly nonexpansive and characterized by the inequality, for any $x \in H$ and $y \in C$, $\langle x - P_C x, y - P_C x \rangle \le 0$.
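The characterizing inequality $\langle x - P_C x, y - P_C x \rangle \le 0$ for all $y \in C$ can be checked numerically. A small sketch, assuming $C$ is the closed unit ball in $\mathbb{R}^3$ (an illustrative choice, not a set from the paper):

```python
import numpy as np

# Metric projection onto the closed unit ball in R^3, together with a
# numerical check of the characterizing inequality
#   <x - P_C(x), y - P_C(x)> <= 0   for all y in C.

def proj_ball(x, radius=1.0):
    """Metric projection onto the ball of given radius centered at 0."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else radius * x / nx

rng = np.random.default_rng(0)
x = np.array([3.0, -4.0, 0.0])      # a point outside the ball
p = proj_ball(x)                    # its projection lies on the sphere
violations = 0
for _ in range(1000):
    y = proj_ball(rng.standard_normal(3))   # sample a point of C
    if np.dot(x - p, y - p) > 1e-9:
        violations += 1
```

No sampled point should violate the inequality, consistent with the variational characterization of $P_C$.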
There is a strongly convergent algorithm for a nonexpansive mapping $T$ with $\mathrm{Fix}(T) \neq \emptyset$, which is related to the iteration scheme in our main result: for any $u \in H$, choose arbitrarily a point $x_1 \in H$ and define a sequence $\{x_n\}$ recursively by $x_{n+1} = \alpha_n u + (1 - \alpha_n) T x_n$, $n \in \mathbb{N}$, where $\{\alpha_n\}$ is a sequence in $(0, 1)$ satisfying $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$, and $\sum_{n=1}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$. Then, the sequence $\{x_n\}$ converges strongly to $P_{\mathrm{Fix}(T)} u$; compare [28, 29].
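This Halpern-type recursion $x_{n+1} = \alpha_n u + (1-\alpha_n)Tx_n$ can be sketched with an assumed nonexpansive map: the projection onto the first coordinate axis in $\mathbb{R}^2$, whose fixed point set is that axis, so the iterates should approach the projection of $u$ onto the axis.

```python
import numpy as np

# Halpern-type iteration x_{n+1} = a_n * u + (1 - a_n) * T(x_n) with
# a_n = 1/(n+2).  Illustrative T (not from the paper): the metric
# projection onto the x-axis in R^2, a nonexpansive map whose fixed point
# set is the axis itself, so the strong limit should be (u[0], 0).

def T(x):
    """Projection onto the x-axis: nonexpansive, Fix(T) = the axis."""
    return np.array([x[0], 0.0])

u = np.array([2.0, 3.0])       # anchor point of the Halpern scheme
x = np.array([-1.0, 1.0])      # arbitrary starting point
for n in range(20000):
    a = 1.0 / (n + 2)          # a_n -> 0 with divergent sum
    x = a * u + (1 - a) * T(x)
```

The convergence is slow (the error decays like $1/n$ here), which is typical of Halpern-type schemes with $\alpha_n = 1/(n+2)$.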
We need some lemmas that will be quoted in the sequel.
Lemma 1. For any $x, y \in H$ and $\lambda \in [0, 1]$, the following hold: (a) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$; (b) $\|\lambda x + (1 - \lambda) y\|^2 = \lambda \|x\|^2 + (1 - \lambda) \|y\|^2 - \lambda (1 - \lambda) \|x - y\|^2$.
Lemma 2 (see , demiclosedness principle). Suppose that $T$ is a nonexpansive self-mapping on $H$ and suppose that $\{x_n\}$ is a sequence in $H$ such that $\{x_n\}$ converges weakly to some $x \in H$ and $x_n - T x_n \to 0$. Then, $x \in \mathrm{Fix}(T)$.
Lemma 3. Let $A$ be a maximal monotone operator on $H$ and let $\lambda > 0$. Then (a) the resolvent $J_\lambda$ is single-valued and firmly nonexpansive; (b) $\mathrm{Fix}(J_\lambda) = A^{-1}0$, and $A^{-1}0$ is closed and convex.
Lemma 4 (see ). Suppose that $\{a_n\}$ is a sequence of nonnegative real numbers satisfying $a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n$, $n \in \mathbb{N}$, where $\{\gamma_n\}$ and $\{\delta_n\}$ verify the following conditions: (i) $\{\gamma_n\} \subset (0, 1)$, $\sum_{n=1}^{\infty} \gamma_n = \infty$; (ii) $\limsup_{n \to \infty} \delta_n \le 0$. Then $\lim_{n \to \infty} a_n = 0$.
Lemma 5 (see ). Let $\{a_n\}$ be a sequence in $\mathbb{R}$ that does not decrease at infinity in the sense that there exists a subsequence $\{a_{n_k}\}$ such that $a_{n_k} < a_{n_k + 1}$ for all $k \in \mathbb{N}$. For any $n \ge n_0$, define $\tau(n) = \max\{k \le n : a_k < a_{k+1}\}$. Then $\tau(n) \to \infty$ as $n \to \infty$ and $\max\{a_{\tau(n)}, a_n\} \le a_{\tau(n)+1}$.
3. Main Theorems
Throughout this section, $U$ and $T$ denote two firmly nonexpansive self-mappings on $H_1$ and $H_2$, respectively, and $A$ denotes a bounded linear operator from $H_1$ to $H_2$.
Under the assumption that the solution set of two-operator SCFP is nonempty, the following lemma says that the two-operator SCFP is equivalent to the fixed point problem for the operator .
Theorem 7. Let $U$ and $T$ be two firmly nonexpansive self-mappings on $H_1$ and $H_2$, respectively. Assume that the solution set of the two-operator SCFP is nonempty. For any $u \in H_1$, start with any $x_1 \in H_1$ and define the sequence $\{x_n\}$ by the iteration, where the parameter sequences satisfy that (i) and ; (ii) . Then the sequence $\{x_n\}$ converges strongly to .
Proof. Putting , we see that . By Lemmas 1 and 6, we have
Furthermore, since is nonexpansive and , one has
from which it follows that
Therefore, it follows from (21), (22), and (24) that
Hence, by induction, we see that
This shows that is bounded. Now, by Lemma 1 and (22), we have
We now carry on with the proof by considering the following two cases: (I) is eventually decreasing and (II) is not eventually decreasing.
Case I. Suppose that is eventually decreasing; that is, there is such that is decreasing. In this case, exists in . From inequality (27), we have which together with the boundedness of and conditions (i) and (ii) implies Since is bounded, it has a subsequence such that converges weakly to some and where the last inequality follows from (15) since by Proposition 8 of , (29), and Lemmas 2 and 6. Moreover, from (27), we have
Accordingly, applying Lemma 4 to inequality (31), we conclude that
Case II. Suppose that is not eventually decreasing. In this case, by Lemma 5, there exists a nondecreasing sequence in such that and Then it follows from (27) and (33) that Therefore, which implies that and then it follows that From (35), we obtain and thus, letting , we obtain Also, since which together with (36) and conditions (i) and (ii) implies that , by virtue of (39). Consequently, we conclude that via (33) and (41). This completes the proof.
This theorem says that the sequence $\{x_n\}$ converges strongly to the point of the solution set which is nearest to $u$. In particular, if $u$ is taken to be $0$, then the limit point of the sequence is the unique minimum norm solution of the two-operator SCFP (6).
Corollary 8. Let $C$ and $Q$ be nonempty closed convex subsets of two Hilbert spaces $H_1$ and $H_2$, respectively. Assume that the solution set of the SFP is nonempty. For any $u \in H_1$, start with any $x_1 \in H_1$ and define a sequence $\{x_n\}$ iteratively, where the parameter sequences satisfy that (i) and ; (ii) . Then the sequence $\{x_n\}$ converges strongly to .
Corollary 9. Suppose that $B_1$ and $B_2$ are two maximal monotone operators on $H_1$ and $H_2$, respectively. Assume that the solution set of the problem is nonempty. Let $\lambda > 0$. For any $u \in H_1$, start with any $x_1 \in H_1$ and define a sequence $\{x_n\}$ iteratively, where the parameter sequences satisfy that (i) and ; (ii) . Then the sequence $\{x_n\}$ converges strongly to .
Corollary 10. Let $B$ be a maximal monotone operator on $H$ with $B^{-1}0 \neq \emptyset$, and let $\lambda > 0$. For any $u \in H$, start with any $x_1 \in H$ and define a sequence $\{x_n\}$ iteratively, where the parameter sequences satisfy that (i) and ; (ii) . Then, the sequence $\{x_n\}$ converges strongly to .
Proof. Putting $H_1 = H_2 = H$, $B_1 = B_2 = B$, and $A = I$ in Corollary 9, the result follows immediately.
4. Numerical Results
Four examples are provided in this section to demonstrate our presented algorithm. The first three examples are instances of the SFP, while the fourth example is the common zero point problem of two maximal monotone operators. The performance of the presented algorithm on the three SFP examples is compared with that of the recent damped projection method . The results show that the presented algorithm is more efficient and more consistent than the damped algorithm. In the first three examples, we assign the parameters in both algorithms to be , , , and . Let be their stopping criterion. All codes were written in Matlab R2011a and run on an ASUS Zenbook UX31E laptop with an Intel Core i7-2677M CPU.
Example 11. Let , , and The metric projections for and are Then, we can use both the presented algorithm and the damped projection algorithm to approach a point such that From Table 1, we observe that the presented algorithm is more efficient than the damped projection algorithm.
Example 12. Let all conditions be the same as those in Example 11 except . The result for solving Example 12 is shown in Table 2. We observe that the presented algorithm is still more efficient than the damped algorithm. From the columns for the runtime (CPU) and the approximate solution, we see that the result of the presented algorithm is consistent even though it starts from different initial points.
Example 13. In this example, we keep the setting of Example 11 but change its and . Let and . The metric projections onto the new sets are
The result is shown in Table 3. We also observe that the presented algorithm is more efficient and more consistent than the damped projection algorithm.
The presented algorithm contains an arbitrary point $u$, and that is an advantage of the algorithm. If any information about the solution of the two-operator SCFP of interest is known, we can choose a better $u$ to enhance the performance of the presented algorithm. For instance, let $u$ be a point different from the one used for the result in Table 3. From Table 4, we observe that the runtime of the presented algorithm is reduced by one-third.
Example 14. Minimizing a convex function is called a convex minimization problem. This example shows that the presented algorithm can be used to search for common optimal solutions of two convex minimization problems. Let $f$ and $g$ be two functions defined by and . We know that both $f$ and $g$ are convex functions. Now, we would like to search for a common minimal point of the two convex functions.
Let denote the partial derivative of the function with respect to . Define two operators $B_1$ and $B_2$ by . Since $f$ and $g$ are convex functions, $B_1$ and $B_2$ are maximal monotone operators, and any one of their common zero points is a common minimal point of $f$ and $g$. The resolvents of $B_1$ and $B_2$ are . According to Corollary 9, our presented algorithm can be used to search for a common zero point of $B_1$ and $B_2$. Let , , , , and in the algorithm, and let be the stopping criterion. We ran the algorithm starting from the point . The algorithm stopped at the point after iterations. We know that and . Finally, Figure 1 shows the behavior of the sequence, which converges to the common minimal point of $f$ and $g$.
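The idea of searching for a common zero of two maximal monotone operators via their resolvents can be sketched in one dimension. Here $B_1 = \nabla f$ with $f(x) = \tfrac12(x-1)^2$ and $B_2 = \nabla g$ with $g(x) = (x-1)^2$ are hypothetical choices (not the functions of Example 14); both are maximal monotone with the single common zero $x = 1$, and alternating their resolvents, a simplification of the scheme in Corollary 9, drives the iterates there.

```python
# Searching a common zero of two maximal monotone operators by
# alternating resolvents.  Illustrative operators (not Example 14's):
# B1 = grad f with f(x) = 0.5*(x-1)**2, so B1(x) = x - 1, and
# B2 = grad g with g(x) = (x-1)**2,   so B2(x) = 2*(x-1).
# Both are maximal monotone with the single common zero x = 1.

def res_B1(x, lam):
    # Solve y + lam*(y - 1) = x  =>  y = (x + lam) / (1 + lam)
    return (x + lam) / (1.0 + lam)

def res_B2(x, lam):
    # Solve y + 2*lam*(y - 1) = x  =>  y = (x + 2*lam) / (1 + 2*lam)
    return (x + 2.0 * lam) / (1.0 + 2.0 * lam)

lam = 1.0
x = 5.0
for _ in range(100):
    x = res_B1(res_B2(x, lam), lam)   # each pass contracts toward x = 1
```

With $\lambda = 1$ the two resolvents contract the error by factors $1/2$ and $1/3$, so the composition converges very quickly to the common minimizer.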
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
S. Takahashi, W. Takahashi, and M. Toyoda, “Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces,” Journal of Optimization Theory and Applications, vol. 147, no. 1, pp. 27–41, 2010.