Abstract and Applied Analysis, Volume 2014 (2014), Article ID 350479, 8 pages. http://dx.doi.org/10.1155/2014/350479
Research Article

## A Strong Convergence Algorithm for the Two-Operator Split Common Fixed Point Problem in Hilbert Spaces

Chung-Chien Hong and Young-Ye Huang

1Department of Industrial Management, National Pingtung University of Science and Technology, Pingtung 91201, Taiwan
2Department of Accounting Information, Southern Taiwan University of Science and Technology, Tainan 71005, Taiwan

Received 27 February 2014; Accepted 13 June 2014; Published 6 July 2014

Copyright © 2014 Chung-Chien Hong and Young-Ye Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The two-operator split common fixed point problem (two-operator SCFP) with firmly nonexpansive mappings is investigated in this paper. This problem covers the problems of split feasibility, convex feasibility, and equilibrium, and in particular it can be used to model significant applied problems such as intensity-modulated radiation therapy, computed tomography, and sensor networks. An iterative scheme is presented to approximate the minimum norm solution of the two-operator SCFP. The performance of the presented algorithm is compared with that of a recent algorithm for the two-operator SCFP, and the advantage of the presented algorithm is shown through numerical results.

#### 1. Introduction

Throughout this paper, $H$ denotes a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and its induced norm $\|\cdot\|$, $I$ the identity mapping on $H$, $\mathbb{N}$ the set of all natural numbers, $\mathbb{R}$ the set of all real numbers, and $P_C$ the metric projection onto a nonempty closed convex set $C$. For a sequence $\{a_n\}$, $\limsup_{n\to\infty} a_n$ denotes its upper limit, while $\liminf_{n\to\infty} a_n$ denotes its lower limit. For a self-mapping $S$ on $H$, $\mathrm{Fix}(S)$ denotes the set of all fixed points of $S$.

Finding zero points of maximal monotone operators has long been an interesting topic. A set-valued map $B : H \to 2^H$ with domain $D(B) = \{x \in H : Bx \neq \emptyset\}$ is called monotone if $\langle x - y, u - v \rangle \geq 0$ for all $x, y \in D(B)$ and for any $u \in Bx$ and $v \in By$. A monotone operator $B$ is said to be maximal monotone if its graph is not properly contained in the graph of any other monotone operator. For a positive real number $r$, we denote by $J_r^B$ the resolvent of a monotone operator $B$; that is, $J_r^B x = (I + rB)^{-1} x$ for any $x \in H$. A point $z$ is called a zero point of a maximal monotone operator $B$ if $0 \in Bz$. In the sequel, we will denote the set of all zero points of $B$ by $B^{-1}0$, which is equal to $\mathrm{Fix}(J_r^B)$ for any $r > 0$. A well-known method to solve this problem is the proximal point algorithm, which starts with any initial point $x_1 \in H$ and then generates the sequence $\{x_n\}$ in $H$ by

$$x_{n+1} = J_{r_n}^B x_n, \quad n \in \mathbb{N},$$

where $\{r_n\}$ is a sequence of positive real numbers. This algorithm was first introduced by Martinet [1] and then generally studied by Rockafellar [2], who devised the iterative sequence

$$x_{n+1} = J_{r_n}^B x_n + e_n, \quad n \in \mathbb{N}, \tag{4}$$

where $\{e_n\}$ is an error sequence in $H$. Rockafellar showed that the sequence generated by (4) converges weakly to an element of $B^{-1}0$ provided that $\liminf_{n\to\infty} r_n > 0$ and $\sum_{n=1}^{\infty} \|e_n\| < \infty$. Since then, many authors have conducted research on modifying the sequence in (4) so that strong convergence is guaranteed; compare [3–12] and the references therein.
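As a concrete illustration, the proximal point iteration can be sketched in a few lines. The operator below is the subdifferential of the absolute value on the real line (an illustrative assumption, not an example from the paper); its resolvent is the soft-thresholding map, and the iteration drives any starting point to the unique zero point $0$.

```python
# Proximal point algorithm sketch: B = subdifferential of |x| on the real line,
# whose resolvent J_r = (I + rB)^{-1} is the soft-thresholding map.
# The unique zero point of B is 0. All names here are illustrative.

def resolvent_abs(x, r):
    """Resolvent of the subdifferential of |.|: soft-thresholding."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

def proximal_point(x0, r=1.0, iters=50):
    x = x0
    for _ in range(iters):
        x = resolvent_abs(x, r)   # x_{n+1} = J_{r_n} x_n with constant r_n = r
    return x

print(proximal_point(10.0))   # converges to the zero point 0.0
```

With a constant step $r_n = r > 0$, the condition $\liminf_n r_n > 0$ of Rockafellar's theorem is trivially satisfied.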

On the other hand, let $C$ and $Q$ be nonempty closed convex subsets of two Hilbert spaces $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear mapping. The split feasibility problem (SFP) is the problem of finding a point $x^*$ with the property

$$x^* \in C \quad \text{and} \quad Ax^* \in Q. \tag{5}$$

The SFP was first introduced by Censor and Elfving [13] for modeling inverse problems which arise from phase retrievals and medical image reconstruction. Recently, it has been found that the SFP can also be used to model intensity-modulated radiation therapy. The most popular algorithm for the SFP is the CQ algorithm introduced by Byrne [14, 15]. The sequence generated by the CQ algorithm converges weakly to a solution of SFP (5); compare [14–16]. Under the assumption that SFP (5) has a solution, many algorithms have been designed to approximate a solution of the SFP; compare [16–23] and the references therein.
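The CQ iteration can be sketched as follows; the sets $C$, $Q$ and the operator $A$ below are illustrative assumptions, not data from the paper. Each step moves $x$ by a gradient-like correction $A^*(I - P_Q)Ax$ and projects back onto $C$.

```python
import numpy as np

# Sketch of a CQ-type iteration x_{n+1} = P_C(x_n - g * A^T (I - P_Q)(A x_n)),
# with step g in (0, 2/||A||^2). The sets C, Q and the operator A are
# illustrative choices, not the examples from the paper.

def proj_C(x):                      # C = nonnegative orthant in R^2
    return np.maximum(x, 0.0)

def proj_Q(y):                      # Q = closed unit ball centered at (2, 0)
    c = np.array([2.0, 0.0])
    d = y - c
    n = np.linalg.norm(d)
    return y if n <= 1.0 else c + d / n

A = np.eye(2)                       # bounded linear operator (identity here)

def cq_algorithm(x0, g=1.0, iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = proj_C(x - g * A.T @ (A @ x - proj_Q(A @ x)))
    return x

x = cq_algorithm([0.0, 0.0])
print(x)   # x lies in C and A x lies in Q, so x solves this SFP instance
```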

Later, Censor and Segal [24] extended the SFP to the split common fixed point problem (SCFP), which is to find a point $x^*$ with the property

$$x^* \in \bigcap_{i=1}^{p} \mathrm{Fix}(U_i) \quad \text{and} \quad Ax^* \in \bigcap_{j=1}^{r} \mathrm{Fix}(T_j), \tag{6}$$

where $U_i$, $i = 1, \ldots, p$, and $T_j$, $j = 1, \ldots, r$, are directed operators in Hilbert spaces. Censor and Segal [24] gave an algorithm for SCFP (6) in Euclidean spaces. Then, Moudafi [25] named SCFP (6) with $p = r = 1$ the two-operator SCFP and gave an algorithm which generates a sequence weakly converging to a solution of the two-operator SCFP. Very recently, Cui et al. [26] provided a damped projection algorithm, shown below, to approach a solution of SCFP (6).

Assume that the solution set of the SCFP is nonempty. Start with any initial point and generate a sequence $\{x_n\}$ through the damped projection iteration, where the parameter sequences satisfy conditions (i)–(iii). Then, the sequence $\{x_n\}$ converges strongly to a solution of the SCFP.

Inspired by the work of [25, 26], this paper presents another algorithm to find the minimum norm solution of the two-operator SCFP. We note that the two-operator SCFP contains the SFP and the zero point problem of maximal monotone operators. Let $P_C$ and $P_Q$ be the metric projections onto $C$ and $Q$, respectively. Putting $U = P_C$ and $T = P_Q$, the two-operator SCFP (6) is reduced to SFP (5). Let $B_1$ and $B_2$ be two maximal monotone operators on $H_1$ and $H_2$, respectively. Replacing $U$ and $T$ with the resolvents $J_\beta^{B_1}$ and $J_\beta^{B_2}$, respectively, in (6), the problem becomes a two-operator SCFP: find $x^* \in B_1^{-1}0$ with $Ax^* \in B_2^{-1}0$. Putting $H_1 = H_2$ and $A = I$, the above two-operator SCFP is reduced to the common zero point problem of the two maximal monotone operators $B_1$ and $B_2$: find $x^*$ with $0 \in B_1 x^*$ and $0 \in B_2 x^*$.

Consider the two-operator case of SCFP (6). Since the definition of a directed operator is based on its fixed point set, it may be difficult to verify that an operator is directed before the two-operator SCFP is solved. Therefore, $U$ and $T$ are only assumed to be firmly nonexpansive mappings in our presented algorithm (every firmly nonexpansive mapping is directed). The main result of this paper is as follows.

Let $U$ and $T$ be two firmly nonexpansive self-mappings on $H_1$ and $H_2$, respectively. Assume that the solution set $\Omega$ of the two-operator SCFP is nonempty. For any $u \in H_1$, start with any initial point and define the sequence $\{x_n\}$ by the scheme (20), where the parameter sequences satisfy conditions (i) and (ii). Then the sequence $\{x_n\}$ converges strongly to $P_\Omega u$.

The two-operator SCFP covers the problems of split feasibility, convex feasibility, and equilibrium as special cases, so the presented algorithm can be considered a unified methodology for solving the aforementioned problems. In Section 4, we use numerical results to demonstrate that the presented algorithm is more efficient and more consistent than the recent damped projection algorithm [26].

#### 2. Preliminaries

In order to facilitate our investigation in this paper, we recall some basic facts. A mapping $S : H \to H$ is said to be

(i) nonexpansive if $\|Sx - Sy\| \leq \|x - y\|$ for all $x, y \in H$;
(ii) firmly nonexpansive if $\|Sx - Sy\|^2 \leq \langle Sx - Sy, x - y \rangle$ for all $x, y \in H$;
(iii) directed if $\|Sx - q\|^2 \leq \|x - q\|^2 - \|x - Sx\|^2$ for all $x \in H$ and $q \in \mathrm{Fix}(S)$.

It is well known that the fixed point set of a nonexpansive mapping is closed and convex; compare [27].
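The firmly nonexpansive property $\langle Sx - Sy, x - y \rangle \geq \|Sx - Sy\|^2$ can be checked numerically for the metric projection onto a closed ball, a standard example of a firmly nonexpansive map; the random sampling below is purely illustrative.

```python
import numpy as np

# Numerical sanity check, on random points, that the metric projection onto
# the closed unit ball is firmly nonexpansive:
#   <Px - Py, x - y> >= ||Px - Py||^2.

rng = np.random.default_rng(0)

def proj_ball(x):                  # projection onto the closed unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = proj_ball(x), proj_ball(y)
    ok &= np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12
print(ok)   # True: the inequality holds at every sampled pair
```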

Let $C$ be a nonempty closed convex subset of $H$. The metric projection $P_C$ from $H$ onto $C$ is the mapping that assigns to each $x \in H$ the unique point $P_C x \in C$ with the property

$$\|x - P_C x\| = \min_{y \in C} \|x - y\|.$$

It is known that $P_C$ is firmly nonexpansive and is characterized by the inequality: for any $x \in H$ and $z \in C$,

$$z = P_C x \iff \langle x - z, y - z \rangle \leq 0 \quad \text{for all } y \in C.$$
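The variational characterization $z = P_C x \iff \langle x - z, y - z \rangle \leq 0$ for all $y \in C$ can likewise be verified numerically; the box $C = [0,1]^2$ below is an illustrative choice.

```python
import numpy as np

# Check the variational characterization of P_C for C = [0,1]^2:
# z = P_C(x) iff <x - z, y - z> <= 0 for every y in C (sampled here).

def proj_box(x):
    return np.clip(x, 0.0, 1.0)    # coordinate-wise projection onto [0,1]^2

rng = np.random.default_rng(1)
x = np.array([2.0, -0.5])
z = proj_box(x)                    # the nearest point of the box to x
holds = all(np.dot(x - z, y - z) <= 1e-12
            for y in rng.uniform(0.0, 1.0, size=(500, 2)))
print(z, holds)
```

Geometrically, the inequality says the vector $x - P_C x$ makes an obtuse angle with every direction pointing from $P_C x$ into $C$.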

There is a strongly convergent algorithm for a nonexpansive mapping $S$ with $\mathrm{Fix}(S) \neq \emptyset$, which is related to the iteration scheme in our main result: for any $u \in H$, choose arbitrarily a point $x_1 \in H$ and define a sequence $\{x_n\}$ recursively by

$$x_{n+1} = \alpha_n u + (1 - \alpha_n) S x_n, \quad n \in \mathbb{N},$$

where $\{\alpha_n\}$ is a sequence in $(0, 1)$ satisfying $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = \infty$, and $\sum_{n=1}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$. Then, the sequence $\{x_n\}$ converges strongly to $P_{\mathrm{Fix}(S)} u$; compare [28, 29].
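A minimal sketch of this Halpern-type iteration, taking $S$ to be the projection onto the closed unit ball (so that $\mathrm{Fix}(S)$ is the ball itself) and $\alpha_n = 1/(n+1)$; these concrete choices are illustrative assumptions.

```python
import numpy as np

# Halpern-type iteration x_{n+1} = a_n*u + (1 - a_n)*S(x_n) with a_n = 1/(n+1),
# where S = projection onto the closed unit ball is nonexpansive and
# Fix(S) is the ball, so the limit should be the point of the ball nearest u.

def S(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

u = np.array([3.0, 4.0])           # anchor point of the iteration
x = np.array([10.0, -7.0])         # arbitrary starting point
for n in range(1, 20001):
    a = 1.0 / (n + 1)
    x = a * u + (1.0 - a) * S(x)

print(x)   # approaches P_{Fix(S)}(u) = (0.6, 0.8)
```

Note that the anchor term $\alpha_n u$ is what upgrades the weak convergence of plain Picard iteration to strong convergence toward the specific point $P_{\mathrm{Fix}(S)} u$.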

We need some lemmas that will be used in the sequel.

Lemma 1. For any $x, y \in H$ and $\lambda \in [0, 1]$, the following hold: (a) $\|x + y\|^2 \leq \|x\|^2 + 2\langle y, x + y \rangle$; (b) $\|\lambda x + (1 - \lambda) y\|^2 = \lambda \|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$.

Lemma 2 (see [27], demiclosedness principle). Suppose that $S$ is a nonexpansive self-mapping on $H$ and suppose that $\{x_n\}$ is a sequence in $H$ such that $\{x_n\}$ converges weakly to some $w \in H$ and $\lim_{n\to\infty} \|x_n - Sx_n\| = 0$. Then, $w \in \mathrm{Fix}(S)$.

Lemma 3. Let $B$ be a maximal monotone operator on $H$ and let $r > 0$. Then (a) $J_r^B$ is single-valued and firmly nonexpansive; (b) $\mathrm{Fix}(J_r^B) = B^{-1}0$, and $B^{-1}0$ is closed and convex.

Lemma 4 (see [12]). Suppose that $\{a_n\}$ is a sequence of nonnegative real numbers satisfying

$$a_{n+1} \leq (1 - \gamma_n) a_n + \gamma_n \delta_n, \quad n \in \mathbb{N},$$

where $\{\gamma_n\}$ and $\{\delta_n\}$ verify the following conditions: (i) $\{\gamma_n\} \subset (0, 1)$, $\sum_{n=1}^{\infty} \gamma_n = \infty$; (ii) $\limsup_{n\to\infty} \delta_n \leq 0$. Then $\lim_{n\to\infty} a_n = 0$.

Lemma 5 (see [30]). Let $\{a_n\}$ be a sequence in $\mathbb{R}$ that does not decrease at infinity, in the sense that there exists a subsequence $\{a_{n_j}\}$ such that $a_{n_j} < a_{n_j + 1}$ for all $j \in \mathbb{N}$. For any $n$ large enough, define $\tau(n) = \max\{k \leq n : a_k < a_{k+1}\}$. Then $\tau(n) \to \infty$ as $n \to \infty$ and $\max\{a_{\tau(n)}, a_n\} \leq a_{\tau(n)+1}$.

#### 3. Main Theorems

Throughout this section, $U$ and $T$ denote two firmly nonexpansive self-mappings on $H_1$ and $H_2$, respectively, and $A$ denotes a bounded linear operator from $H_1$ to $H_2$.

Under the assumption that the solution set of the two-operator SCFP is nonempty, the following lemma says that the two-operator SCFP is equivalent to the fixed point problem for a composite operator $V$.

Lemma 6 (see [17]). Let $\Omega$ be the solution set of the two-operator SCFP (6); that is, $\Omega = \{x \in \mathrm{Fix}(U) : Ax \in \mathrm{Fix}(T)\}$. For any $\gamma \in (0, 2/\|A\|^2)$, let $V = U(I - \gamma A^*(I - T)A)$. Suppose that $\Omega \neq \emptyset$. Then $\mathrm{Fix}(V) = \Omega$.

Theorem 7. Let $U$ and $T$ be two firmly nonexpansive self-mappings on $H_1$ and $H_2$, respectively. Assume that the solution set $\Omega$ of the two-operator SCFP is nonempty. For any $u \in H_1$, start with any initial point and define the sequence $\{x_n\}$ by the scheme (20), where the parameter sequences satisfy conditions (i) and (ii). Then the sequence $\{x_n\}$ converges strongly to $P_\Omega u$.
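A numerical sketch of a Halpern-type scheme of this kind, built from the composite operator $V(x) = U(x - \gamma A^*(I - T)Ax)$ with $\gamma \in (0, 2/\|A\|^2)$. The concrete $U$, $T$, $A$, the step $\gamma$, and the anchor $u$ below are illustrative assumptions, not the paper's data; $U$ and $T$ are projections, hence firmly nonexpansive.

```python
import numpy as np

# Hedged sketch of a Halpern-type scheme for the two-operator SCFP:
#   x_{n+1} = a_n*u + (1 - a_n) * V(x_n),
#   V(x) = U(x - g * A^T (I - T)(A x)),  g in (0, 2/||A||^2).
# U, T, A, g, u are illustrative stand-ins, not the paper's concrete data.

def U(x):                                   # projection onto the orthant x >= 0
    return np.maximum(x, 0.0)

def T(y):                                   # projection onto unit ball at (2, 0)
    c = np.array([2.0, 0.0])
    d = y - c
    n = np.linalg.norm(d)
    return y if n <= 1.0 else c + d / n

A = np.eye(2)
g = 1.0                                     # g in (0, 2/||A||^2) = (0, 2)
u = np.zeros(2)                             # u = 0 aims at the min-norm solution

x = np.array([5.0, 5.0])
for n in range(1, 10001):
    a = 1.0 / (n + 1)
    v = U(x - g * A.T @ (A @ x - T(A @ x)))  # v = V(x_n)
    x = a * u + (1.0 - a) * v

print(x)   # approaches the minimum norm solution (1, 0) of this instance
```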

Proof. Putting , we see that . By Lemmas 1 and 6, we have In addition, Furthermore, since is nonexpansive and , one has from which it follows that Therefore, it follows from (21), (22), and (24) that Hence, by induction, we see that This shows that is bounded. Now, by Lemma 1 and (22), we have We now carry on with the proof by considering the following two cases: (I) is eventually decreasing and (II) is not eventually decreasing.
Case I. Suppose that is eventually decreasing; that is, there is such that is decreasing. In this case, exists in . From inequality (27), we have which together with the boundedness of and conditions (i) and (ii) implies Since is bounded, it has a subsequence such that converges weakly to some and where the last inequality follows from (15) since by Proposition 8 of [17], (29), and Lemmas 2 and 6. Moreover, from (27), we have
Accordingly, applying Lemma 4 to inequality (31), we conclude that
Case II. Suppose that is not eventually decreasing. In this case, by Lemma 5, there exists a nondecreasing sequence in such that and Then it follows from (27) and (33) that Therefore, which implies that and then it follows that From (35), we obtain and thus, letting , we obtain Also, since which together with (36) and conditions (i) and (ii) implies that , by virtue of (39). Consequently, we conclude that via (33) and (41). This completes the proof.

This theorem says that the sequence $\{x_n\}$ converges strongly to the point of $\Omega$ which is nearest to $u$. In particular, if $u$ is taken to be the zero vector, then the limit point of the sequence is the unique minimum norm solution of the two-operator SCFP (6).

Corollary 8. Let $C$ and $Q$ be nonempty closed convex subsets of two Hilbert spaces $H_1$ and $H_2$, respectively. Assume that the solution set of the SFP is nonempty. For any $u \in H_1$, start with any initial point and define a sequence $\{x_n\}$ iteratively by the scheme (20) with $U = P_C$ and $T = P_Q$, where the parameter sequences satisfy conditions (i) and (ii). Then the sequence $\{x_n\}$ converges strongly to the solution of the SFP nearest to $u$.

Proof. Putting $U = P_C$ and $T = P_Q$ in (20), the conclusion follows from Theorem 7.

Corollary 9. Suppose that $B_1$ and $B_2$ are two maximal monotone operators on $H_1$ and $H_2$, respectively. Assume that the solution set of the problem of finding $x^* \in B_1^{-1}0$ with $Ax^* \in B_2^{-1}0$ is nonempty, and let $\beta > 0$. For any $u \in H_1$, start with any initial point and define a sequence $\{x_n\}$ iteratively by the scheme (20) with $U = J_\beta^{B_1}$ and $T = J_\beta^{B_2}$, where the parameter sequences satisfy conditions (i) and (ii). Then the sequence $\{x_n\}$ converges strongly to the solution nearest to $u$.

Proof. By Lemma 3, the resolvent of a maximal monotone operator is firmly nonexpansive. Hence, we may put $U = J_\beta^{B_1}$ and $T = J_\beta^{B_2}$ in (20), and the conclusion follows from Theorem 7.

Corollary 10. Let $B$ be a maximal monotone operator on $H$ with $B^{-1}0 \neq \emptyset$, and let $\beta > 0$. For any $u \in H$, start with any initial point and define a sequence $\{x_n\}$ iteratively by the scheme (20) with $U = T = J_\beta^{B}$ and $A = I$, where the parameter sequences satisfy conditions (i) and (ii). Then, the sequence $\{x_n\}$ converges strongly to the zero point of $B$ nearest to $u$.

Proof. Putting $B_1 = B_2 = B$, $H_1 = H_2 = H$, and $A = I$ in Corollary 9, the result follows immediately.

#### 4. Numerical Results

There are four examples in this section to demonstrate our presented algorithm. The first three examples are instances of the SFP, while the fourth example is the common zero point problem of two maximal monotone operators. The performance of the presented algorithm on the three SFP examples is compared with that of the recent damped projection method [26]. The results show that the presented algorithm is more efficient and more consistent than the damped projection algorithm. In the first three examples, we assign the same parameter values and the same stop criterion to both algorithms. All codes were written in Matlab R2011a and run on an ASUS Zenbook UX31E laptop with an i7-2677M CPU.

Example 11. Let $C$, $Q$, and $A$ be given, with the metric projections onto $C$ and $Q$ available in closed form. Then, we can use both the presented algorithm and the damped projection algorithm to approach a point $x^*$ with $x^* \in C$ and $Ax^* \in Q$. From Table 1, we observe that the presented algorithm is more efficient than the damped projection algorithm.

Table 1: Numerical results for Example 11.

Example 12. Let all conditions be the same as those in Example 11 except for one modification. The result for solving Example 12 is shown in Table 2. We observe that the presented algorithm is still more efficient than the damped projection algorithm. From the columns for the runtime (CPU) and the approximate solution, the results of the presented algorithm are consistent even though it starts from different initial points.

Table 2: Numerical results for Example 12.

Example 13. In this example, we use the operator $A$ from Example 11 but change the sets $C$ and $Q$. The metric projections onto the new $C$ and $Q$ are again available in closed form. The result is shown in Table 3. We also observe that the presented algorithm is more efficient and more consistent than the damped projection algorithm.

Table 3: Numerical results for Example 13.

The presented algorithm contains an arbitrary point $u$, and that is an advantage of the algorithm. If we know any information about the solution of the two-operator SCFP of interest, we can choose a better $u$ to enhance the performance of the presented algorithm. For instance, choosing a $u$ different from the one used for the result in Table 3, we observe from Table 4 that the runtime of the presented algorithm is reduced by one-third.

Table 4: Numerical results for Example 13 with .

Example 14. Minimizing a convex function is called a convex minimization problem. This example shows that the presented algorithm can be used to search for common optimal solutions of two convex minimization problems. Let $f_1$ and $f_2$ be two convex functions from $\mathbb{R}^2$ to $\mathbb{R}$. Now, we would like to search for a common minimal point of the two convex functions.
Let $\nabla f$ denote the gradient of a function $f$, whose components are its partial derivatives. Define two operators $B_1 = \nabla f_1$ and $B_2 = \nabla f_2$ from $\mathbb{R}^2$ to $\mathbb{R}^2$. Since $f_1$ and $f_2$ are convex functions, $B_1$ and $B_2$ are maximal monotone operators, and any one of their common zero points is a common minimal point of $f_1$ and $f_2$. The resolvents of $B_1$ and $B_2$ can be computed in closed form. According to Corollary 9, our presented algorithm can be used to search for a common zero point of $B_1$ and $B_2$. After assigning the parameters and the stop criterion, we ran the algorithm from an initial point; it stopped after finitely many iterations at a point at which both $f_1$ and $f_2$ are approximately minimal. Finally, Figure 1 shows the behavior of the sequence, which converges to the common minimal point of $f_1$ and $f_2$.
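The resolvent-based scheme of Corollary 9 can be sketched on two stand-in convex quadratics with the common minimizer $(1, 0)$; the functions $f_1(x) = (x_1-1)^2 + x_2^2$ and $f_2(x) = (x_1-1)^2 + 2x_2^2$ are hypothetical, not the paper's example. Both gradients are affine, so the resolvents $z = (I + rB)^{-1}x$ have closed forms obtained by solving $z + rB(z) = x$.

```python
import numpy as np

# Sketch of the resolvent-based scheme of Corollary 9 on two hypothetical
# convex quadratics sharing the minimizer (1, 0):
#   f1(x) = (x1 - 1)^2 + x2^2,   f2(x) = (x1 - 1)^2 + 2*x2^2.
# B1 = grad f1 and B2 = grad f2 are affine, so z + r*B(z) = x solves in
# closed form. These choices are illustrative, not the paper's example.

def J_B1(x, r=1.0):                # resolvent of B1(z) = (2(z1 - 1), 2*z2)
    return np.array([(x[0] + 2*r) / (1 + 2*r), x[1] / (1 + 2*r)])

def J_B2(x, r=1.0):                # resolvent of B2(z) = (2(z1 - 1), 4*z2)
    return np.array([(x[0] + 2*r) / (1 + 2*r), x[1] / (1 + 4*r)])

u = np.zeros(2)                    # anchor point of the Halpern-type scheme
x = np.array([4.0, -3.0])          # arbitrary starting point
for n in range(1, 5001):
    a = 1.0 / (n + 1)
    x = a * u + (1.0 - a) * J_B1(J_B2(x))   # U = J_B1, T = J_B2, A = I, g = 1

print(x)   # approaches the common zero point (1, 0) of B1 and B2
```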

Figure 1: The behavior of our presented algorithm when searching for the common minimal point of two convex functions. The star sign marks the stop point of the algorithm.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

1. B. Martinet, “Régularisation d'inéquations variationnelles par approximations successives,” Revue Française d'Informatique et de Recherche Opérationnelle, vol. 4, pp. 154–158, 1970.
2. R. T. Rockafellar, “Monotone operators and the proximal point algorithm,” SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877–898, 1976.
3. O. A. Boikanyo and G. Moroşanu, “Inexact Halpern-type proximal point algorithm,” Journal of Global Optimization, vol. 51, no. 1, pp. 11–26, 2011.
4. O. A. Boikanyo and G. Moroşanu, “Four parameter proximal point algorithms,” Nonlinear Analysis. Theory, Methods & Applications, vol. 74, no. 2, pp. 544–555, 2011.
5. O. A. Boikanyo and G. Moroşanu, “A proximal point algorithm converging strongly for general errors,” Optimization Letters, vol. 4, no. 4, pp. 635–641, 2010.
6. S. Kamimura and W. Takahashi, “Approximating solutions of maximal monotone operators in Hilbert spaces,” Journal of Approximation Theory, vol. 106, no. 2, pp. 226–240, 2000.
7. G. Marino and H. K. Xu, “Convergence of generalized proximal point algorithms,” Communications on Pure and Applied Analysis, vol. 3, no. 4, pp. 791–808, 2004.
8. M. V. Solodov and B. F. Svaiter, “Forcing strong convergence of proximal point iterations in a Hilbert space,” Mathematical Programming, vol. 87, no. 1, pp. 189–202, 2000.
9. S. Takahashi, W. Takahashi, and M. Toyoda, “Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces,” Journal of Optimization Theory and Applications, vol. 147, no. 1, pp. 27–41, 2010.
10. F. Wang and H. Cui, “On the contraction-proximal point algorithms with multi-parameters,” Journal of Global Optimization, vol. 54, no. 3, pp. 485–491, 2012.
11. H. K. Xu, “A regularization method for the proximal point algorithm,” Journal of Global Optimization, vol. 36, no. 1, pp. 115–125, 2006.
12. H. K. Xu, “Iterative algorithms for nonlinear operators,” Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
13. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
14. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
15. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
16. H. K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” Inverse Problems, vol. 26, no. 10, Article ID 105018, 17 pages, 2010.
17. Y. Y. Huang and C. C. Hong, “A unified iterative treatment for solutions of problems of split feasibility and equilibrium in Hilbert spaces,” Abstract and Applied Analysis, vol. 2013, Article ID 613928, 13 pages, 2013.
18. Y. Y. Huang and C. C. Hong, “Approximating common fixed points of averaged self-mappings with applications to split feasibility problem and maximal monotone operators in Hilbert spaces,” Fixed Point Theory and Applications, vol. 2013, article 190, 2013.
19. E. Masad and S. Reich, “A note on the multiple-set split convex feasibility problem in Hilbert space,” Journal of Nonlinear and Convex Analysis, vol. 8, no. 3, pp. 367–371, 2007.
20. J. Quan, S. S. Chang, and X. Zhang, “Multiple-set split feasibility problems for $\kappa$-strictly pseudononspreading mapping in Hilbert spaces,” Abstract and Applied Analysis, vol. 2013, Article ID 342545, 5 pages, 2013.
21. F. Wang and H. Xu, “Approximating curve and strong convergence of the $CQ$ algorithm for the split feasibility problem,” Journal of Inequalities and Applications, vol. 2010, Article ID 102085, 13 pages, 2010.
22. Y. Yao, J. Wu, and Y. C. Liou, “Regularized methods for the split feasibility problem,” Abstract and Applied Analysis, vol. 2012, Article ID 140679, 13 pages, 2012.
23. Y. Yao, Y. C. Liou, and N. Shahzad, “A strongly convergent method for the split feasibility problem,” Abstract and Applied Analysis, vol. 2012, Article ID 125046, 15 pages, 2012.
24. Y. Censor and A. Segal, “The split common fixed point problem for directed operators,” Journal of Convex Analysis, vol. 16, no. 2, pp. 587–600, 2009.
25. A. Moudafi, “The split common fixed-point problem for demicontractive mappings,” Inverse Problems, vol. 26, no. 5, Article ID 055007, 2010.
26. H. Cui, M. Su, and F. Wang, “Damped projection method for split common fixed point problems,” Journal of Inequalities and Applications, vol. 2013, article 123, 2013.
27. K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, UK, 1990.
28. B. Halpern, “Fixed points of nonexpanding maps,” Bulletin of the American Mathematical Society, vol. 73, pp. 957–961, 1967.
29. R. Wittmann, “Approximation of fixed points of nonexpansive mappings,” Archiv der Mathematik, vol. 58, no. 5, pp. 486–491, 1992.
30. P. E. Maingé, “Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization,” Set-Valued Analysis, vol. 16, no. 7-8, pp. 899–912, 2008.