Abstract and Applied Analysis, Volume 2013 (2013), Article ID 813635, 5 pages. http://dx.doi.org/10.1155/2013/813635
Research Article

## Regularization Method for the Approximate Split Equality Problem in Infinite-Dimensional Hilbert Spaces

Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China

Received 31 March 2013; Accepted 12 April 2013

Copyright © 2013 Rudong Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We study the approximate split equality problem (ASEP) in the framework of infinite-dimensional Hilbert spaces. Let $H_1$, $H_2$, and $H_3$ be infinite-dimensional real Hilbert spaces, let $C \subseteq H_1$ and $Q \subseteq H_2$ be two nonempty closed convex sets, and let $A : H_1 \to H_3$ and $B : H_2 \to H_3$ be two bounded linear operators. The ASEP in infinite-dimensional Hilbert spaces is to minimize the function $f(x,y) = \frac{1}{2}\|Ax - By\|^2$ over $x \in C$ and $y \in Q$. Recently, Moudafi and Byrne proposed several algorithms for solving the split equality problem and proved their convergence. Note that their algorithms converge only weakly in infinite-dimensional Hilbert spaces. In this paper, we use the regularization method to establish a single-step iterative algorithm for solving the ASEP in infinite-dimensional Hilbert spaces and show that the sequence generated by this algorithm converges strongly to the minimum-norm solution of the ASEP. Note that, by taking $H_2 = H_3$ and $B = I$ in the ASEP, we recover the approximate split feasibility problem (ASFP).

#### 1. Introduction

Let $C \subseteq \mathbb{R}^N$ and $Q \subseteq \mathbb{R}^M$ be closed, nonempty convex sets, and let $A$ and $B$ be $J$ by $N$ and $J$ by $M$ real matrices, respectively. The split equality problem (SEP) in finite-dimensional Hilbert spaces is to find $x \in C$ and $y \in Q$ such that $Ax = By$; the approximate split equality problem (ASEP) in finite-dimensional Hilbert spaces is to minimize the function $f(x,y) = \frac{1}{2}\|Ax - By\|^2$ over $x \in C$ and $y \in Q$. When $M = J$ and $B = I$, the SEP reduces to the well-known split feasibility problem (SFP) and the ASEP becomes the approximate split feasibility problem (ASFP). For information on the split feasibility problem, please see [1–9].

In this paper, we work in the framework of infinite-dimensional Hilbert spaces. Let $H_1$, $H_2$, and $H_3$ be infinite-dimensional real Hilbert spaces, let $C \subseteq H_1$ and $Q \subseteq H_2$ be two nonempty closed convex sets, and let $A : H_1 \to H_3$ and $B : H_2 \to H_3$ be two bounded linear operators. The ASEP in infinite-dimensional Hilbert spaces is

$$\min_{x \in C,\, y \in Q} f(x,y) = \frac{1}{2}\|Ax - By\|^2. \quad (1)$$

Very recently, for solving the SEP, Moudafi introduced the following alternating CQ-algorithm (ACQA) in [10]:

$$x_{k+1} = P_C\big(x_k - \gamma_k A^*(Ax_k - By_k)\big), \qquad y_{k+1} = P_Q\big(y_k + \gamma_k B^*(Ax_{k+1} - By_k)\big).$$

Then, he proved the weak convergence of the sequence $\{(x_k, y_k)\}$ to a solution of the SEP, provided that the solution set is nonempty and some conditions on the sequence of positive parameters $\{\gamma_k\}$ are satisfied.
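To make the ACQA concrete, here is a minimal numerical sketch (ours, not from [10]), assuming a toy finite-dimensional instance in which $C$ is the nonnegative orthant, $Q$ is an interval, and both projections have closed forms; the function name, step size, and data are illustrative assumptions.

```python
import numpy as np

def acqa(A, B, proj_C, proj_Q, x0, y0, gamma, iters=500):
    """Alternating CQ-algorithm (ACQA) sketch for the SEP: find x in C, y in Q
    with Ax = By.  Note that the y-update uses the freshly computed x."""
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(iters):
        x = proj_C(x - gamma * A.T @ (A @ x - B @ y))  # x_{k+1} = P_C(x_k - g A*(Ax_k - By_k))
        y = proj_Q(y + gamma * B.T @ (A @ x - B @ y))  # y_{k+1} = P_Q(y_k + g B*(Ax_{k+1} - By_k))
    return x, y

# Toy problem: C = nonnegative orthant in R^2, Q = [0, 2] in R,
# A = identity, B = [1, 1]^T, so solutions satisfy x = (t, t), y = t.
A = np.eye(2)
B = np.array([[1.0], [1.0]])
proj_C = lambda v: np.maximum(v, 0.0)
proj_Q = lambda v: np.clip(v, 0.0, 2.0)
# constant step inside (0, min(1/||A||^2, 1/||B||^2))
gamma = 0.9 / max(np.linalg.norm(A, 2)**2, np.linalg.norm(B, 2)**2)
x, y = acqa(A, B, proj_C, proj_Q, np.array([1.0, 3.0]), np.array([0.5]), gamma)
```

On this toy instance the residual $\|Ax_k - By_k\|$ decays geometrically, although only weak convergence is guaranteed in general.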

The ACQA involves two projections $P_C$ and $P_Q$ and, hence, might be hard to implement in the case where one of them fails to have a closed-form expression. So, Moudafi proposed the following relaxed alternating CQ-algorithm (RACQA) in [11]:

$$x_{k+1} = P_{C_k}\big(x_k - \gamma A^*(Ax_k - By_k)\big), \qquad y_{k+1} = P_{Q_k}\big(y_k + \beta B^*(Ax_{k+1} - By_k)\big),$$

where the half-spaces $C_k$ and $Q_k$ were defined in [11], and then he proved the weak convergence of the sequence $\{(x_k, y_k)\}$ to a solution of the SEP.

In [12], Byrne considered and studied algorithms to solve the approximate split equality problem (ASEP), which can be regarded as containing both the consistent case and the inconsistent case of the SEP. There, he proposed a simultaneous iterative algorithm (SSEA) as follows:

$$x_{k+1} = P_C\big(x_k - \gamma A^*(Ax_k - By_k)\big), \qquad y_{k+1} = P_Q\big(y_k + \gamma B^*(Ax_k - By_k)\big),$$

where $0 < \gamma < 2/\rho(G^*G)$ with $G = [A, -B]$. Then, he proposed the relaxed SSEA (RSSEA) and a perturbed version of the SSEA (PSSEA) for solving the ASEP, and he proved their convergence. Furthermore, he used these algorithms to solve the approximate split feasibility problem (ASFP), which is a special case of the ASEP. Note that he used the projected Landweber algorithm as a tool in that article.
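The simultaneous update can be sketched numerically as well (again ours, on the same kind of toy instance as above; the step size rule is the standard $\gamma \in (0, 2/\rho(G^*G))$ bound, and all data are illustrative):

```python
import numpy as np

def ssea(A, B, proj_C, proj_Q, x0, y0, iters=500):
    """Simultaneous split-equality algorithm (SSEA) sketch: both blocks are
    updated from the same residual A x_k - B y_k, with one step size gamma
    chosen inside (0, 2 / rho(G*G)) for G = [A, -B]."""
    G = np.hstack([A, -B])
    gamma = 1.0 / np.linalg.norm(G, 2)**2        # safely inside (0, 2/rho(G*G))
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(iters):
        r = A @ x - B @ y                        # common residual
        x, y = proj_C(x - gamma * A.T @ r), proj_Q(y + gamma * B.T @ r)
    return x, y

# Same toy SEP: C = nonnegative orthant, Q = [0, 2], A = I, B = [1, 1]^T.
A = np.eye(2); B = np.array([[1.0], [1.0]])
x, y = ssea(A, B, lambda v: np.maximum(v, 0.0), lambda v: np.clip(v, 0.0, 2.0),
            np.array([1.0, 3.0]), np.array([0.5]))
```

The design difference from the ACQA is that here the residual is computed once per sweep, so both updates see the same $(x_k, y_k)$.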

Note that the algorithms proposed by Moudafi and Byrne have only weak convergence in infinite-dimensional Hilbert spaces. In this paper, we use the regularization method to establish a single-step iterative algorithm to solve the ASEP in infinite-dimensional Hilbert spaces, and we will prove its strong convergence.

#### 2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, respectively, and let $C$ be a nonempty closed convex subset of $H$. Recall that the projection from $H$ onto $C$, denoted as $P_C$, is defined in such a way that, for each $x \in H$, $P_C x$ is the unique point in $C$ with the property

$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$

The following important properties of projections are useful to our study.

Proposition 1. Given $x \in H$ and $z \in C$:
(a) $z = P_C x$ if and only if $\langle x - z, y - z \rangle \le 0$, for all $y \in C$;
(b) $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle$, for all $x, y \in H$.

Definition 2. A mapping $T : H \to H$ is said to be
(a) nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$;
(b) firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently, $\langle Tx - Ty, x - y \rangle \ge \|Tx - Ty\|^2$ for all $x, y \in H$.
Alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T = \frac{1}{2}(I + S)$, where $S : H \to H$ is nonexpansive. It is well known that projections are (firmly) nonexpansive.
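As a quick numerical check (ours, not from the paper), the projection onto a closed Euclidean ball, one of the few projections with a simple closed form, satisfies the firm-nonexpansiveness inequality of Definition 2 and hence Proposition 1(b):

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Projection onto the closed Euclidean ball of the given radius, centred at 0."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = proj_ball(x), proj_ball(y)
    # firm nonexpansiveness: <Px - Py, x - y> >= ||Px - Py||^2
    assert np.dot(px - py, x - y) >= np.linalg.norm(px - py)**2 - 1e-12
    # which implies nonexpansiveness: ||Px - Py|| <= ||x - y||
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
```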

Definition 3. Let $F : H \to H$ be a nonlinear operator whose domain is $D(F) \subseteq H$ and whose range is $R(F) \subseteq H$.
(a) $F$ is said to be monotone if $\langle Fx - Fy, x - y \rangle \ge 0$ for all $x, y \in D(F)$.
(b) Given a number $\beta > 0$, $F$ is said to be $\beta$-strongly monotone if $\langle Fx - Fy, x - y \rangle \ge \beta \|x - y\|^2$ for all $x, y \in D(F)$.
(c) Given a number $L > 0$, $F$ is said to be $L$-Lipschitz if $\|Fx - Fy\| \le L \|x - y\|$ for all $x, y \in D(F)$.

Lemma 4 (see [13]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ and $\{\delta_n\}$ are sequences of real numbers such that
(i) $\{\gamma_n\} \subset (0,1)$ and $\sum_{n=0}^{\infty} \gamma_n = \infty$;
(ii) either $\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} |\gamma_n \delta_n| < \infty$.
Then, $\lim_{n \to \infty} a_n = 0$.
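A simple numerical illustration of Lemma 4 (ours): with the hypothetical choices $\gamma_n = \delta_n = 1/n$, which satisfy conditions (i) and (ii), the recursion drives $a_n$ to zero.

```python
# Lemma 4 with gamma_n = 1/n (in (0,1), non-summable) and delta_n = 1/n -> 0:
# the recursion a_{n+1} = (1 - gamma_n) a_n + gamma_n delta_n forces a_n -> 0.
a = 5.0                       # arbitrary nonnegative starting value
for n in range(2, 200_000):   # start at n = 2 so that gamma_n = 1/n lies in (0, 1)
    gamma, delta = 1.0 / n, 1.0 / n
    a = (1.0 - gamma) * a + gamma * delta
```

Here $a_n$ behaves like $(\log n)/n$, so after $2 \times 10^5$ steps it is far below its starting value.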

Next, we will state and prove our main result in this paper.

#### 3. Regularization Method for the ASEP

Let $S = C \times Q$ in $H = H_1 \times H_2$. Define $G : H \to H_3$ by
$$G = [A, -B], \quad \text{that is,} \quad Gw = Ax - By \ \text{ for } w = (x, y) \in H.$$

The ASEP can now be reformulated as finding $w \in S$ minimizing the function $f(w) := \frac{1}{2}\|Gw\|^2$ over $w \in S$. Therefore, solving the ASEP (1) is equivalent to solving the following minimization problem:
$$\min_{w \in S} f(w) = \frac{1}{2}\|Gw\|^2. \quad (14)$$

The minimization problem (14) is generally ill-posed. We consider the Tikhonov regularization (for more details about Tikhonov approximation, please see [8, 14] and the references therein)
$$\min_{w \in S} f_\alpha(w) := \frac{1}{2}\|Gw\|^2 + \frac{1}{2}\alpha \|w\|^2, \quad (15)$$
where $\alpha > 0$ is the regularization parameter. The regularized minimization (15) has a unique solution, which is denoted by $w_\alpha$. Assume that the minimization (14) is consistent, and let $w^\dagger$ be its minimum-norm solution; namely, $w^\dagger \in \Gamma$ ($\Gamma$ is the solution set of the minimization (14)) has the property
$$\|w^\dagger\| = \min\{\|w\| : w \in \Gamma\}.$$
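The minimum-norm selection effect of Tikhonov regularization is easy to see in the unconstrained least-squares analogue of (15) (an illustration with our own toy data, not the paper's constrained problem): as $\alpha \to 0$, the regularized minimizer tends to the pseudoinverse, i.e. minimum-norm, solution.

```python
import numpy as np

# Rank-deficient least squares: the null space of A is spanned by e3,
# so ||A w - b||^2 has infinitely many minimizers differing in w[2].
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0])

def tikhonov(alpha):
    # unique minimizer of 0.5*||A w - b||^2 + 0.5*alpha*||w||^2
    return np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ b)

w_min_norm = np.linalg.pinv(A) @ b   # minimum-norm solution [1, 2, 0]
```

As $\alpha$ shrinks, `tikhonov(alpha)` approaches `w_min_norm`, and its norm never exceeds that of the minimum-norm solution, mirroring inequality-type estimates used in the proof of Proposition 5 below.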

The following result is easily proved.

Proposition 5. If the minimization (14) is consistent, then the strong limit $\lim_{\alpha \to 0} w_\alpha$ exists and is the minimum-norm solution of the minimization (14).

Proof. For any $w \in \Gamma$, we have $f_\alpha(w_\alpha) \le f_\alpha(w)$; that is,
$$\frac{1}{2}\|Gw_\alpha\|^2 + \frac{1}{2}\alpha\|w_\alpha\|^2 \le \frac{1}{2}\|Gw\|^2 + \frac{1}{2}\alpha\|w\|^2 \le \frac{1}{2}\|Gw_\alpha\|^2 + \frac{1}{2}\alpha\|w\|^2.$$
It follows that, for all $\alpha > 0$ and $w \in \Gamma$,
$$\|w_\alpha\| \le \|w\|. \quad (18)$$
Therefore, $\{w_\alpha\}$ is bounded. Assume that $\alpha_n \to 0$ is such that $w_{\alpha_n} \rightharpoonup \tilde{w}$. Then, the weak lower semicontinuity of $f$ implies that, for any $w \in \Gamma$,
$$f(\tilde{w}) \le \liminf_{n \to \infty} f(w_{\alpha_n}) \le \liminf_{n \to \infty} f_{\alpha_n}(w_{\alpha_n}) \le \liminf_{n \to \infty} f_{\alpha_n}(w) = f(w).$$
This means that $\tilde{w} \in \Gamma$. Since the norm is weakly lower semicontinuous, we get from (18) that $\|\tilde{w}\| \le \|w\|$ for all $w \in \Gamma$; hence, $\tilde{w} = w^\dagger$. This is sufficient to ensure that $w_\alpha \rightharpoonup w^\dagger$ as $\alpha \to 0$. To obtain the strong convergence, noting that (18) holds for $w = w^\dagger$, we compute
$$\|w_\alpha - w^\dagger\|^2 = \|w_\alpha\|^2 - 2\langle w_\alpha, w^\dagger\rangle + \|w^\dagger\|^2 \le 2\big(\|w^\dagger\|^2 - \langle w_\alpha, w^\dagger\rangle\big).$$
Since $w_\alpha \rightharpoonup w^\dagger$, we get $w_\alpha \to w^\dagger$ in norm. So, we complete the proof.

Next we will show that $w_\alpha$ can be obtained in two steps. First, observing that the gradient
$$\nabla f_\alpha = G^*G + \alpha I$$
is $(\|G\|^2 + \alpha)$-Lipschitz and $\alpha$-strongly monotone, the mapping $P_S(I - \gamma \nabla f_\alpha)$ is a contraction with coefficient $1 - \frac{1}{2}\gamma\alpha$, where
$$0 < \gamma \le \frac{\alpha}{(\|G\|^2 + \alpha)^2}. \quad (23)$$

Indeed, observe that
$$\begin{aligned}
\|P_S(I - \gamma \nabla f_\alpha)x - P_S(I - \gamma \nabla f_\alpha)y\|^2
&\le \|(I - \gamma \nabla f_\alpha)x - (I - \gamma \nabla f_\alpha)y\|^2 \\
&= \|x - y\|^2 - 2\gamma \langle \nabla f_\alpha x - \nabla f_\alpha y, x - y\rangle + \gamma^2 \|\nabla f_\alpha x - \nabla f_\alpha y\|^2 \\
&\le \big(1 - 2\gamma\alpha + \gamma^2(\|G\|^2 + \alpha)^2\big)\|x - y\|^2 \\
&\le (1 - \gamma\alpha)\|x - y\|^2 \le \Big(1 - \tfrac{1}{2}\gamma\alpha\Big)^2 \|x - y\|^2.
\end{aligned}$$

Note that $w_\alpha$ is a fixed point of the mapping $P_S(I - \gamma \nabla f_\alpha)$ for any $\gamma$ satisfying (23) and can be obtained through the limit as $k \to \infty$ of the sequence of Picard iterates as follows:
$$w_{k+1} = P_S(I - \gamma \nabla f_\alpha)w_k = P_S\big((1 - \gamma\alpha)w_k - \gamma G^*Gw_k\big).$$
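The Picard iteration can be sketched as follows (our toy instance; the box constraint set and data are illustrative): for a fixed $\alpha$, iterating the contraction drives the iterates to its unique fixed point $w_\alpha$, which we can verify numerically via the fixed-point equation.

```python
import numpy as np

A = np.eye(2); B = np.array([[1.0], [1.0]])
G = np.hstack([A, -B])                       # G w = A x - B y for w = (x, y)
proj_S = lambda w: np.clip(w, 1.0, 2.0)      # toy constraint set S = [1, 2]^3

alpha = 0.1
gamma = alpha / (np.linalg.norm(G, 2)**2 + alpha)**2   # step satisfying (23)
T = lambda w: proj_S((1 - gamma * alpha) * w - gamma * (G.T @ (G @ w)))

w = np.zeros(3)
for _ in range(50_000):                      # Picard iterates of the contraction
    w = T(w)
```

Since the contraction coefficient is $1 - \frac{1}{2}\gamma\alpha$, which is close to 1 for small $\alpha$, many iterations are needed; this is exactly the motivation for combining the two limits into a single step below.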

Secondly, letting $\alpha \to 0$ yields $w_\alpha \to w^\dagger$ in norm. It is interesting to know whether these two steps can be combined to get $w^\dagger$ in a single step. The following result shows that, for suitable choices of $\{\gamma_n\}$ and $\{\alpha_n\}$, the minimum-norm solution $w^\dagger$ can be obtained by a single step, motivated by Xu [8].

Theorem 6. Assume that the minimization problem (14) is consistent. Define a sequence $\{w_n\}$ by the iterative algorithm
$$w_{n+1} = P_S\big((1 - \gamma_n\alpha_n)w_n - \gamma_n G^*Gw_n\big), \quad (26)$$
where $\{\gamma_n\}$ and $\{\alpha_n\}$ satisfy the following conditions:
(i) $0 < \gamma_n \le \alpha_n / (\|G\|^2 + \alpha_n)^2$ for all (large enough) $n$;
(ii) $\alpha_n \to 0$ and $\gamma_n \to 0$;
(iii) $\sum_{n=0}^{\infty} \gamma_n \alpha_n = \infty$;
(iv) $\big(|\gamma_n - \gamma_{n-1}| + |\gamma_n\alpha_n - \gamma_{n-1}\alpha_{n-1}|\big) / (\gamma_n\alpha_n)^2 \to 0$.
Then, $\{w_n\}$ converges in norm to the minimum-norm solution $w^\dagger$ of the minimization problem (14).

Proof. Note that, for any $\gamma$ satisfying (23), $w_\alpha$ is a fixed point of the mapping $P_S(I - \gamma \nabla f_\alpha)$. For each $n$, let $z_n$ be the unique fixed point of the contraction
$$T_n w := P_S\big((1 - \gamma_n\alpha_n)w - \gamma_n G^*Gw\big) = P_S(I - \gamma_n \nabla f_{\alpha_n})w.$$
Then, $z_n = w_{\alpha_n}$, and so, by Proposition 5, $z_n \to w^\dagger$ in norm.
Thus, to prove the theorem, it suffices to prove that
$$\|w_{n+1} - z_n\| \to 0.$$
Noting that $T_n$ has contraction coefficient $1 - \frac{1}{2}\gamma_n\alpha_n$, we have
$$\|w_{n+1} - z_n\| = \|T_n w_n - T_n z_n\| \le \Big(1 - \tfrac{1}{2}\gamma_n\alpha_n\Big)\|w_n - z_n\| \le \Big(1 - \tfrac{1}{2}\gamma_n\alpha_n\Big)\|w_n - z_{n-1}\| + \|z_{n-1} - z_n\|. \quad (30)$$
We now estimate
$$\|z_{n-1} - z_n\| = \|T_{n-1}z_{n-1} - T_n z_n\| \le \|T_{n-1}z_{n-1} - T_n z_{n-1}\| + \|T_n z_{n-1} - T_n z_n\| \le \|T_{n-1}z_{n-1} - T_n z_{n-1}\| + \Big(1 - \tfrac{1}{2}\gamma_n\alpha_n\Big)\|z_{n-1} - z_n\|.$$
This implies that
$$\|z_{n-1} - z_n\| \le \frac{2}{\gamma_n\alpha_n}\|T_{n-1}z_{n-1} - T_n z_{n-1}\|. \quad (32)$$
However, since $\{z_n\}$ is bounded, we have, for an appropriate constant $M > 0$,
$$\|T_{n-1}z_{n-1} - T_n z_{n-1}\| = \big\|P_S\big(z_{n-1} - \gamma_{n-1}\nabla f_{\alpha_{n-1}}(z_{n-1})\big) - P_S\big(z_{n-1} - \gamma_n\nabla f_{\alpha_n}(z_{n-1})\big)\big\| \le M\big(|\gamma_{n-1} - \gamma_n| + |\gamma_{n-1}\alpha_{n-1} - \gamma_n\alpha_n|\big). \quad (33)$$
Combining (30), (32), and (33), we obtain
$$\|w_{n+1} - z_n\| \le \Big(1 - \tfrac{1}{2}\gamma_n\alpha_n\Big)\|w_n - z_{n-1}\| + \tfrac{1}{2}\gamma_n\alpha_n\,\delta_n, \quad (34)$$
where
$$\delta_n = \frac{4M\big(|\gamma_{n-1} - \gamma_n| + |\gamma_{n-1}\alpha_{n-1} - \gamma_n\alpha_n|\big)}{(\gamma_n\alpha_n)^2}.$$
Now applying Lemma 4 to (34) and using the conditions (ii)–(iv), we conclude that $\|w_{n+1} - z_n\| \to 0$; therefore, $w_n \to w^\dagger$ in norm.

Remark 7. Note that $\gamma_n = \alpha_n / (\|G\|^2 + \alpha_n)^2$ and $\alpha_n = n^{-\delta}$ with $0 < \delta < 1/3$ satisfy the conditions (i)–(iv).

Remark 8. We can express the algorithm (26) in terms of $x_n$ and $y_n$. Writing $w_n = (x_n, y_n)$ and noting that $G^*Gw = \big(A^*(Ax - By), -B^*(Ax - By)\big)$ for $w = (x, y)$, we get
$$x_{n+1} = P_C\big((1 - \gamma_n\alpha_n)x_n - \gamma_n A^*(Ax_n - By_n)\big), \qquad y_{n+1} = P_Q\big((1 - \gamma_n\alpha_n)y_n + \gamma_n B^*(Ax_n - By_n)\big). \quad (36)$$
And we can obtain that the whole sequence $\{(x_n, y_n)\}$ generated by the algorithm (36) converges strongly to the minimum-norm solution of the ASEP (1), provided that the ASEP (1) is consistent and $\{\gamma_n\}$ and $\{\alpha_n\}$ satisfy the conditions (i)–(iv).
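The single-step regularized iteration can be sketched numerically as follows (our toy instance with $\alpha_n = n^{-0.3}$ and $\gamma_n = \alpha_n/(\|G\|^2 + \alpha_n)^2$; all data are illustrative). On this instance the solution set is $\{((t,t), t) : t \in [0,2]\}$, so the minimum-norm solution is the origin, and the regularized iterates drift toward it, slowly, since the step sizes vanish.

```python
import numpy as np

# Toy SEP: C = nonnegative orthant in R^2, Q = [0, 2], A = I, B = [1, 1]^T.
# The minimum-norm solution is x = (0, 0), y = 0.
A = np.eye(2); B = np.array([[1.0], [1.0]])
G2 = np.linalg.norm(np.hstack([A, -B]), 2)**2       # ||G||^2
proj_C = lambda v: np.maximum(v, 0.0)
proj_Q = lambda v: np.clip(v, 0.0, 2.0)

x, y = np.array([1.0, 3.0]), np.array([0.5])
for n in range(1, 20_000):
    alpha = n ** -0.3                               # vanishing regularization
    gamma = alpha / (G2 + alpha)**2                 # step satisfying condition (i)
    r = A @ x - B @ y                               # residual at (x_n, y_n)
    x = proj_C((1 - gamma * alpha) * x - gamma * (A.T @ r))
    y = proj_Q((1 - gamma * alpha) * y + gamma * (B.T @ r))
```

Unlike the ACQA/SSEA sketches, which stop at an arbitrary point of the solution set, this iteration selects the minimum-norm solution, at the price of a much slower, regularization-driven rate.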

Remark 9. Now, we apply the algorithm (36) to solve the ASFP. Let $H_2 = H_3$ and $B = I$; the iteration in (36) becomes
$$x_{n+1} = P_C\big((1 - \gamma_n\alpha_n)x_n - \gamma_n A^*(Ax_n - y_n)\big), \qquad y_{n+1} = P_Q\big((1 - \gamma_n\alpha_n)y_n + \gamma_n(Ax_n - y_n)\big).$$
This algorithm is different from the algorithms that have been proposed to solve the ASFP, but it does solve the ASFP.

In this paper, we considered the ASEP in infinite-dimensional Hilbert spaces, which has broad applicability in modeling significant real-world problems. We used the regularization method to propose a single-step iterative algorithm and showed that the sequence generated by such an algorithm converges strongly to the minimum-norm solution of the ASEP (1). We also gave an algorithm for solving the ASFP in Remark 9.

#### Acknowledgments

The authors wish to thank the referees for their helpful comments and suggestions. This research was supported by NSFC Grant no. 11071279.

#### References

1. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
2. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
3. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
4. Q. Yang, “The relaxed CQ algorithm solving the split feasibility problem,” Inverse Problems, vol. 20, no. 4, pp. 1261–1266, 2004.
5. B. Qu and N. Xiu, “A note on the CQ algorithm for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1655–1665, 2005.
6. B. Qu and N. Xiu, “A new halfspace-relaxation projection method for the split feasibility problem,” Linear Algebra and its Applications, vol. 428, no. 5-6, pp. 1218–1229, 2008.
7. J. Zhao and Q. Yang, “Several solution methods for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
8. H.-K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” Inverse Problems, vol. 26, no. 10, Article ID 105018, 17 pages, 2010.
9. L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, “Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem,” Nonlinear Analysis: Theory, Methods & Applications, vol. 75, no. 4, pp. 2116–2125, 2012.
10. A. Moudafi, “Alternating CQ-algorithm for convex feasibility and split fixed-point problems,” Journal of Nonlinear and Convex Analysis, 2013.
11. A. Moudafi, “A relaxed alternating CQ-algorithm for convex feasibility problems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 79, pp. 117–121, 2013.
12. C. Byrne and A. Moudafi, “Extensions of the CQ algorithm for the split feasibility and split equality problems,” 2013, hal-00776640-version 1.
13. H. K. Xu and T. H. Kim, “Convergence of hybrid steepest-descent methods for variational inequalities,” Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 185–201, 2003.
14. A. Moudafi, “Coupling proximal algorithm and Tikhonov method,” Nonlinear Times and Digest, vol. 1, no. 2, pp. 203–209, 1994.