Abstract

We study the approximate split equality problem (ASEP) in the framework of infinite-dimensional Hilbert spaces. Let $H_1$, $H_2$, and $H_3$ be infinite-dimensional real Hilbert spaces, let $C \subseteq H_1$ and $Q \subseteq H_2$ be two nonempty closed convex sets, and let $A : H_1 \to H_3$ and $B : H_2 \to H_3$ be two bounded linear operators. The ASEP in infinite-dimensional Hilbert spaces is to minimize the function $f(x,y) = \frac{1}{2}\|Ax - By\|^2$ over $x \in C$ and $y \in Q$. Recently, Moudafi and Byrne proposed several algorithms for solving the split equality problem and proved their convergence. Note that their algorithms converge only weakly in infinite-dimensional Hilbert spaces. In this paper, we use the regularization method to establish a single-step iterative algorithm for solving the ASEP in infinite-dimensional Hilbert spaces and show that the sequence generated by this algorithm converges strongly to the minimum-norm solution of the ASEP. Note that, by taking $B = I$ in the ASEP, we recover the approximate split feasibility problem (ASFP).

1. Introduction

Let $C \subseteq \mathbb{R}^{N}$ and $Q \subseteq \mathbb{R}^{M}$ be closed, nonempty convex sets, and let $A$ and $B$ be $J$ by $N$ and $J$ by $M$ real matrices, respectively. The split equality problem (SEP) in finite-dimensional Hilbert spaces is to find $x \in C$ and $y \in Q$ such that $Ax = By$; the approximate split equality problem (ASEP) in finite-dimensional Hilbert spaces is to minimize the function $f(x,y) = \frac{1}{2}\|Ax - By\|^{2}$ over $x \in C$ and $y \in Q$. When $M = J$ and $B = I$, the SEP reduces to the well-known split feasibility problem (SFP) and the ASEP becomes the approximate split feasibility problem (ASFP). For information on the split feasibility problem, please see [1–9].
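For concreteness, the following minimal Python sketch sets up a small finite-dimensional instance of this problem; the ball-shaped sets $C$, $Q$ and the random matrices $A$, $B$ are hypothetical choices made only for illustration, not data from the paper.

```python
# Minimal finite-dimensional illustration of the (A)SEP data.  The ball-shaped
# sets C, Q and the random matrices A, B are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)
N, M, J = 5, 4, 3
A = rng.standard_normal((J, N))   # A maps R^N into R^J
B = rng.standard_normal((J, M))   # B maps R^M into R^J

def proj_ball(u, radius=1.0):
    """Projection onto the closed ball of the given radius centered at 0."""
    nrm = np.linalg.norm(u)
    return u if nrm <= radius else (radius / nrm) * u

def asep_objective(x, y):
    """ASEP objective f(x, y) = (1/2) * ||A x - B y||^2."""
    return 0.5 * np.linalg.norm(A @ x - B @ y) ** 2

print(asep_objective(proj_ball(rng.standard_normal(N)),
                     proj_ball(rng.standard_normal(M))))
```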

In this paper, we work in the framework of infinite-dimensional Hilbert spaces. Let $H_1$, $H_2$, and $H_3$ be infinite-dimensional real Hilbert spaces, let $C \subseteq H_1$ and $Q \subseteq H_2$ be two nonempty closed convex sets, and let $A : H_1 \to H_3$ and $B : H_2 \to H_3$ be two bounded linear operators. The ASEP in infinite-dimensional Hilbert spaces is to minimize
$$
f(x,y) = \frac{1}{2}\|Ax - By\|^{2} \qquad (1)
$$
over $x \in C$ and $y \in Q$.

Very recently, for solving the SEP, Moudafi introduced the following alternating CQ-algorithm (ACQA) in [10]:
$$
x_{k+1} = P_C\bigl(x_k - \gamma_k A^{*}(Ax_k - By_k)\bigr), \qquad
y_{k+1} = P_Q\bigl(y_k + \gamma_k B^{*}(Ax_{k+1} - By_k)\bigr).
$$

Then, he proved the weak convergence of the sequence $\{(x_k, y_k)\}$ to a solution of the SEP, provided that the solution set is nonempty and some conditions on the sequence of positive parameters $\{\gamma_k\}$ are satisfied.
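For intuition, here is a minimal finite-dimensional sketch of an alternating projected-gradient iteration of this type; the unit-ball sets, random matrices, constant step size, and iteration count are illustrative assumptions, not choices made in [10].

```python
# Sketch of an alternating CQ-type iteration (ACQA) on a toy problem.
# C and Q are unit balls; the constant step is kept below
# 1 / max(||A||_2^2, ||B||_2^2).  All concrete choices are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, M, J = 5, 4, 3
A = rng.standard_normal((J, N))
B = rng.standard_normal((J, M))

def proj_ball(u, radius=1.0):
    nrm = np.linalg.norm(u)
    return u if nrm <= radius else (radius / nrm) * u

gamma = 0.9 / max(np.linalg.norm(A, 2) ** 2, np.linalg.norm(B, 2) ** 2)
x, y = rng.standard_normal(N), rng.standard_normal(M)
for _ in range(500):
    x = proj_ball(x - gamma * A.T @ (A @ x - B @ y))   # x-update uses the old y
    y = proj_ball(y + gamma * B.T @ (A @ x - B @ y))   # y-update uses the new x
print(np.linalg.norm(A @ x - B @ y))                   # residual ||A x - B y||
```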

The ACQA involves the two projections $P_C$ and $P_Q$ and, hence, might be hard to implement when one of them fails to have a closed-form expression. So, Moudafi proposed the following relaxed alternating CQ-algorithm (RACQA) in [11]:
$$
x_{k+1} = P_{C_k}\bigl(x_k - \gamma A^{*}(Ax_k - By_k)\bigr), \qquad
y_{k+1} = P_{Q_k}\bigl(y_k + \beta B^{*}(Ax_{k+1} - By_k)\bigr),
$$
where the half-spaces $C_k$, $Q_k$ and the step sizes were defined in [11], and then he proved the weak convergence of the sequence $\{(x_k, y_k)\}$ to a solution of the SEP.
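The computational appeal of the relaxed variant is that projections onto half-spaces always have a closed form. A minimal sketch of such a projection, with arbitrary illustrative data:

```python
# Projection onto a half-space {u : <a, u> <= b}, the kind of set used by
# relaxed CQ-type algorithms in place of C and Q.  The vector a and the
# bound b below are arbitrary illustrative values.
import numpy as np

def proj_halfspace(u, a, b):
    """Return the nearest point to u in {v : <a, v> <= b} (a != 0)."""
    viol = a @ u - b
    if viol <= 0:                      # u already satisfies the constraint
        return u
    return u - (viol / (a @ a)) * a    # move orthogonally back to the boundary

u = np.array([2.0, 1.0])
a = np.array([1.0, 0.0])
print(proj_halfspace(u, a, 1.0))       # -> [1. 1.]
```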

In [12], Byrne considered and studied algorithms to solve the approximate split equality problem (ASEP), which can be regarded as covering both the consistent and the inconsistent case of the SEP. There, he proposed the following simultaneous iterative algorithm (SSEA):
$$
x_{k+1} = P_C\bigl(x_k - \gamma_k A^{*}(Ax_k - By_k)\bigr), \qquad
y_{k+1} = P_Q\bigl(y_k + \gamma_k B^{*}(Ax_k - By_k)\bigr),
$$
where $\gamma_k \in \bigl(\epsilon, \tfrac{2}{\rho(G^{*}G)} - \epsilon\bigr)$ with $G = [A, \ -B]$. Then, he proposed the relaxed SSEA (RSSEA) and a perturbed version of the SSEA (PSSEA) for solving the ASEP, and he proved their convergence. Furthermore, he used these algorithms to solve the approximate split feasibility problem (ASFP), which is a special case of the ASEP. Note that he used the projected Landweber algorithm as a tool in that article.
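A minimal sketch of a simultaneous update of this type, in which both variables are updated from the same residual; the sets, matrices, and step size below are illustrative assumptions.

```python
# Sketch of a simultaneous (SSEA-type) update: x and y are both updated from
# the same residual A x_k - B y_k.  Sets, matrices, and the constant step size
# are illustrative; the step is kept inside (0, 2/rho(G^T G)) with G = [A, -B].
import numpy as np

rng = np.random.default_rng(2)
N, M, J = 5, 4, 3
A = rng.standard_normal((J, N))
B = rng.standard_normal((J, M))
G = np.hstack([A, -B])                       # G = [A, -B]

def proj_ball(u, radius=1.0):
    nrm = np.linalg.norm(u)
    return u if nrm <= radius else (radius / nrm) * u

gamma = 1.0 / np.linalg.norm(G, 2) ** 2      # safely inside (0, 2/rho(G^T G))
x, y = rng.standard_normal(N), rng.standard_normal(M)
for _ in range(500):
    r = A @ x - B @ y                        # shared residual
    x, y = proj_ball(x - gamma * A.T @ r), proj_ball(y + gamma * B.T @ r)
print(np.linalg.norm(A @ x - B @ y))
```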

Note that the algorithms proposed by Moudafi and Byrne converge only weakly in infinite-dimensional Hilbert spaces. In this paper, we use the regularization method to establish a single-step iterative algorithm for solving the ASEP in infinite-dimensional Hilbert spaces, and we prove its strong convergence.

2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively, and let $C$ be a nonempty closed convex subset of $H$. Recall that the projection from $H$ onto $C$, denoted by $P_C$, is defined in such a way that, for each $x \in H$, $P_C x$ is the unique point in $C$ with the property
$$
\|x - P_C x\| \le \|x - y\| \quad \text{for all } y \in C.
$$
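For example, the projection onto a box is coordinatewise clipping; the following sketch checks the defining nearest-point property numerically (the box bounds and sample points are illustrative).

```python
# The metric projection onto a box [lo, hi]^n is coordinatewise clipping.
# The check below verifies numerically that the clipped point is at least as
# close to x as any other sampled point of the box (the defining property of
# P_C).  The box bounds are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(3)
lo, hi = -1.0, 1.0

def proj_box(u):
    return np.clip(u, lo, hi)

x = rng.standard_normal(4) * 3.0
px = proj_box(x)
others = rng.uniform(lo, hi, size=(1000, 4))          # random points of C
assert all(np.linalg.norm(x - px) <= np.linalg.norm(x - y) + 1e-12 for y in others)
print(px)
```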

The following important properties of projections are useful to our study.

Proposition 1. Given $x \in H$ and $z \in C$:
(a) $z = P_C x$ if and only if $\langle x - z, y - z\rangle \le 0$, for all $y \in C$;
(b) $\|P_C x - P_C y\|^{2} \le \langle P_C x - P_C y, x - y\rangle$, for all $x, y \in H$.

Definition 2. A mapping $T : H \to H$ is said to be (a) nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$; (b) firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently, $\langle Tx - Ty, x - y\rangle \ge \|Tx - Ty\|^{2}$ for all $x, y \in H$; alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T = \frac{1}{2}(I + S)$, where $S : H \to H$ is nonexpansive. It is well known that projections are (firmly) nonexpansive.
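As a quick numerical illustration (not a proof), one can check the firm-nonexpansiveness inequality for a box projection at random points:

```python
# Numerical check that a projection is firmly nonexpansive:
# <P x - P y, x - y> >= ||P x - P y||^2 for the box projection onto [-1, 1]^n.
# Points are sampled at random; this is an illustration, not a proof.
import numpy as np

rng = np.random.default_rng(4)
proj = lambda u: np.clip(u, -1.0, 1.0)     # projection onto the box [-1, 1]^n

for _ in range(1000):
    x, y = rng.standard_normal(5) * 2.0, rng.standard_normal(5) * 2.0
    px, py = proj(x), proj(y)
    assert (px - py) @ (x - y) >= np.linalg.norm(px - py) ** 2 - 1e-12
print("firm nonexpansiveness verified on random samples")
```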

Definition 3. Let $F$ be a nonlinear operator whose domain is $D(F) \subseteq H$ and whose range is $R(F) \subseteq H$.
(a) $F$ is said to be monotone if $\langle Fx - Fy, x - y\rangle \ge 0$ for all $x, y \in D(F)$.
(b) Given a number $\beta > 0$, $F$ is said to be $\beta$-strongly monotone if $\langle Fx - Fy, x - y\rangle \ge \beta\|x - y\|^{2}$ for all $x, y \in D(F)$.
(c) Given a number $L > 0$, $F$ is said to be $L$-Lipschitz if $\|Fx - Fy\| \le L\|x - y\|$ for all $x, y \in D(F)$.

Lemma 4 (see [13]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$
a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n, \quad n \ge 0,
$$
where $\{\gamma_n\}$, $\{\delta_n\}$ are sequences of real numbers such that (i) $\{\gamma_n\} \subset (0,1)$ and $\sum_{n=0}^{\infty}\gamma_n = \infty$; (ii) either $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}|\gamma_n\delta_n| < \infty$.
Then, $\lim_{n\to\infty}a_n = 0$.
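A quick numerical illustration of the lemma with simple illustrative sequences $\gamma_n = 1/(n+1)$ and $\delta_n = 1/n$:

```python
# Numerical illustration of Lemma 4: with gamma_n in (0, 1), sum gamma_n = inf,
# and delta_n -> 0, the recursion a_{n+1} <= (1 - gamma_n) a_n + gamma_n delta_n
# forces a_n -> 0.  The particular sequences below are illustrative choices.
a = 1.0
for n in range(1, 200001):
    gamma_n = 1.0 / (n + 1)          # values lie in (0, 1), the sum diverges
    delta_n = 1.0 / n                # tends to 0, so limsup delta_n <= 0
    a = (1 - gamma_n) * a + gamma_n * delta_n
print(a)                             # close to 0
```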

Next, we will state and prove our main result in this paper.

3. Regularization Method for the ASEP

Let $H = H_1 \times H_2$ and $S = C \times Q \subseteq H$. Define $G : H \to H_3$ by
$$
Gw = Ax - By, \quad w = (x, y) \in H; \quad \text{that is,} \quad G = [A, \ -B].
$$

The ASEP can now be reformulated as finding $w = (x, y) \in S$ minimizing the function $f(w) = \frac{1}{2}\|Gw\|^{2}$ over $w \in S$. Therefore, solving the ASEP (1) is equivalent to solving the following minimization problem:
$$
\min_{w \in S} f(w) = \frac{1}{2}\|Gw\|^{2}. \qquad (14)
$$

The minimization problem (14) is generally ill-posed. We consider the Tikhonov regularization (for more details about the Tikhonov approximation, please see [8, 14] and the references therein)
$$
\min_{w \in S} f_{\alpha}(w) = \frac{1}{2}\|Gw\|^{2} + \frac{1}{2}\alpha\|w\|^{2}, \qquad (15)
$$
where $\alpha > 0$ is the regularization parameter. The regularized minimization (15) has a unique solution, which is denoted by $w_{\alpha}$. Assume that the minimization (14) is consistent, and let $w^{\dagger}$ be its minimum-norm solution; namely, $w^{\dagger} \in \Gamma$ ($\Gamma$ is the solution set of the minimization (14)) has the property
$$
\|w^{\dagger}\| = \min\{\|w\| : w \in \Gamma\}.
$$
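In finite dimensions, $G$ is simply the block matrix $[A, \ -B]$ acting on the stacked vector $w = (x, y)$; a minimal sketch of $f$ and $f_{\alpha}$, with illustrative matrices and an illustrative value of $\alpha$:

```python
# Building the product-space operator G = [A, -B] and the Tikhonov-regularized
# objective f_alpha(w) = (1/2)||G w||^2 + (alpha/2)||w||^2 for w = (x, y).
# Matrices and the value of alpha are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
N, M, J = 5, 4, 3
A = rng.standard_normal((J, N))
B = rng.standard_normal((J, M))
G = np.hstack([A, -B])                 # G w = A x - B y for w = (x, y)

def f(w):
    return 0.5 * np.linalg.norm(G @ w) ** 2

def f_alpha(w, alpha):
    return f(w) + 0.5 * alpha * np.linalg.norm(w) ** 2

w = rng.standard_normal(N + M)
print(f(w), f_alpha(w, alpha=0.1))
```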

The following result is easily proved.

Proposition 5. If the minimization (14) is consistent, then the strong $\lim_{\alpha\to 0} w_{\alpha}$ exists and is the minimum-norm solution of the minimization (14).

Proof. For any $\tilde{w} \in \Gamma$, we have
$$
\frac{1}{2}\|Gw_{\alpha}\|^{2} + \frac{1}{2}\alpha\|w_{\alpha}\|^{2} = f_{\alpha}(w_{\alpha}) \le f_{\alpha}(\tilde{w}) = \frac{1}{2}\|G\tilde{w}\|^{2} + \frac{1}{2}\alpha\|\tilde{w}\|^{2}.
$$
Since $\tilde{w}$ minimizes $f$ over $S$ and $w_{\alpha} \in S$, we also have $\|G\tilde{w}\|^{2} \le \|Gw_{\alpha}\|^{2}$. It follows that, for all $\tilde{w} \in \Gamma$ and $\alpha > 0$,
$$
\|w_{\alpha}\| \le \|\tilde{w}\|. \qquad (18)
$$
Therefore, $\{w_{\alpha}\}$ is bounded. Assume that $\{\alpha_n\}$ is a sequence of positive numbers such that $\alpha_n \to 0$ and $w_{\alpha_n} \rightharpoonup \hat{w}$ weakly. Then, the weak lower semicontinuity of $f$ implies that, for any $\tilde{w} \in \Gamma$,
$$
f(\hat{w}) \le \liminf_{n\to\infty} f(w_{\alpha_n}) \le \liminf_{n\to\infty} f_{\alpha_n}(w_{\alpha_n}) \le \liminf_{n\to\infty} f_{\alpha_n}(\tilde{w}) = f(\tilde{w}).
$$
This means that $\hat{w} \in \Gamma$. Since the norm is weakly lower semicontinuous, we get from (18) that $\|\hat{w}\| \le \|\tilde{w}\|$ for all $\tilde{w} \in \Gamma$; hence, $\hat{w} = w^{\dagger}$. This is sufficient to ensure that $w_{\alpha} \rightharpoonup w^{\dagger}$ as $\alpha \to 0$. To obtain the strong convergence, noting that (18) holds for $\tilde{w} = w^{\dagger}$, we compute
$$
\|w_{\alpha} - w^{\dagger}\|^{2} = \|w_{\alpha}\|^{2} - 2\langle w_{\alpha}, w^{\dagger}\rangle + \|w^{\dagger}\|^{2} \le 2\bigl(\|w^{\dagger}\|^{2} - \langle w_{\alpha}, w^{\dagger}\rangle\bigr).
$$
Since $w_{\alpha} \rightharpoonup w^{\dagger}$, we get $w_{\alpha} \to w^{\dagger}$ in norm. So, we complete the proof.

Next, we will show that $w_{\alpha}$ can be obtained in two steps. First, observing that the gradient
$$
\nabla f_{\alpha}(w) = G^{*}Gw + \alpha w
$$
is $(\|G\|^{2} + \alpha)$-Lipschitz and $\alpha$-strongly monotone, the mapping $P_S(I - \gamma\nabla f_{\alpha})$ is a contraction with coefficient $1 - \frac{1}{2}\alpha\gamma$, where
$$
0 < \gamma \le \frac{\alpha}{(\|G\|^{2} + \alpha)^{2}}. \qquad (23)
$$

Indeed, observe that
$$
\begin{aligned}
\|(I - \gamma\nabla f_{\alpha})u - (I - \gamma\nabla f_{\alpha})v\|^{2}
&= \|u - v\|^{2} - 2\gamma\langle\nabla f_{\alpha}(u) - \nabla f_{\alpha}(v), u - v\rangle + \gamma^{2}\|\nabla f_{\alpha}(u) - \nabla f_{\alpha}(v)\|^{2} \\
&\le \bigl(1 - 2\alpha\gamma + \gamma^{2}(\|G\|^{2} + \alpha)^{2}\bigr)\|u - v\|^{2} \\
&\le (1 - \alpha\gamma)\|u - v\|^{2} \le \Bigl(1 - \frac{1}{2}\alpha\gamma\Bigr)^{2}\|u - v\|^{2},
\end{aligned}
$$
whenever $\gamma$ satisfies (23); since $P_S$ is nonexpansive, the claim follows.

Note that $w_{\alpha}$ is a fixed point of the mapping $P_S(I - \gamma\nabla f_{\alpha})$ for any $\gamma$ satisfying (23) and can be obtained through the limit, as $m \to \infty$, of the sequence of Picard iterates as follows:
$$
w_{m+1} = P_S\bigl((I - \gamma\nabla f_{\alpha})w_m\bigr) = P_S\bigl((1 - \alpha\gamma)w_m - \gamma G^{*}Gw_m\bigr).
$$
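Before turning to the second step, here is a finite-dimensional sketch of this first step; the sets (balls centered away from the origin), the matrices, and the value of $\alpha$ are illustrative assumptions.

```python
# Sketch of the first step: for a fixed alpha, Picard-iterate the contraction
# P_S(I - gamma * grad f_alpha) toward its fixed point w_alpha.  S = C x Q with
# C, Q balls of radius 1 centered at (2, ..., 2); all concrete choices are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
N, M, J = 5, 4, 3
A = rng.standard_normal((J, N))
B = rng.standard_normal((J, M))
G = np.hstack([A, -B])                            # G w = A x - B y
normG2 = np.linalg.norm(G, 2) ** 2

def proj_shifted_ball(u, center, radius=1.0):
    d = u - center
    nrm = np.linalg.norm(d)
    return u if nrm <= radius else center + (radius / nrm) * d

def P_S(w):                                       # P_S(x, y) = (P_C x, P_Q y)
    return np.concatenate([proj_shifted_ball(w[:N], np.full(N, 2.0)),
                           proj_shifted_ball(w[N:], np.full(M, 2.0))])

alpha = 1.0
gamma = alpha / (normG2 + alpha) ** 2             # step size satisfying (23)
w = P_S(np.zeros(N + M))
for _ in range(20000):                            # Picard iterates of the contraction
    w = P_S((1.0 - alpha * gamma) * w - gamma * (G.T @ (G @ w)))

# Fixed-point residual of the current iterate; it shrinks toward 0 as the
# Picard iteration converges to w_alpha.
print(np.linalg.norm(w - P_S((1.0 - alpha * gamma) * w - gamma * (G.T @ (G @ w)))))
```

Since the contraction factor $1 - \frac{1}{2}\alpha\gamma$ tends to $1$ as $\alpha \to 0$, this inner loop becomes slow for small $\alpha$.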

Secondly, letting $\alpha \to 0$ yields $w_{\alpha} \to w^{\dagger}$ in norm, by Proposition 5. It is interesting to know whether these two steps can be combined to get $w^{\dagger}$ in a single step. The following result, motivated by Xu [8], shows that, for suitable choices of $\{\gamma_n\}$ and $\{\alpha_n\}$, the minimum-norm solution $w^{\dagger}$ can be obtained by a single step.

Theorem 6. Assume that the minimization problem (14) is consistent. Define a sequence $\{w_n\}$ by the iterative algorithm
$$
w_{n+1} = P_S\bigl((1 - \alpha_n\gamma_n)w_n - \gamma_n G^{*}Gw_n\bigr), \qquad (26)
$$
where the sequences $\{\gamma_n\}$ and $\{\alpha_n\}$ of positive numbers satisfy the following conditions:
(i) $0 < \gamma_n \le \alpha_n/(\|G\|^{2} + \alpha_n)^{2}$ for all (large enough) $n$;
(ii) $\alpha_n \to 0$ and $\gamma_n \to 0$;
(iii) $\sum_{n=0}^{\infty}\alpha_n\gamma_n = \infty$;
(iv) $\bigl(|\gamma_{n+1} - \gamma_n| + |\alpha_{n+1}\gamma_{n+1} - \alpha_n\gamma_n|\bigr)/(\alpha_{n+1}\gamma_{n+1})^{2} \to 0$.
Then, $\{w_n\}$ converges in norm to the minimum-norm solution $w^{\dagger}$ of the minimization problem (14).

Proof. Note that, for any $\gamma$ satisfying (23), $w_{\alpha}$ is a fixed point of the mapping $P_S(I - \gamma\nabla f_{\alpha})$. For each $n$, let $z_n$ be the unique fixed point of the contraction
$$
T_n := P_S(I - \gamma_n\nabla f_{\alpha_n}) = P_S\bigl((1 - \alpha_n\gamma_n)I - \gamma_n G^{*}G\bigr).
$$
Then, $z_n = w_{\alpha_n}$, and so, by Proposition 5,
$$
z_n \to w^{\dagger} \quad \text{in norm as } n \to \infty.
$$
Thus, to prove the theorem, it suffices to prove that
$$
\|w_{n+1} - z_n\| \to 0 \quad \text{as } n \to \infty.
$$
Noting that $T_n$ has contraction coefficient $1 - \frac{1}{2}\alpha_n\gamma_n$, we have
$$
\|w_{n+1} - z_n\| = \|T_n w_n - T_n z_n\| \le \Bigl(1 - \frac{1}{2}\alpha_n\gamma_n\Bigr)\|w_n - z_n\|
\le \Bigl(1 - \frac{1}{2}\alpha_n\gamma_n\Bigr)\bigl(\|w_n - z_{n-1}\| + \|z_{n-1} - z_n\|\bigr). \qquad (30)
$$
We now estimate $\|z_n - z_{n-1}\|$:
$$
\begin{aligned}
\|z_n - z_{n-1}\| &= \|T_n z_n - T_{n-1} z_{n-1}\|
\le \|T_n z_n - T_n z_{n-1}\| + \|T_n z_{n-1} - T_{n-1} z_{n-1}\| \\
&\le \Bigl(1 - \frac{1}{2}\alpha_n\gamma_n\Bigr)\|z_n - z_{n-1}\|
+ \bigl\|(\gamma_{n-1} - \gamma_n)G^{*}Gz_{n-1} + (\alpha_{n-1}\gamma_{n-1} - \alpha_n\gamma_n)z_{n-1}\bigr\| \\
&\le \Bigl(1 - \frac{1}{2}\alpha_n\gamma_n\Bigr)\|z_n - z_{n-1}\|
+ \|G\|^{2}|\gamma_n - \gamma_{n-1}|\,\|z_{n-1}\| + |\alpha_n\gamma_n - \alpha_{n-1}\gamma_{n-1}|\,\|z_{n-1}\|.
\end{aligned}
$$
This implies that
$$
\|z_n - z_{n-1}\| \le \frac{2}{\alpha_n\gamma_n}\bigl(\|G\|^{2}|\gamma_n - \gamma_{n-1}| + |\alpha_n\gamma_n - \alpha_{n-1}\gamma_{n-1}|\bigr)\|z_{n-1}\|. \qquad (32)
$$
However, since $\{z_n\}$ is bounded (by (18), $\|z_n\| = \|w_{\alpha_n}\| \le \|w^{\dagger}\|$), we have, for an appropriate constant $M > 0$,
$$
\|z_n - z_{n-1}\| \le \frac{M}{\alpha_n\gamma_n}\bigl(|\gamma_n - \gamma_{n-1}| + |\alpha_n\gamma_n - \alpha_{n-1}\gamma_{n-1}|\bigr). \qquad (33)
$$
Combining (30), (32), and (33), we obtain
$$
\|w_{n+1} - z_n\| \le \Bigl(1 - \frac{1}{2}\alpha_n\gamma_n\Bigr)\|w_n - z_{n-1}\| + \bar{\gamma}_n\delta_n, \qquad (34)
$$
where
$$
\bar{\gamma}_n = \frac{1}{2}\alpha_n\gamma_n, \qquad
\delta_n = \frac{2M}{(\alpha_n\gamma_n)^{2}}\bigl(|\gamma_n - \gamma_{n-1}| + |\alpha_n\gamma_n - \alpha_{n-1}\gamma_{n-1}|\bigr).
$$
Now, applying Lemma 4 to (34) and using the conditions (ii)–(iv), we conclude that $\|w_{n+1} - z_n\| \to 0$; therefore, $w_n \to w^{\dagger}$ in norm.

Remark 7. Note that $\gamma_n = \alpha_n/(\|G\|^{2} + \alpha_n)^{2}$ and $\alpha_n = n^{-\delta}$ with $0 < \delta < \frac{1}{3}$ satisfy the conditions (i)–(iv).

Remark 8. We can express the algorithm (26) in terms of $x$ and $y$. Writing $w_n = (x_n, y_n)$ and noting that $P_S(x, y) = (P_C x, P_Q y)$ and $G^{*}Gw = \bigl(A^{*}(Ax - By), -B^{*}(Ax - By)\bigr)$, we get
$$
x_{n+1} = P_C\bigl((1 - \alpha_n\gamma_n)x_n - \gamma_n A^{*}(Ax_n - By_n)\bigr), \qquad
y_{n+1} = P_Q\bigl((1 - \alpha_n\gamma_n)y_n + \gamma_n B^{*}(Ax_n - By_n)\bigr). \qquad (36)
$$
And we can conclude that the whole sequence $\{(x_n, y_n)\}$ generated by the algorithm (36) converges strongly to the minimum-norm solution of the ASEP (1), provided that the ASEP (1) is consistent and $\{\gamma_n\}$ and $\{\alpha_n\}$ satisfy the conditions (i)–(iv).
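A finite-dimensional sketch of iteration (36), with parameters chosen as in Remark 7; the sets, matrices, exponent, and iteration count are illustrative assumptions, and the printout only reports diagnostics for the final iterate.

```python
# Sketch of the single-step regularized iteration (36).  C and Q are balls of
# radius 1 centered at (2, ..., 2), A and B are small random matrices, and the
# parameters follow the pattern of Remark 7; all choices are illustrative.
import numpy as np

rng = np.random.default_rng(7)
N, M, J = 5, 4, 3
A = rng.standard_normal((J, N))
B = rng.standard_normal((J, M))
G = np.hstack([A, -B])
normG2 = np.linalg.norm(G, 2) ** 2

def proj_ball(u, center, radius=1.0):
    d = u - center
    nrm = np.linalg.norm(d)
    return u if nrm <= radius else center + (radius / nrm) * d

x, y = np.full(N, 2.0), np.full(M, 2.0)            # start inside C and Q
for n in range(1, 20001):
    alpha = n ** (-0.25)                           # alpha_n = n^(-delta), delta < 1/3
    gamma = alpha / (normG2 + alpha) ** 2          # gamma_n as in Remark 7
    r = A @ x - B @ y
    x = proj_ball((1.0 - alpha * gamma) * x - gamma * (A.T @ r), np.full(N, 2.0))
    y = proj_ball((1.0 - alpha * gamma) * y + gamma * (B.T @ r), np.full(M, 2.0))

# Diagnostics: current residual ||A x - B y|| and the norm of (x, y).
print(np.linalg.norm(A @ x - B @ y), np.linalg.norm(np.concatenate([x, y])))
```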

Remark 9. Now, we apply the algorithm (36) to solve the ASFP. Let $H_2 = H_3$ and $B = I$; then the iteration in (36) becomes
$$
x_{n+1} = P_C\bigl((1 - \alpha_n\gamma_n)x_n - \gamma_n A^{*}(Ax_n - y_n)\bigr), \qquad
y_{n+1} = P_Q\bigl((1 - \alpha_n\gamma_n)y_n + \gamma_n(Ax_n - y_n)\bigr).
$$
This algorithm is different from the algorithms that have been proposed to solve the ASFP, but it does solve the ASFP.
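A sketch of this ASFP specialization under the same kind of illustrative assumptions (boxes for $C$ and $Q$, a small random $A$):

```python
# The single-step scheme specialized to the ASFP (B = I, so the residual is
# A x - y).  The sets, matrix, parameters, and iteration count are illustrative.
import numpy as np

rng = np.random.default_rng(8)
N, M = 5, 3
A = rng.standard_normal((M, N))
normG2 = np.linalg.norm(np.hstack([A, -np.eye(M)]), 2) ** 2   # ||G||^2 with B = I

proj_C = lambda u: np.clip(u, -1.0, 1.0)          # C = box in R^N
proj_Q = lambda u: np.clip(u, 0.0, 2.0)           # Q = box in R^M

x, y = np.zeros(N), np.ones(M)
for n in range(1, 20001):
    alpha = n ** (-0.25)
    gamma = alpha / (normG2 + alpha) ** 2
    r = A @ x - y
    x = proj_C((1.0 - alpha * gamma) * x - gamma * (A.T @ r))
    y = proj_Q((1.0 - alpha * gamma) * y + gamma * r)
print(np.linalg.norm(A @ x - y))
```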

In this paper, we considered the ASEP in infinite-dimensional Hilbert spaces, which has broad applicability in modeling significant real-world problems. Then, we used the regularization method to propose a single-step iterative algorithm and showed that the sequence generated by this algorithm converges strongly to the minimum-norm solution of the ASEP (1). We also gave an algorithm for solving the ASFP in Remark 9.

Acknowledgments

The authors wish to thank the referees for their helpful comments and suggestions. This research was supported by NSFC Grant no. 11071279.