Abstract

We consider an implicit algorithm for solving the split fixed point and convex feasibility problems. A strong convergence theorem is obtained.

1. Introduction

Due to their broad applicability in many areas, especially in signal processing (e.g., phase retrieval) and image restoration, split feasibility problems continue to receive great attention; see, for example, [1-6]. The present paper is devoted to this topic. Recall that the split feasibility problem, originally introduced by Censor and Elfving [7], is to find $x^*$ such that
$$x^* \in C, \quad Ax^* \in Q, \tag{1}$$
where $C$ and $Q$ are two closed convex subsets of two Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. A special case of (1) arises when $Q = \{b\}$ is a singleton; then (1) reduces to the convexly constrained linear inverse problem
$$Ax = b, \quad x \in C, \tag{2}$$
which has received considerable attention. We can use the projected Landweber algorithm to solve (2). The projected Landweber algorithm generates a sequence $\{x_n\}$ in such a way that
$$x_{n+1} = P_C\big(x_n + \gamma A^T(b - Ax_n)\big), \tag{3}$$
where $P_C$ denotes the nearest point projection from $H_1$ onto $C$, $\gamma$ is a parameter such that $0 < \gamma < 2/\|A\|^2$, and $A^T$ is the transpose of $A$. When $C = H_1$, the system (2) reduces to the unconstrained linear system
$$Ax = b, \tag{4}$$
and the projected Landweber algorithm turns into the Landweber algorithm
$$x_{n+1} = x_n + \gamma A^T(b - Ax_n). \tag{5}$$
Note that (1) is equivalent to the fixed point equation
$$x^* = P_C\big(x^* - \gamma A^*(I - P_Q)Ax^*\big). \tag{6}$$
Using this relation, we can suggest the following iterative algorithm:
$$x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big), \tag{7}$$
which is referred to as the CQ algorithm and was devised by Byrne [8]. The CQ algorithm has been extensively studied; see, for instance, [9-11].
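Although the convergence theory is the point of this paper, the CQ iteration (7) is simple to implement. The following minimal Python sketch runs it with $C$ and $Q$ supplied only through projection oracles; the matrix A, the concrete sets, and the helper names proj_C and proj_Q are illustrative assumptions, not objects from the original formulation.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=500):
    """CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # any gamma in (0, 2 / ||A||^2) works
    x = x0.astype(float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Illustrative data: C = nonnegative orthant of R^3, Q = closed unit ball of R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)

x_star = cq_algorithm(A, proj_C, proj_Q, x0=np.ones(3))
print(x_star, np.linalg.norm(A @ x_star) <= 1.0 + 1e-6)  # Ax* should lie in Q
```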

The algorithm (7) is proved to converge weakly, but it fails, in general, to converge in norm in infinite-dimensional Hilbert spaces $H_1$ and $H_2$. Tikhonov's regularization method can remedy this. First, we define a convex function $f$ by
$$f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2, \tag{8}$$
with gradient
$$\nabla f(x) = A^*(I - P_Q)Ax, \tag{9}$$
and consider the minimization problem
$$\min_{x \in C} f(x). \tag{10}$$
It is known that $x^*$ solves (1) if and only if $x^*$ solves (10) with $f(x^*) = 0$. We know that (10) is ill-posed, so regularization is needed. We consider Tikhonov's regularization
$$f_\alpha(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2 + \frac{\alpha}{2}\|x\|^2, \tag{11}$$
where $\alpha > 0$ is the regularization parameter. The gradient of $f_\alpha$ is given by
$$\nabla f_\alpha(x) = A^*(I - P_Q)Ax + \alpha x. \tag{12}$$
Define the Picard iterates
$$x_{n+1}^\alpha = P_C(I - \gamma\nabla f_\alpha)x_n^\alpha. \tag{13}$$
Xu [12] proved that if (1) is solvable, then $x_n^\alpha \to x_\alpha$ as $n \to \infty$, where $x_\alpha$ is the unique minimizer of $f_\alpha$ over $C$, and that consequently the strong limit $\lim_{\alpha \to 0^+} x_\alpha$ exists and is the minimum-norm solution of (1). Note that (13) is a double-step iteration (an inner loop in $n$ for each fixed $\alpha$). Xu [12] also introduced a single-step regularized method:
$$x_{n+1} = P_C(I - \gamma_n\nabla f_{\alpha_n})x_n = P_C\big((1 - \alpha_n\gamma_n)x_n - \gamma_n A^*(I - P_Q)Ax_n\big). \tag{14}$$
It is shown that the sequence $\{x_n\}$ generated by (14) converges to the minimum-norm solution of (1) provided that the parameters $\{\alpha_n\}$ and $\{\gamma_n\}$ satisfy
$$\alpha_n \to 0, \quad 0 < \gamma_n \le \frac{\alpha_n}{(\|A\|^2 + \alpha_n)^2}, \quad \sum_{n=0}^{\infty}\alpha_n\gamma_n = \infty, \quad \frac{|\gamma_{n+1} - \gamma_n| + \gamma_{n+1}|\alpha_{n+1} - \alpha_n|}{(\alpha_{n+1}\gamma_{n+1})^2} \to 0. \tag{15}$$
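For concreteness, here is a sketch of the single-step regularized method (14). The parameter sequences (alpha_n = n^(-1/4) with gamma_n tied to alpha_n as in (15)) and the helper names are assumptions for illustration only; it can be called with the same A, proj_C, and proj_Q as in the previous sketch.

```python
import numpy as np

def regularized_method(A, proj_C, proj_Q, x0, n_iter=5000):
    """Single-step regularized iteration (14) with assumed parameter choices."""
    x = x0.astype(float)
    L = np.linalg.norm(A, 2) ** 2                    # ||A||^2
    for n in range(1, n_iter + 1):
        alpha = n ** (-0.25)                         # regularization parameter -> 0
        gamma = alpha / (L + alpha) ** 2             # step size as in (15)
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax)) + alpha * x   # gradient of f_alpha at x
        x = proj_C(x - gamma * grad)
    return x
```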

Inspired by (14), Ceng et al. [3] introduced the following relaxed extragradient method:
$$y_n = P_C\big(x_n - \lambda_n\nabla f_{\alpha_n}(x_n)\big), \qquad x_{n+1} = \beta_n x_n + \gamma_n y_n + \delta_n P_C\big(x_n - \lambda_n\nabla f_{\alpha_n}(y_n)\big), \tag{16}$$
where the sequences $\{\alpha_n\}$, $\{\lambda_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\delta_n\}$ satisfy the conditions
$$\beta_n + \gamma_n + \delta_n = 1 \tag{17}$$
for all $n$. Ceng et al. proved that the sequence generated by (16) converges to the solution of (1) which is the minimum-norm element. Recently, Ceng et al. [13] further introduced another regularized method for the split feasibility problem and the fixed point problem:
$$x_{n+1} = \beta_n x_n + (1 - \beta_n)SP_C\big(x_n - \lambda_n\nabla f_{\alpha_n}(x_n)\big), \tag{18}$$
where $S$ is a nonexpansive mapping. Ceng et al. proved that algorithm (18) has weak convergence.
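A Mann-type step in the spirit of the scheme (18) can be sketched similarly; the mapping S and all parameter choices below are assumptions for illustration and are not taken from [3] or [13].

```python
import numpy as np

def mann_regularized(A, proj_C, proj_Q, S, x0, n_iter=5000):
    """Mann-type regularized iteration in the spirit of (18)."""
    x = x0.astype(float)
    L = np.linalg.norm(A, 2) ** 2
    for n in range(1, n_iter + 1):
        alpha = n ** (-0.25)                          # assumed regularization sequence
        beta = 0.5                                    # assumed Mann coefficient
        lam = alpha / (L + alpha) ** 2                # assumed step size
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax)) + alpha * x    # gradient of f_alpha
        x = beta * x + (1.0 - beta) * S(proj_C(x - lam * grad))
    return x
```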

Motivated by the above works, in this paper our main purpose is to introduce an implicit algorithm for solving the split fixed point and convex feasibility problems. We show that the implicit algorithm converges strongly to a solution of these problems.

2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle\cdot, \cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$.

Definition 1. A mapping $T : C \to C$ is called nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$.

We will use $\mathrm{Fix}(T)$ to denote the set of fixed points of $T$; that is, $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$.

Definition 2. A mapping $f : C \to H$ is called contractive if $\|f(x) - f(y)\| \le \rho\|x - y\|$ for all $x, y \in C$ and for some constant $\rho \in [0, 1)$. In this case, we call $f$ a $\rho$-contraction.

Definition 3. A bounded linear operator $B : H \to H$ is called strongly positive if there exists a constant $\bar\gamma > 0$ such that $\langle Bx, x\rangle \ge \bar\gamma\|x\|^2$ for all $x \in H$.

Definition 4. We call $P_C : H \to C$ the metric projection if, for each $x \in H$, $P_C x$ is the unique point of $C$ satisfying
$$\|x - P_C x\| = \min\{\|x - y\| : y \in C\}.$$

It is well known that the metric projection $P_C$ is characterized by
$$\langle x - P_C x, y - P_C x\rangle \le 0$$
for all $x \in H$, $y \in C$. From this, we can deduce that $P_C$ is firmly nonexpansive; that is,
$$\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y\rangle$$
for all $x, y \in H$. Hence $P_C$ is also nonexpansive.
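Both the characterization and the firm nonexpansivity above are easy to sanity-check numerically. The small sketch below verifies the two inequalities on random points, taking $C$ to be the closed unit ball as an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
proj_C = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)

for _ in range(1000):
    x = 3.0 * rng.normal(size=3)
    z = 3.0 * rng.normal(size=3)
    y = proj_C(3.0 * rng.normal(size=3))      # an arbitrary point of C
    px, pz = proj_C(x), proj_C(z)
    # characterization: <x - P_C x, y - P_C x> <= 0 for all y in C
    assert np.dot(x - px, y - px) <= 1e-9
    # firm nonexpansivity: ||P_C x - P_C z||^2 <= <P_C x - P_C z, x - z>
    assert np.dot(px - pz, px - pz) <= np.dot(px - pz, x - z) + 1e-9
print("both inequalities verified on random samples")
```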

Lemma 5 (see [14]). Let $C$ be a closed convex subset of a real Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping. Then the mapping $I - T$ is demiclosed. That is, if $\{x_n\}$ is a sequence in $C$ such that $x_n \to x$ weakly and $(I - T)x_n \to y$ strongly, then $(I - T)x = y$.

3. Main Result

In this section, we first introduce our algorithm for solving the split fixed point and convex feasibility problems, and then we give its convergence analysis.

Let $H_1$ and $H_2$ be two real Hilbert spaces, and let $C \subset H_1$ and $Q \subset H_2$ be two nonempty closed convex sets. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. Let $B$ be a strongly positive bounded linear operator on $H_1$ with coefficient $\bar\gamma > 0$. Let $f : C \to H_1$ be a $\rho$-contraction. Let $S : C \to C$ and $T : Q \to Q$ be two nonexpansive mappings.

In the sequel, our objective is to
$$\text{find } x^* \in \mathrm{Fix}(S) \cap C \text{ such that } Ax^* \in \mathrm{Fix}(T) \cap Q. \tag{25}$$
We use $\Gamma$ to denote the solution set of (25); that is,
$$\Gamma = \{x^* \in \mathrm{Fix}(S) \cap C : Ax^* \in \mathrm{Fix}(T) \cap Q\}. \tag{26}$$

Now, we introduce the following implicit algorithm.

Algorithm 6. For each $t \in (0, \min\{1, \|B\|^{-1}\})$, define a net $\{x_t\}$ implicitly by
$$x_t = P_C\big[t\gamma f(x_t) + (I - tB)SP_C\big(x_t - \delta A^*(I - TP_Q)Ax_t\big)\big], \tag{27}$$
where $\gamma \in (0, \bar\gamma/\rho)$ and $\delta \in (0, 1/\|A\|^2)$ are two constants.

Remark 7. The net $\{x_t\}$ is well-defined. Indeed, define a mapping $W : C \to C$ as
$$Wx := P_C\big(x - \delta A^*(I - TP_Q)Ax\big).$$
Since $TP_Q$ is nonexpansive, $I - TP_Q$ is $\frac{1}{2}$-inverse strongly monotone. Then, for all $x, y \in C$, we have
$$\|Wx - Wy\|^2 \le \|x - y\|^2 - \delta(1 - \delta\|A\|^2)\|(I - TP_Q)Ax - (I - TP_Q)Ay\|^2 \le \|x - y\|^2.$$
This indicates that $W$ is nonexpansive. Consequently, for fixed $t$, we have that the mapping $x \mapsto P_C[t\gamma f(x) + (I - tB)SWx]$ is contractive with coefficient $1 - t(\bar\gamma - \gamma\rho) < 1$, due to the facts that $f$ is a $\rho$-contraction, $SW$ is nonexpansive, and $\|I - tB\| \le 1 - t\bar\gamma$. Therefore, by the Banach contraction principle, $x_t$ is well-defined as the unique fixed point of this mapping.
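Remark 7 also yields a practical way to evaluate the net: for each fixed $t$, the defining mapping is a contraction, so $x_t$ can be approximated by an inner Picard iteration. In the sketch below, B, f, S, and the composition $TP_Q$ are assumed to be supplied by the caller; the values of delta and gamma are illustrative choices within the ranges required above.

```python
import numpy as np

def implicit_net_point(t, A, B, f, S, proj_C, TQ, x0, inner=500):
    """Approximate x_t in (27) by Picard iteration of the contraction of Remark 7."""
    delta = 0.9 / np.linalg.norm(A, 2) ** 2   # delta in (0, 1 / ||A||^2)
    gamma = 1.0                               # assumed to satisfy gamma * rho < gamma_bar
    x = x0.astype(float)
    for _ in range(inner):
        Ax = A @ x
        y = proj_C(x - delta * (A.T @ (Ax - TQ(Ax))))     # y = W x
        Sy = S(y)
        x = proj_C(t * gamma * f(x) + Sy - t * (B @ Sy))  # x = P_C[t*gamma*f(x) + (I - tB)Sy]
    return x
```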

Next, we prove the convergence of (27).

Theorem 8. Suppose that $\Gamma \neq \emptyset$. Then the net $\{x_t\}$ generated by algorithm (27) converges strongly, as $t \to 0^+$, to a point $x^* \in \Gamma$ which solves the following variational inequality:
$$\langle(B - \gamma f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Gamma. \tag{30}$$

Proof. Set $V := TP_Q$, $Wx := P_C(x - \delta A^*(I - V)Ax)$, and $y_t := Wx_t$ for all $t$. Then (27) reads $x_t = P_C[t\gamma f(x_t) + (I - tB)Sy_t]$. Since $B - \gamma f$ is strongly monotone with constant $\bar\gamma - \gamma\rho > 0$, it is clear that the solution of (30) is unique. Let $z \in \Gamma$. Then we have $z = P_C z$, $Sz = z$, and $V(Az) = Az$; hence $Wz = z$. First, we easily deduce the following three inequalities:
$$\|I - tB\| \le 1 - t\bar\gamma, \quad \|\gamma f(x_t) - Bz\| \le \gamma\rho\|x_t - z\| + \|\gamma f(z) - Bz\|, \quad \|Sy_t - z\| \le \|y_t - z\|.$$
From (27), we have
$$\|x_t - z\| \le \|t\gamma f(x_t) + (I - tB)Sy_t - z\| \le t\|\gamma f(x_t) - Bz\| + (1 - t\bar\gamma)\|y_t - z\|.$$
Note that
$$\|y_t - z\|^2 \le \|x_t - z\|^2 - 2\delta\langle x_t - z, A^*(I - V)Ax_t\rangle + \delta^2\|A^*(I - V)Ax_t\|^2.$$
Since $A$ is a linear operator and $A^*$ is the adjoint of $A$, we get
$$\langle x_t - z, A^*(I - V)Ax_t\rangle = \langle Ax_t - Az, (I - V)Ax_t\rangle.$$
At the same time, we know that $I - V$ is $\frac{1}{2}$-inverse strongly monotone and $(I - V)Az = 0$; hence
$$\langle Ax_t - Az, (I - V)Ax_t\rangle \ge \frac{1}{2}\|(I - V)Ax_t\|^2, \qquad \|A^*(I - V)Ax_t\|^2 \le \|A\|^2\|(I - V)Ax_t\|^2.$$
Combining the last three estimates, we get
$$\|y_t - z\|^2 \le \|x_t - z\|^2 - \delta(1 - \delta\|A\|^2)\|(I - V)Ax_t\|^2 \le \|x_t - z\|^2.$$
Thus,
$$\|x_t - z\| \le t\gamma\rho\|x_t - z\| + t\|\gamma f(z) - Bz\| + (1 - t\bar\gamma)\|x_t - z\|.$$
So,
$$\|x_t - z\| \le \frac{\|\gamma f(z) - Bz\|}{\bar\gamma - \gamma\rho}.$$
The boundedness of the net $\{x_t\}$ follows, and consequently $\{y_t\}$, $\{Sy_t\}$, and $\{f(x_t)\}$ are bounded as well.
Since $Wz = z$, using the firmly nonexpansive property of $P_C$, we have
$$\|y_t - z\|^2 = \|P_C(x_t - \delta A^*(I - V)Ax_t) - P_C z\|^2 \le \langle x_t - \delta A^*(I - V)Ax_t - z, y_t - z\rangle,$$
which, after expanding, yields
$$\|y_t - z\|^2 \le \|x_t - z\|^2 - \|x_t - y_t\|^2 - 2\delta\langle y_t - z, A^*(I - V)Ax_t\rangle \le \|x_t - z\|^2 - \|x_t - y_t\|^2 + 2\delta\|A\|\|y_t - z\|\|(I - V)Ax_t\|.$$
From (27), we derive that
$$\|x_t - z\|^2 \le \big(t\|\gamma f(x_t) - Bz\| + (1 - t\bar\gamma)\|y_t - z\|\big)^2 \le \|y_t - z\|^2 + tM,$$
where $M > 0$ is a constant (here we use the boundedness of the net). This together with the estimate $\|y_t - z\|^2 \le \|x_t - z\|^2 - \delta(1 - \delta\|A\|^2)\|(I - V)Ax_t\|^2$ implies that
$$\delta(1 - \delta\|A\|^2)\|(I - V)Ax_t\|^2 \le tM.$$
It follows that
$$\lim_{t \to 0^+}\|(I - V)Ax_t\| = 0.$$
Returning to the firm nonexpansivity estimate above, we have
$$\|x_t - z\|^2 \le \|x_t - z\|^2 - \|x_t - y_t\|^2 + 2\delta\|A\|\|y_t - z\|\|(I - V)Ax_t\| + tM,$$
which implies that
$$\|x_t - y_t\|^2 \le 2\delta\|A\|\|y_t - z\|\|(I - V)Ax_t\| + tM.$$
So,
$$\lim_{t \to 0^+}\|x_t - y_t\| = 0.$$
Note that $Sy_t \in C$, and hence
$$\|x_t - Sy_t\| = \|P_C[t\gamma f(x_t) + (I - tB)Sy_t] - P_C Sy_t\| \le t\|\gamma f(x_t) - BSy_t\| \to 0.$$
It follows that
$$\|x_t - Sx_t\| \le \|x_t - Sy_t\| + \|Sy_t - Sx_t\| \le \|x_t - Sy_t\| + \|y_t - x_t\| \to 0 \quad (t \to 0^+).$$
Next we show that the net $\{x_t\}$ is relatively norm-compact as $t \to 0^+$. Assume that $\{t_n\} \subset (0, 1)$ is such that $t_n \to 0^+$ as $n \to \infty$. Put $x_n := x_{t_n}$ and $y_n := y_{t_n}$.
From the characterization of the metric projection and (27), we have
$$\|x_t - z\|^2 \le \langle t\gamma f(x_t) + (I - tB)Sy_t - z, x_t - z\rangle = t\langle\gamma f(x_t) - Bz, x_t - z\rangle + \langle(I - tB)(Sy_t - z), x_t - z\rangle.$$
Since $\langle\gamma f(x_t) - Bz, x_t - z\rangle \le \gamma\rho\|x_t - z\|^2 + \langle\gamma f(z) - Bz, x_t - z\rangle$ and $\langle(I - tB)(Sy_t - z), x_t - z\rangle \le (1 - t\bar\gamma)\|x_t - z\|^2$, it follows that
$$\|x_t - z\|^2 \le \frac{1}{\bar\gamma - \gamma\rho}\langle\gamma f(z) - Bz, x_t - z\rangle, \quad z \in \Gamma. \tag{31}$$
In particular, we have
$$\|x_n - z\|^2 \le \frac{1}{\bar\gamma - \gamma\rho}\langle\gamma f(z) - Bz, x_n - z\rangle, \quad z \in \Gamma.$$
Since $\{x_n\}$ is bounded, without loss of generality, we may assume that $\{x_n\}$ converges weakly to a point $\tilde x \in C$. We deduce from the above results that $\|x_n - Sx_n\| \to 0$ and $\|(I - V)Ax_n\| \to 0$. By the demiclosedness principle for the nonexpansive mappings $S$ and $V$ (see Lemma 5), we deduce $\tilde x \in \mathrm{Fix}(S)$ and $A\tilde x \in \mathrm{Fix}(V)$. Note that every $u$ with $u = TP_Q u$ lies in $Q$ (because $T$ maps $Q$ into $Q$) and therefore satisfies $u = Tu$; hence $\mathrm{Fix}(V) = \mathrm{Fix}(T) \cap Q$. To this end, we deduce $\tilde x \in \mathrm{Fix}(S) \cap C$ and $A\tilde x \in \mathrm{Fix}(T) \cap Q$. So, $\tilde x \in \Gamma$. We substitute $\tilde x$ for $z$ in (31) to obtain
$$\|x_n - \tilde x\|^2 \le \frac{1}{\bar\gamma - \gamma\rho}\langle\gamma f(\tilde x) - B\tilde x, x_n - \tilde x\rangle.$$
Since $\{x_n\}$ weakly converges to $\tilde x$, we deduce that $x_n \to \tilde x$ strongly. Therefore, the net $\{x_t\}$ is relatively norm-compact as $t \to 0^+$.
In (31), we take the limit as $n \to \infty$ (along $x_n \to \tilde x$) to deduce
$$\|\tilde x - z\|^2 \le \frac{1}{\bar\gamma - \gamma\rho}\langle\gamma f(z) - Bz, \tilde x - z\rangle, \quad \forall z \in \Gamma.$$
Hence, $\tilde x$ solves the variational inequality
$$\langle(B - \gamma f)z, z - \tilde x\rangle \ge 0, \quad \forall z \in \Gamma,$$
which, by the Minty lemma (note that $B - \gamma f$ is strongly monotone and Lipschitz continuous and that $\Gamma$ is closed and convex), is equivalent to its dual variational inequality
$$\langle(B - \gamma f)\tilde x, z - \tilde x\rangle \ge 0, \quad \forall z \in \Gamma.$$
Therefore, $\tilde x = x^*$. That is, $x^*$ is the unique fixed point in $\Gamma$ of the contraction $P_\Gamma(I - B + \gamma f)$, equivalently, the unique solution of (30). Clearly, this is sufficient to deduce that the entire net $\{x_t\}$ converges strongly to $x^*$ as $t \to 0^+$. The proof is completed.
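Theorem 8 suggests a simple numerical experiment: compute $x_t$ along a decreasing sequence of $t$ and watch the net stabilize. The hypothetical driver below reuses implicit_net_point from the sketch after Remark 7 with toy data; all concrete choices are illustrative assumptions.

```python
import numpy as np

# Reuses implicit_net_point from the sketch after Remark 7.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.eye(2)                              # strongly positive with gamma_bar = 1
f = lambda x: 0.5 * x                      # a rho-contraction with rho = 1/2
S = lambda x: x                            # the identity is nonexpansive
proj_C = lambda x: np.maximum(x, 0.0)      # C = nonnegative orthant
TQ = lambda y: np.clip(y, -1.0, 1.0)       # T P_Q with Q a box and T = identity

x = np.ones(2)
for t in [0.5, 0.1, 0.02, 0.004]:
    x = implicit_net_point(t, A, B, f, S, proj_C, TQ, x)
    print(t, x)                            # x_t stabilizes as t decreases
```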

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This study was supported by research funds from Dong-A University.