Abstract

We introduce an implicit and an explicit iterative method, based on the hybrid steepest descent method, for finding a common element of the set of solutions of a constrained convex minimization problem and the set of solutions of a split variational inclusion problem.

1. Introduction

Fixed-point optimization methods are very popular methods for solving nonlinear problems such as variational inequality problems, optimization problems, inverse problems, and equilibrium problems. The convex feasibility problem (CFP) is used for modeling inverse problems which arise in phase retrieval and in intensity-modulated radiation therapy. Moreover, a well-known special case of the CFP is the split feasibility problem (SFP).

Let $H_1$ and $H_2$ be two real Hilbert spaces with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Let $C$ and $Q$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Now, we recall that the split feasibility problem (SFP) is to find a point $x^*$ with the following property:
$x^* \in C$, $Ax^* \in Q$, (1)
where $A \in B(H_1, H_2)$ and $B(H_1, H_2)$ denotes the family of all bounded linear operators from $H_1$ to $H_2$. In 1994, the SFP was first introduced by Censor and Elfving [1], in finite-dimensional Hilbert spaces, for modeling inverse problems which arise from phase retrieval and in medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, for example, [2] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy; see, for example, [3] and the references therein.

A special case of the SFP is the following convex constrained linear inverse problem [4] of finding an element $x$ such that
$x \in C$, $Ax = b$, (3)
where $b \in H_2$ is given.

Recall that a mapping $T : H_1 \to H_1$ is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H_1$. Further, we consider the following fixed point problem (FPP) for a nonexpansive mapping $T$: find $x \in C$ such that
$x = Tx$. (4)
The solution set of FPP (4) is denoted by $\mathrm{Fix}(T)$. It is well known that if $\mathrm{Fix}(T) \neq \emptyset$, then $\mathrm{Fix}(T)$ is closed and convex. A mapping $T$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping $S$; that is,
$T = (1 - \alpha)I + \alpha S$, (5)
where $\alpha$ is a number in $(0, 1)$. More precisely, we say that $T$ is $\alpha$-averaged. It is known that the projection $P_C$ is $\frac{1}{2}$-averaged. Consider the following constrained convex minimization problem:
$\min_{x \in C} f(x)$, (6)
where $C$ is a closed and convex subset of a Hilbert space $H_1$ and $f : C \to \mathbb{R}$ is a real-valued convex function. If $f$ is Fréchet differentiable, then the gradient-projection algorithm (GPA) generates a sequence $\{x_n\}$ according to the following recursive formula:
$x_{n+1} = P_C(x_n - \lambda \nabla f(x_n))$, $n \ge 0$, (7)
or, more generally,
$x_{n+1} = P_C(x_n - \lambda_n \nabla f(x_n))$, $n \ge 0$, (8)
where, in both (7) and (8), the initial guess $x_0$ is taken from $C$ arbitrarily and the parameters $\lambda$ or $\lambda_n$ are positive real numbers satisfying certain conditions. The convergence of algorithms (7) and (8) depends on the behavior of the gradient $\nabla f$. It is known that if $\nabla f$ is $\eta$-strongly monotone and $L$-Lipschitzian with constants $\eta, L > 0$ such that
$0 < \lambda < \frac{2\eta}{L^2}$, (9)
then the operator
$W := P_C(I - \lambda \nabla f)$ (10)
is a contraction; hence, the sequence $\{x_n\}$ defined by the GPA (7) converges in norm to the unique solution of (6). More generally, if the sequence $\{\lambda_n\}$ is chosen to satisfy the property
$0 < \liminf_{n \to \infty} \lambda_n \le \limsup_{n \to \infty} \lambda_n < \frac{2\eta}{L^2}$, (11)
then the sequence $\{x_n\}$ defined by the GPA (8) converges in norm to the unique minimizer of (6).
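For concreteness, the following is a minimal numerical sketch of the GPA (7); the quadratic objective, the box constraint, and all numerical choices are illustrative assumptions, not part of the original text.

```python
import numpy as np

# Illustrative instance: f(x) = 0.5*||Mx - b||^2, so grad f(x) = M^T(Mx - b),
# which is L-Lipschitzian with L = ||M^T M||. C is a box, so P_C is a clip.
M = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

def grad_f(x):
    return M.T @ (M @ x - b)

def P_C(x, lo=-1.0, hi=1.0):
    # Metric projection onto the box C = [lo, hi]^2.
    return np.clip(x, lo, hi)

L = np.linalg.norm(M.T @ M, 2)    # Lipschitz constant of grad f
lam = 1.0 / L                     # fixed step size in (0, 2/L)

x = np.zeros(2)                   # initial guess x_0 taken from C
for n in range(200):
    x = P_C(x - lam * grad_f(x))  # GPA step (7)
print(x)                          # approaches the minimizer (0.2, 0.6)
```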

However, if the gradient $\nabla f$ fails to be strongly monotone, the operator $W$ defined in (10) may fail to be contractive; consequently, the sequence $\{x_n\}$ generated by algorithm (7) may fail to converge strongly [5]. If $\nabla f$ is Lipschitzian, then algorithms (7) and (8) can still converge in the weak topology under certain conditions [6, 7].

In 2011, Xu [5] gave an alternative operator-oriented approach to algorithm (8), namely, an averaged mapping approach. He applied his averaged mapping approach to the GPA (8) and the relaxed GPA. Moreover, he constructed a counterexample which shows that algorithm (7) does not converge in norm in an infinite-dimensional space and also presented two modifications of the GPA which are shown to have strong convergence.

Given a mapping $F : C \to H_1$, the classical variational inequality problem (VIP) is to find $x \in C$ such that
$\langle Fx, y - x \rangle \ge 0$ for all $y \in C$. (12)
The solution set of VIP (12) is denoted by $VI(C, F)$. It is well known that $x \in VI(C, F)$ if and only if $x = P_C(x - \lambda Fx)$ for some $\lambda > 0$. The variational inequality was first discussed by Lions [8] and is now well known. Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving, and equilibrium problems arising in several branches of pure and applied sciences in a unified and general framework.

Yamada [9] introduced the hybrid steepest descent method as follows:
$x_{n+1} = Tx_n - \mu \alpha_n F(Tx_n)$, (13)
where $x_0 \in H_1$, $\{\alpha_n\} \subset (0, 1)$, $F$ is a strongly monotone and Lipschitz continuous mapping, and $\mu$ is a positive real number. He considered the variational inequality problem over the set of common fixed points of a finite family of nonexpansive mappings and proved strong convergence of the sequence generated by the method. Later, Tian [10] considered the following iterative method for a nonexpansive mapping $T$ with $\mathrm{Fix}(T) \neq \emptyset$:
$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu \alpha_n F)Tx_n$, (14)
where $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator. He proved that the sequence $\{x_n\}$ generated by (14) converges to a fixed point $\tilde{x}$ of $T$, which is the unique solution of the variational inequality
$\langle (\gamma f - \mu F)\tilde{x}, x - \tilde{x} \rangle \le 0$ for all $x \in \mathrm{Fix}(T)$. (15)
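A minimal sketch of Tian's scheme (14) on a toy instance follows; the choices of $T$, $f$, $F$, and all constants are assumptions made purely for illustration.

```python
import numpy as np

# Toy instance of Tian's scheme (14): T nonexpansive, f a contraction,
# F = I (kappa = eta = 1); everything here is an illustrative stand-in.
def T(x):
    return 0.5 * (x + np.array([1.0, 0.0]))   # nonexpansive, Fix(T) = {(1, 0)}

def f(x):
    return 0.1 * x                            # contraction with coefficient 0.1

def F(x):
    return x                                  # 1-Lipschitzian, 1-strongly monotone

mu, gamma = 1.0, 0.5                          # chosen so gamma*0.1 < tau = mu*(eta - mu*kappa^2/2)
x = np.array([5.0, -3.0])
for n in range(1, 500):
    a = 1.0 / n                               # alpha_n -> 0, sum alpha_n = infinity
    x = a * gamma * f(x) + (T(x) - mu * a * F(T(x)))   # step (14)
print(x)                                      # approaches the fixed point (1, 0)
```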

Recently, Moudafi [11] introduced the following split monotone variational inclusion problem (SMVIP): find $x^* \in H_1$ such that
$0 \in f_1(x^*) + B_1(x^*)$ (16)
and such that $y^* = Ax^* \in H_2$ solves
$0 \in f_2(y^*) + B_2(y^*)$, (17)
where $f_1 : H_1 \to H_1$ and $f_2 : H_2 \to H_2$ are given single-valued mappings and $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ are multivalued maximal monotone mappings.

Moudafi [11] introduced an iterative method for solving SMVIP (16)-(17), which can be seen as an important generalization of an iterative method given by Censor et al. [12] for the split variational inequality problem. As Moudafi noted in [11], SMVIP (16)-(17) includes as special cases the split common fixed point problem, the split variational inequality problem, the split zero problem, and the split feasibility problem [1, 3, 11, 12], which have already been studied and used in practice as models in intensity-modulated radiation therapy treatment planning; see [1, 3]. This formalism is also at the core of the modeling of many inverse problems arising in phase retrieval and other real-world problems, for instance, in sensor networks, in computerized tomography, and in data compression; see, for example, [2, 13].

If $f_1 \equiv 0$ and $f_2 \equiv 0$, then SMVIP (16)-(17) reduces to the following split variational inclusion problem (SVIP): find $x^* \in H_1$ such that
$0 \in B_1(x^*)$ (18)
and such that $y^* = Ax^* \in H_2$ solves
$0 \in B_2(y^*)$. (19)
When looked at separately, (18) is a variational inclusion problem and we denote its solution set by SOLVIP($B_1$). The SVIP (18)-(19) constitutes a pair of variational inclusion problems which have to be solved so that the image $y^* = Ax^*$, under the given bounded linear operator $A$, of the solution $x^*$ of (18) in $H_1$ is the solution of (19) in the other space $H_2$; we denote the solution set of (19) by SOLVIP($B_2$). The solution set of SVIP (18)-(19) is denoted by $\Gamma = \{x^* \in H_1 : x^* \in \mathrm{SOLVIP}(B_1) \text{ and } Ax^* \in \mathrm{SOLVIP}(B_2)\}$.

Very recently, Byrne et al. [14] studied the weak and strong convergence of the following iterative method for SVIP (18)-(19): for given $x_0 \in H_1$, compute the iterative sequence $\{x_n\}$ generated by the following iterative scheme:
$x_{n+1} = J_\lambda^{B_1}(x_n + \delta A^*(J_\lambda^{B_2} - I)Ax_n)$ (20)
for $\lambda > 0$. In 2013, Kazmi and Rizvi [15] studied the strong convergence of the following iterative method:
$u_n = J_\lambda^{B_1}(x_n + \delta A^*(J_\lambda^{B_2} - I)Ax_n)$, $x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)Su_n$, (21)
where $f$ is a contraction, $S$ is a nonexpansive mapping, $\delta \in (0, 1/L_A)$, $L_A$ is the spectral radius of the operator $A^*A$, and $A^*$ is the adjoint of $A$. They proved that the sequence generated by (21) converges strongly to a common element of the set of fixed points of the nonexpansive mapping $S$ and the solution set of SVIP (18)-(19).
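To make the structure of step (20) concrete: when $B_1$ and $B_2$ are the normal cones of closed convex sets, the resolvents reduce to metric projections and the SVIP reduces to the SFP. The sketch below uses that reduction; every concrete choice in it is an illustrative assumption.

```python
import numpy as np

# When B_1, B_2 are normal cones of closed convex sets C and Q, the resolvents
# J^{B_1}, J^{B_2} are the projections P_C, P_Q, and the SVIP becomes the SFP.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
A_star = A.T                                  # adjoint of A

def J_B1(x):                                  # resolvent of B_1: here P_C, C = nonnegative orthant
    return np.maximum(x, 0.0)

def J_B2(y):                                  # resolvent of B_2: here P_Q, Q = [0, 1]^2
    return np.clip(y, 0.0, 1.0)

L = np.linalg.norm(A_star @ A, 2)             # spectral radius of A*A
delta = 0.9 / L                               # step size in (0, 1/L)

x = np.array([2.0, -1.0])
for n in range(500):
    # Byrne et al. step (20): x_{n+1} = J_B1(x_n + delta*A*(J_B2 - I)Ax_n)
    x = J_B1(x + delta * A_star @ (J_B2(A @ x) - A @ x))
print(x, A @ x)                               # x in C with Ax in Q when the problem is consistent
```

Scheme (21) then adds a viscosity term $\alpha_n f(x_n)$ and a nonexpansive mapping $S$ on top of this basic step.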

In this paper, motivated by the work of Xu [5], Yamada [9], Tian [10], Byrne et al. [14], and Kazmi and Rizvi [15], we prove strong convergence theorems for finding a common element of the set of solutions of the constrained convex minimization problem (6) and the set of solutions of the split variational inclusion problem (18)-(19).

2. Preliminaries

Throughout this paper, we always write $x_n \rightharpoonup x$ for weak convergence and $x_n \to x$ for strong convergence. We need some facts and tools in a real Hilbert space $H_1$, which are listed below. For any $x \in H_1$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that
$\|x - P_C x\| \le \|x - y\|$ for all $y \in C$. (22)
The mapping $P_C$ is called the metric projection of $H_1$ onto $C$. We know that $P_C$ is a nonexpansive mapping from $H_1$ onto $C$. It is also known that $P_C$ satisfies
$\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H_1$. (23)

Moreover, $P_C x$ is characterized by the fact that $P_C x \in C$ and
$\langle x - P_C x, y - P_C x \rangle \le 0$, (24)
$\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2$ (25)
for all $x \in H_1$ and $y \in C$.
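As a concrete illustration of $P_C$ and its variational characterization (24), the projection onto a closed ball has the closed form below; the ball and the numerical check are illustrative assumptions.

```python
import numpy as np

def P_ball(x, center, r):
    """Metric projection onto the closed ball C = {y : ||y - center|| <= r}."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= r else center + (r / dist) * d

c, r = np.zeros(3), 1.0
x = np.array([3.0, 0.0, 4.0])               # a point outside C
px = P_ball(x, c, r)

# Characterization (24): <x - P_C x, y - P_C x> <= 0 for every y in C.
y = np.array([0.3, -0.2, 0.1])              # some point of C
print(px, np.dot(x - px, y - px) <= 1e-12)  # prints the projection and True
```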

It is known that every nonexpansive operator $T : H_1 \to H_1$ satisfies, for all $(x, y) \in H_1 \times H_1$, the inequality
$\langle (x - Tx) - (y - Ty), Ty - Tx \rangle \le \frac{1}{2}\|(Tx - x) - (Ty - y)\|^2$,
and therefore we get, for all $(x, y) \in H_1 \times \mathrm{Fix}(T)$,
$\langle x - Tx, y - Tx \rangle \le \frac{1}{2}\|Tx - x\|^2$ (26)
(see, e.g., Theorem 3 in [16] and Theorem 1 in [17]).

Lemma 1. Let $H_1$ be a real Hilbert space. There hold the following identities: (i) $\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$ for all $x, y \in H_1$; (ii) $\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2$ for all $t \in [0, 1]$ and $x, y \in H_1$.

Lemma 2 (see [7]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n$, $n \ge 0$,
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that (i) $\sum_{n=1}^{\infty} \gamma_n = \infty$; (ii) $\limsup_{n \to \infty} \delta_n/\gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$. Then $\lim_{n \to \infty} a_n = 0$.

Lemma 3 (see [18]). Let $F$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator on a Hilbert space $H_1$ with $0 < \eta \le \kappa$, $0 < t < 1$, and $0 < \mu < 2\eta/\kappa^2$. Then $S := I - t\mu F$ is a contraction with contraction coefficient $1 - t\tau$, where $\tau = \mu(\eta - \mu\kappa^2/2)$.

Lemma 4. A nonlinear mapping $T$ whose domain is $D(T) \subseteq H_1$ and whose range is $R(T) \subseteq H_1$ is said to be (i) monotone if $\langle x - y, Tx - Ty \rangle \ge 0$ for all $x, y \in D(T)$; (ii) $\beta$-strongly monotone if there exists a constant $\beta > 0$ such that $\langle x - y, Tx - Ty \rangle \ge \beta\|x - y\|^2$ for all $x, y \in D(T)$; (iii) $\nu$-inverse strongly monotone (or $\nu$-ism) if there exists a constant $\nu > 0$ such that $\langle x - y, Tx - Ty \rangle \ge \nu\|Tx - Ty\|^2$ for all $x, y \in D(T)$; (iv) firmly nonexpansive if $\langle x - y, Tx - Ty \rangle \ge \|Tx - Ty\|^2$ for all $x, y \in D(T)$.

A multivalued mapping $M : H_1 \to 2^{H_1}$ is called monotone if, for all $x, y \in H_1$, $u \in Mx$, and $v \in My$,
$\langle x - y, u - v \rangle \ge 0$.

A monotone mapping $M$ is maximal if Graph($M$) is not properly contained in the graph of any other monotone mapping.

It is known that a monotone mapping $M$ is maximal if and only if, for $(x, u) \in H_1 \times H_1$, $\langle x - y, u - v \rangle \ge 0$ for every $(y, v) \in$ Graph($M$) implies that $u \in Mx$.

Let $M : H_1 \to 2^{H_1}$ be a multivalued maximal monotone mapping. Then the resolvent mapping $J_\lambda^M : H_1 \to H_1$ associated with $M$ is defined by
$J_\lambda^M(x) := (I + \lambda M)^{-1}(x)$ for all $x \in H_1$,
for some $\lambda > 0$, where $I$ stands for the identity operator on $H_1$.

We note that, for all $\lambda > 0$, the resolvent operator $J_\lambda^M$ is single-valued, nonexpansive, and firmly nonexpansive.
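For intuition, when $M = \partial g$ is the subdifferential of a proper convex function $g$, the resolvent $J_\lambda^M$ coincides with the proximal mapping of $\lambda g$; for $g = \|\cdot\|_1$ this gives soft-thresholding. The example below, including the numerical check of firm nonexpansivity, is an illustrative assumption, not part of the paper.

```python
import numpy as np

def resolvent_l1(x, lam):
    """Resolvent J_lam^M of M = subdifferential of the l1-norm:
    J_lam^M(x) = argmin_y (lam*||y||_1 + 0.5*||y - x||^2), i.e. soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.5, 1.2])
z = resolvent_l1(x, 1.0)
print(z)                                      # [2. -0.  0.2]

# Firm nonexpansivity: ||Jx - Jy||^2 <= <x - y, Jx - Jy> for all x, y.
y = np.array([-1.0, 2.0, 0.3])
w = resolvent_l1(y, 1.0)
print(np.dot(z - w, z - w) <= np.dot(x - y, z - w) + 1e-12)   # True
```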

Lemma 5 (see [15]). The SVIP (18)-(19) is equivalent to finding $x^* \in H_1$ such that $x^* = J_\lambda^{B_1}(x^*)$ and $Ax^* = J_\lambda^{B_2}(Ax^*)$ for some $\lambda > 0$.

Lemma 6 (see [19]). Let $h : H_1 \to H_1$ be a $\rho$-Lipschitz mapping with coefficient $\rho > 0$ and let $D$ be a strongly positive bounded linear operator with coefficient $\bar{\gamma} > 0$. Then, for $0 < \gamma < \bar{\gamma}/\rho$,
$\langle x - y, (D - \gamma h)x - (D - \gamma h)y \rangle \ge (\bar{\gamma} - \gamma\rho)\|x - y\|^2$ for all $x, y \in H_1$.
That is, $D - \gamma h$ is strongly monotone with coefficient $\bar{\gamma} - \gamma\rho$.

Proposition 7 (see [20]). We have the following assertions: (i) $T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism; (ii) if $T$ is $\nu$-ism and $\gamma > 0$, then $\gamma T$ is $(\nu/\gamma)$-ism; (iii) $T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > 1/2$; indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.

Proposition 8 (see [20, 21]). We have the following assertions: (i) if $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, $S$ is averaged, and $V$ is nonexpansive, then $T$ is averaged; (ii) $T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive; (iii) if $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, $S$ is firmly nonexpansive, and $V$ is nonexpansive, then $T$ is averaged; (iv) the composite of finitely many averaged mappings is averaged; that is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.

Lemma 9 (see [18]). Let $H_1$ be a Hilbert space, $C$ a nonempty closed convex subset of $H_1$, and $T : C \to C$ a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$.

3. Main Results

Throughout the rest of this paper, we always assume that $h : H_1 \to H_1$ is a $\rho$-Lipschitzian mapping with coefficient $\rho > 0$ and $D$ is a strongly positive bounded linear operator with coefficient $\bar{\gamma} > 0$. Then we obtain that $D$ is $\|D\|$-Lipschitzian and $\bar{\gamma}$-strongly monotone. Let $f : C \to \mathbb{R}$ be a real-valued convex function and assume that $\nabla f$ is an $L$-Lipschitzian mapping with $L > 0$.

Note that $\nabla f$ is $L$-Lipschitzian; this implies that $\nabla f$ is $\frac{1}{L}$-ism, which then implies that $\lambda \nabla f$ is $\frac{1}{\lambda L}$-ism. So, by Proposition 7, its complement $I - \lambda \nabla f$ is $\frac{\lambda L}{2}$-averaged. Since the projection $P_C$ is $\frac{1}{2}$-averaged, we obtain from Proposition 8 that the composition $P_C(I - \lambda \nabla f)$ is $\frac{2 + \lambda L}{4}$-averaged for $0 < \lambda < 2/L$. Hence we have that, for each $n$, $P_C(I - \lambda_n \nabla f)$ is $\frac{2 + \lambda_n L}{4}$-averaged. Therefore, we can write
$P_C(I - \lambda_n \nabla f) = \frac{2 - \lambda_n L}{4} I + \frac{2 + \lambda_n L}{4} T_n$,
where $T_n$ is nonexpansive.
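In detail, with $\alpha_1 = \frac{1}{2}$ (for $P_C$) and $\alpha_2 = \frac{\lambda L}{2}$ (for $I - \lambda \nabla f$), Proposition 8(iv) gives
\[
\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2 = \frac{1}{2} + \frac{\lambda L}{2} - \frac{\lambda L}{4} = \frac{2 + \lambda L}{4}, \qquad 0 < \lambda < \frac{2}{L},
\]
which is exactly the averagedness constant used above.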

Suppose that the minimization problem (6) is consistent and let $\Omega$ denote its solution set. Assume that $\Gamma \neq \emptyset$ and $\Omega \cap \Gamma \neq \emptyset$.

Define a mapping $U := J_\lambda^{B_1}(I + \delta A^*(J_\lambda^{B_2} - I)A)$. Since both $J_\lambda^{B_1}$ and $J_\lambda^{B_2}$ are firmly nonexpansive, they are averaged mappings. For $\delta \in (0, 1/L_A)$, where $L_A$ is the spectral radius of $A^*A$, the mapping $I + \delta A^*(J_\lambda^{B_2} - I)A$ is averaged. It follows from Proposition 8(iv) that the mapping $U$ is averaged and hence that $U$ is a nonexpansive mapping. It is easy to see that $T_n U$ is also a nonexpansive mapping.

Consider the following mapping $S_n$ on $H_1$ defined by
$S_n x := \alpha_n \gamma h(x) + (I - \alpha_n \mu D)T_n U x$, $x \in H_1$,
where $\alpha_n \in (0, 1)$. From Lemma 3, we have
$\|S_n x - S_n y\| \le \alpha_n \gamma \|h(x) - h(y)\| + \|(I - \alpha_n \mu D)T_n U x - (I - \alpha_n \mu D)T_n U y\| \le \alpha_n \gamma \rho \|x - y\| + (1 - \alpha_n \tau)\|x - y\| = (1 - \alpha_n(\tau - \gamma\rho))\|x - y\|$.
Since $0 < 1 - \alpha_n(\tau - \gamma\rho) < 1$, it follows that $S_n$ is a contraction. Therefore, by the Banach contraction principle, $S_n$ has a unique fixed point $x_n^h \in H_1$ such that
$x_n^h = \alpha_n \gamma h(x_n^h) + (I - \alpha_n \mu D)T_n U x_n^h$.
For simplicity, we will write $x_n$ for $x_n^h$, provided no confusion occurs. Next, we prove that the sequence $\{x_n\}$ converges strongly to a point $\tilde{x} \in \Omega \cap \Gamma$ which solves the variational inequality
$\langle (\gamma h - \mu D)\tilde{x}, x - \tilde{x} \rangle \le 0$ for all $x \in \Omega \cap \Gamma$. (40)
Equivalently, $\tilde{x} = P_{\Omega \cap \Gamma}(I - \mu D + \gamma h)\tilde{x}$.

3.1. An Implicit Iteration Method

Theorem 10. Let $H_1$ and $H_2$ be two real Hilbert spaces and let $A : H_1 \to H_2$ be a bounded linear operator, $f : C \to \mathbb{R}$ a real-valued convex function, and $\nabla f$ an $L$-Lipschitzian mapping with $L > 0$. Assume that $\Omega \cap \Gamma \neq \emptyset$. Let $h : H_1 \to H_1$ be a $\rho$-Lipschitzian mapping with $\rho > 0$ and let $D$ be a strongly positive bounded linear operator with coefficients $\bar{\gamma} > 0$, $0 < \mu < 2\bar{\gamma}/\|D\|^2$, and $0 < \gamma\rho < \tau$, where $\tau = \mu(\bar{\gamma} - \mu\|D\|^2/2)$. Given $x_0 \in H_1$ arbitrarily, let $\{u_n\}$ and $\{x_n\}$ be sequences generated by the following algorithm:
$u_n = J_\lambda^{B_1}(x_n + \delta A^*(J_\lambda^{B_2} - I)Ax_n)$, $x_n = \alpha_n \gamma h(x_n) + (I - \alpha_n \mu D)T_n u_n$, (41)
where $\lambda > 0$, $\{\alpha_n\} \subset (0, 1)$, $T_n$ is nonexpansive, $\{\lambda_n\} \subset (0, 2/L)$, $\delta \in (0, 1/L_A)$ with $L_A$ the spectral radius of $A^*A$, and $A^*$ is the adjoint of $A$, satisfying the following conditions: (i) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$; (ii) $\lim_{n \to \infty} \lambda_n = \lambda$ with $0 < \lambda < 2/L$. Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x} \in \Omega \cap \Gamma$, which solves the variational inequality (40).

Proof. Consider the following.
Step 1. Show first that $\{x_n\}$ is bounded.
Since $\alpha_n \to 0$, we can assume that $\alpha_n \in (0, \min\{1, 1/\tau\})$ for all $n$. By Lemma 3, we have $\|(I - \alpha_n \mu D)x - (I - \alpha_n \mu D)y\| \le (1 - \alpha_n \tau)\|x - y\|$ for all $x, y \in H_1$.
Let $p \in \Omega \cap \Gamma$; we have $T_n U p = p$ and $J_\lambda^{B_2}(Ap) = Ap$. We estimate $\|u_n - p\|$ and obtain (43). Thus, we have (44). Now, setting $u_n = Ux_n$ and using (26), we have (45).
Using (43), (44), and (45), we obtain (46).
Since $\alpha_n \to 0$, we obtain (47). Thus, by (41) and Lemma 3, we derive (48). It follows that $\|x_n - p\|$ is bounded above by a constant independent of $n$.
Hence $\{x_n\}$ is bounded and so is $\{u_n\}$. It follows from the Lipschitz continuity of $\nabla f$, $h$, and $D$ that $\{\nabla f(x_n)\}$, $\{h(x_n)\}$, and $\{Dx_n\}$ are also bounded. From the nonexpansivity of $T_n U$, it follows that $\{T_n U x_n\}$ is also bounded.
Step 2. Show that (49) holds.
Next, from (46) and (47), we derive (50). Therefore, since $\alpha_n \to 0$ and $\{x_n\}$ is bounded, we get (53).
Next, we estimate as in (54); so, we obtain (55).
Observe that, from (50) and (55), we get (56) and therefore (57). Since $\alpha_n \to 0$ and in view of (53), we get that (49) holds.
Step 3. Show that (58) holds.
Observe that (59) holds. Since $\alpha_n \to 0$ and as seen in (49), we get that (58) holds.
Thus, from (49) and (58), we get (60).
Note that (61) holds, where the constant appearing there is finite by Step 1. Since $\alpha_n \to 0$ and as seen in (61), we get (62). By the boundedness of $\{x_n\}$, $\{u_n\}$, and $\{h(x_n)\}$, we conclude (63), and so we conclude (64).
Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_i}\}$ which converges weakly to some point $w \in H_1$.
Step 4. Show that $w \in \Omega \cap \Gamma$.
Since $C$ is closed and convex, $C$ is weakly closed, so we have $w \in C$. By Lemma 9 and (63), we have $w \in \Omega$.
Next, we show that $w \in \Gamma$.
Consider that $u_n = Ux_n$ can be rewritten as (65). Taking the limit in (65), and taking into account (49) and (53) and the fact that the graph of a maximal monotone operator is weakly-strongly closed, we obtain $0 \in B_1(w)$; that is, $w \in \mathrm{SOLVIP}(B_1)$. Furthermore, since $\{x_n\}$ and $\{u_n\}$ have the same asymptotic behavior, $\{Ax_{n_i}\}$ weakly converges to $Aw$. Again, by (53), the fact that the resolvent $J_\lambda^{B_2}$ is nonexpansive, and Lemma 9, we obtain that $Aw = J_\lambda^{B_2}(Aw)$; that is, $Aw \in \mathrm{SOLVIP}(B_2)$. Thus, $w \in \Gamma$ and hence $w \in \Omega \cap \Gamma$.
Step 5. Show that $x_n \to \tilde{x}$, where $\tilde{x}$ is the unique solution of the variational inequality (40).
Hence, we obtain (68). It follows that (69) holds; this implies (70). In particular, (71) holds. Since $x_{n_i} \rightharpoonup w$, it follows from (70) that $x_{n_i} \to w$ as $i \to \infty$.
Next, we show that $w$ solves the variational inequality (40). By the algorithm (41), we have (72). Therefore, we have (73); that is, (74).
Due to the nonexpansivity of $T_n U$, we have that $I - T_n U$ is monotone; that is, $\langle (I - T_n U)x - (I - T_n U)y, x - y \rangle \ge 0$ for all $x, y \in H_1$. Now, by replacing $n$ in (74) with $n_i$ and taking the limit as $i \to \infty$, we get (75). That is, $w$ is a solution of the variational inequality (40).
Further, by the uniqueness of the solution of the variational inequality (40), we conclude that $x_n \to \tilde{x}$ as $n \to \infty$. We can rewrite (40) as
$\langle (I - \mu D + \gamma h)\tilde{x} - \tilde{x}, x - \tilde{x} \rangle \le 0$ for all $x \in \Omega \cap \Gamma$.
This is equivalent to the fixed point equation $\tilde{x} = P_{\Omega \cap \Gamma}(I - \mu D + \gamma h)\tilde{x}$. This completes the proof.
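Computationally, each iterate of the implicit scheme (41) is the unique fixed point of the contraction $S_n$, so it can be approximated by an inner Banach iteration. The sketch below illustrates only that structure; $T$, $U$, $h$, $D$, and all constants are illustrative stand-ins (assumptions), not the paper's data.

```python
import numpy as np

# Structure of the implicit scheme (41): for each n, x_n is the unique fixed
# point of S_n x = a_n*gamma*h(x) + (I - a_n*mu*D) T(U(x)), approximated here
# by an inner Banach iteration. All objects below are illustrative stand-ins.
def T(x):  return 0.5 * (x + np.array([1.0, 0.0]))  # nonexpansive, Fix(T) = {(1, 0)}
def U(x):  return x                                 # placeholder averaged mapping
def h(x):  return 0.1 * x                           # rho-Lipschitzian, rho = 0.1
def D(x):  return x                                 # strongly positive operator (the identity)

mu, gamma = 1.0, 0.5
x = np.array([5.0, -3.0])
for n in range(1, 200):
    a = 1.0 / n                                     # alpha_n -> 0
    for _ in range(50):                             # inner Banach iteration for the fixed point
        v = T(U(x))
        x = a * gamma * h(x) + (v - a * mu * D(v))
print(x)                                            # the net {x_n} approaches (1, 0)
```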

3.2. An Explicit Iteration Method

Theorem 11. Let $H_1$ and $H_2$ be two real Hilbert spaces and let $A : H_1 \to H_2$ be a bounded linear operator, $f : C \to \mathbb{R}$ a real-valued convex function, and $\nabla f$ an $L$-Lipschitzian mapping with $L > 0$. Assume that $\Omega \cap \Gamma \neq \emptyset$. Let $h : H_1 \to H_1$ be a $\rho$-Lipschitzian mapping with $\rho > 0$ and let $D$ be a strongly positive bounded linear operator with coefficients $\bar{\gamma} > 0$, $0 < \mu < 2\bar{\gamma}/\|D\|^2$, and $0 < \gamma\rho < \tau$, where $\tau = \mu(\bar{\gamma} - \mu\|D\|^2/2)$. Given $x_0 \in H_1$ arbitrarily, let $\{u_n\}$ and $\{x_n\}$ be sequences generated by the following algorithm:
$u_n = J_\lambda^{B_1}(x_n + \delta A^*(J_\lambda^{B_2} - I)Ax_n)$, $x_{n+1} = \alpha_n \gamma h(x_n) + (I - \alpha_n \mu D)T_n u_n$, (78)
where $\lambda > 0$, $\{\alpha_n\} \subset (0, 1)$, $T_n$ is nonexpansive, $\{\lambda_n\} \subset (0, 2/L)$, $\delta \in (0, 1/L_A)$ with $L_A$ the spectral radius of $A^*A$, and $A^*$ is the adjoint of $A$, satisfying the following conditions: (i) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$; (ii) $\sum_{n=1}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$ and $\sum_{n=1}^{\infty} |\lambda_{n+1} - \lambda_n| < \infty$. Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x} \in \Omega \cap \Gamma$, which solves the variational inequality (40).

Proof. The proof is divided into several steps.
Step 1. Show first that $\{x_n\}$ is bounded.
Let $p \in \Omega \cap \Gamma$; we have $T_n U p = p$ and $J_\lambda^{B_2}(Ap) = Ap$. We have (79).
Next, we derive (80). By induction, we obtain $\|x_n - p\| \le \max\{\|x_0 - p\|, \|\gamma h(p) - \mu Dp\|/(\tau - \gamma\rho)\}$ for all $n \ge 0$. Hence, $\{x_n\}$ is bounded and so is $\{u_n\}$. It follows from the Lipschitz continuity of $\nabla f$, $h$, and $D$ that $\{\nabla f(x_n)\}$, $\{h(x_n)\}$, and $\{Dx_n\}$ are also bounded. From the nonexpansivity of $T_n U$, it follows that $\{T_n U u_n\}$ is also bounded.
Step 2. Show that (81) holds.
By (78), we have (82). Next, we estimate $\|u_{n+1} - u_n\|$; observe that (83) holds.
Substituting (83) into (82), we get (84) for some appropriate positive constant $M$ such that (85) holds. Since, for $\lambda_n \in (0, 2/L)$, the mapping $P_C(I - \lambda_n \nabla f)$ is averaged and hence nonexpansive, we obtain (86). Substituting (86) into (84), we get (87). By Lemma 2, it follows from conditions (i)-(ii) that (81) holds. Further, from (86), this implies (88).
Step 3. Show that (89) holds.
From (55) and (78), we have (90). This implies (91). Since $\alpha_n \to 0$, and as seen in (53) and in (81), we get (92). Next, it follows from conditions (i)-(ii), (81), and (92) that (89) holds. Furthermore, we have (93). It follows from (89) and (92) that (94) holds.
Step 4. Show that (95) holds, where $\tilde{x}$ is the unique solution of the variational inequality (40). Indeed, take a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that (96) holds. Since $\{x_{n_i}\}$ is bounded, there exists a subsequence of $\{x_{n_i}\}$ which converges weakly to some point $w$. Without loss of generality, we can assume that $x_{n_i} \rightharpoonup w$. Since $w \in \Omega \cap \Gamma$, arguing as in Step 4 of the proof of Theorem 10, it follows that (97) holds. This implies that (95) holds.
Step 5. Show that $x_n \to \tilde{x}$ as $n \to \infty$.
It follows from (81) and (95) that (98) holds. Observe that (99) holds; this implies (100), where $\{\gamma_n\}$ and $\{\delta_n\}$ are the sequences appearing in Lemma 2. It is easy to see that the hypotheses of Lemma 2 are satisfied. Hence, by Lemma 2, the sequence $\{x_n\}$ converges strongly to $\tilde{x}$. This completes the proof.
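A minimal numerical sketch of the explicit scheme (78) follows, under the same kind of toy stand-ins as before: the resolvents are projections (so the SVIP part is an SFP), $f$ is quadratic, and $h$, $D$, and every constant are illustrative assumptions only.

```python
import numpy as np

# Toy instance of the explicit scheme (78); all concrete objects are assumptions.
A = np.array([[1.0, 2.0], [0.0, 1.0]]); A_star = A.T
M = np.array([[2.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])

def J_B1(x):   return np.maximum(x, 0.0)        # resolvent of B_1: projection, nonnegative orthant
def J_B2(y):   return np.clip(y, 0.0, 2.0)      # resolvent of B_2: projection onto [0, 2]^2
def grad_f(x): return M.T @ (M @ x - b)         # L-Lipschitzian gradient of f
def P_C(x):    return np.clip(x, 0.0, 2.0)      # constraint set C of problem (6)
def h(x):      return 0.1 * x                   # rho-Lipschitzian, rho = 0.1
def D(x):      return x                         # strongly positive operator (the identity)

L  = np.linalg.norm(M.T @ M, 2)                 # Lipschitz constant of grad f
LA = np.linalg.norm(A_star @ A, 2)              # spectral radius of A*A
lam, delta = 1.0 / L, 0.9 / LA
mu, gamma = 1.0, 0.5

def T_n(u):
    # Nonexpansive part of P_C(I - lam*grad f) = ((2-lam*L)/4)I + ((2+lam*L)/4)T_n
    return (4.0 * P_C(u - lam * grad_f(u)) - (2.0 - lam * L) * u) / (2.0 + lam * L)

x = np.array([2.0, -1.0])
for n in range(1, 2000):
    a = 1.0 / n                                 # alpha_n: conditions (i)-(ii) hold
    u = J_B1(x + delta * A_star @ (J_B2(A @ x) - A @ x))   # SVIP half-step
    v = T_n(u)
    x = a * gamma * h(x) + (v - a * mu * D(v))             # hybrid steepest descent half-step
print(x)                                        # approaches the common solution (0.2, 0.6)
```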

Setting $h \equiv u$, where $u \in H_1$ is a fixed element, in (78) in Theorem 11, we have the following result.

Corollary 12. Let $H_1$ and $H_2$ be two real Hilbert spaces and let $A : H_1 \to H_2$ be a bounded linear operator, $f : C \to \mathbb{R}$ a real-valued convex function, and $\nabla f$ an $L$-Lipschitzian mapping with $L > 0$. Let $u$ be a fixed element in $H_1$. Assume that $\Omega \cap \Gamma \neq \emptyset$. Let $D$ be a strongly positive bounded linear operator with coefficients $\bar{\gamma} > 0$ and $0 < \mu < 2\bar{\gamma}/\|D\|^2$. Given $x_0 \in H_1$ arbitrarily, let $\{u_n\}$ and $\{x_n\}$ be sequences generated by the following algorithm:
$u_n = J_\lambda^{B_1}(x_n + \delta A^*(J_\lambda^{B_2} - I)Ax_n)$, $x_{n+1} = \alpha_n \gamma u + (I - \alpha_n \mu D)T_n u_n$,
where $T_n$ is nonexpansive, $\lambda > 0$, $\delta \in (0, 1/L_A)$, and $A^*$ is the adjoint of $A$, satisfying conditions (i)-(ii) in Theorem 11. Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x} \in \Omega \cap \Gamma$, which solves the variational inequality (40).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the referees for comments and suggestions on this paper. The first author was supported by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (Grant no. PHD/0033/2554) and the King Mongkut’s University of Technology Thonburi (KMUTT). Moreover, the second author was supported by the Thailand Research Fund and the King Mongkut’s University of Technology Thonburi (Grant no. RSA5780059).