An Inertial Iterative Algorithm to Find Common Solution of a Split Generalized Equilibrium and a Variational Inequality Problem in Hilbert Spaces
In this paper, we introduce and study an iterative algorithm based on inertial and viscosity techniques to find a common solution of a split generalized equilibrium problem and a variational inequality problem in Hilbert spaces. Further, we prove that the sequence generated by the proposed algorithm converges strongly to the common solution of our problem. Furthermore, we list some consequences of the established algorithm. Finally, we construct a numerical example to demonstrate the applicability of the theorem. We emphasize that the result presented in this manuscript unifies and extends various results in this field of study.
Let $H_1$ and $H_2$ be real Hilbert spaces with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$. Let $C$ and $Q$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. The variational inequality problem (in short, VIP) is to find $x^* \in C$ such that
$$\langle Dx^*, x - x^* \rangle \geq 0, \quad \forall x \in C, \tag{1}$$
where $D : C \to H_1$ is a nonlinear mapping. The solution set of VIP (1) is denoted by $\mathrm{Sol(VIP(1))}$. It was introduced by Hartman and Stampacchia.
The equilibrium problem (in short, EP) is to find $x^* \in C$ such that
$$F(x^*, x) \geq 0, \quad \forall x \in C, \tag{2}$$
where $F : C \times C \to \mathbb{R}$ is a bifunction. In the last two decades, EP (2) has been generalized and extensively studied in many directions owing to its importance; see, for example, [3–7] for the literature on the existence and iterative approximation of solutions of the various generalizations of EP (2).
Censor et al. introduced the split feasibility problem (in short, SFP) in finite-dimensional Hilbert spaces, for modelling inverse problems that arise in phase retrieval and medical image restoration, as follows: find $x^* \in C$ such that
$$Ax^* \in Q, \tag{3}$$
where $A : H_1 \to H_2$ is a bounded linear operator.
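The SFP can be illustrated numerically. The sketch below applies the classical CQ-algorithm of Byrne, a standard method for the SFP and not the algorithm proposed in this paper, to illustrative data: $C$ and $Q$ are taken to be boxes (a hypothetical choice), so both metric projections reduce to componentwise clipping.

```python
import numpy as np

# Illustrative SFP: find x in C = [0,1]^2 with Ax in Q = [0,1]^2.
# CQ-algorithm: x_{n+1} = P_C(x_n - gamma * A^T (Ax_n - P_Q(Ax_n))),
# with step size 0 < gamma < 2 / ||A||^2.

A = np.array([[2.0, 0.0],
              [0.0, 0.5]])

def P_box(z, lo=0.0, hi=1.0):
    # Metric projection onto a box: componentwise clipping.
    return np.clip(z, lo, hi)

gamma = 1.9 / np.linalg.norm(A, 2) ** 2
x = np.array([5.0, -3.0])          # arbitrary starting point
for _ in range(500):
    Ax = A @ x
    x = P_box(x - gamma * A.T @ (Ax - P_box(Ax)))

# At a solution, x lies in C and Ax lies in Q.
print(x, A @ x)
```

Each iteration is a projected gradient step on the proximity function $\tfrac{1}{2}\|Ax - P_Q(Ax)\|^2$, which is why the step-size bound involves $\|A\|^2$.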
In this paper, we consider the following split generalized equilibrium problem (in short, SGEP):
Let $F_1, \varphi_1 : C \times C \to \mathbb{R}$ and $F_2, \varphi_2 : Q \times Q \to \mathbb{R}$ be nonlinear bimappings and $A : H_1 \to H_2$ be a bounded linear operator; then the SGEP is to find $x^* \in C$ such that
$$F_1(x^*, x) + \varphi_1(x^*, x) - \varphi_1(x^*, x^*) \geq 0, \quad \forall x \in C, \tag{4}$$
and such that $y^* = Ax^* \in Q$ solves
$$F_2(y^*, y) + \varphi_2(y^*, y) - \varphi_2(y^*, y^*) \geq 0, \quad \forall y \in Q. \tag{5}$$
If we take $\varphi_1 = 0$ and $\varphi_2 = 0$, then the SGEP becomes the split equilibrium problem (in short, SEP): find $x^* \in C$ such that
$$F_1(x^*, x) \geq 0, \quad \forall x \in C, \tag{6}$$
and such that $y^* = Ax^* \in Q$ solves
$$F_2(y^*, y) \geq 0, \quad \forall y \in Q. \tag{7}$$
Viewed separately, (4) is the generalized equilibrium problem (GEP), and we denote its solution set by Sol(GEP(4)). Problems (4) and (5) constitute a pair of generalized equilibrium problems which have to be solved so that the image, under the given bounded linear operator $A$, of a solution of GEP (4) in $H_1$ is a solution of another GEP (5) in the space $H_2$. We denote the solution set of GEP (5) by Sol(GEP(5)). The solution set of the SGEP (4) and (5) is denoted by $\Omega = \{x^* \in \mathrm{Sol(GEP(4))} : Ax^* \in \mathrm{Sol(GEP(5))}\}$.
The SGEP (4) and (5) generalizes the multiple-sets split feasibility problem. It also includes, as a special case, the split variational inequality problem, which is itself a generalization of split zero problems and split feasibility problems; see [9–12] for details.
In 2008, Maingé introduced the following inertial Krasnosel’skiǐ–Mann algorithm by combining the Krasnosel’skiǐ–Mann algorithm with inertial extrapolation:
$$\begin{cases} y_n = x_n + \theta_n (x_n - x_{n-1}), \\ x_{n+1} = (1 - \alpha_n) y_n + \alpha_n T y_n, \end{cases} \tag{8}$$
for each $n \geq 1$. He proved that the sequence generated by algorithm (8) converges weakly to a fixed point of the nonexpansive mapping $T$ under some conditions. Boţ et al. studied the convergence of the inertial Krasnosel’skiǐ–Mann algorithm for approximating a fixed point of a nonexpansive mapping while dispensing with some of the conditions used in the main result of Maingé. Recently, Dong et al. [15, 16] introduced an inertial hybrid algorithm and established a strong convergence theorem for approximating a fixed point of a nonexpansive mapping in the setting of Hilbert spaces. For further generalizations of iterative algorithm (8), see, for instance, [17, 18]. Very recently, Monairah et al. introduced and studied a hybrid iterative algorithm to approximate a common solution of a generalized equilibrium problem, a variational inequality problem, and a fixed point problem in the framework of a 2-uniformly convex and uniformly smooth real Banach space. The inertial method has been studied by many researchers; these and other related works analyzed the convergence properties of inertial-type algorithms and demonstrated their performance numerically on imaging and data analysis problems; see [20–23] for details.
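The inertial Krasnosel’skiǐ–Mann scheme (8) can be sketched as follows. The nonexpansive mapping $T$ (here a simple averaged map with fixed point $(1, 1)$) and the parameter choices $\theta_n$, $\alpha_n$ are illustrative assumptions, not the conditions of Maingé's theorem.

```python
import numpy as np

# Inertial Krasnosel'skii-Mann iteration, scheme (8):
#   y_n     = x_n + theta_n * (x_n - x_{n-1})           (inertial extrapolation)
#   x_{n+1} = (1 - alpha_n) * y_n + alpha_n * T(y_n)    (KM relaxation)
# T below is an illustrative nonexpansive map with Fix(T) = {(1, 1)}.

c = np.array([1.0, 1.0])
T = lambda x: 0.5 * (x + c)        # averaged map, hence nonexpansive

x_prev = np.array([10.0, -4.0])
x = np.array([8.0, 0.0])
for n in range(200):
    theta, alpha = 0.3, 0.5        # hypothetical constant parameters
    y = x + theta * (x - x_prev)
    x_prev, x = x, (1 - alpha) * y + alpha * T(y)

print(x)   # approaches the fixed point (1, 1)
```

The inertial term $\theta_n (x_n - x_{n-1})$ reuses the previous displacement to accelerate the plain KM iteration, which is the idea the cited works analyze.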
Motivated by the work in [6, 13, 24], we propose an iterative algorithm based on inertial and viscosity techniques to find a common solution of a split generalized equilibrium problem and a variational inequality problem in Hilbert spaces. We establish strong convergence of the proposed algorithm. Further, we give some consequences of the main result. Finally, we discuss a numerical example to demonstrate the applicability of the iterative algorithm. The method and result presented in this paper generalize and unify previously known related methods and results, and our result extends several iterative methods given in the literature.
In this section, we collect some concepts and results which are required for the presentation of this work. Let the symbols $\to$ and $\rightharpoonup$ denote strong and weak convergence, respectively.
For every point $x \in H_1$, there exists a unique nearest point to $x$ in $C$, denoted by $P_C x$, such that
$$\|x - P_C x\| \leq \|x - y\|, \quad \forall y \in C.$$
The mapping $P_C$ is called the metric projection of $H_1$ onto $C$. It is well known that $P_C$ is nonexpansive and satisfies
$$\langle x - y, P_C x - P_C y \rangle \geq \|P_C x - P_C y\|^2, \quad \forall x, y \in H_1.$$
Moreover, $P_C x$ is characterized by the fact that $P_C x \in C$ and
$$\langle x - P_C x, y - P_C x \rangle \leq 0, \quad \forall y \in C.$$
This implies that
$$\|x - y\|^2 \geq \|x - P_C x\|^2 + \|y - P_C x\|^2, \quad \forall x \in H_1, \ y \in C.$$
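The characterization of the projection, $\langle x - P_C x, y - P_C x \rangle \leq 0$ for all $y \in C$, can be checked numerically for a simple set. Here $C$ is taken to be the closed unit ball (an illustrative choice), for which $P_C$ has a closed form.

```python
import numpy as np

# Metric projection onto the closed unit ball C = {z : ||z|| <= 1}:
# P_C(x) = x / max(1, ||x||).
def proj_ball(x):
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
x = rng.normal(size=3) * 5.0
px = proj_ball(x)

# Characterization: <x - P_C x, y - P_C x> <= 0 for every y in C.
for _ in range(1000):
    y = rng.normal(size=3)
    y /= max(1.0, np.linalg.norm(y))          # sample a point y in C
    assert np.dot(x - px, y - px) <= 1e-10

# Nonexpansiveness: ||P_C x1 - P_C x2|| <= ||x1 - x2||.
x1, x2 = rng.normal(size=3) * 3, rng.normal(size=3) * 3
assert np.linalg.norm(proj_ball(x1) - proj_ball(x2)) <= np.linalg.norm(x1 - x2) + 1e-12
print("projection properties verified")
```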
In a real Hilbert space $H_1$, it is well known that
$$\|\lambda x + (1 - \lambda) y\|^2 = \lambda \|x\|^2 + (1 - \lambda) \|y\|^2 - \lambda (1 - \lambda) \|x - y\|^2, \quad \forall x, y \in H_1, \ \lambda \in [0, 1],$$
and
$$\|x + y\|^2 \leq \|x\|^2 + 2 \langle y, x + y \rangle, \quad \forall x, y \in H_1.$$
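The standard Hilbert-space identity $\|\lambda x + (1-\lambda) y\|^2 = \lambda \|x\|^2 + (1-\lambda)\|y\|^2 - \lambda(1-\lambda)\|x - y\|^2$, used repeatedly in convergence analyses of this kind, can be confirmed numerically:

```python
import numpy as np

# Check the Hilbert-space identity
#   ||lam*x + (1-lam)*y||^2
#     = lam*||x||^2 + (1-lam)*||y||^2 - lam*(1-lam)*||x-y||^2
# on random vectors (the check is purely illustrative).
rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.normal(size=4), rng.normal(size=4)
    lam = rng.uniform()
    lhs = np.linalg.norm(lam * x + (1 - lam) * y) ** 2
    rhs = (lam * np.linalg.norm(x) ** 2 + (1 - lam) * np.linalg.norm(y) ** 2
           - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)
    assert abs(lhs - rhs) < 1e-9
print("identity verified")
```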
Definition 1. A multivalued mapping $M : H_1 \to 2^{H_1}$ is called monotone if, for all $x, y \in H_1$, $u \in Mx$ and $v \in My$,
$$\langle x - y, u - v \rangle \geq 0.$$
Definition 2. A multivalued monotone mapping $M : H_1 \to 2^{H_1}$ is maximal if its graph, $G(M)$, is not properly contained in the graph of any other monotone mapping.
Remark 1. It is known that a multivalued monotone mapping $M$ is maximal if and only if, for $(x, u) \in H_1 \times H_1$, $\langle x - y, u - v \rangle \geq 0$ for every $(y, v) \in G(M)$ implies that $u \in Mx$.
Lemma 1. Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n \to \infty} \beta_n \leq \limsup_{n \to \infty} \beta_n < 1$. Suppose $x_{n+1} = (1 - \beta_n) y_n + \beta_n x_n$ for all integers $n \geq 0$ and $\limsup_{n \to \infty} (\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|) \leq 0$. Then, $\lim_{n \to \infty} \|y_n - x_n\| = 0$.
Lemma 2. Let $\{\Gamma_n\}$ be a sequence of nonnegative real numbers such that there exists a subsequence $\{\Gamma_{n_j}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_j} < \Gamma_{n_j + 1}$ for all $j \in \mathbb{N}$. Then, there exists a nondecreasing sequence $\{m_k\}$ of positive integers such that $\lim_{k \to \infty} m_k = \infty$, and the following properties are satisfied by all (sufficiently large) numbers $k \in \mathbb{N}$:
$$\Gamma_{m_k} \leq \Gamma_{m_k + 1} \quad \text{and} \quad \Gamma_k \leq \Gamma_{m_k + 1}.$$
In fact, $m_k$ is the largest number $n$ in the set $\{1, 2, \ldots, k\}$ such that $\Gamma_n < \Gamma_{n+1}$.
Lemma 3. Assume that $B$ is a strongly positive self-adjoint bounded linear operator on a Hilbert space $H_1$ with coefficient $\bar{\gamma} > 0$ and $0 < \rho \leq \|B\|^{-1}$. Then, $\|I - \rho B\| \leq 1 - \rho \bar{\gamma}$.
Lemma 4. Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \leq (1 - \gamma_n) a_n + \delta_n, \quad n \geq 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty} \gamma_n = \infty$;
(ii) $\limsup_{n \to \infty} (\delta_n / \gamma_n) \leq 0$ or $\sum_{n=0}^{\infty} |\delta_n| < \infty$.
Then, $\lim_{n \to \infty} a_n = 0$.
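Lemma 4 guarantees $a_n \to 0$ for recursions of the form $a_{n+1} \leq (1 - \gamma_n) a_n + \delta_n$ under its conditions on $\{\gamma_n\}$ and $\{\delta_n\}$. A quick numerical check with the hypothetical choices $\gamma_n = 1/(n+1)$ and $\delta_n = \gamma_n/(n+1)$:

```python
# Numerical illustration of Lemma 4 with the hypothetical choices
#   gamma_n = 1/(n+1)        (so sum gamma_n = infinity), and
#   delta_n = gamma_n/(n+1)  (so delta_n / gamma_n -> 0).
a = 1.0
for n in range(1, 200000):
    gamma = 1.0 / (n + 1)
    delta = gamma / (n + 1)
    a = (1 - gamma) * a + delta
print(a)   # tends to 0 as n grows
```

This lemma is the workhorse that converts the recursive estimates obtained in the proof of Theorem 1 into strong convergence.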
Assumption 1. Let $F_1, \varphi_1 : C \times C \to \mathbb{R}$ be bimappings satisfying the following conditions:
(1) $F_1(x, x) = 0$ for all $x \in C$;
(2) $F_1$ is monotone, i.e., $F_1(x, y) + F_1(y, x) \leq 0$ for all $x, y \in C$;
(3) for each $y \in C$, $x \mapsto F_1(x, y)$ is weakly upper semicontinuous;
(4) for each $x \in C$, $y \mapsto F_1(x, y)$ is convex and lower semicontinuous;
(5) $\varphi_1$ is weakly continuous and $\varphi_1(x, \cdot)$ is convex for each $x \in C$;
(6) $\varphi_1$ is skew-symmetric, i.e.,
$$\varphi_1(x, x) - \varphi_1(x, y) - \varphi_1(y, x) + \varphi_1(y, y) \geq 0, \quad \forall x, y \in C.$$
Now, we define $T_r^{(F_1, \varphi_1)} : H_1 \to C$ as follows:
$$T_r^{(F_1, \varphi_1)}(x) = \left\{ z \in C : F_1(z, y) + \varphi_1(z, y) - \varphi_1(z, z) + \frac{1}{r} \langle y - z, z - x \rangle \geq 0, \ \forall y \in C \right\}, \tag{21}$$
where $r$ is a positive real number.
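Evaluating the resolvent-type operator defined by (21) generally requires solving an auxiliary problem. In the special case $F_1(z, y) = g(y) - g(z)$ with $g$ convex and $\varphi_1 \equiv 0$ (an illustrative choice, not the general setting of the paper), the defining inequality is exactly the subdifferential optimality condition for $\min_y g(y) + \tfrac{1}{2r}\|y - x\|^2$, so the operator reduces to the classical proximal mapping $\mathrm{prox}_{rg}$; for $g = |\cdot|$ this is soft-thresholding:

```python
# When F_1(z, y) = g(y) - g(z) for convex g and phi_1 = 0, the operator
# of (21) coincides with the proximal mapping prox_{rg}. For g(t) = |t|
# the proximal mapping is soft-thresholding (an illustrative special case).
def T_r(x, r):
    # prox_{r|.|}(x) = sign(x) * max(|x| - r, 0)
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

# Firm nonexpansiveness: |T_r x - T_r y|^2 <= (T_r x - T_r y)(x - y).
pairs = [(3.0, -2.0), (0.4, 0.1), (5.0, 4.9)]
for x, y in pairs:
    d = T_r(x, 1.0) - T_r(y, 1.0)
    assert d * d <= d * (x - y) + 1e-12
print(T_r(3.0, 1.0), T_r(0.5, 1.0), T_r(-2.0, 1.0))   # 2.0 0.0 -1.0
```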
Lemma 5. Let $C$ be a nonempty closed convex subset of a Hilbert space $H_1$. Let $F_1, \varphi_1 : C \times C \to \mathbb{R}$ be nonlinear bimappings satisfying Assumption 1. Assume that for each $x \in H_1$ and for each $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $z_x \in C$ such that, for any $z \in C \setminus D_x$,
$$F_1(z, z_x) + \varphi_1(z, z_x) - \varphi_1(z, z) + \frac{1}{r} \langle z_x - z, z - x \rangle < 0.$$
Let the mapping $T_r^{(F_1, \varphi_1)}$ be defined by (21). Then, the following conclusions hold:
(i) $T_r^{(F_1, \varphi_1)}(x)$ is nonempty for each $x \in H_1$;
(ii) $T_r^{(F_1, \varphi_1)}$ is single-valued;
(iii) $T_r^{(F_1, \varphi_1)}$ is a firmly nonexpansive mapping, i.e., for all $x, y \in H_1$,
$$\|T_r^{(F_1, \varphi_1)} x - T_r^{(F_1, \varphi_1)} y\|^2 \leq \langle T_r^{(F_1, \varphi_1)} x - T_r^{(F_1, \varphi_1)} y, x - y \rangle;$$
(iv) $\mathrm{Fix}(T_r^{(F_1, \varphi_1)}) = \mathrm{Sol(GEP(4))}$;
(v) $\mathrm{Sol(GEP(4))}$ is closed and convex.
Further, assume that $F_2, \varphi_2 : Q \times Q \to \mathbb{R}$ satisfy Assumption 1. For $s > 0$ and for all $w \in H_2$, define a mapping $T_s^{(F_2, \varphi_2)} : H_2 \to Q$ as follows:
$$T_s^{(F_2, \varphi_2)}(w) = \left\{ d \in Q : F_2(d, e) + \varphi_2(d, e) - \varphi_2(d, d) + \frac{1}{s} \langle e - d, d - w \rangle \geq 0, \ \forall e \in Q \right\}.$$
Then, we easily observe that $T_s^{(F_2, \varphi_2)}$ is nonempty and single-valued, $T_s^{(F_2, \varphi_2)}$ is firmly nonexpansive, $\mathrm{Fix}(T_s^{(F_2, \varphi_2)}) = \mathrm{Sol(GEP(5))}$, and $\mathrm{Sol(GEP(5))}$ is closed and convex.
3. Main Result
Theorem 1. Let $C$ and $Q$ be two nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively. Let $A : H_1 \to H_2$ be a bounded linear operator. Assume that $F_1, \varphi_1 : C \times C \to \mathbb{R}$ and $F_2, \varphi_2 : Q \times Q \to \mathbb{R}$ are nonlinear bimappings satisfying Assumption 1, and that $F_2$ is upper semicontinuous in the first argument. Assume that $\Gamma = \mathrm{Sol(SGEP(4)\text{–}(5))} \cap \mathrm{Sol(VIP(1))} \neq \emptyset$. Let $f : C \to C$ be a contraction mapping with constant $\kappa \in (0, 1)$ and let $D : C \to H_1$ be a $\delta$-inverse strongly monotone mapping. Let $\{x_n\}$ be generated by Algorithm 1 and satisfy the following conditions:
(i);
(ii);
(iii);
(iv);
(v).
Then, the sequence $\{x_n\}$ converges strongly to some $x^* \in \Gamma$.
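Algorithm 1 itself is not reproduced in this excerpt. The following sketch only conveys the general shape of an inertial viscosity iteration for the VIP component (inertial step, forward–projection step, viscosity step), with an illustrative inverse strongly monotone mapping $D(x) = x - b$, a hypothetical contraction $f$, and hypothetical parameter choices; it is not the authors' exact method.

```python
import numpy as np

# Structural sketch of an inertial viscosity iteration for a VIP:
#   w_n     = x_n + theta_n (x_n - x_{n-1})        inertial step
#   z_n     = P_C(w_n - lam * D(w_n))              projection step for VIP (1)
#   x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) z_n   viscosity step
# Illustrative data: C = [0,2]^2, D(x) = x - b (1-inverse strongly monotone),
# f(x) = 0.5 * x (a contraction with constant 1/2).

b = np.array([1.0, 1.5])
D = lambda x: x - b
f = lambda x: 0.5 * x
P_C = lambda x: np.clip(x, 0.0, 2.0)

lam = 0.5
x_prev = np.array([2.0, 0.0])
x = np.array([0.0, 2.0])
for n in range(1, 2000):
    alpha = 1.0 / (n + 1)          # alpha_n -> 0, sum alpha_n = infinity
    theta = 0.2                    # hypothetical inertial parameter
    w = x + theta * (x - x_prev)
    z = P_C(w - lam * D(w))
    x_prev, x = x, alpha * f(x) + (1 - alpha) * z

print(x)   # approaches the VIP solution b = (1.0, 1.5)
```

Since $b$ lies in the interior of $C$, the unique solution of this illustrative VIP is $x^* = b$, which the iterates approach as the viscosity weight $\alpha_n$ vanishes.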
Proof. We divide the proof into several steps.

Step 1. We show that $\{x_n\}$ is bounded. Let $p$ be a common solution; then $p \in \mathrm{Sol(SGEP(4)\text{–}(5))}$ and $p \in \mathrm{Sol(VIP(1))}$. Applying similar steps to those used in Theorem 1, we obtain
Thus,
We estimate
From condition (ii), there exists a constant such that
By (27)–(29), we have
Since the mapping is nonexpansive, we have
We estimate
Using (30) and (31) in the above inequality, we have
Thus, $\{x_n\}$ is bounded, and the accompanying sequences generated by the algorithm are bounded as well.

Step 2. We show that
We estimate
Using (26) in the above inequality, we get
From (30), we obtain
By (35) and (36), we have
which yields that

Step 3. We show that
Using the firm nonexpansiveness of the resolvent, we have
which implies that
Using (41) in (35), we get
Using (36) in the above inequality then implies

Step 4. We show that
We estimate
Using (14), we calculate
From (46) and (47), we have

Step 5. We show that
To show this, we consider the following two cases.
Case 1. There exists an index beyond which the sequence of norms is monotone. This shows that the corresponding limit exists, and by Step 2, we have
Thanks to Step 3 and (49), we obtain
Since
therefore
Now,
Next, we prove that
By (47), we have
We set a suitable constant in the above inequality. Thus,
This implies
Thus,
We compute