#### Abstract

In this paper, we present two iterative algorithms involving Yosida approximation operators for split monotone variational inclusion problems (in short, SpMVIP). We prove the weak and strong convergence of the proposed iterative algorithms to a solution of SpMVIP in real Hilbert spaces. Our algorithms are based on Yosida approximation operators of monotone mappings, and their step sizes do not require precalculation of the operator norm. To show the reliability and accuracy of the proposed algorithms, a numerical example is also constructed.

#### 1. Introduction

The variational inequality problem (in short, VIP), brought into existence by Hartman and Stampacchia [1], plays an important role as a mathematical model in physics, economics, optimization, networking, structural analysis, and medical imaging. In 1994, Censor and Elfving [2] first presented the split feasibility problem (in short, SFP) for modeling medical image reconstruction. Over the last two decades, the SFP has been applied widely in intensity-modulated radiation therapy treatment planning and other branches of the applied sciences (see, e.g., [3–5]). Censor et al. [6] combined the VIP and the SFP and presented a new type of variational inequality problem, called the split variational inequality problem (in short, SVIP), as follows: find $x^{*} \in C$ such that $\langle f(x^{*}), x - x^{*} \rangle \ge 0$ for all $x \in C$, and such that $y^{*} = Ax^{*} \in Q$ solves $\langle g(y^{*}), y - y^{*} \rangle \ge 0$ for all $y \in Q$, where $C$ and $Q$ are closed, convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, $A : H_1 \to H_2$ is a bounded linear operator, and $f : H_1 \to H_1$ and $g : H_2 \to H_2$ are two operators.

Moudafi [7] generalized the SVIP to the split monotone variational inclusion problem (in short, SpMVIP) as follows: find $x^{*} \in H_1$ such that $0 \in f(x^{*}) + B_1(x^{*})$ and such that $y^{*} = Ax^{*} \in H_2$ solves $0 \in g(y^{*}) + B_2(y^{*})$, where $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ are set-valued mappings on the Hilbert spaces $H_1$ and $H_2$, respectively, and $f$, $g$, $A$ are as above.

Moudafi [7] formulated the following iterative algorithm to find a solution of SpMVIP. Let $\lambda > 0$, select an arbitrary starting point $x_0 \in H_1$, and compute $x_{n+1} = U\bigl(x_n + \gamma A^{*}(V - I)Ax_n\bigr)$, where $A^{*}$ is the adjoint operator of $A$, $\gamma \in (0, 1/L)$ with $L$ being the spectral radius of the operator $A^{*}A$, and $U = J_{\lambda}^{B_1}(I - \lambda f)$, $V = J_{\lambda}^{B_2}(I - \lambda g)$.

Let $N_C$ and $N_Q$ be the normal cones to the closed and convex sets $C$ and $Q$, respectively. If $B_1 = N_C$ and $B_2 = N_Q$, then SpMVIP reduces to SVIP. If $f = g = 0$, then SpMVIP reduces to the split variational inclusion problem (in short, SpVIP) for set-valued maximal monotone mappings, introduced and studied by Byrne et al. [8]: find $x^{*} \in H_1$ such that $0 \in B_1(x^{*})$ and $y^{*} = Ax^{*} \in H_2$ such that $0 \in B_2(y^{*})$, where $A$, $B_1$, and $B_2$ are the same as in (2). We denote the solution set of SpVIP by $\Gamma$.

Moreover, Byrne et al. [8] presented the following iterative algorithm to find a solution of SpVIP. Let $\lambda > 0$, and select a starting point $x_0 \in H_1$. Then, compute $x_{n+1} = J_{\lambda}^{B_1}\bigl(x_n + \gamma A^{*}(J_{\lambda}^{B_2} - I)Ax_n\bigr)$, where $A^{*}$ is the adjoint operator of $A$, $\gamma > 0$, and $J_{\lambda}^{B_1}$, $J_{\lambda}^{B_2}$ are the resolvents of the monotone mappings $B_1$, $B_2$, respectively. It can easily be seen that $x^{*}$ solves SpVIP if and only if $x^{*} = J_{\lambda}^{B_1}\bigl(x^{*} + \gamma A^{*}(J_{\lambda}^{B_2} - I)Ax^{*}\bigr)$.

Kazmi and Rizvi [9] proposed the following iterative method for approximating a common solution of SpVIP and the fixed point problem of a nonexpansive mapping $S$: $u_n = J_{\lambda}^{B_1}\bigl(x_n + \gamma A^{*}(J_{\lambda}^{B_2} - I)Ax_n\bigr)$, $x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n) S u_n$, where $h$ is a contraction and $S$ is a nonexpansive mapping. Later, Sitthithakerngkiet et al. [10] studied common solutions of SpVIP and fixed points of an infinite family of nonexpansive mappings and introduced the following iterative method:

where $u$ is a given point and $W_n$ is the $W$-mapping generated by an infinite family of nonexpansive mappings. Similar results related to SpVIP can be found in [11–17].

A common feature of the iterative methods described above is that they use the resolvents of the associated monotone mappings; moreover, their step sizes depend on the operator norm $\|A\|$. To avoid this obstacle, iterative algorithms with self-adaptive step sizes have been introduced (see, for example, [18–24]). López et al. [20] introduced a relaxed method with a self-adaptive step size for solving the split feasibility problem. Recently, Dilshad et al. [25] proposed two iterative algorithms to solve SpVIP in which precalculation of the operator norm is not required. They studied the weak and strong convergence of the proposed methods, whose step sizes do not depend on the precalculated operator norm.

The resolvent of a maximal monotone operator $B$ is defined as $J_{\lambda}^{B}(x) = (I + \lambda B)^{-1}(x)$, where $\lambda$ is a positive real number. The resolvent operator of a maximal monotone operator is single-valued and firmly nonexpansive. Since the zeros of a maximal monotone operator are exactly the fixed points of its resolvent, the resolvent associated with a set-valued maximal monotone operator plays an important role in finding zeros of monotone operators. Following Byrne's iterative method (5), which is mainly based on the resolvents of monotone mappings, many researchers have introduced and studied various iterative methods for SpVIP (see, for example, [7–9, 18, 25, 26] and the references therein).
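As a small numeric sketch of these two facts (illustrative only; the operator $B(x) = bx$ on the real line and all constants are assumptions, not from the paper), the resolvent of a scalar maximal monotone operator can be computed in closed form and its defining and fixed-point properties checked directly:

```python
# Illustrative sketch: the resolvent J_lam = (I + lam*B)^(-1) of the maximal
# monotone operator B(x) = b*x on the real line has closed form x / (1 + lam*b).

def resolvent(x, lam, b):
    """Resolvent (I + lam*B)^(-1) of B(x) = b*x: solve y + lam*b*y = x for y."""
    return x / (1.0 + lam * b)

b, lam = 2.0, 0.5
x = 3.0
y = resolvent(x, lam, b)

# Defining property of the resolvent: y + lam*B(y) = x.
assert abs(y + lam * b * y - x) < 1e-12

# The unique zero of B (namely 0) is the unique fixed point of the resolvent.
assert abs(resolvent(0.0, lam, b) - 0.0) < 1e-12
```

The same fixed-point property is what the iterative methods above exploit in infinite-dimensional Hilbert spaces.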

The Yosida approximation operator of a monotone mapping $B$ with parameter $\lambda > 0$ is defined as $B_{\lambda} = \frac{1}{\lambda}(I - J_{\lambda}^{B})$. It is well known that a set-valued monotone operator can be regularized into a single-valued monotone operator by this process, known as the Yosida approximation. The Yosida approximation is a tool for solving variational inclusion problems using the nonexpansive resolvent operator, and it has been used to solve various variational inclusions and systems of variational inclusions in linear and nonlinear spaces (see, for example, [18, 25–30]).
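Continuing the scalar sketch under the same assumed operator $B(x) = bx$ (an illustration, not an example from the paper), the Yosida approximation follows directly from the resolvent; the snippet checks its closed form and that it recovers $B$ as $\lambda \to 0$:

```python
# Illustrative sketch: Yosida approximation B_lam = (I - J_lam)/lam of B(x) = b*x.

def resolvent(x, lam, b):
    """Resolvent (I + lam*B)^(-1) of B(x) = b*x."""
    return x / (1.0 + lam * b)

def yosida(x, lam, b):
    """Yosida approximation B_lam(x) = (x - J_lam(x)) / lam; single-valued, monotone."""
    return (x - resolvent(x, lam, b)) / lam

b = 2.0
x = 3.0
# Closed form: B_lam(x) = b*x / (1 + lam*b), a Lipschitz regularization of B(x) = b*x.
for lam in (1.0, 0.1, 0.001):
    assert abs(yosida(x, lam, b) - b * x / (1.0 + lam * b)) < 1e-12
# As lam -> 0, the Yosida approximation converges to B itself.
assert abs(yosida(x, 1e-8, b) - b * x) < 1e-4
```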

Since a zero of the Yosida approximation operator associated with a monotone operator is also a zero of the corresponding inclusion problem, and inspired by the work of Moudafi, Byrne, Kazmi, and Dilshad et al., our aim is to propose two iterative methods to solve SpMVIP. The rest of the paper is organized as follows.

The next section contains some fundamental results and preliminaries. In Section 3, we describe two iterative algorithms that use Yosida approximations of the associated monotone mappings. Section 4 is devoted to the weak and strong convergence of the proposed iterative methods to a solution of SpMVIP. In the last section, we give a numerical example in support of our main results and show the convergence of the sequence obtained from the proposed algorithm to a solution of SpMVIP.

#### 2. Preliminaries

Let $H$ be a real Hilbert space endowed with norm $\|\cdot\|$ and inner product $\langle \cdot, \cdot \rangle$. The strong and weak convergence of a sequence $\{x_n\}$ to $x$ are denoted by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. An operator $T : H \to H$ is said to be a contraction if $\|T(x) - T(y)\| \le k \|x - y\|$ for some $k \in [0, 1)$ and all $x, y \in H$; if $k = 1$, then $T$ is called nonexpansive; $T$ is firmly nonexpansive if $\|T(x) - T(y)\|^2 \le \langle T(x) - T(y), x - y \rangle$ for all $x, y \in H$; and $T$ is called $\beta$-inverse strongly monotone if there exists $\beta > 0$ such that $\langle T(x) - T(y), x - y \rangle \ge \beta \|T(x) - T(y)\|^2$ for all $x, y \in H$.

For each $x \in H$ and each nonempty, closed, convex subset $C \subseteq H$, there exists a unique nearest point in $C$, denoted by $P_C(x)$, such that $\|x - P_C(x)\| \le \|x - y\|$ for all $y \in C$.

$P_C$ is called the metric projection of $H$ onto $C$, which satisfies $\langle x - y, P_C(x) - P_C(y) \rangle \ge \|P_C(x) - P_C(y)\|^2$ for all $x, y \in H$.

Moreover, $P_C(x)$ is also characterized by the fact that $P_C(x) \in C$ and $\langle x - P_C(x), y - P_C(x) \rangle \le 0$ for all $y \in C$.
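As a hedged illustration of the standard variational characterization $\langle x - P_C(x), y - P_C(x) \rangle \le 0$, the snippet below projects onto the interval $[-1, 2] \subset \mathbb{R}$ (an assumed example set, not one from the paper) and checks the inequality at several points:

```python
def project_interval(x, lo, hi):
    """Metric projection of a real number onto the closed interval [lo, hi]."""
    return min(max(x, lo), hi)

lo, hi = -1.0, 2.0
x = 5.0
p = project_interval(x, lo, hi)
assert p == hi  # the nearest point of [lo, hi] to 5.0 is the right endpoint

# Variational characterization: (x - p) * (y - p) <= 0 for every y in [lo, hi].
for y in [-1.0, 0.0, 1.3, 2.0]:
    assert (x - p) * (y - p) <= 1e-12
```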

In Hilbert spaces, the following equality and inequality hold for all $x, y \in H$: $\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$ and $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$.

Let $B : H \to 2^{H}$ be a set-valued operator. The graph of $B$ is defined by $\operatorname{Graph}(B) = \{(x, u) \in H \times H : u \in B(x)\}$, and the inverse of $B$ is denoted by $B^{-1}$. A set-valued mapping $B$ is said to be monotone if $\langle x - y, u - v \rangle \ge 0$ for all $(x, u), (y, v) \in \operatorname{Graph}(B)$. A monotone operator $B$ is called maximal monotone if there exists no other monotone operator whose graph properly contains the graph of $B$.

Lemma 1 (see [31]). *If $\{a_n\}$ is a sequence of nonnegative real numbers such that $a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n$, $n \ge 0$, where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that*
*(i) $\sum_{n=1}^{\infty} \gamma_n = \infty$;*
*(ii) $\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=1}^{\infty} |\gamma_n \delta_n| < \infty$;*
*then $\lim_{n \to \infty} a_n = 0$.*

Lemma 2 (see [32]). *Let $H$ be a Hilbert space. A mapping $T : H \to H$ is $\beta$-inverse strongly monotone if and only if $\lambda T$ is firmly nonexpansive for $\lambda \in (0, \beta]$.*
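A numeric spot-check of the two inequalities in Lemma 2, using the assumed example $T = \tanh$ on $\mathbb{R}$ (which is monotone and 1-Lipschitz, hence 1-inverse strongly monotone, so $\beta T = T$ itself should be firmly nonexpansive):

```python
import math
import random

# T(x) = tanh(x) is monotone and 1-Lipschitz on the real line, hence 1-inverse
# strongly monotone; by Lemma 2, T (with beta = 1) must be firmly nonexpansive.
T = math.tanh
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    d = T(x) - T(y)
    # 1-inverse strong monotonicity: <Tx - Ty, x - y> >= |Tx - Ty|^2
    assert d * (x - y) >= d * d - 1e-12
    # firm nonexpansiveness: |Tx - Ty|^2 <= <Tx - Ty, x - y>
    assert d * d <= d * (x - y) + 1e-12
```

On the real line the two inequalities coincide, which makes the equivalence of the lemma easy to see in this one-dimensional case.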

Lemma 3 (see [33]). *Let $H$ be a Hilbert space and $\{x_n\}$ be a bounded sequence in $H$. Assume there exists a nonempty subset $S \subseteq H$ satisfying the properties*
*(i) $\lim_{n \to \infty} \|x_n - z\|$ exists for every $z \in S$;*
*(ii) every weak cluster point of $\{x_n\}$ belongs to $S$.*
*Then, there exists $x^{*} \in S$ such that $\{x_n\}$ converges weakly to $x^{*}$.*

Lemma 4 (see [34]). *Let $\{a_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{a_{n_k}\}$ of $\{a_n\}$ such that $a_{n_k} < a_{n_k + 1}$ for all $k$. Consider the sequence of integers $\{\sigma(n)\}_{n \ge n_0}$ defined by*

*$\sigma(n) = \max\{k \le n : a_k < a_{k+1}\}.$*

*Then, $\{\sigma(n)\}_{n \ge n_0}$ is a nondecreasing sequence verifying $\lim_{n \to \infty} \sigma(n) = \infty$ and, for all $n \ge n_0$, $a_{\sigma(n)} \le a_{\sigma(n)+1}$ and $a_n \le a_{\sigma(n)+1}$.*
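The index sequence $\sigma(n)$ of Lemma 4 can be sketched directly. The short list below is an assumed example (not from the paper), chosen so that the sequence "does not decrease at infinity" on the truncated range; the snippet verifies the three conclusions of the lemma on that range:

```python
def sigma(a, n):
    """sigma(n) = max{k <= n : a[k] < a[k+1]}, the index sequence of Lemma 4."""
    return max(k for k in range(n + 1) if a[k] < a[k + 1])

a = [3, 1, 2, 0, 4, 2, 5]  # truncation of a sequence with infinitely many upward jumps
assert sigma(a, 2) == 1 and sigma(a, 4) == 3 and sigma(a, 5) == 5

# sigma is nondecreasing, a[sigma(n)] <= a[sigma(n)+1], and a[n] <= a[sigma(n)+1].
for n in range(1, 6):
    s = sigma(a, n)
    assert sigma(a, n - 1) <= s if n > 1 else True
    assert a[s] <= a[s + 1]
    assert a[n] <= a[s + 1]
```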

#### 3. Yosida Approximation Iterative Methods

Let $f : H_1 \to H_1$ and $g : H_2 \to H_2$ be single-valued monotone mappings and $B_1 : H_1 \to 2^{H_1}$, $B_2 : H_2 \to 2^{H_2}$ be set-valued mappings such that $f + B_1$ and $g + B_2$ are set-valued maximal monotone mappings; let $J_{\lambda}^{f + B_1}$, $J_{\lambda}^{g + B_2}$ and $(f + B_1)_{\lambda}$, $(g + B_2)_{\lambda}$ be the resolvents and Yosida approximation operators of $f + B_1$ and $g + B_2$, respectively. We propose the following iterative methods to approximate a solution of SpMVIP.

*Algorithm 1. *For an arbitrary $x_0 \in H_1$, compute the iterates as follows:
where and are defined as
where , and such that
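Because the displayed update formulas are specific to the paper, the following Python sketch is only a schematic analogue: a Yosida-approximation iteration on an assumed scalar instance ($f + B_1$ and $g + B_2$ both equal to $2\,\mathrm{id}$, $Ax = 2x$, with illustrative step sizes), not the paper's exact Algorithm 1:

```python
# Toy, heavily simplified sketch of a Yosida-approximation iteration for a scalar
# SpMVIP instance. The update rule and step sizes are illustrative choices, NOT
# the paper's Algorithm 1. Here x* = 0 solves 0 in (f+B1)(x*) and 0 in (g+B2)(Ax*).

def yosida(x, lam, c):
    """Yosida approximation of M(u) = c*u: M_lam(x) = c*x / (1 + lam*c)."""
    return c * x / (1.0 + lam * c)

A, c1, c2 = 2.0, 2.0, 2.0      # A, f + B1 = c1*id, g + B2 = c2*id
lam, rho, tau = 0.5, 0.2, 0.5  # regularization parameter and illustrative step sizes

x = 1.0
for _ in range(50):
    y = x - rho * A * yosida(A * x, lam, c2)  # push Ax toward a zero of (g + B2)
    x = y - tau * yosida(y, lam, c1)          # push x toward a zero of (f + B1)

assert abs(x) < 1e-10  # the iterates approach the solution x* = 0
```

In this scalar setting each iteration contracts the error by a constant factor, which mirrors the role the Yosida operators play in the convergence analysis below.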

*Algorithm 2. *For an arbitrary $x_0 \in H_1$, compute the iterates as follows:
where and are defined as
where , , , , and such that .

#### 4. Main Results

We assume that the problem SpMVIP is consistent, and we denote its solution set by $\Gamma$.

First, we prove the following lemmas, which are used in the proofs of our main results.

Lemma 5. *Let $f : H_1 \to H_1$ be a single-valued monotone mapping and $B : H_1 \to 2^{H_1}$ be a set-valued mapping such that $f + B$ is a set-valued maximal monotone mapping. If $J_{\lambda}^{f + B}$ and $(f + B)_{\lambda}$ are the resolvent and Yosida approximation operators of $f + B$, respectively, then for $\lambda > 0$, the following are equivalent:*
*(i) $x^{*}$ is a solution of $0 \in f(x^{*}) + B(x^{*})$;*
*(ii) $x^{*} = J_{\lambda}^{f + B}(x^{*})$;*
*(iii) $(f + B)_{\lambda}(x^{*}) = 0$.*

*Proof. *The proof is trivial, being an immediate consequence of the definitions of the resolvent and Yosida approximation operator of the maximal monotone mapping $f + B$.☐
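The equivalences of Lemma 5 can be spot-checked numerically on an assumed scalar example $M(x) = c(x - s)$, standing in for $f + B$, whose unique zero is $s$ (all constants below are illustrative choices, not from the paper):

```python
# Numeric spot-check of Lemma 5 for the scalar maximal monotone map M(x) = c*(x - s),
# whose unique zero is s.

c, s, lam = 3.0, 1.5, 0.7

def J(x):
    """Resolvent (I + lam*M)^(-1): solve y + lam*c*(y - s) = x for y."""
    return (x + lam * c * s) / (1.0 + lam * c)

def M_lam(x):
    """Yosida approximation (I - J)/lam."""
    return (x - J(x)) / lam

# (i) s is the zero of M  <=>  (ii) s is a fixed point of J  <=>  (iii) M_lam(s) = 0.
assert abs(c * (s - s)) < 1e-12
assert abs(J(s) - s) < 1e-12
assert abs(M_lam(s)) < 1e-12

# A non-solution point satisfies none of the three conditions.
x = 4.0
assert abs(J(x) - x) > 1e-6 and abs(M_lam(x)) > 1e-6
```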

Theorem 6. *Let $H_1$, $H_2$ be real Hilbert spaces; $f : H_1 \to H_1$, $g : H_2 \to H_2$ be single-valued monotone mappings; $B_1$, $B_2$ be set-valued maximal monotone mappings such that $f + B_1$ and $g + B_2$ are maximal monotone; and $A : H_1 \to H_2$ be a bounded linear operator. Assume that the parameters of Algorithm 1 satisfy the stated conditions. Then, the sequence $\{x_n\}$ generated by Algorithm 1 converges weakly to a point of $\Gamma$.*

*Proof. *Let $x^{*} \in \Gamma$. Since the Yosida approximation operator is $\lambda$-inverse strongly monotone for $\lambda > 0$, by Algorithm 1 and (12), we have
Now, using (17), we estimate that
From (21) and (22), we get
Since is -inverse strongly monotone and using (12), we estimate
By (18), it turns out that
It follows from (24) and (25) that
Combining (23) and (26), we get
which implies that $\{x_n\}$ is Fejér monotone with respect to $\Gamma$ and hence bounded, which assures that $\lim_{n \to \infty} \|x_n - x^{*}\|$ exists for every $x^{*} \in \Gamma$. Keeping this in mind, from (27), we have
Due to the assumptions on the parameters and the properties of convergent series, we conclude that
Hence, there exist constants and such that
By Algorithm 1 and (30), we get
Let $\{x_{n_k}\}$ be a subsequence of $\{x_n\}$ that converges weakly to some point, which implies that the corresponding auxiliary subsequences also converge weakly to the same point. Recall that the Yosida approximation operator is $\lambda$-inverse strongly monotone; using (30), we get
Taking the limit, we obtain

Replacing the respective terms and repeating the same arguments, we obtain the corresponding limit for the second inclusion. This completes the proof.☐

Theorem 7. *Let $H_1$, $H_2$ be real Hilbert spaces; $f : H_1 \to H_1$, $g : H_2 \to H_2$ be single-valued monotone mappings; $B_1$, $B_2$ be set-valued maximal monotone mappings such that $f + B_1$ and $g + B_2$ are maximal monotone; and $A : H_1 \to H_2$ be a bounded linear operator. If the real parameter sequences of Algorithm 2 lie in $(0, 1)$ and satisfy the stated conditions,
then the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to a solution of SpMVIP.*

*Proof. *Let ; then, from (23) and (26) of the proof of Theorem 6, we have
Since the parameter conditions hold, we obtain the following. From Algorithm 2, we have
which implies that the sequence $\{x_n\}$ is bounded and, hence, the associated sequences are also bounded. Now,
Combining (35), (36), and (38), we obtain
We discuss the two possible cases.

Case 1. If the sequence is nonincreasing, then there exists a number $n_0$ such that it decreases for each $n \ge n_0$. Then, its limit exists and, hence, the difference of consecutive terms tends to zero. Thus, it follows from (39) that
From (40), we conclude that both of the above quantities tend to zero. We then observe from Algorithm 2 that
This shows that the sequence $\{x_n\}$ is asymptotically regular. By Theorem 6, the weak cluster points of $\{x_n\}$ solve the problem. Setting the terms accordingly and rewriting, we have
From (42) and Algorithm 2, we get
or
where

Using the stated conditions together with (40), we get
Thus, by Lemma 1, we obtain the desired strong convergence.

Case 2. If the sequence is not nonincreasing, we can select a subsequence of such that for all . In this case, we define a subsequence of positive integers with the properties
If for some , then it follows from (39) that