Abstract

In this paper, we establish an iterative algorithm by combining Yamada’s hybrid steepest descent method and Wang’s algorithm for finding common solutions of variational inequality problems and split feasibility problems. The strong convergence of the sequence generated by our iterative algorithm to such a common solution is proved in the setting of Hilbert spaces under suitable assumptions on the parameters. Moreover, we propose iterative algorithms for finding common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we give numerical examples illustrating our algorithms.

1. Introduction

In 2005, Censor et al. [1] introduced the multiple-sets split feasibility problem (MSSFP), which is formulated as follows: find a point
$$x^* \in C := \bigcap_{i=1}^{t} C_i \quad \text{such that} \quad Ax^* \in Q := \bigcap_{j=1}^{r} Q_j, \tag{1}$$
where $C_1, \ldots, C_t$ and $Q_1, \ldots, Q_r$ are nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear mapping. Denote by $\Omega$ the set of solutions of MSSFP (1). Many iterative algorithms have been developed to solve the MSSFP (see [1–3]). Moreover, it arises in many real-world fields, such as the inverse problem of intensity-modulated radiation therapy, image reconstruction, and signal processing (see [1, 4, 5] and the references therein).

When $t = r = 1$, the MSSFP is known as the split feasibility problem (SFP); it was first introduced by Censor and Elfving [5] and is formulated as follows: find a point
$$x^* \in C \quad \text{such that} \quad Ax^* \in Q. \tag{2}$$

Denote by $\Gamma$ the set of solutions of SFP (2).

Assume that the SFP is consistent (i.e., (2) has a solution). It is well known that $x^*$ solves (2) if and only if it solves the fixed point equation
$$x^* = P_C(I - \gamma A^*(I - P_Q)A)x^*, \tag{3}$$
where $\gamma$ is a positive constant, $A^*$ is the adjoint operator of $A$, and $P_C$ and $P_Q$ are the metric projections of $H_1$ onto $C$ and of $H_2$ onto $Q$, respectively (for more details, see [6]).
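To make the fixed point characterization (3) concrete, the following minimal Python sketch (our own illustration, not code from the paper) iterates the operator $T = P_C(I - \gamma A^*(I - P_Q)A)$ on a toy consistent SFP in which $C$ and $Q$ are Euclidean balls, so that both projections have closed forms; the random matrix $A$, the radii, and the step size $\gamma$ are all assumptions made for this example.

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + (radius / dist) * d

def T(x, A, gamma, P_C, P_Q):
    """The operator T = P_C(I - gamma * A^*(I - P_Q)A) from equation (3)."""
    Ax = A @ x
    return P_C(x - gamma * (A.T @ (Ax - P_Q(Ax))))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
gamma = 0.9 / np.linalg.norm(A, 2) ** 2            # gamma in (0, 1/||A||^2)
P_C = lambda x: project_ball(x, np.zeros(5), 2.0)  # C = ball of radius 2
P_Q = lambda y: project_ball(y, np.zeros(3), 1.0)  # Q = ball of radius 1

x = rng.standard_normal(5)
for _ in range(1000):                              # Picard iteration on T
    x = T(x, A, gamma, P_C, P_Q)
print(np.linalg.norm(A @ x - P_Q(A @ x)))          # ~0: Ax lies in Q, so x solves (2)
```

Since $0 \in C$ and $A0 \in Q$, this toy SFP is consistent, and the Picard iterates of the averaged operator $T$ approach a fixed point, i.e., a solution of (2).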

The variational inequality problem (VIP) was introduced by Stampacchia [7]; it is the problem of finding a point
$$x^* \in C \quad \text{such that} \quad \langle Fx^*, x - x^* \rangle \geq 0 \quad \text{for all } x \in C, \tag{4}$$
where $C$ is a nonempty closed convex subset of a Hilbert space $H$ and $F : H \to H$ is a mapping. The ideas of the VIP are applied in many fields, including mechanics, nonlinear programming, game theory, and economic equilibrium (see [8–12]).

In [13], we see that $x^*$ solves (4) if and only if, for any $\lambda > 0$, it solves the fixed point equation
$$x^* = P_C(I - \lambda F)x^*. \tag{5}$$

Moreover, it is well known that if $F$ is $L$-Lipschitz continuous and $\eta$-strongly monotone, then VIP (4) has a unique solution (see, e.g., [14]).
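As a small illustration of the characterization (5) (ours, not from the paper), the sketch below solves a toy VIP over a box by iterating $x \mapsto P_C(x - \lambda Fx)$ for an affine, strongly monotone $F$; the matrix $M$, the box $C$, and the step $\lambda$ are assumptions chosen so that the iteration is a contraction.

```python
import numpy as np

# F(x) = Mx + b with M symmetric positive definite, so F is eta-strongly
# monotone (eta = smallest eigenvalue) and L-Lipschitz (L = largest eigenvalue).
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
M = B.T @ B + np.eye(4)
b = rng.standard_normal(4)
F = lambda x: M @ x + b

eigs = np.linalg.eigvalsh(M)
eta, L = eigs[0], eigs[-1]
lam = eta / L**2                        # any lam in (0, 2*eta/L^2) works

P_C = lambda x: np.clip(x, -1.0, 1.0)   # projection onto the box [-1, 1]^4

x = np.zeros(4)
for _ in range(2000):
    x = P_C(x - lam * F(x))             # fixed-point iteration from (5)
print(np.max(np.abs(x - P_C(x - lam * F(x)))))   # ~0: fixed-point residual
```

Because $P_C(I - \lambda F)$ is a contraction for this choice of $\lambda$, the iterates converge to the unique solution of the VIP over the box.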

The SFP and the VIP include several important problems as special cases (see [15, 16]): for instance, the convex linear inverse problem and the split equality problem are special cases of the SFP, while the zero point problem and the minimization problem are special cases of the VIP. Jung [17] studied the common solution of the variational inequality problem and the split feasibility problem: find a point
$$x^* \in \Gamma \quad \text{such that} \quad \langle Fx^*, x - x^* \rangle \geq 0 \quad \text{for all } x \in \Gamma, \tag{6}$$
where $\Gamma$ is the solution set of SFP (2) and $F$ is an $\eta$-strongly monotone and $L$-Lipschitz continuous mapping. After that, for solving problem (6), Buong [2] considered the following algorithms, which were proposed in [14, 18], respectively:
$$x_{n+1} = (I - \lambda_{n+1} \mu F) T x_n, \quad n \geq 0, \tag{7}$$
$$x_{n+1} = (1 - \beta_n) x_n + \beta_n (I - \lambda_n \mu F) T x_n, \quad n \geq 1, \tag{8}$$
where $T := P_C(I - \gamma A^*(I - P_Q)A)$ and $\mu \in (0, 2\eta/L^2)$, under the following conditions:
(C1) $\lambda_n \to 0$ as $n \to \infty$ and $\sum_{n=1}^{\infty} \lambda_n = \infty$.
(C2) $0 < \liminf_{n \to \infty} \beta_n \leq \limsup_{n \to \infty} \beta_n < 1$.
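For orientation, the following fragment sketches one update of Yamada’s method in the standard form $x_{n+1} = (I - \lambda_{n+1}\mu F)Tx_n$; the step rule $\lambda_n = 1/(n+1)$ is just one concrete choice satisfying condition (C1). Note that the displayed forms of (7) and (8) above are reconstructions, so this is an illustrative sketch rather than the paper’s exact scheme.

```python
# A minimal sketch of one hybrid steepest descent update. T is any
# nonexpansive operator (e.g., the CQ operator from (3)), F the strongly
# monotone mapping of the VIP, and mu a constant in (0, 2*eta/L^2).

def hsd_step(x, n, T, F, mu):
    """One update x -> (I - lam_n * mu * F)(T x) with lam_n = 1/(n+1)."""
    y = T(x)                     # nonexpansive step toward the SFP solution set
    lam = 1.0 / (n + 1)          # lam_n -> 0 and sum(lam_n) = infinity   (C1)
    return y - lam * mu * F(y)   # steepest descent step for the VIP
```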

Moreover, Buong [2] considered the sequence generated by the following algorithm, which converges weakly to a solution of MSSFP (1):
$$x_{n+1} = UVx_n, \quad n \geq 1, \tag{9}$$
where $U := P_{C_t} \cdots P_{C_1}$ or $U := \sum_{i=1}^{t} \omega_i P_{C_i}$ and $V := I - \gamma A^*(I - T_Q)A$ with $T_Q := P_{Q_r} \cdots P_{Q_1}$ or $T_Q := \sum_{j=1}^{r} \delta_j P_{Q_j}$, in which $\omega_i$ and $\delta_j$, for $i = 1, \ldots, t$ and $j = 1, \ldots, r$, are positive real numbers such that $\sum_{i=1}^{t} \omega_i = 1$ and $\sum_{j=1}^{r} \delta_j = 1$.

Motivated by the aforementioned works, we establish an iterative algorithm by combining algorithms (7) and (8) for finding a solution of problem (6), and we prove the strong convergence of the sequence generated by our iterative algorithm to the solution of problem (6) in the setting of Hilbert spaces. Moreover, we propose iterative algorithms for finding common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we give numerical examples illustrating our algorithms.

2. Preliminaries

In order to establish our results, we recall the following definitions and preliminary results, which will be used in the sequel. Throughout this section, let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$.

Definition 1. A mapping $T : H \to H$ is called
(i) $L$-Lipschitz continuous if there exists a positive number $L$ such that $\|Tx - Ty\| \leq L\|x - y\|$ for all $x, y \in H$;
(ii) nonexpansive if (i) holds with $L = 1$;
(iii) $\eta$-strongly monotone if there exists a positive number $\eta$ such that $\langle Tx - Ty, x - y \rangle \geq \eta\|x - y\|^2$ for all $x, y \in H$;
(iv) firmly nonexpansive if $\|Tx - Ty\|^2 \leq \langle Tx - Ty, x - y \rangle$ for all $x, y \in H$;
(v) $\alpha$-averaged if $T = (1 - \alpha)I + \alpha S$ for some fixed $\alpha \in (0, 1)$ and a nonexpansive mapping $S$.
From [5], we know that the metric projection $P_C$ is firmly nonexpansive and $1/2$-averaged.
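The firm nonexpansiveness of the metric projection can be checked numerically; the short script below (our own sanity check, with an assumed box constraint) samples random pairs and verifies the inequality in Definition 1(iv).

```python
import numpy as np

# Check ||Px - Py||^2 <= <Px - Py, x - y> for P = projection onto [-1, 1]^5.
rng = np.random.default_rng(2)
P = lambda x: np.clip(x, -1.0, 1.0)

ok = True
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    lhs = np.linalg.norm(P(x) - P(y)) ** 2
    rhs = np.dot(P(x) - P(y), x - y)
    ok &= lhs <= rhs + 1e-12
print(ok)   # True: firm nonexpansiveness holds on all sampled pairs
```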
We collect some basic properties of averaged mappings in the following results.

Lemma 1 (see [16]). We have the following:
(i) The composite of finitely many averaged mappings is averaged. In particular, if $T_i$ is $\alpha_i$-averaged, where $\alpha_i \in (0, 1)$ for $i = 1, 2$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.
(ii) If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then $\bigcap_{i=1}^{N} \mathrm{Fix}(T_i) = \mathrm{Fix}(T_1 \cdots T_N)$.
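The following toy computation (ours) illustrates Lemma 1(ii) with two projections onto overlapping boxes: iterating their composite lands in the intersection of the two boxes, which is exactly the common fixed point set.

```python
import numpy as np

P1 = lambda x: np.clip(x, -1.0, 1.0)   # projection onto [-1, 1]^3, Fix = box 1
P2 = lambda x: np.clip(x, 0.0, 2.0)    # projection onto [0, 2]^3,  Fix = box 2

x = np.array([5.0, -3.0, 7.0])
for _ in range(50):                    # iterate the composite P1 o P2
    x = P1(P2(x))
print(x)                               # a point of [0, 1]^3 = Fix(P1 P2)
```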

Proposition 1 (see [19]). Let $D$ be a nonempty subset of $H$, $m \geq 1$ be an integer, and $T : D \to D$ be defined by
$$T := T_m T_{m-1} \cdots T_1. \tag{10}$$

For every $i \in \{1, \ldots, m\}$, let $\alpha_i \in (0, 1)$ and $T_i : D \to D$ be $\alpha_i$-averaged. Then, $T$ is $\alpha$-averaged, where
$$\alpha = \frac{m}{m - 1 + \frac{1}{\max_{1 \leq i \leq m} \alpha_i}}. \tag{11}$$

The following properties of nonexpansive mappings are convenient and will be used frequently below.

Lemma 2 (see [20]). Assume that $H_1$ and $H_2$ are Hilbert spaces. Let $A : H_1 \to H_2$ be a bounded linear mapping such that $A \neq 0$, and let $T : H_2 \to H_2$ be a nonexpansive mapping. Then, $I - \gamma A^*(I - T)A$, for $\gamma \in (0, 1/\|A\|^2)$, is $\gamma\|A\|^2$-averaged.

Proposition 2 (see [19]). Let $D$ be a nonempty subset of $H$, and let $\{T_i\}_{i=1}^{m}$ be a finite family of nonexpansive mappings from $D$ to $H$. Assume that $\omega_i \in (0, 1]$, for $i = 1, \ldots, m$, such that $\sum_{i=1}^{m} \omega_i = 1$. Suppose that, for every $i$, $T_i$ is $\alpha_i$-averaged; then, $\sum_{i=1}^{m} \omega_i T_i$ is $\alpha$-averaged, where $\alpha = \sum_{i=1}^{m} \omega_i \alpha_i$.
The following results play a crucial role in the next section.

Lemma 3 (see [14]). Let $\lambda$ be a real number in $(0, 1)$, and let $F$ be an $\eta$-strongly monotone and $L$-Lipschitz continuous mapping. Then, for each fixed $\mu \in (0, 2\eta/L^2)$, the mapping $T^{\lambda} := I - \lambda\mu F$ is contractive with constant $1 - \lambda\tau$, i.e.,
$$\|T^{\lambda}x - T^{\lambda}y\| \leq (1 - \lambda\tau)\|x - y\| \quad \text{for all } x, y \in H, \tag{12}$$
where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu L^2)} \in (0, 1]$.
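Lemma 3 can be verified numerically for an affine mapping $F(x) = Mx$ with $M$ symmetric positive definite, for which $\eta$ and $L$ are the extreme eigenvalues; the script below (our own check, with assumed data) compares the exact Lipschitz constant of $I - \lambda\mu F$ with the bound $1 - \lambda\tau$.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
M = B.T @ B + np.eye(4)              # SPD, so F(x) = Mx is strongly monotone
eigs = np.linalg.eigvalsh(M)
eta, L = eigs[0], eigs[-1]           # strong monotonicity / Lipschitz constants

mu = eta / L**2                      # mu in (0, 2*eta/L^2)
lam = 0.5
tau = 1.0 - np.sqrt(1.0 - mu * (2.0 * eta - mu * L**2))

T = np.eye(4) - lam * mu * M         # the linear map T^lam = I - lam*mu*F
lip = np.linalg.norm(T, 2)           # its exact Lipschitz constant
print(lip, 1.0 - lam * tau, lip <= 1.0 - lam * tau + 1e-12)   # bound holds
```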

Theorem 1 (see [21]). Let $F$ be an $L$-Lipschitz continuous and $\eta$-strongly monotone self-mapping of $H$. Assume that $\{T_i\}_{i=1}^{N}$ is a finite family of nonexpansive mappings from $H$ to $H$ such that $C := \bigcap_{i=1}^{N} \mathrm{Fix}(T_i) \neq \emptyset$. Then, the sequence $\{x_n\}$ defined by the following algorithm converges strongly to the unique solution of the variational inequality (4):
$$x_{n+1} = (1 - \beta_{n+1})x_n + \beta_{n+1} T_{[n+1]}(I - \lambda_{n+1} \mu F)x_n, \quad n \geq 0, \tag{13}$$
where $T_{[n]} := T_{n \bmod N}$, $\mu \in (0, 2\eta/L^2)$, $\lambda_n \in (0, 1)$ for $n \geq 1$, and $\{\beta_n\}$ satisfies the following conditions:
(i) $\lambda_n \to 0$ as $n \to \infty$ and $\sum_{n=1}^{\infty} \lambda_n = \infty$.
(ii) $\beta_n \in [a, b]$, for some $a, b \in (0, 1)$, and $\beta_{n+1} - \beta_n \to 0$ as $n \to \infty$.

Theorem 2 (see [22]). Let $F$, $\mu$, $\{T_i\}_{i=1}^{N}$, $\{\lambda_n\}$, and $\{\beta_n\}$ be as in Theorem 1. Then, the sequence $\{x_n\}$ defined by the following algorithm:
$$\begin{cases} y_n = (1 - \beta_n)x_n + \beta_n T_{[n]}x_n, \\ x_{n+1} = (I - \lambda_n \mu F)T_{[n]}y_n, \end{cases} \quad n \geq 1, \tag{14}$$
converges strongly to the unique solution of variational inequality (4).

3. Main Results

In this section, we consider the following iterative algorithm, obtained by combining Yamada’s hybrid steepest descent method [14] and Wang’s algorithm [18], for solving problem (6):
$$\begin{cases} y_n = (1 - \beta_n)x_n + \beta_n Tx_n, \\ x_{n+1} = (I - \lambda_n \mu F)Ty_n, \end{cases} \quad n \geq 1, \tag{15}$$
where $T := P_C(I - \gamma A^*(I - P_Q)A)$. If we set $\beta_n = 0$ for all $n \geq 1$, then (15) reduces to (7) studied by Buong [2]. On the other hand, in the Numerical Example section, we present an example illustrating that the two-step method (15) is more efficient than the one-step method (8) studied by Buong [2]: the sequence generated by the two-step method (15) requires fewer iterations and converges faster than the sequence generated by the one-step method (8).
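The following Python sketch implements the two-step method (15) as reconstructed above (a relaxation step followed by a steepest descent step) on a toy SFP with ball constraints; the data, the choice $F(x) = x - x_0$, and the parameter sequences are our own assumptions, so this is an illustrative sketch rather than the paper’s exact experiment.

```python
import numpy as np

def project_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + (radius / n) * d

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))
gamma = 0.9 / np.linalg.norm(A, 2) ** 2
P_C = lambda x: project_ball(x, np.zeros(5), 2.0)
P_Q = lambda y: project_ball(y, np.zeros(3), 1.0)
T = lambda x: P_C(x - gamma * (A.T @ ((A @ x) - P_Q(A @ x))))

F = lambda x: x - np.ones(5)      # 1-strongly monotone and 1-Lipschitz
mu = 1.0                          # mu in (0, 2*eta/L^2) = (0, 2)

x = rng.standard_normal(5)
for n in range(1, 3001):
    lam, beta = 1.0 / n, 0.5      # choices satisfying (C1) and (C2)
    y = (1.0 - beta) * x + beta * T(x)      # relaxation step
    z = T(y)
    x = z - lam * mu * F(z)                 # steepest descent step
print(np.linalg.norm(x - T(x)))   # ~0: x is nearly a fixed point of T
```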

Throughout our results, unless otherwise stated, we assume that $H_1$ and $H_2$ are two real Hilbert spaces and $A : H_1 \to H_2$ is a bounded linear mapping with $A \neq 0$. Let $F$ be an $\eta$-strongly monotone and $L$-Lipschitz continuous mapping on $H_1$ with some positive constants $\eta$ and $L$, and let $\mu \in (0, 2\eta/L^2)$. Assume that $\gamma \in (0, 1/\|A\|^2)$ is a fixed number.

Theorem 3. Let $C$ and $Q$ be two nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Then, the sequence $\{x_n\}$ defined by (15), where the sequences $\{\lambda_n\}$ and $\{\beta_n\}$ satisfy conditions (C1) and (C2), respectively, converges strongly, as $n \to \infty$, to the solution of (6).

Proof. From Lemma 2, we have that $V := I - \gamma A^*(I - P_Q)A$ is $\gamma\|A\|^2$-averaged. Since $P_C$ is $1/2$-averaged, by Lemma 1(i), we get that $T = P_C V$ is $\alpha$-averaged, where $\alpha = (1 + \gamma\|A\|^2)/2$. Moreover, we obtain that $x^* \in \Gamma$ if and only if $x^* \in \mathrm{Fix}(T)$. It follows from Definition 1(v) that $T = (1 - \alpha)I + \alpha S$, where $S$ is nonexpansive. Then, iterative algorithm (15) can be rewritten as follows:
$$\begin{cases} y_n = (1 - b_n)x_n + b_n Sx_n, \\ x_{n+1} = (I - \lambda_n \mu F)Ty_n, \end{cases} \quad n \geq 1, \tag{16}$$
where $b_n := \alpha\beta_n$ and $S = (1 - \frac{1}{\alpha})I + \frac{1}{\alpha}T$. Since $S$ and $T$ are nonexpansive, the mapping $x \mapsto T((1 - b_n)x + b_n Sx)$ is also nonexpansive. Therefore, the strong convergence of (15) to the element in the solution set of (6) follows by Theorem 2.
In [23], Miao and Li showed weak convergence to an element of the solution set of (6) of the sequence $\{x_n\}$ generated by the following algorithm:
$$x_{n+1} = (1 - \beta_n)x_n + \beta_n P_C(I - \gamma A^*(I - P_Q)A)(I - \lambda_n \mu F)x_n, \quad n \geq 1, \tag{17}$$
where $\{\lambda_n\}$ satisfies condition (C3): $\sum_{n=1}^{\infty} \lambda_n < \infty$. Next, we will show strong convergence for (17) when $\{\lambda_n\}$ instead satisfies condition (C1).

Theorem 4. Let $C$ and $Q$ be two nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Then, the sequence $\{x_n\}$ defined by (17), where the sequence $\{\lambda_n\}$ satisfies condition (C1) and the sequence $\{\beta_n\}$ satisfies condition (C2), converges strongly, as $n \to \infty$, to the solution of (6).

Proof. As in the proof of Theorem 3, one can rewrite iterative algorithm (17) as follows:
$$x_{n+1} = (1 - \beta_n)x_n + \beta_n T(I - \lambda_n \mu F)x_n, \quad n \geq 1, \tag{18}$$
where $T := P_C(I - \gamma A^*(I - P_Q)A)$ is nonexpansive and $\mathrm{Fix}(T) = \Gamma$. Since $T$ is nonexpansive, the strong convergence of (17) to the element in the solution set of (6) follows by Theorem 1, applied with $N = 1$ and $T_1 := T$.
Moreover, we obtain the following results, which concern the common solution of the variational inequality problem and the multiple-sets split feasibility problem, i.e., finding a point
$$x^* \in \Omega \quad \text{such that} \quad \langle Fx^*, x - x^* \rangle \geq 0 \quad \text{for all } x \in \Omega, \tag{19}$$
where $\Omega$ is the solution set of MSSFP (1) and $F$ is an $\eta$-strongly monotone and $L$-Lipschitz continuous mapping. This problem has been studied in [2].

Theorem 5. Let $\{C_i\}_{i=1}^{t}$ and $\{Q_j\}_{j=1}^{r}$ be two finite families of nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Assume that $\{\lambda_n\}$ and $\{\beta_n\}$ satisfy conditions (C1) and (C2), respectively, and that the parameters $\omega_i$ and $\delta_j$ satisfy the following conditions:
(a) $\omega_i \in (0, 1)$, for $i = 1, \ldots, t$, such that $\sum_{i=1}^{t} \omega_i = 1$.
(b) $\delta_j \in (0, 1)$, for $j = 1, \ldots, r$, such that $\sum_{j=1}^{r} \delta_j = 1$.

Then, as $n \to \infty$, the sequence $\{x_n\}$, defined by
$$\begin{cases} y_n = (1 - \beta_n)x_n + \beta_n UVx_n, \\ x_{n+1} = (I - \lambda_n \mu F)UVy_n, \end{cases} \quad n \geq 1, \tag{20}$$
with one of the following cases:
(A1) $U = P_{C_t} \cdots P_{C_1}$ and $T_Q = P_{Q_r} \cdots P_{Q_1}$,
(A2) $U = \sum_{i=1}^{t} \omega_i P_{C_i}$ and $T_Q = \sum_{j=1}^{r} \delta_j P_{Q_j}$,
(A3) $U = P_{C_t} \cdots P_{C_1}$ and $T_Q = \sum_{j=1}^{r} \delta_j P_{Q_j}$,
(A4) $U = \sum_{i=1}^{t} \omega_i P_{C_i}$ and $T_Q = P_{Q_r} \cdots P_{Q_1}$,
where $V := I - \gamma A^*(I - T_Q)A$, converges strongly to the element in the solution set of (19).

Proof. Let $G := UV$. We will show that $G$ is averaged.
In the case of (A1), $U = P_{C_t} \cdots P_{C_1}$ and $T_Q = P_{Q_r} \cdots P_{Q_1}$. Since $P_{C_i}$ is $1/2$-averaged for all $i = 1, \ldots, t$, by Proposition 1, we get that $U$ is $\alpha_1$-averaged, where $\alpha_1 = t/(t + 1)$. Similarly, we have that $T_Q$ is also averaged and so is nonexpansive. By using Lemma 2, we deduce that $V = I - \gamma A^*(I - T_Q)A$ is $\gamma\|A\|^2$-averaged. It follows from Lemma 1(i) that $G = UV$ is $\alpha$-averaged with $\alpha = \alpha_1 + \gamma\|A\|^2 - \alpha_1\gamma\|A\|^2$.
In the case of (A2), $U = \sum_{i=1}^{t} \omega_i P_{C_i}$ and $T_Q = \sum_{j=1}^{r} \delta_j P_{Q_j}$; then, by using Proposition 2 and condition (a), we obtain that $U$ is $1/2$-averaged. From condition (b) and taking into account that $P_{Q_j}$ is nonexpansive, for all $j = 1, \ldots, r$, we have that $T_Q$ is also nonexpansive. It follows from Lemma 2 that $V$ is $\gamma\|A\|^2$-averaged. Thus, $G = UV$ is $\alpha$-averaged with $\alpha = (1 + \gamma\|A\|^2)/2$.
Cases (A3) and (A4) are similar. This implies that $G = (1 - \alpha)I + \alpha S$, where $S$ is nonexpansive. Moreover, by Lemma 1(ii), we get that
$$\mathrm{Fix}(G) = \mathrm{Fix}(U) \cap \mathrm{Fix}(V) = \Omega. \tag{21}$$
Then, iterative algorithm (20) can be rewritten as follows:
$$\begin{cases} y_n = (1 - b_n)x_n + b_n Sx_n, \\ x_{n+1} = (I - \lambda_n \mu F)Gy_n, \end{cases} \quad n \geq 1, \tag{22}$$
where $b_n := \alpha\beta_n$ and $S = (1 - \frac{1}{\alpha})I + \frac{1}{\alpha}G$. Since $S$ and $G$ are nonexpansive, the mapping $x \mapsto G((1 - b_n)x + b_n Sx)$ is also nonexpansive. Thus, the strong convergence of (20) to the element in the solution set of (19) follows by Theorem 2.
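In practice, the four cases (A1)–(A4) only differ in how the operators $U$ and $T_Q$ are assembled from the individual projections; the sketch below (our own illustration, with assumed box and ball constraints) builds both the composition and the convex combination variants and forms $G = UV$ for case (A1).

```python
import numpy as np
from functools import reduce

boxes = [(-1.0, 1.0), (-2.0, 0.5)]                # two sets C_i (boxes in R^5)
proj_C = [lambda x, lo=lo, hi=hi: np.clip(x, lo, hi) for lo, hi in boxes]

def ball_projection(c, r):
    def P(y):
        d = y - c
        n = np.linalg.norm(d)
        return y if n <= r else c + (r / n) * d
    return P

proj_Q = [ball_projection(np.zeros(3), 1.0),      # two sets Q_j (balls in R^3)
          ball_projection(np.zeros(3), 1.5)]

compose = lambda ops: lambda x: reduce(lambda v, P: P(v), ops, x)
combine = lambda ops, w: lambda x: sum(wi * P(x) for wi, P in zip(w, ops))

U_comp = compose(proj_C)                # U as in cases (A1) and (A3)
U_comb = combine(proj_C, [0.5, 0.5])    # U as in cases (A2) and (A4)
TQ_comp = compose(proj_Q)               # T_Q as in cases (A1) and (A4)
TQ_comb = combine(proj_Q, [0.5, 0.5])   # T_Q as in cases (A2) and (A3)

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 5))
gamma = 0.9 / np.linalg.norm(A, 2) ** 2

def make_V(TQ):
    """V = I - gamma * A^*(I - T_Q)A for a chosen T_Q."""
    return lambda x: x - gamma * (A.T @ ((A @ x) - TQ(A @ x)))

G_A1 = lambda x: U_comp(make_V(TQ_comp)(x))   # G = UV in case (A1)
```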

Theorem 6. Let $\{C_i\}_{i=1}^{t}$, $\{Q_j\}_{j=1}^{r}$, $\{\lambda_n\}$, and $\{\beta_n\}$ be as in Theorem 5. Then, as $n \to \infty$, the sequence $\{x_n\}$, defined by
$$x_{n+1} = (1 - \beta_n)x_n + \beta_n UV(I - \lambda_n \mu F)x_n, \quad n \geq 1, \tag{23}$$
with one of the cases (A1)–(A4), converges strongly to an element in the solution set of (19).

Proof. As in the proof of Theorem 5, one can rewrite iterative algorithm (23) as follows:
$$x_{n+1} = (1 - \beta_n)x_n + \beta_n G(I - \lambda_n \mu F)x_n, \quad n \geq 1, \tag{24}$$
where $G := UV$ is nonexpansive and $\mathrm{Fix}(G) = \Omega$. Since $G$ is nonexpansive, the strong convergence of (23) to the element in the solution set of (19) follows by Theorem 1, applied with $N = 1$ and $T_1 := G$.

4. Numerical Example

In this section, we present a numerical example comparing algorithm (8), which was given by Buong [2], and algorithm (15) (new method) on the following test problem in [2]: find an element
$$x^* \in \Omega \quad \text{such that} \quad \varphi(x^*) = \min_{x \in \Omega} \varphi(x), \tag{25}$$
where $\varphi$ is a convex function having a strongly monotone and Lipschitz continuous derivative $F := \nabla\varphi$ on the Euclidean space $\mathbb{R}^k$, $C_i = \{x \in \mathbb{R}^k : \langle a_i, x \rangle \leq b_i\}$, for $a_i \in \mathbb{R}^k$ and $b_i \in \mathbb{R}$, $Q_j = \{y \in \mathbb{R}^m : \langle c_j, y \rangle \leq d_j\}$, for $c_j \in \mathbb{R}^m$ and $d_j \in \mathbb{R}$, and $A$ is an $m \times k$-matrix.

Example 1. We consider test problem (25) with $t = r = 1$, where $\varphi(x) := \frac{1}{2}\|x - x_0\|^2$ for some fixed $x_0 \in \mathbb{R}^k$, so that $F(x) = x - x_0$. Hence, $F$ is a $1$-Lipschitz continuous and $1$-strongly monotone mapping, i.e., $L = \eta = 1$. For each algorithm, we choose $\lambda_n$, for all $n \geq 1$, satisfying condition (C1) and $\beta_n$, for all $n \geq 1$, satisfying condition (C2). Taking a tolerance $\varepsilon > 0$, the stopping criterion is defined by $\|x_{n+1} - x_n\| < \varepsilon$, where $\{x_n\}$ is the sequence generated by each algorithm. The numerical results are listed in Table 1 with different initial points, where we report the number of iterations and the CPU time in seconds. In Figures 1 and 2, we present graphs illustrating the number of iterations for both methods, using the stopping criterion defined above, with the different initial points shown in Table 1.
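For readers who wish to reproduce this kind of comparison, the following sketch (ours; it uses the reconstructed forms of (8) and (15) and synthetic data, not the paper’s test data or Table 1) counts iterations of both methods under the common stopping criterion.

```python
import numpy as np

def run(method, T, F, mu, x0, eps=1e-6, max_iter=100000):
    """Iterate until ||x_{n+1} - x_n|| < eps; return (x, iteration count)."""
    x = x0.copy()
    for n in range(1, max_iter + 1):
        lam, beta = 1.0 / n, 0.5            # choices satisfying (C1) and (C2)
        if method == "one-step":            # reconstruction of algorithm (8)
            z = T(x)
            x_new = (1 - beta) * x + beta * (z - lam * mu * F(z))
        else:                               # reconstruction of algorithm (15)
            y = (1 - beta) * x + beta * T(x)
            z = T(y)
            x_new = z - lam * mu * F(z)
        if np.linalg.norm(x_new - x) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

# Usage: with T, F, and mu built as in the sketch following (15),
#   _, iters_one = run("one-step", T, F, 1.0, np.zeros(5))
#   _, iters_two = run("two-step", T, F, 1.0, np.zeros(5))
# and the iteration counts can then be compared across initial points.
```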

Remark 1. From the numerical results in Table 1 and Figures 1 and 2, we see that algorithm (15) (new method) requires fewer iterations and converges faster than algorithm (8) (Buong’s method).

Example 2. In this example, we consider algorithm (23) for solving test problem (25) in the multiple-sets case $t, r > 1$. Let $\varphi$, $A$, $\{\lambda_n\}$, $\{\beta_n\}$, and $\mu$ be as in Example 1. In the numerical experiment, we take the stopping criterion $\|x_{n+1} - x_n\| < \varepsilon$. The numerical results are listed in Table 2 for the different cases of $U$ and $T_Q$. In Figures 3 and 4, we present graphs illustrating the number of iterations for all cases of $U$ and $T_Q$, using the stopping criterion above, with the different initial points appearing in Table 2. Moreover, Table 3 shows the effect of different choices of $\gamma$.

Remark 2. We observe from the numerical results of Table 2 that algorithm (23) converges fastest when $U$ and $T_Q$ are chosen as in (A4) and slowest when they are chosen as in (A3). Moreover, fewer iteration steps and less CPU time are required for convergence when $\gamma$ is chosen very small and close to zero.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The first author is thankful to the Science Achievement Scholarship of Thailand. The authors would like to thank the Department of Mathematics, Faculty of Science, Naresuan University (grant no. R2564E049), for the support.