International Journal of Mathematics and Mathematical Sciences


Research Article | Open Access

Volume 2021 |Article ID 9980309 | https://doi.org/10.1155/2021/9980309

Panisa Lohawech, Anchalee Kaewcharoen, Ali Farajzadeh, "Convergence Theorems for the Variational Inequality Problems and Split Feasibility Problems in Hilbert Spaces", International Journal of Mathematics and Mathematical Sciences, vol. 2021, Article ID 9980309, 7 pages, 2021. https://doi.org/10.1155/2021/9980309

Convergence Theorems for the Variational Inequality Problems and Split Feasibility Problems in Hilbert Spaces

Academic Editor: Sergejs Solovjovs
Received: 20 Mar 2021
Revised: 08 May 2021
Accepted: 17 May 2021
Published: 03 Jun 2021

Abstract

In this paper, we establish an iterative algorithm by combining Yamada’s hybrid steepest descent method and Wang’s algorithm for finding the common solutions of variational inequality problems and split feasibility problems. The strong convergence of the sequence generated by our suggested iterative algorithm to such a common solution is proved in the setting of Hilbert spaces under suitable assumptions on the parameters. Moreover, we propose iterative algorithms for finding the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we give numerical examples illustrating our algorithms.

1. Introduction

In 2005, Censor et al. [1] introduced the multiple-sets split feasibility problem (MSSFP), which is formulated as follows: find a point $x^* \in C := \bigcap_{i=1}^{t} C_i$ such that $Ax^* \in Q := \bigcap_{j=1}^{r} Q_j$, (1) where $C_1, \dots, C_t$ and $Q_1, \dots, Q_r$ are nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear mapping. Denote by $\Gamma$ the set of solutions of MSSFP (1). Many iterative algorithms have been developed to solve the MSSFP (see [1–3]). Moreover, the MSSFP arises in many fields in the real world, such as the inverse problem of intensity-modulated radiation therapy, image reconstruction, and signal processing (see [1, 4, 5] and the references therein).

When $t = r = 1$, the MSSFP is known as the split feasibility problem (SFP), which was first introduced by Censor and Elfving [5] and is formulated as follows: find a point $x^* \in C$ such that $Ax^* \in Q$. (2)

Denote by $\Omega$ the set of solutions of SFP (2).

Assume that the SFP is consistent (i.e., (2) has a solution). It is well known that $x^*$ solves (2) if and only if it solves the fixed point equation $x^* = P_C(I - \gamma A^*(I - P_Q)A)x^*$, (3) where $\gamma$ is a positive constant, $A^*$ is the adjoint operator of $A$, and $P_C$ and $P_Q$ are the metric projections of $H_1$ and $H_2$ onto $C$ and $Q$, respectively (for more details, see [6]).
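The fixed point characterization (3) can be iterated directly, which is the classical CQ-type scheme $x_{k+1} = P_C(I - \gamma A^*(I - P_Q)A)x_k$. The following is a minimal numerical sketch under assumed data: $C$ and $Q$ are taken to be boxes (hypothetical sets chosen only so that the projections are simple clippings).

```python
import numpy as np

def cq_iteration(A, x0, proj_C, proj_Q, gamma=None, iters=1000, tol=1e-10):
    """Picard iteration on x = P_C(x - gamma * A^T (A x - P_Q(A x))).

    For a consistent SFP this converges when 0 < gamma < 2 / ||A||^2,
    since the iterated operator is then averaged.
    """
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # safe default in (0, 2/||A||^2)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        x_new = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical instance: C = [0,1]^2, Q = [0.2,0.8]^2, A = I, so the
# solution set of the SFP is exactly [0.2,0.8]^2.
A = np.eye(2)
sol = cq_iteration(A, np.array([2.0, -1.0]),
                   lambda z: np.clip(z, 0.0, 1.0),
                   lambda z: np.clip(z, 0.2, 0.8))
# sol converges to [0.8, 0.2], the point of the solution set nearest the start.
```

Here the adjoint $A^*$ becomes the matrix transpose; in infinite-dimensional Hilbert spaces the same iteration applies with the true adjoint.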

The variational inequality problem (VIP) was introduced by Stampacchia [7]; it is the problem of finding a point $x^* \in C$ such that $\langle F(x^*), x - x^* \rangle \ge 0$ for all $x \in C$, (4) where $C$ is a nonempty closed convex subset of a Hilbert space $H$ and $F : C \to H$ is a mapping. The ideas of the VIP are applied in many fields, including mechanics, nonlinear programming, game theory, and economic equilibrium (see [8–12]).

In [13], we see that $x^*$ solves (4) if and only if it solves the fixed point equation $x^* = P_C(I - \mu F)x^*$ for every $\mu > 0$. (5)

Moreover, it is well known that if $F$ is $L$-Lipschitz continuous and $\eta$-strongly monotone, then VIP (4) has a unique solution (see, e.g., [14]).
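The fixed point equation (5) also yields a simple solver: when $F$ is $\eta$-strongly monotone and $L$-Lipschitz, the map $P_C(I - \mu F)$ is a contraction for $0 < \mu < 2\eta/L^2$, so Picard iteration converges to the unique VIP solution. A minimal sketch with a hypothetical instance ($F(x) = x - b$ and a box $C$, chosen only for illustration):

```python
import numpy as np

def solve_vip(F, proj_C, x0, mu, iters=500, tol=1e-12):
    # Picard iteration on the fixed point equation x = P_C(x - mu F(x));
    # a contraction when 0 < mu < 2*eta/L**2 for eta-strongly monotone,
    # L-Lipschitz F, hence the VIP solution is unique.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_new = proj_C(x - mu * F(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Hypothetical instance: F(x) = x - b (so eta = L = 1), C = [0,1]^2; the
# unique VIP solution is the projection of b onto C, here [1.0, 0.5].
b = np.array([2.0, 0.5])
x_star = solve_vip(lambda x: x - b, lambda z: np.clip(z, 0.0, 1.0),
                   np.zeros(2), mu=0.5)
```

The step size bound $2\eta/L^2$ matches the constant $\mu$ used throughout the paper's hypotheses.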

The SFP and the VIP include several problems as special cases (see [15, 16]); indeed, the convex linear inverse problem and the split equality problem are special cases of the SFP, and the zero point problem and the minimization problem are special cases of the VIP. Jung [17] studied the common solution of the variational inequality problem and the split feasibility problem: find a point $x^* \in \Omega$ such that $\langle F(x^*), x - x^* \rangle \ge 0$ for all $x \in \Omega$, (6) where $\Omega$ is the solution set of SFP (2) and $F$ is an $\eta$-strongly monotone and $L$-Lipschitz continuous mapping. After that, for solving problem (6), Buong [2] considered algorithms (7) and (8), which were proposed in [14, 18], respectively, with $\mu \in (0, 2\eta/L^2)$ and with the parameter sequences $\{\lambda_k\}$ and $\{\beta_k\}$ under the following conditions: (C1) $\lambda_k \to 0$ as $k \to \infty$ and $\sum_{k=1}^{\infty} \lambda_k = \infty$. (C2) $\{\beta_k\} \subset (a, b)$ for some $a, b \in (0, 1)$.

Moreover, Buong [2] considered a sequence generated by an algorithm that converges weakly to a solution of MSSFP (1), in which the operators are built from the projections $P_{C_i}$ and $P_{Q_j}$ either as compositions or as convex combinations with positive real weights summing to one.

Motivated by the aforementioned works, we establish an iterative algorithm by combining algorithms (7) and (8) for finding the solution of problem (6) and prove the strong convergence of the generated sequence to the solution of problem (6) in the setting of Hilbert spaces. Moreover, we propose iterative algorithms for finding the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we give numerical examples illustrating our algorithms.

2. Preliminaries

In order to establish our results, we now recall the definitions and preliminary results that will be used in the sequel. Throughout this section, let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$.

Definition 1. A mapping $T : C \to H$ is called
(i) $L$-Lipschitz continuous, if $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in C$, where $L$ is a positive number.
(ii) Nonexpansive, if (i) holds with $L = 1$.
(iii) $\eta$-strongly monotone, if $\langle Tx - Ty, x - y \rangle \ge \eta\|x - y\|^2$ for all $x, y \in C$, where $\eta$ is a positive number.
(iv) Firmly nonexpansive, if $\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle$ for all $x, y \in C$.
(v) $\alpha$-averaged, if $T = (1 - \alpha)I + \alpha S$ for some fixed $\alpha \in (0, 1)$ and a nonexpansive mapping $S$.
From [5], we know that the metric projection $P_C$ is firmly nonexpansive and $(1/2)$-averaged.
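The firm nonexpansiveness of the metric projection in Definition 1(iv) is easy to verify numerically. The following sketch checks the inequality $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y,\, x - y \rangle$ on random points for a hypothetical box $C = [0,1]^3$, whose projection is a coordinatewise clipping:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_C(z):
    # Metric projection onto the box C = [0, 1]^3.
    return np.clip(z, 0.0, 1.0)

# Check ||Px - Py||^2 <= <Px - Py, x - y> (Definition 1(iv)) on random pairs.
ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    Px, Py = proj_C(x), proj_C(y)
    ok = ok and np.dot(Px - Py, Px - Py) <= np.dot(Px - Py, x - y) + 1e-12
print(ok)  # -> True
```

Firm nonexpansiveness is equivalent to being $(1/2)$-averaged, i.e., $P_C = (I + S)/2$ for some nonexpansive $S$.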
We collect some basic properties of averaged mappings in the following results.

Lemma 1 (see [16]). We have the following:
(i) The composite of finitely many averaged mappings is averaged. In particular, if $T_i$ is $\alpha_i$-averaged, where $\alpha_i \in (0, 1)$ for $i = 1, 2$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.
(ii) If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then $\bigcap_{i=1}^{N} \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_N)$.

Proposition 1 (see [19]). Let $D$ be a nonempty subset of $H$, let $N \ge 1$ be an integer, and let $T : D \to H$ be defined by $T = \sum_{i=1}^{N} \omega_i T_i$.

For every $i \in \{1, \dots, N\}$, let $\omega_i \in (0, 1]$ with $\sum_{i=1}^{N} \omega_i = 1$ and let $T_i : D \to H$ be $\alpha_i$-averaged. Then, $T$ is $\alpha$-averaged, where $\alpha = \sum_{i=1}^{N} \omega_i \alpha_i$.

The following properties of nonexpansive mappings are convenient and helpful in what follows.

Lemma 2 (see [20]). Assume that $H_1$ and $H_2$ are Hilbert spaces. Let $A : H_1 \to H_2$ be a bounded linear mapping such that $A \ne 0$, and let $T : H_2 \to H_2$ be a nonexpansive mapping. Then, $I - \gamma A^*(I - T)A$, for $0 < \gamma < 1/\|A\|^2$, is $\gamma\|A\|^2$-averaged.

Proposition 2 (see [19]). Let $D$ be a nonempty subset of $H$, and let $\{T_i\}_{i=1}^{N}$ be a finite family of nonexpansive mappings from $D$ to $D$. Suppose that, for every $i \in \{1, \dots, N\}$, $T_i$ is $\alpha_i$-averaged with $\alpha_i \in (0, 1)$; then, the composite $T_1 \cdots T_N$ is $\alpha$-averaged, where $\alpha = s/(1 + s)$ with $s = \sum_{i=1}^{N} \alpha_i/(1 - \alpha_i)$.
The following results play a crucial role in the next section.

Lemma 3 (see [14]). Let $\lambda$ be a real number in $(0, 1)$ and let $\mu \in (0, 2\eta/L^2)$. Let $F$ be an $\eta$-strongly monotone and $L$-Lipschitz continuous mapping. The mapping $T^{\lambda} := I - \lambda\mu F$, for each fixed $\lambda$, is contractive with constant $1 - \lambda\tau$, i.e., $\|T^{\lambda}x - T^{\lambda}y\| \le (1 - \lambda\tau)\|x - y\|$ for all $x, y$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu L^2)} \in (0, 1]$.

Theorem 1 (see [21]). Let $F$ be an $L$-Lipschitz continuous and $\eta$-strongly monotone self-mapping of $H$. Assume that $\{T_i\}_{i=1}^{N}$ is a finite family of nonexpansive mappings from $H$ to $H$ such that $\bigcap_{i=1}^{N} \operatorname{Fix}(T_i) \ne \emptyset$. Then, the sequence $\{x_k\}$ defined by the following algorithm converges strongly to the unique solution of the variational inequality (4) over $\bigcap_{i=1}^{N} \operatorname{Fix}(T_i)$: $x_{k+1} = (I - \lambda_{k+1}\mu F)T_N^{k} \cdots T_1^{k} x_k$, where $T_i^{k} = (1 - \beta_i^{k})I + \beta_i^{k}T_i$ for $i = 1, \dots, N$, $\mu \in (0, 2\eta/L^2)$, and the parameters satisfy the following conditions: (i) $\lambda_k \to 0$ as $k \to \infty$ and $\sum_{k=1}^{\infty} \lambda_k = \infty$. (ii) $\beta_i^{k} \in (a, b)$, for some $a, b \in (0, 1)$, and $|\beta_i^{k+1} - \beta_i^{k}| \to 0$ as $k \to \infty$.
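The mechanism behind results of this type can be sketched numerically. The following is a minimal hybrid steepest descent loop (a simplified form with a single nonexpansive mapping and no relaxation, not the full algorithm of [21]): iterate $x_{k+1} = T x_k - \lambda_{k+1}\mu F(T x_k)$ with $\lambda_k = 1/k$, which satisfies condition (i). The instance below (box projection $T$ and affine $F$) is hypothetical, chosen so the exact VIP solution is known.

```python
import numpy as np

def hybrid_steepest_descent(T, F, x0, mu, iters=5000):
    # x_{k+1} = T(x_k) - lam_{k+1} * mu * F(T(x_k)); lam_k = 1/k satisfies
    # lam_k -> 0 and sum(lam_k) = infinity (condition (i)).
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        Tx = T(x)
        x = Tx - (1.0 / k) * mu * F(Tx)
    return x

# Hypothetical instance: T = projection onto C = [0,1]^2, so Fix(T) = C,
# and F(x) = x - b; the unique VIP solution over Fix(T) is P_C(b) = [1.0, 0.5].
b = np.array([2.0, 0.5])
x = hybrid_steepest_descent(lambda z: np.clip(z, 0.0, 1.0),
                            lambda z: z - b, np.zeros(2), mu=0.5)
# x approaches [1.0, 0.5]; convergence is slow since lam_k decays like 1/k.
```

Unlike the simple projection method, the correction term here acts only through vanishing steps $\lambda_k$, which is what allows the fixed-point constraint set to be encoded by a general nonexpansive mapping rather than a projection.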

Theorem 2 (see [22]). Let $F$, $\eta$, $L$, $\mu$, and $\{T_i\}_{i=1}^{N}$ be as in Theorem 1. Then, the sequence $\{x_k\}$ defined by the following algorithm converges strongly to the unique solution of variational inequality (4): $x_{k+1} = (I - \lambda_{k+1}\mu F)T_{[k+1]} x_k$, where $T_{[k]} := T_{k \bmod N}$.

3. Main Results

In this section, we consider a two-step iterative algorithm, (15), obtained by combining Yamada’s hybrid steepest descent method [14] and Wang’s algorithm [18] for solving problem (6), with $\mu \in (0, 2\eta/L^2)$ and parameter sequences $\{\lambda_k\}$ and $\{\beta_k\}$. For a particular choice of the parameters, (15) reduces to (7) studied by Buong [2]. On the other hand, in the Numerical Example section, we present an example illustrating that the two-step method (15) is more efficient than the one-step method (8) studied by Buong [2]: the sequence generated by (15) requires fewer iterations and converges faster than the sequence generated by (8).
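The overall structure of such a scheme can be sketched as follows: the SFP enters through the averaged operator $T = P_C(I - \gamma A^*(I - P_Q)A)$ of (3), and the steepest descent correction $-\lambda_k \mu F$ steers the iterates toward the VIP solution over $\operatorname{Fix}(T) = \Omega$. This is a hedged sketch only; the exact parameter coupling of algorithm (15) may differ. The instance ($A = I$, box sets, affine $F$) is hypothetical.

```python
import numpy as np

def solve_vip_over_sfp(A, F, proj_C, proj_Q, x0, mu, iters=5000):
    # T is the averaged SFP operator P_C(I - gamma A^*(I - P_Q)A); the
    # correction -lam_k * mu * F(Tx) drives the iterates toward the VIP
    # solution over Fix(T) = Omega. lam_k = 1/k satisfies condition (C1).
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        Ax = A @ x
        Tx = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
        x = Tx - (1.0 / k) * mu * F(Tx)
    return x

# Hypothetical instance: A = I, C = [0,1]^2, Q = [0.2,0.8]^2, so
# Omega = [0.2,0.8]^2, and F(x) = x - b; the solution of (6) is
# P_Omega(b) = [0.8, 0.5].
b = np.array([2.0, 0.5])
x = solve_vip_over_sfp(np.eye(2), lambda z: z - b,
                       lambda z: np.clip(z, 0.0, 1.0),
                       lambda z: np.clip(z, 0.2, 0.8),
                       np.zeros(2), mu=0.5)
# x approaches [0.8, 0.5].
```

Note that the VIP solution [0.8, 0.5] lies on the boundary of $\Omega$, which is exactly the situation where a plain feasibility solver (e.g., the CQ iteration alone) would not select the desired point.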

Throughout our results, unless otherwise stated, we assume that $H_1$ and $H_2$ are two real Hilbert spaces and $A : H_1 \to H_2$ is a bounded linear mapping. Let $F$ be an $\eta$-strongly monotone and $L$-Lipschitz continuous mapping on $H_1$ with some positive constants $\eta$ and $L$. Assume that $\mu \in (0, 2\eta/L^2)$ is a fixed number.

Theorem 3. Let $C$ and $Q$ be two closed convex subsets of $H_1$ and $H_2$, respectively. Then, as $k \to \infty$, the sequence $\{x_k\}$ defined by (15), where the sequences $\{\lambda_k\}$ and $\{\beta_k\}$ satisfy conditions (C1) and (C2), respectively, converges strongly to the solution of (6).

Proof. From Lemma 2, we have that $I - \gamma A^*(I - P_Q)A$ is $\gamma\|A\|^2$-averaged for $0 < \gamma < 1/\|A\|^2$. Since $P_C$ is $(1/2)$-averaged, by Lemma 1(i), we get that $T := P_C(I - \gamma A^*(I - P_Q)A)$ is $\alpha$-averaged, where $\alpha = (1 + \gamma\|A\|^2)/2$. Moreover, we obtain that $x^* \in \Omega$ if and only if $x^* = Tx^*$. It follows from Definition 1(v) that $T = (1 - \alpha)I + \alpha S$, where $S$ is nonexpansive. Then, iterative algorithm (15) can be rewritten in the form (16) with the two nonexpansive mappings $I$ and $S$. Since $I$ and $S$ are nonexpansive, the strong convergence of (15) to the element in the solution set of (6) follows by Theorem 2.
In [23], Miao and Li showed the weak convergence of the sequence $\{x_k\}$ generated by algorithm (17) to an element of the fixed point set, under a condition (C3) on the parameter sequence. Next, we show the strong convergence of (17) when the parameter sequence instead satisfies condition (C1).

Theorem 4. Let $C$ and $Q$ be two closed convex subsets of $H_1$ and $H_2$, respectively. Then, as $k \to \infty$, the sequence $\{x_k\}$ defined by (17), where the sequence $\{\lambda_k\}$ satisfies condition (C1) and the remaining parameter sequences satisfy condition (C2), converges strongly to the solution of (6).

Proof. As in the proof of Theorem 3, one can rewrite iterative algorithm (17) in the form (18) with the nonexpansive mapping $S$. Since $S$ is nonexpansive, the strong convergence of (17) to the element in the solution set of (6) follows by Theorem 1.
Moreover, we obtain the following results, which solve the common solution of the variational inequality problem and the multiple-sets split feasibility problem, i.e., find a point $x^* \in \Gamma$ such that $\langle F(x^*), x - x^* \rangle \ge 0$ for all $x \in \Gamma$, (19) where $\Gamma$ is the solution set of (1) and $F$ is an $\eta$-strongly monotone and $L$-Lipschitz continuous mapping. This problem has been studied in [2].
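For the multiple-sets case, the projections onto the individual sets can enter the SFP operator either as a composition (an averaged mapping by Lemma 1(i)) or as an equal-weight convex combination (averaged by Proposition 1). The following sketch shows one step of such a CQ-type iteration; the sets and matrix below are hypothetical, chosen so the MSSFP solution set is known.

```python
import numpy as np

def msfp_step(x, A, projs_C, projs_Q, gamma, mode="composite"):
    # One step of a CQ-type iteration for the MSSFP. With mode="composite"
    # the projections are composed; with mode="average" they are combined
    # with equal weights. Both variants yield an averaged operator whose
    # fixed points solve a consistent MSSFP.
    def apply(projs, z):
        if mode == "composite":
            for P in projs:
                z = P(z)
            return z
        return sum(P(z) for P in projs) / len(projs)
    Ax = A @ x
    return apply(projs_C, x - gamma * A.T @ (Ax - apply(projs_Q, Ax)))

# Hypothetical instance in R^2 with A = I: C1 = [0,1]^2, C2 = [0.5,2]^2,
# Q1 = [0.6,3]^2, Q2 = [0,0.9]^2, so the MSSFP solution set is [0.6,0.9]^2.
A = np.eye(2)
projs_C = [lambda z: np.clip(z, 0.0, 1.0), lambda z: np.clip(z, 0.5, 2.0)]
projs_Q = [lambda z: np.clip(z, 0.6, 3.0), lambda z: np.clip(z, 0.0, 0.9)]
x = np.array([5.0, -5.0])
for _ in range(100):
    x = msfp_step(x, A, projs_C, projs_Q, gamma=1.0)
# x lands in the solution set [0.6, 0.9]^2.
```

By Lemma 1(ii), the composition of the projections has exactly the common points of the sets as its fixed points, so only membership in the intersection is guaranteed, not nearness to the starting point.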

Theorem 5. Let $\{C_i\}_{i=1}^{t}$ and $\{Q_j\}_{j=1}^{r}$ be two finite families of closed convex subsets of $H_1$ and $H_2$, respectively. Assume that the sequences $\{\lambda_k\}$ and $\{\beta_k\}$ satisfy conditions (C1) and (C2), respectively, and the parameters $\{\gamma_i\}$ and $\{\delta_j\}$ satisfy the following conditions:
(a) $\gamma_i > 0$ for $i = 1, \dots, t$ such that $\sum_{i=1}^{t} \gamma_i = 1$.
(b) $\delta_j > 0$ for $j = 1, \dots, r$ such that $\sum_{j=1}^{r} \delta_j = 1$.

Then, as $k \to \infty$, the sequence $\{x_k\}$, defined by (20) with one of the following cases for the mappings $U$ and $V$:
(A1) $U = \sum_{i=1}^{t} \gamma_i P_{C_i}$ and $V = \sum_{j=1}^{r} \delta_j P_{Q_j}$,
(A2) $U = P_{C_t} \cdots P_{C_1}$ and $V = P_{Q_r} \cdots P_{Q_1}$,
(A3) and (A4) the two mixed cases, in which one of $U$ and $V$ is a composition and the other is a convex combination,
converges strongly to the element in the solution set of (19).

Proof. Let $T = U(I - \gamma A^*(I - V)A)$. We will show that $T$ is averaged.
In the case of (A1), $U = \sum_{i=1}^{t} \gamma_i P_{C_i}$ and $V = \sum_{j=1}^{r} \delta_j P_{Q_j}$. Since $P_{C_i}$ is $(1/2)$-averaged for all $i$, by Proposition 1, we get that $U$ is $(1/2)$-averaged. Similarly, we have that $V$ is also averaged and so is nonexpansive. By using Lemma 2, we deduce that $I - \gamma A^*(I - V)A$ is $\gamma\|A\|^2$-averaged for $0 < \gamma < 1/\|A\|^2$. It follows from Lemma 1(i) that $T$ is $\alpha$-averaged with $\alpha = (1 + \gamma\|A\|^2)/2$.
If $U = P_{C_t} \cdots P_{C_1}$ and $V = P_{Q_r} \cdots P_{Q_1}$, then by using Proposition 2 and condition (a), we obtain that $U$ is averaged. From condition (b) and taking into account that $P_{Q_j}$ is nonexpansive for all $j$, we have that $V$ is also nonexpansive. It follows from Lemma 2 that $I - \gamma A^*(I - V)A$ is $\gamma\|A\|^2$-averaged. Thus, $T$ is $\alpha$-averaged by Lemma 1(i).
Cases (A3) and (A4) are similar. This implies that $T = (1 - \alpha)I + \alpha S$, where $S$ is nonexpansive. Moreover, by Lemma 1, we get that the fixed point set of $T$ coincides with the solution set of MSSFP (1). Then, iterative algorithm (20) can be rewritten in the form (22) with the two nonexpansive mappings $I$ and $S$. Since $I$ and $S$ are nonexpansive, $T$ is nonexpansive. Thus, the strong convergence of (20) to the element in the solution set of (19) follows by Theorem 2.

Theorem 6. Let $\{C_i\}_{i=1}^{t}$, $\{Q_j\}_{j=1}^{r}$, and the parameters be as in Theorem 5. Then, as $k \to \infty$, the sequence $\{x_k\}$, defined by (23) with one of the cases (A1)–(A4), converges strongly to an element in the solution set of (19).

Proof. As in the proof of Theorem 5, one can rewrite iterative algorithm (23) in the form (24) with the nonexpansive mapping $S$. Since $S$ is nonexpansive, the strong convergence of (23) to the element in the solution set of (19) follows by Theorem 1.

4. Numerical Example

In this section, we present a numerical example comparing algorithm (8), given by Buong [2], with algorithm (15) (the new method) on the following test problem from [2]: find an element $x^* \in \Omega$ such that $f(x^*) = \min_{x \in \Omega} f(x)$, (25) where $f$ is a convex function having a strongly monotone and Lipschitz continuous derivative on a Euclidean space, the sets $C_i$ and $Q_j$ are closed convex subsets, and $A$ is a matrix.

Example 1. We consider test problem (25) with the data as in [2] for some fixed problem size. Then, the derivative $F = f'$ is an $L$-Lipschitz continuous and $\eta$-strongly monotone mapping. For each algorithm, we use the same parameter sequences for all $k$, and the stopping criterion is defined by a fixed tolerance on the iterates. The numerical results are listed in Table 1 for different initial points $x_0$, where $n$ is the number of iterations and time is the CPU time in seconds. In Figures 1 and 2, we present graphs illustrating the number of iterations for both methods using the stopping criterion defined above with the different initial points shown in Table 1.


Table 1: Number of iterations $n$ and CPU time (seconds) for the two methods, for two test instances and two initial points.

Initial point     Method          n        Time (s)      n          Time (s)
(first)           Buong method    29461    0.364595      2946204    31.362283
                  New method      11784    0.241371      1178481    23.411679
(second)          Buong method    30632    0.565431      3063343    33.468210
                  New method      12252    0.324808      1225336    25.570356

Remark 1. From the numerical results in Table 1 and Figures 1 and 2, we see that algorithm (15) (the new method) requires fewer iterations and converges faster than algorithm (8) (the Buong method).

Example 2. In this example, we consider algorithm (23) for solving test problem (25) with multiple sets, with the data as in Example 1. In the numerical experiment, we take the same stopping criterion. The numerical results are listed in Table 2 for the different cases of $U$ and $V$. In Figures 3 and 4, we present graphs illustrating the number of iterations for all cases of $U$ and $V$ using the stopping criterion above with the different initial points appearing in Table 2. Moreover, Table 3 shows the effect of different choices of the parameter.


Table 2: Number of iterations $n$ and CPU time (seconds) for cases (A1)–(A4) with two different initial points.

Initial point               A1          A2          A3          A4
(first)      n              28577       24264       28577       24264
             Time (s)       1.491225    1.355074    1.534414    1.282528
(second)     n              33407       31438       33407       31438
             Time (s)       1.746868    1.693069    1.816897    1.690618

Table 3: Effect of different choices of the parameter (0.1, 0.2, 0.3) with two different initial points.

                            0.1         0.2         0.3
(first)      n              9675        19200       28577
             Time (s)       0.669508    1.245136    1.666702
(second)     n              11311       22447       33407
             Time (s)       1.372600    1.958486

Remark 2. We observe from the numerical results in Table 2 that algorithm (23) converges fastest when $U$ and $V$ satisfy (A4) and slowest when $U$ and $V$ satisfy (A3). Moreover, fewer iteration steps and less CPU time are required when the parameter is chosen very small and close to zero.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The first author is thankful to the Science Achievement Scholarship of Thailand. The authors would like to thank the Department of Mathematics, Faculty of Science, Naresuan University (grant no. R2564E049), for the support.

References

  1. Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, “The multiple-sets split feasibility problem and its applications for inverse problems,” Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
  2. N. Buong, “Iterative algorithms for the multiple-sets split feasibility problem in Hilbert spaces,” Numerical Algorithms, vol. 76, no. 3, pp. 783–798, 2017.
  3. J. Zhao, D. Hou, and H. Zong, “Several iterative algorithms for solving the multiple-set split common fixed-point problem of averaged operators,” Journal of Nonlinear Functional Analysis, vol. 2019, Article ID 39, 2019.
  4. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
  5. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” Numerical Algorithms, vol. 8, no. 2, pp. 221–239, 1994.
  6. H. K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” Inverse Problems, vol. 26, no. 10, Article ID 105018, 2010.
  7. G. Stampacchia, “Formes bilineaires coercitives sur les ensembles convexes,” Comptes Rendus de l’Académie des Sciences Paris, vol. 258, pp. 4413–4416, 1964.
  8. L. C. Ceng, Q. H. Ansari, and J. C. Yao, “Mann-type steepest-descent and modified hybrid steepest-descent methods for variational inequalities in Banach spaces,” Numerical Functional Analysis and Optimization, vol. 29, no. 9-10, pp. 987–1033, 2008.
  9. L. C. Ceng, M. Teboulle, and J. C. Yao, “Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed-point problems,” Journal of Optimization Theory and Applications, vol. 146, no. 1, pp. 19–31, 2010.
  10. M. Fukushima, “A relaxed projection method for variational inequalities,” Mathematical Programming, vol. 35, no. 1, pp. 58–70, 1986.
  11. D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, SIAM, Philadelphia, PA, USA, 2000.
  12. H. Yang and M. G. H. Bell, “Traffic restraint, road pricing and network equilibrium,” Transportation Research Part B: Methodological, vol. 31, no. 4, pp. 303–314, 1997.
  13. A. Cegielski, Iterative Methods for Fixed Point Problems in Hilbert Spaces, Springer, Berlin, Germany, 2012.
  14. I. Yamada, “The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings,” in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, D. Butnariu, Y. Censor, and S. Reich, Eds., New York, NY, USA, 2001.
  15. Y. Luo, “An inertial splitting algorithm for solving inclusion problems and its applications to compressed sensing,” Journal of Applied and Numerical Optimization, vol. 2, pp. 279–295, 2020.
  16. H. K. Xu, “Averaged mappings and the gradient-projection algorithm,” Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 360–378, 2011.
  17. J. S. Jung, “Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem,” Journal of Nonlinear Sciences and Applications, vol. 9, no. 6, pp. 4214–4225, 2016.
  18. L. Wang, “An iterative method for nonexpansive mapping in Hilbert spaces,” Journal of Fixed Point Theory and Applications, vol. 2007, Article ID 28619, 2007.
  19. P. L. Combettes and I. Yamada, “Compositions and convex combinations of averaged nonexpansive operators,” Journal of Mathematical Analysis and Applications, vol. 425, no. 1, pp. 55–70, 2015.
  20. W. Takahashi, H. K. Xu, and J. C. Yao, “Iterative methods for generalized split feasibility problems in Hilbert spaces,” Set-Valued and Variational Analysis, vol. 23, no. 2, pp. 205–221, 2015.
  21. N. Buong and L. T. Duong, “An explicit iterative algorithm for a class of variational inequalities,” Journal of Optimization Theory and Applications, vol. 151, no. 3, pp. 513–524, 2011.
  22. H. Zhou and P. Wang, “A simpler explicit iterative algorithm for a class of variational inequalities in Hilbert spaces,” Journal of Optimization Theory and Applications, vol. 161, no. 3, pp. 716–727, 2014.
  23. Y. Miao and J. Li, “Weak and strong convergence of an iterative method for nonexpansive mappings in Hilbert spaces,” Applicable Analysis and Discrete Mathematics, vol. 2, no. 2, pp. 197–204, 2008.

Copyright © 2021 Panisa Lohawech et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
