Journal of Applied Mathematics

Special Issue

Applications of Fixed Point and Approximate Algorithms

Research Article | Open Access

Volume 2012 | Article ID 412413 | 7 pages | https://doi.org/10.1155/2012/412413

On New Proximal Point Methods for Solving the Variational Inequalities

Academic Editor: Yonghong Yao
Received: 03 Oct 2011
Accepted: 19 Oct 2011
Published: 28 Nov 2011

Abstract

It is well known that variational inequalities are equivalent to fixed point problems. We use this alternative equivalent formulation to suggest and analyze some new proximal point methods for solving variational inequalities. These new methods include the explicit, the implicit, and the extragradient methods as special cases. The convergence of the new methods is analyzed under suitable conditions. The results proved in this paper may stimulate further research in this direction.

1. Introduction

Variational inequalities, whose origin can be traced back to Stampacchia [1], provide a unified framework for studying a wide class of seemingly unrelated problems arising in various branches of pure and applied sciences. It is well known that variational inequalities are equivalent to fixed point problems. This alternative equivalent formulation has played an important and fundamental role in the existence theory, numerical methods, and other aspects of variational inequalities. It has been used to suggest the projection iterative method, the implicit iterative method, and the extragradient method of Korpelevich [2] for solving variational inequalities. It has been shown [3] that the implicit iterative method and the extragradient method are equivalent. We remark, however, that the implicit iterative method and the explicit iterative method are distinct methods. We use this alternative equivalent formulation to suggest and analyze some new proximal point methods, which include the implicit and explicit methods as special cases; this is the main motivation of this paper. We also consider the convergence of these methods under suitable conditions. We hope that the ideas and techniques of this paper will stimulate further research in this area of pure and applied sciences.

2. Preliminaries

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $K$ be a nonempty, closed, and convex set in $H$.

For a given nonlinear operator $T : H \to H$, we consider the problem of finding $u \in K$ such that
$$\langle Tu, v - u\rangle \ge 0, \quad \forall v \in K, \tag{2.1}$$
which is called the variational inequality, introduced and studied by Stampacchia [1].

For the applications, formulations, numerical methods, and other aspects of variational inequalities and related equilibrium problems, see [1–14] and the references therein.

We now recall some well-known results and concepts.

Lemma 2.1. Let $K$ be a nonempty, closed, and convex set in $H$. Then, for a given $z \in H$, $u \in K$ satisfies the inequality
$$\langle u - z, v - u\rangle \ge 0, \quad \forall v \in K, \tag{2.2}$$
if and only if
$$u = P_K z, \tag{2.3}$$
where $P_K$ is the projection of $H$ onto the closed and convex set $K$.

It is well known that the projection operator $P_K$ is nonexpansive, that is,
$$\|P_K u - P_K v\| \le \|u - v\|, \quad \forall u, v \in H. \tag{2.4}$$
This property plays a very important part in the study of variational inequalities and related optimization problems.
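To make the role of the projection operator concrete, the following minimal Python sketch (our illustration; the box-shaped set $K$ and the random test points are assumptions, not part of the paper) computes $P_K$ for $K = [-1, 1]^n$ by componentwise clipping and checks the nonexpansiveness inequality (2.4) numerically.

```python
# Illustrative sketch (not from the paper): the projection P_K onto the box
# K = [-1, 1]^n is componentwise clipping; we verify (2.4) on random points.
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """Projection P_K of z onto the box K = [lo, hi]^n."""
    return np.clip(z, lo, hi)

rng = np.random.default_rng(0)
u, v = rng.normal(size=5), rng.normal(size=5)
lhs = np.linalg.norm(project_box(u) - project_box(v))  # ||P_K u - P_K v||
rhs = np.linalg.norm(u - v)                            # ||u - v||
assert lhs <= rhs + 1e-12                              # nonexpansiveness (2.4)
```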

Using Lemma 2.1, one can easily show that the variational inequality (2.1) is equivalent to finding $u \in K$ such that
$$u = P_K[u - \rho T u], \tag{2.5}$$
where $\rho > 0$ is a constant.

Definition 2.2. An operator $T : H \to H$ is said to be strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Tu - Tv, u - v\rangle \ge \alpha\|u - v\|^2, \quad \forall u, v \in H, \tag{2.6}$$
and Lipschitz continuous if there exists a constant $\beta > 0$ such that
$$\|Tu - Tv\| \le \beta\|u - v\|, \quad \forall u, v \in H. \tag{2.7}$$
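As a simple illustration (added here, not part of the original text): on $H = \mathbb{R}^n$, the affine operator $Tu = Au + b$ with $A$ symmetric positive definite satisfies both conditions, with $\alpha = \lambda_{\min}(A)$ and $\beta = \|A\|$, since $\langle Tu - Tv, u - v\rangle = \langle A(u - v), u - v\rangle \ge \lambda_{\min}(A)\|u - v\|^2$ and $\|Tu - Tv\| = \|A(u - v)\| \le \|A\|\,\|u - v\|$.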

3. Main Results

In this section, we use the fixed point formulation (2.5) to suggest a new unified implicit method for solving the variational inequality (2.1); this is the main motivation of this paper. Using this equivalent fixed point formulation, one can suggest the following iterative method for solving the variational inequality (2.1).

Algorithm 3.1. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = P_K[u_n - \rho T u_n], \quad n = 0, 1, 2, \ldots. \tag{3.1}$$

Algorithm 3.1 is known as the projection iterative method. For the convergence analysis of Algorithm 3.1, see Noor [8].
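For readers who wish to experiment, here is a minimal Python sketch of Algorithm 3.1 (our illustration; the affine test operator, the box $K$, and the stopping rule are assumptions made for this example, not specified in the paper).

```python
# A minimal sketch of Algorithm 3.1: u_{n+1} = P_K[u_n - rho*T(u_n)].
# The test operator T(u) = A u + b, the box K, and the tolerance are illustrative.
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    return np.clip(z, lo, hi)

def algorithm_3_1(T, u0, rho, P_K, tol=1e-10, max_iter=10_000):
    u = u0
    for _ in range(max_iter):
        u_next = P_K(u - rho * T(u))           # the update (3.1)
        if np.linalg.norm(u_next - u) <= tol:  # stop when the iterates settle
            return u_next
        u = u_next
    return u

A = np.array([[2.0, 0.5], [0.5, 1.0]])  # symmetric positive definite: alpha ~ 0.79, beta ~ 2.21
b = np.array([1.0, -2.0])
T = lambda u: A @ u + b
rho = 0.3                               # satisfies 0 < rho < 2*alpha/beta^2 for these constants
u_star = algorithm_3_1(T, np.zeros(2), rho, project_box)
# Residual of the fixed point characterization (2.5): should be ~ 0 at a solution.
print(np.linalg.norm(u_star - project_box(u_star - rho * T(u_star))))
```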

For a given $\lambda \in [0, 1]$, we can rewrite (2.5) as
$$u = P_K[u - \rho T u + \lambda\rho(Tu - Tu)]. \tag{3.2}$$
The added term vanishes identically, but it suggests a nontrivial splitting once the two occurrences of $u$ in it are discretized at different iterates. This fixed point formulation is used to suggest the following new proximal point iterative method for solving the variational inequality (2.1).

Algorithm 3.2. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = P_K\bigl[u_n - \rho T u_{n+1} + \lambda\rho\bigl(T u_{n+1} - T u_n\bigr)\bigr], \quad n = 0, 1, 2, \ldots. \tag{3.3}$$

Note that Algorithm 3.2 is an implicit-type iterative method. It is clear that for $\lambda = 1$, Algorithm 3.2 reduces to Algorithm 3.1. For $\lambda = 0$, Algorithm 3.2 collapses to the following implicit iterative method for solving the variational inequality (2.1).

Algorithm 3.3. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = P_K[u_n - \rho T u_{n+1}], \quad n = 0, 1, 2, \ldots. \tag{3.4}$$

For the convergence analysis of Algorithm 3.3, see Noor [3] and the references therein.
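Algorithm 3.3 (and likewise Algorithm 3.2) is implicit, since $u_{n+1}$ appears on both sides of the update. One hypothetical way to realize it in practice, not prescribed by the paper, is to solve each outer step approximately with an inner fixed point loop; the inner map $w \mapsto P_K[u_n - \rho T w]$ is a contraction whenever $\rho\beta < 1$, so the loop converges for small enough $\rho$. A Python sketch under these assumptions:

```python
# A hypothetical realization of the implicit update (3.4), not prescribed by the
# paper: each outer step solves w = P_K[u_n - rho*T(w)] by inner fixed point
# iterations, which contract whenever rho*beta < 1 (beta = Lipschitz constant of T).
import numpy as np

def implicit_step(T, u_n, rho, P_K, inner_tol=1e-12, inner_iter=500):
    w = u_n                                   # warm start at the current iterate
    for _ in range(inner_iter):
        w_next = P_K(u_n - rho * T(w))
        if np.linalg.norm(w_next - w) <= inner_tol:
            return w_next
        w = w_next
    return w

def algorithm_3_3(T, u0, rho, P_K, tol=1e-10, max_iter=10_000):
    """u_{n+1} = P_K[u_n - rho*T(u_{n+1})], solved approximately at every step."""
    u = u0
    for _ in range(max_iter):
        u_next = implicit_step(T, u, rho, P_K)
        if np.linalg.norm(u_next - u) <= tol:
            return u_next
        u = u_next
    return u
```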

In order to implement Algorithm 3.2, we use the predictor-corrector technique. We use Algorithm 3.1 as the predictor and Algorithm 3.2 as the corrector. Consequently, we obtain the following two-step iterative method for solving the variational inequality (2.1).

Algorithm 3.4. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$y_n = P_K[u_n - \rho T u_n], \tag{3.5}$$
$$u_{n+1} = P_K\bigl[u_n - \rho T y_n + \lambda\rho\bigl(T y_n - T u_n\bigr)\bigr], \quad n = 0, 1, 2, \ldots. \tag{3.6}$$
Algorithm 3.4 is a new two-step iterative method for solving the variational inequality (2.1).
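Algorithm 3.4 is straightforward to implement once $P_K$ and $T$ are available. The following Python sketch (our illustration; the stopping rule is an assumption) performs the predictor step (3.5) and the corrector step (3.6); setting lam to 0, 1, or 1/2 reproduces the special cases discussed next.

```python
# A minimal sketch of the two-step method, Algorithm 3.4 (lam plays the role of lambda).
import numpy as np

def algorithm_3_4(T, u0, rho, lam, P_K, tol=1e-10, max_iter=10_000):
    u = u0
    for _ in range(max_iter):
        y = P_K(u - rho * T(u))                                   # predictor (3.5)
        u_next = P_K(u - rho * T(y) + lam * rho * (T(y) - T(u)))  # corrector (3.6)
        if np.linalg.norm(u_next - u) <= tol:
            return u_next
        u = u_next
    return u
```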
For $\lambda = 0$, Algorithm 3.4 reduces to the following iterative method for solving the variational inequality (2.1).

Algorithm 3.5. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$y_n = P_K[u_n - \rho T u_n], \qquad u_{n+1} = P_K[u_n - \rho T y_n], \quad n = 0, 1, 2, \ldots, \tag{3.7}$$
which is known as the extragradient method and is due to Korpelevich [2].

For $\lambda = 1/2$, Algorithm 3.4 reduces to the following iterative method for solving the variational inequality (2.1), which appears to be a new one.

Algorithm 3.6. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$y_n = P_K[u_n - \rho T u_n], \qquad u_{n+1} = P_K\Bigl[u_n - \rho\,\frac{T y_n + T u_n}{2}\Bigr], \quad n = 0, 1, 2, \ldots. \tag{3.8}$$

We would like to mention that one can deduce several iterative methods for solving the variational inequality and related optimization problems by choosing a suitable value of the parameter $\lambda$. This clearly shows that Algorithm 3.4 is a unified method that includes the previously known implicit and predictor-corrector methods as special cases.

We now consider the convergence criteria of Algorithm 3.4; this is the content of our next result.

Theorem 3.7. Let the operator $T$ be strongly monotone with constant $\alpha > 0$ and Lipschitz continuous with constant $\beta > 0$. If there exists a constant $\rho > 0$ such that
$$\theta_1 = \sqrt{1 - 2\alpha\lambda\rho + \beta^2\lambda^2\rho^2} + \rho(1-\lambda)\beta\sqrt{1 - 2\alpha\rho + \beta^2\rho^2} < 1, \tag{3.9}$$
then the approximate solution $u_{n+1}$ obtained from Algorithm 3.4 converges strongly to the exact solution $u \in K$ of the variational inequality (2.1).

Proof. Let $u \in K$ be a solution of (2.1), and let $u_{n+1}$ be the approximate solution obtained from Algorithm 3.4. Then, from (2.5) and (3.5), we have
$$\|y_n - u\| = \bigl\|P_K[u_n - \rho T u_n] - P_K[u - \rho T u]\bigr\| \le \|u_n - u - \rho(T u_n - T u)\|. \tag{3.10}$$
From the strong monotonicity and the Lipschitz continuity of the operator $T$, we obtain
$$\|u_n - u - \rho(T u_n - T u)\|^2 = \|u_n - u\|^2 - 2\rho\langle T u_n - T u, u_n - u\rangle + \rho^2\|T u_n - T u\|^2 \le \bigl(1 - 2\alpha\rho + \beta^2\rho^2\bigr)\|u_n - u\|^2. \tag{3.11}$$
From (3.10) and (3.11), we obtain
$$\|y_n - u\| \le \sqrt{1 - 2\alpha\rho + \beta^2\rho^2}\,\|u_n - u\| = \theta\|u_n - u\|, \tag{3.12}$$
where
$$\theta = \sqrt{1 - 2\alpha\rho + \beta^2\rho^2}. \tag{3.13}$$
From (2.5), (3.6), (3.9), (3.12), and (3.13), we have
$$\begin{aligned}
\|u_{n+1} - u\| &= \bigl\|P_K\bigl[u_n - \rho T y_n + \lambda\rho(T y_n - T u_n)\bigr] - P_K[u - \rho T u]\bigr\| \\
&\le \bigl\|\bigl(u_n - u - \lambda\rho(T u_n - T u)\bigr) - (1-\lambda)\rho(T y_n - T u)\bigr\| \\
&\le \|u_n - u - \lambda\rho(T u_n - T u)\| + (1-\lambda)\rho\|T y_n - T u\| \\
&\le \sqrt{1 - 2\alpha\lambda\rho + \beta^2\lambda^2\rho^2}\,\|u_n - u\| + (1-\lambda)\rho\beta\|y_n - u\| \\
&\le \theta_1\|u_n - u\|,
\end{aligned} \tag{3.14}$$
where
$$\theta_1 = \sqrt{1 - 2\alpha\lambda\rho + \beta^2\lambda^2\rho^2} + \rho(1-\lambda)\beta\sqrt{1 - 2\alpha\rho + \beta^2\rho^2}. \tag{3.15}$$
From (3.9), it follows that $\theta_1 < 1$, so $\|u_{n+1} - u\| \le \theta_1^{\,n+1}\|u_0 - u\| \to 0$ as $n \to \infty$. Consequently, the fixed point problem (2.5) has a unique solution, and the iterative solution $u_{n+1}$ obtained from Algorithm 3.4 converges strongly to $u$, the exact solution of (2.5).
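For a quick feasibility check of condition (3.9), the following small helper (our addition; the numerical values are arbitrary illustrative choices, not taken from the paper) evaluates $\theta_1$ for given constants $\alpha$, $\beta$, $\rho$, and $\lambda$.

```python
# Evaluates theta_1 from (3.15) so that the condition theta_1 < 1 of Theorem 3.7
# can be checked before running Algorithm 3.4; the sample constants are arbitrary.
import math

def theta_1(alpha, beta, rho, lam):
    theta = math.sqrt(1.0 - 2.0 * alpha * rho + (beta * rho) ** 2)            # (3.13)
    return math.sqrt(1.0 - 2.0 * alpha * lam * rho + (beta * lam * rho) ** 2) \
        + (1.0 - lam) * rho * beta * theta                                     # (3.15)

print(theta_1(alpha=0.9, beta=1.1, rho=0.5, lam=0.5))  # ~0.97 < 1: Theorem 3.7 applies
```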

For a given $\lambda \in [0, 1]$, we can also rewrite (2.5) as
$$u = P_K\bigl[u - \rho T\bigl((1-\lambda)u + \lambda u\bigr)\bigr]. \tag{3.16}$$
This fixed point formulation (3.16) has been used to suggest and analyze the following unified proximal method for solving the variational inequality (2.1).

Algorithm 3.8. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = P_K\bigl[u_n - \rho T\bigl((1-\lambda)u_{n+1} + \lambda u_n\bigr)\bigr], \quad n = 0, 1, 2, \ldots. \tag{3.17}$$

For the convergence analysis of Algorithm 3.8, see Noor [10]. For appropriate choices of the parameter $\lambda$, Algorithm 3.8 includes the extragradient method of Korpelevich [2] and other methods as special cases.
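Like Algorithm 3.3, Algorithm 3.8 is implicit. A hypothetical realization (our sketch, not prescribed by the paper or by Noor [10]) again uses an inner fixed point loop; the inner map $w \mapsto P_K[u_n - \rho T((1-\lambda)w + \lambda u_n)]$ is a contraction whenever $\rho\beta(1-\lambda) < 1$.

```python
# A hypothetical realization of Algorithm 3.8: the implicit update (3.17) is solved
# approximately by an inner fixed point loop at every outer iteration.
import numpy as np

def algorithm_3_8(T, u0, rho, lam, P_K, tol=1e-10, max_iter=10_000, inner_iter=500):
    u = u0
    for _ in range(max_iter):
        w = u  # inner loop: solve w = P_K[u - rho*T((1-lam)*w + lam*u)]
        for _ in range(inner_iter):
            w_next = P_K(u - rho * T((1.0 - lam) * w + lam * u))
            if np.linalg.norm(w_next - w) <= 1e-12:
                break
            w = w_next
        if np.linalg.norm(w_next - u) <= tol:  # outer stopping rule
            return w_next
        u = w_next
    return u
```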

We would like to mention that if the operator $T$ is linear, then Algorithm 3.4 and Algorithm 3.8 are equivalent. In this case, one can easily prove that the convergence of Algorithm 3.4 requires only the partially relaxed strong monotonicity of the operator $T$, which is a weaker condition.

4. Conclusion

In this paper, we have used the equivalence between the variational inequality and the fixed point problem to suggest and analyze some new proximal point methods for solving the variational inequality. We have shown that these new implicit methods include the extragradient method of Korpelevich [2] and the classical implicit method as special cases. We have also discussed the convergence of the proposed iterative methods under suitable conditions. The results proved in this paper may inspire further research in this area. It remains an open problem to implement these new proximal methods and to compare them with other methods. Using the ideas and techniques of this paper, one can suggest and analyze several new proximal point methods for solving the general variational inequality and its variant forms.

Acknowledgments

This research is supported by the Visiting Professor Program of King Saud University, Riyadh, Saudi Arabia, and the Research Grant no. KSU.VPP.108. The authors are also grateful to Dr. S. M. Junaid Zaidi, Rector, COMSATS Institute of Information Technology, Pakistan, for providing the excellent research facilities.

References

1. G. Stampacchia, "Formes bilinéaires coercitives sur les ensembles convexes," Comptes Rendus de l'Académie des Sciences, vol. 258, pp. 4413–4416, 1964.
2. G. M. Korpelevich, "An extragradient method for finding saddle points and for other problems," Ekonomika i Matematicheskie Metody, vol. 12, no. 4, pp. 747–756, 1976.
3. M. A. Noor, "On an implicit method for nonconvex variational inequalities," Journal of Optimization Theory and Applications, vol. 147, no. 2, pp. 411–417, 2010.
4. F. Giannessi and A. Maugeri, Variational Inequalities and Network Equilibrium Problems, Plenum Press, New York, NY, USA, 1995.
5. F. Giannessi, A. Maugeri, and P. M. Pardalos, Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Methods, Nonconvex Optimization and Its Applications, Kluwer Academic, Dordrecht, The Netherlands, 2001.
6. R. Glowinski, J. L. Lions, and R. Trémolières, Numerical Analysis of Variational Inequalities, North-Holland, Amsterdam, The Netherlands, 1981.
7. M. A. Noor, "General variational inequalities," Applied Mathematics Letters, vol. 1, no. 2, pp. 119–121, 1988.
8. M. A. Noor, "Some developments in general variational inequalities," Applied Mathematics and Computation, vol. 152, no. 1, pp. 199–277, 2004.
9. M. A. Noor, "Extended general variational inequalities," Applied Mathematics Letters, vol. 22, no. 2, pp. 182–185, 2009.
10. M. A. Noor, On Unified Proximal Method for Variational Inequalities, COMSATS Institute of Information Technology, Islamabad, Pakistan, 2011, in press.
11. M. A. Noor, E. Al-Said, K. I. Noor, and Y. Yao, "Extragradient methods for solving nonconvex variational inequalities," Journal of Computational and Applied Mathematics, vol. 235, no. 9, pp. 3104–3108, 2011.
12. M. A. Noor, K. I. Noor, S. Zainab, and E. Al-Said, "Proximal algorithms for solving mixed bifunction variational inequalities," International Journal of Physical Sciences, vol. 6, no. 17, pp. 4203–4207, 2011.
13. M. A. Noor, K. I. Noor, S. Zainab, and E. Al-Said, "Some iterative algorithms for solving regularized mixed quasi variational inequalities," International Journal of Physical Sciences, vol. 6, 2011.
14. M. A. Noor, K. I. Noor, and T. M. Rassias, "Some aspects of variational inequalities," Journal of Computational and Applied Mathematics, vol. 47, no. 3, pp. 285–312, 1993.

Copyright Β© 2012 Muhammad Aslam Noor et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

