Abstract and Applied Analysis
Volume 2012 (2012), Article ID 303569, 16 pages
http://dx.doi.org/10.1155/2012/303569
Review Article

Some Aspects of Extended General Variational Inequalities

Muhammad Aslam Noor

Mathematics Department, COMSATS Institute of Information Technology, Park Road, Islamabad, Pakistan

Received 1 December 2011; Accepted 23 December 2011

Academic Editor: Khalida Inayat Noor

Copyright © 2012 Muhammad Aslam Noor. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Noor (“Extended general variational inequalities,” 2009, “Auxiliary principle technique for extended general variational inequalities,” 2008, “Sensitivity analysis of extended general variational inequalities,” 2009, “Projection iterative methods for extended general variational inequalities,” 2010) introduced and studied a new class of variational inequalities, which is called the extended general variational inequality involving three different operators. This class of variational inequalities includes several classes of variational inequalities and optimization problems. The main motivation of this paper is to review some aspects of these variational inequalities including the iterative methods and sensitivity analysis. We expect that this paper may stimulate future research in this field along with novel applications.

1. Introduction

Variational inequalities, which were introduced and studied in the early sixties, contain a wealth of new ideas. Variational inequalities can be considered as a natural extension of the variational principles. It is now well known that variational inequalities enable us to study a wide class of problems such as free, moving, obstacle, unilateral, equilibrium, and fixed point problems in a unified and simple framework. Variational inequalities are closely connected with convex optimization problems. We would like to point out that the minimum of a differentiable convex function on a convex set in a normed space can be characterized by a variational inequality. This shows that variational inequalities are closely related to convexity. In recent years, the concept of convexity has been extended and generalized in several directions using novel and innovative techniques. We emphasize that these generalizations of convexity have played a fundamental and basic part in the introduction of new classes of variational inequalities. Motivated by these developments, Noor [1] considered a new class of variational inequalities involving two different operators. It turned out that a wide class of odd-order and nonsymmetric problems can be studied via these general variational inequalities. Youness [2] introduced and studied a new class of convex functions with respect to an arbitrary function. This class of functions is usually called the E-convex functions. These functions may not be convex, and the underlying set may not be a convex set in the sense of classical convex analysis. Noor [3] showed that the minimum of this type of differentiable nonconvex function on the nonconvex (E-convex) set can be characterized by the general variational inequalities. This result shows that the general variational inequalities are closely associated with nonlinear optimization. For recent developments in general variational inequalities, see [1–42] and the references therein.

Motivated and inspired by the research activities going on in this dynamic field, Noor [13–16] introduced a class of nonconvex functions involving two arbitrary functions. This class of nonconvex functions (see Definition 2.2 below) is more general and unifying. One can easily show that it includes the E-convex functions introduced by Youness [2] and the classical convex functions as special cases. Noor [13–16] has shown that the minimum of such differentiable nonconvex functions can be characterized by a class of variational inequalities on the associated nonconvex sets. This fact motivated Noor [13–16] to introduce and study a new class of variational inequalities, called the extended general variational inequalities, involving three different operators. It has been shown that, for suitable choices of the operators, one can obtain several known and new classes of variational inequalities. These variational inequalities have important and novel applications in various branches of the engineering, physical, regional, mathematical, social, and natural sciences.

Several numerical methods have been developed for solving variational inequalities using different techniques and ideas. Using the projection technique, one can establish the equivalence between variational inequalities and fixed point problems. This alternative equivalent formulation has been used to study the existence of a solution of variational inequalities and related problems. This technique and its variant forms have been used to develop several iterative methods for solving the extended general variational inequalities and optimization problems.

The theory of extended general variational inequalities is quite new. We shall content ourselves with giving the main flavour of the ideas and techniques involved. The techniques used to analyze the various iterative methods and other results for extended general variational inequalities are a beautiful blend of ideas from the pure and applied sciences. In this paper, we present the main results regarding the various iterative methods, their convergence analysis, and other aspects. The language used is necessarily that of functional analysis, convex analysis, and elementary Hilbert space theory. The framework chosen should be seen as a model setting for more general results for other classes of variational inclusions. One of the main purposes of this paper is to demonstrate the close connection among various classes of iterative methods for solving the extended general variational inequalities. We would like to emphasize that the results obtained and discussed in this paper may motivate a large number of novel, innovative, and important applications, extensions, and generalizations in other fields.

2. Basic Concepts

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle \cdot , \cdot \rangle$ and $\| \cdot \|$, respectively. Let $K$ be a nonempty closed convex set in $H$.

For given nonlinear operators, we consider the problem of finding an element satisfying the extended general variational inequality (2.1). Noor [13–16] has shown that the minimum of a class of differentiable nonconvex functions on a nonconvex set in $H$ can be characterized by the extended general variational inequality (2.1).
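In the notation of Noor [13] (a reconstruction from that reference; the symbols $T$, $g$, $h$, $u$, $v$ are taken from it), problem (2.1) reads: for given nonlinear operators $T, g, h : H \to H$, find $u \in H$ with $h(u) \in K$ such that
$$\langle Tu,\ g(v) - h(u) \rangle \ge 0, \qquad \forall\, v \in H:\ g(v) \in K. \tag{2.1}$$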

For this purpose, we recall the following well-known concepts, see [7].

Definition 2.1 (see [6, 13]). Let $K$ be any set in $H$. The set $K$ is said to be -convex if there exist two functions satisfying the segment condition written out after Definition 2.2 below. Note that every convex set is an -convex set, but the converse is not true; see [6]. For a particular choice of the two functions, one recovers the E-convex sets introduced by Youness [2].
From now onward, we assume that is an -convex set, unless otherwise specified.

Definition 2.2 (see [24, 28]). The function is said to be -convex if and only if there exist two functions such that the inequality written out below holds for all admissible arguments. Clearly, every convex function is -convex, but the converse is not true. For a particular choice of the two functions, Definition 2.2 reduces to the definition of Youness [2].
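Following [13], Definitions 2.1 and 2.2 can be written out as follows, using two given functions $g, h : H \to H$ (a reconstruction from [13]; the name of the class and minor details should be checked against [6, 13, 24, 28]). The set $K$ is convex with respect to $g$ and $h$ if
$$h(u) + t\,\big(g(v) - h(u)\big) \in K, \qquad \forall\, u, v \in H:\ h(u), g(v) \in K,\ t \in [0, 1],$$
and a function $F$ is convex with respect to $g$ and $h$ if, for the same $u$, $v$, and $t$,
$$F\big(h(u) + t\,(g(v) - h(u))\big) \le (1 - t)\,F(h(u)) + t\,F(g(v)).$$
For $g = h$ these reduce essentially to the E-convex sets and E-convex functions of Youness [2], and for $g = h = I$ to the usual notions of convexity.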
We now show that the minimum of a differentiable -convex function on the -convex set in can be characterized by the extended general variational inequality (2.1). This result is due to Noor [1316]. We include all the details for the sake of completeness and to convey the main idea.

Lemma 2.3 (see [1316]). Let be a differentiable -convex function. Then is the minimum of -convex function on if and only if satisfies the inequality where is the differential of at .

Proof. Let be a minimum of -convex function on . Then Since is an -convex set, so, for all , , , . Setting in (2.5), we have Dividing the above inequality by and taking , we have which is the required result (2.4).
Conversely, let satisfy the inequality (2.4). Since is an -convex function, for all , , , , and which implies that Letting in the above inequality and using (2.4), we have which implies showing that is the minimum of on in
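The essential computation in the first half of the proof can be recorded as follows (a sketch in the reconstructed notation, with $F$ denoting the nonconvex function and $F'$ its differential; these symbols are introduced here only for the reconstruction). Since $K$ is convex with respect to $g$ and $h$, the point $h(u) + t(g(v) - h(u))$ belongs to $K$ for $t \in [0, 1]$, so the minimality of $h(u)$ gives
$$F(h(u)) \le F\big(h(u) + t\,(g(v) - h(u))\big);$$
dividing by $t > 0$ and letting $t \to 0$ yields
$$\big\langle F'(h(u)),\ g(v) - h(u) \big\rangle \ge 0, \qquad \forall\, v \in H:\ g(v) \in K,$$
which is the inequality (2.4) of Lemma 2.3 with $T = F'$.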

Lemma 2.3 implies that this class of nonconvex programming problems can be studied via the extended general variational inequality (2.1). In a similar way, one can show that the extended general variational inequality is the Fritz-John condition of an inequality-constrained optimization problem.

We now list some special cases of the extended general variational inequality (2.1).

(i) If , then problem (2.1) is equivalent to finding such that which is known as the general variational inequality, introduced and studied by Noor [1] in 1988. It turned out that odd-order and nonsymmetric obstacle, free, moving, unilateral, and equilibrium problems arising in various branches of pure and applied sciences can be studied via the general variational inequalities (2.12); see [1–42] and the references therein.

(ii) For , the identity operator, problem (2.1) is equivalent to finding such that which is also called the general variational inequality, introduced and studied by Noor [19].

(iii) For , the identity operator, the extended general variational inequality (2.1) collapses to finding such that which is also called the general variational inequality; see Noor [11].

(iv) For , the identity operator, the extended general variational inequality (2.1) is equivalent to finding such that which is known as the classical variational inequality, introduced in 1964 by Stampacchia [40] and written out after this list. For recent applications, numerical methods, sensitivity analysis, dynamical systems, and formulations of variational inequalities, see [1–42] and the references therein.

(v) If is a polar (dual) convex cone of a closed convex cone in , then problem (2.1) is equivalent to finding such that which is known as the general complementarity problem; see [1]. If , the identity operator, then problem (2.16) is called the generalized complementarity problem. If , where is a point-to-point mapping, then problem (2.16) is called the quasi (implicit) complementarity problem; see [3, 11] and the references therein.
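For concreteness, case (iv) can be written out explicitly: assuming, as in [13], that it corresponds to the choice $g = h = I$, problem (2.1) reduces to finding $u \in K$ such that
$$\langle Tu,\ v - u \rangle \ge 0, \qquad \forall\, v \in K,$$
which is the classical variational inequality of Stampacchia [40].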

From the above discussion, it is clear that the extended general variational inequality (2.1) is the most general formulation and includes several previously known classes of variational inequalities and related optimization problems as special cases. These variational inequalities have important applications in mathematical programming and engineering science optimization problems.

We would like to emphasize that problem (2.1) is equivalent to finding such that This equivalent formulation is also useful from the applications point of view.

If is a convex set, then problem (2.1) is equivalent to finding such that which is called the extended general variational inclusion problem associated with the extended general variational inequality (2.1). Here denotes the normal cone of at in the sense of nonconvex analysis. This equivalent formulation plays a crucial and basic part in this paper. We would like to point out that this equivalent formulation allows us to use the projection operator technique for solving extended general variational inequalities of the type (2.1).

We also need the following concepts and results.

Lemma 2.4. Let be a closed and convex set in . Then, for a given , satisfies the inequality if and only if where is the projection of onto the closed and convex set in .
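Written out in the standard form (see [7]), Lemma 2.4 states: for a given $z \in H$, the element $u \in K$ satisfies
$$\langle u - z,\ v - u \rangle \ge 0, \qquad \forall\, v \in K,$$
if and only if $u = P_K z$, where $P_K$ is the projection of $H$ onto the closed convex set $K$.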

Definition 2.5. For all , an operator is said to be(i)strongly monotone if there exists a constant such that (ii)Lipschitz continuous if there exists a constant such that From (i) and (ii), it follows that .
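In the usual notation, the two conditions in Definition 2.5 read: an operator $T : H \to H$ is strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Tu - Tv,\ u - v \rangle \ge \alpha \|u - v\|^2, \qquad \forall\, u, v \in H,$$
and Lipschitz continuous if there exists a constant $\beta > 0$ such that
$$\|Tu - Tv\| \le \beta \|u - v\|, \qquad \forall\, u, v \in H.$$
Combining the two via the Cauchy-Schwarz inequality gives $\alpha \|u - v\|^2 \le \langle Tu - Tv, u - v \rangle \le \beta \|u - v\|^2$, so $\alpha \le \beta$, which is the relation asserted at the end of Definition 2.5.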

Remark 2.6. It follows from the strong monotonicity of the operator , that which implies that This observation enables us to define the following concept.

Definition 2.7. The operator is said to be firmly expanding if

Definition 2.8. An operator with respect to the arbitrary operators , is said to be -pseudomonotone, if and only if,

3. Projection Methods

It is known that the extended general variational inequality (2.1) is equivalent to a fixed point problem. One can also prove this result using Lemma 2.4.

Lemma 3.1 (see [13]). is a solution of the extended general variational inequality (2.17) if and only if satisfies the relation where is the projection of onto the closed and convex set .

We rewrite the relation (3.1) in the following form: which is used to study the existence of a solution of the extended general variational inequality (2.17).
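In the notation reconstructed above, the fixed point characterization of Lemma 3.1 and its rewritten form (3.2) can be stated as follows (based on Noor [13, 16]; $\rho > 0$ is a constant):
$$h(u) = P_K\big[g(u) - \rho\, T u\big],$$
or, equivalently,
$$u = \Phi(u) := u - h(u) + P_K\big[g(u) - \rho\, T u\big],$$
so that solving (2.1) amounts to finding a fixed point of the map $\Phi$ (the symbol $\Phi$ is introduced here; it is the map referred to in (3.2) and in the proof of Theorem 3.2).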

We now study those conditions under which the extended general variational inequality (2.1) has a unique solution and this is the main motivation of our next result.

Theorem 3.2 (see [13]). Let the operators be relaxed cocoercive strongly monotone with constants (), (), (), and Lipschitz continuous with constants with , , , respectively. If where then there exists a unique solution of the extended general variational inequality (2.1).

Proof. From Lemma 3.1, it follows that problems (3.1) and (2.1) are equivalent. Thus it is enough to show that the map , defined by (3.2), has a fixed point. For all , where we have used the fact that the operator is nonexpansive.
Since the operator is relaxed cocoercive strongly monotone with constants , and Lipschitz continuous with constant , it follows that In a similar way, we have where , , , , and , are the relaxed cocoercive strong monotonicity and Lipschitz continuity constants of the operator , respectively.
From (3.4), (3.5), (3.6), and (3.7), we have where From (3.3), it follows that , which implies that the map defined by (3.2) has a fixed point, which is a unique solution of (2.1).

Using the fixed point formulation (3.1), we suggest and analyze the following iterative methods for solving the extended general variational inequality (2.1).

Algorithm 3.3. For a given , find the approximate solution by the iterative scheme which is called the explicit iterative method. For the convergence analysis of Algorithm 3.3, see Noor [21].
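To make the projection iteration concrete, the following sketch implements the explicit scheme of Algorithm 3.3 in the reconstructed form $h(u_{n+1}) = P_K[g(u_n) - \rho\, T u_n]$ on a toy problem. The concrete choices below (a box $K$ so that $P_K$ is a componentwise clip, invertible affine maps $g$ and $h$ so that $u_{n+1}$ can be recovered from $h(u_{n+1})$, an affine strongly monotone $T$, and the step size $\rho$) are illustrative assumptions only; convergence requires a condition of the type in Theorem 3.2.

```python
import numpy as np

# Toy problem: K = [0, 1]^n (a box, so P_K is a componentwise clip),
# T affine and strongly monotone, g and h invertible affine maps.
# All of these concrete choices are illustrative assumptions for this
# sketch only; they are not taken from the paper.
n = 5
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)           # positive definite, so T is strongly monotone
b = rng.standard_normal(n)

T = lambda u: A @ u + b               # the operator T
g = lambda u: 2.0 * u + 1.0           # the operator g (invertible)
h = lambda u: 3.0 * u - 2.0           # the operator h (invertible)
h_inv = lambda w: (w + 2.0) / 3.0     # h^{-1}, used to recover u from h(u)
P_K = lambda z: np.clip(z, 0.0, 1.0)  # projection onto the box K = [0, 1]^n


def explicit_method(u0, rho=0.05, tol=1e-10, max_iter=5000):
    """Sketch of Algorithm 3.3: h(u_{n+1}) = P_K[g(u_n) - rho * T(u_n)]."""
    u = u0
    for _ in range(max_iter):
        # residue R(u) = h(u) - P_K[g(u) - rho*T(u)]; it vanishes at a solution
        residue = h(u) - P_K(g(u) - rho * T(u))
        if np.linalg.norm(residue) < tol:
            break
        u = h_inv(P_K(g(u) - rho * T(u)))
    return u


u_star = explicit_method(np.zeros(n))
print("approximate solution:", u_star)
```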
We again use the fixed point formulation to suggest and analyze the following iterative method for solving (2.1).

Algorithm 3.4. For a given , find the approximate solution by the iterative scheme Algorithm 3.4 is an implicit iterative method for solving the extended general variational inequality (2.1). Using Lemma 2.4, one can rewrite Algorithm 3.4 in the following equivalent form.

Algorithm 3.5. For a given , find the approximate solution by the iterative schemes
To implement Algorithm 3.4, we use the predictor-corrector technique. We use Algorithm 3.3 as a predictor and Algorithm 3.4 as a corrector to obtain the following predictor-corrector method for solving the extended general variational inequality (2.1).

Algorithm 3.6. For a given , find the approximate solution by the iterative schemes Algorithm 3.6 is known as the extended extragradient method. This method includes the extragradient method of Korpelevič [8] for . Here we would like to point out that the implicit method (Algorithm 3.4) and the extragradient method (Algorithm 3.6) are equivalent.
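A corresponding sketch of the predictor-corrector (extended extragradient) step of Algorithm 3.6 is given below as a reusable function; the toy operators from the previous sketch can be passed in. The exact placement of the predictor point $w_n$ in the corrector step follows the classical extragradient pattern of Korpelevič [8] mentioned above and is an assumption to be checked against [13, 16].

```python
import numpy as np


def extragradient_method(u0, T, g, h, h_inv, P_K, rho=0.05, tol=1e-10, max_iter=5000):
    """Sketch of the predictor-corrector (extended extragradient) step.

    Predictor (Algorithm 3.3):  h(w_n)     = P_K[g(u_n) - rho * T(u_n)]
    Corrector:                  h(u_{n+1}) = P_K[g(u_n) - rho * T(w_n)]

    The corrector uses the predictor point only inside T, following the
    classical Korpelevic pattern; this placement is an assumption and
    should be checked against Noor [13, 16].
    """
    u = u0
    for _ in range(max_iter):
        w = h_inv(P_K(g(u) - rho * T(u)))        # predictor step
        u_next = h_inv(P_K(g(u) - rho * T(w)))   # corrector step
        # stop when the residue h(u) - P_K[g(u) - rho*T(u)] is small
        if np.linalg.norm(h(u_next) - P_K(g(u_next) - rho * T(u_next))) < tol:
            return u_next
        u = u_next
    return u


# Usage, with the toy operators T, g, h, h_inv, P_K from the previous sketch:
#   u_star = extragradient_method(np.zeros(5), T, g, h, h_inv, P_K)
```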

We now consider the convergence analysis of Algorithm 3.4, and this is the main motivation of our next result.

Theorem 3.7. Let be a solution of (2.1), and let be the approximate solution obtained from Algorithm 3.4. If the operator is -pseudomonotone, then

Proof. Let be a solution of (2.1). Then since the operator is -pseudomonotone. Take in (3.16); we have Taking in (3.13), we have From (3.17) and (3.18), we have It is well known that Using (3.19), from (3.20), one can easily obtain the required result (3.15).

Theorem 3.8. Let be a solution of (2.1), and let be the approximate solution obtained from Algorithm 3.4. Let be a finite dimensional space. Then .

Proof. Let be a solution of (2.1). Then the sequence is nonincreasing and bounded and which implies Let be a cluster point of . Then there exists a subsequence such that converges to . Replacing by in (3.13), taking the limits in (3.13), and using (3.23), we have This shows that solves the extended general variational inequality (2.1) and which implies that the sequence has a unique cluster point and is the solution of (2.1), the required result.

We again use the fixed point formulation (3.1) to suggest the following method for solving (2.1).

Algorithm 3.9. For a given , find the approximate solution by the iterative schemes which is also known as an implicit method. To implement this method, we use the predictor-corrector technique. We use Algorithm 3.3 as the predictor and Algorithm 3.9 as the corrector. Consequently, we obtain the following iterative method.

Algorithm 3.10. For a given , find the approximate solution by the following iterative schemes: Algorithm 3.10 is called the two-step or predictor-corrector method for solving the extended general variational inequality (2.1).
For a given step size , one can suggest and analyze the following two-step iterative method.

Algorithm 3.11. For a given , find the approximate solution by the iterative schemes: Note that for , Algorithm 3.11 reduces to Algorithm 3.10. Using the technique of Noor [12], one may study the convergence analysis of Algorithms 3.10 and 3.11.

4. Auxiliary Principle Technique

In this section, we use the auxiliary principle technique to study the existence of a solution of the extended general variational inequality (2.1).

Theorem 4.1. Let be strongly monotone with constant and Lipschitz continuous with constant . Let be a strongly monotone and Lipschitz continuous operator with constants and , respectively. If the operator is firmly expanding and there exists a constant such that where then the extended general variational inequality (2.1) has a unique solution.

Proof. We use the auxiliary principle technique to prove the existence of a solution of (2.1). For a given satisfying the extended general variational inequality (2.1), we consider the problem of finding a solution such that where is a constant.
The inequality of type (4.3) is called the auxiliary extended general variational inequality associated with the problem (2.1). It is clear that the relation (4.3) defines a mapping . It is enough to show that the mapping defined by the relation (4.3) has a unique fixed point belonging to satisfying the extended general variational inequality (2.1). Let be two solutions of (4.3) related to , respectively. It is sufficient to show that, for a well-chosen , with , where is independent of and . Taking (resp., ) in (4.3) related to (resp., ) and adding the resulting inequalities, we have from which we have Since is both strongly monotone and Lipschitz continuous with constants and , respectively, it follows that In a similar way, using the strong monotonicity with constant and Lipschitz continuity with constant , we have From (4.6), (4.7), and (4.2), and using the fact that the operator is firmly expanding, we have From (4.1) and (4.2), it follows that , showing that the mapping defined by (4.3) has a fixed point belonging to , which is the solution of (2.1), the required result.

We note that, if , then clearly is a solution of the extended general variational inequality (2.17). This observation enables us to suggest and analyze the following iterative method for solving the extended general variational inequality (2.1).

Algorithm 4.2. For a given , find the approximate solution by the iterative scheme We remark that Algorithm 4.2 can be rewritten in the equivalent form using the projection technique as follows.

Algorithm 4.3. For a given , find the approximate solution by the iterative scheme which is exactly Algorithm 3.3.

We now use the auxiliary principle technique to suggest the implicit iterative method for solving the extended general variational inequality (2.1). For a given satisfying the extended general variational inequality (2.1), we consider the problem of finding a solution such that where is a constant.

It is clear that, if , then is a solution of the extended general variational inequality (2.17). We use this fact to suggest another iterative method for solving (2.1).

Algorithm 4.4. For a given , find the approximate solution by the iterative scheme We remark that Algorithm 4.4 can be rewritten in the equivalent form using the projection technique as follows.

Algorithm 4.5. For a given , find the approximate solution by the iterative scheme which is exactly Algorithm 3.4.

The auxiliary principle technique can be used to develop several two-step, three-step, and alternating direction methods for solving the extended general variational inequalities. This is an interesting problem for further research.

We now define the residue vector (written out below). It is clear from Lemma 2.4 that the extended general variational inequality (2.1) has a solution if and only if is a zero of the equation (4.16). For a positive step size , (4.16) can be written in an equivalent fixed point form, which can be used to suggest and analyze the following iterative method for solving the extended general variational inequality (2.1).
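In the reconstructed notation, the residue vector is
$$R(u) := h(u) - P_K\big[g(u) - \rho\, T u\big],$$
and, by Lemma 2.4, $u \in H$ with $h(u) \in K$ is a solution of the extended general variational inequality (2.1) if and only if $R(u) = 0$.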

Algorithm 4.6. For a given , find the approximate solution by the iterative scheme which is an implicit method.

It is worth mentioning that one can suggest and analyze a wide class of iterative methods for solving the extended general variational inequality and its variant forms by using the technique of Noor [11]. We leave this to the interested readers.

5. Conclusion

In this paper, we have introduced and considered a new class of variational inequalities, which is called the extended general variational inequalities. We have established the equivalence between the extended general variational inequalities and fixed point problems using the projection operator technique. This equivalence is used to study the existence of a solution of the extended general variational inequalities as well as to suggest and analyze some iterative methods for solving them. Several special cases are also discussed. The results proved in this paper can be extended to multivalued extended general variational inequalities and to systems of such inequalities using the techniques of this paper. The comparison of the iterative methods for solving extended general variational inequalities is an interesting problem for future research. Using the technique of Noor [11], one can study the sensitivity analysis and the properties of the dynamical system associated with the extended general variational inequalities. We hope that the ideas and techniques of this paper may stimulate further research in this field.

Acknowledgment

The author is grateful to Dr. S. M. Junaid Zaidi, Rector, COMSATS Institute of Information Technology, Pakistan, for providing the excellent research facilities.

References

1. M. A. Noor, “General variational inequalities,” Applied Mathematics Letters, vol. 1, no. 2, pp. 119–122, 1988.
2. E. A. Youness, “E-convex sets, E-convex functions, and E-convex programming,” Journal of Optimization Theory and Applications, vol. 102, no. 2, pp. 439–450, 1999.
3. M. A. Noor, “New approximation schemes for general variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 251, no. 1, pp. 217–229, 2000.
4. M. A. Noor, K. I. Noor, S. Zainab, and E. Al-Said, “Regularized mixed variational-like inequalities,” Journal of Applied Mathematics. In press.
5. A. Bnouhachem and M. A. Noor, “Inexact proximal point method for general variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 324, no. 2, pp. 1195–1212, 2006.
6. G. Cristescu and L. Lupşa, Non-Connected Convexities and Applications, vol. 68, Kluwer Academic, Dordrecht, The Netherlands, 2002.
7. D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, vol. 31, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 2000.
8. G. M. Korpelevič, “An extragradient method for finding saddle points and for other problems,” Èkonomika i Matematicheskie Metody, vol. 12, no. 4, pp. 747–756, 1976.
9. Q. Liu and J. Cao, “A recurrent neural network based on projection operator for extended general variational inequalities,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 40, no. 3, Article ID 5339227, pp. 928–938, 2010.
10. Q. Liu and Y. Yang, “Global exponential system of projection neural networks for system of generalized variational inequalities and related nonlinear minimax problems,” Neurocomputing, vol. 73, no. 10–12, pp. 2069–2076, 2010.
11. M. A. Noor, “Some developments in general variational inequalities,” Applied Mathematics and Computation, vol. 152, no. 1, pp. 199–277, 2004.
12. M. A. Noor, “Iterative schemes for nonconvex variational inequalities,” Journal of Optimization Theory and Applications, vol. 121, no. 2, pp. 385–395, 2004.
13. M. A. Noor, “Extended general variational inequalities,” Applied Mathematics Letters, vol. 22, no. 2, pp. 182–186, 2009.
14. M. A. Noor, “Auxiliary principle technique for extended general variational inequalities,” Banach Journal of Mathematical Analysis, vol. 2, no. 1, pp. 33–39, 2008.
15. M. A. Noor, “Sensitivity analysis of extended general variational inequalities,” Applied Mathematics E-Notes, vol. 9, pp. 17–26, 2009.
16. M. A. Noor, “Projection iterative methods for extended general variational inequalities,” Journal of Applied Mathematics and Computing, vol. 32, no. 1, pp. 83–95, 2010.
17. M. A. Noor, “Solvability of extended general mixed variational inequalities,” Albanian Journal of Mathematics, vol. 4, no. 1, pp. 13–17, 2010.
18. M. A. Noor, “Extended general quasi-variational inequalities,” Nonlinear Analysis Forum, vol. 15, pp. 33–39, 2010.
19. M. A. Noor, “Differentiable non-convex functions and general variational inequalities,” Applied Mathematics and Computation, vol. 199, no. 2, pp. 623–630, 2008.
20. M. A. Noor, “Some iterative methods for general nonconvex variational inequalities,” Mathematical and Computer Modelling, vol. 21, pp. 87–96, 2010.
21. M. A. Noor, “Some classes of general nonconvex variational inequalities,” Albanian Journal of Mathematics, vol. 3, no. 4, pp. 175–188, 2009.
22. M. A. Noor, “Nonconvex quasi variational inequalities,” Journal of Advanced Mathematical Studies, vol. 3, no. 1, pp. 59–72, 2010.
23. M. A. Noor, “Projection methods for nonconvex variational inequalities,” Optimization Letters, vol. 3, no. 3, pp. 411–418, 2009.
24. M. A. Noor, “Implicit iterative methods for nonconvex variational inequalities,” Journal of Optimization Theory and Applications, vol. 143, no. 3, pp. 619–624, 2009.
25. M. A. Noor, “Iterative methods for general nonconvex variational inequalities,” Albanian Journal of Mathematics, vol. 3, no. 3, pp. 117–127, 2009.
26. M. A. Noor, “An extragradient algorithm for solving general nonconvex variational inequalities,” Applied Mathematics Letters, vol. 23, no. 8, pp. 917–921, 2010.
27. M. A. Noor, “Some iterative methods for general nonconvex variational inequalities,” Mathematical and Computer Modelling, vol. 54, pp. 2953–2961, 2011.
28. M. A. Noor and K. I. Noor, “Some iterative methods for solving general bifunction variational inequalities,” Journal of Advanced Mathematical Studies, vol. 20, no. 6, 2012.
29. M. A. Noor, K. I. Noor, and T. M. Rassias, “Some aspects of variational inequalities,” Journal of Computational and Applied Mathematics, vol. 47, no. 3, pp. 285–312, 1993.
30. M. A. Noor, K. I. Noor, and E. Al-Said, “Iterative projection methods for general nonconvex variational inequalities,” Applied and Computational Mathematics, vol. 10, no. 2, pp. 309–320, 2011.
31. M. A. Noor, K. I. Noor, and E. Al-Said, “On new proximal point methods for solving the variational inequalities,” Journal of Applied Mathematics, vol. 2012, Article ID 412413, 7 pages, 2012.
32. M. A. Noor, S. Ullah, K. I. Noor, and E. Al-Said, “Iterative methods for solving extended general mixed variational inequalities,” Computers & Mathematics with Applications, vol. 62, no. 2, pp. 804–813, 2011.
33. M. A. Noor, K. I. Noor, Y. Z. Huang, and E. Al-Said, “Implicit schemes for solving extended general nonconvex variational inequalities,” Journal of Applied Mathematics, vol. 2012, Article ID 646259, 10 pages, 2012.
34. M. A. Noor, K. I. Noor, and E. Al-Said, “Resolvent iterative methods for solving system of extended general variational inclusions,” Journal of Inequalities and Applications, vol. 2011, Article ID 371241, 10 pages, 2011.
35. M. A. Noor, K. I. Noor, and E. Al-Said, “Auxiliary principle technique for solving bifunction variational inequalities,” Journal of Optimization Theory and Applications, vol. 149, no. 2, pp. 441–445, 2011.
36. M. A. Noor, K. I. Noor, and E. Al-Said, “Iterative methods for solving nonconvex equilibrium variational inequalities,” Applied Mathematics & Information Sciences, vol. 6, no. 1, pp. 65–69, 2012.
37. M. A. Noor, K. I. Noor, and E. Al-Said, “Some iterative methods for trifunction equilibrium variational inequalities,” International Journal of the Physical Sciences, vol. 6, no. 22, pp. 5223–5229, 2011.
38. M. A. Noor, S. Zainab, K. I. Noor, and E. Al-Said, “Mixed equilibrium problems,” International Journal of Physical Sciences, vol. 6, no. 23, pp. 5412–5418, 2011.
39. M. Sun, “Merit functions and equivalent differentiable optimization problems for the extended general variational inequalities,” International Journal of Pure and Applied Mathematics, vol. 63, no. 1, pp. 39–49, 2010.
40. G. Stampacchia, “Formes bilinéaires coercitives sur les ensembles convexes,” Comptes Rendus Mathematique, vol. 258, pp. 4413–4416, 1964.
41. Y. Yao, M. A. Noor, Y. C. Liou, and S. M. Kang, “Iterative algorithms for general multivalued variational inequalities,” Abstract and Applied Analysis, vol. 2012, Article ID 768272, 10 pages, 2012.
42. Y. Zhao and D. Sun, “Alternative theorems for nonlinear projection equations and applications to generalized complementarity problems,” Nonlinear Analysis, vol. 46, no. 6, pp. 853–868, 2001.