Abstract

Noor (“Extended general variational inequalities,” 2009; “Auxiliary principle technique for extended general variational inequalities,” 2008; “Sensitivity analysis of extended general variational inequalities,” 2009; “Projection iterative methods for extended general variational inequalities,” 2010) introduced and studied a new class of variational inequalities involving three different operators, called the extended general variational inequalities. This class includes several known classes of variational inequalities and optimization problems as special cases. The main purpose of this paper is to review some aspects of these variational inequalities, including iterative methods and sensitivity analysis. We expect that this paper will stimulate further research in this field, along with novel applications.

1. Introduction

Variational inequalities, introduced and studied in the early sixties, contain a wealth of new ideas. Variational inequalities can be considered a natural extension of the variational principles. It is now well known that variational inequalities enable us to study a wide class of problems, such as free, moving, obstacle, unilateral, equilibrium, and fixed point problems, in a unified and simple framework. Variational inequalities are closely connected with convex optimization problems. We would like to point out that the minimum of a differentiable convex function on a convex set in a normed space can be characterized by a variational inequality. This shows that variational inequalities are closely related to convexity. In recent years, the concept of convexity has been extended and generalized in several directions using novel and innovative techniques. We emphasize that these generalizations of convexity have played a fundamental part in the introduction of new classes of variational inequalities. Motivated by these developments, Noor [1] considered a new class of variational inequalities involving two different operators. It turned out that a wide class of odd-order and nonsymmetric problems can be studied via these general variational inequalities. Youness [2] introduced and studied a new class of convex functions defined with respect to an arbitrary function. This class of functions is usually called the class of $g$-convex functions. These functions may not be convex, and the underlying set may not be a convex set in the sense of classical convex analysis. Noor [3] showed that the minimum of such a differentiable nonconvex function on a nonconvex ($g$-convex) set can be characterized by a general variational inequality. This result shows that the general variational inequalities are closely associated with nonlinear optimization. For recent developments in general variational inequalities, see [1-42] and the references therein.

Motivated and inspired by the research activities going on in this dynamic field, Noor [13-16] introduced a class of nonconvex functions involving two arbitrary functions. This class of nonconvex functions, called the class of $hg$-convex functions, is more general and unifying. One can easily show that this class of nonconvex functions includes the $g$-convex functions introduced by Youness [2] and the classical convex functions as special cases. Noor [13-16] has shown that the minimum of such a differentiable nonconvex ($hg$-convex) function can be characterized by a class of variational inequalities on nonconvex ($hg$-convex) sets. This fact motivated Noor [13-16] to introduce and study a new class of variational inequalities, called the extended general variational inequalities, involving three different operators. It has been shown that, for different and suitable choices of the operators, one can obtain several known and new classes of variational inequalities. These variational inequalities have important and novel applications in various branches of the engineering, physical, regional, mathematical, social, and natural sciences.

Several numerical methods have been developed for solving variational inequalities using different techniques and ideas. Using the projection technique, one can establish the equivalence between the variational inequalities and a fixed point problem. This alternative equivalent form has been used to study the existence of a solution of the variational inequalities and related problems. This technique and its variant forms have also been used to develop several iterative methods for solving the extended general variational inequalities and optimization problems.

The theory of extended general variational inequalities is quite new, and we shall content ourselves with giving the main flavour of the ideas and techniques involved. The techniques used to analyze the various iterative methods and other results for extended general variational inequalities are a beautiful blend of ideas from the pure and applied sciences. In this paper, we present the main results regarding the various iterative methods, their convergence analysis, and other aspects. The language used is necessarily that of functional analysis and convex analysis, together with some elementary Hilbert space theory. The framework chosen should be seen as a model setting for more general results for other classes of variational inclusions. One of the main purposes of this paper is to demonstrate the close connection among the various classes of iterative methods for solving the extended general variational inequalities. We would like to emphasize that the results discussed in this paper may motivate a large number of novel and important applications, extensions, and generalizations in other fields.

2. Basic Concepts

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $K$ be a nonempty closed convex set in $H$.

For given nonlinear operators $T, g, h : H \to H$, we consider the problem of finding $u \in H : h(u) \in K$ such that
$$\langle Tu, g(v) - h(u)\rangle \ge 0, \quad \forall v \in H : g(v) \in K, \tag{2.1}$$
which is called the extended general variational inequality. Noor [13-16] has shown that the minimum of a class of differentiable nonconvex ($hg$-convex) functions on an $hg$-convex set $K$ in $H$ can be characterized by the extended general variational inequality (2.1).

For this purpose, we recall the following well-known concepts; see [7].

Definition 2.1 (see [6, 13]). Let $K$ be any set in $H$. The set $K$ is said to be $hg$-convex if there exist two functions $h, g : H \to H$ such that
$$h(u) + t(g(v) - h(u)) \in K, \quad \forall u, v \in H : h(u), g(v) \in K, \ t \in [0, 1].$$
Note that every convex set is an $hg$-convex set, but the converse is not true; see [6]. If $h = g$, then the $hg$-convex set $K$ is called a $g$-convex set, which was introduced by Youness [2].
From now onward, we assume that $K$ is an $hg$-convex set, unless otherwise specified.

Definition 2.2 (see [24, 28]). The function $F : K \to \mathbb{R}$ is said to be $hg$-convex if and only if there exist two functions $h, g : H \to H$ such that
$$F(h(u) + t(g(v) - h(u))) \le (1 - t)F(h(u)) + tF(g(v)), \quad \forall u, v \in H : h(u), g(v) \in K, \ t \in [0, 1].$$
Clearly, every convex function is $hg$-convex, but the converse is not true. For $h = g$, Definition 2.2 is due to Youness [2].
We now show that the minimum of a differentiable $hg$-convex function on the $hg$-convex set $K$ in $H$ can be characterized by the extended general variational inequality (2.1). This result is due to Noor [13-16]. We include all the details for the sake of completeness and to convey the main idea.

Lemma 2.3 (see [13-16]). Let $F : K \to \mathbb{R}$ be a differentiable $hg$-convex function. Then $u \in H : h(u) \in K$ is the minimum of the $hg$-convex function $F$ on $K$ if and only if $u \in H : h(u) \in K$ satisfies the inequality
$$\langle F'(h(u)), g(v) - h(u)\rangle \ge 0, \quad \forall v \in H : g(v) \in K, \tag{2.4}$$
where $F'$ is the differential of $F$ at $h(u) \in K$.

Proof. Let $u \in H : h(u) \in K$ be a minimum of the $hg$-convex function $F$ on $K$. Then
$$F(h(u)) \le F(g(v)), \quad \forall v \in H : g(v) \in K. \tag{2.5}$$
Since $K$ is an $hg$-convex set, we have, for all $u, v \in H : h(u), g(v) \in K$ and $t \in [0, 1]$, $v_t \equiv h(u) + t(g(v) - h(u)) \in K$. Setting $g(v) = v_t$ in (2.5), we have
$$F(h(u)) \le F(h(u) + t(g(v) - h(u))).$$
Dividing the above inequality by $t$ and taking $t \to 0$, we have
$$\langle F'(h(u)), g(v) - h(u)\rangle \ge 0,$$
which is the required result (2.4).

Conversely, let $u \in H : h(u) \in K$ satisfy the inequality (2.4). Since $F$ is an $hg$-convex function, we have, for all $u, v \in H : h(u), g(v) \in K$ and $t \in [0, 1]$, $h(u) + t(g(v) - h(u)) \in K$, and
$$F(h(u) + t(g(v) - h(u))) \le (1 - t)F(h(u)) + tF(g(v)),$$
which implies that
$$F(g(v)) - F(h(u)) \ge \frac{F(h(u) + t(g(v) - h(u))) - F(h(u))}{t}.$$
Letting $t \to 0$ in the above inequality and using (2.4), we have
$$F(g(v)) - F(h(u)) \ge \langle F'(h(u)), g(v) - h(u)\rangle \ge 0,$$
which implies
$$F(h(u)) \le F(g(v)), \quad \forall v \in H : g(v) \in K,$$
showing that $h(u) \in K$ is the minimum of $F$ on $K$ in $H$.

Lemma 2.3 implies that the $hg$-convex programming problem can be studied via the extended general variational inequality (2.1) with $Tu = F'(h(u))$. In a similar way, one can show that the extended general variational inequality is the Fritz John optimality condition of the inequality-constrained optimization problem.

We now list some special cases of the extended general variational inequality (2.1).

(i) If $g = h$, then problem (2.1) is equivalent to finding $u \in H : g(u) \in K$ such that
$$\langle Tu, g(v) - g(u)\rangle \ge 0, \quad \forall v \in H : g(v) \in K, \tag{2.12}$$
which is known as the general variational inequality, introduced and studied by Noor [1] in 1988. It turned out that odd-order and nonsymmetric obstacle, free, moving, unilateral, and equilibrium problems arising in various branches of pure and applied sciences can be studied via the general variational inequalities (2.12); see [1-42] and the references therein.

(ii) For $h = I$, the identity operator, problem (2.1) is equivalent to finding $u \in K$ such that
$$\langle Tu, g(v) - u\rangle \ge 0, \quad \forall v \in H : g(v) \in K, \tag{2.13}$$
which is also called the general variational inequality, introduced and studied by Noor [19].

(iii) For $g = I$, the identity operator, the extended general variational inequality (2.1) collapses to: find $u \in H : h(u) \in K$ such that
$$\langle Tu, v - h(u)\rangle \ge 0, \quad \forall v \in K, \tag{2.14}$$
which is also called the general variational inequality; see Noor [11].

(iv) For $g = h = I$, the identity operator, the extended general variational inequality (2.1) is equivalent to finding $u \in K$ such that
$$\langle Tu, v - u\rangle \ge 0, \quad \forall v \in K, \tag{2.15}$$
which is known as the classical variational inequality and was introduced in 1964 by Stampacchia [40]. For recent applications, numerical methods, sensitivity analysis, dynamical systems, and formulations of variational inequalities, see [1-42] and the references therein.

(v) If $K^{*} = \{u \in H : \langle u, v\rangle \ge 0, \ \forall v \in K\}$ is a polar (dual) convex cone of a closed convex cone $K$ in $H$, then problem (2.1) is equivalent to finding $u \in H$ such that
$$h(u) \in K, \quad Tu \in K^{*}, \quad \langle Tu, h(u)\rangle = 0, \tag{2.16}$$
which is known as the general complementarity problem; see [1]. If $h = g = I$, the identity operator, then problem (2.16) is called the generalized complementarity problem. For $h(u) = u - m(u)$, where $m$ is a point-to-point mapping, problem (2.16) is called the quasi (implicit) complementarity problem; see [3, 11] and the references therein.

From the above discussion, it is clear that the extended general variational inequality (2.1) is the most general formulation and includes several previously known classes of variational inequalities and related optimization problems as special cases. These variational inequalities have important applications in mathematical programming and in optimization problems arising in the engineering sciences.

We would like to emphasize that problem (2.1) is equivalent to finding $u \in H : h(u) \in K$ such that
$$\langle \rho Tu + h(u) - g(u), g(v) - h(u)\rangle \ge 0, \quad \forall v \in H : g(v) \in K, \tag{2.17}$$
where $\rho > 0$ is a constant. This equivalent formulation is also useful from the applications point of view.

If $K$ is a convex set, then problem (2.1) is equivalent to finding $u \in H : h(u) \in K$ such that
$$0 \in Tu + N_{K}(h(u)), \tag{2.18}$$
which is called the extended general variational inclusion problem associated with the extended general variational inequality (2.1). Here $N_{K}(h(u))$ denotes the normal cone of $K$ at $h(u)$. This equivalent formulation plays a crucial and basic part in this paper. We would like to point out that this equivalent formulation allows us to use the projection operator technique for solving the extended general variational inequalities of the type (2.1).

We also need the following concepts and results.

Lemma 2.4. Let $K$ be a closed and convex set in $H$. Then, for a given $z \in H$, $u \in K$ satisfies the inequality
$$\langle u - z, v - u\rangle \ge 0, \quad \forall v \in K,$$
if and only if
$$u = P_{K} z,$$
where $P_{K}$ is the projection of $H$ onto the closed and convex set $K$ in $H$.
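To make Lemma 2.4 concrete, the following minimal sketch checks the variational characterization numerically for a set whose projection is available in closed form; the box $K = [0, 1]^n$, the dimension, and all sampled data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
proj_K = lambda z: np.clip(z, 0.0, 1.0)  # projection onto the box K = [0, 1]^n

z = 3.0 * rng.standard_normal(n)  # a point outside K (with high probability)
u = proj_K(z)

# By Lemma 2.4, u = P_K(z) should satisfy <u - z, v - u> >= 0 for every v in K.
samples = rng.uniform(0.0, 1.0, size=(1000, n))  # random test points v in K
worst = min(float(np.dot(u - z, v - u)) for v in samples)
print("minimum of <u - z, v - u> over sampled v:", worst)  # expected: >= 0
```

Replacing `proj_K` with any other closed-form projection (for example, onto a ball or the nonnegative orthant) leaves the check unchanged, which is what makes projection methods attractive for such sets.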

Definition 2.5. For all $u, v \in H$, an operator $T : H \to H$ is said to be (i) strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Tu - Tv, u - v\rangle \ge \alpha\|u - v\|^{2};$$
(ii) Lipschitz continuous if there exists a constant $\beta > 0$ such that
$$\|Tu - Tv\| \le \beta\|u - v\|.$$
From (i) and (ii), it follows that $\alpha \le \beta$.

Remark 2.6. It follows from the strong monotonicity of the operator $h$, with constant $\sigma > 0$, that
$$\|h(u) - h(v)\|\,\|u - v\| \ge \langle h(u) - h(v), u - v\rangle \ge \sigma\|u - v\|^{2},$$
which implies that
$$\|h(u) - h(v)\| \ge \sigma\|u - v\|.$$
This observation enables us to define the following concept.

Definition 2.7. The operator $h$ is said to be firmly expanding if
$$\|h(u) - h(v)\| \ge \|u - v\|, \quad \forall u, v \in H.$$

Definition 2.8. An operator $T$, with respect to the arbitrary operators $g, h$, is said to be $hg$-pseudomonotone if and only if
$$\langle Tu, g(v) - h(u)\rangle \ge 0 \implies \langle Tv, g(v) - h(u)\rangle \ge 0, \quad \forall u, v \in H.$$

3. Projection Methods

It is known that the extended general variational inequality (2.1) is equivalent to a fixed point problem. One can prove this result using Lemma 2.4.

Lemma 3.1 (see [13]). $u \in H : h(u) \in K$ is a solution of the extended general variational inequality (2.17) if and only if $u \in H : h(u) \in K$ satisfies the relation
$$h(u) = P_{K}[g(u) - \rho Tu], \tag{3.1}$$
where $P_{K}$ is the projection of $H$ onto the closed and convex set $K$.

We rewrite the relation (3.1) in the following form:
$$F(u) = u - h(u) + P_{K}[g(u) - \rho Tu], \tag{3.2}$$
which is used to study the existence of a solution of the extended general variational inequality (2.17).

We now study the conditions under which the extended general variational inequality (2.1) has a unique solution; this is the main motivation of our next result.

Theorem 3.2 (see [13]). Let the operators $T, g, h : H \to H$ be relaxed cocoercive with constants $(\gamma_{1}, r_{1})$, $(\gamma_{2}, r_{2})$, $(\gamma_{3}, r_{3})$ and Lipschitz continuous with constants $\beta_{1}$, $\beta_{2}$, $\beta_{3}$, respectively; that is, for example,
$$\langle Tu - Tv, u - v\rangle \ge -\gamma_{1}\|Tu - Tv\|^{2} + r_{1}\|u - v\|^{2}, \quad \forall u, v \in H,$$
and similarly for $g$ and $h$. If
$$\left|\rho - \frac{r_{1} - \gamma_{1}\beta_{1}^{2}}{\beta_{1}^{2}}\right| < \frac{\sqrt{(r_{1} - \gamma_{1}\beta_{1}^{2})^{2} - \beta_{1}^{2}k(2 - k)}}{\beta_{1}^{2}}, \quad r_{1} - \gamma_{1}\beta_{1}^{2} > \beta_{1}\sqrt{k(2 - k)}, \quad k < 1, \tag{3.3}$$
where
$$k = \sqrt{1 - 2(r_{2} - \gamma_{2}\beta_{2}^{2}) + \beta_{2}^{2}} + \sqrt{1 - 2(r_{3} - \gamma_{3}\beta_{3}^{2}) + \beta_{3}^{2}}, \tag{3.4}$$
then there exists a unique solution of the extended general variational inequality (2.1).

Proof. From Lemma 3.1, it follows that problems (3.1) and (2.1) are equivalent. Thus it is enough to show that the map $F(u)$ defined by (3.2) has a fixed point. For all $u_{1}, u_{2} \in H$,
$$\|F(u_{1}) - F(u_{2})\| \le \|u_{1} - u_{2} - (h(u_{1}) - h(u_{2}))\| + \|u_{1} - u_{2} - (g(u_{1}) - g(u_{2}))\| + \|u_{1} - u_{2} - \rho(Tu_{1} - Tu_{2})\|, \tag{3.5}$$
where we have used the fact that the projection operator $P_{K}$ is nonexpansive.

Since the operator $T$ is relaxed cocoercive with constants $(\gamma_{1}, r_{1})$ and Lipschitz continuous with constant $\beta_{1}$, it follows that
$$\|u_{1} - u_{2} - \rho(Tu_{1} - Tu_{2})\|^{2} \le \left(1 - 2\rho(r_{1} - \gamma_{1}\beta_{1}^{2}) + \rho^{2}\beta_{1}^{2}\right)\|u_{1} - u_{2}\|^{2}. \tag{3.6}$$
In a similar way, we have
$$\|u_{1} - u_{2} - (g(u_{1}) - g(u_{2}))\|^{2} \le \left(1 - 2(r_{2} - \gamma_{2}\beta_{2}^{2}) + \beta_{2}^{2}\right)\|u_{1} - u_{2}\|^{2},$$
$$\|u_{1} - u_{2} - (h(u_{1}) - h(u_{2}))\|^{2} \le \left(1 - 2(r_{3} - \gamma_{3}\beta_{3}^{2}) + \beta_{3}^{2}\right)\|u_{1} - u_{2}\|^{2}, \tag{3.7}$$
where $(\gamma_{2}, r_{2})$, $(\gamma_{3}, r_{3})$ and $\beta_{2}$, $\beta_{3}$ are the relaxed cocoercivity and Lipschitz continuity constants of the operators $g$ and $h$, respectively.

From (3.4), (3.5), (3.6), and (3.7), we have
$$\|F(u_{1}) - F(u_{2})\| \le \theta\|u_{1} - u_{2}\|, \tag{3.8}$$
where
$$\theta = k + \sqrt{1 - 2\rho(r_{1} - \gamma_{1}\beta_{1}^{2}) + \rho^{2}\beta_{1}^{2}}. \tag{3.9}$$
From (3.3), it follows that $\theta < 1$, which implies that the map $F(u)$ defined by (3.2) has a fixed point, which is the unique solution of (2.1).
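The following minimal sketch checks the contraction condition (3.3) numerically; the constants are illustrative assumptions (taking $\gamma_{i} = 0$ reduces relaxed cocoercivity to plain strong monotonicity), not values from the paper.

```python
import math

# Assumed constants for T, g, h: relaxed cocoercivity pairs (gamma_i, r_i)
# and Lipschitz constants beta_i.
gamma1, r1, beta1 = 0.0, 0.80, 1.0   # operator T
gamma2, r2, beta2 = 0.0, 0.99, 1.0   # operator g (close to the identity)
gamma3, r3, beta3 = 0.0, 0.99, 1.0   # operator h (close to the identity)

# k from (3.4): the contribution of g and h to the contraction factor.
k = math.sqrt(1 - 2 * (r2 - gamma2 * beta2**2) + beta2**2) \
  + math.sqrt(1 - 2 * (r3 - gamma3 * beta3**2) + beta3**2)

def theta(rho):
    """Contraction factor (3.9) of the map F for a given step size rho."""
    return k + math.sqrt(1 - 2 * rho * (r1 - gamma1 * beta1**2) + rho**2 * beta1**2)

# theta < 1 exactly on the interval of step sizes described by (3.3).
for rho in [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]:
    print(f"rho = {rho:.1f}   theta = {theta(rho):.4f}   contraction: {theta(rho) < 1}")
```

With these constants, $k \approx 0.28$ and $\theta < 1$ roughly for $\rho \in (0.41, 1.19)$, illustrating that (3.3) carves out an interval of admissible step sizes around $\rho = (r_{1} - \gamma_{1}\beta_{1}^{2})/\beta_{1}^{2}$.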

Using the fixed point formulation (3.1), we suggest and analyze the following iterative methods for solving the extended general variational inequality (2.1).

Algorithm 3.3. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$h(u_{n+1}) = P_{K}[g(u_{n}) - \rho Tu_{n}], \quad n = 0, 1, 2, \ldots, \tag{3.10}$$
which is called the explicit iterative method. For the convergence analysis of Algorithm 3.3, see Noor [21].
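A minimal sketch of Algorithm 3.3 on a toy instance follows; $K = \mathbb{R}^{n}_{+}$, the affine operator $T$, the mildly nonlinear $g$, and the invertible firmly expanding $h$ are illustrative assumptions, not prescribed by the paper. Note that carrying out the step requires inverting $h$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
S = M @ M.T
S /= np.linalg.norm(S, 2)              # normalize the spectral norm to 1
A = np.eye(n) + S                      # eigenvalues in [1, 2]: strongly monotone, Lipschitz
b = rng.standard_normal(n)

T = lambda u: A @ u + b
g = lambda u: u + 0.02 * np.tanh(u)    # strongly monotone and Lipschitz continuous
h = lambda u: 1.02 * u                 # firmly expanding: ||h(u) - h(v)|| >= ||u - v||
h_inv = lambda y: y / 1.02             # h must be invertible to carry out the step
proj_K = lambda z: np.maximum(z, 0.0)  # projection onto K = R^n_+

rho = 0.25
u = np.zeros(n)
for it in range(500):
    u_new = h_inv(proj_K(g(u) - rho * T(u)))  # h(u_{n+1}) = P_K[g(u_n) - rho*T(u_n)]
    if np.linalg.norm(u_new - u) < 1e-12:
        break
    u = u_new

# By Lemma 3.1, a solution satisfies h(u) = P_K[g(u) - rho*T(u)]; check the residual.
print(it, np.linalg.norm(h(u) - proj_K(g(u) - rho * T(u))))
```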
We again use the fixed point formulation to suggest and analyze the following iterative method for solving (2.1).

Algorithm 3.4. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$h(u_{n+1}) = P_{K}[g(u_{n}) - \rho Tu_{n+1}], \quad n = 0, 1, 2, \ldots \tag{3.11}$$
Algorithm 3.4 is an implicit iterative method for solving the extended general variational inequality (2.1). Using Lemma 2.4, one can rewrite Algorithm 3.4 in the following equivalent form.

Algorithm 3.5. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$u_{n+1} \in H : h(u_{n+1}) \in K, \tag{3.12}$$
$$\langle \rho Tu_{n+1} + h(u_{n+1}) - g(u_{n}), g(v) - h(u_{n+1})\rangle \ge 0, \quad \forall v \in H : g(v) \in K. \tag{3.13}$$
To implement Algorithm 3.4, we use the predictor-corrector technique. We use Algorithm 3.3 as a predictor and Algorithm 3.4 as a corrector to obtain the following predictor-corrector method for solving the extended general variational inequality (2.1).

Algorithm 3.6. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$h(w_{n}) = P_{K}[g(u_{n}) - \rho Tu_{n}],$$
$$h(u_{n+1}) = P_{K}[g(u_{n}) - \rho Tw_{n}], \quad n = 0, 1, 2, \ldots \tag{3.14}$$
Algorithm 3.6 is known as the extended extragradient method. This method includes the extragradient method of Korpelevich [8] for $g = h = I$. Here we would like to point out that the implicit method (Algorithm 3.4) and the extragradient method (Algorithm 3.6) are equivalent, in the sense that the corrector step of Algorithm 3.6 is the implicit relation (3.11) with $Tu_{n+1}$ replaced by its predicted value $Tw_{n}$.
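In the special case $g = h = I$, Algorithm 3.6 reduces to Korpelevich's extragradient method. The following minimal sketch contrasts it with the explicit method on an assumed toy operator that is monotone but not strongly monotone (a rotation), for which the explicit step diverges while the extragradient step converges; all concrete choices are illustrative.

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # rotation: monotone, since <Au, u> = 0 for all u
T = lambda u: A @ u
proj_K = lambda z: z             # K = R^2, so the projection is the identity

rho = 0.5
u_explicit = np.array([1.0, 0.0])
u_extra = np.array([1.0, 0.0])
for _ in range(50):
    # Explicit method: u <- P_K[u - rho*T(u)] (diverges for this operator).
    u_explicit = proj_K(u_explicit - rho * T(u_explicit))
    # Extragradient: predictor w, then corrector re-evaluates T at w.
    w = proj_K(u_extra - rho * T(u_extra))
    u_extra = proj_K(u_extra - rho * T(w))

print("explicit      ||u|| =", np.linalg.norm(u_explicit))  # grows
print("extragradient ||u|| =", np.linalg.norm(u_extra))     # shrinks toward 0
```

The unique solution here is $u = 0$; re-evaluating $T$ at the predicted point is exactly what damps the rotation and makes the corrector step contractive.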

We now consider the convergence analysis of Algorithm 3.4, and this is the main motivation of our next result.

Theorem 3.7. Let $u \in H : h(u) \in K$ be a solution of (2.1), and let $u_{n+1}$ be the approximate solution obtained from Algorithm 3.4. If the operator $T$ is $hg$-pseudomonotone, then
$$\|g(u) - h(u_{n+1})\|^{2} \le \|g(u) - g(u_{n})\|^{2} - \|g(u_{n}) - h(u_{n+1})\|^{2}. \tag{3.15}$$

Proof. Let $u \in H : h(u) \in K$ be a solution of (2.1). Then
$$\langle Tv, g(v) - h(u)\rangle \ge 0, \quad \forall v \in H : g(v) \in K, \tag{3.16}$$
since the operator $T$ is $hg$-pseudomonotone. Taking $v = u_{n+1}$ in (3.16), we have
$$\langle Tu_{n+1}, g(u_{n+1}) - h(u)\rangle \ge 0. \tag{3.17}$$
Taking $v = u$ in (3.13), we have
$$\langle \rho Tu_{n+1} + h(u_{n+1}) - g(u_{n}), g(u) - h(u_{n+1})\rangle \ge 0. \tag{3.18}$$
From (3.17) and (3.18), we have
$$\langle h(u_{n+1}) - g(u_{n}), g(u) - h(u_{n+1})\rangle \ge 0. \tag{3.19}$$
It is well known that
$$2\langle a, b\rangle = \|a + b\|^{2} - \|a\|^{2} - \|b\|^{2}, \quad \forall a, b \in H. \tag{3.20}$$
Using (3.19), from (3.20) with $a = h(u_{n+1}) - g(u_{n})$ and $b = g(u) - h(u_{n+1})$, one can easily obtain the required result (3.15).

Theorem 3.8. Let $u \in H : h(u) \in K$ be a solution of (2.1), and let $u_{n+1}$ be the approximate solution obtained from Algorithm 3.4. Let $H$ be a finite-dimensional space. Then $\lim_{n \to \infty} u_{n} = \hat{u}$, where $\hat{u} \in H : h(\hat{u}) \in K$ is a solution of (2.1).

Proof. Let $u \in H : h(u) \in K$ be a solution of (2.1). Then, from (3.15), the sequence $\{\|g(u) - g(u_{n})\|\}$ is nonincreasing and bounded, and
$$\sum_{n=0}^{\infty}\|g(u_{n}) - h(u_{n+1})\|^{2} \le \|g(u) - g(u_{0})\|^{2}, \tag{3.22}$$
which implies
$$\lim_{n \to \infty}\|g(u_{n}) - h(u_{n+1})\| = 0. \tag{3.23}$$
Let $\hat{u}$ be a cluster point of $\{u_{n}\}$. Then there exists a subsequence $\{u_{n_{i}}\}$ such that $\{u_{n_{i}}\}$ converges to $\hat{u}$. Replacing $u_{n+1}$ by $u_{n_{i}}$ in (3.13), taking the limit $n_{i} \to \infty$, and using (3.23), we have
$$\langle T\hat{u}, g(v) - h(\hat{u})\rangle \ge 0, \quad \forall v \in H : g(v) \in K. \tag{3.24}$$
This shows that $\hat{u}$ solves the extended general variational inequality (2.1) and
$$\|g(\hat{u}) - h(u_{n+1})\|^{2} \le \|g(\hat{u}) - g(u_{n})\|^{2}, \tag{3.25}$$
which implies that the sequence $\{u_{n}\}$ has exactly one cluster point, and $\lim_{n \to \infty} u_{n} = \hat{u}$ is a solution of (2.1), the required result.

We again use the fixed point formulation (3.1) to suggest the following method for solving (2.1).

Algorithm 3.9. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$h(u_{n+1}) = P_{K}[g(u_{n+1}) - \rho Tu_{n+1}], \quad n = 0, 1, 2, \ldots, \tag{3.26}$$
which is also known as an implicit method. To implement this method, we use the prediction-correction technique. We use Algorithm 3.3 as the predictor and Algorithm 3.9 as the corrector. Consequently, we obtain the following iterative method.

Algorithm 3.10. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the following iterative schemes:
$$h(w_{n}) = P_{K}[g(u_{n}) - \rho Tu_{n}], \tag{3.27}$$
$$h(u_{n+1}) = P_{K}[g(w_{n}) - \rho Tw_{n}], \quad n = 0, 1, 2, \ldots \tag{3.28}$$
Algorithm 3.10 is called the two-step or predictor-corrector method for solving the extended general variational inequality (2.1).

For a given step size $\alpha_{n} > 0$, one can suggest and analyze the following two-step iterative method.

Algorithm 3.11. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative schemes:
$$h(w_{n}) = P_{K}[g(u_{n}) - \rho Tu_{n}], \tag{3.29}$$
$$h(u_{n+1}) = (1 - \alpha_{n})h(u_{n}) + \alpha_{n}P_{K}[g(w_{n}) - \rho Tw_{n}], \quad n = 0, 1, 2, \ldots \tag{3.30}$$
Note that for $\alpha_{n} = 1$, Algorithm 3.11 reduces to Algorithm 3.10. Using the technique of Noor [12], one may study the convergence analysis of Algorithms 3.10 and 3.11.
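A minimal sketch of the two-step scheme with the relaxation parameter $\alpha_{n}$ follows, reusing the same assumed toy instance as in the sketch of Algorithm 3.3 (affine $T$, mild $g$, invertible firmly expanding $h$, $K = \mathbb{R}^{n}_{+}$); the fixed relaxation value and all other concrete choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
S = M @ M.T
S /= np.linalg.norm(S, 2)
A = np.eye(n) + S                      # strongly monotone, Lipschitz continuous
b = rng.standard_normal(n)

T = lambda u: A @ u + b
g = lambda u: u + 0.02 * np.tanh(u)
h = lambda u: 1.02 * u
h_inv = lambda y: y / 1.02             # h must be invertible to run the scheme
proj_K = lambda z: np.maximum(z, 0.0)  # K = R^n_+

rho, alpha = 0.25, 0.8                 # alpha = 1 recovers Algorithm 3.10
u = np.zeros(n)
for _ in range(400):
    w = h_inv(proj_K(g(u) - rho * T(u)))                   # predictor step
    u = h_inv((1 - alpha) * h(u)
              + alpha * proj_K(g(w) - rho * T(w)))         # relaxed corrector step

# At a solution, h(u) = P_K[g(u) - rho*T(u)]; report the fixed-point residual.
print("residual:", np.linalg.norm(h(u) - proj_K(g(u) - rho * T(u))))
```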

4. Auxiliary Principle Technique

In this section, we use the auxiliary principle technique to study the existence of a solution of the extended general variational inequality (2.1).

Theorem 4.1. Let the operator $T$ be strongly monotone with constant $\alpha > 0$ and Lipschitz continuous with constant $\beta > 0$. Let $g$ be a strongly monotone and Lipschitz continuous operator with constants $\sigma > 0$ and $\delta > 0$, respectively. If the operator $h$ is firmly expanding and there exists a constant $\rho > 0$ such that
$$\left|\rho - \frac{\alpha}{\beta^{2}}\right| < \frac{\sqrt{\alpha^{2} - \beta^{2}k(2 - k)}}{\beta^{2}}, \quad \alpha > \beta\sqrt{k(2 - k)}, \quad k < 1, \tag{4.1}$$
where
$$k = \sqrt{1 - 2\sigma + \delta^{2}}, \tag{4.2}$$
then the extended general variational inequality (2.1) has a unique solution.

Proof. We use the auxiliary principle technique to prove the existence of a solution of (2.1). For a given $u \in H : h(u) \in K$ satisfying the extended general variational inequality (2.1), we consider the problem of finding a solution $w \in H : h(w) \in K$ such that
$$\langle \rho Tu + h(w) - g(u), g(v) - h(w)\rangle \ge 0, \quad \forall v \in H : g(v) \in K, \tag{4.3}$$
where $\rho > 0$ is a constant.

The inequality of type (4.3) is called the auxiliary extended general variational inequality associated with the problem (2.1). It is clear that the relation (4.3) defines a mapping $u \mapsto w$. It is enough to show that the mapping defined by the relation (4.3) has a unique fixed point belonging to $H$ satisfying the extended general variational inequality (2.1). Let $w_{1} \ne w_{2}$ be two solutions of (4.3) related to $u_{1} \ne u_{2}$, respectively. It is sufficient to show that, for a well-chosen $\rho > 0$,
$$\|w_{1} - w_{2}\| \le \theta\|u_{1} - u_{2}\|,$$
with $0 < \theta < 1$, where $\theta$ is independent of $u_{1}$ and $u_{2}$. Taking $g(v) = h(w_{2})$ in (4.3) related to $u_{1}$ and $g(v) = h(w_{1})$ in (4.3) related to $u_{2}$, and adding the resultants, we have
$$\|h(w_{1}) - h(w_{2})\|^{2} \le \langle g(u_{1}) - g(u_{2}) - \rho(Tu_{1} - Tu_{2}), h(w_{1}) - h(w_{2})\rangle, \tag{4.4}$$
from which we have
$$\|h(w_{1}) - h(w_{2})\| \le \|g(u_{1}) - g(u_{2}) - \rho(Tu_{1} - Tu_{2})\| \le \|u_{1} - u_{2} - (g(u_{1}) - g(u_{2}))\| + \|u_{1} - u_{2} - \rho(Tu_{1} - Tu_{2})\|. \tag{4.5}$$
Since $T$ is both strongly monotone and Lipschitz continuous with constants $\alpha > 0$ and $\beta > 0$, respectively, it follows that
$$\|u_{1} - u_{2} - \rho(Tu_{1} - Tu_{2})\|^{2} \le (1 - 2\rho\alpha + \rho^{2}\beta^{2})\|u_{1} - u_{2}\|^{2}. \tag{4.6}$$
In a similar way, using the strong monotonicity of $g$ with constant $\sigma > 0$ and its Lipschitz continuity with constant $\delta > 0$, we have
$$\|u_{1} - u_{2} - (g(u_{1}) - g(u_{2}))\|^{2} \le (1 - 2\sigma + \delta^{2})\|u_{1} - u_{2}\|^{2}. \tag{4.7}$$
From (4.5), (4.6), (4.7), (4.2), and using the fact that the operator $h$ is firmly expanding, we have
$$\|w_{1} - w_{2}\| \le \|h(w_{1}) - h(w_{2})\| \le \theta\|u_{1} - u_{2}\|, \tag{4.8}$$
where
$$\theta = k + \sqrt{1 - 2\rho\alpha + \rho^{2}\beta^{2}}. \tag{4.9}$$
From (4.1) and (4.2), it follows that $\theta < 1$, showing that the mapping defined by (4.3) has a fixed point belonging to $H$, which is the solution of (2.1), the required result.

We note that if $w = u$, then clearly $u$ is a solution of the extended general variational inequality (2.17). This observation enables us to suggest and analyze the following iterative method for solving the extended general variational inequality (2.1).

Algorithm 4.2. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$\langle \rho Tu_{n} + h(u_{n+1}) - g(u_{n}), g(v) - h(u_{n+1})\rangle \ge 0, \quad \forall v \in H : g(v) \in K. \tag{4.10}$$
We remark that Algorithm 4.2 can be rewritten in the following equivalent form using the projection technique.

Algorithm 4.3. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$h(u_{n+1}) = P_{K}[g(u_{n}) - \rho Tu_{n}], \quad n = 0, 1, 2, \ldots, \tag{4.11}$$
which is exactly Algorithm 3.3.

We now use the auxiliary principle technique to suggest the implicit iterative method for solving the extended general variational inequality (2.1). For a given $u \in H : h(u) \in K$ satisfying the extended general variational inequality (2.1), we consider the problem of finding a solution $w \in H : h(w) \in K$ such that
$$\langle \rho Tw + h(w) - g(u), g(v) - h(w)\rangle \ge 0, \quad \forall v \in H : g(v) \in K, \tag{4.12}$$
where $\rho > 0$ is a constant.

It is clear that if $w = u$, then $u$ is a solution of the extended general variational inequality (2.17). We use this fact to suggest another iterative method for solving (2.1).

Algorithm 4.4. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$\langle \rho Tu_{n+1} + h(u_{n+1}) - g(u_{n}), g(v) - h(u_{n+1})\rangle \ge 0, \quad \forall v \in H : g(v) \in K. \tag{4.13}$$
We remark that Algorithm 4.4 can be rewritten in the following equivalent form using the projection technique.

Algorithm 4.5. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$h(u_{n+1}) = P_{K}[g(u_{n}) - \rho Tu_{n+1}], \quad n = 0, 1, 2, \ldots, \tag{4.14}$$
which is exactly Algorithm 3.4.

The auxiliary principle technique can be used to develop several two-step, three-step, and alternating direction methods for solving the extended general variational inequalities. This is an interesting problem for further research.

We now define the residue vector by
$$R(u) \equiv h(u) - P_{K}[g(u) - \rho Tu]. \tag{4.15}$$
It is clear from Lemma 2.4 that the extended general variational inequality (2.1) has a solution $u \in H : h(u) \in K$ if and only if $u \in H : h(u) \in K$ is a zero of the equation
$$R(u) = 0. \tag{4.16}$$
For a positive step size $\gamma > 0$, (4.16) can be written as
$$u = u - \gamma R(u). \tag{4.17}$$
This fixed point formulation can be used to suggest and analyze the following iterative method for solving the extended general variational inequality (2.1).

Algorithm 4.6. For a given $u_{0} \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = u_{n} - \gamma R(u_{n+1}) = u_{n} - \gamma\left(h(u_{n+1}) - P_{K}[g(u_{n+1}) - \rho Tu_{n+1}]\right), \quad n = 0, 1, 2, \ldots, \tag{4.18}$$
which is an implicit method.
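For illustration, the following minimal sketch runs the explicit counterpart $u_{n+1} = u_{n} - \gamma R(u_{n})$ of Algorithm 4.6 and uses $\|R(u_{n})\|$ as the stopping test, which is natural in view of (4.16); the toy operators, the step sizes, and $K = \mathbb{R}^{n}_{+}$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n))
S = M @ M.T
S /= np.linalg.norm(S, 2)
A = np.eye(n) + S                      # strongly monotone, Lipschitz continuous
b = rng.standard_normal(n)

T = lambda u: A @ u + b
g = lambda u: u + 0.02 * np.tanh(u)
h = lambda u: 1.02 * u
proj_K = lambda z: np.maximum(z, 0.0)  # K = R^n_+

rho, gamma = 0.25, 1.0
R = lambda u: h(u) - proj_K(g(u) - rho * T(u))  # residue vector (4.15)

u = np.zeros(n)
for it in range(1000):
    if np.linalg.norm(R(u)) < 1e-10:   # stopping test motivated by (4.16)
        break
    u = u - gamma * R(u)               # explicit residue iteration

print(it, np.linalg.norm(R(u)))
```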

It is worth mentioning that one can suggest and analyze a wide class of iterative methods for solving the extended general variational inequality and its variant forms by using the technique of Noor [11]. We leave this to the interested readers.

5. Conclusion

In this paper, we have introduced and considered a new class of variational inequalities, called the extended general variational inequalities. We have established the equivalence between the extended general variational inequalities and a fixed point problem using the projection operator technique. This equivalence is used to study the existence of a solution of the extended general variational inequalities as well as to suggest and analyze several iterative methods for solving them. Several special cases are also discussed. The results proved in this paper can be extended to multivalued extended general variational inequalities and to systems of extended general variational inequalities using the technique of this paper. The comparison of the iterative methods for solving extended general variational inequalities is an interesting problem for future research. Using the technique of Noor [11], one can study the sensitivity analysis and the properties of the associated dynamical system related to the extended general variational inequalities. We hope that the ideas and techniques of this paper may stimulate further research in this field.

Acknowledgment

The author is grateful to Dr. S. M. Junaid Zaidi, Rector, COMSATS Institute of Information Technology, Pakistan, for providing the excellent research facilities.