Abstract

It is well known that the mixed variational inequalities are equivalent to the fixed point problem. We use this alternative equivalent formulation to suggest some new proximal point methods for solving the mixed variational inequalities. These new methods include the explicit, the implicit, and the extragradient method as special cases. The convergence analysis of these new methods is considered under some suitable conditions. Our method of constructing these iterative methods is very simple. Results proved in this paper may stimulate further research in this direction.

1. Introduction

Variational inequalities are being used to study a wide class of diverse, seemingly unrelated problems arising in various branches of pure and applied sciences in a unified framework. Several extensions and generalizations of the variational inequalities have been considered using novel and innovative techniques. A useful and significant generalization of the variational inequalities is known as the mixed variational inequality, which involves a nonlinear term. Due to the presence of this nonlinear term, the projection and its variant forms cannot be applied to establish the equivalence between the mixed variational inequalities and the fixed point problem. It is well known that if the nonlinear term in the mixed variational inequality is a proper, convex, and lower semicontinuous function, then one can establish the equivalence between the mixed variational inequality and the fixed point problem. This equivalence has been used to study the existence of a solution of the mixed variational inequalities as well as to develop numerical methods. We use this alternative equivalent formulation to suggest and analyze a wide class of proximal point methods for solving the mixed variational inequalities, which includes the implicit and explicit resolvent methods as special cases. This is the main motivation of this paper. We also consider the convergence criteria of these methods under suitable conditions. Our method of constructing these methods is very simple as compared with other methods. We have included only a few iterative methods. Readers are encouraged to construct other methods using the technique of this paper for solving other kinds of variational inequalities and related optimization problems. We hope that the ideas and techniques of this paper may stimulate further research in this area of pure and applied sciences.

2. Preliminaries

Let $H$ be a real Hilbert space, whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\| \cdot \|$, respectively. Let $K$ be a nonempty, closed, and convex set in $H$, and let $\varphi : H \to \mathbb{R} \cup \{+\infty\}$ be a continuous function.

For a given operator $T : H \to H$, we consider the problem of finding $u \in H$ such that
$$\langle Tu, v - u \rangle + \varphi(v) - \varphi(u) \geq 0, \quad \forall v \in H, \tag{2.1}$$
which is called the mixed variational inequality or the variational inequality of the second kind.

For the applications, formulations, numerical methods, and other aspects of the mixed variational inequalities, see [1–15] and the references therein.

If the operator $T$ is linear, positive, and symmetric and the function $\varphi$ is convex, then the minimum of the functional $I[v]$, defined as
$$I[v] = \frac{1}{2}\langle Tv, v \rangle + \varphi(v), \tag{2.2}$$
on the convex set $K$ can be characterized by the mixed variational inequality (2.1).

If the function $\varphi$ is proper, convex, and lower semicontinuous, then problem (2.1) is equivalent to finding $u \in H$ such that
$$0 \in Tu + \partial\varphi(u), \tag{2.3}$$
where $\partial\varphi$ denotes the subdifferential of $\varphi$. Problem (2.3) is called the variational inclusion problem, or the problem of finding a zero of the sum of two (or more) monotone operators. It is known that a wide class of problems with applications in industrial, physical, regional, and engineering sciences can be studied via problems (2.1) and (2.3); see, for example, [1–15] and the references therein.

If the function $\varphi$ is the indicator function of a closed and convex set $K$ in the Hilbert space $H$, then problem (2.1) is equivalent to finding $u \in K$ such that
$$\langle Tu, v - u \rangle \geq 0, \quad \forall v \in K, \tag{2.4}$$
which is known as the classical variational inequality, introduced and studied by Stampacchia [15] in 1964. For the applications, formulations, generalizations, numerical results, and other aspects of the variational inequalities, see [1–15].

We now recall some well-known results and concepts.

Definition 2.1 (see [1]). Let $A$ be a maximal monotone operator. Then the resolvent operator $J_A$ associated with $A$ is defined as
$$J_A(u) = (I + \rho A)^{-1}(u), \quad \forall u \in H, \tag{2.5}$$
where $\rho > 0$ is a constant and $I$ is the identity operator.

It is well known that the subdifferential $\partial\varphi$ is a maximal monotone operator, and we can define its resolvent as
$$J_\varphi(u) = (I + \rho\,\partial\varphi)^{-1}(u), \quad \forall u \in H. \tag{2.6}$$
The resolvent operator $J_\varphi$ defined by (2.6) has the following useful characterization.

Lemma 2.2 (see [1]). For a given $z \in H$, $u \in H$ satisfies the inequality
$$\langle u - z, v - u \rangle + \rho\varphi(v) - \rho\varphi(u) \geq 0, \quad \forall v \in H, \tag{2.7}$$
if and only if
$$u = J_\varphi(z), \tag{2.8}$$
where $J_\varphi$ is the resolvent operator defined by (2.6).

It is well known that the resolvent operator $J_\varphi$ is nonexpansive, that is,
$$\|J_\varphi(u) - J_\varphi(v)\| \leq \|u - v\|, \quad \forall u, v \in H. \tag{2.9}$$
Using Lemma 2.2, one can easily show that the mixed variational inequality (2.1) is equivalent to finding $u \in H$ such that
$$u = J_\varphi[u - \rho T u], \tag{2.10}$$
where $\rho > 0$ is a constant.

Lemma 2.2 implies that the mixed variational inequality (2.1) and the fixed point problem (2.10) are equivalent. This alternative equivalent formulation has played a central role in the development of the mixed variational inequality theory. This equivalent formulation has been used extensively to develop several iterative methods for solving the variational inequalities; see, for example, [1–15] and the references therein.
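As a concrete illustration of the resolvent (2.6) (our own example, not taken from the paper): for the choice $\varphi(u) = \|u\|_1$, which is proper, convex, and lower semicontinuous, the resolvent $J_\varphi = (I + \rho\,\partial\varphi)^{-1}$ can be evaluated in closed form as componentwise soft-thresholding. A minimal Python sketch:

```python
def soft_threshold(z, rho):
    """Resolvent J_phi(z) = (I + rho * d phi)^{-1}(z) for phi(u) = ||u||_1.

    It acts componentwise: each coordinate is shrunk toward 0 by rho,
    which is the classical soft-thresholding operator.
    """
    return [(abs(zi) - rho) * (1.0 if zi > 0 else -1.0) if abs(zi) > rho else 0.0
            for zi in z]

u = soft_threshold([2.5, -0.5, -3.0], rho=1.0)  # -> [1.5, 0.0, -2.0]
```

One can also verify the nonexpansiveness (2.9) of this operator numerically, e.g. $|J_\varphi(2.0) - J_\varphi(0.5)| = 1.0 \le 1.5 = |2.0 - 0.5|$ in one dimension.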

Definition 2.3. An operator $T : H \to H$ is said to be partially relaxed strongly monotone if and only if there exists a constant $\alpha > 0$ such that
$$\langle Tu - Tv, z - v \rangle \geq -\alpha\|z - u\|^2, \quad \forall u, v, z \in H, \tag{2.11}$$
and pseudomonotone with respect to the function $\varphi$ if and only if
$$\langle Tu, v - u \rangle + \varphi(v) - \varphi(u) \geq 0 \implies \langle Tv, v - u \rangle + \varphi(v) - \varphi(u) \geq 0, \quad \forall u, v \in H. \tag{2.12}$$

Definition 2.4. An operator $T : H \to H$ is said to be partially relaxed strongly pseudomonotone if the operator $T$ is both partially relaxed strongly monotone and pseudomonotone.

3. Main Results

In this section, we use the fixed point formulation (2.10) to suggest a new unified implicit method for solving the mixed variational inequalities (2.1), and this is the main motivation of this paper.

Algorithm 3.1. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = J_\varphi[u_n - \rho T u_n], \quad n = 0, 1, 2, \ldots. \tag{3.1}$$

Algorithm 3.1 is known as the projection iterative method. For the convergence analysis of Algorithm 3.1, see Noor et al. [6].
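To make Algorithm 3.1 concrete, here is a small Python sketch (our own illustration; the operator $T(u) = u - 3$ and the choice $\varphi(u) = |u|$ are assumptions made purely for this example). With these choices the scheme $u_{n+1} = J_\varphi[u_n - \rho T u_n]$ becomes a scalar proximal-gradient iteration, and its limit can be checked directly against the fixed point formulation (2.10):

```python
def soft_threshold(z, rho):
    # Resolvent J_phi for phi(u) = |u| in one dimension (soft-thresholding).
    return (abs(z) - rho) * (1.0 if z > 0 else -1.0) if abs(z) > rho else 0.0

def resolvent_iteration(u, T, rho, n_iters):
    """Algorithm 3.1: u_{n+1} = J_phi[u_n - rho * T(u_n)]."""
    for _ in range(n_iters):
        u = soft_threshold(u - rho * T(u), rho)
    return u

# With T(u) = u - 3, phi(u) = |u|, and rho = 1, the fixed point of (2.10)
# is u* = soft_threshold(3, 1) = 2, and the iteration reaches it.
u_star = resolvent_iteration(0.0, lambda u: u - 3.0, rho=1.0, n_iters=25)
# u_star -> 2.0
```

The design point is that each step only requires an evaluation of $T$ and one resolvent (here, a closed-form soft-threshold), which is what makes the explicit method cheap per iteration.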

For a given constant $\lambda \in [0, 1]$, we can rewrite (2.10) as
$$u = J_\varphi[u - \rho T((1 - \lambda)u + \lambda u)]. \tag{3.2}$$
This fixed point formulation is used to suggest the following new proximal point iterative method for solving the mixed variational inequality (2.1).

Algorithm 3.2. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = J_\varphi[u_n - \rho T((1 - \lambda)u_n + \lambda u_{n+1})], \quad n = 0, 1, 2, \ldots. \tag{3.3}$$
Note that Algorithm 3.2 is an implicit-type iterative method. It is clear that for $\lambda = 0$, Algorithm 3.2 reduces to Algorithm 3.1. For $\lambda = 1$, Algorithm 3.2 collapses to the following implicit iterative method for solving the mixed variational inequality (2.1).

Algorithm 3.3. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = J_\varphi[u_n - \rho T u_{n+1}], \quad n = 0, 1, 2, \ldots. \tag{3.4}$$
For the convergence analysis of Algorithm 3.3, see Noor [8] and the references therein. In order to implement Algorithm 3.2, we use the predictor-corrector technique: we use Algorithm 3.1 as the predictor and Algorithm 3.2 as the corrector. Consequently, we obtain the following two-step iterative method for solving the mixed variational inequality (2.1).

Algorithm 3.4. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$w_n = J_\varphi[u_n - \rho T u_n],$$
$$u_{n+1} = J_\varphi[u_n - \rho T((1 - \lambda)u_n + \lambda w_n)], \quad n = 0, 1, 2, \ldots. \tag{3.5}$$
Algorithm 3.4 is a new two-step iterative method for solving the mixed variational inequality (2.1).

For $\lambda = 0$, Algorithm 3.4 reduces to Algorithm 3.1. For $\lambda = 1$, Algorithm 3.4 reduces to the following iterative method for solving the mixed variational inequality (2.1).

Algorithm 3.5. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$w_n = J_\varphi[u_n - \rho T u_n],$$
$$u_{n+1} = J_\varphi[u_n - \rho T w_n], \quad n = 0, 1, 2, \ldots, \tag{3.6}$$
which is known as the extragradient method and is due to Korpelevič [5].
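The practical value of the extragradient method is that it can converge where the explicit method (Algorithm 3.1) fails. A small self-contained Python sketch (our own example, with $\varphi \equiv 0$, so that $J_\varphi$ is the identity and (2.1) reduces to solving $Tu = 0$): for the monotone but non-symmetric rotation operator $T(x, y) = (y, -x)$, the explicit iteration spirals outward for every fixed $\rho > 0$, while the extragradient iteration contracts to the solution $(0, 0)$.

```python
def T(u):
    # A monotone, Lipschitz, non-symmetric operator: 90-degree rotation.
    x, y = u
    return (y, -x)

def axpy(u, d, rho):
    # Helper: u - rho * d, componentwise.
    return (u[0] - rho * d[0], u[1] - rho * d[1])

def extragradient(u, rho, n_iters):
    """Extragradient method with phi = 0 (J_phi = identity):
        w_n     = u_n - rho * T(u_n)   (predictor: one explicit step)
        u_{n+1} = u_n - rho * T(w_n)   (corrector)."""
    for _ in range(n_iters):
        w = axpy(u, T(u), rho)
        u = axpy(u, T(w), rho)
    return u

u = extragradient((1.0, 1.0), rho=0.5, n_iters=100)
# ||u|| shrinks by the factor sqrt((1 - rho^2)^2 + rho^2) < 1 per step,
# so u is now very close to the solution (0, 0).
```

By contrast, the explicit step $u_{n+1} = u_n - \rho T(u_n)$ multiplies the norm by $\sqrt{1 + \rho^2} > 1$ on this operator, so it diverges for any fixed $\rho$; the predictor-corrector structure is what restores convergence.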

For $\lambda = \frac{1}{2}$, Algorithm 3.2 collapses to the following iterative method for solving the mixed variational inequality (2.1), which appears to be a new one.

Algorithm 3.6. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = J_\varphi\left[u_n - \rho T\left(\frac{u_n + u_{n+1}}{2}\right)\right], \quad n = 0, 1, 2, \ldots. \tag{3.7}$$
This clearly shows that Algorithm 3.2 is a unified implicit method and includes the previously known extragradient method of Korpelevič [5] and several new methods as special cases.
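Implicit schemes such as Algorithm 3.3 require solving an equation for $u_{n+1}$ at each step, since the new iterate appears inside $T$. One simple way to realize such a step (our own choice for this sketch, not something prescribed by the paper) is an inner Picard iteration on the fixed point map, which converges for small $\rho$ when $T$ is Lipschitz continuous:

```python
def implicit_resolvent_step(u_n, T, resolvent, rho, inner_iters=60):
    """One step of the implicit scheme u_{n+1} = J_phi[u_n - rho * T(u_{n+1})],
    solved by Picard iteration on the inner map, warm-started at u_n."""
    u = u_n
    for _ in range(inner_iters):
        u = resolvent(u_n - rho * T(u))
    return u

# Toy check with phi = 0 (so J_phi is the identity) and T(u) = 2u:
# the implicit equation u = u_n - 2*rho*u has the closed form
# u = u_n / (1 + 2*rho), which the inner loop reproduces.
u_next = implicit_resolvent_step(1.0, lambda u: 2.0 * u, lambda z: z, rho=0.1)
# u_next is (approximately) 1.0 / 1.2
```

For $\rho$ with $\rho L < 1$ ($L$ the Lipschitz constant of $T$), the inner map is a contraction by the nonexpansiveness (2.9) of the resolvent, which justifies this realization.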
We now consider the convergence criteria of Algorithm 3.2 and this is the main motivation of our next result.

Theorem 3.7. Let the operator $T$ be partially relaxed strongly pseudomonotone with constant $\alpha > 0$, and let $u \in H$ be a solution of (2.1). Then
$$\|u - u_{n+1}\|^2 \leq \|u - u_n\|^2 - (1 - 2\rho\alpha)\|u_n - u_{n+1}\|^2, \tag{3.8}$$
where $u_{n+1}$ is the approximate solution obtained from Algorithm 3.2.

Proof. Let $u \in H$ be a solution of (2.1). Then, by using the pseudomonotonicity of $T$, we have
$$\langle Tv, v - u \rangle + \varphi(v) - \varphi(u) \geq 0, \quad \forall v \in H. \tag{3.9}$$
Taking $v = u_{n+1}$ in (3.9), we have
$$\langle Tu_{n+1}, u_{n+1} - u \rangle + \varphi(u_{n+1}) - \varphi(u) \geq 0. \tag{3.10}$$
Using Lemma 2.2, we can rewrite (3.3) in the following equivalent form:
$$\langle u_{n+1} - u_n + \rho T((1 - \lambda)u_n + \lambda u_{n+1}), v - u_{n+1} \rangle + \rho\varphi(v) - \rho\varphi(u_{n+1}) \geq 0, \quad \forall v \in H. \tag{3.11}$$
Taking $v = u$ in (3.11), we have
$$\langle u_{n+1} - u_n, u - u_{n+1} \rangle \geq \rho\langle T((1 - \lambda)u_n + \lambda u_{n+1}), u_{n+1} - u \rangle + \rho\varphi(u_{n+1}) - \rho\varphi(u). \tag{3.12}$$
From (3.10) and (3.12), we have
$$\langle u_{n+1} - u_n, u - u_{n+1} \rangle \geq \rho\langle T((1 - \lambda)u_n + \lambda u_{n+1}) - Tu_{n+1}, u_{n+1} - u \rangle. \tag{3.13}$$
Using the identity $2\langle a, b \rangle = \|a + b\|^2 - \|a\|^2 - \|b\|^2$, for all $a, b \in H$, and the partially relaxed strong monotonicity of the operator $T$, from (3.13) we obtain
$$\|u - u_{n+1}\|^2 \leq \|u - u_n\|^2 - (1 - 2\rho\alpha)\|u_n - u_{n+1}\|^2, \tag{3.14}$$
which is the required (3.8).

Theorem 3.8. Let $H$ be a finite-dimensional space, let $u_{n+1}$ be the approximate solution obtained from Algorithm 3.2, and let $u \in H$ be a solution of problem (2.1). If $0 < \rho < \frac{1}{2\alpha}$, then $\lim_{n \to \infty} u_n = u$.

Proof. Let $u \in H$ be a solution of (2.1). For $0 < \rho < \frac{1}{2\alpha}$, it follows from (3.8) that the sequence $\{\|u - u_n\|\}$ is nonincreasing, and consequently the sequence $\{u_n\}$ is bounded. Also from (3.8), we have
$$(1 - 2\rho\alpha)\sum_{n=0}^{\infty}\|u_n - u_{n+1}\|^2 \leq \|u - u_0\|^2, \tag{3.15}$$
which implies that
$$\lim_{n \to \infty}\|u_n - u_{n+1}\| = 0. \tag{3.16}$$
Let $\hat{u}$ be a cluster point of $\{u_n\}$, and let the subsequence $\{u_{n_j}\}$ of this sequence converge to $\hat{u}$. Replacing $u_n$ and $u_{n+1}$ by $u_{n_j}$ in (3.11), taking the limit as $n_j \to \infty$, and using (3.16), we have
$$\langle T\hat{u}, v - \hat{u} \rangle + \varphi(v) - \varphi(\hat{u}) \geq 0, \quad \forall v \in H, \tag{3.17}$$
which shows that $\hat{u}$ solves the mixed variational inequality (2.1) and
$$\|u_{n+1} - \hat{u}\|^2 \leq \|u_n - \hat{u}\|^2. \tag{3.18}$$
Thus, it follows from the above inequality that the sequence $\{u_n\}$ has exactly one cluster point $\hat{u}$ and $\lim_{n \to \infty} u_n = \hat{u}$, the required result.

For a given constant $\lambda \in [0, 1]$, we can rewrite (2.10) as
$$u = J_\varphi[(1 - \lambda)u + \lambda u - \rho T u]. \tag{3.19}$$
We use this alternative equivalent fixed point formulation to suggest and analyze the following iterative method for solving the mixed variational inequality (2.1).

Algorithm 3.9. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = J_\varphi[(1 - \lambda)u_n + \lambda u_{n+1} - \rho T u_{n+1}], \quad n = 0, 1, 2, \ldots. \tag{3.20}$$
It is clear that Algorithm 3.9 is an implicit method. To implement Algorithm 3.9, we use the predictor-corrector technique: we use Algorithm 3.1 as the predictor and Algorithm 3.9 as the corrector. Consequently, we have the following predictor-corrector method for solving the mixed variational inequality (2.1).

Algorithm 3.10. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$w_n = J_\varphi[u_n - \rho T u_n],$$
$$u_{n+1} = J_\varphi[(1 - \lambda)u_n + \lambda w_n - \rho T w_n], \quad n = 0, 1, 2, \ldots. \tag{3.21}$$

Algorithm 3.10 is a new two-step implicit method for solving the mixed variational inequality (2.1).

For $\lambda = 0$, Algorithm 3.10 collapses to the following algorithm.

Algorithm 3.11. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$w_n = J_\varphi[u_n - \rho T u_n],$$
$$u_{n+1} = J_\varphi[u_n - \rho T w_n], \quad n = 0, 1, 2, \ldots, \tag{3.22}$$
which is known as the extraresolvent method and includes the extragradient method of Korpelevič [5] as a special case.

For $\lambda = 1$, Algorithm 3.10 reduces to the following algorithm.

Algorithm 3.12. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative schemes
$$w_n = J_\varphi[u_n - \rho T u_n],$$
$$u_{n+1} = J_\varphi[w_n - \rho T w_n], \quad n = 0, 1, 2, \ldots. \tag{3.23}$$

Algorithm 3.12 is called the modified extraresolvent method, which is mainly due to Noor [8].

From the above discussion, it is clear that Algorithm 3.9 is a unified implicit resolvent method for solving the mixed variational inequalities. Algorithm 3.9 includes several new and previously known methods as special cases. Using the technique of Noor et al. [13], one can easily consider the convergence analysis of Algorithm 3.9.

For a given constant $\eta \in [0, 1]$, one can rewrite (2.10) in the following form:
$$u = J_\varphi[u - \rho((1 - \eta)Tu + \eta Tu)]. \tag{3.24}$$

This fixed point formulation enables us to suggest and analyze a class of iterative methods for solving the mixed variational inequality (2.1).

Algorithm 3.13. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = J_\varphi[u_n - \rho((1 - \eta)Tu_n + \eta Tu_{n+1})], \quad n = 0, 1, 2, \ldots. \tag{3.25}$$
For different choices of the parameter $\eta$, we can obtain Algorithm 3.1 and the implicit resolvent method, Algorithm 3.3, as special cases. In particular, for $\eta = \frac{1}{2}$, we have the following iterative method for solving the mixed variational inequality (2.1).

Algorithm 3.14. For a given $u_0 \in H$, find the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = J_\varphi\left[u_n - \frac{\rho}{2}(Tu_n + Tu_{n+1})\right], \quad n = 0, 1, 2, \ldots. \tag{3.26}$$
We remark that if $T$ is a linear operator, then Algorithm 3.6 and Algorithm 3.14 are equivalent.

Remark 3.15. We remark that if the function $\varphi$ is the indicator function of a closed and convex set $K$, then $J_\varphi = P_K$, the projection of $H$ onto the closed and convex set $K$. Consequently, Algorithms 3.1–3.12 reduce to the algorithms considered in [13].

4. Conclusion

In this paper, we have used the equivalence between the mixed variational inequality and the fixed point problem to suggest and analyze some new proximal point methods for solving the mixed variational inequality. We have shown that these new implicit methods include the extraresolvent and the classical implicit resolvent methods as special cases. We have also discussed the convergence criteria of the proposed new iterative methods under some suitable conditions. We have further shown that this technique can be used to suggest several iterative methods for solving various classes of equilibrium and variational inequality problems. The technique of constructing these iterative methods is very simple and contains a wealth of new ideas. The results proved in this paper can be extended to the multivalued mixed quasi-variational inequalities and related optimization problems. Comparison of these methods with other methods is an interesting problem for further research.

Acknowledgment

The authors are grateful to Dr. S. M. Junaid Zaidi, Rector, COMSATS Institute of Information Technology, Pakistan, for providing excellent research facilities. The research of Prof. Dr. Z. Y. Huang is supported by the National Natural Science Foundation of China (NSFC Grant No. 10871092), by the Fundamental Research Funds for the Central Universities of China (Grant Nos. 1113020301 and 1116020301), and by a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).