Abstract

It is well known that variational inequalities are equivalent to fixed point problems. We use this alternative equivalent formulation to suggest and analyze some new proximal point methods for solving variational inequalities. These new methods include the explicit, implicit, and extragradient methods as special cases. The convergence of the new methods is analyzed under suitable conditions. The results proved in this paper may stimulate further research in this direction.

1. Introduction

Variational inequalities, the origin of which can be traced back to Stampacchia [1], are used to study a wide class of diverse and seemingly unrelated problems arising in various branches of the pure and applied sciences within a unified framework. It is well known that variational inequalities are equivalent to fixed point problems. This alternative equivalent formulation has played an important and fundamental role in the existence theory, numerical methods, and other aspects of variational inequalities. It has been used to suggest the projection iterative method, the implicit iterative method, and the extragradient method of Korpelevich [2] for solving variational inequalities. It has been shown [3] that the implicit iterative method and the extragradient method are equivalent. We remark that the implicit iterative method and the explicit iterative method are two distinct methods. We use this alternative equivalent formulation to suggest and analyze some new proximal point methods, which include the implicit and explicit methods as special cases; this is the main motivation of this paper. We also consider the convergence of these methods under suitable conditions. We hope that the ideas and techniques of this paper may stimulate further research in this area of the pure and applied sciences.

2. Preliminaries

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\| \cdot \|$, respectively. Let $K$ be a nonempty, closed, and convex set in $H$.

For a given nonlinear operator $T : H \to H$, we consider the problem of finding $u \in K$ such that

$\langle Tu, v - u \rangle \ge 0, \quad \forall v \in K, \qquad (2.1)$

which is called the variational inequality, introduced and studied by Stampacchia [1].

For the applications, formulations, numerical methods, and other aspects of variational inequalities, see [1–14] and the references therein.

We now recall some well-known results and concepts.

Lemma 2.1. Let $K$ be a nonempty, closed, and convex set in $H$. Then, for a given $z \in H$, $u \in K$ satisfies the inequality

$\langle u - z, v - u \rangle \ge 0, \quad \forall v \in K, \qquad (2.2)$

if and only if

$u = P_K z, \qquad (2.3)$

where $P_K$ is the projection of $H$ onto the closed and convex set $K$.

It is well known that the projection operator $P_K$ is nonexpansive, that is,

$\| P_K u - P_K v \| \le \| u - v \|, \quad \forall u, v \in H. \qquad (2.4)$

This property plays a very important part in the study of variational inequalities and related optimization problems.

Using Lemma 2.1, one can easily show that the variational inequality (2.1) is equivalent to finding $u \in K$ such that

$u = P_K[u - \rho T u], \qquad (2.5)$

where $\rho > 0$ is a constant.
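As a simple illustration (not taken from the paper), let $K = \{u \in \mathbb{R}^n : u \ge 0\}$ be the nonnegative orthant, so that $P_K z = \max(z, 0)$ componentwise. Then (2.5) reads

$u = \max(u - \rho T u, 0),$

which holds if and only if $u \ge 0$, $Tu \ge 0$, and $\langle u, Tu \rangle = 0$, that is, if and only if $u$ solves the nonlinear complementarity problem associated with $T$.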

Definition 2.2. An operator $T : H \to H$ is said to be strongly monotone if there exists a constant $\alpha > 0$ such that

$\langle Tu - Tv, u - v \rangle \ge \alpha \| u - v \|^2, \quad \forall u, v \in H, \qquad (2.6)$

and Lipschitz continuous if there exists a constant $\beta > 0$ such that

$\| Tu - Tv \| \le \beta \| u - v \|, \quad \forall u, v \in H. \qquad (2.7)$
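For example (again an illustration rather than part of the paper), the affine operator $Tu = Au + b$ on $H = \mathbb{R}^n$ with $A$ symmetric positive definite is strongly monotone with $\alpha = \lambda_{\min}(A)$, since $\langle Tu - Tv, u - v \rangle = \langle A(u - v), u - v \rangle \ge \lambda_{\min}(A)\|u - v\|^2$, and Lipschitz continuous with $\beta = \lambda_{\max}(A)$, since $\|Tu - Tv\| = \|A(u - v)\| \le \lambda_{\max}(A)\|u - v\|$. Operators of this type are used in the illustrative code sketches below.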

3. Main Results

In this section, we use the fixed point formulation (2.5) to suggest a new unified implicit method for solving the variational inequality (2.1), and this is the main motivation of this paper. Using the equivalent fixed point formulation, one can suggest the following iterative method for solving the variational inequality (2.1).

Algorithm 3.1. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative scheme

$u_{n+1} = P_K[u_n - \rho T u_n], \quad n = 0, 1, 2, \ldots. \qquad (3.1)$

Algorithm 3.1 is known as the projection iterative method. For the convergence analysis of Algorithm 3.1, see Noor [8].
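The following is a minimal Python/NumPy sketch of Algorithm 3.1 for an assumed toy problem: $K = [0, 1]^n$, so that $P_K$ is a componentwise clip, and $Tu = Au + b$ with $A$ symmetric positive definite. The test data, the box constraint, and the stopping rule are illustrative assumptions and are not taken from the paper.

    import numpy as np

    # Projection method (Algorithm 3.1) on a toy problem: K = [0, 1]^n and
    # T(u) = A u + b with A symmetric positive definite (so T is strongly
    # monotone and Lipschitz continuous). All of this is illustrative only.

    def project_box(z, lo=0.0, hi=1.0):
        return np.clip(z, lo, hi)                     # P_K for a box

    def projection_method(T, u0, rho, tol=1e-10, max_iter=10_000):
        u = u0.copy()
        for _ in range(max_iter):
            u_next = project_box(u - rho * T(u))      # the update (3.1)
            if np.linalg.norm(u_next - u) < tol:      # fixed-point residual, cf. (2.5)
                return u_next
            u = u_next
        return u

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([-1.0, -2.0])
    u_star = projection_method(lambda u: A @ u + b, np.zeros(2), rho=0.2)

For this choice of $A$ and $b$ the iteration converges; in general, a sufficiently small $\rho$ (depending on the constants $\alpha$ and $\beta$ of Definition 2.2) is required.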

For a given $\lambda \in [0, 1]$, we can rewrite (2.5) as

$u = P_K[u - \rho T u + \lambda \rho (T u - T u)]. \qquad (3.2)$

This fixed point formulation is used to suggest the following new proximal point iterative method for solving the variational inequality (2.1).

Algorithm 3.2. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative scheme

$u_{n+1} = P_K[u_n - \rho T u_{n+1} + \lambda \rho (T u_{n+1} - T u_n)], \quad n = 0, 1, 2, \ldots. \qquad (3.3)$

Note that Algorithm 3.2 is an implicit-type iterative method. It is clear that for 𝜆=1, Algorithm 3.2 reduces to Algorithm 3.1. For 𝜆=0, Algorithm 3.2 collapses to the following implicit iterative method for solving the variational inequality (2.1).

Algorithm 3.3. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative scheme

$u_{n+1} = P_K[u_n - \rho T u_{n+1}], \quad n = 0, 1, 2, \ldots. \qquad (3.4)$

For the convergence analysis of Algorithm 3.3, see Noor [3] and the references therein.
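Algorithms 3.2 and 3.3 are implicit: the unknown $u_{n+1}$ appears on both sides of (3.3) and (3.4). The paper does not prescribe how the implicit relation is to be resolved; one possible realization, sketched below for the same assumed toy problem as before, is to solve the inner equation by simple fixed-point iterations. The inner solver, warm start, and tolerances are assumptions made only for illustration.

    import numpy as np

    # One way to realize the implicit step of Algorithms 3.2/3.3: solve
    #   w = P_K[u_n - rho*T(w) + lam*rho*(T(w) - T(u_n))]
    # by inner fixed-point iterations. With T beta-Lipschitz, this naive inner
    # loop contracts only when rho*(1 - lam)*beta < 1.

    def project_box(z, lo=0.0, hi=1.0):
        return np.clip(z, lo, hi)                     # P_K for a box

    def implicit_step(T, u_n, rho, lam, inner_tol=1e-12, inner_iter=200):
        w = u_n.copy()                                # warm start at u_n
        for _ in range(inner_iter):
            w_new = project_box(u_n - rho * T(w) + lam * rho * (T(w) - T(u_n)))
            if np.linalg.norm(w_new - w) < inner_tol:
                break
            w = w_new
        return w

    def implicit_method(T, u0, rho, lam=0.0, tol=1e-10, max_iter=5_000):
        u = u0.copy()
        for _ in range(max_iter):
            u_next = implicit_step(T, u, rho, lam)    # lam = 0 gives Algorithm 3.3
            if np.linalg.norm(u_next - u) < tol:
                return u_next
            u = u_next
        return u

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([-1.0, -2.0])
    u_star = implicit_method(lambda u: A @ u + b, np.zeros(2), rho=0.1, lam=0.0)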

In order to implement Algorithm 3.2, we use the predictor-corrector technique. We use Algorithm 3.1 as the predictor and Algorithm 3.2 as the corrector. Consequently, we obtain the following two-step iterative method for solving the variational inequality (2.1).

Algorithm 3.4. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative schemes

$y_n = P_K[u_n - \rho T u_n], \qquad (3.5)$

$u_{n+1} = P_K[u_n - \rho T y_n + \lambda \rho (T y_n - T u_n)], \quad n = 0, 1, 2, \ldots. \qquad (3.6)$

Algorithm 3.4 is a new two-step iterative method for solving the variational inequality (2.1).
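A minimal sketch of Algorithm 3.4, again under the assumed toy setting $K = [0, 1]^n$ and $Tu = Au + b$ (these data are illustrative, not from the paper):

    import numpy as np

    # Two-step predictor-corrector method (Algorithm 3.4) on a toy problem.
    # The box constraint, affine operator, and step size are assumptions.

    def project_box(z, lo=0.0, hi=1.0):
        return np.clip(z, lo, hi)                     # P_K for a box

    def two_step_method(T, u0, rho, lam, tol=1e-10, max_iter=10_000):
        u = u0.copy()
        for _ in range(max_iter):
            y = project_box(u - rho * T(u))                                   # predictor (3.5)
            u_next = project_box(u - rho * T(y) + lam * rho * (T(y) - T(u)))  # corrector (3.6)
            if np.linalg.norm(u_next - u) < tol:
                return u_next
            u = u_next
        return u

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([-1.0, -2.0])
    T = lambda u: A @ u + b
    u1 = two_step_method(T, np.zeros(2), rho=0.1, lam=0.0)   # extragradient case
    u2 = two_step_method(T, np.zeros(2), rho=0.1, lam=0.5)   # the case of Algorithm 3.6 below

Setting lam = 0.0 and lam = 0.5 corresponds to the special cases discussed next.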
For 𝜆=0, Algorithm 3.4 reduces to the following iterative method for solving the variational inequality (2.1).

Algorithm 3.5. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative schemes

$y_n = P_K[u_n - \rho T u_n],$

$u_{n+1} = P_K[u_n - \rho T y_n], \quad n = 0, 1, 2, \ldots, \qquad (3.7)$

which is known as the extragradient method and is due to Korpelevich [2].

For $\lambda = 1/2$, Algorithm 3.4 reduces to the following iterative method for solving the variational inequality (2.1), which appears to be a new one.

Algorithm 3.6. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative schemes

$y_n = P_K[u_n - \rho T u_n],$

$u_{n+1} = P_K\!\left[u_n - \rho\,\frac{T y_n + T u_n}{2}\right], \quad n = 0, 1, 2, \ldots. \qquad (3.8)$

We would like to mention that one can deduce several iterative methods for solving the variational inequality (2.1) and related optimization problems by choosing a suitable value of the parameter $\lambda$. This clearly shows that Algorithm 3.4 is a unified two-step method which includes the previously known extragradient and predictor-corrector methods as special cases.

We now consider the convergence of Algorithm 3.4; this is the main result of this paper.

Theorem 3.7. Let the operator $T : H \to H$ be strongly monotone with constant $\alpha > 0$ and Lipschitz continuous with constant $\beta > 0$. If there exists a constant $\rho > 0$ such that

$\theta_1 = \sqrt{1 - 2\alpha\lambda\rho + \lambda^2\beta^2\rho^2} + (1 - \lambda)\beta\rho\sqrt{1 - 2\alpha\rho + \beta^2\rho^2} < 1, \qquad (3.9)$

then the approximate solution $u_{n+1}$ obtained from Algorithm 3.4 converges strongly to the exact solution $u \in K$ of the variational inequality (2.1).

Proof. Let $u \in K$ be a solution of (2.1), and let $u_{n+1}$ be the approximate solution obtained from Algorithm 3.4. Then, from (2.5) and (3.5) and the nonexpansivity (2.4) of $P_K$, we have

$\|y_n - u\| = \|P_K[u_n - \rho T u_n] - P_K[u - \rho T u]\| \le \|u_n - u - \rho(T u_n - T u)\|. \qquad (3.10)$

From the strong monotonicity and the Lipschitz continuity of the operator $T$, we obtain

$\|u_n - u - \rho(T u_n - T u)\|^2 = \|u_n - u\|^2 - 2\rho\langle T u_n - T u, u_n - u\rangle + \rho^2\|T u_n - T u\|^2$
$\le (1 - 2\alpha\rho + \beta^2\rho^2)\|u_n - u\|^2. \qquad (3.11)$

From (3.10) and (3.11), we obtain

$\|y_n - u\| \le \sqrt{1 - 2\alpha\rho + \beta^2\rho^2}\,\|u_n - u\| = \theta\|u_n - u\|, \qquad (3.12)$

where

$\theta = \sqrt{1 - 2\alpha\rho + \beta^2\rho^2}. \qquad (3.13)$

From (2.5), (3.6), (3.12), and (3.13), and arguing as in (3.11) with $\lambda\rho$ in place of $\rho$, we have

$\|u_{n+1} - u\| = \|P_K[u_n - \rho T y_n + \lambda\rho(T y_n - T u_n)] - P_K[u - \rho T u]\|$
$\le \|u_n - u - \lambda\rho(T u_n - T u)\| + (1 - \lambda)\rho\|T y_n - T u\|$
$\le \sqrt{1 - 2\alpha\lambda\rho + \lambda^2\beta^2\rho^2}\,\|u_n - u\| + (1 - \lambda)\beta\rho\|y_n - u\|$
$\le \left[\sqrt{1 - 2\alpha\lambda\rho + \lambda^2\beta^2\rho^2} + (1 - \lambda)\beta\rho\sqrt{1 - 2\alpha\rho + \beta^2\rho^2}\right]\|u_n - u\| = \theta_1\|u_n - u\|, \qquad (3.14)$

where

$\theta_1 = \sqrt{1 - 2\alpha\lambda\rho + \lambda^2\beta^2\rho^2} + (1 - \lambda)\beta\rho\sqrt{1 - 2\alpha\rho + \beta^2\rho^2}. \qquad (3.15)$

From (3.9), it follows that $\theta_1 < 1$. Hence $\|u_{n+1} - u\| \le \theta_1^{\,n+1}\|u_0 - u\| \to 0$ as $n \to \infty$, so the fixed point problem (2.5) has a unique solution and the approximate solution $u_{n+1}$ obtained from Algorithm 3.4 converges strongly to $u$, the exact solution of (2.1).
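As a quick numerical sanity check (not part of the paper), the following snippet evaluates $\theta$ and $\theta_1$ from (3.13) and (3.15) for assumed sample constants; condition (3.9) holds whenever the printed value is below one.

    import numpy as np

    # Evaluate theta (3.13) and theta_1 (3.15) so that the contraction
    # condition (3.9), theta_1 < 1, can be checked numerically. The constants
    # alpha, beta, rho, lam below are illustrative assumptions only.

    def theta(alpha, beta, rho):
        return np.sqrt(1.0 - 2.0 * alpha * rho + (beta * rho) ** 2)

    def theta_1(alpha, beta, rho, lam):
        return (np.sqrt(1.0 - 2.0 * alpha * lam * rho + (lam * beta * rho) ** 2)
                + (1.0 - lam) * beta * rho * theta(alpha, beta, rho))

    alpha, beta, rho, lam = 1.0, 2.0, 0.4, 0.9
    print(theta_1(alpha, beta, rho, lam))   # about 0.97 here, so (3.9) is satisfied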

For a given $\lambda \in [0, 1]$, we can rewrite (2.5) as

$u = P_K[u - \rho T((1 - \lambda)u + \lambda u)]. \qquad (3.16)$

This fixed point formulation (3.16) has been used to suggest and analyze the following unified proximal point method for solving the variational inequality (2.1).

Algorithm 3.8. For a given $u_0 \in K$, find the approximate solution $u_{n+1}$ by the iterative scheme

$u_{n+1} = P_K[u_n - \rho T((1 - \lambda)u_{n+1} + \lambda u_n)], \quad n = 0, 1, 2, \ldots. \qquad (3.17)$

For the convergence analysis of Algorithm 3.8, see Noor [10]. For appropriate choices of the parameter $\lambda$, Algorithm 3.8 includes the extragradient method of Korpelevich [2] and other methods as special cases.
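Algorithm 3.8 is again implicit, since $T$ is evaluated at a point that depends on the unknown $u_{n+1}$. A hedged sketch, resolving (3.17) by inner fixed-point iterations exactly as in the earlier implicit sketch and using the same assumed toy data:

    import numpy as np

    # Algorithm 3.8 with the implicit relation (3.17) resolved by inner
    # fixed-point iterations; the inner solver and test data are assumptions.

    def project_box(z, lo=0.0, hi=1.0):
        return np.clip(z, lo, hi)                     # P_K for a box

    def step_3_8(T, u_n, rho, lam, inner_tol=1e-12, inner_iter=200):
        w = u_n.copy()                                # solve (3.17) for w = u_{n+1}
        for _ in range(inner_iter):
            w_new = project_box(u_n - rho * T((1.0 - lam) * w + lam * u_n))
            if np.linalg.norm(w_new - w) < inner_tol:
                break
            w = w_new
        return w

    def method_3_8(T, u0, rho, lam, tol=1e-10, max_iter=5_000):
        u = u0.copy()
        for _ in range(max_iter):
            u_next = step_3_8(T, u, rho, lam)
            if np.linalg.norm(u_next - u) < tol:
                return u_next
            u = u_next
        return u

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([-1.0, -2.0])
    u_star = method_3_8(lambda u: A @ u + b, np.zeros(2), rho=0.1, lam=0.5)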

We would like to mention that, if the operator $T$ is linear, then Algorithm 3.2 and Algorithm 3.8 coincide, since in that case $T((1 - \lambda)u_{n+1} + \lambda u_n) = (1 - \lambda)T u_{n+1} + \lambda T u_n$. In this case, one can easily prove that convergence requires only the partially relaxed strong monotonicity of the operator $T$, which is a weaker condition.

4. Conclusion

In this paper, we have used the equivalence between the variational inequality and the fixed point problem to suggest and analyze some new proximal point methods for solving the variational inequality. We have shown that these new methods include the extragradient method of Korpelevich [2] and the classical implicit method as special cases. We have also discussed the convergence of the proposed iterative methods under suitable conditions. The results proved in this paper may inspire further research in this area. The implementation of these new proximal methods and their comparison with other methods remains an open problem. Using the ideas and techniques of this paper, one can suggest and analyze several new proximal point methods for solving the general variational inequality and its variant forms.

Acknowledgments

This research is supported by the Visiting Professor Program of King Saud University, Riyadh, Saudi Arabia, and the Research Grant no. KSU.VPP.108. The authors are also grateful to Dr. S. M. Junaid Zaidi, Rector, COMSATS Institute of Information Technology, Pakistan, for providing the excellent research facilities.