Research Article | Open Access
Abdellah Bnouhachem, Muhammad Aslam Noor, Eman H. Al-Shemas, "On Self-Adaptive Method for General Mixed Variational Inequalities", Mathematical Problems in Engineering, vol. 2008, Article ID 280956, 13 pages, 2008. https://doi.org/10.1155/2008/280956
On Self-Adaptive Method for General Mixed Variational Inequalities
We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.
Variational inequalities, introduced in the early sixties, have played a critical and significant part in the study of several seemingly unrelated problems arising in finance, economics, network analysis, transportation, elasticity, and optimization. Variational inequality theory has witnessed explosive growth in theoretical advances, algorithmic development, and applications across all disciplines of pure and applied sciences; see [1–16]. A useful and important generalization of variational inequalities is the mixed variational inequality, which contains a nonlinear term $\varphi$. The applicability of the projection method to this class is limited, first because it is not easy to compute the projection except in very special cases, and second because the projection method cannot be used to suggest iterative algorithms for solving general mixed variational inequalities involving the nonlinear term $\varphi$. This fact has motivated many authors to develop the auxiliary principle technique for solving mixed variational inequalities. In recent years, several techniques have been developed to suggest and analyze various iterative methods for solving different types of variational inequalities. It is worth mentioning that if the nonlinear term $\varphi$ in the variational inequalities is a proper, convex, and lower semicontinuous function, then it is well known that the variational inequalities involving $\varphi$ are equivalent to fixed-point problems and to the resolvent equations. Noor solved the general mixed variational inequality problem by using the resolvent equations technique. Inspired and motivated by these results, we propose a new method for solving general mixed variational inequalities by using a new direction with a new step size. We prove the global convergence of the proposed method under the same assumptions as Noor's method. An example is given to illustrate the efficiency of the method and to compare it with the results of Noor [11, 14]; the comparison shows that the method is robust and efficient. This new method can be viewed as an important and significant improvement of Noor's method and other related methods.
Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $I$ be the identity mapping on $H$, and let $T, g : H \to H$ be two operators. Let $\partial\varphi$ denote the subdifferential of the function $\varphi$, where $\varphi : H \to \mathbb{R} \cup \{+\infty\}$ is a proper convex lower semicontinuous function on $H$. It is well known that the subdifferential $\partial\varphi$ is a maximal monotone operator. We consider the problem of finding $u \in H$ such that
$$\langle Tu, g(v) - g(u)\rangle + \varphi(g(v)) - \varphi(g(u)) \geq 0, \quad \forall v \in H, \tag{2.1}$$
which is known as the general mixed variational inequality; see Noor. We also note that the general mixed variational inequality can be written in the equivalent form: find $u \in H$ such that
$$0 \in Tu + \partial\varphi(g(u)),$$
which is known as the problem of finding a zero of a sum of two (or more) monotone operators. It is well known that a wide class of linear and nonlinear problems arising in pure and applied sciences can be studied via general mixed variational inequalities; see [1–16] and the references therein.
If $K$ is a closed convex set in $H$ and $\varphi(\cdot) = \delta_K(\cdot)$, where $\delta_K$ is the indicator function of $K$, then problem (2.1) is equivalent to finding $u \in H$ such that $g(u) \in K$ and
$$\langle Tu, g(v) - g(u)\rangle \geq 0, \quad \forall g(v) \in K. \tag{2.4}$$
Problem (2.4) is called the general variational inequality, which was first introduced and studied by Noor in 1988. For the applications, formulation, and numerical methods of general variational inequalities (2.4), we refer the reader to the surveys [1–3, 7, 12, 13, 16].
Lemma 2.1. For a given $z \in H$, $u \in H$ satisfies the inequality
$$\langle u - z, v - u\rangle + \rho\varphi(v) - \rho\varphi(u) \geq 0, \quad \forall v \in H,$$
if and only if $u = J_\varphi z$, where $J_\varphi = (I + \rho\,\partial\varphi)^{-1}$ is the resolvent operator and $\rho > 0$ is a constant.
It follows from Lemma 2.1 that problem (2.1) is equivalent to the fixed-point problem of finding $u \in H$ such that $g(u) = J_\varphi[g(u) - \rho Tu]$; this equivalence is recorded in the following lemma.

Lemma 2.2. The element $u \in H$ is a solution of problem (2.1) if and only if $u$ satisfies $g(u) = J_\varphi[g(u) - \rho Tu]$, where $\rho > 0$ is a constant.
If $\varphi$ is the indicator function of a closed convex set $K$ in $H$, then the resolvent operator $J_\varphi = P_K$, where $P_K$ is the projection of $H$ onto the closed convex set $K$. It is well known that $P_K$ is nonexpansive, that is, $\|P_K u - P_K v\| \leq \|u - v\|$ for all $u, v \in H$.
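When $K$ is a box, the projection is a coordinatewise clip, and the nonexpansiveness property can be checked numerically. The following sketch is not from the paper: the set $K = [0,1]^n$, the dimension, and all names are illustrative assumptions; it only verifies $\|P_K u - P_K v\| \leq \|u - v\|$ for random points.

```python
import numpy as np

# Illustrative assumption: K = [lo, hi]^n, so P_K is a coordinatewise clip.
def project_box(x, lo=0.0, hi=1.0):
    """Metric projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
u = rng.normal(size=5)
v = rng.normal(size=5)

# Nonexpansiveness: ||P_K u - P_K v|| <= ||u - v||
lhs = np.linalg.norm(project_box(u) - project_box(v))
rhs = np.linalg.norm(u - v)
```

The same check goes through for any closed convex set whose projection is available in closed form, e.g., the nonnegative orthant or a Euclidean ball.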
From Lemma 2.2, it is clear that $u$ is a solution of (2.1) if and only if $u$ is a zero point of the residual function
$$R(u) := g(u) - J_\varphi[g(u) - \rho Tu]. \tag{2.9}$$
Noor used the fixed-point formulation (2.9) and the resolvent equations to suggest and analyze the following algorithm for solving problem (2.1).
Algorithm 2.3. For a given $u_0 \in H$, compute the approximate solution $u_{k+1}$ by the iterative schemes, where $\alpha_k$ is the corrector step size.
If $\varphi$ is the indicator function of a closed convex set $K$ in $H$, then $J_\varphi = P_K$, the projection of $H$ onto $K$, and consequently Algorithm 2.3 collapses to the following algorithm.
Algorithm 2.4. For a given $u_0 \in H$, compute the approximate solution $u_{k+1}$ by the iterative schemes, where $\alpha_k$ is the corrector step size.
Throughout this paper, we make the following assumptions.
(i) $H$ is a finite-dimensional space.
(ii) $g$ is a homeomorphism on $H$, that is, $g$ is bijective, continuous, and $g^{-1}$ is continuous.
(iii) $T$ is a continuous and $g$-pseudomonotone operator on $H$, that is, for all $u, v \in H$,
$$\langle Tu, g(v) - g(u)\rangle + \varphi(g(v)) - \varphi(g(u)) \geq 0 \implies \langle Tv, g(v) - g(u)\rangle + \varphi(g(v)) - \varphi(g(u)) \geq 0.$$
(iv) The solution set of problem (2.1) is nonempty.
3. Iterative Method and Basic Results
In this section, we suggest and analyze a new method for solving the general mixed variational inequality (2.1) by using a new direction with a new step size; this is the main motivation of this paper.
Algorithm 3.1.
Step 1. Choose an initial point $u_0 \in H$ and the parameters of the method, and set $k = 0$.
Step 2. Compute the predictor point.
Step 3. If the step-size acceptance test is satisfied, then form the new direction and step size, compute the next iterate $u_{k+1}$, set $k := k+1$, and go to Step 2.
Step 4. Otherwise, reduce the value of the trial parameter and go to Step 3.
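The predict-test-reduce pattern above can be illustrated, for the special case $g = I$ and $\varphi$ the indicator function of a convex set (a classical variational inequality), by a generic extragradient method with an Armijo-type self-adaptive step size. This is only a schematic sketch under those assumptions: it does not reproduce the precise direction and step-size formulas of Algorithm 3.1, and the test problem $F(u) = Mu + q$ over the nonnegative orthant, along with all parameter values, is illustrative.

```python
import numpy as np

def extragradient(F, project, u0, beta=1.0, nu=0.9, tol=1e-8, max_iter=20000):
    """Generic extragradient method with a self-adaptive step size beta."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        # Predictor: shrink beta until the acceptance test
        #   beta * ||F(u) - F(u_bar)|| <= nu * ||u - u_bar||
        # holds (this mirrors the test/reduce loop of Steps 3-4).
        while True:
            u_bar = project(u - beta * F(u))
            r = np.linalg.norm(u - u_bar)
            if r < tol or beta * np.linalg.norm(F(u) - F(u_bar)) <= nu * r:
                break
            beta *= 0.5
        if r < tol:          # residual small enough: stop
            break
        # Corrector: step from u along the operator value at the predictor.
        u = project(u - beta * F(u_bar))
    return u

# Illustrative monotone affine problem: F(u) = M u + q over the
# nonnegative orthant; its unique solution is the interior point
# solving M u = -q, namely u* = (1/11, 7/11).
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
F = lambda u: M @ u + q
project = lambda x: np.maximum(x, 0.0)

u = extragradient(F, project, np.zeros(2))
```

For simplicity the sketch never enlarges $\beta$ again once it has been reduced; practical self-adaptive rules, including the one analyzed in this paper, also allow the step size to grow when the acceptance test is satisfied with slack.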
If $\varphi$ is the indicator function of a closed convex set $K$ in $H$, then $J_\varphi = P_K$, the projection of $H$ onto $K$. Consequently, Algorithm 3.1 reduces to Algorithm 3.2 for solving the general variational inequalities (2.4).
Remark 3.3. Equation (3.2) implies that
The next lemma shows that the step sizes are bounded away from zero whenever the current iterate is not a solution.
The next lemma shows that the norm of the residual is a nondecreasing function with respect to the step-size parameter, which can be proved using the techniques in the cited references.
Lemma 3.5. For all and it holds that
The next lemma has already been established in the literature.
4. Convergence Analysis
In this section, we prove the global convergence of the proposed method. The following result plays a crucial role in the convergence analysis of the proposed method.
Let $u^*$ be a solution of problem (2.1). Then the stated chain of inequalities holds, where the first inequality follows from the nonexpansiveness of the resolvent operator, the second inequality follows from (3.7) and (3.19), and the third inequality follows from (3.15). Since $g$ is a homeomorphism, it is easy to verify that the sequence $\{u_k\}$ is bounded.
We now prove the convergence of Algorithm 3.1.
Proof. It follows from (4.1) that the sequence $\{\|g(u_k) - g(u^*)\|\}$ is nonincreasing, which means that it converges. Since $g$ is a homeomorphism, this implies that $\{u_k\}$ is bounded. Since the norm of the residual is a nondecreasing function of the step-size parameter, it follows from (4.5) that the residual tends to zero along the iterates. Let $\bar{u}$ be a cluster point of $\{u_k\}$, and let the subsequence $\{u_{k_j}\}$ converge to $\bar{u}$. Since the residual is a continuous function of $u$, it follows from (4.8) that the residual vanishes at $\bar{u}$, and from Lemma 2.2 it follows that $\bar{u}$ is a solution of problem (2.1). Note that inequality (4.1) is true for every solution of problem (2.1); hence, for any given $\varepsilon > 0$, there is a $k_0$ such that (4.11) holds. Therefore, for any $k \geq k_0$, it follows from (4.10) and (4.11) that the sequence $\{g(u_k)\}$ converges to $g(\bar{u})$; since $g$ is a homeomorphism, the sequence $\{u_k\}$ converges to $\bar{u}$.
We now prove that the sequence $\{u_k\}$ has exactly one cluster point. Assume that $\hat{u}$ is another cluster point satisfying $\hat{u} \neq \bar{u}$. Since $\bar{u}$ is a cluster point of the sequence $\{u_k\}$ and $g$ is a homeomorphism, there is a $k_1$ such that $\|g(u_{k_1}) - g(\bar{u})\| < \tfrac{1}{2}\|g(\bar{u}) - g(\hat{u})\|$. On the other hand, since $\bar{u}$ is a solution of (2.1), from (4.1) we have $\|g(u_k) - g(\bar{u})\| \leq \|g(u_{k_1}) - g(\bar{u})\|$ for all $k \geq k_1$; it follows that $\|g(u_k) - g(\hat{u})\| \geq \tfrac{1}{2}\|g(\bar{u}) - g(\hat{u})\|$ for all $k \geq k_1$. This contradicts the assumption that $\hat{u}$ is a cluster point of $\{u_k\}$. Thus the sequence $\{u_k\}$ converges to $\bar{u}$.
5. Numerical Results
In this section, we present some numerical results for the proposed method. In order to verify the theoretical assertions, we consider a test problem (5.1) whose data consist of a given matrix, a simple closed convex set, and a parameter vector. Here, the statement that the set is simple means that the projection onto it is simple to carry out. For the same reason given in Fletcher (see [5, page 222]), each element of the optimal solution of problem (5.1) is positive; thus the bounds are inactive and can be ignored, and therefore problem (5.1) can be written as problem (5.2). By attaching a Lagrange multiplier to the equality constraint, we obtain the Lagrange function (5.3) of problem (5.2). If a point is a KKT point of problem (5.2), then the optimality conditions (5.4) hold. Note that problem (5.1) is invariant under multiplication by a positive scalar. Eliminating the auxiliary variables in (5.4), we see that problem (5.1) is equivalent to a general variational inequality problem.
In the test, the problem data are generated from a randomly chosen vector and a Householder matrix built from it.
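The concretely named ingredient of this construction is the Householder matrix generated from a random vector. As a sketch only (the dimension, random seed, and any further composition or scaling used in the paper are assumptions, not reproduced from it), the standard construction $H = I - 2vv^{T}/(v^{T}v)$ yields a symmetric orthogonal matrix:

```python
import numpy as np

# Illustrative construction of a Householder matrix from a random vector v:
#   H = I - 2 v v^T / (v^T v)
# H is symmetric (H = H^T) and orthogonal (H H = I), and applying it
# costs only O(n) when used as a reflection rather than a dense matrix.
rng = np.random.default_rng(42)
n = 100
v = rng.normal(size=n)
H = np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)
```

Because $H$ is orthogonal, matrices assembled from it have easily controlled spectra, which makes such constructions a common way to generate well-understood test problems.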
In all the tests, the calculations are started with a vector whose elements are randomly chosen in $(0,1)$ and stopped whenever the stopping criterion is satisfied.
Since the exact solution is known, we also report the distance between the final iterate and the solution. All codes are written in Matlab and run on a P4-2.00G notebook computer. We test the problem with several dimensions. The iteration numbers and the computational times for Algorithms 2.4 and 3.2 with different dimensions and initial parameters are given in Tables 1 and 2, and those for Algorithm 3.2 and the method of Noor are given in Tables 3 and 4.
From Tables 1–4, we can see that Algorithm 2.4 and the compared method work well if the parameter is sufficiently large. If the parameter is too small, then the iteration numbers and the computational time can increase significantly. These tables also show that Algorithm 3.2 is very efficient for the problem tested. In addition, for our method, the computational time and the iteration numbers appear not to be very sensitive to the problem size.
The authors would like to thank the referees and Professor Dr. Alois Steindl for their very constructive comments and suggestions. Abdellah Bnouhachem was supported by the China Postdoctoral Science Foundation Grant no. 20060390915 and the National Key Technology R&D Program (2006BAH02A06).
- A. Bnouhachem, “A self-adaptive method for solving general mixed variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 309, no. 1, pp. 136–150, 2005.
- A. Bnouhachem, “A new step size rule in Noor's method for solving general variational inequalities,” The Australian Journal of Mathematical Analysis and Applications, vol. 4, no. 1, article 12, 10 pages, 2007.
- A. Bnouhachem and M. A. Noor, “Numerical comparison between prediction-correction methods for general variational inequalities,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 496–505, 2007.
- H. Brezis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland, Amsterdam, The Netherlands, 1973.
- R. Fletcher, Practical Methods of Optimization, A Wiley-Interscience Publication, John Wiley & Sons, Chichester, UK, 2nd edition, 1987.
- R. Glowinski, J.-L. Lions, and R. Trémolières, Numerical Analysis of Variational Inequalities, vol. 8 of Studies in Mathematics and Its Applications, North-Holland, Amsterdam, The Netherlands, 1981.
- B. He, “Inexact implicit methods for monotone general variational inequalities,” Mathematical Programming, vol. 86, no. 1, pp. 199–217, 1999.
- J.-L. Lions and G. Stampacchia, “Variational inequalities,” Communications on Pure and Applied Mathematics, vol. 20, no. 3, pp. 493–519, 1967.
- M. A. Noor, “General variational inequalities,” Applied Mathematics Letters, vol. 1, no. 2, pp. 119–122, 1988.
- M. A. Noor, “An implicit method for mixed variational inequalities,” Applied Mathematics Letters, vol. 11, no. 4, pp. 109–113, 1998.
- M. A. Noor, “Pseudomonotone general mixed variational inequalities,” Applied Mathematics and Computation, vol. 141, no. 2-3, pp. 529–540, 2003.
- M. A. Noor, “New extragradient-type methods for general variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 277, no. 2, pp. 379–394, 2003.
- M. A. Noor, “Some developments in general variational inequalities,” Applied Mathematics and Computation, vol. 152, no. 1, pp. 199–277, 2004.
- M. A. Noor, “On general mixed quasi variational inequalities,” Journal of Optimization Theory and Applications, vol. 120, no. 3, pp. 579–599, 2003.
- M. A. Noor, “Fundamentals of mixed quasi variational inequalities,” International Journal of Pure and Applied Mathematics, vol. 15, no. 2, pp. 137–258, 2004.
- M. A. Noor and A. Bnouhachem, “On an iterative algorithm for general variational inequalities,” Applied Mathematics and Computation, vol. 185, no. 1, pp. 155–168, 2007.
- G. Stampacchia, “Formes bilinéaires coercitives sur les ensembles convexes,” Comptes Rendus de L'Académie des Sciences, vol. 258, pp. 4413–4416, 1964.
Copyright © 2008 Abdellah Bnouhachem et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.