Abstract

We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as for Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.

1. Introduction

Variational inequalities, introduced in the early sixties, have played a critical and significant part in the study of several seemingly unrelated problems arising in finance, economics, network analysis, transportation, elasticity, and optimization. The theory of variational inequalities has witnessed an explosive growth in theoretical advances, algorithmic development, and applications across all disciplines of pure and applied sciences, see [1–16]. A useful and important generalization of variational inequalities is the mixed variational inequality, which contains a nonlinear term $\varphi$. Due to the presence of this term, the applicability of the projection method is limited: it is not easy to compute the projection except in very special cases, and the projection method cannot be applied to suggest iterative algorithms for solving general mixed variational inequalities involving the nonlinear term $\varphi$. This fact has motivated many authors to develop the auxiliary principle technique for solving mixed variational inequalities. In recent years, several techniques have been developed to suggest and analyze various iterative methods for solving different classes of variational inequalities. It is worth mentioning that if the nonlinear term $\varphi$ in the variational inequalities is a proper, convex, and lower semicontinuous function, then it is well known that the variational inequalities involving this term are equivalent to fixed point problems and to the resolvent equations. In [11], Noor solved the general mixed variational inequality problem by using the resolvent equations technique. Inspired and motivated by the results of Noor [11], we propose a new method for solving general mixed variational inequalities by using a new direction together with a new step size. We prove the global convergence of the proposed method under the same assumptions as in [11]. An example is given to illustrate the efficiency of the proposed method and to compare it with the methods of Noor [11, 14]; the results show that the method is robust and efficient. The new method can be viewed as an important and significant improvement of the method of Noor and of other existing methods.

2. Preliminaries

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $I$ be the identity mapping on $H$ and let $T, g : H \to H$ be two operators. Let $\partial\varphi$ denote the subdifferential of the function $\varphi$, where $\varphi : H \to \mathbb{R}\cup\{+\infty\}$ is a proper convex lower semicontinuous function on $H$. It is well known that the subdifferential $\partial\varphi$ is a maximal monotone operator. We consider the problem of finding $u \in H$ such that

$$\langle Tu, g(v) - g(u)\rangle + \varphi(g(v)) - \varphi(g(u)) \ge 0, \quad \forall v \in H, \qquad (2.1)$$

which is known as the general mixed variational inequality, see Noor [11]. We also note that problem (2.1) can be written in the equivalent form: find $u \in H$ such that

$$0 \in Tu + \partial\varphi(g(u)), \qquad (2.2)$$

which is the problem of finding a zero of the sum of two (or more) monotone operators. It is well known that a wide class of linear and nonlinear problems arising in pure and applied sciences can be studied via the general mixed variational inequalities, see [1–16] and the references therein.

If $K$ is a closed convex set in $H$ and $\varphi(u) = I_K(u)$, where

$$I_K(u) = \begin{cases} 0, & u \in K, \\ +\infty, & \text{otherwise}, \end{cases} \qquad (2.3)$$

is the indicator function of $K$, then problem (2.1) is equivalent to finding $u \in H$ such that $g(u) \in K$ and

$$\langle Tu, g(v) - g(u)\rangle \ge 0, \quad \forall v \in H \text{ with } g(v) \in K. \qquad (2.4)$$

Problem (2.4) is called the general variational inequality, which was first introduced and studied by Noor [9] in 1988. For the applications, formulation, and numerical methods of general variational inequalities (2.4), we refer the reader to the surveys [1–3, 7, 12, 13, 16].
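To make this reduction explicit (the following short derivation is ours), note that with $\varphi = I_K$ inequality (2.1) reads

$$\langle Tu, g(v) - g(u)\rangle + I_K(g(v)) - I_K(g(u)) \ge 0, \quad \forall v \in H.$$

Choosing any $v$ with $g(v) \in K$ gives $I_K(g(v)) = 0$, and the left-hand side can only be finite if $I_K(g(u)) = 0$, that is, $g(u) \in K$; the inequality then reduces precisely to (2.4).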

If $g \equiv I$, then problem (2.4) is equivalent to finding $u \in K$ such that

$$\langle Tu, v - u\rangle \ge 0, \quad \forall v \in K, \qquad (2.5)$$

which is known as the classical variational inequality, introduced and studied by Stampacchia [17].

Lemma 2.1 (see [4]). For a given $z \in H$, $u \in H$ satisfies the inequality

$$\langle u - z, v - u\rangle + \rho\varphi(v) - \rho\varphi(u) \ge 0, \quad \forall v \in H, \qquad (2.6)$$

if and only if $u = J_\varphi z$, where $J_\varphi = (I + \rho\,\partial\varphi)^{-1}$ is the resolvent operator and $\rho > 0$ is a constant.

It follows from Lemma 2.1 that

$$\langle J_\varphi z - z, v - J_\varphi z\rangle \ge \rho\varphi(J_\varphi z) - \rho\varphi(v), \quad \forall v \in H. \qquad (2.7)$$

If $\varphi$ is the indicator function of a closed convex set $K$ in $H$, then the resolvent operator $J_\varphi = P_K$, where $P_K$ is the projection of $H$ onto the closed convex set $K$. It is well known that $J_\varphi$ is nonexpansive, that is,

$$\|J_\varphi u - J_\varphi v\| \le \|u - v\|, \quad \forall u, v \in H. \qquad (2.8)$$
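For instance, when $K$ is the nonnegative orthant, the projection, and hence the resolvent of its indicator function, is a componentwise maximum. The following Matlab fragment, a minimal illustration of ours assuming this particular $K$, checks the nonexpansiveness (2.8) numerically:

    % Resolvent of the indicator function of K = {x : x >= 0}:
    % J_phi reduces to the Euclidean projection, a componentwise max.
    projK = @(z) max(z, 0);

    % Numerical check of nonexpansiveness: ||P_K u - P_K v|| <= ||u - v||.
    u = randn(5, 1);  v = randn(5, 1);
    fprintf('||P_K u - P_K v|| = %.4f <= ||u - v|| = %.4f\n', ...
            norm(projK(u) - projK(v)), norm(u - v));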

Lemma 2.2 (see [10]). $u \in H$ is a solution of problem (2.1) if and only if $u \in H$ satisfies the relation

$$g(u) = J_\varphi[g(u) - \rho Tu], \qquad (2.9)$$

where $J_\varphi = (I + \rho\,\partial\varphi)^{-1}$ is the resolvent operator and $\rho > 0$ is a constant.

From Lemma 2.2, it is clear that $u \in H$ is a solution of (2.1) if and only if $u$ is a zero of the residual function

$$R(u) := g(u) - J_\varphi[g(u) - \rho Tu]. \qquad (2.10)$$

In [11], Noor used the fixed-point formulation (2.9) and the resolvent equations to suggest and analyze the following algorithm for solving problem (2.1).
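When $\varphi$ is the indicator function of $K$, the residual (2.10) takes the computable form $R(u) = g(u) - P_K[g(u) - \rho Tu]$, so $\|R(u)\|$ can be monitored as a measure of progress. A minimal Matlab sketch, assuming $K$ is the nonnegative orthant and purely illustrative choices of $T$ and $g$:

    % Residual R(u) = g(u) - P_K[g(u) - rho*T(u)] for phi = indicator of K:
    % u solves (2.1) iff R(u) = 0, so ||R(u)|| measures progress.
    projK = @(z) max(z, 0);                    % K = nonnegative orthant (assumed)
    resid = @(T, g, u, rho) g(u) - projK(g(u) - rho*T(u));

    % Illustrative data: T(u) = A*u + b (monotone) and g = identity.
    A = [4 -1; -1 3];  b = [-1; -2];
    T = @(u) A*u + b;  g = @(u) u;
    fprintf('||R(u)|| = %.4e\n', norm(resid(T, g, [0.5; 1.0], 0.1)));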

Algorithm 2.3. For a given $u^0 \in H$, compute the approximate solution $u^{k+1}$ by the following iterative schemes.
Predictor Step

$$g(w^k) = J_\varphi[g(u^k) - \rho_k Tu^k],$$

where $\rho_k > 0$ satisfies

$$\rho_k\langle Tu^k - Tw^k, g(u^k) - g(w^k)\rangle \le \sigma\|g(u^k) - g(w^k)\|^2, \quad \sigma \in (0, 1).$$

Corrector Step

$$g(u^{k+1}) = J_\varphi[g(u^k) - \alpha_k\rho_k Tw^k],$$

where $\alpha_k$ is the corrector step size.
If $\varphi$ is the indicator function of a closed convex set $K$ in $H$, then $J_\varphi = P_K$ [10], the projection of $H$ onto $K$, and consequently Algorithm 2.3 collapses to the following algorithm.

Algorithm 2.4 (see [11]). For a given $u^0 \in H$, compute the approximate solution $u^{k+1}$ by the following iterative schemes.
Predictor Step

$$g(w^k) = P_K[g(u^k) - \rho_k Tu^k],$$

where $\rho_k > 0$ satisfies

$$\rho_k\langle Tu^k - Tw^k, g(u^k) - g(w^k)\rangle \le \sigma\|g(u^k) - g(w^k)\|^2, \quad \sigma \in (0, 1).$$

Corrector Step

$$g(u^{k+1}) = P_K[g(u^k) - \alpha_k\rho_k Tw^k],$$

where $\alpha_k$ is the corrector step size.
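To make the predictor-corrector structure concrete, the following Matlab fragment performs one pass of such a scheme in the classical special case $g = I$, $\varphi = I_K$ with $K$ the nonnegative orthant; the acceptance test on $\rho_k$ and the formula for $\alpha_k$ are the standard choices from this literature and are our assumptions, not necessarily the exact rules of [11]:

    % One predictor-corrector pass for g = I, phi = indicator of K,
    % so that J_phi = P_K = max(., 0). Illustrative data only.
    A = [4 -1; -1 3];  b = [-1; -2];
    T = @(u) A*u + b;                       % continuous monotone operator
    projK = @(z) max(z, 0);
    u = [1; 1];  rho = 0.2;  sigma = 0.9;

    % Predictor: w = P_K[u - rho*T(u)], with rho accepted only if
    % rho*<T(u) - T(w), u - w> <= sigma*||u - w||^2 (assumed test).
    w = projK(u - rho*T(u));
    assert(rho*((T(u) - T(w))'*(u - w)) <= sigma*norm(u - w)^2);

    % Corrector: u_new = P_K[u - alpha*rho*T(w)], with the step
    % alpha = <u - w, d>/||d||^2 along d = (u - w) - rho*(T(u) - T(w)).
    d = (u - w) - rho*(T(u) - T(w));
    alpha = ((u - w)'*d) / (d'*d);
    u_new = projK(u - alpha*rho*T(w))       % displays the corrected iterate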
Throughout this paper, we make the following assumptions.

Assumptions
(i) $H$ is a finite-dimensional space.
(ii) $g$ is a homeomorphism on $H$, that is, $g$ is bijective, continuous, and $g^{-1}$ is continuous.
(iii) $T$ is a continuous and $g$-pseudomonotone operator on $H$, that is,

$$\langle Tu^*, g(u) - g(u^*)\rangle + \varphi(g(u)) - \varphi(g(u^*)) \ge 0 \;\Longrightarrow\; \langle Tu, g(u) - g(u^*)\rangle + \varphi(g(u)) - \varphi(g(u^*)) \ge 0, \quad \forall u \in H.$$

(iv) The solution set of problem (2.1), denoted by $S^*$, is nonempty.

3. Iterative Method and Basic Results

In this section, we suggest and analyze a new method for solving the general mixed variational inequality (2.1) by using a new direction together with a new step size; this is the main motivation of this paper.

Algorithm 3.1
Step 1. Given $u^0 \in H$, $\rho_0 > 0$, and the parameters of the method, set $k = 0$.
Step 2. Set the predictor $w^k$ by (3.1).
Step 3. If the criterion (3.3) holds, then set the step size $\alpha_k$ by (3.6) and the next iterate $u^{k+1}$ by (3.7), set $k := k + 1$, and go to Step 2.
Step 4. Reduce the value of $\rho_k$ by the rule (3.4), update $w^k$ accordingly, and go to Step 3.
If $\varphi$ is the indicator function of a closed convex set $K$ in $H$, then $J_\varphi = P_K$ [10], the projection of $H$ onto $K$. Consequently, Algorithm 3.1 reduces to the following Algorithm 3.2 for solving the general variational inequality (2.4).

Algorithm 3.2
Step 1. Given $u^0 \in H$, $\rho_0 > 0$, and the parameters of the method, set $k = 0$.
Step 2. Set the predictor $w^k$ by (3.1) with $J_\varphi$ replaced by $P_K$.
Step 3. If the criterion (3.3) holds, then set the step size $\alpha_k$ by (3.6) and the next iterate $u^{k+1}$ by (3.7), with $J_\varphi$ replaced by $P_K$, set $k := k + 1$, and go to Step 2.
Step 4. Reduce the value of $\rho_k$ by the rule (3.4), update $w^k$ accordingly, and go to Step 3.
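The overall control flow of Steps 1–4 — compute a predictor, accept the current $\rho_k$ or reduce it and retry, then take a corrector step — can be sketched in Matlab as follows. This is only a structural illustration of ours for the classical case $g = I$, $\varphi = I_K$: the acceptance test, direction, and step size below are the standard extragradient-type stand-ins, whereas Algorithm 3.2 uses its own direction and step size given by (3.1)–(3.7).

    % Structural sketch of the self-adaptive loop for g = I and
    % K = nonnegative orthant (assumed); illustrative data only.
    A = [4 -1; -1 3];  b = [-1; -2];
    T = @(u) A*u + b;  projK = @(z) max(z, 0);
    u = [1; 1];  rho = 1;  sigma = 0.9;  mu = 0.7;  tol = 1e-7;

    for k = 1:1000
        w = projK(u - rho*T(u));                 % Step 2: predictor
        if norm(u - w) <= tol, break; end        % residual-based stop
        while rho*((T(u) - T(w))'*(u - w)) > sigma*norm(u - w)^2
            rho = mu*rho;                        % Step 4: reduce rho ...
            w = projK(u - rho*T(u));             % ... and retry Step 3
        end
        d = (u - w) - rho*(T(u) - T(w));         % Step 3: direction,
        alpha = ((u - w)'*d) / (d'*d);           % step size, and
        u = projK(u - alpha*rho*T(w));           % next iterate
    end
    fprintf('iterations: %d, u = [%.4f; %.4f]\n', k, u(1), u(2));

On this small example the iterates approach the solution of the complementarity problem $u \ge 0$, $Au + b \ge 0$, $\langle u, Au + b\rangle = 0$.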

Remark 3.3. Equation (3.2) implies a bound that is used in the convergence analysis below.

The next lemma shows that $\rho_k$ and $\alpha_k$ are bounded away from zero whenever $u^k$ is not a solution of problem (2.1).

Lemma 3.4. For a given $u^k \in H$, let $w^k$ and $\rho_k$ satisfy (3.1) and (3.3). Then $\rho_k$ and $\alpha_k$ are bounded below by positive constants.

Proof. It follows from (3.4) and (3.14) that the assertion holds in the first case; otherwise, we have the reverse inequality, from which we can again obtain the assertion of this lemma.

The next lemma shows that the residual norm $\|R(u, \rho)\|$ is a nondecreasing function with respect to $\rho$; it can be proved using the techniques of [1].

Lemma 3.5. For all $u \in H$ and $\rho' \ge \rho > 0$, it holds that

$$\|R(u, \rho')\| \ge \|R(u, \rho)\|,$$

where $R(u, \rho) := g(u) - J_\varphi[g(u) - \rho Tu]$ denotes the residual (2.10) with parameter $\rho$.

The next lemma was already established in [11].

Lemma 3.6. For all $u^k \in H$, $u^* \in S^*$, and $\rho_k > 0$, we have the estimate established in [11], where $D(u^k, w^k)$ and $\alpha_k$ are defined in (3.5) and (3.6), respectively.

4. Convergence Analysis

In this section, we prove the global convergence of the proposed method. The following result plays a crucial role in the convergence analysis of the proposed method.

Theorem 4.1. Let $u^*$ be a solution of problem (2.1) and let $\{u^k\}$ be the sequence obtained from Algorithm 3.1. Then $\{u^k\}$ is bounded and

$$\|g(u^{k+1}) - g(u^*)\|^2 \le \|g(u^k) - g(u^*)\|^2 - c\,\|g(u^k) - g(w^k)\|^2 \qquad (4.1)$$

for some constant $c > 0$.

Proof. Let $u^*$ be a solution of problem (2.1). Then a chain of estimates for $\|g(u^{k+1}) - g(u^*)\|^2$ yields (4.1), where the first inequality follows from the nonexpansiveness of the resolvent operator, the second inequality follows from (3.7) and (3.19), and the third inequality follows from (3.15). Since $\rho_k$ and $\alpha_k$ are bounded away from zero (Lemma 3.4), the constant $c$ in (4.1) is positive.
Since $g$ is a homeomorphism, it is easy to verify that the sequence $\{u^k\}$ is bounded.

We now prove the convergence of Algorithm 3.1.

Theorem 4.2. The sequence $\{u^k\}$ generated by Algorithm 3.1 converges to a solution of problem (2.1).

Proof. It follows from (4.1) that

$$c\sum_{k=0}^{\infty} \|g(u^k) - g(w^k)\|^2 \le \|g(u^0) - g(u^*)\|^2 < \infty,$$

which means that $\lim_{k\to\infty} \|g(u^k) - g(w^k)\| = 0$. Since $g$ is a homeomorphism, we have $\lim_{k\to\infty} \|u^k - w^k\| = 0$. This implies that $\{w^k\}$ is bounded. Since $\|R(u, \rho)\|$ is a nondecreasing function of $\rho$ and, by Lemma 3.4, the $\rho_k$ are bounded below by some $\rho_{\min} > 0$, it follows from (4.5) that

$$\lim_{k\to\infty} \|R(u^k, \rho_{\min})\| = 0.$$

Let $\bar u$ be a cluster point of $\{u^k\}$, and let the subsequence $\{u^{k_j}\}$ converge to $\bar u$. Since $R(u, \rho_{\min})$ is a continuous function of $u$, it follows from (4.8) that

$$R(\bar u, \rho_{\min}) = 0.$$

From Lemma 2.2, it follows that $\bar u$ is a solution point of problem (2.1). Note that inequality (4.1) is true for every solution point of problem (2.1); hence we have

$$\|g(u^{k+1}) - g(\bar u)\| \le \|g(u^k) - g(\bar u)\|, \quad \forall k \ge 0.$$

Since $u^{k_j} \to \bar u$ and $g$ is continuous, for any given $\epsilon > 0$ there is an $l > 0$ such that

$$\|g(u^{l}) - g(\bar u)\| < \epsilon.$$

Therefore, for any $k \ge l$, it follows from (4.10) and (4.11) that

$$\|g(u^k) - g(\bar u)\| \le \|g(u^{l}) - g(\bar u)\| < \epsilon,$$

and thus the sequence $\{g(u^k)\}$ converges to $g(\bar u)$. Since $g$ is a homeomorphism, we see that the sequence $\{u^k\}$ converges to $\bar u$.
We now prove that the sequence $\{u^k\}$ has exactly one cluster point. Assume that $\hat u$ is another cluster point, which satisfies

$$\delta := \|g(\bar u) - g(\hat u)\| > 0.$$

Since $\bar u$ is a cluster point of $\{u^k\}$ and $g$ is a homeomorphism, there is a $k_0 > 0$ such that

$$\|g(u^{k_0}) - g(\bar u)\| < \frac{\delta}{2}.$$

On the other hand, since $\bar u$ is a solution of problem (2.1), from (4.1) we have $\|g(u^k) - g(\bar u)\| \le \|g(u^{k_0}) - g(\bar u)\|$ for all $k \ge k_0$; it follows that

$$\|g(u^k) - g(\hat u)\| \ge \|g(\bar u) - g(\hat u)\| - \|g(u^k) - g(\bar u)\| > \frac{\delta}{2}, \quad \forall k \ge k_0.$$

This contradicts the assumption that $\hat u$ is a cluster point of $\{u^k\}$. Thus the sequence $\{u^k\}$ converges to the solution $\bar u$.

5. Numerical Results

In this section, we present some numerical results for the proposed method. In order to verify the theoretical assertions, we consider the test problem (5.1), where $A$ is an $n \times n$ matrix, $K$ is a simple closed convex set in $\mathbb{R}^n$, and $b$ is a parameter vector. Here, the statement that the set $K$ is simple means that the projection onto $K$ is simple to carry out. For the same reason given in Fletcher (see [5, page 222]), each element of the optimal solution of problem (5.1) is positive. Thus the bounds are inactive and can be ignored, so problem (5.1) can be written in the reduced form (5.2). By attaching the Lagrange multiplier vector $y$ to the equality constraints, we obtain the Lagrangian function (5.3) of problem (5.2). If $(x^*, y^*)$ is a KKT point of problem (5.2), then the KKT conditions (5.4) hold. Note that problem (5.1) is invariant under multiplication of the variable by a positive scalar. Exploiting this invariance and eliminating variables in (5.4), we see that problem (5.1) is equivalent to a general variational inequality problem: find $u^*$ such that conditions (5.5)-(5.6) hold.

It is well known (see [7, Theorem 1]) that solving (5.5)-(5.6) is equivalent to finding a zero of the associated residual function (5.7). Thus, solving (5.5)-(5.6) is equivalent to finding a pair $(x^*, y^*)$ satisfying the resulting system (5.8).

In this case, Algorithms 2.3 and 3.1 collapse to Algorithms 2.4 and 3.2, respectively.

In the test, we let $c$ be a randomly generated vector and let $H = I - 2\,hh^{\top}/\|h\|^2$ (with $h \ne 0$) be a Householder matrix. The remaining data of the test problem (5.9) are then assembled from $H$ and $c$ in such a way that the exact solution of the problem is known.
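A minimal Matlab sketch of this part of the data generation (our illustration; the assembly of the remaining data of problem (5.9) is not reproduced here):

    % Test data: c is a random vector; H = I - 2*h*h'/(h'*h) is a
    % Householder matrix, hence symmetric and orthogonal.
    n = 100;
    h = rand(n, 1);                      % nonzero vector defining H
    H = eye(n) - 2*(h*h')/(h'*h);        % Householder matrix
    c = rand(n, 1);                      % randomly generated vector

    % Sanity checks: H = H' and H'*H = I (up to rounding).
    fprintf('||H - H''||_F = %.1e, ||H''*H - I||_F = %.1e\n', ...
            norm(H - H', 'fro'), norm(H'*H - eye(n), 'fro'));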

In all the tests, the same parameter values are used for the methods under comparison. The calculations are started with a vector whose elements are randomly chosen in $(0, 1)$ and are stopped as soon as the stopping criterion is satisfied.

Since the exact solution is known, we also report the distance between the final iterate and the exact solution after termination. All codes are written in Matlab and run on a P4 2.00 GHz notebook computer. We test the problem with several dimensions $n$. The iteration numbers and the computational time for Algorithms 2.4 and 3.2 with different dimensions and different initial parameters $\rho_0$ are given in Tables 1 and 2, and those for Algorithm 3.2 and the method of Noor [14] in Tables 3 and 4.

Table 1: Numerical results for problem (5.9).

Table 2: Numerical results for problem (5.9).

Table 3: Numerical results for problem (5.9).

Table 4: Numerical results for problem (5.9).

From Tables 1–4, we can see that Algorithm 2.4 and the method in [14] work well if the initial parameter $\rho_0$ is sufficiently large. If this parameter is too small, then the iteration numbers and the computational time increase significantly. The tables also show that Algorithm 3.2 is very efficient for the problems tested. In addition, for our method, the computational time and the iteration numbers appear not to be very sensitive to the problem size.

Acknowledgments

The authors would like to thank the referees and Professor Dr. Alois Steindl for their very constructive comments and suggestions. Abdellah Bnouhachem was supported by the China Postdoctoral Science Foundation (Grant no. 20060390915) and the National Key Technology R&D Program (2006BAH02A06).