Gap Functions and Algorithms for Variational Inequality Problems
We solve several kinds of variational inequality problems through gap functions, give algorithms for the corresponding problems, obtain global error bounds, and carry out the convergence analysis. By means of generalized gap functions and generalized D-gap functions, we give global error bounds for set-valued mixed variational inequality problems. Through a gap function, we equivalently transform the generalized variational inequality problem into a constrained optimization problem, give a steepest descent method, and show the convergence of the method.
The variational inequality problem (VIP) provides a simple, natural, unified, and general framework for studying a wide class of equilibrium problems arising in transportation system analysis [1, 2], regional science [3, 4], elasticity, optimization, and economics. The canonical VIP can be described as follows: find a point $x^\ast \in K$ such that
$$\langle F(x^\ast), x - x^\ast \rangle \geq 0 \quad \forall x \in K, \tag{1}$$
where $K$ is a nonempty closed convex subset of $\mathbb{R}^n$, $F$ is a mapping from $\mathbb{R}^n$ into itself, and $\langle \cdot , \cdot \rangle$ denotes the inner product in $\mathbb{R}^n$.
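A standard fact (not stated explicitly above, but classical) is that $x^\ast$ solves (1) if and only if $x^\ast$ is a fixed point of $x \mapsto P_K(x - \gamma F(x))$ for any $\gamma > 0$. The sketch below iterates this projection map; the box constraint $K = [0,1]^3$, the affine strongly monotone map $F(x) = x - b$, and all parameter values are our illustrative choices, not taken from the paper.

```python
import numpy as np

# Fixed-point characterization of the VIP (1): x* solves (1) iff
#   x* = proj_K(x* - gamma * F(x*))  for any gamma > 0.
# Here K = [0,1]^n, so the projection is a componentwise clip, and
# F(x) = x - b is strongly monotone; both choices are illustrative.

def proj_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

def solve_vip(F, x0, gamma=0.5, tol=1e-10, max_iter=10_000):
    x = x0.copy()
    for _ in range(max_iter):
        x_new = proj_box(x - gamma * F(x))   # projected fixed-point step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

b = np.array([0.3, 1.7, -0.4])
x_star = solve_vip(lambda x: x - b, np.zeros(3))
# For this F the solution is the projection of b onto the box: [0.3, 1.0, 0.0]
```

Since the iteration map is a contraction here (factor $|1-\gamma| = 0.5$ composed with a nonexpansive projection), convergence is geometric.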
In recent years, considerable interest has been shown in developing various useful and important extensions and generalizations of VIP, both for its own sake and for its applications, such as the general variational inequality problem (GVIP) and the set-valued mixed variational inequality problem (SMVIP). There have been significant developments of these problems related to multivalued operators, nonconvex optimization, iterative methods, and structural analysis. More recently, much attention has been given to reformulating the VIP as an optimization problem. Gap functions, which yield such an equivalent optimization problem, turn out to be very useful in designing new globally convergent algorithms and in analyzing the rate of convergence of some iterative methods. Various gap functions for VIP have been suggested and studied by many authors; see [8, 10–13] and the references therein. Error bounds are functions which provide a measure of the distance between the solution set and an arbitrary point. Therefore, error bounds play an important role in the global and local convergence analysis of algorithms for solving VIP.
For the VIP defined in (1), an equivalent optimization problem formulation was provided through the regularized gap function defined by
$$f_\alpha(x) = \max_{y \in K} \left\{ \langle F(x), x - y \rangle - \frac{\alpha}{2}\, \|x - y\|^2 \right\},$$
where $\alpha > 0$ is a parameter. It was proved that $x^\ast$ is a solution of problem (1) if and only if $x^\ast$ is a global minimizer of $f_\alpha$ in $K$ and $f_\alpha(x^\ast) = 0$. To extend the definition of the regularized gap function, the generalized regularized gap function was defined by
$$f_\phi(x) = \max_{y \in K} \left\{ \langle F(x), x - y \rangle - \phi(x, y) \right\},$$
where $\phi$ is an abstract function which satisfies the following conditions:
(C1) $\phi$ is continuously differentiable on $K \times K$;
(C2) $\phi$ is nonnegative on $K \times K$;
(C3) $\phi(x, \cdot)$ is uniformly strongly convex on $K$; that is, there exists a positive number $\beta$ such that
$$\phi(x, y_1) - \phi(x, y_2) \geq \langle \nabla_2 \phi(x, y_2), y_1 - y_2 \rangle + \beta \|y_1 - y_2\|^2 \quad \forall y_1, y_2 \in K;$$
(C4) $\phi(x, y) = 0$ if and only if $x = y$;
(C5) $\nabla_2 \phi(x, \cdot)$ is uniformly Lipschitz continuous on $K$; that is, there exists a constant $L > 0$ such that
$$\|\nabla_2 \phi(x, y_1) - \nabla_2 \phi(x, y_2)\| \leq L \|y_1 - y_2\| \quad \forall y_1, y_2 \in K.$$
Note that $\nabla_2 \phi$ is the partial derivative of $\phi$ with respect to the second component and conditions (C1)–(C5) are compatible. One can refer to [10, 14] and so forth for more details.
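For the quadratic regularizer $\phi(x,y) = \frac{\alpha}{2}\|x-y\|^2$, the inner maximization has the closed-form maximizer $y_\alpha(x) = P_K(x - F(x)/\alpha)$, so $f_\alpha$ is cheap to evaluate when the projection is cheap. The sketch below does this for a box $K = [0,1]^3$ and an affine $F$; these instance choices are ours.

```python
import numpy as np

# Evaluating Fukushima's regularized gap function over K = [0,1]^n:
#   f_alpha(x) = max_{y in K} { <F(x), x - y> - (alpha/2) ||x - y||^2 },
# whose unique maximizer (guaranteed by strong concavity in y, cf. (C3))
# is y_alpha(x) = proj_K(x - F(x) / alpha).

def proj_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

def regularized_gap(F, x, alpha=1.0):
    y = proj_box(x - F(x) / alpha)            # closed-form maximizer
    return F(x) @ (x - y) - 0.5 * alpha * np.dot(x - y, x - y)

b = np.array([0.3, 1.7, -0.4])
F = lambda x: x - b
x_star = proj_box(b)                          # solution of this particular VIP
print(regularized_gap(F, x_star))             # ~0 at the solution
print(regularized_gap(F, np.ones(3)))         # > 0 away from it
```

This illustrates the equivalence stated above: the gap function is nonnegative on $K$ and vanishes exactly at solutions.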
Many gap functions have been explored during the past two decades, as shown in [10–16] and the references therein. Motivated by this work, in this paper we solve some classes of VIP through gap functions, give algorithms for the corresponding problems, obtain global error bounds, and carry out the convergence analysis. We consider generalized gap functions and generalized D-gap functions for SMVIP and give global error bounds for the problem through these two functions, respectively. For GVIP, we equivalently transform it into a constrained optimization problem through a gap function, introduce a steepest descent method, and show the convergence of the method.
Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle \cdot , \cdot \rangle$ and $\| \cdot \|$, respectively. Let $K$ be a nonempty closed convex set in $H$ and let $C(H)$ be the family of all nonempty compact subsets of $H$.
Let $g, F : H \to H$ be nonlinear operators. The GVIP can be described as follows: find $x^\ast \in H$ with $g(x^\ast) \in K$, such that
$$\langle F(x^\ast), g(y) - g(x^\ast) \rangle \geq 0 \quad \forall y \in H \text{ with } g(y) \in K. \tag{6}$$
For a single-valued functional $f : H \to \mathbb{R} \cup \{+\infty\}$, which is proper, convex, and lower semicontinuous, and for a given multivalued operator $T : H \to C(H)$, the SMVIP can be described as follows: find $x^\ast \in K$ and $u^\ast \in T(x^\ast)$, such that
$$\langle u^\ast, y - x^\ast \rangle + f(y) - f(x^\ast) \geq 0 \quad \forall y \in K. \tag{7}$$
Note that when $f \equiv 0$, problem (7) reduces to a set-valued variational inequality problem; when $f \equiv 0$ and $T$ is a single-valued operator, problem (7) reduces exactly to problem (1).
Recall that the multivalued operator $T$ is said to be strongly monotone with modulus $\mu > 0$ on $K$ if
$$\langle u - v, x - y \rangle \geq \mu \|x - y\|^2 \quad \forall x, y \in K,\; u \in T(x),\; v \in T(y).$$
And $T$ is said to be Lipschitz continuous on a nonempty bounded set $B \subseteq H$ if there exists a positive constant $\lambda$ such that
$$\mathcal{H}(T(x), T(y)) \leq \lambda \|x - y\| \quad \forall x, y \in B,$$
where $\mathcal{H}(\cdot, \cdot)$ is the Hausdorff metric on $C(H)$ defined by
$$\mathcal{H}(A, B) = \max\Big\{ \sup_{a \in A} \inf_{b \in B} \|a - b\|,\; \sup_{b \in B} \inf_{a \in A} \|a - b\| \Big\}.$$
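The Hausdorff metric in the Lipschitz condition above is easy to compute when the sets are finite; the finite-set setting and the concrete points below are an illustrative simplification of ours.

```python
import numpy as np

# Hausdorff metric between two finite subsets A, B of R^n:
#   H(A, B) = max{ sup_{a in A} d(a, B), sup_{b in B} d(b, A) },
# as used in the Lipschitz-continuity condition for the multivalued
# operator T. Rows of A and B are points.

def hausdorff(A, B):
    # pairwise distance matrix: d[i, j] = ||A[i] - B[j]||
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(),   # sup over A of dist to B
               d.min(axis=0).max())   # sup over B of dist to A

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 2.0]])
print(hausdorff(A, B))  # 2.0: the point (0, 2) lies at distance 2 from A
```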
Let $F : \mathbb{R}^n \to \mathbb{R}^n$. Then $F$ is a $P$-function if $\max_{1 \le i \le n} (x_i - y_i)(F_i(x) - F_i(y)) > 0$ for all $x, y \in \mathbb{R}^n$ with $x \neq y$. Assume $\varepsilon > 0$. $H(x, \varepsilon)$ is called a smoothing approximation function of $h(x)$ if there exists a positive constant $\kappa$ such that $\|H(x, \varepsilon) - h(x)\| \leq \kappa \varepsilon$. And $H$ is a uniform approximation if $\kappa$ is independent of $x$.
A matrix $M \in \mathbb{R}^{n \times n}$ is a $P_0$-matrix if each of its principal minors is nonnegative.
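The $P_0$ property can be checked directly from the definition by enumerating all principal submatrices; this brute-force check (exponential in $n$, so only for small matrices) is our illustration, not a method from the paper.

```python
import numpy as np
from itertools import combinations

# A P0-matrix has every principal minor nonnegative. We enumerate all
# index subsets and test the determinant of each principal submatrix.

def is_P0(M, tol=1e-12):
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.det(M[np.ix_(idx, idx)]) < -tol:
                return False
    return True

print(is_P0(np.array([[1.0, 2.0], [0.0, 1.0]])))  # True: minors 1, 1, 1
print(is_P0(np.array([[0.0, 1.0], [1.0, 0.0]])))  # False: 2x2 minor is -1
```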
We need the following lemmas. The parameters involved in the lemmas can be found in the following sections.
Lemma 1 (see ). If abstract function satisfies condition (C1), then the following holds; that is, is strongly monotone in , and by (C5), one obtains that .
Lemma 2 (see ). If abstract function satisfies conditions (C1)–(C4), then
Lemma 3 (see ). If abstract function satisfies conditions (C1)–(C5) and and are the corresponding coefficients defined above, then one has
Lemma 4 (see ). If abstract function satisfies conditions (C1)–(C4), then . Moreover, when , is a solution of SMVIP.
Lemma 5 (see ). If abstract function satisfies conditions (C1)–(C4), then is differentiable and
Lemma 7 (see ). Let abstract function satisfy conditions (C1)–(C4). If and is positive definite, then is a solution of .
3. Gap Functions and Error Bounds for SMVIP
In this section, by introducing appropriate gap functions, we give a global error bound for SMVIP. Firstly, we need the following propositions.
Proposition 8. Let be a nonempty closed convex set in and let be strictly convex in . Then has only one minimum in .
Proof. We use proof by contradiction to show the desired result. Let be two minimal points of ; that is, . Since is strictly convex, one obtains that This implies that there exists a point , such that , which is a contradiction. This completes the proof.
Let , , and be defined as above and let be a nonempty closed convex set in . Now, we can introduce the generalized gap function of , defined as follows: From the uniform convexity of , one obtains that is also uniformly convex in . By Proposition 8, there exists a minimal point of in , such that
Proposition 9. If abstract function satisfies conditions (C1)–(C4) and is proper, convex, and lower semicontinuous, then for all , and if and only if is a solution of (7).
Proof. From the definition of , one has
By the definition of subgradient, we have
which is equivalent to
On the one hand, if , from Lemma 2, one obtains , and so does . So, from (21), we have which implies that is a solution of .
On the other hand, if is a solution of , take in (7), then we have From condition (C3), one has And by conditions (C2) and (C4), So we have Combining (23) with (26), we have . This completes the proof.
Based on the above discussion, one can obtain the following global error bound.
Theorem 10. If abstract function satisfies conditions (C1)–(C5), is closed convex, and is strongly monotone and Lipschitz continuous with respect to the solution of , then one has where and can be found in (5) and (9), respectively.
Proof. Since is a solution of , take , then we obtain Let , for all . Then inequality (28) reduces to Take , in (21) such that . Then inequality (21) changes to Combining (29) and (30), we have And note that From (8), one has so we have This completes the proof.
Theorem 11. If abstract function satisfies conditions (C1)–(C5), is strongly monotone with respect to the solution of SMVIP, and is Lipschitz continuous with modulus , then has a global error bound with respect to SMVIP; that is,
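A numerical illustration of an error bound of this shape, $\|x - x^\ast\| \le c \sqrt{f_\alpha(x)}$, on the single-valued special case ($f \equiv 0$, $T$ single-valued): for the particular instance below ($K = [0,1]^3$, $F(x) = x - b_0$, $\alpha = 1$), a direct computation gives $f_\alpha(x) \ge \tfrac{1}{2}\|x - x^\ast\|^2$ on $K$, so $c = \sqrt{2}$ works. The instance, the constant, and the sampling check are all ours, not the paper's.

```python
import numpy as np

# Empirical check of the error-bound shape ||x - x*|| <= c * sqrt(gap(x))
# for a strongly monotone single-valued VIP over a box. We sample random
# points in K and record the worst observed ratio.

def proj_box(x):
    return np.clip(x, 0.0, 1.0)

def reg_gap(F, x, alpha=1.0):
    y = proj_box(x - F(x) / alpha)
    return F(x) @ (x - y) - 0.5 * alpha * np.dot(x - y, x - y)

b0 = np.array([0.3, 1.7, -0.4])
F = lambda x: x - b0
x_star = proj_box(b0)                     # the unique solution of this VIP

rng = np.random.default_rng(0)
worst = max(np.linalg.norm(x - x_star) / max(np.sqrt(reg_gap(F, x)), 1e-12)
            for x in rng.uniform(0.0, 1.0, size=(1000, 3)))
print(worst)  # stays below sqrt(2) ~ 1.414 for this instance
```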
Now, we introduce generalized D-gap function for SMVIP which is defined by where and are minimal points for and in , respectively, and . For , we can conclude the following result.
Proposition 12. If abstract function satisfies condition (C3), then one has
Proof. From the definition of , one obtains the first inequality; the other can be proved similarly. This completes the proof.
From Proposition 12, one has the following.
Proposition 13. If satisfies conditions (C1)–(C4), then is nonnegative, and it equals zero if and only if is a solution of .
Proof. From Proposition 12 and nonnegative property of , we have that is nonnegative.
On the one hand, if , then by conditions (C2) and (C4), one has . Then by Proposition 9, we conclude that is a solution of .
On the other hand, if is a solution of , by Proposition 9, one obtains that . From condition (C4), one has . And since is nonnegative, we have . This completes the proof.
By the generalized D-gap function, we have the following error bound for .
Theorem 14. Let satisfy conditions (C1)–(C5), let be strongly monotone with respect to the solution of SMVIP, and let be Lipschitz continuous with modulus ; then has a global error bound with respect to SMVIP; that is,
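The generalized D-gap function above is built as the difference of two gap functions. In the classical quadratically regularized case it reads $g_{ab}(x) = f_a(x) - f_b(x)$ with $0 < a < b$, which is nonnegative on the whole space and vanishes exactly at solutions, yielding an unconstrained reformulation. The sketch below uses this quadratic special case with our own instance choices ($K = [0,1]^3$, $F(x) = x - b_0$).

```python
import numpy as np

# D-gap function g_{ab}(x) = f_a(x) - f_b(x), 0 < a < b, where f_alpha is
# the regularized gap function over K = [0,1]^n. Unlike f_alpha, g_{ab} is
# nonnegative everywhere (not just on K) and vanishes only at solutions.

def proj_box(x):
    return np.clip(x, 0.0, 1.0)

def reg_gap(F, x, alpha):
    y = proj_box(x - F(x) / alpha)
    return F(x) @ (x - y) - 0.5 * alpha * np.dot(x - y, x - y)

def d_gap(F, x, a=0.5, b=2.0):
    return reg_gap(F, x, a) - reg_gap(F, x, b)

b0 = np.array([0.3, 1.7, -0.4])
F = lambda x: x - b0
x_star = proj_box(b0)
print(d_gap(F, x_star))                       # ~0 at the solution
print(d_gap(F, np.ones(3)))                   # > 0 elsewhere in K
print(d_gap(F, np.array([2.0, -1.0, 3.0])))   # > 0 even outside K
```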
4. Steepest Descent Method for GVIP
In this section, by introducing an appropriate generalized gap function, the original problem (6) can be transformed into an optimization problem with constraints. When one designs algorithms to solve the optimization problem, the gradient of the objective function is unavoidable. We design a new algorithm, constructing a class of descent directions, to solve the optimization problem. In the following, we set to be . And we introduce the following generalized gap function for : where is a minimal point for , is a positive parameter, and satisfies conditions (C1)–(C5) stated above. For , we have the following useful results: (A1) is nonnegative in ; (A2) for some is a solution of VIP; (A3) is the only minimizer of in . Similar to the discussion in [10, 11], we also make the following two assumptions: (a) is positive definite for all ; (b) .
Algorithm 15 (steepest descent method)
Step 0. Choose an initial value , , and set .
Step 1. If , then stop.
Step 2. Compute , and let
Step 3. Let be the minimal nonnegative integer , such that
Step 4. Let , ; go to Step 1.
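Steps 0–4 above can be sketched numerically. The concrete search direction $d_k = y(x_k) - x_k$, the Armijo-type rule in Step 3, and all parameter values below are modeled on the classical Fukushima-style descent scheme for the quadratically regularized gap function and are our assumptions, not a verbatim transcription of the paper's algorithm; the instance ($K = [0,1]^3$, $F(x) = x - b_0$) is also ours.

```python
import numpy as np

# Descent sketch for the regularized gap function over K = [0,1]^n.
# d_k = y(x_k) - x_k points from x_k toward the maximizer y(x_k); since
# x_k, y(x_k) lie in the convex set K, the update x_k + t*d_k (t in (0,1])
# stays in K, matching the induction in the convergence proof.

def proj_box(x):
    return np.clip(x, 0.0, 1.0)

def maximizer(F, x, alpha):
    return proj_box(x - F(x) / alpha)

def reg_gap(F, x, alpha):
    y = maximizer(F, x, alpha)
    return F(x) @ (x - y) - 0.5 * alpha * np.dot(x - y, x - y)

def descent(F, x0, alpha=1.0, beta=0.5, sigma=1e-4, tol=1e-10, max_iter=500):
    x = x0.copy()
    for _ in range(max_iter):
        y = maximizer(F, x, alpha)
        d = y - x                                  # Step 2: search direction
        if np.linalg.norm(d) < tol:                # Step 1: stopping rule
            break
        m = 0                                      # Step 3: Armijo line search
        while m < 50 and reg_gap(F, x + beta**m * d, alpha) > \
                reg_gap(F, x, alpha) - sigma * beta**m * np.dot(d, d):
            m += 1
        x = x + beta**m * d                        # Step 4: update iterate
    return x

b0 = np.array([0.3, 1.7, -0.4])
x_sol = descent(lambda x: x - b0, np.zeros(3))
# Expected to approach proj_K(b0) = [0.3, 1.0, 0.0]
```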
Proof. To begin, we show that for every positive integer . From Algorithm 15, one obtains that . We prove this result by induction. Assume ; we only need to show that . Since , , and is convex, we have For simplicity, and are replaced by and , respectively. From Lemma 5, one has Since , we only need to show that . Since is the unique minimizer of in , we have Letting in (52), one has From assumption (b), we have This completes the proof.
Now, we are in a position to show the global convergence result for Algorithm 15.
Theorem 17. Let be a sequence generated by Algorithm 15, and let be the cluster point of . Then is a solution of .
Proof. Let be a subsequence which converges to . If , then from Lemma 6, is a solution of GVIP. If , by continuity, one obtains that which implies that Now, we show that the cluster point of is zero. We argue by contradiction. Assume . On the one hand, from Proposition 16, one has that On the other hand, from Proposition 16, we obtain that is monotonically decreasing and bounded; that is, the sequence is convergent. From Step 3 of Algorithm 15, one has Hence, we have ; that is, Without loss of generality, we assume , for all . Then one cannot find the minimal nonnegative integer ; that is, Equivalently, Letting in (58) and using the continuous differentiability of , we obtain Inequalities (56) and (61) contradict each other. This completes the proof.
The authors would like to thank the referees for their helpful suggestions. This work is supported by the National Natural Science Foundation of China (Grant nos. 11071109 and 11371198), the Priority Academic Program Development of Jiangsu Higher Education Institutions, and the Foundation for Innovative Programs of Jiangsu Province (Grant no. CXZZ12_0383).
M. Florian and M. Los, "A new look at static spatial price equilibrium model," Regional Science and Urban Economics, vol. 12, pp. 374–389, 1982.
A. B. Nagurney, "Competitive equilibrium problems, variational inequalities and regional science," Journal of Regional Science, vol. 27, pp. 503–517, 1987.
B. Qu, C. Y. Wang, and J. Z. Zhang, "Convergence and error bound of a method for solving variational inequality problems via the generalized D-gap function," Journal of Optimization Theory and Applications, vol. 119, no. 3, pp. 535–552, 2003.
Y. H. Hu, Gap functions and weak sharpness of solutions for variational inequalities [Ph.D. thesis], Southest Normal University, 2010 (Chinese).