Abstract and Applied Analysis, Volume 2013 (2013), Article ID 942315, 8 pages. http://dx.doi.org/10.1155/2013/942315
Research Article

## Solving the Variational Inequality Problem Defined on Intersection of Finite Level Sets
Songnian He and Caiping Yang

1College of Science, Civil Aviation University of China, Tianjin 300300, China
2Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin 300300, China

Received 3 March 2013; Accepted 14 April 2013

Copyright © 2013 Songnian He and Caiping Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Consider the variational inequality $\operatorname{VI}(C,F)$ of finding a point $x^*\in C$ satisfying the property $\langle Fx^*, x-x^*\rangle\ge 0$ for all $x\in C$, where $C$ is the intersection of finite level sets of convex functions defined on a real Hilbert space $H$ and $F: H\to H$ is an $L$-Lipschitzian and $\eta$-strongly monotone operator. Relaxed and self-adaptive iterative algorithms are devised for computing the unique solution of $\operatorname{VI}(C,F)$. Since our algorithms avoid calculating the projection $P_C$ directly (approximating it instead by computing several sequences of projections onto half-spaces containing the original domain $C$) and need no information about the constants $L$ and $\eta$, they are very easy to implement. To prove strong convergence of our algorithms, a new lemma is established, which can be used as a fundamental tool for solving some nonlinear problems.

#### 1. Introduction

The variational inequality problem $\operatorname{VI}(C,F)$ can mathematically be formulated as the problem of finding a point $x^*\in C$ with the property
$$\langle Fx^*, x-x^*\rangle\ge 0,\quad \forall x\in C, \tag{1}$$
where $H$ is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, $C$ is a nonempty closed convex subset of $H$, and $F: H\to H$ is a nonlinear operator. Since its inception by Stampacchia [1] in 1964, the variational inequality problem has received much attention due to its applications in a large variety of problems arising in structural analysis, economics, optimization, operations research, and the engineering sciences; see [1–23] and the references therein. Using the projection technique, one can easily show that $\operatorname{VI}(C,F)$ is equivalent to a fixed-point problem (see, for example, [15]).

Lemma 1. $x^*$ is a solution of $\operatorname{VI}(C,F)$ if and only if $x^*$ satisfies the fixed-point relation
$$x^*=P_C(I-\lambda F)x^*,$$
where $\lambda>0$ is an arbitrary constant, $P_C$ is the orthogonal projection onto $C$, and $I$ is the identity operator on $H$.

Recall that an operator $F: H\to H$ is called monotone if
$$\langle Fx-Fy, x-y\rangle\ge 0,\quad \forall x,y\in H.$$
Moreover, a monotone operator $F$ is called strictly monotone if the equality "$=$" in the last relation holds only when $x=y$. It is easy to see that (1) has at most one solution if $F$ is strictly monotone.

For variational inequality (1), $F$ is generally assumed to be Lipschitzian and strongly monotone on $H$; that is, for some constants $L>0$ and $\eta>0$, $F$ satisfies the conditions
$$\|Fx-Fy\|\le L\|x-y\|,\qquad \langle Fx-Fy, x-y\rangle\ge\eta\|x-y\|^2,\quad \forall x,y\in H. \tag{4}$$
In this case, $F$ is also called an $L$-Lipschitzian and $\eta$-strongly monotone operator. The following simple result is quite easy to show.
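Conditions (4) can be checked numerically in a small finite-dimensional example of our own (not from the paper): for an affine operator $Fx=Ax+b$ one may take $L=\|A\|_2$ and $\eta=\lambda_{\min}\bigl((A+A^{\top})/2\bigr)$ whenever the latter is positive. A minimal sketch:

```python
import numpy as np

# Affine operator F(x) = A x + b (our illustrative choice of A and b).
A = np.array([[2.0, 1.0], [-1.0, 2.0]])   # symmetric part is 2*I
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b

L = np.linalg.norm(A, 2)                        # Lipschitz constant = ||A||_2
eta = np.linalg.eigvalsh((A + A.T) / 2).min()   # strong monotonicity modulus

# Verify both conditions of (4) at many random pairs of points.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert np.linalg.norm(F(x) - F(y)) <= L * np.linalg.norm(x - y) + 1e-9
    assert np.dot(F(x) - F(y), x - y) >= eta * np.linalg.norm(x - y) ** 2 - 1e-9
```

For this particular $A$ the bounds are tight: $\|A\|_2=\sqrt{5}$ and the symmetric part is $2I$, so $\eta=2$.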

Lemma 2. Assume that $F$ satisfies conditions (4), and let $\mu$ and $\lambda$ be constants such that $0<\mu<2\eta/L^2$ and $0<\lambda<1$, respectively. Let $T:=I-\mu F$ (or $P_C(I-\mu F)$) and $T_\lambda:=I-\lambda\mu F$ (or $P_C(I-\lambda\mu F)$). Then $T$ and $T_\lambda$ are all contractions with coefficients $1-\tau$ and $1-\lambda\tau$, respectively, where $\tau:=1-\sqrt{1-\mu(2\eta-\mu L^2)}$.

Using Banach's contraction mapping principle, the following well-known result can be obtained easily from Lemmas 1 and 2.

Theorem 3. Assume that $F$ satisfies conditions (4). Then $\operatorname{VI}(C,F)$ has a unique solution. Moreover, for any $\mu\in(0,2\eta/L^2)$, the sequence $\{x_n\}$ with initial guess $x_0\in H$ defined recursively by
$$x_{n+1}=P_C(x_n-\mu Fx_n),\quad n\ge 0, \tag{5}$$
converges strongly to the unique solution of $\operatorname{VI}(C,F)$.
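As a toy illustration of Theorem 3 (our own example, not the paper's), take $H=\mathbb{R}^2$, $C$ the closed unit ball (whose projection has a closed form), and $Fx=Ax-b$ with a symmetric positive definite $A$:

```python
import numpy as np

# Illustrative instance of iteration (5): C = closed unit ball in R^2,
# F(x) = A x - b, so here L = 3 and eta = 1.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 0.0])
F = lambda x: A @ x - b
mu = 1.0 / 9.0                     # any mu in (0, 2*eta/L^2) = (0, 2/9) works

P_C = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto unit ball

x = np.zeros(2)
for _ in range(500):
    x = P_C(x - mu * F(x))
# The unconstrained minimizer A^{-1} b = (4/3, 0) lies outside C, so the VI
# solution sits on the boundary; a KKT check gives x* = (1, 0).
```

With $\mu\in(0,2\eta/L^2)$ the map $x\mapsto P_C(x-\mu Fx)$ is a contraction (Lemma 2), which is what the geometric convergence of the loop reflects.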

However, Algorithm (5) has two evident weaknesses. On the one hand, Algorithm (5) involves calculating the projection $P_C$, while the computation of a projection onto a general closed convex subset is difficult. If $C$ is the intersection of finitely many closed convex subsets of $H$, that is, $C=\bigcap_{i=1}^N C_i$, where each $C_i$ is a closed convex subset of $H$, then the computation of $P_C$ is much more difficult. On the other hand, the determination of the stepsize $\mu$ depends on the constants $L$ and $\eta$. This means that in order to implement Algorithm (5), one first has to compute (or estimate) the constants $L$ and $\eta$, which is sometimes not an easy task in practice.

In order to overcome the above weaknesses of Algorithm (5), new relaxed and self-adaptive algorithms are proposed in this paper to solve $\operatorname{VI}(C,F)$, where $C$ is the intersection of finite level sets of convex functions defined on $H$ and $F$ is an $L$-Lipschitzian and $\eta$-strongly monotone operator. Our method approximates $P_C$ by computing finite sequences of projections onto half-spaces containing the original set $C$ and selects the stepsizes in a self-adaptive way. The implementation of our algorithms thus avoids computing $P_C$ directly and needs no information about $L$ and $\eta$.

The rest of this paper is organized as follows. Some useful lemmas are collected in the next section; in particular, a new lemma is established in order to prove the strong convergence theorems for our algorithms, and it can also be used as a fundamental tool for solving some nonlinear problems related to fixed points. In the last section, a relaxed algorithm (for the case where $L$ and $\eta$ are known) and a relaxed self-adaptive algorithm (for the case where $L$ and $\eta$ are unknown) are proposed, and the strong convergence theorems for these algorithms are proved.

#### 2. Preliminaries

Throughout the rest of this paper, we denote by $H$ a real Hilbert space and by $I$ the identity operator on $H$. If $f: H\to\mathbb{R}$ is a differentiable functional, then we denote by $\nabla f$ the gradient of $f$. We will also use the following notations:
(i) $\to$ denotes strong convergence.
(ii) $\rightharpoonup$ denotes weak convergence.
(iii) $\omega_w(x_n):=\{x\mid \exists\,\{x_{n_j}\}\subset\{x_n\}$ such that $x_{n_j}\rightharpoonup x\}$ denotes the weak $\omega$-limit set of $\{x_n\}$.

Recall a trivial inequality, which is well known and in common use.

Lemma 4. For all $x,y\in H$, there holds the relation
$$\|x+y\|^2\le\|x\|^2+2\langle y, x+y\rangle.$$
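Lemma 4 is the expansion $\|x+y\|^2=\|x\|^2+2\langle x,y\rangle+\|y\|^2$ with the nonnegative term $\|y\|^2$ dropped on the right-hand side (after regrouping). A quick numeric spot-check of our own:

```python
import numpy as np

# Spot-check of ||x + y||^2 <= ||x||^2 + 2<y, x + y> at random points.
rng = np.random.default_rng(4)
ok = True
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = np.linalg.norm(x + y) ** 2
    rhs = np.linalg.norm(x) ** 2 + 2 * np.dot(y, x + y)
    ok = ok and lhs <= rhs + 1e-9   # rhs - lhs equals ||y||^2 >= 0
```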

Recall that a mapping $T: H\to H$ is said to be nonexpansive if
$$\|Tx-Ty\|\le\|x-y\|,\quad \forall x,y\in H;$$
$T$ is said to be firmly nonexpansive if, for all $x,y\in H$,
$$\|Tx-Ty\|^2\le\langle Tx-Ty, x-y\rangle.$$

The following are characterizations of firmly nonexpansive mappings (see [7] or [24]).

Lemma 5. Let $T: H\to H$ be an operator. The following statements are equivalent. (i) $T$ is firmly nonexpansive. (ii) $I-T$ is firmly nonexpansive. (iii) $\|Tx-Ty\|^2\le\|x-y\|^2-\|(I-T)x-(I-T)y\|^2$, $\forall x,y\in H$.

We know that the orthogonal projection $P_C$ from $H$ onto a nonempty closed convex subset $C\subset H$ is a typical example of a firmly nonexpansive mapping [7]; it is defined by
$$P_Cx:=\operatorname*{arg\,min}_{y\in C}\|x-y\|^2.$$
It is well known that $P_C$ is characterized [7] by the inequality (for $x\in H$)
$$z=P_Cx\iff\langle x-z, y-z\rangle\le 0,\quad \forall y\in C.$$
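For a half-space $H_{a,\beta}=\{x\mid\langle a,x\rangle\le\beta\}$ (the building block used later in this paper), the projection has the closed form $P(x)=x-\max(0,\langle a,x\rangle-\beta)\|a\|^{-2}a$. The following sketch (our example) verifies the characterizing inequality numerically:

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])
beta = 1.0

def P_H(x):
    # Closed-form projection onto the half-space {x : <a, x> <= beta}.
    return x - max(0.0, a @ x - beta) / (a @ a) * a

rng = np.random.default_rng(1)
for _ in range(500):
    x = 5 * rng.standard_normal(3)
    z = P_H(x)
    assert a @ z <= beta + 1e-9          # P_H(x) lies in the half-space
    y = 5 * rng.standard_normal(3)
    if a @ y <= beta:                    # characterization: test y in H only
        assert (x - z) @ (y - z) <= 1e-9
```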

The following lemma [25] is often used in analyzing the strong convergence of algorithms for nonlinear problems, such as fixed-point problems of nonlinear mappings, variational inequalities, and split feasibility problems. In fact, this lemma has been regarded as a fundamental tool for solving some nonlinear problems related to fixed points.

Lemma 6 (see [25]). Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\delta_n,\quad n\ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that (i) $\sum_{n=0}^{\infty}\gamma_n=\infty$; (ii) $\limsup_{n\to\infty}\delta_n\le 0$ or $\sum_{n=0}^{\infty}|\gamma_n\delta_n|<\infty$.
Then $\lim_{n\to\infty}a_n=0$.
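A numeric illustration of Lemma 6 (our example): with $\gamma_n=1/(n+2)$, so that $\sum_n\gamma_n=\infty$, and $\delta_n=1/(n+1)\to 0$, the recursion taken with equality drives $a_n$ to $0$:

```python
# Lemma 6 with gamma_n = 1/(n+2) and delta_n = 1/(n+1): a_n -> 0.
a = 1.0
for n in range(200000):
    gamma = 1.0 / (n + 2)
    delta = 1.0 / (n + 1)
    a = (1 - gamma) * a + gamma * delta
```

For this choice one can solve the recursion exactly, $a_n=(1+H_n)/(n+1)$ with $H_n$ the harmonic number, so the decay is of order $\log n/n$: slow, but certain.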

In this paper, inspired and encouraged by an idea in [26], we obtain the following lemma. Its key role in the proofs of our main results will be illustrated in the next section, which may show that this lemma is likely to become a new fundamental tool for solving some nonlinear problems related to fixed points.

Lemma 7. Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\delta_n+\beta_n,\quad n\ge 0, \tag{12}$$
$$a_{n+1}\le a_n-\eta_n+\alpha_n,\quad n\ge 0, \tag{13}$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers, and $\{\delta_n\}$, $\{\alpha_n\}$, and $\{\beta_n\}$ are three sequences in $\mathbb{R}$ such that (i) $\sum_{n=0}^{\infty}\gamma_n=\infty$; (ii) $\lim_{n\to\infty}\alpha_n=0$; (iii) $\lim_{k\to\infty}\eta_{n_k}=0$ implies $\limsup_{k\to\infty}\delta_{n_k}\le 0$ for any subsequence $\{n_k\}\subset\{n\}$; (iv) $\lim_{n\to\infty}\beta_n/\gamma_n=0$.
Then $\lim_{n\to\infty}a_n=0$.

Proof. Following and generalizing an idea in [26], we distinguish two cases to prove that $a_n\to 0$ as $n\to\infty$.
Case 1. $\{a_n\}$ is eventually decreasing (i.e., there exists $N\ge 0$ such that $a_{n+1}\le a_n$ holds for all $n\ge N$). In this case, $\{a_n\}$ must be convergent, and from (13) it follows that
$$0\le\eta_n\le a_n-a_{n+1}+\alpha_n. \tag{14}$$
Noting condition (ii) and letting $n\to\infty$ in (14) yields $\eta_n\to 0$ as $n\to\infty$. Using condition (iii), we get that $\limsup_{n\to\infty}\delta_n\le 0$. Noting this together with conditions (i) and (iv), we obtain $a_n\to 0$ by applying Lemma 6 to (12), rewritten as $a_{n+1}\le(1-\gamma_n)a_n+\gamma_n(\delta_n+\beta_n/\gamma_n)$.
Case 2. $\{a_n\}$ is not eventually decreasing. Hence, we can find an integer $n_0$ such that $a_{n_0}\le a_{n_0+1}$. Let us now define
$$J_{n_0}:=\{n\mid n\ge n_0,\ a_n\le a_{n+1}\}.$$
Obviously, $J_{n_0}$ is nonempty and satisfies $n_0\in J_{n_0}$. Let
$$\tau(n):=\max\{k\in J_{n_0}\mid k\le n\},\quad n\ge n_0.$$
It is clear that $\tau(n)\to\infty$ as $n\to\infty$ (otherwise, $\{a_n\}$ is eventually decreasing). It is also clear that $a_{\tau(n)}\le a_{\tau(n)+1}$ for all $n\ge n_0$. Moreover,
$$a_n\le a_{\tau(n)+1},\quad n\ge n_0. \tag{17}$$
In fact, if $\tau(n)=n$, then inequality (17) is trivial; if $\tau(n)=n-1$, then $a_{\tau(n)+1}=a_n$, and (17) is also trivial. If $\tau(n)\le n-2$, then every integer $k$ with $\tau(n)+1\le k\le n-1$ satisfies $k\notin J_{n_0}$. Thus we deduce from the definition of $J_{n_0}$ that $a_{\tau(n)+1}>a_{\tau(n)+2}>\cdots>a_n$, and inequality (17) holds again. Since $a_{\tau(n)}\le a_{\tau(n)+1}$ for all $n\ge n_0$, it follows from (14) that
$$0\le\eta_{\tau(n)}\le a_{\tau(n)}-a_{\tau(n)+1}+\alpha_{\tau(n)}\le\alpha_{\tau(n)},$$
so that $\eta_{\tau(n)}\to 0$ as $n\to\infty$ using condition (ii). Due to condition (iii), this implies that
$$\limsup_{n\to\infty}\delta_{\tau(n)}\le 0. \tag{20}$$
Noting $a_{\tau(n)}\le a_{\tau(n)+1}$ for all $n\ge n_0$ again, it follows from (12) that
$$a_{\tau(n)}\le\delta_{\tau(n)}+\frac{\beta_{\tau(n)}}{\gamma_{\tau(n)}}. \tag{21}$$
Combining (20), (21), and condition (iv) yields
$$\limsup_{n\to\infty}a_{\tau(n)}\le 0,$$
and hence $a_{\tau(n)}\to 0$ as $n\to\infty$. This together with (13) implies that $a_{\tau(n)+1}\to 0$ as $n\to\infty$, which together with (17), in turn, implies that $a_n\to 0$ as $n\to\infty$.

The following result is just the special case of Lemma 7 in which $\beta_n=0$ for all $n\ge 0$.

Lemma 8. Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\delta_n,\quad n\ge 0,$$
$$a_{n+1}\le a_n-\eta_n+\alpha_n,\quad n\ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers, and $\{\delta_n\}$ and $\{\alpha_n\}$ are two sequences in $\mathbb{R}$ such that (i) $\sum_{n=0}^{\infty}\gamma_n=\infty$; (ii) $\lim_{n\to\infty}\alpha_n=0$; (iii) $\lim_{k\to\infty}\eta_{n_k}=0$ implies $\limsup_{k\to\infty}\delta_{n_k}\le 0$ for any subsequence $\{n_k\}\subset\{n\}$.
Then $\lim_{n\to\infty}a_n=0$.

Recall that a function $f: H\to\mathbb{R}$ is called convex if
$$f(\lambda x+(1-\lambda)y)\le\lambda f(x)+(1-\lambda)f(y),\quad \forall\lambda\in(0,1),\ \forall x,y\in H.$$
A differentiable function $f$ is convex if and only if there holds the relation
$$f(z)\ge f(x)+\langle\nabla f(x), z-x\rangle,\quad \forall x,z\in H.$$
Recall that an element $g\in H$ is said to be a subgradient of $f: H\to\mathbb{R}$ at $x$ if
$$f(z)\ge f(x)+\langle g, z-x\rangle,\quad \forall z\in H.$$

A function $f: H\to\mathbb{R}$ is said to be subdifferentiable at $x$ if it has at least one subgradient at $x$. The set of subgradients of $f$ at the point $x$ is called the subdifferential of $f$ at $x$ and is denoted by $\partial f(x)$. The last relation above is called the subdifferential inequality of $f$ at $x$. A function $f$ is called subdifferentiable if it is subdifferentiable at all $x\in H$. If a function $f$ is differentiable and convex, then its gradient and subgradient coincide.
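A concrete finite-dimensional example of our own: for the nondifferentiable convex function $c(x)=\|x\|_1$, the vector with entries $\operatorname{sign}(x_i)$ is one valid subgradient at every $x$ (at coordinates where $x_i=0$ any value in $[-1,1]$, in particular $0$, is allowed), and the subdifferential inequality can be checked directly:

```python
import numpy as np

def c(x):
    return np.abs(x).sum()      # c(x) = ||x||_1, convex but not differentiable

def subgrad(x):
    return np.sign(x)           # one valid subgradient at every x

# Check the subdifferential inequality c(y) >= c(x) + <g, y - x>.
rng = np.random.default_rng(2)
for _ in range(500):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    assert c(y) >= c(x) + subgrad(x) @ (y - x) - 1e-9
```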

Recall that a function $f: H\to\mathbb{R}$ is said to be weakly lower semicontinuous at $x$ if $x_n\rightharpoonup x$ implies
$$f(x)\le\liminf_{n\to\infty}f(x_n).$$

#### 3. Iterative Algorithms

In this section, we consider iterative algorithms for solving a particular kind of variational inequality (1) in which the closed convex subset $C$ has the particular structure of the intersection of finite level sets of convex functions:
$$C=\bigcap_{i=1}^{N}\{x\in H\mid c_i(x)\le 0\},$$
where $N$ is a positive integer and each $c_i: H\to\mathbb{R}$ is a convex function. We always assume that each $c_i$ is subdifferentiable on $H$ and that $\partial c_i$ is a bounded operator (i.e., bounded on bounded sets). It is worth noting that every convex function defined on a finite-dimensional Hilbert space is subdifferentiable and its subdifferential operator is a bounded operator (see [27, Corollary 7.9]). We also assume that $F$ is an $L$-Lipschitzian and $\eta$-strongly monotone operator. It is well known that in this case $\operatorname{VI}(C,F)$ has a unique solution, which is henceforth denoted by $x^*$.

Without loss of generality, we will consider only the case $N=2$; that is, $C=C_1\cap C_2$, where
$$C_1=\{x\in H\mid c_1(x)\le 0\},\qquad C_2=\{x\in H\mid c_2(x)\le 0\}. \tag{30}$$
All of our results can be extended easily to the general case.

The computation of a projection onto a general closed convex subset is difficult. To overcome this difficulty, Fukushima [21] suggested a way to calculate the projection onto a level set of a convex function by computing a sequence of projections onto half-spaces containing the original level set. This idea was followed by Yang [28] and by López et al. [29], who introduced relaxed algorithms for solving the split feasibility problem in finite-dimensional and infinite-dimensional Hilbert spaces, respectively. It was also used by Censor et al. [30] in the subgradient extragradient method for solving variational inequalities in a Hilbert space.
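Fukushima's device in miniature (a sketch of our own, with $c(x)=\|x\|^2-1$, so the level set is the closed unit ball): the half-space $H_n=\{x\mid c(x_n)+\langle\xi_n,x-x_n\rangle\le 0\}$ built from a subgradient $\xi_n\in\partial c(x_n)$ always contains $C=\{x\mid c(x)\le 0\}$ by the subdifferential inequality, and projecting onto $H_n$ is explicit. Iterating the half-space projection from an infeasible point drives it into $C$:

```python
import numpy as np

# Level set C = {x : c(x) <= 0} with c(x) = ||x||^2 - 1 (the closed unit ball).
c = lambda x: x @ x - 1.0
grad_c = lambda x: 2.0 * x     # c is differentiable, so the subgradient is unique

def P_halfspace(x, at):
    """Project x onto H = {y : c(at) + <grad c(at), y - at> <= 0} (contains C)."""
    xi = grad_c(at)
    viol = c(at) + xi @ (x - at)
    if viol <= 0.0 or not xi.any():
        return x
    return x - viol / (xi @ xi) * xi

# C is contained in the half-space built at any anchor point.
anchor = np.array([2.0, 0.0])
rng = np.random.default_rng(3)
contained = all(
    c(anchor) + grad_c(anchor) @ (z - anchor) <= 1e-9
    for z in rng.standard_normal((200, 2))
    if c(z) <= 0.0
)

# Subgradient projections x_{k+1} = P_{H_k}(x_k) reach C rapidly here.
x = np.array([3.0, 0.0])
for _ in range(10):
    x = P_halfspace(x, x)
```

For this particular $c$ the iteration reduces to $r_{k+1}=(r_k^2+1)/(2r_k)$ in the radius, i.e., Newton's method for $r^2=1$, which explains the very fast approach to the boundary of $C$.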

We are now in a position to introduce a relaxed algorithm for computing the unique solution $x^*$ of $\operatorname{VI}(C,F)$, where $C=C_1\cap C_2$ is given as in (30). This scheme applies to the case where $L$ and $\eta$ are easy to determine.

Algorithm 1. Choose an arbitrary initial guess $x_0\in H$. The sequence $\{x_n\}$ is constructed via the formula
$$x_{n+1}=P_{C_n^2}P_{C_n^1}(I-\lambda_n\mu F)x_n,\quad n\ge 0, \tag{31}$$
where
$$C_n^1=\{x\in H\mid c_1(x_n)+\langle\xi_n^1, x-x_n\rangle\le 0\},\qquad C_n^2=\{x\in H\mid c_2(x_n)+\langle\xi_n^2, x-x_n\rangle\le 0\}, \tag{32}$$
where $\xi_n^1\in\partial c_1(x_n)$, $\xi_n^2\in\partial c_2(x_n)$, the sequence $\{\lambda_n\}$ is in $(0,1)$, and $\mu$ is a constant such that $0<\mu<2\eta/L^2$.
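A toy run may help fix ideas. The following is entirely our own construction and only assumes that the update composes a damped $F$-step with a projection onto the current half-space (a sketch of the general relaxation pattern, not necessarily Algorithm 1 verbatim): one constraint $c_1(x)=\|x\|^2-1$, $Fx=x-b$ with $b=(2,0)$ (so $L=\eta=1$), $\mu=1\in(0,2\eta/L^2)$, and $\lambda_n=1/(n+1)$:

```python
import numpy as np

# Toy relaxed iteration: x_{n+1} = P_{H_n}(x_n - lambda_n * mu * F(x_n)),
# where H_n is the half-space built from c_1 at x_n and contains C.
c1 = lambda x: x @ x - 1.0       # level set: the closed unit ball
grad_c1 = lambda x: 2.0 * x
b = np.array([2.0, 0.0])
F = lambda x: x - b              # 1-Lipschitzian and 1-strongly monotone
mu = 1.0                         # in (0, 2*eta/L^2) = (0, 2)

def P_H(x, at):
    xi = grad_c1(at)
    viol = c1(at) + xi @ (x - at)
    if viol <= 0.0 or not xi.any():
        return x
    return x - viol / (xi @ xi) * xi

x = np.zeros(2)
for n in range(30):
    lam = 1.0 / (n + 1)
    x = P_H(x - lam * mu * F(x), x)
# x approaches the unique solution x* = P_C(b) = (1, 0).
```

Note that no projection onto the ball itself is ever computed: only the explicit half-space projections of (32) are used, which is the whole point of the relaxation.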

We now analyze the strong convergence of Algorithm 1; the analysis also illustrates the application of Lemma 7 (or Lemma 8).

Theorem 9. Assume that $\lambda_n\to 0$ $(n\to\infty)$ and $\sum_{n=0}^{\infty}\lambda_n=\infty$. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to the unique solution $x^*$ of $\operatorname{VI}(C,F)$.

Proof. Firstly, we verify that $\{x_n\}$ is bounded. Indeed, it is easy to see from the subdifferential inequality and the definitions of $C_n^1$ and $C_n^2$ that $C_1\subset C_n^1$ and $C_2\subset C_n^2$ hold for all $n\ge 0$, and hence it follows that $C\subset C_n^1\cap C_n^2$. Set $z_n:=(I-\lambda_n\mu F)x_n$ and $y_n:=P_{C_n^1}z_n$, so that $x_{n+1}=P_{C_n^2}y_n$. Since the projection operators $P_{C_n^1}$ and $P_{C_n^2}$ are nonexpansive, we obtain from (31) and Lemmas 2 and 4 that
$$\|x_{n+1}-x^*\|^2\le\|z_n-x^*\|^2\le(1-\lambda_n\tau)\|x_n-x^*\|^2+2\lambda_n\mu\langle Fx^*, x^*-z_n\rangle, \tag{33}$$
where $\tau:=1-\sqrt{1-\mu(2\eta-\mu L^2)}$.
Consequently,
$$\|x_{n+1}-x^*\|\le(1-\lambda_n\tau)\|x_n-x^*\|+\lambda_n\tau\cdot\frac{\mu\|Fx^*\|}{\tau}.$$
It turns out inductively that
$$\|x_n-x^*\|\le\max\left\{\|x_0-x^*\|,\ \frac{\mu\|Fx^*\|}{\tau}\right\},\quad n\ge 0,$$
and this means that $\{x_n\}$ is bounded. Obviously, $\{Fx_n\}$ is also bounded.
Secondly, since a projection is firmly nonexpansive, we obtain from Lemma 5 that
$$\|x_{n+1}-x^*\|^2\le\|y_n-x^*\|^2-\|y_n-x_{n+1}\|^2\le\|z_n-x^*\|^2-\|z_n-y_n\|^2-\|y_n-x_{n+1}\|^2; \tag{38}$$
thus we also have
$$\|z_n-x^*\|^2\le\|x_n-x^*\|^2+\lambda_nM, \tag{39}$$
where $M$ is a positive constant such that $M\ge\sup_{n\ge 0}\mu\|Fx_n\|\bigl(2\|x_n-x^*\|+\mu\|Fx_n\|\bigr)$. The combination of (38) and (39) leads to
$$\|x_{n+1}-x^*\|^2\le\|x_n-x^*\|^2-\|z_n-y_n\|^2-\|y_n-x_{n+1}\|^2+\lambda_nM. \tag{40}$$
Setting
$$a_n:=\|x_n-x^*\|^2,\quad\gamma_n:=\lambda_n\tau,\quad\delta_n:=\frac{2\mu}{\tau}\langle Fx^*, x^*-z_n\rangle,\quad\eta_n:=\|z_n-y_n\|^2+\|y_n-x_{n+1}\|^2,\quad\alpha_n:=\lambda_nM,$$
then (33) and (40) can be rewritten in the following forms, respectively:
$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\delta_n,\qquad a_{n+1}\le a_n-\eta_n+\alpha_n.$$
Finally, observing that the conditions $\lambda_n\to 0$ and $\sum_{n=0}^{\infty}\lambda_n=\infty$ imply $\alpha_n\to 0$ and $\sum_{n=0}^{\infty}\gamma_n=\infty$, respectively, in order to complete the proof using Lemma 7 (or Lemma 8), it suffices to verify that $\lim_{k\to\infty}\eta_{n_k}=0$ implies $\limsup_{k\to\infty}\delta_{n_k}\le 0$ for any subsequence $\{n_k\}\subset\{n\}$. In fact, if $\eta_{n_k}\to 0$ as $k\to\infty$, then $\|z_{n_k}-y_{n_k}\|\to 0$ and $\|y_{n_k}-x_{n_k+1}\|\to 0$ hold. Since $\partial c_1$ and $\partial c_2$ are bounded on bounded sets, there are two positive constants $M_1$ and $M_2$ such that $\|\xi_n^1\|\le M_1$ and $\|\xi_n^2\|\le M_2$ for all $n\ge 0$ (noting that $\{z_n\}$ and $\{y_n\}$ are also bounded due to the fact that $\{x_n\}$ and $\{Fx_n\}$ are bounded). From (32) and the trivial fact that $y_n\in C_n^1$ and $x_{n+1}\in C_n^2$, it follows that
$$c_1(x_{n_k})\le\langle\xi_{n_k}^1, x_{n_k}-y_{n_k}\rangle\le M_1\bigl(\lambda_{n_k}\mu\|Fx_{n_k}\|+\|z_{n_k}-y_{n_k}\|\bigr), \tag{45}$$
$$c_2(x_{n_k})\le\langle\xi_{n_k}^2, x_{n_k}-x_{n_k+1}\rangle\le M_2\bigl(\lambda_{n_k}\mu\|Fx_{n_k}\|+\|z_{n_k}-y_{n_k}\|+\|y_{n_k}-x_{n_k+1}\|\bigr). \tag{46}$$
Now take any $\hat{x}\in\omega_w(x_{n_k})$ and a subsequence converging weakly to $\hat{x}$, which, without loss of generality, we still denote by $\{x_{n_k}\}$. Then the weak lower semicontinuity of $c_1$ and (45) imply that
$$c_1(\hat{x})\le\liminf_{k\to\infty}c_1(x_{n_k})\le 0.$$
This means that $\hat{x}\in C_1$ holds. On the other hand, from the weak lower semicontinuity of $c_2$ and (46), we have $c_2(\hat{x})\le\liminf_{k\to\infty}c_2(x_{n_k})\le 0$. This, in turn, implies that $\hat{x}\in C_2$. Moreover, we obtain that $\hat{x}\in C_1\cap C_2$ and hence $\omega_w(x_{n_k})\subset C$.
Noting that $x^*$ is the unique solution of $\operatorname{VI}(C,F)$, it turns out that
$$\limsup_{k\to\infty}\langle Fx^*, x^*-x_{n_k}\rangle=\max_{\hat{x}\in\omega_w(x_{n_k})}\langle Fx^*, x^*-\hat{x}\rangle\le 0.$$
Since $\lambda_n\to 0$ and $\{Fx_n\}$ is bounded (so that $z_{n_k}-x_{n_k}\to 0$), it is easy to see that $\limsup_{k\to\infty}\delta_{n_k}\le 0$.

Observe that in Algorithm 1 the determination of the stepsize $\mu$ still depends on the constants $L$ and $\eta$; this means that in order to implement Algorithm 1, one first has to estimate the constants $L$ and $\eta$, which is sometimes not an easy task in practice.

To overcome this difficulty, we furthermore introduce a so-called relaxed self-adaptive algorithm, that is, a modification of Algorithm 1 in which the stepsize is selected in a self-adaptive way that has no connection with the constants $L$ and $\eta$.

Algorithm 2. Choose an arbitrary initial guess $x_0\in H$ and an arbitrary element $x_1\in H$ such that $x_1\ne x_0$. Assume that the $n$th iterate $x_n$ $(n\ge 1)$ has been constructed. Continue and calculate the $(n+1)$th iterate $x_{n+1}$ via the formula
$$x_{n+1}=P_{C_n^2}P_{C_n^1}(x_n-\lambda_n\beta_nFx_n),\quad n\ge 1, \tag{50}$$
where $C_n^1$ and $C_n^2$ are given as in (32), the sequence $\{\lambda_n\}$ is in $(0,1)$, and the sequence $\{\beta_n\}$ is determined via the following relation:
$$\beta_1:=\frac{\langle x_1-x_0, Fx_1-Fx_0\rangle}{\|Fx_1-Fx_0\|^2},\qquad \beta_n:=\begin{cases}\min\left\{\beta_{n-1},\ \dfrac{\langle x_n-x_{n-1}, Fx_n-Fx_{n-1}\rangle}{\|Fx_n-Fx_{n-1}\|^2}\right\}, & x_n\ne x_{n-1},\\[2mm] \beta_{n-1}, & x_n=x_{n-1}.\end{cases} \tag{51}$$

Firstly, we show that the sequence $\{\beta_n\}$ is well defined. Noting the strong monotonicity of $F$, $x_1\ne x_0$ implies that $Fx_1\ne Fx_0$, and $\beta_1$ is well defined via the first formula of (51). Consequently, $\beta_n$ is well defined inductively according to (51), and thus the sequence $\{x_n\}$ is also well defined.

Next, we estimate $\beta_n$ roughly. If $x_n\ne x_{n-1}$ (so that $Fx_n\ne Fx_{n-1}$), set
$$t_n:=\frac{\langle x_n-x_{n-1}, Fx_n-Fx_{n-1}\rangle}{\|Fx_n-Fx_{n-1}\|^2}.$$
Obviously, it turns out that
$$t_n\ge\frac{\eta\|x_n-x_{n-1}\|^2}{L^2\|x_n-x_{n-1}\|^2}=\frac{\eta}{L^2},\qquad t_n\le\frac{\|x_n-x_{n-1}\|}{\|Fx_n-Fx_{n-1}\|}\le\frac{1}{\eta}.$$
Consequently
$$\frac{\eta}{L^2}\le t_n\le\frac{1}{\eta}.$$
By the definition of $\beta_n$, we can assert that
$$\frac{\eta}{L^2}\le\beta_n\le\frac{1}{\eta} \tag{54}$$
holds for all $n\ge 1$.

Lemma 7 (or Lemma 8) is also important for the proof of the strong convergence of Algorithm 2.

Theorem 10. Assume that $\lambda_n\to 0$ $(n\to\infty)$ and $\sum_{n=0}^{\infty}\lambda_n=\infty$. Then the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to the unique solution $x^*$ of $\operatorname{VI}(C,F)$.

Proof. Setting $\mu_n:=\lambda_n\beta_n$ and $\tau_n:=1-\sqrt{1-\mu_n(2\eta-\mu_nL^2)}$, we conclude, observing $\lambda_n\to 0$ and (54), that there exists some positive integer $N_0$ such that
$$0<\mu_n\le\frac{\eta}{L^2},\quad n\ge N_0, \tag{55}$$
and consequently
$$\tau_n\ge\frac{\mu_n\eta}{2}\ge\frac{\eta^2}{2L^2}\lambda_n,\quad n\ge N_0. \tag{56}$$
Using Lemma 2, we have from (55) that $I-\mu_nF$ (and so $P_{C_n^2}P_{C_n^1}(I-\mu_nF)$) is a contraction with coefficient $1-\tau_n$ for every $n\ge N_0$. This concludes that, for all $n\ge N_0$,
$$\|x_{n+1}-x^*\|^2\le(1-\tau_n)\|x_n-x^*\|^2+2\mu_n\langle Fx^*, x^*-(I-\mu_nF)x_n\rangle \tag{57}$$
and hence, by (56),
$$\|x_{n+1}-x^*\|\le(1-\tau_n)\|x_n-x^*\|+\tau_n\cdot\frac{2\|Fx^*\|}{\eta}.$$
It turns out inductively that
$$\|x_n-x^*\|\le\max\left\{\|x_{N_0}-x^*\|,\ \frac{2\|Fx^*\|}{\eta}\right\},\quad n\ge N_0,$$
and this means that $\{x_n\}$ is bounded, so is $\{Fx_n\}$.
By an argument similar to that used in getting (38)–(40), we have
$$\|x_{n+1}-x^*\|^2\le\|x_n-x^*\|^2-\|z_n-y_n\|^2-\|y_n-x_{n+1}\|^2+\lambda_nM', \tag{62}$$
where $z_n:=(I-\mu_nF)x_n$, $y_n:=P_{C_n^1}z_n$, and $M'$ is a positive constant. Setting
$$a_n:=\|x_n-x^*\|^2,\quad\gamma_n:=\tau_n,\quad\delta_n:=\frac{2\mu_n}{\tau_n}\langle Fx^*, x^*-z_n\rangle,\quad\eta_n:=\|z_n-y_n\|^2+\|y_n-x_{n+1}\|^2,\quad\alpha_n:=\lambda_nM',$$
then (57) and (62) can be rewritten in the following forms, respectively:
$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\delta_n,\qquad a_{n+1}\le a_n-\eta_n+\alpha_n.$$
Clearly, $\lambda_n\to 0$ and $\sum_{n=0}^{\infty}\lambda_n=\infty$, together with (54) and (56), imply that $\alpha_n\to 0$ and $\sum_{n=0}^{\infty}\gamma_n=\infty$, respectively.
By an argument very similar to that in the proof of Theorem 9, it is not difficult to verify that $\lim_{k\to\infty}\eta_{n_k}=0$ implies $\limsup_{k\to\infty}\delta_{n_k}\le 0$ for any subsequence $\{n_k\}\subset\{n\}$. Thus we can complete the proof by using Lemma 7 (or Lemma 8).

#### Acknowledgments

This work was supported by National Natural Science Foundation of China (Grant no. 11201476) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.

#### References

1. G. Stampacchia, “Formes bilinéaires coercitives sur les ensembles convexes,” Comptes Rendus de l'Académie des Sciences, vol. 258, pp. 4413–4416, 1964.
2. C. Baiocchi and A. Capelo, Variational and Quasivariational Inequalities, John Wiley & Sons, New York, NY, USA, 1984.
3. A. Bnouhachem, “A self-adaptive method for solving general mixed variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 309, no. 1, pp. 136–150, 2005.
4. H. Brezis, Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert, North-Holland, Amsterdam, The Netherlands, 1973.
5. R. W. Cottle, F. Giannessi, and J. L. Lions, Variational Inequalities and Complementarity Problems: Theory and Application, John Wiley & Sons, New York, NY, USA, 1980.
6. M. Fukushima, “Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems,” Mathematical Programming A, vol. 53, no. 1, pp. 99–110, 1992.
7. K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, vol. 83, Marcel Dekker, New York, NY, USA, 1984.
8. F. Giannessi, A. Maugeri, and P. M. Pardalos, Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models, Kluwer Academic, Dordrecht, The Netherlands, 2001.
9. R. Glowinski, J. L. Lions, and R. Trémolières, Numerical Analysis of Variational Inequalities, vol. 8, North-Holland, Amsterdam, The Netherlands, 1981.
10. P. T. Harker and J. S. Pang, “Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications,” Mathematical Programming B, vol. 48, no. 2, pp. 161–220, 1990.
11. B. S. He, “A class of implicit methods for monotone variational inequalities,” Reports of the Institute of Mathematics 95-1, Nanjing University, Nanjing, China, 1995.
12. B. S. He and L. Z. Liao, “Improvements of some projection methods for monotone nonlinear variational inequalities,” Journal of Optimization Theory and Applications, vol. 112, no. 1, pp. 111–128, 2002.
13. B. S. He, Z. H. Yang, and X. M. Yuan, “An approximate proximal-extragradient type method for monotone variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 300, no. 2, pp. 362–374, 2004.
14. S. He and H. K. Xu, “Variational inequalities governed by boundedly Lipschitzian and strongly monotone operators,” Fixed Point Theory, vol. 10, no. 2, pp. 245–258, 2009.
15. D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and their Applications, SIAM, Philadelphia, Pa, USA, 2000.
16. J. L. Lions and G. Stampacchia, “Variational inequalities,” Communications on Pure and Applied Mathematics, vol. 20, pp. 493–519, 1967.
17. H. K. Xu and T. H. Kim, “Convergence of hybrid steepest-descent methods for variational inequalities,” Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 185–201, 2003.
18. H. K. Xu, “Viscosity approximation methods for nonexpansive mappings,” Journal of Mathematical Analysis and Applications, vol. 298, no. 1, pp. 279–291, 2004.
19. H. Yang and M. G. H. Bell, “Traffic restraint, road pricing and network equilibrium,” Transportation Research B, vol. 31, no. 4, pp. 303–314, 1997.
20. I. Yamada, “The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings,” in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, D. Butnariu, Y. Censor, and S. Reich, Eds., vol. 8, pp. 473–504, North-Holland, Amsterdam, The Netherlands, 2001.
21. M. Fukushima, “A relaxed projection method for variational inequalities,” Mathematical Programming, vol. 35, no. 1, pp. 58–70, 1986.
22. L. C. Ceng, Q. H. Ansari, and J. C. Yao, “Mann-type steepest-descent and modified hybrid steepest-descent methods for variational inequalities in Banach spaces,” Numerical Functional Analysis and Optimization, vol. 29, no. 9-10, pp. 987–1033, 2008.
23. L. C. Ceng, M. Teboulle, and J. C. Yao, “Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed-point problems,” Journal of Optimization Theory and Applications, vol. 146, no. 1, pp. 19–31, 2010.
24. K. Goebel and W. A. Kirk, Topics on Metric Fixed Point Theory, Cambridge University Press, Cambridge, UK, 1990.
25. H. K. Xu, “Iterative algorithms for nonlinear operators,” Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
26. P. E. Maingé, “A hybrid extragradient-viscosity method for monotone operators and fixed point problems,” SIAM Journal on Control and Optimization, vol. 47, no. 3, pp. 1499–1515, 2008.
27. H. H. Bauschke and J. M. Borwein, “On projection algorithms for solving convex feasibility problems,” SIAM Review, vol. 38, no. 3, pp. 367–426, 1996.
28. Q. Yang, “The relaxed CQ algorithm solving the split feasibility problem,” Inverse Problems, vol. 20, no. 4, pp. 1261–1266, 2004.
29. G. López, V. Martín-Márquez, F. Wang, and H. K. Xu, “Solving the split feasibility problem without prior knowledge of matrix norms,” Inverse Problems, vol. 28, no. 8, Article ID 085004, 18 pages, 2012.
30. Y. Censor, A. Gibali, and S. Reich, “The subgradient extragradient method for solving variational inequalities in Hilbert space,” Journal of Optimization Theory and Applications, vol. 148, no. 2, pp. 318–335, 2011.