Abstract

Consider the variational inequality $\mathrm{VI}(C,F)$ of finding a point $x^*\in C$ satisfying the property $\langle Fx^*,\, x-x^*\rangle \ge 0$ for all $x\in C$, where $C$ is the intersection of finitely many level sets of convex functions defined on a real Hilbert space $H$ and $F: H\to H$ is an $L$-Lipschitzian and $\eta$-strongly monotone operator. Relaxed and self-adaptive iterative algorithms are devised for computing the unique solution of $\mathrm{VI}(C,F)$. Since our algorithms avoid calculating the projection $P_C$ directly ($P_C$ is replaced by computing several sequences of projections onto half-spaces containing the original domain $C$) and need no information about the constants $L$ and $\eta$, their implementation is very easy. To prove strong convergence of our algorithms, a new lemma is established, which can be used as a fundamental tool for solving some nonlinear problems.

1. Introduction

The variational inequality problem $\mathrm{VI}(C,F)$ can mathematically be formulated as the problem of finding a point $x^*\in C$ with the property
$$\langle Fx^*,\, x-x^*\rangle \ge 0, \quad \forall x\in C, \tag{1}$$
where $H$ is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, $C$ is a nonempty closed convex subset of $H$, and $F: H\to H$ is a nonlinear operator. Since its inception by Stampacchia [1] in 1964, the variational inequality problem has received much attention due to its applications in a large variety of problems arising in structural analysis, economics, optimization, operations research, and engineering sciences; see [1–23] and the references therein. Using the projection technique, one can easily show that $\mathrm{VI}(C,F)$ is equivalent to a fixed-point problem (see, for example, [15]).

Lemma 1. $x^*\in C$ is a solution of $\mathrm{VI}(C,F)$ if and only if $x^*$ satisfies the fixed-point relation
$$x^* = P_C(I-\lambda F)x^*, \tag{2}$$
where $\lambda > 0$ is an arbitrary constant, $P_C$ is the orthogonal projection onto $C$, and $I$ is the identity operator on $H$.
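
Lemma 1 is stated without proof; for completeness, the following display sketches the standard verification via the characterization of the projection recalled as (10) in Section 2 (a sketch we add here, not part of the original text):

```latex
% Sketch of Lemma 1: for any \lambda > 0 and x^* \in C, apply the
% characterization (10) of P_C to the point z = x^* - \lambda F x^*:
\begin{align*}
x^* = P_C(x^* - \lambda F x^*)
  &\iff \langle (x^* - \lambda F x^*) - x^*,\; x - x^* \rangle \le 0
        \quad \forall x \in C \\
  &\iff \lambda \langle F x^*,\; x - x^* \rangle \ge 0
        \quad \forall x \in C
   \iff x^* \text{ solves } \mathrm{VI}(C,F).
\end{align*}
```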

Recall that an operator $F: H\to H$ is called monotone if
$$\langle Fx-Fy,\, x-y\rangle \ge 0, \quad \forall x, y\in H. \tag{3}$$
Moreover, a monotone operator $F$ is called strictly monotone if the equality "$=$" holds in the last relation only when $x = y$. It is easy to see that (1) has at most one solution if $F$ is strictly monotone.

For variational inequality (1), $F$ is generally assumed to be Lipschitzian and strongly monotone on $H$; that is, for some constants $L > 0$ and $\eta > 0$, $F$ satisfies the conditions
$$\|Fx-Fy\| \le L\|x-y\|, \qquad \langle Fx-Fy,\, x-y\rangle \ge \eta\|x-y\|^2, \quad \forall x, y\in H. \tag{4}$$
In this case, $F$ is also called an $L$-Lipschitzian and $\eta$-strongly monotone operator. It is quite easy to show the following simple result.

Lemma 2. Assume that $F: H\to H$ satisfies conditions (4) and that $\lambda$ and $\mu$ are constants such that $\lambda\in(0,1)$ and $\mu\in(0, 2\eta/L^2)$, respectively. Let $T_\lambda := I-\lambda\mu F$ (or $P_C(I-\lambda\mu F)$) and $T := I-\mu F$ (or $P_C(I-\mu F)$). Then $T_\lambda$ and $T$ are all contractions with coefficients $1-\lambda\tau$ and $1-\tau$, respectively, where $\tau = \frac{1}{2}\mu(2\eta-\mu L^2)\in(0,1)$.
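
The key estimate behind Lemma 2 is a direct expansion; we sketch it here for $T_\lambda = I-\lambda\mu F$ (the projected versions follow at once since $P_C$ is nonexpansive):

```latex
% Sketch: contraction property of T_lambda = I - lambda*mu*F under (4).
\begin{align*}
\|T_\lambda x - T_\lambda y\|^2
 &= \|x-y\|^2 - 2\lambda\mu\,\langle Fx-Fy,\; x-y\rangle
    + \lambda^2\mu^2\,\|Fx-Fy\|^2 \\
 &\le \bigl(1 - 2\lambda\mu\eta + \lambda^2\mu^2 L^2\bigr)\,\|x-y\|^2
      \qquad\text{by (4)} \\
 &\le \bigl(1 - \lambda\mu(2\eta - \mu L^2)\bigr)\,\|x-y\|^2
      \qquad\text{since } 0 < \lambda < 1 \\
 &= (1 - 2\lambda\tau)\,\|x-y\|^2
  \;\le\; (1 - \lambda\tau)^2\,\|x-y\|^2 .
\end{align*}
```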

Using Banach's contraction mapping principle, the following well-known result can be obtained easily from Lemmas 1 and 2.

Theorem 3. Assume that $F$ satisfies the conditions (4). Then $\mathrm{VI}(C,F)$ has a unique solution. Moreover, for any $\mu\in(0, 2\eta/L^2)$, the sequence $\{x_n\}$ with an arbitrary initial guess $x_0\in H$ and defined recursively by
$$x_{n+1} = P_C(x_n - \mu F x_n), \quad n\ge 0, \tag{5}$$
converges strongly to the unique solution of $\mathrm{VI}(C,F)$.
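
As a small numerical illustration of iteration (5) (our own sketch, not part of the original paper), the following Python snippet solves a toy instance in which $C$ is the closed unit ball, so that $P_C$ has a closed form, and $F$ is an affine strongly monotone operator; all names and the concrete data are illustrative assumptions:

```python
import numpy as np

def proj_unit_ball(x):
    """Orthogonal projection onto the closed unit ball {x : ||x|| <= 1}."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

# F(x) = A x + b with A symmetric positive definite: F is then
# L-Lipschitzian with L = lambda_max(A) and eta-strongly monotone
# with eta = lambda_min(A).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)
b = rng.standard_normal(5)
F = lambda x: A @ x + b

eigs = np.linalg.eigvalsh(A)           # eigenvalues in ascending order
eta, L = eigs[0], eigs[-1]
mu = eta / L**2                        # any mu in (0, 2*eta/L**2) works

x = np.zeros(5)
for _ in range(10000):
    x = proj_unit_ball(x - mu * F(x))  # x_{n+1} = P_C(x_n - mu F x_n)

# By Lemma 1, the fixed-point residual certifies an approximate solution.
print(np.linalg.norm(x - proj_unit_ball(x - mu * F(x))))
```

Note that the sketch relies on a computable $P_C$ and on explicit knowledge of $L$ and $\eta$; these are precisely the two requirements relaxed in the sequel.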

However, Algorithm (5) has two evident weaknesses. On one hand, Algorithm (5) involves calculating the mapping $P_C$, while the computation of a projection onto a general closed convex subset is generally difficult. If $C$ is the intersection of finitely many closed convex subsets of $H$, that is, $C = \bigcap_{i=1}^{m} C_i$, where each $C_i$ is a closed convex subset of $H$, then the computation of $P_C$ is much more difficult. On the other hand, the determination of the stepsize $\mu$ depends on the constants $L$ and $\eta$. This means that, in order to implement Algorithm (5), one first has to compute (or estimate) the constants $L$ and $\eta$, which is sometimes not an easy task in practice.

In order to overcome the above weaknesses of Algorithm (5), a new relaxed and self-adaptive algorithm is proposed in this paper to solve $\mathrm{VI}(C,F)$, where $C$ is the intersection of finitely many level sets of convex functions defined on $H$ and $F$ is an $L$-Lipschitzian and $\eta$-strongly monotone operator. Our method computes finite sequences of projections onto half-spaces containing the original set $C$ instead of computing $P_C$ and selects the stepsizes in a self-adaptive way. The implementation of our algorithm thus avoids computing $P_C$ directly and needs no information about $L$ and $\eta$.

The rest of this paper is organized as follows. Some useful lemmas are listed in the next section; in particular, a new lemma is established in order to prove the strong convergence theorems for our algorithms, and it can also be used as a fundamental tool for solving some nonlinear problems related to fixed points. In the last section, a relaxed algorithm (for the case where $L$ and $\eta$ are known) and a relaxed self-adaptive algorithm (for the case where $L$ and $\eta$ are not known) are proposed, respectively, and the strong convergence theorems of both algorithms are proved.

2. Preliminaries

Throughout the rest of this paper, we denote by $H$ a real Hilbert space and by $I$ the identity operator on $H$. If $f: H\to\mathbb{R}$ is a differentiable functional, then we denote by $\nabla f$ the gradient of $f$. We will also use the following notations:
(i) $\to$ denotes strong convergence.
(ii) $\rightharpoonup$ denotes weak convergence.
(iii) $\omega_w(x_n) := \{x : \exists\, \{x_{n_k}\}\subset\{x_n\}$ such that $x_{n_k}\rightharpoonup x\}$ denotes the weak $\omega$-limit set of $\{x_n\}$.

Recall a trivial inequality, which is well known and in common use.

Lemma 4. For all $x, y\in H$, there holds the following relation:
$$\|x+y\|^2 \le \|x\|^2 + 2\langle y,\, x+y\rangle. \tag{6}$$

Recall that a mapping $T: H\to H$ is said to be nonexpansive if
$$\|Tx-Ty\| \le \|x-y\|, \quad \forall x, y\in H. \tag{7}$$
$T$ is said to be firmly nonexpansive if, for all $x, y\in H$,
$$\|Tx-Ty\|^2 \le \langle Tx-Ty,\, x-y\rangle. \tag{8}$$

The following are characterizations of firmly nonexpansive mappings (see [7] or [24]).

Lemma 5. Let $T: H\to H$ be an operator. The following statements are equivalent.
(i) $T$ is firmly nonexpansive.
(ii) $I-T$ is firmly nonexpansive.
(iii) $\|Tx-Ty\|^2 \le \|x-y\|^2 - \|(I-T)x-(I-T)y\|^2$, $\forall x, y\in H$.

We know that the orthogonal projection $P_C$ from $H$ onto a nonempty closed convex subset $C\subseteq H$ is a typical example of a firmly nonexpansive mapping [7], which is defined by
$$P_C x := \arg\min_{y\in C}\|x-y\|^2, \quad x\in H. \tag{9}$$
It is well known that $P_C x$ is characterized [7] by the inequality (for $x\in H$ and $z\in C$)
$$z = P_C x \iff \langle x-z,\, y-z\rangle \le 0, \quad \forall y\in C. \tag{10}$$
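
In particular, when $C$ is a half-space, $P_C$ has a simple closed form, which is exactly what the relaxed algorithms of Section 3 exploit; the following Python helper (our illustrative sketch) implements it:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Orthogonal projection of x onto the half-space {y : <a, y> <= b}.

    If x already satisfies the constraint, it is its own projection;
    otherwise x is moved along the normal a onto the bounding hyperplane:
        P(x) = x - max(0, <a, x> - b) / ||a||^2 * a.
    """
    viol = a @ x - b
    if viol <= 0.0:
        return x
    return x - (viol / (a @ a)) * a
```

One can check directly that the returned point satisfies the characterization (10).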

It is well known that the following lemma [25] is often used in analyzing the strong convergence of algorithms for solving various nonlinear problems, such as fixed-point problems of nonlinear mappings, variational inequalities, and split feasibility problems. In fact, this lemma has been regarded as a fundamental tool for solving some nonlinear problems related to fixed points.

Lemma 6 (see [25]). Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1-\gamma_n)a_n + \gamma_n\delta_n, \quad n\ge 0, \tag{11}$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty}\gamma_n = \infty$;
(ii) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}|\gamma_n\delta_n| < \infty$.
Then $\lim_{n\to\infty} a_n = 0$.

In this paper, inspired and encouraged by an idea in [26], we obtain the following lemma. Its key role in the proofs of our main results will be illustrated in the next section, which suggests that it is likely to become a new fundamental tool for solving some nonlinear problems related to fixed points.

Lemma 7. Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1-\gamma_n)a_n + \gamma_n\delta_n + \beta_n, \quad n\ge 0, \tag{12}$$
$$a_{n+1} \le a_n - \eta_n + \alpha_n, \quad n\ge 0, \tag{13}$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers, and $\{\delta_n\}$, $\{\beta_n\}$, and $\{\alpha_n\}$ are three sequences in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty}\gamma_n = \infty$;
(ii) $\lim_{n\to\infty}\alpha_n = 0$;
(iii) $\lim_{k\to\infty}\eta_{n_k} = 0$ implies $\limsup_{k\to\infty}\delta_{n_k} \le 0$ for any subsequence $\{n_k\}$ of $\{n\}$;
(iv) $\lim_{n\to\infty}\beta_n/\gamma_n = 0$.
Then $\lim_{n\to\infty} a_n = 0$.

Proof. Following and generalizing an idea in [26], we distinguish two cases to prove that $a_n\to 0$ as $n\to\infty$.
Case 1. $\{a_n\}$ is eventually decreasing (i.e., there exists $n_0\ge 0$ such that $a_{n+1}\le a_n$ holds for all $n\ge n_0$). In this case, $\{a_n\}$ must be convergent, and from (13) it follows that
$$0 \le \eta_n \le a_n - a_{n+1} + \alpha_n. \tag{14}$$
Noting condition (ii), letting $n\to\infty$ in (14) yields $\eta_n\to 0$ as $n\to\infty$. Using condition (iii), we get that $\limsup_{n\to\infty}\delta_n \le 0$. Noting this together with conditions (i) and (iv), and rewriting (12) as $a_{n+1} \le (1-\gamma_n)a_n + \gamma_n(\delta_n + \beta_n/\gamma_n)$, we obtain $a_n\to 0$ by applying Lemma 6 to (12).
Case 2. $\{a_n\}$ is not eventually decreasing. Hence, we can find an integer $n_1$ such that $a_{n_1} < a_{n_1+1}$. Let us now define
$$J_n := \{\, n_1 \le k \le n : a_k < a_{k+1} \,\}, \quad n\ge n_1. \tag{15}$$
Obviously, $J_n$ is nonempty and satisfies $J_n \subseteq J_{n+1}$. Let
$$\tau(n) := \max J_n, \quad n\ge n_1. \tag{16}$$
It is clear that $\tau(n)\to\infty$ as $n\to\infty$ (otherwise, $\{a_n\}$ is eventually decreasing). It is also clear that $a_{\tau(n)} < a_{\tau(n)+1}$ for all $n\ge n_1$. Moreover,
$$a_n \le a_{\tau(n)+1}, \quad \forall n\ge n_1. \tag{17}$$
In fact, if $\tau(n) = n$, then inequality (17) is trivial; if $\tau(n) = n-1$, then $\tau(n)+1 = n$, and (17) is also trivial. If $\tau(n) < n-1$, then every integer $k$ with $\tau(n) < k \le n-1$ satisfies $k\notin J_n$, that is, $a_k \ge a_{k+1}$. Thus we deduce from the definition of $\tau(n)$ that
$$a_{\tau(n)+1} \ge a_{\tau(n)+2} \ge \cdots \ge a_n, \tag{18}$$
and inequality (17) holds again. Since $a_{\tau(n)} < a_{\tau(n)+1}$ for all $n\ge n_1$, it follows from (14) that
$$0 \le \eta_{\tau(n)} \le a_{\tau(n)} - a_{\tau(n)+1} + \alpha_{\tau(n)} \le \alpha_{\tau(n)}, \tag{19}$$
so that $\eta_{\tau(n)}\to 0$ as $n\to\infty$ using condition (ii). Due to the condition (iii), this implies that
$$\limsup_{n\to\infty}\delta_{\tau(n)} \le 0. \tag{20}$$
Noting $a_{\tau(n)} < a_{\tau(n)+1}$ again, it follows from (12) that
$$a_{\tau(n)} \le \delta_{\tau(n)} + \frac{\beta_{\tau(n)}}{\gamma_{\tau(n)}}. \tag{21}$$
Combining (20), (21), and condition (iv) yields
$$\limsup_{n\to\infty} a_{\tau(n)} \le 0, \tag{22}$$
and hence $a_{\tau(n)}\to 0$ as $n\to\infty$. This together with (13) implies that
$$a_{\tau(n)+1} \le a_{\tau(n)} - \eta_{\tau(n)} + \alpha_{\tau(n)} \to 0 \quad (n\to\infty), \tag{23}$$
which together with (17), in turn, implies that $a_n\to 0$ as $n\to\infty$.

The following result is just a special case of Lemma 7, that is, the case where $\beta_n = 0$ for all $n\ge 0$.

Lemma 8. Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1-\gamma_n)a_n + \gamma_n\delta_n, \quad n\ge 0, \tag{24}$$
$$a_{n+1} \le a_n - \eta_n + \alpha_n, \quad n\ge 0, \tag{25}$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers, and $\{\delta_n\}$ and $\{\alpha_n\}$ are two sequences in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty}\gamma_n = \infty$;
(ii) $\lim_{n\to\infty}\alpha_n = 0$;
(iii) $\lim_{k\to\infty}\eta_{n_k} = 0$ implies $\limsup_{k\to\infty}\delta_{n_k} \le 0$ for any subsequence $\{n_k\}$ of $\{n\}$.
Then $\lim_{n\to\infty} a_n = 0$.

Recall that a function $c: H\to\mathbb{R}$ is called convex if
$$c(\lambda x + (1-\lambda)y) \le \lambda c(x) + (1-\lambda)c(y), \quad \forall\lambda\in(0,1),\ \forall x, y\in H. \tag{26}$$
A differentiable function $c$ is convex if and only if there holds the following relation:
$$c(z) \ge c(x) + \langle\nabla c(x),\, z-x\rangle, \quad \forall x, z\in H. \tag{27}$$
Recall that an element $\xi\in H$ is said to be a subgradient of $c: H\to\mathbb{R}$ at $x$ if
$$c(z) \ge c(x) + \langle\xi,\, z-x\rangle, \quad \forall z\in H. \tag{28}$$

A function $c: H\to\mathbb{R}$ is said to be subdifferentiable at $x$ if it has at least one subgradient at $x$. The set of subgradients of $c$ at the point $x$ is called the subdifferential of $c$ at $x$ and is denoted by $\partial c(x)$. Relation (28) is called the subdifferential inequality of $c$ at $x$. A function $c$ is called subdifferentiable if it is subdifferentiable at all $x\in H$. If a function $c$ is differentiable and convex, then its gradient and subgradient coincide, that is, $\partial c(x) = \{\nabla c(x)\}$.
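
As a concrete illustration (our own example, not taken from the paper), consider the convex function $c(x) = \max_{1\le i\le m}(\langle a_i, x\rangle - b_i)$ on $\mathbb{R}^d$: the gradient $a_i$ of any affine piece attaining the maximum satisfies the subdifferential inequality (28), as the following Python oracle exploits:

```python
import numpy as np

def subgrad_maxaffine(x, A, b):
    """Return one subgradient of c(x) = max_i (<a_i, x> - b_i) at x.

    c is convex as a pointwise maximum of affine functions.  If the i-th
    piece is active (maximizing) at x, then for every y:
        c(y) >= <a_i, y> - b_i = c(x) + <a_i, y - x>,
    which is exactly the subdifferential inequality (28) with xi = a_i.
    """
    i = int(np.argmax(A @ x - b))   # index of one active piece
    return A[i]
```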

Recall that a function $c: H\to\mathbb{R}$ is said to be weakly lower semicontinuous at $x$ if $x_n\rightharpoonup x$ implies $c(x) \le \liminf_{n\to\infty} c(x_n)$. It is well known that every lower semicontinuous convex function is weakly lower semicontinuous.

3. Iterative Algorithms

In this section, we consider iterative algorithms for solving a particular kind of variational inequality (1) in which the closed convex subset $C$ has the particular structure of an intersection of finitely many level sets of convex functions:
$$C = \bigcap_{i=1}^{m} C_i, \qquad C_i = \{x\in H : c_i(x) \le 0\}, \tag{29}$$
where $m$ is a positive integer and $c_i: H\to\mathbb{R}$ is a convex function for each $i = 1, \ldots, m$. We always assume that each $c_i$ is subdifferentiable on $H$ and that $\partial c_i$ is a bounded operator (i.e., bounded on bounded sets). It is worth noting that every convex function defined on a finite-dimensional Hilbert space is subdifferentiable and its subdifferential operator is a bounded operator (see [27, Corollary 7.9]). We also assume that $F: H\to H$ is an $L$-Lipschitzian and $\eta$-strongly monotone operator. It is well known that in this case $\mathrm{VI}(C,F)$ has a unique solution, which is henceforth denoted by $x^*$.

Without loss of generality, we will consider only the case $m = 2$; that is, $C = C_1\cap C_2$, where
$$C_1 = \{x\in H : c_1(x) \le 0\}, \qquad C_2 = \{x\in H : c_2(x) \le 0\}. \tag{30}$$
All of our results can be extended easily to the general case.

The computation of a projection onto a general closed convex subset is generally difficult. To overcome this difficulty, Fukushima [21] suggested a way to calculate the projection onto a level set of a convex function by computing a sequence of projections onto half-spaces containing the original level set. This idea was followed by Yang [28] and López et al. [29], who introduced relaxed algorithms for solving the split feasibility problem in finite-dimensional and infinite-dimensional Hilbert spaces, respectively. The idea was also used by Censor et al. [30] in the subgradient extragradient method for solving variational inequalities in a Hilbert space.

We are now in a position to introduce a relaxed algorithm for computing the unique solution $x^*$ of $\mathrm{VI}(C,F)$, where $F$ satisfies conditions (4) and $C$ is given as in (30). This scheme applies to the case where $L$ and $\eta$ are easy to determine.

Algorithm 1. Choose an arbitrary initial guess $x_0\in H$. The sequence $\{x_n\}$ is constructed via the formula
$$x_{n+1} = P_{C_2^n} P_{C_1^n}(x_n - \lambda_n\mu F x_n), \quad n\ge 0, \tag{31}$$
where
$$C_1^n = \{x\in H : c_1(x_n) + \langle\xi_1^n,\, x-x_n\rangle \le 0\}, \qquad C_2^n = \{x\in H : c_2(x_n) + \langle\xi_2^n,\, x-x_n\rangle \le 0\}, \tag{32}$$
where $\xi_1^n\in\partial c_1(x_n)$, $\xi_2^n\in\partial c_2(x_n)$, the sequence $\{\lambda_n\}$ is in $(0,1)$, and $\mu$ is a constant such that $\mu\in(0, 2\eta/L^2)$.
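
The following Python sketch shows how Algorithm 1 runs on a toy instance (all concrete data, function names, and the fixed iteration count are our illustrative assumptions; we take differentiable $c_1$, $c_2$ so their gradients serve as subgradients):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto {y : <a, y> <= b}; identity if a = 0."""
    aa = a @ a
    if aa == 0.0:
        return x
    viol = a @ x - b
    return x if viol <= 0.0 else x - (viol / aa) * a

def relaxed_vi(F, c, grad_c, mu, lam, x0, n_iter=20000):
    """Sketch of Algorithm 1 for m = 2 level sets C_i = {c_i <= 0}.

    mu  : constant in (0, 2*eta/L**2)
    lam : stepsize sequence with lam(n) in (0,1), lam(n) -> 0, sum = inf
    """
    x = x0
    for n in range(n_iter):
        y = x - lam(n) * mu * F(x)            # relaxed gradient-type step
        for ci, gi in zip(c, grad_c):
            xi = gi(x)                         # xi_i^n in subdiff c_i(x_n)
            # C_i^n = {z : c_i(x_n) + <xi, z - x_n> <= 0}, a half-space
            y = proj_halfspace(y, xi, xi @ x - ci(x))
        x = y
    return x

# Toy data: C = (closed unit ball) /\ {x : x[0] <= 0}, F(x) = A x + b.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M.T @ M + np.eye(4)
bvec = rng.standard_normal(4)
F = lambda x: A @ x + bvec
eigs = np.linalg.eigvalsh(A)
eta, L = eigs[0], eigs[-1]

c1, g1 = (lambda x: x @ x - 1.0), (lambda x: 2.0 * x)
e0 = np.eye(4)[0]
c2, g2 = (lambda x: x @ e0), (lambda x: e0)

x = relaxed_vi(F, [c1, c2], [g1, g2], mu=eta / L**2,
               lam=lambda n: 1.0 / (n + 2), x0=np.zeros(4))
print(c1(x), c2(x))   # near-feasibility: both values should be ~<= 0
```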

We now analyze strong convergence of Algorithm 1, which also illustrates the application of Lemma 7 (or Lemma 8).

Theorem 9. Assume that $\lim_{n\to\infty}\lambda_n = 0$ and $\sum_{n=0}^{\infty}\lambda_n = \infty$. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to the unique solution $x^*$ of $\mathrm{VI}(C,F)$.

Proof. Firstly, we verify that $\{x_n\}$ is bounded. Indeed, it is easy to see from the subdifferential inequality (28) and the definitions of $C_1^n$ and $C_2^n$ that $C_1\subseteq C_1^n$ and $C_2\subseteq C_2^n$ hold for all $n\ge 0$ (for instance, for $x\in C_1$, (28) yields $c_1(x_n) + \langle\xi_1^n,\, x-x_n\rangle \le c_1(x) \le 0$), and hence it follows that $C\subseteq C_1^n\cap C_2^n$; in particular, $x^* = P_{C_2^n}P_{C_1^n}x^*$. Since the projection operators $P_{C_1^n}$ and $P_{C_2^n}$ are nonexpansive, we obtain from (31), Lemmas 2 and 4 that
$$\|x_{n+1}-x^*\|^2 \le \|(I-\lambda_n\mu F)x_n - (I-\lambda_n\mu F)x^* - \lambda_n\mu Fx^*\|^2 \le (1-\lambda_n\tau)\|x_n-x^*\|^2 + 2\lambda_n\mu\langle Fx^*,\, x^*-z_n\rangle, \tag{33}$$
where $z_n := (I-\lambda_n\mu F)x_n = x_n - \lambda_n\mu Fx_n$ and $\tau = \frac{1}{2}\mu(2\eta-\mu L^2)$.
Consequently,
$$\|x_{n+1}-x^*\| \le (1-\lambda_n\tau)\|x_n-x^*\| + \lambda_n\mu\|Fx^*\| \le \max\left\{\|x_n-x^*\|,\ \frac{\mu\|Fx^*\|}{\tau}\right\}. \tag{34}$$
It turns out inductively that
$$\|x_n-x^*\| \le \max\left\{\|x_0-x^*\|,\ \frac{\mu\|Fx^*\|}{\tau}\right\}, \quad \forall n\ge 0, \tag{35}$$
and this means that $\{x_n\}$ is bounded. Obviously, $\{Fx_n\}$ is also bounded.
Secondly, set
$$y_n := P_{C_1^n} z_n, \quad n\ge 0, \tag{36}$$
so that $x_{n+1} = P_{C_2^n} y_n$, and note that
$$\|y_n - x^*\| \le \|z_n - x^*\| \le \|x_n - x^*\| + \lambda_n\mu\|Fx_n\|, \tag{37}$$
so $\{y_n\}$ and $\{z_n\}$ are bounded as well. Since a projection is firmly nonexpansive (see Lemma 5), we obtain
$$\|y_n - x^*\|^2 \le \|z_n - x^*\|^2 - \|z_n - y_n\|^2, \qquad \|x_{n+1} - x^*\|^2 \le \|y_n - x^*\|^2 - \|y_n - x_{n+1}\|^2; \tag{38}$$
thus we also have
$$\|z_n - x^*\|^2 \le \|x_n - x^*\|^2 + \lambda_n M, \tag{39}$$
where $M$ is a positive constant such that $M \ge \sup_{n\ge 0}\{\mu\|Fx_n\|(2\|x_n - x^*\| + \mu\|Fx_n\|)\}$. The combination of (38) and (39) leads to
$$\|x_{n+1} - x^*\|^2 \le \|x_n - x^*\|^2 - \left(\|z_n - y_n\|^2 + \|y_n - x_{n+1}\|^2\right) + \lambda_n M. \tag{40}$$
Setting
$$a_n := \|x_n - x^*\|^2, \quad \gamma_n := \lambda_n\tau, \quad \delta_n := \frac{2\mu}{\tau}\langle Fx^*,\, x^* - z_n\rangle, \quad \eta_n := \|z_n - y_n\|^2 + \|y_n - x_{n+1}\|^2, \quad \alpha_n := \lambda_n M, \tag{41}$$
then (33) and (40) can be rewritten as the following forms, respectively:
$$a_{n+1} \le (1-\gamma_n)a_n + \gamma_n\delta_n, \tag{42}$$
$$a_{n+1} \le a_n - \eta_n + \alpha_n. \tag{43}$$
Finally, observing that the conditions $\sum_{n=0}^{\infty}\lambda_n = \infty$ and $\lim_{n\to\infty}\lambda_n = 0$ imply $\sum_{n=0}^{\infty}\gamma_n = \infty$ and $\lim_{n\to\infty}\alpha_n = 0$, respectively, in order to complete the proof using Lemma 7 (or Lemma 8), it suffices to verify that $\lim_{k\to\infty}\eta_{n_k} = 0$ implies $\limsup_{k\to\infty}\delta_{n_k} \le 0$ for any subsequence $\{n_k\}$ of $\{n\}$. In fact, if $\eta_{n_k}\to 0$ as $k\to\infty$, then $\|z_{n_k} - y_{n_k}\|\to 0$ and $\|y_{n_k} - x_{n_k+1}\|\to 0$ hold; since $\|x_n - z_n\| = \lambda_n\mu\|Fx_n\|\to 0$, we also have $\|x_{n_k} - y_{n_k}\|\to 0$ and $\|x_{n_k} - x_{n_k+1}\|\to 0$. Since $\partial c_1$ and $\partial c_2$ are bounded on bounded sets, we have two positive constants $M_1$ and $M_2$ such that
$$\|\xi_1^n\| \le M_1, \qquad \|\xi_2^n\| \le M_2, \quad \forall n\ge 0 \tag{44}$$
(noting that $\{y_n\}$ is also bounded due to the fact that $\|y_n - x^*\| \le \|z_n - x^*\|$). From (32) and the trivial fact that $y_n\in C_1^n$ and $x_{n+1}\in C_2^n$, it follows that
$$c_1(x_n) \le \langle\xi_1^n,\, x_n - y_n\rangle \le M_1\|x_n - y_n\|, \tag{45}$$
$$c_2(x_n) \le \langle\xi_2^n,\, x_n - x_{n+1}\rangle \le M_2\|x_n - x_{n+1}\|. \tag{46}$$
Now if $\hat{x}\in\omega_w(x_{n_k})$ and $\{x_{n_{k_j}}\}$ is a subsequence of $\{x_{n_k}\}$ such that $x_{n_{k_j}}\rightharpoonup\hat{x}$, assuming $x_{n_k}\rightharpoonup\hat{x}$ without loss of the generality, then the weak lower semicontinuity of $c_1$ and (45) imply that $c_1(\hat{x}) \le \liminf_{k\to\infty} c_1(x_{n_k}) \le 0$. This means that $\hat{x}\in C_1$ holds. On the other hand, noting $\|x_{n_k} - x_{n_k+1}\|\to 0$, we can assert that $\limsup_{k\to\infty} c_2(x_{n_k}) \le 0$ and have from the weak lower semicontinuity of $c_2$ and (46) that $c_2(\hat{x}) \le \liminf_{k\to\infty} c_2(x_{n_k}) \le 0$. This, in turn, implies that $\hat{x}\in C_2$. Moreover, we obtain that $\hat{x}\in C_1\cap C_2 = C$ and hence $\omega_w(x_{n_k})\subseteq C$.
Noting that $x^*$ is the unique solution of $\mathrm{VI}(C,F)$, it turns out that
$$\limsup_{k\to\infty}\langle Fx^*,\, x^* - x_{n_k}\rangle = \max_{\hat{x}\in\omega_w(x_{n_k})}\langle Fx^*,\, x^* - \hat{x}\rangle \le 0. \tag{47}$$
Since
$$\|z_{n_k} - x_{n_k}\| = \lambda_{n_k}\mu\|Fx_{n_k}\| \to 0 \quad (k\to\infty) \tag{48}$$
and $\{Fx_n\}$ is bounded, it is easy to see that
$$\limsup_{k\to\infty}\delta_{n_k} = \frac{2\mu}{\tau}\limsup_{k\to\infty}\langle Fx^*,\, x^* - z_{n_k}\rangle \le 0. \tag{49}$$

Observe that in Algorithm 1 the determination of the stepsize $\lambda_n\mu$ still depends on the constants $L$ and $\eta$; this means that, in order to implement Algorithm 1, one first has to estimate the constants $L$ and $\eta$, which is sometimes not an easy task in practice.

To overcome this difficulty, we further introduce a so-called relaxed and self-adaptive algorithm, that is, a modification of Algorithm 1 in which the stepsize is selected in a self-adaptive way that has no connection with the constants $L$ and $\eta$.

Algorithm 2. Choose an arbitrary initial guess $x_0\in H$ and an arbitrary element $u\in H$ such that $u\ne x_0$. Assume that the $n$th iterate $x_n$ ($n\ge 0$) has been constructed. Continue and calculate the $(n+1)$th iterate $x_{n+1}$ via the following formula:
$$x_{n+1} = P_{C_2^n} P_{C_1^n}(x_n - \lambda_n\mu_n F x_n), \quad n\ge 0, \tag{50}$$
where $C_1^n$ and $C_2^n$ are given as in (32), the sequence $\{\lambda_n\}$ is in $(0,1)$, and the sequence $\{\mu_n\}$ is determined via the following relation (with $x_{-1} := u$):
$$\mu_n = \begin{cases} \dfrac{\langle Fx_n - Fx_{n-1},\, x_n - x_{n-1}\rangle}{\|Fx_n - Fx_{n-1}\|^2}, & \text{if } x_n\ne x_{n-1}, \\[2mm] \mu_{n-1}, & \text{if } x_n = x_{n-1}. \end{cases} \tag{51}$$

Firstly, we show that the sequence $\{\mu_n\}$ is well defined. Noting the strong monotonicity of $F$ (indeed, $\langle Fx - Fy,\, x - y\rangle \ge \eta\|x-y\|^2 > 0$ whenever $x\ne y$), $x_0\ne u$ implies that $Fx_0\ne Fu$, and hence $\mu_0$ is well defined via the first formula of (51). Consequently, $\mu_n$ is well defined inductively according to (51), and thus the sequence $\{x_n\}$ is also well defined.

Next, we estimate $\mu_n$ roughly. If $x_n\ne x_{n-1}$ (that is, $Fx_n\ne Fx_{n-1}$), set
$$v_n := x_n - x_{n-1}, \qquad w_n := Fx_n - Fx_{n-1}. \tag{52}$$
Obviously, it turns out from (4) that
$$\eta\|v_n\|^2 \le \langle w_n,\, v_n\rangle \le \|w_n\|\|v_n\|, \qquad \eta\|v_n\| \le \|w_n\| \le L\|v_n\|. \tag{53}$$
Consequently,
$$\frac{\eta}{L^2} \le \mu_n = \frac{\langle w_n,\, v_n\rangle}{\|w_n\|^2} \le \frac{1}{\eta}. \tag{54}$$
By the definition of $\mu_n$, we can assert that (54) holds for all $n\ge 0$.
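
The next Python sketch combines the self-adaptive rule (51), as reconstructed above (so treat the exact quotient as our assumption), with the relaxed step (50); the names reuse the toy data $F$, $c_1$, $c_2$ of the Algorithm 1 sketch. The update needs no knowledge of $L$ or $\eta$, while (54) guarantees the computed quotient always lies in $[\eta/L^2,\, 1/\eta]$:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto {y : <a, y> <= b}; identity if a = 0."""
    aa = a @ a
    if aa == 0.0:
        return x
    viol = a @ x - b
    return x if viol <= 0.0 else x - (viol / aa) * a

def relaxed_self_adaptive_vi(F, c, grad_c, lam, x0, u, n_iter=20000):
    """Sketch of Algorithm 2 (m = 2 level sets) with stepsize rule (51)."""
    x_prev, x = u, x0          # u != x0 seeds the first quotient mu_0
    mu = None
    for n in range(n_iter):
        d, g = x - x_prev, F(x) - F(x_prev)
        gg = g @ g
        if gg > 0.0:
            mu = (g @ d) / gg  # first formula of (51)
        # otherwise x == x_prev and mu keeps its previous value mu_{n-1}
        y = x - lam(n) * mu * F(x)
        for ci, gi in zip(c, grad_c):
            xi = gi(x)
            y = proj_halfspace(y, xi, xi @ x - ci(x))
        x_prev, x = x, y
    return x

# e.g., with F, c1, g1, c2, g2 from the Algorithm 1 sketch:
# x = relaxed_self_adaptive_vi(F, [c1, c2], [g1, g2],
#                              lam=lambda n: 1.0 / (n + 2),
#                              x0=np.zeros(4), u=np.ones(4))
```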

Lemma 7 (or Lemma 8) is also important for the proof of the strong convergence of Algorithm 2.

Theorem 10. Assume that $\lim_{n\to\infty}\lambda_n = 0$ and $\sum_{n=0}^{\infty}\lambda_n = \infty$. Then the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to the unique solution $x^*$ of $\mathrm{VI}(C,F)$.

Proof. Setting $z_n := x_n - \lambda_n\mu_n Fx_n$ and $\tau := \eta^2/(2L^2)$, it concludes, observing $\lim_{n\to\infty}\lambda_n = 0$ and (54), that there exists some positive integer $n_0$ such that
$$\lambda_n\mu_n L^2 \le \eta, \quad \forall n\ge n_0, \tag{55}$$
and consequently $\tfrac{1}{2}\lambda_n\mu_n(2\eta - \lambda_n\mu_n L^2) \ge \tfrac{1}{2}\lambda_n\mu_n\eta \ge \lambda_n\tau$ for all $n\ge n_0$. Using Lemma 2 (applied with $\mu$ replaced by $\lambda_n\mu_n$), we have from (55) that $I - \lambda_n\mu_n F$ (so is $P_{C_2^n}P_{C_1^n}(I - \lambda_n\mu_n F)$) is a contraction with coefficient $1 - \lambda_n\tau$ for every $n\ge n_0$. This concludes that, for all $n\ge n_0$,
$$\|x_{n+1} - x^*\| \le (1 - \lambda_n\tau)\|x_n - x^*\| + \lambda_n\mu_n\|Fx^*\| \le \max\left\{\|x_n - x^*\|,\ \frac{\|Fx^*\|}{\eta\tau}\right\}, \tag{56}$$
and hence, using Lemma 4 as in the proof of Theorem 9,
$$\|x_{n+1} - x^*\|^2 \le (1 - \lambda_n\tau)\|x_n - x^*\|^2 + 2\lambda_n\mu_n\langle Fx^*,\, x^* - z_n\rangle. \tag{57}$$
Using (56), it turns out inductively that
$$\|x_n - x^*\| \le \max\left\{\|x_{n_0} - x^*\|,\ \frac{\|Fx^*\|}{\eta\tau}\right\}, \quad \forall n\ge n_0, \tag{58}$$
and this means that $\{x_n\}$ is bounded, so is $\{Fx_n\}$.
By an argument similar to getting (38)–(40), we have, for all $n\ge n_0$,
$$y_n := P_{C_1^n} z_n, \tag{59}$$
$$\|y_n - x^*\|^2 \le \|z_n - x^*\|^2 - \|z_n - y_n\|^2, \qquad \|x_{n+1} - x^*\|^2 \le \|y_n - x^*\|^2 - \|y_n - x_{n+1}\|^2, \tag{60}$$
$$\|z_n - x^*\|^2 \le \|x_n - x^*\|^2 + \lambda_n M, \tag{61}$$
$$\|x_{n+1} - x^*\|^2 \le \|x_n - x^*\|^2 - \left(\|z_n - y_n\|^2 + \|y_n - x_{n+1}\|^2\right) + \lambda_n M, \tag{62}$$
where $M$ is a positive constant such that $M \ge \sup_{n\ge n_0}\{\mu_n\|Fx_n\|(2\|x_n - x^*\| + \mu_n\|Fx_n\|)\}$. Setting
$$a_n := \|x_n - x^*\|^2, \quad \gamma_n := \lambda_n\tau, \quad \delta_n := \frac{2\mu_n}{\tau}\langle Fx^*,\, x^* - z_n\rangle, \quad \eta_n := \|z_n - y_n\|^2 + \|y_n - x_{n+1}\|^2, \quad \alpha_n := \lambda_n M, \tag{63}$$
then (57) and (62) can be rewritten as the following forms, respectively:
$$a_{n+1} \le (1-\gamma_n)a_n + \gamma_n\delta_n, \qquad a_{n+1} \le a_n - \eta_n + \alpha_n, \quad \forall n\ge n_0. \tag{64}$$
Clearly, $\sum_{n=0}^{\infty}\lambda_n = \infty$ and $\lim_{n\to\infty}\lambda_n = 0$, together with (54) and (55), imply that $\sum_{n=n_0}^{\infty}\gamma_n = \infty$ and $\lim_{n\to\infty}\alpha_n = 0$, respectively.
By an argument very similar to the corresponding part of the proof of Theorem 9, it is not difficult to verify that $\lim_{k\to\infty}\eta_{n_k} = 0$ implies $\limsup_{k\to\infty}\delta_{n_k} \le 0$ for any subsequence $\{n_k\}$ of $\{n\}$. Thus we can complete the proof by using Lemma 7 (or Lemma 8).

Acknowledgments

This work was supported by National Natural Science Foundation of China (Grant no. 11201476) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.