Journal of Function Spaces and Applications, Volume 2012 (2012), Article ID 678353, 26 pages. http://dx.doi.org/10.1155/2012/678353
Research Article

Hybrid Gradient-Projection Algorithm for Solving Constrained Convex Minimization Problems with Generalized Mixed Equilibrium Problems

Lu-Chuan Ceng1 and Ching-Feng Wen2

1Department of Mathematics, Scientific Computing Key Laboratory of Shanghai Universities, Shanghai Normal University, Shanghai 200234, China
2Center for General Education, Kaohsiung Medical University, Kaohsiung 80708, Taiwan

Received 16 July 2012; Accepted 8 September 2012

Copyright © 2012 Lu-Chuan Ceng and Ching-Feng Wen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

It is well known that the gradient-projection algorithm (GPA) for solving constrained convex minimization problems has been proven to have only weak convergence unless the underlying Hilbert space is finite dimensional. In this paper, we introduce a new hybrid gradient-projection algorithm for solving constrained convex minimization problems with generalized mixed equilibrium problems in a real Hilbert space. It is proven that three sequences generated by this algorithm converge strongly to the unique solution of some variational inequality, which is also a common element of the set of solutions of a constrained convex minimization problem, the set of solutions of a generalized mixed equilibrium problem, and the set of fixed points of a strict pseudocontraction in a real Hilbert space.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and norm $\| \cdot \|$. Let $C$ be a nonempty closed convex subset of $H$ and let $P_C$ be the metric projection of $H$ onto $C$. Recall that an $L$-Lipschitz continuous mapping $T : C \to C$ is a mapping on $C$ such that $\|Tx - Ty\| \le L \|x - y\|$ for all $x, y \in C$, where $L \ge 0$ is a constant. In particular, if $L \in [0, 1)$ then $T$ is called a contraction on $C$; if $L = 1$ then $T$ is called a nonexpansive mapping on $C$. A mapping $A : C \to H$ is called monotone if $\langle Ax - Ay, x - y \rangle \ge 0$ for all $x, y \in C$. A mapping $A : C \to H$ is called $\alpha$-inverse strongly monotone if there exists a constant $\alpha > 0$ such that $\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^2$ for all $x, y \in C$; see, for example, [1]. A self-mapping $T : C \to C$ is called $k$-strictly pseudocontractive if there exists a constant $k \in [0, 1)$ such that $\|Tx - Ty\|^2 \le \|x - y\|^2 + k \|(I - T)x - (I - T)y\|^2$ for all $x, y \in C$; see, for example, [2]. In particular, if $k = 0$, then $T$ reduces to a nonexpansive self-mapping on $C$.

Consider the following constrained convex minimization problem:
$$\min_{x \in C} f(x), \quad (1.5)$$
where $f : C \to \mathbb{R}$ is a real-valued convex function. If $f$ is (Fréchet) differentiable, then the gradient-projection method (for short, GPM) generates a sequence $\{x_n\}$ via the recursive formula
$$x_{n+1} = P_C\big(x_n - \lambda \nabla f(x_n)\big), \quad n \ge 0, \quad (1.6)$$
or, more generally,
$$x_{n+1} = P_C\big(x_n - \lambda_n \nabla f(x_n)\big), \quad n \ge 0, \quad (1.7)$$
where in both (1.6) and (1.7) the initial guess $x_0$ is taken from $C$ arbitrarily, the parameters $\lambda$ or $\lambda_n$ are positive real numbers, and $P_C$ is the metric projection from $H$ onto $C$. The convergence of the algorithms (1.6) and (1.7) depends on the behavior of the gradient $\nabla f$. As a matter of fact, it is known that if $\nabla f$ is strongly monotone and Lipschitzian, namely, there are constants $\eta, L > 0$ satisfying the properties
$$\langle \nabla f(x) - \nabla f(y), x - y \rangle \ge \eta \|x - y\|^2, \quad (1.8)$$
$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\| \quad (1.9)$$
for all $x, y \in C$, then, for $0 < \lambda < 2\eta / L^2$, the operator
$$T := P_C(I - \lambda \nabla f) \quad (1.10)$$
is a contraction; hence, the sequence $\{x_n\}$ defined by algorithm (1.6) converges in norm to the unique solution of the minimization (1.5). More generally, if the sequence $\{\lambda_n\}$ is chosen to satisfy the property
$$0 < \liminf_{n \to \infty} \lambda_n \le \limsup_{n \to \infty} \lambda_n < \frac{2\eta}{L^2},$$
then the sequence $\{x_n\}$ defined by algorithm (1.7) converges in norm to the unique minimizer of (1.5). However, if the gradient $\nabla f$ fails to be strongly monotone, the operator $T$ defined in (1.10) would fail to be contractive; consequently, the sequence $\{x_n\}$ generated by algorithm (1.6) may fail to converge strongly (see Section 4 in Xu [3]). The following theorem states that if the Lipschitz condition (1.9) holds, then the algorithms (1.6) and (1.7) can still converge in the weak topology.
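For readers who wish to experiment, the gradient-projection iteration (1.7) can be sketched in a few lines. The quadratic objective, unit-ball constraint, and constant step size below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def project_unit_ball(x):
    # Metric projection P_C onto the closed unit ball C = {x : ||x|| <= 1}.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def gradient_projection(grad, project, x0, step, n_iter=200):
    # GPA (1.7): x_{n+1} = P_C(x_n - lambda_n * grad_f(x_n)).
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        x = project(x - step(n) * grad(x))
    return x

# Illustrative problem: minimize f(x) = 0.5 * ||x - b||^2 over the unit ball,
# so grad_f(x) = x - b is Lipschitz with L = 1 and the minimizer is P_C(b).
b = np.array([2.0, 0.0])
x_star = gradient_projection(lambda x: x - b, project_unit_ball,
                             np.zeros(2), lambda n: 1.0)  # constant step in (0, 2/L)
print(x_star)  # [1. 0.]
```

Here the iteration reaches the minimizer $P_C(b) = (1, 0)$ exactly, because the unconstrained gradient step with $\lambda = 1$ lands on $b$ and the projection clips it to the ball.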

Theorem 1.1 (see [3, Theorem 3.2]). Assume the minimization (1.5) is consistent and let $\Gamma$ denote its solution set. Assume the gradient $\nabla f$ satisfies the Lipschitz condition (1.9). Let the sequence of parameters $\{\lambda_n\}$ satisfy the condition
$$0 < \liminf_{n \to \infty} \lambda_n \le \limsup_{n \to \infty} \lambda_n < \frac{2}{L}. \quad (1.12)$$
Then the sequence $\{x_n\}$ generated by the gradient-projection algorithm (1.7) converges weakly to a minimizer of (1.5).

From the above theorem, it is known that the gradient-projection algorithm has, in general, only weak convergence unless the underlying Hilbert space is finite dimensional. This naturally gives rise to the question of how to appropriately modify the gradient-projection algorithm so as to obtain strong convergence. Xu [3] gave two such modifications, one of which is simply a convex combination of a contraction with the point generated by the projected gradient algorithm.

Theorem 1.2 (see [3, Theorem 5.2]). Assume the minimization (1.5) is consistent and let $\Gamma$ denote its solution set. Assume the gradient $\nabla f$ satisfies the Lipschitz condition (1.9). Let $h : C \to C$ be a $\rho$-contraction with $\rho \in [0, 1)$. Let a sequence $\{x_n\}$ be generated by the following hybrid gradient-projection algorithm:
$$x_{n+1} = \theta_n h(x_n) + (1 - \theta_n) P_C\big(x_n - \lambda_n \nabla f(x_n)\big), \quad n \ge 0. \quad (1.13)$$
Assume the sequence $\{\lambda_n\}$ satisfies the condition (1.12) and, in addition, the following conditions are satisfied for $\{\theta_n\} \subset [0, 1]$ and $\{\lambda_n\}$:
(i) $\theta_n \to 0$;
(ii) $\sum_{n=0}^{\infty} \theta_n = \infty$;
(iii) $\sum_{n=0}^{\infty} |\theta_{n+1} - \theta_n| < \infty$;
(iv) $\sum_{n=0}^{\infty} |\lambda_{n+1} - \lambda_n| < \infty$.
Then the sequence $\{x_n\}$ converges in norm to a minimizer $x^*$ of (1.5) which is also the unique solution of the variational inequality of finding $x^* \in \Gamma$ such that
$$\langle (I - h)x^*, x - x^* \rangle \ge 0, \quad \forall x \in \Gamma. \quad (1.14)$$
In other words, $x^*$ is the unique fixed point of the contraction $P_\Gamma h$, $x^* = P_\Gamma h(x^*)$.
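A minimal numerical sketch of a hybrid step of this Halpern type (a contraction term with vanishing weight $\theta_n$ combined with a projected-gradient step) follows. The problem data, the contraction $h(x) = x/2$, and the choice $\theta_n = 1/(n+2)$ are illustrative assumptions.

```python
import numpy as np

def hybrid_gradient_projection(grad, project, h, x0, lam, n_iter=2000):
    # x_{n+1} = theta_n * h(x_n) + (1 - theta_n) * P_C(x_n - lam * grad(x_n)),
    # with theta_n -> 0 and sum(theta_n) = infinity (here theta_n = 1/(n+2)).
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        theta = 1.0 / (n + 2)
        x = theta * h(x) + (1 - theta) * project(x - lam * grad(x))
    return x

# Illustrative problem: minimize f(x) = 0.5 * ||x - b||^2 over the unit ball.
project = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
b = np.array([2.0, 0.0])
x_star = hybrid_gradient_projection(lambda x: x - b, project,
                                    lambda x: 0.5 * x,   # a 1/2-contraction
                                    np.zeros(2), lam=1.0)
print(np.round(x_star, 3))  # close to the minimizer (1, 0)
```

Because the contraction weight $\theta_n$ vanishes while $\sum \theta_n = \infty$, the iterates are driven strongly to the minimizer rather than merely weakly.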

On the other hand, Peng and Yao [4] recently introduced the following generalized mixed equilibrium problem of finding $x \in C$ such that
$$\Theta(x, y) + \varphi(y) - \varphi(x) + \langle Ax, y - x \rangle \ge 0, \quad \forall y \in C, \quad (1.15)$$
where $A : C \to H$ is a nonlinear mapping, $\varphi : C \to \mathbb{R}$ is a function, and $\Theta : C \times C \to \mathbb{R}$ is a bifunction. The set of solutions of problem (1.15) is denoted by GMEP. Subsequently, Yao et al. [5] and Ceng and Yao [6] also considered problem (1.15).

The problem (1.15) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, Nash equilibrium problems in noncooperative games, and others; see, for example, [7–15]. Here some special cases of problem (1.15) are stated as follows.

If $A = 0$, then problem (1.15) reduces to the following mixed equilibrium problem of finding $x \in C$ such that
$$\Theta(x, y) + \varphi(y) - \varphi(x) \ge 0, \quad \forall y \in C,$$
which was considered by Ceng and Yao [7] and Bigi et al. [16]. Very recently, Peng [10] further discussed this problem. The set of solutions of this problem is denoted by MEP.

If $\varphi = 0$, then problem (1.15) reduces to the following generalized equilibrium problem of finding $x \in C$ such that
$$\Theta(x, y) + \langle Ax, y - x \rangle \ge 0, \quad \forall y \in C,$$
which was studied by S. Takahashi and W. Takahashi [8].

If $A = 0$ and $\varphi = 0$, then problem (1.15) reduces to the following equilibrium problem of finding $x \in C$ such that
$$\Theta(x, y) \ge 0, \quad \forall y \in C.$$

If $\Theta = 0$ and $\varphi = 0$, then problem (1.15) reduces to the following classical variational inequality of finding $x \in C$ such that
$$\langle Ax, y - x \rangle \ge 0, \quad \forall y \in C,$$
whose solution set is denoted by $\mathrm{VI}(C, A)$.
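The classical variational inequality is equivalent to the projection fixed-point equation $x = P_C(x - \lambda A x)$ for any $\lambda > 0$, which suggests a simple iterative scheme. The following sketch solves an illustrative affine variational inequality on a box; all problem data are assumptions for demonstration.

```python
import numpy as np

def solve_vi(A, project, x0, lam=0.1, n_iter=500):
    # Fixed-point iteration based on: x solves VI(C, A)  iff  x = P_C(x - lam * A(x)).
    # It converges when A is strongly monotone and Lipschitz and lam is small enough.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = project(x - lam * A(x))
    return x

# Illustrative affine VI on the box C = [0, 1]^2 with A(x) = M x + q.
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite => strongly monotone
q = np.array([-1.0, 1.0])
x_star = solve_vi(lambda x: M @ x + q, lambda x: np.clip(x, 0.0, 1.0), np.zeros(2))
print(np.round(x_star, 4))  # (0.5, 0): first component interior with A_1(x*) = 0,
                            # second at the bound with A_2(x*) > 0
```

At the computed point, the first coordinate of $A(x^*)$ vanishes (interior coordinate) while the second is positive at the lower bound, which is exactly the complementarity pattern required by the variational inequality.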

The variational inequalities have been extensively studied in the literature; see [14, 17–27] and the references therein. In 2006, Nadezhkina and Takahashi [22, 25] and Zeng and Yao [18] proposed some variants of Korpelevič's extragradient method [17] for finding an element of $\mathrm{Fix}(S) \cap \mathrm{VI}(C, A)$, where $S$ is a nonexpansive self-mapping on $C$.

Very recently, Peng [10] also introduced a variant of Korpelevič's extragradient method [17] for finding a common element of the set of solutions of a mixed equilibrium problem, the set of fixed points of a strict pseudocontraction, and the set of solutions of a variational inequality for a monotone, Lipschitz continuous mapping.

Theorem 1.3 (see [10, Theorem 3.1]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\Theta : C \times C \to \mathbb{R}$ be a bifunction satisfying conditions (H1)–(H4) and $\varphi : C \to \mathbb{R}$ a lower semicontinuous and convex function with assumption (A1) or (A2), where
(H1) $\Theta(x, x) = 0$ for all $x \in C$;
(H2) $\Theta$ is monotone, that is, $\Theta(x, y) + \Theta(y, x) \le 0$ for all $x, y \in C$;
(H3) for each $y \in C$, $x \mapsto \Theta(x, y)$ is weakly upper semicontinuous;
(H4) for each $x \in C$, $y \mapsto \Theta(x, y)$ is convex and lower semicontinuous;
(A1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that for any $z \in C \setminus D_x$,
$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r} \langle y_x - z, z - x \rangle < 0;$$
(A2) $C$ is a bounded set.
Let be a monotone and -Lipschitz-continuous mapping and be a -strict pseudocontraction for some such that . For given arbitrarily, let , , , , be sequences generated by Assume that for some , for some and let satisfy . Then, , , , , converge strongly to .

Furthermore, related iterative methods for solving fixed point problems, variational inequalities, equilibrium problems, and optimization problems can be found in [1, 2, 6, 11, 13–16, 19, 20, 24, 26–35].

In this paper, let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\Theta : C \times C \to \mathbb{R}$ be a bifunction satisfying conditions (H1)–(H4) and $\varphi : C \to \mathbb{R}$ a lower semicontinuous and convex function with assumption (A1) or (A2). Suppose the minimization (1.5) is consistent and let $\Gamma$ denote its solution set. Let the gradient $\nabla f$ be Lipschitzian with constant $L > 0$ and $A : C \to H$ be an $\alpha$-inverse strongly monotone mapping. Let $T : C \to C$ be a $k$-strictly pseudocontractive mapping such that $\Omega := \mathrm{Fix}(T) \cap \mathrm{GMEP} \cap \Gamma \ne \emptyset$. Let $Q : C \to C$ be a $\rho$-contraction with $\rho \in [0, 1)$. For $x_0 \in C$ given arbitrarily, let the sequences $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ be generated iteratively by the scheme (1.22), where $\{\lambda_n\} \subset (0, 2/L)$ and $\{r_n\} \subset (0, \infty)$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\delta_n\}$ are four sequences in $[0, 1]$ such that $\alpha_n + \beta_n + \gamma_n + \delta_n = 1$ for all $n \ge 0$. It is proven that under very mild conditions, the sequences $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ converge strongly to the unique solution $x^*$ of the variational inequality of finding $x^* \in \Omega$ such that
$$\langle (I - Q)x^*, x - x^* \rangle \ge 0, \quad \forall x \in \Omega.$$
In other words, $x^*$ is the unique fixed point of the contraction $P_\Omega Q$, $x^* = P_\Omega Q(x^*)$. The result presented in this paper generalizes and improves some well-known results in the literature. Indeed, compared with some well-known results in the literature, our result improves and extends them in the following aspects.
(i) Compared with Xu [3, Theorem 3.2], a weak convergence result, our result is a strong convergence result.
(ii) Our problem of finding an element of $\mathrm{Fix}(T) \cap \mathrm{GMEP} \cap \Gamma$ is more general than the problem of finding an element of $\mathrm{Fix}(S) \cap \mathrm{VI}(C, A)$ in [14, 18, 22, 23, 25].
(iii) In our algorithm (1.22), Xu's modified gradient-projection algorithm in [3, Theorem 5.2] is rewritten as the second iteration step. The main purpose of using such an iteration step is to make the computation of an element of $\mathrm{Fix}(T) \cap \mathrm{GMEP} \cap \Gamma$ convenient and efficient. Therefore, Xu's algorithm (1.13) is extended to develop our algorithm (1.22).
(iv) Our problem of finding an element of $\mathrm{Fix}(T) \cap \mathrm{GMEP} \cap \Gamma$ is more general than the problem of finding an element of $\Gamma$ in Xu [3]. In addition, it is worth pointing out that Xu's conditions (iii) and (iv) in the above Theorem 1.2 are replaced by weaker conditions in our result (see Theorem 3.2 in Section 3).

2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and norm $\| \cdot \|$ and $C$ a nonempty closed convex subset of $H$. We write $x_n \to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$ and $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$. Moreover, we use $\omega_w(x_n)$ to denote the weak $\omega$-limit set of the sequence $\{x_n\}$, that is,
$$\omega_w(x_n) := \{ x \in H : x_{n_i} \rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\} \}.$$

For every point $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$
$P_C$ is called the metric projection of $H$ onto $C$. We know that $P_C$ is a firmly nonexpansive mapping of $H$ onto $C$; that is, there holds the following relation:
$$\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2, \quad \forall x, y \in H.$$
Consequently, $P_C$ is nonexpansive and monotone. It is also known that $P_C$ is characterized by the following properties: $P_C x \in C$ and, for all $x \in H$, $y \in C$,
$$\langle x - P_C x, y - P_C x \rangle \le 0,$$
$$\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2;$$
see [36] for more details. Let $A : C \to H$ be a monotone mapping. In the context of the variational inequality, this implies that
$$x \in \mathrm{VI}(C, A) \iff x = P_C(x - \lambda A x) \text{ for some } \lambda > 0.$$
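The variational characterization of the metric projection can be checked numerically. The sketch below samples random points and verifies $\langle x - P_C x, y - P_C x \rangle \le 0$ for the (assumed, purely illustrative) box $C = [-1, 1]^3$, where the projection is a coordinatewise clip.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_box(x, lo=-1.0, hi=1.0):
    # Metric projection onto the box C = [lo, hi]^3, computed coordinatewise.
    return np.clip(x, lo, hi)

ok = True
for _ in range(1000):
    x = rng.normal(size=3) * 3.0          # arbitrary point of H = R^3
    y = rng.uniform(-1.0, 1.0, size=3)    # arbitrary point of C
    px = project_box(x)
    # characterization of the projection: <x - P_C x, y - P_C x> <= 0 for all y in C
    ok = ok and np.dot(x - px, y - px) <= 1e-12
print(ok)  # True
```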

A set-valued mapping $T : H \to 2^H$ is called monotone if for all $x, y \in H$, $f \in Tx$ and $g \in Ty$ imply $\langle x - y, f - g \rangle \ge 0$. A monotone mapping $T : H \to 2^H$ is called maximal if its graph is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle x - y, f - g \rangle \ge 0$ for every $(y, g)$ in the graph of $T$ implies $f \in Tx$.

Let $A : C \to H$ be a monotone, $L$-Lipschitz-continuous mapping and let $N_C v$ be the normal cone to $C$ at $v \in C$, that is, $N_C v = \{ w \in H : \langle v - u, w \rangle \ge 0, \ \forall u \in C \}$. Define
$$Tv = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$
Then, $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in \mathrm{VI}(C, A)$; see [37].

Recall that a mapping $T : C \to C$ is called a strict pseudocontraction if there exists a constant $0 \le k < 1$ such that
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + k \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$
In this case, we also say that $T$ is a $k$-strict pseudocontraction. A mapping $A : C \to H$ is called $\alpha$-inverse strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^2, \quad \forall x, y \in C. \quad (2.8)$$
It is obvious that any $\alpha$-inverse strongly monotone mapping $A$ is $\frac{1}{\alpha}$-Lipschitz continuous. Meanwhile, observe that (2.8) is equivalent to
$$\|(I - A)x - (I - A)y\|^2 \le \|x - y\|^2 + (1 - 2\alpha) \|Ax - Ay\|^2, \quad \forall x, y \in C.$$
It is easy to see that if $T$ is a $k$-strictly pseudocontractive mapping, then $I - T$ is $\frac{1 - k}{2}$-inverse strongly monotone and hence $\frac{2}{1 - k}$-Lipschitz continuous. Thus, $T$ is Lipschitz continuous with constant $\frac{1 + k}{1 - k}$. We denote by $\mathrm{Fix}(T)$ the set of fixed points of $T$. It is clear that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings, which are mappings $T$ such that $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$.
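As an illustrative check (not from the paper), the linear map $T = -2I$ on $\mathbb{R}^4$ is a $k$-strict pseudocontraction with $k = 1/3$. The sketch below verifies both the defining inequality and the $\frac{1-k}{2}$-inverse strong monotonicity of $I - T$ on random samples.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 1.0 / 3.0
T = lambda x: -2.0 * x   # illustrative k-strict pseudocontraction on R^4 with k = 1/3

ok_spc, ok_ism = True, True
for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    u = (x - T(x)) - (y - T(y))           # (I - T)x - (I - T)y
    # defining inequality: ||Tx - Ty||^2 <= ||x - y||^2 + k ||(I-T)x - (I-T)y||^2
    ok_spc = ok_spc and np.dot(T(x) - T(y), T(x) - T(y)) \
        <= np.dot(x - y, x - y) + k * np.dot(u, u) + 1e-9
    # I - T is (1 - k)/2-inverse strongly monotone
    ok_ism = ok_ism and np.dot(u, x - y) >= (1 - k) / 2 * np.dot(u, u) - 1e-9
print(ok_spc, ok_ism)  # True True
```

For this linear example both inequalities in fact hold with equality, so $k = 1/3$ is the smallest admissible strict-pseudocontraction constant for $T = -2I$.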

In order to prove our main result in the next section, we need the following lemmas and propositions.

Lemma 2.1 (see [7]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\Theta : C \times C \to \mathbb{R}$ be a bifunction satisfying conditions (H1)–(H4) and let $\varphi : C \to \mathbb{R}$ be a lower semicontinuous and convex function. For $r > 0$ and $x \in H$, define a mapping $T_r^{(\Theta, \varphi)} : H \to C$ as follows:
$$T_r^{(\Theta, \varphi)}(x) = \Big\{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r} \langle y - z, z - x \rangle \ge 0, \ \forall y \in C \Big\}$$
for all $x \in H$. Assume that either (A1) or (A2) holds. Then the following conclusions hold:
(i) for each $x \in H$, $T_r^{(\Theta, \varphi)}(x) \ne \emptyset$ and $T_r^{(\Theta, \varphi)}$ is single-valued;
(ii) $T_r^{(\Theta, \varphi)}$ is firmly nonexpansive, that is, for any $x, y \in H$,
$$\|T_r^{(\Theta, \varphi)} x - T_r^{(\Theta, \varphi)} y\|^2 \le \langle T_r^{(\Theta, \varphi)} x - T_r^{(\Theta, \varphi)} y, x - y \rangle;$$
(iii) $\mathrm{Fix}(T_r^{(\Theta, \varphi)}) = \mathrm{MEP}(\Theta, \varphi)$;
(iv) $\mathrm{MEP}(\Theta, \varphi)$ is closed and convex.

Remark 2.2. If $\varphi = 0$, then $T_r^{(\Theta, \varphi)}$ is rewritten as $T_r^{\Theta}$.

The following lemma is an immediate consequence of an inner product.

Lemma 2.3. In a real Hilbert space $H$, there holds the inequality
$$\|x + y\|^2 \le \|x\|^2 + 2 \langle y, x + y \rangle, \quad \forall x, y \in H.$$

Proposition 2.4 (see [6, Proposition 2.1]). Let $C$, $H$, $\Theta$, $\varphi$, and $T_r^{(\Theta, \varphi)}$ be as in Lemma 2.1. Then the following relation holds:
$$\|T_s^{(\Theta, \varphi)} x - T_t^{(\Theta, \varphi)} x\| \le \frac{|s - t|}{s} \|T_s^{(\Theta, \varphi)} x - x\|$$
for all $s, t > 0$ and $x \in H$.

Recall that $T : C \to C$ is called a quasi-strict pseudocontraction if the fixed point set of $T$, $\mathrm{Fix}(T)$, is nonempty and if there exists a constant $0 \le k < 1$ such that
$$\|Tx - p\|^2 \le \|x - p\|^2 + k \|x - Tx\|^2, \quad \forall x \in C, \ p \in \mathrm{Fix}(T). \quad (2.15)$$
We also say that $T$ is a $k$-quasi-strict pseudocontraction if condition (2.15) holds.

Proposition 2.5 (see [2, Proposition 2.1]). Assume $C$ is a nonempty closed convex subset of a real Hilbert space $H$ and let $T : C \to C$ be a self-mapping on $C$.
(i) If $T$ is a $k$-strict pseudocontraction, then $T$ satisfies the Lipschitz condition
$$\|Tx - Ty\| \le \frac{1 + k}{1 - k} \|x - y\|, \quad \forall x, y \in C.$$
(ii) If $T$ is a $k$-strict pseudocontraction, then the mapping $I - T$ is demiclosed (at $0$). That is, if $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup \tilde{x}$ and $(I - T)x_n \to 0$, then $(I - T)\tilde{x} = 0$, that is, $\tilde{x} \in \mathrm{Fix}(T)$.
(iii) If $T$ is a $k$-quasi-strict pseudocontraction, then the fixed point set $\mathrm{Fix}(T)$ of $T$ is closed and convex so that the projection $P_{\mathrm{Fix}(T)}$ is well defined.

The following lemma was proved by Suzuki [30].

Lemma 2.6 (see [30]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$. Suppose
$$x_{n+1} = (1 - \beta_n) z_n + \beta_n x_n$$
for all integers $n \ge 0$ and
$$\limsup_{n \to \infty} \big( \|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \big) \le 0.$$
Then, $\lim_{n \to \infty} \|z_n - x_n\| = 0$.

Lemma 2.7 (see [34]). Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the condition
$$a_{n+1} \le (1 - s_n) a_n + s_n t_n + \epsilon_n, \quad n \ge 0,$$
where $\{s_n\}$, $\{t_n\}$, $\{\epsilon_n\}$ are sequences of real numbers such that
(i) $\{s_n\} \subset [0, 1]$ and $\sum_{n=0}^{\infty} s_n = \infty$ or, equivalently, $\prod_{n=0}^{\infty} (1 - s_n) = 0$;
(ii) $\limsup_{n \to \infty} t_n \le 0$, or $\sum_{n=0}^{\infty} |s_n t_n| < \infty$;
(iii) $\epsilon_n \ge 0$ for all $n \ge 0$ and $\sum_{n=0}^{\infty} \epsilon_n$ is convergent.
Then $\lim_{n \to \infty} a_n = 0$.
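A quick numeric illustration of Lemma 2.7, with the hypothetical choices $s_n = 1/(n+2)$, $t_n = 1/(n+1)$, and $\epsilon_n = 2^{-n}$ (which satisfy the lemma's hypotheses):

```python
# Hypothetical parameter choices satisfying the lemma's conditions:
# s_n = 1/(n+2) in [0,1] with divergent sum, t_n = 1/(n+1) -> 0, eps_n = 2^(-n) summable.
a = 1.0                      # a_0: any nonnegative starting value
for n in range(100000):
    s = 1.0 / (n + 2)
    t = 1.0 / (n + 1)
    eps = 0.5 ** n
    a = (1 - s) * a + s * t + eps
print(a < 1e-3)  # True: a_n -> 0 as the lemma guarantees
```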

3. Strong Convergence Theorem

In order to prove our main result, we shall need the following lemma given in [21].

Lemma 3.1. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T : C \to C$ be a $k$-strictly pseudocontractive mapping. Let $\gamma$ and $\delta$ be two nonnegative real numbers. Assume $(\gamma + \delta) k \le \gamma$. Then
$$\|\gamma (x - y) + \delta (Tx - Ty)\| \le (\gamma + \delta) \|x - y\|, \quad \forall x, y \in C.$$
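A numerical sanity check of an inequality of this type, $\|\gamma(x - y) + \delta(Tx - Ty)\| \le (\gamma + \delta)\|x - y\|$ under $(\gamma + \delta)k \le \gamma$, using the illustrative strict pseudocontraction $T = -2I$ with $k = 1/3$:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 1.0 / 3.0
T = lambda x: -2.0 * x   # illustrative k-strict pseudocontraction with k = 1/3

ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    gamma, delta = rng.uniform(0.0, 1.0, size=2)
    if (gamma + delta) * k > gamma:       # enforce the hypothesis (gamma + delta) k <= gamma
        continue
    lhs = np.linalg.norm(gamma * (x - y) + delta * (T(x) - T(y)))
    ok = ok and lhs <= (gamma + delta) * np.linalg.norm(x - y) + 1e-9
print(ok)  # True
```

For this example the hypothesis reduces to $\delta \le 2\gamma$, and the bound is attained with equality exactly on that boundary.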

We are now in a position to state and prove our main result.

Theorem 3.2. Let $C$ be a nonempty bounded closed convex subset of a real Hilbert space $H$. Let $\Theta : C \times C \to \mathbb{R}$ be a bifunction satisfying conditions (H1)–(H4) and $\varphi : C \to \mathbb{R}$ a lower semicontinuous and convex function with assumption (A1) or (A2). Suppose the minimization (1.5) is consistent and let $\Gamma$ denote its solution set. Assume the gradient $\nabla f$ is Lipschitzian with constant $L > 0$ and $A : C \to H$ is an $\alpha$-inverse strongly monotone mapping. Let $T : C \to C$ be a $k$-strictly pseudocontractive mapping such that $\Omega := \mathrm{Fix}(T) \cap \mathrm{GMEP} \cap \Gamma \ne \emptyset$. Let $Q : C \to C$ be a $\rho$-contraction with $\rho \in [0, 1)$. For $x_0 \in C$ given arbitrarily, let the sequences $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ be generated iteratively by the scheme (3.2), where $\{\lambda_n\} \subset (0, 2/L)$ and $\{r_n\} \subset (0, \infty)$, and $\{\alpha_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\delta_n\}$ are four sequences in $[0, 1]$ such that (i) and ; (ii) and ; (iii) and for all ; (iv) and ; (v) and ; (vi).
Then the sequences $\{x_n\}$, $\{y_n\}$, and $\{u_n\}$ converge strongly to the unique solution $x^*$ of the variational inequality of finding $x^* \in \Omega$ such that
$$\langle (I - Q)x^*, x - x^* \rangle \ge 0, \quad \forall x \in \Omega.$$
In other words, $x^*$ is the unique fixed point of the contraction $P_\Omega Q$, $x^* = P_\Omega Q(x^*)$.

Proof. First it is obvious that there hold the following assertions:(a) solves the minimization (1.5);(b) solves the fixed point equation where is any fixed positive number;(c) solves the variational inequality of finding such that
where its solution set is denoted by .
We divide the proof into several steps.
Step  1. We claim that .
Indeed, first, we can write (3.2) as , for all , where . It follows that From Lemma 3.1 and (3.2), we get Let be a sequence of mappings defined as in Lemma 2.1. Note that the $L$-Lipschitz continuity of $\nabla f$ implies that the gradient $\nabla f$ is $\frac{1}{L}$-ism [31]. Since and are -inverse strongly monotone mappings, respectively, we have It is clear that if and , then and are nonexpansive. It follows from that Then, So, from (3.6), (3.7), and (3.10), we have Utilizing Proposition 2.4 and condition (ii), we have This implies that Hence by Lemma 2.6, we get . Consequently, Step 2. We claim that and .
Indeed, let . Then we have ,   and Hence from (3.8), we have It follows from (3.2), (3.16), and (3.17) that Utilizing the convexity of , we have where is some appropriate constant. So, from (3.18) and (3.19), it follows that Therefore, Since ,  ,   and , we have Step  3. We claim that .
Indeed, set . Noticing the firm nonexpansivity of , we have Thus, we have It follows that From (3.18), (3.19), and (3.25), we have It follows that Note that ,   and . Then we immediately deduce that From (3.19) and (3.27), we have So, we obtain Note that ,   and . Then we immediately conclude that This together with , implies that Thus, from (3.30) and (3.34), we deduce that Since Therefore, Step  4. We claim that where .
Indeed, since is bounded, there exists a subsequence of such that and We can obtain that . First, we show . Since and , we conclude that and . Let where is the normal cone to at . We have already mentioned that in this case, the mapping is maximal monotone, and if and only if ; see [37]. Let be the graph of and let . Then, we have and hence . So, we have for all . On the other hand, from and , we have and hence From for all and , we have Hence, we obtain as . Since is maximal monotone, we have and hence .
Secondly, let us show . Since and , we have . Also, since , it follows that as . So, in terms of Proposition 2.5(ii) we obtain .
Next, let us show . From , we know that From (H2), it follows that Replacing by , we have Put for all and . Then, we have . So, from (3.45), we have