Abstract

The gradient projection algorithm plays an important role in solving constrained convex minimization problems. In general, the gradient projection algorithm has only weak convergence in infinite-dimensional Hilbert spaces. Recently, H. K. Xu (2011) provided two modified gradient projection algorithms which have strong convergence. Motivated by Xu's work, in the present paper, we suggest three simpler variant gradient projection methods for which strong convergence is guaranteed.

1. Introduction

Let $H$ be a real Hilbert space and $C$ a nonempty closed and convex subset of $H$. Let $f : C \to \mathbb{R}$ be a real-valued convex function. Now we consider the following constrained convex minimization problem:
$$\min_{x \in C} f(x). \tag{1.1}$$
Assume that (1.1) is consistent; that is, it has a solution, and we use $S$ to denote its solution set. If $f$ is Fréchet differentiable, then $x^* \in C$ solves (1.1) if and only if $x^*$ satisfies the following optimality condition:
$$\langle \nabla f(x^*),\, x - x^* \rangle \ge 0, \quad \forall x \in C, \tag{1.2}$$
where $\nabla f$ denotes the gradient of $f$. Note that (1.2) can be rewritten as
$$\langle x^* - \bigl(x^* - \gamma \nabla f(x^*)\bigr),\, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1.3}$$
This shows that the minimization (1.1) is equivalent to the fixed-point problem
$$P_C\bigl(x^* - \gamma \nabla f(x^*)\bigr) = x^*, \tag{1.4}$$
where $\gamma > 0$ is any constant and $P_C$ is the nearest point projection from $H$ onto $C$. By using this relationship, the gradient-projection algorithm is usually applied to solve the minimization problem (1.1). This algorithm generates a sequence $\{x_n\}$ through the recursion
$$x_{n+1} = P_C\bigl(x_n - \gamma_n \nabla f(x_n)\bigr), \quad n \ge 0, \tag{1.5}$$
where the initial guess $x_0 \in C$ is chosen arbitrarily and $\{\gamma_n\}$ is a sequence of step sizes which may be chosen in different ways. The gradient-projection algorithm (1.5) is a powerful tool for solving constrained convex optimization problems and has been well studied in the case of constant step sizes $\gamma_n = \gamma$ for all $n$. The reader can refer to [1-9]. It has recently been applied to solve split feasibility problems, which find applications in image reconstruction and intensity-modulated radiation therapy (see [10-17]).
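As a concrete illustration of the recursion (1.5), the following minimal sketch applies the projected-gradient step to a toy least-squares objective over a box constraint; the data $A$, $b$, the box, and the constant step size are illustrative choices only.

```python
import numpy as np

# Toy instance: f(x) = 0.5 * ||A x - b||^2 over the box C = [0, 1]^5.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def grad_f(x):
    return A.T @ (A @ x - b)           # gradient of the least-squares objective

def proj_C(x):
    return np.clip(x, 0.0, 1.0)        # metric projection onto the box [0, 1]^5

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad_f
gamma = 1.0 / L                        # constant step size in (0, 2/L)

x = np.zeros(5)                        # arbitrary initial guess x_0 in C
for n in range(200):
    x = proj_C(x - gamma * grad_f(x))  # gradient-projection step (1.5)
```

With a constant step size in $(0, 2/L)$ the iterates approach a minimizer; in infinite-dimensional spaces, however, the convergence is in general only weak, which is the issue addressed below.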

It is known [3] that if $f$ has a Lipschitz continuous and strongly monotone gradient, then the sequence $\{x_n\}$ can be strongly convergent to a minimizer of $f$ in $C$. If the gradient of $f$ is only assumed to be Lipschitz continuous, then $\{x_n\}$ can only be weakly convergent if $H$ is infinite-dimensional. This naturally gives rise to a question.

Question 1. How can the gradient projection algorithm be appropriately modified so as to guarantee strong convergence?

For this purpose, Xu [18] recently introduced the modification (1.6): …, where the sequences $\{\theta_n\}$ and $\{\gamma_n\}$ satisfy the following conditions: (i) …; (ii) …. Xu [18] proved that the sequence $\{x_n\}$ generated by (1.6) converges strongly to a minimizer of (1.1).

Remark 1.1. Xu's modification (1.6) is a convex combination of the gradient-projection step (1.5) and a self-mapping $h$, usually referred to as a viscosity term.
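In our notation, a viscosity-type modification of this kind takes the generic form (the precise formulation and the parameter conditions of (1.6) are as given in [18]):
$$x_{n+1} = \theta_n h(x_n) + (1 - \theta_n) P_C\bigl(x_n - \gamma_n \nabla f(x_n)\bigr), \qquad n \ge 0,$$
where $h$ is a contraction, $\theta_n \in (0, 1)$, and $\gamma_n$ are step sizes.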

In [18], Xu presented another modification as follows: …. Xu [18] proved that algorithm (1.7) also converges strongly to a point which solves the minimization problem (1.1).

Remark 1.2. Algorithm (1.7) involves additional projections which couple the gradient projection method (1.5) with the so-called CQ method.
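For readers unfamiliar with the CQ technique, a generic hybrid step of this kind reads as follows (our notation, given for orientation only; the precise construction of (1.7) is as in [18]):
$$\begin{aligned} y_n &= P_C\bigl(x_n - \gamma_n \nabla f(x_n)\bigr),\\ C_n &= \{\, z \in C : \|y_n - z\| \le \|x_n - z\| \,\},\\ Q_n &= \{\, z \in C : \langle x_n - z,\, x_0 - x_n \rangle \ge 0 \,\},\\ x_{n+1} &= P_{C_n \cap Q_n}(x_0). \end{aligned}$$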

It should be pointed out that Xu's modifications (1.6) and (1.7) are interesting and provide us with a direction for solving (1.1) in infinite-dimensional Hilbert spaces.

Motivated by Xu's work, in the present paper we suggest three variant gradient projection methods for which strong convergence is guaranteed when solving (1.1) in infinite-dimensional Hilbert spaces. Our motivation lies mainly in the following two respects.

Reason 1. The solution of the minimization problem (1.1) is not always unique; there may be many solutions. In that case, a particular solution (e.g., the minimum norm solution) must be selected from among the candidate solutions. The minimum norm problem is motivated by the following least squares solution to the constrained linear inverse problem:
$$Ax = b, \quad x \in C, \tag{1.8}$$
where $C$ is a nonempty closed convex subset of a real Hilbert space $H_1$, $A$ is a bounded linear operator from $H_1$ to another real Hilbert space $H_2$, $A^*$ is the adjoint of $A$, and $b$ is a given point in $H_2$. The least-squares solution to (1.8) is the least-norm minimizer of the minimization problem
$$\min_{x \in C} f(x) = \frac{1}{2}\|Ax - b\|^2. \tag{1.9}$$
For some related works, please see Solodov and Svaiter [19], Goebel and Kirk [20], and Martinez-Yanes and Xu [21].
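For the least-squares objective in (1.9), the gradient and the Lipschitz constant can be computed explicitly (a standard calculation, recorded here for completeness):
$$\nabla f(x) = A^*(Ax - b), \qquad \|\nabla f(x) - \nabla f(y)\| = \|A^*A(x - y)\| \le \|A\|^2\|x - y\|,$$
so $\nabla f$ is $L$-Lipschitzian with $L = \|A\|^2$, and any constant step size $\gamma \in (0, 2/\|A\|^2)$ is admissible in the gradient-projection method (1.5).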

Reason 2. Projection methods are used extensively in a variety of methods in optimization theory. Apart from theoretical interest, the main advantage of projection methods, which makes them successful in real-world applications, is computational (see [22-31]). In this respect, (1.7) is particularly useful. But we observe that (1.7) involves two half-spaces $C_n$ and $Q_n$. If the sets $C_n$ and $Q_n$ are simple enough, then the projections $P_{C_n}$ and $P_{Q_n}$ are easily executed. But the intersection $C_n \cap Q_n$ may be complicated, so that the projection $P_{C_n \cap Q_n}$ is not easily executed. This might seriously affect the efficiency of the method. Hence, it is of interest to relax $C_n$ or $Q_n$ in (1.7).
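The computational point can be made concrete: the projection onto a single half-space has a closed form, whereas projecting onto an intersection such as $C_n \cap Q_n$ generally does not. A minimal sketch of the half-space case (the function name and the data are ours):

```python
import numpy as np

def proj_halfspace(x, a, beta):
    """Project x onto the half-space {z : <a, z> <= beta} (closed-form formula)."""
    violation = a @ x - beta
    if violation <= 0.0:
        return x                               # x already lies in the half-space
    return x - (violation / (a @ a)) * a       # shift back along the normal a

# Example: project (2, 2) onto {z : z_1 + z_2 <= 1}; the result is (0.5, 0.5).
p = proj_halfspace(np.array([2.0, 2.0]), np.array([1.0, 1.0]), 1.0)
```

Projecting onto an intersection of several such sets is, in general, no longer available in closed form and typically requires an inner iterative scheme (e.g., Dykstra-type projections), which is exactly the overhead one would like to avoid.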

In the present paper, we suggest the following three methods: …. We will show that (1.10) can be used to find the minimum norm solution of the minimization problem (1.1), and that (1.11), which involves only …, also has strong convergence.

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is called nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$. Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted $P_C$, assigns to each $x \in H$ the unique point $P_C x \in C$ with the property $\|x - P_C x\| = \inf_{y \in C}\|x - y\|$. It is well known that the metric projection $P_C$ of $H$ onto $C$ has the following basic properties: (i) $\|P_C x - P_C y\| \le \|x - y\|$ for all $x, y \in H$; (ii) $\langle x - P_C x,\, y - P_C x\rangle \le 0$ for every $x \in H$ and $y \in C$; (iii) $\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2$ for all $x \in H$, $y \in C$. Next we adopt the following notation: (i) $x_n \to x$ means that $\{x_n\}$ converges strongly to $x$; (ii) $x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$; (iii) $\omega_w(x_n) = \{x : \exists\, \{x_{n_j}\} \subset \{x_n\} \text{ such that } x_{n_j} \rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.
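As a quick numerical check of properties (i) and (ii) above, with the closed unit ball as an illustrative choice of $C$:

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Metric projection onto the closed ball {z : ||z|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

rng = np.random.default_rng(1)
x, y = rng.standard_normal(3), rng.standard_normal(3)

# (i) nonexpansiveness: ||P_C x - P_C y|| <= ||x - y||
assert np.linalg.norm(proj_ball(x) - proj_ball(y)) <= np.linalg.norm(x - y) + 1e-12

# (ii) characterization: <x - P_C x, y - P_C x> <= 0 for every point y of C
y_in_C = proj_ball(y)
assert (x - proj_ball(x)) @ (y_in_C - proj_ball(x)) <= 1e-12
```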

Lemma 2.1 (see [32] (Demiclosedness Principle)). Let $C$ be a closed and convex subset of a Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$.
In particular, if $y = 0$, then $x \in \operatorname{Fix}(T)$.

Lemma 2.2 (see [33]). Let $C$ be a closed convex subset of $H$. Let $\{x_n\}$ be a sequence in $H$ and $u \in H$. If $q = P_C u$ and $\{x_n\}$ is such that $\omega_w(x_n) \subset C$ and satisfies the condition $\|x_n - u\| \le \|u - q\|$ for all $n$, then $x_n \to q$.

Lemma 2.3 (see [34]). Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that $a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n$, $n \ge 0$, where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that (1) $\sum_{n=0}^{\infty}\gamma_n = \infty$; (2) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}|\gamma_n\delta_n| < \infty$. Then $\lim_{n\to\infty}a_n = 0$.
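Lemma 2.3 is the main tool behind the strong-convergence estimates in Section 3; the short simulation below illustrates the recursion it governs with the illustrative choices $\gamma_n = 1/(n+2)$ and $\delta_n = 1/(n+1)$, which satisfy conditions (1) and (2).

```python
# Illustration of Lemma 2.3: a_{n+1} = (1 - g_n) a_n + g_n d_n with
# g_n = 1/(n+2) (so sum g_n diverges) and d_n = 1/(n+1) (so limsup d_n <= 0).
a = 1.0
for n in range(100_000):
    g = 1.0 / (n + 2)
    d = 1.0 / (n + 1)
    a = (1.0 - g) * a + g * d
print(a)   # tends to 0 as n grows, as Lemma 2.3 guarantees
```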

Lemma 2.4 (see [35]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $E$, and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose $x_{n+1} = (1 - \beta_n)z_n + \beta_n x_n$ for all $n \ge 0$, and $\limsup_{n\to\infty}\bigl(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\bigr) \le 0$. Then $\lim_{n\to\infty}\|z_n - x_n\| = 0$.

3. Main Results

In this section, we will state and prove our main results.

Theorem 3.1. Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume that the solution set $S$ of (1.1) is nonempty. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $h : C \to H$ be a $\rho$-contraction with $\rho \in [0, 1)$. Let $\{x_n\}$ be a sequence generated by the following hybrid gradient projection algorithm:
$$x_{n+1} = P_C\bigl[\alpha_n h(x_n) + (1 - \alpha_n)\bigl(x_n - \gamma_n\nabla f(x_n)\bigr)\bigr], \quad n \ge 0, \tag{3.1}$$
where the sequences $\{\alpha_n\}$ and $\{\gamma_n\}$ satisfy the following conditions: (i) …; (ii) …. Then the sequence $\{x_n\}$ generated by (3.1) converges to a minimizer $x^*$ of (1.1) which is the unique solution of the following variational inequality:
$$x^* \in S, \quad \langle (I - h)x^*,\, x - x^*\rangle \ge 0, \quad \forall x \in S. \tag{3.2}$$
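A minimal numerical sketch of iteration (3.1), reusing the toy least-squares objective and box constraint from the sketch after (1.5); the contraction $h$ and the parameter sequences below are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

grad_f = lambda x: A.T @ (A @ x - b)      # gradient of 0.5*||Ax - b||^2
proj_C = lambda x: np.clip(x, 0.0, 1.0)   # projection onto the box [0, 1]^5
L = np.linalg.norm(A, 2) ** 2

h = lambda x: 0.5 * x + 0.1               # an illustrative (1/2)-contraction from C into H

x = np.zeros(5)
for n in range(500):
    alpha = 1.0 / (n + 2)                 # alpha_n -> 0 with divergent sum (illustrative)
    gamma = 1.0 / L                       # step size kept inside (0, 2/L)
    # hybrid gradient projection step (3.1)
    x = proj_C(alpha * h(x) + (1 - alpha) * (x - gamma * grad_f(x)))
```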

Proof. Take any $x^* \in S$. Since $x^*$ solves the minimization problem (1.1) if and only if it solves the fixed-point equation $x^* = P_C\bigl(x^* - \gamma\nabla f(x^*)\bigr)$ for any fixed positive number $\gamma$, we have $x^* = P_C\bigl(x^* - \gamma_n\nabla f(x^*)\bigr)$ for all $n$. From condition (ii), there exist two constants $a$ and $b$ such that $0 < a \le \gamma_n \le b < 2/L$ for sufficiently large $n$; without loss of generality, we can assume this holds for all $n$. Since $\alpha_n \to 0$, without loss of generality, we can also assume that $\alpha_n \in (0, 1)$ for all $n$. Hence, $P_C(I - \gamma_n\nabla f)$ is nonexpansive.
From (3.1), we get … Thus, we deduce by induction that … This indicates that the sequence $\{x_n\}$ is bounded, and so are the sequences $\{h(x_n)\}$ and $\{\nabla f(x_n)\}$. Then, we can choose a constant $M > 0$ such that … Next, we estimate $\|x_{n+1} - x_n\|$. By (3.1), we have … Then, we can combine the last inequality and Lemma 2.3 to conclude that $\|x_{n+1} - x_n\| \to 0$. Now we show that the weak limit set $\omega_w(x_n) \subset S$. Choose any $\hat{x} \in \omega_w(x_n)$. Then there must exist a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j} \rightharpoonup \hat{x}$. At the same time, the real number sequence $\{\gamma_{n_j}\}$ is bounded. Thus, there exists a subsequence of $\{\gamma_{n_j}\}$ which converges to some $\gamma \in [a, b]$. Without loss of generality, we may assume that $\gamma_{n_j} \to \gamma$. Note that … So, … That is, … as $j \to \infty$. Next, we only need to show that $\hat{x} \in S$. First, from (3.8) we have that … Then, we have … Since $\gamma \in (0, 2/L)$, $P_C(I - \gamma\nabla f)$ is nonexpansive. It then follows from Lemma 2.1 (demiclosedness principle) that $\hat{x} \in \operatorname{Fix}\bigl(P_C(I - \gamma\nabla f)\bigr)$. Hence, $\hat{x} \in S$ because of (1.4). So, $\omega_w(x_n) \subset S$.
Finally, we prove that $x_n \to x^*$, where $x^*$ is the unique solution of the VI (3.2). First, we show that $\limsup_{n\to\infty}\langle h(x^*) - x^*,\, x_n - x^*\rangle \le 0$. Observe that there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ satisfying … Since $\{x_{n_j}\}$ is bounded, there exists a subsequence of $\{x_{n_j}\}$ which converges weakly to a point $\hat{x}$. Without loss of generality, we assume that $x_{n_j} \rightharpoonup \hat{x}$. Then, we obtain … By using the property (ii) of $P_C$, we have … It follows that … From Lemma 2.3, (3.11), and (3.13), we deduce that $x_n \to x^*$. This completes the proof.

From Theorem 3.1, we obtain immediately the following theorem.

Theorem 3.2. Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume $S \ne \emptyset$. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $\{x_n\}$ be a sequence generated by the following hybrid gradient projection algorithm:
$$x_{n+1} = P_C\bigl[(1 - \alpha_n)\bigl(x_n - \gamma_n\nabla f(x_n)\bigr)\bigr], \quad n \ge 0, \tag{3.14}$$
where the sequences $\{\alpha_n\}$ and $\{\gamma_n\}$ satisfy the following conditions: (i) …; (ii) …. Then the sequence $\{x_n\}$ generated by (3.14) converges to a minimizer of (1.1) which is the minimum norm element in $S$.
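In the sketch given after Theorem 3.1, the minimum norm variant (3.14) corresponds simply to switching the contraction off; the following is the only change (again illustrative):

```python
import numpy as np

h = lambda x: np.zeros_like(x)   # taking h = 0 in the sketch of (3.1) yields (3.14);
                                 # the limit is then the minimum norm element of S
```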

Proof. In Theorem 3.1, we note that $h$ is a non-self mapping from $C$ to the whole space $H$. Hence, if we choose $h \equiv 0$, then algorithm (3.1) reduces to (3.14), and the sequence $\{x_n\}$ converges strongly to $x^* = P_S(0)$, which is obviously the minimum norm element in $S$. The proof is completed.

Next, we suggest another simple algorithm for dropping the assumption .

Theorem 3.3. Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume $S \ne \emptyset$. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $\{x_n\}$ be a sequence generated by the following hybrid gradient projection algorithm: …, where $\gamma \in (0, 2/L)$ is a constant and the sequences satisfy the following conditions: (1) …; (2) …. Then the sequence $\{x_n\}$ generated by (3.15) converges to a minimizer of (1.1) which is the minimum norm element in $S$.

Proof. Claim 1. The sequence is bounded.
Take . Then we have By induction,
Claim 2. and .
By an argument similar to that in [18, page 366], we can write $P_C(I - \gamma\nabla f) = (1 - s)I + sT$, where $T$ is nonexpansive and $s = (2 + \gamma L)/4 \in (0, 1)$. Then we can rewrite (3.15) as … where … It follows that … So, … This together with Lemma 2.4 implies that … Thus, … Note that … Therefore, … Now, repeating the proof of Theorem 3.1, we conclude that $\omega_w(x_n) \subset S$.
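For completeness, we recall why such a rewriting is possible (a standard averagedness computation, under the assumption $0 < \gamma < 2/L$): since $f$ is convex with an $L$-Lipschitz gradient, the Baillon-Haddad theorem implies that $\nabla f$ is $\tfrac{1}{L}$-inverse strongly monotone, so $I - \gamma\nabla f$ is $\tfrac{\gamma L}{2}$-averaged; the projection $P_C$ is $\tfrac{1}{2}$-averaged, and the composition of averaged mappings is averaged. Hence
$$P_C\bigl(I - \gamma\nabla f\bigr) = (1 - s)I + sT, \qquad s = \frac{2 + \gamma L}{4} \in \Bigl(\tfrac{1}{2},\, 1\Bigr),$$
for some nonexpansive mapping $T$, whenever $\gamma \in (0, 2/L)$.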
Claim 3. …, where $\tilde{x} = P_S(0)$ is the minimum norm element in $S$.
Observe that there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ satisfying … Since $\{x_{n_j}\}$ is bounded, there exists a subsequence of $\{x_{n_j}\}$ which converges weakly to a point $\hat{x}$. Without loss of generality, we assume that $x_{n_j} \rightharpoonup \hat{x}$. Then, we obtain …
Claim 4. $x_n \to \tilde{x}$. From (3.15), we have … It is obvious that … Then we can apply Lemma 2.3 to the last inequality to conclude that $x_n \to \tilde{x}$. The proof is completed.

Next, we suggest another algorithm obtained by applying additional projections to the gradient projection algorithm. We show that this algorithm has strong convergence.

Theorem 3.4. Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume $S \ne \emptyset$. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $\gamma \in (0, 2/L)$. For … and …, define a sequence $\{x_n\}$ as follows: … where the sequence … satisfies the condition …. Then the sequence $\{x_n\}$ generated by (3.30) converges to $P_S(x_0)$.

Proof. It is obvious that is convex. For any , we have This implies that . Hence, . From , we have Since , we have So, for , we have Hence, This implies that is bounded.
From and , we have Hence, and therefore This implies that
From (3.36) and (3.39), we obtain By the fact , we get Therefore, from (3.40) and (3.41), we deduce Now (3.42) and Lemma 2.1 guarantee that every weak limit point of is a fixed point of . That is, . At the same time, if we choose in (3.35), we have This fact and Lemma 2.2 ensure the strong convergence of to . This completes the proof.

Now we give some remarks on our variant gradient projection methods.

Remark 3.5. Under the same control parameters, the gradient projection methods (3.1) and (1.6) are both strongly convergent. However, (3.1) appears to have an advantage over (1.6), since $h$ is allowed to be a non-self-mapping from $C$ into $H$.

Remark 3.6. The gradient projection method (3.14) is similar to (1.5), using $(1 - \alpha_n)\bigl(x_n - \gamma_n\nabla f(x_n)\bigr)$ instead of $x_n - \gamma_n\nabla f(x_n)$. But (3.14) has strong convergence; in particular, (3.14) converges strongly to the minimum norm element of $S$.

Remark 3.7. The advantage of the gradient projection method (3.15) is that it has strong convergence under weaker assumptions on the parameters.

Remark 3.8. The gradient projection method (3.30) is simpler than (1.7).

Acknowledgments

Y. Yao was supported in part by NSFC 11071279 and NSFC 71161001-G0105. Y.-C. Liou was supported in part by NSC 100-2221-E-230-012. C.-F. Wen was supported in part by NSC 100-2115-M-037-001.