International Journal of Mathematics and Mathematical Sciences


Research Article | Open Access

Volume 2018 | Article ID 1580837 | 10 pages | https://doi.org/10.1155/2018/1580837

On Solving of Constrained Convex Minimize Problem Using Gradient Projection Method

Academic Editor: Jean Michel Rakotoson
Received: 16 May 2018
Accepted: 13 Aug 2018
Published: 01 Oct 2018

Abstract

Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $f : C \to \mathbb{R}$ be a strictly convex real-valued function whose gradient $\nabla f$ is a $\frac{1}{L}$-ism with a constant $L > 0$. In this paper, we introduce an iterative scheme using the gradient projection method, based on Mann's type approximation scheme, for solving the constrained convex minimization problem (CCMP), that is, the problem of finding a minimizer $x^*$ of the function $f$ over the set $C$. As an application, it is shown that the problem (CCMP) reduces to the split feasibility problem (SFP), which is to find $x^* \in C$ such that $Ax^* \in Q$, where $A : H_1 \to H_2$ is a bounded linear operator. We suggest and analyze this iterative scheme under appropriate conditions imposed on the parameters, so that new strong convergence theorems for the CCMP and the SFP are obtained. The results presented in this paper improve and extend the main results of Tian and Zhang (2017) and many others. Data for the proposed SFP are provided, and a numerical example of this problem is also presented.

1. Introduction

Throughout this paper, we always assume that $C$ is a nonempty closed convex subset of a real Hilbert space $H$ whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $f : C \to \mathbb{R}$ be a strictly convex real-valued function.

Consider the following constrained convex minimization problem (CCMP):
$$\min_{x \in C} f(x). \qquad (1)$$
Assume that (1) is consistent (that is, the CCMP has a solution), and we use $\Omega$ to denote its solution set. If $f$ is Fréchet differentiable, then the gradient projection algorithm (GPA) is usually applied to solve the CCMP (1); it generates a sequence $\{x_n\}$ through the recursion
$$x_{n+1} = P_C(x_n - \gamma \nabla f(x_n)), \quad n \ge 0, \qquad (2)$$
or, more generally,
$$x_{n+1} = P_C(x_n - \gamma_n \nabla f(x_n)), \quad n \ge 0, \qquad (3)$$
where the initial guess $x_0 \in C$ is chosen arbitrarily, the parameters $\gamma$ and $\gamma_n$ are positive real numbers, and $P_C$ is the metric projection from $H$ onto $C$. It is well known that the convergence of algorithms (2) and (3) depends on the behavior of the gradient $\nabla f$. It is known from Levitin and Polyak [1] that if $\nabla f$ is $\eta$-strongly monotone and $L$-Lipschitzian, that is, there exist constants $\eta > 0$ and $L > 0$ such that
$$\langle \nabla f(x) - \nabla f(y), x - y \rangle \ge \eta \|x - y\|^2, \quad \forall x, y \in C, \qquad (4)$$
and
$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|, \quad \forall x, y \in C, \qquad (5)$$
respectively, then, for $0 < \gamma < 2\eta/L^2$, the operator
$$Tx = P_C(x - \gamma \nabla f(x)) \qquad (6)$$
is a contraction; hence, the sequence $\{x_n\}$ defined by the GPA (2) converges in norm to the unique minimizer of the CCMP (1). More generally, for $0 < \gamma_n < 2\eta/L^2$ for all $n$, the operator
$$T_n x = P_C(x - \gamma_n \nabla f(x)) \qquad (7)$$
is a contraction; if the sequence $\{\gamma_n\}$ is chosen satisfying the property
$$0 < \liminf_{n \to \infty} \gamma_n \le \limsup_{n \to \infty} \gamma_n < \frac{2\eta}{L^2}, \qquad (8)$$
then the sequence $\{x_n\}$ defined by the GPA (3) converges in norm to the unique minimizer of the CCMP (1). However, if the gradient $\nabla f$ fails to be $\eta$-strongly monotone (that is, $\nabla f$ only satisfies the $L$-Lipschitzian condition), then the operators $T$ and $T_n$ defined by (6) and (7), respectively, may fail to be contractions; consequently, the sequences generated by algorithms (2) and (3) may fail to converge strongly (see also Xu [2]) in the setting of an infinite-dimensional real Hilbert space, although they still converge weakly, as the following statement shows.
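To make the recursion concrete, the following minimal sketch (a toy example of our own, not from the paper) runs the fixed-step GPA (2) on the problem of minimizing $f(x) = \frac{1}{2}\|x - b\|^2$ over the box $C = [0,1]^2$. Here $\nabla f(x) = x - b$ is $1$-Lipschitzian and $1$-strongly monotone, so any fixed step $\gamma \in (0, 2)$ yields a contraction.

```python
# Toy CCMP: minimize f(x) = 1/2 * ||x - b||^2 over the box C = [0, 1]^2.
# grad f(x) = x - b is 1-Lipschitz and 1-strongly monotone, so a fixed
# step gamma in (0, 2\eta/L^2) = (0, 2) gives norm convergence.

def project_box(x, lo=0.0, hi=1.0):
    """Metric projection onto the box [lo, hi]^n (componentwise clipping)."""
    return [min(max(xi, lo), hi) for xi in x]

def grad_f(x, b):
    return [xi - bi for xi, bi in zip(x, b)]

def gpa(b, gamma=1.0, iters=100):
    x = [0.0] * len(b)                      # arbitrary initial guess in C
    for _ in range(iters):
        g = grad_f(x, b)
        x = project_box([xi - gamma * gi for xi, gi in zip(x, g)])
    return x

# b lies outside C, so the minimizer is the projection of b onto C.
print(gpa([2.0, -0.5]))  # -> [1.0, 0.0]
```

When $b$ already lies in $C$, the same iteration returns $b$ itself, since the unconstrained minimizer is feasible.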

Theorem 1 (see [1, 2]). Assume that the CCMP (1) is consistent with a unique solution. Let the gradient $\nabla f$ satisfy the $L$-Lipschitzian condition and let the sequence of parameters $\{\gamma_n\}$ satisfy the following condition:
$$0 < a \le \gamma_n \le b < \frac{2}{L} \qquad (9)$$
for all $n \ge 0$, where $a$ and $b$ are constants. Then the sequence $\{x_n\}$ generated by the GPA (3) converges weakly to the minimizer of the CCMP (1). Indeed, the results of this theorem still hold when the gradient $\nabla f$ is $\frac{1}{L}$-inverse strongly monotone with $L > 0$ (in brief, $\frac{1}{L}$-ism), that is,
$$\langle \nabla f(x) - \nabla f(y), x - y \rangle \ge \frac{1}{L}\|\nabla f(x) - \nabla f(y)\|^2, \quad \forall x, y \in C,$$
because the class of $L$-Lipschitzian mappings contains the class of $\frac{1}{L}$-ism mappings.

We observe from Theorem 1 that if the parameter $\gamma_n$ converges to some $\gamma$ satisfying condition (9), then $x^*$ solves the CCMP (1), which has a unique solution, if and only if $x^*$ solves the fixed-point equation
$$x^* = P_C(x^* - \gamma \nabla f(x^*)). \qquad (10)$$

It is well known that the gradient projection algorithm is very useful in dealing with the CCMP (1) and has been studied extensively (see [1–9] and the references therein). It has recently been applied to solve split feasibility problems (SFP) (see [10–15]), which find applications in image reconstruction and intensity-modulated radiation therapy (see [13–18]). We now consider the following regularized minimization problem (so that the regularized CCMP (1) has a unique minimizer):
$$\min_{x \in C} f_{\alpha}(x) := f(x) + \frac{\alpha}{2}\|x\|^2, \qquad (11)$$
where $\alpha > 0$ is the regularization parameter and $f$ is a continuously differentiable function, and we also consider the regularized gradient projection algorithm, which generates a sequence $\{x_n\}$ by the recursive formula
$$x_{n+1} = P_C\bigl(x_n - \gamma(\nabla f(x_n) + \alpha_n x_n)\bigr), \quad n \ge 0, \qquad (12)$$
where $\alpha_n > 0$ for all $n$.
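The regularized recursion (12) can be sketched numerically. The example below is our own toy construction: $f(x) = \frac{1}{2}(x_1 + x_2 - 1)^2$ over $C = [-2, 2]^2$ has a whole line segment of minimizers, and a vanishing regularization $\alpha_n = 1/\sqrt{n}$ steers the iterates toward the minimum-norm minimizer $(0.5, 0.5)$.

```python
# Sketch of the regularized recursion (12):
#   x_{n+1} = P_C(x_n - gamma * (grad f(x_n) + alpha_n * x_n)),
# on a toy problem with non-unique minimizers:
#   f(x) = 1/2 * (x1 + x2 - 1)^2 over C = [-2, 2]^2.
# Regularization selects the minimum-norm minimizer (0.5, 0.5).

def project_box(x, lo=-2.0, hi=2.0):
    return [min(max(xi, lo), hi) for xi in x]

def grad_f(x):
    s = x[0] + x[1] - 1.0
    return [s, s]                     # grad f is 2-Lipschitz here

def regularized_gpa(x0, gamma=0.25, iters=5000):
    x = list(x0)
    for n in range(1, iters + 1):
        alpha = 1.0 / n ** 0.5        # alpha_n -> 0, slowly enough to matter
        g = grad_f(x)
        x = project_box([xi - gamma * (gi + alpha * xi)
                         for xi, gi in zip(x, g)])
    return x

print(regularized_gpa([2.0, -1.0]))   # approximately [0.5, 0.5]
```

Without the $\alpha_n x_n$ term the iterates would stop at an arbitrary point of the minimizing segment; the regularization is what forces the limit toward the minimum-norm solution.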

Many researchers have studied strong convergence theorems for solving the CCMP (1) by means of the sequence generated by algorithm (12), in the cases where the gradient $\nabla f$ belongs to the class of nonexpansive mappings or the class of $L$-Lipschitzian mappings (see [19–25]). In the case where $\nabla f$ belongs to the class of $\frac{1}{L}$-ism mappings with $L > 0$, Xu (2010) introduced the sequence generated by algorithm (12) and proved, under appropriate conditions, that this sequence converges weakly to the minimizer of the CCMP (1) in the setting of an infinite-dimensional real Hilbert space (see [15]).

Recently, Tian and Zhang (2017) introduced the sequence generated by algorithm (12) and proved that this sequence converges strongly to the minimizer of the CCMP (1) in the same setting of an infinite-dimensional real Hilbert space (see [26]), under the control conditions (i)–(iii) imposed on the parameters as stated in [26].

In this paper, motivated and inspired by the above results, we introduce a new iterative scheme, based on Mann's type approximation scheme, for solving the CCMP (1) in the case where the gradient $\nabla f$ belongs to the class of $\frac{1}{L}$-ism mappings with $L > 0$. Under some mild appropriate conditions on the parameters $\alpha_n$, $\beta_n$, and $\gamma_n$, we obtain a strong convergence theorem for the CCMP (1) in which condition (iii) of Tian and Zhang is removed. In Section 4, on applications, it is shown that the CCMP (1) reduces to the split feasibility problem (SFP); the data for the proposed SFP are given in Section 5, and a numerical example of this problem is presented in Section 6.

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. If $f : C \to \mathbb{R}$ is a differentiable function, then we denote by $\nabla f$ the gradient of the function $f$. We will also use the following notation: $\to$ denotes strong convergence, $\rightharpoonup$ denotes weak convergence, and $\operatorname{Fix}(T) := \{x : Tx = x\}$ denotes the fixed point set of a mapping $T$.

Recall that the metric projection $P_C : H \to C$ is defined as follows: for each $x \in H$, $P_C x$ is the unique point in $C$ satisfying
$$\|x - P_C x\| = \inf\{\|x - y\| : y \in C\}.$$
Let $f : C \to \mathbb{R}$ be a function. Recall that $f$ is a strictly convex real-valued function if
$$f(\lambda x + (1 - \lambda)y) < \lambda f(x) + (1 - \lambda) f(y)$$
for all $x, y \in C$ with $x \ne y$ and all $\lambda \in (0, 1)$. We collect together some known lemmas and definitions which are our main tools in proving our results.
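The following small illustration (ours, not the paper's) computes the metric projection onto the closed unit ball in $\mathbb{R}^2$ and numerically checks the standard variational characterization $\langle x - P_C x,\, y - P_C x \rangle \le 0$ for all $y \in C$.

```python
# Metric projection P_C onto the closed unit ball C = {y : ||y|| <= 1}
# in R^2, with a numerical check of the obtuse-angle inequality
#   <x - P_C(x), y - P_C(x)> <= 0   for all y in C.

import math, random

def project_ball(x, r=1.0):
    nrm = math.hypot(x[0], x[1])
    if nrm <= r:
        return list(x)                       # x already in C
    return [r * xi / nrm for xi in x]        # radial projection

x = [3.0, 4.0]               # a point outside C (||x|| = 5)
p = project_ball(x)          # P_C(x) = x / 5 = [0.6, 0.8]

random.seed(0)
for _ in range(1000):
    y = [random.uniform(-1, 1), random.uniform(-1, 1)]
    if math.hypot(y[0], y[1]) > 1.0:         # keep only y in C
        continue
    inner = sum((xi - pi) * (yi - pi) for xi, pi, yi in zip(x, p, y))
    assert inner <= 1e-12                    # obtuse-angle inequality holds

print(p)  # -> [0.6, 0.8]
```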

Lemma 2. Let $H$ be a real Hilbert space. Then, for all $x, y \in H$,
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle.$$

Lemma 3 (see [27]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Then, for $x \in H$ and $z \in C$,
$$z = P_C x \iff \langle x - z, z - y \rangle \ge 0, \quad \forall y \in C.$$

Definition 4. Let $H$ be a real Hilbert space. The operator $T : H \to H$ is called
(i) $L$-Lipschitzian with $L > 0$ if $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in H$;
(ii) $k$-contraction with a positive real number $k$ such that $k \in (0, 1)$ if $\|Tx - Ty\| \le k\|x - y\|$ for all $x, y \in H$;
(iii) nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$;
(iv) monotone if $\langle Tx - Ty, x - y \rangle \ge 0$ for all $x, y \in H$;
(v) $\eta$-strongly monotone if $\langle Tx - Ty, x - y \rangle \ge \eta\|x - y\|^2$ for all $x, y \in H$;
(vi) $\nu$-inverse strongly monotone ($\nu$-ism) if $\langle Tx - Ty, x - y \rangle \ge \nu\|Tx - Ty\|^2$ for all $x, y \in H$;
(vii) firmly nonexpansive if $\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle$ for all $x, y \in H$.

Lemma 5 (see [27]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Then, for $x \in H$ and $y \in C$, we have
$$\|P_C x - y\|^2 \le \|x - y\|^2 - \|x - P_C x\|^2.$$

Lemma 6 (see [28]). Let $H$ be a real Hilbert space and let $T : H \to H$ be an operator. The following statements are equivalent: (i) $T$ is firmly nonexpansive; (ii) $\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty \rangle$ for all $x, y \in H$; (iii) $I - T$ is firmly nonexpansive.

Lemma 7 (see [28]). Let $H_1$ and $H_2$ be two real Hilbert spaces and let $T : H_2 \to H_2$ be a firmly nonexpansive mapping such that $f(x) = \frac{1}{2}\|(I - T)Ax\|^2$ is a convex function from $H_1$ to $\mathbb{R}$, where $A : H_1 \to H_2$ is a bounded linear operator, for all $x \in H_1$. Then, (i) $\nabla f(x) = A^*(I - T)Ax$ for all $x \in H_1$, where $A^*$ is the adjoint operator of $A$; (ii) $\nabla f$ is $\|A\|^2$-Lipschitzian.
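Lemma 7 suggests the classical CQ-type iteration for the SFP: taking $T = P_Q$, one minimizes $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$ via $x_{n+1} = P_C(x_n - \gamma A^*(I - P_Q)Ax_n)$ with $\gamma \in (0, 2/\|A\|^2)$. The sketch below runs this on toy data of our own choosing ($C = [0,2]^2$, $A = [1\ \ 1]$, $Q = [2, 3]$), not data from the paper.

```python
# CQ-type iteration for the SFP "find x in C with Ax in Q":
#   x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n),
# with gamma in (0, 2 / ||A||^2).  Toy data: C = [0,2]^2, A = [1 1], Q = [2,3].

def proj_interval(t, lo=2.0, hi=3.0):   # P_Q on the real line
    return min(max(t, lo), hi)

def proj_box(x, lo=0.0, hi=2.0):        # P_C, componentwise
    return [min(max(xi, lo), hi) for xi in x]

def cq_step(x, gamma):
    ax = x[0] + x[1]                    # A x, so ||A||^2 = 2
    r = ax - proj_interval(ax)          # (I - P_Q) A x
    grad = [r, r]                       # A^T (I - P_Q) A x  (Lemma 7 (i))
    return proj_box([xi - gamma * gi for xi, gi in zip(x, grad)])

x = [0.0, 0.0]
for _ in range(200):
    x = cq_step(x, gamma=0.4)           # 0.4 < 2 / ||A||^2 = 1

print(x, x[0] + x[1])                   # Ax lands in Q = [2, 3]
```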

Lemma 8 (see [29, 30]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\{T_n\}$ and $\{S_n\}$ be two families of nonexpansive mappings from $C$ into itself such that $\emptyset \ne \bigcap_{n=1}^{\infty}\operatorname{Fix}(T_n) = \bigcap_{n=1}^{\infty}\operatorname{Fix}(S_n)$. Then, for any bounded sequence $\{z_n\}$, we have (i) if $\lim_{n\to\infty}\|z_n - T_n z_n\| = 0$, then $\lim_{n\to\infty}\|z_n - S_n z_n\| = 0$, which is called the NST-condition (I); (ii) if $\lim_{n\to\infty}\|z_{n+1} - T_n z_n\| = 0$, then $\lim_{n\to\infty}\|z_n - S_n z_n\| = 0$, which is called the NST-condition (II).

Lemma 9 (see [31] (demiclosedness principle)). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $T : C \to C$ be a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$. If the sequence $\{x_n\} \subset C$ converges weakly to $x$ and the sequence $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$; in particular, if $y = 0$, then $x \in \operatorname{Fix}(T)$.

Lemma 10 (see [32]). Let $\{a_n\}$ be a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - b_n)a_n + b_n c_n, \quad n \ge 0,$$
where $\{b_n\}$ is a sequence in $(0, 1)$ and $\{c_n\}$ is a sequence in $\mathbb{R}$ such that (i) $\sum_{n=0}^{\infty} b_n = \infty$; (ii) $\limsup_{n\to\infty} c_n \le 0$ or $\sum_{n=0}^{\infty} |b_n c_n| < \infty$. Then, $\lim_{n\to\infty} a_n = 0$.
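Lemma 10 can be illustrated numerically. The sketch below (our construction) runs the recursion with equality, choosing $b_n = 1/(n+1)$ (so $\sum b_n = \infty$) and $c_n = 1/n \to 0$ (so $\limsup c_n \le 0$); the sequence indeed decays to $0$.

```python
# Numerical illustration of Lemma 10: iterate
#   a_{n+1} = (1 - b_n) a_n + b_n c_n
# with b_n = 1/(n+1) (harmonic, so sum b_n diverges) and c_n = 1/n -> 0.
# The lemma predicts a_n -> 0.

def run(a0=10.0, iters=50000):
    a = a0
    for n in range(1, iters + 1):
        b = 1.0 / (n + 1)
        c = 1.0 / n
        a = (1.0 - b) * a + b * c
    return a

print(run())  # a small positive value; a_n -> 0 as n grows
```

Note that $b_n \to 0$ alone would not suffice: the divergence of $\sum b_n$ is what lets the $(1 - b_n)$ factors drive the initial value $a_0$ to zero.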

3. Main Result

Throughout this section, we let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. First, we will show that the mapping $T$ defined below has a unique fixed point under suitable conditions on the parameters, where $f : C \to \mathbb{R}$ is a strictly convex real-valued function such that $\nabla f$ is $\frac{1}{L}$-ism with $L > 0$. Since $\nabla f$ is $\frac{1}{L}$-ism and $P_C$ is nonexpansive, for each $x, y \in C$ we obtain the estimate (30). It follows from (30) that $T$ is a contraction; therefore, by Banach's contraction principle, $T$ has a unique fixed point, and hence $T$ is well-defined.

Let $\Omega$ be the solution set of the CCMP (1). It is clear that $\Omega$ is a closed convex set. We are now ready to present our main results as follows.

Theorem 11. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $f : C \to \mathbb{R}$ be a strictly convex real-valued function such that the gradient $\nabla f$ is $\frac{1}{L}$-ism with $L > 0$. Assume that $\Omega \ne \emptyset$ and let $\{x_n\}$ be a sequence generated by the proposed Mann-type scheme, where the parameter sequences satisfy the following conditions: (i) the Mann coefficients $\{\beta_n\}$ are bounded away from $0$ and $1$ for all $n \ge 0$; (ii) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$. Then the sequence $\{x_n\}$ converges strongly to $x^* \in \Omega$, which is the unique minimizer of the CCMP (1).
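As a hedged numerical reading of a Mann-type scheme of this kind (the exact display of Theorem 11 is not reproduced above, so the form below is our assumption), one may combine a Mann step with the regularized gradient projection: $x_{n+1} = (1 - \beta_n)x_n + \beta_n P_C\bigl(x_n - \gamma(\nabla f(x_n) + \alpha_n x_n)\bigr)$, tried here on the toy problem $f(x) = \frac{1}{2}\|x - b\|^2$ over the closed unit ball.

```python
import math

# Assumed Mann-type step (our reading, not a verbatim reproduction):
#   x_{n+1} = (1 - beta_n) x_n
#             + beta_n * P_C(x_n - gamma * (grad f(x_n) + alpha_n * x_n)).
# Toy CCMP: f(x) = 1/2 ||x - b||^2 over the unit ball C in R^2;
# grad f(x) = x - b is 1-ism, so L = 1 and gamma in (0, 2/L).

def project_ball(x, r=1.0):
    nrm = math.hypot(x[0], x[1])
    return list(x) if nrm <= r else [r * xi / nrm for xi in x]

def mann_gpa(b, iters=2000):
    x = [0.0, 0.0]
    for n in range(1, iters + 1):
        alpha = 1.0 / n          # regularization alpha_n -> 0, sum = infinity
        beta, gamma = 0.5, 0.5   # beta_n bounded away from 0 and 1
        g = [xi - bi + alpha * xi for xi, bi in zip(x, b)]
        t = project_ball([xi - gamma * gi for xi, gi in zip(x, g)])
        x = [(1 - beta) * xi + beta * ti for xi, ti in zip(x, t)]
    return x

print(mann_gpa([3.0, 4.0]))  # approaches the minimizer P_C(b) = [0.6, 0.8]
```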

Proof. We divide the proof into 4 steps.
Step 1. We will show that $\{x_n\}$ is bounded. Let $x^* \in \Omega$. By the strict convexity of $f$, we have that $\Omega$ is a singleton set. Noticing from the $\frac{1}{L}$-ism property of $\nabla f$ that $\nabla f$ is $L$-Lipschitzian, by (10) we obtain the required condition on the parameters. Therefore, by (30), it follows that $\{x_n\}$ is bounded, and so are the other sequences generated by the scheme.
Step 2. We will show that $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$. By the definition of the scheme and conditions (i) and (ii), we obtain (36). Since $\nabla f$ is $\frac{1}{L}$-ism, we obtain the estimate (37); that is, the associated mapping is nonexpansive. Therefore, by (36) and the NST-condition (II) in Lemma 8, we obtain (38).
Step 3. Let $x^* \in \Omega$. Since $\Omega$ is a singleton set, we obtain (39); therefore, by Lemma 3, we have the variational characterization of $x^*$. We will now establish the limsup estimate (41). From (10) we obtain (40). Since $\{x_n\}$ is bounded, we may consider a subsequence of $\{x_n\}$; there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ which converges weakly to some $\omega$. It follows from the demiclosedness principle (at zero) in Lemma 9 and (39) that $\omega \in \operatorname{Fix}(T)$. So, by (10), we have $\omega \in \Omega$ (indeed, $\omega = x^*$). Therefore, by (40), we obtain (41).
Step 4. We will show that $\{x_n\}$ converges strongly to $x^*$. By (30), Lemma 5, condition (i), and the properties of the metric projection $P_C$, we have