Abstract

Let C and Q be closed convex subsets of real Hilbert spaces H1 and H2, respectively, and let f : C → ℝ be a strictly convex real-valued function whose gradient ∇f is an α-ism operator with a constant α > 0. In this paper, we introduce an iterative scheme using the gradient projection method, based on Mann’s type approximation scheme, for solving the constrained convex minimization problem (CCMP), that is, finding a minimizer of the function f over the set C. As an application, it is shown that the split feasibility problem (SFP), which is to find x ∈ C such that Ax ∈ Q, where A : H1 → H2 is a bounded linear operator, can be recast as a CCMP. We suggest and analyze this iterative scheme under appropriate conditions imposed on the parameters and obtain new strong convergence theorems for the CCMP and the SFP. The results presented in this paper improve and extend the main results of Tian and Zhang (2017) and many others. The data availability for the proposed SFP is discussed, and a numerical example of this problem is also given.

1. Introduction

Throughout this paper, we always assume that C is a nonempty closed convex subset of a real Hilbert space H, whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. Let f : C → ℝ be a strictly convex real-valued function.

Consider the following constrained convex minimization problem (CCMP):

  min_{x ∈ C} f(x).  (1)

Assume that (1) is consistent (that is, the CCMP has a solution), and use Γ to denote its solution set. If f is Fréchet differentiable, then the gradient projection algorithm (GPA) is usually applied to solve the CCMP (1); it generates a sequence {x_n} through the recursion

  x_{n+1} = P_C(x_n − γ∇f(x_n)),  (2)

or, more generally,

  x_{n+1} = P_C(x_n − γ_n∇f(x_n)),  (3)

where the initial guess x_0 ∈ C is chosen arbitrarily, the parameters γ or γ_n are positive real numbers, and P_C is the metric projection from H onto C. It is well known that the convergence of algorithms (2) and (3) depends on the behavior of the gradient ∇f. It is known from Levitin and Polyak [1] that if ∇f is η-strongly monotone and L-Lipschitzian, that is, there exist constants η > 0 and L > 0 such that

  ⟨∇f(x) − ∇f(y), x − y⟩ ≥ η‖x − y‖²  (4)

and

  ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖,  (5)

respectively, then, for 0 < γ < 2η/L², the operator

  T := P_C(I − γ∇f)  (6)

is a contraction; hence, the sequence {x_n} defined by the GPA (2) converges in norm to the unique minimizer of the CCMP (1). More generally, for 0 < γ_n < 2η/L² for all n, the operator

  T_n := P_C(I − γ_n∇f)  (7)

is a contraction; if the sequence {γ_n} is chosen to satisfy

  0 < lim inf_{n→∞} γ_n ≤ lim sup_{n→∞} γ_n < 2η/L²,  (8)

then the sequence {x_n} defined by the GPA (3) converges in norm to the unique minimizer of the CCMP (1). However, if the gradient ∇f fails to be η-strongly monotone (that is, the gradient only satisfies the L-Lipschitzian condition), then the operators T and T_n defined by (6) and (7), respectively, may fail to be contractions; consequently, the sequences generated by algorithms (2) and (3) may fail to converge strongly (see also Xu [2]) in the setting of an infinite-dimensional real Hilbert space, but still converge weakly, as the following statement shows.
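To make the recursion concrete, the following minimal Python sketch runs the GPA on a toy problem. The objective, constraint set, and step size below are illustrative choices, not taken from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    # Metric projection P_C onto the box C = [lo, hi]^n (componentwise clip).
    return np.clip(x, lo, hi)

def gradient_projection(grad, project, x0, gamma, n_iter=500):
    # GPA recursion: x_{n+1} = P_C(x_n - gamma * grad_f(x_n)).
    x = x0
    for _ in range(n_iter):
        x = project(x - gamma * grad(x))
    return x

# Toy problem: minimize f(x) = 0.5 * ||x - b||^2 over C = [0, 1]^2.
# Here grad_f(x) = x - b is 1-Lipschitzian and 1-ism.
b = np.array([2.0, -1.0])
x_star = gradient_projection(lambda x: x - b,
                             lambda x: project_box(x, 0.0, 1.0),
                             np.zeros(2), gamma=0.5)
# The minimizer over the box is the projection of b onto it, namely (1, 0).
```

For this strongly convex toy objective the iterates settle on the boundary point (1, 0) after a few steps, matching the contraction argument sketched above.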

Theorem 1 (see [1, 2]). Assume that the CCMP (1) is consistent. Let the gradient ∇f satisfy the L-Lipschitzian condition and let the parameter sequence {γ_n} satisfy the following condition:

  0 < a ≤ γ_n ≤ b < 2/L for all n,  (9)

where a and b are constants. Then the sequence {x_n} generated by the GPA (3) converges weakly to a minimizer of the CCMP (1). Indeed, the results of this theorem still hold when the gradient ∇f is α-inverse strongly monotone with α = 1/L (in brief, α-ism), that is, ⟨∇f(x) − ∇f(y), x − y⟩ ≥ α‖∇f(x) − ∇f(y)‖² for all x, y ∈ C, because the class of (1/α)-Lipschitzian mappings contains the class of α-ism mappings.

We observe from Theorem 1 that if the parameter γ_n converges to γ such that γ satisfies condition (9), then x* solves the CCMP (1) if and only if x* solves the fixed-point equation

  x* = P_C(x* − γ∇f(x*)).  (10)

It is well known that the gradient projection algorithm is very useful in dealing with the CCMP (1) and has been studied extensively (see [1–9] and the references therein). It has recently been applied to solve split feasibility problems (SFP) (see [10–15]), which find applications in image reconstruction and intensity-modulated radiation therapy (see [13–18]). We now consider the following regularized minimization problem (which has a unique minimizer):

  min_{x ∈ C} f(x) + (ε/2)‖x‖²,  (11)

where ε > 0 and f is a continuously differentiable function, and we also consider the regularized gradient projection algorithm, which generates a sequence {x_n} by the recursive formula

  x_{n+1} = P_C(x_n − γ_n(∇f(x_n) + ε_n x_n)).  (12)

Many researchers have studied strong convergence theorems for solving the CCMP (1) using the sequence generated by algorithm (12), in the cases where the gradient ∇f is nonexpansive or L-Lipschitzian (see [19–25]). In the case where the gradient ∇f is α-ism with α > 0, Xu (2010) introduced the sequence generated by algorithm (12) and proved that, under appropriate conditions, this sequence converges weakly to a minimizer of the CCMP (1) in the setting of an infinite-dimensional real Hilbert space (see [15]).

Recently, Tian and Zhang (2017) introduced the sequence generated by algorithm (12) and proved that this sequence converges strongly to a minimizer of the CCMP (1) in the same setting of an infinite-dimensional real Hilbert space (see [26]) under three control conditions, (i), (ii), and (iii), imposed on the parameter sequences.

Motivated and inspired by the above results, in this paper we introduce a new iterative scheme, based on Mann’s type approximation scheme, for solving the CCMP (1) in the case where the gradient ∇f is α-ism with α > 0. Under some mild appropriate conditions on the parameters, we obtain a strong convergence theorem for the CCMP (1) in which condition (iii) of Tian and Zhang is removed. In Section 4, on applications, it is shown that the SFP can be solved within the framework of the CCMP (1); the data availability for the proposed SFP is discussed in Section 5, and a numerical example of this problem is given in Section 6.
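Since the exact coefficients of the proposed scheme are developed later in the paper, the following Python sketch only illustrates the generic shape of a Mann-type relaxation of the gradient projection step; the recursion form, step size lam, and relaxation sequence alpha are assumptions for illustration.

```python
import numpy as np

def mann_gradient_projection(grad, project, x0, lam, alpha, n_iter=2000):
    # Generic Mann-type scheme (hypothetical parameters, sketching the idea):
    #   x_{n+1} = (1 - alpha_n) * x_n + alpha_n * P_C(x_n - lam * grad_f(x_n))
    x = x0
    for n in range(n_iter):
        a = alpha(n)
        x = (1 - a) * x + a * project(x - lam * grad(x))
    return x

# Same toy problem as before: f(x) = 0.5 * ||x - b||^2 over the box [0, 1]^2.
b = np.array([2.0, -1.0])
x_m = mann_gradient_projection(lambda x: x - b,
                               lambda x: np.clip(x, 0.0, 1.0),
                               np.zeros(2), lam=0.5,
                               alpha=lambda n: 1.0 / (n + 2))
```

With the diverging relaxation series Σα_n = ∞, the averaged iterates still approach the constrained minimizer (1, 0), only more slowly than the plain GPA.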

2. Preliminaries

Let C be a nonempty closed convex subset of a real Hilbert space H. If f is a differentiable function, then we denote by ∇f the gradient of f. We will also use the following notation: → denotes strong convergence, ⇀ denotes weak convergence, and Fix(T) denotes the fixed point set of a mapping T.

Recall that the metric projection P_C : H → C is defined as follows: for each x ∈ H, P_C x is the unique point in C satisfying ‖x − P_C x‖ = min{‖x − y‖ : y ∈ C}. Let f : C → ℝ be a function. Recall that f is a strictly convex real-valued function if f(λx + (1 − λ)y) < λf(x) + (1 − λ)f(y) for all x, y ∈ C such that x ≠ y and all λ ∈ (0, 1). We collect together some known lemmas and definitions which are our main tools in proving our results.

Lemma 2. Let H be a real Hilbert space. Then, for all x, y ∈ H, ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩.

Lemma 3 (see [27]). Let C be a nonempty closed convex subset of a real Hilbert space H. Then, for x ∈ H and z ∈ C, z = P_C x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ C.

Definition 4. Let H be a real Hilbert space. The operator T : H → H is called
(i) L-Lipschitzian with L > 0 if ‖Tx − Ty‖ ≤ L‖x − y‖ for all x, y ∈ H;
(ii) a k-contraction with a positive real number k such that k ∈ (0, 1) if ‖Tx − Ty‖ ≤ k‖x − y‖ for all x, y ∈ H;
(iii) nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ H;
(iv) monotone if ⟨Tx − Ty, x − y⟩ ≥ 0 for all x, y ∈ H;
(v) η-strongly monotone if ⟨Tx − Ty, x − y⟩ ≥ η‖x − y‖² for all x, y ∈ H;
(vi) α-inverse strongly monotone (α-ism) if ⟨Tx − Ty, x − y⟩ ≥ α‖Tx − Ty‖² for all x, y ∈ H;
(vii) firmly nonexpansive if ‖Tx − Ty‖² ≤ ⟨Tx − Ty, x − y⟩ for all x, y ∈ H.

Lemma 5 (see [27]). Let C be a nonempty closed convex subset of a real Hilbert space H. Then, for x ∈ H and y ∈ C, we have ‖P_C x − y‖² ≤ ‖x − y‖² − ‖x − P_C x‖².

Lemma 6 (see [28]). Let H be a real Hilbert space and T : H → H be an operator. The following statements are equivalent: (i) T is firmly nonexpansive; (ii) ‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩ for all x, y ∈ H; (iii) I − T is firmly nonexpansive.

Lemma 7 (see [28]). Let H1 and H2 be two real Hilbert spaces, and let P_Q : H2 → H2 be a firmly nonexpansive mapping such that f(x) = (1/2)‖(I − P_Q)Ax‖² is a convex function from H1 to ℝ, where A : H1 → H2 is a bounded linear operator with A ≠ 0. Then: (i) ∇f(x) = A*(I − P_Q)Ax for all x ∈ H1, where A* is the adjoint operator of A; (ii) ∇f is ‖A‖²-Lipschitzian.

Lemma 8 (see [29, 30]). Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n} and {S_n} be two families of nonexpansive mappings from C into C whose common fixed point sets coincide and are nonempty. Then, for any bounded sequence {z_n} in C, we have: (i) if lim_{n→∞} ‖z_n − T_n z_n‖ = 0 implies lim_{n→∞} ‖z_n − S_n z_n‖ = 0, this is called the NST-condition (I); (ii) if the corresponding implication holds with the roles of {T_n} and {S_n} interchanged, this is called the NST-condition (II).

Lemma 9 (see [31] (demiclosedness principle)). Let C be a nonempty closed convex subset of a real Hilbert space H, and let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. If the sequence {x_n} converges weakly to x and the sequence {(I − T)x_n} converges strongly to y, then (I − T)x = y; in particular, if y = 0, then x ∈ Fix(T).

Lemma 10 (see [32]). Let {a_n} be a sequence of nonnegative real numbers such that a_{n+1} ≤ (1 − α_n)a_n + α_n β_n for n ≥ 0, where {α_n} is a sequence in (0, 1) and {β_n} is a sequence in ℝ such that (i) Σ_{n=0}^∞ α_n = ∞; (ii) lim sup_{n→∞} β_n ≤ 0 or Σ_{n=0}^∞ |α_n β_n| < ∞. Then lim_{n→∞} a_n = 0.
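Lemma 10 can be checked numerically. With the illustrative choices α_n = 1/(n + 2), so that Σα_n = ∞, and β_n = 1/(n + 1) → 0, the recursion (taken with equality, the worst case) drives a_n toward 0:

```python
# Numerical illustration of Lemma 10 with equality in the recursion
#   a_{n+1} = (1 - alpha_n) * a_n + alpha_n * beta_n.
a = 1.0
for n in range(200000):
    alpha_n = 1.0 / (n + 2)   # alpha_n in (0, 1), sum of alpha_n diverges
    beta_n = 1.0 / (n + 1)    # limsup beta_n <= 0 (indeed beta_n -> 0)
    a = (1 - alpha_n) * a + alpha_n * beta_n
# a is now very small, as Lemma 10 predicts.
```

The decay is slow, on the order of log(n)/n for these parameter choices, which is why the lemma only asserts convergence and not a rate.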

3. Main Result

Throughout this section, we let C be a nonempty closed convex subset of a real Hilbert space H. First, we will show that the mapping defining the scheme (30) below has a unique fixed point under the stated conditions on the parameters, where f : C → ℝ is a strictly convex real-valued function such that ∇f is α-ism with α > 0. Since ∇f is α-ism and P_C is nonexpansive, the mapping can be estimated for each x, y ∈ C; it follows that the mapping is a contraction, and therefore, by Banach’s contraction principle, it has a unique fixed point. Hence the scheme is well defined.

Let Γ be the solution set of the CCMP (1). It is clear that Γ is a closed convex set. We are now ready to present our main results as follows.

Theorem 11. Let C be a nonempty closed convex subset of a real Hilbert space H, and let f : C → ℝ be a strictly convex real-valued function such that the gradient ∇f is α-ism with α > 0. Assume that Γ ≠ ∅, and let {x_n} be a sequence generated by the scheme (30), where the parameter sequences satisfy conditions (i) and (ii). Then the sequence {x_n} converges strongly to x*, which is the unique minimizer of the CCMP (1).

Proof. We divide the proof into 4 steps.
Step 1. We will show that {x_n} is bounded. Let p ∈ Γ. By the strict convexity of f, Γ is a singleton set. Notice from the α-ism property of ∇f that ∇f is (1/α)-Lipschitzian. So, by (10), we have p = P_C(p − γ∇f(p)). Therefore, by (30), the iterates can be bounded by standard estimates. It follows that {x_n} is bounded, and so are the related sequences.
Step 2. We will show that ‖x_{n+1} − x_n‖ → 0. By conditions (i) and (ii), we obtain the estimate (36). Since ∇f is α-ism, the mapping P_C(I − γ∇f) is nonexpansive. Therefore, by (36) and the NST-condition (II) in Lemma 8, we have ‖x_n − P_C(I − γ∇f)x_n‖ → 0.
Step 3. Let p ∈ Γ. Since Γ is a singleton set, we have p = x*. Therefore, by Lemma 3, we obtain the projection inequality. We will show that the relevant lim sup is nonpositive. Since {x_n} is bounded, there exists a subsequence of {x_n} which converges weakly to some point w. It follows from the demiclosedness principle in Lemma 9 and (39) that w ∈ Γ. So, by (10), we have w = x*. Therefore, by (40), the lim sup condition holds.
Step 4. We will show that {x_n} converges strongly to x*. By (30), Lemma 5, condition (i), and the properties of the orthogonal projection, we obtain a recursive inequality of the form a_{n+1} ≤ (1 − α_n)a_n + α_n β_n with a_n = ‖x_n − x*‖². It is easy to see that Σα_n = ∞ and lim sup β_n ≤ 0. Therefore, by Lemma 10, {x_n} converges strongly to x*. This completes the proof.

Notice that when the parameter is taken constant for all n, the result of Theorem 11 reduces to the result of Tian and Zhang [26] without their control condition (iii), as follows.

Corollary 12 (see [26]). Let C be a nonempty closed convex subset of a real Hilbert space H and f : C → ℝ be a strictly convex real-valued function such that the gradient ∇f is α-ism with α > 0. Assume that Γ ≠ ∅, and let {x_n} be a sequence generated by the scheme of Theorem 11 with a constant parameter, where the remaining parameters satisfy the stated conditions. Then the sequence {x_n} converges strongly to x*, which is the unique minimizer of the CCMP (1).

4. Applications

Let C and Q be closed convex subsets of real Hilbert spaces H1 and H2, respectively, and let A : H1 → H2 be a bounded linear operator. We now consider the split feasibility problem (SFP), introduced in 1994 by Censor and Elfving [13], which is to find an element x ∈ C such that Ax ∈ Q. Define the convex function f : H1 → ℝ as follows: f(x) = (1/2)‖(I − P_Q)Ax‖². It follows by Lemma 7 that the gradient of f is ∇f(x) = A*(I − P_Q)Ax, where A* is the adjoint operator of A, and that ∇f is (1/‖A‖²)-ism. We have the following consequence results.
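In finite dimensions the adjoint A* is the matrix transpose A^T, so the gradient formula above can be evaluated directly. The matrix, the set Q, and the test point below are illustrative choices.

```python
import numpy as np

def sfp_gradient(A, proj_Q, x):
    # Gradient of f(x) = 0.5 * ||Ax - P_Q(Ax)||^2, namely A^T (I - P_Q) A x
    # (A^T plays the role of the adjoint A* in finite dimensions).
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax))

# Toy SFP data: A x should land in Q = {y : y >= 1 componentwise}.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
proj_Q = lambda y: np.maximum(y, 1.0)   # projection onto Q
x = np.array([0.5, 0.5])
g = sfp_gradient(A, proj_Q, x)
# Here Ax = (1.5, 0.5), P_Q(Ax) = (1.5, 1.0), so g = A^T (0, -0.5) = (0, -0.5).
```

A zero gradient at x certifies Ax ∈ Q, which is exactly the sense in which minimizing f solves the SFP.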

Theorem 13. Let C and Q be closed convex subsets of real Hilbert spaces H1 and H2, respectively, and let A : H1 → H2 be a bounded linear operator. Suppose that the SFP has a nonempty solution set. Let {x_n} be the sequence generated by the scheme of Theorem 11 applied to f(x) = (1/2)‖(I − P_Q)Ax‖², where the parameter sequences satisfy conditions (i) and (ii). Then the sequence {x_n} converges strongly to x*, the minimum-norm solution of the SFP.

Corollary 14. Let C and Q be closed convex subsets of real Hilbert spaces H1 and H2, respectively, and let A : H1 → H2 be a bounded linear operator. Suppose that the SFP has a nonempty solution set. Let {x_n} be the sequence generated by the scheme of Corollary 12 applied to f(x) = (1/2)‖(I − P_Q)Ax‖², where the parameters satisfy the stated conditions. Then the sequence {x_n} converges strongly to x*, the minimum-norm solution of the SFP.

5. Data Availability

To find a feasible solution, iterative algorithms of this kind must compute many inner iterations to reach an appropriately accurate result, and a stack overflow, in which a computer program makes too many subroutine calls and its call stack runs out of space, often occurs when the iteration parameters are stored in many stack arrays while computing the feasible solution.

To avoid stack overflow, we describe how to carry out the mathematical programming for the algorithm in Corollary 14 without using stack arrays of its parameters when solving the SFP. Indeed, stack overflow may also arise from calculating floating point numbers to many significant decimal digits; to avoid this, one ought always to use a digit precision command, such as the corresponding commands in Mathematica and in Matlab, and also to define all matrices with regular commands rather than the matrix palette.

Some mathematical software provides commands that report the total number of seconds of CPU time used and the total number of seconds since the beginning of the session, such as the corresponding commands in Mathematica and in Matlab, respectively.

We now give formulations of the orthogonal projection P_C where C is a simple closed convex set; in the case that C is not a simple closed convex set, for instance when C is a halfspace, further formulations can be found in [33].

Proposition 15. For we have (i)if then ,(ii)if such that then ,(iii)if then

Proof. The results (i) and (ii) hold directly by the definition of the orthogonal projection P_C, and the result (iii) holds by considering the normal vector at the boundary points of C.
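For reference, here are Python implementations of three textbook closed-form projections (onto a box, a closed ball, and a halfspace). These are illustrative stand-ins, since the concrete cases of Proposition 15 are not restated here.

```python
import numpy as np

def proj_box(x, lo, hi):
    # C = {y : lo <= y <= hi}: clip each coordinate into the interval.
    return np.clip(x, lo, hi)

def proj_ball(x, center, r):
    # C = {y : ||y - center|| <= r}: rescale toward the center if x lies outside.
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= r else center + (r / nrm) * d

def proj_halfspace(x, a, beta):
    # C = {y : <a, y> <= beta}: step back along the normal vector a if outside.
    val = a @ x
    return x if val <= beta else x - ((val - beta) / (a @ a)) * a
```

Each formula returns x unchanged when x already lies in C, consistent with P_C being the identity on C.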

We are now ready to describe the mathematical programming, without stack arrays of the parameters, for solving the SFP via the algorithm in Corollary 14. Suppose that the SFP has a unique solution. Apply Corollary 14 with the sets C and Q, the operator A, the sequence {x_n}, and parameters satisfying the conditions in Corollary 14. Then {x_n} is a convergent sequence, and hence a Cauchy sequence; therefore, we can choose the stopping criterion ‖x_{n+1} − x_n‖ < ε for stopping the program, and the approximate solution refers to the last iteration. The steps of the mathematical programming of the algorithm in Corollary 14 are shown as follows: mathematical programming for the split feasibility problem, finding the solution of an augmented matrix equation.

Step 1. Declare all of the parameters, the starting point, and the problem data.



Step 2. Define the formulations of the orthogonal projections P_C and P_Q. If we choose C and Q to be simple closed convex sets, then the orthogonal projections onto C and Q are easy to calculate; hence, we do not need to define their formulations in this step and can use them directly in the process.


Step 3. Set the starting index n = 1 and fix the parameter. If the parameter is not a fixed number but a sequence, then it must be placed inside the while loop of Step 4.

We set n = 1.


Step 4. Start to calculate the iterations of the sequence {x_n} using the while loop. Set the parameter inside the while loop so that it satisfies the stated conditions for all n. If ‖x_{n+1} − x_n‖ < ε, then we break the while loop; the approximate feasible solution is referred to in the last iteration.

It is well known that, in the case of a finite-dimensional real space, the adjoint operator A* coincides with the matrix transpose A^T, and hence the algorithm in Corollary 14 can be reduced accordingly. Inside the while loop we reuse a fixed set of variables for the current and next iterates instead of indexed arrays, in order to avoid using stack arrays of the parameters.

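The Step 4 loop can be sketched in Python as follows. The Mann-type recursion in the comment, the step size lam, and the relaxation sequence alpha are assumptions for illustration; the demo at the end uses the degenerate choices α_n ≡ 1 and C = ℝ², which turn the scheme into plain gradient descent on the least-squares objective.

```python
import numpy as np

def sfp_solve(A, proj_C, proj_Q, x0, lam, alpha, eps=1e-8, max_iter=100000):
    # Assumed Mann-type recursion for the SFP:
    #   x_{n+1} = (1 - alpha_n) x_n + alpha_n * P_C(x_n - lam * A^T (I - P_Q) A x_n),
    # stopping once ||x_{n+1} - x_n|| < eps.  A single variable is updated in
    # place, so no growing stack arrays are kept, as the text recommends.
    x = x0
    for n in range(max_iter):
        Ax = A @ x
        step = proj_C(x - lam * (A.T @ (Ax - proj_Q(Ax))))
        x_new = (1 - alpha(n)) * x + alpha(n) * step
        if np.linalg.norm(x_new - x) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

# Demo: solve A x = b as an SFP with Q = {b} and C = R^2 (both projections trivial).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = A @ np.array([1.0, 1.0])          # so the exact solution is (1, 1)
x_sol, n_used = sfp_solve(A, proj_C=lambda x: x, proj_Q=lambda y: b,
                          x0=np.zeros(2), lam=0.1, alpha=lambda n: 1.0)
```

Here lam = 0.1 respects the step-size bound lam < 2/‖A‖², and the loop terminates via the Cauchy-type stopping criterion rather than a fixed iteration count.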

Step 5. Clear memory of the system.


6. Numerical Results

In this section, we give some insight into the behavior of the algorithm presented in Corollary 14. We implemented it in Mathematica and ran it on a computer with an Intel(R) Core(TM) i3 2.00 GHz processor. We use ‖x_{n+1} − x_n‖ < ε as the stopping criterion.

Throughout the computational experiments, the parameters used in the algorithm were set in accordance with Corollary 14, where A is a bounded linear operator. In the results reported below, all CPU times are in seconds. The approximate solution refers to the last iteration.

Example 1. Find the solution of the following system of linear equations Ax = b, where A and b are given below.
Let C = H1 and Q = {b}. Applying Corollary 14, we chose the starting point arbitrarily and the parameters as specified for all n. As n → ∞, we have x_n → x*, which is our solution. The numerical results are listed in Table 1.

7. Conclusion

In this paper, we obtained an iterative scheme using the gradient projection method, based on Mann’s approximation method, for solving the constrained convex minimization problem (CCMP) and also the split feasibility problem (SFP), and new strong convergence theorems for the CCMP and the SFP were obtained.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the Science Achievement Scholarship of Thailand (SAST) and the Faculty of Science, Maejo University, for financial support.