Abstract

As is well known, the regularization method plays an important role in solving constrained convex minimization problems. Based on the idea of regularization, implicit and explicit iterative algorithms are proposed in this paper, and the sequences generated by these algorithms converge strongly to a solution of the constrained convex minimization problem, which also solves a certain variational inequality. As an application, we apply the algorithms to solve the split feasibility problem.

1. Introduction

Assume that H is a real Hilbert space with inner product ⟨·,·⟩ and the norm ‖·‖ induced by it. Let C be a nonempty, closed, and convex subset of H. Recall that the projection from H onto C, denoted by P_C, assigns to each x ∈ H the unique point P_C x ∈ C with the property

‖x − P_C x‖ = inf{‖x − y‖ : y ∈ C}.

Below we introduce some nonlinear operators. Let T : C → C be a nonlinear operator.

(i) T is L-Lipschitzian if ‖Tx − Ty‖ ≤ L‖x − y‖ for all x, y ∈ C, where L > 0 is a constant.

In particular, if 0 ≤ L < 1, then T is called a contraction on C; if L = 1, then T is called a nonexpansive mapping on C. We know that the projection P_C is nonexpansive (a small numerical illustration is given after these definitions).

(ii) T is monotone if, for all x, y ∈ C, ⟨x − y, Tx − Ty⟩ ≥ 0.

(iii) Given a number β > 0, T is said to be β-strongly monotone if ⟨x − y, Tx − Ty⟩ ≥ β‖x − y‖² for all x, y ∈ C.

(iv) Given a number ν > 0, T is said to be ν-inverse strongly monotone (ν-ism) if ⟨x − y, Tx − Ty⟩ ≥ ν‖Tx − Ty‖² for all x, y ∈ C.

If T is nonexpansive, then I − T is monotone.
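To fix ideas, the following is a small illustrative sketch, not taken from the paper, of the metric projection P_C onto two simple closed convex sets (a box and a closed ball), together with a numerical check of the nonexpansiveness of P_C; the sets and test points are assumptions made only for this example.

```python
# Illustrative sketch: metric projections onto a box and a closed ball in R^3,
# plus a numerical check that the projection is nonexpansive,
# i.e. ||P_C x - P_C y|| <= ||x - y||.
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box C = {z : lo <= z <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

def project_ball(x, center, radius):
    """Projection onto the closed ball C = {z : ||z - center|| <= radius}."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + radius * d / dist

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
px, py = project_ball(x, np.zeros(3), 1.0), project_ball(y, np.zeros(3), 1.0)
bx, by = project_box(x, -0.5, 0.5), project_box(y, -0.5, 0.5)
print(np.linalg.norm(px - py) <= np.linalg.norm(x - y))  # True: the ball projection is nonexpansive
print(np.linalg.norm(bx - by) <= np.linalg.norm(x - y))  # True: the box projection is nonexpansive
```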

We know that the gradient-projection algorithm can be used to solve the constrained convex minimization problem; let us recall the relevant analysis. Consider the following constrained convex minimization problem:

min_{x∈C} f(x),  (5)

where f : C → ℝ is a real-valued convex function. Assume that f is Fréchet differentiable; define a sequence {x_n} by

x_{n+1} := P_C(x_n − γ∇f(x_n)),  n ≥ 0,  (6)

where the initial guess x_0 is taken from C and the parameter γ is a real number satisfying certain conditions. The convergence of algorithm (6) depends on the properties of ∇f. In fact, if ∇f is only inverse strongly monotone, then algorithm (6) converges, in general, only weakly to a solution of the minimization problem (5). In 2011, Xu [1] provided an alternative averaged mapping approach to the gradient-projection algorithm; he also constructed a counterexample showing that, in an infinite-dimensional space, algorithm (6) has weak convergence only. He further provided two modifications which ensure that the gradient-projection algorithm converges strongly to a solution of (5). More investigations of the gradient-projection algorithm and its important role in solving the constrained convex minimization problem can be found in [2–11]. Recently, the method has also been applied to solve split feasibility problems, which find applications in image reconstruction and intensity-modulated radiation therapy (see [12–17]). However, sometimes the minimization problem has more than one solution, so regularization is needed.
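To make the iteration (6) concrete, here is a minimal numerical sketch of the gradient-projection algorithm, assuming a toy quadratic objective f(x) = (1/2)‖Ax − b‖² and the box C = [0,1]²; the data A, b and the step size rule are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the gradient-projection iteration (6):
# x_{n+1} = P_C(x_n - gamma * grad_f(x_n)), for f(x) = 0.5*||Ax - b||^2 over C = [0,1]^2.
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

def grad_f(x):
    return A.T @ (A @ x - b)            # gradient of 0.5*||Ax - b||^2

L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad_f (so grad_f is 1/L-ism)
gamma = 1.0 / L                         # constant step size with 0 < gamma < 2/L

x = np.zeros(2)                         # initial guess x_0 in C
for _ in range(500):
    x = np.clip(x - gamma * grad_f(x), 0.0, 1.0)   # P_C = componentwise clipping onto [0,1]^2
print(x)                                # approximate minimizer of f over C
```

In this finite-dimensional toy problem the distinction between weak and strong convergence is invisible; it is only in infinite-dimensional spaces that the weak-convergence limitation described above becomes an issue.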

Consider the regularized minimization problem

min_{x∈C} f_α(x) := f(x) + (α/2)‖x‖²,

where α > 0 is the regularization parameter; again f is Fréchet differentiable, and the gradient ∇f is (1/L)-ism (equivalently, ∇f is L-Lipschitzian).

For the gradient-projection method based on this regularization, we have the following weak convergence result: define a sequence {x_n} by

x_{n+1} := P_C(x_n − γ_n∇f_{α_n}(x_n)) = P_C(x_n − γ_n(∇f(x_n) + α_n x_n)),  n ≥ 0,  (8)

where x_0 ∈ C, α_n > 0 with α_n → 0, and 0 < γ_n < 2/(L + α_n). Then the sequence {x_n} generated by (8) converges weakly to a minimizer of (5) in the setting of an infinite-dimensional space (see [17]).
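For comparison, here is a corresponding sketch of the regularized iteration (8) on the same illustrative quadratic as before, with the assumed parameter choices α_n = 1/n and γ_n = 1/(L + α_n); the data and parameters are again assumptions made only for this example.

```python
# Sketch of the regularized gradient-projection iteration (8):
# x_{n+1} = P_C(x_n - gamma_n * (grad_f(x_n) + alpha_n * x_n)), with alpha_n -> 0.
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2

x = np.zeros(2)
for n in range(1, 2001):
    alpha_n = 1.0 / n                       # regularization parameter, alpha_n -> 0
    gamma_n = 1.0 / (L + alpha_n)           # step size, 0 < gamma_n < 2/(L + alpha_n)
    x = np.clip(x - gamma_n * (grad_f(x) + alpha_n * x), 0.0, 1.0)
print(x)
```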

On the other hand, Tian [18] proposed the following iterative method:

x_{n+1} = θ_n γ h(x_n) + (I − θ_n μF)T x_n,  n ≥ 0,  (9)

where T is a nonexpansive mapping on H with a fixed point, h is a contraction on H with coefficient 0 ≤ ρ < 1, and F is a κ-Lipschitzian and η-strongly monotone operator with 0 < μ < 2η/κ² and 0 < γρ < τ := μ(η − μκ²/2). Letting θ_n → 0 and ∑_n θ_n = ∞, he proved that the sequence {x_n} generated by (9) converges strongly to a fixed point x̃ of T, which solves the variational inequality ⟨(γh − μF)x̃, x − x̃⟩ ≤ 0, x ∈ Fix(T).
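A minimal sketch of Tian's scheme (9), taking the nonexpansive mapping T to be the projection onto the box [1,2]² (so that Fix(T) is the box itself); the mappings h, F and all parameter values are illustrative assumptions.

```python
# Sketch of Tian's general iterative scheme (9):
# x_{n+1} = theta_n*gamma*h(x_n) + (I - theta_n*mu*F) T x_n, with T nonexpansive.
import numpy as np

T = lambda x: np.clip(x, 1.0, 2.0)     # nonexpansive; Fix(T) = [1,2]^2
h = lambda x: 0.5 * x                  # contraction with coefficient rho = 0.5
F = lambda x: x                        # 1-Lipschitzian and 1-strongly monotone
mu, gamma = 1.0, 0.5                   # mu < 2*eta/kappa^2 and gamma*rho < tau = mu*(eta - mu*kappa^2/2)

x = np.array([5.0, -3.0])
for n in range(1, 2001):
    theta_n = 1.0 / n                  # theta_n -> 0 and sum theta_n = infinity
    Tx = T(x)
    x = theta_n * gamma * h(x) + Tx - theta_n * mu * F(Tx)
print(x)  # approaches the solution of the limiting variational inequality
          # (here the point of Fix(T) closest to the origin, i.e. (1, 1))
```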

Combining the idea of regularization with Tian's iterative scheme, in this paper we construct a new algorithm. The algorithm not only finds a solution of the constrained convex minimization problem in a single-step scheme (the regularization is built directly into the iteration) but also ensures strong convergence. In fact, the sequence generated by the constructed algorithm converges strongly to a minimizer of the constrained convex minimization problem, and the limit point is also a solution of a certain variational inequality. We then apply the constructed algorithm to solve a split feasibility problem.

2. Preliminaries

Lemma 1 (see [18]). Let C be a nonempty, closed, convex subset of a real Hilbert space H. Let h : C → H be a contraction with coefficient 0 ≤ ρ < 1 and let F : C → H be κ-Lipschitzian and η-strongly monotone with 0 < μ < 2η/κ² and 0 < γρ < τ := μ(η − μκ²/2). Then, for all x, y ∈ C,

⟨x − y, (μF − γh)x − (μF − γh)y⟩ ≥ (μη − γρ)‖x − y‖²;

that is, μF − γh is strongly monotone with coefficient μη − γρ.

Lemma 2 (see [19]). Let C be a closed and convex subset of a Hilbert space H and let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. If {x_n} is a sequence in C weakly converging to x and if {(I − T)x_n} converges strongly to y, then (I − T)x = y.

Lemma 3 (see [20]). Let C be a closed and convex subset of a Hilbert space H. Given x ∈ H and y ∈ C, then y = P_C x if and only if there holds the inequality ⟨x − y, y − z⟩ ≥ 0 for all z ∈ C.

Lemma 4 (see [1]). Assume that {a_n} is a sequence of nonnegative real numbers such that

a_{n+1} ≤ (1 − γ_n)a_n + γ_n δ_n + β_n,  n ≥ 0,

where {γ_n} ⊂ (0,1), {β_n} is a sequence of nonnegative real numbers, and {δ_n} is a sequence in ℝ such that (i) ∑_n γ_n = ∞; (ii) either limsup_{n→∞} δ_n ≤ 0 or ∑_n γ_n|δ_n| < ∞; (iii) ∑_n β_n < ∞. Then lim_{n→∞} a_n = 0.

We will use the following notation: ⇀ stands for weak convergence and → for strong convergence.

3. Main Results

Assume that H is a real Hilbert space with inner product ⟨·,·⟩ and the norm ‖·‖ induced by it. Let C be a nonempty, closed, and convex subset of H. Assume that the minimization problem (5) is consistent, and denote its solution set by S.

Rewrite the regularized constrained convex minimization problem:

min_{x∈C} f_α(x) := f(x) + (α/2)‖x‖²,  α > 0.

Recall that h is a ρ-contraction on C with 0 ≤ ρ < 1, and that F is κ-Lipschitzian and η-strongly monotone on C with κ > 0 and η > 0. Given s ∈ (0,1), assume that the regularization parameter α_s is continuous with respect to s and that α_s → 0 as s → 0; then there exists a constant ᾱ > 0 such that α_s ≤ ᾱ for all s ∈ (0,1). Let the gradient ∇f be (1/L)-ism. For each s ∈ (0,1), we consider the mapping X_s on C defined by

X_s x := P_C[sγh(x) + (I − sμF)P_C(x − λ∇f_{α_s}(x))],  x ∈ C,

where 0 < λ < 2/(L + ᾱ), 0 < μ < 2η/κ², and 0 < γρ < τ := μ(η − μκ²/2).

We will use the following notation in Lemma 5, Proposition 6, and Theorem 7:

T_s := P_C(I − λ∇f_{α_s}),  τ := μ(η − μκ²/2).  (14)

Lemma 5. There exists an implicit algorithm {x_s}; here x_s is the fixed point of X_s; that is,

x_s = X_s x_s = P_C[sγh(x_s) + (I − sμF)T_s x_s],

where T_s and τ are defined by (14).

Proof. Below we show that X_s is a contraction. Indeed, since T_s is nonexpansive for 0 < λ < 2/(L + ᾱ) and since ‖(I − sμF)x − (I − sμF)y‖ ≤ (1 − sτ)‖x − y‖ for s ∈ (0,1) (see [18]), we have, for x, y ∈ C,

‖X_s x − X_s y‖ ≤ sγ‖h(x) − h(y)‖ + (1 − sτ)‖T_s x − T_s y‖ ≤ [1 − s(τ − γρ)]‖x − y‖.

It follows easily that X_s is a contraction on C, since 0 < γρ < τ. Hence X_s has a unique fixed point, which we denote by x_s, and which uniquely solves the fixed point equation

x_s = P_C[sγh(x_s) + (I − sμF)T_s x_s];  (19)

here s ∈ (0,1).
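As a numerical illustration of Lemma 5, the unique fixed point x_s of the contraction X_s can be approximated by simple Picard iteration x ← X_s x. The sketch below uses the Tian-type form of X_s reconstructed above, together with purely illustrative data (a quadratic f, the box C = [0,1]², h(x) = x/2, F = I); none of these choices come from the paper.

```python
# Picard iteration for the fixed point x_s of the contraction X_s (Lemma 5),
# X_s x = P_C[ s*gamma*h(x) + (I - s*mu*F) T_s x ],  T_s = P_C(I - lam * grad f_{alpha_s}).
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad_f = lambda x: A.T @ (A @ x - b)       # f(x) = 0.5*||Ax - b||^2
P_C = lambda x: np.clip(x, 0.0, 1.0)       # C = [0,1]^2
h = lambda x: 0.5 * x                      # rho = 0.5
F = lambda x: x                            # kappa = eta = 1
L = np.linalg.norm(A, 2) ** 2
mu, gamma, s, alpha_s = 1.0, 0.5, 0.1, 0.01
lam = 1.0 / (L + 1.0)                      # 0 < lam < 2/(L + alpha_bar)

def X_s(x):
    T_s_x = P_C(x - lam * (grad_f(x) + alpha_s * x))   # T_s x
    return P_C(s * gamma * h(x) + T_s_x - s * mu * F(T_s_x))

x = np.zeros(2)
for _ in range(300):
    x = X_s(x)                             # Picard iteration of the contraction
print(x)                                   # approximate fixed point x_s
```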

Proposition 6. Let {x_s} be defined by (19). Then: (i) {x_s} is bounded for s ∈ (0,1); (ii) lim_{s→0}‖x_s − T_s x_s‖ = 0; (iii) s ↦ x_s defines a continuous curve from (0,1) into C.

Proof. (i) For a fixed z ∈ S, a standard estimate of ‖x_s − z‖, using the contraction property of X_s established in Lemma 5, shows that ‖x_s − z‖ is bounded by a quantity independent of s. It follows that {x_s} is bounded.
(ii) We can easily see that ‖x_s − T_s x_s‖ ≤ s‖γh(x_s) − μF T_s x_s‖. The boundedness of {x_s} implies that {h(x_s)} and {F T_s x_s} are also bounded. Hence (ii) follows.
(iii) For s, s_0 ∈ (0,1), a similar estimate bounds ‖x_s − x_{s_0}‖ in terms of |s − s_0| and |α_s − α_{s_0}|. Noting that α_s is continuous with respect to s, we get x_s → x_{s_0} as s → s_0, and therefore s ↦ x_s is continuous.

Theorem 7. Let {x_s} be defined by (19). Then x_s converges in norm, as s → 0, to a minimizer x* of (5), which solves the following variational inequality:

⟨(μF − γh)x*, x* − z⟩ ≤ 0,  z ∈ S.  (23)

Proof. It follows from Lemma 1 that the variational inequality (23) has only one solution, which we denote by x*. To prove convergence we will use Lemma 3 in the following calculations. Applying Lemma 3 to the fixed point equation (19) yields, for every z ∈ S, an estimate of ‖x_s − z‖²; hence (25) holds. Since {x_s} is bounded for s ∈ (0,1) and α_s → 0 as s → 0, we see that if {s_n} is a sequence in (0,1) such that s_n → 0 and x_{s_n} ⇀ x̃, then, by (25), x_{s_n} → x̃. We may further assume that α_{s_n} → 0. From the boundedness of {h(x_{s_n})} and {F T_{s_n} x_{s_n}}, we conclude that ‖x_{s_n} − P_C(I − λ∇f)x_{s_n}‖ → 0. It then follows from Lemma 2 that x̃ = P_C(I − λ∇f)x̃; this shows that x̃ ∈ S.
We next prove that x̃ is a solution of the variational inequality (23). Since P_C(I − λ∇f_{α_s}) is nonexpansive, we see that I − P_C(I − λ∇f_{α_s}) is monotone. By (19) and this monotonicity, we obtain, after rearranging terms, an inequality whose limit along s = s_n → 0 ensures that x̃ is a solution of the variational inequality (23). This implies that x̃ = x*. Therefore x_s → x* as s → 0.

Finally, we consider the explicit version of our algorithm, which is

x_{n+1} := P_C[θ_n γh(x_n) + (I − θ_n μF)P_C(x_n − λ∇f_{α_n}(x_n))],  n ≥ 0,  (30)

where the initial guess x_0 ∈ C, and {θ_n} ⊂ (0,1) and {α_n} ⊂ (0,∞) are parameter sequences satisfying certain conditions.
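A minimal numerical sketch of one possible reading of the explicit scheme (30) follows, with illustrative data (a quadratic f and a box constraint) and parameter sequences chosen only so that θ_n → 0, ∑θ_n = ∞, and α_n → 0; these choices are assumptions, not the conditions (C1)–(C5) of the paper.

```python
# Sketch of the explicit scheme (30):
# x_{n+1} = P_C[ theta_n*gamma*h(x_n) + (I - theta_n*mu*F) P_C(x_n - lam * grad f_{alpha_n}(x_n)) ].
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad_f = lambda x: A.T @ (A @ x - b)
P_C = lambda x: np.clip(x, 0.0, 1.0)
h = lambda x: 0.5 * x                       # contraction, rho = 0.5
F = lambda x: x                             # kappa = eta = 1
L = np.linalg.norm(A, 2) ** 2
mu, gamma = 1.0, 0.5
lam = 1.0 / (L + 1.0)

x = np.zeros(2)
for n in range(1, 3001):
    theta_n, alpha_n = 1.0 / n, 1.0 / n**2  # theta_n -> 0, sum theta_n = infinity, alpha_n -> 0
    y = P_C(x - lam * (grad_f(x) + alpha_n * x))       # regularized gradient-projection step
    x = P_C(theta_n * gamma * h(x) + y - theta_n * mu * F(y))
print(x)                                     # approximate minimizer of f over C
```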

Theorem 8. Assume that the minimization problem (5) is consistent and let S denote its solution set. Let the gradient ∇f be (1/L)-ism. Let F be η-strongly monotone and κ-Lipschitzian, and let h be a contraction with coefficient ρ ∈ [0,1). Fix a constant μ satisfying 0 < μ < 2η/κ², a constant γ with the property 0 < γρ < τ := μ(η − μκ²/2), and a constant λ satisfying 0 < λ < 2/(L + ᾱ), where ᾱ is an upper bound for the regularization sequence {α_n}. Let {x_n} be generated by the iterative algorithm (30). Assume that the parameter sequences {θ_n} and {α_n} satisfy conditions (C1)–(C5). Then the sequence {x_n} converges in norm to the point x* defined in Theorem 7.

Proof. We set T_n := P_C(I − λ∇f_{α_n}), so that (30) reads x_{n+1} = P_C[θ_n γh(x_n) + (I − θ_n μF)T_n x_n]. Since θ_n → 0, there exists an integer N_0 such that θ_n(τ − γρ) < 1 for all n ≥ N_0.
We observe that {x_n} is bounded. Indeed, taking a point p ∈ S, we estimate ‖x_{n+1} − p‖ in terms of ‖x_n − p‖ as in the proof of Lemma 5. By induction, ‖x_n − p‖ ≤ max{‖x_0 − p‖, M_0} for all n and a suitable constant M_0 > 0; then {x_n} is bounded, and so are {h(x_n)} and {F T_n x_n}.
We claim that ‖x_{n+1} − x_n‖ → 0.
Because of the boundedness of {x_n}, it can easily be seen that {h(x_n)} and {F T_n x_n} are also bounded, so we can take two constants M_1, M_2 > 0 such that ‖h(x_n)‖ ≤ M_1 and ‖F T_n x_n‖ ≤ M_2 for all n. Estimating ‖x_{n+1} − x_n‖ with these bounds and applying Lemma 4, we obtain ‖x_{n+1} − x_n‖ → 0.
Next we show that ‖x_n − P_C(I − λ∇f)x_n‖ → 0.
We next show that limsup_{n→∞}⟨(γh − μF)x*, x_n − x*⟩ ≤ 0, where x* is obtained in Theorem 7. Indeed, take a subsequence {x_{n_k}} of {x_n} such that limsup_{n→∞}⟨(γh − μF)x*, x_n − x*⟩ = lim_{k→∞}⟨(γh − μF)x*, x_{n_k} − x*⟩. Since the sequence {x_{n_k}} is bounded, we may assume that x_{n_k} ⇀ x̂; it follows from Lemma 2 and (34) that x̂ ∈ S. Then we obtain limsup_{n→∞}⟨(γh − μF)x*, x_n − x*⟩ = ⟨(γh − μF)x*, x̂ − x*⟩ ≤ 0, which is (37).
We finally show that x_n → x*. Using Lemma 3, we estimate ‖x_{n+1} − x*‖² in terms of ‖x_n − x*‖² and ⟨(γh − μF)x*, x_{n+1} − x*⟩. Since {x_n} is bounded, we can take a constant M_3 > 0 controlling the remaining terms; it then follows that the resulting inequality has the form (41) required by Lemma 4, where the error term δ_n is governed by ⟨(γh − μF)x*, x_{n+1} − x*⟩ and α_n.
Since θ_n → 0 and ∑_n θ_n = ∞, and, by (37), limsup_{n→∞} δ_n ≤ 0, applying Lemma 4 to (41) shows that x_n → x* as n → ∞.

4. An Application

Since the split feasibility problem (SFP, for short) was proposed by Censor and Elfving in 1994, it has been widely used in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy. We know that the gradient-projection method plays an important role in solving the SFP. In this section, we provide an application of Theorem 8 to the SFP (see [21, 22]).

The SFP can mathematically be formulated as the problem of finding a point x* with the property

x* ∈ C and Ax* ∈ Q,  (42)

where C and Q are nonempty, closed, and convex subsets of Hilbert spaces H_1 and H_2, respectively, and A : H_1 → H_2 is a bounded linear operator.

It is clear that x* is a solution to the split feasibility problem if and only if x* ∈ C and Ax* − P_Q Ax* = 0. We define the proximity function g by

g(x) := (1/2)‖Ax − P_Q Ax‖²,

and we consider the constrained convex minimization problem

min_{x∈C} g(x).  (44)

Then x* solves the split feasibility problem (42) if and only if x* solves the minimization problem (44) with the minimal value equal to 0. Byrne introduced the so-called CQ algorithm to solve the SFP:

x_{n+1} := P_C(x_n − β A*(I − P_Q)Ax_n),  n ≥ 0,  (45)

where the step size β satisfies 0 < β < 2/‖A‖². He proved that the sequence {x_n} generated by (45) converges weakly to a solution of the SFP.
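A minimal sketch of the CQ iteration (45) follows, assuming box-shaped sets C and Q so that both projections reduce to componentwise clipping; the data are illustrative and chosen so that the SFP is consistent.

```python
# Sketch of Byrne's CQ algorithm (45) for the SFP: find x in C with Ax in Q.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 1.0]])
P_C = lambda x: np.clip(x, 0.0, 2.0)          # C = [0,2]^2
P_Q = lambda y: np.clip(y, 1.0, 4.0)          # Q = [1,4]^2

beta = 1.0 / np.linalg.norm(A, 2) ** 2        # step size, 0 < beta < 2/||A||^2
x = np.zeros(2)
for _ in range(1000):
    Ax = A @ x
    x = P_C(x - beta * A.T @ (Ax - P_Q(Ax)))  # x_{n+1} = P_C(x_n - beta*A^T(I - P_Q)A x_n)
print(x, A @ x)                               # x lies in C and Ax is (approximately) in Q
```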

Now we consider the regularization technique. Let

g_α(x) := g(x) + (α/2)‖x‖²,  α > 0;

then we establish the iterative scheme as follows:

x_{n+1} := P_C[θ_n γ h(x_n) + (I − θ_n μF)P_C(x_n − λ∇g_{α_n}(x_n))],  n ≥ 0,  (47)

where h is a contraction with coefficient ρ ∈ [0,1). Let F be η-strongly monotone and κ-Lipschitzian.

Theorem 9. Assume that the split feasibility problem (42) is consistent. Let the sequence {x_n} be generated by (47). The constants μ, γ, and λ satisfy the same conditions as in Theorem 8, with L = ‖A‖². The sequences {θ_n} and {α_n} are parameter sequences satisfying conditions (C1)–(C5) in Theorem 8. Then the sequence {x_n} generated by (47) converges strongly to a solution of the split feasibility problem (42).

Proof. By the definition of the proximity function g, we have ∇g(x) = A*(I − P_Q)Ax, and we show that ∇g is (1/‖A‖²)-ism.
Since P_Q is a (1/2)-averaged mapping, I − P_Q is 1-ism; that is, ⟨x − y, (I − P_Q)x − (I − P_Q)y⟩ ≥ ‖(I − P_Q)x − (I − P_Q)y‖² for all x, y ∈ H_2. Hence, for all x, y ∈ H_1,

⟨x − y, ∇g(x) − ∇g(y)⟩ = ⟨Ax − Ay, (I − P_Q)Ax − (I − P_Q)Ay⟩ ≥ ‖(I − P_Q)Ax − (I − P_Q)Ay‖² ≥ (1/‖A‖²)‖∇g(x) − ∇g(y)‖²,

so ∇g is (1/‖A‖²)-ism. Let L := ‖A‖²; then the iterative scheme (47) is exactly the scheme (30) with ∇f replaced by ∇g, and the conclusion follows immediately from Theorem 8.

5. Numerical Result

In this section, we present a simple example to assess the numerical performance of our algorithm. We use the algorithm of Theorem 9 to illustrate its realization in solving a system of linear equations.

Example 10. In Theorem 9, we assume that H_1 = H_2 = ℝ^N. Take F = I, where I denotes the identity matrix, so that F is Lipschitzian with constant κ = 1 and strongly monotone with constant η = 1. The constants μ, γ, and λ are fixed as in Theorem 9, and the parameter sequences {θ_n} and {α_n} are chosen to satisfy conditions (C1)–(C5) for every n. Take Q = {b}. The SFP can then be formulated as the problem of finding a point x* with the property x* ∈ C and Ax* = b; that is, x* is the solution of the system of linear equations Ax = b. Then, by Theorem 9, the sequence {x_n} generated by (47) converges, as n → ∞, to the solution of this system; the numerical results are reported in Table 1.
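Since the concrete data of Example 10 (the matrix, the right-hand side, and the parameter values reported in Table 1) are not reproduced here, the following sketch only illustrates the spirit of the example: it solves a small linear system Ax = b as an SFP with Q = {b}, using scheme (47) as reconstructed above. All data and parameters below are hypothetical stand-ins.

```python
# Hypothetical numerical sketch in the spirit of Example 10:
# solve Ax = b as a split feasibility problem with C = R^3 and Q = {b}.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true                                 # consistent system, so the SFP has a solution

P_C = lambda x: x                              # C = R^3 (no constraint)
P_Q = lambda y: b                              # Q = {b}
h = lambda x: 0.5 * x                          # contraction, rho = 0.5
F = lambda x: x                                # identity: kappa = eta = 1
mu, gamma = 1.0, 0.5
lam = 1.0 / np.linalg.norm(A, 2) ** 2          # 0 < lam < 2/||A||^2

x = np.zeros(3)
for n in range(1, 5001):
    theta_n, alpha_n = 1.0 / n, 1.0 / n**2
    grad_g = A.T @ (A @ x - P_Q(A @ x)) + alpha_n * x   # gradient of the regularized proximity function
    y = P_C(x - lam * grad_g)
    x = P_C(theta_n * gamma * h(x) + y - theta_n * mu * F(y))
print(x)   # approaches the solution x_true = (1, 2, 3) of Ax = b
```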

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors’ Contribution

All the authors read and approved the final paper.

Acknowledgments

The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the Foundation of Tianjin Key Lab for Advanced Signal Processing and by the Fundamental Research Funds for the Central Universities (no. 3122014K012).