Abstract

The goal of this manuscript is to establish strong convergence theorems for inertial shrinking projection and CQ algorithms for solving a split convex feasibility problem in real Hilbert spaces. Finally, numerical examples are presented to illustrate the performance and effectiveness of our algorithms and to compare them with the previous shrinking projection, hybrid projection, and inertial forward-backward methods.

1. Introduction

Assume that $H$ is a real HS endowed with the inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\|\cdot\|$. Let $C$ be a NCC subset of $H$.

The mapping $T : C \to C$ is called NE if for all $x, y \in C$ the following inequality holds: $\|Tx - Ty\| \le \|x - y\|$.

For the mapping $T$, $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$ refers to the set of all FPs of $T$.

Here, we study the following inclusion problem: find $x \in H$ such that $0 \in Ax + Bx$, where $A : H \to H$ and $B : H \to 2^H$ are single-valued and set-valued operators, respectively.

Approximating FP problems for NEMs has many important applications, such as monotone variational inequalities, image restoration problems, convex optimization problems, and SCFPs; see, for example, [1–3]. More precisely, these problems can be expressed as mathematical models arising, for instance, in machine learning and the linear inverse problem.

In the past, the solution of problem (2) was characterized by the fixed-point equation $x = J_{\lambda B}(x - \lambda Ax)$, $\lambda > 0$, where $J_{\lambda B} = (I + \lambda B)^{-1}$ is the resolvent of $B$, and it relied on the forward-backward splitting method [4–10]. This technique is described as follows: $x_1 \in H$ and $x_{n+1} = J_{\lambda B}(x_n - \lambda A x_n)$, $n \ge 1$.

Here, we do not mean the sum of $A$ and $B$ in the iterates; rather, each step of the iteration involves only $A$ in the forward (explicit) term and $B$ in the backward (implicit) term. As special cases, this technique figures heavily in the study of the proximal point algorithm [11–13] and the gradient method [14–17].
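As an illustration (ours, not taken from the cited works), the forward-backward idea can be sketched in Python for the model problem $\min_x \frac{1}{2}\|x-b\|^2 + \tau\|x\|_1$: the forward step uses the gradient of the smooth term, and the backward step applies the resolvent of the nonsmooth term, which here reduces to soft-thresholding. All names and parameter values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Backward (implicit) step: the resolvent of the subdifferential
    # of t*||.||_1, available in closed form as soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(b, tau, step=0.5, iters=200):
    # Minimize f(x) + g(x) with f(x) = 0.5*||x - b||^2 (forward term,
    # handled via its gradient) and g(x) = tau*||x||_1 (backward term,
    # handled via its resolvent).
    x = np.zeros_like(b)
    for _ in range(iters):
        grad = x - b                                     # forward step
        x = soft_threshold(x - step * grad, step * tau)  # backward step
    return x

b = np.array([3.0, -0.5, 1.2])
x_star = forward_backward(b, tau=1.0)
# For this particular f, the exact minimizer is soft_threshold(b, tau).
```

Since the step size is below $2/L$ for the gradient of $f$, the iteration contracts toward the unique minimizer.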

In 1979, an efficient splitting iterative scheme in a real HS was introduced by Lions and Mercier [18], described by (4) and (5). In the previous literature, algorithm (4) is called the Peaceman-Rachford scheme [7] and scheme (5) is called the Douglas-Rachford scheme [19]. Generally, the convergence of both procedures is weak [7].

In 2001, a heavy-ball method for studying maximal monotone operators was introduced by Alvarez and Attouch [20]; this idea was developed in [21, 22], where an inertial term was added. This procedure is called the inertial proximal point algorithm and takes the shape $x_{n+1} = J_{\lambda_n B}\big(x_n + \theta_n(x_n - x_{n-1})\big)$, $n \ge 1$.

They obtained weak convergence for a maximal monotone mapping $B$, provided that $(\theta_n)$ is nondecreasing and $\theta_n \in [0, \theta]$ with $\sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\|^2 < \infty$.

In particular, condition (7) is true for $\theta_n \in [0, \theta]$ with $\theta < 1/3$. Here, $\theta_n$ is an extrapolation factor and the inertia is represented by the term $\theta_n(x_n - x_{n-1})$.

It should be noted that the inertial term increases the convergence speed of the algorithm [23–25].
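This speed-up can be seen numerically. The following Python sketch (our own illustration, not from [23–25]) compares plain gradient descent with a heavy-ball iteration that adds the inertial term $\theta(x_n - x_{n-1})$ on an ill-conditioned quadratic; the step size and $\theta$ are illustrative choices.

```python
import numpy as np

def grad(x):
    # Gradient of the ill-conditioned quadratic f(x) = 0.5*(x1^2 + 50*x2^2),
    # whose unique minimizer is x* = 0.
    return np.array([1.0, 50.0]) * x

def run(theta, step=0.02, iters=300):
    # theta = 0 gives plain gradient descent; theta > 0 adds the
    # inertial (heavy-ball) term theta*(x_n - x_{n-1}).
    x_prev = x = np.array([1.0, 1.0])
    for _ in range(iters):
        x, x_prev = x - step * grad(x) + theta * (x - x_prev), x
    return np.linalg.norm(x)   # distance to the minimizer x* = 0

plain = run(0.0)
inertial = run(0.8)
```

With these parameters the inertial iterate ends far closer to the minimizer than the plain one, reflecting the acceleration the inertial term provides on badly scaled problems.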

Moudafi and Oliny [26] improved the inertial proximal point algorithm by adding a single-valued, cocoercive, Lipschitz continuous operator $A$, as follows: $x_{n+1} = J_{\lambda_n B}\big(x_n + \theta_n(x_n - x_{n-1}) - \lambda_n A x_n\big)$, $n \ge 1$.

Algorithm (8) still converges only weakly, under stipulation (7) and $\lambda_n < 2/L$, where $L$ is the Lipschitz constant of $A$.

Besides that, strong convergence is of interest to many researchers, since convergence in norm in infinite-dimensional spaces is often much more desirable than weak convergence [27].

The first contribution of researchers to strong convergence is the algorithm presented by Nakajo and Takahashi [28]. They added the CQ terms to the Mann algorithm as follows: for an arbitrary point $x_0 \in C$, define the sequence $(x_n)$ iteratively by $y_n = \alpha_n x_n + (1 - \alpha_n) T x_n$, $C_n = \{z \in C : \|y_n - z\| \le \|x_n - z\|\}$, $Q_n = \{z \in C : \langle x_n - z, x_0 - x_n \rangle \ge 0\}$, and $x_{n+1} = P_{C_n \cap Q_n} x_0$.

They showed that the sequence $(x_n)$ converges strongly to $P_{\mathrm{Fix}(T)} x_0$, whenever the sequence $(\alpha_n)$ is bounded above by some $a \in [0, 1)$. We highly recommend [24, 29] for more details on CQ algorithms for NEMs.

Based on algorithm (9), Dong et al. [30] obtained a strong convergence result by incorporating an inertial forward-backward algorithm for monotone inclusions as follows: assume that $A$ is an $\eta$-ISM operator and $B$ is a MM operator so that $(A + B)^{-1}(0) \neq \emptyset$. Suppose that the sequence is generated iteratively by their inertial forward-backward CQ scheme.

Recently, a nice convergence analysis for NEMs under suitable stipulations was given by Dong et al. [31]. They extended the inertial Mann algorithm, where the involved real sequences satisfy the stipulations of [31].
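The inertial Mann idea can be sketched as follows (a generic Python illustration of ours, not the exact scheme or parameter conditions of [31]): extrapolate with the inertial term, then apply a Mann (convex-combination) step with a nonexpansive mapping. Here $T$ is taken to be the projection onto the closed unit ball, and the constant parameters are assumptions for the demo.

```python
import numpy as np

def T(x):
    # A simple nonexpansive mapping: the metric projection onto the
    # closed unit ball, whose fixed-point set is the ball itself.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

alpha, beta = 0.3, 0.5        # inertial factor and Mann relaxation (assumed)
x_prev = x = np.array([4.0, -3.0])
for _ in range(200):
    w = x + alpha * (x - x_prev)                  # inertial extrapolation
    x, x_prev = (1 - beta) * w + beta * T(w), x   # Mann step applied at w

fixed_gap = np.linalg.norm(x - T(x))   # vanishes at a fixed point of T
```

The iterates settle at a fixed point of $T$ (a point of the ball), so the residual `fixed_gap` drops to machine precision.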

Motivated by the idea of strong convergence of algorithms, in this manuscript a two-step inertial shrinking projection algorithm is introduced and its strong convergence is analyzed. As an application of our main results, the SCFP is solved. Finally, to examine the behavior and performance of our algorithms in terms of convergence, numerical results are presented and discussed.

2. Preliminaries

This section is devoted to collecting some important preliminaries, which we need in the sequel. Let $C$ be a NCC subset of a real HS $H$ and let $(x_n)$ be a sequence in $H$. Here, the strong convergence of $(x_n)$ to a point $x$ is written as $x_n \to x$. The metric projection of $H$ onto $C$ is denoted by $P_C$; that is, $\|x - P_C x\| \le \|x - y\|$ for all $x \in H$ and $y \in C$.

Lemma 1 (see [32]). Let $C$ be a NCC subset of a real HS $H$. The metric projection $P_C$ is firmly NE, i.e., $\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$. Furthermore, for all $x \in H$ and $y \in C$, $\|x - P_C x\|^2 + \|y - P_C x\|^2 \le \|x - y\|^2$ is satisfied.
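Both the firm nonexpansiveness of $P_C$ and the variational characterization $\langle x - P_C x, y - P_C x\rangle \le 0$ for $y \in C$ can be checked numerically. The Python sketch below (our illustration; the ball $C$ and all names are assumptions) projects onto a closed ball and verifies the two inequalities on random samples.

```python
import numpy as np

def project_ball(x, r=1.0):
    # Metric projection onto the closed ball C = {y : ||y|| <= r}.
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

rng = np.random.default_rng(0)
vi_ok, firm_ok = [], []
for _ in range(200):
    x, u, z = (rng.normal(size=3) * 4 for _ in range(3))
    px, pu = project_ball(x), project_ball(u)
    y = project_ball(z)                         # an arbitrary point of C
    # Variational characterization: <x - Px, y - Px> <= 0 for all y in C.
    vi_ok.append((x - px) @ (y - px) <= 1e-9)
    # Firm nonexpansiveness: <Px - Pu, x - u> >= ||Px - Pu||^2.
    firm_ok.append((px - pu) @ (x - u) >= np.linalg.norm(px - pu) ** 2 - 1e-9)
```

Both lists of checks come out entirely true, as Lemma 1 predicts for any NCC set.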

Lemma 2 (see [32]). Assume that $H$ is a real HS. Then, we get (i) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$ for all $x, y \in H$; (ii) $\|tx + (1-t)y\|^2 = t\|x\|^2 + (1-t)\|y\|^2 - t(1-t)\|x - y\|^2$ for each $x, y \in H$ and for a real number $t \in [0, 1]$.

Lemma 3 (see [33]). Suppose that $H$ is a real HS and $(x_n)$ is a sequence in $H$. Then, the following hypotheses hold: (i) if $x_n \rightharpoonup x$ and $\|x_n\| \to \|x\|$ as $n \to \infty$, then $x_n \to x$ as $n \to \infty$; that is, the HS $H$ has the Kadec-Klee property; (ii) if $x_n \rightharpoonup x$ as $n \to \infty$, then $\|x\| \le \liminf_{n \to \infty} \|x_n\|$.

Lemma 4 (see [34]). Let be a NCC subset of a real HS . For each and , the following set is closed and convex:

Lemma 5 (see [28]). Let be a NCC subset of a real HS and be the metric projection. Then, for all and the following inequality holds:

Lemma 6 (see [35]). Let $T$ be a NE self-mapping of a NCC subset $C$ of a real HS $H$. The mapping $I - T$ is demiclosed, i.e., if a sequence $(x_n)$ in $C$ converges weakly to some $x \in C$ and the sequence $((I - T)x_n)$ converges strongly to some $y$, then $(I - T)x = y$.

Definition 7. Assume that $D(A)$ is the domain of the mapping $A$; then, for all $x, y \in D(A)$, the mapping $A$ is called (i) monotone if $\langle Ax - Ay, x - y \rangle \ge 0$; (ii) $\eta$-strongly monotone if there is $\eta > 0$ so that $\langle Ax - Ay, x - y \rangle \ge \eta \|x - y\|^2$; (iii) $\eta$-ISM if there is $\eta > 0$ so that $\langle Ax - Ay, x - y \rangle \ge \eta \|Ax - Ay\|^2$.
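As a concrete example (ours, for illustration), the gradient of the convex quadratic $f(x) = \frac{1}{2}\|Mx - b\|^2$ is, by the Baillon-Haddad theorem, $(1/L)$-ISM, where $L$ is the Lipschitz constant of the gradient. The Python sketch below verifies inequality (iii) on random points; the matrix $M$ and vector $b$ are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 3))
b = rng.normal(size=4)
L = np.linalg.norm(M.T @ M, 2)     # Lipschitz constant of A below

def A(x):
    # Gradient of the convex function f(x) = 0.5*||Mx - b||^2.
    # By Baillon-Haddad it is (1/L)-inverse strongly monotone.
    return M.T @ (M @ x - b)

ok = []
for _ in range(200):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = (A(x) - A(y)) @ (x - y)                        # monotonicity part
    rhs = (1.0 / L) * np.linalg.norm(A(x) - A(y)) ** 2   # ISM strengthening
    ok.append(lhs >= rhs - 1e-9)
```

Every sampled pair satisfies the ISM inequality, which is exact here because the eigenvalues of $M^\top M$ are bounded by $L$.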

Lemma 8 (see [5]). Let be a real HS, be an -ISM operator, and be a MM operator. For each we consider then the following statements hold: (i)for (ii)for and ,

Lemma 9 (see [36]). Let be a real HS, be an -ISM operator and be a MM operator, then for all and all we have

3. Strong Convergence Results

From now on, we assume that is a NCC subset of a real HS , is an -ISM operator, is a MM operator, and is a quasi-NEM so that is demiclosed at zero and .

Now, we build our algorithms to find an element in as follows:

Let , and be sequences of real numbers. Select initial .
Step (1). Compute
   
Step (2). Compute
   
Step (3). Compute
   
   
Let , and be sequences of real numbers. Select initial .
Step (1). Compute
   
Step (2). Compute
   
Step (3). Compute
   
   
   

Now, we shall discuss the strong convergence of Algorithm 1 by introducing the following theorem.

Theorem 10. Let the sequence , be bounded and be a sequence in and be a sequence of positive real numbers so that the following two stipulations hold: (i)(ii)

If , then the sequence created by Algorithm 1 converges strongly to .

Proof. The proof will be divided into the following steps:
Step (i). For each , and for , prove that is well-defined.
From stipulation (ii) and Lemma 9, we get that is NE. Thus, it follows from Lemma 8 that the set is closed and convex. Moreover, Lemma 4 implies that is closed and convex for all . Considering , we get In the same manner, one can write Furthermore, by Lemma 2 (ii) and Lemma 9, we obtain that Applying (17) and (18) in (19), and by stipulation (ii) of Theorem 10, we can write It can easily be seen that . For some , assume that ; then , and by (20), we conclude that . Therefore, for all , and this finishes the requirement of Step (i).
Step (ii). Prove that is bounded. Because is a NCC subset of , there is a unique so that
From , and for all , we get Also, since , we get By (21) and (22), we obtain that exists; this implies that is bounded.
Step (iii). Prove that as , for some . By the structure of , for , one sees that From Lemma 5, we have From Step (ii), we obtain that as . This proves that the sequence is a Cauchy sequence. Therefore, as . In particular, we can obtain
Step (iv). Prove that . Because and are bounded, by (24) we have From (24), (25), and (26), we have Since , we obtain that Thus, from the boundedness of , , and together with (24) and (29), we obtain that By (24), (30), (27), and (28) and the following inequalities, we get that We now have Again using stipulation (i) and (32), we get As , there is so that and for all . Hence, from Lemma 8 (ii) and (34), one can write Based on (32), as , we get Because is NE, it is a continuous mapping. Hence, using (35), we have .
Step (v). Prove that . Since and , we obtain that Taking the limit in (36), we have This shows that , and this finishes the requirement.

Next, we shall discuss the strong convergence of Algorithm 2 by presenting the following theorem.

Theorem 11. Let the sequence , be bounded and be a sequence in and be a sequence of positive real numbers so that the following two stipulations hold: (i)(ii)

If , then the sequence generated by Algorithm 2 converges strongly to .

Proof. In the same way as in the proof of Theorem 10, we discuss the following steps:
Step (i). Prove that is well-defined for all and for each .
It is clear that is a closed and convex subset of (by Lemma 4). So, we can rewrite the set in the form It follows that is a closed and convex subset of too. Thus, is also closed and convex, for each
Let . In a similar way to the proof of Theorem 10, one can obtain that Hence, for all ; this implies that
When , we get , and hence . Suppose that for some . It follows from that for each Since and , we have This yields , and hence This implies that , and hence is well-defined as well as
Step (ii). Prove that is bounded. Based on Algorithm 2, one gets This implies that Since , we have Also, since , we can write By (43) and (44), we obtain that exists, and hence is bounded.
Step (iii). Prove that as .
Because and , it follows from Lemma 5 that This implies that as .
Step (iv). Prove that . In the same manner as in the proof of Step (iv) of Theorem 10, we can write where . It follows from the nonexpansivity of that From (46) and (47), we get By the boundedness of , there exists a subsequence of such that . Combining this with (48) and applying Lemma 6, we have ; this means that .
Since and , (43) and Lemma 3 (ii) imply that Since the nearest point is unique, we have . Also, we get Applying Lemma 3 (i), we have as . Again, the uniqueness of leads to as .
This finishes the requirement.

4. Application to Solve Split Convex Feasibility Problem

This part is devoted to applying our methods to find a solution of the SCFP. Assume that $A : H_1 \to H_2$ is a bounded linear operator with adjoint $A^*$, where $H_1$ and $H_2$ are real HSs. Let $C \subseteq H_1$ and $Q \subseteq H_2$ be NCC sets. Censor and Elfving [37] formulated the SCFP as follows: find $x \in C$ such that $Ax \in Q$.

Censor and Elfving [37] introduced the SCFP in HSs using a multidistance approach to derive an adaptive method for resolving it. Many problems that emerge from signal recovery and medical image restoration can be formulated as split convex feasibility problems [38, 39]. This problem is also used in a variety of disciplines like image restoration, dynamic emission tomographic image reconstruction, and radiation therapy treatment planning [40–42]. Let us consider $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$, where $P_Q$ is the metric projection onto $Q$, whose gradient is $\nabla f(x) = A^*(I - P_Q)Ax$, and let $B = \partial i_C$ be the subdifferential of the indicator function of $C$. Due to the above construction, problem (50) has an inclusion format as described in (2). It can be seen that $\nabla f$ is Lipschitz continuous with constant $\|A\|^2$ and $\partial i_C$ is MM; see, for example, [43].
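For intuition, the classical CQ algorithm of Byrne for the SCFP iterates $x_{k+1} = P_C\big(x_k - \gamma A^*(I - P_Q)Ax_k\big)$ with $\gamma \in (0, 2/\|A\|^2)$, which is exactly a forward-backward step for the construction above. The Python sketch below is our own illustration of that classical scheme (not the inertial algorithms of this paper); the operator and the sets $C$, $Q$ are assumptions chosen so that the problem is solvable by construction.

```python
import numpy as np

# A simple bounded linear operator (chosen to keep the demo transparent).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

def P_C(x):
    # Projection onto C = [0, 1]^3 (componentwise clip).
    return np.clip(x, 0.0, 1.0)

x_feas = np.array([0.2, 0.7, 0.4])   # a point of C used to build Q
center = A @ x_feas                  # Q contains A @ x_feas by design

def P_Q(y):
    # Projection onto Q = closed ball of radius 0.5 around A @ x_feas,
    # which guarantees that the SCFP (x in C, Ax in Q) is solvable.
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= 0.5 else center + 0.5 * d / n

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step in (0, 2/||A||^2)
x = np.array([5.0, -3.0, 2.0])            # infeasible starting point
for _ in range(100):
    y = A @ x
    x = P_C(x - gamma * A.T @ (y - P_Q(y)))

residual = np.linalg.norm(A @ x - P_Q(A @ x))   # 0 once Ax lies in Q
```

The iterate lands in $C$ with $Ax \in Q$, so the feasibility residual vanishes to machine precision.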

For any NCC subset $C$ of a real HS $H$, the indicator function $i_C$ of $C$ is defined by $i_C(x) = 0$ if $x \in C$ and $i_C(x) = +\infty$ otherwise.

Now, on the basis of the main results, we can deduce the following results for a SCFP.

Theorem 12. Let be a sequence iterated as follows: choose initial points and let , be bounded sequences and be a sequence in . Consider that is a sequence of positive real numbers so that the following assumptions are fulfilled: (i)(ii)Step (1). Compute Step (2). Compute Step (3). Compute

If the solution set is nonempty, then the sequence converges strongly to an element of the solution set.

Theorem 13. Let be a sequence iterated as follows: choose initial points and let , be bounded sequences and be a sequence in . Consider that is a sequence of positive real numbers so that the following assumptions are fulfilled: (i)(ii)Step (1). Compute Step (2). Compute Step (3). Compute

If the solution set is nonempty, then the sequence converges strongly to an element of the solution set.

5. Supportive Numerical Examples

This section is the mainstay of the paper, as it studies the behavior and performance of our algorithms numerically and graphically. The program used here is MATLAB R2014a, running on an HP Compaq 510 with a Core™ 2 Duo CPU T5870 at 2.0 GHz and 2 GB RAM.

Example 14. Let be two HSs with an inner product and the induced norm defined by

Next, consider the feasible set as and is

Consider the mapping so that Then, and So, we wish to solve the following problem:

We can also observe that since , the above problem is actually a SCFP in the form of

Figures 1–5 and Table 1 show the numerical computational results of Algorithm (4.2) of Dong et al. [30] (St-Alg1), Algorithm 1 (St-Alg2), and Algorithm 2 (St-Alg3), assuming

Remark 15. It is important to note that different choices of initial points have a substantial effect on the CPU time and the number of iterations of the proposed algorithms, and also of the existing algorithms used for comparison. These facts can be seen from Figures 1–5 and Table 1. We conclude from these numerical results that our algorithms converge faster than their counterpart presented by Dong et al. in [30].

6. Conclusion

The quality of an algorithm is measured by two main factors: convergence speed and time. When convergence is faster within a short time, the results are obtained more quickly and accurately. Given the importance of algorithms in many real-world applications, many researchers have studied this topic and tried to obtain strong convergence, which plays a prominent role in studying the efficiency and effectiveness of these algorithms. On the basis of this principle, this paper studied the effect of the shrinking projection and CQ terms on two inertial terms to obtain the strong convergence of new algorithms, called two-step inertial shrinking projection and CQ algorithms. These results were applied to obtain a solution of the SCFP in HSs. Finally, some numerical results were presented to illustrate the efficiency and effectiveness of the algorithms.

Abbreviations

HSs:Hilbert spaces
NCC:Nonempty closed convex
NEMs:Nonexpansive mappings
FPs:Fixed points
SCFPs:Split convex feasibility problems
ISM:Inverse strongly monotone
MM:Maximal monotone.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no competing interests.

Authors’ Contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Acknowledgments

The third author (Y.U.G.) wishes to acknowledge this work was carried out with the aid of a grant from the Carnegie Corporation of New York provided through the African Institute for Mathematical Sciences.