Abstract

Our main goal in this manuscript is to accelerate the relaxed inertial Tseng-type (RITT) algorithm by adding a shrinking projection (SP) term to it. As a result, strong convergence results are obtained in a real Hilbert space (RHS). A novel structure is used to solve an inclusion problem and a minimization problem under suitable hypotheses. Finally, numerical experiments are discussed to illustrate the applicability, performance, speed, and effectiveness of our procedure.

1. Introduction

The standard form of the variational inclusion problem (VIP) on a RHS $\mathcal{H}$ is: find $x^* \in \mathcal{H}$ such that
$$0 \in (A + B)x^*, \qquad (1)$$
where $x^*$ is the unknown point that we need to find, $A : \mathcal{H} \to \mathcal{H}$ is an operator, and $B : \mathcal{H} \to 2^{\mathcal{H}}$ is a set-valued operator. The VIP is a frequent problem in the optimization field and has many applications in areas including equilibrium, machine learning, economics, engineering, image processing, and transportation problems [1–16].

The classical technique for solving problem (1), whose solution set is denoted by $(A + B)^{-1}(0)$, is the forward-backward splitting method [17–22], which is defined as follows: $x_1 \in \mathcal{H}$ and
$$x_{n+1} = J_{\lambda}^{B}(x_n - \lambda A x_n), \qquad (2)$$
where $\lambda > 0$ and $J_{\lambda}^{B} = (I + \lambda B)^{-1}$ is the resolvent of $B$. In (2), each iterate involves only the forward step $I - \lambda A$ and the backward step $J_{\lambda}^{B}$, but never the resolvent of $A + B$ itself. This technique includes the proximal point algorithm [23–25] and the gradient method [26–28] as special cases.
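To make the forward-backward iteration concrete, here is a minimal numpy sketch for the standard special case $A = \nabla(\tfrac12\|x - b\|^2)$ and $B = \partial(\alpha\|x\|_1)$, whose resolvent is componentwise soft-thresholding; the data `b`, `alpha`, and the stepsize `lam` are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent (I + t*B)^{-1} for B = subdifferential of the l1-norm,
    # i.e. the proximal mapping of t*||.||_1, applied componentwise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(b, alpha, lam=0.5, iters=200):
    # Solves 0 in (A + B)x with A(x) = grad(0.5*||x - b||^2) = x - b
    # and B = subdifferential of alpha*||x||_1, via the iteration
    # x_{n+1} = J_lam^B (x_n - lam * A(x_n)).
    x = np.zeros_like(b)
    for _ in range(iters):
        x = soft_threshold(x - lam * (x - b), lam * alpha)
    return x

b = np.array([3.0, 0.2, -1.5])
x_star = forward_backward(b, alpha=1.0)
# The closed-form minimizer here is soft_threshold(b, alpha) = [2.0, 0.0, -0.5]
```

Since $A$ is 1-cocoercive, any stepsize $\lambda \in (0, 2)$ guarantees convergence for this toy instance.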

In a RHS, elegant splitting iterative procedures presented by Lions and Mercier [29] are the following:
$$x_{n+1} = (2J_{\lambda}^{B} - I)(2J_{\lambda}^{A} - I)x_n \qquad (3)$$
and
$$x_{n+1} = J_{\lambda}^{B}(2J_{\lambda}^{A} - I)x_n + (I - J_{\lambda}^{A})x_n, \qquad (4)$$
where $\lambda > 0$. Both algorithms converge weakly [30]; algorithm (3) is called the Peaceman–Rachford algorithm [19], and scheme (4) is called the Douglas–Rachford algorithm [31].

Many works are concerned with problem (1) for accretive operators and two monotone operators. For instance, a stationary solution to the initial value problem of the evolution equation
$$\frac{du}{dt} + (A + B)u \ni 0, \quad u(0) = u_0, \qquad (5)$$
can be recast as (1) when the governing operators are maximal monotone [29].

Problem (1) can be used to solve a minimization problem of the form
$$\min_{x \in \mathcal{H}} f(x) + g(x), \qquad (6)$$
where $f, g : \mathcal{H} \to \mathbb{R} \cup \{+\infty\}$ are proper and lower semicontinuous convex functions such that $f$ is differentiable with an $L$-Lipschitz gradient, and the proximal mapping of $g$ is
$$\operatorname{prox}_{\lambda g}(x) = \arg\min_{y \in \mathcal{H}} \Big\{ g(y) + \frac{1}{2\lambda}\|x - y\|^2 \Big\}. \qquad (7)$$

In particular, if $A = \nabla f$ and $B = \partial g$, where $\nabla f$ is the gradient of $f$ and $\partial g$ is the subdifferential of $g$, which takes the form $\partial g(x) = \{u \in \mathcal{H} : g(y) \ge g(x) + \langle u, y - x\rangle \ \text{for all } y \in \mathcal{H}\}$, then problem (1) becomes (6), and (2) becomes
$$x_{n+1} = \operatorname{prox}_{\lambda g}(x_n - \lambda \nabla f(x_n)), \qquad (8)$$
where $\lambda$ is the stepsize and $\operatorname{prox}_{\lambda g}$ is the proximity operator of $\lambda g$.

The concept of merging the inertial term with the backward step was initiated by Alvarez and Attouch [32] and studied extensively in [33, 34]. For a maximal monotone operator $B$, it was called the inertial proximal point (IPP) algorithm, and they defined it by
$$x_{n+1} = J_{\lambda_n}^{B}\big(x_n + \theta_n(x_n - x_{n-1})\big). \qquad (9)$$

It was proved that if $\{\lambda_n\}$ is nondecreasing and $\theta_n \in [0, 1)$ with
$$\sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\|^2 < \infty, \qquad (10)$$
then algorithm (9) converges weakly to a zero of $B$. In particular, condition (10) holds for $\theta_n = \theta < 1/3$. Here, $\theta_n$ is an extrapolation factor, and the inertia is represented by the term $\theta_n(x_n - x_{n-1})$. Note that the inertial term improves the performance of the procedure and yields good convergence results [35–37].
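A tiny self-contained illustration of the IPP iteration (9), assuming the simplest maximal monotone operator $B(x) = x$ on the real line (so the resolvent has the closed form $x/(1 + \lambda)$) and the extrapolation factor $\theta = 0.3 < 1/3$; all numbers are illustrative choices, not from the paper.

```python
def prox_point(x, lam):
    # Resolvent J_lam = (I + lam*B)^{-1} for the monotone operator B(x) = x.
    return x / (1.0 + lam)

def inertial_prox_point(x0, x1, lam=1.0, theta=0.3, iters=60):
    # IPP iteration: x_{n+1} = J_lam(x_n + theta*(x_n - x_{n-1})).
    prev, cur = x0, x1
    for _ in range(iters):
        prev, cur = cur, prox_point(cur + theta * (cur - prev), lam)
    return cur

x = inertial_prox_point(5.0, 4.0)
# The iterates approach 0, the unique zero of B.
```

For this linear recurrence the error contracts with modulus $\sqrt{0.15} \approx 0.39$ per step, so 60 iterations are far more than enough.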

The inertial term was merged with the forward-backward algorithm by the authors of [38]. They added a single-valued, cocoercive, Lipschitz-continuous operator $A$ into the IPP algorithm:
$$x_{n+1} = J_{\lambda_n}^{B}\big(x_n + \theta_n(x_n - x_{n-1}) - \lambda_n A x_n\big). \qquad (11)$$

Via assumption (10), provided that $\lambda_n < 2/L$, with $L$ the Lipschitz constant of $A$, they obtained a weak convergence result. Note that, for $\theta_n > 0$, algorithm (11) does not take the form of (2), even though $A$ is still evaluated at the points $x_n$.

Relaxation techniques and inertial effects have many advantages in solving monotone inclusion and convex optimization problems; this effect has appeared under several names, such as the relaxed inertial proximal method, the relaxed inertial forward-backward method, and the relaxed inertial Douglas–Rachford algorithm; for more details, refer to [22, 24, 39–44].

Abubakar et al. [45] introduced the RITT method as follows:
$$\begin{aligned} w_n &= x_n + \theta_n(x_n - x_{n-1}),\\ y_n &= J_{\lambda}^{B}(w_n - \lambda A w_n),\\ x_{n+1} &= (1 - \rho_n)w_n + \rho_n\big(y_n - \lambda(A y_n - A w_n)\big), \end{aligned} \qquad (12)$$
where $\theta_n$ and $\rho_n$ are extrapolation and relaxation parameters, respectively. Under this algorithm, they discussed weak convergence to a solution point of VIP (1) and the problem of image recovery. Note that the extrapolation step accelerates the iteration, but not to the desired degree.

The concept of the SP method was discussed by Takahashi et al. [46], as in the following algorithm:
$$y_n = \alpha_n x_n + (1 - \alpha_n)T_n x_n, \quad C_{n+1} = \{z \in C_n : \|y_n - z\| \le \|x_n - z\|\}, \quad x_{n+1} = P_{C_{n+1}} x_1.$$

They selected just one closed convex (CC) set for a family of nonexpansive mappings $\{T_n\}$ to modify Mann's iteration method [47] and proved that the sequence $\{x_n\}$ converges strongly, provided that $0 \le \alpha_n \le a < 1$ for all $n$ and for some $a$.

In 2019, Yang and Liu [48] selected a stepsize sequence for an iterative algorithm for monotone variational inequalities based on Tseng's extragradient method and the Moudafi viscosity scheme; the algorithm requires neither knowledge of the Lipschitz constant of the operator nor additional projections.

By incorporating the results of [45, 46, 48], we accelerate the RITT algorithm by adding the SP method to algorithm (12). In a RHS, strong convergence results are given for the proposed algorithm. As applications, our algorithm is used to find the solution of a VIP and of a minimization problem under certain conditions. Finally, numerical experiments are presented to illustrate the applications, performance, acceleration, and effectiveness of the proposed algorithm.

2. Preparatory Lemmas and Definitions

Suppose that $C$ is a nonempty closed convex subset (CCS) of a RHS $\mathcal{H}$; we write $\to$ for strong convergence, and $P_C$ is the nearest point projection, that is, for all $x \in \mathcal{H}$ and $y \in C$, $\|x - P_C x\| \le \|x - y\|$. $P_C$ is called the metric projection. It is obvious that $P_C$ verifies the following inequality:
$$\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y\rangle$$
for all $x, y \in \mathcal{H}$. In other words, the metric projection is firmly nonexpansive. Hence, $\langle x - P_C x, y - P_C x\rangle \le 0$ holds for all $x \in \mathcal{H}$ and $y \in C$; see [49, 50].
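The firm nonexpansiveness of the metric projection can be checked numerically. The sketch below assumes $C$ is a box in $\mathbb{R}^5$, for which $P_C$ is a componentwise clip; the random test points are illustrative.

```python
import numpy as np

def proj_box(x, lo, hi):
    # Metric projection P_C onto the box C = [lo, hi]^n (componentwise clip).
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
px, py = proj_box(x, -0.5, 0.5), proj_box(y, -0.5, 0.5)

lhs = np.dot(px - py, px - py)   # ||P_C x - P_C y||^2
rhs = np.dot(px - py, x - y)     # <P_C x - P_C y, x - y>
# Firm nonexpansiveness: lhs <= rhs, which also implies P_C is nonexpansive.
```

By Cauchy–Schwarz, `lhs <= rhs` immediately yields $\|P_C x - P_C y\| \le \|x - y\|$.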

The following inequality holds in a HS $\mathcal{H}$ [51]:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$$
for all $x, y \in \mathcal{H}$.

Lemma 1 (see [52]). Let $C$ be a nonempty CCS of a RHS $\mathcal{H}$. For each $x, y, z \in \mathcal{H}$ and $a \in \mathbb{R}$, the following set is closed and convex:
$$\{v \in C : \|y - v\|^2 \le \|x - v\|^2 + \langle z, v\rangle + a\}.$$

Lemma 2 (see [38]). Let $C$ be a nonempty CCS of a RHS $\mathcal{H}$ and $P_C$ be the metric projection. Then,
$$\|y - P_C x\|^2 + \|x - P_C x\|^2 \le \|x - y\|^2$$
for all $x \in \mathcal{H}$ and $y \in C$.

Definition 1. Suppose that $D(T)$ and $R(T)$ are the domain and the range of an operator $T$, respectively. For all $x, y \in D(T)$, an operator $T$ is called:(1) monotone if $\langle Tx - Ty, x - y\rangle \ge 0$;(2) $L$-Lipschitz if $\|Tx - Ty\| \le L\|x - y\|$;(3) strongly monotone if there exists $\beta > 0$ such that $\langle Tx - Ty, x - y\rangle \ge \beta\|x - y\|^2$;(4) inverse strongly monotone (ism) if there exists $\beta > 0$ such that $\langle Tx - Ty, x - y\rangle \ge \beta\|Tx - Ty\|^2$.

Lemma 3 (see [44]). Let $\mathcal{H}$ be a RHS, $A$ be an ism operator on $\mathcal{H}$, and $B$ be a maximal monotone operator. For each $\lambda > 0$, we define $T_\lambda = J_{\lambda}^{B}(I - \lambda A)$. Then, we get:(i) For $\lambda > 0$, $\operatorname{Fix}(T_\lambda) = (A + B)^{-1}(0)$.(ii) For $0 < \lambda \le \bar{\lambda}$ and $x \in \mathcal{H}$, $\|x - T_\lambda x\| \le 2\|x - T_{\bar{\lambda}} x\|$.

Lemma 4. Let $\mathcal{H}$ be a RHS, $A$ be a $\beta$-ism operator on $\mathcal{H}$, and $B$ be a maximal monotone operator. For each $\lambda \in (0, 2\beta)$, we have
$$\|J_{\lambda}^{B}(x - \lambda A x) - J_{\lambda}^{B}(y - \lambda A y)\| \le \|x - y\|$$
for all $x, y \in \mathcal{H}$.

Proof. For all $x, y \in \mathcal{H}$, the nonexpansiveness of the resolvent $J_{\lambda}^{B}$ and the $\beta$-ism property of $A$ give
$$\|J_{\lambda}^{B}(x - \lambda A x) - J_{\lambda}^{B}(y - \lambda A y)\|^2 \le \|(x - y) - \lambda(Ax - Ay)\|^2 \le \|x - y\|^2 - \lambda(2\beta - \lambda)\|Ax - Ay\|^2 \le \|x - y\|^2.$$
The proof is ended.

3. Shrinking Projection Relaxed Inertial Tseng-Type Algorithm

We provide a method consisting of the forward-backward splitting method with an inertial factor and an explicit stepsize formula, which are used to improve the convergence rate of the iterative scheme and to make the method independent of the Lipschitz constants. The detailed method is provided in Algorithm 1.

Initialization: select initial $x_0, x_1 \in \mathcal{H}$, $C_1 = C$, $\lambda_1 > 0$, $\mu \in (0, 1)$, and $\{\theta_n\} \subset [0, \theta] \subset [0, 1)$.
St. (i). Put $w_n$ as:
    $w_n = x_n + \theta_n(x_n - x_{n-1})$.
St. (ii). Calculate:
    $y_n = J_{\lambda_n}^{B}(w_n - \lambda_n A w_n)$,
  If $y_n = w_n$, discontinue; $w_n$ is a solution of (1). Otherwise, continue to St. (iii).
St. (iii). Calculate:
    $z_n = y_n - \lambda_n(A y_n - A w_n)$,
  where $\{\lambda_n\}$ is the stepsize sequence revised as follows:
    $\lambda_{n+1} = \min\{\lambda_n, \mu\|w_n - y_n\|/\|A w_n - A y_n\|\}$ if $A w_n \neq A y_n$, and $\lambda_{n+1} = \lambda_n$ otherwise.
St. (iv). Calculate:
    $t_n = (1 - \rho_n)w_n + \rho_n z_n$,
  where $\{\rho_n\} \subset (0, 1]$ is the relaxation sequence.
St. (v). Compute
    $C_{n+1} = \{u \in C_n : \|t_n - u\| \le \|w_n - u\|\}$ and $x_{n+1} = P_{C_{n+1}} x_1$;
  put $n := n + 1$, and return to St. (i).

Note that:(i) Since $A$ is a $\beta$-ism operator, it is a Lipschitz function with constant $L = 1/\beta$, so for $A w_n \neq A y_n$ we get $\mu\|w_n - y_n\|/\|A w_n - A y_n\| \ge \mu/L$. It is obvious that for $A w_n = A y_n$ inequality (25) is satisfied as well. Hence, it follows that $\lambda_{n+1} \ge \min\{\lambda_1, \mu/L\}$. This implies that the generated sequence $\{\lambda_n\}$ is bounded below by $\min\{\lambda_1, \mu/L\}$ and is monotonically decreasing, hence convergent.(ii) By (i) and (25), we have $\lambda_n > 0$ for every $n$; i.e., the update (28) is well defined.(iii) If we delete the shrinking projection term from our algorithm, we recover the algorithms of the papers [22, 45, 53].

Theorem 1. Let $\mathcal{H}$ be a RHS, let the operator $A$ be ism on $\mathcal{H}$, and let $B$ be maximally monotone. If the feasible set of (1) is a nonempty CCS of $\mathcal{H}$, then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to a point of it, provided that(i).(ii).

Proof. The proof will be divided as follows:

Part 1. Demonstrate that is well-defined, for each , , and . It follows from condition (i) and Lemma 4 that is a nonexpansive mapping. Lemma 3 implies that is a closed and convex set, and Lemma 1 clarifies that is closed and convex, for all .
Let ; we haveSince the resolvent is a firmly nonexpansive mapping and by Lemma 3, we haveHence, by (28), we getwhich leads toIt is obvious thatApplying (31) in (30), we can writeNow, from the definition of , we haveFrom equation (15), one can writeApplying (34) in (33), we getIt follows from (32), (35), and (26) thatApplying (27) in (36), we haveIt is clear that . Assume that for some . Then, and by (37), we have for all , . Thus, for all , i.e., is well-defined and bounded.

Part 2. Illustrate that is bounded. Since and closed and convex subset of , there is a unique such that . This leads to , , and for all , and we haveFurthermore, as , for all , we obtainIt follows by (38) and (39) that exists. Hence, is bounded.

Part 3. Fulfillment of . By the definition of , for , we observe that . From Lemma 2, we haveBy Part 2, we conclude that . Thus, is a Cauchy sequence. Hence, . Additionally, we get

Part 4. Prove that . It follows from (41) thatAlso, by (42) and condition (ii), we can writeFrom triangle inequality on the norm and (42) and (43), we obtainReplacing with in (36) and using (41) and (44), we haveApplying (41), (42), and (45), we can writeIt follows from (44) thatSince , there is such that and for all . Then, by Lemma 3 (ii) and (47), we getFrom (45) and (46), since as , we have also as . Since is a nonexpansive and continuous mapping, from (47), we conclude that .

Part 5. Show that . Since and , we can getSetting in (49), we haveThis shows that . This finishes the proof.

4. Solve a Minimization Problem

As an application of our theorem, we solve the following constrained convex minimization problem:
$$\min_{x \in C} f(x), \qquad (51)$$
where $f : \mathcal{H} \to \mathbb{R}$ is a convex function. We suppose that $f$ is differentiable such that $\nabla f$ is an ism operator.

It is easy to see that problem (51) is equivalent to the following problem:
$$\min_{x \in \mathcal{H}} f(x) + \delta_C(x), \qquad (52)$$
where $\delta_C$ is the indicator function of $C$. Thus, this problem becomes the problem of finding an element $x^* \in \mathcal{H}$ such that
$$0 \in \nabla f(x^*) + \partial\delta_C(x^*), \qquad (53)$$
where $\partial\delta_C$ is the subdifferential of $\delta_C$. We know that $\partial\delta_C$ is a maximal monotone operator, and $J_{\lambda}^{\partial\delta_C}(x) = P_C(x)$ for all $x \in \mathcal{H}$ and $\lambda > 0$.
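Because the resolvent of the subdifferential of the indicator function is the metric projection $P_C$, the forward-backward iteration for the constrained minimization problem reduces to projected gradient steps. A minimal numpy sketch, assuming $C$ is a box and the toy objective $\tfrac12\|x - b\|^2$ (both illustrative choices, not from the paper):

```python
import numpy as np

def proj_box(v, lo, hi):
    # Resolvent of the subdifferential of the indicator function of
    # C = [lo, hi]^n: for every lam > 0 it equals the projection P_C.
    return np.clip(v, lo, hi)

def projected_gradient(grad_f, x0, lo, hi, lam=0.5, iters=200):
    # Forward-backward iteration for the inclusion
    # 0 in grad f(x) + N_C(x):  x_{n+1} = P_C(x_n - lam * grad_f(x_n)).
    x = x0
    for _ in range(iters):
        x = proj_box(x - lam * grad_f(x), lo, hi)
    return x

# Toy objective 0.5*||x - b||^2 over the box [-1, 1]^3 (assumed data).
b = np.array([2.0, 0.3, -4.0])
x_star = projected_gradient(lambda v: v - b, np.zeros(3), -1.0, 1.0)
# The minimizer is the projection of b onto the box: [1.0, 0.3, -1.0]
```

The same substitution ($J_{\lambda}^{B} = P_C$) is what turns the proposed algorithm into the scheme of Theorem 2.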

For solving problem (51), we state the theorem in the following, which is similar to Theorem 1.

Theorem 2. Let the sequence be bounded below by , where and . Given a parameter such that . Let be the sequence in which is defined by , , , andwhere is ism on a RHS , is a maximally monotone operator, and . If , then the sequence converges strongly to , provided that .

5. Solve a Split Feasibility Problem

In this section, we investigate the application of our proposed method to the split convex feasibility problem (SCFP). Let $T : \mathcal{H}_1 \to \mathcal{H}_2$ be a bounded linear operator, with adjoint $T^*$, defined on the two RHSs $\mathcal{H}_1$ and $\mathcal{H}_2$. Assume that $C \subseteq \mathcal{H}_1$ and $Q \subseteq \mathcal{H}_2$ are nonempty CCSs. The SCFP [54] takes the following shape: find $x^* \in C$ such that $T x^* \in Q$. (55)

In a HS, the SCFP was initiated by Censor and Elfving [54], who used a multidistance approach to derive an adaptive method for resolving it. Many of the problems that emerge from state retrieval and the restoration of medical images can be formulated as an SCFP [55, 56]. The SCFP is also used in a variety of disciplines, such as dynamic emission tomographic image reconstruction, image restoration, and radiation therapy treatment planning [57–59]. Let us consider
$$A = \nabla f \quad \text{with} \quad f(x) = \tfrac{1}{2}\|(I - P_Q)Tx\|^2, \qquad B = \partial\delta_C,$$
for the metric projection $P_Q$ onto $Q$, the gradient $\nabla f(x) = T^*(I - P_Q)Tx$, and the indicator function $\delta_C$ of $C$. Due to the above construction, problem (55) has an inclusion format as described in (1). It can be seen that $\nabla f$ is Lipschitz continuous with constant $L = \|T\|^2$, and $\partial\delta_C$ is maximal monotone; see, e.g., [60].
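With this construction, the plain forward-backward iteration for the SCFP is the classical CQ algorithm, $x_{n+1} = P_C(x_n - \gamma T^*(I - P_Q)T x_n)$ with $\gamma \in (0, 2/\|T\|^2)$. A minimal numpy sketch on an assumed toy instance (diagonal $T$ and box sets $C$, $Q$; all data illustrative):

```python
import numpy as np

def cq_algorithm(T, proj_C, proj_Q, x0, gamma, iters=500):
    # Gradient projection for the SCFP: with f(x) = 0.5*||(I - P_Q)Tx||^2,
    # grad f(x) = T^T (Tx - P_Q(Tx)); iterate x <- P_C(x - gamma*grad f(x)).
    x = x0
    for _ in range(iters):
        Tx = T @ x
        x = proj_C(x - gamma * T.T @ (Tx - proj_Q(Tx)))
    return x

T = np.array([[2.0, 0.0], [0.0, 1.0]])     # assumed bounded linear operator
proj_C = lambda v: np.clip(v, -1.0, 1.0)   # C = [-1, 1]^2
proj_Q = lambda v: np.clip(v, 0.5, 1.5)    # Q = [0.5, 1.5]^2
x = cq_algorithm(T, proj_C, proj_Q, np.zeros(2), gamma=0.4)
# x lies in C and T @ x lies in Q, i.e. x solves this SCFP instance
```

Here $\|T\|^2 = 4$, so `gamma=0.4` satisfies $\gamma < 2/\|T\|^2 = 0.5$; the scheme of Theorem 3 adds the inertial, Tseng, relaxation, and shrinking projection ingredients on top of this basic iteration.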

Let $C$ be a nonempty CCS of a RHS $\mathcal{H}$; the normal cone of $C$ at $x \in C$ is defined by
$$N_C(x) = \{u \in \mathcal{H} : \langle u, y - x\rangle \le 0 \ \text{for all } y \in C\}.$$

Suppose $g : \mathcal{H} \to (-\infty, +\infty]$ is a proper, lower semicontinuous, and convex function. For each $x \in \mathcal{H}$, the subdifferential of $g$ is given by
$$\partial g(x) = \{u \in \mathcal{H} : g(y) \ge g(x) + \langle u, y - x\rangle \ \text{for all } y \in \mathcal{H}\}.$$

For any nonempty CCS $C$ of $\mathcal{H}$, the indicator function of $C$ is defined by
$$\delta_C(x) = \begin{cases} 0, & x \in C,\\ +\infty, & x \notin C. \end{cases}$$

It is obvious that the indicator function is proper, convex, and lower semicontinuous on $\mathcal{H}$. The subdifferential $\partial\delta_C$ is a maximal monotone operator, and $\partial\delta_C(x) = N_C(x)$ for $x \in C$.

For each $\lambda > 0$, we now define the resolvent of the subdifferential of the indicator function in the following manner: $J_{\lambda}^{\partial\delta_C}(x) = (I + \lambda\,\partial\delta_C)^{-1}(x)$ for each $x \in \mathcal{H}$.

Hence, we can observe that $J_{\lambda}^{\partial\delta_C} = P_C$, the metric projection onto $C$.

Now, on the basis of the above, Algorithm 1 may be reduced to the following scheme.

Theorem 3. Let $\{x_n\}$ be a sequence generated by the following scheme, where $A x = T^*(I - P_Q)T x$: choose $x_0, x_1 \in \mathcal{H}_1$, $C_1 = C$, $\lambda_1 > 0$, $\mu \in (0, 1)$, and $\{\theta_n\} \subset [0, \theta] \subset [0, 1)$.St. (i): compute $w_n$ in the following way: $w_n = x_n + \theta_n(x_n - x_{n-1})$.St. (ii): calculate $y_n = P_C(w_n - \lambda_n A w_n)$.If $y_n = w_n$, stop, and $w_n$ is a solution of problem (55); otherwise, continue to St. (iii).St. (iii): calculate $z_n = y_n - \lambda_n(A y_n - A w_n)$,where $\{\lambda_n\}$ is the stepsize sequence revised in the following way: $\lambda_{n+1} = \min\{\lambda_n, \mu\|w_n - y_n\|/\|A w_n - A y_n\|\}$ if $A w_n \neq A y_n$, and $\lambda_{n+1} = \lambda_n$ otherwise.St. (iv): calculate $t_n = (1 - \rho_n)w_n + \rho_n z_n$,where $\{\rho_n\} \subset (0, 1]$.St. (v): compute $C_{n+1} = \{u \in C_n : \|t_n - u\| \le \|w_n - u\|\}$ and $x_{n+1} = P_{C_{n+1}} x_1$.Put $n := n + 1$, and return to St. (i). If the solution set of (55) is nonempty, then the sequence $\{x_n\}$ converges strongly to an element of it.

6. Numerical Discussion

This part is devoted to presenting a numerical solution to an SCFP in an infinite-dimensional HS, which is a special inclusion problem as explained in Section 5. The problem setting is taken from [61]. We compare Algorithm 1 (Alg1) of [45] with our proposed Algorithm 1 (Alg2).

Example 1. Let and be two HSs with the inner product and the induced norm defined by . Next, consider the feasible set as , and is . Consider the mapping such that . Then, , and . So, we shall solve the following problem: . We can also observe that, since , the above problem is actually a CFP of the form . Figures 1–9 and Tables 1 and 2 show the numerical results obtained by assuming .

Remark 1. It is well known that the success of any iterative method depends on two main factors. First, the number of iterations: when the number of iterations is small, the method saves effort. Second, the time factor: a method that needs less implementation time is better than a counterpart that needs more time. From the figures and tables, we observe that our algorithm needs fewer iterations and less time than Algorithm 1 of [45]. This illustrates that our method succeeds in accelerating Algorithm 1 of [45] and in solving problem (55). The performance of our algorithm is also good because it saves time and effort when studying the convergence rate.

Data Availability

Data sharing is not applicable to this article as no datasets are generated or analyzed during the current study.

Conflicts of Interest

The authors declare that they have no conflicts of interest concerning the publication of this article.

Authors’ Contributions

All authors contributed equally and significantly to writing this article.

Acknowledgments

The authors are grateful to the Spanish Government and the European Commission for Grants IT1207-19 and RTI2018-094336-BI00 (MCIU/AEI/FEDER, UE). This work was supported in part by the Basque Government under Grant IT1207-19.