Abstract

This paper introduces an iterative algorithm, based on an inertial technique, that uses the minimum number of projections onto a nonempty, closed, and convex set. We show that the algorithm generates a sequence that converges strongly to the common solution of a variational inequality involving an inverse strongly monotone mapping and a fixed point problem for a countable family of nonexpansive mappings in the setting of a real Hilbert space. Numerical experiments are presented to discuss the advantages of using our algorithm over earlier established algorithms. Moreover, we solve a real-life signal recovery problem via a minimization problem to demonstrate our algorithm's practicality.

1. Introduction

The theory of variational inequalities has established itself as an important field of study, covering a broad class of results, and has emerged as an essential tool for solving various problems. This theory provides a natural framework for recent numerical techniques developed to solve optimization problems.

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. The classical variational inequality problem for a mapping $A : C \to H$ is to find $x^* \in C$ such that
$$\langle Ax^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in C. \tag{1}$$
The solution set of the variational inequality problem (1) is denoted by $VI(C, A)$.

One of the simplest methods to solve (1) is the projection method, which extends the projected gradient method for optimization problems. The method works under the assumption that $A$ is $L$-Lipschitz continuous and strongly monotone. However, it has been pointed out that the projection method may diverge if the strong monotonicity assumption is replaced by mere monotonicity.

To overcome this, Korpelevich [1] proposed an extragradient method based on computing the projection onto the feasible set twice in each iteration. Since projections onto $C$ amount to solving a minimum distance problem, this may affect the efficiency and applicability of the algorithm if the mapping $A$ or the feasible set $C$ has a complicated structure. So a natural question arises: can we create fast iterative algorithms that use the minimum number of projections onto $C$ for solving variational inequality problems? To answer this, Tseng [2] introduced an extragradient algorithm for solving variational inequalities involving a monotone and $L$-Lipschitz continuous mapping; only one projection onto $C$ is calculated in each iteration, followed by an explicit gradient-type correction step. In 2011, Censor et al. [3] modified the extragradient method of [1] by introducing the subgradient extragradient method, in which only one projection is calculated onto $C$ and the other projection is replaced by a specific subgradient projection which can be calculated easily. In 2022, Anh [4] presented a novel convergence result for addressing the variational inequality problem characterized by strong monotonicity over the fixed point sets of nonexpansive mappings. Very recently, Anh [5] introduced an iterative methodology for solving the variational inequality problem by employing a recently devised proximal operator that converges to a unique solution.
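
To fix ideas, the following is a minimal Python sketch of a Tseng-type iteration on an invented toy problem: one projection onto $C$ per step, followed by an explicit correction that needs no second projection. The box constraint, the affine operator, and the step size are assumptions made purely for illustration.

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box C = [lo, hi]^n, a simple closed convex set.
    return np.clip(x, lo, hi)

def tseng_method(A, x0, lam, lo, hi, iters=1000):
    """Tseng-type single-projection iteration: y = P_C(x - lam*A(x)),
    then the explicit correction x = y - lam*(A(y) - A(x))."""
    x = x0.copy()
    for _ in range(iters):
        y = project_box(x - lam * A(x), lo, hi)   # the only projection onto C
        x = y - lam * (A(y) - A(x))               # correction step, projection-free
    return x

# Toy monotone and Lipschitz operator A(x) = Mx + q (M positive semidefinite).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
sol = tseng_method(lambda x: M @ x + q, np.zeros(2), lam=0.2, lo=0.0, hi=1.0)
print(sol)   # approximately (1/3, 1/3), the solution of this toy VI
```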

One of the interesting problems in nonlinear analysis is finding common elements of the set of solutions of variational inequality problems and of fixed point problems. In 2003, Takahashi and Toyoda [6] introduced an iterative method that converges weakly to a common element for a variational inequality involving an $\alpha$-inverse strongly monotone mapping and a fixed point problem involving a nonexpansive mapping. Iiduka and Takahashi [7] obtained strong convergence by adding a projection of the iterative sequence onto $C$, improving the iterative method of [6]. Many methods in the literature draw inspiration from [6] to obtain results on finding the common element, such as Iiduka and Takahashi's [8] strong convergence using a Halpern-type iterative scheme, Ceng and Yao's [9] strong convergence result combining the extragradient method with Halpern's method, and Nadezhkina and Takahashi's [10] weak convergence using the extragradient method.

Moudafi [11] introduced the viscosity approximation method for approximating fixed points of a nonexpansive mapping, obtained by a regularization procedure using a suitable convex combination with a contraction. Marino and Xu [12] studied viscosity approximation methods to discuss the optimality condition for minimization problems. Chen et al. [13] incorporated viscosity approximation methods for finding common elements for monotone and nonexpansive mappings. Numerous algorithms use viscosity approximation methods to find the common element of a variational inequality problem and a fixed point problem. Examples include Ceng and Yao's [14] strong convergence result combining the extragradient method and the viscosity approximation method, in which the two sequences generated by the algorithm converge strongly to the common element; a general three-step iterative process by Shang et al. [15], in which two projections onto $C$ are calculated in the first two steps and, in the third step, a third projection onto $C$ is combined using the viscosity approximation method; a generalized viscosity-type extragradient method by Anh et al. [16], which uses a strongly positive linear bounded operator to converge to the common element of a variational inequality problem, a fixed point problem, and an equilibrium problem; and a two-step extragradient-viscosity method by Hieu et al. [17], in which the first step calculates three projections onto $C$ and the second step combines the projections using the viscosity approximation method.

In 2018, Anh and Phuong [18] introduced a robust convergence result for locating the common solution of a system encompassing unrelated variational inequalities and fixed point problems; this setting addresses distinct feasible domains, adding versatility to the proposed solution methodology. Recently, Anh et al. [19] provided a weak convergence result using only one projection onto a closed convex set, combined with a Mann-type iteration under some specific assumptions. In 2019, Thong and Hieu [20] introduced an extragradient viscosity algorithm with a step-size rule (VSEGM) which does not require the Lipschitz constant of the mapping. In 2022, Tan et al. [21] proposed a viscosity-type inertial subgradient extragradient algorithm (iVSEGM), which combines VSEGM [20] with an inertial term. The use of inertial techniques helps to speed up convergence. The crucial aspect of algorithms based on an inertial term is that the next iterate depends on a combination of the previous two iterates. This improves the performance of the iterative algorithm to a great extent. For more literature on inertial techniques, we refer to [22] and the references cited therein.

Motivated by the research in this direction, we establish a new viscosity-type extragradient algorithm that uses a minimum number of projections onto $C$ and converges strongly to the common solution of a variational inequality problem involving an $\alpha$-inverse strongly monotone mapping and a fixed point problem for a countable family of nonexpansive mappings in the setting of a real Hilbert space. This new iterative algorithm is based on an inertial term combined with a viscosity-type approximation method and a step-size selection rule enabling the algorithm to choose the step-size value faster. The choice of step size plays an important role in determining the efficiency of the algorithm. We prove that, under suitable assumptions, the sequence generated by our algorithm converges strongly to the common element.

Some highlights of this paper are as follows:
(i) At each step, a single projection is calculated onto a closed and convex set.
(ii) We use a strongly positive linear bounded operator in our algorithm with a relaxed condition; the benefit of using this operator can be seen in our numerical experiments.
(iii) We provide a real-life application of our algorithm involving the recovery of signals.

We organize the rest of the paper as follows: Section 2 gives some preliminary results and definitions required to understand and prove the main results. Section 3 presents the main iterative algorithm and proves its strong convergence. In Sections 4 and 5, we provide numerical examples and applications, respectively, to support our results.

2. Preliminaries

In this section, we present several basic definitions and results that will be useful for proving the main result.

Suppose that $C$ is a closed, convex subset of a real Hilbert space $H$. We denote the weak convergence and the strong convergence of a sequence $\{x_n\}$ to $x$ by $x_n \rightharpoonup x$ and $x_n \to x$, respectively.

For each point $x \in H$, there exists a unique point $P_C x \in C$ such that $\|x - P_C x\| \le \|x - y\|$ for all $y \in C$. $P_C$ is called the metric projection (see [23]) of $H$ onto $C$ and, for all $x \in H$, it satisfies
$$\langle x - P_C x,\; y - P_C x \rangle \le 0 \quad \text{for all } y \in C. \tag{2}$$

From (2), we can write, for all $x \in H$ and $y \in C$,
$$\|x - P_C x\|^2 + \|y - P_C x\|^2 \le \|x - y\|^2.$$

Also, for all $x, y \in H$, we have
$$\|P_C x - P_C y\| \le \|x - y\|,$$
that is, $P_C$ is nonexpansive.
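
These projection facts are easy to check numerically. Below is a small Python sketch, with the closed ball and the random samples chosen only for the demonstration, verifying the variational characterization (2) of the metric projection.

```python
import numpy as np

def project_ball(x, center, r):
    # Metric projection onto the closed ball C = {y : ||y - center|| <= r}.
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + (r / n) * d

rng = np.random.default_rng(0)
x = rng.normal(size=5) * 3.0
p = project_ball(x, np.zeros(5), 1.0)

# Characterization (2): <x - P_C x, y - P_C x> <= 0 for every y in C.
for _ in range(1000):
    y = project_ball(rng.normal(size=5), np.zeros(5), 1.0)   # a point of C
    assert np.dot(x - p, y - p) <= 1e-10
print("variational characterization verified on random samples")
```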

Definition 1. Let $T$ be a self-mapping on $C$. Then, $T$ is said to be
(i) $L$-Lipschitz continuous with $L > 0$ if $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in C$;
(ii) a contraction if there exists a constant $\rho \in [0, 1)$ such that $\|Tx - Ty\| \le \rho\|x - y\|$ for all $x, y \in C$;
(iii) nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$;
(iv) monotone if $\langle Tx - Ty, x - y \rangle \ge 0$ for all $x, y \in C$;
(v) $\alpha$-inverse strongly monotone ($\alpha$-ism) with $\alpha > 0$ if $\langle Tx - Ty, x - y \rangle \ge \alpha\|Tx - Ty\|^2$ for all $x, y \in C$;
(vi) a strongly positive linear bounded operator if there exists a constant $\bar{\gamma} > 0$ such that, for all $x \in H$, $\langle Tx, x \rangle \ge \bar{\gamma}\|x\|^2$.

A set-valued monotone mapping $S$ from $H$ to $2^H$ is considered to be maximal if its graph, $\mathrm{Graph}(S)$, is not properly contained in the graph of any other monotone mapping. Let $A$ be an $\alpha$-ism mapping of $C$ into $H$, and let $N_C(v)$ be the normal cone to $C$ at $v \in C$, which is defined as $N_C(v) = \{w \in H : \langle v - u, w \rangle \ge 0 \text{ for all } u \in C\}$. Now, define
$$Sv = \begin{cases} Av + N_C(v), & v \in C,\\ \emptyset, & v \notin C. \end{cases}$$

Then, the map $S$ is maximal monotone and $0 \in Sv$ if and only if $v \in VI(C, A)$.

Lemma 2. The following results hold in $H$:
(1) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$ for all $x, y \in H$;
(2) $\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$ for all $x, y \in H$ and $\lambda \in [0, 1]$.

Lemma 3 (see [24]). Let $T : C \to C$ be a nonexpansive mapping such that $Fix(T) \ne \emptyset$, where $C$ is a closed convex subset of a real Hilbert space $H$. If $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup z$ and $\|x_n - Tx_n\| \to 0$, then $z \in Fix(T)$.

Lemma 4 (see [12]). Assume that $B$ is a strongly positive linear bounded operator with coefficient $\bar{\gamma} > 0$ on a Hilbert space $H$ and $0 < \rho \le \|B\|^{-1}$. Then $\|I - \rho B\| \le 1 - \rho\bar{\gamma}$.

Lemma 5 (see [25]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that
(1) $\sum_{n=1}^{\infty}\gamma_n = \infty$;
(2) $\limsup_{n \to \infty}\delta_n \le 0$ or $\sum_{n=1}^{\infty}|\gamma_n\delta_n| < \infty$.
Then, $\lim_{n \to \infty}a_n = 0$.
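
As a quick numerical sanity check of Lemma 5 (with the sequences $\gamma_n = \delta_n = 1/(n+1)$ chosen only for illustration, so that $\sum_n \gamma_n = \infty$ and $\limsup_n \delta_n \le 0$), the equality version of the recursion indeed drives $a_n$ to zero:

```python
# Iterate the equality case of Lemma 5: a_{n+1} = (1 - g_n) a_n + g_n d_n,
# with g_n = d_n = 1/(n+1), so sum g_n = infinity and limsup d_n = 0 <= 0.
a = 1.0
for n in range(1, 10**6):
    g = d = 1.0 / (n + 1)
    a = (1 - g) * a + g * d
print(a)   # close to 0, as Lemma 5 predicts
```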

Before stating the next lemma, we discuss the AKTT-condition, which is used to deal with families of mappings. Let $\{T_n\}$ be a family of mappings on $C$ such that $\bigcap_{n=1}^{\infty}Fix(T_n) \ne \emptyset$. Then, $\{T_n\}$ satisfies the AKTT-condition if, for each bounded subset $D$ of $C$, we have
$$\sum_{n=1}^{\infty}\sup\{\|T_{n+1}z - T_nz\| : z \in D\} < \infty.$$

To understand the AKTT-condition through an example, we consider . It can easily be seen that is a family of nonexpansive mappings and . Then, for each bounded subset of , we see that

Thus, we see that the family satisfies the AKTT-condition.
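
The specific family used in the example above is elided in this version of the text. As a stand-in, the following sketch checks the AKTT-condition numerically for the hypothetical family $T_nx = (1/2 + 2^{-(n+2)})x$ on $\mathbb{R}$; each $T_n$ is a contraction, hence nonexpansive, with common fixed point 0.

```python
import numpy as np

# Hypothetical family T_n(x) = (1/2 + 2^{-(n+2)}) x with common fixed point 0.
# On the bounded set D = [-M, M]:
#   sup_{|x| <= M} |T_{n+1} x - T_n x| = 2^{-(n+3)} M,
# so the AKTT series sums to M/8 < infinity.
M = 10.0
partial_sums = np.cumsum([2.0 ** (-(n + 3)) * M for n in range(1, 60)])
print(partial_sums[-1])   # approaches M/8 = 1.25, so the series converges
```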

Lemma 6 (see [26]). Let $C$ be a nonempty closed subset of a Banach space and let $\{T_n\}$ be a family of self-mappings on $C$ which satisfies the AKTT-condition. Then, for each $x \in C$, $\{T_nx\}$ converges strongly to a point in $C$. Moreover, define the mapping $T : C \to C$ by
$$Tx = \lim_{n \to \infty}T_nx \quad \text{for all } x \in C.$$

Then, for every bounded subset $D$ of $C$,
$$\lim_{n \to \infty}\sup\{\|Tz - T_nz\| : z \in D\} = 0.$$

3. Main Results

This section presents our algorithm for finding a common element of the set of solutions of the variational inequality and the set of common fixed points of a countable family of nonexpansive mappings.

Throughout this section, $C$ denotes a closed and convex subset of a real Hilbert space $H$. We consider the following assumptions:
(A1) $\{T_n\}$ is a countable family of nonexpansive mappings on $C$;
(A2) $A$ is an $\alpha$-ism mapping;
(A3) the common solution set $\Omega = VI(C, A) \cap \bigcap_{n=1}^{\infty}Fix(T_n)$ is nonempty;
(A4) $B$ is a strongly positive linear bounded operator with coefficient $\bar{\gamma} > 0$;
(A5) $f$ is a $\rho$-contraction with $\rho \in [0, 1)$;
(A6) $\{\alpha_n\}$ is a positive sequence such that $\lim_{n \to \infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$.

Remark 7. The step-size sequence $\{\lambda_n\}$ generated by Algorithm 1 is non-increasing, and its limit exists (see [21]).

Remark 8. The iterative algorithm presented by Anh et al. [19] yields a weak convergence result by employing the Mann-type method and computing a single projection per iteration onto a closed convex set. In contrast, our Algorithm 1 delivers a strong convergence result through a viscosity-type approximation, featuring a more relaxed condition on the strongly positive linear bounded operator.

Initialization: Take , and . Let , then calculate as:
Step 1: Set , where and calculate
Step 2: Compute and update
Set and go to Step 1.
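
The update formulas in the algorithm box above are elided in this version of the text. Purely as an illustrative template of the ingredients described in the paper, namely an inertial extrapolation, a single projection onto $C$, and a viscosity-type combination with a strongly positive operator, consider the following Python outline; every update rule, parameter value, and piece of toy data here is a placeholder sketch, not the published Algorithm 1.

```python
import numpy as np

def inertial_viscosity_template(A, proj_C, f, B, T, x0, x1,
                                theta=0.3, lam=0.2, iters=500):
    """Illustrative template only: inertial step, one projection onto C,
    then a viscosity-type combination with the operator B."""
    x_prev, x = x0.copy(), x1.copy()
    for n in range(1, iters + 1):
        alpha = 1.0 / (n + 1)                 # alpha_n -> 0, sum alpha_n = infinity
        w = x + theta * (x - x_prev)          # inertial extrapolation
        y = proj_C(w - lam * A(w))            # the single projection onto C
        x_prev = x
        # viscosity-type combination with contraction f and operator B
        x = alpha * f(x) + (np.eye(len(x)) - alpha * B) @ T(y)
    return x

# Toy usage with invented data: A affine and monotone, C a box, T the identity.
A = lambda x: np.array([[2.0, 1.0], [1.0, 2.0]]) @ x - 1.0
proj = lambda x: np.clip(x, 0.0, 1.0)
f = lambda x: 0.5 * x                         # a contraction
B = np.eye(2)                                 # a strongly positive operator
print(inertial_viscosity_template(A, proj, f, B, lambda y: y,
                                  np.zeros(2), np.ones(2)))
```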

Remark 9. In [16], Anh et al. achieve strong convergence through a generalized viscosity-type approximation. Their algorithm aims to identify a common element of three distinct problems, with the norm of the strongly positive linear bounded operator constrained to be 1. In our method, we employ a generalized viscosity-type approximation with a more flexible constraint on the norm of the strongly positive linear bounded operator $B$. This adaptation is applied in the pursuit of a common solution to two specific problems.

Now, we state and prove our main result.

Theorem 10. Suppose that assumptions (A1)-(A6) hold and $\{T_n\}$ satisfies the AKTT-condition. Then, the sequence generated by Algorithm 1 converges strongly to a point of the common solution set $\Omega$.

Proof. To begin with, we prove that the sequence $\{x_n\}$ is bounded. As $A$ is an $\alpha$-ism mapping, for all $x, y \in C$ we have
As the sequence $\{\lambda_n\}$ is non-increasing, we get
Since , we get
So, $I - \lambda_n A$ is a nonexpansive mapping. Using (A3), take $p \in \Omega$. Consider
Using (A6) and the assumptions on the parameter sequences, we get . Thus, there exists a constant $M > 0$ such that , for all $n$. Hence, we get
Using Lemma 4, (A5), and (21), we have
It implies that the sequence $\{x_n\}$ is bounded, and consequently the other sequences generated by the algorithm are also bounded.
Now, we show that and as $n \to \infty$. Since and are nonexpansive mappings, we have
Using (21) and (23), we get
Using the triangle inequality, we have
Choosing and using (25) in (24), we get
Take and . So, we get
From Remark 7, we see that is a telescoping series, which is convergent. Thus, we have . Also, satisfies the AKTT-condition and , so from Lemma 5 we get
Further, we consider
Using (A2) in the above inequality, we have
From (21), we have , for some . Therefore, we get
Rearranging the terms, we get
Using (28) and (A6), we get
Now, from the properties of the projection mapping, we have
Using (34), we have
Thus, we have
Since and as , we get
Further, we consider
As and since and are bounded, we get
This means
Moreover,
this implies
Further, it is easy to see that as $n \to \infty$.
Next, we show that $\{x_n\}$ converges to the common element. We observe that is a contraction, where . Since and , we get
Thus, from Banach's contraction principle, we see that has a unique fixed point, say , such that . Thus, we have
Let be a subsequence of such that
Since is a bounded sequence, a subsequence of converges weakly to . Without loss of generality, we may assume that . Since , we obtain . By Lemmas 3 and 6 and the fact that , , we have . Let
where is the normal cone to at , that is, . Then is maximal monotone. From the properties of the projection mapping, we have
which implies
Let $(u, w) \in \mathrm{Graph}(S)$. Since and , we get
This implies as . Since is maximal monotone, we have and hence . So, we obtain . It follows that
Finally, we show that
As , by using Lemma 5 along with the assumption , we have . This completes the proof.

Remark 11.
(1) Many researchers have calculated projections onto $C$ followed by projections onto a half-space. In our Algorithm 1, we calculate only one projection per iteration, together with a self-adaptive step-size rule and an inertial extrapolation step.
(2) The inertial extrapolation step introduced in our algorithm combines the previous two iterates, with the inertia parameter allowed to attain any value greater than zero. This speeds up the convergence of the sequence to the common element.

4. Numerical Examples

In this section, we discuss some numerical examples to validate our theorem. The performance of our Algorithm 1 is compared with other well-established algorithms, namely iVSEGM [21], MSEGM [27], and VSEGM [20]. We denote the error sequence by and study its behavior; its convergence to zero implies that the iterates converge. All the programs are carried out in MATLAB 2018a on an Intel(R) Core(TM) i3-10110U CPU @ 2.10 GHz computer with 8.00 GB of RAM.

We consider in each of the examples discussed. For a fair comparison with the earlier established algorithms, we consider , , and . For MSEGM, we consider and .

Example 1. Consider a nonlinear operator defined by
and the closed, convex subset of . Next, we prove that is inverse strongly monotone. We have
This implies
Finally, we get
Therefore, is an -ism mapping. Assume that , and let be the family of self-mappings on given by
Observe that the mapping is nonexpansive for each and that satisfies the AKTT-condition. Assume the strongly positive linear bounded operator on to be , with constants and equal to . The initial values considered are and . Since every assumption of Theorem 10 is satisfied, the sequence generated by Algorithm 1 converges to . Moreover, we see that the error sequence converges to 0 much faster and more efficiently than for the well-known schemes in the literature (see Figure 1).

Example 2. Consider a problem in an infinite-dimensional real Hilbert space $H$ equipped with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. We define the feasible set as the unit ball $C = \{x \in H : \|x\| \le 1\}$. Now, consider the operator
where and . It can be easily shown that is an -ism mapping; the proof is along similar lines as in Example 1. The projection onto $C$ is explicitly defined as
$$P_C(x) = \begin{cases} x, & \|x\| \le 1,\\ x/\|x\|, & \|x\| > 1. \end{cases}$$
Let be the family of self-mappings on given by
Assume that ; the mapping is nonexpansive for each and satisfies the AKTT-condition. Assume the strongly positive linear bounded operator on to be the identity operator, with constants and equal to . The initial values considered are and . Since every assumption of Theorem 10 is satisfied, the sequence generated by Algorithm 1 converges to . Moreover, the error sequence converges to 0 much faster and more efficiently than for the well-known schemes in the literature (see Figure 2).
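
Whatever the ambient space, the explicit unit-ball projection above is straightforward to implement. The sketch below works with truncated sequences represented as finite vectors, an assumption made purely for the demonstration.

```python
import numpy as np

def project_unit_ball(x):
    # Explicit projection onto C = {x : ||x|| <= 1}:
    # the identity inside the ball, radial rescaling outside.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([0.3, -0.4, 1.2, 0.0, 2.0])   # truncated sequence as a vector
p = project_unit_ball(x)
print(np.linalg.norm(p))                   # 1.0: the projection lies on the boundary
```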

Example 3. Consider an operator defined by
and the closed, convex subset of . Now, we prove that is an ism mapping.
Case 1. When , we have
Case 2. When , we have
Case 3. When and , we have
Thus, we see that is 1-ism. Assume that , and let be the family of self-mappings on given by
Observe that the mapping is nonexpansive for each and that satisfies the AKTT-condition. Assume the strongly positive linear bounded operator on to be , with constants and equal to . The initial values considered are and . Since every assumption of Theorem 10 is satisfied, the sequence generated by Algorithm 1 converges to . Moreover, the error sequence converges to 0 much faster and more efficiently than for the well-known schemes in the literature (see Figure 3).

Remark 12.
(1) We can see from Tables 1-3 that our algorithm outperforms earlier established algorithms in terms of both speed and accuracy. Moreover, it is easy to implement.
(2) Our algorithm performs well in both finite- and infinite-dimensional Hilbert spaces.

5. Applications

In this section, we give some applications that can be addressed through our main result.

Let $H$ be a real Hilbert space with $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ being its inner product and norm, respectively. Let $C$ be a closed, convex subset of $H$, and let $A : C \to H$ be a nonlinear mapping.

5.1. Application to Convex Minimization Problems

Let $g : C \to \mathbb{R}$ be a convex mapping. We consider the following minimization problem:
$$\min_{x \in C} g(x). \tag{64}$$

Suppose that the mapping $g$ is Fréchet differentiable. Then the optimization problem (64) has a solution $x^* \in C$ if and only if $x^*$ satisfies the variational inequality
$$\langle \nabla g(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in C,$$
that is, $x^* \in VI(C, \nabla g)$.

Suppose we take for each and in our algorithm. Then, we have the following theorem.

Theorem 13. Suppose that $g$ is a convex mapping such that its gradient $\nabla g$ is an $L$-Lipschitz continuous mapping, and $f$ is a contraction with constant $\rho \in [0, 1)$. Also, suppose that $B$ is a strongly positive linear bounded operator with coefficient $\bar{\gamma}$ such that and . Assume that is a positive sequence such that , where satisfies and . If , then for any , Algorithm 1 converges to .

Proof. Put $A = \nabla g$ in our main algorithm. Since $g$ is convex and $\nabla g$ is $L$-Lipschitz continuous, the Baillon-Haddad theorem implies that $\nabla g$ is a $(1/L)$-inverse strongly monotone mapping. Observe that is nonexpansive for each . Therefore, by Theorem 10, we obtain . This means is a solution of the variational inequality problem. Hence the result.
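
To make the reduction concrete, here is a minimal sketch (not the paper's Algorithm 1) that solves a constrained least-squares problem by iterating the projected-gradient map $x \mapsto P_C(x - \lambda\nabla g(x))$, whose fixed points are exactly the solutions of $VI(C, \nabla g)$. The problem data are invented for illustration.

```python
import numpy as np

# Constrained convex minimization: min_{x in C} g(x) = 0.5*||Qx - c||^2 with
# C the nonnegative orthant. grad g(x) = Q^T(Qx - c) is L-Lipschitz with
# L = ||Q^T Q||, hence (1/L)-inverse strongly monotone (Baillon-Haddad).
Q = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
c = np.array([1.0, -2.0, 0.5])
grad_g = lambda x: Q.T @ (Q @ x - c)
proj_C = lambda x: np.maximum(x, 0.0)          # projection onto C = R^n_+

L = np.linalg.norm(Q.T @ Q, 2)                 # spectral norm
x = np.zeros(2)
for _ in range(2000):
    x = proj_C(x - (1.0 / L) * grad_g(x))      # fixed points solve VI(C, grad g)
print(x)
```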

5.2. Application to Signal Processing Problems

Since communications in the real world can experience interference during transmission, the signal recovery problem deals with recovering the original clean signal from a noisy one. The model for signal processing problems is described as
$$y = Tx + \varepsilon, \tag{66}$$
where $x$ is the original signal with few nonzero elements, $y$ is the observed noisy signal, $T$ is a bounded linear operator, and $\varepsilon$ is the noise. This model works under the assumption that the signal is sparse, meaning that the number of nonzero elements in the signal is much smaller than its dimension. Model (66) can be solved using the Least Absolute Shrinkage and Selection Operator (LASSO) model, which is expressed as
$$\min_{x} \frac{1}{2}\|Tx - y\|_2^2 \quad \text{subject to} \quad \|x\|_1 \le t. \tag{67}$$

Here, $\|\cdot\|_2$ and $\|\cdot\|_1$ represent the 2-norm and the 1-norm, respectively. Problem (67) is further equivalent to a variational inequality problem of finding $x^*$ such that
for all . The gradient of the function $g(x) = \frac{1}{2}\|Tx - y\|_2^2$ is known to be $\nabla g(x) = T^{\top}(Tx - y)$. We set , and for each in our proposed algorithm. Notice that $\nabla g$ is monotone and $\|T\|^2$-Lipschitz continuous. Since $g$ is a convex function, $\nabla g$ is $(1/\|T\|^2)$-ism. To verify numerically, we set , , , , , = and . Assume that the original signal contains randomly generated spikes, which are very few compared to the dimension of the signal. The matrix $T$ and the noise are generated by randn in MATLAB, and the observation $y$ is then obtained using (66). We apply our algorithm with , initial points , and randomly generated spikes . Thus, applying Algorithm 1 with a suitable choice of parameters, we obtain a sequence that converges to the minimizer of the function, thereby recovering the original signal from the noisy observation (see Figures 4 and 5).
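
As an illustration of the recovery pipeline (not the paper's exact experiment), the following sketch generates a sparse signal, a random sensing matrix, and a noisy observation, then runs projected gradient descent onto an $\ell_1$-ball, using exactly one projection per iteration. The dimensions, the noise level, the ball radius, and the projection routine are all assumptions made for the demo.

```python
import numpy as np

def project_l1_ball(v, t=1.0):
    # Euclidean projection onto {x : ||x||_1 <= t} (Duchi et al., 2008).
    if np.abs(v).sum() <= t:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - t))[0][-1]
    theta = (css[rho] - t) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(1)
m, n, k = 60, 128, 5                      # assumed demo dimensions
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse spikes
T = rng.normal(size=(m, n))               # random sensing matrix
y = T @ x_true + 0.01 * rng.normal(size=m)                     # y = Tx + noise

L = np.linalg.norm(T.T @ T, 2)            # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    grad = T.T @ (T @ x - y)              # gradient of 0.5*||Tx - y||^2
    x = project_l1_ball(x - grad / L, t=np.abs(x_true).sum())  # one projection
print(np.linalg.norm(x - x_true))         # reconstruction error, typically small
```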

6. Conclusion

This paper discussed an iterative algorithm based on an inertial term combined with a viscosity-type approximation method. Numerical computations with the proposed algorithm, in both finite- and infinite-dimensional Hilbert spaces, were presented to show its efficiency. We concluded the discussion by giving applications of the proposed algorithm to the convex minimization problem and the signal processing problem. For future work, we ask the following question: is it possible to modify Algorithm 1 to deal with variational inequality problems involving much weaker assumptions than $\alpha$-inverse strong monotonicity?

Data Availability

No underlying data were collected or produced in this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors’ Contributions

All authors contributed equally.

Acknowledgments

The first author is grateful to CSIR, New Delhi, India, for providing a junior research fellowship (File 09/0677(13166)/2022-EMR-I).