Abstract

In this paper, we study the strong convergence of an algorithm for solving the variational inequality problem, extending a recent paper (Thong and Hieu, Numerical Algorithms 78, 1045-1060 (2018)). We relax and refine some of the conditions of their algorithm and prove the convergence of the algorithm in the presence of computational errors. The result is then illustrated with some numerical examples implemented in MATLAB. Also, we compare our algorithm with some other well-known algorithms.

1. Introduction

Let $H$ be a real Hilbert space with the inner product $\langle \cdot, \cdot \rangle$ and the norm $\|\cdot\|$, and let $C$ be a nonempty, closed, and convex subset of $H$. The variational inequality problem (VI) is to find a point $x^* \in C$ such that
$$\langle A x^*, x - x^* \rangle \ge 0 \quad \text{for all } x \in C, \tag{1}$$
where $A$ is a mapping of $C$ into $H$. The solution set of (1) is denoted by $VI(C, A)$. Variational inequalities arise in the study of network equilibria, optimization problems, saddle point problems, Nash equilibrium problems in noncooperative games, etc.; see, for example, [1-12] and the references therein.

A well-known algorithm for solving problem (VI) in Euclidean space was proposed by Korpelevich [13] and is known as the extragradient method. Let $x_0$ be an arbitrary element of $C$ and consider
$$y_n = P_C(x_n - \lambda A x_n), \qquad x_{n+1} = P_C(x_n - \lambda A y_n), \tag{2}$$
where $\lambda$ is a number in $(0, 1/L)$, $P_C$ is the Euclidean least-distance projection onto $C$, and $A$ is a monotone operator. The next algorithm (3) was introduced by Tseng [14]; applying the modified forward-backward (F-B) method, it is a good alternative to the extragradient method (TEGM):
$$x_{n+1} = y_n - \lambda (A y_n - A x_n), \tag{3}$$
where $y_n = P_C(x_n - \lambda A x_n)$ and $\lambda \in (0, 1/L)$ if $A$ is $L$-Lipschitz continuous. The following algorithm (4), a viscosity-type subgradient extragradient method (VSEGM), was proposed by Shehu and Iyiola [15]:
$$y_n = P_C(x_n - \lambda_n A x_n), \qquad T_n = \{x \in H : \langle x_n - \lambda_n A x_n - y_n, x - y_n \rangle \le 0\},$$
$$z_n = P_{T_n}(x_n - \lambda_n A y_n), \qquad x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) z_n, \tag{4}$$
where the operator $A$ is monotone and Lipschitz continuous, $f$ is a strict contraction mapping, $\alpha_n \in (0, 1)$, and $\lambda_n = \gamma \ell^{m_n}$, where $m_n$ is the smallest nonnegative integer such that
$$\lambda_n \|A x_n - A y_n\| \le \mu \|x_n - y_n\|, \tag{5}$$
where $\gamma > 0$ and $\ell, \mu \in (0, 1)$ for all $n$. Recently, the sequence produced by the following algorithm, based on Tseng's method, was introduced by Thong and Hieu [3] (THEGM):
$$y_n = P_C(x_n - \lambda_n A x_n), \qquad z_n = y_n - \lambda_n (A y_n - A x_n), \qquad x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) z_n, \tag{6}$$
where the operator $A$ is monotone and Lipschitz continuous, $\alpha_n \in (0, 1)$, and $\lambda_n$ is chosen to be the largest $\lambda \in \{\gamma, \gamma \ell, \gamma \ell^2, \ldots\}$ satisfying
$$\lambda \|A x_n - A y_n\| \le \mu \|x_n - y_n\|. \tag{7}$$
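To make the two classical iterations above concrete, the following is a minimal numerical sketch of the extragradient method (2) and Tseng's method (3) in Python. The operator $A(x) = Mx + q$ with skew-symmetric $M$, the set $C = \mathbb{R}^2_+$, and the step size are illustrative assumptions, not data taken from the papers cited above.

```python
import numpy as np

# Toy monotone VI: A(x) = Mx + q with M skew-symmetric (so A is monotone and
# L-Lipschitz with L = ||M||_2), C the nonnegative orthant, P_C a coordinatewise max.
# This illustrative instance has the unique solution x* = (1, 1), where A(x*) = 0.
M = np.array([[0.0, -1.0], [1.0, 0.0]])
q = np.array([1.0, -1.0])
A = lambda x: M @ x + q
P_C = lambda x: np.maximum(x, 0.0)

L = np.linalg.norm(M, 2)      # Lipschitz constant of A
lam = 0.5 / L                 # lambda in (0, 1/L)

def extragradient(x, n_iters=200):
    """Korpelevich's method (2): two projections per iteration."""
    for _ in range(n_iters):
        y = P_C(x - lam * A(x))
        x = P_C(x - lam * A(y))
    return x

def tseng(x, n_iters=200):
    """Tseng's method (3): one projection per iteration."""
    for _ in range(n_iters):
        y = P_C(x - lam * A(x))
        x = y - lam * (A(y) - A(x))
    return x

x0 = np.array([2.0, 2.0])
print(extragradient(x0), tseng(x0))   # both approach x* = (1, 1)
```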

In this paper, we extend algorithm (6) by substituting a more general sequence of coefficients for the viscosity coefficients in (6). Moreover, condition (7) is removed by a slight change in these coefficients. In addition, a sequence of computational errors is incorporated into our algorithm. The strong convergence of the proposed algorithm to a point of the solution set of the variational inequality is proved in the presence of computational errors. Finally, some examples are presented that examine the convergence of the proposed algorithm in different situations.

2. Preliminaries

In this section, some basic concepts are presented.

Let $H$ be a real Hilbert space with the inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, and suppose that $C$ is a nonempty closed convex subset of $H$ and $A : C \to H$ is an operator. The operator $A$ is said to be
(i) monotone if $\langle Ax - Ay, x - y \rangle \ge 0$ for all $x, y \in C$;
(ii) $L$-Lipschitz continuous if there exists $L > 0$ such that $\|Ax - Ay\| \le L \|x - y\|$ for all $x, y \in C$.
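The two definitions can be checked numerically on a concrete operator. The sketch below uses an illustrative affine operator $A(x) = Mx + q$ with skew-symmetric $M$, for which $\langle Ax - Ay, x - y \rangle = 0$ (so $A$ is monotone) and the Lipschitz constant is the spectral norm of $M$; the operator itself is an assumption made for the demonstration.

```python
import numpy as np

# Sanity check of (i) monotonicity and (ii) L-Lipschitz continuity for the
# illustrative operator A(x) = Mx + q with skew-symmetric M.
rng = np.random.default_rng(0)
M = np.array([[0.0, -2.0], [2.0, 0.0]])       # skew-symmetric
q = np.array([1.0, -1.0])
A = lambda x: M @ x + q
L = np.linalg.norm(M, 2)                      # spectral norm = Lipschitz constant

for _ in range(10_000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # (i) monotone: <Ax - Ay, x - y> >= 0 (here it is exactly 0 up to rounding)
    assert np.dot(A(x) - A(y), x - y) >= -1e-12
    # (ii) L-Lipschitz: ||Ax - Ay|| <= L ||x - y||
    assert np.linalg.norm(A(x) - A(y)) <= L * np.linalg.norm(x - y) + 1e-12
print("monotonicity and Lipschitz checks passed")
```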

For the main results of this paper, we need the following useful lemmas.

Lemma 1. Let $H$ be a real Hilbert space. Then, we have the following well-known results for all $x, y \in H$:
(i) $\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$;
(ii) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$.

Lemma 2 (Xu, see [16]). Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the following relation:
$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \sigma_n + \delta_n, \quad n \ge 0,$$
where
(a) $\{\gamma_n\} \subset (0, 1)$ and $\sum_{n=0}^{\infty} \gamma_n = \infty$;
(b) $\limsup_{n \to \infty} \sigma_n \le 0$;
(c) $\delta_n \ge 0$ and $\sum_{n=0}^{\infty} \delta_n < \infty$.
Then, $\lim_{n \to \infty} a_n = 0$.

Lemma 3 (see [17]). Let $C$ be a closed and convex subset of a real Hilbert space $H$ and let $x \in H$. Then, $z = P_C x$ if and only if $\langle x - z, y - z \rangle \le 0$ for all $y \in C$.

Lemma 4 (see [18]). Let $\{\Gamma_n\}$ be a sequence of nonnegative real numbers such that there exists a subsequence $\{\Gamma_{n_j}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_j} < \Gamma_{n_j + 1}$ for all $j \in \mathbb{N}$. Then, there exists a nondecreasing sequence $\{m_k\}$ of $\mathbb{N}$ such that $\lim_{k \to \infty} m_k = \infty$ and the following properties are satisfied by all (sufficiently large) numbers $k \in \mathbb{N}$:
$$\Gamma_{m_k} \le \Gamma_{m_k + 1} \quad \text{and} \quad \Gamma_k \le \Gamma_{m_k + 1}.$$
In fact, $m_k$ is the largest number $n$ in the set $\{1, \ldots, k\}$ such that $\Gamma_n < \Gamma_{n + 1}$.

Lemma 5 (see [3]). Let $\{x_n\}$ be a sequence generated by algorithm (3). Then, for every $p \in VI(C, A)$,
$$\|x_{n+1} - p\|^2 \le \|x_n - p\|^2 - (1 - \lambda^2 L^2) \|x_n - y_n\|^2.$$

3. Main Results

In this section, we prove a strong convergence theorem for the variational inequality problem (1).

Theorem 6. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $A$ be a monotone and $L$-Lipschitz continuous mapping on $H$, and let $\lambda \in (0, 1/L)$. Suppose that $f : H \to H$ is a contraction mapping with a constant $\rho \in [0, 1)$. Let $\{e_n\}$ be a sequence of computational errors, let $x_0 \in H$ be arbitrary, and let $\{x_n\}$, $\{y_n\}$, and $\{z_n\}$ be the sequences generated by
$$y_n = P_C(x_n - \lambda A x_n), \qquad z_n = y_n - \lambda (A y_n - A x_n), \qquad x_{n+1} = \alpha_n f(x_n) + \beta_n z_n + e_n, \tag{14}$$
where $\{\alpha_n\}$ and $\{\beta_n\}$ are real sequences in $(0, 1)$ such that $\alpha_n + \beta_n \le 1$ for each $n$. Also, assume the following conditions:
(a) $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(b) $\lim_{n \to \infty} \alpha_n = 0$;
(c) $\sum_{n=0}^{\infty} (1 - \alpha_n - \beta_n) < \infty$;
(d) $\sum_{n=0}^{\infty} \|e_n\| < \infty$.
Then,
(i) $VI(C, A) \neq \emptyset$ if and only if $\{x_n\}$ is bounded and $\lim_{n \to \infty} \|x_n - y_n\| = 0$. Suppose $VI(C, A) \neq \emptyset$. Then,
(ii) $\{x_n\}$ converges strongly to $x^* = P_{VI(C, A)} f(x^*)$, where $P_{VI(C, A)}$ is the metric projection of $H$ onto $VI(C, A)$.
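Before turning to the proof, the following is a minimal numerical sketch of iteration (14) in Python. The one-dimensional problem ($C = [0, \infty)$, $Ax = x$, so $VI(C, A) = \{0\}$), the contraction $f$, and the coefficient and error sequences are illustrative assumptions chosen to satisfy conditions (a)-(d); they are not data from the paper.

```python
# Minimal sketch of iteration (14) on a 1-D toy problem:
# H = R, C = [0, inf), A(x) = x (monotone, 1-Lipschitz), so VI(C, A) = {0}.
P_C = lambda t: max(t, 0.0)
A = lambda t: t
f = lambda t: 0.5 * t + 1.0        # contraction with constant rho = 0.5
lam = 0.5                          # lam in (0, 1/L), here L = 1

x = 10.0
for n in range(5000):
    alpha = 1.0 / (n + 2)          # (a) sum alpha_n = inf and (b) alpha_n -> 0
    beta = 1.0 - alpha             # (c) 1 - alpha_n - beta_n = 0 is summable
    e = 1.0 / (n + 2) ** 2         # (d) summable computational errors
    y = P_C(x - lam * A(x))        # projected forward step
    z = y - lam * (A(y) - A(x))    # Tseng correction step
    x = alpha * f(x) + beta * z + e

print(x)   # approaches the solution x* = 0 of this toy problem
```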

Proof. (i) Assume that $\{x_n\}$ is a bounded sequence and $\lim_{n \to \infty} \|x_n - y_n\| = 0$. Then, since $\{x_n\}$ is bounded, $\{y_n\}$ is a bounded sequence as well. Hence, there exists a subsequence $\{y_{n_k}\}$ of $\{y_n\}$ such that $\{y_{n_k}\}$ converges weakly to some $z \in C$. Now, noting that $y_n = P_C(x_n - \lambda A x_n)$, from Lemma 3 we have
$$\langle x_n - \lambda A x_n - y_n, x - y_n \rangle \le 0 \quad \text{for all } x \in C.$$
Therefore,
$$\frac{1}{\lambda} \langle x_n - y_n, x - y_n \rangle \le \langle A x_n, x - y_n \rangle \quad \text{for all } x \in C.$$
From the monotonicity of $A$, we have
$$\langle A x, x - y_n \rangle \ge \langle A y_n, x - y_n \rangle \ge \langle A x_n, x - y_n \rangle - \|A y_n - A x_n\| \|x - y_n\|$$
for all $x \in C$. Since $\lim_{n \to \infty} \|x_n - y_n\| = 0$ and $A$ is Lipschitz continuous, passing to the limit along $\{n_k\}$ yields $\langle A x, x - z \rangle \ge 0$ for all $x \in C$. Now, let $t \in (0, 1]$ and $x \in C$, and from the convexity of $C$ we have $x_t := t x + (1 - t) z \in C$. Therefore, $\langle A x_t, x_t - z \rangle \ge 0$, and since $x_t - z = t (x - z)$, then $\langle A x_t, x - z \rangle \ge 0$ for all $t \in (0, 1]$. Because the mapping $A$ and scalar multiplication are continuous, if $t \to 0^+$, then we have $\langle A z, x - z \rangle \ge 0$ for all $x \in C$, i.e., $z \in VI(C, A)$, so $VI(C, A) \neq \emptyset$.
For the converse, fix $p \in VI(C, A)$. Using Lemma 5, we have
$$\|z_n - p\|^2 \le \|x_n - p\|^2 - (1 - \lambda^2 L^2) \|x_n - y_n\|^2 \le \|x_n - p\|^2.$$
Therefore, $\|z_n - p\| \le \|x_n - p\|$. Using the above inequality, we have
$$\|x_{n+1} - p\| \le \alpha_n \|f(x_n) - p\| + \beta_n \|z_n - p\| + (1 - \alpha_n - \beta_n) \|p\| + \|e_n\| \le (1 - \alpha_n (1 - \rho)) \|x_n - p\| + \alpha_n \|f(p) - p\| + (1 - \alpha_n - \beta_n) \|p\| + \|e_n\|,$$
so, by induction and conditions (c) and (d), the sequence $\{x_n\}$ is bounded; the relation $\lim_{n \to \infty} \|x_n - y_n\| = 0$ then follows as in the proof of part (ii) below.
(ii) Let $p \in VI(C, A)$. From part (i), we have that $\{x_n\}$ is bounded. Then $\{y_n\}$, $\{z_n\}$, and $\{f(x_n)\}$ are bounded. Now, using Lemma 5, we have
$$\|z_n - p\|^2 \le \|x_n - p\|^2 - (1 - \lambda^2 L^2) \|x_n - y_n\|^2$$
(note that the sequence $\{z_n\}$ in algorithm (14) replaces $\{x_{n+1}\}$ in Lemma 5).

Since $\alpha_n + \beta_n \le 1$, by the convexity of $\|\cdot\|^2$ and the relation $x_{n+1} = \alpha_n f(x_n) + \beta_n z_n + e_n$, we conclude that
$$\|x_{n+1} - p\|^2 \le \alpha_n \|f(x_n) - p\|^2 + \beta_n \|z_n - p\|^2 + (1 - \alpha_n - \beta_n) \|p\|^2 + \theta_n,$$
where $\theta_n := 2 \|\alpha_n f(x_n) + \beta_n z_n - p\| \|e_n\| + \|e_n\|^2$.

Therefore,
$$\|x_{n+1} - p\|^2 \le \|x_n - p\|^2 - \beta_n (1 - \lambda^2 L^2) \|x_n - y_n\|^2 + \alpha_n \|f(x_n) - p\|^2 + (1 - \alpha_n - \beta_n) \|p\|^2 + \theta_n. \tag{25}$$

Also, we have
$$\|x_{n+1} - x_n\| \le \alpha_n \|f(x_n) - x_n\| + \beta_n \|z_n - x_n\| + (1 - \alpha_n - \beta_n) \|x_n\| + \|e_n\|. \tag{26}$$

Note that $P_{VI(C, A)} f$ is a contraction mapping. Then, by the Banach contraction principle, there exists a unique element $x^* \in VI(C, A)$ such that $x^* = P_{VI(C, A)} f(x^*)$. Now, we claim that $\{x_n\}$ converges strongly to $x^*$. It is enough to consider the following two cases.

Case 1. Suppose there exists some $n_0 \in \mathbb{N}$ such that $\|x_{n+1} - x^*\| \le \|x_n - x^*\|$ for all $n \ge n_0$. Then, $\lim_{n \to \infty} \|x_n - x^*\|$ exists. From the conditions (b), (c), and (d), we have $\lim_{n \to \infty} \alpha_n = 0$, $\lim_{n \to \infty} (1 - \alpha_n - \beta_n) = 0$, and $\lim_{n \to \infty} \|e_n\| = 0$. From (25) (with $p = x^*$) and our assumptions, $\lim_{n \to \infty} \|x_n - y_n\| = 0$. Then, from (26) and the boundedness of the sequences $\{x_n\}$, $\{z_n\}$, and $\{f(x_n)\}$, we obtain that $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$. Moreover, one can check that $\{x_n\}$ is a Cauchy sequence in the Hilbert space $H$; therefore, $\{x_n\}$ is convergent. Now we show that $\{x_n\}$ converges strongly to $x^*$.

Since $\{x_n\}$ is strongly convergent, $x_n \to q$ for some $q \in H$, and since $\lim_{n \to \infty} \|x_n - y_n\| = 0$, we conclude that $y_n \to q$. Also, as in the proof of part (i), we conclude $q \in VI(C, A)$. Therefore, from Lemma 3,
$$\limsup_{n \to \infty} \langle f(x^*) - x^*, x_{n+1} - x^* \rangle = \langle f(x^*) - x^*, q - x^* \rangle \le 0.$$

Next, we consider the sequences $\{a_n\}$, $\{\gamma_n\}$, $\{\sigma_n\}$, and $\{\delta_n\}$ of Lemma 2 as follows:
$$a_n = \|x_n - x^*\|^2, \qquad \gamma_n = \alpha_n (1 - \rho),$$
$$\sigma_n = \frac{2}{1 - \rho} \langle f(x^*) - x^*, x_{n+1} - x^* \rangle, \qquad \delta_n = M \big( \|e_n\| + (1 - \alpha_n - \beta_n) \big),$$
where $M > 0$ is a suitable constant.

From the above, we have
$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \sigma_n + \delta_n.$$

From condition (b), there exists an integer $n_1$ such that $\alpha_n (1 - \rho) < 1$ for each $n \ge n_1$. Without loss of generality, we may assume that $\gamma_n = \alpha_n (1 - \rho) \in (0, 1)$ for each $n$. From condition (a), note $\sum_{n=0}^{\infty} \alpha_n = \infty$, so $\sum_{n=0}^{\infty} \gamma_n = (1 - \rho) \sum_{n=0}^{\infty} \alpha_n = \infty$, and hence, condition (a) of Lemma 2 holds. From the strong convergence of $\{x_n\}$, the boundedness of $\{x_n\}$, and the inequality above, we have
$$\limsup_{n \to \infty} \sigma_n \le 0.$$

Therefore, condition (b) of Lemma 2 holds. Next, note $\{f(x_n)\}$, $\{x_n\}$, and $\{z_n\}$ are bounded. Hence, there exist some positive constants $M_1$, $M_2$, and $M_3$ such that $\|f(x_n) - x^*\| \le M_1$, $\|z_n - x^*\| \le M_2$, and $\|x_n - x^*\| \le M_3$ for each $n$. Then, from the conditions (c) and (d), we have
$$\sum_{n=0}^{\infty} \delta_n \le M \sum_{n=0}^{\infty} \big( \|e_n\| + (1 - \alpha_n - \beta_n) \big) < \infty.$$

Therefore, condition (c) of Lemma 2 holds. Then, from Lemma 2, it follows that $a_n \to 0$ as $n \to \infty$, i.e., $x_n \to x^*$.

Case 2. Suppose there exists a subsequence $\{\Gamma_{n_j}\}$ of $\{\Gamma_n\} := \{\|x_n - x^*\|^2\}$ such that $\Gamma_{n_j} < \Gamma_{n_j + 1}$ for all $j \in \mathbb{N}$. Now, from Lemma 4, there exists a nondecreasing sequence $\{m_k\}$ of $\mathbb{N}$ such that $\lim_{k \to \infty} m_k = \infty$ and the following inequalities hold for all $k \in \mathbb{N}$:
$$\Gamma_{m_k} \le \Gamma_{m_k + 1} \quad \text{and} \quad \Gamma_k \le \Gamma_{m_k + 1}.$$
Now, from (25), we have
$$\beta_{m_k} (1 - \lambda^2 L^2) \|x_{m_k} - y_{m_k}\|^2 \le \Gamma_{m_k} - \Gamma_{m_k + 1} + \alpha_{m_k} \|f(x_{m_k}) - x^*\|^2 + (1 - \alpha_{m_k} - \beta_{m_k}) \|x^*\|^2 + \theta_{m_k}.$$
Hence, $\lim_{k \to \infty} \|x_{m_k} - y_{m_k}\| = 0$; therefore, from (26), $\lim_{k \to \infty} \|x_{m_k + 1} - x_{m_k}\| = 0$.
As in Case 1, we conclude that
$$\limsup_{k \to \infty} \langle f(x^*) - x^*, x_{m_k + 1} - x^* \rangle \le 0.$$
Therefore, we have
$$\Gamma_{m_k + 1} \le (1 - \gamma_{m_k}) \Gamma_{m_k} + \gamma_{m_k} \sigma_{m_k} + \delta_{m_k},$$
where $\gamma_{m_k}$, $\sigma_{m_k}$, and $\delta_{m_k}$ are as in Case 1. From our assumptions, the above inequality, and $\Gamma_{m_k} \le \Gamma_{m_k + 1}$, we conclude $\lim_{k \to \infty} \Gamma_{m_k + 1} = 0$. Hence, since $\Gamma_k \le \Gamma_{m_k + 1}$ for all $k$, we obtain $\lim_{k \to \infty} \Gamma_k = 0$, i.e., $x_n \to x^*$, which completes the proof of part (ii). ☐

Open problem 1. Can we remove the condition $\lim_{n \to \infty} \|x_n - y_n\| = 0$ in part (i) of Theorem 6?

4. Numerical Example

In this section, algorithm (14) is illustrated with some examples.

Example 1. Put .
Then, from algorithm (14), we have the following sequences, where the projection $P_C$ is given in closed form (see [19]). One checks directly that the hypotheses of Theorem 6 are satisfied; hence, $VI(C, A) \neq \emptyset$.

Now, using the MATLAB software, we see that the sequence $\{x_n\}$ converges (Figure 1).

In the following example, using the MATLAB software, we compare some similar algorithms in terms of convergence speed and behavior. In particular, the TEGM algorithm (3), the VSEGM algorithm (4), the THEGM algorithm (6), and algorithm (14) are compared. We see that algorithm (14) has a higher convergence speed than the other algorithms (Figure 2).
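A comparison of this kind can be sketched in a few lines of Python. The script below runs Tseng's method (3), a viscosity Tseng iteration in the spirit of (6), and iteration (14) on the toy monotone problem used earlier; the problem data, the contraction $f$, the coefficient choices, and the use of a constant step size in place of the step rules of (4)-(7) are all illustrative assumptions, and VSEGM is omitted for brevity.

```python
import numpy as np

# Toy monotone VI: A(x) = Mx + q, M skew-symmetric, C = R^2_+, solution x* = (1, 1).
M = np.array([[0.0, -1.0], [1.0, 0.0]])
q = np.array([1.0, -1.0])
A = lambda x: M @ x + q
P_C = lambda x: np.maximum(x, 0.0)
x_star = np.array([1.0, 1.0])
lam = 0.5 / np.linalg.norm(M, 2)   # constant step in (0, 1/L)
f = lambda x: 0.5 * x              # contraction with rho = 0.5

def run(update, n_iters=500):
    """Run a Tseng-based iteration and return the final distance to x_star."""
    x = np.array([5.0, 5.0])
    for n in range(n_iters):
        y = P_C(x - lam * A(x))
        z = y - lam * (A(y) - A(x))     # Tseng step shared by all three methods
        x = update(n, x, z)
    return np.linalg.norm(x - x_star)

def tegm(n, x, z):                      # Tseng's method (3)
    return z

def thegm(n, x, z):                     # viscosity Tseng iteration as in (6)
    a = 1.0 / (n + 2)
    return a * f(x) + (1 - a) * z

def alg14(n, x, z):                     # iteration (14): beta_n < 1 - alpha_n, errors
    a = 1.0 / (n + 3)
    e = np.full(2, 1.0 / (n + 3) ** 2)  # summable error terms
    return a * f(x) + (1 - 2 * a) * z + e

for method in (tegm, thegm, alg14):
    print(method.__name__, run(method))  # distances to x_star after 500 iterations
```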

Example 2. Let , (Figure 2).

Now, we examine the convergence of the sequences in Theorem 6 in the following example.

Example 3. Put , .
Then, computing the corresponding quantities, one checks that all the conditions of Theorem 6 hold; therefore, by Theorem 6, the sequence $\{x_n\}$ converges strongly to 0.

In the following example, we examine the case where $VI(C, A) = \emptyset$, and the sequence generated by algorithm (14) is divergent (Table 1 and Figure 3).

Example 4. Put , .
Then, we compute the corresponding sequences directly. Now, we will prove by induction that the asserted inequality holds for all $n$. When $n = 0$, the inequality clearly holds.
Induction step: suppose that the inequality holds for $n$.

Then, a direct computation shows that the inequality holds for $n + 1$. Consequently, it holds for all $n$.

Next, we show that $\{x_n\}$ is an unbounded sequence. Note that the claim holds for $n = 0$.

Induction step: suppose that the inequality holds for $n$.

Then, the inequality holds for $n + 1$; thus, $\{x_n\}$ is an unbounded sequence (Figure 4).
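A related failure mode can be sketched numerically: when the summability condition (d) of Theorem 6 fails, convergence is lost even though $VI(C, A) \neq \emptyset$. The Python sketch below uses the one-dimensional toy problem from the sketch after Theorem 6 (an illustrative assumption, not the data of Example 4): with summable errors the iterates approach the solution $x^* = 0$, while a constant error sequence makes them stall far from it.

```python
# Effect of the error sequence e_n on iteration (14) for a 1-D toy problem:
# C = [0, inf), A(x) = x, f(x) = x / 2, so VI(C, A) = {0}.
def iterate14(err, n_iters=5000, x=10.0, lam=0.5):
    P_C, A, f = (lambda t: max(t, 0.0)), (lambda t: t), (lambda t: 0.5 * t)
    for n in range(n_iters):
        alpha = 1.0 / (n + 2)
        y = P_C(x - lam * A(x))
        z = y - lam * (A(y) - A(x))
        x = alpha * f(x) + (1 - alpha) * z + err(n)
    return x

print(iterate14(lambda n: 1.0 / (n + 2) ** 2))  # summable errors: x_n -> 0
print(iterate14(lambda n: 0.5))                 # constant errors: x_n stalls near 2
```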

5. Conclusions

In this paper, we proposed a Tseng-type algorithm based on the viscosity method, which is an extension of the algorithm of Thong and Hieu [3]. We showed that the sequence generated by the proposed algorithm converges strongly to an element of $VI(C, A)$.

The following are the results of this paper: (i) We extended the results of Thong and Hieu's method [3] and provided necessary and sufficient conditions for the solution set $VI(C, A)$ to be nonempty. (ii) In algorithm (14), we incorporated a sequence of computational errors into the generated sequence $\{x_n\}$ and proved the convergence of the sequence in the presence of these errors. (iii) We provided some numerical examples to compare our algorithm with the algorithms TEGM, VSEGM, and THEGM.

Data Availability

No data were used to support the study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.