Abstract
In this paper, we present improved iterative methods for the numerical solution of an equilibrium problem in a Hilbert space with a pseudomonotone and Lipschitz-type bifunction. The method is built around two computing phases of a proximal-like mapping with inertial terms. Several simple step size rules that do not involve a line search are examined, allowing the technique to be applied effectively without knowledge of the Lipschitz-type constants of the cost bifunction. When the control parameters are chosen appropriately, the iterative sequences converge weakly to a particular solution of the problem. We prove weak convergence theorems without requiring knowledge of the Lipschitz-type constants of the bifunction. Numerical tests were performed, and the results demonstrate the suitability and rapid convergence of the new methods over traditional ones.
1. Introduction
Let $\mathbb{H}$ stand for a real Hilbert space and $\mathcal{K}$ for a nonempty closed convex subset of $\mathbb{H}$. This research is about an iterative technique for solving the equilibrium problem ((1), to make it short). Let $f : \mathbb{H} \times \mathbb{H} \to \mathbb{R}$ be a bifunction with $f(y, y) = 0$ for each $y \in \mathcal{K}$. An equilibrium problem for a given bifunction $f$ on $\mathcal{K}$ is interpreted this way: find $u^* \in \mathcal{K}$ such that
$$f(u^*, y) \geq 0, \quad \forall y \in \mathcal{K}. \tag{1}$$
The numerical solution of the equilibrium problem (1) under the following conditions is the focus of this study. We assume throughout that the following conditions are satisfied:
Condition (1). The solution set of problem (1), denoted by $EP(f, \mathcal{K})$, is nonempty.
Condition (2). The bifunction $f$ is pseudomonotone [1, 2], i.e.,
$$f(x, y) \geq 0 \Longrightarrow f(y, x) \leq 0, \quad \forall x, y \in \mathcal{K}.$$
Condition (3). The bifunction $f$ is Lipschitz-type continuous [3] on $\mathcal{K}$, i.e., there exist two constants $c_1, c_2 > 0$ such that
$$f(x, z) \leq f(x, y) + f(y, z) + c_1 \|x - y\|^2 + c_2 \|y - z\|^2, \quad \forall x, y, z \in \mathcal{K}.$$
Condition (4). For any sequence $\{x_n\} \subset \mathcal{K}$ satisfying $x_n \rightharpoonup x^*$, the following inequality holds:
$$\limsup_{n \to \infty} f(x_n, y) \leq f(x^*, y), \quad \forall y \in \mathcal{K}.$$
Condition (5). $f(x, \cdot)$ is convex and subdifferentiable on $\mathbb{H}$ for each fixed $x \in \mathbb{H}$.
Let us represent the problem's solution set by $EP(f, \mathcal{K})$; we assume in the following text that this solution set is nonempty. Researchers are interested in the equilibrium problem because it connects many mathematical problems, including fixed point problems, vector and scalar minimization problems, variational inequalities, complementarity problems, saddle point problems, Nash equilibrium problems in noncooperative games, and inverse optimization problems (see [2, 4–9] for further information). It also has a variety of applications in economics [10] and the dynamics of offer and demand [11], and it underlies the theoretical framework of noncooperative games and Nash's equilibrium models [12, 13]. The phrase "equilibrium problem" was first used in the literature in 1992 by Muu and Oettli [9] and was further investigated by Blum [2]. More precisely, we consider two applications of problem (1). (i) A variational inequality problem for an operator $A : \mathbb{H} \to \mathbb{H}$ is stated as follows: find $u^* \in \mathcal{K}$ such that
$$\langle A(u^*), y - u^* \rangle \geq 0, \quad \forall y \in \mathcal{K}. \tag{5}$$
Let us define a bifunction $f$ as follows:
$$f(x, y) := \langle A(x), y - x \rangle, \quad \forall x, y \in \mathcal{K}.$$
Then, the equilibrium problem converts into the problem of variational inequalities defined in (5), and the Lipschitz-type constants of the bifunction are $c_1 = c_2 = L/2$, where $L$ is the Lipschitz constant of $A$. (ii) A mapping $T : \mathcal{K} \to \mathcal{K}$ is said to be a $\kappa$-strict pseudocontraction [14] if there exists a constant $\kappa \in [0, 1)$ such that
$$\|Tx - Ty\|^2 \leq \|x - y\|^2 + \kappa \|(x - Tx) - (y - Ty)\|^2, \quad \forall x, y \in \mathcal{K}.$$
A fixed point problem (FPP) for $T$ is to find $u^* \in \mathcal{K}$ such that $T(u^*) = u^*$. Let us define a bifunction $f$ as follows:
$$f(x, y) := \langle x - Tx, y - x \rangle, \quad \forall x, y \in \mathcal{K}. \tag{8}$$
It can easily be seen [15] that expression (8) satisfies the conditions (1)–(5), with Lipschitz-type constants $c_1 = c_2 = (3 - 2\kappa)/(2(1 - \kappa))$.
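The two reductions above can be exercised numerically. In the sketch below, the operator $A$ and the mapping $T$ are hypothetical examples of ours (an affine monotone operator and a simple contraction), chosen only to show how the bifunctions are formed; neither comes from the paper.

```python
# Illustrative sketch: the two bifunction reductions from Section 1.
# The operator A and the map T below are hypothetical examples, chosen
# only so that the constructions can be exercised numerically.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

# (i) Variational inequality: f(x, y) = <A(x), y - x>.
def A(x):  # hypothetical Lipschitz-continuous monotone operator
    return [2.0 * x[0] + x[1], -x[0] + 2.0 * x[1]]

def f_vi(x, y):
    return dot(A(x), sub(y, x))

# (ii) Fixed point problem: f(x, y) = <x - T(x), y - x>.
def T(x):  # hypothetical map with a fixed point (here: a contraction)
    return [0.5 * x[0], 0.5 * x[1]]

def f_fpp(x, y):
    return dot(sub(x, T(x)), sub(y, x))

x = [1.0, -2.0]
print(f_vi(x, x), f_fpp(x, x))  # both are 0, since f(y, y) = 0 by construction
```

Note that $f(y, y) = 0$ holds for both constructions automatically, which is the basic requirement placed on a bifunction in problem (1).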
The extragradient method developed by Tran et al. [16] is one useful approach. Take an arbitrary starting point $u_1 \in \mathcal{K}$ and compute the next iterate as follows:
$$y_n = \arg\min_{y \in \mathcal{K}} \Big\{ \lambda f(u_n, y) + \frac{1}{2} \|u_n - y\|^2 \Big\},$$
$$u_{n+1} = \arg\min_{y \in \mathcal{K}} \Big\{ \lambda f(y_n, y) + \frac{1}{2} \|u_n - y\|^2 \Big\},$$
where $0 < \lambda < \min\{1/(2c_1), 1/(2c_2)\}$ and $c_1, c_2$ are the two Lipschitz-type constants.
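In the special case $f(x, y) = \langle A(x), y - x \rangle$, each proximal step above reduces to a metric projection, and the scheme becomes the classical extragradient iteration for variational inequalities. The sketch below uses an assumed operator $A(u) = u - b$ and an assumed box constraint set, both purely illustrative:

```python
# Extragradient sketch for the VI special case f(x, y) = <A(x), y - x>:
# both proximal steps reduce to projections onto the feasible set K.
# A and the box K below are hypothetical; lam satisfies lam < 1/L (L = 1 here).

def proj_box(u, lo, hi):
    # metric projection onto the box [lo, hi]^d
    return [min(max(x, lo), hi) for x in u]

def A(u):
    # hypothetical strongly monotone operator A(u) = u - b
    b = [0.5, -2.0]
    return [x - bx for x, bx in zip(u, b)]

def extragradient(u, lam=0.5, iters=100, lo=-1.0, hi=1.0):
    for _ in range(iters):
        au = A(u)
        y = proj_box([ui - lam * ai for ui, ai in zip(u, au)], lo, hi)
        ay = A(y)
        u = proj_box([ui - lam * ai for ui, ai in zip(u, ay)], lo, hi)
    return u

u = extragradient([0.0, 0.0])
print(u)  # approaches (0.5, -1.0), the VI solution for this A and K
```

For this choice of $A$, the unique VI solution is the projection of $b$ onto the box, so the iterates can be checked against a closed-form answer.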
The main goal is to create an inertial-type variant of the method of [16] designed to increase the convergence rate of the iterative sequence. Such techniques were originally motivated by the oscillator equation with damping and a conservative restoring force. This second-order dynamical system is known as the "heavy ball with friction," and it was first proposed by Polyak in [17]. The important feature of these methods is that the next iterate is built from the previous two iterates. Numerical results show that inertial terms improve the performance of the approaches in terms of the number of iterations and elapsed time in this context. Inertial-type approaches have been extensively studied in recent years for certain classes of equilibrium problems [18–26] and other problems [27–33].
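The heavy-ball effect can be illustrated on a toy problem (this example and its parameters are our own, not from the paper): adding the inertial term $\beta (x_n - x_{n-1})$ to plain gradient descent markedly reduces the iteration count on an ill-conditioned quadratic.

```python
# Heavy-ball (Polyak) sketch on phi(x) = 0.5*(x1^2 + 25*x2^2):
#   x_{n+1} = x_n - alpha*grad(x_n) + beta*(x_n - x_{n-1}).
# Step sizes below are the classical choices for mu = 1, L = 25.

def grad(x):
    return [x[0], 25.0 * x[1]]

def run(alpha, beta, tol=1e-8, max_iter=10000):
    x, x_prev = [1.0, 1.0], [1.0, 1.0]
    for n in range(max_iter):
        if (x[0] ** 2 + x[1] ** 2) ** 0.5 < tol:
            return n  # iterations needed to reach the tolerance
        g = grad(x)
        x, x_prev = [xi - alpha * gi + beta * (xi - pi)
                     for xi, gi, pi in zip(x, g, x_prev)], x
    return max_iter

plain = run(alpha=2.0 / 26.0, beta=0.0)               # gradient descent
heavy = run(alpha=4.0 / 36.0, beta=(4.0 / 6.0) ** 2)  # with inertia
print(plain, heavy)  # the inertial run needs far fewer iterations
```

This mirrors the remark above: the next iterate depends on the previous two, and the inertia pays off in iteration counts.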
As a result, the following natural question arises: Is it possible to develop new inertial-type weakly convergent extragradient-type methods with monotone and nonmonotone step size rules to solve equilibrium problems?
In our study, we provide a positive answer to this question, namely, that the extragradient approach still generates a weakly convergent sequence when solving equilibrium problems involving pseudomonotone bifunctions using novel monotone and nonmonotone variable step size rules. Motivated by the work of Censor et al. [34] and Tran et al. [16], we describe new inertial extragradient-type approaches to solving problem (1) in the setting of an infinite-dimensional real Hilbert space. Our primary contributions are as follows: (i) We build an inertial subgradient extragradient technique with a novel monotone variable step size rule to solve equilibrium problems in a real Hilbert space and show that the resulting sequence is weakly convergent. (ii) We devise another inertial subgradient extragradient technique that leverages a novel nonmonotone variable step size rule that is independent of the Lipschitz constants. (iii) Some consequences are derived in order to address different kinds of equilibrium problems in a real Hilbert space. (iv) We offer numerical demonstrations of the suggested methodologies to verify the theoretical conclusions and compare them to earlier results [22, 35, 36]. Our numerical results indicate that the new approaches are useful and outperform the existing ones.
The paper is structured as follows: Section 2 presents preliminary results, and Section 3 gives the new approaches and their convergence analysis. Finally, Section 5 gives some numerical results to illustrate the practical efficiency of the proposed methods.
2. Preliminaries
In this part, we review several fundamental identities as well as crucial lemmas and definitions. The metric projection of $u \in \mathbb{H}$ onto $\mathcal{K}$ is defined by
$$P_{\mathcal{K}}(u) = \arg\min_{y \in \mathcal{K}} \|u - y\|.$$
The following lemmas outline the key properties of the projection mapping.
Lemma 1 (see [37]). Let $P_{\mathcal{K}} : \mathbb{H} \to \mathcal{K}$ be the metric projection. Then: $z = P_{\mathcal{K}}(u)$ if and only if
$$\langle u - z, y - z \rangle \leq 0, \quad \forall y \in \mathcal{K}.$$
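Lemma 1 can be sanity-checked numerically for sets whose projection has a closed form. The sketch below (an illustrative choice of ours, not from the paper) projects onto the closed unit ball and verifies that $\langle u - P_{\mathcal{K}}(u), y - P_{\mathcal{K}}(u) \rangle \leq 0$ over many sampled points $y$:

```python
# Lemma 1 sanity check: z = P_K(u) iff <u - z, y - z> <= 0 for all y in K.
# Here K is the closed unit ball, whose projection has a closed form.
import random

def proj_ball(u):
    # metric projection onto the closed unit ball {y : ||y|| <= 1}
    norm = sum(x * x for x in u) ** 0.5
    return u if norm <= 1.0 else [x / norm for x in u]

random.seed(0)
u = [2.0, -1.0, 0.5]
z = proj_ball(u)
worst = max(
    sum((ui - zi) * (yi - zi) for ui, zi, yi in zip(u, z, y))
    for y in (proj_ball([random.uniform(-2, 2) for _ in range(3)])
              for _ in range(1000))
)
print(worst)  # never exceeds 0 (up to rounding), as Lemma 1 predicts
```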
Lemma 2 (see [37]). For any $u, v \in \mathbb{H}$ and $\lambda \in \mathbb{R}$, the following conditions are met:
$$\|\lambda u + (1 - \lambda) v\|^2 = \lambda \|u\|^2 + (1 - \lambda) \|v\|^2 - \lambda (1 - \lambda) \|u - v\|^2;$$
$$\|u + v\|^2 \leq \|u\|^2 + 2 \langle v, u + v \rangle.$$
The normal cone of $\mathcal{K}$ at $u \in \mathcal{K}$ is defined by
$$N_{\mathcal{K}}(u) = \{ z \in \mathbb{H} : \langle z, y - u \rangle \leq 0, \; \forall y \in \mathcal{K} \}.$$
Assume that $g : \mathcal{K} \to \mathbb{R}$ is a convex function; the subdifferential of $g$ at $u \in \mathcal{K}$ is defined by
$$\partial g(u) = \{ z \in \mathbb{H} : g(y) - g(u) \geq \langle z, y - u \rangle, \; \forall y \in \mathcal{K} \}.$$
Lemma 3 (see [38]). Let $g : \mathcal{K} \to \mathbb{R}$ be a subdifferentiable, convex, and lower semicontinuous function on $\mathcal{K}$. An element $u^* \in \mathcal{K}$ is a minimizer of the function $g$ if and only if
$$0 \in \partial g(u^*) + N_{\mathcal{K}}(u^*),$$
where $\partial g(u^*)$ stands for the subdifferential of $g$ at $u^*$ and $N_{\mathcal{K}}(u^*)$ for the normal cone of $\mathcal{K}$ at $u^*$.
Lemma 4 (see [39]). Let $\Xi$ be a nonempty subset of $\mathbb{H}$ and $\{u_n\}$ be a sequence in $\mathbb{H}$ satisfying two conditions: (i) for each $u \in \Xi$, $\lim_{n \to \infty} \|u_n - u\|$ exists; (ii) each sequential weak cluster point of $\{u_n\}$ belongs to $\Xi$. Then, the sequence $\{u_n\}$ converges weakly to an element of Ξ.
Lemma 5 (see [40]). Suppose that $\{a_n\}$ and $\{b_n\}$ are two sequences of nonnegative real numbers satisfying the inequality $a_{n+1} \leq a_n + b_n$. If $\sum_{n=1}^{\infty} b_n < +\infty$, then $\lim_{n \to \infty} a_n$ exists.
3. Main Results
In this section, we present a numerical iterative method that accelerates the rate of convergence of the iterative sequence by combining two strongly convex optimization problems with an inertial term. We propose the techniques listed below for solving equilibrium problems.

Remark 6. (i) If the inertial parameter is set to zero in the above method, then it is equivalent to the default extragradient method [16] with the updated step size rule. (ii) From the expressions in Algorithm 1, we have It further implies that
Lemma 7. The sequence $\{\lambda_n\}$ is monotonically nonincreasing and converges to some $\lambda \geq \min\{\lambda_1, \mu / (2 \max\{c_1, c_2\})\}$.
Proof. The definition of $\lambda_{n+1}$ immediately gives $\lambda_{n+1} \leq \lambda_n$, so $\{\lambda_n\}$ is nonincreasing. Moreover, whenever $f(w_n, z_n) - f(w_n, y_n) - f(y_n, z_n) > 0$, the Lipschitz-type continuity of $f$ yields
$$\frac{\mu (\|w_n - y_n\|^2 + \|z_n - y_n\|^2)}{2 [f(w_n, z_n) - f(w_n, y_n) - f(y_n, z_n)]} \geq \frac{\mu (\|w_n - y_n\|^2 + \|z_n - y_n\|^2)}{2 [c_1 \|w_n - y_n\|^2 + c_2 \|z_n - y_n\|^2]} \geq \frac{\mu}{2 \max\{c_1, c_2\}}.$$
Thus, $\{\lambda_n\}$ is bounded below by $\min\{\lambda_1, \mu / (2 \max\{c_1, c_2\})\}$ and therefore converges. This completes the proof of the lemma.
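Algorithm 1 and its step size rule are not reproduced in this extract, so the following sketch assumes the monotone rule commonly used for this family of methods, $\lambda_{n+1} = \min\{\lambda_n, \mu (\|w_n - y_n\|^2 + \|z_n - y_n\|^2) / (2 D_n)\}$ with $D_n = f(w_n, z_n) - f(w_n, y_n) - f(y_n, z_n)$ when $D_n > 0$, and $\lambda_{n+1} = \lambda_n$ otherwise. It checks numerically that the resulting sequence is nonincreasing and bounded below, as Lemma 7 asserts:

```python
# Monotone step-size rule sketch (Lemma 7). The update below is the rule
# commonly used for this family of methods; the exact rule of Algorithm 1
# is not reproduced in this extract, so treat the form as an assumption.
import random

def step_size_update(lam, mu, w, y, z, f):
    d = f(w, z) - f(w, y) - f(y, z)
    if d <= 0:
        return lam
    num = (sum((a - b) ** 2 for a, b in zip(w, y))
           + sum((a - b) ** 2 for a, b in zip(z, y)))
    return min(lam, mu * num / (2.0 * d))

def f(x, y):
    # illustrative bifunction f(x, y) = <2x, y - x>, Lipschitz-type with c1 = c2 = 1
    return sum(2.0 * xi * (yi - xi) for xi, yi in zip(x, y))

random.seed(1)
lams = [1.0]
for _ in range(200):
    w, y, z = ([random.uniform(-1, 1) for _ in range(2)] for _ in range(3))
    lams.append(step_size_update(lams[-1], 0.4, w, y, z, f))
print(lams[-1])  # nonincreasing, bounded below by mu/(2*max(c1, c2)) = 0.2
```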

Lemma 8. The sequence $\{\lambda_n\}$ converges to some $\lambda$ with $\lambda \in [\min\{\mu / (2 \max\{c_1, c_2\}), \lambda_1\}, \lambda_1 + P]$, where $P = \sum_{n=1}^{\infty} p_n$.
Proof. By the Lipschitz-type continuity of $f$, whenever $f(w_n, z_n) - f(w_n, y_n) - f(y_n, z_n) > 0$ we have
$$\frac{\mu (\|w_n - y_n\|^2 + \|z_n - y_n\|^2)}{2 [f(w_n, z_n) - f(w_n, y_n) - f(y_n, z_n)]} \geq \frac{\mu (\|w_n - y_n\|^2 + \|z_n - y_n\|^2)}{2 [c_1 \|w_n - y_n\|^2 + c_2 \|z_n - y_n\|^2]} \geq \frac{\mu}{2 \max\{c_1, c_2\}}.$$
Applying mathematical induction to the definition of $\lambda_{n+1}$, we have
$$\min\Big\{ \frac{\mu}{2 \max\{c_1, c_2\}}, \lambda_1 \Big\} \leq \lambda_n \leq \lambda_1 + P.$$
Let $[\lambda_{n+1} - \lambda_n]^+ = \max\{0, \lambda_{n+1} - \lambda_n\}$ and $[\lambda_{n+1} - \lambda_n]^- = \max\{0, -(\lambda_{n+1} - \lambda_n)\}$. Due to the definition of $\lambda_{n+1}$, we get $\sum_{n=1}^{\infty} [\lambda_{n+1} - \lambda_n]^+ \leq \sum_{n=1}^{\infty} p_n < +\infty$; that is, the series $\sum_{n=1}^{\infty} [\lambda_{n+1} - \lambda_n]^+$ is convergent. The convergence of $\sum_{n=1}^{\infty} [\lambda_{n+1} - \lambda_n]^-$ must now be proven. Suppose $\sum_{n=1}^{\infty} [\lambda_{n+1} - \lambda_n]^- = +\infty$. Due to the fact that
$$\lambda_{k+1} - \lambda_1 = \sum_{n=1}^{k} (\lambda_{n+1} - \lambda_n) = \sum_{n=1}^{k} [\lambda_{n+1} - \lambda_n]^+ - \sum_{n=1}^{k} [\lambda_{n+1} - \lambda_n]^-, \tag{26}$$
letting $k \to +\infty$ in (26), we have $\lambda_k \to -\infty$ as $k \to +\infty$. This is an absurdity. As a result of the convergence of both series and taking $k \to +\infty$ in expression (26), we obtain that $\lim_{k \to \infty} \lambda_k$ exists. This brings the proof to a conclusion.
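As with the monotone rule, the exact nonmonotone rule of Algorithm 2 is not reproduced in this extract; the sketch below assumes the form common in this literature, $\lambda_{n+1} = \min\{\lambda_n + p_n, \mu (\|w_n - y_n\|^2 + \|z_n - y_n\|^2) / (2 D_n)\}$ when $D_n > 0$ and $\lambda_{n+1} = \lambda_n + p_n$ otherwise, with a summable perturbation $p_n \geq 0$. It checks that the sequence stays in the interval claimed by Lemma 8:

```python
# Nonmonotone step-size rule sketch (Lemma 8). Assumed form, following the
# common pattern in this literature (not reproduced verbatim in this extract):
# the relaxation lam_n + p_n allows the step size to increase occasionally,
# with p_n summable (here p_n = 1/n^2, so P = sum p_n = pi^2/6).
import random

def update(lam, p, mu, w, y, z, f):
    d = f(w, z) - f(w, y) - f(y, z)
    relaxed = lam + p
    if d <= 0:
        return relaxed
    num = (sum((a - b) ** 2 for a, b in zip(w, y))
           + sum((a - b) ** 2 for a, b in zip(z, y)))
    return min(relaxed, mu * num / (2.0 * d))

def f(x, y):
    # illustrative bifunction with Lipschitz-type constants c1 = c2 = 1
    return sum(2.0 * xi * (yi - xi) for xi, yi in zip(x, y))

random.seed(2)
lam = 1.0
for n in range(1, 2001):
    w, y, z = ([random.uniform(-1, 1) for _ in range(2)] for _ in range(3))
    lam = update(lam, 1.0 / n ** 2, 0.4, w, y, z, f)
print(lam)  # stays within [mu/(2*max(c1, c2)), lam_1 + P] = [0.2, 1 + pi^2/6]
```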
Lemma 9. The following useful inequality holds for Algorithm 3.
Proof. By Lemma 3, we have Thus, for , there exists a vector such that Hence, we have Since implies that for all , we have Since we have Combining expressions (31) and (32), we obtain the required inequality.
Lemma 10. In Algorithm 3, we also have the following useful inequality:
Proof. The proof is analogous to that of Lemma 9. Next, substituting , we have
Theorem 11. Let $\{u_n\}$ be a sequence generated by Algorithm 3, and suppose that conditions (1)–(5) are satisfied. Then, the sequence $\{u_n\}$ converges weakly to an element of the solution set.
Proof. By substituting into Lemma 9, we have By condition (2), we obtain From the expression in Algorithm 1, we obtain which, after multiplying both sides by , implies that Combining expressions (37) and (39), we obtain By using expression (35), we have Combining expressions (40) and (41), we have The following facts are available to us: Thus, we have Since , there exists a fixed natural number such that Thus, we have Furthermore, this implies that From expression (47), we obtain For every , this can be written as Combining expressions (18) and (49) with Lemma 5 implies that By using the definition of , we have By using expressions (50) and (19) in the above formula, we deduce that Thus, we have By using expressions (51) and (53), we obtain Consequently, this implies that By taking the limit as in expression (55), we obtain Thus, expressions (52) and (56) give By expressions (50), (52), and (57), the sequences , , and are bounded; therefore, , , and exist. Thus, , , . Next, we show that the sequence converges weakly to . Since all the sequences , , and are bounded, it suffices to demonstrate that each sequential weak cluster point of the sequence lies in . Consider a weak cluster point of ; that is, there is a subsequence of that converges weakly to . Then, is also weakly convergent to . Let us now demonstrate that . Combining Lemma 9 with expressions (39) and (35), we obtain where is any member of . Expression (56) and the boundedness of the sequence imply that the right-hand side of the last inequality converges to zero. By condition (4) and , we have Since is a subset of the half-space , it follows that This proves that . Thus, Lemma 4 ensures that , , and converge weakly to as .
We now present two iterative methods based on monotone and nonmonotone variable step size rules and on two strongly convex minimization problems, without the need for subgradient methods. The second major result is described as follows.
