Abstract

The purpose of this paper is to study a modified hybrid steepest-descent method, combined with a viscosity approximation method involving a weakly contractive mapping, for finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings and the set of solutions of a system of equilibrium problems. The sequence, generated from an arbitrary initial point, converges in norm to the unique solution of a variational inequality under suitable conditions in a real Hilbert space. The results presented in this paper generalize and improve the results of Moudafi (2000), Marino and Xu (2006), Tian (2010), Saeidi (2010), and some others. Finally, we give an application to minimization problems and a numerical example that supports our main theorem.

1. Introduction

The convex feasibility problem (CFP) is the problem of finding a point in the intersection of a finite family of closed convex subsets in the framework of Hilbert spaces; the formulation is sketched below. This problem plays an extremely important role in various fields, especially in applied mathematics and the physical sciences; moreover, it has a great impact on real-world applications (see [1, 2]). Well-known applications include the theory of optimization [3, 4], image reconstruction by the projection method [5], signal processing problems [6], and models for problems in sensor networks [7].
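In generic symbols (with $H$ denoting the real Hilbert space and $C_1, \dots, C_N$ the given nonempty closed convex sets; this notation is only assumed here for illustration), the CFP reads
\[
  \text{find } x^{*} \in \bigcap_{i=1}^{N} C_i .
\]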

We focus on an important subclass of convex feasibility problems in which finitely many sets are given. Each set can be specified in various forms, such as the fixed point set of a nonexpansive mapping, the set of solutions of a variational inequality, or the set of solutions of an equilibrium problem. In the framework of Hilbert spaces, convex feasibility problems have applications in various disciplines such as image restoration, computed tomography, and radiation therapy treatment planning [8].

Throughout this paper, we assume that is a real Hilbert space with inner product and norm , and let be a nonempty closed convex subset of . Let be bifunctions from to , where is the set of real numbers and is an arbitrary index set. The system of equilibrium problems (1.2) is to find a point satisfying all of these equilibrium conditions simultaneously; a standard formulation is sketched below. If is a singleton, then problem (1.2) reduces to the equilibrium problem (1.3): find a point satisfying the single equilibrium condition. The set of solutions of (1.3) is denoted by . The formulation (1.3) was shown in [7] to cover monotone inclusion problems, saddle point problems, variational inequality problems, minimization problems, optimization problems, vector equilibrium problems, and Nash equilibria in noncooperative games. In other words, it is a unifying model for several problems arising in physics, engineering, science, optimization, economics, and so forth. Combettes and Hirstoaga [9] introduced an iterative scheme for finding a common element of the solution set of problem (1.3) in a Hilbert space.
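For the reader's convenience, a standard way of writing the system of equilibrium problems and the single equilibrium problem is the following (here $C$ denotes the closed convex subset, $F_i \colon C \times C \to \mathbb{R}$ the bifunctions indexed by $i \in I$, and $\operatorname{EP}(F)$ the solution set of the single problem; this notation is assumed):
\[
  \text{(1.2)}\quad \text{find } x \in C \ \text{ such that } \ F_i(x, y) \ge 0 \quad \forall y \in C, \ \forall i \in I,
\]
\[
  \text{(1.3)}\quad \text{find } x \in C \ \text{ such that } \ F(x, y) \ge 0 \quad \forall y \in C, \qquad \operatorname{EP}(F) := \{ x \in C : F(x, y) \ge 0 \ \forall y \in C \}.
\]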

Equilibrium problems include fixed point problems, optimization problems, variational inequality problems, Nash equilibrium problems, noncooperative games, and problems in economics as special cases; see, for example, [7, 10–14]. Several methods have been proposed to solve the equilibrium problem; see, for instance, [15–22].

Let be a mapping. The variational inequality problem, denoted by , is to find a point satisfying the inequality sketched below. Existence and uniqueness of solutions are among the most important questions for . The variational inequality problem has been extensively studied in the literature; see, for example, [23, 24] and the references therein. It is known that if is a strongly monotone and Lipschitzian mapping on , then has a unique solution. Variational inequalities are among the most interesting and important mathematical problems and have been studied intensively in the past years, since they have wide applications in optimization and control, economics and transportation equilibrium, and engineering science. For these reasons, many existence results and iterative algorithms for various variational inequality and inclusion problems have been studied extensively by many authors. For details, see [2, 7, 23–25] and the references therein.
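In the usual notation (assumed here), with $A \colon C \to H$ a given nonlinear mapping, the variational inequality problem $\operatorname{VI}(C, A)$ reads
\[
  \text{find } x^{*} \in C \ \text{ such that } \ \langle A x^{*}, x - x^{*} \rangle \ge 0 \quad \forall x \in C .
\]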

On the other hand, iterative methods for nonexpansive mappings have recently been applied to solve convex minimization problems. Convex minimization problems have had a great impact and influence on the development of almost all branches of pure and applied sciences.

A mapping is called nonexpansive if , for all . We use to denote the set of fixed points of , that is, . Recall that a self-mapping is a contractive mapping on if there exists a constant such that , for all . A mapping is said to be -Lipschitzian if there exists a constant such that , for all . The concept of quasi-nonexpansiveness was introduced by Diaz and Metcalf [26]. A mapping is said to be quasi-nonexpansive if , for all and . These classes are written out explicitly below.
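Written out with generic symbols (a mapping $T$ or $f$ on $C$, constants $\alpha \in [0,1)$ and $k > 0$, and $\operatorname{Fix}(T)$ for the fixed point set; this notation is assumed), these classes are:
\[
\begin{aligned}
  &\text{nonexpansive:} && \|Tx - Ty\| \le \|x - y\| \quad \forall x, y \in C,\\
  &\text{contractive with coefficient } \alpha\text{:} && \|f(x) - f(y)\| \le \alpha \|x - y\| \quad \forall x, y \in C,\\
  &k\text{-Lipschitzian:} && \|Tx - Ty\| \le k \|x - y\| \quad \forall x, y \in C,\\
  &\text{quasi-nonexpansive:} && \|Tx - p\| \le \|x - p\| \quad \forall x \in C, \ p \in \operatorname{Fix}(T).
\end{aligned}
\]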

In 2000, Moudafi [27] introduced the viscosity approximation method for a nonexpansive mapping . Let be a contraction on ; starting with an arbitrary initial point , a sequence is defined recursively by scheme (1.5), where is a sequence in . Xu [28] proved that, under certain appropriate conditions on , the sequence generated by (1.5) converges strongly to the unique solution of an associated variational inequality. In 2006, Marino and Xu [29] introduced the iterative scheme (1.7). It was proved that if the sequence of parameters satisfies appropriate conditions, then the sequence generated by (1.7) converges strongly to the unique solution of a variational inequality, which is the optimality condition for a minimization problem whose objective involves a potential function for (i.e., , for ). Here is assumed to be a strongly positive bounded linear operator; that is, there is a constant for which the strong positivity property holds. Both schemes are sketched below. In 2007, Suzuki [30] extended Moudafi's viscosity approximations to Meir–Keeler contractions and presented very simple proofs of Xu's theorems by considering Moudafi's approximations.
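In the customary notation (assumed here), with $T$ nonexpansive, $f$ a contraction, $A$ a strongly positive bounded linear operator, $\gamma > 0$, and $\{\alpha_n\} \subset (0, 1)$, the two schemes read
\[
  \text{(1.5)}\quad x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) T x_n, \qquad
  \text{(1.7)}\quad x_{n+1} = \alpha_n \gamma f(x_n) + (I - \alpha_n A) T x_n, \qquad n \ge 0,
\]
and their limits solve, respectively, $\langle (I - f) x^{*}, x - x^{*} \rangle \ge 0$ and $\langle (A - \gamma f) x^{*}, x - x^{*} \rangle \ge 0$ for all $x \in \operatorname{Fix}(T)$; strong positivity of $A$ means $\langle A x, x \rangle \ge \bar{\gamma} \|x\|^{2}$ for some constant $\bar{\gamma} > 0$.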

On the other hand, Yamada [31] introduced a hybrid iterative scheme, (1.11), for solving the variational inequality, where is a -Lipschitzian and -strongly monotone operator with ; the scheme is sketched below. He proved that if the parameters satisfy some appropriate conditions, then the sequence generated by (1.11) converges strongly to the unique solution of the variational inequality.
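In the usual notation (assumed here), with $F$ a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator and $0 < \mu < 2\eta/\kappa^{2}$, Yamada's scheme (1.11) is commonly written as
\[
  x_{n+1} = T x_n - \mu \lambda_n F(T x_n), \qquad n \ge 0,
\]
and its limit solves $\langle F x^{*}, x - x^{*} \rangle \ge 0$ for all $x \in \operatorname{Fix}(T)$.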

In 2010, Tian [32] combined (1.7) and (1.11) and considered the general iterative method (1.13), sketched below. If the sequence of parameters satisfies appropriate conditions, then the sequence generated by (1.13) converges strongly to the unique solution of the variational inequality.
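With the same generic notation as above (assumed here), Tian's scheme (1.13) and its limiting variational inequality are usually stated as
\[
  x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu \alpha_n F) T x_n, \qquad
  \langle (\gamma f - \mu F) x^{*}, x - x^{*} \rangle \le 0 \quad \forall x \in \operatorname{Fix}(T).
\]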

Later, Saeidi [33] introduced a modified hybrid steepest-descent iterative algorithm for finding a common element of the set of solutions of a system of equilibrium problems for a family and the set of common fixed points of an infinite family of nonexpansive mappings , with respect to -mappings (see (2.14)). In the proposed scheme, the steepest-descent direction is driven by a relaxed -cocoercive, -Lipschitzian mapping such that . Then, under weaker hypotheses on the coefficients, he proved the strong convergence of the proposed iterative algorithm to the unique solution of a variational inequality. Zhang et al. [34] introduced a modified iterative algorithm by using a viscosity approximation method with a weakly contractive mapping, again with respect to -mappings (see (2.14)); in their scheme, the viscosity term is a -weakly contractive self-mapping on , and the parameter sequence lies in . They proved that, under certain appropriate conditions imposed on the parameters, the proposed iterative algorithm converges strongly to a common element of the set of common fixed points of an infinite family of nonexpansive mappings and the set of solutions of a finite family of equilibrium problems.

In this paper, motivated and inspired by the results mentioned above, we consider a modified hybrid steepest-descent method by using a viscosity approximation method with a weakly contractive mapping for finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings and the set of solutions of a system of equilibrium problems. The sequence, generated from an arbitrary initial point, converges in norm to the unique solution of a variational inequality under suitable conditions in a real Hilbert space. Furthermore, we give an application to minimization problems and a numerical example that supports our main theorem in the last part.

2. Preliminaries

Let be a real Hilbert space and a nonempty closed convex subset of . We denote weak convergence and strong convergence by and , respectively. Recall that the metric (nearest point) projection from onto assigns to each point the unique nearest point of , characterized by the corresponding minimizing property. The following lemma characterizes the projection .

An important problem is how to find a solution of . It is known that a point solves if and only if it is a fixed point of a composed projection mapping, where an arbitrarily fixed positive constant serves as the step size and the projection is onto ; this characterization is sketched below.
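In standard form (with $P_C$ the metric projection onto $C$, $F$ the mapping of the variational inequality, and $\lambda > 0$ an arbitrary fixed constant; notation assumed), this characterization reads
\[
  x^{*} \in \operatorname{VI}(C, F) \iff x^{*} = P_C\bigl(x^{*} - \lambda F x^{*}\bigr).
\]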

We recall some lemmas which will be needed in the rest of this paper.

Lemma 2.1. For a given point of , its nearest point projection onto is characterized by a variational inequality. It is well known that the projection is a firmly nonexpansive mapping of onto and satisfies the corresponding inequality. Moreover, the projection is characterized by the properties sketched below, valid for all points of and .
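In the usual notation (assumed here), the characterization and the firm nonexpansivity of the projection read
\[
  u = P_C z \iff \langle z - u, v - u \rangle \le 0 \quad \forall v \in C, \qquad
  \|P_C x - P_C y\|^{2} \le \langle P_C x - P_C y, \, x - y \rangle \quad \forall x, y \in H .
\]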

Definition 2.2. Consider a mapping with domain and range in . Alber and Guerre-Delabriere [35] called such a mapping -weakly contractive if it satisfies, for some continuous and strictly increasing function that vanishes at zero and is positive on , the inequality sketched below. If the function is linear with slope , then the mapping is said to be contractive with contractive coefficient . If the function vanishes identically, then the mapping is said to be nonexpansive. If, in addition, the mapping has a fixed point, then it is said to be quasi-nonexpansive.
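Explicitly (with $\psi$ denoting the gauge function; notation assumed), a $\psi$-weakly contractive mapping $T$ satisfies
\[
  \|Tx - Ty\| \le \|x - y\| - \psi\bigl(\|x - y\|\bigr) \quad \forall x, y \in C,
\]
where $\psi \colon [0, \infty) \to [0, \infty)$ is continuous and strictly increasing with $\psi(0) = 0$ and $\psi(t) > 0$ for $t > 0$; the choice $\psi(t) = (1 - \alpha) t$ recovers a contraction with coefficient $\alpha$, and $\psi \equiv 0$ recovers nonexpansivity.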

Definition 2.3. A mapping is said to be -strongly monotone if there exists a constant with the property sketched below.
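In symbols (notation assumed), $\eta$-strong monotonicity of a mapping $F$ means
\[
  \langle F x - F y, \, x - y \rangle \ge \eta \|x - y\|^{2} \quad \forall x, y \in C .
\]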

Definition 2.4. A mapping is said to be relaxed -cocoercive if there exist two constants and satisfying the property sketched below.
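In symbols (notation assumed), relaxed $(\gamma, r)$-cocoercivity of a mapping $F$ means
\[
  \langle F x - F y, \, x - y \rangle \ge -\gamma \|F x - F y\|^{2} + r \|x - y\|^{2} \quad \forall x, y \in C .
\]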

Lemma 2.5 (see [28]). Assume that is a sequence of nonnegative real numbers such that where is a sequence in , and is a sequence in such that (1), (2) or . Then .
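In explicit form (notation assumed), Lemma 2.5 asserts: if $a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n$ with $\{\gamma_n\} \subset (0, 1)$, $\sum_{n=1}^{\infty} \gamma_n = \infty$, and either $\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=1}^{\infty} |\gamma_n \delta_n| < \infty$, then $\lim_{n \to \infty} a_n = 0$.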

Lemma 2.6 (see [36]). Let be a closed convex subset of a real Hilbert space and let be a nonexpansive mapping. Then is demiclosed at zero, that is, implies .

Lemma 2.7 (see [37]). Let be a closed convex subset of . Let be a bounded sequence in . Assume that (1)the weak -limit set , (2)for each , exists. Then is weakly convergent to a point in .

Lemma 2.8 (see [38]). Each Hilbert space satisfies Opial's condition, that is, for any sequence with , the inequality, holds for each with .

Lemma 2.9 (see [39]). Each Hilbert space satisfies the Kadec-Klee property, that is, for any sequence with and together with implies .

For solving the equilibrium problem, let us give the following assumptions for a bifunction of into , which were imposed in [9, 40]: (A1) for all ; (A2) is monotone, that is, for all ; (A3) for each ; (A4) for each , is convex and lower semicontinuous.
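In the standard form of [9, 40] (notation assumed), these conditions read:
\[
\begin{aligned}
  &(\mathrm{A1}) && F(x, x) = 0 \quad \forall x \in C;\\
  &(\mathrm{A2}) && F \ \text{is monotone, that is, } F(x, y) + F(y, x) \le 0 \quad \forall x, y \in C;\\
  &(\mathrm{A3}) && \limsup_{t \downarrow 0} F\bigl(t z + (1 - t) x, \, y\bigr) \le F(x, y) \quad \forall x, y, z \in C;\\
  &(\mathrm{A4}) && \text{for each } x \in C, \ y \mapsto F(x, y) \ \text{is convex and lower semicontinuous.}
\end{aligned}
\]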

Lemma 2.10 (see [9, 40]). Let be a nonempty closed convex subset of , and let be a bifunction of into satisfying (A1)–(A4). If and , then there exists such that

Lemma 2.11 (see [9]). Let be a nonempty closed convex subset of , and let be a bifunction of into satisfying (A1)–(A4). For and , define a mapping as sketched below. Then, the following conclusions hold: (1) is single-valued; (2) is firmly nonexpansive, that is, the corresponding inequality holds for any ; (3) ; (4) is closed and convex.
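Written out (with $r > 0$ and $x \in H$; notation assumed), the resolvent of Lemma 2.11 is
\[
  T_r x = \Bigl\{ z \in C : F(z, y) + \tfrac{1}{r} \langle y - z, \, z - x \rangle \ge 0 \ \ \forall y \in C \Bigr\},
\]
firm nonexpansivity means $\|T_r x - T_r y\|^{2} \le \langle T_r x - T_r y, \, x - y \rangle$ for all $x, y \in H$, and $\operatorname{Fix}(T_r) = \operatorname{EP}(F)$.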

A family of nonexpansive mappings has been considered by many authors (see [41–52] and the references therein). Recently, Shang et al. [47] improved the results of Kim and Xu [53] from a single mapping to a finite family of mappings in the framework of Hilbert spaces.

Now, we consider the mapping defined, as in Shimoji and Takahashi [48], by the recursion sketched below, where the parameters are real numbers such that and the mappings form an infinite family of mappings of into itself. The nonexpansivity of each ensures the nonexpansivity of .
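In the notation of [48] (assumed here), the mappings $W_n$ are generated recursively by
\[
\begin{aligned}
  U_{n, n+1} &= I,\\
  U_{n, k} &= \lambda_k T_k U_{n, k+1} + (1 - \lambda_k) I, \qquad k = n, n-1, \dots, 1,\\
  W_n &= U_{n, 1} = \lambda_1 T_1 U_{n, 2} + (1 - \lambda_1) I .
\end{aligned}
\]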

Lemma 2.12 (see [48]). Let be a real Hilbert space . Let be nonexpansive mappings from into itself such that and are real numbers such that , for all . Then, for every and , the limit exists.

Using Lemma 2.12, one can define the mapping from into itself as follows:

Such is called the -mapping generated by and .

Lemma 2.13 (see [48]). Let be a real Hilbert space . Let be nonexpansive mappings from into itself such that and are real numbers such that , for all . Then, .

3. Main Results

In this section, we introduce an iterative scheme based on a modified hybrid steepest-descent method, together with a weakly contractive mapping, for finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings and the set of solutions of a system of equilibrium problems in a real Hilbert space.

Theorem 3.1. Let be a nonempty closed convex subset of a real Hilbert space such that . Let be an infinite family of nonexpansive mappings, and let be a finite family of bifunctions from to satisfying (A1)–(A4). Assume that . Let be a -Lipschitzian and -strongly monotone mapping on with . Let be a -weakly contractive self-mapping on with . Denote by the collection of all weakly contractive mappings on . Let and . Let the mapping be defined by (2.14), and let be a sequence in . Suppose is the sequence generated from an arbitrary initial point by the proposed scheme, where is a sequence in satisfying the following conditions: (C1) and ; (C2) ; (C3) ; (C4) , for all . Then, the sequence converges strongly to , which is the unique solution of the variational inequality (3.2); this is the optimality condition for the minimization problem, where is a potential function for (i.e., , for ).

Proof. We will divide the proof of Theorem 3.1 into several steps.
Step 1. We will show that is bounded. Let . Take for and , for all . Since is nonexpansive for each , we have the first estimate. From Lemmas 2.11 and 2.12, it follows that the iterates satisfy the corresponding bound. By mathematical induction, we obtain that is bounded, and so are and .
Step 2. We claim that (3.7) holds for every . From Step 2 of the proof of [54, Theorem 3.1], we have, for , the estimate (3.8). Note that for every , we obtain an intermediate estimate, and so we arrive at (3.10). Now, applying (3.8) to (3.10), we conclude (3.7).
Step 3. We may assume that . Let be a bounded sequence in . Then, we show that . Indeed, since is bounded and is a Lipschitzian mapping, condition (C2) gives the displayed estimate, where is an appropriate constant such that . Hence as .
Step 4. We show that . By the definition of , it follows that the first estimate holds, where is an appropriate constant such that . Since for all and , we compute the next bound, where is an appropriate constant such that . It follows that (3.14) holds. Substituting (3.14) into (3.12) yields the desired estimate, where is an appropriate constant such that . By condition (C3), we obtain that as .
Step 5. We will show the stated convergence. We observe the estimate, where is an appropriate constant such that . We then compute the required bound. By Steps 2 and 4, we immediately conclude the claim from (3.17). By Lemma 2.5, we have .
Step 6. We will show the stated convergence. For any and for all , note that is firmly nonexpansive. Then, by Lemma 2.11, we get the corresponding inequality and hence the estimate (3.22). By (3.22), we compute the next bound, and so we obtain the required estimate. Using condition (C1) and (3.16), we obtain the claim.
Step 7. Next, we show the stated convergence. Since, by condition (C1) and (3.16), the corresponding terms tend to zero, we get the desired limit as .
Step 8. We show that the weak -limit set of is a subset of . Let , and let be a subsequence of which converges weakly to . By Step 6, without loss of generality, we may assume the corresponding convergence. We need to show that . First, note that by (A2), for given and , we have the stated inequality, and thus the next one. By (A4), the function is convex and lower semicontinuous, and thus weakly lower semicontinuous. Condition (C3) and (3.20) imply the convergence in norm. Therefore, letting in (3.30) yields the inequality for all and . Replacing the variables appropriately and using (A1) and (A4), we obtain the next inequality; hence , for all and . Letting and using (A3), we conclude , for all and . Therefore, the first inclusion holds. Next, we show that . By Lemma 2.12, we have the corresponding identity for all and . Assume that the conclusion fails; then . From Opial's property of Hilbert spaces, (3.26), (3.34), and (3.35), we obtain a contradiction. Therefore, must belong to .
Step 9. We show that , where . The Banach contraction mapping principle guarantees that has a unique fixed point, which is the unique solution of (3.2). Let be a subsequence of such that the corresponding limit superior is attained. Without loss of generality, we can assume that converges weakly to some . It follows from Lemma 2.6 and the previous steps that . Hence, by (3.2), we obtain the desired inequality.
Step 10. Finally, we show that . As a matter of fact, we have the estimate with the indicated coefficients. It is easy to see that the coefficient conditions of Lemma 2.5 are satisfied, so we conclude that ; this completes the proof.

Corollary 3.2. Let be a nonempty closed convex subset of a real Hilbert space such that . Let be an infinite family of nonexpansive mappings, and let be a finite family of bifunctions from to satisfying (A1)–(A4). Assume that . Let be a -Lipschitzian and -strongly monotone mapping on with . Let be a -weakly contractive self-mapping on with . Denote by the collection of all weakly contractive mappings on . Let and . Let the mapping be defined by (2.14), and let be a sequence in . If is the sequence generated by and the corresponding scheme, where is a sequence in which satisfies conditions (C1)–(C4), then the sequence converges strongly to , which is the unique solution of the variational inequality; this is the optimality condition for the minimization problem, where is a potential function for (i.e., , for ).

Proof. Taking , in Theorem 3.1, it is easy to obtain the desired conclusion.

Corollary 3.3. Let be a nonempty closed convex subset of a real Hilbert space such that . Let be an infinite family of nonexpansive mappings, and let be a finite family of bifunctions from to satisfying (A1)–(A4). Assume that . Let be a -Lipschitzian and -strongly monotone mapping on , and let be a contraction self-mapping on with . Denote by the collection of all contractions on . Let and . Let the mapping be defined by (2.14), and let be a sequence in . If is the sequence generated by and the corresponding scheme, where is a sequence in which satisfies conditions (C1)–(C4), then the sequence converges strongly to , which is the unique solution of the variational inequality; this is the optimality condition for the minimization problem, where is a potential function for (i.e., , for ).

Proof. Taking in Theorem 3.1, it is easy to obtain the desired conclusion.

Corollary 3.4. Let be a nonempty closed convex subset of a real Hilbert space such that , and let be a finite family of bifunctions from to satisfying (A1)–(A4). Assume that . Let be a -Lipschitzian and -strongly monotone mapping on with . Let be a -weakly contractive self-mapping on with . Denote by the collection of all weakly contractive mappings on , and let with . Let and . Let be a sequence in . If is the sequence generated by and the corresponding scheme, where is a sequence in which satisfies conditions (C1), (C2), and (C4) of Theorem 3.1, then the sequence converges strongly to , which is the unique solution of the variational inequality; this is the optimality condition for the minimization problem, where is a potential function for (i.e., , for ).

Proof. Taking in Theorem 3.1, it is easy to obtain the desired conclusion.

Corollary 3.5. Let be a nonempty closed convex subset of a real Hilbert space such that , and let be an infinite family of nonexpansive mappings. Assume that . Let be a -Lipschitzian and -strongly monotone mapping on with . Let be a -weakly contractive self-mapping on with . Denote by the collection of all weakly contractive mappings on . Let and . Let the mapping be defined by (2.14). If is the sequence generated by and the corresponding scheme, where is a sequence in which satisfies conditions (C1)–(C3), then the sequence converges strongly to , which is the unique solution of the variational inequality; this is the optimality condition for the minimization problem, where is a potential function for (i.e., , for ).

Proof. Taking , for each in Theorem 3.1, it is easy to obtain the desired conclusion.

Corollary 3.6. Let be a nonempty closed convex subset of a real Hilbert space such that , and let be a nonexpansive mapping with . Let be a -Lipschitzian and -strongly monotone mapping on with . Let be a -weakly contractive self-mapping on with . Denote by the collection of all weakly contractive mappings on . Let and . If is the sequence generated by and the corresponding scheme, where is a sequence in which satisfies conditions (C1)–(C3), then the sequence converges strongly to , which is the unique solution of the variational inequality; this is the optimality condition for the minimization problem, where is a potential function for (i.e., , for ).

Proof. Taking , for each , and replacing by a nonexpansive mapping in Theorem 3.1, it is easy to obtain the desired conclusion.

4. An Example and Numerical Result

In this section, we give a simple numerical example illustrating Theorem 3.1, as follows.

Example 4.1. For simplicity, let for all , for every , and . Then is the sequence generated by and the corresponding scheme, and as , where is the unique solution of the minimization problem, in which is a constant.

Proof. We divide the proof into 4 steps.
Step 1. Using the idea in [55], we can show the first identity, with the stated expression. Since for all , with the definition of for all in Lemma 2.13, we have the corresponding formula. By the characterization of the nearest point projection from to , we can conclude the claim if we take . By (3) in Lemma 2.11, we have the required identity. Step 2. We show that is a nonexpansive mapping. By (2.14), we have the first expression, and computing (2.14) in the same way as above, we obtain the second. Since , the claimed inequality follows.
Step 3. We prove the stated convergence, where is the unique solution of the minimization problem. Since we let be a real number, we choose . From (4.3), (4.4), and (4.7), we obtain a special case of the sequence of Theorem 3.1 as follows. Since , we have the simplified recursion; combining it with (4.6), we obtain the explicit form. It is obvious that the sequence converges, and is the unique solution of the minimization problem, in which is a constant.

5. Numerical Result

In this section, we give numerical results (see Table 1) that support our main theorem, as shown by the graph plotted using MATLAB 7.11.0. We choose the initial values as in Figure 1. From the example, we can see that the sequence converges to the expected solution.
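To illustrate this behavior, the following minimal MATLAB sketch runs a Tian-type iteration on the real line with simple one-dimensional mappings; these particular choices of T, f, F, and the parameters are illustrative assumptions and are not the exact mappings of Example 4.1.

% Minimal illustrative sketch (assumed one-dimensional mappings, not those of Example 4.1):
% iterate x_{n+1} = alpha_n*gam*f(x_n) + (I - mu*alpha_n*F)(T x_n) on H = R.
T = @(x) x/2;          % nonexpansive self-mapping with fixed point 0
f = @(x) x/3;          % (1/3)-contraction, hence weakly contractive
F = @(x) x;            % 1-Lipschitzian and 1-strongly monotone
mu = 1; gam = 1;       % 0 < mu < 2*eta/kappa^2 = 2, and gam*alpha = 1/3 < mu*(eta - mu*kappa^2/2) = 1/2
x = 10;                % arbitrary initial point
for n = 1:50
    a = 1/(n + 1);     % alpha_n -> 0 with sum alpha_n = infinity
    x = a*gam*f(x) + T(x) - mu*a*F(T(x));
    fprintf('n = %2d,  x_n = %.6f\n', n, x);
end
% The iterates tend to 0, the common fixed point, which trivially solves the
% limiting variational inequality over the fixed point set.

In this simple setting the recursion reduces to $x_{n+1} = (1/2 - \alpha_n/6)\, x_n$, so the decay of the iterates to zero can also be checked by hand.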

Acknowledgments

This work was partially supported by the Higher Education Research Promotion and National Research University Project of Thailand, Office of the Higher Education Commission (NRU55-CSEC no. 55000613). Also, the first author would like to thank the Office of the Higher Education Commission, Thailand, for the financial support of the Ph.D. Program at KMUTT, and the second author was supported by Rajamangala University of Technology Lanna Research and Development Institute for the Ph.D. Program at KMUTT. Moreover, the third author was supported by the Higher Education Research Promotion and National Research University Project of Thailand, Office of the Higher Education Commission, for financial support during the preparation of this paper. Finally, the authors would like to thank the referees for their careful readings and valuable suggestions to improve the writing of this paper.