Journal of Applied Mathematics
Volume 2012 (2012), Article ID 202860, 16 pages
An Alternative Regularization Method for Equilibrium Problems and Fixed Point of Nonexpansive Mappings
Department of Mathematics, Tianjin Polytechnic University, Tianjin 300160, China
Received 16 December 2011; Accepted 26 December 2011
Academic Editor: Rudong Chen
Copyright © 2012 Shuo Sun. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We introduce a new regularization iterative algorithm for equilibrium and fixed point problems of nonexpansive mappings. We then prove a strong convergence theorem showing that the generated sequences converge strongly to the unique solution of a variational inequality, which is also characterized via the unique sunny nonexpansive retraction. Our results extend and improve the results of S. Takahashi and W. Takahashi (2007) and many others.
Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. Let $F$ be a bifunction of $C\times C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers. The equilibrium problem for $F$ is to find $x\in C$ such that
$$F(x,y)\ge 0\quad\text{for all } y\in C.\tag{1.1}$$
The set of solutions of (1.1) is denoted by $EP(F)$. Given a mapping $A:C\to H$, let $F(x,y)=\langle Ax,\,y-x\rangle$ for all $x,y\in C$. Then $z\in EP(F)$ if and only if $\langle Az,\,y-z\rangle\ge 0$ for all $y\in C$, that is, $z$ is a solution of the variational inequality. Numerous problems in physics, optimization, and economics reduce to finding a solution of (1.1). Some methods have been proposed to solve the equilibrium problem; see, for instance, [1–6].
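To make the equilibrium problem concrete, here is a minimal numerical sketch (not from the paper): take C = [0, 1] and the hypothetical bifunction F(x, y) = (y - x)(x - a), which arises from the mapping Ax = x - a; the point x = a solves the problem, while other points fail the defining inequality.

```python
# Hypothetical illustration: equilibrium problem F(x, y) >= 0 for all y in C,
# with C = [0, 1] and F(x, y) = (y - x) * (x - a), i.e. Ax = x - a.
a = 0.3

def F(x, y):
    return (y - x) * (x - a)

def solves_ep(x, samples=1001, tol=1e-12):
    """Check F(x, y) >= 0 for a grid of y in C = [0, 1]."""
    return all(F(x, i / (samples - 1)) >= -tol for i in range(samples))

print(solves_ep(a))    # x = a satisfies the inequality for every sampled y
print(solves_ep(0.8))  # an arbitrary other point does not
```

Here `solves_ep` simply samples the defining inequality on a grid, which is enough for a one-dimensional illustration.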
A mapping $T$ of $C$ into $H$ is said to be nonexpansive if
$$\|Tx-Ty\|\le\|x-y\|\quad\text{for all } x,y\in C.\tag{1.2}$$
We denote by $F(T)$ the set of fixed points of $T$. The fixed point equation $Tx=x$ is ill-posed in general (it may fail to have a solution, and a solution need not be unique), so regularization is needed. Contractions can be used to regularize nonexpansive mappings. In fact, the following regularization has widely been implemented ([7–9]). Fixing a point $u\in C$ and for each $t\in(0,1)$, one defines a contraction $T_t:C\to C$ by
$$T_tx=tu+(1-t)Tx,\quad x\in C.\tag{1.3}$$
In this paper we provide an alternative regularization method. Our idea is to shrink $x$ first and then apply $T$ to the convex combination of the shrunk $x$ and the anchor $u$ (this idea appeared implicitly in earlier work on iterative methods for finding zeros of maximal monotone operators). In other words, we fix an anchor $u\in C$ and $t\in(0,1)$ and define a contraction $S_t:C\to C$ by
$$S_tx=T(tu+(1-t)x),\quad x\in C.\tag{1.4}$$
Compared with (1.3), (1.4) looks slightly more compact in the sense that the mapping $T$ is more directly involved in the regularization, and it may be more convenient in manipulations since the nonexpansivity of $T$ is utilized first.
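As a sanity check on the two regularizations above, here is a hypothetical one-dimensional sketch: T = cos is nonexpansive on the real line (|cos'| <= 1) with a unique fixed point d ~ 0.739085, and for a small parameter t both the classical contraction t*u + (1 - t)*T(x) and the alternative contraction T(t*u + (1 - t)*x) have fixed points close to d.

```python
import math

# Hypothetical one-dimensional illustration: T = cos is nonexpansive on R
# with a unique fixed point d ~ 0.7390851 (the Dottie number).
T = math.cos
u = 0.0  # anchor

def fixed_point(S, x=0.5, iters=20000):
    """Banach iteration for a contraction S."""
    for _ in range(iters):
        x = S(x)
    return x

t = 1e-3
x_classic = fixed_point(lambda x: t * u + (1 - t) * T(x))  # scheme (1.3)-style
x_alt     = fixed_point(lambda x: T(t * u + (1 - t) * x))  # scheme (1.4)-style

d = fixed_point(T)  # fixed point of T itself
print(x_classic, x_alt, d)  # all three agree to within O(t)
```

Both regularized fixed points differ from d by an amount of order t, illustrating why letting the parameter tend to zero recovers a fixed point of T.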
In 2000, Moudafi proved the following strong convergence theorem.
Theorem 1.1 (Moudafi). Let $C$ be a nonempty closed convex subset of a Hilbert space $H$ and let $T$ be a nonexpansive mapping of $C$ into itself such that $F(T)$ is nonempty. Let $f$ be a contraction of $C$ into itself and let $\{x_n\}$ be a sequence defined as follows: $x_1=x\in C$ and
$$x_{n+1}=\frac{1}{1+\varepsilon_n}Tx_n+\frac{\varepsilon_n}{1+\varepsilon_n}f(x_n)$$
for all $n\in\mathbb{N}$, where $\{\varepsilon_n\}\subset(0,1)$ satisfies $\lim_{n\to\infty}\varepsilon_n=0$, $\sum_{n=1}^{\infty}\varepsilon_n=\infty$, and $\lim_{n\to\infty}|1/\varepsilon_{n+1}-1/\varepsilon_n|=0$. Then $\{x_n\}$ converges strongly to $z\in F(T)$, where $z=P_{F(T)}f(z)$ and $P_{F(T)}$ is the metric projection of $H$ onto $F(T)$.
Such a method for approximation of fixed points is called the viscosity approximation method.
In 2007, S. Takahashi and W. Takahashi introduced and considered the following iterative algorithm by the viscosity approximation method in a Hilbert space: $x_1=x\in H$ and
$$F(u_n,y)+\frac{1}{r_n}\langle y-u_n,\,u_n-x_n\rangle\ge 0\quad\text{for all } y\in C,$$
$$x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)Tu_n$$
for all $n\in\mathbb{N}$, where $\{\alpha_n\}\subset[0,1]$ and $\{r_n\}\subset(0,\infty)$ satisfy some appropriate conditions. Furthermore, they proved that $\{x_n\}$ and $\{u_n\}$ converge strongly to $z\in F(T)\cap EP(F)$, where $z=P_{F(T)\cap EP(F)}f(z)$.
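A rough way to see this scheme in action (an illustration, not the paper's setting): with the trivial bifunction F = 0 the auxiliary inequality forces u_n = x_n, and the algorithm collapses to the viscosity iteration x_{n+1} = a_n f(x_n) + (1 - a_n) T x_n. The choices below (T = cos, f(x) = x/2, a_n = 1/(n+1)) are hypothetical but satisfy the usual conditions.

```python
import math

# Hypothetical sketch: with the trivial bifunction F = 0, the resolvent step
# gives u_n = x_n, and the scheme reduces to the viscosity iteration
#   x_{n+1} = a_n * f(x_n) + (1 - a_n) * T(x_n).
T = math.cos
f = lambda x: 0.5 * x  # a contraction

x = 0.0
for n in range(1, 200000):
    a_n = 1.0 / (n + 1)  # a_n -> 0 and sum a_n diverges
    x = a_n * f(x) + (1 - a_n) * T(x)

print(x)  # approaches the unique fixed point of cos (~0.7390851)
```

Since cos has the single fixed point d, the limit z = P_{F(T)} f(z) here is just d itself.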
In this paper, motivated and inspired by the above results, we introduce an iterative scheme based on the general iterative method for finding a common element of the set of solutions of (1.1) and the set of fixed points of a nonexpansive mapping in a Hilbert space.
Let be a nonexpansive mapping. Starting with an arbitrary element , define sequences and by (1.8). We will prove in Section 3 that if the sequences , , and of parameters satisfy appropriate conditions, then the sequences and generated by (1.8) converge strongly to the unique solution of the variational inequality , which is the optimality condition for the minimization problem , where is a potential function for ; at the same time, the sequences and generated by (1.8) converge in norm to , where is the sunny nonexpansive retraction.
Throughout this paper, $H$ denotes a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and $C$ is a nonempty closed convex subset of $H$. Consider a subset $D$ of $C$ and a mapping $Q:C\to D$. Then we say that: (i) $Q$ is a retraction provided $Qx=x$ for all $x\in D$; (ii) $Q$ is a nonexpansive retraction provided $Q$ is a retraction that is also nonexpansive; (iii) $Q$ is a sunny nonexpansive retraction provided $Q$ is a nonexpansive retraction with the additional property $Q(Qx+t(x-Qx))=Qx$ whenever $Qx+t(x-Qx)\in C$, where $x\in C$ and $t\ge 0$.
Now let $T:C\to C$ be a nonexpansive mapping. For a fixed anchor $u\in C$ and each $t\in(0,1)$, recall that $x_t\in C$ is the unique fixed point of the contraction $S_t$. Namely, $x_t$ is the unique solution in $C$ to the fixed point equation
$$x_t=T(tu+(1-t)x_t).\tag{2.1}$$
In a Hilbert space (or, more generally, in a Banach space that is either uniformly smooth or reflexive with a weakly continuous duality map), $\{x_t\}$ is strongly convergent as $t\to 0^+$ provided it is bounded.
For solving the equilibrium problem for a bifunction $F:C\times C\to\mathbb{R}$, let us assume that $F$ satisfies the following conditions:
(A1) $F(x,x)=0$ for all $x\in C$;
(A2) $F$ is monotone, that is, $F(x,y)+F(y,x)\le 0$ for all $x,y\in C$;
(A3) for each $x,y,z\in C$, $\lim_{t\downarrow 0}F(tz+(1-t)x,\,y)\le F(x,y)$;
(A4) for each $x\in C$, the function $y\mapsto F(x,y)$ is convex and lower semicontinuous.
The following lemma appeared implicitly in the literature.
Lemma 2.1. Let $C$ be a nonempty closed convex subset of $H$ and let $F$ be a bifunction of $C\times C$ into $\mathbb{R}$ satisfying (A1)–(A4). Let $r>0$ and $x\in H$. Then there exists $z\in C$ such that
$$F(z,y)+\frac{1}{r}\langle y-z,\,z-x\rangle\ge 0\quad\text{for all } y\in C.$$
Lemma 2.2. Assume that $F:C\times C\to\mathbb{R}$ satisfies (A1)–(A4). For $r>0$ and $x\in H$, define a mapping $T_r:H\to C$ as follows:
$$T_r(x)=\Big\{z\in C : F(z,y)+\frac{1}{r}\langle y-z,\,z-x\rangle\ge 0\ \text{for all } y\in C\Big\}$$
for all $x\in H$. Then the following hold: (1) $T_r$ is single-valued; (2) $T_r$ is firmly nonexpansive, that is, for any $x,y\in H$, $\|T_rx-T_ry\|^2\le\langle T_rx-T_ry,\,x-y\rangle$; (3) $F(T_r)=EP(F)$; (4) $EP(F)$ is closed and convex.
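Lemma 2.2 can be checked by hand in one dimension. Assuming the hypothetical bifunction F(z, y) = (y - z)(z - a) on C = R, the defining inequality forces (z - a) + (z - x)/r = 0, so the resolvent has the closed form T_r x = (x + r a)/(1 + r); the sketch below verifies firm nonexpansivity numerically and checks that the equilibrium point a is a fixed point of T_r.

```python
# Hypothetical one-dimensional example (not from the paper): on C = R with
# F(z, y) = (y - z) * (z - a), the defining inequality
#   F(z, y) + (1/r) * (y - z) * (z - x) >= 0 for all y
# forces (z - a) + (z - x)/r = 0, giving the closed-form resolvent below.
a, r = 0.3, 2.0

def T_r(x):
    return (x + r * a) / (1 + r)

# Firm nonexpansivity: |T_r x - T_r y|^2 <= <T_r x - T_r y, x - y>.
pairs = [(-1.0, 4.0), (0.0, 0.7), (2.5, -3.25)]
for x, y in pairs:
    lhs = (T_r(x) - T_r(y)) ** 2
    rhs = (T_r(x) - T_r(y)) * (x - y)
    print(lhs <= rhs + 1e-12)  # True for every pair

print(T_r(a))  # a is the equilibrium point, and T_r(a) = a
```

Here the firm nonexpansivity is transparent: T_r x - T_r y = (x - y)/(1 + r), so the squared difference is smaller than the inner product by the factor 1/(1 + r).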
Lemma 2.3. Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be sequences of real numbers such that $a_n\ge 0$ and
$$a_{n+1}\le(1-b_n)a_n+b_nc_n,\quad n\ge 0,$$
where $\{b_n\}\subset(0,1)$ with $\sum_{n=1}^{\infty}b_n=\infty$ and $\limsup_{n\to\infty}c_n\le 0$ (or $\sum_{n=1}^{\infty}|b_nc_n|<\infty$). Then $\lim_{n\to\infty}a_n=0$.
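A quick numerical illustration of Lemma 2.3 (with hypothetical sequences, not the ones from the proof): b_n = 1/(n+1) has divergent sum and c_n = 1/n tends to 0, so the recursion drives a_n to 0 from any starting value.

```python
# Hypothetical numerical illustration of the lemma:
#   a_{n+1} = (1 - b_n) * a_n + b_n * c_n
# with b_n in (0, 1), sum b_n = infinity, and limsup c_n <= 0.
a = 10.0
for n in range(1, 100000):
    b_n = 1.0 / (n + 1)  # divergent sum
    c_n = 1.0 / n        # c_n -> 0
    a = (1 - b_n) * a + b_n * c_n

print(a)  # close to 0
```

Both the contribution of the initial value (which decays like 1/n) and the accumulated c_n terms (which decay like log(n)/n) vanish, matching the lemma's conclusion.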
Lemma 2.4. Suppose that $E$ is a smooth Banach space. Then a retraction $Q:C\to D$ is sunny nonexpansive if and only if
$$\langle x-Qx,\,J(y-Qx)\rangle\le 0\quad\text{for all } x\in C,\ y\in D,$$
where $J$ denotes the normalized duality mapping of $E$.
Lemma 2.5. Let $E$ be a uniformly smooth Banach space, $C$ a nonempty closed convex subset of $E$, and $T:C\to C$ a nonexpansive mapping. Let $x_t$ be defined by (2.1). Then $\{x_t\}$ remains bounded as $t\to 0^+$ if and only if $F(T)\ne\emptyset$. Moreover, if $F(T)\ne\emptyset$, then $x_t$ converges in norm, as $t\to 0^+$, to a fixed point of $T$; and if one sets
$$Q(u)=\lim_{t\to 0^+}x_t,\quad u\in C,$$
then $Q$ defines the unique sunny nonexpansive retraction from $C$ onto $F(T)$.
Lemma 2.6. In a Hilbert space, the following inequalities always hold:
(i) $\|x+y\|^2\le\|x\|^2+2\langle y,\,x+y\rangle$ for all $x,y\in H$;
(ii) $\|tx+(1-t)y\|^2\le t\|x\|^2+(1-t)\|y\|^2$ for all $x,y\in H$ and $t\in[0,1]$.
3. Main Results
Theorem 3.1. Let be a nonempty closed convex subset of , let be a bifunction satisfying (A1)–(A4), and let be a nonexpansive mapping of into such that . Let be a contraction of into itself with , take an arbitrary initial element , and let and be the sequences generated by (1.8), where and satisfy the following conditions: (I) (II) (III) (IV). Then the sequences and converge strongly to , where , and converge in norm to , where is the sunny nonexpansive retraction.
Proof. Let . Then is a contraction of into itself. In fact, there exists such that for all . So, we have that
for all . So, is a contraction of into itself. Since is complete, there exists a unique element such that ; such a is an element of . We divide the proof into several steps.
Step 1. and are all bounded. Let . Then from , we have for all . Put , so can be rewritten as . Therefore, from (3.2) we get . If , then is bounded. So, we assume that .
Therefore, . So, by induction, we have ; hence is bounded. We also obtain that , , , and are bounded. Step 2. as .
On the other hand, from and , we have Putting in (3.9) and in (3.10), we have So, from we have and hence Without loss of generality, let us assume that there exists a real number such that for all . Then, we have and hence where . Then we obtain
So, putting (3.8) and (3.16) into (3.7), we have , where and are constants.
Using Lemma 2.3 and conditions (I), (II), (III), we have . From (3.15) and , we have . Since , we have . For , we have . Therefore, we have . From the above and the condition on , we get . Since , it follows that . Step 3. We show that , where . To show this inequality, we choose a subsequence of such that . Since is bounded, there exists a subsequence of which converges weakly to . Without loss of generality, we can assume that . From , we obtain . Let us show . By , we have . From , we also have , and hence . Since and , from we have for all . For with and , let . Since and , we have , and hence . So, from and we have , and hence . From , we have for all , and hence . We will show that . Assume that . Since and , from Opial's theorem we have . This is a contradiction. So, we get . Therefore, . Since , we have
From , we have where , and . It is easy to see that and by (3.31) and the conditions. Hence, by Lemma 2.3, the sequence converges strongly to .
If is defined as in (2.1), then, from Lemma 2.5, we have as , and if we set , then defines the unique sunny nonexpansive retraction from onto . So, if we replace with , the conclusion still holds: is a fixed point sequence and as . In the iterative algorithm of Theorem 3.1, we can in particular take to replace . Then we have , so as . By the uniqueness of the limit, we have , that is, , where is the unique sunny nonexpansive retraction from onto .
Remark 3. We notice that has no influence on .
As a direct consequence of Theorem 3.1, we obtain the following corollary.
Corollary 3.2. Let be a nonempty closed convex subset of , and let be a nonexpansive mapping of into such that . Let be a contraction of into itself and let and be sequences generated, starting from an arbitrary element , by for all , where satisfies the following conditions: (I) (II) (III). Then the sequences converge strongly to , where .
4. Application for Zeros of Maximal Monotone Operators
We adapt in this section the iterative algorithm (3.1) to find zeros of maximal monotone operators and to find . Let us recall that an operator $A$ with domain $D(A)$ and range $R(A)$ in a real Hilbert space $H$ (with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$) is said to be monotone if the graph of $A$,
$$G(A)=\{(x,y)\in H\times H : x\in D(A),\ y\in Ax\},$$
is a monotone set; namely,
$$\langle x-x',\,y-y'\rangle\ge 0\quad\text{whenever } (x,y),\,(x',y')\in G(A).$$
A monotone operator $A$ is said to be maximal monotone if its graph is not properly contained in the graph of any other monotone operator defined in $H$. See Brezis for more details on maximal monotone operators.
In this section we always assume that $A$ is maximal monotone and that the set of zeros of $A$, $A^{-1}(0)=\{x\in D(A) : 0\in Ax\}$, is nonempty, so that the metric projection $P_{A^{-1}(0)}$ from $H$ onto $A^{-1}(0)$ is well defined.
One of the major problems in the theory of maximal monotone operators is to find a point in the zero set $A^{-1}(0)$, because various problems arising from economics, convex programming, and other applied areas can be formulated as finding a zero of a maximal monotone operator. The proximal point algorithm (PPA) of Rockafellar is commonly recognized as the most powerful algorithm for finding a zero of maximal monotone operators. The PPA generates, starting with any initial guess $x_0\in H$, a sequence $\{x_n\}$ according to the inclusion
$$x_n+e_n\in x_{n+1}+c_nAx_{n+1},$$
where $\{e_n\}$ is a sequence of errors and $\{c_n\}$ is a sequence of positive regularization parameters. Equivalently, we can write
$$x_{n+1}=J_{c_n}(x_n+e_n),$$
where $J_c=(I+cA)^{-1}$ for $c>0$ denotes the resolvent of $A$, with $I$ being the identity operator on the space $H$.
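A minimal sketch of the PPA under an assumed toy operator: A x = x - b is maximal monotone on R with zero set {b}, and its resolvent can be computed in closed form, so the exact-iteration PPA (zero errors e_n) can be run directly.

```python
# Hypothetical example: A x = x - b is maximal monotone on R with zero set {b};
# its resolvent is J_c x = (I + c*A)^{-1} x = (x + c*b) / (1 + c).
b = 2.0

def J(c, x):
    return (x + c * b) / (1 + c)

# Proximal point algorithm x_{n+1} = J_{c_n}(x_n + e_n) with zero errors e_n.
x = -5.0
for n in range(100):
    c_n = 1.0  # constant regularization parameter, bounded away from 0
    x = J(c_n, x)

print(x)  # converges to the zero of A, namely b = 2.0
```

With c_n = 1 the iteration is x_{n+1} = (x_n + b)/2, which halves the distance to b at every step.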
The aim of this section is to combine algorithm (3.1) with algorithm (4.4). Our algorithm generates sequences and , starting from an arbitrary element , by , where and are sequences of positive real numbers. Furthermore, we prove that and converge strongly to , where .
Before stating the convergence theorem of the algorithm (4.7), we list some properties of maximal monotone operators.
Proposition 4.1. Let $A$ be a maximal monotone operator in $H$ and let $J_c$ denote its resolvent, where $c>0$. Then: (a) $J_c$ is nonexpansive for all $c>0$; (b) $F(J_c)=A^{-1}(0)$ for all $c>0$; (c) for $0<c\le c'$, $\|x-J_cx\|\le 2\|x-J_{c'}x\|$; (d) (the resolvent identity) for $\lambda,\mu>0$, there holds the identity
$$J_\lambda x=J_\mu\Big(\frac{\mu}{\lambda}x+\Big(1-\frac{\mu}{\lambda}\Big)J_\lambda x\Big),\quad x\in H.$$
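The resolvent identity (d) can be verified numerically for a toy operator; assuming A x = x on R, the resolvent is J_c x = x/(1 + c), and the identity holds exactly.

```python
# Hypothetical check of the resolvent identity
#   J_lam(x) = J_mu((mu/lam) * x + (1 - mu/lam) * J_lam(x))
# for the maximal monotone operator A x = x on R, whose resolvent is
# J_c x = x / (1 + c).
def J(c, x):
    return x / (1 + c)

x, lam, mu = 3.7, 2.0, 0.5
lhs = J(lam, x)
rhs = J(mu, (mu / lam) * x + (1 - mu / lam) * J(lam, x))
print(abs(lhs - rhs) < 1e-12)  # True
```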
Theorem 4.2. Let be a nonempty closed convex subset of , let be a bifunction satisfying (A1)–(A4), and let be a maximal monotone operator such that . Let be a contraction of into itself and let and be sequences generated, starting from an arbitrary element , by for all , where and satisfy the following conditions: (I) , and ; (II) and ; (III) ; (IV) , , , and . Then the sequences and converge strongly to , where .
Proof. Below we write for simplicity. Setting , we rewrite (4.7) as . Because the proof is similar to that of Theorem 3.1, here we just give the main steps as follows: (1) is bounded; (2) , as ; (3) , as ; (4) , as ; (5) ; (6) , as .
5. Application for Optimization Problem
In this section, we study a kind of optimization problem by using the results of this paper. That is, we will give an iterative algorithm of solution for the following optimization problem with a nonempty set of solutions:
$$\min_{x\in C}\varphi(x),\tag{5.1}$$
where $\varphi$ is a convex and lower semicontinuous functional defined on a closed convex subset $C$ of a Hilbert space $H$. We denote by $\Omega$ the set of solutions of (5.1). Let $F$ be a bifunction from $C\times C$ to $\mathbb{R}$ defined by $F(x,y)=\varphi(y)-\varphi(x)$. We consider the following equilibrium problem, that is, to find $z\in C$ such that
$$\varphi(y)-\varphi(z)\ge 0\quad\text{for all } y\in C.\tag{5.2}$$
It is obvious that the solution set of the optimization problem (5.1) coincides with the set of solutions of the equilibrium problem (5.2). In addition, it is easy to see that this bifunction satisfies conditions (A1)–(A4) in Section 2. Therefore, from Theorem 3.1, we know that the following iterative algorithm, for any initial guess , converges strongly to a solution of the optimization problem (5.1), where , , and satisfy
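As a hypothetical instance of this reduction (the concrete function and set are assumptions, not from the paper): minimize phi(y) = (y - 3)^2 over C = [0, 1], whose constrained minimizer is y = 1. For F(x, y) = phi(y) - phi(x), the resolvent T_r from Lemma 2.2 is exactly the proximal mapping of phi over C, and iterating it already converges to the minimizer.

```python
# Hypothetical instance of (5.1): minimize phi(y) = (y - 3)^2 over C = [0, 1]
# (constrained minimizer y* = 1). For F(x, y) = phi(y) - phi(x), the resolvent
# T_r is the proximal mapping
#   T_r x = argmin_{y in C} phi(y) + (1/(2r)) * (y - x)^2,
# which here has a closed form: setting the derivative to zero gives
# y = (6r + x)/(2r + 1), then project onto C.
def prox(x, r=1.0):
    y = (6 * r + x) / (2 * r + 1)  # unconstrained minimizer of the prox objective
    return min(max(y, 0.0), 1.0)   # projection onto C = [0, 1]

x = 0.0
for _ in range(20):
    x = prox(x)  # proximal point iteration on the equilibrium resolvent

print(x)  # the constrained minimizer of phi on C
```

Because the unconstrained prox target always lands above 1 in this example, the projection clips it to the boundary point 1 immediately, which is indeed the minimizer of phi on C.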
In fact, is the minimum norm point from onto the ; furthermore, if , then is the minimum norm point on the .
This work was supported by the National Natural Science Foundation of China under Grants 11071270 and 10771050.
- S. D. Flåm and A. S. Antipin, “Equilibrium programming using proximal-like algorithms,” Mathematical Programming, vol. 78, no. 1, pp. 29–41, 1997.
- A. Moudafi and M. Théra, “Proximal and dynamical approaches to equilibrium problems,” in Lecture Notes in Economics and Mathematical Systems, vol. 477, pp. 187–201, Springer-Verlag, New York, NY, USA, 1999.
- A. Tada and W. Takahashi, “Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem,” Journal of Optimization Theory and Applications, vol. 133, no. 3, pp. 359–370, 2007.
- Y. Su, M. Shang, and X. Qin, “An iterative method of solution for equilibrium and optimization problems,” Nonlinear Analysis. Theory, Methods & Applications, vol. 69, no. 8, pp. 2709–2719, 2008.
- S. Takahashi and W. Takahashi, “Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces,” Journal of Mathematical Analysis and Applications, vol. 331, no. 1, pp. 506–515, 2007.
- P. L. Combettes and S. A. Hirstoaga, “Equilibrium programming in Hilbert spaces,” Journal of Nonlinear and Convex Analysis, vol. 6, no. 1, pp. 117–136, 2005.
- F. E. Browder, “Existence and approximation of solutions of nonlinear variational inequalities,” Proceedings of the National Academy of Sciences of the United States of America, vol. 56, pp. 1080–1086, 1966.
- F. E. Browder, “Convergence of approximants to fixed points of nonexpansive non-linear mappings in Banach spaces,” Archive for Rational Mechanics and Analysis, vol. 24, pp. 82–90, 1967.
- R. E. Bruck, Jr., “Nonexpansive projections on subsets of Banach spaces,” Pacific Journal of Mathematics, vol. 47, pp. 341–355, 1973.
- H.-K. Xu, “A regularization method for the proximal point algorithm,” Journal of Global Optimization, vol. 36, no. 1, pp. 115–125, 2006.
- A. Moudafi, “Viscosity approximation methods for fixed-points problems,” Journal of Mathematical Analysis and Applications, vol. 241, no. 1, pp. 46–55, 2000.
- W. Takahashi, Nonlinear Functional Analysis, Yokohama Publishers, Yokohama, Japan, 2000.
- Z. Opial, “Weak convergence of the sequence of successive approximations for nonexpansive mappings,” Bulletin of the American Mathematical Society, vol. 73, pp. 591–597, 1967.
- E. Blum and W. Oettli, “From optimization and variational inequalities to equilibrium problems,” The Mathematics Student, vol. 63, no. 1–4, pp. 123–145, 1994.
- H. K. Xu, “An iterative approach to quadratic optimization,” Journal of Optimization Theory and Applications, vol. 116, no. 3, pp. 659–678, 2003.
- H. Brezis, Operateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland Publishing, Amsterdam, Holland, 1973.
- R. T. Rockafellar, “Monotone operators and the proximal point algorithm,” SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877–898, 1976.