Abstract and Applied Analysis, Volume 2012 (2012), Article ID 310801, 17 pages. http://dx.doi.org/10.1155/2012/310801
Research Article

## Algorithmic Approach to a Minimization Problem

1Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
2Department of Mathematics and RINS, Gyeongsang National University, Jinju 660-701, Republic of Korea
3Department of Information Management, Cheng Shiu University, Kaohsiung 833, Taiwan

Received 27 April 2012; Accepted 7 May 2012

Copyright © 2012 Yonghong Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We first construct an implicit algorithm for solving the minimization problem $\min\{\|x\| : x\in\Omega\}$, where $\Omega$ is the intersection of the solution set of an equilibrium problem, the set of fixed points of a nonexpansive mapping, and the solution set of a variational inequality. We then suggest an explicit algorithm obtained by discretizing this implicit algorithm. We prove that both the implicit and the explicit algorithms converge strongly to a solution of the above minimization problem.

#### 1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. Recall that a mapping $A: C\to H$ is called $\alpha$-inverse-strongly monotone if there exists a constant $\alpha>0$ such that
$$\langle Ax-Ay,\,x-y\rangle \ge \alpha\|Ax-Ay\|^2, \quad \forall x,y\in C. \qquad (1.1)$$
A mapping $T: C\to C$ is said to be nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in C$. Denote the set of fixed points of $T$ by $\operatorname{Fix}(T)$.
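As a numerical sanity check of these two definitions (a toy instance of ours, not from the paper), note that the identity map is $1$-inverse-strongly monotone and $T(x)=x/2$ is nonexpansive on $H=\mathbb{R}^2$:

```python
import numpy as np

# Toy check (our own instance): A = I is 1-inverse-strongly monotone,
# i.e. <Ax - Ay, x - y> >= 1 * ||Ax - Ay||^2, and T(x) = x/2 is nonexpansive.
rng = np.random.default_rng(0)
A = lambda x: x
T = lambda x: x / 2.0

for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # inverse-strong monotonicity with alpha = 1 (small tolerance for rounding)
    assert np.dot(A(x) - A(y), x - y) >= np.linalg.norm(A(x) - A(y)) ** 2 - 1e-9
    # nonexpansiveness: ||Tx - Ty|| <= ||x - y||
    assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12

print("definitions verified on 1000 random pairs")
```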

Let $A: C\to H$ be a nonlinear mapping and $F: C\times C\to\mathbb{R}$ a bifunction. We are concerned with the following equilibrium problem: find $x\in C$ such that
$$F(x,y)+\langle Ax,\,y-x\rangle\ge 0,\quad\forall y\in C. \qquad (1.2)$$
The solution set of (1.2) is denoted by $EP(F,A)$. If $A=0$, then (1.2) reduces to the equilibrium problem of finding $x\in C$ such that
$$F(x,y)\ge 0,\quad\forall y\in C. \qquad (1.3)$$
The solution set of (1.3) is denoted by $EP(F)$. If $F=0$, then (1.2) reduces to the variational inequality problem of finding $x\in C$ such that
$$\langle Ax,\,y-x\rangle\ge 0,\quad\forall y\in C. \qquad (1.4)$$
The solution set of the variational inequality (1.4) is denoted by $VI(C,A)$.

Equilibrium problems, introduced by Blum and Oettli [1] in 1994, have had a great impact and influence in pure and applied sciences. It has been shown that the theory of equilibrium problems provides a novel and unified treatment of a wide class of problems arising in economics, finance, image reconstruction, ecology, transportation, networks, elasticity, and optimization. Equilibrium problems include variational inequalities, fixed point problems, Nash equilibrium problems, and game theory as special cases. Equilibrium problems and variational inequality problems have been investigated by many authors; see [2–35] and the references therein. Problem (1.2) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, the Nash equilibrium problem in noncooperative games, and others.

On the other hand, we also notice that one quite often seeks a particular solution of a given nonlinear problem, in particular the minimum-norm solution. For instance, let $C$ be a closed convex subset of a Hilbert space $H_1$ and let $A: H_1\to H_2$ be a bounded linear operator, where $H_2$ is another Hilbert space. The $C$-constrained pseudoinverse of $A$ is then defined through the minimum-norm solution of the constrained minimization problem
$$\min_{x\in C}\|Ax-b\|^2, \qquad (1.5)$$
which is equivalent to the fixed point problem
$$x=P_C\bigl(x-\lambda A^{*}(Ax-b)\bigr), \qquad (1.6)$$
where $P_C$ is the metric projection from $H_1$ onto $C$, $A^{*}$ is the adjoint of $A$, $\lambda>0$ is a constant, and $b\in H_2$ is such that problem (1.5) has a solution.
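The fixed-point reformulation above can be iterated directly (projected gradient/Landweber iteration). Below is a minimal numerical sketch on a toy instance of our own choosing (the matrix, vector, and box $C=[0,1]^2$ are not from the paper):

```python
import numpy as np

# Toy instance: minimize ||Ax - b||^2 over the box C = [0, 1]^2 via the
# fixed-point iteration x <- P_C(x - lam * A^T (Ax - b)).
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
b = np.array([0.5, 3.0])                 # unconstrained minimizer (0.5, 1.5) lies outside C

P_C = lambda x: np.clip(x, 0.0, 1.0)     # metric projection onto the box

lam = 0.2                                # 0 < lam < 2 / ||A||^2 = 0.5
x = np.zeros(2)
for _ in range(500):
    x = P_C(x - lam * A.T @ (A @ x - b))

print(x)   # the C-constrained minimizer, here (0.5, 1.0)
```

The iteration converges because, for this step size, the inner map is nonexpansive and the composed map has a unique fixed point, which is exactly the constrained minimizer.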

It is therefore an interesting problem to devise algorithms that generate schemes converging strongly to the minimum-norm solution of a given problem.

In this paper, we focus on the following minimization problem: find $x^{*}\in\Omega$ such that
$$\|x^{*}\|=\min\{\|x\| : x\in\Omega\},$$
where $\Omega$ is the intersection of the solution set of an equilibrium problem, the set of fixed points of a nonexpansive mapping, and the solution set of a variational inequality. We suggest and analyze two very simple algorithms for solving this minimization problem.

#### 2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Throughout this paper, we assume that the bifunction $F: C\times C\to\mathbb{R}$ satisfies the following conditions:

(H1) $F(x,x)=0$ for all $x\in C$;
(H2) $F$ is monotone, that is, $F(x,y)+F(y,x)\le 0$ for all $x,y\in C$;
(H3) for each $x,y,z\in C$, $\limsup_{t\downarrow 0}F\bigl(tz+(1-t)x,\,y\bigr)\le F(x,y)$;
(H4) for each $x\in C$, $y\mapsto F(x,y)$ is convex and lower semicontinuous.

The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C: H\to C$ which assigns to each point $x\in H$ the unique point $P_Cx\in C$ satisfying the property
$$\|x-P_Cx\|=\inf_{y\in C}\|x-y\|. \qquad (2.1)$$
It is well known that $P_C$ is a nonexpansive mapping and satisfies
$$\langle x-y,\,P_Cx-P_Cy\rangle\ge\|P_Cx-P_Cy\|^2,\quad\forall x,y\in H. \qquad (2.2)$$
We need the following well-known lemmas for proving our main results.
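Property (2.2) (firm nonexpansiveness of the projection) is easy to test numerically; the following sketch checks it for the projection onto a closed ball, a toy instance of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball {y : ||y|| <= r}."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

# Check <x - y, Px - Py> >= ||Px - Py||^2 on random pairs (property (2.2)).
for _ in range(1000):
    x, y = rng.normal(size=3) * 2, rng.normal(size=3) * 2
    px, py = proj_ball(x), proj_ball(y)
    assert np.dot(x - y, px - py) >= np.linalg.norm(px - py) ** 2 - 1e-10

print("firm nonexpansiveness verified on 1000 random pairs")
```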

Lemma 2.1 (see [13]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F: C\times C\to\mathbb{R}$ be a bifunction which satisfies conditions (H1)–(H4). Let $r>0$ and $x\in H$. Then there exists $z\in C$ such that
$$F(z,y)+\frac{1}{r}\langle y-z,\,z-x\rangle\ge 0,\quad\forall y\in C.$$
Further, if $T_rx=\bigl\{z\in C : F(z,y)+\frac{1}{r}\langle y-z,\,z-x\rangle\ge 0,\ \forall y\in C\bigr\}$, then the following hold:

(a) $T_r$ is single-valued and $T_r$ is firmly nonexpansive, that is, for any $x,y\in H$, $\|T_rx-T_ry\|^2\le\langle T_rx-T_ry,\,x-y\rangle$;
(b) $EP(F)$ is closed and convex and $EP(F)=\operatorname{Fix}(T_r)$.
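For one concrete family of bifunctions the resolvent of Lemma 2.1 has a closed form, which makes part (a) easy to check numerically. The instance below is our own illustration (not from the paper): for $F(z,y)=\langle Mz,\,y-z\rangle$ with $M$ positive semidefinite and $C=H=\mathbb{R}^2$, the defining inequality forces $Mz+\frac{1}{r}(z-x)=0$, so $T_rx=(I+rM)^{-1}x$.

```python
import numpy as np

# Hypothetical concrete instance: F(z, y) = <Mz, y - z>, M PSD, C = H = R^2.
# Then the resolvent of Lemma 2.1 is T_r x = (I + rM)^{-1} x.
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])
r = 0.5
T_r = lambda x: np.linalg.solve(np.eye(2) + r * M, x)

rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    tx, ty = T_r(x), T_r(y)
    # firm nonexpansiveness: ||T_r x - T_r y||^2 <= <T_r x - T_r y, x - y>
    assert np.linalg.norm(tx - ty) ** 2 <= np.dot(tx - ty, x - y) + 1e-12

print("T_r is firmly nonexpansive on this instance")
```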

Lemma 2.2 (see [27]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0,1]$ with $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$. Suppose that $x_{n+1}=\beta_nx_n+(1-\beta_n)z_n$ for all $n\ge 0$ and $\limsup_{n\to\infty}\bigl(\|z_{n+1}-z_n\|-\|x_{n+1}-x_n\|\bigr)\le 0$. Then $\lim_{n\to\infty}\|z_n-x_n\|=0$.

Lemma 2.3 (see [29]). Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $T: C\to C$ be a nonexpansive mapping. Then the mapping $I-T$ is demiclosed. That is, if $\{x_n\}$ is a sequence in $C$ such that $x_n\to x$ weakly and $(I-T)x_n\to y$ strongly, then $(I-T)x=y$.

Lemma 2.4 (see [29]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1}\le(1-\gamma_n)a_n+\delta_n,\quad n\ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

(a) $\sum_{n=1}^{\infty}\gamma_n=\infty$;
(b) $\limsup_{n\to\infty}\delta_n/\gamma_n\le 0$ or $\sum_{n=1}^{\infty}|\delta_n|<\infty$.

Then $\lim_{n\to\infty}a_n=0$.
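Lemma 2.4 can be illustrated numerically. The following sketch uses our own toy choice of sequences, $\gamma_n=1/(n+1)$ (so $\sum\gamma_n=\infty$) and $\delta_n=\gamma_n^2$ (so $\delta_n/\gamma_n\to 0$), and watches $a_n$ decay to $0$:

```python
# Numerical illustration of Lemma 2.4 (our toy sequences, not from the paper):
# gamma_n = 1/(n+1), delta_n = gamma_n^2; conditions (a) and (b) hold,
# so a_n -> 0 regardless of the starting value a_0.
a = 10.0
for n in range(1, 200001):
    gamma = 1.0 / (n + 1)
    delta = gamma ** 2
    a = (1 - gamma) * a + delta

print(a)   # close to 0
```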

#### 3. Main Results

In this section we introduce two algorithms (one implicit and one explicit) for finding the minimum-norm element of $\Omega:=EP(F,A)\cap\operatorname{Fix}(T)\cap VI(C,B)$. Namely, we want to find the point $x^{*}\in\Omega$ which solves the following minimization problem:
$$\|x^{*}\|=\min\{\|x\| : x\in\Omega\}. \qquad (3.1)$$
Let $T: C\to C$ be a nonexpansive mapping and let $A, B: C\to H$ be $\alpha$-inverse-strongly monotone and $\beta$-inverse-strongly monotone mappings, respectively. Let $F: C\times C\to\mathbb{R}$ be a bifunction which satisfies conditions (H1)–(H4). In order to solve the minimization problem (3.1), we first construct the following implicit algorithm by using the projection method:
$$x_t=P_C\Bigl[(1-t)TP_C\bigl(T_\lambda(x_t-\lambda Ax_t)-\mu BT_\lambda(x_t-\lambda Ax_t)\bigr)\Bigr],\quad t\in(0,1), \qquad (3.2)$$
where $T_\lambda$ is defined as in Lemma 2.1 and $\lambda,\mu$ are two constants such that $\lambda\in(0,2\alpha)$ and $\mu\in(0,2\beta)$. We will show that the net $\{x_t\}$ defined by (3.2) converges to a solution of the minimization problem (3.1). First, we show that the net $\{x_t\}$ is well defined. As a matter of fact, for each $t\in(0,1)$, we consider the mapping $W_t: C\to C$ given by
$$W_tx=P_C\Bigl[(1-t)TP_C\bigl(T_\lambda(x-\lambda Ax)-\mu BT_\lambda(x-\lambda Ax)\bigr)\Bigr],\quad x\in C.$$
Since the mappings $P_C$, $T$, $T_\lambda(I-\lambda A)$, and $P_C(I-\mu B)$ are nonexpansive, we can check easily that
$$\|W_tx-W_ty\|\le(1-t)\|x-y\|,\quad\forall x,y\in C,$$
which implies that $W_t$ is a contraction. By the Banach contraction principle, $W_t$ has a unique fixed point $x_t$ in $C$, that is, $x_t=W_tx_t$, which is exactly (3.2).
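The well-definedness argument is constructive: for each fixed $t$, the point $x_t$ can be computed by Picard iteration of the contraction. Below is a toy sketch under simplifying assumptions of our own (we drop the equilibrium and variational-inequality components, so the implicit step reduces to $x_t=P_C[(1-t)Tx_t]$, a contraction with constant $1-t$; the set $C$, the mapping $T$, and the instance are ours, not the paper's):

```python
import numpy as np

# Toy instance: C = [1,2] x [-1,1], T the reflection (x1, x2) -> (x1, -x2).
# Then Fix(T) ∩ C = [1,2] x {0}, whose minimum-norm element is (1, 0).
P_C = lambda x: np.array([np.clip(x[0], 1.0, 2.0), np.clip(x[1], -1.0, 1.0)])
T = lambda x: np.array([x[0], -x[1]])        # nonexpansive

def x_t(t, x0=np.array([2.0, 1.0]), iters=2000):
    """Compute the fixed point of W_t(x) = P_C[(1 - t) T x] by Picard iteration."""
    x = x0
    for _ in range(iters):
        x = P_C((1 - t) * T(x))
    return x

# As t -> 0+, x_t approaches the minimum-norm element (1, 0).
print(x_t(0.5), x_t(0.01))
```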

Next we show the first main result of the present paper.

Theorem 3.1. Suppose that $\Omega\ne\emptyset$. Then the net $\{x_t\}$ generated by the implicit method (3.2) converges in norm, as $t\to 0^{+}$, to a solution of the minimization problem (3.1).

Proof. Take $p\in\Omega$. First we will use the following facts:

(1) for all $\lambda\in(0,2\alpha)$ and $x,y\in C$,
$$\|(I-\lambda A)x-(I-\lambda A)y\|^2\le\|x-y\|^2+\lambda(\lambda-2\alpha)\|Ax-Ay\|^2\le\|x-y\|^2;$$
in particular, $I-\lambda A$ is nonexpansive, and similarly $I-\mu B$ is nonexpansive for all $\mu\in(0,2\beta)$;
(2) $T_\lambda(I-\lambda A)$ and $P_C(I-\mu B)$ are nonexpansive and, for all $p\in\Omega$, $p=T_\lambda(p-\lambda Ap)=P_C(p-\mu Bp)=Tp$.
Set $u_t=T_\lambda(x_t-\lambda Ax_t)$ and $v_t=P_C(u_t-\mu Bu_t)$ for all $t\in(0,1)$. It follows that $\|u_t-p\|\le\|x_t-p\|$ and $\|v_t-p\|\le\|u_t-p\|$. From (3.2), we have
$$\|x_t-p\|\le\|(1-t)Tv_t-p\|\le(1-t)\|x_t-p\|+t\|p\|,$$
that is, $\|x_t-p\|\le\|p\|$. So, $\{x_t\}$ is bounded. Hence $\{u_t\}$, $\{v_t\}$, and $\{Tv_t\}$ are also bounded. Next we will use $M>0$ to denote a generic constant appearing in the following.
From (3.7), we have that is, Since and , we derive From Lemma 2.1 and (2.2), we obtain It follows that Set for all . By Lemma 2.1 and (2.2), we have that is, Therefore, we have Hence, we deduce This denotes that Note that thus, Next we show that is relatively norm compact as . Let be a sequence such that as . Put , and . From (3.20), we get By (3.2), we have It follows that In particular,
Since $\{x_{t_n}\}$ is bounded, without loss of generality we may assume that $\{x_{t_n}\}$ converges weakly to a point $\tilde{x}\in C$. Hence $\{u_{t_n}\}$ and $\{v_{t_n}\}$ also converge weakly to $\tilde{x}$. Noticing (3.21), we can use Lemma 2.3 to get $\tilde{x}\in\operatorname{Fix}(T)$.
Now we show . Since , for any we have From the monotonicity of , we have Hence, Put for all and . Then, we have . So, from (3.27) we have Note that . Further, from monotonicity of , we have . Letting in (3.28), we have From (H1), (H4), and (3.29), we also have and hence Letting in (3.31), we have, for each , This implies that . By the same argument as that of [13], we have . Therefore, .
We substitute for in (3.24) to get Hence, the weak convergence of to implies that strongly. This has proved the relative norm compactness of the net as .
Now we return to (3.24) and take the limit as to get To show that the entire net converges to , assume , where . In (3.34), we take to get Interchange and to obtain Adding up (3.35) and (3.36) yields which implies that .
We note that (3.34) is equivalent to This clearly implies that Therefore, solves the minimization problem (3.1). This completes the proof.

Next we introduce an explicit algorithm for finding a solution of the minimization problem (3.1). This scheme is obtained by discretizing the implicit scheme (3.2). We will show the strong convergence of this algorithm.

Theorem 3.2. Suppose that $\Omega\ne\emptyset$. For $x_0\in C$ given arbitrarily, let the sequence $\{x_n\}$ be generated iteratively by
$$x_{n+1}=\beta_nx_n+(1-\beta_n)P_C\Bigl[(1-\alpha_n)TP_C\bigl(T_\lambda(x_n-\lambda Ax_n)-\mu BT_\lambda(x_n-\lambda Ax_n)\bigr)\Bigr],\quad n\ge 0, \qquad (3.40)$$
where $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0,1)$ satisfying the following conditions:

(a) $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^{\infty}\alpha_n=\infty$;
(b) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$.

Then the sequence $\{x_n\}$ converges strongly to a solution of the minimization problem (3.1).

Proof. Take $p\in\Omega$. As in the proof of Theorem 3.1, we will use the fact that $T_\lambda(I-\lambda A)$ and $P_C(I-\mu B)$ are nonexpansive and that $p=T_\lambda(p-\lambda Ap)=P_C(p-\mu Bp)=Tp$ for all $p\in\Omega$.
Set $u_n=T_\lambda(x_n-\lambda Ax_n)$, $v_n=P_C(u_n-\mu Bu_n)$, and $y_n=P_C[(1-\alpha_n)Tv_n]$ for all $n\ge 0$. From (3.40), we get
$$\|x_{n+1}-p\|\le\beta_n\|x_n-p\|+(1-\beta_n)\bigl[(1-\alpha_n)\|x_n-p\|+\alpha_n\|p\|\bigr]\le\max\{\|x_n-p\|,\|p\|\}.$$
By induction, we obtain, for all $n\ge 0$, $\|x_n-p\|\le\max\{\|x_0-p\|,\|p\|\}$. Hence $\{x_n\}$ is bounded. Consequently, we deduce that $\{u_n\}$, $\{v_n\}$, and $\{Tv_n\}$ are all bounded. We will use $M>0$ to denote a generic constant appearing in the following.
Define for all . It follows that This together with (a) implies that Hence by Lemma 2.2, we get Therefore, By the convexity of the norm , we have It follows that Since , , and , we derive From Lemma 2.1 and (2.2), we obtain It follows that Again, by Lemma 2.1 and (2.2), we have that is, Hence, It follows that Since , , , and , we derive that Note that therefore, Next we prove where is a solution of the minimization problem (3.1).
Indeed, we can choose a subsequence of such that Without loss of generality, we may further assume that weakly. By the same argument as that of Theorem 3.1, we can deduce that . Therefore, Finally, we prove . As a matter of fact, we have where and . It is clear that and . Hence, all conditions of Lemma 2.4 are satisfied. Therefore, we immediately deduce that strongly. This completes the proof.
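A toy run of the explicit scheme can be sketched under simplifying assumptions of our own (the equilibrium and variational-inequality components are dropped, so each step reduces to $x_{n+1}=\beta_nx_n+(1-\beta_n)P_C[(1-\alpha_n)Tx_n]$; the instance, $C$, $T$, and the parameter choices $\alpha_n=1/(n+1)$, $\beta_n=1/2$, which satisfy conditions (a) and (b), are ours):

```python
import numpy as np

# Toy instance: C = [1,2] x [-1,1], T the reflection (x1, x2) -> (x1, -x2).
# Fix(T) ∩ C = [1,2] x {0}; the minimum-norm element is (1, 0).
P_C = lambda x: np.array([np.clip(x[0], 1.0, 2.0), np.clip(x[1], -1.0, 1.0)])
T = lambda x: np.array([x[0], -x[1]])        # nonexpansive

x = np.array([2.0, 1.0])
for n in range(1, 1001):
    alpha, beta = 1.0 / (n + 1), 0.5         # satisfy conditions (a) and (b)
    x = beta * x + (1 - beta) * P_C((1 - alpha) * T(x))

print(x)   # approaches the minimum-norm element (1, 0)
```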

#### Acknowledgments

Y. Yao was supported in part by NSFC 11071279 and NSFC 71161001-G0105. Y. Liou was supported in part by NSC 100-2221-E-230-012.

#### References

1. E. Blum and W. Oettli, “From optimization and variational inequalities to equilibrium problems,” The Mathematics Student, vol. 63, no. 1–4, pp. 123–145, 1994.
2. H. Attouch and R. Cominetti, “A dynamical approach to convex minimization coupling approximation with the steepest descent method,” Journal of Differential Equations, vol. 128, no. 2, pp. 519–540, 1996.
3. H. H. Bauschke and J. M. Borwein, “On projection algorithms for solving convex feasibility problems,” SIAM Review, vol. 38, no. 3, pp. 367–426, 1996.
4. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “Finding best approximation pairs relative to two closed convex sets in Hilbert spaces,” Journal of Approximation Theory, vol. 127, no. 2, pp. 178–192, 2004.
5. A. Bnouhachem, M. A. Noor, and M. Khalfaoui, “Modified descent-projection method for solving variational inequalities,” Applied Mathematics and Computation, vol. 190, no. 2, pp. 1691–1700, 2007.
6. D. Butnariu, Y. Censor, and S. Reich, Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, Elsevier, New York, NY, USA, 2001.
7. L. C. Ceng, S. Schaible, and J. C. Yao, “Implicit iteration scheme with perturbed mapping for equilibrium problems and fixed point problems of finitely many nonexpansive mappings,” Journal of Optimization Theory and Applications, vol. 139, no. 2, pp. 403–418, 2008.
8. Y. Censor and A. Lent, “Short Communication: cyclic subgradient projections,” Mathematical Programming, vol. 24, no. 1, pp. 233–235, 1982.
9. S.-S. Chang, H. W. Joseph Lee, and C. K. Chan, “A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization,” Nonlinear Analysis A, vol. 70, no. 9, pp. 3307–3319, 2009.
10. V. Colao, G. L. Acedo, and G. Marino, “An implicit method for finding common solutions of variational inequalities and systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings,” Nonlinear Analysis A, vol. 71, no. 7-8, pp. 2708–2715, 2009.
11. V. Colao, G. Marino, and H.-K. Xu, “An iterative method for finding common solutions of equilibrium and fixed point problems,” Journal of Mathematical Analysis and Applications, vol. 344, no. 1, pp. 340–352, 2008.
12. P. L. Combettes, “Inconsistent signal feasibility problems: least-squares solutions in a product space,” IEEE Transactions on Signal Processing, vol. 42, pp. 2955–2966, 1994.
13. P. L. Combettes and S. A. Hirstoaga, “Equilibrium programming in Hilbert spaces,” Journal of Nonlinear and Convex Analysis, vol. 6, no. 1, pp. 117–136, 2005.
14. Y.-P. Fang, R. Hu, and N.-J. Huang, “Well-posedness for equilibrium problems and for optimization problems with equilibrium constraints,” Computers & Mathematics with Applications, vol. 55, no. 1, pp. 89–100, 2008.
15. C. S. Hu and G. Cai, “Viscosity approximation schemes for fixed point problems and equilibrium problems and variational inequality problems,” Nonlinear Analysis A, vol. 72, no. 3-4, pp. 1792–1808, 2010.
16. A. N. Iusem and W. Sosa, “Iterative algorithms for equilibrium problems,” Optimization, vol. 52, no. 3, pp. 301–316, 2003.
17. C. Jaiboon and P. Kumam, “Strong convergence theorems for solving equilibrium problems and fixed point problems of $\xi$-strict pseudo-contraction mappings by two hybrid projection methods,” Journal of Computational and Applied Mathematics, vol. 234, no. 3, pp. 722–732, 2010.
18. J. S. Jung, “Strong convergence of composite iterative methods for equilibrium problems and fixed point problems,” Applied Mathematics and Computation, vol. 213, no. 2, pp. 498–505, 2009.
19. L.-J. Lin and Y.-J. Huang, “Generalized vector quasi-equilibrium problems with applications to common fixed point theorems and optimization problems,” Nonlinear Analysis A, vol. 66, no. 6, pp. 1275–1289, 2007.
20. A. Moudafi, “On the convergence of splitting proximal methods for equilibrium problems in Hilbert spaces,” Journal of Mathematical Analysis and Applications, vol. 359, no. 2, pp. 508–513, 2009.
21. A. Moudafi and M. Théra, “Proximal and dynamical approaches to equilibrium problems,” in Ill-Posed Variational Problems and Regularization Techniques (Trier, 1998), vol. 477 of Lecture Notes in Economics and Mathematical Systems, pp. 187–201, Springer, Berlin, Germany, 1999.
22. M. A. Noor, “Projection-proximal methods for general variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 318, no. 1, pp. 53–62, 2006.
23. J.-W. Peng and J.-C. Yao, “Strong convergence theorems of iterative scheme based on the extragradient method for mixed equilibrium problems and fixed point problems,” Mathematical and Computer Modelling, vol. 49, no. 9-10, pp. 1816–1828, 2009.
24. S. Plubtieng and R. Punpaeng, “A general iterative method for equilibrium problems and fixed point problems in Hilbert spaces,” Journal of Mathematical Analysis and Applications, vol. 336, no. 1, pp. 455–469, 2007.
25. X. Qin, Y. J. Cho, and S. M. Kang, “Viscosity approximation methods for generalized equilibrium problems and fixed point problems with applications,” Nonlinear Analysis A, vol. 72, no. 1, pp. 99–112, 2010.
26. Y. Shehu, “Fixed point solutions of generalized equilibrium problems for nonexpansive mappings,” Journal of Computational and Applied Mathematics, vol. 234, no. 3, pp. 892–898, 2010.
27. T. Suzuki, “Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces,” Fixed Point Theory and Applications, vol. 2005, no. 1, pp. 103–123, 2005.
28. F. Q. Xia and X. P. Ding, “Predictor-corrector algorithms for solving generalized mixed implicit quasi-equilibrium problems,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 173–179, 2007.
29. H. K. Xu, “An iterative approach to quadratic optimization,” Journal of Optimization Theory and Applications, vol. 116, no. 3, pp. 659–678, 2003.
30. S. Takahashi and W. Takahashi, “Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space,” Nonlinear Analysis A, vol. 69, no. 3, pp. 1025–1033, 2008.
31. P. Tseng, “Applications of a splitting algorithm to decomposition in convex programming and variational inequalities,” SIAM Journal on Control and Optimization, vol. 29, no. 1, pp. 119–138, 1991.
32. I. Yamada and N. Ogura, “Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings,” Numerical Functional Analysis and Optimization, vol. 25, no. 7-8, pp. 619–655, 2004.
33. Y. Yao, Y.-C. Liou, and J.-C. Yao, “An iterative algorithm for approximating convex minimization problem,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 648–656, 2007.
34. Y. Yao and J.-C. Yao, “On modified iterative method for nonexpansive mappings and monotone mappings,” Applied Mathematics and Computation, vol. 186, no. 2, pp. 1551–1558, 2007.
35. H. Zegeye, E. U. Ofoedu, and N. Shahzad, “Convergence theorems for equilibrium problem, variational inequality problem and countably infinite relatively quasi-nonexpansive mappings,” Applied Mathematics and Computation, vol. 216, no. 12, pp. 3439–3449, 2010.