Research Article | Open Access

Volume 2011 |Article ID 262073 | https://doi.org/10.1155/2011/262073

Lingling Huang, Sanyang Liu, Weifeng Gao, "An Approximate Proximal Point Algorithm for Maximal Monotone Inclusion Problems", Mathematical Problems in Engineering, vol. 2011, Article ID 262073, 15 pages, 2011. https://doi.org/10.1155/2011/262073

# An Approximate Proximal Point Algorithm for Maximal Monotone Inclusion Problems

Revised: 24 Oct 2011
Accepted: 29 Oct 2011
Published: 29 Dec 2011

#### Abstract

This paper presents and analyzes a strongly convergent approximate proximal point algorithm for finding zeros of maximal monotone operators in Hilbert spaces. The proposed method combines the proximal subproblem with a more general correction step which takes advantage of more information on the existing iterations. As applications, convex programming problems and generalized variational inequalities are considered. Some preliminary computational results are reported.

#### 1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$, and let $T: H \to 2^H$ be a maximal monotone set-valued operator. A canonical problem associated with $T$ is the maximal monotone inclusion problem: find a vector $x^* \in H$ such that
$$0 \in T(x^*). \tag{1.1}$$
The maximal monotone inclusion problem provides a powerful general framework for the study of many important optimization problems, such as convex programming problems and variational inequalities; see [1, 2], for example. It has received considerable attention. The interested reader may consult the monograph by Facchinei and Pang [3] and the papers [4–14].

Many methods have been proposed to solve the maximal monotone inclusion problem. One of the classical ones is the proximal point algorithm (PPA), originally proposed by Martinet [5]. Let $x^k$ be the current iterate; PPA generates the next iterate $x^{k+1}$ by solving the proximal subproblem
$$0 \in c_k T(x^{k+1}) + x^{k+1} - x^k, \quad \text{i.e.,} \quad x^{k+1} = (I + c_k T)^{-1}(x^k). \tag{1.2}$$
If the sequence $\{c_k\}$ is bounded away from zero, then the sequence generated by (1.2) converges weakly to a solution of (1.1). However, since (1.2) is a nonsmooth equation and an implicit scheme, solving it exactly is either impossible or as difficult as solving the original problem (1.1); see [6]. This makes straightforward application of PPA impractical in many cases. To overcome this drawback, Rockafellar [7] generalized PPA and presented the following approximate proximal point algorithm:
$$\|x^{k+1} - (I + c_k T)^{-1}(x^k)\| \le \varepsilon_k, \tag{1.3}$$
where the error sequence $\{\varepsilon_k\}$ satisfies the summable error criterion $\sum_k \varepsilon_k < \infty$. Because of the relaxed accuracy requirement, the approximate proximal point algorithm is more practical than the exact one. Furthermore, Rockafellar posed an open question: does the sequence generated by (1.3) converge strongly? By exhibiting a proper closed convex function on an infinite-dimensional Hilbert space, Güler [8] showed that it does not converge strongly in general. Naturally, the question arises whether PPA can be modified, preferably in a simple way, so that strong convergence is guaranteed.
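As a concrete finite-dimensional illustration of the exact scheme (1.2), the sketch below applies PPA to the linear monotone operator $T(x) = Ax$ with $A$ positive semidefinite, for which the resolvent $(I + c_k T)^{-1}$ is just a linear solve. All names and parameter choices here are illustrative, not taken from the paper.

```python
import numpy as np

# Exact proximal point iteration for T(x) = A x, A positive semidefinite.
# Each step solves the proximal subproblem (1.2):
#   0 = c*A x_new + x_new - x,  i.e.  x_new = (I + c A)^{-1} x.
def proximal_point(A, x0, c=1.0, iters=200):
    n = len(x0)
    x = x0.astype(float)
    for _ in range(iters):
        x = np.linalg.solve(np.eye(n) + c * A, x)  # resolvent step
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])  # positive definite, zero of T is 0
x = proximal_point(A, np.array([3.0, -4.0]))
print(np.linalg.norm(A @ x))  # residual ||T(x)|| shrinks toward 0
```

In this finite-dimensional example weak and strong convergence coincide, so the iterates simply converge to the unique zero of $T$.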

One point deserves attention. Weak and strong convergence differ only in infinite-dimensional spaces. Many real-world problems in economics and engineering, such as optimal control and structural design problems, are modeled in infinite-dimensional spaces. When we solve infinite-dimensional problems, numerical implementations of algorithms are of course applied to finite-dimensional approximations of these problems. Nevertheless, as pointed out in [9], it is important to develop convergence theory for the infinite-dimensional case, because it guarantees robustness and stability with respect to the discretization schemes employed for obtaining finite-dimensional approximations of infinite-dimensional problems.

Recently, a number of researchers have concentrated on developing approximate proximal point algorithms, in terms of theoretical analysis, algorithm design, and practical applications. To find the solution of the desired problem in a restricted area $C$, many authors used an additional projection or extragradient step to correct the approximate solution, thus obtaining prediction-correction proximal point algorithms. Recent developments also focus on replacing the correction step with more general steps; see, for example, [10–13]. To mention a few, for a fixed anchor vector $u$, Zhou et al. [12] presented a Halpern-type correction step that combines $u$ with the iterate generated by (1.3); for a suitably chosen parameter sequence and under certain assumptions on the error term, they proved that the method converges strongly. Later, Qin et al. [13] developed a more general iterative scheme involving three parameter sequences and a bounded auxiliary sequence, with the prediction again generated by (1.3); under analogous conditions, they also obtained strong convergence.

In this paper, we propose a strongly convergent approximate proximal point algorithm for maximal monotone inclusion problems by combining the proximal subproblem (1.3) with a more general correction step. Compared with the methods of [10–13], the proposed method takes advantage of more information from the existing iterates. For practical implementation, we give two applications of the proposed method, to convex programming problems and generalized variational inequalities. Preliminary numerical experiments, reported in Section 5, demonstrate the efficiency of the method in practice.

This paper is organized as follows. Section 2 introduces some useful preliminaries. Section 3 describes the proposed method formally and presents the convergence analysis. Section 4 discusses some applications of the proposed method. Section 5 presents some numerical experiments, and some final conclusions are given in the last section.

Throughout this paper, we assume that the solution set of (1.1), denoted by $T^{-1}(0)$, is nonempty.

#### 2. Preliminaries

This section summarizes some fundamental concepts and lemmas that are useful in the subsequent analysis.

Definition 2.1. Let $H$ be a real Hilbert space and let $T: H \to 2^H$ be a set-valued operator. Then (i) the effective domain of $T$, denoted by $\mathrm{dom}(T)$, is $\mathrm{dom}(T) = \{x \in H : T(x) \neq \emptyset\}$; (ii) the range or image of $T$, denoted by $\mathrm{ran}(T)$, is $\mathrm{ran}(T) = \bigcup_{x \in H} T(x)$; (iii) the graph of $T$, denoted by $\mathrm{gra}(T)$, is $\mathrm{gra}(T) = \{(x, u) \in H \times H : u \in T(x)\}$; (iv) the inverse of $T$, denoted by $T^{-1}$, is defined by $T^{-1}(u) = \{x \in H : u \in T(x)\}$.

Definition 2.2. Let $H$ be a real Hilbert space and let $T: H \to 2^H$ be a set-valued operator. Then $T$ is called monotone on $H$ if $\langle x - y, u - v \rangle \ge 0$ for all $(x, u), (y, v) \in \mathrm{gra}(T)$. Furthermore, a monotone operator $T$ is called maximal monotone if its graph is not properly contained in the graph of any other monotone operator on $H$.

Definition 2.3. Let $H$ be a real Hilbert space, let $C$ be a nonempty closed convex subset of $H$, and let $F$ be an operator from $C$ into $H$. Then (i) $F$ is called firmly nonexpansive if $\|F(x) - F(y)\|^2 \le \langle x - y, F(x) - F(y) \rangle$ for all $x, y \in C$; (ii) $F$ is called nonexpansive if $\|F(x) - F(y)\| \le \|x - y\|$ for all $x, y \in C$.

Definition 2.4. Let $H$ be a real Hilbert space and let $C$ be a nonempty closed convex subset of $H$. Then the orthogonal projection from $H$ onto $C$, denoted by $P_C$, is $P_C(x) = \arg\min_{y \in C} \|y - x\|$.

It is easy to verify that the orthogonal projection operator is nonexpansive.
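A small numerical check of this fact (our own example, using projection onto the closed unit ball, whose projector has a simple closed form): nonexpansivity means $\|P_C(x) - P_C(y)\| \le \|x - y\|$ for all $x, y$.

```python
import numpy as np

# Orthogonal projection onto the closed unit ball {x : ||x|| <= r}.
def project_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

# Sample random pairs and verify ||P(x) - P(y)|| <= ||x - y||.
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert (np.linalg.norm(project_ball(x) - project_ball(y))
            <= np.linalg.norm(x - y) + 1e-12)
print("nonexpansivity holds on all samples")
```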

Definition 2.5. Given any positive scalar $c$ and operator $T$, define the resolvent of $T$ by $J_c = (I + cT)^{-1}$, where $I$ denotes the identity operator on $H$. Also, define the Yosida approximation of $T$ by $T_c = (I - J_c)/c$.

It is known that $T_c(x) \in T(J_c(x))$ for all $x \in H$ and all $c > 0$.
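For a single-valued linear monotone operator $T(x) = Ax$ these objects are explicit matrices, and the relation $T_c(x) \in T(J_c(x))$ becomes an equality that can be checked numerically. The example below is ours, not from the paper.

```python
import numpy as np

# Resolvent J_c = (I + cT)^{-1} and Yosida approximation T_c = (I - J_c)/c
# for the linear monotone operator T(x) = A x (A symmetric positive definite).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
c = 0.5
J = np.linalg.inv(np.eye(2) + c * A)  # resolvent as a matrix

x = np.array([1.0, -2.0])
yosida = (x - J @ x) / c              # T_c(x) = (x - J_c x) / c

# For single-valued T, the Yosida approximation satisfies T_c(x) = T(J_c(x)).
assert np.allclose(yosida, A @ (J @ x))
print("T_c(x) = T(J_c(x)) verified")
```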

In the following, we list some useful lemmas.

Lemma 2.6 (see [14]). Let $c$ be any positive scalar. An operator $T$ is monotone if and only if its resolvent $J_c$ is firmly nonexpansive. Furthermore, $T$ is maximal monotone if and only if $J_c$ is firmly nonexpansive and $\mathrm{dom}(J_c) = H$.

Lemma 2.7 (see [15]). Let be a maximal monotone operator and let be a positive scalar, then

Lemma 2.8 (see [16]). For all $x \in H$, $\lim_{c \to \infty} J_c(x)$ exists and is the point of $T^{-1}(0)$ nearest to $x$.

Lemma 2.9 (see [17]). Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be sequences of nonnegative numbers satisfying $a_{n+1} \le (1 - t_n) a_n + b_n + c_n$, where $\{t_n\}$ is a sequence in $[0, 1]$. Assume that the following conditions are satisfied: (i) $t_n \to 0$ as $n \to \infty$ and $\sum_n t_n = \infty$; (ii) $b_n = o(t_n)$; (iii) $\sum_n c_n < \infty$. Then $a_n \to 0$.

#### 3. The Algorithm and Convergence Analysis

In this section, we state the proposed approximate proximal point algorithm formally and analyze its convergence.

Algorithm 3.1. Given , and with as . Find satisfying the inexact error criterion: Generate the new iterate by where the stepsize is defined by and , and are real sequences in satisfying (i) ; (ii) and ; (iii) .
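For orientation, the sketch below shows the general prediction-correction pattern this family of algorithms follows: a resolvent (prediction) step followed by a correction toward a fixed anchor $u$. The simple Halpern-type correction used here is only an illustrative stand-in for the paper's more general correction step (3.3), and all operator and parameter choices are ours.

```python
import numpy as np

# Generic prediction-correction PPA sketch (NOT the paper's exact scheme):
#   prediction: y_k = (I + c T)^{-1} x_k      (proximal subproblem)
#   correction: x_{k+1} = a_k u + (1 - a_k) y_k  (Halpern-type anchor step)
# with a_k -> 0 and sum a_k = infinity, as in condition (ii)-type assumptions.
def prediction_correction_ppa(A, u, x0, c=1.0, iters=500):
    n = len(x0)
    x = x0.astype(float)
    for k in range(1, iters + 1):
        y = np.linalg.solve(np.eye(n) + c * A, x)  # prediction via resolvent
        alpha = 1.0 / (k + 1)                      # alpha_k -> 0, sum diverges
        x = alpha * u + (1.0 - alpha) * y          # correction toward anchor u
    return x

A = np.diag([1.0, 2.0])  # T(x) = A x, unique zero at the origin
x = prediction_correction_ppa(A, u=np.zeros(2), x0=np.array([5.0, -5.0]))
print(np.linalg.norm(A @ x))  # residual near 0
```

The anchor $u$ is what drives strong convergence in such schemes: the limit is the zero of $T$ nearest to $u$, matching the flavor of Theorem 3.6.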

The following remark gives the relationships between Algorithm 3.1 and some existing algorithms.

Remark 3.2. When , Algorithm 3.1 reduces to the method proposed by He et al. [10]; when without considering the error term in (3.3), Algorithm 3.1 reduces to the method proposed by Yang and He [11]; when , Algorithm 3.1 reduces to the method proposed by Zhou et al. [12]; when , Algorithm 3.1 reduces to the method proposed by Qin et al. [13] with .

In the following, we give the convergence analysis of Algorithm 3.1, beginning with some lemmas. For convenience, we use the notation

Lemma 3.3 (see [6]). Let be a real Hilbert space and let be a nonempty closed convex subset of . For given and , there exists conforming to the set-valued equation (3.1) Furthermore, for any , one has

Lemma 3.4. Let be defined by (3.4), then

Proof. By the notation of , it follows from (3.2) that By the selection of , the proof is complete.

Lemma 3.5. Let $x^*$ be any zero of $T$ in $C$; then

Proof. Since and is nonexpansive, by Lemma 3.3, we have Considering the last two terms of the above equality, by the definition of and , we have Taking into account that , by Lemma 3.4, we further obtain Since , the assertion follows from (3.12) immediately.

We now prove the strong convergence of Algorithm 3.1.

Theorem 3.6. Let be generated by Algorithm 3.1. Suppose that . Then, the sequence converges strongly to a zero point of , where .

Proof. We divide the proof into three parts.Claim 1. Show that the sequence is bounded.
For any . Set . We want to prove that It is easy to see that (3.13) is true for . Now, assume that (3.13) holds for some . We prove that (3.13) holds for . By the definition of , it follows from that where the second inequality follows from Lemma 3.5. Hence, the sequence is bounded.
Claim 2. Show that , where . Note that the existence of is guaranteed by Lemma 2.8.
Since is maximal monotone, and , we have Since as , for any , we obtain By the nonexpansivity of , we have Since , we obtain which, combined with (3.16), yields By Lemma 2.7, we get Thus, Furthermore, we have Adopting the notation , since , we have Since is nonexpansive, by the notation of , we obtain Since as and , we get By using (3.23), we get Since , it follows from (3.25) and (3.27) that Combining (3.28) with (3.22), we obtain Noting that , we get
Claim 3. Show that as .
By Lemma 2.7 and by the nonexpansivity of , we have By the definition of , we have Since is nonexpansive, we obtain Now, we consider the last two terms on the right-hand side of the above equality. Since , it follows from (3.32) that Consequently, we get Furthermore, we obtain Since and , we have Setting , we have as . Denoting , and , by Lemma 2.9 we conclude that as , that is, as . The proof is complete.

#### 4. Applications

This section considers two interesting applications: convex programming problems and generalized variational inequalities.

Let $H$ be a real Hilbert space, let $C$ be a nonempty closed convex subset of $H$, and let $f: H \to (-\infty, +\infty]$ be a proper closed convex function. Consider the following convex programming problem:
$$\min_{x \in H} f(x). \tag{4.1}$$
In 1965, Moreau [18] showed that if $f$ is a proper closed convex function, then $\partial f$ is a maximal monotone operator, where $\partial f$ denotes the subdifferential of $f$, that is,
$$\partial f(x) = \{\xi \in H : f(y) \ge f(x) + \langle \xi, y - x \rangle \ \forall y \in H\}. \tag{4.2}$$
Moreover, $x^*$ is a minimizer of $f$ if and only if $0 \in \partial f(x^*)$. Thus, problem (4.1) can be transformed directly into the maximal monotone inclusion problem (1.1) with $T = \partial f$. Note that the inclusion $0 \in c_k \partial f(z) + z - x^k$ is equivalent to the following minimization:
$$z = \arg\min_{y} \left\{ f(y) + \frac{1}{2c_k} \|y - x^k\|^2 \right\}. \tag{4.4}$$
Hence, when dealing with convex programming problems, we use (4.4) instead of (3.1); see [12, 13], for example. Specifically, Theorem 3.6 can be stated as follows.
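For a concrete instance of the proximal minimization (4.4), take $f(y) = \lambda \|y\|_1$: the subproblem then has the well-known closed-form soft-thresholding solution. This is a standard fact, not specific to the paper; the function names are ours.

```python
import numpy as np

# Proximal subproblem for a convex f:
#   prox_{c f}(x) = argmin_y { f(y) + ||y - x||^2 / (2c) }.
# For f(y) = lam * ||y||_1 this is componentwise soft-thresholding:
#   [prox(x)]_i = sign(x_i) * max(|x_i| - c*lam, 0).
def prox_l1(x, c, lam):
    t = c * lam
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([3.0, -0.2, 0.5])
z = prox_l1(x, c=1.0, lam=0.5)  # components shrunk toward 0 by 0.5, clipped
print(z)
```

Running a PPA with this proximal map is exactly the classical proximal minimization algorithm for the nonsmooth objective $\lambda \|y\|_1$.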

Theorem 4.1. Let be a real Hilbert space, let be a nonempty closed convex subset of , and let be a proper closed convex function such that . Let with as , be a sequence in satisfying with and . Given and , let be generated by where the stepsize is defined by and , and are real sequences in satisfying(i);(ii) and ;(iii).Then, the sequence converges strongly to a minimizer of nearest to .

We now turn to another application of the proposed method. In recent years, approximate proximal point algorithms have become an important family of methods for solving monotone variational inequalities; see [19–21], for example. Let $H$ be a real Hilbert space, let $C$ be a nonempty closed convex subset of $H$, and let $F: C \to 2^H$ be a monotone set-valued mapping. Consider the following generalized variational inequality ($\mathrm{GVI}(F, C)$): find a vector $x^* \in C$ and $\xi^* \in F(x^*)$ such that
$$\langle \xi^*, y - x^* \rangle \ge 0 \quad \forall y \in C. \tag{4.7}$$
When $F$ is single-valued, $\mathrm{GVI}(F, C)$ reduces to the classical variational inequality $\mathrm{VI}(F, C)$. Let $T = F + N_C$, where $N_C$ denotes the normal cone operator with respect to $C$, namely,
$$N_C(x) = \begin{cases} \{\xi \in H : \langle \xi, y - x \rangle \le 0 \ \forall y \in C\}, & x \in C, \\ \emptyset, & x \notin C. \end{cases}$$
In this way, $\mathrm{GVI}(F, C)$ can be transformed easily into the maximal monotone inclusion problem (1.1). In particular, for given $x^k$ and $c_k > 0$, using the proximal subproblem (3.1) to solve problem (4.7) is equivalent to finding $z \in C$ and $\xi \in F(z)$ such that
$$\langle c_k \xi + z - x^k, y - z \rangle \ge 0 \quad \forall y \in C.$$
Specifically, when dealing with $\mathrm{GVI}(F, C)$, Theorem 3.6 can be stated as follows.
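For single-valued $F$, a standard equivalent characterization (our illustrative example, not the paper's algorithm) is the projection fixed-point form: $x^*$ solves $\mathrm{VI}(F, C)$ if and only if $x^* = P_C(x^* - \lambda F(x^*))$ for any $\lambda > 0$. The check below uses $F(x) = x - a$ over the unit ball, whose VI solution is the projection of $a$ onto $C$.

```python
import numpy as np

# Projection onto the closed unit ball.
def project_ball(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

a = np.array([3.0, 4.0])
F = lambda x: x - a          # single-valued monotone mapping

# For F(x) = x - a, the VI solution over C is P_C(a).
x_star = project_ball(a)     # = [0.6, 0.8]

# Fixed-point characterization: x* = P_C(x* - lam * F(x*)) for any lam > 0.
assert np.allclose(x_star, project_ball(x_star - 0.7 * F(x_star)))
print("fixed-point characterization verified")
```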

Theorem 4.2. Let be a real Hilbert space, let be a nonempty closed convex subset of , and let be a monotone set-valued mapping. Suppose that the solution set of the generalized variational inequality (4.7) is nonempty. Let with as , be a sequence in such that with and . Given and , let be generated by where the stepsize is defined by and , and are real sequences in satisfying (i) ; (ii) and ; (iii) . Then, the sequence converges strongly to a solution of nearest to .

#### 5. Preliminary Computational Results

In this section, we give some numerical experiments and present comparisons between Algorithm 3.1 and the algorithm presented in Qin et al. [13]. All codes are written in MATLAB 7.0 and run on a computer with an Intel Core 2 1.86 GHz CPU under Windows XP. Throughout the computational experiments, the parameters are chosen as . The stopping criterion is .

Example 5.1. Consider the following generalized variational inequality problem in four variables, which was tested in [22]. Let and be defined by Then is a solution of this problem.

We solve this problem with different starting points. The numbers of iterations (It. num.) and the computation times (CPU(s)) are summarized in Table 1, from which we can see that Algorithm 3.1 performs better. In addition, for this problem, the iteration counts and computation times are not very sensitive to the starting point.

Table 1: Numerical results for Example 5.1.

| Starting point | Algorithm 3.1 It. num. | Algorithm 3.1 CPU(s) | Algorithm in [13] It. num. | Algorithm in [13] CPU(s) |
|---|---|---|---|---|
| (0, 0, 0, 1) | 30 | 0.2310 | 37 | 0.3470 |
| (0, 0, 1, 0) | 32 | 0.2310 | 37 | 0.3470 |
| (0, 0.5, 0.5, 0) | 33 | 0.2310 | 38 | 0.3790 |
| (0.5, 0, 0.5, 0) | 33 | 0.2470 | 38 | 0.3780 |

#### 6. Conclusion

This paper suggests an approximate proximal point algorithm for maximal monotone inclusion problems by adopting a more general correction step. Under suitable and standard assumptions on the algorithm parameters, we establish the strong convergence of the algorithm. Note that Algorithm 3.1 includes some existing methods as special cases; the proposed algorithm is therefore expected to be widely applicable.

#### Acknowledgments

The authors are grateful to the anonymous reviewer and the editor for their valuable comments and suggestions, which have greatly improved the paper. The work was supported by the National Natural Science Foundation of China (60974082) and the Fundamental Research Funds for the Central Universities (K50511700006, K50511700008).

#### References

1. R. S. Burachik, A. N. Iusem, and B. F. Svaiter, “Enlargement of monotone operators with applications to variational inequalities,” Set-Valued Analysis, vol. 5, no. 2, pp. 159–180, 1997.
2. M. C. Ferris and J. S. Pang, “Engineering and economic applications of complementarity problems,” SIAM Review, vol. 39, no. 4, pp. 669–713, 1997.
3. F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. 1-2, Springer, Heidelberg, Germany, 2003.
4. P. Tseng, “A modified forward-backward splitting method for maximal monotone mappings,” SIAM Journal on Control and Optimization, vol. 38, no. 2, pp. 431–446, 2000.
5. B. Martinet, “Régularisation d'inéquations variationnelles par approximations successives,” Revue Française d'Informatique et de Recherche Opérationnelle, vol. 4, pp. 154–158, 1970.
6. J. Eckstein, “Approximate iterations in Bregman-function-based proximal algorithms,” Mathematical Programming, vol. 83, no. 1, pp. 113–123, 1998.
7. R. T. Rockafellar, “Monotone operators and the proximal point algorithm,” SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877–898, 1976.
8. O. Güler, “On the convergence of the proximal point algorithm for convex minimization,” SIAM Journal on Control and Optimization, vol. 29, no. 2, pp. 403–419, 1991.
9. M. V. Solodov and B. F. Svaiter, “Forcing strong convergence of proximal point iterations in a Hilbert space,” Mathematical Programming A, vol. 87, no. 1, pp. 189–202, 2000.
10. B. He, L. Liao, and Z. Yang, “A new approximate proximal point algorithm for maximal monotone operator,” Science in China. Series A, vol. 46, no. 2, pp. 200–206, 2003.
11. Z. Yang and B. He, “A relaxed approximate proximal point algorithm,” Annals of Operations Research, vol. 133, pp. 119–125, 2005.
12. H. Zhou, L. Wei, and B. Tan, “Convergence theorems of approximate proximal point algorithm for zeroes of maximal monotone operators in Hilbert spaces,” International Journal of Mathematical Analysis, vol. 1, no. 4, pp. 175–186, 2007.
13. X. Qin, S. M. Kang, and Y. J. Cho, “Approximating zeros of monotone operators by proximal point algorithms,” Journal of Global Optimization, vol. 46, no. 1, pp. 75–87, 2010.
14. J. Eckstein and D. P. Bertsekas, “On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Mathematical Programming, vol. 55, no. 3, pp. 293–318, 1992.
15. G. J. Minty, “On the monotonicity of the gradient of a convex function,” Pacific Journal of Mathematics, vol. 14, pp. 243–247, 1964.
16. R. E. Bruck, “A strongly convergent iterative solution of $0\in Ux$ for a maximal monotone operator U in Hilbert space,” Journal of Mathematical Analysis and Applications, vol. 48, pp. 114–126, 1974.
17. L. S. Liu, “Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces,” Journal of Mathematical Analysis and Applications, vol. 194, no. 1, pp. 114–125, 1995.
18. J.-J. Moreau, “Proximité et dualité dans un espace hilbertien,” Bulletin de la Société Mathématique de France, vol. 93, pp. 273–299, 1965.
19. A. Bnouhachem and M. A. Noor, “Inexact proximal point method for general variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 324, no. 2, pp. 1195–1212, 2006.
20. B. S. He, X. L. Fu, and Z. K. Jiang, “Proximal-point algorithm using a linear proximal term,” Journal of Optimization Theory and Applications, vol. 141, no. 2, pp. 299–319, 2009.
21. M. Li, L. Z. Liao, and X. M. Yuan, “Proximal point algorithms for general variational inequalities,” Journal of Optimization Theory and Applications, vol. 142, no. 1, pp. 125–145, 2009.
22. C. Fang and Y. He, “A double projection algorithm for multi-valued variational inequalities and a unified framework of the method,” Applied Mathematics and Computation, vol. 217, no. 23, pp. 9543–9551, 2011.

Copyright © 2011 Lingling Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.