Abstract

We introduce a proximal point algorithm for solving minimization problems based on the technique of Güler. The algorithm is obtained by replacing the usual quadratic proximal term with a class of convex nonquadratic distance-like functions, and it can be seen as an extragradient iterative scheme. We establish the convergence rate of this new proximal point method under mild assumptions and show that this rate estimate improves on the available ones.

1. Introduction

The purpose of this paper is twofold. First, it proposes an extension of the proximal point method introduced by Güler [1] in 1992, in which the usual quadratic proximal term is replaced by a class of strictly convex distance-like functions, called Bregman functions. Second, it offers a general framework for the convergence analysis of Güler's proximal point method. This framework is general enough to accommodate different classes of Bregman functions and still yield simple convergence proofs. The methods analyzable in this context are called Güler's generalized proximal point algorithms, and they are closely related to the Bregman proximal methods [2–5]. The analysis we develop differs from the works [4, 5], since our method is based on Güler's technique.

2. Preliminaries

To be more specific, we consider the minimization problem in the form
\[
\min\{ f(x) : x \in \mathbb{R}^n \}, \tag{2.1}
\]
where $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a closed proper convex function. To solve problem (2.1), Teboulle [6], Chen and Teboulle [2, 6], Eckstein [4], and Burachik [3] proposed a general scheme using Bregman proximal mappings of the type
\[
x \mapsto \operatorname{argmin}_{u} \Big\{ f(u) + \frac{1}{\lambda}\, D_h(u, x) \Big\}, \tag{2.2}
\]
where $D_h$ is given by
\[
D_h(u, v) = h(u) - h(v) - \langle \nabla h(v), u - v \rangle, \tag{2.3}
\]
with $h$ a strictly convex and continuously differentiable function.
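To fix ideas, the following minimal sketch evaluates the Bregman proximal mapping (2.2) numerically. It is our illustration, not the implementation of the cited authors; the helper names, the BFGS subproblem solver, and the smooth test objective are our own choices.

```python
# Illustrative sketch of the Bregman proximal mapping (2.2); the helper
# names and the numerical subproblem solver are assumptions of this
# sketch, not the cited authors' implementation.
import numpy as np
from scipy.optimize import minimize

def bregman_distance(h, grad_h, u, v):
    """D_h(u, v) = h(u) - h(v) - <grad h(v), u - v>, as in (2.3)."""
    return h(u) - h(v) - grad_h(v) @ (u - v)

def bregman_prox(f, h, grad_h, x, lam):
    """Evaluate argmin_u { f(u) + (1/lam) * D_h(u, x) } numerically."""
    obj = lambda u: f(u) + bregman_distance(h, grad_h, u, x) / lam
    return minimize(obj, x0=x, method="BFGS").x

# With the quadratic kernel h(u) = 0.5*||u||^2, the mapping (2.2)
# reduces to the classical proximal mapping of f.
h, grad_h = lambda u: 0.5 * u @ u, lambda u: u
f = lambda u: np.sum((u - 1.0) ** 2)      # a smooth convex test objective
x_next = bregman_prox(f, h, grad_h, np.zeros(3), lam=1.0)
```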

Throughout this paper, $\|\cdot\|$ denotes the $\ell_2$-norm and $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product in $\mathbb{R}^n$. Let $T$ be a continuous single-valued mapping from $\mathbb{R}^n$ into $\mathbb{R}^n$. The mapping $T$ is Lipschitz continuous with Lipschitz constant $L > 0$ if $\|T(x) - T(y)\| \le L \|x - y\|$ for all $x, y \in \mathbb{R}^n$. We also denote by $d(x, C)$ the distance from $x$ to the set $C$, given by $d(x, C) = \inf\{ \|x - y\| : y \in C \}$. Further notation and definitions used in this paper are standard in convex analysis and may be found in Rockafellar's book [7].

Kernels of this type were first introduced in [8] in 1967. The corresponding algorithm using these Bregman proximal mappings is called the generalized proximal point method (GPPM) and is also known under the terminology of Bregman proximal methods. These proximal methods solve (2.1) by considering a sequence of unconstrained minimization problems, which can be summarized as follows.

Algorithm 2.1. (1) Initialize $x^0 \in S$.
(2) Compute the solution $x^{k+1}$ by the iterative scheme
\[
x^{k+1} = \operatorname{argmin}_{x} \Big\{ f(x) + \frac{1}{\lambda_k}\, D_h(x, x^k) \Big\}, \tag{2.4}
\]
where $\{\lambda_k\}$ is a sequence of positive numbers and $D_h$ is defined by (2.3).

For $D_h(x, y) = \frac{1}{2}\|x - y\|^2$, Algorithm 2.1 coincides with the classical proximal point algorithm (PPA) introduced by Moreau [9] and Martinet [10].

Under mild assumptions on the data of (2.1), ergodic convergence was proved in [2, 5] when $\sigma_k = \sum_{j=1}^{k} \lambda_j \to \infty$, with the following global rate of convergence estimate:
\[
f(x^k) - \min f \le \frac{D_h(x^*, x^0)}{\sigma_k}, \qquad \sigma_k = \sum_{j=1}^{k} \lambda_j. \tag{2.5}
\]
Our purpose in this paper is to propose an algorithm of the same type as Algorithm 2.1 with a better convergence rate. To this end, we propose to combine Güler's scheme [1] with the Bregman proximal method. The main difference concerns the generation of an additional sequence $\{y^k\}$ in the unconstrained minimization (2.4), in such a way that
\[
x^{k+1} = \operatorname{argmin}_{x} \Big\{ f(x) + \frac{1}{\lambda_k}\, D_h(x, y^k) \Big\}. \tag{2.6}
\]
We show (see Section 4) that this new proximal method possesses the rate estimate
\[
f(x^k) - \min f \le \frac{4\, D_h(x^*, x^0)}{\big( \sum_{j=0}^{k-1} \sqrt{\lambda_j}\, \big)^2},
\]
which is faster than (2.5). Further, convergence in terms of the objective values occurs when $\sum_k \sqrt{\lambda_k} = \infty$, which is weaker than $\sum_k \lambda_k = \infty$.
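For instance, with constant step sizes the two estimates just displayed compare as follows (a worked instance of the rates above):
\[
\lambda_j \equiv \lambda > 0:\qquad
\frac{D_h(x^*, x^0)}{\sigma_k} = \frac{D_h(x^*, x^0)}{k\lambda} = O(1/k),
\qquad
\frac{4\, D_h(x^*, x^0)}{\big( \sum_{j=0}^{k-1} \sqrt{\lambda}\, \big)^2} = \frac{4\, D_h(x^*, x^0)}{k^2 \lambda} = O(1/k^2).
\]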

We briefly recall here the notion of Bregman functions, also called $D$-functions, introduced by Brègman ([8], 1967) and developed and used in proximal theory in [4, 6, 11–13]. Let $S$ be an open subset of $\mathbb{R}^n$, let $h$ be a finite-valued continuously differentiable function on $\bar{S}$, and let $D_h : \bar{S} \times S \to \mathbb{R}$ be defined by
\[
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y), x - y \rangle.
\]

Definition 2.2. $h$ is called a Bregman function with zone $S$, or a $D$-function, if:
(a) $h$ is continuously differentiable on $S$ and continuous on $\bar{S}$,
(b) $h$ is strictly convex on $\bar{S}$,
(c) for every $\alpha \in \mathbb{R}$, the partial level sets $\{ x \in \bar{S} : D_h(x, y) \le \alpha \}$ and $\{ y \in S : D_h(x, y) \le \alpha \}$ are bounded for every $y \in S$ and $x \in \bar{S}$, respectively,
(d) if $\{y^k\} \subset S$ is a convergent sequence with limit $y^*$, then $D_h(y^*, y^k) \to 0$,
(e) if $\{x^k\} \subset \bar{S}$ and $\{y^k\} \subset S$ are sequences such that $y^k \to y^* \in \bar{S}$, $\{x^k\}$ is bounded, and $D_h(x^k, y^k) \to 0$, then $x^k \to y^*$.

From the above definition, we extract the following properties (see, for instance, [6, 13]).

Lemma 2.3. Let $h$ be a Bregman function with zone $S$. Then,
(i) $D_h(x, y) \ge 0$ for $x \in \bar{S}$ and $y \in S$, and $D_h(x, y) = 0$ if and only if $x = y$,
(ii) for all $x \in \bar{S}$ and $y, z \in S$,
\[
D_h(x, z) = D_h(x, y) + D_h(y, z) + \langle \nabla h(y) - \nabla h(z), x - y \rangle,
\]
(iii) for all $y, z \in S$, $\langle \nabla h(y) - \nabla h(z), y - z \rangle = D_h(y, z) + D_h(z, y) \ge 0$,
(iv) for all $y \in S$, $D_h(\cdot, y)$ is strictly convex on $\bar{S}$,
(v) if $\{y^k\} \subset S$ is such that $y^k \to y^* \in S$, then $\nabla h(y^k) \to \nabla h(y^*)$ and $D_h(y^k, y^*) \to 0$.
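As a sanity check on the three-point identity (ii) as stated above, expanding the definition (2.3) term by term gives
\[
D_h(x,y) + D_h(y,z) - D_h(x,z)
= -\langle \nabla h(y), x-y \rangle - \langle \nabla h(z), y-z \rangle + \langle \nabla h(z), x-z \rangle
= \langle \nabla h(z) - \nabla h(y),\, x-y \rangle,
\]
since the $h$-values cancel; rearranging yields exactly the identity in (ii).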

Lemma 2.4. (i) Let $h : \mathbb{R}^n \to \mathbb{R}$ be a strictly convex, continuously differentiable function such that $\lim_{\|x\| \to \infty} h(x)/\|x\| = +\infty$; then $h$ is a Bregman function (with zone $\mathbb{R}^n$).
(ii) If $h$ is a Bregman function, then so is $c\,h$ for any $c > 0$.

Remark 2.5. $D_h$ cannot be considered as a distance because it lacks the triangle inequality and the symmetry property; $D_h$ is usually called an entropy distance.
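The following small computation (our illustration) contrasts the two classical kernels: the quadratic kernel yields the symmetric squared Euclidean distance, while the negative-entropy kernel yields the asymmetric Kullback–Leibler-type divergence of Remark 2.5.

```python
# Two classical Bregman kernels: the quadratic kernel gives the
# symmetric squared distance, the entropy kernel gives the asymmetric
# Kullback-Leibler divergence (an "entropy distance").
import numpy as np

def D(h, grad_h, x, y):
    return h(x) - h(y) - grad_h(y) @ (x - y)

# h(x) = 0.5*||x||^2  =>  D_h(x, y) = 0.5*||x - y||^2  (symmetric)
h_quad, g_quad = lambda x: 0.5 * x @ x, lambda x: x
# h(x) = sum x_i log x_i  =>  D_h(x, y) = KL-type divergence (asymmetric)
h_ent = lambda x: np.sum(x * np.log(x))
g_ent = lambda x: np.log(x) + 1.0

x, y = np.array([0.2, 0.8]), np.array([0.5, 0.5])
print(D(h_quad, g_quad, x, y), D(h_quad, g_quad, y, x))  # equal values
print(D(h_ent, g_ent, x, y), D(h_ent, g_ent, y, x))      # different values
```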

The paper is organized as follows. In Section 3, we briefly recall the proximal point method of Güler. Section 4 is devoted to the presentation and convergence analysis of the proposed algorithm, including its finite convergence. In Section 5 we establish the global convergence rate of the method, and Section 6 concludes with possible extensions, notably to variational inequality problems.

3. Extragradient Algorithm

In 1992, Güler [1] developed a new proximal point approach, similar to the classical one (PPA), based on an idea of Nesterov [14].

Güler's proximal point algorithm (GPPA) can be summarized as follows.

Algorithm 3.1. (i) Initialize $x^0 \in \mathbb{R}^n$, $v^0 = x^0$, and $\nu_0 > 0$.
Define $\alpha_k \in (0, 1)$ as the positive root of $\alpha_k^2 = \lambda_k \nu_k (1 - \alpha_k)$, and set $\nu_{k+1} = (1 - \alpha_k)\nu_k$.
(ii) Compute $y^k = (1 - \alpha_k)\, x^k + \alpha_k\, v^k$.
(iii) Compute the solution $x^{k+1}$ by the iterative scheme
\[
x^{k+1} = \operatorname{argmin}_{x} \Big\{ f(x) + \frac{1}{2\lambda_k} \|x - y^k\|^2 \Big\}, \qquad
v^{k+1} = v^k + \frac{1}{\alpha_k}\, (x^{k+1} - y^k).
\]

For the convergence analysis, see Güler [1].

Remark 3.2. The GPPA can be seen as a suitable conjugate gradient type modification of the PPA of Rockafellar applied to (2.1).
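A compact sketch of this accelerated scheme on the simplest possible instance follows. This is our illustration under stated assumptions: the parameter recursion mirrors the form of Algorithm 3.1 as given above, and the proximal subproblem is solved in closed form for $f(x) = \frac{1}{2}\|x\|^2$ only.

```python
# A minimal sketch of an accelerated proximal point iteration in the
# Nesterov/Güler spirit, following the update pattern of Algorithm 3.1.
# The recursion alpha_k^2 = lam_k * nu_k * (1 - alpha_k) is an
# assumption of this sketch, not a verbatim transcription of [1].
import numpy as np

def prox_quadratic(y, lam):
    # prox_{lam f}(y) for f(x) = 0.5*||x||^2 is y / (1 + lam).
    return y / (1.0 + lam)

def gppa(x0, lams, nu0=1.0):
    x, v, nu = x0.copy(), x0.copy(), nu0
    for lam in lams:
        p = lam * nu
        alpha = (-p + np.sqrt(p * p + 4.0 * p)) / 2.0   # root in (0, 1)
        y = (1.0 - alpha) * x + alpha * v               # extrapolated point
        x_new = prox_quadratic(y, lam)                  # proximal step
        v = v + (x_new - y) / alpha                     # auxiliary sequence
        x, nu = x_new, (1.0 - alpha) * nu
    return x

x_final = gppa(np.ones(2), lams=[1.0] * 50)  # approaches the minimizer 0
```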

4. Main Result

4.1. Introduction

The method that we are proposing is a modification of Güler's new proximal point approach (GPPA) discussed in Section 3 and can be considered as a nonlinear (or nonquadratic) version of the GPPA with Bregman kernels. We show that this method, which we call the generalized Güler proximal point algorithm (GGPPA), retains the strong convergence results obtained by Güler [1] and therefore provides faster (global) convergence rates than the classical Bregman proximal point methods (cf. [2, 4–6, 11, 13, 15]). The proposed algorithm, which generalizes Güler's proximal point algorithm, can be summarized as follows.

Algorithm 4.1. (i) Initialize: $x^0 = v^0 \in S$ and $\nu_0 > 0$.
Define $\alpha_k \in (0, 1)$ as the positive root of $\alpha_k^2 = \lambda_k \nu_k (1 - \alpha_k)$, and set $\nu_{k+1} = (1 - \alpha_k)\nu_k$.
(ii) Compute $y^k$ such that $y^k = (1 - \alpha_k)\, x^k + \alpha_k\, v^k \in S$.
(iii) Compute the solution $x^{k+1}$ by the iterative scheme
\[
x^{k+1} = \operatorname{argmin}_{x} \Big\{ f(x) + \frac{1}{\lambda_k}\, D_h(x, y^k) \Big\}, \qquad
v^{k+1} = v^k + \frac{1}{\alpha_k}\, (x^{k+1} - y^k).
\]
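For concreteness, one iteration of Algorithm 4.1 can be sketched as follows. This is our illustration of the reconstruction above: `bregman_prox` is the hypothetical numerical subproblem solver sketched in Section 2, and the parameter recursion mirrors the quadratic case.

```python
# Hedged sketch of one GGPPA iteration: Güler's extrapolation combined
# with a Bregman proximal step; 'bregman_prox' is the illustrative
# solver from Section 2, passed in as an argument.
import numpy as np

def ggppa_step(f, h, grad_h, x, v, nu, lam, bregman_prox):
    p = lam * nu
    alpha = (-p + np.sqrt(p * p + 4.0 * p)) / 2.0   # alpha_k in (0, 1)
    y = (1.0 - alpha) * x + alpha * v               # extrapolated point y^k
    x_new = bregman_prox(f, h, grad_h, y, lam)      # argmin f + D_h(., y^k)/lam_k
    v_new = v + (x_new - y) / alpha                 # auxiliary sequence update
    return x_new, v_new, (1.0 - alpha) * nu         # nu_{k+1} = (1 - alpha_k) nu_k
```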

In this section we develop convergence results for the generalized Güler proximal point algorithm (GGPPA) presented above. Our analysis is based on the following lemma.

Lemma 4.2 ([1, page 654]). One has
\[
\nu_k \le \frac{4 \nu_0}{\big( 2 + \sqrt{\nu_0} \sum_{j=0}^{k-1} \sqrt{\lambda_j}\, \big)^2}
\]
for all $k \ge 1$ and $\nu_0 > 0$.

Theorem 4.3. For all $x \in \bar{S}$ such that $D_h(x, x^0) < \infty$, one has the following convergence rate estimate:
\[
f(x^k) - f(x) \le \nu_k\, D_h(x, x^0) \le \frac{4 \nu_0\, D_h(x, x^0)}{\big( 2 + \sqrt{\nu_0} \sum_{j=0}^{k-1} \sqrt{\lambda_j}\, \big)^2}. \tag{4.3}
\]

Proof. Using the fact that $\nu_{k+1} = (1 - \alpha_k)\nu_k$ and $\alpha_k^2 = \lambda_k \nu_k (1 - \alpha_k)$, we obtain $f(x^k) - f(x) \le \nu_k\, D_h(x, x^0)$. Since Lemma 4.2 bounds $\nu_k$, (4.3) holds.

Theorem 4.4. Consider the sequence $\{x^k\}$ generated by the GGPPA and let $x^*$ be a minimizer of $f$ on $\bar{S}$. Assume that
(1) $h$ is a Bregman function with zone $S$ such that $\operatorname{dom} f \subset \bar{S}$,
(2) $\operatorname{Im}(\nabla h) = \mathbb{R}^n$ or $\operatorname{Im}(\nabla h)$ is open,
(3) $\nabla h$ is Lipschitz continuous with coefficient $L > 0$.
Then
(a) for all $k$, the sequence $\{x^k\}$ is well defined,
(b) the GGPPA possesses the following convergence rate estimate:
\[
f(x^k) - f(x^*) \le \frac{4\, D_h(x^*, x^0)}{\big( \sum_{j=0}^{k-1} \sqrt{\lambda_j}\, \big)^2}, \tag{4.5}
\]
(c) $f(x^k) \to f(x^*)$ when $\sum_k \sqrt{\lambda_k} = \infty$,
(d) $x^k \to x^*$ if $x^*$ is the unique minimizer of $f$.

Proof. (a) Follows from [8, Theorem 4].
(b) Taking $x = x^*$ in (4.3) and using $4\nu_0 / \big(2 + \sqrt{\nu_0}\, s\big)^2 \le 4/s^2$ with $s = \sum_{j=0}^{k-1} \sqrt{\lambda_j}$, we obtain the estimate. Since $\nu_0 > 0$ is arbitrary, (4.5) holds.
(c) Is obvious from (4.5).
(d) It suffices to observe that if $\sum_k \sqrt{\lambda_k} = \infty$, we have $f(x^k) \to f(x^*)$; the boundedness of the partial level sets (Definition 2.2(c)) and the uniqueness of the minimizer then yield $x^k \to x^*$.

4.2. Finite Convergence

Note that the finite convergence property was established for the classical proximal point algorithm in the case of sharp minima; see, for example, [16]. Recently, Kiwiel [5] extended this property to his generalized Bregman proximal method (BPM). In the following theorem we prove that the GGPPA also has this property. Our proof is based on Kiwiel's [5, Theorem 6.1, page 1151].

Definition 4.5. A closed proper convex function $f$ is said to have a sharp minimum on $\bar{S}$ if and only if there exists $\rho > 0$ such that
\[
f(x) \ge \min f + \rho\, d(x, X^*) \quad \text{for all } x \in \bar{S},
\]
where $X^* = \operatorname{argmin} f$.
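For example, under this definition $f(x) = \|x\|$ has a sharp minimum, while $f(x) = \|x\|^2$ does not:
\[
f(x) = \|x\|:\quad f(x) - \min f = d(x, X^*) \text{ with } X^* = \{0\},\ \rho = 1;
\qquad
f(x) = \|x\|^2:\quad f(x) - \min f = d(x, X^*)^2 = o\big(d(x, X^*)\big) \text{ near } X^*.
\]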

Theorem 4.6. Under the same hypotheses as in Theorem 4.4, if the GGPPA is applied with $f$ having a sharp minimum on $\bar{S}$ and with $\{\lambda_k\}$ bounded away from zero, then there exists $k_0$ such that $x^{k_0} \in X^*$ and $f(x^{k_0}) = \min f$.

Proof. Straightforward, using Theorem 4.4 and [5, Theorem 6.1, page 1151].

5. Convergence Rate of GGPPA

If $\{x^k\}$ is a sequence of points, one forms the sequence of weighted averages $\{\bar{x}^k\}$ given by
\[
\bar{x}^k = \frac{1}{\sigma_k} \sum_{j=1}^{k} \lambda_j\, x^j, \qquad \sigma_k = \sum_{j=1}^{k} \lambda_j.
\]
If the sequence $\{\bar{x}^k\}$ converges, then $\{x^k\}$ is said to converge ergodically.
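Computationally, the whole sequence of weighted averages can be formed with running sums, as in the following small illustrative helper (the row-wise stacking convention is ours):

```python
# Computing the weighted (ergodic) averages xbar^k defined above,
# assuming the iterates x^j are stacked row-wise in 'xs'.
import numpy as np

def ergodic_averages(xs, lams):
    """xbar^k = (1/sigma_k) * sum_{j<=k} lam_j * x^j, sigma_k = sum_{j<=k} lam_j."""
    xs, lams = np.asarray(xs, float), np.asarray(lams, float)
    weighted = np.cumsum(lams[:, None] * xs, axis=0)  # running weighted sums
    return weighted / np.cumsum(lams)[:, None]        # divide by sigma_k

xbar = ergodic_averages(xs=[[1.0], [0.5], [0.25]], lams=[1.0, 1.0, 2.0])
```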

Theorem 5.1. The GGPPA possesses the following convergence rate:
\[
f(x^k) - \min f = O\Big( 1 \Big/ \big( \textstyle\sum_{j=0}^{k-1} \sqrt{\lambda_j}\, \big)^2 \Big),
\]
that is, $\big( \sum_{j=0}^{k-1} \sqrt{\lambda_j}\, \big)^2 \big( f(x^k) - \min f \big)$ remains bounded. Furthermore, if $\{\lambda_k\}$ is nonincreasing, then one has
\[
f(x^k) - \min f = o\Big( 1 \Big/ \big( \textstyle\sum_{j=0}^{k-1} \sqrt{\lambda_j}\, \big)^2 \Big).
\]

Proof. Let $x^*$ be a minimizer of $f$. For brevity, we denote $f^* = f(x^*)$ and $\sigma_k = \sum_{j=1}^{k} \lambda_j$.

Writing the optimality condition of the unconstrained minimization step of the GGPPA and using the convexity of $f$, we obtain the key inequality (5.5). Setting $x = x^*$ in (5.5) gives one bound, and taking $x = x^k$ gives (5.7). Alternatively, setting $x = \bar{x}^k$ in (5.5) and using the Cauchy–Schwarz inequality yields a further bound. Since $f$ is convex, $f(\bar{x}^k) \le \sigma_k^{-1} \sum_{j=1}^{k} \lambda_j f(x^j)$; combining this relation with inequality (5.7), we get the relation (5.12). Denoting the right-hand side of (5.12) by a shorthand and dividing both terms by $\sigma_k$, we observe that the left-hand term is positive, so the resulting sum can be bounded further.

Now, following Güler [17, page 410], we use the corresponding summation inequality. To apply it, it suffices to show that the term in question is dominated by the quantity appearing in the three-point relation of Lemma 2.3(ii); this follows from the inequality established in the proof of Theorem 4.4(b). Therefore we obtain (5.18).

To continue the proof, we separate three cases.

Case 1. Suppose $\{\lambda_k\}$ is constant. Then (5.18) simplifies, and by summation from $j = 1$ to $k$ we obtain a telescoping estimate. Since $x^*$ is an arbitrary solution, we can take the infimum over the solution set, and by multiplying both terms by $\sigma_k$ we obtain the rate. Moreover, $\{x^k\}$ and $\{y^k\}$ converge to the same point (as can be seen from the formula for $y^k$ in the GGPPA, since $\alpha_k \to 0$); hence the announced estimate follows.

Case 2. Suppose $\{\lambda_k\}$ is nonincreasing. Then the coefficients appearing in (5.18) are monotone, and by summation from $j = 1$ to $k$ we may argue exactly as in Case 1 to obtain the announced estimate.

Case 3. Suppose $\{\lambda_k\}$ is increasing. In this case we observe that the corresponding sequence of coefficients is increasing, which may imply a divergence of the approach.

Since $f$ is convex, the following convergence rate estimate can be derived directly.

Corollary 5.2. If one assumes that $\lambda_k \ge \lambda > 0$ for all $k$, then
\[
f(x^k) - \min f \le \frac{4\, D_h(x^*, x^0)}{\lambda\, k^2},
\]
that is, the GGPPA converges at the rate $O(1/k^2)$.

6. Conclusion

We have introduced an extragradient method for convex minimization problems. The algorithm is based on a generalization of the technique originally proposed by Nesterov [14] and adapted by Güler in [1, 17], in which the usual quadratic proximal term is replaced by a class of convex nonquadratic distance-like functions. The new algorithm has a better theoretical convergence rate than the available ones. This naturally motivates the study of the numerical efficiency of the new algorithm and of its application to variational inequality problems [18, 19]. Further efforts are also needed to extend the present study to nonconvex settings and to apply it to nonconvex equilibrium problems [20].

Acknowledgments

The authors would like to thank Dr. Osman Güler for providing them with reference [12] and the English translation of the original paper of Nesterov [14].