
International Journal of Mathematics and Mathematical Sciences

Volume 2011 (2011), Article ID 305856, 12 pages

http://dx.doi.org/10.1155/2011/305856

## A Penalization-Gradient Algorithm for Variational Inequalities

Abdellatif Moudafi^{1} and Eman Al-Shemas^{2}

^{1}Département Scientifique Interfacultaires, Université des Antilles et de la Guyane, CEREGMIA, 97275 Schoelcher, Martinique, France

^{2}Department of Mathematics, College of Basic Education, PAAET Main Campus-Shamiya, Kuwait

Received 11 February 2011; Accepted 5 April 2011

Academic Editor: Giuseppe Marino

Copyright © 2011 Abdellatif Moudafi and Eman Al-Shemas. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper is concerned with the study of a penalization-gradient algorithm for solving variational inequalities, namely, find $\bar{x} \in C$ such that $\langle A\bar{x}, y - \bar{x} \rangle \geq 0$ for all $y \in C$, where $A: H \to H$ is a single-valued operator and $C$ is a closed convex set of a real Hilbert space $H$. Given $\Psi: H \to \mathbb{R} \cup \{+\infty\}$ which acts as a penalization function with respect to the constraint $\bar{x} \in C$, and a penalization parameter $\beta_k$, we consider an algorithm which alternates a proximal step with respect to $\partial\Psi$ and a gradient step with respect to $A$ and reads as $x_{k+1} = (I + \lambda_k\beta_k\,\partial\Psi)^{-1}(x_k - \lambda_k A x_k)$. Under mild hypotheses, we obtain weak convergence for an inverse strongly monotone operator and strong convergence for a Lipschitz continuous and strongly monotone operator. Applications to hierarchical minimization and fixed-point problems are also given, and the multivalued case is reached by replacing the multivalued operator by its Yosida approximate, which is always Lipschitz continuous.

#### 1. Introduction

Let $H$ be a real Hilbert space, $A: H \to H$ a monotone operator, and let $C$ be a closed convex set in $H$. We are interested in the study of a gradient-penalization algorithm for solving the problem of finding $\bar{x} \in C$ such that
$$\langle A\bar{x}, y - \bar{x} \rangle \geq 0 \quad \text{for all } y \in C, \tag{1.1}$$
or equivalently
$$0 \in A\bar{x} + N_C(\bar{x}), \tag{1.2}$$
where $N_C$ is the normal cone to the closed convex set $C$. The above problem is a variational inequality, initiated by Stampacchia [1]; this field is now a well-known branch of pure and applied mathematics, and many important problems can be cast in this framework.

In [2], Attouch et al., building on seminal work by Passty [3], solve this problem for a multivalued operator by using splitting proximal methods. A drawback is that the convergence is, in general, only ergodic. Motivated by [2, 4] and by [5], where penalty methods for variational inequalities with single-valued monotone maps are given, we will prove that our proposed forward-backward penalization-gradient method (1.9) enjoys good asymptotic convergence properties. We will provide some applications to hierarchical fixed-point and optimization problems and also propose an idea to reach monotone variational inclusions.

To begin with (see, for instance, [6]), let us recall that an operator $A$ with domain $D(A)$ and range $R(A)$ is said to be monotone if
$$\langle u - v, x - y \rangle \geq 0 \quad \text{whenever } u \in Ax,\ v \in Ay.$$
It is said to be maximal monotone if, in addition, its graph, $\{(x, u) \in H \times H : u \in Ax\}$, is not properly contained in the graph of any other monotone operator. An operator sequence $(A^n)$ is said to be graph convergent to $A$ if $(\operatorname{graph} A^n)$ converges to $\operatorname{graph} A$ in the Kuratowski-Painlevé sense. It is well-known that for each $x \in H$ and $\lambda > 0$ there is a unique $z \in H$ such that $x \in z + \lambda Az$. The single-valued operator $J_\lambda^A := (I + \lambda A)^{-1}$ is called the resolvent of $A$ of parameter $\lambda$. It is a nonexpansive mapping which is everywhere defined and is related to the Yosida approximate of $A$, namely $A_\lambda := \frac{1}{\lambda}(I - J_\lambda^A)$, by the relation $A_\lambda x \in A(J_\lambda^A x)$. The latter is $\frac{1}{\lambda}$-Lipschitz continuous. Recall that the inverse of $A$ is the operator $A^{-1}$ defined by $x \in A^{-1}u \Leftrightarrow u \in Ax$ and that, for all $x, y \in H$, we have the following key inequality:
$$\|J_\lambda^A x - J_\lambda^A y\|^2 \leq \langle x - y, J_\lambda^A x - J_\lambda^A y \rangle. \tag{1.4}$$
Observe that the relation $A_\lambda = \frac{1}{\lambda}(I - J_\lambda^A)$ leads to
$$J_\lambda^A x = x - \lambda A_\lambda x. \tag{1.5}$$

Now, given a proper lower semicontinuous convex function $f: H \to \mathbb{R} \cup \{+\infty\}$, the subdifferential of $f$ at $x$ is the set
$$\partial f(x) = \{u \in H : f(y) \geq f(x) + \langle u, y - x \rangle \ \text{for all } y \in H\}.$$
Its Moreau-Yosida approximate $f_\lambda$ and proximal mapping $\operatorname{prox}_{\lambda f}$ are given, respectively, by
$$f_\lambda(x) = \min_{y \in H}\Big\{ f(y) + \frac{1}{2\lambda}\|x - y\|^2 \Big\}, \qquad \operatorname{prox}_{\lambda f}(x) = \operatorname*{argmin}_{y \in H}\Big\{ f(y) + \frac{1}{2\lambda}\|x - y\|^2 \Big\}.$$
We have the following interesting relation: $\operatorname{prox}_{\lambda f} = J_\lambda^{\partial f}$.

Finally, given a nonempty closed convex set $C$, its indicator function is defined as $\delta_C(x) = 0$ if $x \in C$ and $+\infty$ otherwise. The projection onto $C$ at a point $x$ is $P_C(x) = \operatorname*{argmin}_{y \in C}\|x - y\|$. The normal cone to $C$ at $x$ is $N_C(x) = \{u \in H : \langle u, y - x \rangle \leq 0 \ \text{for all } y \in C\}$ if $x \in C$ and $\emptyset$ otherwise. Observe that $\partial\delta_C = N_C$, $\operatorname{prox}_{\lambda\delta_C} = P_C$, and $(\delta_C)_\lambda = \frac{1}{2\lambda}d_C^2$.
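The correspondences $\operatorname{prox}_{\lambda\delta_C} = P_C$ and $(\delta_C)_\lambda = \frac{1}{2\lambda}d_C^2$ can be checked numerically in one dimension; the interval $C = [0, 1]$ and the parameter values below are illustrative choices of ours, not taken from the paper.

```python
# Illustrative sketch (not from the paper): in 1D with C = [0, 1],
# the proximal mapping of the indicator function delta_C is the
# projection P_C, and the Moreau-Yosida approximate of delta_C is
# the halved squared distance d_C^2 / (2 * lam).

def proj_C(x, lo=0.0, hi=1.0):
    """Projection onto the interval C = [lo, hi]."""
    return min(max(x, lo), hi)

def moreau_envelope_indicator(x, lam, lo=0.0, hi=1.0):
    """(delta_C)_lam(x) = min_y [delta_C(y) + |x - y|^2 / (2*lam)]
    = d_C(x)^2 / (2*lam), the minimum being attained at y = P_C(x)."""
    d = x - proj_C(x, lo, hi)
    return d * d / (2.0 * lam)

# prox_{lam * delta_C} coincides with P_C, independently of lam:
assert proj_C(1.7) == 1.0 and proj_C(-0.3) == 0.0
# envelope value at x = 2 with lam = 0.5: d_C(2)^2 / (2 * 0.5) = 1.0
assert abs(moreau_envelope_indicator(2.0, 0.5) - 1.0) < 1e-12
```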

Given $x_k$, the current approximation to a solution of (1.2), we study the penalization-gradient iteration which generates, for parameters $\lambda_k, \beta_k > 0$, the next iterate $x_{k+1}$ as the solution of the regularized subproblem
$$0 \in \lambda_k\beta_k\,\partial\Psi(x_{k+1}) + x_{k+1} - x_k + \lambda_k A x_k,$$
which can be rewritten as
$$x_{k+1} = J_{\lambda_k\beta_k}^{\partial\Psi}(x_k - \lambda_k A x_k). \tag{1.9}$$
Having in view a large range of applications, we shall not assume any particular structure or regularity on the penalization function $\Psi$. Instead, we just suppose that $\Psi$ is convex, lower semicontinuous, and satisfies $\min_H \Psi = 0$ with $C = \operatorname{argmin} \Psi$. We will denote by $S$ the solution set of (1.2).
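To make the alternation of a gradient step and a proximal step concrete, the following sketch runs iteration (1.9) on a one-dimensional toy instance; the operator $A(x) = x - 2$, the set $C = [0, 1]$, the penalization $\Psi = \frac{1}{2}d_C^2$, and all parameter choices are ours, for illustration only.

```python
# A toy 1D instance of the penalization-gradient scheme (illustrative
# choices, not an example from the paper): A(x) = x - 2, so the
# variational inequality over C = [0, 1] has solution x = 1, and
# Psi = (1/2) d_C^2, whose proximal mapping has the closed form
# prox_{t * Psi}(y) = (y + t * P_C(y)) / (1 + t).

def proj_C(y):                      # projection onto C = [0, 1]
    return min(max(y, 0.0), 1.0)

def prox_pen(y, t):                 # prox of t * (1/2) d_C^2
    return (y + t * proj_C(y)) / (1.0 + t)

def penalization_gradient(x0, steps=2000, lam=0.5):
    x = x0
    for k in range(steps):
        beta = float(k + 1) ** 2    # penalization parameter beta_k -> infinity
        y = x - lam * (x - 2.0)     # gradient step with respect to A
        x = prox_pen(y, lam * beta) # proximal step with respect to Psi
    return x

x = penalization_gradient(x0=0.0)
assert abs(x - 1.0) < 1e-2          # the iterates approach the solution x = 1
```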

The following lemmas will be needed in our analysis; see, for example, [6, 7], respectively.

Lemma 1.1. *Let $B$ be a maximal monotone operator; then $\beta_k B$ graph converges to $N_{B^{-1}(0)}$ as $\beta_k \to +\infty$, provided that $B^{-1}(0) \neq \emptyset$.*

Lemma 1.2. *Assume that $(a_k)$ and $(\varepsilon_k)$ are two sequences of nonnegative real numbers such that
$$a_{k+1} \leq a_k + \varepsilon_k \quad \text{for all } k.$$
If $(a_k)$ is bounded, then there exists a subsequence of $(a_k)$ which converges. Furthermore, if $\sum_{k}\varepsilon_k < +\infty$, then $\lim_{k\to\infty} a_k$ exists.*

#### 2. Main Results

##### 2.1. Weak Convergence

Theorem 2.1. *Assume that $A$ is inverse strongly monotone with constant $\alpha > 0$, namely,
$$\langle Ax - Ay, x - y \rangle \geq \alpha\|Ax - Ay\|^2 \quad \text{for all } x, y \in H. \tag{2.1}$$
If
$$\sum_{k}\lambda_k\beta_k\left[\Psi^*\!\left(\frac{2v}{\beta_k}\right) - \sigma_C\!\left(\frac{2v}{\beta_k}\right)\right] < +\infty \quad \text{for all } v \in R(N_C) \tag{2.2}$$
(here $\Psi^*$ denotes the Fenchel conjugate of $\Psi$ and $\sigma_C$ the support function of $C$), and $\varepsilon \leq \lambda_k \leq 2\alpha - \varepsilon$ (where $\varepsilon$ is a small enough constant), then the sequence $(x_k)$ generated by algorithm (1.9) converges weakly to a solution of Problem (1.2).*

*Proof. *Let $\bar{x}$ be a solution of (1.2); observe that $\bar{x}$ solves (1.2) if and only if $\bar{x} = P_C(\bar{x} - \lambda A\bar{x})$ for every $\lambda > 0$. By the triangle inequality, we can write
On the other hand, by virtue of (1.4) and (2.1), we successively have
Hence

The latter implies, by Lemma 1.2 and the fact that (2.2) ensures summability, that the nonnegative real sequence $(\|x_k - \bar{x}\|)$ converges to some limit, that is,
and also ensures that
Combining the two latter equalities, we infer that
Now, (1.9) can be written equivalently as
By virtue of Lemma 1.1, we have that $\beta_k\partial\Psi$ graph converges to $N_C$ because
Furthermore, the Lipschitz continuity of $A$ (see, e.g., [8]) clearly ensures that the sequence $(A + \beta_k\partial\Psi)$ graph converges in turn to $A + N_C$.

Now, let $x_\infty$ be a weak cluster point of $(x_k)$. Passing to the limit in (2.9), along a subsequence still denoted by $(x_k)$, and taking into account the fact that the graph of a maximal monotone operator is weakly-strongly closed in $H \times H$, we then conclude that
because $A$ is Lipschitz continuous, $(x_k)$ is asymptotically regular thanks to (2.8), and $(\lambda_k)$ is bounded away from zero.

It remains to prove that there is no more than one weak cluster point; our argument is classical and is presented here for completeness.

Let $\hat{x}$ be another cluster point of $(x_k)$; we will show that $\hat{x} = x_\infty$. This is a consequence of (2.6). Indeed, from
$$\|x_k - \hat{x}\|^2 = \|x_k - x_\infty\|^2 + 2\langle x_k - x_\infty, x_\infty - \hat{x}\rangle + \|x_\infty - \hat{x}\|^2,$$
we see that the limit of $\langle x_k - x_\infty, x_\infty - \hat{x}\rangle$ as $k \to +\infty$ must exist. This limit has to be zero because $x_\infty$ is a weak cluster point of $(x_k)$. Hence, at the limit, we obtain
$$\lim_{k\to+\infty}\|x_k - \hat{x}\|^2 = \lim_{k\to+\infty}\|x_k - x_\infty\|^2 + \|x_\infty - \hat{x}\|^2.$$
Reversing the roles of $x_\infty$ and $\hat{x}$, we also have
$$\lim_{k\to+\infty}\|x_k - x_\infty\|^2 = \lim_{k\to+\infty}\|x_k - \hat{x}\|^2 + \|x_\infty - \hat{x}\|^2.$$
That is, $\|x_\infty - \hat{x}\| = 0$, which completes the proof.

*Remark 2.2. *(i) Note that we can remove condition (2.2), but in this case we obtain only that there exists a subsequence of $(x_k)$ every weak cluster point of which is a solution of problem (1.2). This follows from Lemma 1.2 combined with the fact that $\beta_k\partial\Psi$ graph converges to $N_C$. The latter is equivalent, see, for example, [6], to the pointwise convergence of the associated resolvents to $P_C$ and therefore ensures that

(ii) In the special case $\Psi = \frac{1}{2}d_C^2$, (2.2) reduces to $\sum_k \frac{\lambda_k}{\beta_k} < +\infty$; see Application (2) of Section 3.

Suppose now that $\Psi = \delta_C$; it is well-known that $\Psi^* = \sigma_C$. Consequently,
$$\Psi^*\!\left(\frac{2v}{\beta_k}\right) - \sigma_C\!\left(\frac{2v}{\beta_k}\right) = 0 \quad \text{for all } k,$$
and thus (2.2) is clearly satisfied. In this case the resolvent associated with $\beta_k\partial\Psi$ is simply the projection $P_C$, and (1.9) reduces to the classical projected-gradient method.

The particular case $C = H$ corresponds to the unconstrained case, namely, $\Psi \equiv 0$. In this context the resolvent associated with $\partial\Psi$ is the identity, and condition (2.2) is trivially satisfied.

##### 2.2. Strong Convergence

Now, we would like to stress that we can guarantee strong convergence by reinforcing assumptions on .

Proposition 2.3. *Assume that $A$ is strongly monotone with constant $\alpha > 0$, that is,
$$\langle Ax - Ay, x - y \rangle \geq \alpha\|x - y\|^2 \quad \text{for all } x, y \in H,$$
and Lipschitz continuous with constant $L > 0$, that is,
$$\|Ax - Ay\| \leq L\|x - y\| \quad \text{for all } x, y \in H.$$
If $\varepsilon \leq \lambda_k \leq \frac{2\alpha}{L^2} - \varepsilon$ (where $\varepsilon$ is a small enough constant) and $\beta_k \to +\infty$, then the sequence generated by (1.9) strongly converges to the unique solution of (1.2).*

*Proof. *Indeed, replacing the inverse strong monotonicity of $A$ by strong monotonicity and Lipschitz continuity, it is easy to see from the first part of the proof of Theorem 2.1 that the iteration operator of (1.9) satisfies
Following the arguments in the proof of Theorem 2.1, we obtain
Now, by setting $q_k := \sqrt{1 - 2\lambda_k\alpha + \lambda_k^2 L^2}$, we can check that $q_k < 1$ if and only if $0 < \lambda_k < \frac{2\alpha}{L^2}$, and a simple computation shows that $\|x_{k+1} - \bar{x}\| \leq q_k\|x_k - \bar{x}\| + \varepsilon_k$ with $q := \sup_k q_k < 1$. Hence,
The result follows from Ortega and Rheinboldt [9, page 338] and the pointwise convergence of the resolvents. The latter follows thanks to the equivalence between graph convergence of the sequence of operators $(\beta_k\partial\Psi)$ to $N_C$ and the pointwise convergence of their resolvent operators to $P_C$.

#### 3. Applications

*(1) Hierarchical Convex Minimization Problems*

Having in mind the connection between monotone operators and convex functions, we may consider the special case $A = \nabla g$, $g$ being a proper lower semicontinuous differentiable convex function. Differentiability of $g$ ensures that $\partial g = \{\nabla g\}$, and (1.2) reads as
$$\min_{x \in C} g(x). \tag{3.1}$$
Using the definition of the Moreau-Yosida approximate, algorithm (1.9) reads as
$$x_{k+1} = J_{\lambda_k\beta_k}^{\partial\Psi}\big(x_k - \lambda_k \nabla g(x_k)\big). \tag{3.2}$$
In this case, it is well-known (by the Baillon-Haddad theorem) that the assumption (2.1) of inverse strong monotonicity of $\nabla g$ with constant $\alpha$ is equivalent to its $\frac{1}{\alpha}$-Lipschitz continuity. If further we assume $\varepsilon \leq \lambda_k \leq 2\alpha - \varepsilon$ for all $k$ and (2.2), then by Theorem 2.1 we obtain weak convergence of algorithm (3.2) to a solution of (3.1). Strong convergence is obtained, thanks to Proposition 2.3, if in addition $g$ is strongly convex (i.e., there is $\alpha > 0$ such that
$$g(y) \geq g(x) + \langle \nabla g(x), y - x \rangle + \frac{\alpha}{2}\|y - x\|^2$$
for all $x$, all $y$) and $(\lambda_k)$ is a convergent sequence with a positive limit. Note that strong convexity of $g$ is equivalent to $\alpha$-strong monotonicity of its gradient. A concrete example in signal recovery is the projected Landweber problem, namely,
$$\min_{x \in C} \frac{1}{2}\|Tx - b\|^2,$$
$T$ being a linear bounded operator. Set $g(x) = \frac{1}{2}\|Tx - b\|^2$. Consequently,
$$\nabla g(x) = T^*(Tx - b),$$
and $\nabla g$ is therefore Lipschitz continuous with constant $\|T\|^2$. Now, it is well-known that the problem possesses exactly one solution if $T$ is bounded below, that is, if there exists $\nu > 0$ such that $\|Tx\| \geq \nu\|x\|$ for all $x \in H$. In this case, $\nabla g$ is strongly monotone. Indeed, it is easily seen that $g$ is strongly convex: consider $x$ and $y$; one has
$$g(y) = g(x) + \langle \nabla g(x), y - x \rangle + \frac{1}{2}\|T(y - x)\|^2 \geq g(x) + \langle \nabla g(x), y - x \rangle + \frac{\nu^2}{2}\|y - x\|^2.$$
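The Landweber setting above can be illustrated numerically; the matrix $T$, vector $b$, and box constraint below are our own small-scale choices, and for simplicity the sketch uses the classical projected-gradient iteration rather than the penalized scheme.

```python
# A small numerical sketch of the projected Landweber setting
# (illustrative data, not from the paper): g(x) = (1/2)||Tx - b||^2
# has gradient T^T (Tx - b), Lipschitz with constant ||T||^2, and
# projected gradient steps solve min_{x in C} g for a box C.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(col) for col in zip(*M)]

T = [[2.0, 0.0], [0.0, 1.0]]
b = [4.0, 1.0]
step = 1.0 / 4.0                    # 1 / ||T||^2, with ||T|| = 2 here

def grad_g(x):                      # gradient of (1/2)||Tx - b||^2
    r = [ti - bi for ti, bi in zip(matvec(T, x), b)]
    return matvec(transpose(T), r)

def proj_box(x):                    # projection onto C = [0, 1] x [0, 1]
    return [min(max(xi, 0.0), 1.0) for xi in x]

x = [0.0, 0.0]
for _ in range(100):
    g = grad_g(x)
    x = proj_box([xi - step * gi for xi, gi in zip(x, g)])

# The constrained minimizer is (1, 1): the unconstrained one, (2, 1),
# is clipped by the box in its first coordinate.
assert max(abs(x[0] - 1.0), abs(x[1] - 1.0)) < 1e-6
```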

*(2) Classical Penalization*

In the special case where $\Psi = \frac{1}{2}d_C^2$, we have
$$\nabla\Psi(x) = x - P_C x,$$
which is nothing but the classical penalization operator; see [10]. In this context, taking into account the fact that
$$J_{\lambda\beta}^{\nabla\Psi}(x) = \frac{x + \lambda\beta\, P_C x}{1 + \lambda\beta},$$
and that $\bar{x}$ solves (1.2), and thus $\bar{x} = P_C\bar{x}$, the condition on the parameters reduces to $\sum_k \frac{\lambda_k}{\beta_k} < +\infty$, and algorithm (1.9) is nothing but a relaxed projection-gradient method. Indeed, using (1.5) and the fact that $\nabla\Psi = I - P_C$, we obtain
$$x_{k+1} = \frac{1}{1+\lambda_k\beta_k}(x_k - \lambda_k A x_k) + \frac{\lambda_k\beta_k}{1+\lambda_k\beta_k}\, P_C(x_k - \lambda_k A x_k).$$
An inspection of the proof of Theorem 2.1 shows that weak convergence is assured with $\lambda_k\beta_k \to +\infty$.
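The closed form of the proximal mapping of $t\cdot\frac{1}{2}d_C^2$ as a relaxed projection can be checked by brute force in one dimension; the set $C = [0, 1]$, the grid, and the sample points below are our own illustrative choices.

```python
# Numerical check (illustrative sketch) that the proximal mapping of
# t * (1/2) d_C^2 for C = [0, 1] equals the relaxed projection
# (1/(1+t)) * y + (t/(1+t)) * P_C(y), by brute-force minimization of
# z |-> t * d_C(z)^2 / 2 + (z - y)^2 / 2 over a fine grid.

def proj_C(z):
    return min(max(z, 0.0), 1.0)

def objective(z, y, t):
    d = z - proj_C(z)
    return 0.5 * t * d * d + 0.5 * (z - y) ** 2

def prox_closed_form(y, t):
    return (y + t * proj_C(y)) / (1.0 + t)

grid = [-3.0 + 6.0 * i / 60000 for i in range(60001)]
for y in (-1.5, 0.3, 2.0):
    for t in (0.5, 4.0):
        z_star = min(grid, key=lambda z: objective(z, y, t))
        assert abs(z_star - prox_closed_form(y, t)) < 1e-3
```

For $y$ already in $C$ the two formulas agree that the prox leaves the point unchanged, since then $P_C(y) = y$.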

*(3) A Hierarchical Fixed-Point Problem*

Having in mind the connection between inverse strongly monotone operators and nonexpansive mappings, we may consider the following fixed-point problem: find $x \in C$ such that
$$\langle x - Tx, y - x \rangle \geq 0 \quad \text{for all } y \in C, \tag{3.12}$$
with $T$ a nonexpansive mapping, namely, $\|Tx - Ty\| \leq \|x - y\|$ for all $x, y \in H$.

It is well-known that $A := I - T$ is inverse strongly monotone with constant $\frac{1}{2}$. Indeed, by definition of $A$, we have
$$\langle Ax - Ay, x - y \rangle = \|x - y\|^2 - \langle Tx - Ty, x - y \rangle.$$
On the other hand,
$$\|Ax - Ay\|^2 = \|x - y\|^2 - 2\langle Tx - Ty, x - y \rangle + \|Tx - Ty\|^2 \leq 2\|x - y\|^2 - 2\langle Tx - Ty, x - y \rangle.$$
Combining the two last relations, we obtain
$$\langle Ax - Ay, x - y \rangle \geq \frac{1}{2}\|Ax - Ay\|^2.$$
Therefore, by Theorem 2.1 we get the weak convergence of the sequence generated by the following algorithm:
$$x_{k+1} = J_{\lambda_k\beta_k}^{\partial\Psi}\big(x_k - \lambda_k(x_k - Tx_k)\big)$$
to a solution of (3.12) provided that $\varepsilon \leq \lambda_k \leq 1 - \varepsilon$ for all $k$ and (2.2) holds. The strong convergence of (1.9) is obtained, by applying Proposition 2.3, for a contraction mapping, namely, $\|Tx - Ty\| \leq \rho\|x - y\|$ with $\rho < 1$, which is equivalent to the $(1-\rho)$-strong monotonicity of $I - T$, and for $(\lambda_k)$ a convergent sequence with a positive limit. It is easily seen that in this case $I - T$ is $(1+\rho)$-Lipschitz continuous.
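The hierarchical fixed-point scheme can be tried on a one-dimensional toy problem; the contraction $T$, the set $C$, the penalization $\Psi = \frac{1}{2}d_C^2$, and the parameter choices below are ours, for illustration only.

```python
# Toy sketch of the hierarchical fixed-point application (all choices
# below are illustrative, not from the paper): with the contraction
# T(x) = 0.5*x + 1 (whose fixed point, 2, lies outside C = [0, 1]),
# the operator A = I - T is strongly monotone and the variational
# inequality over C is solved by the boundary point x = 1.

def proj_C(y):                       # projection onto C = [0, 1]
    return min(max(y, 0.0), 1.0)

def prox_pen(y, t):                  # prox of t * (1/2) d_C^2
    return (y + t * proj_C(y)) / (1.0 + t)

def T(x):                            # contraction with constant 1/2
    return 0.5 * x + 1.0

x, lam = 0.0, 0.5
for k in range(2000):
    beta = float(k + 1) ** 2         # penalization parameter -> infinity
    y = x - lam * (x - T(x))         # gradient step with A = I - T
    x = prox_pen(y, lam * beta)      # proximal/penalization step

assert abs(x - 1.0) < 1e-2           # the hierarchical solution is x = 1
```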

#### 4. Towards the Multivalued Case

Now, we are interested in (1.2) when $A$ is a multivalued maximal monotone operator. With the help of the Yosida approximate $A_\mu$, which is always inverse strongly monotone (and thus single-valued), we consider the following partial regularized version of (1.2): find $x \in C$ such that
$$\langle A_\mu x, y - x \rangle \geq 0 \quad \text{for all } y \in C, \tag{4.1}$$
where $A_\mu := \frac{1}{\mu}(I - J_\mu^A)$ stands for the Yosida approximate of $A$ of parameter $\mu > 0$.

It is well-known that $A_\mu$ is inverse strongly monotone. More precisely, we have
$$\langle A_\mu x - A_\mu y, x - y \rangle \geq \mu\|A_\mu x - A_\mu y\|^2 \quad \text{for all } x, y \in H.$$
Using the definition of the Yosida approximate, algorithm (1.9) applied to (4.1) reads as
$$x_{k+1} = J_{\lambda_k\beta_k}^{\partial\Psi}(x_k - \lambda_k A_\mu x_k). \tag{4.3}$$
From Theorem 2.1, we infer that $(x_k)$ converges weakly to a solution of (4.1) provided that $\varepsilon \leq \lambda_k \leq 2\mu - \varepsilon$. Furthermore, it is worth mentioning that if $A$ is strongly monotone, then $A_\mu$ is also strongly monotone, and thus (4.1) has a unique solution $x_\mu$. By a result in [8, page 35], $x_\mu$ remains at a controlled distance from the solution of (1.2) for small $\mu$; consequently, (4.3) provides approximate solutions to the variational inclusion (1.2) for small values of $\mu$. Furthermore, when $A = \partial g$, we have $A_\mu = \nabla g_\mu$, and thus (4.1) reduces to
$$\min_{x \in C} g_\mu(x).$$
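The regularizing effect of the Yosida approximate can be made concrete on a classical multivalued example; the operator $A = \partial|\cdot|$ (the sign operator) and the parameter values below are our own illustrative choices.

```python
# Illustration (our own example, not from the paper): the Yosida
# approximate of the multivalued operator A = sign (the subdifferential
# of |x|) is the single-valued, (1/mu)-Lipschitz map obtained from
# A_mu = (I - J_mu)/mu, where the resolvent J_mu is soft-thresholding.

def resolvent(x, mu):
    """J_mu(x) = (I + mu * sign)^{-1}(x): soft-thresholding by mu."""
    if x > mu:
        return x - mu
    if x < -mu:
        return x + mu
    return 0.0

def yosida(x, mu):
    """A_mu(x) = (x - J_mu(x)) / mu: equals x/mu for |x| <= mu
    and sign(x) otherwise (a Huber-type gradient)."""
    return (x - resolvent(x, mu)) / mu

mu = 0.5
assert yosida(2.0, mu) == 1.0                 # |x| > mu: equals sign(x)
assert abs(yosida(-0.2, mu) + 0.4) < 1e-12    # |x| <= mu: equals x / mu
# (1/mu)-Lipschitz continuity checked on a few sample pairs:
pts = [-1.0, -0.3, 0.0, 0.4, 1.5]
for a in pts:
    for b in pts:
        assert abs(yosida(a, mu) - yosida(b, mu)) <= abs(a - b) / mu + 1e-12
```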

If (3.1) and (4.1) are solvable, then, by [11, Theorem 3.3], the optimal value of (3.1) is close to that of (4.1) for small values of $\mu$; this confirms the pertinence of the proposed approximation idea for reaching the multivalued case. Observe that in this context, algorithm (4.3) reads as
$$x_{k+1} = J_{\lambda_k\beta_k}^{\partial\Psi}\big(x_k - \lambda_k \nabla g_\mu(x_k)\big).$$

#### 5. Conclusion

The authors have introduced a forward-backward penalization-gradient algorithm for solving variational inequalities and studied its asymptotic convergence properties. We have provided some applications to hierarchical fixed-point and optimization problems and also proposed an idea to reach monotone variational inclusions.

#### Acknowledgment

We gratefully acknowledge the constructive comments of the anonymous referees, which helped us to improve the first version of this paper.

#### References

1. G. Stampacchia, “Formes bilinéaires coercitives sur les ensembles convexes,” *Comptes Rendus de l'Académie des Sciences de Paris*, vol. 258, pp. 4413–4416, 1964.
2. H. Attouch, M. O. Czarnecki, and J. Peypouquet, “Prox-penalization and splitting methods for constrained variational problems,” *SIAM Journal on Optimization*, vol. 21, no. 1, pp. 149–173, 2011.
3. G. B. Passty, “Ergodic convergence to a zero of the sum of monotone operators in Hilbert space,” *Journal of Mathematical Analysis and Applications*, vol. 72, no. 2, pp. 383–390, 1979.
4. B. Lemaire, “Coupling optimization methods and variational convergence,” in *Trends in Mathematical Optimization*, vol. 84 of *International Series of Numerical Mathematics*, pp. 163–179, Birkhäuser, Basel, Switzerland, 1988.
5. J. Gwinner, “On the penalty method for constrained variational inequalities,” in *Optimization: Theory and Algorithms*, vol. 86 of *Lecture Notes in Pure and Applied Mathematics*, pp. 197–211, Dekker, New York, NY, USA, 1983.
6. R. T. Rockafellar and R. J.-B. Wets, *Variational Analysis*, vol. 317, Springer, Berlin, Germany, 1998.
7. P. L. Martinet, *Algorithmes pour la résolution de problèmes d'optimisation et de minimax*, thesis, Université de Grenoble, 1972.
8. H. Brézis, *Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert*, North-Holland Mathematics Studies, no. 5, North-Holland, Amsterdam, The Netherlands, 1973.
9. J. M. Ortega and W. C. Rheinboldt, *Iterative Solution of Nonlinear Equations in Several Variables*, Academic Press, New York, NY, USA, 1970.
10. D. Pascali and S. Sburlan, *Nonlinear Mappings of Monotone Type*, Martinus Nijhoff, The Hague, The Netherlands, 1978.
11. N. Lehdili, *Méthodes proximales de sélection et de décomposition pour les inclusions monotones*, thesis, Université de Montpellier, 1996.