Abstract

An interior projected-like subgradient method for mixed variational inequalities is proposed in finite-dimensional spaces, based on a non-Euclidean projection-like operator. Under suitable assumptions, we prove that the sequence generated by the proposed method converges to a solution of the mixed variational inequality. Moreover, we give a convergence estimate for the method. The results presented in this paper generalize some recent results given in the literature.

1. Introduction

Let $\mathbb{R}^n$ be endowed with the inner product $\langle\cdot,\cdot\rangle$ and the associated norm $\|\cdot\|$. Let $C$ be a nonempty, closed, and convex subset of $\mathbb{R}^n$, let $T:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$ be a set-valued mapping, and let $f:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ be a proper, convex, and lower semicontinuous function. The mixed variational inequality problem (denoted by (MVI)) consists of finding an $x^*\in C$ such that there exists $w^*\in T(x^*)$ satisfying
$$\langle w^*,\,y-x^*\rangle+f(y)-f(x^*)\ge 0,\quad\forall y\in C.\tag{1}$$
Problem (MVI) is well known to be a very useful tool for formulating a large class of problems encountered in mechanics, control, economics, structural engineering, social sciences, and so on [1–3]. In this paper, we denote by SOL(MVI) the solution set of (MVI).

It is well known that (MVI) includes a large variety of problems as special cases. For example, if $f=\delta_C$, where $\delta_C$ is the indicator function of the constraint set $C$, that is, $\delta_C(x)=0$ if $x\in C$ and $\delta_C(x)=+\infty$ otherwise, then (MVI) reduces to the generalized variational inequality (in short, (GVI)): find an $x^*\in C$ such that there exists $w^*\in T(x^*)$ satisfying
$$\langle w^*,\,y-x^*\rangle\ge 0,\quad\forall y\in C.\tag{2}$$
If $f=\delta_C$ and $T$ is single-valued, then (MVI) collapses to the Stampacchia variational inequality problem: find $x^*\in C$ such that
$$\langle T(x^*),\,y-x^*\rangle\ge 0,\quad\forall y\in C.\tag{3}$$
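As a further concrete illustration (a standard special case, recorded here only for orientation), composite convex minimization also fits the format of (MVI): if $T=\nabla g$ for a convex and differentiable function $g:\mathbb{R}^n\to\mathbb{R}$, then, by the first-order optimality condition for convex problems,
$$x^*\in\operatorname*{argmin}_{x\in C}\,\bigl(g(x)+f(x)\bigr)\quad\Longleftrightarrow\quad\langle\nabla g(x^*),\,y-x^*\rangle+f(y)-f(x^*)\ge 0,\ \ \forall y\in C,$$
which is exactly (1) with $w^*=\nabla g(x^*)$.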

One of the most interesting and important problems in variational inequality theory is the development of efficient iterative algorithms to compute approximate solutions, together with the convergence analysis of such algorithms. Many methods have been proposed to solve (MVI) (see, e.g., [4–15]), most of them projection-type methods. Recently, projected subgradient methods have become effective and powerful tools for solving (MVI) (see, e.g., [8, 14]). However, all these methods are based on the Euclidean projection operator, which produces iterates that hit the boundary of the constraints and may lead to a zigzagging effect, resulting in slower convergence. Moreover, the projection itself can be computationally expensive if the constraint set is not simple.

Recently, to overcome the above difficulties, Auslender and Teboulle [16] proposed an approach for solving (GVI) that replaces the classical projection with a non-Euclidean distance-like function, which can automatically eliminate the constraints and produce interior trajectories. This line of analysis has been studied and developed over recent years [17, 18].

On the other hand, unlike problems (2) and (3), in general (MVI) is not equivalent to a fixed point problem involving the projection operator, because of the presence of the nonlinear term $f$ in problem (MVI). The projection-like map introduced by Auslender and Teboulle [16–18] does not improve this situation. A natural question is whether the techniques presented in [16] can be generalized from (GVI) to the setting of (MVI). That is, we wish to devise a method for solving (MVI) that not only inherits the nice interior-point property of the methods of Auslender and Teboulle [16–18] but also overcomes the difficulty caused by the presence of the nonlinear term $f$. This is the main motivation of this paper.

Motivated and inspired by the research work mentioned above, in this paper we extend the methods presented in [16] to mixed variational inequalities by introducing an interior projected-like subgradient method. The proposed method is based on a non-Euclidean projection-like operator. Under suitable assumptions, we prove that the sequence generated by the proposed method converges to a solution of the mixed variational inequality. Moreover, we give a convergence estimate for the method. The results presented in this paper generalize and improve some recent results.

2. Preliminaries

Definition 1. Let $T:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$ be a set-valued mapping. Then $T$ is said to be (i) monotone if, for any $x,y\in\mathbb{R}^n$, $u\in T(x)$, and $v\in T(y)$, $\langle u-v,\,x-y\rangle\ge 0$; (ii) maximal monotone if it is monotone and the graph of $T$, denoted by $\operatorname{gph}(T):=\{(x,u):u\in T(x)\}$, is not properly contained in the graph of any other monotone operator; (iii) upper hemicontinuous at $x$ if, for any $y\in\mathbb{R}^n$, the mapping $t\mapsto T(x+ty)$ is upper semicontinuous at $0^+$.

Remark 2. It is well known that a monotone mapping $T$ is maximal monotone if and only if (i) for any $x\in\mathbb{R}^n$, $T(x)$ is a closed and convex subset of $\mathbb{R}^n$; (ii) $T$ is upper hemicontinuous.

For many application purposes, it is useful to consider the ground set of (MVI) in the form
$$C=\overline{C_1}\cap C_2,$$
where $C_1$ is a nonempty, open, and convex set with closure $\overline{C_1}$ and $C_2=\{x\in\mathbb{R}^n:Ax\le b\}$, where $A:\mathbb{R}^n\to\mathbb{R}^m$ is a linear map and $b\in\mathbb{R}^m$ (see, e.g., [16–18]).

Definition 3. Let $S\subseteq\mathbb{R}^n$ be nonempty, open, and convex. A function $d:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}_+\cup\{+\infty\}$ is called a proximal distance with respect to $S$ if for each $y\in S$ it satisfies the following properties: $(P_1)$ $d(\cdot,y)$ is proper, lower semicontinuous, and convex, and continuously differentiable on $S$, with $d(y,y)=0$ and $\nabla_1 d(y,y)=0$, where $\nabla_1 d$ denotes the gradient of $d$ with respect to the first variable; $(P_2)$ $\operatorname{dom}d(\cdot,y)\subseteq\overline{S}$ and $\operatorname{dom}\partial_1 d(\cdot,y)=S$, where $\partial_1 d$ denotes the subgradient map of the function $d(\cdot,y)$ with respect to the first variable; $(P_3)$ $d(\cdot,y)$ is strongly convex over $S$; that is, there exists $\sigma>0$ such that
$$d(u,y)-d(v,y)-\langle\nabla_1 d(v,y),\,u-v\rangle\ge\frac{\sigma}{2}\|u-v\|^2,\quad\forall u,v\in S,$$
for some norm $\|\cdot\|$ on $\mathbb{R}^n$.

We denote by $\mathcal{D}(S)$ the family of functions $d$ satisfying the above three properties.

Remark 4. It is easy to see that the usual squared Euclidean distance $d(u,v)=\frac{1}{2}\|u-v\|^2$ satisfies the above three properties with $S=\mathbb{R}^n$; that is, $d\in\mathcal{D}(\mathbb{R}^n)$. Therefore, the notion of proximal distance extends the usual squared Euclidean distance.
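Indeed (a direct verification, with $S=\mathbb{R}^n$): taking $d(u,v)=\frac{1}{2}\|u-v\|^2$ gives $\nabla_1 d(u,v)=u-v$, so $d(v,v)=0$ and $\nabla_1 d(v,v)=0$, and
$$d(u,v)-d(w,v)-\langle\nabla_1 d(w,v),\,u-w\rangle=\tfrac{1}{2}\|u-w\|^2,$$
so $(P_1)$–$(P_3)$ hold with modulus $\sigma=1$.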

Given $d\in\mathcal{D}(S)$, it follows from the proof of Proposition 2.1 of [18] that for each $y\in S$ and for each $z\in\mathbb{R}^n$ there exists a unique (by strong convexity) point solving
$$\min\{\langle z,\,u\rangle+d(u,y):u\in C_2\}.\tag{8}$$
From this fact, one can define a projected-like map as follows.

Definition 5. For any $z\in\mathbb{R}^n$ and any $y\in S$, a projected-like map $\Pi$ is defined by
$$\Pi(z,y):=\operatorname*{argmin}\{\langle z,\,u\rangle+d(u,y):u\in C_2\}.$$

Remark 6. (i) From the optimality conditions for the convex problem (8) (see, e.g., [19]), there exists $\gamma\in\partial_1 d(\Pi(z,y),y)$ such that
$$\langle z+\gamma,\,u-\Pi(z,y)\rangle\ge 0,\quad\forall u\in C_2.\tag{9}$$
(ii) We would like to mention that the resulting projection-like map remains in $S$; that is, $\Pi(z,y)$ is an interior point with respect to the constraint set. However, we also note that the properties of the map $\Pi$ remain valid for an arbitrary closed and convex set $C$. The resulting projection map in that case leads to a noninterior projection-like map, defined and characterized as above with $C$ in place of $C_2$. In particular, for $d(u,v)=\frac{1}{2}\|u-v\|^2$, we have $\Pi(z,y)=P_C(y-z)$, where $P_C$ is the usual Euclidean projection operator.
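To make the Euclidean special case concrete, the following minimal sketch (our own illustration; the box constraint and all names are assumptions, not taken from the paper) checks numerically that $P_C(y-z)$ minimizes $\langle z,u\rangle+\frac{1}{2}\|u-y\|^2$ over a box $C$:

```python
import numpy as np

def proj_box(x, lo, hi):
    """Euclidean projection onto the box C = {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def pi_euclidean(z, y, lo, hi):
    """Projected-like map for d(u, v) = 0.5 * ||u - v||^2:
    Pi(z, y) = argmin_u { <z, u> + 0.5 * ||u - y||^2 : u in C } = P_C(y - z),
    since the unconstrained minimizer is u = y - z."""
    return proj_box(y - z, lo, hi)

# Tiny check: the returned point beats nearby feasible points.
rng = np.random.default_rng(0)
y, z = rng.normal(size=3), rng.normal(size=3)
lo, hi = -np.ones(3), np.ones(3)
u_star = pi_euclidean(z, y, lo, hi)
obj = lambda u: z @ u + 0.5 * np.sum((u - y) ** 2)
for _ in range(100):
    u = np.clip(u_star + 0.1 * rng.normal(size=3), lo, hi)
    assert obj(u_star) <= obj(u) + 1e-12
```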

Lemma 7 (Proposition 4.1 of [18]). For any $d\in\mathcal{D}(S)$, any $z\in\mathbb{R}^n$, and any $y\in S$, the point $\Pi(z,y)$ satisfies $\Pi(z,y)\in S$, and the following properties hold: (i); (ii).

To establish convergence of our algorithm, for each given $d\in\mathcal{D}(S)$ we need a corresponding proximal distance $H$ satisfying some desirable properties.

Definition 8. Given $S$ open and convex and $d\in\mathcal{D}(S)$, a function $H:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}_+\cup\{+\infty\}$ is called the induced proximal distance to $d$ if $H$ is finite valued on $S\times S$ and, for any $a,b\in S$, satisfies $H(a,a)=0$ and
$$\langle\nabla_1 d(b,a),\,c-b\rangle\le H(c,a)-H(c,b),\quad\forall c\in\overline{S},$$
together with the following properties: (i) for any $u\in\overline{S}$ and any bounded sequence $\{u^k\}\subset S$ with $H(u,u^k)\to 0$, one has $u^k\to u$; (ii) for any $u\in\overline{S}$ and any sequence $\{u^k\}\subset S$ converging to $u$, one has $H(u,u^k)\to 0$; (iii) for any $u\in\overline{S}$, $H(u,\cdot)$ is bounded on bounded subsets of $S$.

We write $(d,H)\in\mathcal{F}(S)$ to indicate that the triple $(S,d,H)$ satisfies the premises of Definition 8.

Remark 9. One typical and useful example is the logarithmic-quadratic distance given by
$$d(u,v)=\sum_{j=1}^{n}\Bigl[\frac{\nu}{2}(u_j-v_j)^2+\mu\Bigl(v_j^2\log\frac{v_j}{u_j}+u_jv_j-v_j^2\Bigr)\Bigr],$$
with $\nu>\mu>0$. In that case, with $S=\mathbb{R}^n_{++}$ and $\overline{S}=\mathbb{R}^n_{+}$, one can verify that $(d,H)\in\mathcal{F}(S)$ for a suitable induced proximal distance $H$ (see page 709 of [18]). For more examples, the interested reader is referred to [16–18].
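As a small computational sketch (assuming the standard form of the logarithmic-quadratic distance recalled above; the parameter values are illustrative only):

```python
import numpy as np

def lq_distance(u, v, nu=2.0, mu=1.0):
    """Logarithmic-quadratic proximal distance on the positive orthant:
    d(u, v) = sum_j [ nu/2 (u_j - v_j)^2
                      + mu (v_j^2 log(v_j/u_j) + u_j v_j - v_j^2) ],
    finite for componentwise positive u, v; here nu > mu > 0."""
    quad = 0.5 * nu * np.sum((u - v) ** 2)
    logt = mu * np.sum(v ** 2 * np.log(v / u) + u * v - v ** 2)
    return quad + logt

v = np.array([1.0, 2.0, 0.5])
print(lq_distance(v, v))             # 0.0, i.e., d(v, v) = 0
print(lq_distance(v + 0.1, v) > 0)   # True: d(u, v) > 0 for u != v
```

Note that $d(\cdot,v)$ blows up as any coordinate of $u$ approaches $0$, which is precisely the mechanism that keeps iterates in the interior of the orthant.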

3. An Interior Projected-Like Subgradient Method

In this paper, we adopt the following assumptions.

Assumption A. (A1) The solution set of (MVI) is nonempty; that is, SOL(MVI) $\ne\emptyset$.
(A2) $C\subseteq\operatorname{dom}T$ and $T$ is maximal monotone.
(A3) $T$ is bounded on bounded subsets of $\operatorname{ri}(\operatorname{dom}T)$.
(A4) $f$ is convex, lower semicontinuous, proper, and finite on $C$.
(A5) The subdifferential map $\partial f$ is nonempty on $C$ and bounded on bounded subsets of $\operatorname{ri}(\operatorname{dom}f)$.

Remark 10. Assumptions (A1)–(A3) are the same as the corresponding assumptions of [16]; Assumptions (A4) and (A5) are likewise taken from [16]. These assumptions are also the same as those of [14], except for those on the mapping $T$.

Algorithm 11. Initialization. Let $x^0\in S$.

Iteration Step. Given $x^k$, take $w^k\in T(x^k)$ and $g^k\in\partial f(x^k)$, and compute
$$x^{k+1}=\Pi\bigl(\lambda_k(w^k+g^k),\,x^k\bigr).\tag{15}$$

Remark 12. (i) If $f\equiv 0$, then (15) reduces to the basic scheme of [16]. Thus, Algorithm 11 generalizes the basic iteration scheme of [16] from the variational inequality to the setting of the mixed variational inequality.
(ii) If $d(u,v)=\frac{1}{2}\|u-v\|^2$, then (15) becomes
$$x^{k+1}=P_C\bigl(x^k-\lambda_k(w^k+g^k)\bigr),\tag{16}$$
which is the basic scheme of projected subgradient methods for mixed variational inequalities (see, e.g., [8, 11, 12, 14]).
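To illustrate, here is a minimal numerical sketch of the Euclidean special case (16) (our own toy instance, not from the paper: a single-valued affine $T$, $f=\|\cdot\|_1$, a box constraint, and an illustrative step-size rule), which returns a weighted averaged iterate of the kind analyzed in Theorem 15 below:

```python
import numpy as np

def solve_mvi(M, q, lo, hi, iters=5000):
    """Projected subgradient scheme (16):
    x^{k+1} = P_C(x^k - lam_k (w^k + g^k)),
    with T(x) = {M x + q} (monotone when M is positive semidefinite),
    f(x) = ||x||_1, and C the box [lo, hi]."""
    x = np.clip(np.zeros(len(q)), lo, hi)       # feasible starting point
    x_avg, weight = np.zeros_like(x), 0.0
    for k in range(iters):
        w = M @ x + q                           # w^k in T(x^k)
        g = np.sign(x)                          # g^k in the subdifferential of ||.||_1
        beta = 1.0 / np.sqrt(k + 1)             # diminishing, nonsummable steps
        lam = beta / max(1.0, np.linalg.norm(w + g))
        x = np.clip(x - lam * (w + g), lo, hi)  # Euclidean projection onto C
        x_avg, weight = x_avg + lam * x, weight + lam
    return x_avg / weight                       # ergodic (averaged) iterate

M = np.array([[2.0, 0.5], [0.5, 1.0]])          # positive definite, so T is monotone
q = np.array([-1.0, 1.0])
print(solve_mvi(M, q, lo=-np.ones(2), hi=np.ones(2)))  # approaches (0, 0)
```

For this toy instance one can check directly that $x^*=(0,0)$ solves (MVI), since $0\in Mx^*+q+\partial\|x^*\|_1$.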

We first establish the key result giving the main properties of the basic scheme (15), which will be used extensively to establish our convergence results.

Lemma 13. Let $(d,H)\in\mathcal{F}(S)$ and let $\{x^k\}$ be the sequence generated by Algorithm 11. Then the following properties hold: (a); (b); (c) for any $u$ and $w\in T(u)$.

Proof. (a) Set . It follows from item (i) of Lemma 7 that As a consequence, we have Now let . It follows from inequality (13) and the relation (9) that Since we have This together with (19) implies item (a).
(b) For each with any , we have Combining with item (a) taking , we obtain item (b).
(c) For each $u$, it follows from the monotonicity of $T$ and item (a) that Summing over $k$ and dividing both members by $\sum_{k}\lambda_k$, by the convexity of $f$, one obtains that for each , This completes the proof.

From now on, we analyze the convergence behavior of Algorithm 11 by choosing the parameter $\lambda_k$ as
$$\lambda_k=\frac{\beta_k}{\gamma_k},$$
where the parameter $\beta_k>0$ is freely chosen and satisfies $\sum_{k=0}^{\infty}\beta_k=\infty$. About the parameter $\gamma_k$, we make the following assumptions.

Assumption B. ($B_1$) and are bounded, and there exists such that for all ; ($B_2$) there exists some such that for all .

Remark 14. We would like to mention that there are many well-known alternatives for the choice of $\beta_k$, for example, $\beta_k=c/(k+1)$ or $\beta_k=c/\sqrt{k+1}$ with $c>0$; the interested reader is referred to [8, 14, 20, 21]. Note that both hypotheses of Assumption B are satisfied by all the above suggested choices for $\beta_k$. In particular, if $d(u,v)=\frac{1}{2}\|u-v\|^2$, then Algorithm 11 reduces to the method proposed by Xia et al. [14].
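For concreteness, the classical step-size sequences of the kind mentioned above can be written down directly (an illustrative snippet; the constant $c$ and the exact rule used in the paper are not specified here):

```python
import numpy as np

# Two classical choices for {beta_k}: both tend to zero while their sums
# diverge, which is the usual requirement on subgradient step sizes; the
# first is in addition square-summable.
def beta_harmonic(k, c=1.0):
    return c / (k + 1)            # sum beta_k = inf, sum beta_k^2 < inf

def beta_sqrt(k, c=1.0):
    return c / np.sqrt(k + 1)     # sum beta_k = inf, slower decay

print([round(beta_harmonic(k), 3) for k in range(5)])
print([round(beta_sqrt(k), 3) for k in range(5)])
```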

Theorem 15. Let $(d,H)\in\mathcal{F}(S)$. Suppose that Assumptions A and B hold. Let $\{x^k\}$ be the sequence generated by Algorithm 11, and set
$$z^K:=\Bigl(\sum_{k=0}^{K}\lambda_k\Bigr)^{-1}\sum_{k=0}^{K}\lambda_k x^{k+1}.$$
Then the sequences $\{x^k\}$ and $\{z^K\}$ are bounded, and each cluster point of the sequence $\{z^K\}$ belongs to the solution set of (MVI). Moreover, suppose that
$$\sum_{k=0}^{\infty}\beta_k^{2}<\infty.\tag{30}$$
Then the whole sequence $\{z^K\}$ converges to some solution of (MVI).

Proof. We divide the proof into two steps.
Step  1. Invoking item (b) of Lemma 13 and using Assumption B, we get by induction Using , it is easy to see that is bounded. From item (iii) of Definition 8, we know that is bounded. As a consequence, the sequence $\{x^k\}$ is bounded. It follows from Assumptions (A3) and (A5) that the sequences $\{w^k\}$ and $\{g^k\}$ are also bounded. Then, using Assumption B, we get Since , using inequality (32), one has . It follows from item (c) of Lemma 13 that for any $u$ and $w\in T(u)$, Let $\hat{x}$ be a cluster point of the sequence $\{z^K\}$. Since $f$ is lower semicontinuous, taking limits in both sides of inequality (32), one gets Furthermore, $\hat{x}\in C$. Now set $\tilde{f}:=f+\delta_C$, where $\delta_C$ denotes the indicator function over $C$; that is, if $x\in C$, then $\delta_C(x)=0$; otherwise, $\delta_C(x)=+\infty$. Since $\partial\tilde{f}(x)$ is nonempty for each $x\in\operatorname{dom}\tilde{f}$, it follows that $T+\partial\tilde{f}$ is maximal monotone. From the definition of the subdifferential, we have Summing both sides of inequalities (34) and (35), we obtain Since $T+\partial\tilde{f}$ is maximal monotone, this means that $0\in(T+\partial\tilde{f})(\hat{x})$, which is equivalent to saying that $\hat{x}$ is a solution of (MVI).
Step  2. Invoking item (b) of Lemma 13 and using Assumption B, we get by induction Using and (30), we get From Step 1, we know that the sequence is bounded with all its cluster points belonging to SOL(MVI). Thus, to complete the proof of our claim on the convergence of the whole sequence to a solution of (MVI), we only need to prove that the sequence has a unique cluster point. The proof of the remainder is exactly the same as that given by Bruck [22] in Steps 2 and 3 of Theorem 1. For the sake of convenience, the reader is also referred to Corollary 1 of the recent paper [16] (pages 38–40), and so we omit it here.

Remark 16. If $f\equiv 0$, then Theorem 15 reduces to item (a) of Theorem 1 and to Corollary 1 of [16]. Thus, we extend the main results of [16] from variational inequalities to the setting of mixed variational inequalities.

Remark 17. Compared with Theorem 3.5 of Xia et al. [14], Theorem 15 asserts that the averaged sequence $\{z^K\}$, rather than $\{x^k\}$, converges to a solution of (MVI).

Definition 18. A function $\phi:C\to\mathbb{R}\cup\{+\infty\}$ is called a gap function for (MVI) when the following statements hold: (i) $\phi(x)\ge 0$ for any $x\in C$; (ii) $\phi(x^*)=0$ if and only if $x^*$ is a solution of (MVI).
Clearly, a gap function with the properties of Definition 18 allows us to reformulate (MVI) as the optimization problem
$$\min_{x\in C}\ \phi(x).$$
In order to establish efficiency estimates for mixed variational inequality problems, we introduce a gap function for (MVI).

Proposition 19. The function
$$\phi(x):=\sup\{\langle w,\,x-y\rangle+f(x)-f(y):y\in C,\ w\in T(y)\}$$
is a gap function for (MVI).

Proof. The proof is in two parts. (i) Taking $y=x$ in the definition of $\phi$ shows that $\phi(x)\ge 0$ for any $x\in C$. (ii) If $x^*$ solves (MVI), then there exists $w^*\in T(x^*)$ such that
$$\langle w^*,\,y-x^*\rangle+f(y)-f(x^*)\ge 0,\quad\forall y\in C.$$
By the monotonicity of $T$, we have, for any $y\in C$ and $w\in T(y)$,
$$\langle w,\,x^*-y\rangle+f(x^*)-f(y)\le\langle w^*,\,x^*-y\rangle+f(x^*)-f(y)\le 0.$$
Therefore, it follows that $\phi(x^*)\le 0$. By item (i), we have $\phi(x^*)=0$.
Conversely, if $\phi(x^*)=0$, then, by the definition of $\phi$, we get
$$\langle w,\,x^*-y\rangle+f(x^*)-f(y)\le 0,\quad\forall y\in C,\ \forall w\in T(y).$$
By the maximal monotonicity of $T$, we know that $T$ is upper hemicontinuous. Therefore, it is easy to see that there exists $w^*\in T(x^*)$ such that
$$\langle w^*,\,y-x^*\rangle+f(y)-f(x^*)\ge 0,\quad\forall y\in C.$$
This completes the proof.
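As a numerical illustration of Definition 18 and Proposition 19 (a toy one-dimensional instance of our own; the supremum is approximated on a grid, so this is only a sketch):

```python
import numpy as np

# Minty-type gap for the toy data T(y) = y - 1 (single-valued, monotone),
# f = |.|, C = [-2, 2]:  phi(x) = sup_{y in C} { T(y)(x - y) + f(x) - f(y) }.
T = lambda y: y - 1.0
f = np.abs
ys = np.linspace(-2.0, 2.0, 4001)          # grid used to approximate the sup

def gap(x):
    return np.max(T(ys) * (x - ys) + f(x) - f(ys))

xs = np.linspace(-2.0, 2.0, 4001)
vals = [gap(x) for x in xs]
x_star = xs[int(np.argmin(vals))]
print(x_star, gap(x_star))                 # ~ (0.0, 0.0)
```

Here $x^*=0$ solves the toy (MVI), since $0\in T(0)+\partial|{\cdot}|(0)$, and the computed gap vanishes there while remaining nonnegative on $C$, matching properties (i) and (ii) of Definition 18.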

In order to characterize the convergence estimate of Algorithm 11, we also need the quantity Now we present the convergence estimate of Algorithm 11.

Theorem 20. Let $(d,H)\in\mathcal{F}(S)$. Suppose that Assumptions A and B hold. Let $\{x^k\}$ be the sequence generated by Algorithm 11, and let $z^K$ be defined as in Theorem 15. If the quantity above is finite, then we have

Proof. If the quantity above is finite, then the estimate (49) is an immediate consequence of inequality (33) and the definition of the gap function $\phi$.

Remark 21. If $f\equiv 0$, then Theorem 20 reduces to item (b) of Theorem 1 of [16].
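For intuition about the typical behavior of estimates of the type (49) (a standard calculation of our own, under the illustrative choice $\beta_k=c/\sqrt{k+1}$ with bounded $\gamma_k$), note that
$$\sum_{k=0}^{K}\beta_k\ \ge\ c\,\sqrt{K+1},\qquad\sum_{k=0}^{K}\beta_k^{2}\ \le\ c^{2}\bigl(1+\log(K+1)\bigr),$$
so a bound of the shape $O\bigl(\bigl(1+\sum_{k\le K}\beta_k^{2}\bigr)/\sum_{k\le K}\beta_k\bigr)$, which is the usual shape of ergodic subgradient estimates, yields a gap value of order $O\bigl(\log(K+1)/\sqrt{K+1}\bigr)$ after $K$ iterations.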

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research is supported by Guangxi Natural Science Foundation (2013GXNSFBA019015), Scientific Research Foundation of Guangxi University for Nationalities (2012QD015), and Open Fund of Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis (2013HCIC08).