Special Issue: Variational Inequalities and Vector Optimization
A Proximal Analytic Center Cutting Plane Algorithm for Solving Variational Inequality Problems
Under the condition that the values of the underlying mapping can only be evaluated approximately, we propose a proximal analytic center cutting plane algorithm for solving variational inequalities. The algorithm can be viewed as an approximation of an earlier cutting plane method, and the conditions we impose on the corresponding mappings are more relaxed. A convergence analysis for the proposed algorithm is given at the end of the paper.
According to [1], the history of algorithms for solving finite-dimensional variational inequalities is relatively short. A recent development among such methods is the analytic center method based on cutting planes. It combines features of the newly developed interior point methods with the classical cutting plane scheme to achieve polynomial complexity in theory and quick convergence in practice; more details can be found in [2, 3]. Specifically, Goffin et al. [4] developed a convergent framework for finding a solution of the variational inequality associated with a continuous mapping F from R^n to R^n and a polyhedron C, under an assumption slightly stronger than pseudomonotonicity. Later, Marcotte and Zhu [5] extended this algorithm to quasimonotone variational inequalities satisfying a weak additional assumption. Such methods are effective in practice.
Note that in optimization problems, see [6–8], some functions from R^n to R are themselves defined through other minimization problems. For example, consider Lagrangian relaxation, see [9–12]: the primal problem is

min f(x) subject to g(x) ≤ 0, x ∈ X,   (1.1)

where X is a compact subset of R^n and f: R^n → R, g: R^n → R^m are two functions. Lagrangian relaxation in this problem leads to the problem max_{u ≥ 0} θ(u), where

θ(u) = min_{x ∈ X} { f(x) + u^T g(x) }   (1.2)

is the dual function. Trying to solve problem (1.1) by means of solving its dual problem becomes more difficult, since in this case evaluating the function value θ(u) requires solving exactly another optimization problem (1.2). Let us see another example: consider the problem

min_{x ∈ C} f(x),   (1.3)

where f is convex (not necessarily differentiable) and C is a nonempty closed convex set. The Moreau-Yosida regularization F_λ of f on C, that is,

F_λ(x) = min_{y ∈ C} { f(y) + (1/(2λ)) ||y - x||^2 },   (1.4)

where λ is a positive parameter, leads to an equivalent problem: a point x is a solution to (1.3) if and only if it is a solution to the problem

min_{x ∈ R^n} F_λ(x).   (1.5)

Problem (1.5) is easier to deal with than (1.3), see [13]. But in this case, computing the exact value of F_λ at an arbitrary point is difficult or even impossible, since F_λ is itself defined through a minimization problem involving another function f. Intuitively, we therefore consider approximate computation of the function values.
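As a small illustration of the approximate evaluation just described, the following sketch computes the Moreau-Yosida regularization of a univariate convex function by solving the inner minimization numerically. All names, the test function f(y) = |y|, and the golden-section tolerance are our own illustrative choices, not part of the paper.

```python
import math

def moreau_yosida_approx(f, x, lam, lo, hi, tol=1e-4):
    """Approximately evaluate the Moreau-Yosida regularization
        F_lam(x) = min_{y in [lo, hi]} f(y) + ||y - x||^2 / (2*lam)
    by golden-section search on the (convex, hence unimodal) inner
    objective. Returns an approximate minimizer and the regularized value."""
    phi = lambda y: f(y) + (y - x) ** 2 / (2.0 * lam)
    gr = (math.sqrt(5.0) - 1.0) / 2.0   # golden-ratio shrink factor
    a, b = lo, hi
    while b - a > tol:
        c, d = b - gr * (b - a), a + gr * (b - a)
        if phi(c) <= phi(d):
            b = d
        else:
            a = c
    y = 0.5 * (a + b)
    return y, phi(y)

# f(y) = |y| is nonsmooth but convex; its regularization is smooth.
# For x = 2, lam = 1 the exact minimizer is y = 1 (soft-thresholding).
y, val = moreau_yosida_approx(abs, 2.0, 1.0, -10.0, 10.0)
```

The inner search only needs function values of f, which is exactly the setting of the paper: F_λ is evaluated approximately, with the tolerance controlling the accuracy.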
The above-mentioned phenomenon also exists for mappings from X to Y, where X and Y are subspaces of two finite-dimensional spaces, respectively. Once a mapping, and more specifically a continuous mapping, is defined implicitly rather than explicitly, approximation of the mapping becomes inevitable, see [14]. In this paper we try to solve VI(F, C) by assuming that the values of the mapping F from R^n to R^n can only be computed approximately. Under this assumption, we construct an algorithm for solving the approximate variational inequality problem, and we also prove that there exists a cluster point of the iteration points generated by the proposed algorithm that is a solution to the original problem VI(F, C).
This paper is organized as follows. Some basic concepts and results are introduced in Section 2. In Section 3, a proximal analytic center cutting plane algorithm for solving the variational inequality problems is given. The convergence analysis of the proposed algorithm is addressed in Section 4. In the last section, we give some conclusions.
2. Basic Concepts and Results
Let C ⊆ R^n be a polyhedron and F a continuous mapping from C to R^n. A vector x* ∈ C is a solution to the variational inequality VI(F, C) if and only if it satisfies the system of nonlinear inequalities:

F(x*)^T (x - x*) ≥ 0 for all x ∈ C.   (2.1)

The vector x* ∈ C is a solution to the dual variational inequality DVI(F, C) of VI(F, C) if and only if it satisfies

F(x)^T (x - x*) ≥ 0 for all x ∈ C.   (2.2)

We denote by C* the solution set of VI(F, C) and by C*_D the solution set of DVI(F, C), respectively. Whenever F is continuous, we have C*_D ⊆ C*. If F is pseudomonotone on C, then C* = C*_D. If F is quasimonotone at some x̄ ∈ C and F(x̄) is not normal to C at x̄, then C*_D is nonempty, see Proposition 1 in [5]. For the definitions of monotone, pseudomonotone, and quasimonotone mappings, see [5, 15].
Definition 2.1. The gap functions g and g_D of VI(F, C) and DVI(F, C) are, respectively, defined by

g(x) = max_{y ∈ C} F(x)^T (x - y),   g_D(x) = max_{y ∈ C} F(y)^T (x - y).

Note that g(x) ≥ 0 and g_D(x) ≥ 0 on C; g(x) = 0 if and only if x is a solution to VI(F, C), and g_D(x) = 0 if and only if x is a solution to DVI(F, C). Thus, C* = {x ∈ C : g(x) = 0} and C*_D = {x ∈ C : g_D(x) = 0}.
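To make the primal gap function concrete, here is a minimal sketch for the special case where the polyhedron is a box, so the inner maximization can be solved coordinatewise without an LP solver. The mapping F, the box bounds, and the test point are illustrative assumptions of ours.

```python
def gap_function(F, x, lo, hi):
    """Primal gap g(x) = max_{y in C} F(x)^T (x - y) for the box
    C = [lo, hi]^n; over a box the inner maximum is attained
    coordinate by coordinate."""
    Fx = F(x)
    g = 0.0
    for i in range(len(x)):
        y_i = lo[i] if Fx[i] > 0 else hi[i]   # choice of y_i minimizing Fx[i] * y_i
        g += Fx[i] * (x[i] - y_i)
    return g

# F(x) = x - b: solving VI(F, C) amounts to projecting b onto C.
b = [0.5, 2.0]
F = lambda x: [x[0] - b[0], x[1] - b[1]]
lo, hi = [0.0, 0.0], [1.0, 1.0]
x_star = [0.5, 1.0]                       # the projection of b onto the unit box
g_val = gap_function(F, x_star, lo, hi)   # 0.0: x_star solves VI(F, C)
```

A point with positive gap, such as the origin here, is not a solution; the gap quantifies how strongly the inequality (2.1) is violated.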
Definition 2.2. A point x ∈ C is called an ε-solution to VI(F, C) if g(x) ≤ ε for a given ε > 0.
Definition 2.3. For x, y ∈ R^n, we say x ≤ y if and only if x_i ≤ y_i for i = 1, …, n, where x = (x_1, …, x_n)^T and y = (y_1, …, y_n)^T.
Assumptions 2.4. Throughout this paper, we make the following assumptions: for each x ∈ C and any given ε > 0, we can always find an approximate value F̃(x) of F(x) and a constant L > 0 such that (a) ||F̃(x) - F(x)|| ≤ ε; (b) F̃(x) → F(x) as ε → 0; (c) ||F̃(x) - F̃(y)|| ≤ L ||x - y|| + ε for all x, y ∈ C (the approximate Lipschitz condition).
These assumptions are realistic in practice, see [16, 17]. By using the architectures given in [16, 17], we can approximate the mapping arbitrarily well, since neural networks are capable of approximating any function from one finite-dimensional real vector space to another arbitrarily well, see [18]. Specifically, let us consider the case of a univariate function. If f is a min-type function of the form

f(x) = min_{t ∈ T} f_t(x),   (2.5)

where each f_t is convex and T is an infinite set, then it may be impossible to calculate f(x). However, we may still consider two cases. In the first case of controllable accuracy, for each positive ε one can find an ε-minimizer of (2.5), that is, an element t_x ∈ T satisfying f_{t_x}(x) ≤ f(x) + ε; in the second case, this may be possible only for some fixed (and possibly unknown) ε. In both cases, we may set f̃(x) = f_{t_x}(x). A special case of (2.5) arises from Lagrangian relaxation [12], where the problem max_{u ≥ 0} θ(u) with θ(u) = min_{x ∈ X} { f_0(x) + u^T g(x) } is the Lagrangian dual of the primal problem

min f_0(x) subject to g(x) ≤ 0, x ∈ X,

with g = (g_1, …, g_m)^T. Then, for each multiplier u ≥ 0, we need only to find a point x_u ∈ X such that f_0(x_u) + u^T g(x_u) ≤ θ(u) + ε, see [7].
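Continuing the Lagrangian example, the following sketch shows controllable-accuracy evaluation of the dual function θ(u): the inner minimization is solved over a finite grid approximating the compact set X, with the grid spacing playing the role of ε. The primal data f, g and the grid are toy assumptions of ours.

```python
def dual_function_approx(f, g, u, X):
    """Approximately evaluate the dual function
        theta(u) = min_{x in X} f(x) + u * g(x)
    by minimizing over a finite grid X covering the compact feasible
    set; the grid spacing controls the accuracy eps of assumption (2.4)."""
    best_x = min(X, key=lambda x: f(x) + u * g(x))
    return best_x, f(best_x) + u * g(best_x)

# Toy primal: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 on X = [0, 2].
f = lambda x: x * x
g = lambda x: 1.0 - x
X = [i / 1000.0 for i in range(2001)]   # grid on [0, 2], spacing 1e-3
x_u, theta_u = dual_function_approx(f, g, 2.0, X)
```

For u = 2 the inner objective is x^2 + 2(1 - x), minimized at x = 1 with θ(2) = 1; the grid evaluation recovers this up to the grid accuracy.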
Under the above assumptions (2.4), we introduce an approximate problem VI(F̃, C) associated with VI(F, C): finding x* ∈ C such that

F̃(x*)^T (x - x*) ≥ 0 for all x ∈ C,   (2.8)

where F̃(x*) satisfies ||F̃(x*) - F(x*)|| ≤ ε for arbitrary ε > 0. Its dual problem DVI(F̃, C) is to find x* ∈ C such that

F̃(x)^T (x - x*) ≥ 0 for all x ∈ C,   (2.9)

where F̃(x) satisfies ||F̃(x) - F(x)|| ≤ ε for arbitrary ε > 0.
Definition 2.5. The gap function g̃ of VI(F̃, C) is defined by g̃(x) = max_{y ∈ C} F̃(x)^T (x - y).
Definition 2.6. A point x ∈ C is called an ε-solution to VI(F̃, C) if g̃(x) ≤ ε for a given ε > 0.
The optimal solution sets of VI(F̃, C) and DVI(F̃, C) are denoted by C̃* and C̃*_D, respectively. The following proposition ensures that C̃*_D is nonempty.
Proposition 2.7. If there exists a point x̄ ∈ C such that F is quasimonotone at x̄ and F(x̄) is not normal to C at x̄, then C̃*_D is nonempty.
Proof. Since F(x̄) is not normal to C at x̄, there exists a point y ∈ C such that F(x̄)^T (y - x̄) > 0. For t ∈ (0, 1], set y_t = x̄ + t (y - x̄); then we have y_t ∈ C and F(x̄)^T (y_t - x̄) > 0. Noting condition (2.9), we obtain the corresponding inequality for the approximate mapping F̃. Letting ε → 0, it follows from condition (b) in (2.4) that the inequality is preserved in the limit, that is, C̃*_D is nonempty.
In the following part, we focus our attention on solving VI(F̃, C). Let G(x, y) denote an auxiliary mapping, continuous in x and y, satisfying G(x, x) = F̃(x), and strongly monotone in y, that is,

(G(x, y_1) - G(x, y_2))^T (y_1 - y_2) ≥ ρ ||y_1 - y_2||^2 for some ρ > 0.

We consider the auxiliary variational inequality associated with x ∈ C, whose solution y(x) satisfies

G(x, y(x))^T (y - y(x)) ≥ 0 for all y ∈ C.   (2.11)

In view of the strong monotonicity of G with respect to y, this auxiliary variational inequality has a unique solution y(x).
Proposition 2.8. The mapping y(·) is continuous on C. Furthermore, x* is a solution to VI(F̃, C) if and only if y(x*) = x*.
Proof. The first part of the proposition follows from Theorem 5.4 in [15]. To prove the second part, we first suppose that y(x*) = x*. Then (2.11) yields F̃(x*)^T (y - x*) = G(x*, x*)^T (y - x*) ≥ 0 for all y ∈ C, that is, x* solves VI(F̃, C). Conversely, suppose that x* solves VI(F̃, C); then G(x*, x*)^T (y(x*) - x*) = F̃(x*)^T (y(x*) - x*) ≥ 0, and from (2.11) with y = x*, we have G(x*, y(x*))^T (x* - y(x*)) ≥ 0. Adding the two preceding inequalities, one obtains (G(x*, x*) - G(x*, y(x*)))^T (y(x*) - x*) ≥ 0, and we conclude, from the strong monotonicity of G with respect to y, that y(x*) = x*.
Let σ and β be two positive numbers, and let m(x) be the smallest nonnegative integer m satisfying condition (2.15) below, where F̃ satisfies assumption (2.4) for arbitrary ε > 0. The existence of a finite m(x) will be proved in Proposition 2.9. The composite mapping Λ is then defined, for every x ∈ C, in terms of y(x) and m(x). If x is a solution to VI(F̃, C), then we have y(x) = x.
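The auxiliary problem is easiest to picture for a specific auxiliary mapping. The sketch below assumes the common choice G(x, y) = F(x) + y - x (strongly monotone in y with constant 1 and satisfying G(x, x) = F(x)) and a box-shaped C, for which the solution of the auxiliary variational inequality reduces to a projection; this particular G and the test data are our own illustrative choices, not necessarily the paper's.

```python
def solve_auxiliary_vi(F, x, lo, hi):
    """For G(x, y) = F(x) + y - x, the auxiliary VI
        G(x, y(x))^T (z - y(x)) >= 0 for all z in C
    over the box C = [lo, hi]^n is solved by the projection
        y(x) = proj_C(x - F(x))."""
    Fx = F(x)
    return [min(max(x[i] - Fx[i], lo[i]), hi[i]) for i in range(len(x))]

# Illustrative data: F(x) = x - b with b = (0.5, 2), C the unit box.
F = lambda x: [x[0] - 0.5, x[1] - 2.0]
lo, hi = [0.0, 0.0], [1.0, 1.0]
y = solve_auxiliary_vi(F, [0.2, 0.3], lo, hi)       # y(x) = (0.5, 1.0)
fixed = solve_auxiliary_vi(F, [0.5, 1.0], lo, hi)   # y(x*) = x*
```

The second call illustrates Proposition 2.8: at the solution x* = (0.5, 1.0) of the variational inequality, the auxiliary solution is a fixed point, y(x*) = x*.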
Proposition 2.9. The operator Λ is well defined for every x ∈ C. Moreover, m(x) admits an upper bound depending only on the constant L given in (2.4)-(c).
Proof. From the definition of m(x), we have  Suppose (2.15) does not hold for any finite integer m, that is,  Noting assumption (2.4)-(b), we obtain  therefore,  Since , (2.21) contradicts (2.18). To prove the second part, we notice that  if , which means the second conclusion of Proposition 2.9 holds.
Proposition 2.10. If x* ∈ C̃*_D, then for every k we have
Proof. Let x* ∈ C̃*_D; then  and  Since , we have . Therefore,  For all , there holds  Combining (2.25) with (2.26), we obtain , that is, the conclusion of the proposition holds.
3. A Proximal Analytic Center Cutting Plane Algorithm
The algorithm offered in this section is a modification of the algorithm in [5]. Algorithm 3.1 is described as follows.
Algorithm 3.1 (initialization). Let ρ be the strong monotonicity constant of G with respect to y, and let σ, β be two positive constants. Set k = 0.
Step 1 (computation of the center). Find an approximate analytic center x^k of the current localization polytope.
Step 2 (stopping criterion). If , stop.
Step 3 (solving the approximate auxiliary variational inequality problem). Find y^k ∈ C satisfying the auxiliary variational inequality (2.11) at x^k, where F̃ satisfies assumption (2.4).
Step 4 (construction of the approximate cutting plane). Construct the cutting plane through y^k, where m_k is the smallest integer that satisfies (2.15) and F̃ satisfies assumption (2.4).
Increase k by one and go to Step 1.
End of Algorithm 3.1
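Algorithm 3.1 relies on machinery (analytic centers of polytopes, approximate auxiliary problems) that is hard to show compactly, so the following one-dimensional sketch is entirely our own drastically simplified construction: it only illustrates the flow center → approximate mapping evaluation → stopping test → cut. In one dimension the analytic center of the interval localization set is exactly its midpoint, and the cut F(x)(z - x) ≤ 0, valid for every dual solution z by (2.2), discards the half-interval that cannot contain a solution; Step 3 is elided here, since the auxiliary VI adds nothing in this toy setting.

```python
import math

def cutting_plane_1d(F_approx, a, b, tol=1e-8, max_iter=200):
    """Toy 1-D analogue of the cutting plane loop on C = [a, b]:
    Step 1: the analytic center of [a, b] is the midpoint x;
    Step 2: a gap-like stopping test;
    Step 4: the cut F(x) * (z - x) <= 0 halves the localization set."""
    for _ in range(max_iter):
        x = 0.5 * (a + b)              # Step 1: analytic center of [a, b]
        Fx = F_approx(x)               # (approximate) mapping value at x
        if abs(Fx) * (b - a) < tol:    # Step 2: gap-like quantity is small
            return x
        if Fx > 0:                     # Step 4: keep {z : z <= x}
            b = x
        else:                          # keep {z : z >= x}
            a = x
    return 0.5 * (a + b)

# Monotone test mapping F(x) = exp(x) - 2 on C = [0, 1]; solution x* = ln 2.
x_star = cutting_plane_1d(lambda x: math.exp(x) - 2.0, 0.0, 1.0)
```

In higher dimensions the midpoint is replaced by the analytic center of the accumulated cuts and F is only known through the approximation F̃, but the shrinking-localization-set mechanism is the same.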
4. Convergence Analysis
In [20], the authors proposed a column generation scheme to generate the polytope, and they proved that if the tolerance satisfies a suitable inequality, the scheme stops with a feasible solution; that is, one can find a vector such that the generated set contains a full-dimensional closed ball of the prescribed radius. In other words, there exists a smallest index such that the set generated by the column generation scheme does not contain the ball of that radius; this is known as the finite cut property. It is easy to see that the result of Theorem 6.6 in [20] also holds, without much change, for our Algorithm 3.1 using approximate centers. That is, by using the row generation scheme, there exists a smallest index such that the set generated in Step 4 of Algorithm 3.1 does not contain the ball of the given radius lying inside the polytope. This result plays an important role in proving the convergence of Algorithm 3.1 described in Section 3.
Theorem 4.1. Let the polyhedron C have nonempty interior, and let C̃*_D be nonempty. Suppose Assumption (2.4) holds. Then either Algorithm 3.1 stops with a solution to VI(F̃, C) after a finite number of iterations, or there exists a subsequence of the infinite sequence {x^k} that converges to a point in C̃*_D.
Proof. Assume that Algorithm 3.1 does not stop at iteration k for every k, and let x* ∈ C̃*_D. From Proposition 2.10, we know that x* never lies on a cutting plane for any k. Let {z^j} be an arbitrary sequence of points in the interior of C converging to x*, and let {r_j} be a sequence of positive numbers such that r_j → 0 and the sequence of closed balls B(z^j, r_j) lies in the interior of C. From the finite cut property, we know that for each j there exist a smallest index k_j and a point of B(z^j, r_j) cut off at iteration k_j, so that the corresponding cut satisfies (4.3). As j → ∞, there exists a point on the segment joining this point to z^j at which the cut is active. Since C is compact, we can extract a convergent subsequence from these points; denote by x̄ its limit point. From Proposition 2.9, we know that {m_{k_j}} is bounded. Consequently, we can extract from it a constant subsequence. Now, from the continuity of F̃ for fixed ε and the relations (2.15) and (4.3), it follows by taking the limit in (4.3) that the limiting cut inequality holds at x̄. By Proposition 2.10, we conclude that x̄ ∈ C̃*_D.
Theorem 4.2. Under the conditions of Theorem 4.1, either Algorithm 3.1 stops with a solution to after a finite number of iterations, or there exists a subsequence of the infinite sequence that converges to a point in .
Proof. Since we increase k by one at the end of Step 4 in Algorithm 3.1, we have ε_k → 0 as k → ∞. Moreover, F̃(x^k) - F(x^k) → 0 as k → ∞ in Algorithm 3.1, where 0 denotes the zero vector. This means that VI(F̃, C) approaches VI(F, C) as k → ∞. Therefore, from the second result of Theorem 4.1, we know that the cluster point of the iterates is a solution to the problem VI(F, C).
5. Conclusions
In [5], the authors proposed a cutting plane method for solving quasimonotone variational inequalities, but throughout that paper they employed exact information about the mapping F from R^n to R^n. As discussed in the first part of our paper, it is sometimes difficult or even impossible to compute the exact values of the mapping F. Motivated by this fact, we construct an approximate problem VI(F̃, C) of VI(F, C) and propose a proximal analytic center cutting plane algorithm for solving it. In contrast to [5], our algorithm can be viewed as an approximation algorithm, and it is easier to implement since it only requires inexact information about the corresponding mapping. At the same time, the conditions we impose on the corresponding mappings are more relaxed; for example, [5] requires that the mapping satisfy the Lipschitz condition, whereas we only require that the so-called approximate Lipschitz condition (2.4)-(c) hold.
Acknowledgments
This work is partially supported by the National Natural Science Foundation of China (Grant nos. 11171138 and 11171049) and by the Higher School Research Project of the Educational Department of Liaoning Province (Grant no. L2010235).
References
[1] P. T. Harker and J.-S. Pang, "Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications," Mathematical Programming B, vol. 48, no. 2, pp. 161–220, 1990.
[2] J.-L. Goffin and J.-P. Vial, "Convex nondifferentiable optimization: a survey focused on the analytic center cutting plane method," Optimization Methods & Software, vol. 17, no. 5, pp. 805–867, 2002.
[3] Z. Q. Luo and J. Sun, "An analytic center based column generation algorithm for convex quadratic feasibility problems," SIAM Journal on Optimization, vol. 9, no. 1, pp. 217–235, 1999.
[4] J.-L. Goffin, P. Marcotte, and D. Zhu, "An analytic center cutting plane method for pseudomonotone variational inequalities," Operations Research Letters, vol. 20, no. 1, pp. 1–6, 1997.
[5] P. Marcotte and D. L. Zhu, "A cutting plane method for solving quasimonotone variational inequalities," Computational Optimization and Applications, vol. 20, no. 3, pp. 317–324, 2001.
[6] M. Hintermüller, "A proximal bundle method based on approximate subgradients," Computational Optimization and Applications, vol. 20, no. 3, pp. 245–266, 2001.
[7] K. C. Kiwiel, "An algorithm for nonsmooth convex minimization with errors," Mathematics of Computation, vol. 45, no. 171, pp. 173–180, 1985.
[8] M. V. Solodov, "On approximations with finite precision in bundle methods for nonsmooth optimization," Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 151–165, 2003.
[9] D. P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, Mass, USA, 1995.
[10] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation, Prentice-Hall, Englewood Cliffs, NJ, USA, 1989.
[11] J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. Sagastizábal, Optimisation Numérique: Aspects Théoriques et Pratiques, Springer, Berlin, Germany, 1997.
[12] C. Lemaréchal, "Lagrangian decomposition and nonsmooth optimization: bundle algorithm, prox iteration, augmented Lagrangian," in Nonsmooth Optimization: Methods and Applications, pp. 201–216, Gordon and Breach, Philadelphia, Pa, USA, 1992.
[13] Y. R. He, "Minimizing and stationary sequences of convex constrained minimization problems," Journal of Optimization Theory and Applications, vol. 111, no. 1, pp. 137–153, 2001.
[14] A. Pinkus, "Density in approximation theory," Surveys in Approximation Theory, vol. 1, pp. 1–45, 2005.
[15] A. Auslender, Optimisation: Méthodes Numériques, Masson, Paris, France, 1976.
[16] B. Hammer, "Universal approximation of mappings on structured objects using the folding architecture," Technical Report, Reihe P, Heft 183, Universität Osnabrück, Fachbereich Informatik, 1996.
[17] B. Hammer and V. Sperschneider, "Neural networks can approximate mappings on structured objects," in Proceedings of the International Conference on Computational Intelligence and Neural Networks, P. P. Wang, Ed., pp. 211–214, 1997.
[18] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[19] K. C. Kiwiel, "A proximal bundle method with approximate subgradient linearizations," SIAM Journal on Optimization, vol. 16, no. 4, pp. 1007–1023, 2006.
[20] J.-L. Goffin, Z.-Q. Luo, and Y. Ye, "Complexity analysis of an interior cutting plane method for convex feasibility problems," SIAM Journal on Optimization, vol. 6, no. 3, pp. 638–652, 1996.