

Xiangli Li, "Smoothing Nonmonotone Barzilai-Borwein Gradient Method and Its Application to Stochastic Linear Complementarity Problems", Mathematical Problems in Engineering, vol. 2015, Article ID 425351, 6 pages, 2015. https://doi.org/10.1155/2015/425351

# Smoothing Nonmonotone Barzilai-Borwein Gradient Method and Its Application to Stochastic Linear Complementarity Problems

Revised: 07 Sep 2015
Accepted: 20 Sep 2015
Published: 30 Sep 2015

#### Abstract

A new algorithm for nonsmooth box-constrained minimization is introduced. The method is a smoothing nonmonotone Barzilai-Borwein (BB) gradient method. All iterates generated by this method are feasible. We apply this method to stochastic linear complementarity problems. Numerical results show that our method is promising.

#### 1. Introduction

In this paper, we consider the box-constrained minimization problem
$$\min f(x) \quad \text{s.t.} \quad x \in \Omega := \{x \in \mathbb{R}^n : l \le x \le u\}, \tag{1}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $l < u$. If $f$ is differentiable on the feasible set, there are many methods for (1), such as the trust region method [1], the projected gradient method [2], the projected Barzilai-Borwein method [3], the Newton method [4], and the active-set projected trust region method [5]. If $f$ is semismooth, there are few methods for (1). In this paper, we only assume that $f$ is locally Lipschitzian, but not necessarily differentiable.

In [6], a smoothing projected gradient method (SPG) was introduced for nonsmooth optimization problems over a nonempty closed convex set. This algorithm is easy to implement. At each iteration, the authors approximate the objective function by a smooth function with a fixed smoothing parameter and employ the classical projected gradient method to obtain a new point. If a certain criterion is satisfied, they update the smoothing parameter at the new point for the next iteration.

The main motivation for the current work comes from the numerical results of [6]. Analyzing the algorithm in [6], we find that it spends a lot of time computing projections when the test problems are large-scale. To avoid this shortcoming, we propose a smoothing nonmonotone BB gradient method. In our method, we use an active set and a nonmonotone line search strategy. The search direction consists of two parts: some of the components are simply defined; the other components are determined by the Barzilai-Borwein gradient method. We apply the method to stochastic linear complementarity problems.

Throughout this paper, $\|\cdot\|$ denotes the Euclidean norm. For $x \in \mathbb{R}^n$, the orthogonal projection of $x$ onto a set $\Omega$ is denoted by $P_\Omega(x)$. For a given matrix $M$, $M_i$ denotes the $i$th row of $M$.

The paper can be outlined as follows. In Section 2, we describe our method. In Section 3, stochastic linear complementarity problems are introduced. In Section 4, we apply our method to stochastic linear complementarity problems and numerical results are illustrated and discussed. Finally, we make some concluding remarks in Section 5.

#### 2. Smoothing Nonmonotone BB Gradient Method

In this section, we propose a smoothing nonmonotone BB gradient method for (1), where $f$ is a general locally Lipschitz continuous function.

Definition 1. Let $f$ be a locally Lipschitz continuous function. One calls $\tilde{f} : \mathbb{R}^n \times \mathbb{R}_{+} \to \mathbb{R}$ a smoothing function of $f$ if $\tilde{f}(\cdot, \mu)$ is continuously differentiable in $\mathbb{R}^n$ for any fixed $\mu > 0$, if $\lim_{z \to x,\ \mu \downarrow 0} \tilde{f}(z, \mu) = f(x)$ for any $x$, and if the set of limits $\{\lim_{z \to x,\ \mu \downarrow 0} \nabla_z \tilde{f}(z, \mu)\}$ is nonempty and bounded.
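As a concrete illustration of this definition (our example, not one from the paper), the absolute value $|x|$ admits the classical smoothing function $\sqrt{x^2 + \mu^2}$, which is continuously differentiable for every $\mu > 0$ and converges to $|x|$ as $\mu \downarrow 0$, with approximation error at most $\mu$:

```python
import math

def f(x):
    # nonsmooth function: absolute value
    return abs(x)

def f_smooth(x, mu):
    # classical smoothing of |x|; smooth in x for every mu > 0
    return math.sqrt(x * x + mu * mu)

# the approximation error is uniform in x:
# 0 <= sqrt(x^2 + mu^2) - |x| <= mu
for mu in (1.0, 0.1, 0.01):
    err = max(f_smooth(x, mu) - f(x) for x in (-2.0, -0.5, 0.0, 0.3, 1.7))
    assert 0.0 <= err <= mu
```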

Let . Consider the associated set at a stationary point of the function , where is a fixed parameter. For , we define the index sets , , and as in (5), where . The set is an estimate of the active set at the point . For simplicity, we abbreviate , , and defined by (5) as , , and , respectively. We determine the search direction by the process in (6). It is easy to show .
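For box constraints, a common way to estimate the active sets in the spirit of (5) is to mark a component as active when it lies within a tolerance of a bound and the gradient pushes it outward. The function below is a hypothetical illustration of this idea, not the paper's exact index sets:

```python
def estimate_active_sets(x, g, l, u, eps):
    """Estimate of the active sets at x for bounds l <= x <= u.

    Hypothetical illustration: component i is treated as active at its
    lower (upper) bound when it is within eps of that bound and the
    gradient component pushes it outward; the rest are free.
    """
    n = len(x)
    lower = [i for i in range(n) if x[i] - l[i] <= eps and g[i] > 0]
    upper = [i for i in range(n) if u[i] - x[i] <= eps and g[i] < 0]
    free = [i for i in range(n) if i not in lower and i not in upper]
    return lower, upper, free

# example: component 0 sits at its lower bound with a positive gradient
x = [0.0, 0.5]; g = [1.0, -0.2]; l = [0.0, 0.0]; u = [1.0, 1.0]
assert estimate_active_sets(x, g, l, u, 1e-6) == ([0], [], [1])
```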

Now, we state the algorithm as follows.

Algorithm 2.
Step 0. Choose positive constants , , a tolerance parameter , an integer , constants , and parameters ; choose an initial point , . Set .
Step 1. Stop if . Otherwise, set .
Step 2. If , choose , and set . Let , and go to Step 1.
Step 3. Determine , , and according to (5).
Step 4. Compute the direction according to (6).
Step 5. Compute the step size by the Armijo line search, satisfying .
Step 6. Set , , and . Compute , , and . If , ; otherwise, . If , ; otherwise, .
Step 7. Set , where . Let and . Let . Go to Step 2.
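The nonmonotone Armijo line search in Algorithm 2 compares the trial value with the maximum of the last few objective values rather than only the most recent one, which allows occasional increases and suits the BB step well. A sketch in Python, where `beta` and `delta` are assumed constants (the paper's exact parameters are elided above):

```python
def nonmonotone_armijo(phi, phi_grad_dot_d, recent_values, beta=0.5,
                       delta=1e-4, max_backtracks=30):
    """Backtracking line search with the nonmonotone (max-type) Armijo rule.

    phi(t) evaluates the objective along the search direction at step t;
    phi_grad_dot_d is the directional derivative at t = 0 (must be < 0);
    recent_values holds the last M objective values.  beta and delta are
    assumed constants, not the paper's.
    """
    ref = max(recent_values)           # nonmonotone reference value
    t = 1.0
    for _ in range(max_backtracks):
        if phi(t) <= ref + delta * t * phi_grad_dot_d:
            return t
        t *= beta                      # backtrack
    return t

# 1-D example: phi(t) = (1 - t)^2 along d = 1 from x = 0, phi'(0) = -2
step = nonmonotone_armijo(lambda t: (1 - t) ** 2, -2.0, [1.0])
assert step == 1.0   # full step accepted: phi(1) = 0 <= 1 - 2e-4
```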

Remark 3. In Step  2 of Algorithm 2, we use to update the smoothing parameter.

Remark 4. Algorithm 2 employs a two-loop approach: the outer loop updates the smoothing parameter, and the inner loop computes a stationary point of the smoothed problem. In Step 4, the alternate BB step is used to compute the direction.
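The alternate BB step of Remark 4 switches between the two classical two-point Barzilai-Borwein step sizes. A minimal sketch of the two formulas:

```python
def bb_step_sizes(s, y):
    """Two-point Barzilai-Borwein step sizes.

    s = x_k - x_{k-1}, y = g_k - g_{k-1}.  Returns (BB1, BB2):
        BB1 = s's / s'y   (long step),  BB2 = s'y / y'y   (short step).
    An 'alternate BB' scheme, as in Remark 4, switches between the two
    from one iteration to the next.
    """
    sy = sum(si * yi for si, yi in zip(s, y))
    ss = sum(si * si for si in s)
    yy = sum(yi * yi for yi in y)
    return ss / sy, sy / yy

# for a quadratic with Hessian 2I both formulas give the exact step 1/2
a1, a2 = bb_step_sizes([1.0, 0.0], [2.0, 0.0])
assert (a1, a2) == (0.5, 0.5)
```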

Remark 5. Note that , so there exists an infinite subsequence such that . This implies that every limit point of the sequence generated by Algorithm 2 is a stationary point of (1).
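The two-loop structure (outer smoothing update, inner iterations at fixed smoothing parameter) can be illustrated on a one-dimensional toy problem. The smoothing of |x|, the step size, the loop counts, and the shrink factor below are illustrative assumptions chosen only so the run is stable, not the paper's choices:

```python
import math

def grad_smooth(x, mu):
    # gradient of the smoothing sqrt(x^2 + mu^2) of |x|
    return x / math.sqrt(x * x + mu * mu)

# outer loop shrinks the smoothing parameter mu; inner loop runs plain
# gradient steps on the smoothed problem at fixed mu
x, mu = 2.0, 1.0
for _ in range(8):               # outer loop: update mu
    for _ in range(50):          # inner loop: minimize smoothed f at fixed mu
        x -= mu * grad_smooth(x, mu)
    mu *= 0.5
assert abs(x) < 1e-2             # x approaches the minimizer 0 of |x|
```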

The convergence analysis of Algorithm 2 is similar to that in [6, 7], so we omit it here.

#### 3. Stochastic Linear Complementarity Problems

Let $(\Omega, \mathcal{F}, P)$ be a probability space with $\Omega$ a subset of $\mathbb{R}^m$. Suppose that the probability distribution $P$ is known. The stochastic linear complementarity problem (SLCP) [8] is to find an $x \in \mathbb{R}^n$ such that
$$x \ge 0, \quad M(\omega)x + q(\omega) \ge 0, \quad x^{\top}\big(M(\omega)x + q(\omega)\big) = 0, \quad \omega \in \Omega, \tag{12}$$
where $M(\omega) \in \mathbb{R}^{n \times n}$ and $q(\omega) \in \mathbb{R}^n$ for $\omega \in \Omega$ are random matrices and vectors. Throughout this paper, we assume that $M(\omega)$ and $q(\omega)$ are measurable functions of $\omega$ with finite expectation, where $E$ stands for the expectation. If $\Omega$ only contains a single realization, then (12) reduces to the standard LCP. The LCP has been studied by many researchers [9–14]. Since, in many practical problems, some elements may involve uncertain data, problem (12) has been receiving much attention in the recent literature [8, 15–20].
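For a single realization $\omega$, (12) is a standard LCP, whose solutions are exactly the zeros of the natural residual $\min(x, Mx + q)$ taken componentwise. A small sketch of this residual (toy dense list-of-rows representation assumed):

```python
def lcp_residual(M, q, x):
    """Natural residual of the LCP: || min(x, Mx + q) ||_2.

    It is zero iff x >= 0, Mx + q >= 0, and x'(Mx + q) = 0.
    M is a list of rows; q and x are lists (toy dense representation).
    """
    w = [sum(Mij * xj for Mij, xj in zip(row, x)) + qi
         for row, qi in zip(M, q)]
    return sum(min(xi, wi) ** 2 for xi, wi in zip(x, w)) ** 0.5

# x = (0, 1) solves the LCP with M = I, q = (1, -1):
# w = Mx + q = (1, 0), so min(x, w) = (0, 0) componentwise
assert lcp_residual([[1.0, 0.0], [0.0, 1.0]], [1.0, -1.0], [0.0, 1.0]) == 0.0
```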

In general, there is no $x$ satisfying (12) for almost all $\omega \in \Omega$. A deterministic formulation of the SLCP provides a decision vector that is optimal in a certain sense; different deterministic formulations may yield different solutions that are optimal in different senses. Two reformulations of (12) have been proposed: the expected value (EV) formulation [15] and the expected residual minimization (ERM) formulation [8]. In this paper, we concentrate on ERM, which seeks a vector that minimizes the expected residual of the SLCP [8]; that is,
$$\min_{x \ge 0} f(x) := E\big[\|\Phi(x, \omega)\|^2\big], \tag{14}$$
where $\Phi(x, \omega)$ is defined componentwise by $\Phi_i(x, \omega) = \phi\big((M(\omega)x + q(\omega))_i, x_i\big)$ and $(\cdot)_i$ denotes the $i$th component of a vector. Here, $\phi$ is an NCP function, which has the property $\phi(a, b) = 0 \Leftrightarrow a \ge 0$, $b \ge 0$, $ab = 0$. In this paper, we choose the min function $\phi(a, b) = \min(a, b)$, as in [6].
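As an illustration of the ERM objective, the sketch below evaluates the expected squared residual for a discrete distribution given as probability-weighted realizations, using the min NCP function (one standard choice); the helper names and the representation are our assumptions:

```python
def phi(a, b):
    # 'min' NCP function: phi(a, b) = 0  iff  a >= 0, b >= 0, ab = 0
    return min(a, b)

def erm_objective(realizations, x):
    """Expected residual f(x) = E[ ||Phi(x, w)||^2 ] for a discrete
    distribution given as (probability, M, q) triples, where Phi_i
    applies the NCP function to ((M(w)x + q(w))_i, x_i)."""
    total = 0.0
    for p, M, q in realizations:
        w = [sum(mij * xj for mij, xj in zip(row, x)) + qi
             for row, qi in zip(M, q)]
        total += p * sum(phi(wi, xi) ** 2 for wi, xi in zip(w, x))
    return total

# two equally likely realizations of a 1-D SLCP: M = 1, q = +1 or -1;
# at x = 0 only the realization q = -1 contributes residual 1
scens = [(0.5, [[1.0]], [1.0]), (0.5, [[1.0]], [-1.0])]
assert erm_objective(scens, [0.0]) == 0.5
```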

Let $\phi(a, b) = \min(a, b)$; by [6], we know that $\phi$ is semismooth and that
$$\tilde{\phi}(a, b, \mu) = \frac{a + b - \sqrt{(a - b)^2 + 4\mu^2}}{2}$$
is a smoothing function of $\phi$. Hence, a smoothing function $\tilde{f}(x, \mu)$ of $f$ is obtained by replacing $\phi$ with $\tilde{\phi}$ in the definition of $\Phi$ in (14).
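One standard smoothing of the min function is the CHKS (Chen-Harker-Kanzow-Smale) function, which is smooth for every $\mu > 0$ and stays within $\mu$ of $\min(a, b)$; whether it coincides with the paper's elided choice is an assumption here. A quick numerical check:

```python
import math

def phi_min(a, b):
    return min(a, b)

def phi_smooth(a, b, mu):
    # CHKS smoothing of min(a, b); smooth in (a, b) for every mu > 0
    return 0.5 * (a + b - math.sqrt((a - b) ** 2 + 4.0 * mu * mu))

# exact at mu = 0 and within mu of min(a, b) for mu > 0
assert phi_smooth(3.0, -1.0, 0.0) == -1.0
for mu in (0.1, 0.01):
    for a, b in ((2.0, 1.0), (-1.0, 3.0), (0.0, 0.0)):
        assert 0.0 <= phi_min(a, b) - phi_smooth(a, b, mu) <= mu
```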

In [6], the authors use the sample average approximation (SAA) method [21], which replaces (14) by its approximation
$$f_N(x) := \frac{1}{N}\sum_{i=1}^{N}\|\Phi(x, \omega^i)\|^2.$$
Here, the sample $\{\omega^1, \ldots, \omega^N\}$ is generated by the Monte Carlo sampling method, following the same probability distribution as $\omega$. So, a smoothing function of $f_N$ is
$$\tilde{f}_N(x, \mu) := \frac{1}{N}\sum_{i=1}^{N}\|\tilde{\Phi}(x, \omega^i, \mu)\|^2, \tag{20}$$
where $\tilde{\Phi}$ is obtained from $\Phi$ by replacing $\phi$ with $\tilde{\phi}$. In the next section, we apply Algorithm 2 to (20).
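The SAA idea, replacing the expectation with an average over N Monte Carlo samples, can be sketched for a scalar SLCP with $M = 1$ and $q(\omega)$ uniform on $[-1, 1]$ (a toy distribution assumed purely for illustration):

```python
import random

def saa_objective(sample, x):
    # sample average of the squared residual min(x + q, x)^2 for M = 1
    return sum(min(x + q, x) ** 2 for q in sample) / len(sample)

random.seed(0)
N = 10000
sample = [random.uniform(-1.0, 1.0) for _ in range(N)]  # q ~ U(-1, 1)

# at x = 0 the residual is min(q, 0)^2, so the expectation is
# integral_{-1}^{0} q^2 / 2 dq = 1/6 ~ 0.1667
approx = saa_objective(sample, 0.0)
assert abs(approx - 1.0 / 6.0) < 0.02
```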

#### 4. Numerical Results

The test problems are randomly generated; the generation procedure is taken from [16, 20] and omitted here. All problems are tested in Matlab (version 7.5).

Several parameters are needed to generate a problem: , , and . A vector is randomly generated. When the parameter , is the unique global solution of the test problem and . If , the global solution is unknown.

Algorithm 2 contains several parameters; we use the values in all numerical experiments. We terminate the iteration if one of the following conditions is satisfied:

We start from the same randomly generated initial point and compare our method with SPG. The numerical results are reported in Tables 1-2. Let if is differentiable at . In Tables 1-2, denotes the number of variables; and present at the final iterates and , respectively; and denote the value of at the final iterates and , respectively.

Table 1

| (, , , ) | Algorithm 2 | | CPU | SPG | | CPU |
|---|---|---|---|---|---|---|
| (100, 20, 10, 20) | 1.450e−04 | 1.923e−11 | 0.047 | 2.907e−04 | 8.295e−11 | 0.281 |
| (100, 20, 10, 10) | 9.718e−05 | 2.741e−11 | 0.047 | 2.757e−04 | 2.872e−10 | 0.156 |
| (100, 20, 10, 0) | 6.125e−04 | 3.791e−07 | 0.313 | 1.098e−03 | 1.144e−06 | 2.438 |
| (100, 40, 20, 20) | 6.406e−04 | 1.943e−10 | 0.188 | 6.900e−04 | 1.340e−10 | 1.531 |
| (100, 40, 20, 10) | 2.363e−04 | 9.364e−11 | 0.156 | 6.327e−04 | 4.385e−10 | 1.188 |
| (100, 40, 20, 0) | 9.764e−04 | 1.156e−06 | 0.703 | 3.012e−03 | 1.027e−06 | 5.953 |
| (200, 60, 30, 20) | 2.083e−04 | 9.858e−12 | 1.016 | 8.388e−04 | 1.658e−10 | 12.547 |
| (200, 60, 30, 10) | 3.174e−04 | 8.098e−11 | 1.000 | 6.856e−04 | 4.393e−10 | 7.625 |
| (200, 60, 30, 0) | 9.381e−04 | 1.015e−06 | 5.109 | 1.127e−03 | 9.025e−07 | 52.625 |
| (200, 80, 40, 20) | 2.786e−04 | 1.487e−11 | 1.922 | 2.383e−04 | 1.293e−11 | 10.547 |
| (200, 80, 40, 10) | 7.646e−05 | 4.496e−12 | 1.781 | 4.530e−04 | 1.849e−10 | 7.984 |
| (200, 80, 40, 0) | 6.546e−04 | 1.187e−07 | 10.453 | 1.122e−03 | 7.939e−07 | 60.391 |
| (200, 100, 50, 20) | 2.178e−04 | 8.540e−12 | 3.031 | 1.552e−04 | 4.652e−12 | 12.609 |
| (200, 100, 50, 10) | 3.605e−04 | 7.120e−11 | 2.750 | 2.961e−04 | 6.574e−11 | 9.297 |
| (200, 100, 50, 0) | 9.332e−04 | 9.131e−07 | 16.828 | 3.471e−03 | 9.422e−07 | 117.703 |
| (300, 120, 60, 20) | 9.365e−04 | 9.237e−11 | 6.313 | 4.712e−05 | 3.370e−13 | 17.797 |
| (300, 120, 60, 10) | 2.435e−04 | 2.555e−11 | 5.781 | 9.311e−05 | 5.264e−12 | 14.078 |
| (300, 120, 60, 0) | 8.956e−04 | 1.009e−06 | 35.656 | 1.359e−03 | 8.088e−07 | 231.656 |
| (1000, 50, 25, 10) | 1.183e−04 | 1.785e−11 | 3.078 | 8.176e−04 | 6.966e−10 | 49.734 |
| (1000, 50, 25, 0) | 7.092e−04 | 1.117e−07 | 24.359 | 1.439e−03 | 9.236e−07 | 137.047 |
Table 2

| (, , , , ) | Algorithm 2 | | CPU | SPG | | CPU |
|---|---|---|---|---|---|---|
| (100, 20, 10, 20, 10) | 2.384e−04 | 5.326e+02 | 0.109 | 6.918e−04 | 5.325e+02 | 0.469 |
| (100, 20, 10, 20, 5) | 6.564e−05 | 1.455e+02 | 0.078 | 3.865e−04 | 1.455e+02 | 0.484 |
| (100, 40, 20, 20, 10) | 5.222e−05 | 9.491e+02 | 0.250 | 2.156e−04 | 9.491e+02 | 0.859 |
| (100, 40, 20, 20, 5) | 7.980e−04 | 3.096e+02 | 0.234 | 4.781e−04 | 3.096e+02 | 1.625 |
| (100, 40, 20, 10, 20) | 8.711e−04 | 2.413e+03 | 0.531 | 2.632e−04 | 2.416e+03 | 0.734 |
| (200, 60, 30, 20, 10) | 9.097e−04 | 1.752e+03 | 1.109 | 5.129e−04 | 1.752e+03 | 13.016 |
| (200, 60, 30, 20, 5) | 5.947e−04 | 4.115e+02 | 1.078 | 4.898e−04 | 4.115e+02 | 17.047 |
| (200, 80, 40, 20, 10) | 9.220e−04 | 1.773e+03 | 2.109 | 4.182e−04 | 1.773e+03 | 16.828 |
| (200, 80, 40, 20, 5) | 5.758e−04 | 6.038e+02 | 1.984 | 4.215e−04 | 6.038e+02 | 13.859 |
| (200, 100, 50, 20, 10) | 8.910e−04 | 2.577e+03 | 4.922 | 2.475e−04 | 2.577e+03 | 20.859 |
| (200, 100, 50, 20, 5) | 5.727e−04 | 7.772e+02 | 3.078 | 1.726e−04 | 7.772e+02 | 14.063 |
| (200, 100, 50, 10, 20) | 5.363e−04 | 7.329e+03 | 5.359 | 2.786e−04 | 7.328e+03 | 21.297 |
| (300, 120, 60, 20, 10) | 7.976e−04 | 2.953e+03 | 7.578 | 3.600e−04 | 2.953e+03 | 40.125 |
| (300, 120, 60, 20, 5) | 9.890e−05 | 9.607e+02 | 6.609 | 1.061e−04 | 9.607e+02 | 21.828 |
| (300, 120, 60, 10, 20) | 9.449e−04 | 7.566e+03 | 13.922 | 4.752e−04 | 7.567e+03 | 44.484 |
| (300, 120, 60, 20, 20) | 3.839e−04 | 8.263e+03 | 8.313 | 3.692e−04 | 8.262e+03 | 35.453 |
| (1000, 50, 25, 20, 10) | 4.506e−04 | 1.405e+03 | 3.656 | 4.315e+00 | 1.405e+03 | 22.391 |
| (1000, 50, 25, 10, 5) | 2.947e−04 | 3.753e+02 | 3.297 | 5.873e−04 | 3.753e+02 | 23.734 |
| (1000, 100, 50, 5, 10) | 1.118e−04 | 2.836e+03 | 20.797 | 3.124e−04 | 2.836e+03 | 85.547 |
| (1000, 100, 50, 10, 5) | 1.700e−04 | 8.009e+02 | 13.484 | 9.510e−05 | 8.009e+02 | 53.516 |

The results reported in Tables 1-2 show that the smoothing nonmonotone BB gradient method is quite promising. First, Algorithm 2 requires less CPU time on all test problems. Second, the final function values of Algorithm 2 and SPG are close for most test problems. On closer analysis, we find that Algorithm 2 decreases the objective faster than SPG at each step, which is mainly attributable to the active set and the BB step.

#### 5. Concluding Remarks

In this paper, we present a smoothing nonmonotone Barzilai-Borwein gradient method for nonsmooth box-constrained minimization. The main idea is to use a parametric smoothing approximation of the objective within a nonmonotone BB gradient method. To speed up convergence, we compute two alternating BB step sizes; combined with an inexact line search, this keeps the algorithm efficient. We apply the method to stochastic linear complementarity problems, and numerical results show that it is promising.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This project was supported by the National Natural Science Foundation of China (Grants nos. 61362021 and 11361018), Guangxi Fund for Distinguished Young Scholars (2012GXSFFA060003), Guangxi Natural Science Foundation (no. 2014GXNSFFA118001), the Scientific Research Foundation of the Higher Education Institutions of Guangxi, China (Grant no. ZD2014050), Guangxi Natural Science Foundation (no. 2014GXNSFAA118003), and Guangxi Key Laboratory of Automatic Detecting Technology and Instruments (no. YQ15112).

1. S. Bellavia, M. Macconi, and B. Morini, “An affine scaling trust-region approach to bound-constrained nonlinear systems,” Applied Numerical Mathematics, vol. 44, no. 3, pp. 257–280, 2003.
2. P. H. Calamai and J. J. Moré, “Projected gradient methods for linearly constrained problems,” Mathematical Programming, vol. 39, no. 1, pp. 93–116, 1987.
3. Y.-H. Dai and R. Fletcher, “Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming,” Numerische Mathematik, vol. 100, no. 1, pp. 21–47, 2005.
4. F. Facchinei, J. Júdice, and J. Soares, “An active set Newton algorithm for large-scale nonlinear programs with box constraints,” SIAM Journal on Optimization, vol. 8, no. 1, pp. 158–186, 1998.
5. L. Qi, X. J. Tong, and D. H. Li, “Active-set projected trust-region algorithm for box-constrained nonsmooth equations,” Journal of Optimization Theory and Applications, vol. 120, no. 3, pp. 601–625, 2004.
6. C. Zhang and X. Chen, “Smoothing projected gradient method and its application to stochastic linear complementarity problems,” SIAM Journal on Optimization, vol. 20, no. 2, pp. 627–649, 2009.
7. Y. Xiao and Q. Hu, “Subspace Barzilai-Borwein gradient method for large-scale bound constrained optimization,” Applied Mathematics and Optimization, vol. 58, no. 2, pp. 275–290, 2008.
8. X. Chen and M. Fukushima, “Expected residual minimization method for stochastic linear complementarity problems,” Mathematics of Operations Research, vol. 30, no. 4, pp. 1022–1038, 2005.
9. B. Chen, X. Chen, and C. Kanzow, “A penalized Fischer-Burmeister NCP-function,” Mathematical Programming, vol. 88, no. 1, pp. 211–216, 2000.
10. F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer, New York, NY, USA, 2003.
11. A. Fischer, “Solution of monotone complementarity problems with locally Lipschitzian functions,” Mathematical Programming, vol. 76, pp. 513–532, 1997.
12. F. Facchinei and C. Kanzow, “A nonsmooth inexact Newton method for the solution of large-scale nonlinear complementarity problems,” Mathematical Programming, vol. 76, no. 3, pp. 493–512, 1997.
13. D. Sun, “A regularization Newton method for solving nonlinear complementarity problems,” Applied Mathematics and Optimization, vol. 40, no. 3, pp. 315–339, 1999.
14. D. Sun, R. S. Womersley, and H. Qi, “A feasible semismooth asymptotically Newton method for mixed complementarity problems,” Mathematical Programming, vol. 94, pp. 167–187, 2002.
15. G. Gürkan, A. Y. Özge, and S. M. Robinson, “Sample-path solution of stochastic variational inequalities,” Mathematical Programming, vol. 84, no. 2, pp. 313–333, 1999.
16. G. L. Zhou and L. Caccetta, “Feasible semismooth Newton method for a class of stochastic linear complementarity problems,” Journal of Optimization Theory and Applications, vol. 139, no. 2, pp. 379–392, 2008.
17. H. Fang, X. Chen, and M. Fukushima, “Stochastic $R_0$ matrix linear complementarity problems,” SIAM Journal on Optimization, vol. 18, no. 2, pp. 482–506, 2007.
18. G.-H. Lin and M. Fukushima, “New reformulations for stochastic nonlinear complementarity problems,” Optimization Methods and Software, vol. 21, no. 4, pp. 551–564, 2006.
19. G.-H. Lin, X. Chen, and M. Fukushima, “New restricted NCP functions and their applications to stochastic NCP and stochastic MPEC,” Optimization, vol. 56, no. 5-6, pp. 641–953, 2007.
20. X. Chen, C. Zhang, and M. Fukushima, “Robust solution of monotone stochastic linear complementarity problems,” Mathematical Programming, vol. 117, no. 1-2, pp. 51–80, 2009.
21. A. J. Kleywegt, A. Shapiro, and T. Homem-de-Mello, “The sample average approximation method for stochastic discrete optimization,” SIAM Journal on Optimization, vol. 12, no. 2, pp. 479–502, 2002.