Journal of Applied Mathematics
Volume 2013, Article ID 735916, 8 pages
Research Article

Smoothing Techniques and Augmented Lagrangian Method for Recourse Problem of Two-Stage Stochastic Linear Programming

Department of Applied Mathematics, Faculty of Mathematical Sciences, University of Guilan, P.O. Box 416351914, Rasht, Iran

Received 1 February 2013; Accepted 22 April 2013

Academic Editor: Neal N. Xiong

Copyright © 2013 Saeed Ketabchi and Malihe Behboodi-Kahoo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The augmented Lagrangian method can be used to solve the recourse problems arising in two-stage stochastic linear programming and to obtain their normal solutions. The augmented Lagrangian objective function of a stochastic linear problem is not twice differentiable, which precludes the use of a Newton method. In this paper, we apply smoothing techniques and a fast Newton-Armijo algorithm to solve an unconstrained smooth reformulation of this problem. Computational results and comparisons are given to show the effectiveness and speed of the algorithm.

1. Introduction

In stochastic programming, some of the data are random variables with a specified probability distribution [1]. Stochastic programming was first introduced by Dantzig, the originator of linear programming, in [2].

In this paper, we consider the following two-stage stochastic linear program (SLP) with recourse, which involves the calculation of an expectation over a discrete set of scenarios. The objective of the master problem (1) is the sum of a deterministic first-stage cost and the expectation of the recourse function, which depends on the random variable. The recourse function is defined by the second-stage linear program (3), in which the vector of coefficients, the matrix of coefficients, the demand vector, and the technology matrix depend on the random vector with a given support space. The problems (1) and (3) are called the master and recourse problems of stochastic programming, respectively.

We assume that the problem (3) has a solution for each feasible first-stage decision and each realization of the random vector.
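As a concrete illustration of the structure above, the sketch below evaluates the expected recourse cost over a discrete set of scenarios by solving one small second-stage LP per scenario. The data (q, W, h, T, the scenario probabilities, and the use of scipy.optimize.linprog) are illustrative assumptions, not the paper's test problems.

```python
import numpy as np
from scipy.optimize import linprog

def recourse_value(x, q, W, h, T):
    """Q(x, s) = min { q^T y : W y >= h - T x, y >= 0 },
    solved here with scipy's LP solver (illustrative choice)."""
    # linprog uses A_ub y <= b_ub, so negate to encode W y >= h - T x.
    res = linprog(c=q, A_ub=-W, b_ub=-(h - T @ x), bounds=(0, None))
    return res.fun

# One first-stage decision, two equally likely demand scenarios.
x = np.array([1.0])
q = np.array([1.0])          # second-stage cost vector
W = np.array([[1.0]])        # recourse matrix
T = np.array([[1.0]])        # technology matrix
scenarios = [(0.5, np.array([0.5])), (0.5, np.array([2.0]))]

expected_Q = sum(p * recourse_value(x, q, W, h, T) for p, h in scenarios)
print(expected_Q)  # shortfall max(h - x, 0): 0.5 * 0 + 0.5 * 1 = 0.5
```

In this toy instance the recourse cost is the shortfall of supply below demand, so the expectation can be checked by hand.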

In general, the recourse function is not differentiable everywhere. Therefore, traditional methods use nonsmooth optimization techniques [3-5]. In the last decade, however, smoothing methods have been proposed for the recourse function in the standard form of the recourse problem [6-11]. In this paper, we apply a smooth approximation technique to the recourse function in the case where the recourse problem has linear inequality constraints; see Section 2 for details. The approximation is based on the least 2-norm solution of the recourse problem. This paper uses the augmented Lagrangian method to obtain the least 2-norm solution (Section 3). For convenience, the Euclidean least 2-norm solution of a linear programming problem is called the normal solution. This effective method involves solving an unconstrained quadratic problem whose objective function is not twice differentiable. To apply a fast Newton method, we use a smoothing technique and replace the plus function by an accurate smooth approximation [12, 13]. In Section 4, the smoothing algorithm and the numerical results are presented. Concluding remarks are given in Section 5.

We now describe our notation. Let x be a vector in R^n. By x_+ we mean the vector in R^n whose ith entry is x_i if x_i > 0 and 0 otherwise. By A^T we mean the transpose of the matrix A, and the gradient of a function at a point is denoted as usual. The unadorned norm and the infinity norm denote the 2-norm and the infinity norm, respectively.

2. Approximation of Recourse Function

As mentioned, the objective function of (1) is nondifferentiable. This drawback stems from the recourse function. In this section, we approximate it by a differentiable function.

Using the dual of the problem (3), the recourse function can be written as follows: Unlike the linear recourse function, the quadratic recourse function is differentiable. Thus, in this paper, the approximation is based on the following quadratic problem, which has helpful properties: The next theorem shows that, for a sufficiently small regularization parameter, the solution of this problem is the normal solution of the problem (4).
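The regularization idea behind the quadratic problem (5) can be illustrated on a tiny LP with multiple optima: adding a small quadratic term selects, among all optimal solutions, the one of least 2-norm. The LP data, the value of the regularization parameter, and the use of scipy.optimize.minimize with the SLSQP solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# LP: min x1 + x2  s.t.  x1 + x2 >= 1, x >= 0.
# Every point of the segment {x1 + x2 = 1, x >= 0} is optimal;
# the normal (least 2-norm) solution is (0.5, 0.5).
c = np.array([1.0, 1.0])
eps = 1e-3  # small regularization parameter (illustrative value)

obj = lambda x: c @ x + eps * (x @ x)        # LP cost + quadratic term
cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]
res = minimize(obj, x0=np.array([1.0, 0.0]), method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=cons)
print(res.x)  # close to the normal solution (0.5, 0.5)
```

Starting from the optimal vertex (1, 0), the regularized problem pulls the iterate to the least-norm point of the optimal face, which is the behavior the theorem formalizes.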

Theorem 1. For the functions introduced in (4) and (5), the following hold: (a) There exists a threshold such that, for each regularization parameter below it, the solution of the problem (5) is the normal solution of the problem (4). (b) For each such parameter, the approximated recourse function is differentiable with respect to the first-stage variable. (c) The gradient of the approximated recourse function at a point is expressed in terms of the solution of the problem (5).

Proof. To prove (a), refer to [14, 15].
Also, (b) and (c) can be proved easily by noting that the approximated recourse function is the conjugate of a suitable function and applying Theorems 26.3 and 23.5 in [16].

Using the approximated recourse function, we can define a differentiable approximation to the objective function of (1). By (6), the gradient of this approximation exists and can be obtained explicitly. This approximation paves the way to apply optimization algorithms to the master problem (1) with its objective function replaced by the approximation. In [7], an SLP problem with inequality constraints in the master problem and equality constraints in the recourse problem is considered. Moreover, Theorem 2.3 of [7] shows that a solution of the approximated problem is a good approximation to a solution of the master problem. Here we can state a similar theorem for the problem (1) by using the technique of the proof of Theorem 2.3 in [7].

Theorem 2. Consider the problem (1). Then, for any tolerance, there exists a threshold parameter such that the approximation error is bounded accordingly, where the error function is defined as follows: Let one solution of (1) and one solution of (11) be given. Then, there exists a threshold such that the corresponding bound holds. Further, assuming that the objective functions are strongly convex on the feasible set with a given modulus, a bound on the distance between the two solutions follows.

According to Theorem 1, obtaining the gradient of the approximated objective function at each iteration requires the normal solution of the linear programming problem (4). In this paper, the augmented Lagrangian method [17] is used for this purpose.

3. Smooth Approximation and Augmented Lagrangian Method

In the augmented Lagrangian method, an unconstrained maximization problem is solved, which gives the projection of a point on the solution set of the problem (4).

Assume that an arbitrary vector is given. Consider the problem of finding its least 2-norm projection on the solution set of the problem (4). In this problem, the first-stage vector and the random variable are constant; therefore, for simplicity, they are fixed and suppressed from the notation.

Considering that the objective function of the problem (16) is strictly convex, its solution is unique. Let us introduce the Lagrangian function for the problem (16) as follows: where the multiplier vectors are Lagrangian multipliers and the remaining parameters are constants. Therefore, the dual problem of (16) becomes By solving the inner minimization of the problem (18), the dual of the problem (16) is obtained: where the dual function is The following theorem states that if the penalty parameter is sufficiently large, solving the inner maximization of (19) gives the solution of the problem (16).
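To make the mechanism concrete, the sketch below runs the classical method of multipliers to project a point onto an affine set, which is the equality-constrained core of the projection problem (16); the paper's version additionally handles inequality constraints and nonnegativity via the plus function. All data and parameter values here are illustrative assumptions.

```python
import numpy as np

def project_affine_auglag(a, C, d, rho=10.0, iters=50):
    """Least 2-norm projection of a onto {x : C x = d} by the
    method of multipliers: minimize 0.5*||x - a||^2 + lam^T (C x - d)
    + (rho/2)*||C x - d||^2 over x, then update the multipliers."""
    n = len(a)
    lam = np.zeros(C.shape[0])
    x = a.copy()
    M = np.eye(n) + rho * C.T @ C          # Hessian of the inner problem
    for _ in range(iters):
        # Unconstrained inner minimization (first-order condition).
        x = np.linalg.solve(M, a - C.T @ lam + rho * C.T @ d)
        # Multiplier update with the current constraint residual.
        lam = lam + rho * (C @ x - d)
    return x

a = np.array([1.0, 2.0, 3.0])
C = np.array([[1.0, 1.0, 1.0]])
d = np.array([3.0])
x = project_affine_auglag(a, C, d)
print(x)  # projection of (1, 2, 3) onto the plane x1 + x2 + x3 = 3
```

Because the inner objective is strongly convex, each inner problem has a closed-form minimizer, and the multiplier updates drive the constraint residual to zero, mirroring the iteration described for problem (16).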

Theorem 3 (see [17]). Consider the following maximization problem, in which the data are constants and the auxiliary function is introduced as follows: Also, assume that the solution set is nonempty and that the submatrix of the constraint matrix corresponding to the nonzero components of the solution has full rank. In such a case, there is a threshold such that, for all penalty parameters beyond it, the point obtained from solving the problem (21) yields the unique and exact solution of the problem (16).

Also, under special conditions, the solution of the problem (3) can be obtained as well; the following theorem expresses this fact.

Theorem 4 (see [17]). Assume that the solution set is nonempty. Then, for each admissible choice of the parameters, an exact solution of the linear programming problem (3) is obtained from the solution of the problem (21).

According to the theorems mentioned above, the augmented Lagrangian method yields the following iterative process for solving the problem (16): where the initial point is an arbitrary vector; here we use the zero vector as the initial point for obtaining the normal solution of the problem (4).

We note that the problem (23) is a concave problem whose objective function is piecewise quadratic and not twice differentiable. Applying the smoothing techniques [18, 19] and replacing the plus function by a smooth approximation, we transform this problem into a twice continuously differentiable problem.

Chen and Mangasarian [19] introduced a family of smoothing functions, which is built as follows. Let a piecewise continuous density function satisfying suitable conditions be given. The derivative of the plus function is the step function, which equals 1 for positive arguments and 0 otherwise. Therefore, a smoothing approximation of the plus function is defined by integrating a smoothing approximation of the step function, which in turn is obtained by convolving the step function with the scaled density. By choosing specific density functions, particular instances of these approximations are obtained as follows: One such function, with a smoothing parameter, is used here to replace the plus function in (22) and to obtain a smooth reformulation of the function (22): Therefore, we have the following iterative process instead of (23): It can be shown that, as the smoothing parameter approaches infinity, any solution of the smooth problem (29) approaches the solution of the equivalent problem (22) (see [19]).
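One widely used member of this family, obtained from the sigmoid density and employed, for example, in [13], is the neural-network smoothing function p(x, alpha) = x + (1/alpha) log(1 + exp(-alpha x)). The sketch below (an illustrative implementation, not the paper's code) shows how it approaches the plus function as the smoothing parameter grows.

```python
import numpy as np

def plus(x):
    """The plus function: componentwise max(x, 0)."""
    return np.maximum(x, 0.0)

def p_smooth(x, alpha):
    """Neural-network smoothing of the plus function:
    p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha * x)).
    np.logaddexp(0, t) = log(1 + exp(t)) avoids overflow."""
    return x + np.logaddexp(0.0, -alpha * x) / alpha

x = np.linspace(-2.0, 2.0, 401)
for alpha in (1.0, 10.0, 100.0):
    gap = np.max(np.abs(p_smooth(x, alpha) - plus(x)))
    print(alpha, gap)  # worst gap is log(2)/alpha, attained at x = 0
```

The uniform gap log(2)/alpha tends to zero as the smoothing parameter grows, which is the sense in which solutions of the smooth problem (29) approach those of (22).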

We begin with a simple lemma that bounds the squared difference between the plus function and its smooth approximation.

Lemma 5 (see [13]). For arguments in a bounded interval, the squared difference between the plus function and its smooth approximation is bounded in terms of the smoothing parameter, where the smooth approximation is the function of (28).

Theorem 6. Consider the problems (21) and (32). Then, for any admissible parameters, the difference between their optimal values is bounded, where the bound is defined as follows: Let one solution of (21) and one solution of (32) be given. Then the corresponding bound on the objective values holds. Further, assuming that the constraint matrix has full rank, a bound on the distance between the two solutions follows.

Proof. For any admissible point, the two objectives differ only in the terms containing the plus function and its smooth approximation. Hence, by using Lemma 5, we obtain a bound on their difference. From this inequality, the bound on the optimal values follows. Suppose now that the constraint matrix has full rank. Then the Hessian of the smooth objective is negative definite, and the objective is strongly concave on bounded sets. Applying the definition of strong concavity to the two solutions yields the claimed bound on their distance.

The twice differentiability of the objective function of the problem (32) allows us to use a quadratically convergent Newton algorithm with an Armijo stepsize [20], which makes the algorithm globally convergent.

4. Numerical Results and Algorithm

In each iteration of the process (30), one concave, quadratic, unconstrained maximization problem is solved. A fast Newton method can be used for this purpose.

In the algorithm, the Hessian matrix may be singular; thus we use a modified Newton direction. The direction in each iteration for solving (30) is obtained from the following relation: where the regularization constant is a small positive number, the identity matrix has the appropriate order, and the step length is determined by the Armijo rule (see Algorithm 1).

Algorithm 1: Newton method with the Armijo rule.
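A generic version of the scheme in Algorithm 1 can be sketched as follows; it is stated for minimization (maximizing a concave function is the same method applied to its negative), and the regularization constant, backtracking factors, and test problem are illustrative assumptions.

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, delta=1e-6, beta=0.5,
                  sigma=1e-4, tol=1e-8, max_iter=100):
    """Modified Newton method with the Armijo rule.
    The possibly singular Hessian is regularized by delta * I."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x) + delta * np.eye(len(x))   # modified Hessian
        d = np.linalg.solve(H, -g)             # Newton direction
        t = 1.0
        # Armijo backtracking: shrink t until sufficient decrease.
        while f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= beta
        x = x + t * d
    return x

# Convex quadratic test problem: f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
x = newton_armijo(f, lambda x: Q @ x - b, lambda x: Q, np.zeros(2))
print(x)  # close to the solution of Q x = b
```

On a quadratic objective the full Newton step already satisfies the Armijo test, so the method converges in a handful of iterations; the regularization term only matters when the Hessian is singular or nearly so.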

The proposed algorithm was applied to solve some recourse problems. Table 1 compares this algorithm with the CPLEX v. 12.1 solver on the quadratic convex programming problems (5). As is evident from Table 1, most of the recourse problems could be solved more successfully by the algorithm based on the smooth augmented Lagrangian Newton method (SALN) than by the CPLEX package (see, for illustration, problems 21-25 in Table 1). The algorithm attains high accuracy and the solution with minimum norm in suitable time (see the last column of Table 1). We can also see that CPLEX outperforms the proposed algorithm on some recourse problems in which the matrices are approximately square (e.g., lines 5-12).

Table 1: Comparison between the smooth augmented Lagrangian Newton method (SALN) and the CPLEX solver.

The test generator produces recourse problems. These problems are generated using the MATLAB code shown in Algorithm 2.

Algorithm 2:

The algorithm was run on several recourse problems on a computer with a 2.5 GHz dual-core CPU and 4 GB of memory in the MATLAB 7.8 programming environment. In the generated problems, the recourse matrix is a sparse matrix with the stated density. The two constants in (44) in the above algorithm were selected as 1 and a fixed small value, respectively.
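Algorithm 2 itself is not reproduced here; the sketch below is a hypothetical Python analogue of such a generator, producing a random sparse recourse matrix of a given density together with a right-hand side that is feasible by construction. All names and parameter choices are assumptions for illustration.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def generate_recourse_problem(m, n, density=0.1, seed=0):
    """Random feasible instance of min{ q^T y : W y >= h, y >= 0 }.
    Feasibility is guaranteed by building h from a known point."""
    rng = np.random.default_rng(seed)
    W = sparse_random(m, n, density=density, random_state=seed,
                      data_rvs=rng.standard_normal).toarray()
    q = rng.random(n)                 # nonnegative cost vector
    y_feas = rng.random(n)            # known nonnegative point
    slack = rng.random(m)             # positive slack
    h = W @ y_feas - slack            # so W @ y_feas >= h holds
    return q, W, h, y_feas

q, W, h, y0 = generate_recourse_problem(m=50, n=30, density=0.1)
print(W.shape, np.all(W @ y0 >= h))
```

Building the right-hand side from a known feasible point is a standard trick for random test generators: it guarantees that every generated instance is solvable regardless of the drawn sparsity pattern.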

In Table 1, the second column indicates the size and density of the recourse matrix, the fourth column indicates the feasibility of the primal problem (4), and the next column indicates the error norm for this problem (the MATLAB code of this paper is available from the authors upon request).

5. Conclusion

In this paper, a smooth reformulation process based on the augmented Lagrangian algorithm was proposed for obtaining the normal solution of the recourse problem of a two-stage stochastic linear program. This smooth iterative process allows us to use a quadratically convergent Newton algorithm, which accelerates obtaining the normal solution.

Table 1 shows that the proposed algorithm has appropriate speed on most of the problems. This is observed in particular on recourse problems whose matrix of coefficients has noticeably more constraints than variables. Solving problems whose coefficient matrix is close to square (the numbers of constraints and variables are close to each other) is more challenging, and the algorithm needs more time for them.


Acknowledgments

The authors would like to thank the reviewers for their helpful comments.


References

  1. J. R. Birge and F. Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research, Springer, New York, NY, USA, 1997.
  2. G. B. Dantzig, "Linear programming under uncertainty," Management Science, vol. 1, pp. 197-206, 1955.
  3. J. L. Higle and S. Sen, "Stochastic decomposition: an algorithm for two-stage linear programs with recourse," Mathematics of Operations Research, vol. 16, no. 3, pp. 650-669, 1991.
  4. P. Kall and S. W. Wallace, Stochastic Programming, John Wiley & Sons, Chichester, UK, 1994.
  5. S. A. Tarim and I. A. Miguel, "A hybrid Benders' decomposition method for solving stochastic constraint programs with linear recourse," in Recent Advances in Constraints, vol. 3978 of Lecture Notes in Computer Science, pp. 133-148, Springer, Berlin, Germany, 2006.
  6. J. R. Birge, X. Chen, L. Qi, and Z. Wei, "A stochastic Newton method for stochastic quadratic programs with recourse," Tech. Rep., Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, Mich, USA, 1994.
  7. X. Chen, "A parallel BFGS-SQP method for stochastic linear programs," pp. 67-74, World Scientific, River Edge, NJ, USA, 1995.
  8. X. Chen, "Newton-type methods for stochastic programming," Mathematical and Computer Modelling, vol. 31, no. 10-12, pp. 89-98, 2000.
  9. X. J. Chen, L. Q. Qi, and R. S. Womersley, "Newton's method for quadratic stochastic programs with recourse," Journal of Computational and Applied Mathematics, vol. 60, no. 1-2, pp. 29-46, 1995.
  10. X. Chen and R. S. Womersley, "A parallel inexact Newton method for stochastic programs with recourse," Annals of Operations Research, vol. 64, pp. 113-141, 1996.
  11. X. Chen and R. S. Womersley, "Random test problems and parallel methods for quadratic programs and quadratic stochastic programs," Optimization Methods and Software, vol. 13, no. 4, pp. 275-306, 2000.
  12. X. Chen and Y. Ye, "On homotopy-smoothing methods for box-constrained variational inequalities," SIAM Journal on Control and Optimization, vol. 37, no. 2, pp. 589-616, 1999.
  13. Y.-J. Lee and O. L. Mangasarian, "SSVM: a smooth support vector machine for classification," Computational Optimization and Applications, vol. 20, no. 1, pp. 5-22, 2001.
  14. C. Kanzow, H. Qi, and L. Qi, "On the minimum norm solution of linear programs," Journal of Optimization Theory and Applications, vol. 116, no. 2, pp. 333-345, 2003.
  15. O. L. Mangasarian, "A Newton method for linear programming," Journal of Optimization Theory and Applications, vol. 121, no. 1, pp. 1-18, 2004.
  16. R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, USA, 1970.
  17. Yu. G. Evtushenko, A. I. Golikov, and N. Mollaverdy, "Augmented Lagrangian method for large-scale linear programming problems," Optimization Methods & Software, vol. 20, no. 4-5, pp. 515-524, 2005.
  18. C. H. Chen and O. L. Mangasarian, "Smoothing methods for convex inequalities and linear complementarity problems," Mathematical Programming, vol. 71, no. 1, pp. 51-69, 1995.
  19. C. Chen and O. L. Mangasarian, "A class of smoothing functions for nonlinear and mixed complementarity problems," Computational Optimization and Applications, vol. 5, no. 2, pp. 97-138, 1996.
  20. X. Chen, L. Qi, and D. Sun, "Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities," Mathematics of Computation, vol. 67, no. 222, pp. 519-540, 1998.