Abstract

The augmented Lagrangian method can be used to solve recourse problems and to obtain their normal solutions when solving two-stage stochastic linear programming problems. The augmented Lagrangian objective function of a stochastic linear problem is not twice differentiable, which precludes the use of a Newton method. In this paper, we apply smoothing techniques and a fast Newton-Armijo algorithm to solve an unconstrained smooth reformulation of this problem. Computational results and comparisons are given to show the effectiveness and speed of the algorithm.

1. Introduction

In stochastic programming, some of the data are random variables with a specified probability distribution [1]. Stochastic programming was first introduced by Dantzig, the designer of linear programming, in [2].

In this paper, we consider the following two-stage stochastic linear program (SLP) with recourse, which involves the calculation of an expectation over a discrete set of scenarios:
\[ \min_{x \in X} f(x) = c^{T}x + E[Q(x,\xi)], \tag{1} \]
where $X \subseteq \mathbb{R}^{n_{1}}$ is a polyhedral feasible set, $c \in \mathbb{R}^{n_{1}}$, and
\[ E[Q(x,\xi)] = \sum_{i=1}^{N} p_{i}\, Q(x,\xi_{i}) \tag{2} \]
denotes the expectation of the function $Q$, which depends on the random variable $\xi$ taking the values $\xi_{1},\dots,\xi_{N}$ with probabilities $p_{1},\dots,p_{N}$. The function $Q$ is defined as follows:
\[ Q(x,\xi) = \min_{y}\,\{\, q(\xi)^{T}y : W(\xi)\,y \ge h(\xi) - T(\xi)\,x \,\}, \tag{3} \]
where $y \in \mathbb{R}^{n}$, $q(\xi) \in \mathbb{R}^{n}$, and $T(\xi) \in \mathbb{R}^{m \times n_{1}}$. Also, in the problem (3), the vector of coefficients $q(\xi)$, the matrix of coefficients $W(\xi) \in \mathbb{R}^{m \times n}$, the demand vector $h(\xi) \in \mathbb{R}^{m}$, and the matrix $T(\xi)$ depend on the random vector $\xi$ with support space $\Xi$. The problems (1) and (3) are called the master and recourse problems of stochastic programming, respectively.

We assume that the problem (3) has a solution for each $x \in X$ and $\xi \in \Xi$.

In general, the recourse function is not differentiable everywhere. Therefore, traditional methods use nonsmooth optimization techniques [3-5]. However, in the last decade, smoothing methods have been proposed for the recourse function in the standard form of the recourse problem [6-11]. In this paper, we apply a smooth approximation technique to smooth the recourse function in the case where the recourse problem has inequality linear constraints. For more explanation, see Section 2. The approximated problem is based on the least 2-norm solution of the recourse problem. This paper considers the augmented Lagrangian method to obtain the least 2-norm solution (Section 3). For convenience, the Euclidean least 2-norm solution of a linear programming problem is called the normal solution. This effective method involves solving an unconstrained quadratic problem whose objective function is not twice differentiable. To apply a fast Newton method, we use a smoothing technique and replace the plus function by an accurate smooth approximation [12, 13]. In Section 4, the smoothing algorithm and the numerical results are presented. Also, concluding remarks are given in Section 5.

We now describe our notation. Let $x$ be a vector in $\mathbb{R}^{n}$. By $x_{+}$ we mean a vector in $\mathbb{R}^{n}$ whose $i$th entry is $x_{i}$ if $x_{i} > 0$ and equals $0$ if $x_{i} \le 0$. By $A^{T}$ we mean the transpose of the matrix $A$, and $\nabla f(x)$ is the gradient of $f$ at $x$. For $x \in \mathbb{R}^{n}$, $\|x\|$ and $\|x\|_{\infty}$ denote the 2-norm and the infinity norm, respectively.

2. Approximation of Recourse Function

As mentioned, the objective function of (1) is nondifferentiable. This drawback stems from the recourse function. In this section, we approximate it by a differentiable function.

Using the dual of the problem (3), the function $Q$ can be written as follows:
\[ Q(x,\xi) = \max_{u}\,\{\, (h(\xi)-T(\xi)x)^{T}u : W(\xi)^{T}u = q(\xi),\ u \ge 0 \,\}. \tag{4} \]
Unlike the linear recourse function, the quadratic recourse function is differentiable. Thus, in this paper, the approximation is based on the following quadratic problem with helpful properties:
\[ Q_{\varepsilon}(x,\xi) = \max_{u}\,\Big\{\, (h(\xi)-T(\xi)x)^{T}u - \frac{\varepsilon}{2}\|u\|^{2} : W(\xi)^{T}u = q(\xi),\ u \ge 0 \,\Big\}. \tag{5} \]
The next theorem shows that, for sufficiently small $\varepsilon$, the solution of this problem is the normal solution of the problem (4).

Theorem 1. For the functions $Q$ and $Q_{\varepsilon}$ introduced in (4) and (5), the following can be presented: (a) There exists $\bar{\varepsilon} > 0$ such that, for each $\varepsilon \in (0,\bar{\varepsilon}]$, the solution $u_{\varepsilon}(x,\xi)$ of the problem (5) is the normal solution of the problem (4). (b) For each $\varepsilon > 0$, the function $Q_{\varepsilon}(x,\xi)$ is differentiable with respect to $x$. (c) The gradient of the function $Q_{\varepsilon}$ at the point $x$ is
\[ \nabla_{x} Q_{\varepsilon}(x,\xi) = -\,T(\xi)^{T} u_{\varepsilon}(x,\xi), \tag{6} \]
in which $u_{\varepsilon}(x,\xi)$ is the solution of the problem (5).

Proof. To prove (a), refer to [14, 15].
Also, (b) and (c) can be easily proved by considering that the function $Q_{\varepsilon}$ is the conjugate of a strongly convex function, together with Theorems 26.3 and 23.5 in [16].
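For a fixed scenario, the regularized dual (5) is a standard convex quadratic program, so it can be solved with an off-the-shelf QP solver. The following MATLAB sketch is our illustration, assuming the reconstructed data $W$, $q$, and $\bar{h} = h(\xi) - T(\xi)x$ above; the helper name regularized_dual is hypothetical, not the authors' code.

% Minimal sketch: solve (5) with quadprog for a fixed scenario.
% Assumes W (m-by-n), q (n-by-1), hbar = h - T*x (m-by-1) are given.
function u = regularized_dual(W, q, hbar, eps)
    m = size(W, 1);
    H = eps * speye(m);              % quadratic term (eps/2)*||u||^2
    f = -hbar;                       % quadprog minimizes, so negate
    Aeq = W'; beq = q;               % constraints W'*u = q
    lb = zeros(m, 1);                % u >= 0
    opts = optimset('Display', 'off');
    u = quadprog(H, f, [], [], Aeq, beq, lb, [], [], opts);
end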

Using the approximated recourse function $Q_{\varepsilon}$, we can define a differentiable approximation of the objective function of (1):
\[ f_{\varepsilon}(x) = c^{T}x + \sum_{i=1}^{N} p_{i}\, Q_{\varepsilon}(x,\xi_{i}). \]
By (6), the gradient of the above function exists and is obtained by
\[ \nabla f_{\varepsilon}(x) = c - \sum_{i=1}^{N} p_{i}\, T(\xi_{i})^{T} u_{\varepsilon}(x,\xi_{i}). \]
This approximation paves the way to apply optimization algorithms to the master problem (1) in which the objective function is substituted by $f_{\varepsilon}$:
\[ \min_{x \in X} f_{\varepsilon}(x). \tag{11} \]
In [7], an SLP problem with inequality constraints in the master problem and equality constraints in the recourse problem is considered. Also, in Theorem 2.3 of [7], it is shown that a solution of the approximated problem is a good approximation of a solution of the master problem. Here we can state a similar theorem for the problem (1) by using a technique similar to the proof of Theorem 2.3 in [7].

Theorem 2. Consider the problem (1). Then, for any $\varepsilon > 0$, there exists a $\sigma \ge 0$ such that, for any $x \in X$,
\[ 0 \le f(x) - f_{\varepsilon}(x) \le \varepsilon\,\sigma, \]
where $\sigma$ is defined as follows:
\[ \sigma = \sup_{x \in X}\ \frac{1}{2}\sum_{i=1}^{N} p_{i}\,\|u(x,\xi_{i})\|^{2}, \]
with $u(x,\xi_{i})$ the normal solution of (4). Let $x^{*}$ be a solution of (1) and $x_{\varepsilon}^{*}$ a solution of (11). Then, there exists a $\bar{\sigma} \ge 0$ such that, for any $\varepsilon > 0$,
\[ |f(x^{*}) - f_{\varepsilon}(x_{\varepsilon}^{*})| \le \varepsilon\,\bar{\sigma}. \]
Further, one assumes that $f$ or $f_{\varepsilon}$ are strongly convex on $X$ with modulus $k > 0$. Then,
\[ \|x^{*} - x_{\varepsilon}^{*}\| \le \sqrt{\frac{2\,\varepsilon\,\bar{\sigma}}{k}}. \]

According to Theorem 1, to obtain the gradient of the function $f_{\varepsilon}$ at each iteration, we need the normal solution of the linear programming problem (4). In this paper, the augmented Lagrangian method [17] is used for this purpose.
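To illustrate how the pieces fit together, the following sketch (ours; the cell-array scenario storage and the helper regularized_dual from Section 2 are illustrative assumptions) evaluates $f_{\varepsilon}$ and $\nabla f_{\varepsilon}$ by looping over the scenarios.

% Minimal sketch: evaluate f_eps and its gradient at x by summing
% the regularized recourse values over N discrete scenarios.
function [f, g] = approx_objective(x, c, p, q, W, h, T, eps)
    % p(i): probability; q{i}, W{i}, h{i}, T{i}: scenario data
    N = numel(p);
    f = c' * x;
    g = c;
    for i = 1:N
        hbar = h{i} - T{i} * x;
        u = regularized_dual(W{i}, q{i}, hbar, eps);   % solves (5)
        f = f + p(i) * (hbar' * u - (eps/2) * (u' * u));
        g = g - p(i) * (T{i}' * u);                    % gradient via (6)
    end
end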

3. Smooth Approximation and Augmented Lagrangian Method

In the augmented Lagrangian method, an unconstrained maximization problem is solved, which gives the projection of a point onto the solution set of the problem (4).

Assume that $w$ is an arbitrary vector. Consider the problem of finding the least 2-norm projection of $w$ onto the solution set $U^{*}$ of the problem (4):
\[ \min_{u}\ \Big\{\ \tfrac{1}{2}\|u - w\|^{2} : u \in U^{*} \Big\}. \tag{16} \]
In this problem, the vector $x$ and the random variable $\xi$ are constant; therefore, for simplicity, we write $W = W(\xi)$ and $q = q(\xi)$, and the vector $\bar{h}$ is defined in a way that $\bar{h} = h(\xi) - T(\xi)x$.

Considering that the objective function of the problem (16) is strictly convex, its solution is unique. Let us introduce the Lagrangian function for the problem (16) as follows:
\[ L(u,y) = \tfrac{1}{2}\|u - w\|^{2} + y^{T}(q - W^{T}u) + \alpha\,\big(Q^{*} - \bar{h}^{T}u\big), \tag{17} \]
where $y$ is the vector of Lagrangian multipliers of the constraint $W^{T}u = q$, the multiplier of the optimality constraint $\bar{h}^{T}u \ge Q^{*} = Q(x,\xi)$ is fixed at a constant $\alpha > 0$, and $w$, $Q^{*}$ are constant values. Therefore, the dual problem of (16) becomes
\[ \max_{y}\ \min_{u \ge 0}\ L(u,y). \tag{18} \]
By solving the inner minimization of the problem (18), the dual of the problem (16) is obtained:
\[ \max_{y}\ S(y,\alpha), \tag{19} \]
where the dual function is
\[ S(y,\alpha) = q^{T}y + \alpha Q^{*} + \tfrac{1}{2}\|w\|^{2} - \tfrac{1}{2}\big\|\,(w + Wy + \alpha\bar{h})_{+}\big\|^{2}. \tag{20} \]
The following theorem states that, if $\alpha$ is sufficiently large, solving the maximization in (19) gives the solution of the problem (16).

Theorem 3 (see [17]). Consider the following maximization problem, which differs from (19)-(20) only by an additive constant:
\[ \max_{y}\ S(y,\alpha) = q^{T}y - \tfrac{1}{2}\big\|\,(w + Wy + \alpha\bar{h})_{+}\big\|^{2}, \tag{21} \]
in which $w$, $\alpha$, and $\bar{h}$ are constants, and the function $u(\cdot)$ is introduced as follows:
\[ u(y) = (w + Wy + \alpha\bar{h})_{+}. \tag{22} \]
Also, assume that the solution set $U^{*}$ is nonempty, and the rank of the submatrix of $W$ formed by the rows corresponding to the nonzero components of the solution of (16) is $n$. In such a case, there is an $\alpha^{*} \ge 0$ such that, for all $\alpha \ge \alpha^{*}$, $u(y(\alpha))$ is the unique and exact solution of the problem (16), where $y(\alpha)$ is the point obtained from solving the problem (21).

Also, under special conditions, the solution of the problem (3) can be obtained as well, and the following theorem expresses this issue.

Theorem 4 (see [17]). Assume that the solution set of the problem (3) is nonempty. For each $\alpha \ge \alpha^{*}$ and each $w$, the vector $y(\alpha)/\alpha$ is an exact solution of the linear programming problem (3), where $y(\alpha)$ is the solution of the problem (21).

According to the theorems mentioned above, the augmented Lagrangian method yields the following iterative process for solving the problem (16):
\[ u_{k+1} = \big(u_{k} + W y_{k} + \alpha\bar{h}\big)_{+}, \qquad y_{k} \in \arg\max_{y}\Big\{\, q^{T}y - \tfrac{1}{2}\big\|\,(u_{k} + Wy + \alpha\bar{h})_{+}\big\|^{2} \Big\}, \tag{23} \]
where $u_{0}$ is an arbitrary vector; here we use the zero vector as the initial vector in order to obtain the normal solution of the problem (4).
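A minimal MATLAB sketch of the outer process (23), under the reconstruction above, follows; the inner maximization is delegated to a hypothetical routine inner_max (one possible realization is given after Algorithm 1 in Section 4).

% Minimal sketch of the augmented Lagrangian iteration (23).
% Assumes W (m-by-n), q, hbar as above; inner_max solves (21)
% with the current iterate u_k in place of w.
function u = aug_lagrangian(W, q, hbar, alpha, tol)
    m = size(W, 1);
    u = zeros(m, 1);                  % u_0 = 0 gives the normal solution
    while true
        y = inner_max(W, q, hbar, alpha, u);      % solve (21)
        u_new = max(u + W*y + alpha*hbar, 0);     % plus-function update
        if norm(u_new - u) <= tol, break; end
        u = u_new;
    end
    u = u_new;
end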

We note that the problem (23) is a concave problem whose objective function is piecewise quadratic and not twice differentiable. Applying the smoothing techniques of [18, 19] and replacing the plus function $(\cdot)_{+}$ by a smooth approximation, we transform this problem into a twice continuously differentiable problem.

Chen and Mangasarian [19] introduced a family of smoothing functions, which is built as follows. Let $d(x)$ be a piecewise continuous density function satisfying
\[ d(x) \ge 0, \qquad \int_{-\infty}^{+\infty} d(x)\,dx = 1, \qquad \int_{-\infty}^{+\infty} |x|\,d(x)\,dx < \infty. \]
It is obvious that the derivative of the plus function is the step function, that is, $(x_{+})' = x_{*}$, where the step function $x_{*}$ is defined as 1 if $x > 0$ and equals 0 if $x \le 0$. Therefore, a smoothing approximation function of the plus function is defined by
\[ p(x,\beta) = \int_{-\infty}^{x} s(t,\beta)\,dt, \]
where $s(x,\beta)$ is a smoothing approximation function of the step function and is defined as
\[ s(x,\beta) = \int_{-\infty}^{\beta x} d(t)\,dt. \]
By choosing the specific density $d(x) = e^{-x}/(1+e^{-x})^{2}$, the following approximations are obtained:
\[ s(x,\beta) = \frac{1}{1 + e^{-\beta x}}, \qquad p(x,\beta) = x + \frac{1}{\beta}\log\big(1 + e^{-\beta x}\big). \tag{28} \]
The function $p(\cdot,\beta)$ with a smoothing parameter $\beta > 0$ is used here to replace the plus function in (21)-(23) and obtain a smooth reformulation:
\[ S_{\beta}(y,\alpha) = q^{T}y - \tfrac{1}{2}\big\|\,p(u_{k} + Wy + \alpha\bar{h},\,\beta)\,\big\|^{2}. \tag{29} \]
Therefore, we have the following iterative process instead of (23):
\[ u_{k+1} = p\big(u_{k} + W y_{k} + \alpha\bar{h},\,\beta\big), \qquad y_{k} \in \arg\max_{y}\ S_{\beta}(y,\alpha). \tag{30} \]
It can be shown that, as the smoothing parameter $\beta$ approaches infinity, any solution of the smooth problem (29) approaches a solution of the original problem (21) (see [19]).
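When implementing $p(x,\beta)$ from (28), the term $e^{-\beta x}$ can overflow for large negative $x$. A numerically stable rewriting, which we add here as an implementation sketch using the identity $p(x,\beta) = \max(x,0) + \log(1 + e^{-\beta|x|})/\beta$, is:

% Numerically stable evaluation of the smooth plus function (28).
% Uses p(x,beta) = max(x,0) + log1p(exp(-beta*abs(x)))/beta for all x.
function y = smooth_plus(x, beta)
    y = max(x, 0) + log1p(exp(-beta * abs(x))) / beta;
end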

We begin with a simple lemma that bounds the squared difference between the plus function $x_{+}$ and its smooth approximation $p(x,\beta)$.

Lemma 5 (see [13]). For $x \in \mathbb{R}$ and $|x| < \rho$,
\[ p(x,\beta)^{2} - (x_{+})^{2} \le \left(\frac{\log 2}{\beta}\right)^{2} + \frac{2\rho\log 2}{\beta}, \]
where $p$ is the function of (28) with smoothing parameter $\beta > 0$.
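As a quick numerical illustration (our addition, with arbitrary sample values for $\beta$ and $\rho$), the bound of Lemma 5 can be checked in MATLAB:

% Illustrative numerical check of the Lemma 5 bound.
beta = 10; rho = 1;
x = linspace(-rho, rho, 100001);
p = x + log(1 + exp(-beta*x))/beta;      % smooth plus function (28)
gap = p.^2 - max(x, 0).^2;
bound = (log(2)/beta)^2 + 2*rho*log(2)/beta;
fprintf('max gap = %g <= bound = %g\n', max(gap), bound);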

Theorem 6. Consider the problem (21) and its smooth reformulation
\[ \max_{y}\ S_{\beta}(y,\alpha) = q^{T}y - \tfrac{1}{2}\big\|\,p(w + Wy + \alpha\bar{h},\,\beta)\,\big\|^{2}. \tag{32} \]
Then, for any $y$ and $\beta > 0$,
\[ 0 \le S(y,\alpha) - S_{\beta}(y,\alpha) \le \frac{m}{2}\left(\left(\frac{\log 2}{\beta}\right)^{2} + \frac{2\rho\log 2}{\beta}\right), \]
where $\rho$ is defined as follows:
\[ \rho = \rho(y) = \|w + Wy + \alpha\bar{h}\|_{\infty}. \]
Let $y^{*}$ be a solution of (21) and $\bar{y}$ a solution of (32). Then, with $\bar{\rho} = \max\{\rho(y^{*}), \rho(\bar{y})\}$,
\[ 0 \le S(y^{*},\alpha) - S_{\beta}(\bar{y},\alpha) \le \frac{m}{2}\left(\left(\frac{\log 2}{\beta}\right)^{2} + \frac{2\bar{\rho}\log 2}{\beta}\right). \]
Further, one assumes that $W$ is a full rank matrix. Then,
\[ \|y^{*} - \bar{y}\|^{2} \le \frac{2m}{k}\left(\left(\frac{\log 2}{\beta}\right)^{2} + \frac{2\bar{\rho}\log 2}{\beta}\right), \]
where $k > 0$ is the modulus of strong concavity of $S_{\beta}$.

Proof. For any $y$ and $\beta > 0$, set $z = w + Wy + \alpha\bar{h}$. Hence
\[ S(y,\alpha) - S_{\beta}(y,\alpha) = \tfrac{1}{2}\big(\|p(z,\beta)\|^{2} - \|z_{+}\|^{2}\big) = \tfrac{1}{2}\sum_{i=1}^{m}\big(p(z_{i},\beta)^{2} - (z_{i})_{+}^{2}\big) \ge 0. \]
By using Lemma 5, we get that
\[ p(z_{i},\beta)^{2} - (z_{i})_{+}^{2} \le \left(\frac{\log 2}{\beta}\right)^{2} + \frac{2\rho\log 2}{\beta}, \qquad i = 1,\dots,m. \]
From the above inequality, we have
\[ 0 \le S(y,\alpha) - S_{\beta}(y,\alpha) \le \frac{m}{2}\left(\left(\frac{\log 2}{\beta}\right)^{2} + \frac{2\rho\log 2}{\beta}\right). \]
Therefore, since $S_{\beta}(\bar{y},\alpha) \le S(\bar{y},\alpha) \le S(y^{*},\alpha)$ and $S_{\beta}(\bar{y},\alpha) \ge S_{\beta}(y^{*},\alpha) \ge S(y^{*},\alpha) - \frac{m}{2}\big((\log 2/\beta)^{2} + 2\bar{\rho}\log 2/\beta\big)$, the second assertion follows. Suppose that $W$ is full rank. Then the Hessian of $S_{\beta}$ is negative definite, and $S_{\beta}$ is strongly concave on bounded sets. By the definition of strong concavity, for any $y_{1}$, $y_{2}$,
\[ S_{\beta}\left(\frac{y_{1}+y_{2}}{2},\alpha\right) \ge \frac{1}{2}S_{\beta}(y_{1},\alpha) + \frac{1}{2}S_{\beta}(y_{2},\alpha) + \frac{k}{8}\|y_{1}-y_{2}\|^{2}. \]
Let $y_{1} = y^{*}$ and $y_{2} = \bar{y}$; then, since $\bar{y}$ maximizes $S_{\beta}$,
\[ \|y^{*} - \bar{y}\|^{2} \le \frac{4}{k}\big(S_{\beta}(\bar{y},\alpha) - S_{\beta}(y^{*},\alpha)\big) \le \frac{2m}{k}\left(\left(\frac{\log 2}{\beta}\right)^{2} + \frac{2\bar{\rho}\log 2}{\beta}\right). \]

The twice continuous differentiability of the objective function of the problem (32) allows us to use a quadratically convergent Newton algorithm with an Armijo stepsize [20], which makes the algorithm globally convergent.

4. Numerical Results and Algorithm

In each iteration of the process (30), one smooth, concave, unconstrained maximization problem is solved. For solving it, a fast Newton method can be used.

In the algorithm, the Hessian matrix may be singular; thus, we use a modified Newton direction. The direction at each inner iteration for solving (30) is obtained through the following relation:
\[ d_{i} = -\big(\nabla^{2}S_{\beta}(y_{i},\alpha) - \delta I\big)^{-1}\nabla S_{\beta}(y_{i},\alpha), \tag{44} \]
where $\delta$ is a small positive number, $I$ is the identity matrix of order $n$, and the suitable step length $\lambda_{i}$ is determined by the Armijo rule (see Algorithm 1).

Choose $u_{0}$, $\beta > 0$, let $tol > 0$ be the error tolerance, and let $\delta$ be a small positive
number.
$k := 0$;
While $\|u_{k+1} - u_{k}\| > tol$
Choose a starting point $y_{0}$ and set $i := 0$.
While $\|\nabla S_{\beta}(y_{i},\alpha)\| > tol$
Choose $\lambda_{i} = $ max$\{1, \frac{1}{2}, \frac{1}{4}, \dots\}$ such that
$S_{\beta}(y_{i} + \lambda_{i}d_{i},\alpha) - S_{\beta}(y_{i},\alpha) \ge \mu\,\lambda_{i}\,\nabla S_{\beta}(y_{i},\alpha)^{T}d_{i}$,
where $d_{i} = -(\nabla^{2}S_{\beta}(y_{i},\alpha) - \delta I)^{-1}\nabla S_{\beta}(y_{i},\alpha)$,
$\mu \in (0, \frac{1}{2})$ be a constant,
and $\nabla S_{\beta}$ is the gradient of $S_{\beta}$ with respect to $y$.
Put $y_{i+1} = y_{i} + \lambda_{i}d_{i}$ and $i := i + 1$.
end
Set $y_{k} = y_{i}$, $u_{k+1} = p(u_{k} + Wy_{k} + \alpha\bar{h},\,\beta)$, and $k := k + 1$.
end
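To make Algorithm 1 concrete, the following MATLAB sketch implements its inner Newton-Armijo loop for the reconstructed smooth objective (29); the routine name inner_max, the parameter values, and the helper smooth_plus above are our own illustrative choices, not the authors' code.

% Minimal sketch: Newton-Armijo maximization of (29) for fixed u_k.
% inner_max(W, q, hbar, alpha, u) returns y maximizing
%   S(y) = q'*y - 0.5*||p(u + W*y + alpha*hbar, beta)||^2.
function y = inner_max(W, q, hbar, alpha, u)
    beta = 1e2; delta = 1e-8; tol = 1e-8; mu = 1e-4;
    [m, n] = size(W);
    y = zeros(n, 1);
    while true
        z = u + W*y + alpha*hbar;
        pz = smooth_plus(z, beta);            % p(z, beta)
        s = 1 ./ (1 + exp(-beta*z));          % p'(z) = sigmoid
        g = q - W' * (pz .* s);               % gradient of S
        if norm(g) <= tol, break; end
        % Hessian of S: -W' * diag(s.^2 + pz.*beta.*s.*(1-s)) * W
        D = s.^2 + pz .* (beta * s .* (1 - s));
        H = -W' * bsxfun(@times, D, W);
        d = -(H - delta*eye(n)) \ g;          % modified Newton direction (44)
        lam = 1;
        S0 = q'*y - 0.5*(pz'*pz);
        while q'*(y+lam*d) - 0.5*sum(smooth_plus(u+W*(y+lam*d)+alpha*hbar,beta).^2) ...
                < S0 + mu*lam*(g'*d)
            lam = lam/2;                      % Armijo backtracking
        end
        y = y + lam*d;
    end
end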

The proposed algorithm was applied to solve several recourse problems. Table 1 compares this algorithm with the CPLEX v. 12.1 solver on the quadratic convex programming problems (5). As is evident from Table 1, most of the recourse problems could be solved more successfully by the algorithm based on the smooth augmented Lagrangian Newton method (SALN) than by the CPLEX package (for illustration, see problems 21-25 in Table 1). The algorithm attains high accuracy and the solution with minimum norm in suitable time (see the last column of Table 1). Also, we can see that CPLEX outperforms the proposed algorithm on some recourse problems in which the matrices are approximately square (e.g., lines 5-12).

A test generator was used to produce random solvable recourse problems. These problems are generated using the MATLAB code shown in Algorithm 2.

Sgen: Generate random solvable recourse problems:
Input: m, n, d(ensity); Output: W, q, xi;
m=input('Enter m: ');
n=input('Enter n: ');
d=input('Enter d: ');
pl=inline('(abs(x)+x)/2');
W=sprand(m,n,d); W=100*(W-0.5*spones(W));
z=sparse(10*pl(rand(m,1)));                 % dual solution of (4): z >= 0
q=W'*z;                                     % makes z feasible: W'*z = q
y=spdiags(sign(pl(rand(n,1)-rand(n,1))),0,n,n)...
  *5*(rand(n,1)-rand(n,1));                 % sparse primal solution of (3)
xi=W*y-10*spdiags(ones(m,1)-sign(z),0,m,m)*ones(m,1);  % slack only where z_i = 0
format short e; nnz(W)/prod(size(W))        % report actual density

The algorithm considered for solving several recourse problems was run on a computer with a 2.5 GHz dual-core CPU and 4 GB of memory in the MATLAB 7.8 programming environment. Also, in the generated problems, the recourse matrix $W$ is a sparse matrix with density $d$. The two constants appearing in the algorithm and in (44) were selected as 1 and a small positive number, respectively.

In Table 1, the second column indicates the size and density of the matrix $W$, the fourth column indicates the feasibility of the primal problem (4), and the next column indicates the error norm of this problem (the MATLAB code of this paper is available from the authors upon request).

5. Conclusion

In this paper, a smooth reformulation process based on the augmented Lagrangian algorithm was proposed for obtaining the normal solution of the recourse problem of a two-stage stochastic linear program. This smooth iterative process allows us to use a quadratically convergent Newton algorithm, which accelerates obtaining the normal solution.

Table 1 shows that the proposed algorithm has appropriate speed on most of the problems. This can be observed, specifically, in recourse problems whose coefficient matrix has noticeably more constraints than variables. Solving the problems whose coefficient matrix is nearly square (the numbers of constraints and variables close to each other) is more challenging, and the algorithm needs more time to solve them.

Acknowledgment

The authors would like to thank the reviewers for their helpful comments.