Abstract

A two-stage stochastic quadratic programming problem with inequality constraints is considered. By quasi-Monte-Carlo-based approximations of the objective function and its first derivative, a feasible sequential system of linear equations method is proposed. A new technique to update the active constraint set is suggested. We show that the sequence generated by the proposed algorithm converges globally to a Karush-Kuhn-Tucker (KKT) point of the problem. In particular, the convergence rate is locally superlinear under some additional conditions.

1. Introduction

Stochastic programming is a framework for modeling optimization problems that involve uncertainty. It has applications in a broad range of areas, from finance and transportation to energy optimization [1, 2]. In the field of industrial production, stochastic programming is also widely used in stochastic control [3–7].

We consider the following two-stage stochastic quadratic programming problem: where and are twice continuously differentiable, is symmetric positive definite, , , and are fixed matrices or vectors, and are random vectors, and is a continuously differentiable probability density function.

Let and . We denote the active constraint by , where . Throughout the paper, the following hypotheses hold.

Assumption 1. and are bounded.

Assumption 2. At every , the vectors , are linearly independent.

A basic difficulty in solving stochastic optimization problem (1a), (1b), (1c), and (1d) is that the objective function, which involves uncertainty, can be complicated or difficult to compute even approximately. The aim of this paper is to give computational approaches based on quasi-Monte-Carlo sampling techniques. To solve stochastic programming problems, one usually resorts to deterministic optimization methods. This idea is a natural one and has been used by many authors over the years [8–12]. A vast literature also applies deterministic methods to stochastic programming problems that involve quadratic programming. The extended linear quadratic programming (ELQP) model was introduced by Rockafellar and Wets [13, 14]. Qi and Womersley [15] proposed a sequential quadratic programming (SQP) algorithm for ELQP problems. To solve ELQP, Chen et al. [16] suggested a Newton-type approach and showed that this method is globally convergent and locally superlinearly convergent. At the same time, Birge et al. [17] investigated a stochastic Newton method for ELQP with inequality constraints. Global convergence and local superlinear convergence of the method were established.

In order to get a numerical solution of (1a), (1b), (1c), and (1d) based on quasi-Monte-Carlo techniques, consider the following approximation of (1c): where and is generated by lattice rules [18, 19]. Consequently, problem (1a), (1b), (1c), and (1d) is approximated by . Since is bounded, it follows from [17] that is twice continuously differentiable. Moreover, from [16], the approximated objective function has the following continuous first derivative in : where .
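The paper's own formulas for the lattice points and the sample-average approximation were lost from this text, so the following is only a minimal illustrative sketch of the underlying idea: a rank-1 lattice rule generates quasi-Monte-Carlo points on the unit hypercube, and averaging the integrand over those points approximates the expectation. The generating vector `z` below is an arbitrary illustrative choice, not an optimized one.

```python
# Sketch (assumed construction): rank-1 lattice points and a QMC
# sample-average approximation of an integral over [0,1)^d.
import numpy as np

def rank1_lattice(n, z):
    """Points x_i = frac(i * z / n), i = 0..n-1, for generating vector z."""
    i = np.arange(n).reshape(-1, 1)
    return np.mod(i * np.asarray(z) / n, 1.0)

def qmc_mean(h, n, z):
    """Quasi-Monte-Carlo estimate of the integral of h over [0,1)^d."""
    pts = rank1_lattice(n, z)
    return np.mean([h(p) for p in pts])

# Example: estimate the integral of h(u) = u1 * u2 over [0,1)^2
# (exact value 1/4) with n = 1024 points and generator z = (1, 433).
est = qmc_mean(lambda u: u[0] * u[1], 1024, [1, 433])
```

As the number of points grows (the sequence in the text), the estimate converges to the true integral at the quasi-Monte-Carlo rate cited from [20, 21].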

Let be an integer sequence satisfying and as . Generate observations , on the unit hypercube according to an integration rule. Here, we choose quasi-Monte-Carlo sequences [20]. Since and are compact, it follows from [20] (or [21]) that there exists a constant such that, for any , . This paper addresses a feasible sequential system of linear equations (SSLE) approach to solving (1a), (1b), (1c), and (1d). This study is strongly motivated by the recent successful development of various SSLE algorithms for deterministic optimization problems and of quasi-Monte-Carlo simulation techniques. SSLE methods for deterministic optimization problems have been proposed by many authors over the years; the interested reader is referred to [22–26] for excellent surveys. Our algorithm has the following interesting features.(a)Without assuming isolatedness of the accumulation point or boundedness of the Lagrange multiplier approximation sequence, every accumulation point of the iterative sequence generated by the proposed algorithm is a KKT point of problem (1a), (1b), (1c), and (1d).(b)At each iteration, we only need to solve four symmetric systems of linear equations with a common coefficient matrix and a simple structure. The last of these systems needs to be solved only to achieve a local one-step superlinear convergence rate.(c)To obtain the “working set,” the multiplier function must first be computed in [27]; a multiplier function is also used by Facchinei et al. [28]. Our algorithm instead provides a new technique to update the “working set” that does not require calculating the multiplier function.(d)To find a search direction, a quadratic programming subproblem must be solved at each iteration in [17], so the Hessian of the objective function must be approximated by a Monte Carlo (or quasi-Monte-Carlo) rule; for SSLE methods this approximation is unnecessary. 
Our algorithm solves four linear systems of equations involving only the first-order derivative of the objective function.
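Because the four linear systems share one coefficient matrix, a single factorization can serve all four right-hand sides. The sketch below illustrates this design choice with placeholder data (the matrix and right-hand sides are not the paper's): stacking the right-hand sides as columns lets one LAPACK call factorize once and back-substitute for each system.

```python
# Sketch: solve several systems A d = b_i that share the coefficient
# matrix A using a single factorization.  A and the b_i are illustrative.
import numpy as np

def solve_shared(A, rhs_list):
    """Solve A d = b for each b in rhs_list with one factorization of A."""
    B = np.column_stack(rhs_list)      # each right-hand side is a column
    D = np.linalg.solve(A, B)          # one LU factorization, all solves
    return [D[:, j] for j in range(D.shape[1])]

# Four right-hand sides, one symmetric 3x3 coefficient matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
rhs = [np.ones(3), np.arange(3.0), np.array([1.0, 0.0, -1.0]), np.zeros(3)]
d0, d1, d2, d3 = solve_shared(A, rhs)
```

This mirrors feature (b) above: the dominant cost per iteration is one factorization, not four.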

The remainder of this paper is organized as follows. Section 2 presents the algorithm for (1a), (1b), (1c), and (1d) and shows that the proposed algorithm is well defined. In Section 3 we discuss the convergence of the algorithm in detail. We proceed in Section 4 by showing the local superlinear convergence. Finally, our conclusions are presented in Section 5.

2. Algorithm

The Lagrangian function associated with problem (1a), (1b), (1c), and (1d) is defined by . A point in is called a KKT point of problem (1a), (1b), (1c), and (1d) if there exists such that the following KKT conditions hold: where . For , let , where is a nonnegative parameter and with . From the definition of , if and only if satisfies KKT conditions (8). In order to estimate the active constraint set in our algorithm, the estimate of the set is defined by , where and is a positive parameter in . Since and are continuously differentiable, it follows from Theorem 3.15 in [28] that is nonnegative and continuous on . Hence, from (6) and the continuous differentiability of , we have that , as , .
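The displayed formulas defining the active-set estimate were lost from this text, so the following is a hypothetical sketch in the spirit of the identification technique the paragraph describes: a constraint is placed in the working set when its value lies above a threshold built from a nonnegative residual that vanishes at KKT points. The function name, the residual `rho`, and the exponent `gamma` are all illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical sketch of a working-set ("active set") estimate.  With
# feasibility written as g_i(x) <= 0, constraint i is flagged as nearly
# active when g_i(x) >= -rho**gamma, where rho >= 0 is a KKT-residual-like
# quantity that tends to 0 near a solution, so the threshold tightens.
def working_set(g_vals, rho, gamma=0.5):
    """Indices i with g_i(x) >= -rho**gamma."""
    threshold = -(rho ** gamma)
    return [i for i, gi in enumerate(g_vals) if gi >= threshold]

# Example: with residual rho = 0.01 and gamma = 0.5 the threshold is -0.1,
# so constraint values -0.05 and 0.0 are flagged while -0.5 is not.
ws = working_set([-0.5, -0.05, 0.0], rho=0.01)
```

The key property, as in the text, is that the estimate shrinks onto the true active set as the residual tends to zero.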

For simplicity, let , and , where . We now formally state our algorithm.

Algorithm 3.
(S.0) (Initialization)Parameters: , , , , , , , ;Data: , , , , symmetric positive definite matrix , , and , for every . Sequence satisfies , for all , and ;Choose , such that , and ;Generate observations by quasi-Monte-Carlo rules and calculate ;Set .(S.1) (Choose Working Set)(S1.1)If , then set .(S1.2)Set , .(S1.3)Calculate and .(S1.4)If , then set , , and go to (S1.3).(S1.5)Set , .(S.2) (Computation of Search Direction)If , then run steps (S2.1)–(S2.4); otherwise go to (S2.5).(S2.1)Set .(S2.2)Generate observations by quasi-Monte-Carlo rules and calculate .(S2.3)Set .(S2.4)If , then set and go to (S2.2); otherwise set , , , and go to (S.3).(S2.5)Set .(S2.6)Generate observations by quasi-Monte-Carlo rules and calculate .(S2.7)Compute by solving the system of linear equations in . Set .(S2.8)Let , where .Compute by solving the system of linear equations in . If , then set and go to (S2.6); otherwise set , and .(S2.9)Compute by solving the system of linear equations in , where .(S2.10)Compute by solving the system of linear equations in , where . (S.3) If , then set .
Choose , the first number in the sequence satisfying (S.4) Compute such that .
Set , and . Generate a new symmetric positive definite matrix . Set and go to (S.1).
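Step (S.3) chooses the step length as the first member of a geometric sequence satisfying conditions (23) and (24). Since those displayed inequalities were lost from this text, the sketch below substitutes a standard Armijo sufficient-decrease test combined with strict feasibility of the trial point; the parameters `alpha` and `beta` and the problem data are illustrative assumptions.

```python
# Sketch of a backtracking line search in the spirit of (S.3): try
# t = 1, beta, beta**2, ... and accept the first t whose trial point is
# strictly feasible and satisfies an Armijo-type decrease condition.
def backtracking(f, g_list, x, d, grad_f, alpha=1e-4, beta=0.5, max_tries=50):
    fx = f(x)
    slope = sum(gi * di for gi, di in zip(grad_f, d))  # directional derivative
    t = 1.0
    for _ in range(max_tries):
        trial = [xi + t * di for xi, di in zip(x, d)]
        feasible = all(g(trial) < 0 for g in g_list)   # strict feasibility
        if feasible and f(trial) <= fx + alpha * t * slope:
            return t, trial
        t *= beta
    raise RuntimeError("line search failed")

# Example: minimize f(x) = x1**2 + x2**2 subject to g(x) = x1 - 1 < 0,
# starting from x = (0.5, 0.5) along the steepest-descent direction.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: x[0] - 1.0
t, x_new = backtracking(f, [g], [0.5, 0.5], [-1.0, -1.0], [1.0, 1.0])
```

Because the iterates stay strictly feasible, this matches the "feasible" character of the SSLE method described above.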

Remarks(a)The main purpose of (S.1) is to generate a working set and ensure that the matrix is nonsingular, for every . Hence, is well defined for all . The calculation of the set differs from the one proposed in [27]: we use the solution of system (18) as a substitute for the multiplier function proposed in [27]. Moreover, is also uniformly bounded; details are given subsequently.(b)By the construction of the algorithm, four linear systems must be solved at each iteration. To ensure that the iterate sequence converges globally to a KKT point of (1a), (1b), (1c), and (1d), we only need to solve the first three linear systems (16), (18), and (19). The linear systems (16) and (19) play important roles in proving the global convergence, while the main aim of the linear system (21) is to guarantee the one-step superlinear convergence rate of the algorithm under mild conditions.(c)It is not difficult to show that there exists , the first number of the sequence , which satisfies the line search (23) and (24). In Section 4 we will show that , for sufficiently large ; hence, the Maratos effect is avoided.(d)In numerical experiments is usually updated by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula [27, 29]. Algorithm 3 stops at any iteration when the following termination criterion is met, with tolerance and maximum number of iterations :
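Remark (d) mentions updating the matrix by the BFGS formula. As a concrete sketch under stated assumptions: with step `s` and gradient change `y`, the standard BFGS update is B+ = B - (B s sᵀ B)/(sᵀ B s) + (y yᵀ)/(sᵀ y), and a common safeguard (skipping the update when sᵀy is too small) helps keep the matrices uniformly positive definite as Assumption 4 requires. The safeguard threshold here is an illustrative choice, not the paper's rule.

```python
# Sketch of the BFGS update from remark (d), with a skip safeguard.
import numpy as np

def bfgs_update(B, s, y, eps=1e-10):
    sy = s @ y
    if sy <= eps * np.linalg.norm(s) * np.linalg.norm(y):
        return B                      # safeguard: skip the update
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

# Example: one update of the identity on a quadratic with Hessian diag(2, 4).
B = np.eye(2)
s = np.array([1.0, 0.5])
y = np.array([2.0, 2.0])              # y = diag(2, 4) @ s
B_new = bfgs_update(B, s, y)
```

The updated matrix satisfies the secant condition B+ s = y, stays symmetric, and remains positive definite whenever sᵀy > 0.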

The rest of this section is devoted to showing that Algorithm 3 is well defined. We first give the following hypothesis on the choice of the matrix .

Assumption 4. There exist positive constants and such that for all and

It is not difficult to see from that, for every nonnegative integer , the inner iteration (S.1) terminates finitely.

Lemma 5. , if there exists some such that the following conditions hold.
(a) , (b) for all .

Proof. From condition (b), we have that . It follows that From independence of and , the result follows.

Lemma 6. , if there exists some such that the following conditions hold.
(a) , (b) for all .

Proof. From condition (b), . It follows that, as , . Therefore, we get , and . Since , , we have, as , . This completes the proof.

It is easy to see from Lemmas 5 and 6 that is an unconstrained stationary point of if we are not able to get the next iterate from the current iterate ; that is, if the inner iterations (S2.1)–(S2.8) do not terminate finitely. Since we always have , this means that is actually a KKT point of problem (1a), (1b), (1c), and (1d). In the following section, we assume that the inner iterations (S2.1)–(S2.8) terminate finitely for all ; namely, there always exists such that, for every , one of the following conditions holds.(i), .(ii), .Therefore, the algorithm generates an infinite iterative sequence .

Lemma 7. If there exists such that for all , then there exists such that for all .

Proof. Since , we have that . So the result follows from Assumption 4.

Lemma 8. If there exist and subset such that with and all , then for sufficiently large .

Proof. Assume to the contrary that for any there always exists such that . From the construction of the algorithm, we have that . By Assumption 1 and the finiteness of the set , without loss of generality, we can assume that(i) with and for all ;(ii), remain unchanged.For simplicity, let . Since is bounded, it follows from that for sufficiently large . Hence, by Assumption 4 and step (S.1), we get , which contradicts Assumption 2, and the proof is complete.

From Lemmas 7 and 8 we can directly obtain Lemma 9.

Lemma 9. There exists such that for all .

Since is compact, we get Lemma 10.

Lemma 10. is nonsingular and uniformly bounded with respect to ; that is, there exist and such that, for all ,

From Assumption 1 and Lemma 10, the following lemma is then obvious.

Lemma 11. are bounded for .

Lemma 12. If , the following results hold.(a) for .(b) for .(c).

Proof. (a) is a direct consequence of linear system (16). It is easy to see from linear systems (16), (18), and (19) that . Therefore, we have . This completes the proof.

3. Convergence

Lemma 13. Suppose the following conditions hold.(i).(ii) for every .(iii)There exists such that , and .Then is a stationary point; namely, .

Proof. We show the conclusion by contradiction. Suppose that . Without loss of generality, we assume that and . So there exists such that, for sufficiently large , . Since , for sufficiently large , there exists such that . So . Let . Since , it is obvious that . So we have that, for sufficiently large , . Therefore, we have . By (37), for all , . It follows from (39) and (40) that there exists , independent of , such that, for any , both (23) and (24) hold. From (39), there exists such that, for all with , , . It is not difficult to see from (23) and Lemma 12 that, for sufficiently large , . Combining with (41), we get . It follows that , which contradicts the fact that is bounded, and the proof is complete.

Lemma 14. Suppose the following conditions hold.(i).(ii) for every .(iii)There exists such that , and .If for every , then is a stationary point; namely, .

Proof. From the above conditions, we have that, for every , , , and therefore . It follows that, as , and . This completes the proof.

Let , , denote the vectors on with components , respectively, where

Lemma 15. Suppose conditions (i)–(iii) hold in Lemma 14. If for every , then is a stationary point; namely, .

Proof. Without loss of generality, we suppose that and remain unchanged.
From condition (iii) in Lemma 14, . Combining with the first equation of linear system (16), . It is easy to see from (47) that . Therefore, we have . So we get and, for sufficiently large , . Let . Since is a KKT pair of problem (1a), (1b), (1c), and (1d), we have . Therefore, from (50), . If there is such that , then we have from (51) that, for arbitrary and sufficiently large , . For , since is nonsingular, we can also get that, for sufficiently large , . So we have . Since , . It follows that and, therefore, . This completes the proof.

Lemma 16. Suppose that . If , then is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. If, for every , , the result can be obtained directly from and . Without loss of generality, we suppose that, for all ,(i),(ii) remains unchanged.By Lemma 12 and linear systems (16), (18), and (19), we have . So . Let be an arbitrary accumulation point of . Since and are continuously differentiable, we get from (6), (16), and (59) that . This completes the proof.

Lemma 17. Assume that the following conditions hold:(i).(ii)there exists subset such that and .Then is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. Assume to the contrary that is not a KKT pair of problem (1a), (1b), (1c), and (1d). Without loss of generality, we suppose that conditions (i) and (ii), given in the proof of Lemma 16, hold for all . It is not difficult to see from Lemma 16 that there exists such that, for sufficiently large , . So, for sufficiently large , . From (61), does not converge to . Therefore, without loss of generality, we can also suppose that and . Since , for sufficiently large , there exists such that . It follows that . So, for every , , and, for , . In a way similar to the proof of Lemma 13, we get that , which contradicts the boundedness of , . This completes the proof.

Lemma 18. Assume that the following conditions hold.(i).(ii)There exists subset such that and .(iii) for every .If for every , then is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. Without loss of generality, we suppose that remains unchanged. Let . By Lemma 10, is nonsingular. Therefore, there exists such that and is the unique solution of the following linear system: . From Lemma 13, for every , and , where .
So we get, as and , . It follows from Lemma 13 that . Therefore, we have that , , and the proof is complete.

Lemma 19. Assume that conditions (i)–(iii) in Lemma 18 hold. If for every , then is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. Without loss of generality, we suppose that and remain unchanged. In a way similar to Lemma 15, we have . Let and denote a vector with the following components: . Since is a KKT pair of (1a), (1b), (1c), and (1d), we have from (70) that is the solution of the following linear system: . On the other hand, since , there exists such that . From Assumption 2, is nonsingular. Therefore, is the unique solution of the linear system (72). So we have that . So , and the proof is complete.

From Lemmas 13–19, we have the following.

Theorem 20. If , then is a KKT pair of problem (1a), (1b), (1c), and (1d).

4. Rate of Convergence

In this section, we will establish the superlinear convergence of Algorithm 3. We suppose that the algorithm generates an infinite iterative sequence and there exists such that with . That is, (S2.1)–(S2.4) will never be run when and the inner iterations (S2.5)–(S2.8) terminate finitely. Let be an accumulation point of the sequence generated by Algorithm 3. We assume that , , are locally Lipschitz continuous on a neighborhood of . To ensure the whole sequence converges to , we need the following assumption.

Assumption 21. The second-order sufficient condition holds at ; that is, the Hessian is positive definite on the space , .

We first introduce a useful proposition as follows.

Proposition 22 (see [25, Proposition 4.1]). Assume that is an isolated accumulation point of a sequence such that, for every subsequence converging to , there is an infinite subset such that ; then the whole sequence converges to .

Lemma 23. If , then .

Proof. Assume to the contrary that there exists a subset such that . By the finiteness of the set and the boundedness of the sequence , there exists a subset such that and , remain unchanged. It is not difficult to see from linear system (18) that . On the other hand, from Theorem 20, is a KKT pair of problem (1a), (1b), (1c), and (1d); it follows that , which contradicts (73). So we have that .

Lemma 24. If , then .

Proof. Since the multiplier is unique with respect to and is bounded, it follows from Theorem 20 that .

Lemma 25. If , then .

Proof. Suppose that is an arbitrary accumulation point of . Then, from Theorem 20, . In a way similar to the proofs of Lemmas 18 and 19, we get that , . From the boundedness of , the result follows.

Lemma 26. If , then , .

Proof. By Lemma 25, . From Lemma 23, the result follows.

Lemma 27. Under Assumptions 1, 2, 4, and 21, the whole sequence converges to .

Proof. Suppose that . Assumptions 2 and 21 imply that is an isolated accumulation point of [30]. By (S.3) in Algorithm 3, . It follows from Lemma 23 that . Therefore, we have from Proposition 22 that the whole sequence converges to . By Lemma 24, converges to . This completes the proof.

Assumption 28. The strict complementarity condition holds at ; that is, .

Lemma 29. Let ; then, for all sufficiently large ,

Proof. By Theorem 20 and Lemma 27, it is easy to see that . In a way similar to the proof of (70) in Theorem 20, we have the following result: . By Assumption 28, the result follows.

By Lemmas 23, 27, and 29, we can directly obtain the following corollary.

Corollary 30. If Assumptions 1, 2, 4, and 21 hold, then for every

By linear systems (18), (19), and (21), we have . Combining with the fact that and , we have the following.

Lemma 31. For sufficiently large , the following results hold.

Assumption 32. The sequence of matrices satisfies where , .

Note. Assumption 32 is an extended Dennis-Moré condition. It was used in a QP-free algorithm for nonlinear optimization problems by Yang et al. [27]. We will show that it is a sufficient condition for our algorithm to be superlinearly convergent. In order to show the superlinear convergence, we first introduce the following proposition.

Proposition 33 (see [27, Lemma 4.3]). For sufficiently large , the direction can be decomposed into , with .

Lemma 34. For sufficiently large , if , then the step is accepted.

Proof. For , due to , it is not difficult to see from Corollary 30 that, when is sufficiently large, . For , we have from linear system (21) and Lemma 31 that . It follows from that, for sufficiently large , . So, when is sufficiently large, is a strictly feasible point of problem (1a), (1b), (1c), and (1d). By (21) and (81), we have . Combining with (84), we have . It follows that, for sufficiently large , . From Proposition 33 and Assumption 32, we have that . From linear system (19), for sufficiently large , . Since for all , we have, for sufficiently large , . By (87), (88), and (89), we get , which completes the proof.

Theorem 35. Under stated assumptions, we have

Proof. By the definition of , we have . It follows from (93) that . Since , it is clear from linear system (21) that . Let and .
From (94) and (95), we have . From Assumption 21, it is not difficult to see that, when is sufficiently large, have full column rank. It follows from (96) and Assumption 32 that , which implies that . This completes the proof.

In the sequel, we consider the following case: the KKT point of problem (1a), (1b), (1c), and (1d) is an unconstrained stationary point with multiplier vector . It is clear that and also in this case. Therefore, we have from the construction of Algorithm 3 that, for sufficiently large , . In order to show the superlinear convergence in this case, we first give two well-known propositions.

Proposition 36. Assume that is twice continuously differentiable and is Lipschitz continuous on open convex subset of . Then, for arbitrary , we have where is a Lipschitz constant.
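The displayed inequality of Proposition 36 was lost from this text. Since the proposition is described as well known, we assume it is the standard Taylor remainder bound for a function with Lipschitz continuous Hessian, which in the notation of this section (with $L$ the Lipschitz constant of $\nabla^2 f$ on the convex set $D$) reads:

```latex
\left\| \nabla f(y) - \nabla f(x) - \nabla^2 f(x)\,(y - x) \right\|
\;\le\; \frac{L}{2}\,\| y - x \|^2
\qquad \text{for all } x, y \in D .
```

This is the form used in the proof of Theorem 40, where the gradient at the new iterate is compared with its first-order prediction.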

Proposition 37. Assume that and satisfy the conditions in Proposition 36. If is symmetric positive definite, then there exist , such that when with

In order to obtain the superlinear convergence of problem (1a), (1b), (1c), and (1d) under the condition , we give the following assumption.

Assumption 38. The sequence of matrices satisfies

Lemma 39. If , then the step is accepted for sufficiently large .

Proof. Since , , . It follows from that, for sufficiently large , , . That is, is strictly feasible. It remains to show that inequality (23) also holds when . By (5), (6), and , we have, for sufficiently large , , which completes the proof.

Theorem 40. Assume that . If and satisfy the conditions in Propositions 36 and 37, then

Proof. Since , we have . It follows from Proposition 36 that . By inequality (6), we have . Hence, from (105) and (106), we have . By and Proposition 37, . So we have , which implies that . This completes the proof.

5. Conclusion

In this paper, by quasi-Monte-Carlo-based approximations of the objective function and its first derivative, we have proposed a feasible sequential system of linear equations method for a two-stage stochastic quadratic programming problem with inequality constraints. A new technique to update the “working set” is suggested. A feature of the new technique is that, in order to update the “working set,” at each iteration we directly use the solution of linear system (16) without calculating the inverse of the matrix as in [27]. Moreover, it does not need to approximate the Hessian by a Monte Carlo (or quasi-Monte-Carlo) rule. Therefore, our algorithm saves computational cost. The other remarkable feature of this technique is that it can accurately identify the active constraints of problem (1a), (1b), (1c), and (1d). It should be pointed out that the technique is also useful for deterministic nonlinear programming problems with inequality constraints. We have shown that the sequence generated by the proposed algorithm converges globally to a KKT point of the problem. In particular, the convergence rate is locally superlinear under some additional conditions. To obtain the superlinear convergence of the algorithm, we still need the strict complementarity assumption. However, we believe that, by using quasi-Monte-Carlo-based approximations and the new identification technique, it is possible to design a new algorithm without the strict complementarity assumption. Moreover, how to use parallel optimization techniques [31–33] for large-scale stochastic programs with recourse is an important topic for further research.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the National Bureau of Statistics of the People’s Republic of China (2014LZ41).