Mathematical Problems in Engineering
Volume 2017, Article ID 1564642, 15 pages
https://doi.org/10.1155/2017/1564642
Research Article

A Quasi-Monte-Carlo-Based Feasible Sequential System of Linear Equations Method for Stochastic Programs with Recourse

College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China

Correspondence should be addressed to Changyin Zhou; zhoucy123@163.com

Received 5 April 2017; Revised 14 July 2017; Accepted 24 July 2017; Published 24 August 2017

Academic Editor: Huanqing Wang

Copyright © 2017 Changyin Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A two-stage stochastic quadratic programming problem with inequality constraints is considered. By quasi-Monte-Carlo-based approximations of the objective function and its first derivative, a feasible sequential system of linear equations method is proposed. A new technique to update the active constraint set is suggested. We show that the sequence generated by the proposed algorithm converges globally to a Karush-Kuhn-Tucker (KKT) point of the problem. In particular, the convergence rate is locally superlinear under some additional conditions.

1. Introduction

Stochastic programming is a framework for modeling optimization problems that involve uncertainty. It has applications in a broad range of areas, from finance and transportation to energy optimization [1, 2]. In the field of industrial production, stochastic programming is also widely used in stochastic control [3–7].

We consider the following two-stage stochastic quadratic programming problem: where and are twice continuously differentiable, is symmetric positive definite, , , and are fixed matrices or vectors, and are random vectors, and is a continuously differentiable probability density function.
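For orientation, a standard two-stage stochastic quadratic program with recourse (in the spirit of [13, 17]) has the following shape; the symbols below ($H$, $c$, $g_j$, $W$, $T$, $h$, $d$, $\rho$) are generic placeholders and need not match the paper's exact formulation (1a), (1b), (1c), and (1d) in every detail:

```latex
\begin{align}
\min_{x \in \mathbb{R}^{n}} \quad
  & f(x) \;=\; \tfrac{1}{2}\,x^{\top} H x + c^{\top} x + Q(x)
  && \text{(first stage)} \\
\text{s.t.} \quad
  & g_j(x) \le 0, \qquad j = 1, \dots, m, \\
\text{where} \quad
  & Q(x) \;=\; \int_{\Omega} q(x,\omega)\, \rho(\omega)\, d\omega
  && \text{(expected recourse cost)} \\
  & q(x,\omega) \;=\; \min_{y \ge 0}
    \bigl\{\, d(\omega)^{\top} y \;:\; W y = h(\omega) - T(\omega)\, x \,\bigr\}
  && \text{(second stage).}
\end{align}
```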

Let and . We denote the active constraint set by , where . Throughout the paper, the following hypotheses hold.

Assumption 1. and are bounded.

Assumption 2. At every , the vectors , are linearly independent.

A basic difficulty in solving stochastic optimization problem (1a), (1b), (1c), and (1d) is that the objective function, which involves uncertainty, can be complicated or difficult to compute even approximately. The aim of this paper is to give computational approaches based on quasi-Monte-Carlo sampling techniques. To solve stochastic programming problems, one usually resorts to deterministic optimization methods. This idea is a natural one and has been used by many authors over the years [8–12]. Deterministic methods have also been widely applied to stochastic programming problems that involve quadratic programming. The extended linear quadratic programming (ELQP) model was introduced by Rockafellar and Wets [13, 14]. Qi and Womersley [15] proposed a sequential quadratic programming (SQP) algorithm for ELQP problems. To solve ELQP, Chen et al. [16] suggested a Newton-type approach and showed that this method is globally convergent and locally superlinearly convergent. In parallel, Birge et al. [17] investigated a stochastic Newton method for ELQP with inequality constraint . Global convergence and local superlinear convergence of the method were established.

In order to get a numerical solution of (1a), (1b), (1c), and (1d) based on quasi-Monte-Carlo techniques, consider the following approximation of (1c): where and is generated by lattice rules [18, 19]. Consequently, problem (1a), (1b), (1c), and (1d) is approximated by . Since is bounded, it follows from [17] that is twice continuously differentiable. Moreover, from [16], the approximated objective function has the following continuous first derivative in : where .
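To make the quasi-Monte-Carlo approximation concrete, the following sketch estimates an integral over the unit hypercube by averaging over a rank-1 lattice rule, the class of point sets cited in [18, 19]. The generating vector and the integrand here are illustrative choices, not values from the paper:

```python
import numpy as np

def rank1_lattice(n_points, gen_vector):
    """Rank-1 lattice rule on [0,1)^d: the i-th point is frac(i * z / N),
    where z is the (integer) generating vector."""
    i = np.arange(n_points).reshape(-1, 1)
    return np.mod(i * np.asarray(gen_vector) / n_points, 1.0)

def qmc_average(integrand, n_points, gen_vector):
    """Approximate the integral of `integrand` over the unit hypercube
    by its average over the lattice points, as in the approximation of (1c)."""
    return integrand(rank1_lattice(n_points, gen_vector)).mean()

# Toy usage: integrate exp(-|u|^2) over [0,1)^2; z = (1, 433) is merely an
# illustrative generating vector, not one constructed as in [18, 19].
phi = lambda u: np.exp(-np.sum(u**2, axis=1))
print(qmc_average(phi, n_points=1024, gen_vector=(1, 433)))
```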

Let be an integer sequence satisfying and as . Generate observations , on the unit hypercube according to an integration rule. Here, we choose quasi-Monte-Carlo sequences [20]. Since and are compact, it follows from [20] (or [21]) that there exists a constant such that, for any , .

The paper addresses a feasible sequential system of linear equations (SSLE) approach to solve (1a), (1b), (1c), and (1d). This study is strongly motivated by the recent successful development of various SSLE algorithms for deterministic optimization problems and by quasi-Monte-Carlo simulation techniques. SSLE methods for deterministic optimization problems have been proposed by many authors over the years; the interested reader is referred to [22–26] for excellent surveys. Our algorithm has the following interesting features.

(a) Without assuming isolatedness of the accumulation point or boundedness of the Lagrange multiplier approximation sequence, every accumulation point of the iterative sequence generated by the proposed algorithm converges to a KKT point of problem (1a), (1b), (1c), and (1d).

(b) At each iteration, we only need to solve four symmetric systems of linear equations with a common coefficient matrix and a simple structure; only the last of these systems needs to be solved to achieve a local one-step superlinear convergence rate (a factorization-reuse sketch follows this list).

(c) To obtain the "working set," the multiplier function must first be computed in [27]; a multiplier function is also used by Facchinei et al. [28]. Our algorithm instead provides a new technique to update the "working set," and consequently no multiplier function needs to be calculated.

(d) To find a search direction, a quadratic programming subproblem must be solved at each iteration in [17], so the Hessian of the objective function must be approximated by a Monte Carlo (or quasi-Monte-Carlo) rule; for SSLE methods this approximation is not necessary. Our algorithm solves four systems of linear equations involving only the first-order derivative of the objective function.
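Feature (b) is the computational crux of an SSLE method: because the four systems share one coefficient matrix, the matrix can be factorized once per iteration and the factorization reused for every right-hand side. A minimal sketch under that assumption (the matrix and right-hand sides below are synthetic stand-ins for the systems (16), (18), (19), and (21)):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_shared_systems(A, rhs_list):
    """Factorize the common coefficient matrix A once (O(n^3)) and reuse the
    factors for each right-hand side (O(n^2) per solve)."""
    lu, piv = lu_factor(A)
    return [lu_solve((lu, piv), b) for b in rhs_list]

# Toy usage: one symmetric (possibly indefinite) matrix and four right-hand
# sides, mirroring the four systems solved at each iteration.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T
solutions = solve_shared_systems(A, [rng.standard_normal(5) for _ in range(4)])
```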

The remainder of this paper is organized as follows. Section 2 presents the algorithm for (1a), (1b), (1c), and (1d) and shows that the proposed algorithm is well defined. In Section 3 we discuss the convergence of the algorithm in detail. We proceed in Section 4 by showing the local superlinear convergence. Finally, our conclusions are presented in Section 5.

2. Algorithm

The Lagrangian function associated with problem (1a), (1b), (1c), and (1d) is defined by . A point in is called a KKT point of problem (1a), (1b), (1c), and (1d) if there exists such that the following KKT conditions hold: where . For , let , where is a nonnegative parameter and with . From the definition of , if and only if satisfies KKT conditions (8). In order to obtain the active constraint set in our algorithm, the estimate of the set is defined by , where and is a positive parameter in . Since and are continuously differentiable, it follows from Theorem 3.15 in [28] that is nonnegative and continuous on . Hence, from (6) and the continuous differentiability of , we have that , as , .
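For reference, for an inequality-constrained problem $\min f(x)$ subject to $g_j(x) \le 0$, $j \in I$, the KKT conditions (8) have the standard form below; the notation is generic, with multipliers written as $\lambda_j^{*}$:

```latex
\nabla f(x^{*}) + \sum_{j \in I} \lambda_{j}^{*}\, \nabla g_{j}(x^{*}) = 0,
\qquad
g_{j}(x^{*}) \le 0,
\qquad
\lambda_{j}^{*} \ge 0,
\qquad
\lambda_{j}^{*}\, g_{j}(x^{*}) = 0,
\quad j \in I .
```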

For simplicity, let , and , where . Now we formally state our algorithm.

Algorithm 3.
(S.0) (Initialization)
Parameters: , , , , , , , .
Data: , , , , a symmetric positive definite matrix , , and , for every . The sequence satisfies , for all , and . Choose such that , and . Generate observations by quasi-Monte-Carlo rules and calculate . Set .
(S.1) (Choose Working Set)
(S1.1) If , then set .
(S1.2) Set , .
(S1.3) Calculate and .
(S1.4) If , then set , , and go to (S1.3).
(S1.5) Set , .
(S.2) (Computation of Search Direction)
If , then perform the following steps (S2.1)–(S2.4); otherwise go to (S2.5).
(S2.1) Set .
(S2.2) Generate observations by quasi-Monte-Carlo rules and calculate .
(S2.3) Set .
(S2.4) If , then set and go to (S2.2); otherwise set , , , and go to (S.3).
(S2.5) Set .
(S2.6) Generate observations by quasi-Monte-Carlo rules and calculate .
(S2.7) Compute by solving the system of linear equations in (16). Set .
(S2.8) Let , where . Compute by solving the system of linear equations in (18). If , then set and go to (S2.6); otherwise set , and .
(S2.9) Compute by solving the system of linear equations in (19), where .
(S2.10) Compute by solving the system of linear equations in (21), where .
(S.3) If , then set .
Choose , the first number in the sequence , satisfying (23) and (24).
(S.4) Compute such that .
Set , and . Generate a new symmetric positive definite matrix . Set and go to (S.1).
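Step (S.3) is a backtracking line search: successive powers of the step parameter are tried until the acceptance tests (23) and (24) hold. The sketch below shows the backtracking pattern with a hypothetical Armijo sufficient-decrease test standing in for (23) and (24); the feasibility check that the actual tests also enforce is omitted:

```python
import numpy as np

def backtracking_step(f, grad_f, x, d, beta=0.5, sigma=1e-4, max_trials=50):
    """Return the first step length in {1, beta, beta^2, ...} that satisfies
    an Armijo-type sufficient-decrease condition along the direction d."""
    fx, slope = f(x), grad_f(x) @ d       # slope < 0 for a descent direction
    for t in range(max_trials):
        step = beta ** t
        if f(x + step * d) <= fx + sigma * step * slope:
            return step
    raise RuntimeError("no acceptable step length found")

# Toy usage on a strictly convex quadratic with the steepest-descent direction.
f = lambda x: 0.5 * x @ x
grad_f = lambda x: x
x0 = np.array([2.0, -1.0])
step = backtracking_step(f, grad_f, x0, d=-grad_f(x0))
```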

Remarks.
(a) The main purpose of (S.1) is to generate a working set and to ensure that the matrix is nonsingular for every . Hence, is well defined for all . The calculation of the set differs from the one proposed in [27]: we use the solution of system (18) as a substitute for the multiplier function proposed there. Moreover, is also uniformly bounded; details will be given subsequently.
(b) By the construction of the algorithm, four linear systems need to be solved at each iteration. To ensure that the iterate sequence converges globally to a KKT point of (1a), (1b), (1c), and (1d), we only need to solve the first three linear systems (16), (18), and (19). The linear systems (16) and (19) play important roles in proving global convergence. The main aim of the linear system (21) is to guarantee the one-step superlinear convergence rate of the algorithm under mild conditions.
(c) It is not difficult to show that there exists , the first number of the sequence , which satisfies the line search (23) and (24). In Section 4 we will show that for sufficiently large . Hence, the Maratos effect is avoided.
(d) In numerical experiments, is usually updated by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula [27, 29]; a damped-update sketch is given after these remarks. At any iteration, Algorithm 3 stops when the following termination criteria are met, with and maximum number of iterations :
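As noted in remark (d), the matrix is usually maintained by a BFGS update. One common way to keep the update symmetric positive definite, as Assumption 4 below requires, is Powell's damped BFGS; the sketch is generic, and the damping constant 0.2 is the conventional textbook choice rather than a value from the paper:

```python
import numpy as np

def damped_bfgs_update(B, s, y, damping=0.2):
    """Powell-damped BFGS update of a symmetric positive definite B.
    Here s = x_{k+1} - x_k and y is the gradient difference; if the curvature
    s'y is too small, y is blended with B s so that the updated matrix stays
    symmetric positive definite."""
    Bs = B @ s
    sBs = s @ Bs                      # > 0 while B is positive definite
    sy = s @ y
    if sy < damping * sBs:            # curvature condition fails: damp
        theta = (1.0 - damping) * sBs / (sBs - sy)
        y = theta * y + (1.0 - theta) * Bs
        sy = s @ y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```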

The rest of this section is devoted to showing that Algorithm 3 is well defined. We first give the following hypothesis on the choice of the matrix .

Assumption 4. There exist positive constants and such that for all and

It is not difficult to see from that, for every in the nonnegative integer set , the inner iteration (S.1) terminates finitely.

Lemma 5. , if there exists some such that the following conditions hold.
(a) , (b) for all .

Proof. From condition (b), we have that . It follows that . From the independence of and , the result follows.

Lemma 6. , if there exists some such that the following conditions hold.
(a) , (b) for all .

Proof. From condition (b), . It follows that, as , . Therefore, we get , and . Since , , we have, as , . This completes the proof.

It is easy to see from Lemmas 5 and 6 that is an unconstrained stationary point of if we are not able to obtain the next iterate from the current iterate ; that is, if the inner iterations (S2.1)–(S2.8) do not terminate finitely. Since we always have , this means that is actually a KKT point of problem (1a), (1b), (1c), and (1d). In the following section, we assume that the inner iterations (S2.1)–(S2.8) terminate finitely for all ; namely, there always exists such that, for every , one of the following conditions holds.
(i) , .
(ii) , .
Therefore, the algorithm generates an infinite iterative sequence .

Lemma 7. If there exists such that for all , then there exists such that for all .

Proof. Since , we have that . So the result follows from Assumption 4.

Lemma 8. If there exist and a subset such that with and for all , then for sufficiently large .

Proof. Assume to the contrary that for any there always exists such that . From the construction of the algorithm, we have that . By Assumption 1 and the finiteness of the set , without loss of generality, we can assume that
(i) with and for all ;
(ii) , remain unchanged.
For simplicity, let . Since is bounded, it follows from that for sufficiently large . Hence, by Assumption 4 and step (S.1), we get , which contradicts Assumption 2, and the proof is complete.

From Lemmas 7 and 8 we can directly obtain Lemma 9.

Lemma 9. There exists such that for all .

Since is compact, we get Lemma 10.

Lemma 10. is nonsingular and uniformly bounded with respect to ; that is, there exist and such that, for all ,

From Assumption 1 and Lemma 10, the following lemma is then obvious.

Lemma 11. are bounded for .

Lemma 12. If , the following results hold.
(a) for .
(b) for .
(c) .

Proof. (a) is a direct consequence of linear system (16). It is easy to see from linear systems (16), (18), and (19) that . Therefore, we have . This completes the proof.

3. Convergence

Lemma 13. Suppose the following conditions hold.
(i) .
(ii) for every .
(iii) There exists such that , and .
Then is a stationary point; namely, .

Proof. We show the conclusion by contradiction. Suppose that . Without loss of generality, we assume that and . So there exists such that, for sufficiently large , . Since , for sufficiently large there exists such that . So . Let . Since , it is obvious that . So we have that, for sufficiently large , . Therefore, we have . By (37), for all , . It follows from (39) and (40) that there exists , independent of , such that, for any , both (23) and (24) hold. From (39), there exists such that, for all with , , . It is not difficult to see from (23) and Lemma 12 that, for sufficiently large , . Combining this with (41), we get . It follows that , which contradicts the fact that is bounded, and the proof is complete.

Lemma 14. Suppose the following conditions hold.
(i) .
(ii) for every .
(iii) There exists such that , and .
If for every , then is a stationary point; namely, .

Proof. From the above conditions, we have that, for every , , , and therefore . It follows that, as , and . This completes the proof.

Let , , denote the vectors on with components , respectively, where

Lemma 15. Suppose conditions (i)–(iii) hold in Lemma 14. If for every , then is a stationary point; namely, .

Proof. Without loss of generality, we suppose that and remain unchanged.
From condition (iii) in Lemma 14, . Combining this with the first equation of linear system (16), we get . It is easy to see from (47) that . Therefore, we have . So we get , and, for sufficiently large , . Let . Since is a KKT pair of problem (1a), (1b), (1c), and (1d), we have . Therefore, from (50), . If there is such that , then we have from (51) that, for arbitrary and sufficiently large , . For , since is nonsingular, . So we can also get that, for sufficiently large , . So we have . Since , . It follows that , and, therefore, . This completes the proof.

Lemma 16. Suppose that . If , then is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. If, for every , , the result can be directly obtained from and . Without loss of generality, we suppose that, for all ,
(i) ,
(ii) remains unchanged.
By Lemma 12 and linear systems (16), (18), and (19), we have . So . Let be an arbitrary accumulation point of . Since and are continuously differentiable, we get from (6), (16), and (59) that . This completes the proof.

Lemma 17. Assume that the following conditions hold.
(i) .
(ii) There exists a subset such that and .
Then is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. Assume to the contrary that is not a KKT pair of problem (1a), (1b), (1c), and (1d). Without loss of generality, we suppose that conditions (i) and (ii), given in the proof of Lemma 16, hold for all . It is not difficult to see from Lemma 16 that there exists such that, for sufficiently large , . So, for sufficiently large , . From (61), does not converge to . Therefore, without loss of generality, we can also suppose that and . Since , for sufficiently large there exists such that . It follows that . So, for every , , and, for , . In a way similar to the proof of Lemma 13, we get that , which contradicts the boundedness of , . This completes the proof.

Lemma 18. Assume that the following conditions hold.
(i) .
(ii) There exists a subset such that and .
(iii) for every .
If for every , then is a KKT pair of problem (1a), (1b), (1c), and (1d).

Proof. Without loss of generality, we suppose that remains unchanged. Let . By Lemma 10, is nonsingular. Therefore, there exists such that and is the unique solution of the following linear system: