Abstract

Motivated by problems arising in planning and operational management in power generation companies, this work extends the traditional two-stage linear stochastic program by adding probabilistic constraints in the second stage. We describe, under special assumptions, how two-stage stochastic programs with mixed probabilities can be treated computationally. We obtain a convex conservative approximation of the chance constraints defined in the second stage of our model and use Monte Carlo simulation techniques to approximate the expectation function in the first stage by the sample average. This approach raises another question: how to solve the linear program with the convex conservative approximation (nonlinear constraints) for each scenario?

1. Introduction

Optimization problems involving stochastic models occur in almost all areas of science and engineering. Financial planning and unit commitment in power systems are just a few examples of areas in which ignoring uncertainty may lead to inferior or simply wrong decisions.

Stochastic programming models are optimization problems in which decisions have to be made under uncertainty because some of the parameters are random variables; they may use probabilistic constraints and/or penalties in the objective function. In practice, the numerical solvability of the problem plays an important role, and there is a tradeoff between correct statistical modeling and computability. For earlier reviews on the various aspects of stochastic programming see, for example, [15].

Two-stage stochastic programming is useful for problems where an analysis of strategy scenarios is desired and where the right-hand-side coefficients are random. The main idea of this model is the concept of recourse, which is the possibility of taking corrective actions after a realization of the random event. A first-stage decision is made before the values of the random variables are known; then, after the random events have occurred and their values are known, a second-stage decision is made to minimize the "penalties" that may appear because of any infeasibility. For a good introduction and a deeper treatment of various aspects of these models, see the books [4, 6].

Chance-constrained optimization problems were introduced by Miller and Wagner [7] and Prékopa [8]. An alternative to the scenario approximation (Monte Carlo sampling techniques) is an approximation based on an analytical upper bound on the probability that the randomly perturbed constraint is violated. The simplest approximation scheme of this type was proposed in [9]; for a new class of analytical approximations (referred to as Bernstein approximations), see the work by Nemirovski and Shapiro [10]. Another approximation of probabilistic constraints uses the Boole-Bonferroni inequalities; see, for example, [11, 12].

When the stochastic program includes nonlinear terms, or when continuous random variables are explicitly included, a finite-dimensional linear programming deterministic equivalent does not exist. In this case, we must use nonlinear programming procedures; see, for instance, [3, 13-16].

Previous work (see [17]) extended the traditional two-stage linear stochastic program by probabilistic constraints imposed in the second stage. In the next section, we present a summary of assumptions under which the mixed-probability stochastic program is structurally well behaved and stable under perturbations of both probability measures. Moreover, in [17] one can find, under general conditions, first qualitative continuity properties for the expectation in the objective function and for the constraint set-valued maps. From these, quantitative stability results for the optimal value function and the solution set under perturbations of the probability measures were deduced.

In the third section, we show two possible applications of this model: the first is a summary of the case of planning and operational management in power generation companies presented in [17], and the other is an application to the problem of air pollution.

2. Some Preliminaries: Basic Well-Posedness

In previous work (see [17]) the following parametric family of mixed-probability stochastic programs $P(\mu,\lambda)$ was introduced:
\[
\min\Big\{ c^\top x + \int_{\mathbb{R}^s} Q(z - Ax,\lambda)\,\mu(dz) \;:\; x \in C \Big\}, \quad (\mu,\lambda) \in \Delta \times \Lambda, \tag{2.1}
\]
where $Q(t,\lambda)$ is the optimal value function of the second-stage problem
\[
\min\big\{ q^\top y \;:\; Wy = t,\; y \ge 0,\; \lambda(H_j(y)) \ge p_j,\; j = 1,\dots,d \big\} \tag{2.2}
\]
and (i) $H_j$, $j=1,\dots,d$, are set-valued mappings from $\mathbb{R}^m$ to $\mathbb{R}^r$ with closed graph; (ii) $p_j$, $j=1,\dots,d$, are predesigned probability levels; (iii) if $\mathcal{P}(\mathbb{R}^s)$, $\mathcal{P}(\mathbb{R}^r)$ denote the sets of all Borel probability measures on $\mathbb{R}^s$ and $\mathbb{R}^r$, respectively, we assume that $\Delta$ and $\Lambda$ are subsets of $\mathcal{P}(\mathbb{R}^s)$ and $\mathcal{P}(\mathbb{R}^r)$; (iv) $C$ is a closed subset of $\mathbb{R}^m$.

All remaining vectors and matrices have suitable dimensions.

This model extends the traditional two-stage linear stochastic program by introducing probabilistic constraints $\lambda(H_j(y)) \ge p_j$, $j=1,\dots,d$, in the second stage of the problem. These constraints add nonlinearities to the problem, and basic arguments to analyze the well-posedness of $P(\mu,\lambda)$ were studied in [17].

The major difficulty in understanding the structure of 𝑃(𝜇,𝜆) rests in a dilemma about the function 𝑄.

On the one hand, 𝑄 is the optimal-value function of a nonlinear program with parameters 𝑡 and 𝜆, and parametric optimization mainly provides local results about the structure of 𝑄 but global results are very scarce and require specific assumptions that are often hard to verify.

On the other hand, 𝑄 arises as an integrand in 𝑃(𝜇,𝜆). For studying properties of the related integral we require global information about 𝑄.

From this viewpoint, it is not surprising that most of the structural results about two-stage stochastic programs concern the purely linear and the linear mixed integer cases, that is, the widest problem classes where parametric optimization offers broader results about global stability.

To lay a foundation for the structural analysis of 𝑄 we formulate the following general assumptions.

Assumption A.1. For any $\lambda \in \Lambda$ there exist a nonempty set $\mathcal{T}_\lambda \subseteq \mathbb{R}^s$ and a Lebesgue null set $\mathcal{N}_\lambda \subseteq \mathbb{R}^s$ such that the function $Q(\cdot,\lambda)$ is real valued and measurable on $\mathcal{T}_\lambda$, and continuous on $\mathcal{T}_\lambda \setminus \mathcal{N}_\lambda$.

Assumption A.2. It holds that
\[
\bigcup_{\mu\in\Delta} \operatorname{supp}\mu \;\subseteq\; \bigcap_{\lambda\in\Lambda}\,\bigcap_{x\in C} \big( Ax + \mathcal{T}_\lambda \big), \tag{2.3}
\]
where $\operatorname{supp}\mu$ denotes the smallest closed set in $\mathbb{R}^s$ with $\mu$-measure one.

Assumption A.3. There exists a real-valued, measurable function $h$ on $\mathbb{R}^s$, which we call a bounding function, with the following properties.
(1) $Q$-Majorization: it holds that $|Q(t,\lambda)| \le h(t)$ for all $t \in \mathcal{T}_\lambda$ and all $\lambda \in \Lambda$.
(2) Integrability: it holds that $\int_{\mathbb{R}^s} h(z)\,\mu(dz) < +\infty$ for all $\mu \in \Delta$.
(3) Generalized Subadditivity: there exists a $\kappa > 0$ such that $h(t_1 + t_2) \le \kappa\big(h(t_1) + h(t_2)\big)$ for all $t_1, t_2 \in \mathbb{R}^s$.
(4) Local Boundedness: for each $t \in \mathbb{R}^s$ there exists an open neighborhood of $t$ on which $h$ is bounded.

The essence of Assumptions A.1-A.3 is the following: since $Q(\cdot,\lambda)$ is the optimal-value function of a minimization problem, it may well attain the value $+\infty$ if the problem is infeasible and $-\infty$ if the problem is unbounded. Assumption A.1 makes sure that $Q(\cdot,\lambda)$ is finite on some set $\mathcal{T}_\lambda$, and A.2 guarantees that the arguments $z - Ax$ lie in $\mathcal{T}_\lambda$ for all relevant $z$ and $x$. Otherwise, $Q(z-Ax,\lambda)$ would attain infinite values with positive probability, immediately preventing finiteness of the integral
\[
G(x,\mu,\lambda) = \int_{\mathbb{R}^s} Q(z - Ax,\lambda)\,\mu(dz). \tag{2.4}
\]
The continuity part of Assumption A.1, together with Assumption A.3, provides a framework for applying dominated convergence to show continuity of $G(\cdot,\mu,\lambda)$.

Introducing the exceptional set 𝒩𝜆 in Assumption A.1 makes sense, since 𝑄(,𝜆) often lacks continuity on lower-dimensional subsets of its domain of finiteness.

Furthermore, Assumption A.3 ensures an integrable upper bound for the functions $|Q(z - Ax,\lambda)|$ when $x$ varies in some neighborhood. Any other set of conditions ensuring this could be used instead.

Clearly, $h$ reflects the global growth of $|Q(\cdot,\lambda)|$, whose quantitative analysis is acknowledged to be nontrivial for nonlinear problems.

3. Applications

Motivated by the study of stochastic programming problems arising in planning and operational management in power generation companies, previous work (see [17]) presented an example in which a system of power plants is operated over a time horizon. In the case of planning and operational management in power generation companies, the first-stage variable $x$ in the model represents generation capacity investment decisions, such as (continuous) changes of maximum generation capacity for thermal plants, the variable $z$ is a random demand, and $y$ is the second-stage operational variable representing the level of energy production.

The latter is also limited by emission rights for carbon dioxide that may concern single plants or consortia of plants. The level of permitted emission is considered random, since emission rights are traded at predesigned markets, for instance via auctions, whose outcomes are uncertain to market participants. This motivates modeling the limitations on the operational variables resulting from emission rights by probabilistic rather than deterministic constraints.

The works [18, 19] developed several applications modeling the problem of air pollution; in those papers the authors combine different techniques, including two-stage stochastic programming. Based on these previous works, we now present a variation of these models which includes chance constraints in the second stage, that is, an example with two completely independent probability measures of different natures.

In air quality management systems, there are uncertainties in a variety of pollution-related processes, such as pollutant characteristics, emission rates, and mitigation measures. These uncertainties affect the efforts to model pollutant emissions. On the other hand, because it is economically infeasible and sometimes technically impossible to design processes with zero emission, decision makers and authorities seek to control the emissions to levels at which the effects are minimized. The problem is how to minimize the expected system cost for pollution abatement while satisfying the policy in terms of allowable pollutant-emission levels.

The SO2 generation rates may vary with the type of coal used at the power plants, as well as with the related combustion conditions, and can therefore be expressed as a random variable. As an illustrative example, consider a power system consisting of plants $i=1,2,\dots,I$ to be operated over a time horizon with subintervals $t=1,2,\dots,T$ and a set of control methods $j=1,2,\dots,J$. The first-stage variables $x_{ijt}$ represent the amount of SO2 generated from source $i$ to be mitigated through control measure $j$ in period $t$ under the regulated emission allowance, and $c_{jt}$ is the operating cost of control measure $j$ during period $t$. The second-stage variables $y_{ijt}$ are the probabilistic excess SO2 from source $i$ to be mitigated through control measure $j$ in period $t$ under SO2 generation rate $z(\xi)$, and $d_{jt}$ is the operating and penalty cost for excess SO2 emission during period $t$. In general, this cost is considered to be much greater than the cost associated with the first-stage variables.

The objective is to minimize the total regular and penalty cost for SO2 abatement:
\[
\min\; \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{t=1}^{T} c_{jt}\,x_{ijt} + \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{t=1}^{T} d_{jt}\,\mathbb{E}\big[y_{ijt}\big]. \tag{3.1}
\]
If we denote by $z_{it}(\xi)$ the random SO2 generation rate in source $i$ during period $t$, the pollution-control demand constraints are
\[
\sum_{j=1}^{J} \big( x_{ijt} + y_{ijt}(\xi) \big) = z_{it}(\xi), \quad \forall i,\,t. \tag{3.2}
\]
Finally, the function $H(y_t(\xi),\zeta)$ represents the accumulation of SO2 in a particularly sensitive area, such as a city surrounded by emission sources or power plants. It depends, on the one hand, on the excess amount of emissions from each source $i$, given the control measure $j$ taken in period $t$, and, on the other hand, on the random variable $\zeta$ associated with climatic conditions, which predicts SO2 concentrations in a specific area under different meteorological conditions. We then add the probabilistic limitations on the second-stage variables:
\[
\Pr\big\{ H\big(y_t(\xi),\zeta\big) \le 0 \big\} \ge p_t, \quad \forall t, \tag{3.3}
\]
where $p_t$ are the probability levels with which the limitations are to be met.

4. Numerical Method

In order to give some idea of how two-stage stochastic programs with mixed probabilities can be treated computationally, we study the following stochastic linear programming problem:
\[
\min\; c^\top x + \mathbb{E}\big(Q(x,\xi)\big) \quad \text{s.t.}\quad Bx = b,\; x \ge 0, \tag{4.1}
\]
where
\[
Q(x,\xi) = \min\; q^\top y(\xi) \quad \text{s.t.}\quad Ax + W y(\xi) = \xi,\; y(\xi) \ge 0,\; \Pr\big\{ H\big(y(\xi),\zeta\big) \le 0 \big\} \ge 1 - p, \tag{4.2}
\]
and $\xi$ and $\zeta$ are independent random variables: (i) $\xi \in \Xi$ denotes the possible realizations of the random variable $\xi$, supported on $\Xi \subseteq \mathbb{R}^s$; (ii) $\mathbb{E}$ stands for expectation with respect to the random variable $\xi$, and $y(\xi) \in \mathbb{R}^m$ for each realization $\xi$; (iii) $B \in M_{l\times n}(\mathbb{R})$, $A \in M_{s\times n}(\mathbb{R})$, and $W \in M_{s\times m}(\mathbb{R})$ are deterministic matrices, and the probability level $p \in (0,1)$; (iv) $\zeta \in \Theta$ denotes the possible realizations of the random variable $\zeta$, supported on $\Theta \subseteq \mathbb{R}^r$.

The fundamental idea is to give a convex conservative approximation of the chance-constrained subproblems (4.2). For this, we follow the work by Nemirovski and Shapiro (see [10]) and thereby obtain an efficiently solvable deterministic optimization program whose feasible set is contained in that of the chance-constrained subproblem.

Let $H : \mathbb{R}^m \times \Theta \to \mathbb{R}$ be defined by
\[
H(y,\zeta) = h_0(y) + \sum_{j=1}^{r} \zeta_j\, h_j(y), \tag{4.3}
\]
and assume that the functions $h_j(y)$, $j=1,2,\dots,r$, are convex, that the components $\zeta_j$, $j=1,2,\dots,r$, of the random vector $\zeta$ are independent of the other random variables, and that the moment generating functions
\[
M_j(t) = \mathbb{E}\big[\exp(t\,\zeta_j)\big], \quad j=1,2,\dots,r, \tag{4.4}
\]
are finite valued for all $t$ and efficiently computable.
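As a quick numerical illustration of the chance constraint for the affine-in-$\zeta$ structure (4.3), the probability $\Pr\{H(y,\zeta) \le 0\}$ can be estimated by Monte Carlo. The data below ($a_j$, $b_j$ defining affine $h_j$, the means and variances, the point $y$) are toy values of our own choosing, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical affine maps h_j(y) = a_j @ y + b_j (rows of `a`) and
# h_0(y) = a0 @ y + b0; r = 2 components, m = 2 decision variables.
a = np.array([[0.5, -0.2], [0.1, 0.3]])
b = np.array([0.1, -0.4])
a0, b0 = np.array([-1.0, -1.0]), 1.2

def h(y):
    """Return (h_1(y), ..., h_r(y)) and h_0(y)."""
    return a @ y + b, a0 @ y + b0

def prob_H_nonpositive(y, mu, sigma, n_samples=200_000):
    """Monte Carlo estimate of Pr{ H(y, zeta) <= 0 } for independent
    normal components zeta_j ~ N(mu_j, sigma_j^2)."""
    hj, h0 = h(y)
    zeta = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    return np.mean(h0 + zeta @ hj <= 0.0)

y = np.array([1.0, 0.5])
mu, sigma = np.array([0.1, 0.2]), np.array([0.3, 0.2])
print(prob_H_nonpositive(y, mu, sigma))
```

Since $H(y,\zeta)$ is here a linear combination of independent normals, the estimate can be checked against the exact normal distribution of $H(y,\zeta)$.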

Then the problem
\[
\min\; q^\top y \quad \text{s.t.}\quad Ax + Wy = \xi,\; y \ge 0,\;
\inf_{t>0}\Big[ h_0(y) + \sum_{j=1}^{r} t\,\Lambda_j\big(t^{-1} h_j(y)\big) - t \log p \Big] \le 0 \tag{4.5}
\]
is a conservative convex approximation of the chance-constrained subproblem (4.2), for each realization of the random variable $\xi$ ($\xi \in \Xi \subseteq \mathbb{R}^s$), where
\[
\Lambda_j(t) = \log M_j(t). \tag{4.6}
\]
Note that this approximation (known as the Bernstein approximation) is an explicit convex program with efficiently computable constraints and, as such, is efficiently solvable.
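The left-hand side of the Bernstein constraint in (4.5) can be evaluated numerically by a one-dimensional minimization over $t > 0$. The sketch below uses a crude grid search (not production code) with normal log-moment-generating functions $\Lambda_j(s) = \mu_j s + \sigma_j^2 s^2/2$ and toy values of $h_0(y)$, $h_j(y)$, all our own illustrative assumptions:

```python
import numpy as np

def bernstein_omega(h0, hvals, Lambda, p, t_grid=None):
    """Evaluate inf_{t>0} [ h0 + sum_j t*Lambda_j(h_j/t) - t*log(p) ]
    by grid search over t. `Lambda` maps a vector s to the vector
    (Lambda_1(s_1), ..., Lambda_r(s_r))."""
    if t_grid is None:
        t_grid = np.logspace(-4, 4, 4001)
    vals = [h0 + t * Lambda(hvals / t).sum() - t * np.log(p) for t in t_grid]
    return min(vals)

# Illustrative data: normal components with these means/std-devs.
mu = np.array([0.1, -0.2])
sigma = np.array([0.3, 0.5])
Lambda = lambda s: mu * s + 0.5 * sigma**2 * s**2

h0, hvals, p = -2.0, np.array([0.4, 0.6]), 0.05
print(bernstein_omega(h0, hvals, Lambda, p))
```

For this normal case the infimum has a closed form (derived in Section 5), which makes the grid search easy to validate.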

Now we can use Monte Carlo simulation; that is, suppose we can generate a sample $\xi^1,\xi^2,\dots,\xi^N$ of $N$ replications of the random vector $\xi$. We can then approximate the expectation function by the average
\[
\mathbb{E}\big(Q(x,\xi)\big) \approx \frac{1}{N}\sum_{k=1}^{N} Q\big(x,\xi^k\big), \tag{4.7}
\]
and consequently we have the sample average approximation method:
\[
\min\; c^\top x + \frac{1}{N}\sum_{k=1}^{N} Q\big(x,\xi^k\big) \quad \text{s.t.}\quad Bx = b,\; x \ge 0, \tag{4.8}
\]
where
\[
Q\big(x,\xi^k\big) = \min\; q^\top y \quad \text{s.t.}\quad Ax + Wy = \xi^k,\; y \ge 0,\;
\inf_{t>0}\Big[ h_0(y) + \sum_{j=1}^{r} t\,\Lambda_j\big(t^{-1} h_j(y)\big) - t \log p \Big] \le 0. \tag{4.9}
\]
If we denote
\[
\omega(y) = \inf_{t>0}\Big[ h_0(y) + \sum_{j=1}^{r} t\,\Lambda_j\big(t^{-1} h_j(y)\big) - t \log p \Big], \tag{4.10}
\]
then $\omega(y^k) \le 0$ is a convex constraint, and it is conservative for each $k=1,2,\dots,N$ in the sense that if, for
\[
y^k \in \big\{ y \in \mathbb{R}^m : Ax + Wy = \xi^k,\; y \ge 0 \big\}, \tag{4.11}
\]
it holds that $\omega(y^k) \le 0$, then
\[
\Pr\big\{ H\big(y^k,\zeta\big) \le 0 \big\} \ge 1 - p \tag{4.12}
\]
or, equivalently,
\[
\Pr\Big\{ h_0\big(y^k\big) + \sum_{j=1}^{r} \zeta_j\, h_j\big(y^k\big) \le 0 \Big\} \ge 1 - p, \tag{4.13}
\]
and we obtain the following deterministic problem with nonlinear constraints:
\[
\min_{x,\,y^1,\dots,y^N}\; c^\top x + \frac{1}{N}\sum_{k=1}^{N} q^\top y^k
\quad \text{s.t.}\quad Bx = b,\;
Ax + W y^k = \xi^k,\; \omega\big(y^k\big) \le 0,\; y^k \ge 0,\; k=1,2,\dots,N,\; x \ge 0. \tag{4.14}
\]
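The sample average approximation (4.7) itself is straightforward to illustrate. The sketch below uses a deliberately trivial second stage of our own design ($W = I$ and the chance constraint assumed inactive, so $Q(x,\xi) = q^\top(\xi - Ax)$ in closed form); it only demonstrates that the average of scenario values converges to the true expectation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy instance (our own illustrative data): W = I, so the second-stage
# solution is y = xi - A x; we assume omega(y) <= 0 is inactive here,
# hence Q(x, xi) = q @ (xi - A x).
A = np.array([[1.0, 0.0], [0.0, 1.0]])
q = np.array([2.0, 3.0])
x = np.array([0.5, 0.5])

def Q(x, xi):
    y = xi - A @ x
    assert np.all(y >= 0), "scenario must keep the second stage feasible"
    return q @ y

# Scenarios: xi uniform on [1, 2]^2, so E[Q] = q @ (E[xi] - A x) = 5.
N = 100_000
xis = rng.uniform(1.0, 2.0, size=(N, 2))
saa_value = np.mean([Q(x, xi) for xi in xis])
print(saa_value)   # should be close to 5.0
```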

Remark 4.1. Theoretical studies and numerical experiments have demonstrated that quasi-Monte Carlo techniques can significantly improve the accuracy of the sample average approximation problem; for a general discussion of quasi-Monte Carlo methods, see the works by Niederreiter [20, 21]. Moreover, problem (4.5) is not the only way to obtain a conservative convex approximation of the chance-constrained problems; one can also use the convex approximation obtained via Conditional Value at Risk (see [10] and the work by Rockafellar and Uryasev [22]). However, our aim in this paper is to show a numerical methodology for tackling this type of model.

Denote by $X = \{x \in \mathbb{R}^n : Bx = b,\; x \ge 0\}$ and $Y = \{y \in \mathbb{R}^m : \omega(y) \le 0,\; y \ge 0\}$ the convex subsets of the feasible set of problem (4.14) that do not depend on the sample generated from the random vector $\xi$. Then problem (4.14) can be rewritten as
\[
\min\; c^\top x + \frac{1}{N}\sum_{k=1}^{N} q^\top y^k
\quad \text{s.t.}\quad W y^k = \xi^k - Ax,\; k=1,2,\dots,N,\; x \in X,\; y^k \in Y,\; k=1,2,\dots,N, \tag{4.15}
\]
and we can then take advantage of separability. If we denote
\[
v\big(u^k\big) = \min\big\{ q^\top y : Wy = u^k,\; y \in Y \big\}, \tag{$P_k$}
\]
where $u^k = \xi^k - Ax$ for all $k=1,2,\dots,N$, we have
\[
v\big(\xi^k - Ax\big) = \max\big\{ r(\lambda) - \lambda^\top\big(\xi^k - Ax\big) : \lambda \in \mathbb{R}^s \big\} \tag{4.16}
\]
for all $k=1,2,\dots,N$, where
\[
r(\lambda) = \inf\big\{ q^\top y + \lambda^\top W y : y \in Y \big\}. \tag{4.17}
\]
Note that $r(\lambda) - \lambda^\top(\xi^k - Ax)$ is the dual function corresponding to $v$, and the master problem
\[
\min\Big\{ c^\top x + \frac{1}{N}\sum_{k=1}^{N} v\big(\xi^k - Ax\big) : x \in X \Big\} \tag{M.P.}
\]
can be solved using a differentiable descent method if $r(\lambda) = \inf\{(q + W^\top\lambda)^\top y : y \in Y\}$ is strictly concave over the set $\{\lambda : r(\lambda) > -\infty\}$. However, this last assumption is very restrictive; in fact, in our specific case it is not satisfied because the objective function is linear, so we have to study under what conditions the gradient of the value function $v(u)$ can be calculated explicitly.

Since the master problem (M.P.) has linear constraints, it can be solved using the Frank-Wolfe method, for which we only need to know the gradient of $v(\xi^k - Ax)$; but
\[
\nabla_x v\big(\xi^k - Ax\big) = -A^\top \nabla v\big(u^k\big) = A^\top \lambda^k \tag{4.18}
\]
for each $k=1,\dots,N$, where $\lambda^k$ is the Lagrange multiplier associated with the linear constraint at the optimal solution of subproblem ($P_k$). Therefore, our problem now is how to find the value of this multiplier $\lambda^k$.
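In practice, LP solvers return these sensitivities directly. The sketch below, on a toy subproblem of our own (with $Y$ reduced to $\{y \ge 0\}$ so the subproblem is a plain LP), uses SciPy's HiGHS interface, which reports the sensitivity $\partial v/\partial u$ of the optimal value with respect to the equality right-hand side as `eqlin.marginals`; the chain rule then gives the master-problem gradient. Note that, depending on the solver's sign convention, this sensitivity may be the negative of the paper's multiplier $\lambda^k$, which is why we work with $\partial v/\partial u$ directly:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (ours, for illustration): v(u) = min { q @ y : W y = u, y >= 0 }.
q = np.array([1.0, 2.0])
W = np.array([[1.0, 1.0]])
A = np.array([[2.0, 0.5]])   # hypothetical first-stage matrix, s = 1, n = 2

def v_and_sensitivity(u):
    res = linprog(q, A_eq=W, b_eq=u, bounds=(0, None), method="highs")
    assert res.status == 0
    # eqlin.marginals = dv/du for the equality constraints (HiGHS duals).
    return res.fun, np.asarray(res.eqlin.marginals)

x, xi = np.array([1.0, 2.0]), np.array([6.0])
u = xi - A @ x               # u = 3
val, dv_du = v_and_sensitivity(u)
grad_x = -A.T @ dv_du        # chain rule: grad_x v(xi - A x) = -A^T dv/du
print(val, grad_x)
```

A finite-difference check on $v$ confirms that the reported marginal really is $\partial v/\partial u$ for this instance.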

5. Normal Distribution

In this section we investigate the case when the random vector 𝜁=(𝜁1,𝜁2,,𝜁𝑟) supported on Θ𝑟, has all its components normally distributed.

Let us suppose that $\zeta_j \sim N(\mu_j,\sigma_j^2)$, $j=1,2,\dots,r$; then the moment generating function is
\[
M_j(t) = \exp\Big( \mu_j t + \frac{\sigma_j^2 t^2}{2} \Big), \tag{5.1}
\]
\[
\Lambda_j(t) = \log M_j(t) = \mu_j t + \frac{\sigma_j^2 t^2}{2} \tag{5.2}
\]
for each $j=1,2,\dots,r$.

Proposition 5.1. The Bernstein approximation of the chance-constrained subproblems is given by
\[
\omega(y) = h_0(y) + \sum_{j=1}^{r} \mu_j\, h_j(y) + \sqrt{-2\log p \sum_{j=1}^{r} \sigma_j^2\, h_j^2(y)}. \tag{5.3}
\]
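The closed form (5.3) is cheap to evaluate and easy to cross-check against a direct grid search over $t$ in the variational definition (4.10). The numerical values of $h_0(y)$, $h_j(y)$, the means, variances, and $p$ below are toy data of our own:

```python
import numpy as np

def omega_normal(h0, hvals, mu, sigma, p):
    """Closed-form Bernstein bound (5.3) for independent normal zeta_j:
       omega = h_0 + sum_j mu_j h_j + sqrt(-2 log p * sum_j sigma_j^2 h_j^2)."""
    return h0 + mu @ hvals + np.sqrt(-2.0 * np.log(p) * ((sigma * hvals) ** 2).sum())

# Toy values of h_0(y) and h_j(y) at some fixed y, chosen for illustration.
h0, hvals = -1.0, np.array([0.5, 0.2])
mu, sigma, p = np.array([0.0, 0.3]), np.array([0.4, 0.1]), 0.1

# Direct grid search over t in (4.10), using Lambda_j(s) = mu_j s + sigma_j^2 s^2 / 2.
t_grid = np.logspace(-4, 4, 4001)
grid_val = min(
    h0 + sum(t * (m * (hh / t) + 0.5 * s**2 * (hh / t) ** 2)
             for m, s, hh in zip(mu, sigma, hvals)) - t * np.log(p)
    for t in t_grid
)
print(omega_normal(h0, hvals, mu, sigma, p), grid_val)
```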

Proof. As we saw before, the Bernstein approximation is a conservative convex approximation of the chance constraints defined as
\[
\omega(y) = \inf_{t>0}\Big[ h_0(y) + \sum_{j=1}^{r} t\,\Lambda_j\big(t^{-1} h_j(y)\big) - t \log p \Big], \tag{5.4}
\]
and substituting the expression given in (5.2) into the above relationship, we obtain
\[
\omega(y) = \inf_{t>0}\Big[ h_0(y) + \sum_{j=1}^{r} t\Big( \frac{\mu_j h_j(y)}{t} + \frac{\sigma_j^2 h_j^2(y)}{2t^2} \Big) - t \log p \Big]
= \inf_{t>0}\Big[ h_0(y) + \sum_{j=1}^{r} \mu_j h_j(y) + \frac{1}{2t}\sum_{j=1}^{r} \sigma_j^2 h_j^2(y) - t \log p \Big] \tag{5.5}
\]
and then
\[
\omega(y) = h_0(y) + \sum_{j=1}^{r} \mu_j h_j(y) + \inf_{t>0}\Big[ \frac{1}{2t}\sum_{j=1}^{r} \sigma_j^2 h_j^2(y) - t \log p \Big]. \tag{5.6}
\]
Let us denote by $f(t) = a/(2t) - t\log p$ the auxiliary function, with $a = \sum_{j=1}^{r}\sigma_j^2 h_j^2(y)$. Since $\log p < 0$ for $p \in (0,1)$, it is easy to see that the stationary point $\hat t = \sqrt{-a/(2\log p)}$ is a global minimum of the function $f$; therefore, from (5.6) we can conclude, after some calculations, that
\[
\omega(y) = h_0(y) + \sum_{j=1}^{r} \mu_j h_j(y) + \sqrt{-2a\log p} \tag{5.7}
\]
and finally, substituting $a = \sum_{j=1}^{r}\sigma_j^2 h_j^2(y)$, we have
\[
\omega(y) = h_0(y) + \sum_{j=1}^{r} \mu_j h_j(y) + \sqrt{-2\log p \sum_{j=1}^{r} \sigma_j^2 h_j^2(y)}. \tag{5.8}
\]
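The auxiliary minimization in the proof can be verified numerically: for $f(t) = a/(2t) - t\log p$ with $a > 0$ and $p \in (0,1)$, the minimizer should be $\hat t = \sqrt{-a/(2\log p)}$ with minimum value $\sqrt{-2a\log p}$. The values of $a$ and $p$ below are arbitrary test data:

```python
import numpy as np

a, p = 0.7, 0.05
f = lambda t: a / (2 * t) - t * np.log(p)

t_hat = np.sqrt(-a / (2 * np.log(p)))   # stationary point from the proof
f_min = np.sqrt(-2 * a * np.log(p))     # claimed minimum value

# Brute-force comparison over a fine grid of t > 0.
t_grid = np.linspace(1e-4, 10.0, 200_000)
print(t_hat, f_min, f(t_grid).min())
```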

Proposition 5.2. Let
\[
\tilde y^k \in \operatorname{argmin}\big\{ q^\top y : Wy = u^k,\; y \ge 0 \big\} \tag{5.9}
\]
and suppose that $\omega(\tilde y^k) > 0$. Then there is
\[
y^k \in \operatorname{argmin}\big\{ q^\top y : Wy = u^k,\; y \ge 0,\; \omega(y) \le 0 \big\} \tag{5.10}
\]
such that $\omega(y^k) = 0$.

Proof. The existence of $y^k$ depends only on whether the feasibility set
\[
S = \big\{ y \in \mathbb{R}^m : Wy = u^k,\; y \ge 0 \big\} \cap \big\{ y \in \mathbb{R}^m : \omega(y) \le 0 \big\} \tag{5.11}
\]
is nonempty. On the other hand, $\omega(y)$ is a convex function, so $S$ is a convex set. Let $\hat y^k \in \operatorname{argmin}\{q^\top y : y \in S\}$ and consider the segment
\[
y^k(\alpha) = \alpha\,\hat y^k + (1-\alpha)\,\tilde y^k, \quad \alpha \in [0,1], \tag{5.12}
\]
whose points satisfy $W y^k(\alpha) = u^k$ and $y^k(\alpha) \ge 0$. By continuity of $\omega$, since $\omega(\tilde y^k) > 0$ and $\omega(\hat y^k) \le 0$, there is $\bar\alpha \in [0,1]$ such that $\omega(y^k(\bar\alpha)) = 0$.
Denote $\theta(\alpha) = q^\top y^k(\alpha)$; then $\theta'(\alpha) = q^\top(\hat y^k - \tilde y^k) \ge 0$ for all $\alpha \in [0,1]$, because $q^\top \tilde y^k \le q^\top \hat y^k$. This implies that the function $\theta(\alpha)$ is monotone increasing on $[0,1]$, and therefore
\[
\theta(0) \le \theta(\bar\alpha) \le \theta(1), \tag{5.13}
\]
so $q^\top y^k(\bar\alpha) \le q^\top \hat y^k$ and
\[
y^k(\bar\alpha) \in \operatorname{argmin}\big\{ q^\top y : Wy = u^k,\; y \ge 0,\; \omega(y) \le 0 \big\}, \tag{5.14}
\]
where $\omega(y^k(\bar\alpha)) = 0$. Finally, it is enough to set $y^k = y^k(\bar\alpha)$.

Now we analyze the two possible cases for each $k=1,\dots,N$. Let
\[
\tilde y^k \in \operatorname{argmin}\big\{ q^\top y : Wy = u^k,\; y \ge 0 \big\}. \tag{5.15}
\]

Case 1. If $\omega(\tilde y^k) \le 0$, then $y^k = \tilde y^k$, and $\lambda^k$ is the Lagrange multiplier associated with the linear constraint $Wy = u^k$.

Case 2. If $\omega(\tilde y^k) > 0$, then, using the results of the previous proposition, we have to find the solution of the penalized problem
\[
\min\; q^\top y + C_k\,\omega^2(y) \quad \text{s.t.}\quad Wy = u^k,\; y \ge 0, \tag{5.16}
\]
for a penalty parameter $C_k$ sufficiently large. To solve this problem, we can again apply the iterative method of Frank and Wolfe, where each iteration solves a linear problem; we have
\[
y^k(j+1) = y^k(j) + \gamma_j\big( \hat y^k(j) - y^k(j) \big), \tag{5.17}
\]
where $\gamma_j$ is chosen by the limited minimization rule or the Armijo rule, and
\[
\hat y^k(j) \in \operatorname{argmin}\Big\{ \big[ q + 2C_k\,\omega\big(y^k(j)\big)\,\nabla\omega\big(y^k(j)\big) \big]^\top \big( y - y^k(j) \big) : Wy = u^k,\; y \ge 0 \Big\}. \tag{5.18}
\]
If we denote by $\lambda^k(j)$ the Lagrange multiplier associated with the linear constraint $Wy = u^k$ in (5.18), and let $y^k$ be an accumulation point of the sequence $\{y^k(j)\}$, that is, there is a subsequence $\{y^k(j)\}_{j\in\mathcal{J}}$ converging to $y^k$, then we define $\lambda^k$ as the corresponding limit point of the subsequence $\{\lambda^k(j)\}_{j\in\mathcal{J}}$.
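The penalized Frank-Wolfe iteration (5.16)-(5.18) can be sketched on a small instance. Everything below is toy data of our own: a simple convex surrogate plays the role of $\omega$, the gradient in (5.18) is replaced by central finite differences, and the limited minimization rule is implemented as a grid line search; the linearized subproblem is an LP solved with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: omega(y) = -1 + 2*|y1 - y2| stands in for the (convex)
# Bernstein bound; W = [[1, 1]], u = [3], q = [1, 2], penalty C = 50.
q = np.array([1.0, 2.0])
W = np.array([[1.0, 1.0]])
u = np.array([3.0])
C = 50.0

omega = lambda y: -1.0 + 2.0 * abs(y[0] - y[1])
F = lambda y: q @ y + C * omega(y) ** 2        # penalized objective (5.16)

def num_grad(f, y, eps=1e-6):
    """Central finite-difference gradient (replaces the analytic one)."""
    g = np.zeros_like(y)
    for i in range(len(y)):
        e = np.zeros_like(y); e[i] = eps
        g[i] = (f(y + e) - f(y - e)) / (2 * eps)
    return g

y = np.array([3.0, 0.0])          # relaxed minimizer; omega(y) = 5 > 0 here
for _ in range(100):
    # Linearized subproblem (5.18): an LP over { W y = u, y >= 0 }.
    res = linprog(num_grad(F, y), A_eq=W, b_eq=u, bounds=(0, None),
                  method="highs")
    assert res.status == 0
    s = res.x
    # Limited minimization rule: line search on [0, 1] by a fine grid.
    gammas = np.linspace(0.0, 1.0, 2001)
    vals = [F(y + g * (s - y)) for g in gammas]
    y = y + gammas[int(np.argmin(vals))] * (s - y)

print(y, q @ y, omega(y))
```

For this instance the constrained optimum has value $4.25$ with $\omega = 0$ at the solution, consistent with Proposition 5.2; the penalized iterates land close to it.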

6. Conclusions

In this paper, we presented a methodology for numerically solving two-stage stochastic linear programs when chance constraints are included in the second stage. It suggests treating the two probability measures involved in the problem differently. Since the major difficulty of the problem lies in the second stage, we chose to assume a sample of replications of the random vector involved in the expected value in the objective function and to approximate that expectation by the average. For the chance constraints defined in the second stage, the main idea was to obtain a convex conservative approximation and thereby arrive at an efficiently solvable deterministic nonlinear optimization program for each scenario in the sample. Since the sample size is generally very large, and a nonlinear optimization problem must be solved for each replication, a decomposition method for the general deterministic problem was proposed. Although the problem looks computationally unwieldy, for the special case in which the random vector of the second-stage probability constraint has all its components normally distributed, an explicit Bernstein approximation function was obtained, and we showed how each nonlinear optimization problem can be solved separately.

Acknowledgments

The author wishes to thank the referees for their careful reading and constructive remarks. This work has been supported by CONICYT (Chile) under FONDECYT Grant 1090063.