Abstract

We study a new class of optimization problems called stochastic separated continuous conic programming (SSCCP). SSCCP is an extension of the optimization model called separated continuous conic programming (SCCP), which has applications in robust optimization and sign-constrained linear-quadratic control. Based on the relationship among SSCCP, its dual, and their discretization counterparts, we develop a strong duality theory for SSCCP. We also suggest a polynomial-time approximation algorithm that solves SSCCP to any predefined accuracy.

1. Introduction

Stochastic programming is a branch of optimization that has developed rapidly in recent years. It seeks optimal decisions in problems involving uncertain data and is therefore also called "optimization under uncertainty" [1]. Since real-world problems often involve uncertain data, stochastic programming has a wide range of applications.

Many deterministic optimization models have stochastic counterparts; for example, the stochastic counterpart of linear programming is stochastic linear programming. In this paper, we consider the stochastic counterpart of an optimization model called separated continuous conic programming (SCCP), which takes the form shown in (1). In (1), the control and state variables (both are decision variables) are vectors of bounded measurable functions of time; the cones appearing in the constraints are closed convex cones in Euclidean spaces of appropriate dimensions; the remaining data are given vectors and matrices of compatible dimensions, and the superscript denotes the transpose operation.

SCCP was first studied by Wang et al. [2], who developed a strong duality theory for SCCP under some mild and verifiable conditions and suggested an approximation algorithm that solves SCCP to a predefined precision. SCCP has a variety of applications in robust optimization and sign-constrained linear-quadratic control. However, many applications of SCCP are stochastic in nature, in the sense that the values of some parameters in the resulting models may change over time according to some probability distribution. To incorporate this kind of randomness into the model, we introduce the following stochastic counterpart of SCCP, which we call the stochastic separated continuous conic programming (SSCCP) problem, in which some of the problem data depend on a random variable.

SSCCP is formulated with the same idea as stochastic linear programming [1, 3]. There are two stages in this problem; the values of some parameters in the second stage depend on the realization of a random variable.
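For orientation, the classical two-stage stochastic linear program with recourse takes the following form (this is the standard textbook formulation behind [1, 3]; the notation is generic and not tied to the displays of this paper):

\[
\begin{aligned}
\min_{x}\quad & c^{\top}x \;+\; \mathbb{E}_{\omega}\bigl[\,Q(x,\omega)\,\bigr]\\
\text{s.t.}\quad & Ax = b,\quad x \ge 0,
\end{aligned}
\qquad\text{where}\qquad
\begin{aligned}
Q(x,\omega) \;=\; \min_{y}\quad & q(\omega)^{\top}y\\
\text{s.t.}\quad & W(\omega)\,y \;=\; h(\omega) - T(\omega)\,x,\quad y \ge 0.
\end{aligned}
\]

The first-stage decision x is taken before the random element is observed, and the second-stage (recourse) decision y is taken after it is revealed; SSCCP follows the same two-stage pattern, with conic constraints and continuous-time controls.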

Our goal in this paper is to develop a strong duality theory for SSCCP and to suggest a method that solves it approximately to a predefined precision. Here is a summary of our main results. Through discretization, we connect SSCCP and its dual to two ordinary conic programs, and we show that strong duality holds for SSCCP and its dual under some mild (and verifiable) conditions on these two ordinary conic programs. Furthermore, the optimal values of those two conic programs provide an explicit bound on the duality gap between SSCCP and its dual, based on which we suggest a polynomial-time approximation algorithm that solves SSCCP to any predefined accuracy. To the best of our knowledge, we are the first to propose the SSCCP model, and there are no other results on SSCCP besides those in this paper.

The paper is organized as follows. In Section 2, we present an overview of the related literature. We also give a concrete example to show the application of SSCCP. In Section 3, we construct a dual for SSCCP. We also discretize SSCCP and its dual into two ordinary conic programs and bring out their relations. In Section 4, we discuss strong feasibility for SSCCP, its dual, and their discretizations. We then establish the strong duality result for SSCCP and its dual in Section 5. This leads to a polynomial-time approximation algorithm with an explicit error bound, detailed in Section 6. In Section 7, we summarize our results for SSCCP and point out some future research directions.

For simplicity of presentation, in the remainder of this paper we concentrate on the following problem, which is the SSCCP obtained when the random variable is discrete and takes only two different values with given probabilities; that is, there are only two scenarios in the second stage of the SSCCP. The problem, labeled (3), involves first-stage control and state variables together with second-stage control and state variables, one set for each of the two scenarios; the remaining data are given vectors and matrices of compatible dimensions.
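In this two-scenario setting, the expectation in the objective collapses into a weighted sum. Schematically (generic notation; f_1 and f_2 denote the second-stage objective values under the two scenarios and p_1, p_2 their probabilities):

\[
\mathbb{E}_{\omega}\bigl[f(\omega)\bigr] \;=\; p_{1}\,f_{1} \;+\; p_{2}\,f_{2},
\qquad p_{1} + p_{2} = 1,\quad p_{1},\,p_{2} \ge 0.
\]

In particular, the two scenario blocks enter the objective simply with weights p_1 and p_2.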

Note that although (3) is a deterministic optimization problem, it is not an SCCP. To see why this is the case, one can try to cast (3) in the form of SCCP; it then becomes clear that (3) cannot fit into that form.

In the rest of this paper, we will use some results on conic programming without explanation. Interested readers may consult books on conic programming (e.g., [4]) for the related results.

2. Literature Review

Bellman [5, 6] first introduced the so-called continuous linear programming (CLP). The model has wide-ranging applications (e.g., the bottleneck problem [5]), but CLP is very difficult to solve in its general form. Later, Anderson [7] introduced separated continuous linear programming (SCLP) (see (5)), a special case of CLP, to model job-shop scheduling problems. The word "separated" refers to the fact that there are two kinds of constraints in SCLP: the constraints involving integration and the instantaneous constraints [7].
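One commonly cited statement of SCLP is the following (the notation is generic and not necessarily that of display (5)):

\[
\begin{aligned}
\max\quad & \int_{0}^{T} c(t)^{\top} u(t)\,dt\\
\text{s.t.}\quad & \int_{0}^{t} G\,u(s)\,ds \;\le\; a(t), && t \in [0,T],\\
& H\,u(t) \;\le\; b(t), && t \in [0,T],\\
& u(t) \;\ge\; 0, && t \in [0,T].
\end{aligned}
\]

The first family of constraints involves integrals of the control u (these are the constraints "involving integration"), while the second family constrains u instantaneously at each time t; this is the "separated" structure referred to above.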

Anderson et al. [8] studied the properties of the extreme solutions of SCLP, based on which Anderson and Philpott [9] developed a simplex-type algorithm for a network-based SCLP. Refer to Anderson and Philpott [10] and Anderson and Nash [11] for their other results on SCLP. Pullan [12-18] continued the study of SCLP in a series of papers, systematically developing a duality theory and solution algorithms for SCLP.

Other research has focused on other forms of SCLP, including Luo and Bertsimas [19], Shapiro [20], Fleischer and Sethuraman [21], Weiss [22], and Nasrabadi et al. [23].

One extension of SCLP, introduced by Wang et al. [2], is SCCP, in which the constraints involve convex cones on their right-hand sides. When all the convex cones are nonnegative orthants, SCCP reduces to SCLP. In [2], based on the relationship among SCCP, its dual, and their discretization counterparts, the authors develop a strong duality theory for SCCP. They also suggest a polynomial-time approximation algorithm that solves SCCP to any predefined accuracy.

Wang [24, 25] extends SCCP to generalized separated continuous conic programming (GSCCP) by allowing the parameters in (1) to be piecewise constant, and extends the results of [2] for SCCP to GSCCP. In this paper, we extend SCCP to SSCCP by allowing the values of some parameters of SCCP to change in the second stage. We also extend the results of [2] for SCCP to SSCCP.

2.1. A Motivating Example for SSCCP

We consider a problem which appears in [2]; for completeness, we reproduce the problem description and the formulation below.

A network processes a continuous flow of jobs on two machines. The jobs visit machines 1 and 2 in a fixed order comprising a total of three processing steps; see Figure 1. Corresponding to each processing step, there is a buffer holding the fluid. At time zero, the initial fluid levels at the three steps are 50, 20, and 120 units. The input rates of fluid from outside into the three buffers are 0.01, 0.01, and 0.01, respectively. To process each unit of job ("fluid"), the time requirements at the three steps are 0.4, 0.8, and 0.2 time units, respectively.

The problem is to find the processing rates at the three steps, which determine the fluid levels in the three buffers during a given time interval, such that the fluid levels in the three buffers are maintained as close as possible to a prespecified constant level.

The problem can be formulated accordingly, and the resulting formulation can be further expressed in the form of SCCP. Please refer to [2] for the details.
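As a rough illustration of the kind of constraints involved (the notation, and the way we write the machine capacity constraints, are our own; see [2] for the exact formulation), the level x_1(t) of the first buffer equals its initial content plus the exogenous inflow minus what has been processed, and each machine's capacity is shared by the steps it serves:

\[
\begin{aligned}
x_{1}(t) &= 50 \;+\; 0.01\,t \;-\; \int_{0}^{t} u_{1}(s)\,ds, && t \in [0,T],\\
x_{k}(t) &\ge 0,\qquad u_{k}(t) \ge 0, && k = 1,2,3,\ t \in [0,T],\\
\sum_{k \,\in\, S_{m}} \tau_{k}\,u_{k}(t) &\le 1, && m = 1,2,\ t \in [0,T],
\end{aligned}
\]

where u_k is the processing rate of step k, S_m is the set of steps performed on machine m, and \tau_1 = 0.4, \tau_2 = 0.8, \tau_3 = 0.2 are the per-unit processing times. A deviation-from-target objective on the buffer levels then completes the model.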

In reality, the values of some of these parameters could change during the planning horizon; for example, if machine 1 experiences a partial breakdown within some subinterval, the corresponding capacity of machine 1 will change during that period. This turns the formulation of the problem into an SSCCP. We omit the details here.

3. The Dual and Discretizations

3.1. The Dual

The dual of SSCCP that we will focus on is the following problem, in which the decision variables are bounded measurable functions and the cones appearing in the constraints are the dual cones of the corresponding cones in SSCCP.

The derivation of the above dual problem is similar to the derivation of the dual problem for LP (see, e.g., [26]), and we omit the details here. Because SSCCP involves time, to achieve some degree of symmetry in the dual (to facilitate the later analysis), we choose to write the dual in reversed time; that is, time in the dual runs backward relative to time in the primal.

The following weak duality result is readily shown from the derivation of the dual.

Proposition 1. Weak duality holds between SSCCP and its dual; that is, given any feasible solution to SSCCP and any feasible solution to its dual, the objective value of the former (a maximization problem) does not exceed the objective value of the latter.

Next we will introduce the discretizations of SSCCP and its dual, respectively, and discuss the relationships among SSCCP, its dual, and their discretizations. But first, we need the following notation and conventions, which mostly follow those used in [2].

Notation and Conventions
(i) When we say a collection of functions is a feasible solution to a problem, we mean that, taken together, they satisfy all the constraints of that problem.
(ii) By default, all vectors are column vectors; the one exception is the notation we use for solutions to SSCCP and its dual (or their variations), which we write as tuples of their control and state components.
(iii) A partition divides the time horizon into segments determined by an increasing sequence of breakpoints, where the numbers of segments are positive integers.
(iv) Given a partition and a vector with one component block per segment, the continuous function obtained by linear interpolation over the segments is called the piecewise linear extension of the vector, whereas the right-continuous function equal to the corresponding block on each segment is called the piecewise constant extension (a small illustration of these two extensions follows this list).
(v) When a feasible solution to SSCCP has piecewise constant controls and piecewise linear states, we assume the controls are right continuous and the states are continuous, with the pieces of the first-stage controls and states corresponding to a common partition and the pieces of the second-stage controls and states corresponding to a common partition. The analogous convention applies to feasible solutions of the dual.
(vi) For a given partition, we denote the length of each segment by the difference of its two consecutive breakpoints, and we use the analogous notation for the other partition.
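As a concrete illustration of items (iii) and (iv), here is a minimal sketch in Python (the function names and the use of plain Python lists are our own choices, not from the paper):

# Minimal sketch of the piecewise extensions in items (iii)-(iv).
# breakpoints: 0 = t_0 < t_1 < ... < t_n = T.

def piecewise_constant(breakpoints, segment_values, t):
    """Right-continuous step function: returns segment_values[k] for
    breakpoints[k] <= t < breakpoints[k+1] (the last segment includes T)."""
    for k in range(len(segment_values)):
        if breakpoints[k] <= t < breakpoints[k + 1]:
            return segment_values[k]
    return segment_values[-1]          # t == T

def piecewise_linear(breakpoints, node_values, t):
    """Continuous function obtained by linear interpolation between the
    values prescribed at the breakpoints."""
    for k in range(len(breakpoints) - 1):
        if breakpoints[k] <= t <= breakpoints[k + 1]:
            w = (t - breakpoints[k]) / (breakpoints[k + 1] - breakpoints[k])
            return (1 - w) * node_values[k] + w * node_values[k + 1]
    raise ValueError("t is outside the partition")

# Example: partition of [0, 2] into two equal segments.
bp = [0.0, 1.0, 2.0]
print(piecewise_constant(bp, [3.0, 5.0], 1.5))      # -> 5.0
print(piecewise_linear(bp, [0.0, 2.0, 2.0], 0.5))   # -> 1.0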

3.2. The Discretizations

We start by introducing the following discretization of SSCCP, based on a partition of the time horizon. Note that the partition is required to satisfy certain compatibility conditions. Clearly, the resulting problem is an ordinary conic program.
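The principle behind the discretization is standard: if the control is restricted to be piecewise constant on the partition, every integral in the constraints collapses into a finite sum, so the continuous-time problem reduces to an ordinary conic program in finitely many variables. Schematically, with generic notation 0 = t_0 < t_1 < ... < t_n = T and u(t) = u^j on (t_{j-1}, t_j]:

\[
\int_{0}^{t_{k}} G\,u(s)\,ds \;=\; \sum_{j=1}^{k} (t_{j} - t_{j-1})\,G\,u^{j},
\qquad k = 1,\dots,n,
\]

so constraints involving integration need only be imposed at the breakpoints of the partition.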

Lemma 2. From a feasible solution of the discretized problem, one can construct a feasible solution of SSCCP with the same objective value, provided the stated conditions on the problem data hold.

Proof. Suppose we are given a feasible solution of the discretized problem. Extend its control components to piecewise constant functions and its state components to piecewise linear functions over the partition, as described in the notation and conventions above. It then remains to verify, constraint block by constraint block and segment by segment, that the extended functions satisfy every constraint of SSCCP; each verification is a direct substitution, and the cone constraints are preserved because the extensions are convex combinations (respectively, copies) of points lying in the relevant cones. So the extended solution is feasible for SSCCP under the stated conditions on the data.
It is easy to see that the objective value of the extended solution is the same as that of the discrete solution. We omit the details here.

We now introduce the following discretization of the dual, based on a partition of the (reversed) time horizon. Note that we again impose certain compatibility conditions on the partition. Clearly, the resulting problem is also an ordinary conic program.

We now show the following.

Lemma 3. For any two convex cones, the following identity holds.

Proof. We verify the two inclusions separately. For the first inclusion, take an arbitrary element of the left-hand side; directly from the definitions of the two cones, it belongs to the right-hand side. The reverse inclusion is verified in the same way. The two inclusions together establish the identity.

Lemma 4. From a feasible solution of the discretized dual, one can construct a feasible solution of the dual with the same objective value, provided the stated conditions on the problem data hold.

Proof. Suppose we are given a feasible solution of the discretized dual. Extend its components to piecewise constant and piecewise linear functions over the partition, as above. Using Lemma 3 together with the stated conditions on the data, one checks that the cone constraints are preserved under these extensions.
The remaining verification is similar to the proof of Lemma 2, and we omit the details here.

4. Strong Feasibility

We say that a feasible solution to SSCCP is strongly feasible if, for the closed convex cones appearing in the constraints, all of which have nonempty interiors, the corresponding cone memberships hold in the interiors of those cones. We say that SSCCP is strongly feasible if there exists a strongly feasible solution. Similar notions apply to the dual problem.
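For comparison, for an ordinary conic program of the form min { c^T x : Ax = b, x in K } with a closed convex cone K, the analogous strict-feasibility (Slater-type) condition reads:

\[
\exists\,\bar{x}\ \text{ such that }\ A\bar{x} = b
\quad\text{and}\quad \bar{x} \in \operatorname{int}\mathcal{K},
\]

and it is this type of condition on the ordinary conic programs below that, together with finiteness of their optimal values, yields the strong duality and solvability statements used later (see, e.g., [4]).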

Next we will show that the strong feasibility of SSCCP and its dual can be determined by the strong feasibility of the following two conic programs:

Note that the constraints of these two conic programs are the same as the constraints of the discretizations of SSCCP and its dual, respectively, in a particular case. Their objectives, however, are different from those of the discretizations; these objectives are chosen to facilitate the explicit derivation of a bound on the duality gap; see the proof of Theorem 11.

Lemma 5. If the conic programs above are strongly feasible and the stated conditions on the data hold, then SSCCP is strongly feasible, and so is its discretization.

Proof. We first show that when the conic program is strongly feasible, SSCCP is strongly feasible.
Suppose we have a strongly feasible solution to the conic program, together with a suitably chosen constant. From these we define candidate control and state trajectories for SSCCP. By construction, the equality constraints hold and the cone memberships hold in the interiors of the corresponding cones; this is verified constraint block by constraint block on each stage. The resulting trajectories constitute a (two-piece) strongly feasible solution for SSCCP, so SSCCP is strongly feasible.
Now we show that from this strongly feasible solution for SSCCP we can obtain a strongly feasible solution for the discretization. Restricting the trajectories to the partition gives a candidate discrete solution, and a block-by-block check of the constraints shows that it is strongly feasible. Hence the discretization is strongly feasible as well.

Lemma 6. If the conic programs above are strongly feasible and the stated conditions on the data hold, then the dual of SSCCP is strongly feasible, and so is its discretization.

Proof. The proof is similar to that of Lemma 5; the details are omitted here.

We shall focus on one partition for the primal and one partition for the dual. Each of these partitions divides the corresponding time intervals into equal segments of the indicated lengths.

If we reverse the inner order of the variables in the discretized problem (that is, reverse the order of the blocks within it), we obtain the following problem. The coefficient vectors of this problem have a simple structure: in several of them, the difference between two adjacent items is a fixed quantity, while in the remaining ones all the items are equal.

The dual of this problem, without the constant term in its objective, is the following problem. Its coefficient vectors have the same structure: in several of them, the difference between two adjacent items is a fixed quantity, while in the remaining ones all the items are equal.

Now we write down the relationships between the input parameters of the two problems above; these relationships will be used in the proof of Theorem 11 in Section 5:

5. Strong Duality

In this section, we will prove that, under some mild and verifiable conditions, strong duality holds between SSCCP and its dual.

Let a partition of the primal time horizon be given as above, with its breakpoints and segment lengths specified accordingly.

Lemma 7. If the conic programs are strongly feasible and the stated conditions on the data hold, then the discretized problem introduced above is strongly feasible.

Proof. Write out the constraints of the discretized problem explicitly. Under the stated conditions on the data, these constraints match, term by term, the constraints of the problem considered above. Comparing the two sets of constraints, we observe that a strongly feasible solution of the latter yields a strongly feasible solution of the former.

From Lemma 5, we know that if the conic programs are strongly feasible and the stated conditions on the data hold, then the discretized primal problem is strongly feasible.

From Lemma 6, we know that if the conic programs are strongly feasible and the stated conditions on the data hold, then the discretized dual is strongly feasible. Now, from Lemma 7, if in addition the remaining conditions hold, then the dual of the discretized primal is strongly feasible.

So, under the condition that the two conic programs are strongly feasible and the stated conditions on the data hold, the discretized primal problem is solvable (an ordinary conic program is solvable whenever both it and its dual are strongly feasible; see, e.g., [4]).

The dual of the problem above, without the constant terms in the objective, is the following:

We use the following notation to denote the corresponding partition, with its breakpoints and segment lengths specified analogously.

Lemma 8. If the conic programs are strongly feasible and the stated conditions on the data hold, then the problem above is strongly feasible.

Proof. If we reverse the inner order of the variables, the constraints of the problem can be written out explicitly. Under the stated conditions on the data, comparing these constraints with the constraints of the problem considered above, we observe that a strongly feasible solution of the latter yields a strongly feasible solution of the former.

From Lemma 5, we know that if the conic program is strongly feasible and the stated conditions on the data hold, then the corresponding problem is strongly feasible. From Lemma 8, if in addition the remaining conditions hold, then the dual of the discretized dual is strongly feasible.

From Lemma 6, we know that if the conic program is strongly feasible and the stated conditions on the data hold, then the discretized dual is strongly feasible.

So, under the condition that the two conic programs are strongly feasible and the stated conditions on the data hold, the discretized dual is solvable.

Now we have the following.

Lemma 9. If the two conic programs are strongly feasible and the stated conditions on the data hold, then both the discretized primal and the discretized dual are solvable; that is, they have optimal solutions and their optimal objective values are finite.

Proposition 10. Suppose that the discretized primal and the discretized dual are solvable. Then the optimal value of the discretized primal is no greater than the optimal value of SSCCP, which is no greater than the optimal value of the dual of SSCCP, which in turn is no greater than the optimal value of the discretized dual.

Proof. Because the discretized primal and the discretized dual are solvable, they have finite optimal objective values.
From Lemma 2, the optimal solution of the discretized primal can be extended to a feasible solution of SSCCP; hence the first inequality (since SSCCP is a maximization problem). A similar argument, using Lemma 4, justifies the third inequality. The second inequality follows from the weak duality in Proposition 1.

Theorem 11. Suppose the two conic programs are strongly feasible with finite optimal values, and suppose the stated conditions on the data hold. If one lets the numbers of intervals in the primal and dual partitions be the same, both equal to a common value, then there exists a constant, independent of that value, such that the bound in (60) holds. Consequently, the optimal values of SSCCP and its dual coincide; that is, strong duality holds.

Proof. First note that the strong duality claim follows immediately from the inequality in (60) by letting the number of intervals tend to infinity, taking into account the inequalities in Proposition 10.
To establish the error bound in (60), consider the following primal-dual pair of conic programs. Note that the problem in (62) has the same constraints as one of the problems introduced earlier but the objective function of another, whereas the problem in (8) is the mirror image. Hence both members of this primal-dual pair are strongly feasible, since the corresponding problems are. Consequently, they both have optimal solutions, and their respective optimal objective values coincide.
We take an optimal solution of the primal in this pair and an optimal solution of the dual. Note that these are feasible solutions to the two discretized problems, respectively; hence we obtain the corresponding relations between their objective values and the optimal values of the discretized problems. Moreover, under the stated relations among the data, the coefficients involved simplify, and we obtain the next estimate.
From the primal feasibility of the first solution, we see that it induces a feasible solution to one of the auxiliary conic programs; thus the corresponding term is bounded in terms of the optimal value of that program. Similarly, from the dual feasibility of the second solution, it induces a feasible solution to the other auxiliary conic program, and the corresponding bound follows.
Putting the above together yields the inequality in (60); we can therefore take the constant in (60) as indicated, since the optimal values involved are finite, as assumed.

6. The Approximation Algorithm

Proposition 10 and Theorem 11 suggest that we can solve SSCCP and its dual approximately by solving their discretized versions, which are ordinary conic programs. The latter are readily solvable by standard algorithms, for example, SeDuMi [27], and the (discrete) solution can then be extended into piecewise-constant control and piecewise-linear state variables as a feasible solution to SSCCP. Furthermore, the explicit error bound in (60) means that we can achieve any required accuracy by using partitions with a sufficiently large number of intervals to construct the discretized conic programs. Specifically, given the required accuracy, we can choose the number of intervals accordingly, with the constant taken from the end of the proof of Theorem 11; then, from (59) and (60), the duality gap is guaranteed to be no greater than the required accuracy.
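To make the overall procedure concrete, here is a minimal sketch in Python. The routine build_discretization, the constant duality_gap_constant, and the accuracy value epsilon are hypothetical placeholders of our own (the paper does not prescribe them); build_discretization is assumed to assemble the data of the discretized conic program in the standard form min c^T x subject to Gx + s = h with s in a product cone, which is the form accepted by CVXOPT's conelp solver.

# Minimal sketch of the approximation scheme, under the assumptions above.
import math
from cvxopt import solvers

def solve_ssccp_approximately(epsilon, duality_gap_constant, build_discretization):
    # The error bound of Theorem 11 decreases with the number of intervals,
    # so choosing n >= duality_gap_constant / epsilon keeps the duality gap
    # within the required accuracy epsilon.
    n = max(1, math.ceil(duality_gap_constant / epsilon))

    # Assemble and solve the discretized conic program (an ordinary conic
    # program, solvable in polynomial time by interior-point methods).
    c, G, h, dims = build_discretization(n)
    solution = solvers.conelp(c, G, h, dims)

    # The discrete optimal solution would then be extended to piecewise-
    # constant controls and piecewise-linear states, as in Lemma 2.
    return n, solution

In practice, any conic solver (SeDuMi, CVXOPT, etc.) could be used in place of conelp, and the returned discrete solution would be extended to piecewise-constant controls and piecewise-linear states as described above.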

To select the number of intervals following (72), we first need to derive the constant in the error bound. This involves solving the two auxiliary conic programs. In addition, we also need to determine three further quantities, which can be accomplished by solving the following three conic programs: Note that the constraints of the above three problems originate from (14) and (27). Clearly, maximizing the first quantity and minimizing the other two improves our estimate of the error bound.

In summary, our algorithm amounts to solving six conic programming problems: the two auxiliary conic programs, the three problems in (75), and the discretized problem. Conic programs are known to be polynomially solvable; hence, ours is a polynomial-time algorithm.

Of course, as the number of intervals increases, the computational burden of solving the discretized problems increases. However, the discretized problems are all ordinary conic programming problems, and they are polynomially solvable. There exist software packages (e.g., SeDuMi [27], CVXOPT [28], etc.) that solve conic programming problems efficiently. So the increased computational burden does not really pose a problem for this algorithm.

7. Conclusion and Future Work

In this paper, we have developed a duality theory for SSCCP, which is an important extension of SCCP. Specifically, we have shown that strong duality between SSCCP and its dual is implied by two related ordinary conic programs being strongly feasible with finite optimal values. Based on the strong duality result, we have also developed a polynomial-time approximation algorithm that solves SSCCP to any desired accuracy with an easily computable error bound.

All these results can be readily generalized to SSCCP with three or more stages and with a finite number of scenarios in each stage, without essential difficulty.

From Theorem 11, we know that as the number of intervals tends to infinity, the duality gap tends to 0, and the optimal objective value of the discretized conic program approaches the optimal objective value of the original SSCCP. In the future, we plan to investigate whether the optimal solution of the discretization also converges to an optimal solution of SSCCP.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.