Special Issue: Mathematical Problems for Complex Networks

Research Article | Open Access


Xiaodong Ding, Chengliang Wang, "A Novel Algorithm of Stochastic Chance-Constrained Linear Programming and Its Application", Mathematical Problems in Engineering, vol. 2012, Article ID 139271, 17 pages, 2012. https://doi.org/10.1155/2012/139271

# A Novel Algorithm of Stochastic Chance-Constrained Linear Programming and Its Application

Academic Editor: Zidong Wang
Received: 20 Apr 2011; Accepted: 06 Jul 2011; Published: 07 Sep 2011

#### Abstract

The computation problem for stochastic chance-constrained linear programming is discussed, and a novel direct algorithm, a simplex algorithm based on stochastic simulation, is proposed. The programming problem considered in this paper is linear programming with chance constraints and random coefficients, and stochastic simulation is therefore an essential component of the proposed algorithm. The theoretical basis of the proposed algorithm is established by analysis, and its feasibility and validity are illustrated by numerical examples. A detailed algorithm procedure is given, which is easily converted into executable code in software tools. The algorithm is then compared with several existing algorithms to verify its superiority. Finally, a practical example is presented to show its practicability.

#### 1. Introduction

In the late 1950s, stochastic linear programming (SLP) appeared with the further application of linear programming. SLP is a special kind of linear programming problem in which some or all of the coefficients are random variables with a joint probability distribution. Generally speaking, there are two sorts of SLP models: the “wait-and-see” model, based on the hypothesis that the decision maker can wait until the random variables are realized, and the “here-and-now” model, in which the decision maker must decide before the random variables are realized.

Stochastic chance-constrained programming (SCP), first proposed by Charnes and Cooper [1], offers a powerful means of modeling stochastic decision and control systems (see, e.g., [2, 3]). SCP is mainly concerned with problems in which the decision maker must give a solution before the random variables are realized. Such a decision may violate the constraints to some degree, but the probability that it satisfies the constraints must not be less than a given confidence level. Stochastic chance-constrained linear programming (SCLP) is an important part of SCP.

As we know, the traditional method of solving SCLP is to convert it into an equivalent deterministic linear program and then obtain the optimal solution by deterministic algorithms. However, this method is only effective in some special cases: in general, SCLP cannot be converted into a deterministic linear or convex program; see the work of Kall and Wallace [5]. Even for problems that can be converted, the conversion is usually difficult and typically yields a complicated nonlinear program, which is itself a traditionally hard problem. Therefore, it is necessary and urgent to find direct and effective algorithms for SCLP. Fortunately, with the rapid development of computers, genetic algorithms based on stochastic simulation have been designed for SCP; see, for example, [6]. These intelligent algorithms are more direct and effective than converting SCP into deterministic programming. However, their disadvantages are obvious: they are usually designed for each specific problem, depend on experiments, and lack a common theoretical basis. Finding a better algorithm is therefore a necessary and urgent task, which motivates the present study.

Summarizing the above discussion, we aim to develop a novel, direct, and universal algorithm for the computation problem of stochastic chance-constrained linear programming. The main contributions of this paper are as follows: (i) several simple approaches are shown, by numerical example, to be ineffective or of limited use for SCLP; (ii) a novel direct algorithm, a simplex algorithm based on stochastic simulation, is proposed; (iii) the theoretical basis of the proposed algorithm is proved; (iv) the detailed procedures of the simplex algorithm are given. By numerical examples, our algorithm is compared with the traditional methods and with a genetic algorithm based on stochastic simulation, and it is verified to be feasible, valid, and better than both. Finally, a real-world example is given to illustrate the practicability of the developed algorithm.

#### 2. Research Model and Computation Problem of SCLP

In this paper, unless otherwise specified, $(\Omega, \mathcal{A}, \Pr)$ denotes a complete probability space, where $\Omega$ is a nonempty sample space, $\mathcal{A}$ is the power set of $\Omega$, and $\Pr$ is a probability measure on $\mathcal{A}$; a random variable is defined as a function from the probability space to the set of real numbers $\mathbb{R}$.

##### 2.1. Stochastic Linear Programming (SLP) Model

Firstly, we introduce the following SLP model:

$$\min\; c(\omega)^{T}x \quad \text{s.t.}\quad A(\omega)x \le b(\omega),\; x \ge 0, \qquad (2.1)$$

where $x$ is an $n$-dimensional vector to be determined, $A(\omega)=(a_{ij}(\omega))_{m\times n}$, $b(\omega)=(b_{1}(\omega),\dots,b_{m}(\omega))^{T}$, and $c(\omega)=(c_{1}(\omega),\dots,c_{n}(\omega))^{T}$, with $a_{ij}(\omega)$, $b_{i}(\omega)$, and $c_{j}(\omega)$ being random variables on $(\Omega,\mathcal{A},\Pr)$.

##### 2.2. Stochastic Chance-Constrained Linear Programming (SCLP) Model

SCLP usually includes two sorts of models, which can be formulated as follows:

$$\min\; c(\omega)^{T}x \quad \text{s.t.}\quad \Pr\{a_{i}(\omega)x \le b_{i}(\omega)\} \ge \alpha_{i},\; i=1,\dots,m,\; x \ge 0, \qquad (2.2)$$

$$\max\; c(\omega)^{T}x \quad \text{s.t.}\quad \Pr\{a_{i}(\omega)x \le b_{i}(\omega)\} \ge \alpha_{i},\; i=1,\dots,m,\; x \ge 0, \qquad (2.3)$$

where $a_{i}(\omega)$ is the $i$th row of $A(\omega)$, $b_{i}(\omega)$ is the $i$th element of $b(\omega)$, and $\alpha_{i}$ is the $i$th confidence level of the constraints.

##### 2.3. Computation Problem of SCLP

In this subsection, by solving a numerical example of SLP, the computation problem of SCLP is analyzed and several possible approaches are tried to obtain the optimal solution to the example.

Example 2.1. Let the coefficients be random variables uniformly distributed on a rectangle, and consider the following SLP problem:

Solution 1. In (2.4), the expectations of the random variables are easily derived. Therefore, a very simple idea is to replace the stochastic parameters by their respective expectations and solve the corresponding deterministic linear programming problem:
From (2.5), by deterministic linear programming algorithms, it is easy to derive the unique optimal solution.
Now, in order to analyze the feasibility of this approach, consider the feasible region, which is easily obtained from the example. The probability of the solution taking a value in the feasible region is only 0.25, which shows that the method of directly replacing random parameters by their expectations is unreliable.

Solution 2. It is well known that samples of the random variables are also easily obtained; therefore, another simple technique is to generate random samples of the random parameters, solve the deterministic linear programming problem corresponding to each sample, and then choose the best solution among them. Now, generate 10 samples of the random parameters in (2.4). From Table 1, it is easy to see that the largest probability is no more than 0.5. Consequently, this approach is almost useless for practical problems.
To further verify this conclusion about the technique in Solution 2, we consider the following stochastic chance-constrained programming, which relaxes the constraints in (2.4):
Generate 10000 samples of the random parameters, substitute them in, and solve all the resulting deterministic programming problems. Among the 10000 solutions, almost none satisfies the constraints of (2.8).

**Table 1.** Sampled solutions, objective values, and constraint-satisfaction probabilities.

| Number | Solution | Objective value | Probability |
|---|---|---|---|
| 1 | (2.20, 2.62) | 4.82 | 0.37 |
| 2 | (2.13, 2.68) | 4.81 | 0.38 |
| 3 | (1.46, 3.40) | 4.86 | 0.45 |
| 4 | (1.13, 3.34) | 4.48 | 0.16 |
| 5 | (1.75, 2.50) | 4.26 | 0.11 |
| 6 | (0.95, 3.44) | 4.39 | 0.05 |
| 7 | (3.18, 1.66) | 4.84 | 0.37 |
| 8 | (0.95, 3.23) | 4.19 | 0.00 |
| 9 | (1.70, 3.03) | 4.74 | 0.36 |
| 10 | (0.99, 3.39) | 4.81 | 0.38 |
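The failure mode of this sampling technique can be seen on a toy one-variable analogue (a hypothetical stand-in, since the data of Example 2.1 is not reproduced here): the sample that yields the best objective value is exactly the one whose optimum is least likely to remain feasible.

```python
import numpy as np

rng = np.random.default_rng(7)

def feas_prob(x, N=2000):
    """Monte Carlo estimate of Pr{a * x <= 4} for a ~ Uniform(1, 2)."""
    a = rng.uniform(1, 2, N)
    return float(np.mean(a * x <= 4))

# Naive technique of Solution 2: sample the coefficient a, solve the
# deterministic LP  max x  s.t.  a x <= 4, x >= 0  (optimum x = 4/a),
# then estimate how often that optimum stays feasible under fresh
# samples of a.  Analytically Pr{a' * (4/a) <= 4} = Pr{a' <= a} = a - 1,
# so the sample with the best objective (smallest a) is least reliable.
samples = rng.uniform(1, 2, 10)
probs = [feas_prob(4 / a) for a in samples]
```

Each estimate is approximately $a - 1$: the larger the per-sample optimum $4/a$, the lower the probability that it satisfies a fresh realization of the constraint.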

Solution 3. In the following, we analyze a traditional but indirect approach, which is to convert SLP into an equivalent deterministic programming problem and obtain the optimal solution by some deterministic programming algorithms.
From the constraints in (2.8), we have where .
Therefore, the constraints in (2.8) are equivalent to and then the SCLP model (2.8) is equivalent to the following deterministic nonlinear programming:
Setting the initial value, the optimal solution and the optimal value of this nonlinear programming model are obtained, together with the corresponding probability.

From the above calculation and analysis of the three methods, we conclude that the former two are not feasible despite their simplicity, while the third is valid. However, the third approach is indirect, and the equivalent deterministic program obtained is usually a nonlinear program whose computation is difficult and sometimes impossible. Therefore, it is necessary to find a novel direct computation method. Recently, a direct and effective approach, genetic algorithms, has been put forward and rapidly developed for solving stochastic chance-constrained linear programming; see, for example, the work of Liu, of Ding et al. [10], and of Ding and Sun [11]. However, these algorithms are usually designed for each specific problem, depend on experiments, and lack a common theoretical basis.

In the next section, we propose a direct and universal algorithm for SCLP, a simplex algorithm based on stochastic simulation, and build its theoretical basis. We then design the detailed procedures of the algorithm, which are easily converted into executable software code.

#### 3. Simplex Algorithm Based on Stochastic Simulation

In the SCLP model, on the one hand, the meaning of the constraint inequality is not clear because the coefficients are random; on the other hand, it is difficult to judge the convexity of SCLP, which is required by optimization theory. Therefore, the computation problem of SCLP is very difficult. In this section, we propose a satisfactory algorithm, a simplex algorithm based on stochastic simulation, to overcome this difficulty. Firstly, several subproblems in SCLP are handled by stochastic simulation. Then, we build the theoretical basis of this algorithm by theoretical analysis. Finally, the detailed procedures of this algorithm are designed.

According to the theory of stochastic chance-constrained linear programming, we can transform (2.2) into the following programming model:

$$\min\; \bar{f} \quad \text{s.t.}\quad \Pr\{c(\omega)^{T}x \le \bar{f}\} \ge \beta, \quad \Pr\{a_{i}(\omega)x \le b_{i}(\omega)\} \ge \alpha_{i},\; i=1,\dots,m,\; x \ge 0, \qquad (3.1)$$

where $\bar{f}$ is the target function value and $\beta$, $\alpha_{i}$ are the confidence levels of the target function and the constraints, respectively.

Similarly, model (2.3) can be transformed into

$$\max\; \bar{f} \quad \text{s.t.}\quad \Pr\{c(\omega)^{T}x \ge \bar{f}\} \ge \beta, \quad \Pr\{a_{i}(\omega)x \le b_{i}(\omega)\} \ge \alpha_{i},\; i=1,\dots,m,\; x \ge 0. \qquad (3.2)$$

##### 3.1. Stochastic Simulation
###### 3.1.1. Judging the Chance Constraint

Consider the chance constraints in (3.1), where the random matrix $A(\omega)$ and the random vector $b(\omega)$ have a known joint probability distribution. Given a candidate solution $x$, we check whether the chance constraint holds by applying the stochastic simulation (Monte Carlo) method. The algorithm is as follows.

Algorithm 3.1. Chance constraint judging algorithm.
Step 1. Set $N' = 0$.
Step 2. Sample $(A, b)$ according to the joint probability distribution of the random coefficients.
Step 3. If the sampled constraint $Ax \le b$ holds, then $N' \leftarrow N' + 1$.
Step 4. Repeat Steps 2 and 3 $N$ times.
Step 5. If $N'/N \ge \alpha$, return FEASIBLE; otherwise, return INFEASIBLE.
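A minimal Python sketch of Algorithm 3.1; the sampler `sampler` and all numeric data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def chance_constraint_feasible(x, sample_Ab, alpha, N=1000, seed=None):
    """Algorithm 3.1: Monte Carlo check of Pr{A(w) x <= b(w)} >= alpha.

    sample_Ab(rng) must return one joint sample (A, b) of the random
    coefficient matrix and right-hand side.
    """
    rng = np.random.default_rng(seed)
    count = 0                       # Step 1: N' = 0
    for _ in range(N):              # Step 4: repeat N times
        A, b = sample_Ab(rng)       # Step 2: sample (A, b)
        if np.all(A @ x <= b):      # Step 3: count satisfied samples
            count += 1
    return count / N >= alpha       # Step 5: compare N'/N with alpha

# Illustrative use: A is the 2x2 identity, b is uniform on [3, 5]^2.
sampler = lambda rng: (np.eye(2), rng.uniform(3, 5, size=2))
ok = chance_constraint_feasible(np.array([2.0, 2.0]), sampler, alpha=0.9, seed=42)
```

For $x = (2, 2)$ every sample satisfies $Ax \le b$, so the check succeeds; a point such as $(4.9, 4.9)$ satisfies it only rarely and is rejected.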

###### 3.1.2. Handling the Target Function

Consider the target function $c(\omega)^{T}x$ with the random parameter vector $c(\omega)$:

For any given vector $x$, the minimum objective value $\bar{f}$ satisfying the chance constraint $\Pr\{c(\omega)^{T}x \le \bar{f}\} \ge \beta$ can always be estimated by stochastic simulation, and the algorithm is as follows.

Algorithm 3.2. Minimum target function searching algorithm.
Step 1. Sample $N$ random vectors $c^{1},\dots,c^{N}$ according to the probability distribution of $c(\omega)$.
Step 2. Compute the values $(c^{k})^{T}x$ and arrange them in ascending order.
Step 3. Set $N'$ as the integer part of $\beta N$.
Step 4. Return the $N'$th element of the ascending arrangement as the estimation of $\bar{f}$.
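Algorithm 3.2 amounts to taking an empirical $\beta$-quantile of the sampled objective values. A sketch, with an illustrative sampler (the distribution below is an assumption for demonstration only):

```python
import numpy as np

def estimate_fbar(x, sample_c, beta, N=1000, seed=None):
    """Algorithm 3.2: estimate the minimal f with Pr{c(w)^T x <= f} >= beta
    as the empirical beta-quantile of N sampled objective values."""
    rng = np.random.default_rng(seed)
    values = np.sort([sample_c(rng) @ x for _ in range(N)])  # Steps 1-2
    k = int(beta * N)                                        # Step 3
    return float(values[min(k, N - 1)])                      # Step 4

# Illustrative use: c uniform on [0, 1]^2 and x = (1, 1).
sample_c = lambda rng: rng.uniform(0, 1, 2)
f50 = estimate_fbar(np.array([1.0, 1.0]), sample_c, beta=0.5, seed=0)
f90 = estimate_fbar(np.array([1.0, 1.0]), sample_c, beta=0.9, seed=0)
```

As expected of a quantile, the estimate is nondecreasing in $\beta$: a larger confidence level demands a larger upper bound on the objective.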

###### 3.1.3. Checking the Estimation Number

In order to check the probability estimate, we test whether the chance constraint holds and return a sample value of the associated random variable, where the compared bound is a deterministic number and the confidence level is prescribed.

Algorithm 3.3. Estimation number checking algorithm.
Step 1. Set $N' = 0$.
Step 2. Sample the random vectors according to their probability distribution.
Step 3. If the sampled constraint holds, then $N' \leftarrow N' + 1$.
Step 4. Repeat Steps 2 and 3 $N$ times.
Step 5. If $N'/N \ge \alpha$, return FEASIBLE and execute Step 6; otherwise, return INFEASIBLE.
Step 6. Letting $M$ be the integer part of $\alpha N$, arrange the sample values in ascending order and return the $M$th element.
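A sketch of Algorithm 3.3 in Python. The exact quantity returned in Step 6 is garbled in the source; here we return the $\alpha$-quantile of the sampled slacks $b - ax$ as one plausible reading, and the sampler and numbers are illustrative assumptions.

```python
import numpy as np

def check_and_estimate(x, sample_row, alpha, N=1000, seed=None):
    """Algorithm 3.3 sketch: verify Pr{a(w) x <= b(w)} >= alpha by
    Monte Carlo and, if feasible, also return the alpha-quantile of the
    sampled slacks b - a x (a hypothetical choice of the returned
    'estimation number')."""
    rng = np.random.default_rng(seed)
    slacks, count = [], 0                      # Step 1
    for _ in range(N):                         # Step 4
        a, b = sample_row(rng)                 # Step 2
        s = b - a @ x
        slacks.append(s)
        if s >= 0:                             # Step 3
            count += 1
    if count / N < alpha:                      # Step 5, infeasible branch
        return False, None
    slacks.sort()                              # Step 6: ascending order
    k = int(alpha * N)
    return True, float(slacks[min(k, N - 1)])

# Illustrative row sampler: a = (1, 1), b uniform on [5, 6].
row = lambda rng: (np.ones(2), rng.uniform(5, 6))
ok, q = check_and_estimate(np.array([1.0, 1.0]), row, alpha=0.9, N=400, seed=1)
```

With $x = (1, 1)$ the slack $b - 2$ lies in $[3, 4]$ and the check passes; with $x = (3.5, 3.5)$ the slack is always negative and the routine returns INFEASIBLE.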

##### 3.2. Theoretical Analysis

In this subsection, we build the theoretical basis of the simplex algorithm based on stochastic simulation. To begin with, we recall the basic principle of deterministic linear programming. By the convexity of deterministic linear programming, the program has an optimal solution if the basic feasible solutions are finite in number, and the optimal solution must lie among them (see the work of Zhang and Xu [12]). Based on this theory, the simplex method is designed. Specifically, its basic principles can be formulated as follows: find a feasible solution and check whether it is optimal; if not, find another feasible solution that improves the target function and check again; repeat this process until the optimal solution and the corresponding target value are found, or it is confirmed that the program has no optimal solution. According to these principles, we further consider the stochastic chance-constrained linear programming (3.1).

In (3.1), replace the inequality constraints with equality constraints by the slack variable method, appending a unit matrix $I$ to the coefficient matrix and extending the cost vector with a zero vector for the slack variables. The constraint system is then equivalent to the augmented one, which we still denote by the same symbols.

Based on the above assumptions, the main principle of the simplex algorithm based on stochastic simulation can be described as follows. Firstly, find a base matrix by random sampling or by the Big M method (the Big M method is the most direct, since its base matrix is a deterministic unit matrix). Secondly, search for a basic feasible solution satisfying the chance constraint by stochastic simulation, calculate the corresponding objective estimate, and check whether the solution is optimal using sample values computed by stochastic simulation; if it is not optimal, the improvement principle yields new sample values. Thirdly, solve the deterministic program defined by these sample values with the simplex algorithm to obtain a solution that improves the target function value. Fourthly, check whether this solution satisfies the chance constraint; if it does not, change the sample values and check again until an improved solution satisfying the constraint is derived. Repeat the above steps until the optimal solution is found or we are sure that there is no optimal solution (the problem is unbounded).

Assume , , , () are the initial feasible solution, the feasible base, a sample value of ( is determined by checking whether satisfies the constraint through stochastic simulation), and the nonbase matrix, respectively, and then we have with and being corresponding to the base variable and the nonbase variable of , respectively.

Computing the value of the target function and the chance constraint by stochastic simulation, it is easy to obtain the corresponding sample value and the target value.

Let any feasible solution be given, and consider the following stochastic chance-constrained programming:

From (3.9) and the above discussion, we have the following. Let the index set of the nonbase variables be given, where the corresponding quantity is defined as the estimation number. Then we can derive

Then, the programming (3.9) is converted into where with and being obtained by stochastic simulation in (3.1).

If the basic feasible solutions of programming (3.14) are nondegenerate, then from the above discussion we have the following theorem.

Theorem 3.4. If , then is the optimal solution of SCLP (3.14) and is denoted as .

Proof. Since the condition holds for all indices, there is no new feasible solution that satisfies the constraint and reduces the target function value. Therefore, the solution must be optimal.

Given a probability space, assume the events are defined as above; the stated relations are then easy to verify. According to Theorem 3.4, if the optimality condition does not hold, there may be another feasible solution satisfying the constraint and reducing the target function value. Since the two conditions are equivalent, we can compute the estimation number by stochastic simulation, return some sample values, and obtain the corresponding sample values of the coefficients. Then we can derive a feasible solution by solving the resulting deterministic linear program with the simplex method (see the work of Zhang and Xu [12]). We can now solve the SCLP problem (3.1) by the simplex algorithm based on stochastic simulation, with the following steps.

Firstly, check the initial feasible solution. If all of the estimation numbers satisfy the optimality condition, then it is the optimal solution; if the unboundedness condition holds, there is no optimal solution; otherwise, if some of the estimation numbers are positive, there must be a new feasible solution that reduces the target function value.

Secondly, check whether satisfies the constraint of (3.1) by stochastic simulation. If it does, continue to check whether it is the optimal one; if it does not, change a new and the corresponding new and repeat the above checking.

Finally, repeat the above two steps. Since the number of basic feasible solutions is finite, we are certain either to find an optimal solution or to conclude that the programming problem has no optimal solution.

In order to find a new basic feasible solution, let the vector with the biggest estimation number enter the base and change the corresponding nonbase vector into a base vector. We then obtain a new basic feasible solution. Next, check whether it satisfies the chance constraints by stochastic simulation; if it does, we have found a new feasible solution.

##### 3.3. Computation Procedure

In this subsection, we design the detailed steps of the simplex algorithm based on stochastic simulation according to the above analysis. These procedures can easily be converted into executable code in software tools.

Algorithm 3.5. Simplex algorithm based on stochastic simulation.
Step 1. Find an initial feasible base.
Step 2. Find a basic feasible solution satisfying the chance constraints and the corresponding sample value of the coefficients.
Step 3. Compute the target function value by stochastic simulation and return its sample value.
Step 4. Check the chance constraint by stochastic simulation and return a sample value that satisfies it; produce a group of sample values by stochastic simulation.
Step 5. Select a candidate and calculate its estimation number; determine the entering index and let the corresponding vector enter the base.
Step 6. If the optimality condition holds, end the procedure: the current basic feasible solution is optimal and the target function value is its estimate; otherwise, go to Step 7.
Step 7. Calculate the pivot column; if no positive entry exists, end the procedure: the programming problem is unbounded; otherwise, go to Step 8.
Step 8. Calculate the minimum ratio to find the leaving index.
Step 9. Replace the leaving vector by the entering vector to form a new base, and compute the new basic feasible solution.
Step 10. Check whether the new solution satisfies the chance constraints by stochastic simulation. If it does, go to Step 2; otherwise, go to Step 5. If no sample value obtained in Step 4 yields a solution satisfying the chance constraint, go to Step 2 to find a new basic feasible solution.
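As a rough illustration only, not the full pivoting scheme of Algorithm 3.5, the overall loop (generate a coefficient realization, solve the resulting deterministic LP by a simplex-type solver, then screen the candidate through the chance-constraint simulation) can be sketched as follows. Here `scipy.optimize.linprog` stands in for the deterministic simplex step, the cost vector is taken as deterministic for simplicity, and all problem data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def feasible(x, sample_Ab, alpha, N, rng):
    """Chance-constraint screening, as in Algorithm 3.1."""
    hits = sum(bool(np.all(A @ x <= b))
               for A, b in (sample_Ab(rng) for _ in range(N)))
    return hits / N >= alpha

def sclp_sample_and_check(c, sample_Ab, alpha, bounds,
                          n_candidates=20, N=500, seed=0):
    """Simplified driver: for several sampled realizations (A, b), solve
    min c^T x s.t. A x <= b with a deterministic LP solver and keep the
    best candidate that passes the chance-constraint check."""
    rng = np.random.default_rng(seed)
    best_x, best_val = None, np.inf
    for _ in range(n_candidates):
        A, b = sample_Ab(rng)
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
        if res.success and feasible(res.x, sample_Ab, alpha, N, rng):
            val = float(c @ res.x)
            if val < best_val:
                best_x, best_val = res.x, val
    return best_x, best_val

# Hypothetical data: maximize x1 + x2 with x <= b, b uniform on [2.5, 3.5]^2,
# and deterministic bounds 0 <= x_i <= 2, so x = (2, 2) is robustly optimal.
sampler = lambda rng: (np.eye(2), rng.uniform(2.5, 3.5, 2))
x_opt, v_opt = sclp_sample_and_check(np.array([-1.0, -1.0]), sampler,
                                     alpha=0.9, bounds=[(0, 2), (0, 2)])
```

Unlike this sketch, Algorithm 3.5 pivots between bases directly and re-screens each candidate basis, which is what lets it succeed with far fewer samples than the naive approach of Solution 2.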

#### 4. A Numerical Example

In this section, a simulation example is presented to illustrate the feasibility and effectiveness of the simplex algorithm based on stochastic simulation developed in this paper.

Example 4.1. Consider the stochastic chance-constrained linear programming (2.8) again.
According to the Big M method, (2.8) is equivalent to the augmented problem, where the added variables are slack variables.
According to Algorithm 3.5, the above SCLP (4.1) can be solved using the MATLAB toolbox, and the optimal solution and optimal target function value are obtained as follows. Moreover, we can verify the constraint-satisfaction probability for this optimal solution.

Remark 4.2. In SCLP, the coefficients are all random variables; therefore, the optimal solution may differ between computations. In Table 2, we report three optimal solutions obtained by executing Algorithm 3.5 three times. From these results, we can see that every optimal solution satisfies the constraints, each with a different confidence level.

**Table 2.** Three optimal solutions obtained by executing Algorithm 3.5 three times.

| Number | Solution | Objective value | Confidence level |
|---|---|---|---|
| 1 | (3.2802, 2.8907) | 6.1709 | 0.9091 |
| 2 | (3.2708, 2.8899) | 6.1607 | 0.9061 |
| 3 | (3.4615, 2.8299) | 6.2914 | 0.9252 |

Remark 4.3. In Solution 2 of Example 2.1, it is very difficult to obtain an effective result even when 10000 random samples are used, whereas the simplex algorithm based on stochastic simulation obtains a satisfactory optimal solution with fewer than 20 samples in our experiment. It is therefore clear that Algorithm 3.5 is effective and much better than the method of Solution 2 in Example 2.1.

Remark 4.4. The simplex algorithm based on stochastic simulation is a direct method and can be applied to any SCLP problem. Thereby, Algorithm 3.5 is also better than the traditional approach of Solution 3 in Example 2.1, which is an indirect measure and only effective in some special cases.

Example 4.5. In this example, we consider a typical optimal decision problem in oil refinery production (Kall and Wallace [5]). An oil refinery refines two kinds of crude oil and provides gas for the gas company and burning oil for the power company. A plan is needed one week before production. Assume that the yield of gas and the yield of burning oil are random, following a uniform distribution and an exponential distribution, respectively, while the yields of the other products are deterministic. Let the requirements of gas and of burning oil also be random variables, each normally distributed.

The amounts of the two crude oils consumed are the decision variables, with respective unit prices, so the total cost is their price-weighted sum. Assume further that the production capability (the largest amount of raw material consumed) per week is 100; that is, we have the capacity constraint. This problem has been dealt with by a two-stage complement model in [5], and we now solve it by the simplex algorithm based on stochastic simulation developed in this paper.

The decision is made a week before production and cannot be changed during that week. Moreover, confidence levels are necessary for these decisions, given as follows. According to the decision principles of satisfying the customer and minimizing loss, we obtain the following SCLP problem: if the confidence levels are 0.8 and 0.7, respectively, our simplex algorithm based on stochastic simulation yields an optimal solution

Remark 4.6. In this SCLP problem, the random variables obey uniform, exponential, and normal distributions, respectively; therefore, their joint distribution is so complicated that it is nearly impossible to obtain in closed form. Hence this programming problem is difficult to solve by the approach of Solution 3 in Example 2.1.

Remark 4.7. A genetic algorithm based on stochastic simulation has been put forward by Iwamura and Liu [6] for SCLP problems, and it is also a direct and effective method. Taking the population size, the crossover probability, the mutation probability, and the parameter in the rank-based evaluation function as given, after running 500 generations we obtain an optimal solution
Comparing (4.9) and (4.10), we see that both satisfy all of the constraints. However, (4.9) attains a smaller minimum value and is therefore a better result than (4.10); note that their confidence levels are different. Hence the simplex algorithm based on stochastic simulation is a practicable method.

#### 5. Conclusion

This paper has studied the computation problem of stochastic chance-constrained linear programming and proposed a novel algorithm, a simplex algorithm based on stochastic simulation. By a numerical example, several simple approaches to the SCLP problem have been tried and their disadvantages analyzed. By theoretical analysis, the theoretical basis of the simplex algorithm based on stochastic simulation has been built and a theorem has been proved. The detailed procedures of the proposed algorithm have then been designed, which are easily executed by software tools. Finally, by two examples, the introduced algorithm has been verified to be better than the approaches of Example 2.1 and more effective than the genetic algorithm based on stochastic simulation.

Based on the algorithm proposed in this paper, some possible further research topics include (i) direct and universal algorithms for uncertain programming, such as fuzzy programming and nonlinear stochastic programming, see, for example, [10, 13, 14]; (ii) control and state estimation problems, see, for example, [3, 15–19] and the references therein.

#### References

1. A. Charnes and W. W. Cooper, “Chance-constrained programming,” Management Science, vol. 6, no. 1, pp. 73–79, 1959.
2. B. Liu, Theory and Practice of Uncertain Programming, Springer, Heidelberg, Germany, 2002.
3. T. B. M. J. Ouarda and J. W. Labadie, “Chance-constrained optimal control for multireservoir system optimization and risk analysis,” Stochastic Environmental Research and Risk Assessment, vol. 15, no. 3, pp. 185–204, 2001.
4. J.-M. Bismut, “An introductory approach to duality in optimal stochastic control,” SIAM Review, vol. 20, no. 1, pp. 62–78, 1978.
5. P. Kall and S. W. Wallace, Stochastic Programming, Wiley-Interscience Series in Systems and Optimization, John Wiley & Sons, Chichester, UK, 1994.
6. K. Iwamura and B. Liu, “A genetic algorithm for chance constrained programming,” Journal of Information & Optimization Sciences, vol. 17, no. 2, pp. 409–422, 1996.
7. J. Ren, R. Zhao, and B. Liu, “The combination model of stochastic optimal depenture investment,” The Theory and Practice of Systemic Engineering, vol. 9, no. 1, pp. 14–18, 2000.
8. S. He, S. S. Chaudhry, Z. Lei, and W. Baohua, “Stochastic vendor selection problem: chance-constrained model and genetic algorithms,” Annals of Operations Research, vol. 168, pp. 169–179, 2009.
9. B. Liu, “Dependent-chance programming: a class of stochastic optimization,” Computers & Mathematics with Applications, vol. 34, no. 12, pp. 89–104, 1997.
10. X. Ding, R. Wu, and S. Shao, “Fixed chance-constrained programming model with fuzzy and stochastic parameter,” Control and Decision, vol. 17, no. 5, pp. 587–590, 2002.
11. X. Ding and X. Sun, “A hybrid chance-constrained integer programming model and its application,” Systems Engineering-Theory Methodology Application, vol. 14, no. 2, pp. 141–144, 2005.
12. J. Zhang and S. Xu, Linear Programming, Science Press, Beijing, China, 1990.
13. B. Liu, R. Zhao, and G. Wang, Uncertain Programming with Applications, Springer, Beijing, China, 2003.
14. H. Dong, Z. Wang, D. W. C. Ho, and H. Gao, “Robust ${H}_{\infty }$ fuzzy output-feedback control with multiple probabilistic delays and multiple missing measurements,” IEEE Transactions on Fuzzy Systems, vol. 18, no. 4, pp. 712–725, 2010.
15. B. Shen, Z. Wang, and X. Liu, “Bounded ${H}_{\infty }$ synchronization and state estimation for discrete time-varying stochastic complex networks over a finite horizon,” IEEE Transactions on Neural Networks, vol. 22, no. 1, pp. 145–157, 2011.
16. H. Dong, Z. Wang, and H. Gao, “Observer-based ${H}_{\infty }$ control for systems with repeated scalar nonlinearities and multiple packet losses,” International Journal of Robust and Nonlinear Control, vol. 20, no. 12, pp. 1363–1378, 2010.
17. B. Shen, Z. Wang, and Y. S. Hung, “Distributed ${H}_{\infty }$-consensus filtering in sensor networks with multiple missing measurements: the finite-horizon case,” Automatica, vol. 46, no. 10, pp. 1682–1688, 2010.
18. H. Dong, Z. Wang, D. W. C. Ho, and H. Gao, “Variance-constrained ${H}_{\infty }$ filtering for a class of nonlinear time-varying systems with multiple missing measurements: the finite-horizon case,” IEEE Transactions on Signal Processing, vol. 58, no. 5, pp. 2534–2543, 2010.
19. B. Shen, Z. Wang, Y. S. Hung, and G. Chesi, “Distributed ${H}_{\infty }$ filtering for polynomial nonlinear stochastic systems in sensor networks,” IEEE Transactions on Industrial Electronics, vol. 58, no. 5, pp. 1971–1979, 2011.