Journal of Applied Mathematics
Volume 2014 (2014), Article ID 497262, 8 pages
http://dx.doi.org/10.1155/2014/497262
Research Article

A Nonlinear Lagrange Algorithm for Stochastic Minimax Problems Based on Sample Average Approximation Method

School of Science, Wuhan University of Technology, Wuhan 430070, China

Received 7 January 2014; Accepted 23 February 2014; Published 3 April 2014

Academic Editor: Jinyun Yuan

Copyright © 2014 Suxiang He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents an implementable nonlinear Lagrange algorithm for stochastic minimax problems based on the sample average approximation (SAA) method: the second step of the algorithm minimizes a nonlinear Lagrange function built from SAA functions of the original functions, and the Lagrange multiplier is updated in its SAA form. Under a set of mild assumptions, it is proven that the sequences of solutions and multipliers generated by the proposed algorithm converge to the Kuhn-Tucker pair of the original problem with probability one as the sample size increases. Finally, numerical experiments on five test examples are performed, and the results indicate that the algorithm is promising.

1. Introduction

Consider the stochastic minimax problems of the form where , is a random vector supported on the probability space , , denotes expectation with respect to the distribution of , and is well defined. Problem (1), which arises in various settings such as inventory theory, robust optimization, and engineering, has drawn much attention in recent years; see, for example, [1–5].
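For concreteness, a standard form of such problems, consistent with the surrounding description (a finite family of expected-value functions aggregated by a max), is the following; this is a plausible reconstruction, not the paper's verbatim statement:

```latex
\min_{x \in \mathbb{R}^n} \; \max_{1 \le i \le m} \; \mathbb{E}\bigl[f_i(x,\xi)\bigr],
```

where $\xi$ is the random vector and $\mathbb{E}$ denotes expectation with respect to its distribution.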

A nonlinear Lagrange function for problem (1) can be established based on Zhang and Tang [6]; that is, where is the Lagrange multiplier and , and is a controlling parameter. The favorable properties of function (2) were investigated in [6], and the convergence analysis of the corresponding nonlinear Lagrange algorithm was presented in [7]. Although function (2) overcomes the nondifferentiability of the objective function in problem (1), the exact numerical evaluation of the expected value in (2) is very difficult: either the distribution of the random vector is unknown, or the multidimensional integral is too complex to compute.

The sample average approximation (SAA) method [8–15] is a well-established approach for bypassing this difficulty. The idea of the SAA method is to generate a random sample of the random variable with sample size and to approximate the involved expected value function by the corresponding sample average function . Inspired by the SAA method, we present the SAA function of as follows: where . Furthermore, we propose an implementable nonlinear Lagrange algorithm based on SAA function (3), in which function (3) is minimized and the Lagrange multiplier is updated in its SAA form. Under mild assumptions on problem (1), we show that the sequences of solutions and multipliers generated by the SAA-based nonlinear Lagrange algorithm converge to the Kuhn-Tucker pair of the original problem with probability one as the sample size increases.
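As a concrete illustration of the SAA idea described above, the following sketch approximates an expected value by a sample average; the integrand f, the uniform distribution of the random variable, and the sample size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative integrand f(x, xi): a quadratic perturbed by the random
# variable xi (an assumption for demonstration only).
def f(x, xi):
    return (x - xi) ** 2

def saa_estimate(x, sample):
    """SAA: replace E[f(x, xi)] by the average over an i.i.d. sample."""
    return float(np.mean(f(x, sample)))

rng = np.random.default_rng(0)
sample = rng.uniform(-1.0, 1.0, size=10_000)   # xi ~ U(-1, 1), i.i.d.

# For xi ~ U(-1, 1): E[(x - xi)^2] = x^2 + Var(xi) = x^2 + 1/3.
approx = saa_estimate(0.5, sample)
exact = 0.5 ** 2 + 1.0 / 3.0
```

By the law of large numbers, `approx` converges to `exact` with probability one as the sample size grows, which is precisely the property the convergence analysis in Section 3 exploits.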

The remainder of this paper is organized as follows. Preliminaries are given in Section 2. The SAA method-based nonlinear Lagrange algorithm and convergence analysis are established in Section 3. Section 4 reports the numerical results by using the proposed algorithm to solve five test examples. Finally, conclusions are drawn in Section 5.

2. Preliminaries

This section serves as a preparation for the convergence analysis of the proposed SAA method-based nonlinear Lagrange algorithm. The assumptions on problem (1) are provided first, followed by some results that are essential to our discussion. Finally, we recall the nonlinear Lagrange algorithm in [7].

Let denote the Kuhn-Tucker pair of problem (1). Let be small enough and define . The Lagrange function for problem (1) is defined by . Set and . We list the following assumptions on problem (1), which will be used in the subsequent theoretical analysis.
(A1) is twice continuously differentiable on .
(A2) There exists a nonnegative measurable function such that is finite and for every the inequality holds with probability one.
(A3) The random sample is independent and identically distributed.
(A4) satisfies the K-T condition; that is,
(A5) The strict complementarity condition holds; that is, for .
(A6) The linear independence constraint qualification holds; that is, is a set of linearly independent vectors.
(A7) For all satisfying , , it holds that where is a constant.

Definition 1 (see [11]). For nonempty sets and in , one denotes by the distance from to and by the deviation of the set from the set .
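In the notation of Shapiro et al. [11], which Definition 1 cites, the distance and the deviation are the standard quantities:

```latex
\operatorname{dist}(x, B) := \inf_{y \in B} \lVert x - y \rVert, \qquad
\mathbb{D}(A, B) := \sup_{x \in A} \operatorname{dist}(x, B).
```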

Lemma 2 (Heine-Cantor theorem; see [16]). If is a continuous function and is compact, then is uniformly continuous, where and are metric spaces.

Note. An important special case is that every continuous function from a closed interval to the real numbers is uniformly continuous.

Lemma 3. Define for . Suppose that converges to with probability one uniformly on for . Then converges to with probability one uniformly on .

Proof. From the given condition, for , one has that, for any , there exists such that when , holds with probability one for any .
Let . Thus we have that, for any , when , for any , the following holds with probability one: which means that Lemma 3 is true.

Algorithm 4 is from [7].

Algorithm 4. We have the following.
Step 1. Choose , where , , and are small enough, and set .
Step 2. Solve and obtain the optimal solution .
Step 3. If , then stop. Otherwise go to Step 4.
Step 4. Update the Lagrange multiplier by

Step 5. Set and return to Step 2.
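The paper's exact Lagrange function is not reproduced above; one common nonlinear Lagrangian for minimax problems, in the spirit of the maximum-entropy function of Zhang and Tang [6], is a multiplier-weighted log-sum-exp aggregation. The sketch below runs Algorithm 4's loop with that assumed choice on a two-function deterministic toy problem; the toy functions, the controlling parameter, and the tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Toy deterministic problem: min_x max(f_1(x), f_2(x)).
fs = [lambda x: (x[0] - 2.0) ** 2, lambda x: x[0] ** 2]

def lagrange(x, lam, rho):
    # Assumed multiplier-weighted log-sum-exp Lagrangian:
    #   F(x, lam, rho) = (1/rho) * log( sum_i lam_i * exp(rho * f_i(x)) )
    vals = np.array([fi(x) for fi in fs])
    return logsumexp(rho * vals, b=lam) / rho

def algorithm4(rho=5.0, tol=1e-8, max_iter=50):
    lam = np.full(len(fs), 1.0 / len(fs))    # Step 1: initial multipliers
    x = np.zeros(1)
    for _ in range(max_iter):
        # Step 2: minimize the nonlinear Lagrange function in x.
        x = minimize(lagrange, x, args=(lam, rho), method="BFGS").x
        # Step 4: normalized exponential multiplier update (assumed form).
        v = rho * np.array([fi(x) for fi in fs])
        w = lam * np.exp(v - v.max())        # shift by max for stability
        new_lam = w / w.sum()
        # Step 3: stop once the multipliers have settled.
        if np.linalg.norm(new_lam - lam) < tol:
            return x, new_lam
        lam = new_lam
    return x, lam

x_star, lam_star = algorithm4()
# min_x max((x-2)^2, x^2) is attained at x = 1, where both functions are active.
```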

3. The SAA Method-Based Nonlinear Lagrange Algorithm and Its Convergence

In view of the numerical computation difficulty in Algorithm 4 and motivated by the SAA method, we provide the following implementable nonlinear Lagrange algorithm based on the SAA method firstly. Furthermore we establish the convergence analysis of the SAA method-based algorithm under assumptions (A1)–(A7) in this section.

Implementable SAA method-based Algorithm 5 is presented as follows.

Algorithm 5. We have the following.
Step 1. Choose , where , small enough, , and is large enough. Set .

Step 2. Solve

and obtain the optimal solution .

Step 3. If , then stop. Otherwise go to Step 4.

Step 4. Update the Lagrange multiplier by

Step 5. Set and return to Step 2.
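Algorithm 5 differs from Algorithm 4 only in that every expectation is replaced by a sample average over a fixed i.i.d. sample. The sketch below runs the loop with an assumed multiplier-weighted log-sum-exp Lagrange function (the paper's exact form is not reproduced here); the toy stochastic integrands and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(1)
sample = rng.uniform(-1.0, 1.0, size=5_000)     # i.i.d. sample of xi

def saa_vals(x):
    # Sample averages of the toy integrands
    #   f_1(x, xi) = (x - 2 + xi)^2 and f_2(x, xi) = (x + xi)^2.
    return np.array([np.mean((x[0] - 2.0 + sample) ** 2),
                     np.mean((x[0] + sample) ** 2)])

def saa_lagrange(x, lam, rho):
    # Nonlinear Lagrange function with SAA functions in place of
    # expectations (assumed log-sum-exp form).
    return logsumexp(rho * saa_vals(x), b=lam) / rho

def algorithm5(rho=5.0, tol=1e-8, max_iter=50):
    lam = np.array([0.5, 0.5])                  # Step 1
    x = np.zeros(1)
    for _ in range(max_iter):
        # Step 2: minimize the SAA Lagrange function in x.
        x = minimize(saa_lagrange, x, args=(lam, rho), method="BFGS").x
        v = rho * saa_vals(x)
        w = lam * np.exp(v - v.max())           # Step 4: SAA multiplier update
        new_lam = w / w.sum()
        if np.linalg.norm(new_lam - lam) < tol: # Step 3: stopping test
            return x, new_lam
        lam = new_lam
    return x, lam

x_star, lam_star = algorithm5()
# The expected problem min_x max((x-2)^2 + 1/3, x^2 + 1/3) has solution x = 1;
# the SAA solution approaches it as the sample size grows.
```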

Taking into account the local convergence analysis of Algorithm 4 given in [7], we next study the convergence of the sequence pair obtained by Algorithm 5 on . Let and denote the optimal value and the set of optimal solutions of and and denote the optimal value and the set of optimal solutions of , respectively. Set .

Theorem 6. If assumptions (A1)–(A3) hold and converges to with probability one for some , then the following statements hold: (i) converges to with probability one uniformly on ; (ii) converges to and tends to 0 with probability one as .

Proof. (i) Let , where . Then one has Now we prove that converges to with probability one uniformly on . Considering the definition of , we have
It follows from assumption (A1) and Theorem  7.48 in [11] that both and are continuous at on . Consequently, for any , there exist constants and such that and for . Since is continuous at on and from Lemma 2, we have that is uniformly continuous on ; that is, for any and , there exists such that, for , it holds that Furthermore, from Theorem  7.48 in [11] we know that converges to with probability one uniformly on , which means that, for the above given , there exists such that when , the inequality holds with probability one for any . In view of formula (15) and formula (16), one draws the conclusion that, for any , there exists such that when , it holds that with probability one for any . That is, converges to with probability one uniformly on .
In view of converging to with probability one, being bounded on with probability one, , and formula (14), we obtain that converges to with probability one uniformly on as . Moreover, one gets that converges to with probability one uniformly on as from Lemma 3.
Next we prove that converges to with probability one uniformly on . Let and . Hence, one has From the above discussion, we know that , for any . Since is continuous at on and by Lemma 2, we obtain that is uniformly continuous on the interval . That is, for any and , there exists such that, for , it holds that Moreover, for the given , there exists such that, for , it holds that with probability one for any . From formulas (19) and (20), it follows that, for any , there exists such that, for , the following inequality holds: with probability one for any . Combined with formula (18), statement (i) is true.
(ii) From statement (i) and Theorem 5.3 in [11], statement (ii) is obtained. The proof of Theorem 6 is completed.

Theorem 7. If assumptions (A1)–(A3) hold and , , then, for any , the following statements hold: (i) converges to with probability one for ; (ii) converges to with probability one uniformly on ; (iii) tends to , and tends to 0 with probability one as .

Proof. (i) We use mathematical induction to show that statement (i) is true.
(a) Let ; then for we have
Considering , we have that converges to with probability one from Theorem 6. For , it holds that Noting that the first term in the right-hand side of formula (23) converges to 0 with probability one by Theorem 7.48 in [11] and the second term converges to 0 with probability one for being continuous, we obtain that converges to with probability one. Moreover, since is continuous at on , one gets that converges to with probability one. Then it follows from the properties of convergent sequences that converges to with probability one for .
(b) When , we assume that converges to with probability one for . Then, when , we next prove that converges to with probability one for .
Let ; then for one has
From Theorem 6, we know that converges to with probability one as . By a similar proof process to that in (a), we have that converges to with probability one for . For , one has that Noting that the first term of (25) tends to 0 with probability one as for converging to with probability one and being bounded on with probability one and the second term tends to 0 with probability one as for , we obtain that converges to with probability one. Then it follows from properties of convergent sequence that converges to with probability one for .
According to (a) and (b), we have that statement (i) holds.
(ii) From statement (i) and Theorem 6, we obtain that statement (ii) is true.
(iii) From statement (ii) and Theorem 5.3 in [11], one has that statement (iii) holds.

The above theorem shows that the sample average approximation Lagrange multiplier converges to its counterpart with probability one, and the optimal value and optimal solutions of the subproblem converge to their counterparts of the subproblem with probability one under some mild conditions. Next we will analyze the convergence of Algorithm 5 under some mild conditions.

Theorem 8. If assumptions (A1)–(A7) hold and , then there exist and such that, for any , the sequence pair () converges to the K-T pair () with probability one.

Proof. Under assumptions (A1) and (A4)–(A7), from Theorem  3.1 in [7], we have that there exist and such that, for any and , the following inequality holds: where is a constant, which implies that the pair tend to the K-T pair of the original problem (1) as .
Since assumptions (A1)–(A3) hold and , it follows by Theorem 7 that the pair converge to the pair with probability one as .
Furthermore, since the conclusion is obtained.

Remark 9. Theorem 8 shows that, under some mild assumptions, the sequence pair generated by Algorithm 5 locally tend to the K-T pair of the original problem (1) with probability one as and when the controlling parameter is less than the threshold .

4. Numerical Results

This section reports the numerical results obtained by applying Algorithm 5 to five test examples, which are constructed from the deterministic optimization problems in [17, 18]. All experiments are implemented in the MATLAB 7.1 environment on the same computer (Intel Core i3-2310M @ 2.10 GHz, 2 GB of memory).

In the experiments, the sample with sample size is generated by in MATLAB 7.1. For each problem, we choose , , , , , and , respectively, for comparison. The initial value for each example. The unconstrained minimization problem in Step 2 of Algorithm 5 is solved by the BFGS quasi-Newton method combined with the Wolfe inexact line search rule, with control precision in this step. The stopping criterion in Step 3 is where .
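The BFGS-with-Wolfe-line-search combination used for the Step 2 subproblems is available off the shelf; for instance, SciPy's `BFGS` solver performs an inexact Wolfe line search internally. A sketch on the Rosenbrock function as a stand-in for the Lagrange subproblem (the test function and starting point are illustrative, not the paper's setup):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

# BFGS quasi-Newton with an inexact (Wolfe) line search, as in Step 2.
res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]),
               jac=rosenbrock_grad, method="BFGS")
# res.x is close to the minimizer (1, 1)
```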

The obtained numerical results are reported in Tables 1–5, in which , , iter., , and represent the sample size, the value of the controlling parameter, the number of iterations, the error between the solution sequence by Algorithm 5 and the optimal solution of problem (1), and the error between the optimal value by Algorithm 5 and the optimal value of problem (1), respectively.

Table 1: The numerical results for Example 1.
Table 2: The numerical results for Example 2.
Table 3: The numerical results for Example 3.
Table 4: The numerical results for Example 4.
Table 5: The numerical results for Example 5.

Example 1 (Hald-Madsen [17]). Consider the unconstrained min-max stochastic problem (1), in which is uniformly distributed on and are given by This problem has the optimal solution and the optimal value . The numerical results for this example obtained by Algorithm 5 are shown in Table 1.

Example 2 (Beale [18]). Consider the unconstrained min-max stochastic problem (1), in which is uniformly distributed on and are given by This problem has the optimal solution and the optimal value . The numerical results for this example obtained by Algorithm 5 are shown in Table 2.

Example 3 (Rosen-Suzuki [18]). Consider the unconstrained min-max stochastic problem (1), in which is uniformly distributed on and are given by This problem has the optimal solution and the optimal value . The numerical results for this example obtained by Algorithm 5 are shown in Table 3.

Example 4 (Wong 1 [17]). Consider the unconstrained min-max stochastic problem (1), in which is uniformly distributed on and are given by This problem has the optimal solution and the optimal value . The numerical results for this example obtained by Algorithm 5 are shown in Table 4.

Example 5 (Wong 2 [18]). Consider the unconstrained min-max stochastic problem (1), in which is uniformly distributed on and are given by This problem has the optimal solution and the optimal value . The numerical results for this example obtained by Algorithm 5 are shown in Table 5.
From the above numerical results, we make the following remarks.

Remark 10. The preliminary numerical results show that Algorithm 5 is feasible and promising.

Remark 11. Comparing the results for the same test example under different sample sizes, the numerical results show that, as the sample size grows, the precision of the optimal solution and the optimal value obtained by Algorithm 5 becomes higher, which coincides with the theoretical analysis in Section 3.
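Remark 11's observation is the Monte Carlo rate at work: the SAA error shrinks on the order of one over the square root of the sample size. A quick empirical check of this trend (the integrand and distribution are illustrative assumptions, unrelated to the paper's test examples):

```python
import numpy as np

# RMSE of the SAA estimate of E[xi^2] = 1/3 for xi ~ U(-1, 1),
# averaged over independent replications, at two sample sizes.
rng = np.random.default_rng(2)
exact = 1.0 / 3.0

def rmse(n, reps=200):
    errs = [np.mean(rng.uniform(-1, 1, n) ** 2) - exact for _ in range(reps)]
    return float(np.sqrt(np.mean(np.square(errs))))

rmse_small, rmse_large = rmse(100), rmse(10_000)
# A 100x larger sample should shrink the RMSE by roughly a factor of 10.
```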

5. Conclusions

This paper investigates a nonlinear Lagrange algorithm for solving stochastic minimax problems based on the sample average approximation method. The convergence theory of the proposed algorithm is established under mild assumptions, and preliminary numerical results demonstrate its feasibility and effectiveness. Future work includes refining the numerical experiments to obtain higher-precision solutions, testing large-scale examples, and applying the proposed algorithm to practical problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research is supported by the National Natural Science Foundation of China under Project no. 11201357, the Fundamental Research Funds for the Central Universities under Project no. 2012-Ia-024, and the State Scholarship Fund under Project no. 201308420194.

References

  1. M. Breton and S. El Hachem, “Algorithms for the solution of stochastic dynamic minimax problems,” Computational Optimization and Applications, vol. 4, no. 4, pp. 317–345, 1995.
  2. J. Dupačová, “The minimax approach to stochastic programming and an illustrative application,” Stochastics, vol. 20, pp. 73–88, 1987.
  3. Y. Ermoliev, A. Gaivoronski, and C. Nedeva, “Stochastic optimization problems with partially known distribution functions,” SIAM Journal on Control and Optimization, vol. 23, no. 5, pp. 697–716, 1985.
  4. M. Riis and K. A. Andersen, “Applying the minimax criterion in stochastic recourse programs,” Tech. Rep., Department of Operations Research, University of Aarhus, Aarhus, Denmark, 2002.
  5. S. Takriti and S. Ahmed, “Managing short-term electricity contracts under uncertainty: a minimax approach,” Tech. Rep., School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, Ga, USA, 2002.
  6. L. W. Zhang and H. W. Tang, “A maximum entropy algorithm with parameters for solving minimax problem,” Archives of Control Sciences, vol. 6, pp. 47–59, 1997.
  7. S. X. He and Y. Y. Nie, “A class of nonlinear Lagrangian algorithms for minimax problems,” Journal of Industrial and Management Optimization, vol. 9, pp. 75–97, 2013.
  8. S. Ahmed, A. Shapiro, and E. Shapiro, “The sample average approximation method for stochastic programs with integer recourse,” SIAM Journal on Optimization, vol. 12, pp. 479–502, 2002.
  9. X. Chen, R. B.-J. Wets, and Y. Zhang, “Stochastic variational inequalities: residual minimization smoothing/sample average approximations,” SIAM Journal on Optimization, vol. 22, pp. 649–673, 2012.
  10. F. Meng and H. Xu, “A regularized sample average approximation method for stochastic mathematical programs with nonsmooth equality constraints,” SIAM Journal on Optimization, vol. 17, no. 3, pp. 891–919, 2006.
  11. A. Shapiro, D. Dentcheva, and A. Ruszczyński, Lectures on Stochastic Programming: Modeling and Theory, MPS-SIAM Series on Optimization, SIAM, Philadelphia, Pa, USA, 2009.
  12. M. Wang and M. M. Ali, “Stochastic nonlinear complementarity problems: stochastic programming reformulation and penalty-based approximation method,” Journal of Optimization Theory and Applications, vol. 144, no. 3, pp. 597–614, 2010.
  13. H. Xu and F. Meng, “Convergence analysis of sample average approximation methods for a class of stochastic mathematical programs with equality constraints,” Mathematics of Operations Research, vol. 32, no. 3, pp. 648–668, 2007.
  14. H. Xu, “Sample average approximation methods for a class of stochastic variational inequality problems,” Asia-Pacific Journal of Operational Research, vol. 27, no. 1, pp. 103–119, 2010.
  15. J. Zhang, L.-W. Zhang, and S. Lin, “A class of smoothing SAA methods for a stochastic mathematical program with complementarity constraints,” Journal of Mathematical Analysis and Applications, vol. 387, no. 1, pp. 201–220, 2012.
  16. W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, New York, NY, USA, 1953.
  17. C. Charalambous, “Nonlinear least pth optimization and nonlinear programming,” Mathematical Programming, vol. 12, no. 1, pp. 195–225, 1977.
  18. G. Di Pillo, L. Grippo, and S. Lucidi, “A smooth method for the finite minimax problem,” Mathematical Programming, vol. 60, no. 1–3, pp. 187–214, 1993.