Abstract

This paper applies the sample average approximation (SAA) method, combined with the $\mathcal{VU}$-space decomposition theory, to solve stochastic convex minimax problems. Under moderate conditions, the SAA solution converges to its true counterpart with probability approaching one, and the convergence is exponentially fast as the sample size increases. Based on the $\mathcal{VU}$-theory, a superlinearly convergent $\mathcal{VU}$-algorithm frame is designed to solve the SAA problem.

1. Introduction

In this paper, the following stochastic convex minimax problem (SCMP) is considered: $$\min_{x \in \mathbb{R}^n} f(x), \quad (1)$$ where $$f(x) := \max_{1 \le i \le m} \mathbb{E}[f_i(x, \xi)],$$ the functions $f_i(\cdot, \xi) : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \ldots, m$, are convex and $C^2$, $\xi$ is a random vector defined on a probability space $(\Omega, \mathcal{F}, P)$, and $\mathbb{E}$ denotes the mathematical expectation with respect to the distribution of $\xi$.

SCMP is a natural extension of the deterministic convex minimax problem (CMP for short). The CMP has a number of important applications in operations research, engineering, and economics. While many practical problems involve only deterministic data, there are important instances where the problem data contain uncertainty, and SCMP models are consequently proposed to reflect that uncertainty.

A blanket assumption is made that, for every $x$, the expected values $\mathbb{E}[f_i(x, \xi)]$, $i = 1, \ldots, m$, are well defined. Let $\xi^1, \ldots, \xi^N$ be a sampling of $\xi$. A well-known approach based on the sampling is the so-called SAA method, that is, using the sample average value of $f_i$ to approximate its expected value, because the classical law of large numbers for random functions ensures that the sample average value of $f_i$ converges with probability 1 to $\mathbb{E}[f_i(x, \xi)]$ when the sampling is independent and identically distributed (i.i.d. for short). Specifically, we can write down the SAA of our SCMP (1) as follows: $$\min_{x \in \mathbb{R}^n} f^N(x), \quad (3)$$ where $$f^N(x) := \max_{1 \le i \le m} f_i^N(x), \qquad f_i^N(x) := \frac{1}{N} \sum_{j=1}^{N} f_i(x, \xi^j).$$ The problem (3) is called the SAA problem and (1) the true problem.
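For concreteness, the following short Python sketch evaluates the SAA objective $f^N$: average each structure function over the sample, then take the pointwise maximum. The toy functions `f1`, `f2` and the helper name `saa_objective` are hypothetical and only illustrate the calling convention, not an implementation from the paper.

```python
import numpy as np

def saa_objective(x, samples, structure_fns):
    """SAA of f(x) = max_i E[f_i(x, xi)]:
    f^N(x) = max_i (1/N) * sum_j f_i(x, xi_j)."""
    # Average each structure function over the sample, then take the max.
    averages = [np.mean([f(x, xi) for xi in samples]) for f in structure_fns]
    return max(averages)

# Hypothetical toy instance: two convex quadratics in x perturbed by xi.
rng = np.random.default_rng(0)
samples = rng.normal(size=1000)            # i.i.d. draws of a scalar xi
f1 = lambda x, xi: (x - xi) ** 2
f2 = lambda x, xi: 0.5 * (x + 1.0 - xi) ** 2
print(saa_objective(0.3, samples, [f1, f2]))
```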

The SAA method has been a hot topic of research in stochastic optimization. Pagnoncelli et al. [1] present the SAA method for chance constrained programming. Shapiro et al. [2] consider stochastic generalized equations by using the SAA method. Xu [3] develops the SAA method for a class of stochastic variational inequality problems. Liu et al. [4] give penalized SAA methods for stochastic mathematical programs with complementarity constraints. Chen et al. [5] discuss SAA methods based on the Newton method for the stochastic variational inequality problem with constraints. Since the objective functions of the SAA problems in the references mentioned above are smooth, they can be solved by Newton-type methods.

More recently, new conceptual schemes have been developed, which are based on the $\mathcal{VU}$-theory introduced in [6]; see also [7–11]. The idea is to decompose $\mathbb{R}^n$ into two orthogonal subspaces $\mathcal{U}$ and $\mathcal{V}$ at a point $\bar{x}$, where the nonsmoothness of $f$ is concentrated essentially on $\mathcal{V}$ and the smoothness of $f$ appears on the $\mathcal{U}$ subspace. More precisely, for a given $g \in \partial f(\bar{x})$, where $\partial f(\bar{x})$ denotes the subdifferential of $f$ at $\bar{x}$ in the sense of convex analysis, $\mathbb{R}^n$ can be decomposed into the direct sum of two orthogonal subspaces, that is, $\mathbb{R}^n = \mathcal{U} \oplus \mathcal{V}$, where $\mathcal{V} = \operatorname{lin}(\partial f(\bar{x}) - g)$ and $\mathcal{U} = \mathcal{V}^{\perp}$. As a result, an algorithm frame can be designed for the SAA problem that makes a step in the $\mathcal{V}$-space, followed by a $\mathcal{U}$-Newton step, in order to obtain superlinear convergence. A $\mathcal{VU}$-space decomposition method for solving a constrained nonsmooth convex program is presented in [12]. A decomposition algorithm based on a proximal bundle-type method with inexact data is presented for minimizing an unconstrained nonsmooth convex function in [13].
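In computational terms, orthonormal bases for $\mathcal{V}$ and $\mathcal{U}$ can be obtained from any matrix whose rows span $\partial f(\bar{x}) - g$, for example via a singular value decomposition. The sketch below is a generic illustration of that construction, assuming the subdifferential is generated by finitely many (sub)gradients; it is not a routine from the cited references.

```python
import numpy as np

def vu_decomposition(subgradients, g, tol=1e-10):
    """Orthonormal bases for V = lin(partial f(xbar) - g) and U = V-perp.

    subgradients : (k, n) array whose rows generate partial f(xbar)
                   (for a max function: the active gradients)
    g            : one particular subgradient, shape (n,)
    """
    D = np.asarray(subgradients) - np.asarray(g)   # rows span V
    # Right singular vectors split R^n into row space (V) and null space (U).
    _, s, Vt = np.linalg.svd(D)
    r = int(np.sum(s > tol))                       # r = dim V
    return Vt[r:].T, Vt[:r].T                      # U (n x (n-r)), V (n x r)

# Example: f(x) = |x1| on R^2 has subdifferential [-1,1] x {0} at the origin.
U, V = vu_decomposition(np.array([[1.0, 0.0], [-1.0, 0.0]]), np.array([0.0, 0.0]))
print(U.T, V.T)   # U spans the x2-axis (smooth part), V spans the x1-axis (kink)
```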

In this paper, the objective function in (1) is nonsmooth, but it has a structure that connects naturally with the $\mathcal{VU}$-space decomposition. Based on the $\mathcal{VU}$-theory, a superlinearly convergent $\mathcal{VU}$-algorithm frame is designed to solve the SAA problem. The rest of the paper is organized as follows. In the next section, the SCMP is transformed into a nonsmooth problem, and it is proved that the approximate solution set converges to the true solution set in the sense of Hausdorff distance. In Section 3, the $\mathcal{VU}$-theory of the SAA problem is given. In the final section, the $\mathcal{VU}$-decomposition algorithm frame for the SAA problem is designed.

2. Convergence Analysis of SAA Problem

In this section, we discuss the convergence of (3) to (1) as $N$ increases. Specifically, we investigate the fact that the solution of the SAA problem (3) converges to its true counterpart as $N \to \infty$. We first state the basic assumptions for the SAA method.

Assumption 1. (a) Letting $C \subset \mathbb{R}^n$ be a compact set, the expected values $\mathbb{E}[f_i(x, \xi)]$, $i = 1, \ldots, m$, exist and are finite for every $x \in C$.
(b) For every $x \in C$, the moment-generating function $M_x(t) := \mathbb{E}\big[e^{t(f_i(x, \xi) - \mathbb{E}[f_i(x, \xi)])}\big]$, $i = 1, \ldots, m$, is finite-valued for all $t$ in a neighborhood of zero.
(c) There exists a measurable function $\kappa(\xi)$ such that $|f_i(x', \xi) - f_i(x, \xi)| \le \kappa(\xi)\,\|x' - x\|$ for all $x', x \in C$, all $i = 1, \ldots, m$, and almost every $\xi$.
(d) The moment-generating function $M_{\kappa}(t)$ is finite-valued for all $t$ in a neighborhood of zero, where $M_{\kappa}(t) := \mathbb{E}[e^{t\kappa(\xi)}]$ is the moment-generating function of the random variable $\kappa(\xi)$.

Theorem 2. Let $S^*$ and $S_N$ denote the solution sets of (1) and (3), respectively. Assuming that both $S^*$ and $S_N$ are nonempty, then, for any $\varepsilon > 0$, one has $S_N \subseteq S^* + \varepsilon \mathbb{B}$ with probability 1 for $N$ sufficiently large, where $\mathbb{B}$ denotes the closed unit ball in $\mathbb{R}^n$.

Proof. For any points $x_N \in S_N$ and $x^* \in S^*$, we have $$f(x_N) - f(x^*) = [f(x_N) - f^N(x_N)] + [f^N(x_N) - f^N(x^*)] + [f^N(x^*) - f(x^*)] \le 2 \sup_{x \in C} |f^N(x) - f(x)|,$$ since $f^N(x_N) \le f^N(x^*)$. From Assumption 1, $f^N$ converges to $f$ uniformly on $C$ with probability 1; hence, for any $\eta > 0$, there exists $N_0$ such that if $N \ge N_0$, then $\sup_{x \in C} |f^N(x) - f(x)| \le \eta/2$, and therefore $f(x_N) \le f(x^*) + \eta$. By the convexity of $f$ and the compactness of its level sets, for every $\varepsilon > 0$ there exists $\eta > 0$ such that $\{x : f(x) \le f(x^*) + \eta\} \subseteq S^* + \varepsilon \mathbb{B}$. By letting $N \to \infty$, we obtain that $x_N$ eventually belongs to this level set. This shows that $\operatorname{dist}(x_N, S^*) \le \varepsilon$ for every $x_N \in S_N$, which implies $S_N \subseteq S^* + \varepsilon \mathbb{B}$.

We now move on to discuss the exponential rate of convergence of the SAA problem (3) to the true problem (1) as the sample size increases.

Theorem 3. Let $x_N$ be a solution to the SAA problem (3) and let $S^*$ be the solution set of the true problem (1). Suppose Assumption 1 holds. Then, for every $\varepsilon > 0$, there exist positive constants $c(\varepsilon)$ and $\beta(\varepsilon)$ such that $$P\{\operatorname{dist}(x_N, S^*) \ge \varepsilon\} \le c(\varepsilon)\, e^{-N \beta(\varepsilon)}$$ for $N$ sufficiently large.

Proof. Let $\varepsilon > 0$ be any small positive number. By Theorem 2, we have $\operatorname{dist}(x_N, S^*) \to 0$ with probability 1, and the event $\{\operatorname{dist}(x_N, S^*) \ge \varepsilon\}$ can occur only if $\sup_{x \in C} |f^N(x) - f(x)|$ exceeds a positive threshold depending on $\varepsilon$. Therefore, by Assumption 1 and the uniform exponential bound for the convergence of $f^N$ to $f$, we have $P\{\operatorname{dist}(x_N, S^*) \ge \varepsilon\} \le c(\varepsilon)\, e^{-N \beta(\varepsilon)}$ for $N$ sufficiently large. The proof is complete.
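The exponential decay can be observed empirically by estimating the tail probability in Theorem 3 with repeated SAA solves. The toy instance below (two convex quadratic structure functions with $\xi \sim N(0,1)$, for which the true solution set is $S^* = \{0\}$) is hypothetical and chosen only so that $S^*$ is known in closed form; `minimize_scalar` from SciPy is an illustrative solver choice, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy SCMP: f1(x, xi) = (x - xi)^2, f2(x, xi) = 0.5*(x + 1 - xi)^2, xi ~ N(0,1).
# Then E f1 = x^2 + 1 and E f2 = 0.5*((x+1)^2 + 1), so max is minimized at x = 0.
rng = np.random.default_rng(42)
eps, replications = 0.1, 200

for N in (10, 50, 250):
    tail_hits = 0
    for _ in range(replications):
        xi = rng.normal(size=N)
        fN = lambda x: max(np.mean((x - xi) ** 2),
                           0.5 * np.mean((x + 1.0 - xi) ** 2))
        xN = minimize_scalar(fN, bounds=(-2.0, 2.0), method="bounded").x
        tail_hits += abs(xN) >= eps          # dist(x_N, S*) >= eps ?
    # The estimated tail probability should decay roughly exponentially in N.
    print(N, tail_hits / replications)
```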

3. The $\mathcal{VU}$-Theory of the SAA Problem

In the following sections, we give the $\mathcal{VU}$-theory, the $\mathcal{VU}$-decomposition algorithm frame, and the convergence analysis of the SAA problem.

The subdifferential of $f^N$ at a point $x$ can be computed in terms of the gradients of the structure functions that are active at $x$. More precisely, $$\partial f^N(x) = \operatorname{conv}\{\nabla f_i^N(x) : i \in I(x)\},$$ where $$I(x) := \{i \in \{1, \ldots, m\} : f_i^N(x) = f^N(x)\}$$ is the set of active indices at $x$. Let $\bar{x}$ be a solution of (3). By continuity of the structure functions, there exists a ball $B(\bar{x}, \delta)$ such that $I(x) \subseteq I(\bar{x})$ for all $x \in B(\bar{x}, \delta)$. For convenience, we assume that the cardinality of $I(\bar{x})$ is $m_1 \le m$ and reorder the structure functions so that $I(\bar{x}) = \{1, \ldots, m_1\}$. From now on, we consider $$f^N(x) = \max_{1 \le i \le m_1} f_i^N(x) \quad \text{for } x \in B(\bar{x}, \delta).$$ The following assumption will be used in the rest of this paper.
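For illustration, a small Python helper (hypothetical name; the tolerance is a practical addition not discussed in the paper) that identifies the active index set $I(x)$, from which $\partial f^N(x)$ is the convex hull of the active gradients:

```python
import numpy as np

def active_indices(x, fN_list, tol=1e-8):
    """I(x): indices of the structure functions attaining max_i f_i^N(x).

    The subdifferential of f^N at x is then the convex hull of
    { grad f_i^N(x) : i in I(x) }.
    """
    vals = np.array([fi(x) for fi in fN_list])
    return np.flatnonzero(vals >= vals.max() - tol)
```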

Assumption 4. The set $\{\nabla f_i^N(\bar{x}) - \nabla f_1^N(\bar{x}) : i = 2, \ldots, m_1\}$ is linearly independent.

Theorem 5. Suppose Assumption 4 holds. Then $\mathbb{R}^n$ can be decomposed at $\bar{x}$ as $\mathbb{R}^n = \mathcal{U} \oplus \mathcal{V}$, where $$\mathcal{V} = \operatorname{lin}\{\nabla f_i^N(\bar{x}) - \nabla f_1^N(\bar{x}) : i = 2, \ldots, m_1\}, \qquad \mathcal{U} = \mathcal{V}^{\perp}.$$

Proof. The proof can be obtained directly from Assumption 4 and the definitions of the spaces $\mathcal{U}$ and $\mathcal{V}$.
Given a subgradient $g \in \partial f^N(\bar{x})$ with $\mathcal{V}$-component $g_{\mathcal{V}}$, the $\mathcal{U}$-Lagrangian of $f^N$, depending on $g_{\mathcal{V}}$, is defined by $$L_{\mathcal{U}}(u; g_{\mathcal{V}}) := \min_{v \in \mathcal{V}} \{ f^N(\bar{x} + u \oplus v) - \langle g_{\mathcal{V}}, v \rangle \}, \quad u \in \mathcal{U}.$$ The associated set of $\mathcal{V}$-space minimizers is defined by $$W(u; g_{\mathcal{V}}) := \operatorname{argmin}_{v \in \mathcal{V}} \{ f^N(\bar{x} + u \oplus v) - \langle g_{\mathcal{V}}, v \rangle \}.$$
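As a numerical companion to this definition, the following sketch evaluates $L_{\mathcal{U}}(u; g_{\mathcal{V}})$ by minimizing over the $\mathcal{V}$-coordinates directly. The helper name and the use of SciPy's Nelder-Mead (which tolerates nonsmoothness in low dimensions) are illustrative choices, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def u_lagrangian(u, fN, xbar, U, V, gV, v0=None):
    """Numerically evaluate L_U(u; gV) = min_v { fN(xbar + U@u + V@v) - gV@v }.

    U, V : orthonormal basis matrices from the VU-decomposition
    gV   : V-coordinates of the chosen subgradient g
    Returns the value L_U(u; gV) and the V-space minimizer v(u).
    """
    v0 = np.zeros(V.shape[1]) if v0 is None else v0
    inner = lambda v: fN(xbar + U @ u + V @ v) - gV @ v
    res = minimize(inner, v0, method="Nelder-Mead")
    return res.fun, res.x
```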

Theorem 6. Suppose Assumption 4 holds. Let $x(u) = \bar{x} + u \oplus v(u)$ be a trajectory leading to $\bar{x}$ and let $g \in \operatorname{ri} \partial f^N(\bar{x})$. Then, for all $u \in \mathcal{U}$ sufficiently small, the following hold: (i) the nonlinear system $$f_i^N(\bar{x} + u \oplus v) - f_1^N(\bar{x} + u \oplus v) = 0, \quad i = 2, \ldots, m_1,$$ with variable $v \in \mathcal{V}$ and parameter $u$, has a unique solution $v = v(u)$, and $v(\cdot)$ is a $C^2$ function; (ii) $x(u) = \bar{x} + u \oplus v(u)$ is a $C^2$-function of $u$ with $Jv(0) = 0$; (iii) $v(u) = O(\|u\|^2)$; (iv) $L_{\mathcal{U}}(u; g_{\mathcal{V}}) = f^N(x(u)) - \langle g_{\mathcal{V}}, v(u) \rangle$; (v) $W(u; g_{\mathcal{V}}) = \{v(u)\}$, and $\nabla L_{\mathcal{U}}(u; g_{\mathcal{V}}) = U^T \bar{g}(u)$ for any $\bar{g}(u) \in \partial f^N(x(u))$ whose $\mathcal{V}$-component equals $g_{\mathcal{V}}$.

Proof. Item (i) follows from the assumption that the structure functions $f_i^N$ are $C^2$ and an application of a second-order implicit function theorem (see [14], Theorem 2.1). Since $v(\cdot)$ is $C^2$, $x(\cdot)$ is $C^2$, and the Jacobians $Jv(u)$ and $Jx(u)$ exist and are continuous. Differentiating the primal track $x(u) = \bar{x} + u \oplus v(u)$ with respect to $u$, we obtain the expression of $Jx(u)$, and item (ii) follows.
(iii) By the definitions of $v(u)$ and $x(u)$, we have $f_i^N(\bar{x} + u \oplus v(u)) = f_1^N(\bar{x} + u \oplus v(u))$ for $i = 2, \ldots, m_1$.
According to the second-order expansion of $f_i^N$ at $\bar{x}$, we obtain an expansion of $v(u)$ in powers of $u$.
Since $v(0) = 0$, $Jv(0) = 0$, and the structure functions are $C^2$, the expansion contains no constant or first-order term, and hence $v(u) = O(\|u\|^2)$.
Similarly to (iii), we get (iv) by expanding $f^N(x(u))$ and using the definition of $L_{\mathcal{U}}$. The conclusion of (v) can be obtained from item (i) and the definition of $W(u; g_{\mathcal{V}})$.

4. Algorithm and Convergence Analysis

Supposing Assumption 4 holds, we give an algorithm frame which can solve (3). The algorithm makes a step in the $\mathcal{V}$-subspace, followed by a $\mathcal{U}$-Newton step, in order to obtain a superlinear convergence rate; a runnable sketch of one iteration follows the listing below.

Algorithm 7 (algorithm frame).
Step 0. Initialization: given $\varepsilon > 0$, choose a starting point $x^0$ close enough to $\bar{x}$ and a subgradient $g^0 \in \partial f^N(x^0)$, and set $k = 0$.

Step 1. Stop if $\|g^k\| \le \varepsilon$.

Step 2. Find the active index set $I(x^k) = \{i : f_i^N(x^k) = f^N(x^k)\}$.

Step 3. Construct the $\mathcal{VU}$-decomposition at $x^k$; that is, $\mathbb{R}^n = \mathcal{U}_k \oplus \mathcal{V}_k$. Compute basis matrices $U_k$ and $V_k$, where $$\mathcal{V}_k = \operatorname{lin}\{\nabla f_i^N(x^k) - \nabla f_1^N(x^k) : i \in I(x^k), \ i \ne 1\}, \qquad \mathcal{U}_k = \mathcal{V}_k^{\perp}.$$

Step 4. Perform the $\mathcal{V}$-step. Compute $v^k$, which denotes $v(u)$ in Theorem 6(i), and set $\tilde{x}^k = x^k + 0 \oplus v^k$.

Step 5. Perform the $\mathcal{U}$-step. Compute $\Delta u^k$ from the system $$\nabla^2 L_{\mathcal{U}}(0; g_{\mathcal{V}}^k)\, \Delta u = -U_k^T g^k,$$ where $g^k \in \partial f^N(\tilde{x}^k)$ is such that $V_k^T g^k = 0$. Compute $x^{k+1} = \tilde{x}^k + U_k \Delta u^k$.

Step 6. Update: set $k = k + 1$ and return to Step 1.
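The following Python sketch assembles Steps 1-6 into a single iteration. It is a minimal illustration under stated simplifications, not the authors' implementation: the $\mathcal{U}$-Hessian is supplied by a hypothetical callable `hess_LU`, the stopping test is reduced to the $\mathcal{U}$-gradient norm, and the $\mathcal{V}$-step is realized as one Newton step on the equalization system of Theorem 6(i).

```python
import numpy as np

def vu_iteration(x, fN_list, grad_list, hess_LU, eps=1e-8):
    """One pass of the VU-algorithm frame (Algorithm 7): a minimal sketch.

    fN_list, grad_list : the structure functions f_i^N and their gradients
    hess_LU            : user-supplied callable (x, U) -> approximate U-Hessian
    Returns the next iterate, or None once the stopping test fires.
    """
    vals = np.array([fi(x) for fi in fN_list])
    act = np.flatnonzero(vals >= vals.max() - 1e-8)        # Step 2: active set
    G = np.array([grad_list[i](x) for i in act])

    # Step 3: VU-decomposition from active-gradient differences (Theorem 5).
    D = G[1:] - G[0]
    if len(D) > 0:
        _, s, Vt = np.linalg.svd(D)
        r = int(np.sum(s > 1e-10))
        V, U = Vt[:r].T, Vt[r:].T
    else:                                                  # smooth point: V = {0}
        V, U = np.zeros((x.size, 0)), np.eye(x.size)

    g_U = U.T @ G[0]             # U-component, shared by all active subgradients
    if np.linalg.norm(g_U) <= eps:                         # Step 1 (simplified)
        return None

    # Step 4: V-step -- one Newton step on the equalization system of Thm 6(i),
    # which is square and nonsingular under Assumption 4.
    if V.shape[1] > 0:
        dv = np.linalg.solve(D @ V, -(vals[act[1:]] - vals[act[0]]))
        x = x + V @ dv
    # Step 5: U-step -- Newton step on the U-Lagrangian.
    du = np.linalg.solve(hess_LU(x, U), -(U.T @ grad_list[act[0]](x)))
    return x + U @ du                                      # Step 6: next iterate
```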

Theorem 8. Suppose that the starting point $x^0$ is close enough to $\bar{x}$, that $0 \in \operatorname{ri} \partial f^N(\bar{x})$, and that $\nabla^2 L_{\mathcal{U}}(0; g_{\mathcal{V}})$ is positive definite. Then the iteration points $\{x^k\}$ generated by Algorithm 7 converge to $\bar{x}$ and satisfy $$\|x^{k+1} - \bar{x}\| = o(\|x^k - \bar{x}\|).$$

Proof. Let $u^k = U^T(x^k - \bar{x})$ and $v^k = v(u^k)$. It follows from Theorem 6(i) that $\tilde{x}^k = x^k + 0 \oplus v^k$ lies on the primal track $x(u^k)$. Since $\nabla^2 L_{\mathcal{U}}(0; g_{\mathcal{V}})$ exists and $\nabla L_{\mathcal{U}}(0; g_{\mathcal{V}}) = 0$, we have from the definition of the $\mathcal{U}$-Hessian matrix that $$\nabla L_{\mathcal{U}}(u^k; g_{\mathcal{V}}) = \nabla^2 L_{\mathcal{U}}(0; g_{\mathcal{V}})\, u^k + o(\|u^k\|).$$ It follows from the hypothesis that $\nabla^2 L_{\mathcal{U}}(0; g_{\mathcal{V}})$ is invertible, and hence the $\mathcal{U}$-Newton step of Step 5 satisfies $\|u^k + \Delta u^k\| = o(\|u^k\|)$. In consequence, combining this with $v(u^k) = O(\|u^k\|^2)$ from Theorem 6(iii), one has $\|x^{k+1} - \bar{x}\| = o(\|x^k - \bar{x}\|)$. The proof is complete.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research is supported by the National Natural Science Foundation of China under Project nos. 11301347, 11171138, and 11171049 and General Project of the Education Department of Liaoning Province no. L201242.