Journal of Function Spaces

Volume 2018 (2018), Article ID 3742575, 12 pages

https://doi.org/10.1155/2018/3742575

## Inexact SA Method for Constrained Stochastic Convex SDP and Application in Chinese Stock Market

^1 Information and Engineering College, Dalian University, Dalian, China
^2 School of Mathematical Sciences, Dalian University of Technology, Dalian, China
^3 School of Finance, Zhejiang University of Finance and Economics, Hangzhou, China

Correspondence should be addressed to Shuang Chen

Received 4 August 2017; Revised 17 November 2017; Accepted 13 December 2017; Published 23 January 2018

Academic Editor: Dhananjay Gopal

Copyright © 2018 Shuang Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We propose stochastic convex semidefinite programs (SCSDPs) to handle uncertain data in applications. For these models, we design an efficient inexact stochastic approximation (SA) method and prove the convergence, complexity, and robust treatment of the algorithm. We apply the inexact method to solve SCSDPs in which the subproblem at each iteration is only solved approximately, and we show that it enjoys a similar iteration complexity to the exact counterpart if the subproblems are solved to progressively sufficient accuracy. Numerical experiments show that the proposed method is effective for uncertain problems.

#### 1. Introduction

In this paper, we propose a class of optimization problems called stochastic convex semidefinite programs (SCSDPs):
$$\min_{X}\ \mathbb{E}\big[f(X, \xi)\big] \quad \text{s.t.}\quad \mathcal{A}(X) = b,\ X \succeq 0, \tag{1}$$
where $f(\cdot, \xi)$ is a smooth convex function for every realization of $\xi$ on $\Xi$, $\xi$ is a random matrix whose probability distribution is supported on the set $\Xi$, $\mathcal{A} : \mathcal{S}^n \to \mathbb{R}^m$ is a linear map, $X \succeq 0$ means that $X \in \mathcal{S}^n_+$, and $\mathcal{S}^n$ is the space of $n \times n$ real symmetric matrices endowed with the standard trace inner product $\langle X, Y \rangle = \mathrm{Tr}(XY)$ and Frobenius norm $\|X\|_F = \sqrt{\langle X, X \rangle}$. $\mathcal{S}^n_+$ is the set of positive semidefinite matrices in $\mathcal{S}^n$.

SCSDPs may be viewed as an extension of the following stochastic models:

*(1) Stochastic (Linear) Semidefinite Programs (SLSDPs) ([1–5]):*
$$\min_{X}\ \mathbb{E}\big[\langle C(\xi), X \rangle\big] \quad \text{s.t.}\quad \mathcal{A}(X) = b,\ X \succeq 0,$$
where $\langle C(\xi), X \rangle = \mathrm{Tr}\big(C(\xi) X\big)$ denotes the Frobenius inner product between $C(\xi)$ and $X$.

*(2) Stochastic Convex Quadratic Semidefinite Programs (SCQSDPs).* Here $f(X, \xi) = \tfrac{1}{2}\langle X, \mathcal{Q}(\xi) X \rangle + \langle C(\xi), X \rangle$, where $\mathcal{Q}(\xi)$ is a given self-adjoint positive semidefinite linear operator on $\mathcal{S}^n$ for every $\xi \in \Xi$ and $C(\xi) \in \mathcal{S}^n$.

*(3) Stochastic Nearest Correlation Matrix.* Now we consider an interesting example from the finance industry. In stock research, sample correlation matrices (symmetric positive semidefinite matrices with unit diagonal) constructed from vectors of stock returns are used for predictive purposes. Unfortunately, on any day when an observation is made, data is rarely available for all the stocks of interest. Higham [6] proposed computing the sample correlations of pairs of stocks using data drawn only from the days on which both stocks have data available, then computing the nearest correlation matrix and using it for the subsequent stock analysis. This statistical application motivates the nearest correlation matrix problem
$$\min_{X}\ \tfrac{1}{2}\|X - G\|_F^2 \quad \text{s.t.}\quad \mathrm{diag}(X) = e,\ X \succeq 0, \tag{4}$$
where $e$ is a vector of all ones and $G \in \mathcal{S}^n$ is given.
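Higham's alternating-projection scheme for (4) is straightforward to sketch. The snippet below is our illustration in Python/NumPy (the paper's experiments use MATLAB, and every name here is ours): it alternates a projection onto the PSD cone with a projection onto the unit-diagonal set, applying Dykstra's correction to the PSD step as in [6].

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the PSD cone by zeroing out
    negative eigenvalues of an eigendecomposition."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.maximum(w, 0)) @ V.T

def nearest_correlation(G, tol=1e-8, max_iter=500):
    """Alternating projections with Dykstra's correction (after Higham [6])
    between the PSD cone and the set of unit-diagonal symmetric matrices."""
    Y = G.copy()
    dS = np.zeros_like(G)           # Dykstra correction term for the PSD step
    for _ in range(max_iter):
        R = Y - dS                  # remove the previous correction
        X = proj_psd(R)             # project onto the PSD cone
        dS = X - R                  # update the correction
        Y_new = X.copy()
        np.fill_diagonal(Y_new, 1)  # project onto the unit-diagonal set
        if np.linalg.norm(Y_new - Y) < tol:
            Y = Y_new
            break
        Y = Y_new
    return Y
```

Because both sets are closed and convex (one of them affine), the Dykstra-corrected alternation converges to the nearest correlation matrix in the Frobenius norm.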

Furthermore, owing to a particularity of the Chinese stock market (unlike stock markets in other countries, a stock's price in the Chinese market may not rise or fall by more than ten percent within a single day), we can estimate future stock price information more accurately. We consider computing expected correlations of pairs of stock returns in order to better capture the random factors of the stock market. To justify the subsequent stock analysis, it is desirable to compute the nearest expected correlation matrix and to use that matrix in the computation; we use this matrix to predict the correlation of these stocks in the future. The problem we consider is an important special case of (4):
$$\min_{X}\ \tfrac{1}{2}\,\mathbb{E}\big\|X - G(\xi)\big\|_F^2 \quad \text{s.t.}\quad \mathrm{diag}(X) = e,\ X \succeq 0,$$
where $G(\xi)$ is a stochastic matrix from $\mathcal{S}^n$.

Alternatively, SCSDPs may be viewed as an extension of the following deterministic convex semidefinite programs ([7–11]):
$$\min_{X}\ f(X) \quad \text{s.t.}\quad \mathcal{A}(X) = b,\ X \succeq 0,$$
where $f$ is a smooth convex function on $\mathcal{S}^n$, $\mathcal{A} : \mathcal{S}^n \to \mathbb{R}^m$ is a linear map, and $X \succeq 0$ means that $X \in \mathcal{S}^n_+$.

There are several methods available for solving deterministic convex semidefinite programs and their special cases, including the accelerated proximal gradient method [8], the alternating projection method [6], the quasi-Newton method [12], the inexact semismooth Newton-CG method [13], and the inexact interior-point method [14]. However, the methods mentioned above may not extend to solve the SCSDPs (1) efficiently, because it might not be easy to evaluate the expected value function $\mathbb{E}[f(X, \xi)]$ in the following situations:

(a) $\xi$ is a random vector with a known probability distribution, but calculation of the expected value involves multidimensional integration, which is computationally expensive if not impossible.

(b) The function $f(\cdot, \xi)$ is known, but the distribution of $\xi$ is unknown and information on $\xi$ can only be obtained from past data or sampling.

(c) $f$ is not observable and must be evaluated approximately through simulation.

Under these circumstances, the existing numerical methods for deterministic convex semidefinite programs are not applicable to SCSDPs, and new methods are needed. On the other hand, Ariyawansa and Zhu [1] and Mehrotra and Gozevin [4] applied barrier decomposition algorithms and interior-point methods for solving stochastic (linear) semidefinite programs. However, these methods may not extend to the convex problems considered here.

The main purpose of this paper is to design an efficient algorithm to solve the general problem (1), including all the special cases mentioned above. The algorithm we propose here is based on the classical SA algorithm, whose subproblem must be solved exactly to generate the next iterate. The SA approach originates from the pioneering work of Robbins and Monro and is discussed in numerous publications [15–18]. In this paper, we design an inexact SA method which overcomes the limitation just mentioned. Specifically, in our inexact SA method the subproblem is only solved approximately, and we allow the error in solving the subproblem to be deterministic or stochastic. From a theoretical point of view, we analyze the convergence, complexity, and robust treatment of the algorithm. We also give numerical results illustrating the effectiveness of our method.

The rest of the paper is organized as follows. In Section 2, we focus on the theory of the inexact SA method for the more general model, a stochastic convex matrix program over an abstract set, and prove convergence whether the error is deterministic or stochastic. In Section 3, we provide two error estimates for the algorithm. In Section 4, we apply the theoretical results to solving SCSDPs (1). Numerical experiments are presented last.

#### 2. Algorithm and Convergence Analysis of General Convex Stochastic Matrix Optimization

For more generality, we first consider the following stochastic convex optimization problem:
$$\min_{X \in \mathcal{X}}\ \big\{F(X) := \mathbb{E}[f(X, \xi)]\big\}, \tag{8}$$
where $\mathcal{X} \subseteq \mathcal{S}^n$ is a nonempty closed convex set, $\xi$ is a random matrix whose probability distribution is supported on the set $\Xi$, and $f(\cdot, \xi)$ is a smooth convex stochastic function on $\mathcal{X}$. We assume that the expectation $F(X) = \mathbb{E}[f(X, \xi)]$ is well defined and finite valued for every $X \in \mathcal{X}$. We also assume that the expected value function $F(\cdot)$ is continuous and convex on $\mathcal{X}$. If for every $\xi \in \Xi$ the function $f(\cdot, \xi)$ is convex on $\mathcal{X}$, then it follows that $F$ is convex. With these assumptions, (8) becomes a convex programming problem.

Let $\phi$ be any convex function finite at $\bar X$. A matrix $G$ is called an $\varepsilon$-subgradient of $\phi$ at $\bar X$ (where $\varepsilon \ge 0$) if
$$\phi(X) \ge \phi(\bar X) + \langle G, X - \bar X \rangle - \varepsilon \quad \text{for all } X.$$
The set of all such $\varepsilon$-subgradients is denoted by $\partial_\varepsilon \phi(\bar X)$.

Next we make the following assumptions (also in [19–21]), which will be used to analyze the convergence of the algorithms in this paper:

(A1) It is possible to generate an independent identically distributed (i.i.d.) sample $\xi_1, \xi_2, \dots$ of realizations of the random vector $\xi$.

(A2) There is an oracle which, for a given input point $(X, \xi)$, returns an inexact stochastic subgradient, a matrix $G(X, \xi)$, such that $g(X) := \mathbb{E}[G(X, \xi)]$ is well defined and is an $\varepsilon$-subgradient of $F$ at $X$; that is, $g(X) \in \partial_\varepsilon F(X)$.

(A3) There is a positive number $M$ such that
$$\mathbb{E}\big[\|G(X, \xi)\|_F^2\big] \le M^2 \quad \text{for all } X \in \mathcal{X}.$$

*Remark 1. *Assumption (A2) differs from the assumption used in [19]: we only need an $\varepsilon$-subgradient, which is easier to obtain in practice.

*Definition 2. *The *convex hull* of a set $S$ (denoted by $\operatorname{conv}(S)$) is the smallest convex set that contains all the points of $S$.

Throughout the paper, we use the following notation. $\|X\|_F = (\sum_{i,j} X_{ij}^2)^{1/2}$ denotes the Frobenius norm, where $X_{ij}$ means the element at the $i$th row and $j$th column of $X$. $\langle \cdot, \cdot \rangle$ denotes the standard trace inner product. By $\Pi_{\mathcal{X}}$, we denote the metric projection operator onto the set $\mathcal{X}$: that is, $\Pi_{\mathcal{X}}(X) = \arg\min_{Y \in \mathcal{X}} \|X - Y\|_F$. Note that $\Pi_{\mathcal{X}}$ is a nonexpansive operator; that is,
$$\|\Pi_{\mathcal{X}}(X) - \Pi_{\mathcal{X}}(Y)\|_F \le \|X - Y\|_F.$$
The notation $\lfloor a \rfloor$ stands for the largest integer less than or equal to $a$. By $\mathcal{F}_t$, we denote the history of the process up to time $t$. Unless stated otherwise, all relations between random variables are supposed to hold almost surely.
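For the cone $\mathcal{X} = \mathcal{S}^n_+$ that appears in (1), the metric projection has a closed form: symmetrize and zero out the negative eigenvalues. The following Python/NumPy sketch (our naming; the paper uses MATLAB) implements this projection and spot-checks the nonexpansiveness inequality on random symmetric matrices.

```python
import numpy as np

def proj_psd(A):
    """Metric projection onto the PSD cone S^n_+: symmetrize, then zero
    out the negative eigenvalues of an eigendecomposition."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.maximum(w, 0)) @ V.T

# Numerical spot-check of nonexpansiveness: ||Pi(X) - Pi(Y)||_F <= ||X - Y||_F
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); X = (A + A.T) / 2
B = rng.standard_normal((5, 5)); Y = (B + B.T) / 2
lhs = np.linalg.norm(proj_psd(X) - proj_psd(Y))   # Frobenius norm by default
rhs = np.linalg.norm(X - Y)
```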

In the following, we discuss the theory of the inexact SA approach to the minimization problem (8).

The inexact SA algorithm solves (8) by mimicking the projected subgradient descent method
$$X_{t+1} = \Pi_{\mathcal{X}}\big(X_t - \gamma_t G(X_t, \xi_t)\big),$$
where $\gamma_t > 0$ are step sizes. This leads to the executable algorithm below.

*Algorithm 3. *

*Step 1. *Give the initial point $X_1 \in \mathcal{X}$ and set $t = 1$; iterate the following steps.

*Step 2. *Choose suitable step sizes $\gamma_t$ and computational error $\delta_t$.

*Step 3. *Find an approximate solution $X_{t+1}$ of the subproblem
$$\min_{X \in \mathcal{X}}\ \tfrac{1}{2}\big\|X - \big(X_t - \gamma_t G(X_t, \xi_t)\big)\big\|_F^2, \tag{14}$$
with $\big\|X_{t+1} - \Pi_{\mathcal{X}}\big(X_t - \gamma_t G(X_t, \xi_t)\big)\big\|_F \le \delta_t$. Set $t := t + 1$; go back to Step 2.
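The loop of Algorithm 3 can be sketched as follows for the case $\mathcal{X} = \mathcal{S}^n_+$. This is our illustrative Python code, not the paper's implementation: the inexactness of the subproblem solve is simulated by perturbing the exact projection by at most $\delta_t$, and the toy objective $f(X, \xi) = \tfrac{1}{2}\|X - G(\xi)\|_F^2$ is our own choice (its minimizer over the PSD cone is the projection of the mean matrix).

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the PSD cone."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.maximum(w, 0)) @ V.T

def inexact_sa(grad_sample, X0, theta=1.0, iters=500,
               delta=lambda t: 1.0 / t**2, seed=None):
    """Inexact SA sketch (our reading of Algorithm 3): a projected
    stochastic subgradient step with gamma_t = theta / t, where the
    projection subproblem is solved only to accuracy delta_t
    (simulated here by perturbing the exact projection)."""
    rng = np.random.default_rng(seed)
    X = X0.copy()
    for t in range(1, iters + 1):
        gamma = theta / t
        G = grad_sample(X, rng)            # stochastic subgradient G(X_t, xi_t)
        P = proj_psd(X - gamma * G)        # exact subproblem solution
        E = rng.standard_normal(X.shape)
        E = (E + E.T) / 2
        X = P + delta(t) * E / np.linalg.norm(E)  # inexact iterate, error <= delta_t
    return X

# Toy instance (our choice): f(X, xi) = 0.5 * ||X - G(xi)||_F^2,
# G(xi) = Gbar + symmetric noise.
n = 4
data_rng = np.random.default_rng(1)
A0 = data_rng.standard_normal((n, n))
Gbar = (A0 + A0.T) / 2

def grad_sample(X, rng):
    N = rng.standard_normal((n, n))
    return X - (Gbar + 0.1 * (N + N.T) / 2)

Xsol = inexact_sa(grad_sample, np.zeros((n, n)), seed=2)
```

With the summable error schedule $\delta_t = 1/t^2$, the iterates settle near the true solution $\Pi_{\mathcal{S}^n_+}(\bar G)$ despite the inexact projections.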

In order to prove the convergence of Algorithm 3 we introduce the following lemma.

Lemma 4 (see [22]). *Let $(\mathcal{F}_t)$ be an increasing sequence of $\sigma$-algebras and let $v_t$, $a_t$, $b_t$, and $c_t$ be nonnegative random variables adapted to $\mathcal{F}_t$. If $\sum_t a_t < \infty$, $\sum_t b_t < \infty$, and it holds almost surely that
$$\mathbb{E}\big[v_{t+1} \mid \mathcal{F}_t\big] \le (1 + a_t) v_t - c_t + b_t,$$
then $v_t$ is convergent almost surely and $\sum_t c_t < \infty$ almost surely.*

We now state the main convergence theorem.

Theorem 5. *Suppose that the stochastic optimization problem (8) has an optimal solution $X^*$ and that the computational errors satisfy $\sum_t \delta_t < \infty$. Let $\gamma_t = \theta/t$, where $\theta > 0$, and let $\{X_t\}$ be a sequence generated by (14); then $\{X_t\}$ converges to $X^*$ almost surely.*

*Proof. *Note that the iterate $X_{t+1}$ is a function of the history of the generated random process and hence is random.

Denote $v_t = \|X_t - X^*\|_F^2$. Since $X^* \in \mathcal{X}$ and hence $\Pi_{\mathcal{X}}(X^*) = X^*$, we can write the recursion (17) for $v_{t+1}$, where the second inequality is due to the nonexpansiveness of the projection operator.

Since $\xi_t$ is independent of $X_t$, we have (18). By assumption (A3) and the summability of the errors, there is a positive number $K$ bounding the error terms. Then, by taking expectation of both sides of (17) and using (18), we obtain a recursion of the form required by Lemma 4. For $\gamma_t = \theta/t$, we know that $\sum_t \gamma_t^2 < \infty$. By Lemma 4, we know that $\|X_t - X^*\|_F^2$ is convergent almost surely and that the sum in (21) is finite almost surely. Suppose that the limit is positive; then by the convexity of $F$, the quantity $F(X_t) - F(X^*)$ is bounded away from zero when $t$ is large. Since $\sum_t \gamma_t = \infty$, the sum in (21) would diverge. This is in contradiction with (21); then we have $X_t \to X^*$ almost surely.

Sometimes, the error at each iteration depends on the random variable $\xi$. In this setting, we consider a stochastic inexact SA algorithm.

*Algorithm 6. *

*Step 1. *Give the initial point $X_1 \in \mathcal{X}$ and set $t = 1$; iterate the following steps.

*Step 2. *Choose suitable step sizes $\gamma_t$ and computational stochastic error $\delta_t(\xi)$.

*Step 3. *Find an approximate solution of the subproblem (23), in which the computational error $\delta_t(\xi)$ may depend on the random variable $\xi$.

Set $t := t + 1$; go back to Step 2.

Theorem 7. *Suppose that the stochastic optimization problem (8) has an optimal solution $X^*$. We also suppose that the stochastic errors $\delta_t(\xi)$ are summable almost surely and that there is a positive number $K$ bounding their second moments. Let $\gamma_t = \theta/t$, where $\theta > 0$, and let $\{X_t\}$ be a sequence generated by (23); then $\{X_t\}$ converges to $X^*$.*

*Proof. *Like the proof of Theorem 5, we obtain (24). For $\gamma_t = \theta/t$, we know that $\sum_t \gamma_t^2 < \infty$. By Lemma 4, we know that $\|X_t - X^*\|_F^2$ is convergent almost surely and that the sum in (25) is finite almost surely. Suppose that the limit is positive; then by the convexity of $F$, the quantity $F(X_t) - F(X^*)$ is bounded away from zero when $t$ is large. Since $\sum_t \gamma_t = \infty$, the sum in (25) would diverge. This is in contradiction with (25); then we have $X_t \to X^*$ almost surely.

#### 3. Error Estimation

Suppose further that the expectation function $F(\cdot)$ is differentiable and strongly convex on $\mathcal{X}$; that is, there is a constant $c > 0$ such that
$$F(Y) \ge F(X) + \langle \nabla F(X), Y - X \rangle + \tfrac{c}{2}\|Y - X\|_F^2 \quad \text{for all } X, Y \in \mathcal{X}. \tag{27}$$

Next we discuss the complexity of the above algorithms; we take the second algorithm as an example.

Theorem 8. *Suppose that $F$ is differentiable and strongly convex, $\nabla F$ is Lipschitz continuous, and the conditions of Theorem 7 are satisfied; then, with $\gamma_t = \theta/t$, the expected error $\mathbb{E}\|X_t - X^*\|_F^2$ is of order $O(1/t)$.*

*Proof. *Note that the strong convexity of $F$ implies that the minimizer $X^*$ is unique. By the optimality of $X^*$, we have $\langle \nabla F(X^*), X - X^* \rangle \ge 0$ for all $X \in \mathcal{X}$, which together with (27) implies $F(X) - F(X^*) \ge \tfrac{c}{2}\|X - X^*\|_F^2$ for all $X \in \mathcal{X}$. Therefore, it follows from (24) that $\mathbb{E}\|X_t - X^*\|_F^2$ satisfies the one-step recursion (32). For some constant $Q > 0$, by (32), we can prove that $\mathbb{E}\|X_t - X^*\|_F^2 \le Q/t$ using mathematical induction: for $t = 1$ the claim holds naturally, and for $t \ge 1$ the induction step follows because the contraction factor in (32) is less than 1.

Since $\nabla F$ is Lipschitz continuous, there is a constant $L > 0$ such that the objective gap $F(X_t) - F(X^*)$ is bounded by a multiple of $\|X_t - X^*\|_F^2$, and hence the expected error in the objective value inherits the $O(1/t)$ bound (38) from the estimate on $\mathbb{E}\|X_t - X^*\|_F^2$.

When $\gamma_t = \theta/t$, it follows from (38) that, after $t$ iterations, the expected error in terms of the objective value is of order $O(1/t)$. But this result is highly sensitive to a priori information on the strong convexity constant $c$. In order to make the SA method applicable to general convex objectives rather than only to strongly convex ones, one should replace the classical step sizes $\gamma_t = \theta/t$, which can be too small to ensure a reasonable rate of convergence. Instead, we take appropriate averages of the search points rather than the points themselves. This processing method goes back to [20, 23], and we call it "robust treatment." We will show the effectiveness of this technique in the following theorem.

Theorem 9. *Suppose that is convex; let and . Consider the points , and let We know that *

*Proof. *Due to (24) and assumption (A3), we know that It follows that whenever , we haveand hence, setting ,By convexity of , we have , and by convexity of , we have . Thus, by (43) and in view of and , we getNow we can develop step size policies along with the associated efficiency estimates based on (44). Assume that the number of iterations of the method is fixed in advance and that . Then (44) becomesMinimizing the right-hand side of (45) over , we arrive at the constant step size policyAlong with the associated efficiency estimate and , we have

With the constant step size policy (47) in the above theorem, after $N$ iterations, the expected error is of order $O(1/\sqrt{N})$. This is worse than the $O(1/t)$ rate of the inexact SA algorithm in the strongly convex case. However, the error bound (48) is guaranteed independently of any smoothness and/or strong convexity assumptions on $F$.
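The robust treatment (constant step size plus iterate averaging) can be sketched as follows. This is again our own illustration in Python/NumPy: `D` and `M` stand for the diameter-type and subgradient-bound constants, and the step size $\gamma = D/(M\sqrt{N})$ is our reading of the constant policy (47); the toy instance is the same stochastic nearest-PSD problem as before.

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the PSD cone."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.maximum(w, 0)) @ V.T

def robust_sa(grad_sample, X0, N, D, M, seed=None):
    """Robust SA sketch: N projected stochastic subgradient steps with a
    constant step size gamma = D / (M * sqrt(N)), returning the average
    of the search points rather than the last iterate."""
    rng = np.random.default_rng(seed)
    gamma = D / (M * np.sqrt(N))
    X = X0.copy()
    avg = np.zeros_like(X0)
    for _ in range(N):
        avg += X / N                 # running average of the search points
        G = grad_sample(X, rng)
        X = proj_psd(X - gamma * G)
    return avg

# Toy instance (our choice): minimize E 0.5 * ||X - G(xi)||_F^2 over the
# PSD cone; the solution is the PSD projection of the mean matrix Gbar.
n = 4
data_rng = np.random.default_rng(3)
A0 = data_rng.standard_normal((n, n))
Gbar = (A0 + A0.T) / 2

def grad_sample(X, rng):
    Z = rng.standard_normal((n, n))
    return X - (Gbar + 0.1 * (Z + Z.T) / 2)

Xavg = robust_sa(grad_sample, np.zeros((n, n)), N=2000, D=5.0, M=5.0, seed=4)
```

Averaging makes the output insensitive to the strong convexity constant: no knowledge of $c$ enters the step size choice.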

#### 4. Specialization to the Case Where $\mathcal{X} = \{X \in \mathcal{S}^n : \mathcal{A}(X) = b,\ X \succeq 0\}$

To illustrate the advantage of the inexact SA method over the classical SA method, we apply our method to SCSDPs in this section. Problem (1) can be expressed in the form (8) with $\mathcal{X} = \{X \in \mathcal{S}^n : \mathcal{A}(X) = b,\ X \succeq 0\}$. Subproblem (14) then becomes the following constrained minimization problem:
$$\min_{X}\ \tfrac{1}{2}\big\|X - \big(X_t - \gamma_t G(X_t, \xi_t)\big)\big\|_F^2 \quad \text{s.t.}\quad \mathcal{A}(X) = b,\ X \succeq 0. \tag{49}$$
The KKT system corresponding to problem (49) is written as
$$X - \big(X_t - \gamma_t G(X_t, \xi_t)\big) - \mathcal{A}^*(y) - S = 0,\qquad \mathcal{A}(X) = b,\qquad X \succeq 0,\ S \succeq 0,\ \langle X, S \rangle = 0,$$
where $\mathcal{A}^*$ is the adjoint of $\mathcal{A}$ and $y$, $S$ are multipliers. We can then state the algorithm for problem (49).

*Algorithm 10. *

*Step 1. *Give the initial point $X_1$ and set $t = 1$; iterate the following steps.

*Step 2. *Choose suitable step sizes $\gamma_t$ and computational error $\delta_t$.

*Step 3. *Solve the following equation:

Set $t := t + 1$; go back to Step 2.
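Subproblem (49) is a projection onto the intersection of the affine set $\{\mathcal{A}(X) = b\}$ and the PSD cone, so it can also be solved approximately by Dykstra-style alternating projections; an inexact solve simply stops the alternation early. The sketch below is our own (the paper works with the KKT system instead): `Amat` represents $\mathcal{A}$ acting on $\mathrm{vec}(X)$ and is assumed to have full row rank.

```python
import numpy as np

def solve_subproblem(Z, Amat, b, tol=1e-8, max_iter=1000):
    """Approximately project Z onto {X in S^n : Amat @ vec(X) = b, X PSD}
    (subproblem (49)) via Dykstra-style alternating projections between
    the affine set and the PSD cone. Our sketch, not the paper's solver;
    Amat is assumed to have full row rank."""
    n = Z.shape[0]
    AAt_inv = np.linalg.inv(Amat @ Amat.T)

    def proj_affine(X):
        x = X.ravel()
        x = x - Amat.T @ (AAt_inv @ (Amat @ x - b))
        return x.reshape(n, n)

    def proj_psd(A):
        w, V = np.linalg.eigh((A + A.T) / 2)
        return (V * np.maximum(w, 0)) @ V.T

    X, dS = Z.copy(), np.zeros_like(Z)
    for _ in range(max_iter):
        R = X - dS               # remove the previous correction
        P = proj_psd(R)          # PSD step, with Dykstra correction
        dS = P - R
        X_new = proj_affine(P)   # affine step (no correction needed)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X
```

For example, when $\mathcal{A}$ extracts the diagonal and $b = e$, this reduces to the nearest correlation matrix problem of Section 1.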

#### 5. Application and Numerical Results

In order to assess the inexact SA method from a practical point of view, we coded the algorithm in MATLAB and ran it on several subcategories of SDPs. We implemented our algorithm in MATLAB 2009a on a computer with one 2.20 GHz processor and 2.00 GB RAM.

##### 5.1. Numerical Results for Random Matrix

In our first example, we take a simple case of stochastic convex SDP in which $f(X, \xi)$ is a convex function for every $\xi$ and the random data enter as $C \circ \xi$, where "$\circ$" means each element of $C$ multiplied by the random variable $\xi$. We let $\xi$ be uniformly distributed with mean 1, with an interval radius varying from 0.5 to 4. Using the inexact SA method to solve this problem, we list the results in Table 1.
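Under our reading of this experiment (each entry of $C$ scaled by an independent uniform variable with mean 1 and interval radius $r$, symmetrized to stay in $\mathcal{S}^n$), the random data can be generated as follows; all names here are ours.

```python
import numpy as np

def sample_scaled(C, r, rng):
    """Draw xi with entries uniform on [1 - r, 1 + r] (mean 1, interval
    radius r) and scale each element of C by it. This is a hypothetical
    reading of the elementwise 'o' operation in the text."""
    xi = rng.uniform(1.0 - r, 1.0 + r, size=C.shape)
    xi = (xi + xi.T) / 2          # keep the random sample symmetric
    return C * xi

rng = np.random.default_rng(0)
C = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.3], [0.2, 0.3, 1.0]])
samples = [sample_scaled(C, 0.5, rng) for _ in range(10000)]
mean_est = np.mean(samples, axis=0)   # E[xi] = 1, so this should be near C
```

Since $\mathbb{E}[\xi] = 1$ entrywise, the expected data matrix equals $C$, while larger radii $r$ make the individual samples increasingly noisy, which is what Table 1 varies.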