Journal of Function Spaces
Volume 2018, Article ID 3742575, 12 pages
https://doi.org/10.1155/2018/3742575
Research Article

Inexact SA Method for Constrained Stochastic Convex SDP and Application in Chinese Stock Market

1Information and Engineering College, Dalian University, Dalian, China
2School of Mathematical Sciences, Dalian University of Technology, Dalian, China
3School of Finance, Zhejiang University of Finance and Economics, Hangzhou, China

Correspondence should be addressed to Shuang Chen; chenshuang0707@163.com

Received 4 August 2017; Revised 17 November 2017; Accepted 13 December 2017; Published 23 January 2018

Academic Editor: Dhananjay Gopal

Copyright © 2018 Shuang Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose stochastic convex semidefinite programs (SCSDPs) to handle uncertain data in applications. For these models, we design an efficient inexact stochastic approximation (SA) method and prove the convergence, complexity, and robust treatment of the algorithm. We apply the inexact method for solving SCSDPs, where the subproblem in each iteration is only solved approximately, and show that it enjoys an iteration complexity similar to that of the exact counterpart if the subproblems are progressively solved to sufficient accuracy. Numerical experiments show that the proposed method is effective for problems with uncertain data.

1. Introduction

In this paper, we propose a class of optimization problems called stochastic convex semidefinite programs (SCSDPs):
$$\min_{X} \; \mathbb{E}\big[f(X,\xi)\big] \quad \text{s.t.} \quad \mathcal{A}X = b,\; X \succeq 0, \tag{1}$$
where $f(\cdot,\xi)$ is a smooth convex function for every realization $\xi$ of a random matrix whose probability distribution is supported on a set $\Xi$, $\mathcal{A}:\mathcal{S}^n \to \mathbb{R}^m$ is a linear map, $X \succeq 0$ means that $X \in \mathcal{S}^n_+$, and $\mathcal{S}^n$ is the space of real symmetric $n\times n$ matrices endowed with the standard trace inner product $\langle X, Y\rangle = \mathrm{Tr}(XY)$ and Frobenius norm $\|X\|_F$. $\mathcal{S}^n_+$ is the cone of positive semidefinite matrices in $\mathcal{S}^n$.

SCSDPs may be viewed as an extension of the following stochastic models:

(1) Stochastic (Linear) Semidefinite Programs (SLSDPs) ([1–5]):
$$\min_{X} \; \mathbb{E}\big[\langle C(\xi), X\rangle\big] \quad \text{s.t.} \quad \mathcal{A}X = b,\; X \succeq 0,$$
where $\langle C, X\rangle = \mathrm{Tr}(CX)$ denotes the Frobenius (trace) inner product between $C$ and $X$.

(2) Stochastic Convex Quadratic Semidefinite Programs (SCQSDPs):
$$\min_{X} \; \mathbb{E}\Big[\tfrac12\langle X, \mathcal{Q}(\xi)X\rangle + \langle C(\xi), X\rangle\Big] \quad \text{s.t.} \quad \mathcal{A}X = b,\; X \succeq 0,$$
where $\mathcal{Q}(\xi)$ is a given self-adjoint positive semidefinite linear operator on $\mathcal{S}^n$ for every $\xi$.

(3) Stochastic Nearest Correlation Matrix. Now we consider an interesting example from the finance industry. In stock research, sample correlation matrices (symmetric positive semidefinite matrices with unit diagonal) constructed from vectors of stock returns are used for predictive purposes. Unfortunately, on any day when an observation is made, data is rarely available for all the stocks of interest. Higham [6] proposed a method that computes the sample correlation of each pair of stocks using data drawn only from the days on which both stocks have data available, then computes the nearest correlation matrix and uses it for the subsequent stock analysis. This is the statistical application that motivates the nearest correlation matrix problem
$$\min_{X} \; \tfrac12\|X - G\|_F^2 \quad \text{s.t.} \quad \mathrm{diag}(X) = e,\; X \succeq 0,$$
where $e$ is a vector of all ones and $G \in \mathcal{S}^n$ is given.
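To make Higham's construction concrete, the pairwise-available-data estimate can be sketched as follows (a minimal Python sketch; the function name and the use of `None` to mark missing observations are our own illustration, not from the paper). Correlations computed pair by pair in this way need not assemble into a positive semidefinite matrix, which is exactly why the nearest correlation matrix problem arises.

```python
import math

def pairwise_corr(x, y):
    """Sample correlation of two return series, using only the days
    on which both observations are available (None marks missing data)."""
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    sxy = sum((a - mx) * (b - my) for a, b in pairs)
    sxx = sum((a - mx) ** 2 for a, _ in pairs)
    syy = sum((b - my) ** 2 for _, b in pairs)
    return sxy / math.sqrt(sxx * syy)

# Two stocks with different missing days: only the three overlapping
# observations are used, and they happen to be perfectly correlated.
r = pairwise_corr([0.01, 0.02, 0.03, None, 0.05],
                  [0.02, 0.04, 0.06, 0.08, None])
```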

Furthermore, because of a particularity of the Chinese stock market (unlike stock markets in other countries, a stock's price may not rise or fall by more than ten percent during a day), we can estimate future stock price information more accurately. We consider computing expected correlations of pairs of stock returns so as to better capture the random factors of the stock market. In order to justify the subsequent stock analysis, it is desirable to compute the nearest expectation correlation matrix and to use that matrix in the computation. We use this matrix to predict the correlation of these stocks in the future. The problem we consider is an important special case of (4):
$$\min_{X} \; \mathbb{E}\Big[\tfrac12\|X - G(\xi)\|_F^2\Big] \quad \text{s.t.} \quad \mathrm{diag}(X) = e,\; X \succeq 0, \tag{6}$$
where $G(\xi)$ is a stochastic matrix from $\mathcal{S}^n$.

Alternatively, SCSDPs may be viewed as an extension of the following deterministic convex semidefinite programs ([7–11]):
$$\min_{X} \; f(X) \quad \text{s.t.} \quad \mathcal{A}X = b,\; X \succeq 0,$$
where $f$ is a smooth convex function on $\mathcal{S}^n$, $\mathcal{A}:\mathcal{S}^n \to \mathbb{R}^m$ is a linear map, and $X \succeq 0$ means that $X \in \mathcal{S}^n_+$.

There are several methods available for solving deterministic convex semidefinite programs and their special cases, which include the accelerated proximal gradient method [8], the alternating projection method [6], the quasi-Newton method [12], the inexact semismooth Newton-CG method [13], and the inexact interior-point method [14]. However, all the methods mentioned above may not be extended to efficiently solve the SCSDPs (1), because it might not be easy to evaluate the expectation $\mathbb{E}[f(X,\xi)]$ in the following situations:
(a) $\xi$ is a random vector with a known probability distribution, but calculation of the expected value involves multidimensional integration, which is computationally expensive if not impossible.
(b) The function $f$ is known, but the distribution of $\xi$ is unknown, and information on $\xi$ can only be obtained using past data or sampling.
(c) $\mathbb{E}[f(X,\xi)]$ is not observable and must be approximately evaluated through simulation.

Under these circumstances, the existing numerical methods for deterministic convex semidefinite programs are not applicable to SCSDPs, and new methods are needed. On the other hand, Ariyawansa and Zhu [1] and Mehrotra and Gozevin [4] apply barrier decomposition algorithms and interior-point methods for solving stochastic (linear) semidefinite programs. However, these methods may not be extended to solve general convex problems.

The main purpose of this paper is to design an efficient algorithm to solve the general problem (1), including all the special cases mentioned above. The algorithm we propose here is based on the classical SA algorithm, whose subproblem must be solved exactly to generate the next iterate. The SA approach originates from the pioneering work of Robbins and Monro and is discussed in numerous publications [15–18]. In this paper, we design an inexact SA method which overcomes the limitation just mentioned. Specifically, in our inexact SA method, the subproblem is only solved approximately, and we allow the error in solving the subproblem to be deterministic or stochastic. From the theoretical point of view, we analyze the convergence, complexity, and robust treatment of the algorithm. We also give numerical results illustrating the effectiveness of our method.

The rest of the paper is organized as follows. In Section 2, we focus on the theory of the inexact SA method for the more general model, a stochastic convex matrix program over an abstract set $\mathcal{X}$, and prove convergence whether the error is deterministic or stochastic. In Section 3, we provide two error estimates for the algorithm. In Section 4, we apply the theoretical results for solving SCSDPs (1). Numerical experiments are presented in Section 5.

2. Algorithm and Convergence Analysis of General Convex Stochastic Matrix Optimization

For more generality, we first consider the following stochastic convex optimization problem:
$$\min_{X \in \mathcal{X}} \; \phi(X), \tag{8}$$
where $\mathcal{X} \subseteq \mathcal{S}^n$ is a nonempty closed convex set, $\xi$ is a random matrix whose probability distribution is supported on a set $\Xi$, and $F(\cdot,\xi)$ is a smooth convex stochastic function on $\mathcal{X}$. We assume that the expectation
$$\phi(X) = \mathbb{E}\big[F(X,\xi)\big]$$
is well defined and finite valued for every $X \in \mathcal{X}$. We also assume that the expected value function $\phi$ is continuous and convex on $\mathcal{X}$. If for every $\xi \in \Xi$ the function $F(\cdot,\xi)$ is convex on $\mathcal{X}$, then it follows that $\phi$ is convex. With these assumptions, (8) becomes a convex programming problem.

Let $\phi$ be any convex function finite at $\bar X$. A matrix $G$ is called an $\varepsilon$-subgradient of $\phi$ at $\bar X$ (where $\varepsilon \ge 0$) if
$$\phi(X) \ge \phi(\bar X) + \langle G, X - \bar X\rangle - \varepsilon \quad \text{for all } X.$$
The set of all such $\varepsilon$-subgradients is denoted by $\partial_\varepsilon \phi(\bar X)$.

Next we make the following assumptions (also in [19–21]) which will be used to analyze the convergence of the algorithms in this paper:
(A1) It is possible to generate an independent identically distributed (i.i.d.) sample $\xi_1, \xi_2, \ldots$ of realizations of the random vector $\xi$.
(A2) There is an oracle which, for a given input point $(X,\xi) \in \mathcal{X} \times \Xi$, returns an inexact stochastic subgradient, a matrix $G(X,\xi)$ such that $g(X) := \mathbb{E}[G(X,\xi)]$ is well defined and is an $\varepsilon$-subgradient of $\phi$ at $X$, that is, $g(X) \in \partial_\varepsilon \phi(X)$.
(A3) There is a positive number $M$ such that
$$\mathbb{E}\big[\|G(X,\xi)\|_F^2\big] \le M^2 \quad \text{for all } X \in \mathcal{X}.$$

Remark 1. Assumption (A2) is different from the assumption used in [19]. We only need an $\varepsilon$-subgradient, which is easier to obtain in practice.

Definition 2. The convex hull of a set $S$ (denoted by $\mathrm{conv}(S)$) is the smallest convex set that contains all the points of $S$.

Throughout the paper, we use the following notation. $\|A\|_F = \big(\sum_{i,j} A_{ij}^2\big)^{1/2}$ denotes the Frobenius norm, where $A_{ij}$ means the element at the $i$th row and $j$th column of $A$. $\langle A, B\rangle = \mathrm{Tr}(AB)$ denotes the standard trace inner product. By $\Pi_{\mathcal{X}}$, we denote the metric projection operator onto the set $\mathcal{X}$: that is, $\Pi_{\mathcal{X}}(X) = \arg\min_{Y \in \mathcal{X}} \|X - Y\|_F$. Note that $\Pi_{\mathcal{X}}$ is a nonexpanding operator, that is,
$$\|\Pi_{\mathcal{X}}(X) - \Pi_{\mathcal{X}}(Y)\|_F \le \|X - Y\|_F.$$
The notation $\lfloor t \rfloor$ stands for the largest integer less than or equal to $t$. By $\mathcal{F}_k$, we denote the history of the process up to time $k$. Unless stated otherwise, all relations between random variables are supposed to hold almost surely.

In the following, we discuss the theory of the inexact SA approach to the minimization problem (8).

The inexact SA algorithm solves (8) by mimicking the projected subgradient descent method
$$X_{k+1} = \Pi_{\mathcal{X}}\big(X_k - \gamma_k G(X_k, \xi_k)\big),$$
where $\gamma_k > 0$ are step sizes. This leads to the implementable algorithm below.

Algorithm 3.

Step 1. Give the initial point $X_1 \in \mathcal{X}$ and set $k = 1$; iterate the following steps.

Step 2. Choose suitable step sizes $\gamma_k > 0$ and computational errors $e_k \ge 0$.

Step 3. Find an approximate solution $X_{k+1}$ of
$$\min_{X \in \mathcal{X}} \; \langle G(X_k, \xi_k), X - X_k\rangle + \frac{1}{2\gamma_k}\|X - X_k\|_F^2 \tag{14}$$
such that $\|X_{k+1} - \Pi_{\mathcal{X}}(X_k - \gamma_k G(X_k, \xi_k))\|_F \le e_k$. Set $k := k + 1$; go back to Step 2.
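As a sanity check of Algorithm 3 on the simplest possible instance (a one-dimensional quadratic in place of a matrix problem; the objective, the interval, and the error schedule below are our own illustration, not from the paper), the following Python sketch runs the inexact projected SA recursion with step sizes $\gamma_k = 0.5/k$ and summable errors $e_k = (-1)^k/k^2$, and it recovers the minimizer of $\mathbb{E}[(x-\xi)^2]$, namely $\mathbb{E}[\xi]$.

```python
import random

random.seed(0)

def proj(x, lo=0.0, hi=10.0):
    """Metric projection onto the feasible interval [lo, hi]."""
    return min(max(x, lo), hi)

x = 8.0                             # initial point
N = 20000
for k in range(1, N + 1):
    xi = random.uniform(1.0, 5.0)   # sample of xi, with E[xi] = 3
    g = 2.0 * (x - xi)              # stochastic gradient of (x - xi)^2
    gamma = 0.5 / k                 # sum gamma_k = inf, sum gamma_k^2 < inf
    e = ((-1) ** k) / k ** 2        # summable computational error (inexactness)
    x = proj(x - gamma * g) + e     # inexact projected SA step

# x should now be close to E[xi] = 3 despite the per-step inexactness
```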

In order to prove the convergence of Algorithm 3 we introduce the following lemma.

Lemma 4 (see [22]). Let $\{\mathcal{F}_k\}$ be an increasing sequence of $\sigma$-algebras and let $v_k$, $a_k$, $b_k$, and $c_k$ be nonnegative random variables adapted to $\mathcal{F}_k$. If $\sum_k a_k < \infty$ and $\sum_k b_k < \infty$, and it holds almost surely that
$$\mathbb{E}\big[v_{k+1} \mid \mathcal{F}_k\big] \le (1 + a_k)v_k + b_k - c_k,$$
then $\{v_k\}$ is convergent almost surely and $\sum_k c_k < \infty$ almost surely.

We now state the main convergence theorem.

Theorem 5. Suppose that the stochastic optimization problem (8) has an optimal solution $X^*$ and that the oracle accuracies $\varepsilon_k$ in (A2) satisfy $\sum_k \gamma_k\varepsilon_k < \infty$. Let $\gamma_k > 0$ with $\sum_k \gamma_k = \infty$ and $\sum_k \gamma_k^2 < \infty$, let $\sum_k e_k < \infty$, and let $\{X_k\}$ be a sequence generated by (14); then $\{X_k\}$ converges to an optimal solution of (8) almost surely.

Proof. Note that the iterate $X_{k+1}$ is a function of the history $\xi_{[k]} = (\xi_1,\ldots,\xi_k)$ of the generated random process and hence is random.
Denote $d_k = \|X_k - X^*\|_F^2$ and $G_k = G(X_k, \xi_k)$. Since $X^* \in \mathcal{X}$ and hence $\Pi_{\mathcal{X}}(X^*) = X^*$, we can write
$$\|X_{k+1} - X^*\|_F \le \|\Pi_{\mathcal{X}}(X_k - \gamma_k G_k) - \Pi_{\mathcal{X}}(X^*)\|_F + e_k \le \|X_k - \gamma_k G_k - X^*\|_F + e_k,$$
where the second inequality is due to the nonexpansiveness of the projection operator. Squaring both sides gives
$$d_{k+1} \le d_k - 2\gamma_k\langle G_k, X_k - X^*\rangle + \gamma_k^2\|G_k\|_F^2 + 2e_k\|X_k - \gamma_k G_k - X^*\|_F + e_k^2. \tag{17}$$
Since $\xi_k$ is independent of $X_k$, we have
$$\mathbb{E}\big[\langle G_k, X_k - X^*\rangle \mid \mathcal{F}_k\big] = \langle g(X_k), X_k - X^*\rangle \ge \phi(X_k) - \phi(X^*) - \varepsilon_k. \tag{18}$$
By assumption (A3) and $\sum_k e_k < \infty$, there is a positive number $C$ such that $\mathbb{E}[\|X_k - \gamma_k G_k - X^*\|_F \mid \mathcal{F}_k] \le C$. Then, by taking conditional expectation of both sides of (17) and using (18), we obtain
$$\mathbb{E}\big[d_{k+1} \mid \mathcal{F}_k\big] \le d_k - 2\gamma_k\big(\phi(X_k) - \phi(X^*)\big) + 2\gamma_k\varepsilon_k + \gamma_k^2 M^2 + 2Ce_k + e_k^2.$$
Since $\sum_k \gamma_k\varepsilon_k < \infty$, $\sum_k \gamma_k^2 < \infty$, and $\sum_k e_k < \infty$, we know that the perturbation terms are summable. By Lemma 4, we know that $\{d_k\}$ is convergent almost surely and
$$\sum_k \gamma_k\big(\phi(X_k) - \phi(X^*)\big) < \infty \quad \text{almost surely}. \tag{21}$$
Suppose that $d_k \to d > 0$; by the convexity of $\phi$, we have $\phi(X_k) - \phi(X^*) \ge \delta$ for some $\delta > 0$ when $k$ is sufficiently large. Since $\sum_k \gamma_k = \infty$, we know that $\sum_k \gamma_k(\phi(X_k) - \phi(X^*)) = \infty$. This is in contradiction with (21); then we have $d_k \to 0$ almost surely.

Sometimes, the error at each iteration depends on the random variable $\xi$. In this setting, we consider a stochastic inexact SA algorithm.

Algorithm 6.

Step 1. Give the initial point $X_1 \in \mathcal{X}$ and set $k = 1$; iterate the following steps.

Step 2. Choose suitable step sizes $\gamma_k > 0$ and computational stochastic errors $e_k(\xi) \ge 0$.

Step 3. Find an approximate solution $X_{k+1}$ of
$$\min_{X \in \mathcal{X}} \; \langle G(X_k, \xi_k), X - X_k\rangle + \frac{1}{2\gamma_k}\|X - X_k\|_F^2 \tag{23}$$
such that $\|X_{k+1} - \Pi_{\mathcal{X}}(X_k - \gamma_k G(X_k, \xi_k))\|_F \le e_k(\xi_k)$.

Set $k := k + 1$; go back to Step 2.

Theorem 7. Suppose that the stochastic optimization problem (8) has an optimal solution $X^*$. We also suppose that the stochastic errors satisfy $e_k(\xi) \ge 0$ and that there are positive numbers $\bar e_k$ such that $\mathbb{E}[e_k(\xi)] \le \bar e_k$ and $\mathbb{E}[e_k(\xi)^2] \le \bar e_k^2$ with $\sum_k \bar e_k < \infty$. Let $\gamma_k > 0$ with $\sum_k \gamma_k = \infty$ and $\sum_k \gamma_k^2 < \infty$, and let $\{X_k\}$ be a sequence generated by (23); then $\{X_k\}$ converges to an optimal solution of (8) almost surely.

Proof. As in the proof of Theorem 5, we obtain
$$\mathbb{E}\big[d_{k+1} \mid \mathcal{F}_k\big] \le d_k - 2\gamma_k\big(\phi(X_k) - \phi(X^*)\big) + \gamma_k^2 M^2 + 2C\bar e_k + \bar e_k^2. \tag{24}$$
Since $\sum_k \gamma_k^2 < \infty$ and $\sum_k \bar e_k < \infty$, we know that the perturbation terms are summable. By Lemma 4, we know that $\{d_k\}$ is convergent almost surely and
$$\sum_k \gamma_k\big(\phi(X_k) - \phi(X^*)\big) < \infty \quad \text{almost surely}. \tag{25}$$
Suppose that $d_k \to d > 0$; by the convexity of $\phi$, we have $\phi(X_k) - \phi(X^*) \ge \delta$ for some $\delta > 0$ when $k$ is sufficiently large. Since $\sum_k \gamma_k = \infty$, we know that $\sum_k \gamma_k(\phi(X_k) - \phi(X^*)) = \infty$. This is in contradiction with (25); then we have $d_k \to 0$ almost surely.

3. Error Estimation

Suppose further that the expectation function $\phi$ is differentiable and strongly convex on $\mathcal{X}$; that is, there is a constant $c > 0$ such that
$$\phi(Y) \ge \phi(X) + \langle \nabla\phi(X), Y - X\rangle + \tfrac{c}{2}\|Y - X\|_F^2 \quad \text{for all } X, Y \in \mathcal{X}. \tag{27}$$

Next we discuss the complexity of the above algorithms; we take the second algorithm as an example.

Theorem 8. Suppose that $\phi$ is differentiable and strongly convex with modulus $c$, $\nabla\phi$ is Lipschitz continuous with constant $L$, the conditions of Theorem 7 are satisfied with step sizes $\gamma_k = \theta/k$ for some $\theta > 1/(2c)$, and the computational errors decrease fast enough that the perturbation terms in (24) are of order $1/k^2$. Then we have
$$\mathbb{E}\big[\phi(X_k)\big] - \phi(X^*) \le \frac{L\,Q(\theta)}{2k}, \quad \text{where} \quad Q(\theta) = \max\Big\{\frac{\theta^2 \bar M^2}{2c\theta - 1},\ \|X_1 - X^*\|_F^2\Big\}$$
and $\bar M^2$ bounds the combined step size and error coefficients.

Proof. Note that the strong convexity of $\phi$ implies that the minimizer $X^*$ is unique. By the optimality of $X^*$, we have $\langle \nabla\phi(X^*), X - X^*\rangle \ge 0$ for all $X \in \mathcal{X}$, which together with (27) implies that $\langle \nabla\phi(X) - \nabla\phi(X^*), X - X^*\rangle \ge c\|X - X^*\|_F^2$. In turn, it follows that
$$\langle \nabla\phi(X), X - X^*\rangle \ge c\,\|X - X^*\|_F^2 \quad \text{for all } X \in \mathcal{X},$$
and hence it follows from (24) that
$$\mathbb{E}[d_{k+1}] \le \Big(1 - \frac{2c\theta}{k}\Big)\mathbb{E}[d_k] + \frac{\theta^2\bar M^2}{k^2} \tag{32}$$
for some constant $\bar M$ collecting the step size and error coefficients. Next we prove by mathematical induction that
$$\mathbb{E}[d_k] \le \frac{Q(\theta)}{k}, \qquad Q(\theta) = \max\Big\{\frac{\theta^2\bar M^2}{2c\theta - 1},\ d_1\Big\}.$$
For $k = 1$, the claim holds naturally. Assuming it holds for $k$, (32) gives
$$\mathbb{E}[d_{k+1}] \le \frac{Q(\theta)}{k} - \frac{2c\theta\,Q(\theta) - \theta^2\bar M^2}{k^2} \le \frac{Q(\theta)}{k} - \frac{Q(\theta)}{k^2} = \frac{(k-1)\,Q(\theta)}{k^2} \le \frac{Q(\theta)}{k+1},$$
where the second inequality uses $Q(\theta) \ge \theta^2\bar M^2/(2c\theta - 1)$, so that $2c\theta\,Q(\theta) - \theta^2\bar M^2 \ge Q(\theta)$.
Since $\nabla\phi$ is Lipschitz continuous, there is a constant $L$ such that
$$\phi(X_k) - \phi(X^*) \le \tfrac{L}{2}\,d_k, \tag{37}$$
and hence
$$\mathbb{E}\big[\phi(X_k)\big] - \phi(X^*) \le \frac{L\,Q(\theta)}{2k}. \tag{38}$$

When $\gamma_k = \theta/k$, it follows from (38) that, after $N$ iterations, the expected error in terms of the objective value is of order $O(1/N)$. But this result is highly sensitive to a priori information on the strong convexity constant $c$. In order to make the SA method applicable to general convex objectives rather than only strongly convex ones, one should replace the classical step sizes $\gamma_k = \theta/k$, which can be too small to ensure a reasonable rate of convergence. We take appropriate averages of the search points rather than these points themselves. This processing method goes back to [20, 23], and we call it "robust treatment." We show the effectiveness of this technique in the following theorem.

Theorem 9. Suppose that $\phi$ is convex and the set $\mathcal{X}$ is bounded; let $D_{\mathcal{X}} = \max_{X \in \mathcal{X}}\|X - X_1\|_F$ and $\nu_t = \gamma_t/\sum_{k=1}^{N}\gamma_k$. Consider the averaged points
$$\bar X_N = \sum_{t=1}^{N} \nu_t X_t,$$
and let $\bar D^2 = D_{\mathcal{X}}^2 + 2C\sum_{t=1}^{N} e_t$. We know that
$$\mathbb{E}\big[\phi(\bar X_N)\big] - \phi(X^*) \le \frac{\bar D^2 + M^2\sum_{t=1}^{N}\gamma_t^2}{2\sum_{t=1}^{N}\gamma_t}.$$

Proof. Due to (24) and assumption (A3), we know that
$$\mathbb{E}[d_{t+1}] \le \mathbb{E}[d_t] - 2\gamma_t\,\mathbb{E}\big[\phi(X_t) - \phi(X^*)\big] + \gamma_t^2 M^2 + 2Ce_t.$$
It follows that, whenever $1 \le t \le N$, we have
$$2\gamma_t\,\mathbb{E}\big[\phi(X_t) - \phi(X^*)\big] \le \mathbb{E}[d_t] - \mathbb{E}[d_{t+1}] + \gamma_t^2 M^2 + 2Ce_t,$$
and hence, summing over $t = 1,\ldots,N$,
$$\sum_{t=1}^{N} 2\gamma_t\,\mathbb{E}\big[\phi(X_t) - \phi(X^*)\big] \le d_1 + M^2\sum_{t=1}^{N}\gamma_t^2 + 2C\sum_{t=1}^{N}e_t.$$
By the convexity of $\phi$, we have $\phi(\bar X_N) \le \sum_{t=1}^{N}\nu_t\,\phi(X_t)$. Thus, in view of $\sum_{t=1}^{N}\nu_t = 1$ and $d_1 \le D_{\mathcal{X}}^2$, we get
$$\mathbb{E}\big[\phi(\bar X_N)\big] - \phi(X^*) \le \frac{\bar D^2 + M^2\sum_{t=1}^{N}\gamma_t^2}{2\sum_{t=1}^{N}\gamma_t}. \tag{44}$$
Now we can develop step size policies along with the associated efficiency estimates based on (44). Assume that the number $N$ of iterations is fixed in advance and that $\gamma_t \equiv \gamma$. Then (44) becomes
$$\mathbb{E}\big[\phi(\bar X_N)\big] - \phi(X^*) \le \frac{\bar D^2}{2N\gamma} + \frac{M^2\gamma}{2}. \tag{45}$$
Minimizing the right-hand side of (45) over $\gamma > 0$, we arrive at the constant step size policy
$$\gamma_t = \frac{\bar D}{M\sqrt{N}}, \quad t = 1,\ldots,N, \tag{47}$$
along with the associated efficiency estimate
$$\mathbb{E}\big[\phi(\bar X_N)\big] - \phi(X^*) \le \frac{\bar D M}{\sqrt{N}}. \tag{48}$$

With the constant step size policy (47) in the above theorem, after $N$ iterations the expected error is of order $O(1/\sqrt{N})$. This is worse than the $O(1/N)$ rate for the inexact SA algorithm in the strongly convex case. However, the error bound (48) is guaranteed independently of any smoothness and/or strong convexity assumptions on $\phi$.
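The robust treatment of Theorem 9 can be illustrated on a nonsmooth, non-strongly-convex toy problem (a one-dimensional illustration of our own, not from the paper): minimizing $\mathbb{E}|x - \xi|$ over an interval with the constant step size $\gamma = D/(M\sqrt{N})$ and returning the averaged iterate, which approaches the median of $\xi$.

```python
import random

random.seed(1)

def proj(x, lo=0.0, hi=10.0):
    """Metric projection onto the feasible interval [lo, hi]."""
    return min(max(x, lo), hi)

N = 20000
D, M = 10.0, 1.0             # diameter of the feasible set, subgradient bound
gamma = D / (M * N ** 0.5)   # constant step size policy of Theorem 9
x, avg = 9.0, 0.0
for k in range(1, N + 1):
    xi = random.uniform(1.0, 5.0)     # the median of xi is 3
    g = 1.0 if x > xi else -1.0       # stochastic subgradient of |x - xi|
    x = proj(x - gamma * g)
    avg += (x - avg) / k              # running average of the iterates

# avg should be close to the median 3, even though |x - xi| is neither
# smooth nor strongly convex, so the classical 1/k step sizes would fail
```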

4. Specialization to the Case Where $\mathcal{X} = \{X \in \mathcal{S}^n_+ : \mathcal{A}X = b\}$

To illustrate the advantage of the inexact SA method over the classical SA method, we apply our method to SCSDPs in this section. Problem (1) can be expressed in the form (8) with $\mathcal{X} = \{X \in \mathcal{S}^n_+ : \mathcal{A}X = b\}$. Subproblem (14) then becomes the following constrained minimization problem:
$$\min_{X} \; \langle G(X_k, \xi_k), X - X_k\rangle + \frac{1}{2\gamma_k}\|X - X_k\|_F^2 \quad \text{s.t.} \quad \mathcal{A}X = b,\; X \succeq 0. \tag{49}$$
The KKT system corresponding to problem (49) is written as
$$X - X_k + \gamma_k\big(G(X_k, \xi_k) - \mathcal{A}^*y - S\big) = 0, \quad \mathcal{A}X = b, \quad \langle X, S\rangle = 0,\; X \succeq 0,\; S \succeq 0, \tag{50}$$
where $\mathcal{A}^*$ denotes the adjoint of $\mathcal{A}$. We can get the algorithm for problem (49).

Algorithm 10.

Step 1. Given the initial point $X_1$ with $\mathcal{A}X_1 = b$ and $X_1 \succeq 0$, and setting $k = 1$, iterate the following steps.

Step 2. Choose suitable step sizes $\gamma_k > 0$ and computational errors $e_k \ge 0$.

Step 3. Solve the KKT system (50) approximately, obtaining $X_{k+1}$ with error at most $e_k$.

Set $k := k + 1$; go back to Step 2.
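The dominant cost in Step 3 is the projection onto the semidefinite cone, which is obtained from a spectral decomposition by clipping negative eigenvalues at zero. For a $2 \times 2$ symmetric matrix this can be written in closed form; the following Python sketch (our own illustration of the standard eigenvalue-clipping formula, not code from the paper) returns the nearest positive semidefinite matrix in the Frobenius norm.

```python
import math

def proj_psd_2x2(a, b, c):
    """Project the symmetric matrix [[a, b], [b, c]] onto the PSD cone
    (Frobenius norm) by clipping its negative eigenvalues at zero."""
    m = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)
    l1, l2 = m + r, m - r                  # eigenvalues, l1 >= l2
    l1p, l2p = max(l1, 0.0), max(l2, 0.0)  # clipped eigenvalues
    if r == 0.0:                           # multiple of the identity
        return (l1p, 0.0, l1p)
    if b == 0.0:                           # already diagonal
        v1 = (1.0, 0.0) if a >= c else (0.0, 1.0)
    else:
        n = math.hypot(b, l1 - a)
        v1 = (b / n, (l1 - a) / n)         # unit eigenvector for l1
    v2 = (-v1[1], v1[0])                   # orthogonal unit eigenvector for l2
    return (l1p * v1[0] ** 2 + l2p * v2[0] ** 2,
            l1p * v1[0] * v1[1] + l2p * v2[0] * v2[1],
            l1p * v1[1] ** 2 + l2p * v2[1] ** 2)

# [[0, 1], [1, 0]] has eigenvalues +1 and -1;
# its PSD projection is [[1/2, 1/2], [1/2, 1/2]].
p = proj_psd_2x2(0.0, 1.0, 0.0)
```

For $n \times n$ iterates, the same clipping is applied to a full eigendecomposition.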

5. Application and Numerical Results

In order to assess the inexact SA method from a practical point of view, we coded the algorithm in MATLAB and ran it on several subcategories of SDP. We implemented our algorithm in MATLAB 2009a on a computer with one 2.20 GHz processor and 2.00 GB of RAM.

5.1. Numerical Results for Random Matrix

In our first example, we take a simple case of stochastic convex SDP in which the objective is a convex function for every realization of the random variable $\xi$ and the data matrix is multiplied elementwise by $\xi$ ("$\cdot$" means each element of the matrix multiplied by the random variable). We let $\xi$ be uniformly distributed with mean 1, with the interval radius varying from 0.5 to 4. Using the inexact SA method to solve this problem, we list the results in Table 1.

Table 1: Objective function is random matrix.

Table 1 shows that the inexact SA method for solving stochastic convex SDP is effective. The abscissa of Figure 1 represents the interval radius of $\xi$, and the ordinate represents the norm difference between the optimal value obtained with random $\xi$ and the deterministic one. This means that the variance of the random variable may affect the approximation; however, this influence does not exceed the tolerance range.

Figure 1: Discrete map.
5.2. Numerical Results for SCQSDPs

In this example, we tested the inexact SA method on linearly constrained SCQSDPs of the form
$$\min_{X} \; \mathbb{E}\Big[\tfrac12\langle X, \mathcal{Q}(\xi)X\rangle + \langle C, X\rangle\Big] \quad \text{s.t.} \quad \mathcal{A}X = b,\; X \succeq 0,$$
where $\xi$ is a random variable and the operator $\mathcal{Q}$ and the data $\mathcal{A}$, $b$, and $C$ were generated at random in MATLAB.
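One standard way to generate a self-adjoint positive semidefinite operator at random is the Gram construction $Q = B^{\top}B$; the sketch below is our own illustration of this construction (in Python rather than the authors' MATLAB), not the code used in the paper.

```python
import random

def random_psd(n, rng):
    """Random symmetric positive semidefinite n-by-n matrix Q = B^T B,
    built from a matrix B with i.i.d. standard Gaussian entries."""
    B = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    # Q[i][j] = sum_k B[k][i] * B[k][j], so Q = B^T B is symmetric and PSD.
    return [[sum(B[k][i] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Q = random_psd(5, random.Random(0))
```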

We show the elementary numerical results in Table 2.

Table 2: Objective function is stochastic quadratic.

Our numerical results are reported in Table 2, where the columns stand for, respectively, the matrix dimension, the primal infeasibility, the dual infeasibility, the number of outer iterations, and the primal objective value. For each instance, our algorithm solves the convex QSDP problem efficiently with very small primal infeasibility. The experimental results show that the inexact SA algorithm achieves a high degree of reliability.

5.3. Predict Correlation Coefficient in the Chinese Stock Market

In this example, we discuss how to use our inexact SA method to predict correlation coefficients in the Chinese stock market. This is the main motivation of this paper. The Chinese stock market differs from those of other countries in that a stock may not rise or fall by more than ten percent in a single day. This feature lets us estimate some indexes more accurately.

In this paper, our main concern is the correlation coefficient of stocks. The correlation coefficient is useful for knowing whether two stocks tend to move together: for a diversified portfolio, one wants stocks that are not closely related. It measures the closeness of the returns. Usually, one uses the deterministic correlation coefficient based on historical data. The price-limit rule mentioned in the previous paragraph means that we can instead use a stochastic factor to estimate the correlation coefficient in the future.
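The ±10% daily price limit can be encoded directly when simulating next-day scenarios; the sketch below (our own illustration: the Gaussian return model and its 6% volatility are assumptions, not from the paper) clamps each sampled return to the limit, so every scenario closing price stays within the admissible band.

```python
import random

def next_close_scenarios(close, n, rng, sigma=0.06):
    """Sample n scenarios for tomorrow's closing price under the Chinese
    A-share rule: the price may move at most +/- 10% in one day."""
    scenarios = []
    for _ in range(n):
        r = rng.gauss(0.0, sigma)      # raw model return (an assumption)
        r = max(-0.10, min(0.10, r))   # clamp to the daily price limit
        scenarios.append(close * (1.0 + r))
    return scenarios

prices = next_close_scenarios(20.0, 1000, random.Random(42))
# every scenario lies in the admissible band [18.0, 22.0]
```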

According to the difference in companies' total market capitalization, we can divide stocks into two groups: small-cap stocks and big-cap stocks.

(1) Small-Cap Stocks. They refer to stocks with a relatively small market capitalization. The definition of small cap can vary among brokerages, but generally it is a company with a market capitalization of less than billion.

Table 3 lists the closing prices of ten small-cap stocks from 2017-6-3 to 2017-7-3, where CP is an abbreviation of closing price. The second line of Table 3 gives the stock codes.

Table 3: Closing price of small-cap stocks.

We know that the change in the closing price on 2017-7-4 will not exceed ten percent and that a severe rise or fall is a small-probability event. According to this and model (6), we can forecast the correlation coefficients from 2017-6-4 to 2017-7-4. We list the results in Table 4.

Table 4: Prediction correlation matrix.

Now we give the true correlation coefficients of the stocks from 2017-6-4 to 2017-7-4 for comparison.

From Tables 4 and 5, we see that the correlation matrix we calculated is very close to the true correlation matrix. This means that our method for predicting the correlation matrix is effective.

Table 5: True correlation matrix.

(2) Big-Cap Stocks. In China, this is a term used by the investment community to refer to companies with a market capitalization value of more than billion. "Large cap" is an abbreviation of the term "large market capitalization." Market capitalization is calculated by multiplying the number of a company's shares outstanding by its stock price per share.

Table 6 lists the closing prices of ten big-cap stocks from 2017-6-3 to 2017-7-3.

Table 6: Closing price of big-cap stocks.

As with the small-cap stocks, we calculate the correlation matrix, shown in Table 7.

Table 7: Prediction correlation matrix.

Now we also give the true correlation coefficients of the stocks from 2017-6-4 to 2017-7-4 for comparison.

From Tables 7 and 8, we see that the calculated correlation matrix is even closer to the true correlation matrix than in the small-cap case. This means that our method for predicting the correlation matrix is more effective for companies whose total market capitalization is huge.

Table 8: True correlation matrix.

6. Conclusion

In this paper, we proposed stochastic convex semidefinite programs (SCSDPs) to handle data uncertainty in financial problems. An efficient inexact stochastic approximation method was designed, and we proved the convergence, complexity, and robust treatment of the algorithm. Numerical experiments show that the proposed method is effective for SCSDPs and also for their special cases. We also demonstrated numerically that the method is more effective for big-cap stocks.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The work is partially supported by the Natural Science Foundation of China, Grants 11626051, 11626052, 11701061, and 11501074.

References

  1. K. A. Ariyawansa and Y. Zhu, "A class of polynomial volumetric barrier decomposition algorithms for stochastic semidefinite programming," Mathematics of Computation, vol. 80, no. 275, pp. 1639–1661, 2011.
  2. K. A. Ariyawansa and Y. Zhu, "Stochastic semidefinite programming: a new paradigm for stochastic optimization," 4OR, vol. 4, no. 3, pp. 239–253, 2006.
  3. S. Jin, K. A. Ariyawansa, and Y. Zhu, "Homogeneous self-dual algorithms for stochastic semidefinite programming," Journal of Optimization Theory and Applications, vol. 155, no. 3, pp. 1073–1083, 2012.
  4. S. Mehrotra and M. G. Gozevin, "Decomposition-based interior point methods for two-stage stochastic semidefinite programming," SIAM Journal on Optimization, vol. 18, no. 1, pp. 206–222, 2007.
  5. Y. Zhu, J. Zhang, and K. Partel, "Location-aided routing with uncertainty in mobile ad hoc networks: a stochastic semidefinite programming approach," Mathematical and Computer Modelling, vol. 53, no. 11-12, pp. 2192–2203, 2011.
  6. N. J. Higham, "Computing the nearest correlation matrix - a problem from finance," IMA Journal of Numerical Analysis, vol. 22, no. 3, pp. 329–343, 2002.
  7. F. Alizadeh, "Interior point methods in semidefinite programming with applications to combinatorial optimization," SIAM Journal on Optimization, vol. 5, no. 1, pp. 13–51, 1995.
  8. K. Jiang, D. Sun, and K.-C. Toh, "An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP," SIAM Journal on Optimization, vol. 22, no. 3, pp. 1042–1064, 2012.
  9. M. J. Todd, "Semidefinite optimization," Acta Numerica, vol. 10, pp. 515–560, 2001.
  10. L. Vandenberghe and S. Boyd, "Semidefinite programming," SIAM Review, vol. 38, no. 1, pp. 49–95, 1996.
  11. H. Wolkowicz, R. Saigal, and L. Vandenberghe, Handbook of Semidefinite Programming: Theory, Algorithms, and Applications, vol. 27, Kluwer Academic Publishers, New York, NY, USA, 2000.
  12. J. Malick, "A dual approach to semidefinite least-squares problems," SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 1, pp. 272–284, 2004.
  13. H. Qi and D. Sun, "A quadratically convergent Newton method for computing the nearest correlation matrix," SIAM Journal on Matrix Analysis and Applications, vol. 28, no. 2, pp. 360–385, 2006.
  14. K.-C. Toh, R. H. Tutuncu, and M. J. Todd, "Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems," Pacific Journal of Optimization, vol. 3, pp. 135–164, 2007.
  15. S. Ghadimi and G. Lan, "Stochastic first- and zeroth-order methods for nonconvex stochastic programming," SIAM Journal on Optimization, vol. 23, no. 4, pp. 2341–2368, 2013.
  16. G. Lan, A. Nemirovski, and A. Shapiro, "Validation analysis of mirror descent stochastic approximation method," Mathematical Programming, vol. 134, no. 2, pp. 425–458, 2012.
  17. S. Ghadimi and G. Lan, "Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, I: a generic algorithmic framework," SIAM Journal on Optimization, vol. 22, no. 4, pp. 1469–1492, 2012.
  18. S. Ghadimi and G. Lan, "Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, II: shrinking procedures and optimal algorithms," SIAM Journal on Optimization, vol. 23, no. 4, pp. 2061–2089, 2013.
  19. G. Lan, "An optimal method for stochastic composite optimization," Mathematical Programming, vol. 133, no. 1-2, pp. 365–397, 2012.
  20. A. Nemirovski and D. Yudin, "On Cezari's convergence of the steepest descent method for approximating saddle points of convex-concave functions," Soviet Mathematics Doklady, vol. 19, 1978.
  21. A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro, "Robust stochastic approximation approach to stochastic programming," SIAM Journal on Optimization, vol. 19, no. 4, pp. 1574–1609, 2009.
  22. H. Robbins and D. Siegmund, "A convergence theorem for nonnegative almost supermartingales and some applications," in Optimizing Methods in Statistics, J. S. Rustagi, Ed., pp. 233–257, Academic Press, New York, NY, USA, 1971.
  23. A. S. Nemirovsky and D. B. Yudin, Problem Complexity and Method Efficiency in Optimization, John Wiley & Sons, New York, NY, USA, 1983.