Inexact SA Method for Constrained Stochastic Convex SDP and Application in Chinese Stock Market

Shuang Chen, Li-Ping Pang, Jian Lv, and Zun-Quan Xia

Journal of Function Spaces, vol. 2018, Article ID 3742575, 12 pages, 2018. https://doi.org/10.1155/2018/3742575

Academic Editor: Dhananjay Gopal
Received 04 Aug 2017; Revised 17 Nov 2017; Accepted 13 Dec 2017; Published 23 Jan 2018

Abstract

We propose stochastic convex semidefinite programs (SCSDPs) to handle uncertain data in applications. For these models, we design an efficient inexact stochastic approximation (SA) method and prove the convergence, complexity, and robust treatment of the algorithm. We apply the inexact method to SCSDPs whose subproblem in each iteration is only solved approximately and show that it enjoys an iteration complexity similar to that of the exact counterpart if the subproblems are progressively solved to sufficient accuracy. Numerical experiments show that the proposed method is effective for uncertain problems.

1. Introduction

In this paper, we propose a class of optimization problems called stochastic convex semidefinite programs (SCSDPs):

  $\min_{X}\; \mathbb{E}[f(X,\xi)] \quad \text{s.t.} \quad \mathcal{A}(X) = b,\; X \succeq 0,$  (1)

where $f(\cdot,\xi)$ is a smooth convex function for every realization of $\xi \in \Xi$, $\xi$ is a random matrix whose probability distribution $P$ is supported on the set $\Xi$, $\mathcal{A} : S^n \to \mathbb{R}^m$ is a linear map, $X \succeq 0$ means that $X$ is positive semidefinite, and $S^n$ is the space of $n \times n$ real symmetric matrices endowed with the standard trace inner product $\langle X, Y \rangle = \mathrm{tr}(XY)$ and Frobenius norm $\|X\|_F$. $S^n_+$ is the set of positive semidefinite matrices in $S^n$.

SCSDPs may be viewed as an extension of the following stochastic models:

(1) Stochastic (Linear) Semidefinite Programs (SLSDPs) ([1–5]):

  $\min_{X}\; \langle C, X \rangle + \mathbb{E}[Q(X,\xi)] \quad \text{s.t.} \quad \mathcal{A}(X) = b,\; X \succeq 0,$  (2)

where $Q(X,\xi)$ is the minimum of the second-stage problem

  $\min_{Y}\; \langle C(\xi), Y \rangle \quad \text{s.t.} \quad \mathcal{T}(\xi)X + \mathcal{W}(\xi)Y = h(\xi),\; Y \succeq 0,$  (3)

and $\langle C, X \rangle$ denotes the Frobenius inner product between $C$ and $X$.

(2) Stochastic Convex Quadratic Semidefinite Programs (SCQSDPs):

  $\min_{X}\; \mathbb{E}\big[\tfrac{1}{2}\langle X, \mathcal{Q}(\xi)X \rangle + \langle C(\xi), X \rangle\big] \quad \text{s.t.} \quad \mathcal{A}(X) = b,\; X \succeq 0,$  (4)

where $\mathcal{Q}(\xi)$ is a given self-adjoint positive semidefinite linear operator on $S^n$ for every $\xi$ and $C(\xi) \in S^n$.

(3) Stochastic Nearest Correlation Matrix. Now we consider an interesting example from the finance industry. In stock research, sample correlation matrices (symmetric positive semidefinite matrices with unit diagonal) constructed from vectors of stock returns are used for predictive purposes. Unfortunately, on any day when an observation is made, data is rarely available for all the stocks of interest. Higham [6] proposed computing the sample correlations of pairs of stocks using data drawn only from the days on which both stocks have data available, then computing the nearest correlation matrix and using it for the subsequent stock analysis. This statistical application motivates the nearest correlation matrix problem

  $\min_{X}\; \tfrac{1}{2}\|X - G\|_F^2 \quad \text{s.t.} \quad \mathrm{diag}(X) = e,\; X \succeq 0,$  (5)

where $e$ is a vector of all ones and $G \in S^n$ is given.
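For concreteness, the following is a minimal MATLAB sketch of the alternating-projections method of Higham [6] for problem (5); the input G, the iteration cap, and the tolerance are placeholders for illustration, not the authors' implementation.

    % Nearest correlation matrix by alternating projections with
    % Dykstra's correction (sketch of the method of Higham [6]).
    function X = nearest_corr(G, maxit, tol)
        n = size(G, 1);
        Y = G; dS = zeros(n);            % dS: Dykstra correction term
        for k = 1:maxit
            R = Y - dS;                  % remove correction before PSD step
            [V, D] = eig((R + R') / 2);
            X = V * max(D, 0) * V';      % projection onto the PSD cone
            dS = X - R;                  % update the correction
            Y = X;
            Y(1:n+1:end) = 1;            % projection onto the unit-diagonal set
            if norm(Y - X, 'fro') < tol, break; end
        end
        X = Y;
    end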

Furthermore, owing to a particularity of the Chinese stock market (unlike stock markets in other countries, a stock's price may not rise or fall by more than ten percent during a day), we can estimate future stock-price information more accurately. We consider computing expected correlations of pairs of stock returns to better capture the random factors of the stock market. In order to justify the subsequent stock analysis, it is desirable to compute the nearest expected correlation matrix and to use that matrix in the computation; we use this matrix to predict the correlation of these stocks in the future. The problem we consider is an important special case of (4):

  $\min_{X}\; \mathbb{E}\big[\tfrac{1}{2}\|X - G(\xi)\|_F^2\big] \quad \text{s.t.} \quad \mathrm{diag}(X) = e,\; X \succeq 0,$  (6)

where $G(\xi)$ is a random matrix with $\xi$ drawn from $\Xi$.

Alternatively, SCSDPs may be viewed as an extension of the following deterministic convex semidefinite programs ([7–11]):

  $\min_{X}\; f(X) \quad \text{s.t.} \quad \mathcal{A}(X) = b,\; X \succeq 0,$  (7)

where $f$ is a smooth convex function on $S^n$, $\mathcal{A} : S^n \to \mathbb{R}^m$ is a linear map, $b \in \mathbb{R}^m$, and $X \succeq 0$ means that $X$ is positive semidefinite.

There are several methods available for solving deterministic convex semidefinite programs and their special cases, including the accelerated proximal gradient method [8], the alternating projection method [6], the quasi-Newton method [12], the inexact semismooth Newton-CG method [13], and the inexact interior-point method [14]. However, these methods may not extend to solve the SCSDPs (1) efficiently, because it might not be easy to evaluate the expectation $\mathbb{E}[f(X,\xi)]$ in the following situations (a minimal sampling sketch for case (b) follows the list):

(a) $\xi$ is a random vector with a known probability distribution, but calculating the expected value involves multidimensional integration, which is computationally expensive if not impossible.

(b) The function $f$ is known, but the distribution of $\xi$ is unknown, and information on $\xi$ can only be obtained from past data or sampling.

(c) $f$ is not observable and must be evaluated approximately through simulation.
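As an illustration of case (b), the expectation can only be replaced by a sample average over past data; in the following minimal MATLAB sketch, f_sample and xi_data are hypothetical placeholders, not objects from the paper.

    % Sample-average estimate of E[f(X, xi)] from past realizations of xi.
    N = numel(xi_data);                          % number of past realizations
    fbar = 0;
    for i = 1:N
        fbar = fbar + f_sample(X, xi_data{i});   % accumulate f(X, xi_i)
    end
    fbar = fbar / N;                             % estimate of E[f(X, xi)]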

Under these circumstances, the existing numerical methods for deterministic convex semidefinite programs are not applicable to SCSDPs, and new methods are needed. On the other hand, Ariyawansa and Zhu [1] and Mehrotra and Gozevin [4] apply barrier decomposition algorithms and interior-point methods to stochastic (linear) semidefinite programs. However, these methods may not extend to the convex problems considered here.

The main purpose of this paper is to design an efficient algorithm to solve the general problem (1), including all the special cases mentioned above. The algorithm we propose is based on the classical SA algorithm, whose subproblem must be solved exactly to generate the next iterate. The SA approach originates from the pioneering work of Robbins and Monro and is discussed in numerous publications [15–18]. In this paper, we design an inexact SA method which overcomes the limitation just mentioned. Specifically, in our inexact SA method the subproblem is only solved approximately, and we allow the error in solving the subproblem to be deterministic or stochastic. From the theoretical point of view, we analyze the convergence, complexity, and robust treatment of the algorithm. We also give numerical results illustrating the effectiveness of our method.

The rest of the paper is organized as follows. In Section 2, we develop the theory of the inexact SA method for the more general model, a stochastic convex matrix program over an abstract set $\mathcal{X}$, and prove convergence whether the error is deterministic or stochastic. In Section 3, we provide two error estimates for the algorithm. In Section 4, we apply the theoretical results to solve SCSDPs (1). Numerical experiments are reported in Section 5.

2. Algorithm and Convergence Analysis of General Convex Stochastic Matrix Optimization

For more generality, we first consider the following stochastic convex optimization problem:

  $\min_{X \in \mathcal{X}}\; f(X) := \mathbb{E}[F(X,\xi)],$  (8)

where $\mathcal{X} \subset S^n$ is a nonempty closed convex set, $\xi$ is a random matrix whose probability distribution $P$ is supported on the set $\Xi$, and $F(\cdot,\xi)$ is a smooth convex stochastic function on $\mathcal{X}$. We assume that the expectation

  $f(X) = \mathbb{E}[F(X,\xi)] = \int_{\Xi} F(X,\xi)\, dP(\xi)$  (9)

is well defined and finite valued for every $X \in \mathcal{X}$. We also assume that the expected value function $f(\cdot)$ is continuous and convex on $\mathcal{X}$. If for every $\xi \in \Xi$ the function $F(\cdot,\xi)$ is convex on $\mathcal{X}$, then it follows that $f(\cdot)$ is convex. With these assumptions, (8) becomes a convex programming problem.

Let $f$ be any convex function finite at $\bar{X}$. A matrix $G$ is called an $\epsilon$-subgradient of $f$ at $\bar{X}$ (where $\epsilon \ge 0$) if

  $f(X) \ge f(\bar{X}) + \langle G, X - \bar{X} \rangle - \epsilon \quad \text{for all } X.$  (10)

The set of all such $\epsilon$-subgradients is denoted by $\partial_{\epsilon} f(\bar{X})$.

Next we make the following assumptions (also in [19–21]), which will be used to analyze the convergence of the algorithms in this paper:

(A1) It is possible to generate an independent identically distributed (i.i.d.) sample $\xi_1, \xi_2, \ldots$ of realizations of the random matrix $\xi$.

(A2) There is an oracle which, for a given input point $(X, \xi) \in \mathcal{X} \times \Xi$, returns an inexact stochastic subgradient, a matrix $G(X,\xi)$, such that $g(X) := \mathbb{E}[G(X,\xi)]$ is well defined and is an $\epsilon$-subgradient of $f$ at $X$; that is, $g(X) \in \partial_{\epsilon} f(X)$.

(A3) There is a positive number $M$ such that

  $\mathbb{E}\big[\|G(X,\xi)\|_F^2\big] \le M^2 \quad \text{for all } X \in \mathcal{X}.$  (11)

Remark 1. Assumption (A2) differs from the assumption used in [19]. We only require an $\epsilon$-subgradient, which is easier to obtain in practice.

Definition 2. The convex hull of a set $S$ (denoted by $\mathrm{conv}(S)$) is the smallest convex set that contains all the points of $S$.

Throughout the paper, we use the following notation. $\|X\|_F = (\sum_{i,j} X_{ij}^2)^{1/2}$ denotes the Frobenius norm, where $X_{ij}$ is the element in the $i$th row and $j$th column of $X$. $\langle \cdot, \cdot \rangle$ denotes the standard trace inner product. By $\Pi_{\mathcal{X}}$ we denote the metric projection operator onto the set $\mathcal{X}$; that is, $\Pi_{\mathcal{X}}(X) = \arg\min_{Y \in \mathcal{X}} \|X - Y\|_F$. Note that $\Pi_{\mathcal{X}}$ is a nonexpanding operator; that is,

  $\|\Pi_{\mathcal{X}}(X) - \Pi_{\mathcal{X}}(Y)\|_F \le \|X - Y\|_F.$  (12)

The notation $\lfloor a \rfloor$ stands for the largest integer less than or equal to $a$. By $\mathcal{F}_t = \sigma(\xi_1, \ldots, \xi_t)$ we denote the history of the process up to time $t$. Unless stated otherwise, all relations between random variables are supposed to hold almost surely.
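As an illustrative sketch: when $\mathcal{X}$ is the PSD cone $S^n_+$ (the case of interest in Section 4), the metric projection has the closed form below; for a general closed convex set, $\Pi_{\mathcal{X}}$ must be supplied by its own subroutine.

    % Metric projection onto the PSD cone via eigendecomposition.
    function X = proj_psd(Y)
        Y = (Y + Y') / 2;           % symmetrize against round-off
        [V, D] = eig(Y);
        X = V * max(D, 0) * V';     % drop the negative eigenvalues
    end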

In the following, we develop the theory of the inexact SA approach to the minimization problem (8).

The inexact SA algorithm solves (8) by mimicking the subgradient descent iteration

  $X_{t+1} = \Pi_{\mathcal{X}}\big(X_t - \gamma_t G(X_t, \xi_t)\big),$  (13)

where $\gamma_t > 0$ are step sizes. This leads to the executable algorithm below.

Algorithm 3.

Step 1. Give the initial point $X_1 \in \mathcal{X}$ and set $t = 1$; iterate the following steps.

Step 2. Choose a suitable step size $\gamma_t$ and computational error $\epsilon_t$.

Step 3. Find an approximate solution $X_{t+1}$ of

  $\min_{X \in \mathcal{X}}\; \langle G(X_t, \xi_t), X - X_t \rangle + \frac{1}{2\gamma_t}\|X - X_t\|_F^2$  (14)

such that $\|X_{t+1} - \Pi_{\mathcal{X}}(X_t - \gamma_t G(X_t, \xi_t))\|_F \le \epsilon_t$. Set $t := t + 1$; go back to Step 2.
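The following is a minimal MATLAB sketch of Algorithm 3 under the stated assumptions: sample_xi draws $\xi_t$ (A1), subgrad_oracle returns the inexact stochastic subgradient $G(X_t, \xi_t)$ (A2), and proj_X computes (or approximates) the projection in (14); all three names are hypothetical placeholders.

    % One run of the inexact SA iteration (13)-(14).
    X = X0;
    for t = 1:T
        xi    = sample_xi();              % Step 2: draw xi_t
        gamma = theta / t;                % classical step size gamma_t
        G     = subgrad_oracle(X, xi);
        X     = proj_X(X - gamma * G);    % Step 3: approximate projection
    end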

In order to prove the convergence of Algorithm 3 we introduce the following lemma.

Lemma 4 (see [22]). Let $\{\mathcal{F}_t\}$ be an increasing sequence of $\sigma$-algebras and let $V_t$, $a_t$, $b_t$, and $c_t$ be nonnegative random variables adapted to $\mathcal{F}_t$. If $\sum_t a_t < \infty$, $\sum_t b_t < \infty$, and it holds almost surely that

  $\mathbb{E}[V_{t+1} \mid \mathcal{F}_t] \le (1 + a_t)V_t - c_t + b_t,$  (15)

then $V_t$ is convergent almost surely and $\sum_t c_t < \infty$ almost surely.

We now state the main convergence theorem.

Theorem 5. Suppose that the stochastic optimization problem (8) has an optimal solution $X^*$ and that $\sum_t \epsilon_t < \infty$. Let the step sizes satisfy $\gamma_t > 0$, $\sum_t \gamma_t = \infty$, and $\sum_t \gamma_t^2 < \infty$, and let $\{X_t\}$ be a sequence generated by (14); then $A_t := \tfrac{1}{2}\|X_t - X^*\|_F^2$ converges almost surely and $\liminf_{t \to \infty} f(X_t) \le f(X^*) + \epsilon$.

Proof. Note that the iterate $X_t$ is a function of the history $(\xi_1, \ldots, \xi_{t-1})$ of the generated random process and hence is random.
Denote

  $A_t := \tfrac{1}{2}\|X_t - X^*\|_F^2.$  (16)

Since $X^* \in \mathcal{X}$ and hence $\Pi_{\mathcal{X}}(X^*) = X^*$, we can write

  $A_{t+1} \le \tfrac{1}{2}\big\|X_t - \gamma_t G(X_t, \xi_t) - X^*\big\|_F^2 + O(\epsilon_t) = A_t - \gamma_t \langle G(X_t, \xi_t), X_t - X^* \rangle + \tfrac{1}{2}\gamma_t^2 \|G(X_t, \xi_t)\|_F^2 + O(\epsilon_t),$  (17)

where the second inequality is due to the nonexpansiveness of the projection operator.
Since $\xi_t$ is independent of $X_t$, we have, by assumption (A2),

  $\mathbb{E}\big[\langle G(X_t, \xi_t), X_t - X^* \rangle \mid \mathcal{F}_{t-1}\big] = \langle g(X_t), X_t - X^* \rangle \ge f(X_t) - f(X^*) - \epsilon.$  (18)

By assumption (A3) and (11), there is a positive number $M$ such that $\mathbb{E}[\|G(X_t, \xi_t)\|_F^2] \le M^2$. Then, by taking expectations on both sides of (17) and using (18), we obtain

  $\mathbb{E}[A_{t+1} \mid \mathcal{F}_{t-1}] \le A_t - \gamma_t\big(f(X_t) - f(X^*) - \epsilon\big) + \tfrac{1}{2}\gamma_t^2 M^2 + O(\epsilon_t).$  (19)

Since $\sum_t \gamma_t^2 < \infty$ and $\sum_t \epsilon_t < \infty$, the summability conditions of Lemma 4 hold. By Lemma 4, $A_t$ is convergent almost surely and

  $\sum_t \gamma_t \big(f(X_t) - f(X^*) - \epsilon\big) < \infty \quad \text{almost surely}.$  (21)

Suppose that $\liminf_{t \to \infty} f(X_t) > f(X^*) + \epsilon$; then there are $\delta > 0$ and $t_0$ such that $f(X_t) - f(X^*) - \epsilon \ge \delta$ for all $t \ge t_0$. Since $\sum_t \gamma_t = \infty$, we would have $\sum_t \gamma_t (f(X_t) - f(X^*) - \epsilon) = \infty$, in contradiction with (21); then we have $\liminf_{t \to \infty} f(X_t) \le f(X^*) + \epsilon$ almost surely.

Sometimes the error at each iteration depends on the random variable $\xi$. In this setting, we consider a stochastic inexact SA algorithm.

Algorithm 6.

Step 1. Give the initial point $X_1 \in \mathcal{X}$ and set $t = 1$; iterate the following steps.

Step 2. Choose a suitable step size $\gamma_t$ and computational stochastic error $\epsilon_t(\xi_t)$.

Step 3. Find an approximate solution $X_{t+1}$ of (14) with stochastic error $\epsilon_t(\xi_t)$:

  $\|X_{t+1} - \Pi_{\mathcal{X}}(X_t - \gamma_t G(X_t, \xi_t))\|_F \le \epsilon_t(\xi_t).$  (23)

Set $t := t + 1$; go back to Step 2.

Theorem 7. Suppose that the stochastic optimization problem (8) has an optimal solution $X^*$. We also suppose that the stochastic errors satisfy $\sum_t \mathbb{E}[\epsilon_t(\xi_t)] < \infty$ and that there is a positive number $M$ such that (11) holds. Let the step sizes satisfy $\gamma_t > 0$, $\sum_t \gamma_t = \infty$, and $\sum_t \gamma_t^2 < \infty$, and let $\{X_t\}$ be a sequence generated by (23); then $A_t$ converges almost surely and $\liminf_{t \to \infty} f(X_t) \le f(X^*) + \epsilon$.

Proof. As in the proof of Theorem 5, we obtain

  $\mathbb{E}[A_{t+1} \mid \mathcal{F}_{t-1}] \le A_t - \gamma_t\big(f(X_t) - f(X^*) - \epsilon\big) + \tfrac{1}{2}\gamma_t^2 M^2 + O\big(\mathbb{E}[\epsilon_t(\xi_t)]\big).$  (24)

Since $\sum_t \gamma_t^2 < \infty$ and $\sum_t \mathbb{E}[\epsilon_t(\xi_t)] < \infty$, the summability conditions of Lemma 4 hold. By Lemma 4, $A_t$ is convergent almost surely and

  $\sum_t \gamma_t\big(f(X_t) - f(X^*) - \epsilon\big) < \infty \quad \text{almost surely}.$  (25)

Suppose that $\liminf_{t \to \infty} f(X_t) > f(X^*) + \epsilon$; then there are $\delta > 0$ and $t_0$ such that $f(X_t) - f(X^*) - \epsilon \ge \delta$ for all $t \ge t_0$. Since $\sum_t \gamma_t = \infty$, this contradicts (25); then we have $\liminf_{t \to \infty} f(X_t) \le f(X^*) + \epsilon$ almost surely.

3. Error Estimation

Suppose further that the expectation function $f$ is differentiable and strongly convex on $\mathcal{X}$; that is, there is a constant $c > 0$ such that

  $f(Y) \ge f(X) + \langle \nabla f(X), Y - X \rangle + \tfrac{c}{2}\|Y - X\|_F^2 \quad \text{for all } X, Y \in \mathcal{X}.$  (27)

Next we discuss the complexity of the above algorithms; we take the second algorithm as an example.

Theorem 8. Suppose that $f$ is differentiable and strongly convex with modulus $c$, $\nabla f$ is Lipschitz continuous, and the conditions of Theorem 7 are satisfied with step sizes $\gamma_t = \theta/t$, $\theta > 1/(2c)$. Then we have

  $\mathbb{E}[A_t] \le \frac{Q(\theta)}{t},$  (28)

where $A_t := \tfrac{1}{2}\|X_t - X^*\|_F^2$ and

  $Q(\theta) := \max\Big\{\frac{\theta^2 M^2}{2(2c\theta - 1)},\ A_1\Big\}.$  (29)

Proof. Note that strong convexity of $f$ implies that the minimizer $X^*$ is unique. By the optimality of $X^*$, we have $\langle \nabla f(X^*), X - X^* \rangle \ge 0$ for all $X \in \mathcal{X}$, which together with (27) implies that $f(X) - f(X^*) \ge \tfrac{c}{2}\|X - X^*\|_F^2$. In turn, it follows that $\langle \nabla f(X), X - X^* \rangle \ge c\|X - X^*\|_F^2$ for all $X \in \mathcal{X}$, and hence, from (24) (with the error terms suppressed for brevity),

  $\mathbb{E}[A_{t+1}] \le (1 - 2c\gamma_t)\,\mathbb{E}[A_t] + \tfrac{1}{2}\gamma_t^2 M^2.$  (32)

Next we prove by mathematical induction that $\mathbb{E}[A_t] \le Q(\theta)/t$. For $t = 1$, the bound holds by the definition of $Q(\theta)$. For the induction step, since $Q(\theta) \ge \theta^2 M^2/(2(2c\theta - 1))$ and $2c\theta > 1$, (32) with $\gamma_t = \theta/t$ gives

  $\mathbb{E}[A_{t+1}] \le \Big(1 - \frac{2c\theta}{t}\Big)\frac{Q(\theta)}{t} + \frac{\theta^2 M^2}{2t^2} \le \frac{Q(\theta)}{t} - \frac{Q(\theta)}{t^2} \le \frac{Q(\theta)}{t+1}.$  (36)

Since $\nabla f$ is Lipschitz continuous, there is a constant $L$ such that

  $f(X) - f(X^*) \le \tfrac{L}{2}\|X - X^*\|_F^2 \quad \text{for all } X \in \mathcal{X},$  (37)

and hence

  $\mathbb{E}\big[f(X_t) - f(X^*)\big] \le \frac{L\,Q(\theta)}{t}.$  (38)

When $\gamma_t = \theta/t$, it follows from (38) that, after $t$ iterations, the expected error in terms of the objective value is of order $O(1/t)$. But the result is highly sensitive to a priori information on the strong convexity constant $c$. In order to make the SA method applicable to general convex objectives rather than only to strongly convex ones, one should replace the classical step sizes $\gamma_t = \theta/t$, which can be too small to ensure a reasonable rate of convergence. Instead, we take appropriate averages of the search points rather than the points themselves. This processing method goes back to [20, 23], and we call it "robust treatment." We show the effectiveness of this technique in the following theorem.

Theorem 9. Suppose that $f$ is convex; let $1 \le i \le j \le N$ and $D_{\mathcal{X}} := \max_{X, Y \in \mathcal{X}} \|X - Y\|_F$. Consider the averaged points

  $\tilde{X}_{i,j} := \sum_{t=i}^{j} \nu_t X_t, \qquad \nu_t := \frac{\gamma_t}{\sum_{\tau=i}^{j} \gamma_\tau}.$  (39)

Then we know that

  $\mathbb{E}\big[f(\tilde{X}_{i,j}) - f(X^*)\big] \le \frac{D_{\mathcal{X}}^2 + M^2 \sum_{t=i}^{j} \gamma_t^2}{2\sum_{t=i}^{j} \gamma_t} + \epsilon.$  (40)

Proof. Due to (24) and assumption (A3), we know that

  $\gamma_t\,\mathbb{E}\big[f(X_t) - f(X^*)\big] \le \mathbb{E}[A_t] - \mathbb{E}[A_{t+1}] + \tfrac{1}{2}\gamma_t^2 M^2 + \gamma_t\epsilon.$  (41)

It follows that, whenever $1 \le i \le j \le N$, we have

  $\sum_{t=i}^{j} \gamma_t\,\mathbb{E}\big[f(X_t) - f(X^*)\big] \le \mathbb{E}[A_i] + \tfrac{1}{2}M^2 \sum_{t=i}^{j} \gamma_t^2 + \epsilon \sum_{t=i}^{j} \gamma_t,$  (42)

and hence, setting $\nu_t := \gamma_t / \sum_{\tau=i}^{j} \gamma_\tau$,

  $\sum_{t=i}^{j} \nu_t\,\mathbb{E}\big[f(X_t) - f(X^*)\big] \le \frac{\mathbb{E}[A_i] + \tfrac{1}{2}M^2 \sum_{t=i}^{j} \gamma_t^2}{\sum_{t=i}^{j} \gamma_t} + \epsilon.$  (43)

By the convexity of $f$, we have $f(\tilde{X}_{i,j}) \le \sum_{t=i}^{j} \nu_t f(X_t)$, and by the definition of $D_{\mathcal{X}}$, we have $\mathbb{E}[A_i] \le \tfrac{1}{2}D_{\mathcal{X}}^2$. Thus, by (43), we get

  $\mathbb{E}\big[f(\tilde{X}_{i,j}) - f(X^*)\big] \le \frac{D_{\mathcal{X}}^2 + M^2 \sum_{t=i}^{j} \gamma_t^2}{2\sum_{t=i}^{j} \gamma_t} + \epsilon.$  (44)

Now we can develop step size policies along with the associated efficiency estimates based on (44). Assume that the number of iterations $N$ of the method is fixed in advance, and take $i = 1$, $j = N$, and $\gamma_t \equiv \gamma$. Then (44) becomes

  $\mathbb{E}\big[f(\tilde{X}_{1,N}) - f(X^*)\big] \le \frac{D_{\mathcal{X}}^2 + M^2 N \gamma^2}{2N\gamma} + \epsilon.$  (45)

Minimizing the right-hand side of (45) over $\gamma > 0$, we arrive at the constant step size policy

  $\gamma_t = \frac{D_{\mathcal{X}}}{M\sqrt{N}}, \quad t = 1, \ldots, N.$  (47)

Along with the associated efficiency estimate, we have

  $\mathbb{E}\big[f(\tilde{X}_{1,N}) - f(X^*)\big] \le \frac{D_{\mathcal{X}} M}{\sqrt{N}} + \epsilon.$  (48)

With the constant step size policy (47) in the above theorem, after $N$ iterations, the expected error is of order $O(N^{-1/2})$. This is worse than the $O(N^{-1})$ rate attained by the inexact SA algorithm in the strongly convex case. However, the error bound (48) is guaranteed independently of any smoothness and/or strong convexity assumptions on $f$.
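A minimal MATLAB sketch of the robust treatment follows: $N$ iterations with the constant step size (47), returning the weighted average (39) of the iterates rather than the last point. D_X (a diameter bound) and M_bound are assumed known inputs; sample_xi, subgrad_oracle, and proj_X are the placeholders used earlier.

    % Robust SA: constant step size plus gamma-weighted averaging.
    gamma = D_X / (M_bound * sqrt(N));    % constant step size policy (47)
    X = X0; Xbar = zeros(size(X)); wsum = 0;
    for t = 1:N
        X = proj_X(X - gamma * subgrad_oracle(X, sample_xi()));
        Xbar = Xbar + gamma * X;          % gamma-weighted running sum
        wsum = wsum + gamma;
    end
    Xbar = Xbar / wsum;                   % averaged point of (39)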

4. Specialization to the Case Where $\mathcal{X} = \{X \in S^n : \mathcal{A}(X) = b,\ X \succeq 0\}$

To illustrate the advantage of the inexact SA method over the classical SA method, we apply our method to SCSDPs in this section. Problem (1) can be expressed in the form (8) with $\mathcal{X} = \{X \in S^n : \mathcal{A}(X) = b,\ X \succeq 0\}$. Subproblem (14) then becomes the following constrained minimization problem:

  $\min_{X}\; \langle G(X_t, \xi_t), X - X_t \rangle + \frac{1}{2\gamma_t}\|X - X_t\|_F^2 \quad \text{s.t.} \quad \mathcal{A}(X) = b,\; X \succeq 0.$  (49)

The KKT conditions corresponding to problem (49) are written as

  $X - X_t + \gamma_t\big(G(X_t, \xi_t) - \mathcal{A}^*(y) - S\big) = 0, \quad \mathcal{A}(X) = b, \quad X \succeq 0,\; S \succeq 0,\; \langle X, S \rangle = 0,$  (50)

where $\mathcal{A}^*$ is the adjoint of $\mathcal{A}$ and $y \in \mathbb{R}^m$ and $S \in S^n_+$ are the multipliers. We can now state the algorithm for problem (49).

Algorithm 10.

Step 1. Give the initial point $X_1$ and set $t = 1$; iterate the following steps.

Step 2. Choose a suitable step size $\gamma_t$ and computational error $\epsilon_t$.

Step 3. Solve the KKT system (50) approximately within the error $\epsilon_t$ to obtain $X_{t+1}$. Set $t := t + 1$; go back to Step 2.
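One possible way to carry out Step 3 approximately, sketched under stated assumptions, is Dykstra's alternating projections between the PSD cone and the affine set $\{X : \mathcal{A}(X) = b\}$; proj_affine is a hypothetical routine (a least-squares solve with the linear map $\mathcal{A}$), proj_psd is the sketch from Section 2, and maxit and tol are placeholder controls matched to $\epsilon_t$.

    % Approximate projection of the gradient point onto the feasible set.
    Z = X - gamma * G;                    % unconstrained gradient point
    Y = Z; dS = zeros(size(Z));
    for k = 1:maxit
        R  = Y - dS;
        Xp = proj_psd(R);                 % project onto the PSD cone
        dS = Xp - R;                      % Dykstra correction
        Y  = proj_affine(Xp);             % project onto {X : A(X) = b}
        if norm(Y - Xp, 'fro') < tol, break; end   % accuracy eps_t reached
    end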

5. Application and Numerical Results

In order to assess the inexact SA method from a practical point of view, we coded the algorithm in MATLAB and ran it on several subcategories of SDP. We implemented our algorithm in MATLAB 2009a on a computer with one 2.20 GHz processor and 2.00 GB of RAM.

5.1. Numerical Results for Random Matrix

In our first example, we take a simple case of stochastic convex SDP in which the objective is a convex function for every $\xi$ and the random variable $\xi$ multiplies each element of the data matrix. We let $\xi$ have mean 1 and be uniformly distributed, with the interval radius varying from 0.5 to 4. Using the inexact SA method to solve this problem, we list the results in Table 1.


      Radius   CPU time (s)

(1)   0.5      0.2260
(2)   1        0.2868
(3)   1.5      0.5979
(4)   2        0.7178
(5)   2.5      0.8770
(6)   3        0.9087
(7)   3.5      1.1569
(8)   4        1.2725
(9)   4.5      1.3046
(10)  5        1.3068
(11)  6        1.5898

Table 1 shows that the inexact SA method for solving stochastic convex SDP is effective. The abscissa of Figure 1 represents the interval radius of $\xi$, and the ordinate represents the norm difference between the optimal values of the random problem and of its deterministic counterpart. This means that the variance of the random variable may affect the approximation; however, this influence does not exceed the tolerance range.

5.2. Numerical Results for SCQSDPs

In this example, we tested the inexact SA method on the linearly constrained SCQSDPs

  $\min_{X}\; \tfrac{1}{2}\langle X, \mathcal{Q}(\xi)X \rangle + \langle C, X \rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b,\; X \succeq 0,$

where $\xi$ is a random variable and the operator $\mathcal{Q}$ and the data $\mathcal{A}$, $b$, and $C$ are generated at random in MATLAB.
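The authors' exact MATLAB generator did not survive extraction; a plausible stand-in that produces a random self-adjoint PSD operator and data consistent with a feasible point is sketched below. Amap is a hypothetical handle implementing the linear map $\mathcal{A}$.

    % Random SCQSDP instance of dimension n (illustrative assumption).
    B = randn(n);
    Q = B * B';                  % random self-adjoint PSD matrix for the operator Q
    C = randn(n); C = (C + C') / 2;
    X0 = eye(n);                 % a strictly feasible point
    b = Amap(X0);                % right-hand side consistent with A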

We show the elementary numerical results in Table 2.


     n      CPU time (s)   Iter

(1)  100    17.46          18
(2)  200    35.11          17
(3)  300    90.36          16
(4)  500    230.89         16
(5)  800    756.32         20
(6)  1000   1757.73        14
(7)  1200   3939.16        17
(8)  1500   7500.49        18

Our numerical results are reported in Table 2, where $n$, iter, and CPU time stand, respectively, for the matrix dimension, the number of outer iterations, and the solution time. For each instance, our algorithm solves the convex QSDP problem efficiently with very good primal infeasibility. The experimental results show that the inexact SA algorithm achieves a high degree of reliability.

5.3. Predict Correlation Coefficient in the Chinese Stock Market

In this example, we discuss how to use our inexact SA method to predict correlation coefficients in the Chinese stock market; this is the main motivation of this paper. The Chinese stock market differs from those of other countries in that a stock may not rise or fall by more than ten percent in a day. This feature allows us to estimate some indexes more accurately.

In this paper, our main concern is the correlation coefficients of stocks. The correlation coefficient is often useful for knowing whether two stocks tend to move together; for a diversified portfolio, one wants stocks that are not closely related. It measures the closeness of the returns. Usually, the determinate correlation coefficient is computed from historical data. The price-limit principle mentioned in the previous paragraph suggests that we can instead use a stochastic factor to estimate the correlation coefficients of the future.
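For reference, a minimal MATLAB sketch of the determinate (historical) correlation matrix from a table P of closing prices (rows are days, columns are stocks), in the spirit of the true correlation matrices reported later:

    % Sample correlation matrix from historical closing prices.
    returns = diff(log(P));     % daily log returns per stock
    C = corrcoef(returns);      % historical correlation matrix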

According to the total market capitalization of a company, we can divide stocks into two groups: small-cap stocks and big-cap stocks.

(1) Small-Cap Stocks. These refer to stocks with a relatively small market capitalization. The definition of small cap can vary among brokerages, but generally it is a company whose market capitalization falls below a certain threshold.

Table 3 lists the closing prices of ten small-cap stocks from 2017-6-3 to 2017-7-3, where CP stands for closing price. The first row of Table 3 gives the stock codes.


CP     002001  002003  002004  002005  002006  002007  002008  002009  002010  002011

6.3    18.140  9.550   14.500  11.330  8.760   26.000  13.500  9.620   9.380   12.380
6.4    18.260  9.250   13.870  10.940  9.430   25.430  13.250  9.600   9.440   12.270
6.5    18.300  9.240   14.010  11.310  9.090   25.300  13.360  9.680   9.600   12.160
6.6    17.680  9.070   13.990  10.950  8.520   25.800  12.770  9.500   9.560   11.930
6.7    17.130  8.950   13.890  10.950  8.070   25.680  12.250  9.240   9.530   11.720
6.13   16.820  8.750   13.370  10.950  8.270   24.500  12.170  8.940   9.380   11.400
6.14   17.060  8.800   13.630  11.460  8.220   24.450  12.790  9.120   9.620   11.860
6.17   17.370  8.770   14.090  11.570  8.280   24.260  12.710  9.300   9.620   11.780
6.18   17.150  8.810   14.450  11.360  8.230   24.800  12.770  9.260   9.860   11.540
6.19   17.280  8.810   14.300  11.160  8.110   25.350  12.990  9.120   9.590   11.500
6.20   16.750  8.620   13.960  10.620  7.750   24.580  12.300  8.650   9.200   10.740
6.21   16.670  8.480   13.990  10.350  7.900   24.560  11.950  8.730   8.760   10.330
6.24   15.640  8.200   13.500  9.320   7.110   22.900  10.850  8.210   8.350   9.900
6.25   15.160  8.200   13.740  9.190   7.240   23.010  11.290  8.500   8.430   10.310
6.26   15.080  8.180   14.190  9.960   7.310   23.400  11.470  8.770   8.380   10.590
6.27   14.890  8.180   13.850  9.650   7.020   22.890  11.340  8.620   8.000   10.280
6.28   14.740  8.200   14.090  9.300   6.960   22.580  11.190  9.030   8.090   10.140
7.1    14.650  8.250   14.150  9.600   7.050   22.750  11.500  9.490   8.160   10.360
7.2    15.120  8.240   13.920  10.150  7.130   23.320  11.740  9.540   8.200   10.800
7.3    14.750  8.130   14.040  10.230  7.400   23.700  11.570  9.590   8.000   10.680

We know that the change of the closing price on 2017-7-4 cannot exceed ten percent and that a large rise or fall is a small-probability event. According to this and model (6), we can forecast the correlation coefficients from 2017-6-4 to 2017-7-4. We list the results in Table 4.
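A hedged MATLAB sketch of the forecasting idea behind model (6): sample the unknown 2017-7-4 closing prices within the ten-percent daily limit, average the resulting sample correlation matrices as a Monte Carlo estimate of the expectation, and project the average to the nearest correlation matrix. nearest_corr is the routine sketched in Section 1; the sample size is an arbitrary choice, and this is an illustration of the model, not the authors' exact code.

    % Forecast the expected correlation matrix under the price limit.
    nsamp = 1000;
    nstk  = size(P, 2);                    % P: price table of Table 3
    Plast = P(end, :);                     % closing prices on 2017-7-3
    Cbar  = zeros(nstk);
    for s = 1:nsamp
        pnew = Plast .* (1 + 0.1 * (2 * rand(1, nstk) - 1));  % within +/-10%
        Cbar = Cbar + corrcoef(diff(log([P; pnew]))) / nsamp; % running average
    end
    Ccorr = nearest_corr(Cbar, 100, 1e-6); % nearest correlation matrix, per (6)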


CC       002001  002003  002004  002005  002006  002007  002008  002009  002010  002011

002001   1       0.9597  0.0690  0.8547  0.9343  0.9042  0.9003  0.4002  0.9128  0.8682
002003   0.9597  1       0.0681  0.8080  0.9129  0.9003  0.9110  0.6429  0.8139  0.9120
002004   0.0690  0.0681  1       0.2535  0.1418  0.2719  0.2700  0.4429  0.1797  0.1379
002005   0.8547  0.8080  0.2535  1       0.8080  0.8139  0.8896  0.5691  0.9100  0.8741
002006   0.9343  0.9129  0.1418  0.8080  1       0.8517  0.8964  0.5837  0.8255  0.8888
002007   0.9042  0.9003  0.2719  0.8139  0.8517  1       0.8624  0.5235  0.8381  0.8430
002008   0.9003  0.9110  0.2700  0.8896  0.8964  0.8624  1       0.6828  0.8517  0.9168
002009   0.4002  0.6429  0.4429  0.5691  0.5837  0.5235  0.6828  1       0.4691  0.7216
002010   0.9128  0.8139  0.1797  0.9100  0.8255  0.8381  0.8517  0.4691  1       0.8430
002011   0.8682  0.9120  0.1379  0.8741  0.8886  0.8430  0.9168  0.7216  0.8430  1

Now we give the true correlation coefficients of the stocks from 2017-6-4 to 2017-7-4 in Table 5 for comparison.

From Tables 4 and 5, we see that the correlation matrix we calculated is very close to the true correlation matrix. This means that our method for predicting the correlation matrix is effective.


CC       002001  002003  002004  002005  002006  002007  002008  002009  002010  002011

002001   1       0.966   0.016   0.852   0.948   0.905   0.917   0.342   0.928   0.871
002003   0.966   1       0.043   0.833   0.957   0.908   0.931   0.493   0.908   0.929
002004   0.016   0.043   1       0.172   0.016   0.133   0.236   0.388   0.106   0.09
002005   0.852   0.833   0.172   1       0.821   0.832   0.916   0.512   0.913   0.899
002006   0.948   0.957   0.016   0.821   1       0.868   0.923   0.496   0.848   0.911
002007   0.905   0.908   0.133   0.832   0.868   1       0.87    0.437   0.864   0.849
002008   0.917   0.931   0.236   0.916   0.923   0.87    1       0.583   0.898   0.935
002009   0.342   0.493   0.388   0.512   0.496   0.437   0.583   1       0.339   0.647
002010   0.928   0.908   0.106   0.913   0.848   0.864   0.898   0.339   1       0.875
002011   0.871   0.929   0.09    0.899   0.911   0.849   0.935   0.647   0.875   1

(2) Big-Cap Stocks. In China, this term is used by the investment community to refer to companies with a large market capitalization value. Large cap is an abbreviation of the term "large market capitalization." Market capitalization is calculated by multiplying the number of a company's shares outstanding by its stock price per share.

Table 6 lists the closing prices of ten big-cap stocks from 2017-6-3 to 2017-7-3.


CP     600000  600010  600015  600016  600019  600028  600030  600031  600036  600048

6.3    9.840   4.790   10.71   10.38   4.81    6.740   12.990  9.280   13.770  12.350
6.4    9.740   4.730   10.70   10.28   4.77    6.730   12.740  9.160   13.750  12.180
6.5    9.650   4.770   10.59   10.10   4.76    6.710   12.880  9.060   13.360  12.150
6.6    9.450   4.700   10.39   10.00   4.71    6.710   12.720  9.040   13.180  11.760
6.7    9.350   4.560   10.26   9.990   4.66    6.600   12.120  8.840   13.170  11.520
6.13   9.020   4.300   9.990   9.880   4.53    6.390   11.350  8.340   12.320  11.000
6.14   9.020   4.500   10.06   9.960   4.55    6.290   11.340  8.310   12.210  11.070
6.17   9.010   4.620   10.03   9.910   4.51    6.350   11.120  8.160   12.100  10.970
6.18   9.070   4.590   10.10   9.950   4.33    6.350   11.220  8.140   12.200  11.120
6.19   8.890   4.560   9.960   9.750   4.28    4.680   11.150  8.110   11.980  10.940
6.20   8.420   4.340   9.520   9.290   4.17    4.500   10.820  7.940   11.470  10.610
6.21   8.280   4.370   9.480   9.450   4.15    4.450   11.050  7.730   11.710  10.590
6.24   7.520   4.060   8.690   8.510   4.05    4.210   10.090  7.480   10.930  9.530
6.25   7.800   3.960   8.690   8.440   4.03    4.190   10.000  7.540   10.920  9.300
6.26   7.770   3.930   8.580   8.300   3.92    4.100   9.990   7.490   10.660  9.310
6.27   7.880   4.230   8.660   8.210   3.92    4.110   9.840   7.450   10.920  9.330
6.28   8.280   4.100   9.020   8.570   3.93    4.180   10.130  7.510   11.600  9.910
7.1    8.170   4.030   8.920   8.590   3.91    4.270   10.180  7.420   11.380  9.990
7.2    8.150   3.990   8.880   8.480   3.92    4.240   10.190  7.280   11.210  9.870
7.3    8.040   3.940   8.680   8.440   3.94    4.250   9.970   7.060   11.350  9.660

As with the small-cap stocks, we calculate the correlation matrix, shown in Table 7.


CC       600000  600010  600015  600016  600019  600028  600030  600031  600036  600048

600000   1       0.9119  0.9776  0.9497  0.9388  0.9656  0.9447  0.9427  0.9766  0.9736
600010   0.9119  1       0.9477  0.9318  0.9209  0.9298  0.9020  0.8960  0.9030  0.9318
600015   0.9776  0.9477  1       0.9825  0.9646  0.9835  0.9467  0.9408  0.9716  0.9825
600016   0.9497  0.9318  0.9825  1       0.9437  0.9666  0.9050  0.8940  0.9547  0.9597
600019   0.9388  0.9209  0.9646  0.9437  1       0.9766  0.9587  0.9716  0.9358  0.9527
600028   0.9656  0.9298  0.9835  0.9666  0.9766  1       0.9597  0.9557  0.9567  0.9756
600030   0.9447  0.9020  0.9467  0.9050  0.9587  0.9597  1       0.9776  0.9408  0.9696
600031   0.9427  0.8960  0.9408  0.8940  0.9716  0.9557  0.9776  1       0.9318  0.9487
600036   0.9766  0.9030  0.9716  0.9547  0.9358  0.9567  0.9408  0.9318  1       0.9746
600048   0.9736  0.9318  0.9825  0.9597  0.9527  0.9756  0.9696  0.9487  0.9746  1

Now we also give the true correlation coefficients of the stocks from 2017-6-4 to 2017-7-4 in Table 8 for comparison.

From Tables 7 and 8, we see that the calculated correlation matrix is even closer to the true correlation matrix than in the small-cap case. This means that our method for predicting the correlation matrix is more effective for companies with large total market capitalization.


CC       600000  600010  600015  600016  600019  600028  600030  600031  600036  600048

600000   1       0.911   0.979   0.952   0.937   0.97    0.943   0.932   0.979   0.976
600010   0.911   1       0.951   0.932   0.92    0.925   0.901   0.9     0.887   0.93
600015   0.979   0.951   1       0.987   0.968   0.985   0.951   0.944   0.968   0.988
600016   0.952   0.932   0.987   1       0.946   0.97    0.907   0.893   0.955   0.965
600019   0.937   0.92    0.968   0.946   1       0.981   0.962   0.972   0.929   0.954
600028   0.97    0.925   0.985   0.97    0.981   1       0.965   0.952   0.96    0.982
600030   0.943   0.901   0.951   0.907   0.962   0.965   1       0.975   0.934   0.973
600031   0.932   0.9     0.944   0.893   0.972   0.952   0.975   1       0.909   0.942
600036   0.979   0.887   0.968   0.955   0.929   0.96    0.934   0.909   1       0.973
600048   0.976   0.93    0.988   0.965   0.954   0.982   0.973   0.942   0.973   1

6. Conclusion

In this paper, we propose stochastic convex semidefinite programs (SCSDPs) to handle data uncertainty in financial problems. An efficient inexact stochastic approximation method is designed, and we prove the convergence, complexity, and robust treatment of the algorithm. Numerical experiments show that the proposed method is effective for SCSDPs and their special cases. We also demonstrated numerically that the method is more effective for big-cap stocks.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China, Grants 11626051, 11626052, 11701061, and 11501074.

References

  1. K. Ariyawansa and Y. Zhu, "A class of polynomial volumetric barrier decomposition algorithms for stochastic semidefinite programming," Mathematics of Computation, vol. 80, no. 275, pp. 1639–1661, 2011.
  2. K. A. Ariyawansa and Y. Zhu, "Stochastic semidefinite programming: a new paradigm for stochastic optimization," 4OR, vol. 4, no. 3, pp. 239–253, 2006.
  3. S. Jin, K. A. Ariyawansa, and Y. Zhu, "Homogeneous self-dual algorithms for stochastic semidefinite programming," Journal of Optimization Theory and Applications, vol. 155, no. 3, pp. 1073–1083, 2012.
  4. S. Mehrotra and M. G. Gozevin, "Decomposition-based interior point methods for two-stage stochastic semidefinite programming," SIAM Journal on Optimization, vol. 18, no. 1, pp. 206–222, 2007.
  5. Y. Zhu, J. Zhang, and K. Partel, "Location-aided routing with uncertainty in mobile ad hoc networks: a stochastic semidefinite programming approach," Mathematical and Computer Modelling, vol. 53, no. 11-12, pp. 2192–2203, 2011.
  6. N. J. Higham, "Computing the nearest correlation matrix – a problem from finance," IMA Journal of Numerical Analysis, vol. 22, no. 3, pp. 329–343, 2002.
  7. F. Alizadeh, "Interior point methods in semidefinite programming with applications to combinatorial optimization," SIAM Journal on Optimization, vol. 5, no. 1, pp. 13–51, 1995.
  8. K. Jiang, D. Sun, and K. C. Toh, "An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP," SIAM Journal on Optimization, vol. 22, no. 3, pp. 1042–1064, 2012.
  9. M. J. Todd, "Semidefinite optimization," Acta Numerica, vol. 10, pp. 515–560, 2001.
  10. L. Vandenberghe and S. Boyd, "Semidefinite programming," SIAM Review, vol. 38, no. 1, pp. 49–95, 1996.
  11. H. Wolkowicz, R. Saigal, and L. Vandenberghe, Handbook of Semidefinite Programming: Theory, Algorithms, and Applications, vol. 27, Kluwer Academic Publishers, New York, NY, USA, 2000.
  12. J. Malick, "A dual approach to semidefinite least-squares problems," SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 1, pp. 272–284, 2004.
  13. H. Qi and D. Sun, "A quadratically convergent Newton method for computing the nearest correlation matrix," SIAM Journal on Matrix Analysis and Applications, vol. 28, no. 2, pp. 360–385, 2006.
  14. K. C. Toh, R. H. Tutuncu, and M. J. Todd, "Inexact primal-dual path-following algorithms for a special class of convex quadratic SDP and related problems," Pacific Journal of Optimization, vol. 3, pp. 135–164, 2007.
  15. S. Ghadimi and G. Lan, "Stochastic first- and zeroth-order methods for nonconvex stochastic programming," SIAM Journal on Optimization, vol. 23, no. 4, pp. 2341–2368, 2013.
  16. G. Lan, A. Nemirovski, and A. Shapiro, "Validation analysis of mirror descent stochastic approximation method," Mathematical Programming, vol. 134, no. 2, pp. 425–458, 2012.
  17. G. Lan and S. Ghadimi, "Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, I: a generic algorithmic framework," SIAM Journal on Optimization, vol. 22, no. 4, pp. 1469–1492, 2012.
  18. G. Lan and S. Ghadimi, "Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, II: shrinking procedures and optimal algorithms," SIAM Journal on Optimization, vol. 23, no. 4, pp. 2061–2089, 2013.
  19. G. Lan, "An optimal method for stochastic composite optimization," Mathematical Programming, vol. 133, no. 1-2, pp. 365–397, 2012.
  20. A. Nemirovski and D. Yudin, "On Cesari's convergence of the steepest descent method for approximating saddle points of convex-concave functions," Soviet Mathematics Doklady, vol. 19, 1978.
  21. A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro, "Robust stochastic approximation approach to stochastic programming," SIAM Journal on Optimization, vol. 19, no. 4, pp. 1574–1609, 2009.
  22. H. Robbins and D. Siegmund, "A convergence theorem for nonnegative almost supermartingales and some applications," in J. S. Rustagi (ed.), Optimizing Methods in Statistics, pp. 233–257, Academic Press, New York, NY, USA, 1971.
  23. A. S. Nemirovsky and D. B. Yudin, Problem Complexity and Method Efficiency in Optimization, Wiley, New York, NY, USA, 1983.

Copyright © 2018 Shuang Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

