Research Article | Open Access
Yu Shi, Xia Zhao, Fengwei Jiang, Yipin Zhu, "Stable Portfolio Selection Strategy for Mean-Variance-CVaR Model under High-Dimensional Scenarios", Mathematical Problems in Engineering, vol. 2020, Article ID 2767231, 11 pages, 2020. https://doi.org/10.1155/2020/2767231
Stable Portfolio Selection Strategy for Mean-Variance-CVaR Model under High-Dimensional Scenarios
This paper studies stable portfolios under mean-variance-CVaR criteria for high-dimensional data. Combining different estimators of the covariance matrix, computational methods for CVaR, and regularization methods, we construct five progressive optimization problems with short selling allowed. The impacts of the different methods on the out-of-sample performance of the portfolios are compared. Results show that the optimization model combining the well-conditioned and sparse covariance estimator, the quantile regression computation of CVaR, and the reweighted ℓ1 norm performs best: it stabilizes the out-of-sample performance of the solution and also encourages a sparse portfolio.
Mean-risk models are widely used and play an important role in financial risk management. The classical and revolutionary work is the mean-variance (MV) optimization model proposed by Markowitz , in which variance is used to measure risk. Since then, many researchers have devoted themselves to this field. Kolm et al.  review the development, challenges, and trends of MV optimization problems over the past six decades. Since different risk measures focus on different characteristics of risk, risk measures other than variance have been incorporated into the mean-risk framework. For example, Konno and Yamazaki  and Ogryczak and Ruszczynski  use absolute deviation and semideviation, respectively, to measure risk and construct mean-risk models for portfolio selection. Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have also been employed as risk measures for asset allocation (see Consigli , Alexander and Baptista , Xu et al. , and Quaranta and Zaffaroni  for more details).
Since different risk measures capture different information about risk, combinations of two risk measures have been used to control risk in mean-risk models. For instance, Konno et al.  and Konno and Suzuki  construct a mean-absolute deviation-skewness model and a mean-variance-skewness model, respectively, in which skewness indicates the unidirectional movement (bearish or bullish) of the stock market. Scott and Horvath  propose a mean-variance-skewness-kurtosis portfolio optimization model and show that higher-order moments of return can significantly change optimal portfolio construction. Roman et al.  construct an optimization model based on the mean-variance-CVaR criterion, in which variance and CVaR are combined to obtain a range of balanced solutions that are generally discarded by both the mean-variance and mean-CVaR models. Additionally, the decision-maker's personal preference between variance and CVaR can be accommodated. Gao et al.  extend it to a dynamic setting in finance, and Shi et al.  discuss insurance investment under regulatory constraints. However, the stability of the optimal strategy's out-of-sample performance under the mean-variance-CVaR criterion remains to be explored. Without such stability, the practical applicability of the optimal investment strategy is questionable, especially in high-dimensional scenarios (where the dimensionality of assets is comparable to or even larger than the number of observations). Here, we study this problem from the point of view of the model-solving procedure.
To solve the mean-variance-CVaR optimization problem, computing the variance and CVaR of portfolios is a central and fundamental task. For the variance term, estimating the covariance matrix of the asset returns is necessary. Generally, the sample covariance matrix is a good estimator when the sample size is sufficiently large. But under high-dimensional scenarios, it usually leads to poor out-of-sample performance (see, e.g., Green and Hollifield ; Chopra and Ziemba ; DeMiguel et al. ). To deal with such unstable out-of-sample performance, researchers have made various contributions. Based on different thresholding functions, thresholding covariance matrix estimators form one mainstream of improving the sample covariance matrix by exploiting sparsity (e.g., Tibshirani ; Bickel and Levina ; Cai et al. ). Moreover, to guarantee a convex optimization problem, Rothman  proposes the positive definite sparse covariance estimator (PDSCE). Focusing on well-conditionedness, Ledoit and Wolf  propose an estimator of the covariance matrix as a linear combination of the sample covariance and the identity matrix. Maurya  develops a well-conditioned and sparse estimator of the covariance matrix in the high-dimensional setting. This estimator is well-conditioned, sparse, and positive definite; in particular, positive definiteness guarantees a convex optimization problem, and it performs better than several other popular methods in the literature (e.g., the graphical lasso of Friedman et al. , the PDSCE of Rothman , and the thresholding estimator of Bickel and Levina ).
For the CVaR term, Rockafellar and Uryasev [25, 26] calculate CVaR through a convex programming problem, paving the way for portfolio selection with CVaR without the normality assumption. Bassett et al.  bridge the gap between quantile regression and the calculation of CVaR without distributional assumptions on returns. However, it has also been shown that the out-of-sample performance of the mean-CVaR model is unstable (see Lim et al. , Takeda and Kanamori , and Kondor et al.  for more details). A popular way to stabilize the mean-CVaR model is the regularization technique, which simultaneously yields a sparse portfolio. Xu et al.  incorporate a penalty function into the mean-CVaR model under large-sample scenarios to obtain a sparse portfolio. Gao and Wu  combine a penalty function and a variance term to improve the out-of-sample performance of the mean-CVaR model. Additionally, regularization methods are widely applied in mean-variance models to find stable optimal portfolios with better out-of-sample performance. For example, Brodie et al.  reformulate the classical Markowitz mean-variance model as a constrained least-squares regression problem. They add an ℓ1-regularization term to the objective function and show that this penalty regularizes the optimization problem and encourages sparse portfolios. Other works using regularization techniques in mean-variance or mean-CVaR models to promote stable solutions can be found in the literature (e.g., Gotoh and Takeda  and Fastrich et al. ).
Motivated by the above discussion, in this paper we seek portfolios with more stable out-of-sample performance for the mean-variance-CVaR model under high-dimensional scenarios. Five progressive optimization problems are carefully designed based on the mean-variance-CVaR criterion by combining two estimators of the covariance matrix, two computational methods for CVaR, and two penalty functions in different ways. A simulation is conducted to compare the out-of-sample performance of the portfolios obtained from these optimization problems, and the impact of the underlying methods on the optimal strategy is analyzed. An empirical study is then conducted based on historical data of the constituent stocks of the Shanghai Stock Exchange 50 Index.
The remainder of this paper is organized as follows. Section 2 describes the related methods used in this paper. Section 3 presents the optimization problems we formulate based on the mean-variance-CVaR criterion. Section 4 provides the simulation and result comparisons. An empirical study is carried out in Section 5. Finally, Section 6 concludes the paper.
In this section, we briefly introduce the risk measures used in this paper, their estimation methods, and some regularization methods. Suppose that we can invest in p assets with returns r_i, i = 1, 2, …, p. Let r = (r_1, r_2, …, r_p)^T denote the return vector and w = (w_1, w_2, …, w_p)^T the weight vector, in which w_i is the portfolio allocation weight for asset i. Thus, the total return is X = w^T r.
Variance describes the degree of dispersion of a random variable and hence can measure the fluctuation of the investment return. We use σ²(X) to denote the variance of the portfolio return X = w^T r, which can be expressed as σ²(X) = w^T Σ w, where Σ = (σ_ij) is the covariance matrix of r and σ_ij is the covariance of r_i and r_j.
Generally, Σ is estimated by the sample covariance matrix S = (s_ij) (called the classical estimator), where s_ij denotes the sample covariance of r_i and r_j computed from n observations, n being the sample size.
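As a quick illustration of the classical estimator (a sketch with synthetic data, not the paper's dataset), the plug-in portfolio variance w^T S w can be computed as:

```python
import numpy as np

def portfolio_variance(weights, returns):
    """Plug-in portfolio variance w' S w, with S the sample covariance.

    returns: (n, p) array of n observations of p asset returns.
    weights: (p,) portfolio weight vector.
    """
    S = np.cov(returns, rowvar=False)  # p x p sample covariance matrix
    return float(weights @ S @ weights)

# Synthetic example: 250 daily observations of 5 assets, equal weights.
rng = np.random.default_rng(0)
R = rng.normal(0.0, 0.02, size=(250, 5))
w = np.full(5, 0.2)
var_hat = portfolio_variance(w, R)
```

With n = 250 much larger than p = 5 this works well; the instability discussed below arises precisely when n and p are comparable.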
The well-conditioned and sparse estimator proposed by Maurya , denoted here by Σ̂_WCS, has been shown to possess superior properties, so we employ it directly in this paper. Following the notation in Maurya , the estimator is built from the sample covariance matrix S, the diagonal matrix with the same diagonal as S, the eigenvalues of S, and their mean; the way to identify the best pair of tuning parameters can also be found in Maurya .
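Maurya's estimator involves a joint penalty and a grid search over its tuning pair; as a minimal runnable stand-in illustrating only the "well-conditioned" idea (a Ledoit–Wolf-style linear shrinkage toward a scaled identity, not Maurya's actual estimator), one might write:

```python
import numpy as np

def shrinkage_covariance(returns, rho=0.3):
    """Linear shrinkage Sigma_hat = (1 - rho) * S + rho * mu * I,
    where S is the sample covariance and mu = trace(S)/p is the mean
    of S's eigenvalues. For rho in (0, 1] the smallest eigenvalue is
    at least rho * mu > 0, so the estimate stays positive definite
    (hence the optimization stays convex) even when n < p.
    """
    S = np.cov(returns, rowvar=False)
    p = S.shape[0]
    mu = np.trace(S) / p
    return (1.0 - rho) * S + rho * mu * np.eye(p)
```

The shrinkage intensity rho = 0.3 is illustrative; in practice it would be tuned, just as Maurya's tuning pair is.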
For a financial asset or portfolio, VaR refers to the greatest possible loss over a specific holding period at a certain confidence level β. If the cumulative distribution function of the return X is F_X, VaR can be defined as

VaR_β(X) = −inf{x : F_X(x) ≥ 1 − β}.
As we know, a key advantage of CVaR over VaR is that CVaR satisfies the four coherence axioms of Artzner et al. , whereas VaR is not coherent. CVaR is defined as

CVaR_β(X) = −E[X | X ≤ −VaR_β(X)].
Evidently, CVaR_β(X), the conditional expectation of losses exceeding VaR_β(X), is more informative about the tail of the distribution than VaR_β(X). Two popular ways to compute CVaR without distributional restrictions have been employed recently. The first is a linear programming formulation for optimizing CVaR proposed by Rockafellar and Uryasev [25, 26], generally called Rockafellar–Uryasev's approximation. They prove that CVaR can be calculated by solving a convex optimization problem; that is, CVaR can be formulated as

CVaR_β(X) = min_α { α + (1/(1 − β)) E[(−X − α)⁺] },

where the minimizer α* is the β-th quantile of the loss −X. This method has been widely used in the literature (e.g., Roman et al. , Lim et al. , and Gao and Wu ).
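For a fixed portfolio and an empirical loss sample, the Rockafellar–Uryasev representation can be evaluated directly, since the inner minimizer is the β-quantile of the losses (a sketch; the full portfolio problem additionally optimizes over the weights):

```python
import numpy as np

def cvar_ru(losses, beta=0.95):
    """CVaR via Rockafellar-Uryasev:
    CVaR_beta = min_alpha { alpha + E[(L - alpha)^+] / (1 - beta) }.
    The minimizer alpha* is the beta-quantile of the losses (i.e., VaR),
    so the minimum can be plugged in directly for an empirical sample.
    """
    losses = np.asarray(losses, dtype=float)
    alpha = np.quantile(losses, beta)            # VaR at level beta
    excess = np.maximum(losses - alpha, 0.0)     # losses beyond VaR
    return float(alpha + excess.mean() / (1.0 - beta))
```

For losses 1, 2, …, 100 at β = 0.95 this evaluates to 98.0, the average of the five worst losses.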
The second is the quantile regression method proposed by Bassett et al. . Xu et al.  show that it is computationally faster than Rockafellar–Uryasev's approximation. According to Bassett et al. , the CVaR of a portfolio return X = w^T r can be calculated as

CVaR_β(X) = (1/θ) min_ξ E[ρ_θ(X − ξ)] − E(X), with θ = 1 − β,

where the minimizer ξ* is the θ-th quantile of X and ρ_θ(u) = u(θ − I(u < 0)) is the check function, in which I(·) is an indicator function that takes the value one whenever its argument is true and zero otherwise.
Since E(X) = w^T μ, where μ is the mean vector of r, the term E(X) is constant under a constraint that fixes the expected portfolio return. Dropping this constant, minimizing CVaR reduces to minimizing (1/θ) E[ρ_θ(w^T r − ξ)] jointly over (w, ξ), which converts equation (6) into equation (7).
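Empirically, the check-function route needs no optimization either, because the inner minimizer ξ* is the θ-quantile of the returns; a sketch for a fixed return sample (the portfolio problem would minimize this over the weights as well):

```python
import numpy as np

def cvar_quantile(returns, theta=0.05):
    """CVaR via the check-function identity of Bassett et al.:
    CVaR = (1/theta) * min_xi E[rho_theta(X - xi)] - E[X],
    with rho_theta(u) = u * (theta - 1{u < 0}) and minimizer xi* equal
    to the theta-quantile of X (theta = 1 - beta, the tail probability).
    """
    x = np.asarray(returns, dtype=float)
    xi = np.quantile(x, theta)          # theta-quantile of returns
    u = x - xi
    rho = u * (theta - (u < 0.0))       # check-function values
    return float(rho.mean() / theta - x.mean())
```

On a sample whose losses are 1, …, 100 (returns −1, …, −100) with θ = 0.05, this agrees with the Rockafellar–Uryasev value, 98.0.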
2.3. Regularization Method
To encourage a stable and sparse solution of the optimization problem, a penalty term is frequently added; this is known as the regularization method. A penalty function p(w) shrinks the components of w toward zero. In this paper, we focus on the smoothly clipped absolute deviation (SCAD) penalty proposed by Fan et al.  and the reweighted ℓ1 norm penalty proposed by Candès et al. , since they have oracle properties.
The SCAD penalty is formulated as follows:

p_γ(|w_i|) = γ|w_i|, if |w_i| ≤ γ;
p_γ(|w_i|) = −(|w_i|² − 2aγ|w_i| + γ²)/(2(a − 1)), if γ < |w_i| ≤ aγ;
p_γ(|w_i|) = (a + 1)γ²/2, if |w_i| > aγ,

where γ > 0 and a > 2 are tuning parameters. SCAD is continuous and singular at the origin (which produces sparsity), and it assigns the same constant penalty to all large coefficients, so they are estimated with almost no bias. The reweighted ℓ1 norm is as follows:

p(w) = γ Σ_i δ_i |w_i|,

where γ is a tuning parameter and the weights δ_i are identified by an iterative method based on the current value of w. The reweighted ℓ1 norm has better out-of-sample performance than the plain ℓ1 norm penalty γ Σ_i |w_i|, since it gives every component of w a corresponding weight so that the penalization is more accurate.
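Both penalties can be coded directly; below is a sketch (the values of γ, a, and ε are illustrative, and the SCAD branches follow the piecewise form above):

```python
import numpy as np

def scad_penalty(w, gam, a=3.7):
    """SCAD penalty, evaluated elementwise (gam > 0, a > 2; a = 3.7 is the
    customary choice). Linear near zero (like the l1 norm, giving sparsity),
    quadratic in the transition zone, then constant, so large coefficients
    all incur the same penalty and are estimated with almost no bias."""
    b = np.abs(np.asarray(w, dtype=float))
    small = b <= gam
    mid = (b > gam) & (b <= a * gam)
    out = np.empty_like(b)
    out[small] = gam * b[small]
    out[mid] = -(b[mid] ** 2 - 2.0 * a * gam * b[mid] + gam ** 2) / (2.0 * (a - 1.0))
    out[~(small | mid)] = (a + 1.0) * gam ** 2 / 2.0
    return out

def reweighted_l1_weights(w, eps=1e-3):
    """Weights delta_i = 1 / (|w_i| + eps) for the reweighted l1 penalty:
    small components get penalized heavily, large ones barely at all."""
    return 1.0 / (np.abs(np.asarray(w, dtype=float)) + eps)
```

The three SCAD branches meet continuously at |w_i| = γ and |w_i| = aγ, which is easy to verify numerically.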
3. Model Formation
Motivated by Roman et al. , we consider the following mean-variance-CVaR model:

min_w  λ w^T Σ w + (1 − λ) CVaR_β(w^T r)
s.t.  w^T μ = r_0,  w^T 1 = 1,

where λ ∈ [0, 1] is a weighting parameter representing the investor's attitude towards the two risk measures and hence balancing their importance, and r_0 is the target expected return.
Remark 1. Letting λ = 0 or λ = 1, respectively, the above optimization problem reduces to the mean-CVaR model or the mean-variance model.
Remark 2. Different from Roman et al. , in this paper we incorporate the variance term and the CVaR term into the objective function of the optimization model simultaneously, and short selling is allowed.
3.1. Optimization Problems
In this section, we present five optimization problems, denoted P1–P5, based on different methods of computing the variance and CVaR and on different regularization methods, in order to obtain the optimal portfolio w.
To illustrate clearly, we outline the optimization problems in Table 1, where a tick "√" means that the underlying method is incorporated in the corresponding optimization problem. For example, the second row tells us that the classical estimator for the variance term and Rockafellar–Uryasev's approximation for the CVaR term are used in problem P1.
Mathematically, optimization problems can be written in the following forms:
Why do we design the five optimization problems in this way? In P1, the classical estimator for the variance term and Rockafellar–Uryasev's approximation for the CVaR term are used to find the solution. To construct P2, we replace the classical estimator in the variance term of P1 with the well-conditioned and sparse estimator, keeping the CVaR part the same, so that we can see whether this estimator mitigates the instability. Further, we replace Rockafellar–Uryasev's approximation in P2 with the quantile regression method to get P3, so that the two computational methods for CVaR can be compared; in P3, the auxiliary quantile variable can be absorbed into the CVaR term based on the quantile regression method. To examine the performance of the SCAD and reweighted ℓ1 norm penalty functions, we design P4 and P5, in which the quantile computational method for CVaR is used because of the convenience of selecting the tuning parameter in the penalty function and its faster computational speed . Through this progressive design, we can assess the performance of the different methods and find the most effective combination in our setting.
Remark 3. The estimation methods used in P1 for the variance and CVaR are the same as those in Roman et al. .
Remark 4. If λ = 0, the corresponding penalized problem degenerates into the model in Xu et al. . However, the constraint on the expected portfolio return is missing in their paper. According to Bassett et al. , the mean term in the CVaR part (see (7)) of the objective function can be neglected only under the constraint that the expected return of the portfolio is a constant.
3.2. Selection of Tuning Parameter
For P4 and P5, we need to search for the best tuning parameters of the penalty term.
In P4, the nonnegative regularization parameter γ drives the relevance of the SCAD penalty and a is a tuning parameter belonging to the SCAD penalty. Here, we search for the best pair (γ, a) over a two-dimensional grid following the criterion below. Motivated by Lee et al.  and Wang et al. , we use a modified Bayesian Information Criterion in which the lack-of-fit term is evaluated on random samples of the returns and the penalty term involves the effective dimension of the fitted model, namely, the number of nonzero components of w. Furthermore, the iterative algorithm for solving P4 can be found in Wu and Liu .
In P5, γ is the nonnegative regularization parameter and δ = (δ_1, …, δ_p) is the weight vector in the reweighted ℓ1 norm penalty. For each given γ, we apply the following iterative procedure to change the weights dynamically and adaptively :
(1) Let the iteration count be k = 0 and initialize δ^(0). Solve problem P5 with weights δ^(k), which gives the solution w^(k). If the stopping criterion is satisfied, for example, the maximum change of the components of w compared with the previous iteration does not exceed some small constant, stop the iteration. Otherwise, go to Step (2).
(2) Use w^(k) to construct the new weights δ_i^(k+1) = 1/(|w_i^(k)| + ε), where ε can be a small positive number; let k = k + 1 and go to Step (1).
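The steps above can be sketched generically; `solve_weighted` stands in for whatever solver handles P5 for a fixed weight vector (here a toy soft-thresholding solver is used purely to make the sketch runnable, not the paper's actual optimizer):

```python
import numpy as np

def reweighted_l1(solve_weighted, p, eps=1e-3, tol=1e-6, max_iter=50):
    """Reweighted-l1 iteration as in Steps (1)-(2) above.

    solve_weighted(delta) -> solution w of the underlying problem with
    penalty sum_i delta[i] * |w[i]| (problem-specific, supplied by caller).
    """
    delta = np.ones(p)                        # k = 0: start from plain l1
    w = solve_weighted(delta)
    for _ in range(max_iter):
        delta = 1.0 / (np.abs(w) + eps)       # Step (2): reweight
        w_new = solve_weighted(delta)
        if np.max(np.abs(w_new - w)) < tol:   # Step (1): stopping rule
            return w_new
        w = w_new
    return w

# Toy solver: argmin_w 0.5 * ||w - b||^2 + lam * sum_i delta_i * |w_i|
# has the closed-form soft-thresholding solution below.
b = np.array([3.0, 0.1, -2.0])
lam = 0.5
solve = lambda delta: np.sign(b) * np.maximum(np.abs(b) - lam * delta, 0.0)
w_star = reweighted_l1(solve, p=3)
```

The small component 0.1 is driven exactly to zero while the large components survive almost unshrunk, illustrating why reweighting sharpens sparsity.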
Finally, γ can be searched by iteration based on the modified Bayesian Information Criterion in equation (12).
4. Simulation and Discussion
4.1. Data Generation
To evaluate the out-of-sample performance of the five portfolio optimization problems, the dataset is generated by simulation, with parameters estimated from real historical price data of a stock index.
4.1.1. Construction of Return Variable
Following the idea in Lim et al. , the random returns are generated from a hybrid of a multivariate normal distribution and an exponential distribution, so that all assets may suffer a perfectly correlated exponential tail loss with small probability. Let E be an exponential random variable with parameter λ_E, and set λ_E = 1 for simplicity. The return variable is then constructed as

r = Z − B · E · d,

where B is a Bernoulli random variable with parameter q, Z follows a multivariate normal distribution with mean vector μ and covariance matrix Σ, and d is a scale vector with d_i = c·sqrt(σ_ii), where σ_ii is the i-th diagonal element of Σ. The parameter c controls the tail loss of the distribution.
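One plausible reading of this construction can be simulated as follows (the exact scaling of the common shock, c times each asset's standard deviation, is our assumption for illustration):

```python
import numpy as np

def simulate_returns(mu, Sigma, n, p_tail=0.05, c=3.0, seed=0):
    """Hybrid returns r = Z - B * E * d as described above: a multivariate
    normal part Z plus, with small probability p_tail, a perfectly
    correlated Exp(1) tail loss shared by all assets. The per-asset scale
    d_i = c * sqrt(Sigma_ii) is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    Z = rng.multivariate_normal(mu, Sigma, size=n)   # normal component
    B = rng.binomial(1, p_tail, size=n)              # Bernoulli switch per day
    E = rng.exponential(1.0, size=n)                 # shared exponential loss
    d = c * np.sqrt(np.diag(Sigma))                  # volatility-based scale
    return Z - (B * E)[:, None] * d[None, :]         # common tail shock
```

Because the same B·E multiplies every asset in a given row, the tail losses are perfectly correlated across assets, as the text describes.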
4.1.2. Determination of Parameter Values
We choose 142 stocks in the Shanghai Stock Exchange Constituent (SSEC) index and download the data from the Joinquant Data platform. The sample period spans from Jan 3, 2015, to Dec 31, 2015. We calculate the daily returns and estimate their mean return vector μ and covariance matrix Σ. (Note that the 180 constituent stocks in the SSEC change within the sample period, since the constituent list is adjusted every half year; therefore, we choose the 142 stocks that stay in the index continuously over the period.)
4.1.3. Data Generation
As noted above, when the dimensionality of assets is comparable to the number of observations, the dataset belongs to the high-dimensional scenario. Therefore, in order to simulate a high-dimensional dataset, we set the sample size n close to the dimensionality p = 142. We simulate 101 groups of data: one group as in-sample data and 100 groups as out-of-sample data.
4.2. Optimization Results
In this section, we compute the simulation results for the five optimization problems and compare them from the point of view of out-of-sample performance. All the calculations follow three steps. Step 1: calculate the asset weights in each portfolio selection model with the weighting parameter λ set to 0, 0.25, 0.5, 0.75, and 1 on the in-sample data. Step 2: based on the weights from Step 1, calculate the variance, the CVaR at the 0.95 confidence level, and the expected return for the other 100 groups of out-of-sample data. Step 3: summarize the mean value and standard deviation for every case. For example, in one such case the mean value and the standard deviation of the 100 values of CVaR are calculated to be 10.0812 and 0.9708, respectively.
Steps 1 and 2 mean that we hold the optimal portfolio generated from the in-sample data throughout the out-of-sample period, in which the data follow the same distribution as the in-sample data. Therefore, if the optimal portfolio is stable, its performance across the 100 groups of out-of-sample data should be similar; hence, we calculate the standard deviation in Step 3 to measure this similarity.
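Steps 2 and 3 amount to the following loop, sketched here for a fixed weight vector (variable names are illustrative):

```python
import numpy as np

def out_of_sample_summary(w, groups, beta=0.95):
    """Evaluate a fixed in-sample portfolio w on each out-of-sample group,
    then summarize: returns (means, stds) over the groups for the triple
    (variance, CVaR at level beta, expected return)."""
    stats = []
    for R in groups:                       # each R: (n, p) return matrix
        port = R @ w                       # realized portfolio returns
        losses = -port
        alpha = np.quantile(losses, beta)  # empirical VaR
        cvar = alpha + np.maximum(losses - alpha, 0.0).mean() / (1.0 - beta)
        stats.append((port.var(ddof=1), cvar, port.mean()))
    stats = np.array(stats)
    return stats.mean(axis=0), stats.std(axis=0, ddof=1)
```

A small standard deviation across the 100 groups then indicates stable out-of-sample performance, which is exactly the criterion of Step 3.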
The results are shown in Table 2.
From Table 2, the following results are concluded:
(1) Compared with P2–P5, the means of the 100 values of variance and CVaR in P1 are larger, which means that the risks of the P1 portfolios get out of control in the out-of-sample data; their expected returns are also higher. This is consistent with the adage "high risk, high return." The underlying reason is that the sample covariance estimator loses its theoretical support (the Law of Large Numbers) in high-dimensional scenarios.
(2) Compared with P1, all the counterparts in P2 decrease, which shows that the well-conditioned and sparse estimator can control risk significantly and also improves the stability of the solutions' out-of-sample performance. When λ = 0, the model reduces to the mean-CVaR model, whose solution is irrelevant to the estimation of Σ, so the values of CVaR and expected return should remain the same; the data in Table 2 indeed illustrate this fact.
(3) The results in P2 and P3 share identical in-sample optimal portfolio values and out-of-sample performance. This means that the choice between Rockafellar–Uryasev's approximation and the quantile regression method for CVaR makes no significant difference to the solution under our setting.
(4) The values in P4 become smaller still than those in P3, which indicates that the SCAD penalty improves the stability and the risk-controlling ability of the out-of-sample performance.
(5) Compared with P4, the values of variance and CVaR in P5 show a slight further decrease, which means that the reweighted ℓ1 norm penalty contributes to a more stable out-of-sample performance than the SCAD penalty, even though the improvement is marginal.
(6) A bigger λ represents a higher proportion of the variance term in the objective function. Since variance is a more conservative risk measure than CVaR, a bigger λ generally results in a more conservative portfolio with relatively low risk and expected return in the out-of-sample analysis. The values in P4 and P5 show this trend well, but the other problems fail to, which may be attributed to their unstable out-of-sample performance.
To analyze the stability of the out-of-sample performance of P1–P5 more easily, we draw a scatter diagram for each optimization problem at a fixed λ. For ease of comparison, the graphics are designed as follows. First, P1, P2, and P3 share the same coordinate system (see Figure 1); the diagrams of P2 and P3 are identical, so we draw them in one panel. Second, to give a clearer comparison, we zoom in on the coordinate scale and draw the diagrams of P3 and P4 in Figure 2. The coordinate scale is further amplified in Figure 3 for the diagrams of P4 and P5.
From Figure 1, we find that the points in panel (b) lie at lower risk positions and are more concentrated, which means that the portfolio generated from P2 (or P3) has better risk-controlling ability and more similar performance across the 100 out-of-sample data sets. Thus, the well-conditioned and sparse estimator helps to stabilize the out-of-sample performance and to control the risk. This is consistent with result (2) from Table 2.
From Figures 2 and 3, we can conclude that the penalty terms also help to stabilize the out-of-sample performance significantly and that the reweighted ℓ1 norm penalty performs slightly better than the SCAD penalty under our settings. This is consistent with results (4) and (5) from Table 2.
5. Empirical Study
Suppose that an investor intends to track a stock index in a financial market that allows short selling. Through the Joinquant Data platform, we select daily data of the constituent stocks of the Shanghai Stock Exchange 50 (SSE 50) Index from Jun 11, 2018, to Dec 11, 2018. The reason why we choose data within a half-year period is as follows. The constituent stocks of SSE 50 change every half year or so, and the composition of the index apparently influences the dependency structure of the assets, which further affects the optimal investment strategy. Therefore, when an investor intends to track SSE 50, it is reasonable to use data over which the same 50 stocks stay in SSE 50.
We then tag the 50 stocks from No. 1 to No. 50. After calculation, we obtain 124 observations of daily logarithmic returns, including 64 in-sample observations spanning from Jun 11, 2018, to Sept 10, 2018, and 60 out-of-sample observations spanning from Sept 11, 2018, to Dec 11, 2018. The in-sample data meet the requirement of the high-dimensional scenario because p = 50 and n = 64 are comparable.
Since P5 outperforms P1–P4 in Section 4, we calculate the optimal portfolios based on P5 in formula (13) and show the results in Figure 4. The weighting parameter λ is set to 0, 0.5, and 1, respectively, and the target return is set to the average return of the 50 stocks. Since the idea of this paper is inspired by the pioneering work of Roman et al. , we further conduct the corresponding analysis based on P1 for comparison; the results are shown in Figure 5. It is easily seen that the portfolios from P5 are much sparser and avoid extreme investment positions, which makes them more applicable in financial practice.
Next, we examine the out-of-sample performance of the portfolios. There is evidence that the equal-weight (EW) strategy cannot be beaten consistently by many portfolio methods (DeMiguel et al. ), so we add the EW strategy to the comparison. The variance, CVaR, and average return of the portfolios are calculated, respectively; see Table 3. The accumulated returns of the portfolios over the out-of-sample period are displayed in Figure 6.
From Table 3 and Figure 6(a), we can conclude that the portfolio generated from P5 has the highest terminal return and controls risk robustly over the period, since it fluctuates relatively mildly, avoids extreme losses, and basically outperforms the EW strategy. In contrast, the portfolio generated from P1 is unstable even compared with the EW strategy and ends up with the lowest return despite being dominant at the beginning. Figure 6(b) also shows that λ = 0.5 in P5 yields a more promising terminal return than the cases considering only CVaR (λ = 0) or only variance (λ = 1).
In this paper, we have constructed five progressive optimization problems to find a relatively better way to implement the mean-variance-CVaR portfolio selection model in high-dimensional scenarios. Specifically, two covariance matrix estimators (the classical estimator and the well-conditioned and sparse estimator), two computational methods for CVaR (Rockafellar–Uryasev's approximation and quantile regression), and two penalty functions (SCAD and the reweighted ℓ1 norm) have been considered, and different combinations of them have been used to formulate the optimization problems. The data for the simulation are generated from a hybrid of a multivariate normal distribution and an exponential distribution, with the covariance matrix and mean vector of the multivariate normal distribution estimated from real data. The simulation studies show that the well-conditioned and sparse estimator and the penalty terms significantly alleviate poor out-of-sample performance, with the reweighted ℓ1 norm penalty holding a slight edge. Additionally, we find that the classical optimization methods for the mean-variance-CVaR model in Roman et al.  may result in poor out-of-sample performance under high-dimensional scenarios. An empirical study based on the constituent stocks of SSE 50 solidifies these findings. Therefore, we suggest combining the well-conditioned and sparse covariance estimator, the quantile regression computational method for CVaR, and the reweighted ℓ1 norm for portfolio optimization under the mean-variance-CVaR criterion, since fragile out-of-sample performance can thereby be effectively mitigated.
The study in this paper focuses on a high-dimensional setting where the dimensionality of assets is comparable to the sample size. An extension to a more extreme high-dimensional scenario, in which the dimensionality far exceeds the sample size, would also be interesting. Moreover, it is worthwhile to further stabilize the out-of-sample performance by controlling the fragility introduced by the mean term. We will explore these problems in future work.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China under Grant nos. 71671104 and 11971301, the Key Project of the National Social Science Foundation of China under Grant no. 16AZD019, and the Incubation Group Project of Financial Statistics and Risk Management of SDUFE.
References
- H. Markowitz, “Portfolio selection,” The Journal of Finance, vol. 7, no. 1, pp. 77–91, 1952.
- P. N. Kolm, R. Tütüncü, and F. J. Fabozzi, “60 years of portfolio optimization: practical challenges and current trends,” European Journal of Operational Research, vol. 234, no. 2, pp. 356–371, 2014.
- H. Konno and H. Yamazaki, “Mean-absolute deviation portfolio optimization model and its applications to Tokyo stock market,” Management Science, vol. 37, no. 5, pp. 519–531, 1991.
- W. Ogryczak and A. Ruszczyński, “From stochastic dominance to mean-risk models: semideviations as risk measures,” European Journal of Operational Research, vol. 116, no. 1, pp. 33–50, 1999.
- G. Consigli, “Tail estimation and mean-VaR portfolio selection in markets subject to financial instability,” Journal of Banking & Finance, vol. 26, no. 7, pp. 1355–1382, 2002.
- G. J. Alexander and A. M. Baptista, “Economic implications of using a mean-var model for portfolio selection: a comparison with mean-variance analysis,” Journal of Economic Dynamics and Control, vol. 26, no. 7-8, pp. 1159–1193, 2002.
- Q. Xu, Y. Zhou, C. Jiang, K. Yu, and X. Niu, “A large CVaR-based portfolio selection model with weight constraints,” Economic Modelling, vol. 59, pp. 436–447, 2016.
- A. G. Quaranta and A. Zaffaroni, “Robust optimization of conditional value at risk and portfolio selection,” Journal of Banking & Finance, vol. 32, no. 10, pp. 2046–2056, 2008.
- H. Konno, H. Shirakawa, and H. Yamazaki, “A mean-absolute deviation-skewness portfolio optimization model,” Annals of Operations Research, vol. 45, no. 1, pp. 205–220, 1993.
- H. Konno and K.-i. Suzuki, “A mean-variance-skewness portfolio optimization model,” Journal of the Operations Research Society of Japan, vol. 38, no. 2, pp. 173–187, 1995.
- R. C. Scott and P. A. Horvath, “On the direction of preference for moments of higher order than the variance,” The Journal of Finance, vol. 35, no. 4, pp. 915–919, 1980.
- D. Roman, K. Darby-Dowman, and G. Mitra, “Mean-risk models using two risk measures: a multi-objective approach,” Quantitative Finance, vol. 7, no. 4, pp. 443–458, 2007.
- J. Gao, Y. Xiong, and D. Li, “Dynamic mean-risk portfolio selection with multiple risk measures in continuous-time,” European Journal of Operational Research, vol. 249, no. 2, pp. 647–656, 2016.
- Y. Shi, X. Zhao, and X. Yan, “Optimal asset allocation for a mean-variance-CVaR insurer under regulatory constraints,” American Journal of Industrial and Business Management, vol. 09, no. 07, pp. 1568–1580, 2019.
- R. C. Green and B. Hollifield, “When will mean-variance efficient portfolios be well diversified?” The Journal of Finance, vol. 47, no. 5, pp. 1785–1809, 1992.
- V. K. Chopra and W. T. Ziemba, “The effect of errors in means, variances, and covariances on optimal portfolio choice,” The Journal of Portfolio Management, vol. 19, no. 2, pp. 6–11, 1993.
- V. DeMiguel, L. Garlappi, and R. Uppal, “Optimal versus naive diversification: how inefficient is the 1/N portfolio strategy?” Review of Financial Studies, vol. 22, no. 5, pp. 1915–1953, 2009.
- R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
- P. J. Bickel and E. Levina, “Covariance regularization by thresholding,” The Annals of Statistics, vol. 36, no. 6, pp. 2577–2604, 2008.
- T. T. Cai and M. Yuan, “Adaptive covariance matrix estimation through block thresholding,” The Annals of Statistics, vol. 40, no. 4, pp. 2014–2042, 2012.
- A. J. Rothman, “Positive definite estimators of large covariance matrices,” Biometrika, vol. 99, no. 3, pp. 733–740, 2012.
- O. Ledoit and M. Wolf, “A well-conditioned estimator for large-dimensional covariance matrices,” Journal of Multivariate Analysis, vol. 88, no. 2, pp. 365–411, 2004.
- A. Maurya, “A well conditioned and sparse estimate of covariance and inverse covariance matrix using a joint penalty,” The Journal of Machine Learning Research, vol. 17, no. 130, pp. 1–28, 2016.
- J. Friedman, T. Hastie, and R. Tibshirani, “Sparse inverse covariance estimation with the graphical lasso,” Biostatistics, vol. 9, no. 3, pp. 432–441, 2008.
- R. T. Rockafellar and S. Uryasev, “Conditional Value-at-Risk for general loss distributions,” Journal of Banking & Finance, vol. 26, no. 7, pp. 1443–1471, 2002.
- R. T. Rockafellar and S. Uryasev, “Optimization of conditional value-at-risk,” The Journal of Risk, vol. 2, no. 3, pp. 21–41, 2000.
- G. W. Bassett, R. Koenker, and G. Kordas, “Pessimistic portfolio allocation and Choquet expected utility,” Journal of Financial Econometrics, vol. 2, no. 4, pp. 477–492, 2004.
- A. E. B. Lim, J. G. Shanthikumar, and G.-Y. Vahn, “Conditional value-at-risk in portfolio optimization: coherent but fragile,” Operations Research Letters, vol. 39, no. 3, pp. 163–171, 2011.
- A. Takeda and T. Kanamori, “A robust approach based on conditional value-at-risk measure to statistical learning problems,” European Journal of Operational Research, vol. 198, no. 1, pp. 287–296, 2009.
- I. Kondor, S. Pafka, and G. Nagy, “Noise sensitivity of portfolio selection under various risk measures,” Journal of Banking & Finance, vol. 31, no. 5, pp. 1545–1573, 2007.
- J. J. Gao and W. P. Wu, “Sparse and multiple risk measures approach for data driven mean-CVaR portfolio optimization model,” in Optimization and Control for Systems in the Big-Data Era: Theory and Applications, Springer, Berlin, Germany, 2017.
- J. Brodie, I. Daubechies, C. De Mol, D. Giannone, and I. Loris, “Sparse and stable Markowitz portfolios,” Proceedings of the National Academy of Sciences, vol. 106, no. 30, pp. 12267–12272, 2009.
- J.-Y. Gotoh and A. Takeda, “On the role of norm constraints in portfolio selection,” Computational Management Science, vol. 8, no. 4, pp. 323–353, 2011.
- B. Fastrich, S. Paterlini, and P. Winker, “Constructing optimal sparse portfolios using regularization methods,” Computational Management Science, vol. 12, no. 3, pp. 417–434, 2015.
- P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath, “Coherent measures of risk,” Mathematical Finance, vol. 9, no. 3, pp. 203–228, 1999.
- J. Fan, J. Zhang, and K. Yu, “Vast portfolio selection with gross-exposure constraints,” Journal of the American Statistical Association, vol. 107, no. 498, pp. 592–606, 2012.
- E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted ℓ1 minimization,” Journal of Fourier Analysis and Applications, vol. 14, pp. 877–905, 2008.
- E. R. Lee, H. Noh, and B. U. Park, “Model selection via Bayesian information criterion for quantile regression models,” Journal of the American Statistical Association, vol. 109, no. 505, pp. 216–229, 2014.
- L. Wang, Y. Kim, and R. Li, “Calibrating nonconvex penalized regression in ultra-high dimension,” The Annals of Statistics, vol. 41, no. 5, pp. 2505–2536, 2013.
- Y. C. Wu and Y. F. Liu, “Variable selection in quantile regression,” Statistica Sinica, vol. 19, pp. 801–817, 2009.
Copyright © 2020 Yu Shi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.