Research Article  Open Access
Yao Dong, He Jiang, "A Two-Stage Regularization Method for Variable Selection and Forecasting in High-Order Interaction Model", Complexity, vol. 2018, Article ID 2032987, 12 pages, 2018. https://doi.org/10.1155/2018/2032987
A Two-Stage Regularization Method for Variable Selection and Forecasting in High-Order Interaction Model
Abstract
Forecasting models with high-order interactions have become popular in many applications since researchers have gradually noticed that an additive linear model is not adequate for accurate forecasting. However, the excessive number of variables combined with a low sample size poses critical challenges to prediction accuracy. To enhance forecasting accuracy and training speed simultaneously, an interpretable model is essential for knowledge recovery. To deal with ultrahigh dimensionality, this paper investigates a two-stage procedure to demand sparsity within the high-order interaction model. In each stage, the square root hard ridge (SRHR) method is applied to discover the relevant variables. The square root loss function facilitates the parameter tuning work, while the hard ridge penalty function is able to handle both high multicollinearity and selection inconsistency. Real data experiments reveal superior performance relative to competing approaches.
1. Introduction
Sparse representation has attracted a great deal of attention from researchers in different scientific fields, including financial engineering and risk management, computational biology, and the machine learning community [1]. For instance, in spectrum data analysis, scientists explore how to select the few exact frequencies on which the beam energy is concentrated in order to avoid the power leakage problem [2]. The main goal of demanding sparsity is to discover a small number of relevant variables from a large pool of candidates, which is essential when establishing an interpretable forecasting model. A model with high precision, low model complexity, and low sensitivity to the model parameters can not only avoid the expensive cost of excessive supply but also reduce time cost, service interruption, and resource waste [3]. Furthermore, researchers have noticed that a linear additive model using main effects only cannot provide accurate forecasting results. This motivates them to turn their attention to the high-order interaction model, which considers interaction terms between variables. An interaction model has an advantage over a linear model, which ignores possible relationships between variables. In microarray data analysis, for example, biologists are more interested in gene-gene interactions and gene-environment interactions than in a single gene, because interactions of single nucleotide polymorphisms (SNPs) are more important in cancer diagnosis [4]. However, new studies have also revealed the interactive effect of multiple environmental factors. For example, a child raised in a poor-quality environment may be more sensitive to a poor environment as an adult, which ultimately leads to higher psychological distress scores. This depicts a three-way interaction between genes and environment. Furthermore, interactions among multiple genes are also considered essentially important in molecular analysis [5].
This motivates us to consider a high-order interaction model which includes both two-way and three-way interaction terms in this paper:

y = β_0 + Σ_{j=1}^{p} x_j β_j + Σ_{j<k} (x_j ∘ x_k) γ_{jk} + Σ_{j<k<l} (x_j ∘ x_k ∘ x_l) δ_{jkl} + ε, (1)

where β_0 is the intercept term, which vanishes after centering, and x_j ∘ x_k ∘ x_l represents the Hadamard product of x_j, x_k, and x_l, which denotes a three-way interaction term. The noise term ε follows a Gaussian distribution. β, γ, and δ are the corresponding coefficients for the main effects, two-way, and three-way interaction terms, respectively. Obviously, this high-order interaction model is more complicated than the two-way interaction model.
Although high-order interactions play an important role in depicting the nonlinear relationship among variables, sparse representation of the high-order interaction model is quite challenging since the number of interaction terms is huge (of order p^{2} or even p^{3} if there are p main effects in the model). For instance, even with 10^{2} variables in the model, a high-order interaction model has to consider on the order of 10^{4} or 10^{6} terms, which makes the variable selection procedure much more difficult. Furthermore, computation cost is another issue because of the huge number of variables in the high-order interaction model. Even worse, severe multicollinearity exists among variables caused by the high dimensionality. There is a large body of literature on sparse representation methods. Subset selection methods based on L0-type penalties have been studied for decades. These methods, including the Akaike information criterion (AIC) [6], the Bayesian information criterion (BIC) [7], and Mallows's Cp [8], are NP-hard since all possible combinations of variables are taken into account; namely, if there are m variables, 2^{m} subset models have to be checked. To improve computational efficiency, sparse representation approaches based on convex optimization were proposed. For instance, Tibshirani (1996) proposed the least absolute shrinkage and selection operator (LASSO), which applies an L1-type penalty to pursue sparsity; in the wavelet literature, it is known as the basis pursuit method [9]. It is more computationally efficient than subset selection methods. Efron (2004) studied least-angle regression (LAR), which solves the same problem as LASSO but in a forward selection manner; its solution path has been shown to be very similar to that of LASSO. Yuan and Lin (2006) investigated the group LASSO method, which selects grouped variables based on an "all in or all out" principle.
To overcome the shortcomings of LASSO and further improve selection accuracy, Zou (2006) studied the adaptive LASSO, which uses per-variable weights to adjust the degree of sparsity. Friedman (2012) provided a review of sparse regression and classification methods [10].
However, convex optimization has drawbacks: it is difficult to discover all the relevant variables if the model coherence is sufficiently high. To overcome these difficulties, nonconvex penalties including the smoothly clipped absolute deviation (SCAD) [11], the minimax concave penalty (MCP) [12], and hard ridge (HR) [13, 14] have been proposed to boost accuracy. Ye et al. (2017) studied a sparse least squares support vector machine in which an Lp norm function is applied to select the relevant support vectors [15]. To facilitate the parameter tuning work and increase forecasting accuracy, Jiang (2018) studied square root nonconvex optimization (SRNO) and showed its superior performance via simulation and real data studies [2].
Sparse representation in the two-way interaction model has been studied for years. Bach (2008) explored hierarchical multiple kernel learning and studied its selection property [16]. Zhao et al. (2009) proposed the composite absolute penalty (CAP), which considers a mixture of Lp and Lq norm functions [17]. Choi et al. (2010) studied the SHIM method, which enforces the strong heredity constraint; they also designed a nonconvex optimization to perform sparse representation. However, the number of features in their study is only 10, which is quite small, so there is no guarantee that their method works in the high-dimensional setup [18]. Bickel et al. (2010) used a two-step algorithm to pursue sparsity: LASSO was applied in the first step and a backward procedure was used in the second step [19]. Wu et al. (2010) proposed a stagewise procedure which screens the main effects in the first step; the selected variables are then cleaned based on a t-test statistic [20]. Radchenko and James (2010) investigated VANISH, which constructs the main effects and interactions from two small sets of orthogonal functions [21]. Bien et al. (2013) studied the hierarchical LASSO (HL), which considers an L1 optimization problem with a nonconvex constraint [22]. She et al. (2014) studied the GRESH method, which enforces sparsity on main effects and two-way interactions simultaneously [23]. Hao et al. (2014) screened the main effects using forward regression in the first step and fitted the quadratic model with the selected main effects and associated interactions [24]. Lim and Hastie (2015) proposed "Glinternet" to discover two-way interaction terms between categorical and continuous variables [25]. Yan and Bien (2017) compared two group lasso formulations for hierarchical sparse modeling [26]. To the best of our knowledge, most of the variable selection methods mentioned above consider only two-way interaction terms.
Few studies have been done on the high-order interaction model with both two-way and three-way interaction terms. To tackle this issue, this paper explores sparse representation in a high-order interaction model. The main contributions of this paper are as follows:
(i) A two-stage square root hard ridge (TSSRHR) method for sparse representation in the high-order interaction model is proposed.
(ii) In computation, a fast and simple-to-implement algorithm is designed.
(iii) In theory, we show the prediction error bound and estimation error bound of the TSSRHR estimate.
(iv) Real data examples are given to demonstrate the efficiency and superior performance of the proposed method over existing competitors.
The rest of this paper is organized as follows. Section 2 presents the general framework of TSSRHR. Section 3 designs a fast and simple-to-implement algorithm with theoretical guarantees of its convergence and optimality. The theory of TSSRHR is given in Section 4. In Section 5, real data analyses are shown to exhibit the superior performance of the proposed method over other approaches. Section 6 concludes the paper.
2. Two-Stage Square Root Hard Ridge Method
We investigate the square root hard ridge (SRHR) method, which combines the benefits of the square root loss function and the nonconvex hard ridge penalty function, for high-order interaction selection. We first describe the two-stage SRHR algorithm. At stage 1, only main effects are selected by SRHR, with all high-order interactions (two-way and three-way) kept out. The selected variables are put in the set A. At stage 2, the set selected in stage 1 is expanded by adding both the two-way and the three-way interactions formed within A. SRHR is then applied again to implement the variable selection task.
(i) Stage 1. Consider all the main effects. Select the important main effects using the SRHR method by considering the following optimization problem:

min_β ‖y − Xβ‖_2 + λ_1 ‖β‖_0 + (η_1/2) ‖β‖_2^2,

where λ_1 and η_1 are two regularization parameters which need to be determined. The indices of the selected variables are collected in A and the corresponding solution path is given by the minimizer.
(ii) Stage 2. Update the candidate set to A together with all two-way and three-way interactions formed from members of A (a Cartesian-product expansion of A). Select the important main effects and two-way and three-way interactions using the SRHR method again by considering the analogous problem

min_θ ‖y − Zθ‖_2 + λ_2 ‖θ‖_0 + (η_2/2) ‖θ‖_2^2,

where Z collects the columns of the expanded candidate set and λ_2 and η_2 are two regularization parameters which need to be selected carefully. The solution path is provided by the minimizer and the selected variables are collected in B.
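The two-stage flow above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `select` is a hypothetical placeholder for any sparse selector (SRHR in the paper) that returns the indices of the chosen columns.

```python
import numpy as np
from itertools import combinations

def two_stage_select(X, y, select):
    """Two-stage selection skeleton.

    Stage 1 screens the main-effect columns of X only; stage 2 refits on
    the selected main effects plus all two-way and three-way Hadamard
    interactions formed among them."""
    A = list(select(X, y))                        # stage 1: main effects only
    cols = [X[:, j] for j in A]
    cols += [X[:, i] * X[:, j] for i, j in combinations(A, 2)]
    cols += [X[:, i] * X[:, j] * X[:, k] for i, j, k in combinations(A, 3)]
    Z = np.column_stack(cols)                     # expanded candidate matrix
    B = list(select(Z, y))                        # stage 2: joint selection
    return A, B
```

Any selector with this interface can be plugged in, which is what makes the two-stage reduction generic.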
A penalized sparse representation method considers the sum of two components: a loss function and a penalty function. The loss function often comes from the negative log-likelihood of the error; in the Gaussian setup, it is a squared error loss function which involves the noise level σ, which is difficult to estimate. Specifically, in a Gaussian linear regression setting y = Xβ + ε with ε ~ N(0, σ^{2}I), the negative log-likelihood of β is ‖y − Xβ‖_2^2/(2σ^{2}) plus a term that depends only on σ and a constant. SRHR instead uses the square root error loss function, which facilitates parameter tuning by avoiding the need to estimate the noise level or to plug in other estimates of it [27]. The hard ridge penalty has two parts. The first part is an L_{0} norm, which penalizes the number of nonzero elements through the regularization parameter λ. The second part is an L2-type ridge penalty; it compensates for the shrinkage given by the L0 norm and is able to handle the high collinearity of the high-order interaction model. Therefore, SRHR combines the advantages of the square root loss function and the hard ridge penalty.
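Putting the loss and penalty together, the per-stage SRHR objective has the generic form below (a reconstruction from the description above, with λ weighting the L0 part and η the ridge part; the paper's exact scaling constants are not shown in the text):

```latex
\hat{\beta} \in \arg\min_{\beta}\; \|y - X\beta\|_2 \;+\; \lambda\,\|\beta\|_0 \;+\; \frac{\eta}{2}\,\|\beta\|_2^2 .
```

Because the square root loss is self-normalizing in the noise level, the theoretically motivated choice of λ does not involve σ.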
The choices of the regularization parameters λ and η play an important role in balancing a model's in-sample fit and out-of-sample forecasting ability. We build a grid for λ starting from the largest value λ_max at which all coefficients are shrunk to zero. The smallest value λ_min is then set to a small proportion of λ_max, that is, λ_min = ε λ_max, and the grid values are generated between λ_min and λ_max exponentially. The same strategy is applied to η. The optimal values of λ and η are obtained from this grid.
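The exponential grid can be generated as below. `lasso_style_lam_max` is an assumed lasso-style proxy for the smallest penalty that zeroes every coefficient; the paper does not state its exact formula for λ_max.

```python
import numpy as np

def exp_grid(lam_max, eps=1e-3, num=50):
    """Exponentially spaced grid from lam_max down to eps * lam_max,
    mirroring the strategy lam_min = eps * lam_max."""
    return np.geomspace(lam_max, eps * lam_max, num=num)

def lasso_style_lam_max(X, y):
    """A common proxy (an assumption here, not the paper's rule) for the
    smallest penalty level that shrinks every coefficient to zero."""
    return np.max(np.abs(X.T @ y)) / X.shape[0]
```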
To choose the optimal regularization parameters, the extended Bayesian information criterion (EBIC) [28] and the high-dimensional Bayesian information criterion (HDBIC) [29] are applied in this paper. Both criteria augment the usual goodness-of-fit term with a model-size penalty that grows with log p, where the model size is measured by the cardinality of the selected set (|A| in stage 1 and |B| in stage 2). In the first stage, λ and η are selected by EBIC since a large number of main effects are considered. It is worth mentioning that, different from [24], we do not apply BIC in the second stage. Although the number of variables is reduced dramatically, we still use HDBIC, a high-dimensional version of BIC, because both two-way and three-way interaction terms are considered, which still causes the dimension of the model to be high.
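For concreteness, hedged sketches of the two criteria in standard forms: the EBIC follows Chen and Chen (2008), while the HDBIC follows the usual high-dimensional-BIC shape; the constants `gamma` and `c` are illustrative tuning parameters, and the paper's exact definitions may differ.

```python
import numpy as np

def ebic(rss, n, k, p, gamma=1.0):
    """Extended BIC: the usual BIC term plus an extra penalty of
    2 * gamma * k * log(p) for searching over a large model space."""
    return n * np.log(rss / n) + k * np.log(n) + 2 * gamma * k * np.log(p)

def hdbic(rss, n, k, p, c=2.0):
    """High-dimensional BIC: the log(n) factor is replaced by c * log(p)
    to remain consistent when p >> n."""
    return n * np.log(rss / n) + c * k * np.log(p)
```

Here `rss` is the residual sum of squares of the candidate model, `k` its cardinality, and `p` the total number of candidate terms.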
3. Algorithm
In this section, for notational simplicity, two design matrices are defined as follows:

W = [x_i ∘ x_j]_{1 ≤ i < j ≤ p} and T = [x_i ∘ x_j ∘ x_k]_{1 ≤ i < j < k ≤ p}.

Furthermore, let Z = [X, W, T]. Clearly, W represents the design matrix for the two-way interaction terms, T denotes the design matrix for the three-way interaction terms, and Z is the overall design matrix for the main effects, two-way interaction terms, and three-way interaction terms. For the coefficients to be estimated, let γ and δ be the coefficient vectors for the two-way and three-way interaction terms, respectively. Then (1) can be written as

y = Xβ + Wγ + Tδ + ε = Zθ + ε,

where θ = (β^T, γ^T, δ^T)^T. To design an algorithm for TSSRHR, we consider the following optimization problem in the first stage:

min_β ‖y − Xβ‖_2 + λ_1 ‖β‖_0 + (η_1/2) ‖β‖_2^2, (9)

and in the second stage, we consider

min_θ ‖y − Zθ‖_2 + λ_2 ‖θ‖_0 + (η_2/2) ‖θ‖_2^2. (10)

To solve (9) and (10) accurately and efficiently, a threshold rule is defined first.
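Building the full design matrix Z from the main-effect columns is a matter of enumerating Hadamard products, e.g.:

```python
import numpy as np
from itertools import combinations

def interaction_design(X):
    """Augment the main-effect columns of X (n x p) with all two-way and
    three-way elementwise (Hadamard) product columns, giving Z = [X, W, T]."""
    n, p = X.shape
    two_way = [X[:, i] * X[:, j] for i, j in combinations(range(p), 2)]
    three_way = [X[:, i] * X[:, j] * X[:, k]
                 for i, j, k in combinations(range(p), 3)]
    return np.column_stack([X] + two_way + three_way)
```

For p main effects this yields p + C(p, 2) + C(p, 3) columns, which makes the combinatorial growth of the candidate set explicit.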
Definition (threshold rule). A threshold rule is a real-valued function Θ(t; λ) defined for −∞ < t < ∞ and 0 ≤ λ < ∞ such that
(1) Θ(−t; λ) = −Θ(t; λ);
(2) Θ(t; λ) ≤ Θ(t′; λ) for t ≤ t′;
(3) lim_{t→∞} Θ(t; λ) = ∞; and
(4) 0 ≤ Θ(t; λ) ≤ t for 0 ≤ t < ∞.
From the definition, it is easy to see that Θ(·; λ) is an odd, monotone, unbounded shrinkage rule in t at any λ. A vector version of Θ is defined componentwise if either t or λ is replaced by a vector. Now define the HR thresholding rule, which is closely associated with the HR penalty, as

Θ_HR(t; λ, η) = (t/(1 + η)) · 1(|t| > λ),

where λ and η are two regularization parameters. Using the HR thresholding function, (9) and (10) can be solved based on the following iterations:

β^{(k+1)} = Θ_HR(β^{(k)} + α_1 X^T (y − Xβ^{(k)}) / ‖y − Xβ^{(k)}‖_2; λ_1, η_1)

and

θ^{(k+1)} = Θ_HR(θ^{(k)} + α_2 Z^T (y − Zθ^{(k)}) / ‖y − Zθ^{(k)}‖_2; λ_2, η_2),

where Θ_HR is applied componentwise (its vector version) and α_1, α_2 are step sizes.
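A sketch of the HR thresholding rule in this form (zero out entries below the threshold, ridge-shrink the survivors); the boundary convention at |t| = λ is a minor assumption:

```python
import numpy as np

def hr_threshold(t, lam, eta):
    """Hard-ridge thresholding applied componentwise: entries with
    magnitude below lam are set to zero, and the survivors are shrunk
    by the ridge factor 1 / (1 + eta)."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) >= lam, t / (1.0 + eta), 0.0)
```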
Algorithm 1 exhibits the details of the TSSRHR algorithm. Note that this nonconvex optimization is not easy to solve with the ADMM algorithm [30]; instead, the HR thresholding function is applied, and the basic structure of the algorithm follows the coordinate descent algorithm. The error tolerances tol1 and tol2 and the maximum numbers of iterations m1 and m2 are determined by trial and error; their default values are 1e-4, 1e-4, 500, and 500, respectively. The TSSRHR algorithm has several advantages. Firstly, it does not involve any matrix inversion. Secondly, HR thresholding is applied to enforce elementwise sparsity. The step sizes are chosen to guarantee the convergence of the algorithm, and the theoretical results show that they provide satisfactory results; in particular, the step sizes do not depend on a line search, which would take a lot of computation time. The TSSRHR algorithm is therefore simple to implement and computationally efficient. To further speed up convergence, a fast iterative shrinkage thresholding algorithm (FISTA) [31] is used to obtain the TSSRHR estimate. The following theorem provides the theoretical guarantee of convergence of Algorithm 1.
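A minimal iterative-thresholding sketch of a single SRHR stage, assuming a plain gradient step on the square root loss followed by HR thresholding (no FISTA acceleration, and a conservative spectral-norm step size chosen for safety rather than taken from the paper):

```python
import numpy as np

def hr_thresh(t, lam, eta):
    """Hard-ridge threshold: zero entries below lam, ridge-shrink the rest."""
    return np.where(np.abs(t) >= lam, t / (1.0 + eta), 0.0)

def srhr_fit(X, y, lam, eta, tol=1e-4, max_iter=500):
    """One SRHR stage by iterative thresholding (illustrative sketch).

    The gradient of the square root loss ||y - X b||_2 with respect to b is
    -X^T (y - X b) / ||y - X b||_2; each iteration takes a gradient step
    and then applies the hard-ridge threshold componentwise."""
    n, p = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # fixed step, no line search
    b = np.zeros(p)
    for _ in range(max_iter):
        r = y - X @ b
        nr = np.linalg.norm(r)
        if nr == 0.0:                        # perfect fit: gradient undefined
            break
        b_new = hr_thresh(b + step * X.T @ r / nr, lam, eta)
        if np.linalg.norm(b_new - b) < tol:  # convergence check
            b = b_new
            break
        b = b_new
    return b
```

Note that no matrix inversion appears anywhere in the loop, which matches the computational advantage claimed above.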

4. Prediction and Estimation
Notice that the hard ridge penalty does not satisfy the triangle inequality. This poses critical challenges to the derivation of the forecasting and estimation error bounds; namely, the error rate analysis for the square root lasso [27] is not applicable to SRHR because of its nonconvexity. Therefore, a novel approach is needed to derive the forecasting and estimation error bounds.
Assumption 1 (overfitting assumption). Suppose that the residual ‖y − Xβ̂‖_2 of the estimator is bounded away from zero; otherwise the estimator would overfit in the same way as the ordinary least squares (OLS) estimator does when p > n.
Assumption 2 (weak decomposability assumption). A norm Ω on R^{p} is called weakly decomposable for an index set S if there exists a norm Ω^{S^c} on R^{|S^c|} such that, for all β,

Ω(β) ≥ Ω(β_S) + Ω^{S^c}(β_{S^c}).

Furthermore, a set S is called an allowed set if Ω is weakly decomposable for this set.
We can show that the hard ridge penalty is weakly decomposable: splitting the penalty across an index set S and its complement gives weak decomposability for any set S. This is due to the weak decomposability properties of the L0 penalty and the L2 penalty.
Definition 3 (effective sparsity). For an allowed set S of a norm Ω and a constant L > 0, a restricted eigenvalue of the design matrix is defined over the cone of vectors whose part outside S is dominated (in the Ω^{S^c} sense) by L times their part on S; the effective sparsity is then defined through the inverse square of this eigenvalue. Effective sparsity is a variant of the regularity conditions used to establish nonasymptotic oracle inequalities, such as the restricted eigenvalue condition [32], the mutual coherence condition [33], the compatibility condition [34], the cone invertibility factor [35], and the restricted invertibility factor [36]. With Assumptions 1 and 2 satisfied, based on Theorem 1 in [37], we can obtain the forecasting and estimation error bounds of the TSSRHR estimator in the following remark.
Remark. Choose the regularization parameters of the order prescribed by Theorem 1 in [37]. Under the overfitting assumption and weak decomposability, the prediction error and the estimation error of the TSSRHR estimate are bounded, with high probability, by quantities depending on the effective sparsity and some positive constants.
This remark provides the prediction error bound and estimation error bound for the TSSRHR estimate. The proof follows directly from Theorem 1 and Corollaries 1 and 2 in [37]; the details are therefore omitted.
5. Real-World Data
In this section, twin-sine signal data including 10 scenarios and microarray gene expression data on an inbred mouse population are treated as real data examples to test the performance of sparse representation methods. It is worth mentioning that both real data examples become ultrahigh-dimensional once three-way interaction terms are considered. Comparisons are made between the proposed two-stage square root hard ridge (TSSRHR) method and other two-stage sparse representation methods, including two-stage LASSO (TSLASSO), two-stage SCAD (TSSCAD), and two-stage square root lasso (TSSRL), in terms of forecasting accuracy, sparsity, and computational efficiency. The unknown parameters in TSLASSO and TSSCAD are determined using 10-fold cross validation. TSSRL selects its unknown parameter based on a theoretical choice of the form Φ^{−1}(1 − α/(2p)), where p is the number of variables and Φ is the cumulative distribution function of the standard Gaussian distribution. Here we remind the reader that TSSRHR selects its regularization parameters by EBIC and HDBIC (cf. Section 2). The forecasting model is trained with 75% of the samples and the remaining 25% is used to calculate the test errors. The entire procedure is repeated 150 times and the medians of the evaluation criteria, including the number of nonzero elements (NZ), mean square error (MSE), root mean square error (RMSE), and computational cost (CC), are reported.
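The evaluation protocol (75/25 splits repeated 150 times, medians reported) can be sketched as follows; `fit` is a placeholder for any of the compared two-stage fitters, assumed to return a coefficient vector:

```python
import numpy as np

def evaluate(fit, X, y, n_rep=150, train_frac=0.75, seed=0):
    """Repeated random train/test evaluation: fit on 75% of the rows,
    score on the remaining 25%, and report median MSE, RMSE, and NZ."""
    rng = np.random.default_rng(seed)
    n = len(y)
    mse, nz = [], []
    for _ in range(n_rep):
        idx = rng.permutation(n)
        n_tr = int(train_frac * n)
        tr, te = idx[:n_tr], idx[n_tr:]
        beta = fit(X[tr], y[tr])                 # any sparse fitter
        resid = y[te] - X[te] @ beta
        mse.append(float(np.mean(resid ** 2)))
        nz.append(int(np.count_nonzero(beta)))
    m = float(np.median(mse))
    return {"MSE": m, "RMSE": float(np.sqrt(m)), "NZ": int(np.median(nz))}
```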
5.1. Twin-Sine Signal Data
Spectral estimation, which studies the distribution of a signal's power over frequencies, is widely applied in speech coding and radar signal processing. Nevertheless, it becomes critically difficult when there are a large number of closely spaced frequencies, which causes a high-resolution problem; this type of problem is called super-resolution spectral estimation. To avoid the power leakage and noisy observations caused by super-resolution, a method is needed to choose the important frequencies which play a dominant role. The twin-sine signal is generated from a target function consisting of two sinusoidal components with close frequencies plus white Gaussian noise. The frequency resolution is chosen as 0.002 Hz to detect and distinguish the two sinusoidal components, and the candidate frequency atoms form the dictionary of regressors. In our experiments, we consider the following 10 different scenarios:
(i) Scenario 1:
(ii) Scenario 2:
(iii) Scenario 3:
(iv) Scenario 4:
(v) Scenario 5:
(vi) Scenario 6:
(vii) Scenario 7:
(viii) Scenario 8:
(ix) Scenario 9:
(x) Scenario 10:
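A hedged sketch of the data-generating setup: the amplitudes, frequencies, and noise level below are placeholders, since the scenario-specific values did not survive in the text; the dictionary function shows how frequency selection becomes a sparse regression problem:

```python
import numpy as np

def twinsine(n=128, f1=0.25, f2=0.252, a1=1.0, a2=1.0, sigma=0.1, seed=0):
    """Twin-sine observations: two sinusoids with close frequencies
    plus white Gaussian noise (all parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    y = a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)
    return y + sigma * rng.standard_normal(n)

def frequency_dictionary(n, freqs):
    """Design matrix of sine and cosine atoms at the candidate frequencies,
    so that picking the active frequencies is a sparse selection problem."""
    t = np.arange(n)[:, None]
    f = np.asarray(freqs)[None, :]
    return np.hstack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
```

Running any of the compared selectors on this dictionary recovers the dominant frequencies as the nonzero coefficients.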
The results of forecasting, variable selection, and computation are summarized in Table 1 and Figures 1, 2, 3, and 4. It is not difficult to observe that TSSRHR obtains the most accurate outcome in all scenarios, including the most challenging ones. TSSRL outperforms the TSLASSO and TSSCAD methods by delivering a lower MSE, and its forecasting results are comparable with those of TSSRHR in scenario 6. From the perspective of model sparsity, TSSRL achieves the most interpretable model, followed by TSSRHR. TSLASSO and TSSCAD tend to select a large number of variables in all scenarios except scenarios 5 and 6; for instance, TSLASSO selects 31458 variables and TSSCAD chooses 29113 variables in scenario 8. The large number of nuisance variables included in the models given by TSLASSO and TSSCAD affects their forecasting results negatively. In terms of computational efficiency, TSSCAD spends more computational time in scenarios 1-2 and 5-6, where the dimensions are moderate. As the dimensions increase, all of the forecasting approaches take more computation time; for instance, TSSRL took 1215.93 seconds and 1529.38 seconds in scenarios 4 and 8. TSLASSO runs fastest by using a convex penalty. However, notice that when the dimension is large, although TSSRL also uses a convex penalty, it is not computationally efficient, since TSSRL is based on the COORD algorithm, which runs much slower than the coordinate descent algorithm. In scenarios 9 and 10, where the sample sizes are large, TSSRHR still performs better than the other methods. As in the other scenarios, TSSRL gives the sparsest model with the fewest variables. Also notice that the training speed of TSSRHR is acceptable. Therefore, TSSRHR outperforms the other forecasting methods in terms of forecasting accuracy, model sparsity, and computation speed.

5.2. Microarray Gene Expression Data
The inbred mouse microarray gene expression dataset with 31 female and 29 male mice is considered to demonstrate the effectiveness of the proposed TSSRHR method. The expression values of 22,690 genes are measured by each array, and the response is a continuous phenotypic variable, stearoyl-CoA desaturase 1 (scd1), measured by probe set ID 1415965_at. The remaining 22,689 probe set IDs are regarded as candidate variables. We seek a good model that describes the relationship between the response variable and the other candidate genes. The performances of the forecasting models on the inbred mouse microarray gene expression data are shown in Table 2. It is worth mentioning that a gene filtering preprocessing procedure is applied to the inbred mouse data to reduce the expensive computational cost. TSSRHR delivers the best forecasting performance, giving the lowest MSE and RMSE, followed by TSSRL, TSLASSO, and TSSCAD. TSLASSO selects a model with 6311 three-way interaction terms, while TSSCAD selects only 3 variables, which is fewer than TSSRL and TSSRHR. This can be verified in Figure 5, which shows the selection patterns of the compared methods. The computational cost of TSSRL is almost 4 times that of TSSRHR. Compared with TSSRHR and TSSRL, TSLASSO and TSSCAD incur less computational cost. Overall, TSSRHR provides the lowest forecasting error with sufficient model sparsity and acceptable computational time.

6. Conclusion
The two-way interaction model has been widely used in a number of scientific fields. However, researchers have found that three-way interaction terms can play a more important role than two-way interaction terms when establishing a forecasting model. To this end, this paper studies the high-order interaction model, which includes both two-way and three-way interaction terms. To establish an interpretable model and reduce computation time, a sparse representation method using a two-stage square root hard ridge (TSSRHR) procedure is proposed and studied. By virtue of the two-stage design, the number of candidate interactions decreases dramatically, which resolves the computational issue. To implement SRHR in each stage, a simple and efficient algorithm is designed. Furthermore, the prediction and estimation error bounds of TSSRHR are exhibited under two regularity assumptions. For real data analysis, twin-sine signal data and microarray gene expression data are used to show the forecasting and selection performance of TSSRHR compared with other stagewise sparse representation methods. A hierarchical variable selection problem for the high-order interaction model will be studied in future research.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (Grants no. 71861012 and no. 71761016), the Natural Science Foundation of Jiangxi, China (Grants no. 20181BAB211020 and no. 20171BAA218001), and the China Postdoctoral Science Foundation (Grants no. 2017M620277 and no. 2018T110654).
References
J. Fan and R. Li, "Statistical challenges with high dimensionality: feature selection in knowledge discovery," Proceedings of the International Congress of Mathematicians, pp. 595–622, 2006.
H. Jiang, "Sparse estimation based on square root nonconvex optimization in high-dimensional data," Neurocomputing, vol. 282, pp. 122–135, 2018.
H. Jiang, "Model forecasting based on two-stage feature selection procedure using orthogonal greedy algorithm," Applied Soft Computing, vol. 63, pp. 110–123, 2017.
H. Schwender and K. Ickstadt, "Identification of SNP interactions using logic regression," Biostatistics, vol. 9, no. 1, pp. 187–198, 2008.
E. Assary, J. P. Vincent, R. Keers, and M. Pluess, "Gene-environment interaction and psychiatric disorders: review and future directions," Seminars in Cell & Developmental Biology, vol. 77, pp. 133–143, 2017.
H. Akaike, "A new look at the statistical model identification," IEEE Transactions on Automatic Control, vol. 19, pp. 716–723, 1974.
G. Schwarz, "Estimating the dimension of a model," The Annals of Statistics, vol. 6, no. 2, pp. 461–464, 1978.
C. L. Mallows, "Some comments on Cp," Technometrics, vol. 42, no. 1, pp. 87–94, 2000.
S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
J. H. Friedman, "Fast sparse regression and classification," International Journal of Forecasting, vol. 28, no. 3, pp. 722–738, 2012.
A. Antoniadis, "Wavelets in statistics: a review," Statistical Methods & Applications, vol. 6, no. 2, pp. 97–130, 1997.
C.-H. Zhang, "Nearly unbiased variable selection under minimax concave penalty," The Annals of Statistics, vol. 38, no. 2, pp. 894–942, 2010.
Y. She, J. Wang, H. Li, and D. Wu, "Group iterative spectrum thresholding for super-resolution sparse spectral selection," IEEE Transactions on Signal Processing, vol. 61, no. 24, pp. 6371–6386, 2013.
H. Jiang and Y. Dong, "Dimension reduction based on a penalized kernel support vector machine model," Knowledge-Based Systems, vol. 138, pp. 79–90, 2017.
Y.-F. Ye, Y.-H. Shao, N.-Y. Deng, C.-N. Li, and X.-Y. Hua, "Robust l_{p}-norm least squares support vector regression with feature selection," Applied Mathematics and Computation, vol. 305, pp. 32–52, 2017.
F. Bach, "Exploring large feature spaces with hierarchical multiple kernel learning," in Proceedings of the 22nd Annual Conference on Neural Information Processing Systems (NIPS 2008), pp. 105–112, Canada, December 2008.
P. Zhao, G. Rocha, and B. Yu, "The composite absolute penalties family for grouped and hierarchical variable selection," The Annals of Statistics, vol. 37, no. 6, pp. 3468–3497, 2009.
N. H. Choi, W. Li, and J. Zhu, "Variable selection with the strong heredity constraint and its oracle property," Journal of the American Statistical Association, vol. 105, no. 489, pp. 354–364, 2010.
P. J. Bickel, Y. Ritov, and A. B. Tsybakov, "Hierarchical selection of variables in sparse high-dimensional regression," in Borrowing Strength: Theory Powering Applications – a Festschrift for Lawrence D. Brown, pp. 56–59, Institute of Mathematical Statistics, 2010.
J. Wu, B. Devlin, S. Ringquist, M. Trucco, and K. Roeder, "Screen and clean: a tool for identifying interactions in genome-wide association studies," Genetic Epidemiology, vol. 34, no. 3, pp. 275–285, 2010.
P. Radchenko and G. M. James, "Variable selection using adaptive nonlinear interaction structures in high dimensions," Journal of the American Statistical Association, vol. 105, no. 492, pp. 1541–1553, 2010.
J. Bien, J. Taylor, and R. Tibshirani, "A lasso for hierarchical interactions," The Annals of Statistics, vol. 41, no. 3, pp. 1111–1141, 2013.
Y. She, Z. Wang, and H. Jiang, "Group regularized estimation under structural hierarchy," Journal of the American Statistical Association, vol. 113, no. 521, pp. 445–454, 2018.
N. Hao and H. H. Zhang, "Interaction screening for ultrahigh-dimensional data," Journal of the American Statistical Association, vol. 109, no. 507, pp. 1285–1301, 2014.
M. Lim and T. Hastie, "Learning interactions via hierarchical group-lasso regularization," Journal of Computational and Graphical Statistics, vol. 24, no. 3, pp. 627–654, 2015.
X. Yan and J. Bien, "Hierarchical sparse modeling: a choice of two group lasso formulations," Statistical Science, vol. 32, no. 4, pp. 531–560, 2017.
A. Belloni, V. Chernozhukov, and L. Wang, "Square-root lasso: pivotal recovery of sparse signals via conic programming," Biometrika, vol. 98, no. 4, pp. 791–806, 2011.
J. Chen and Z. Chen, "Extended Bayesian information criteria for model selection with large model spaces," Biometrika, vol. 95, no. 3, pp. 759–771, 2008.
C.-K. Ing and T. L. Lai, "A stepwise regression method and consistent model selection for high-dimensional sparse linear models," Statistica Sinica, vol. 21, no. 4, pp. 1473–1513, 2011.
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
Y. Nesterov, "Gradient methods for minimizing composite objective function," Tech. Rep., Université catholique de Louvain, Center for Operations Research and Econometrics, 2007.
P. J. Bickel, Y. Ritov, and A. B. Tsybakov, "Simultaneous analysis of lasso and Dantzig selector," The Annals of Statistics, vol. 37, no. 4, pp. 1705–1732, 2009.
K. Lounici, "Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators," Electronic Journal of Statistics, vol. 2, pp. 90–102, 2008.
S. A. van de Geer and P. Bühlmann, "On the conditions used to prove oracle results for the Lasso," Electronic Journal of Statistics, vol. 3, pp. 1360–1392, 2009.
F. Ye and C.-H. Zhang, "Rate minimaxity of the lasso and Dantzig selector for the lq loss in lr balls," The Journal of Machine Learning Research, vol. 11, pp. 3519–3540, 2010.
C.-H. Zhang and T. Zhang, "A general theory of concave regularization for high-dimensional sparse estimation problems," Statistical Science, vol. 27, no. 4, pp. 576–593, 2012.
B. Stucky and S. van de Geer, "Sharp oracle inequalities for square root regularization," Journal of Machine Learning Research, vol. 18, no. 67, pp. 1–29, 2017.
Copyright
Copyright © 2018 Yao Dong and He Jiang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.