
Yao Dong, He Jiang, "A Two-Stage Regularization Method for Variable Selection and Forecasting in High-Order Interaction Model", Complexity, vol. 2018, Article ID 2032987, 12 pages, 2018. https://doi.org/10.1155/2018/2032987

A Two-Stage Regularization Method for Variable Selection and Forecasting in High-Order Interaction Model

Academic Editor: Mahdi Jalili
Received: 19 Aug 2018
Revised: 05 Oct 2018
Accepted: 17 Oct 2018
Published: 11 Nov 2018

Abstract

Forecasting models with high-order interactions have become popular in many applications, since researchers have gradually noticed that an additive linear model is not adequate for accurate forecasting. However, the excessive number of variables relative to the low sample size poses critical challenges to prediction accuracy. To enhance forecasting accuracy and training speed simultaneously, an interpretable model is essential for knowledge recovery. To deal with ultra-high dimensionality, this paper studies a two-stage procedure that enforces sparsity within a high-order interaction model. In each stage, the square root hard ridge (SRHR) method is applied to discover the relevant variables. The square root loss function facilitates the parameter tuning work, while the hard ridge penalty function handles both high multicollinearity and selection inconsistency. Real data experiments reveal the superior performance of the proposed method over competing approaches.

1. Introduction

Sparse representation has attracted a great deal of attention from researchers in different scientific fields, including financial engineering and risk management, computational biology, and the machine learning community [1]. For instance, in spectrum data analysis, scientists explore how to select the few frequencies on which the beam energy is concentrated, in order to avoid the power leakage problem [2]. The main goal of demanding sparsity is to discover a small number of relevant variables from a large pool of candidate variables, which is essential when establishing an interpretable forecasting model. A model with high precision, low complexity, and low sensitivity to its parameters can not only avoid the expensive cost of excessive supply but also reduce time cost, service interruption, and resource waste [3]. Furthermore, researchers have noticed that a linear additive model using main effects only cannot provide accurate forecasting results. This motivates them to turn their attention to high-order interaction models, which include interaction terms between variables. An interaction model has an advantage over a linear model, which ignores possible relationships between variables. In microarray data analysis, for example, biologists are more interested in gene-gene interactions and gene-environment interactions than in any single gene, because interactions of single nucleotide polymorphisms (SNPs) are more important in cancer diagnosis [4]. However, new studies have also revealed the interactive effect of multiple environmental factors. For example, a child raised in a poor-quality environment may be more sensitive to a poor environment as an adult, which ultimately leads to higher psychological distress scores; this depicts a three-way interaction between gene and environment. Furthermore, interactions among multiple genes are also considered essentially important in molecular analysis [5].
This motivates us to consider a high-order interaction model which includes both two-way and three-way interaction terms in this paper:
$$y=\beta_0+\sum_{j=1}^{p}\beta_j x_j+\sum_{j<k}\gamma_{jk}\,(x_j\circ x_k)+\sum_{j<k<l}\delta_{jkl}\,(x_j\circ x_k\circ x_l)+\varepsilon, \tag{1}$$
where $\beta_0$ is the intercept term, which vanishes after centering, and $x_j\circ x_k\circ x_l$ represents the Hadamard (elementwise) product of $x_j$, $x_k$, and $x_l$, which denotes the three-way interaction term. The noise term $\varepsilon$ follows a Gaussian distribution. $\beta$, $\gamma$, and $\delta$ are the corresponding coefficients for main effects, two-way, and three-way interaction terms, respectively. Obviously, this high-order interaction model is more complicated than a two-way interaction model.
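As a quick illustration, the data-generating process above can be simulated in a few lines; the dimensions, coefficient values, and noise scale below are arbitrary choices for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 8
X = rng.standard_normal((n, p))

# Hypothetical sparse truth: two main effects, one two-way and one
# three-way interaction formed as Hadamard (elementwise) products.
y = (1.5 * X[:, 0] + 2.0 * X[:, 1]
     + 1.0 * X[:, 0] * X[:, 1]               # two-way term x1 ∘ x2
     + 0.8 * X[:, 0] * X[:, 1] * X[:, 2]     # three-way term x1 ∘ x2 ∘ x3
     + 0.5 * rng.standard_normal(n))         # Gaussian noise
```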

Although high-order interactions play an important role in depicting the nonlinear relationship among variables, sparse representation of a high-order interaction model is quite challenging, since the number of interaction terms is huge (of order $O(p^2)$ or even $O(p^3)$ if there are $p$ main effects in the model). For instance, even with $10^2$ variables, a high-order interaction model has to consider $10^4$ or $10^6$ terms, which makes the variable selection procedure more difficult. Furthermore, computation cost is another issue because of the huge number of variables in the high-order interaction model. Even worse, severe multicollinearity exists among variables because of the high dimensionality. There is a large body of literature on sparse representation methods. Subset selection methods based on L0-type penalties have been studied for decades. These methods, including the Akaike information criterion (AIC) [6], the Bayesian information criterion (BIC) [7], and Mallows's $C_p$ [8], are NP-hard since all possible combinations of variables are taken into account: if there are $m$ variables, $2^m$ subset models have to be checked. To improve computational efficiency, sparse representation approaches based on convex optimization were proposed. For instance, Tibshirani (1996) proposed the least absolute shrinkage and selection operator (LASSO), which applies an L1-type penalty to pursue sparsity; in the wavelet literature it is known as the basis pursuit method [9]. It is more computationally efficient than subset selection methods. Efron (2004) studied least-angle regression (LAR), which solves the same problem as LASSO but in a forward selection manner, and showed that the solution path of LAR is very similar to that of LASSO. Yuan and Lin (2006) investigated the group LASSO method, which selects grouped variables based on an "all in or all out" principle.
To overcome the shortcomings of LASSO and further improve selection accuracy, Zou (2006) studied the adaptive LASSO, which uses a weight for each variable to adjust the degree of sparsity. Friedman (2012) provided a review of sparse regression and classification methods [10].

However, convex optimization has drawbacks: it is difficult to discover all the relevant variables when the model coherence is sufficiently high. To overcome these difficulties, nonconvex penalties including the smoothly clipped absolute deviation (SCAD) [11], the minimax concave penalty (MCP) [12], and hard ridge (HR) [13, 14] have been proposed to boost accuracy. Ye et al. (2017) studied a sparse least squares support vector machine with an Lp norm function applied to select the relevant support vectors [15]. To facilitate parameter tuning and increase forecasting accuracy, Jiang (2018) studied square root nonconvex optimization (SRNO) and showed its superior performance via simulation and real data studies [2].

Sparse representation in the two-way interaction model has been studied for years. Bach (2008) explored hierarchical multiple kernel learning and studied its selection property [16]. Zhao et al. (2009) proposed the composite absolute penalty (CAP), which considers a mixture of Lp and Lq norm functions [17]. Choi et al. (2010) studied the SHIM method and designed a nonconvex optimization to perform sparse representation; however, the number of features they considered was only 10, so there is no guarantee that their method works in the high-dimensional setup [18]. Bickel et al. (2010) used a two-step algorithm to pursue sparsity: LASSO was applied in the first step and a backward procedure was used in the second [19]. Wu et al. (2010) proposed a stagewise procedure which screens the main effects in the first step and then cleans the selected variables based on a t-test statistic [20]. Radchenko and James (2010) investigated VANISH, which constructs the main effects and interactions from two small sets of orthogonal functions [21]. Bien et al. (2013) studied the hierarchical LASSO (HL), which considers an L1 optimization problem with a nonconvex constraint [22]. She et al. (2014) studied the GRESH method, which enforces sparsity on main effects and two-way interactions simultaneously [23]. Hao et al. (2014) screened the main effects using forward regression in the first step and then fitted a quadratic model with the selected main effects and their associated interactions [24]. Simon and Hastie (2015) proposed "Glinternet" to discover two-way interaction terms between categorical and continuous variables [25]. Yan and Bien (2017) advocated the VARXL method to implement sparse representation in high-dimensional time series data [26]. To the best of our knowledge, most of the variable selection methods mentioned above consider two-way interaction terms only.
Few studies have considered the high-order interaction model with both two-way and three-way interaction terms. To tackle this issue, this paper explores sparse representation in a high-order interaction model. The main contributions of this paper are as follows:
(i) A two-stage square root hard ridge (TSSRHR) method for sparse representation in the high-order interaction model is proposed.
(ii) In computation, a fast and simple-to-implement algorithm is designed.
(iii) In theory, we show the prediction error bound and estimation error bound of the TSSRHR estimate.
(iv) Real data examples demonstrate the efficiency and superior performance of the proposed method over existing competitors.

The rest of this paper is organized as follows. Section 2 presents the general framework of TSSRHR. Section 3 designs a fast and simple-to-implement algorithm with theoretical guarantees of its convergence and optimality. The theory of TSSRHR is given in Section 4. In Section 5, real data analyses exhibit the superior performance of the proposed method over other approaches. Section 6 concludes.

2. Two-Stage Square Root Hard Ridge Method

We investigate the square root hard ridge (SRHR) method, which combines the benefits of the square root loss function and the nonconvex hard ridge penalty function, for high-order interaction selection. We first describe the two-stage SRHR procedure. At stage 1, only main effects are selected by SRHR, with all high-order interactions (both two-way and three-way) kept out; the selected variables are put in the set $A$. At stage 2, the set selected in stage 1 is expanded by adding both the two-way and three-way interactions associated with $A$, and SRHR is applied again to carry out the variable selection task.
(i) Stage 1. Define $M=\{1,\dots,p\}$, which indexes all the main effects. Select the important main effects using the SRHR method by considering the optimization problem
$$\min_{\beta}\ \frac{\|y-X\beta\|_2}{\sqrt{n}}+\lambda_1\|\beta\|_0+\eta_1\|\beta\|_2^2,$$
where $\lambda_1$ and $\eta_1$ are two regularization parameters to be determined. The indices of the selected variables are collected in $A$ and the corresponding solution path is given by $\hat\beta(\lambda_1,\eta_1)$.
(ii) Stage 2. Update the candidate set to $A\cup(A\times A)\cup(A\times A\times A)$, where $\times$ denotes the Cartesian product. Select the important main effects and two-way and three-way interactions using the SRHR method again by considering
$$\min_{\theta}\ \frac{\|y-Z_A\theta\|_2}{\sqrt{n}}+\lambda_2\|\theta\|_0+\eta_2\|\theta\|_2^2,$$
where $Z_A$ stacks the selected main effects and their associated interaction terms, and $\lambda_2$ and $\eta_2$ are two regularization parameters which need to be selected carefully. The solution path is given by $\hat\theta(\lambda_2,\eta_2)$ and the selected variables are collected in $B$.
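The stage-2 candidate expansion can be sketched as follows: given the main effects selected in stage 1, form all two-way and three-way Hadamard products within the selected set. The function name and interface are ours, a minimal sketch rather than the paper's implementation.

```python
from itertools import combinations
import numpy as np

def expand_candidates(X, selected):
    """Columns for the selected main effects plus all two-way and
    three-way Hadamard products within the selected set."""
    cols, names = [], []
    for j in selected:                                   # main effects
        cols.append(X[:, j]); names.append((j,))
    for j, k in combinations(selected, 2):               # two-way terms
        cols.append(X[:, j] * X[:, k]); names.append((j, k))
    for j, k, l in combinations(selected, 3):            # three-way terms
        cols.append(X[:, j] * X[:, k] * X[:, l]); names.append((j, k, l))
    return np.column_stack(cols), names

X = np.arange(12.0).reshape(4, 3)
Z, names = expand_candidates(X, [0, 1, 2])
# 3 main + 3 two-way + 1 three-way = 7 columns
```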

A penalized sparse representation method considers the sum of two components: the loss function and the penalty function. The loss function is often the negative log-likelihood of the error; in the Gaussian setup, it is the squared error loss, which involves the noise level $\sigma$, a quantity that is difficult to estimate. Specifically, in a Gaussian linear regression setting $y=X\beta+\varepsilon$ with $\varepsilon\sim N(0,\sigma^2 I)$, the negative log-likelihood of $\beta$ is $\|y-X\beta\|_2^2/(2\sigma^2)$ plus a constant. SRHR instead uses the square root error loss, which facilitates the parameter tuning work by avoiding estimation of the noise level or the use of other forms of estimates [27]. The hard ridge penalty has two parts: the first part, the L0 norm, penalizes the number of nonzero elements through the regularization parameter $\lambda$; the second part is an L2-type ridge penalty, which compensates for the shrinkage caused by the L0 norm and handles the high collinearity of the high-order interaction model. Therefore, SRHR combines the advantages of the square root loss function and the hard ridge penalty.

The choices of the regularization parameters ($\lambda_1$, $\eta_1$, $\lambda_2$, and $\eta_2$) play an important role in balancing a model's in-sample fit and out-of-sample forecasting ability. For $\lambda$, we build a grid starting from the largest value $\lambda_{\max}$, at which all coefficients are shrunk to zero. The smallest value $\lambda_{\min}$ is then set to a small proportion of $\lambda_{\max}$, that is, $\lambda_{\min}=\epsilon\lambda_{\max}$, and the grid values are generated between $\lambda_{\min}$ and $\lambda_{\max}$ exponentially. The same strategy is applied to $\eta$. The optimal values of $\lambda$ and $\eta$ are obtained from this grid.
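The exponential grid described above can be generated with a geometric sequence; the ratio and grid size below are illustrative defaults of ours, not the paper's settings.

```python
import numpy as np

def lambda_grid(lam_max, ratio=1e-3, num=50):
    """Geometric grid decreasing from lam_max down to ratio * lam_max."""
    return np.geomspace(lam_max, ratio * lam_max, num=num)

grid = lambda_grid(10.0)
```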

To choose the optimal regularization parameters, the extended Bayesian information criterion (EBIC) [28] and the high dimensional Bayesian information criterion (HDBIC) [29] are applied in this paper. In particular, with $\hat\sigma^2$ the residual variance of a candidate model, EBIC is defined as
$$\mathrm{EBIC}=n\log\hat\sigma^2+|A|\log n+2\gamma|A|\log p,$$
and HDBIC is defined as
$$\mathrm{HDBIC}=n\log\hat\sigma^2+|B|(\log n)(\log p),$$
where $|A|$ and $|B|$ represent the cardinalities of the selected sets $A$ and $B$. In the first stage, $\lambda_1$ and $\eta_1$ are selected by EBIC since a large number of main effects are considered. It is worth mentioning that, different from [24], we do not apply BIC in the second stage. Although the number of variables is reduced dramatically, we still use HDBIC, a high dimensional version of BIC, because both two-way and three-way interaction terms are considered, which keeps the dimension of the model high.
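Assuming the standard forms of these criteria (EBIC with an extra model-size term scaled by log p, HDBIC with the BIC penalty inflated by log p), they can be computed from a model's residual sum of squares as follows; this is our sketch, not code from the paper.

```python
import numpy as np

def ebic(rss, n, k, p, gamma=0.5):
    """EBIC: BIC plus an extra 2*gamma*k*log(p) term for large candidate pools."""
    return n * np.log(rss / n) + k * np.log(n) + 2.0 * gamma * k * np.log(p)

def hdbic(rss, n, k, p):
    """HDBIC: BIC with its model-size penalty inflated by log(p)."""
    return n * np.log(rss / n) + k * np.log(n) * np.log(p)
```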

3. Algorithm

In this section, for notational simplicity, two design matrices are defined as follows:
$$X^{(2)}=\big(x_j\circ x_k\big)_{j<k}\quad\text{and}\quad X^{(3)}=\big(x_j\circ x_k\circ x_l\big)_{j<k<l}.$$
Furthermore, let $Z=[X,X^{(2)},X^{(3)}]$. Clearly, $X^{(2)}$ represents the design matrix for the two-way interaction terms, $X^{(3)}$ denotes the design matrix for the three-way interaction terms, and $Z$ is the overall design matrix for main effects, two-way, and three-way interaction terms. For the coefficients to be estimated, let $\gamma$ and $\delta$ be the coefficient vectors for the two-way and three-way interaction terms, respectively. Then (1) can be written as
$$y=X\beta+X^{(2)}\gamma+X^{(3)}\delta+\varepsilon,$$
and we define $\theta=(\beta^{\top},\gamma^{\top},\delta^{\top})^{\top}$. To design an algorithm for TSSRHR, we consider the following optimization problem in the first stage:
$$\min_{\beta}\ \frac{\|y-X\beta\|_2}{\sqrt{n}}+\lambda_1\|\beta\|_0+\eta_1\|\beta\|_2^2, \tag{9}$$
and in the second stage, we consider
$$\min_{\theta}\ \frac{\|y-Z\theta\|_2}{\sqrt{n}}+\lambda_2\|\theta\|_0+\eta_2\|\theta\|_2^2. \tag{10}$$
To solve (9) and (10) accurately and efficiently, a threshold rule is defined first.

Definition (threshold rule). A threshold rule is a real-valued function $\Theta(t;\lambda)$ defined for $-\infty<t<\infty$ and $0\le\lambda<\infty$ such that
(1) $\Theta(-t;\lambda)=-\Theta(t;\lambda)$;
(2) $\Theta(t;\lambda)\le\Theta(t';\lambda)$ for $t\le t'$;
(3) $\lim_{t\to\infty}\Theta(t;\lambda)=\infty$; and
(4) $0\le\Theta(t;\lambda)\le t$ for $0\le t<\infty$.

From the definition, it is easy to see that $\Theta(\cdot;\lambda)$ is an odd monotone unbounded shrinkage rule at any $\lambda$. A vector version of $\Theta$ is defined componentwise if either $t$ or $\lambda$ is replaced by a vector. Now define the HR thresholding rule, which is closely associated with the HR penalty, as
$$\Theta^{\mathrm{HR}}(t;\lambda,\eta)=\frac{t}{1+\eta}\,1_{\{|t|>\lambda\}},$$
where $\lambda$ and $\eta$ are two regularization parameters. Using the HR thresholding function, (9) and (10) can be solved based on the iterations
$$\beta^{(k+1)}=\vec\Theta^{\mathrm{HR}}\Big(\beta^{(k)}+\alpha_1 X^{\top}\tfrac{y-X\beta^{(k)}}{\|y-X\beta^{(k)}\|_2};\lambda_1,\eta_1\Big)$$
and
$$\theta^{(k+1)}=\vec\Theta^{\mathrm{HR}}\Big(\theta^{(k)}+\alpha_2 Z^{\top}\tfrac{y-Z\theta^{(k)}}{\|y-Z\theta^{(k)}\|_2};\lambda_2,\eta_2\Big),$$
where $\vec\Theta^{\mathrm{HR}}$ is the vector version of the HR thresholding function with $t$ replaced by a vector.
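Under the assumed form of the rule (zero below the threshold, ridge shrinkage by 1/(1+eta) above it), the HR thresholding function is a one-liner; a sketch under that assumption:

```python
import numpy as np

def hr_threshold(t, lam, eta):
    """Hard-ridge thresholding: 0 if |t| <= lam, else t / (1 + eta)."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) > lam, t / (1.0 + eta), 0.0)

out = hr_threshold([-3.0, 0.5, 1.5], lam=1.0, eta=0.5)
```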

Algorithm 1 exhibits the details of the TSSRHR algorithm with the step sizes $\alpha_1$ and $\alpha_2$, the error tolerances $tol_1$ and $tol_2$, and the maximum iteration numbers $m_1$ and $m_2$ defined. Since the nonconvex optimization is not easy to solve with the ADMM algorithm [30], the HR thresholding function is applied instead, and the basic structure of the algorithm follows the coordinate descent algorithm. The error tolerances $tol_1$ and $tol_2$ and the maximum numbers of iterations $m_1$ and $m_2$ are determined by trial and error; the default values are 1e-4, 1e-4, 500, and 500, respectively. The TSSRHR algorithm has several advantages. Firstly, it does not involve any matrix inversion. Secondly, the HR thresholding function is applied to enforce elementwise sparsity. The step sizes $\alpha_1$ and $\alpha_2$ guarantee the convergence of the algorithm; the theoretical results show that they provide satisfactory results without depending on a line search, which would take substantial computation time. The TSSRHR algorithm is therefore simple to implement and computationally efficient. To further speed up convergence, the fast iterative shrinkage thresholding algorithm (FISTA) [31] is used to obtain the TSSRHR estimate. The following theorem provides the theoretical guarantee of convergence of Algorithm 1.
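A minimal one-stage iteration in the spirit of the algorithm (a gradient step on the square root loss followed by hard-ridge thresholding) might look like the sketch below; the step-size choice, stopping rule, and threshold form are our simplifications, not the paper's exact Algorithm 1.

```python
import numpy as np

def srhr_ista(X, y, lam, eta, step=None, max_iter=500, tol=1e-4):
    """Sketch of a one-stage SRHR solver via iterative thresholding
    (assumed form). Gradient of ||y - Xb||_2 / sqrt(n), then
    hard-ridge thresholding; step defaults to 1 / ||X||_2^2."""
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # spectral-norm based step
    b = np.zeros(p)
    for _ in range(max_iter):
        r = y - X @ b
        norm_r = max(float(np.linalg.norm(r)), 1e-12)
        grad = -X.T @ r / (np.sqrt(n) * norm_r)  # gradient of the sqrt loss
        b_new = b - step * grad
        # hard-ridge thresholding: kill small entries, ridge-shrink the rest
        b_new = np.where(np.abs(b_new) > lam, b_new / (1.0 + eta), 0.0)
        if np.linalg.norm(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    return b
```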

Inputs: the dataset $(X, y)$;
$m_1$: maximum number of iterations used in the SRHR algorithm of the first stage;
$m_2$: maximum number of iterations used in the SRHR algorithm of the second stage;
$tol_1$: the error tolerance used in the SRHR algorithm of the first stage;
$tol_2$: the error tolerance used in the SRHR algorithm of the second stage;
$\alpha_1$: scale parameter used in the SRHR algorithm of the first stage;
$\alpha_2$: scale parameter used in the SRHR algorithm of the second stage;
Output: the forecasting test error and the selected pattern.
Randomly divide the original data into the training dataset and the test dataset.
The first stage using the SRHR algorithm:
  Generate grid values of $\lambda_1$ and $\eta_1$.
  for each $\lambda_1$ in the grid
    for each $\eta_1$ in the grid
      Initialization and scaling of $\beta^{(0)}$.
      while the change in $\beta$ exceeds $tol_1$ and the iteration count is below $m_1$ do
        Steps 1-4: one SRHR update of $\beta$ (gradient step on the square root loss followed by HR thresholding).
      end while
    end for
  end for
  Obtain the solution path and the corresponding sparsity pattern $A$ based on the EBIC criterion.
The second stage using the SRHR algorithm:
  Generate the high-order interaction design based on the sparsity pattern $A$.
  Generate grid values of $\lambda_2$ and $\eta_2$.
  for each $\lambda_2$ in the grid
    for each $\eta_2$ in the grid
      Initialization and scaling of $\theta^{(0)}$.
      while the change in $\theta$ exceeds $tol_2$ and the iteration count is below $m_2$ do
        Steps 1-4: one SRHR update of $\theta$.
      end while
    end for
  end for
  Obtain the solution path and update the sparsity pattern $B$ using the HDBIC criterion.
Calculate the test error using the test dataset.

4. Prediction and Estimation

Notice that the square root loss does not satisfy the triangle inequality in $\beta$, but it does satisfy a weaker, related inequality for any pair of vectors. This poses critical challenges to the derivation of the forecasting and estimation error bounds. Namely, the error rate analysis used for the square root lasso [27] is not applicable to SRHR because of its nonconvexity. Therefore, a novel approach is needed to derive the forecasting and estimation error bounds.

Assumption 1 (overfitting assumption). Suppose that $\|y-X\hat\beta\|_2>0$; that is, the estimator does not overfit the data in the way the Ordinary Least Squares (OLS) estimator can.

Assumption 2 (weak decomposability assumption). A norm $\Omega$ in $\mathbb{R}^p$ is called weakly decomposable for an index set $S$ if there exists a norm $\Omega^{S^c}$ on $\mathbb{R}^{|S^c|}$ such that
$$\Omega(\beta)\ \ge\ \Omega(\beta_S)+\Omega^{S^c}(\beta_{S^c})\quad\text{for all }\beta\in\mathbb{R}^p.$$
Furthermore, a set $S$ is called an allowed set if $\Omega$ is a weakly decomposable norm for this set.

We can show that the hard ridge penalty is weakly decomposable as follows: define $\Omega$ as the penalty restricted to $S$ and $\Omega^{S^c}$ as the penalty restricted to $S^c$. Then weak decomposability holds for any set $S$. This is due to the weak decomposability property of the $L_0$ penalty and of the $L_2$ penalty.

Definition 3 (M-effective sparsity). For an allowed set $S$ of a norm $\Omega$ and a constant $M>0$, a restricted smallest-eigenvalue quantity is defined over the cone $\{\beta:\Omega^{S^c}(\beta_{S^c})\le M\,\Omega(\beta_S)\}$, and the M-effective sparsity is then defined through this quantity. M-effective sparsity is a variant of the regularity conditions used to establish nonasymptotic oracle inequalities, such as the restricted eigenvalue condition [32], the mutual coherence condition [33], the compatibility condition [34], the cone invertibility factor [35], and the restricted invertibility factor [36]. With Assumptions 1 and 2 satisfied, based on Theorem 1 in [37], we can obtain the forecasting and estimation error bounds of the TSSRHR estimator in the following remark.

Remark. Choose the regularization parameters $\lambda$ and $\eta$ at the order prescribed by [37], with constants depending on the noise level and the dimension. Under the overfitting assumption and weak decomposability for $S$, prediction and estimation error inequalities hold with high probability, with bounds governed by the M-effective sparsity and some positive constants.

This remark provides the prediction error bound and estimation error bound for the TSSRHR estimate. The proof follows Theorem 1 and Corollaries 1 and 2 in [37] directly, so the details are omitted here.

5. Real World Data

In this section, twinsine signal data including 10 scenarios and microarray gene expression data about inbred mice are treated as real data examples to test the performance of the sparse representation methods. It is worth mentioning that both of these real data examples are ultra-high dimensional when three-way interaction terms are considered. Comparisons are made between the proposed two-stage square root hard ridge method (TSSRHR) and other two-stage sparse representation methods, including the two-stage LASSO (TSLASSO), two-stage SCAD (TSSCAD), and two-stage square root lasso (TSSRL), in terms of forecasting accuracy, sparsity, and computational efficiency. The unknown parameters in TSLASSO and TSSCAD are determined using 10-fold cross-validation. TSSRL selects its unknown parameter based on the theoretical choice $\lambda=c\sqrt{n}\,\Phi^{-1}(1-\alpha/(2D))$, where $D$ is the number of variables and $\Phi$ is the cumulative distribution function of the standard Gaussian distribution. Here we remind that TSSRHR selects the regularization parameters by EBIC and HDBIC (cf. Section 2). The forecasting model is trained with 75% of the samples and the remaining 25% are used to calculate the test errors. The entire procedure is repeated 150 times, and the medians of the evaluation criteria, including the number of nonzero elements (NZ), mean square error (MSE), root mean square error (RMSE), and computational cost (CC), are reported.
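The evaluation protocol above (random 75/25 splits repeated with medians reported) can be sketched generically; `fit_predict` stands in for any of the compared methods and is a placeholder of ours.

```python
import numpy as np

def evaluate(fit_predict, X, y, reps=150, train_frac=0.75, seed=0):
    """Repeated random train/test splits; reports the median test MSE
    and the corresponding RMSE (the paper's protocol in outline)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    mses = []
    for _ in range(reps):
        idx = rng.permutation(n)
        n_tr = int(train_frac * n)
        tr, te = idx[:n_tr], idx[n_tr:]
        pred = fit_predict(X[tr], y[tr], X[te])
        mses.append(np.mean((y[te] - pred) ** 2))
    mse = float(np.median(mses))
    return {"MSE": mse, "RMSE": float(np.sqrt(mse))}
```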

5.1. Twinsine Signal Data

Spectral estimation, which studies the distribution of signal power over frequencies, is widely applied in speech coding and in radar and solar signal processing. Nevertheless, it becomes critically difficult when there are a large number of close frequencies, which causes a high resolution problem; this type of problem is called super-resolution spectral estimation. To avoid the power leakage and noisy observations caused by super-resolution, a method is needed to choose the important frequencies which play a dominant role in the estimation. The twinsine signal is generated from a target function of the form
$$y(t)=a_1\sin(2\pi f_1 t)+a_2\sin(2\pi f_2 t)+\varepsilon(t),$$
where $\varepsilon$ follows white Gaussian noise. The frequency resolution is chosen as 0.002 Hz to detect and distinguish the two sinusoidal components, and the frequency atoms are sine and cosine pairs on a grid of candidate frequencies. In our experiments, we consider the following 10 scenarios (with varying sample size, number of frequency atoms, and noise level):
(i) Scenario 1:
(ii) Scenario 2:
(iii) Scenario 3:
(iv) Scenario 4:
(v) Scenario 5:
(vi) Scenario 6:
(vii) Scenario 7:
(viii) Scenario 8:
(ix) Scenario 9:
(x) Scenario 10:
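A hedged sketch of twinsine data generation and the overcomplete sine/cosine dictionary follows; the amplitudes, base frequency, sample size, and noise scale are illustrative choices of ours and differ from the paper's exact scenario settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
f1, f2 = 0.25, 0.252          # two close frequencies, 0.002 Hz apart
a1, a2 = 2.0, 2.0             # hypothetical amplitudes
y = (a1 * np.sin(2 * np.pi * f1 * t)
     + a2 * np.sin(2 * np.pi * f2 * t)
     + 0.5 * rng.standard_normal(n))   # white Gaussian noise

# Overcomplete dictionary of sine/cosine atoms on a 0.002 Hz frequency grid.
freqs = np.arange(0.002, 0.5, 0.002)
D = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
               np.cos(2 * np.pi * np.outer(t, freqs))])
```

A sparse solver applied to the pair (D, y) should then concentrate its nonzero coefficients on the atoms nearest f1 and f2.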

The results of forecasting, variable selection, and computation are summarized in Table 1 and Figures 1, 2, 3, and 4. It is not difficult to observe that TSSRHR obtains the most accurate outcome in all scenarios, including the most challenging one. TSSRL outperforms TSLASSO and TSSCAD by delivering a lower MSE, and its forecasting results are comparable with those of TSSRHR in scenario 6. From the perspective of model sparsity, TSSRL achieves the most interpretable model, followed by TSSRHR. TSLASSO and TSSCAD tend to select a large number of variables in all scenarios except scenarios 5 and 6; for instance, TSLASSO selects 31458 variables in scenario 4 and TSSCAD chooses 29113 variables in scenario 8. A large number of nuisance variables are thus included in the models given by TSLASSO and TSSCAD, which affects the forecasting results negatively. In terms of computational efficiency, TSSCAD spends more computational time in scenarios 1-2 and 5-6, when the dimensions are moderate. As the dimensions increase, all of the forecasting approaches take more computation cost; for instance, TSSRL took 1215.93 seconds and 1529.38 seconds in scenarios 4 and 8, respectively. TSLASSO runs fastest owing to its convex penalty. However, when the number of variables is large, TSSRL is not computationally efficient even though it also uses a convex penalty, since its design is based on the COORD algorithm, which runs much slower than the coordinate descent algorithm. In scenarios 9 and 10, where the sample sizes are large, TSSRHR still performs better than the other methods; as in the other scenarios, TSSRL gives the sparsest model with fewer variables, while the training speed of TSSRHR remains acceptable. Therefore, TSSRHR outperforms the other forecasting methods in terms of forecasting accuracy, model sparsity, and computation speed.


Table 1: Median results over 150 replications on the twinsine data. The three Nz columns give the numbers of selected main-effect, two-way, and three-way interaction terms; CC is the computational cost in seconds.

Scenario 1
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 3.00 | 1.00 | 3.00 | 93.47 | 96.68 | 10.62 |
| TSLASSO | 5.00 | 59.50 | 3505.50 | 145.72 | 119.13 | 6.94 |
| TSSCAD | 2.00 | 1.00 | 8.00 | 127.40 | 112.92 | 18.77 |
| TSSRL | 3.00 | 0.00 | 1.00 | 102.71 | 101.36 | 5.26 |

Scenario 2
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 3.00 | 1.00 | 5.00 | 82.71 | 91.13 | 5.82 |
| TSLASSO | 7.00 | 46.00 | 3596.00 | 156.48 | 124.12 | 4.12 |
| TSSCAD | 14.00 | 21.50 | 3678.00 | 185.49 | 132.81 | 20.73 |
| TSSRL | 2.00 | 0.00 | 4.00 | 86.78 | 91.98 | 8.13 |

Scenario 3
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 2.00 | 1.00 | 6.00 | 80.21 | 89.38 | 11.72 |
| TSLASSO | 12.00 | 20.00 | 4796.00 | 137.54 | 117.13 | 4.32 |
| TSSCAD | 4.00 | 0.00 | 15.00 | 101.92 | 100.83 | 19.72 |
| TSSRL | 2.00 | 0.00 | 1.00 | 88.86 | 93.53 | 25.43 |

Scenario 4
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 3.00 | 0.00 | 0.00 | 87.92 | 93.13 | 20.46 |
| TSLASSO | 35.00 | 0.00 | 31458.00 | 264.13 | 161.81 | 11.12 |
| TSSCAD | 19.00 | 0.00 | 6661.00 | 237.35 | 152.51 | 52.15 |
| TSSRL | 2.00 | 0.00 | 1.00 | 96.92 | 97.43 | 1215.93 |

Scenario 5
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 3.00 | 0.00 | 4.00 | 413.85 | 200.94 | 4.21 |
| TSLASSO | 6.00 | 7.00 | 94.00 | 533.48 | 228.12 | 3.98 |
| TSSCAD | 3.00 | 2.50 | 6.00 | 470.83 | 215.94 | 20.15 |
| TSSRL | 3.00 | 0.00 | 1.00 | 468.10 | 215.18 | 2.13 |

Scenario 6
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 3.00 | 0.00 | 4.00 | 418.72 | 205.19 | 4.92 |
| TSLASSO | 5.00 | 2.00 | 50.50 | 438.59 | 207.14 | 4.01 |
| TSSCAD | 6.00 | 1.00 | 94.00 | 473.21 | 216.83 | 25.13 |
| TSSRL | 2.00 | 0.00 | 3.00 | 428.13 | 206.89 | 3.72 |

Scenario 7
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 2.00 | 0.00 | 3.00 | 405.13 | 201.98 | 10.98 |
| TSLASSO | 5.00 | 0.00 | 77.50 | 436.82 | 207.92 | 4.35 |
| TSSCAD | 9.00 | 28.00 | 451.50 | 434.12 | 206.15 | 30.95 |
| TSSRL | 2.00 | 0.00 | 1.00 | 430.93 | 205.73 | 3.43 |

Scenario 8
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 3.00 | 0.00 | 2.00 | 407.95 | 201.34 | 28.13 |
| TSLASSO | 16.00 | 0.00 | 1195.50 | 870.94 | 294.48 | 7.19 |
| TSSCAD | 28.00 | 0.00 | 29113.00 | 454.79 | 211.83 | 72.49 |
| TSSRL | 2.00 | 0.00 | 1.00 | 445.23 | 209.89 | 1529.38 |

Scenario 9
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 3.00 | 1.00 | 4.00 | 147.33 | 121.38 | 14.03 |
| TSLASSO | 3.00 | 1.00 | 12.00 | 194.07 | 139.31 | 4.09 |
| TSSCAD | 3.00 | 4.00 | 18.00 | 195.77 | 139.92 | 8.73 |
| TSSRL | 3.00 | 0.00 | 3.00 | 183.71 | 135.54 | 2.84 |

Scenario 10
| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 3.00 | 1.00 | 3.00 | 142.62 | 119.42 | 19.66 |
| TSLASSO | 4.00 | 0.00 | 19.00 | 183.93 | 135.62 | 6.28 |
| TSSCAD | 3.00 | 1.00 | 15.00 | 165.67 | 128.71 | 14.62 |
| TSSRL | 3.00 | 0.00 | 3.00 | 174.79 | 132.21 | 3.91 |

5.2. Microarray Gene Expression Data

The inbred mouse microarray gene expression dataset, with 31 female and 29 male mice, is considered to demonstrate the effectiveness of the proposed TSSRHR method. The expression values of 22,690 genes are measured on each array, and the response is a continuous phenotypic variable, stearoyl-CoA desaturase 1 (scd1), measured by probe set ID 1415965_at. The remaining 22,689 probe set IDs are regarded as candidate variables. We seek a good model which describes the relationship between the response variable and the candidate genes. The performances of the forecasting models on the inbred mouse microarray gene expression data are shown in Table 2. It is worth mentioning that a gene-filtering preprocessing procedure is applied to the inbred mouse data to reduce the expensive computational cost. TSSRHR delivers the best forecasting performance, giving the lowest MSE and RMSE, followed by TSSRL, TSLASSO, and TSSCAD. TSLASSO selects a model with 6311 three-way interaction terms, while TSSCAD selects only 3 variables, fewer than TSSRL and TSSRHR; this can be verified in Figure 5, which shows the selection patterns of the compared methods. The computational cost of TSSRL is almost four times that of TSSRHR, while TSLASSO and TSSCAD spend less computational cost than both. Overall, TSSRHR provides the lowest forecasting error with sufficient model sparsity and acceptable computational time.


Table 2: Median results on the inbred mouse microarray data. The three Nz columns give the numbers of selected main-effect, two-way, and three-way interaction terms; CC is the computational cost in seconds.

| Method | Nz(main) | Nz(2-way) | Nz(3-way) | MSE | RMSE | CC |
|---|---|---|---|---|---|---|
| TSSRHR | 1.00 | 1.00 | 7.00 | 8.61 | 29.33 | 0.48 |
| TSLASSO | 1.00 | 63.00 | 6311.00 | 136.66 | 116.89 | 0.22 |
| TSSCAD | 1.00 | 1.00 | 1.00 | 142.75 | 119.48 | 0.24 |
| TSSRL | 0.00 | 0.00 | 14.00 | 10.98 | 33.13 | 1.50 |

6. Conclusion

The two-way interaction model has been widely used in a number of scientific fields. However, researchers have found that three-way interaction terms can play a more important role than two-way interaction terms when establishing a forecasting model. To this end, this paper studies the high-order interaction model, which has both two-way and three-way interaction terms. To establish an interpretable model and reduce computation time, a sparse representation method using the two-stage square root hard ridge (TSSRHR) is proposed and studied. Owing to the two-stage design, the number of candidate interactions decreases dramatically, which resolves the computational issue. To implement SRHR in each stage, a simple and efficient algorithm is designed. Furthermore, the prediction and estimation error bounds of TSSRHR are exhibited under two regularity assumptions. For real data analysis, twinsine signal data and microarray gene expression data are used to show the forecasting and selection performances of TSSRHR compared with other stagewise sparse representation methods. A hierarchical variable selection problem on the high-order interaction model will be considered in future research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grants no. 71861012 and no. 71761016), the Natural Science Foundation of Jiangxi, China (Grants no. 20181BAB211020 and no. 20171BAA218001), and the China Postdoctoral Science Foundation (Grants no. 2017M620277 and no. 2018T110654).

References

  1. J. Fan and R. Li, "Statistical challenges with high dimensionality: feature selection in knowledge discovery," in Proceedings of the International Congress of Mathematicians, pp. 595–622, 2006.
  2. H. Jiang, "Sparse estimation based on square root nonconvex optimization in high-dimensional data," Neurocomputing, vol. 282, pp. 122–135, 2018.
  3. H. Jiang, "Model forecasting based on two-stage feature selection procedure using orthogonal greedy algorithm," Applied Soft Computing, vol. 63, pp. 110–123, 2017.
  4. H. Schwender and K. Ickstadt, "Identification of SNP interactions using logic regression," Biostatistics, vol. 9, no. 1, pp. 187–198, 2008.
  5. E. Assary, J. P. Vincent, R. Keers, and M. Pluess, "Gene-environment interaction and psychiatric disorders: review and future directions," Seminars in Cell & Developmental Biology, vol. 77, pp. 133–143, 2017.
  6. H. Akaike, "A new look at the statistical model identification," IEEE Transactions on Automatic Control, vol. 19, pp. 716–723, 1974.
  7. G. Schwarz, "Estimating the dimension of a model," The Annals of Statistics, vol. 6, no. 2, pp. 461–464, 1978.
  8. C. L. Mallows, "Some comments on Cp," Technometrics, vol. 42, no. 1, pp. 87–94, 2000.
  9. S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
  10. J. H. Friedman, "Fast sparse regression and classification," International Journal of Forecasting, vol. 28, no. 3, pp. 722–738, 2012.
  11. A. Antoniadis, "Wavelets in statistics: a review," Statistical Methods & Applications, vol. 6, no. 2, pp. 97–130, 1997.
  12. C. H. Zhang, "Nearly unbiased variable selection under minimax concave penalty," The Annals of Statistics, vol. 38, no. 2, pp. 894–942, 2010.
  13. Y. She, J. Wang, H. Li, and D. Wu, "Group iterative spectrum thresholding for super-resolution sparse spectral selection," IEEE Transactions on Signal Processing, vol. 61, no. 24, pp. 6371–6386, 2013.
  14. H. Jiang and Y. Dong, "Dimension reduction based on a penalized kernel support vector machine model," Knowledge-Based Systems, vol. 138, pp. 79–90, 2017.
  15. Y.-F. Ye, Y.-H. Shao, N.-Y. Deng, C.-N. Li, and X.-Y. Hua, "Robust Lp-norm least squares support vector regression with feature selection," Applied Mathematics and Computation, vol. 305, pp. 32–52, 2017.
  16. F. Bach, "Exploring large feature spaces with hierarchical multiple kernel learning," in Proceedings of the 22nd Annual Conference on Neural Information Processing Systems (NIPS 2008), pp. 105–112, Canada, December 2008.
  17. P. Zhao, G. Rocha, and B. Yu, "The composite absolute penalties family for grouped and hierarchical variable selection," The Annals of Statistics, vol. 37, no. 6, pp. 3468–3497, 2009.
  18. N. H. Choi, W. Li, and J. Zhu, "Variable selection with the strong heredity constraint and its oracle property," Journal of the American Statistical Association, vol. 105, no. 489, pp. 354–364, 2010.
  19. P. J. Bickel, Y. Ritov, and A. B. Tsybakov, "Hierarchical selection of variables in sparse high-dimensional regression," in Borrowing Strength: Theory Powering Applications - a Festschrift for Lawrence D. Brown, pp. 56–59, Institute of Mathematical Statistics, 2010.
  20. J. Wu, B. Devlin, S. Ringquist, M. Trucco, and K. Roeder, "Screen and clean: a tool for identifying interactions in genome-wide association studies," Genetic Epidemiology, vol. 34, no. 3, pp. 275–285, 2010.
  21. P. Radchenko and G. M. James, "Variable selection using adaptive nonlinear interaction structures in high dimensions," Journal of the American Statistical Association, vol. 105, no. 492, pp. 1541–1553, 2010.
  22. J. Bien, J. Taylor, and R. Tibshirani, "A lasso for hierarchical interactions," The Annals of Statistics, vol. 41, no. 3, pp. 1111–1141, 2013.
  23. Y. She, Z. Wang, and H. Jiang, "Group regularized estimation under structural hierarchy," Journal of the American Statistical Association, vol. 113, no. 521, pp. 445–454, 2018.
  24. N. Hao and H. H. Zhang, “Interaction screening for ultrahigh-dimensional data,” Journal of the American Statistical Association, vol. 109, no. 507, pp. 1285–1301, 2014. View at: Publisher Site | Google Scholar | MathSciNet
  25. M. Lim and T. Hastie, “Learning interactions via hierarchical group-lasso regularization,” Journal of Computational and Graphical Statistics, vol. 24, no. 3, pp. 627–654, 2015. View at: Publisher Site | Google Scholar | MathSciNet
  26. X. Yan and J. Bien, “Hierarchical sparse modeling: a choice of two group lasso formulations,” Statistical Science. A Review Journal of the Institute of Mathematical Statistics, vol. 32, no. 4, pp. 531–560, 2017. View at: Publisher Site | Google Scholar | MathSciNet
  27. A. Belloni, V. Chernozhukov, and L. Wang, “Square-root lasso: pivotal recovery of sparse signals via conic programming,” Biometrika, vol. 98, no. 4, pp. 791–806, 2011. View at: Publisher Site | Google Scholar | MathSciNet
  28. J. Chen and Z. Chen, “Extended Bayesian information criteria for model selection with large model spaces,” Biometrika, vol. 95, no. 3, pp. 759–771, 2008. View at: Publisher Site | Google Scholar | MathSciNet
  29. C.-K. Ing and T. L. Lai, “A stepwise regression method and consistent model selection for high-dimensional sparse linear models,” Statistica Sinica, vol. 21, no. 4, pp. 1473–1513, 2011. View at: Google Scholar
  30. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011. View at: Publisher Site | Google Scholar
  31. Y. Nesterov, “Gradient methods for minimizing composite objective function,” Tech. Rep., Université catholique de Louvain, Center for Operations Research and Econometrics, 2007. View at: Google Scholar
  32. P. J. Bickel, Y. Ritov, and A. B. Tsybakov, “Simultaneous analysis of lasso and Dantzig selector,” The Annals of Statistics, vol. 37, no. 4, pp. 1705–1732, 2009. View at: Publisher Site | Google Scholar | MathSciNet
  33. K. Lounici, “Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators,” Electronic Journal of Statistics, vol. 2, pp. 90–102, 2008. View at: Publisher Site | Google Scholar | MathSciNet
  34. S. A. van de Geer and P. Buhlmann, “On the conditions used to prove oracle results for the Lasso,” Electronic Journal of Statistics, vol. 3, pp. 1360–1392, 2009. View at: Publisher Site | Google Scholar | MathSciNet
  35. F. Ye and C.-H. Zhang, “Rate minimaxity of the lasso and dantzig selector for the lq loss in lr balls,” The Journal of Machine Learning Research, vol. 11, pp. 3519–3540, 2010. View at: Google Scholar
  36. C.-H. Zhang and T. Zhang, “A general theory of concave regularization for high-dimensional sparse estimation problems,” Statistical Science, vol. 27, no. 4, pp. 576–593, 2012. View at: Publisher Site | Google Scholar | MathSciNet
  37. B. Stucky and S. van de Geer, “Sharp oracle inequalities for square root regularization,” Journal of Machine Learning Research, vol. 18, no. 67, pp. 1–29, 2017. View at: Google Scholar | MathSciNet

Copyright © 2018 Yao Dong and He Jiang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
