Special Issue: Predictive Modelling Based on Statistical Learning in Biomedicine. Research Article (Open Access)
Probing for Sparse and Fast Variable Selection with Model-Based Boosting
Abstract
We present a new variable selection method based on model-based gradient boosting and randomly permuted variables. Model-based boosting is a tool to fit a statistical model while performing variable selection at the same time. A drawback of this fitting process is the need for multiple model fits on slightly altered data (e.g., cross-validation or bootstrap) to find the optimal number of boosting iterations and prevent overfitting. In our proposed approach, we augment the data set with randomly permuted versions of the true variables, so-called shadow variables, and stop the stepwise fitting as soon as such a variable would be added to the model. This allows variable selection in a single fit of the model without requiring further parameter tuning. We show that our probing approach can compete with state-of-the-art selection methods like stability selection in a high-dimensional classification benchmark and apply it to three gene expression data sets.
1. Introduction
At the latest since the emergence of genomic and proteomic data, where the number of available variables can far exceed the sample size, high-dimensional data analysis has become increasingly important in biomedical research [1–4]. Since common statistical regression methods like ordinary least squares are unable to estimate model coefficients in these settings due to singularity of the covariance matrix, various strategies have been proposed to select only truly influential, that is, informative, variables and discard those without impact on the outcome.
By enforcing sparsity in the coefficient vector, regularized regression approaches like the lasso [5], least angle regression [6], the elastic net [7], and gradient boosting algorithms [8, 9] perform variable selection directly in the model fitting process. This selection is controlled by tuning hyperparameters that define the degree of penalization. While these hyperparameters are commonly determined using resampling strategies like cross-validation, bootstrapping, and similar methods, the focus on minimizing the prediction error often results in the selection of many noninformative variables [10, 11].
One approach to address this problem is stability selection [12, 13], a method that combines variable selection with repeated subsampling of the data to evaluate selection frequencies of variables. While stability selection can considerably improve the performance of several variable selection methods including regularized regression models in high-dimensional settings [12, 14], its application depends on additional hyperparameters. Although recommendations for reasonable values exist [12, 14], proper specification of these parameters is not straightforward in practice, as the optimal configuration would require a priori knowledge about the number of informative variables. Another potential drawback is that stability selection increases the computational demand, which can be problematic in high-dimensional settings if the computational complexity of the used selection technique scales superlinearly with the number of predictor variables.
In this paper, we propose a new method to determine the optimal number of iterations in model-based boosting for variable selection, inspired by probing, a method frequently used in related areas of machine learning research [15–17] and the analysis of microarrays [18]. The general notion of probing involves the artificial inflation of the data with random noise variables, so-called probes or shadow variables. While this approach is in principle applicable to the lasso or least angle regression as well, it is especially attractive for the more computationally intensive boosting algorithms, as no resampling is required at all. Using the first selection of a shadow variable as stopping criterion, the algorithm is applied only once, without the need to optimize any hyperparameters, in order to extract a set of informative variables from the data, thereby making its application very fast and simple in practice. Furthermore, simulation studies show that the resulting models in fact tend to be more strictly regularized than those obtained via cross-validation and contain fewer uninformative variables.
In Section 2, we provide detailed descriptions of the model-based gradient boosting algorithm as well as stability selection and the new probing approach. Results of a simulation study comparing the performance of probing to cross-validation and different configurations of stability selection in a binary classification setting are presented in Section 3 before discussing the application of these methods to three data sets with measurements of gene expression levels in Section 4. Section 5 summarizes our findings and presents an outlook on extensions of the algorithm.
2. Methods
2.1. Gradient Boosting
Given a learning problem with a data set D = {(x^(i), y^(i))}_{i=1,…,n} sampled i.i.d. from a distribution over the joint space X × Y, with a p-dimensional input space X and an output space Y (e.g., Y = ℝ for regression and Y = {−1, 1} for binary classification), the aim is to estimate a function f: X → Y that maps elements of the input space to the output space as well as possible. Relying on the perspective of boosting as gradient descent in function space, gradient boosting algorithms try to minimize a given loss function ρ(y, f(x)) that measures the discrepancy between the predicted outcome f(x) and the true value y. Minimizing this discrepancy is achieved by repeatedly fitting weak prediction functions, called base learners, to previous mistakes, in order to combine them into a strong ensemble [19]. Although early implementations in the context of machine learning focused specifically on the use of regression trees, the concept has been successfully extended to suit the framework of a variety of statistical modelling problems [8, 20]. In this model-based approach, the base learners are typically defined by semiparametric regression functions to build an additive model. A common simplification is to assume that each base learner b_j is defined on only one component x_j of the input space, so that the additive predictor takes the form f(x) = β_0 + b_1(x_1) + ⋯ + b_p(x_p). For an overview of the fitting process of model-based boosting, see Algorithm 1.
Algorithm 1 (model-based gradient boosting). Starting at m = 0 with the constant loss-minimal initial value f^[0] ≡ argmin_c (1/n) Σ_{i=1}^n ρ(y^(i), c), the algorithm iteratively updates the predictor with a small fraction of the base learner with the best fit on the negative gradient of the loss function: (1) Set the iteration counter m := m + 1. (2) While m ≤ m_stop, compute the negative gradient vector of the loss function, u^(i) = −∂ρ(y^(i), f)/∂f evaluated at f = f^[m−1](x^(i)), i = 1, …, n. (3) Fit every base learner b_j, j = 1, …, p, separately to the negative gradient vector u. (4) Find b_{j*}, that is, the base learner with the best fit: j* = argmin_j Σ_{i=1}^n (u^(i) − b_j(x_j^(i)))². (5) Update the predictor with a small fraction ν of this component: f^[m] = f^[m−1] + ν · b_{j*}.
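For squared-error loss and simple linear base learners, the steps of Algorithm 1 reduce to repeatedly fitting each single variable to the current residuals and updating only the best-fitting component. A minimal self-contained sketch (illustrative only, not the mboost implementation; names and defaults are our own):

```python
import numpy as np

def componentwise_boosting(X, y, m_stop=100, nu=0.1):
    """Component-wise L2-boosting with simple linear base learners.

    Each iteration fits every single-variable least-squares base learner
    to the negative gradient (for squared-error loss: the residuals) and
    updates only the best-fitting component by a small fraction nu.
    """
    n, p = X.shape
    coef = np.zeros(p)
    intercept = y.mean()               # loss-minimal constant start f^[0]
    selected = []                      # order in which variables were picked
    for _ in range(m_stop):
        resid = y - (intercept + X @ coef)        # negative gradient (L2 loss)
        # least-squares slope of each base learner fitted to the residuals
        betas = (X * resid[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)
        rss = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
        j = int(np.argmin(rss))        # base learner with the best fit
        coef[j] += nu * betas[j]       # weak update of that component only
        selected.append(j)
    return intercept, coef, selected
```

Because only one coefficient moves per iteration and each step is damped by ν, stopping the loop early leaves most coefficients exactly at zero, which is what makes the algorithm a variable selection method.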
The resulting model can be interpreted as a generalized additive model with partial effects for each covariate contained in the additive predictor. Although the algorithm relies on two hyperparameters, the number of boosting iterations m_stop and the learning rate ν, Bühlmann and Hothorn [9] argue that the learning rate is of minor importance as long as it is "sufficiently small," with ν = 0.1 commonly used in practice.
The stopping criterion m_stop determines the degree of regularization and thereby heavily affects the model quality in terms of overfitting and variable selection [21]. However, as already outlined in the introduction, optimizing m_stop with common approaches like cross-validation results in the selection of many uninformative variables. Although still focusing on minimizing the prediction error, using a 25-fold bootstrap instead of the commonly used 10-fold cross-validation tends to return sparser models without sacrificing prediction performance [22].
2.2. Stability Selection
The weak performance of cross-validation regarding variable selection partly results from the fact that it pursues the goal of minimizing the prediction error instead of selecting only informative variables. One possible solution is the stability selection framework [12, 13], a very versatile algorithm that can be combined with all kinds of variable selection methods like gradient boosting, the lasso, or forward stepwise selection. It produces sparser solutions by controlling the number of false discoveries: stability selection defines an upper bound for the per-family error rate (PFER), that is, the expected number of uninformative variables included in the final model.
Using stability selection with model-based boosting therefore means that Algorithm 1 is run independently on B random subsamples of the data until either a predefined number of iterations is reached or q different variables have been selected. Subsequently, all variables are sorted with respect to their selection frequency across the B sets. The set of informative variables is then determined by a user-defined threshold π_thr that has to be exceeded. A detailed description of these steps is given in Algorithm 2.
Algorithm 2 (stability selection for model-based boosting [14]). (1) For b = 1, …, B: (a) draw a subset of size ⌊n/2⌋ from the data; (b) fit a boosting model to the subset until the number of selected variables is equal to q or the number of iterations reaches a prespecified maximum. (2) Compute the relative selection frequency per variable j, π̂_j := (1/B) Σ_{b=1}^B 1(j ∈ Ŝ_b), where Ŝ_b denotes the set of selected variables in iteration b. (3) Select variables with a selection frequency of at least π_thr, which yields the set of stable covariates Ŝ_stable := {j : π̂_j ≥ π_thr}.
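The subsampling scheme of Algorithm 2 can be sketched generically, with the boosting model abstracted into any selector that returns up to q variables. A toy sketch (our own illustration, not the stabs implementation; the selector interface is an assumption):

```python
import numpy as np

def stability_selection(X, y, select_q, B=100, q=5, pi_thr=0.6, rng=None):
    """Sketch of stability selection: run a base selector on B subsamples
    of size n/2 and keep variables whose selection frequency >= pi_thr.

    `select_q(X, y, q)` stands for any selection method returning the
    indices of (up to) q chosen variables, e.g. a boosting model stopped
    once q different variables have entered.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.choice(n, size=n // 2, replace=False)  # subsample w/o replacement
        for j in select_q(X[idx], y[idx], q):
            counts[j] += 1
    freqs = counts / B                    # relative selection frequencies
    stable = np.flatnonzero(freqs >= pi_thr)
    return stable, freqs
```

The key point is that the final set depends on three user-facing quantities (q, π_thr, and implicitly the PFER), which is exactly the tuning burden discussed in the text.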
Following this approach, the upper bound for the PFER can be derived as follows [12]: E(V) ≤ q² / ((2π_thr − 1)p), (7) where V denotes the number of uninformative variables in the stable set. With additional assumptions on exchangeability and shape restrictions on the distribution of simultaneous selection, even tighter bounds can be derived [13]. While this method has been successfully applied in a large number of different applications [23–26], several shortcomings impede its usage in practice. First, three additional hyperparameters q, PFER, and π_thr are introduced. Although only two of them have to be specified by the user (the third one can be calculated by assuming equality in (7)), it is not intuitively clear which parameter should be left out and how to specify the remaining two. Even though recommendations for reasonable settings for the selection threshold [12] or the PFER [14] have been proposed, the effectiveness of these settings is difficult to evaluate in practical applications. The second obstacle in the usage of stability selection is the considerable computational power required for the calculations. Overall, B boosting models have to be fitted (see [13] for recommendations on the choice of B), and a reasonable m_stop has to be found as well, which will most likely require cross-validation. Even though this process can be parallelized quite easily, complex model classes with smooth and higher-order effects can become extremely costly to fit.
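Assuming equality in bound (7), the number of variables q selected per subsample can be derived from a chosen PFER and threshold π_thr. A small sketch of this arithmetic (the floor rounding is our assumption, not necessarily the exact rule used by stability selection software):

```python
import math

def stabsel_q(p, pfer, pi_thr):
    """q implied by equality in the bound PFER <= q^2 / ((2*pi_thr - 1) * p).

    Solving for q and rounding down keeps the realized bound at or below
    the requested PFER.
    """
    return math.floor(math.sqrt(pfer * (2 * pi_thr - 1) * p))
```

For instance, with p = 200 variables, a tolerated PFER of 2, and π_thr = 0.9, at most 17 variables may be selected per subsample, since 17² / (0.8 · 200) ≈ 1.81 ≤ 2.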
2.3. Probing
The approach of adding probes or shadow variables, that is, artificial uninformative variables, to the data is not completely new and has already been investigated in some areas of machine learning. Although these methods share the underlying idea of benefiting from the presence of variables that are known to be independent of the outcome, the actual implementations of the concept differ (see Guyon and Elisseeff (2003) [15] for an overview). An especially useful approach, however, is to generate these additional variables as randomly shuffled versions of all observed variables. These permuted variables will be called shadow variables for the remainder of this paper and are denoted as x̃_1, …, x̃_p. Compared to adding randomly sampled noise variables, shadow variables have the advantage that the marginal distribution of x_j is preserved in x̃_j. This approach is tightly connected to the theory of permutation tests [27] and is used similarly for all-relevant variable selection with random forests [28].
Incorporating the probing concept into the sequential structure of model-based gradient boosting is rather straightforward. Since boosting algorithms proceed in a greedy fashion and only update the effect which yields the largest loss reduction in each iteration, selecting a shadow variable essentially implies that the best possible improvement at this stage relies on information that is known to be unrelated to the outcome. As a consequence, variables that are selected in later iterations are most likely correlated with the outcome only by chance as well. Therefore, all variables that have been added prior to the first shadow variable are assumed to have a true influence on the target variable and should be considered informative. A description of the full procedure is presented in Algorithm 3.
Algorithm 3 (probing for variable selection in model-based boosting). (1) Expand the data set by creating a randomly shuffled image x̃_j = σ(x_j) for each of the p variables x_j, with σ ∈ S_n, where S_n denotes the symmetric group containing all possible permutations of the n observation indices. (2) Initialize a boosting model on the inflated data set and start iterating with m = 1. (3) Stop as soon as the first shadow variable x̃_j is selected in the base learner selection step of Algorithm 1. (4) Return only the variables selected from the original data set.
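Combining Algorithm 3 with the component-wise L2-boosting sketched earlier gives a compact illustration of the whole procedure (again an illustrative re-implementation, not the mboost fork described in Section 4):

```python
import numpy as np

def probing_boost(X, y, nu=0.1, max_iter=1000, rng=None):
    """Component-wise L2-boosting on the data augmented with shadow
    variables (one random permutation per column); stop as soon as a
    shadow variable would be selected and return the original variables
    chosen before that point."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    shadows = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
    Z = np.hstack([X, shadows])          # inflated data set: columns p..2p-1 are shadows
    coef = np.zeros(2 * p)
    intercept = y.mean()
    selected = []
    for _ in range(max_iter):
        resid = y - (intercept + Z @ coef)
        betas = (Z * resid[:, None]).sum(axis=0) / (Z ** 2).sum(axis=0)
        rss = ((resid[:, None] - Z * betas) ** 2).sum(axis=0)
        j = int(np.argmin(rss))
        if j >= p:                       # first shadow variable: stop immediately
            break
        coef[j] += nu * betas[j]
        selected.append(j)
    return sorted(set(selected))
```

Note that no resampling and no tuning parameter beyond the (uncritical) learning rate is involved: a single run of the loop returns the selected set.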
The major advantage of this approach compared to variable selection via cross-validation or stability selection is that a single model fit is enough to find the informative variables and no expensive refitting of the model is required. Additionally, there is no need for any prespecification like the search space for m_stop in cross-validation or the additional hyperparameters (q, π_thr, PFER) of stability selection. However, it should be noted that, unlike classical cross-validation, probing aims at optimal variable selection instead of prediction performance. Since this usually involves stopping much earlier, the effect estimates associated with the selected variables are most likely strongly regularized and might not be optimal for predictions.
3. Simulation Study
In order to evaluate the performance of our proposed variable selection method, we conduct a benchmark simulation study in which we compare the set of nonzero coefficients determined by the use of shadow variables as stopping criterion to cross-validation and different configurations of stability selection. We simulate n data points for p variables from a multivariate normal distribution with Toeplitz correlation structure Σ_jk = ρ^{|j−k|} for all pairs of variables j and k. The response variable is then generated by sampling Bernoulli experiments with probability π_i = exp(η_i)/(1 + exp(η_i)), with the linear predictor η_i = x_i^T β for the ith observation and all nonzero elements of β drawn at random. Since the total number of nonzero coefficients determines the number of informative variables in the setting, it is denoted as p_inf.
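A toy version of this simulation design can be sketched as follows (the effect-size distribution, here standard normal, and the default ρ = 0.9 are our assumptions for illustration):

```python
import numpy as np

def simulate_binary(n, p, p_inf, rho=0.9, rng=None):
    """Draw X ~ N(0, Sigma) with Toeplitz correlation Sigma[j, k] = rho**|j-k|,
    build a sparse linear predictor with p_inf nonzero coefficients at
    random positions, and sample Bernoulli responses via the logistic link."""
    rng = np.random.default_rng(rng)
    Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    pos = rng.choice(p, size=p_inf, replace=False)   # random coefficient positions
    beta[pos] = rng.normal(size=p_inf)               # assumed effect-size distribution
    eta = X @ beta                                   # linear predictor
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)))      # Bernoulli draws
    return X, y, beta
```

Sampling the positions of the nonzero coefficients anew in every run is what creates the varying correlation patterns among the informative variables described above.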
Overall, we consider 12 different simulation scenarios defined by all possible combinations of n, p, and p_inf. These cover 2 low-dimensional settings with p < n, 4 settings where p and n are of similar magnitude, and 6 high-dimensional settings with p > n. Each configuration is run repeatedly: along with new realizations of X and y, we also draw new values for the nonzero coefficients in β and sample their positions in the vector in each run to allow for varying correlation patterns among the informative variables. For variable selection with cross-validation, a 25-fold bootstrap (the default in mboost) is used to determine the final number of iterations. Different configurations of stability selection were tested to investigate whether and, if so, to what extent these settings affect the selection. In order to explicitly use the upper error bounds of stability selection, we decided to specify combinations of the threshold π_thr and the PFER and calculate q from (7). Aside from the learning rate ν, which is set to 0.1 for all methods, no further parameters have to be specified for the probing scheme. Two performance measures are considered for the evaluation of the methods with respect to variable selection: first, the true positive rate (TPR) as the fraction of (correctly) selected variables among all truly informative variables and, second, the false discovery rate (FDR) as the fraction of uninformative variables in the set of selected variables. To ensure reproducibility, the R package batchtools [29] was used for all simulations.
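The two evaluation measures reduce to simple set arithmetic on the selected and truly informative variable indices; a minimal sketch:

```python
def tpr_fdr(selected, informative):
    """True positive rate and false discovery rate of a selected variable set.

    TPR = |selected ∩ informative| / |informative|
    FDR = |selected \ informative| / |selected|  (0 if nothing was selected)
    """
    selected, informative = set(selected), set(informative)
    tp = len(selected & informative)
    tpr = tp / len(informative) if informative else 0.0
    fdr = (len(selected) - tp) / len(selected) if selected else 0.0
    return tpr, fdr
```

An ideal selector achieves TPR = 1 and FDR = 0, which corresponds to the top left corner of the performance plots discussed below.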
The results of the simulations for all settings are illustrated in Figure 1. With TPR on the y-axis and FDR on the x-axis, solutions displayed in the top left corner of the plots successfully separate the informative variables from those without true effect on the response. Although a comparably sparse resampling scheme is already used, the FDR of variable selection via cross-validation is still relatively high, with more than 50% false positives in the selected sets in the majority of the simulated scenarios. Whereas this seems mostly disadvantageous in the lower-dimensional cases, the trend towards greedier solutions leads to a considerably higher chance of identifying more of the truly informative variables in the high-dimensional settings or with a very high number of informative variables, however, still at the price of picking up many noise variables on the way. Pooling the results of all configurations considered for stability selection, the results cover a large area of the performance space in Figure 1, indicating a high sensitivity to the decisions regarding the three tuning parameters.
Examining the results of the single configurations separately in Figure 2, the dilemma becomes particularly clear. Despite being able to control the upper bound for the expected number of false positive selections, only a minority of the true effects are selected if the PFER is set too conservatively. In addition, the high variance of the FDR observed for these configurations in some settings somewhat counteracts the goal of achieving more certainty about the selected variables that one might pursue by setting the PFER very low. The performance of probing, on the other hand, reveals a much more stable pattern and outperforms stability selection in the difficult high-dimensional settings. In fact, the TPR is higher than or similar to that of all configurations used for stability selection, at the price of a slightly higher FDR in some settings. Interestingly, probing seems to provide results similar to those of stability selection with PFER = 8, raising the question of whether the use of shadow variables allows statements about the number of expected false positives in the selected variable set.
Considering the runtime, however, we can see that probing is orders of magnitude faster, with an average runtime of less than a second compared to 12 seconds for cross-validation and almost one minute for stability selection.
4. Application on Gene Expression Data
In this section, we explore the use of probing as a tool for variable selection on three gene expression data sets. More specifically, this includes data from oligonucleotide arrays for colon cancer detection [30] with tumor and normal colon tissue samples and measured gene expression levels. In addition, we analyse data from a study aiming to predict metastasis of breast carcinoma [31], where patients were labelled good or poor depending on whether or not they remained event-free for a five-year period after diagnosis. This data set contains log-transformed gene expression levels. The last example examines riboflavin production by Bacillus subtilis [32], with observations of log-transformed riboflavin production rates and gene expression levels. All data are publicly available via the R packages datamicroarray and hdi. Our proposed probing approach is implemented in a fork of the mboost [33] software for component-wise gradient boosting. It can be easily used by setting probe = TRUE in the glmboost() call.
In order to evaluate the results provided by the new approach, we analysed the data using cross-validation, stability selection [34], and the lasso [35] for comparison. Table 1 shows the total number of variables selected by each method along with the size of the intersections between the sets. Starting with the probably least surprising result, boosting with cross-validation leads to the largest set of selected variables in all examples, whereas using probing as stopping criterion instead clearly reduces these sets. Since both approaches are based on the same regularization profile until the first shadow variable enters the model, the less regularized solution of cross-validation always contains all variables selected with probing. For stability selection, we used the conservative configuration suggested by Bühlmann et al. (2014) [32]. As a consequence, the set of variables considered to be informative shrinks further in all three scenarios. Again, these results clearly reflect the findings from the simulation study in Section 3, placing the probing approach between stability selection with a probably overly conservative error bound and the greedy selection with cross-validation.

Since all approaches considered so far rely on boosting algorithms, we additionally examined variable selection with the lasso. We used the default settings of the glmnet package for R to calculate the lasso regularization path and determine the final model via 10-fold cross-validation [35]. Although the lasso already tends to result in sparser models under these conditions compared to model-based boosting [22], glmnet additionally uses a "one-standard-error rule" to regularize the solution even further. In fact, this leads to the selection of an identical set of genes as probing for the breast carcinoma example, but the final models estimated for the two other examples still contain a higher number of variables. This is especially the case for the data on riboflavin production, where the lasso solution is, moreover, not simply a subset of the cross-validated boosting approach and only agrees with it on 23 mutually selected variables. Interestingly, even one of the 5 variables proposed by stability selection is missing. The R code used for this analysis can be found in the Supplementary Material of this manuscript, available online at https://doi.org/10.1155/2017/1421409.
5. Conclusion
We proposed a new approach to determine the optimal number of iterations for sparse and fast variable selection with model-based boosting via the addition of probes or shadow variables (probing). We were able to demonstrate via a simulation study and the analysis of gene expression data that our approach is both a feasible and convenient strategy for variable selection in high-dimensional settings. In contrast to common tuning procedures for model-based boosting, which rely on resampling or cross-validation to optimize the prediction accuracy [21], our probing approach directly addresses the variable selection properties of the algorithm. As a result, it substantially reduces the high number of false discoveries that arise with standard procedures [14] while only requiring a single model fit to obtain the selected set of variables.
Aside from the very short runtime, another attractive feature of probing is that no additional tuning parameters have to be specified to run the algorithm. While this greatly increases its ease of use, there is, of course, a trade-off regarding flexibility, as the lack of tuning parameters means that there is no way to steer the results towards more or less conservative solutions. However, a corresponding tuning approach in the context of probing could be to allow a certain number of selected probes in the model before deciding to stop the algorithm (cf. Guyon and Elisseeff, 2003 [15]). Although variables selected after the first probe can be labelled informative less convincingly, this resembles the uncertainty that comes with specifying higher values for the error bound of stability selection.
A potential drawback of our approach is that, due to the stochasticity of the permutations, there is no deterministic solution and the selected set might vary slightly after rerunning the algorithm. In order to stabilize the results, probing could also be combined with resampling, determining the optimal stopping iteration by running the procedure on several bootstrap samples first. Of course, this requires the computation of multiple models and therefore again increases the runtime of the whole selection procedure.
Another promising extension could be a combination with stability selection. With each model stopping at the first shadow variable, only the selection threshold π_thr would have to be specified. However, since this constitutes a fundamental change of the original procedure, further research on this topic is necessary to assess how it would affect the resulting error bound.
While in this work we focused on gradient boosting for binary and continuous data, there is no reason why our results should not also carry over to other regression settings or related statistical boosting algorithms such as likelihood-based boosting [36]. Likelihood-based boosting follows the same basic idea but uses different updates, coinciding with gradient boosting in the case of Gaussian responses [37]. Further research is also warranted on extending our approach to multidimensional boosting algorithms [25, 38], where variables have to be selected for various models simultaneously.
In addition, probing as a tuning scheme could generally also be combined with similar regularized regression approaches like the lasso [5, 22]. Our proposal for model-based boosting could hence be a starting point for a new way of tuning algorithmic models for high-dimensional data, not with the focus on prediction accuracy, but directly addressing the desired variable selection properties.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
The work of authors Tobias Hepp and Andreas Mayr was supported by the Interdisciplinary Center for Clinical Research (IZKF) of the Friedrich-Alexander-University Erlangen-Nürnberg (Project J49). The authors additionally acknowledge support by Deutsche Forschungsgemeinschaft and Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) within the funding programme Open Access Publishing.
Supplementary Materials
The supplementary material contains the code used to run the experiments as well as the resulting data used to create the figures and tables.
References
[1] R. Romero, J. Espinoza, F. Gotsch et al., "The use of high-dimensional biology (genomics, transcriptomics, proteomics, and metabolomics) to understand the preterm parturition syndrome," BJOG: An International Journal of Obstetrics and Gynaecology, vol. 113, no. s3, pp. 118–135, 2006.
[2] R. Clarke, H. W. Ressom, A. Wang et al., "The properties of high-dimensional data spaces: implications for exploring gene and protein expression data," Nature Reviews Cancer, vol. 8, no. 1, pp. 37–49, 2008.
[3] P. Mallick and B. Kuster, "Proteomics: a pragmatic perspective," Nature Biotechnology, vol. 28, no. 7, pp. 695–709, 2010.
[4] M. L. Bermingham, R. Pong-Wong, A. Spiliopoulou et al., "Application of high-dimensional feature selection: evaluation for genomic prediction in man," Scientific Reports, vol. 5, Article ID 10312, 2015.
[5] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society, Series B, vol. 58, no. 1, pp. 267–288, 1996.
[6] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," The Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004.
[7] H. Zou and T. Hastie, "Regularization and variable selection via the elastic net," Journal of the Royal Statistical Society, Series B, vol. 67, no. 2, pp. 301–320, 2005.
[8] J. Friedman, T. Hastie, and R. Tibshirani, "Additive logistic regression: a statistical view of boosting," The Annals of Statistics, vol. 28, no. 2, pp. 337–407, 2000.
[9] P. Bühlmann and T. Hothorn, "Boosting algorithms: regularization, prediction and model fitting," Statistical Science, vol. 22, no. 4, pp. 477–505, 2007.
[10] N. Meinshausen and P. Bühlmann, "High-dimensional graphs and variable selection with the lasso," The Annals of Statistics, vol. 34, no. 3, pp. 1436–1462, 2006.
[11] C. Leng, Y. Lin, and G. Wahba, "A note on the lasso and related procedures in model selection," Statistica Sinica, vol. 16, no. 4, pp. 1273–1284, 2006.
[12] N. Meinshausen and P. Bühlmann, "Stability selection," Journal of the Royal Statistical Society, Series B, vol. 72, no. 4, pp. 417–473, 2010.
[13] R. D. Shah and R. J. Samworth, "Variable selection with error control: another look at stability selection," Journal of the Royal Statistical Society, Series B, vol. 75, no. 1, pp. 55–80, 2013.
[14] B. Hofner, L. Boccuto, and M. Göker, "Controlling false discoveries in high-dimensional situations: boosting with stability selection," BMC Bioinformatics, vol. 16, no. 1, article 144, 2015.
[15] I. Guyon and A. Elisseeff, "An introduction to variable and feature selection," Journal of Machine Learning Research, vol. 3, pp. 1157–1182, 2003.
[16] J. Bi, K. P. Bennett, M. Embrechts, C. M. Breneman, and M. Song, "Dimensionality reduction via sparse support vector machines," Journal of Machine Learning Research, vol. 3, pp. 1229–1243, 2003.
[17] Y. Wu, D. D. Boos, and L. A. Stefanski, "Controlling variable selection by the addition of pseudovariables," Journal of the American Statistical Association, vol. 102, no. 477, pp. 235–243, 2007.
[18] V. G. Tusher, R. Tibshirani, and G. Chu, "Significance analysis of microarrays applied to the ionizing radiation response," Proceedings of the National Academy of Sciences of the United States of America, vol. 98, no. 9, pp. 5116–5121, 2001.
[19] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer Series in Statistics, Springer, New York, NY, USA, 2001.
[20] G. Ridgeway, "The state of boosting," Computing Science and Statistics, vol. 31, pp. 172–181, 1999.
[21] A. Mayr, B. Hofner, and M. Schmid, "The importance of knowing when to stop: a sequential stopping rule for component-wise gradient boosting," Methods of Information in Medicine, vol. 51, no. 2, pp. 178–186, 2012.
[22] T. Hepp, M. Schmid, O. Gefeller, E. Waldmann, and A. Mayr, "Approaches to regularized regression: a comparison between gradient boosting and the lasso," Methods of Information in Medicine, vol. 55, no. 5, pp. 422–430, 2016.
[23] A.-C. Haury, F. Mordelet, P. Vera-Licona, and J.-P. Vert, "TIGRESS: trustful inference of gene regulation using stability selection," BMC Systems Biology, vol. 6, article 145, 2012.
[24] S. Ryali, T. Chen, K. Supekar, and V. Menon, "Estimation of functional connectivity in fMRI data using stability selection-based sparse partial correlation with elastic net penalty," NeuroImage, vol. 59, no. 4, pp. 3852–3861, 2012.
[25] J. Thomas, A. Mayr, B. Bischl, M. Schmid, A. Smith, and B. Hofner, "Stability selection for component-wise gradient boosting in multiple dimensions," 2016.
[26] A. Mayr, B. Hofner, and M. Schmid, "Boosting the discriminatory power of sparse survival models via optimization of the concordance index and stability selection," BMC Bioinformatics, vol. 17, no. 1, article 288, 2016.
[27] H. Strasser and C. Weber, "The asymptotic theory of permutation statistics," Mathematical Methods of Statistics, vol. 8, no. 2, pp. 220–250, 1999.
[28] M. B. Kursa, A. Jankowski, and W. R. Rudnicki, "Boruta: a system for feature selection," Fundamenta Informaticae, vol. 101, no. 4, pp. 271–285, 2010.
[29] M. Lang, B. Bischl, and D. Surmann, "batchtools: tools for R to work on batch systems," The Journal of Open Source Software, vol. 2, no. 10, 2017.
[30] U. Alon, N. Barkai, D. A. Notterman et al., "Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays," Proceedings of the National Academy of Sciences of the United States of America, vol. 96, no. 12, pp. 6745–6750, 1999.
[31] E. Gravier, G. Pierron, A. Vincent-Salomon et al., "A prognostic DNA signature for T1T2 node-negative breast cancer patients," Genes, Chromosomes and Cancer, vol. 49, no. 12, pp. 1125–1134, 2010.
[32] P. Bühlmann, M. Kalisch, and L. Meier, "High-dimensional statistics with a view toward applications in biology," Annual Review of Statistics and Its Application, vol. 1, no. 1, pp. 255–278, 2014.
[33] T. Hothorn, P. Bühlmann, T. Kneib, M. Schmid, and B. Hofner, mboost: Model-Based Boosting, R package version 2.7-0, 2016.
[34] B. Hofner and T. Hothorn, stabs: Stability Selection with Error Control, R package version 0.5-1, 2015.
[35] J. Friedman, T. Hastie, and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," Journal of Statistical Software, vol. 33, no. 1, pp. 1–22, 2010.
[36] G. Tutz and H. Binder, "Generalized additive modeling with implicit variable selection by likelihood-based boosting," Biometrics, vol. 62, no. 4, pp. 961–971, 2006.
[37] A. Mayr, H. Binder, O. Gefeller, and M. Schmid, "The evolution of boosting algorithms: from machine learning to statistical modelling," Methods of Information in Medicine, vol. 53, no. 6, pp. 419–427, 2014.
[38] A. Mayr, N. Fenske, B. Hofner, T. Kneib, and M. Schmid, "Generalized additive models for location, scale and shape for high-dimensional data: a flexible approach based on boosting," Journal of the Royal Statistical Society, Series C, vol. 61, no. 3, pp. 403–427, 2012.
Copyright
Copyright © 2017 Janek Thomas et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.