Computational and Mathematical Methods in Medicine
Volume 2017, Article ID 6083072, 12 pages
https://doi.org/10.1155/2017/6083072
Review Article

An Update on Statistical Boosting in Biomedicine

1Institut für Medizininformatik, Biometrie und Epidemiologie, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
2Institut für Statistik, Ludwig-Maximilians-Universität München, Munich, Germany
3Paul-Ehrlich-Institut, Langen, Germany

Correspondence should be addressed to Andreas Mayr; andreas.mayr@fau.de

Received 24 February 2017; Accepted 8 June 2017; Published 2 August 2017

Academic Editor: Andrzej Kloczkowski

Copyright © 2017 Andreas Mayr et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. A. Mayr, H. Binder, O. Gefeller, and M. Schmid, “The evolution of boosting algorithms: from machine learning to statistical modelling,” Methods of Information in Medicine, vol. 53, no. 6, pp. 419–427, 2014.
  2. P. Bühlmann and T. Hothorn, “Rejoinder: boosting algorithms: regularization, prediction and model fitting,” Statistical Science, vol. 22, no. 4, pp. 516–522, 2007.
  3. G. Tutz and H. Binder, “Generalized additive modeling with implicit variable selection by likelihood-based boosting,” Biometrics, vol. 62, no. 4, pp. 961–971, 2006.
  4. B. Hofner, T. Hothorn, T. Kneib, and M. Schmid, “A framework for unbiased model selection based on boosting,” Journal of Computational and Graphical Statistics, vol. 20, no. 4, pp. 956–971, 2011.
  5. T. Kneib, T. Hothorn, and G. Tutz, “Variable selection and model choice in geoadditive regression models,” Biometrics, vol. 65, no. 2, pp. 626–634, 2009.
  6. L. Breiman, “Statistical modeling: the two cultures,” Statistical Science, vol. 16, no. 3, pp. 199–231, 2001.
  7. Y. Freund, “Boosting a weak learning algorithm by majority,” in Proceedings of the Third Annual Workshop on Computational Learning Theory, COLT 1990, M. A. Fulk and J. Case, Eds., pp. 202–216, University of Rochester, Rochester, NY, USA, August 1990.
  8. J. H. Friedman, “Greedy function approximation: a gradient boosting machine,” The Annals of Statistics, vol. 29, no. 5, pp. 1189–1232, 2001.
  9. J. Friedman, T. Hastie, and R. Tibshirani, “Additive logistic regression: a statistical view of boosting,” The Annals of Statistics, vol. 28, no. 2, pp. 337–407, 2000.
  10. T. Hepp, M. Schmid, O. Gefeller, E. Waldmann, and A. Mayr, “Approaches to regularized regression – a comparison between gradient boosting and the lasso,” Methods of Information in Medicine, vol. 55, no. 5, pp. 422–430, 2016.
  11. A. Mayr, H. Binder, O. Gefeller, and M. Schmid, “Extending statistical boosting,” Methods of Information in Medicine, vol. 53, no. 6, pp. 428–435, 2014.
  12. B. Hofner, L. Boccuto, and M. Göker, “Controlling false discoveries in high-dimensional situations: boosting with stability selection,” BMC Bioinformatics, vol. 16, no. 1, article 144, 2015.
  13. E. Waldmann, D. Taylor-Robinson, N. Klein et al., “Boosting joint models for longitudinal and time-to-event data,” Biometrical Journal, 2017.
  14. S. Brockhaus, M. Melcher, F. Leisch, and S. Greven, “Boosting flexible functional regression models with a high number of functional historical effects,” Statistics and Computing, vol. 27, no. 4, pp. 913–926, 2017.
  15. R. E. Schapire, “The strength of weak learnability,” Machine Learning, vol. 5, no. 2, pp. 197–227, 1990.
  16. R. Schapire and Y. Freund, Boosting: Foundations and Algorithms, MIT Press, Cambridge, MA, USA, 2012.
  17. Y. Freund and R. Schapire, “Experiments with a new boosting algorithm,” in Proceedings of the Thirteenth International Conference on Machine Learning, pp. 148–156, Morgan Kaufmann, San Francisco, CA, USA, 1996.
  18. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer, New York, NY, USA, 2nd edition, 2009.
  19. A. J. Wyner, M. Olson, J. Bleich, and D. Mease, “Explaining the success of AdaBoost and random forests as interpolating classifiers,” Journal of Machine Learning Research, vol. 18, no. 48, pp. 1–33, 2017.
  20. C. Strobl, A.-L. Boulesteix, A. Zeileis, and T. Hothorn, “Bias in random forest variable importance measures: illustrations, sources and a solution,” BMC Bioinformatics, vol. 8, article 25, 2007.
  21. A. Hapfelmeier, T. Hothorn, K. Ulm, and C. Strobl, “A new variable importance measure for random forests with missing data,” Statistics and Computing, vol. 24, no. 1, pp. 21–34, 2014.
  22. T. J. Hastie and R. J. Tibshirani, Generalized Additive Models, Chapman and Hall, London, UK, 1990.
  23. A. Mayr, B. Hofner, and M. Schmid, “The importance of knowing when to stop: a sequential stopping rule for component-wise gradient boosting,” Methods of Information in Medicine, vol. 51, no. 2, pp. 178–186, 2012.
  24. T. Hothorn, “Boosting – an unusual yet attractive optimiser,” Methods of Information in Medicine, vol. 53, no. 6, pp. 417–418, 2014.
  25. L. Mason, J. Baxter, P. Bartlett, and M. Frean, “Boosting algorithms as gradient descent,” in Proceedings of the 13th Annual Neural Information Processing Systems Conference, NIPS 1999, pp. 512–518, Denver, CO, USA, December 1999.
  26. T. Hothorn, P. Bühlmann, T. Kneib, M. Schmid, and B. Hofner, “mboost: Model-Based Boosting,” 2017, R package version 2.8-0. https://CRAN.R-project.org/package=mboost.
  27. R Development Core Team, R: A Language and Environment for Statistical Computing, Vienna, Austria, 2016, ISBN 3-900051-07-0. https://www.R-project.org.
  28. B. Hofner, A. Mayr, N. Robinzonov, and M. Schmid, “Model-based boosting in R: a hands-on tutorial using the R package mboost,” Computational Statistics, vol. 29, no. 1-2, pp. 3–35, 2014.
  29. G. Tutz and H. Binder, “Boosting ridge regression,” Computational Statistics and Data Analysis, vol. 51, no. 12, pp. 6044–6059, 2007.
  30. P. Bühlmann and B. Yu, “Boosting with the L2 loss: regression and classification,” Journal of the American Statistical Association, vol. 98, pp. 324–338, 2003.
  31. H. Binder, GAMBoost: Generalized Linear and Additive Models by Likelihood Based Boosting, 2011, R package version 1.2-2. https://CRAN.R-project.org/package=GAMBoost.
  32. H. Binder, CoxBoost: Cox Models by Likelihood-based Boosting for a Single Survival Endpoint or Competing Risks, 2013, R package version 1.4. https://CRAN.R-project.org/package=CoxBoost.
  33. R. De Bin, “Boosting in Cox regression: a comparison between the likelihood-based and the model-based approaches with focus on the R-packages CoxBoost and mboost,” Computational Statistics, vol. 31, no. 2, pp. 513–531, 2016.
  34. R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B, vol. 58, no. 1, pp. 267–288, 1996.
  35. B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, “Least angle regression,” The Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004.
  36. N. Meinshausen, G. Rocha, and B. Yu, “Discussion: a tale of three cousins: Lasso, L2 boosting and Dantzig,” The Annals of Statistics, vol. 35, no. 6, pp. 2373–2384, 2007.
  37. J. Duan, C. Soussen, D. Brie, J. Idier, and Y.-P. Wang, “On LARS/homotopy equivalence conditions for over-determined LASSO,” IEEE Signal Processing Letters, vol. 19, no. 12, 2012.
  38. P. Bühlmann, J. Gertheiss, S. Hieke et al., “Discussion of ‘the evolution of boosting algorithms’ and ‘extending statistical boosting’,” Methods of Information in Medicine, vol. 53, no. 6, pp. 436–445, 2014.
  39. T. Hastie, J. Taylor, R. Tibshirani, and G. Walther, “Forward stagewise regression and the monotone lasso,” Electronic Journal of Statistics, vol. 1, pp. 1–29, 2007.
  40. S. Janitza, H. Binder, and A.-L. Boulesteix, “Pitfalls of hypothesis tests and model selection on bootstrap samples: causes and consequences in biometrical applications,” Biometrical Journal, vol. 58, no. 3, pp. 447–473, 2016.
  41. N. Meinshausen and P. Bühlmann, “Stability selection,” Journal of the Royal Statistical Society: Series B, vol. 72, no. 4, pp. 417–473, 2010.
  42. R. D. Shah and R. J. Samworth, “Variable selection with error control: another look at stability selection,” Journal of the Royal Statistical Society: Series B, vol. 75, no. 1, pp. 55–80, 2013.
  43. B. Hofner and T. Hothorn, stabs: Stability Selection with Error Control, 2017, R package version 0.6-2, https://CRAN.R-project.org/package=stabs.
  44. A. Mayr, B. Hofner, and M. Schmid, “Boosting the discriminatory power of sparse survival models via optimization of the concordance index and stability selection,” BMC Bioinformatics, vol. 17, no. 1, article 288, 2016.
  45. A. Mayr and M. Schmid, “Boosting the concordance index for survival data – a unified framework to derive and evaluate biomarker combinations,” PLoS ONE, vol. 9, no. 1, Article ID e84483, 2014.
  46. Y. Chen, Z. Jia, D. Mercola, and X. Xie, “A gradient boosting algorithm for survival analysis via direct optimization of concordance index,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 873595, 2013.
  47. J. Thomas, A. Mayr, B. Bischl, M. Schmid, A. Smith, and B. Hofner, “Gradient boosting for distributional regression: faster tuning and improved variable selection via noncyclical updates,” Statistics and Computing, pp. 1–15, 2017.
  48. R. A. Rigby and D. M. Stasinopoulos, “Generalized additive models for location, scale and shape,” Journal of the Royal Statistical Society: Series C, vol. 54, no. 3, pp. 507–554, 2005.
  49. A. Mayr, N. Fenske, B. Hofner, T. Kneib, and M. Schmid, “Generalized additive models for location, scale and shape for high dimensional data – a flexible approach based on boosting,” Journal of the Royal Statistical Society: Series C, vol. 61, no. 3, pp. 403–427, 2012.
  50. B. Hofner, A. Mayr, and M. Schmid, “gamboostLSS: an R package for model building and variable selection in the GAMLSS framework,” Journal of Statistical Software, vol. 74, no. 1, 2016.
  51. B. Hofner, A. Mayr, N. Fenske, J. Thomas, and M. Schmid, “gamboostLSS: Boosting Methods for GAMLSS Models,” 2017, R package version 2.0-0, https://CRAN.R-project.org/package=gamboostLSS.
  52. U. Alon, N. Barka, D. A. Notterman et al., “Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays,” Proceedings of the National Academy of Sciences of the United States of America, vol. 96, no. 12, pp. 6745–6750, 1999.
  53. E. Gravier, G. Pierron, A. Vincent-Salomon et al., “A prognostic DNA signature for T1T2 node-negative breast cancer patients,” Genes, Chromosomes and Cancer, vol. 49, no. 12, pp. 1125–1134, 2010.
  54. P. Bühlmann, M. Kalisch, and L. Meier, “High-dimensional statistics with a view toward applications in biology,” Annual Review of Statistics and Its Application, vol. 1, pp. 255–278, 2014.
  55. J. A. Ramey, datamicroarray: Collection of Data Sets for Classification, 2016, https://github.com/ramhiser/datamicroarray.
  56. R. Dezeure, P. Bühlmann, L. Meier, and N. Meinshausen, “High-dimensional inference: confidence intervals, p-values and R-software hdi,” Statistical Science, vol. 30, no. 4, pp. 533–558, 2015.
  57. M. Sariyar, M. Schumacher, and H. Binder, “A boosting approach for adapting the sparsity of risk prediction signatures based on different molecular levels,” Statistical Applications in Genetics and Molecular Biology, vol. 13, no. 3, pp. 343–357, 2014.
  58. C.-X. Zhang, J.-S. Zhang, and S.-W. Kim, “PBoostGA: pseudo-boosting genetic algorithm for variable ranking and selection,” Computational Statistics, vol. 31, no. 4, pp. 1237–1262, 2016.
  59. J. Thomas, T. Hepp, A. Mayr, and B. Bischl, “Probing for sparse and fast variable selection with model-based boosting”.
  60. Y. Huang, J. Liu, H. Yi, B.-C. Shia, and S. Ma, “Promoting similarity of model sparsity structure in integrative analysis of cancer genetic data,” Statistics in Medicine, vol. 36, no. 3, pp. 509–559, 2017.
  61. P. Bühlmann and B. Yu, “Sparse boosting,” Journal of Machine Learning Research, vol. 7, pp. 1001–1024, 2006.
  62. J. O. Ramsay and B. W. Silverman, Applied Functional Data Analysis: Methods and Case Studies, Springer, Berlin, Germany, 2002.
  63. S. Greven and F. Scheipl, “A general framework for functional regression modelling,” Statistical Modelling, vol. 17, no. 1-2, pp. 1–35, 2017.
  64. J. S. Morris, “Functional regression,” Annual Review of Statistics and Its Application, vol. 2, pp. 321–359, 2015.
  65. S. Brockhaus, F. Scheipl, T. Hothorn, and S. Greven, “The functional linear array model,” Statistical Modelling, vol. 15, no. 3, pp. 279–300, 2015.
  66. I. D. Currie, M. Durban, and P. H. Eilers, “Generalized linear array models with applications to multidimensional smoothing,” Journal of the Royal Statistical Society: Series B, vol. 68, no. 2, pp. 259–280, 2006.
  67. S. Brockhaus and D. Rügamer, FDboost: Boosting Functional Regression Models, 2016, R package version 0.2-0, https://CRAN.R-project.org/package=FDboost.
  68. S. Brockhaus, A. Fuest, A. Mayr, and S. Greven, “Signal regression models for location, scale and shape with an application to stock returns”.
  69. D. Rügamer, S. Brockhaus, K. Gentsch, K. Scherer, and S. Greven, “Boosting factor-specific functional historical models for the detection of synchronisation in bioelectrical signals”.
  70. S. Ullah and C. F. Finch, “Applications of functional data analysis: a systematic review,” BMC Medical Research Methodology, vol. 13, no. 1, article 43, 2013.
  71. C. Zemmour, F. Bertucci, P. Finetti et al., “Prediction of early breast cancer metastasis from DNA microarray data using high-dimensional Cox regression models,” Cancer Informatics, vol. 14, supplement 2, pp. 129–138, 2015.
  72. M. Schmid and T. Hothorn, “Flexible boosting of accelerated failure time models,” BMC Bioinformatics, vol. 9, article 269, 2008.
  73. M. S. Wulfsohn and A. A. Tsiatis, “A joint model for survival and longitudinal data measured with error,” Biometrics, vol. 53, no. 1, pp. 330–339, 1997.
  74. C. L. Faucett and D. C. Thomas, “Simultaneously modelling censored survival data and repeatedly measured covariates: a Gibbs sampling approach,” Statistics in Medicine, vol. 15, no. 15, pp. 1663–1685, 1996.
  75. D. Rizopoulos, “JM: an R package for the joint modelling of longitudinal and time-to-event data,” Journal of Statistical Software, vol. 35, no. 9, pp. 1–33, 2010.
  76. M. Schmid, S. Potapov, A. Pfahlberg, and T. Hothorn, “Estimation and regularization techniques for regression models with multidimensional prediction functions,” Statistics and Computing, vol. 20, no. 2, pp. 139–150, 2010.
  77. E. Waldmann and A. Mayr, “JMboost: Boosting Joint Models for Longitudinal and Time-to-Event Outcomes,” R package version 0.1-0. https://github.com/mayrandy/JMboost.
  78. H. Reulen and T. Kneib, “Boosting multi-state models,” Lifetime Data Analysis, vol. 22, no. 2, pp. 241–262, 2016.
  79. H. Reulen, “gamboostMSM: Estimating multistate models using gamboost(),” 2014. R package version 1.1.87. https://CRAN.R-project.org/package=gamboostMSM.
  80. L. Möst and T. Hothorn, “Conditional transformation models for survivor function estimation,” The International Journal of Biostatistics, vol. 11, no. 1, pp. 23–50, 2015.
  81. T. Hothorn, T. Kneib, and P. Bühlmann, “Conditional transformation models,” Journal of the Royal Statistical Society: Series B, vol. 76, no. 1, pp. 3–27, 2014.
  82. M. J. van der Laan and J. M. Robins, Unified Methods for Censored Longitudinal Data and Causality, Springer, New York, NY, USA, 2003.
  83. R. De Bin, W. Sauerbrei, and A.-L. Boulesteix, “Investigating the prediction ability of survival models based on both clinical and omics data: two case studies,” Statistics in Medicine, vol. 33, no. 30, pp. 5310–5329, 2014.
  84. Z. Guo, W. Lu, and L. Li, “Forward stagewise shrinkage and addition for high dimensional censored regression,” Statistics in Biosciences, vol. 7, no. 2, pp. 225–244, 2015.
  85. M. Sariyar, I. Hoffmann, and H. Binder, “Combining techniques for screening and evaluating interaction terms on high-dimensional time-to-event data,” BMC Bioinformatics, vol. 15, no. 1, article 58, 2014.
  86. S. Hieke, A. Benner, R. F. Schlenk, M. Schumacher, L. Bullinger, and H. Binder, “Identifying prognostic SNPs in clinical cohorts: complementing univariate analyses by resampling and multivariable modeling,” PLoS ONE, vol. 11, no. 5, Article ID e0155226, 2016.
  87. L. Weinhold, S. Wahl, S. Pechlivanis, P. Hoffmann, and M. Schmid, “A statistical model for the analysis of beta values in DNA methylation studies,” BMC Bioinformatics, vol. 17, no. 1, article 480, 2016.
  88. G. Schauberger and G. Tutz, “Detection of differential item functioning in Rasch models by boosting techniques,” British Journal of Mathematical and Statistical Psychology, vol. 69, no. 1, pp. 80–103, 2016.
  89. G. Casalicchio, G. Tutz, and G. Schauberger, “Subject-specific Bradley-Terry-Luce models with implicit variable selection,” Statistical Modelling, vol. 15, no. 6, pp. 526–547, 2015.
  90. G. Napolitano, J. C. Stingl, M. Schmid, and R. Viviani, “Predicting CYP2D6 phenotype from resting brain perfusion images by gradient boosting,” Psychiatry Research: Neuroimaging, vol. 259, pp. 16–24, 2017.
  91. M. Feilke, B. Bischl, V. J. Schmid, and J. Gertheiss, “Boosting in nonlinear regression models with an application to DCE-MRI data,” Methods of Information in Medicine, vol. 55, no. 1, pp. 31–41, 2016.
  92. M. Pybus, P. Luisi, G. M. Dall'Olio et al., “Hierarchical boosting: a machine-learning framework to detect and classify hard selective sweeps in human populations,” Bioinformatics, vol. 31, no. 24, pp. 3946–3952, 2015.
  93. K. Lin, H. Li, C. Schlötterer, and A. Futschik, “Distinguishing positive selection from neutral evolution: boosting the performance of summary statistics,” Genetics, vol. 187, no. 1, pp. 229–244, 2011.
  94. C. Truntzer, E. Mostacci, A. Jeannin, J.-M. Petit, P. Ducoroy, and H. Cardot, “Comparison of classification methods that combine clinical data and high-dimensional mass spectrometry data,” BMC Bioinformatics, vol. 15, no. 1, article 385, 2014.
  95. J. W. Messner, G. J. Mayr, and A. Zeileis, “Nonhomogeneous boosting for predictor selection in ensemble postprocessing,” Monthly Weather Review, vol. 145, no. 1, pp. 137–147, 2017.
  96. A. Mayr, M. Schmid, A. Pfahlberg, W. Uter, and O. Gefeller, “A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models,” Statistical Methods in Medical Research, vol. 26, no. 3, pp. 1443–1460, 2017.
  97. F. Faschingbauer, U. Dammer, E. Raabe et al., “A new sonographic weight estimation formula for small-for-gestational-age fetuses,” Journal of Ultrasound in Medicine, vol. 35, no. 8, pp. 1713–1724, 2016.
  98. J. Schäfer, J. Young, E. Bernasconi et al., “Predicting smoking cessation and its relapse in HIV-infected patients: the Swiss HIV Cohort Study,” HIV Medicine, vol. 16, no. 1, pp. 3–14, 2015.
  99. M. Melcher, T. Scharl, M. Luchner, G. Striedner, and F. Leisch, “Boosted structured additive regression for Escherichia coli fed-batch fermentation modeling,” Biotechnology and Bioengineering, vol. 114, no. 2, pp. 321–334, 2017.
  100. P. Bahrmann, M. Christ, B. Hofner et al., “Prognostic value of different biomarkers for cardiovascular death in unselected older patients in the emergency department,” European Heart Journal: Acute Cardiovascular Care, vol. 5, no. 8, pp. 568–578, 2016.
  101. D. Pattloch, A. Richter, B. Manger et al., “Das erste Biologikum bei rheumatoider Arthritis: Einflussfaktoren auf die Therapieentscheidung [The first biologic in rheumatoid arthritis: factors influencing the treatment decision],” Zeitschrift für Rheumatologie, vol. 76, no. 3, pp. 210–218, 2017.
  102. T. Chen and C. Guestrin, “XGBoost: a scalable tree boosting system,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2016, pp. 785–794, August 2016.
  103. B. Hofner, J. Müller, and T. Hothorn, “Monotonicity-constrained species distribution models,” Ecology, vol. 92, no. 10, pp. 1895–1901, 2011.
  104. B. Hofner, T. Kneib, and T. Hothorn, “A unified framework of constrained regression,” Statistics and Computing, vol. 26, no. 1-2, pp. 1–14, 2016.
  105. B. Hofner and A. Smith, “Boosted negative binomial hurdle models for spatiotemporal abundance of sea birds,” in Proceedings of the 30th International Workshop on Statistical Modelling, pp. 221–226, 2015.
  106. S. Friedrichs, J. Manitz, P. Burger et al., “Pathway-based kernel boosting for the analysis of genome-wide association studies,” Computational and Mathematical Methods in Medicine, vol. 2017, 17 pages, 2017.