Computational and Mathematical Methods in Medicine
Volume 2017, Article ID 5271091, 17 pages
https://doi.org/10.1155/2017/5271091
Research Article

Nonparametric Subgroup Identification by PRIM and CART: A Simulation and Application Study

Armin Ott and Alexander Hapfelmeier

Institute of Medical Statistics and Epidemiology, Technische Universität München, Ismaninger Str. 22, 81675 Munich, Germany

Correspondence should be addressed to Armin Ott; armin.ott@tum.de

Received 25 January 2017; Accepted 2 April 2017; Published 22 May 2017

Academic Editor: Olaf Gefeller

Copyright © 2017 Armin Ott and Alexander Hapfelmeier. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
