Journal of Healthcare Engineering
Volume 2018, Article ID 8902981, 9 pages
https://doi.org/10.1155/2018/8902981
Research Article

Ensemble of Rotation Trees for Imbalanced Medical Datasets

1School of Computer and Information Technology, Xinyang Normal University, Xinyang 464000, China
2Cooperative Innovation Center of Internet Healthcare, Zhengzhou University, Zhengzhou 450000, China
3Department of Neurology, Xinyang Central Hospital, Xinyang 464000, China
4School of Software Technology, Zhengzhou University, Zhengzhou 450001, China

Correspondence should be addressed to Huaping Guo; hpguo_cm@163.com and Wei She; wshe@zzu.edu.cn

Received 22 August 2017; Revised 8 February 2018; Accepted 11 February 2018; Published 10 April 2018

Academic Editor: Maria Lindén

Copyright © 2018 Huaping Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
