Computational Intelligence and Neuroscience
Volume 2017, Article ID 1930702, 12 pages
https://doi.org/10.1155/2017/1930702
Research Article

A Theoretical Analysis of Why Hybrid Ensembles Work

Kuo-Wei Hsu

Department of Computer Science, National Chengchi University, No. 64, Sec. 2, Zhi Nan Rd., Wen Shan District, Taipei City 11605, Taiwan

Correspondence should be addressed to Kuo-Wei Hsu; kwhsu@nccu.edu.tw

Received 8 August 2016; Revised 6 December 2016; Accepted 5 January 2017; Published 31 January 2017

Academic Editor: Jussi Tohka

Copyright © 2017 Kuo-Wei Hsu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
