Mathematical Problems in Engineering
Volume 2016, Article ID 3845131, 11 pages
http://dx.doi.org/10.1155/2016/3845131
Research Article

A Novel Margin-Based Measure for Directed Hill Climbing Ensemble Pruning

1School of Computer and Information Technology, Xinyang Normal University, Xinyang, Henan 464000, China
2School of Information Engineering, Zhengzhou University, Zhengzhou, Henan 450000, China

Received 10 February 2016; Revised 14 June 2016; Accepted 30 June 2016

Academic Editor: Simone Bianco

Copyright © 2016 Huaping Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
