Computational Intelligence and Neuroscience
Volume 2017, Article ID 3405463, 11 pages
https://doi.org/10.1155/2017/3405463
Research Article

Improving Classification Performance through an Advanced Ensemble Based Heterogeneous Extreme Learning Machines

1School of Computer and Communication Engineering, University of Science and Technology Beijing (USTB), Beijing 100083, China
2Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China
3Department of Electrical Engineering, COMSATS Institute of Information Technology Abbottabad, Abbottabad, Pakistan

Correspondence should be addressed to Dezheng Zhang; zdzchina@126.com and Xiong Luo; xluo@ustb.edu.cn

Received 26 December 2016; Revised 25 March 2017; Accepted 9 April 2017; Published 4 May 2017

Academic Editor: Pietro Aricò

Copyright © 2017 Adnan O. M. Abuassba et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
