Computational and Mathematical Methods in Medicine
Volume 2015, Article ID 676129, 11 pages
http://dx.doi.org/10.1155/2015/676129
Research Article

Ensemble Merit Merge Feature Selection for Enhanced Multinomial Classification in Alzheimer’s Dementia

1Department of Computer Science, Lady Doak College, Madurai, Tamil Nadu 625002, India
2The Alzheimer’s Disease Neuroimaging Initiative, San Diego, CA 92093-0949, USA
3Department of Computer Science, TBAK College, Kilakarai, Tamil Nadu 623517, India
4Department of Computer Applications, Karunya University, Coimbatore, Tamil Nadu 641114, India

Received 8 February 2015; Revised 8 May 2015; Accepted 18 May 2015

Academic Editor: Maria N. D. S. Cordeiro

Copyright © 2015 T. R. Sivapriya et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
