The Scientific World Journal
Volume 2014, Article ID 730712, 7 pages
http://dx.doi.org/10.1155/2014/730712
Research Article

Towards Application of One-Class Classification Methods to Medical Data

Itziar Irigoien,1 Basilio Sierra,1 and Concepción Arenas2

1Department of Computer Sciences and Artificial Intelligence, UPV/EHU, 20018 Donostia, Spain
2Department of Statistics, UB, 08028 Barcelona, Spain

Received 5 December 2013; Accepted 24 February 2014; Published 20 March 2014

Academic Editors: V. Bhatnagar and Y. Zhang

Copyright © 2014 Itziar Irigoien et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
