Applied Computational Intelligence and Soft Computing
Volume 2016, Article ID 5919717, 14 pages
http://dx.doi.org/10.1155/2016/5919717
Research Article

A Semisupervised Cascade Classification Algorithm

1Department of Mathematics, University of Patras, 26504 Rio, Greece
2Department of Electrical and Computer Engineering, University of Patras, 26504 Rio, Greece

Received 30 November 2015; Accepted 27 January 2016

Academic Editor: Samuel Huang

Copyright © 2016 Stamatis Karlos et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. X. Zhu and A. B. Goldberg, Introduction to Semi-Supervised Learning, Synthesis Lectures on Artificial Intelligence and Machine Learning, Morgan & Claypool, San Rafael, Calif, USA, 2009.
  2. M. K. Dalal and M. A. Zaveri, “Semisupervised learning based opinion summarization and classification for online product reviews,” Applied Computational Intelligence and Soft Computing, vol. 2013, Article ID 910706, 8 pages, 2013.
  3. C. Rosenberg, M. Hebert, and H. Schneiderman, “Semi-supervised self-training of object detection models,” in Proceedings of the 7th IEEE Workshop on Applications of Computer Vision (WACV '05), pp. 29–36, Breckenridge, Colo, USA, January 2005.
  4. C. Liu and P. C. Yuen, “A boosted co-training algorithm for human action recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 9, pp. 1203–1213, 2011.
  5. I. Triguero, S. García, and F. Herrera, “Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study,” Knowledge and Information Systems, vol. 42, no. 2, pp. 245–284, 2015.
  6. F. Schwenker and E. Trentin, “Pattern classification and clustering: a review of partially supervised learning approaches,” Pattern Recognition Letters, vol. 37, no. 1, pp. 4–14, 2014.
  7. J. Gama and P. Brazdil, “Cascade generalization,” Machine Learning, vol. 41, no. 3, pp. 315–343, 2000.
  8. D. Ferraretti, G. Gamberoni, and E. Lamma, “Unsupervised and supervised learning in cascade for petroleum geology,” Expert Systems with Applications, vol. 39, no. 10, pp. 9504–9514, 2012.
  9. W.-C. Cheng and D.-M. Jhan, “A self-constructing cascade classifier with AdaBoost and SVM for pedestrian detection,” Engineering Applications of Artificial Intelligence, vol. 26, no. 3, pp. 1016–1028, 2013.
  10. P. Zhang, T. D. Bui, and C. Y. Suen, “A novel cascade ensemble classifier system with a high recognition performance on handwritten digits,” Pattern Recognition, vol. 40, no. 12, pp. 3415–3429, 2007.
  11. P. Domingos and M. Pazzani, “On the optimality of the simple Bayesian classifier under zero-one loss,” Machine Learning, vol. 29, no. 2-3, pp. 103–130, 1997.
  12. J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, San Francisco, Calif, USA, 1993.
  13. O. Chapelle, B. Schölkopf, and A. Zien, Semi-Supervised Learning, The MIT Press, Cambridge, Mass, USA, 2006.
  14. I. Triguero, S. García, and F. Herrera, “SEG-SSC: a framework based on synthetic examples generation for self-labeled semi-supervised classification,” IEEE Transactions on Cybernetics, vol. 45, no. 4, pp. 622–634, 2015.
  15. M. Li and Z.-H. Zhou, “Improve computer-aided diagnosis with machine learning techniques using undiagnosed samples,” IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, vol. 37, no. 6, pp. 1088–1098, 2007.
  16. M. Li and Z.-H. Zhou, “SETRED: self-training with editing,” in Advances in Knowledge Discovery and Data Mining: 9th Pacific-Asia Conference, PAKDD 2005, Hanoi, Vietnam, May 18–20, 2005. Proceedings, vol. 3518 of Lecture Notes in Computer Science, pp. 611–621, Springer, Berlin, Germany, 2005.
  17. S. Sun, “A survey of multi-view machine learning,” Neural Computing and Applications, vol. 23, no. 7-8, pp. 2031–2038, 2013.
  18. H. Hotelling, “Relations between two sets of variates,” Biometrika, vol. 28, no. 3-4, pp. 321–377, 1936.
  19. C. Xu, D. Tao, and C. Xu, “A survey on multi-view learning,” http://arxiv.org/abs/1304.5634.
  20. A. Blum and T. Mitchell, “Combining labeled and unlabeled data with co-training,” in Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT '98), pp. 92–100, Morgan Kaufmann, Madison, Wis, USA, July 1998.
  21. S. Sun and Q. Zhang, “Multiple-view multiple-learner semi-supervised learning,” Neural Processing Letters, vol. 34, no. 3, pp. 229–240, 2011.
  22. K. Nigam and R. Ghani, “Analyzing the effectiveness and applicability of co-training,” in Proceedings of the 9th International Conference on Information and Knowledge Management (CIKM '00), pp. 86–93, ACM, McLean, Va, USA, November 2000.
  23. L. Didaci, G. Fumera, and F. Roli, “Analysis of co-training algorithm with very small training sets,” in Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR International Workshop, SSPR&SPR 2012, Hiroshima, Japan, November 7–9, 2012. Proceedings, vol. 7626 of Lecture Notes in Computer Science, pp. 719–726, Springer, Berlin, Germany, 2012.
  24. J. Du, C. X. Ling, and Z.-H. Zhou, “When does cotraining work in real data?” IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 5, pp. 788–799, 2011.
  25. S. Sun and F. Jin, “Robust co-training,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 25, no. 7, pp. 1113–1126, 2011.
  26. Y. Zhou and S. Goldman, “Democratic co-learning,” in Proceedings of the 16th IEEE International Conference on Tools with Artificial Intelligence (ICTAI '04), pp. 594–602, Boca Raton, Fla, USA, November 2004.
  27. S. Wang, L. Wu, L. Jiao, and H. Liu, “Improve the performance of co-training by committee with refinement of class probability estimations,” Neurocomputing, vol. 136, pp. 30–40, 2014.
  28. L. I. Kuncheva, “Using diversity measures for generating error-correcting output codes in classifier ensembles,” Pattern Recognition Letters, vol. 26, no. 1, pp. 83–90, 2005.
  29. L. I. Kuncheva and C. J. Whitaker, “Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy,” Machine Learning, vol. 51, no. 2, pp. 181–207, 2003.
  30. Z. Jiang, S. Zhang, and J. Zeng, “A hybrid generative/discriminative method for semi-supervised classification,” Knowledge-Based Systems, vol. 37, pp. 137–145, 2013.
  31. Z.-H. Zhou and M. Li, “Tri-training: exploiting unlabeled data using three classifiers,” IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 11, pp. 1529–1541, 2005.
  32. T. Guo and G. Li, “Improved tri-training with unlabeled data,” Advances in Intelligent and Soft Computing, vol. 115, no. 2, pp. 139–147, 2012.
  33. C. Deng and M. Z. Guo, “A new co-training-style random forest for computer aided diagnosis,” Journal of Intelligent Information Systems, vol. 36, no. 3, pp. 253–281, 2011.
  34. M. F. A. Hady and F. Schwenker, “Co-Training by Committee: a new semi-supervised learning framework,” in Proceedings of the IEEE International Conference on Data Mining Workshops (ICDM '08), pp. 563–572, Pisa, Italy, December 2008.
  35. J. Wang, S.-W. Luo, and X.-H. Zeng, “A random subspace method for co-training,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '08), pp. 195–200, IEEE, Hong Kong, June 2008.
  36. Y. Yaslan and Z. Cataltepe, “Co-training with relevant random subspaces,” Neurocomputing, vol. 73, no. 10-12, pp. 1652–1661, 2010.
  37. M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, “The WEKA data mining software: an update,” ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10–18, 2009.
  38. J. Alcalá-Fdez, A. Fernández, J. Luengo et al., “KEEL data-mining software tool: data set repository, integration of algorithms and experimental analysis framework,” Journal of Multiple-Valued Logic and Soft Computing, vol. 17, no. 2-3, pp. 255–287, 2011.
  39. C. Deng and M. Z. Guo, “Tri-training and data editing based semi-supervised clustering algorithm,” in MICAI 2006: Advances in Artificial Intelligence, vol. 4293 of Lecture Notes in Computer Science, pp. 641–651, Springer, Berlin, Germany, 2006.
  40. S. García, A. Fernández, J. Luengo, and F. Herrera, “Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power,” Information Sciences, vol. 180, no. 10, pp. 2044–2064, 2010.
  41. S. Holm, “A simple sequentially rejective multiple test procedure,” Scandinavian Journal of Statistics, vol. 6, no. 2, pp. 65–70, 1979.
  42. J. Demšar, “Statistical comparisons of classifiers over multiple data sets,” Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.
  43. Y. Hochberg, “A sharper Bonferroni procedure for multiple tests of significance,” Biometrika, vol. 75, no. 4, pp. 800–802, 1988.
  44. Z. Xu, I. King, M. R.-T. Lyu, and R. Jin, “Discriminative semi-supervised feature selection via manifold regularization,” IEEE Transactions on Neural Networks, vol. 21, no. 7, pp. 1033–1047, 2010.