Computational Intelligence and Neuroscience
Volume 2017, Article ID 5468208, 12 pages
https://doi.org/10.1155/2017/5468208
Research Article

Object Extraction in Cluttered Environments via a P300-Based IFCE

1School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072, China
2Department of Computer & Electrical Engineering and Computer Science, California State University, Bakersfield, CA 93311, USA
3State Key Laboratory of Robotics, Shenyang Institute of Automation, Shenyang, Liaoning 110016, China
4Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, China
5Department of Math and Computer Science, West Virginia State University, 5000 Fairlawn Ave, Institute, WV 25112, USA
6Intelligent Fusion Technology, Inc., Germantown, MD 20876, USA

Correspondence should be addressed to Wei Li; wli@csub.edu

Received 14 December 2016; Revised 3 April 2017; Accepted 24 May 2017; Published 27 June 2017

Academic Editor: Hasan Ayaz

Copyright © 2017 Xiaoqian Mao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
