Advances in Artificial Intelligence
Volume 2015, Article ID 471483, 10 pages
http://dx.doi.org/10.1155/2015/471483
Research Article

Pop-Out: A New Cognitive Model of Visual Attention That Uses Light Level Analysis to Better Mimic the Free-Viewing Task of Static Images

Makiese Mibulumukini

SVP TV, 7000 Mons, Belgium

Received 9 October 2014; Revised 3 December 2014; Accepted 3 May 2015

Academic Editor: Djamel Bouchaffra

Copyright © 2015 Makiese Mibulumukini. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. S. Frintrop, P. Jensfelt, and H. Christensen, “Attentional robot localization and mapping,” in Proceedings of the ICVS Workshop on Computational Attention & Applications, Bielefeld, Germany, 2007.
  2. F. Zajega, M. Mancas, R. B. Madhkour et al., “KinAct: the attentive social game demonstration,” in Proceedings of the 11th Asian Conference on Computer Vision, Daejeon, Republic of Korea, 2012.
  3. M. Mancas, N. Riche, J. Leroy, and B. Gosselin, “Abnormal motion selection in crowds using bottom-up saliency,” in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 229–232, Brussels, Belgium, September 2011.
  4. A. Torralba, A. Oliva, M. S. Castelhano, and J. M. Henderson, “Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search,” Psychological Review, vol. 113, no. 4, pp. 766–786, 2006.
  5. M. Mancas, “Relative influence of bottom-up and top-down attention,” in Attention in Cognitive Systems, vol. 5395 of Lecture Notes in Computer Science, pp. 212–226, Springer, Berlin, Germany, 2009.
  6. A. M. Treisman and G. Gelade, “A feature-integration theory of attention,” Cognitive Psychology, vol. 12, no. 1, pp. 97–136, 1980.
  7. Z. W. Pylyshyn and R. W. Storm, “Tracking multiple independent targets: evidence for a parallel tracking mechanism,” Spatial Vision, vol. 3, no. 3, pp. 179–197, 1988.
  8. S. A. McMains and D. C. Somers, “Multiple spotlights of attentional selection in human visual cortex,” Neuron, vol. 42, no. 4, pp. 677–686, 2004.
  9. J. Wolfe, “Guided Search 2.0: a revised model of visual search,” Psychonomic Bulletin and Review, vol. 1, no. 2, pp. 202–238, 1994.
  10. M. Mozer, “Early parallel processing in reading: a connectionist approach,” in Attention and Performance XII: The Psychology of Reading, M. Coltheart, Ed., pp. 83–104, 1987.
  11. B. A. Olshausen, C. H. Anderson, and D. C. Van Essen, “A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information,” Journal of Neuroscience, vol. 13, no. 11, pp. 4700–4719, 1993.
  12. J. K. Tsotsos, “Analyzing vision at the complexity level,” Behavioral and Brain Sciences, vol. 13, no. 3, pp. 423–445, 1990.
  13. J. K. Tsotsos, “An inhibitory beam for attentional selection,” in Proceedings of the York Conference on Spatial Vision in Humans and Robots, pp. 313–331, 1993.
  14. J. K. Tsotsos, “Modeling visual attention via selective tuning,” Artificial Intelligence, vol. 78, pp. 507–547, 1995.
  15. G. Kootstra, B. de Boer, and L. R. B. Schomaker, “Predicting eye fixations on complex visual stimuli using local symmetry,” Cognitive Computation, vol. 3, no. 1, pp. 223–240, 2011.
  16. G. Kootstra, A. Nederveen, and B. de Boer, “Paying attention to symmetry,” in Proceedings of the British Machine Vision Conference (BMVC '08), Leeds, UK, 2008.
  17. G. Amy, M. Piolat, and J. Roulin, “L'école gestaltiste: une psychologie allemande de la ‘forme’,” in Psychologie Cognitive, pp. 41–46, Bréal, 2006.
  18. A. Borji and L. Itti, “State-of-the-art in visual attention modeling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 185–207, 2013.
  19. C. Koch and S. Ullman, “Shifts in selective visual attention: towards the underlying neural circuitry,” Human Neurobiology, vol. 4, no. 4, pp. 219–227, 1985.
  20. J. J. Clark and N. J. Ferrier, “Modal control of an attentive vision system,” in Proceedings of the 2nd International Conference on Computer Vision, pp. 514–523, 1988.
  21. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
  22. R. Milanese, Detecting salient regions in an image: from biological evidence to computer implementation [Ph.D. thesis], University of Geneva, Geneva, Switzerland, 1993.
  23. S. Frintrop, VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search, vol. 3899 of Lecture Notes in Computer Science, Springer, Berlin, Germany, 2006.
  24. O. Le Meur, P. Le Callet, D. Barba, and D. Thoreau, “A coherent computational approach to model bottom-up visual attention,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 802–817, 2006.
  25. O. Le Meur, P. Le Callet, and D. Barba, “Predicting visual fixations on video based on low-level visual features,” Vision Research, vol. 47, no. 19, pp. 2483–2498, 2007.
  26. M. Guironnet, N. Guyader, D. Pellerin, and P. Ladret, “Static and dynamic features-based visual attention model: comparison to human judgement,” in Proceedings of the European Signal Processing Conference, Antalya, Turkey, 2005.
  27. Q. Zou, S. Luo, and J. Li, “Selective attention guided perceptual grouping model,” in Advances in Natural Computation, vol. 3610 of Lecture Notes in Computer Science, pp. 867–876, Springer, Berlin, Germany, 2005.
  28. G. Kootstra and D. Kragic, “Fast and bottom-up object detection, segmentation, and evaluation using Gestalt principles,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3423–3428, Shanghai, China, May 2011.
  29. J. Vitay and N. Rougier, “Using neural dynamics to switch attention,” in Proceedings of the International Joint Conference on Neural Networks, Québec, Canada, 2005.
  30. J. Fix, N. Rougier, and F. Alexandre, “A dynamic neural field approach to the covert and overt deployment of spatial attention,” Cognitive Computation, vol. 3, no. 1, pp. 279–293, 2011.
  31. T. Judd, K. Ehinger, F. Durand, and A. Torralba, “Learning to predict where humans look,” in Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 2009.
  32. R. Ihaka, “Human Vision,” https://www.stat.auckland.ac.nz/~ihaka/120/Notes/ch04.pdf.
  33. O. Le Meur, Attention sélective en visualisation d'images fixes et animées affichées sur écran: modèles et évaluation de performances—applications [Ph.D. thesis], University of Nantes, Nantes, France, 2005.
  34. J. A. Kinney, “Comparison of scotopic, mesopic, and photopic spectral sensitivity curves,” Journal of the Optical Society of America, vol. 48, no. 3, pp. 185–190, 1958.
  35. M. Daniel, “La luminance de la CIE,” http://www.profil-couleur.com/ec/110b-luminance.php.
  36. J. Decuypere, J. L. Capron, T. Dutoit, and M. Renglet, “Implementation of a retina model extended to mesopic vision,” in Proceedings of the 27th Session of the CIE, pp. 871–880, Sun City, South Africa, 2011.
  37. Technical basics of light (OSRAM), http://www.imageled.com/Technologies_.html.
  38. J. Decuypere, J. L. Capron, T. Dutoit, and M. Renglet, “Mesopic contrast measured with a computational model of the retina,” in Proceedings of CIE Lighting Quality and Energy Efficiency, pp. 77–84, Hangzhou, China, 2012.
  39. N. D. B. Bruce and J. K. Tsotsos, “Saliency based on information maximization,” in Advances in Neural Information Processing Systems, vol. 18, pp. 155–162, 2006.
  40. D. J. Field, “Relations between the statistics of natural images and the response properties of cortical cells,” Journal of the Optical Society of America A, vol. 4, no. 12, pp. 2379–2394, 1987.
  41. P. Kovesi, “What Are Log-Gabor Filters and Why Are They Good?” http://www.csse.uwa.edu.au/~pk/research/matlabfns/PhaseCongruency/Docs/convexpl.html.
  42. M. Makiese, “De la perception des images à l'algorithme Log-Gabor PCA,” in Workshop sur les Technologies de l'Information et de la Communication (WOTIC '11), Casablanca, Morocco, 2011.
  43. M. Makiese, N. Riche, M. Mancas, B. Gosselin, and T. Dutoit, “Biologically plausible context recognition algorithms,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '13), Melbourne, Australia, 2013.
  44. ITU-R Recommendation BT.601-5 (1982–1995).
  45. ITU-R Recommendation BT.709-5 (1990–2002).
  46. ITU-R Recommendations and Reports, Edition 2, 2012.
  47. A. Torralba, K. P. Murphy, W. T. Freeman, and M. A. Rubin, “Context-based vision system for place and object recognition,” in Proceedings of the International Conference on Computer Vision (ICCV '03), Nice, France, 2003.
  48. Photopic and scotopic lumens: when the photopic lumen fails us, http://www.visual-3d.com/.
  49. T. Judd, F. Durand, and A. Torralba, “A benchmark of computational models of saliency to predict human fixations,” Tech. Rep., MIT Computer Science and Artificial Intelligence Laboratory, 2012.