ISRN Machine Vision
Volume 2013 (2013), Article ID 138057, 10 pages
http://dx.doi.org/10.1155/2013/138057
Research Article

Active Object Recognition with a Space-Variant Retina

Christopher Kanan

Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA

Received 6 October 2013; Accepted 24 October 2013

Academic Editors: H. Erdogan, O. Ghita, D. Hernandez, A. Nikolaidis, and J. P. Siebert

Copyright © 2013 Christopher Kanan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. C. A. Curcio and K. A. Allen, “Topography of ganglion cells in human retina,” Journal of Comparative Neurology, vol. 300, no. 1, pp. 5–25, 1990.
  2. R. F. Dougherty, V. M. Koch, A. A. Brewer, B. Fischer, J. Modersitzki, and B. A. Wandell, “Visual field representations and locations of visual areas V1/2/3 in human visual cortex,” Journal of Vision, vol. 3, no. 10, pp. 586–598, 2003.
  3. S. A. Engel, G. H. Glover, and B. A. Wandell, “Retinotopic organization in human visual cortex and the spatial precision of functional MRI,” Cerebral Cortex, vol. 7, no. 2, pp. 181–192, 1997.
  4. P. M. Daniel and D. Whitteridge, “The representation of the visual field on the cerebral cortex in monkeys,” The Journal of Physiology, vol. 159, pp. 203–221, 1961.
  5. A. J. Bell and T. J. Sejnowski, “The ‘independent components’ of natural scenes are edge filters,” Vision Research, vol. 37, no. 23, pp. 3327–3338, 1997.
  6. M. S. Caywood, B. Willmore, and D. J. Tolhurst, “Independent components of color natural scenes resemble V1 neurons in their spatial and color tuning,” Journal of Neurophysiology, vol. 91, no. 6, pp. 2859–2873, 2004.
  7. T. W. Lee, T. Wachtler, and T. J. Sejnowski, “Color opponency is an efficient representation of spectral properties in natural scenes,” Vision Research, vol. 42, no. 17, pp. 2095–2103, 2002.
  8. T. Wachtler, E. Doi, T. W. Lee, and T. J. Sejnowski, “Cone selectivity derived from the responses of the retinal cone mosaic to natural scenes,” Journal of Vision, vol. 7, no. 8, article 6, 2007.
  9. R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng, “Self-taught learning: transfer learning from unlabeled data,” in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 759–766, June 2007.
  10. C. Kanan and G. Cottrell, “Robust classification of objects, faces, and flowers using natural image statistics,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2472–2479, June 2010.
  11. Q. V. Le, M. A. Ranzato, R. Monga et al., “Building high-level features using large scale unsupervised learning,” in Proceedings of the International Conference on Machine Learning (ICML '12), pp. 81–88, 2012.
  12. H. Shan and G. W. Cottrell, “Looking around the backyard helps to recognize faces and digits,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), June 2008.
  13. M. D. Fairchild, Color Appearance Models, Wiley Interscience, 2nd edition, 2005.
  14. C. Kanan, A. Flores, and G. W. Cottrell, “Color constancy algorithms for object and face recognition,” in Advances in Visual Computing, vol. 6453 of Lecture Notes in Computer Science, no. 1, pp. 199–210, 2010.
  15. C. Kanan and G. W. Cottrell, “Color-to-grayscale: does the method matter in image recognition?” PLoS ONE, vol. 7, no. 1, Article ID e29740, 2012.
  16. M. Bolduc and M. D. Levine, “A real-time foveated sensor with overlapping receptive fields,” Real-Time Imaging, vol. 3, no. 3, pp. 195–212, 1997.
  17. M. Bolduc and M. D. Levine, “A review of biologically motivated space-variant data reduction models for robotic vision,” Computer Vision and Image Understanding, vol. 69, no. 2, pp. 170–184, 1998.
  18. E. L. Schwartz, “Spatial mapping in the primate sensory projection: analytic structure and relevance to perception,” Biological Cybernetics, vol. 25, no. 4, pp. 181–194, 1977.
  19. M. Chessa, S. P. Sabatini, F. Solari, and F. Tatti, “A quantitative comparison of speed and reliability for log-polar mapping techniques,” in Computer Vision Systems, vol. 6962 of Lecture Notes in Computer Science, pp. 41–50, 2011.
  20. R. H. Masland, “The fundamental plan of the retina,” Nature Neuroscience, vol. 4, no. 9, pp. 877–886, 2001.
  21. A. Olmos and F. A. A. Kingdom, “A biologically inspired algorithm for the recovery of shading and reflectance images,” Perception, vol. 33, no. 12, pp. 1463–1473, 2004.
  22. Z. Koldovský, P. Tichavský, and E. Oja, “Efficient variant of algorithm FastICA for independent component analysis attaining the Cramér-Rao lower bound,” IEEE Transactions on Neural Networks, vol. 17, no. 5, pp. 1265–1277, 2006.
  23. J. P. Jones and L. A. Palmer, “An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex,” Journal of Neurophysiology, vol. 58, no. 6, pp. 1233–1258, 1987.
  24. M. Grundland and N. A. Dodgson, “Decolorize: fast, contrast enhancing, color to grayscale conversion,” Pattern Recognition, vol. 40, no. 11, pp. 2891–2896, 2007.
  25. R. Gattass, C. G. Gross, and J. H. Sandell, “Visual topography of V2 in the macaque,” Journal of Comparative Neurology, vol. 201, no. 4, pp. 519–539, 1981.
  26. C. Kanan, “Recognizing sights, smells, and sounds with gnostic fields,” PLoS ONE, vol. 8, no. 1, Article ID e54088, 2013.
  27. J. Konorski, Integrative Activity of the Brain, University of Chicago Press, Chicago, Ill, USA, 1967.
  28. M. Kouh and T. Poggio, “A canonical neural circuit for cortical nonlinear operations,” Neural Computation, vol. 20, no. 6, pp. 1427–1451, 2008.
  29. I. S. Dhillon and D. S. Modha, “Concept decompositions for large sparse text data using clustering,” Machine Learning, vol. 42, no. 1-2, pp. 143–175, 2001.
  30. R. E. Fan, K. W. Chang, C. J. Hsieh, X. R. Wang, and C. J. Lin, “LIBLINEAR: a library for large linear classification,” Journal of Machine Learning Research, vol. 9, pp. 1871–1874, 2008.
  31. K. Crammer and Y. Singer, “On the algorithmic implementation of multiclass kernel-based vector machines,” Journal of Machine Learning Research, vol. 2, pp. 265–292, 2001.
  32. A. M. Martinez and R. Benavente, “The AR face database,” Tech. Rep. 24, CVC, 1998.
  33. G. Griffin, A. D. Holub, and P. Perona, “The Caltech-256 object category dataset,” Tech. Rep. CNS-TR-2007-001, Caltech, Pasadena, Calif, USA, 2007.
  34. Y. Liang, C. Li, W. Gong, and Y. Pan, “Uncorrelated linear discriminant analysis based on weighted pairwise Fisher criterion,” Pattern Recognition, vol. 40, no. 12, pp. 3606–3615, 2007.
  35. N. Pinto, D. D. Cox, and J. J. DiCarlo, “Why is real-world visual object recognition hard?” PLoS Computational Biology, vol. 4, no. 1, article e27, 2008.
  36. P. Gehler and S. Nowozin, “On feature combination for multiclass object classification,” in Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '09), pp. 221–228, IEEE Computer Society, Los Alamitos, Calif, USA, 2009.
  37. A. Bergamo and L. Torresani, “Meta-class features for large-scale object categorization on a budget,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), 2012.
  38. B. T. Vincent, R. J. Baddeley, T. Troscianko, and I. D. Gilchrist, “Is the early visual system optimised to be energy efficient?” Network: Computation in Neural Systems, vol. 16, no. 2-3, pp. 175–190, 2005.
  39. V. Javier Traver and A. Bernardino, “A review of log-polar imaging for visual perception in robotics,” Robotics and Autonomous Systems, vol. 58, no. 4, pp. 378–398, 2010.
  40. M. Varma and D. Ray, “Learning the discriminative power-invariance trade-off,” in Proceedings of the IEEE 11th International Conference on Computer Vision (ICCV '07), October 2007.
  41. R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, Cambridge, Mass, USA, 1998.
  42. H. Larochelle and G. Hinton, “Learning to combine foveal glimpses with a third-order Boltzmann machine,” in Proceedings of the 24th Annual Conference on Neural Information Processing Systems (NIPS '10), December 2010.