Computational Intelligence and Neuroscience
Volume 2015 (2015), Article ID 905421, 10 pages
Research Article

A Computational Approach towards Visual Object Recognition at Taxonomic Levels of Concepts

1Cognitive Robotics Lab, School of Electrical and Computer Engineering, University of Tehran, Tehran 14395-515, Iran
2School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 19395-5746, Iran

Received 14 February 2015; Revised 2 June 2015; Accepted 4 June 2015

Academic Editor: Thomas DeMarse

Copyright © 2015 Zahra Sadeghi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


It has been argued that concepts can be perceived at three main levels of abstraction. In a recognition system, object categories can accordingly be viewed at three levels of a taxonomic hierarchy, known as the superordinate, basic, and subordinate levels. For instance, "horse" is a member of the subordinate level, belongs to the basic level "animal," and falls under the superordinate level "natural objects." Our purpose in this study is to investigate visual features at each taxonomic level. We first present a recognition tree that is more general, in terms of inclusiveness, with respect to the visual representation of objects. We then focus on visual feature definition, that is, how objects from the same conceptual category can be visually represented at each taxonomic level. For the first (superordinate) level, we define global features based on frequency patterns to capture the visual distinction between artificial and natural objects. For the second (basic) level, our approach relies on shape descriptors built from a moment-based representation. Finally, we show how conceptual knowledge can be utilized in visual feature definition to enhance recognition of subordinate categories.
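
To make the two feature families mentioned above concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a radially averaged Fourier magnitude spectrum as the frequency-pattern global feature and scale-normalized central moments as the moment-based shape descriptor; function names and the binning choice are assumptions for illustration only.

```python
import numpy as np

def frequency_pattern_features(gray_image, n_bins=16):
    """Radially averaged magnitude spectrum as a coarse global frequency profile.
    (Illustrative stand-in for the paper's frequency-pattern features.)"""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    # Assign every frequency to one of n_bins radial bands and average within each band.
    bins = np.minimum((radius / (radius.max() + 1e-9) * n_bins).astype(int), n_bins - 1)
    profile = np.array([spectrum[bins == b].mean() for b in range(n_bins)])
    return profile / (profile.sum() + 1e-9)  # normalize to sum to 1

def moment_shape_features(binary_mask):
    """Scale-normalized central moments up to order 3 as simple shape descriptors.
    (Illustrative stand-in for the paper's moment-based representation.)"""
    ys, xs = np.nonzero(binary_mask)
    m00 = float(len(xs))            # zeroth moment = object area
    cx, cy = xs.mean(), ys.mean()   # centroid
    feats = []
    for p in range(4):
        for q in range(4):
            if 2 <= p + q <= 3:
                mu_pq = np.sum((xs - cx) ** p * (ys - cy) ** q)
                eta_pq = mu_pq / m00 ** (1 + (p + q) / 2)  # scale-invariant normalization
                feats.append(eta_pq)
    return np.array(feats)
```

In this reading, `frequency_pattern_features` would feed a coarse artificial-versus-natural (superordinate) classifier, while `moment_shape_features` would describe segmented object shapes for basic-level categories; the actual descriptors and classifiers used in the study are specified in the body of the paper.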