International Journal of Biomedical Imaging
Volume 2011, Article ID 241396, 7 pages
Research Article

Biomedical Imaging Modality Classification Using Combined Visual Features and Textual Terms

1College of Information Science and Engineering, Ritsumeikan University, Kusatsu-Shi, 525-8577, Japan
2College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA

Received 22 February 2011; Revised 12 May 2011; Accepted 6 July 2011

Academic Editor: Fei Wang

Copyright © 2011 Xian-Hua Han and Yen-Wei Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We describe an approach to automatic modality classification for the medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign (ImageCLEF). The paper focuses on extracting features from medical images and fusing the different visual features with a textual feature for modality classification. As visual features, we use histogram descriptors of edge, gray, or color intensity and block-based variation as global features, and a SIFT histogram as a local feature. As the textual feature, we use a binary histogram over a predefined vocabulary of words drawn from image captions. We then combine the different features using normalized kernel functions for SVM classification. Furthermore, for modality pairs that are easily confused, such as CT and MR or PET and NM, a local classifier is applied to distinguish samples within each pair and improve performance. The proposed strategy is evaluated on the modality dataset provided by ImageCLEF 2010.
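The kernel-fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes linear kernels, equal fusion weights, and toy random feature matrices standing in for the visual and textual descriptors; the helper name `normalized_linear_kernel` is hypothetical.

```python
# Hypothetical sketch of combining normalized kernels from several feature
# sets for a precomputed-kernel SVM, as outlined in the abstract.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins for per-image feature matrices (n_samples x dim), e.g.
# an edge/intensity histogram, a SIFT histogram, and a caption-word histogram.
features = [rng.normal(size=(40, 16)), rng.normal(size=(40, 8))]
labels = np.repeat([0, 1], 20)  # two dummy modality classes

def normalized_linear_kernel(X):
    """Linear kernel K = X X^T, normalized so that diagonal entries are 1."""
    K = X @ X.T
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

# Fuse the per-feature kernels by simple averaging (equal weights assumed).
K = sum(normalized_linear_kernel(X) for X in features) / len(features)

# Train an SVM directly on the fused kernel matrix.
clf = SVC(kernel="precomputed").fit(K, labels)
pred = clf.predict(K)
print(pred.shape)
```

In practice the fusion weights and kernel types (e.g. RBF or chi-square for histograms) would be chosen per feature; normalizing each kernel first keeps features with different scales from dominating the sum.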