Mathematical Problems in Engineering
Volume 2018 (2018), Article ID 2134395, 13 pages
Research Article

A Novel Technique Based on Visual Words Fusion Analysis of Sparse Features for Effective Content-Based Image Retrieval

1Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
2Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
3College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
4College of Computer and Information Systems, Al-Yamamah University, Riyadh 11512, Saudi Arabia
5Department of Computer Engineering, Umm Al-Qura University, Makkah 21421, Saudi Arabia

Correspondence should be addressed to Zahid Mehmood

Received 16 July 2017; Accepted 4 February 2018; Published 6 March 2018

Academic Editor: Marco Perez-Cisneros

Copyright © 2018 Muhammad Yousuf et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Content-based image retrieval (CBIR) is a mechanism used to retrieve similar images from an image collection. In this paper, an effective novel technique is introduced to improve the performance of CBIR through the visual words fusion of the scale-invariant feature transform (SIFT) and local intensity order pattern (LIOP) descriptors. SIFT is robust to scale changes and rotation; however, it performs poorly under low contrast and illumination changes within an image, conditions under which LIOP performs well. Conversely, SIFT remains effective even under large rotation and scale changes, where LIOP does not, and SIFT features are more tolerant of slight distortion than LIOP features. The proposed technique, based on the visual words fusion of the SIFT and LIOP descriptors, overcomes these complementary weaknesses and significantly improves the performance of CBIR. The experimental results of the proposed technique are compared with those of a second proposed technique based on the features fusion of the SIFT and LIOP descriptors, as well as with state-of-the-art CBIR techniques. Qualitative and quantitative analyses carried out on three image collections, namely, Corel-A, Corel-B, and Caltech-256, demonstrate the robustness of the proposed visual words fusion technique compared with both the features fusion and the state-of-the-art CBIR techniques.
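The visual words fusion described in the abstract can be sketched in a minimal form: each image's SIFT and LIOP descriptors are quantized against separate codebooks, and the two resulting bag-of-visual-words histograms are concatenated into a single retrieval feature. The sketch below is an illustrative assumption, not the authors' implementation; it uses random arrays as stand-ins for real descriptors and k-means codebooks, and the function names (`bovw_histogram`, `fused_representation`) are hypothetical.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return an L1-normalised histogram over the codebook."""
    # pairwise squared distances, shape (n_descriptors, n_words)
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d.argmin(axis=1)                  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def fused_representation(sift_desc, liop_desc, sift_codebook, liop_codebook):
    """Visual words fusion: concatenate the SIFT and LIOP BoVW histograms."""
    return np.concatenate([
        bovw_histogram(sift_desc, sift_codebook),
        bovw_histogram(liop_desc, liop_codebook),
    ])

# Toy example with random stand-ins for one image's descriptors and the
# learned codebooks (in practice, codebooks come from k-means clustering).
rng = np.random.default_rng(0)
sift_desc = rng.random((50, 128))   # SIFT descriptors are 128-D
liop_desc = rng.random((50, 144))   # LIOP descriptors are typically 144-D
sift_cb = rng.random((20, 128))     # hypothetical 20-word SIFT codebook
liop_cb = rng.random((20, 144))     # hypothetical 20-word LIOP codebook
fv = fused_representation(sift_desc, liop_desc, sift_cb, liop_cb)
print(fv.shape)  # (40,)
```

Retrieval then ranks database images by the distance between their fused histograms and that of the query image; because each histogram is normalised separately, both descriptor types contribute equally regardless of how many keypoints each detector fires.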