Journal of Sensors
Volume 2017 (2017), Article ID 3515418, 8 pages
Research Article

Multisource Data Fusion Framework for Land Use/Land Cover Classification Using Machine Vision

1Department of Computer Science & IT, The Islamia University of Bahawalpur, Punjab 63100, Pakistan
2Key Laboratory of Photo-Electronic Imaging Technology and System, School of Computer Science, Beijing Institute of Technology (BIT), Beijing 100081, China
3Department of Computer Science, NFC IET, Multan, Punjab 60000, Pakistan
4Department of Computer Sciences, Quaid-i-Azam University, Islamabad 45320, Pakistan
5Department of Computer Science, Virtual University of Pakistan, Lahore, Punjab 54000, Pakistan

Correspondence should be addressed to Salman Qadri

Received 21 April 2017; Revised 25 July 2017; Accepted 8 August 2017; Published 11 September 2017

Academic Editor: Julio Rodriguez-Quiñonez

Copyright © 2017 Salman Qadri et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Data fusion is a powerful tool for merging information from multiple sources to produce a better output than any individual source alone. This study describes the fusion of remotely sensed data for five land use/land cover types: bare land, fertile cultivated land, desert rangeland, green pasture, and Sutlej basin river land. A novel framework for multispectral and texture-feature-based data fusion is designed to identify these land use/land cover types correctly. Multispectral data were acquired with a multispectral radiometer, while the image dataset was captured with a digital camera. Each image yielded 229 texture features, from which 30 optimized texture features per image were obtained by combining three feature selection techniques: Fisher, Probability of Error plus Average Correlation, and Mutual Information. This 30-texture-feature dataset was merged with the five-spectral-feature dataset to build the fused dataset. The texture, multispectral, and fused datasets were then compared using machine vision classifiers, and the fused dataset outperformed both individual datasets. The overall accuracies achieved with a multilayer perceptron were 96.67%, 97.60%, and 99.60% for the texture, multispectral, and fused data, respectively.
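The feature-level fusion described above (concatenating 30 optimized texture features with 5 spectral features and classifying with a multilayer perceptron) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic random arrays, sample count, and scikit-learn model settings are all assumptions standing in for the paper's actual datasets and classifier configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's datasets (shapes only match the text):
# 30 optimized texture features and 5 spectral features per sample,
# with labels for the five land use/land cover classes.
n_samples = 500
texture = rng.normal(size=(n_samples, 30))
spectral = rng.normal(size=(n_samples, 5))
labels = rng.integers(0, 5, size=n_samples)

# Feature-level fusion: concatenate the two feature sets column-wise.
fused = np.hstack([texture, spectral])  # shape (n_samples, 35)

# Standardize the fused features and train a multilayer perceptron.
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
accuracy = clf.score(scaler.transform(X_test), y_test)
```

With real texture and spectral measurements in place of the random arrays, `accuracy` would correspond to the overall-accuracy figures the study reports for the fused dataset.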