Journal of Computational Environmental Sciences
Volume 2015, Article ID 903465, 9 pages
http://dx.doi.org/10.1155/2015/903465
Research Article

Automatic Extraction of Water Bodies from Landsat Imagery Using Perceptron Model

Kshitij Mishra and P. Rama Chandra Prasad

Lab for Spatial Informatics, International Institute of Information Technology, Gachibowli, Hyderabad, Telangana 500032, India

Received 9 September 2014; Revised 29 December 2014; Accepted 31 December 2014

Academic Editor: Timothy O. Randhir

Copyright © 2015 Kshitij Mishra and P. Rama Chandra Prasad. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Extraction of water bodies from satellite imagery has been widely explored in the recent past. Several approaches have been developed to delineate water bodies from satellite imagery varying in spatial, spectral, and temporal characteristics. The current study puts forward an automatic approach to extract water bodies from Landsat satellite imagery using a perceptron model. The perceptron involves classification based on a linear predictor function that merges a few characteristic properties of the object, commonly known as feature vectors. The feature vectors, combined with the weights, sum up to provide the input to the output function, which is a binary hard limit function. The feature vector in this study is a set of characteristic properties shown by a pixel of a water body. Low reflectance of water in the SWIR band, comparison of reflectance in different bands, and a modified normalized difference water index are used as descriptors. The normalized difference water index is modified to enhance its reach over shallow regions. For this study a threshold value of 2 proved to be the best among the three possible threshold values. The proposed method accurately and quickly discriminated water from other land cover features.

1. Introduction

Mapping of natural resources like forests and water bodies using satellite imagery has gained much importance in the recent past. Both forest and water resources are subject to intense exploitation, and monitoring them at regular intervals is imperative for their sustainable management. Water bodies, which play a key role in the global carbon cycle and climate variations, are mapped in the spatiotemporal domain to analyze and assess the extent and rate of their degradation and disappearance. Geospatial tools are proving to be advantageous for such impact assessment and for the implementation of conservation measures [1–4].

Researchers across the globe have used different satellite data varying in spatial, spectral, and temporal characteristics to generate thematic maps of land use land cover or maps with special emphasis on water bodies [5–11]. At the same time, various techniques have been adopted to extract these features from satellite imagery and each method has its own merits and demerits.

Visual interpretation of satellite data provides the best delineation of water bodies of varied sizes but is time consuming, especially when working with high resolution data [12, 13]. The simple and common approach of unsupervised classification, which uses an iterative self-organizing data analysis technique [14], provides results with very low accuracy when there is spectral overlap between water bodies and other classes. In contrast, supervised classification presents more accurate and reliable outputs than the unsupervised method [15] but may vary when used for high resolution data [16, 17]. Moreover, the supervised technique requires sufficiently large spectral training data sets and is not a fully automated method [18–20]. Further, it does not take into account the spatial features of the objects [21].

The method of fractal characterization classifies features based on their texture, either smooth or rough. Since water bodies exhibit a smooth texture compared with other landscape features like vegetation and buildings in satellite imagery, they can be easily extracted using the fractal method. However, the method does not take into account the spectral features of the objects; hence different classes with varied spectral characteristics but with similar textures are classified as one class. Further, the results may also differ significantly with image resolution.

The normalized difference water index (NDWI) method enhances the appearance of water by maximizing its typical reflectance in the green wavelength and minimizing the low reflectance of water in the near infrared (NIR) [22, 23]. Studies show that this method yields better results for the deeper parts of a water body and poorer results for the shallower parts.

The threshold method is one of the most widely used algorithms for the extraction of water bodies from satellite imagery. The method is based on the fact that the reflected radiance of water in the short-wave-infrared (SWIR) band is lower than that of other objects like vegetation, buildings, bare soil, and roads. Each pixel that passes the threshold test is classified as water body, along with some other objects that are not truly water, producing false positive results. This method also has constraints related to size and shadows.

Whatever the approach, the user is generally interested in a method that is fast, accurate, and automated. Towards this objective, researchers have adopted hybrid approaches using different algorithms like decision tree classifiers [24, 25], neural networks [26], and other methods [27–34]. Similarly, the current study puts forward an automatic approach to extract water bodies from satellite imagery using a perceptron [35] model for classification.

The perceptron model has been extensively utilized for object recognition in the domain of image processing [36–44]. Over the past years, both the availability of remotely sensed satellite data and the use of single-layer and multilayer perceptron models have increased considerably. Several researchers have shown that perceptron models produce better classification results than the conventional multispectral classification method [36] and also require less training data [45–47].

Compared to the single-layer model, the multilayer perceptron (MLP) proposed by Rumelhart et al. [48] is the most commonly implemented artificial neural network (ANN) in geospatial applications, across domains such as urban planning, land use and land cover mapping, forestry, and change detection. For example, Patra et al. [49] used an MLP model with context-sensitive semisupervised techniques in their change detection study to differentiate changed and unchanged pixels without prior knowledge of the ground inventory. Mishra et al. [50] for Muzaffarpur city, India, and B. Ahmed and R. Ahmed [51] for Dhaka city predicted future LULC changes using an MLP modeler. Similarly, Gopal and Woodcock [52], Erbek et al. [53], and Petra et al. (2014) also employed MLP for pattern recognition and change detection studies. MLP was likewise used by Fiset et al. [54] for image matching; by Özkan and Erbek [55], Oliveira et al. [56], and Fierens and Rosin [57] for classification and feature extraction from satellite images; and by Kotsiantis [58], Freund and Schapire [59], Camargo and Yoneyama [60], and Pal and Mitra [61] for hyperspectral data classification. With respect to forestry, Vehtari and Lampinen [62] used MLP to identify tree trunks in digital images, and Boschetti et al. [63] used it to fuse hyperspectral data with panchromatic data to extract vegetation understorey information. Lippitt et al. [64] applied MLP to map selective logging sites in deciduous and mixed-deciduous forest in Massachusetts, while Li et al. [65] and Jensen et al. [66] used it for forest age estimation; Chaudhuri and Parui [67] implemented MLP in a defence application to identify target objects. Goswami et al. [68] have shown the strength of MLP in identifying objects in an image automatically. Their research shows that the perceptron can be effectively used to extract water bodies from satellite data with good accuracy.

This paper presents an automatic approach to extract water bodies from Landsat satellite imagery using a single-layer perceptron model based on a linear predictor function that merges a few characteristic properties of the object, commonly known as feature vectors. The feature vectors, combined with the weights, sum up to provide the input to the output function, which is a binary hard limit function. The feature vector in this study is a set of characteristic properties shown by a pixel of a water body. Low reflectance of water in the SWIR band, comparison of reflectance in different bands, and a modified normalized difference water index are used as descriptors. The normalized difference water index is modified to enhance its reach over shallow regions.

2. Materials and Methods

Landsat ETM data of 29 October 2001 covering Hyderabad city, India, were used as test material to extract water bodies (Figure 1). Out of the seven bands of the Landsat data, five bands were used in the current approach, namely, blue (0.45–0.52 µm), green (0.52–0.60 µm), red (0.63–0.69 µm), near infrared (NIR, 0.76–0.90 µm), and short wave infrared (SWIR, 1.55–1.75 µm). The method put to use here is a classification based on the perceptron, which is an algorithm for supervised classification. The perceptron involves classification based on a linear predictor function that combines some characteristic properties of the object, commonly known as descriptors or feature vectors. A feature vector is an attribute that represents the object; the more feature vectors there are, the easier the process of classification. These feature vectors are combined with weights for the construction of the predictor function. The perceptron is a binary classifier that maps its input x to an output value f(x) using a hard limit function (Figure 2).

Figure 1: Landsat ETM 2001 data showing major water bodies of Hyderabad city, India.
Figure 2: Perceptron model.

To execute classification using the perceptron, the first step is to define the feature vectors by finding the descriptors that are characteristic of a water body, which are either pixel based or object based. In the current study only pixel based information is used. Once the feature vectors are defined, the next step is to initialize the weights for each vector. These weights depend on the spatial and temporal properties of the objects. For example, the reflectance value in a band shown by a water body in summer is different from what it shows during the monsoon season (flood times, more precisely) because of increased water levels, sediment load, and other impurities. Hence there is no universal value for the weight of each descriptor, and the optimal values may change depending on time and location. The values of the weights usually vary from 0 to 1, where a weight of 0 indicates that the descriptor no longer shows any characteristic property of the object and a weight of 1 means that this property must be shown by the object.

After finding the feature vectors and initializing the weights, a weighted sum was calculated. This weighted sum served as the input to the output function. The output function is a hard limit binary function, and while performing the classification it uses a threshold value. The threshold value depends on the feature vectors and the weights. In this case we chose the maximum possible value of the weighted sum displayed by a water body pixel, computed using the extreme values of the descriptors. The weighted sum acquired for each pixel is compared with the threshold value, and in this way the classification is done.
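As an illustration of this step, the following minimal sketch (in Python with NumPy; the function and variable names are hypothetical, since the paper does not specify an implementation) forms the weighted sum of per-pixel binary descriptors and applies the hard limit output function at a chosen threshold.

    import numpy as np

    def perceptron_classify(descriptors, weights, threshold):
        # descriptors: list of 2-D arrays with values in {0, 1}, one per feature vector
        # weights: list of weights in [0, 1], one per descriptor
        # threshold: perceptron threshold (1, 2, or 3 in this study)
        weighted_sum = np.zeros_like(descriptors[0], dtype=float)
        for d, w in zip(descriptors, weights):
            weighted_sum += w * d
        # Hard limit output function: 1 = water pixel, 0 = nonwater pixel
        return (weighted_sum >= threshold).astype(np.uint8)

With unit weights and a threshold of 2, a pixel is labelled water only if it satisfies at least two of the three descriptors, which is the configuration the study finds most reliable.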

2.1. Feature Vectors

Three feature vectors are used.
2.1.1. Low Reflected Radiance of Water in SWIR Band

The reflected radiance of an object that is captured in a remote sensing image depends upon the extent of electromagnetic radiation absorbed by the object; that is, the more it absorbs, the less it reflects. Water absorbs strongly in the infrared region. Hence the reflected radiance of water in the infrared and short-wave-infrared bands is lower than that of other objects such as vegetation, buildings, bare soil, and roads. The reflectance values for some pixels of water bodies and land features are shown in Figure 3.

Figure 3: Reflectance values of water body and other land features.

Further, Figure 4 makes clear that water bodies show a distinctly low reflected radiance in all infrared bands, including band 5, the short-wave-infrared band.

Figure 4: Reflectance values of different objects in all six Landsat ETM reflected bands.
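A minimal sketch of how this first descriptor could be computed, assuming band 5 is available as a NumPy array of reflected radiance values and using a SWIR threshold in the 40–65 range examined in Section 3 (the function name and default value are illustrative, not taken from the paper):

    import numpy as np

    def swir_descriptor(band5, swir_threshold=60):
        # Water absorbs strongly in the SWIR region, so its reflected radiance
        # in band 5 falls below that of vegetation, soil, and built-up areas.
        return (band5 < swir_threshold).astype(np.uint8)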
2.1.2. Reflectance in B5 < B2 and Reflectance in B4 > B3

(a) The reflectance of water in band 5 is less than the reflectance of water in band 2. (b) The reflectance of water in band 4 is greater than the reflectance in band 3.

The above two properties, derived by analyzing the pixel values, are true for water bodies but not for other objects such as soil, sand, roads, vegetation, or buildings. Hence, by applying these two conditions, water bodies can be easily extracted.
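A sketch of this second descriptor under the same assumptions (co-registered NumPy arrays, hypothetical names):

    import numpy as np

    def band_comparison_descriptor(band2, band3, band4, band5):
        # Conditions (a) and (b) above: band 5 < band 2 and band 4 > band 3.
        return ((band5 < band2) & (band4 > band3)).astype(np.uint8)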

2.1.3. Modified Normalized Difference Water Index (MNDWI)

The equation for NDWI is NDWI = (Green − NIR)/(Green + NIR). The selection of these wavelengths enhances the appearance of water; that is, it
(1) maximizes the typical reflectance of water features by using green wavelengths;
(2) minimizes the low reflectance of NIR by water features;
(3) maximizes the high reflectance of NIR by terrestrial vegetation and soil features.

However, NDWI does not extract the shallow parts of a water body and is unable to separate built-up structures from water features [22]. Therefore, to increase the level of detail in the result, we used the summation of NDWI and a modified NDWI (MNDWI) and then binarized the result. The index used is

Index = NDWI + MNDWI,

where MNDWI = (Blue − NIR)/(Blue + NIR).
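A sketch of the combined index, assuming the blue, green, and NIR bands are floating-point NumPy arrays (the small epsilon guarding against division by zero is an addition for robustness, not part of the paper):

    import numpy as np

    def combined_water_index(blue, green, nir, eps=1e-6):
        ndwi = (green - nir) / (green + nir + eps)    # NDWI [23]
        mndwi = (blue - nir) / (blue + nir + eps)     # modified index used here
        return ndwi + mndwi                           # summation described above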

2.2. Binarization of Index

After calculation of this index it is binarized with respect to zero: all values above zero are changed to 1 and all values below zero to 0. This binarization helps in two ways. First, all the water pixels become 1 and all the nonwater pixels become 0, which helps in selecting the threshold for the output function. Second, it roots out the diminishing effect of the negative values while calculating the weighted sum.
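Under the same assumptions, the binarization step reduces to a comparison against zero:

    import numpy as np

    def binarize_index(index):
        # Values above zero become 1, values at or below zero become 0, so negative
        # values cannot diminish the weighted sum.
        return (index > 0).astype(np.uint8)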

2.3. Threshold Value

The threshold value determines whether a pixel belongs to a water body or not. The maximum value for each of the three feature vectors can be 1; hence the maximum value of the weighted sum cannot be more than 3. If the pixel follows only one of the three descriptors, the value of the weighted sum would be 1; if it follows two descriptors, the weighted sum would be 2; if it follows all three descriptors, the value would be 3; and if it does not follow any of the descriptors, its value would be 0.

This approach involves a comparative study taking into account all three possible thresholds. The optimal threshold of the three is 2, as it ensures that the pixel shows at least two of the three characteristic properties. After the output function finishes its job, the water bodies get extracted; but along with them a number of small regions or single pixels are also extracted, usually referred to as noise. The main reason for this noise is the presence of small water bodies formed as a result of heavy rains or other causes. The only factor that can separate these unwanted results from the permanent water bodies is size, which is handled by the following two methods (a sketch of both steps follows the list).
(1) Single pixel removal. Tracing the neighborhood of each pixel, if the pixel is not found lying in the decided range, it is removed from the "water body" class.
(2) Size parameter. After implementing single pixel removal, noise might still exist, so the task is to find the maximum size of a cluster of pixels that is not part of a water body but still exists in the image. There is a risk of losing some part of the water body region if an appropriate value is not chosen; there is also a possibility that the size of a few water bodies is itself smaller than the size-threshold limit. The size parameter provides flexibility in the process of noise removal: its value can be set according to the desired size of the water body. If the focus is only on extracting big lakes, the size-threshold can be increased accordingly; if the focus is on extracting all possible water sources, the size-threshold parameter can even be removed completely.
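A sketch of the two noise-removal steps, using connected-component labelling from SciPy; the 8-connected neighbourhood, the function names, and the default size are assumptions made for illustration rather than details given in the paper:

    import numpy as np
    from scipy import ndimage

    def remove_noise(water_mask, min_size=30):
        # Label connected clusters of water pixels (8-connected neighbourhood assumed).
        structure = np.ones((3, 3), dtype=int)
        labels, n_clusters = ndimage.label(water_mask, structure=structure)
        # Cluster sizes in pixels; clusters smaller than min_size are treated as noise.
        sizes = ndimage.sum(water_mask, labels, index=range(1, n_clusters + 1))
        keep = np.zeros(n_clusters + 1, dtype=bool)
        keep[1:] = sizes >= min_size   # min_size=2 would drop isolated single pixels only
        return keep[labels].astype(np.uint8)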

3. Results and Discussion

For the deeper parts of a water body the reflectance values in band 5 are less than 35, but for shallow parts the reflectance values are higher, and for marshy regions they go beyond 55. A comparative study for extraction of water bodies using different values of the SWIR-threshold is carried out, starting from a value of 40 and going up to 65. Similarly, the size parameter is also flexible and depends upon the minimum size of the water body that we are interested in. The results are therefore tabulated for different thresholds and varying size parameters (Table 1).

Table 1: Accuracy of extracted water bodies using different threshold and varying size parameter.
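Tying the pieces together, a hedged sketch of how such a comparative sweep over the SWIR-threshold, perceptron-threshold, and size parameter might be run, reusing the hypothetical helpers sketched in Section 2 (the band array names and the specific step values are illustrative):

    # Assumes band1..band5 (Landsat ETM blue, green, red, NIR, SWIR) are NumPy arrays
    # and that the helper functions sketched in Section 2 are in scope.
    d2 = band_comparison_descriptor(band2, band3, band4, band5)
    d3 = binarize_index(combined_water_index(blue=band1, green=band2, nir=band4))
    for swir_threshold in range(40, 66, 5):            # SWIR-threshold values from 40 to 65
        d1 = swir_descriptor(band5, swir_threshold)
        for perceptron_threshold in (1, 2, 3):         # the three possible perceptron-thresholds
            mask = perceptron_classify([d1, d2, d3], [1.0, 1.0, 1.0], perceptron_threshold)
            for size_param in (2, 10, 30):             # size parameter varied (illustrative values)
                water = remove_noise(mask, min_size=size_param)
                # Area classified as water, in pixels, for tabulation against Table 1.
                print(swir_threshold, perceptron_threshold, size_param, int(water.sum()))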

The study is carried out taking into account all three possible perceptron-threshold values. A perceptron-threshold value of 1 indicates that a pixel classified as water body follows at least one of the three descriptors. This threshold brings a lot of noise into the result, and the results suffer from the same false positive problem as the plain SWIR-threshold method. The number of false positives increases with the increase in the threshold value of the SWIR band, as more and more nonwater pixels get included in the result.

A perceptron-threshold value of 2 indicates that a pixel classified as water body follows at least two of the three descriptors. This is the optimal value for the perceptron-threshold, as it neither allows nonwater pixels to be included nor cuts down the true positives. A perceptron-threshold of 3 indicates that the pixel follows all three descriptors; this threshold removes some true positives, but the results acquired with it are very reliable and independent of increases in the SWIR-band threshold.

From Table 1 (showing the accuracy percentage of the results) it is clear that, for a perceptron-threshold value of 1, the area classified as water body increases with the SWIR-threshold value. This happens because the shallow parts of a water body reflect more than the deeper parts, so their reflectance values are higher; if we decrease the SWIR-threshold value these shallow parts are removed. For a perceptron-threshold value of 2, the area increases with the SWIR-threshold up to a certain value, after which further increases in the SWIR-threshold have no effect. Also, the magnitude of the gradient between two consecutive SWIR-threshold values is smaller than for a perceptron-threshold of 1. For a perceptron-threshold value of 3, there is very little increase in moving from SWIR values of 40 to 50, but after that the value stabilizes and increasing the SWIR value has no effect. This happens because a perceptron-threshold of 3 ensures that the pixel follows all three descriptors: noise included as a result of increasing the SWIR value is removed by the other two descriptors. Figure 5 shows the result of taking a SWIR-threshold of 60 and a size parameter of 30. It is also clear that increasing the size parameter reduces the area classified as water body.

Figure 5: Extracted water bodies using various threshold values and size parameter.

The water bodies extracted by this method were compared with the thematic map prepared for the same area by Prasad et al. [3] using the same satellite data, and the results are highly correlated with their map. The results were also validated against ground survey data collected using GPS.

4. Conclusions

In the present study we put forward an automatic method to extract water bodies from remote sensing data, complementing previous methods that used different algorithms and satellite data. The proposed method accurately and quickly discriminated water from nonwater features using ETM data. The characteristic properties of water, namely low reflectance values in the SWIR band, a higher reflectance in band 4 than in band 3, and a higher reflectance in band 2 than in band 5, were used as the basis of the study. These properties, supplemented with an NDWI, are used as the feature vectors to improve the classification. Besides, binarization of the modified NDWI is done so as to make the selection of a threshold easy and to reduce the effect of negative values while calculating the weighted sum. For this study a threshold value of 2 proved to be the best among the three possible threshold values: a threshold value of 1 includes noise, and a threshold value of 3 cuts down the true positives. If only big and permanent water bodies are to be extracted, the size parameter included in the method can come in handy. The proposed perceptron model works competently for ETM data, and researchers in the domain of global water research can implement this model to extract water bodies with acceptable accuracy and precision.

However, the model should be tested for its sensitivity and performance with respect to the resolution and season of the satellite data, as well as the land use and land cover patterns around water bodies, which were not examined in the present study. The study used ETM data from one time period and was able to delineate water bodies well; it still needs to be checked using temporally diverse data from different seasons for the same sensor (ETM). There is also a need to assess the efficiency of the proposed algorithm with satellite data from varied sensors. Further, for ETM data the study arrived at a threshold value of 2 as best for extracting water bodies, and this needs to be standardized across various spectral and spatial resolutions of satellite data. The observation that a threshold value of 3 suppresses true positives has to be scrutinized using multiple satellite datasets to define an average threshold for water body extraction. The proposed method proved accurate in isolating water bodies of larger size; the model needs to be improved towards a common threshold that can extract even smaller water bodies, in view of the current global water crisis.

The resolution of the data certainly matters in precisely delineating water boundaries. The proposed model has to be tested using high spatial resolution data, such as IKONOS and QuickBird, which are preferred by researchers for better accuracy of land feature mapping. Classification of high resolution satellite data is challenging, and if the suggested model works out well, then water bodies can be extracted from such data with more accuracy and in a short time.

Along with the satellite data, the process of water body mining will also depend on the land use land cover of the landscape under study. The current study used the landscape structure of the urban scenario of the rapidly growing Hyderabad city. It is necessary to examine the capability of the proposed method for different landscape structures, such as water bodies in a densely forested landscape, water bodies in a rural scenario intermingled among agricultural lands with different crops, or water bodies adjacent to large rivers and streams. In a sense, landscape matters because the land use classes around the water bodies, which form the boundary classes from which water must be discriminated, vary in different landscapes, and discriminating those classes may be complicated by spectral overlap between features.

In view of the above, future research on the perceptron model has great scope to design and develop a unique method that can work on varied satellite data and extract water bodies, including those of smaller size, with more accuracy.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors duly acknowledge the Global Land Facility Programme for providing the satellite data used in the study and thank the anonymous reviewer for the constructive comments.

References

  1. N. M. Mattikalli and K. S. Richards, "Estimation of surface water quality changes in response to land use change: application of the export coefficient model using remote sensing and geographical information system," Journal of Environmental Management, vol. 48, no. 3, pp. 263–282, 1996.
  2. P. Brezonik, K. D. Menken, and M. Bauer, "Landsat-based remote sensing of lake water quality characteristics, including chlorophyll and colored dissolved organic matter (CDOM)," Lake and Reservoir Management, vol. 21, no. 4, pp. 373–382, 2005.
  3. P. R. C. Prasad, K. S. Rajan, V. Bhole, and C. B. S. Dutt, "Is rapid urbanization leading to loss of water bodies?" Journal of Spatial Science, vol. 2, no. 2, pp. 43–52, 2009.
  4. P. R. C. Prasad, C. Pattanaik, and S. N. Prasad, "Assessment and monitoring of wetland dynamics in Andhra Pradesh using remote sensing and GIS," in Biodiversity and Sustainable Livelihood, L. Patro, Ed., pp. 73–86, Discovery Publishing House, New Delhi, India, 2010.
  5. A. G. Dekker, T. J. Malthus, M. M. Wijnen, and E. Seyhan, "Remote sensing as a tool for assessing water quality in Loosdrecht Lakes," Journal of Hydrobiologia, vol. 233, no. 1–3, pp. 137–159, 1992.
  6. F. M. Henderson, "Environmental factors and the detection of open surface water areas with X-band radar imagery," International Journal of Remote Sensing, vol. 16, no. 13, pp. 2423–2437, 1995.
  7. Z. Zhang, V. Prinet, and M. SongDe, "Water body extraction from multi-source satellite images," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 6, pp. 3970–3972, IEEE, July 2003.
  8. Z. He, X. Zhang, Z. Huang, and H. Jiang, "A water extraction technique based on high-spatial remote sensing images," Journal of Zhejiang University (Science Edition), vol. 31, no. 6, pp. 701–707, 2004.
  9. C. Xie, J. Zhang, G. Huang, Z. Zhao, and J. Wang, "Water body information extraction from high resolution airborne synthetic aperture radar image with technique of imaging in different directions and object-oriented," in Proceedings of the ISPRS Congress Silk Road for Information from Imagery, pp. 165–168, Beijing, China, 2008.
  10. C. Yang, H. Rong, and S. Wang, "Extracting waterbody from beijing-1 micro-satellite images based on knowledge discovery," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. IV850–IV853, Boston, Mass, USA, July 2008.
  11. G. Petrie, B. Moon, and K. Steinmaus, "Semi-automated stream extraction at PNN," in Proceedings of the Overwatch Geospatial Users Conference, 2008.
  12. Q. Qin, Y. Yuan, and R. Lu, "A new approach to object recognition on high resolution satellite image," in Proceedings of the International Archives of Photogrammetry and Remote Sensing Part B3, vol. 33, pp. 753–760, Amsterdam, The Netherlands, 2000.
  13. P. Kupidura, "Distinction of lakes and rivers on satellite images using mathematical morphology," in Biuletyn WAT LXII.3, pp. 57–69, 2013, http://works.bepress.com/przemyslaw_kupidura/10.
  14. J. T. Tou and R. C. Gonzalez, Pattern Recognition Principles, vol. 1 of Applied Mathematics and Computation, Addison-Wesley, Reading, Mass, USA, 1974.
  15. R. T. Kingsford, R. F. Thomas, P. S. Wong, and E. Knowles, GIS Database for Wetlands of the Murray Darling Basin, Final Report to the Murray-Darling Basin Commission, National Parks and Wildlife Service, Sydney, Australia, 1997.
  16. F. Rego and B. Koch, "Automatic classification of land cover with high resolution data of the Rio de Janeiro city Brazil comparison between pixel and object classification," in Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, J. Carstens, Ed., Regensburg, Germany, June 2003.
  17. T. Blaschke, M. Conradi, and S. Lang, "Multiscale image analysis for ecological monitoring of heterogeneous, small structured landscapes," in Remote Sensing for Environmental Monitoring, GIS Applications, and Geology, vol. 4545 of Proceedings of SPIE, pp. 35–44, Toulouse, France, 2001.
  18. J. R. Jensen, Introductory Digital Image Processing: A Remote Sensing Perspective, Prentice Hall, Upper Saddle River, NJ, USA, 1996.
  19. J. A. Richards, Remote Sensing Digital Image Analysis, Springer, Berlin, Germany, 1999.
  20. M. Xiang, C.-C. Hung, M. Pham, B.-C. Kuo, and T. Coleman, "A parallelepiped multispectral image classifier using genetic algorithms," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '05), vol. 1, pp. 482–485, Seoul, Republic of Korea, July 2005.
  21. T. Blaschke and J. Strobl, "What's wrong with pixels? Some recent developments interfacing remote sensing and GIS," GIS—Zeitschrift für Geoinformationssysteme, pp. 12–17, 2001.
  22. H. Xu, "Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery," International Journal of Remote Sensing, vol. 27, no. 14, pp. 3025–3033, 2006.
  23. S. K. McFeeters, "The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features," International Journal of Remote Sensing, vol. 17, no. 7, pp. 1425–1432, 1996.
  24. J. Fu, J. Wang, and J. Li, "Study on the automatic extraction of water body from TM image using decision tree algorithm," in International Symposium on Photoelectronic Detection and Imaging 2007: Related Technologies and Applications, vol. 6625 of Proceedings of the SPIE, Beijing, China, September 2007.
  25. J. Deng and K. Wang, "Study on the automatic extraction of water body from SPOT-5 images using decision tree algorithm," Journal of Zhejiang University, vol. 31, no. 2, pp. 171–174, 2005.
  26. S. E. Decatur, "Application of neural networks to terrain classification," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 283–288, June 1989.
  27. P. A. Wilson, "Rule-based classification of water in landsat MSS images using the variance filter," Photogrammetric Engineering and Remote Sensing, vol. 63, no. 5, pp. 485–491, 1997.
  28. D. Yunyan and Z. Chenghu, "Automatically extracting remote sensing information for water bodies," Journal of Remote Sensing, vol. 4, no. 2, pp. 264–269, 1998.
  29. Q. Zhang, C. Wang, F. Shinohara, and T. Yamaoka, "Automatic extraction of water body based on EOS/MODIS remotely sensed imagery," in MIPPR 2007: Automatic Target Recognition and Image Analysis; and Multispectral Image Acquisition, vol. 6786 of Proceedings of SPIE, Wuhan, China, 2007.
  30. O. Sharma, D. Mioc, and F. Anton, "Feature extraction and simplification from color images based on color image segmentation and skeletonization using the quadedge data structure," in Proceedings of the 15th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2007.
  31. P. Nuangjumnong and R. Simking, "Automatic extraction of road and water surface from SPOT5 Pan-Sharpened image," in Proceedings of the Conference Map Asia, 2009.
  32. M. Li, L. Xu, and M. Tang, "An extraction method for water body of remote sensing image based on oscillatory network," Journal of Multimedia, vol. 6, no. 3, pp. 252–260, 2011.
  33. N. D. Duong, "Water body extraction from multispectral image by spectral pattern analysis," in Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 39-B8 of the 22nd ISPRS Congress, pp. 181–186, Melbourne, Australia, August-September 2012.
  34. J. Luo, Y. Sheng, Z. Shen, and J. Li, "High-precise water extraction based on spectral-spatial coupled remote sensing information," in Proceedings of the 30th IEEE International Geoscience and Remote Sensing Symposium (IGARSS '10), pp. 2840–2843, July 2010.
  35. I. Kanellopoulos and G. G. Wilkinson, "Strategies and best practice for neural network image classification," International Journal of Remote Sensing, vol. 18, no. 4, pp. 711–725, 1997.
  36. H. Bischof, W. Schneider, and A. J. Pinz, "Multispectral classification of landsat-images using neural networks," IEEE Transactions on Geoscience and Remote Sensing, vol. 30, no. 3, pp. 482–490, 1992.
  37. P. M. Atkinson and A. R. L. Tatnall, "Neural networks in remote sensing," International Journal of Remote Sensing, vol. 18, no. 4, pp. 699–709, 1997.
  38. G. G. Wilkinson, "Open questions in neurocomputing for earth observation," in Neurocomputation in Remote Sensing Data Analysis, I. Kanellopoulos, G. G. Wilkinson, F. Roli, and J. Austin, Eds., pp. 3–13, Springer, Berlin, Germany, 1997.
  39. C. Özkan and F. Sunar, "The use and effectiveness of artificial neural networks in forest fire classification," in Proceedings of the RSS'99 Symposium on Earth Observation: From Data to Information, pp. 767–772, Cardiff, UK, September 1999.
  40. T. Yoshida and S. Omatu, "Neural network approach to land cover mapping," IEEE Transactions on Geoscience and Remote Sensing, vol. 32, no. 5, pp. 1103–1108, 1994.
  41. P. Blonda, V. la Forgia, G. Pasquariello, and G. Satalino, "Feature extraction and pattern classification of remote sensing data by a modular neural system," Optical Engineering, vol. 35, no. 2, pp. 536–542, 1996.
  42. J. D. Paola and R. A. Schowengerdt, "A review and analysis of backpropagation neural networks for classification of remotely-sensed multi-spectral imagery," International Journal of Remote Sensing, vol. 16, no. 16, pp. 3033–3058, 1995.
  43. G. M. Foody, "Land cover classification by an artificial neural network with ancillary information," International Journal of Geographical Information Systems, vol. 9, no. 5, pp. 527–542, 1995.
  44. G. M. Foody and M. K. Arora, "An evaluation of some factors affecting the accuracy of classification by an artificial neural network," International Journal of Remote Sensing, vol. 18, no. 4, pp. 799–810, 1997.
  45. D. L. Civco, "Artificial neural networks for land-cover classification and mapping," International Journal of Geographical Information Systems, vol. 7, no. 2, pp. 173–186, 1993.
  46. J. C.-W. Chan, K.-P. Chan, and A. G.-O. Yeh, "Detecting the nature of change in an urban environment: a comparison of machine learning algorithms," Photogrammetric Engineering and Remote Sensing, vol. 67, no. 2, pp. 213–225, 2001.
  47. S. Ghosh, S. Biswas, D. Sarkar, and P. P. Sarkar, "A tutorial on different classification techniques for remotely sensed imagery datasets," Smart Computing Review, vol. 4, no. 1, pp. 34–43, 2014.
  48. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representation by error propagation," in Parallel Distributed Processing: Exploration in Microstructure of Cognition, vol. 1, pp. 318–362, MIT Press, Cambridge, Mass, USA, 1986.
  49. S. Patra, S. Ghosh, and A. Ghosh, "Change detection of remote sensing images with semi-supervised multilayer perceptron," Fundamenta Informaticae, vol. 84, no. 3-4, pp. 429–442, 2008.
  50. V. N. Mishra, P. K. Rai, and K. Mohan, "Prediction of land use changes based on land change modeler (LCM) using remote sensing: a case study of Muzaffarpur (Bihar), India," Journal of the Geographical Institute Jovan Cvijic, vol. 64, no. 1, pp. 111–127, 2014.
  51. B. Ahmed and R. Ahmed, "Modeling urban land cover growth dynamics using multi-temporal satellite images: a case study of Dhaka, Bangladesh," ISPRS International Journal of Geo-Information, vol. 1, no. 3, pp. 3–31, 2012.
  52. S. Gopal and C. Woodcock, "Remote sensing of forest change using artificial neural networks," IEEE Transactions on Geoscience and Remote Sensing, vol. 34, no. 2, pp. 398–404, 1996.
  53. F. S. Erbek, C. Özkan, and M. Taberner, "Comparison of maximum likelihood classification method with supervised artificial neural network algorithms for land use activities," International Journal of Remote Sensing, vol. 25, no. 9, pp. 1733–1748, 2004.
  54. R. Fiset, F. Cavayas, M. C. Mouchot, B. Solaiman, and R. Desjardins, "Map-image matching using a multi-layer perceptron: the case of the road network," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 53, no. 2, pp. 76–84, 1998.
  55. C. Özkan and F. S. Erbek, "The comparison of activation functions for multispectral Landsat TM image classification," Photogrammetric Engineering and Remote Sensing, vol. 69, no. 11, pp. 1225–1234, 2003.
  56. A. C. T. Oliveira, L. T. Oliveira, L. M. T. Carvalho, A. Z. Martinhago, F. W. Acerbi Junior, and L. P. Z. Lima, "Separabilities of forest types in Amplitude-phase space of multi-temporal MODIS NDVI," in XIV Simpósio Brasileiro de Sensoriamento Remoto, pp. 1457–1464, INPE, Natal, Brasil, April 2009.
  57. F. Fierens and P. L. Rosin, "Filtering remote sensing data in the spatial and feature domains," in Proceedings of the Image and Signal Processing for Remote Sensing, pp. 472–482, September 1994.
  58. S. B. Kotsiantis, "Supervised machine learning: a review of classification techniques," Informatica, vol. 31, no. 3, pp. 249–268, 2007.
  59. Y. Freund and R. E. Schapire, "Large margin classification using the perceptron algorithm," Machine Learning, vol. 37, no. 3, pp. 277–296, 1999.
  60. L. S. Camargo and T. Yoneyama, "Specification of training sets and the number of hidden neurons for multilayer perceptrons," Neural Computation, vol. 13, no. 12, pp. 2673–2680, 2001.
  61. S. K. Pal and S. Mitra, "Multilayer perceptron, fuzzy sets, and classification," IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 683–697, 1992.
  62. A. Vehtari and J. Lampinen, "Bayesian MLP neural networks for image analysis," Pattern Recognition Letters, vol. 21, no. 13-14, pp. 1183–1191, 2000.
  63. M. Boschetti, I. Gallo, M. Meroni et al., "Retrieval of vegetation understory information fusing hyperspectral and panchromatic airborne data," in Proceedings of the 3rd EARSeL Workshop on Imaging Spectroscopy, Herrsching, Germany, May 2003.
  64. C. D. Lippitt, J. Rogan, Z. Li, J. R. Eastman, and T. G. Jones, "Mapping selective logging in mixed deciduous forest: a comparison of machine learning algorithms," Photogrammetric Engineering and Remote Sensing, vol. 74, no. 10, pp. 1201–1211, 2008.
  65. D. Li, W. Ju, W. Fan, and Z. Gu, "Estimating the age of deciduous forests in northeast China with enhanced thematic mapper plus data acquired in different phenological seasons," Journal of Applied Remote Sensing, vol. 8, no. 1, Article ID 083670, 2014.
  66. J. R. Jensen, F. Qiu, and M. Ji, "Predictive modelling of coniferous forest age using statistical and artificial neural network approaches applied to remote sensor data," International Journal of Remote Sensing, vol. 20, no. 14, pp. 2805–2822, 1999.
  67. B. B. Chaudhuri and S. K. Parui, "Target detection: remote sensing techniques for defence applications," Defence Science Journal, vol. 45, no. 4, pp. 285–291, 1995.
  68. A. K. Goswami, S. Gakhar, and H. Kaur, "Automatic object recognition from satellite images using artificial neural network," International Journal of Computer Applications, vol. 95, no. 10, pp. 33–39, 2014.