Journal of Sensors
Volume 2018, Article ID 8130470, 8 pages
https://doi.org/10.1155/2018/8130470
Research Article

Wetland Change Detection Using Cross-Fused-Based and Normalized Difference Index Analysis on Multitemporal Landsat 8 OLI

School of Resources and Environmental Engineering, Anhui University, Hefei, Anhui 230601, China

Correspondence should be addressed to Biao Wang; wangbiao-rs@ahu.edu.cn

Received 30 November 2017; Revised 8 April 2018; Accepted 30 April 2018; Published 21 May 2018

Academic Editor: Biswajeet Pradhan

Copyright © 2018 Yan Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Wetlands are among the most important ecosystems on Earth and play a critical role in regulating regional climate and in preventing and mitigating floods. However, it is difficult to detect wetland changes in multitemporal Landsat 8 OLI satellite images due to the mixed composition of vegetation, soil, and water. The main objective of this study is to quantify changes in wetland cover with an image-to-image comparison change detection method based on the fusion of multitemporal images. Spectral distortion, which is generated by the spectral and spatial differences between multitemporal images during cross-fusion, is regarded as candidate change information. Meanwhile, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were extracted from the cross-fused image as a normalized index image to enhance the information about vegetation and water. Then, a modified iteratively reweighted multivariate alteration detection (IR-MAD) is applied to the generally fused images and the normalized difference index images, providing a good evaluation of the spectral distortion. The experimental results show that the proposed method performs better at reducing detection errors in complicated areas with different ground types, especially in cultivated areas and forests. The proposed method was quantitatively assessed and achieved overall accuracies of 96.67% and 93.06% for the interannual and seasonal datasets, respectively. Our method can serve as a tool to monitor wetland changes and provide effective technical support for wetland conservation.

1. Introduction

Wetlands are a unique ecosystem formed by the interaction between water and land, and they cover 6% of the Earth’s surface [1]. Due to seasonal changes, the characteristics of wetlands vary among water, soil, and vegetation. This makes the wetland landscape more complex, and it becomes more difficult to extract information about changes in these regions. In addition, the combination of reflectance spectra of the underlying soil, the hydrologic regime, and atmospheric vapor makes optical classification more difficult, and these factors could introduce a reduction in spectral reflectance. Therefore, it is often difficult to achieve the expected results using a single method to extract information about wetland change [2].

Postclassification comparison (PCC), in which two multitemporal images are independently classified and then compared [3], is one method used for wetland change detection. It is applied to detect the trajectories of corresponding wetland cover types and encompasses many classification methods, such as the regression tree algorithm and maximum likelihood classification [4, 5]. However, some of these methods require high-accuracy classification and ground truth information [3, 6].

The image-to-image (or direct) comparison change detection method, another approach to wetland change detection, obtains a difference image of spectral changes by analyzing and calculating the spectral characteristics of multitemporal images; a binary image is then generated in which changed areas are distinguished from unchanged areas [7]. The advantages of this method are that it provides a faster comparison of images and that it demands no ground truth information; however, it cannot display the change trajectories of wetland cover types [8]. Change detection methods such as change vector analysis (CVA) [9], principal component analysis (PCA) [10], Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) [11], and multivariate alteration detection (MAD) [12] operate directly on the multitemporal images.

However, the results of change detection based on a difference image largely depend on the spectral characteristics and may include false positives. For this reason, a change detection method based on cross-fusion and spectral distortion was proposed to improve the accuracy of change detection in flood zones [13]. Successful change detection results have also been achieved in coal-mining subsidence areas; nevertheless, some false detections remain in wetland areas [14].

To mitigate the false positive results, we employ an image-to-image method through NDVI and NDWI extraction based on a cross-fusion image to detect the change information of wetlands. In the case of wetlands, vegetation, soil, and water coexist, and the application of the cross-fusion method is beneficial for improving spatial resolution and enhancing information about wetland change. In addition, based on the cross-fused image, NDVI and NDWI are extracted in order to enhance the information about vegetation and water. We can then derive the change information using the modified IR-MAD algorithm, which is a well-established change detection method for multitemporal multispectral images [12]. Finally, the change area of wetlands is obtained using the automated threshold method.

2. Study Area and Dataset

In this study, we collected three Landsat 8 OLI multitemporal images covering the Shengjin Lake Nature Reserve area. These images represent the seasonal (25 July 2016 and 16 December 2016) and interannual (6 November 2013 and 16 December 2016) changes in the types of land cover, as shown in Figures 1(a)–1(c). In preprocessing, the relevant bands were selected, including the 30 m resolution MS (multispectral) bands 2–7 and the 15 m resolution PAN (panchromatic) band 8. Then, the Shengjin Lake protected-area vector data were used to clip the images, with the specific parameters shown in Table 1.

Figure 1: Multitemporal images of Shengjin Lake used in the experiment.
Table 1: Data specifications.

The Shengjin Lake National Nature Reserve (30°15′–30°30′N, 116°55′–117°15′E) is located in Chizhou City, Anhui Province. The protected region, with a total area of 333.40 km², consists of Shengjin Lake, cultivated areas, urban areas, forest, and bare land. The Shengjin Lake wetland ecological environment is well preserved, with rich natural and cultural landscapes, and it is one of the most intact inland freshwater lake wetland ecosystems in the lower reaches of the Yangtze River. The lake connects with the Yangtze River, and its water level is regulated by the Huangpen sluice. The location of the study area is shown in Figure 1(d). The water level of Shengjin Lake varies between 3.4 and 7.4 m owing to the sluice; consequently, the lake area is largest in the summer wet season and smaller in the winter dry season. During the dry season, the two largest Carex meadows ("upper lake meadow" and "lower lake meadow") provide suitable habitat and food sources for Greater White-fronted Geese and Bean Geese, and the area has become a critical wintering habitat for rare birds [15, 16].

3. Methodology

In this section, we detail the process of extracting wetland change information from bitemporal images using a modified IR-MAD. We consider two datasets acquired over the same geographical area at two different times, each consisting of a high-resolution PAN image and a low-resolution MS image. The process flow is shown in Figure 2, and the details of each step are described below.

Figure 2: Workflow of the proposed methodology for wetland vegetation extraction using multitemporal satellite images. The superscripts H and L represent high resolution and low resolution, respectively.
3.1. Cross-Fused Image Generation

Generally, an image fusion method fuses a high-resolution PAN image and a low-resolution MS image into a high-resolution MS image. In this paper, the PAN and MS images of each date are first generally fused using Gram-Schmidt adaptive (GSA) fusion to produce two high-resolution multispectral images. The GSA algorithm is applied as a representative component substitution- (CS-) based fusion algorithm, and it places no limit on the number of bands in the fused image [17]. The major drawback of CS-based fusion is spectral distortion, also called color (or radiometric) distortion, which is caused by the mismatch between the spectral responses of the MS and PAN bands owing to their different bandwidths [18]. In this study, this spectral distortion is regarded as a candidate detection feature for wetland cover change [17]. To this end, we extract the high-resolution NIR band instead of the high-resolution PAN band. The NIR band has a narrower bandwidth, so the mismatch of the spectral responses outside the NIR spectral range increases and the spectral distortion becomes more pronounced [13]. At the same time, the NIR band is a very useful information source for detecting water and vegetation areas: water appears dark because of its strong absorption characteristics, whereas vegetation has the opposite characteristic and appears bright. Thus, the NIR band is useful for extracting change information. The first cross-fused image is then generated by fusing the MS image of the first date with the NIR band of the second date using the GSA image fusion algorithm, and the second cross-fused image is obtained in the same way by fusing the MS image of the second date with the NIR band of the first date.
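To make the fusion step concrete, the following Python sketch implements a minimal component-substitution fusion in the spirit of GSA, using only NumPy. The least-squares band weights and covariance-based injection gains follow the general CS scheme; the function name `gsa_fuse`, the simplified gain model, and the random placeholder data are illustrative assumptions, not the exact GSA implementation used in the paper.

```python
import numpy as np

def gsa_fuse(ms, pan):
    """Minimal component-substitution pansharpening sketch (GSA-style).

    ms  : (bands, H, W) low-resolution MS image, already resampled to the PAN grid
    pan : (H, W) high-resolution band (PAN, or a NIR band for cross-fusion)
    Returns a (bands, H, W) fused image.
    """
    b, h, w = ms.shape
    # Estimate band weights by least-squares regression of the MS bands
    # against the high-resolution band (the "adaptive" part of GSA).
    A = ms.reshape(b, -1).T                      # (H*W, bands)
    y = pan.reshape(-1)
    wts, *_ = np.linalg.lstsq(A, y, rcond=None)
    intensity = np.tensordot(wts, ms, axes=1)    # synthetic intensity component
    detail = pan - intensity                     # spatial detail to inject
    # Injection gains: covariance of each band with the intensity component.
    gains = np.array([np.cov(ms[i].ravel(), intensity.ravel())[0, 1]
                      / (intensity.var() + 1e-12) for i in range(b)])
    return ms + gains[:, None, None] * detail

# Cross-fusion: fuse the time-1 MS image with the time-2 NIR band (and vice
# versa); spectral distortion in the result then encodes change candidates.
rng = np.random.default_rng(0)
ms_t1 = rng.random((6, 64, 64))    # placeholder 6-band MS image, date 1
nir_t2 = rng.random((64, 64))      # placeholder NIR band, date 2
crossed = gsa_fuse(ms_t1, nir_t2)
print(crossed.shape)               # (6, 64, 64)
```

Swapping the NIR band across dates is what distinguishes the cross-fused images from the generally fused ones: where the scene is unchanged, the injected detail is consistent with the MS bands, and where it changed, the mismatch appears as spectral distortion.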

3.2. NDVI and NDWI Extraction

The NDVI, a classic index used to monitor vegetation changes, is calculated as a normalized transform of the NIR and red reflectance ratio. The NDVI minimizes solar irradiance and soil background effects and enhances the vegetation signal. Similarly, the NDWI is a commonly used remote sensing index for monitoring water [19, 20].

The cross-fused image has high spatial and temporal resolution, and the vegetation and water details of the bitemporal images are preserved. By using the NDVI and the NDWI, the change information for water and vegetation is further optimized, and the distinction among water bodies, wet soil, and vegetation is improved. The spectral difference is thus enhanced a second time, and the detection sensitivity for vegetation and water increases. The indices are calculated as

NDVI = (NIR - Red) / (NIR + Red), (3)
NDWI = (Green - NIR) / (Green + NIR), (4)

where NIR, Red, and Green are the NIR, red, and green bands of the cross-fused image, respectively. According to (3) and (4), a normalized difference index image containing the NDVI and NDWI bands is generated from each of the two cross-fused images.
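As a sketch, the two indices of (3) and (4) can be computed directly from the cross-fused bands with NumPy. The band ordering below (Landsat 8 OLI bands 2–7, so green, red, and NIR sit at array indices 1, 2, and 3) and the random placeholder array are assumptions for illustration.

```python
import numpy as np

def normalized_index(a, b):
    """Generic normalized difference (a - b) / (a + b) with a zero-sum guard."""
    denom = a + b
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (a - b) / safe)

# Placeholder cross-fused image: 6 bands (OLI bands 2-7) of 64x64 pixels.
fused = np.random.default_rng(1).random((6, 64, 64))
green, red, nir = fused[1], fused[2], fused[3]

ndvi = normalized_index(nir, red)     # (NIR - Red) / (NIR + Red), as in (3)
ndwi = normalized_index(green, nir)   # (Green - NIR) / (Green + NIR), as in (4)
index_image = np.stack([ndvi, ndwi])  # two-band normalized difference index image
print(index_image.shape)              # (2, 64, 64)
```

Stacking the two indices gives the two-band normalized difference index image that is later fed, together with the generally fused images, into the modified IR-MAD.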

3.3. Wetland Change Area Extraction

The IR-MAD algorithm, which is based on canonical correlation analysis, considers two N-band multispectral images X and Y of the same area acquired at two different times, and it has important applications in multitemporal multispectral image change detection. The random variables U and V are generated as linear combinations of the intensities of the spectral bands using coefficient vectors a and b [21]:

U = a^T X, V = b^T Y, (5)

where the superscript T denotes the transpose. The task is to find suitable vectors a and b by maximizing the variance of U - V. This leads to solving two generalized eigenvalue problems for a and b from a canonical correlation analysis.

The MAD variates M_i, generated by taking the paired differences between the canonical variates U_i and V_i, represent the change information [22]:

M_i = U_i - V_i = a_i^T X - b_i^T Y, i = 1, ..., N. (6)

In this study, corresponding to the two multitemporal normalized difference index images and the two generally fused images, the MAD variates are generated using the optimal coefficients a and b through (7) and (8): (7) gives the MAD variates of the pair of generally fused images of the two dates, and (8) gives those of the pair of normalized difference index images, which are extracted from the cross-fused images as described in Section 3.2.

The probability of change for pixel j, calculated as the sum of squares of the standardized MAD variates, is defined in (9):

Z_j = sum_{i=1}^{N} (M_{i,j} / sigma_{M_i})^2, (9)

where Z_j represents a weight for the probability of change at each pixel (a larger chi-square value indicates a higher probability of change), M_{i,j} is the MAD variate of the ith band for pixel j, and sigma_{M_i}^2 is the variance of the no-change distribution. The resulting values are regarded as the weights of the observations. The iteration continues either for a fixed number of iterations or until there is no significant change in the canonical correlations; the latter criterion is used in this study [22]. The optimal coefficients a and b are then recalculated using the weights.

In this study, the final change detection index is calculated from the combination of the MAD variates of the two image pairs (the normalized difference index pair and the generally fused pair):

Z_j = sum_{k=1}^{2} sum_i (M_{i,j}^(k) / sigma_{M_i^(k)})^2, (10)

where M_{i,j}^(k) is the ith-band MAD variate of the kth pair of images for pixel j. This method effectively reduces falsely detected changes by considering the values of both pairs in (10).
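The IR-MAD core described above can be sketched compactly with NumPy and SciPy. This is a simplified illustration, not the paper's modified algorithm: images are assumed flattened to (bands, pixels) arrays, a fixed iteration count replaces the convergence test on the canonical correlations, and the toy input data are random.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.stats import chi2

def irmad(X, Y, n_iter=10):
    """Sketch of iteratively reweighted MAD for two (bands, npix) arrays."""
    nb, npix = X.shape
    w = np.ones(npix)                          # initial no-change weights
    for _ in range(n_iter):
        # Weighted means and joint covariance matrix
        mX = (X * w).sum(1) / w.sum()
        mY = (Y * w).sum(1) / w.sum()
        Xc, Yc = X - mX[:, None], Y - mY[:, None]
        Z = np.vstack([Xc, Yc]) * np.sqrt(w)
        S = Z @ Z.T / w.sum()
        Sxx, Syy, Sxy = S[:nb, :nb], S[nb:, nb:], S[:nb, nb:]
        # Canonical correlation analysis: two generalized eigenproblems
        evals, A = eigh(Sxy @ np.linalg.solve(Syy, Sxy.T), Sxx)
        _, B = eigh(Sxy.T @ np.linalg.solve(Sxx, Sxy), Syy)
        rho = np.sqrt(np.clip(evals[::-1], 0, 1))     # canonical correlations
        A, B = A[:, ::-1], B[:, ::-1]                 # sort by decreasing rho
        U, V = A.T @ Xc, B.T @ Yc
        V *= np.sign(np.sum(U * V, axis=1))[:, None]  # align variate signs
        M = U - V                                     # MAD variates, eq. (6)
        sigma2 = np.maximum(2.0 * (1.0 - rho), 1e-9)[:, None]
        chi = (M ** 2 / sigma2).sum(0)                # chi-square, eq. (9)
        w = np.maximum(1.0 - chi2.cdf(chi, df=nb), 1e-9)  # no-change weight
    return M, chi

rng = np.random.default_rng(42)
X = rng.random((4, 500))                     # 4-band toy image, 500 pixels
Y = 0.5 * X + 0.5 * rng.random((4, 500))     # correlated "second date"
M, chi = irmad(X, Y)
print(M.shape, chi.shape)                    # (4, 500) (500,)
```

Pixels with a large chi-square statistic receive a small no-change weight, so the canonical coefficients are progressively re-estimated from the unchanged background, which is the key idea behind the iterative reweighting.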

This modified IR-MAD algorithm not only alleviates the problem of spectral distortion causing massive false change alarms when the bitemporal images are used to generate the cross-fused images but also reduces the interaction between bands of the multispectral images [14, 21]. Therefore, the algorithm yields better change detection results for multitemporal images. Finally, the Otsu thresholding algorithm, a histogram-based image segmentation method [23] that performs well and is easy to apply, was applied to the modified IR-MAD image to obtain a binary map of the changed and unchanged areas.
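The Otsu step can be sketched in a few lines: build a histogram of the change index and pick the bin center that maximizes the between-class variance. The bimodal toy image below is an illustrative stand-in for a modified IR-MAD chi-square image, not real data.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's histogram-based threshold: maximize between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()
    w0 = np.cumsum(p)                    # class-0 (below-threshold) probability
    w1 = 1 - w0
    m = np.cumsum(p * centers)           # cumulative mean
    mT = m[-1]                           # global mean
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(nbins)
    between[valid] = (mT * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# Toy "change index": unchanged background plus a changed patch.
rng = np.random.default_rng(3)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:40, 20:40] = rng.normal(0.8, 0.05, (20, 20))
t = otsu_threshold(img)
binary = img > t                         # changed (True) vs unchanged (False) map
```

Because the changed and unchanged pixels form two separated modes, the threshold lands in the valley between them and the binary map cleanly isolates the changed patch.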

4. Experimental Result and Discussion

To evaluate the effectiveness of this method, we analyze and discuss the seasonal and interannual variations in the study area. We compare our result with the cross-fused and PCC change detection methods. In the cross-fused change detection method, the original IR-MAD is applied between the two cross-fused images to extract the changed area. In the PCC process, the three generally fused images are classified into 7 classes (water, bare land, meadow, cultivated area, city, forest, and mudflats) by maximum likelihood classification.

To quantitatively compare the performance of these methods, ground truths for the seasonal and interannual variation images were generated from the GSA-fused images by manually digitizing the changed areas of the Shengjin Lake Nature Reserve, shown in red in Figures 3(a) and 3(e). The results are overlain on the multispectral images of 6 November 2013 and 25 July 2016, respectively. In the quantitative analysis, the confusion matrix method was applied to evaluate the statistical accuracy of the tested methodologies, and indices including overall accuracy (OA), kappa coefficient (KC), commission error (CE), omission error (OE), and false alarm rate (FAR) were calculated [24]. The detailed quantitative change detection accuracy assessment results for each method are shown in Figure 3 and Table 2. The red color indicates the change pixels extracted by the different methods, and the results are overlain on the multispectral image of 16 December 2016.
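The accuracy indices can be derived from the binary confusion matrix as in the sketch below. The tiny truth/prediction maps are illustrative only, not the paper's data; the kappa formula follows the standard two-class definition.

```python
import numpy as np

def change_accuracy(truth, pred):
    """OA, kappa, commission/omission error, and false alarm rate
    from binary change maps (1 = changed, 0 = unchanged)."""
    truth = truth.ravel().astype(bool)
    pred = pred.ravel().astype(bool)
    tp = np.sum(truth & pred)        # changed, detected
    tn = np.sum(~truth & ~pred)      # unchanged, not detected
    fp = np.sum(~truth & pred)       # unchanged, falsely detected
    fn = np.sum(truth & ~pred)       # changed, missed
    n = tp + tn + fp + fn
    oa = (tp + tn) / n
    # Chance agreement for the kappa coefficient
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kc = (oa - pe) / (1 - pe)
    ce = fp / max(tp + fp, 1)        # commission error
    oe = fn / max(tp + fn, 1)        # omission error
    far = fp / max(fp + tn, 1)       # false alarm rate
    return {"OA": oa, "KC": kc, "CE": ce, "OE": oe, "FAR": far}

# Toy example: 30 truly changed pixels, one miss and one false alarm.
truth = np.zeros((10, 10), dtype=int)
truth[:3] = 1
pred = truth.copy()
pred[0, 0] = 0                       # omission
pred[9, 9] = 1                       # commission
metrics = change_accuracy(truth, pred)
print(metrics)
```

With one miss and one false alarm out of 100 pixels, the OA is 0.98 while the CE and OE are each about 3%, which illustrates why OA alone can look optimistic and the error rates are reported alongside it.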

Figure 3: Results of wetland change area by using the tested methods: (a)–(d) interannual change detection result; (e)–(h) seasonal change detection result.
Table 2: Change detection accuracy results: overall accuracy (OA), kappa coefficient (KC), commission error (CE), omission error (OE), and false alarm rate (FAR).

Observation and analysis of the results in Figure 3 show that, in areas where different ground types coexist (water, bare land, meadow, cultivated area, city, forest, and mudflats), the proposed method detects wetland change information more accurately than PCC based on the generally fused images and the cross-fused method, and it effectively reduces the change detection errors for both interannual and seasonal wetland cover change. In the results of the PCC and cross-fused methods, parts of the unchanged area are labeled as changed; as shown in Table 2, the CE of PCC reaches 70%. The results of our method are more accurate than those of PCC: the OA reaches 90% and the FAR is as low as 0.02.

Figure 3 includes the whole study area, allowing for an initial visual assessment of the results of wetland change extent extraction. Figures 4 and 5 show the subimages extracted from “upper lake meadow” and “lower lake meadow” regions of Figure 3.

Figure 4: Detailed images from upper lake meadow regions of Figure 3: (a)–(d) interannual change detection result; (e)–(h) seasonal change detection result.
Figure 5: Detailed images from lower lake meadow regions of Figure 3: (a)–(d) interannual change detection result; (e)–(h) seasonal change detection result.

Figure 4 mainly covers a complex area composed of water, meadow, cultivated area, city, and mudflats, whereas the area shown in Figure 5 mainly consists of water, meadow, forest, and mudflats. The blue and yellow colors indicate commission and omission errors, respectively, and the results are overlain on the multispectral image of 16 December 2016.

As shown in Figures 4 and 5, the PCC and cross-fused method are not sensitive to artificially cultivated areas and forest areas affected by seasonal changes, and they detect too many false positives caused by similar spectral characteristics.

The proposed method efficiently detects the changed areas in complex regions with similar spectral characteristics, improves the accuracy of wetland change detection, and minimizes the impacts of seasonality and human activity. It also performs well in meadow areas. However, it produces errors in some mudflat edge regions, such as the yellow areas in Figures 4(b) and 5(f). On the one hand, spatial inconsistency arises from the different look angles of the bitemporal imagery. On the other hand, for water the proposed method relies on the NDWI, which is sensitive to open water; as a result, omission errors can occur where the water level falls and the mudflats are exposed, because the mudflats still retain a certain amount of water.

5. Conclusions

In this paper, we proposed an image-to-image change detection method using multitemporal images to quantify wetland cover changes; the method is based on the combination of a cross-fusion image and a normalized difference index image. For multitemporal Landsat 8 OLI images, the GSA fusion method is used to generate the cross-fusion images, from which the NDVI and NDWI are extracted. The optimal change information is then calculated through the modified IR-MAD, which uses the pairs of normalized difference index images and generally fused images. The experimental results showed that the proposed method increases the accuracy of change detection and minimizes detection errors in complex areas with different ground types. Especially in cultivated areas affected by human activity, change information can be identified more accurately and a lower FAR can be achieved. These results can help wetland managers implement effective management plans, and the method provides useful guidance for monitoring wetland health and for wetland conservation.

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

This research was supported in part by the Natural Science Foundation of Anhui Province under Grant no. 1608085MD83, the Science and Technology project of the Department of Land and Resources of Anhui Province: Study on the Data Integration Technology for the Real Estate Registration (no. 2016-K-12), the Educational Commission of Anhui Province of China (no. KJ2018A0007), and the Department of Human Resources and Social Security of Anhui: Innovation Project Foundation for Selected Overseas Chinese Scholar.

References

  1. G. Liu, L. Zhang, Q. Zhang, Z. Musyimi, and Q. Jiang, “Spatio–temporal dynamics of wetland landscape patterns based on remote sensing in Yellow River Delta, China,” Wetlands, vol. 34, no. 4, pp. 787–801, 2014.
  2. E. Adam, O. Mutanga, and D. Rugege, “Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: a review,” Wetlands Ecology and Management, vol. 18, no. 3, pp. 281–296, 2010.
  3. A. Singh, “Review article digital change detection techniques using remotely-sensed data,” International Journal of Remote Sensing, vol. 10, no. 6, pp. 989–1003, 1989.
  4. L. Yang, C. Homer, J. Brock, and J. Fry, “An efficient method for change detection of soil, vegetation and water in the Northern Gulf of Mexico wetland ecosystem,” International Journal of Remote Sensing, vol. 34, no. 18, pp. 6321–6336, 2013.
  5. C. Munyati, “Wetland change detection on the Kafue Flats, Zambia, by classification of a multitemporal remote sensing image dataset,” International Journal of Remote Sensing, vol. 21, no. 9, pp. 1787–1806, 2000.
  6. F. Bovolo, S. Marchesi, and L. Bruzzone, “A framework for automatic and unsupervised detection of multiple changes in multitemporal images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 6, pp. 2196–2212, 2012.
  7. D. Lu, P. Mausel, E. Brondízio, and E. Moran, “Change detection techniques,” International Journal of Remote Sensing, vol. 25, no. 12, pp. 2365–2401, 2004.
  8. D. Renza, E. Martinez, I. Molina, and D. M. Ballesteros L., “Unsupervised change detection in a particular vegetation land cover type using spectral angle mapper,” Advances in Space Research, vol. 59, no. 8, pp. 2019–2031, 2017.
  9. J. Chen, X. Chen, X. Cui, and J. Chen, “Change vector analysis in posterior probability space: a new method for land cover change detection,” IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 2, pp. 317–321, 2011.
  10. M. Hussain, D. Chen, A. Cheng, H. Wei, and D. Stanley, “Change detection from remotely sensed images: from pixel-based to object-based approaches,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 80, pp. 91–106, 2013.
  11. D. Renza, E. Martinez, and A. Arquero, “A new approach to change detection in multispectral images by means of ERGAS index,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 1, pp. 76–80, 2013.
  12. A. A. Nielsen, “The regularized iteratively reweighted MAD method for change detection in multi- and hyperspectral data,” IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 463–478, 2007.
  13. Y. Byun, Y. Han, and T. Chae, “Image fusion-based change detection for flood extent extraction using bi-temporal very high-resolution satellite images,” Remote Sensing, vol. 7, no. 8, pp. 10347–10363, 2015.
  14. B. Wang, J. Choi, S. Choi, S. Lee, P. Wu, and Y. Gao, “Image fusion-based land cover change detection using multi-temporal high-resolution satellite images,” Remote Sensing, vol. 9, no. 8, p. 804, 2017.
  15. C. Li, G. Beauchamp, Z. Wang, and P. Cui, “Collective vigilance in the wintering hooded crane: the role of flock size and anthropogenic disturbances in a human-dominated landscape,” Ethology, vol. 122, no. 12, pp. 999–1008, 2016.
  16. M. Barter, L. Cao, L. Chen, and G. Lei, “Results of a survey for waterbirds in the lower Yangtze floodplain, China, in January-February 2004,” Forktail, vol. 21, pp. 1–7, 2005.
  17. B. Aiazzi, S. Baronti, and M. Selva, “Improving component substitution pansharpening through multivariate regression of MS+Pan data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3230–3239, 2007.
  18. C. Thomas, T. Ranchin, L. Wald, and J. Chanussot, “Synthesis of multispectral images to high spatial resolution: a critical review of fusion methods based on remote sensing physics,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1301–1312, 2008.
  19. R. B. Myneni, F. G. Hall, P. J. Sellers, and A. L. Marshak, “The interpretation of spectral vegetation indexes,” IEEE Transactions on Geoscience and Remote Sensing, vol. 33, no. 2, pp. 481–486, 1995.
  20. A. K. Bhandari, A. Kumar, and G. K. Singh, “Improved feature extraction scheme for satellite images using NDVI and NDWI technique based on DWT and SVD,” Arabian Journal of Geosciences, vol. 8, no. 9, pp. 6949–6966, 2015.
  21. B. Wang, S.-K. Choi, Y.-K. Han, S.-K. Lee, and J.-W. Choi, “Application of IR-MAD using synthetically fused images for change detection in hyperspectral data,” Remote Sensing Letters, vol. 6, no. 8, pp. 578–586, 2015.
  22. P. R. Marpu, P. Gamba, and M. J. Canty, “Improving change detection results of IR-MAD by eliminating strong changes,” IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 4, pp. 799–803, 2011.
  23. A. S. Abutaleb, “Automatic thresholding of gray-level pictures using two-dimensional entropy,” Computer Vision Graphics and Image Processing, vol. 47, no. 1, pp. 22–32, 1989.
  24. R. G. Congalton, “A review of assessing the accuracy of classification of remotely sensed data,” Remote Sensing of Environment, vol. 37, no. 1, pp. 35–46, 1991.