Journal of Sensors

Research Article | Open Access

Volume 2018 | Article ID 5902318 | 10 pages | https://doi.org/10.1155/2018/5902318

Coastal Zone Classification Based on Multisource Remote Sensing Imagery Fusion

Academic Editor: Evangelos Hristoforou
Received: 10 Jun 2018
Revised: 08 Jul 2018
Accepted: 12 Jul 2018
Published: 24 Sep 2018

Abstract

The main objective of this paper was to assess the capability of multisource remote sensing imagery fusion for coastal zone classification. Five scenes of Gaofen-1 (GF-1) optical imagery and four scenes of synthetic aperture radar (SAR) imagery (C-band Sentinel-1 and L-band ALOS-2) were collected and matched. Note that GF-1 is the first satellite of the China high-resolution earth observation system; it acquires multispectral data with decametric spatial resolution, high temporal resolution, and wide coverage. A comparison of C- and L-band SAR over coastal coverage verified that C band is superior to L band and that the parameter subset of σ⁰VV, σ⁰VH, and σ⁰VV − σ⁰VH can be effectively used for coastal classification. A new fusion method based on the wavelet transform (WT) was also proposed and used for imagery fusion. The mean, entropy, gradient, and correlation coefficient of the proposed method were 67.526, 7.321, 6.440, and 0.955, respectively. We therefore conclude that the result of our proposed method is superior to the GF-1 imagery and the traditional HIS fusion result. Finally, the classification output was produced, along with an assessment of classification accuracy and the kappa coefficient. The kappa coefficient and overall accuracy of the classification were 0.8236 and 85.9774%, respectively, so the proposed fusion method performed satisfactorily for coastal coverage mapping.

1. Introduction

Coastal zones, typical land-sea ecosystems, play a key role in the sustainable development and environmental protection of shorelines: around 50% of the world's population lives within 60 to 200 km of the coast [1, 2]. Coastal zones are under increasing pressure from human activities. To reduce vulnerability and improve risk assessments for coastal development, it is important to map and monitor coastal zone status in terms of land cover types and change [3, 4].

Remote sensing, with both passive and active sensors, has proven to be a valuable tool for coastal zone classification. Optical sensors, such as the Thematic Mapper (TM) (Landsat 5), Enhanced Thematic Mapper Plus (ETM+) (Landsat 7), High Resolution Visible (HRV) (SPOT 3), High Resolution Visible and Infrared (HRVIR) (SPOT 4), and High Resolution Geometric (HRG) (SPOT 5) sensors and the new Chinese Gaofen (GF-1, -2, and -4) sensors, as well as synthetic aperture radar (SAR) sensors like RADARSAT-1/2, ENVISAT ASAR, ALOS-1/2, TerraSAR-X, COSMO-SkyMed, and GF-3, have been used to map and monitor coastal zones for more than 20 years [5–7]. A combination of optical and SAR imagery is expected to perform better in coastal zone classification.

Optical imagery has been widely proposed for coastal zone classification in the literature, based on conventional visual photo-interpretation keys such as tone, color, texture, pattern, form, size, and context [8–12]. Xiao et al. [13] used ETM+ remote sensing data from the United States' Landsat-7 satellite to build a coastal wetland classification model based on a backpropagation (BP) neural network. The model was applied to natural wetland cover classification in the core area of the Yancheng National Natural Reserve for Coastal Rare Birds. Zhang et al. [14] proposed a hybrid approach based on Landsat Thematic Mapper (TM) imagery for classifying coastal wetland vegetation. Linear spectral mixture analysis was used to segregate the TM image into four fractional images, which were used for classifying major land cover types through a threshold technique. Although conventional visual photo-interpretation can facilitate different types of coastal classification, it is also limited because optical remote sensing sensors are easily affected by cloud cover and solar illumination.

SAR ensures all-day and almost all-weather observations with moderate-to-fine spatial resolution and rapid revisit time, which has been shown to be suitable for accurate and frequent coastal classification mapping [15, 16]. On the use of polarimetric SAR for coastal detection and classification, Gou et al. [17] proposed an unsupervised method based on a three-channel joint sparse representation (SR) classification with fully polarimetric SAR (PolSAR) data. The method utilizes both texture and polarimetric feature information extracted from the HH, HV, and VV channels of a SAR image. Buono et al. [18] demonstrated PolSAR's capability to classify coastal areas of the Yellow River Delta (China) using two well-known unsupervised classification algorithms, the H/α-based and Freeman–Durden model-based algorithms, applied to a fully polarimetric SAR scene collected by RADARSAT-2 in 2008.

Optical and SAR imagery provide abundant spectral and polarimetric features, respectively. Since both products have advantages and disadvantages, combining optical and SAR data is a promising route for coastal classification. Rodrigues and Souza-Filho [19] investigated the capability of Landsat ETM+ and RADARSAT-1 SAR for classifying mangroves, coastal plateaus, alluvial plains, tidal flats, and salt marshes. Supervised classification of Landsat ETM+ imagery and the fused ETM+/SAR product represented a significant advancement for rapid and accurate coastal mapping. Ramsey et al. [20] developed a k-means clustering algorithm to classify Landsat TM, color-infrared (CIR) photographs, and ERS-1 SAR data. Individually, the green reflective CIR band and the SAR data identified broad categories of water, marsh, and forest. In combination with TM, the SAR and green CIR bands improved overall accuracy by about 3% and 15%, respectively.

In this paper, optical imagery (Landsat 8 and GF-1) and SAR imagery (C-band Sentinel-1 and L-band ALOS-2) were collected and matched. The spectral and polarimetric features of sampled coastal types were then analyzed for classification. A wavelet transform (WT) method was also proposed for multisource remote sensing imagery fusion to acquire optimum classification results.

The remainder of this paper is organized as follows: the study area and remote sensing dataset are introduced in Section 2. The methodology for feature extraction and the WT fusion algorithm is described in Section 3. The experimental results, including the comparison of multipolarization features, the fusion imagery based on the wavelet transform (WT), and the classification results using the maximum likelihood classifier (MLC), are presented in Section 4. Conclusions are given in Section 5.

2. Experiment

2.1. Study Area

Hangzhou Bay is a representative wetland area located south of the Yangtze River Delta, with a winding shoreline and numerous islands. In addition, the north-south transition of climate and the east-west transition of landforms result in coastal wetland diversity. Five types of wetland are distributed in Zhejiang Province, with a total area of 2,467,775 hectares. It is worth noting that natural wetland covers 891,083 hectares (about 36.1%), while artificial wetland covers 1,576,692 hectares (about 63.9%).


Table 1: Specifications of the remote sensing datasets.

Sensor | Type | Polarization/bands | Time (UTC) | Resolution (m) | Swath width (km)
ALOS-2 | SAR | HH/HV/VH/VV | 2016-05-16 (15:32) | 6 | 40 × 70
ALOS-2 | SAR | HH/HV/VH/VV | 2016-12-26 (15:32) | 6 | 40 × 70
ALOS-2 | SAR | HH/HV | 2016-07-17 (15:59) | 3 | 50 × 70
Sentinel-1 | SAR | VH/VV | 2016-07-26 (09:54) | 5 × 20 | 240
Sentinel-1 | SAR | VH/VV | 2016-07-26 (09:54) | 5 × 20 | 240
Landsat 8 | Optic | OLI | 2016-07-21 (19:19) | 30 | 185
Landsat 8 | Optic | OLI | 2016-07-21 (19:19) | 30 | 185
GF-1 | Optic | PMS | 2016-07-07 (11:10) | 8 | 60
GF-1 | Optic | PMS | 2016-07-07 (11:10) | 8 | 60
GF-1 | Optic | PMS | 2016-07-07 (11:10) | 8 | 60
GF-1 | Optic | PMS | 2016-07-07 (11:10) | 8 | 60
GF-1 | Optic | WFV | 2016-07-15 (11:06) | 16 | 800

2.2. Datasets
2.2.1. ALOS-2

Three scenes of ALOS-2 SAR imagery were collected: two scenes of quad-polarized imagery and one scene of dual-polarized imagery. ALOS-2 (Advanced Land Observing Satellite 2) was launched by the Japan Aerospace Exploration Agency (JAXA) in May 2014. As the name indicates, ALOS-2 is the successor of ALOS, specialized in L-band (1.2 GHz) SAR observation.

2.2.2. Sentinel-1

Two Sentinel-1 SAR imagery scenes were collected for comparison. Sentinel-1 is the first mission of the European Space Agency's Copernicus Programme satellite constellation. It comprises two satellites, Sentinel-1A (launched in April 2014) and Sentinel-1B (launched in April 2016), each carrying a C-band (5.405 GHz) SAR.

2.2.3. GF-1

Five GF-1 optical imagery scenes were also collected for analysis. GF-1 is the first satellite of the China high-resolution earth observation system, a Major National Science and Technology Project of China, launched in April 2013. The GF-1 panchromatic and multispectral (PMS) sensor and wide-field-view (WFV) cameras acquire data with high spatial resolution, wide coverage, and high revisit frequency, which makes them highly valuable data sources for coastal zone dynamic monitoring and classification.

The shapefile of the survey results, which were used for validation in this study, is shown in Figure 1 together with the SAR imagery. The specifications of the remote sensing datasets are listed in Table 1. The VV and VH polarization channels were used for the analysis and comparison between the L-band ALOS-2 and the C-band Sentinel-1.

3. Methodology

3.1. Multipolarization Features

In addition to the normalized radar cross section (NRCS) of the four polarization channels, the multipolarization features considered in this study are shown in Table 2. Processing consisted of radiometric calibration, map reprojection, and generation of the multilook covariance matrix C.


Table 2: Multipolarization parameters.

Parameter | Definition
σ⁰VV | Copolarized backscattering coefficient
σ⁰VH | Cross-polarized backscattering coefficient
σ⁰VV − σ⁰VH | Cross-polarized backscattering coefficient difference
σ⁰VH/σ⁰VV | Cross-polarized polarization ratio
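For concreteness, the four Table 2 parameters can be derived per pixel from the calibrated VV and VH backscatter. The following is a minimal NumPy sketch; the function and variable names are ours, since the paper does not name its processing software.

```python
import numpy as np

def polarization_parameters(sigma0_vv, sigma0_vh):
    """Compute the Table 2 multipolarization parameters from calibrated
    backscatter images in linear power units (not dB). A sketch assuming
    co-registered 2-D arrays of equal shape; names are hypothetical."""
    eps = 1e-10  # guard against division by zero over radar-dark pixels
    return {
        "sigma0_vv": sigma0_vv,                     # copolarized coefficient
        "sigma0_vh": sigma0_vh,                     # cross-polarized coefficient
        "difference": sigma0_vv - sigma0_vh,        # cross-polarized difference
        "ratio": sigma0_vh / (sigma0_vv + eps),     # cross-polarized ratio
    }

# Example: convert calibrated dB bands to linear power first
# vv_db, vh_db = ...  (e.g., terrain-corrected Sentinel-1 bands)
# params = polarization_parameters(10**(vv_db / 10), 10**(vh_db / 10))
```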

3.2. Imagery Fusion Method Based on WT

WT is widely used for imagery fusion through the multiresolution analysis of the spatial-frequency domain. The main idea of WT fusion involves retrieving multiresolution signals from WT and then fusing the images at different scales. It is noteworthy that the WT method was derived by performing an inverse WT using a low-resolution multispectral approximation image and the details from a high-resolution panchromatic image [21, 22].

The computation of the WT of a 2-D image involves recursive filtering and subsampling. At each level, there are four subband images: low-low (LL), low-high (LH), high-low (HL), and high-high (HH). An N-level decomposition thus yields 3N + 1 frequency bands, comprising 3N high-frequency detail bands and one LL low-frequency band [23]. The 2-D WT has a pyramid structure. Figure 2 is a schematic of image fusion based on the WT.
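The fusion rule sketched in Figure 2 (keep the LL approximation of the multispectral image, take the detail subbands from the high-resolution image) can be written in a few lines. This is a sketch assuming the PyWavelets library and co-registered inputs of equal size; the paper does not name its implementation.

```python
import pywt  # PyWavelets (an assumption; not named by the paper)

def wt_fuse(ms_band, hires, wavelet="db13", level=3):
    """Wavelet-transform fusion of one co-registered band pair: keep the
    low-frequency (LL) approximation of the multispectral band and the
    high-frequency (LH, HL, HH) details of the high-resolution image.
    Both inputs are 2-D float arrays of the same shape."""
    coeffs_ms = pywt.wavedec2(ms_band, wavelet, level=level)
    coeffs_hr = pywt.wavedec2(hires, wavelet, level=level)
    # coeffs[0] is the level-N LL approximation; coeffs[1:] are the
    # (LH, HL, HH) detail triplets ordered from coarse to fine scale.
    fused = [coeffs_ms[0]] + list(coeffs_hr[1:])
    # Note: the reconstruction may be one pixel larger for odd-sized
    # inputs; crop to the input shape if necessary.
    return pywt.waverec2(fused, wavelet)
```

The "db13" default mirrors the wavelet basis reported in Section 4.2.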

3.3. Unsupervised Classification Using the Maximum Likelihood Classifier (MLC)

In this study, the MLC method was adopted to achieve high classification accuracy. It relies on the Bayesian maximum likelihood approach, which discriminates different classes under the same a priori occurrence probability [24, 25]. This common classification procedure is widely used for polarimetric SAR classification. In order to compare the performance of our proposed WT fusion method with the traditional method, the MLC was applied to the processed data. The classifier labels each fused-image pixel according to its normalized imagery value. Hence, once the normalized imagery fusion value is estimated, a proper threshold is set to determine which category a pixel falls into. When the pixel under test is ambiguous, i.e., characterized by two or three categories, it is identified as belonging to a mixed category. Then, once pixels are grouped according to their normalized imagery fusion value, pixels within the same group are further divided into 30 small and almost equal-size clusters according to their pixel values. Once the small clusters are generated, they are merged into the user-selected number of output classes according to the Wishart metric [18].
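As a reference point, the core of the MLC, a Gaussian maximum likelihood classifier with equal a priori probabilities, could look as follows. This is a hypothetical NumPy/SciPy sketch, not the authors' code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mlc_fit(features, labels):
    """Estimate per-class mean vectors and covariance matrices from
    training pixels. features: (n_pixels, n_bands); labels: (n_pixels,)."""
    return {c: (features[labels == c].mean(axis=0),
                np.cov(features[labels == c], rowvar=False))
            for c in np.unique(labels)}

def mlc_predict(features, class_stats):
    """Assign each pixel to the class of maximum Gaussian likelihood,
    assuming equal priors as stated in the text."""
    classes = sorted(class_stats)
    loglik = np.column_stack([
        multivariate_normal(mean=m, cov=c, allow_singular=True).logpdf(features)
        for m, c in (class_stats[k] for k in classes)])
    return np.asarray(classes)[np.argmax(loglik, axis=1)]
```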

The schematic of our study is shown in Figure 3. Prior to the MLC, two further steps were implemented according to the flow chart. First, the optimal parameters were selected from both SAR and optical imagery based on the observed performance of the C-band Sentinel-1 and L-band ALOS-2 SAR and the GF-1 high-resolution optical imagery. Second, the optimal parameters from the SAR and optical imagery were input to the WT fusion after imagery registration (Figure 2). The MLC method was then applied to the fused image. Finally, the classification results were compared with the reference data to evaluate precision.

4. Results

4.1. Comparison of Multipolarization Parameters between Sentinel-1 and ALOS-2

The multipolarization parameter assessment consisted of the mean and standard deviation for both the C-band Sentinel-1 and the L-band ALOS-2 SAR. For each polarization parameter listed in Table 2, 10,000 pixels were randomly selected from the matched Sentinel-1 and ALOS-2 dataset for different land cover types: silt beach, sand beach, brush, shallow water, aquaculture area, rice field, and so on. The statistical results for the polarization parameters are shown in Figure 4. The four polarimetric parameters are also shown in Figure 5.

From Figure 4, it can be seen that the error for the C-band Sentinel-1 (blue) was much lower than that for the L-band ALOS-2 (red) SAR. In addition, the four polarimetric parameters performed differently in depicting the scattering discrepancy of coastal zone types. Compared with the other three parameters, the σ⁰VH/σ⁰VV ratio ranged only from 0.6 to 0.7 and barely contributed to the detection of the discrepancy. Therefore, no single polarimetric parameter could effectively meet the classification demands for the coastal zone types. We recommend adopting the combination of σ⁰VV, σ⁰VH, and σ⁰VV − σ⁰VH of the C-band Sentinel-1 as the optimal SAR parameters for coastal land cover classification.
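For reproducibility, the per-type statistics plotted in Figure 4 amount to a mean and standard deviation over the sampled pixels of each class; a sketch with hypothetical array names is given below.

```python
import numpy as np

def class_statistics(param, labels, class_names):
    """Mean and standard deviation of one polarimetric parameter over the
    sampled pixels of each land cover type (cf. Figure 4). `param` and
    `labels` are flat arrays over the sampled pixels."""
    return {c: (param[labels == c].mean(), param[labels == c].std())
            for c in class_names}

# Example of drawing the 10,000 random samples of one cover type:
# rng = np.random.default_rng(0)
# idx = rng.choice(np.flatnonzero(labels == "silt beach"), 10000, replace=False)
```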

4.2. Imagery Fusion Based on the WT Method

Prior to imagery fusion, the hue-intensity-saturation (HIS) transform was applied to the GF-1 and Sentinel-1 SAR imagery, respectively, to decompose the imagery into the H, I, and S spaces. We selected individual components from the GF-1 optical and Sentinel-1 SAR imagery for fusion; they were then fused into a new single component using the WT, which was used to reduce the spectral distortion introduced by the transform. Finally, the reverse HIS transform was performed to restore the fusion result to RGB space. The fused imagery is shown in Figure 6, in comparison with the GF-1 imagery and the result of the traditional HIS fusion method.
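Putting the pieces together, the fusion chain of this section (HIS decomposition, WT fusion of the intensity component with the SAR image, restoration to RGB) might be sketched as follows. We assume the simple intensity I = (R + G + B)/3 of classic fast IHS pan-sharpening, since the paper does not give its exact transform; `wt_fuse` refers to the wavelet sketch in Section 3.2.

```python
import numpy as np

def his_wt_fusion(rgb, sar, wt_fuse):
    """Sketch of the HIS + WT fusion chain for a normalized float RGB
    composite (values in [0, 1]) and a co-registered SAR image."""
    intensity = rgb.mean(axis=2)                      # simple IHS intensity
    fused_i = wt_fuse(intensity, sar)
    fused_i = fused_i[:intensity.shape[0], :intensity.shape[1]]  # crop
    # Fast IHS restoration: shifting every channel by the change in
    # intensity leaves hue and saturation approximately unchanged.
    return np.clip(rgb + (fused_i - intensity)[..., None], 0.0, 1.0)
```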

Compared with the pseudo-RGB composite from the GF-1 image (Figure 6(a)), the HIS fusion result (Figure 6(b)) contained the most texture detail, especially over mountainous areas; the bridge could be clearly distinguished from the complex ocean color in the background. However, spectral information was severely lost in the HIS fusion result. As a compromise between spectral and polarimetric information, we proposed a new fusion method based on the WT, whose result is shown in Figure 6(c). The wavelet basis used for decomposition and reconstruction was "db13." In the fused imagery, optical and polarimetric features are presented in the same image space. The fused imagery proposed in this study contained precise polarimetric scattering information, which was highly sensitive to strong scattering targets such as bridges, ships, and nearshore facilities. In addition, nearshore marine dynamic factors, such as waves and currents, can easily be traced in the fused imagery. To compare the performance of the traditional HIS fusion method and the proposed WT-based method, we examined the mean, standard deviation (Std), entropy, gradient, and correlation coefficient (Cor) of the three images [26–29].
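The five indicators can be computed as follows. This sketch uses the usual fusion-quality definitions (Shannon entropy of the gray-level histogram, average gradient, Pearson correlation against a reference band), which we assume match those of [26–29].

```python
import numpy as np

def fusion_metrics(img, ref):
    """The five Table 3 indicators for an 8-bit grayscale image `img`,
    with `ref` as the reference band for the correlation coefficient."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    entropy = -(p * np.log2(p)).sum()                  # information content
    gy, gx = np.gradient(img.astype(float))
    gradient = np.mean(np.sqrt((gx**2 + gy**2) / 2))   # average gradient
    cor = np.corrcoef(img.ravel(), ref.ravel())[0, 1]  # spectral fidelity
    return img.mean(), img.std(), entropy, gradient, cor
```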

It can be seen in Table 3 that our proposed method performed satisfactorily for optical and SAR imagery fusion. Although the Std of our proposed fusion result was larger, the other indicators were superior to both the GF-1 and the traditional HIS fusion results. Based on these statistics, we recommend adopting our proposed method for optical and SAR imagery fusion.


Table 3: Fusion quality indicators.

Imagery | Mean | Std | Entropy | Gradient | Cor
GF-1 | 64.323 | 63.118 | 6.669 | 4.713 |
HIS fusion | 62.932 | 75.029 | 6.670 | 6.182 | 0.914
Our proposed fusion method | 67.526 | 76.184 | 7.321 | 6.440 | 0.955

4.3. Classification Results Using the ML Method

An ML classification was then applied to the fusion results. The procedure was as follows:
(1) In accordance with an expert interpretation diagram (Figure 1), five types of coastal land cover were selected as classification marks. Classes "1"–"5" were assigned to pixels corresponding to sea, intertidal zone, aquaculture zone, buildings, and plant cover. For each type of coastal coverage, about 100,000 pixels were selected for training, and the reference data were used for validation.
(2) All the training samples were used as inputs for the MLC method.
(3) After training was completed, the validation samples were applied to generate the per-type identification accuracy and the kappa coefficient. The five test areas, which corresponded to the five regions defined by the reference map, were manually identified in the classification outputs.
The final coastal classification map is shown in Figure 7.

The classification outputs (Figure 7) showed that multisource remote sensing imagery fusion based on our proposed method performed satisfactorily for coastal coverage classification. The color bar in Figure 7 ranges from 1 to 5, representing the coverage types sea, intertidal zone, aquaculture zone, buildings, and plant cover. As shown in Table 4, the kappa coefficient and overall classification accuracy were 0.8236 and 85.9774%, respectively. Aside from the aquaculture zone, the producer accuracy of the other four coverage types was greater than 80%. The low accuracy of the aquaculture zone (50.11%) was attributed to the similar spectral and polarimetric signatures of the sea and the aquaculture zone.


Table 4: Classification accuracy.

Class | Producer accuracy (%) | User accuracy (%) | Producer accuracy (pixels) | User accuracy (pixels)
Sea | 100.00 | 96.60 | 2387/2387 | 2387/2471
Intertidal zone | 99.69 | 81.82 | 2574/2582 | 2574/3146
Aquaculture zone | 50.11 | 72.16 | 1112/2219 | 1112/1541
Buildings | 84.95 | 80.20 | 2439/2871 | 2439/3041
Plant cover | 92.63 | 99.42 | 1899/2050 | 1899/1910

Overall accuracy = (10411/12109) = 85.9774%, kappa coefficient = 0.8236.
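As a worked check, the overall accuracy and kappa coefficient follow directly from the per-class correct counts and the reference/classified totals reported in Table 4:

```python
def accuracy_and_kappa(diag, ref_totals, cls_totals):
    """Overall accuracy and kappa coefficient from the per-class correct
    counts (confusion matrix diagonal) and the reference/classified
    marginal totals, i.e., exactly the numbers reported in Table 4."""
    n = sum(ref_totals)                  # total validation pixels = 12109
    po = sum(diag) / n                   # observed agreement
    pe = sum(r * c for r, c in zip(ref_totals, cls_totals)) / n**2
    return po, (po - pe) / (1 - pe)      # kappa corrects for chance agreement

# Table 4 order: sea, intertidal, aquaculture, buildings, plant cover
po, kappa = accuracy_and_kappa(
    diag=[2387, 2574, 1112, 2439, 1899],
    ref_totals=[2387, 2582, 2219, 2871, 2050],
    cls_totals=[2471, 3146, 1541, 3041, 1910])
# -> po = 10411/12109 = 0.859774 (85.9774%), kappa = 0.8236, as reported
```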

The expert interpretation diagram was produced according to the criteria shown in Table 5.


Table 5: Classification criteria and interpretation signs.

Type | Geometry | Color | Texture
Shallow waters | Planar | Dark cyan | Smooth
Rocky coast | Planar or stripped | White or turquoise | Smooth
Sand beach | Stripped | White in the middle, turquoise offshore, brownish nearshore | Quite smooth
Silt beach | Schistose or stripped | Indigo | Quite smooth
Intertidal marshes | Irregular shape | Seashell rose | Smooth, irregular tidal bore marks
 | Irregular shape | Camel hair | Quite rough
Mangrove forest | Irregular shape | | Rough
Estuarine waters | Naturally curved or obviously flat, apparent boundary | | Smooth
Sand island | Banding or irregular shape | | Quite rough, obvious layering
Coastal salt lake | Regular shape | | Smooth
Coastal fresh lake | Irregular, natural shape | From deep red to red | Quite smooth
Reservoirs | Regular geometrical shape | Mazarine | Smooth
Aquafarm | Regular banding | Dark cyan or aqua | Clear boundaries, chequered with black and white
Rice field | Regular shape, ridges, ditches, and other agricultural facilities | Seashell rose | Smooth
Salt pan | Regular rectangle and continuous distribution | Dark green, white | Clear boundaries, square shape gray and white, coarse
Others | | |

5. Conclusions

This paper investigated the utility of multisource remote sensing imagery fusion based on the WT for coastal coverage classification. Five scenes of GF-1 optical imagery and four scenes of SAR (C-band Sentinel-1 and L-band ALOS-2) imagery were collected and used to identify the optimal combination of SAR band and polarimetric parameters. A fusion method based on the WT was proposed and applied for imagery fusion. Finally, the classification output was provided, along with a classification accuracy assessment and the kappa coefficient. The conclusions are as follows:
(1) In terms of the response of C- and L-band SAR to coastal coverage, the C-band Sentinel-1 is superior to the L-band ALOS-2 SAR. Moreover, compared with the other three parameters, the σ⁰VH/σ⁰VV ratio ranged only from 0.6 to 0.7 and hardly contributed to the detection of the scattering discrepancy. Therefore, the C band is verified to be superior to the L band, and the parameter subset of σ⁰VV, σ⁰VH, and σ⁰VV − σ⁰VH can be effectively used for coastal classification.
(2) In terms of fusion performance, although the Std of our proposed fusion result was larger, the other indicators (mean, entropy, and gradient) were superior to the GF-1 and HIS fusion results. In addition, the Cor statistic showed that the result of our proposed method was much better than the HIS fusion result.
(3) In terms of the classification assessment of our proposed fusion method, the kappa coefficient and overall accuracy were 0.8236 and 85.9774%, respectively, demonstrating satisfactory performance for coastal coverage mapping.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request. The ALOS-2 data used to support the findings of this study have been deposited in the JAXA repository (https://auig2.jaxa.jp/ips/home). The Sentinel-1 data used to support the findings of this study have been deposited in the ESA repository (https://scihub.copernicus.eu/). The GF-1 data used to support the findings of this study have been deposited in the China Center for Resources Satellite Data and Application repository (http://www.cresda.com/CN/).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors gratefully acknowledge the financial support from the National Natural Science Foundation of China (61601213) and from the China Postdoctoral Science Foundation (2017M611252).

References

1. C. W. Finkl, "Coastal classification: systematic approaches to consider in the development of a comprehensive scheme," Journal of Coastal Research, vol. 201, pp. 166–213, 2004.
2. C. Wilkinson, O. Lindén, H. Cesar, G. Hodgson, J. Rubens, and A. E. Strong, "Ecological and socioeconomic impacts of 1998 coral mortality in the Indian Ocean: an ENSO impact and a warning of future change?" Ambio, vol. 28, no. 2, pp. 188–196, 1999.
3. C. Liquete, G. Zulian, I. Delgado, A. Stips, and J. Maes, "Assessment of coastal protection as an ecosystem service in Europe," Ecological Indicators, vol. 30, pp. 205–217, 2013.
4. P. Polomé, S. Marzetti, and A. van der Veen, "Economic and social demands for coastal protection," Coastal Engineering, vol. 52, no. 10-11, pp. 819–840, 2005.
5. A. P. Cracknell, "Remote sensing techniques in estuaries and coastal zones: an update," International Journal of Remote Sensing, vol. 20, no. 3, pp. 485–496, 1999.
6. C. Giri, B. Pengra, Z. Zhu, A. Singh, and L. L. Tieszen, "Monitoring mangrove forest dynamics of the Sundarbans in Bangladesh and India using multi-temporal satellite data from 1973 to 2000," Estuarine, Coastal and Shelf Science, vol. 73, no. 1-2, pp. 91–100, 2007.
7. K. Liu, X. Li, X. Shi, and S. Wang, "Monitoring mangrove forest changes using remote sensing and GIS data with decision-tree learning," Wetlands, vol. 28, no. 2, pp. 336–346, 2008.
8. C. MacAlister and M. Mahaxay, "Mapping wetlands in the Lower Mekong Basin for wetland resource and conservation management using Landsat ETM images and field survey data," Journal of Environmental Management, vol. 90, no. 7, pp. 2130–2137, 2009.
9. P. W. M. Souza Filho and W. R. Paradella, "Use of RADARSAT-1 fine mode and Landsat-5 TM selective principal component analysis for geomorphological mapping in a macrotidal mangrove coast in the Amazon region," Canadian Journal of Remote Sensing, vol. 31, no. 3, pp. 214–224, 2014.
10. T. J. Malthus and P. J. Mumby, "Remote sensing of the coastal zone: an overview and priorities for future research," International Journal of Remote Sensing, vol. 24, no. 13, pp. 2805–2815, 2003.
11. A. Shalaby and R. Tateishi, "Remote sensing and GIS for mapping and monitoring land cover and land-use changes in the northwestern coastal zone of Egypt," Applied Geography, vol. 27, no. 1, pp. 28–41, 2007.
12. Y. Ma, J. Zhang, and Y. Gao, "High resolution remote sensing image classification of coastal zone and its automatic realization," in 2008 International Conference on Computer Science and Software Engineering, pp. 827–829, Hubei, China, December 2008.
13. J. Xiao, W. Ou, and H. Fu, "Land cover classification of Yancheng coastal natural wetlands based on BP neural network and ETM+ remote sensing data," Acta Ecologica Sinica, vol. 33, no. 23, pp. 7496–7504, 2013.
14. Y. Zhang, D. Lu, B. Yang, C. Sun, and M. Sun, "Coastal wetland vegetation classification with a Landsat Thematic Mapper image," International Journal of Remote Sensing, vol. 32, no. 2, pp. 545–561, 2011.
15. V. Alberga, "A study of land cover classification using polarimetric SAR parameters," International Journal of Remote Sensing, vol. 28, no. 17, pp. 3851–3870, 2007.
16. A. Buono, F. Nunziata, L. Mascolo, and M. Migliaccio, "A multipolarization analysis of coastline extraction using X-band COSMO-SkyMed SAR data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 7, pp. 2811–2820, 2014.
17. S. Gou, X. Li, and X. Yang, "Coastal zone classification with fully polarimetric SAR imagery," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1616–1620, 2016.
18. A. Buono, F. Nunziata, M. Migliaccio, X. Yang, and X. Li, "Classification of the Yellow River Delta area using fully polarimetric SAR measurements," International Journal of Remote Sensing, vol. 38, no. 23, pp. 6714–6734, 2017.
19. S. W. P. Rodrigues and P. W. M. Souza-Filho, "Use of multi-sensor data to identify and map tropical coastal wetlands in the Amazon of northern Brazil," Wetlands, vol. 31, no. 1, pp. 11–23, 2011.
20. E. W. Ramsey III, G. A. Nelson, and S. K. Sapkota, "Classifying coastal resources by integrating optical and radar imagery and color infrared photography," Mangroves and Salt Marshes, vol. 2, no. 3, pp. 187–187, 1998.
21. S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674–693, 1989.
22. H. Li, B. S. Manjunath, and S. K. Mitra, "Multi-sensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
23. H. Ma, C. Jia, and S. Liu, "Multisource image fusion based on wavelet transform," International Journal of Information Technology, vol. 11, no. 7, pp. 81–91, 2008.
24. T. M. Pellizzeri, P. Lombardo, and C. J. Oliver, "A new maximum likelihood classification technique for multitemporal SAR and multiband optical images," in IEEE International Geoscience and Remote Sensing Symposium, pp. 908–910, Toronto, Ontario, Canada, June 2002.
25. G. Liu, S. Huang, and A. Torre, "Bayesian classification of multi-look polarimetric SAR images with a generalized multiplicative speckle model and adaptive a priori probabilities," International Journal of Remote Sensing, vol. 19, no. 1, pp. 161–170, 1998.
26. W. Jianhua, Z. Jinxia, and L. Shanwei, "Fusion and classification of SAR and optical image with consideration of polarization characteristics," Acta Optica Sinica, vol. 37, no. 6, article 0628001, 2017.
27. G. Hong, A. Zhang, F. Zhou, and B. Brisco, "Integration of optical and synthetic aperture radar (SAR) images to differentiate grassland and alfalfa in Prairie area," International Journal of Applied Earth Observation and Geoinformation, vol. 28, no. 28, pp. 12–19, 2014.
28. H. Yang, H. D. Guo, and C. L. Wang, "Coast line dynamic inspect and land cover classification at Yellow River mouth using TM-SAR data fusion method," Geography and Territorial Research, vol. 4, 2001.
29. Z. Han and Y. Jin, "Classification of typical canopies over the Chong Ming eastern tidal flat from data fusion of ERS-2 SAR and Landsat ETM+," in International Conference on Space Information Technology, October 2006.

Copyright © 2018 Jiahui Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

