Research Article | Open Access

Machine Learning-Based Classification for Crop-Type Mapping Using the Fusion of High-Resolution Satellite Imagery in a Semiarid Area

Aicha Moumni and Abderrahman Lahrouni

Scientifica, vol. 2021, Article ID 8810279, 20 pages, 2021. https://doi.org/10.1155/2021/8810279

Academic Editor: Hyung-Sup Jung
Received: 21 Dec 2020
Revised: 04 Mar 2021
Accepted: 28 Mar 2021
Published: 21 Apr 2021

Abstract

The monitoring of cultivated crops and of the different land cover types is a relevant environmental and economic issue for agricultural land management and crop yield prediction. In this context, this paper aims to evaluate the contribution of multisensor classification based on machine learning classifiers to crop-type identification in a semiarid area of Morocco. It is a very heterogeneous zone characterized by mixed crops (tree crops with annual crops, the same crop at different phenological stages during the same agricultural season, crop rotation, etc.). Such heterogeneity makes crop-type discrimination more complicated. To overcome these challenges, the present work is the first study in this area to use the fusion of high spatiotemporal resolution Sentinel-1 and Sentinel-2 satellite images for land use and land cover mapping. Three machine learning classifiers, artificial neural network (ANN), support vector machine (SVM), and maximum likelihood (ML), were applied to identify and map crop types in an irrigated perimeter. In situ observations from 2018, for the R3 perimeter of the Haouz plain in central Morocco, were used with satellite data of the same year to perform this work. The results showed that combining images acquired in the C-band and the optical range clearly improved crop-type classification performance (overall accuracy = 89%; Kappa = 0.85) compared to the classification results of optical or SAR data alone.

1. Introduction

Accurate and detailed knowledge of land cover/land use (LC/LU) is a crucial issue for research and for many operational applications in agriculture, such as crop water requirement estimation and crop yield prediction. The availability of remote sensing imagery, which offers access to large regions, is a major asset for producing LC/LU maps. Remote sensors operate on a variety of basic physical principles, recording the electromagnetic properties of the Earth's surface (i.e., the energy reflected (optical sensors), emitted (passive infrared or microwave thermal sensors), or diffused (active radar sensors)) and, therefore, provide a variety of information on the properties of land cover [1]. The use of optical remote sensing for LC/LU mapping is well established and can be considered effective, yet it exhibits some shortcomings when applied to large regions with complex land cover, or where cloud cover is frequent [2–5]. On the other hand, using SAR data for crop-type discrimination still faces many challenges, particularly because its recognition accuracy is not high enough [6, 7]; however, it has been shown to improve classification quality when combined with optical data [8–10]. To achieve these improvements and better identify land cover types, combining datasets acquired from remote sensors that rely on different physical fundamentals, and thus provide synergistic information on surface properties, is a promising approach [11–13], particularly with the recent free-of-charge image datasets (optical and radar images from the Sentinel satellite sensors) [14], which make it possible to fuse data of higher spectral resolution and compensate for the limits of using a single data product alone. Studies now support the hypothesis that the complementarity of these two types of data can provide improved information for LC/LU applications: the optical energy reflected by vegetation depends on leaf structure, pigmentation, and humidity, while the microwave energy scattered by vegetation depends on the size, density, orientation, and dielectric properties of elements comparable in size to the radar wavelength [15]. On the basis of this hypothesis, the objective of this study was to evaluate the usefulness of SAR data on the one hand, and of the combination of both types of remote sensing data on the other hand, to map and identify crop types.

The paper begins with a description of the test site and of the acquired satellite data, and details the methodology followed. 20 Sentinel-2 (S2) optical images and 12 Sentinel-1 (S1) SAR images were downloaded. In addition, an in situ dataset was acquired from a field campaign and supplemented with data from the Google Earth platform. The methodology consists of applying the ML, SVM, and ANN machine learning algorithms to the classification of the Sentinel-2 products, the Sentinel-1 products, and the combined products of the two sensors. The quantitative evaluation of the results was carried out through the overall accuracy (OA) and the Kappa coefficient (K) obtained by constructing the confusion matrix of each classified image, while the qualities of the images were compared visually.

These results were then followed by a comparison between the classification performance obtained with optical imagery, SAR imagery, and the combination of both datasets, and by a discussion evaluating the suitability of radar data for producing reliable LC/LU maps and demonstrating the possibility of improving classification accuracy through the synergistic use and fusion of data acquired from multiple sensors. Finally, the paper ends with the main conclusions and perspectives drawn from this study.

2. Materials and Methods

2.1. Test Site

The plain of Haouz is a vast plain with a surface of 6,000 km² which stretches over a length of approximately 150 km from east to west in the Marrakech-Safi region of central Morocco, including part of the High Atlas. It is a predominantly rural area where the agricultural sector plays an important role in the economic fabric, and approximately 3,100 km² of it is irrigated. The semiarid continental climate of the Haouz plain is characterized by an average annual rainfall of 250 mm, high temperatures in summer (37.7°C average maximum), and low temperatures in winter (4.9°C average minimum) [16]. The main agricultural production of the region remains cereal, with nearly 5.6 million quintals in 2011–2012, i.e., an average yield of 6.2 quintals per hectare [17].

In this work, we are interested in the left part of the irrigated sectors of the Haouz plain, called the R3 perimeter and located in the Sidi Rahal region about 40 km east of the city of Marrakech (Figure 1). It is a gravity-irrigated sector, divided into plots of different sizes (several plots form blocks). The majority of plots are used for cereal production (46% of the areas surveyed in 2012), followed by tree crops (mainly olive and orange) and market gardening. A significant part, varying from year to year, is left fallow or not cultivated. Cereal is sown between November and January, reaches its maximum development in late March, and is harvested in early summer. The characteristics of this site (flat relief, regular and large plots) make it a privileged study area for evaluating the contribution of satellite data to the extraction of information on class changes. For this reason, the R3 site has been intensively investigated in recent years (and still is) [5, 16, 18].

The different crops of the R3 perimeter were digitized and subdivided into plots using ArcGIS software. These plots were created in such a way as to delineate each crop separately, in order to identify the spectral and radar signal responses within the same plot. A total of 506 plots were recorded, with areas varying from 0.05 up to 5 hectares. We used very high spatial resolution images, updated for the year 2018 and provided by the Google Earth archives. Figure 2 shows the division of the area into 506 plots.

2.2. In Situ Data

Field data are used to extract profiles, calibrate or train the classification algorithms, and validate the results (accuracy assessment of the classified image). Samples were collected during a field campaign carried out in April 2018 and were supplemented by samples extracted from the archives of high spatial resolution images (Spot) on the Google Earth platform. The investigated plots are accompanied by a detailed description providing information on the type of land cover and are divided into two groups, namely, calibration samples and validation samples. Figure 3 illustrates the spatial distribution of the surveyed plots (calibration and validation). The land cover types were grouped into six main classes, chosen in terms of their abundance in the study area. This typology contains orange trees, olive trees, cereal, double cropping (cereal in winter + summer crops), fallow, and bare soil.

2.3. Remotely Sensed Data

The optical and SAR imagery used in this work were acquired from the Sentinel-2 and Sentinel-1 sensors, respectively. The choice of these two sensors is mainly due to the availability and cost of their products, as well as the high spatial, spectral, and temporal resolutions they offer.

2.3.1. Sentinel-2 Images

ESA (https://sentinel.esa.int/web/sentinel/home) produces and distributes ortho-rectified Sentinel-2 data expressed in top-of-atmosphere reflectance (Level-1C). Theia [19] produces and distributes Level-2A data, corrected for atmospheric effects using the MAJA software developed through coordination between CNES and CESBIO [20–22]. This processing chain uses multitemporal information to detect clouds and their shadows, estimate the optical thickness of aerosols and the amount of water vapour, and correct for atmospheric effects. The data are freely downloadable from the official Theia website, http://www.theia-land.fr/. The MAJA chain is an atmospheric correction and cloud detection system. It can accommodate time series of high-resolution images taken at constant or nearly constant viewing angles, and can process data from the Landsat and Sentinel-2 satellites.

(1) Data Processing. S2 images were processed to derive products based on the vegetation index known as NDVI (Normalized Difference Vegetation Index). This index, introduced by Tucker in 1979 [23], is now the most widely used vegetation index in remote sensing and indicates the importance and dominance of vegetation in a remotely sensed image. The transformation into NDVI was carried out for all 20 images, in order to extract the phenological evolution of the classes, since it can be modelled by the NDVI profiles. NDVI is calculated from the normalized difference of the near infrared (NIR) and red (R) bands of the S2 images according to the following formula:

NDVI = (NIR − R) / (NIR + R).
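As an illustration, this computation reduces to a few lines of NumPy; a minimal sketch, assuming the NIR and red bands have already been read into arrays (the array names and the zero-denominator guard are our own additions):

```python
import numpy as np

def compute_ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - R) / (NIR + R), computed pixel by pixel."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    denom = nir + red
    # Guard against division by zero over no-data pixels
    return np.where(denom == 0, 0.0, (nir - red) / denom)
```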

(2) Acquired Scenes. All available atmospherically corrected and cloud-free S2 images, from January 1 to December 31, 2018, were downloaded from the Theia Land Service website. This resulted in 20 images covering the whole year of 2018, well distributed over the four seasons (around two images per month), used to monitor the phenological evolution of the different classes chosen for the R3 test site.

2.3.2. Sentinel-1 Images

In synthetic aperture radar (SAR) imaging, microwave signals are emitted by an antenna towards a particular region of the Earth's surface, and the microwave energy reflected back to the spacecraft is measured. SAR images are created using the radar concept, which exploits the time delay of the backscattered signals to form an image. The intensity of each pixel in a SAR image reflects the proportion of microwaves backscattered from that ground region. The backscattering coefficient is a physical quantity that typically ranges from +5 dB for very bright objects to −40 dB for very dark surfaces.

Sentinel-1 is a synthetic aperture radar (SAR) mission that provides continuous all-weather, day-and-night images at C-band in four imaging modes (EW, IW, SM, WV) with various spatial resolutions (10, 20, 60 m) and coverages. This mission is based on a constellation of two identical satellites, Sentinel-1A and Sentinel-1B, launched separately.

2.3.3. Data Processing

(1) Speckle Noise Removal Filtering. Unlike optical imagery, SAR data are formed by the coherent interaction of the transmitted microwaves with the targets. Hence, they are affected by speckle noise, which arises from the coherent summation of the signals scattered from ground scatterers distributed randomly within each pixel. A SAR image appears visually noisier than an optical one; a speckle noise removal filter is therefore necessary before display and further analysis.

To reduce the speckle noise in the acquired S1 images, the Enhanced Lee Filter [24] was applied. The literature offers a large family of filters [25–27]; the Enhanced Lee Filter is one of the most efficient for SAR imagery, in particular for areas with a high degree of heterogeneity [28], and it is widely used in remote sensing applications [29, 30]. The Enhanced Lee Filter implemented in the ENVI software used for crop classification was applied with the different window sizes offered by the software, and the best speckle noise filtering was achieved with a 5 × 5 window [31, 32].

To interpret and analyze the SAR imagery, a further processing step was applied to obtain the backscattering coefficient values: the filtered images were transformed into decibels (dB) through the following formula:

σ⁰(dB) = 10 × log₁₀(DN),

where DN is the linear intensity value of each pixel.
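The paper applies ENVI's Enhanced Lee Filter; as a hedged illustration of these two preprocessing steps, the sketch below implements the simpler classic Lee filter together with the dB conversion, assuming a single-band linear-intensity NumPy array (the window size and the equivalent number of looks are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img: np.ndarray, size: int = 5, looks: float = 4.4) -> np.ndarray:
    """Classic Lee speckle filter under a multiplicative-noise model.

    `looks` approximates the equivalent number of looks of the product
    (roughly 4.4 for Sentinel-1 IW GRD, an assumption here).
    """
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)                # local mean
    mean_sq = uniform_filter(img * img, size)       # local mean of squares
    var = mean_sq - mean * mean                     # local variance
    cu2 = 1.0 / looks                               # speckle variation coeff. squared
    ci2 = var / np.maximum(mean * mean, 1e-12)      # image variation coeff. squared
    w = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + w * (img - mean)

def to_db(intensity: np.ndarray) -> np.ndarray:
    """Convert linear backscatter intensity to decibels."""
    return 10.0 * np.log10(np.maximum(intensity, 1e-10))
```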

(2) Texture Features. Texture is a native spatial attribute of an image. Due to the sensitivity of the SAR backscatter to the homogeneity, orientation, spatial relationships, and type of ground objects, this kind of imagery exhibits characteristic texture features [33]. Texture analysis is therefore highly important when using SAR data.

Texture analysis is based mainly on the computation of textures features from an image. These features are determined from the statistical distribution properties of the image spectral tone for a certain neighborhood [34, 35]. Texture analysis statistics are classified into first-order, second-order, and higher-order statistics. The Gray Level Co-occurrence Matrix (GLCM) is one of the methods to calculate the second-order statistical texture features.

Texture analysis from GLCM offers crucial and reliable information on the spatial relationship of the pixels of an image [36]. In general, GLCM estimates the probability that pixel values (in moving windows) occur in a given direction and at a certain distance in the image [37]. There are many texture features computed by GLCM [38], three of which were calculated in this study, namely, the mean, variance, and correlation (Table 1).


Texture       Formula

Mean          μ = Σ_{i,j=0}^{N−1} i · p(i, j)
Variance      σ² = Σ_{i,j=0}^{N−1} p(i, j) · (i − μ)²
Correlation   Σ_{i,j=0}^{N−1} p(i, j) · (i − μ_x)(j − μ_y) / (σ_x · σ_y)

The three S1-derived GLCM features were computed with a moving window size of 5 × 5, in all directions, following the method of [38]. These GLCM statistics were applied to both the VV parallel polarization and the VH cross polarization using the Sentinel toolbox of the SNAP software (SeNtinel Application Platform).

In these formulas, p(i, j) is the normalized gray-tone spatial dependence matrix; i and j represent, respectively, the rows and the columns for the mean, variance, and correlation measures; μ is the mean for the variance texture measure; N is the number of distinct gray levels in the quantized image; and μ_x, μ_y and σ_x, σ_y are the means and standard deviations of p_x and p_y, respectively, for the correlation texture measure [39].
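The paper computed these features in SNAP; for illustration, here is a minimal per-window sketch with scikit-image and NumPy, averaging the four directions as in the all-direction setting (the quantization to 32 gray levels is an assumption):

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_mean_var_corr(window: np.ndarray, levels: int = 32):
    """GLCM mean, variance, and correlation for one image window.

    `window` must contain integers in [0, levels), i.e., already quantized.
    """
    glcm = graycomatrix(window, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, :].mean(axis=-1)          # average the four directions
    i, j = np.indices(p.shape, dtype=float)
    mu_i = (i * p).sum()                        # GLCM mean
    var = ((i - mu_i) ** 2 * p).sum()           # GLCM variance
    mu_j = (j * p).sum()
    sd_i = np.sqrt(var)
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    corr = ((i - mu_i) * (j - mu_j) * p).sum() / max(sd_i * sd_j, 1e-12)
    return mu_i, var, corr
```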

(3) Acquired Scenes. An automatic processing chain generates single-date products and "ready-to-use" time series for a very large number of applications. Sentinel-1 data are ortho-rectified on the Sentinel-2 grid to facilitate the joint use of the two missions. This product, named "S1Tiling," was developed within the CNES radar service, in collaboration with CESBIO, to generate calibrated, ortho-rectified, and soon-to-be-filtered Sentinel-1 time series over any terrestrial region of the Earth. It is based on the ortho-rectification application for radar images of the Orfeo Tool Box. The resulting images are superimposable on the Sentinel-2 optical images, as they use the same geographical reference frame, giving access to Sentinel-1 data on Sentinel-2 tiles.

One SAR image per month was downloaded, in accordance with the important dates of the vegetative cycles of the crops over the study area, for a total of 12 SAR images. These images were chosen so that each was acquired no more than 2 days before or after the acquisition date of the optical image of the same period. Figure 4 shows the temporal distribution of the acquisition dates of the downloaded S2 and S1 images.

3. Methodology

The methodological approach used in this work consists of four main steps:
(1) Data acquisition
(2) Data preparation and preprocessing
(3) Extraction and analysis of the profiles (NDVI, VV, VH, VV/VH) and classification
(4) Evaluation of the obtained results by the confusion matrix (OA and K)

Figure 5 illustrates the sequence of the different steps just mentioned. Since we have already presented the first two steps in the previous sections, we will subsequently detail the 3rd and 4th steps.

3.1. Temporal Optical NDVI and SAR Backscatter Profiles Extraction

For each of the chosen classes, the NDVI, VV, VH, and VV/VH profiles were created, in order to study the possible confusions or separability to be expected between these different classes, and to be able to analyze and interpret the classification results. The temporal evolution of NDVI profiles allows the modelling of the dynamics of land cover types, and particularly the phenological evolution of the crop-type classes, which makes them relatively interpretable. On the other hand, the radar profiles are more complex for the vegetative classes, reflecting the evolution of the proportion of the backscattered signal, mainly influenced by the surface roughness of the canopy as well as its water content. Figures 6 and 7 show, respectively, for each of the chosen classes, the temporal evolution of NDVI, and that of the backscattered signals in the C-band (in VV, VH, VV/VH polarizations).
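For illustration, extracting such a per-class temporal profile amounts to masked averaging over the labeled pixels; a minimal sketch, assuming `ndvi_stack` is a (dates, rows, cols) array and `labels` a raster of class ids rasterized from the surveyed plots (both names are our own):

```python
import numpy as np

def class_profile(ndvi_stack: np.ndarray, labels: np.ndarray,
                  class_id: int) -> np.ndarray:
    """Mean NDVI per acquisition date over all pixels of one class."""
    mask = labels == class_id
    return np.array([img[mask].mean() for img in ndvi_stack])
```

The same function applies unchanged to the VV, VH, and VV/VH stacks.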

3.2. Machine Learning-Based Classification

The classification was performed using three supervised machine learning algorithms implemented in ENVI, namely, maximum likelihood (ML), support vector machine (SVM), and artificial neural network (ANN).

ML Classifier. This supervised classification technique relies on the statistics of class signatures extracted directly from the satellite imagery or from training samples representing the different land cover types, chosen on the basis of the ground truth data collected during field campaigns [5, 40]. ML is a pixel-based algorithm based on a multivariate probability density function of the classes [41]. The likelihood of a pixel belonging to each of the considered classes is calculated, and the pixel is then assigned to the class with the highest probability. ML is one of the most commonly used supervised classification methods in remote sensing for deriving land use/cover maps [3, 42–44].

SVM Classifier. The support vector machine (SVM) is one of the most useful machine learning algorithms. It is based on statistical learning theory and has been extensively exploited in remote sensing for LC/LU mapping and crop classification [44]. The main advantage of the SVM method is its ability to classify high-dimensional data with a small set of training samples [45]. SVM works with the pixels on the boundaries of the considered classes, which are named support vectors [46, 47]. For complex data that cannot be separated by linear hyperplanes, an optimal hyperplane separating the different classes can be defined through nonlinear mapping functions, called kernel functions. Several kernel functions can be used with the SVM classifier, but only four of them, the linear, polynomial, radial basis function (RBF), and sigmoid kernels, have been commonly used to classify satellite data [48]. The RBF kernel was used in the present study.

ANN Classifier. The ANN classifier was originally created, as a mathematical model, for data analysis and pattern recognition, in order to mimic the analytical operations and neural storage of the human brain. It is a parallel computing system consisting of a large number of basic processors with interconnections [49]. ANN is an extensively adopted machine learning technique for LC/LU mapping [44, 50, 51]. It has the ability to learn from training ground samples and store the pattern of each input variable (the land cover classes in the present study). After the training step, new data (pixels of the satellite image) are introduced to the ANN classifier, which recognizes the pattern in these data and classifies them.
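The classifications themselves were run in ENVI; as an illustrative equivalent, here is a minimal scikit-learn sketch of the RBF-kernel SVM case, where the feature vectors stack the per-pixel time series layers (the file names and the default C and gamma values are assumptions, not the parameters used in ENVI):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical inputs: per-pixel feature vectors (n_pixels, n_features) built
# by stacking the NDVI/VV/VH time series, and integer labels from the
# calibration plots.
X_train = np.load("train_features.npy")
y_train = np.load("train_labels.npy")

# RBF kernel, as in the paper; C and gamma are scikit-learn defaults here.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
svm.fit(X_train, y_train)

# Classify every image pixel, reshaped to (n_pixels, n_features)
X_image = np.load("image_features.npy")
predicted_labels = svm.predict(X_image)
```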

In the present study, we restricted ourselves to supervised methods, since these techniques generally provide better results in the production of LC/LU maps. We first classified the time series of NDVI, VV, VH, and VV/VH, and then combined the data from the two sensors to explore the different possible scenarios. Table 2 describes these scenarios in the order followed in the classification step.


Scenario  Group                          Layers used for classification  Sensor

1         Noncombined (single sensor)    NDVI                            S2
2                                        VV                              S1
3                                        VH                              S1
4                                        VV/VH                           S1
5                                        Texture                         S1
6                                        VV; VH                          S1
7                                        VV; VH; texture                 S1
8                                        VV; VV/VH                       S1
9                                        VH; VV/VH                       S1
10                                       VV; VH; VV/VH                   S1
11                                       VV; VH; VV/VH; texture          S1

12        Combined (multisensor fusion)  NDVI; VV                        S1 and S2
13                                       NDVI; VH                        S1 and S2
14                                       NDVI; VH; texture               S1 and S2
15                                       NDVI; VV/VH                     S1 and S2
16                                       NDVI; VV; VH                    S1 and S2
17                                       NDVI; VV; VV/VH                 S1 and S2
18                                       NDVI; VH; VV/VH                 S1 and S2
19                                       NDVI; VH; VV/VH; texture        S1 and S2
20                                       NDVI; VV; VH; VV/VH             S1 and S2

3.3. Accuracy Assessment

The evaluation of the classification results can be done qualitatively by comparing the images or quantitatively by measuring the accuracy of the LC/LU classification using statistical tools such as the confusion matrix and the Kappa index [52, 53]. Confusion matrices were calculated to reveal not only the general errors made at the level of each class when interpreting the results, but also errors due to confusion between LC/LU classes [54, 55].

The overall accuracy (OA) of the classification, used as one of the evaluation metrics, is the percentage of correctly classified pixels, given by the following equation:

OA = (D / N) × 100,

where D is the number of diagonal (correctly classified) pixels of the confusion matrix and N is the total number of pixels.

The second assessment metric is the Kappa coefficient (K), which is calculated by the following equation:

K = (P_o − P_e) / (1 − P_e),

where P_o is the obtained OA, i.e., the actual proportion of correctly classified land covers, and P_e is the probability of obtaining a correct classification by chance.
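Both metrics follow directly from the confusion matrix; a minimal sketch with scikit-learn, assuming `y_true` and `y_pred` hold the labels of the validation pixels:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def assess_classification(y_true: np.ndarray, y_pred: np.ndarray):
    """Overall accuracy (%) and Kappa coefficient from validation pixels."""
    cm = confusion_matrix(y_true, y_pred)
    oa = np.trace(cm) / cm.sum()          # diagonal = correctly classified
    kappa = cohen_kappa_score(y_true, y_pred)
    return oa * 100.0, kappa, cm
```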

3.4. Postclassification

After the classification process, the classified images including SAR bands showed a low sharpness. A majority filter was therefore applied to obtain smoothed images. It is a logical filter applied to a classified image: the number of pixels assigned to each class within a moving window is counted and, if the center pixel is not a member of the majority class (containing, e.g., five or more pixels within the window, depending on the window size considered), it is given the label of the majority class. The effect of this algorithm is to smooth the classified image by weeding out isolated pixels that were initially given labels dissimilar to those assigned to the surrounding pixels [56].
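The paper uses ENVI's Majority/Minority Analysis for this step; the following SciPy sketch is an illustrative equivalent, slow but explicit, with the default window size matching the 3 × 3 setting used later:

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_smooth(classified: np.ndarray, size: int = 3) -> np.ndarray:
    """Relabel each pixel with the majority class of its size x size window."""
    def majority(values: np.ndarray) -> float:
        classes, counts = np.unique(values.astype(int), return_counts=True)
        return classes[np.argmax(counts)]
    return generic_filter(classified, majority, size=size, mode="nearest")
```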

4. Results and Discussion

4.1. Temporal Analysis of the Profiles
4.1.1. NDVI Profiles

The NDVI vegetation index provides information on the importance or dominance of the vegetation cover and allows the modelling of the phenological stages of the different crops. For annual crops, for example, the start of the season begins when the rate of increase in NDVI values exceeds that of the previous successive observations during the vegetation growth period. The end of the season is defined as the time during the maturation period when there is a significant decrease in NDVI; generally, it corresponds to the period during which chlorophyll activity gradually decreases [27]. These evolutions can be seen in Figure 6, graphs B, C, and F, for cereal, double cropping, and fallow. The NDVI profiles of these crops reflect their seasonality, and the amplitude of the graphs becomes important during the periods of vegetation development.

Wheat and barley (grouped in the cereal class because of their high phenological similarity) are sown in early December, reach their maximum development at the end of March, and are harvested in mid-May and no later than early June (graph B). The same description remains valid for the fallow class (graph F), because this natural vegetation benefits from the winter rains and grows over the same period around March. The time profile of the double cropping class (graph C) is the union of the profiles of two crops: cereal and vegetable crops or corn.

Orange and olive belong to the perennial crops group, yet we have kept them separate. The NDVI value of these classes is generally above 0.3 throughout the year (Figure 6, graphs A and D). The decreases in the NDVI values of tree crops in summer are due to water stress, resulting from water scarcity and increased temperatures over this period, and sometimes also to leaf loss. The bare soil class is relatively easy to detect due to the absence of vegetation, resulting in NDVI values not exceeding 0.20 (Figure 6, graph E).

4.1.2. VV, VH, and VV/VH Profiles

The profiles in Figure 7 illustrate the temporal evolution in intensity (SAR backscattering coefficient, expressed in decibels, of the VV, VH, and VV/VH polarizations) of the six selected covers: orange, olive, cereal, double cropping, fallow, and bare soil. Unlike the NDVI profiles, the radar profiles are more complicated to interpret.

The first thing to notice is that, for all the studied classes, the intensity values in parallel polarization VV (between −7 dB and −12 dB) are above those in cross polarization VH (between −12 dB and −20 dB), while the intensity values of the VV/VH ratio are positive (varying between 4 dB and 10 dB). In particular, for the double cropping class, the VH and VV/VH signals show the most marked seasonality over the entire period, with amplitudes of the order of 3 dB and 5 dB, respectively, linked in VH polarization to the periods of leaf activity in phase with the NDVI. This may be the origin of the distinction of the double cropping class from the others.

For the tree crops, the olive class showed an almost stable VV/VH signal with slight variations, mainly in May, August, and November. Similar amplitude variations over the same periods were also recorded for the VV and VH signals (varying between a minimum of −11 and a maximum of −8 dB for the VV signal, and between a minimum of −16 and a maximum of −14 dB for the VH signal). Compared with the olive NDVI profile, the SAR backscatter signals vary in the same months in which the NDVI values decrease; this shift can be explained by the increase in temperature, which is directly related to soil moisture. The second tree crop, the orange class, presented a significant decrease in the VV/VH ratio values during the period between February and May (varying between a minimum of 4 dB and a maximum of 9 dB), while the VV and VH signals showed negligible shifts.

Barley and wheat are the most widespread annual crops in the study area. Both have a similar phenology and plant structure; we therefore considered them as the same crop class, namely, cereal. For this class, the VV and VH signals showed the same variations as double cropping, especially during spring and summer (from January to July). These shifts can reflect the high leaf activity of cereal during this period, consistent with the NDVI values. During the rest of the year, some variations were observed, particularly in November and December, due to precipitation. For the cereal VV/VH signal, no significant shifts were recorded. Finally, unlike the fallow and bare soil NDVI profiles, their backscatter signal values are distinguishable. Table 3 summarizes the ranges and amplitudes of the signals for all classes over all months of the year.


Classes           VV signal (dB)            VH signal (dB)            VV/VH signal (dB)
                  Min    Max    Amplitude   Min    Max    Amplitude   Min   Max   Amplitude

Orange            −11.5  −8.7   2.8         −16.9  −14.4  2.5         4.9   6.7   1.8
Olive             −9.5   −7.5   2.1         −15.5  −12.5  3.0         4.5   9.7   5.3
Cereal            −13.0  −7.8   5.2         −19.2  −13.5  5.7         4.9   7.1   2.2
Double cropping   −12.5  −7.1   5.4         −18.6  −13.3  5.3         4.6   9.7   5.1
Fallow            −12.7  −7.6   5.2         −20.4  −14.5  5.9         4.9   9.2   4.3
Bare soil         −12.0  −7.2   4.8         −19.6  −16.4  3.5         7.1   10.7  3.6

4.2. LC/LU Classification Results

This section presents the performance assessment of the classification algorithms applied to crop-type discrimination in the R3 perimeter from the time series of both Sentinel-1 and Sentinel-2 data. Quantitative results, based on the performance metrics (OA, K), are presented first, followed by qualitative results (a visual analysis of the overall quality of the classified images).

4.2.1. Quantitative Results

First, the results corresponding to the scenarios of noncombined Sentinel products are presented, followed by the classified images for these scenarios. Then, we study the classification results of the combined S1 and S2 products. Finally, we analyze these results by highlighting the scenarios leading to the best OA and K indices. The results of the first group (noncombined products) are gathered in Table 4.


Scenario                  SVM             ML              ANN
                          OA      K       OA      K       OA      K

NDVI                      85.83   0.81    84.38   0.79    81.54   0.76
VV                        64.81   0.52    62.56   0.50    60.45   0.48
VH                        71.35   0.61    56.96   0.44    66.90   0.55
VV/VH                     51.28   0.33    45.98   0.29    44.86   0.27
Texture                   58.22   0.44    50.92   0.38    54.85   0.40
VV; VH                    75.43   0.67    71.15   0.60    65.62   0.56
VV; VH; texture           76.72   0.69    72.83   0.63    54.89   0.41
VV; VV/VH                 72.97   0.59    66.96   0.55    58.38   0.38
VH; VV/VH                 72.10   0.62    66.95   0.55    Overestimation
VV; VH; VV/VH             75.42   0.67    Overestimation  Overestimation
VV; VH; VV/VH; texture    76.45   0.68    Overestimation  55.69   0.42

The classification results of the products from each of the two sensors confirm the superiority of the optical data in terms of LC/LU classification and of the SVM algorithm in terms of performance, especially as we encountered overestimation problems with the ML and ANN algorithms, even though the latter is one of the most powerful machine learning classifiers. The resulting OA of the NDVI time series classification is 85.83% (K = 0.81), while among the SAR scenarios the (VV and VH) and (VV, VH, and VV/VH) combinations gave the best OAs of 75.43% (K = 0.67) and 75.42% (K = 0.67), respectively. We noticed that the addition of the VV/VH band provided no improvement over VV and VH, while the inclusion of textural features improved the classification up to 76.72% (K = 0.69).

Considering that SAR sensors are not primarily intended for LC/LU mapping, the classification of the radar products presents acceptable results, despite being inferior to those obtained with NDVI (optical data); a Kappa index between 0.61 and 0.80 indicates "strong agreement" according to Landis and Koch, 1977 [53]. The results of the second group (combined optical and SAR products) are presented in Table 5.


Scenario                        SVM             ML              ANN
                                OA      K       OA      K       OA      K

NDVI; VV                        82.93   0.77    81.24   0.74    59.98   0.45
NDVI; VH                        86.10   0.81    84.46   0.78    72.18   0.62
NDVI; VH; texture               87.22   0.82    85.68   0.80    51.89   0.38
NDVI; VV/VH                     82.33   0.80    81.00   0.74    58.11   0.43
NDVI; VV; VH                    83.78   0.78    82.83   0.76    Overestimation
NDVI; VV; VV/VH                 83.37   0.77    83.39   0.76    Overestimation
NDVI; VH; VV/VH                 84.60   0.79    82.39   0.75    Overestimation
NDVI; VH; VV/VH; texture        85.52   0.80    84.36   0.78    55.09   0.42
NDVI; VV; VH; VV/VH             83.83   0.78    Overestimation  Overestimation
NDVI; VV; VH; VV/VH; texture    85.20   0.80    Overestimation  55.59   0.42

The classification results of the combined S1 and S2 products show a slight increase (up to approximately 2%) in the accuracies of scenarios 12 (NDVI; VV), 13 (NDVI; VH), and 14 (NDVI; VH; texture), and a slight decrease of the same order for the rest. As in the first group, some scenarios were overestimated by the ML and ANN machine learning algorithms.

4.2.2. Qualitative Results

To visually compare the quality of the classified images, Figure 8 presents the images obtained from the scenarios that yielded the highest accuracies. We noticed some differences between the classifications of the S1 and S2 products. First, we observed inconsistencies in the classifications of the olive tree and fallow classes. Moreover, the quality of the classified SAR images is lower than that of the images classified with NDVI in terms of sharpness, even though the speckle effect had been reduced by the noise removal filter. The combined products resulted in very sharp images, owing to the complementary contribution of the optical data. The presence of isolated pixels on homogeneous plots over the entire area contributed to the "noisy" appearance of the classified SAR products. We therefore tried to improve the quality of these images and the accuracy of the results by applying a post-processing step consisting of smoothing the classifications containing SAR products.

4.2.3. Postclassification: Smoothing

When classifying the scenarios containing SAR products, we notice in the images a lack of sharpness in the definition of the classified plots; the images seem "noisy." We then applied a smoothing to these images through the ENVI command called "Majority/Minority Analysis." This command filters the image by replacing the value of the central pixel of a window of size n × n (where n must be defined) with the majority value within this window. To avoid oversmoothing, we chose a window of size 3 × 3. Figure 9 presents an example of a classified SAR image before and after smoothing. Visually, the quality has improved, and this product, derived from S1 data only, looks more like the image classified from the NDVI time series.

This process was applied to the scenarios containing SAR products that gave the best results. Table 6 displays the overall accuracies and the Kappa indices of these scenarios before and after applying the smoothing.


Scenario                  Before smoothing    After smoothing
                          OA      K           OA      K

VV; VH                    75.43   0.67        77.36   0.67
VV; VH; texture           76.72   0.69        80.91   0.74
VV; VH; VV/VH             75.42   0.67        81.12   0.75
VV; VH; VV/VH; texture    76.45   0.68        81.01   0.75
NDVI; VH                  86.10   0.81        88.26   0.84
NDVI; VH; texture         87.22   0.82        88.90   0.85
NDVI; VV; VH              83.78   0.78        86.35   0.81

In order to better visualize these improvements, Figure 10 shows the evolution and the increase in the values of the OAs after smoothing.

Figure 11 illustrates smoothed images derived from purely SAR products of the scenarios in Table 6.

Figure 12 illustrates the three best thematic products, namely, (1) the crop-type map from the NDVI time series, (2) the smoothed crop-type map from the SAR time series (VV, VH, VV/VH, and texture), and (3) the crop-type map from the combined optical and SAR data (NDVI, VH, and texture).

5. Discussion

In general, the NDVI profiles model the phenological behavior of the classes and, compared with the VV, VH, and VV/VH profiles, they showed a great class separability that facilitates discrimination when applying the classification algorithms. When quantifying the results with confusion matrices, this separability is reflected in the gap between the overall accuracies and Kappa indices, with around 85% OA (K = 0.81) for the classification of NDVI (optical data) and around 77% OA (K = 0.69) for the combined product of VV, VH, and texture (the best scenario among the purely SAR data). Integrating the products of the two sensors S1 and S2 slightly increased the accuracy compared with using S2 data only, by about 2% at most.

A visual interpretation of the quality of the results showed that the classifications of multiband images containing S1 data remain "noisy" despite these images having been filtered before classification. To further improve the classification performance, we used a post-classification smoothing technique. Indeed, the results improved: the classification of data derived from S1 only reached a remarkable OA of 81.12% (K = 0.75) for the best scenario (VV, VH, and VV/VH). The smoothing also improved the classification accuracy of the combined products (S1 and S2), with an increase of approximately 3%, reaching the best accuracy over the 20 scenarios covered in total, about 89% OA (K = 0.85).

6. Conclusions

LC/LU mapping is, for land management, a necessary tool for understanding, analyzing, and monitoring land cover dynamics in order to better exploit the land. In addition, the availability of high spatiotemporal resolution satellite data, which have the advantage of covering surfaces at all scales (local, regional, and continental), makes this mapping possible. The use of optical remote sensing, including radiometric indices, combined with the textural characteristics of SAR remote sensing imagery, is generally accepted as a means of improving classification performance [39, 57–61]. The objective of this study was to identify and map the different crop types in the irrigated R3 perimeter using high-resolution multidate S1 and S2 satellite images of the year 2018. The R3 sector, characterized by a semiarid climate, is located around the town of Sidi Rahal, about 40 km southeast of the city of Marrakech.

The classification of the NDVI time series alone resulted in an OA of about 86% (K = 0.81), while the best result was obtained by integrating the S1 and S2 products, particularly by combining NDVI with the VH cross polarization and the textural characteristics (OA = 87%, K = 0.82 before smoothing and OA = 89%, K = 0.85 after smoothing). We noticed that, in general, the parallel VV polarization improves the accuracy only very slightly, while the derived VV/VH band hardly influences the quality of the classifications. The integration of all the S1 products (VV, VH, VV/VH, textural characteristics) with the NDVI series resulted in decreased OA and K values. The worst results were found for the classification of S1 products alone, with an OA not exceeding, in the best case, 77% (K = 0.69) before smoothing. A visual interpretation of the quality of the results showed that the classifications of combined images containing SAR data are still "noisy" despite the prior filtering. To further improve the accuracy, we used a post-classification smoothing technique. Indeed, the results improved: the classification of data derived from S1 only reached a significant OA of 81.12% (K = 0.75) for the best scenario (VV, VH, and VV/VH). The smoothing also improved the classification accuracy of the combined products (S1 and S2), with an increase of approximately 3%, reaching the best OA over the 63 classifications performed in total, about 89% (K = 0.85).

The results are largely in agreement with the literature. For our best classifications, the integration of S1 and S2 increased the accuracy and quality of the classification compared with the use of S1 or S2 alone. Moreover, the results obtained using S1 products only hold promise for the use of radar data in land cover mapping as an alternative to optical data.

As a perspective, it will be necessary to test the synergistic use of the S1 and S2 sensors for mapping a larger area (e.g., the Haouz plain). A decrease in performance can then be expected, linked to confusion due to the dimensionality and diversity of the land cover types. To overcome this, we can consider expanding the database, using very high resolution image processing techniques for the detection of orange and olive trees, and working further on the smoothing and parameterization of the profiles.

Data Availability

Sentinel-1 and Sentinel-2 images were acquired over the study area and used for the present work. Sentinel-2 images can be downloaded from the Theia CNES website: https://theia.cnes.fr/. Sentinel-1 data can be downloaded from the PEPS CNES website: https://peps.cnes.fr/. The SAR imagery was preprocessed using the Sentinel Application Platform (SNAP) and ENVI software.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

References

  1. N. Joshi, M. Baumann, A. Ehammer et al., "A review of the application of optical and radar remote sensing data fusion to land use mapping and monitoring," Remote Sensing, vol. 8, no. 1, p. 70, 2016.
  2. K. H. Hoang, "Cartographie de l'occupation du sol du bassin versant de la rivière Câu (Vietnam) au moyen d'images optiques et SAR en support à la modélisation hydrologique," Doctoral dissertation, Université du Québec, Institut national de la recherche scientifique, Quebec, Canada, 2014.
  3. A. Moumni, B. E. Sebbar, V. Simonneaux et al., "Sample period dependent classification approach for the cartography of crops in the Haouz plain, Morocco," Remote Sensing for Agriculture, Ecosystems, and Hydrology XXI, vol. 11149, Article ID 1114909, 2019.
  4. A. Moumni, B. E. Sebbar, V. Simonneaux et al., "Evaluation of Sen2agri system over semi-arid conditions: a case study of the Haouz plain in Central Morocco," in Proceedings of the 2020 Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), pp. 343–346, IEEE, Tunis, Tunisia, May 2020.
  5. B. Sebbar, A. Moumni, and A. Lahrouni, "Decisional tree models for land cover mapping and change detection based on phenological behaviors. Application case: localization of non-fully-exploited agricultural surfaces in the eastern part of the Haouz plain in the semi-arid central Morocco," ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLIV, pp. 365–373, 2020.
  6. Z. Sun, D. Wang, and G. Zhong, "A review of crop classification using satellite-based polarimetric SAR imagery," in Proceedings of the 2018 7th International Conference on Agro-Geoinformatics (Agro-Geoinformatics), pp. 1–5, IEEE, Hangzhou, China, August 2018.
  7. C.-a. Liu, Z.-x. Chen, Y. Shao, J.-s. Chen, T. Hasi, and H.-z. Pan, "Research advances of SAR remote sensing for agriculture applications: a review," Journal of Integrative Agriculture, vol. 18, no. 3, pp. 506–525, 2019.
  8. C. M. Sicre, R. Fieuzal, and F. Baup, "Contribution of multispectral (optical and radar) satellite images to the classification of agricultural surfaces," International Journal of Applied Earth Observation and Geoinformation, vol. 84, Article ID 101972, 2020.
  9. W. Masiza, J. G. Chirima, H. Hamandawana, and R. Pillay, "Enhanced mapping of a smallholder crop farming landscape through image fusion and model stacking," International Journal of Remote Sensing, vol. 41, no. 22, pp. 8739–8756, 2020.
  10. K. Van Tricht, A. Gobin, S. Gilliams, and I. Piccard, "Synergistic use of radar Sentinel-1 and optical Sentinel-2 imagery for crop mapping: a case study for Belgium," Remote Sensing, vol. 10, no. 10, p. 1642, 2018.
  11. B. Salehi, B. Daneshfar, and A. M. Davidson, "Accurate crop-type classification using multi-temporal optical and multi-polarization SAR data in an object-based image analysis framework," International Journal of Remote Sensing, vol. 38, no. 14, pp. 4130–4155, 2017.
  12. N. Kussul, L. Mykola, A. Shelestov, and S. Skakun, "Crop inventory at regional scale in Ukraine: developing in season and end of season crop maps with multi-temporal optical and SAR satellite imagery," European Journal of Remote Sensing, vol. 51, no. 1, pp. 627–636, 2018.
  13. Y. J. E. Gbodjo, D. Ienco, L. Leroux, R. Interdonato, R. Gaetano, and B. Ndao, "Object-based multi-temporal and multi-source land cover mapping leveraging hierarchical class relationships," Remote Sensing, vol. 12, no. 17, p. 2814, 2020.
  14. Z. Malenovský, H. Rott, J. Cihlar et al., "Sentinels for science: potential of Sentinel-1, -2, and -3 missions for scientific observations of ocean, cryosphere, and land," Remote Sensing of Environment, vol. 120, pp. 91–101, 2012.
  15. F. Tupin, Fusion of Optical and SAR Images in Radar Remote Sensing of Urban Areas, Springer, Dordrecht, Netherlands, 2010.
  16. A. Diarra, L. Jarlan, S. Er-Raki et al., "Performance of the two-source energy budget (TSEB) model for the monitoring of evapotranspiration over irrigated annual crops in North Africa," Agricultural Water Management, vol. 193, pp. 71–88, 2017.
  17. S. Khabba, L. Jarlan, S. Er-Raki et al., "The SudMed program and the Joint International Laboratory TREMA: a decade of water transfer study in the soil-plant-atmosphere system over irrigated crops in semi-arid area," Procedia Environmental Sciences, vol. 19, pp. 524–533, 2013.
  18. S. Belaqziz, S. Khabba, S. Er-Raki et al., Caractérisation de la distribution des irrigations par l'utilisation de la télédétection pour les réseaux d'irrigation gravitaire, African Association of Remote Sensing of the Environment, Johannesburg, South Africa, 2013.
  19. N. Baghdadi, M. Leroy, P. Maurel et al., "The Theia land data centre," in Proceedings of the Remote Sensing Data Infrastructures (RSDI) International Workshop, La Grande Motte, France, October 2015.
  20. O. Hagolle, G. Dedieu, B. Mougenot, V. Debaecker, B. Duchemin, and A. Meygret, "Correction of aerosol effects on multi-temporal images acquired with constant viewing angles: application to Formosat-2 images," Remote Sensing of Environment, vol. 112, no. 4, pp. 1689–1701, 2008.
  21. O. Hagolle, M. Huc, D. V. Pascual, and G. Dedieu, "A multi-temporal method for cloud detection, applied to FORMOSAT-2, VENμS, LANDSAT and SENTINEL-2 images," Remote Sensing of Environment, vol. 114, no. 8, pp. 1747–1755, 2010.
  22. O. Hagolle, M. Huc, D. Villa Pascual, and G. Dedieu, "A multi-temporal and multi-spectral method to estimate aerosol optical thickness over land, for the atmospheric correction of FormoSat-2, LandSat, VENμS and Sentinel-2 images," Remote Sensing, vol. 7, no. 3, pp. 2668–2691, 2015.
  23. C. J. Tucker, "Red and photographic infrared linear combinations for monitoring vegetation," Remote Sensing of Environment, vol. 8, no. 2, pp. 127–150, 1979.
  24. J. Zhu, J. Wen, and Y. Zhang, "A new algorithm for SAR image despeckling using an enhanced Lee filter and median filter," in Proceedings of the 2013 6th International Congress on Image and Signal Processing (CISP), vol. 1, pp. 224–228, IEEE, December 2013.
  25. C. Danilla, C. Persello, V. Tolpekin et al., "Classification of multitemporal SAR images using convolutional neural networks and Markov random fields," in Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 2231–2234, IEEE, Fort Worth, TX, USA, July 2017.
  26. X. Tang, L. Zhang, and X. Ding, "SAR image despeckling with a multilayer perceptron neural network," International Journal of Digital Earth, vol. 12, no. 3, pp. 354–374, 2019.
  27. H. G. Han and M. J. Lee, "A method for classifying land and ocean area by removing Sentinel-1 speckle noise," Journal of Coastal Research, vol. 102, no. 1, pp. 33–38, 2020.
  28. D. Hazarika, V. K. Nath, and M. Bhuyan, "A lapped transform domain enhanced Lee filter with edge detection for speckle noise reduction in SAR images," in Proceedings of the 2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS), pp. 243–248, IEEE, Kolkata, India, July 2015.
  29. S. Kharbouche and D. Clavet, "A tool for semi-automated extraction of waterbody feature in SAR imagery," Remote Sensing Letters, vol. 4, no. 4, pp. 381–390, 2013.
  30. M. Amani, B. Salehi, S. Mahdavi, and B. Brisco, "Separability analysis of wetlands in Canada using multi-source SAR data," GIScience & Remote Sensing, vol. 56, no. 8, pp. 1233–1260, 2019.
  31. A. Gaber, M. Koch, and F. El-Baz, "Textural and compositional characterization of Wadi Feiran deposits, Sinai Peninsula, Egypt, using Radarsat-1, PALSAR, SRTM and ETM+ data," Remote Sensing, vol. 2, no. 1, pp. 52–75, 2010.
  32. S. M. Abuzied, "Groundwater potential zone assessment in Wadi Watir area, Egypt using radar data and GIS," Arabian Journal of Geosciences, vol. 9, no. 7, pp. 1–20, 2016.
  33. Y. Zeng, J. Zhang, J. L. van Genderen, and Y. Zhang, "Image fusion for land cover change detection," International Journal of Image and Data Fusion, vol. 1, no. 2, pp. 193–215, 2010.
  34. P. D. Culbert, A. M. Pidgeon, V. St.-Louis, D. Bash, and V. C. Radeloff, "The impact of phenological variation on texture measures of remotely sensed imagery," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 2, no. 4, pp. 299–309, 2009.
  35. J. R. Irons and G. W. Petersen, "Texture transforms of remote sensing data," Remote Sensing of Environment, vol. 11, pp. 359–370, 1981.
  36. M. Hall-Beyer, "Practical guidelines for choosing GLCM textures to use in landscape classification tasks over a range of moderate spatial scales," International Journal of Remote Sensing, vol. 38, no. 5, pp. 1312–1338, 2017.
  37. Z. Szantoi, F. Escobedo, A. Abd-Elrahman, S. Smith, and L. Pearlstine, "Analyzing fine-scale wetland composition using high resolution imagery and texture features," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 204–212, 2013.
  38. R. M. Haralick, K. Shanmugam, and I. H. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.
  39. D. Deus, "Integration of ALOS PALSAR and Landsat data for land cover and forest mapping in Northern Tanzania," Land, vol. 5, no. 4, p. 43, 2016.
  40. J. G. Liu and P. J. Mason, Essential Image Processing and GIS for Remote Sensing, John Wiley & Sons, England, UK, 2009.
  41. T. M. Lillesand, R. W. Kiefer, and J. W. Chipman, Remote Sensing and Image Interpretation, Wiley, New York, NY, USA, 6th edition, 2008.
  42. L. Wang, Q. Dong, L. Yang, J. Gao, and J. Liu, "Crop classification based on a novel feature filtering and enhancement method," Remote Sensing, vol. 11, no. 4, p. 455, 2019.
  43. J. Zhang, Y. He, L. Yuan et al., "Machine learning-based spectral library for crop classification and status monitoring," Agronomy, vol. 9, no. 9, p. 496, 2019.
  44. A. Moumni, M. Oujaoura, J. Ezzahar, and A. Lahrouni, "A new synergistic approach for crop discrimination in a semi-arid region using Sentinel-2 time series and the multiple combination of machine learning classifiers," Journal of Physics: Conference Series, vol. 1743, IOP Publishing, Bristol, UK, 2021.
  45. Y. Shao and R. S. Lunetta, "Comparison of support vector machine, neural network, and CART algorithms for the land-cover classification using limited training data points," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 70, pp. 78–87, 2012.
  46. G. M. Foody and A. Mathur, "Toward intelligent training of supervised image classifications: directing training data acquisition for SVM classification," Remote Sensing of Environment, vol. 93, no. 1-2, pp. 107–117, 2004.
  47. P. M. Mather and M. Koch, "Classification," in Computer Processing of Remotely-Sensed Images, John Wiley & Sons, England, UK, 2011.
  48. T. Kavzoglu and I. Colkesen, "A kernel functions analysis for support vector machines for land cover classification," International Journal of Applied Earth Observation and Geoinformation, vol. 11, no. 5, pp. 352–359, 2009.
  49. E. Raczko and B. Zagajewski, "Comparison of support vector machine, random forest and neural network classifiers for tree species classification on airborne hyperspectral APEX images," European Journal of Remote Sensing, vol. 50, no. 1, pp. 144–154, 2017.
  50. H. Jiang, Y. Rusuli, T. Amuti, and Q. He, "Quantitative assessment of soil salinity using multi-source remote sensing data based on the support vector machine and artificial neural network," International Journal of Remote Sensing, vol. 40, no. 1, pp. 284–306, 2019.
  51. C. Zhang and Z. Xie, "Combining object-based texture measures with a neural network for vegetation mapping in the Everglades from hyperspectral imagery," Remote Sensing of Environment, vol. 124, pp. 310–320, 2012.
  52. J. Cohen, "A coefficient of agreement for nominal scales," Educational and Psychological Measurement, vol. 20, no. 1, pp. 37–46, 1960.
  53. J. R. Landis and G. G. Koch, "An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers," Biometrics, vol. 33, pp. 363–374, 1977.
  54. V. J. Mama and J. Oloukoi, "Évaluation de la précision des traitements analogiques des images satellitaires dans l'étude de la dynamique de l'occupation du sol," Télédétection, vol. 3, no. 5, pp. 429–441, 2003.
  55. S. Araya, G. Lyle, M. Lewis, and B. Ostendorf, "Phenologic metrics derived from MODIS NDVI as indicators for plant available water-holding capacity," Ecological Indicators, vol. 60, pp. 1263–1272, 2016.
  56. P. M. Mather, Computer Processing of Remotely-Sensed Images: An Introduction, John Wiley & Sons, Chichester, England, UK, 3rd edition, 2004.
  57. A. Chatziantoniou, E. Psomiadis, and G. Petropoulos, "Co-orbital Sentinel 1 and 2 for LULC mapping with emphasis on wetlands in a Mediterranean setting based on machine learning," Remote Sensing, vol. 9, no. 12, p. 1259, 2017.
  58. Z. Shao, H. Fu, P. Fu, and L. Yin, "Mapping urban impervious surface by fusing optical and SAR data at the decision level," Remote Sensing, vol. 8, no. 11, p. 945, 2016.
  59. N. Clerici, C. A. Valbuena Calderón, and J. M. Posada, "Fusion of Sentinel-1A and Sentinel-2A data for land cover mapping: a case study in the lower Magdalena region, Colombia," Journal of Maps, vol. 13, no. 2, pp. 718–726, 2017.
  60. J. J. Erinjery, M. Singh, and R. Kent, "Mapping and assessment of vegetation types in the tropical rainforests of the Western Ghats using multispectral Sentinel-2 and SAR Sentinel-1 satellite imagery," Remote Sensing of Environment, vol. 216, pp. 345–354, 2018.
  61. A. Whyte, K. P. Ferentinos, and G. P. Petropoulos, "A new synergistic approach for monitoring wetlands using Sentinels-1 and 2 data with object-based machine learning algorithms," Environmental Modelling & Software, vol. 104, pp. 40–54, 2018.

Copyright © 2021 Aicha Moumni and Abderrahman Lahrouni. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
