Abstract

This paper proposes an object-based approach to supervised change detection using uncertainty analysis for very high resolution (VHR) images. First, two temporal images are combined into one image by band stacking. Then, on the one hand, the stacked image is segmented by statistical region merging (SRM) to generate segmentation maps; on the other hand, the stacked image is classified by a support vector machine (SVM) to produce a pixel-wise change detection map. Finally, an uncertainty analysis of the segmented objects is performed to integrate the segmentation maps and the pixel-wise change map at the appropriate scale and generate the final change map. Experiments were carried out with SPOT 5 and QuickBird data sets to evaluate the effectiveness of the proposed approach. The results indicate that the proposed approach generates more accurate change detection maps than several reference methods and reduces the effects of classification errors and segment scale on change detection accuracy. The proposed method thus supplies an effective approach to supervised change detection for VHR images.

1. Introduction

In recent years, change detection has been one of the most important issues in remote sensing; it plays an important role in many practical applications, such as environmental monitoring [1], disaster detection [2, 3], urban growth [4, 5], and forest monitoring [6, 7].

Since change detection identifies changes by analyzing multitemporal images acquired over the same geographical area at different times [8], many factors affect its accuracy, such as sensor calibration, sunlight and atmospheric conditions, seasonal variation in the spectral characteristics of vegetation, and the registration strategy [9]. Hence, to reduce these impacts on change detection accuracy, preprocessing steps such as noise reduction, sensor calibration, and radiometric and geometric corrections are needed [10], although different change detection strategies must account for different factors. In the past decades, two change detection strategies were developed according to the nature of the data processing: unsupervised and supervised [11]. The former partitions pixels into unchanged and changed parts by comparing the two multitemporal images directly. The latter applies a supervised classification to each of the two multitemporal images and detects changes by postcomparison of the classification maps, namely, the postclassification method. The supervised strategy not only supplies "from-to" information, but is also more robust to differing sunlight and atmospheric conditions than the unsupervised strategy [12].

This study adopts the supervised strategy to detect changes. In an early stage, pixel-wise classification methods were widely used to obtain "from-to" change information. These methods were applied to detect land cover changes, and it was shown that the postclassification method is effective for multitemporal images acquired by different sensors or at different resolutions [13, 14]. The postclassification method was also applied to Landsat TM and SRTM-DEM data to detect habitat changes, and it was found to be capable of combining data from multiple sources [15]. Additionally, earthquake-induced damage can also be detected using the postclassification method [16, 17]. Many classification algorithms have been utilized in the postclassification method, such as maximum-likelihood classification (MLC) [18], decision trees [15, 16], transfer learning [19], and the support vector machine (SVM) [20].

However, traditional pixel-wise classification methods only consider the gray values of pixels; contextual information is not taken into account. Additionally, with the development of space technology, more very high resolution (VHR) multispectral images are available, acquired by satellite sensors such as SPOT-5, QuickBird, WorldView-2, GeoEye-1, and Ikonos. To improve change detection accuracy, contextual information should be considered in the postclassification process. Therefore, object-based classification for change detection was proposed and discussed in detail using spatial and spectral information [21, 22]. In object-based change detection, the pixels of the multitemporal images are segmented into objects, a classifier is applied to classify these objects, and changes are detected by comparing the classified objects of the multitemporal images. In addition, a change detection model based on neighborhood correlation images computed from contextual information and decision tree classification was proposed and found to yield results superior to pixel-wise and postclassification methods [23]. However, traditional postclassification change detection methods classify the multitemporal images separately and compare the classification results to find changes, regardless of whether pixel-based or object-based classification is used. Thus, the change detection results are significantly impacted by the classification of each multitemporal image. To reduce the impact of separate classification, one-pass classification [24, 25] was used to detect changes, in which "from-to" changes are detected directly by applying a classifier to the stacked multitemporal images. But for most one-pass and object-based change detection methods, selecting the appropriate segment scale to produce segmented objects is critical, and it seriously affects the change detection results.

In this paper, a novel object-based approach to change detection using uncertainty analysis (OBCDUA) is proposed, and Figure 1 shows the detailed process of the proposed approach. First, two multitemporal images with B bands each are stacked into one image of 2B bands. Second, the statistical region merging (SRM) method is adopted to segment the stacked image into objects with similar characteristics. Then samples of all the classes, both unchanged and changed, are selected, and a pixel-wise SVM classification is performed on the stacked image. Finally, the segmentation maps and the pixel-wise classification result are integrated through an uncertainty analysis of the segmented objects. Experiments were carried out on SPOT 5 and QuickBird data sets to evaluate the effectiveness of the proposed approach.

2. Proposed Supervised Change Detection Approach

Suppose that two multispectral images X1 and X2 of the same size with B bands each (assuming the two images contain equivalent bands) were acquired over the same geographical area at two different times t1 and t2.

2.1. Stack of Two Temporal Images

The traditional postclassification approach reduces the impacts of sunlight and atmospheric conditions on change detection results, but the accuracy of the separate classifications significantly affects the quality of the change maps. Hence, a one-pass classification, which performs classification only once, is adopted in this paper. After the two temporal images X1 and X2 are obtained, they are stacked into one image by simple band stacking.
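The band stacking step can be sketched in a few lines of numpy. This is a minimal illustration of the idea, not the authors' code; the function name and shapes are our own convention (height, width, bands).

```python
import numpy as np

def stack_images(x1, x2):
    """Band-stack two co-registered temporal images of shape (H, W, B)
    into a single (H, W, 2B) image, as in Section 2.1."""
    assert x1.shape == x2.shape, "images must be co-registered and equal-sized"
    return np.concatenate([x1, x2], axis=-1)
```

All subsequent segmentation and classification then operate on this single 2B-band image, so only one classification pass is needed.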

2.2. Segmentation of the Stacked Image

As is well known, some traditional algorithms (e.g., k-means, ISODATA) segment images into clusters, but the choice of initialization often affects the segmentation results. In order to generate robust segmentation results, SRM is adopted to segment the stacked image; it is able to cope with significant noise, handle occlusions, and perform scale-sensitive segmentations quickly [26].

In SRM, the stacked image I is the observed image and contains |I| pixels, each carrying one value per colour channel belonging to the set {0, 1, ..., g - 1}. Let I* denote the perfect (unobserved) scene underlying the observed image I, in which each colour channel of each statistical pixel of I* is described by a set of exactly Q independent random variables (taking values on [0, g/Q]) from which the observed channels are sampled. The tuning parameter Q controls the scale of the segmentation: the larger it is, the more regions exist in the final segmentation.

The realization of SRM relies on the interaction between a merging predicate and a merging order. The merging predicate can be expressed as

P(R, R') = true if |mean_a(R') - mean_a(R)| <= sqrt(b(R)^2 + b(R')^2) for every colour channel a, and false otherwise,

where R and R' represent a fixed couple of regions of I; mean_a(R) and mean_a(R') denote the observed averages of colour channel a in regions R and R', respectively; and b(R) is a merging threshold that shrinks as the region size |R| and the scale parameter Q grow [26]. If P(R, R') = true, R and R' are merged. As for the merging order, the function

f(p, p') = max_a |p_a - p'_a|

can be used to sort the pairs of adjacent pixels (p, p') in I, where p_a and p'_a are the pixel values of channel a. SRM is then performed to segment the stacked image over a range of scale values Q, producing segmentation maps from coarse to fine.
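The predicate and merging order above can be sketched as follows. This is a simplified sketch after Nock and Nielsen [26]: pixel pairs are sorted by f, then merged greedily with a union-find structure when the channel means satisfy the predicate. The exact form of the bound b(R) used here follows widely circulated SRM implementations and is an approximation we assume, not necessarily the formula used in this paper; all names are ours.

```python
import numpy as np

def srm_segment(img, Q=32.0):
    """Simplified Statistical Region Merging on an (H, W, C) uint8 image.
    Returns an (H, W) label map. Sketch only: the b(R)^2 bound below is
    the common implementation-level approximation, assumed here."""
    h, w, c = img.shape
    n = h * w
    g = 256.0
    parent = np.arange(n)                        # union-find forest
    size = np.ones(n, dtype=np.int64)            # region sizes
    mean = img.reshape(n, c).astype(np.float64)  # running channel means

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x

    # 4-connected pixel pairs, sorted by f(p, p') = max_a |p_a - p'_a|
    idx = np.arange(n).reshape(h, w)
    flat = img.astype(np.int32)
    pairs = []
    for dy, dx in ((0, 1), (1, 0)):
        p = idx[:h - dy, :w - dx].ravel()
        q = idx[dy:, dx:].ravel()
        d = np.abs(flat[:h - dy, :w - dx] - flat[dy:, dx:]).max(axis=2).ravel()
        pairs.append(np.stack([d, p, q], axis=1))
    pairs = np.concatenate(pairs)
    pairs = pairs[np.argsort(pairs[:, 0], kind="stable")]

    factor = g * g / (2.0 * Q)
    logdelta = 2.0 * np.log(6.0 * n)

    def b2(r):
        """Squared merging bound b(R)^2 for the region rooted at r."""
        R = size[r]
        return factor * (min(g, R) * np.log(R + 1.0) + logdelta) / R

    for d, p, q in pairs:
        rp, rq = find(int(p)), find(int(q))
        if rp == rq:
            continue
        # merging predicate: per-channel mean difference within the bound
        if np.all((mean[rp] - mean[rq]) ** 2 <= b2(rp) + b2(rq)):
            if size[rp] < size[rq]:              # merge smaller into larger
                rp, rq = rq, rp
            total = size[rp] + size[rq]
            mean[rp] = (mean[rp] * size[rp] + mean[rq] * size[rq]) / total
            parent[rq] = rp
            size[rp] = total

    return np.array([find(i) for i in range(n)]).reshape(h, w)
```

Note how Q enters only through the bound: a larger Q tightens b(R), forbids more merges, and so yields more, smaller regions, matching the scale behaviour described above.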

2.3. Pixel-Wise Classification Using SVM

As shown in Figure 1, on the one hand, the stacked image is segmented by SRM; on the other hand, the pixel-wise classification is performed on the stacked image using the robust classifier SVM, which prepares input data for the following uncertainty analysis.

The SVM is a nonparametric supervised classifier based on statistical learning theory [27]; it is robust to high-dimensional data sets and ill-posed problems and has been widely used for classification [28, 29]. The linearly separable binary case is considered first. Assume that a training data set of n samples is given, represented as {(x_i, y_i)}, i = 1, ..., n, where x_i is the spectral response of the i-th case and y_i in {-1, +1} is the class label. The SVM aims to find the optimal separating hyperplane that places the samples of each class on one side of it and makes the distance between it and the closest training samples of both classes as large as possible. A hyperplane in feature space is given by the equation w · x + b = 0, where the vector w is normal to the hyperplane, x is a point on the hyperplane, and the scalar b is the bias of the hyperplane from the origin. A separating hyperplane can then be defined by y_i (w · x_i + b) >= 1, and the optimal hyperplane can be obtained by maximizing the margin through the constrained optimization problem

min_{w,b} (1/2) ||w||^2   subject to   y_i (w · x_i + b) >= 1, i = 1, ..., n.   (3)

If the data set is not linearly separable, the constraints of (3) cannot all be satisfied in practice. Thus, slack variables ξ_i are introduced to relax the constraints, solving the problem of unsatisfiable constraints [30]; this is the soft margin. With the slack variables, (3) can be written as

min_{w,b,ξ} (1/2) ||w||^2 + C Σ_{i=1}^{n} ξ_i   subject to   y_i (w · x_i + b) >= 1 - ξ_i,  ξ_i >= 0,   (4)

where the constant C is a penalty parameter associated with training samples that lie on the wrong side of the hyperplane; it must be chosen carefully by the user, since too large a value imposes an excessive penalty on errors.

This classifier finds an optimal separating hyperplane as a decision function in a high-dimensional space in order to solve a nonlinear separation problem in the original input space. Hence, the input data are mapped into a high-dimensional space through a nonlinear vector mapping function, whose inner products can be replaced by a valid kernel function [31]. The classification decision function in the high-dimensional space is therefore given by

f(x) = sgn( Σ_{i=1}^{n} α_i y_i K(x_i, x) + b ),   (5)

where the α_i are the Lagrange multipliers obtained when solving for the optimal separating hyperplane and K(·, ·) is a kernel function.

Many kernel functions exist, such as the polynomial, Gaussian radial basis function (RBF), and hyperbolic tangent kernels; the Gaussian RBF, K(x_i, x_j) = exp(-γ ||x_i - x_j||^2), is adopted in this paper due to its interpretability and good performance [25]. Additionally, the one-against-one approach [32] is adopted to handle multiclass classification. Finally, the optimal hyperplanes obtained after training are used for classification.
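The soft-margin objective in (4) can be made concrete with a tiny self-contained sketch. The paper uses a kernel SVM with the one-against-one scheme; the toy below instead trains a linear binary SVM by stochastic sub-gradient descent on the primal soft-margin objective, purely to illustrate how C and the slack penalty act. The function names and hyperparameters are ours.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=100):
    """Soft-margin linear SVM via sub-gradient descent on
    (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b)).
    Illustrative sketch, not the kernel solver used in the paper."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                        # sample violates the margin
                w -= lr * (w - C * y[i] * X[i])   # regularizer + hinge term
                b += lr * C * y[i]
            else:
                w -= lr * w                       # only the regularizer acts
    return w, b

def predict(X, w, b):
    """Sign decision rule, the linear special case of (5)."""
    return np.sign(X @ w + b)
```

A large C lets margin violations dominate the update, shrinking the tolerated slack, which is exactly the trade-off the penalty parameter controls in (4).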

2.4. Object-Based Change Detection Using Uncertainty Analysis

After the stacked image is segmented by SRM and classified by SVM, the object-based change detection using uncertainty analysis can be performed. In fact, a spectral-spatial classification method (SSCM) has been proposed by Tarabalka et al. [33, 34]; however, it does not consider the uncertainty of the segment scale, so it cannot obtain results at the optimal scale for different objects, which decreases the accuracy of the results. Therefore, a change detection approach using an uncertainty analysis of the segment scale is proposed to make full use of spectral and spatial information, as shown in Figure 2.

Step 1. A moderate-scale segmentation map with segment scale Q is selected and integrated with the pixel-wise classification map. For a segmented object O, the number of pixels belonging to each class in O is counted and the classes are sorted by pixel count. The percentage of the class with the largest pixel number is calculated as

P_max = max_i N_i / (Σ_{j=1}^{c} N_j),

where P_max is the percentage of the class with the largest pixel number, N_i is the pixel number of class i, and c is the number of classes present in the object O. A threshold T is set and compared with P_max to analyze the uncertainty of the current segment scale for the object O. If P_max >= T, the current scale can be regarded as an appropriate segment scale for the object O, and the object is identified as the majority class. Otherwise, the current scale is too coarse to combine the segmented object with the pixel-wise classification result; the object retains its original state and is labeled as uncertain. As can be seen from the first row of Figure 2, the classes with the largest pixel number in objects A and B of the segmentation map are the dark and white classes, respectively. Supposing both percentages of the dark and white classes are larger than the threshold T, objects A and B are identified as dark and white, as shown in the second row of Figure 2. For object C, the percentage of the grey class with the largest pixel number is smaller than the threshold T, so the current segment scale is too coarse for object C; it therefore retains its original state and is labeled as uncertain.
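Step 1 amounts to computing, per object, the share of the dominant pixel-wise class and committing the object only when that share clears the threshold. A minimal sketch, with our own function and variable names (seg is a label map of object ids, cls_map the pixel-wise SVM classes, and -1 marks uncertain objects):

```python
import numpy as np

def label_objects(seg, cls_map, T=0.8):
    """For each segmented object, assign the majority class of the
    pixel-wise map if its share P_max >= T; otherwise mark the whole
    object uncertain (-1), i.e., the scale is too coarse for it."""
    out = np.full(cls_map.shape, -1, dtype=int)
    for obj_id in np.unique(seg):
        mask = seg == obj_id
        classes, counts = np.unique(cls_map[mask], return_counts=True)
        k = counts.argmax()
        if counts[k] / counts.sum() >= T:   # scale fine enough: commit
            out[mask] = classes[k]
    return out
```

Objects returned as -1 are exactly the ones handed to the finer-scale analysis of Step 2.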

Step 2. A further uncertainty analysis of the objects labeled uncertain above is performed using a more detailed segmentation map. In the SRM of Section 2.2, the larger the value of Q is, the more detailed the segmentation map is. The uncertain objects from the last step can be segmented into more objects in the more detailed segmentation map. For these new objects, Step 1 is applied to analyze their uncertainties under the current segment scale. If P_max >= T, a new object can be labeled as the class with the largest pixel number; otherwise, it is still regarded as uncertain and is analyzed using an even more detailed segmentation map. As can be seen from the second row of Figure 2, object C of the segmentation map is divided into objects C and D in the more detailed segmentation map. Through the uncertainty analysis, objects C and D are then identified as the grey and dark classes, respectively.

Step 3. The uncertainty analyses of all objects are performed until the most detailed segmentation map has been used. Although the objects are refined by the detailed segmentation maps, some still cannot meet the requirement and remain labeled as uncertain. Given that, majority voting is applied to the uncertain objects, and each is labeled with the class having the maximum number of pixels within it. Finally, the change map is obtained by integrating the segmentation maps and the pixel-wise change detection result through the uncertainty analysis.
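The three steps together form a coarse-to-fine loop, which can be sketched as follows. This is our own condensed reading of the procedure, not the authors' code: seg_maps is a list of object-label maps ordered from coarse (small Q) to fine (large Q), cls_map is the pixel-wise SVM result, and -1 again marks uncertain pixels.

```python
import numpy as np

def uncertainty_refine(seg_maps, cls_map, T=0.8):
    """Coarse-to-fine uncertainty analysis (Steps 1-3).  At each scale,
    commit objects whose dominant class share reaches T; finish with
    majority voting on whatever is still uncertain at the finest scale."""
    out = np.full(cls_map.shape, -1, dtype=int)
    for seg in seg_maps:                     # coarse -> fine scales
        pending = out == -1                  # only re-examine uncertain pixels
        for obj_id in np.unique(seg[pending]):
            mask = (seg == obj_id) & pending
            classes, counts = np.unique(cls_map[mask], return_counts=True)
            k = counts.argmax()
            if counts[k] / counts.sum() >= T:
                out[mask] = classes[k]
    # Step 3: majority vote for objects still uncertain at the finest scale
    final = seg_maps[-1]
    for obj_id in np.unique(final[out == -1]):
        mask = (final == obj_id) & (out == -1)
        classes, counts = np.unique(cls_map[mask], return_counts=True)
        out[mask] = classes[counts.argmax()]
    return out
```

Because each object is committed at the first (coarsest) scale where its dominant class is sufficiently pure, different objects effectively receive different segment scales, which is the point of the uncertainty analysis.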

The implementation of proposed OBCDUA includes four steps.

Step 1 (stacking of two temporal images). To reduce the impact of classifying twice on the change detection results, the two temporal images are combined into one image by simple band stacking, which prepares the input data.

Step 2 (segmentation of the stacked image). The stacked image is segmented into homogenous objects using SRM, and the segmentation maps provide spatial information in the following integration approach.

Step 3 (pixel-wise classification of the stacked image). Training samples of all classes are selected, such as road, grass, bare soil to grass, and water to grass, and the SVM is applied to classify the stacked image, generating the pixel-wise change map.

Step 4 (integration of the segmentation maps and pixel-wise change map using uncertainty analysis). The segmentation maps and pixel-wise change map are integrated through the uncertainty analysis approach, and the final change map is obtained by making full use of spectral and spatial information.

3. Experimental Results and Discussion

3.1. Experiments of SPOT Images

A data set comprising two VHR images acquired over the same geographical area of China is used in the experiments to evaluate the effectiveness of the proposed change detection approach. The images were acquired by SPOT 5 in April 2008 (t1) and February 2009 (t2), respectively; both were generated by fusing panchromatic and multispectral images and have three bands. A small area of 1120 × 480 pixels was cropped from the full images as the test site; the two color images are presented in Figures 3(a) and 3(b), respectively, and the two images were co-registered. A band stacking then combined the two temporal images into one stacked image. The reference data, including 12 classes of interest as shown in Figure 3, was generated by visual interpretation. Ten percent of the samples of each class were randomly selected as training samples to train the classifier, and the remaining 90 percent were used as testing data to evaluate the change detection results.

In order to evaluate the performance of the proposed change detection approach quantitatively, six indices are adopted to assess the results. (1) Classification overall accuracy: the probability that a reference pixel is correctly classified, over all classes. (2) Classification kappa coefficient: kappa = (p_o - p_e) / (1 - p_e), where p_o and p_e are the observed proportion of agreement and the proportion of agreement expected by chance in the classification error matrix, respectively. (3) Missed detections: MD = M_c / N_c, where M_c is the number of changed pixels in the testing data that are incorrectly classified as unchanged and N_c is the number of changed pixels in the testing data. (4) False alarms: FA = F_u / N_u, where F_u is the number of unchanged pixels in the testing data that are incorrectly detected as changed and N_u is the number of unchanged pixels in the testing data. (5) Total errors: TE = (M_c + F_u) / (N_c + N_u). (6) Reduction in remaining error (RRE): supposing result A is more accurate than result B, the RRE of method A over B can be calculated as RRE = (a_A - a_B) / (1 - a_B), where a_A and a_B are the accuracies of methods A and B, respectively. When the accuracy is already high, even a small increase is valuable; the RRE index is therefore used to emphasize such increases.
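The six indices can be computed directly from the predicted and reference label arrays. A minimal sketch with our own names; changed_classes lists which "from-to" labels count as change (pred and ref are 1-D arrays over the testing pixels):

```python
import numpy as np

def change_indices(pred, ref, changed_classes):
    """Overall accuracy, kappa, missed detections, false alarms,
    and total errors, following the definitions in the text."""
    oa = np.mean(pred == ref)
    # kappa from the confusion matrix
    labels = np.unique(np.concatenate([ref, pred]))
    cm = np.zeros((labels.size, labels.size))
    for i, a in enumerate(labels):
        for j, b in enumerate(labels):
            cm[i, j] = np.sum((ref == a) & (pred == b))
    n = cm.sum()
    po = np.trace(cm) / n
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2
    kappa = (po - pe) / (1 - pe)
    is_changed = np.isin(ref, changed_classes)
    pred_changed = np.isin(pred, changed_classes)
    missed = np.mean(~pred_changed[is_changed])       # changed -> unchanged
    false_alarm = np.mean(pred_changed[~is_changed])  # unchanged -> changed
    total_err = (np.sum(~pred_changed & is_changed)
                 + np.sum(pred_changed & ~is_changed)) / ref.size
    return oa, kappa, missed, false_alarm, total_err

def rre(acc_a, acc_b):
    """Reduction in remaining error of method A over a weaker method B."""
    return (acc_a - acc_b) / (1.0 - acc_b)
```

For instance, raising the overall accuracy from 0.8 to 0.9 halves the remaining error, so rre(0.9, 0.8) = 0.5, which is why RRE highlights improvements near the top of the accuracy range.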

3.1.1. Results of the Proposed Approach

The stacked image was segmented into homogenous objects using SRM, in which the parameter Q tunes the tradeoff between the segment scale and the computational complexity. In the experiments, a range of scale values Q was used to obtain segmentation maps; some of them are shown in Figure 4 (e.g., scales Q = 5, 7, and 9). As can be seen, a larger Q generates more numerous and smaller regions in the segmentation map, but some large homogenous regions remain. Additionally, smaller regions often correspond to smaller perceptual regions at different scales, and larger regions often appear as a whole in the stacked image, which indicates that SRM performs the segmentation reasonably. The stacked image was then classified by SVM, for which the Gaussian radial basis function (RBF) kernel was adopted and the two parameters C and γ were set to 100 and 0.167, respectively. The pixel-wise change map produced by the SVM classification is shown in Figure 5(a). As can be seen, much "salt and pepper" noise exists in the classification map because only spectral information was used, without spatial information.

Figure 5 shows the change maps generated by the proposed approach, where the scale value Q of the initial segmentation map was 8 and the threshold T was set to 0.8. It can be seen that about half of the objects are identified as uncertain when the segmentation scale is 8. As Q increases, some uncertain objects are labeled in more detailed segmentation maps. Few uncertain objects remain in the change map after analysis with the segmentation map of Q = 12, as shown in Figure 5(d). Finally, the majority voting method was applied to the uncertain objects, labeling each as the class with the maximum pixel number in the corresponding object, as shown in Figure 5(e). Both homogenous and detailed regions exist in the final change map, because the optimal segmentation scale is found for each object through the uncertainty analysis and spectral and spatial information is integrated properly.

Two parameters affect the accuracy of the proposed approach, namely, the threshold T for the uncertainty analysis and the initial segment scale Q. Figure 6 presents the change patterns of the accuracy indices for thresholds ranging from 0.7 to 0.9 in steps of 0.05, under the initial segment scale value of 8. The change patterns of the overall accuracy and Kappa coefficient are similar: both increase with the threshold up to the value 0.8 and then decrease, but the overall accuracy only varies between 93.0% and 93.5% and the Kappa coefficient only ranges from 0.925 to 0.930. Besides, the missed detections first increase and then decrease with increasing threshold, while the false alarms first decrease and then slightly increase. However, the total errors always stay around 2.9%. As can be seen from the above, the proposed approach is robust to the threshold T.

Figure 7 shows the change patterns of the accuracy indices for initial segment scale values Q ranging from 6 to 10, with the threshold fixed at 0.8. The change patterns of the overall accuracy and Kappa coefficient are again similar: both first increase with the scale up to Q = 8 and then decrease, but the overall accuracy and Kappa coefficient always remain around 93.5% and 0.926, respectively. The missed detections decrease slightly overall with increasing scale Q, while the false alarms always remain around 2.55%. Moreover, the total errors stay around 2.95%. In short, the proposed approach is also robust to the initial segment scale.

3.1.2. Comparison with Other Change Detection Methods

In order to verify the effectiveness of the proposed change detection approach, other methods were implemented and compared with the proposed one, namely, the traditional postclassification method (TPCM), the pixel-wise classification method (PWCM), and SSCM.

In the TPCM, the SVM was applied separately to the t1 and t2 images, classifying each of them into road, grass, water, bright buildings, dark buildings, and bare soil. The classification maps of the two images were then compared pixel by pixel, and the change map was obtained as shown in Figure 8(a). Since the postclassification change map included more change classes than the reference data, the excess classes were recorded as "other classes" and presented in white. The grass is easily recognized and classified into homogenous regions; however, much "salt and pepper" noise exists for some other classes.

Figure 8(b) presents the change map generated by the PWCM. For the grass and water, it obtains homogenous regions, but "salt and pepper" noise exists for many classes, as marked in the change map, especially around edge pixels.

In the SSCM, the pixel-wise classification change map was combined with segmentation maps at different scales using the majority voting method. Figure 9 presents the change patterns of the accuracy indices for different segment scales of the SSCM. It is found that the segment scale of the stacked image seriously affects the spectral-spatial change results. The overall accuracy and kappa coefficient generally grow with increasing segment scale. The false alarms grow until scale 7 and then drop as the segment scale increases further. The missed detections and total errors both drop with increasing scale and reach similar values at scale 12. This indicates that the selection of the segment scale is very important for obtaining accurate change detection results. Hence, the spectral-spatial change map at segment scale 12 was adopted and compared with the proposed one, as shown in Figure 8(c). The SSCM result mostly contains homogenous regions and looks better than those of TPCM and PWCM, as marked in Figure 8(c). However, the change map integrated using a coarse segmentation map may contain more errors, and its accuracy is seriously affected by the segment scale.

Figure 8(d) shows the change map generated by the proposed approach with an initial segment scale value of 8 and a threshold value of 0.8. As can be seen, the proposed approach generates more homogenous regions and removes more noise than the other three methods, as marked in the change detection maps. Additionally, the detailed change information is also accurately identified. The reason is that the proposed approach finds the optimal scale for each object through the uncertainty analysis, which appropriately integrates the segmented objects with the pixel-wise classification results.

Table 1 gives the accuracy indices of overall accuracy, Kappa coefficient, missed detections, false alarms, total errors, and RRE for the comparisons between the proposed OBCDUA and other three methods. As can be seen from Table 1, the overall accuracy and Kappa coefficient of change detection map generated by the proposed OBCDUA are 93.55% and 0.9283, respectively. Compared with other methods, the values of RRE for overall accuracy are 75.6%, 29.0%, and 17.5%, respectively. The OBCDUA reduces the missed detections, false alarms, and total errors when compared with TPCM, PWCM, and SSCM. The value of total errors for OBCDUA is 2.93%, and the RRE values are 74.2%, 32.2%, and 18.2%, respectively.

3.2. Experiments of QuickBird Images

Another data set is used in the second experiment to evaluate the effectiveness of the proposed change detection approach. The images of Xuzhou, with 770 × 650 pixels, were acquired by QuickBird in August 2005 (t1) and October 2010 (t2), respectively; both were generated by fusing panchromatic and multispectral images and contain three bands, as shown in Figures 10(a) and 10(b). The two images were co-registered and stacked into one image. The reference data, including 10 classes of interest as shown in Figure 10(c), was generated by visual interpretation. Ten percent of the samples of each class were again randomly selected as training samples, and the remaining 90 percent of each class were used as testing data to evaluate the change detection results.

3.2.1. Results of the Proposed Approach

In the second experiment, the same range of scale values Q was used to obtain segmentation maps. Some of the segmentation maps are shown in Figure 11 (e.g., scales Q = 5, 7, and 9). It can be seen that a larger Q generates more numerous and smaller regions in the segmentation map, while some large homogenous regions also remain. The stacked image was then classified by SVM, as shown in Figure 12(a), where the RBF kernel was adopted and the two parameters C and γ were set to 100 and 0.167, respectively. Obviously, much "salt and pepper" noise exists in the pixel-based classification map because only spectral information was used.

The scale value Q of the initial segmentation map and the threshold T were set to 8 and 0.8, respectively, and the change maps generated by the proposed approach are shown in Figure 12. It can be seen that many objects are labeled as uncertain at the initial scale. More and more of the uncertain objects are labeled in more detailed segmentation maps as Q increases, and most objects have been identified after the segmentation map of scale Q = 12, as shown in Figure 12(d). The final change map was created by applying majority voting to the remaining uncertain objects, as shown in Figure 12(e). Because the optimal segmentation scale is found for each object in the proposed approach, both homogenous and detailed regions can be detected in the final change map.

Figure 13(a) presents the change patterns of the accuracy indices for thresholds ranging from 0.7 to 0.9 in steps of 0.05, under the initial segment scale value of 8. The overall accuracy and Kappa coefficient both increase with the threshold up to the value 0.85 and then decrease, but the overall accuracy only varies between 93.5% and 94.2% and the Kappa coefficient only ranges from 0.923 to 0.931. Besides, both the false alarms and the total errors change only slightly, and the missed detections always stay around 12%. Therefore, the proposed approach is robust to the threshold T to a certain extent.

Figure 14 shows the change patterns of the accuracy indices for initial segment scale values Q ranging from 6 to 10, with the threshold fixed at 0.8. The change patterns of the overall accuracy and Kappa coefficient are similar: both first increase with the scale up to Q = 8 and then decrease slightly, but the overall accuracy and Kappa coefficient always remain around 94.0% and 0.925, respectively. Additionally, the missed detections, false alarms, and total errors all change only slightly with increasing scale Q. So the proposed approach is also robust to the initial segment scale in the second experiment.

3.2.2. Comparison with Other Change Detection Methods

Experiments were carried out to verify the effectiveness of the proposed change detection approach in comparison with TPCM, PWCM, and SSCM; the change maps are shown in Figure 15.

In the TPCM, the t1 image was classified into road, vegetation, water, and buildings, and the t2 image was classified into road, vegetation, water, buildings, and workshops using SVM. The change map was obtained by comparing the classification maps pixel by pixel, as shown in Figure 15(a). The excess classes not included in the reference data were recorded as "other classes" and presented in white. The vegetation was easily confused with water, and much "salt and pepper" noise exists in the change detection map. Figure 15(b) presents the change map generated by the PWCM. Compared with TPCM, it obtains a more accurate change detection map, but "salt and pepper" noise still exists, as in TPCM.

For the SSCM, the majority voting method was used to obtain the change map by combining the pixel-wise classification map with a segmentation map at a specific scale. Figure 16 presents the change patterns of the accuracy indices for different segment scales of the SSCM. The overall accuracy and kappa coefficient generally grow with increasing segment scale. The false alarms always grow with increasing segment scale. The missed detections drop unevenly, and the total errors first drop and then increase with increasing scale. It can be seen that the segment scale is very important for obtaining optimal change detection results. Finally, the change map at segment scale 12 was adopted and compared with the proposed one, as shown in Figure 15(c). Figure 15(d) shows the change map generated by OBCDUA with an initial segment scale value of 8 and a threshold value of 0.8. Both OBCDUA and SSCM generate more homogenous regions than TPCM and PWCM, as marked in Figure 15, but OBCDUA retains more detailed change information. As can be seen, the proposed approach generates more homogenous regions and removes more noise than the other three methods, while the detailed change information is still accurately identified. The reason is that the proposed approach finds the optimal scale for each object through the uncertainty analysis, which appropriately integrates the segmented objects with the pixel-wise classification results.

All the accuracy indices (overall accuracy, Kappa coefficient, missed detections, false alarms, total errors, and RRE) for the experiments on the QuickBird images are shown in Table 2. The overall accuracy and Kappa coefficient of the change detection map generated by the proposed OBCDUA are 94.29% and 0.9310, respectively. Compared with the other methods, the RRE values for the overall accuracy are 73.7%, 31.9%, and 10.1%, respectively. The OBCDUA generated the most accurate change map, with total errors of 2.70% and RRE values of 72.9%, 22.6%, and 15.4%.

Above all, the proposed approach produces the most accurate change detection maps and enhances the robustness of the results to the segment scale. This is because the two temporal images were stacked and a one-pass classification was performed, which reduces the impact of classification errors. More importantly, the uncertainty analysis approach was adopted to integrate the segmentation maps and the pixel-wise change map, which selects the optimal segment scale for each object and removes much "salt and pepper" noise simultaneously. In OBCDUA, the segment scale Q of SRM and the threshold T affect the change detection results, but the experiments indicate that the proposed OBCDUA is robust to both parameters to a certain extent. On the contrary, the segment scale of SRM seriously affects the results of SSCM. Additionally, the OBCDUA generates more accurate change maps than the other methods used in this study. In a word, the experimental results confirm the effectiveness of the proposed OBCDUA, which produces more homogenous regions in the change map and is more suitable for VHR images.

4. Conclusion

An object-based approach to change detection using uncertainty analysis is proposed in this paper. First, two temporal images are combined into one image by band stacking. Second, SRM is applied to the stacked image to segment it into homogenous objects. Then a pixel-wise SVM classification is also applied to the stacked image to generate a pixel-wise change map. Finally, an uncertainty analysis of the segmented objects is performed to appropriately integrate the segmentation maps and the pixel-wise change map into the final change map. Experiments were carried out with SPOT 5 and QuickBird data sets to evaluate the effectiveness of the proposed approach. It is confirmed that the OBCDUA not only improves the accuracy of the change detection results compared with TPCM, PWCM, and SSCM, but also enhances the robustness of the results to the segment scale. The proposed approach thus supplies an effective means of supervised change detection for VHR images.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are very grateful to Professor Penglin Zhang for providing SPOT 5 images in the experiments. The work presented in this paper is supported by Ministry of Science and Technology of China (2012BAJ15B04), Key Laboratory for National Geographic Census and Monitoring, National Administration of Surveying, Mapping and Geoinformation (2014NGCM17), and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.