Journal of Spectroscopy
Volume 2018, Article ID 3918954, 11 pages
https://doi.org/10.1155/2018/3918954
Research Article

Stratified Object-Oriented Image Classification Based on Remote Sensing Image Scene Division

1School of Information Engineering, China University of Geosciences, 29 Xueyuan Road, Haidian, Beijing 100083, China
2Key Laboratory of Virtual Geographic Environment, Ministry of Education, Nanjing Normal University, Nanjing, Jiangsu, China

Correspondence should be addressed to Dongping Ming; mingdp@cugb.edu.cn

Received 5 February 2018; Revised 29 March 2018; Accepted 26 April 2018; Published 3 June 2018

Academic Editor: Khalique Ahmed

Copyright © 2018 Wen Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The traditional remote sensing image segmentation method applies the same set of parameters to the entire image. However, because objects are scale dependent, the optimal segmentation parameters for an overall image may not be suitable for all objects. Following the idea of spatial dependence, objects of the same kind, which have similar spatial scales, often aggregate in the same area and form a scene. Based on this observation, this paper proposes a stratified object-oriented image analysis method based on remote sensing image scene division. The method first uses mid-level semantic features, which reflect an image's visual complexity, to divide the remote sensing image into different scenes; then, within each scene, an improved grid search algorithm is employed to optimize the segmentation result, so that a scale as close as possible to the optimum is adopted for each scene. Because stratified processing effectively reduces the complexity of the data, local scale optimization preserves the overall classification accuracy of the whole image, which is of practical value for remote sensing geo-applications.

1. Introduction

GEOBIA has become the mainstream method for processing high spatial resolution remote sensing images [1, 2]. Spatial dimensions are crucial to GEOBIA methods [3], and scale has a great influence on the object-oriented classification of remote sensing images. However, due to the complexity of feature types, there is no single optimal scale suitable for all objects [4–6]; scale remains a problem to be solved in image segmentation [7]. Segmentation quality is limited when parameters are set by user experience [8], and when an optimization algorithm determines the optimal segmentation parameters for the overall image, the result is a compromise across all objects.

Different objects or geographic phenomena have inherent spatial and temporal scales [9], and recognizing complex patterns in high-resolution images is increasingly difficult [10]. To extract objects or separate them from their surroundings, the processing scale (segmentation scale) needs to be set close to the objects' spatial scales [11]. Object-based scale selection is the key to object-based image analysis, and selecting an inappropriate scale causes over-segmentation or under-segmentation [12]. This reduces the accuracy and efficiency of multiscale information extraction from high spatial resolution images [13–15]. Many methods have been used to select optimal parameters for multiscale segmentation [16–25]; however, the optimal segmentation parameters for an overall image may not be suitable for all objects when processing large heterogeneous images [26, 27]. A key issue that remains to be resolved is determining a suitable segmentation scale that allows different objects and phenomena to be characterized in a single image [28, 29]. However, observations indicate a tendency: objects of the same type often have similar spatial scales and often aggregate in the same area. Therefore, a feasible approach is to divide the overall image into different scenes and then use an optimization algorithm to segment each scene image into image objects, which improves overall segmentation quality. Different from conventional scene classification, which aims to determine the class attribute of an image [30–32], the scene division discussed in this article aims to divide one whole image into several scenes. Methods for dividing remote sensing images into scenes fall roughly into three categories: hand boundary tracing, feature-layer threshold segmentation, and segmentation- or classification-based scene division.

The ordinary hand boundary tracing method [33–35] delineates scene boundaries based on color composition or on differences between feature values, such as brightness and NDVI. This method ensures that the result meets the user's subjective requirements, but it suffers from operator subjectivity and is highly time consuming [36].

The feature-layer threshold segmentation method chooses one feature, such as brightness or NDVI, and roughly divides the image into several scenes by setting thresholds [37, 38]. For example, the NDVI values of a plant-covered scene and a nonplant scene differ, so the image can be roughly divided into scenes using a defined threshold. In this method, the threshold greatly influences the result, and it is often selected using sample statistics or random samples. Therefore, both the threshold and the samples used for its statistics influence the division results.

The segmentation-based scene division method combines two ideas: one is to set large-scale parameters in image segmentation to obtain large objects whose sizes are close to scenes [39, 40], and the other is to merge small objects to form large scenes [41]. Software packages such as eCognition, SPRING, and MAGIC also provide image segmentation and classification operations [41], but the segmentation result is easily influenced by linear objects such as roads and rivers, so even when the coverage is the same, one desired scene may be separated into two or more scenes.

Additionally, an image can also be classified into scenes using texture, brightness, or NDVI [42, 43], but this is a simple classification operation. For example, it divides the image into plant and nonplant scenes, lit and shaded scenes, or rock and nonrock scenes. This method may require training samples, so it only performs well on specific images and lacks universality, which limits its application.

In summary, these methods have various problems: some are inefficient, suitable only for certain image types, influenced by subjective factors, or produce results that do not meet requirements. Therefore, a new method that incorporates mid-level semantic features (entropy, homogeneity, and mean) to divide a remote sensing image into different scenes is proposed. This method is not influenced by subjective factors and is suitable for most image types, because the hue value and its texture can be computed for almost every type of image. The results show that the method can efficiently improve classification accuracy when combined with segmentation parameter optimization methods, such as an improved grid search algorithm.

2. Methods

2.1. Scene Structure and Scale Dependence in Remote Sensing Image

Combining the scale effect of remote sensing with the geographic concept of scene structure may provide a breakthrough for the scale problem [44]. Scene structure is the composition and arrangement of geographic units of different scales in a certain geographical area. The spatial pattern of a geographic entity or phenomenon often exhibits a degree of scale dependence, so observing the same objects over different time spans and spatial ranges may yield different results or conclusions [44]. Different scene structures have different visual complexities, and more objects in a scene lead to a more complex scene. The scale of interest in this study is the segmentation scale. To obtain high-precision segmentation results, the segmentation scale needs to be similar to the inherent spatial scales of the geographic units.

2.2. The Principle of Stratified Segmentation

A scene is bounded by land planning or grouped by economic influence; the types and distribution patterns of objects within one scene are similar, but the scene structure may differ between scenes. Therefore, different scenes have their own suitable segmentation parameters. Most segmentation methods and parameter optimization algorithms aim to determine the best result for an overall image, but this is a compromise among different objects and is unsuitable for many of them. In this study, a stratified object-oriented image analysis based on remote sensing image scene division is proposed. This method breaks the complex entire image down into several scenes with simple spatial structure (Figure 1). Objects with similar colors have similar hue values, so features such as hue can be used to divide the image into scenes. Moreover, scenes may differ in visual complexity and structure, which the texture of the hue layer can reflect: the mean reflects the main hue (main object) of a scene, while entropy and homogeneity reflect the scene structure. According to entropy and homogeneity, the image can be divided into single-coverage scenes and complex-coverage scenes, and according to the mean value, the single-coverage scenes can be further divided into several feature-dominant scenes (see the sketch following Figure 1). When parameter optimization methods are used to segment each scene individually, every scene's final segmentation scale becomes as close as possible to the inherent spatial scale of its geographic units.

Figure 1: Workflow of stratified object-oriented image analysis based on remote sensing image scene division.
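To make this two-step division rule concrete, the following is a minimal, hypothetical sketch; the thresholds, hue ranges, and scene names are assumptions for illustration, not values used in the experiments.

```python
# Hypothetical illustration of the two-step scene labelling described above.
# All thresholds and hue ranges are assumptions, not values from the paper.

def label_scene(entropy_mean, homogeneity_mean, hue_mean,
                entropy_thresh=0.6, homogeneity_thresh=0.5):
    """Assign a coarse scene type from hue-texture statistics of a region."""
    # Step 1: entropy and homogeneity separate complex-coverage scenes
    # from single-coverage scenes.
    if entropy_mean > entropy_thresh and homogeneity_mean < homogeneity_thresh:
        return "complex coverage"
    # Step 2: the mean hue subdivides single-coverage scenes by dominant feature.
    if hue_mean < 0.2:        # reddish hues, e.g. buildings or bare land
        return "single coverage: built-up or bare"
    elif hue_mean < 0.45:     # greenish hues, e.g. vegetation
        return "single coverage: plants"
    else:                     # bluish hues, e.g. water
        return "single coverage: water"
```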
2.3. Segmentation Parameters Optimization Based on an Improved Grid Search Algorithm

An improved grid search algorithm was used to optimize the segmentation parameters. The grid search algorithm (GSA) builds a grid over two parameters within a certain range and finds an optimized parameter set by traversing all crossings in the grid. In this process, all combinations of parameters are evaluated. Given a sufficiently large parameter range and a sufficiently short step size, the method finds the global optimal solution and the optimal parameter combination at the same time; however, this is time intensive. To improve GSA efficiency for parameter optimization, an improved GSA (IGSA) is proposed. First, it obtains an approximate optimal solution using a large range and step size. Then, one parameter is fixed, and a small step size is used to search the other parameter within a narrow range around the approximate optimum. This improved method centers on the approximate optimal combination and expands along the crossing directions [45]; therefore, the first choice of step size is particularly important for grid searching with expanding crossing directions.
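A minimal sketch of this coarse-to-fine search is given below, assuming a user-supplied evaluate(scale, shape) function that segments the scene with the given parameters, classifies it, and returns the Kappa coefficient; all ranges and step sizes are illustrative.

```python
# Coarse-to-fine grid search sketch (IGSA-style); `evaluate` is assumed to
# return a quality score (e.g., Kappa) for one (scale, shape) combination.
import numpy as np

def igsa(evaluate,
         scale_range=(50, 500), shape_range=(0.1, 0.9),
         coarse_steps=(50, 0.2), fine_steps=(10, 0.05)):
    # Stage 1: coarse traversal of all parameter crossings.
    scales = np.arange(scale_range[0], scale_range[1] + 1, coarse_steps[0])
    shapes = np.arange(shape_range[0], shape_range[1] + 1e-9, coarse_steps[1])
    _, best_scale, best_shape = max(
        (evaluate(sc, sh), sc, sh) for sc in scales for sh in shapes)

    # Stage 2: fix shape, refine scale in a narrow window around the optimum.
    fine_scales = np.arange(best_scale - coarse_steps[0],
                            best_scale + coarse_steps[0] + 1, fine_steps[0])
    _, best_scale = max((evaluate(sc, best_shape), sc) for sc in fine_scales)

    # Stage 3: fix scale, refine shape the same way.
    fine_shapes = np.arange(max(best_shape - coarse_steps[1], 0.0),
                            best_shape + coarse_steps[1] + 1e-9, fine_steps[1])
    _, best_shape = max((evaluate(best_scale, sh), sh) for sh in fine_shapes)
    return best_scale, best_shape
```

Fixing one parameter at a time keeps the fine stage linear in the number of grid points rather than quadratic, which is where the efficiency gain over plain GSA comes from.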

3. Experiments and Analysis

3.1. Experimental Data

To test the method's robustness, two study areas were selected. The first is a QuickBird pansharpened image (image A) of Hualien City, Taiwan, China (Figure 2); its size is 12000 × 12000 pixels, with a resolution of 0.7 m per pixel. The primary land cover types in this image are buildings, plants, bare land, roads, and water. The second is a QuickBird multispectral image (image B) of the Alma Cray area (a copper mine) in Uzbekistan (Figure 2); its size is 3400 × 3400 pixels, with a resolution of 2.4 m per pixel. The cover types are buildings, plants, bare land, mines, and water.

Figure 2: Study areas and experimental image. (a) Image A. (b) Image B.
3.2. Scene Division: The First Step of Stratified Segmentation

As the processing steps in Figure 1 show, after preprocessing, the near-infrared, red, and green bands were selected for RGB color synthesis in both study areas. The image was then transformed from RGB color space to HSV color space. The hue layer values represent the coverage colors, and similar colors have numerically similar hue values. The calculation windows should be smaller than the objects but large enough to distinguish object features; on this basis, eight texture layers representing the hue layer's characteristics were obtained. The hue values reflect the color differences between scenes. Because the goal is scene division, the textures of different scenes are represented with different gray values. Most texture measures within a given group are strongly correlated: homogeneity, dissimilarity, variance, and contrast are strongly correlated, and entropy is strongly correlated with the second moment [46]. For scene division, the differences between scenes need to be magnified, so in the texture layers, the values of different scenes need to be distributed in different ranges. Therefore, the entropy, homogeneity, and mean layers, together with the HSV layers, were combined with the original image to produce an integrated image for scene division. The main colors of the different scenes differed, and the boundaries in these integrated images are more pronounced than in the original image.
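A minimal sketch of the hue-texture computation follows, assuming an RGB composite (built here from the NIR, red, and green bands) scaled to [0, 1]; the window size, quantization level, and the restriction to the three layers actually used for division (mean, entropy, homogeneity) are illustrative simplifications of the eight-layer setup, and the pixel-wise loop is written for clarity, not speed.

```python
# Hue extraction and GLCM texture layers on the hue channel (sketch).
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import graycomatrix, graycoprops

def hue_texture_layers(rgb, window=15, levels=32):
    """rgb: float array (H, W, 3) in [0, 1], e.g. a NIR-red-green composite."""
    hue = rgb2hsv(rgb)[..., 0]                     # H channel of HSV, in [0, 1]
    hue_q = (hue * (levels - 1)).astype(np.uint8)  # quantize for the GLCM
    h, w = hue_q.shape
    half = window // 2
    mean_l = np.zeros((h, w)); ent_l = np.zeros((h, w)); hom_l = np.zeros((h, w))
    for i in range(half, h - half):
        for j in range(half, w - half):
            win = hue_q[i - half:i + half + 1, j - half:j + half + 1]
            glcm = graycomatrix(win, [1], [0], levels=levels,
                                symmetric=True, normed=True)
            p = glcm[..., 0, 0]                    # normalized co-occurrence matrix
            hom_l[i, j] = graycoprops(glcm, "homogeneity")[0, 0]
            nz = p[p > 0]
            ent_l[i, j] = -np.sum(nz * np.log2(nz))            # GLCM entropy
            idx = np.arange(levels)
            mean_l[i, j] = np.sum(idx[:, None] * p)            # GLCM mean
    return mean_l, ent_l, hom_l
```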

The eCognition multiscale segmentation has proven to be a superior method [21]; thus, it was used for scene division and the subsequent scene image segmentation. The method has three parameters: scale, shape, and compactness. The parameters for image A were scale 1000, shape 0.1, and compactness 0.5; the parameters for image B were scale 1500, shape 0.1, and compactness 0.5. The bands chosen for image A's scene division were near-infrared, hue, mean, homogeneity, and entropy, with weights of 1 : 1 : 1 : 1 : 1. The bands chosen for image B's scene division were blue, green, red, near-infrared, hue, mean, homogeneity, and entropy, with weights of 1 : 1 : 1 : 1 : 2 : 2 : 2 : 2, which weighted the texture layers more heavily than the spectral bands. Figure 3 shows the scene division results after segmenting the images with these parameters and merging fragmented scenes. The overall image A was divided into six scenes, named according to their dominant characteristics: low covered building, high covered building, low covered plants, high covered plants, and ocean scenes (Figure 3); the clouds were removed from the image, so the result does not include a cloud scene. The overall image B was divided into a city scene, a mineral scene, and two low covered plants scenes (Figure 4).

Figure 3: Scene division result of image A.
Figure 4: Scene division result of image B.
3.3. Image Segmentation and Classification

The segmentation result has a great influence on the subsequent classification, so the classification accuracy reflects, to a certain extent, the quality of the segmentation [47]. Therefore, the classification result is used to evaluate the segmentation result in this study. Comparative experiments were set up to verify the effectiveness of the scene division-based object-oriented image analysis method. Except for the scene division, the two sets of experiments are identical; both the overall image and the scene images use the same classification and test samples.

Tables 1–4 show the numbers of classification and test samples. A larger number of classification features requires longer computational time [48], so only brightness, NDVI, NDWI, and the shape index were used as classification features. The GSA was used to obtain optimal segmentation results for the different scenes.
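For illustration, a minimal sketch of these four per-object features is given below, assuming mean band values per segmented object are available; the shape-index formula (border length over four times the square root of area) follows common GEOBIA usage and is an assumption here, not a statement of the exact eCognition definition.

```python
# Per-object classification features (sketch); inputs are assumptions.
import numpy as np

def object_features(obj_pixels, border_length, area):
    """obj_pixels: (n, 4) array of blue, green, red, NIR values for one object."""
    blue, green, red, nir = obj_pixels.mean(axis=0)
    brightness = (blue + green + red + nir) / 4.0
    ndvi = (nir - red) / (nir + red + 1e-9)        # vegetation index
    ndwi = (green - nir) / (green + nir + 1e-9)    # water index
    shape_index = border_length / (4.0 * np.sqrt(area))  # border smoothness
    return brightness, ndvi, ndwi, shape_index
```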

Table 1: Sample classification statistics of image A.
Table 2: Sample classification statistics of image B.
Table 3: Test sample statistics of image A.
Table 4: Test sample statistics of image B.

Figure 5 shows the Kappa coefficients [49] of the overall and scene images of image A for different parameters, and Figure 6 shows the same for image B. The optimal segmentation parameters are marked in the figures. The following conclusions are drawn from these results. First, both the overall and scene results show that classification accuracy is significantly affected by the segmentation parameters. Second, the optimal classification results of different images correspond to different segmentation parameters, which indicates that it is necessary to divide the overall image into different scenes. The optimal segmentation parameters of the four scene images and the overall image all differ, which means that the overall image's optimal segmentation parameters are merely a compromise among all the different objects. It is therefore unsuitable to use one set of parameters to segment all kinds of objects, and dividing the image into several scenes occupied by different objects reduces the influence of scale effects on image segmentation as much as possible.
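As a companion to these figures, the following is a short, self-contained sketch of how the Kappa coefficient [49] used to score each parameter combination can be computed from a confusion matrix of test samples; the example matrix is hypothetical.

```python
# Cohen's Kappa from a confusion matrix (rows: reference, columns: predicted).
import numpy as np

def kappa(confusion):
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                             # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n**2  # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical 3-class confusion matrix:
print(round(kappa([[50, 2, 3], [4, 40, 6], [1, 5, 45]]), 2))  # ~0.8
```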

Figure 5: Kappa coefficients of the scene and merged images of image A.
Figure 6: Kappa coefficients of the scene and merged images of image B.
3.4. Classification Comparison between the Stratified and Ordinary Segmentation Result

Tables 5 and 6 show the optimal segmentation parameters for each scene and for the overall images A and B; the accuracies were calculated using (1):

$$A = \frac{\sum_{i=1}^{k} n_{ii}}{n}, \tag{1}$$

where $A$ is the accuracy, $n$ is the number of reference samples, $i$ is the class index, $k$ is the number of classes, and $n_{ii}$ is the number of objects classified into class $i$ whose reference category is also class $i$. The last line of each table shows the scene images' merged result, for which no single optimal segmentation parameter applies. Compared with the overall image, the accuracy of the merged image A, produced by combining the scene images, increased by 8.70% according to (2), and the Kappa coefficient increased by 9.70%. In Table 6, compared with the overall image, the accuracy of the merged image B increased by 11.20%, and the Kappa coefficient increased by 21.12%. This improvement indicates that the proposed stratified segmentation method can improve segmentation accuracy and reduce the scale effect on classification results. The improvement is computed as

$$P = \frac{P_m - P_o}{P_o} \times 100\%, \tag{2}$$

where $P$ is the improvement in accuracy or Kappa coefficient, $P_m$ is the accuracy or Kappa coefficient of the merged image, and $P_o$ is the corresponding value of the overall image.
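To make the use of (2) concrete, here is a quick numeric check; the raw accuracies behind the reported 8.70% are not given in the text, so the input values below are hypothetical, chosen only so that the output matches that figure.

```python
# Relative improvement per (2); the input accuracies are hypothetical.
improve = lambda merged, overall: (merged - overall) / overall * 100.0
print(f"{improve(0.825, 0.759):.2f}%")  # 8.70%
```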

Table 5: The optimal segmentation parameters and classification results for different scenes of image A.
Table 6: The optimal segmentation parameters and classification results for different scenes of image B.

The classification results for the overall and scene images of image A are shown in Figure 7, and those for image B in Figure 8. The building classification based on scene division (Figure 7) provides more detail than that of the overall image. Compared with other objects, the building-occupied scene needs a smaller scale, which is reflected in Table 5. Building classification is more sensitive to the segmentation parameters than that of other objects, and unlike other objects, buildings reach higher accuracy only at smaller scale parameters. For image B, the difference between the overall and scene images is even clearer. The city scene (Figure 6) has a suitable scale much smaller than the other scenes, and even the two low covered plants scenes have different appropriate scales (Table 6). However, because of the similar spectral features of buildings and other objects, the building classification accuracy is also poor (Figure 6): in both the overall and scene images, some rock and waste disposal sites are incorrectly classified as buildings. During segmentation, as Figures 7(b) and 8(b) show, the stratified method provides segmentation results as close as possible to the objects' inherent scales.

Figure 7: Classification of image A. (a) Overall classification. (b) Scene classification.
Figure 8: Classification of image B. (a) Overall classification. (b) Scene classification.

4. Discussion and Conclusion

The proposed stratified segmentation method combines hue and hue layer textures to divide scenes, which is theoretically closer to the human visual mechanism. Through scene division, the complexity of the entire image is effectively reduced. In practice, the method is highly universal and easy to use, and the divisions are reasonably accurate. The results show that it can effectively improve the final classification accuracy, especially for large images in which the aggregation phenomenon is clear. The method can significantly aid remote sensing image classification and feature extraction. In addition, different segmentation methods can be used in different scenes depending on scene image characteristics, which may correspond to spatial scales, and thus further improve classification accuracy.

The proposed method also has shortcomings: the segmentation parameter optimization it relies on may increase time consumption. However, the focus of this study is stratified segmentation. Future work will therefore address a more efficient optimization algorithm, combining knowledge of spatial statistics to estimate the optimal segmentation parameters, and finding the most suitable segmentation method for scenes dominated by different coverage types.

Although the stratified method was implemented in eCognition, the idea can be adapted to all GEOBIA work. The method can also be applied recursively to very large images or complex nested landscapes: the stratified method divides the overall image into several scenes, and it is then reapplied to every scene to obtain subscenes, until appropriate image objects are segmented.

Data Availability

Image data were purchased from a commercial data sales company. Other data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors’ Contributions

Wen Zhou conceived and designed the study, performed the experiments, and wrote the paper. Dongping Ming proposed the research idea, supervised the research, and revised the manuscript. Lu Xu and Hanqing Bao helped to perform the experiments. Min Wang provided significant comments and suggestions.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (41671369 and 41671341), the “Fundamental Research Funds for the Central Universities,” the Open Fund of Twenty First Century Aerospace Technology Co. Ltd. (Grant no. 21AT_2016-07), and the Major Science and Technology Program for Water Pollution Control and Treatment (2017ZX07302003).

References

  1. J. Yang and Y. He, “Automated mapping of impervious surfaces in urban and suburban areas: linear spectral unmixing of high spatial resolution imagery,” International Journal of Applied Earth Observation and Geoinformation, vol. 54, pp. 53–64, 2017.
  2. T. Blaschke, “Object based image analysis for remote sensing,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 65, no. 1, pp. 2–16, 2010.
  3. P. Hofmann, V. Andrejchenko, P. Lettmayer et al., “Agent based image analysis (ABIA)–preliminary research results from an implemented framework,” in Proceedings of the GEOBIA 2016, pp. 14–16, Enschede, Netherlands, September 2016.
  4. S. M. Bhandarkar, J. Koh, and M. Suk, “Multiscale image segmentation using a hierarchical self-organizing map,” Neurocomputing, vol. 14, no. 3, pp. 241–272, 1997.
  5. R. Gaetano, G. Scarpa, and G. Poggi, “Hierarchical texture-based segmentation of multiresolution remote-sensing images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 7, pp. 2129–2141, 2009.
  6. J. Liu, J. Zhang, F. Xu, Z. Huang, and Y. Li, “Adaptive algorithm for automated polygonal approximation of high spatial resolution remote sensing imagery segmentation contours,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 2, pp. 1099–1106, 2013.
  7. J. H. Liu and Z. Y. Mao, “A survey on segmentation techniques and application strategy of high spatial resolution remote sensing imagery,” Remote Sensing Information, vol. 4, no. 1, pp. 80–85, 2009.
  8. S. Du, Z. Guo, W. Wang et al., “A comparative study of the segmentation of weighted aggregation and multiresolution segmentation,” GIScience and Remote Sensing, vol. 53, no. 5, pp. 651–670, 2016.
  9. D. Lu and Q. Weng, “A survey of image classification methods and techniques for improving classification performance,” International Journal of Remote Sensing, vol. 28, no. 5, pp. 823–870, 2007.
  10. W. Zhao, S. Du, and W. J. Emery, “Object-based convolutional neural network for high-resolution imagery classification,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 7, pp. 3386–3396, 2017.
  11. M. Baatz and A. Schäpe, “An optimization approach for high quality multi-scale image segmentation,” in Proceedings of the Beiträge zum AGIT-Symposium, pp. 12–23, Salzburg, Austria, 2000.
  12. M. Dongping, C. Tianyu, C. Hongyue, L. Li, C. Qiao, and J. Du, “Semivariogram-based spatial bandwidth selection for remote sensing image segmentation with mean-shift algorithm,” IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 5, pp. 813–817, 2012.
  13. D. Ming, J. Yang, L. Li, and Z. Song, “Modified ALV for selecting the optimal spatial resolution and its scale effect on image classification accuracy,” Mathematical and Computer Modelling, vol. 54, no. 3-4, pp. 1061–1068, 2011.
  14. S. W. Myint, P. Gober, A. Brazel, S. Grossman-Clarke, and Q. Weng, “Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery,” Remote Sensing of Environment, vol. 115, no. 5, pp. 1145–1161, 2011.
  15. I. Dronova, P. Gong, N. E. Clinton et al., “Landscape analysis of wetland plant functional types: the effects of image segmentation scale, vegetation classes and classification methods,” Remote Sensing of Environment, vol. 127, pp. 357–369, 2012.
  16. L. Dragut, O. Csillik, C. Eisank, and D. Tiede, “Automated parameterisation for multi-scale image segmentation on multiple layers,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 88, pp. 119–127, 2014.
  17. W. Zhao and S. Du, “Learning multiscale and deep representations for classifying remotely sensed imagery,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 113, pp. 155–165, 2016.
  18. D. Ming, J. Li, J. Wang, and M. Zhang, “Scale parameter selection by spatial statistics for GeOBIA: using mean-shift based multi-scale segmentation as an example,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 106, pp. 28–41, 2015.
  19. C. Witharana and D. L. Civco, “Optimizing multi-resolution segmentation scale using empirical methods: exploring the sensitivity of the supervised discrepancy measure Euclidean distance 2 (ED2),” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 87, pp. 108–121, 2014.
  20. C. R. Jung, “Combining wavelets and watersheds for robust multiscale image segmentation,” Image and Vision Computing, vol. 25, no. 1, pp. 24–33, 2007.
  21. Q. L. Tan, Z. J. Liu, and W. Shen, “An algorithm for object-oriented multi-scale remote sensing image segmentation,” Journal of Beijing Jiaotong University, vol. 31, no. 4, pp. 111–76, 2007.
  22. B. Johnson and Z. Xie, “Unsupervised image segmentation evaluation and refinement using a multi-scale approach,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 66, no. 4, pp. 473–483, 2011.
  23. S. Chabrier, B. Emile, C. Rosenberger, and H. Laurent, “Unsupervised performance evaluation of image segmentation,” EURASIP Journal on Advances in Signal Processing, vol. 2006, no. 1, pp. 1–13, 2006.
  24. J. Yang, P. Li, and Y. He, “A multi-band approach to unsupervised scale parameter selection for multi-scale image segmentation,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 94, pp. 13–24, 2014.
  25. J. Yang, Y. He, J. Caspersen, and T. Jones, “A discrepancy measure for segmentation evaluation from the perspective of object recognition,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 101, pp. 186–192, 2015.
  26. X. Zhang, P. Xiao, X. Feng, L. Feng, and N. Ye, “Toward evaluating multiscale segmentations of high spatial resolution remote sensing images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 3694–3706, 2015.
  27. F. Cánovas-García and F. Alonso-Sarría, “A local approach to optimize the scale parameter in multiresolution segmentation for multispectral imagery,” Geocarto International, vol. 30, no. 8, pp. 937–961, 2015.
  28. X. Zhang, X. Feng, P. Xiao, G. He, and L. Zhu, “Segmentation quality evaluation using region-based precision and recall measures for remote sensing images,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 102, pp. 73–84, 2015.
  29. C. Gonzalo-Martín, M. Lillo-Saavedra, E. Menasalvas, D. Fonseca-Luengo, A. García-Pedrero, and R. Costumero, “Local optimal scale in a hierarchical segmentation method for satellite images,” Journal of Intelligent Information Systems, vol. 46, no. 3, pp. 517–529, 2016.
  30. M. Kim, M. Madden, and T. Warner, “Estimation of optimal image object size for the segmentation of forest stands with multispectral IKONOS imagery,” in Lecture Notes in Geoinformation and Cartography, pp. 291–307, Springer, Berlin, Heidelberg, Germany, 2008.
  31. W. Zhao and S. Du, “Scene classification using multi-scale deeply described visual words,” International Journal of Remote Sensing, vol. 37, no. 17, pp. 4119–4131, 2016.
  32. X. Zhang, S. Du, and Y. C. Wang, “Semantic classification of heterogeneous urban scenes using intrascene feature similarity and interscene semantic dependency,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 5, pp. 2005–2014, 2015.
  33. Y. Mo and L. Zhou, “Sub region classification method—a new classification method to remote sensing image in mountain areas,” Carsologica Sinica, vol. 19, no. 4, pp. 360–365, 2000.
  34. L. Wen-Li, Z. Y. Yang, L. I. Ying et al., “Ili valley land use classification of SPOT remote sensing image based on partition and texture information,” Geomatics and Spatial Information Technology, vol. 36, no. 8, pp. 68–71, 2013.
  35. Y. L. Liu, S. Y. Yan, T. Wang et al., “A study on segmentation-based classification approaches for remotely sensed imagery,” Journal of Remote Sensing, vol. 6, no. 5, pp. 357–363, 2002.
  36. A. García-Pedrero, C. Gonzalo-Martín, and M. Lillo-Saavedra, “A machine learning approach for agricultural parcel delineation through agglomerative segmentation,” International Journal of Remote Sensing, vol. 38, no. 7, pp. 1809–1819, 2017.
  37. F. Yu, Y. Zeng, and Y. Xu, “Intelligent remote sensing classification of multi character data based on vegetation partition,” Remote Sensing for Land and Resources, vol. 1, pp. 63–70, 2014.
  38. S. Li, J. Wang, and Y. Chen, “An investigation on sub-region classification method of TM image in Dulong river basin,” Remote Sensing Information, vol. 3, pp. 40–43, 2006.
  39. X. Gigandet, M. B. Cuadra, A. Pointet, L. Cammoun, R. Caloz, and J.-Ph. Thiran, “Region-based satellite image classification: method and validation,” in Proceedings of the IEEE International Conference on Image Processing, pp. 832–835, Genoa, Italy, September 2005.
  40. J. Ton, A. K. Jain, W. R. Enslin, and W. D. Hudson, “Automatic road identification and labeling in Landsat 4 TM images,” Photogrammetria, vol. 43, no. 5, pp. 257–276, 1989.
  41. L. Yi, G. Zhang, and Z. Wu, “A scale-synthesis method for high spatial resolution remote sensing image segmentation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 10, pp. 4062–4070, 2012.
  42. D. Ming, J. Luo, and Z. Shen, “Research on region partition in high resolution remote sensing image based on GMRF-SVM,” Science of Surveying and Mapping, vol. 2, pp. 33–37, 2009.
  43. J. J. Yang, Q. G. Jiang, Y. L. Chen et al., “Lithology division for large-scale region segmentation based on LS-SVM and high resolution remote sensing images,” Journal of China University of Petroleum, vol. 36, no. 1, pp. 60–67, 2012.
  44. D. P. Ming, Q. Wang, and J. Y. Yang, “Spatial scale of remote sensing image and selection of optimal spatial resolution,” Journal of Remote Sensing, vol. 12, no. 4, pp. 529–537, 2008.
  45. J. Li, C. Zhang, and Z. Li, “Battlefield target identification based on improved grid-search SVM classifier,” in Proceedings of the International Conference on Computational Intelligence and Software Engineering, pp. 1–4, Bangkok, Thailand, January 2009.
  46. M. Hall-Beyer, The GLCM Tutorial, 2007, http://www.fp.ucalgary.ca/mhallbey.
  47. P. Hofmann, P. Lettmayer, T. Blaschke et al., “Towards a framework for agent-based image analysis of remote-sensing data,” International Journal of Image and Data Fusion, vol. 6, no. 2, pp. 115–137, 2015.
  48. F. Cánovas-García and F. Alonso-Sarría, “Optimal combination of classification algorithms and feature ranking methods for object-based classification of submeter resolution Z/I-imaging DMC imagery,” Remote Sensing, vol. 7, no. 4, p. 4651, 2015.
  49. J. Cohen, “A coefficient of agreement for nominal scales,” Educational and Psychological Measurement, vol. 20, no. 1, pp. 37–46, 1960.