Special Issue: Macroscopic/Mesoscopic Computational Materials Science Modeling and Engineering
Research Article | Open Access
A Study of Image Classification of Remote Sensing Based on Back-Propagation Neural Network with Extended Delta Bar Delta
This paper proposes a model that extracts feature information quickly and identifies ground objects accurately, which traditional methods of remote sensing image classification cannot achieve. First, the selected Landsat-8 remote sensing data are preprocessed: radiometric calibration, geometric correction, optimal band combination, and image cropping. The preprocessed imagery is then combined with normalized geographic auxiliary information, a digital elevation model (DEM), and the normalized difference vegetation index (NDVI) to build a three-level neural network based on the back-propagation neural network with the extended delta bar delta algorithm (BPN-EDBD), whose parameters are determined so as to constitute a sound classification model. Next, classification categories and standards are determined via field surveys and related geographic information, and training samples are selected for BPN-EDBD learning and training, with the parameters revised and improved where necessary. The BPN-EDBD algorithm then classifies the preprocessed imagery, taking the DEM and NDVI data as additional input parameters, outputs the classification results, and undergoes accuracy assessment. Finally, the BPN-EDBD classifier is compared with traditional supervised classification algorithms, and different auxiliary geographic information is added in turn, to study the advantages and disadvantages of the BPN-EDBD classification algorithm.
As research on back-propagation neural (BPN) networks for remote sensing image classification has grown in recent years, many experts have realized that a BP neural network alone does not yield significant progress. They have therefore shifted toward improving the structure of neural network algorithms and choosing parameters more scientifically, so as to improve both classification accuracy and convergence speed. For example, in 1996 Abdelgadir et al. used a multilayer feed-forward neural network to classify land cover types in Minnesota and achieved good classification accuracy; in 1999, S. Gopal et al. successfully classified global land cover with a fuzzy neural network; in 2002, H. Ha surveyed the relationship between storm water quality and land use types in California and, combining rain water quality data with neural networks, effectively classified land use types; D. M. Miller et al. conducted large-scale remote sensing image classification with artificial neural networks and found that they can effectively resolve the misclassification problems of traditional classification methods, and that including texture features of ground objects in addition to the spectral characteristics of the image allows significant improvement in classification accuracy. When classifying multisource data with artificial neural networks improved through fuzzy mathematical methods, A. R. L. Tatnall and others achieved good results; and in neural network classification tests fusing remote sensing images of different scales, R. Pu and others played high-resolution and hyperspectral remote sensing images to their respective strengths, improving the accuracy of details.
Previous academic research on remote sensing image classification with BP neural networks shows that the BP neural network algorithm is flexible, capable of comprehensive analysis, and well suited to nonlinear remote sensing image data. It can therefore better resolve the "same object, different spectra; different objects, same spectrum" phenomenon and extract feature information from remote sensing images quickly and accurately. This study exploits the processing capacity and fault tolerance of the neural network's biologically inspired design, using a supervised back-propagation neural network modified with the extended delta bar delta (EDBD) algorithm to accelerate convergence and improve accuracy, and performs image classification with different kinds of auxiliary information.
The structure of the rest of the paper is as follows. Sections 2 and 3 present the fundamental notions of BPNN theory and the EDBD algorithm; Section 4 describes the accuracy assessment; Section 5 outlines the modeling flow; Section 6 applies this classification model to remote sensing image data from the Ningde district; and Section 7 presents the empirical results of this analysis. Finally, the paper concludes with a discussion.
2. Artificial Neural Network
An artificial neural network [7, 8] is an information processing model in which computer software and hardware mimic a biological neural network computing system. It has the ability to learn and, once learning is complete, produces results quickly and conveniently, so it has been applied to research in various fields. However, a neural network is composed of complex weights and transfer functions and is therefore often seen as a black-box model: the effect and importance of each input variable cannot be measured directly. Neural networks also come in many variants, each with its own algorithm and applicable problems. In particular, adding the EDBD algorithm to the back-propagation neural network (BPNN) yields a method that accelerates convergence over the learning cycles, suppresses the jumping and slowing phenomena, and improves accuracy. This type of classifier is rarely applied to satellite images, so the BPN + EDBD algorithm is the main research method here.
Further, the back-propagation neural network uses the sigmoid or hyperbolic tangent as its transfer function. These functions flatten as they approach their limiting values 0 and 1, so the derivative in those regions approaches 0, the error signal becomes too small, and the network learning steps become too small. To avoid this problem, the input range is usually limited to values from 0.1 to 0.9, which keeps the network away from the slow-learning regions near the extremes. This paper uses the maximum-minimum linear mapping method to map the data into that range with the conversion formula
\[
X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}} \times (0.9 - 0.1) + 0.1,
\]
in which \(X_{\max}\) represents the maximum limit of the input parameter, \(X_{\min}\) represents its minimum limit, \(X\) is the raw parameter value, and \(X'\) is the converted value.
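The linear rescaling above can be sketched as follows (a minimal NumPy implementation; the function name and sample values are illustrative, not from the paper):

```python
import numpy as np

def rescale(x, lo=0.1, hi=0.9):
    """Linearly map raw input values into [lo, hi] so the sigmoid
    transfer function never operates in its near-flat regions."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return lo + (x - x_min) / (x_max - x_min) * (hi - lo)

# e.g. an 8-bit band with digital numbers in [0, 255]
band = np.array([12, 87, 255, 0, 140], dtype=float)
scaled = rescale(band)  # all values now lie in [0.1, 0.9]
```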
3. EDBD Algorithm
When BPN learns using a fixed learning rate, it often encounters two phenomena, namely, the slowing phenomenon and the jumping phenomenon. Its contents are as follows.
3.1. Slowing Phenomenon
As Figure 1 shows, during the network learning process a certain link's weight changes may keep the same sign over consecutive cycles, that is, remain continuously positive or continuously negative, meaning the error gradient at that link does not change sign. This indicates that the weight value minimizing the error function has not yet been passed. If the decline of the error function keeps slowing under these conditions, it is known as the slowing phenomenon.
3.2. Jumping Phenomenon
As shown in Figure 2, during the network learning process a certain link's weight changes may alternate in sign over consecutive cycles, meaning the error gradient at that link is continuously interspersed with positive and negative signs. This indicates that the weight value minimizing the error function has been skipped over. If the error function value increases under these conditions, it is called the jumping phenomenon.
To alleviate these two phenomena, this study adopts the EDBD algorithm [7, 10] proposed by A. A. Minai and others, which improves both the accuracy and the rate at which the BPN learns. The principle of EDBD is to add inertia to the weight changes of the neural network, smoothing the convergence process and damping oscillation. The weight update is
\[
\Delta w_{ij}(t) = \alpha_{ij}(t)\,\delta_{ij}(t) + \mu_{ij}(t)\,\Delta w_{ij}(t-1),
\]
in which \(w_{ij}\) represents the weight between the \(i\)th neuron of one level and the \(j\)th neuron of the next level, \(\Delta w_{ij}(t)\) is its change during the \(t\)th learning cycle, \(\delta_{ij}(t)\) is the error gradient signal, \(\alpha_{ij}(t)\) is the learning rate, and \(\mu_{ij}(t)\) is the inertia factor, controlling the inertia proportion. The learning rate adapts as
\[
\Delta\alpha_{ij}(t) = \begin{cases}
\kappa_{\alpha}\exp\bigl(-\gamma_{\alpha}\lvert\bar{\delta}_{ij}(t)\rvert\bigr), & \bar{\delta}_{ij}(t-1)\,\delta_{ij}(t) > 0,\\
-\varphi_{\alpha}\,\alpha_{ij}(t), & \bar{\delta}_{ij}(t-1)\,\delta_{ij}(t) < 0,\\
0, & \text{otherwise},
\end{cases}
\]
in which \(\alpha_{\max}\) represents the learning rate limit, \(\alpha_{ij}(t) \le \alpha_{\max}\). The inertia factor adapts analogously:
\[
\Delta\mu_{ij}(t) = \begin{cases}
\kappa_{\mu}\exp\bigl(-\gamma_{\mu}\lvert\bar{\delta}_{ij}(t)\rvert\bigr), & \bar{\delta}_{ij}(t-1)\,\delta_{ij}(t) > 0,\\
-\varphi_{\mu}\,\mu_{ij}(t), & \bar{\delta}_{ij}(t-1)\,\delta_{ij}(t) < 0,\\
0, & \text{otherwise},
\end{cases}
\]
in which \(\mu_{\max}\) is the inertia factor limit, \(\mu_{ij}(t) \le \mu_{\max}\). The averaged gradient is
\[
\bar{\delta}_{ij}(t) = (1-\theta)\,\delta_{ij}(t) + \theta\,\bar{\delta}_{ij}(t-1),
\]
in which \(\theta\), with \(0 \le \theta < 1\), weights the amount of change over past learning cycles.
Functions (2) to (5) involve nine parameters: \(\kappa_{\alpha}\), \(\gamma_{\alpha}\), \(\varphi_{\alpha}\), \(\alpha_{\max}\), \(\kappa_{\mu}\), \(\gamma_{\mu}\), \(\varphi_{\mu}\), \(\mu_{\max}\), and \(\theta\). Their values are chosen according to the characteristics of the user's network, generally through trial and error or experience, to obtain a good combination. From the above we can see how the EDBD algorithm works: each output or hidden neuron has its own learning rate and inertia factor, and an exponential schedule controls their growth, so the learning rate increases rapidly in flat regions of the error surface (large increases where gradients are small) and increases slowly on steep slopes (small increases where gradients are large). Flat areas thus receive a large learning-rate boost without risking the jumping phenomenon. The inertia factor likewise changes with the number of learning cycles. The ceilings defined with formulas (3) and (4) prevent the inertia factor and learning rate from increasing without bound. The network updates its weights and thresholds every time a training example is presented; when all training examples have been loaded once, a learning cycle is complete. At the end of each cycle the network computes the MSE (mean-squared error) over the training and test examples to monitor learning. In this study, learning examples and test examples are fed into the network's cycle learning process, and learning is considered complete when their average MSE falls below 2.5%.
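The per-weight adaptation described above can be sketched as one EDBD update step. This is a minimal NumPy illustration of the mechanism (consistent gradient sign grows the learning rate and inertia, a sign flip shrinks them, ceilings cap both); the parameter values and the quadratic demo are illustrative, not the paper's settings:

```python
import numpy as np

def edbd_step(w, grad, state,
              k_a=0.095, g_a=0.1, phi_a=0.1, a_max=0.4,
              k_m=0.01, g_m=0.05, phi_m=0.01, m_max=0.9, theta=0.7):
    """One EDBD update. state = (alpha, mu, dw_prev, gbar): per-weight
    learning rate, inertia factor, previous weight change, and the
    exponentially averaged gradient."""
    alpha, mu, dw_prev, gbar = state
    same = gbar * grad                  # >0: consistent slope; <0: sign flip
    grow, flip = same > 0, same < 0
    # learning rate: exponential growth on flat stretches, decay on flips
    alpha = np.where(grow, alpha + k_a * np.exp(-g_a * np.abs(gbar)),
            np.where(flip, alpha * (1 - phi_a), alpha))
    # inertia factor adapts the same way with its own constants
    mu = np.where(grow, mu + k_m * np.exp(-g_m * np.abs(gbar)),
         np.where(flip, mu * (1 - phi_m), mu))
    alpha = np.minimum(alpha, a_max)    # ceiling on learning rate
    mu = np.minimum(mu, m_max)          # ceiling on inertia factor
    dw = -alpha * grad + mu * dw_prev   # gradient step plus inertia
    gbar = (1 - theta) * grad + theta * gbar
    return w + dw, (alpha, mu, dw, gbar)

# demo: minimize f(w) = w^2 (gradient 2w) starting from w = 5
w = np.float64(5.0)
state = (np.float64(0.1), np.float64(0.0), np.float64(0.0), np.float64(0.0))
for _ in range(20):
    w, state = edbd_step(w, 2 * w, state)
```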
4. Image Classification Accuracy Assessment
Accuracy evaluation of image classification means assessing, after classification is complete, how accurate the classified image is against reference data collected from real conditions on the ground. From the error matrix, by calculating over its rows and columns, the user's accuracy and producer's accuracy can be produced. The accuracy assessment used in this study is described below [11–13].
This study analyzes the accuracy assessment results with the error matrix; the following error analysis indicators derived from the matrix will be used.
4.1. Producer’s Accuracy
Producer's accuracy relates to the probability that a reference sample of real ground cover (the photo-interpreted land cover class in this project) will be correctly mapped; it measures the errors of omission, dividing the number of correctly classified elements of a class by the total number of reference elements of that class:
\[
\mathrm{PA}_j = \frac{x_{jj}}{x_{+j}},
\]
where \(x_{jj}\) represents the element in the \(j\)th column and \(j\)th row of the error matrix and \(x_{+j}\) represents the sum of the elements of the \(j\)th column.
This accuracy expresses how well, under a given classification method, the ground reference data are correctly classified. Its complement is the omission error, the proportion of a known category that is omitted and classified into other categories, defined as \(1 - \mathrm{PA}_j\).
4.2. User’s Accuracy
This accuracy is the percentage of each mapped land cover class that corresponds to the real ground reference data. That is, for each class, the diagonal value is divided by the sum of its row:
\[
\mathrm{UA}_i = \frac{x_{ii}}{x_{i+}},
\]
where \(x_{ii}\) represents the error matrix element of the \(i\)th row and \(i\)th column and \(x_{i+}\) represents the sum of the elements of the \(i\)th row.
User's accuracy represents the percentage of correct classification within a mapped land cover class. Its complement is the commission error, expressing the percentage misclassified into the class, defined as \(1 - \mathrm{UA}_i\).
4.3. Overall Accuracy
Overall accuracy is the number of correctly classified check points divided by the total number of check points drawn after classification, that is, the sum of all the error matrix diagonal values divided by the total sample size:
\[
\mathrm{OA} = \frac{\sum_{i=1}^{k} x_{ii}}{N},
\]
where \(k\) is the number of classes and \(N\) is the total number of samples.
4.4. Kappa Statistic
To better reflect the error of the overall image classification, the Kappa statistic is calculated from the error matrix. It accounts for both omission and commission errors and indicates how much better the classification results are than a random classification:
\[
K = \frac{N \sum_{i=1}^{k} x_{ii} - \sum_{i=1}^{k} x_{i+}\,x_{+i}}{N^2 - \sum_{i=1}^{k} x_{i+}\,x_{+i}}.
\]
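All four indicators can be computed from a single error matrix. The sketch below uses a hypothetical 3-class matrix (rows = mapped class, columns = reference class); the numbers are illustrative, not results from this study:

```python
import numpy as np

# hypothetical error matrix: rows = mapped class, columns = reference class
m = np.array([[50,  3,  2],
              [ 4, 40,  1],
              [ 1,  2, 45]], dtype=float)

n = m.sum()                            # total number of samples N
diag = np.diag(m)
producer = diag / m.sum(axis=0)        # per-class, column totals (omission)
user = diag / m.sum(axis=1)            # per-class, row totals (commission)
overall = diag.sum() / n               # overall accuracy
chance = (m.sum(axis=1) * m.sum(axis=0)).sum() / n**2  # expected agreement
kappa = (overall - chance) / (1 - chance)
```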
5. Modeling Flow of the Classification Model
After radiometric calibration and geometric correction, the remote sensing image with the optimal fused band combination is combined with normalized GIS data (the DEM), and the structure and parameters of the BPN-EDBD algorithm are determined to constitute a sound classification model. The classification patterns, characteristics, and features of the local area are then automatically extracted, and their classification accuracy is calculated. Finally, a comparative analysis against traditional classifiers examines the advantages and disadvantages of this network classification algorithm. The process is shown in Figure 3.
Step 1 (preprocessing of the image data). The acquired remote sensing images go through geometric correction, radiometric correction, cropping, and other treatments; the purpose is to improve classification accuracy, the pretreatment giving the image to be classified vivid contrast and clear detail.
Step 2 (select training samples). Choose the most common and most representative distribution data of the study area, after preprocessing, as the training samples. The fundamental principle is to cover as many ground object types as possible and to sample as much as possible. The best way to select samples is to pick areas featuring a wide range of land cover types, with the aid of maps, aerial photos, or other visual interpretation materials; then match and annotate the field data with the corresponding image locations; and finally enter them into the computer.
Step 3 (feature selection and feature extraction). A feature parameter is a screening tool that reflects the land type in the image; it can be extracted directly from the image or derived indirectly through calculations. Feature selection seeks the best indicators of the land type, while feature extraction processes the original features to obtain new and better ones. Through feature extraction, the image data are compressed, redundancy is reduced, and classification becomes easier.
Step 4 (classification). The process segments the image space based on the extracted image features and finally groups regions with the same characteristics into one class.
Step 5 (test results). This step evaluates the accuracy of the classification and the reliability of the method. Owing to the limitations of hardware, software, and the remote sensing information itself, spectra or textures that reflect different ground objects can be hard to distinguish. No classification method fully satisfies every classification requirement, which results in "same object, different spectra" and "different objects, same spectrum" phenomena and/or misclassification. Classification accuracy and reliability must therefore be evaluated afterwards.
Step 6 (results output). The output includes the classified image and statistical tables of land cover types and classification accuracy.
6. Neural Network Classification of Remote Sensing Images
This paper uses U.S. Landsat-8 data as the research data; in addition to the remote sensing data, DEM geographic and NDVI auxiliary data are introduced to distinguish certain types of surface features and improve the accuracy of remote sensing image classification. The study area is in Ningde City, for which 2009 Landsat-8 remote sensing images of Ningde and a 30 m resolution digital elevation model (DEM) were obtained. After pretreating the images, a subimage was extracted as the research area; by calculating the correlation matrix between the bands and filtering band combinations with the optimum index factor (OIF), bands 5, 4, and 3 were found to contain the maximum amount of information in combination.
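The OIF used for band selection divides the sum of the band standard deviations by the sum of the absolute pairwise correlation coefficients, so less-correlated, higher-variance band triples score higher. A minimal sketch with synthetic bands (the function name and data are illustrative):

```python
import numpy as np
from itertools import combinations

def oif(bands):
    """Optimum Index Factor of one band combination: sum of band
    standard deviations over sum of absolute pairwise correlations."""
    s = sum(np.std(b) for b in bands)
    r = sum(abs(np.corrcoef(a.ravel(), b.ravel())[0, 1])
            for a, b in combinations(bands, 2))
    return s / r

rng = np.random.default_rng(0)
b1 = rng.normal(100, 30, 1000)
b2 = b1 * 0.9 + rng.normal(0, 5, 1000)   # strongly correlated with b1
b3 = rng.normal(80, 25, 1000)            # nearly independent
b4 = rng.normal(90, 20, 1000)            # nearly independent
oif_corr = oif([b1, b2, b3])             # redundant combination
oif_indep = oif([b1, b3, b4])            # informative combination
```

The less-correlated triple scores much higher, which is why OIF ranking favors it.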
6.1. Training Sample Data
Based on the spectral characteristics of the true-color composite image and the actual survey information, samples of cultivated land, forest, water, construction land, and other land uses were delineated; through image segmentation at an appropriate scale, the class distribution, size, and related statistics were obtained, and 62 samples of cultivated land, 29 woodland samples, 46 construction land samples, 8 water samples, and 13 samples of other sites were selected as training data. A further set of verified ground truth samples was selected as references for accuracy testing: 32 cultivated land samples, 26 woodland samples, 18 construction land samples, 12 water samples, and 6 samples of other sites. From these samples, class spectral characteristics and other information were extracted. The separability of the training data was analyzed with the Jeffries-Matusita separation distance and transformation method [15, 16]; the quality evaluation results of the training samples are shown in Table 1. Separability determines whether a sample is qualified: when the separability is greater than 1.90, the training sample has good separability and is a qualified sample. As the table shows, all selected training and testing samples have a separability degree greater than 1.90, so all are qualified and can be applied in this classification study.
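The Jeffries-Matusita distance for two Gaussian classes is \(J = 2(1 - e^{-B})\), where \(B\) is the Bhattacharyya distance, so \(J\) saturates at 2 and the 1.90 threshold above marks near-complete separation. A minimal sketch, assuming class means and covariances have already been estimated from the samples:

```python
import numpy as np

def jeffries_matusita(m1, c1, m2, c2):
    """JM distance between two Gaussian classes (range 0..2);
    values above ~1.9 indicate well-separated training classes."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    c = (c1 + c2) / 2.0
    d = m1 - m2
    # Bhattacharyya distance: mean term plus covariance term
    b = (d @ np.linalg.solve(c, d)) / 8.0 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return 2.0 * (1.0 - np.exp(-b))
```

For identical classes the distance is 0; for well-separated means it approaches 2.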
Analysis of the DEM data of the experimental area, and of the slopes derived from it, shows elevations between 5 and 632 m with slopes of 0-34°. The landforms comprise three categories (plains, valleys, and hills and mountains), and the distribution of land cover is controlled by the landforms. Plains and valleys at elevations of 10 m or less are mainly irrigated land and water bodies, with a small number of residential areas, orchards, and dry land. Land at 10-50 m elevation is mainly residential, dry land, and orchards, with less water and irrigated land; residential areas generally lie on slopes of less than 5°. Land at 50-200 m elevation consists mostly of dry land, grassland, and woodland; grasslands are mainly distributed on gentle slopes of less than 20°, while woodlands are generally distributed above 200 m elevation. Because of the terrain, areas of mountain shadow and shaded woodland have spectral characteristics similar to water bodies and can be separated using the DEM. To improve the network's classification accuracy, the DEM data and the normalized difference vegetation index (NDVI) are quantified, in effect adding two "bands" that participate in network training alongside the spectral data [17, 18]. Because the spectral data and the DEM have different dimensions and physical meanings, the input components must be normalized into the range [0, 1] to accelerate the convergence of the neural network. NDVI is the most widely used vegetation index; different NDVI values correspond to different land types, so thresholding an NDVI image separates and extracts vegetation from other surface features. NDVI ranges from −1 to 1, and green vegetation generally falls between 0.2 and 0.8.
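The two auxiliary "bands" can be prepared as sketched below: NDVI from the near-infrared and red bands via its standard definition, \((\mathrm{NIR} - \mathrm{Red})/(\mathrm{NIR} + \mathrm{Red})\), and a unit-range rescaling for dimensionally different inputs such as the DEM (function names are illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); values in [-1, 1],
    with green vegetation typically around 0.2-0.8."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def to_unit(x):
    """Rescale an auxiliary 'band' (e.g. DEM or NDVI) to [0, 1] so that
    inputs with different physical units converge comparably."""
    x = np.asarray(x, float)
    return (x - x.min()) / (x.max() - x.min())

# e.g. elevations from the study area's 5-632 m range
dem_unit = to_unit(np.array([5.0, 120.0, 632.0]))
veg = ndvi(0.5, 0.1)  # a reflectance pair typical of vegetation
```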
6.2. Neural Network Classification Parameter Selection
By adjusting the connection weights of the network, the learning error over the training samples converges within the required range, achieving the desired classification category identification. The BPN-EDBD algorithm has nine parameters, \(\kappa_{\alpha}\), \(\gamma_{\alpha}\), \(\varphi_{\alpha}\), \(\alpha_{\max}\), \(\kappa_{\mu}\), \(\gamma_{\mu}\), \(\varphi_{\mu}\), \(\mu_{\max}\), and \(\theta\), adjusted by the trial and error method; the parameter values used in the three examples are (20, 0.7, 10, 0.1, 20, 0.7, 10, 0.1, 0.01). The initial weights and threshold values are set randomly between −1 and 1. With these training parameters the BP network is trained on the training samples in the study area; the resulting training curves are shown in Figure 4.
To compare the performance of this classifier, parallelepiped classification, the minimum distance classification method, maximum likelihood classification, and the neural network classifier are selected for comparison; the results are shown in Figure 5.
(a) Parallelepiped
(b) Minimum distance
(c) Maximum likelihood
(d) BPNN + EDBD
To study, under the same sample conditions, how adding the auxiliary DEM and NDVI information affects the BPNN + EDBD classification results, the input neurons are divided into three cases (Figure 6). The number of hidden-layer nodes is set with reference to formula (11), and the output layer outputs farmland, garden land, water bodies, construction sites, and others. Case A uses the original 3 bands directly; Case B uses the original 3 bands + DEM; and Case C uses the original 3 bands + DEM + NDVI. The three cases with different auxiliary information for image classification are compared.
(a) Case A: network structure
(b) Case B: network structure
(c) Case C: network structure
Case A in Figure 6(a) is 3-2-1, Case B in Figure 6(b) is 4-3-1, and Case C in Figure 6(c) is 5-3-1 (input layer neurons, hidden layer neurons, and output layer neurons, respectively).
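The three architectures differ only in layer sizes, so a single sigmoid forward pass covers all cases. A minimal sketch with weights initialized uniformly in [−1, 1] as in Section 6.2 (the function and case names are illustrative):

```python
import numpy as np

def forward(x, sizes, rng):
    """One sigmoid forward pass through a BPN with the given layer sizes,
    using random weights and biases drawn from [-1, 1]."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    a = np.asarray(x, float)
    for fan_in, fan_out in zip(sizes, sizes[1:]):
        w = rng.uniform(-1, 1, (fan_in, fan_out))
        b = rng.uniform(-1, 1, fan_out)
        a = sig(a @ w + b)
    return a

rng = np.random.default_rng(1)
cases = {"A": [3, 2, 1], "B": [4, 3, 1], "C": [5, 3, 1]}
outputs = {name: forward(np.full(sizes[0], 0.5), sizes, rng)
           for name, sizes in cases.items()}
```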
Figure 7 shows the three network diagrams of this example. Each group will enter the network paradigm of learning and testing.
(a) Case A: classification result
(b) Case B: classification result
(c) Case C: classification result
Figure 8 shows the relationship between classification time consumed and image area for the three networks.
7. Results and Accuracy of Each Method of Classification Comparative Analysis
For accuracy evaluation we used the overall accuracy together with the Kappa coefficient; the higher both are, the better the accuracy (Table 2). As seen in Table 3, parallelepiped classification performed the worst, and minimum distance classification falls below the maximum likelihood method, while the classification accuracy of the BP neural network on the same samples is better than the other supervised classification methods, with an overall accuracy of 95% and a Kappa coefficient of 0.9359. This shows that the BPNN + EDBD classification method has good adaptability and stability with higher classification accuracy.
Under the same study sample conditions, auxiliary information was added to the BPNN + EDBD classification. Comparing Cases B and C in Figure 8, after adding the DEM auxiliary information, image speckle is reduced and the boundaries between different objects become more apparent. As illustrated by Case C in Figure 8, when DEM and NDVI auxiliary information are added together, the NDVI makes woodland whose spectral information resembles grassland more distinguishable, so grasslands and forests become more distinct. Comparing the results in Figure 8, adding auxiliary information not only accelerates the convergence of the learning error but also makes the overall classification features, whether blocks or boundaries, clearer and more obvious.
The entire study uses the method of BPN with EDBD algorithm for the classification of remote sensing image. This method integrates several techniques, such as image fusion and ancillary information, and an effective classifier (BPN with EDBD algorithm). In the past, researchers and scientists used ancillary information to enhance the image classification quality.
However, few studies have discussed the effectiveness of, and strategies for handling, this ancillary information. Accordingly, this study focuses on evaluating and discussing the importance of its influence by constructing an enhanced supervised decision support system, which includes the following.
(1) The EDBD algorithm successfully accelerates the convergence speed of the BPN; the slowing and jumping phenomena disappear. The efficiency of the BPN with and without the EDBD algorithm in attaining adequate iteration is thus compared, and the troublesome problem of choosing an appropriate number of hidden neurons is also eased when the EDBD algorithm is employed.
(2) The results analysis shows that the DEM and NDVI play vital roles in image classification. The study also finds that some complicated indicators may distort or overstretch the raw data from the satellite image while providing better quality for image classification.
(3) The method provides a new concept of knowledge scope in determining target categories: using the DEM, the fields to be classified in our study area can easily be found.
(4) The enhanced supervised classifier handles some detailed features (such as levees and paddy rice) much better, as the calculated error matrix shows; with ancillary information added, the user's accuracy improves greatly, up to 100%.
To date, no consensus has been reached on the effect of neural network learning methods and the introduction of auxiliary information. Beyond the accuracy improvements shown in the studies above, the classification results still have certain limitations when the sample size is small; optimizing the neural network structure and fusing a variety of earth science information for remote sensing image classification are therefore the focuses of future research.
Conflict of Interests
The authors declare that they have no conflict of interests regarding the publication of this paper.
Acknowledgments
This research is supported by the Science and Technology Project of Education Department of Fujian Province under Contract no. JA14331 and the Vital Construction Project for Serving West Coast of Ningde Normal University under Contract nos. 2012H311 and 2013F33.
- P. M. Atkinson and P. Lewis, “Geostatistical classification for remote sensing: an introduction,” Computers and Geosciences, vol. 26, no. 4, pp. 361–371, 2000.
- G. Bosque, I. del Campo, and J. Echanobe, “Fuzzy systems, neural networks and neuro-fuzzy systems: a vision on their hardware implementation and platforms over two decades,” Engineering Applications of Artificial Intelligence, vol. 32, pp. 283–331, 2014.
- H. Ha and M. K. Stenstrom, “Identification of land use with water quality data in stormwater using a neural network,” Water Research, vol. 37, no. 17, pp. 4222–4230, 2003.
- D. M. Miller, E. J. Kaminsky, and S. Rana, “Neural network classification of remote-sensing data,” Computers & Geosciences, vol. 21, no. 3, pp. 377–386, 1995.
- P. M. Atkinson and A. R. L. Tatnall, “Introduction: neural networks in remote sensing,” International Journal of Remote Sensing, vol. 18, no. 4, pp. 699–709, 1997.
- P. Biswajeet and L. Saro, “Utilization of optical remote sensing data and GIS tools for regional landslide hazard analysis using an artificial neural network model,” Earth Science Frontiers, vol. 14, no. 6, pp. 143–151, 2007.
- R. Bayindir, S. Sagiroglu, and I. Colak, “An intelligent power factor corrector for power system using artificial neural networks,” Electric Power Systems Research, vol. 79, no. 1, pp. 152–160, 2009.
- S. Wan and J. Y. Yen, “The study on SSI problems in an industrial area with modified neural network approaches,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 30, no. 15, pp. 1563–1578, 2006.
- P. D. Heermann and N. Khazenie, “Classification of multispectral remote sensing data using a back-propagation neural network,” IEEE Transactions on Geoscience and Remote Sensing, vol. 30, no. 1, pp. 81–88, 1992.
- A. A. Minai and R. D. Williams, “Back-propagation heuristics: a study of the extended delta-bar-delta algorithm,” Neural Networks, vol. 1, pp. 595–600, 1990.
- A. J. C. Trappey, F.-C. Hsu, C. V. Trappey, and C.-I. Lin, “Development of a patent document classification and search platform using a back-propagation network,” Expert Systems with Applications, vol. 31, no. 4, pp. 755–765, 2006.
- Y. Du, S. Zhou, and Q. Si, “Application and contrast research on remote sensing image classification based on ANN,” Journal of Science of Surveying and Mapping, vol. 35, no. 4, pp. 121–125, 2010.
- D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
- N. Patel and B. Kaushal, “Classification of features selected through Optimum Index Factor (OIF) for improving classification accuracy,” Journal of Forestry Research, vol. 22, no. 1, pp. 99–105, 2011.
- K. Rokni, A. Ahmad, K. Solaimani, and S. Hazini, “A new approach for surface water change detection: integration of pixel level image fusion and image classification techniques,” International Journal of Applied Earth Observation and Geoinformation, vol. 34, pp. 226–234, 2015.
- S. Padma and S. Sanjeevi, “Jeffries Matusita based mixed-measure for improved spectral matching in hyperspectral image analysis,” International Journal of Applied Earth Observation and Geoinformation, vol. 32, pp. 138–151, 2014.
- J. Zhang and G. M. Foody, “A fuzzy classification of sub-urban land cover from remotely sensed imagery,” International Journal of Remote Sensing, vol. 23, no. 11, pp. 2193–2212, 2002.
- S. Bokhorst, H. Tømmervik, T. V. Callaghan, G. K. Phoenix, and J. W. Bjerke, “Vegetation recovery following extreme winter warming events in the sub-Arctic estimated using NDVI from remote sensing and handheld passive proximal sensors,” Environmental and Experimental Botany, vol. 81, pp. 18–25, 2012.
- H. Cicek, M. Sunohara, G. Wilkes et al., “Using vegetation indices from satellite remote sensing to assess corn and soybean response to controlled tile drainage,” Agricultural Water Management, vol. 98, no. 2, pp. 261–270, 2010.
- G. M. Foody, “Thematic map comparison: evaluating the statistical significance of differences in classification accuracy,” Photogrammetric Engineering and Remote Sensing, vol. 70, no. 5, pp. 627–633, 2004.
Copyright © 2015 Shi Liang Zhang and Ting Cheng Chang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.