Abstract

Subpixel mapping (SPM) algorithms estimate the spatial distribution of different land cover classes within mixed pixels. This paper proposes a new subpixel mapping method based on image structural self-similarity learning. Image structural self-similarity refers to similar structures that occur at the same scale or across different scales within an image and its downsampled version, and it exists widely in remote sensing images. Based on the similarity of image block structures, the proposed method estimates the finer-resolution spatial distribution from coarse-resolution fraction images and thereby realizes subpixel mapping. The experimental results show that the proposed method is more accurate than existing fast subpixel mapping algorithms.

1. Introduction

Mixed pixels widely exist in remote sensing imagery. Subpixel unmixing, a kind of soft classification technique, addresses the mixed-pixel problem: it obtains the relative abundance (i.e., the proportion or component) of each land cover class within each pixel and produces fraction images of each class of a hyperspectral remote sensing image. However, subpixel unmixing only yields the proportion of each land cover class; it cannot specify the spatial distribution of the different classes within a pixel, so many specific spatial details are still missing. To address this issue, subpixel mapping (SPM), first proposed by Atkinson et al. [1], estimates the specific spatial distribution of the different land cover classes within mixed pixels. Subpixel mapping converts soft classification into hard classification [2] at a finer spatial scale: it divides mixed pixels into subpixels at an appropriate scale, predicts the class of each subpixel, and thereby obtains specific land cover information at a finer spatial resolution.

At present, most existing SPM algorithms take the spatial correlation hypothesis as their theoretical basis, which states that the closer subpixels are to each other, the more likely they are to belong to the same class. This assumption holds in most cases and is the basis on which subpixel mapping can be carried out [3]. Almost all existing SPM algorithms adopt the spatial correlation hypothesis, including pixel swapping, the Hopfield neural network (HNN), linear optimization, genetic algorithms, subpixel spatial attraction (SPA), and interpolation-based methods [4]. However, some of them, such as genetic algorithms, the Hopfield neural network, and Markov random field methods, consume too much time because large numbers of iterations are required to reach satisfactory results, while others, such as the traditional BPNN and indicator cokriging, need prior spatial information that is hard to obtain.

A series of interpolation-based fast algorithms [4] have emerged and shown their advantages: they need no prior spatial structure information on the classes and consume far less time because they involve few parameters and iterations. However, relying only on the spatial correlation hypothesis, most of them fail to reflect the characteristics of complex landscapes well, mainly because not enough prior spatial structure information is taken into consideration. Such fast SPM methods tend to reproduce details poorly when dealing with divided, tiny point features and narrow linear or strip features such as rivers and highways. As a consequence, existing fast SPM results can differ significantly from the actual situation, which limits the accuracy and performance of SPM and restricts the further development of fast SPM algorithms.

Image super-resolution reconstruction (SR) is widely applied in the image processing field. This technique reconstructs a fine-resolution image from a coarse-resolution image through a series of signal processing methods. Not only the coarse-resolution image but also additional information must be involved in the super-resolution reconstruction process in order to make up for the lack of detailed information. A super-resolution reconstruction algorithm based on multiscale similarity learning proposed by Pan et al. [5] improves the accuracy of SR. Multiscale structural self-similarity refers to similar structures that exist in the same image at the same or different scales, and such self-similarity exists widely in remote sensing images. For super-resolution reconstruction, that algorithm uses the similarity of image blocks at the same scale as well as at different scales within the image itself as additional information to learn from. Zhang et al. [6] introduced multiscale self-similarity into the subpixel mapping field, taking multiscale self-similarity redundancy as a new regularization term and improving accuracy; however, its complex iterations make it time consuming. Another algorithm, self-similarity pixel swapping (SSPS) proposed by Su [7], combines self-similarity with spatial continuity and performs well in terms of accuracy, but, inheriting from pixel swapping, it also needs massive iterations and is time consuming.

Motivated by the observations above, this paper proposes a novel algorithm that applies self-similarity directly to fraction images. By using a BPNN to learn the correspondence between fraction image blocks with similar structures at different scales, the algorithm converts coarse-resolution fraction images into fine-resolution fraction images without a heavy iterative process and estimates the specific spatial distribution at a finer resolution. The structural self-similarity of fraction image blocks provides additional spatial information that is easy to obtain, because the prior spatial information comes from the coarse-resolution fraction image itself and its downsampled version.

2. BPNN SPM Algorithm Based on Structure Self-Similarity Learning

2.1. Spatial Correlation Hypothesis

Most existing subpixel mapping algorithms predict the specific spatial distribution of each land cover class in mixed pixels according to the spatial correlation hypothesis [8]. Figure 1 shows a simple schematic diagram of the spatial distribution of a simulated pixel, which is assumed to contain two land cover classes denoted by black and white, respectively; the scale factor is set as 5. Figures 1(b) and 1(c) represent two different spatial distributions: Figure 1(b) shows a random subpixel spatial distribution, while Figure 1(c) shows a distribution with higher spatial correlation. It is clear from observation that the distribution in Figure 1(c) is the more reasonable one.

In a sense, the spatial correlation hypothesis decreases the uncertainty of SPM. However, when facing divided, tiny point features and narrow linear or strip features, methods that rely on it alone expose their drawbacks.

2.2. Self-Similarity Learning of Image

Glasner et al. [9] showed that similar areas (i.e., similar blocks) at the same scale and across different scales generally exist within the same neighborhood or between different neighborhoods, whether in the same image or in different images; this is the concrete manifestation of multiscale structural self-similarity. Structural self-similarity occurs frequently in remote sensing imagery, appearing widely, whether prominently or latently, in the form of roads, houses, and natural landscapes, and it provides useful additional information [10] for reaching finer spatial resolution. As a result, image structural self-similarity learning can be applied to SPM as prior spatial information.

The principle of SR based on similar blocks of the same scale is shown in Figure 2(a). In Figure 2(b), the fine-resolution image is denoted by HR, while LR denotes the corresponding coarse-resolution image [11]; the HR image is larger than the LR image by the scale factor. Two similar blocks of different scales exist within HR, and their sizes also differ by that same factor; their corresponding blocks in LR form another pair of similar blocks of different scales. Because the scale factor between HR and LR is exactly the same as that between the pair of similar blocks, the finer-scale block provides precise additional information for reconstruction in LR. It can therefore be concluded that small image blocks tend to contain spatial pattern information that recurs in the higher-resolution image through the relationship between the fine-resolution image and the coarse-resolution image [12].

Therefore, the key of an image self-similarity learning algorithm is to conduct block processing on the image itself as well as on its degraded coarse-resolution version and to seek the association, or correspondence, between a block and its downsampled block.
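
As a minimal illustration of this idea, the sketch below (in Python with NumPy; the degradation operator, block size, and array names are illustrative assumptions) degrades an image by block averaging and pairs each block of the downsampled image with the co-located block in the original image, which is exactly the block correspondence that the learning step exploits.

```python
import numpy as np

def block_average_downsample(img, s):
    """Degrade an image by averaging non-overlapping s x s windows."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def colocated_block_pairs(img, s, n):
    """Pair each n x n block of the downsampled image with the
    n*s x n*s block at the same location in the original image."""
    low = block_average_downsample(img, s)
    pairs = []
    for i in range(0, low.shape[0] - n + 1, n):
        for j in range(0, low.shape[1] - n + 1, n):
            low_block = low[i:i + n, j:j + n]
            high_block = img[i * s:(i + n) * s, j * s:(j + n) * s]
            pairs.append((low_block, high_block))
    return pairs

# Example: a 12 x 12 image, scale factor 2, 2 x 2 low-resolution blocks
pairs = colocated_block_pairs(np.random.rand(12, 12), s=2, n=2)
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)  # 9 (2, 2) (4, 4)
```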

Self-similarity has already been applied to SPM in [6], where it is used as a regularization term to improve accuracy through massive iterations, which consumes too much time and limits SPM efficiency. In addition, [7] combines self-similarity with pixel swapping; that method has the same shortcoming, since it also contains an iterative process and consumes much time.

The specific learning procedure used to seek this correspondence is as follows: select a training image set, degrade it to obtain a coarse-resolution image set, and conduct block processing. Obtain the feature of the image block to be reconstructed as well as the feature of its downsampled image block and express both as vectors. A dictionary entry is composed of the two corresponding vectors: the feature of the coarse-resolution image block serves as the search index, and the feature of the fine-resolution block serves as the entry content to be retrieved. Once this relationship is obtained, it can be applied to an image to estimate the image itself at a finer resolution.

2.3. SPM Based on BPNN

SPM based on a BPNN takes coarse-resolution fraction image blocks and their downsampled fraction image blocks to set up the training samples. The coarse-resolution fraction images are downsampled to obtain “super-coarse-resolution fraction images,” over which a local window is slid to describe each super-coarse-resolution fraction block. The vector composed of the soft attribute values within such a block is taken as the input of a training sample, and the number of input-layer neurons equals the number of pixels in the block. The vector consisting of the probability values of the class within the corresponding coarse-resolution block, which contains the subpixels, is set as the output, so the number of output-layer neurons equals the number of subpixels in that block.

Afterwards, the training samples are used to train the BPNN to obtain the link weights between neurons of the different network layers; as a result, the nonlinear mapping relationship between input and output is learned. During network training, the link weights are gradually modified by feedback so that the output of the network approaches the expected output.
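
To make the layer sizes concrete, here is a hedged sketch of how one training pair could be vectorized; the window size w, the scale factor S, and the variable names are illustrative, and random arrays stand in for real fraction blocks.

```python
import numpy as np

w, S = 3, 3                                  # window size and scale factor (illustrative)
super_coarse_block = np.random.rand(w, w)    # soft attribute values of one class
coarse_block = np.random.rand(w * S, w * S)  # co-located block one resolution level finer

x = super_coarse_block.ravel()   # network input:  w * w = 9 neurons
y = coarse_block.ravel()         # network output: (w * S) ** 2 = 81 neurons
print(x.shape, y.shape)          # (9,) (81,)
```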

2.4. SPM Method Based on Block Structure Self-Similarity Learning

Image self-similarity learning can be applied to subpixel mapping because it can overcome the shortcomings of existing SPM methods, such as their poor performance in mapping linear and strip objects as well as complex landscapes.

The specific procedure of subpixel mapping based on self-similarity learning is as follows.

2.4.1. Downsample Coarse Fraction Images

Suppose the original coarse-resolution fraction images are F_k (k = 1, 2, ..., K), where K is the number of land cover classes, and let the zoom factor be S. Downsampling is then conducted on all the coarse fraction images to obtain the “super-coarse-resolution fraction images” f_k, so that each F_k is S times the size of the corresponding f_k in each dimension. The algorithm aims to estimate the spatial distribution of F_k at a finer spatial resolution, and the relationship between f_k and F_k is learned to make that estimate.
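
A minimal sketch of this step, assuming that the coarse fraction images are stored as a list of 2-D NumPy arrays and that the degradation is plain block averaging (the downsampling operator is not specified above, so mean aggregation is an assumption):

```python
import numpy as np

def downsample_fractions(fraction_images, S):
    """Downsample each coarse fraction image F_k by the zoom factor S
    (assumed here to be mean aggregation over S x S windows) to obtain
    the super-coarse fraction images f_k."""
    super_coarse = []
    for F in fraction_images:
        h, w = F.shape
        f = F[:h - h % S, :w - w % S].reshape(h // S, S, w // S, S).mean(axis=(1, 3))
        super_coarse.append(f)
    return super_coarse

# Example: K = 4 classes of 270 x 270 coarse fractions, S = 3 -> 90 x 90
F_list = [np.random.rand(270, 270) for _ in range(4)]
f_list = downsample_fractions(F_list, S=3)
print(f_list[0].shape)  # (90, 90)
```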

2.4.2. Build Dictionary

Divide each f_k into blocks image by image, with each block p containing n × n pixels. For each “super-coarse-resolution block” p, the corresponding coarse-resolution block q, containing nS × nS pixels, can be found at the same location in the corresponding coarse-resolution fraction image F_k; the size of q is S times the size of p in each dimension. Both p and q are expressed as vectors composed of the soft attribute values of all the pixels of the class within the corresponding block. In addition, calculate m, the average soft attribute value of p.

A dictionary entry is then set as (m, p, q). m and p function as the indexes of the dictionary entry: m is the first index and p is the second index, while q serves as the content of the dictionary entry.
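
A sketch of the dictionary construction for one class, following the notation above (the function name and the concrete sizes in the example are illustrative):

```python
import numpy as np

def build_dictionary(f_k, F_k, S, n):
    """Build dictionary entries (m, p, q) for one class.

    p: flattened n x n block of the super-coarse image f_k (second index)
    q: flattened nS x nS block at the same location in F_k (entry content)
    m: mean soft attribute value of p (first index)
    """
    entries = []
    for i in range(0, f_k.shape[0] - n + 1, n):
        for j in range(0, f_k.shape[1] - n + 1, n):
            p = f_k[i:i + n, j:j + n].ravel()
            q = F_k[i * S:(i + n) * S, j * S:(j + n) * S].ravel()
            entries.append((p.mean(), p, q))
    return entries

# Example: 90 x 90 super-coarse image, 270 x 270 coarse image, S = 3, n = 3
entries = build_dictionary(np.random.rand(90, 90), np.random.rand(270, 270), S=3, n=3)
print(len(entries), entries[0][1].shape, entries[0][2].shape)  # 900 (9,) (81,)
```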

2.4.3. Match Similar Blocks

Set two judgment thresholds, T1 and T2.

For each coarse-resolution block B to be trained, traverse all the dictionary entries in order, calculate the difference between the mean soft attribute value of B and the first index m, and keep the entries for which this difference is smaller than T1. Then let B traverse the second indexes p of the remaining entries and calculate the distance between B and p, keeping the entries for which this distance is smaller than T2. Finally, retrieve the corresponding contents q of the retained entries and conduct a weighted sum on them to estimate the fine-resolution block corresponding to the low-resolution block B. In this way, similar image blocks are matched.

T1 and T2 are threshold values that directly influence the accuracy and the computation time of SPM. Since this paper focuses on validating the proposed method, these two thresholds are not discussed in detail; both are set to 0.15 as a compromise between accuracy and computation time. A sketch of the matching step is given below.
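
This is a hedged sketch of the matching step; the distance measure (Euclidean distance normalized by block size) and the inverse-distance weighting are assumptions, since the text above does not fix them, and the function name is illustrative.

```python
import numpy as np

def match_block(B, entries, T1=0.15, T2=0.15, eps=1e-6):
    """Estimate the fine-resolution block for a flattened coarse block B.

    Entries are (m, p, q) tuples from the dictionary. Candidates are first
    filtered by |mean(B) - m| < T1, then by the distance between B and the
    second index p < T2; the retained contents q are combined with
    inverse-distance weights."""
    mB = B.mean()
    kept_q, weights = [], []
    for m, p, q in entries:
        if abs(mB - m) >= T1:
            continue
        d = np.linalg.norm(B - p) / B.size   # assumed distance measure
        if d >= T2:
            continue
        kept_q.append(q)
        weights.append(1.0 / (d + eps))      # closer blocks weigh more
    if not kept_q:
        return None                          # no sufficiently similar block found
    w = np.array(weights) / np.sum(weights)
    return (np.array(kept_q) * w[:, None]).sum(axis=0)
```

In this sketch a block with no retained entry is simply reported as unmatched; in practice it could fall back to a plain interpolation of B.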

2.4.4. Train BP Network

The vector composed of the soft attribute values within a coarse-resolution block B is taken as the input of a training sample, and the number of input-layer neurons equals the number of pixels in B. The vector consisting of the matched fine-resolution estimate obtained in Section 2.4.3 is set as the output, so the output layer has as many neurons as there are subpixels in the estimated fine-resolution block.

The BPNN is then trained to obtain the link weights between neurons of the different network layers; as a result, the nonlinear mapping relationship between input and output is learned. During network training, the link weights are gradually adjusted by feedback so that the network output approaches the expected output.
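
Below is a hedged sketch of this training step that uses scikit-learn's MLPRegressor (a multilayer perceptron trained by backpropagation) as a stand-in for the BPNN described above; the hidden-layer size, activation, solver, iteration limit, and the random training arrays in the example are all illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_bpnn(X, Y, hidden=64, max_iter=2000, seed=0):
    """Fit a network that maps an n*n coarse block vector to an
    (n*S)**2 fine block vector, i.e., the block-level coarse-to-fine
    relationship learned from the matched samples."""
    net = MLPRegressor(hidden_layer_sizes=(hidden,),
                       activation="logistic",   # sigmoid units, as in a classic BPNN
                       solver="adam",
                       max_iter=max_iter,
                       random_state=seed)
    net.fit(X, Y)                                # X: (N, n*n), Y: (N, (n*S)**2)
    return net

# Example shapes for n = 3, S = 3: inputs of length 9, outputs of length 81
X = np.random.rand(500, 9)
Y = np.random.rand(500, 81)
net = train_bpnn(X, Y)
```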

2.4.5. Estimate Based on BPNN

The trained network can then be applied to a coarse-resolution fraction block to estimate its finer-resolution fraction block. The vector of soft attribute values of a given land cover class within the coarse-resolution block to be mapped is extracted as the input, and the vector of estimated soft attribute values of that class within the fine-resolution block is obtained as the output.
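
Continuing the same sketch, the trained network can be applied block by block (the function and variable names follow the assumptions made above):

```python
import numpy as np

def estimate_fine_blocks(net, F_k, S, n):
    """Slide over F_k in n x n steps, feed each flattened block to the
    trained network, and collect the predicted (n*S) x (n*S) fine blocks,
    keyed by their block position in F_k."""
    fine_blocks = {}
    for i in range(0, F_k.shape[0] - n + 1, n):
        for j in range(0, F_k.shape[1] - n + 1, n):
            B = F_k[i:i + n, j:j + n].ravel()
            pred = net.predict(B.reshape(1, -1))[0]
            fine_blocks[(i, j)] = pred.reshape(n * S, n * S)
    return fine_blocks
```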

2.4.6. Stitch Fraction Blocks

Stitch all estimated fine-resolution blocks together sequentially to obtain the fine-resolution fraction images.
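
A sketch of the stitching step, matching the block layout assumed in the previous sketches:

```python
import numpy as np

def stitch_blocks(fine_blocks, coarse_shape, S, n):
    """Place each estimated (n*S) x (n*S) block at S times its coarse
    position to assemble the fine-resolution fraction image."""
    H, W = coarse_shape[0] * S, coarse_shape[1] * S
    fine = np.zeros((H, W))
    for (i, j), block in fine_blocks.items():
        fine[i * S:(i + n) * S, j * S:(j + n) * S] = block
    return fine
```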

2.4.7. Class Allocation

Allocate a class to each subpixel in turn using UOC [13].
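
The UOC allocation itself is specified in [13] and is not reproduced here. As a simplified stand-in only, the sketch below labels each subpixel with the class whose estimated soft attribute value is largest (winner-takes-all); unlike UOC, it does not enforce the number of subpixels each class is entitled to within every coarse pixel.

```python
import numpy as np

def winner_takes_all(fine_fractions):
    """Simplified class allocation: assign each subpixel to the class with
    the largest estimated fine-resolution soft attribute value.
    (A stand-in for UOC, not the allocation used in the paper.)"""
    stack = np.stack(fine_fractions, axis=0)   # shape (K, H, W), one layer per class
    return np.argmax(stack, axis=0)            # class index per subpixel
```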

3. Experiments and Analysis

To validate the proposed SPM method, experiments were carried out on two images. The traditional BPNN, the interpolation-based fast algorithm with MSIs, and the proposed method were tested. To fully assess the method, fine spatial resolution images were degraded to simulate coarse images. PCC (percent correctly classified) and the Kappa coefficient were used to evaluate accuracy, and two further indicators, PCC′ and Kappa′ [14], were introduced to evaluate the mapping results on mixed pixels. The scale factor has been fully discussed in other papers, where it usually varies between 2 and 5; in this paper it is set as 3 and is not discussed further, while the influence of block size on accuracy is discussed.

3.1. Test Image 1

An image acquired by the ROSIS sensor serves as the first set of experimental data; it contains 4 land cover classes. Firstly, downsampling is conducted on the original image to obtain the fraction images of the different classes. The size of the original image is 810 × 810 pixels and the scale factor is set as 3, so the size of each fraction image is 270 × 270 pixels; that is, each pixel of the coarse image corresponds to 3 × 3 pixels of the original image. The flowchart of SPM based on self-similarity learning is shown in Figure 3.

Furthermore, downsampling is conducted on the fraction images to obtain coarser fraction images of size 90 × 90 pixels. The traditional BPNN method, the interpolation-based fast algorithm with MSIs, and the proposed block structure self-similarity learning (BSSL) method are then used for the SPM experiments. The results are shown in Figures 4(c) and 4(d).

3.2. Test Image 2

An aerial image is selected as the experimental data for test 2; this image also contains 4 land cover classes: road, water, corn, and vegetables. The scale factor is again set as 3. After being treated in the same manner, the test image is downsampled to obtain “super-coarse fraction images.” Afterwards, the interpolation-based fast algorithm with MSIs, the traditional BPNN method, and the proposed method are applied to the coarse fraction image blocks, and the results obtained are shown in Figures 5(b), 5(c), and 5(d).

3.3. Influence of Block Size on SPM

The influence of block size on accuracy was examined on test image 2. Three block sizes were tested with the proposed method (the scale factor is set as 3), and the results are shown in Tables 1 and 2. As the block size increases, the PCC declines significantly: the bigger the block, the more complex the structural information it contains, and thus the harder the SPM task becomes. A smaller block size consumes more time; however, the influence of block size on time is not as pronounced or critical as its influence on accuracy, so sacrificing such a small amount of time for better accuracy is acceptable.

As a result, a small block size should be used for better accuracy when the image to be estimated is relatively small.

3.4. Experiment Results Contrast

From Figures 4 and 5, it can be seen visually that the BSSL method yields a better spatial distribution. In the mapping results of the BPNN and the interpolation-based method, the size and shape of land cover such as the elongated river are seriously deformed, serious distortion exists in most isolated and scattered point-like land cover, and there are obvious burrs at the edges of land cover. The BSSL results overcome these defects of the traditional methods to a large extent: the restoration of the slender river is significantly better, and the mapping of isolated, scattered point features is obviously improved compared with the traditional methods. Thus, the SPM method proposed in this paper makes better use of the structural features of the land cover and remains more in line with the actual situation.

For a further comparison of the experimental results, a quantitative analysis is carried out using three accuracy indexes, the confusion matrix, PCC (percent correctly classified), and the Kappa coefficient, to evaluate the accuracy of the results. In addition, the two indicators PCC′ and Kappa′ are used; they are calculated only on mixed pixels, excluding the impact of nonmixed pixels, so that the SPM results can be evaluated more specifically. The result under evaluation and the reference image are compared pixel by pixel, and the final scores are obtained after statistical calculation.
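
For reference, PCC and the Kappa coefficient can be computed from the confusion matrix as sketched below (these are the standard definitions; PCC′ and Kappa′ apply the same formulas to a confusion matrix built only from subpixels inside mixed coarse pixels).

```python
import numpy as np

def pcc_and_kappa(confusion):
    """PCC = trace / total; Kappa = (p_o - p_e) / (1 - p_e), where p_o is
    the observed agreement (i.e., PCC) and p_e the chance agreement
    computed from the row and column marginals of the confusion matrix."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_o = np.trace(confusion) / total
    p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    kappa = (p_o - p_e) / (1.0 - p_e)
    return p_o, kappa
```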

Tables 3 and 4 show the comparison of the accuracy evaluation coefficients of the methods on the two sets of test images. As shown, compared with the interpolation-based method with MSIs and the traditional BPNN method, respectively, the PCC of the proposed method on image 1 is higher by 2.269% and 5.728%, and the Kappa coefficient is higher by 0.0230 and 0.0556; the PCC′ is higher by 3.9% and 8.005%, and the Kappa′ is higher by 0.0303 and 0.0976. On image 2, compared with the same two methods, the PCC of the proposed method is higher by 3.097% and 7.207%, the Kappa coefficient by 0.0344 and 0.0764, the PCC′ by 2.929% and 7.031%, and the Kappa′ by 0.0372 and 0.1096.

In addition, a comparison of computation times is listed in Table 5. The proposed method consumes a moderate amount of computation time while achieving better accuracy than the fast algorithms.

In general, every accuracy index of the BSSL algorithm is improved to varying degrees in comparison with the traditional algorithms, while its computation time remains moderate. It can be concluded that the proposed method takes the structural information of the image into account and is superior to traditional fast algorithms in accuracy.

4. Conclusion

Existing fast SPM algorithms based on the spatial correlation hypothesis are unable to reconstruct such specific spatial distributions as narrow linear and strip features or divided, tiny point land cover features perfectly, so their results often fail to meet the technical requirements.

The SPM method based on BSSL proposed in this paper takes land cover structural features into account and improves the accuracy of SPM. As the experimental results show, the method conforms better to the actual distribution of land cover and improves the effect and accuracy of SPM while consuming a moderate amount of computation time compared with existing fast SPM algorithms. The size of the image block is of vital importance in this method; in most cases, a small image block provides a relatively good SPM result considering both time and accuracy.

The proposed method shows potential for real-time applications, and its computation time can be further shortened in future research.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities (no. HEUCF1508) and by the Natural Science Foundation of Heilongjiang Province of China under Grant no. F201413.