Abstract
Subpixel mapping technology can determine the specific locations of different objects within a mixed pixel, effectively resolving the uncertainty in the spatial distribution of ground features inherent in traditional classification technology. Existing methods based on linear optimization suffer from premature and local convergence of the optimization algorithm. This paper proposes a subpixel mapping method based on modified binary quantum particle swarm optimization (MBQPSO) to address these issues. The initial subpixel mapping imagery is obtained from spectral unmixing results. We focus mainly on the discretization of QPSO, implemented by modifying the discrete update process of the particle location, to minimize objective functions formulated from different methods of calculating the perimeter of connected regions. To reduce time complexity, a target optimization strategy combining global iteration with local iteration is performed. The MBQPSO is tested on standard test functions, and the results show that it achieves the best global optimization performance and convergence rate. We then analyze the proposed algorithm qualitatively and quantitatively on simulated and real data; the results show that the method combining MBQPSO with the objective function based on the gap length between region and background achieves the best accuracy and efficiency.
1. Introduction
Hyperspectral images are composed of hundreds of bands with very high spectral resolution, generally from the visible to the infrared region. For every recorded pixel, rich spectral information provides a complete spectral description and a better characterization of the observed surface, which results in a very powerful tool for material discrimination and earth observation. However, a common drawback of hyperspectral sensors is their relatively low spatial resolution, which leads to the problem of mixed pixels (i.e., pixels containing a mixture of different materials). The spatial detail of ground features within mixed pixels is extremely important for land cover mapping, coastline extraction, change detection, and landscape index estimation. Mixed pixels cannot be correctly addressed by traditional hard classification methods; subpixel mapping is an effective measure to solve this problem.
Subpixel mapping and pixel spatial dependence theory were initially proposed by Atkinson [1]. Based on this theory, several subpixel mapping techniques have been developed. Existing subpixel mapping methods fall into the following typical categories: the neural network method, the geostatistics method, the Markov random field method, and the linear optimization method.
The first category is the neural network method. Among these, the Hopfield neural network (HNN) model, combined with the constrained energy minimization principle of output neurons, has been used in subpixel mapping. In the HNN model, each pixel is a neuron, and an energy function is constructed from the correlation between subpixels and their neighboring subpixels, using the mixed-pixel abundance as the constraint [2, 3]. An improved HNN method for small-scale targets, and for small-scale targets coexisting with large-scale targets, was proposed in [4]. A Back-Propagation (BP) neural network has been trained to learn the appropriate locations of the subpixels belonging to the different classes inside a pixel [5]. At the same time, wavelet transforms and genetic algorithms have been used to solve subpixel mapping [6, 7]. The ARTMAP neural network [8] was utilized to realize the learning model in [6]. Based on subpixel shifted remote sensing images (SSRSI), component information obtained from SSRSI was put into the HNN model to increase ratio constraints and reduce uncertainty [9]. In order to eliminate isolated pixels, the results of the BP neural network model were postprocessed [10]. Multiple low-spatial-resolution subpixel shifted images were used to modify the BP neural network to reduce uncertainty and error in the BPNN model [11]. The general regression neural network (GRNN) has also been applied to achieve improved subpixel mapping accuracy [12].
The second category is the geostatistics method. Existing approaches place subpixel mapping within an inverse problem framework; a geostatistical method was proposed for generating alternative synthetic land cover maps at the fine (target) spatial resolution [13, 14]. In [15], a model based on sequentially produced local indicator variograms (SLIV) was proposed, in which indicator variograms extracted from a target-resolution classification of a representative local area are used to develop an effective method.
The third category is the Markov random field method. The Markov random field (MRF) has been used in subpixel mapping [16], and scholars have subsequently studied the MRF model in depth [17, 18]. Exploiting the ability of the MRF to consider spatial and spectral information simultaneously, a novel sFCM-based (supervised fuzzy c-means) subpixel mapping model, which incorporates the sFCM criterion for unmixing pixels into the objective function, was proposed [19]. For the energy minimization of the MRF, one of the most commonly used traditional methods is simulated annealing (SA), but SA is very time-consuming; to overcome this limitation, graph cuts were used in [20].
The fourth category is the linear optimization method. Through the decomposition of mixed pixels or a fuzzy classification technique, the proportion of features is determined; a mathematical model describing the spatial correlation is then defined to construct the objective function, converting subpixel mapping into a linear optimization problem. Among linear optimization methods, Verhoeye proposed maximizing the spatial correlation between neighboring pixels and solved the problem using simplex linear optimization technology [21]. Immune clonal selection algorithms and differential evolution algorithms have been used to solve the mathematical model on Verhoeye's data [22, 23]. A new subpixel mapping method based on evolution agents was also proposed [24]. Although the subpixel/pixel spatial attraction model (SPSAM), which assumes that similar features physically attract each other, has effectively verified spatial correlation theory [25, 26], it ignores the correlation between subpixels. Wang put forward a modified SPSAM, which considers the correlations between and within pixels simultaneously [27, 28]. By incorporating auxiliary datasets, a subpixel mapping framework based on a maximum a posteriori (MAP) model was proposed to utilize the complementary information of multiple shifted images [29]. Since the previous MAP-based subpixel mapping algorithm was evaluated on images obtained by downsampling a classification image, without spectral unmixing errors, an adaptive subpixel mapping method based on a MAP model and a winner-take-all class determination strategy (AMCDSM) was proposed [30]. Existing spectral-spatial-based SPM algorithms only use the maximal spatial dependence principle as the spatial term to describe the local spatial distribution of different land cover features; a novel spectral-spatial-based SPM algorithm with multiscale spatial dependence was therefore proposed [31].
In [32], the spectral unmixing predictions (i.e., coarse land cover proportions used as input for SPM) were considered a convolution of not only subpixels within the coarse pixel, but also subpixels from neighboring coarse pixels. A new SPM method based on optimization is developed which recognizes the optimal solution as the one that, when convolved with the PSF, is the same as the input coarse land cover proportion.
The computational complexity of the objective functions above is relatively high. In order to reflect the spatial correlation intuitively, based on the principle that similar features attract each other, Villa took the minimization of regional perimeters of various types as the target of subpixel mapping, using simulated annealing (SA) and the pixel swapping algorithm (PSA) to achieve iterative optimization [33, 34]. Compared with SA and other similar evolutionary techniques, PSO has some attractive characteristics and has proven to have superior computational efficiency. However, standard PSO is not applicable to problems where the position of the particle should be discrete [35]. Ertürk therefore used binary particle swarm optimization (BPSO) to optimize the objective function proposed by Villa; this optimization algorithm is also fit for parallel implementation [35], and the results of the BPSO algorithm are better than those of SA. Compared with the BPSO algorithm, the quantum-behaved particle swarm optimization (QPSO) algorithm proposed by Sun [36] demonstrates many advantages, such as a simple evolution equation, fewer control parameters, a fast convergence rate, and simple operation. Therefore, the BQPSO algorithm was proposed for subpixel mapping [37], but its discrete update process of the particle location may lead to premature and local convergence. A modified version of BQPSO is proposed in this article.
In this study, a subpixel mapping method for hyperspectral images based on MBQPSO is proposed. The initial subpixel mapping imagery is obtained from the results of spectral unmixing. Using regional perimeter minimization to depict spatial correlation is intuitive and has low computational complexity. The objective function is formulated by different methods of calculating the perimeter of connected regions, namely, the gap length between region and background (_gap), the length of the chain code (_chain), and the sum of the number of border points (_point). We focus mainly on the discretization of QPSO, implemented by modifying the discrete update process of the particle location, to minimize the above three objective functions. In order to reduce time complexity, an objective optimization strategy of global iteration combined with local iteration is performed.
The rest of this paper is organized as follows. Section 2 presents the basic methodology of the proposed subpixel mapping method based on MBQPSO. Section 3 provides the experimental results and analyses with simulated and real hyperspectral images. The conclusion is drawn in Section 4.
2. Subpixel Mapping Method Based on MBQPSO
The proposed method provides an accurate subpixel mapping approach based on MBQPSO and a local optimization strategy. In this paper, known endmembers and abundances are the premise of the subpixel mapping method. Specifically, the number of endmembers is estimated by virtual dimensionality (VD) [38], and the endmembers are extracted by vertex component analysis (VCA). Then, the abundance of each endmember in each pixel is obtained by fully constrained least squares (FCLS). A pixel can be seen as a pure pixel only if the minimum difference between the maximum abundance and each of the other abundances in the pixel is greater than a certain threshold [33–35]. The initial subpixel mapping imagery is obtained according to the spectral unmixing results.
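As a concrete illustration of this purity test, the following sketch (a hypothetical helper with an assumed default threshold; the paper tunes the threshold per experiment) labels a pixel pure when its largest abundance exceeds every other abundance by more than the threshold:

```python
def is_pure(abundances, threshold=0.8):
    """Treat a pixel as pure when the gap between its largest abundance
    and the second largest exceeds the threshold (which then bounds the
    gap to all the other abundances as well). The default threshold of
    0.8 is an assumption, not a value from the paper."""
    s = sorted(abundances, reverse=True)
    return (s[0] - s[1]) > threshold
```

For example, a pixel with abundances `[0.95, 0.03, 0.02]` passes the test, while `[0.6, 0.4]` remains a mixed pixel.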
Then the follow-up process of the proposed subpixel mapping algorithm based on MBQPSO consists of the following: (1) the objective function is formulated by different methods of calculating the perimeter of connected regions; (2) the proposed MBQPSO is used to minimize the objective function; (3) an optimization strategy of global iteration combined with local iteration is performed to reduce time complexity. The framework of the proposed subpixel mapping method is organized as follows: the MBQPSO algorithm is introduced in Section 2.1; the objective function and adaptability analysis are presented in Section 2.2; two key problems of objective function optimization based on MBQPSO are described in Section 2.3, followed by the optimization strategy in Section 2.4.
2.1. Modified Binary Quantum Particle Swarm Optimization Algorithm
Before presenting the proposed MBQPSO, we first introduce the QPSO method, covering its basic concepts and evolution equations. The discrete update process of the evolution equations is given in the form of pseudocode, and the specific details of the modified discrete update process of the particle position are given as a textual description.
2.1.1. QPSO Method
The QPSO algorithm [36] is a theoretically guaranteed global convergence algorithm, which has fewer parameters and better global searching ability than PSO. In the QPSO algorithm, there are only the concepts of particle position and the distance between particles, while PSO also includes the concepts of speed and trajectory. In QPSO, each particle has only position information in an $N$-dimensional search space. Let $t$ be the current time and let $X_i(t)$ denote the position of the $i$th of $M$ particles. At time $t$, the optimal position of the $i$th particle is $P_i(t)$ and the global optimal position of the particle swarm is $G(t)$. The evolution equations are as follows:

$$mbest(t) = \frac{1}{M} \sum_{i=1}^{M} P_i(t), \tag{1}$$

$$p_{i,j}(t) = \varphi P_{i,j}(t) + (1 - \varphi) G_j(t), \tag{2}$$

$$X_{i,j}(t+1) = p_{i,j}(t) \pm \beta \left| mbest_j(t) - X_{i,j}(t) \right| \ln \frac{1}{u}, \tag{3}$$

where $j$ is a positive integer between 1 and $N$, and $\varphi$ and $u$ are random positive numbers between 0 and 1. If $u \geq 0.5$, the plus/minus sign in (3) takes a plus sign; otherwise, it takes a minus sign [36]. $p_i(t)$ is the $i$th particle's attractor, represented by a random vector that is defined by (2). $mbest(t)$ is the average optimal position; $\beta$ is the contraction-expansion coefficient and determines the particle convergence speed. Its expression is as follows:

$$\beta = (\beta_1 - \beta_2) \cdot \frac{maxiteration - curiteration}{maxiteration} + \beta_2, \tag{4}$$

where curiteration is the current iteration number and maxiteration is the maximum iteration number. $\beta_1$ and $\beta_2$ are the initial and final values of the control parameter, respectively. In addition, $\beta_1$ is greater than $\beta_2$, so $\beta$ is a linearly decreasing function controlled by these parameters.
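The continuous QPSO update described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function and variable names are ours, and the endpoint values of the contraction-expansion schedule are common defaults rather than values taken from the paper.

```python
import math
import random

def beta_schedule(cur_iter, max_iter, beta_init=1.0, beta_final=0.5):
    """Linearly decreasing contraction-expansion coefficient.
    The endpoints 1.0 and 0.5 are common choices, assumed here."""
    return (beta_init - beta_final) * (max_iter - cur_iter) / max_iter + beta_final

def qpso_step(X, P, G, beta):
    """One continuous QPSO update for a swarm.
    X: current particle positions (list of float lists),
    P: personal best positions, G: global best position,
    beta: contraction-expansion coefficient."""
    M, dim = len(X), len(G)
    # mean of all personal best positions
    mbest = [sum(P[i][j] for i in range(M)) / M for j in range(dim)]
    new_X = []
    for i in range(M):
        x = []
        for j in range(dim):
            phi = random.random()
            # local attractor between personal and global best
            p_ij = phi * P[i][j] + (1.0 - phi) * G[j]
            u = 1.0 - random.random()          # u in (0, 1], keeps log finite
            step = beta * abs(mbest[j] - X[i][j]) * math.log(1.0 / u)
            # the "plus/minus" choice, each sign with probability 0.5
            x.append(p_ij + step if random.random() >= 0.5 else p_ij - step)
        new_X.append(x)
    return new_X
```

With `beta = 0` the step length vanishes and each coordinate collapses onto its attractor, which is a quick sanity check of the update.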
However, like PSO and other evolutionary algorithms, QPSO readily relapses into local optimum when solving highdimensional complex optimization problems. In order to adapt the QPSO algorithm to the practical problems in discrete search space, we introduce the concept of binary encoding to QPSO, which is implemented by modifying the discrete update process of particle location.
2.1.2. MBQPSO Method
In BQPSO [37], the particle position is represented as a binary string, and the continuous evolution equations (1)–(3) are all discretized. The discrete update processes of the attractor $p$ and the mean best position $mbest$ are given in Algorithms 1 and 2, respectively.


In order to discretize the particle location update, (3) is deformed as follows:

$$d_H\left(X_{i,j}(t+1), p_{i,j}(t)\right) = \left\lceil \beta \, d_H\left(mbest_j(t), X_{i,j}(t)\right) \ln \frac{1}{u} \right\rceil, \tag{5}$$

where $d_H(\cdot, \cdot)$ represents the Hamming distance function and $\lceil \cdot \rceil$ rounds its argument up to an integer. It should be noted that, because $u$ is a random positive number between 0 and 1, $\ln(1/u)$ is positive.
Substituting $mbest_j(t)$ and $X_{i,j}(t)$ into (5), $X_{i,j}(t+1)$ can be obtained in reverse, but the time complexity of this procedure is high. In order to reduce the amount of calculation, each bit of $p_{i,j}(t)$ is instead mutated with mutation probability $c_m$ to obtain a new $X_{i,j}(t+1)$:

$$c_m = \frac{\beta \, d_H\left(mbest_j(t), X_{i,j}(t)\right) \ln(1/u)}{l}, \tag{6}$$

where $l$ is the length of the binary string.
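A minimal sketch of this mutation-based discretization follows. The function and variable names are ours, and clipping the mutation probability to 1 is our safeguard, not something stated in the paper; the sketch only illustrates that each bit of the attractor flips with the probability defined above, so the expected Hamming distance of the result from the attractor matches the prescribed step length.

```python
import math
import random

def hamming(a, b):
    """Hamming distance between two equal-length bit strings (lists of 0/1)."""
    return sum(x != y for x, y in zip(a, b))

def bqpso_bit_update(p, mbest, x, beta):
    """Approximate the BQPSO position update by bitwise mutation of the
    attractor p: each bit flips with probability
        c_m = beta * d_H(mbest, x) * ln(1/u) / l,
    clipped to 1 (our safeguard), where l is the string length."""
    l = len(p)
    u = 1.0 - random.random()              # u in (0, 1]
    c_m = min(1.0, beta * hamming(mbest, x) * math.log(1.0 / u) / l)
    return [bit ^ 1 if random.random() < c_m else bit for bit in p]
```

When `mbest` and `x` coincide, the distance term is zero, the mutation probability vanishes, and the attractor is returned unchanged.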
It should be noted that the "±" in (3) is not considered in (5), which may result in premature convergence. In order to overcome this problem, the discretization procedure of the particle position is modified, as shown in Algorithm 3. The modified algorithm is named MBQPSO to distinguish it from the BQPSO algorithm.

The algorithm’s design work is described as follows.
First, set $X(t+1) = p$; then use the mutation probability $c_m$ to update $X(t+1)$. During the mutation, let $n_{10}$ be the number of bits that change from 1 to 0 and let $n_{01}$ be the number of bits that change from 0 to 1. The detailed update process is divided into three cases:

(1) If $n_{10} = n_{01}$, the mutated string is accepted as $X(t+1)$ directly.

(2) Otherwise, according to the selection of the plus or minus sign in (3), we set the following rule:

(2.1) If the plus sign is selected, the mutated string should satisfy the distance prescribed by (5); because the mutation is random, this may not hold. It is obvious that the bigger $c_m$ is, the bigger the difference between $X(t+1)$ and $p$ becomes.

(a) For the case of $n_{10} > n_{01}$, in order to obtain the updated $X(t+1)$, the result of the mutation should satisfy the prescribed distance. According to the definition of $c_m$, the bigger $c_m$ is, the more bits change and the greater the chance that the condition holds; the condition is enforced by setting the relevant bits to 1 where it is violated and to 0 otherwise.

(b) For the case of $n_{10} < n_{01}$, in order to obtain a new $X(t+1)$, the result of the mutation should again satisfy the prescribed distance, which is enforced by setting the relevant bits to 0 where it is violated and to 1 otherwise.

(2.2) If the minus sign is selected, the symmetric rule is applied:

(a) For the case of $n_{10} > n_{01}$, set the relevant bits to 0 where the condition is violated and leave them unchanged otherwise.

(b) For the case of $n_{10} < n_{01}$, set the relevant bits to 1 where the condition is violated and leave them unchanged otherwise.
One final comment is noteworthy: if the distance term is unsigned, a new binary string is constructed to complete the update of $X(t+1)$ using the mutation function.
Using functional analysis theory, the convergence of the BQPSO algorithm has been proven [37]. Compared to BQPSO, the proposed MBQPSO algorithm only modifies the discretization process of the particle position; therefore, its convergence can easily be proven by the same method.
2.2. Objective Function and Adaptability Analysis
2.2.1. Objective Function
Relying on the spatial correlation tendency of features, we assume that each endmember within a pixel should be spatially close to the same endmembers in the surrounding pixels. Therefore, the cost function to be minimized is chosen as the total perimeter of the areas belonging to the same class:

$$J = \sum_{n=1}^{C} \sum_{m=1}^{M_n} P(R_{n,m}), \tag{7}$$

where $C$ is the number of classes, $M_n$ is the number of connected components of class $n$, and $P(R_{n,m})$ is the perimeter of the $m$th connected component of class $n$.
In general, there are three methods for calculating the perimeter of a connected component: the gap length between region and background, the length of the chain code (4-connected and 8-connected), and the sum of the number of border points [39]. For convenience, the objective functions corresponding to these three methods are named _gap, _chain, and _point.
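The first and third perimeter definitions can be sketched as follows (the chain-code variant, which traces the boundary, is omitted for brevity; function names are ours, and the image border is assumed to count as background):

```python
def gap_perimeter(mask):
    """Perimeter as the gap length between region and background: the
    number of unit edges separating a region cell from a non-region
    cell. The image border is treated as background (an assumption)."""
    rows, cols = len(mask), len(mask[0])
    per = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                        per += 1
    return per

def border_point_count(mask):
    """Perimeter as the number of border points: region cells that have
    at least one 4-neighbour outside the region."""
    rows, cols = len(mask), len(mask[0])
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                        count += 1
                        break
    return count
```

Note that an isolated single subpixel has a gap perimeter of 4 and a border-point count of 1, whereas its 8-connected chain-code length is 0; this difference is exactly what drives the adaptability analysis of the three measures.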
2.2.2. Adaptability Analysis of Perimeter Calculation Methods
The adaptability analysis of the three perimeter calculation methods (_gap, _chain, and _point) is illustrated in Figure 1, and Table 1 gives the perimeters calculated by the different methods for the cases in Figure 1. Figures 1(a), 1(c), 1(e), and 1(g) are the real subpixel distribution maps of the features. Based on the same subpixel abundances, Figures 1(b), 1(d), 1(f), and 1(h) are randomly selected subpixel mapping results representing special cases.
Special Case 1. There may be only one subpixel in a connected area (an isolated single point). Under the chain-code method, the perimeter of an isolated single point is 0; under the gap method, the total perimeter including an isolated point and a certain region is bound to be bigger than the region's perimeter alone, as shown in Figures 1(b) and 1(d).
Special Case 2. There may be only two subpixels in a connected area (isolated two points), as shown in Figures 1(f) and 1(h).
From Figure 1, it can be seen that the original feature distributions (a), (c), (e), and (g) are better than the special-case distributions (b), (d), (f), and (h).
Combining Figure 1 and Table 1 shows the following:

(1) The _gap method calculates the perimeter of connected components; isolated single points or isolated pairs of points increase the perimeter length. This method therefore satisfies the requirement that the smaller the value of the objective function, the better the subpixel mapping result.

(2) For cases (b) and (f), the number of boundary points increases under the _point method; but for (d) and (h), the mapping results of (c) and (g) cannot be recovered, because the objective function values are the same.

(3) The _chain (8-connected) method has an uncertain trend in its calculated results. For cases (b) and (f), the smaller the target function, the better the result. However, cases (d) and (h) contain isolated single points, whose 8-connected chain-code perimeter is 0, so this method cannot obtain the mapping results of (c) and (g).

(4) For Figure 1, the _chain (4-connected) method cannot acquire the optimal results even as the value of the objective function becomes smaller.
Therefore, according to the analysis of these special cases, _gap is the best, _point takes second place, and _chain is the worst. When using the chain-code method to calculate the perimeter, 4-connected segmentation leads to the generation of isolated single points, whereas 8-connected segmentation does not necessarily generate isolated points, so the latter performs better than the former. Therefore, _chain (8-connected) is used in the following experiments.
2.3. Objective Function Optimization Based on MBQPSO
In this paper, we use the perimeters of different connected regions to formulate the objective function, and the proposed method is used to minimize it. Based on the MBQPSO algorithm, two problems must be solved in order to optimize objective function (7). They are as follows.
(a) Corresponding Relations between Particle and Pixel. After the endmembers are extracted and the abundances in each pixel for each endmember are determined, an abundance threshold value is used to determine whether each pixel may be considered pure; the remaining pixels are considered mixed. Then, each mixed pixel is made to correspond to a subpixel window composed of $S \times S$ subpixels. Supposing the $i$th pixel contains $d$ features, the abundance values are ordered from low to high as $a_{i,1} \leq a_{i,2} \leq \cdots \leq a_{i,d}$. The number of subpixels assigned to each class in the corresponding subpixel window is

$$s_{i,j} = \operatorname{round}\left(a_{i,j} \cdot S^2\right), \quad j = 1, \ldots, d-1, \qquad s_{i,d} = S^2 - \sum_{j=1}^{d-1} s_{i,j}, \tag{8}$$

where $d$ is the number of feature classes in the pixel, $s_{i,j}$ is the number of subpixels assigned to the $j$th endmember in the $i$th pixel, $a_{i,j}$ is the abundance of the $j$th endmember in the $i$th pixel, and $S$ is the rate of spatial resolution enhancement.
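Under our reading of the allocation rule above, rounding is applied to each class in increasing order of abundance and the largest-abundance class absorbs the remainder, so the counts always sum to the window size. A minimal sketch (function name is ours):

```python
def allocate_subpixels(abundances, S):
    """Allocate the S*S subpixels of one mixed pixel among classes in
    proportion to abundance. Classes are processed from the smallest
    abundance up; each gets round(a * S*S) subpixels, and the
    largest-abundance class receives whatever remains, so the counts
    always satisfy the full-additivity constraint."""
    total = S * S
    order = sorted(range(len(abundances)), key=lambda j: abundances[j])
    counts = [0] * len(abundances)
    remaining = total
    for j in order[:-1]:
        n = min(remaining, round(abundances[j] * total))
        counts[j] = n
        remaining -= n
    counts[order[-1]] = remaining
    return counts
```

For instance, abundances `[0.5, 0.3, 0.2]` with `S = 3` yield 2, 3, and 4 subpixels for the three classes, summing to the 9 subpixels of the window.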
After the above step, the number of subpixel labels for each mixed pixel is known, whereas the subpixel positions are not. In the subpixel mapping process of objective function optimization based on MBQPSO, a number of particles (the particle population) are assigned to each mixed pixel. Each of these particles is a binary bit string arranged in rows and columns, with one row per endmember class to be located in the pixel and one column per subpixel location within that pixel. These binary bit strings initialize each particle in the particle swarm.
The $d$th feature's distribution can be determined as long as the first $d-1$ classes have been determined. The particle location of the $i$th pixel can thus be expressed by a binary position matrix. The position matrix of each pixel must satisfy two constraints:

(a) The $j$th row has exactly as many elements equal to 1 as the number of subpixels assigned to the $j$th class.

(b) Each column cannot have more than one element equal to 1.
(b) Abundance Full-Additivity Constraint during Particle Updates. (a) The number of 1-bits in each row of a particle must equal the number of subpixels assigned to that row's class. However, during updates, the number of 1-bits in each row is not stable. Under this condition, the number of 1-bits in each row of each particle is checked, and randomly located deletions or additions are conducted to ensure that the number of 1-bits equals the assigned count.
(b) Only a single 1-bit is allowed in each column of each particle. When more than one bit in a column takes the value 1, this constraint is imposed by randomly moving the overlapping 1-bit of the row with the smaller abundance rate to a column that is empty for all rows [35]. Note that this constraint is enforced after the first constraint. Furthermore, an overlapping 1-bit implies that there exists a corresponding empty column.
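Both repair steps can be sketched as follows. The function and variable names are ours, and the "smaller abundance" tie-break is simplified to keeping the highest-indexed row, under the assumption that rows are ordered by increasing abundance:

```python
import random

def repair_particle(particle, counts):
    """Enforce the two constraints on a particle (list of 0/1 rows):
    (a) row j must contain exactly counts[j] ones;
    (b) each column may contain at most a single 1.
    Surplus ones are deleted and missing ones added at random positions;
    overlapping ones are moved from lower rows (assumed to be the
    lower-abundance classes) to columns that are empty in every row."""
    n_rows, n_cols = len(particle), len(particle[0])
    # (a) fix row sums
    for j, row in enumerate(particle):
        ones = [c for c in range(n_cols) if row[c]]
        while len(ones) > counts[j]:
            row[ones.pop(random.randrange(len(ones)))] = 0
        zeros = [c for c in range(n_cols) if not row[c]]
        while len(ones) < counts[j]:
            c = zeros.pop(random.randrange(len(zeros)))
            row[c] = 1
            ones.append(c)
    # (b) resolve column collisions
    empty = [c for c in range(n_cols) if all(row[c] == 0 for row in particle)]
    for c in range(n_cols):
        owners = [j for j in range(n_rows) if particle[j][c]]
        for j in owners[:-1]:          # keep the last (highest) row, move the rest
            particle[j][c] = 0
            tgt = empty.pop(random.randrange(len(empty)))
            particle[j][tgt] = 1
    return particle
```

Because moving an overlapping 1-bit frees exactly one collision per empty column, the repaired particle always satisfies both constraints when the row counts sum to at most the number of columns.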
2.4. Optimization Strategy
After the construction of the objective function and the presentation of the modified optimization algorithm, the target optimization strategy of subpixel mapping is a process of global iteration combined with local iteration. Local iteration is a pixel-by-pixel iteration, whereas one traversal over all pixels constitutes a global iteration. In order to reduce the time complexity, the optimization strategy is as follows [39].
Pixel-by-Pixel Iteration. To construct the cost function for each iteration, only the local-area perimeter of the current pixel's corresponding subpixel window and its surrounding structure is calculated.
Figure 2 shows a diagram of the pixel-by-pixel iterative optimization area. Figure 2(b) is a 3 × 3 resolution-enhanced image of the dotted area in Figure 2(a). Denote the circled pixel in Figure 2(a) as the current pixel and the dotted area in Figure 2(b) as its local region, which comprises the subpixel window and the surrounding structure of that pixel. In each iteration, the subpixel mapping of the current pixel is optimized by calculating the perimeter of its local region as the cost function.
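The local cost evaluation can be sketched as follows (names are ours; `perimeter_fn` stands for any of the three perimeter measures, and the one-subpixel ring around the window is our reading of the "surrounding structure"):

```python
def local_window_cost(label_map, r0, c0, S, perimeter_fn):
    """Cost of one pixel-by-pixel iteration step: the perimeter is
    evaluated only over the S*S subpixel window of coarse pixel
    (r0, c0) plus a one-subpixel ring around it, instead of the whole
    image. perimeter_fn maps a binary mask to a perimeter value."""
    rows, cols = len(label_map), len(label_map[0])
    top, left = max(0, r0 * S - 1), max(0, c0 * S - 1)
    bot, right = min(rows, (r0 + 1) * S + 1), min(cols, (c0 + 1) * S + 1)
    window = [row[left:right] for row in label_map[top:bot]]
    classes = {v for row in window for v in row}
    # sum the per-class perimeters within the local window only
    return sum(
        perimeter_fn([[1 if v == k else 0 for v in row] for row in window])
        for k in classes
    )
```

Restricting the evaluation to this window is what makes each local iteration cheap: swapping subpixels inside one coarse pixel cannot change perimeter contributions outside the window and its ring.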
3. Experimental Results and Discussion
Experiments are conducted to test the performance of the proposed modified BQPSO algorithm on standard test functions, one simulated image, and one real image.
3.1. The Standard Test Function Experiment
In the standard test function experiments, we use the same standard test functions as in [37]. Each test function is run with three simulation settings, with dimensions of 20, 40, and 80. The BPSO, BQPSO, and MBQPSO algorithms are each run 20 times [40], with a maximum of 200 iterations. Table 2 records, over the 20 runs, the average optimal value, the maximum value, the minimum value, and the variance for each of the three algorithms.
From Table 2, we can see that the test results of the MBQPSO algorithm are better than those of the BQPSO and BPSO algorithms. For the low-dimensional test functions, the variance of the test results is greatly reduced and the average optimal value is significantly improved. The convergence curves in Figure 3 show that MBQPSO has the best global optimization ability and convergence rate. The BQPSO algorithm takes second place and outperforms BPSO in the simulations. The adjustment time of the convergence curve is shorter and the final result is closer to the optimum.
3.2. The Subpixel Mapping Experiment of Different Algorithms
In the subpixel mapping experiments, consisting of simulated and real images, we compare six proposed subpixel mapping algorithms, formed by combining the three objective functions (_point, _chain, and _gap) with the two optimization algorithms (BQPSO and MBQPSO); they are named PBQPSO, CBQPSO, GBQPSO, PMBQPSO, CMBQPSO, and GMBQPSO. In addition, we compare the six proposed methods with the pixel swapping algorithm (PSA) in both the simulated and real experiments. The results of the six proposed subpixel mappings are compared under the conditions of different objective functions and different optimization strategies. Since the subpixel mapping result has the same form as a classification result, the Kappa coefficient and the recognition rate are used to evaluate the performance of the different subpixel mapping algorithms.
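The two evaluation indexes can be computed as follows. This is a standard implementation of the agreement measures, not code from the paper; both functions compare a predicted subpixel label sequence against the reference labels.

```python
def recognition_rate(truth, pred):
    """Fraction of subpixel labels that match the reference map."""
    assert len(truth) == len(pred)
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

def kappa(truth, pred):
    """Cohen's kappa coefficient: observed agreement corrected for the
    agreement expected by chance from the marginal label frequencies."""
    n = len(truth)
    labels = sorted(set(truth) | set(pred))
    po = sum(t == p for t, p in zip(truth, pred)) / n
    pe = sum((truth.count(c) / n) * (pred.count(c) / n) for c in labels)
    return (po - pe) / (1.0 - pe) if pe < 1.0 else 1.0
```

For example, with reference `[0, 0, 1, 1]` and prediction `[0, 0, 1, 0]`, the recognition rate is 0.75 while kappa drops to 0.5, reflecting the chance-agreement correction.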
In order to construct an accurate abundance map, the simulation is based on the real feature distribution of the Indiana AVIRIS Indian Pines hyperspectral data. The ground truth gives 17 different feature classes (including the background), some of which are merged to obtain nine categories. The simulated hyperspectral image is then generated from nine spectral features, selected from the USGS spectral library, on the basis of the category labels.
Two hyperspectral images are constructed for the later experiments:
(1) Figure 4(a) is cropped from the 1 : 144, 1 : 144 region of the original image and then filtered by a 3 × 3 mean filter; it is referred to as image 1.
(2) Figure 4(b) is cropped from the 1 : 60, 70 : 144 region of the original image and then filtered by the same 3 × 3 mean filter; it is referred to as image 2. Image 2 is thus in fact a subimage of image 1.
This part comprises five experiments; only the fourth experiment uses a spatial resolution enhancement ratio of 5 × 5, while the others use 3 × 3. In order to avoid the influence of the initial-value sensitivity of the particle swarm algorithm, the same initial particle swarm is used in each experiment, and each experiment is run five times.
We compare the results of the above seven algorithms on Figure 4(b). The initial recognition rate (the ratio of the number of correctly classified labels to the total number of category labels) is 93.89% in all seven experiments. For the BQPSO-based and MBQPSO-based methods, the particle number is 50 and the maximum number of iteration steps is 30.
Table 3 gives the subpixel location results for the different combinations of objective functions and optimization algorithms. As can be seen from Table 3, we have the following:

(1) Under the same objective function, the recognition rate and Kappa coefficient of MBQPSO are greatly improved.

(2) Comparing the objective function calculations and subpixel mapping results, the _gap method is more suitable for constructing the target function for subpixel positioning, having the shortest running time and the best mapping results.

(3) Compared with PSA, the six proposed methods based on BQPSO and MBQPSO obtain higher recognition rates and Kappa coefficients.
As can be seen from Figure 5, MBQPSO produces fewer isolated points than BQPSO and its result is better. From the iterative curves of Figure 6, all the BQPSO results converge within only three iterations; after the modification, the number of iteration steps increases and the corresponding results are better. Compared with the above six methods, PSA produces more isolated points, and its recognition rate and Kappa coefficient are lower.
3.3. The Influence of Parameters
3.3.1. The Number of Particles
To investigate the influence of the number of particles, the following simulation parameters are used: the maximum number of iterations is set to 30 and the number of particles is set to 30, 40, 50, 60, and 100. The test results are shown in Figure 7.
The following information can be seen from Figure 7.
(1) For the same number of particles, the recognition rate and Kappa coefficient are best when MBQPSO is combined with the _gap perimeter calculation method.

(2) As the number of particles increases, the running time generally trends upward. Among them, the running time of the _point method with MBQPSO rises on the whole as the number of particles increases, but it declines slightly in the middle, which may be caused by the uncertainty of the two constraints during the iterative process.

(3) The running time of the _gap method becomes longer as the number of particles increases. Its recognition rate and Kappa coefficient decrease at first and then increase, while those of the _point and _chain methods increase at first and then decrease. The recognition rate of the GMBQPSO algorithm is highest with 50 particles.
3.3.2. Adaptability of Image Size
This section verifies the applicability of the modified algorithm to different image sizes. Based on the real feature distribution of the whole Indian Pines image, we design a simulation experiment, as shown in Figure 8(a). Using image 1 in Figure 4, the results of the above six algorithms are compared. The number of particles is 50 and the maximum iteration number is 30. Figure 8 depicts the subpixel mapping results, and the evaluation indexes of the mapping results are given below.
Table 4 and Figure 8 show the improvement achieved by the modified algorithm. The results in Table 4 quantitatively confirm that the GMBQPSO algorithm has the best applicability: its Kappa coefficient and recognition rate are the highest, although its operation time is longer. From Figure 8, it can be seen that there are many isolated points in the results of the BQPSO algorithm, whereas most boundaries between categories in the MBQPSO results are more distinct. Comparing Tables 4 and 3, it can be seen that the image size has no effect on subpixel mapping accuracy, but the bigger the image, the larger the number of mixed pixels. Statistical analysis of Figure 4 shows that the number of mixed pixels in image 1 is 5.5 times that in image 2. Comparing the operating times of Tables 4 and 3, and considering the two constraints, the operating time is proportional to the number of mixed pixels in the image. In summary, GMBQPSO is superior to the other algorithms in both visual perception and quantitative evaluation (the Kappa coefficient and the recognition rate).
3.3.3. Influence of Spatial Resolution Enhancement Ratio
In order to verify the practicality of the optimized objective function for spatial resolution enhancement ratios greater than 3 × 3, the original image is filtered by a 5 × 5 mean filter and subpixel mapping is implemented with a spatial enhancement ratio of 5 × 5.
Because the 5 × 5 filtered image contains fewer pure pixels, it is necessary to adjust the abundance threshold. A pixel is regarded as pure only if the minimum difference between the maximum abundance and each of the other abundances in the pixel is greater than 0.9.
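The two operations above can be sketched as follows: simulating the coarse image by averaging non-overlapping 5 × 5 blocks, and applying the pure-pixel test to a pixel's abundance vector. This is a minimal sketch under the stated assumptions (image dimensions divisible by the block size; function names are ours):

```python
import numpy as np

def block_mean_downsample(image, s):
    """Simulate a coarse image by averaging non-overlapping s-by-s blocks
    (sketch; assumes height and width are multiples of s)."""
    h, w = image.shape[:2]
    return image.reshape(h // s, s, w // s, s, -1).mean(axis=(1, 3)).squeeze()

def is_pure(abundances, threshold=0.9):
    """A pixel is treated as pure only if the maximum abundance exceeds
    every other abundance by more than the threshold (0.9 in this section)."""
    a = np.sort(np.asarray(abundances))[::-1]   # descending order
    return (a[0] - a[1]) > threshold            # min gap = max minus runner-up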
As can be seen from Table 5, we have the following:
(1) The results of the improved MBQPSO are obviously better than those of the BQPSO algorithm.
(2) With 100 particles, the recognition rate and the Kappa coefficient are superior to those with 50 particles, but the running time is longer. Considering both time and accuracy, the choice of 50 particles is reasonable.
The results in Figure 9 show that MBQPSO combined with the three perimeter calculation methods produces fewer isolated points, and the GMBQPSO results are optimal. However, some isolated points still remain in the 5 × 5 filtered image, so further measures should be taken in practical applications to achieve better results.
3.4. The Real Image Subpixel Mapping Experiment
The experiments on real images are conducted on two different hyperspectral datasets. The first experiment uses a ROSIS hyperspectral remote sensing image of Pavia University in northern Italy. The size of the original image is 610 × 340 pixels, and it contains nine classes whose sizes range from 947 to 18649 pixels. The number of bands is 103 and the geometric resolution is 1.3 m. We sample a 117 × 75 region containing five kinds of features; the real surface feature distribution is shown in Figure 10(b). A 3 × 3 mean filter is applied to the original image to obtain the 60th-band gray image shown in Figure 10(a). The number of endmembers is estimated as 5 by VD [38]. A pixel is regarded as pure only if the minimum difference between the maximum abundance and each of the other abundances in the pixel is greater than a certain threshold [33–35]. Because the metal area is visually distinct, we perform a statistical analysis of the recognition rate only on the metal area. Figure 10(c) shows the random mapping results, Figures 10(d)–10(i) show the subpixel mapping results of the different optimization algorithms, and Table 6 gives a comparison of recognition rates.
The second image analyzed in our experiments is the Salinas dataset. The size of the original image is 512 × 217 pixels, and it contains 204 bands and 17 classes. A 3 × 3 mean filter is applied to the original image to obtain a downsampled image, and we select three bands to form the pseudocolor image shown in Figure 11(a). We merge class 1 and class 2 into one class, merge class 8 and class 15 into one class, and rename class 16 as class 2. The number of endmembers is estimated as 15 by VD, and the endmembers are extracted by computing the mean of each category. A pixel is regarded as pure only if the minimum difference between the maximum abundance and each of the other abundances in the pixel is greater than a certain threshold. The background image is shown in Figure 11(b). In this part, we perform a statistical analysis of the recognition rate only on the other 14 categories, excluding the background. Figure 11(c) shows the random mapping results, Figures 11(d)–11(f) show the subpixel mapping results of the different optimization algorithms, and Table 7 gives a comparison of recognition rates.
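The class-merging step above amounts to relabeling the ground-truth map before evaluation. A minimal sketch of that relabeling (the function name `merge_classes` is ours, not from the paper):

```python
import numpy as np

def merge_classes(labels, merge_map):
    """Relabel a classification map by merging classes (sketch).
    merge_map sends an original class id to its merged id; ids not
    listed are kept unchanged. Each rule is applied against the
    ORIGINAL labels, so renaming class 16 to class 2 does not get
    chained through the 2 -> 1 merge."""
    labels = np.asarray(labels)
    out = labels.copy()
    for old, new in merge_map.items():
        out[labels == old] = new
    return out

# Merges described in the text for the Salinas ground truth:
salinas_merges = {2: 1, 15: 8, 16: 2}
```

Applying each rule against the original map rather than the partially relabeled one keeps the result independent of dictionary iteration order, which matters here because class 2 is both a merge target and a merge source.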
Table 6 compares the recognition rate and the running time of the various algorithms with abundance thresholds of 0.4 and 0.5, respectively. Figure 10 gives the results of the algorithms with an abundance threshold of 0.4 and 50 particles.
From Table 6, we have the following:
(1) With the same number of particles, the recognition rate of MBQPSO is better than that of BQPSO, and the recognition rate of GMBQPSO is the highest.
(2) The results with an abundance threshold of 0.4 are better than those with 0.5; both the recognition rate and the Kappa coefficient are improved.
(3) In terms of particle number, using 100 particles improves the Kappa coefficient and the recognition rate, but the running time is longer.
(4) Compared with the above six algorithms, PSA produces more isolated points, and its recognition rate and Kappa coefficient are lower.
Following the analysis of the first real experiment, Table 7 compares the recognition rate and the running time of the various algorithms with an abundance threshold of 0.4, and Figure 11 gives the results of the algorithms with an abundance threshold of 0.4 and 50 particles. Based on the subpixel mapping results in Table 6, we present only the subpixel mapping result images of GMBQPSO, GBQPSO, and PSA in Figure 11. The marginal areas of the MBQPSO mapping results are better than those of the other methods. The recognition rate of MBQPSO is better than that of the corresponding BQPSO, and the recognition rate of GMBQPSO is the highest. The running time of the corresponding BQPSO is shorter than that of MBQPSO, owing to the premature convergence of the BQPSO-based method. In addition, the running time is larger than in the ROSIS experiment because the Salinas dataset is about 20 times larger than the ROSIS dataset.
4. Conclusion
In this paper, a modified subpixel mapping method based on binary quantum particle swarm optimization (MBQPSO) is proposed. To improve the adaptability of the objective function and lower the time complexity of the optimization algorithm, we use three kinds of regional perimeter minimization to describe spatial correlation. At the same time, the discretization process of particle position updating is corrected to overcome the premature convergence of the original BQPSO algorithm during iteration. Simulated and real image experiments show that the proposed method is superior to the original method in Kappa coefficient, recognition rate, and running time, and improves mapping accuracy. The _gap method obtains the optimal mapping result at the fastest speed and is therefore well suited to subpixel mapping. The proposed method not only reflects the spatial detail of ground objects but also reconstructs the shape features of small areas. Compared with the original linear optimization method, both visual quality and practical classification accuracy are greatly improved. The assumption of spatial correlation between pixels holds in most cases, but because of the complexity of actual scenes, it introduces errors in some cases. Further research is therefore needed to obtain a better expression of the subpixel spatial distribution rules and to improve the accuracy of subpixel mapping.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (no. 61671408), Shanghai Aerospace Science and Technology Innovation Fund (no. SAST2015033), and the Joint Funds of the Ministry of Education of China (no. 6141A02022314).