Abstract

Existing remote sensing data classification methods cannot achieve spectrum sharing among remote sensing images, which leads to poor fusion and classification of remote sensing data. Therefore, a high spatial resolution remote sensing data classification method based on spectrum sharing is proposed. A page frame recovery algorithm (PFRA) is introduced to allocate wireless spectrum resources in the low-frequency band, and a dynamic spectrum sharing mechanism is designed between the primary and secondary users of remote sensing images. On this basis, D-S evidence theory is used to fuse high spatial resolution remote sensing data and to correct the pixel brightness of the fused multispectral image. The initial data are normalized, spectral image features are extracted, a convolutional neural network classification model is constructed, and the remote sensing image is segmented. Experimental results show that the proposed method segments high spatial resolution images in less time and with higher accuracy, classifies high spatial resolution remote sensing data more efficiently, and achieves better data classification accuracy and remote sensing image fusion quality.

1. Introduction

High spatial resolution remote sensing images offer greatly improved spatial resolution, which is reflected in the obvious internal differentiation of ground features, richer texture, abundant detail, and prominent edges [1]. Remote sensing images, especially high-resolution ones, have broad application prospects in studying land use and land cover change [2]. However, owing to the uncertainty in acquiring and processing high spatial resolution remote sensing information, the classification accuracy of remote sensing data struggles to meet the needs of land cover change analysis, environmental monitoring, and thematic information extraction.

To address this problem, scholars at home and abroad have put forward several research results. Reference [3] proposed a remote sensing image classification algorithm combining improved fuzzy C-means (IFCM) clustering with variational inference: in the feature extraction stage, a spatial pixel template method extracts pixel feature points, and the posterior distribution of the parameters is approximated with the variational inference method from Bayesian statistics to obtain the classification results. Reference [4] proposed a classification fusion algorithm based on machine learning, which outputs classification fusion results at the rank level and the measurement level and uses typical areas of Landsat 8 remote sensing images of Beijing for classification prediction. Reference [5] proposed a new object-oriented classification method: a segmentation algorithm performs an initial over-segmentation of the original image to obtain segmentation units with good homogeneity; these units are treated as the objects to be processed; a Gravitational Self-Organizing Map (G-SOM) clusters the segmented objects; and a consistency function integrates the diverse clustering results at the least cost, realizing fast and automatic decision classification. Reference [6] studied a multisensor classification strategy based on a deep learning ensemble process and a decision fusion framework, in which random feature selection generates two independent CNN-SVM ensemble systems, one for LiDAR and VIS data and the other for HS data, to overcome similarity and overfitting.

However, none of the above studies realizes spectrum sharing among remote sensing images, resulting in poor remote sensing data fusion and classification. Therefore, a high spatial resolution remote sensing data classification method based on spectrum sharing is proposed. First, the low-frequency wireless spectrum resources of remote sensing images are shared. The D-S evidence theory fusion method is then used to fuse high spatial resolution remote sensing data, and a convolutional neural network classification model is constructed to classify the data based on spectrum sharing. The experimental results show that the proposed method segments images in less time, segments remote sensing images accurately, and produces a better remote sensing data fusion effect, indicating that the method is practical and can provide a reliable theoretical basis for this field.

2. Method

2.1. Remote Sensing Image Dynamic Spectrum Sharing Method

A PFRA-based radio spectrum resource allocation optimization method is applied to remote sensing images in the low-frequency band [7, 8] to alleviate the shortage of spectrum resources. If the number of available channels is , the channel gains of the subtransmitter-subreceiver, subtransmitter-main receiver, and main transmitter-subreceiver pairs in channel are described as , , and , respectively. Channel gain is composed of large-scale attenuation and small-scale attenuation. In channel , the transmitting powers of the secondary transmitter and the main transmitter are and , respectively, for each secondary remote sensing link . Each remote sensing link has a channel request , so the channel allocation matrix must satisfy the request convergence conditions of the sub-remote sensing link channels, and each sub-remote sensing link channel request is set to the same fixed value, namely,

During wireless spectrum sharing of remote sensing images, a secondary user may share the same spectrum with the primary user only when the disturbance it causes to the primary user is below a fixed limit [9]. Meanwhile, each secondary transmitter has a maximum transmitting power, and the sum of its transmitting powers across all frequency bands must not exceed this maximum, namely,

In allocating wireless spectrum resources in the low-frequency band of a remote sensing image, the network revenue should be considered. The signal-to-interference-plus-noise ratio (SINR) of the received signal of the subreceiver in channel is described as follows, where represents the useful signal of the sub-remote sensing link, represents the disturbance originating from the remaining sub-remote sensing links, represents the disturbance originating from the main remote sensing link, and represents the noise power. Therefore, the corresponding achievable rate is

The achievable rate of a sub-remote sensing link in channel is correlated with the channel allocation result, so the channel allocation matrix can be expressed as

In the formula, represents the assignment of channel to sub-remote sensing link , and represents that channel is not assigned to sub-remote sensing link . Only when channel is assigned to the sub-remote sensing link can the corresponding achievable rate be obtained. Therefore, the sum of the rates of all sub-remote sensing links is expressed as

The greater the rate sum of the sub-remote sensing links, the better the spectrum utilization.
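The rate-sum objective above can be sketched numerically. This is a minimal illustration, not notation from the paper: the names (`A`, `p_sub`, `g_ss`, `interference`, `noise`) and the log2(1 + SINR) rate form are assumptions.

```python
import numpy as np

def sum_rate(A, p_sub, g_ss, interference, noise):
    """Achievable sum rate of sub-remote-sensing links under an allocation.
    A: (L, K) 0/1 matrix, A[l, k] = 1 if channel k is assigned to link l;
    p_sub: (L, K) secondary transmitter power; g_ss: (L, K) own-link gain;
    interference: (L, K) power received from other links and the main link;
    noise: scalar noise power."""
    sinr = (p_sub * g_ss) / (interference + noise)
    rates = np.log2(1.0 + sinr)          # per-link, per-channel achievable rate
    return float(np.sum(A * rates))      # only assigned channels contribute
```

Maximizing this sum over valid allocation matrices is the spectrum utilization objective described above.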

Wireless spectrum sharing of a remote sensing image in the low-frequency band amounts to solving the channel assignment matrix . The sum of the perturbation elements in each channel is taken as the perturbation coefficient of that channel [10] and expressed as follows:

Formula (9) ensures that the mean of the disturbance coefficients is minimized and that the disturbance between secondary users assigned to the same spectrum is minimized. In addition, in allocating remote sensing image spectrum resources, the disturbance to the primary users should be minimized so that the disturbance experienced by every primary user stays within the allowed limit. Therefore, the disturbance received by primary user in spectrum may be expressed as

In formula (11), indicates the magnitude of the disturbance caused to primary user by the secondary transmitters in spectrum .

Since the spectrum of remote sensing images changes dynamically, the dynamic spectrum sharing problem can be described as follows: given the limited available spectrum resources, a dynamic spectrum sharing mechanism between the primary and secondary users of remote sensing images is modelled [11] to share the idle frequency bands with the secondary users and optimize utilization.

Figure 1 depicts the relationship between the dynamic spectrum sharing links of remote sensing images.

The spectrum value estimate of the remote sensing image is as follows, where describes the channel bandwidth of spectrum , indicates the demand of secondary user for idle channels, and is a constant affected by factors such as transmission power, noise, and antenna gain; the channel value estimation matrix is thus expressed as .

Suppose each channel is available to only one secondary user, and every secondary user can select only one free channel at a time. Let denote the sharing matrix: when element , the spectrum holder shares the free channel with secondary user , and when , it does not. Therefore, a reasonable sharing matrix must meet the following conditions:

In formula (13), the constraint expresses that each spectrum holder can share free channels with at most one secondary user. Optimal dynamic spectrum sharing maximizes spectrum efficiency, which can be obtained with the following linear optimization equation:

The constraints are expressed as
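One simple way to approximate the constrained assignment described above is a greedy heuristic. This sketch is an illustration, not the paper's solver: the value matrix `V` plays the role of the channel value estimates, and a greedy pass enforces "one user per channel, one channel per user".

```python
import numpy as np

def greedy_share(V):
    """V[i, j]: estimated value of free channel i to secondary user j.
    Returns a 0/1 sharing matrix X with at most one user per channel and
    at most one channel per user, greedily maximizing total shared value."""
    V = np.asarray(V, dtype=float)
    X = np.zeros(V.shape, dtype=int)
    used_ch, used_user = set(), set()
    # visit (channel, user) pairs in decreasing order of estimated value
    for idx in np.argsort(-V, axis=None):
        i, j = np.unravel_index(idx, V.shape)
        if i not in used_ch and j not in used_user:
            X[i, j] = 1
            used_ch.add(i)
            used_user.add(j)
    return X
```

An exact solution to the linear optimization could instead use the Hungarian algorithm; the greedy version is shown only because it is short and dependency-free.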

2.2. Normalization of High Spatial Resolution Remote Sensing Data

To achieve an ideal image processing effect, the initial data must be normalized before classification. In a classification model based on a convolutional neural network [12, 13], mapping the input data into different classification spaces with a standardization method can produce large differences between classification results. The spectral reflectance differs greatly between individual pixels of the initial image, and this large numerical span increases the computational difficulty of the separation process. Therefore, each trajectory segment of the high spatial resolution remote sensing data is first normalized. This operation makes the trajectory of each pixel's spectral curve more distinct and easier to judge, accentuates the variation of the trajectory, and reduces complexity, thereby improving the speed and accuracy of classification training. Treating all pixels as a column vector , the formula is as follows, where represents the pixel mean of the initial image and represents the pixel standard deviation of the image in the -th spectral band.
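The per-band mean/standard-deviation normalization described above is a standard z-score; a minimal sketch, assuming the data are arranged as a pixels-by-bands matrix:

```python
import numpy as np

def zscore_bands(X):
    """X: (n_pixels, n_bands) spectral matrix. Normalize each band to zero
    mean and unit standard deviation: x' = (x - mean) / std."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0   # guard against constant bands
    return (X - mu) / sigma
```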

2.3. Classification Method of High Spatial Resolution Remote Sensing Data Based on Spectrum Sharing

High spatial resolution remote sensing monitoring refers to the use of high spatial resolution remote sensing technology for target monitoring in order to achieve quantitative analysis and determination of the characteristics and processes of surface change from monitoring data [14]. So far, high spatial resolution remote sensing technology has been widely used in meteorology, land, ocean, agriculture, geology, military, and other fields.

Classification is one of the main objectives of high spatial resolution remote sensing monitoring; it divides each pixel or region into a certain terrain type based on features collected by airborne LiDAR and hyperspectral technology [15]. The basic principle is that different kinds of objects respond differently to electromagnetic waves, so the high spatial resolution remote sensing data collected by airborne LiDAR and hyperspectral technology differ between objects, which produces different feature parameters. Data classification is achieved by using these feature differences to distinguish the target object from other objects [16].

Based on the above description, the classification of high spatial resolution remote sensing monitoring data is generally divided into six steps, as shown in Figure 2.

As can be seen from Figure 2, the classification of high spatial resolution remote sensing monitoring data comprises data acquisition; preprocessing of the monitoring data to improve its quality; feature extraction to select features that reflect the characteristics of the target object; classification, applying classification algorithms to the extracted features; evaluation and analysis of the effectiveness and feasibility of the method through simulation experiments; and output of the test results in the form of images, statistical tables, etc. [17, 18].

2.3.1. Image Fusion

The D-S evidence theory fusion method is used to fuse high spatial resolution remote sensing data [19]. D-S evidence theory constructs a belief structure as follows.

Basic probability assignment: let denote a finite set (the frame of discernment) and a basic probability assignment on it, which satisfies the following conditions [20]: the mass of the empty set is 0; the mass of any subset lies between 0 and 1; and the masses of all subsets sum to 1.

The basic probability function mentioned above has a certain degree of confidence, which is usually subjectively defined.

Trust function is given by

The belief function of any subset of the frame is the sum of the basic probability masses of all its subsets; it describes the total degree of belief that the proposition represented by the subset is true.

The expression of likelihood function [21] is

In this expression, stands for the complement of . Evidence theory can combine different, mutually related pieces of evidence and draw a final conclusion from their fusion. The fusion rules are as follows: assuming that and are the basic probability functions under different pieces of evidence , the following combination rule of evidence theory may be applied:

During fusion, if a proposition has belief 0, it is discarded; without compensation, the total belief would then be less than 1, so the normalization parameter is introduced to compensate for the discarded part, and the total belief after fusion equals 1. The fused expression is the orthogonal sum ; similarly, when multiple related pieces of evidence are fused in parallel, the orthogonal sum is applied successively, giving .
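Dempster's combination rule, including the conflict normalization just described, can be sketched directly. Representing each basic probability assignment as a dict from hypothesis sets to masses is an implementation choice, not notation from the paper:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozenset
    hypotheses to masses) with Dempster's rule: products over intersecting
    pairs are accumulated, conflicting (empty-intersection) mass is removed,
    and the remainder is renormalized so the total belief is 1."""
    combined, conflict = {}, 0.0
    for A, wa in m1.items():
        for B, wb in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict          # normalization factor compensating discarded mass
    return {A: w / k for A, w in combined.items()}
```

Fusing more than two pieces of evidence is done by applying the rule successively, since the orthogonal sum is associative.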

In the operation of D-S evidence theory, accumulating evidence continuously narrows the hypothesis set, the time and information complexity is low, and the method handles the instability caused by fuzziness very well.

Despite the preliminary data processing, the amount of data the coordinator receives in each acquisition is still very large, including the environmental parameters transmitted from the acquisition nodes [22]. D-S evidence theory is used to fuse the various environmental parameters, and the support degree of each set of data for the various hypotheses guides the control decision. The data fusion method is shown in Figure 3.

2.3.2. Pixel Brightness Value Correction

Because pixel brightness saturates differently across images in the HS-2 satellite dataset, the pixel brightness of the fused multispectral images must be corrected [23, 24]. Taking the upper limit of the cumulative pixel brightness value within the range [1, 63] as the standard and the standard pixel brightness value of the multispectral camera as the reference data, a univariate quadratic model is constructed:

In (19), Dq represents the corrected pixel brightness value, D represents the pixel brightness value before correction, and δ, γ, and λ are regression parameters. The univariate quadratic model in formula (19) is used to eliminate unstable pixels from the HS-2 high spatial resolution remote sensing images [25, 26].
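A correction consistent with this description can be sketched as follows. The exact functional form Dq = δD² + γD + λ is an assumption (the equation itself was not reproduced above), and the parameter values would come from regression against the reference data:

```python
import numpy as np

def correct_brightness(D, delta, gamma, lam):
    """Apply an assumed univariate quadratic correction Dq = delta*D^2 +
    gamma*D + lam to pixel brightness values D (array-like)."""
    D = np.asarray(D, dtype=float)
    Dq = delta * D**2 + gamma * D + lam
    return np.clip(Dq, 0, None)  # brightness cannot be negative
```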

3. Classification Algorithm Design of High Spatial Resolution Remote Sensing Data

3.1. Spectral Image Feature Extraction

Features of high spatial resolution remote sensing data are mainly extracted with a sinusoidal two-dimensional transformation function modulated by a Gaussian function (i.e., a Gabor-type filter), whose expression is [27–29]

In formula (20), the parameters denote, respectively, the spatial coordinates; the wavelength of the sinusoidal factor; the orientation of the parallel stripes of the function; the spatial phase offset; the ellipticity (aspect ratio) of the function; the standard deviation of the Gaussian envelope, determined jointly by space and wavelength; and the composite function with its default real and imaginary parts.
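Under the reading that formula (20) is a Gabor function, the real part of such a kernel can be sketched as below. Parameter names follow the standard Gabor convention and are assumptions, since the symbols were not reproduced above:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, psi, gamma, sigma):
    """Real part of a Gabor kernel: a sinusoid of the given wavelength and
    orientation theta, modulated by a Gaussian envelope with standard
    deviation sigma and aspect ratio gamma, with phase offset psi."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + psi)
    return envelope * carrier
```

Convolving the spectral image with a bank of such kernels at several orientations and wavelengths yields the texture features used for classification.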

In order to improve the accuracy of spatial resolution remote sensing data classification, a spectral feature extraction model is proposed. The calculation formula is given by

In the formula, the key to feature extraction is locating the central pixel: an extraction tunnel is designed that focuses on the position of the central pixel; the spectral feature vectors produced by the input and convolution stages are converted to one-dimensional vectors; then, taking as the radius and the pixel as the center point, the data within that radius are fed as the initial input to the designed feature extraction tunnel, realizing spectral feature extraction in the region centered on the target pixel [30]. The design formula is as follows:

When the 2D transform function approaches saturation, its gradient tends to 1, so the problem of gradient dispersion is alleviated during extraction. Therefore, the tunnel extraction method based on the 2D transform function is not only accurate but also fast.

3.2. Construction of Convolution Neural Network Classification Model

Different convolutional neural network classification models can be constructed with different training methods [31, 32]. Figure 4 shows the structure of the classification model. The training sample coefficient is set to , so that , where is the weight coefficient between layers and .

A high spatial resolution remote sensing data matrix is established in which each pixel sample forms one vector: the number of columns is set to 1, and the number of rows equals the number of fragments of each high spatial resolution remote sensing datum. On this basis, the convolutional neural network classification model is constructed. The input layer is therefore of size , where is the number of fragments of the high spatial resolution remote sensing data. The hidden convolutional layer consists of 20 convolution kernels of size , containing nodes, so there are training coefficients between the convolutional layer and the input layer. The pooling layer is the second hidden layer; its function size is , and it contains nodes with no trainable coefficients.
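The forward pass of the hidden layers just described (20 one-dimensional convolution kernels followed by pooling over the spectral axis) can be sketched in plain numpy. The kernel size, ReLU activation, and pooling width are illustrative assumptions, since the paper leaves them unspecified:

```python
import numpy as np

def conv1d_forward(x, kernels, pool=2):
    """x: (n,) spectral vector for one pixel; kernels: (20, k) convolution
    kernels. Performs a valid 1-D convolution, a ReLU-style activation
    (assumed), and non-overlapping max pooling of width `pool`."""
    n_k, k = kernels.shape
    out_len = x.size - k + 1
    conv = np.empty((n_k, out_len))
    for i in range(n_k):
        for t in range(out_len):
            conv[i, t] = np.dot(x[t:t + k], kernels[i])  # sliding dot product
    conv = np.maximum(conv, 0.0)
    usable = (out_len // pool) * pool                    # drop the ragged tail
    pooled = conv[:, :usable].reshape(n_k, -1, pool).max(axis=2)
    return pooled
```

A real implementation would use a deep learning framework; this sketch only makes the layer shapes concrete, e.g. a 20-kernel layer over a length-n input yields a (20, n - k + 1) feature map before pooling.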

3.3. Remote Sensing Image Segmentation

After preprocessing, the high spatial resolution remote sensing image is segmented according to graph cut theory to obtain a large number of image blocks [33, 34]. In a high spatial resolution remote sensing image, boundary pixels and color fluctuation are the key features. Based on these features, the target boundary can be described by an energy function, which is mapped onto an s-t network so that the image boundary can be cut at minimum cost. The RGB distance between two nodes exhibiting these features is obtained with the following formula, where and represent any two nodes in the high spatial resolution remote sensing image and and are their corresponding pixel RGB values.

The edge weight of the s-t network is determined by the following formula, where and represent the s-t network boundary cost and node segmentation cost, respectively.
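A minimal sketch of these two quantities follows. The Euclidean RGB distance and the Gaussian form of the edge weight are common graph-cut choices and are assumptions here, since the formulas themselves were not reproduced above:

```python
import numpy as np

def rgb_distance(c1, c2):
    """Euclidean distance between two pixel RGB values."""
    c1 = np.asarray(c1, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    return float(np.linalg.norm(c1 - c2))

def edge_weight(c1, c2, sigma=10.0):
    """Boundary edge weight for the s-t network: similar colors get large
    weights (expensive to cut) and dissimilar colors small weights, so the
    minimum cut tends to follow object boundaries."""
    return float(np.exp(-rgb_distance(c1, c2) ** 2 / (2 * sigma ** 2)))
```

With these weights on pixel-adjacency edges, a max-flow/min-cut solver yields the minimum-cost boundary described in the text.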

When the initial annular region contains the land class boundary of high spatial resolution remote sensing image, in order to minimize the cost of image cutting, it is necessary to ensure that the active contour of S-T network includes the land class boundary of remote sensing image.

The segmentation cost can be expressed as , and the curve with the smallest cost is the optimal segmentation curve of the land-class boundary of the high spatial resolution remote sensing image [35]:

In the s-t network, owing to the influence of boundary thickness on , the RGB values of the pixels on either side of the boundary of the high spatial resolution remote sensing image fluctuate significantly; the smaller the value, the smaller the cumulative weight of the pixels on the boundary [36]. The s-t network is segmented to obtain the ring line , whose segmentation cost is the cumulative weight of the edges it contains.

4. Analysis of Experimental Results

4.1. Comparison of Remote Sensing Image Fusion Effects

To judge accurately how spectral information and spatial details change before and after fusion, statistical quantitative indicators are used to evaluate the remote sensing image fusion effect. In the experiment, the correlation coefficient method is used to evaluate the fusion effect of high spatial resolution remote sensing images. The correlation coefficient is calculated as follows:

In the above formula, and represent the multispectral image and the panchromatic image, respectively, represents the number of rows of the image, and represents the number of columns. The calculation formula is as follows:
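A correlation-coefficient computation consistent with this description is the standard Pearson correlation over all pixels; a minimal sketch:

```python
import numpy as np

def correlation_coefficient(img_a, img_b):
    """Pearson correlation coefficient between two equal-sized images,
    used here to compare the fused image with a reference band."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A value near 1 indicates that the fused image preserves the spectral information of the reference band well.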

Because the study area contains a large vegetation area, the panchromatic band image and bands 2–4 of the multispectral image in the remote sensing data of the study area are selected when using this method to extract ground-object information. To verify the effect and feasibility of this method and strengthen the experimental results, the remote sensing image classification method based on IFCM clustering and variational inference proposed in reference [3] and the remote sensing image classification method based on heterogeneous machine learning algorithm fusion proposed in reference [4] are used as comparison methods. The remote sensing image fusion results of the three methods are shown in Figure 5.

Figure 5 shows that, compared with the two comparison methods from references [3] and [4], the remote sensing image fused by the proposed method has higher definition and more effectively enhances the spatial detail features of the remote sensing image.

The image fusion effect of the proposed method is compared with that of the methods of references [3] and [4]. The correlation coefficients of different bands of the multispectral images are shown in Table 1.

Table 1 shows that, compared with the methods of references [3] and [4], the remote sensing image fused by the proposed method has a higher correlation coefficient, indicating that this method retains spectral information best.

4.2. Comparison of Accuracy and Kappa Coefficient

Taking the overall accuracy and kappa coefficient as test indicators, the higher the two coefficients, the better the classification effect. The calculation formulas of overall accuracy and kappa coefficient are as follows, where represents the total number of samples; represents the number of samples in the error matrix; and and represent the number of samples in row and column , respectively.
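Both indicators can be computed from the error (confusion) matrix; a minimal sketch of the standard definitions:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """cm: square confusion matrix (rows = reference, cols = predicted).
    Returns overall accuracy and Cohen's kappa coefficient."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                # observed agreement
    pe = np.dot(cm.sum(axis=1), cm.sum(axis=0)) / n**2   # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa
```

Kappa discounts the agreement expected by chance, so it is a stricter indicator than overall accuracy alone.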

The overall accuracy and kappa coefficient test results of different methods are shown in Figures 6 and 7, respectively.

Figures 6 and 7 show that the overall accuracy and kappa coefficient obtained by the proposed method over multiple iterations are higher than those of the remote sensing image classification method based on IFCM clustering and variational inference and the method based on the fusion of heterogeneous machine learning algorithms. This is because the proposed method introduces the page frame recovery algorithm (PFRA) to complete the allocation of low-frequency wireless spectrum resources and designs a dynamic spectrum sharing mechanism between primary and secondary users of remote sensing images, which improves the overall accuracy and kappa coefficient.

4.3. Comparison of Time Coefficient Indicators

Taking the time coefficient as the test index, the proposed method, the remote sensing image classification method based on IFCM clustering and variational inference, and the method based on the fusion of heterogeneous machine learning algorithms are tested. The larger the time coefficient, the longer the method takes. Figure 8 shows the time coefficient test results.

Figure 8 shows that the time coefficients of the proposed method are lower than those of the IFCM clustering and variational inference method and the heterogeneous machine learning fusion method. This is because the proposed method uses D-S evidence theory to fuse the high spatial resolution remote sensing data, corrects the pixel brightness of the fused multispectral image, and normalizes the initial data before extracting spectral image features, which reduces the time required and improves the classification efficiency of remote sensing data.

4.4. Comparison of Time Consumption and Accuracy of Remote Sensing Image Segmentation

The segmentation results of the proposed method, the classification method based on IFCM clustering and variational inference, and the classification method based on heterogeneous machine learning algorithm are shown in Table 2.

Table 2 shows that the average accuracy of remote sensing image segmentation with the proposed method is 99.67%, which is 2.24% higher than that of the method based on IFCM clustering and variational inference and 11.37% higher than that of the method based on heterogeneous machine learning algorithm fusion. Meanwhile, the average segmentation time of the proposed method is 0.38 s, which is 0.56 s lower than that of the IFCM clustering and variational inference method and 0.83 s lower than that of the heterogeneous machine learning fusion method. These data show that this method reduces the time consumption of remote sensing image segmentation while maintaining high segmentation accuracy.

5. Conclusion

In order to improve the data fusion and classification accuracy of traditional remote sensing data methods, a high spatial resolution remote sensing data classification method based on spectrum sharing is proposed. The following conclusions are drawn:

(1) The remote sensing image fused by this method has high definition and can effectively enhance the spatial detail features of the remote sensing image. The correlation coefficient of the fused remote sensing image is high, and the proposed spectrum-sharing method has the strongest ability to retain spectral information.

(2) The overall accuracy obtained by this method over multiple iterations is high. Based on the analysis results, a layout optimization model is constructed to optimize the layout of the urban ecotone, improving the overall accuracy and kappa coefficient and achieving a good layout effect.

(3) The time coefficient of this method during testing is low, which provides relevant information for urban ecotone layout optimization, reduces the optimization time, and improves optimization efficiency.

(4) The average accuracy of remote sensing image segmentation with this method reaches 95.78%, and the average segmentation time is 0.38 s, so the method reduces segmentation time while maintaining high segmentation accuracy.

Future research will focus on the following two aspects:

(1) Automated construction of deep learning networks. How to automatically analyze and make decisions for remote sensing image analysis tasks, build a deep learning network suited to the current task, and adaptively adjust the network structure and learning parameters is highly practical research, laying a solid foundation for reducing the difficulty of remote sensing image analysis and improving the utilization and value of remote sensing images.

(2) How to jointly use multisource remote sensing data to improve the classification of high-resolution remote sensing images is an important and difficult problem. Based on the existing research foundation, constructing a high spatial resolution remote sensing data classification method based on spectrum sharing therefore has high research and application value.

Data Availability

The raw data supporting the conclusions of this article will be made available from the authors, without undue reservation.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.

Acknowledgments

This work was supported by Key Project of Education Department of Hebei Province “Spectrum Sharing Technology Research of Cognitive Internet of Things” (no. ZD2018064), Municipal Soft Science Research Project: Research on Spectrum Sharing Technology in Dense Wireless Heterogeneous Networks (no. 2019029043), and University-Level Doctoral Fund: Research on Spectrum Sensing Algorithm in Dense Mobile Cognitive Radios (no. BKY-2017-05).