Special Issue: High-Performance Computing and Automatic Face Recognition
Feature Matching Optimization of Multimedia Remote Sensing Images Based on Multiscale Edge Extraction
To address the low efficiency of image feature matching in traditional remote sensing image databases, this paper proposes a feature matching optimization method for multimedia remote sensing images based on multiscale edge extraction, expounds the basic theory of multiscale edges, and then registers multimedia remote sensing images based on the selection of optimal control points. One hundred remote sensing images with a size of 3619 × 1825 and a resolution of 30 m are selected as experimental data. The computer is configured with a 2.9 GHz Intel i7 CPU and 16 GB of memory. The research comprises two parts: analysis of the image matching efficiency of the multiscale model, and analysis of its matching accuracy together with the formulation of model parameters. The results show that when the amount of image data is large, feature matching takes more time. As the sampling rate increases, the amount of image data decreases rapidly and the feature matching time shortens accordingly, which provides a theoretical basis for the multiscale model to improve matching efficiency. Because all images share the same size, 3619 × 1825, the matching times between individual images differ little, so the total matching time increases linearly with the number of images in the database. When the database contains a large amount of image data, a higher number of layers should be used; when it contains little, the number of layers should be reduced to ensure matching accuracy. The availability of the proposed method is thus demonstrated.
With the rapid development of remote sensing technology, aerial photography, UAVs, and vehicle-mounted mobile measurement systems, it has become possible to obtain various image data reflecting the characteristics of natural and human activities quickly, dynamically, and on a large scale. Images contain rich information and have the advantages of intuitiveness, vividness, and ease of understanding, and they play an extremely important role in human perception of the external world. In practice, people extract the physical characteristics and spatial information of target objects in the objective world through images in order to study their spatial position, shape, attributes, changes, and relationship with the surrounding environment. Therefore, image processing technology has always been a core research topic in photogrammetry, remote sensing, and computer vision. Owing to factors such as atmospheric refraction, terrain undulation, and changes in the interior and exterior orientation elements of the sensor, the quality of the obtained image may be degraded or useful information may be missing, which complicates image processing and target recognition; relying on vision-based processing alone is difficult, and multiscale analysis is an effective remedy. By decomposing the image at different scales, image feature information can be expressed to different degrees at each scale, which aids a better understanding of image detail, allows feature information to be extracted fully, and yields good results in image matching and target recognition. The multiscale image feature information extraction process based on weight learning is shown in Figure 1.
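The multiscale decomposition described above can be illustrated with a simple Gaussian pyramid. The sketch below is not the paper's implementation; it is a minimal illustration of the idea, with the function name and parameters chosen here for clarity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels=3, sigma=1.0):
    """Decompose an image into progressively coarser scales.

    Each level is smoothed with a Gaussian and downsampled by 2, so
    fine detail lives in the lower levels and coarse structure in the
    upper ones -- the representation multiscale matching operates on.
    """
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma=sigma)
        pyramid.append(smoothed[::2, ::2])  # downsample by a factor of 2
    return pyramid
```

Matching can then start at the coarsest level, where the data volume is smallest, and refine candidates at finer levels.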
2. Literature Review
To address this research problem, Ren et al. proposed a method of extracting point features by a gray-level method. Lv et al. proposed a new operator, the Forstner operator, based on the least squares principle and measured by the point gray error ellipse; Queiroz et al. proposed the SIFT operator, i.e., the scale-invariant feature point operator; Yang et al. improved the Moravec operator along the same lines and proposed the Plessey corner detection operator. Dong and Lin compared the Harris, Cottier, Forstner, and other operators and concluded that the Forstner operator was second only to the Harris operator. Xu et al. compared the Plessey, Forstner, and SUSAN-2D operators and concluded that the Forstner operator achieved the best overall results in clarity, invariance, stability, uniqueness, and interpretability. In application, however, Harris corners prove very sensitive to changes in image scale: when image sizes are inconsistent, Harris corner detection does not perform well, and essentially all point feature extraction operators share this problem. Chen et al. used differential geometry to extract linear targets, including lines and curves, from images. Ma et al., combining least squares with a Kalman filter, used gray-level profiles perpendicular to the road direction for road tracking. Because the same target appears differently in images of different resolutions, multiresolution analysis can combine the advantages of both to obtain better recognition results. Yao et al. analyzed the application of multiscale analysis theory to road extraction in detail and gave a theoretical framework, which has guiding significance for road extraction based on multiscale analysis.
3.1. Basic Theory of Multiscale Edge
Wavelet analysis arose after Fourier analysis. Wavelet bases are sparse when representing point singularities, but they are not suited to representing line-singular targets, so it is difficult to use a wavelet basis to represent edge information; forcing a wavelet basis to describe linear edge targets introduces ringing artifacts into image denoising results. Donoho proposed the wedgelet transform when studying the restoration of noisy image data. The wedgelet transform can approximately optimally describe the "horizon model." Donoho found that wedgelet decomposition based on a cost function can achieve a minimax risk estimate. When representing an image with a large number of linear edges, the wedgelet distortion rate is very small, and a near-optimal approximation can be achieved.
3.2. Multimedia Remote Sensing Image Registration Based on Optimal Control Point Selection
3.2.1. Solving the Optimal Solution of Projection Variation Based on Least Square Method
We use the projection transform H (a homography) to describe the matching relationship between the control point pairs of two images. Let (x_C, y_C) and (x_D, y_D) represent the coordinates of a pair of matching control points on test image C and reference image D, respectively. According to the projection transformation relationship:
To obtain the projection transformation matrix H, at least 4 pairs of matching control points on C and D are required to solve for its 8 parameters. Since the control point pairs distributed on C and D carry matching errors in practical engineering applications, directly using only 4 pairs would introduce a large matching error. Therefore, this method uses multiple (more than 4) matched control point pairs to approximately calculate the projection transformation parameters between C and D by the least squares method. Let (x_i, y_i) represent the coordinates of the control points in C, and (x_i', y_i') the coordinates of the matching control points in D.
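The overdetermined least-squares estimate of the 8-parameter projective transform can be sketched as follows. This is a generic direct-linear-transform formulation, not the paper's exact code; the function names and point layout are assumptions for illustration.

```python
import numpy as np

def fit_projection(src, dst):
    """Least-squares estimate of the 8-parameter projective transform H
    mapping src points to dst points.  With n > 4 pairs the system is
    overdetermined, which averages out individual matching errors."""
    A, b = [], []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        # linearized rows of  (u, v) = H(x, y)  with H[2,2] fixed to 1
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_projection(H, pts):
    """Map 2-D points through the homography H (homogeneous divide)."""
    pts = np.asarray(pts, float)
    mapped = np.hstack([pts, np.ones((len(pts), 1))]) @ np.asarray(H).T
    return mapped[:, :2] / mapped[:, 2:3]
```

With n = 6 or 7 well-spread pairs, as the text recommends, the normal equations remain well conditioned while keeping the computation cheap.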
When n is determined, an ideal approximate solution of H is obtained that minimizes the average distance between the control points of the two images, where n is the number of input control point pairs. According to the least squares formulation in this chapter, the minimum value of n is 4. Experiments show that when n is 6 or 7, the algorithm achieves a good compromise between registration accuracy and efficiency. The control points of C are mapped into D through the projection transformation, and the average distance from the control points of D is
According to the inverse matrix H⁻¹, the control points of D are mapped back into C through the projection transformation, and the average distance from the control points of C is
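The forward and backward average-distance errors described above can be computed as follows. This is a sketch consistent with the text's description; the function name and the err_cd/err_dc labels are chosen here, not taken from the paper.

```python
import numpy as np

def mean_distance(H, pts_c, pts_d):
    """Average Euclidean distance between the control points of one image
    and the control points of the other mapped across by the homography H."""
    pts_c = np.asarray(pts_c, float)
    mapped = np.hstack([pts_c, np.ones((len(pts_c), 1))]) @ np.asarray(H).T
    mapped = mapped[:, :2] / mapped[:, 2:3]          # homogeneous divide
    diffs = mapped - np.asarray(pts_d, float)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# forward error uses H, backward error uses the inverse transform:
#   err_cd = mean_distance(H, pts_c, pts_d)
#   err_dc = mean_distance(np.linalg.inv(H), pts_d, pts_c)
```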
3.2.2. Selection and Correction of Control Point Pairs
The selection and automatic matching of control point pairs are the two key points of the projection transformation. First, the algorithm takes as input control point pairs on the test image and the reference image chosen according to human vision and image features, which ensures that the selected feature points are distributed in relatively consistent positions and areas on the two images and evenly across each image, laying a good foundation for establishing accurate matching of the control point pairs.
Since the control points input manually at first are not necessarily the true extreme points of the image, as shown in Figure 2, the control points of the test image need to be corrected so that they lie at the extreme points, enhancing matching stability and noise resistance. This algorithm uses the Taylor expansion of the difference-of-Gaussian (DoG) function to find, by interpolation, the extreme points near the control points of the test image, so that the input control points are relocated to extreme points with greater stability and higher positional accuracy.
Let point A be the known input control point, use A to estimate the position of a nearby extreme point B, and let Δ be the offset of B relative to A. According to the Taylor expansion of the DoG (fitting) function:
By differentiating (4) and setting the derivative to zero, the offset Δ of the extreme point is obtained as follows:
When the offset Δ in any dimension is greater than 0.5, the extreme point lies closer to a point C adjacent to A; A is then updated to C and the iteration continues until the offset in every dimension is less than 0.5, at which point the iteration ends. Adding Δ to the current point gives the exact position of the control point.
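The sub-pixel refinement above can be sketched in two dimensions: fit the quadratic (second-order Taylor) model to a 3×3 patch of DoG responses and solve grad + Hess·Δ = 0. The finite-difference formulation below is a standard way to realize this and is an illustration, not the paper's code.

```python
import numpy as np

def refine_offset(values):
    """Sub-pixel offset of an extremum from a 3x3 patch of DoG responses
    centred on the candidate point: fit a quadratic via central finite
    differences and solve  gradient + hessian @ delta = 0."""
    v = np.asarray(values, float)
    # gradient at the centre pixel (row 1, col 1); cols = x, rows = y
    gx = (v[1, 2] - v[1, 0]) / 2.0
    gy = (v[2, 1] - v[0, 1]) / 2.0
    # second derivatives for the Hessian
    hxx = v[1, 2] - 2.0 * v[1, 1] + v[1, 0]
    hyy = v[2, 1] - 2.0 * v[1, 1] + v[0, 1]
    hxy = (v[2, 2] - v[2, 0] - v[0, 2] + v[0, 0]) / 4.0
    grad = np.array([gx, gy])
    hess = np.array([[hxx, hxy], [hxy, hyy]])
    return -np.linalg.solve(hess, grad)  # offset delta = (dx, dy)

# iterate: while |delta| > 0.5 in either axis, shift to that neighbour
# and re-fit; otherwise add delta to the current point and stop.
```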
3.2.3. Matching of Control Point Pairs
According to equations (2) and (3), the initial matching errors of the control point pairs can be calculated in both directions. These initial values need to be further reduced by adjusting the control points. Because the control points of the test image are already close to the ideal extreme points after DoG processing, this algorithm mainly adjusts the positions of the corresponding control points of the reference image: following the labeling order of the control point pairs, each point is moved automatically by a set step s in the up, down, left, and right directions, searching for the best matching position of the reference image control point on the principle of minimizing the two matching errors. Suppose n pairs of control points are input in the test image and the reference image, with n taken as 6 or 7 in the experiments. After the test image control points are corrected by the DoG operator, the automatic search for the matching control points of the reference image proceeds as follows: (1) calculate the initial error: compute the matching errors of the initial control points according to equations (2) and (3); (2) iteratively adjust the reference image control points: with the other control points fixed, move the coordinates of the ith control point by step s; when the matching errors no longer decrease, the control point is at its best position, and the next control point is adjusted in turn; (3) terminate: when the matching distance error between the control point pairs of the test image and the reference image falls below a preset threshold, the iteration stops and the algorithm ends.
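The three steps above amount to a greedy coordinate search over the reference control points. The sketch below assumes an error_fn that re-fits the projection and returns the combined matching error (e.g. the sum of the two average distances); its signature, and the names step/tol, are assumptions for illustration.

```python
import numpy as np

def adjust_reference_points(pts_c, pts_d, error_fn, step=1.0, tol=0.4, max_iter=100):
    """Greedy search: move each reference control point up/down/left/right
    by `step` whenever that reduces the total matching error, cycling over
    the points until the error drops below `tol` or no move helps.
    `error_fn(pts_c, pts_d)` returns the combined matching error."""
    pts_d = np.asarray(pts_d, float).copy()
    moves = np.array([[0.0, step], [0.0, -step], [step, 0.0], [-step, 0.0]])
    best = error_fn(pts_c, pts_d)
    for _ in range(max_iter):
        improved = False
        for i in range(len(pts_d)):          # other points stay fixed
            for m in moves:
                trial = pts_d.copy()
                trial[i] += m
                err = error_fn(pts_c, trial)
                if err < best:               # keep any improving move
                    best, pts_d, improved = err, trial, True
        if best < tol or not improved:       # threshold reached or stuck
            break
    return pts_d, best
```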
How to define the threshold is a very important problem. If the threshold is set too small, the matched control points will deviate from the actual feature points of the image and the ideal registration effect cannot be achieved; if it is set too large, the registration error will be large. Experimental analysis shows that when the threshold lies in (0.3, 0.5), the image registration effect is ideal.
3.2.4. Algorithm Implementation Process
To address the registration accuracy problem for remote sensing images of different modalities, this chapter proposes a multimodal image registration algorithm based on the selection of optimal matching points. The flow of the algorithm is as follows:
Initialize the algorithm: set the iteration termination threshold and the adjustment step s, and manually input n pairs of control points in the test image C and reference image D:
The DoG operator is used to adjust the control points of the test image so that they lie at extreme points, yielding stable and accurate control points; using the projection transformation, H and H⁻¹ are calculated from the coordinates of the control point pairs and substituted into equations (2) and (3) to obtain the initial matching errors of the control point pairs; the positions of the reference image control points are then automatically adjusted point by point with step s, and the matching errors are recalculated; when both errors fall below the threshold, the automatic adjustment ends; H is recalculated from the final coordinates of the control point pairs; finally, using H, all pixels of the test image are traversed and projected into the reference image, and the registration algorithm ends.
4. Results and Analysis
One hundred remote sensing images with a size of 3619 × 1825 and a resolution of 30 m are selected as experimental data. The computer is configured with a 2.9 GHz Intel i7 CPU and 16 GB of memory. The research comprises two parts: analysis of the image matching efficiency of the multiscale model, and analysis of its matching accuracy together with the formulation of model parameters.
4.1. Image Matching Efficiency Analysis of Multiscale Model
(1) Correlation between matching time and image sampling rate. Two images are selected for matching, one as the image to be matched and the other as the target image, and the image to be matched is resampled to varying degrees. Table 1 lists, and Figure 3 plots, the variation of image matching time with sampling rate. As the figure shows, with increasing sampling rate the matching time first decreases rapidly and then levels off. This behaviour indicates that when the amount of image data is large, feature matching takes more time; as the sampling rate increases, the amount of image data decreases rapidly and the feature matching time shortens accordingly. This provides a theoretical basis for the multiscale model to improve matching efficiency.
(2) Matching efficiency of the multiscale model versus a single-layer image database.
To further compare and analyze the difference in feature matching efficiency between the multiscale model and a traditional single-layer image database, seven groups of image databases are created, containing 40, 50, 60, 70, 80, 90, and 100 images, respectively, and multiscale models are built for each group. The feature matching times of the seven groups under the different methods are shown in Table 2, and Figure 4 shows the corresponding efficiency difference. The fitted matching-time equation is linear: the matching time is linearly related to the total number of images because the remote sensing images selected for the experiment have the same resolution and the same size, 3619 × 1825, so the matching times between individual images differ little, and the total matching time therefore increases linearly with the number of images in the database. In practical applications image sizes differ, so the matching time does not necessarily bear a good linear relationship to the total number of images.
When the total number of images is the same, the feature matching time of the multiscale model is always less than that of the single-layer database, and the advantage of the multiscale model becomes more obvious as the total number of images increases; the difference in matching efficiency between the two methods therefore grows gradually. In practical applications, where a database usually contains hundreds of remote sensing images, the matching efficiency of the multiscale model will be much higher than that of a single-layer image database.
4.2. Image Matching Accuracy of Multiscale Model
In this paper, the matching accuracy of the multiscale model is studied by varying the image size. The procedure is as follows: (1) the 100 images are divided into 5 groups of 20 images each, with the same image size within a group and different sizes between groups; (2) each image is decomposed into five layers, and the multiscale model is used for feature matching; (3) for each group, the number of layers at which the true matching image is wrongly eliminated is recorded, and the number of layers is reduced until the true matching image attains the correct matching position; the number of layers giving the highest final matching accuracy is recorded as the optimum. The experimental results of the 5 groups are shown in Table 3. As the image size decreases, the optimal number of layers also decreases, and when the image size is reduced to 226 × 114, correct matching results can no longer be obtained with the multiscale model. This is because when the amount of image data is small, the total number of feature points extracted from the image is small, and a small number of feature points cannot fully guarantee matching accuracy. Therefore, when building a multiscale model for an image database in practice, the amount of image data at the highest level should be greater than 226 × 114.
The data in Table 3 are fitted to obtain the functional relationship between the average data volume of the database images and the optimal number of layers, as shown in Figure 5. When the amount of image data in the database is large, a higher number of layers should be adopted; when it is small, the number of layers should be reduced to ensure matching accuracy. Using the logarithmic equation relating database image volume to the optimal number of layers, the optimal number of layers for an image database in practical applications can be determined, ensuring the feature matching accuracy of the database images.
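A fit of this kind can be sketched as follows. The data points below are hypothetical stand-ins for Table 3 (the real values come from the experiment); the logarithmic form layers = a·ln(volume) + b matches the relationship the text describes.

```python
import numpy as np

# Hypothetical (image data volume in pixels, optimal layer count) pairs
# standing in for Table 3 -- illustrative values only.
volumes = np.array([6.6e6, 4.2e6, 2.6e6, 1.0e6, 2.6e4])
layers = np.array([5, 4, 3, 2, 1])

# Fit layers = a * ln(volume) + b, i.e. a linear fit in log(volume).
a, b = np.polyfit(np.log(volumes), layers, 1)

def optimal_layers(volume):
    """Predicted layer count for a database whose images average
    `volume` pixels, rounded to the nearest whole layer."""
    return int(round(a * np.log(volume) + b))
```

Larger databases of larger images then map to deeper pyramids, smaller ones to shallower pyramids, as the text recommends.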
This paper proposes a feature matching optimization method for multimedia remote sensing images based on multiscale edge extraction. The proposed algorithm can not only complete feature point matching between images efficiently but also accurately screen the best matching images from the database, and its advantage grows as the total number of images in the database increases. This research makes efficient, real-time, and dynamic matching of remote sensing image databases possible. The multiscale method also has shortcomings: because local image features exist over a certain range of scales, a feature point may possess several different feature scales at once, which increases the difficulty of subsequent matching. In future work, a method is needed to represent local features by a single representative feature point.
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
1. Xi'an Science and Technology Project of Shaanxi Province, Analysis and Evaluation of Biodiversity in the Qinba Mountains of Shaanxi Province, China (2020KJWL23). 2. Dean's Fund of Xi'an Liberal Arts College of Biological and Environmental Engineering, research on key technologies for the rapid generation of 3D models of digital-twin smart parks (project number: YZJJ202111). 3. Natural Science Foundation of Shaanxi Province (Youth Program), Aerosol Optical Thickness Inversion over the Fenhe and Weihe Plain Based on Domestic High-Resolution Satellites (2020JQ-978).
L. Y. Zhao, B. Y. Lü, X. R. Li, and S. H. Chen, "Multi-source remote sensing image registration based on scale-invariant feature transform and optimization of regional mutual information," Acta Physica Sinica, vol. 64, no. 12, pp. 124204–124210, 2015.
Y. Jeawon, G. A. Drosopoulos, G. Foutsitzi, G. E. Stavroulakis, and S. Adali, "Optimization and analysis of frequencies of multi-scale graphene/fibre reinforced nanocomposite laminates with non-uniform distributions of reinforcements," Engineering Structures, vol. 228, no. 5, Article ID 111525, 2021.
A. Hamedianfar and M. B. A. Gibril, "Large-scale urban mapping using integrated geographic object-based image analysis and artificial bee colony optimization from worldview-3 data," International Journal of Remote Sensing, vol. 40, no. 17, pp. 6796–6821, 2019.
P. Launeau, Z. Kassouk, F. Debaine et al., "Airborne hyperspectral mapping of trees in an urban area," International Journal of Remote Sensing, vol. 38, no. 5, pp. 1277–1311, 2017.
C. Ren, Y. J. Liang, X. J. Lu, and H. B. Yan, "Research on the soil moisture sliding estimation method using the ls-svm based on multi-satellite fusion," International Journal of Remote Sensing, vol. 40, no. 5-6, pp. 2104–2119, 2019.
D. Lv, Z. Jia, J. Yang, and N. Kasabov, "Remote sensing image enhancement based on the combination of nonsubsampled shearlet transform and guided filtering," Optical Engineering, vol. 55, no. 10, Article ID 103104, 2016.
C. Wang and C. Ma, "Multi-objective optimization of customized bus routes based on full operation process," Modern Physics Letters B, vol. 34, no. 25, Article ID 2050266, 2020.
C. C. Queiroz, M. Namias, V. O. Menezes et al., "Optimization of oncological f-18-fdg pet/ct imaging based on a multiparameter analysis," Medical Physics, vol. 43, no. 2, pp. 930–938, 2016.
J. F. Yang, J. H. Wan, Y. Ma, J. Zhang, and Y. B. Hu, "Oil spill hyperspectral remote sensing detection based on dcnn with multi-scale features," Journal of Coastal Research, vol. 90, no. sp1, p. 332, 2019.
Z. Dong and B. Lin, "Bmf-cnn: an object detection method based on multi-scale feature fusion in vhr remote sensing images," Remote Sensing Letters, vol. 11, no. 3, pp. 215–224, 2020.
C. Xu, H. Sui, H. Li, and J. Liu, "An automatic optical and sar image registration method with iterative level set segmentation and sift," International Journal of Remote Sensing, vol. 36, no. 15, pp. 3997–4017, 2015.
Y. Chen, Q. Ma, C. Liu, and Q. Shu, "Mapping detection of marine data based on space remote sensing technology," Journal of Coastal Research, vol. 93, no. sp1, p. 717, 2019.
X. Mei, F. Fan, C. Li et al., "Infrared ultraspectral signature classification based on a restricted Boltzmann machine with sparse and prior constraints," International Journal of Remote Sensing, vol. 36, no. 18, pp. 4724–4747, 2015.
G. Yao, L. Zhang, T. Shi, and K. Deng, "Registrating large mismatching sar images based on corners and surface extremum strategy," International Journal of Remote Sensing, vol. 40, no. 9, pp. 3555–3570, 2018.
L. Wan, T. Zhang, and H. J. You, "Multi-sensor remote sensing image change detection based on sorted histograms," International Journal of Remote Sensing, vol. 39, no. 11, pp. 3753–3775, 2018.
D. Wang, X. Cui, F. Xie, Z. Jiang, and Z. Shi, "Multi-feature sea–land segmentation based on pixel-wise learning for optical remote-sensing imagery," International Journal of Remote Sensing, vol. 38, no. 15, pp. 4327–4347, 2017.
H. Zhu, P. Zhang, L. Wang, X. Zhang, and L. Jiao, "A multiscale object detection approach for remote sensing images based on mse-densenet and the dynamic anchor assignment," Remote Sensing Letters, vol. 10, no. 10, pp. 959–967, 2019.
M. S. El-Tokhy and I. I. Mahmoud, "Classification of welding flaws in gamma radiography images based on multi-scale wavelet packet feature extraction using support vector machine," Journal of Nondestructive Evaluation, vol. 34, no. 4, p. 34, 2015.
T. Cui, W. Zhao, and C. Wang, "Design optimization of vehicle ehps system based on multi-objective genetic algorithm," Energy, vol. 179, pp. 100–110, 2019.
W. Shi, S. Deng, and W. Xu, "Extraction of multi-scale landslide morphological features based on local gi using airborne lidar-derived dem," Geomorphology, vol. 303, pp. 229–242, 2018.
J. Zhang, M. Zareapoor, X. He, D. Shen, D. Feng, and J. Yang, "Mutual information based multi-modal remote sensing image registration using adaptive feature weight," Remote Sensing Letters, vol. 9, no. 7, pp. 646–655, 2018.
S. Chen, X. Li, L. Zhao, and H. Yang, "Medium-low resolution multisource remote sensing image registration based on sift and robust regional mutual information," International Journal of Remote Sensing, vol. 39, no. 10, pp. 3215–3242, 2018.
L. A. Pereira, S. Haffner, G. Nicol, and T. F. Dias, "Multiobjective optimization of five-phase induction machines based on nsga-ii," IEEE Transactions on Industrial Electronics, vol. 64, no. 12, pp. 9844–9853, 2017.
X. Sun, Z. Shi, G. Lei, Y. Guo, and J. Zhu, "Multi-objective design optimization of an ipmsm based on multilevel strategy," IEEE Transactions on Industrial Electronics, vol. 68, no. 1, 2020.
W. Huang, Y. Wang, and X. Chen, "Cloud detection for high-resolution remote-sensing images of urban areas using colour and edge features based on dual-colour models," International Journal of Remote Sensing, vol. 39, no. 20, pp. 6657–6675, 2018.
P. Gelineau, M. Stepien, S. Weigand, L. Cauvin, and F. Bedoui, "Elastic properties prediction of nano-clay reinforced polymer using multi-scale modeling based on a multi-scale characterization," Mechanics of Materials, vol. 89, pp. 12–22, 2015.