#### Abstract

A spatial data interpolation algorithm transforms scattered, sparse measurement data into regular, continuous, applicable data. Such an algorithm can make the displayed results smooth and continuous without changing the data characteristics, bringing them closer to the real acquisition object. In response to the low interpolation accuracy of the inverse distance weighting (IDW) algorithm and the poor interpolation efficiency of the ordinary kriging (OK) algorithm, this study proposes a variational-weighted (VW) interpolation algorithm based on fixed-grid sampling of the data. By introducing the variation function (variogram) of OK into IDW and taking different weight coefficients from OK, a new computational model is constructed to improve interpolation accuracy and adapt to different types of data characteristics. Throughout the interpolation process, the data are sampled on a fixed grid, and a new sampling point search method is proposed to improve computational efficiency. The study not only compares the accuracy and efficiency of the algorithm on two types of data but also tests the stability of the new algorithm for different data volumes. The results show that VW based on fixed-grid sampling of the data is more accurate than IDW for both data types. Compared with OK, it improves computational efficiency for featureless data and exhibits better performance in both accuracy and computational efficiency for data with features. In addition, the VW interpolation algorithm based on fixed-grid sampling of the data performs more stably across different amounts of data.

#### 1. Introduction

Visualization technology is a technique for transforming collected data into images and graphics that are intuitively visible to humans, through which one can better analyze, monitor, and understand the properties and characteristics of the visualized object [1]. As an important branch of visualization technology, data visualization uses computer graphics, computer vision, image processing techniques, and user interfaces to transform data into displayable graphics or images with interactive, multidimensional, and visual characteristics [2]. However, discrete, sparse data sets are common owing to the difficulty of the acquisition process and the sheer volume of data collected. To achieve a realistic and complete visualization of such data, it is necessary to supplement the missing data with interpolation algorithms and to smooth the data transitions in the visualized region, so that the displayed picture is more in line with the real situation while meeting people's requirements for a clear and concise visualization [3]. Interpolation algorithms are therefore commonly used in the fields of meteorological visualization, terrain visualization, the spatial distribution of soil properties, and image processing [4–8]. In recent years, many researchers have conducted extensive research on interpolation algorithms. [9] proposed an adaptive inverse distance weighting (AIDW) method, which uses the distribution density of sampling points to adjust the distance decay parameter, giving the interpolation method the flexibility to adapt to spatially heterogeneous distance decay. [10] proposed an improved kriging method based on a message passing interface (MPI) model to improve the computational efficiency of the spatial interpolation process.
[11] proposed an ordinary kriging interpolation algorithm (wind-OK) that incorporates the influence of wind direction on the spatial and temporal distribution of ground PM concentration in a given area and showed experimentally that the performance of the new algorithm is improved compared with the ordinary kriging algorithm.

Spatial data interpolation algorithms can convert sparse, irregular, and scattered data into continuous, regular, and rich data sets by interpolating from discrete and sparse data [12]. In this article, building on IDW and OK, a variational-weighted interpolation algorithm based on fixed-grid sampling of the data is proposed. It combines the respective advantages of IDW and OK, so that the interpolation of the original data achieves high efficiency while maintaining a certain interpolation accuracy.

#### 2. Research on Sampling Point Search Method and Interpolation Algorithm

Sampling point search is a method of determining the known data points to be involved in the interpolation of the points to be interpolated through a certain search method. Since it is very computationally intensive and inefficient to use all known data points directly for interpolation, sampling point search is an important step before the interpolation operation, and the goodness of the sampling point search method also affects the overall interpolation results to a certain extent. The traditional sampling point search methods are distance search and direction search, both of which determine the sampling points to be involved in the interpolation operation by measuring the distance between the known data points and the points to be interpolated before the interpolation operation is performed [13, 14].

The essence of the distance search method is to select the *N* known data points nearest to the point to be interpolated as the sampling points. The closer a known data point is to the point to be interpolated, the greater its influence on the interpolation result; the farther away it is, the smaller its influence [15]. As shown in Figure 1, for the point *Q* to be interpolated, the search radius *R* is first determined and the known data points within it are identified. Next, the distance between each of these data points and *Q* is calculated, and the four closest points, *Q*1, *Q*2, *Q*3, and *Q*4 in Figure 1, are used as the sampling points involved in the interpolation operation.
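As an illustrative sketch (the `(x, y, value)` point format, the radius, and the choice of four nearest points are assumptions for this example, not specifics from the source), the distance search might look like:

```python
import math

def distance_search(points, q, radius, n=4):
    """Select the n known data points nearest to the query point q,
    restricted to those lying within the search radius."""
    qx, qy = q
    in_range = []
    for (x, y, value) in points:
        d = math.hypot(x - qx, y - qy)
        if d <= radius:
            in_range.append((d, x, y, value))
    in_range.sort(key=lambda t: t[0])   # closest points first
    return in_range[:n]

# All four selected points may lie on one side of q -- the weakness
# discussed in the text.
samples = distance_search([(1, 0, 2.0), (2, 0, 2.5), (0, 1, 1.8),
                           (0, 2, 1.5), (5, 5, 3.0)], (0, 0), radius=3.0)
```

Note that nothing in the selection constrains the direction of the chosen points, which is exactly the drawback discussed next.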

The disadvantage of this method is that it is based purely on distance and places no restriction on the location of the sampling points. When the distribution of known data points is highly irregular, all the sampling points may come from the same direction. As can be seen from Figure 1, *Q*1, *Q*2, *Q*3, and *Q*4 all lie on the lower side of point *Q*, so no data points on the upper side participate in the interpolation operation. Although the upper-side data points are farther from point *Q* than the lower-side ones, they still have some influence on the interpolation result, and ignoring them entirely produces a large error between the interpolation result and the actual situation.

The directional search method improves interpolation accuracy by using sampling points in different directions on the basis of distance search, compensating for the possible single-direction bias of the distance search method. The essence of the directional search method is to select, in each of several directions, the known data point nearest to the point to be interpolated as a sampling point. Directional search methods mainly include the four-way, six-way, and eight-way search methods [16]. Taking the eight-way search method as an example, the search process is as follows. As shown in Figure 2, the entire original data set is first searched to determine the known data points in eight directions within the interpolation radius *R*. Next, the distance between each data point and point *Q* within each direction is calculated separately. Finally, the closest point within each direction is determined as a sampling point involved in the interpolation operation. The seven points *Q*1, *Q*2, *Q*3, *Q*4, *Q*5, *Q*7, and *Q*8 in the figure are used as the sampling points for the interpolation operation of point *Q*.
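A minimal sketch of the eight-way search, assuming eight 45° sectors centered on the point to be interpolated (the sector assignment via `atan2` and the point format are illustrative choices, not taken from the source):

```python
import math

def eight_way_search(points, q, radius):
    """For each of eight 45-degree sectors around q, keep the nearest
    known data point inside the search radius (if any)."""
    qx, qy = q
    nearest = [None] * 8                 # one slot per sector
    for (x, y, value) in points:
        dx, dy = x - qx, y - qy
        d = math.hypot(dx, dy)
        if d == 0 or d > radius:
            continue
        # map the angle [0, 2*pi) to a sector index 0..7
        sector = int(math.atan2(dy, dx) % (2 * math.pi) // (math.pi / 4))
        if nearest[sector] is None or d < nearest[sector][0]:
            nearest[sector] = (d, x, y, value)
    return [p for p in nearest if p is not None]
```

As the text notes, every candidate point must still be scanned for every interpolation point, which is the efficiency weakness addressed later by the fixed-grid index.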

Although this method compensates for the potential disadvantage of distance search, for each point to be interpolated the entire original data set must be searched to determine the data points within the search radius *R*, and the nearest point in each direction must be determined, which is a large search effort that reduces the efficiency of the whole interpolation process to a certain extent. The cap of at most eight sampling points also limits the interpolation accuracy to some degree. Moreover, the fixed search radius cannot adapt to the characteristics of different parts of the data and may be too large or too small in places.

Spatial interpolation algorithms based on the basic assumption of the “First Law of Geography” mainly rely on the collected data to perform interpolation calculation to fill in the unknown data in the original data. The closer the data points are to each other, the greater the relative influence, and conversely, the further the distance, the smaller the relative influence [17]. This study focuses on the inverse distance weighting method, the spline function method (SF), the kriging interpolation method, and the trend surface method (TS).

The inverse distance weighting method is an interpolation method based on the principle of similarity: the value at an unknown point is estimated as a distance-weighted average of the values at nearby known points, with closer points receiving larger weights. If two points $P_1$ and $P_2$ in space have coordinates $(x_1, y_1)$ and $(x_2, y_2)$, then the distance between the two points is as follows [18]:

$$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}.$$

Let $Q$ be the interpolation point, $Q_i$ $(i = 1, 2, \ldots, N)$ be the sample points, $N$ be the number of known sample points around $Q$, and $Z(Q_i)$ be the attribute value of $Q_i$; the attribute value $Z(Q)$ of the interpolation point is to be found. The value calculated using the inverse distance weighted interpolation method is as follows:

$$Z(Q) = \sum_{i=1}^{N} \lambda_i Z(Q_i),$$

where $\lambda_i$ represents the weight, which is inversely proportional to the distance $d_i$ from $Q$ to the known point $Q_i$. The normalized weights $\lambda_i$ are

$$\lambda_i = \frac{d_i^{-p}}{\sum_{j=1}^{N} d_j^{-p}},$$

where $p$ represents the value of the distance decay parameter. When $p$ is large, the nearest sample points dominate the estimate; when $p$ is small, the weights are more uniform, so distant sample points retain a relatively larger influence. Substituting into equation (2) gives the attribute value at point $Q$ as

$$Z(Q) = \frac{\sum_{i=1}^{N} d_i^{-p} Z(Q_i)}{\sum_{j=1}^{N} d_j^{-p}}.$$
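The weighted formula above can be sketched directly in code; the `(x, y, value)` sample format and the default decay parameter `p = 2` are illustrative assumptions:

```python
import math

def idw(samples, q, p=2.0):
    """Inverse distance weighted estimate at q from (x, y, value) samples."""
    qx, qy = q
    num = den = 0.0
    for (x, y, value) in samples:
        d = math.hypot(x - qx, y - qy)
        if d == 0:
            return value        # q coincides with a sample point
        w = d ** (-p)           # weight decays with distance
        num += w * value
        den += w
    return num / den            # normalization makes the weights sum to 1
```

For two equidistant samples the estimate reduces to their mean, as the normalized weights are equal.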

The spline function method uses functions for interpolation calculations, often approximating known data points with segmented polynomials whose first- and second-order derivatives are continuous, thus generating smooth interpolation curves. This method is simple to operate, not too computationally intensive, fast, suitable for interpolating contours from dense points, and suitable for surfaces with gradual changes. However, with this method it is difficult to estimate the interpolation error, and the interpolation results are poor when the points are not dense [19].

The kriging interpolation method is based on variation function (variogram) theory for the unbiased optimal estimation of regionalized variables in a finite region [20]. This algorithm pays attention to the relative positions of the sampling points with respect to one another as well as to the interpolation points. The kriging estimate is obtained by weighting a linear combination of the existing data points, so kriging interpolation is linear; it drives the mean residual, or error, close to zero, so kriging interpolation is unbiased. Kriging interpolation therefore gives results that are closer to the actual situation. However, the algorithm is computationally inefficient and consumes computer resources, and the calculation becomes more complicated with large amounts of data.

The kriging method of interpolation assumes that the region under study is $D$, with point $Q_0 \in D$. Let the regionalized variable under study on the region be $Z(Q)$ and the attribute value at point $Q_i$ be $Z(Q_i)$. In kriging interpolation, the attribute estimate $Z^*(Q_0)$ at interpolation point $Q_0$ is the weighted sum of the attribute values of the $N$ known points involved in the interpolation around $Q_0$, as shown in

$$Z^*(Q_0) = \sum_{i=1}^{N} \lambda_i Z(Q_i),$$

where $\lambda_i$ is the weighting factor to be determined. According to the unbiasedness condition of kriging interpolation, $\lambda_i$ must satisfy the relation

$$\sum_{i=1}^{N} \lambda_i = 1.$$

Then the system of equations for solving the kriging method for the undetermined coefficients $\lambda_i$ is obtained as

$$\begin{cases} \displaystyle\sum_{j=1}^{N} \lambda_j C(Q_i, Q_j) - \mu = C(Q_i, Q_0), & i = 1, 2, \ldots, N, \\ \displaystyle\sum_{j=1}^{N} \lambda_j = 1, \end{cases}$$

where $C(Q_i, Q_j)$ is the covariance function of $Z(Q_i)$ and $Z(Q_j)$ and $\mu$ is the Lagrange multiplier. From the above system of equations, $\lambda_i$ is solved and substituted to obtain the estimate $Z^*(Q_0)$. There is a certain correlation between the sample points, which is related not only to their distance but also to their relative direction.
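For illustration, the kriging weights can be obtained by solving the equivalent variogram form of this system (a common reformulation of the covariance form above under second-order stationarity). The sketch below assumes a user-supplied variogram function `gamma` and uses NumPy's linear solver:

```python
import numpy as np

def ordinary_kriging(samples, q, gamma):
    """Solve the ordinary kriging system for the weights lambda_i and
    return the estimate at q, using the variogram form of the system."""
    pts = np.array([(x, y) for x, y, _ in samples], dtype=float)
    vals = np.array([v for _, _, v in samples], dtype=float)
    n = len(samples)
    # left-hand side: variogram between every pair of samples, bordered
    # by the row/column that enforces sum(lambda) = 1
    A = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(n):
            A[i, j] = gamma(np.linalg.norm(pts[i] - pts[j]))
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    # right-hand side: variogram between each sample and the target q
    b = np.ones(n + 1)
    b[:n] = [gamma(np.linalg.norm(p - np.array(q, dtype=float))) for p in pts]
    sol = np.linalg.solve(A, b)   # weights plus the Lagrange multiplier
    lam = sol[:n]
    return float(lam @ vals)
```

The dense $(N+1)\times(N+1)$ solve is what makes kriging costly for large $N$, which motivates the sampling point search developed in Section 3.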

The trend surface method is similar to fitting approaches in that it uses known data points to fit a smooth mathematical surface equation and then calculates the attribute values of unknown points from the equation [21]. It performs a multiple regression analysis on the relationship between the data attributes and the geographic coordinates of the original data to obtain a smooth trend surface function. The trend surface method is an approximate interpolation method because, in practice, it is generally difficult for the trend surface to pass exactly through the original data points, except in special cases, such as when there are few data points and they happen to lie on a single surface. The trend surface method is a statistical method of interpolation that presupposes a series of interrelated spatial data where the trend and period of the interpolated point are functions of the other variables associated with it. It is also one of the more commonly used global interpolation methods, allowing a smooth mathematical surface to describe some geographic property that varies continuously in space.

Traditional spatial interpolation methods are widely used in their corresponding fields according to their respective characteristics, so it is necessary to choose a reasonable and applicable interpolation algorithm according to the spatial distribution characteristics of the data [22, 23]. Through analysis of the purpose, accuracy, and operation speed of interpolation, the above interpolation methods are compared from the perspectives of interpolation accuracy, operational efficiency, extrapolation capability, and data distribution requirements, as shown in Table 1:

As can be seen from Table 1, the inverse distance weighting method has high operational efficiency, but its interpolation accuracy is lower than that of the kriging interpolation algorithm, while the kriging interpolation method has high accuracy and a wide range of applications but low efficiency. Compared with the spline function method and the trend surface method, the advantages of these two algorithms, inverse distance weighting and kriging, are more obvious.

#### 3. The Principle of Fixed-Grid Sampling and Variability-Weighted Interpolation

##### 3.1. Fixed-Grid Sampling of the Data

As the original data are sparse and irregular, the sampling point search and interpolation methods are very important for the accuracy of the interpolated data and the subsequent application of the data. Among the traditional sampling point search methods, the distance search method places no restriction on the orientation of the sampling points, which reduces interpolation accuracy, while the direction search method has lower search efficiency. Among the traditional interpolation algorithms, the kriging method has high interpolation accuracy and good results, but the algorithm is complex and time-consuming to compute; the inverse distance interpolation algorithm has high computational efficiency but poor computational accuracy [24]. Combining the characteristics of the original data with the respective advantages and disadvantages of the traditional sampling point search methods and interpolation algorithms, this study proposes a variability-weighted interpolation algorithm based on fixed-grid sampling of the data, which is divided into two parts: fixed-grid sampling of the original data and variability-weighted interpolation of the points to be interpolated.

Compared with the distance search method, the directional search method adds a restriction on the sampling orientation of the sampling points, which improves the interpolation accuracy to a certain extent and yields a better interpolation effect. However, the directional search method draws a circle of the search radius to search for sampling points, which limits the search range and may cause some sampling points to be missed. When the data are relatively sparse, the eight-way search method can determine only a relatively small number of sampling points, and if the search range is then expanded, a new data point search is required, which is a complex and tedious process. In this study, we propose fixed-grid sampling of the data, which guarantees that a sufficient number of sampling points participate in the interpolation operation by searching the grid cells around the interpolation point, and simplifies the process by searching the gridded data in a single pass.

In all traditional sampling point search methods, searching is inefficient, mainly because for each point to be interpolated the entire original data set must be searched before the sampling points involved in the operation can be determined. This process is undoubtedly tedious, so this study adopts a single search of the fixed-grid data: the original data are first gridded, and a single full pass over the original data determines the grid cell in which each data point is located. The data points within a grid cell can then be extracted directly when the fixed-grid sampling method searches for sampling points, improving search efficiency.

Assume the original data study area is $D$, which is first gridded so that the following conditions are satisfied: each cell after division is a square with side length $a$, and care should be taken during division so that the data points are distributed as evenly as possible within the cells; the grid coordinates after division are the coordinates for interpolation, and every grid point should have a feature value once interpolation is complete. Suppose any point $Q$ is located at grid coordinate $(m, n)$. To facilitate locating and extracting the original data points, each cell is represented by its lower-left corner; the cells around point $Q$ are then $(m-1, n-1)$, $(m, n-1)$, $(m-1, n)$, and $(m, n)$, as shown in Figure 3.

Assume there are $N_p$ original data points within the study area $D$, and record all data points within cell $(m, n)$ using the arrays $G(m, n)$ and $P(i)$, where $G(m, n)$ holds the index of the first data point stored for the cell and $P(i)$ holds the index of the next data point within the same cell. Prior to the single search of the fixed-grid data, it is necessary to:

(1) Set initial values for $G$ and $P$: $G(m, n) = 0$ for $1 \le m \le M_x$ and $1 \le n \le M_y$, where $M_x$ represents the maximum value of the grid's horizontal coordinate and $M_y$ represents the maximum value of the grid's vertical coordinate.

(2) For any data point $Q_i(x_i, y_i)$, locate the coordinates of the cell in which it is located as

$$m = \left\lfloor \frac{x_i - x_0}{a} \right\rfloor, \qquad n = \left\lfloor \frac{y_i - y_0}{a} \right\rfloor,$$

where $(x_0, y_0)$ are the coordinates of the lower-left grid point of the study area $D$, $a$ is the side length of the square cell, and $\lfloor \cdot \rfloor$ denotes the floor (round-down) function.
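The single-pass bucketing described above resembles a classic cell-linked-list index. A sketch, using `-1` rather than 0 as the "empty" sentinel so that point indices can start at 0 (the array names and point format are illustrative assumptions):

```python
import math

def build_grid_index(points, x0, y0, a, mx, my):
    """Single pass over the raw data: for each point, compute the grid
    cell (m, n) it falls in and chain it into that cell's bucket.
    grid_first[m][n] holds the index of the first point in cell (m, n)
    (or -1), and point_next[i] the index of the next point in the same
    cell, using the floor formula m = floor((x - x0) / a)."""
    grid_first = [[-1] * my for _ in range(mx)]
    point_next = [-1] * len(points)
    for i, (x, y, _) in enumerate(points):
        m = math.floor((x - x0) / a)
        n = math.floor((y - y0) / a)
        point_next[i] = grid_first[m][n]   # chain in front of the bucket
        grid_first[m][n] = i
    return grid_first, point_next

def points_in_cell(grid_first, point_next, m, n):
    """Walk the chain for cell (m, n) and return the point indices."""
    out, i = [], grid_first[m][n]
    while i != -1:
        out.append(i)
        i = point_next[i]
    return out
```

After this single pass, extracting all points of a cell costs time proportional only to the number of points in that cell, which is the efficiency gain the text describes.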

The steps for a single search of the fixed-grid data are shown in Figure 4.

When the data points within any cell $(m, n)$ need to be extracted during the interpolation process, proceed as follows:

(1) Determine whether $G(m, n)$ is zero. If $G(m, n) = 0$, there is no known data point in cell $(m, n)$ and no subsequent steps are necessary; if $G(m, n) = i_1 \neq 0$, data point $i_1$ is the first known data point in cell $(m, n)$, and step (2) follows.

(2) If $P(i_1) = i_2 \neq 0$, data point $i_2$ is the second known data point within cell $(m, n)$.

(3) If $P(i_2) = i_3 \neq 0$, data point $i_3$ is the third known data point within cell $(m, n)$.

(4) Continue until $P(i_k) = 0$; the data points $i_1, i_2, \ldots, i_k$ are then all the known data points located within cell $(m, n)$.

After all the original data have been located and stored by the search, the interpolation of the missing points can be performed. Before a point to be interpolated can undergo the interpolation operation, the sampling points involved in the operation must be determined. Fixed-grid sampling ensures the effectiveness of the sampling point search by attending to the amount of data sampled, the interpolation efficiency, and the sampling directions of the sampled points.

When searching for sampling points, the first layer of grid cells around the point to be interpolated is searched first. If the first layer contains fewer than 8 data points, the second layer is added, and so on. This guarantees that at least 8 points participate in the interpolation without wasting other data points within the cells, improving interpolation accuracy. In terms of efficiency, it is unnecessary to limit the operation to exactly 8 data points to save time, as the search is already significantly faster thanks to the location-indexed storage of the data. The fixed-grid sampling method also attends to the sampling directions of the sampling points while ensuring the amount of sampled data and the interpolation efficiency. Because the grid is divided with an appropriate side length so that the original data are distributed as evenly as possible across the cells, and no data points are deleted from the grid during interpolation, the directional coverage of the sampling points involved in the interpolation is ensured to a certain extent. In addition, when the first search layer contains fewer than 8 data points, the second layer is allowed to participate in the interpolation operation, which also compensates for missing data in some directions in the first layer. At the edges of the grid, the same rules are applied until the number of sampling points reaches at least 8. The search method is shown in Figure 5.

The point to be interpolated follows these steps when searching for sampling point data:

(1) First, the data points in cells 1, 2, 3, and 4 are extracted and their number is counted. If the amount of data is less than 8, step (2) is performed; if it is greater than or equal to 8, all the data in cells 1, 2, 3, and 4 are used in the interpolation operation, and the attribute value of the point to be interpolated is calculated.

(2) The data points in cells 5, 6, …, 15, 16 are extracted and the total number of extracted data points is counted. If the amount of data is still less than 8, the data points in the third layer of cells are extracted, and so on; if it is greater than or equal to 8, all the data in cells 1, 2, 3, …, 15, 16 are used for interpolation, and the attribute value of the point to be interpolated is calculated.
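The steps above can be sketched as a layer-expanding search over the cell-index arrays produced by the single-pass search (the function name, the `-1` empty sentinel, and re-collecting from scratch per layer are illustrative simplifications; the first layer is the 2 × 2 block of cells around the point, the second the 4 × 4 block, and so on):

```python
def layered_sample_search(grid_first, point_next, m, n, max_layer, min_points=8):
    """Expand the square block of grid cells around grid point (m, n)
    layer by layer until at least min_points sample indices are found."""
    collected = []
    for layer in range(1, max_layer + 1):
        collected = []
        # cells whose lower-left corners span a (2*layer) x (2*layer) block
        for cm in range(m - layer, m + layer):
            for cn in range(n - layer, n + layer):
                if 0 <= cm < len(grid_first) and 0 <= cn < len(grid_first[0]):
                    i = grid_first[cm][cn]
                    while i != -1:          # walk the cell's chain
                        collected.append(i)
                        i = point_next[i]
        if len(collected) >= min_points:
            break
    return collected
```

The bounds check on `cm` and `cn` implements the edge-of-grid behavior described above: layers simply keep expanding until enough in-bounds sampling points are found.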

The flow chart for fixed-grid sampling is shown in Figure 6.

##### 3.2. Variability-Weighted Interpolation Algorithm

The variability-weighted interpolation algorithm combines the high efficiency of the inverse distance weighting algorithm with the high accuracy of the ordinary kriging interpolation algorithm. The inverse distance weighting formula and the variability weighting formula are combined with separate weights, and by comparing the interpolation results for different weight combinations, the weight interval yielding high-accuracy interpolation results is determined.

Let the original data study area be $D$, with original data points $Q_i$ and corresponding attribute values $Z(Q_i)$, and let $Q$ be the point to be interpolated. The weighting factor of the inverse distance weighting algorithm can be expressed as follows:

$$\lambda_i = \frac{d_i^{-p}}{\sum_{j=1}^{N} d_j^{-p}},$$

where $\lambda_i$ denotes the weight coefficient, $\sum_{i=1}^{N} \lambda_i = 1$, $d_i$ indicates the distance between the point to be interpolated and original data point $i$, and $p$ denotes the distance decay parameter value.

The above formula shows that the weight coefficient is related only to the distance between the two points. The variation function in ordinary kriging is likewise a quantity that depends only on the distance between two points, so the distance $d_i$ in formula (11) can be replaced by the variation function of ordinary kriging. With the data points involved in the interpolation selected by fixed-grid sampling during the experiments, the resulting weight coefficient can be expressed as follows:

$$\lambda'_i = \frac{\gamma(d_i)^{-p}}{\sum_{j=1}^{N} \gamma(d_j)^{-p}},$$

where $\gamma(d)$ is the variation function. The combined weight is then

$$w_i = \alpha \lambda_i + \beta \lambda'_i,$$

where $\alpha$ and $\beta$ are weighting factors and $\alpha + \beta = 1$.
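The blended weights can be sketched as follows; the exponent applied to the variogram values and the default `alpha = 0.5` are illustrative assumptions, since the text determines the weighting factors experimentally:

```python
def vw_weights(dists, gamma, p=2.0, alpha=0.5):
    """Blend normalized inverse-distance weights with weights in which
    the distance has been replaced by the variogram value gamma(d).
    alpha and beta = 1 - alpha are the blending factors."""
    idw_raw = [d ** (-p) for d in dists]            # lambda_i numerators
    var_raw = [gamma(d) ** (-p) for d in dists]     # lambda'_i numerators
    s1, s2 = sum(idw_raw), sum(var_raw)
    return [alpha * a / s1 + (1.0 - alpha) * b / s2
            for a, b in zip(idw_raw, var_raw)]
```

Because each component weight set is normalized and the blending factors sum to 1, the blended weights also sum to 1, preserving the unbiasedness of the weighted average.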

#### 4. Experimental Process

Through the analysis of the fixed-grid sampling method and the variability-weighted interpolation algorithm, the general steps of the experiment are as follows:

(1) Obtain the experimental data information.

(2) Grid the interpolation region, and locate and store the original data by a single search of the fixed-grid data.

(3) Calculate the variation function from the original data. The raw data are mostly obtained by acquisition and measurement and are therefore discrete. Discrete data are divided into data with a scattered coordinate distribution and data with a regular coordinate distribution. For data with a scattered coordinate distribution, the variation function can be calculated by relaxing the distance and angle tolerances, so that data with small distance and angle deviations can be grouped into the same direction and step length for calculation. As the data used in this study have a regular coordinate distribution, with coordinates arranged at intervals of 1 along the $x$ and $y$ axes, the basic step length is set directly to 1 when calculating the variation function. The specific steps for calculating the experimental variation function are as follows:

  (1) Initialize the data variables. Define the variables: $h$ is the basic step, taken to be 1 owing to the nature of the experimental data, and $K$ is the number of directions of the variation function. Define the arrays: $D$ stores the original data; $F$ (an integer array) holds the direction of each point pair; $U$ and $V$ hold the data information for the two points of each pair in each direction; $\Gamma$ holds the value of the variation function in each direction at each step position; and $C$ holds the number of point pairs corresponding to each direction.

  (2) Read the raw data and store it in array $D$. Determine the number of directions $K$ of the variation function.

  (3) Select a pair of raw data points in $D$ and calculate the distance between the two points and the angle between their connecting line and the $x$ axis. Use this to determine the direction of the pair and deposit it in the corresponding $U$ and $V$.

  (4) Repeat step (3) until all the data in array $D$ have been traversed.

  (5) For each direction, select two data points from the corresponding arrays $U$ and $V$ and calculate the distance between them; since the basic step of the experiment is 1, the lag is a multiple of $h$. Subtract the attribute values of the two points, square the difference, accumulate the result in $\Gamma$, and increment the pair count in $C$.

  (6) Repeat step (5) until arrays $U$ and $V$ have been traversed.

  (7) Iterate through arrays $\Gamma$ and $C$, dividing each element of $\Gamma$ by twice the value of the corresponding element of $C$, and assign the result back to $\Gamma$. The values of the experimental variation function are then stored in array $\Gamma$. The flow chart for calculating the experimental variation function values is shown in Figure 7. Experimental variation function values were obtained separately for smooth data without features and for abrupt data with features (Figures 8 and 9).

(4) Select a suitable theoretical variation function model according to the experimental variation function values, fit its parameters, and determine the variation function formula. Theoretical variation function models fall into two main types: models with sills and models without sills. Models with sills include the spherical, exponential, and Gaussian models; models without sills mainly include the power function and logarithmic function models. In practice, models with sills are chosen for most regionalized variables. Based on the values and distribution of the experimental variation function obtained, the most suitable theoretical model is the Gaussian model, whose theoretical formulation can be expressed as follows:

$$\gamma(h) = C \left( 1 - e^{-h^2 / a^2} \right),$$

where $\gamma(h)$ denotes the variation function, $C$ denotes the sill (arch height), representing the structurally varying part of the parameter, $h$ denotes the lag (basic step), and $a$ denotes the range parameter, describing how the spatial correlation decays with distance. Combining the distribution of the experimental variation function values, the parameters of the Gaussian model for the smooth data without features are $C = 0.0372$ and $a = 20.8604$, so the expression of the variation function for the smooth data without features is

$$\gamma(h) = 0.0372 \left( 1 - e^{-h^2 / 20.8604^2} \right).$$

The fitted Gaussian model variation functions are shown in Figures 10 and 11.

(5) Select a point $Q$ to be interpolated, and select the sampling points to be involved in the interpolation operation using the fixed-grid sampling method.

(6) Calculate the distances $d_i$ between the point to be interpolated and the sampling points.

(7) Calculate the weight coefficient $\lambda_i$ based on the distances $d_i$, and the weight coefficient $\lambda'_i$ based on the variation function formula.

(8) Calculate the attribute value of the point to be interpolated.

(9) Complete the calculation for all the points to be interpolated by cycling through steps (5) to (8).
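The experimental variogram computation and the fitted Gaussian model can be sketched as follows (an isotropic version for brevity; the directional bookkeeping in the steps above is omitted, and the sill/range values are those reported for the featureless data):

```python
import math
from collections import defaultdict

def experimental_variogram(data, h=1.0, max_lag=10):
    """Isotropic experimental variogram for (x, y, value) data on a
    regular grid. Pair distances are binned to multiples of the basic
    step h; gamma(k*h) = sum (z_i - z_j)^2 / (2 * N_pairs)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    n = len(data)
    for i in range(n):
        xi, yi, zi = data[i]
        for j in range(i + 1, n):
            xj, yj, zj = data[j]
            k = round(math.hypot(xi - xj, yi - yj) / h)
            if 1 <= k <= max_lag:
                sums[k] += (zi - zj) ** 2
                counts[k] += 1
    return {k * h: sums[k] / (2 * counts[k]) for k in sorted(counts)}

def gaussian_model(h, c=0.0372, a=20.8604):
    """Gaussian variogram model gamma(h) = c * (1 - exp(-h^2 / a^2)),
    with the sill c and range parameter a fitted in the text."""
    return c * (1.0 - math.exp(-(h * h) / (a * a)))
```

In practice the model parameters would be fitted (e.g., by least squares) to the experimental values returned by `experimental_variogram`, as step (4) describes.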

#### 5. Comparison of Experimental Result Validation

In this article, two types of data are used, and on each type the inverse distance weighted interpolation algorithm, the ordinary kriging interpolation algorithm, and the variability-weighted interpolation algorithm based on fixed-grid sampling of the data are validated against one another. Three metrics, the mean square error, the mean absolute deviation, and the standard deviation, were used to evaluate the interpolation results.

(1) Featureless smooth data. For the featureless smooth data, the three algorithms were used for the interpolation calculation. A comparison of the interpolation results is shown in Table 2, and three-dimensional stereograms of the interpolation results are shown in Figures 12–14.

(2) Characteristic mutation data. For the characteristic mutation data, the three algorithms were used for the interpolation calculation. A comparison of the interpolation results is shown in Table 3, and three-dimensional stereograms of the interpolation results are shown in Figures 15–17.

From the above interpolation results, it can be seen that in the experiments on smooth data without features, VW based on fixed-grid sampling improved the mean square error, mean absolute deviation, and standard deviation by 63.29%, 71.83%, and 47.1%, respectively, compared to IDW, and improved the efficiency by 56.17% compared to OK. In the experiments on the characteristic mutation data, VW based on fixed-grid sampling improved on IDW by 18.78%, 59.32%, and 36.19% in mean square error, mean absolute deviation, and standard deviation, respectively, and on OK by 69.25%, 82.63%, 58.3%, and 84.77% in mean square error, mean absolute deviation, standard deviation, and efficiency, respectively. Thus, VW based on fixed-grid sampling of the data achieves high interpolation accuracy and fast interpolation, and it is well suited to both smooth data without features and mutation data with features. It is worth noting that, for data with features, OK can produce large data fluctuations, reducing interpolation accuracy; VW based on fixed-grid sampling of the data solves this problem well and provides a significant improvement in efficiency.

In addition to comparing the interpolation results of the three methods to verify the interpolation accuracy and efficiency of the VW interpolation algorithm based on fixed-grid sampling, the algorithm was also applied to data with different degrees of sparsity to verify the stability of its interpolation results.

(1) Featureless smooth data. The interpolation results of the VW interpolation algorithm for featureless data with different sparsity levels are shown in Figure 18.

(2) Characteristic mutation data. The interpolation results of the VW interpolation algorithm for characteristic data with different sparsity levels are shown in Figure 19.

As can be seen from the stability comparison, the interpolation accuracy is relatively stable overall between 5% and 25% sparsity for both featureless smooth data and characteristic mutation data, although individual errors may fluctuate slightly as sparsity varies. Thus, the interpolation accuracy of the VW interpolation algorithm is relatively stable for data with varying degrees of sparsity.
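A sparsity test of this kind can be reproduced with a simple hold-out loop: withhold a fraction of the samples, interpolate them back from the remainder, and compare. The function below is our simplification (the function name, the 5%–25% sparsity grid, and reporting MSE only are our choices); it accepts any interpolator with the signature `interpolate(xy, z, q)`:

```python
import numpy as np

def stability_test(xy, z, interpolate, sparsities=(0.05, 0.10, 0.15, 0.20, 0.25), seed=0):
    """Withhold a fraction of the samples, predict them from the remainder
    with `interpolate`, and report the mean square error per sparsity level."""
    rng = np.random.default_rng(seed)
    n = len(z)
    results = {}
    for s in sparsities:
        k = max(1, int(round(s * n)))
        held = rng.choice(n, size=k, replace=False)   # indices to withhold
        mask = np.ones(n, dtype=bool)
        mask[held] = False
        preds = [interpolate(xy[mask], z[mask], xy[i]) for i in held]
        results[s] = float(np.mean((np.asarray(preds) - z[held]) ** 2))
    return results
```

A stable interpolator shows errors of the same order across all sparsity levels, which is the behavior reported above for the VW algorithm.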

#### 6. Conclusions

This article proposes a variational-weighted (VW) interpolation algorithm based on fixed-grid sampling of the data, combining the inverse distance-weighted interpolation algorithm and the ordinary kriging interpolation algorithm. Fixed-grid sampling improves the interpolation efficiency, while variational weighting improves the interpolation accuracy. The article details the sampling search principle and process, the interpolation algorithm principle and process, and an experimental verification of the proposed algorithm in terms of interpolation accuracy, efficiency, and stability using two different types of data. The experimental results show that the VW interpolation algorithm based on fixed-grid sampling achieves high interpolation accuracy and fast interpolation speed for both smooth data without obvious features and mutation data with obvious features. Regardless of the data type, the VW interpolation algorithm is substantially faster than the ordinary kriging algorithm. The interpolation accuracy remains stable as the sparsity of the original data varies. The good performance and stability of the new method provide a useful reference for applying data interpolation in related fields to different types of original data.
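The sampling search summarized above, which is the source of the efficiency gain, amounts to bucketing samples into grid cells so that neighbor lookup touches only a few cells instead of every point. A hedged sketch of such a scheme (the cell size, ring count, and function names are our assumptions; the article's own search method may differ in detail):

```python
from collections import defaultdict

def build_grid(xy, cell):
    """Bucket sample indices by integer grid cell for O(1) cell lookup."""
    buckets = defaultdict(list)
    for i, (x, y) in enumerate(xy):
        buckets[(int(x // cell), int(y // cell))].append(i)
    return buckets

def grid_neighbors(buckets, q, cell, rings=1):
    """Return indices of samples in the query point's cell and the
    surrounding rings of cells, avoiding a scan over all samples."""
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    idx = []
    for dx in range(-rings, rings + 1):
        for dy in range(-rings, rings + 1):
            idx.extend(buckets.get((cx + dx, cy + dy), []))
    return idx
```

With the grid built once, each interpolation query inspects a constant number of cells, so the cost per query no longer grows with the total number of samples.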

The weighting coefficients of the VW interpolation algorithm based on fixed-grid sampling were tuned manually through repeated experiments, so subsequent research could focus on weighting-coefficient optimization. Adaptively optimizing the weighting coefficients for different types of data would minimize the accuracy error for all types of input data.

#### Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

This work was supported by the National Defense Basic Research Program (JCKY2019204B020).