This paper demonstrates that automatic selection of the right interpolation/smoothing method in a GNSS-based network real-time kinematic (NRTK) interpolation segment can improve both the accuracy of the rover position estimates and the processing time in the NRTK processing center. The methods discussed and investigated are inverse distance weighting (IDW); bilinear and bicubic spline interpolation; kriging interpolation; thin-plate splines; and numerical approximation methods for spatial processes. The methods are implemented and tested using GNSS data from reference stations in the Norwegian network RTK service CPOS. Data sets with an average baseline between reference stations of 60–70 km were selected. Twelve prediction locations were used to analyze the performance of the interpolation methods by computing and comparing different measures of the goodness of fit, such as the root mean square error (RMSE), mean square error, and mean absolute error; the computation time was also compared. The test results show that ordinary kriging with the Matérn covariance function clearly provides the best results. The thin-plate spline provides the second best results with the methods selected and the test data used.

1. Introduction

The use of GNSS and network real-time kinematic positioning to achieve GNSS positions with cm-level accuracy is increasing rapidly. This is partly due to the development and modernization of the GNSS systems themselves (GPS, GLONASS, Galileo, and BeiDou), but it is also driven by a general quest for better position accuracy in many user communities.

High-accuracy GNSS positioning is based on the carrier phase observable. Using the notation from [1], the basic observation equation that summarizes the relation between observations and error sources is given as follows:
$$\Phi = \frac{1}{\lambda}\left(\rho - I + T\right) + f\left(dt_r - dt^s\right) + N + \varepsilon,$$
where $\Phi$ is the phase observation in cycles, $\lambda$ is the wavelength in meters/cycle, $\rho$ is the geometric distance between the receiver and satellite in meters, $I$ is the ionospheric signal delay in meters, $T$ is the tropospheric signal delay in meters, $f$ is the frequency in Hertz, $dt_r$ and $dt^s$ are the clock errors of, respectively, the receiver and the satellite, $N$ is the initial number of cycles at the first observation epoch (the ambiguity), and $\varepsilon$ is a noise term given in cycles that mainly accounts for multipath (reflected signals) and receiver noise.

When using the NRTK technique, a network of reference stations is used to estimate the errors in the positioning process, that is, the effects of the ionosphere and troposphere as well as inaccuracies in the satellite position as provided with the broadcast ephemerides from the satellites.

The accuracy of NRTK positioning systems depends on the ability to identify and mitigate the error sources in the system as well as the residual biases. The biases include residual effects from the space segment, signal propagation, environment effects, and receiver noise in the reference network. The mitigation process can be carried out by modeling, estimation, or combinations of observables.

The NRTK processing chain can be summarized as follows: the first step is to collect raw measurements from the network of reference stations, solve for the ambiguities within the reference network, and generate error estimates. The next step is to apply the interpolation/smoothing scheme to generate the RTK corrections for the user location. The RTK corrections are then transmitted to users who can then perform real-time positioning with accuracy at the cm-level.

Figure 1 shows all segments involved in the NRTK processing chain. The figure illustrates the so-called virtual reference station (VRS) concept, which was developed by Landau et al. [2]. Other NRTK standards such as for instance the master auxiliary concept (MAC) also exist [3], but we limit the discussion in this paper to the VRS concept.

As the GNSS systems and users become more numerous, the amount of data that needs processing increases as well, which poses some interesting challenges for NRTK system developers and service providers. This paper focuses on processing large data sets and on high-quality interpolators/smoothers that can aid the data processing. Let us consider how the RTK processing is carried out. First the user sends his/her position to the control center; the network engine then chooses a suitable subnetwork, which is used to generate corrections, and these corrections are transmitted back to the user. The first challenge to this model is the number of users, since each user has to be processed independently, and the number of users has increased dramatically in recent years. The solution is to construct new models and algorithms that can process data from large geographical areas and compute the necessary corrections and quality indicators ready for use, so that any RTK user that connects will be served immediately.

In other branches of science and engineering, new analysis tools that satisfy these requirements have already been developed: neural networks, machine learning, classification and regression trees, hierarchical models, and so forth. In this paper, some existing interpolation/smoothing methods are applied to real datasets, and the strengths and weaknesses of each method are identified. The results are then used to combine these methods and construct models that describe the observed variations in the data as well as possible.

Interpolation methods can be divided into two categories: local methods and global methods. The local methods only use a subset of the data for interpolation, which implies that the required processing time is reduced. Conversely, the global techniques use all the data available to generate predictions. In this paper, both these approaches are considered. Referring to Figure 1, the main focus of this paper is directed at the correction interpolation segment and more specifically at the automatic selection of the right interpolation algorithm based on appropriate tests, such that the rover position estimation will be improved.

The rest of the paper is organized as follows: Section 2 gives a full description of the test data from the Norwegian GNSS network, known as CPOS, and introduces the variational problem in general. Section 3 covers local interpolation algorithms, specifically the inverse-distance weighted and bilinear/bicubic methods by Akima. Section 4 and the following sections deal with global interpolation methods. First, thin-plate splines and the Bayesian model behind the smoothing are reviewed. Section 5 introduces numerical approximation schemes for Gaussian random fields. Section 6 covers spatial interpolation algorithms and specifically the ordinary kriging method. Section 7 defines the performance parameters. Section 8 generalizes the results from Sections 3–6. Section 9 concludes with a discussion and covers applications of the results developed in Sections 3–6.

2. Test Data and Variational Reconstruction

2.1. Test Data

The main success of network real-time kinematic positioning has been the reduction of correlated errors in the network (e.g., ionospheric, tropospheric, and satellite position errors). These errors are collectively referred to as distance-dependent errors and can be subdivided into dispersive errors, which depend on frequency, and nondispersive errors, which do not.

The size of the network varies with time, as individual reference stations and satellites may not deliver data for a while, and observations are typically correlated with each other. Fully modeling the spatial and temporal variations of such a process is too complex, so the covariance structure of the data is often simplified by imposing stationarity. In this paper, we apply techniques for handling spatial processes in order to capture the covariance structure in the data, such that high-quality synthetic data can be provided. The key idea is to employ the right tool from epoch to epoch, based on appropriate criteria.

We prefer to work with real data, and since the real network error estimates were not made available, we decided to analyze the ionospheric path delays for the CPOS RTK network, given by absolute TEC values. If the ionosphere data is replaced with the full network corrections, the same algorithms should still function well. Ionospheric path delay is considered the single largest source of inaccuracy for positioning and navigation, so the quality of the NRTK corrections is strongly affected in the case of moderate to high ionospheric activity. To test the algorithms against each other, a large ionospheric data set from the Norwegian CPOS network is investigated. The data is generated by a first-order geometry-free approach (Section 8.1).

At the time of writing, the CPOS RTK network contains approximately 180 stations on the Norwegian mainland (for a listing of stations of the CPOS RTK network, see the appendix). The algorithms were tested with different station configurations (50, 75, and 110 stations), all equipped with geodetic dual-frequency GNSS receivers, which track both GPS and GLONASS satellites. In this investigation, however, only the GPS portion of the data was used. The distribution of the CPOS RTK network reference stations is given in Figure 2.

2.2. Variational Reconstruction
2.2.1. Problem Formulation

Let us assume that the observed noisy measurements $Z(s_i)$ at the locations $s_i$, $i = 1, \dots, n$, come from a random function with mean $\mu$ and variance $\sigma^2$. Our goal is then to predict the value $Z(s_0)$ at other locations $s_0$ where we have no observations, under the assumption that the predicted value should resemble its neighbors. To achieve this, we can either interpolate or construct a smooth function that represents the variation in the data and is robust against outliers.

The data that will be modeled is a pure spatiotemporal process, namely, the absolute total electron content (TEC). Assuming weak stationarity of the process under study, the mean and variance are not functions of the spatial location $s$. The model used to describe the variation in the data in this paper, however, is assumed to have the form
$$Z(s) = \mu(s) + e(s).$$
The mean function $\mu(s)$, often referred to as the trend or deterministic part, determines the large-scale variation in the data. The function $e(s)$ is called the random part and determines the small-scale variation. This process model will be assumed in the subsequent discussion of all the different interpolation/smoothing techniques presented in this paper.

Some data analysts prefer the Cressie decomposition [4, ch. 3] of the observed random field, which takes the form
$$Z(s) = \mu(s) + \eta(s) + \varepsilon(s),$$
where $s$ is the spatial location; $Z(s)$ is the observation; $\mu(s)$ is the trend (the mean component of the model); $\eta(s)$ is a stationary Gaussian process with variance $\sigma^2$ (partial sill) and a correlation function parameterized in its simplest form by $\phi$ (the range parameter); and finally $\varepsilon(s)$ is an error term, with a variance parameter $\tau^2$ (nugget variance).

2.2.2. Model Parameter Estimations

Once the model is defined, the next step is to estimate the model parameters. In general, this is done numerically by minimizing the negative log-likelihood function. The most used optimization methods are the conjugate gradient method, the quasi-Newton method, and the Nelder-Mead method. The details of these methods will not be treated in this paper; the interested reader is referred to [5, 6].

The algorithm may not converge to correct parameter values when called with the default options. The user should therefore try different initial values, and if the parameters have different orders of magnitude, a scaling of the parameters may be necessary. If such problems arise, some possible workarounds include (i) rescaling data values by dividing by a constant, (ii) rescaling coordinates by subtracting values and/or dividing by constants, and (iii) bootstrapping to accelerate the convergence. The last method is used in our implementation of the kriging algorithm in Section 6.

2.2.3. Model Validation

In the field of statistical analysis, an appropriate way of analyzing data is to divide it into three distinct subsets: the training data set is used to construct the model, the validation data set is used to check the model, and the last data set is used to challenge the model. The main purpose is to determine whether or not the model is an accurate representation of the real-world data. This process is called model validation. The best-known methods are the family of cross-validation methods, generalized maximum likelihood (GML) methods, the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and so forth.

In our implementation, generalized cross-validation is used to determine the optimal smoothing parameter (see Section 4). The AIC and BIC are computed in Section 6, where maximum likelihood estimation is used instead of weighted least squares in the kriging algorithm. The GML methods will be used in future work.

3. IDW and Akima Interpolation

3.1. IDW Interpolation

The inverse-distance weighted (IDW) scheme is an exact interpolator. It honors the data by assigning weights to all neighboring points according to their distance from the prediction location $s_0$. Locations that are closer to $s_0$ receive higher weights, and locations that are far from $s_0$ are given lower weights; this mechanism is administered by the power parameter $p$ in the IDW predictor formula. The user can freely choose the number of observations used to perform the interpolation. This is done by defining a radius around the prediction location $s_0$.

The IDW predictor scheme is defined as
$$\hat{Z}(s_0) = \frac{\sum_{i=1}^{n} w_i\, Z(s_i)}{\sum_{i=1}^{n} w_i}, \qquad w_i = d(s_0, s_i)^{-p}.$$
Here, $s_0$ is the prediction location, $n$ is the number of observations, $Z(s_i)$ are the neighboring observations, $p$ is the weight decision parameter, and $d(s_0, s_i)$ is the distance (either spherical or Euclidean).
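As a concrete illustration, the predictor above can be sketched in a few lines of Python. This is our own minimal helper (the function name and arguments are assumptions for illustration, not the gstat implementation used in this paper):

```python
import numpy as np

def idw_predict(s0, sites, values, p=2.0, radius=np.inf):
    """IDW prediction at location s0 from observations at `sites`.

    p      : weight decision (power) parameter
    radius : observations farther than this are ignored (local IDW)
    """
    d = np.linalg.norm(sites - np.asarray(s0, dtype=float), axis=1)
    if np.any(d == 0):                 # exact interpolator: honor the data
        return values[np.argmin(d)]
    w = np.where(d <= radius, d ** -p, 0.0)
    return np.sum(w * values) / np.sum(w)
```

Increasing p concentrates the weight on the nearest stations, while a finite radius turns the global predictor into a local one.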

The IDW method is originally due to Shepard [7], who described a global method; all derived IDW methods are either generalizations or variations of it. The basic Shepard method can be expressed as
$$\hat{Z}(s_0) = \sum_{i=1}^{n} \frac{w_i(s_0)}{\sum_{j=1}^{n} w_j(s_0)}\, Z(s_i),$$
where typically the weight $w_i(s_0)$ is the inverse Euclidean distance $\lVert s_0 - s_i \rVert^{-p}$. We will, however, define a disk with center $s_0$ and radius $r$ and set the weight to zero outside of this disk. A natural scheme suggested by many authors, for example, Renka and Brown [8], is given by the expression
$$w_i(s_0) = \left(\frac{\left(r - d_i\right)_{+}}{r\, d_i}\right)^{2}, \qquad \left(r - d_i\right)_{+} = \begin{cases} r - d_i, & d_i < r, \\ 0, & d_i \ge r, \end{cases}$$
where $d_i = \lVert s_0 - s_i \rVert$.

Impose the constraints such that (i) the sum of all weights inside the disk is normalized to unity, that is, $\sum_{i} w_i = 1$, and (ii) the predictor is a linear combination of the observations. If the variance of the predictor is additionally controlled such that it is at a minimum, the IDW behaves almost like the local kriging interpolator (Section 6); however, the covariance structure is not preserved.

For the implementation, the package gstat from Edzer Pebesma is used to carry out IDW (see Table 3 for more information).

3.2. Akima Algorithms

Bilinear or bicubic spline interpolation is applied using different versions of the algorithms by Akima [9, 10]. Given a set of data points in a plane, the aim is to fit a smooth curve that passes through the given points, considered as reference points. The method is local and is based on a piecewise function composed of a set of polynomials, of degree at most three, on each interval. The method produces remarkable results with minimal processing time. For a detailed mathematical formulation, refer to [9, 10].

3.2.1. Basics of Spline Interpolation/Smoothing

An understanding of the basic elementary building blocks of a 1D spline facilitates the understanding of 2D and 3D splines, for instance, the TPS (Section 4).

Given real numbers $x_1 < x_2 < \cdots < x_n$, a function $f$ defined on the interval $[x_1, x_n]$ is a cubic spline if and only if the following conditions are satisfied: (i) the function $f$ is a cubic polynomial on each interval $[x_i, x_{i+1}]$; (ii) the function $f$ and its first and second derivatives are continuous at each of the points $x_i$. Condition (ii) implies that the cubic polynomials from condition (i) fit together at each $x_i$, where the $x_i$ are called knots. Together, these two conditions imply that $f$ is a function with continuous first and second derivatives on the whole interval $[x_1, x_n]$.

For some given real constants $a_i$, $b_i$, $c_i$, and $d_i$, the cubic spline function can be expressed as
$$f(x) = a_i + b_i\left(x - x_i\right) + c_i\left(x - x_i\right)^2 + d_i\left(x - x_i\right)^3, \qquad x \in [x_i, x_{i+1}],$$
where the index $i = 1, \dots, n - 1$. The end-point knots correspond to the boundaries of the function domain; that is, $x_1 = a$ and $x_n = b$.

Finding a smoothing spline is not an easy task. Reinsch (1967) proposed an algorithm and showed that the solution of the minimum principle is in fact a cubic spline. The basic idea is to construct a nonsingular system of linear equations in the second derivatives of the spline at the knots. The resulting equations are computationally efficient because of their banded structure. For an excellent exposition of the material, see also [11].

3.2.2. Output Result from Akima

Figure 3 shows the output from Akima with bilinear interpolation.

4. Thin Plate Spline Method

4.1. Mathematical Preliminaries

In this section, our main interest is not to construct a function that exactly interpolates the data at distinct points but to find an attractive way to smooth noisy data. The method of thin-plate splines (TPS) will be used for this purpose.

Duchon [12] was the first to build the theoretical foundation for the TPS method. The name TPS comes from the physical situation of bending a thin surface, where the method minimizes the bending energy of a thin plate fixed at the data sites. For our application, the TPS method is used to minimize the cost function
$$J_m(f) = \int_{\mathbb{R}^d} \sum_{|\alpha| = m} \frac{m!}{\alpha!} \left(\frac{\partial^m f}{\partial x^{\alpha}}\right)^{2} dx,$$
where $\alpha$ is a multi-index of partial differentiation operators of order $m$. In the two-dimensional case, that is, when $d = 2$ and $m = 2$, the TPS penalty function can be written as
$$J_2(f) = \iint_{\mathbb{R}^2} \left[\left(\frac{\partial^2 f}{\partial x^2}\right)^{2} + 2\left(\frac{\partial^2 f}{\partial x\, \partial y}\right)^{2} + \left(\frac{\partial^2 f}{\partial y^2}\right)^{2}\right] dx\, dy.$$

Let $\Delta^2$ denote the differential operator associated with the integrand of (10). The thin-plate spline $f$, which is the solution to the variational problem of minimizing the penalty $J_2$, can then be found by solving the biharmonic equation $\Delta^2 f = 0$.

The goal is to find the function $f$ in the Sobolev space [13, p. 250] that minimizes the following expression:
$$S_\lambda(f) = \frac{1}{n} \sum_{i=1}^{n} \left(z_i - f(s_i)\right)^{2} + \lambda\, J_m(f),$$
where $n$ is the total number of observations, $J_m(f)$ is the smoothness penalty (the cost function), and $\lambda$ is the smoothing parameter. The smoothing parameter is a trade-off factor between the rate of change of the residual error and local variation. Optimal minimization of $S_\lambda$ results in a good compromise between smoothness and goodness of fit.

Once the curve approximation of the data has been constructed, generating values at any location where no observations are available is accomplished by simply indexing the coordinate variables and fetching the corresponding value. This is a major advantage of smoothing methods over interpolation methods: no extra interpolations are required after the curve has been constructed for a given epoch.

Validation is carried out by the GCV (Section 2.2.3). Let $f_\lambda^{[-k]}$, with $k = 1, \dots, n$, denote the spline fitted with smoothing parameter $\lambda$ to all observations except the $k$th. The OCV (ordinary cross-validation) criterion then selects the $\lambda$ minimizing the OCV MSE (ordinary cross-validation mean square error), defined as
$$V_0(\lambda) = \frac{1}{n} \sum_{k=1}^{n} \left(z_k - f_\lambda^{[-k]}(s_k)\right)^{2}.$$

The determination of the GCV (generalized cross-validation) goes as follows. First, the expression for $V_0$ has to be rewritten. There exists an $n \times n$ matrix $A(\lambda)$, the smoothing/influence/sensitivity matrix, with the property
$$\hat{f}_\lambda = A(\lambda)\, z,$$
such that $V_0(\lambda)$ can be written as
$$V_0(\lambda) = \frac{1}{n} \sum_{k=1}^{n} \frac{\left(z_k - \hat{f}_\lambda(s_k)\right)^{2}}{\left(1 - a_{kk}(\lambda)\right)^{2}},$$
where $a_{kk}(\lambda)$ is the $k$th diagonal element of $A(\lambda)$.

Definition 1 (generalized cross-validation (GCV)). Let $A(\lambda)$ be the smoothing matrix defined in (14); then the GCV function is given by the expression
$$V(\lambda) = \frac{\frac{1}{n} \left\lVert \left(I - A(\lambda)\right) z \right\rVert^{2}}{\left(\frac{1}{n} \operatorname{tr}\left(I - A(\lambda)\right)\right)^{2}}.$$
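Given any linear smoother with influence matrix $A(\lambda)$, the GCV score in Definition 1 is straightforward to evaluate. The sketch below is our own illustration, using a ridge-type smoother as a hypothetical stand-in for the spline influence matrix; it scans a grid of candidate values and picks the minimizer:

```python
import numpy as np

def gcv_score(A, z):
    """GCV score V(lambda) for a linear smoother with z_hat = A @ z."""
    n = z.size
    resid = z - A @ z
    num = np.sum(resid ** 2) / n
    den = (np.trace(np.eye(n) - A) / n) ** 2
    return num / den

# Stand-in smoother: ridge influence matrix A = X (X'X + lam I)^-1 X'.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))
z = X @ np.array([1.0, -0.5, 0.2, 0.0]) + 0.1 * rng.standard_normal(30)
lams = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
scores = [gcv_score(X @ np.linalg.solve(X.T @ X + lam * np.eye(4), X.T), z)
          for lam in lams]
lam_hat = lams[int(np.argmin(scores))]   # GCV choice of smoothing parameter
```

In practice the grid search would be replaced by a one-dimensional minimization over $\lambda$, as done internally by the GCV-based spline packages.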

4.2. Estimation of the Smoothing Parameter

The smoothing parameter $\lambda$ plays a central role in the TPS method. By adjusting the value of $\lambda$, one can get the desired level of smoothness at the cost of accuracy at the data sites. When this parameter is set to zero, the problem reduces to an interpolation with no smoothing. On the other hand, when the smoothing parameter tends to infinity, the method yields a plane which is least-squares fitted to the data. The smoothing parameter can be chosen by any criterion, but the most popular is GCV (generalized cross-validation), closely related to the leave-one-out method. The GCV criterion selects the smoothing parameter that minimizes the GCV function (16), that is, $\hat{\lambda} = \arg\min_{\lambda} V(\lambda)$.

The GCV function estimates the predicted mean square error and can be viewed as a weighted version of the OCV MSE:
$$V(\lambda) = \frac{1}{n} \sum_{k=1}^{n} \left(z_k - \hat{f}_\lambda(s_k)\right)^{2} \frac{w_{kk}(\lambda)}{\left(1 - a_{kk}(\lambda)\right)^{2}}, \qquad w_{kk}(\lambda) = \left(\frac{1 - a_{kk}(\lambda)}{\frac{1}{n} \operatorname{tr}\left(I - A(\lambda)\right)}\right)^{2}.$$

In geodesy, it is often interesting to estimate the accuracy of $\hat{f}_\lambda$. Two loss functions are considered: the mean square prediction error
$$L(\lambda) = \frac{1}{n} \sum_{i=1}^{n} \left(f(s_i) - \hat{f}_\lambda(s_i)\right)^{2},$$
and the stricter Sobolev error, defined through the Sobolev norm $\lVert f - \hat{f}_\lambda \rVert$. The performance of an estimator is often well characterized by the risk function, defined as the expected value of the loss function:
$$R(\lambda) = \mathbb{E}\left[L(\lambda)\right].$$

In this analysis, the GCV is used to estimate the smoothing parameter $\lambda$. Figure 12 shows the smoothed surface generated by the TPS with GCV.

The CRAN package rgcvpack is used to implement the TPS algorithm (see Table 3 for more information).
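For readers working in Python rather than R, SciPy's RBFInterpolator offers a comparable thin-plate spline fit. The sketch below is our own illustration on synthetic data (the coordinates and values are invented), and note that its smoothing argument must be chosen by the user, since SciPy does not select it by GCV:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(42)
sites = rng.uniform(0.0, 100.0, size=(80, 2))   # synthetic station coordinates
tec = np.sin(sites[:, 0] / 30.0) + 0.05 * rng.standard_normal(80)

# smoothing=0 interpolates exactly; larger values relax the fit.
tps = RBFInterpolator(sites, tec, kernel='thin_plate_spline', smoothing=1e-3)

grid = np.array([[50.0, 50.0], [10.0, 90.0]])   # prediction locations
pred = tps(grid)
```

Once the surface is fitted, predictions at arbitrary locations are obtained by evaluation only, which is the advantage of smoothing methods noted above.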

5. Numerical Approximation Methods

Numerical approximation techniques assist us in processing huge data sets while retaining convergence. The main idea is based on the pioneering work of Besag [14].

Let us assume that our observations $z = \left(z(s_1), \dots, z(s_n)\right)^{T}$ at the different locations follow a multivariate Gaussian distribution with mean $\mu$ and variance-covariance matrix $\Sigma$. The continuous Gaussian field then has the density
$$\pi(z) = (2\pi)^{-n/2}\, |\Sigma|^{-1/2} \exp\left(-\tfrac{1}{2} (z - \mu)^{T} \Sigma^{-1} (z - \mu)\right).$$

Approximating the continuous Gaussian random field by the discrete Gauss-Markov random field (GMRF) is accomplished by introducing the Markov property. This is done as follows: we say that the process values at two locations $s_i$ and $s_j$ are conditionally independent if and only if
$$z(s_i) \perp z(s_j) \mid z_{-ij},$$
where $z_{-ij}$ denotes the process at all locations except $s_i$ and $s_j$. This property is very important when constructing the precision matrix $Q = \Sigma^{-1}$ of the GMRF: if we know what happens nearby, we can ignore everything that lies further away. Consider
$$Q_{ij} = 0 \iff z(s_i) \perp z(s_j) \mid z_{-ij}.$$

That is, element $(i, j)$ of $Q$ is zero if the process at location $s_i$ is conditionally independent of the process at $s_j$ given the process at all locations except $s_i$ and $s_j$. Figure 4 illustrates the concept of the GMRF.

The sparse precision matrix makes the GMRF computationally effective, although it can be difficult to construct reasonable precision matrices. In summary, a GMRF is a Gaussian field with a sparse precision matrix $Q$. For an excellent description of the theory and applications of GMRFs, the reader is referred to, for example, Rue and Held [15].
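The computational gain from sparsity can be illustrated with a first-order autoregressive (Markov) process, whose precision matrix is tridiagonal. This toy construction is our own sketch and not the INLA machinery; the parameter values are arbitrary:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 500
rho = 0.9                      # neighbor interaction strength
main = np.full(n, 1.0 + rho ** 2)
main[0] = main[-1] = 1.0
off = np.full(n - 1, -rho)
# Tridiagonal precision matrix Q: Q[i, j] = 0 whenever |i - j| > 1,
# i.e., non-neighbors are conditionally independent given the rest.
Q = sp.diags([off, main, off], offsets=[-1, 0, 1], format='csc')

# Sparse solves replace the dense inverse: e.g., solve Q mu = b directly.
b = np.zeros(n)
b[n // 2] = 1.0
mu = spsolve(Q, b)
```

A dense $n \times n$ covariance would need $O(n^2)$ storage and $O(n^3)$ factorization work; the banded $Q$ here stores only about $3n$ entries and solves in linear time.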

The integrated nested Laplace approximation (INLA) method developed by Håvard Rue is used to implement the GMRF (see Table 3 for more information).

6. Kriging Interpolator

The kriging interpolator is a linear spatial interpolation algorithm and is primarily used in geostatistics. In recent years, however, the interpolator has been applied in many new areas, such as geophysics and climate data analysis.

Given the observations $Z(s_1), \dots, Z(s_n)$, we want to predict the value of $Z(s_0)$ at a location where no observations have been made. Our goal is to find a linear estimator $\hat{Z}(s_0) = \sum_{i=1}^{n} \lambda_i Z(s_i)$ such that the following requirements are met.

(i) Unbiasedness. This means that $\mathbb{E}\left[\hat{Z}(s_0)\right] = \mathbb{E}\left[Z(s_0)\right]$, which is accomplished if $\sum_{i=1}^{n} \lambda_i = 1$ and the mean is stationary.

(ii) Minimum Prediction Variance. We make some assumptions about the mean value of the random field $Z$. If the mean is unknown but constant across the entire region of interest, we have ordinary kriging. If the mean is known, the method is known as simple kriging.

Any estimator that meets the conditions of unbiasedness and minimum prediction variance is said to be a BLUP (best linear unbiased predictor). Let us examine the MSPE (mean square prediction error):
$$\mathrm{MSPE} = \mathbb{E}\left[\left(\hat{Z}(s_0) - Z(s_0)\right)^{2}\right].$$

We want to minimize the MSPE subject to the constraint $\sum_{i=1}^{n} \lambda_i = 1$.

The procedure is well handled by the method of Lagrange multipliers. Form the Lagrangian $L$,
$$L(\lambda_1, \dots, \lambda_n, m) = \mathbb{E}\left[\left(\sum_{i=1}^{n} \lambda_i Z(s_i) - Z(s_0)\right)^{2}\right] - 2m \left(\sum_{i=1}^{n} \lambda_i - 1\right),$$
where $m$ is the Lagrange multiplier.

We then take the partial derivatives of $L$ with respect to the weights $\lambda_i$ and to $m$, set the equations equal to zero, and solve them; we get
$$\sum_{j=1}^{n} \lambda_j \operatorname{Cov}\left(Z(s_i), Z(s_j)\right) - m = \operatorname{Cov}\left(Z(s_i), Z(s_0)\right), \quad i = 1, \dots, n, \qquad \sum_{j=1}^{n} \lambda_j = 1. \quad (25)$$

Equation (25), which is the kriging equation, is used to compute the weights. The computation of the weights is based on the covariances among locations in the sample (region of interest) and the covariances between the sample locations and the location to be predicted. To be specific:
(1) Covariances among the locations in the sample: the covariance matrix of the sample values reads $C = \left[\operatorname{Cov}\left(Z(s_i), Z(s_j)\right)\right]_{i,j=1}^{n}$.
(2) Covariances between the sample locations and the prediction point: the vector of covariances between the sample locations and the prediction point reads $c_0 = \left[\operatorname{Cov}\left(Z(s_i), Z(s_0)\right)\right]_{i=1}^{n}$.
In matrix form, equation (25) becomes
$$\begin{pmatrix} C & \mathbf{1} \\ \mathbf{1}^{T} & 0 \end{pmatrix} \begin{pmatrix} \lambda \\ -m \end{pmatrix} = \begin{pmatrix} c_0 \\ 1 \end{pmatrix},$$
where $\lambda = (\lambda_1, \dots, \lambda_n)^{T}$ is the vector of weights and $\mathbf{1}$ is a vector of ones of the same dimension.
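The ordinary kriging system can be solved directly once a covariance model is chosen. The sketch below is our own minimal illustration with an exponential covariance and invented coordinates, not the package implementation used in the tests:

```python
import numpy as np

def ordinary_kriging(sites, values, s0, cov):
    """Solve the ordinary kriging system for one prediction location.

    cov : covariance as a function of separation distance
    """
    n = len(sites)
    dists = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=2)
    C = cov(dists)                                  # covariances among samples
    c0 = cov(np.linalg.norm(sites - s0, axis=1))    # sample-to-prediction covariances
    # Augment with the unbiasedness constraint sum(lambda_i) = 1.
    A = np.block([[C, np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    sol = np.linalg.solve(A, np.append(c0, 1.0))
    lam = sol[:n]                                   # kriging weights
    return lam @ values, lam

cov = lambda h: np.exp(-h / 50.0)                   # exponential covariance model
sites = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
values = np.array([5.0, 7.0, 6.0])
pred, lam = ordinary_kriging(sites, values, np.array([10.0, 10.0]), cov)
```

Without a nugget term the predictor is exact at the data sites, and the weights always sum to one.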

6.1. Directional Effects

Another form of nonstationarity lies in the covariance structure. One specific way to relax the stationarity assumption is to allow directional effects. For instance, the correlation decay rate at increasing distances may be allowed to depend on the relative orientation between pairs of locations. The simplest form of directional effects in the covariance structure is called geometrical anisotropy. This arises when a stationary covariance structure is transformed by a differential stretching and rotation of the coordinate axes; hence, geometrical anisotropy is defined by two additional parameters. Algebraically, a model with geometrical anisotropy in spatial coordinates $(x, y)$ can be converted to a stationary model in coordinates $(x', y')$ by the transformation
$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1/\psi_R \end{pmatrix} \begin{pmatrix} \cos\psi_A & \sin\psi_A \\ -\sin\psi_A & \cos\psi_A \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix},$$
where $\psi_A$ is called the anisotropy angle and $\psi_R$ the anisotropy ratio. The direction with the slowest correlation decay is called the principal axis.
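One common convention for this coordinate transformation can be sketched as follows (the helper name is ours, and the sign/axis convention is an assumption that varies between packages):

```python
import numpy as np

def isotropic_coords(xy, angle, ratio):
    """Transform (x, y) rows so an anisotropic field becomes isotropic.

    angle : anisotropy angle psi_A in radians
    ratio : anisotropy ratio psi_R (>= 1)
    """
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, s], [-s, c]])     # rotate the principal axis onto x
    scale = np.diag([1.0, 1.0 / ratio])   # compress the orthogonal direction
    return np.asarray(xy) @ (scale @ rot).T
```

After the transformation, any isotropic covariance model can be applied to the new coordinates; with ratio 1 the mapping is a pure rotation and leaves all distances unchanged.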

6.2. Choice of Covariance Function

The spatial correlation between measurements at different locations is described by the semivariogram function
$$\gamma(h) = C(0) - C(h),$$
where $C(0) = \sigma^2$ is the variance and $C(h)$ is the covariance at lag distance $h$. The variogram and the covariance contain the same information and can be used interchangeably.

In this study, the spatial correlation function is defined by the Matérn function and is given by
$$\rho(h) = \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{h}{\phi}\right)^{\nu} K_{\nu}\!\left(\frac{h}{\phi}\right).$$

Here, $h$ is the Euclidean spatial distance between locations, $K_\nu$ is the modified Bessel function of the second kind [16], the order $\nu$ measures the degree of smoothness of the process, and $\phi$ is the scaling parameter related to the distance of decorrelation (where the dependency becomes almost 0).

The Matérn class is obtained from spectral densities [17, p. 31] of the form
$$f(\omega) \propto \left(\phi^{-2} + \lVert \omega \rVert^{2}\right)^{-\left(\nu + d/2\right)},$$
where $d$ is the dimension of the space.
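The Matérn correlation is easy to evaluate with SciPy's Bessel functions. The following helper is our own sketch (not the routine used in the study); for order 0.5 it reduces to the familiar exponential model:

```python
import numpy as np
from scipy.special import gamma, kv

def matern(h, nu, phi, sigma2=1.0):
    """Matérn covariance at distances h (order nu, scale phi)."""
    h = np.atleast_1d(np.asarray(h, dtype=float))
    c = np.full(h.shape, sigma2)                   # C(0) = sigma^2
    pos = h > 0
    u = h[pos] / phi
    c[pos] = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * u ** nu * kv(nu, u)
    return c
```

For nu = 0.5 this equals sigma2 * exp(-h/phi), while larger nu values produce smoother sample paths, which is exactly the flexibility exploited when fitting the semivariogram.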

Figure 5 shows the empirical semivariogram with the fitted Matérn covariance function, which fits the L1-VTEC data well. It also works in a wide range of circumstances, including low, moderate, and high ionospheric activity, tested with several different reference station configurations, more specifically 75, 100, and 115 stations.

6.3. Computation of the Inverse Matrix

The kriging equation (25) requires the inverse of the covariance matrix to be computed, which is detrimental to the performance of the algorithm for large data sets. The operation may occasionally even fail to invert the matrix. Numerical methods help us avoid this, for instance, factorization methods and tests for ill-conditioning.

7. Performance Parameters

In order to carry out the performance analysis of each individual algorithm, an averaged, weighted reference signal was constructed. It is defined as a linear combination of the values generated by the algorithms with different weights, that is, $Z_{\mathrm{ref}} = \sum_{k} w_k Z_k$, under the normalization constraint $\sum_{k} w_k = 1$. Five algorithms are involved in constructing the reference signal $Z_{\mathrm{ref}}$.

The weights are chosen according to algorithm performance measured in terms of minimum value and stability of variance, functionality, correctness, and processing time. Figure 6 shows the variance of two algorithms. We see that ordinary kriging has a minimum and stable variance; therefore its weight is higher than for the Akima bicubic spline. Table 1 summarizes the weight assignment for the algorithms.

7.1. Quality of Service Parameters Definitions

For each of the quality of service (QoS) parameters whose values are negotiable, the worst-case performance must be specified. In some cases, the minimum or maximum values are preferable; in other cases, the averaged value is.

The criteria chosen for performance evaluation in this paper are based on comparing the reference signal $Z_{\mathrm{ref}}$ to the output $Z_k$ from algorithm $k$. The analysis is based on statistically monitoring and detecting changes in spatial location, scale, and level. The full list is given in Table 2.

The required length of the time series before the goodness of fit can be computed is a critical parameter. With the data sets used for testing, values in the range of 100–200 epochs were acceptable.

All algorithms compete on the QoS; the one with the highest score is selected as the winner, and the corrections from this algorithm are used. 12 locations (which can be regarded as VRS) are chosen inside the CPOS RTK network for testing, and one location is chosen randomly for each run to compute the QoS. The mathematical definitions of the QoS parameters are given in Table 2.

(i) Mean Absolute Error (MAE). MAE measures the average absolute error; ideally, this value should be as small as possible:
$$\mathrm{MAE}_k = \frac{1}{n} \sum_{i=1}^{n} \left| Z_{\mathrm{ref},i} - Z_{k,i} \right|.$$

(ii) Mean Square Error (MSE). MSE measures the average squared error; this value should also be as close to zero as possible:
$$\mathrm{MSE}_k = \frac{1}{n} \sum_{i=1}^{n} \left( Z_{\mathrm{ref},i} - Z_{k,i} \right)^{2}.$$

(iii) Root Mean Square Error (RMSE). The RMSE between the reference signal $Z_{\mathrm{ref}}$ and $Z_k$ gives the standard deviation of the algorithm prediction error; a minimum value is preferable:
$$\mathrm{RMSE}_k = \sqrt{\mathrm{MSE}_k} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( Z_{\mathrm{ref},i} - Z_{k,i} \right)^{2}}.$$

(iv) Nash-Sutcliffe Efficiency (NSE) [18]. NSE determines the relative magnitude of the residual variance compared to the observed data variance. Unity means a perfect match, zero means that the algorithm predictions are only as accurate as the mean of the observed data, and negative values imply that the observed mean is a better predictor:
$$\mathrm{NSE}_k = 1 - \frac{\sum_{i=1}^{n} \left( Z_{\mathrm{ref},i} - Z_{k,i} \right)^{2}}{\sum_{i=1}^{n} \left( Z_{\mathrm{ref},i} - \bar{Z}_{\mathrm{ref}} \right)^{2}}.$$

(v) Kling-Gupta Efficiency (KGE) [19]. KGE was developed by Gupta et al. as a goodness of fit that decomposes the NSE to facilitate the analysis of correlation, bias, and variability:
$$\mathrm{KGE}_k = 1 - \sqrt{\left(r - 1\right)^{2} + \left(\beta - 1\right)^{2} + \left(\mathrm{VR} - 1\right)^{2}}.$$

Three components are involved in the computation of this index. (a) $r$ is the Pearson product-moment correlation coefficient, which ideally should tend to unity:
$$r = \frac{\sum_{i=1}^{n} \left( Z_{\mathrm{ref},i} - \bar{Z}_{\mathrm{ref}} \right) \left( Z_{k,i} - \bar{Z}_{k} \right)}{\sqrt{\sum_{i=1}^{n} \left( Z_{\mathrm{ref},i} - \bar{Z}_{\mathrm{ref}} \right)^{2}} \sqrt{\sum_{i=1}^{n} \left( Z_{k,i} - \bar{Z}_{k} \right)^{2}}}.$$
(b) $\beta$ represents the change in location. This index is defined as the ratio between the distribution locations (means), $\beta = \bar{Z}_{k} / \bar{Z}_{\mathrm{ref}}$, and the ideal value is unity. (c) The variability ratio (VR) represents changes in scale (variances). This index is defined as the ratio between the distribution standard deviations, $\mathrm{VR} = \sigma_{k} / \sigma_{\mathrm{ref}}$, and the ideal value is again unity.

(vi) Computation Time (CT). Algorithms with high quality data and minimum CT are preferable.

(vii) Coefficient of Determination ($R^2$). $R^2 \in [0, 1]$ and gives the portion of the variance of one variable that is predictable from the other variable.

(viii) Spearman Correlation Coefficient ($\rho_s$). $\rho_s$ is a nonparametric measure of the degree of association between two variables.
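The QoS measures above translate directly into code. The compact helper below is our own sketch (the function name and dictionary keys are assumptions) evaluating MAE, MSE, RMSE, NSE, and KGE for one algorithm output against the reference signal:

```python
import numpy as np

def qos_metrics(ref, out):
    """Goodness-of-fit measures of algorithm output `out` against `ref`."""
    ref = np.asarray(ref, dtype=float)
    out = np.asarray(out, dtype=float)
    err = ref - out
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    nse = 1.0 - np.sum(err ** 2) / np.sum((ref - ref.mean()) ** 2)
    r = np.corrcoef(ref, out)[0, 1]        # Pearson correlation
    beta = out.mean() / ref.mean()         # location (bias) ratio
    vr = out.std() / ref.std()             # variability (scale) ratio
    kge = 1.0 - np.sqrt((r - 1.0) ** 2 + (beta - 1.0) ** 2 + (vr - 1.0) ** 2)
    return {'MAE': mae, 'MSE': mse, 'RMSE': rmse, 'NSE': nse, 'KGE': kge}
```

A perfect algorithm output reproduces the reference exactly, giving MAE = MSE = RMSE = 0 and NSE = KGE = 1, which is the scoring direction used when ranking the algorithms.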

8. Implementation and Analyses

Packages used in the implementation are downloaded from the Comprehensive R Archive Network (CRAN). Table 3 gives a full description of each package.

8.1. Data Preprocessing

The data used in this investigation is ionospheric data obtained from the Norwegian CPOS RTK network. The TEC values are generated using the GFA (geometry-free approach) algorithm, which proceeds in two steps. (a) From IGS [20], the global ionospheric model (GIM) is available. The ionospheric path delay and differential code biases (DCBs) for each satellite are retrieved from the IONEX (IONosphere map EXchange) format. This information is used to estimate the hardware biases of the reference stations by using the code observable. (b) The biases from the previous step are then applied to the phase measurements to compute the ionospheric delay. The procedure is described in more detail in [21].

8.2. Interpretation of Results

The test results show that ordinary kriging with the Matérn covariance function is the most appropriate choice under normal circumstances and produces a smooth solution with acceptable accuracy. The Matérn covariance function is well-behaved even for nonstationary fields and is governed by three parameters: location, scale, and shape.

Stein [17, p. 12] recommended the use of the Matérn model due to its flexibility (ability to model the smoothness of physical processes and possibility to handle nonstationarity).

The processing time of this algorithm increases as the number of observations increases. One remedy is to exclude observations that are far away from the interpolation points and use only a subset of the data for the interpolation. This approach is called local kriging.

To accelerate the convergence of OK, we have incorporated a bootstrapping algorithm on historical data to obtain good initial values. Figure 9 illustrates the concept.

One major obstacle of this algorithm is the computation of the inverse matrix in the kriging equation (25). Using a numerical approximation of the inverse matrix, the computation time improves considerably, as mentioned previously in Section 6.3.
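The point can be illustrated with a sketch: instead of forming the inverse of the covariance matrix explicitly, the kriging weights are obtained from a linear solve that can exploit sparsity. The tridiagonal matrix below is a hypothetical stand-in for a tapered covariance matrix, not CPOS data:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
n = 200

# Build a sparse, diagonally dominant SPD matrix as a covariance stand-in
A = np.diag(rng.uniform(2.0, 3.0, n))
for i in range(n - 1):                  # tridiagonal sparsity pattern
    A[i, i + 1] = A[i + 1, i] = 0.1
C = csc_matrix(A)                       # store in sparse form

c0 = rng.standard_normal(n)             # right-hand side of the kriging system

# Solve C w = c0 directly for the weights w; no explicit C^{-1} is formed,
# which is both faster and more numerically stable for large n.
w = spsolve(C, c0)
```

When the covariance structure preserves sparsity, as noted below, this kind of solve is where the memory and time savings come from.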

The WLSE (weighted least squares estimation) algorithm is preferable to maximum likelihood (ML) or restricted maximum likelihood (REML) estimation and works in most cases, regardless of the distribution of the observations. If the observations are Gaussian distributed, WLSE and ML/REML yield the same results.

We are often faced with a nonstationary process where we are interested in estimating the spatial covariance for the entire random field. Guttorp and Sampson [22] proposed a two-step approach for solving this problem, a nonparametric algorithm to estimate the spatial covariance structure for the entire random field without assuming stationarity. The interested reader is referred to [23, pp. 93–95].

When the covariance structure preserves sparsity, numerical approximation methods are preferable to all other methods, as they require less memory and computation time.

The TPS algorithm is preferred when smoothing rather than interpolating the data.

8.3. Algorithms Delay Comparison

In this subsection, the delays caused by the different algorithms are investigated. The test results are shown in Figure 7. The number of observations varied between 700 and 3000 ionospheric piercing points (IPPs). The data used for the plot are the first 255 epochs, with a resolution of 30 seconds, from a period of high ionospheric activity in the year 2013, day of year 152.

The delay caused by local methods is shown on the right of Figure 7 and is much lower compared to global methods.

The GMRF has a higher delay than both the OK and the TPS. The only challenge of TPS is to select a good smoothing parameter. The modified cross-validation, the generalized cross-validation (GCV), and the robust GCV all work well.

The IDW methods are local interpolation techniques and use only a subset of the data set to perform the interpolation. The benefit of these methods is the reduced computation time.
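A minimal sketch of such a local IDW scheme, restricting the interpolation to the k nearest observations; the function name and defaults are illustrative, not the paper's implementation:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_pred, power=2.0, k=8):
    """Inverse distance weighting using only the k nearest observations
    (a local method). Illustrative sketch only.
    """
    xy_obs = np.asarray(xy_obs, dtype=float)
    z_obs = np.asarray(z_obs, dtype=float)
    out = []
    for p in np.atleast_2d(np.asarray(xy_pred, dtype=float)):
        d = np.linalg.norm(xy_obs - p, axis=1)
        if d.min() == 0.0:               # exact hit: return the observation
            out.append(z_obs[d.argmin()])
            continue
        idx = np.argsort(d)[:k]          # local subset: k nearest points
        w = 1.0 / d[idx] ** power        # weights decay with distance
        out.append(np.sum(w * z_obs[idx]) / w.sum())
    return np.array(out)
```

Because only k points enter each prediction, the cost per prediction point is bounded regardless of the network size, which is the source of the reduced computation time.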

8.4. QoS Results

Statisticians are usually more interested in smoothing data than interpolating it. When the data is noisy, the TPS smoothing scheme works best. One major advantage of this approach is that once the curve that represents the variation in the data is constructed, we can retrieve the value at any other location without reinterpolating the data set.
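This evaluate-anywhere property can be sketched with SciPy's thin-plate-spline RBF interpolator; the station layout and TEC trend below are synthetic, and the smoothing parameter is fixed by hand rather than chosen by GCV as in the paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Hypothetical scattered values at 40 reference-station locations (km)
xy = rng.uniform(0, 100, size=(40, 2))
z_true = 2.0 + 0.03 * xy[:, 0] - 0.01 * xy[:, 1]      # smooth trend (TECU)
z_noisy = z_true + 0.05 * rng.standard_normal(40)     # add observation noise

# Thin-plate spline smoother; `smoothing` plays the role of the
# smoothing parameter the paper selects by GCV.
tps = RBFInterpolator(xy, z_noisy, kernel='thin_plate_spline', smoothing=1.0)

# Once fitted, values anywhere in the coverage area come from evaluating
# the spline - no re-interpolation of the data set is needed.
z_hat = tps(np.array([[50.0, 50.0], [10.0, 80.0]]))
```

The single fitted object replaces repeated per-point interpolation, which is the advantage described above.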

Figure 8 shows the result for an arbitrary prediction point. The reference signal is compared to the predicted values generated by the ordinary kriging algorithm with the Matérn covariance function (MCF). The computed quality of service (QoS) parameters are presented below the plot.

The results are summarized in Table 4 where the QoS parameters are provided for each of the interpolation algorithms tested. An arbitrary epoch has been picked for the test. High scores are highlighted in bold font. The result shows that the ordinary kriging has the best performance. The TPS comes in second place and is the only real competitor to the ordinary kriging for this case. As kriging has the best performance, the corrections from this algorithm will be used to generate synthetic observations of the user in the field. This comparison to determine the best interpolation algorithm is performed for each epoch.

8.5. Practical Considerations
8.5.1. Ordinary Kriging

The library geoR from CRAN (Comprehensive R Archive Network) is used to implement the spatial interpolation. In order to produce correct results, the data analyst must handle many small exceptions that may otherwise result in unexpected output.

(i) Performance Analysis. The variance of the estimated grids is analyzed; a very small variance indicates stability of the algorithm.

(ii) Parameter Estimation. To accelerate the convergence of ordinary kriging, we have incorporated a bootstrapping algorithm on historical data to obtain good estimates of the initial values. Figure 9 shows the first 255 estimates of the partial sill and the range parameter, for moderate ionospheric activity and a network configuration with 75 reference receivers.
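The idea of bootstrapping historical estimates to obtain initial values can be sketched as follows; the helper name and the use of the bootstrap median are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def bootstrap_initial_values(history, n_boot=500, seed=0):
    """Bootstrap a series of historical parameter estimates (e.g., partial
    sill or range) to obtain a robust initial value for the variogram fit.
    Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    history = np.asarray(history, dtype=float)
    # Resample the historical estimates with replacement and average each
    # resample; the median of the resample means is a robust starting point.
    means = [rng.choice(history, size=history.size, replace=True).mean()
             for _ in range(n_boot)]
    return float(np.median(means))
```

Starting the optimizer near such a value reduces the number of iterations the parameter fit needs at each epoch.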

(iii) Anisotropy. The estimated kriging weights (including negative weights) and variances are very sensitive to anisotropy. Our aim is to ensure that the spatial process does not depend on direction. The geometric anisotropy correction is applied by transforming a set of coordinates according to the geometric anisotropy parameters. The gstat package does not provide automatic fitting of the anisotropy parameters, whereas the geoR package transforms/back-transforms a set of coordinates according to the estimated geometric anisotropy parameters.

(iv) Normality Test. The MLE (maximum likelihood estimation) procedure requires that the observations are Gaussian distributed; this assumption is violated in most cases. Therefore the Jarque-Bera test, which is based on the third and fourth moments of a distribution (the skewness and kurtosis coefficients), is used to test for normality; the interested reader is referred to [24]. If the test fails, the weighted least squares method is used to estimate the parameters. Figures 10 and 11, from a configuration with 100 sites and high ionospheric activity, confirm that the L1-VTEC values are not normally distributed.
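A minimal sketch of this check, using SciPy's Jarque-Bera implementation on synthetic data (a Gaussian sample versus a clearly skewed one standing in for L1-VTEC values):

```python
import numpy as np
from scipy.stats import jarque_bera

rng = np.random.default_rng(2)
gaussian = rng.standard_normal(2000)       # satisfies the MLE assumption
skewed = rng.exponential(size=2000)        # clearly non-Gaussian

# jarque_bera returns the test statistic and its p-value; the statistic
# combines the sample skewness and (excess) kurtosis.
stat_g, p_g = jarque_bera(gaussian)
stat_s, p_s = jarque_bera(skewed)

# A small p-value rejects normality, in which case one would fall back
# to weighted least squares for the parameter estimation.
use_wls = p_s < 0.05
```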

Based on the tests and checks mentioned above, the ordinary kriging is assigned its weight when computing the QoS values.

8.5.2. Test Results

Figure 12 shows the smoothed curve generated by the TPS software with GCV. Once the curve is determined, any value inside the coverage area can be retrieved without extra computation compared to other interpolation methods. In addition, the curve describes the variation in the data very well.

Once the smoothing parameter is determined by GCV, TPS is a real competitor to the kriging algorithm, and the weight assigned to it is 0.25.

9. Conclusion

A significant improvement of the rover position estimate can be achieved by applying the right interpolation/smoothing algorithm in the NRTK interpolation segment. This reduces the information loss when predicting the user error level and provides high-quality virtual reference station data from epoch to epoch.

Five methods have been suggested to generate the rover correction. The study shows that ordinary kriging with the Matérn covariance function is the most appropriate choice for weighted spatial linear interpolation, while TPS is a strong competitor to OK when the aim is to smooth, not to interpolate, the data. When matrix sparsity tests succeed, the GMRF is computationally effective, requires less memory, and produces results as good as TPS and OK.

For local methods, the covariance structure is in general not conserved. For gently varying data, the Akima bicubic method is an appropriate choice because it is a true spline method. IDW is stable but inaccurate, and in addition it does not conserve the covariance structure of the process under study.

One major benefit of these techniques is that there is no need for any prior estimation of the spatial dependency, as in the case of Bayesian analysis (e.g., Kalman filter).

10. Discussions

(1) As we mentioned in the Introduction, processing large data sets is a challenge of the future, and our suggestion for how to handle this is formulated as follows. First of all, we already have enough mathematical tools to do the job, so we do not need to develop new ones. These tools can be considered as elementary building blocks in the hands of the data analyst/modeler. The main challenge is to know that the right tools exist, what they can do for us, what their strengths and weaknesses are, and how to combine them in appropriate ways to describe the observed variations as well as possible. Figure 13 shows the number of simultaneously connected users and the historical number of users of the CPOS services in Norway. Both curves increase exponentially over a period of one decade, and if the development continues to follow the same pattern, the existing tools will not be sufficient to process the large data sets.

(2) Data quality and quantity are important for reliable statistical analysis, and elementary checks are necessary before the analysis starts.

(3) In geodesy and geophysical data analysis, the Gauss-Markov model and the Kalman filter are often considered for modeling and state estimation. As the new navigation satellite systems (e.g., Galileo and BeiDou) become operational in addition to the older GPS and GLONASS, massive data sets need to be processed in real time, so we are experiencing a computational paradigm shift.

(4) To avoid information loss between the correction and interpolation segments, picking the right algorithm for the job is essential for improving the user position errors.


CPOS Station Characteristics

See Table 5.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

The authors would like to thank the SATREF group at the Norwegian Mapping Authority for their effort in providing the data, especially Tor O. Trondersk. Without their effort, this investigation would not have been possible. They would also like to thank Ole Ostevdal from the Norwegian University of Life Sciences for his software to generate the absolute ionospheric vertical errors used in this investigation; the software was modified to meet their computation needs. The International GNSS Service (IGS) is acknowledged for providing the geodetic infrastructure and geodetic products used in this work.