Abstract

We present a comparative study of computational methods for the estimation of ionospheric scintillation indices. First, we review the conventional approaches based on Fourier transformation and low-pass/high-pass frequency filtering. Next, we introduce a novel method based on nonparametric local regression with the bias-corrected Akaike Information Criterion (AICC). All methods are then applied to data from the Norwegian Regional Ionospheric Scintillation Network (NRISN), which is shown to be dominated by phase scintillation rather than amplitude scintillation. We find that all methods provide highly correlated results, demonstrating the validity of the new approach to this problem. All methods are shown to be very sensitive to filter characteristics and the averaging interval. Finally, we find that the new method is more robust to discontinuous phase observations than the conventional methods.

1. Introduction

Global Navigation Satellite Systems (GNSS) are based on satellite signals being transmitted to receivers on the ground. These signals must pass through the ionosphere on the way, where the variation in electron density causes undesirable fluctuations in the observed signal. This distortion, known as scintillation, can affect both the amplitude and phase of GNSS signals. The relevant electron density variations range from decameters to kilometers in size [13].

In the auroral zones, phase scintillation, rather than amplitude scintillation, is typically the dominant effect (e.g., [46]). Scintillation is a major challenge for navigation systems, as it can disturb not only single-receiver systems but also networked systems [3, 4, 7–9]. In a worst-case scenario, the user receiver can end up with a loss of signal lock, which causes discontinuities in the phase measurements; this is known as a cycle-slip.

A good understanding of the ionosphere morphology can aid the development of suitable mitigation algorithms, that is, algorithms that assign weights to observations based on the scintillation distortion of each satellite link.

To be more specific, a more realistic stochastic model for GNSS observables would have to take into account the variance caused by scintillation; this would avoid biased solutions. Today, the stochastic model includes the following: correlation among observations [10]; satellite elevation dependency modeled by an exponential function [11]; temporal and cross-correlations [12]; and multipath detection and monitoring [13]. That is, a suitable robust weighting algorithm that reduces the influence of satellites affected by scintillation would look similar to the one proposed by Euler, and it would enhance the ability to resolve the carrier-phase ambiguity and improve the stochastic model for GNSS processing.

The level of phase scintillation is most commonly gauged by the phase scintillation index σφ, which is simply defined as the standard deviation of the detrended carrier phase over some period of time. A critical step in the calculation of σφ is a frequency filter, which is used to separate the high-frequency ionospheric distortions from the low-frequency distortions due to, for example, multipath interference. The carrier phase is usually processed with a sixth-order Butterworth high-pass filter, and the scintillation indices are then calculated over one-minute intervals [14–23]. However, the estimated value of σφ can be highly sensitive to the cutoff frequency of the filter. While the most common cutoff frequency is 0.1 Hz, it has been shown that this is suboptimal at high latitudes and that higher values, such as 0.3 Hz, might yield better results [24]. The variation in the carrier phase of GNSS signals is also commonly quantified using the rate of total electron content index (ROTI) [25–27]. Several other indices for quantifying phase variations have also been proposed [24, 28–32]. It is worth noting that all of these indices are directly related to the scintillation signal variance.

Some scintillation indices use methods based on wavelet transforms for the filtering. A major advantage of wavelet filters is that they preserve local features in the signal, thus avoiding misinterpretations that can occur when using standard filter approaches [24, 33–36]. While such wavelet filters may yield better results than conventional filters, the computational load can become very high, especially when processing data with high sampling rates (50+ Hz). The improvement that can be achieved by substituting a wavelet filter for the conventional filters must therefore be weighed against the computational load, especially if the algorithm is intended for real-time applications.

The primary focus of this paper is on nonparametric local regression with the bias-corrected Akaike Information Criterion (AICC) as a viable alternative for the computation of reliable scintillation indices. In addition, we perform a statistical analysis of the scintillation indices and show that they can be well described by the Nakagami and Frechet distributions. The paper is organized as follows.

Section 2 gives a detailed description of the data sets used in this paper. Section 3 deals with the construction of the digital filters: low-pass filter algorithms for the amplitude scintillation index computation and high-pass filter algorithms for the phase scintillation computation. The filter types discussed are the Butterworth, Chebyshev, and Elliptic filters, and these algorithms are used to validate the new approach. Section 4 covers nonparametric regression with kernel smoothing. Section 5 presents how the scintillation indices are computed by the conventional methods and by the new approach. Section 6 contains the statistical analysis of the scintillation indices, including distributions, sensitivity, and correlation analysis of the implemented algorithms. Section 7 presents the implementation and a general analysis building on the preceding sections, and Section 8 concludes and discusses applications of the results.

2. Test Data

The data used for the investigation herein were obtained from three different NRISN observation sites in the northernmost part of Norway. These sites are highlighted with an italic typeface in Table 1 and encircled in Figure 1. The observational data are taken from day 50 of year 2014, which according to the I95 index corresponds to relatively high ionospheric activity. (The I95 index [37] is used for the classification of ionospheric activity by the CPOS software; for more information about the ionospheric activity levels, see Figures 15 and 16.) The NRISN sites are equipped with dual-frequency Septentrio PolaRxS receivers tracking the GPS and GLONASS L1 and L2 frequencies with a sampling rate of 100 Hz, and each receiver is capable of tracking up to 12 satellites per system. The baselines vary between 176 and 1988 km; the baseline lengths and the height differences between the sites above the WGS84 ellipsoid are given in Tables 2 and 3.

To accurately determine the relative performance of the various algorithms, it is important to test different values of the filter parameters. This also allows us to determine the optimal filter parameters for the computation of the amplitude and phase variances of GNSS signals. The relevant parameters are the satellite elevation angle, the cutoff frequency, and the averaging period. The algorithms were tested with a range of cutoff frequencies for the signal amplitude and for the phase, and with averaging intervals of 30, 60, and 90 seconds.

3. Digital Filters Construction

To study the influence of scintillation on GNSS signals, the standard deviations of the amplitude fluctuations and the phase fluctuations must be computed. To achieve this goal, digital filters that satisfy our requirements have to be constructed: a suitable low-pass filter to compute the amplitude scintillation index S4 and a high-pass filter to compute the phase scintillation index σφ.

Digital Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filters are chosen and used to construct sixth-order Butterworth, Chebyshev, and Elliptic low-pass filters. Further details on the design and implementation of digital FIR/IIR filters are given in [38–41]. Once the filter specifications have been determined, the next step is to compute the corresponding filter coefficients in the frequency domain. This is usually done with the computationally efficient Parks-McClellan algorithm; a detailed description of the algorithm is beyond the scope of this paper, and the interested reader is referred to [41]. Since an ideal low-pass filter cannot be realized, approximations are used. In this paper, three different approximations are considered, namely, the Butterworth, Chebyshev, and Elliptic filters.

3.1. Cutoff Frequency Determination

A raw GNSS scintillation signal is a nonstationary signal that is affected by the Doppler shift caused by the satellite-receiver relative motion (the trend), by the slowly varying background ionosphere, and by scintillation. In order to study the scintillation, trends and slow fluctuations in the raw signal must be removed; that is, any signal components that do not originate from scintillation must be excluded. There are many methods for carrying out this detrending operation, for example, linear regression (fitting a straight line to the original data) or the ratio and difference detrending methods. Detrending is the statistical operation performed to remove the long-term change in the mean function and is equivalent to a high-pass filter: the variance at low frequencies is suppressed relative to the variance at high frequencies.

In general, the data detrending is a preprocessing step to achieve stationarity of the observed signal. The next step is then to analyze the signal statistically, by, for example, computing the first and second moments of the signal, from which the phase scintillation index σφ and the amplitude scintillation index S4 are derived. To perform a proper detrending, the cutoff frequency must be selected properly. The most important part of the scintillation spectrum lies around the Fresnel frequency; it also extends to higher frequencies, but with decreasing power per bandwidth. The Fresnel frequency is [3, 30]

$$ f_F = \frac{v}{\sqrt{2\lambda z}}, $$

where v is the relative velocity between the satellite and the ionosphere, λ is the wavelength of the signal (~19 cm), and z denotes the distance between the receiver and the ionosphere (~350 km at 90 degrees elevation). Ideally, the cutoff frequency should be slightly below the Fresnel frequency but still high enough to remove the unwanted signal components. It should also be a constant, so that the scintillation values are comparable. For GNSS scintillation data, a Butterworth filter with a constant cutoff frequency of 0.1 Hz is the most common approach, but it has been shown that this value is inappropriate at high latitudes [16, 24].
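As a quick illustration of this rule of thumb, the short R sketch below evaluates the Fresnel frequency; the drift velocity passed in the example call is an arbitrary illustrative value, not a quantity measured in this study.

```r
## Minimal sketch: Fresnel frequency f_F = v / sqrt(2 * lambda * z).
fresnel_freq <- function(v, lambda = 0.19, z = 350e3) {
  # v:      relative velocity between satellite signal path and irregularities [m/s]
  # lambda: signal wavelength [m] (~19 cm for GPS L1)
  # z:      distance from receiver to the ionospheric layer [m] (~350 km at zenith)
  v / sqrt(2 * lambda * z)
}
fresnel_freq(v = 100)   # ~0.27 Hz, i.e. above the common 0.1 Hz cutoff
```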

3.2. Butterworth Approximation

The high-order Butterworth approximation [40, p. 264] satisfies our needs, as the filter has a maximally flat passband (no ripple) and an acceptable transition band. Due to these characteristics, the sixth-order Butterworth filter is the most used filter in the computation of the scintillation indices S4 and σφ. The cutoff frequency is a user-defined parameter, by default set to 0.1 Hz for low-pass filters and 0.3 Hz for high-pass filters. The derivation of such indices is usually carried out using Algorithm 1, that is, fast convolution using a fast Fourier transform (FFT) and multiplication in the frequency domain. For each Fourier coefficient, the magnitude of the frequency response of an nth-order Butterworth filter is given by the following equation:

$$ |H_n(f)|^2 = \frac{1}{1 + (f/f_c)^{\pm 2n}}. \tag{1} $$

Note the ± sign in the exponent, which toggles whether the filter exhibits a low-pass or high-pass behavior. Also, the filter gain has been normalized to unity in the region with minimum attenuation, that is, as f → 0 for the low-pass filter or f → ∞ for the high-pass filter.
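For reference, the following R sketch evaluates the Butterworth magnitude response of equation (1); the function name and the plotted frequency grid are illustrative choices, not part of the processing chain described in the paper.

```r
## Sketch only: magnitude response of an n-th order Butterworth filter, eq. (1).
butterworth_gain <- function(f, fc, n = 6, type = c("low", "high")) {
  type <- match.arg(type)
  expo <- if (type == "low") 2 * n else -2 * n   # sign of the exponent toggles low-/high-pass
  1 / sqrt(1 + (f / fc)^expo)
}
f <- seq(0.01, 5, by = 0.01)                      # frequencies [Hz]
plot(f, butterworth_gain(f, fc = 0.1, type = "high"), type = "l",
     xlab = "Frequency [Hz]", ylab = "|H(f)|",
     main = "Sixth-order Butterworth high-pass, fc = 0.1 Hz")
```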

3.3. Chebyshev Approximation

The Chebyshev approximation [41, p. 27] has a passband ripple that can be controlled and a narrow transition band. Based on these qualities, it is chosen to approximate the ideal low-pass filter within the user-specified parameters. Unlike the Butterworth case, the Chebyshev filter has ripples in the passband and a rapid transition, and it is able to satisfy a given specification with a lower order than the Butterworth filter. Its phase response, however, is not as linear as in the Butterworth case.

The magnitude of the frequency response of an nth-order Chebyshev filter is

$$ |H_n(f)|^2 = \frac{1}{1 + \varepsilon^2\, T_n^2(f/f_c)}, \tag{2} $$

where ε determines the ripple magnitude and is less than 1, and T_n is a Chebyshev polynomial given by the expression

$$ T_n(x) = \begin{cases} \cos(n \arccos x), & |x| \le 1, \\ \cosh(n\, \mathrm{arccosh}\, x), & |x| > 1. \end{cases} \tag{3} $$

3.4. Elliptic Approximation

The Elliptic filters [40, p. 275], also known as Cauer filters, yield smaller transition bandwidths than Butterworth or Chebyshev filters of any given order but are equiripple in both the passband and the stopband. In general, Elliptic filters meet a given set of performance specifications with the lowest order of any filter type.

For Elliptic filters, the normalized cutoff frequency is a number between 0 and 1, where 1 corresponds to half the sampling frequency (the Nyquist frequency). The filter has a smaller transition band than the two approximations described so far but is analytically very complicated.

The magnitude of the frequency response of an nth-order Elliptic filter is

$$ |H_n(f)|^2 = \frac{1}{1 + \varepsilon^2\, R_n^2(f/f_c)}, \tag{4} $$

where ε controls the deviation in the passband and R_n is the Jacobian Elliptic function [42, pp. 567–588] of order n.

3.5. Implementation

In this study, we have implemented digital filters of both the FIR and IIR types. The frequency responses of all three filter types are shown in Figures 2 and 3, which illustrate the differences between the Butterworth, Elliptic, and Chebyshev designs.
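To illustrate how such sixth-order designs can be produced with the CRAN signal package listed in Appendix C, the sketch below constructs Butterworth, Chebyshev, and Elliptic high-pass filters; the ripple and attenuation values are placeholder assumptions, not the settings used in this study.

```r
## Sketch of sixth-order IIR designs using the CRAN "signal" package.
library(signal)
fs <- 100                       # sampling rate [Hz]
fc <- 0.1                       # cutoff frequency [Hz]
W  <- fc / (fs / 2)             # normalized cutoff (1 = Nyquist frequency)

bw <- butter(6, W, type = "high")          # Butterworth high-pass
ch <- cheby1(6, 0.5, W, type = "high")     # Chebyshev type I, 0.5 dB passband ripple (assumed)
el <- ellip(6, 0.5, 20, W, type = "high")  # Elliptic, 0.5 dB ripple, 20 dB stopband (assumed)

x <- rnorm(6000)                           # placeholder signal: 1 minute at 100 Hz
y <- filtfilt(bw, x)                       # zero-phase filtering with the Butterworth design
```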

4. Nonparametric Regression

In contrast to other data modeling methods, for instance, parametric and semiparametric regression, nonparametric regression makes no assumption about the shape of the underlying function. This class of models lets the data speak for themselves, and its flexibility opens the way for new models.

Let us now suppose that we have a vector of observations x_1, ..., x_n sampled from an unknown density function f and that our aim is to estimate f and display it graphically. The kernel density estimator is then defined as

$$ \hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right), \tag{5} $$

where h is the smoothing parameter (bandwidth) and K is the kernel function. This kernel cannot be arbitrarily chosen but needs to satisfy three requirements: it must be nonnegative, normalized, and symmetric around the origin.

There are many different kernel functions that satisfy these requirements, for example, the uniform, quartic, Gaussian, and Epanechnikov kernels. In practice, the precise choice of kernel function is less important than the choice of the smoothing parameter h.
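The following minimal R sketch implements equation (5) with a Gaussian kernel and compares it with the built-in density() estimator; the sample and the rule-of-thumb bandwidth are illustrative only.

```r
## Minimal sketch of eq. (5) with a Gaussian kernel, compared with stats::density().
kde <- function(x, data, h) {
  sapply(x, function(x0) mean(dnorm((x0 - data) / h)) / h)   # (1/nh) sum K((x - xi)/h)
}
set.seed(1)
obs  <- rnorm(500)                      # synthetic observations
grid <- seq(-4, 4, length.out = 200)
h    <- bw.nrd0(obs)                    # Silverman's rule-of-thumb bandwidth
plot(grid, kde(grid, obs, h), type = "l", xlab = "x", ylab = "density")
lines(density(obs, bw = h), lty = 2)    # built-in estimator for comparison
```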

4.1. MSE and MISE

We wish to investigate the performance of the kernel density estimator (KDE) at a single point or over the whole real line and find out how close our estimator is to its target. The Mean Square Error (MSE) and the Mean Integrated Square Error (MISE) can be used to measure the performance or efficiency of nonparametric methods.

The MSE when estimating f(x) using the estimator f̂_h(x) at a point x is given by

$$ \mathrm{MSE}\{\hat{f}_h(x)\} = E\big[(\hat{f}_h(x) - f(x))^2\big] = \mathrm{Var}\{\hat{f}_h(x)\} + \mathrm{Bias}^2\{\hat{f}_h(x)\}. $$

We see that the MSE can be decomposed into two parts, namely, a variance term and a squared bias term; in contrast to parametric models, the bias cannot be ignored. The MISE is defined as

$$ \mathrm{MISE}\{\hat{f}_h\} = E \int \big(\hat{f}_h(x) - f(x)\big)^2\, dx. $$

4.2. Asymptotic Statistical Properties of KDE

In this section, we will derive asymptotic approximations for the bias and variance of the KDE. These terms are necessary to derive the optimal smoothing parameter h. For these derivations, it is useful to define the second moment and the squared L2 norm of the kernel:

$$ \mu_2(K) = \int x^2 K(x)\, dx, \qquad \|K\|_2^2 = \int K(x)^2\, dx. $$

4.2.1. Derivation of Bias Approximation

We will start by deriving an expression for the expectation value of the kernel density estimator:

$$ E\big[\hat{f}_h(x)\big] = \frac{1}{h} \int K\!\left(\frac{x - u}{h}\right) f(u)\, du = \int K(t)\, f(x - ht)\, dt. $$

The approximation of E[f̂_h(x)] is obtained by using the Taylor expansion of f(x − ht) up to the second-order term and letting h → 0; the first-order term vanishes by the symmetry of the kernel. By the definition of bias, this result yields

$$ \mathrm{Bias}\{\hat{f}_h(x)\} = E\big[\hat{f}_h(x)\big] - f(x) \approx \frac{h^2}{2}\, \mu_2(K)\, f''(x). $$

Some facts about the bias of f̂_h are as follows:
(1) The bias is of order h²; larger values of h will characterize the process as oversmoothing.
(2) The sign and the direction of the bias are decided by f''(x).
(3) If we include more terms in the Taylor series expansion, we will reduce the error term.

4.2.2. Derivation of Variance Approximation

The variance of the stochastic variable f̂_h(x) is defined as Var{f̂_h(x)} = E[f̂_h(x)²] − (E[f̂_h(x)])². Applying the Taylor expansion of f and using the symmetry of the kernel, we obtain

$$ \mathrm{Var}\{\hat{f}_h(x)\} \approx \frac{f(x)\, \|K\|_2^2}{nh}. $$

Some facts about the variance of f̂_h are as follows:
(1) The variance is of order (nh)⁻¹; small values of h will characterize the process as undersmoothing.
(2) nh gives the effective number of observations inside the processing window.
(3) The variance increases with the magnitude of f(x).

4.2.3. Bandwidth Selection of KDE

As mentioned earlier, the bandwidth selection is more important than the choice of the kernel. This section is devoted to computing the optimal smoothing parameter h. We start by inserting the asymptotic bias and variance approximations derived above into the MISE; for h → 0 and nh → ∞, the Asymptotic Mean Integrated Square Error (AMISE) reads

$$ \mathrm{AMISE}(h) = \frac{\|K\|_2^2}{nh} + \frac{h^4}{4}\, \mu_2(K)^2\, \|f''\|_2^2. $$

The exact computation of the MISE is given in Wand and Jones [43, p. 24].

The optimal smoothing parameter is found by minimizing the AMISE with respect to h: setting the derivative equal to zero and solving, we get

$$ h_{\mathrm{opt}} = \left[ \frac{\|K\|_2^2}{\mu_2(K)^2\, \|f''\|_2^2\, n} \right]^{1/5}. $$

Inserting this expression for h_opt back into the AMISE, we get

$$ \mathrm{AMISE}(h_{\mathrm{opt}}) = \frac{5}{4} \left[ \mu_2(K)^2\, \|K\|_2^8\, \|f''\|_2^2 \right]^{1/5} n^{-4/5}. $$

Several smoothing parameter selection procedures exist, for instance, plug-in methods (Section 4.2.3), focused information criteria (FIC) [44, p. 145], cross-validation, penalizing functions, and the Akaike Information Criterion (AIC). The interested reader is referred to [45, Chap. 5].
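To illustrate how much different selectors can disagree, the sketch below compares a few bandwidth selectors shipped with base R on the same synthetic sample; these are examples of the plug-in and cross-validation families mentioned above, not the AIC/AICC selection used later in the paper.

```r
## Sketch: comparing bandwidth selectors available in base R on one synthetic sample.
set.seed(2)
x <- c(rnorm(300), rnorm(150, mean = 3))    # synthetic bimodal sample
c(rule_of_thumb  = bw.nrd0(x),              # Silverman's plug-in rule
  unbiased_cv    = bw.ucv(x),               # least-squares cross-validation
  sheather_jones = bw.SJ(x))                # Sheather-Jones plug-in
plot(density(x, bw = bw.SJ(x)), main = "KDE with Sheather-Jones bandwidth")
```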

4.3. Kernel Methods for Nonparametric Regression

In this section, we put all the pieces together to construct a class of kernel-type regression estimators known as local polynomial kernel estimators. The main idea is to estimate the regression function at a particular point x by locally fitting a pth-degree polynomial to the data using weighted least squares. The weights are chosen according to the height of the kernel function centered about the point x.

The steps needed to derive the expression of the local polynomial kernel estimator are as follows (a code sketch follows at the end of this subsection).
(i) Local polynomial definition: let p be the order of the polynomial we fit at the point x to estimate the regression function m(x):
$$ m(u) \approx \beta_0 + \beta_1 (u - x) + \cdots + \beta_p (u - x)^p. $$
(ii) Weights definition: the weights are given by the kernel density function
$$ w_i = K_h(x_i - x) = \frac{1}{h} K\!\left(\frac{x_i - x}{h}\right). $$
(iii) Weighted least squares: the value of the estimate at the point x is m̂(x) = β̂_0, where β̂ = (β̂_0, ..., β̂_p)ᵀ minimizes the expression
$$ \sum_{i=1}^{n} w_i \Big[ y_i - \sum_{j=0}^{p} \beta_j (x_i - x)^j \Big]^2. $$
The weighted least squares solution in matrix form reads β̂ = (XᵀWX)⁻¹XᵀWy, where y is the vector of responses. The design matrix reads
$$ X = \begin{pmatrix} 1 & (x_1 - x) & \cdots & (x_1 - x)^p \\ \vdots & \vdots & & \vdots \\ 1 & (x_n - x) & \cdots & (x_n - x)^p \end{pmatrix}, $$
and the weight matrix W is the diagonal matrix of the weights w_i. The value of the estimate at the point x is the intercept coefficient of the local fit:
$$ \hat{m}(x) = e_1^{\mathsf{T}} (X^{\mathsf{T}} W X)^{-1} X^{\mathsf{T}} W y, $$
where e_1 = (1, 0, ..., 0)ᵀ is a vector of dimension (p + 1).
Some important remarks are as follows.
(1) For p = 0, we fit a constant function locally, and this estimator is known as the Nadaraya-Watson kernel estimator:
$$ \hat{m}(x) = \frac{\sum_{i} K_h(x_i - x)\, y_i}{\sum_{i} K_h(x_i - x)}. $$
(2) For ordered, fixed design points, a closely related estimator is the Priestley-Chao kernel estimator
$$ \hat{m}(x) = \frac{1}{h} \sum_{i} (x_i - x_{i-1})\, K\!\left(\frac{x - x_i}{h}\right) y_i. $$
(3) Final product: the key is the optimal smoothing parameter h. To compute the phase scintillation index σφ, the steps are as follows:
(i) Find the smoothing parameter h. In this study, we have employed the AIC and the bias-corrected AIC (AICC) for smoothing parameter selection [46, Chap. 3]. This operation is repeated for each computation of σφ.
(ii) Compute the weights based on the height of the chosen kernel.
(iii) Locally fit a pth-degree polynomial to the data by using the weighted least squares technique.
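A compact R sketch of the local polynomial estimator described above is given below; it assumes a Gaussian kernel and a fixed bandwidth, whereas the paper selects the bandwidth by AIC/AICC.

```r
## Sketch of a degree-p local polynomial kernel estimator with a Gaussian kernel and fixed bandwidth h.
local_poly <- function(x0, x, y, h, p = 1) {
  w <- dnorm((x - x0) / h)                          # kernel weights centred at x0
  X <- outer(x - x0, 0:p, `^`)                      # design matrix [1, (x - x0), ..., (x - x0)^p]
  beta <- solve(t(X) %*% (w * X), t(X) %*% (w * y)) # weighted least squares
  beta[1]                                           # intercept = estimate of m(x0)
}
set.seed(3)
x <- seq(0, 1, length.out = 200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.2)
fit <- sapply(x, local_poly, x = x, y = y, h = 0.05, p = 1)
plot(x, y); lines(x, fit, lwd = 2)
```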

4.4. Analytical Analysis of KDE

We will now derive an analytical relationship between the conventional filters and the new approach based on kernel smoothing. With the conventional approach, we apply a Fourier transform to the function g(t), multiply it by the filter transfer function H(f) in the frequency domain, and then apply the inverse Fourier transform to bring the filtered function back to its original domain. According to the convolution theorem of Fourier analysis, this can equivalently be written as a convolution of the function with the inverse Fourier transform of the filter transfer function:

$$ g_{\mathrm{filt}}(t) = \mathcal{F}^{-1}\{ H \cdot \mathcal{F}\{g\} \}(t) = (k * g)(t), \qquad k(t) = \mathcal{F}^{-1}\{H\}(t). \tag{26} $$

For simplicity, we will limit the analysis to a first-order low-pass filter of the Butterworth type, as obtained by setting n = 1 and choosing a positive exponent in (1):

$$ |H(f)| = \frac{1}{\sqrt{1 + (f/f_c)^2}}. $$

Taking the inverse Fourier transform of this, we get

$$ k(t) \propto K_0(2\pi f_c |t|), $$

where K_0 is the modified Bessel function of the second kind. Substituting back into (26) and writing the convolution out explicitly, we find that the first-order Butterworth filter is equivalent to a kernel smoothing using a Bessel function as the kernel:

$$ g_{\mathrm{filt}}(t) \propto \int K_0\big(2\pi f_c |t - \tau|\big)\, g(\tau)\, d\tau. \tag{30} $$

From this result, we see that the cutoff frequency of the Butterworth filter plays the role of the (inverse) smoothing parameter h in KDE equation (5). The modified Bessel function of the second kind can be written in integral form as

$$ K_0(x) = \int_0^{\infty} e^{-x \cosh t}\, dt. $$

It is also worth noting that

$$ \int_{-\infty}^{\infty} K_0(|x|)\, dx = \pi, $$

which means that, after normalization, the Bessel kernel is a proper density function, but with a logarithmic pole at the origin. We have thus established a direct analytical link between the conventional Butterworth filter approach and the new kernel smoothing approach. In this light, the nonparametric filter approach can be considered a generalization of the conventional filters to other kernels, with the conventional filters as a special case.
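The normalization claimed above can be checked numerically; the sketch below uses the constant 2 f_c implied by the derivation (our reading of the proportionality, not stated explicitly in the paper) so that the Bessel kernel has unit area, i.e. unit DC gain.

```r
## Numerical sketch of the Bessel-kernel link: k(t) = 2*fc*K0(2*pi*fc*|t|) integrates to one,
## so it is a proper smoothing kernel despite the logarithmic pole at t = 0.
fc <- 0.1
bessel_kernel <- function(t) 2 * fc * besselK(2 * pi * fc * abs(t), nu = 0)
2 * integrate(bessel_kernel, lower = 0, upper = Inf)$value   # ~1 (symmetric kernel)
# The kernel scale is ~1/(2*pi*fc), which is the smoothing-parameter analogue h ~ 1/fc.
```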

4.4.1. Higher Order Approximation

For higher filter orders n, the computation of the inverse Fourier transforms of (1), (2), and (4) becomes analytically very complex. For future work, numerical methods and Monte Carlo simulation may be suitable techniques for analyzing the relationship between the conventional filters and the new approach based on nonparametric local regression.

4.4.2. Scintillation Cutoff Frequency Determination

We have established the link between the optimal smoothing parameter h and the cutoff frequency f_c of the associated filter, equations (30) and (5). As a result, we can define a filter link function relating the two parameters. The relation between h and f_c is given by the following equation:

$$ h = \frac{\kappa}{f_c}, $$

where κ is a proportionality constant.

5. Scintillation Indices Computation

This section presents the methods used to compute the amplitude scintillation index S4 and the phase scintillation index σφ, that is, to suppress all contributions to the signal variance except the scintillation effects. Following the mathematical formulation and notation suggested by Kaplan et al. [47, Chap. 6.4], a simplified model for a received signal in the absence of scintillation is

$$ y(t) = \sqrt{2P}\, s(t)\cos(2\pi f t + \theta) + n(t), $$

where P is the received signal power, y(t) is the observed signal, s(t) is the normalized transmitted signal, n(t) is the signal noise, t is the time, f is the carrier frequency, and θ is a phase offset. Scintillation causes a perturbation of both the received signal amplitude and the phase. Thus, in the presence of scintillation, the model may be extended as

$$ y(t) = \sqrt{2P}\, \delta A\, s(t)\cos(2\pi f t + \theta + \delta\phi) + n(t), \tag{35} $$

where δA is a positive number that parametrizes the signal attenuation, while δφ represents fluctuations in the phase offset.

Figure 4 shows the steps necessary to compute the indices discussed above.

The first step in the processing chain is the data cleaning, which includes detecting and pruning outliers and handling discontinuities caused by cycle-slips and missing observations in the data set. Next, we detrend the data set: we construct a time series of the received signal, estimate the trend, and remove it from the signal. This removes disturbances due to, for example, the Doppler shift from the relative motion of satellite and receiver. We then pass the signal through either a high-pass or a low-pass filter in order to isolate the scintillation components. The last step is the computation of the scintillation indices themselves.

5.1. Filtering Techniques

Using digital filters to extract specific signal components is an important technique in many practical applications. For instance, we need the low-frequency components of the signal strength to calculate the amplitude scintillation index S4 and the high-frequency components of the signal phase to calculate σφ.

There is a variety of methods that can be used to design digital filters. One commonly used method is the bilinear transform, which takes the analog filter approximation functions that have been developed and translates the filter coefficients so that they become usable for discrete systems, yielding an IIR filter. That is, the output of the filter depends on previous values of the output as well as on past and current values of the input. Such a process is known as an autoregressive moving average (ARMA) process.

Other methods use the frequency response of the desired filter to directly determine the digital filter coefficients. These are known as Finite Impulse Response (FIR) filters and are implemented as moving average filters. Once the FIR filter coefficients have been determined, the output signal is simply the convolution of the input signal with the filter coefficients.

One computationally efficient way to implement this approach is to perform the convolution as a multiplication in the frequency domain: the input signal is transformed with a fast Fourier transform (FFT), multiplied by the filter transfer function, and transformed back with the inverse FFT. This is described by Algorithm 1, which assumes that the filter coefficients have already been determined.

Algorithm 1 (filtering with FFT convolution).
(1) Apply the Fourier transform to the filter coefficients.
(2) Apply the Fourier transform to the input signal.
(3) Multiply the two complex sequences elementwise.
(4) Apply the inverse Fourier transform to bring the filtered signal back to its original domain.
(5) Normalization: the inverse transform can be computed from the forward FFT by complex conjugation, and the resulting output signal has to be scaled down by the length of the signal.
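A compact R version of this procedure is sketched below; for brevity it applies the sixth-order Butterworth magnitude response of equation (1) directly in the frequency domain (a zero-phase variant) instead of transforming a set of time-domain filter coefficients, so it should be read as an illustration of the idea rather than the exact implementation used in the paper.

```r
## Sketch of Algorithm 1: frequency-domain filtering with the Butterworth response of eq. (1).
fft_filter <- function(x, fs, fc, n = 6, type = c("high", "low")) {
  type <- match.arg(type)
  N <- length(x)
  f <- (0:(N - 1)) / N * fs                  # DFT bin frequencies
  f <- pmin(f, fs - f)                       # fold so negative-frequency bins get the same gain
  expo <- if (type == "low") 2 * n else -2 * n
  H <- 1 / sqrt(1 + (f / fc)^expo)           # Butterworth magnitude response
  Re(fft(fft(x) * H, inverse = TRUE)) / N    # multiply, back-transform, rescale by signal length
}
# Hypothetical usage on a 100 Hz detrended phase record 'phi':
# dphi <- fft_filter(phi, fs = 100, fc = 0.3, type = "high")
```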

5.2. Computation of the Amplitude Scintillation Index

The amplitude scintillation index S4 is defined as the standard deviation of the signal power normalized to its mean value over the interval of interest, usually a 1-minute interval.

In order to compute this index, the necessary steps are as follows (a code sketch follows the list).
(I) Computation of the total amplitude index S4T: the index is defined by the expression
$$ S_{4T}^2 = \frac{\langle SI^2 \rangle - \langle SI \rangle^2}{\langle SI \rangle^2}, \tag{40} $$
where SI is the detrended power (the satellite signal intensity) and ⟨·⟩ denotes the average over the evaluation interval.
(II) Correction of S4T by removing the ambient noise: S4T can have large values due to the ambient noise, and this contribution must be removed. The removal is done by estimating the average carrier-to-noise density ratio over the entire evaluation interval and using the estimate to compute the S4 contribution due to the ambient noise. If the signal-to-noise density S/N0 is known, the predicted S4 due to the ambient noise [48] is given by the expression
$$ S_{4,N_0}^2 = \frac{100}{S/N_0}\left[ 1 + \frac{500}{19\, S/N_0} \right]. $$
Replacing S/N0 with the 60-second estimate, we obtain an estimate of the S4 due to the noise. Subtracting the square of this value from the square of (40) yields the revised estimate of S4:
$$ S_4 = \sqrt{S_{4T}^2 - S_{4,N_0}^2}. $$
(III) Detrending the signal to achieve stationarity: detrending the raw signal intensity is accomplished by dividing the original time series by the low-pass filtered one; the effect is the removal of the low-frequency variance. The ratio method is attractive because it is dimensionless; however, it cannot be used if the data contain negative values, and it can also become problematic if the fitted trend crosses zero. Detrending by the difference method also produces stationarity of the signal under study: the first difference eliminates a linear trend, the second difference eliminates a quadratic trend, and so on. In this study, both methods are used and they produce almost the same result; the first-difference method has a slightly higher range than the ratio method. In order to classify the scintillation activity indicated by S4, four categories [49] are defined, ranging from quiet through moderate and disturbed to severe.
(IV) Practical considerations: a full understanding of the difference between S/N and C/N0 is necessary and useful for users of GNSS receivers. S/N is usually expressed in dB and refers to the ratio of the signal power to the noise power in the given bandwidth, while C/N0 is expressed in dB-Hz and refers to the ratio of the carrier power to the noise power per unit bandwidth. The signal strength indicator measured by the receiver is C/N0, the carrier-to-noise density measured in dB-Hz, whereas the computation of S4 uses the signal-to-noise density S/N0. The relation between these two quantities [50] is given by the expression
$$ S/N_0 = 10^{\,(C/N_0\,[\mathrm{dB\text{-}Hz}])/10}. $$
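The steps above can be condensed into a few lines of R; the sketch below treats one 60-second block, uses a simplified detrending (division by the block mean rather than a low-pass trend), and assumes that C/N0 is available in dB-Hz.

```r
## Sketch of the S4 computation for one 60 s block of 100 Hz intensity samples.
s4_index <- function(SI, cn0_dbhz) {
  SI_detr <- SI / mean(SI)                          # simplified ratio detrending (block mean)
  s4t2    <- (mean(SI_detr^2) - mean(SI_detr)^2) / mean(SI_detr)^2   # eq. (40)
  sn0     <- 10^(mean(cn0_dbhz) / 10)               # S/N0 in linear units [Hz]
  s4n2    <- (100 / sn0) * (1 + 500 / (19 * sn0))   # predicted S4^2 from ambient noise
  sqrt(pmax(s4t2 - s4n2, 0))                        # noise-corrected S4, floored at zero
}
```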

5.3. Computation of the Phase Scintillation Index σφ

Computation of the phase scintillation index σφ is straightforward, as the procedure is very similar to the computation of S4. The main difference lies in the data filtering: for the high-pass filtering, a Butterworth filter with a cutoff frequency of 0.3 Hz is usually used, as mentioned in Section 3.2.

For typical FIR, IIR, and FFT convolution, the filter transfer function given by (1) is used. The sign in the exponent is negative, which results in the high-pass behavior. Detrending is a necessary step before carrying out the filtering. This is accomplished by any method that is able to produce a stationary time series. In this study, the ratio method is used for data detrending. The next step is filtering. Algorithm 1 described previously is used.

For the case of the new approach, the details are given in Section 4.3 and are summarized as follows (a code sketch is given after the list):
(I) Preprocessing: handle outliers and missing observations and perform cycle-slip detection and repair.
(II) Choose a kernel density function to compute the weights.
(III) Use model selection with AIC/AICC to determine the optimal smoothing parameter h; this is the key quantity.
(IV) Compute the weights based on the height of the chosen kernel centered about the point of interest.
(V) Locally fit a pth-degree polynomial to the data by using the weighted least squares technique.
(VI) Compute σφ by taking the standard deviation of the residuals.
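The sketch below illustrates this σφ pipeline using base R's loess as the local polynomial fitter and a grid search over the span with the Hurvich-Simonoff-Tsai AICc; the span grid, the synthetic phase record, and the function names are illustrative assumptions, not the paper's implementation (which uses the locfit and fANCOVA packages, see Appendix C).

```r
## Sketch: detrend the carrier phase with a local (loess) fit whose span is chosen by AICc,
## then take the standard deviation of the residuals as sigma_phi for the block.
aicc_loess <- function(span, t, y) {
  fit <- loess(y ~ t, span = span, degree = 2)
  n <- length(y); tr <- fit$trace.hat
  log(mean(fit$residuals^2)) + 1 + 2 * (tr + 1) / (n - tr - 2)   # Hurvich-Simonoff-Tsai AICc
}
sigma_phi_local <- function(t, phi, spans = seq(0.05, 0.5, by = 0.05)) {
  best <- spans[which.min(sapply(spans, aicc_loess, t = t, y = phi))]
  fit  <- loess(phi ~ t, span = best, degree = 2)
  sd(fit$residuals)                                 # phase scintillation index for this block [rad]
}

## Example on a synthetic one-minute, 100 Hz phase record with a slow trend:
t   <- seq(0, 60, by = 0.01)
phi <- 0.02 * t + 0.5 * sin(2 * pi * t / 60) + rnorm(length(t), sd = 0.05)
sigma_phi_local(t, phi)                             # ~0.05 rad if the trend is removed correctly
```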

6. Statistical Analysis

The main objective of this section is to determine the distributions of the phase and amplitude scintillation indices (σφ and S4) and to carry out a correlation and sensitivity analysis of the implemented filtering algorithms. Focus is directed to the new approach, the nonparametric local regression with smoothing parameter selection. Outlier detection and repair, discontinuities, and missing observations are handled during data preparation.

6.1. Statistical Distribution of the Scintillation Index S4

The power fluctuations of the signal given in (35) are generally modeled by a Nakagami m-distribution. The probability density function (PDF) of this distribution is given by (B.1), with a mean value of one and variance S4² for the normalized intensity.

Due to the characteristics of the Nakagami distribution, S4 will not exceed √2 ≈ 1.4 [47, p. 296]. Figure 5 shows the computed S4 index for an arbitrary GPS satellite, and Figure 6 shows the distribution of S4.

6.2. Distribution of σφ

This subsection is devoted to the distribution of the phase scintillation index σφ. Figure 7 shows the computed phase scintillation index for a GPS satellite. Clearly, a heavy-tailed distribution family is the appropriate choice; here, we found that the Frechet distribution, equation (B.5), models the empirical data very well.

The motivations for considering the log-normal distribution as an approximation model for the distribution of σφ are as follows. First, the log-normal distribution is a positive real-valued distribution with a heavy tail that can describe the presence of extreme variability in the data. Second, the distribution is simple and practical and provides a very good fit to the data. Finally, its variance exhibits scaling behavior.

Figure 8 shows the distribution of phase scintillation indices for an arbitrary GLONASS satellite, PRN 08. Clearly, the distribution is heavy tailed (both the Frechet and the log-normal distributions provide an appropriate fit).
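As an illustration of such a fit, the sketch below fits a log-normal model to a synthetic stand-in for one day of one-minute σφ values using MASS::fitdistr; the data are simulated, not taken from NRISN.

```r
## Sketch: log-normal fit to simulated one-minute sigma_phi values [rad].
library(MASS)
set.seed(4)
sigma_phi <- rlnorm(1440, meanlog = log(0.05), sdlog = 0.6)   # synthetic stand-in
fit <- fitdistr(sigma_phi, densfun = "lognormal")
fit$estimate                                                  # meanlog, sdlog
hist(sigma_phi, breaks = 50, freq = FALSE, main = "sigma_phi with fitted log-normal")
curve(dlnorm(x, fit$estimate["meanlog"], fit$estimate["sdlog"]), add = TRUE, lwd = 2)
```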

6.3. Correlation Analysis

Often, we would like to measure the linear predictability of one signal y_t from another signal x_t. Assuming that the variance of both time series is finite, the cross-correlation function [51, p. 23] can be used for this purpose. This function is defined as

$$ \rho_{xy}(s, t) = \frac{\gamma_{xy}(s, t)}{\sqrt{\gamma_x(s, s)\,\gamma_y(t, t)}}, $$

where γ_xy(s, t) = E[(x_s − μ_x)(y_t − μ_y)] is the cross-covariance function between the two time series x_t and y_t, and E is the expectation operator. The Cauchy-Schwartz inequality implies |ρ_xy| ≤ 1.
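In R, the zero-lag correlation and the full cross-correlation function can be obtained with cor() and ccf(); the two series below are synthetic stand-ins for the FFT-convolution and IIR index estimates, not actual NRISN results.

```r
## Sketch: cross-correlation of two index estimates derived from a common underlying process.
set.seed(5)
common <- arima.sim(list(ar = 0.8), n = 500)   # shared "true" scintillation signal (synthetic)
s1 <- common + rnorm(500, sd = 0.3)            # stand-in for the FFT-convolution estimate
s2 <- common + rnorm(500, sd = 0.3)            # stand-in for the IIR-filter estimate
cor(s1, s2)                                    # zero-lag correlation coefficient
ccf(s1, s2, lag.max = 30, main = "Cross-correlation between the two estimates")
```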

The phase scintillation index σφ is the dominant disturbance at high latitudes and is obtained by high-pass filtering. For this reason, we have chosen two implementations of high-pass filtering algorithms (FFT convolution and IIR) for the correlation analysis. All plots have the same cutoff frequency in common and an averaging interval of one minute. Three scenarios are presented below, using different threshold values for σφ.

6.3.1. Scenario 1: Threshold 0.1 rad

For this scenario, we drop all scintillation indices above a threshold of 0.1 rad. Figure 9 shows that the algorithms are highly correlated. The new approach, that is, the nonparametric regression with Akaike Information Criterion (AIC) [51, pp. 53-54] model selection, performs very well. The algorithm does not require any knowledge of the underlying physical effects, for instance, multipath, Doppler shifts due to satellite motion, or cutoff frequency selection. The key is the smoothing parameter h, which determines the trade-off between the variance and the bias terms.

In this scenario, we have dropped all observations above the threshold value of 0.1 rad. These observations are classified as extreme and correspond to severe ionospheric activity.

6.3.2. Scenario 2: Higher Threshold

Figure 10 shows that the algorithms are still highly correlated, although the correlation coefficient has decreased relative to Scenario 1. The main reason for this reduction is the inclusion of more extreme events, which are classified as severe ionospheric activity. Before the vertical lines indicated in Figure 10, the algorithms are highly correlated; after the lines, the two algorithms decorrelate.

6.3.3. Scenario 3: All Values Included

In this scenario, we have included all values (only values so extreme that they must be treated as outliers are removed). Figure 11 shows that the correlation coefficient has dropped to values between 69 and 74%. The reason is the inclusion of the extreme events classified as severe.

The vertical line distinguishes two classes: the class to the left is highly correlated, while the class to the right is uncorrelated and corresponds to extreme ionospheric activity and possibly noise in the signal.

6.4. Sensitivity Analysis

In order to carry out the sensitivity analysis of the implemented algorithms, a reference algorithm has to be chosen. The most common choice, used here, is the sixth-order Butterworth high-pass filter implemented with FFT convolution.

Let the function F(fc, T, θ) represent the phase scintillation index σφ, where fc and T are the cutoff frequency and the averaging interval, respectively, and θ represents the remaining parameters, for instance, cycle-slip detection and repair and the handling of outliers/glitches. We will ignore the parameter θ to facilitate the analysis.

In order to carry out the sensitivity analysis, it is common to keep all parameters constant and vary one parameter at a time: in the first scenario, the cutoff frequency fc is held fixed while the averaging interval T is varied, and in the second scenario, T is held fixed while fc is varied.
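The one-at-a-time procedure can be sketched as two simple loops; the phase record below is a synthetic random walk and the helper functions are illustrative reimplementations, not the paper's software.

```r
## Sketch of the one-at-a-time sensitivity analysis for sigma_phi.
butter_hp <- function(x, fs, fc, n = 6) {            # minimal FFT-domain Butterworth high-pass
  N <- length(x); f <- (0:(N - 1)) / N * fs; f <- pmin(f, fs - f)
  H <- 1 / sqrt(1 + (f / fc)^(-2 * n))                # eq. (1) with a negative exponent
  Re(fft(fft(x) * H, inverse = TRUE)) / N
}
sigma_phi <- function(phi, fs, fc, interval) {        # sigma_phi per averaging interval [s]
  blocks <- split(butter_hp(phi, fs, fc), ceiling(seq_along(phi) / (interval * fs)))
  sapply(blocks, sd)
}
set.seed(6)
fs  <- 100
phi <- cumsum(rnorm(fs * 600, sd = 0.01))             # 10 min of synthetic phase [rad]
for (fc in c(0.1, 0.2, 0.3, 0.4))                     # vary the cutoff, fixed 60 s interval
  cat("fc =", fc, "Hz, median sigma_phi =", median(sigma_phi(phi, fs, fc, 60)), "\n")
for (Tavg in c(30, 60, 90))                           # vary the interval, fixed 0.1 Hz cutoff
  cat("T =", Tavg, "s, median sigma_phi =", median(sigma_phi(phi, fs, 0.1, Tavg)), "\n")
```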

6.4.1. Scenario 1: Cutoff Frequency Constant

In this scenario, we kept the cutoff frequency constant and varied the averaging interval (30, 60, and 90 seconds).

Figure 12 shows the results. The general shape of the time series is very similar, but some of the modes/spikes are not present in all of them. In general, using a longer averaging interval smooths the time series, broadening the spikes, reducing their magnitude, and causing some of the spikes to vanish. The longest interval does not appear to offer any significant advantage over the middle interval, while the 30-second interval contains more fine structure than the 60-second interval. In conclusion, the averaging interval should not exceed 60 seconds.

6.4.2. Scenario 2: Averaging Interval Constant

In this scenario, the averaging interval is kept constant while the cutoff frequency is varied.

Figure 13 shows the results. The general shape of the time series is very similar, but some of the modes/spikes are not present in all of them. In general, using a higher cutoff frequency reduces the scintillation index values throughout the time series. At time ≈ 340, there is a large spike which is significantly reduced when changing the cutoff frequency from 0.2 to 0.3 Hz and vanishes for a cutoff frequency of 0.4 Hz. The phase scintillation values are thus clearly strongly dependent on the cutoff frequency, even for a modest change such as the one seen here. This is a known weakness of the standard calculation techniques (e.g., [52]).

Taking into account the first and the second sensitivity analyses, acceptable ranges for the averaging interval and the cutoff frequency can be defined for the computation of σφ; these ranges determine the maximum variations in the data.

7. Implementation and Analysis

7.1. Data Collection and Analysis

As mentioned in the introduction, the data used in this investigation were obtained from the Norwegian Regional Ionospheric Scintillation Network (NRISN). The reference stations used are encircled in Figure 1.

For implementation and software packages used to compute the scintillation indices and , the interested reader is referred to Appendix C.

7.2. Interpretation of Results

The test results show that all detrending and filtering techniques produce almost the same results. The main difference lies in the statistical methods used for data editing, that is, how outliers are detected and removed, how missing observations are treated, and how glitches are detected and removed.

The new approach is robust against outliers, discontinuities, and missing observations. Figure 14 shows how the algorithms (FFT convolution, IIR, and local kernel regression) handle a phase discontinuity. The new approach handles the discontinuity well and avoids generating false scintillation from the discontinuity.

7.3. Remarks

Some remarks and suggestions are as follows:
(I) In order to capture false scintillation due to low satellite elevation angles, the cutoff elevation angle was set to zero. Figure 5 shows these events, indicated by the shaded rectangle.
(II) In this paper, we have computed scintillation indices by means of different detrending and filtering techniques. By identifying the weaknesses and the strengths of each method, we can compute a reliable index that can be used in further analysis, for instance, to classify the level of ionospheric disturbance.
(III) Nonparametric local regression with an optimal smoothing parameter can be used for the computation of scintillation indices.
(IV) As can be seen in Figure 14, the new approach handles discontinuities very well, while the other filter techniques generate artificially high values; using the traditional techniques, this discontinuity would be falsely reported as a scintillation event.

8. Conclusion

We have shown that the new approach based on local regression with kernel smoothing and with AIC/AICC for bandwidth parameter selection can be used for computation of scintillation indices for high latitude data. The new approach can be analytically related to the existing filtering methods and is shown to produce highly correlated results with the traditional approaches. However, the new approach shows superior handling of discontinuities.

We have shown that, by applying the right detrending and filtering techniques to scintillation data at high latitudes, one can obtain reliable scintillation indices (S4 and σφ). The studies clearly show that phase scintillation is dominant for these data sets.

For the derivation of S4, the digital FIR filters (Butterworth, Chebyshev, and Elliptic) and FFT convolution used to implement the low-pass filter work well, with minor differences. For σφ, the digital IIR filters (Butterworth and Elliptic) and FFT convolution work well. The differences lie in the statistical methods used to compute the standard deviations and in how outliers, glitches, and missing observations are handled; poor methods can bias the estimation process. In addition, the study shows that the derivation of the scintillation indices S4 and σφ is sensitive to the averaging interval and the cutoff frequency. Figure 5 shows false scintillation in S4 when all elevation angles are used in the computation.

For the data sets classified as high ionospheric activity, we have determined the empirical distributions of the scintillation indices S4 and σφ for the Norwegian Regional Ionospheric Scintillation Network, located at high latitudes. The heavy-tailed Frechet/log-normal distribution (B.5) is a good model for the phase scintillation index σφ, as Figure 8 confirms, while the distribution of S4 follows the Nakagami distribution, given by (B.1) and shown in Figure 6.

Appendix

A. Ionosphere Classification

The Index 95 (I95) values are computed by the Trimble software running the CPOS network and are used to classify the ionospheric activity. The I95 classification distinguishes four levels:
(i) no influence from ionospheric disturbances,
(ii) weak ionospheric disturbances,
(iii) strong ionospheric disturbances,
(iv) severe ionospheric disturbances.

Figures 15 and 16 show the ionospheric activity for the data sets analyzed (year 2014, DOY 50), which is classified as strong to severe.

B. Empirical Distributions of S4 and σφ

This section gives some distributions employed in communication systems in order to characterize the statistics of the signals transmitted through multipath fading channels. These distributions are relevant for scintillation data as well.

B.1. The Nakagami m-Distribution

A random variable X is said to have a Nakagami m-distribution if, for some m ≥ 1/2 and Ω > 0, its probability density function (pdf) is given by

$$ f(x) = \frac{2 m^m}{\Gamma(m)\,\Omega^m}\, x^{2m - 1} \exp\!\left(-\frac{m}{\Omega} x^2\right), \qquad x \ge 0, \tag{B.1} $$

where Ω is the mean square value of X, defined as Ω = E[X²], and E is the expectation operator. The parameter m is defined by the ratio of moments m = Ω² / E[(X² − Ω)²].

For m = 1, we obtain the Rayleigh density function, given by expression (B.2).

B.2. Rayleigh Distribution

A random variable X is said to have a Rayleigh distribution if its probability density function (pdf) is given by

$$ f(x) = \frac{2x}{\Omega} \exp\!\left(-\frac{x^2}{\Omega}\right), \qquad x \ge 0. \tag{B.2} $$

B.3. The Log-Normal Distribution

A random variable X is said to have a log-normal distribution if its probability density function (pdf) is given by

$$ f(x) = \frac{1}{x\,\sigma\sqrt{2\pi}} \exp\!\left[-\frac{(\ln x - \ln x_m)^2}{2\sigma^2}\right], \qquad x > 0, \tag{B.3} $$

where x_m is the median of X and σ² is the variance of ln X. The log-normal distribution is characterized by the parameter r = E[X]/x_m, known as the mean-to-median ratio, which is given by r = exp(σ²/2).

Alternatively, since σ² = 2 ln r, the log-normal pdf can be written in terms of the median x_m and the mean-to-median ratio r as

$$ f(x) = \frac{1}{2x\sqrt{\pi \ln r}} \exp\!\left[-\frac{\big(\ln(x/x_m)\big)^2}{4 \ln r}\right]. \tag{B.4} $$

B.4. Frechet Distribution

A random variable X is said to have a Frechet distribution with parameters s, m, and α, respectively defining the scale, location, and shape parameters, if the distribution is given by the expression

$$ F(x) = \exp\!\left[-\left(\frac{x - m}{s}\right)^{-\alpha}\right], \qquad x > m. \tag{B.5} $$

C. Implementation and Software Packages

The software used to generate the scintillation indices S4 and σφ is implemented in the programming languages C, C++, and R. All programs are configurable and generate a log file reporting all events during processing.
(1) A configurable C++ process decodes Septentrio PolaRxS message types 4027 (measurement blocks) and 4046 (correlation values) and produces a matrix format that is easy to process with Matlab, Python, or R. The decoded messages are described in the SBF reference guide [53].
(2) Convolution with FFT: a C++ program reads raw scintillation data in matrix format and computes the indices S4 and σφ. This module includes detrending methods ranging from fitting a straight line and the ratio method to more advanced techniques such as nonparametric regression with information criteria for model selection.
(3) Median filter: a C++ program removes glitches (spikes) from the scintillation index data.
(4) Digital filters: the R package signal, downloaded from the Comprehensive R Archive Network (CRAN), is used to implement the FIR (Butterworth, Elliptic, and Chebyshev) and IIR (Butterworth, Elliptic) filters.
(5) Outliers: the R package outliers, downloaded from CRAN, is used to detect and remove outliers.
(6) Local regression: the R package locfit, downloaded from CRAN, is used to carry out filtering via nonparametric regression with AIC for model selection.
(7) Nonparametric analysis of covariance for regression: the R package fANCOVA, downloaded from CRAN, is used to carry out filtering via nonparametric regression with AICC for model selection.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work is funded by the Norwegian Mapping Authority (NMA) as part of a Ph.D. program in cooperation with the Norwegian University of Life Sciences (NMBU). The authors would also like to thank Anna B. O. Jensen and Jabir A. Ouassou for their constructive discussions and proofreading of this paper.