Abstract

Galileo, the European Global Navigation Satellite System, will provide its users with highly accurate global positioning services and the associated integrity information. The element in charge of the computation of integrity messages within the Galileo Ground Mission Segment is the integrity processing facility (IPF), which is developed by GMV Aerospace and Defence. The objective of this paper is twofold: to present the integrity algorithms implemented in the IPF and to show the performance achieved with the IPF software prototype, covering aspects such as the implementation of the Galileo overbounding concept, the impact of safety requirements on the algorithm design (including the threat models for the so-called feared events), and finally the performance achieved with real GPS and simulated Galileo scenarios.

1. Introduction

Navigation algorithms play a key role in the provision of the Galileo mission, since they are responsible for computing the essential information the users need to calculate their position: the satellite ephemeris and clock prediction models. Such information is generated in the Galileo Ground Mission Segment (GMS) and broadcast by the satellites within the navigation signal, together with the expected a priori accuracy, the signal-in-space accuracy (SISA).

In parallel, the integrity algorithms of the GMS are responsible for the real-time monitoring of the satellite status, providing timely alert messages in case of failures. The accuracy of the integrity monitoring system is characterized by the signal-in-space monitoring accuracy (SISMA), which is also broadcast to the users through the integrity message.

Galileo is currently in its detailed design and development phase. The design and development phase of the IPF started in May 2005. The preliminary design review (PDR) has been successfully held, and the experimentation and validation activities for proving the correctness of the algorithm design are now close to completion. The SW prototypes of the algorithms have already been implemented and accepted, and their performance is currently under assessment.

The signal-in-space accuracy (SISA) plays an important role in the Galileo integrity concept, as it should cope with the navigation message errors in fault-free conditions. The computation of this parameter is performed in another element of the GMS named orbitography and synchronization processing facility (OSPF) based on off-line data processing. The description of the algorithms in charge of the SISA computation is out of the scope of this paper, which is devoted to the real-time integrity monitoring system of Galileo allocated to the IPF. A comprehensive description of the SISA computation can be found in [16].

2. The Galileo Integrity Concept

In order to validate the navigation message being broadcast by the satellites, an independent estimation of the signal-in-space error (SISE) is performed in real time. This estimation, which is also a random process with an associated uncertainty, allows the verification of the overbounding of the true SISE distribution by the SISA distribution. The assumption made in this case is that the difference between the true SISE projected at the worst user location and the estimated one can be overbounded by a Gaussian distribution with a standard deviation equal to SISMA. In this context, the SISMA can be considered a quality measure of the integrity check within the IPF. More details on the Galileo integrity concept can be found in [7]. As the IPF is an unmanned facility, the robustness and reliability of the algorithms are extremely important, and thus the whole design is highly conditioned by the stringent integrity and continuity requirements.

Before going deeper into the Galileo user integrity concept and its impact on the integrity algorithms, the Galileo overbounding concept should be clarified. As stated in [7], it is given by Definition 1.

Definition 1. The distribution of a random variable A is overbounded by the distribution of a random variable B if, for all x ≥ 0, P(|A| > x) ≤ P(|B| > x).

This definition of the Galileo overbounding concept is quite similar to the cumulative distribution function (CDF) overbounding definition stated by DeCleene in [8], although it differs in that the two tails are combined together here. As in [8], zero mean, unimodality, and symmetry are required, although the zero-mean requirement will be relaxed in the frame of the SISMA, as will be seen afterwards.

The objective of the IPF is to validate the navigation messages of the satellites. The validation is based on the IPF estimation of the SISE and its comparison with the broadcast SISA and the internally computed SISMA. According to the assumptions mentioned earlier, the IPF assumes that the estimated SISE is overbounded by an unbiased Gaussian distribution as follows:

(i) the true SISE is overbounded by N(0, SISA);
(ii) the SISE estimation error (true SISE minus estimated SISE) is overbounded by N(0, SISMA);
(iii) the estimated SISE is therefore overbounded by N(0, √(SISA² + SISMA²)).

Under these assumptions, the user considers that the threshold applied at IPF level to decide whether a navigation message is valid is given by the variance of the distribution characterizing the estimated SISE, together with the required false alarm probability:

T = k_fa · √(SISA² + SISMA²),

k_fa being the point of the normal distribution that leaves in the tails (two-tail problem) a probability equal to the specified false alarm rate. Thus, if the estimated SISE projected to the worst user location is higher than the allowed threshold, the satellite is flagged as "DO NOT USE" in order to indicate to the user that its navigation message is not valid.

The current specification of the IPF element envisages a maximum false alarm probability in the order of 10^-7 in 15 seconds, which gives a factor k_fa of approximately 5.212. Considering that the required values for SISA and SISMA are 0.85 and 0.7 meters, respectively, if no further barriers were implemented, the minimum detectable error by the IPF would be in the order of 6 meters.
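As a quick numerical cross-check of the figures quoted above, the relation can be evaluated directly (an illustrative Python sketch, not IPF code; the variable names are ours):

import math

# Fault-free rejection threshold T = k_fa * sqrt(SISA^2 + SISMA^2), with the
# values quoted in the text: SISA = 0.85 m, SISMA = 0.70 m, k_fa ~ 5.212.
SISA = 0.85   # required signal-in-space accuracy [m]
SISMA = 0.70  # required signal-in-space monitoring accuracy [m]
k_fa = 5.212  # two-tail Gaussian factor associated with the false alarm budget

T = k_fa * math.sqrt(SISA**2 + SISMA**2)
print(f"minimum detectable error: {T:.2f} m")  # about 5.7 m, i.e., "in the order of 6 meters"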

All the parameters defined up to now play an important role in the Galileo user integrity equation. In particular, the user will not use those satellites whose integrity flag (IF) is set to "DO NOT USE." Furthermore, the SISA and the SISMA are introduced in the equations in order to compute the so-called integrity risk (IR), which is the probability of having hazardous misleading information (HMI).

Galileo users will compute the integrity risk by combining the horizontal and vertical errors, considering both the fault-free situation and the one where there is one failing satellite. The basic underlying assumptions allowing the user to determine the integrity risk of his position solution at any global location are as follows:

(i) in the "fault-free mode," the true SISE for a satellite is overbounded by a zero-mean Gaussian distribution with a standard deviation equal to SISA;
(ii) in general, the IPF will detect the faulty satellites and flag them as "don't use";
(iii) one of the satellites flagged as "OK" is considered to be faulty but not detected ("failure mode"); for this satellite the true SISE is overbounded by a Gaussian distribution whose mean is the "IPF rejection threshold" (T) and whose standard deviation is equal to SISMA, that is, N(T, SISMA);
(iv) the probability that more than one satellite is faulty but not detected at any instant in time is negligible for the user equation.

In order to overcome the problem of the bias in the SISMA while minimising the impact on the performance, an innovative approach has been followed in the design of the IPF algorithms. It takes advantage of the fact that the SISMA is only used for the integrity risk computation of the assumed failed satellite, for which the true SISE is expected to be overbounded by the Gaussian distribution stated above. This distribution has a certain bias which is a function of the SISMA, and this feature can be used to manage the potential biases.

Moreover, both k_fa and the equations that the final user implements to reconstruct the IPF rejection threshold are fixed; consequently, any modification in the SISMA computation ought to lead to the broadcast of a SISMA such that the value of T as obtained by the user is higher than or equal to the one used internally within the IPF. Based on these principles, the rejection threshold was modified, leading to the following expression:

T_int = b_WUL + k_fa,int · √(SISA² + σ_est²),

where

(i) b_WUL is the estimated bias of the SISE estimation error projected to the worst-user location (WUL);
(ii) k_fa,int is the point of the Gaussian distribution that leaves in the two tails a probability equal to P_fa,int;
(iii) σ_est is the standard deviation of the Gaussian distribution that overbounds the SISE estimation error after removing the bias.

k_fa,int may not be the same as k_fa, since the false alarm probability specified for this check at high level does not consider other contributions due to IPF internal events, leading to a higher k_fa,int. In this context, the final user should consider that the true SISE of the failed satellite follows the Gaussian distribution described above (mean equal to the rejection threshold and standard deviation equal to the SISMA). The SISMA to be broadcast (SISMA_BC) is then obtained by imposing the condition that the threshold built by the final user is exactly the same as the one used by the IPF:

k_fa · √(SISA² + SISMA_BC²) = T_int,

and then

SISMA_BC = √((T_int / k_fa)² − SISA²).

It is important to note that, since b_WUL is a bound on the absolute value of the true bias, the threshold needs to be enlarged by an additional b_WUL in order to be conservative and capture the real bias that the user should consider. This means that the final user will not reconstruct exactly the IPF internal rejection threshold but something higher.

Finally, the following formula is obtained:

SISMA_BC = √(((T_int + b_WUL) / k_fa)² − SISA²),

where T_int = b_WUL + k_fa,int · √(SISA² + σ_est²), as defined above. From this expression it can easily be deduced that, at least whenever k_fa,int is greater than or equal to k_fa, SISMA_BC is also greater than or equal to σ_est, and thus broadcasting SISMA_BC as defined above allows compliance with all the requirements while remaining compatible with the user concept.
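The derivation above can be condensed into a short numerical sketch (illustrative only: it assumes the expressions as reconstructed in this section, and the input values, apart from SISA, are arbitrary examples):

import math

def sisma_to_broadcast(sisa, sigma_est, b_wul, k_fa, k_fa_int):
    """Broadcast SISMA such that the user-reconstructed threshold covers the IPF
    internal one enlarged by the bias bound (expressions as reconstructed above)."""
    t_int = b_wul + k_fa_int * math.sqrt(sisa**2 + sigma_est**2)  # internal rejection threshold
    t_user = t_int + b_wul                                        # conservative enlargement by the bias bound
    return math.sqrt((t_user / k_fa)**2 - sisa**2)

# Example values (illustrative only, except SISA = 0.85 m):
print(sisma_to_broadcast(sisa=0.85, sigma_est=0.5, b_wul=0.1, k_fa=5.212, k_fa_int=5.33))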

The main advantage of this approach is that it allows a simple management of the bias in the SISE estimation error, fully compatible with the user integrity concept, without using any specific property of the Gaussian overbounding concept. The main drawbacks are that unimodality and symmetry are required to allow the overbounding, and that the value of the SISMA to be broadcast is not fully independent of the SISA, so the broadcast value is not a pure measure of the quality of the SISE estimation process. This could be a problem if users with different SISAs existed; therefore, the only requirement needed is to guarantee that the IPF is aligned with the safety-of-life users in terms of using the same SISA. This is completely guaranteed by the IOD/SNF mechanism described in the Galileo signal-in-space (SIS) definition, which establishes a direct link between the integrity message and the navigation message for which it has been computed. A detailed description of this mechanism is out of the scope of this paper and is omitted for brevity. In terms of performance, a trade-off analysis was performed comparing this strategy with other potential ones such as "excess mass overbounding" (see [9]). The results were quite similar, although slightly better for the proposed approach, and so it has been preliminarily selected for implementation within the IPF.

3. Overview of the IPF Algorithmic Processing

The IPF is the real-time processing element of the GMS that provides the integrity information for the broadcast satellite navigation data based on the Galileo sensor stations (GSSs) ranging measurements obtained every second, provided with a small delay in order to be compatible with the time-to-alert (TTA) requirement. The IPF design copes with this 1 Hz algorithmic task and with the batch task of performing an optimum GSS clock synchronization and tropospheric delay estimation, as it will be seen afterwards.

With respect to service management, the IPF provides integrity information for the Galileo PRS and SoL services independently. Thus, two instances of the same IPF algorithms, but with different configurations, are executed, one for each service (see [1, 10] for further details), without sharing any input data.

The IPF receives information from the network of GSSs. This network is made up of nominally 40 different sites, with a worldwide distribution optimized to obtain the best performance. At each site, the information is collected through two different measurement chains, namely A and B. For each chain, the IPF receives the broadcast navigation and integrity messages and the raw ranging measurements. Information from both chains is processed independently up to the moment when the SISMA estimates for each chain have been obtained; this is the so-called chain processing (see Figure 1). At that point, the product-check algorithm merges the SISMAs from both chains, selecting the highest one for safety reasons, and consolidates the final values to be broadcast, finishing the service processing.

The basic steps (see Figure 1) that lead to the computation of the integrity information are as follows:

(i) preprocessing and validation (PPV) of the raw measurements: rejection of inconsistent (code-phase, time-evolution) raw measurements, detection of cycle slips, and smoothing of the code measurements with the phase measurements;
(ii) synchronization of the receiver clocks to the GST time scale (GSTS + OnClk);
(iii) estimation of the tropospheric zenith delay per receiver (GSTS);
(iv) estimation, for each satellite, of the SISE by using the residuals resulting from the reconstruction of the smoothed pseudorange with the broadcast satellite ephemeris and clock prediction models in the navigation message and the estimated GSS clock biases and tropospheric delays (IntDet);
(v) computation of the actual SISMA as the error bound in the SISE estimation process, considering the projection to the WUL and the potential biases and random errors in the ranging measurement residuals (IntDet);
(vi) computation of the integrity flag by direct comparison of the estimated SISE with the rejection threshold based on the actual SISMA and the available SISA (IntDet);
(vii) computation of the broadcast SISMA as the predicted error bound in the SISE estimation process during the full validity time of the integrity message, including the anticipated failure of any of the available GSSs (IntDet), with the objective of improving the continuity of the system, although the availability is degraded;
(viii) computation of the quality-of-service (QoS) parameters, which are a measure of the expected level of performance and will be used by another GMS element to select the master IPF among all the active ones.

3.1. Preprocessing Algorithm

The goal of the preprocessing and validation (PPV) algorithm is to provide smoothed pseudoranges and carrier phase free of ionospheric delays by means of the dual frequency combination. On top of this, several barriers are added in order to improve the robustness of the IPF, aimed at detecting outliers and discontinuities in the incoming data. The reader is referred to [11] for further details. Here only a brief description of the change in the cycle slip algorithm with respect to [11] is provided.

The smoothing filter is the classical Hatch filter, which has proved to be very efficient. Several trade-offs have been performed with the objective of finding an alternative filter with better performance, and the results have shown that the Hatch filter is close to optimum. The good performance, the high degree of simplicity, and the very low CPU consumption are the key factors leading to the selection of the Hatch filter as the baseline. However, a certain lack of robustness has been observed during the experimentation test campaign, given the long reaction time to discontinuities (e.g., an undetected cycle slip produces a discontinuity in the ambiguity). For this reason, its output is compared with that of a finite impulse response (FIR) filter, and in case they differ by more than expected, both filters are reset. The FIR filter has an order of approximately 700, seeking a balance between CPU consumption, the time to react to discontinuities in the input data, and the size of the discontinuities to be detected.
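The combination of a Hatch filter with an FIR cross-check can be sketched as follows (a simplified illustration assuming iono-free observables in metres; the smoothing constant and the reset tolerance are example values, while the FIR length of about 700 samples follows the text):

import numpy as np

def hatch_smooth(code, phase, n_max=100, fir_len=700, reset_tol=2.0):
    """Carrier-smoothed code (classical Hatch filter) cross-checked against a plain
    moving-average (FIR) estimate of the code-minus-phase bias; both filters are
    reset when they diverge by more than reset_tol [m]."""
    smoothed = np.empty_like(code)
    smoothed[0] = code[0]
    cmp_hist = [code[0] - phase[0]]              # code-minus-phase history for the FIR check
    n = 1
    for k in range(1, len(code)):
        n = min(n + 1, n_max)
        # Hatch recursion: blend the new code with the phase-propagated previous estimate.
        smoothed[k] = code[k] / n + (n - 1) / n * (smoothed[k - 1] + phase[k] - phase[k - 1])
        cmp_hist.append(code[k] - phase[k])
        fir_bias = np.mean(cmp_hist[-fir_len:])  # FIR estimate of the code-minus-phase bias
        if abs((smoothed[k] - phase[k]) - fir_bias) > reset_tol:
            smoothed[k] = code[k]                # reset both filters (e.g., undetected cycle slip)
            cmp_hist = [code[k] - phase[k]]
            n = 1
    return smoothed

# Synthetic example: noisy pseudorange and precise but ambiguous carrier phase.
rng = np.random.default_rng(0)
truth = np.linspace(2.1e7, 2.1e7 + 300.0, 600)
code_obs = truth + rng.normal(0.0, 0.5, truth.size)
phase_obs = truth + 12.345 + rng.normal(0.0, 0.005, truth.size)
print(np.std(hatch_smooth(code_obs, phase_obs)[200:] - truth[200:]))  # well below the 0.5 m code noise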

In order to maintain the performance under the most stringent ionospheric conditions, corresponding to a sunspot number (SSN) of 250, an algorithm has been added with the objective of correcting the higher-order ionospheric delay terms, those proportional to 1/f³ and 1/f⁴. These effects may amount to several centimetres, which is negligible compared with the raw pseudorange noise, but not with the carrier phase noise, which is nominally just a few millimetres. The approach outlined in [12] has been followed.

The cycle slip detection and repair algorithm is another key aspect of the measurement preprocessing. The performance of the IPF in terms of SISMA value and continuity is quite demanding, and thus the false alarm rate of this algorithm should be extremely low, in the order of 10^-13 per second. For the nominal situation (1 cm phase noise), the algorithm description is given in [11]. However, in the presence of strong scintillations, the choice is to set the detection threshold at 0.6 cycles and to allow up to 20 consecutive slips before resetting the filter, since potential filter divergence is seen as an excessive number of repairs. Additionally, in order to help control the effect of undetected or falsely detected cycle slips on the smoothed pseudorange and the carrier phase (and thus on the SISMA value), the noise of the repaired phase is monitored. In case a certain value is exceeded (fixed according to the probability of false alarm), phase measurements are rejected from processing by the rest of the algorithms until the monitored noise decreases.
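A toy version of such a detector is sketched below; the 0.6-cycle threshold and the limit of 20 consecutive repairs follow the text, while the constant-rate phase predictor and the repair-by-rounding strategy are simplifying assumptions:

import numpy as np

def detect_and_repair_slips(phase_cycles, threshold=0.6, max_consecutive=20):
    """Declare a cycle slip when the epoch-to-epoch phase change deviates from a
    constant-rate prediction by more than 'threshold' cycles; repair by an integer
    number of cycles and reset after too many consecutive repairs."""
    repaired = np.asarray(phase_cycles, dtype=float).copy()
    consecutive = 0
    for k in range(2, len(repaired)):
        predicted = 2.0 * repaired[k - 1] - repaired[k - 2]  # linear (constant-rate) prediction
        jump = repaired[k] - predicted
        if abs(jump) > threshold:
            consecutive += 1
            if consecutive > max_consecutive:
                raise RuntimeError("filter reset: too many consecutive repairs")
            repaired[k:] -= round(jump)                      # repair by an integer number of cycles
        else:
            consecutive = 0
    return repaired

# Example: inject a 3-cycle slip into a smooth series and repair it.
t = np.arange(300.0)
phase = 0.05 * t + 0.001 * t**2
phase[150:] += 3.0
print(np.max(np.abs(detect_and_repair_slips(phase) - (0.05 * t + 0.001 * t**2))))  # ~0 after repair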

3.2. Sensor Station Synchronization Strategy (GSTS and On-Line Clock)

As outlined before, special effort has been devoted to improving the synchronization algorithms, since their impact on the performance is significant. Initially, the algorithm considered for the synchronization was based on Kalman filters using as observables "common-view observations" formed with the smoothed pseudoranges (note that "common-view observations" are formed by the difference between the ranging measurements of two sensor stations with respect to the same satellite). The best accuracy that can typically be obtained with this kind of algorithm is in the order of 0.8 nanoseconds, with the disadvantages of the reactivity of the Kalman filters to nonmodeled effects and their sensitivity to fine tuning. Moreover, adaptive filters are not considered adequate from a safety point of view.

A major breakthrough was achieved by applying basic principles of orbit determination to the IPF, leading to the processing of smoothed pseudoranges together with carrier-phase measurements. Taking advantage of the highly accurate ephemeris models provided by the OSPF, the GSSs are synchronized by fixing the orbits according to the broadcast navigation messages and solving for the clocks, the iono-free carrier-phase ambiguities, and the zenith tropospheric delays (ZTDs). Tests with real GPS data have shown that the GSS synchronization error in real time is between 0.3 and 0.4 nanoseconds (67%) and the ZTD estimation error is in the order of 2 cm (67%). Nevertheless, the CPU consumption of this process only allows its execution in batch mode every minute. A second process working in real time, second by second, is therefore required. This process (called on-line clock) is identical to the first one, but it takes the estimated ambiguities and ZTDs and fixes them, so that the state vector is reduced to the satellite and GSS clock biases with respect to the time reference. The degradation in performance is almost negligible, and thus a real-time synchronization algorithm almost as accurate as an orbit determination and time synchronization (ODTS) one is achieved.

3.2.1. GSTS Algorithm

The first process of the two-step synchronization scheme is called the ground station time synchronization (GSTS) algorithm. In short, the GSTS is a weighted least-squares algorithm with a priori information and linear constraints. Its state vector has been reduced to the receiver clock offsets, the iono-free carrier-phase ambiguities, the tropospheric zenith delays, and the satellite clock offsets. As observables, it uses an arc of iono-free smoothed pseudoranges and carrier phases accumulated over two hours and sampled every 10 minutes. The arc length and sampling time are parameters to be tuned during the performance validation phase; they can be varied so that a similar level of performance is achieved with shorter arcs and higher sampling rates, provided the observability of the parameters to be estimated is maintained within acceptable limits. The CPU consumption is such that it allows the GSTS to be executed every minute or so. The main models included in the GSTS are the following:

(i) snapshot clock biases, with no clock model relating biases across different measurement epochs;
(ii) Saastamoinen tropospheric mapping function and a tropospheric blind model (as a priori information);
(iii) pseudorange and phase measurement modeling including relativistic effects;
(iv) station uplift correction due to solid Earth tide effects.

As in a classical ODTS algorithm, the performance of this algorithm is mainly driven by the phase noise, provided that the orbital error in the navigation message is kept within acceptable limits. It is important to note that there is no attempt to correct the higher-order ionospheric delay terms in the measurement modeling performed within this algorithm; therefore, they will be seen as noise degrading the performance of the GSTS algorithm.
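A compact sketch of a weighted least-squares estimator with a priori information, in the spirit of the GSTS formulation above, is given below (the observation model is a generic linear one and the a priori values are handled as pseudo-observations; the linear constraints, e.g., on the time reference, are omitted here):

import numpy as np

def wls_with_apriori(A, y, sigma_obs, x0, sigma_x0):
    """Weighted least squares with a priori information treated as pseudo-observations:
    minimise ||(y - A x)/sigma_obs||^2 + ||(x - x0)/sigma_x0||^2."""
    W = np.diag(1.0 / np.asarray(sigma_obs) ** 2)   # observation weights
    P0 = np.diag(1.0 / np.asarray(sigma_x0) ** 2)   # a priori weights
    N = A.T @ W @ A + P0                            # normal matrix
    b = A.T @ W @ y + P0 @ x0
    x_hat = np.linalg.solve(N, b)
    return x_hat, np.linalg.inv(N)                  # estimate and its formal covariance

# Example: two unknowns (say, a clock offset and a zenith-delay scale), three observations.
A = np.array([[1.0, 0.3],
              [1.0, 1.2],
              [1.0, 2.5]])
y = np.array([5.1, 5.9, 7.2])
x_hat, cov = wls_with_apriori(A, y, sigma_obs=[0.1, 0.1, 0.1],
                              x0=np.array([5.0, 1.0]), sigma_x0=[10.0, 0.05])
print(x_hat)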

Compared with a Kalman implementation of the classical common-view approach, an algorithm such as the GSTS not only improves the synchronization accuracy but also provides a much better tropospheric zenith delay accuracy. Besides, the snapshot clock model does not make any assumption on the behavior of the receiver clock, making it more robust to clock jumps or instabilities.

The GSTS also implements robust estimation techniques: two iterations are performed, with observable rejection in between based on the comparison of the residuals with the a posteriori residual standard deviation.

Furthermore, in order to avoid a degradation of the receiver clock offsets by a faulty navigation message, those navigation messages that have never been validated in previous epochs are not included in the current synchronization. During system start-up, this situation is under direct operator control, and it is assumed that the probability of having an OSPF failure while the IPF is in start-up, without it being seen by any monitoring parameter, is negligible. It should be noted that in the nominal case the integrity is maintained thanks to the off-line calibration of the SSEB performed in the MSF element of the GMS, which includes the synchronization error, as explained in [13].

3.2.2. On-Line Clock Synchronization Algorithm

ONCLK is a "simplified version of the GSTS" in which only the satellite and receiver clock offsets are estimated; it takes the tropospheric zenith delays and ambiguities from the last run of the GSTS, as they do not depend on the reference time scale and are assumed to be sufficiently stable in time. ONCLK uses the navigation messages that are to be validated by the integrity determination but that have already been validated in previous epochs, as explained before for the GSTS.

When a brand-new navigation message reaches the IPF without integrity information, it is not used in the receiver clock synchronization; instead, the former navigation message is used. However, as will be seen later, an integrity flag and SISMA values are computed for this new navigation message. In this way, if there is a problem with the new navigation message for a satellite, it does not corrupt other satellites through corrupted receiver clock offsets, and the navigation message is flagged as DO NOT USE without any side effect. In summary, the first time a navigation message is validated, its true SISE is fully decorrelated from the SISE estimation error, thanks to the delay applied before it enters the receiver synchronization process.

3.3. Time Reference for Synchronization

The reference time scale used by the IPF must be as close as possible to Galileo system time (GST), provided by the precise timing facility (PTF). The implementation of this reference in the GSTS fits in naturally, just by adding to the input data the measurements of the GSS connected to the PTF and fixing its clock offset. This master-clock approach was implemented first. However, the high availability of the GMS must be achieved taking PTF failures into account, and thus a different scheme was adopted. The reference time scale now used within the IPF is defined as the "GST as seen through the Galileo constellation," that is, the broadcast GST. This is implemented in the GSTS and on-line clock algorithms by introducing a linear constraint so that the corrections to the satellite clock biases with respect to the predicted clock offsets in the navigation messages average to zero. In order to cope with faulty satellites/navigations, the contribution of each satellite to the constraint is deweighted by a factor that depends on its own clock offset with respect to the navigation message, thus decreasing the faulty satellite/navigation contribution in the iterative estimation process. In this way, satellites with degraded navigation messages do not substantially drag the time reference away from GST as defined by the PTF.
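The deweighted zero-mean condition on the satellite clock corrections can be illustrated with a few lines (the Cauchy-like deweighting law and the number of iterations are assumptions, not the IPF settings):

import numpy as np

def align_to_broadcast_gst(clock_corrections, sigma0=1.0):
    """Iteratively reweighted zero-mean condition on the satellite clock corrections
    (here in metres) with respect to the navigation message: satellites with large
    corrections get small weights, so a faulty satellite barely drags the reference."""
    c = np.asarray(clock_corrections, dtype=float)
    weights = np.ones_like(c)
    for _ in range(5):                                   # a few reweighting iterations
        offset = np.sum(weights * c) / np.sum(weights)   # weighted mean to be removed
        weights = 1.0 / (1.0 + ((c - offset) / sigma0) ** 2)
    return c - offset, weights

corr = np.array([0.2, -0.1, 0.05, 12.0])                 # last satellite is clearly faulty
aligned, w = align_to_broadcast_gst(corr)
print(aligned.round(2), w.round(3))                      # the faulty satellite keeps almost all of its error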

3.4. Integrity Determination Algorithm

The integrity determination (Int_Det) is the algorithm in charge of computing the integrity flag and the SISMA for each satellite. The integrity flag indicates whether the estimated SISE is consistent with the broadcast SISA and with the uncertainty in the estimated SISE given by the SISMA.

The SISE for a given satellite is defined as a three-component vector in the satellite body-fixed coordinate system: cross-track, along-track, and radial + clock. The radial and clock components are estimated together due to the lack of observability to correctly separate the orbit radial error from the clock error. This introduces a mismodeling that is negligible compared with the error sources present in the measurement residuals for estimated SISE values in the order of several meters. The mismodeling is in the order of 2-3% of the radial error, so its effect increases as the radial error becomes larger. However, large radial errors are very improbable given the way the orbits are computed within the OSPF; typical radial errors are in the order of 1–10 cm. In any case, the simplification becomes inapplicable only for very large orbit radial errors, for which the satellite should already have been rejected, assuming that a full negative correlation with the clock prediction error does not exist.

The SISE for a given satellite is estimated by using a pure weighted least-squares estimator:

SISE_est = (Aᵀ W A)⁻¹ Aᵀ W Res,

where

(i) A is the design matrix that contains the unit vectors to the satellite from all the GSSs in view;
(ii) W is the weight matrix, derived from the a priori measurement noise covariance matrix;
(iii) Res is the vector of measurement residuals, obtained as the difference between the smoothed iono-free pseudoranges and the reconstructed ones based on the estimated parameters.

The smoothed pseudoranges are reconstructed by means of the navigation message (for the satellite position and clock) and the estimated GSS clock biases and tropospheric delays. The obtained measurement residuals, Res, are the projection of the true SISE onto the line of sight of each receiver plus the following contributions, called residual errors.

(i) Synchronization error: the error in the receiver synchronization, with standard deviation σ_sync.
(ii) Tropospheric delay error: the error in the tropospheric zenith delay estimation and its mapping function, with standard deviation σ_tropo.
(iii) Errors after preprocessing: nonfiltered measurement noise, such as nonmitigated multipath and other nonmodeled effects, with standard deviation σ_meas.

The GSS clock biases and the zenith tropospheric delays are estimated within the same process, so they may be somewhat correlated. Considering that the GSS clock offset is modeled as a snapshot process while the tropospheric zenith delay is averaged over periods of the order of hours, the correlation coefficient is expected to be low. The global variance for a given line of sight is then

σ² = σ_sync² + σ_tropo² + σ_meas².

The inverses of these global variances are the diagonal elements of the W matrix. The off-diagonal elements are neglected as the receivers are assumed to be independent.
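A minimal sketch of the per-satellite weighted least-squares SISE estimation and of the weighting just described is given below (synthetic geometry and illustrative error figures):

import numpy as np

def estimate_sise(A, res, sigma_sync, sigma_tropo, sigma_meas):
    """Weighted least-squares estimate of the 3-component SISE vector from the
    measurement residuals of the stations in view; A is n_stations x 3 (unit
    line-of-sight vectors), and each line of sight is weighted by the inverse of
    its global residual-error variance."""
    var = np.asarray(sigma_sync) ** 2 + np.asarray(sigma_tropo) ** 2 + np.asarray(sigma_meas) ** 2
    W = np.diag(1.0 / var)
    N = A.T @ W @ A
    sise_hat = np.linalg.solve(N, A.T @ W @ res)  # estimated SISE
    return sise_hat, np.linalg.inv(N)             # and its covariance, later projected to the WUL

# Example with 5 stations in view and a true SISE of 0.4 m in the radial + clock component.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))
A /= np.linalg.norm(A, axis=1, keepdims=True)
res = A @ np.array([0.0, 0.0, 0.4]) + rng.normal(0.0, 0.1, 5)
print(estimate_sise(A, res, [0.05] * 5, [0.04] * 5, [0.08] * 5)[0])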

The estimated SISE vector is projected onto the worst user location (WUL), defined as the location within the satellite footprint where the projected SISE is maximum. The SISE covariance matrix is also projected onto its own worst user location to obtain the standard deviation of the SISE estimation error at the WUL (the actual SISMA). It is noted that the WUL for the estimated SISE may not be the same as the one for the SISMA.
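The projection to the worst user location can be sketched as a search over candidate user lines of sight (a simplified grid search; the candidate set stands in for the satellite footprint, and the frame conventions are assumed to match those of the SISE vector):

import numpy as np

def worst_user_projection(sise_hat, cov_sise, los_users):
    """Project the estimated SISE and its covariance onto a set of candidate user
    lines of sight (unit vectors) and keep the worst cases."""
    proj = los_users @ sise_hat                                     # projected SISE per candidate user
    var = np.einsum('ij,jk,ik->i', los_users, cov_sise, los_users)  # projected variance per candidate user
    return np.max(np.abs(proj)), np.sqrt(np.max(var))               # the two maxima may occur at different users

# Example: candidate users sampled in a cone around the radial direction.
rng = np.random.default_rng(2)
los = rng.normal(size=(500, 3))
los[:, 2] = np.abs(los[:, 2]) + 3.0
los /= np.linalg.norm(los, axis=1, keepdims=True)
sise_wul, sigma_wul = worst_user_projection(np.array([0.1, -0.05, 0.4]), 0.01 * np.eye(3), los)
print(round(sise_wul, 3), round(sigma_wul, 3))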

Since biases may be expected in the measurement error of each line of sight, the estimated SISE will also be biased, by an amount given by the following expression (note that the bias at satellite level is derived from the biases of the different lines of sight following the same weighted least-squares expression as the one used for the estimated SISE computation):

b_SISE = (Aᵀ W A)⁻¹ Aᵀ W b,

where b is the vector formed by the residual biases of each line of sight. The vector b_SISE is also projected onto its worst user location to obtain the bias of the SISMA, b_WUL. It is important to note that the true bias per line of sight is not really known, only an upper bound with a certain confidence level; therefore, b_WUL is consequently an upper bound of the true bias at the WUL. The derivation of this upper bound follows a complex mathematical development that is omitted for brevity.

The rejection threshold T is computed as

T = b_WUL + k_fa,int · √(SISA² + σ_WUL²),

where σ_WUL is the standard deviation of the SISE estimation error at the WUL and k_fa,int is the point of the Gaussian distribution that leaves in the two tails a probability equal to P_fa,int. As discussed in Section 2, k_fa,int may not be the same as k_fa, since the false alarm probability specified for this check at high level does not consider other contributions due to IPF internal events, leading to a higher k_fa,int. If the estimated SISE at the WUL exceeds T, then the integrity flag (IF) is set to DO NOT USE.

According to the Galileo integrity concept, the user needs to consider the failure of the worst station over the validity time of the SISMA. This brings in the so-called "broadcast SISMA" concept. It is the same as the actual SISMA except for two aspects: (i) it has to be valid over its full applicability time (e.g., 85 seconds); (ii) it has to consider the failure of the receiver that maximizes the SISMA value during the whole applicability time interval. The algorithm is the same as for the actual SISMA, repeated over all the epochs of the validity time and over all possible GSS single failures. Therefore, for each prediction epoch,

(i) the satellite positions are predicted based on the ephemeris model;
(ii) the removal of each sensor station in view of the satellite is considered;
(iii) the SSEB for each line of sight is derived taking into account the predicted instantaneous elevation (this implies that sensor stations that are monitoring the satellite at the current epoch may disappear during the validity time), so that the corresponding standard deviation and bias at the WUL can be computed.

Combining these terms with the applicable SISA leads to one predicted SISMA per epoch and per assumed station failure. Obviously, the final pair of bias and standard deviation that defines the broadcast SISMA is the one that provides the maximum rejection threshold that the user will reconstruct.

However, the IPF can only provide one SISMA parameter, the so-called user SISMA (SISMA_U), which is obtained from the selected bias and standard deviation by applying the broadcast SISMA expression derived in Section 2.

It is clear that the user SISMA is greater than or equal to the actual SISMA.
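The broadcast SISMA logic, repeating the computation over the prediction epochs and all single-station removals and keeping the worst case, can be sketched as follows (simplified: the SISMA of a given geometry is reduced here to the projected standard deviation, ignoring the bias and SISA combination):

import numpy as np

def predicted_sigma_wul(A, var_los, los_users):
    """Projected SISE-estimation standard deviation at the worst user location for
    one geometry (stations in view A, per-line-of-sight variances var_los)."""
    cov = np.linalg.inv(A.T @ np.diag(1.0 / var_los) @ A)
    return np.sqrt(np.max(np.einsum('ij,jk,ik->i', los_users, cov, los_users)))

def worst_case_sisma(epochs, los_users):
    """Maximum over the prediction epochs and over every single-station removal;
    'epochs' is a list of (A, var_los) pairs predicted over the validity time."""
    worst = 0.0
    for A, var_los in epochs:
        for drop in range(A.shape[0]):               # anticipate the failure of each station in view
            keep = [i for i in range(A.shape[0]) if i != drop]
            worst = max(worst, predicted_sigma_wul(A[keep], var_los[keep], los_users))
    return worst

# Example with one prediction epoch, 6 stations in view, and 200 candidate users.
rng = np.random.default_rng(3)
A = rng.normal(size=(6, 3)); A /= np.linalg.norm(A, axis=1, keepdims=True)
users = rng.normal(size=(200, 3)); users /= np.linalg.norm(users, axis=1, keepdims=True)
print(worst_case_sisma([(A, np.full(6, 0.01))], users))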

Finally, the post-fit SISE residuals (obtained by means of the expression r = Res − A · SISE_est, where A is the design matrix, Res is the measurement residual vector, and SISE_est is the estimated SISE, all defined previously) are grouped per receiver and a chi-square test is performed. In case the check fails for a receiver and no faulty line of sight can be identified, its residuals are removed and the SISE for the affected satellites is recomputed. This is a general barrier against outliers or underestimated measurement noise.
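The per-receiver residual check can be sketched as follows (the grouping assumes one line of sight per receiver for this satellite, and the false alarm probability is an illustrative value):

import numpy as np
from scipy.stats import chi2

def receiver_residual_check(A, res, W, sise_hat, station_id, p_fa=1e-5):
    """Group the post-fit residuals r = res - A @ sise_hat per receiver and apply a
    chi-square test to each group; returns the suspicious receivers."""
    r = res - A @ sise_hat
    w = np.diag(W)
    suspicious = []
    for sta in np.unique(station_id):
        mask = station_id == sta
        stat = np.sum(w[mask] * r[mask] ** 2)        # sum of squared normalised residuals
        if stat > chi2.isf(p_fa, df=int(np.sum(mask))):
            suspicious.append(sta)
    return suspicious

# Example: six receivers, one of them with an outlier in its residual.
rng = np.random.default_rng(4)
A = rng.normal(size=(6, 3)); A /= np.linalg.norm(A, axis=1, keepdims=True)
res = rng.normal(0.0, 0.1, 6); res[0] += 1.5
W = np.eye(6) / 0.1**2
sise_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ res)
print(receiver_residual_check(A, res, W, sise_hat, np.array(list("ABCDEF"))))  # typically flags 'A' (and possibly stations contaminated by the outlier)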

3.5. Product-Check Algorithm

The product-check algorithm is responsible for consolidating the SISMA estimates for the configured service by merging the values coming from the two measurement chains: A and B. A set of integrity barriers are also included in order to mitigate the propagation of certain input feared events to the output (see [11] for further details on these barriers).

4. First Results of IPF Algorithm Performance

Recently, the experimental IPF algorithm prototype (E-IPF) has successfully passed the functional validation, and the experimentation phase is about to start. Therefore, the first performance results of the E-IPF (with the complete functionality) and some conclusions have been obtained from this functional validation phase, using real GPS data and GPS/Galileo synthetic data simulated by the Galileo raw data generator (RDG), a software tool aimed at providing the algorithms with raw measurements that simulate the sensor station network, the satellite constellation, and the signal propagation effects. Other preliminary results, obtained with a different prototype of the IPF in the frame of the Galileo system test bed (GSTB-V1) project, may be found in [14].

The performance drivers, the expected results (obtained with a service volume simulator called the "SISMA tool"), and the first E-IPF results with real GPS data and Galileo synthetic data are provided in the following sections.

4.1. Performance Drivers

The SISMA is a parameter that describes the SISE estimation error, at user level, due to the measurement errors at GSS level. As explained above, the SISMA is computed using a weighted least-squares algorithm with biased measurement errors; therefore, for the sake of analysing the performance drivers, the SISMA computation can be expressed in a simplified way as

SISMA ≈ PDOP · SSEB(elevation angle),

where

(i) SSEB(elevation angle) is the sensor station error budget, which represents the current estimation of the final measurement residual errors; and
(ii) PDOP is the position dilution of precision of the satellite, considered as a user with respect to the GSSs in view, reflecting the geometry between the satellite and the different GSS sites viewing it and involved in the SISMA computation.

The satellite PDOP is a parameter that depends exclusively on the locations of the GSS sites; these, in turn, can be affected by the presence of "feared events," such as scintillations or GSS outages.
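In its crudest form, this driver analysis reduces to a couple of lines (a sketch assuming a single representative SSEB value for all stations in view):

import numpy as np

def approximate_sisma(A, sseb):
    """SISMA ~ PDOP x SSEB, with the PDOP computed from the unit line-of-sight
    vectors of the GSSs viewing the satellite (the satellite playing the role of
    the 'user') and a single representative SSEB value [m]."""
    Q = np.linalg.inv(A.T @ A)        # cofactor matrix of the 3D solution
    return np.sqrt(np.trace(Q)) * sseb

rng = np.random.default_rng(5)
A = rng.normal(size=(8, 3)); A /= np.linalg.norm(A, axis=1, keepdims=True)
print(approximate_sisma(A, sseb=0.30))  # e.g., 8 stations in view and a 30 cm SSEB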

The main drivers affecting the SISMA performance are shown in Algorithm 1.

Quality of the raw measurement data (mainly carrier phase noise and pseudorange multipath)
GSS synchronization error
Zenith tropospheric delay estimation error
GSS code/carrier incoherence
Presence of "feared events" such as scintillations or GSS switches
GSS network distribution and availability

Except for the last driver, which impacts the PDOP, all the others only have an impact on the SSEB.

Reference [1] provides a sensitivity analysis of the SISMA performance with respect to the GSS synchronization and ZTD estimation errors, showing that almost all the improvement that could be achieved by means of the IPF algorithm design has already been reached; thus, the factors external to the IPF become dominant, in particular the impact of ionospheric scintillations. Ionospheric scintillations degrade the quality of the ranging measurements, increasing the noise and sometimes even causing the loss of the signal, depending on the receiver architecture. The impact on the SISMA performance depends basically on two factors: the degradation of the measurement quality, which depends partially on the receiver design, and the design of the integrity algorithms. Ionospheric scintillations corresponding to a sunspot number of 250, which represents very strong ionospheric conditions, have to be considered as part of the reference conditions for performance validation.

4.2. SISMA Performance: Expected Results

Prior to the functional validation, some preliminary results were obtained with the "SISMA tool" (which applies only the SISMA equations in a hypothetical simulated environment) and with a prototype of the GSTS algorithm (which processes real GPS data), according to the latest IPF element specifications and assumptions. These results provide the IPF performance that can be expected for different system configurations. In this respect, no simplification of the SISMA computation process has been considered with respect to the proposed algorithm design.

The following scenarios were considered:

(i) Galileo service: safety-of-life (SoL);
(ii) system configuration: FOC (40 GSSs and 27 satellites, corresponding to the whole GSS network and the nominal constellation) and IOV (18 GSSs and 4 satellites, for the initial test campaign and system validation);
(iii) GSS network in its nominal state (40 GSSs) or degraded (39 GSSs, considering the failure of the GSS with the highest impact on SISMA).

As a first step, the GISM model was used to derive the degradation of the measurement quality in the presence of scintillations, but this model did not seem to provide realistic results, since it had not been calibrated with data coming from very strong scintillations. The conclusion was that all GSSs that could be affected by scintillations would be rejected, degrading significantly the PDOP and the corresponding SISMA performance. Then, more realistic conditions in terms of measurement noise, cycle slip ratio, and carrier-to-noise ratio (C/N0) were considered to simulate the impact of the scintillations. Nevertheless, this issue is still under discussion, and an update of the GISM model to overcome these limitations may be expected. According to the preliminary approach, it is expected that the IPF will be able to process most of the measurements from the GSSs under scintillations, but with higher noise, multipath, and cycle-slip error contributions. As a consequence, only one SSEB was computed for the GSSs affected by scintillations, coming from a kind of weighted average of the measurement error contributions with and without scintillations.

The following figures show the relative contributions of the main sources to the SSEB variance (σ²_SSEB), corresponding to the FOC-SoL system configuration and depending on the GSS location. The overall standard deviation (σ_SSEB) and bias (b_SSEB) of the expected SSEB will be provided in Section 4.4 ("First E-IPF Results with Galileo Synthetic Data"), in comparison with the SSEB obtained from the synthetic data.

The following figures show the relative contributions of the main sources to the SSEB standard deviation (σ_SSEB), corresponding to the IOV-SoL system configuration.

The three main contributions to the SSEB variance are the following.

(i) The smoothed iono-free pseudorange noise, with the highest contribution at low elevation angles. The ionospheric scintillations impact this contribution directly: there is a contribution to the budget derived from a higher probability of having undetected or erroneously repaired cycle slips, because of the increase in the carrier phase noise, together with the increase of the raw pseudorange measurement noise due to the decrease of the carrier-to-noise ratio (C/N0). Note that the loss of lines of sight due to cycle slip algorithm resets is not considered in this preliminary analysis.

(ii) The GSS clock synchronization error (around 16.5 cm 1-σ for FOC and 143 cm 1-σ for IOV). Although its relative weight increases with the elevation angle, the error itself does not depend on it. The ionospheric scintillations have a low impact on this contribution. In the IOV system configuration, the dominant contribution is clearly the synchronization error, which is quite degraded compared to the FOC case since only 4 satellites are available.

(iii) The tropospheric delay error (ZTD: 4.4 cm 1-σ for FOC and 11.1 cm 1-σ for IOV, both without scintillations). An inflation factor of 1.33 has been considered in the presence of scintillations, due to the degradation of the tropospheric delay estimation process caused by the increase of the carrier phase noise, which is one of the drivers of the GSTS algorithm performance. The derivation of this factor follows a complex process that is omitted for brevity, since the aim here is only a preliminary analysis of the expected performance.

As mentioned before, the last two contributions depend mainly on the GSTS performance, except for the tropospheric estimation in IOV if the blind model is used; therefore, the GSTS is the key algorithm from the point of view of the SSEB. Nevertheless, the main contribution is the smoothed iono-free pseudorange error, which depends mainly on external factors, such as the scintillations and the receiver features, rather than on the preprocessing algorithm.

The following table provides the expected a priori contributions to the SSEB that have been considered in the frame of the preliminary analysis done with the "SISMA tool." As can be seen, the dominant contribution in the FOC configuration is the smoothed iono-free code-phase error after preprocessing, while in IOV the major one is the synchronization error.

It is noted that experimentation results with the GSTS algorithm prototype on real GPS data support the assumption of negligible biases (troposphere and synchronization contributions). However, the provided bias budget (b_SSEB) comes from the fact that the off-line measurement residual error calibration process estimates the biases with a certain confidence level; therefore, this budget accounts for the contribution to the bias upper bound coming from the confidence interval.

The following table shows the different SISMA upper-bound target values and the corresponding values obtained from the previous SSEB budgets.

From Table 2, it can be seen that the SISMA upper-bound requirements are met in IOV but not in the SoL-FOC cases (both nominal and degraded). At IPF algorithmic level, both the multipath mitigation and the GSS synchronization processes are considered to be state of the art, and thus the improvements should mainly come from the provision of raw measurements with better quality, improvements in the receiver design (such as the efficiency factor), or the addition of more GSSs to the network.

Figure 4 shows the maximum SISMA that can be obtained for all potential satellite positions in the nominal SoL-FOC configuration. The highest values are obtained in areas close to the geomagnetic equator, due to the worse SSEB caused by the impact of scintillations (for additional details please refer to the analysis shown in [11]). The small white circles indicate the GSS locations.

Figure 5 shows the corresponding histograms of the actual, broadcast, and user SISMA values over all satellite positions. The influence of the broadcast SISMA on the user SISMA can be observed: the latter is similar in shape to the actual SISMA distribution, but shifted by some 0.2 meters.

Figures 6 and 7 show the maximum SISMA that can be obtained in all potential satellite positions for nominal SoL-IOV, and the associated SISMA histograms, respectively.

As can be seen, the distribution of the SISMA values in IOV is much more irregular than the one for FOC, due to the fact that the satellite visibility is sometimes quite degraded: only a few sensor stations can monitor the satellite, and with a poor geometry. Clearly, this implies that the IOV system configuration is far from optimum from the integrity monitoring point of view, although it will serve to demonstrate the system capabilities and to perform at least a functional validation of the system.

4.3. First E-IPF Results with Real GPS Data

Apart from simulations and analyses, the processing of real GPS data has been considered very important for assessing the quality of the proposed algorithms. The final clocks and orbits provided by the IGS have been taken as the reference for building the true SISE. Two real scenarios have been analysed.

(i) Real GPS FOC-like scenario: 2 days (starting on 01/05/04 at 00:00:00), 20 GSSs, 21 GPS satellites.

(ii) Real GPS IOV-like scenario: 2 days (starting on 01/05/04 at 00:00:00), 18 GSSs, 4 GPS satellites.

It is noted that only one chain and one service are considered with the real GPS data.

Real GPS FOC-Like Scenario Results
A simplified integrity analysis has been carried out to check the overbounding of the estimated SISE error by the SISMA. This analysis consists of checking whether the 68th and 95th percentile conditions on the absolute value of the estimated SISE error at the WUL (SREW) divided by the actual SISMA are satisfied. It is recalled that the non-inflated PREC is considered for integrity purposes. The results are as follows:
(i) 19 out of 24 satellites fulfil the 68th percentile ratio criterion (<1);
(ii) only 2 satellites do not fulfil the 95th percentile ratio criterion (<2), with a worst value of 2.04.

Besides, regarding the SISMA upper bound (using the nominal, inflated PREC), a significant percentage of the obtained values is higher than expected, as shown in Table 3 and Figure 8.
As can be seen, the tail of the distribution is heavier than that of the nominal FOC-SoL case shown previously in Section 4.2, caused by the low number of sensor stations that could be considered (limited to those providing 1 Hz data with high availability): 20 sensor stations compared to 40. Another important aspect to be highlighted is that, with real data, the weight of the biases is lower than expected a priori, which means that the budget analysis is pessimistic in this area.
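The simplified check used here can be reproduced in a few lines (illustrative; the inputs would be the per-epoch |SREW| and actual SISMA series of one satellite):

import numpy as np

def percentile_ratio_check(srew, sisma_actual):
    """68th/95th percentile test on |SREW| / actual SISMA: the ratios should stay
    below 1 and 2, respectively, if the SISMA overbounds the SISE estimation error."""
    ratio = np.abs(srew) / np.asarray(sisma_actual)
    p68, p95 = np.percentile(ratio, [68, 95])
    return p68 < 1.0, p95 < 2.0, (p68, p95)

# Example with a well-bounded synthetic estimation error.
rng = np.random.default_rng(6)
print(percentile_ratio_check(rng.normal(0.0, 0.5, 10000), np.full(10000, 0.7)))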

Real GPS IOV-Like Scenario Results
Table 4 shows the user SISMA at 95% per satellite for each day. All values are below the SISMA upper-bound requirement for the IOV Galileo-only configuration (6.5 m at 95%).
Although the statistics of each satellite involve only one satellite ground trace, other analyses show that the difference in the 95th percentile between the whole SISMA map and the IOV SISMA satellite traces is very small. Therefore, there is a large margin with respect to the requirement.
Figure 9 shows the time evolution of the different broadcast SISMA terms and the integrity flag status for PRN24 during the first day. The behavior of the system can be considered unstable, because of the rapid changes and discontinuities in the broadcast SISMA. This is caused by the poor geometries and by the lack of continuity of the data from the IGS sensor stations, in which data gaps are relatively frequent.
Table 5 shows the GSS synchronization error. The target is 8 nanoseconds at 95% for the IOV Galileo-only configuration, so there is also a large margin. The corresponding values for some GSSs have not been provided due to the lack of an IGS clock reference.
The results are quite homogeneous except for KIRa, whose results are notably worse than the average. This is caused by poor geometries (KIRa is close to the North Pole) and by data gaps leading to higher synchronization errors. It is also important to note that these results are far better than the a priori expectations reflected in Section 4.2.
Figure 10 shows the clock synchronization errors for one GSS (note that the x-axis reflects the week rollover).

4.4. First E-IPF Results with Galileo Synthetic Data

The following scenarios have been simulated.

(i)Galileo FOC SoL Nominal scenario: 3 days, 40 GSSs, and 27 satellites.(ii)Galileo IOV SoL scenario: 10 days, 18 GSSs, and 4 satellites.

Galileo FOC SoL Nominal Scenario
Figure 11 shows a comparison between the SSEB (or PREC) with and without scintillations considered by the "SISMA tool" to obtain the expected results, and the SSEB computed on the E-IPF platform. First of all, it is noted that there are only two types of GSS in the "SISMA tool": those without scintillations, located outside the geomagnetic equator area, and those within this area. The RDG does not have this limitation and, in principle, simulates a more realistic environment. It can be observed that the region spanned by the E-IPF SSEB standard deviation points has a pattern similar to the area enclosed between the two "SISMA tool" SSEB standard deviation lines, although it is located just below it. The apparently better E-IPF SSEB is due to the rejection of many measurements affected by the scintillations. It is noted that the E-IPF barrier configuration has not been consolidated yet; once this occurs, after the experimentation phase, more measurements will be preprocessed and the SSEB will be somewhat degraded, but the PDOP will be improved for many satellite positions.
In contrast, the SSEB bias regions are different, but of the same order of magnitude.
Table 6 and Figure 12 show the respective SISMA statistics and histograms (actual, broadcast, and user SISMA) over all satellite positions. Although the maximum value is much higher than the required one, the 95% value (0.72 m) is slightly better than the corresponding expected value (0.937 m) obtained with the "SISMA tool." These results could be expected taking into account the apparently better SSEB in the E-IPF. However, the highest user SISMA values are reached due to the rejection of many measurements at the GSSs affected by the scintillations, which increases the PDOP and thus degrades the SISMA.
Regarding the integrity verification results, the same approach as the one described for the analysis of the real GPS FOC-like scenario has been followed. The results are worse, as shown in the following:
(i) no satellite fulfils the 68th percentile ratio criterion (<1);
(ii) 16 out of 24 satellites fulfil the 95th percentile ratio criterion (<2).

Figure 13 shows the cumulative relative frequency (CDF) of the observable |SREW|/SISMA for the satellite with the worst compliance value. If the SISE estimation error had been overbounded by the SISMA, the ratio should have been lower than 1 and 2 for the 67th and 95th percentiles, respectively.
It is important to note that the integrity verification strategy will be improved in the subsequent phases of the IPF integrity algorithm consolidation, since the aim at this stage was to have a first, quick analysis. The apparent lack of integrity is basically caused by two aspects of the methodology. On the one hand, the management of biases should be consistent with the approach established in Section 2, leading to an independent verification of the bias and standard deviation terms, since their combination at user level is guaranteed. On the other hand, the instantaneous worst user location has been considered, while it would be more appropriate to consider fixed users (more realistic), check the integrity for each of them, and select the worst case.
Figure 14 shows, as an example, the distribution of the estimated SISE error at the instantaneous WUL for one satellite. As can be seen, there are almost no points above one metre in absolute value. With respect to the shape of the distribution, the bimodality is caused by the sign of the "radial + clock" component of the SISE estimation error, while the lack of full symmetry is caused by the presence of negative biases slightly larger in absolute value than the positive ones, as can be seen in Figure 11.
These results should be considered preliminary, since some improvements are still pending, mainly in the fine tuning of the developed algorithms and in the methodology used to verify the integrity.

Galileo IOV SoL Scenario
Table 7 shows the GSS synchronization error. Although the results are not as good as those of the real GPS IOV-like scenario, there is still a margin with respect to the target. On the other hand, it should be taken into account that the simulated environment is more degraded than the real one.

5. Conclusions

A description of the main IPF integrity algorithms has been provided, with special emphasis on the features related to the integrity barriers in the presence of feared events and on the implications of the Galileo user integrity concept for the algorithms, together with the derivation of the formulae for the correct management of the SISE estimation error bias in terms of its impact at user level.

The main results derived from the preliminary experimentation activities performed with the IPF algorithm software prototypes (E-IPF) using real GPS data and Galileo synthetic data are the following:

(i) from the obtained IOV results, the SISMA at 95% is lower than 3.9 m with real GPS data (while the target IOV SISMA value is 6.5 m at 95%);
(ii) an FOC SISMA value of 0.72 m at 95% was obtained with Galileo synthetic data (the FOC SISMA target is 0.7 m as a maximum); improvements are still expected as part of the algorithm fine-tuning process;
(iii) good performance of the IOV GSS synchronization error, in the order of a few nanoseconds;
(iv) the first integrity results have been provided, showing the correctness of the IPF algorithms, although additional verification activities are required.

Acknowledgment

The authors wish to thank the GMV Galileo Team who participated in the projects mentioned in this paper, since the success is a consequence of the excellent cooperation and teaming approach.