Abstract

In the modern world there is a growing demand for localization services of various kinds. Position estimation can be realized via cellular networks, especially the currently widely deployed LTE (Long Term Evolution) networks. However, it is not an easy task in the harsh propagation conditions that often occur in dense urban environments. Recently, time-based methods of terminal localization within the network have been the focus of attention, the OTDoA (Observed Time Difference of Arrival) method in particular. One of the main factors influencing the accuracy of location estimation in the OTDoA method is the nature of the propagation channel, which affects how easily the signal component travelling from the transmitter to the receiver through the shortest path can be isolated. To obtain the smallest possible localization error, it is necessary to detect the first received component of the useful signal, which can be achieved by using a proper algorithm within the receiver. This paper proposes a new algorithm for effective detection of the first component of the LTE downlink signal in a multipath environment. In the mobile terminal location estimation process, CSRS (Cell Specific Reference Signal) signals were used instead of dedicated PRS (Positioning Reference Signal) signals. The new solution was verified during a measurement campaign in a real LTE network.

1. Introduction

Nowadays, radio-localization technologies constitute a rapidly developing industrial branch. There is a whole spectrum of radio-localization services for different purposes. Information about the geographical position of a mobile device can be used for effective management of radio resources in a cellular network, e.g., by assigning more radio resources to areas with a higher concentration of mobile terminals [1]. Other applications include self-localization on user demand, searching for the closest characteristic points (pharmacies, bus stops, and shops), local advertisements, local weather forecasts, and even local connection price tariffs. Lifesaving is another, extraordinarily important, application of radio-localization. Establishing the accurate position of a person making an emergency call significantly increases the chance of providing the necessary help at the right moment [2]. Moreover, radio-localization may be an invaluable support in searches for people wanted by the police. The above examples illustrate the broad range of radio-localization services and applications.

GPS (Global Positioning System), owned by the US government, is currently the most commonly used radio-localization system. Despite its almost global coverage, there are still areas where this system does not work properly [3]. These are especially indoor environments, high-density urban areas, and dense forests, where the levels of the received signals can be too low for GPS or other GNSSs (Global Navigation Satellite Systems) to ensure correct detection and accurate position estimation. In addition, GNSS signals are sometimes disturbed by jamming devices used, for example, by enemy troops in electronic warfare. Such circumstances require a complementary system for position estimation. One possible solution is to use cellular systems, such as the recently widely deployed LTE (Long Term Evolution) system, which offers better reception quality than the GNSS systems due to much higher received power. Cellular systems also usually provide good coverage in urban and indoor locations.

One of the most promising radio-localization methods in the LTE system is OTDoA (Observed Time Difference of Arrival), in which the position is estimated on the basis of measurements of the time differences of arrival of the signals at the UE (User Equipment) antenna from a set of at least three base stations, as well as on information about the coordinates of those base stations and about their transmission timing.

One of the main factors that have an impact on the accuracy of position estimation in the OTDoA method is the propagation environment. The best accuracy can be achieved in LoS (Line of Sight) conditions, where a signal travels directly from the source to the receiver. Unfortunately, any obstacles between a transmitter and a receiver, such as buildings, hills, or even plants, make accurate position estimation much more difficult. Harsh propagation conditions commonly occur in dense urban areas, where multipath phenomena and NLoS (Non Line of Sight) conditions cause deep signal fading and also hinder accurate measurements of the time of arrival of the signal [4]. Firstly, the direct path between a transmitter and a receiver is obstructed in NLoS conditions. In time-based radiolocation systems this results in a measurement error depending on the difference between the length of the real signal propagation path and the hypothetical direct path. Secondly, in the case of multipath propagation, the receiver should base the position estimation on the signal components which arrived first at the antenna connector. Unfortunately, this is often a difficult task, particularly in NLoS conditions. In LoS conditions, in which the first component of the received signal is commonly the dominant one, determining the first component simply comes down to choosing the maximum-power component [5]. In NLoS conditions, however, it often happens that the first received component does not have the highest power, especially in urban areas [6], which is reflected in the PDPs (Power Delay Profiles) of radio communication channels defined, for instance, by the WINNER II project [7]. In that case, the approach described for the LoS conditions may cause significant position estimation errors. That is why more sophisticated algorithms are required to detect the first component of the received signal.

Of course, despite the above-mentioned problems, many examples can be found in the literature which deal with the detection of LoS/NLoS propagation conditions in the radio link [8–12]. The first of them [8] proposes a method for NLoS identification based on the space-time-frequency channel correlation features of multi-input multi-output (MIMO) systems. The identification process is based on the idea that the correlation of NLoS components approaches zero as the space and time separation increase, while the absolute value of the LoS component correlation remains constant for any space and time separation. This solution is not suitable for implementation in the present case due to its reliance on multiantenna systems. In [9], the authors formulated the NLoS identification problem as a binary hypothesis test exploiting Rician factor estimation. Unfortunately, this solution addresses the problem of identifying whether a signal received at a base station arrived via LoS transmission, i.e., according to the authors, it is dedicated to implementation in base stations, whereas in the OTDoA method discussed above all measurements are made by the UE and reported to the network. In turn, in [10] the authors propose the use of nonparametric machine learning techniques to perform NLoS identification and NLoS mitigation for an ultra-wideband (UWB) interface. The proposed method is based on the following observations: in NLoS conditions, signals are considerably more attenuated and have smaller energy and amplitude; in LoS conditions, the strongest path of the signal typically corresponds to the first path, while in NLoS conditions weak components typically precede the strongest path, resulting in a longer rise time; and the root mean square (RMS) delay spread, which captures the temporal dispersion of the signal’s energy, is larger for NLoS signals. This solution is difficult to adapt to an OFDM (Orthogonal Frequency Division Multiplexing) interface due to the difficulty of measuring the delay spread. Another solution was presented in [11]. This time, the authors focused on the CDMA (Code Division Multiple Access) system. For the detection of the LoS/NLoS condition, the authors propose to adopt a normalized version of the Rayleigh-ness test, which is self-tunable with respect to the power of the received signal. This test is performed in the process of synchronization of the receiver with the code sequence. For obvious reasons, i.e., the different type of air interface, this solution is not directly transferable to the LTE system. The problem of identifying NLoS propagation conditions in [12] was solved by applying statistical decision theory. A joint use of the time-of-arrival (ToA) and received signal strength (RSS) methods led to an increased probability of detection. The RSS method uses an empirically developed path loss model to determine the propagation distance between the transmitter and the receiver and is typically available in wireless communication networks. Unfortunately, in the LTE system the ToA measurement can be performed (in the form of the round-trip time) only between the UE and the serving base station. Therefore, it is not possible to use this method to determine the propagation conditions for all base stations involved in the location process.

This paper is organized as follows: Section 2 describes a new algorithm for effective detection of the first component of the LTE downlink signal in a multipath environment; the next two sections present the simulation model and the simulation results along with preliminary measurement results, respectively. Finally, the last section concludes the paper.

2. Proposed Algorithm

The literature presents several different algorithms for detecting the arrival of a signal. The algorithm presented in [13] makes a parabolic interpolation around the maximum of the correlation function between the received signal and the pattern signal stored at the receiver. The time of signal arrival is assumed to be the position of the maximum of this parabolic function. This algorithm, however attractive because of its low computational complexity, is usually not suitable for detecting the first component of the signal, since the first component can lie at a significant distance from the maximum one [14]. Another approach was presented in [15], in which the algorithm detects the first signal component that is above the level of -30 dB relative to the maximum component power. If the dynamic range of the PDP is less than 30 dB, the first component that is significantly above the noise level is chosen. However, there is no explanation of what “significantly above the noise level” means. Another algorithm, designed for OFDM signals, is presented in [16]. In this case, the algorithm first performs a correlation between the received signal and the pattern signal stored at the receiver side and then restricts the search area to the size of a CP (Cyclic Prefix). This operation is carried out with a sliding window of CP size. For each shift of the window, the algorithm sums up all components of the correlation function within this window and then selects the highest value from the set of all shifts. This operation makes it possible to reliably determine the area in which the signal was received and eliminates large errors in searching for the first signal component. Finally, the first component of the received signal is chosen as the first correlation component within the chosen area for which the SIR (Signal to Interference Ratio) value is above -13 dB. Another approach, suitable for position estimation in LTE networks, was presented in [17]. In this solution, the time of arrival of the signal is detected in two steps. Firstly, coarse time synchronization (with an accuracy of at least half a CP) is performed with the signals transmitted from every base station taking part in the localization process. Secondly, fine synchronization is conducted in four tasks: estimation of the CFR (Channel Frequency Response) for the chosen REs (Resource Elements) containing reference symbols; crosscorrelation calculation of the CFR pairs referenced to REs that are adjacent in the time-frequency resource grid (assuming that the CFR for two neighboring REs is identical, the correlation phase is linearly proportional to the required time delay); averaging of the results over all subframes containing reference symbols; and finally calculation of the time delay from the averaged phase of the crosscorrelation function.

When studying the subject literature, one can find two more interesting proposals that directly relate to the detection of the first LTE signal path [18, 19]. In [18] a first-peak estimation algorithm for the channel impulse response (CIR), in which the threshold adapts to the environmental noise, was described. To improve the probability of first-peak detection, an integrated squared envelope of the CIR over different slots and different antennas within one frame duration was used. The proposed method was tested in conditions similar to real ones: LTE base stations were emulated using software defined radio modules.

In turn, in [19] a maximum likelihood estimator in the time domain with an adaptive threshold was used to estimate the time of arrival of the LTE signal. The effectiveness of that method has been tested only by simulation.

In this paper, the authors propose a simple algorithm for measuring the time of arrival of the LTE downlink signal, based on an adaptive decision threshold for the crosscorrelation function. The solution was verified during a measurement campaign in a real LTE network.

Special reference signals called PRS (Positioning Reference Signal) were designed for the LTE OTDoA method. These signals are placed in the radio resource grid with variable settings of many parameters, i.e., bandwidth, time duration, and the repetition period [20]. However, transmission of the PRS signals in the downlink channel results in an increased load on the limited radio resources. That is why operators are often reluctant to implement the PRS signals. Therefore, in an LTE network without dedicated PRS signals, it is possible to use SoO (Signal of Opportunity) signals. Such signals in the LTE network can be the CSRS (Cell Specific Reference Signal) signals, used for various downlink measurements, such as the RSRP (Reference Signal Received Power), and also for the demodulation process through estimation of the parameters of the radio communication channel [21]. In this paper, all analyses are based on the CSRS signals. However, they could easily be extended to the PRS signals, with the only difference being the number of OFDM symbols occupied in the resource grid: the PRS signals occupy slightly more symbols in order to improve their quality of reception.

The proposed algorithm is based on setting a threshold in the correlation function Γ(m) between the received signal r(n) and the reference pattern signal x_ref(n) that is generated at the receiver:

\Gamma(m) = \sum_{n=0}^{N_c - 1} r(n + m)\, x_{ref}^{*}(n),

where (·)* stands for the complex conjugate operation and N_c is the correlation length. The signal r(n) is a discrete form of the received continuous signal r(t), namely, the sum of the attenuated and delayed copies of the transmitted signal x(t) plus noise and interferences w(t):

r(t) = x(t) \ast h(t) + w(t),

where \ast is the linear convolution operation and h(t) is the impulse response of the channel, defined as

h(t) = \sum_{l=1}^{L} \alpha_l\, \delta(t - \tau_l),

in which L is the number of signal components arriving at the receiver antenna, α_l is a complex factor defining the attenuation and phase change of the received component due to the effect of the radio communication channel, and τ_l is the delay of the l-th signal component.
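
The following minimal Python/NumPy sketch illustrates these definitions under simplified, assumed conditions: a hypothetical pseudo-random reference pattern stands in for the CSRS-based x_ref(n), the channel has three arbitrarily chosen taps, and the noise level is made up. It only shows how Γ(m) is evaluated against a received signal formed as the convolution of the transmitted signal with the channel impulse response plus noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference pattern (in the algorithm this is the CSRS-based x_ref(n)).
N_c = 2048                                     # correlation length (samples)
x_ref = (rng.choice([-1, 1], N_c) + 1j * rng.choice([-1, 1], N_c)) / np.sqrt(2)

# Discrete multipath channel: L = 3 components with complex gains and sample delays.
delays = np.array([40, 55, 90])                # assumed delays; first path at sample 40
gains = np.array([0.4, 1.0, 0.5]) * np.exp(1j * rng.uniform(0, 2 * np.pi, 3))
h = np.zeros(128, dtype=complex)
h[delays] = gains

# Received signal r(n): convolution of the reference with the channel plus AWGN.
n_out = N_c + h.size - 1
noise = 0.05 * (rng.standard_normal(n_out) + 1j * rng.standard_normal(n_out))
r = np.convolve(x_ref, h) + noise

# Cross-correlation Gamma(m) = sum_n r(n + m) * conj(x_ref(n)) over the lag domain.
lags = np.arange(r.size - N_c + 1)
gamma = np.array([np.vdot(x_ref, r[m:m + N_c]) for m in lags])

print("lag of |Gamma| maximum:", np.argmax(np.abs(gamma)))   # near the strongest path (lag 55)
```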

Determination of the detection threshold allows detection of the components that have enough power to be regarded as a representation of the useful signal. However, TDoA (Time Difference of Arrival) measurements based on threshold crossing will return distance differences suitable for position estimation only in the case of signals with equal bandwidths. When considering signals with different bandwidths (Figure 1), detection of the arrival of the signals based on threshold crossing introduces variable time measurement errors due to the variation in the width of the mainlobe of the correlation function. The solution to this issue could be finding the local maximum of the crosscorrelation function that lies above the threshold, although the search area has to be restricted to half of the mainlobe width after the first component has exceeded the threshold. This restriction is necessary to avoid situations where consecutive correlation components have an increasing power level. In such a case, the local maximum above the threshold would be the global maximum, which could be far from the actual position of the first signal component (see the example in Figure 2).

It should be noted that there are several reference signal bandwidths defined in the LTE system, i.e., 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz, and 20 MHz. For the smaller bandwidths, i.e., 1.4 MHz, 3 MHz, and 5 MHz, the broad mainlobe of the correlation function (see Figure 1 again) results in a low resolution of the time measurement. In that case, an algorithm is proposed which detects the maximum-value component and assumes its position to be the moment of arrival of the signal propagated through the shortest path between the transmitter and the receiver. In contrast, for the larger defined signal bandwidths, i.e., 10 MHz, 15 MHz, and 20 MHz, it is proposed to determine a variable detection threshold and then to find the local maximum of the correlation function above the threshold (with the restriction on the search area described previously). The maximum found in this way is assumed to represent the first component of the received signal.

These considerations led to a formalized rule for finding the position m_FP of the correlation function component that is assumed to represent the first received LTE reference signal component:

m_{FP} = \arg\max_{m \in \langle m_{th},\, m_{th} + W_h \rangle} |\Gamma(m)|,

where W_h represents the length of the first-path-component searching interval (in samples), equal to the number of samples in half of the crosscorrelation function mainlobe width, which depends on the reference signal bandwidth (see Table 1), and m_th is the first argument for which |Γ(m)| exceeds the detection threshold Γ_th (defined below), counting from the beginning of the correlation domain. The size of the domain depends, among other things, on the a priori RSTD (Reference Signal Time Difference) estimation uncertainty (e.g., for the RSTD estimation based on the cell size or TA (Timing Advance) measurements).
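
A compact sketch of this selection rule is given below (Python/NumPy). The inputs are the magnitudes of Γ(m) over the search domain, the detection threshold, and the half-mainlobe width W_h; the wideband/narrowband switch follows the bandwidth-dependent behaviour described above, while the fallback to the global maximum when no sample exceeds the threshold is an added assumption, not taken from the paper.

```python
import numpy as np

def first_path_index(gamma_abs, threshold, w_half, wideband=True):
    """Return the index m_FP taken as the first received signal component.

    gamma_abs : |Gamma(m)| over the search domain (1-D array)
    threshold : detection threshold Gamma_th
    w_half    : half of the correlation mainlobe width in samples (Table 1)
    wideband  : True for 10/15/20 MHz signals, False for 1.4/3/5 MHz
    """
    if not wideband:
        # Narrowband case: the broad mainlobe gives poor resolution, so the
        # maximum-value component is taken as the arrival via the shortest path.
        return int(np.argmax(gamma_abs))

    # Wideband case: locate the first sample exceeding the threshold ...
    above = np.flatnonzero(gamma_abs > threshold)
    if above.size == 0:
        return int(np.argmax(gamma_abs))       # assumed fallback: nothing above threshold
    m_th = int(above[0])

    # ... then search for the local maximum only within half a mainlobe after it,
    # so that a later, stronger path cannot pull the estimate away from the first one.
    stop = min(m_th + w_half + 1, gamma_abs.size)
    return m_th + int(np.argmax(gamma_abs[m_th:stop]))
```

Called with np.abs(gamma) from the earlier sketch and a threshold set below the first-path level, the function should return a lag near 40 (the first path) rather than 55 (the strongest one).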

The shape of the correlation function of the received OFDM signals depends on many factors, especially the presence of noise and interferences, the distribution of the received useful signal components (their relative delays and attenuations), the correlation length, and also the signal bandwidth. These factors make the task of threshold setting difficult. This paper proposes calculating the detection threshold as the function

\Gamma_{th} = F_p(P_s)\, \Gamma_N,

where the coefficient F_p depends on the parameter P_s defined further below, and Γ_N is an estimate of the level of the undesired components in the correlation function that stem from noises/interferences, defined as

\Gamma_N = k\, \Gamma_{avg},

where k is a parameter identified by the Bienaymé-Chebyshev inequality [22], which imposes an upper limit on the probability of an occurrence of any value of a random variable. For the adopted value of k, the probability of an occurrence of any undesired component beyond the value k·Γ_avg in the correlation function is small. The quantity Γ_avg is an average of the components located more than half of the mainlobe width away from the maximum value, in order to omit the useful signal components:

\Gamma_{avg} = \frac{1}{|\mathcal{M}_u|} \sum_{m \in \mathcal{M}_u} |\Gamma(m)|, \qquad \mathcal{M}_u = \{ m : |m - m_{max}| > \lceil W_h \rceil \},

where m_max is the position of the maximum of |Γ(m)| and ⌈·⌉ is the rounding-up function. The parameter P_s is defined as the ratio (in dB) between the maximum component of the correlation function and Γ_avg:

P_s = 10 \log_{10} \frac{|\Gamma(m_{max})|}{\Gamma_{avg}}.

The parameter P_s expresses the possibility of separating those components of the correlation function which represent the useful signal (or at least the component with the maximum power) from the components caused by interferences and noises present in the radio communication channel. The higher the P_s value, the better this possibility.
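
The quantities Γ_avg, Γ_N, and P_s can be estimated as in the sketch below (Python/NumPy). The exact expressions used in the paper are not reproduced here; in particular, the value k = 3 and the dB form of P_s are illustrative assumptions consistent with the description above.

```python
import numpy as np

def undesired_level_and_separability(gamma_abs, w_half, k=3.0):
    """Estimate Gamma_avg, Gamma_N = k * Gamma_avg, and the separability P_s [dB].

    gamma_abs : |Gamma(m)| over the correlation domain
    w_half    : half of the mainlobe width in samples (Table 1)
    k         : Bienayme-Chebyshev scaling factor (illustrative value)
    """
    m_max = int(np.argmax(gamma_abs))

    # Average of the components located more than w_half away from the maximum,
    # so that the useful-signal mainlobe is excluded from the estimate.
    mask = np.abs(np.arange(gamma_abs.size) - m_max) > w_half
    gamma_avg = float(np.mean(gamma_abs[mask]))

    gamma_N = k * gamma_avg                              # undesired-component level
    p_s = 10.0 * np.log10(gamma_abs[m_max] / gamma_avg)  # separability parameter [dB]
    return gamma_avg, gamma_N, p_s
```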

The CSRS signals from the LTE downlink, proposed for position estimation instead of the dedicated PRS signals, are continuously repeated in the time-frequency resource grid. Therefore, the correlation length N_c is unrestricted. The higher the value of this parameter, the bigger the computational effort that has to be made to calculate the correlation function, which naturally takes more time. On the other hand, a greater correlation length increases the probability of correct separation of the reference signal from interferences and noises. In our work, a correlation length spanning multiple subframes was used, which ensures high P_s values even when the SINR (Signal to Interference and Noise Ratio) is limited. This is essential for detecting the arrival time of a signal within nonsynchronized networks (which are commonly used nowadays), due to the high level of mutual interferences, especially when a terminal is in close proximity to one base station (then the signals from other base stations are subjected to intense interferences).

For a specified value of k, the level Γ_N is fully determined by the measured correlation function, and the expression defining the detection threshold reduces to

\Gamma_{th} = F_p(P_s)\, \Gamma_N.

A dedicated function model, denoted LS_Fp(P_s), was adopted to determine the form of the coefficient F_p(P_s).

The threshold value of detection is therefore determined by the level of undesired components Γ_N, which stems from interferences and noises, weighted by the coefficient F_p(P_s). Thus, in order to set the proper detection threshold, it is necessary to establish the form of the coefficient function F_p(P_s). This was achieved through the simulation described briefly in the next section.

3. Simulation Model

The simulation model was designed and implemented in order to establish the form of the F_p(P_s) function. The model consists of three main elements, i.e., the model of the OFDM LTE transmitter, the model of the receiver, and the model of a multipath radio communication channel. Downlink LTE signal samples were generated at the transmitter model side with QPSK (Quadrature Phase Shift Keying) modulated data symbols and the CSRS reference symbols. All symbols within the time-frequency grid have equal power. The signal from the transmitter model was then filtered with a FIR (Finite Impulse Response) filter with variable parameters (Figure 3), which represents the propagation channel (a tap-delay model).
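
A simplified sketch of the transmitter part of such a model is shown below (Python/NumPy). The grid dimensions, reference-symbol pattern, FFT size, and cyclic prefix length are illustrative stand-ins and do not reproduce the exact LTE numerology or the true CSRS mapping.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sc, n_sym, n_fft, n_cp = 600, 14, 1024, 72     # illustrative grid / FFT / CP sizes

def qpsk(n):
    bits = rng.integers(0, 2, (n, 2))
    return ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

# Time-frequency grid: QPSK data everywhere, known reference symbols on every
# 6th subcarrier of two OFDM symbols (a rough stand-in for the CSRS pattern).
grid = qpsk(n_sc * n_sym).reshape(n_sc, n_sym)
ref_sc = np.arange(0, n_sc, 6)
for sym in (0, 7):
    grid[ref_sc, sym] = qpsk(ref_sc.size)        # all symbols keep equal power

# OFDM modulation: map subcarriers around DC, IFFT each symbol, prepend the CP.
symbols = []
for sym in range(n_sym):
    spec = np.zeros(n_fft, dtype=complex)
    spec[1:n_sc // 2 + 1] = grid[n_sc // 2:, sym]    # positive-frequency subcarriers
    spec[-(n_sc // 2):] = grid[:n_sc // 2, sym]      # negative-frequency subcarriers
    t = np.fft.ifft(spec) * np.sqrt(n_fft)
    symbols.append(np.concatenate([t[-n_cp:], t]))

tx = np.concatenate(symbols)                     # baseband downlink signal x(n)
```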

The branches of the filter correspond to the consecutive, properly delayed and attenuated components of the transmitted signal that propagate through different paths between the transmitter and the receiver due to multipath phenomena. The time delay values were generated according to a random process defined as

\tau_i = \mathcal{N}(\bar{\tau}_i, \sigma_{\tau}),

where N(·) is a realization of a normal distribution, \bar{\tau}_i is the i-th average delay of a signal component in accordance with the static characteristics of the channel defined by the 3GPP (3rd Generation Partnership Project) [23], and σ_τ is a standard deviation corresponding to a propagation route spread of 20 m [24]. In the case of a realization of this process in which the value is less than zero, it is assumed that τ_i = 0. The attenuation values a_i were set by a polynomial interpolation of the static characteristic of the 3GPP ETU channel [25]. The interfering signal I = \sum_{s=1}^{4} I_s, originating from 4 adjacent cells, is added to the sum of the signals from the branches, where I_s is the interference from the s-th adjacent cell. Each interfering signal is created in the same manner as the useful signal, passes through the same multipath channel model (without noise and an interference component), and has an identical bandwidth. It is important that all the interfering signals are generated with different values of the physical cell ID, which causes the CSRS symbols from different cells to occur at different positions in the resource grid. This implies that, ideally, the CSRS signals originating from different cells are mutually orthogonal. In addition, the interfering signals are shifted in time in various manners and have different powers in relation to the useful signal, as presented in Table 2. The values were chosen to take into account the lack of synchronization between transmitters in real LTE networks and also the different distances from the interfering stations to the mobile terminal whose position is to be estimated via the OTDoA measurements. Finally, the AWGN (Additive White Gaussian Noise) signal w(n), with a power 10 dB lower than the sum of the powers of the interfering signals, is added to the sum of the useful and interfering signals, giving the signal at the output of the channel:

r(n) = \sum_{i} a_i\, x(n - n_i) + \sum_{s=1}^{4} I_s(n) + w(n),

where n_i is the delay of the i-th branch expressed in samples.
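
The channel part can be sketched as follows (Python/NumPy). The listed ETU tap delays and relative powers follow the nominal 3GPP static profile, while the random tap phases, the 20 m route spread, the interferer offsets/powers (standing in for Table 2), and the function interface are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 30.72e6                                     # nominal LTE sampling rate [Hz]
c = 3.0e8                                        # speed of light [m/s]

# Nominal ETU tap delays [ns] and relative powers [dB] (3GPP static profile).
etu_delay_ns = np.array([0, 50, 120, 200, 230, 500, 1600, 2300, 5000], float)
etu_power_db = np.array([-1, -1, -1, 0, 0, 0, -3, -5, -7], float)

# Randomize each tap delay around its average with a standard deviation
# corresponding to a 20 m route spread; negative draws are clipped to zero.
sigma_tau = 20.0 / c
tau = np.maximum(rng.normal(etu_delay_ns * 1e-9, sigma_tau), 0.0)

# Tap-delay-line (FIR) channel: complex gains following the ETU power profile.
tap_idx = np.round(tau * fs).astype(int)
tap_gain = 10 ** (etu_power_db / 20) * np.exp(1j * rng.uniform(0, 2 * np.pi, tau.size))
h = np.zeros(tap_idx.max() + 1, dtype=complex)
np.add.at(h, tap_idx, tap_gain)                  # taps hitting the same sample add up

def channel(tx, interferers):
    """Pass tx through the multipath channel, then add interfering cells and AWGN.

    interferers: list of (waveform, sample_offset, power_dB_vs_useful) tuples,
    standing in for the four adjacent-cell signals of Table 2.
    """
    rx = np.convolve(tx, h)[:tx.size]
    p_int = 0.0
    for sig, shift, rel_db in interferers:
        scaled = 10 ** (rel_db / 20) * np.roll(sig, shift)[:tx.size]
        p_int += np.mean(np.abs(scaled) ** 2)
        rx = rx + scaled
    # AWGN with power 10 dB below the summed interference power, as in the model.
    p_noise = p_int / 10.0 if p_int > 0 else 1e-6
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(rx.size)
                                    + 1j * rng.standard_normal(rx.size))
    return rx + noise
```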

Crosscorrelation between the received signal and the reference pattern signal is performed at the receiver model side.

Determination of the F_p(P_s) function was carried out by setting the detection threshold 10% above the highest undesired component in the correlation function for consecutive values of P_s.

Figure 4 illustrates the above method, while the simulation results are presented in Figure 5. The analysis of the obtained values leads to the following conclusions:
(i) The F_p values may be upper-bounded by the LS_Fp(P_s) curve independently of the bandwidth of the signal.
(ii) For low P_s values (< 8 dB), F_p is approximately constant and equals 1.35. This refers to the situation in which the components that represent interferences and noises are dominant in the correlation function. Then, a good estimator of the threshold value is 1.35·Γ_N, set by the Bienaymé-Chebyshev inequality.
(iii) For larger P_s values (≥ 8 dB), the sidelobes of the correlation function start to exceed the components representing interferences and noises. Therefore, it is necessary to increase the threshold value above 1.35·Γ_N so that the threshold is placed above the level of the sidelobes.
(iv) The differences in the F_p values for a given P_s result from the various distributions of the received signal components (different relative delays and powers) at subsequent simulation realizations.

The function LS_Fp(P_s) was determined through the least squares method [26]. Finally, the threshold value can be written as

\Gamma_{th} = LS_{Fp}(P_s)\, \Gamma_N.
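
The way such a least-squares model and the resulting threshold can be obtained is sketched below (Python/NumPy). The (P_s, F_p) pairs are synthetic placeholders, not the values of Figure 5, and the quadratic model clipped at 1.35 is only an assumed form; the actual LS_Fp(P_s) coefficients of the paper are not reproduced.

```python
import numpy as np

# Placeholder simulation outcomes: pairs (P_s [dB], required F_p), synthetic only.
p_s = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0])
f_p = np.array([1.35, 1.35, 1.36, 1.45, 1.60, 1.80, 2.05, 2.35])

# Least-squares fit of a low-order polynomial model for F_p(P_s); the flat region
# at 1.35 for low P_s is preserved by clipping the fitted curve from below.
coeffs = np.polyfit(p_s, f_p, deg=2)

def ls_fp(ps):
    return np.maximum(np.polyval(coeffs, ps), 1.35)

def detection_threshold(p_s_measured, gamma_N):
    # Adaptive threshold: Gamma_th = LS_Fp(P_s) * Gamma_N.
    return ls_fp(p_s_measured) * gamma_N
```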

4. Results and Discussion

This section consists of two subsections, simulation validation and algorithm verification, and compares the measurements obtained in a real urban environment with the simulation results.

4.1. Simulation Validation

The city of Gdansk in Poland served as the urban environment in which the measurements were conducted in order to validate the simulation results. The SDR (Software Defined Radio) platform USRP-2920 received the downlink signals from the LTE network [27], downconverted them to baseband, converted them from analog to digital form, and sent the samples to a PC (Personal Computer). Then, after initial digital filtering, the physical layer cell identity was obtained [28, 29] through detection of the PSS (Primary Synchronization Signal) and SSS (Secondary Synchronization Signal) symbols. At this stage, it was possible to determine both the form and the position of the CSRS signals in the radio resource grid and thus generate the reference pattern signal at the receiver. The next step was the correlation between the received signal and the reference pattern signal and, finally, the realization of the procedure for the evaluation of the F_p(P_s) function described in the previous section. Figure 6 presents the obtained results, which show a high convergence of the simulation and measurement results and confirm the proper configuration of the simulation environment.

4.2. Algorithm Verification

Verification of the efficiency of the developed algorithm was conducted via a measurement campaign carried out in an existing LTE urban scenario. The measurements were made in the Gdansk-Oliwa district, which has a dense grid of low-rise houses of various sizes. There are also over a dozen taller, even ten-story, blocks of flats that can effectively reflect and disperse radio signals. The downlink signals from three base stations were captured and processed during the measurements in order to estimate the location of the terminal.

As described earlier, after the synchronization phase with the PSS and the SSS signals, it was possible to detect the physical cell identity of every base station participating in the localization process and to construct the pattern signal within the receiver equipment. Then the correlation Γ(m) was computed and the proposed algorithm was applied to determine the times of arrival of the CSRS signals from the base stations in question. Finally, the Chan algorithm [30] was used to estimate the location of the receiver. It should be noted here that the estimations were hampered by the lack of synchronization between the base stations in the examined network. The reference signals transmitted from different base stations were shifted in time in relation to each other, with these shifts also changing over time. Therefore, it was necessary to make initial measurements with the receiver located in selected places with known coordinates in order to determine the functions describing the mutual time shifts at each moment of the measurement campaign. After that, several tens of position estimation measurements were carried out in different places in the area between the three LTE base stations.
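
The paper uses the closed-form Chan algorithm [30] for this last step. As a simpler, hedged illustration of how a position is obtained from the measured time differences, the sketch below solves the same hyperbolic TDoA problem with a Gauss-Newton least-squares iteration instead (station coordinates and values are made up).

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def tdoa_position(bs_xy, tdoa_s, x0, iters=20):
    """Gauss-Newton least-squares TDoA solver (a simpler stand-in for the
    closed-form Chan algorithm [30] used in the paper).

    bs_xy  : (N, 2) base-station coordinates, bs_xy[0] is the reference station
    tdoa_s : (N-1,) measured time differences t_i - t_0 in seconds
    x0     : (2,) initial position guess (e.g., the serving-cell site)
    """
    x = np.asarray(x0, float)
    d_meas = C * np.asarray(tdoa_s)                  # measured range differences [m]
    for _ in range(iters):
        d = np.linalg.norm(bs_xy - x, axis=1)        # distances to all stations
        resid = (d[1:] - d[0]) - d_meas              # model minus measurement
        # Jacobian of (d_i - d_0) with respect to the position x.
        J = (x - bs_xy[1:]) / d[1:, None] - (x - bs_xy[0]) / d[0]
        step, *_ = np.linalg.lstsq(J, -resid, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-3:
            break
    return x

# Example: three stations, a true position, and noiseless TDoA measurements.
bs = np.array([[0.0, 0.0], [1500.0, 0.0], [0.0, 1500.0]])
true_pos = np.array([400.0, 650.0])
d_true = np.linalg.norm(bs - true_pos, axis=1)
tdoa = (d_true[1:] - d_true[0]) / C
print(tdoa_position(bs, tdoa, x0=np.array([700.0, 700.0])))   # approx. [400, 650]
```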

Figure 7 presents the measurement results in the form of a cumulative distribution function of terminal location estimation error. The error was calculated as a difference between position estimates obtained on the basis of the OTDoA measurements of the LTE downlink signals using the proposed adaptive decision threshold and the reference coordinates from a GPS receiver.
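
For reference, the sketch below shows how such a cumulative distribution and the percentile figures quoted in the next paragraph can be computed from a set of position fixes (Python/NumPy); the error values listed are placeholders, not measurement data.

```python
import numpy as np

# Placeholder horizontal errors [m] between OTDoA estimates and GPS references.
errors = np.array([12.0, 18.5, 22.0, 31.0, 38.0, 44.0, 52.0, 61.0, 73.0, 95.0])

# Empirical CDF: sort the errors and assign cumulative probabilities.
err_sorted = np.sort(errors)
cdf = np.arange(1, err_sorted.size + 1) / err_sorted.size

# Accuracy figures of the kind reported in Section 4.2.
print("50% of cases below", np.percentile(errors, 50), "m")
print("90% of cases below", np.percentile(errors, 90), "m")
```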

The results of our study show that the use of the larger bandwidths, i.e., 10 MHz, 15 MHz, and 20 MHz, to which the proposed algorithm is especially dedicated, provides location estimation accuracies better than 100 m in 90% of the cases and better than 40 m in 50% of the cases in an urban environment.

It is worth mentioning that the measurements of the efficiency of the proposed method were made for one configuration of base stations. It is well known that mobile terminal position errors, in addition to the detection efficiency of the first component of the received LTE signal, also depend on the geometric dilution of precision (GDoP). This problem has been thoroughly analysed in [31]. A bad geometry seriously degrades the positioning accuracy, and the situation becomes even worse when the time measurement errors increase. An unquestionable advantage of the proposed method of signal detection is the lack of ambiguity of the result in comparison with the proposal in [13]. The method described in [13] fails for channels characterized by a long impulse response with several local extremes, which is frequently observed in NLoS conditions, or when several consecutive pulses overlap without clearly visible gaps between them. One may argue that it is still possible to use the parabolic interpolation presented in [13] on the first component of an NLoS channel impulse response to get a better estimate of the time of arrival, but this may not be a trivial task. We discuss it using the exemplary channel impulse responses presented in Figure 8.

When the first component of the received signal has the highest power (Figure 8(a)), the method presented in [13] gives a good estimate of the position of the peak of the correlation function. However, in a typical urban environment NLoS propagation conditions should be expected, with the first component not being the one with the highest power (Figure 8(b)). As long as there is a significant separation between the first and the next components of the received signal, it may be possible to apply the method presented in [13] for fine estimation of the time of the first local maximum of the correlation function. In that case, the variable-threshold-based method described in this paper may be used for coarse selection of the samples of the first component of the NLoS signal. However, in some cases several components of the received signal may overlap in time, creating one pulse with a long duration instead of a series of separated pulses (Figure 8(c)). Even a correct selection of the first local maximum of the correlation function will then cause a significant error in the time estimation by the method from [13], which does not occur when threshold-based detection is used. Therefore, the solution proposed in this paper may be used to overcome the drawbacks of the parabolic interpolation method from [13].

Taking into account the factors that could cause additional errors in location estimation, such as the lack of network synchronization and the need to perform a complex procedure of determining the reference signal time shifts, or the lower sampling frequency of the SDR platform (25 MHz) compared to the nominal sampling frequency in the LTE system (30.72 MHz), the obtained results are promising and prove the possibility of implementing OTDoA-based localization services in the LTE network with the proposed algorithm. It should be emphasized that the results were obtained without using the PRS signals dedicated to the OTDoA method, because they were not present in the downlink signals of the tested LTE network. Using the dedicated PRS signals could improve the quality of position estimation and thus the whole radio-localization process.

5. Conclusions

This paper has examined the OTDoA localization method in the LTE system. The considerations were focused on detection of the arrival of the signal in an urban environment due to its challenging propagation conditions. The analysis of the downlink LTE signal structure and the results of the crosscorrelation of signals at the receiver side made it possible to propose a new algorithm of adaptive selection of the detection threshold for the TDoA measurements. The threshold is adjusted dynamically, according to the current propagation conditions. It considers the power of the received signal, the level of noise and interferences, the signal bandwidth, and the correlation length. The function describing the threshold value was proposed on the basis of the designed and implemented simulation model, which was then validated experimentally through measurements in a real LTE network.

Finally, the algorithm was implemented into the SDR platform and measurements were performed in a real urban LTE network to verify its efficiency in a real-life localization process.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been carried out as part of the project entitled “Innovative method of aircraft location in the diffused VCS system”, co-financed under the European Regional Development Fund within the Smart Growth Operational Program, agreement no. POIR.04.01.04-00-0032/16 on 13/07/2017.