Mobile Information Systems
Volume 2017, Article ID 5407605, 11 pages
Research Article

Feasibility Study of Using Mobile Laser Scanning Point Cloud Data for GNSS Line of Sight Analysis

1Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute, 02431 Masala, Finland
2GNSS Research Center, Wuhan University, Wuhan, Hubei, China
3School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
4Department of Real Estate, Planning and Geoinformatics, Aalto University, Espoo, Finland

Correspondence should be addressed to Jian Tang; tangjian@whu.edu.cn

Received 1 September 2016; Revised 7 December 2016; Accepted 12 February 2017; Published 5 March 2017

Academic Editor: Gonzalo Seco-Granados

Copyright © 2017 Yuwei Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Supported by advanced GNSS technologies, positioning accuracy under good GNSS observation conditions can easily reach centimetre level. However, offering a robust GNSS-based positioning solution in a GNSS-degraded area remains a challenge. The concept of GNSS shadow matching has been proposed to enhance GNSS positioning accuracy in city canyons, where nearby high buildings block part of the GNSS radio frequency (RF) signals. However, its results rely on the accuracy of the ready-made 3D city model that is utilized. In this paper, we investigate a solution that generates a GNSS shadow mask from mobile laser scanning (MLS) point cloud data. The solution includes removal of noise points, detection of objects that only attenuate the RF signal, extraction of the highest obstruction points, and finally the angle calculation for GNSS shadow mask generation. By analysing the data with the proposed methodology, we conclude that MLS point cloud data can be used to extract the GNSS shadow mask, after several processing steps that filter out the hanging objects and the plantings, without generating an accurate 3D model; the resulting mask depicts the boundary of GNSS signal coverage in city canyon environments more precisely than traditional 3D models.

1. Introduction

GNSS plays an important, pervasive role in current navigation applications. However, the accuracy and availability of such solutions in city canyons are a well-known problem that has attracted researchers' attention over the last few decades. Since the urbanization level is currently more than 80 percent in developed countries and about 50 percent worldwide, the majority of positioning demands come from urban areas. Therefore, augmented solutions that help positioning and navigation in city canyons are clearly needed. The main cause of degraded GNSS performance in city canyons is that tall buildings block the direct radio signal path from the GNSS satellites to the user. Part or even all of the GNSS observations are missed because of the GNSS shadow cast by nearby buildings. As a result, the availability and accuracy of positioning are not always guaranteed.

However, GNSS shadow information can be calculated if 3D city model information is available, by applying a 3D ray tracing technique [1] to GNSS signals to analyse the line of sight (LOS) in the city canyon. The shadow matching concept was first published in 2004 by Tiberius and Verbree [2]. The corresponding GNSS shadow matching technique has recently been evaluated by many research groups and has proved its capability of refining positioning accuracy [1–13]. In particular, researchers from Britain proposed the idea, simulated the urban canyon case with a multiple-GNSS-constellation scenario [3], and utilized a 3D city model of London [4, 5] with a visibility scoring algorithm [4] to achieve optimized positioning results in London. The technology has also been investigated in Finland [8], the United States [9–11], Canada [12], and Taiwan [13]. The major advantage of shadow matching is that it is applicable to receivers that output standard National Marine Electronics Association (NMEA) messages. The potential of the method is thus extensive: it can be utilized on various platforms to improve position accuracy in GNSS-degraded areas, especially for low-end receivers.

There have been several attempts to calculate LOS using various 2.5D surfaces (Digital Surface Model (DSM) or Digital Terrain Model (DTM)). However, all such algorithms rely greatly on the accuracy and integrity of the 2.5D/3D surfaces or models. A 3D model is a simplified version of the real world, and some explicit details are omitted deliberately for various reasons, which influences the LOS analysis to some extent. Normally the model accuracy varies from metre level to decimetre level [1]. In this research, the shadow mask is generated with centimetre-level accurate MLS point cloud data instead of any existing 3D model, because we argue that the modelling accuracy degrades during the 3D model reconstruction from the point cloud.

An MLS system consists of a mobile platform, for example, a car, one or several laser scanners, and possibly cameras. It is integrated with a georeferencing system consisting of GNSS and an inertial measurement unit (IMU). It provides a georeferenced 3D point cloud of a measured scene with high accuracy. With good GNSS visibility, the errors of the MLS point cloud are trivial. Kaartinen et al. [14] demonstrated that the elevation accuracy of commercial and research MLS systems was better than 3.5 cm up to a range of 35 m. The best system achieved a planimetric accuracy of 2.5 cm over a range of 45 m. Applications of MLS include extraction and modelling of buildings [15–19], trees [18, 20–23], poles [21, 24–28], and ground and road surfaces [18, 29–31], and change detection in fluvial environments [32]. Nokia HERE (previously Navteq) True cars and Google Street View cars are also collecting large data sets; however, no research based on those data sets has been released to the public yet.

Processing of MLS data is a relatively new field of science. One of the major disadvantages of the MLS is the limited number of software applications capable of processing the huge amount of data. Current systems can provide a scanning rate of more than 1 Mpts/s. Thus, efficient processing techniques are needed especially when working with raw mobile laser scanning data.

In this paper, a novel solution for GNSS LOS analysis is demonstrated that generates the GNSS shadow mask using MLS point cloud data. The solution includes removal of noise points, removal of points coming from objects that do not interfere with GNSS, such as wires and poles, detection of objects that only attenuate the RF signal, and finally the highest point extraction and the angle calculation for shadow mask generation. The following section explains why the point cloud, and not a 3D model, is utilized for shadow mask generation; Section 3 summarizes the processing algorithms for MLS data, followed by an introduction of the field tests for the research. Then, we discuss the experimental results and analyse them in detail. Finally, conclusions are drawn and future improvements are discussed.

2. Why Point Cloud Rather Than 3D Models

A 3D model is a simplified version of the real world. To clarify the difference between a dense point cloud and a 3D model, the following is a brief introduction to the procedure of 3D model reconstruction from a point cloud. The reconstruction of a 3D model of a scene is a complicated process. Universal point cloud processing methods to automatically generate 3D models are still not available, and the dedicated modelling methods for different objects are diverse, as mentioned in the introduction [15–33]. In general, however, the processing chain includes the following steps, as Figure 1 presents: (i) noise point filtering/reduction from the georeferenced point cloud; (ii) object classification: grounds, buildings, roads, trees, and other street furniture such as traffic signs and fences; (iii) building reconstruction by planar detection, edge generalization from the zigzagged point cloud to extract the outline of a plane (roof or facade), and constrained right-angle processing for all edges; (iv) filtering of foreign objects, such as hanging objects and plants, to minimize the size and complexity of the generated 3D model; (v) meshing or triangulating the building geometry; (vi) texture mapping: projecting rectified images onto building roofs and facades. With the proposed method, the shadow mask is available from the raw point cloud without a specific and detailed 3D city model, which saves excessive labour cost and system investment.

Figure 1: General processing chain from MLS data collection to 3D city model.

Compared to the original point cloud, the accuracy may degrade during model reconstruction in each of the aforementioned steps. Figure 2 shows an example from the model reconstruction. As can be observed, the edges in an unorganized point cloud are jagged; the outline extraction from planar roof points is a process of generalization. Usually, after 3D model reconstruction, an evaluation is necessary to check which accuracy level has been achieved compared to the original point cloud. Therefore, when using 3D models for shadow matching, several issues need to be considered: (i) what kind of data source has been used for 3D model generation; (ii) what kind of method has been used for the 3D model reconstruction; (iii) what accuracy level has been achieved in the resulting 3D models. This paper will not extend these topics further because they are beyond its scope. However, it is clear that when 3D models are reconstructed, a loss of accuracy is inevitable.

Figure 2: An example of outline extraction from planar roof points: a part of 3D model reconstruction.

The point cloud data quality of the adopted MLS platform was already analysed in Kaartinen et al. [14]; there was therefore no need to analyse the point cloud quality further, and an accuracy better than 5 cm is guaranteed for the data throughout the whole experiment, which is more accurate than most available 3D models. A potential application of such technology is to utilize the point cloud generated by the LiDAR sensors of autonomous cars in the near future.

In this research, we investigate the feasibility of utilizing the point cloud collected by an MLS platform to generate the shadow mask by removing the objects which only attenuate the GNSS signal. The enhancement comparison between the proposed method and others will be discussed in the future, if a centimetre-level accurate 3D model of the investigated area becomes available. Besides, the performance enhancement is a comprehensive result affected by the geometry of the GNSS constellation, the selected GNSS receivers, and the adopted algorithm (such as a particle filter or an optimized visibility scoring scheme [4]).

3. Methodology and Algorithm

Figure 3 illustrates a typical shadow matching scenario in a city canyon. Assuming the distance between the pedestrian and the building is d, and the height of the building in the 3D model is h_m against its true height h_t, the elevation-angle error introduced by the inaccuracy of the model (normally at the level of several decimetres to metres) can be calculated with

Δα = arctan(h_m/d) − arctan(h_t/d).

As a result, some satellites are excluded from the positioning computation when they actually should be used: these satellites are actually in view, but the inaccuracy of the 3D model excludes them from the observed list. This implies that a less accurate 3D model might make shadow matching positioning inapplicable in practice. It can also be concluded that this research has potential benefits for improving the positioning accuracy of the shadow matching method, compared with relying on a less accurate 3D city model.

Figure 3: How the accuracy of the model affects the shadow matching results.
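To make the effect of model height error concrete, the following sketch evaluates the elevation-angle error for a hypothetical canyon geometry; the distance, building heights, and antenna height are assumed example values, not measurements from the paper's test field.

```python
import math

def elevation_error_deg(d, h_true, h_model, antenna_h=1.4):
    """Difference between the modelled and true obstruction elevation
    angles for a building facade at horizontal distance d (metres)
    from a GNSS antenna at height antenna_h above the ground."""
    el_true = math.degrees(math.atan2(h_true - antenna_h, d))
    el_model = math.degrees(math.atan2(h_model - antenna_h, d))
    return el_model - el_true

# A 15 m building modelled as 16 m, pedestrian 10 m from the facade:
# any satellite whose elevation falls between the two boundary angles
# is classified as shadowed even though it is actually in view.
err = elevation_error_deg(d=10.0, h_true=15.0, h_model=16.0)
```

Even this modest decimetre-to-metre model error shifts the shadow boundary by about two degrees at a 10 m standoff, which is enough to misclassify low-elevation satellites.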

One issue to be addressed is that, in this research, we do not utilize an existing 3D model but instead investigate a method to generate the GNSS shadow mask directly from the 3D point cloud collected by an MLS system. A 3D model is a simplified expression of the physical environment which ignores some details on purpose, while the point cloud collected by the MLS contains the most detailed environmental objects. Thus, the methodology investigated in this research filters out the unnecessary information to generate a GNSS shadow mask as precise as possible, by considering the physical nature of the scanned environmental objects, without generating a real 3D model from the point cloud.

In this research, a small-footprint phase-based laser scanner is utilized. The major reasons why a phase-based model is used rather than a pulse-based version are the following. (1) A phase-based model has a smaller field of view (FOV): the one utilized for this research has a 0.19 mrad FOV, while most pulse-based laser scanners have FOVs of several mrad. Considering a 25-metre range, the beam diameter at exit is 2.25 mm and the maximum footprint size of the adopted phase-based model is 7.3 mm, which is considerably smaller than the 10–20 cm of larger-FOV pulse-based scanners. To generate a point cloud as detailed as possible, the footprint should be small because of the fractional interception of laser energy by scattered objects within the laser footprint; a large-footprint laser scanner introduces more measurement error [34–38]. (2) A phase-based model has a higher acquisition speed than a pulse-based model: the acquisition speed of the selected model is approximately 1 M points per second, against 10–100 k for most pulse-based laser scanners. A considerably denser point cloud can thus be generated in a more efficient manner, and more details can be unveiled by the denser point cloud. (3) A phase-based model has higher range resolution: in this research the range resolution is 1 mm, against one to several centimetres for most pulse-based models. (4) A phase-based model uses a laser with less transmission power than a pulse-based laser scanner; eye safety is an issue determining the acceptance of laser scanners, especially for massive civil applications.

However, the field measurements indicate that the phase-based laser scanner is more sensitive to environmental factors and produces higher noise levels [38]. A noise-mitigating procedure is therefore necessary to filter out the noise measurements before calculating the shadow mask.

The data sources used in this research are the MLS point cloud (from a FARO Focus laser scanner), trajectory data (from a NovAtel SPAN georeferencing system), GNSS data collected by a dual-frequency receiver (a NovAtel OEMV receiver), and precise satellite ephemeris downloaded from an Internet service.

The MLS point cloud is a set of georeferenced points which contain 3D coordinates and intensity values. The research adopts the ETRS-TM35FIN map coordinate system with GRS80 ellipsoidal height, in which the x-axis points east, the y-axis north, and the z-axis upwards. In this research, "the top view" means the view in the x-y plane.

Usually, a laser scanning point cloud contains much noise as a result of failed ranging or multiple reflections when the reflecting surface is smaller than the footprint of the scanning laser point, which directly leads to wrong results. It is therefore comparably important to filter out the noise, and noise reduction is the foundation of all point cloud processing; otherwise such sparse spatial noise might be recognized as the highest obstruction point in further processing. Normally the noise is present as isolated point(s) and can be detected efficiently with a spatial filter. An adaptive spatial filter (ASF) was utilized to mitigate the noise in the raw laser scanning points. After the noise was filtered out, we designed another ASF to detect points coming from objects that do not interfere with GNSS, for example, hanging power cables and flags. From a GNSS user's perspective, those objects cast little GNSS shadow on the antenna of a receiver; thus, such objects do not attenuate the physical signal strength of the GNSS. However, they do influence the calculation of the highest obstruction angles of the scene. The ASF can also detect objects which only attenuate the GNSS signal rather than block it, such as plantings. Next, the highest obstruction angle for each azimuth of a grid point is calculated. Finally, the GNSS visibility map for the whole area is built on a high-density two-dimensional grid, for example, with 1-metre spacing.

3.1. Noise Mitigation Processing of Raw Laser Point Cloud

The ASF for noise mitigation was developed based on the isolated distribution characteristic of the noise, with the following steps. (1) The point cloud is projected into 2D views of the x-y, x-z, and y-z planes. (2) In each 2D view, a bivariate histogram is used to analyse each grid cell (with a cell size of 15 × 15 metres) and to calculate the number of points that fall in each cell. (3) If the number of points in a cell is less than the threshold of the filter (150 as the initial value with the current system configuration), its points are considered noise.

Because the filter mainly mitigates spatial noise, we refer to it as a "spatial filter." The threshold setting of the spatial filter relies adaptively on the density of the point cloud, the noise distribution, the cell size, and the scene: for example, when a scene contains a water area or a large area of glass-made objects, the noise level will be higher. Therefore, user knowledge and experience are needed for setting the threshold value.
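The three steps above can be sketched as follows; this is a minimal NumPy illustration, not the authors' implementation. The 15 m cell size and the threshold of 150 points come from the paper, while the composite cell key is an implementation shortcut for the bivariate histogram (rare key collisions would only merge distant cells).

```python
import numpy as np

def spatial_noise_filter(points, cell=15.0, threshold=150):
    """Sketch of the noise-mitigation ASF: project the cloud onto the
    x-y, x-z, and y-z planes, count the points per 15 x 15 m cell, and
    drop points whose cell count is below the threshold in any view.
    points: (N, 3) array of x, y, z coordinates."""
    keep = np.ones(len(points), dtype=bool)
    for a, b in ((0, 1), (0, 2), (1, 2)):        # xy, xz, yz projections
        ia = np.floor(points[:, a] / cell).astype(np.int64)
        ib = np.floor(points[:, b] / cell).astype(np.int64)
        # combine the two cell indices into one key, then count per cell
        key = ia * 100_003 + ib
        _, inv, counts = np.unique(key, return_inverse=True,
                                   return_counts=True)
        keep &= counts[inv] >= threshold
    return points[keep]
```

Requiring a point to survive all three projections matches the paper's observation that genuine noise is spatially isolated, whereas real surfaces remain dense in at least their dominant projections.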

3.2. Space Hanging Objects and Plantings Detection and Removal

Space hanging objects, for example, pipelines, power lines, and hanging flags, are ubiquitous parts of a city's infrastructure, especially in city canyons, and they cast too little GNSS shadow on the GNSS receiver to be considered as changing the visibility of the satellites. Therefore, these objects need to be removed from the scene before the calculation of the highest obstruction angle. In addition, considering the complexity of the scene, for example, when a person is located under a bridge or in a tunnel, the concrete structure of these objects blocks the signals; in such cases, the high buildings near the bridge cannot always be taken as defining the highest obstruction angles. Therefore, a voxel-based algorithm is developed to detect the location of a view point: in an open area, or under a bridge or a tunnel. We assume a voxel with a size of 5 × 5 × 5 metres, centred (in plan) at the view point with its bottom 2 metres above the ground. The points inside the voxel are analysed according to their number and density: when both exceed their thresholds, they are accepted as bridge or tunnel points, and the whole object (bridge or tunnel) can then be detected by applying a region growing method. We also analyse the neighbouring points of each candidate for the highest point: if the number of points around the candidate changes negligibly in the vertical direction, we consider it an object hanging in the air and remove it. Poles and wires can also be removed by other specific algorithms, such as [25] for poles and [33] for wires.
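The voxel test described above can be sketched as follows. The 5 × 5 × 5 m voxel and the 2 m bottom offset are from the paper; the count and density thresholds are illustrative placeholders, since the paper does not state their values.

```python
import numpy as np

def under_structure(points, viewpoint, size=5.0, bottom=2.0,
                    min_count=200, min_density=1.0):
    """Voxel-based check (sketch): count the points inside a
    size x size x size voxel centred in x-y on the view point, with
    its bottom `bottom` metres above the ground at the view point.
    The view point is taken to be under a bridge or tunnel when both
    the point count and the density exceed their thresholds."""
    x0, y0, z_ground = viewpoint
    half = size / 2.0
    inside = ((np.abs(points[:, 0] - x0) <= half) &
              (np.abs(points[:, 1] - y0) <= half) &
              (points[:, 2] >= z_ground + bottom) &
              (points[:, 2] <= z_ground + bottom + size))
    count = int(inside.sum())
    density = count / size ** 3      # points per cubic metre
    return count >= min_count and density >= min_density
```

A positive result would then seed the region growing step that recovers the full bridge or tunnel structure.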

The highest points generated by plantings have a unique sparse pattern, in contrast to the linear skyline generated by regular buildings, when projected into the top view as Figure 12(c) presents. They can be easily detected and removed with a two-dimensional spatial filter.

3.3. The Highest Point Extraction and the Highest Obstruction Angle Calculation

After the noise and space hanging object detection and removal, the GNSS shadow map of the scene can be evaluated for each grid point by choosing the point cloud within a 2D distance of 25 metres from the grid point. A low building close to the user might yield a larger elevation angle than a high building far away. The 25-metre radius is an empirical compromise between computational complexity and performance; the total number of points within the radius is on the order of hundreds of millions with the current system setup. In a city canyon, the heights of buildings vary from 5 metres to over 100 metres, so a threshold on the 3D distance from a grid point is difficult to define; it is therefore more accurate to define a scene by choosing a 2D distance from the grid point.

We denote each grid point as Grid_i, and the scene points within a 2D distance of 25 metres from Grid_i as Scene_i. A grid point Grid_i stands for a pedestrian standing at that position, and the range of the scene points represents the visual area at the position of Grid_i. Grid_i is calculated based on the trajectory data collected by the NovAtel SPAN, at about 1.4 metres above the ground surface. Figure 4 gives an example of the relationship between a grid point and its visual area in the 3D view and in the top view. In order to describe the obstruction of a scene in terms of the highest points, Scene_i is divided, in the top view, into 360 sectors around the grid point Grid_i, each sector spanning 1°.

Figure 4: Illustration of a visual area at a grid point (Grid_i) and how the points are divided into 360 sectors (1° as one sector) in (a) 3D view and (b) plane view.
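The sector division and per-sector highest obstruction extraction can be sketched as below, assuming the scene and grid point are given as NumPy arrays. For readability the mask stores the elevation angle above the horizon for each 1° azimuth sector; the paper's angle with respect to the vertical direction is simply its complement (90° minus this value).

```python
import numpy as np

def obstruction_mask(scene, grid_pt, radius=25.0, n_sectors=360):
    """Divide the scene around a grid point into 1-degree azimuth
    sectors and keep the largest obstruction elevation per sector.
    scene: (N, 3) georeferenced points; grid_pt: (3,) antenna position."""
    d = scene - grid_pt
    dist2d = np.hypot(d[:, 0], d[:, 1])
    sel = (dist2d > 0) & (dist2d <= radius)       # 25 m 2D-distance scene
    # azimuth clockwise from north (y-axis), elevation above the horizon
    az = (np.degrees(np.arctan2(d[sel, 0], d[sel, 1])) + 360.0) % 360.0
    el = np.degrees(np.arctan2(d[sel, 2], dist2d[sel]))
    sector = np.minimum((az * n_sectors / 360.0).astype(int), n_sectors - 1)
    mask = np.zeros(n_sectors)                    # open sky -> 0 degrees
    np.maximum.at(mask, sector, el)               # highest point per sector
    return mask
```

Repeating this for every grid point of the 1-metre grid yields the per-point shadow masks described in Section 3.4.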

After the highest points of the surrounding scene of each grid point have been obtained, the elevation angle of each highest point with respect to the vertical direction is calculated. Figure 5 shows the definition of the vertical angle. The red dot is the grid point, while the green dot shows one of the highest points in the surrounding scene. Grid_i is the ith grid point, and Scene_ij is the highest point at the jth degree of azimuth of the surrounding scene of the ith grid point. α_ij represents the angle of the point Scene_ij with respect to the vertical direction.

Figure 5: The vertical angle calculation.

The angles are calculated as follows:

α_ij = arccos((z_Scene_ij − z_Grid_i) / D_ij),

where z_Scene_ij and z_Grid_i are the z-axis coordinates of the point Scene_ij and the point Grid_i, respectively, and D_ij stands for the 3D distance between the point Scene_ij and the point Grid_i.

3.4. The GNSS Shadow Mask

Based on the elevation angle at each azimuth, the sky plot of the GNSS shadow mask of each grid point can be drawn. Figure 6 presents one example of azimuth-elevation pairs in a sky plot in a test field, where the red circles stand for the Space Vehicle (SV)/GNSS satellite positions based on the ephemeris data, with their corresponding satellite IDs, and the blue line stands for the boundary of the GNSS shadow extracted from the MLS data with the above-mentioned ASF applied. GNSS satellites with elevation angles below the determined boundary are blocked by buildings from the GNSS user's perspective. The GNSS shadow mask of each grid point extracted from the MLS thus becomes useful information for positioning: such spatial information can be utilized by the GNSS shadow matching methodology to enhance the positioning accuracy.
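The visible/shadowed decision for each satellite then reduces to a lookup into the per-degree mask. The sketch below illustrates this; the mask values and satellite positions are hypothetical examples, not the data shown in Figure 6.

```python
import numpy as np

def classify_satellites(mask, sats):
    """Flag each satellite as visible (True) or shadowed (False) by
    comparing its elevation with the shadow-mask boundary at its
    azimuth.  mask: 360 obstruction elevation angles in degrees;
    sats: {satellite ID: (azimuth, elevation) in degrees}."""
    return {sv: el > mask[int(az) % 360] for sv, (az, el) in sats.items()}

# Hypothetical canyon: 60 deg obstruction except an open sector east.
mask = np.full(360, 60.0)
mask[80:100] = 10.0
status = classify_satellites(mask, {"G05": (90.0, 35.0),
                                    "G12": (200.0, 40.0)})
# status -> {"G05": True, "G12": False}
```

In shadow matching, the same comparison is run for many candidate grid points, and the candidates whose predicted visibility best matches the actually received satellites are retained.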

Figure 6: GNSS shadow mask sky plot.

3.5. Two Scenarios for Investigation

In this research, two special city canyon scenarios are investigated besides the typical scenario. The first contains hanging objects such as flags and cables, as Figure 7(a) presents; Figure 7(b) presents a city canyon with plantings, which might attenuate the GNSS RF signal but not block it. It is well known that GNSS observability along the canyon direction is better than across it, due to the topography of the city canyon. However, if hanging objects such as power cables, hanging lights, and flags exist, they might be processed as the highest obstruction points in the along-canyon direction, which might redefine, and in most cases narrow down, the boundary of the GNSS shadow mask. Thus an adaptive spatial filter needs to be designed to filter out the hanging objects in the point cloud. We name such hanging object cases "CASE I" in the following. The same consequence occurs if plantings exist nearby, as Figure 7(b) presents: when the plantings are close to the observing point, the highest obstruction points introduced by the point cloud of the plantings might also reshape the GNSS shadow mask. Thus another ASF is designed to deal with this case, named "CASE II" in the following research.

Figure 7: Two special scenarios investigated in this research: (a) city canyon with hanging objects; (b) city canyon with nearby plantings.

4. MLS Data Collection

The MLS data used for the experiment were collected with the ROAMER mobile mapping system developed by the Finnish Geospatial Research Institute (FGI), shown in Figure 8, in the Tapiola shopping centre area, Espoo, Finland, a 300-by-300-metre area with buildings of varying size and height [39]. The MLS system can be applied to different platforms, that is, car, trolley, and boat, and also as a personal backpack [40]. The effective range of ROAMER was 78 metres and the scanning rate could reach approximately 1 M points/s. For this experiment, the system employed a trolley as the platform, so that data could also be collected in pedestrian alleys, at a speed of about 1 m/s.

Figure 8: ROAMER collecting data in Tapiola, Finland.

5. Results and Discussions

Figure 9 illustrates the effectiveness of the ASF for noise mitigation. The red dots present the noise measurements within the MLS-collected point cloud. Most of the sparse noise hanging in space has been filtered out. The resulting filtered point cloud was used for the highest point extraction.

Figure 9: Noise mitigation results.

Figure 10 shows the detection of the highest points by removing the cable-like objects. The black dots are the highest points of each azimuth in one scene, and the red point in the figure stands for a grid point. The colours in the figure indicate height. From the figure, it can be observed that all space hanging objects in the scene are detected and removed.

Figure 10: The results of the highest points detection after the space hanging object filter has been applied.

Figure 11 shows a comparison of the highest point extraction with (Figure 11(a)) and without (Figure 11(b)) the space hanging objects. Without applying the spatial filter, many of the extracted mask points are located on the three hanging cables between two buildings in the scene. The cables are detected with the ASF and deleted from the highest point list, as Figure 11(b) shows. The difference is more obvious when all highest points are projected onto the x-y plane, as Figures 11(c) and 11(d) present. Figure 11(e) shows the difference between the two GNSS shadow masks. The blue circle line depicts the boundary of the shadow with the spatial filter, which has a much lower elevation angle in the cross-canyon (north-south) direction compared to the red star line drawn from the unfiltered data. From the red star line, we can perceive the three cables in the observed area; the two dents between 50° and 55° and between −135° and −140° are identified as two lights hung on power cables for illumination in Tapiola. We can observe that the designed ASF correctly filters out the hanging objects and generates a GNSS shadow mask which depicts the GNSS visibility in city canyon environments more precisely for the CASE I scenario.

Figure 11: (a) Highest points extraction without ASF, (b) highest points extraction with ASF, (c) top view of the extracted highest points in the x-y plane without ASF, (d) top view of the extracted highest points in the x-y plane with ASF, and (e) GNSS shadow mask sky plot generated by the methods with and without ASF in CASE I.
Figure 12: (a) Highest points extraction without ASF, (b) highest points extraction with ASF, (c) top view of the extracted highest points in the x-y plane without ASF, (d) top view of the extracted highest points in the x-y plane with ASF, and (e) GNSS shadow mask sky plots generated by the methods with and without ASF in CASE II.

Figure 12 presents the effectiveness of the designed ASF for the CASE II scenario in a city canyon. As Figures 12(a) and 12(c) show, all the sparsely distributed highest points introduced by the plantings have been filtered out, and a cleaner GNSS mask is generated in Figures 12(b) and 12(d). By comparing the GNSS shadow masks generated with and without the ASF, some conclusions can be drawn: the proposed ASF correctly filters out the highest points generated by the nearby plantings; moreover, the signal attenuated by the plantings can be utilized as a special signal of opportunity (SOP) for positioning to improve the location accuracy, because the attenuation introduced by trunks or foliage is lower than that caused by buildings, resulting in a GNSS signal with lower SNR rather than a blocked one.

6. Conclusion

By analysing the MLS point cloud data and applying the developed ASF, it is concluded that MLS data can be used to extract GNSS shadow masks after a series of appropriate processing steps that filter out the hanging objects and the plantings. Since the seamless MLS point cloud data has centimetre accuracy, the extracted GNSS shadow mask is more precise than those generated from traditional 3D models, where the model error is several decimetres [1] or even higher. Since point clouds of city and roadside areas are collected with mobile mapping and laser scanning technology by large geospatial data providers, such as Nokia HERE and Google, who are also major players in the fields of positioning and navigation, there are possibilities to implement the presented methods for the benefit of GNSS users in city environments.

In future research, we will generate a GNSS shadow mask database on a dense two-dimensional grid with 1-metre spacing for testing GNSS shadow matching to enhance position accuracy in city canyons. As far as we know, it would be the most detailed and dense such database with centimetre-level accuracy. More field tests are planned in the test area to evaluate the improvement for pedestrian navigation applications by comparing the performance of the MLS method and of 3D city models for elevation mask determination. We also aim to analyse the attenuation introduced by trunks and foliage on GNSS signals, to investigate a more precise GNSS shadow matching algorithm for city canyon environments that takes city plants into account.

Competing Interests

The authors declare that they have no competing interests.


Acknowledgments

The research is financially supported by the Academy of Finland (New Laser and Spectral Field Methods for In Situ Mining and Raw Material Investigations, 292648); the Strategic Research Council at the Academy of Finland is also acknowledged for financial support (Project Decision no. 293389). Additionally, the Chinese Academy of Sciences (181811KYSB20130003, 181811KYSB20160113), the China Ministry of Science and Technology (2015DFA70930), and the National Natural Science Foundation of China (41304004) are acknowledged.


References

  1. L. Wang, P. D. Groves, and M. K. Ziebart, “Urban positioning on a smartphone: real-time shadow matching using GNSS and 3D city models,” Inside GNSS Magazine, vol. 8, no. 6, pp. 44–56, 2013.
  2. C. Tiberius and E. Verbree, “GNSS positioning accuracy and availability within Location Based Services: the advantages of combined GPS-Galileo positioning,” in Proceedings of the ESA/Estec Workshop on Satellite Navigation User Equipment Technologies, G. S. Granados, Ed., pp. 1–12, ESA Publications Division, Noordwijk, The Netherlands, 2004.
  3. P. D. Groves, “Shadow matching: a new GNSS positioning technique for urban canyons,” Journal of Navigation, vol. 64, no. 3, pp. 417–430, 2011.
  4. L. Wang, P. D. Groves, and M. K. Ziebart, “GNSS shadow matching: improving urban positioning accuracy using a 3D city model with optimized visibility prediction scoring,” in Proceedings of the 25th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS '12), pp. 423–437, Nashville, Tenn, USA, 2012.
  5. L. Wang, P. D. Groves, and M. K. Ziebart, “Multi-constellation GNSS performance evaluation for urban canyons using large virtual reality city models,” Journal of Navigation, vol. 65, no. 3, pp. 459–476, 2012.
  6. M. Adjrad and P. D. Groves, “Intelligent urban positioning using shadow matching and GNSS ranging aided by 3D mapping,” in Proceedings of the 29th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS '16), Portland, Ore, USA, September 2016.
  7. M. Adjrad and P. D. Groves, “Enhancing conventional GNSS positioning with 3D mapping without accurate prior knowledge,” in Proceedings of the 28th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS '15), pp. 2397–2409, Tampa, Fla, USA, September 2015.
  8. R. Chen and J. Liu, “Mitigating GNSS positioning errors using 3D spatial attributes of map data,” Finnish Patent: 20110073, January 2011.
  9. J. T. Isaacs, A. T. Irish, F. Quitin, U. Madhow, and J. P. Hespanha, “Bayesian localization and mapping using GNSS SNR measurements,” in Proceedings of the IEEE/ION Position, Location and Navigation Symposium (PLANS '14), pp. 445–451, IEEE, Monterey, Calif, USA, May 2014.
  10. J. Bradbury, “Prediction of urban GNSS availability and signal degradation using virtual reality city models,” in Proceedings of the 20th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS '07), pp. 2696–2706, Fort Worth, Tex, USA, September 2007.
  11. J. T. Isaacs, A. T. Irish, F. Quitin, U. Madhow, and J. P. Hespanha, “Bayesian localization and mapping using GNSS SNR measurements,” in Proceedings of the IEEE/ION Position, Location and Navigation Symposium (PLANS '14), pp. 445–451, May 2014. View at Publisher · View at Google Scholar · View at Scopus
  12. R. Kumar and M. G. Petovello, “A novel GNSS positioning technique for improved accuracy in urban canyon scenarios using 3D city model,” in Proceedings of the 27th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS '14), vol. 812, pp. 2139–2148, Tampa, Fla, USA, 2014.
  13. J.-Y. Han and P.-H. Li, “Utilizing 3-D topographical information for the quality assessment of a satellite surveying,” Applied Geomatics, vol. 2, no. 1, pp. 21–32, 2010. View at Publisher · View at Google Scholar · View at Scopus
  14. H. Kaartinen, J. Hyyppä, A. Kukko, A. Jaakkola, and H. Hyyppä, “Benchmarking the performance of mobile laser scanning systems using a permanent test field,” Sensors, vol. 12, no. 9, pp. 12814–12835, 2012. View at Publisher · View at Google Scholar · View at Scopus
  15. M. Rutzinger, B. Höfle, S. O. Elberink, and G. Vosselman, “Feasibility of facade footprint extraction from mobile laser scanning data,” Photogrammetrie, Fernerkundung, Geoinformation, vol. 2011, no. 3, pp. 97–107, 2011. View at Publisher · View at Google Scholar · View at Scopus
  16. C. Frueh, S. Jain, and A. Zakhor, “Data processing algorithms for generating textured 3D building facade meshes from laser scans and camera images,” International Journal of Computer Vision, vol. 61, no. 2, pp. 159–184, 2005. View at Publisher · View at Google Scholar · View at Scopus
  17. L. Zhu, J. Hyyppä, A. Kukko, H. Kaartinen, and R. Chen, “Photorealistic building reconstruction from mobile laser scanning data,” Remote Sensing, vol. 3, no. 7, pp. 1406–1426, 2011. View at Publisher · View at Google Scholar · View at Scopus
  18. H. Zhao and R. Shibasaki, “Reconstructing a textured CAD model of an urban environment using vehicle-borne laser range scanners and line cameras,” Machine Vision and Applications, vol. 14, no. 1, pp. 35–41, 2003. View at Publisher · View at Google Scholar · View at Scopus
  19. D. Manandhar and R. Shibasaki, “Auto-extraction of urban features from vehicle-borne laser data,” International Archives of Photogrammetry, Remote Sensing and Spatial, Information Sciences, vol. 34, pp. 433–438, 2002. View at Google Scholar
  20. M. Rutzinger, A. K. Pratihast, S. J. Oude Elberink, and G. Vosselman, “Tree modelling from mobile laser scanning data-sets,” Photogrammetric Record, vol. 26, no. 135, pp. 361–372, 2011. View at Publisher · View at Google Scholar · View at Scopus
  21. S. Pu, M. Rutzinger, G. Vosselman, and S. Oude Elberink, “Recognizing basic structures from mobile laser scanning data for road inventory studies,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 66, no. 6, pp. S28–S39, 2011. View at Publisher · View at Google Scholar · View at Scopus
  22. E. Puttonen, A. Jaakkola, P. Litkey, and J. Hyyppä, “Tree classification with fused mobile laser scanning and hyperspectral data,” Sensors, vol. 11, no. 5, pp. 5158–5182, 2011. View at Publisher · View at Google Scholar · View at Scopus
  23. A. Jaakkola, J. Hyyppä, A. Kukko et al., “A low-cost multi-sensoral mobile mapping system and its feasibility for tree measurements,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 65, no. 6, pp. 514–522, 2010. View at Publisher · View at Google Scholar · View at Scopus
  24. C. Brenner, “Extraction of Features from mobile laser scanning data for future driver assistance systems,” in Advances in GIScience-Proceedings of the 12th AGILE Conference, Lecture Notes in Geoinformation and Cartography, pp. 25–42, Springer, Berlin, Germany, 2009. View at Publisher · View at Google Scholar
  25. M. Lehtomäki, A. Jaakkola, J. Hyyppä, A. Kukko, and H. Kaartinen, “Detection of vertical pole-like objects in a road environment using vehicle-based laser scanning data,” Remote Sensing, vol. 2, no. 3, pp. 641–664, 2010. View at Publisher · View at Google Scholar · View at Scopus
  26. M. Lehtomäki, A. Jaakkola, J. Hyyppä, A. Kukko, and H. Kaartinen, “Performance analysis of a pole and tree trunk Detection method for mobile laser scanning data,” in Proceedings of the ISPRS Calgary Workshop on Laser Scanning, pp. 197–202, ISPRS, Calgary, Canada, August 2011. View at Scopus
  27. A. Golovinskiy, V. G. Kim, and T. Funkhouser, “Shape-based recognition of 3D point clouds in urban environments,” in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 2154–2161, October 2009. View at Publisher · View at Google Scholar · View at Scopus
  28. H. Yokoyama, H. Date, S. Kanai, and H. Takeda, “Pole-like objects recognition from mobile laser scanning data using smoothing and principal component analysis,” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 38, pp. 115–120, 2011. View at Google Scholar
  29. A. Jaakkola, J. Hyyppä, H. Hyyppä, and A. Kukko, “Retrieval algorithms for road surface modelling using laser-based mobile mapping,” Sensors, vol. 8, no. 9, pp. 5238–5249, 2008. View at Publisher · View at Google Scholar · View at Scopus
  30. S.-J. Yu, S. R. Sukumar, A. F. Koschan, D. L. Page, and M. A. Abidi, “3D reconstruction of road surfaces using an integrated multi-sensory approach,” Optics and Lasers in Engineering, vol. 45, no. 7, pp. 808–818, 2007. View at Publisher · View at Google Scholar · View at Scopus
  31. C. McElhinney, P. Kumar, C. Cahalane, and T. McCarthy, “Initial results from European Road Safety Inspection (EuRSI) mobile mapping project,” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 38, pp. 440–445, 2010. View at Google Scholar
  32. M. Vaaja, J. Hyyppä, A. Kukko, H. Kaartinen, H. Hyyppä, and P. Alho, “Mapping topography changes and elevation accuracies using a mobile laser scanner,” Remote Sensing, vol. 3, no. 3, pp. 587–600, 2011. View at Publisher · View at Google Scholar · View at Scopus
  33. P. Axelsson, “Processing of laser scanner data—algorithms and applications,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 54, no. 2-3, pp. 138–147, 1999. View at Publisher · View at Google Scholar · View at Scopus
  34. J. Jutila, K. Kannas, and A. Visala, “Tree measurement in forest by 2D laser scanning,” in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '07), pp. 491–496, Jacksonville, Fla, USA, June 2007. View at Publisher · View at Google Scholar · View at Scopus
  35. J. Tang, Y. Chen, L. Chen et al., “Fast fingerprint database maintenance for indoor positioning based on UGV SLAM,” Sensors, vol. 15, no. 3, pp. 5311–5330, 2015. View at Publisher · View at Google Scholar · View at Scopus
  36. Y. Chen, J. Tang, J. Hyyppä et al., “Automated stem mapping using SLAM technology for plot-wise forest inventory,” in Proceedings of the Ubiquitous Positioning Indoor Navigation and Location-Based Services (UPINLBS '14), pp. 130–134, Corpus Christi, Tex, USA, November 2014.
  37. O. Ringdahl, P. Hohnloser, T. Hellström, J. Holmgren, and O. Lindroos, “Enhanced algorithms for estimating tree trunk diameter using 2D laser scanner,” Remote Sensing, vol. 5, no. 10, pp. 4839–4856, 2013. View at Publisher · View at Google Scholar · View at Scopus
  38. T. Nuttens, C. Stal, J. Wisbecq, G. Deruyter, and A. De Wulf, “Field comparison of pulse-based and phase-based laser scanners for civil engineering applications,” in Proceedings of the 14th International Multidisciplinary Scientific Geoconference and EXPO (SGEM '14), pp. 169–176, Sofia, Bulgaria, June 2014. View at Scopus
  39. A. Kukko, C.-O. Andrei, V.-M. Salminen et al., “Road environment mapping system of the Finnish Geodetic Institute—FGI ROAMER,” International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 36, no. 3/W52, pp. 241–247, 2007. View at Google Scholar
  40. X. Liang, A. Kukko, H. Kaartinen et al., “Possibilities of a personal laser scanning system for forest mapping and ecosystem services,” Sensors, vol. 14, no. 1, pp. 1228–1248, 2014. View at Publisher · View at Google Scholar · View at Scopus