Journal of Sensors
Volume 2016 (2016), Article ID 4158370, 8 pages
http://dx.doi.org/10.1155/2016/4158370
Research Article

Novel Aerial 3D Mapping System Based on UAV Platforms and 2D Laser Scanners

Applied Geotechnologies Research Group, University of Vigo, 36310 Vigo, Spain

Received 3 August 2015; Accepted 10 December 2015

Academic Editor: Banshi D. Gupta

Copyright © 2016 David Roca et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The acquisition of 3D geometric data from an aerial viewpoint offers numerous advantages over terrestrial acquisition, the greatest being access to areas that are difficult or impossible to reach from the ground, such as roofs and treetops. If the aerial platform is of the copter type, further advantages arise, such as the capability of flying at very low speed, which allows a more detailed acquisition. This paper presents a novel Aerial 3D Mapping System based on a copter-type platform, in which a 2D laser scanner is integrated with a GNSS sensor and an IMU for the generation of georeferenced 3D point clouds. The accuracy and precision of the system are evaluated through the measurement of geometries in the point clouds generated by the system, as well as through the geolocation of target points whose real global coordinates are known.

1. Introduction

The availability of 3D point clouds of objects is key for their inventory and for subsequent geometric and other analyses, such as energy use in buildings [1, 2], the presence and dimensions of cracks in bridges and roads [3, 4], and the presence of defects in smaller objects or elements such as welds [5].

Point clouds can be generated using different methodologies and devices. On the one hand, photogrammetric techniques can be applied to orient the images and extract the 3D coordinates of the point represented by each pixel through the computation of ray intersections [6]. On the other hand, the 3D coordinates of points can be measured directly with laser scanning devices. Most of these consist of a head rotating about the vertical axis, with a rotating mirror inside (rotating about a horizontal axis). The two axes of rotation deflect the emitted laser beam, which travels through space and returns to the head after encountering an object. The result is directly the 3D coordinates of each point in the area around the position of the laser scanner [7, 8].

However, developments in positioning sensors, from Inertial Measurement Units (IMUs) to GNSS, have encouraged the appearance of mobile platforms for the generation of massive 3D point clouds. Most options are terrestrial and move on a wheeled structure, which can be a car or a van [9, 10], or a platform specifically designed for indoor work [11].

Regarding aerial platforms, advances in recent years have widened the possibilities for integrating sensors for different purposes. One option is the integration of RGB-D cameras, such as Kinect sensors, on the aerial platform, so that small 3D point clouds are acquired [12, 13]. The main drawbacks of this procedure are that data processing must be performed to register contiguous point clouds and that its application is mainly limited to indoor scenes, due to the poor performance of these devices in outdoor conditions. Thus, some versions integrate both a LiDAR sensor (2D laser scanner) for the measurement of geometry and a photographic camera for the acquisition of images [14, 15]. In this way, positioning of the system relies on LiDAR data processing and image orientation, based on photogrammetry and computer vision algorithms. These approaches use a relative coordinate system, unless ground control points are measured with a GPS device and used for the absolute orientation of the images. Another common option is to rotate the laser itself, with the same purpose as the rotating head in a terrestrial laser scanner: the generation of 3D point clouds through the fusion of all the 2D sections acquired in the same position [16, 17]. In these cases, some approaches rely completely on the IMU for the computation of the flight trajectory [18], whereas others focus on the alignment of the 2D point clouds to determine the positions from which they were acquired. This approach is commonly known as SLAM (Simultaneous Localization and Mapping) [19].

There are different algorithms for SLAM, although the majority of them can only be applied to indoor scenes, due to the higher presence of characteristic elements than in outdoor scenes [20]. Other SLAM algorithms constrain the displacement of the laser to the horizontal plane, so that the measurement is limited to horizontal and vertical surfaces, with no possibility of measuring ramps or even staircases [21]. Furthermore, the applications of SLAM-based platforms are limited to mapping and navigation, disregarding the third dimension of space. In order to measure in 3D, the platform must either integrate a second LiDAR sensor perpendicular to the first [22] or rotate it [17].

In order to overcome the limitations of SLAM regarding outdoor scenes and geometry, this paper presents a methodology for the 3D modelling of outdoor scenes based on data acquired by a LiDAR sensor mounted on a copter-type aerial platform. The system is able to acquire the 3D coordinates of points, including slopes and inclined surfaces, which are common in outdoor scenes. In order to minimize the number of LiDAR sensors on the platform and consequently reduce the payload to its minimum, the system consists of a single 2D laser scanner and a GNSS receiver. In this way, 3D point clouds are generated by combining the 2D scans, with the displacement measured by GNSS as the third dimension. The direct measurement of the GNSS receiver in the global reference system allows the direct absolute orientation of the 3D point clouds with no need for artificial ground control points.

This paper is organized as follows: Section 2 describes the equipment and the procedures for data acquisition and data processing. Section 3 presents the results obtained from the different measurements performed for the validation of the system. Finally, Section 4 presents the conclusions reached from the analysis of the proposed Aerial 3D Mapping System.

2. Materials and Methods

2.1. Sensor Integration

The system presented in this paper consists of a copter-type aerial platform equipped with a 2D laser scanner for the measurement of point coordinates and an IMU (Inertial Measurement Unit) for the measurement of the position and orientation of the platform and, thus, of the 2D laser scanner.

The platform chosen as the base of the system is a Mikrokopter Okto-XL. It is an eight-propeller platform with a brushless motor for each propeller, so that the propellers are individually controlled. The platform offers a payload capacity of 2.5 kg and a flight autonomy of up to 20 minutes. The platform is remotely piloted, communicating with the remote control in the 2.4 GHz band.

Data acquisition for 3D mapping is performed through the incorporation of the following sensors into the platform:

(i) 2D laser scanner: Hokuyo, model UTM-30LX.
(ii) IMU: Advanced Navigation, Spatial model.
(iii) GNSS receiver: Trimble BD920, with RTK, DGNSS, and SBAS modes [23].
(iv) Embedded processor: Pandaboard.

Technical characteristics of the sensors are included in Table 1.

Table 1: Technical characteristics of the sensors used for 3D mapping in the Aerial System.

Data acquisition is controlled from a tablet device, which communicates with the Aerial 3D Mapping System through a wireless communication module on the 433 MHz channel. In addition, the embedded processor is equipped with a 3G module for receiving GPS corrections of the position of the platform via the NTRIP protocol. The GNSS receiver Trimble BD920 imports the received corrections, which are used for the precise computation of the position of the system in real time. Data from the laser (scanning sections), the IMU, and the GPS after corrections are stored on board, in a storage system consisting of an external portable memory. Figure 1 shows a schema of the connections between the sensors.

Figure 1: Communication between sensors.

As shown in Figure 1, every measurement of the laser is associated with a measurement of the IMU and the GNSS receiver, making the subsequent generation of a 3D point cloud possible. Synchronization is performed through a trigger-out signal sent by the laser every time a measurement is performed. The signal is received by the IMU, which sends its measurements and forwards the acquisition order to the GNSS receiver.
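The trigger-driven association described above can be sketched as a nearest-timestamp lookup; the record layout and function names below are illustrative assumptions, not the authors' on-board implementation:

```python
from bisect import bisect_left

def nearest(timestamps, t):
    """Index of the entry in a sorted timestamp list closest to time t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbour is closer to t.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def associate(scan_times, imu_times, gnss_times):
    """Pair every laser scan time with the index of the closest IMU and
    GNSS records, mimicking the trigger-out synchronization chain."""
    return [(t, nearest(imu_times, t), nearest(gnss_times, t))
            for t in scan_times]
```

In the real system the trigger signal makes this pairing exact in hardware; a software lookup such as this would only be needed when replaying independently logged streams.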

In order to prevent flying debris and dust from entering the electronics of the system, all the components except the laser and the GNSS antenna are protected by a 2 mm thick aluminium box. The box is affixed to the UAV with a tappet, also made of aluminium. The laser sensor is mounted so that its 90° blind sector is oriented upwards. In this way, no information is missed from the area of interest (the ground and vertical elements such as buildings and trees); in addition, the propellers and motors of the vehicle are not recorded, which would otherwise disturb the point cloud by appearing as points of no interest. Figure 2 shows the system developed for aerial 3D mapping.

Figure 2: Aerial 3D Mapping System developed.

The positions of both the GPS antenna and the laser scanner are calibrated with respect to the IMU, so that the respective measurements of global and 2D coordinates are referred to the same origin in the system and their integration is possible.

2.2. Data Acquisition

While the flight is controlled by the operator with the remote control, data acquisition is controlled from the tablet device, using software specifically developed by the authors for this operation. Screen captures of the software are shown in Figure 3.

Figure 3: Screen captures of the software developed for controlling data acquisition. (a) System not working. (b) System logging data.

The software is designed so that data acquisition is controlled with the “Start” and “Stop” buttons. In addition, the number of measurements can be followed in real time at the top of the screen (legends “NumSurvey” and “NumScans”), together with the measurements of the IMU sensor for both position (latitude, longitude, and height) and orientation (roll, pitch, and yaw). Regarding the GNSS, the “Status” legend shows the quality of the GNSS positioning at every moment. In particular, the green light for acquisition is given when the acquisition mode is RTK Fixed, as shown in Figure 3(b).

When global positioning is optimal (RTK Fixed), the “Start” button can be pressed. From this moment, laser data acquisition starts, and all the measurements by the IMU, the GNSS receiver, and the laser are stored.
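The gating of acquisition on GNSS solution quality can be sketched as a minimal controller; the status string and the class interface are assumptions for illustration, not the authors' software:

```python
class AcquisitionController:
    """Minimal sketch of the Start/Stop logic: logging may begin only
    while the GNSS solution quality is 'RTK Fixed' (hypothetical string)."""

    def __init__(self):
        self.logging = False
        self.num_scans = 0  # mirrors the "NumScans" counter on screen

    def start(self, gnss_status):
        # Green light for acquisition only under optimal positioning.
        if gnss_status == "RTK Fixed":
            self.logging = True
        return self.logging

    def on_scan(self):
        # Called on every laser trigger; counts stored measurements.
        if self.logging:
            self.num_scans += 1

    def stop(self):
        self.logging = False
```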

2.3. Data Processing

Given the high precision of data acquisition, data processing is a simple procedure. The position and orientation of the aerial system at each laser measurement are known through the associated measurements of the IMU and GNSS sensors. Thus, the procedure for generating the 3D point cloud of the area flown over consists in placing each laser measurement (a 2D point cloud) at the position of the UAV at the moment of acquisition, projected with the orientation of the UAV. Consecutive 2D point clouds are projected to their respective positions, taking into account that the displacement line is the third dimension of the point cloud. The projection of each point is determined by the angle and distance provided by the laser scanner and by the roll, pitch, and yaw angles measured by the IMU at each position. Figure 4 shows some examples of 3D point clouds of buildings acquired with the Aerial 3D Mapping System.
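The projection step can be written down explicitly: each laser return (range and beam angle in the scanner plane) is rotated by the attitude measured by the IMU and translated to the GNSS position. The frame conventions below (Z-Y-X rotation order, scanner plane taken as the local x-y plane) are assumptions for illustration, since the paper does not specify them:

```python
import math

def scan_point_to_world(rng, beam_angle, roll, pitch, yaw, position):
    """Project one 2D laser return into world coordinates.

    rng, beam_angle: polar measurement in the scanner plane (m, rad).
    roll, pitch, yaw: platform attitude from the IMU (rad).
    position: (x, y, z) of the platform from the GNSS receiver.
    """
    # Point in the scanner frame: the 2D scanner measures in a single plane.
    p = (rng * math.cos(beam_angle), rng * math.sin(beam_angle), 0.0)
    # Attitude as a rotation matrix, R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    R = ((cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr),
         (sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr),
         (-sp, cp * sr, cp * cr))
    # Rotate into the world frame and translate by the GNSS position.
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + position[i]
                 for i in range(3))
```

Applying this to every return of every scan, with the attitude and position associated with that scan, yields the georeferenced 3D point cloud directly, with no registration step.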

Figure 4: Example of point clouds acquired from the Aerial 3D Mapping System developed. The red line shows the path followed by the system in the different flights required for the complete coverage of the buildings. Colour is applied according to different levels of height.

3. Results and Discussion

The performance of the Aerial 3D Mapping System is evaluated in two ways. On the one hand, the quality of the 3D point cloud is analysed by comparing it with a point cloud of the same buildings acquired with a FARO X330 terrestrial laser scanner. In this way, the accuracy of the geometry acquired by the proposed system is calculated. On the other hand, the quality of the global positioning of the system is evaluated by comparing the 3D coordinates of seven target points between the generated point cloud and independent GPS measurements.

3.1. Relative Measurements

Two different single-family houses were acquired with the Aerial 3D Mapping System and with a FARO X330 terrestrial laser scanner. The high accuracy of the latter in the 3D measurement of scenes (2 mm) makes it the optimal choice as a reference. The measurement with the Aerial 3D Mapping System consists in following a flight path surrounding the buildings, at a flying height of 1-2 m above the building. The height depends on the horizontal distance between the Aerial System and the building, given that these two parameters, height and horizontal distance, determine the view angle of the laser sensor. In order to acquire information on both the façades and the roof, the recommended view angle of the laser sensor is between 30° and 50°.
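The relation between flying height, horizontal distance, and view angle is simple geometry. The definition below (angle of the line of sight towards the building, measured from the horizontal) is an assumption for illustration, since the paper does not give the formula:

```python
import math

def view_angle_deg(height_above_roof, horizontal_distance):
    """View angle of the laser towards the building, measured from the
    horizontal, given the height above the roof and horizontal distance."""
    return math.degrees(math.atan2(height_above_roof, horizontal_distance))

def height_for_angle(horizontal_distance, target_angle_deg):
    """Flying height above the roof that yields a target view angle."""
    return horizontal_distance * math.tan(math.radians(target_angle_deg))
```

Under these assumptions, flying 1.5 m above the roof at a horizontal distance of roughly 1.8-2.5 m keeps the view angle inside the recommended 30°-50° window.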

Regarding the FARO X330, data acquisition is performed from a single scan position, located next to one corner of the building. In this way, information about the main façade and one lateral façade is acquired. With respect to the roofs, which appear in the point clouds acquired by the Aerial 3D Mapping System, the information acquired by the FARO is limited to the few points visible to the laser from the scan position.

The point clouds resulting from the data acquisitions are shown in Figure 5, while the results of their geometrical evaluation are shown in Table 2.

Table 2: 3D dimensions of the buildings in Figure 5; comparison between the point cloud acquired with the FARO X330 and that of the proposed Aerial 3D Mapping System. Differences over 5% are highlighted in bold.

Figure 5: 3D point clouds of buildings (1) and (2) used for the validation of the proposed system. (a) Point clouds from the Aerial 3D Mapping System. (b) Point clouds from the TLS FARO X330. The main difference is the lack of information on the roofs in the point clouds of the latter.

Results show that the error in the measurements of the Aerial 3D Mapping System is under 5% except for one measurement. Moreover, the highest accuracy is found in the measurement of the length. The higher accuracy in this dimension, which depends on the GNSS measurement and the computation of the displacement of the system, shows the quality of the RTK mode. On the other hand, the lower accuracy in the measurement of the height of the building can be explained by the fact that it can only be measured in one scanning section, the one coinciding with the top of the roof, with no guarantee that this section coincides with the real highest section of the building. In contrast, this measurement presents high accuracy in the point clouds acquired with the FARO X330, since they contain several points for each position, including the top of the roof. However, the highest error is found in the width of the buildings. The cause of this error is that the width measurement is a combination of measurements from the laser and the GNSS, resulting in the integration of the errors from the two sources.
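The comparison in Table 2 reduces to relative errors against the TLS reference; the dimension names and the 5% flagging below mirror the table, while the numeric values in the usage note are hypothetical:

```python
def percent_error(measured, reference):
    """Relative error of a measured dimension, in percent."""
    return abs(measured - reference) / reference * 100.0

def compare_dimensions(aerial, tls, threshold=5.0):
    """Compare each building dimension of the aerial point cloud against
    the TLS reference; flag those whose error exceeds the threshold."""
    return {name: (round(percent_error(aerial[name], ref), 2),
                   percent_error(aerial[name], ref) > threshold)
            for name, ref in tls.items()}
```

For example, with hypothetical values, `compare_dimensions({"length": 10.2, "width": 5.4}, {"length": 10.0, "width": 5.0})` reports a 2% error for the length (not flagged) and an 8% error for the width (flagged).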

3.2. Absolute Positioning

The quality of the global positioning of the system is evaluated through the measurement of the 3D coordinates of 7 control targets, consisting of white rectangular plates placed at different heights (between 0.20 and 1.40 m). The targets were homogeneously distributed in a forest clearing, as shown in Figure 6. With the aim of performing a statistical analysis, the coordinates of the targets were measured in 10 different flights, performed at a height of 8–10 m. The GNSS sensor of the aerial system received online corrections from a single station: the official national base, located approximately 30 km away.

Figure 6: (a) Side view of the forest clearing with control targets at different heights. (b) Top view of the forest clearing; the position of the targets is highlighted in red.

The reference coordinates of the targets are given by the independent measurement performed directly on the targets using a GPS Trimble R8. Thus, the spatial coordinates of the seven targets were measured in the ten flights, and their values compared with the reference values (i.e., the values from the measurements performed with the GPS Trimble R8). Mean deviations of the measurements of the Aerial 3D Mapping System with respect to the reference values are 0.239 m in the X-axis, 0.337 m in the Y-axis, and 0.201 m in the Z-axis. Mean deviation values for the 10 measurements of each point are shown in Table 3.
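The per-axis statistic reported in Table 3 can be sketched as a mean absolute deviation over the repeated flights; the data layout below is an assumption for illustration:

```python
def mean_deviation_per_axis(flights, reference):
    """Mean absolute deviation, per axis, of one target's coordinates
    measured over repeated flights against an independent GPS reference.

    flights: list of (x, y, z) tuples, one per flight.
    reference: (x, y, z) measured directly on the target.
    """
    n = len(flights)
    return tuple(sum(abs(f[axis] - reference[axis]) for f in flights) / n
                 for axis in range(3))
```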

Table 3: Mean deviation values between the coordinates measured by the system proposed and the reference values measured with GPS Trimble R8 for each point. All units are in [m].

The highest mean deviation, for all points except point 2, is in the Y-axis. Moreover, point 7 shows the highest deviation in both the X- and Y-axes, due to an error in the measurement of the heading angle by the IMU, which is the measurement in which the IMU presents the worst performance [24]. In all cases, the deviation is below 0.5 m, with the single exception of point 7, whose deviation in the Y-axis is 0.723 m.

4. Conclusions

This paper has presented the development and integration of a novel Aerial 3D Mapping System, constituted by a copter-type aerial platform and a 2D laser scanner for the measurement of point coordinates in outdoor scenes. The position of the system during acquisition is measured with a GNSS sensor, which accepts corrections from GPS stations, both local (positioned by the operators) and official, provided that they communicate via the NTRIP protocol. In addition, the orientation of the system is measured by an IMU, which also computes the trajectory of the system in case the GNSS signal is lost. Thus, the final objective of the system is the generation of 3D point clouds of the area under study, with coordinates in the global coordinate system. The global coordinate system used in this paper is ETRS89, with UTM projection, the zone depending on the location (zones 29 to 31 in the case of Spain).

With the presented configuration, and using GPS corrections from an official base, the precision of the global positioning is under 0.400 m in all axes, the worst precision being measured in the Y-axis, with a value of 0.337 m. Thus, the system provides results with a precision roughly an order of magnitude worse than the theoretical precision of GPS RTK mode, which is 1-2 cm. This decrease in precision is caused by errors in the measurements of the IMU, which influence the projection of each point to its 3D position. The IMU is the sensor that introduces the largest deviations, especially in the measurement of the heading angle, which is the one with the worst performance.

Regarding the generation of 3D point clouds and the accuracy of their geometry, two houses were measured, and the evaluation of their dimensions with respect to those measured by a terrestrial laser scanner yields errors below 5% in most cases. Among the dimensions measured, the highest error is found in the width of the buildings. The reason for this higher error is that this dimension is measured transversally by the laser, and the orientation of the points is calculated from the translation and rotation between positions of the system, instead of being measured directly. This fact highlights the importance of proper flight planning, in which the key dimensions of the object of interest are measured directly by the system (either by the laser scanner or by the GNSS, when the measurement is parallel to the trajectory of the Aerial System).

Improvements to the Aerial 3D Mapping System would involve the integration of an IMU with better technical characteristics, as well as a laser scanner capable of measuring at higher frequencies, so that more scans are recorded in each flight and the error in the measurement of the dimensions of objects is thus reduced. Regarding the generation of 3D point clouds, future work will focus on the processing step, through a procedure of plane extraction in consecutive 2D sections and the adjustment of the points to the fitted planes, resulting in the generation of more accurate 3D models.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Ministerio de Economía y Competitividad and CDTI (Gobierno de España) for the financial support given through human resources grants (FPDI-2013-17516, PTQ-13-06381), and projects (ENE2013-48015-C3-1-R). All the programs are cofinanced by the Fondo Europeo para el Desarrollo Regional (FEDER).

References

  1. M. Previtali, L. Barazzetti, R. Brumana et al., “Automatic façade modelling using point cloud data for energy-efficient retrofitting,” Applied Geomatics, vol. 6, no. 2, pp. 95–113, 2014.
  2. L. Díaz-Vilariño, S. Lagüela, J. Armesto, and P. Arias, “Semantic as-built 3D models including shades for the evaluation of solar influence on buildings,” Solar Energy, vol. 92, pp. 269–279, 2013.
  3. L. Truong-Hong and D.-F. Laefer, “Application of terrestrial laser scanner in bridge inspection: review and an opportunity,” in Proceedings of the 37th IABSE Symposium on Engineering for Progress, Nature and People, pp. 2713–2720, Madrid, Spain, September 2014.
  4. H. Guan, J. Li, Y. Yu et al., “Iterative tensor voting for pavement crack extraction using mobile laser scanning data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, pp. 1527–1537, 2015.
  5. M. Rodríguez-Martín, S. Lagüela, D. González-Aguilera, and P. Rodríguez-Gonzálvez, “Procedure for quality inspection of welds based on macro-photogrammetric three-dimensional reconstruction,” Optics and Laser Technology, vol. 73, pp. 54–62, 2015.
  6. T. Luhmann, S. Robson, S. Kyle, and I. Harley, Close Range Photogrammetry: Principles, Methods and Applications, Whittles Publishing, Dunbeath, UK, 2006.
  7. B. Riveiro, P.-B. Lourenço, D.-V. Oliveira, H. González-Jorge, and P. Arias, “Automatic morphologic analysis of quasi-periodic masonry walls from LiDAR,” Computer-Aided Civil and Infrastructure Engineering, 2015.
  8. D. González-Aguilera, A. Muñoz-Nieto, J. Gómez-Lahoz, J. Herrero-Pascual, and G. Gutierrez-Alonso, “3D digital surveying and modelling of cave geometry: application to paleolithic rock art,” Sensors, vol. 9, no. 2, pp. 1108–1127, 2009.
  9. I. Puente, H. González-Jorge, J. Martínez-Sánchez, and P. Arias, “Review of mobile mapping and surveying technologies,” Measurement, vol. 46, no. 7, pp. 2127–2145, 2013.
  10. S. I. El-Halawany and D. D. Lichti, “Detection of road poles from mobile terrestrial laser scanner point cloud,” in Proceedings of the International Workshop on Multi-Platform/Multi-Sensor Remote Sensing and Mapping (M2RSM '11), pp. 1–6, IEEE, Xiamen, China, January 2011.
  11. S. Zancajo-Blazquez, S. Laguela-Lopez, D. Gonzalez-Aguilera, and J. Martinez-Sanchez, “Segmentation of indoor mapping point clouds applied to crime scenes reconstruction,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 7, pp. 1350–1358, 2015.
  12. D. Roca, S. Lagüela, L. Díaz-Vilariño, J. Armesto, and P. Arias, “Low-cost aerial unit for outdoor inspection of building façades,” Automation in Construction, vol. 36, pp. 128–135, 2013.
  13. S. Lange, N. Sünderhauf, P. Neubert, S. Drews, and P. Protzel, “Autonomous corridor flight of a UAV using a low-cost and light-weight RGB-D camera,” in Proceedings of the 6th AMiRE Symposium, Bielefeld, Germany, May 2011.
  14. B. Yang and C. Chen, “Automatic registration of UAV-borne sequent images and LiDAR data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 101, pp. 262–274, 2015.
  15. B. Jutzi, M. Weinmann, and J. Meidow, “Weighted data fusion for UAV-borne 3D mapping with camera and line laser scanner,” International Journal of Image and Data Fusion, vol. 5, no. 3, pp. 226–243, 2014.
  16. S. Huh, D. H. Shim, and J. Kim, “Integrated navigation system using camera and gimbaled laser scanner for indoor and outdoor autonomous flight of UAVs,” in Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS '13), pp. 3158–3163, IEEE, Tokyo, Japan, November 2013.
  17. D. Droeschel, M. Schreiber, and S. Behnke, “Omnidirectional perception for lightweight UAVs using a continuously rotating 3D laser scanner,” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 40, no. 1, pp. 107–112, 2013.
  18. F.-J. Pei, H.-Y. Li, and Y.-H. Cheng, “An improved FastSLAM system based on distributed structure for autonomous robot navigation,” Journal of Sensors, vol. 2014, Article ID 456289, 9 pages, 2014.
  19. K. Sibilski and M. Kmiecik, “VTOL UAV optically aided indoor attitude estimation with a complementary filter for the special orthogonal group SO3,” in Proceedings of the 51st AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, Grapevine, Tex, USA, January 2013.
  20. J. Yin, L. Carlone, S. Rosa, and B. Bona, “Graph-based robust localization and mapping for autonomous mobile robotic navigation,” in Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA '14), pp. 1680–1685, IEEE, Tianjin, China, August 2014.
  21. G. Vosselman, “Design of an indoor mapping system using three 2D laser scanners and 6 DOF SLAM,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 2, no. 3, pp. 173–179, 2014.
  22. Y.-K. Wang, J. Huo, and X.-S. Wang, “A real-time robotic indoor 3D mapping system using duel 2D laser range finders,” in Proceedings of the 33rd Chinese Control Conference (CCC '14), pp. 8542–8546, IEEE, Nanjing, China, July 2014.
  23. N. Ward, “Future of IALA DGNSS,” in Proceedings of the Institute of Navigation, National Technical Meeting, vol. 1, pp. 184–187, January 2006.
  24. V. Bistrov, “Performance analysis of alignment process of MEMS IMU,” International Journal of Navigation and Observation, vol. 2012, Article ID 731530, 11 pages, 2012.