Designing an Efficient Emergency Response Airborne Mapping System with Multiple Sensors
Multisource remote sensing data have been extensively used in disaster and emergency response management. Different types of visual and measured data, such as high-resolution orthoimages, real-time videos, accurate digital elevation models, and three-dimensional landscape maps, enable the production of effective rescue plans and aid the efficient dispatching of rescuers after disasters. Generally, such data are acquired using unmanned aerial vehicles equipped with multiple sensors. Efficient, real-time access to data is more important in emergency response cases than in traditional application scenarios. In this study, an efficient emergency response airborne mapping system equipped with multiple sensors was designed. The system comprises groups of wide-angle cameras, a high-definition video camera, an infrared video camera, a LiDAR system, and a global navigation satellite system/inertial measurement unit. The wide-angle cameras had a visual field of 85° × 105°, facilitating the efficient operation of the mapping system. Numerous calibrations were performed on the constructed mapping system. In particular, initial calibration and self-calibration were performed to determine the relative poses between the wide-angle cameras so that all the acquired images could be fused. The mapping system was then tested in an area with altitudes of 1000–1250 m. The biases of the wide-angle cameras were small (0.090 m, −0.018 m, and −0.046 m in the x-, y-, and z-axes, respectively). Moreover, the root-mean-square error (RMSE) along the planar direction was smaller than that along the vertical direction (0.202 and 0.294 m, respectively). The LiDAR system achieved similarly small biases (0.117, −0.020, and −0.039 m in the x-, y-, and z-axes, respectively) and a smaller RMSE in the vertical direction (0.192 m) than the wide-angle cameras; however, the RMSE of the LiDAR system along the planar direction (0.276 m) was slightly larger.
The proposed system shows potential for use in emergency response systems for efficiently acquiring data such as images and point clouds.
1. Introduction
Remote sensing is useful for acquiring various types of visual and measured data, such as high-resolution orthoimages, real-time videos, accurate digital elevation models (DEMs), and three-dimensional (3D) landscape maps [1, 2], thus providing support for decision-making in disaster response [3–5]. Moreover, these data can be employed to plan effective rescues and assist the efficient dispatching of rescuers shortly after a disaster [6–8]. Unmanned aerial vehicle (UAV) systems equipped with multiple sensors are used to rapidly acquire such data because they can be deployed in remote areas that would otherwise be inaccessible. Moreover, such systems can be deployed rapidly and flexibly, which is crucial for the dynamic monitoring of disasters and accidents [9–15].
Recently, sensor technology has advanced rapidly. Newly developed remote sensors (such as high-definition video cameras, digital cameras, and small, lightweight LiDAR systems) can be easily integrated and mounted on airborne platforms, thus affording efficient mapping systems. A digital camera, small laser scanner, and low-cost global positioning system (GPS)/inertial measurement unit (IMU) were integrated and mounted on a mini unmanned helicopter to develop a colored point cloud model from which the 3D geometric and textural information of objects can be easily extracted. Moreover, a flexible, lightweight, rapid mapping system was mounted on a large unmanned helicopter for emergency response. This system comprised a digital camera, laser scanner, and GNSS/IMU sensors that can be used to acquire high-quality DEMs and orthoimages. Researchers have also reported [18, 19] a UAV system, known as LiCHy, comprising four main units: a LiDAR, charge-coupled devices, hyperspectral sensors, and a GPS/IMU. Such systems can obtain multiple types of accurately georeferenced observation data.
Studies have demonstrated that airborne multisource remote sensing systems are efficient, safe, and affordable and can be used to collect and deliver multiple types of georeferenced data. Such systems can also ensure that the disaster response community has rapid and timely access to accurate and relevant geospatial data during a disaster [22–24]. However, the performance of such systems equipped with a single camera is limited in several respects. First, these systems have narrow fields of view, hindering the efficiency of image acquisition. Second, their storage speed and capacity are limited. Furthermore, relying on a single camera is unreliable, as it may malfunction during operation. To overcome these limitations, in this study, we developed an emergency response airborne mapping system equipped with multiple sensors. This system integrates wide-angle cameras, a LiDAR system, and a video camera into a single platform that can be installed on a large fixed-wing UAV.
2. Mapping System Design and Calibration
2.1. Mapping System Design
The proposed system comprised three groups of wide-angle cameras, a high-definition (HD) video camera, an infrared (IR) video camera, a LiDAR system, and a global navigation satellite system (GNSS)/IMU (Figure 1). The groups of wide-angle cameras were designed to efficiently acquire image data, which can be used to produce high-resolution orthoimages, accurate DEMs, point clouds, and 3D models. HD images were captured in real time using the HD video camera, particularly during the day, whereas IR images were obtained at night using the IR video camera, which is an active measurement sensor. Point cloud data were acquired using the LiDAR system for quickly calculating the disaster earthwork volume; this is particularly important under poor visibility conditions or at night. In addition to navigation, the GNSS/IMU was used to provide the initial poses for the images acquired using the wide-angle camera groups and for the LiDAR scans.
2.1.1. Wide-Angle Camera Groups
The three groups of wide-angle cameras are the most important components of the mapping system. Two of the camera groups consisted of five Canon 5DSR cameras with a visual field of 60° × 105° and a focal length of 50 mm (Figures 1–3). These cameras produced images with approximately 200 million pixels (9216 × 20928). If one of these camera groups malfunctioned, the other could still obtain images with a forward overlap exceeding 60%, which is the nominal average for most mapping projects. Additionally, if both groups operated normally, their images could be merged to form combined images with 400 million pixels and a visual field of 85° × 105°; this wide visual field would increase the efficiency of the mapping system. The third camera group consisted of two Canon 5DSR cameras with a focal length of 24 mm, which afforded a visual field of 100° (forward) × 72° (side) and images with 100 million pixels. This group also provided the system with a double-redundant imaging capability. Moreover, this camera group provided mutual geometric-calibration support with the other two groups, thus improving the reliability of the system.
2.1.2. Other Sensors
The LiDAR system was used to collect point cloud data for quickly calculating the disaster earthwork volume, particularly at night and under poor visibility conditions. These point cloud data could also be combined with the wide-angle camera images to create 3D models. To match the view of the point cloud data with that of the wide-angle cameras, a wide-field-of-view laser scanner (A-Pilot AP-3500, Sure Star) was used. This type of system has proven highly effective in airborne topographic surveys of mountainous regions.
To ensure time synchronization during movement, as is common in LiDAR and photographic systems, a positioning and orientation system (POS) was used to provide direct position and orientation data. All the sensors used the GNSS clock as a time reference. Moreover, to ensure that all sensors shared the same geographic coordinate system, the coordinates of all sensors were transformed into the coordinate and attitude system of the GNSS/IMU. The integrated NovAtel SPAN® UIMU-LCI POS was used for navigation (GNSS, 2015). Further details on the instrument specifications are given in Table 1.
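Bringing each sensor's observations into the shared GNSS/IMU frame amounts to a boresight rotation plus a lever-arm offset, followed by the platform's attitude rotation. The sketch below illustrates this transformation; the ZYX rotation convention and all numeric values are illustrative assumptions, not the system's actual calibration parameters.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-navigation rotation built from roll/pitch/yaw (radians), ZYX order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_to_navigation(p_sensor, boresight, lever_arm, attitude, imu_position):
    """Map a point observed in a sensor frame into the shared GNSS/IMU frame:
    sensor -> IMU body (boresight rotation + lever arm), then body -> navigation."""
    p_body = boresight @ np.asarray(p_sensor, dtype=float) + lever_arm
    return rotation_matrix(*attitude) @ p_body + imu_position
```

With an identity boresight and level attitude, a point at the sensor origin maps to the IMU position shifted by the lever arm, which is a quick sanity check on any such transformation chain.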
2.1.3. UAV Platform
A large fixed-wing "Harrier" UAV (Figure 4), which can carry various types of payloads and fly all day and in all weather conditions, was used herein. The main specifications of this platform are given in Table 2. The Harrier is a medium-to-high-altitude, low-speed, long-endurance UAV system based on mature military UAV systems. It uses wheeled takeoff and landing, entire-process automatic control, line-of-sight data links, and combined navigation techniques. Because they are equipped with image reconnaissance and monitoring systems, signal detection systems, and multipurpose functions, Harrier UAVs are easy to operate and highly reliable; they also have low maintenance requirements and a long service life. A Harrier UAV comprises the vehicle, survey and control, information transmission, mission payload, and integrated support subsystems.
2.1.4. Emergency Response
Automatic route-planning software was developed to automatically generate the flight route based on the preloaded DEM and the latitude and longitude of the study area (Figure 5). Moreover, if the flight speed or altitude changed, the system could automatically calculate the exposure interval necessary to ensure that the degree of data overlap met the requirements.
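The exposure-interval recalculation described above can be sketched as follows, assuming flat terrain and a nadir-pointing camera so that the forward footprint follows directly from the field of view and flying height. The function and all figures are illustrative, not the route-planning software's actual implementation.

```python
import math

def exposure_interval(height_m, speed_mps, fov_forward_deg, overlap=0.6):
    """Longest exposure interval (s) that still keeps the forward overlap of
    successive images at or above `overlap` (flat terrain, nadir view)."""
    # Ground distance covered by one image in the flight direction.
    footprint = 2.0 * height_m * math.tan(math.radians(fov_forward_deg) / 2.0)
    # The exposure base (distance between exposures) may be at most
    # (1 - overlap) of the footprint.
    return (1.0 - overlap) * footprint / speed_mps

# Hypothetical figures: 1000 m above terrain, 40 m/s ground speed,
# 85° forward field of view, 60% required forward overlap.
interval = exposure_interval(1000.0, 40.0, 85.0, 0.6)  # ≈ 18.3 s
```

If the flight speed doubles or the height halves, the interval shrinks proportionally, which is why the system recomputes it whenever speed or altitude changes.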
2.2. Data Processing
The complete data-processing workflow is shown in Figure 6. After the UAV landed, the acquired data were exported to a computer. First, the GPS and IMU data were processed to obtain the position and orientation information. Based on the synchronization of the UAV systems, a position and orientation were assigned to each frame of the video data. Using the TerraSolid software, the GPS/IMU navigation data were employed to calibrate the acquired laser scanner data and obtain the 3D spatial coordinates (x-, y-, and z-axes) of the ground targets. This step was also used to eliminate the noise and abnormal values in the laser scanner data and subsequently perform multiple-route splicing. The point cloud data of the entire survey area were output in the LAS format. The single images obtained using the different cameras were merged into combined wide-angle images. Finally, the position and orientation data were used to geocode the video data, LiDAR data, and wide-angle images into a common geographic coordinate system.
2.3. Mapping System Calibration
The basic principle of combined wide-angle imaging is to use reimaging technology to form a single-center wide-angle image equivalent to one that could be obtained using a multidirectional lens (or multiple cameras whose main optical axes show different orientations). The process is shown in Figure 7. The reimaging technology can be divided into two major steps, the initial and actuarial calculations.
The initial calculation refers to the distortion correction of each camera based on ground calibration parameters and the projection transformation of the interior orientation elements of each camera relative to the virtual combined camera. All these calculations and transformations are based on the classical photogrammetric formulas.
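Because the individual cameras and the virtual combined camera share a single projection center, the projection transformation onto the virtual camera reduces to a rotation-induced homography. A minimal sketch follows, with illustrative intrinsic matrices; the real interior orientation elements come from the ground calibration, and this is not the authors' actual code.

```python
import numpy as np

def reproject_to_virtual(pixel_xy, K_cam, R_cam_to_virtual, K_virtual):
    """Map a distortion-corrected pixel from one physical camera onto the
    virtual combined camera. With a common projection center the mapping is
    the homography H = K_v @ R @ inv(K_c)."""
    H = K_virtual @ R_cam_to_virtual @ np.linalg.inv(K_cam)
    u = H @ np.array([pixel_xy[0], pixel_xy[1], 1.0])
    return u[0] / u[2], u[1] / u[2]  # back to inhomogeneous coordinates
```

When the rotation is the identity and both cameras share the same intrinsics, each pixel maps to itself, which is a useful sanity check before inserting the calibrated rotation.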
Herein, the initial calculation was performed to calibrate the equipment parameters, including the distortion parameters of the individual cameras and the interior orientation elements of the virtual combined camera. The actuarial calculation forms the core of the proposed method and includes static self-calibration. To calibrate the deformation errors of the camera system caused by mechanical processing, installation, temperature, and material fatigue, we adopted a static self-calibration method for the portion-by-portion calibration of the images obtained using the individual cameras. This method uses the parallax of the overlapping parts of adjacent images (Figure 8). The equation used for the self-calibration is as follows:

p_x^{ij} = \frac{xy}{f}\,(\Delta\omega_i-\Delta\omega_j) - \left(f+\frac{x^2}{f}\right)(\Delta\varphi_i-\Delta\varphi_j) + y\,(\Delta\kappa_i-\Delta\kappa_j)
p_y^{ij} = \left(f+\frac{y^2}{f}\right)(\Delta\omega_i-\Delta\omega_j) - \frac{xy}{f}\,(\Delta\varphi_i-\Delta\varphi_j) - x\,(\Delta\kappa_i-\Delta\kappa_j) \quad (1)

where f represents the focal length; i and j denote adjacent images; p_x^{ij} and p_y^{ij} represent the amounts of parallax at image point (x, y) in the overlapping area of adjacent images i and j, obtained by image matching; and (\Delta\omega_i, \Delta\varphi_i, \Delta\kappa_i) and (\Delta\omega_j, \Delta\varphi_j, \Delta\kappa_j) are the corrections to the exterior orientation elements of the individual cameras relative to the virtual combined camera. Each parallax point in each pair of adjacent images can be inserted into equation (1), and the corrections for the individual cameras can be obtained by solving all the parallax equations simultaneously.
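The static self-calibration can be illustrated as a linear least-squares problem. The sketch below uses one plausible linearized, rotation-only parallax model (the displacement terms of the classical collinearity linearization) and synthetic data; the actual functional model, datum choice, and solver in the paper's software are not specified, so everything here is an assumption.

```python
import numpy as np

F = 50.0  # focal length in mm (the 50 mm lenses described above)

def rot_displacement(x, y, dw, dp, dk, f=F):
    """Linearized image displacement caused by small rotation corrections
    (domega, dphi, dkappa) at image point (x, y)."""
    dx = (x * y / f) * dw - (f + x**2 / f) * dp + y * dk
    dy = (f + y**2 / f) * dw - (x * y / f) * dp - x * dk
    return dx, dy

def solve_corrections(obs, n_cams, f=F):
    """Least-squares angular corrections per camera; camera 0 is held fixed
    as the datum. obs: list of (i, j, x, y, px, py) parallax measurements."""
    A, b = [], []
    for i, j, x, y, px, py in obs:
        rowx = np.zeros(3 * n_cams)
        rowy = np.zeros(3 * n_cams)
        cx = [x * y / f, -(f + x**2 / f), y]    # dx coefficients (dw, dp, dk)
        cy = [f + y**2 / f, -(x * y / f), -x]   # dy coefficients
        rowx[3*i:3*i+3] = cx
        rowx[3*j:3*j+3] = [-c for c in cx]      # parallax = disp_i - disp_j
        rowy[3*i:3*i+3] = cy
        rowy[3*j:3*j+3] = [-c for c in cy]
        A += [rowx, rowy]
        b += [px, py]
    A, b = np.array(A), np.array(b)
    A = A[:, 3:]                                # drop camera 0 (datum)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate([np.zeros(3), sol]).reshape(n_cams, 3)
```

Holding one camera fixed removes the datum defect: only relative orientations between the cameras are observable from parallax alone.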
Because of the influence of various factors, when the cameras are used in combination, it is difficult to ensure simultaneous exposure times for all the cameras. Furthermore, even a minute difference between successive exposures produces parallax in overlapping images. This parallax can be expressed as a function of the exterior orientation elements of the airborne platform; thus, the images obtained using each camera can be calibrated dynamically. The mathematical relation between the exterior orientation elements and the motion parameters of the airborne platform can be described using the following equation:

(X_i', Y_i', Z_i', \varphi_i', \omega_i', \kappa_i') = (X_s + X_i + \Delta X,\; Y_s + Y_i + \Delta Y,\; Z_s + Z_i + \Delta Z,\; \varphi_s + \varphi_i + \Delta\varphi,\; \omega_s + \omega_i + \Delta\omega,\; \kappa_s + \kappa_i + \Delta\kappa) \quad (2)

where (X_i, Y_i, Z_i, \varphi_i, \omega_i, \kappa_i) represent the exterior orientation elements of camera i relative to the virtual combined camera (obtained using the static calibration); (X_s, Y_s, Z_s, \varphi_s, \omega_s, \kappa_s) constitute the exterior orientation information of the airborne platform flying at velocity v; and (\Delta X, \Delta Y, \Delta Z, \Delta\varphi, \Delta\omega, \Delta\kappa) denote the increments in the exterior orientation elements owing to the exposure time delay in the flying state.
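Under the assumption that the exposure-delay increments are simply the platform's linear and angular rates multiplied by the delay, the dynamic correction can be sketched as a per-element sum. The function name and the rate-times-delay model are illustrative assumptions.

```python
def dynamic_exterior_orientation(platform_eo, camera_offset, platform_rates, dt):
    """Exterior orientation of one camera at its (delayed) exposure time:
    platform EO + static camera-to-virtual-camera offset + platform rate * delay.
    Each argument is a 6-tuple ordered (X, Y, Z, phi, omega, kappa); rates hold
    the platform's linear (m/s) and angular (rad/s) velocities."""
    return tuple(p + c + r * dt
                 for p, c, r in zip(platform_eo, camera_offset, platform_rates))
```

For example, a 0.5 s exposure delay at 40 m/s ground speed shifts the along-track position by 20 m, which is far from negligible at mapping accuracies of a few decimeters.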
An intense image motion will occur during the exposure time, particularly if the motion exceeds half of a pixel at high flight speeds or low flying heights. A formula for calculating the IMC has been proposed as

\mathrm{IMC} = \frac{v \cdot f}{3600\,h} \quad (3)

where IMC is the true rate of image movement (millimeters per second), v is the true ground speed (meters per hour), f is the focal length (millimeters), and h is the altitude above the terrain (meters).
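The rate-of-image-motion relation and the half-pixel blur criterion above can be checked numerically. The figures in the example (ground speed, pixel pitch, exposure time) are hypothetical, chosen only to illustrate the units.

```python
def image_motion_mm_per_s(ground_speed_m_per_h, focal_mm, altitude_m):
    """Rate of image motion on the focal plane in mm/s. Ground speed is given
    in meters per hour, matching the units stated in the text."""
    return (ground_speed_m_per_h / 3600.0) * focal_mm / altitude_m

def blur_exceeds_half_pixel(imc_mm_s, exposure_s, pixel_pitch_mm):
    """True if the motion blur accumulated over one exposure exceeds half a pixel."""
    return imc_mm_s * exposure_s > 0.5 * pixel_pitch_mm

# Hypothetical figures: 40 m/s (144 000 m/h) ground speed, 50 mm lens,
# 1000 m above terrain.
imc = image_motion_mm_per_s(144_000.0, 50.0, 1000.0)  # 2.0 mm/s
```

At 2.0 mm/s, a 1/1000 s exposure accumulates 0.002 mm of blur, so whether the half-pixel limit is breached depends directly on the sensor's pixel pitch and shutter speed.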
In this study, we used an equation similar to that of time delay integration and determined the gray value of each unit record of L by using the least-squares method:

g_i = \frac{1}{n}\sum_{k=i}^{i+n-1} L_k, \quad i = 1, \dots, m \quad (4)

As shown in Figure 9, t is the exposure time, x_1, \dots, x_n are the exposed pixels in the flight direction, g_1, \dots, g_n are the corresponding gray values of these pixels, and L_k is the kth unit record of L.
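One way to realize this least-squares recovery is to model each recorded gray value as the mean of the n consecutive unit records crossed during the exposure, then solve the resulting linear system. The averaging model and window length here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def recover_unit_records(gray_values, n):
    """Least-squares estimate of the unit records L, assuming each recorded
    gray value is the mean of n consecutive unit records crossed during the
    exposure time (a simple motion-blur model along the flight direction)."""
    g = np.asarray(gray_values, dtype=float)
    m = g.size
    A = np.zeros((m, m + n - 1))
    for i in range(m):
        A[i, i:i + n] = 1.0 / n        # row i averages records i .. i+n-1
    L, *_ = np.linalg.lstsq(A, g, rcond=None)
    return L
```

The system is underdetermined (m equations, m + n − 1 unknowns), so `lstsq` returns the minimum-norm solution; in practice additional constraints or overlapping exposures would pin down the remaining degrees of freedom.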
3. Materials and Methods
3.1. Study Area
The study area was located in Pingba District, Anshun City, in the west central part of Guizhou Province, China (Figure 10). This area comprises steep, mountainous terrain lying at the watershed between the Yangtze River and Pearl River systems on the eastern side of the Yunnan‒Guizhou Plateau. The geological structure of the area is complex, and the karst landforms are unique. The altitude of the terrain is within the range of 1102–1695 m, being generally higher in the northwest and lower in the southeast; the area is mainly mountainous and has an average elevation of 1282 m. The study area has a humid subtropical monsoon climate with an annual average temperature of 14°C and an average annual precipitation of 1146.3 mm. On average, there are 1276 h of sunshine annually. Winds are light, and the annual average wind speed is 2.4 m/s. The area to be imaged was located between 105.83°E and 106.52°E and between 26.23°N and 26.64°N, and its size was 700 km².
3.2. Evaluation Methods of the Mapping System
The groups of wide-angle cameras and the LiDAR system are the most important sensors in the proposed mapping system. Therefore, we evaluated the accuracy of these sensors using their estimates of the locations of a set of ground control points. The locations of 89 control points were determined using the GPS real-time kinematic (RTK) technique. The observations obtained using the sensors were evaluated in terms of the deviation (bias) and root-mean-square error (RMSE) along the planar and vertical directions:

\mathrm{bias} = \frac{1}{n}\sum_{i=1}^{n}(m_i - r_i)
\mathrm{RMSE}_{\mathrm{planar}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[(x_i-\hat{x}_i)^2 + (y_i-\hat{y}_i)^2\right]}
\mathrm{RMSE}_{\mathrm{vertical}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(z_i-\hat{z}_i)^2}

where x_i, y_i, and z_i are the measured values along the x-, y-, and z-axes, respectively; \hat{x}_i, \hat{y}_i, and \hat{z}_i are the corresponding reference values determined using the GPS RTK technique; m_i is a measured value in one of the three directions; r_i is the corresponding reference value; and n is the total number of measured values.
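The bias and RMSE metrics used for the evaluation can be computed directly; the sketch below follows the standard definitions (mean signed deviation per axis, combined x/y residuals for the planar RMSE). Function names are ours, not from the paper's software.

```python
import math

def bias(measured, reference):
    """Mean signed deviation along one axis."""
    return sum(m - r for m, r in zip(measured, reference)) / len(measured)

def rmse(measured, reference):
    """Root-mean-square error along one axis (e.g., the vertical z-axis)."""
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference))
                     / len(measured))

def rmse_planar(mx, rx, my, ry):
    """RMSE over the horizontal plane, combining x- and y-axis residuals."""
    return math.sqrt(sum((a - b) ** 2 + (c - d) ** 2
                         for a, b, c, d in zip(mx, rx, my, ry)) / len(mx))
```

Note that a small bias with a larger RMSE (as observed in the vertical direction here) indicates errors that are centered near zero but widely scattered.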
After the proposed system had been flown over the test area, the experimental data were processed to obtain digital orthoimages, a wide-angle camera-based point cloud, and a LiDAR-based point cloud (Figure 11). To test the accuracy of the measurements obtained using the wide-angle cameras, the locations of the ground control points were determined after aerial triangulation and compared with the reference locations (Table 3). The deviations in the measured values are small, with little variation among them. The exception is the vertical direction, for which the RMSE is larger (0.294 m). As shown in Figure 12, the measurement errors along the x- and y-axes are less than 0.4 m, whereas the maximum error in the vertical direction is nearly 1 m.
We laid out numerous ground control points (GCPs), surveyed using GNSS, to check the accuracy of the airborne mapping system; the distribution of the GCPs is shown in Figure 13. The results (Table 3) show that the accuracies of the wide-angle cameras and the LiDAR system are reliable, although their RMSEs along the horizontal plane and the vertical direction differ. The RMSE of the LiDAR system along the horizontal plane is larger than that of the wide-angle cameras; however, in the vertical direction, it is smaller. As shown in Figure 14, the error in the LiDAR measurements along the horizontal plane is less than 0.5 m, and that along the vertical direction is less than 0.6 m. These results indicate that the LiDAR measurements are more accurate along the vertical direction, whereas the wide-angle camera measurements are more accurate along the horizontal direction.
In this study, a wide-angle emergency response airborne mapping system equipped with multiple sensors was proposed. Four main sensors were integrated on a large fixed-wing platform: groups of wide-angle cameras that can be used to obtain high-resolution, wide-field-of-view RGB color images; an HD video camera that can capture and transmit real-time videos of disaster scenes during daylight; an infrared camera that can record and transmit live videos at night; and a LiDAR system that can obtain elevation point cloud data for calculating the earth-rock volume in disaster areas. Tests on the proposed system demonstrated that it is efficient and reliable and can acquire RGB imagery and infrared videos in real time. Furthermore, high-quality 3D point cloud data and high-resolution orthoimages can be obtained by processing the data after landing. Based on observations of ground control points using the GPS/IMU, the RMSE of the imagery and point clouds obtained using the system was at most 0.294 m, which can meet the needs of various applications. It was also found that the images obtained using the groups of wide-angle cameras and the LiDAR system show high measurement accuracy. These data are suitable for use in emergency response. It can be concluded that the proposed system offers an all-weather, all-day image acquisition capability that can meet the requirements of national airborne emergency mapping in China.
Data Availability
The data used to support the findings of this study were collected by the equipment itself, and a link to the data cannot be provided.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported jointly by the Outstanding Youth Science and Technology Program of Guizhou Province of China (5615) and the Guizhou Science and Technology Major Project ((2014)6011).
References
G. Petrie, "Airborne digital frame cameras: the technology is really improving," GeoInformatics, vol. 6, no. 7, pp. 18–27, 2003.
H. B. Abrahamsen, "Use of an unmanned aerial vehicle to support situation assessment and decision-making in search and rescue operations in the mountains," Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine, vol. 22, no. 1, p. 16, 2014.
D. Morgan and F. Edgar, Aerial Mapping: Methods and Applications, CRC Press, Boca Raton, FL, USA, 2nd edition, 2001.
K. E. Joyce, S. E. Belliss, S. V. Samsonov, S. J. McNeill, and P. J. Glassey, "A review of the status of satellite remote sensing and image processing techniques for mapping natural hazards and disasters," Progress in Physical Geography: Earth and Environment, vol. 33, no. 2, pp. 183–207, 2009.
P. Boccardo and F. Giulio Tonolo, "Remote-sensing techniques for natural disaster impact assessment," in Advances in Mapping from Remote Sensor Imagery: Techniques and Applications, X. Yang and J. Li, Eds., pp. 387–414, CRC Press, Boca Raton, FL, USA, 2012.
S. Lewis, "Remote sensing for natural disasters: facts and figures," 2014, http://www.scidev.net/global/earth-science/feature/remote-sensingfor-natural-disasters-facts-and-figures.html.
K. Jacobsen, "Development of digital aerial cameras," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 38, pp. 1–6, 2010.
M. Nagai, R. Shibasaki, D. Manandhar, and H. J. Zhao, "Development of digital surface model and feature extraction by integrating laser scanner and CCD sensor with IMU," in Proceedings of the ISPRS Congress: Geo-Imagery Bridging Continents, Istanbul, Turkey, July 2004.
K. Choi, I. Lee, S. W. Shin, and K. Ahn, "A project overview for the development of a light and flexible rapid mapping system for emergency response," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVII, part B5, pp. 915–920, 2008.
R. T. Eguchi, C. K. Huyck, S. Ghosh, and B. J. Adams, "The application of remote sensing technologies for disaster management," in Proceedings of the 14th World Conference on Earthquake Engineering, pp. 12–17, Beijing, China, October 2008.
Z. J. Lin, G. Z. Su, C. Y. Shen, and B. Y. Wu, "Design and experiment of an active-passive multi-sensor combined wide-angle imaging system," Geomatics and Information Science of Wuhan University, vol. 42, pp. 1537–1548, 2017.
"A-Pilot airborne LiDAR," 2017, http://www.isurestar.com/index.php/en-product-product.html#8.
M. M. R. Mostafa and J. Hutton, "Direct positioning and orientation systems: how do they work? What is the attainable accuracy?" in Proceedings of the American Society of Photogrammetry and Remote Sensing Annual Meeting, pp. 23–27, St. Louis, MO, USA, April 2001.
"SPAN® GNSS inertial navigation systems," 2015, http://www.novatel.com/products/span-gnss-inertial-systems/span-imus/uimu-lci/.
Harrier UAV System Unmanned Aerial Vehicle Technical Proposal, Guizhou Aircraft Company of AVIC, 2013.