Research Article | Open Access

Shumin Wang, Ling Ding, Zihan Chen, Aixia Dou, "A Rapid UAV Image Georeference Algorithm Developed for Emergency Response", Journal of Sensors, vol. 2018, Article ID 8617843, 10 pages, 2018. https://doi.org/10.1155/2018/8617843

A Rapid UAV Image Georeference Algorithm Developed for Emergency Response

Academic Editor: Vincenzo Marletta
Received: 24 Apr 2018
Revised: 13 Aug 2018
Accepted: 10 Sep 2018
Published: 23 Oct 2018

Abstract

The image collection system based on the unmanned aerial vehicle (UAV) plays an important role in postearthquake response and disaster investigation. In the postearthquake response period, traditional UAV image processing methods, such as stitching hundreds of images or reconstructing a 3D model, may take one or several hours, so their efficiency needs to be improved. To solve this problem, a rapid georeference method for postearthquake UAV images is proposed in this paper. Firstly, we discuss the rapid georeference model of UAV images and adopt the world file, designed and developed by ESRI, to organize the georeferenced image data. Next, the direct georeference method based on the position and attitude data collected by the autopilot system is employed to compute the upper-left corner coordinates of the georeferenced images. Because the rapid georeference model and the world file rotate the image in different manners, an error compensation model for the image rotation is also considered in this paper. Finally, feature extraction and matching between the UAV images and a reference image are used to improve the accuracy of the position parameters in the world file, which reduces the systematic error of the georeferenced images. We use UAV images collected from Danling County and Beichuan County, Sichuan Province, to implement rapid georeference experiments with different types of UAV. All the images are georeferenced within three minutes. The results show that the algorithm proposed in this paper satisfies the time and accuracy requirements of postearthquake response and has important application value.

1. Introduction

The high-resolution image collection system based on the unmanned aerial vehicle (UAV) has the advantages of light weight, low cost, and flexibility. It has become a vital tool for rapidly collecting and viewing disaster information in postearthquake response and has attracted increasing attention [1–6]. At present, the main processing methods for UAV images are image stitching [7–11] and 3D point cloud model reconstruction [12–15] based on computer vision theory. Image stitching can produce a panoramic image of part of a disaster area or even the whole disaster area. It generally needs steps such as feature extraction, feature matching, image transformation, bundle adjustment, and image fusion, which usually take one or several hours. The 3D point cloud reconstructed from UAV images provides disaster information from different sides of the objects of concern. It generally needs steps such as sparse matching, dense matching, adjustment, and point cloud calculation, which also require much time. All the methods mentioned above are implemented in commercial software, such as PhotoScan (http://www.agisoft.com/), an image stitching software, and Smart 3D (http://www.acute3d.com/), a three-dimensional model reconstruction software. The image maps and three-dimensional models output from this software can be provided to the postearthquake commanders as next-level results. However, in the postearthquake response period, one to several hours is quite long for the emergency response commanders; they want to know the disaster information as soon as possible so that they can make correct decisions to muster rescue forces and distribute necessary supplies.

To improve the image processing speed, [16–19] adopt the SLAM (simultaneous localization and mapping) framework to create a panoramic image in an incremental manner. For large-scale mosaics that contain thousands of images, the filtering approach becomes computationally infeasible because the size of the state vector grows very large. Commercial solutions such as DroneDeploy Live Map produce mosaics instantly during the flight, but they are designed to create and view a low-resolution map on a mobile device; to obtain a high-resolution map, the images have to be uploaded to the cloud through the network and the result downloaded after stitching. Processing the high-quality data can take up to a few hours for a very large job with high-resolution imagery (https://support.dronedeploy.com/docs/live-map). The speed depends on the network, so real-time georeference is hard to achieve.

To satisfy the requirements of postearthquake response applications, a rapid georeference method for postearthquake UAV images is proposed in this paper; the technical flow is shown in Figure 1. Firstly, the rapid georeference model is established using the position and attitude data collected from the autopilot system. Then, we deduce the image rotation transformation method, which converts the image rotation around the image center to the image rotation around the upper-left corner. Considering the limited accuracy of a GPS module without differential positioning, feature extraction and matching are employed to reduce the systematic error and improve the georeference accuracy. Finally, we validate the proposed algorithm through rapid georeference experiments on UAV images collected from Danling County and Beichuan County, Sichuan Province, using different types of UAV. The results indicate that the algorithm achieves rapid georeference of up to hundreds of UAV images within three minutes. It saves a great deal of time and provides important technical support in the early postearthquake response period.

2. The Rapid Georeference of UAV Images

2.1. The Rapid Georeference Model of UAV Images

The disaster information collection system based on the UAV is usually equipped with a high-resolution digital camera and an autopilot system, which records the time, position, and attitude of the camera at each exposure. The UAV images can be georeferenced by synchronizing the images with the POS (positioning and orientation system) data. Because the digital camera and the autopilot system are mounted rigidly together, with the autopilot aligned with the main optical axis of the camera, the lever arm and boresight angle are ignored in the emergency response situation; this still satisfies the accuracy requirement. We can therefore treat the position and attitude data output from the autopilot system as those of the camera. In order to avoid distortion of the georeferenced images in the spatial transformation, which would affect disaster image interpretation, we adopt an image rigid body transformation composed of scale, rotation, and translation.

In formulas (1) and (2), which express the rigid body transformation [x', y']ᵀ = λR(θ)[x, y]ᵀ + [Tx, Ty]ᵀ, (x', y') is the georeferenced image pixel coordinate and (x, y) is the image pixel coordinate before georeference. λ is the scale factor, which indicates the image spatial resolution. θ is the camera rotation angle around the optical axis. (Tx, Ty) is the translation, where Tx is the translation amount along the x-axis and Ty is the translation amount along the y-axis.
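
For illustration, a minimal Python sketch of this rigid body transformation (the paper's implementation is C# with GDAL; the function name and the example values here are our assumptions):

```python
import numpy as np

def rigid_transform(px, py, scale, theta_rad, tx, ty):
    """Apply the scale-rotation-translation model of formulas (1)-(2)
    to an image pixel coordinate (px, py).

    scale     : ground size of one pixel (e.g. metres or degrees per pixel)
    theta_rad : camera rotation angle around the optical axis, in radians
    tx, ty    : translation of the image origin in the target coordinate system
    """
    rot = np.array([[np.cos(theta_rad), -np.sin(theta_rad)],
                    [np.sin(theta_rad),  np.cos(theta_rad)]])
    xy = scale * rot @ np.array([px, py]) + np.array([tx, ty])
    return xy[0], xy[1]

# Example: georeference pixel (100, 200) of an image with 0.1 m resolution,
# rotated 5 degrees, whose origin maps to (430000.0, 3300000.0) in a projected CRS.
print(rigid_transform(100, 200, 0.1, np.deg2rad(5.0), 430000.0, 3300000.0))
```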

2.2. The Data Organization Form of Georeferenced UAV Images

Considering the processing efficiency of UAV image rapid georeference, we do not resample the pixels during georeference but instead adopt the world file to organize the georeference parameters. The world file is a data organization format designed and developed by ESRI, a Geographic Information System company. It comprises six parameters, which are stored in a plain-text file with a specified suffix. With this data organization, Geographic Information System software can automatically identify the georeferenced images. These parameters constitute the affine transformation as follows.

In formula (3), which is the world-file affine transformation x' = Ax + By + C, y' = Dx + Ey + F, (x', y') is the georeferenced image pixel coordinate and (x, y) is the image pixel coordinate before georeference. A is the scale factor on the x-axis. D and B are the rotation parameters. E is the scale factor on the y-axis, and it is negative. C and F are the longitude and latitude coordinates of the upper-left corner of the georeferenced image. All the rotation and scale factors can be deduced from formula (2).

Each pixel acquires geographic coordinates when the Geographic Information System software loads the georeferenced image data, as shown in Figure 2. When x equals zero and y equals zero, the georeferenced pixel coordinate x' equals C and y' equals F. The key problem of UAV image rapid georeference is therefore how to quickly calculate the parameters C and F. Since the autopilot system collects angle data under the assumption that the image rotates around its symmetry center, while the world file adopts the upper-left corner as its rotation center, we have to convert the center rotation to the upper-left rotation and reduce the error caused by the rotation.
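
As a concrete illustration of this data organization, the sketch below writes the six parameters to a sidecar world file (e.g., .jgw next to a JPEG) so that GIS software georeferences the image without resampling. It is a minimal Python sketch under the standard ESRI parameter order; the file name and values are hypothetical, and the rotation terms would come from formula (2) for a rotated image:

```python
def write_world_file(path, a, d, b, e, c, f):
    """Write the six world-file parameters, one per line, in the standard
    ESRI order: A (x-scale), D and B (rotation terms), E (negative y-scale),
    C and F (map coordinates of the upper-left pixel)."""
    with open(path, "w") as fh:
        for value in (a, d, b, e, c, f):
            fh.write(f"{value:.10f}\n")

# Example: a non-rotated image with 0.1 m ground resolution whose upper-left
# pixel maps to (430000.0, 3300000.0); D and B stay zero because the
# rotation angle is zero here.
write_world_file("DSC00001.jgw", 0.1, 0.0, 0.0, -0.1, 430000.0, 3300000.0)
```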

3. Determination of the Rapid Georeference Parameters

3.1. The Position Parameter Calculation

The upper-left corner coordinates of the image are key parameters in the world-file-based UAV image rapid georeference. A direct and simple way to obtain these coordinates is to count the number of pixels between the image center and the upper-left corner, multiply by the image resolution, and finally add the center coordinates; the result is the coordinates of the upper-left corner. However, this does not take into account the error introduced by changes in the position and attitude data. To solve this problem, a method based on the collinearity equation is employed to calculate the upper-left corner coordinates.
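
A minimal sketch of the simple estimate just described, assuming a north-up image and a ground resolution given directly in degrees per pixel; the function name and values are our assumptions, and the limitation noted above (attitude changes are ignored) is exactly why the collinearity-equation method below is preferred:

```python
def simple_upper_left(center_lon, center_lat, width, height, res_deg):
    """Naive upper-left corner estimate: offset the image centre coordinates
    by half the image size times the ground resolution (degrees per pixel).
    Ignores the camera attitude, so it is only a first approximation."""
    return (center_lon - (width / 2.0) * res_deg,
            center_lat + (height / 2.0) * res_deg)

# Hypothetical example: 7952 x 5304 image centred at (103.52 E, 30.01 N).
print(simple_upper_left(103.52, 30.01, 7952, 5304, 2.7e-6))
```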

A ground point A with known geographic coordinates is given, as shown in Figure 3, and the coordinates of its corresponding pixel in the camera frame are known; the transformation from the pixel to A is shown in formula (4). This transformation involves the following coordinate systems.
(1) Camera frame: the z-axis points up (aligned with gravity), and the y-axis and x-axis are defined by how the user has mounted the IMU (inertial measurement unit).
(2) IMU body frame: the z-axis points up through the roof of the vehicle perpendicular to the ground, the y-axis points out the front of the vehicle in the direction of travel, and the x-axis completes the right-handed system (out the right-hand side of the vehicle when facing forward).
(3) Navigation frame: it is a local frame tangent to the reference ellipsoid. Its origin is at the center of the reference ellipsoid; the x-axis points to the intersection of the Greenwich meridian and the equator, the y-axis points to the intersection of the 90° meridian and the equator, and the z-axis passes through the North Pole.
(4) Earth-centered earth-fixed (ECEF) frame: the X-Y plane contains the equator; the X-axis intersects, in the positive direction, the Greenwich meridian (from where longitude is measured; longitude equals 0° on this axis); the Z-axis is parallel to the Earth's rotation axis with its positive direction toward the North Pole; and the Y-axis lies in the equatorial plane, is perpendicular to the X-axis, and completes the right-handed coordinate system, i.e., the cross product of Z and X is a vector in the direction of Y.

In formula (4), the three rotation matrices are, in order, the transformation matrix from the camera frame to the IMU body frame, the transformation matrix from the IMU body frame to the navigation frame, and the transformation matrix from the navigation frame to the earth-centered earth-fixed frame.

In formula (5), the three angles are the heading, pitch, and roll angles obtained from the autopilot system.

In formula (6), the two angles are the longitude and latitude, respectively, which are output from the autopilot system. The rotation transformation matrices can be deduced according to the Euler transformation described in [20–22]. When the pixel coordinate is set to the upper-left corner of the image, the upper-left corner coordinates can be obtained through formula (4).
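
As an illustration of how the rotation chain in formula (4) can be composed, the sketch below builds a body-to-navigation matrix from heading/pitch/roll and a navigation-to-ECEF matrix from latitude/longitude. The ZYX Euler order, the east-north-up navigation frame, the identity camera-to-body matrix, and all names and numbers are our assumptions; the paper's exact conventions are those of formulas (5) and (6) and [20–22]:

```python
import numpy as np

def euler_to_matrix(heading, pitch, roll):
    """Body-to-navigation rotation built as Rz(heading) @ Ry(pitch) @ Rx(roll).
    One common ZYX convention; the paper's formula (5) may differ."""
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[ch, -sh, 0.0], [sh, ch, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return rz @ ry @ rx

def enu_to_ecef_matrix(lat, lon):
    """Rotation from a local east-north-up navigation frame to ECEF,
    evaluated at geodetic latitude/longitude (radians)."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([[-so, -sl * co, cl * co],
                     [ co, -sl * so, cl * so],
                     [0.0,       cl,      sl]])

# Composite rotation of formula (4): camera -> body -> navigation -> ECEF.
# The camera-to-body matrix is assumed to be identity (camera aligned with the IMU).
R_cam_to_body = np.eye(3)
R_body_to_nav = euler_to_matrix(np.deg2rad(95.0), np.deg2rad(2.0), np.deg2rad(-1.5))
R_nav_to_ecef = enu_to_ecef_matrix(np.deg2rad(30.01), np.deg2rad(103.52))
R_total = R_nav_to_ecef @ R_body_to_nav @ R_cam_to_body
```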

When the UAV collects images in postearthquake areas, it flies several hundred meters above the ground, so there is normally no signal blocking between the GPS receiver and the satellites. If a signal discontinuity nevertheless occurs, we can adopt linear interpolation to recover the lost position and attitude information within a short time interval. Suppose the UAV collects image 1 at time t1 and the autopilot records the position and attitude information P1; it collects image 2 at time t2 and records the position and attitude information P2; and it collects image i at a time ti between t1 and t2 without a recorded position and attitude. We can then calculate the position and attitude information Pi from the following formula:

In formula (7), P indicates the interpolated position and attitude parameters and i is the image number.
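
A minimal sketch of the linear interpolation in formula (7); the function name and example values are our assumptions:

```python
def interpolate_pos(t, t1, t2, p1, p2):
    """Linearly interpolate one position/attitude parameter p (longitude,
    latitude, height, heading, pitch, or roll) recorded at times t1 and t2
    for an image exposed at time t, with t1 <= t <= t2.

    Note: heading should be unwrapped before interpolation so that the
    359 -> 1 degree transition does not produce a spurious value."""
    w = (t - t1) / (t2 - t1)
    return p1 + w * (p2 - p1)

# Image exposed at t = 12.4 s between POS records at 12.0 s and 13.0 s:
lat_i = interpolate_pos(12.4, 12.0, 13.0, 30.0101, 30.0109)
```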

3.2. The Transformation between Different Rotation Systems

The remote sensing images collected by the unmanned aerial vehicle theoretically rotate around their symmetry center, but images georeferenced with the world file rotate around their upper-left corner. Therefore, the two rotations end in different positions. Figure 4(a) shows the image rotating around the upper-left corner by different angles from position 1 to position 8, and Figure 4(b) shows the image rotating around the symmetry center by the same angles from position 1 to position 8.

As Figure 4 shows, when the image is rotated around its upper-left corner and around its symmetry center by the same angle, the corresponding edges of the two rotated images are parallel to each other. This indicates that the objects in the images have the same orientation after rotation. However, there is a translation gap between the two images, which we need to correct to improve the position accuracy. According to formula (2), after the image rotates around its symmetry center, the coordinates of the image are calculated as

In formula (8), x is the pixel coordinate in the x direction and y is the pixel coordinate in the y direction. θ is the angle of rotation around the symmetry center. W is the width of the raw image, and H is its height. Formula (8) gives the rotated coordinates when the image rotates around its symmetry center; the corresponding coordinates when the image rotates around its upper-left corner follow from formula (2).

The translation amount caused by image rotation could be deduced from formula (8); the detailed form is as follows:

In formula (9), Δx and Δy are, respectively, the translation amounts in the x and y directions, which indicate the position deviation of the image rotated around its symmetry center relative to the image rotated around its upper-left corner. θ is the rotation angle around the symmetry center, W is the raw image width, and H is the raw image height. The translation amount is closely related to the rotation angle, the image width, and the image height.
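
One way to compute this compensation is to note that rotating about the center c is the same as rotating about the upper-left corner and then translating by (I - R)c, with c = (W/2, H/2). The sketch below follows that derivation; it is our illustration of the compensation expressed by formula (9), not a transcription of it:

```python
import numpy as np

def center_rotation_offset(theta_rad, width, height):
    """Pixel offset between rotating an image about its symmetry centre and
    rotating it about its upper-left corner by the same angle.

    Rotation about the centre c equals rotation about the corner followed by
    a translation of (I - R) @ c, where c = (width/2, height/2)."""
    c = np.array([width / 2.0, height / 2.0])
    r = np.array([[np.cos(theta_rad), -np.sin(theta_rad)],
                  [np.sin(theta_rad),  np.cos(theta_rad)]])
    return (np.eye(2) - r) @ c   # (dx, dy) in pixels

dx, dy = center_rotation_offset(np.deg2rad(10.0), 7952, 5304)
# Multiply by the ground resolution to convert the pixel offset into map units
# before correcting the world-file C and F parameters.
```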

4. The Rapid Georeference Accuracy Improvement Based on the Reference Image

The accuracy of UAV image rapid georeference depends on the accuracy of the position data output from the autopilot system. Generally, different types of UAV carry different GPS modules with different accuracies, especially GPS modules without a differential positioning function. In order to further improve the georeference accuracy while keeping the time efficiency, a whole-sequence georeference accuracy improvement method based on SURF (speeded up robust features) extraction and matching is proposed in this paper. One frame of the whole sequence is selected, its SURF features are extracted and matched with the reference image, and the coordinates of the upper-left corner of the selected image are then recalculated from the feature matches. This reduces the systematic error and improves the georeference accuracy of the whole image sequence. SURF features are robust to image translation, rotation, scale, and illumination changes, and the feature extraction and matching can reach subpixel accuracy. We do not discuss the details of SURF features here; they can be found in [23–26].

Since the reference image always covers a large area, matching SURF features extracted from the UAV image against features extracted from the entire reference image would consume much time. To solve this problem, features are extracted from the reference image only within a given area, which is initialized according to the boundary of the initially georeferenced UAV image, and are matched with the features extracted from the UAV image. Putting the pixel coordinate of the image upper-left corner into formula (4) gives the upper-left corner coordinates of the initially georeferenced UAV image, and putting the pixel coordinate of the lower-right corner into formula (4) gives its lower-right corner coordinates. Considering the initial georeference error, the feature search area in the reference high-resolution image is the bounding rectangle defined by these two corners, expanded by a margin that covers that error.
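
A minimal sketch of this search-window restriction, assuming the margin is chosen to cover the expected initial georeference error; the function name is ours:

```python
def search_window(ul, lr, margin):
    """Bounding box on the reference image used to restrict feature extraction.

    ul, lr : (x, y) map coordinates of the initially georeferenced upper-left
             and lower-right corners of the UAV image
    margin : expansion, in map units, covering the expected initial error
    """
    xmin, xmax = sorted((ul[0], lr[0]))
    ymin, ymax = sorted((ul[1], lr[1]))
    return (xmin - margin, ymin - margin, xmax + margin, ymax + margin)
```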

Considering the time efficiency, a Gaussian probability model is employed in this paper to delete wrongly matched points. Let the distance from a feature in the UAV image to its matched feature in the reference image be d; in the ideal case, all the distances between matched features would be the same. In practice, the distances between matched features are assumed to satisfy a Gaussian normal distribution with mean μ and standard deviation σ; according to this model, the probabilities of samples falling into the intervals [μ − σ, μ + σ], [μ − 2σ, μ + 2σ], and [μ − 3σ, μ + 3σ] are, respectively, 68.3%, 95.5%, and 99.7%. If a distance falls outside the chosen confidence interval, we regard the pair as a wrong match and delete it from the matched feature list. In Figure 5, SURF feature extraction and matching are used to match the UAV image with a Google image, which fulfils the purpose of calculating the position error between the initially georeferenced image and the reference image.
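
A sketch of the feature matching and 3σ outlier rejection described above, using OpenCV from Python (the paper's software is C# based; SURF in OpenCV also requires a build with the nonfree/contrib modules, and cv2.ORB_create can be substituted if it is unavailable). Function and parameter names are our assumptions:

```python
import cv2
import numpy as np

def match_and_filter(uav_img, ref_img, sigma_level=3.0):
    """Match SURF features between a UAV image and a reference image patch
    and reject matches whose displacement magnitude deviates from the mean
    by more than sigma_level standard deviations (the 3-sigma rule)."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(uav_img, None)
    kp2, des2 = surf.detectAndCompute(ref_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Displacement of every matched pair (reference position minus UAV position).
    disp = np.array([np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt)
                     for m in matches])
    dist = np.linalg.norm(disp, axis=1)
    mu, sigma = dist.mean(), dist.std()

    # Keep only the matches whose distance lies inside the confidence interval.
    return [m for m, d in zip(matches, dist) if abs(d - mu) <= sigma_level * sigma]
```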

A feature in the UAV image is given, and its geographic coordinates from the initial georeference are (x1, y1). After the UAV image is matched with the reference image, the feature obtains the new geographic coordinates (x2, y2). The georeference error of the UAV image is computed as follows:
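
For illustration, a minimal sketch of this correction, assuming the measured offset of a matched feature (or, more robustly, the mean offset of all kept matches) is applied to the world-file translation parameters C and F; all names and numbers are ours:

```python
def correct_world_file_origin(c, f, before, after):
    """Shift the world-file C and F parameters by the georeference error
    measured from a matched feature: 'before' is its geographic coordinate
    from the initial direct georeference, 'after' is its coordinate after
    matching with the reference image."""
    dx = after[0] - before[0]
    dy = after[1] - before[1]
    return c + dx, f + dy

print(correct_world_file_origin(103.5200, 30.0100,
                                (103.5212, 30.0095), (103.5214, 30.0093)))
```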

5. Experiments and Analysis

The experiments use the C# language combined with the GDAL (Geospatial Data Abstraction Library) to implement the rapid georeference software and achieve UAV image rapid georeference. The software was run on an IBM T430 computer as the experimental platform, with an Intel Core i5 2.60 GHz CPU and 4 GB of memory.
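
Because no resampling is performed, each georeferenced image is simply the original JPEG plus its world file, and GIS tools and GDAL pick the parameters up automatically. A minimal Python GDAL sketch of verifying this (the paper's software uses the C# GDAL bindings; the file name is hypothetical):

```python
from osgeo import gdal

# With a .jgw world file next to the image, GDAL exposes the georeference
# through the dataset geotransform (ordered C, A, B, F, D, E in GDAL terms).
ds = gdal.Open("DSC00001.JPG")
print(ds.GetGeoTransform())
```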

5.1. The Experiment of Rapid Georeference Based on the Fixed Wing UAV

In this experiment, a fixed wing UAV (http://www.feimarobotics.com/) was used to collect the image data, as shown in Figure 6. The specifications of the UAV are shown in Table 1. The UAV was equipped with a SONY DSC-RX1R II mirrorless camera with a 20 mm focal length, of which the CCD size is 23.5 × 15.6 mm. The resolution of the collected images is 7952 × 5304 pixels. The UAV autopilot system outputs the longitude, latitude, elevation, pitch angle, roll angle, and heading angle. The GPS module of the autopilot system has no differential positioning function.


UAV type | Length/span | Flight weight | Control distance | Control manner | Cruising speed | Payload model
Fixed wing UAV | 1.6 m/1.1 m | 3 kg | 10 km | Manual control/autopilot | 60 km/h | SONY DSC-RX1R II camera

Taking the UAV remote sensing images obtained from Danling County in Sichuan Province as an example, the average flying height was 250 m and 1025 images were obtained. Each image is about 18 MB, and the total data size is about 18 GB. The attitude and position data are shown in Figure 7. All the images were rapidly georeferenced within 3 minutes.

Figure 8 shows the UAV images georeferenced without considering the image rotation and the error correction. This result meets the time requirement, but it does not reveal the ground object distribution of the entire flight area, so it is difficult to quickly select suitable images for disaster information submission and monitoring of objects of interest.

Figure 9 shows the result of UAV image rapid georeference based on the algorithm proposed in this paper. All the images obtain a spatial reference. When they are loaded in the same frame, the result looks like a coarse panoramic image, although the images are actually separate layers. From this coarse panorama, we can clearly see the distribution of ground objects in the flight area, such as buildings, roads, and the playground. We can then select suitable images containing disaster information and submit them before the panoramic image stitching is finished. This provides data support for disaster interpretation, disaster investigation, and disaster emergency response in the early response period.

If we use software such as PhotoScan, which produces an almost seamless panoramic image, it takes more than three hours to generate the panorama. In the emergency response period, saving time is very important. If the emergency response commanders receive the disaster information submitted through the rapid georeference, they can locate the disaster and make correct rescue decisions as soon as possible. In this situation, an accuracy of dozens of meters fully satisfies the emergency response requirements. The panoramic image generated later by PhotoScan can then be used to make an image map, which serves as the next-level image product submitted to the emergency response commanders.

5.2. The Rapid Georeference Experiment Based on the Quadrotor UAV

In this experiment, a quadrotor UAV (http://www.flightwin.com/) was used to collect images, as shown in Figure 10. The specifications of the quadrotor UAV are shown in Table 2. The UAV was equipped with a SONY ILCE-6000 camera with a 20 mm focal length, of which the CCD size is 23.5 × 15.6 mm. The resolution of the collected images is 6000 × 4000 pixels.


UAV type | Length/width/height | Flight weight | Control distance | Control manner | Cruising speed | Payload model
Quadrotor UAV | 1.3 m/1.3 m/0.48 m | 18 kg | 10 km | Manual control/autopilot | 0~50 km/h | SONY ILCE-6000 camera

Taking the UAV remote sensing images obtained from the earthquake site in Beichuan County, Sichuan Province, as an example, the average flight height of the UAV was 450 m and 226 images were obtained. Each image is about 10 MB, and the total data size is about 2.2 GB. The attitude and position data are shown in Figure 11. All the images were rapidly georeferenced in less than 1 minute.

From Figure 12, we can grasp the overall situation of the area; all the images are separate layers. For example, if a building is damaged by the earthquake, we can select the corresponding image and submit it to the response commanders before the panoramic image is generated by the PhotoScan software, which is very important in the early period of emergency response. According to the rapid georeferenced result, the commanders can arrange workers to investigate the disaster information.

6. Conclusions

In this paper, the method and model of UAV image rapid georeference are discussed in detail, and the algorithm is implemented in C# combined with GDAL. The rapid georeference software can georeference all the UAV images of one flight sortie in a short time. It provides important technical support for grasping the overall disaster information and monitoring the key objects in the disaster area. At the same time, it can generate a vector outline of each georeferenced image; the vector outlines are then overlaid on the reference image to quickly select the images containing disaster information for submission. The method has important application value in the postearthquake response period. In future work, rapid image georeference for UAVs with a differential GPS module will be studied to improve the localization accuracy and reduce the relative error between adjacent images.

Data Availability

Part of the data used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This research was funded by the Fundamental Research Funds of the Institute of Earthquake Forecasting, China Earthquake Administration (2018IEF0204), and the National Key Research and Development Program of China (2017YFB0504104).

References

  1. A. Nedjati, B. Vizvari, and G. Izbirak, "Correction to: post-earthquake response by small uav helicopters," Natural Hazards, vol. 90, no. 1, pp. 511–511, 2018.
  2. D. Dominici, M. Alicandro, and V. Massimi, "UAV photogrammetry in the post-earthquake scenario: case studies in L'Aquila," Geomatics, Natural Hazards and Risk, vol. 8, no. 1, pp. 87–103, 2017.
  3. I. Aicardi, F. Chiabrando, A. M. Lingua, F. Noardo, and M. Piras, "Unmanned aerial systems for data acquisitions in disaster management applications," in Journal of Universities and International Development Cooperation, III Congress of the University Network for Development Cooperation (CUCS), pp. 164–171, Turin, Italy, 2013.
  4. Z. Xu, J. Yang, C. Peng et al., "Development of an UAS for post-earthquake disaster surveying and its application in Ms7.0 Lushan earthquake, Sichuan, China," Computers & Geosciences, vol. 68, pp. 22–30, 2014.
  5. S. M. Adams, M. L. Levitan, and C. J. Friedland, "High resolution imagery collection for post-disaster studies utilizing unmanned aircraft systems (UAS)," Photogrammetric Engineering & Remote Sensing, vol. 80, no. 12, pp. 1161–1168, 2014.
  6. V. Baiocchi, D. Dominici, and M. Mormile, "UAV application in post-seismic environment," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 2, pp. 21–25, 2013.
  7. M. Brown and D. G. Lowe, "Automatic panoramic image stitching using invariant features," International Journal of Computer Vision, vol. 74, no. 1, pp. 59–73, 2007.
  8. A. Zomet, A. Levin, S. Peleg, and Y. Weiss, "Seamless image stitching by minimizing false edges," IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 969–977, 2006.
  9. D. Turner, A. Lucieer, and C. Watson, "An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery, based on structure from motion (SfM) point clouds," Remote Sensing, vol. 4, no. 5, pp. 1392–1410, 2012.
  10. G. Zhou, "Near real-time orthorectification and mosaic of small UAV video flow for time-critical event response," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 739–747, 2009.
  11. I. Aicardi, F. Nex, M. Gerke, and A. Lingua, "An image-based approach for the co-registration of multi-temporal UAV image datasets," Remote Sensing, vol. 8, no. 9, p. 779, 2016.
  12. T. Rosnell and E. Honkavaara, "Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera," Sensors, vol. 12, no. 1, pp. 453–480, 2012.
  13. S. Harwin and A. Lucieer, "Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery," Remote Sensing, vol. 4, no. 6, pp. 1573–1599, 2012.
  14. F. Remondino and S. El-Hakim, "Image-based 3D modelling: a review," The Photogrammetric Record, vol. 21, no. 115, pp. 269–291, 2006.
  15. H. Fathi and I. Brilakis, "Automated sparse 3D point cloud generation of infrastructure using its distinctive visual features," Advanced Engineering Informatics, vol. 25, no. 4, pp. 760–770, 2011.
  16. J. Civera, A. J. Davison, J. A. Magallón, and J. M. M. Montiel, "Drift-free real-time sequential mosaicing," International Journal of Computer Vision, vol. 81, no. 2, pp. 128–137, 2009.
  17. S. Lovegrove and A. J. Davison, "Real-time spherical mosaicing using whole image alignment," in European Conference on Computer Vision, pp. 73–86, Springer, Berlin, Heidelberg, 2010.
  18. F. Remondino, L. Barazzetti, F. Nex, M. Scaioni, and D. Sarazzi, "UAV photogrammetry for mapping and 3D modeling - current status and future perspectives," ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVIII-1/C22, no. 1, pp. 25–31, 2011.
  19. S. Bu, Y. Zhao, G. Wan, and Z. Liu, "Map2DFusion: real-time incremental UAV image mosaicing based on monocular SLAM," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4564–4571, Daejeon, South Korea, October 2016.
  20. D. B. Barber, J. D. Redding, T. W. McLain, R. W. Beard, and C. N. Taylor, "Vision-based target geo-location using a fixed-wing miniature air vehicle," Journal of Intelligent and Robotic Systems, vol. 47, no. 4, pp. 361–382, 2006.
  21. L. Di, T. Fromm, and Y. Chen, "A data fusion system for attitude estimation of low-cost miniature UAVs," Journal of Intelligent & Robotic Systems, vol. 65, no. 1-4, pp. 621–635, 2012.
  22. H. Xiang and L. Tian, "Method for automatic georeferencing aerial remote sensing (RS) images from an unmanned aerial vehicle (UAV) platform," Biosystems Engineering, vol. 108, no. 2, pp. 104–113, 2011.
  23. H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Computer Vision – ECCV 2006, pp. 404–417, Springer, Berlin, Heidelberg, 2006.
  24. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
  25. E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF," in 2011 International Conference on Computer Vision, pp. 2564–2571, Barcelona, Spain, November 2011.
  26. B. Sheta, M. Elhabiby, and N. El-Sheimy, "Assessments of different speeded up robust features (SURF) algorithm resolution for pose estimation of UAV," International Journal of Computer Science & Engineering Survey, vol. 3, no. 5, pp. 15–41, 2012.

Copyright © 2018 Shumin Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

