Journal of Sensors
Volume 2015 (2015), Article ID 415361, 9 pages
Research Article

Automatic Fusion of Hyperspectral Images and Laser Scans Using Feature Points

1Research Center of Artistic Heritage, Taiyuan University of Technology, Taiyuan 030012, China
2Key Laboratory of 3D Information Acquisition and Application, Capital Normal University, Beijing 100048, China

Received 14 November 2014; Accepted 7 February 2015

Academic Editor: Xue Cheng Tai

Copyright © 2015 Xiao Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Automatic fusion of different kinds of image datasets is intractable because of their diverse imaging principles. This paper presents a novel method for automatically fusing two different data types, 2D hyperspectral images acquired with a hyperspectral camera and 3D laser scans obtained with a laser scanner, without any additional sensor. Only a few corresponding feature points are used, which are automatically extracted from a scene viewed by both sensors. The feature-point extraction relies on the SURF algorithm and on a camera model that converts a 3D laser scan into a 2D laser image whose pixel intensities are defined by attributes of the laser scan. Moreover, the collinearity equation and the direct linear transformation are used to establish the initial correspondence between the two images, and an adjustment computes corrected values to eliminate errors. The experimental results show that the method is successfully validated on images collected by a hyperspectral camera and a laser scanner.

1. Introduction

Hyperspectral imaging technology can quickly detect hundreds or even thousands of different light frequencies and relative intensities of surface features, unlike regular cameras, which are typically sensitive to only three frequency bands (red, green, and blue). Laser scanning technology can quickly obtain accurate geometric information of surface features despite adverse external circumstances. If the hyperspectral data and the laser data can be fused, the spectral information and the spatial information of the same location can be obtained at the same time, which effectively makes up for the deficiency of a single data source.

Currently, the registration and fusion of hyperspectral images and laser scans are a research hotspot. However, because multiple sensors with different imaging modalities are involved, the fusion is very complicated. The most common approach is to perform the registration manually, but this yields very low precision in practice. Only a few methods exist for aligning these two datasets of the same location. Nieto et al. [1] installed a digital camera on top of the laser scanner to acquire colored point clouds, translated them into a 2D color image, and completed the registration with a piecewise linear transform. Kurz et al. [2] used two sensors to detect the position of the target and then corrected them to complete the registration. Zachary and Juan [3] obtained the initial position between the two sensors through GPS and then used mutual information to achieve the registration. In addition, some methods for aligning regular digital images with laser scans can be used for reference. For example, the Tsai camera calibration method [4, 5] was used to obtain 2D-3D homologous points and the unknown parameters to implement the registration; however, precise corresponding points were difficult to find because of differences in color structure. Stereo matching [6] was used to convert multiple images into 3D point clouds to realize 2D-3D registration, but this method cannot register a single image against a point cloud. Moreover, mutual information [7–9] has also been used to complete 2D-3D registration, and the collinearity equation was used to construct 2D-3D correspondences [10, 11] to register aerial images with laser scans.

In summary, this paper attempts to create a method that can correctly register and fuse hyperspectral data and laser data without additional sensors. The remainder of this paper is organized as follows. Section 2 provides the mathematical model of the algorithm. Section 3 describes the automatic fusion algorithm in detail. Section 4 verifies the automatic fusion methodology experimentally. Finally, Section 5 makes concluding remarks and maps out directions for future work.

2. Derivation of Mathematical Model

2.1. Definition of Coordinate System
2.1.1. Coordinate System of Hyperspectral Image

The data model of a hyperspectral image, different from the models of remote sensing images and digital images, is a feature-vector representation and can be expressed by the data cube model, as shown in Figure 1. The x-axis and y-axis denote the spatial dimensions, and the z-axis denotes the spectral band. The x-y plane carries the image information of one band or several bands of the hyperspectral image; the x-z plane and y-z plane carry the spectral information of a hyperspectral image line, as shown in Figures 1(a) and 1(b). The cube model and the spectrum oscillogram of a hyperspectral pixel are shown in Figure 1(c).

Figure 1: Hyperspectral Image cube model.
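The cube indexing described above can be illustrated in a few lines of numpy; the (row, column, band) axis order is an assumption made for this sketch, not something the paper specifies:

```python
import numpy as np

# A toy hyperspectral cube: 100 x 120 spatial pixels, 31 spectral bands.
# The (row, column, band) axis order is an illustrative assumption.
cube = np.random.default_rng(0).random((100, 120, 31))

band_image = cube[:, :, 10]       # x-y plane: the image of one band
pixel_spectrum = cube[50, 60, :]  # spectrum oscillogram of one pixel
line_spectra = cube[50, :, :]     # x-z plane: spectra of one image line
```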

A rotary-broom hyperspectral camera images with a line-array scanning mode, and the hyperspectral image is a 2D image. The imaging geometric model creates the mapping relationship between object space and image space, as shown in Figure 2: the abscissa runs along the rotation direction of the hyperspectral camera, measured from the rotation starting location, and the ordinate runs perpendicular to the rotation direction. The horizontal resolution is determined by the rotation speed of the turntable, and the vertical resolution is determined by the scanning speed of the hyperspectral camera.

Figure 2: Coordinate system of Hyperspectral Image.
2.1.2. Coordinate System of Laser Scans

A terrestrial 3D laser scanner also images with a line-array scanning mode, and the laser image is a 3D point cloud whose coordinate system is defined by the scanner itself, as shown in Figure 3. The coordinate origin is the scanner's electro-optical center; the Z-axis is the scanner's vertical rotation axis; the X-axis lies along the scanner's optical axis at an arbitrary horizontal angle, such as the first horizontal angle or the north direction of the built-in magnetic compass; and the Y-axis is orthogonal to X and Z, forming a right-handed system.

Figure 3: Coordinate system of Laser Scan.
2.2. Definition of Camera Model

The rotary-broom hyperspectral camera and the terrestrial 3D laser scanner both depend on a cylindrical coordinate system, but their images are 2D and 3D, respectively, so a camera model is formed to transform the 3D laser scan into a 2D laser image.

This camera model is derived from the panoramic camera model [12, 13], as shown in Figure 4. Formula (1) maps a point of the laser scan to a point of the 2D laser image using the principal distance of the camera model, the principal point of the 2D laser image, and a correction parameter.

Figure 4: Camera model.

Through (1), the point of the 2D laser image is calculated, but these coordinates are floating-point values while 2D image pixel coordinates are integers, so they must be converted into integer values, as shown in (2), using the horizontal resolution and the vertical resolution of the image.

The pixel values of the 2D laser image are defined by attributes of the laser scan, which may be information such as color, curvature, or normals. As in (3), each attribute value is linearly mapped into the gray range [0, 255] using the maximum and the minimum of that attribute.
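The three steps of the camera model (project, quantize to pixels, normalize an attribute to gray values) can be sketched as follows. This is a minimal illustration, not the authors' implementation: a plain cylindrical projection is assumed in place of formula (1), and all names (`scan_to_laser_image`, `h_res`, `v_res`) are our own:

```python
import numpy as np

def scan_to_laser_image(points, attribute, h_res, v_res, width, height):
    """Project a 3D laser scan onto a 2D cylindrical image.

    points       : (N, 3) array of (X, Y, Z) scanner coordinates
    attribute    : (N,) array of per-point values (e.g. intensity)
    h_res, v_res : angular resolution (radians) per pixel column / row
    width, height: size of the output image
    """
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    # Cylindrical projection: horizontal angle and elevation angle.
    azimuth = np.arctan2(Y, X)                 # rotation about the vertical axis
    elevation = np.arctan2(Z, np.hypot(X, Y))  # angle above the horizontal plane
    # Continuous image coordinates -> integer pixel indices (the rounding of (2)).
    col = np.round(azimuth / h_res).astype(int) % width
    row = np.clip(np.round(height / 2 - elevation / v_res).astype(int), 0, height - 1)
    # Normalize the chosen attribute into the [0, 255] gray range (formula (3)).
    a_min, a_max = attribute.min(), attribute.max()
    grey = np.round(255 * (attribute - a_min) / (a_max - a_min + 1e-12)).astype(np.uint8)
    image = np.zeros((height, width), dtype=np.uint8)
    index = -np.ones((height, width), dtype=int)  # which point filled each pixel
    image[row, col] = grey
    index[row, col] = np.arange(len(points))
    return image, index
```

The `index` map is a convenience for later steps: it records which scan point produced each pixel, so a 2D pixel can be traced back to its 3D point.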

3. Algorithm of Automatic Fusion

In this automatic fusion algorithm, first of all, a hyperspectral gray image is extracted from the hyperspectral image, and a 2D laser image is created from the laser scan by the camera model. Then, corresponding feature points of the two 2D images are produced with SURF and SC-RANSAC, and the feature points of the hyperspectral image and the laser scan are generated by the inverse operation of the camera model. The initial registration is achieved by the collinearity equation and the direct linear transformation; the precise registration is completed by computing corrected values to eliminate errors through adjustment. Finally, the automatic fusion is accomplished using the registration result. The algorithm flowchart is shown in Figure 5.

Figure 5: Algorithm flowchart.
3.1. Initial Registration
3.1.1. Extraction of Feature Points

Feature points between the hyperspectral gray image and the 2D laser image are extracted by the SURF algorithm [14], which outperforms the SIFT algorithm in most respects [15–17]. To eliminate erroneous feature points, SC-RANSAC [18] is used, currently the fastest RANSAC extension, which combines RANSAC with the spatial consistency check from [19]. Moreover, the search method, which plays an important role in improving speed, is the randomized KD-tree algorithm [20], which searches multiple randomized KD-trees to advance the search nodes and whose accuracy and matching speed are better for high-dimensional data [21, 22].

After the feature points are extracted, the corresponding 3D feature points of the laser scan are found through the inverse operation of the camera model. The feature points of the hyperspectral image are thus paired with the corresponding feature points of the laser scan.
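One practical way to realize the inverse operation of the camera model is to remember, while rendering the 2D laser image, which scan point filled each pixel; a matched 2D feature can then be lifted to 3D by a table lookup. The paper does not specify this mechanism, so the following is a hypothetical sketch with names of our own:

```python
import numpy as np

def lift_matches_to_3d(matches_2d, point_index, scan_points):
    """Convert 2D feature matches into 2D-3D correspondences.

    matches_2d  : list of ((u_h, v_h), (u_l, v_l)) pairs, hyperspectral vs laser image
    point_index : (H, W) int array; point_index[v, u] is the scan point that
                  filled laser-image pixel (u, v), or -1 for empty pixels
    scan_points : (N, 3) array of scanner coordinates
    """
    hyper_pts, laser_pts = [], []
    for (u_h, v_h), (u_l, v_l) in matches_2d:
        k = point_index[v_l, u_l]
        if k < 0:  # pixel was never filled by a scan point: drop the match
            continue
        hyper_pts.append((u_h, v_h))
        laser_pts.append(scan_points[k])
    return np.asarray(hyper_pts, float), np.asarray(laser_pts, float)
```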

3.1.2. Collinearity Equation

The collinearity equation, the basic equation of photogrammetry, sets up the mapping relationship between the 2D coordinates and the 3D coordinates. If a point in the hyperspectral coordinate system corresponds to a point in the laser coordinate system, the relation between them can be expressed as in (4), whose parameters are the direction cosines of the rotation matrix, the system-error corrections, and the principal distance.

The direct linear transformation (DLT) directly solves the linear relationship between photo point coordinates and the corresponding object point coordinates and is essentially a kind of space resection and space intersection. The algorithm is applicable to a variety of non-metric cameras without known interior orientation elements and is also suitable for close-range photogrammetry at large angles without initial exterior orientation elements. According to the DLT algorithm, (4) is translated into formula (5), whose coefficients determine the interior orientation elements and the exterior orientation elements between the hyperspectral coordinate system and the laser coordinate system, and hence the correspondence between them. However, because of the large errors of DLT, these coefficients can only be considered approximate results, and the resulting correspondence constitutes the initial registration.
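The standard 11-coefficient DLT can be estimated linearly from six or more 2D-3D pairs, which matches the six feature-point pairs used in the experiment. A sketch under the usual DLT form (the paper's exact formula (5) is not recoverable from the extracted text, so the model below is the textbook one):

```python
import numpy as np

def dlt_solve(uv, xyz):
    """Estimate the 11 DLT coefficients l1..l11 from >= 6 correspondences.

    uv  : (N, 2) hyperspectral pixel coordinates
    xyz : (N, 3) laser scan coordinates
    Model: u = (l1 X + l2 Y + l3 Z + l4) / (l9 X + l10 Y + l11 Z + 1)
           v = (l5 X + l6 Y + l7 Z + l8) / (l9 X + l10 Y + l11 Z + 1)
    """
    rows, rhs = [], []
    for (u, v), (X, Y, Z) in zip(uv, xyz):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return L

def dlt_project(L, xyz):
    """Apply the DLT coefficients to map scan points to pixel coordinates."""
    X, Y, Z = np.asarray(xyz, float).T
    w = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / w
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / w
    return np.stack([u, v], axis=1)
```

With noise-free correspondences the least-squares solve recovers the coefficients exactly; with real feature points it gives the approximate values that the adjustment step then refines.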

3.2. Precise Registration

To accurately determine the correspondence between the hyperspectral coordinate system and the laser coordinate system, an adjustment is executed using the redundant observation values of the hyperspectral pixels. With a correction added to each hyperspectral pixel observed value, the correspondence is as shown in (6).

Linearizing yields the error equation in (7), whose parameters include the symmetric radial distortion coefficient and the radius vector of the hyperspectral pixel.

The computation is executed by the least-squares method, with the iterative stopping condition that the difference between adjacent iterates is less than 0.01 mm. The calculation is itself iterative, and each iterate is computed from the control points. Thus, the precise values of the coefficients are obtained.
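The adjustment step amounts to iterative least squares with a small-update stopping rule. A generic Gauss-Newton sketch follows; the residual function (e.g. DLT reprojection plus radial distortion) is supplied by the caller, and all names are illustrative rather than the authors' implementation:

```python
import numpy as np

def refine(params, residual_fn, tol=1e-4, max_iter=50):
    """Gauss-Newton refinement of registration parameters.

    params      : initial parameter vector (e.g. the 11 DLT coefficients
                  plus a radial distortion coefficient)
    residual_fn : params -> (M,) vector of reprojection residuals
    Iterates until the parameter update falls below `tol`, mirroring the
    paper's stopping rule of a sub-0.01 mm change between iterations.
    """
    p = np.asarray(params, float)
    for _ in range(max_iter):
        r = residual_fn(p)
        # Numerical Jacobian of the residuals with respect to the parameters.
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (residual_fn(p + dp) - r) / dp[j]
        # Least-squares update: solve J @ delta = -r.
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + delta
        if np.linalg.norm(delta) < tol:
            break
    return p
```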

3.3. Data Fusion

For an arbitrary point of the hyperspectral image and its corresponding point in the laser scan, the correspondence between them is calculated from the precise coefficients by (8).

Therefore, based on (8), each point of the laser data corresponds to a point of the hyperspectral data.

Given the spectral information of a point in the hyperspectral image and the properties of the corresponding point in the laser scan, where the properties are the features other than the spatial coordinates, such as intensity and amplitude, the hyperspectral data and the laser data are fused: based on the correspondence established above, each fused point carries both the spatial information of the laser data and the spectral information of the hyperspectral data.
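The fusion rule, attaching to every laser point the spectrum of its corresponding hyperspectral pixel, can be sketched as follows. Array layouts are assumptions, and `project` stands for the refined registration model of the previous section:

```python
import numpy as np

def fuse(scan_points, scan_attrs, hyper_cube, project):
    """Attach a spectrum to every laser point.

    scan_points : (N, 3) laser coordinates
    scan_attrs  : (N, A) per-point attributes (intensity, amplitude, ...)
    hyper_cube  : (H, W, B) hyperspectral data cube
    project     : (N, 3) -> (N, 2) mapping from scan points to hyperspectral
                  pixels (e.g. the refined DLT model)
    Returns an (N, 3 + A + B) fused array; points that project outside the
    image keep NaN spectra.
    """
    H, W, B = hyper_cube.shape
    uv = np.round(project(scan_points)).astype(int)
    spectra = np.full((len(scan_points), B), np.nan)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    spectra[inside] = hyper_cube[uv[inside, 1], uv[inside, 0], :]
    return np.hstack([scan_points, scan_attrs, spectra])
```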

4. Experiment and Analysis

In order to verify the effectiveness of the algorithm, experiments were conducted. The hyperspectral data and the laser data were acquired by our own laboratory from the electronic display board on the playground of Capital Normal University. The 3D laser scanner used was a Riegl LSM-420i, and the hyperspectral imager was integrated by our laboratory; its main parameters are shown in Table 1. The data acquisition setup is shown in Figure 6. The algorithm code was written in Matlab with mex files written in Visual C++ and was run on a Dell computer with an Intel i5 CPU and 4 GB of RAM.

Table 1: Hyperspectral camera parameters.
Figure 6: Data acquisition map.

The initial RGB image of the hyperspectral data is shown in Figure 7(a). The initial laser scan is shown in Figure 7(b); its horizontal and vertical angular resolutions are 1°, its horizontal and vertical point spacing is about 7 cm, and the colors indicate the intensity of the laser scan.

Figure 7: Initial data.
4.1. Experiment

In this paper, because the geometry of the electronic display board is very flat and the normal vectors are similar, the point-cloud image is generated using intensity as the pixel value, and the image resolution is determined by the point-cloud spacing. Then the feature points of the hyperspectral image and the laser scan are found by the 2D-3D registration algorithm, and six pairs of feature points are chosen, as shown in Table 2.

Table 2: Feature points of Hyperspectral Image and Laser Scan.

According to the collinearity equation and DLT in the initial registration, the approximate coefficients are obtained, as shown in Table 3.

Table 3: Approximate coefficients.

According to the adjustment in the precise registration, the precise coefficients are obtained, as shown in Table 4.

Table 4: Precise coefficients.

The fusion of the hyperspectral data and the laser data is executed with the approximate and the precise coefficients, respectively, as shown in Figure 8. The initial fusion image is basically fused, but large errors remain, as shown in Figure 8(a): for instance, the text on the electronic board shows obvious deviation. The blue points represent locations with no corresponding points, so the base of the electronic board has no correspondences. The precise fusion image is much better fused, as shown in Figure 8(b): the text and the base of the electronic board are correctly matched.

Figure 8: Fusion images.
4.2. Evaluation of Precision

To further verify the effectiveness of the algorithm, additional feature points are selected as check points to test its accuracy. First, the interior orientation elements and the exterior orientation elements are calculated by the approximate-solution algorithm, and the corresponding hyperspectral pixels of the laser scan are calculated. Then, the corresponding hyperspectral pixels are calculated by the precise-solution algorithm. The comparison is shown in Table 5: "Hyperspectral Image" is the measured hyperspectral coordinate, "Laser Scan" is the laser coordinate, and the remaining columns are the hyperspectral coordinates calculated by the approximate-solution algorithm and the precise-solution algorithm, respectively.

Table 5: The corresponding coordinates of check points.

To quantify the errors of the check points, the residual errors in the horizontal and vertical directions are calculated from the distances between the measured hyperspectral coordinates and those predicted by the approximate and precise solutions, as shown in Table 6; the four residual columns give the horizontal and vertical residuals of the approximate solution and of the precise solution, respectively. From Table 6, the horizontal residual mean error decreases from −20.8094 to 4.6046 pixels and the vertical residual mean error decreases from −27.8079 to 0.1148 pixels. The precision is greatly improved.

Table 6: Residual errors and residual mean errors of check points (unit: pixel).
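The residual statistics of Table 6 are simple per-direction differences between measured and predicted pixel coordinates; a sketch with names of our own:

```python
import numpy as np

def residual_report(measured_uv, predicted_uv):
    """Residuals of check points: measured minus predicted pixel coordinates.

    Returns the per-point residuals and their means in the horizontal (u)
    and vertical (v) directions, matching the columns of Table 6.
    """
    res = np.asarray(measured_uv, float) - np.asarray(predicted_uv, float)
    return res, res.mean(axis=0)
```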

Moreover, twenty further check points are selected to verify the algorithm, and the residual errors of all thirty check points are shown in Figure 9. In agreement with the above analysis, the residual errors of the precise registration drop sharply: the horizontal residual mean error decreases from −18.3751 to 1.5820 pixels, and the vertical residual mean error decreases from −16.3553 to −0.11167 pixels. To sum up, this method reaches a satisfactory accuracy.

Figure 9: Analysis of residual errors of check points.

For the dataset used, with natural images, very few methods are available for comparison. We implemented the other approaches; however, no images were successfully aligned by these methods, and this failure was expected, for the following reasons. First, the available methods are few and their initial conditions are demanding. For example, the method of Nieto et al. [1] requires colored point clouds created by calibrating the laser scanner against a digital camera, and the method of Zachary and Juan [3] requires the initial positions of the two sensors from GPS. The method in this paper needs neither the initial positions of the two sensors nor additional devices such as a digital camera or GPS. Moreover, some comparisons can be made with other registration methods for images and laser scans. For example, the method of Zhang et al. [10] applies an inspection line and the collinearity equation, but that algorithm is more suitable for aerial images and airborne data, and the method of Liu [23] requires manual selection of 2D-3D point pairs, which introduces large human error.

5. Conclusion

A method for fusing hyperspectral data with laser data, suitable for surface features, has been presented. The method operates by creating a 2D laser image using a camera model and extracting feature points from the hyperspectral image and the laser scan. The collinearity equation is used to create the correspondence that aligns the hyperspectral image with the laser scan, and an adjustment improves the registration accuracy. The method was demonstrated to successfully fuse a hyperspectral image and a laser scan. In future work, a dataset of natural environments will be acquired; its features will be more complex, so the feature extraction and the accuracy need further strengthening.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was supported by the National Science and Technology Support Program of China (2012BAH31B01), the Key Program of the Natural Science Foundation of Beijing, China (B) (KZ201310028035), and the Youth Foundation of Taiyuan University of Technology, China (2013W014).


References

  1. J. I. Nieto, S. T. Monteiro, and D. Viejo, "3D geological modelling using laser and hyperspectral data," in Proceedings of the 30th IEEE International Geoscience and Remote Sensing Symposium (IGARSS '10), pp. 4568–4571, IEEE, Honolulu, Hawaii, USA, July 2010.
  2. T. H. Kurz, S. J. Buckley, J. A. Howell, and D. Schneider, "Integration of panoramic hyperspectral imaging with terrestrial lidar data," The Photogrammetric Record, vol. 26, no. 134, pp. 212–228, 2011.
  3. T. Zachary and N. Juan, "A mutual information approach to automatic calibration of camera and lidar in natural environments," in Proceedings of the Australasian Conference on Robotics and Automation, pp. 3–5, 2012.
  4. R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation, vol. 3, no. 4, pp. 323–344, 1987.
  5. C. Rocchini, P. Cignoni, C. Montani, and R. Scopigno, "Acquiring, stitching and blending diffuse appearance attributes on 3D models," Visual Computer, vol. 18, no. 3, pp. 186–204, 2002.
  6. J. Shao, A. Zhang, S. Wang, X. Meng, L. Yang, and Z. Wang, "Research on fusion of 3D laser point clouds and CCD image," Chinese Journal of Lasers, vol. 40, no. 5, Article ID 0514001, 2013.
  7. G. Palma, M. Corsini, M. Dellepiane, and R. Scopigno, "Improving 2D-3D registration by mutual information using gradient maps," in Proceedings of the 8th Eurographics Italian Chapter Conference, pp. 89–94, The Eurographics Association, November 2010.
  8. M. Corsini, M. Dellepiane, F. Ponchio, and R. Scopigno, "Image-to-geometry registration: a mutual information method exploiting illumination-related geometric properties," Computer Graphics Forum, vol. 28, no. 7, pp. 1755–1764, 2009.
  9. A. Mastin, J. Kepner, and J. Fisher III, "Automatic registration of LIDAR and optical images of urban scenes," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2639–2646, IEEE, June 2009.
  10. L. Zhang, H. Ma, G. Gao, and Z. Chen, "Automatic registration of urban aerial images with airborne LiDAR points based on line-point similarity invariants," Acta Geodaetica et Cartographica Sinica, vol. 43, no. 4, pp. 372–379, 2014.
  11. H. Ma, C. Yao, and J. Wu, "Registration of LiDAR point clouds and high resolution images based on linear features," Geomatics and Information Science of Wuhan University, vol. 37, no. 2, pp. 136–159, 2012.
  12. D. Schneider and H.-G. Maas, "A geometric model for linear-array-based terrestrial panoramic cameras," The Photogrammetric Record, vol. 21, no. 115, pp. 198–210, 2006.
  13. D. Schneider and H.-G. Maas, "Geometric modelling and calibration of a high resolution panoramic camera," in Optical 3-D Measurement Techniques VI, vol. 2, pp. 122–129, 2003.
  14. H. Bay, A. Ess, T. Tuytelaars, and L. van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
  15. X. Pan and S. Lyu, "Detecting image region duplication using SIFT features," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '10), pp. 1706–1709, March 2010.
  16. L. Juan and O. Gwun, "SURF applied in panorama image stitching," in Proceedings of the 2nd International Conference on Image Processing Theory, Tools and Applications (IPTA '10), pp. 495–499, Paris, France, July 2010.
  17. A. C. Murillo, J. J. Guerrero, and C. Sagüés, "SURF features for efficient robot localization with omnidirectional images," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3901–3907, April 2007.
  18. T. Sattler, B. Leibe, and L. Kobbelt, "SCRAMSAC: improving RANSAC's efficiency with a spatial consistency filter," in Proceedings of the International Conference on Computer Vision, pp. 2090–2097, 2009.
  19. C. Papazov and D. Burschka, "An efficient RANSAC for 3D object recognition in noisy and occluded scenes," in Computer Vision—ACCV 2010, vol. 6492 of Lecture Notes in Computer Science, pp. 135–148, Springer, Berlin, Germany, 2011.
  20. C. Silpa-Anan and R. Hartley, "Optimised KD-trees for fast image descriptor matching," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, Anchorage, Alaska, USA, June 2008.
  21. N.-N. Ding, Y.-Y. Liu, Y. Zhang, C.-N. Chen, and B.-G. He, "Fast image registration based on SURF-DAISY algorithm and randomized kd trees," Journal of Optoelectronics Laser, vol. 23, no. 7, pp. 1395–1402, 2012.
  22. M. Muja and D. G. Lowe, "Fast approximate nearest neighbors with automatic algorithm configuration," in Proceedings of the 4th International Conference on Computer Vision Theory and Applications (VISAPP '09), pp. 331–340, February 2009.
  23. S. M. Liu, The 3D Laser Point Cloud Shading and Texture Mapping of Surface Model, Nanjing Normal University, 2009.