Mathematical Problems in Engineering


Research Article | Open Access

Volume 2016 | Article ID 3130681 | 9 pages | https://doi.org/10.1155/2016/3130681

Fusion of IR and Visual Images Based on Gaussian and Laplacian Decomposition Using Histogram Distributions and Edge Selection

Seohyung Lee and Daeho Lee

Academic Editor: Zhike Peng
Received: 03 Dec 2015
Revised: 25 Jan 2016
Accepted: 26 Jan 2016
Published: 07 Feb 2016

Abstract

We propose a novel fusion method of IR (infrared) and visual images to combine distinct information from the two sources. To decompose an image into its low and high frequency components, we use Gaussian and Laplacian decomposition. The strong high frequency information in the two sources can easily be fused by selecting the Laplacian values with the larger magnitudes. The distinct low frequency information, however, is not as easily determined, so we use the histogram distributions of the two sources. Experimental results show that the fused images contain the dominant characteristics of both sources.

1. Introduction

Multisensor fusion is widely used for signal, image, feature, and symbol combination [1]. To fuse images, various image types, such as visual, infrared (IR), millimeter wave (MMW), X-ray, and depth images, are used for concealed weapon detection [2, 3], remote sensing [4, 5], multifocus imaging [6], and other applications [1, 7, 8].

Forward looking infrared (FLIR) cameras can sense IR radiation (i.e., thermal radiation). Therefore, IR images can contain useful information that is not apparent in visual images. Conversely, detailed information within the visual band is not included in IR images. The fusion of IR and visual images can therefore provide the advantages of both types of images, and it can be applied to various research fields, such as night vision [9, 10], face recognition [11], human detection [12], and concealed object detection [13].

Visual and IR images can easily be fused by averaging; however, this scheme cancels out the advantages of the two sources and, in the worst case, annihilates the details.
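For illustration, a minimal sketch of this averaging baseline in Python (the function name and dtype handling are our own):

```python
import numpy as np

def fuse_by_averaging(vis: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Naive fusion baseline: per-pixel mean of the two sources.

    Bright IR hot spots and dark visual details are both pulled toward
    mid-gray, which is why averaging suppresses the advantages of each
    source and can annihilate detail.
    """
    return 0.5 * vis.astype(np.float64) + 0.5 * ir.astype(np.float64)
```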

To preserve the dominant advantages in the fused images, two families of methods can be utilized. Methods based on multiscale decomposition, such as the Laplacian pyramid [14, 15], the discrete wavelet transform (DWT) [16, 17], and the contrast pyramid [18], can be used; one stage of Laplacian pyramid decomposition is shown in Figure 1. Additionally, some methods using region segmentation [19–21] have also been proposed.

The multiscale decomposition-based methods require little computation; however, the correct and distinct values of the low frequency information are not easily selected. Hence, many pyramid or DWT stages are used to select significant values, and distinct values are detected only from Laplacian images or high frequency bands [22]. Although distinct values can be selected from both low and high frequency images, most methods use only strong intensities and predetermined weights.

The segmentation-based methods, on the other hand, can easily select the distinct values of the low frequency information; however, accurate segmentation is not guaranteed, and segmentation methods have a higher computational complexity. In addition, seam boundary regions must be blended from the two image sources to prevent discontinuities.

In this paper, we aim to develop a fusion method that combines the advantages of the methods mentioned above. To do this, we use a simple Gaussian and Laplacian decomposition and utilize histogram distributions and edge selection to determine the distinct values of the low and high frequency information, respectively. Because we use histogram distributions, significant low frequency information can be selected, such as locally hot or cold regions of IR images and locally bright or dark regions of visual images. In addition, only one decomposition stage and fast averaging filters are used, so the processing speed is fast enough for real-time applications.

2. Fusion of IR and Visual Images

The proposed fusion method is similar to methods that use the Laplacian pyramid or DWT. We use Gaussian smoothing, as opposed to Gaussian or wavelet scaling, so our method corresponds to a single-stage version of those methods. A Gaussian image G is the result of filtering an original image I with a Gaussian convolution, and a Laplacian image L is calculated as the difference L = I − G, as shown in Figure 2.
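A minimal sketch of this single-stage decomposition (using SciPy's Gaussian filter; the sigma value is an illustrative assumption, and the paper itself approximates Gaussian filtering with fast mean filters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gauss_laplace_decompose(image: np.ndarray, sigma: float = 2.0):
    """Single-stage Gaussian/Laplacian decomposition.

    The Gaussian image G is the low frequency component (smoothed
    original), and the Laplacian image L = I - G carries the high
    frequency detail. `sigma` is illustrative only.
    """
    img = image.astype(np.float64)
    G = gaussian_filter(img, sigma=sigma)  # low frequency component
    L = img - G                            # high frequency component
    return G, L
```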

In a fused image, the distinct high frequency information is related to the Laplacian magnitude: a larger absolute Laplacian value indicates a stronger edge. To determine the distinct values of the low frequency information, we use the histogram distributions of the visual and IR Gaussian images.

To fuse a visual image V and an IR image R, we first compute the Gaussian images G_V and G_R and decompose the Laplacian images L_V and L_R; the Gaussian and Laplacian images of the fused image, G_F and L_F, are then generated by selecting distinct values using histogram distributions and edge selection, as shown in Figure 3.

The Laplacian component L_F of the fused image is easily determined by comparing edge strength, that is, the absolute Laplacian values at each pixel. If the pixels with the larger absolute Laplacian values are used directly, however, boundary discontinuities degrade the visual quality, and the fused images can contain discontinuous artefacts. Accordingly, we use a weight map w of the absolute Laplacian values. To compute this weight map w, we first compute the binary weight map b given by

b(x, y) = 1 if |L_V(x, y)| ≥ |L_R(x, y)|, and b(x, y) = 0 otherwise. (1)

Finally, we determine L_F using w or b as follows:

L_F = w L_V + (1 − w) L_R, (2a)

L_F = b L_V + (1 − b) L_R, (2b)

where w is the mean-filtered version of b. Equation (2a) can be used to obtain fused images with smooth object boundaries, and (2b) can be used to obtain fused images with distinct object boundaries.
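A sketch of this high frequency fusion step under the reconstruction of (1), (2a), and (2b) above (the helper name and mask_size default are ours, and SciPy's uniform_filter stands in for the authors' buffered mean filter):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_laplacian(L_v: np.ndarray, L_r: np.ndarray,
                   mask_size: int = 11, smooth: bool = True) -> np.ndarray:
    """Fuse Laplacian images by edge strength.

    b is the binary weight map (1 where the visual edge dominates).
    For smooth object boundaries, per (2a), b is mean-filtered into w;
    per (2b), b is used directly for distinct boundaries.
    `mask_size` is illustrative only.
    """
    b = (np.abs(L_v) >= np.abs(L_r)).astype(np.float64)
    w = uniform_filter(b, size=mask_size) if smooth else b
    return w * L_v + (1.0 - w) * L_r
```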

The distinct intensity values of most images have a low population. In particular, intensities with a low population in IR images correspond to the highest or lowest temperatures, as shown in Figure 4. Therefore, we use the histogram distribution to select G_F.

To select the Gaussian component G_F of the fused image, we use the low population map m given by

m(x, y) = 1 if h_V(G_V(x, y)) ≤ h_R(G_R(x, y)), and m(x, y) = 0 otherwise, (3)

where h_V and h_R denote the histogram distribution functions for G_V and G_R, respectively. However, m has extreme discontinuities, so we substitute its mean-filtered version m̄ for m; G_F is then obtained analogously to (2a), with m̄ weighting G_V and G_R.
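A sketch of this low frequency selection under the same reconstruction (8-bit grayscale inputs assumed; helper name and mask_size are ours):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_gaussian(G_v: np.ndarray, G_r: np.ndarray,
                  mask_size: int = 11) -> np.ndarray:
    """Fuse Gaussian images by histogram population.

    A pixel wins where its intensity is rarer (lower histogram count),
    which favors locally hot/cold IR regions and locally bright/dark
    visual regions. The binary map m is mean-filtered to avoid extreme
    discontinuities.
    """
    gv = G_v.astype(np.uint8)
    gr = G_r.astype(np.uint8)
    h_v = np.bincount(gv.ravel(), minlength=256)  # histogram of G_V
    h_r = np.bincount(gr.ravel(), minlength=256)  # histogram of G_R
    m = (h_v[gv] <= h_r[gr]).astype(np.float64)   # 1 where V intensity is rarer
    m_bar = uniform_filter(m, size=mask_size)     # smoothed low population map
    return m_bar * G_v.astype(np.float64) + (1.0 - m_bar) * G_r.astype(np.float64)
```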

In all Gaussian filtering steps, we use a fast mean filter with spatial buffers to reduce the computational complexity.
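One common way to implement such a fast mean filter is with running sums (an integral image), so the per-pixel cost is independent of the mask size; the sketch below is our illustration, not necessarily the authors' exact buffer scheme:

```python
import numpy as np

def fast_mean_filter(image: np.ndarray, size: int) -> np.ndarray:
    """Box (mean) filter via a 2D prefix sum (integral image).

    Each output pixel is the mean of a size x size window obtained from
    four integral-image lookups, so the cost does not grow with `size`.
    Edges are handled by replicate-padding the input.
    """
    assert size % 2 == 1, "odd mask size assumed"
    r = size // 2
    padded = np.pad(image.astype(np.float64), r + 1, mode="edge")
    # Inclusive 2D prefix sums of the padded image.
    ii = padded.cumsum(axis=0).cumsum(axis=1)
    h, w = image.shape
    # Window sums via inclusion-exclusion on the integral image.
    win = (ii[size:size + h, size:size + w]
           - ii[:h, size:size + w]
           - ii[size:size + h, :w]
           + ii[:h, :w])
    return win / float(size * size)
```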

3. Experimental Results

The proposed fusion method was tested with three image sequences from the TNO image fusion dataset [23]: “UN Camp,” “Dune,” and “Trees” (360 × 270, grayscale). These test images have small intensity ranges, so we tested images modified by linear histogram normalization instead of the original images. In this experiment, we used the two parameter sets shown in Table 1, where the first three parameters are the mask sizes. The optimal parameters were determined experimentally to produce visually natural fused images without artificial discontinuities.


Table 1: Parameter sets.

Parameter sets                   Condition
Optimal parameters               (2a)
Large parameters                 (2b)
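The linear histogram normalization mentioned above can be read as a min-max contrast stretch; a sketch under that assumption:

```python
import numpy as np

def linear_normalize(image: np.ndarray) -> np.ndarray:
    """Linearly stretch the intensity range to the full [0, 255] span.

    Useful when test images have a small intensity range, as noted for
    the TNO sequences. Degenerate (constant) images are returned as-is.
    """
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi <= lo:
        return image.copy()
    return np.round((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)
```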

Examples of our results using the optimal parameters are shown in Figures 5, 6, and 7. Edge discontinuities are observed in the binary weight maps, while they are blurred in the mean-filtered weight maps. The distinct information of the visual and IR sources is well fused in the resulting images. In the fused Laplacian images, strong high frequency components are well selected. In the fused Gaussian images, the blurred distinct values are well revealed regardless of the source images. In particular, isolated dark regions having low frequency are distinctly shown in the fused images.

More results for the three image sequences using the optimal parameters are shown in Figure 8; the red fused regions are influenced more by the visual images, while the orange fused regions are influenced more by the IR images.

To objectively evaluate the proposed method, we consider two evaluation metrics: entropy (E) and the Xydeas and Petrovic index (Q^{AB/F}) [24]. The performance comparison with the averaging method is shown in Table 2. To compare our results with those reported in [12], we compared the improvement ratios over the averaging method, as shown in Table 3, because the existing methods used different image enhancement methods that were not described in the literature.


Table 2: Performance comparison with the averaging method.

Q^{AB/F}
Methods                          UN Camp    Dune       Trees
Averaging                        0.312      0.315      0.297
Proposed (large parameters)      0.487      0.497      0.550
Proposed (optimal parameters)    0.437      0.430      0.476

Entropy (E)
Methods                          UN Camp    Dune       Trees
Averaging                        6.275      6.769      6.413
Proposed (large parameters)      7.191      7.453      7.086
Proposed (optimal parameters)    7.093      7.400      6.973
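For reference, the entropy metric E in Table 2 can be computed as the Shannon entropy of the 8-bit intensity histogram; a sketch (the Xydeas and Petrovic index Q^{AB/F} is more involved and is omitted here):

```python
import numpy as np

def image_entropy(image: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit image's intensity distribution.

    E = -sum(p * log2(p)) over nonzero histogram bins; higher values
    indicate a richer intensity distribution in the fused image.
    """
    hist = np.bincount(image.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```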


Table 3: Improvement ratios over the averaging method (rank in parentheses).

Q^{AB/F} ratio
Methods                          UN Camp      Dune         Trees
Li's [16]                        1.277 (5)    1.154 (5)    1.402 (5)
Lewis's [19]                     1.515 (3)    1.435 (3)    1.575 (4)
Saeedi's [21]                    1.663 (1)    1.568 (2)    1.795 (2)
Proposed (large parameters)      1.562 (2)    1.578 (1)    1.851 (1)
Proposed (optimal parameters)    1.400 (4)    1.365 (4)    1.603 (3)

Entropy (E) ratio
Methods                          UN Camp      Dune         Trees
Li's [16]                        1.075 (4)    1.033 (4)    1.015 (4)
Lewis's [19]                     1.059 (5)    1.019 (5)    1.011 (5)
Saeedi's [21]                    1.178 (1)    1.120 (1)    1.099 (2)
Proposed (large parameters)      1.146 (2)    1.101 (2)    1.105 (1)
Proposed (optimal parameters)    1.130 (3)    1.093 (3)    1.087 (3)

Although our method using the optimal parameters does not yield the best objective performance, these metrics do not completely agree with human subjective evaluation, as shown in Figure 9. The objective performance of Figure 9(b) is higher than that of Figure 9(a); however, Figure 9(b) has many discontinuities, as indicated by the circles. In addition, in Figure 9(c), the stitching of the two source images (after clipping and mean filtering) yields the highest scores compared with the other results. Therefore, E and Q^{AB/F} for segmentation-based fusion methods may be overestimated relative to human subjective evaluation.

A visual comparison of the results using the two parameter sets is shown in Figure 10. As shown in Table 3, E and Q^{AB/F} of the results using the large parameters are higher than those of the results using the optimal parameters; however, the results using the optimal parameters show better visual quality.

Using single-threaded processing on a 3.60 GHz CPU, the average computation time over all of the sequences is 5.237 msec. Such a processing time is unattainable for segmentation-based methods. Therefore, the proposed method can be used for real-time fusion applications.

4. Conclusion

In this paper, we have proposed a novel fusion method for IR and visual images based on Gaussian and Laplacian decomposition using histogram distributions and edge selection. This method can easily determine the distinct values of the Gaussian and Laplacian images: the distinct values of the Laplacian images are selected by edge strength, and the distinct values of the Gaussian images are selected using histogram distributions. Thus, the fused images contain the dominant characteristics of the two source images and can be obtained via relatively simple computation. In addition, by showing the results for two different parameter sets, we demonstrated that the objective evaluation metrics, entropy and the Xydeas and Petrovic index, do not completely agree with human visual evaluation. Therefore, the proposed method can be used for image fusion and blending in place of other existing methods.

Conflict of Interests

The authors declare that they have no competing interests.

References

1. R. S. Blum, Z. Xue, and Z. Zhang, “An overview of image fusion,” in Multi-Sensor Image Fusion and Its Applications, pp. 1–36, CRC Press, Boca Raton, Fla, USA, 2005.
2. Z. Xue and R. S. Blum, “Concealed weapon detection using color image fusion,” in Proceedings of the 6th International Conference on Information Fusion (FUSION '03), pp. 622–627, Cairns, Australia, July 2003.
3. J. Yang and R. S. Blum, “A statistical signal processing approach to image fusion for concealed weapon detection,” in Proceedings of the International Conference on Image Processing, vol. 1, pp. 513–516, 2002.
4. T.-M. Tu, S.-C. Su, H.-C. Shyu, and P. S. Huang, “Efficient intensity-hue-saturation-based image fusion with saturation compensation,” Optical Engineering, vol. 40, no. 5, pp. 720–728, 2001.
5. G. Simone, A. Farina, F. C. Morabito, S. B. Serpico, and L. Bruzzone, “Image fusion techniques for remote sensing applications,” Information Fusion, vol. 3, no. 1, pp. 3–15, 2002.
6. X. Zhang, X. Li, Z. Liu, and Y. Feng, “Multi-focus image fusion using image-partition-based focus detection,” Signal Processing, vol. 102, pp. 64–76, 2014.
7. F. Laliberté, L. Gagnon, and Y. Sheng, “Registration and fusion of retinal images: an evaluation study,” IEEE Transactions on Medical Imaging, vol. 22, no. 5, pp. 661–673, 2003.
8. C. E. Reese and E. J. Bender, “Multispectral image-fused head-tracked vision system (HTVS) for driving applications,” in Helmet- and Head-Mounted Displays VI, vol. 4361 of Proceedings of SPIE, pp. 1–11, August 2001.
9. A. Toet, “Natural colour mapping for multiband nightvision imagery,” Information Fusion, vol. 4, no. 3, pp. 155–166, 2003.
10. A. M. Waxman, A. N. Gove, D. A. Fay et al., “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks, vol. 10, no. 1, pp. 1–6, 1997.
11. J. Heo, S. G. Kong, B. R. Abidi, and M. A. Abidi, “Fusion of visual and thermal signatures with eyeglass removal for robust face recognition,” in Proceedings of the Conference on Computer Vision and Pattern Recognition Workshop (CVPRW '04), p. 122, Washington, DC, USA, June 2004.
12. L. Jiang, F. Tian, L. E. Shen et al., “Perceptual-based fusion of IR and visual images for human detection,” in Proceedings of the International Symposium on Intelligent Multimedia, Video and Speech Processing (ISIMP '04), pp. 514–517, October 2004.
13. Z. Xue, R. S. Blum, and Y. Li, “Fusion of visual and IR images for concealed weapon detection,” in Proceedings of the 5th International Conference on Information Fusion, pp. 1198–1205, IEEE, Annapolis, Md, USA, July 2002.
14. P. J. Burt and E. H. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
15. P. J. Burt and R. J. Kolczynski, “Enhanced image capture through fusion,” in Proceedings of the 4th International Conference on Computer Vision (ICCV '93), pp. 173–182, IEEE, Berlin, Germany, May 1993.
16. H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
17. G. Pajares and J. M. de la Cruz, “A wavelet-based image fusion tutorial,” Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
18. A. Toet, L. J. van Ruyven, and J. M. Valeton, “Merging thermal and visual images by a contrast pyramid,” Optical Engineering, vol. 28, no. 7, pp. 789–792, 1989.
19. J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” Information Fusion, vol. 8, no. 2, pp. 119–130, 2007.
20. N. Cvejic, D. Bull, and N. Canagarajah, “Region-based multimodal image fusion using ICA bases,” IEEE Sensors Journal, vol. 7, no. 5, pp. 743–751, 2007.
21. J. Saeedi and K. Faez, “Infrared and visible image fusion using fuzzy logic and population-based optimization,” Applied Soft Computing, vol. 12, no. 3, pp. 1041–1054, 2012.
22. M. I. Smith and J. P. Heather, “A review of image fusion technology in 2005,” in Thermosense XXVII, vol. 5782 of Proceedings of SPIE, pp. 29–45, April 2005.
23. A. Toet, TNO Image Fusion Dataset, Figshare, 2014.
24. C. S. Xydeas and V. S. Petrovic, “Objective pixel-level image fusion performance measure,” in Sensor Fusion: Architectures, Algorithms, and Applications IV, vol. 4051 of Proceedings of SPIE, pp. 89–98, Orlando, Fla, USA, April 2000.

Copyright © 2016 Seohyung Lee and Daeho Lee. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

