Advances in Astronomy


Research Article | Open Access

Volume 2015 |Article ID 203872 | 8 pages | https://doi.org/10.1155/2015/203872

An Improved Infrared/Visible Fusion for Astronomical Images

Academic Editor: Dean Hines
Received: 30 Apr 2015
Revised: 04 Aug 2015
Accepted: 16 Aug 2015
Published: 26 Aug 2015

Abstract

An undecimated dual tree complex wavelet transform (UDTCWT) based fusion scheme for astronomical visible/IR images is developed. The UDTCWT reduces noise effects and improves object classification due to its inherent shift invariance property. Local standard deviation and distance transforms are used to extract useful information (especially small objects). Simulation results, compared with state-of-the-art fusion techniques, illustrate the superiority of the proposed scheme in terms of accuracy in most cases.

1. Introduction

Visible light astronomy, through reflection, refraction, interference, and diffraction, enables scientists to unearth many of nature’s secrets; however, the brightness of stars creates a haze in the sky. Infrared (IR) astronomy, on the other hand, enables us to peer through the veil of interstellar dust and see objects at extreme cosmological distances. IR images have good radiometric resolution, whereas visible images provide detailed information. In this regard, various image fusion techniques have been developed to combine the complementary information present in both images. These techniques can be grouped into wavelet based, statistical decomposition based, and compressive sensing based approaches.

Wavelet transform based fusion schemes generally decompose the visible and IR images into different base and detail layers to combine the useful information. In [1], contourlet transform based fusion is used to separate foreground and background information; however, the separation is not always accurate, which causes loss of target information. In [2], a fusion scheme based on the nonsubsampled contourlet transform, local energy, and fuzzy logic claims better subjective visual effects; however, the merging and description of the necessary components of the IR and visible images in the fusion model require improvement, especially for noisy images. In [3], a wavelet transform and fuzzy logic based scheme utilizes a dissimilarity measure to assign weights; however, some artifacts are introduced into the fused image. Contrast enhancement based fusion (using the ratio of local and global divergence of the IR image) lacks color consistency [4]. In the adaptive intensity-hue-saturation method [5], the amount of spatial detail injected into each band of the multispectral image is determined by a weighting matrix, which is defined on the basis of the edges present in the panchromatic and multispectral bands. The scheme preserves spatial details; however, it is unable to control the spectral distortion sufficiently [6]. In [7], a gradient-domain approach based on contrast mapping projects the structure tensor matrix onto a low-dimensional gradient field; however, the scheme affects the natural output colours. In [8], a wavelet transform and segmentation based fusion scheme is developed to enhance targets in low contrast. However, the fusion performance depends on segmentation quality, and large segmentation errors can occur for cosmological images (especially when one feature is split into multiple regions).

Statistical fusion schemes split the images into multiple subspaces using different matrix decomposition techniques. A k-means and singular value decomposition based scheme suffers from computational complexity [9]. In [10], a spatial and spectral fusion model uses sparse matrix factorization to fuse images with different spatial and spectral properties. The scheme combines the spectral information from sensors having low spatial but high spectral resolution with the spatial information from sensors having high spatial but low spectral resolution. Although the scheme produces better fused results with well preserved spectral and spatial properties, its drawbacks include the spectral dictionary learning process and computational complexity. In [11], an internal generative mechanism based fusion algorithm first decomposes the source image into a coarse layer and a detail layer by simulating the mechanism by which the human visual system perceives images. The detail layer is then fused using a pulse coupled neural network, and the coarse layer is fused using the spectral residual based saliency method. The scheme is time inefficient and yields weak fusion performance. In [12], an independent component analysis based IR and visible image fusion scheme uses the kurtosis information of the independent component analysis coefficients. However, further work is required to determine the fusion rules for primary features.

Compressive sensing based fusion schemes exploit the sparsity of data using different dictionaries. An adjustable compressive measurement based fusion scheme suffers from the empirical adjustment of different parameters [13]. In [14], a compressive sensing approach preserves data (such as edges, lines, and contours); however, the design of an appropriate sparse transform and an optimal deterministic measurement matrix remains an issue. In [15], a compressive sensing based image fusion scheme (for infrared and visible images) first compresses the sensing data by random projection and then obtains sparse coefficients of the compressed samples by sparse representation. The fusion coefficients are combined with a fusion impact factor, and the fused image is reconstructed from the combined sparse coefficients. However, the scheme is inefficient and prone to noise effects. In [16], a nonnegative sparse representation based scheme is used to extract the features of the source images; methods are developed to detect the salient features (including targets and contours) in the IR image and the texture features in the visible image. Although the scheme performs better for noisy images, the sparseness of the image is only controlled implicitly.

In a nutshell, the above-mentioned state-of-the-art fusion techniques suffer from limited accuracy, high computational complexity, or lack of robustness. To overcome these issues, a UDTCWT based visible/IR image fusion scheme for astronomical images is developed. The UDTCWT reduces noise effects and improves object classification due to its inherent shift invariance property. The local standard deviation, along with a distance transform, is used to extract useful information (especially small objects). Simulation results illustrate the superiority of the proposed scheme in terms of accuracy in most cases.

2. Proposed Method

Let the registered IR and visible source images, both of the same dimensions, be the inputs. The local standard deviation, computed over a sliding window with respect to the corresponding local mean image, estimates the local variations of the source image. The local standard deviation measures the randomness of pixels in a local area: high values indicate the presence of astrobodies, and low values correspond to smooth/blank space (without any object or astrobody).
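
A minimal sketch of this step is given below, assuming a square sliding window; the window size is an illustrative choice rather than a value taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_statistics(img, win=7):
    """Local mean and local standard deviation over a win x win window.

    `win` is an illustrative window size, not a parameter from the paper.
    """
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=win)           # local mean image
    mean_sq = uniform_filter(img ** 2, size=win)   # local mean of the squared image
    var = np.maximum(mean_sq - mean ** 2, 0.0)     # guard against small negative rounding errors
    return mean, np.sqrt(var)
```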

A thresholded image is obtained by removing the pixels containing large variations; the threshold is set by a controlling parameter together with the mean and variance of the local standard deviation image. A gray distance image (used to classify points lying inside or outside any shape/object) is then computed from the thresholded image and a mask. The distance transform (used to eliminate oversegmentation and short-sightedness) measures the overall distance of a pixel from the other bright pixels; for instance, a pixel close to a cluster of stars (objects) tends to become part of the segmented mask, and vice versa.
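
The sketch below illustrates this idea under simplifying assumptions: the authors' exact threshold rule and mask are not reproduced here, so a simple mean-plus-scaled-deviation threshold on the local standard deviation map and SciPy's Euclidean distance transform serve as stand-ins, with `c` playing the role of the controlling parameter.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def gray_distance_image(sigma, c=1.0):
    """Stand-in for the thresholding and distance-transform steps.

    `sigma` is the local standard deviation image; `c` is an illustrative
    controlling parameter (not the authors' exact rule).
    """
    bright = sigma > sigma.mean() + c * sigma.std()   # candidate astrobody pixels
    # Distance of every pixel to the nearest candidate (bright) pixel;
    # bright pixels themselves get distance zero.
    dist = distance_transform_edt(~bright)
    return bright, dist
```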

A binary image is then obtained by thresholding the distance image against its mean scaled by a positive constant; this binary image segments the foreground from the background regions. Connected components (used to segment different binary patterns) are labeled on this binary image with a structure element consisting of a matrix of all ones. Using the area and perimeter of each connected component, a binary segmented image is constructed by retaining only the components whose area and perimeter exceed two thresholding parameters. The UDTCWT is applied to the source images to obtain the wavelet coefficient matrices. The decomposition obtained using the UDTCWT not only eliminates noise and unwanted artifacts but is also effective in preserving the useful information present in the input images (due to its undecimated property). A binary coefficient matrix is obtained by assigning nonzero values at pixel locations where the visible image provides more information than the IR image. This binary thresholding ensures that the fused image contains the significant information of both source images (since higher UDTCWT coefficient magnitudes correspond to the presence of significant information). A binary fuse map is computed by combining the binary coefficient matrix with the segmented image, and the fused coefficients are selected accordingly. The final fused image is obtained by computing the inverse UDTCWT of the fused coefficients. Figure 1 shows the flow diagram of the proposed technique.
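
A sketch of the remaining steps is shown below. The forward and inverse transforms (`udtcwt_forward`, `udtcwt_inverse`) are hypothetical handles supplied by the caller, since no standard Python UDTCWT implementation is assumed; the area/perimeter thresholds, the choose-max coefficient rule, and the way the segmentation map is combined with it are likewise illustrative rather than the paper's exact rules.

```python
import numpy as np
from scipy.ndimage import label
from skimage.measure import regionprops

def segment_components(binary, min_area=20, min_perimeter=10):
    """Keep connected components whose area and perimeter exceed illustrative
    thresholds (the paper's actual parameter values are not reproduced here)."""
    labels, _ = label(binary, structure=np.ones((3, 3)))  # all-ones structure element
    keep = np.zeros(binary.shape, dtype=bool)
    for region in regionprops(labels):
        if region.area > min_area and region.perimeter > min_perimeter:
            keep[labels == region.label] = True
    return keep

def fuse_images(ir, vis, seg, udtcwt_forward, udtcwt_inverse):
    """Coefficient-level fusion sketch; `udtcwt_forward`/`udtcwt_inverse` are
    hypothetical transform handles that return/consume a stack of subbands
    with the same spatial size as the inputs (an undecimated transform)."""
    c_ir, c_vis = udtcwt_forward(ir), udtcwt_forward(vis)
    # Take visible coefficients where they carry more information, except
    # inside the segmented object regions, where the IR detail is retained.
    use_vis = (np.abs(c_vis) > np.abs(c_ir)) & ~seg[None, ...]
    fused = np.where(use_vis, c_vis, c_ir)
    return udtcwt_inverse(fused)
```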

3. Results and Discussion

To verify the significance of the proposed technique, simulations are performed on various visible/IR datasets. Quantitative analysis is performed using eight quality measures: a luminance/contrast distortion metric, mutual information, the weighted quality index, the edge dependent quality index, the structural similarity index measure, a human perception inspired metric, an edge transform metric, and an image feature metric [17–22].

The luminance/contrast distortion metric [17, 18] is designed around modality image distortion as a combination of loss of correlation, luminance distortion, and contrast distortion. The gradient based metric [17] represents the orientation preservation and edge strength values. It models the perceptual loss of information in the fused results in terms of how well the strength and orientation values of pixels in the source images are represented in the fused image. It deals with the problem of objective evaluation of dynamic, multisensor image fusion based on gradient information preservation between the inputs and the fused images, and it also takes into account additional scene and object motion information present in multisensor sequences. The weighted quality index [17] is defined by assigning more weight to those windows where the saliency of the input image is high; these correspond to areas that are likely to be perceptually important parts of the underlying scene. The edge dependent quality index [17] takes into account aspects of the human visual system and expresses the contribution of the edge information of the source images to the fused image. The structural similarity index measure [19] quantifies the similarity between two images and is designed to improve on the traditional mean square error and peak signal to noise ratio measures, which are inconsistent with human visual perception. The human perception inspired metric [20] evaluates the performance of image fusion for night vision applications using a perceptual quality evaluation method based on human visual system models; the image quality of the fused image is assessed by a contrast sensitivity function and a contrast preservation map. The edge transform metric [21] assesses pixel-level fusion performance and reflects the quality of visual information obtained from the fusion of the input images. The image feature metric [22] evaluates the performance of combinative pixel-level image fusion based on an image feature measurement (i.e., phase congruency and its moments) and provides an absolute measurement of image features; by comparing the local cross-correlation of corresponding feature maps of the input images and the fused output, the quality of the fused result is assessed without a reference image.
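
For concreteness, a minimal sketch of the universal image quality index of [18], on which the luminance/contrast distortion metric builds, is given below; it is shown in its global single-window form for brevity, whereas the metric above evaluates it over local windows.

```python
import numpy as np

def universal_quality_index(x, y):
    """Universal image quality index of Wang and Bovik [18], computed here
    over whole images rather than over local windows."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    # Combines loss of correlation, luminance distortion, and contrast distortion.
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```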

These quality metrics [17–22] work well for noisy, blurred, and distorted images and for multiscale transform, arithmetic, statistical, and compressive sensing based schemes in multiexposure, multiresolution, and multimodal environments. They are also useful for remote and airborne sensing, military, and industrial engineering related applications. The measures are normalized to the range 0 to 1, where higher values indicate better fusion quality.

Figure 2(a) shows the Andromeda galaxy (M31) JPEG IR image taken by the Spitzer space telescope [23], while Figure 2(b) shows the corresponding visible image taken using a 12.5′′ Ritchey-Chretien Cassegrain (at F6) and an ST10XME camera [24]. Figures 2(c)–2(f) show the outputs of the local variance, distance transform, and segmentation steps.

Figure 3 shows the fusion results obtained by the ratio pyramid (RP) [25], dual tree complex wavelet transform (DTCWT) [26], nonsubsampled contourlet transform (NSCT) [27], multiresolution singular value decomposition (MSVD) [28], Ellmauthaler et al. [8], and proposed schemes. By visual comparison, it can be seen that the proposed scheme provides better fusion results than the existing state-of-the-art schemes, especially in the preservation of the background intensity values.

Figures 4(a) and 4(b) show visible and IR JPEG images of Jupiter’s moon taken by the New Horizons spacecraft using the multispectral visible imaging camera and the linear etalon imaging spectral array [29]. The fusion results obtained by the RP [25], DTCWT [26], NSCT [27], MSVD [28], Ellmauthaler et al. [8], and proposed schemes are shown in Figures 4(c)–4(h), respectively. Note that only the proposed scheme accurately preserves both the moon texture (from the IR image) and the other stars (from the visible image) in the fused image.

Figures 5(a) and 5(b) show visible and IR JPEG images of the Nebula (M16) taken by the Hubble space telescope [30]. The fusion results obtained by the RP [25], DTCWT [26], NSCT [27], MSVD [28], Ellmauthaler et al. [8], and proposed schemes are shown in Figures 5(c)–5(h), respectively. The fused image obtained using the proposed scheme highlights the IR information more accurately than the existing state-of-the-art schemes.

Table 1 shows the quantitative comparison of the existing and proposed schemes. It can be observed that the results obtained using the proposed scheme are significantly better than those of the existing state-of-the-art schemes in most of the cases/measures.


Dataset                  Technique                  Lum./contrast  Mutual inf.  Weighted QI  Edge QI  SSIM    Human perc.  Edge transf.  Image feature

Andromeda galaxy (M31)   Proposed                   0.8220  0.8319  0.7707  0.4804  0.6461  0.3487  0.6612  0.4615
                         Ellmauthaler et al. [8]    0.7179  0.8444  0.7345  0.3647  0.6350  0.2229  0.5610  0.3387
                         MSVD [28]                  0.6452  0.6946  0.6243  0.4148  0.4535  0.4432  0.0045  0.2675
                         NSCT [27]                  0.6259  0.6003  0.5576  0.2641  0.4606  0.1777  0.3432  0.2689
                         DTCWT [26]                 0.7085  0.8134  0.6573  0.3113  0.5436  0.1682  0.2682  0.2290
                         RP [25]                    0.4706  0.5416  0.5102  0.2514  0.3970  0.5282  0.2075  0.2026

Jupiter’s moon           Proposed                   0.7927  0.7255  0.7814  0.4622  0.7566  0.3433  0.6725  0.5617
                         Ellmauthaler et al. [8]    0.2832  0.6477  0.6343  0.4230  0.7398  0.1768  0.5672  0.5614
                         MSVD [28]                  0.4780  0.4970  0.5217  0.5243  0.5292  0.4599  0.0065  0.4923
                         NSCT [27]                  0.4155  0.4083  0.4631  0.5279  0.5212  0.2672  0.5001  0.6139
                         DTCWT [26]                 0.4571  0.5932  0.5851  0.3476  0.5973  0.1805  0.4785  0.4844
                         RP [25]                    0.3467  0.3989  0.4022  0.4749  0.3919  0.4825  0.0183  0.3634

Nebula (M16)             Proposed                   0.7399  0.4896  0.8461  0.8587  0.8645  0.7646  0.7553  0.5652
                         Ellmauthaler et al. [8]    0.7318  0.4230  0.8446  0.8494  0.8563  0.5736  0.5424  0.5494
                         MSVD [28]                  0.4918  0.4120  0.6466  0.6704  0.6539  0.5598  0.0051  0.4331
                         NSCT [27]                  0.5204  0.2980  0.6037  0.5899  0.6013  0.4916  0.3953  0.5058
                         DTCWT [26]                 0.6736  0.3608  0.8023  0.8225  0.8125  0.4477  0.4631  0.5079
                         RP [25]                    0.5885  0.3099  0.6834  0.6818  0.6934  0.6597  0.1708  0.4639

4. Conclusion

A fusion scheme for astronomical visible/IR images based on the UDTCWT, local standard deviation, and distance transform is proposed. The use of the UDTCWT helps retain the useful details of the image. The local standard deviation measures the presence or absence of small objects. The distance transform incorporates the effect of proximity into the segmentation process and eliminates oversegmentation as well as short-sightedness. The scheme reduces noise artifacts and efficiently extracts the useful information (especially small objects). Simulation results on different visible/IR images verify the effectiveness of the proposed scheme.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

1. L. Kun, G. Lei, L. Huihui, and C. Jingsong, “Fusion of infrared and visible light images based on region segmentation,” Chinese Journal of Aeronautics, vol. 22, no. 1, pp. 75–80, 2009.
2. Z. Fu, X. Dai, Y. Li, H. Wu, and X. Wang, “An improved visible and infrared image fusion based on local energy and fuzzy logic,” in Proceedings of the 12th IEEE International Conference on Signal Processing, pp. 861–865, October 2014.
3. J. Saeedi and K. Faez, “Infrared and visible image fusion using fuzzy logic and population-based optimization,” Applied Soft Computing, vol. 12, no. 3, pp. 1041–1054, 2012.
4. S. Yin, L. Cao, Y. Ling, and G. Jin, “One color contrast enhanced infrared and visible image fusion method,” Infrared Physics and Technology, vol. 53, no. 2, pp. 146–150, 2010.
5. Y. Leung, J. Liu, and J. Zhang, “An improved adaptive intensity-hue-saturation method for the fusion of remote sensing images,” IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 5, pp. 985–989, 2014.
6. B. Chen and B. Xu, “A unified spatial-spectral-temporal fusion model using Landsat and MODIS imagery,” in Proceedings of the 3rd International Workshop on Earth Observation and Remote Sensing Applications (EORSA '14), pp. 256–260, IEEE, Changsha, China, June 2014.
7. D. Connah, M. S. Drew, and G. D. Finlayson, Spectral Edge Image Fusion: Theory and Applications, Springer, 2014.
8. A. Ellmauthaler, E. A. B. da Silva, C. L. Pagliari, and S. R. Neves, “Infrared-visible image fusion using the undecimated wavelet transform with spectral factorization and target extraction,” in Proceedings of the 19th IEEE International Conference on Image Processing (ICIP '12), pp. 2661–2664, October 2012.
9. M. Ding, L. Wei, and B. Wang, “Research on fusion method for infrared and visible images via compressive sensing,” Infrared Physics and Technology, vol. 57, pp. 56–67, 2013.
10. B. Huang, H. Song, H. Cui, J. Peng, and Z. Xu, “Spatial and spectral image fusion using sparse matrix factorization,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 3, pp. 1693–1704, 2014.
11. X. Zhang, X. Li, Y. Feng, H. Zhao, and Z. Liu, “Image fusion with internal generative mechanism,” Expert Systems with Applications, vol. 42, no. 5, pp. 2382–2391, 2015.
12. Y. Lu, F. Wang, X. Luo, and F. Liu, “Novel infrared and visible image fusion method based on independent component analysis,” Frontiers in Computer Science, vol. 8, no. 2, pp. 243–254, 2014.
13. A. Jameel, A. Ghafoor, and M. M. Riaz, “Adaptive compressive fusion for visible/IR sensors,” IEEE Sensors Journal, vol. 14, no. 7, pp. 2230–2231, 2014.
14. Z. Liu, H. Yin, B. Fang, and Y. Chai, “A novel fusion scheme for visible and infrared images based on compressive sensing,” Optics Communications, vol. 335, pp. 168–177, 2015.
15. R. Wang and L. Du, “Infrared and visible image fusion based on random projection and sparse representation,” International Journal of Remote Sensing, vol. 35, no. 5, pp. 1640–1652, 2014.
16. J. Wang, J. Peng, X. Feng, G. He, and J. Fan, “Fusion method for infrared and visible images by using non-negative sparse representation,” Infrared Physics and Technology, vol. 67, pp. 477–489, 2014.
17. G. Piella and H. Heijmans, “A new quality metric for image fusion,” in Proceedings of the IEEE International Conference on Image Processing, pp. 171–173, September 2003.
18. Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81–84, 2002.
19. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
20. Y. Chen and R. S. Blum, “A new automated quality assessment algorithm for image fusion,” Image and Vision Computing, vol. 27, no. 10, pp. 1421–1432, 2009.
21. C. S. Xydeas and V. Petrović, “Objective image fusion performance measure,” Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
22. J. Zhao, R. Laganière, and Z. Liu, “Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement,” International Journal of Innovative Computing, Information and Control, vol. 3, no. 6, pp. 1433–1447, 2007.
23. Andromeda galaxy (M31) IR image, http://sci.esa.int/herschel/48182-multiwavelength-images-of-the-andromeda-galaxy-m31/.
24. Andromeda galaxy (M31) visible image, http://www.robgendlerastropics.com/M31Page.html.
25. A. Toet, “Image fusion by a ratio of low-pass pyramid,” Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989.
26. J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” Information Fusion, vol. 8, no. 2, pp. 119–130, 2007.
27. Q. Zhang and B.-L. Guo, “Multifocus image fusion using the nonsubsampled contourlet transform,” Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.
28. V. P. S. Naidu, “Image fusion technique using multi-resolution singular value decomposition,” Defence Science Journal, vol. 61, no. 5, pp. 479–484, 2011.
29. Jupiter's moon, http://www.technology.org/2014/12/03/plutos-closeup-will-awesome-based-jupiter-pics-new-horizons-spacecraft.
30. Nebula (M16), http://webbtelescope.org/webb_telescope/technology_at_the_extremes/keep_it_cold.php.

Copyright © 2015 Attiq Ahmad et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

