The Scientific World Journal
Volume 2014, Article ID 695752, 7 pages
https://doi.org/10.1155/2014/695752

Research Article | Open Access
Special Issue: Multidimensional Signal Processing and Applications

Improved Guided Image Fusion for Magnetic Resonance and Computed Tomography Imaging

Academic Editor: S. Bourennane
Received 06 Aug 2013
Accepted 15 Dec 2013
Published 13 Feb 2014

Abstract

Improved guided image fusion for magnetic resonance (MR) and computed tomography (CT) imaging is proposed. The existing guided-filtering scheme uses a Gaussian filter and two-level weight maps, due to which it has limited performance for noisy images. Modifications to the filter (based on a linear minimum mean square error estimator) and to the weight maps (with additional levels) are proposed to overcome these limitations. Simulation results based on visual and quantitative analysis show the significance of the proposed scheme.

1. Introduction

Medical images from different modalities reflect different types of information (tissues, bones, etc.), and a single modality cannot provide comprehensive and accurate information [1, 2]. For instance, structural images obtained from magnetic resonance (MR) imaging, computed tomography (CT), ultrasonography, and so forth provide high-resolution anatomical information [1, 3]. On the other hand, functional images obtained from positron emission tomography (PET), single-photon emission computed tomography (SPECT), functional MR imaging, and so forth provide low-spatial-resolution functional information [3, 4]. More precisely, CT imaging provides better information on denser tissue with less distortion, whereas MR images have more distortion but can provide information on soft tissue [5, 6]. For blood flow and activity analysis, PET is used, which provides low spatial resolution. Therefore, combining anatomical and functional medical images through image fusion to extract much more useful information is desirable [5, 6]. Fusion of CT/MR images combines anatomical and physiological characteristics of the human body. Similarly, fusion of PET/CT images is helpful for tumor activity analysis [7].

Image fusion is performed at the pixel, feature, and decision levels [8–10]. Pixel-level methods fuse at each pixel and hence preserve most of the original information [11]. Feature-level methods extract features from the source images (such as edges or regions) and combine them into a single concatenated feature vector [12, 13]. Decision-level fusion [11, 14] combines sensor information after the image has been processed by each sensor and useful information has been extracted from it.

Pixel-level methods include addition, subtraction, division, multiplication, minimum, maximum, median, and rank, as well as more complicated operators such as Markov random fields and the expectation-maximization algorithm [15]. Besides these, the pixel level also includes statistical methods (principal component analysis (PCA), linear discriminant analysis, independent component analysis, canonical correlation analysis, and nonnegative matrix factorization). Multiscale transforms such as pyramids and wavelets are also forms of pixel-level fusion [11, 14]. Feature-level methods include feature-based PCA [12, 13], segment fusion [13], edge fusion [13], and contour fusion [16]; they are usually robust to noise and misregistration. Weighted decision methods (voting techniques) [17], classical inference [17], Bayesian inference [17], and the Dempster-Shafer method [17] are examples of decision-level fusion methods. These methods are application dependent and hence cannot be used generally [18].
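As a generic illustration of a pixel-level statistical method (not the scheme proposed in this paper), the following Python sketch fuses two registered grayscale images by weighting them with the dominant eigenvector of their joint covariance, a common PCA fusion rule; the function name and the equal-size assumption are ours.

```python
import numpy as np

def pca_fusion(i1, i2):
    """Pixel-level PCA fusion of two registered, equal-size images."""
    # Treat the two images as two variables observed at every pixel.
    data = np.stack([np.ravel(i1), np.ravel(i2)]).astype(float)
    eigvals, eigvecs = np.linalg.eigh(np.cov(data))
    w = eigvecs[:, np.argmax(eigvals)]   # dominant eigenvector
    w = w / w.sum()                      # normalize the two weights
    return w[0] * np.asarray(i1, float) + w[1] * np.asarray(i2, float)
```

For two registered arrays of equal shape, `fused = pca_fusion(mr, ct)` returns their eigenvector-weighted average.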

Multiscale-decomposition-based medical image fusion decomposes the input images into different levels. Such methods include pyramid decomposition (Laplacian [19], morphological [20], and gradient [21]), the discrete wavelet transform [22], the stationary wavelet transform [23], the redundant wavelet transform [24], and the dual-tree complex wavelet transform [25]. These schemes produce blocking effects because the decomposition process is not accompanied by any spatial orientation selectivity.
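For illustration, a minimal sketch of pyramid-style fusion in the spirit of [19] follows; to keep the code short it uses a non-decimated difference-of-Gaussians stack rather than a true decimated pyramid, and the depth, smoothing width, and max-abs fusion rule are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(img, depth=4, sigma=2.0):
    """Band-pass levels plus a final low-pass residual (non-decimated)."""
    levels, current = [], np.asarray(img, float)
    for _ in range(depth):
        low = gaussian_filter(current, sigma)
        levels.append(current - low)     # detail at this scale
        current = low
    levels.append(current)               # coarsest base level
    return levels

def pyramid_fuse(i1, i2, depth=4):
    """Fuse detail levels by maximum absolute value, average the base."""
    p1, p2 = dog_stack(i1, depth), dog_stack(i2, depth)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(p1[:-1], p2[:-1])]
    fused.append(0.5 * (p1[-1] + p2[-1]))
    return np.sum(fused, axis=0)         # collapse to reconstruct
```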

To overcome these limitations, multiscale geometric analysis methods were introduced for medical image fusion. Curvelet-transform-based fusion of CT and MR images [26] does not provide a proper multiresolution representation of the geometry (as the curvelet transform is not built directly in the discrete domain) [27]. Contourlet-transform-based fusion improves the contrast, but shift invariance is lost due to subsampling [27, 28]. The nonsubsampled contourlet transform with variable weights for fusion of MR and SPECT images has large computational time and complexity [27, 29].

Recently, guided filter fusion (GFF) [30] has been used to preserve edges and avoid blurring effects in the fused image. The guided filter is an edge-preserving filter whose computational time is independent of the filter size. However, the method provides limited performance for noisy images due to its use of a Gaussian filter and two-level weight maps. An improved guided image fusion scheme for MR and CT imaging is proposed to overcome these limitations. Simulation results based on visual and quantitative analysis show the significance of the proposed scheme.

2. Preliminaries

In this section, we briefly discuss the methodology of GFF [30]. The main steps of the GFF method are filtering (to obtain the two-scale representation), weight-map construction, and fusion of the base and detail layers (using guided filtering and a weighted-average method).

Let $F$ be the fused image obtained by combining input images $I_1$ and $I_2$ of the same size ($M \times N$). The base ($B_1$ and $B_2$) and detail ($D_1$ and $D_2$) layers of the source images are

$$B_n = I_n \ast Z, \qquad D_n = I_n - B_n, \qquad n = 1, 2, \tag{1}$$

where $Z$ is the average filter and $\ast$ denotes convolution. The base and detail layers contain the large- and small-scale variations, respectively. The saliency images $S_1$ and $S_2$ are obtained by convolving $I_1$ and $I_2$ with a Laplacian filter $L$ followed by a Gaussian filter $g$; that is,

$$S_n = \left| I_n \ast L \right| \ast g, \qquad n = 1, 2. \tag{2}$$

The weight maps $P_1$ and $P_2$ are

$$P_1^k = \begin{cases} 1, & S_1^k \geq S_2^k, \\ 0, & S_1^k < S_2^k, \end{cases} \qquad P_2^k = 1 - P_1^k, \tag{3}$$

where $S_1^k$ and $S_2^k$ are the saliency values for pixel $k$ in $S_1$ and $S_2$, respectively.

Guided image filtering is performed to obtain the refined weights $W_1^B$, $W_2^B$, $W_1^D$, and $W_2^D$ as

$$W_n^B = G_{r_1, \epsilon_1}\left(P_n, I_n\right), \qquad W_n^D = G_{r_2, \epsilon_2}\left(P_n, I_n\right), \qquad n = 1, 2, \tag{4}$$

where $r_1$, $\epsilon_1$, $r_2$, and $\epsilon_2$ are the parameters of the guided filter $G$, and $W_n^B$ and $W_n^D$ are the base-layer and detail-layer weight maps, respectively.

The fused image $F$ is obtained by weighted averaging of the corresponding layers; that is,

$$F = \sum_{n=1}^{2} W_n^B B_n + \sum_{n=1}^{2} W_n^D D_n. \tag{5}$$
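For concreteness, the following NumPy/SciPy sketch traces (1)-(5) for two registered grayscale images, using a standard box-filter implementation of the guided filter; the parameter values (31 x 31 average filter, r1 = 45, eps1 = 0.3, r2 = 7, eps2 = 1e-6) follow the defaults reported in [30], while the Gaussian/Laplacian widths and the per-pixel weight normalization are our assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace, gaussian_filter

def guided_filter(guide, src, r, eps):
    """Edge-preserving guided filter (He et al.); cost independent of r."""
    size = 2 * r + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)        # local linear model coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def gff_fuse(i1, i2, r1=45, eps1=0.3, r2=7, eps2=1e-6):
    """Two-image GFF pipeline following (1)-(5); inputs scaled to [0, 1]."""
    i1, i2 = np.asarray(i1, float), np.asarray(i2, float)
    # (1) two-scale decomposition with a 31 x 31 average filter Z
    b1, b2 = uniform_filter(i1, 31), uniform_filter(i2, 31)
    d1, d2 = i1 - b1, i2 - b2
    # (2) saliency: |Laplacian| smoothed by a Gaussian g
    s1 = gaussian_filter(np.abs(laplace(i1)), 5)
    s2 = gaussian_filter(np.abs(laplace(i2)), 5)
    # (3) binary weight maps from the pixelwise saliency comparison
    p1 = (s1 >= s2).astype(float)
    p2 = 1.0 - p1
    # (4) refine the weights by guided filtering (guide = source image)
    w1b = guided_filter(i1, p1, r1, eps1)
    w2b = guided_filter(i2, p2, r1, eps1)
    w1d = guided_filter(i1, p1, r2, eps2)
    w2d = guided_filter(i2, p2, r2, eps2)
    wb, wd = w1b + w2b + 1e-12, w1d + w2d + 1e-12  # normalize per pixel
    # (5) weighted average of the base and detail layers
    return (w1b * b1 + w2b * b2) / wb + (w1d * d1 + w2d * d2) / wd
```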

The major limitations of the GFF [30] scheme are summarized as follows.

(1) The Gaussian filter $g$ in (2) is not suitable for Rician noise removal; thus, the algorithm has limited performance for noisy images. Hence, the filter in (2) needs to be modified to incorporate noise effects.

(2) The weight maps $P_1$ and $P_2$ in (3) can be improved by defining more levels. The main issue with the binary assignment (0 and 1) is that when the saliency values are approximately equal, the effect of one value is totally discarded, which degrades the fused image.

3. Proposed Methodology

The proposed scheme follows the methodology of GFF [30] with the modifications necessary to address the limitations listed above. This section first discusses the modification proposed to handle noise artifacts; the improved weight maps are then presented.

3.1. Improved Saliency Maps

The acquired medical images are usually of low quality (due to artifacts), which degrades performance (in terms of both human visualization and quantitative analysis).

Besides other artifacts, MR images often contain Rician noise (RN), which causes random fluctuations in the data and reduces image contrast [31]. RN is generated when the real and imaginary parts of the MR data are corrupted with zero-mean, equal-variance, uncorrelated Gaussian noise [32]. RN is a nonzero-mean noise. Note that the noise distribution of the magnitude image tends to a Rayleigh distribution in low-intensity regions and to a Gaussian distribution in high-intensity regions [31, 32].

Let $\tilde{I}_1$ be the image obtained using MR imaging, containing Rician noise $\eta$. The CT image $I_2$ has higher spatial resolution and a negligible noise level [33, 34].

The source images are first decomposed into base ($\tilde{B}_1$ and $B_2$) and detail ($\tilde{D}_1$ and $D_2$) layers following (1):

$$\tilde{B}_1 = \tilde{I}_1 \ast Z, \quad \tilde{D}_1 = \tilde{I}_1 - \tilde{B}_1, \qquad B_2 = I_2 \ast Z, \quad D_2 = I_2 - B_2. \tag{6}$$

$\tilde{B}_1$ and $\tilde{D}_1$ have an added noise term compared to (1). A linear minimum mean square error (LMMSE) estimator is used instead of the Gaussian filter to minimize the RN, consequently improving the fused image quality.

The saliency maps $\tilde{S}_1$ and $S_2$ are thus computed by applying the LMMSE-based filter $\phi$ in place of the Gaussian filter in (2); that is,

$$\tilde{S}_1 = \phi\left(\left|\tilde{I}_1 \ast L\right|\right), \qquad S_2 = \left|I_2 \ast L\right| \ast g. \tag{7}$$

The main purpose of $\phi$ is to make the extra noise term in $\tilde{S}_1$ as small as possible while enhancing the image details.
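The paper does not spell out its LMMSE estimator in closed form; the sketch below implements one widely used local LMMSE filter for Rician-corrupted magnitude images (operating on the squared magnitude, in the spirit of Aja-Fernandez et al.) and should be read as a plausible stand-in rather than the authors' exact filter. The window size and the assumption that the noise standard deviation sigma_n is known are ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lmmse_rician(m, sigma_n, win=7):
    """Local LMMSE estimate of the noise-free signal from a Rician
    magnitude image m, given the noise standard deviation sigma_n."""
    m = np.asarray(m, float)
    m2 = m ** 2
    mean_m2 = uniform_filter(m2, win)        # local moments of m^2
    mean_m4 = uniform_filter(m2 * m2, win)
    var_m2 = np.maximum(mean_m4 - mean_m2 ** 2, 1e-12)
    # gain in [0, 1]: close to 1 on structure, close to 0 in flat noise
    k = 1.0 - 4.0 * sigma_n ** 2 * (mean_m2 - sigma_n ** 2) / var_m2
    k = np.clip(k, 0.0, 1.0)
    # bias-corrected local estimate of the squared signal, then its root
    a2 = mean_m2 - 2.0 * sigma_n ** 2 + k * (m2 - mean_m2)
    return np.sqrt(np.maximum(a2, 0.0))
```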

3.2. Improved Weight Maps

The saliency maps are linked with the detail information in the image. The main issue with the 0 and 1 weight assignments in GFF [30] arises when different images have approximately equal saliency values; in such cases, one value is totally discarded. For noisy MR images, the saliency value at a pixel may be higher due to noise, in which case the pixel would be assigned the value 1 (which is not desirable). An appropriate solution is to define a range of values for weight-map construction.

Let $d^k = \tilde{S}_1^k - S_2^k$ and let $\tau_1 < \tau_2$ be small positive thresholds. A three-level weight map can then be defined as

$$P_1^k = \begin{cases} 1, & d^k > \tau_1, \\ 0.5, & \left|d^k\right| \leq \tau_1, \\ 0, & d^k < -\tau_1, \end{cases} \tag{8}$$

and a five-level map subdivides the transition region further:

$$P_1^k = \begin{cases} 1, & d^k > \tau_2, \\ 0.75, & \tau_1 < d^k \leq \tau_2, \\ 0.5, & \left|d^k\right| \leq \tau_1, \\ 0.25, & -\tau_2 \leq d^k < -\tau_1, \\ 0, & d^k < -\tau_2, \end{cases} \tag{9}$$

with $P_2^k = 1 - P_1^k$ in both cases.

These values are selected empirically and may be further adjusted to improve results. Figures 1(a) and 1(b) show the CT and noisy MR images, respectively, and Figures 1(c)–1(f) show the results of applying the different weights. The information in the upper portion of the fused image increases as more levels are added to the weight maps.
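A small sketch of such a graduated assignment follows, assuming the banded form of (8)-(9); the level set and the threshold tau are placeholders for the empirically selected values, not quantities specified in the paper.

```python
import numpy as np

def multilevel_weights(s1, s2, levels=(0.0, 0.25, 0.5, 0.75, 1.0), tau=0.05):
    """Graduated weight maps from the saliency difference s1 - s2.

    Instead of the hard 0/1 decision of (3), the difference is mapped
    onto the given levels; tau sets the width of the transition band
    (both the level set and tau are illustrative, tuned empirically)."""
    d = np.asarray(s1, float) - np.asarray(s2, float)
    edges = np.linspace(-tau, tau, len(levels) - 1)  # band boundaries
    p1 = np.asarray(levels)[np.digitize(d, edges)]   # level per pixel
    return p1, 1.0 - p1                              # P2 = 1 - P1
```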

The weight maps are passed through the guided filter, following (4), to obtain $\tilde{W}_1^B$, $W_2^B$, $\tilde{W}_1^D$, and $W_2^D$. Finally, the fused image is

$$F = \tilde{W}_1^B \tilde{B}_1 + W_2^B B_2 + \tilde{W}_1^D \tilde{D}_1 + W_2^D D_2. \tag{10}$$

The LMMSE-based filter reduces the Rician noise, and the additional weight-map levels ensure that more information is transferred to the fused image. The incorporation of the LMMSE-based filter and a range of weight-map values makes the proposed method suitable for noisy images.

4. Results and Analysis

The proposed method is tested on several pairs of source (MR and CT) images. For quantitative evaluation, different measures are considered, including the mutual information (MI) measure $Q_{MI}$ [35], the structural similarity (SSIM) measure $Q_Y$ [36], Xydeas and Petrović's measure $Q^{AB/F}$ [37], Zhao et al.'s measure $Q_P$ [38], Piella and Heijmans's measures $Q_S$ and $Q_W$ [39], and the visual information fidelity fusion (VIFF) metric [40].

4.1. MI Measure

MI is a statistical measure of the degree of dependence between different images. A large value of MI implies better quality and vice versa [11, 33, 35]:

$$Q_{MI} = 2\left[\frac{I\left(F, I_1\right)}{H(F) + H\left(I_1\right)} + \frac{I\left(F, I_2\right)}{H(F) + H\left(I_2\right)}\right], \tag{11}$$

where $H\left(I_1\right)$, $H\left(I_2\right)$, and $H(F)$ are the entropies of the $I_1$, $I_2$, and $F$ images, respectively, and $I\left(F, I_1\right)$ and $I\left(F, I_2\right)$ are the mutual information terms computed from the jointly normalized histograms of $F$ and $I_1$ and of $F$ and $I_2$, together with the normalized histograms of $I_1$, $I_2$, and $F$.
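A minimal NumPy sketch of (11) follows, assuming 8-bit images and 256 histogram bins; the helper names are ours.

```python
import numpy as np

def _entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def _mutual_information(a, b, bins=256):
    """I(A; B) from the jointly normalized histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    return _entropy(pa) + _entropy(pb) - _entropy(joint.ravel())

def q_mi(i1, i2, f, bins=256):
    """Normalized mutual-information fusion metric, (11)."""
    h1, h2, hf = (_entropy(np.histogram(x.ravel(), bins)[0] / x.size)
                  for x in (i1, i2, f))
    return 2.0 * (_mutual_information(f, i1, bins) / (hf + h1)
                  + _mutual_information(f, i2, bins) / (hf + h2))
```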

4.2. SSIM [36] Measure

The SSIM-based measure $Q_Y$ [36] is defined as

$$Q_Y = \begin{cases} \lambda(w)\, \mathrm{SSIM}\left(I_1, F \mid w\right) + \left(1 - \lambda(w)\right) \mathrm{SSIM}\left(I_2, F \mid w\right), & \mathrm{SSIM}\left(I_1, I_2 \mid w\right) \geq 0.75, \\ \max\left\{\mathrm{SSIM}\left(I_1, F \mid w\right), \mathrm{SSIM}\left(I_2, F \mid w\right)\right\}, & \mathrm{SSIM}\left(I_1, I_2 \mid w\right) < 0.75, \end{cases} \tag{12}$$

where $w$ is a sliding window and $\lambda(w)$ is

$$\lambda(w) = \frac{\sigma_{I_1 \mid w}^2}{\sigma_{I_1 \mid w}^2 + \sigma_{I_2 \mid w}^2}, \tag{13}$$

where $\sigma_{I_1 \mid w}^2$ and $\sigma_{I_2 \mid w}^2$ are the variances of images $I_1$ and $I_2$ within $w$, respectively.
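A windowed NumPy/SciPy sketch of (12)-(13) follows, computing local SSIM maps with box filters; the 7 x 7 window and the 8-bit stabilization constants c1 and c2 are conventional choices, not values specified in this paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _ssim_map(x, y, size=7, c1=6.5025, c2=58.5225):
    """Local SSIM map over size x size windows (c1, c2 for 8-bit range)."""
    mx, my = uniform_filter(x, size), uniform_filter(y, size)
    vx = uniform_filter(x * x, size) - mx ** 2
    vy = uniform_filter(y * y, size) - my ** 2
    cxy = uniform_filter(x * y, size) - mx * my
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def q_y(i1, i2, f, size=7):
    """SSIM-based fusion metric of (12)-(13)."""
    i1, i2, f = (np.asarray(x, float) for x in (i1, i2, f))
    s12, s1f, s2f = (_ssim_map(a, b, size)
                     for a, b in ((i1, i2), (i1, f), (i2, f)))
    v1 = uniform_filter(i1 * i1, size) - uniform_filter(i1, size) ** 2
    v2 = uniform_filter(i2 * i2, size) - uniform_filter(i2, size) ** 2
    lam = v1 / (v1 + v2 + 1e-12)                 # lambda(w) of (13)
    return np.where(s12 >= 0.75,                 # similar windows
                    lam * s1f + (1 - lam) * s2f,
                    np.maximum(s1f, s2f)).mean() # dissimilar windows
```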

4.3. Xydeas and Petrović’s [37] Measure

Xydeas and Petrović [37] proposed a metric to evaluate the amount of edge information transferred from the input images to the fused image. It is calculated as

$$Q^{AB/F} = \frac{\sum_{k}\left[Q^{1F}(k)\, w^1(k) + Q^{2F}(k)\, w^2(k)\right]}{\sum_{k}\left[w^1(k) + w^2(k)\right]}, \tag{14}$$

where $Q^{1F}(k)$ and $Q^{2F}(k)$ are the products of the edge strength and orientation preservation values at location $k$ for $I_1$ and $I_2$, respectively. The weights $w^1(k)$ and $w^2(k)$ reflect the importance of $Q^{1F}(k)$ and $Q^{2F}(k)$, respectively.
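A sketch of (14) using Sobel gradients follows; the sigmoid constants are the ones commonly quoted for this metric in the literature, assumed here because the paper does not list them, and edge strength is used as the importance weight.

```python
import numpy as np
from scipy.ndimage import sobel

# sigmoid constants commonly quoted for this metric (assumed here)
G_G, K_G, S_G = 0.9994, -15.0, 0.5   # edge-strength preservation
G_A, K_A, S_A = 0.9879, -22.0, 0.8   # orientation preservation

def _grad(img):
    sx, sy = sobel(img, axis=0), sobel(img, axis=1)
    return np.hypot(sx, sy), np.arctan2(sy, sx)

def _q_xf(gx, ax, gf, af, eps=1e-12):
    g = np.where(gx > gf, gf / (gx + eps), gx / (gf + eps))  # strength ratio
    a = np.abs(np.abs(ax - af) - np.pi / 2) / (np.pi / 2)    # orientation
    qg = G_G / (1.0 + np.exp(K_G * (g - S_G)))
    qa = G_A / (1.0 + np.exp(K_A * (a - S_A)))
    return qg * qa        # Q^{XF}: strength term x orientation term

def q_abf(i1, i2, f):
    """Edge-information preservation metric, (14); weights = edge strength."""
    (g1, a1), (g2, a2), (gf, af) = (_grad(np.asarray(x, float))
                                    for x in (i1, i2, f))
    q1, q2 = _q_xf(g1, a1, gf, af), _q_xf(g2, a2, gf, af)
    return float((q1 * g1 + q2 * g2).sum() / (g1 + g2).sum())
```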

4.4. Zhao et al.’s [38] Metric

Zhao et al. [38] used phase congruency (which provides an absolute measure of image features) to define an evaluation metric; a larger value of the metric indicates a better fusion result. The metric is defined as the geometric product of the phase congruency component and the maximum and minimum moment components:

$$Q_P = \left(P_p\right)^{\alpha} \left(P_M\right)^{\beta} \left(P_m\right)^{\gamma}, \tag{15}$$

where $P_p$, $P_M$, and $P_m$ are the phase congruency and the maximum and minimum moments, respectively, and $\alpha$, $\beta$, and $\gamma$ are exponent parameters.

4.5. Piella and Heijmans’s [39] Metric

Piella and Heijmans's [39] metrics $Q_S$ and $Q_W$ are defined as

$$Q_S = \frac{1}{|W|} \sum_{w \in W} \left[\lambda(w)\, Q_0\left(I_1, F \mid w\right) + \left(1 - \lambda(w)\right) Q_0\left(I_2, F \mid w\right)\right], \tag{16}$$

where $Q_0\left(I_1, F \mid w\right)$ and $Q_0\left(I_2, F \mid w\right)$ are the local quality indexes calculated in a sliding window $w$ and $\lambda(w)$ is defined as in (13). Consider

$$Q_0\left(I_1, F \mid w\right) = \frac{4\, \sigma_{I_1 F \mid w}\, \bar{I}_1 \bar{F}}{\left(\bar{I}_1^2 + \bar{F}^2\right)\left(\sigma_{I_1 \mid w}^2 + \sigma_{F \mid w}^2\right)}, \tag{17}$$

where $\bar{I}_1$ and $\bar{F}$ are the means of $I_1$ and $F$, and $\sigma_{I_1 \mid w}^2$, $\sigma_{F \mid w}^2$, and $\sigma_{I_1 F \mid w}$ are the variances of $I_1$ and $F$ and the covariance of $I_1$ and $F$ within $w$, respectively. Consider

$$Q_W = \sum_{w \in W} c(w) \left[\lambda(w)\, Q_0\left(I_1, F \mid w\right) + \left(1 - \lambda(w)\right) Q_0\left(I_2, F \mid w\right)\right], \qquad c(w) = \frac{\max\left(\sigma_{I_1 \mid w}^2, \sigma_{I_2 \mid w}^2\right)}{\sum_{w' \in W} \max\left(\sigma_{I_1 \mid w'}^2, \sigma_{I_2 \mid w'}^2\right)}, \tag{18}$$

where $\sigma_{I_1 \mid w}^2$ and $\sigma_{I_2 \mid w}^2$ are the variances of images $I_1$ and $I_2$ within the window $w$, respectively.
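A compact sketch of (16)-(18) using box-filtered local statistics follows; the window size and the numerical guards are our choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _q0_map(x, y, size=7, eps=1e-12):
    """Wang-Bovik universal quality index Q0 in each sliding window."""
    mx, my = uniform_filter(x, size), uniform_filter(y, size)
    vx = uniform_filter(x * x, size) - mx ** 2
    vy = uniform_filter(y * y, size) - my ** 2
    cxy = uniform_filter(x * y, size) - mx * my
    return (4 * cxy * mx * my + eps) / ((vx + vy) * (mx ** 2 + my ** 2) + eps)

def q_piella(i1, i2, f, size=7):
    """Q_S and Q_W of (16) and (18) from box-filtered local statistics."""
    i1, i2, f = (np.asarray(x, float) for x in (i1, i2, f))
    q1, q2 = _q0_map(i1, f, size), _q0_map(i2, f, size)
    v1 = uniform_filter(i1 * i1, size) - uniform_filter(i1, size) ** 2
    v2 = uniform_filter(i2 * i2, size) - uniform_filter(i2, size) ** 2
    lam = v1 / (v1 + v2 + 1e-12)          # lambda(w), as in (13)
    local = lam * q1 + (1 - lam) * q2     # per-window quality
    c = np.maximum(v1, v2)                # saliency weight c(w) of (18)
    return local.mean(), float((c * local).sum() / (c.sum() + 1e-12))
```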

4.6. VIFF [40] Metric

VIFF [40] is a multiresolution image fusion metric used to assess fusion performance objectively. It has four stages: (1) the source and fused images are filtered and divided into blocks; (2) the visual information is evaluated with and without distortion information in each block; (3) the VIFF of each subband is calculated; and (4) the overall quality measure is determined by weighting the VIFF values of the different subbands.

Figure 2 shows a pair of CT and MR images. The CT image (Figure 2(a)) provides clear bone information but no soft-tissue information, while the MR image (Figure 2(b)) provides soft-tissue information. The fused image must contain both the bone and soft-tissue information. The fused image obtained using the proposed scheme (Figure 2(d)) shows better results than the fused image obtained by GFF [30] (Figure 2(c)).

Figure 3 shows the images of a patient suffering from cerebral toxoplasmosis [41]. Clinical diagnosis requires comprehensive information combining both the CT and MR images. The improvement in the fused image using the proposed scheme can be observed in Figure 3(d) compared to the image obtained by GFF [30] in Figure 3(c).

Figure 4 shows a pair of CT and MR images of a woman suffering from hypertensive encephalopathy [41]. The improvement in the fused image using the proposed scheme can be observed in Figure 4(d) compared to the image obtained by GFF [30] in Figure 4(c).

Figure 5 shows a pair of images of a patient with acute stroke [41]. The improvement in the quality of the fused image obtained using the proposed scheme can be observed in Figure 5(d) compared to Figure 5(c) (the image obtained by GFF [30]).

Table 1 shows that the proposed scheme provides better quantitative results than the GFF [30] scheme in terms of $Q_{MI}$, $Q_Y$, $Q^{AB/F}$, $Q_P$, $Q_S$, $Q_W$, and VIFF for most of the examples.


Table 1: Quantitative comparison of GFF [30] and the proposed scheme.

Measure   | Example 1           | Example 2           | Example 3           | Example 4
          | GFF [30]   Proposed | GFF [30]   Proposed | GFF [30]   Proposed | GFF [30]   Proposed
Q_MI      | 0.2958     0.2965   | 0.4803     0.5198   | 0.4164     0.4759   | 0.4994     0.5526
Q_Y       | 0.3288     0.3540   | 0.3474     0.3519   | 0.3130     0.3139   | 0.2920     0.2940
Q^{AB/F}  | 0.4034     0.5055   | 0.4638     0.4678   | 0.4473     0.4901   | 0.4498     0.4653
Q_P       | 0.1600     0.1617   | 0.3489     0.3091   | 0.2061     0.2193   | 0.3002     0.2855
Q_S       | 0.4139     0.4864   | 0.2730     0.3431   | 0.2643     0.3247   | 0.2729     0.3339
Q_W       | 0.4539     0.7469   | 0.5188     0.6387   | 0.6098     0.7453   | 0.5268     0.6717
VIFF      | 0.2561     0.3985   | 0.1553     0.2968   | 0.1852     0.3009   | 0.1842     0.3487

5. Conclusions

An improved guided image fusion scheme for MR and CT imaging has been proposed. Modifications to the filter (LMMSE based) and to the weight maps (with additional levels) are proposed to overcome the limitations of the GFF scheme. Simulation results based on visual and quantitative analysis show the significance of the proposed scheme.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 13 pages, 2010.
  2. F. Maes, D. Vandermeulen, and P. Suetens, "Medical image registration using mutual information," Proceedings of the IEEE, vol. 91, no. 10, pp. 1699–1721, 2003.
  3. S. Das, M. Chowdhury, and M. K. Kundu, "Medical image fusion based on ripplet transform type-I," Progress in Electromagnetics Research B, no. 30, pp. 355–370, 2011.
  4. S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114–123, 2010.
  5. V. Barra and J.-Y. Boire, "A general framework for the fusion of anatomical and functional medical images," NeuroImage, vol. 13, no. 3, pp. 410–424, 2001.
  6. A. Polo, F. Cattani, A. Vavassori et al., "MR and CT image fusion for postimplant analysis in permanent prostate seed implants," International Journal of Radiation Oncology Biology Physics, vol. 60, no. 5, pp. 1572–1579, 2004.
  7. A. L. Grosu, W. A. Weber, M. Franz et al., "Reirradiation of recurrent high-grade gliomas using amino acid PET (SPECT)/CT/MRI image fusion to determine gross tumor volume for stereotactic fractionated radiotherapy," International Journal of Radiation Oncology Biology Physics, vol. 63, no. 2, pp. 511–519, 2005.
  8. G. Piella, "A general framework for multiresolution image fusion: from pixels to regions," Information Fusion, vol. 4, no. 4, pp. 259–280, 2003.
  9. N. Mitianoudis and T. Stathaki, "Pixel-based and region-based image fusion schemes using ICA bases," Information Fusion, vol. 8, no. 2, pp. 131–142, 2007.
  10. A. Jameel, A. Ghafoor, and M. M. Riaz, "Entropy dependent compressive sensing based image fusion," in Intelligent Signal Processing and Communication Systems, November 2013.
  11. B. Yang and S. Li, "Pixel-level image fusion with simultaneous orthogonal matching pursuit," Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.
  12. T. Ranchin and L. Wald, "Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation," Photogrammetric Engineering and Remote Sensing, vol. 66, no. 1, pp. 49–61, 2000.
  13. F. Al-Wassai, N. Kalyankar, and A. Al-Zaky, "Multisensor images fusion based on feature-level," International Journal of Latest Technology in Engineering, Management and Applied Science, vol. 1, no. 5, pp. 124–138, 2012.
  14. M. Ding, L. Wei, and B. Wang, "Research on fusion method for infrared and visible images via compressive sensing," Infrared Physics and Technology, vol. 57, pp. 56–67, 2013.
  15. H. B. Mitchell, Image Fusion: Theories, Techniques and Applications, Springer, 2010.
  16. V. Sharma and W. Davis, "Feature-level fusion for object segmentation using mutual information," in Augmented Vision Perception in Infrared, pp. 295–319, 2008.
  17. D. L. Hall and J. Llinas, "An introduction to multisensor data fusion," Proceedings of the IEEE, vol. 85, no. 1, pp. 6–23, 1997.
  18. A. Ardeshir Goshtasby and S. Nikolov, "Image fusion: advances in the state of the art," Information Fusion, vol. 8, no. 2, pp. 114–118, 2007.
  19. P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
  20. A. Toet, "A morphological pyramidal image decomposition," Pattern Recognition Letters, vol. 9, no. 4, pp. 255–261, 1989.
  21. V. S. Petrović and C. S. Xydeas, "Gradient-based multiresolution image fusion," IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228–237, 2004.
  22. A. Z. Chitade and S. K. Katiyar, "Multiresolution and multispectral data fusion using discrete wavelet transform with IRS images: Cartosat-1, IRS LISS III and LISS IV," Journal of the Indian Society of Remote Sensing, vol. 40, no. 1, pp. 121–128, 2012.
  23. S. Li, J. T. Kwok, and Y. Wang, "Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images," Information Fusion, vol. 3, no. 1, pp. 17–23, 2002.
  24. R. Singh, M. Vatsa, and A. Noore, "Multimodal medical image fusion using redundant discrete wavelet transform," in Proceedings of the 7th International Conference on Advances in Pattern Recognition (ICAPR '09), pp. 232–235, February 2009.
  25. G. Chen and Y. Gao, "Multisource image fusion based on double-density dual-tree complex wavelet transform," in Proceedings of the International Conference on Fuzzy Systems and Knowledge Discovery, vol. 9, pp. 1864–1868, 2012.
  26. F. Ali, I. El-Dokany, A. Saad, and F. Abd El-Samie, "A curvelet transform approach for the fusion of MR and CT images," Journal of Modern Optics, vol. 57, no. 4, pp. 273–286, 2010.
  27. L. Wang, B. Li, and L.-F. Tian, "Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients," Information Fusion, 2012.
  28. L. Yang, B. L. Guo, and W. Ni, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, no. 1–3, pp. 203–211, 2008.
  29. T. Li and Y. Wang, "Biological image fusion using a NSCT based variable-weight method," Information Fusion, vol. 12, no. 2, pp. 85–92, 2011.
  30. S. Li, X. Kang, and J. Hu, "Image fusion with guided filtering," IEEE Transactions on Image Processing, vol. 22, no. 7, 2013.
  31. R. D. Nowak, "Wavelet-based Rician noise removal for magnetic resonance imaging," IEEE Transactions on Image Processing, vol. 8, no. 10, pp. 1408–1419, 1999.
  32. C. S. Anand and J. S. Sahambi, "Wavelet domain non-linear filtering for MRI denoising," Magnetic Resonance Imaging, vol. 28, no. 6, pp. 842–861, 2010.
  33. Y. Nakamoto, M. Osman, C. Cohade et al., "PET/CT: comparison of quantitative tracer uptake between germanium and CT transmission attenuation-corrected images," Journal of Nuclear Medicine, vol. 43, no. 9, pp. 1137–1143, 2002.
  34. J. E. Bowsher, H. Yuan, L. W. Hedlund et al., "Utilizing MRI information to estimate F18-FDG distributions in rat flank tumors," in Proceedings of the IEEE Nuclear Science Symposium Conference Record, vol. 4, pp. 2488–2492, October 2004.
  35. M. Hossny, S. Nahavandi, and D. Creighton, "Comments on 'Information measure for performance of image fusion'," Electronics Letters, vol. 44, no. 18, pp. 1066–1067, 2008.
  36. C. Yang, J.-Q. Zhang, X.-R. Wang, and X. Liu, "A novel similarity based quality metric for image fusion," Information Fusion, vol. 9, no. 2, pp. 156–160, 2008.
  37. C. S. Xydeas and V. Petrović, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
  38. J. Zhao, R. Laganière, and Z. Liu, "Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement," International Journal of Innovative Computing, Information and Control, vol. 3, no. 6, pp. 1433–1447, 2007.
  39. G. Piella and H. Heijmans, "A new quality metric for image fusion," in Proceedings of the International Conference on Image Processing (ICIP '03), pp. 173–176, September 2003.
  40. Y. Han, Y. Cai, Y. Cao, and X. Xu, "A new image fusion performance metric based on visual information fidelity," Information Fusion, vol. 14, no. 2, pp. 127–135, 2013.
  41. Harvard Image Database, https://www.med.harvard.edu/.

Copyright © 2014 Amina Jameel et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
