The Scientific World Journal
Volume 2014, Article ID 708075, 8 pages
http://dx.doi.org/10.1155/2014/708075
Research Article

MRI and PET Image Fusion Using Fuzzy Logic and Image Local Features

1Faculty of Engineering and Technology, International Islamic University, Islamabad 44000, Pakistan
2School of Engineering and Applied Sciences, Isra University, Islamabad 44000, Pakistan
3College of Signals, National University of Sciences and Technology, Islamabad 44000, Pakistan

Received 7 August 2013; Accepted 29 October 2013; Published 19 January 2014

Academic Editors: S. Bourennane and J. Marot

Copyright © 2014 Umer Javed et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An image fusion technique for magnetic resonance imaging (MRI) and positron emission tomography (PET) using local features and fuzzy logic is presented. The aim of the proposed technique is to maximally combine the useful information present in MRI and PET images. Image local features are extracted and combined with fuzzy logic to compute weights for each pixel. Simulation results show that the proposed scheme produces significantly better results compared to state-of-the-art schemes.

1. Introduction

Fusion of images obtained from different imaging systems like computed tomography (CT), MRI, and PET plays an important role in medical diagnosis and other clinical applications. Each imaging technique provides a different level of information. For instance, CT (based on the X-ray principle) is commonly used for visualizing dense structures and is not suitable for soft tissues and physiological analysis. MRI, on the other hand, provides better visualization of soft tissues and is commonly used for detection of tumors and other tissue abnormalities. Likewise, information about blood flow in the body is provided by PET (a nuclear imaging technique), but it suffers from low resolution compared to CT and MRI. Hence, fusion of images obtained from different modalities is desirable to extract sufficient information for clinical diagnosis and treatment.

Image fusion integrates (complementary as well as redundant) information from multimodality images to create a fused image [16]. It not only provides an accurate description of the same object but also reduces memory requirements, since fused images can be stored instead of multiple source images. Different techniques have been developed for medical image fusion; they can be broadly grouped into pixel-, feature-, and decision-level fusion [7]. Compared to feature- and decision-level methods, pixel-level methods [1, 2] are better suited for medical imaging as they preserve spatial details in the fused images [1, 8].

Conventional pixel-level methods (including addition, subtraction, multiplication, and weighted averaging) are simple but less accurate. Intensity-hue-saturation (IHS)-based methods fuse the images by replacing the intensity component [1, 5, 9]. These methods generally produce high-resolution fused images but cause spectral distortion (due to inaccurate estimation of spectral information) [10]. Similarly, principal component analysis based methods fuse images by replacing certain principal components [11].

Multiresolution techniques, including pyramids and the discrete wavelet transform (DWT), contourlet, curvelet, shearlet, and framelet transforms, decompose the image into different bands for fusion (a comprehensive comparison is presented in [12]). DWT-based schemes decompose the input images into horizontal, vertical, and diagonal subbands, which are then fused using additive or substitutive methods. Earlier DWT-based fusion schemes cannot preserve the salient features of the source images efficiently and hence produce block artifacts and inconsistency in the fused results [2, 3]. In [4], a human visual system model is combined with DWT to fuse the frequency bands using visibility and variance features, and a local window approach is used (to adjust coefficients adaptively) for noise reduction and for maintaining homogeneity in the fused image. However, the method often produces block artifacts and reduced contrast [3, 5]. Consistency verification and activity measures combined with DWT can only capture limited directional information and hence are not suitable for sharp image transitions [13].

Texture features and a visibility measure are used with the framelet transform [5] to fuse the high and low frequency components, respectively. Contourlet transform based methods use different and flexible directions to detect intrinsic geometrical structures [13]. Common methods include variable-weight fusion using the nonsubsampled contourlet transform [14] and a bio-inspired activity measure using pulse-coupled neural networks [15]. However, the down- and up-sampling in the contourlet transform lacks shift invariance and causes ringing artifacts [14]. The curvelet transform uses various directions and positions at different length scales [16]; however, it does not provide a multiresolution representation of geometry [17]. The shearlet transform carries different features (like directionality, localization, and a multiscale framework) and can decompose the image into any scale and direction to fuse the required information [17].

A prespecified transform matrix and learning techniques are used with K-singular value decomposition (K-SVD) to fuse images in the sparse domain [18]. In [19], image fusion is performed using the redundant DWT and the contourlet transform. A pixel-level neuro-fuzzy fusion scheme adjusts the membership functions (MFs) using backpropagation and least mean square algorithms [20]. A spiking cortical model is proposed to fuse different types of medical images [21]. However, these schemes are complex or work under certain assumptions/constraints.

In this paper, a fusion technique for MRI and PET images using local features and fuzzy logic is presented. The proposed technique maximally combines the useful information present in MRI and PET images. Image local features are extracted and combined with fuzzy logic to compute weights for each pixel. Simulation results based on visual and quantitative analysis show the significance of the proposed scheme.

2. À-Trous-Based Image Fusion: An Overview

In contrast to conventional multiresolution schemes (where the output is downsampled after each level), the à-trous or undecimated wavelet transform provides shift invariance and is hence better suited for image fusion.

Let different approximations $M_l$ of the MRI image $M$ (having dimensions $u \times v$) be obtained by successive convolutions with a filter $h$; that is,

$$M_l = M_{l-1} * h, \quad l = 1, 2, \ldots, L,$$

where $M_0 = M$ and $h$ is a bicubic B-spline filter. The $l$th wavelet plane of $M$ is

$$W_l^{M} = M_{l-1} - M_l.$$

The image is decomposed into low and high frequency components as

$$M = M_L + \sum_{l=1}^{L} W_l^{M},$$

where $L$ is the total number of decomposition levels. Similarly, the PET image $P$ in terms of low and high frequency components is

$$P_b = P_{b,L} + \sum_{l=1}^{L} W_l^{P_b},$$

where $b = 1, 2, 3$, as PET images are assumed to be in pseudocolor [9].
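As a concrete illustration, the following Python sketch implements the à-trous decomposition above. The 5-tap B-spline kernel $[1, 4, 6, 4, 1]/16$ and all function names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve

# 1-D bicubic B-spline filter; the 2-D kernel is its outer product.
_H1D = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
_H2D = np.outer(_H1D, _H1D)

def _dilate(kernel, step):
    """Insert step - 1 zeros between kernel taps (the 'holes' of a trous)."""
    size = (kernel.shape[0] - 1) * step + 1
    out = np.zeros((size, size))
    out[::step, ::step] = kernel
    return out

def atrous_decompose(image, n_levels=3):
    """Return the residual M_L and the wavelet planes [W_1, ..., W_L]."""
    approx = image.astype(np.float64)
    planes = []
    for level in range(n_levels):
        smoothed = convolve(approx, _dilate(_H2D, 2 ** level), mode='mirror')
        planes.append(approx - smoothed)  # W_l = M_{l-1} - M_l
        approx = smoothed                 # next approximation M_l
    return approx, planes
```

By telescoping, `approx + sum(planes)` reconstructs the input exactly, which is the property the fusion schemes below rely on.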

Different methods are present in the literature to fuse the low and high frequency components; they are generally grouped into substitute wavelet (SW) and additive wavelet (AW) methods. The fused image using SW is

$$F_b = P_{b,L} + \sum_{l=1}^{L} W_l^{M}.$$

Note that the SW method fuses images by completely replacing the high frequency components of the PET image with the high frequency components of the MRI image, which can cause geometric and spectral distortion. SW and IHS (SWI) are combined to overcome this limitation: the fused intensity

$$\bar{I} = I_L + \sum_{l=1}^{L} W_l^{M}$$

replaces the intensity component in the inverse IHS transform, where the intensity image is

$$I = \frac{P_1 + P_2 + P_3}{3}.$$

The substitution process in the SWI method sometimes results in loss of information as the intensity component is obtained by simple averaging/weighting.
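A minimal sketch of SW fusion per the first equation above: each PET band keeps its own low frequencies while all high frequencies are taken from the MRI image. It reuses `atrous_decompose` from the previous snippet; treating `pet` as an (H, W, 3) pseudocolor array is an assumption.

```python
def sw_fuse(mri, pet, n_levels=3):
    _, mri_planes = atrous_decompose(mri, n_levels)
    details = np.sum(mri_planes, axis=0)        # sum_l W_l^M
    fused = np.empty_like(pet, dtype=np.float64)
    for b in range(pet.shape[2]):
        pet_low, _ = atrous_decompose(pet[..., b], n_levels)
        fused[..., b] = pet_low + details       # P_{b,L} + MRI details
    return fused
```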

In the AW method, the fused image is obtained by injecting the high frequency components of $M$ into $P$:

$$F_b = P_b + \sum_{l=1}^{L} W_l^{M}.$$

The AW method adds the same amount of high frequencies into all low-resolution bands, which causes redundancy of high frequency components (hence resulting in spectral distortion).

To cater for this limitation, the AW luminance proportional (AWLP) method injects the high frequencies in proportion to the intensity values [22]:

$$F_b = P_b + \frac{P_b}{\frac{1}{B}\sum_{b'=1}^{B} P_{b'}} \sum_{l=1}^{L} W_l^{M},$$

where $B$ is the total number of bands. The fused image of AWLP preserves the relative spectral information amongst different bands. The fused image using the improved additive wavelet proportional (IAWP) [23] method is

$$F_b = P_b + \frac{P_b}{\frac{1}{B}\sum_{b'=1}^{B} P_{b'}} \sum_{l=1}^{L} W_l^{\tilde{M}},$$

where $W_l^{\tilde{M}}$ are wavelet planes of a low-resolution MRI image $\tilde{M}$ (a spatially degraded version of $M$). The image $\tilde{M}$ is obtained by filtering out the high frequencies (by applying a smoothing filter). The major limitations of the above schemes include the injection of redundant high/low frequencies and, consequently, spatial degradations.
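A sketch of the AWLP-style proportional injection above, again reusing `atrous_decompose`. The `eps` guard against division by zero and the function name are assumptions.

```python
def awlp_fuse(mri, pet, n_levels=3, eps=1e-6):
    _, mri_planes = atrous_decompose(mri, n_levels)
    details = np.sum(mri_planes, axis=0)
    mean_band = pet.mean(axis=2) + eps   # (1/B) * sum_b P_b
    fused = np.empty_like(pet, dtype=np.float64)
    for b in range(pet.shape[2]):
        # Each band receives the MRI details in proportion to its
        # share of the mean intensity.
        fused[..., b] = pet[..., b] + (pet[..., b] / mean_band) * details
    return fused
```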

3. Proposed Technique

The proposed scheme first decomposes the MRI and PET images into low and high frequencies using the à-trous wavelet. The high and low frequencies are then fused separately according to defined criteria. The overall fused image in terms of high and low frequencies is

$$F_b = \bar{L}_b + \bar{W},$$

where $\bar{L}_b$ is the fused low frequency component of band $b$ (Section 3.1) and $\bar{W}$ is the fused high frequency component (Section 3.2).

3.1. Fusion of Low Frequencies

Fusion of the low frequencies $M_L$ and $P_{b,L}$ is a critical and challenging task. Various schemes utilize different criteria for fusion of low frequencies. For instance, one choice is to totally discard the low frequencies of one image; another is to take the average or a weighted average of both. However, such schemes provide limited performance as they do not cater for the spatial properties of the image. We propose fusing the low frequencies using a different weighted average at each pixel location. The weights are computed based on the amount of information contained in the vicinity of each pixel.

3.1.1. Local Features

Local variance (LV) and local blur (LB) features are used with a fuzzy inference engine to compute the desired weights for fusing the low frequencies.

LV [24] is used to evaluate the regional characteristics of the image and is defined as

$$\sigma^2(i,j) = \frac{1}{w^2} \sum_{p} \sum_{q} \big( M_L(p,q) - \mu(i,j) \big)^2,$$

where the sums run over a $w \times w$ window centered at pixel $(i,j)$ and $\mu(i,j)$ is the mean value of that window. Note that an image containing sharp edges results in a higher LV value (and vice versa).
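The local variance can be computed efficiently with box filters via $\mathrm{Var} = E[x^2] - (E[x])^2$; a short sketch, where the default window size $w = 5$ is an assumption:

```python
from scipy.ndimage import uniform_filter

def local_variance(image, w=5):
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=w, mode='mirror')
    mean_sq = uniform_filter(img ** 2, size=w, mode='mirror')
    return np.clip(mean_sq - mean ** 2, 0.0, None)  # clip tiny negatives
```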

LB is computed using the local Rényi entropy [25] of the image. Let $p_k$ be the probability (or normalized histogram) of intensity value $k$ within a local window (of size $w \times w$) centered at pixel $(i,j)$. The local Rényi entropy of order $\alpha$ is defined as [25]

$$R_\alpha(i,j) = \frac{1}{1 - \alpha} \log_2 \left( \sum_{k} p_k^{\alpha} \right).$$
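A direct (unoptimized) sketch of the local Rényi entropy above; the window size, order $\alpha$, and bin count are assumptions, as the exact parameter choices are not reproduced here.

```python
def local_renyi_entropy(image, w=5, alpha=3.0, n_bins=64):
    img = image.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)  # normalize to [0, 1]
    half = w // 2
    padded = np.pad(img, half, mode='reflect')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + w, j:j + w]
            hist, _ = np.histogram(window, bins=n_bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]                  # drop empty bins before the power sum
            out[i, j] = np.log2(np.sum(p ** alpha)) / (1.0 - alpha)
    return out
```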

High values of $\sigma^2$ and $R_\alpha$ indicate that the MRI low frequency component $M_L$ contains more information and needs to be assigned more weight as compared to the PET image.

3.1.2. Fuzzy Inference Engine

Let the high and low Gaussian membership functions (MFs), having means $m_{v_h}$, $m_{v_l}$ and variances $\sigma_{v_h}^2$, $\sigma_{v_l}^2$, for LV be [26]

$$\mu_{v_h}(x) = \exp\left( -\frac{(x - m_{v_h})^2}{2\sigma_{v_h}^2} \right), \qquad \mu_{v_l}(x) = \exp\left( -\frac{(x - m_{v_l})^2}{2\sigma_{v_l}^2} \right).$$

Similarly, let the high and low Gaussian MFs, having means $m_{b_h}$, $m_{b_l}$ and variances $\sigma_{b_h}^2$, $\sigma_{b_l}^2$, for LB be

$$\mu_{b_h}(x) = \exp\left( -\frac{(x - m_{b_h})^2}{2\sigma_{b_h}^2} \right), \qquad \mu_{b_l}(x) = \exp\left( -\frac{(x - m_{b_l})^2}{2\sigma_{b_l}^2} \right).$$

The inputs $\sigma^2(i,j)$ and $R_\alpha(i,j)$ are mapped into fuzzy sets using a Gaussian fuzzifier [27] as

$$\mu_A(x_1, x_2) = \exp\left( -\frac{\big(x_1 - \sigma^2(i,j)\big)^2}{a_1^2} \right) \exp\left( -\frac{\big(x_2 - R_\alpha(i,j)\big)^2}{a_2^2} \right),$$

where $a_1$ and $a_2$ are noise suppression parameters. The inputs are then processed by the fuzzy inference engine using predefined IF-THEN rules [26, 27] as follows.

$R^{(1)}$: IF LV is high and LB is high, THEN the weight is high.
$R^{(2)}$: IF LV is low and LB is high, THEN the weight is medium.
$R^{(3)}$: IF LV is high and LB is low, THEN the weight is medium.
$R^{(4)}$: IF LV is low and LB is low, THEN the weight is low.

The output MFs for high (having mean $m_h$ and variance $\sigma_h^2$), medium (having mean $m_m$ and variance $\sigma_m^2$), and low (having mean $m_l$ and variance $\sigma_l^2$) are defined as

$$\mu_h(\lambda) = \exp\left( -\frac{(\lambda - m_h)^2}{2\sigma_h^2} \right), \quad \mu_m(\lambda) = \exp\left( -\frac{(\lambda - m_m)^2}{2\sigma_m^2} \right), \quad \mu_l(\lambda) = \exp\left( -\frac{(\lambda - m_l)^2}{2\sigma_l^2} \right).$$

The output of the fuzzy inference engine for rule $R^{(r)}$ is obtained using product inference,

$$\omega_r(i,j) = \mu_v^{(r)}\big(\sigma^2(i,j)\big)\, \mu_b^{(r)}\big(R_\alpha(i,j)\big), \quad r = 1, \ldots, 4,$$

where $\mu_v^{(r)}$ and $\mu_b^{(r)}$ are the LV and LB MFs appearing in rule $R^{(r)}$. The weights are obtained by processing the fuzzy outputs using the center average defuzzifier [27],

$$\lambda(i,j) = \frac{\sum_{r=1}^{4} \bar{\lambda}_r\, \omega_r(i,j)}{\sum_{r=1}^{4} \omega_r(i,j)},$$

where $\bar{\lambda}_r$ is the center of the output MF of rule $R^{(r)}$.
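A sketch of the per-pixel weight computation: Gaussian MFs, the four rules with product conjunction, and center-average defuzzification. All means, variances, and output centers below are illustrative assumptions; the inputs `lv` and `lb` are assumed normalized to $[0, 1]$.

```python
def gauss_mf(x, mean, var):
    return np.exp(-((x - mean) ** 2) / (2.0 * var))

def fuzzy_weights(lv, lb):
    # Input MFs for LV and LB (means/variances are placeholders).
    lv_hi, lv_lo = gauss_mf(lv, 1.0, 0.1), gauss_mf(lv, 0.0, 0.1)
    lb_hi, lb_lo = gauss_mf(lb, 1.0, 0.1), gauss_mf(lb, 0.0, 0.1)
    # Firing strengths of the four rules (product conjunction).
    strengths = [lv_hi * lb_hi,   # R1 -> high weight
                 lv_lo * lb_hi,   # R2 -> medium weight
                 lv_hi * lb_lo,   # R3 -> medium weight
                 lv_lo * lb_lo]   # R4 -> low weight
    centers = [1.0, 0.5, 0.5, 0.0]  # centers of the high/medium/low output MFs
    num = sum(c * s for c, s in zip(centers, strengths))
    den = sum(strengths) + 1e-12
    return num / den  # lambda(i, j) in [0, 1]
```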

The fused low frequency image $\bar{L}_b$ is obtained by the weighted sum of $M_L$ and $P_{b,L}$ as

$$\bar{L}_b(i,j) = \lambda(i,j)\, M_L(i,j) + \big(1 - \lambda(i,j)\big)\, P_{b,L}(i,j).$$

3.2. Fusion of High Frequencies

Let $W_l^{D}$ represent a wavelet plane of the difference image $D = M - I$, where $I$ is the intensity of the PET image. This ensures that only those high frequency components which are not already present in the PET image are used for image fusion. By virtue of this, the proposed scheme not only avoids redundancy of information but also yields improved fusion results as compared to earlier techniques. The fused high frequency image is

$$\bar{W} = \sum_{l=1}^{L} W_l^{D}.$$

Note that $\bar{W}$ does not depend on the band $b$ because $D$ is a gray-scale image.
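Putting the pieces together, the following is a hedged end-to-end sketch of the proposed fusion, reusing the helpers from the earlier snippets. The min-max normalization of the features and the use of $D = M - I$ for the detail image follow the reading of the text above and are assumptions.

```python
def fuse_mri_pet(mri, pet, n_levels=3):
    mri_low, _ = atrous_decompose(mri, n_levels)
    intensity = pet.mean(axis=2)                       # PET intensity I
    _, diff_planes = atrous_decompose(mri - intensity, n_levels)
    w_high = np.sum(diff_planes, axis=0)               # fused high frequencies
    # Per-pixel weights from local features of the MRI low band.
    normalize = lambda x: (x - x.min()) / (np.ptp(x) + 1e-12)
    lam = fuzzy_weights(normalize(local_variance(mri_low)),
                        normalize(local_renyi_entropy(mri_low)))
    fused = np.empty_like(pet, dtype=np.float64)
    for b in range(pet.shape[2]):
        pet_low, _ = atrous_decompose(pet[..., b], n_levels)
        # Fused low frequencies plus the band-independent details.
        fused[..., b] = lam * mri_low + (1.0 - lam) * pet_low + w_high
    return fused
```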

4. Results and Discussion

Simulations of the proposed and existing schemes are performed on PET and MRI images obtained from the Harvard database [28]. The fusion database for brain images is classified into normal, grade II astrocytoma, and grade IV astrocytoma images. The MRI and PET images are coregistered and have equal spatial resolution. The proposed fusion scheme is compared visually and quantitatively (using entropy [29], mutual information (MI) [29], structural similarity (SSIM) [30], the Xydeas and Petrovic metric [31], and the Piella metric [32]) with the DWT [12], GIHS [6], IAWP [23], and GFF [33] schemes.
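As a sketch of two of the quantitative measures, the snippet below computes the Shannon entropy of a fused band and its SSIM against a source image. `structural_similarity` is scikit-image's standard implementation; the bin count is an assumption.

```python
from skimage.metrics import structural_similarity

def image_entropy(img, n_bins=256):
    hist, _ = np.histogram(img, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))      # Shannon entropy in bits

def evaluate(fused_band, reference):
    ssim = structural_similarity(reference, fused_band,
                                 data_range=float(np.ptp(fused_band)))
    return image_entropy(fused_band), ssim
```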

The original MRI images belonging to normal brain, grade II astrocytoma, and grade IV astrocytoma are shown in Figures 1(a)–1(c), respectively. Fluorodeoxyglucose (FDG) is a radiopharmaceutical commonly used for PET scans. The PET-FDG images of normal brain, grade II, and grade IV astrocytoma are shown in Figures 1(d)–1(f), respectively. It can be seen that the different imaging modalities provide complementary information for the same region.

Figure 1: Original MRI and PET images: (a)–(c) MRI; (d)–(f) PET.

Figure 2 shows the fused images (of the normal brain) obtained using different techniques. It can be seen from Figure 2(e) that the proposed technique preserves the complementary information of both modalities, and the fuzzy-logic-based weight assignment yields less spectral information loss compared to the other state-of-the-art techniques.

Figure 2: Image fusion results for normal images: (a) DWT [12]; (b) GIHS [6]; (c) GFF [33]; (d) IAWP [23]; (e) proposed technique.

Figure 3 shows the fused images (of the grade II astrocytoma class) obtained using different techniques. From Figure 3(e), it can be observed that the proposed technique retains the complementary information contained in both modalities, and the fuzzy-logic-based weight assignment yields less spectral information loss compared to the other state-of-the-art techniques. The improvement in the fused images is most visible in the tumorous region (bottom right corner).

Figure 3: Image fusion results for grade II astrocytoma images: (a) DWT [12]; (b) GIHS [6]; (c) GFF [33]; (d) IAWP [23]; (e) proposed technique.

Figure 4 shows the fused images (of grade IV astrocytoma) obtained using different techniques. Improvements similar to those of Figures 2(e) and 3(e) can be observed in Figure 4(e). It can be concluded that the proposed scheme provides better visual quality than the existing schemes.

Figure 4: Image fusion results for grade IV astrocytoma images: (a) DWT [12]; (b) GIHS [6]; (c) GFF [33]; (d) IAWP [23]; (e) proposed technique.

Table 1 shows the quantitative comparison of the different fusion techniques. Note that a higher value of each metric represents better quality. The fused images obtained using the proposed technique provide better quantitative results in terms of the entropy [29], MI [29], SSIM [30], Xydeas and Petrovic [31], and Piella [32] metrics.

Table 1: Quantitative measures for fused PET-MRI images.

5. Conclusion

An image fusion technique for MRI and PET using local features and fuzzy logic is presented. The proposed scheme maximally combines the useful information present in the MRI and PET images using image local features and fuzzy logic. Weights are assigned to individual pixels for fusing the low frequencies. Simulation results based on visual and quantitative analysis show that the proposed scheme produces significantly better results compared to state-of-the-art schemes.

6. Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. G. Bhatnagar, Q. M. J. Wu, and Z. Liu, “Human visual system inspired multi-modal medical image fusion framework,” Expert Systems with Applications, vol. 40, no. 5, pp. 1708–1720, 2013.
  2. L. Yang, B. L. Guo, and W. Ni, “Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform,” Neurocomputing, vol. 72, no. 1–3, pp. 203–211, 2008.
  3. K. Amolins, Y. Zhang, and P. Dare, “Wavelet based image fusion techniques—an introduction, review and comparison,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 62, no. 4, pp. 249–263, 2007.
  4. Y. Yang, D. S. Park, S. Huang, and N. Rao, “Medical image fusion via an effective wavelet-based approach,” EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 13 pages, 2010.
  5. G. Bhatnagar and Q. M. J. Wu, “An image fusion framework based on human visual system in framelet domain,” International Journal of Wavelets, Multiresolution and Information Processing, vol. 10, no. 1, Article ID 1250002, 2012.
  6. T. Li and Y. Wang, “Biological image fusion using a NSCT based variable-weight method,” Information Fusion, vol. 12, no. 2, pp. 85–92, 2011.
  7. S. T. Shivappa, B. D. Rao, and M. M. Trivedi, “An iterative decoding algorithm for fusion of multimodal information,” EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 478396, 10 pages, 2008.
  8. B. Yang and S. Li, “Pixel-level image fusion with simultaneous orthogonal matching pursuit,” Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.
  9. S. Daneshvar and H. Ghassemian, “MRI and PET image fusion by combining IHS and retina-inspired models,” Information Fusion, vol. 11, no. 2, pp. 114–123, 2010.
  10. Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, “A comparative analysis of image fusion methods,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 6, pp. 1391–1402, 2005.
  11. H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
  12. G. Pajares and J. M. de la Cruz, “A wavelet-based image fusion tutorial,” Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
  13. M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
  14. D. Li and H. Chongzhao, “Fusion for CT image and MR image based on nonsubsampled transformation,” in Proceedings of the IEEE International Conference on Advanced Computer Control (ICACC '10), vol. 5, pp. 372–374, March 2010.
  15. X.-B. Qu, J.-W. Yan, H.-Z. Xiao, and Z.-Q. Zhu, “Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain,” Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.
  16. E. Candès, L. Demanet, D. Donoho, and L. Ying, “Fast discrete curvelet transforms,” Multiscale Modeling and Simulation, vol. 5, no. 3, pp. 861–899, 2006.
  17. Q.-G. Miao, C. Shi, P.-F. Xu, M. Yang, and Y.-B. Shi, “A novel algorithm of image fusion using shearlets,” Optics Communications, vol. 284, no. 6, pp. 1540–1547, 2011.
  18. N. N. Yu, T. S. Qiu, and W. H. Liu, “Medical image fusion based on sparse representation with KSVD,” in Proceedings of the World Congress on Medical Physics and Biomedical Engineering, vol. 39, pp. 550–553, 2013.
  19. S. Rajkumar and S. Kavitha, “Redundancy discrete wavelet transform and contourlet transform for multimodality medical image fusion with quantitative analysis,” in Proceedings of the 3rd International Conference on Emerging Trends in Engineering and Technology (ICETET '10), pp. 134–139, November 2010.
  20. J. Teng, S. Wang, J. Zhang, and X. Wang, “Neuro-fuzzy logic based fusion algorithm of medical images,” in Proceedings of the 3rd International Congress on Image and Signal Processing (CISP '10), vol. 4, pp. 1552–1556, October 2010.
  21. R. Wang, Y. Wu, M. Ding, and X. Zhang, “Medical image fusion based on spiking cortical model,” in Medical Imaging 2013: Digital Pathology, vol. 8676 of Proceedings of SPIE, 2013.
  22. L. Alparone, L. Wald, J. Chanussot, C. Thomas, P. Gamba, and L. M. Bruce, “Comparison of pansharpening algorithms: outcome of the 2006 GRS-S data-fusion contest,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3012–3021, 2007.
  23. Y. Kim, C. Lee, D. Han, Y. Kim, and Y. Kim, “Improved additive-wavelet image fusion,” IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 2, pp. 263–267, 2011.
  24. D.-C. Chang and W.-R. Wu, “Image contrast enhancement based on a histogram transformation of local standard deviation,” IEEE Transactions on Medical Imaging, vol. 17, no. 4, pp. 518–531, 1998.
  25. S. Gabarda and G. Cristóbal, “Blind image quality assessment through anisotropy,” Journal of the Optical Society of America A, vol. 24, no. 12, pp. B42–B51, 2007.
  26. M. M. Riaz and A. Ghafoor, “Fuzzy logic and singular value decomposition based through wall image enhancement,” Radioengineering, vol. 22, no. 1, p. 580, 2012.
  27. L. X. Wang, A Course in Fuzzy Systems and Control, Prentice Hall, New York, NY, USA, 1997.
  28. Harvard Medical Atlas Database, http://www.med.harvard.edu/AANLIB/home.html.
  29. G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electronics Letters, vol. 38, no. 7, pp. 313–315, 2002.
  30. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  31. C. S. Xydeas and V. Petrović, “Objective image fusion performance measure,” Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
  32. G. Piella, “Image fusion for enhanced visualization: a variational approach,” International Journal of Computer Vision, vol. 83, no. 1, pp. 1–11, 2009.
  33. S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2864–2875, 2013.