International Journal of Biomedical Imaging
Volume 2014 (2014), Article ID 469015, 8 pages
MRI Volume Fusion Based on 3D Shearlet Decompositions
1School of Electronic Engineering, University of Electronic Science and Technology of China, Qingshuihe Campus, No. 2006, Xiyuan Avenue, West Hi-Tech Zone, Chengdu, Sichuan 611731, China
2Research Institute of Electronic Science and Technology, University of Electronic Science and Technology of China, Qingshuihe Campus, No. 2006, Xiyuan Avenue, West Hi-Tech Zone, Chengdu, Sichuan 611731, China
3Electronic Engineering College, Chengdu University of Information Technology, No. 24, Section 1, Xuefu Road, Southwest Airport Economic Development Zone, Chengdu, Sichuan 610225, China
Received 16 August 2013; Revised 12 February 2014; Accepted 7 March 2014; Published 10 April 2014
Academic Editor: Richard H. Bayford
Copyright © 2014 Chang Duan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Many MRI scans can now produce 3D volume data with different contrasts, and observers may want to view several contrasts within the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band limited shearlet transform (3D BLST) is proposed. The method is evaluated on MRI and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than conventional fusion methods based on 2D and 3D wavelet and DT CWT transforms.
1. Introduction
Medical image fusion is a special case of image fusion and has been studied for decades; it is widely applied in medical diagnostics [1, 2]. It refers to extracting and merging the feasible information from different source images, captured either by different kinds of sensors, such as CT, MRI, and PET, or by different configurations of the same sensor, such as MRI and quantitative susceptibility mapping (QSM). Some of this information is correlated, but most of it is complementary, because particular sensors, or particular configurations of the same sensor, are sensitive to particular sources. For example, CT images provide the details of dense hard tissues; MRI images give information about soft tissues, providing contrast based on tissue relaxation times; and QSM can provide susceptibility contrast information, which is produced by a range of endogenous magnetic biomarkers and contrast agents such as iron, calcium, and gadolinium (Gd). If different data can be properly fused, the fused data contain all the feasible information about the scanned object and can reveal the details of its structure more clearly than any single sensor. Before fusion, all source data need to be registered. Because the 3D magnitude image and the QSM image are acquired from the real and imaginary parts of the same scan, the QSM images are exactly registered to the magnitude images.
Nowadays, much research on medical fusion methods considers only the 2D case. However, many medical sensors can provide 3D data volumes, and the value of each point in the volume is correlated not only with the adjacent points in the same layer but also with the points in neighboring layers. It is therefore necessary to develop volume fusion methods instead of 2D image fusion methods, which can only fuse the data in a single layer.
Fusion methods can be performed in the spatial domain or in a transformed domain. In the spatial domain, the most intuitive fused image is the weighted average of the source images. Such methods are relatively easy to implement, but their performance is limited, and some useful information may be reduced or even lost. Transform-domain fusion methods usually follow these steps: (1) register the source images, (2) apply the forward transform to the source images, (3) acquire the fused coefficients from the coefficients of the source images under some fusion rule, and (4) apply the backward transform to the fused coefficients to get the fused image. In this type of method, research usually focuses on two points: the choice of the transform and the design of the fusion rule. Many multiscale transforms have been applied in fusion methods, such as the DWT, DTCWT, curvelets, and shearlets.
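The four transform-domain steps above can be sketched in a few lines of Python. As a minimal stand-in for the multiscale transforms discussed here, a single-level 2D Haar decomposition is used; all function names are illustrative and are not from the paper:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, (lh, hl, hh)

def ihaar2d(ll, bands):
    """Inverse of haar2d (perfect reconstruction)."""
    lh, hl, hh = bands
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def fuse2d(src_a, src_b):
    """Steps (2)-(4): forward transform, fuse coefficients, inverse transform.
    Sources are assumed already registered (step 1)."""
    ll_a, hi_a = haar2d(src_a)
    ll_b, hi_b = haar2d(src_b)
    ll_f = 0.5 * (ll_a + ll_b)  # average rule for the low-pass band
    # max-modulus rule for the detail bands
    hi_f = tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                 for ha, hb in zip(hi_a, hi_b))
    return ihaar2d(ll_f, hi_f)
```

The same skeleton applies unchanged when the Haar pair is swapped for a DTCWT or shearlet transform; only the coefficient layout and the fusion rule change.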
Shearlets have emerged in recent years as one of the most successful frameworks for the efficient representation of multidimensional data. Indeed, many other transforms were introduced to overcome the limitation of traditional multiscale transforms, namely, their poor ability to capture edges and other anisotropic features. The shearlet transform stands out, however, because it has several unique advantages: it has a single or finite set of generating functions; it provides optimally sparse representations for multidimensional data; and it allows a unified treatment of the continuum and digital realms. With these advantages, the shearlet transform has been widely utilized in many image processing tasks such as denoising, edge detection, and enhancement. As many papers, as well as this one, demonstrate, the shearlet transform is also very well suited to image fusion. In this paper, the 3D band limited shearlet transform, a discrete implementation of the shearlet transform, is selected for medical volume fusion.
Three fusion rules are utilized in this paper: maximum points' modulus (MPM), which considers only the value of a single point; maximum regional energy (MRE), which considers the information of a local region and treats each point of the region equally; and maximum sum of modified Laplacian (MSML), which also considers the information in a region but treats the center point of the region and the points around it differently. Other, more complicated fusion rules have also been proposed; the above three were selected as representatives. These three classic fusion rules are extended to 3 dimensions. In order to evaluate the performance of the proposed method, the quality indices are also extended to 3D versions.
The rest of the paper is organized as follows. In Section 2, the basic theory of the 3D shearlet transform and its discrete implementation, 3D BLST, is briefly introduced. In Section 3, a fusion method based on 3D BLST with three fusion rules is proposed. Through the experiments of Section 4, the comparison of 2D and 3D methods and the performance of the proposed methods are illustrated and discussed. Finally, we draw conclusions in Section 5.
2. 3D Shearlet Transform
In this section, the basic theory of 3D shearlet transform and its discrete implementation, 3D band limited shearlet transform (3D BLST), are introduced.
2.1. Basic Theory of the 3D Shearlet Transform
As shown in Figure 1, the 3D frequency domain can be partitioned into three pairs of pyramids, $\mathcal{P}_1 = \{(\xi_1,\xi_2,\xi_3) : |\xi_2/\xi_1| \le 1,\ |\xi_3/\xi_1| \le 1\}$ and its analogues $\mathcal{P}_2$ and $\mathcal{P}_3$ with $\xi_2$ and $\xi_3$ dominant, and the center cube $\mathcal{C} = \{(\xi_1,\xi_2,\xi_3) : \max(|\xi_1|,|\xi_2|,|\xi_3|) \le 1\}$. The partitioning of frequency space into pyramids allows restricting the range of the shear parameters. Without such partitioning, one must allow arbitrarily large shear parameters, which leads to a treatment biased toward one axis. The defined partition, however, enables restriction of the shear parameters to $[-1,1]$. This approach is the key to providing an almost uniform treatment of different directions, in the sense of a good approximation to rotation.
Pyramid-adapted shearlets are scaled according to the paraboloidal scaling matrices $A_{2^j}$, $\tilde{A}_{2^j}$, and $\breve{A}_{2^j}$, $j \ge 0$, defined by $A_{2^j} = \mathrm{diag}(2^j, 2^{j/2}, 2^{j/2})$, $\tilde{A}_{2^j} = \mathrm{diag}(2^{j/2}, 2^j, 2^{j/2})$, and $\breve{A}_{2^j} = \mathrm{diag}(2^{j/2}, 2^{j/2}, 2^j)$. Directionality is encoded by the shear matrices $S_k$, $\tilde{S}_k$, and $\breve{S}_k$, $k = (k_1, k_2) \in \mathbb{Z}^2$, given by
$S_k = \begin{pmatrix} 1 & k_1 & k_2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$, $\tilde{S}_k = \begin{pmatrix} 1 & 0 & 0 \\ k_1 & 1 & k_2 \\ 0 & 0 & 1 \end{pmatrix}$, and $\breve{S}_k = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ k_1 & k_2 & 1 \end{pmatrix}$, respectively. The translation lattices will be defined through the matrices $M_c = \mathrm{diag}(c_1, c_2, c_2)$, $\tilde{M}_c = \mathrm{diag}(c_2, c_1, c_2)$, and $\breve{M}_c = \mathrm{diag}(c_2, c_2, c_1)$, where $c_1 > 0$ and $c_2 > 0$. Then the definition of the 3D pyramid-adapted discrete shearlet system can be given as follows.
Definition 1. For $c = (c_1, c_2) \in (\mathbb{R}_{+})^2$, the pyramid-adapted discrete shearlet system $\Psi(\psi; c)$ generated by $\psi \in L^2(\mathbb{R}^3)$ is defined by $\Psi(\psi; c) = \{\psi_{j,k,m} : j \ge 0,\ |k_1|, |k_2| \le \lceil 2^{j/2} \rceil,\ m \in M_c\mathbb{Z}^3\}$, where $\psi_{j,k,m} = 2^{j}\,\psi(S_k A_{2^j}\,\cdot - m)$, and analogously for the other two pyramids with the tilde and breve matrices.
2.2. 3D Band Limited Shearlet Transform
The 3D band limited shearlet transform (3D BLST) is one discrete implementation of the 3D pyramid-adapted shearlet transform. Let the shearlet generator $\psi$ be defined in the frequency domain by $\hat{\psi}(\xi) = \hat{\psi}_1(\xi_1)\,\hat{\psi}_2(\xi_2/\xi_1)\,\hat{\psi}_2(\xi_3/\xi_1)$, where $\hat{\psi}_1$ and $\hat{\psi}_2$ satisfy the following assumptions: (a) $\hat{\psi}_1 \in C^\infty(\mathbb{R})$ is the Fourier transform of a band limited wavelet whose dyadic dilations form a partition of unity away from the origin; (b) $\hat{\psi}_2 \in C^\infty(\mathbb{R})$ is a band limited "bump" function with $\mathrm{supp}\,\hat{\psi}_2 \subset [-1, 1]$ and $|\hat{\psi}_2(x-1)|^2 + |\hat{\psi}_2(x)|^2 + |\hat{\psi}_2(x+1)|^2 = 1$ for $x \in [-1, 1]$.
Thus, in the frequency domain, the band limited shearlet function is almost a tensor product of one wavelet with two "bump" functions and is thereby a canonical generalization of the classical band limited 2D shearlets. This implies that the support in the frequency domain has a needle-like shape, with the wavelet acting in the radial direction, which ensures high directional selectivity. The deviation from a pure tensor product in fact ensures a favorable behavior with respect to the shearing operator and thus a tiling of the frequency domain which leads to a tight frame for the subspace of $L^2(\mathbb{R}^3)$ of functions whose Fourier transforms are supported in the corresponding pyramid.
Theorem 2. Let $\psi$ be a band limited shearlet defined as above; then the family of functions $\{\psi_{j,k,m} : j \ge 0,\ |k_1|, |k_2| \le \lceil 2^{j/2} \rceil,\ m \in M_c\mathbb{Z}^3\}$ forms a tight frame for $\check{L}^2(\mathcal{P}_1) := \{f \in L^2(\mathbb{R}^3) : \mathrm{supp}\,\hat{f} \subset \mathcal{P}_1\}$.
By this theorem and a change of variables, the shearlet tight frames for $\check{L}^2(\mathcal{P}_1)$, $\check{L}^2(\mathcal{P}_2)$, and $\check{L}^2(\mathcal{P}_3)$ can be constructed, respectively. Furthermore, wavelet theory provides a choice of scaling function $\phi$ such that $\{\phi(\cdot - m) : m \in \mathbb{Z}^3\}$ forms a tight frame for $\check{L}^2(\mathcal{C})$. Since $\mathbb{R}^3 = \mathcal{C} \cup \mathcal{P}_1 \cup \mathcal{P}_2 \cup \mathcal{P}_3$ as an essentially disjoint union, any function $f \in L^2(\mathbb{R}^3)$ can be expressed as $f = P_{\mathcal{C}} f + P_{\mathcal{P}_1} f + P_{\mathcal{P}_2} f + P_{\mathcal{P}_3} f$, where $P_S$ denotes the orthogonal projection onto the closed subspace $\check{L}^2(S)$ for a measurable set $S$. More details of 3D shearlet theory and the implementation of the band limited shearlet transform, as well as other implementations, can be found in [10–14].
3. Proposed Fusion Method
The proposed fusion method belongs to voxel-level fusion, with an average rule for the low frequency coefficients and three alternative fusion rules for the high frequency coefficients.
(a) Maximum Points' Modulus (MPM). The fused high frequency coefficients are the coefficients with the larger modulus, as represented in (5): $C_F(p) = C_A(p)$ if $|C_A(p)| \ge |C_B(p)|$, and $C_F(p) = C_B(p)$ otherwise, where $C$ denotes the high frequency coefficients, the subscripts $A$ and $B$ label the two sources, and $F$ refers to the fused result. This fusion rule considers only pointwise information.
(b) Maximum Region Energy (MRE). The regional energy is given by (6): $E_X(p) = \frac{1}{N}\sum_{q \in R(p)} \big(C_X(q) - \mu_X(p)\big)^2$, $X \in \{A, B\}$, where $R(p)$ is a local region centered at $p$, $\mu_X(p)$ is the mean of all $C_X(q)$ in $R(p)$, and $N$ is the number of coefficients in $R(p)$. The fused high frequency coefficients are the coefficients with the larger local energy. This kind of rule considers not only the information at the current position but also the information around it.
(c) Maximum Region Sum of Modified Laplacian (MSML). The fused high frequency coefficients are acquired according to (7), which selects for each point the source whose region sum of modified Laplacian is larger. The 3D version of the modified Laplacian is calculated through (9), $\mathrm{ML}(x,y,z) = |2C(x,y,z) - C(x-s,y,z) - C(x+s,y,z)| + |2C(x,y,z) - C(x,y-s,z) - C(x,y+s,z)| + |2C(x,y,z) - C(x,y,z-s) - C(x,y,z+s)|$, and its sum over a local region $R(p)$ is calculated as in (8), $\mathrm{SML}(p) = \sum_{q \in R(p)} \mathrm{ML}(q)$. In this paper, the step $s$ equals 1.
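The three rules can be sketched as NumPy functions operating on 3D coefficient arrays. This is an illustrative reading of (5)–(9), not the paper's code; in particular, `fuse_mre` uses a plain squared sum over the window (without mean subtraction) as a simplification, and the box window radius `r` is an assumed parameter:

```python
import numpy as np

def box_sum3d(x, r=1):
    """Sum over the (2r+1)^3 neighbourhood of every voxel (zero padding)."""
    p = np.pad(x, r)
    out = np.zeros_like(x)
    w = 2 * r + 1
    for i in range(w):
        for j in range(w):
            for k in range(w):
                out += p[i:i + x.shape[0], j:j + x.shape[1], k:k + x.shape[2]]
    return out

def fuse_mpm(a, b):
    """(a) MPM: keep the coefficient with the larger modulus, per voxel."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse_mre(a, b, r=1):
    """(b) MRE: keep the source with the larger regional energy."""
    ea = box_sum3d(a * a, r)
    eb = box_sum3d(b * b, r)
    return np.where(ea >= eb, a, b)

def modified_laplacian3d(x, s=1):
    """3D modified Laplacian with step s (s = 1 in the paper)."""
    ml = np.zeros_like(x)
    for ax in range(3):
        ml += np.abs(2 * x - np.roll(x, s, axis=ax) - np.roll(x, -s, axis=ax))
    return ml

def fuse_msml(a, b, r=1):
    """(c) MSML: keep the source with the larger region sum of modified Laplacian."""
    sa = box_sum3d(modified_laplacian3d(a), r)
    sb = box_sum3d(modified_laplacian3d(b), r)
    return np.where(sa >= sb, a, b)
```

All three rules return a voxel-wise selection between the two coefficient arrays; they differ only in the decision statistic.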
The steps of the proposed fusion method are given in Figure 2. First, the forward 3D BLST is applied to both source volumes; the fused low frequency coefficients are the average of the two sources' low frequency coefficients, while the fused high frequency coefficients are acquired by one of the rules in (5)–(9). Finally, the backward 3D BLST is applied to the fused coefficients, and the output is the fused volume.
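The overall pipeline can be sketched with a trivial two-band smooth/detail split standing in for the 3D BLST; this is an illustrative skeleton of the average-low/select-high structure, not the actual transform, and both function names are assumptions:

```python
import numpy as np

def smooth3d(x):
    """Separable 3-tap [1, 2, 1]/4 smoothing along each axis:
    a crude low-pass stand-in for the 3D BLST coarse band."""
    for ax in range(3):
        x = (np.roll(x, 1, axis=ax) + 2 * x + np.roll(x, -1, axis=ax)) / 4.0
    return x

def fuse_volumes(a, b):
    """Average the low-pass parts, max-modulus the detail parts, recombine."""
    la, lb = smooth3d(a), smooth3d(b)      # "forward transform": low band
    ha, hb = a - la, b - lb                # residual as the detail band
    lf = 0.5 * (la + lb)                   # average rule for low frequencies
    hf = np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # MPM rule for details
    return lf + hf                         # "backward transform" of the split
```

With the real 3D BLST, the split would produce many directional subbands per scale, but the fusion logic per band is the same.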
4. Experiments and Discussion
In this section, the performance of the proposed methods is evaluated on 4 human brain subjects. The human study was approved by our Institutional Review Board. MR examinations were performed with a 3.0 T MR system (Signa HDxt, GE, USA), using an 8-channel head coil. A 3D weighted multiecho gradient echo sequence was used with the following parameters: ; ms; number of ; first ms; uniform spacing ms; kHz; field of view cm; a range of resolutions was tested: mm3. The 3D magnitude and QSM images reconstructed by NMEDI [15] are interpolated for fusion. Because in QSM processing the magnetic field outside the brain parenchyma is corrupted by noise, the QSM region was cropped by a mask, which was obtained using a brain extraction tool (BET) [16]. In the following comparisons, fusion is performed within the mask.
4.1. 2D versus 3D: The Consistency along the z-Axis
The volume data has three dimensions, that is, the x-, y-, and z-axes. In the first experiment, the 2D methods are performed frame by frame in the x–y planes, while the 3D methods directly fuse the whole volume data. One major difference between these two types of methods is the treatment of data along the z-axis. The consistency along the z-axis is compared by the visual effect of the interframe difference (IFD) images and a correlation measurement [17, 18]. When calculating the correlation, only the voxels located in the masks of both frames are counted, because the data outside the mask is invalid and the mask differs between images. Suppose the IFD is acquired by $D_i = F_{i+1} - F_i$, where the $F_i$ are the frames from the source volumes and the fused volume. The measure used in this paper is calculated by (10) as the average, over the IFD frames, of the masked pointwise correlation between the IFDs of the fused volume and those of the sources, where $N$ refers to the number of frames along the z-axis and $\circ$ means pointwise multiplication.
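The interframe-difference comparison can be sketched as follows; `ifd` and `masked_corr` are hypothetical helper names, and `masked_corr` computes a Pearson correlation restricted to a mask rather than reproducing the paper's exact formula (10):

```python
import numpy as np

def ifd(volume, axis=2):
    """Interframe difference along the z-axis: D_i = F_{i+1} - F_i."""
    return np.diff(volume, axis=axis)

def masked_corr(x, y, mask):
    """Pearson correlation between x and y, using only voxels inside mask."""
    xv, yv = x[mask], y[mask]
    xv = xv - xv.mean()
    yv = yv - yv.mean()
    denom = np.sqrt((xv ** 2).sum() * (yv ** 2).sum())
    return float((xv * yv).sum() / denom) if denom else 0.0
```

The per-frame correlations between the fused volume's IFDs and each source's IFDs would then be averaged over all frames to obtain a single consistency score.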
From the visual impression of the IFD images (Figure 3), it can be noticed that the images fused by the 2D methods show several obvious distortions, which make the fused IFD images similar to neither of the source images. In the results of the 3D methods, by contrast, the IFD images are more consistent with those of the source data, which means the fused volumes are highly correlated with the sources along the z-axis (Figure 3). The IFD images of the fused volumes are very similar to those of the QSM data, and the differences among the 3D methods can hardly be noticed. The same conclusion can be drawn from the quality index, as given in Tables 1, 2, 3, and 4: the 3D BLST with MPM fusion has the highest value, and all the values for the 3D methods are higher than those of the 2D methods with the same fusion rule.
4.2. Performance of Proposed Methods
In the second experiment, the visual effect and the quality indices are compared among the fusion methods based on 2D and 3D DWT, 2D and 3D DTCWT, and 3D BLST. Two widely used performance indices are selected as objective measurements of the fused results: mutual information (MI) and the edge-based fusion index of [19]. However, the index of [19] is only suited to the case of 2D images, so it was necessary to expand it to 3D for our experiment, with the 2D "Sobel" operator substituted by a 3D "Sobel" operator and the number of angles increased to 3. In most previous image fusion research, the quality index is calculated over all voxels (or pixels) of the source volumes (or images). In this experiment, however, only the points located within the mask are taken into consideration, because the points outside the mask carry no useful information about the brains and are manually set to arbitrary values; consequently, they are ignored in the evaluation step.
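A histogram-based estimate of mutual information, one of the two indices used here, can be sketched as follows; this is a generic estimator (with an assumed bin count), not the paper's exact implementation, and in the paper's setting only in-mask voxels would be passed in:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images/volumes, estimated from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of a
    py = pxy.sum(axis=0, keepdims=True)         # marginal of b
    nz = pxy > 0                                # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

For fusion evaluation, the index is typically reported as the sum MI(A, F) + MI(B, F) between each source and the fused result.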
One layer of each of the coronal, axial, and sagittal planes is selected as representative; the source and result images are shown in Figures 4, 5, and 6. From the visual impression, it is hard to tell which fusion method is better, because the resulting images are very similar to each other; the distinctions among them can only be noticed after careful observation. This suggests that the proposed method and all the conventional methods can fulfill the fusion task effectively. The performance of the different fusion methods can instead be compared through the quality indices listed in Tables 5, 6, 7, and 8. From the tables, it can be noticed that the quality indices of the proposed method are larger than those of the methods based on DWT or DT CWT, and, in the case of 3D BLST, the MRE rule has the largest indices among the different fusion rules.
The method of this paper belongs to voxel-level fusion, which considers only the distribution of the shearlet coefficients. In the future, the inner structural features of the organ will be taken into account, to see whether they can further improve the quality of the fused medical volume.
5. Conclusions
In this paper, a 3D medical volume fusion method based on the 3D band limited shearlet transform is proposed. From the principles of the methods and the experiments, the following conclusions can be drawn: (1) the 3D transform based methods have better consistency along the z-axis (the third dimension) than conventional 2D transform based methods in medical volume fusion; (2) from both the visual impression and the quality indices, the proposed 3D BLST based medical fusion method is better than those based on 3D DWT or 3D DTCWT; (3) among the fusion rules used with 3D BLST, the MRE rule has better performance than the other two rules, MPM and MSML.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported in part by the National Natural Science Foundation of China (no. 611390003).
- C. R. Hatt, A. K. Jain, V. Parthasarathy, A. Lang, and A. N. Raval, “MRI—3D ultrasound—X-ray image fusion with electromagnetic tracking for transendocardial therapeutic injections: In-vitro validation and in-vivo feasibility,” Computerized Medical Imaging and Graphics, vol. 37, no. 2, pp. 162–173, 2013.
- D.-A. Clevert, A. Helck, P. M. Paprottka, P. Zengel, C. Trumm, and M. F. Reiser, “Ultrasound-guided image fusion with computed tomography and magnetic resonance imaging. Clinical utility for imaging and interventional diagnostics of hepatic lesions,” Radiologe, vol. 52, no. 1, pp. 63–69, 2012.
- H. Lu, L. Zhang, and S. Serikawa, “Maximum local energy: an effective approach for multisensor image fusion in beyond wavelet transform domain,” Computers and Mathematics with Applications, vol. 64, pp. 996–1003, 2012.
- L. Wang, B. Li, and L. F. Tian, “EGGDD: an explicit dependency model for multi-modal medical image fusion in shift-invariant shearlet transform domain,” Information Fusion, vol. 19, pp. 29–37, 2014.
- G. R. Easley, D. Labate, and F. Colonna, “Shearlet-based total variation diffusion for denoising,” IEEE Transactions on Image Processing, vol. 18, no. 2, pp. 260–268, 2009.
- S. Yi, D. Labate, G. R. Easley, and H. Krim, “A shearlet approach to edge analysis and detection,” IEEE Transactions on Image Processing, vol. 18, no. 5, pp. 929–941, 2009.
- P. S. Negi and D. Labate, “3-D discrete shearlet transform and video processing,” IEEE Transactions on Image Processing, vol. 21, pp. 2944–2954, 2012.
- P. Feng, J. Wang, B. Wei, and D. Mi, “A Fusion algorithm for GFP image and phase contrast image of arabidopsis cell based on SFL-contourlet transform,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 635040, 10 pages, 2013.
- X. B. Qu, J. W. Yan, and G. D. Yang, “Multifocus image fusion method of sharp frequency localized Contourlet transform domain based on sum-modified-Laplacian,” Guangxue Jingmi Gongcheng/Optics and Precision Engineering, vol. 17, no. 5, pp. 1203–1212, 2009.
- G. Kutyniok and D. Labate, Shearlets: Multiscale Analysis for Multivariate Data, Springer, Berlin, Germany, 2012.
- W. Q. Lim, “The discrete shearlet transform: a new directional transform and compactly supported shearlet frames,” IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1166–1180, 2010.
- W. Q. Lim, “Nonseparable shearlet transform,” IEEE Transactions on Image Processing, vol. 22, pp. 2056–2065, 2013.
- G. Easley, D. Labate, and W. Q. Lim, “Sparse directional image representations using the discrete shearlet transform,” Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25–46, 2008.
- K. Guo and D. Labate, “The construction of smooth Parseval frames of shearlets,” Mathematical Modelling of Natural Phenomena, vol. 8, pp. 82–105, 2013.
- T. Liu, C. Wisnieff, M. Lou, W. Chen, and P. Spincemaille, “Nonlinear formulation of the magnetic field to source relationship for robust quantitative susceptibility mapping,” Magnetic Resonance in Medicine, vol. 69, pp. 467–476, 2013.
- S. M. Smith, “Fast robust automated brain extraction,” Human Brain Mapping, vol. 17, no. 3, pp. 143–155, 2002.
- Q. Zhang, Y. Chen, and L. Wang, “Multisensor video fusion based on spatial-temporal salience detection,” Signal Processing, vol. 93, no. 9, pp. 2485–2499, 2013.
- Q. Zhang, L. Wang, Z. Ma, and H. Li, “A novel video fusion framework using surfacelet transform,” Optics Communications, vol. 285, no. 13-14, pp. 3032–3041, 2012.
- V. Petrović and C. Xydeas, “On the effects of sensor noise in pixel-level image fusion performance,” in Proceedings of the 3rd International Conference on Information Fusion (FUSION '00), vol. 2, pp. WEC3/14–WEC3/19, July 2000.