Research Article  Open Access
Hui Huang, Xi’an Feng, Jionghui Jiang, "Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features", Journal of Electrical and Computer Engineering, vol. 2017, Article ID 6807473, 9 pages, 2017. https://doi.org/10.1155/2017/6807473
Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features
Abstract
According to the pros and cons of the contourlet transform and multimodality medical imaging, we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with regional image features. The most significant coefficient bands of the sparse contourlet coefficient matrix are retained by nonlinear approximation. Low-frequency and high-frequency regional features are then used to fuse the medical images. The results suggest that the proposed algorithm improves the visual effect and quality of medical image fusion, as well as image denoising and enhancement.
1. Introduction
As an analytical probe, medical imaging techniques have attracted ever-increasing interest for problems in clinical diagnosis. The list of possible applications of X-ray, ultrasound, CT, MRI, SPECT, and PET in clinical diagnosis continues to grow and diversify. Although these imaging technologies have given clinicians an unprecedented toolbox to aid clinical decision-making, advances in the fusion of complementary morphological and functional information retrieved from different imaging modalities could enable physicians to identify human anatomy, physiology, and pathology, and to detect diseases at an even earlier stage.
Recently, there has been active research on medical image fusion technology for clinical applications. Many researchers have proposed image fusion algorithms that achieve good results, such as the Laplacian pyramid transform [1], the contrast pyramid transform [2, 3], techniques based on the morphological pyramid transform [4], techniques based on the gradient pyramid transform [5], the wavelet transform [6–8], and the Ridgelet and Curvelet transforms. Do and Vetterli proposed the contourlet transform in 2002, a "real" two-dimensional image representation that builds on wavelet multiscale analysis and is implemented as a Pyramidal Directional Filter Bank (PDFB). As a multiscale geometric analysis tool, the contourlet transform retains the excellent spatial and frequency domain localization properties of wavelet analysis while adding multidirectional, multiscale characteristics and good anisotropy, making it well suited to describing the geometric features of an image [9, 10]. Additionally, the contourlet transform can represent image edges, such as curves and straight lines, sparsely, using a small number of significant coefficients. After contourlet transformation, the image energy becomes more concentrated, which is conducive to the tracking and analysis of important image features. The contourlet transform can decompose an image in any direction and at any scale, which is key to accurately describing image contours and directional texture information. To date, many scholars have applied the contourlet transform to image fusion and reported good results, particularly fusion combined with image characteristics of the contourlet transform [11, 12], nonsubsampled contourlet image fusion [13, 14], and fusion based on nonlinear approximation of the contourlet transform [15, 16].
Based on the analysis of a large number of contourlet-transform image fusion techniques, this paper proposes an image fusion algorithm that combines nonlinear approximation of the contourlet transform with regional image features. First, contourlet decomposition is employed to separate an image into high-frequency and low-frequency subbands. The low-frequency coefficients are retained, and a nonlinear approximation keeps only the most significant high-frequency coefficients. Then, the low-frequency coefficients and the most significant high-frequency coefficients are combined via image fusion: in the low-frequency region, a window coefficient matrix is used to calculate the regional energy near each center point and choose a reasonable fusion coefficient, while for the most significant high-frequency coefficients, the fusion rule of the salience/match measure with a threshold is employed. CT and MRI image simulations and experimental results indicate that the proposed method achieves better fusion performance and visual effects than the wavelet transform and traditional contourlet fusion methods.
2. Nonlinear Approximation of Contourlet Transform Algorithm
A contourlet transform model of the filter bank can be extended to the continuous square-integrable function space $L^2(\mathbb{R}^2)$ [17, 18]. In the continuous-domain contourlet transform, $L^2(\mathbb{R}^2)$ is decomposed into a multiscale, multidirectional subspace sequence by the use of an iterated filter bank, as shown in the following equation:

$$L^2(\mathbb{R}^2) = V_{j_0} \oplus \bigoplus_{j \le j_0} W_j. \tag{1}$$
The definition of the spaces $V_j$ and $W_j$ is consistent with wavelet decomposition [19]. $V_j$ is the approximation space: the scaling function $\phi$ provides an approximation component at the scale represented by $2^j$, generated by dilating and translating the scaling function into the orthogonal basis $\{\phi_{j,n}\}$. $W_j$ is decomposed into directional subspaces $W_{j,k}$, expressed as $W_j = \bigoplus_{k=0}^{2^{l_j}-1} W_{j,k}$. The space $W_{j,k}$ is defined by the rectangular frame $\{\rho_{j,k,n}\}$, which belongs to $W_j$, as shown in the following equation:

$$\rho_{j,k,n}(t) = \sum_{m \in \mathbb{Z}^2} g_k[m - S_k n]\, \mu_{j,m}(t). \tag{2}$$
In Formula (2), $g_k$ is an analysis filter of the PDFB. The sampling matrix $S_k$ can be expressed as follows:

$$S_k = \begin{cases} \operatorname{diag}\bigl(2^{l-1}, 2\bigr), & 0 \le k < 2^{l-1}, \\ \operatorname{diag}\bigl(2, 2^{l-1}\bigr), & 2^{l-1} \le k < 2^{l}. \end{cases} \tag{3}$$
In Formula (3), the parameter $k$ directly determines the orientation of the subband, that is, whether it is horizontally or vertically biased. According to multiresolution analysis theory, $\mu_{j,m}$ can be obtained from the original scaling function and its translates.
According to the theory described above, $\rho_{j,k,n}$ is a continuous-domain contourlet, where $j$, $k$, and $n$ represent the scale, direction, and position parameters of the contourlet, respectively.
Given a set of basis functions $\{\psi_i\}$ in a nonlinear approximation of the contourlet transform, a function $f$ can be expanded as $f = \sum_i c_i \psi_i$. Then the $M$ coefficients with the largest absolute values are used to approximate the original function, expressed as $f_M = \sum_{i \in I_M} c_i \psi_i$, where $I_M$ is the index set of the $M$ largest-magnitude coefficients and $f_M$ represents the $M$-term nonlinear approximation of the function [20].
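To make the $M$-term selection concrete, here is a minimal numpy sketch (not the authors' code; the function name and the sample subband are illustrative) that zeroes all but the largest-magnitude fraction of coefficients in one high-frequency subband:

```python
import numpy as np

def nonlinear_approximation(coeffs: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep only the largest-magnitude fraction of transform coefficients.

    `coeffs` stands in for one high-frequency contourlet subband; the
    retained coefficients form the M-term nonlinear approximation.
    """
    flat = np.abs(coeffs).ravel()
    m = max(1, int(keep_ratio * flat.size))   # number of coefficients kept
    threshold = np.partition(flat, -m)[-m]    # m-th largest magnitude
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

# Example: keep roughly the 30% most significant coefficients of a subband.
band = np.array([[3.0, -0.2, 0.1],
                 [0.05, -4.0, 0.3],
                 [1.5, 0.0, -2.0]])
sparse = nonlinear_approximation(band, 0.3)
```

In practice this thresholding is applied per high-frequency subband after contourlet decomposition, while the low-frequency subband is left untouched, as the fusion algorithm in Section 3 requires.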
3. Feature Matching Algorithm
3.1. Low-Frequency Fusion Algorithm
The low-frequency subband retains the profile information of the original image, so the low-frequency region should be processed in a way that preserves these profile characteristics as much as possible. In this paper, a window coefficient matrix is employed to calculate the energy in the region around each center point, which not only takes regional factors into account but also retains directivity and highlights the central pixel. The energy of the low-frequency region can be defined as follows:

$$E_X(x, y) = \sum_{m} \sum_{n} w(m, n)\, \bigl[C_X(x + m, y + n)\bigr]^2, \quad X \in \{A, B\}.$$
Here $(x, y)$ represents the center of the neighborhood, $w$ is the area coefficient matrix, $C_A$ and $C_B$ are the low-frequency subband coefficients of the two source images, and $C_F$ is the low-frequency subband coefficient of the fused image. According to the rule of low-frequency regional energy fusion, the regional energies $E_A(x, y)$ and $E_B(x, y)$, centered at the corresponding positions of the two source low-frequency subimages, are calculated first. The fused coefficient is then taken from the source whose region has the larger energy:

$$C_F(x, y) = \begin{cases} C_A(x, y), & E_A(x, y) \ge E_B(x, y), \\ C_B(x, y), & E_A(x, y) < E_B(x, y). \end{cases}$$
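As an illustration of the rule above, the following numpy sketch (our own; the 3 × 3 window coefficient matrix is a hypothetical center-weighted choice, since the paper does not list its exact weights) computes the weighted regional energy and selects, per position, the source coefficient whose region is more energetic:

```python
import numpy as np

# Hypothetical 3x3 window coefficient matrix: weights the regional energy
# toward the central pixel while still accounting for its neighbourhood.
W = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0

def regional_energy(band: np.ndarray) -> np.ndarray:
    """Weighted sum of squared coefficients over each 3x3 neighbourhood."""
    padded = np.pad(band, 1, mode="edge")
    energy = np.zeros_like(band, dtype=float)
    for dy in range(3):
        for dx in range(3):
            energy += W[dy, dx] * padded[dy:dy + band.shape[0],
                                         dx:dx + band.shape[1]] ** 2
    return energy

def fuse_lowpass(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Pick, per pixel, the source coefficient with the larger regional energy."""
    ea, eb = regional_energy(band_a), regional_energy(band_b)
    return np.where(ea >= eb, band_a, band_b)
```

Edge padding keeps the output the same size as the subband; any other boundary handling with the same window would serve equally well.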
3.2. High-Frequency Fusion Algorithm
This paper fuses the high-frequency coefficients with the rules of the salience/match measure with a threshold. The local energy of the high-frequency region [21] can be defined as follows:

$$E_l^{j}(x, y) = \sum_{m} \sum_{n} \bigl[C_l^{j}(x + m, y + n)\bigr]^2.$$
Here $(x, y)$ represents the position of the high-frequency coefficient $C_l^{j}(x, y)$ of image $l \in \{A, B\}$ at decomposition level $j$, and $(m, n)$ ranges over a neighborhood window of the image (typically 3 × 3 or 5 × 5 pixels). The sum of the squared high-frequency coefficients over this neighborhood represents the local energy $E_l^{j}(x, y)$ of the point $(x, y)$. The match degree of the point $(x, y)$ is defined as follows:

$$M_{AB}^{j}(x, y) = \frac{2 \sum_{m} \sum_{n} C_A^{j}(x + m, y + n)\, C_B^{j}(x + m, y + n)}{E_A^{j}(x, y) + E_B^{j}(x, y)}.$$
The fusion rule for the high-frequency coefficients is defined as follows. The match degree, which measures how well the feature information at corresponding positions of the two original images A and B agrees, determines the proportion contributed by each original image. The fused coefficient at the point $(x, y)$ is determined by the following rules:

(1) If $M_{AB}^{j}(x, y) < T$, then

$$C_F^{j}(x, y) = \begin{cases} C_A^{j}(x, y), & E_A^{j}(x, y) \ge E_B^{j}(x, y), \\ C_B^{j}(x, y), & E_A^{j}(x, y) < E_B^{j}(x, y). \end{cases}$$

(2) If $M_{AB}^{j}(x, y) \ge T$, then

$$C_F^{j}(x, y) = \begin{cases} w_{\max} C_A^{j}(x, y) + w_{\min} C_B^{j}(x, y), & E_A^{j}(x, y) \ge E_B^{j}(x, y), \\ w_{\min} C_A^{j}(x, y) + w_{\max} C_B^{j}(x, y), & E_A^{j}(x, y) < E_B^{j}(x, y), \end{cases}$$

with

$$w_{\min} = \frac{1}{2} - \frac{1}{2}\left(\frac{1 - M_{AB}^{j}(x, y)}{1 - T}\right), \qquad w_{\max} = 1 - w_{\min}.$$

In the rules listed above, $M_{AB}^{j}$ represents the match degree and $T$ represents the matching threshold. When $M_{AB}^{j}(x, y) < T$, we take the coefficient with the larger local energy, $E_A^{j}$ or $E_B^{j}$, as the fused high-frequency coefficient. When $M_{AB}^{j}(x, y) \ge T$, we take a weighted combination, where the weights $w_{\min}$ and $w_{\max}$ depend on the degree of matching and satisfy $w_{\min} + w_{\max} = 1$. The calculation of the feature matching rules exhibits good locality, because the fused high-frequency coefficient at $(x, y)$ is determined only by the coefficient values contained in the neighborhood of the point $(x, y)$.
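A compact numpy sketch of the salience/match rule described above (our own illustration, assuming unweighted 3 × 3 windows and the threshold T = 0.75; function names are ours) could look like this:

```python
import numpy as np

def _local_sum(img: np.ndarray) -> np.ndarray:
    """Sum over each 3x3 neighbourhood (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3))

def fuse_highpass(a: np.ndarray, b: np.ndarray, t: float = 0.75) -> np.ndarray:
    """Salience/match fusion of one pair of high-frequency subbands.

    Below the match threshold `t`, the locally more energetic coefficient
    wins; at or above it, the two are blended with weights w_min + w_max = 1.
    """
    ea, eb = _local_sum(a * a), _local_sum(b * b)        # local energies
    match = 2.0 * _local_sum(a * b) / (ea + eb + 1e-12)  # match degree
    w_min = 0.5 - 0.5 * (1.0 - match) / (1.0 - t)        # match-dependent weight
    w_max = 1.0 - w_min
    a_stronger = ea >= eb
    selected = np.where(a_stronger, a, b)                # winner-take-all branch
    blended = np.where(a_stronger, w_max * a + w_min * b,
                                   w_max * b + w_min * a)
    return np.where(match < t, selected, blended)
```

The small constant in the denominator only guards against division by zero in flat regions; it does not change the rule.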
4. Experimental Results and Analysis
The CT and MRI image fusion simulations were implemented on a PC with a 2.4 GHz Pentium IV processor and 4 GB of RAM, using MATLAB 7.0 as the development platform. After the nonlinear approximation contourlet transform was performed, a 3 × 3 feature region was calculated in the high-frequency and low-frequency regions, and the high-frequency region match threshold was set to 0.75, as proposed by Burt and Kolczynski [21].
Figure 1 depicts the results of the nonlinear approximation contourlet transform performed on an MRI image. The various proportions of nonlinear approximation retain the most significant coefficients in the high-frequency subbands. Table 1 describes the MRI image feature coefficients and the most significant coefficients at various approximation proportions.

(a) MRI image
(b) 10% coefficients
(c) 30% coefficients
(d) 50% coefficients
Figure 2 depicts the feature-matching fusion of MRI and CT images based on the nonlinear approximation contourlet transform. The low-frequency region is kept constant while nonlinear approximation retains the significant coefficients in the high-frequency subbands, and the images are then fused according to their regional characteristics. This paper quantitatively analyzes the fusion effects using the indicators of standard deviation, correlation coefficient, entropy, and mutual information [22, 23].
(a) CT image
(b) MRI image
(c) 10% coefficients
(d) 30% coefficients
(e) 50% coefficients
(f) 70% coefficients
(g) 90% coefficients
4.1. Standard Deviation
The standard deviation (STD) reflects the contrast of an image: the larger the value, the clearer the edge contours. The definition of STD is given as follows:

$$\mathrm{STD} = \sqrt{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \bigl[F(i, j) - \mu\bigr]^2}.$$
In the formula, $F$ represents the fused result image of size $M \times N$, $F(i, j)$ represents the gray value of the image at pixel $(i, j)$, and $\mu$ represents the mean value of $F$.
4.2. Correlation Coefficients
The correlation coefficient (CC) is a measure of the similarity of two images: the greater the correlation coefficient, the better the fusion effect. CC is defined as

$$\mathrm{CC}(A, F) = \frac{\sum_{i}\sum_{j} \bigl[A(i, j) - \mu_A\bigr]\bigl[F(i, j) - \mu_F\bigr]}{\sqrt{\sum_{i}\sum_{j} \bigl[A(i, j) - \mu_A\bigr]^2 \sum_{i}\sum_{j} \bigl[F(i, j) - \mu_F\bigr]^2}},$$

where $\mu_A$ and $\mu_F$ are the mean values of the source image $A$ and the fused image $F$, respectively.
4.3. Entropy
Entropy (EN) measures the amount of information retained by the fused image. The larger the entropy value, the more information the result image contains. EN is defined as

$$\mathrm{EN} = -\sum_{i=0}^{L-1} p_i \log_2 p_i,$$
in which $p_i$ is defined as the normalized histogram value of gray level $i$ and $L$ is the number of gray levels (typically 256).
4.4. Mutual Information
Mutual information (MI) measures the mutual correlation or similarity between the input images and the fused image. The higher the mutual information of the fused image, the more information is extracted from the original images and the better the fusion effect. For the two input discrete image signals $A$ and $B$ and the fused image $F$, MI is defined as

$$\mathrm{MI} = \mathrm{MI}_{AF} + \mathrm{MI}_{BF}, \qquad \mathrm{MI}_{XF} = H(X) + H(F) - H(X, F),$$
in which $H(X) = -\sum_{x} p_X(x) \log_2 p_X(x)$ is an entropy function, while $H(X, F) = -\sum_{x}\sum_{f} p_{XF}(x, f) \log_2 p_{XF}(x, f)$ is a joint entropy function; $p_{XF}$ is defined as the normalized joint histogram of $X$ and $F$, with $X \in \{A, B\}$.
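The four evaluation indicators can be sketched in numpy as follows (our own illustrative implementations, assuming 8-bit gray levels, i.e., $L = 256$; function names are ours):

```python
import numpy as np

def std_metric(f: np.ndarray) -> float:
    """Standard deviation of the fused image (contrast measure)."""
    return float(np.sqrt(np.mean((f - f.mean()) ** 2)))

def correlation(a: np.ndarray, f: np.ndarray) -> float:
    """Correlation coefficient between a source image and the fused image."""
    da, df = a - a.mean(), f - f.mean()
    return float((da * df).sum() / np.sqrt((da ** 2).sum() * (df ** 2).sum()))

def entropy(f: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the normalized gray-level histogram."""
    hist, _ = np.histogram(f, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())

def mutual_information(a: np.ndarray, f: np.ndarray, bins: int = 256) -> float:
    """MI(A;F) = H(A) + H(F) - H(A,F), via the joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), f.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    p = joint / joint.sum()
    pa, pf = p.sum(axis=1), p.sum(axis=0)  # marginal distributions
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pa[:, None] * pf[None, :])[nz])).sum())
```

For the fusion experiments, MI is evaluated against both sources and summed, MI = MI(A, F) + MI(B, F), matching the definition above.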
As shown in Table 2, the standard deviation of the fused image reaches its maximum at 50% significant coefficients, which corresponds to the sharpest profile edges and the highest resolution. A possible explanation is that too few significant coefficients extract fewer edge contours, whereas too many significant coefficients lead to the extraction of excess noise. The correlation coefficient, mutual information, and entropy vary little across the different proportions of significant coefficients. The primary reason is that the low-frequency subband, which retains the profile information of the original image, is not subjected to nonlinear approximation. The results indicate that fused image quality has a nonlinear relationship to the proportion of significant coefficients; when the proportion is 50%, the quality of image fusion is highest.

In order to verify the effectiveness of the algorithm, we compare our result with those of other researchers. Figure 3 shows the fusion results for MRI and CT images with different algorithms: Figure 3(a) is the result of a fusion algorithm based on the regional features of wavelet coefficients [24], Figure 3(b) is the result of the Haar wavelet transform [25], Figure 3(c) is the result of a coefficient-weighted fusion algorithm after the contourlet transform [19], Figure 3(d) is the fusion result based on the nonsubsampled contourlet transform (NSCT) and regional features [26], and Figure 3(e) is our experimental fusion result, obtained by regional feature matching after 50% nonlinear approximation of the high-frequency contourlet coefficients. In terms of visual assessment, the fused images in Figures 3(a) and 3(b), which are based on wavelet transform methods, have shallow color and insufficiently clear edge profiles, and cannot reflect the detailed information of the source CT and MRI images. In Figure 3(c), with the contourlet transform, the fusion effect is slightly better than that of the wavelet methods and reflects the content of the source images more clearly. In Figures 3(d) and 3(e), the fused image not only inherits the bone tissue of the CT image but also keeps the soft tissue of the MRI image; moreover, the image edge details are more evident after fusion.
(a) CT image
(b) MRI image
(c) Wavelet fusion
(d) Haar fusion
(e) Contourlet fusion
(f) Nonsubsampled contourlet fusion
(g) Nonlinear contourlet fusion
Analyzing the results quantitatively with the indexes shown in Table 3, the STD, CC, EN, and MI values of Figures 3(a) and 3(b), which are based on wavelet transform methods, are relatively small, which shows that information has been lost in the process of image fusion and that there is a notable difference between the fused image and the source images. This demonstrates that the ability of the wavelet transform methods to extract information from the source images is poor and their fusion effect is unsatisfactory. Figure 3(c) obtains more directions in the high-frequency part by using the contourlet transform and handles directional details better than the wavelet transform, so each of its index values is superior to those of wavelet fusion. In Figure 3(d), both the index values and the quality of image fusion are the highest, but the algorithm requires a large amount of computation and is too time-consuming. Figure 3(e) is the result of our nonlinear contourlet algorithm, whose index values outperform the wavelet transform and the contourlet transform while the fused image quality is close to that of the NSCT; moreover, it greatly reduces the amount of calculation and therefore has significant application value.

Figures 4, 5, and 6 show further groups of fusion results for CT, MRI, and PET medical images using the different algorithms; the corresponding performance evaluation indexes are shown in Tables 4, 5, and 6, respectively. Both from the subjective visual perspective and by the objective standards of fused image assessment, the experimental results of our algorithm are better than those of the wavelet and contourlet fusion algorithms. Compared with the nonsubsampled contourlet fusion algorithm, our algorithm greatly improves the computing speed without losing fusion quality.



(a) CT image
(b) MRI image
(c) Wavelet fusion
(d) Haar fusion
(e) Contourlet fusion
(f) Nonsubsampled contourlet fusion
(g) Nonlinear contourlet fusion
(a) CT image
(b) PET image
(c) Wavelet fusion
(d) Haar fusion
(e) Contourlet fusion
(f) Nonsubsampled fusion
(g) Nonlinear contourlet fusion
(a) CT image
(b) MRI image
(c) Wavelet fusion
(d) Haar fusion
(e) Contourlet fusion
(f) Nonsubsampled fusion
(g) Nonlinear contourlet fusion
5. Conclusions
This paper proposes a medical image fusion algorithm based on nonlinear approximation of the contourlet transform and regional features, which combines the nonlinear approximation characteristics of the contourlet transform with a regional feature algorithm for the high- and low-frequency coefficients. First, the algorithm retains the original low-frequency coefficients and applies nonlinear approximation to the high-frequency coefficients in order to extract the most significant coefficients. Second, a window coefficient matrix is used in the low-frequency region to calculate the regional energy near each center point and choose a reasonable fusion coefficient, which improves the retention of the original image's profile information. Finally, the rules of the salience/match measure with a threshold are used to analyze the regional features around each center point and select the fusion coefficients in the high-frequency region from the most significant coefficients, which better fuses image edge contours and texture. Experiments on CT and MRI images show that the fused image quality of our algorithm greatly outperforms the wavelet-coefficient, Haar wavelet transform, and coefficient-weighted contourlet transform methods and is very close to that of the NSCT, while the computational efficiency is greatly improved. The nonlinear contourlet transform has a certain filtering and noise removal capability, in which it outperforms the plain contourlet transform, while the nonlinear feature extraction of the high-frequency components retains the maximum amount of features and greatly improves the computing speed, in which it outperforms the NSCT. In short, the method has advantages in both image quality and computational efficiency, so it can provide more reliable information to doctors.
Competing Interests
The authors declare that there are no competing interests regarding the publication of this paper.
Acknowledgments
This work was supported by the Zhejiang Provincial Natural Science Foundation of China (Grant no. LY15F020033), the National Natural Science Foundation of China (Grant no. 61271414), and the Science and Technology Plan Project of Wenzhou, China (Grant no. Y20160070).
References
 T. Windeatt and R. Ghaderi, "Binary labelling and decision-level fusion," Information Fusion, vol. 2, no. 2, pp. 103–112, 2001.
 D. Rajan and S. Chaudhuri, "Data fusion techniques for super-resolution imaging," Information Fusion, vol. 3, no. 1, pp. 25–38, 2002.
 P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
 A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989.
 A. Toet, L. J. van Ruyven, and J. M. Valeton, "Merging thermal and visual images by a contrast pyramid," Optical Engineering, vol. 28, no. 7, pp. 789–792, 1989.
 A. Toet, "A morphological pyramidal image decomposition," Pattern Recognition Letters, vol. 9, no. 4, pp. 255–261, 1989.
 P. J. Burt, A Gradient Pyramid Basis for Pattern Selective Image Fusion, SID Press, San Jose, Calif, USA, 1992.
 T. Ranchin and L. Wald, "The wavelet transform for the analysis of remotely sensed images," International Journal of Remote Sensing, vol. 14, no. 3, pp. 615–619, 1993.
 L. J. Chipman, T. M. Orr, and L. N. Graham, "Wavelets and image fusion," in Proceedings of the IEEE International Conference on Image Processing, pp. 248–251, Los Alamitos, Calif, USA, October 1995.
 H. Li, B. S. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
 L. Kun, G. Lei, and C. Weiwei, "Regional feature self-adaptive image fusion algorithm based on contourlet transform," Acta Optica Sinica, no. 4, pp. 681–686, 2008.
 K. Zhu and X.-G. He, "Remote sensing images fusion method based on morphology and regional feature of contourlet coefficients," Computer Science, no. 4, pp. 301–305, 2013.
 A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
 F. Pak, H. R. Kanan, and A. Alikhassi, "Breast cancer detection and classification in digital mammography based on non-subsampled contourlet transform (NSCT) and super resolution," Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 89–107, 2015.
 Y. Wu, W. Hou, and S. Wu, "Fabric defect image denoising based on contourlet transform and nonlinear diffusion," Journal of Electronic Measurement & Instrument, vol. 25, no. 8, pp. 665–670, 2011.
 H. Wang, Q. Yang, R. Li, and Z. Yao, "Tunable-Q contourlet transform for image representation," Journal of Systems Engineering and Electronics, vol. 24, no. 1, pp. 147–156, 2013.
 L.-C. Jiao and S. Tan, "Development and prospect of image multiscale geometric analysis," Acta Electronica Sinica, vol. 31, no. 12, pp. 1975–1981, 2003.
 H. Y. Patil, A. G. Kothari, and K. M. Bhurchandi, "Expression invariant face recognition using local binary patterns and contourlet transform," Optik, vol. 127, no. 5, pp. 2670–2678, 2016.
 Z. Xin and C. Weibin, "Medical image fusion based on weighted contourlet transformation coefficients," Journal of Image and Graphics, vol. 19, no. 1, pp. 133–140, 2014.
 X.-C. Xue, S.-Y. Zhang, H.-F. Li, and Y.-F. Gun, "Research on application of contourlet transform for image compression," Control & Automation, no. 25, 2009.
 P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," in Proceedings of the 4th International Conference on Computer Vision, pp. 173–182, IEEE, Berlin, Germany, May 1993.
 L.-M. Hu, J. Gao, and K.-F. He, "Research on quality measures for image fusion," Acta Electronica Sinica, vol. 32, pp. 218–221, 2004.
 G. Qu, D. Zhang, and P. Yan, "Information measure for performance of image fusion," Electronics Letters, vol. 38, no. 7, pp. 313–315, 2002.
 Z.-G. Zhou and X.-H. Wang, "Image fusion algorithm based on the neighboring relation of wavelet coefficients," Computer Science, vol. 36, no. 5, pp. 257–261, 2009.
 L. Min, "Multi-focus image fusion based on morphological Haar wavelet transform," Computer Engineering, vol. 38, no. 23, pp. 211–214, 2012.
 L. Chao, L. Guangyao, T. Yunlan, and X. Xianglong, "Medical images fusion of nonsubsampled contourlet transform and regional feature," Journal of Computer Applications, vol. 33, no. 6, pp. 1727–1731, 2013.
Copyright
Copyright © 2017 Hui Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.