Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features

Hui Huang, Xi'an Feng, Jionghui Jiang

Journal of Electrical and Computer Engineering, vol. 2017, Article ID 6807473, 9 pages, 2017. Research Article | Open Access.

Academic Editor: Panajotis Agathoklis
Received: 26 Jul 2016. Revised: 15 Nov 2016. Accepted: 05 Dec 2016. Published: 24 Jan 2017.


Considering the strengths and weaknesses of the contourlet transform and of multimodality medical imaging, we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with regional image features. The most significant coefficient bands of the sparse contourlet coefficient matrix are retained by nonlinear approximation, and regional features of the low-frequency and high-frequency subbands are then used to fuse the medical images. The results suggest that the proposed algorithm improves the visual quality of fused medical images and also benefits image denoising and enhancement.

1. Introduction

Interest in developing and applying medical imaging techniques to problems in clinical diagnosis continues to grow. The list of possible applications of X-ray, ultrasound, CT, MRI, SPECT, and PET in clinical diagnosis continues to grow and diversify. Although these imaging technologies have given clinicians an unprecedented toolbox to aid in clinical decision-making, advances in fusing the complementary morphological and functional information retrieved from different imaging technologies could enable physicians to identify human anatomy, physiology, pathology, and disease at an even earlier stage.

Recently, there has been active research in the academic community on medical image fusion technology for clinical applications. Many researchers have proposed image fusion algorithms that achieve good results, such as the Laplacian pyramid transform [1], the contrast pyramid transform [2, 3], techniques based on the morphological pyramid transform [4], techniques based on the pyramid gradient transform [5], the wavelet transform [6–8], the Ridgelet transform, and the Curvelet transform. Do and Vetterli proposed the contourlet transform in 2002, a "true" two-dimensional image representation that builds on wavelet multiscale analysis and is implemented as a Pyramidal Directional Filter Bank (PDFB). As a multiscale geometric analysis tool, the contourlet transform retains the excellent spatial and frequency-domain localization properties of wavelet analysis, with the bonus of multidirectional, multiscale characteristics and good anisotropy, making it well suited to describing the geometric characteristics of an image [9, 10]. Additionally, the contourlet transform can express image edges such as curves and straight lines with a small number of coefficients, yielding a sparse representation. After contourlet transformation, the image energy becomes more concentrated, which is conducive to the tracking and analysis of important image features. The contourlet transform can decompose an image in any direction and on any scale, which is key to the accurate description of image contours and directional texture information. To date, many scholars have applied the contourlet transform to image fusion and reported good results, particularly when combining contourlet transform fusion with image characteristics [11, 12], nonsubsampled contourlet image fusion [13, 14], and nonlinear approximation of contourlet transform fusion [15, 16].

This paper proposes an image fusion algorithm, informed by an analysis of a large number of contourlet-based image fusion techniques, that combines nonlinear approximation of the contourlet transform with regional image features. First, contourlet decomposition is employed to separate an image into high-frequency and low-frequency subbands. The low-frequency coefficients are retained, and a nonlinear approximation keeps only the most significant high-frequency coefficients. Then, the low-frequency coefficients and the most significant high-frequency coefficients are combined via image fusion: a window coefficient matrix is used to calculate the regional energy near the center point of the low-frequency region and choose a reasonable fusion coefficient, while the most significant high-frequency coefficients are fused under the rule of the salience/match measure with a threshold. CT and MRI image simulations and experimental results indicate that the proposed method achieves better fusion performance and visual effects than the wavelet transform and traditional contourlet fusion methods.

2. Nonlinear Approximation of Contourlet Transform Algorithm

A contourlet transform modeled by a filter bank can be extended to the space of square-integrable functions $L^2(\mathbb{R}^2)$ [17, 18]. In the continuous-domain contourlet transform, $L^2(\mathbb{R}^2)$ is decomposed into a sequence of multiscale, multidirectional subspaces by the use of an iterated filter bank, as shown in the following equation:

$$L^2(\mathbb{R}^2) = V_{j_0} \oplus \bigoplus_{j \le j_0} W_j \qquad (1)$$

The definition of the spaces $V_j$ and $W_j$ is consistent with wavelet decomposition [19]. $V_j$ is the approximation space: the scaling function provides an approximation component on the scale $2^j$, represented by the orthonormal basis $\{\phi_{j,n}\}_{n\in\mathbb{Z}^2}$ generated by zooming and panning the scaling function $\phi$. The detail space $W_j$ is decomposed into directional subspaces $W_{j,k}^{(l)}$, expressed as $W_j = \bigoplus_{k=0}^{2^l-1} W_{j,k}^{(l)}$. The space $W_{j,k}^{(l)}$ is defined by the directional frame $\{\rho_{j,k,n}^{(l)}\}$, which belongs to $W_j$, as shown in the following equation:

$$\rho_{j,k,n}^{(l)}(t) = \sum_{m\in\mathbb{Z}^2} d_k^{(l)}\bigl[m - S_k^{(l)} n\bigr]\, \mu_{j,m}(t) \qquad (2)$$

In Formula (2), $d_k^{(l)}$ is the impulse response of the $k$-th directional filter of the PDFB, and $\mu_{j,m}$ is the frame of the detail space $W_j$. The sampling matrix $S_k^{(l)}$ can be expressed as follows:

$$S_k^{(l)} = \begin{cases} \operatorname{diag}\bigl(2^{l-1},\, 2\bigr), & 0 \le k < 2^{l-1}\\ \operatorname{diag}\bigl(2,\, 2^{l-1}\bigr), & 2^{l-1} \le k < 2^{l} \end{cases} \qquad (3)$$

In Formula (3), the parameters $k$ and $l$ directly determine the orientation of the subsampling, that is, the horizontal or vertical bias of the directional subband. According to multiresolution analysis theory, the contourlet family can be obtained from the original function $\rho_{j,k}^{(l)}$ and its translations, as shown in the following equation:

$$\rho_{j,k,n}^{(l)}(t) = \rho_{j,k}^{(l)}\bigl(t - 2^{j-1} S_k^{(l)} n\bigr) \qquad (4)$$

According to the theory described above, $\rho_{j,k,n}^{(l)}(t)$ is a continuous-domain contourlet, while $j$, $k$, and $n$ represent the scale, orientation, and position parameters of the contourlet, respectively.

Given a set of basis functions $\{\phi_n\}$ in a nonlinear approximation of the contourlet transform, a function $f$ can be expanded as $f = \sum_n c_n \phi_n$. The $M$ coefficients with the largest absolute values are then used to approximate the original function, expressed as $f_M = \sum_{n\in I_M} c_n \phi_n$, where $I_M$ is the index set of the $M$ coefficients of largest absolute value and $f_M$ represents the $M$-term nonlinear approximation of the function [20].
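This coefficient-selection step can be sketched in Python with NumPy (the function name and the toy subband are illustrative, not from the paper):

```python
import numpy as np

def nonlinear_approximation(coeffs, keep_ratio):
    """Keep only the largest-magnitude fraction of transform
    coefficients and zero out the rest (M-term nonlinear approximation)."""
    flat = coeffs.ravel()
    m = max(1, int(keep_ratio * flat.size))      # M = number of retained coefficients
    idx = np.argpartition(np.abs(flat), -m)[-m:] # indices of the M largest |c_n|
    approx = np.zeros_like(flat)
    approx[idx] = flat[idx]                      # retain only the significant coefficients
    return approx.reshape(coeffs.shape)

# Example: a toy high-frequency subband
band = np.array([[0.1, -5.0, 0.2],
                 [3.0,  0.0, -0.3],
                 [0.4,  2.0, 0.05]])
approx = nonlinear_approximation(band, 0.3)      # keep ~30% of the 9 coefficients
print(np.count_nonzero(approx))                  # → 2
```

Retaining 50% of the coefficients this way corresponds to the best-performing setting in the experiments of Section 4.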

3. Feature Matching Algorithm

3.1. Low-Frequency Fusion Algorithm

The low-frequency subband retains the profile information of the original image, so the low-frequency region is processed in a way that preserves the profile characteristics as much as possible. In this paper, a window coefficient matrix is employed to calculate the energy in the region near the image center point, which not only takes regional factors into account but also retains directivity and highlights the central pixel. The energy of the low-frequency region can be defined as follows:

$$E_X(i,j) = \sum_{m}\sum_{n} w(m,n)\,\bigl[C_X(i+m,\, j+n)\bigr]^2$$

Here $(i,j)$ represents the center of the neighborhood, $C_X(i+m, j+n)$ represents the low-frequency coefficients in the proximity panel around $(i,j)$, $w$ is the area coefficient matrix, and $C_F$ is the low-frequency subband coefficient of the fused image. According to the energy-based fusion rule for the low-frequency region, the energies $E_A(i,j)$ and $E_B(i,j)$ of the regions centered at $(i,j)$ in the two adjacent low-frequency subimages must be calculated first. The fused coefficient is then taken from the subimage whose region has the larger absolute energy:

$$C_F(i,j) = \begin{cases} C_A(i,j), & E_A(i,j) \ge E_B(i,j)\\ C_B(i,j), & E_A(i,j) < E_B(i,j) \end{cases}$$
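A minimal sketch of this regional-energy selection rule follows. The specific 3 × 3 weight matrix is an assumption (the paper does not list its values, only that it highlights the central pixel), and borders are simply copied from the first input:

```python
import numpy as np

# Area coefficient matrix: weights the 3x3 neighborhood and emphasizes
# the central pixel (assumed values; the paper does not state them).
W = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0

def region_energy(c, i, j):
    """Weighted energy of the 3x3 region centered at (i, j)."""
    patch = c[i-1:i+2, j-1:j+2]
    return float(np.sum(W * patch**2))

def fuse_lowfreq(ca, cb):
    """Per pixel, keep the low-frequency coefficient whose surrounding
    region has the larger weighted energy."""
    fused = ca.copy()
    rows, cols = ca.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            if region_energy(cb, i, j) > region_energy(ca, i, j):
                fused[i, j] = cb[i, j]
    return fused
```

For example, fusing a flat subband of ones with a flat subband of threes keeps the higher-energy coefficients (the threes) in the interior.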

3.2. High-Frequency Fusion Algorithm

This paper fuses the high-frequency coefficients using the rules of the salience/match measure with a threshold. The local energy of the high-frequency region [21] can be defined as follows:

$$E_X^{l}(i,j) = \sum_{(m,n)\in N} \bigl[C_X^{l}(i+m,\, j+n)\bigr]^2$$

$C_X^{l}(i,j)$ represents the high-frequency coefficient of image $X$ at position $(i,j)$ on the decomposition level $l$. The neighborhood window $N$ (typically 3 × 3 or 5 × 5 pixels) defines the region of the image over which the sum is taken. The sum of the squares of the high-frequency coefficients in the neighborhood gives the local energy of the point $(i,j)$. The match degree of the point $(i,j)$ is defined as follows:

$$M_{AB}^{l}(i,j) = \frac{2\sum_{(m,n)\in N} C_A^{l}(i+m,\, j+n)\, C_B^{l}(i+m,\, j+n)}{E_A^{l}(i,j) + E_B^{l}(i,j)}$$

The fusion rule for the high-frequency coefficients is defined by the following formulas.

The match degree, which measures the agreement of the feature information at the corresponding position of the two original images A and B, determines the proportion of characteristic information taken from each original image. The fused coefficient at the point $(i,j)$ is determined by the following rules:

(1) If $M_{AB}^{l}(i,j) < T$, then

$$C_F^{l}(i,j) = \begin{cases} C_A^{l}(i,j), & E_A^{l}(i,j) \ge E_B^{l}(i,j)\\ C_B^{l}(i,j), & E_A^{l}(i,j) < E_B^{l}(i,j) \end{cases}$$

(2) If $M_{AB}^{l}(i,j) \ge T$, then

$$C_F^{l}(i,j) = \begin{cases} W_1\, C_A^{l}(i,j) + W_2\, C_B^{l}(i,j), & E_A^{l}(i,j) \ge E_B^{l}(i,j)\\ W_2\, C_A^{l}(i,j) + W_1\, C_B^{l}(i,j), & E_A^{l}(i,j) < E_B^{l}(i,j) \end{cases}$$

In the rules listed above, $M_{AB}^{l}(i,j)$ represents the match degree, and $T$ represents the matching threshold. When $M_{AB}^{l}(i,j) < T$, we take the coefficient with the larger local energy ($E_A^{l}$ or $E_B^{l}$) as the fused high-frequency coefficient. When $M_{AB}^{l}(i,j) \ge T$, we take the weighted value, where the weights $W_1$ and $W_2$ are correlated with the degree of matching and satisfy $W_1 + W_2 = 1$; in the salience/match scheme of [21], $W_1 = \frac{1}{2} + \frac{1}{2}\bigl(\frac{1 - M_{AB}^{l}}{1 - T}\bigr)$ and $W_2 = 1 - W_1$. Clearly, the calculation of the feature matching rules exhibits good locality, because the fused high-frequency coefficient at $(i,j)$ is determined only by the coefficient values contained in the neighborhood of the point $(i,j)$.
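The salience/match rule can be sketched as follows, in the spirit of Burt and Kolczynski's scheme (a hedged sketch with NumPy, not the paper's exact implementation; the weight formula is the standard one from that scheme):

```python
import numpy as np

def fuse_highfreq(ca, cb, threshold=0.75, win=1):
    """Salience/match-measure fusion of two high-frequency subbands:
    pick the salient coefficient when the match degree is low, and
    blend with match-dependent weights when it is high."""
    rows, cols = ca.shape
    fused = np.zeros_like(ca)
    for i in range(rows):
        for j in range(cols):
            sl = np.s_[max(0, i - win):i + win + 1, max(0, j - win):j + win + 1]
            pa, pb = ca[sl], cb[sl]
            ea, eb = float(np.sum(pa**2)), float(np.sum(pb**2))
            denom = ea + eb
            match = 2.0 * float(np.sum(pa * pb)) / denom if denom > 0 else 1.0
            if match < threshold:                 # weak match: keep the salient coefficient
                fused[i, j] = ca[i, j] if ea >= eb else cb[i, j]
            else:                                 # strong match: weighted combination
                wmax = 0.5 + 0.5 * (1.0 - match) / (1.0 - threshold)
                wmin = 1.0 - wmax
                if ea >= eb:
                    fused[i, j] = wmax * ca[i, j] + wmin * cb[i, j]
                else:
                    fused[i, j] = wmin * ca[i, j] + wmax * cb[i, j]
    return fused
```

With the default threshold of 0.75, opposite-sign coefficients (match degree near −1) trigger the selection branch, while identical coefficients (match degree 1) are averaged with equal weights.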

4. Experimental Results and Analysis

The CT and MRI image fusion simulations were implemented on a Pentium IV 2.4 GHz PC with 4 GB RAM, using MATLAB 7.0 as the development platform. After the nonlinear approximation contourlet transform was performed, a 3 × 3 feature region was calculated in the high-frequency and low-frequency regions, and the high-frequency match threshold was set to 0.75, as proposed by Burt and Kolczynski [21].

Figure 1 depicts the results of nonlinear approximation contourlet transform performed on an MRI image. The various proportions of nonlinear approximation retain the most significant coefficients at high-frequency subbands. Table 1 describes the MRI image feature coefficients and the most significant coefficients at various approximation proportions.

Table 1: Number of significant coefficients of the MRI image at various approximation proportions.

                         | MRI image | 10% coefficients | 30% coefficients | 50% coefficients
Significant coefficients | 86016     | 6554             | 19661            | 32768
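The retained counts in Table 1 are consistent with rounding $M = p \cdot N$ for $N = 65536$ high-frequency coefficient positions ($N$ is an inference from the table, not a figure stated in the text):

```python
# Cross-check Table 1: the 10%/30%/50% counts match M = round(p * N)
# for N = 65536 high-frequency coefficients (N is an assumption
# inferred from the 50% entry, 32768 = 65536 / 2).
N = 65536
for p, reported in [(0.10, 6554), (0.30, 19661), (0.50, 32768)]:
    assert round(p * N) == reported
print("Table 1 counts are consistent")
```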

Figure 2 depicts the matching fusion of MRI and CT image characteristics based on nonlinear approximation contourlet transform. A constant low-frequency region is maintained after nonlinear approximation of significant coefficients at high-frequency subbands, and the images are then fused according to their regional characteristics. This paper quantitatively analyzes the effects of fusion based on the indicators of standard deviation, correlation coefficients, entropy, and mutual information [22, 23].

4.1. Standard Deviation

The standard deviation (STD) reflects the contrast of an image: the larger the value, the clearer the edge contours. The definition of STD is given as follows:

$$\mathrm{STD} = \sqrt{\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N} \bigl(F(i,j) - \mu\bigr)^2}$$

In the formula, $F$ represents the fused result image, $M\times N$ represents the size of the image $F$, and $\mu$ represents the mean value of $F$.

4.2. Correlation Coefficients

The correlation coefficient (CC) is a measure of the similarity of two images: the greater the correlation coefficient, the better the fusion effect. For a source image $A$ with mean $\bar{A}$ and the fused image $F$ with mean $\bar{F}$, CC is defined as

$$\mathrm{CC} = \frac{\sum_{i,j}\bigl(A(i,j) - \bar{A}\bigr)\bigl(F(i,j) - \bar{F}\bigr)}{\sqrt{\sum_{i,j}\bigl(A(i,j) - \bar{A}\bigr)^2\,\sum_{i,j}\bigl(F(i,j) - \bar{F}\bigr)^2}}$$

4.3. Entropy

Entropy (EN) measures the amount of information maintained by the fused image: the larger the entropy value, the more information the result image contains. EN is defined as

$$\mathrm{EN} = -\sum_{l=0}^{L-1} p_l \log_2 p_l$$

in which $p_l$ is defined as the normalized histogram of the fused image, $l$ is the gray level, and $L$ is the total number of gray levels.

4.4. Mutual Information

Mutual information (MI) can be used to measure the mutual correlation, or similarity, between the input images and the fused image. The higher the mutual information of the fused image, the more information has been extracted from the original images and the better the fusion effect. For the two input discrete image signals $A$ and $B$ and the fused image $F$, MI is defined as

$$\mathrm{MI} = \mathrm{MI}_{AF} + \mathrm{MI}_{BF}, \qquad \mathrm{MI}_{XF} = H(X) + H(F) - H(X,F)$$

in which $H(\cdot)$ is an entropy function, so $H(X)$ and $H(F)$ are the entropies of an input image and the fused image, while $H(X,F)$ is their joint entropy, $H(X,F) = -\sum_{x,f} p_{XF}(x,f)\log_2 p_{XF}(x,f)$, where $p_{XF}$ is defined as the normalized joint histogram of $X$ and $F$.
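The evaluation metrics above can be sketched with NumPy as histogram-based estimates for 8-bit images (function names are illustrative):

```python
import numpy as np

def std_metric(f):
    """Standard deviation of the fused image (contrast measure)."""
    return float(np.sqrt(np.mean((f - f.mean()) ** 2)))

def entropy(img, levels=256):
    """Shannon entropy from the normalized gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(x, f, levels=256):
    """MI(X, F) = H(X) + H(F) - H(X, F), via the joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), f.ravel(),
                                 bins=levels, range=[[0, levels], [0, levels]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
    return float(hx + hy - hxy)
```

As a sanity check, an image that is half gray level 0 and half gray level 1 has entropy 1 bit, and its mutual information with itself equals its own entropy.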

As shown in Table 2, the standard deviation of the fused image reaches its maximum when 50% of the significant coefficients are retained, which corresponds to the sharpest profile edges and the highest resolution. This may be because too few significant coefficients result in less extraction of edge contours, while too many significant coefficients lead to the extraction of excess noise. The correlation coefficient, mutual information, and entropy vary little with the proportion of significant coefficients. The primary reason is that the low-frequency subband, which retains the profile information of the original image, is not subjected to nonlinear approximation. The results indicate that fused image quality has a nonlinear relationship with the proportion of significant coefficients; when 50% of the significant coefficients are retained, the quality of image fusion is highest.

Table 2: Objective evaluation of the fused images at various proportions of significant coefficients.

                             | Standard deviation | Correlation coefficients | Entropy | Mutual information
10% significant coefficients | 50.48              | 0.66                     | 7.68    | 4.51
30% significant coefficients | 56.32              | 0.69                     | 7.68    | 4.53
50% significant coefficients | 63.16              | 0.70                     | 7.70    | 4.54
70% significant coefficients | 59.32              | 0.71                     | 7.70    | 4.55
90% significant coefficients | 57.16              | 0.73                     | 7.72    | 4.55

In order to verify the effectiveness of the algorithm, we compare our results with those of other researchers. Figure 3 shows the fusion results for MRI and CT images with different algorithms: Figure 3(a) is the result of the fusion algorithm based on regional features of wavelet coefficients [24], Figure 3(b) is the result based on the Haar wavelet transform [25], Figure 3(c) is the result of the coefficient-weighted fusion algorithm after contourlet transform [19], Figure 3(d) is the fusion result based on the nonsubsampled contourlet transform (NSCT) and regional features [26], and Figure 3(e) is our result, obtained by regional feature matching after 50% nonlinear approximation of the high-frequency contourlet coefficients. On visual assessment, the fused images in Figures 3(a) and 3(b), which are based on the wavelet transform methods, are shallow in color, and their edge profiles are not clear enough to reflect the detailed information of the source CT and MRI images. In Figure 3(c), with the contourlet transform, the fusion effect is slightly better than with the wavelet methods and reflects the content of the source images more clearly. In Figures 3(d) and 3(e), the fused image not only inherits the bone tissue of the CT image but also keeps the soft tissue of the MRI image; moreover, the edge details are more pronounced after fusion.

Analyzing the results quantitatively with the indexes shown in Table 3, the STD, CC, EN, and MI corresponding to Figures 3(a) and 3(b), which are based on the wavelet transform methods, are relatively small, which shows that information has been lost in the fusion process and that there is a notable difference between the fused image and the source images. This demonstrates that the ability of the wavelet transform methods to extract information from the source images is poor and the fusion effect is not good. Figure 3(c) obtains more directions in the high-frequency part by using the contourlet transform and handles directional details better than the wavelet transform, so each of its indexes is superior to wavelet fusion. In Figure 3(d), both the index values and the quality of image fusion are the highest, but the algorithm requires a large amount of computation and is too time-consuming. Figure 3(e) is the result of our nonlinear contourlet algorithm, whose index values outperform the wavelet and contourlet transforms while the fused image quality is close to that of the NSCT; moreover, it greatly reduces the amount of calculation and therefore has significant application value.

Table 3: Performance comparison of the fusion algorithms for the images in Figure 3.

                                | Standard deviation | Correlation coefficients | Entropy | Mutual information | Running time
Wavelet fusion                  | 50.48              | 0.64                     | 6.78    | 4.16               | 1.608
Haar fusion                     | 58.60              | 0.66                     | 7.59    | 4.47               | 1.901
Contourlet fusion               | 59.32              | 0.72                     | 7.63    | 4.52               | 3.021
Nonsubsampled contourlet fusion | 65.01              | 0.73                     | 7.72    | 4.59               | 5.561
Nonlinear contourlet fusion     | 63.16              | 0.72                     | 7.70    | 4.55               | 1.254

Figures 4, 5, and 6 show three further groups of fusion results for CT and MRI medical images using the different algorithms; the performance evaluation indexes are shown in Tables 4, 5, and 6, respectively. Both from the subjective visual perspective and from the objective evaluation of the fused images, the experimental results of our algorithm are better than those of the wavelet and contourlet fusion algorithms. Compared with the nonsubsampled contourlet fusion algorithm, our algorithm greatly improves the computing speed without losing fusion quality.

Table 4: Performance comparison of the fusion algorithms for the images in Figure 4.

                                | Standard deviation | Correlation coefficients | Entropy | Mutual information | Running time
Wavelet fusion                  | 80.14              | 0.75                     | 2.24    | 2.15               | 2.013
Haar fusion                     | 80.27              | 0.77                     | 3.54    | 2.22               | 2.408
Contourlet fusion               | 81.30              | 0.80                     | 4.39    | 2.27               | 4.211
Nonsubsampled contourlet fusion | 81.48              | 0.84                     | 5.23    | 3.32               | 9.054
Nonlinear contourlet fusion     | 81.39              | 0.81                     | 5.14    | 2.91               | 1.701

Table 5: Performance comparison of the fusion algorithms for the images in Figure 5.

                                | Standard deviation | Correlation coefficients | Entropy | Mutual information | Running time
Wavelet fusion                  | 53.11              | 0.72                     | 6.23    | 1.96               | 1.752
Haar fusion                     | 57.84              | 0.75                     | 6.30    | 2.03               | 1.805
Contourlet fusion               | 60.82              | 0.83                     | 7.52    | 2.10               | 3.501
Nonsubsampled contourlet fusion | 70.06              | 0.96                     | 8.04    | 2.07               | 7.526
Nonlinear contourlet fusion     | 62.97              | 0.85                     | 7.69    | 2.12               | 1.055

Table 6: Performance comparison of the fusion algorithms for the images in Figure 6.

                                | Standard deviation | Correlation coefficients | Entropy | Mutual information | Running time
Wavelet fusion                  | 44.97              | 0.73                     | 5.78    | 4.80               | 2.805
Haar fusion                     | 49.72              | 0.76                     | 5.88    | 4.95               | 2.932
Contourlet fusion               | 51.26              | 0.80                     | 6.35    | 5.42               | 4.921
Nonsubsampled contourlet fusion | 59.06              | 0.88                     | 6.60    | 5.67               | 10.594
Nonlinear contourlet fusion     | 52.33              | 0.82                     | 6.42    | 5.44               | 1.824

5. Conclusions

This paper proposes a medical image fusion algorithm based on nonlinear approximation of the contourlet transform and regional features, which combines the nonlinear approximation characteristics of the contourlet transform with a regional feature algorithm for the high- and low-frequency coefficients. First, the algorithm retains the original low-frequency coefficients and applies nonlinear approximation to the high-frequency coefficients in order to extract the most significant ones. Second, a window coefficient matrix is used in the low-frequency region to calculate the regional energy near the center point and choose a reasonable fusion coefficient, which improves the retention of profile information from the original images. Finally, the rules of the salience/match measure with a threshold are applied to the regional features around each center point to select the fusion coefficients in the high-frequency region from the most significant coefficients, which better fuses image edge contours and texture. Experiments on CT and MRI images show that the fused image quality of our algorithm greatly outperforms fusion based on regional wavelet coefficients, the Haar wavelet transform, and the coefficient-weighted contourlet transform, and is very close to NSCT, while greatly improving computational efficiency. The nonlinear approximation of the contourlet transform also provides a certain degree of filtering and noise removal, outperforming the plain contourlet transform, while the nonlinear feature extraction of the high-frequency components retains features to the maximum extent and greatly improves computing speed compared with NSCT. In short, the method has advantages in both image quality and computational efficiency, so it can provide more reliable information to doctors.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.


Acknowledgments

This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant no. LY15F020033), the National Natural Science Foundation of China (Grant no. 61271414), and the Science and Technology Plan Project of Wenzhou, China (Grant no. Y20160070).


References

1. T. Windeatt and R. Ghaderi, “Binary labelling and decision-level fusion,” Information Fusion, vol. 2, no. 2, pp. 103–112, 2001.
2. D. Rajan and S. Chaudhuri, “Data fusion techniques for super-resolution imaging,” Information Fusion, vol. 3, no. 1, pp. 25–38, 2002.
3. P. J. Burt and E. H. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
4. A. Toet, “Image fusion by a ratio of low-pass pyramid,” Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989.
5. A. Toet, L. J. Ruyven, and J. M. Valeton, “Merging thermal and visual images by a contrast pyramid,” Optical Engineering, vol. 28, no. 7, pp. 789–792, 1989.
6. A. Toet, “A morphological pyramidal image decomposition,” Pattern Recognition Letters, vol. 9, no. 4, pp. 255–261, 1989.
7. P. J. Burt, A Gradient Pyramid Basis for Pattern Selective Image Fusion, SID Press, San Jose, Calif, USA, 1992.
8. T. Ranchin and L. Wald, “The wavelet transform for the analysis of remotely sensed images,” International Journal of Remote Sensing, vol. 14, no. 3, pp. 615–619, 1993.
9. L. J. Chipman, T. M. Orr, and L. N. Graham, “Wavelets and image fusion,” in Proceedings of the IEEE International Conference on Image Processing, pp. 248–251, Los Alamitos, Calif, USA, October 1995.
10. H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
11. L. Kun, G. Lei, and C. Weiwei, “Regional feature self-adaptive image fusion algorithm based on contourlet transform,” Acta Optica Sinica, no. 4, pp. 681–686, 2008.
12. K. Zhu and X.-G. He, “Remote sensing images fusion method based on morphology and regional feature of contourlet coefficients,” Computer Science, no. 4, pp. 301–305, 2013.
13. A. L. da Cunha, J. Zhou, and M. N. Do, “The nonsubsampled contourlet transform: theory, design, and applications,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
14. F. Pak, H. R. Kanan, and A. Alikhassi, “Breast cancer detection and classification in digital mammography based on non-subsampled contourlet transform (NSCT) and super resolution,” Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 89–107, 2015.
15. Y. Wu, W. Hou, and S. Wu, “Fabric defect image de-noising based on contourlet transform and nonlinear diffusion,” Journal of Electronic Measurement & Instrument, vol. 25, no. 8, pp. 665–670, 2011.
16. H. Wang, Q. Yang, R. Li, and Z. Yao, “Tunable-Q contourlet transform for image representation,” Journal of Systems Engineering and Electronics, vol. 24, no. 1, pp. 147–156, 2013.
17. L.-C. Jiao and S. Tan, “Development and prospect of image multiscale geometric analysis,” Acta Electronica Sinica, vol. 31, no. 12, pp. 1975–1981, 2003.
18. H. Y. Patil, A. G. Kothari, and K. M. Bhurchandi, “Expression invariant face recognition using local binary patterns and contourlet transform,” Optik, vol. 127, no. 5, pp. 2670–2678, 2016.
19. Z. Xin and C. Weibin, “Medical image fusion based on weighted contourlet transformation coefficients,” Journal of Image and Graphics, vol. 19, no. 1, pp. 133–140, 2014.
20. X.-C. Xue, S.-Y. Zhang, H.-F. Li, and Y.-F. Gun, “Research on application of contourlet transform for image compression,” Control & Automation, no. 25, 2009.
21. P. J. Burt and R. J. Kolczynski, “Enhanced image capture through fusion,” in Proceedings of the 4th International Conference on Computer Vision, pp. 173–182, IEEE, Berlin, Germany, May 1993.
22. L.-M. Hu, J. Gao, and K.-F. He, “Research on quality measures for image fusion,” Acta Electronica Sinica, vol. 32, pp. 218–221, 2004.
23. G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electronics Letters, vol. 38, no. 7, pp. 313–315, 2002.
24. Z.-G. Zhou and X.-H. Wang, “Image fusion algorithm based on the neighboring relation of wavelet coefficients,” Computer Science, vol. 36, no. 5, pp. 257–261, 2009.
25. L. Min, “Multifocus image fusion based on morphological Haar wavelet transform,” Computer Engineering, vol. 38, no. 23, pp. 211–214, 2012.
26. L. Chao, L. Guangyao, T. Yunlan, and X. Xianglong, “Medical images fusion of nonsubsampled contourlet transform and regional feature,” Journal of Computer Applications, vol. 33, no. 6, pp. 1727–1731, 2013.

Copyright © 2017 Hui Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
