Abstract

Based on the strengths and weaknesses of the contourlet transform and the characteristics of multimodality medical imaging, we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with image regional features. The most significant coefficient subbands of the sparse contourlet matrix are retained by nonlinear approximation, and regional features of the low-frequency and high-frequency subbands are used to fuse the medical images. The results suggest that the proposed algorithm improves the visual effect and quality of medical image fusion, as well as image denoising and enhancement.

1. Introduction

There has been ever-increasing interest in developing and applying medical imaging techniques to problems in clinical diagnosis. The list of possible applications of X-ray, ultrasound, CT, MRI, SPECT, and PET in clinical diagnosis continues to grow and diversify. Although these imaging technologies have given clinicians an unprecedented toolbox to aid clinical decision-making, advances in the fusion of the complementary morphological and functional information retrieved from different imaging technologies could enable physicians to identify human anatomy, physiology, and pathology, and to detect diseases at an even earlier stage.

Recently, there has been active research on medical image fusion technology for clinical applications. Many researchers have proposed image fusion algorithms that achieve good results, such as the Laplacian pyramid transform [1], the contrast pyramid transform [2, 3], techniques based on the morphological pyramid transform [4], techniques based on the gradient pyramid transform [5], the wavelet transform [6-8], the ridgelet transform, and the curvelet transform. Do and Vetterli proposed the contourlet transform in 2002: a "true" two-dimensional image representation built on wavelet multiscale analysis and implemented as a pyramidal directional filter bank (PDFB). As a multiscale geometric analysis tool, the contourlet transform retains the excellent spatial and frequency domain localization properties of wavelet analysis while adding multidirectional, multiscale characteristics and good anisotropy, making it well suited to describing the geometric characteristics of an image [9, 10]. Additionally, the contourlet transform can represent image edges, such as curves, straight lines, and other features, with a small number of coefficients, that is, as a sparse matrix. After contourlet transformation, the energy of the image becomes more concentrated, which is conducive to the tracking and analysis of important image features. The contourlet transform can decompose an image in any direction and at any scale, which is key to the accurate description of image contours and directional texture information. To date, many scholars have applied the contourlet transform to image fusion and reported good results, particularly fusion combining image characteristics with the contourlet transform [11, 12], nonsubsampled contourlet image fusion [13, 14], and fusion based on nonlinear approximation of the contourlet transform [15, 16].

This paper proposes an image fusion algorithm, based on an analysis of a large number of contourlet-transform fusion techniques, that combines nonlinear approximation of the contourlet transform with regional image features. First, contourlet decomposition is employed to separate the low-frequency and high-frequency subbands of an image. The low-frequency coefficients are retained, and a nonlinear approximation keeps only the most significant high-frequency coefficients. Then, the low-frequency coefficients and the most significant high-frequency coefficients are combined via image fusion: a window coefficient matrix is used to calculate the regional energy near each center point in the low-frequency subband and a reasonable fusion coefficient is chosen, while for the most significant high-frequency coefficients we employ the fusion rule of a salience/match measure with a threshold. Simulations and experimental results on CT and MRI images indicate that the proposed method achieves better fusion performance and visual effects than the wavelet transform and traditional contourlet fusion methods.

2. Nonlinear Approximation of Contourlet Transform Algorithm

A contourlet transform model of the filter bank can be extended to the continuous square-integrable space L²(R²) [17, 18]. In the continuous-domain contourlet transform, L²(R²) is decomposed into a multiscale, multidirectional subspace sequence by the use of an iterated filter bank, as shown in the following equation:

L²(R²) = V_(j0) ⊕ (⊕_(j ≤ j0) W_j).  (1)

The definition of the spaces V_j and W_j is consistent with that of the wavelet decomposition [19]. V_j is the approximation space: the scaling function φ provides an approximation component on the scale 2^j, and V_j is generated by dilating and translating the scaling function into the orthogonal basis {φ_(j,n)}. W_j is decomposed into directional subspaces W_(j,k)^(l_j), expressed as W_j = ⊕_(k=0)^(2^(l_j)−1) W_(j,k)^(l_j). The space W_(j,k)^(l_j), which belongs to W_j, is defined by a frame on a rectangular grid, as shown in the following equation:

W_(j,k)^(l_j) = span_(n ∈ Z²) {ρ_(j,k,n)(t)}.  (2)

In Formula (2), the frame functions ρ_(j,k,n) are generated from the filters of the PDFB, which include a low-pass analysis filter. The sampling matrix S_k^(l) of the directional filter bank can be expressed as follows:

S_k^(l) = diag(2^(l−1), 2), for 0 ≤ k < 2^(l−1);
S_k^(l) = diag(2, 2^(l−1)), for 2^(l−1) ≤ k < 2^l.  (3)

In Formula (3), the parameter k directly determines the orientation of the subband, that is, the horizontal or vertical bias. According to multiresolution analysis theory, ρ_(j,k,n) can be obtained from the frame functions μ_(j,m) of W_j and their translations, as shown in the following equation:

ρ_(j,k,n)(t) = Σ_(m ∈ Z²) d_k^(l_j)[m − S_k^(l_j) n] μ_(j,m)(t),  (4)

where d_k^(l_j) denotes the directional filters of the PDFB.

According to the theory described above, ρ_(j,k,n) is a continuous-domain contourlet, while j, k, and n represent the scale, direction, and position parameters of the contourlet, respectively.

Given a set of basis functions {φ_i} for the nonlinear approximation of the contourlet transform, a function f can be expanded as f = Σ_i c_i φ_i. The M coefficients with the largest absolute values are then used to approximate the original function, expressed as f_M = Σ_(i ∈ I_M) c_i φ_i, where I_M is the index set of the M coefficients of largest absolute value and f_M represents the M-term nonlinear approximation of the function [20].
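As an illustrative sketch of this M-term selection (hypothetical Python/NumPy code; the function name, the use of a retention ratio rather than a count M, and the array layout are our own assumptions, not part of the paper):

```python
import numpy as np

def nonlinear_approximation(coeffs, ratio):
    """Keep the fraction `ratio` of coefficients with the largest
    magnitude and zero out the rest (M-term nonlinear approximation)."""
    flat = coeffs.ravel()
    m = max(1, int(round(ratio * flat.size)))  # number of retained coefficients
    # Indices of the m largest-magnitude coefficients.
    keep = np.argsort(np.abs(flat))[-m:]
    approx = np.zeros_like(flat)
    approx[keep] = flat[keep]
    return approx.reshape(coeffs.shape)

# Example: retain 50% of the most significant coefficients.
c = np.array([[4.0, -1.0], [0.5, -3.0]])
print(nonlinear_approximation(c, 0.5))  # keeps 4.0 and -3.0
```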

3. Feature Matching Algorithm

3.1. Low-Frequency Fusion Algorithm

The low-frequency subband retains the profile information of the original image, so the low-frequency region is processed in a way that retains the profile characteristics as much as possible. In this paper, a window coefficient matrix is employed to calculate the energy in the region near each center point, which not only takes regional factors into account but also retains directional characteristics and highlights the central pixel. The energy of the low-frequency region can be defined as follows:

E_X(i, j) = Σ_((m,n) ∈ Ω) ω(m, n) [C_X(i + m, j + n)]²,  X ∈ {A, B}.

In this formula, (i, j) represents the center of the neighborhood Ω in the low-frequency subband of source image X, ω is the area coefficient matrix, C_X is the low-frequency subband coefficient of source image X, and C_F denotes the low-frequency subband coefficient of the fused image. According to the energy-based fusion rule for the low-frequency region, the regional energies E_A(i, j) and E_B(i, j) centered at corresponding points of the two low-frequency subimages are calculated first. The coefficient whose region has the larger energy is then selected, which can be expressed as follows:

C_F(i, j) = C_A(i, j), if E_A(i, j) ≥ E_B(i, j);
C_F(i, j) = C_B(i, j), otherwise.
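A minimal sketch of this low-frequency rule in Python/NumPy follows; the 3 × 3 window matrix shown here is an assumed central-pixel-weighted mask (the paper does not list its exact matrix), and the function names are ours:

```python
import numpy as np

# 3x3 window coefficient matrix that highlights the central pixel
# (an assumed weighting, not taken from the paper).
WINDOW = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

def regional_energy(c):
    """Windowed energy of low-frequency coefficients at every position."""
    pad = np.pad(c, 1, mode="edge")
    e = np.zeros_like(c, dtype=float)
    for i in range(c.shape[0]):
        for j in range(c.shape[1]):
            e[i, j] = np.sum(WINDOW * pad[i:i + 3, j:j + 3] ** 2)
    return e

def fuse_lowpass(ca, cb):
    """At each position, keep the coefficient whose neighborhood energy is larger."""
    ea, eb = regional_energy(ca), regional_energy(cb)
    return np.where(ea >= eb, ca, cb)
```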

3.2. High-Frequency Fusion Algorithm

This paper fuses the high-frequency coefficients using the rule of the salience/match measure with a threshold. The local energy of the high-frequency region [21] can be defined as follows:

E_l^X(x, y) = Σ_((m,n) ∈ W) [D_l^X(x + m, y + n)]²,  X ∈ {A, B}.

D_l^X(x, y) represents the high-frequency coefficient of image X at position (x, y) on decomposition level l. W is a neighborhood window of the image (typically 3 × 3 or 5 × 5 pixels). The sum of the squares of the high-frequency coefficients over this neighborhood gives the local energy E_l^X(x, y) of the point (x, y). The match degree of point (x, y) is defined as follows:

M_l^(AB)(x, y) = 2 Σ_((m,n) ∈ W) D_l^A(x + m, y + n) D_l^B(x + m, y + n) / (E_l^A(x, y) + E_l^B(x, y)).

The fusion rule for the high-frequency coefficients is defined by the following formulas. The match degree, which measures how well the feature information at corresponding positions of the two original images A and B matches, determines the proportion of characteristic information taken from each original image. The fused coefficient at point (x, y) is determined by the following rules:

(1) If M_l^(AB)(x, y) < T, then

D_l^F(x, y) = D_l^A(x, y), if E_l^A(x, y) ≥ E_l^B(x, y);
D_l^F(x, y) = D_l^B(x, y), otherwise.

(2) If M_l^(AB)(x, y) ≥ T, then

D_l^F(x, y) = ω_max D_l^A(x, y) + ω_min D_l^B(x, y), if E_l^A(x, y) ≥ E_l^B(x, y);
D_l^F(x, y) = ω_min D_l^A(x, y) + ω_max D_l^B(x, y), otherwise,

where ω_min = 1/2 − (1/2)(1 − M_l^(AB)(x, y)) / (1 − T) and ω_max = 1 − ω_min.

In the rules listed above, M_l^(AB)(x, y) represents the match degree and T represents the matching threshold. When M_l^(AB)(x, y) < T, we take the coefficient with the larger local energy, E_l^A(x, y) or E_l^B(x, y), as the fused high-frequency coefficient. When M_l^(AB)(x, y) ≥ T, we take a weighted value, where the weights ω_max and ω_min depend on the degree of matching and ω_max + ω_min = 1. Clearly, the calculation of the feature-matching rules demonstrates good locality, because the fused high-frequency coefficient at (x, y) is determined only by the coefficient values contained in the neighborhood of point (x, y).
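The salience/match rule can be sketched as follows (illustrative Python/NumPy; the 3 × 3 window, the edge padding, the small eps guard against division by zero, and the function name are our assumptions):

```python
import numpy as np

def fuse_highpass(da, db, threshold=0.75, eps=1e-12):
    """Salience/match fusion of one pair of high-frequency subbands,
    using 3x3 neighborhoods for local energy and match degree."""
    pad_a = np.pad(da, 1, mode="edge")
    pad_b = np.pad(db, 1, mode="edge")
    fused = np.zeros_like(da, dtype=float)
    for i in range(da.shape[0]):
        for j in range(da.shape[1]):
            wa = pad_a[i:i + 3, j:j + 3]
            wb = pad_b[i:i + 3, j:j + 3]
            ea = np.sum(wa ** 2)  # local energy of A
            eb = np.sum(wb ** 2)  # local energy of B
            match = 2 * np.sum(wa * wb) / (ea + eb + eps)
            if match < threshold:
                # Low match: select the coefficient with larger local energy.
                fused[i, j] = da[i, j] if ea >= eb else db[i, j]
            else:
                # High match: weighted average, weights tied to the match degree.
                w_min = 0.5 - 0.5 * (1 - match) / (1 - threshold)
                w_max = 1.0 - w_min
                if ea >= eb:
                    fused[i, j] = w_max * da[i, j] + w_min * db[i, j]
                else:
                    fused[i, j] = w_min * da[i, j] + w_max * db[i, j]
    return fused
```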

4. Experimental Results and Analysis

The CT and MRI image fusion experiments were simulated on a PC with a Pentium IV 2.4 GHz processor and 4 GB of RAM, using MATLAB 7.0 as the development platform. After the nonlinear approximation contourlet transform was performed, features were calculated over 3 × 3 regions in the high-frequency and low-frequency subbands, and the high-frequency match threshold was set to 0.75, as proposed by Burt and Kolczynski [21].

Figure 1 depicts the results of the nonlinear approximation contourlet transform performed on an MRI image. The various proportions of nonlinear approximation retain the most significant coefficients in the high-frequency subbands. Table 1 lists the MRI image feature coefficients and the most significant coefficients at various approximation proportions.

Figure 2 depicts the feature-matching fusion of MRI and CT images based on the nonlinear approximation contourlet transform. The low-frequency region is kept unchanged after nonlinear approximation of the significant coefficients in the high-frequency subbands, and the images are then fused according to their regional characteristics. This paper quantitatively analyzes the fusion effects using the indicators of standard deviation, correlation coefficient, entropy, and mutual information [22, 23].

4.1. Standard Deviation

The standard deviation (STD) reflects the contrast change of an image: the larger the value, the clearer the edge contours. The STD is defined as follows:

STD = sqrt( (1 / (M × N)) Σ_(i=1)^M Σ_(j=1)^N (F(i, j) − μ)² ).

In the formula, F represents the fused result image, M × N represents the size of the image, and μ represents the mean value of F.
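A direct Python/NumPy transcription of this definition (the function name is our own):

```python
import numpy as np

def std_metric(fused):
    """Standard deviation of the fused image: sqrt of the mean
    squared deviation from the image mean."""
    mu = fused.mean()
    return np.sqrt(np.mean((fused - mu) ** 2))
```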

4.2. Correlation Coefficients

The correlation coefficient (CC) is a measure of the similarity of two images: the greater the correlation coefficient, the better the fusion effect. For a source image A and the fused image F, the CC is defined as

CC(A, F) = Σ_(i,j) (A(i, j) − Ā)(F(i, j) − F̄) / sqrt( Σ_(i,j) (A(i, j) − Ā)² · Σ_(i,j) (F(i, j) − F̄)² ),

where Ā and F̄ denote the mean values of A and F, respectively.
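The same quantity in Python/NumPy (an illustrative helper; the function name is ours):

```python
import numpy as np

def correlation_coefficient(a, f):
    """Correlation coefficient between a source image and the fused image."""
    da = a - a.mean()  # zero-mean source image
    df = f - f.mean()  # zero-mean fused image
    return np.sum(da * df) / np.sqrt(np.sum(da ** 2) * np.sum(df ** 2))
```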

4.3. Entropy

Entropy (EN) measures the amount of information contained in the fused image. The larger the entropy value is, the more information the result image has. EN is defined as

EN = −Σ_(i=0)^(L−1) p_i log₂ p_i,

in which p_i is the normalized histogram of the fused image (the probability of gray level i) and L is the total number of gray levels.
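A sketch of this measure in Python/NumPy, assuming an integer-valued image with L = 256 gray levels (the function name and the binning choice are our assumptions):

```python
import numpy as np

def entropy(img, levels=256):
    """Shannon entropy (in bits) of the normalized gray-level histogram."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=levels)
    p = hist / hist.sum()   # normalized histogram
    p = p[p > 0]            # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))
```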

4.4. Mutual Information

Mutual information (MI) can be used to measure the mutual correlation or similarity of an input image and the fused image. The higher the mutual information of the fused image, the more information is extracted from the original images and the better the fusion effect. For two discrete image signals, for example, the input image A and the fused image F, MI is defined as

MI(A, F) = H(A) + H(F) − H(A, F),

in which H is the entropy function, H(A) = −Σ_a p_A(a) log₂ p_A(a), and H(A, F) is the joint entropy function, H(A, F) = −Σ_a Σ_f p_(AF)(a, f) log₂ p_(AF)(a, f). Here p_(AF) is the normalized joint gray-level histogram of A and F, and p_A and p_F are the normalized histograms of A and F. For the two input images A and B, the overall index is the sum MI(A, F) + MI(B, F).
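A sketch of the pairwise MI in Python/NumPy, computed from the normalized joint histogram (the function name, 256-level binning, and integer-valued inputs are assumptions):

```python
import numpy as np

def mutual_information(a, f, levels=256):
    """MI(A, F) = H(A) + H(F) - H(A, F) from the normalized joint
    gray-level histogram of two integer-valued images."""
    joint = np.histogram2d(a.ravel(), f.ravel(),
                           bins=levels, range=[[0, levels], [0, levels]])[0]
    p_af = joint / joint.sum()      # normalized joint histogram
    p_a = p_af.sum(axis=1)          # marginal histogram of A
    p_f = p_af.sum(axis=0)          # marginal histogram of F

    def h(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return h(p_a) + h(p_f) - h(p_af.ravel())
```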

As shown in Table 2, the maximum standard deviation of the fused image is obtained at 50% significant coefficients, which corresponds to the sharpest profile edges and the highest resolution at 50% nonlinear approximation. This may be because too few significant coefficients lead to incomplete extraction of edge contours, whereas too many significant coefficients lead to the extraction of excess noise. The correlation coefficient, mutual information, and entropy change little with the proportion of significant coefficients. The primary reason is that the low-frequency subband retains the profile information of the original image, and the fused images do not undergo nonlinear approximation in the low-frequency region. The results indicate that the quality of the fused image has a nonlinear relationship to the proportion of significant coefficients: when the proportion of significant coefficients is 50%, the quality of image fusion is highest.

In order to verify the effectiveness of the algorithm, we compare our results with those of other researchers. Figure 3 shows the fusion results for MRI and CT images with different algorithms: Figure 3(a) is the result of the fusion algorithm based on regional features of wavelet coefficients [24], Figure 3(b) is the result based on the Haar wavelet transform [25], Figure 3(c) is the result of the coefficient-weighted fusion algorithm after the contourlet transform [19], Figure 3(d) is the fusion result based on the nonsubsampled contourlet transform (NSCT) and regional features [26], and Figure 3(e) is our experimental fusion result, obtained by regional image feature matching after 50% nonlinear approximation of the high-frequency contourlet coefficients. In the visual assessment, the fused images based on the wavelet transform methods (Figures 3(a) and 3(b)) show low contrast and insufficiently clear edge profiles, and they cannot reflect the detailed information of the source CT and MRI images. In Figure 3(c), with the contourlet transform, the fusion effect is slightly better than that of the wavelet methods and more clearly reflects the content of the source images. In Figures 3(d) and 3(e), the fused image not only inherits the bone tissue of the CT image but also keeps the soft tissue of the MRI image; moreover, the image edge details are more pronounced after fusion.

Analyzing the results quantitatively with the indexes shown in Table 3, the STD, CC, EN, and MI of Figures 3(a) and 3(b), which are based on the wavelet transform methods, are relatively small, which shows that information has been lost in the process of image fusion and that there is a notable difference between the fused image and the source images. This demonstrates that the ability of the wavelet transform methods to extract information from the source images is poor and their fusion effect is not good. Figure 3(c) obtains more directional subbands in the high-frequency part by using the contourlet transform and handles directional details better than the wavelet transform, so each of its index values is superior to those of wavelet fusion. In Figure 3(d), both the index values and the quality of image fusion are the highest, but the algorithm requires a large amount of computation and is too time-consuming. Figure 3(e) is the result of our nonlinear contourlet algorithm, whose index values outperform the wavelet transform and the contourlet transform and whose fused image quality is close to that of the NSCT; moreover, it greatly reduces the amount of computation and therefore has significant application value.

Figures 4, 5, and 6 show another group of fusion results for CT and MRI medical images using the different algorithms; the corresponding performance evaluation indexes are shown in Tables 4, 5, and 6, respectively. Both from the subjective visual perspective and by the objective quantitative standards of fused-image evaluation, the experimental results of our algorithm are better than those of the wavelet and contourlet fusion algorithms. Compared with the nonsubsampled contourlet fusion algorithm, our algorithm greatly increases the computing speed without losing fusion quality.

5. Conclusions

This paper proposes an image fusion algorithm for medical image processing based on nonlinear approximation of the contourlet transform and regional features, which combines the characteristics of the nonlinear approximation of the contourlet transform with a regional feature algorithm for the low-frequency and high-frequency coefficients. First, the algorithm retains the original low-frequency coefficients and applies a nonlinear approximation to the high-frequency coefficients in order to extract the most significant coefficients. Second, a window coefficient matrix is used in the low-frequency region to calculate the regional energy near each center point and choose a reasonable fusion coefficient, which improves the retention of the profile information of the original images. Finally, the rules of the salience/match measure with a threshold are used to analyze the regional features around each center point and to select the fusion coefficients in the high-frequency region according to the most significant coefficients, in order to better fuse image edge contours and texture. The experiments on CT and MRI images show that the fused image quality of our algorithm greatly outperforms the regional wavelet-coefficient, Haar wavelet transform, and coefficient-weighted contourlet transform methods and is very close to that of the NSCT, while greatly improving computational efficiency. The nonlinear contourlet transform also has a certain filtering and noise-removal capability, in which it outperforms the plain contourlet transform, while the nonlinear feature extraction of the high-frequency components retains features maximally and greatly improves computing speed, in which it outperforms the NSCT. In short, the method has advantages in both image quality and computational efficiency, so it can provide more reliable information to doctors.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant no. LY15F020033), the National Natural Science Foundation of China (Grant no. 61271414), and the Science and Technology Plan Project of Wenzhou, China (Grant no. Y20160070).