Journal of Electrical and Computer Engineering

Volume 2017, Article ID 6807473, 9 pages

https://doi.org/10.1155/2017/6807473

## Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features

^{1}School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China

^{2}Zhijiang College of Zhejiang University of Technology, Hangzhou 310024, China

Correspondence should be addressed to Jionghui Jiang; jiangjionghui@zjc.zjut.edu.cn

Received 26 July 2016; Revised 15 November 2016; Accepted 5 December 2016; Published 24 January 2017

Academic Editor: Panajotis Agathoklis

Copyright © 2017 Hui Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Taking into account the strengths and limitations of the contourlet transform and of multimodality medical imaging, we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with regional image features. The most significant coefficient bands of the contourlet sparse matrix are retained by nonlinear approximation. Regional features of the low-frequency and high-frequency subbands are then used to fuse the medical images. The results suggest that the proposed algorithm improves the visual quality of fused medical images and also benefits image denoising and enhancement.

#### 1. Introduction

Medical imaging has attracted ever-increasing interest as an analytical probe for clinical diagnosis, and the list of applications of X-ray, ultrasound, CT, MRI, SPECT, and PET in clinical diagnosis continues to grow and diversify. Although these imaging technologies have given clinicians an unprecedented toolbox to aid clinical decision-making, advances in image fusion, which combines the morphological and functional information retrieved from different imaging technologies, could enable physicians to characterize human anatomy, physiology, and pathology and to identify diseases at an even earlier stage.

Recently, there has been active research in the academic community on medical image fusion technology for clinical applications. Many researchers have proposed image fusion algorithms that achieve good results, such as the Laplacian pyramid transform [1], the contrast pyramid transform [2, 3], techniques based on the morphological pyramid transform [4], techniques based on the gradient pyramid transform [5], the wavelet transform [6–8], the Ridgelet transform, and the Curvelet transform. Do and Vetterli proposed the contourlet transform in 2002, a “true” two-dimensional image representation based on multiscale analysis that is implemented with a pyramidal directional filter bank (PDFB). As a multiscale geometric analysis tool, the contourlet transform retains the excellent spatial- and frequency-domain localization properties of wavelet analysis while adding multidirectionality, multiscale characteristics, and good anisotropy, making it well suited to describing the geometric features of an image [9, 10]. Additionally, the contourlet transform yields a sparse representation in which image edges, such as curves, straight lines, and other features, are captured by a small number of significant coefficients. After contourlet transformation, the energy of the image becomes more concentrated, which is conducive to tracking and analyzing important image features. The contourlet transform can decompose an image into any number of directions at any scale, which is key to accurately describing image contours and directional texture information. To date, many scholars have applied the contourlet transform to image fusion and reported good results, particularly fusion combined with the image characteristics of the contourlet transform [11, 12], nonsubsampled contourlet image fusion [13, 14], and fusion based on nonlinear approximation of the contourlet transform [15, 16].

This paper proposes an image fusion algorithm, based on analysis of a large number of contourlet-transform fusion techniques, that combines nonlinear approximation of the contourlet transform with regional image features. First, contourlet decomposition is employed to separate an image into high-frequency and low-frequency subbands. The low-frequency coefficients are retained, and a nonlinear approximation keeps only the most significant high-frequency coefficients. Then, the low-frequency coefficients and the most significant high-frequency coefficients are combined via fusion rules: in the low-frequency subband, a windowed coefficient matrix is used to calculate the regional energy around each center point and to choose a reasonable fusion coefficient, while the most significant high-frequency coefficients are fused with a salience/match measure with a threshold. CT and MRI image simulations and experimental results indicate that the proposed method achieves better fusion performance and visual effects than the wavelet transform and traditional contourlet fusion methods.
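The overall flow described above can be sketched in Python. A real contourlet decomposition is not available in the standard library, so a trivial stand-in (global mean as the "low-frequency" part, residual as the "high-frequency" part) and deliberately simplified fusion rules are used here purely to illustrate the data flow; every function below is a hypothetical sketch, not the paper's implementation.

```python
# Illustrative sketch of the fusion pipeline described above.
# ASSUMPTIONS: the contourlet decomposition is replaced by a toy
# stand-in (mean = "low frequency", residual = "high frequency"),
# and the regional-feature rules are reduced to per-pixel rules.

def decompose(img):
    """Toy stand-in for a contourlet decomposition of a 2-D image."""
    mean = sum(sum(row) for row in img) / (len(img) * len(img[0]))
    low = [[mean] * len(img[0]) for _ in img]
    high = [[v - mean for v in row] for row in img]
    return low, high

def keep_largest(high, m):
    """Nonlinear approximation: keep the m largest-magnitude coefficients."""
    mags = sorted((abs(v) for row in high for v in row), reverse=True)
    thresh = mags[m - 1] if m <= len(mags) else 0.0
    return [[v if abs(v) >= thresh else 0.0 for v in row] for row in high]

def fuse(img_a, img_b, m):
    """Decompose both images, fuse the subbands, and recombine."""
    low_a, high_a = decompose(img_a)
    low_b, high_b = decompose(img_b)
    # Low frequency: simple average (the paper uses regional energy instead).
    low = [[(p + q) / 2 for p, q in zip(ra, rb)] for ra, rb in zip(low_a, low_b)]
    # High frequency: nonlinear approximation, then keep the larger magnitude
    # (the paper uses the salience/match measure instead).
    ha = keep_largest(high_a, m)
    hb = keep_largest(high_b, m)
    high = [[p if abs(p) >= abs(q) else q for p, q in zip(ra, rb)]
            for ra, rb in zip(ha, hb)]
    return [[l + h for l, h in zip(rl, rh)] for rl, rh in zip(low, high)]
```

The refined low-frequency and high-frequency rules that replace the simplified steps above are developed in Section 3.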

#### 2. Nonlinear Approximation of Contourlet Transform Algorithm

The filter-bank model of the contourlet transform can be extended to the space of square-integrable functions $L^2(\mathbb{R}^2)$ [17, 18]. In the continuous-domain contourlet transform, $L^2(\mathbb{R}^2)$ is decomposed into a sequence of multiscale, multidirectional subspaces by an iterated filter bank, as shown in the following equation:

$$L^2(\mathbb{R}^2)=V_{j_0}\oplus\Biggl(\bigoplus_{j\le j_0} W_j\Biggr). \tag{1}$$

The definition of the spaces $V_j$ and $W_j$ is consistent with the wavelet decomposition [19]. $V_j$ is the approximation space: the scaling function provides an approximation component at the scale $2^j$, and an orthonormal basis of $V_j$ is generated by dilating and translating the scaling function. The detail space $W_j$ is decomposed into directional subspaces $W^{(l_j)}_{j,k}$, expressed as $W_j=\bigoplus_{k=0}^{2^{l_j}-1}W^{(l_j)}_{j,k}$. Each directional subspace $W^{(l)}_{j,k}$, which belongs to $W_j$, is spanned by the frame $\bigl\{\rho^{(l)}_{j,k,n}\bigr\}_{n\in\mathbb{Z}^2}$, as shown in the following equation:

$$\rho^{(l)}_{j,k,n}(t)=\sum_{m\in\mathbb{Z}^2} d^{(l)}_{k}\bigl[m-S^{(l)}_{k}n\bigr]\,\mu_{j,m}(t). \tag{2}$$

In Formula (2), $d^{(l)}_{k}$ is the impulse response of the $k$th analysis filter of the directional filter bank of the PDFB, and $\mu_{j,m}$ is the frame of $W_j$. The sampling matrix $S^{(l)}_{k}$ can be expressed as follows:

$$S^{(l)}_{k}=\begin{cases}\operatorname{diag}\bigl(2^{l-1},\,2\bigr), & 0\le k<2^{l-1},\\ \operatorname{diag}\bigl(2,\,2^{l-1}\bigr), & 2^{l-1}\le k<2^{l}.\end{cases} \tag{3}$$

In Formula (3), the parameter $k$ directly determines the orientation of the conversion, that is, whether the bias is horizontal ($0\le k<2^{l-1}$) or vertical ($2^{l-1}\le k<2^{l}$). According to multiresolution analysis theory, every contourlet at a given scale and direction can be obtained from a single prototype function and its translations, as shown in the following equation:

$$\rho_{j,k,n}(t)=\rho^{(l_j)}_{j,k}\bigl(t-2^{j-1}S^{(l_j)}_{k}n\bigr). \tag{4}$$

According to the theory described above, $\rho_{j,k,n}$ is a continuous-domain contourlet, where $j$, $k$, and $n$ represent the scale, orientation, and position parameters of the contourlet, respectively.

Given a set of basis functions $\{\phi_i\}_{i\in I}$ in a nonlinear approximation of the contourlet transform, a function $f$ can be expanded as $f=\sum_{i\in I}c_i\phi_i$. Then the $M$ coefficients with the largest absolute values are used to approximate the original function, expressed as $\hat{f}_M=\sum_{i\in I_M}c_i\phi_i$, where $I_M$ is the index set of the $M$ largest-magnitude coefficients and $\hat{f}_M$ represents the best $M$-term nonlinear approximation of the function [20].
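Concretely, the best-$M$-term selection described above can be sketched as follows. This is a minimal illustration over a flat list of coefficients; the function name is ours, not the paper's.

```python
def nonlinear_approximation(coeffs, m):
    """Best m-term approximation: keep the m coefficients with the
    largest absolute value, zero out the rest, and preserve positions."""
    if m >= len(coeffs):
        return list(coeffs)
    # Indices of the m largest-magnitude coefficients (the set I_M above).
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)
    keep = set(order[:m])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]
```

Applied to a flattened contourlet coefficient matrix, only the retained high-frequency coefficients enter the subsequent fusion step.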

#### 3. Feature Matching Algorithm

##### 3.1. Low-Frequency Fusion Algorithm

The low-frequency subband retains the profile information of the original image, so the low-frequency region should be processed in a way that preserves these profile characteristics as much as possible. In this paper, a windowed coefficient matrix is employed to calculate the energy of the region around each center point, which not only takes regional factors into account but also retains directivity and highlights the central pixel. The energy of the low-frequency region can be defined as follows:

$$E(x,y)=\sum_{m}\sum_{n} w(m,n)\bigl[C(x+m,\,y+n)\bigr]^{2}.$$

Here $(x,y)$ is the center of the neighborhood, $w(m,n)$ is the area coefficient matrix, $C$ denotes the low-frequency subband coefficients, and $C_F$ is the low-frequency subband coefficient of the fused image. According to the energy-based fusion rule for the low-frequency region, the regional energies $E_A(x,y)$ and $E_B(x,y)$ centered at corresponding points of the two low-frequency subimages must be calculated first. Then, comparing the regional energies, the fused low-frequency coefficient can be expressed as follows:

$$C_F(x,y)=\begin{cases}C_A(x,y), & E_A(x,y)\ge E_B(x,y),\\ C_B(x,y), & E_A(x,y)<E_B(x,y).\end{cases}$$
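A sketch of this regional-energy selection follows. The 3 × 3 weight matrix and the zero treatment of out-of-range neighbors are illustrative assumptions; the paper does not fix them at this point.

```python
# Illustrative 3x3 "area coefficient matrix" emphasizing the central
# pixel (an assumption; the paper does not specify the weights).
W = [[1, 2, 1],
     [2, 4, 2],
     [1, 2, 1]]

def regional_energy(c, x, y, w=W):
    """Weighted energy of the window centered at (x, y) in matrix c.
    Out-of-range neighbors contribute zero (a border assumption)."""
    r = len(w) // 2
    e = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < len(c) and 0 <= xx < len(c[0]):
                e += w[dy + r][dx + r] * c[yy][xx] ** 2
    return e

def fuse_low(a, b, w=W):
    """Per pixel, keep the coefficient whose regional energy is larger."""
    return [[a[y][x] if regional_energy(a, x, y, w) >= regional_energy(b, x, y, w)
             else b[y][x]
             for x in range(len(a[0]))]
            for y in range(len(a))]
```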

##### 3.2. High-Frequency Fusion Algorithm

This paper fuses the high-frequency coefficients using the rules of the salience/match measure with a threshold. The local energy of the high-frequency region [21] can be defined as follows:

$$E^{l}_{X}(x,y)=\sum_{m}\sum_{n} w(m,n)\bigl[C^{l}_{X}(x+m,\,y+n)\bigr]^{2},\quad X\in\{A,B\}.$$

Here $(x,y)$ represents the position of a high-frequency coefficient at decomposition level $l$, and the neighborhood window $w$ (typically 3 × 3 or 5 × 5 pixels) defines the size of the region. The local energy of point $(x,y)$ is therefore the weighted sum of the squares of the high-frequency coefficients over this neighborhood. The match degree of point $(x,y)$ is defined as follows:

$$M_{AB}(x,y)=\frac{2\sum_{m}\sum_{n}w(m,n)\,C^{l}_{A}(x+m,\,y+n)\,C^{l}_{B}(x+m,\,y+n)}{E^{l}_{A}(x,y)+E^{l}_{B}(x,y)}.$$

The fusion rule for the high-frequency coefficients is defined by the following formulas. The match degree, which measures how well the feature information at corresponding positions of the two original images *A* and *B* agrees, determines the proportion of characteristic information drawn from each original image. The fused coefficient at point $(x,y)$ is determined by the following rules:

(1) If $M_{AB}(x,y)<T$, then
$$C^{l}_{F}(x,y)=\begin{cases}C^{l}_{A}(x,y), & E^{l}_{A}(x,y)\ge E^{l}_{B}(x,y),\\ C^{l}_{B}(x,y), & E^{l}_{A}(x,y)<E^{l}_{B}(x,y).\end{cases}$$

(2) If $M_{AB}(x,y)\ge T$, then
$$C^{l}_{F}(x,y)=\begin{cases}w_{\max}C^{l}_{A}(x,y)+w_{\min}C^{l}_{B}(x,y), & E^{l}_{A}(x,y)\ge E^{l}_{B}(x,y),\\ w_{\min}C^{l}_{A}(x,y)+w_{\max}C^{l}_{B}(x,y), & E^{l}_{A}(x,y)<E^{l}_{B}(x,y),\end{cases}$$
with $w_{\min}=\dfrac{1}{2}-\dfrac{1}{2}\Bigl(\dfrac{1-M_{AB}(x,y)}{1-T}\Bigr)$ and $w_{\max}=1-w_{\min}$.

In the rules listed above, $M_{AB}(x,y)$ represents the match degree and $T$ the matching threshold. When $M_{AB}(x,y)<T$, the coefficient with the larger local energy, $E^{l}_{A}$ or $E^{l}_{B}$, is taken as the fused high-frequency coefficient. When $M_{AB}(x,y)\ge T$, a weighted combination is taken, where the weights $w_{\min}$ and $w_{\max}$ are correlated with the degree of matching and satisfy $w_{\min}+w_{\max}=1$. The feature-matching rule is clearly local: the fused high-frequency coefficient at $(x,y)$ is determined only by the coefficient values contained in the neighborhood of point $(x,y)$.
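The salience/match rule can be sketched as follows. The weight formula $w_{\min}=\frac{1}{2}-\frac{1}{2}(1-M)/(1-T)$ is the standard Burt–Kolczynski form, adopted here as an assumption consistent with the constraints stated in the text; the 3 × 3 window weights and border handling are likewise illustrative.

```python
W = [[1, 2, 1],
     [2, 4, 2],
     [1, 2, 1]]  # illustrative neighborhood weights (assumption)

def _offsets(c, x, y, r):
    """Yield (i, j, yy, xx): weight indices and valid image positions."""
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < len(c) and 0 <= xx < len(c[0]):
                yield dy + r, dx + r, yy, xx

def local_energy(c, x, y, w=W):
    """Weighted sum of squared coefficients over the neighborhood."""
    r = len(w) // 2
    return sum(w[i][j] * c[yy][xx] ** 2 for i, j, yy, xx in _offsets(c, x, y, r))

def match_degree(a, b, x, y, w=W):
    """Normalized cross-correlation of the two neighborhoods."""
    r = len(w) // 2
    cross = sum(w[i][j] * a[yy][xx] * b[yy][xx]
                for i, j, yy, xx in _offsets(a, x, y, r))
    e = local_energy(a, x, y, w) + local_energy(b, x, y, w)
    return 2.0 * cross / e if e else 1.0

def fuse_high(a, b, t=0.75, w=W):
    """Fuse high-frequency matrices a and b per the salience/match rule."""
    out = []
    for y in range(len(a)):
        row = []
        for x in range(len(a[0])):
            ea, eb = local_energy(a, x, y, w), local_energy(b, x, y, w)
            m = match_degree(a, b, x, y, w)
            if m < t:                     # weak match: pick the salient source
                row.append(a[y][x] if ea >= eb else b[y][x])
            else:                         # strong match: weighted average
                wmin = 0.5 - 0.5 * (1.0 - m) / (1.0 - t)
                wmax = 1.0 - wmin
                row.append(wmax * a[y][x] + wmin * b[y][x] if ea >= eb
                           else wmin * a[y][x] + wmax * b[y][x])
        out.append(row)
    return out
```

Note that when the two neighborhoods are identical the match degree is 1, both weights collapse to 0.5, and the fused coefficient equals the common value, as expected.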

#### 4. Experimental Results and Analysis

The CT and MRI image fusion experiments were implemented in Matlab 7.0 on a PC with a 2.4 GHz Pentium IV processor and 4 GB of RAM. After the nonlinear approximation contourlet transform was performed, a 3 × 3 feature region was used in both the high-frequency and low-frequency subbands, and the high-frequency match threshold was set to 0.75, as proposed by Burt and Kolczynski [21].

Figure 1 depicts the results of the nonlinear approximation contourlet transform performed on an MRI image. At each approximation proportion, the most significant coefficients in the high-frequency subbands are retained. Table 1 describes the MRI image feature coefficients and the most significant coefficients at various approximation proportions.