Computational and Mathematical Methods in Medicine
Volume 2018, Article ID 2806047, 12 pages
https://doi.org/10.1155/2018/2806047
Research Article

Medical Image Fusion Based on Sparse Representation and PCNN in NSCT Domain

1School of Electronics and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
2School of Mechanical Engineering, North China Electric Power University, Hebei 071000, China

Correspondence should be addressed to Yiming Chen; 1770213116@qq.com

Received 23 June 2017; Revised 29 January 2018; Accepted 17 April 2018; Published 24 May 2018

Academic Editor: Michele Migliore

Copyright © 2018 Jingming Xia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Clinical assistant diagnosis places high requirements on the visual quality of medical images. However, the low frequency subband coefficients obtained by NSCT decomposition are not sparse, which is not conducive to preserving the details of the source images. To solve this problem, a medical image fusion algorithm combining sparse representation and the pulse coupled neural network is proposed. First, the source image is decomposed into low and high frequency subband coefficients by the NSCT transform. Secondly, the K singular value decomposition (K-SVD) method is used to train an overcomplete dictionary from the low frequency subband coefficients, and the orthogonal matching pursuit (OMP) algorithm is used to sparsely code the low frequency subband coefficients and complete the fusion of the low frequency sparse coefficients. Then, the pulse coupled neural network (PCNN) is excited by the spatial frequency of the high frequency subband coefficients, and the fused high frequency subband coefficients are selected according to the number of ignition times. Finally, the fused medical image is reconstructed by the inverse NSCT transform. The experimental results and analysis show that the proposed algorithm exceeds the comparison algorithms by about 34% for gray images and about 10% for color images on the edge information transfer factor Q^{AB/F}, and the fusion results outperform the existing algorithms.

1. Introduction

Medical imaging attracts more and more attention due to the increasing requirements of clinical investigation and disease diagnosis [1]. Imaging of different modalities reflects different information about the lesion. For example, CT images show bone clearly, but soft tissue imaging is blurred; MRI images can provide multiangle and multiplane detail of soft tissue, but skeletal imaging is blurred; PET images can present the metabolic activity of human cells, but the anatomical structure is not clear. Therefore, medical images of different modalities are fused to improve the accuracy and recognition of lesion location, which provides a more effective imaging reference for the clinical diagnosis of modern medicine [2].

Compared with the wavelet and Contourlet transforms, the NSCT transform has the advantages of multiscale, multidirection analysis, anisotropy, and translation invariance [3]. However, the low frequency subband coefficients obtained by NSCT decomposition are not sparse, and fusing them directly is not conducive to preserving the characteristics of the source image. Sparse representation (SR) can extract deeper structural features from the low frequency subband coefficients and express or approximate them as a linear combination of a few atoms [4]. The PCNN has an incomparable advantage over traditional artificial neural networks [5]. The PCNN model has global coupling and pulse synchronization, which can combine the input high frequency subband coefficients with human visual characteristics to obtain richer detail information [6]. Therefore, image fusion methods combining the NSCT transform, sparse representation, and the PCNN model are gaining more and more attention. Chun-hui and Yun-ting [7] proposed a fast image fusion algorithm based on sparse representation and the nonsubsampled Contourlet transform. This algorithm greatly improves the efficiency of image fusion, but it uses only a four-direction sparse representation in the low frequency subband, which cannot fully represent the characteristics and details of the source image. Shabanzade and Ghassemian [8] proposed a multimodal image fusion algorithm based on NSCT and sparse representation. This algorithm uses sparse representation to closely approximate the low frequency subband coefficients. However, a rule based on the larger local energy or variance is used for the high frequency subband coefficients, which cannot effectively solve the problem of image detail smoothing caused by sparse representation. Gong et al. [9] proposed an image fusion method based on an improved NSCT transform and the PCNN model; this method preserves the image structure better, so the fused image is more in line with the human visual nervous system. However, the mutual information of the fused image is relatively low. Mohammed et al. [10] proposed a medical image fusion algorithm based on sparse representation and a dual-input PCNN model. This algorithm has high fusion performance and adapts to the human visual nervous system. However, it needs to train on a medical image database to obtain an overcomplete dictionary; in addition, the dual-input PCNN model brings high complexity and low fusion efficiency.

In order to obtain a fused medical image with high fusion performance and high fusion efficiency that adapts to the human visual nervous system, this paper addresses the above problems by combining sparse representation with the simplified PCNN model and proposes a medical image fusion algorithm based on NSCT and SR-PCNN, hereinafter referred to as the NSCT-SR-PCNN fusion algorithm.

2. Nonsubsampled Contourlet Transform

The NSCT transform consists of two steps: nonsubsampled pyramid (NSP) decomposition and the nonsubsampled directional filter bank (NSDFB). NSP decomposition splits the source image into low and high frequency subbands through the nonsubsampled pyramid filter bank, which ensures the multiscale property of the NSCT. The NSPFB filter decomposition is shown in Figure 1.

Figure 1: NSPFB filter decomposition.

The NSDFB is a two-channel nonsubsampled filter bank; it applies an l-level directional decomposition to each high frequency subband image produced by the NSP, so it can produce 2^l different directional subband images. The NSDFB filter decomposition is shown in Figure 2.

Figure 2: NSDFB filter decomposition.

3. Sparse Representation

Sparse representation means that a natural signal x can be represented or approximated by a linear combination of a small number of atoms in an overcomplete dictionary D; the sparse coefficient of the signal is then obtained by solving

min_α ||α||_0  subject to  ||x − Dα||_2 ≤ ε,  (1)

where D is a prespecified dictionary; α is the sparse coefficient vector; ||α||_0 counts the nonzero entries in α; and ε is the bounded representation error. The NSCT-SR-PCNN algorithm uses the K-SVD method to train the dictionary D and uses the orthogonal matching pursuit (OMP) algorithm to estimate the sparse coefficient α [11].
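As an illustration of the sparse coding step in formula (1), the following minimal Python (NumPy) sketch implements OMP; the random Gaussian dictionary is only a stand-in for a K-SVD-trained one, and the function name and sparsity level k are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: approximate x with at most k atoms of D."""
    residual = x.copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    alpha[support] = coef
    return alpha

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x = D[:, [3, 40]] @ np.array([1.5, -0.7])   # a 2-sparse test signal
alpha = omp(D, x, k=2)
print(np.count_nonzero(alpha), np.linalg.norm(D @ alpha - x))
```

For a signal that is exactly k-sparse in the dictionary, the greedy selection recovers the generating atoms and the residual vanishes, which is why OMP (and its batched variant, Batch-OMP) is a practical estimator for formula (1).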

4. Pulse Coupled Neural Network

The simplified PCNN model is a feedback neural network model proposed by simulating the signal processing mechanism of the cat visual cortex [12]. In the simplified model, the partial simplification of the parameters preserves the generality of the model well. However, the visual system responds very differently to different feature regions of an image; in the PCNN model this difference is mainly reflected in the parameter settings, and changes in the parameters still affect the final fusion results. Therefore, this paper uses the most commonly used discrete mathematical iterative model. The simplified model is shown in Figure 3.

Figure 3: PCNN simplified model.

The mathematical expression of the PCNN simplified model is

F_ij[n] = S_ij,
L_ij[n] = e^(−αL) L_ij[n − 1] + VL Σ_kl W_ijkl Y_kl[n − 1],
U_ij[n] = F_ij[n] (1 + β L_ij[n]),
θ_ij[n] = e^(−αθ) θ_ij[n − 1] + Vθ Y_ij[n],
Y_ij[n] = 1 if U_ij[n] > θ_ij[n], and Y_ij[n] = 0 otherwise,  (2)

where n is the number of iterations; S_ij is the external input; Y_ij is the output of the neuron; U_ij is the internal activity of the neuron; F_ij is the feedback input excitation; L_ij is the linking input of the neuron; W_ijkl is the weight coefficient of the connection between neurons; β is the linking strength coefficient; θ_ij is the output of the variable threshold function; VL and αL are, respectively, the signal amplification factor and attenuation time constant of the neuron's linking input; and Vθ and αθ are, respectively, the signal amplification factor and decay time constant of the variable threshold function.
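The iterative model above can be sketched in Python as follows; the parameter values, the 3 x 3 linking kernel, and the random stimulus are illustrative assumptions, not the paper's exact settings (which follow [25]).

```python
import numpy as np

def pcnn_fire_counts(S, n_iter=200, beta=0.2, alpha_L=1.0, alpha_T=0.2,
                     V_L=1.0, V_T=20.0):
    """Run the simplified PCNN on a normalized stimulus S and return the
    per-neuron firing counts (ignition times). Parameters are illustrative."""
    h, w = S.shape
    L = np.zeros((h, w))          # linking input
    Y = np.zeros((h, w))          # pulse output
    T = np.ones((h, w))           # variable threshold
    fire = np.zeros((h, w))       # accumulated ignition counts
    # 3x3 linking weights (inverse-distance style, center excluded)
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    for _ in range(n_iter):
        # linking input: leaky decay plus weighted pulses of the neighbors
        pad = np.pad(Y, 1)
        neigh = sum(W[a, b] * pad[a:a + h, b:b + w]
                    for a in range(3) for b in range(3))
        L = np.exp(-alpha_L) * L + V_L * neigh
        U = S * (1.0 + beta * L)            # internal activity (modulation)
        Y = (U > T).astype(float)           # fire when activity beats threshold
        T = np.exp(-alpha_T) * T + V_T * Y  # threshold decays, jumps on firing
        fire += Y
    return fire

rng = np.random.default_rng(1)
S = rng.random((8, 8))            # stand-in for a normalized feature map
counts = pcnn_fire_counts(S)
print(counts.shape)
```

Stronger stimuli overcome the decaying threshold earlier and thus accumulate more ignitions, which is exactly the property the fusion rules in Section 5.2 exploit.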

5. Medical Image Fusion Algorithm Based on NSCT-SR-PCNN

The NSCT-SR-PCNN algorithm first uses the NSCT transform to decompose the registered source images into their low frequency and high frequency subbands. Secondly, the low frequency subbands are fused with the sparse representation based method, and the high frequency subbands are fused with the method based on the simplified PCNN model. Finally, the inverse NSCT transform reconstructs the fused subband coefficients into the fused medical image. The specific implementation of the NSCT-SR-PCNN medical image fusion algorithm is shown in Figure 4.

Figure 4: NSCT-SR-PCNN medical image fusion algorithm flow.
5.1. The Rules of Low Frequency Subband Coefficient Fusion

Low frequency subband coefficient fusion is achieved with sparse representation. First, blocks taken from the images to be fused form a training sample set; secondly, the K-SVD algorithm is used to train an overcomplete dictionary; then the Batch-OMP [13] optimization algorithm is used to estimate the sparse coefficients; finally, the sparse coefficients are adaptively fused according to the image features. The specific steps are as follows.

Step 1. Use the NSCT transform to decompose, respectively, the registered source images A and B of size M × N to obtain their low frequency and high frequency subband coefficients.

Step 2. Segment the low frequency subband coefficients LA and LB with a sliding window of size n × n and a step of s pixels to obtain the image subblocks; rearrange the subblocks into column vectors to form the sample training matrices VA and VB.

Step 3. Average the sample training matrices VA and VB to obtain the mean matrices EA and EB; removing the means from VA and VB yields the zero-mean sample matrices used for sparse representation.

Step 4. Use the K-SVD algorithm to iterate on the joint sample matrix to obtain the overcomplete dictionary D of the low frequency subband coefficients.

Step 5. Use the Batch-OMP optimization algorithm to estimate the sparse coefficients of the zero-mean sample matrices and obtain the sparse coefficient matrices αA and αB. According to the l1-norm activity level, the ith columns of the sparse coefficient matrices are fused by keeping the more active column: αF(i) = αA(i) if ||αA(i)||_1 ≥ ||αB(i)||_1, and αF(i) = αB(i) otherwise.

Step 6. Choose the fused mean matrix EF column by column, following the same selection as for the corresponding sparse coefficient columns.

Step 7. Multiply the overcomplete dictionary D by the fused sparse coefficient matrix αF and then add the fused mean matrix EF; the fused sample training matrix is given by VF = D·αF + EF.

Step 8. Convert the columns of the fused sample training matrix VF back into data subblocks, and reconstruct the subblocks to obtain the fused coefficients of the low frequency subbands.
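Steps 2 and 8 rest on rearranging sliding-window blocks into columns and back. A minimal sketch of the block-to-column construction, with an assumed window size and step, might look like this:

```python
import numpy as np

def im2col(img, n, step):
    """Slide an n x n window over img with the given step and stack each
    block as a column (the sample-matrix construction of Step 2)."""
    h, w = img.shape
    cols = []
    for y in range(0, h - n + 1, step):
        for x in range(0, w - n + 1, step):
            cols.append(img[y:y + n, x:x + n].reshape(-1))
    return np.stack(cols, axis=1)  # one column per block

img = np.arange(16.0).reshape(4, 4)
V = im2col(img, n=2, step=2)
print(V.shape)  # (4, 4): four 2x2 blocks, each flattened to a 4-vector
```

Step 8 is the inverse mapping: each fused column is reshaped back to an n x n block and written to its original position, averaging where overlapping windows contribute to the same pixel.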

The implementation of low frequency subband coefficients fusion based on sparse representation is shown in Figure 5.

Figure 5: Low frequency subband coefficient SR fusion process.
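The column-wise selection of Step 5 can be sketched as follows, assuming (as is common in SR-based fusion) that the activity level of each block is measured by the l1 norm of its sparse coefficient column; the toy matrices are purely illustrative.

```python
import numpy as np

def fuse_sparse_columns(alpha_A, alpha_B):
    """Step 5 rule: for each image block, keep the sparse coefficient
    vector with the larger l1 norm (the higher-activity block)."""
    l1_A = np.abs(alpha_A).sum(axis=0)
    l1_B = np.abs(alpha_B).sum(axis=0)
    choose_A = l1_A >= l1_B          # ties go to source A here
    return np.where(choose_A, alpha_A, alpha_B)

# toy coefficient matrices: 4 atoms x 3 blocks
alpha_A = np.array([[1.0, 0.0, 0.2],
                    [0.0, 0.1, 0.0],
                    [0.0, 0.0, 0.0],
                    [0.5, 0.0, 0.1]])
alpha_B = np.array([[0.0, 0.4, 0.0],
                    [0.2, 0.3, 0.0],
                    [0.0, 0.0, 0.9],
                    [0.0, 0.0, 0.0]])
fused = fuse_sparse_columns(alpha_A, alpha_B)
```

Because each column represents one image block, selecting whole columns preserves the learned block structure instead of mixing atoms from the two sources within a single block.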
5.2. The Rules of High Frequency Subband Coefficient Fusion

According to the characteristics of the human visual system, the spatial frequency (SF) reflects the local area characteristics and details of an image. The high frequency subband coefficient fusion selects the SF as the neuron feedback input to stimulate the simplified PCNN model. The neuron feedback input is expressed by

F(i, j) = SF(i, j).  (7)

Among them, the SF is computed over a window of size M × N; from formula (8) we can see

SF = sqrt(RF^2 + CF^2),
RF = sqrt((1/(M·N)) Σ_x Σ_y [f(x, y) − f(x, y − 1)]^2),
CF = sqrt((1/(M·N)) Σ_x Σ_y [f(x, y) − f(x − 1, y)]^2),  (8)

where RF and CF are the row and column frequencies of the window.

In the PCNN model, the value of the linking strength β determines the strength of the coupling relationship between neurons. The high frequency subband coefficient fusion selects the energy of Laplacian (EOL), the visibility (VI), and the standard deviation (SD), which measure the neighborhood characteristic information, respectively, as the linking strength values of the corresponding PCNN neurons. EOL, VI, and SD are expressed by

EOL = Σ_x Σ_y (f_xx + f_yy)^2,
VI = Σ_x Σ_y (1/μ)^α · |f(x, y) − μ| / μ,
SD = sqrt((1/N) Σ_x Σ_y (f(x, y) − μ)^2),  (9)

where (x, y) is a pixel point of the image; f(x, y) is the pixel value; the sums run over a window of size M × N; μ is the average gray level in the window; N is the number of pixels in the window; and α is a constant.
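A minimal sketch of the spatial frequency computation of formula (8), up to the exact normalization constant, might read:

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency of an image block: SF = sqrt(RF^2 + CF^2), where
    RF/CF are the mean-square energies of row/column first differences."""
    b = block.astype(float)
    rf2 = np.mean(np.diff(b, axis=1) ** 2)  # row frequency: horizontal diffs
    cf2 = np.mean(np.diff(b, axis=0) ** 2)  # column frequency: vertical diffs
    return np.sqrt(rf2 + cf2)

flat = np.full((8, 8), 10.0)             # constant block: no detail
stripes = np.tile([0.0, 10.0], (8, 4))   # alternating columns: strong detail
print(spatial_frequency(flat), spatial_frequency(stripes))
```

A flat block scores zero while a high-contrast striped block scores high, which is why SF is a good feedback stimulus: detail-rich neighborhoods drive their PCNN neurons to fire earlier and more often.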

For fusion based on the simplified PCNN model, the SF is used as the neuron feedback input to excite each neuron, and EOL, VI, and SD are selected as the linking strength values of the corresponding neurons; the corresponding ignition maps are then obtained from the PCNN firing, and new ignition maps for the source images are constructed by a weighting function. Finally, the fusion coefficients are selected according to the number of ignitions. The specific implementation steps are as follows.

Step 9. According to formula (7), calculate the neighborhood spatial frequencies SFA and SFB of the high frequency subband coefficients HA and HB, then normalize them and denote the results NSFA and NSFB, respectively; NSFA and NSFB are used as the neuron feedback inputs to excite the simplified PCNN model.

Step 10. According to formula (9), calculate the EOL, VI, and SD of the high frequency subband coefficients HA and HB (recorded as EOLA, VIA, SDA, EOLB, VIB, and SDB) and take them, respectively, as the linking strength values of the corresponding neurons.

Step 11 (initialization). Set L[0] = U[0] = θ[0] = 0; at this time the neurons are in the unfired state, that is, Y[0] = 0, so the number of pulses generated is T[0] = 0.

Step 12. According to formula (2), calculate L[n], U[n], θ[n], and Y[n] at each iteration.

Step 13. The iteration of the simplified PCNN model outputs an ignition map for each linking strength of each source, namely TEOLA, TVIA, TSDA, TEOLB, TVIB, and TSDB; a weighting function combines them into the new ignition maps TA and TB corresponding to the high frequency subband coefficients HA and HB.

Step 14. Compare the ignition counts of the new ignition maps at each pixel; the high frequency subband fusion coefficient is selected from the source image whose ignition count at that pixel is larger.
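The per-pixel selection of Step 14 can be sketched as follows; breaking ties in favor of source A is an illustrative choice, and the small arrays stand in for the subband coefficients and ignition maps.

```python
import numpy as np

def fuse_high_freq(HA, HB, fire_A, fire_B):
    """Step 14 rule: at each pixel take the coefficient of the source
    whose PCNN ignition map fired more often (ties go to source A here)."""
    return np.where(fire_A >= fire_B, HA, HB)

HA = np.array([[1.0, -2.0], [0.5, 3.0]])       # subband coefficients, source A
HB = np.array([[4.0, 0.1], [0.2, -1.0]])       # subband coefficients, source B
fire_A = np.array([[5, 2], [7, 1]])            # ignition counts, source A
fire_B = np.array([[3, 6], [4, 9]])            # ignition counts, source B
fused = fuse_high_freq(HA, HB, fire_A, fire_B)
```

Since more ignitions correspond to stronger local detail under the SF-driven feedback, this rule keeps the sharper of the two coefficients at every position.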

The adaptive fusion implementation process based on PCNN simplified model is shown in Figure 6.

Figure 6: High frequency subband coefficient PCNN fusion process.

6. The Results and Analysis of Experiments

In order to verify the effectiveness of the proposed algorithm, five comparison algorithms are selected for gray and color medical image fusion experiments. The medical images of each group are obtained from http://www.med.harvard.edu/AANLIB/home.html. Objective quality evaluation uses 7 indexes: information entropy (IE), spatial frequency (SF), average gradient (AG) [14], clarity (MC), mutual information (MI), standard deviation (SD) [15], and the edge information delivery factor Q^{AB/F} (a high weight evaluation index) [16–19]. The visual information fidelity (VIFF) and the structural similarity model (SSIM) are used to evaluate the visual effect for human eyes. Comparison algorithm 1 is the medical image fusion based on the NSCT transform (NSCT fusion algorithm) proposed in [20]. Comparison algorithm 2 is the multifocus image fusion based on blocked sparse representation (SR fusion algorithm) proposed in [21]. Comparison algorithm 3 is the image fusion using a pulse coupled neural network (PCNN fusion algorithm) proposed in [22]. Comparison algorithm 4 is the multifocus image fusion based on NSCT and sparse representation (NSCT-SR fusion algorithm) proposed in [11]. Comparison algorithm 5 is the improved medical image fusion algorithm based on NSCT and adaptive PCNN (NSCT-PCNN fusion algorithm) proposed in [23]. NSCT transform settings: 4 decomposition levels, the "pyrexc" filter for scale decomposition, and the "vk" filter for directional decomposition [24]. SR settings: the sliding window step size is 1, the dictionary is trained for 30 iterations, and the sparse error is 0.01. The PCNN parameter settings follow [25].
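As an example of one of the simpler objective indexes, a sketch of the information entropy (IE) computed from an 8-bit gray-level histogram might read as follows; the test images are synthetic stand-ins, not the experiment's data.

```python
import numpy as np

def information_entropy(img):
    """Shannon entropy (in bits) of an 8-bit image's gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 log 0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
uniform = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # rich gray levels
constant = np.zeros((64, 64), dtype=np.uint8)             # one gray level
print(information_entropy(uniform), information_entropy(constant))
```

A constant image carries no information (IE = 0), while a gray-level-rich image approaches the 8-bit maximum of 8 bits; higher IE in a fused image indicates that more information from the sources has been retained.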

6.1. Gray Image Fusion Experiment

The gray image fusion experiment selects four groups of brain images under different states as the images to be fused. The fusion results of the various algorithms are shown in Figures 7–10, and the objective quality evaluation indexes of the various algorithms are shown in Tables 1–4.

Table 1: Quality evaluation of CT/MRI medical image fusion.
Table 2: Quality evaluation of MR-PD/MR-T1 medical image fusion.
Table 3: Quality evaluation of MR-PD/MR-T2 medical image fusion.
Table 4: Quality evaluation of MR-T1/MR-T2 medical image fusion.
Figure 7: CT/MRI medical image fusion results.
Figure 8: MR-PD/MR-T1 medical image fusion results.
Figure 9: MR-PD/MR-T2 medical image fusion results.
Figure 10: MR-T1/MR-T2 medical image fusion results.

Whether judged by the human visual effect in Figures 7–10 or by the evaluation indexes in Tables 1–4, the NSCT-SR-PCNN algorithm fuses better and has better fusion performance than the five comparison algorithms. The reasons are as follows. For the NSCT algorithm, the low frequency subband coefficients of the NSCT decomposition are not sparse, and fusing the low frequency coefficients directly is not conducive to retaining the source image features. For the SR and PCNN algorithms, image fusion is implemented in the spatial domain, and spatial domain fusion fails to express details, so the fused images suffer from low contrast, blurred details, block artifacts, and other problems. The NSCT-SR algorithm solves the nonsparseness of the low frequency subband coefficients, but its high frequency subbands are fused only by a direction-feature rule, which cannot completely present the detail information of the image. The NSCT-PCNN algorithm adapts to the human visual system, but its low frequency subband coefficients are not sparse. The NSCT-SR-PCNN algorithm not only solves the detail loss of the wavelet transform and the nonsparseness of the low frequency subband coefficients of the NSCT, but also improves the comprehensive performance of the fusion results by using the spatial frequency of the high frequency subband coefficients as the feedback input and by using EOL, VI, and SD as the linking strengths of the corresponding neurons. In the comprehensive analysis of the evaluation indexes, the NSCT-SR-PCNN algorithm obtains better performance for CT/MRI, MR-PD/MR-T1, and MR-T1/MR-T2 medical image fusion; according to the edge information delivery factor Q^{AB/F}, it achieves better performance for CT/MRI, MR-PD/MR-T1, and MR-PD/MR-T2 medical image fusion.

6.2. Color Image Fusion Experiment

The color image fusion experiment selects three groups of brain images of different modalities as the images to be fused. The fusion results of the various algorithms are shown in Figures 11–13, and the objective quality evaluation indexes of the various algorithms are shown in Tables 5–7.

Table 5: Quality evaluation of MR-PD/PET medical image fusion.
Table 6: Quality evaluation of MR-T1/PET medical image fusion.
Table 7: Quality evaluation of MR-T2/PET medical image fusion.
Figure 11: MR-PD/PET medical image fusion results.
Figure 12: MR-T1/PET medical image fusion results.
Figure 13: MR-T2/PET medical image fusion results.

Whether compared with the five comparison algorithms in terms of the human visual effect in Figures 11–13 or in the comprehensive analysis of the evaluation indexes and the edge information delivery factor Q^{AB/F}, the NSCT-SR-PCNN algorithm provides better performance for MR-PD/PET, MR-T1/PET, and MR-T2/PET medical image fusion. The experimental data show that the proposed NSCT-SR-PCNN algorithm gives the fused image high fusion performance in texture clarity, gray scale variation, and contrast ratio, with no color loss or distortion.

7. Conclusion

The NSCT-SR-PCNN algorithm effectively combines the NSCT transform, sparse representation, and the pulse coupled neural network to overcome the shortcoming of the wavelet transform, which cannot reflect holistic characteristics, and solves the problem that the low frequency subband coefficients are not sparse. In addition, the algorithm captures the image texture, the degree of change in edges and details, and other information, which improves the comprehensive performance of the fusion results. The experimental data show that although not all evaluation indexes of the NSCT-SR-PCNN algorithm rank first, they are all within the top three or four; moreover, the comprehensive indexes rank first, the high weight edge information delivery factor Q^{AB/F} is higher than that of the five comparison algorithms, the edge and detail information of the source images is better preserved, and the human visual effect is better. Certainly, the NSCT-SR-PCNN algorithm still needs improvement; for example, obtaining the overcomplete dictionary through online dictionary learning requires further study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was funded by the National Natural Science Foundation (41505017).

References

  1. Y. Fei, G. Wei, and S. Zongxi, "Medical image fusion based on feature extraction and sparse representation," International Journal of Biomedical Imaging, vol. 2017, Article ID 3020461, 11 pages, 2017.
  2. J. Zhen-yi and W. Yuan-jun, "Multi-modality medical image fusion method based on non-subsampled contourlet transform," Chinese Journal of Medical Physics, vol. 33, no. 5, pp. 445–450, 2016.
  3. M. N. Do and M. Vetterli, "Contourlets: a directional multiresolution image representation," in Proceedings of the 2002 International Conference on Image Processing (ICIP 2002), vol. 1, pp. 357–360, Rochester, NY, USA, 2002.
  4. S. Zhao-yu, H. Rong, and O. Ning, "Image fusion based on multi-scale sparse representation," Computer Engineering and Design, vol. 36, no. 1, pp. 232–235, 2015.
  5. R. Eckhorn, H. J. Reitboeck, and M. Arndt, "A neural network for feature linking via synchronous activity," Canadian Journal of Microbiology, vol. 46, no. 8, pp. 759–763, 1989.
  6. H. J. Reitboeck, R. Eckhorn, M. Arndt, and P. Dicke, "A model for feature linking via correlated neural activity," in Synergetics of Cognition, pp. 112–125, Springer, Berlin, Germany, 1990.
  7. Z. Chun-hui and G. Yun-ting, "Fast image fusion algorithm based on sparse representation and non-subsampled contourlet transform," Journal of Electronics and Information Technology, vol. 38, no. 7, pp. 1773–1780, 2016.
  8. F. Shabanzade and H. Ghassemian, "Multimodal image fusion via sparse representation and clustering-based dictionary learning algorithm in nonsubsampled contourlet domain," in Proceedings of the 8th International Symposium on Telecommunications (IST 2016), pp. 472–477, Tehran, Iran, September 2016.
  9. J. Gong, B. Wang, L. Qiao, J. Xu, and Z. Zhang, "Image fusion method based on improved NSCT transform and PCNN model," in Proceedings of the 9th International Symposium on Computational Intelligence and Design (ISCID 2016), pp. 28–31, Hangzhou, China, December 2016.
  10. A. Mohammed, K. L. Nisha, and P. S. Sathidevi, "A novel medical image fusion scheme employing sparse representation and dual PCNN in the NSCT domain," in Proceedings of the 2016 IEEE Region 10 Conference (TENCON 2016), pp. 2147–2151, Singapore, November 2016.
  11. O. Ning, Z. Xue-ying, and Y. Hua, "Multi-focus image fusion based on NSCT and sparse representation," Computer Engineering and Design, vol. 38, pp. 177–182, 2017.
  12. C. M. Gray and W. Singer, "Stimulus specific neuronal oscillations in the cat visual cortex: a cortical functional unit," Society for Neuroscience Abstracts, 1987.
  13. Z. Hai-feng, L. Yu-miao, L. Ming, and C. Si-bao, "Medical image compression based on fast sparse representation," Computer Engineering, vol. 40, no. 4, pp. 233–236, 2014.
  14. W. Yuan-jun, J. Bo-yu, and J. Zhen-yi, "Review of multimodal medical image fusion technology based on wavelet transformation," Chinese Journal of Medical Physics, vol. 30, no. 6, pp. 4530–4536, 2013.
  15. X. Wei-liang, D. Wen-zhan, and L. Jun-feng, "Medical image fusion algorithm based on lifting wavelet transform and PCNN," Journal of Zhejiang Sci-Tech University, vol. 35, no. 6, pp. 891–898, 2016.
  16. C. S. Xydeas and V. Petrović, "Objective image fusion performance measure," Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
  17. G. Qu, D. Zhang, and P. Yan, "Information measure for performance of image fusion," Electronics Letters, vol. 38, no. 7, pp. 313–315, 2002.
  18. G. Piella and H. Heijmans, "A new quality metric for image fusion," in Proceedings of the 10th International Conference on Image Processing, vol. 3, pp. 173–176, IEEE, Barcelona, Spain, September 2003.
  19. Z. Liu, E. Blasch, Z. Xue, J. Zhao, R. Laganière, and W. Wu, "Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 1, pp. 94–109, 2012.
  20. T. Xiu-hua and X. Wang, "Research on NSCT-based medical image fusion," Computer Applications and Software, vol. 30, no. 4, pp. 287–290, 2013.
  21. C. Yao-jia, Z. Yong-ping, and T. Jian-yan, "Multi-focus image fusion based on blocked sparse representation," Video Engineering, vol. 36, no. 13, pp. 48–52, 2012.
  22. C. Hao, Z. Juan, and L. Yan-ying, "Image fusion based on pulse coupled neural network," Optics and Precision Engineering, vol. 18, no. 4, pp. 995–1001, 2010.
  23. C. Jun-qiang and H. Dan-fei, "A medical image fusion improved algorithm based on NSCT and adaptive PCNN," Journal of Changchun University of Science and Technology, vol. 38, no. 3, pp. 152–155, 2015.
  24. S. Li, B. Yang, and J. Hu, "Performance comparison of different multi-resolution transforms for image fusion," Information Fusion, vol. 12, no. 2, pp. 74–84, 2011.
  25. Y. Tian, Y. Li, and F. Ye, "Multimodal medical image fusion based on nonsubsampled contourlet transform using improved PCNN," in Proceedings of the 13th IEEE International Conference on Signal Processing (ICSP 2016), pp. 799–804, Chengdu, China, November 2016.