Research Article | Open Access
Yang Chen, Zheng Qin, "PCNN-Based Image Fusion in Compressed Domain", Mathematical Problems in Engineering, vol. 2015, Article ID 536215, 9 pages, 2015. https://doi.org/10.1155/2015/536215
PCNN-Based Image Fusion in Compressed Domain
This paper presents a novel image fusion method for different application scenarios, employing compressive sensing (CS) for sparse image representation and a pulse-coupled neural network (PCNN) as the fusion rule. First, the source images are compressed through the scrambled block Hadamard ensemble (SBHE), chosen for its compression capability and computational simplicity on the sensor side. The local standard variance is then input to motivate the PCNN, and coefficients with large firing times are selected as the fusion coefficients in the compressed domain. The fusion coefficients are smoothed by a sliding window to avoid blocking effects. Experimental results demonstrate that the proposed method outperforms other compressed-domain fusion methods and is effective and adaptive across different image fusion applications.
1. Introduction

Image fusion is the process of combining complementary information from multiple source images into a single fused image that is more informative than any of the source images. It is widely used in civil, military, and medical image processing. For example, fusion of multifocus images [1, 2] can provide a better view for human or machine perception. Fusion of infrared and visible light images [3, 4] offers a strong ability to discover important targets while retaining detailed texture. In addition, fusion of computed tomography (CT) and magnetic resonance imaging (MRI) images [5, 6] can provide detailed information on bone structures and soft tissue for diagnosis.
Many image fusion methods have been proposed; they can be classified into three levels of information representation, namely, pixel level [7, 8], feature level [9], and decision level [10]. Among these categories, pixel level image fusion is the most effective in terms of conveying information from multimodal images. In particular, multiscale transform based methods are the most widely used, such as pyramid [11], gradient [12], wavelet [13–15], and contourlet [16–18] methods.
In recent years, CS [19, 20] has become a preferred tool for image fusion and other image processing tasks due to its compression capability in the sampling procedure on the sensor side. Sampling is performed on the source images to obtain their linear measurements in the compressed domain. Different CS sampling patterns have been addressed in previous CS literature [20–23], such as the discrete cosine transform (DCT), star shape, double-star shape, and star-circle shape patterns, and the scrambled block Hadamard ensemble (SBHE) [24]. In the fusion rules of compressive image fusion, weighted fusion factors are calculated by mathematical combinations of the image channels; the calculation rules are mainly based on averaging, the mean, the variance, PCA [26, 27], and mutual information [28, 29]. The fused image can then be reconstructed from the measurements in the compressed domain by a recovery algorithm such as gradient projection for sparse reconstruction (GPSR) [30], basis pursuit [31], total variation minimization [32], orthogonal matching pursuit [33], or L1-norm minimization [34, 35].
Different from existing compressive image fusion methods, this paper addresses a novel image fusion method using PCNN in the compressed domain. PCNN is a biologically inspired neural network developed by Eckhorn et al. [36], which has been used in image segmentation, image fusion, image enhancement, and pattern recognition. It is characterized by the global coupling and pulse synchronization of neurons, characteristics that benefit image fusion because fusion makes use of local image information. Since humans are sensitive to edges and salient information, the local standard variance, which reflects gradient activity in a local neighbourhood, is used to motivate the PCNN neurons in this paper.
The remainder of the paper is organized as follows. Section 2 provides a brief description of compressive sensing theory. The proposed image fusion method in the compressed domain is described in Section 3. Experimental results and discussions are presented in Section 4, and conclusions are provided in Section 5.
2. Compressive Sensing and Sampling Pattern
Compressive sensing theory [19, 20] enables a sparse or compressible signal to be reconstructed from a small number of nonadaptive linear projections, thus significantly reducing the sampling and computation costs.
2.1. Background on Compressive Sensing
Consider an unknown signal $x \in \mathbb{R}^N$; it can be expressed on an orthogonal basis $\Psi$ as $x = \Psi\theta$. For the coefficient vector $\theta$, if only $K$ elements are nonzero, we say $x$ is $K$-sparse on the basis $\Psi$. After projection with the measurement matrix $\Phi \in \mathbb{R}^{M \times N}$, there is $y = \Phi x = \Phi\Psi\theta$. Since $M \ll N$, the projection process combines the traditional signal sampling and compression processes. Recovering $x$ directly from $y$ and $\Phi$ is an ill-posed inverse problem, but sparsity theory suggests that it can be transformed into an optimization problem over sparse vectors:
$$\hat{\theta} = \arg\min_{\theta}\ \|\theta\|_{1} \quad \text{subject to} \quad y = \Phi\Psi\theta, \tag{1}$$
where $\|\cdot\|_{1}$ denotes the $\ell_1$ norm. The signal is then restored from the formula $\hat{x} = \Psi\hat{\theta}$.
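To make the recovery step concrete, here is a minimal NumPy sketch (not the paper's code) that measures a sparse signal with a random Gaussian matrix and recovers it with orthogonal matching pursuit, one of the recovery algorithms mentioned in Section 1; the sizes, the identity sparsifying basis, and the seed are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 64, 32, 3                 # signal length, measurements, sparsity

# K-sparse signal (the sparsifying basis is the identity here for simplicity)
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.uniform(1.0, 2.0, K) * rng.choice([-1.0, 1.0], K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
y = Phi @ x                                      # compressive measurements

def omp(Phi, y, K):
    # orthogonal matching pursuit: greedily select the K best-matching atoms
    residual, picked = y.copy(), []
    for _ in range(K):
        picked.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, picked], y, rcond=None)
        residual = y - Phi[:, picked] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[picked] = coef
    return x_hat

x_hat = omp(Phi, y, K)
```

With these sizes, OMP recovers the 3-sparse signal exactly with overwhelming probability; the paper instead adopts GPSR, which solves the $\ell_1$ problem directly.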
Subsequently, the measurement matrix $\Phi$ is required to satisfy the restricted isometry property (RIP) in order to recover the signal accurately:
$$(1-\delta_K)\,\|\theta\|_{2}^{2} \le \|\Phi\Psi\theta\|_{2}^{2} \le (1+\delta_K)\,\|\theta\|_{2}^{2}, \tag{2}$$
where $\delta_K \in (0,1)$ and $\|\cdot\|_{2}$ denotes the $\ell_2$ norm, for every $N$-dimensional vector $\theta$ that is strictly $K$-sparse. That is to say, the matrix $\Phi$ and the basis $\Psi$ must be incoherent.
2.2. SBHE Sampling
In block CS, an image is divided into small blocks of size $B \times B$. The sampling operator $\Phi_B$, of size $M_B \times B^2$, is formed from the partial block Hadamard transform with its columns randomly permuted, which gives the SBHE operator. The sampling operator of the whole image is the block diagonal matrix
$$\Phi = \operatorname{diag}(\Phi_B, \Phi_B, \ldots, \Phi_B), \tag{3}$$
where $\Phi_B$ is obtained from the Hadamard matrix of order $B^2$. Let $x_i$ denote the vectorized signal of the $i$th block. The corresponding measurement output vector is
$$y_i = \Phi_B x_i. \tag{4}$$
Then the measurement output vector of the entire image, with $L$ blocks, is determined by
$$y = \bigl[y_1^{T}, y_2^{T}, \ldots, y_L^{T}\bigr]^{T}. \tag{5}$$
SBHE is adopted as the sampling pattern in this paper, since it has been shown to satisfy five requirements: near optimal performance, universality, fast computation, memory efficiency, and hardware friendliness.
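A small NumPy sketch of SBHE-style blockwise sampling (an illustrative reconstruction, not the authors' implementation): a Hadamard matrix is built by the Sylvester construction, its columns are scrambled by a random permutation, a random subset of its rows is kept, and the resulting block operator is applied block by block. The block size and sampling rate below are assumed values.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of the Hadamard matrix; n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(1)
B = 4               # block side length; each block vectorizes to B*B samples
n = B * B
m = 8               # measurements kept per block (sampling rate m/n = 0.5)

W = hadamard(n)
perm = rng.permutation(n)               # scramble: random column permutation
rows = rng.choice(n, m, replace=False)  # partial: random subset of rows
Phi_B = W[np.ix_(rows, perm)]           # SBHE-style block operator, shape (m, n)

# sample an image block by block, mirroring the block diagonal operator
img = rng.random((8, 8))
blocks = [img[i:i + B, j:j + B].ravel()
          for i in range(0, 8, B) for j in range(0, 8, B)]
y = np.concatenate([Phi_B @ b for b in blocks])  # whole-image measurements
```

Because column permutation preserves the row inner products of the Hadamard matrix, the rows of the block operator remain mutually orthogonal, which is what makes SBHE both fast and well conditioned.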
3. Proposed Image Fusion Method
3.1. Image Fusion Framework in Compressed Domain
In image fusion, pixel level fusion offers effective fused information at the cost of higher computational complexity. Compressed domain approaches, on the other hand, are promising due to their compression capability and computational simplicity on the sensor side. It is therefore worth exploring compressive image fusion; the flowchart of image fusion in the compressed domain is illustrated in Figure 1.
Firstly, the source images are compressed through compressive sensing so as to facilitate transmission from the sensor. In the fusion phase, a fusion rule is used to combine the compressive sensing coefficients. The inverse transform is then applied to the fused coefficients, and the fused image is obtained. In this paper, SBHE is adopted as the sampling pattern and GPSR is used as the reconstruction algorithm.
3.2. PCNN Fusion Rule
The PCNN model consists of three parts: the dendritic tree, the linking modulation, and the pulse generator. The role of the dendritic tree is to receive inputs from two kinds of receptive fields, the linking and the feeding. The linking field receives local stimulus from the output of surrounding neurons, while the feeding field receives both local and external stimulus. We adopt PCNN as the fusion rule; the flowchart of the proposed PCNN-based image fusion in the compressed domain is shown in Figure 2, and the procedure is summarized in Algorithm 1.
Algorithm 1 (PCNN-based image fusion in compressed domain). (1) Decompose the source images by the SBHE sampling pattern.
(2) Measure the local standard variance in a sliding window with (6).
(3) Motivate the PCNN with the local standard variance, generate the neuron pulses with (8), and calculate the firing times with (9).
(4) Calculate the decision map with (10), and smooth the weighted factors with the sliding window with (11) and (12).
(5) Obtain the fusion coefficients by applying the weighted factors to the compressed images.
(6) Reconstruct the fused image from the fusion coefficients by the GPSR algorithm.
We use the local standard variance to motivate the PCNN model. The local standard variance at point $(i,j)$ of the source image is calculated as
$$S(i,j) = \sqrt{\frac{1}{|\Omega(i,j)|}\sum_{(m,n)\in\Omega(i,j)}\bigl(I(m,n)-\bar{I}(i,j)\bigr)^{2}}, \tag{6}$$
$$\bar{I}(i,j) = \frac{1}{|\Omega(i,j)|}\sum_{(m,n)\in\Omega(i,j)} I(m,n), \tag{7}$$
where $I(m,n)$ represents the gray value at point $(m,n)$ of the corresponding block of the image, $\Omega(i,j)$ denotes the neighbourhood of the point $(i,j)$, and $|\Omega(i,j)|$ denotes the number of elements in it; $S(i,j)$ measures the local standard variance in $\Omega(i,j)$. We use the local standard variance to motivate the PCNN, whose model can be expressed as
$$
\begin{aligned}
F_{ij}[n] &= S(i,j),\\
L_{ij}[n] &= e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{(k,l)\in\Lambda(i,j)} Y_{kl}[n-1],\\
U_{ij}[n] &= F_{ij}[n]\bigl(1 + \beta L_{ij}[n]\bigr),\\
Y_{ij}[n] &= \begin{cases}1, & U_{ij}[n] > \theta_{ij}[n],\\ 0, & \text{otherwise},\end{cases}\\
\theta_{ij}[n] &= e^{-\alpha_\theta}\,\theta_{ij}[n-1] + V_\theta\, Y_{ij}[n],
\end{aligned} \tag{8}
$$
where the feeding input $F_{ij}$ equals the local standard variance $S(i,j)$, and the linking input $L_{ij}$ accumulates the pulses of the neurons in the linking range $\Lambda(i,j)$. $V_L$ and $V_\theta$ are the amplitude gains, $\alpha_L$ and $\alpha_\theta$ are the decay constants, and $\beta$ is the linking strength. $U_{ij}$ is the total internal activity, $\theta_{ij}$ is the threshold, and $n$ denotes the iteration index. The neuron generates a pulse, that is, $Y_{ij}[n] = 1$, when $U_{ij}[n]$ is larger than $\theta_{ij}[n]$. Because neighbouring coefficients with similar features produce similar firing times within the given number of iterations, the firing times $T_{ij}[n]$ are used to represent the image information, defined as
$$T_{ij}[n] = T_{ij}[n-1] + Y_{ij}[n]. \tag{9}$$
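The stimulus-and-firing-times computation can be sketched in a few lines of NumPy. This is our illustrative reading of the PCNN model, not the authors' code; the window sizes, decay constants, gains, linking strength, and iteration count are assumed values chosen only to make the example run.

```python
import numpy as np

def local_std(img, r=1):
    # local standard variance in a (2r+1)x(2r+1) edge-padded sliding window
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + 2*r + 1, j:j + 2*r + 1].std()
    return out

def pcnn_firing_times(S, n_iter=20, alpha_L=0.7, alpha_T=0.2,
                      V_L=1.0, V_T=20.0, beta=0.2):
    # S is the stimulus (here, local standard variance); all parameter
    # values are illustrative assumptions, not the paper's settings
    L = np.zeros_like(S)
    Y = np.zeros_like(S)
    theta = np.ones_like(S)        # threshold decays until neurons fire
    T = np.zeros_like(S)           # accumulated firing times
    h, w = S.shape
    for _ in range(n_iter):
        pad = np.pad(Y, 1)         # 3x3 linking range: sum of neighbour pulses
        link = sum(pad[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))
        L = np.exp(-alpha_L) * L + V_L * link
        U = S * (1.0 + beta * L)                    # internal activity
        Y = (U > theta).astype(float)               # pulse generation
        theta = np.exp(-alpha_T) * theta + V_T * Y  # decay; reset on firing
        T += Y
    return T

rng = np.random.default_rng(2)
img = rng.random((16, 16))
T = pcnn_firing_times(local_std(img))
```

Neurons driven by a stronger stimulus fire earlier and more often, so regions with higher local standard variance accumulate larger firing times; a region with zero stimulus never fires at all.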
The decision map decides the weighted factors of the source images in the fused image:
$$D(i,j) = \begin{cases}1, & T_A(i,j) \ge T_B(i,j),\\ 0, & \text{otherwise},\end{cases} \tag{10}$$
where $T_A$ and $T_B$ are the firing times of source images $A$ and $B$. The weighted factors at each block are then smoothed by the sliding window $W(i,j)$, so that coefficients with larger firing times receive larger weighted factors in the fused image:
$$w_A(i,j) = \frac{1}{|W(i,j)|}\sum_{(m,n)\in W(i,j)} D(m,n), \tag{11}$$
$$w_B(i,j) = 1 - w_A(i,j). \tag{12}$$
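The decision-map and smoothing steps can be sketched as follows. The firing-time maps and compressed coefficients here are random stand-ins, and the 3×3 smoothing window is an assumed size; the sketch only illustrates the weighting scheme, not the paper's exact configuration.

```python
import numpy as np

def box_mean(x, r=1):
    # (2r+1)x(2r+1) sliding-window average with edge padding
    p = np.pad(x.astype(float), r, mode='edge')
    h, w = x.shape
    return np.array([[p[i:i + 2*r + 1, j:j + 2*r + 1].mean() for j in range(w)]
                     for i in range(h)])

rng = np.random.default_rng(3)
T_A = rng.integers(0, 10, (8, 8))   # firing-time maps of the two sources (stand-ins)
T_B = rng.integers(0, 10, (8, 8))
y_A = rng.random((8, 8))            # compressed-domain coefficients (stand-ins)
y_B = rng.random((8, 8))

D = (T_A >= T_B).astype(float)   # decision map: 1 where A fires at least as often
w_A = box_mean(D)                # smooth the weights to avoid blocking effects
w_B = 1.0 - w_A
y_F = w_A * y_A + w_B * y_B      # fused coefficients, later fed to CS reconstruction
```

Since the smoothed weights form a pointwise convex combination, every fused coefficient stays between the corresponding coefficients of the two sources, which is what suppresses the blocking artefacts of a hard binary decision map.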
4. Experimental Results
4.1. Experiment Setup
We conduct experiments on six image datasets covering three application scenarios: multifocus image fusion, infrared and visible light image fusion, and medical image fusion. In addition to visual comparison, we present fusion results with three objective metrics: information entropy (IE), mutual information (MI) [29], and $Q^{AB/F}$ [37]. We compare the image fusion methods in the compressed domain with the sampling rate set to 0.5, using SBHE as the sampling pattern, with a fixed block size, and GPSR as the CS reconstruction algorithm. All the source images can be obtained at http://www.imagefusion.org and http://www.med.harvard.edu/aanlib/home.html.
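The first two metrics can be computed directly from image histograms; the following is our own illustrative NumPy implementation (not the authors' evaluation code), assuming integer-valued 8-bit images.

```python
import numpy as np

def entropy(img, bins=256):
    # information entropy (IE) of an integer-valued 8-bit image, in bits
    p = np.bincount(img.ravel(), minlength=bins) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=256):
    # MI between two images, estimated from their joint gray-level histogram
    joint = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                           range=[[0, bins], [0, bins]])[0] / a.size
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / np.outer(pa, pb)[nz])).sum())

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (32, 32))
```

As a sanity check, the mutual information of an image with itself equals its entropy, and the entropy of a constant image is zero.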
4.2. Experimental Results
4.2.1. Multifocus Image Fusion
Due to the limited depth of focus of optical lenses, it is not possible to capture an image in which all relevant objects are in focus. Multifocus image fusion is therefore required to fuse images taken from the same viewpoint under different focal settings, providing a fused image that describes the scene better than any individual image, shows all the details of the sources, and is more conducive to subsequent processing. We implement CS image fusion with six pixel level fusion rules applied to the compressive sensing coefficients and compare these methods on two multifocus datasets. Experimental results are shown in Figure 3, and objective measurements are provided in Table 1, with the best results marked in bold.
Since all of the compared methods adopt the same CS theory as the image decomposition method, it is difficult to evaluate the multisensor fusion results by subjective evaluation alone. Therefore, the fusion results are evaluated by the IE, MI, and $Q^{AB/F}$ metrics, which provide objective measurements even when the visual differences are minor. As shown in Table 1, in the multifocus fusion scenario the proposed method achieves the best result from the perspective of both objective assessment and visual perception.
4.2.2. Infrared and Visible Light Image Fusion
Infrared (IR) and visible images are different kinds of images: the IR image has low definition but a strong ability to reveal important military targets, while the visible image has higher definition and provides more detailed texture information. IR and visible image fusion can therefore help in target detection and recognition. We implement CS image fusion with the six fusion methods applied to the compressive sensing coefficients and conduct the experiments on the "UNcamp" image set (two frames selected randomly). Experimental results and objective measurements are provided in Figure 4 and Table 2.
In the infrared and visible light fusion scenario, as shown in Figure 4, the proposed fusion method has better object detection ability. In addition, Table 2 shows that the proposed method performs better in the IE, MI, and $Q^{AB/F}$ metrics than any of the other five fusion methods, demonstrating that the fused image preserves the edge and salient information of the source images.
4.2.3. Medical Image Fusion Application
Medical image fusion plays an important role in clinical applications. We apply the proposed method to medical image fusion on two image datasets: CT-MRI fusion, and fusion of T1-weighted (MR-T1) and T2-weighted (MR-T2) MR images. Fusion of CT and MRI images preserves bone structure and soft tissue information at the same time, while MR-T1 and MR-T2 fusion provides the details of the anatomical structure of tissues together with information about normal and pathological tissues. Experiments evaluating the fusion methods in the compressed domain are shown in Figure 5, with objective measurements provided in Table 3.
As Figure 5 shows, the fused image in Figure 5(h), obtained by our proposed method, contains both the bone structure from the CT image and the soft tissue information from the MRI image. In addition, the result of the proposed method is smooth, while the mean-, variance-, and PCA-based results suffer from blocking effects. Table 3 demonstrates that the proposed method achieves the best results in terms of IE, MI, and $Q^{AB/F}$.
5. Conclusion

In this paper, we have presented a PCNN-based image fusion framework in the compressed domain. In this framework, the local standard variance is calculated as the input that motivates the PCNN, which in turn guides the calculation of the weighted factors in the fusion rule. Experiments are conducted on six image datasets covering three fusion application scenarios to validate the performance and adaptability of the proposed method. The results demonstrate that the proposed method outperforms other compressed-domain image fusion methods in terms of objective evaluation metrics and is adaptive to the different application scenarios illustrated in this paper.
Exploring image fusion in the compressed domain is worthwhile because compressive sensing offers excellent compression capability and computational simplicity on the sensor side. In the future, we plan to extend the proposed method by integrating more sampling patterns and developing more advanced fusion rules.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This work is supported by the National S&T Major Program (Grant no. 9140A1550212 JW01047) and by the "Twelfth Five" Preliminary Research Project of PLA (no. 402040202).
References

- S. Li and B. Yang, "Multifocus image fusion using region segmentation and spatial frequency," Image and Vision Computing, vol. 26, no. 7, pp. 971–979, 2008.
- W. Huang and Z. Jing, “Multi-focus image fusion using pulse coupled neural network,” Pattern Recognition Letters, vol. 28, no. 9, pp. 1123–1132, 2007.
- X.-H. Yang, H.-Y. Jin, and L.-C. Jiao, “Adaptive image fusion algorithm for infrared and visible light images based on DT-CWT,” Journal of Infrared and Millimeter Waves, vol. 26, no. 6, pp. 419–424, 2007.
- Y. Niu, S. Xu, L. Wu, and W. Hu, “Airborne infrared and visible image fusion for target perception based on target region segmentation and discrete wavelet transform,” Mathematical Problems in Engineering, vol. 2012, Article ID 275138, 10 pages, 2012.
- C. S. Pattichis, M. S. Pattichis, and E. Micheli-Tzanakou, “Medical imaging fusion applications: an overview,” in Proceedings of the 35th Asilomar Conference on Signals, Systems and Computers (ACSSC ’01), pp. 1263–1267, November 2001.
- R. Shen, I. Cheng, and A. Basu, “Cross-scale coefficient selection for volumetric medical image fusion,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 4, pp. 1069–1079, 2013.
- B. Yang and S. T. Li, “Pixel-level image fusion with simultaneous orthogonal matching pursuit,” Information Fusion, vol. 13, no. 1, pp. 10–19, 2012.
- Y. Yang, C. Z. Han, X. Kang, and D. Q. Han, “An overview on pixel-level image fusion in remote sensing,” in Proceedings of the IEEE International Conference on Automation and Logistics (ICAL '07), pp. 2339–2344, Jinan, China, August 2007.
- A. Ross and R. Govindarajan, “Feature level fusion using hand and face biometrics,” in Biometric Technology for Human Identification II, Proceedings of SPIE, pp. 196–204, 2005.
- B. Jeon and D. A. Landgrebe, "Decision fusion approach for multitemporal classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, pp. 1227–1233, 1999.
- W. C. Wang and F. L. Chang, “A multi-focus image fusion method based on Laplacian pyramid,” Journal of Computers, vol. 6, no. 12, pp. 2559–2566, 2011.
- V. S. Petrović and C. S. Xydeas, “Gradient-based multiresolution image fusion,” IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228–237, 2004.
- K. Amolins, Y. Zhang, and P. Dare, “Wavelet based image fusion techniques—an introduction, review and comparison,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 62, no. 4, pp. 249–263, 2007.
- G. Pajares and J. M. de la Cruz, “A wavelet-based image fusion tutorial,” Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
- W. Shi, C. Zhu, Y. Tian, and J. Nichol, “Wavelet-based image fusion and quality assessment,” International Journal of Applied Earth Observation and Geoinformation, vol. 6, no. 3-4, pp. 241–251, 2005.
- S. Yang, M. Wang, L. Jiao, R. Wu, and Z. Wang, “Image fusion based on a new contourlet packet,” Information Fusion, vol. 11, no. 2, pp. 78–84, 2010.
- Q. Zhang and B.-L. Guo, “Multifocus image fusion using the nonsubsampled contourlet transform,” Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.
- S. Li and B. Yang, “Hybrid multiresolution method for multisensor multimodal image fusion,” IEEE Sensors Journal, vol. 10, no. 9, pp. 1519–1526, 2010.
- D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
- E. J. Candès and M. B. Wakin, “An introduction to compressive sampling: a sensing/sampling paradigm that goes against the common knowledge in data acquisition,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
- J. Han, O. Loffeld, K. Hartmann, and R. Wang, “Multi image fusion based on compressive sensing,” in Proceedings of IEEE International Conference on Image Processing (ICIP '10), pp. 1463–1469, 2010.
- T. Wan, N. Canagarajah, and A. Achim, “Compressive image fusion,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '08), pp. 1308–1311, October 2008.
- E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
- L. Gan, T. T. Do, and T. D. Tran, “Fast compressive imaging using scrambled block hadamard ensemble,” in Proceedings of the 16th European Signal Processing Conference (EUSIPCO '08), August 2008.
- H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
- I. T. Jolliffe, Principal Component Analysis, Springer Series in Statistics, Springer, 1986.
- U. Patil and U. Mudengudi, “Image fusion using hierarchical PCA,” in Proceedings of the International Conference on Image Information Processing (ICIIP '11), pp. 1–6, Himachal Pradesh, India, November 2011.
- X. Y. Luo, J. Zhang, J. Y. Yang, and Q. H. Dai, “Image fusion in compressed sensing,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), pp. 2205–2208, November 2009.
- G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electronics Letters, vol. 38, no. 7, pp. 313–315, 2002.
- M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.
- S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.
- Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
- J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
- E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
- E. Candès and J. Romberg, "ℓ1-magic: Recovery of Sparse Signals via Convex Programming," 2005.
- R. Eckhorn, H. J. Reitboeck, M. Arndt, and P. W. Dicke, “Feature linking via synchronization among distributed assemblies: simulation of results from cat cortex,” Neural Computation, vol. 2, pp. 293–307, 1990.
- C. S. Xydeas and V. Petrović, “Objective image fusion performance measure,” Electronics Letters, vol. 36, no. 4, pp. 308–309, 2000.
Copyright © 2015 Yang Chen and Zheng Qin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.