Journal of Sensors

Volume 2016, Article ID 8359602, 12 pages

http://dx.doi.org/10.1155/2016/8359602

## Multifocus Color Image Fusion Based on NSST and PCNN

Information College, Yunnan University, Kunming 650091, China

Received 7 August 2015; Revised 29 October 2015; Accepted 5 November 2015

Academic Editor: Claudio Lugni

Copyright © 2016 Xin Jin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper proposes an effective multifocus color image fusion algorithm based on the nonsubsampled shearlet transform (NSST) and pulse coupled neural networks (PCNN); the algorithm can be applied in different color spaces. In this paper, we take the HSV color space as an example: the H component is clustered by an adaptive simplified PCNN (S-PCNN) and then fused according to the oscillation frequency graph (OFG) of the S-PCNN; at the same time, the S and V components are decomposed by NSST, and different fusion rules are applied to the resulting subbands. Finally, the inverse HSV transform is performed to obtain the fused RGB color image. The experimental results indicate that the proposed color image fusion algorithm is more efficient than other common color image fusion algorithms.

#### 1. Introduction

Video technology is one of the important technologies for coastal monitoring, and image fusion is the basis of video technology. Color images contain both color and brightness information, so they are more suitable for coastal monitoring than gray images [1]. Moreover, human vision distinguishes color information more readily than gray levels [2]. The purpose of image fusion is to extract the significant and representative information from source images of the same scene, which may come from different types of image sensors or from the same sensor operating in different modes, and to fuse it into a final composite image that describes the scene better than any of the individual source images. Thus, the study of suitable fusion technology for multisensor images is necessary and valuable [3].

A color image is the combination of different brightness values and colors. Because a color image comprises several components and the fused image is the fusion of each color space component, some common algorithms exist, such as averaging, intensity-hue-saturation (IHS), and principal component analysis (PCA) [4, 5], which are easy to implement but whose performance is not good. Recently, image fusion methods based on multiresolution analysis have been widely studied: the first step is an image transform, then the coefficients of the transformed images are recombined, and finally the fused image is obtained by the inverse transform. According to the different ways of decomposition, these algorithms can be divided into pyramid transform, wavelet transform [6], curvelet [7], and contourlet [8] methods. In 2005, Labate et al. proposed a new multidimensional representation algorithm called the shearlet [9]. One advantage of this algorithm is that it can be constructed using generalized multiresolution analysis and efficiently implemented using a classical cascade algorithm, so the shearlet has good performance in both the time domain and the frequency domain [10]. In order to combine the strengths and overcome the defects of the nonsubsampled contourlet transform (NSCT) and the shearlet transform (ST), the authors of [11] proposed the nonsubsampled shearlet transform (NSST), combining the nonsubsampled Laplacian pyramid transform with several different shearing filters. In comparison with current multiresolution geometric analysis (MGA) tools, NSST absorbs some recent developments in the MGA field and shows satisfactory fusion performance, such as better sparse representation ability and much lower computational cost. Besides, NSST also possesses the shift-invariance property that ST lacks. Therefore, further research on image fusion in the NSST domain is promising and competitive [12].
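The decompose-recombine-reconstruct paradigm described above can be illustrated with a minimal, single-level, nonsubsampled (undecimated) sketch. The 3×3 binomial low-pass filter here is only a stand-in for NSST's nonsubsampled pyramid and shearing filters, and the fusion rules (averaging for the low-frequency subband, maximum absolute value for the high-frequency subband) are common placeholder choices, not the rules used later in the paper:

```python
import numpy as np

def lowpass(img):
    """3x3 binomial filter with edge padding: a stand-in for the
    low-pass stage of a nonsubsampled (same-size) decomposition."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    P = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * P[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def fuse_one_level(a, b):
    """Single-level multiresolution fusion of two registered images."""
    la, lb = lowpass(a), lowpass(b)   # low-frequency subbands
    ha, hb = a - la, b - lb           # high-frequency residuals
    lf = 0.5 * (la + lb)              # average rule for low frequencies
    hf = np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # max-abs rule
    return lf + hf                    # "inverse transform": recombine
```

Because the decomposition is nonsubsampled, all subbands keep the source image size, which is what gives the shift-invariance mentioned above.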
In recent years, image fusion methods based on PCNN have received more and more attention from experts and scholars because of PCNN's biological background. Compared with traditional artificial neural networks, PCNN has incomparable advantages, so it has been widely used in image processing and shows extremely good performance [12–15].

In this paper, a new multifocus color image fusion algorithm is proposed based on NSST and PCNN, absorbing the advantages of both. The RGB color image is first converted to the HSV color space; the H component is then input into an adaptive simplified PCNN (S-PCNN) model to obtain its oscillation frequency graph (OFG), and a new fused H component is obtained by comparing the OFGs. The S and V components are decomposed into low-frequency and high-frequency subbands by NSST, and these subbands are fused by different rules to obtain the new fused S and V components. Finally, the inverse HSV transform is performed to obtain the fused RGB color image. The experimental results indicate that the proposed algorithm preserves the color information of the source images more effectively than other common algorithms, and the fused image contains more edges, texture, and detail.
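The overall workflow above can be sketched as follows, using the standard library's `colorsys` for the RGB↔HSV conversions. The `fuse_h` and `fuse_sv` callables are hypothetical placeholders: in the paper the H rule is the S-PCNN/OFG comparison and the S/V rules operate on whole NSST subbands, whereas this sketch applies per-pixel rules for brevity:

```python
import colorsys

def fuse_rgb(img_a, img_b, fuse_h, fuse_sv):
    """Fuse two registered RGB images, given as nested lists of
    (r, g, b) tuples in [0, 1], component-wise in HSV space."""
    fused = []
    for row_a, row_b in zip(img_a, img_b):
        row = []
        for (ra, ga, ba), (rb, gb, bb) in zip(row_a, row_b):
            ha, sa, va = colorsys.rgb_to_hsv(ra, ga, ba)
            hb, sb, vb = colorsys.rgb_to_hsv(rb, gb, bb)
            h = fuse_h(ha, hb)    # stands in for the OFG-based rule
            s = fuse_sv(sa, sb)   # stands in for the NSST-domain rule
            v = fuse_sv(va, vb)
            row.append(colorsys.hsv_to_rgb(h, s, v))
        fused.append(row)
    return fused
```

For example, `fuse_rgb(a, b, max, max)` fuses two images with a trivial per-pixel maximum rule in every component.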

This paper is arranged as follows. Section 2 introduces related theories of NSST and PCNN model. Section 3 explains the proposed algorithm, including framework and workflow. Section 4 presents the experimental results and analysis. Section 5 concludes this paper.

#### 2. Related Theories

##### 2.1. PCNN

The PCNN model has three fundamental parts: the receptive field, the modulation field, and the pulse generator [13, 14]. The receptive field, which consists of the feeding ($F$) and linking ($L$) channels and is described by (1), receives the coupling input $Y_{kl}$ from neighboring neurons and the external stimulus input $S_{ij}$. In the $F$ and $L$ channels, the neuron links with its neighborhood neurons via the synaptic linking weights $M$ and $W$, respectively; the two channels accumulate their inputs and decay exponentially at the same time, with decay exponentials $\alpha_F$ and $\alpha_L$ and channel amplitudes $V_F$ and $V_L$, respectively:

$$F_{ij}(n) = e^{-\alpha_F} F_{ij}(n-1) + V_F \sum_{k,l} M_{ijkl} Y_{kl}(n-1) + S_{ij},$$

$$L_{ij}(n) = e^{-\alpha_L} L_{ij}(n-1) + V_L \sum_{k,l} W_{ijkl} Y_{kl}(n-1). \tag{1}$$

In the modulation field, the linking input $L_{ij}$ is biased by unity and then multiplied by the feeding input $F_{ij}$; $\beta$ is the linking strength, and the total internal activity $U_{ij}$ is the result of the modulation, which is described by

$$U_{ij}(n) = F_{ij}(n)\bigl(1 + \beta L_{ij}(n)\bigr). \tag{2}$$

The pulse generator consists of a threshold adjuster, a comparison organ, and a pulse generator, which is described by (3). Its function is to generate the pulse output $Y_{ij}$; $\theta_{ij}$ is the adjustable threshold, $\alpha_\theta$ is its decay exponential, and $V_\theta$ is the threshold amplitude coefficient. When the internal state is larger than the threshold, that is, when the neuron satisfies $U_{ij}(n) > \theta_{ij}(n)$, a pulse is produced by the neuron, which we call an ignition, as described by (4):

$$\theta_{ij}(n) = e^{-\alpha_\theta} \theta_{ij}(n-1) + V_\theta Y_{ij}(n-1), \tag{3}$$

$$Y_{ij}(n) = \begin{cases} 1, & U_{ij}(n) > \theta_{ij}(n), \\ 0, & \text{otherwise}, \end{cases} \tag{4}$$

where the subscripts $ij$ and $kl$ denote the neuron location in the PCNN and $n$ denotes the current iteration (discrete time step), with $n$ varying from 1 to $N$ ($N$ is the total number of iterations). In particular, "a neuron ignition" means that a PCNN neuron generates a pulse. The total number of ignitions represents the image information carried by the corresponding code sequences after $N$ iterations.

When PCNN is used for image processing, each pixel is connected to a unique neuron. The number of neurons in the network is equal to the number of pixels of the input image; namely, there is a one-to-one correspondence between image pixels and network neurons, and the pixel value is taken as the external stimulus $S_{ij}$ of the neuron's feeding channel. A neuron's output has two states, pulse (status 1) and nonpulse (status 0), so the output states of the neurons compose a binary image. More information about PCNN can be found in [12–15].
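Equations (1)–(4) can be sketched as the following iteration over a normalized image; all parameter values ($\beta$, the decay exponentials, the amplitudes, the 3×3 linking kernel, and the iteration count) are illustrative choices, not the paper's settings:

```python
import numpy as np

def link(Y, W):
    """3x3 weighted sum of neighboring pulses (zero padding);
    implements the coupling terms of equation (1)."""
    P = np.pad(Y, 1)
    out = np.zeros_like(Y, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += W[di, dj] * P[di:di + Y.shape[0], dj:dj + Y.shape[1]]
    return out

def pcnn(S, n_iter=10, beta=0.2, aF=0.1, aL=0.3, aT=0.2,
         VF=0.5, VL=0.2, VT=20.0):
    """Classical PCNN, eqs. (1)-(4); returns the per-neuron ignition
    counts after n_iter steps. S is the stimulus image in [0, 1]."""
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    F = np.zeros_like(S, dtype=float)
    L = np.zeros_like(S, dtype=float)
    Y = np.zeros_like(S, dtype=float)
    theta = np.ones_like(S, dtype=float)
    T = np.zeros_like(S, dtype=float)          # ignition counts
    for _ in range(n_iter):
        F = np.exp(-aF) * F + VF * link(Y, W) + S   # feeding, (1)
        L = np.exp(-aL) * L + VL * link(Y, W)       # linking, (1)
        U = F * (1.0 + beta * L)                    # modulation, (2)
        Y = (U > theta).astype(float)               # pulse output, (4)
        theta = np.exp(-aT) * theta + VT * Y        # threshold, (3)
        T += Y
    return T
```

The large threshold amplitude `VT` keeps a neuron quiet for several steps after it fires, which is what produces the characteristic pulse waves.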

##### 2.2. S-PCNN

The simplified PCNN (S-PCNN) model [15] has the same structure as the original PCNN model, but the input of the feeding channel is related only to the image gray value and has no external coupling or exponential decay characteristics; it has fewer parameters than the original PCNN model, and the input channel of its receptive field is simple and effective. In the S-PCNN model, the variables of a neuron satisfy the following:

$$F_{ij}(n) = S_{ij},$$

$$L_{ij}(n) = e^{-\alpha_L} L_{ij}(n-1) + V_L \sum_{k,l} W_{ijkl} Y_{kl}(n-1),$$

$$U_{ij}(n) = F_{ij}(n)\bigl(1 + \beta L_{ij}(n)\bigr),$$

$$\theta_{ij}(n) = e^{-\alpha_\theta} \theta_{ij}(n-1) + V_\theta Y_{ij}(n-1),$$

$$Y_{ij}(n) = \begin{cases} 1, & U_{ij}(n) > \theta_{ij}(n), \\ 0, & \text{otherwise}. \end{cases} \tag{5}$$
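With the feeding channel held at the stimulus ($F_{ij}(n) = S_{ij}$), the iteration reduces to the following sketch; as before, the parameter values and kernel are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def s_pcnn(S, n_iter=10, beta=0.2, aL=0.3, aT=0.2, VL=0.2, VT=20.0):
    """Simplified PCNN: the feeding channel is just the stimulus S.
    Returns the list of binary pulse maps, one per iteration."""
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    L = np.zeros_like(S, dtype=float)
    Y = np.zeros_like(S, dtype=float)
    theta = np.ones_like(S, dtype=float)
    fires = []
    for _ in range(n_iter):
        P = np.pad(Y, 1)                      # last step's pulses
        lk = np.zeros_like(S, dtype=float)
        for di in range(3):
            for dj in range(3):
                lk += W[di, dj] * P[di:di + S.shape[0], dj:dj + S.shape[1]]
        L = np.exp(-aL) * L + VL * lk         # linking channel
        U = S * (1.0 + beta * L)              # modulation, F == S
        Y = (U > theta).astype(float)         # pulse output
        theta = np.exp(-aT) * theta + VT * Y  # threshold adjuster
        fires.append(Y)
    return fires
```

Dropping the feeding channel's coupling and decay removes three parameters ($\alpha_F$, $V_F$, $M$), which is what makes the adaptive variant easier to tune.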

##### 2.3. The OFG of PCNN

The capture character of PCNN neurons causes neurons of similar brightness to be captured into igniting together with the surrounding neurons, so the network transmits information by automatic coupling. In this paper, we use PCNN to extract image features; PCNN can extract information about the image's texture, edges, and regional distribution and performs well in image processing. In each iteration of PCNN, a binary image is obtained by recording whether each neuron fires. These binary images effectively express features of the image such as texture, edges, and regional distribution; the binary map and the OFG are shown in Figures 1(b) and 1(c). By accumulating the binary images over all iterations, we obtain the oscillation frequency graph (OFG), which is given in (6) and shown in Figures 1(d) and 1(e):

$$T_{ij} = \sum_{n=1}^{N} Y_{ij}(n), \tag{6}$$

where $N$ denotes the number of iterations, $Y_{ij}(n)$ denotes the pulse output of the neuron $(i, j)$, and $n$ is the current iteration.
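The OFG is simply the per-neuron count of ignitions over the $N$ binary maps, and it can then drive a selection rule for the H component. The comparison rule below (keep the pixel whose neuron fired at least as often) is a hedged sketch of how an OFG could be used, not necessarily the paper's exact rule:

```python
import numpy as np

def ofg(binary_maps):
    """Equation (6): per-neuron count of ignitions over all
    iterations; binary_maps is a list of 0/1 arrays Y(n)."""
    return np.sum(binary_maps, axis=0)

def fuse_by_ofg(H1, H2, T1, T2):
    """Keep, per pixel, the H value whose neuron fired more often
    (ties go to the first source)."""
    return np.where(T1 >= T2, H1, H2)
```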