Abstract

Poissonian image deconvolution is a key issue in various applications, such as astronomical imaging, medical imaging, and electronic microscope imaging. Much of the literature on this subject concerns analysis-based methods, which impose constraints on various forward measurements of the image. Synthesis-based methods form another well-known class; they seek a sparse reconstruction of the image. In this paper, we propose an approach that combines the analysis and synthesis viewpoints. The Poissonian image deconvolution problem is addressed by minimizing an energy functional composed of the data fidelity term, a sparse representation prior over a learned dictionary, and a framelet-based analysis prior as the regularization term. The minimization problem can be solved efficiently by the split Bregman technique. Experiments demonstrate that our approach achieves better results than many state-of-the-art methods, in terms of both restoration accuracy and visual perception.

1. Introduction

Poissonian image deconvolution appears in various applications such as astronomical imaging [1], medical imaging [2], and electronic microscope imaging [3]. It aims to reconstruct a high-quality image f from the degraded image g. Mathematically, the process of Poissonian image deconvolution can be generally modeled by

g = P(h * f) = P(Hf),   (1)

where h denotes the point spread function (PSF), H is the corresponding blurring (convolution) matrix, f in R^N denotes the unknown image to be estimated, P(.) denotes the Poisson noise process, g in R^N denotes the blurred noisy image, and N is the length of the image vector, which is stacked by columns.

It is known that Poissonian image deconvolution is a typical ill-posed inverse problem. In general, the solution of (1) is not unique. Prior knowledge about the image, including analysis-based and synthesis-based priors, can be used to address this problem. For an overview of the two classes of priors, we refer to [4]. Analysis-based priors are frequently used as the regularization term in an energy functional of the form

min_f  E_data(f; g) + λ Φ(f),   (2)

where E_data(f; g) is the data fidelity term (for Poisson noise, typically the generalized Kullback-Leibler divergence ⟨Hf, 1⟩ - ⟨g, log(Hf)⟩) and Φ(f) is the regularization constraint term. Two main classes of analysis-based methods have been proposed to solve problem (2): total variation (TV) based methods [5, 6] and wavelet frame based methods. TV-based methods perform well on blurred images because they allow discontinuous solutions and preserve edges. Many authors have combined the TV regularization term with various algorithms to solve problem (2). For example, Sawatzky et al. combined the expectation-maximization algorithm with TV regularization [7] in positron emission tomography, and Setzer et al. used the TV regularization term with split Bregman techniques [8]. TV-based methods remove noise in flat regions effectively; however, they lose details in texture regions. Furthermore, an inherent problem of TV-based methods is that they favor piecewise constant solutions, which causes staircase effects in flat regions.

To overcome the disadvantages of TV-based methods, several authors have turned to frame-based methods, which can also be regarded as analysis regularization methods. It is assumed that the majority of natural images have sparse approximations under some redundant tight frame systems, including the wavelet transform, the Gabor transform, curvelets, and framelets [9]. Chen and Cheng considered a hybrid variational model [10] that describes the cartoon part by TV and the detail part by a sparse representation over a wavelet basis. Figueiredo and Bioucas-Dias proposed an alternating direction method [11] using a frame-based analysis regularizer. Fang et al. presented blind Poissonian image deconvolution with framelet regularization [12]. A framelet-based analysis prior and a combination of framelet and total variation analysis priors were also considered in [13].

Dictionary learning [14-20] based sparse representation has been extensively studied, and significant progress has been made in modern dictionary learning for synthesis-based sparse representation of images. Natural images contain repeated patterns such as texture regions, flat regions, and smooth regions. Hence, they can be described as linear combinations of a few atoms from a dictionary. Dictionary learning for the synthesis model in image denoising was proposed by Elad and Aharon, who presented a method based on sparse and redundant representations over learned dictionaries, called K-SVD [16]. This method was designed to remove additive noise. Huang et al. then attempted to use the K-SVD method [17] for multiplicative noise removal. It was shown that the K-SVD method can perform well simply by transforming the multiplicative noise into additive noise in the logarithmic domain. Similarly, Poisson noise can be transformed into approximately additive Gaussian noise with unit variance by the Anscombe root transformation (ART) [18] and then removed by the K-SVD method [15]. In practice, however, this variance stabilization has often been questioned because of its poor numerical results at low intensities. Therefore, denoising algorithms should be specifically designed for Poisson noise. Ma et al. proposed dictionary learning for Poisson noise removal [21].
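For concreteness, the following Python sketch shows the Anscombe root transformation mentioned above and a simple algebraic inverse; the function names are illustrative, and the small Monte Carlo check only illustrates the approximate variance stabilization at moderate intensities.

```python
import numpy as np

def anscombe(x):
    """Anscombe root transform: maps Poisson counts to data with variance close to 1."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse; it is biased at low counts, which is one reason the
    variance-stabilization route is criticized for strong Poisson noise."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Quick check of the stabilization at a moderate intensity.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=20.0, size=100_000)
print(np.var(anscombe(counts)))  # close to 1
```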

There is still work to do on dictionary learning for Poisson noise removal, because the TV regularizer and dictionary learning (see [21]) do not match well. The TV regularizer can reduce the artifacts in smooth regions caused by the patch-based prior of dictionary learning; however, it also smooths the fine details in texture regions. Motivated by analysis- and synthesis-based methods, we combine the two for Poissonian image deconvolution. We choose the B-spline framelet as the analysis regularization term because the framelet transform provides multiresolution analysis: different framelet masks correspond to difference operators of different orders, which can adaptively capture multiscale edge structures in images. In addition, the synthesis-based component is integrated into the proposed method through a sparse representation prior over a learned dictionary. Finally, the split Bregman method is used to solve the resulting optimization problem. The Bregman variables make the proposed method stable, and the regularization parameters do not need to be updated during the Bregman iterations.

The rest of this paper is organized as follows. The K-SVD algorithm and the B-spline framelet are briefly reviewed in Section 2. The Poissonian image deconvolution model is proposed in Section 3. Numerical experiments are given to demonstrate the quality of the restored images in Section 4. Conclusions are given in Section 5.

2. Review of Prior Works

In this section, we briefly review the K-SVD algorithm and the B-spline framelet. The K-SVD algorithm is a classical sparse representation method, and it will be used in our proposed model. Framelets have attracted much attention owing to their multiscale characteristics.

2.1. K-SVD Algorithm

We consider image patches of size sqrt(n) x sqrt(n), vectorized as x in R^n, and define a dictionary D in R^(n x K), where K > n. Each image patch x can be represented sparsely over the dictionary; that is, the solution α of

min_α ||α||_0   subject to   Dα ≈ x

is very sparse. The notation ||α||_0 stands for the count of the nonzero entries in α. Considering an image patch y = x + w contaminated by additive zero-mean white Gaussian noise w with standard deviation σ, the MAP estimator for denoising this image patch is built by solving

min_α ||α||_0   subject to   ||Dα - y||_2^2 ≤ T,   (4)

where T is dictated by σ. If each image patch of an image f can be described by (4), the MAP estimator of the whole image will be described by

min_{f, α_ij}  λ ||f - g||_2^2 + Σ_ij μ_ij ||α_ij||_0 + Σ_ij ||D α_ij - R_ij f||_2^2,

where g denotes the degraded image, α_ij is the sparse code of the image patch at location (i, j), and R_ij is an n x N matrix that extracts the (i, j)-th block from the image f. Instead of addressing both unknowns α_ij and f together, a block-coordinate minimization algorithm is used:
(i) initialization: set f = g and let D be the overcomplete DCT dictionary;
(ii) repeat a fixed number of times:
  (1) sparse coding stage: use any pursuit algorithm to seek the optimal α_ij for each patch by approximating the solution of

  min_{α_ij}  μ_ij ||α_ij||_0 + ||D α_ij - R_ij f||_2^2;   (6)

  (2) dictionary update stage: for each column of D, collect the patches that use it, compute the representation error matrix, and update the dictionary column (and the corresponding coefficients) by an SVD decomposition of the representation error;
(iii) given all α_ij, update f: the remaining simple quadratic term has a closed-form solution of the form

f = (λ I + Σ_ij R_ij^T R_ij)^(-1) (λ g + Σ_ij R_ij^T D α_ij).
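The block-coordinate scheme above can be summarized by the following minimal numpy sketch of the two K-SVD stages (OMP sparse coding and SVD-based atom updates). It is a simplified illustration with hypothetical function names, not the exact implementation of [16]; in particular, a fixed sparsity level k replaces the error-threshold stopping rule used in practice.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most k atoms of D."""
    residual, support = y.copy(), []
    alpha = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha[support] = coef
    return alpha

def ksvd(Y, D, k, n_iter=10):
    """Minimal K-SVD sketch. Y holds vectorized patches as columns, D is the initial
    (e.g. overcomplete DCT) dictionary, and k is the target sparsity per patch."""
    for _ in range(n_iter):
        # Sparse coding stage: code every patch over the current dictionary.
        A = np.column_stack([omp(D, Y[:, i], k) for i in range(Y.shape[1])])
        # Dictionary update stage: refresh each atom (and its coefficients) via a rank-1 SVD.
        for j in range(D.shape[1]):
            users = np.nonzero(A[j, :])[0]          # patches that currently use atom j
            if users.size == 0:
                continue
            E = Y[:, users] - D @ A[:, users] + np.outer(D[:, j], A[j, users])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]
            A[j, users] = S[0] * Vt[0, :]
    return D, A
```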

2.2. Analysis Operator: B-Spline Framelet

In this subsection, we briefly introduce the theory of frames and the B-spline framelet [22]. A countable subset X of L_2(R) is called a frame of L_2(R) if it satisfies

A ||x||_2^2 ≤ Σ_{φ in X} |⟨x, φ⟩|^2 ≤ B ||x||_2^2   for all x in L_2(R),

where ⟨., .⟩ is the inner product of L_2(R), x is a signal, and A and B are positive constants. If A = B = 1, we call X a tight frame. In that case, any x can be perfectly reconstructed by

x = Σ_{φ in X} ⟨x, φ⟩ φ.

Furthermore, a wavelet frame is constructed from the wavelet transform by sampling the time and scale parameters. It is defined as

X(Ψ) = { ψ_{l,j,k} = 2^(j/2) ψ_l(2^j . - k) : 1 ≤ l ≤ r; j, k in Z },

where Ψ = {ψ_1, ..., ψ_r} is a finite set of generating functions.

If the wavelet frame satisfies the tight frame condition, the generating functions ψ_1, ..., ψ_r are called framelets, and they generate the wavelet tight frame X(Ψ). To construct a set of framelets, one starts from the refinement mask a_0 of a refinable function φ (also called a scaling function) satisfying

φ̂(2ω) = â_0(ω) φ̂(ω),

where φ̂ is the Fourier transform of φ. A wavelet tight frame can then be constructed from an appropriate set of framelets that can be described in the Fourier domain as

ψ̂_l(2ω) = â_l(ω) φ̂(ω),   l = 1, ..., r,

where a_1, ..., a_r are the framelet masks.

The piecewise linear B-spline tight framelet is constructed from the piecewise linear B-spline function and its refinement mask a_0 = (1/4)[1, 2, 1]. The corresponding low-pass filter is

h_0 = (1/4)[1, 2, 1].

Two framelets can be defined by the framelet masks a_1 and a_2, and the corresponding high-pass filters are

h_1 = (√2/4)[1, 0, -1],   h_2 = (1/4)[-1, 2, -1].

Subsequently, the framelet decomposition of an image f can easily be obtained. The decomposition process, which can also be viewed as a (redundant) wavelet transform, is defined as

u = Wf,

where W is the transform matrix. In this paper, W is called the analysis operator associated with the 1-tight (Parseval) frame. In the discrete domain we have W^T W = I, where W and W^T denote the discrete wavelet tight frame decomposition and reconstruction, respectively. More details about the properties of frames and framelets can be found in [23].
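As a small consistency check, the following Python sketch builds the three piecewise linear B-spline framelet filters, assembles a single-level undecimated analysis operator W for a 1-D periodic signal, and verifies the Parseval property W^T W = I stated above. The explicit matrix construction is a naive illustration rather than an efficient transform.

```python
import numpy as np

# Piecewise linear B-spline framelet filters (one low-pass and two high-pass masks).
h0 = np.array([1.0, 2.0, 1.0]) / 4.0
h1 = np.sqrt(2.0) / 4.0 * np.array([1.0, 0.0, -1.0])
h2 = np.array([-1.0, 2.0, -1.0]) / 4.0

def circulant_filter_matrix(h, n):
    """Matrix of periodic filtering with a 3-tap filter h, centered at each sample."""
    C = np.zeros((n, n))
    for i in range(n):
        for k, offset in enumerate((-1, 0, 1)):
            C[i, (i + offset) % n] += h[k]
    return C

n = 16
W = np.vstack([circulant_filter_matrix(h, n) for h in (h0, h1, h2)])  # analysis operator
print(np.allclose(W.T @ W, np.eye(n)))  # True: W^T W = I (1-tight / Parseval frame)
```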

3. Sparse and Redundant Representation on Dictionary Learning and Framelet

3.1. Proposed Model

Inspired by K-SVD dictionary learning and the split Bregman method, we propose a minimization model based on the sparse representation of the image over a learned dictionary together with an analysis prior. The proposed model couples the Poisson data fidelity term, the framelet analysis prior ||Wf||_1, and the patch-based synthesis prior over the dictionary D, with positive regularization parameters weighting the terms. Among these parameters, one is especially important because it drastically reduces the workload of parameter setting; its effect is discussed in the parameter-setting analysis at the end of this section. Here ||.||_1 is the l1-norm, which sums the absolute values of a vector. The sparse solution of (1) can be approximated with the l1-norm regularizer. The l1-norm has become popular in recent years: it is a robust measure that gives a suitable penalty on flat regions and edges, it fully embodies sparsity, and, most of all, it is easy to compute. The notation ||Wf||_1 stands for the analysis-based regularizer, where the framelet transform W is generated by the piecewise linear B-spline framelet filter masks because of their simplicity and effectiveness in image deblurring.
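Since the displayed formula was lost in this version, the following LaTeX snippet sketches one plausible form of the proposed energy, combining the Poisson log-likelihood data term, the framelet analysis prior, and the K-SVD-style synthesis prior; the precise grouping and weighting of the terms in the authors' equation may differ. Here H is the blurring operator, W the framelet analysis operator, D the learned dictionary, R_ij the patch-extraction operators, and α_ij the sparse codes.

```latex
\min_{f,\{\alpha_{ij}\}}\;
  \langle Hf,\mathbf{1}\rangle-\langle g,\log(Hf)\rangle
  \;+\;\lambda\,\|Wf\|_{1}
  \;+\;\sum_{ij}\Bigl(\mu_{ij}\,\|\alpha_{ij}\|_{0}
  \;+\;\beta\,\|D\alpha_{ij}-R_{ij}f\|_{2}^{2}\Bigr)
```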

3.2. Alternating Split Bregman Algorithm

The split Bregman algorithm [9, 24] has shown good performance in splitting a seemingly intractable multivariable optimization problem into a number of easy subproblems. Consider general unconstrained minimization problems of the form

min_u  E_1(u) + E_2(Au),   (19)

where A is a bounded linear operator and both functions E_1 and E_2 are proper, convex, and lower semicontinuous; it is assumed that (19) has a solution. To derive the split Bregman algorithm, problem (19) is transformed into the constrained minimization problem

min_{u,d}  E_1(u) + E_2(d)   subject to   d = Au.

Using the augmented Lagrangian method, the split Bregman iteration for (19) is defined as

(u^(k+1), d^(k+1)) = argmin_{u,d}  E_1(u) + E_2(d) + (γ/2) ||d - Au - b^k||_2^2,
b^(k+1) = b^k + Au^(k+1) - d^(k+1).   (21)

The alternating split Bregman algorithm minimizes (21) alternately with respect to u and d:

u^(k+1) = argmin_u  E_1(u) + (γ/2) ||d^k - Au - b^k||_2^2,
d^(k+1) = argmin_d  E_2(d) + (γ/2) ||d - Au^(k+1) - b^k||_2^2,
b^(k+1) = b^k + Au^(k+1) - d^(k+1).

It is important to note that the penalty parameter γ should not be too large. If it becomes very large, the minimization subproblems become increasingly ill-conditioned, which leads to numerical problems [25].
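To make the scheme and the role of the penalty parameter concrete, here is a toy Python sketch of the alternating split Bregman iteration applied to a 1-D total variation denoising instance of problem (19); the operator, parameter values, and function names are illustrative and unrelated to the specific subproblems of the proposed model.

```python
import numpy as np

def forward_diff(x):
    """Periodic forward difference, (D x)_i = x_{i+1} - x_i (the operator A in (19))."""
    return np.roll(x, -1) - x

def backward_diff(x):
    """Adjoint D^T of the periodic forward difference."""
    return np.roll(x, 1) - x

def split_bregman_tv1d(g, lam=0.5, gamma=1.0, n_iter=100):
    """Alternating split Bregman for  min_u 0.5*||u - g||^2 + lam*||D u||_1 ."""
    n = g.size
    u, d, b = g.copy(), np.zeros(n), np.zeros(n)
    # Eigenvalues of D^T D for the periodic difference operator (used in the FFT solve).
    dtd_hat = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(n))
    for _ in range(n_iter):
        # u-subproblem: (I + gamma * D^T D) u = g + gamma * D^T (d - b), solved in Fourier.
        rhs = g + gamma * backward_diff(d - b)
        u = np.real(np.fft.ifft(np.fft.fft(rhs) / (1.0 + gamma * dtd_hat)))
        # d-subproblem: componentwise soft shrinkage with threshold lam / gamma.
        v = forward_diff(u) + b
        d = np.sign(v) * np.maximum(np.abs(v) - lam / gamma, 0.0)
        # Bregman variable update.
        b = b + forward_diff(u) - d
    return u
```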

If we use the alternating split Bregman algorithm directly, a small difficulty arises. Introducing auxiliary splitting variables for the coupled terms, we obtain a minimization problem over all of the variables.

Supposing that the other variables are known, the solution with respect to the image can be transformed to

Obviously, this is a quadratic term, so we can obtain the corresponding linear equation from its closed form.

The coefficient of one term in this equation is large, while the remaining elements are small enough to be ignored. Thus, the solution is approximately of the following form:

In [16], it was proved that dictionary learning with K-SVD can deal with additive noise. However, it cannot handle Poisson noise directly. Here, we add a special constraint to address this difficulty. The constraint comes from the idea of combining the split Bregman algorithm with an alternating minimization algorithm; see [26]. This special alternating optimization simplifies the complex problem. Moreover, it resolves the practical limitation that the K-SVD algorithm is only applicable to additive noise, or to multiplicative noise after it has been transformed into additive noise in the logarithmic domain [17].

3.3. Proposed Algorithm

The difficulties in solving for the unknowns are the nonquadratic data term and the nonseparable l1-norm term. The alternating optimization algorithm can be used to overcome these difficulties by introducing auxiliary variables. The minimization problem becomes

Here, the alternating optimization algorithm is a special split Bregman algorithm. It is to be noted that one of the auxiliary variables replaces the image in the sparse representation over the unknown dictionary.

The proposed model is finally reformulated as (28), where the weights are positive penalty parameters. The minimization problem (28) can be decoupled into several subproblems via split Bregman iteration.

(1) Given the other variables, the minimization of (28) with respect to the image reduces to a quadratic subproblem.

According to its closed form, we can obtain the linear equation (30) for the image. The convolutions involved can be performed efficiently in the FFT domain.
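As an illustration of the FFT-domain solve, the following Python sketch inverts a generic normal equation of the form (a*H^T H + b*I) f = rhs under the assumption of periodic boundary conditions, so that the blur is diagonalized by the 2-D DFT. The actual linear system (30) contains additional terms, but since W^T W = I for the tight frame, they essentially contribute further diagonal terms and modify the right-hand side. Variable names are illustrative.

```python
import numpy as np

def solve_quadratic_subproblem(rhs, h_hat, a, b):
    """Solve (a * H^T H + b * I) f = rhs in the Fourier domain, where h_hat is the
    2-D DFT of the PSF padded to the image size and centered at the origin."""
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (a * np.abs(h_hat) ** 2 + b)))

# Building h_hat for a k x k PSF (illustrative):
#   h_pad = np.zeros(image_shape); h_pad[:k, :k] = psf
#   h_hat = np.fft.fft2(np.roll(h_pad, shift=(-(k // 2), -(k // 2)), axis=(0, 1)))
```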

(2) Given the other variables, the second subproblem is the functional (31).

The solution can then be obtained by solving the corresponding Euler-Lagrange equation of the functional (31), which yields (32).
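The second subproblem is not reproduced here, but in related split Bregman schemes for Poisson deblurring (e.g. [8, 13]) the corresponding step couples the Poisson log-likelihood with a quadratic penalty and admits a pointwise closed form. The Python sketch below shows that closed form under this assumption; it is not necessarily the exact expression (32).

```python
import numpy as np

def poisson_data_prox(v, g, gamma):
    """Pointwise minimizer of  z - g*log(z) + (gamma/2)*(z - v)^2  over z > 0.
    Setting the derivative 1 - g/z + gamma*(z - v) to zero gives a quadratic in z;
    the nonnegative root is returned."""
    c = gamma * v - 1.0
    return (c + np.sqrt(c ** 2 + 4.0 * gamma * g)) / (2.0 * gamma)
```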

(3) Given the other variables, the third subproblem leads to an l1-norm problem.

The shrinkage operator is used to solve this l1-norm problem. Expanding the objective componentwise, the solution is given explicitly by (34).
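The shrinkage (soft-thresholding) operator referred to above has the standard componentwise form below; the commented call only illustrates how it would be applied to the framelet coefficients plus the Bregman variable, with hypothetical variable names.

```python
import numpy as np

def shrink(x, t):
    """Soft shrinkage: shrink(x, t) = sign(x) * max(|x| - t, 0), applied componentwise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Illustrative use inside the iteration (names are hypothetical):
#   d = shrink(framelet_coeffs + bregman_var, lam / gamma)
```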

(4) Given the other variables, the sparse representation coefficients can be obtained by solving (35).

Comparing (35) with (6), it can be seen that (35) is the sparse coding stage of the K-SVD algorithm. Any pursuit algorithm can be used to compute the representation vectors.

(5) Given the other variables, the last subproblem concerns the remaining auxiliary variable.

This is also a linear equation problem; proceeding as for the earlier quadratic subproblems, we can obtain its closed-form solution (37).

(6) Given the updated variables, the Bregman auxiliary variables are updated as follows:

Besides, dictionary learning is also an important procedure in sparse and redundant representation. We update the dictionary in the last step. The dictionary is updated in an outer loop to save computational time, as suggested in [21]. Therefore, the iterations are divided into an inner loop for the main process and an outer loop only for dictionary updating.

Taking into account the above equations, the proposed Poissonian image deconvolution algorithm is summarized as shown in Algorithm 1.

Initialization: set the initial image estimate, the auxiliary variables, the Bregman variables, and the
     initial (overcomplete DCT) dictionary; set the inner and outer iteration counters to zero.
for  each outer iteration  do
while  the stopping criteria are not satisfied  do
   Step 1. Update the image using (30);
   Step 2. Update the first auxiliary variable using (32);
   Step 3. Update the second auxiliary variable using (34);
   Step 4. Update the third auxiliary variable using (37);
   Step 5. Update the sparse representation coefficients using (6);
   Step 6. Update the Bregman auxiliary variables;
   increase the inner iteration counter;
end
Step 7. Update the dictionary;
end

One of the stopping criteria is evaluated at each iteration by the "normalized step difference energy" (NSDE), defined as the squared norm of the difference between two consecutive image iterates divided by the squared norm of the previous iterate. The other stopping criterion is reaching the maximum number of inner-loop iterations. If either criterion is satisfied, the inner loop is terminated. In this paper, the NSDE tolerance, the maximum number of inner iterations, and the number of outer iterations are fixed in advance. Besides, some conditions should be satisfied to obtain a sound deconvolution result; for example, the restored image should be normalized, and it should be kept nonnegative during the iterations.

The proposed method involves many penalty parameters, and the choice of parameters plays an important role in the efficiency and stability of the algorithm. Some guidelines help to determine them. (1) As in many papers, three of the penalty parameters can simply be set equal to each other; to avoid numerical problems, they should not be too small. (2) For another parameter, the larger its value, the better in practice, and we fix it accordingly. One further group of parameters is important for the framelet regularization and is chosen from an empirical range, and the last group of parameters is likewise chosen from an empirical range.

4. Experimental Results

In this section, in order to evaluate the performance of the proposed Poissonian image deconvolution method, we test it and compare it with the framelet regularization based method for Poissonian image deconvolution using Bregman iteration (PIDSB-FA) [13] and the dictionary learning and total variation regularization (LD-TV) [21] method. The peak signal-to-noise ratio (PSNR) is frequently used to measure the quality of the restored image:

PSNR = 10 log_10( MAX^2 / ((1/N) ||f - f̂||_2^2) ),

where MAX denotes the maximum pixel value of the image, N is the number of pixels, and f and f̂ denote the original image and the restored image, respectively.
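For reference, the PSNR used in the comparisons can be computed as in the short Python sketch below (the function name and default peak value are illustrative):

```python
import numpy as np

def psnr(f_true, f_restored, max_val=1.0):
    """PSNR in dB: 10 * log10(MAX^2 / MSE), with MSE the mean squared error."""
    mse = np.mean((f_true - f_restored) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```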

The numerical experiments are implemented in Matlab 7.14.0 on a dual-core personal computer with a 3.10 GHz processor and 2 GB of physical memory.

Example 1. To test the performance of the proposed method, numerical experiments are conducted on two kinds of test images. The first kind is medical images [21]: a brain image and a neck image. The second kind is natural images: the House, Lena, and Barbara images (Figure 1). All images are normalized to the interval [0, 1].

These are the clean original images. To simulate Poissonian image degradation, they are degraded by a blur kernel and contaminated by Poisson noise. A Gaussian blur kernel and a uniform blur kernel are used to test the deblurring performance. Three Poisson noise levels are considered, characterized by the peak intensity of the Poisson noise; the lower the peak intensity, the stronger the Poisson noise. In general, the simulation has three steps. First, the original image is scaled so that its maximum value equals the chosen peak intensity. Next, the scaled image is blurred by convolving it with the blur kernel. Finally, Poisson noise is applied to the blurred image.
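The three simulation steps can be summarized by the following Python sketch, which scales the image to the chosen peak intensity, blurs it, and draws Poisson counts; the convolution mode and function names are illustrative choices rather than the exact protocol of the experiments.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_poisson_blur(f, psf, peak, seed=0):
    """Degrade a clean image f: scale to the given peak intensity, blur with the PSF,
    then apply pixelwise Poisson noise."""
    rng = np.random.default_rng(seed)
    scaled = f * (peak / f.max())                                        # step 1: scale
    blurred = np.clip(fftconvolve(scaled, psf, mode="same"), 0.0, None)  # step 2: blur
    return rng.poisson(blurred).astype(float)                            # step 3: noise
```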

The ranges of the parameters are given at the end of Section 3. In this section, we describe the detailed parameter settings used in the experiments: one parameter pair is fixed for all blur kernels and all peak intensities of the Poisson noise, while the remaining parameters are set separately for the Gaussian blur kernel at each noise peak and for the uniform blur kernel at all peak intensities.

In all experiments, we set the parameters of K-SVD [16] as follows: the number of iterations is 30, and the remaining settings (the OMP error threshold, the dictionary size, and the image patch size) are fixed across all tests. A large patch size produces redundant information, whereas a small patch size leads to loss of useful information. Besides, the larger the image patch, the longer the computation time [27]. The dictionary learned at the last iteration of the proposed method for Example 2 is shown in Figure 3.

To illustrate the influence of the parameters on the performance of the proposed method, we record the PSNR value of the restored image as each parameter varies (Figure 2). The plots confirm that this parameter should not be set too high and that good performance is obtained over a wide range of values.

The parameter settings for PIDSB-FA are fixed across all experiments, and the parameters of LD-TV are set as suggested in [21].

Restoration of the brain image is shown in Figure 4; the brain image is blurred by a uniform kernel and contaminated by Poisson noise. Restoration of the House image is shown in Figure 5; the House image is blurred by a Gaussian kernel and contaminated by Poisson noise. The image restored by the PIDSB-FA method contains many details, whether useful or not. LD-TV reduces the useless details; however, it also smooths the useful ones. The proposed method gives sound restored results and overcomes the problems of both the PIDSB-FA and LD-TV methods.

The PSNR values of the degraded images and of the images restored by the three methods are compared in Table 1. The improvement in PSNR obtained by PIDSB-FA is about the same as that of LD-TV. These results indicate that both the analysis operator using the piecewise linear B-spline framelet filters and the synthesis operator using dictionary learning are good operators for image restoration. The proposed method combines the two operators, and Table 1 shows that it achieves the highest PSNR on both the medical and the natural images. It thus retains the advantages of both operators.

Example 2. In this example, the Lena image is blurred by the customized blur kernel ([0 1 1 1 1 0 0; 1 1 0 0 0 1 0; 0 0 0 0 1 1 0; 0 0 1 1 0 0 0; 0 1 1 0 0 1 1; 0 1 0 0 1 1 0; 0 1 1 1 1 0 0]/22) and then contaminated by Poisson noise. The parameter settings for the proposed method follow the ranges given at the end of Section 3. Restoration of the Lena image by the three methods is shown in Figure 6. The Lena image is widely used because it contains rich repeated patterns, such as texture regions, as well as strongly sparse content, such as large flat and smooth regions. The image restored by the PIDSB-FA method preserves the rich details in the texture regions, such as the decorations of the hat, but also introduces artifacts into the flat regions, such as Lena's cheek. The image restored by LD-TV is smoother in the flat regions; however, it also misses some fine details in the texture regions. Meanwhile, the proposed method performs well both in highlighting the texture regions and in smoothing the flat regions. It also achieves higher PSNR values than the other two methods.

The above experiments show that the proposed method gives better restorations of various images under different blurring kernels and Poisson noise of several peak intensities. Besides, stability is also a chief concern for the proposed method. Figure 7 shows how the NSDE and PSNR values of the proposed method change with the iterations on the brain image degraded by a Gaussian blur kernel and Poisson noise. The NSDE plot indicates convergence, and the PSNR value of the restored image increases progressively until it becomes stable.

5. Conclusions

In this study, we proposed and implemented a Poissonian deconvolution method based on sparse and redundant representations and framelets. The proposed model includes three terms: the Poisson data term, the synthesis term, and the analysis term. Dictionary learning for the synthesis model and the B-spline framelet-based analysis prior are integrated seamlessly. The proposed method retains the good performance of dictionary learning methods while taking full advantage of the multiscale characteristics of framelet-based methods. The seemingly intractable multivariable optimization problem is efficiently solved by a specific split Bregman method. Furthermore, the parameters are divided into groups, which greatly simplifies the parameter setting. Numerical comparisons on a variety of simulated images show that the proposed method outperforms the two other recent methods. However, dictionary learning is a time-consuming process; improving the computational efficiency of the dictionary learning step is a direction for future research.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors’ Contribution

Yu Shi and Houzhang Fang contributed equally to this work.