Abstract

Edge-preserving Bayesian restoration with nonquadratic priors is often inefficient in recovering continuous intensity variations and tends to produce block artifacts around edges in ill-posed inverse image restoration. To overcome this, we previously proposed a spatially adaptive (SA) prior with improved performance. However, restoration with this SA prior suffers from high computational cost and from convergence that is not guaranteed. To address these issues, this paper proposes a Large-scale Total Patch Variation (LS-TPV) prior model for Bayesian image restoration. In this model, the prior for each pixel is defined as a singleton conditional probability, which takes the form of a mixture of a patch-similarity prior and a weight-entropy prior. A joint MAP estimation is then built to ensure monotonicity of the iteration. The intensive calculation of patch distances is greatly alleviated by parallelization on the Compute Unified Device Architecture (CUDA). Experiments with both simulated and real data validate the good performance of the proposed restoration.

1. Introduction

Image restoration has wide applications in remote sensing, radar imaging, tomographic imaging, microscopic imaging, astronomic imaging, digital photography, and so forth [1–4]. For linear and shift-invariant imaging systems, the transformation from $f$ to $g$ is well described by the following additive linear degradation model [3, 4]:

$$ g = A * f + \varepsilon, \qquad (1.1) $$

where $g$, $f$, and $\varepsilon$ represent the degraded observed image, the original true image, and the corrupting white Gaussian noise with variance $\sigma^2$, respectively. $A$ is the PSF (point spread function) of the imaging system and $*$ is the linear convolution operator. Throughout this paper, we assume that the PSF $A$ is known or can be numerically determined or estimated.
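
As an illustration of the degradation model (1.1), the following minimal Python sketch (not part of the original paper) simulates $g$ by convolving a true image with a known PSF and adding white Gaussian noise; the PSF shown is the 9Γ—9 uniform kernel used later in the simulation experiment, and `sigma` is the noise standard deviation.

```python
import numpy as np
from scipy.signal import convolve2d

def degrade(f, psf, sigma, seed=0):
    """Simulate the degradation model (1.1): g = A * f + eps (sigma is the noise std)."""
    rng = np.random.default_rng(seed)
    g = convolve2d(f, psf, mode="same", boundary="symm")  # A * f
    g = g + rng.normal(0.0, sigma, size=g.shape)          # additive white Gaussian noise
    return g

# 9x9 uniform PSF normalized to sum to 1, as in the simulation experiment of Section 4.1
psf = np.ones((9, 9)) / 81.0
```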

A Bayesian or maximum a posteriori (MAP) approach within the Markov random field (MRF) framework can provide a stable solution to the ill-posed inverse problem of image restoration by incorporating a priori information about the geometrical properties of an image [3–8]. By modelling the unknown parameters in the prior probability density functions, the prior information can also be interpreted as a regularization metric, which measures the extent to which the contextual constraint assumption is violated. With the incorporated prior information, the Bayesian approach is able to distinguish desirable solutions from less desirable ones by transforming the original ill-posed problem into a well-posed one. Generally, we can build the following posterior probability $P(f \mid g)$ for image restoration:

$$ P(f \mid g) \propto P(g \mid f)\,P(f), \qquad (1.2) $$
$$ P(g \mid f) = \exp\bigl(L(g,f)\bigr) = \exp\Bigl(-\tfrac{1}{2}\,\|g - A * f\|^2\Bigr), \qquad (1.3) $$
$$ P(f) = Z^{-1}\exp\bigl(-\beta U(f)\bigr) = Z^{-1}\exp\Bigl(-\beta\sum_{j} U_j(f)\Bigr). \qquad (1.4) $$

Here, $P(g \mid f)$ and $P(f)$ denote the likelihood distribution and the prior distribution, respectively. $L(g,f)$ and $U(f)$ denote the corresponding likelihood energy and prior energy function. $U_j(f)$ denotes the energy function $U$ evaluated at pixel index $j$. The partition function $Z$ is a normalizing constant. $\beta$ is the global parameter controlling the balance between the likelihood energy and the prior energy. We can build the posterior energy function in logarithmic form:

$$ \psi_\beta(f) = \log P(f \mid g) = L(g,f) - \beta U(f) = -\tfrac{1}{2}\,\|g - A * f\|^2 - \beta\sum_{j} U_j(f). \qquad (1.5) $$

The restored or deconvolved image $f$ can be obtained through the maximization of the function $\psi_\beta(f)$.
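
The posterior energy (1.5) can be evaluated directly once a prior energy $U(f)$ is chosen. The sketch below is illustrative only and assumes the unit-variance likelihood weighting written in (1.3) and (1.5); `prior_energy` is a placeholder for any of the prior energies discussed in Section 2.

```python
import numpy as np
from scipy.signal import convolve2d

def posterior_energy(f, g, psf, beta, prior_energy):
    """Posterior energy psi_beta(f) of (1.5) for a generic prior energy U(f)."""
    residual = g - convolve2d(f, psf, mode="same", boundary="symm")
    likelihood = -0.5 * np.sum(residual ** 2)      # L(g, f) = -1/2 ||g - A*f||^2
    return likelihood - beta * prior_energy(f)     # psi_beta(f) = L(g, f) - beta * U(f)
```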

The widely used quadratic membrane (QM) prior, or Tikhonov L2 regularization, tends to smooth both noise and edge details and often leads to unfavorable oversmoothing in restorations [5]. On the other hand, several edge-preserving Bayesian restoration methods have been proposed, which can be classified into three main categories: wavelet-based regularization restorations, Bayesian restorations with nonquadratic prior energies, and Bayesian restorations with the total variation (TV) prior. Wavelet-based regularization restorations, with multiscale stochastic prior models, have been proposed in the research on edge-preserving restorations [8–11]. In [10], priors based on wavelet decompositions and heavy-tailed pdfs were incorporated into the EM restoration algorithm. In [11], Neelamani et al. proposed a hybrid Fourier-wavelet restoration, which exploits both the sparse Fourier-domain representation of the noise and the sparse wavelet-domain representation of the image coefficients. All these wavelet regularization methods are based on distribution modeling and thresholding of the decomposed wavelet coefficients [11]. Edge-preserving priors with nonquadratic energies have also been proposed to preserve edge details by using nonlinear functions that relate edge occurrence inversely to intensity differences [5, 12, 13]. The weighting matrices in nonquadratic prior Bayesian restorations preserve edges by turning off or suppressing smoothing at appropriate locations. Another impressive research direction in this area is total variation (TV)-based image restoration [7, 14, 15]. This kind of approach can also be viewed as a TV prior with half-quadratic regularization of the prior energy. With no enforced global image continuity, TV prior restorations often demonstrate good edge-preserving behavior.

Nonquadratic priors preserve edges by determining the pixel-wise regularization based on intensity-difference information within local fixed neighborhoods. The regularization in TV priors likewise comes from local intensity gradients. For all these nonquadratic and TV priors, local intensity-difference information cannot provide regularization effective enough to discriminate noise from image textures at different scales. Consequently, these edge-preserving approaches often fail to cope well with complicated restorations under heavy noise contamination, and tend to produce block artifacts around continuous edges in such situations [16–21].

Objects in images can reasonably be assumed to be compositions of edges and backgrounds whose intensities change coherently over different scales. Thus, for traditional prior models, the intensity differences between individual pixels within local neighborhoods are often insufficient to characterize objects [19, 20]. This limitation motivated our previous work in [21], which introduced a spatially adaptive (SA) prior restoration approach. The SA prior works by adaptively including the relevant neighboring pixels, and excluding the irrelevant ones, within a large neighborhood. Though showing impressive results, the iterative restoration in [21] updates the prior weights using the latest image estimate in a one-step-late (OSL) manner, which leads to an inconsistent posterior energy function and nonconvergent iterations. In addition, the approach in [21] is greatly limited by the high computational cost of calculating the patch distances for all pixel pairs in each prior neighborhood. In this paper, to overcome the first problem, we apply a large-scale total patch variation (LS-TPV) prior model, which is built through the specification of the conditional probability for each pixel [22–25]. The joint maximization of this constrained entropy function leads to convergent image/weight restorations. To overcome the second problem, we introduce the parallel computation structure of CUDA to calculate the patch distances [26–28]. A further benefit of the CUDA-accelerated parallelization is that it allows additional performance gains for the LS-TPV prior restoration by using larger search neighborhoods.

In Section 2, we review some previous prior models and then introduce the proposed LS-TPV prior model. In Section 3, we give a joint estimation algorithm for the proposed restoration. In Section 4, we perform comparative experiments with both simulated and real data; relevant visual and quantitative results are also presented. Conclusions and discussion are given in Section 5.

2. Prior Model

Conventionally, the value of $U_j(f)$ is computed through a weighted sum of potential functions $v$ of the differences between the pixels in the neighborhood $N_j$:

$$ U_j(f) = \sum_{b \in N_j} w_{bj}\, v\!\left(f_b - f_j\right). \qquad (2.1) $$

Generally, different choices of the potential function $v$ lead to different priors. For the simple space-invariant QM prior, the potential function has the form $v(t) = t^2/2$. Edge-preserving nonquadratic priors can be obtained by adopting a nonquadratic potential function $v$, such as the Huber potential function

$$ v(t) = \begin{cases} \dfrac{t^2}{2}, & |t| \le \gamma, \\[4pt] \gamma |t| - \dfrac{\gamma^2}{2}, & |t| > \gamma, \end{cases} \qquad (2.2) $$

where $\gamma$ is the threshold parameter [5, 13]. Such edge-preserving nonquadratic priors preserve structural information by choosing potential functions that grow more slowly as the differences between adjacent pixels become larger. The weight $w_{bj}$ is a positive value that denotes the degree of interaction between pixels $b$ and $j$. In traditional prior models, it is simply taken to be inversely proportional to the distance between pixels $b$ and $j$. On a square image lattice in which the 2D positions of pixels $b$ and $j$ ($b \ne j$) are $(b_x, b_y)$ and $(j_x, j_y)$, respectively, $w_{bj}$ is usually computed from the geometric distance as $1/\sqrt{(b_x - j_x)^2 + (b_y - j_y)^2}$.
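
For concreteness, the Huber potential (2.2) can be transcribed as a small Python function; this sketch is a direct transcription and introduces nothing beyond the formula itself.

```python
import numpy as np

def huber_potential(t, gamma):
    """Huber potential (2.2): quadratic for |t| <= gamma, linear beyond."""
    t = np.asarray(t, dtype=float)
    quad = 0.5 * t ** 2
    lin = gamma * np.abs(t) - 0.5 * gamma ** 2
    return np.where(np.abs(t) <= gamma, quad, lin)
```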

Restorations using the total variation (TV) prior take the prior energy as

$$ U_{\mathrm{TV}}(f) = \sum_{j} \sqrt{\left(\Delta^{h}_{j} f\right)^2 + \left(\Delta^{v}_{j} f\right)^2}, \qquad (2.3) $$

where $\Delta^{h}_{j}$ and $\Delta^{v}_{j}$ are linear operators corresponding to the first-order horizontal and vertical intensity differences at pixel $j$, respectively [7, 14, 15]. Since smooth and sharp edges tend to have similar prior energies, the TV prior energy in (2.3) does not penalize weak discontinuities and often favors images of bounded variation.
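
The TV prior energy (2.3) can be sketched as follows, assuming simple forward differences with a replicated boundary; the exact discrete difference operators used in [7, 14, 15] may differ.

```python
import numpy as np

def tv_energy(f):
    """Isotropic TV prior energy of (2.3) using first-order forward differences."""
    dh = np.diff(f, axis=1, append=f[:, -1:])   # horizontal differences at each pixel
    dv = np.diff(f, axis=0, append=f[-1:, :])   # vertical differences at each pixel
    return np.sum(np.sqrt(dh ** 2 + dv ** 2))
```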

Traditional quadratic priors, nonquadratic priors, and the TV prior provide only local intensity information for Bayesian restorations. Moreover, the edge-preserving nonquadratic and TV priors, though able to preserve edge information, tend to produce unfavorable block artifacts or false edges in high-noise situations.

To improve restoration quality, in [21] we proposed a spatially adaptive (SA) prior model, in which a spatially adaptive prior neighborhood is implemented by setting the weight $w_{bj}$ to a 1-0 binary function that classifies the pixels in the neighborhood $N_j$ into those relevant to the center pixel $j$ and those not. The SA prior can be formalized as follows:

$$ U_{\mathrm{SA}}(f) = \sum_{j} U_j(f) = \sum_{j} \sum_{b \in N_j} \frac{w_{bj}\left(f_b - f_j\right)^2}{\left|N^{r}_{j}\right|}, \qquad (2.4) $$
$$ w_{bj} = \begin{cases} 1, & \left|n_b(f) - n_j(f)\right|^2 \le \delta, \\ 0, & \left|n_b(f) - n_j(f)\right|^2 > \delta, \end{cases} \qquad (2.5) $$
$$ \left|n_b(f) - n_j(f)\right|^2 = \sum_{l} \left(f_{l \in n_b} - f_{l \in n_j}\right)^2. \qquad (2.6) $$

Here, $U_{\mathrm{SA}}$ is the energy function of the SA prior. The weight $w_{bj}$, which represents the classification of the neighboring pixels in the search neighborhood $N_j$, is computed via (2.5) and (2.6). $|N^{r}_{j}|$, the number of neighboring pixels with nonzero $w_{bj}$ in the neighborhood $N_j$, is a normalization factor that accounts for the varying number of relevant pixels in each $N_j$. The parameter $\delta$ in (2.5) is a threshold. The distance $|n_b(f) - n_j(f)|^2$ is given by the measure (2.6) between the two translated comparison patches $n_b$ and $n_j$, whose centers are located at pixels $b$ and $j$, respectively.
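
A straightforward (unoptimized) transcription of the patch distance (2.6) and the binary SA weight (2.5) might look like the following sketch; boundary handling is ignored here, and the pixel coordinates `b` and `j` are assumed to lie far enough from the image border.

```python
import numpy as np

def patch_distance_sq(f, b, j, half):
    """Squared patch distance (2.6) between patches n_b and n_j of size (2*half+1)^2."""
    (br, bc), (jr, jc) = b, j
    nb = f[br - half: br + half + 1, bc - half: bc + half + 1]
    nj = f[jr - half: jr + half + 1, jc - half: jc + half + 1]
    return float(np.sum((nb - nj) ** 2))

def sa_weight(dist_sq, delta):
    """Binary SA weight (2.5): 1 if the patch distance is within the threshold delta, else 0."""
    return 1.0 if dist_sq <= delta else 0.0
```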

Though proven effective in suppressing both noise and block artifacts, all the weights $w$ in the SA prior need to be heuristically inferred from the current image estimate in a one-step-late (OSL) manner, which often leads to an inconsistent posterior energy function with no guaranteed convergence [8]. To overcome this problem, in this paper we propose an LS-TPV prior model, in which regularization takes effect through penalizing the weighted distances/variations between the patches surrounding each neighboring pixel pair. The weights $w$ in each $N$ are considered variables with a nonlinear dependence on the image $f$. For the central pixel $j$ and its neighboring pixel $b$ in each prior neighborhood $N_j$, $w_{bj}$ is considered the similarity probability between the two patches $n_b$ and $n_j$ surrounding pixels $b$ and $j$, respectively [24]. An entropy prior on all the weights $w_{bj}$ of each $N_j$ is introduced into our model in the form $-\sum_{j}\sum_{b \in N_j} w_{bj}\ln w_{bj}$. Here, as in the β€œFrame” model in [25], the constraint is $\sum_{b \in N_j} w_{bj} = 1$. Then, based on the joint MAP theorem [12], we can obtain

$$ P(f, w, \eta \mid g) \propto P(g \mid f)\, P(f \mid w, \eta)\, P(w, \eta). \qquad (2.7) $$

Here, $\eta$ is the introduced pixel-wise Lagrange-multiplier parameter. For each pixel $j$, with $\hat{f}$ denoting the current image estimate, the prior energy $U_{\text{LS-TPV}}(f_j, w, \eta \mid \hat{f})$ of the LS-TPV prior combines a weighted patch-similarity energy, an entropy energy $-\sum_{j}\sum_{b \in N_j} w_{bj}\ln w_{bj}$, and the normalization constraint $\sum_{b \in N_j} w_{bj} = 1$. We can thus obtain the following posterior energy function:

$$ \begin{aligned} \psi_\beta(f, w, \eta \mid g) &= \log P(g \mid f) + \log P(f \mid w, \eta) + \log P(w, \eta) = L(g,f) - \beta\sum_{j} U_{\text{LS-TPV}}\!\left(f_j, w, \eta \mid \hat{f}\right) \\ &= -\frac{1}{2}\,\|g - A * f\|^2 - \beta\sum_{j}\left(\left(\sum_{b \in N_j} w_{bj} d_{bj} + h\sum_{b \in N_j} w_{bj}\ln w_{bj}\right) + \eta_j\left(\sum_{b \in N_j} w_{bj} - 1\right)\right). \end{aligned} \qquad (2.8) $$

Here, as in (1.4), $\beta$ is the global parameter controlling the balance between the likelihood energy and the prior energy. The patch distance $d_{bj}$ is calculated as $\|G_\alpha * n_b - G_\alpha * n_j\|^2$, in which $G_\alpha$ denotes the Gaussian convolution kernel with standard deviation $\alpha$. $\eta_j$ is the value of the Lagrange-multiplier parameter at pixel $j$. The parameter $h$ controls the maximum-entropy constraint on $w$ and is routinely modulated with respect to the noise level (a large $h$ is routinely set to suppress more noise in high-noise situations).
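
For a single pixel $j$, the bracketed LS-TPV prior energy inside the sum of (2.8) can be evaluated as in the following illustrative sketch, given the patch distances $d_{bj}$, the weights $w_{bj}$, the multiplier $\eta_j$, and the entropy parameter $h$; the small constant added inside the logarithm is only a numerical guard and is not part of the model.

```python
import numpy as np

def ls_tpv_energy_j(d_j, w_j, eta_j, h):
    """Per-pixel LS-TPV prior energy from (2.8): weighted patch distances,
    entropy term scaled by h, and the Lagrange normalization term."""
    similarity = np.sum(w_j * d_j)                  # sum_b w_bj * d_bj
    entropy = np.sum(w_j * np.log(w_j + 1e-12))     # sum_b w_bj * ln w_bj (guarded log)
    constraint = eta_j * (np.sum(w_j) - 1.0)        # eta_j * (sum_b w_bj - 1)
    return similarity + h * entropy + constraint
```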

3. Restoration Algorithm

To obtain a tractable maximization of the posterior probability using the proposed LS-TPV prior energy $U_{\text{LS-TPV}}$ in (2.8), we use the iterative coordinate descent (ICD) algorithm, in which each element is sequentially updated by maximizing the logarithm of its local conditional probability [29]. In this way, the objective image $f$ and the weights $w$ are jointly estimated through alternating maximization of $\psi_\beta(f, w, \eta \mid g)$ with respect to $f$ and $w$.

(1) $\arg\max_{w} -U_{\text{LS-TPV}}(w, \eta \mid \hat{f})$.
Here $\hat{f}$ denotes the fixed current image estimate, and each weight $w_{bj}$ for each pixel $j$ is updated by solving the following singleton optimization problem:

$$ \arg\max_{w_{bj}} \left( -\sum_{b \in N_j} w_{bj}\left\|G_\alpha * n_b - G_\alpha * n_j\right\|^2 - h\sum_{b \in N_j} w_{bj}\ln w_{bj} - \eta_j\left(\sum_{b \in N_j} w_{bj} - 1\right) \right). \qquad (3.1) $$
With the Lagrange multiplier $\eta_j$ introduced, by jointly solving (3.1) with respect to each $w_{bj}$ and $\eta_j$, we obtain

$$ w_{bj} = \frac{\exp\!\left(-\left\|G_\alpha * n_b - G_\alpha * n_j\right\|^2 / h\right)}{\sum_{b' \in N_j}\exp\!\left(-\left\|G_\alpha * n_{b'} - G_\alpha * n_j\right\|^2 / h\right)}, \qquad (3.2) $$

where $\|G_\alpha * n_b - G_\alpha * n_j\|^2$ can easily be calculated from the currently available $\hat{f}$. Each $w_{bj}$ is determined not only by the similarity between patches $n_b$ and $n_j$, but also by the sum of all the similarity metrics between patches whose central pixels lie in $N_j$. Hence $w_{bj}$ generally differs from $w_{jb}$.
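
The closed-form weight update (3.2) is essentially a softmax over negative patch distances. The sketch below is illustrative: it smooths the whole image with $G_\alpha$ once and then extracts patches, which matches $G_\alpha * n_b$ up to boundary effects; `neighbors` is the list of pixel coordinates in $N_j$, `patch_half` the patch half-width, and all coordinates are assumed to be away from the image border.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def update_weights(f_hat, j, neighbors, patch_half, h, alpha):
    """Weight update (3.2): softmax of negative smoothed-patch distances within N_j."""
    f_s = gaussian_filter(f_hat, alpha)                       # G_alpha * f_hat
    def patch(c):
        r, q = c
        return f_s[r - patch_half: r + patch_half + 1, q - patch_half: q + patch_half + 1]
    pj = patch(j)
    d = np.array([np.sum((patch(b) - pj) ** 2) for b in neighbors])
    w = np.exp(-d / h)
    return w / np.sum(w)                                      # enforces sum_b w_bj = 1
```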

(2) $\arg\max_{f} \psi_\beta(f, \hat{w}, \hat{\eta} \mid g)$.
Here 𝑀 denotes the fixed current weight estimate. Omitting the terms with no 𝑓, the πœ“π›½ξ(𝑓,π‘€βˆ£π‘”) becomes πœ“π›½ξ€·ξξ€Έξ“π‘“,π‘€βˆ£π‘”=𝐿(𝑔,𝑓)βˆ’π›½π‘—βŽ›βŽœβŽœβŽξ“π‘βˆˆπ‘π‘—ξπ‘€π‘π‘—β€–β€–πΊπ›Όβˆ—π‘›π‘βˆ’πΊπ›Όβˆ—π‘›π‘—β€–β€–2⎞⎟⎟⎠.(3.3)
To make the maximization of (2.8) tractable, based on the theories of pseudolikelihood and the iterated conditional mode (ICM) in [29], we can restore each $f_j$ by solving the following factorized problem:

$$ \arg\max_{f_j} \psi_\beta\!\left(f_j, \hat{w}_{bj} \mid g, \hat{f}\right) \;\Longrightarrow\; \arg\max_{f_j}\left( L(g,f) - \beta\sum_{b \in N_j}\hat{w}_{bj}\left\|G_\alpha * n_b - G_\alpha * n_j\right\|^2\right). \qquad (3.4) $$

Considering the convexity of $\|g - A * f\|^2$ and $\|G_\alpha * n_b - G_\alpha * n_j\|^2$, we conclude that, given the fixed weights $\hat{w}$, the Hessian matrix of each $\psi_\beta(f_j, \hat{w}_{bj} \mid g, \hat{f})$ in (3.4) is negative definite. In this image-updating step, we apply the restoration algorithm in [4] to solve (3.4) and obtain the image estimate at each pixel $f_j$. The function $\psi_\beta(f, w, \eta \mid g)$ in (2.8) is separately concave with respect to $f$ and $w$. Denoting by $\hat{f}^{n}, \hat{w}^{n}$ and $\hat{f}^{n+1}, \hat{w}^{n+1}$ the $n$th and $(n+1)$th iterated estimates, we obtain the relation $\psi_\beta(\hat{f}^{n+1}, \hat{w}^{n+1} \mid g) \ge \psi_\beta(\hat{f}^{n}, \hat{w}^{n+1} \mid g) \ge \psi_\beta(\hat{f}^{n}, \hat{w}^{n} \mid g)$, which implies that convergence to a local maximum can be obtained for this joint updating strategy [30].
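
Putting the two steps together, the joint updating scheme can be summarized by the following skeleton. The callables are hypothetical placeholders, not the paper's implementation: `init_image` stands for the Wiener initialization used in Section 4, `update_weights_all` for the closed-form weight update (3.2) applied at every pixel, and `update_image` for the per-pixel image update of (3.4), solved in the paper with the restoration algorithm of [4].

```python
def restore_ls_tpv(g, psf, beta, h, init_image, update_weights_all, update_image, n_iter=500):
    """Skeleton of the joint image/weight updating scheme of Section 3 (hypothetical helpers)."""
    f = init_image(g, psf)                    # e.g., Wiener deconvolution as initial estimate
    for _ in range(n_iter):                   # the stopping rule used in practice is given in Section 4
        w = update_weights_all(f, h)          # step (1): maximize over w with f fixed
        f = update_image(f, g, psf, w, beta)  # step (2): maximize over f with w fixed
    return f
```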

In each iteration of the proposed restoration, it is very expensive to perform the pixel-wise calculation of all the $d_{bj}$ and $w_{bj}$ for every pair of translated patches $n$ in each $N$ over the whole image region. In the experiments, to save computation cost, we do not extend the sizes of $N$ over the whole image region, but set them to appropriate sizes. We achieve significant acceleration by using the parallel Compute Unified Device Architecture (CUDA) [26–28]. In each iteration of the restoration using the proposed LS-TPV prior model, all threads in the block-grid structure execute simultaneously to perform all the involved pixel-wise operations. Furthermore, considering the symmetry property that the distance $d_{bj}$ equals the distance $d_{jb}$, we can save a further half of the computation cost by calculating only one of the two distances $d_{bj}$ and $d_{jb}$.
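
The paper's implementation of the patch-distance computation is written in CUDA C. As an illustrative analogue only, a numba.cuda kernel in Python with one thread per pixel might be organized as below; the 11Γ—11 search window, 7Γ—7 patches, and 16Γ—16 thread blocks mirror the settings reported later, and the symmetry saving ($d_{bj} = d_{jb}$) is omitted for brevity.

```python
import math
import numpy as np
from numba import cuda

@cuda.jit
def patch_dist_kernel(f_s, offsets, half, out):
    """One thread per pixel: squared distances from its patch to each offset neighbor's patch."""
    i, j = cuda.grid(2)
    rows, cols = f_s.shape
    if i >= rows or j >= cols:
        return
    for k in range(offsets.shape[0]):
        bi = i + offsets[k, 0]
        bj = j + offsets[k, 1]
        d = 0.0
        inside = (i >= half and i < rows - half and j >= half and j < cols - half and
                  bi >= half and bi < rows - half and bj >= half and bj < cols - half)
        if inside:
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    t = f_s[i + di, j + dj] - f_s[bi + di, bj + dj]
                    d += t * t
        out[i, j, k] = d

# Hypothetical launch: 16x16 thread blocks covering a 256x256 smoothed image estimate.
f_s = np.random.rand(256, 256).astype(np.float32)
offsets = np.array([(di, dj) for di in range(-5, 6) for dj in range(-5, 6)], dtype=np.int32)
out = cuda.device_array((256, 256, offsets.shape[0]), dtype=np.float32)
threads = (16, 16)
blocks = (math.ceil(256 / 16), math.ceil(256 / 16))
patch_dist_kernel[blocks, threads](cuda.to_device(f_s), cuda.to_device(offsets), 3, out)
distances = out.copy_to_host()
```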

4. Experiments

Both simulated and real data are used in the experiments. To provide comparative results, we also performed Wiener deconvolution, ForWaRD restoration, and Bayesian restorations using the TV prior, the Huber prior, and the SA prior in [21]. The Wiener deconvolution was performed by using the degraded image to estimate the power spectrum in the Fourier domain. The TwIST algorithm in [7] and the Newton-Raphson algorithm in [12] are used in the restorations with the TV prior and the Huber prior, respectively. The restored images from the above Wiener filter method are taken as the initial images for all the Bayesian restorations. The parameters to be preset in the experiments include $\beta$ for all Bayesian restorations, the threshold $\gamma$ for the Huber prior, the wavelet threshold and decomposition level in the ForWaRD method, $\delta$ for the SA prior, and $h$ and the $n$ and $N$ settings for the proposed LS-TPV prior. In the simulation experiment, all the parameters are set based on the SNR maximization criterion, with the SNR calculated as

$$ \mathrm{SNR} = 10\log_{10}\!\left(\frac{\sum_{j=1}^{M}\left(F_j - \bar{F}\right)^2}{\sum_{j=1}^{M}\left(F_j - \hat{f}_j\right)^2}\right). \qquad (4.1) $$

Here $M$, $\hat{f}$, $F$, and $\bar{F}$ denote the total number of pixels in the image, the restored image, the original true image, and the mean of the original true image, respectively. As for the real-data experiment, for which no true image is available for SNR calculation, all the parameters are set based on the tradeoff between edge preservation and noise suppression. An $N$ of size $11\times 11$ and an $n$ of size $7\times 7$ are used in our work. A full study of the effects of the sizes of $N$ and $n$ on the resulting restorations is presented in Section 4.5.
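
The SNR criterion (4.1) translates directly into the following small function; it introduces nothing beyond the formula.

```python
import numpy as np

def snr_db(f_true, f_restored):
    """SNR of (4.1): variance of the true image over the squared restoration error, in dB."""
    num = np.sum((f_true - f_true.mean()) ** 2)
    den = np.sum((f_true - f_restored) ** 2)
    return 10.0 * np.log10(num / den)
```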

Generally, the solution of the restoration is considered optimized when the iteration becomes stable. For the Bayesian restorations using the TV prior, Huber prior, and SA prior, in which the energy function is maximized with respect to the image only, we stop the iterations when the current image estimates satisfy $\psi_\beta(\hat{f}^{n+1} \mid g) - \psi_\beta(\hat{f}^{n} \mid g) < 0.995\,(\psi_\beta(\hat{f}^{n} \mid g) - \psi_\beta(\hat{f}^{n-1} \mid g))$. For the restorations using the proposed LS-TPV prior, in which the energy function is maximized with respect to both the image and the weights, we stop the iterations when the current image and weight estimates satisfy $\psi_\beta(\hat{f}^{n+1}, \hat{w}^{n+1} \mid g) - \psi_\beta(\hat{f}^{n}, \hat{w}^{n} \mid g) < 0.995\,(\psi_\beta(\hat{f}^{n}, \hat{w}^{n} \mid g) - \psi_\beta(\hat{f}^{n-1}, \hat{w}^{n-1} \mid g))$. In practice, it is found that 411, 433, 475, and 486 iterations are required to reach these conditions for the Bayesian restorations using the TV prior, Huber prior, SA prior, and the proposed LS-TPV prior, respectively.
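
The stopping rule above can be written as a one-line test; this is a direct transcription with the factor 0.995 as stated.

```python
def should_stop(psi_prev2, psi_prev, psi_curr, rho=0.995):
    """Stop when the latest increase of the posterior energy falls below
    rho times the previous increase (Section 4 stopping rule)."""
    return (psi_curr - psi_prev) < rho * (psi_prev - psi_prev2)
```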

4.1. Simulation Experiment

In this section, simulation experiments with the $256\times 256$ β€œLena” image (intensity range $[-98, 117]$) are performed. In the simulation, a uniform $9\times 9$ PSF is assumed, and Gaussian noise with variance 0.1663 is added. The PSF used in this experiment is normalized to 1. Figure 1(b) is the degraded image simulated by (1.1) using this PSF and the white Gaussian noise. The parameters for the different restorations are listed in Table 1. All the restored images are illustrated in Figures 1(c)–1(h), respectively.

The restored images from TV prior restoration and Huber prior restoration are, respectively, shown in Figures 1(d) and 1(e). Figure 1(f) shows the restored image using the ForWaRD method. The result of Bayesian restoration using the SA prior is illustrated in Figure 1(g). For the Bayesian restoration using the proposed LS-TPV prior, the algorithm in Section 3 is used, and the corresponding restored image is shown in Figure 1(h).

From the results, we find that the ForWaRD method and all the Bayesian restorations (Figures 1(d)–1(h)) can overcome the ringing effects of the Wiener deconvolution (Figure 1(c)). The block artifacts (the left arrows), which are observed in the ForWaRD approach and in the restorations using the Huber prior and the TV prior, are effectively suppressed by the restorations using the SA prior and the proposed LS-TPV prior. The LS-TPV prior (Figure 1(h)) presents a further visual enhancement over the SA prior (Figure 1(g)) in block suppression (the left arrows) and edge preservation (the right arrows). Figure 2 plots the profiles along one specified track (the red line in the hair region of the β€œLena” image in Figure 2) for the different restorations; the profile from the proposed restoration (the blue profile) matches the profile of the true β€œLena” image (the red profile) more closely than those from the other restorations. Table 2 shows that, compared to the other methods, the proposed LS-TPV prior restoration leads to restored images with higher SNR.

4.2. Real Data Experiment

In this section, a real blurred $256\times 256$ astronomical β€œmoon” image with intensity range $[-77, 170]$, obtained from http://www.iceinspace.com.au/, is used. Here we use a $5\times 5$ normalized Gaussian PSF, which was found to be effective. The parameters for the different restoration approaches are listed in Table 3.

Figure 3(b) is the restored result from Wiener deconvolution. The restored images from the TV prior restoration and the Huber prior restoration are illustrated in Figures 3(c) and 3(d), respectively. Figures 3(e) and 3(f) show the restored images from the ForWaRD method and the Bayesian restoration using the SA prior. For the Bayesian restoration using the LS-TPV prior, the restored image is shown in Figure 3(g). We note that the ForWaRD method and all the Bayesian restorations (Figures 3(c)–3(g)) are free of the ringing effects of the Wiener deconvolution (Figure 3(b)). The block artifacts observed in the restorations using the Huber prior and the TV prior are effectively suppressed in the restorations using the SA prior and the LS-TPV prior (the lower red arrows in Figures 3(f) and 3(g)). The oversmoothing seen in the ForWaRD restoration is not observed in the restorations using the SA prior and the proposed LS-TPV prior (the upper arrows in Figures 3(e), 3(f), and 3(g)). Compared to the restoration using the SA prior, the restoration using the LS-TPV prior also shows better visual performance in preserving edges (the upper arrows in Figures 3(f) and 3(g)).

4.3. Computation Cost Comparisons

Table 4 lists the recorded CPU times (in seconds) of the different restorations for the experiments with simulated data and real data, respectively. Note that the notations simu and real in Table 4 correspond to the experiments with simulated data and real data, respectively. We can see that the restorations using the SA prior and the LS-TPV prior are much more computationally intensive than the other methods. We define LS-TPV prior res1, LS-TPV prior res2, and LS-TPV prior res3 as the original serial restoration, the restoration after optimization, and the restoration after both optimization and parallelization, respectively. Table 4 shows that, with respect to the original serial version, the proposed restoration after optimization is about 3 times faster, and the restoration after both optimization and parallelization is about 50 times faster. However, we cannot claim that the LS-TPV prior restoration needs less computation cost than the other methods, since we did not design parallelized versions for them.

4.4. Algorithm Monotonicity

Here, to analyze the monotonicity of the joint image/weight algorithm proposed in Section 3, we calculated the posterior energy $\psi_\beta(f, w, \eta \mid g)$ and the SNR with respect to the iteration number for the proposed restorations. The total iteration number is set to 500. As illustrated in Figure 4, both the calculated posterior energy and the SNR increase monotonically throughout the iterations.

4.5. Study on Neighborhood 𝑁 and 𝑛 of Different Sizes

To study the roles of the sizes of $N$ and $n$ in the restorations, we perform restorations using the proposed prior with $N$ and $n$ of different sizes in the above simulation experiment with the β€œLena” image. To characterize local image structures, we do not set $n$ larger than $7\times 7$. Considering that the total number of pixels differs between comparison patches $n$ of different sizes (49 for a $7\times 7$ $n$, 9 for a $3\times 3$ $n$, and 1 for a $1\times 1$ $n$), the parameter $h$ should also change across restorations using different patches $n$. The hyperparameter $\beta$ and the parameter $h$ for the proposed prior are chosen by hand to produce the best stable images in terms of SNR maximization. The combinations of $N$ and $n$ of different sizes, the corresponding computed SNR, and the recorded CPU time cost (with CUDA parallelization) are all listed in Table 5.

We can see that a higher SNR is obtained when enlarging $N$ and $n$ from $1\times 1$ up to $11\times 11$ and $7\times 7$, respectively. We also note in Table 5 that more CPU time is needed for the restorations using the proposed prior with larger $N$ and $n$. The sizes of $N$ and $n$ should therefore be set based on the tradeoff between performance and computation cost. We find in Table 5 that no significant SNR difference exists between the $11\times 11$ $N$ and the larger $13\times 13$ $N$. In fact, as the iteration proceeds, more global information beyond the $11\times 11$ $N$ will be incorporated into the regularization of the pixels.

5. Conclusions and Future Work Plan

In this paper on image restoration, we proposed an LS-TPV prior that penalizes the total distances between neighboring patches through a constrained mixture prior model. A convergent joint image/weight updating algorithm, which estimates the image and the weights sequentially, is proposed to overcome the heuristic OSL weight determination of the SA prior restoration in [21]. In addition to providing effective regularization, the proposed approach leads to stable iterations with guaranteed convergence.

The application of the proposed prior model requires a pixel-wise computation of the distances between neighboring patches over a large region, which implies a high computation cost for the restorations. In this paper, a remarkable reduction in computation time is achieved by optimizing the patch-distance computation and by replacing the original serial pixel-wise calculation of patch distances with a parallel implementation in the CUDA framework.

Further work includes applying the proposed approach to blind image deconvolution, further justifying the proposed approach with more image-quality measures based on the human visual system, and further accelerating the computation by parallelizing the PSF convolution in the restorations.

Acknowledgments

This research was supported by the National Basic Research Program of China under Grant 2010CB732503, the National Natural Science Foundation under Grant 81000636, and the Natural Science Foundation of Jiangsu Province under Grant BK2009012.