Mathematical Problems in Engineering


Research Article | Open Access

Volume 2011 | Article ID 408241 | 15 pages | https://doi.org/10.1155/2011/408241

Bayesian Image Restoration Using a Large-Scale Total Patch Variation Prior

Academic Editor: Nickolas S. Sapidis
Received: 05 Sep 2010
Revised: 30 Nov 2010
Accepted: 27 Apr 2011
Published: 02 Jul 2011

Abstract

Edge-preserving Bayesian restorations using nonquadratic priors are often inefficient in restoring continuous variations and tend to produce block artifacts around edges in ill-posed inverse image restorations. To overcome this, we previously proposed a spatial adaptive (SA) prior with improved performance. However, restoration with this SA prior suffers from a high computational cost and from unguaranteed convergence. To address these issues, this paper proposes a large-scale total patch variation (LS-TPV) prior model for Bayesian image restoration. In this model, the prior for each pixel is defined as a singleton conditional probability, which takes the form of a mixture of a patch similarity prior and a weight entropy prior. A joint MAP estimation is then built to ensure the monotonicity of the iterations. The intensive calculation of patch distances is greatly alleviated by parallelization with the Compute Unified Device Architecture (CUDA). Experiments with both simulated and real data validate the good performance of the proposed restoration.

1. Introduction

Image restorations have wide applications in remote sensing, radar imaging, tomographic imaging, microscopic imaging, astronomic imaging, digital photography, and so forth [1–4]. For linear and shift-invariant imaging systems, the transformation from $f$ to $g$ is well described by the following additive linear degradation model [3, 4]:

$$ g = A * f + \varepsilon, \qquad (1.1) $$

where $g$, $f$, and $\varepsilon$ represent the degraded observed image, the original true image, and the corrupting white Gaussian noise with variance $\sigma^2$, respectively. $A$ is the point spread function (PSF) of the imaging system and $*$ is the linear convolution operator. Throughout this paper, we assume that the PSF $A$ is known or can be numerically determined or estimated.
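For illustration, the degradation model (1.1) can be simulated in a few lines. The following Python sketch is ours and not part of the original presentation; it relies on NumPy/SciPy, and the helper name degrade is hypothetical.

    import numpy as np
    from scipy.signal import fftconvolve

    def degrade(f, psf, noise_std, seed=0):
        # g = A * f + eps of (1.1): blur with the PSF, then add white Gaussian noise.
        rng = np.random.default_rng(seed)
        g = fftconvolve(f, psf, mode="same")
        return g + rng.normal(0.0, noise_std, size=f.shape)

    # Example settings matching Section 4.1: 9x9 uniform PSF normalized to 1,
    # noise variance 0.1663; f here is only a stand-in for the true image.
    psf = np.ones((9, 9)) / 81.0
    f = np.random.rand(256, 256)
    g = degrade(f, psf, noise_std=np.sqrt(0.1663))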

The Bayesian or maximum a posteriori (MAP) approach, within the Markov random field (MRF) framework, can provide a stable solution of ill-posed inverse image restorations by incorporating a priori information about the geometrical properties of an image [3–8]. Via modelling of the unknown parameters in the prior probability density functions, prior information can also be interpreted as a regularization metric, which measures the extent to which the contextual constraint assumption is violated. With the incorporated prior information, the Bayesian approach is able to distinguish desirable solutions from less desirable ones by transforming the original ill-posed problem into a well-posed one. Generally, we can build the following posterior probability $P(f \mid g)$ for image restorations:

$$ P(f \mid g) \propto P(g \mid f)\,P(f), \qquad (1.2) $$

$$ P(g \mid f) = \exp(L(g,f)) = \exp\left(-\frac{1}{2}\|g - A * f\|^2\right), \qquad (1.3) $$

$$ P(f) = Z^{-1} \times \exp(-\beta U(f)) = Z^{-1} \times \exp\left(-\beta \sum_j U_j(f)\right). \qquad (1.4) $$

Here, $P(g \mid f)$ and $P(f)$ denote the likelihood distribution and the prior distribution, respectively. $L(g,f)$ and $U(f)$ denote the corresponding likelihood energy and the prior energy function. $U_j(f)$ denotes the energy function $U$ evaluated at pixel index $j$. The partition function $Z$ is a normalizing constant. $\beta$ is the global parameter controlling the balance between the likelihood energy and the prior energy. We can build the posterior energy function in logarithmic form:

$$ \psi_\beta(f) = \log P(f \mid g) = L(g,f) - \beta U(f) = -\frac{1}{2}\|g - A * f\|^2 - \beta \sum_j U_j(f). \qquad (1.5) $$

The restored or deconvolved image $f$ can be obtained through the maximization of the function $\psi_\beta(f)$.
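As a reference point for the algorithms below, a minimal sketch of evaluating the log-posterior energy (1.5) could look as follows, with the prior energy $U$ passed in as a callable; the function name posterior_energy is hypothetical.

    import numpy as np
    from scipy.signal import fftconvolve

    def posterior_energy(f, g, psf, beta, prior_energy):
        # psi_beta(f) = -0.5 * ||g - A*f||^2 - beta * U(f), cf. (1.5).
        residual = g - fftconvolve(f, psf, mode="same")
        return -0.5 * np.sum(residual ** 2) - beta * prior_energy(f)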

The widely used quadratic membrane (QM) prior, or Tikhonov L2 regularization, tends to smooth both noise and edge details and often leads to unfavorable oversmoothing in restorations [5]. On the other hand, several edge-preserving Bayesian restoration methods have been proposed, which can be classified into three main categories: wavelet-based regularization restorations, Bayesian restorations with nonquadratic prior energies, and Bayesian restorations with the total variation (TV) prior. Wavelet-based regularization restorations, with multiscale stochastic prior models, have been proposed in the research of edge-preserving restorations [8–11]. In [10], priors based on wavelet decompositions and heavy-tailed pdfs were incorporated into an EM restoration algorithm. In [11], Neelamani et al. proposed a Fourier-wavelet hybrid restoration, which exploits the sparse representation of noise in the Fourier domain and the sparse representation of the image coefficients in the wavelet domain. All these wavelet regularization methods are based on distribution modeling and thresholding of the decomposed wavelet coefficients [11]. Edge-preserving priors with nonquadratic energies have also been proposed to preserve edge details by using nonlinear inverse-proportional functions between edge existence and intensity differences [5, 12, 13]. The weighting matrices in nonquadratic prior Bayesian restorations preserve edges by turning off or suppressing smoothing at appropriate locations. Another impressive research direction in this area is total variation (TV)-based image restoration [7, 14, 15]. This kind of approach can also be viewed as a TV prior with half-quadratic regularization of the prior energy. With no enforced global image continuity, TV prior restorations often demonstrate good edge-preserving properties.

Nonquadratic priors preserve edges by determining the pixel-wise regularization from the intensity-difference information within local fixed neighborhoods. The regularization in TV priors also comes from local intensity gradients. For all these nonquadratic and TV priors, the local intensity-difference information cannot provide effective regularization to discriminate noise from image textures at different scales. So these edge-preserving approaches often fail to cope well with complicated restorations under high noise contamination, and tend to produce block artifacts around continuous edges in such situations [16–21].

Objects in images can reasonably be assumed to be compositions of edges and backgrounds whose intensities change coherently over different scales. Thus, for traditional prior models, the intensity differences between individual pixels within local neighborhoods are often insufficient to characterize objects [19, 20]. This limitation motivated our previous work in [21], which introduced a spatial adaptive (SA) prior restoration approach. The SA prior works by adaptively including the relevant neighboring pixels and excluding the irrelevant ones within a large neighborhood. Though showing impressive results, the iterative restoration in [21] updates the prior weights using the latest image estimate in a one-step-late (OSL) way, which leads to an inconsistent posterior energy function and nonconvergent iterations. In addition, the approach in [21] is greatly limited by the high computational cost of calculating the patch distances for all the pixel pairs in each prior neighborhood. In this paper, to overcome the first problem, we apply a large-scale total patch variation (LS-TPV) prior model, which is built through the specification of the conditional probability for each pixel [22–25]. The joint maximization of this constrained entropy function leads to convergent image/weight restorations. To overcome the second problem, we introduce the parallel computation structure of CUDA to calculate the patch distances [26–28]. A further benefit of the CUDA-accelerated parallelization is that it allows additional performance enhancement of the LS-TPV prior restoration by using larger search neighborhoods.

In Section 2, we review some previous prior models and then introduce the proposed LS-TPV prior model. In Section 3, we give a joint estimation algorithm for the proposed restoration. In Section 4, we perform comparative experiments with both simulated and real data and present the relevant visual and quantitative results. Conclusions and discussion are given in Section 5.

2. Prior Model

Conventionally, the value of $U_j(f)$ is computed as a weighted sum of potential functions $v$ of the differences between the pixels in the neighborhood $N_j$:

$$ U_j(f) = \sum_{b \in N_j} w_{bj}\, v\left(f_b - f_j\right). \qquad (2.1) $$

Generally, different choices of the potential function $v$ lead to different priors. For the simple space-invariant QM prior, the potential function has the form $v(t) = t^2/2$. Edge-preserving nonquadratic priors can be obtained by adopting a nonquadratic potential function $v$, such as the Huber potential function

$$ v(t) = \begin{cases} \dfrac{t^2}{2}, & |t| \le \gamma, \\ \gamma|t| - \dfrac{\gamma^2}{2}, & |t| > \gamma, \end{cases} \qquad (2.2) $$

where $\gamma$ is the threshold parameter [5, 13]. Such edge-preserving nonquadratic priors preserve structure information by choosing nonquadratic potential functions that increase less steeply as the differences between adjacent pixels become larger. The weight $w_{bj}$ is a positive value that denotes the interaction degree between pixel $b$ and pixel $j$. In traditional prior models, it is simply taken to be inversely proportional to the distance between pixels $b$ and $j$. So on a square image lattice, in which the 2D positions of pixels $b$ and $j$ ($b \ne j$) are, respectively, $(b_x, b_y)$ and $(j_x, j_y)$, $w_{bj}$ is usually calculated as the inverse geometric distance $1/\sqrt{(b_x - j_x)^2 + (b_y - j_y)^2}$.
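A compact sketch of the Huber potential (2.2) and of the traditional inverse-distance weighting might read as follows (illustrative only; the function names are ours).

    import numpy as np

    def huber_potential(t, gamma):
        # v(t) of (2.2): quadratic for |t| <= gamma, linear beyond.
        a = np.abs(t)
        return np.where(a <= gamma, a ** 2 / 2.0, gamma * a - gamma ** 2 / 2.0)

    def geometric_weight(b_pos, j_pos):
        # Traditional w_bj: inverse Euclidean distance between the pixel positions.
        (bx, by), (jx, jy) = b_pos, j_pos
        return 1.0 / np.hypot(bx - jx, by - jy)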

Restorations using the total variation (TV) prior take the prior energy in the following form:

$$ U_{\mathrm{TV}}(f) = \sum_j \sqrt{\left(\Delta^h_j f\right)^2 + \left(\Delta^v_j f\right)^2}, \qquad (2.3) $$

where $\Delta^h_j$ and $\Delta^v_j$ are linear operators corresponding to the first-order horizontal and vertical intensity differences at pixel $j$, respectively [7, 14, 15]. Since both smooth and sharp edges tend to have similar prior energies, the TV prior energy given by (2.3) does not penalize weak discontinuities and often favors images with bounded variations.
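The TV energy (2.3) can be evaluated along the following lines (a sketch; replicating the image border is our own assumption, since the boundary handling is not specified here).

    import numpy as np

    def tv_energy(f):
        # U_TV(f) of (2.3): sum over pixels of the gradient magnitude built from
        # first-order horizontal and vertical differences.
        dh = np.diff(f, axis=1, append=f[:, -1:])   # horizontal differences
        dv = np.diff(f, axis=0, append=f[-1:, :])   # vertical differences
        return np.sum(np.sqrt(dh ** 2 + dv ** 2))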

Traditional quadratic priors, nonquadratic priors, and the TV prior only provide local intensity information for Bayesian restorations. Moreover, the edge-preserving nonquadratic and TV priors, though able to preserve edge information, tend to produce unfavorable block artifacts or false edges in high-noise situations.

To improve restoration quality, in [21] we proposed a spatial adaptive (SA) prior model, in which a spatially adaptive prior neighborhood is implemented by setting the weight $w_{bj}$ to a 1-0 binary function that classifies the pixels in the neighborhood $N_j$ into those relevant to the center pixel $j$ and those not. The SA prior can be formalized as follows:

$$ U_{\mathrm{SA}}(f) = \sum_j U_j(f) = \sum_j \sum_{b \in N_j} \frac{w_{bj}\left(f_b - f_j\right)^2}{\left|N^r_j\right|}, \qquad (2.4) $$

$$ w_{bj} = \begin{cases} 1, & \left|n_b(f) - n_j(f)\right|^2 \le \delta, \\ 0, & \left|n_b(f) - n_j(f)\right|^2 > \delta, \end{cases} \qquad (2.5) $$

$$ \left|n_b(f) - n_j(f)\right|^2 = \sum_l \left(f_{l \in n_b} - f_{l \in n_j}\right)^2. \qquad (2.6) $$

Here, $U_{\mathrm{SA}}$ is the energy function for the SA prior. $w_{bj}$, which represents the classification of the neighboring pixels in the search neighborhood $N_j$, is computed via (2.5) and (2.6). $|N^r_j|$, the number of neighboring pixels with nonzero $w_{bj}$ in the neighborhood $N_j$, is the normalization factor accounting for the different numbers of retained neighboring pixels in each $N_j$. The parameter $\delta$ in (2.5) is the threshold parameter. The distance $|n_b(f) - n_j(f)|^2$ is determined by the distance measure (2.6) between the two translated comparing patches $n_b$ and $n_j$, whose centers are located at pixel $b$ and pixel $j$, respectively.
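A direct transcription of the binary SA weight (2.5)-(2.6) is sketched below (illustrative; the patches centred at $b$ and $j$ are assumed to lie fully inside the image).

    import numpy as np

    def sa_weight(f, b, j, patch_radius, delta):
        # Binary SA weight of (2.5)-(2.6); b and j are (row, col) patch centres.
        def patch(center):
            r, (y, x) = patch_radius, center
            return f[y - r:y + r + 1, x - r:x + r + 1]
        dist = np.sum((patch(b) - patch(j)) ** 2)
        return 1.0 if dist <= delta else 0.0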

Though proved effective in suppressing both noise and block artifacts, all the weights $w$ in the SA prior need to be heuristically inferred from the current image estimate in a one-step-late (OSL) manner, which often leads to an inconsistent posterior energy function with no guaranteed convergence [8]. To overcome this problem, in this paper we propose an LS-TPV prior model, in which regularization takes effect through penalizing the weighted distances/variations between the patches surrounding each neighboring pixel pair. The weights $w$ in each $N$ are considered variables with a nonlinear dependence on the image $f$. For the central pixel $j$ and its neighboring pixel $b$ in each prior neighborhood $N_j$, $w_{bj}$ is considered the similarity probability between the two patches $n_b$ and $n_j$ surrounding pixels $b$ and $j$, respectively [24]. An entropy prior of all the weights $w_{bj}$ for each $N_j$ is introduced into our model in the form $-\sum_j \sum_{b \in N_j} w_{bj} \ln w_{bj}$. Here, as in the "FRAME" model in [25], the constraint is $\sum_{b \in N_j} w_{bj} = 1$. Then, based on the joint MAP theorem [12], we can obtain

$$ P(f, w, \eta \mid g) \propto P(g \mid f)\,P(f \mid w, \eta)\,P(w, \eta). \qquad (2.7) $$

Here, $\eta$ is the introduced pixel-wise Lagrange-multiplier parameter. For each pixel $j$, with $\hat f$ denoting the current image estimate, the prior energy $U_{\text{LS-TPV}}(f_j, w, \eta \mid \hat f)$ of the LS-TPV prior combines a weighted patch similarity energy, an entropy energy $-\sum_j \sum_{b \in N_j} w_{bj} \ln w_{bj}$, and the normalization constraint $\sum_{b \in N_j} w_{bj} = 1$. We thus obtain the following posterior energy function:

$$ \begin{aligned} \psi_\beta(f, w, \eta \mid g) &= \log P(g \mid f) + \log P(f \mid w, \eta) + \log P(w, \eta) = L(g,f) - \beta \sum_j U_{\text{LS-TPV}}\bigl(f_j, w, \eta \mid \hat f\bigr) \\ &= -\frac{1}{2}\|g - A * f\|^2 - \beta \sum_j \left( \left( \sum_{b \in N_j} w_{bj} d_{bj} + h \sum_{b \in N_j} w_{bj} \ln w_{bj} \right) + \eta_j \left( \sum_{b \in N_j} w_{bj} - 1 \right) \right). \end{aligned} \qquad (2.8) $$

Here, as in (1.4), $\beta$ is the global parameter controlling the balance between the likelihood energy and the prior energy. The patch distance $d_{bj}$ is calculated as $\|G_\alpha * n_b - G_\alpha * n_j\|^2$, in which $G_\alpha$ denotes the Gaussian convolution kernel with standard deviation $\alpha$. $\eta_j$ is the value of the Lagrange-multiplier parameter at pixel $j$. The parameter $h$ controls the maximum-entropy constraint on $w$ and is routinely modulated with respect to the noise level (a large $h$ is routinely set to suppress more noise in high-noise situations).
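For one pixel $j$, the bracketed summand of (2.8) can be evaluated as in the following sketch (illustrative; the weights are assumed strictly positive so that the entropy term is finite).

    import numpy as np

    def lstpv_pixel_energy(w, d, h, eta_j):
        # One summand of (2.8): weighted patch distances + h * sum(w * ln w)
        # + Lagrange term eta_j * (sum(w) - 1); w and d are arrays over b in N_j.
        w, d = np.asarray(w, float), np.asarray(d, float)
        return np.sum(w * d) + h * np.sum(w * np.log(w)) + eta_j * (np.sum(w) - 1.0)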

3. Restoration Algorithm

To obtain a tractable maximization of the posterior probability using the proposed LS-TPV prior energy $U_{\text{LS-TPV}}$ in (2.8), we use the iterative coordinate descent (ICD) algorithm, in which each element is sequentially updated by maximizing the log-style local conditional probability function [29]. In this way, the objective image $f$ and the weights $w$ are jointly estimated through alternating maximization of $\psi_\beta(f, w, \eta \mid g)$ with respect to $f$ and $w$.

(1) $\arg\max_w -U_{\text{LS-TPV}}(w, \eta \mid \hat f)$.

Here $\hat f$ denotes the fixed current image estimate, and the weights $w_{bj}$ for each pixel $j$ are updated by solving the following singleton optimization problem:

$$ \arg\max_{w_{bj}} \left( -\sum_{b \in N_j} w_{bj} \left\| G_\alpha * n_b - G_\alpha * n_j \right\|^2 - h \sum_{b \in N_j} w_{bj} \ln w_{bj} - \eta_j \left( \sum_{b \in N_j} w_{bj} - 1 \right) \right). \qquad (3.1) $$

With the Lagrange multiplier $\eta_j$ introduced, by jointly solving (3.1) with respect to each $w_{bj}$ and $\eta_j$, we obtain

$$ w_{bj} = \frac{\exp\left(-\left\|G_\alpha * n_b - G_\alpha * n_j\right\|^2 / h\right)}{\sum_{b' \in N_j} \exp\left(-\left\|G_\alpha * n_{b'} - G_\alpha * n_j\right\|^2 / h\right)}, \qquad (3.2) $$

where $\|G_\alpha * n_b - G_\alpha * n_j\|^2$ can be easily calculated using the currently available $\hat f$. Each $w_{bj}$ is determined not only by the similarity metric between patches $n_b$ and $n_j$, but also by the sum of all the similarity metrics between patches whose central pixels are located in $N_j$. So $w_{bj}$ is generally different from $w_{jb}$.
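In code, the closed-form update (3.2) is a softmax of the negative patch distances; a minimal sketch for one centre pixel $j$ is given below (the function name is ours; the shift by the minimum distance is a standard numerical-stability trick that does not change the result).

    import numpy as np

    def update_weights(patch_dists, h):
        # Closed-form weight update (3.2) for one centre pixel j: a softmax of the
        # negative Gaussian-smoothed patch distances; the weights sum to one.
        d = np.asarray(patch_dists, float)
        w = np.exp(-(d - d.min()) / h)
        return w / w.sum()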

(2) $\arg\max_f \psi_\beta(f, \hat w, \hat\eta \mid g)$.

Here $\hat w$ denotes the fixed current weight estimate. Omitting the terms that do not involve $f$, $\psi_\beta(f, \hat w \mid g)$ becomes

$$ \psi_\beta\bigl(f, \hat w \mid g\bigr) = L(g,f) - \beta \sum_j \left( \sum_{b \in N_j} \hat w_{bj} \left\| G_\alpha * n_b - G_\alpha * n_j \right\|^2 \right). \qquad (3.3) $$

To make the maximization of (2.8) tractable, based on the theories of pseudolikelihood and the iterated conditional mode (ICM) in [29], we can restore each $f_j$ by solving the following factorized problem:

$$ \arg\max_{f_j} \psi_\beta\bigl(f_j, \hat w_{bj} \mid g, \hat f\bigr) \;\Longrightarrow\; \arg\max_{f_j} \left( L(f,g) - \beta \sum_{b \in N_j} \hat w_{bj} \left\| G_\alpha * n_b - G_\alpha * n_j \right\|^2 \right). \qquad (3.4) $$

Considering the convexity of $\|g - A * f\|^2$ and $\|G_\alpha * n_b - G_\alpha * n_j\|^2$, we conclude that, given the fixed weights $\hat w$, the Hessian matrix of each $\psi_\beta(f_j, \hat w_{bj} \mid g, \hat f)$ in (3.4) is negative definite. In this image-updating step, we apply the restoration algorithm in [4] to solve (3.4) and obtain the image estimate at each pixel $f_j$. The function $\psi_\beta(f, w, \eta \mid g)$ in (2.8) is separately concave with respect to $f$ and $w$. Denoting $\hat f^{\,n}, \hat w^{\,n}$ and $\hat f^{\,n+1}, \hat w^{\,n+1}$ as the $n$th and $(n+1)$th iterated estimates, we obtain the relation $\psi_\beta(\hat f^{\,n+1}, \hat w^{\,n+1} \mid g) \ge \psi_\beta(\hat f^{\,n}, \hat w^{\,n+1} \mid g) \ge \psi_\beta(\hat f^{\,n}, \hat w^{\,n} \mid g)$, which implies that convergence to a local maximum can be obtained for this joint updating strategy [30].

In each iteration of the proposed restoration, it is very expensive to perform the pixel-wise calculations of all the $d_{bj}$ and $w_{bj}$ for every pair of translated patches $n$ in each $N$ over the whole image region. In the experiments, to save computation cost, we do not extend the sizes of $N$ to the whole image region, but set them to appropriate sizes. We achieve significant acceleration by using the parallel Compute Unified Device Architecture (CUDA) [26–28]. For each iteration of the restoration using the proposed LS-TPV prior model, all threads in the block-grid structure execute simultaneously to perform the involved pixel-wise operations. Furthermore, considering the symmetry property that the distance $d_{bj}$ equals the distance $d_{jb}$, we can further halve the computation cost by calculating only one of the two distances $d_{bj}$ and $d_{jb}$.
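The structure of this computation is sketched below for one centre pixel (illustrative Python rather than the actual CUDA kernels; smoothing the whole image once with a Gaussian filter is used here as a stand-in for convolving each patch with $G_\alpha$, and the patches are assumed to stay inside the image).

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def patch_distances(f, j, search_radius, patch_radius, alpha):
        # d_bj = ||G_alpha * n_b - G_alpha * n_j||^2 for every b in the search
        # neighbourhood N_j. Each centre j is processed independently, which is
        # what maps naturally onto one CUDA thread per pixel; the symmetry
        # d_bj = d_jb allows half the pairs to be skipped in a full implementation.
        fs = gaussian_filter(f, sigma=alpha)
        (y, x), r, s = j, patch_radius, search_radius
        ref = fs[y - r:y + r + 1, x - r:x + r + 1]
        dists = {}
        for dy in range(-s, s + 1):
            for dx in range(-s, s + 1):
                by, bx = y + dy, x + dx
                cand = fs[by - r:by + r + 1, bx - r:bx + r + 1]
                dists[(by, bx)] = float(np.sum((cand - ref) ** 2))
        return dists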

4. Experiments

Both simulated and real data are used in the experiments. To provide comparative results, we also performed Wiener deconvolution, ForWaRD restoration, and Bayesian restorations using the TV prior, the Huber prior, and the SA prior in [21]. The Wiener deconvolution was performed by using the degraded image to estimate the power spectrum in the Fourier domain. The TwIST algorithm in [7] and the Newton-Raphson algorithm in [12] are used in the restorations using the TV prior and the Huber prior, respectively. The restored images from the above Wiener filter method are taken as the initial images for all the Bayesian restorations. In the experiments, the parameters that need to be preset include $\beta$ for all Bayesian restorations, the threshold $\gamma$ for the Huber prior, the wavelet threshold and decomposition level in the ForWaRD method, $\delta$ for the SA prior, and $h$ and the $n$ and $N$ settings for the proposed LS-TPV prior. In the simulation experiment, all the parameters are set based on the SNR maximization criterion, with the SNR calculated as

$$ \mathrm{SNR} = 10 \log_{10} \left( \frac{\sum_j^M \bigl(F_j - \bar F\bigr)^2}{\sum_j^M \bigl(F_j - \hat f_j\bigr)^2} \right). \qquad (4.1) $$

Here $M$, $\hat f$, $F$, and $\bar F$ denote the total number of pixels in the image, the restored image, the original true image, and the mean of the original true image, respectively. As for the real data experiment, which has no available true image for SNR calculations, all the parameters are set based on the tradeoff between edge preservation and noise suppression. An $11 \times 11$ $N$ and a $7 \times 7$ $n$ are used in our work. A full study of the effects of the sizes of $N$ and $n$ on the resulting restorations is presented in Section 4.5.
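The SNR of (4.1) is straightforward to compute; a small sketch (the function name is ours) is:

    import numpy as np

    def snr_db(f_true, f_restored):
        # SNR of (4.1): variance of the true image around its mean over the
        # squared restoration error, in dB.
        signal = np.sum((f_true - f_true.mean()) ** 2)
        error = np.sum((f_true - f_restored) ** 2)
        return 10.0 * np.log10(signal / error)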

Generally, the solution to the restoration is considered converged when the iterations become stable. So for the Bayesian restorations using the TV prior, Huber prior, and SA prior, in which the energy functions are maximized with respect to the image only, we stop the iterations when the current image estimates satisfy the condition $\psi_\beta(\hat f^{\,n+1} \mid g) - \psi_\beta(\hat f^{\,n} \mid g) < 0.995\,(\psi_\beta(\hat f^{\,n} \mid g) - \psi_\beta(\hat f^{\,n-1} \mid g))$. For the restorations using the proposed LS-TPV prior, in which the energy function is maximized with respect to both the image and the weights, we stop the iterations when the current image and weight estimates satisfy the condition $\psi_\beta(\hat f^{\,n+1}, \hat w^{\,n+1} \mid g) - \psi_\beta(\hat f^{\,n}, \hat w^{\,n} \mid g) < 0.995\,(\psi_\beta(\hat f^{\,n}, \hat w^{\,n} \mid g) - \psi_\beta(\hat f^{\,n-1}, \hat w^{\,n-1} \mid g))$. In practical experiments, it is found that 411, 433, 475, and 486 iterations are required to reach the above stopping conditions for the Bayesian restorations using the TV prior, Huber prior, SA prior, and the proposed LS-TPV prior, respectively.
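The stopping rule amounts to comparing the gains in posterior energy of consecutive iterations; a sketch of the test (function name ours) is:

    def should_stop(psi_prev, psi_curr, psi_next, rho=0.995):
        # Stop when the latest gain in posterior energy falls below rho times the
        # previous gain; psi_* are the energies of three consecutive iterates.
        return (psi_next - psi_curr) < rho * (psi_curr - psi_prev)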

4.1. Simulation Experiment

In this section, simulated experiments with the 256×256 "Lena" image (intensity range [-98, 117]) are performed. In the simulation, a uniform 9×9 PSF is assumed, and Gaussian noise with variance 0.1663 is added. The PSF used in this experiment is normalized to 1. Figure 1(b) is the degraded image simulated by (1.1) using the PSF and the white Gaussian noise. The parameters for the different restorations are listed in Table 1. All the restored images are illustrated in Figures 1(c)–1(h).


Table 1: Parameter settings for the restorations in the simulation experiment.

TV prior: β = 0.0038
Huber prior: β = 0.026, γ = 0.35
ForWaRD: threshold = 3.0, levels = 3
SA prior: β = 0.015, δ = 900, 7×7 n, 11×11 N
LS-TPV prior: β = 0.075, α = 0.5, h = 150, 7×7 n, 11×11 N

The restored images from TV prior restoration and Huber prior restoration are, respectively, shown in Figures 1(d) and 1(e). Figure 1(f) shows the restored image using the ForWaRD method. The result of Bayesian restoration using the SA prior is illustrated in Figure 1(g). For the Bayesian restoration using the proposed LS-TPV prior, the algorithm in Section 3 is used, and the corresponding restored image is shown in Figure 1(h).

Through the results, we find that the ForWaRD method and all the Bayesian restorations (Figures 1(d)–1(h)) can overcome the ringing effects seen in the Wiener deconvolution (Figure 1(c)). The block artifacts (the left arrows), which are observed in the ForWaRD approach and in the restorations using the Huber prior and TV prior, are effectively suppressed by the restorations using the SA prior and the proposed LS-TPV prior. The LS-TPV prior (Figure 1(h)) presents a further visual enhancement over the SA prior (Figure 1(g)) in block suppression (the left arrows) and edge preservation (the right arrows). Figure 2 plots the profiles along one specified track (the red line in the hair region of the "Lena" image in Figure 2) for the different restorations, and we can see that the profile from the proposed restoration (the blue profile) matches the profile of the true "Lena" image (the red profile) more closely than those from the other restorations. Table 2 shows that, compared to the other methods, the proposed LS-TPV prior restoration leads to restored images with a higher SNR.


Table 2: SNR (dB) of the degraded image and of the different restorations in the simulation experiment.

Degraded: 6.65, Wiener: 15.06, TV prior: 15.54, Huber prior: 15.59, ForWaRD: 15.50, SA prior: 15.84, LS-TPV prior: 16.02

4.2. Real Data Experiment

In this section, a real blurred astronomical 256×256 "moon" image with intensity range [-77, 170], obtained from http://www.iceinspace.com.au/, is used. Here we use a 5×5 normalized Gaussian PSF, which was found to be effective. The parameters for the different restoration approaches are listed in Table 3.


Table 3: Parameter settings for the restorations in the real data experiment.

TV prior: β = 0.059
Huber prior: β = 0.045, γ = 0.45
ForWaRD: threshold = 3.5, levels = 3
SA prior: β = 0.032, δ = 700, 7×7 n, 11×11 N
LS-TPV prior: β = 0.095, α = 0.5, h = 325, 7×7 n, 11×11 N

Figure 3(b) shows the restored result from Wiener deconvolution. The restored images from the TV prior restoration and the Huber prior restoration are illustrated in Figures 3(c) and 3(d), respectively. Figures 3(e) and 3(f) show the restored images from the ForWaRD method and the Bayesian restoration using the SA prior. For the Bayesian restoration using the LS-TPV prior, the restored image is shown in Figure 3(g). We can note that the ForWaRD method and all the Bayesian restorations (Figures 3(c)–3(g)) are free of the ringing effects seen in the Wiener deconvolution (Figure 3(b)). Also, the block artifacts observed in the restorations using the Huber prior and TV prior are effectively suppressed in the restorations using the SA prior and the LS-TPV prior (the lower red arrows in Figures 3(f) and 3(g)). The oversmoothing in the ForWaRD restoration is not observed in the restorations using the SA prior and the proposed LS-TPV prior (the upper arrows in Figures 3(e), 3(f), and 3(g)). Compared to the restoration using the SA prior, the restoration using the LS-TPV prior also shows better visual performance in preserving edges (the upper arrows in Figures 3(f) and 3(g)).

4.3. Computation Cost Comparisons

Table 4 lists the recorded CPU times (in seconds) of the different restorations for the experiments with simulated data and real data. Note that the notations simu and real in Table 4 correspond to the experiments with simulated data and real data, respectively. We can see that the restorations using the SA prior and the LS-TPV prior are much more computationally intensive than the other methods. We define LS-TPV prior res1, LS-TPV prior res2, and LS-TPV prior res3 as the original serial restoration, the restoration after optimization, and the restoration after both optimization and parallelization, respectively. Table 4 shows that, with respect to the original serial version, the proposed restoration after optimization is about 3 times faster, and the restoration after both optimization and parallelization is about 50 times faster. However, we cannot claim that the LS-TPV prior restoration needs less computation cost than the other methods, since we did not design parallelized versions for them.


Table 4: CPU times (in seconds) of the different restorations.

Wiener filtering: simu 58.16, real 64.48
TV prior res: simu 214.96, real 228.92
Huber prior res: simu 270.52, real 292.36
ForWaRD res: simu 2128.27, real 2411.15
SA prior res: simu 2128.27, real 2382.43
LS-TPV prior res1: simu 2885.61, real 3058.07
LS-TPV prior res2: simu 910.48, real 954.48
LS-TPV prior res3: simu 58.61, real 64.45

4.4. Algorithm Monotonicity

Here, to analyze the monotonicity of the joint image/weight algorithm proposed in Section 3, we calculated the posterior energy $\psi_\beta(f, w, \eta \mid g)$ and the SNR with respect to the iteration number for the proposed restorations. The total iteration number is set to 500. As illustrated in Figure 4, both the calculated posterior energy and the SNR increase monotonically throughout the iterations.

4.5. Study on Neighborhoods N and n of Different Sizes

To study the roles of the sizes of $N$ and $n$ in the restorations, we perform restorations using the proposed prior with $N$ and $n$ of different sizes in the above simulation experiment with the "Lena" image. To characterize local image structures, we do not set $n$ larger than 7×7. Considering that the total number of pixels differs for comparing patches $n$ of different sizes (49 for a 7×7 $n$, 9 for a 3×3 $n$, and 1 for a 1×1 $n$), the parameter $h$ should also change with the patch size used in the restoration. The hyperparameter $\beta$ and the parameter $h$ for the proposed prior are chosen by hand to produce the best stable images in terms of SNR maximization. The combinations of $N$ and $n$ of different sizes, together with the corresponding computed SNR and recorded CPU time cost (with CUDA parallelization), are listed in Table 5.

Table 5: SNR and CPU times of the proposed restorations with N and n of different sizes.

(a) N 3×3, n 1×1: SNR 15.48, 17.98 sec
(b) N 3×3, n 3×3: SNR 15.65, 28.59 sec
(c) N 3×3, n 7×7: SNR 15.79, 53.69 sec
(d) N 7×7, n 1×1: SNR 15.54, 21.16 sec
(e) N 7×7, n 3×3: SNR 15.70, 31.37 sec
(f) N 7×7, n 7×7: SNR 15.88, 50.91 sec
(g) N 11×11, n 1×1: SNR 15.55, 24.59 sec
(h) N 11×11, n 3×3: SNR 15.72, 38.42 sec
(i) N 11×11, n 7×7: SNR 16.02, 58.61 sec
(j) N 13×13, n 1×1: SNR 15.55, 29.54 sec
(k) N 13×13, n 3×3: SNR 15.75, 77.67 sec
(l) N 13×13, n 7×7: SNR 16.07, 95.31 sec

We can see that a higher SNR can be obtained when enlarging $N$ and $n$ from 1×1 up to 11×11 and 7×7, respectively. We also note in Table 5 that more CPU time is needed for the restorations using the proposed prior with larger $N$ and $n$. We should therefore set the sizes of $N$ and $n$ based on the tradeoff between performance and computation cost. We find in Table 5 that no significant SNR difference is obtained between the 11×11 $N$ and the larger 13×13 $N$. In fact, as the iteration proceeds, more global information beyond the 11×11 $N$ is incorporated into the regularization of the pixels.

5. Conclusions and Future Work Plan

In this paper on image restoration, we proposed an LS-TPV prior which penalizes the total distances between neighboring patches through a constrained mixture prior model. A convergent joint image/weight updating algorithm, which estimates the image and the weights sequentially, is proposed to overcome the heuristic OSL weight determination in the SA prior restoration of [21]. We can see that, in addition to providing effective regularization, the proposed approach leads to stable iterations with guaranteed convergence.

Applying the proposed prior model requires a pixel-wise computation of the distances between neighboring patches over a large region, which implies a high computation cost for the restorations. In this paper, a remarkable reduction of the computation time is achieved by optimizing the patch distance computation and by replacing the original serial pixel-wise calculation of patch distances with a parallel implementation in the CUDA framework.

Further work includes applying the proposed approach to blind image deconvolution, further validating the proposed approach using more image-quality measures based on the human visual system, and further accelerating the computation by parallelizing the PSF convolution in the restorations.

Acknowledgments

This research was supported by the National Basic Research Program of China under Grant 2010CB732503, the National Natural Science Foundation of China under Grant 81000636, and the Natural Science Foundation of Jiangsu Province under Grant BK2009012.

References

1. A. Srivastava, A. B. Lee, E. P. Simoncelli, and S. C. Zhu, "On advances in statistical modeling of natural images," Journal of Mathematical Imaging and Vision, vol. 18, no. 1, pp. 17–33, 2003.
2. S. Tan and L. Jiao, "A unified iterative denoising algorithm based on natural image statistical models: derivation and examples," Optics Express, vol. 16, no. 2, pp. 975–992, 2008.
3. R. Molina, "On the hierarchical Bayesian approach to image restoration: applications to astronomical images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 11, pp. 1122–1128, 1994.
4. G. Archer and D. M. Titterington, "On some Bayesian/regularization methods for image restoration," IEEE Transactions on Image Processing, vol. 4, no. 7, pp. 989–995, 1995.
5. S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, no. 6, pp. 721–741, 1984.
6. N. P. Galatsanos and A. K. Katsaggelos, "Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation," IEEE Transactions on Image Processing, vol. 1, no. 3, pp. 322–336, 1992.
7. J. M. Bioucas-Dias and M. A. T. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2992–3004, 2007.
8. Y. Wan and R. D. Nowak, "Bayesian multiscale approach to joint image restoration and edge detection," in Proceedings of the International Society for Optical Engineering (SPIE '99), vol. 3813, pp. 73–84, 1999.
9. M. A. T. Figueiredo and R. D. Nowak, "An EM algorithm for wavelet-based image restoration," IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 906–916, 2003.
10. J. M. Bioucas-Dias, "Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors," IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 937–951, 2006.
11. R. Neelamani, H. Choi, and R. Baraniuk, "ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems," IEEE Transactions on Signal Processing, vol. 52, no. 2, pp. 418–433, 2004.
12. C. A. Bouman and K. Sauer, "A generalized Gaussian image model for edge-preserving MAP estimation," IEEE Transactions on Image Processing, vol. 2, no. 3, pp. 296–310, 1993.
13. D. F. Yu and J. A. Fessler, "Edge-preserving tomographic reconstruction with nonlocal regularization," IEEE Transactions on Medical Imaging, vol. 21, no. 2, pp. 159–173, 2002.
14. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, no. 1–4, pp. 259–268, 1992.
15. A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.
16. M. Mignotte, "A segmentation-based regularization term for image deconvolution," IEEE Transactions on Image Processing, vol. 15, no. 7, pp. 1973–1984, 2006.
17. G. K. Chantas, N. P. Galatsanos, and A. C. Likas, "Bayesian restoration using a new nonstationary edge-preserving image prior," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 2987–2997, 2006.
18. G. Chantas, N. Galatsanos, and A. Likas, "Bayesian image restoration based on variational inference and a product of Student-t priors," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '07), pp. 109–112, Thessaloniki, Greece, 2007.
19. Y. Chen, J. Ma, Q. Feng, L. Luo, P. Shi, and W. Chen, "Nonlocal prior Bayesian tomographic reconstruction," Journal of Mathematical Imaging and Vision, vol. 30, no. 2, pp. 133–146, 2008.
20. A. Buades, B. Coll, and J. M. Morel, "A nonlocal algorithm for image denoising," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 60–65, San Diego, Calif, USA, 2005.
21. W. Chen, Y. Chen, Y. Li, Y. Dong, L. Hao, and L. Luo, "Effective image restorations using a novel spatial adaptive prior," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 508089, 2010.
22. S. Lyu, "An implicit Markov random field model for the multi-scale oriented representations of natural images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1919–1925, 2009.
23. G. Winkler, Image Analysis, Random Fields and Markov Chain Monte Carlo Methods, vol. 27 of Applications of Mathematics, Springer, Berlin, Germany, 2nd edition, 2003.
24. G. Peyré, S. Bougleux, and L. Cohen, "Non-local regularization of inverse problems," in Proceedings of the European Conference on Computer Vision, pp. 57–68, 2008.
25. S. C. Zhu, Y. Wu, and D. Mumford, "FRAME: filters, random fields and maximum entropy: towards a unified theory for texture modeling," International Journal of Computer Vision, vol. 27, no. 2, pp. 1–20, 1998.
26. NVIDIA CUDA Programming Guide (Version 3.0), http://developer.download.nvidia.com/compute/cuda/3_2/toolkit/docs/CUDA_C_Programming_Guide.pdf.
27. M. Fatica and W. Jeong, "Accelerating MATLAB with CUDA," in Proceedings of the High Performance Embedded Computing Workshop, Lexington, Massachusetts, USA, September 2007.
28. Y. Chen, Z. Zhou, W. F. Chen, X. D. Yin et al., "Improving low-dose X-ray CT images by weighted intensity averaging over large-scale neighborhood," European Journal of Radiology, 2010, in press.
29. C. A. Bouman and K. Sauer, "A unified approach to statistical tomography using coordinate descent optimization," IEEE Transactions on Image Processing, vol. 5, no. 3, pp. 480–492, 1996.
30. W. Chen, M. Chen, and J. Zhou, "Adaptively regularized constrained total least-squares image restoration," IEEE Transactions on Image Processing, vol. 9, no. 4, pp. 588–596, 2000.

Copyright Ā© 2011 Yang Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
