Abstract

The objective of image denoising is to retain useful details while removing as much noise as possible so as to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. The results of comparative experiments indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and block-matching 3D transformation (BM3D). Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of BM3D.

1. Introduction

The objective of image denoising, a classical but still very active area of image processing, is to retain useful details while removing as much noise as possible so as to recover the original image from a noisy image [1]. Following the development of the fast wavelet decomposition method by Mallat [2], the wavelet transform has been used extensively in various fields. Its use rapidly spread from mathematics and signal processing to other areas, including image denoising, an important image processing technique that developed quickly once wavelets were introduced.

The threshold method, developed by Donoho [3] in 1995, provides a viable means of nonlinear processing of wavelet coefficients and, consequently, significantly advanced the field of image denoising. Other feasible wavelet-based approaches have also been developed. Hard and soft thresholds emerged first, followed by Bayesian threshold methods [4, 5] that significantly enhanced the denoised results. More recently, various intrascale correlation coefficient threshold methods and a coefficient threshold method for high-dimensional spaces [6] have also been developed. Through the continuous improvement of the threshold method, wavelet denoising has achieved increasingly better results, supported by the underlying wavelet theory, and the denoising effect continues to improve as the scaling relations in the transform domain are actively studied.

In general, there are two key questions associated with image denoising based on wavelet domain statistical models: (i) how to choose the prior distribution model of the wavelet coefficients and (ii) how to determine the appropriate denoising algorithm with respect to the distribution model. The most important topic in wavelet denoising is the statistical model of the wavelet coefficients. The objective of research in this area is to accurately model the wavelet coefficients, which are non-Gaussian and correlated with each other. Chang et al. [7] modelled the prior distribution of the wavelet coefficients of the original image using a generalized Gaussian model and developed the BayesShrink denoising algorithm. Crouse et al. [8] carried out denoising using a hidden Markov tree model, whereas Portilla et al. [9] used a Gaussian scale mixtures model. Şendur and Selesnick [10] proposed a bivariate model and a corresponding BiShrink algorithm that achieved good image denoising performance. Achim et al. [11] proposed an alpha-stable distribution to approximate the distribution of the wavelet coefficients of SAR images. Xie et al. [12] used a finite Gaussian mixture model to approximate the distribution of wavelet coefficients. Although each of these models reflects only a portion of the feature information of the wavelet coefficients, they have contributed to improvements in image denoising.

Sparse representation based on an overcomplete dictionary is a novel image representation theory. Various structural features of an image can be effectively captured by exploiting the redundancy of the overcomplete dictionary, ultimately yielding an effective representation of the image. The K-SVD algorithm [13], which adaptively updates the dictionary based on K-means clustering, has exhibited good performance in image compression and restoration. The SA-DCT approach [14] exploits a shape-adaptive transform on neighbourhoods whose shapes adapt to salient image details and thus predominantly contain homogeneous signals; the shape-adaptive transform can achieve a very sparse representation of the true signal in these adaptive neighbourhoods. Dabov et al. developed a block-matching 3D transformation (BM3D) iterative image reconstruction algorithm [15] that is considered one of the best image denoising algorithms in the literature.

This paper combines current multiscale, multiresolution analysis to propose a novel nonsubsampled contourlet transform (NSCT) denoising scheme based on the normal inverse Gaussian (NIG) probability density function (PDF). The proposed scheme uses the NIG distribution to obtain the Bayesian maximum a posteriori estimator of the NSCT coefficients, which are heavy-tailed in nature. First, the parameters of the NIG model are adaptively estimated via a local window that models the intrascale dependency between coefficients. Then, an optimal linear interpolation thresholding algorithm, OLI-Shrink [16], is utilised. Because the NSCT is translation invariant and has sufficient redundant information, the proposed algorithm can effectively extract image orientation information and better meet human visual requirements. Experimental results show that the proposed algorithm performs better than other image denoising methods, including nonsubsampled contourlet transform domain denoising and BLS-GSM [9]. Even though the proposed scheme is a probability-based algorithm, it achieves outstanding denoising performance in terms of both peak signal-to-noise ratio (PSNR) and subjective visual quality. We explore the use of the proposed scheme for image denoising and show that it achieves better results than the K-SVD algorithm [13] for all noise levels and performs better than the BM3D algorithm [15] for most noise levels.

The remainder of this paper is organized as follows. Section 2 outlines the basic idea underlying NSCT and the basic image processing scheme. Section 3 discusses the edge coefficient of NSCT statistics modelling and imparts basic knowledge necessary to develop the proposed denoising algorithms, including NIG prior, maximum a posteriori probability, parameter estimation, and OLI-Shrink. Experimental results are presented and discussed in Section 4. Finally, concluding remarks are given in Section 5.

2. Nonsubsampled Contourlet Transform (NSCT)

2.1. Overview of NSCT

The wavelet transform is used extensively in image and voice signal processing because of its frequency localisation and multiscale, multiresolution properties. However, it only represents point singularities efficiently in a one-dimensional domain and is less efficient for line and curve singularities (edges) in a two-dimensional domain. The wavelet transform also suffers from several fundamental shortcomings, including poor directional selectivity, basis functions that lack anisotropy, and a lack of shift and orientation invariance [17]. Thus, the wavelet transform does not possess optimal properties for two-dimensional signals such as natural images. These drawbacks limit its ability to fully capture the directional information in natural images. Consequently, many researchers have begun to focus on methods of expressing image geometry structures more effectively in order to overcome the inability of the wavelet transform to adequately represent the geometric structure information of images. These activities have led to the introduction of the ridgelet transform [18], the curvelet transform [19], and the contourlet transform [20] for the singular analysis of two-dimensional or higher images, techniques that can achieve good sparsity for spatially localised details, such as edges and singularities. Because such details are typically abundant in natural images and convey a significant portion of the information embedded therein, these transforms have become useful in image processing [21, 22].

NSCT, proposed by da Cunha et al. in 2006, outperforms the abovementioned transforms [23] in image processing. Its main properties are multiresolution, localisation, directionality, and anisotropy. The directionality property, in particular, permits NSCT to resolve intrinsic directional features that characterise the analysed image. NSCT is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks, which enable it to realise a flexible, multiscale, multidirectional, and shift-invariant image decomposition that can be efficiently implemented via the à trous algorithm. At the core of the NSCT scheme is the nonseparable, two-channel, nonsubsampled filter bank. The nonsubsampled pyramid decomposition structure is realised via a multistage iterative process: an image is decomposed by filters into a two-dimensional low-frequency subband and a two-dimensional high-frequency subband, and the multiscale structure is obtained by iteratively filtering the low-frequency subband. Compared to the contourlet transform, the filters of NSCT have better frequency selectivity and regularity and can therefore better perform subband direction decomposition. In contrast to wavelet decomposition, NSCT is a multiscale, multiresolution, and multidirectional analysis method. Because of the flexibility of the direction decomposition achievable using NSCT, more detailed information about the image can be obtained during the image decomposition process. The resulting transform not only has the multiscale and time-frequency-localisation properties of wavelets but also offers a high degree of directionality and anisotropy. Figure 1 illustrates the NSCT processing scheme [24].

2.2. Image Denoising Method Based on NSCT

The objective of image denoising is to recover an image from its noisy version. In general, a denoising problem can be described as follows. Let f be the original image of size N × N and let g be the observed image, which has been corrupted by additive white Gaussian noise n with zero mean and variance σ_n². Let (i, j) be a pixel position in the image. Then, the corrupted image is assumed to satisfy

g(i, j) = f(i, j) + n(i, j). (1)

The goal of denoising is to construct the optimal approximation f̂ of f from the observed data g by minimizing the mean square error (MSE) between the optimal approximation and the original signal:

MSE = (1/N²) Σ_{i,j} (f̂(i, j) − f(i, j))². (2)
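
As a minimal numpy sketch of this degradation model and the MSE criterion (the image and noise level here are arbitrary toy values chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(f, sigma):
    """Corrupt an image f with zero-mean white Gaussian noise of std sigma."""
    return f + rng.normal(0.0, sigma, size=f.shape)

def mse(f_hat, f):
    """Mean squared error between an estimate and the original image."""
    return float(np.mean((f_hat - f) ** 2))

f = np.zeros((64, 64))            # toy "original image"
g = add_gaussian_noise(f, 25.0)   # observation g = f + n
```

Trivially, mse(f, f) is zero, and the empirical standard deviation of g − f approaches the chosen noise level as the image grows.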

The basic steps in the threshold denoising algorithm are as follows:

(1) Determine the decomposition level of the NSCT and perform the NSCT on the noisy image g to obtain the high- and low-frequency coefficients of the decomposition.

(2) Determine the correct threshold value and perform threshold processing on the high-frequency coefficients in the NSCT domain to obtain the new transform coefficients, while leaving the low-frequency coefficients unchanged. The threshold processing includes soft and hard threshold processing.

(3) Perform the inverse NSCT on the processed high-frequency coefficients and the low-frequency coefficients to obtain the denoised image f̂, which is the estimation of the original image f.
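
The three steps above can be sketched in Python. Since the NSCT has no standard library implementation, a one-level 2-D Haar transform stands in for the decomposition here (an assumption made purely for illustration); only the high-frequency subbands are thresholded, and the low-frequency subband passes through unchanged:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar analysis: returns (LL, LH, HL, HH) subbands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # column lowpass
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # column highpass
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[0::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x

def denoise(g, T):
    """Steps (1)-(3): decompose, hard-threshold the detail subbands, invert."""
    ll, lh, hl, hh = haar2d(g)
    lh, hl, hh = (np.where(np.abs(c) > T, c, 0.0) for c in (lh, hl, hh))
    return ihaar2d(ll, lh, hl, hh)
```

The forward/inverse pair is perfectly reconstructing, so all loss introduced by denoise comes from the thresholding step alone, mirroring the scheme described above.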

The key issue in threshold denoising is the selection of the threshold value, which directly determines the denoising effect. A relatively small threshold value retains as many decomposition coefficients as possible and thus more of the detailed information in the image; however, it may also retain an undesired amount of noise in the denoised result. Conversely, a relatively large threshold value may destroy the high-frequency information of the image and produce a false Gibbs phenomenon in the denoised image.

3. Marginal Statistical Modelling on the NSCT Subband Coefficients

3.1. Normal Inverse Gaussian Distribution

The important features of image statistics in the NSCT domain are that they are non-Gaussian, with high kurtosis, a sharp central peak, and heavy tails. These features are expected because images primarily comprise homogeneous regions together with important details such as edges; the homogeneous regions produce coefficients that are close to zero, and the edges produce a small number of coefficients with large magnitudes. The NIG model developed by Barndorff-Nielsen [25] is a normal variance-mean mixture distribution in which the inverse Gaussian PDF is used as the mixing distribution. In theory, this mixture model can overcome the disadvantages of traditional models that cannot meet such modelling needs; because its parameters can be selected flexibly, it can describe density curves of widely varying shapes. Consequently, we chose the NIG model to describe the distribution of the NSCT coefficients of an image. The probability density function can be defined as

f(x) = (αδ/π) · exp(δγ + β(x − μ)) · K₁(α√(δ² + (x − μ)²)) / √(δ² + (x − μ)²), (3)

where γ = √(α² − β²), 0 ≤ |β| ≤ α, δ > 0, and K₁ denotes a modified Bessel function of the second kind with index one. As can be seen, the NIG distribution is specified by the four parameters (α, β, μ, δ). The flexibility of these four parameters makes the NIG density a suitable model for a variety of unimodal, kurtotic data. Parameter α is the feature factor that controls the steepness of the density; a smaller value of α indicates a slower decay rate with heavier tails. Parameter β controls skewness: for β < 0, the density is skewed to the left; for β > 0, the density is skewed to the right; and β = 0 implies a density that is symmetric around μ, which is a translation parameter. The parameter δ is scale-like.

For most images, the corresponding decomposition coefficients follow generally symmetric distributions; thus, β = 0 and μ = 0 are assumed. Consequently, the NIG probability density function can be simplified as follows:

f(x) = (αδ/π) · exp(δα) · K₁(α√(δ² + x²)) / √(δ² + x²). (4)
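
The symmetric density above can be evaluated numerically with plain numpy; the modified Bessel function K₁ is computed here from its integral representation K₁(z) = ∫₀^∞ e^(−z·cosh t) cosh t dt rather than taken from scipy, so the sketch is self-contained (the parameter values below are arbitrary illustrations, not fitted values):

```python
import numpy as np

def bessel_k1(z):
    """Modified Bessel function K1 via its integral representation (z > 0)."""
    t = np.linspace(0.0, 15.0, 2001)
    z = np.atleast_1d(np.asarray(z, dtype=float))
    integrand = np.exp(-z[:, None] * np.cosh(t)) * np.cosh(t)
    return np.trapz(integrand, t, axis=1)

def nig_pdf(x, alpha, delta):
    """Symmetric NIG density (beta = 0, mu = 0) with parameters alpha, delta."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    s = np.sqrt(delta ** 2 + x ** 2)
    return (alpha * delta / np.pi) * np.exp(delta * alpha) * bessel_k1(alpha * s) / s
```

Plotting nig_pdf against a Gaussian of the same variance reproduces the sharp peak at zero and the heavy tails described in the text.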

3.2. Marginal Distribution of the NSCT Coefficients

In recent years, the Bayes denoising algorithm, which is based on a statistical model of the coefficients in the transform domain, has become a hot topic in the field of image denoising. By establishing a reasonable prior probability distribution model for the marginal distribution of the coefficients, it estimates the original image using Bayesian theory and balances the suppression of noise against the retention of signal better than other denoising algorithms. The Bayes algorithm is more flexible than conventional threshold algorithms; the key to its success is to first model the prior marginal distribution of the coefficients accurately. The main area of research in this case is therefore the marginal distribution statistical models of the NSCT coefficients of images.

Because its model parameters can be selected flexibly, the NIG distribution can accurately model data with different tail behaviours [26]. This paper uses the NIG distribution to fit the marginal statistical distribution of the NSCT coefficients. Because an image typically comprises smooth areas and details such as edges, the decomposition coefficients corresponding to smooth regions are, for the most part, close to zero; only some edges correspond to high-magnitude transform coefficients. Thus, in the NSCT domain, the image decomposition coefficients show highly non-Gaussian features such as high kurtosis, sharp central peaks, and heavy tails.

We used an image of Barbara, which includes both texture and structure, to test how well the NIG distribution fits the NSCT coefficients. Figure 2 shows the results obtained. The histogram distribution (solid line) of the third-layer subband of the NSCT coefficients in the first direction is shown in the figure, with the NIG PDF (dashed line) and the Gaussian PDF (dotted line) fitted to this density. As shown in the figure, the NIG PDF accurately models the coefficient distribution, especially the heavy tails: it exhibits a sharp peak near zero and long tails on both sides of the peak, closely fitting the coefficients' histogram. It also fits the coefficient distributions of the other subbands well, and the histograms of the other test images exhibit similar statistical characteristics.

3.3. Bayesian Estimation

Assume that an original image f has been corrupted by additive white Gaussian noise:

g = f + n, (5)

where g denotes the observed noisy image and n denotes the white Gaussian noise. Given g, denoising attempts to recover f as accurately as possible. After the NSCT transformation of the observed image, the transformed decomposition coefficients are

y = x + n, (6)

where y, x, and n denote the NSCT coefficients of the noisy image, the noise-free original image, and the noise, respectively. The purpose of Bayesian denoising is to obtain x̂, the estimation of x. The Bayesian maximum a posteriori estimate can be denoted as follows:

x̂(y) = arg max_x p_{x|y}(x | y), (7)

where the conditional density of the observation y given x is denoted p_{y|x}(y | x).

The posterior p_{x|y}(x | y) can be calculated using the Bayes rule:

p_{x|y}(x | y) = p_{y|x}(y | x) · p_x(x) / p_y(y) = p_n(y − x) · p_x(x) / p_y(y). (8)

These equations allow us to write the estimation in terms of p_n(n) and p_x(x), which are the probability distribution of the noise coefficients and the prior distribution of the noise-free image coefficients, respectively. From the assumption on the noise, p_n(n) is a zero-mean Gaussian function with variance σ_n²; that is,

p_n(n) = (1 / (√(2π) σ_n)) · exp(−n² / (2σ_n²)). (9)

Solving (8) by using (9) and taking the logarithm of the objective, (8) can be written as

x̂(y) = arg max_x [ −(y − x)² / (2σ_n²) + f(x) ], (10)

where f(x) = ln p_x(x) is a convex, differentiable function. Setting the first derivative with respect to x to zero, x̂ can be computed to obtain the maximum a posteriori estimate; thus

(y − x̂) / σ_n² + f′(x̂) = 0. (11)

Because f′(x) is discontinuous and singular in the vicinity of zero, and the sign of x̂ can differ from that of y, (11) cannot be solved in closed form. Bhuiyan et al. [27] developed an approximate solution, (12), that overcomes this drawback, with the corresponding threshold value given by (13).

The parameters α and δ are estimated from every subband. As a result, (12) is adaptive to every subband and can remove noise from all the subbands adaptively. Equation (12) is also similar to the classical soft threshold function, but its threshold value (13) changes depending on the values of the coefficients: coefficients that are more likely to be noise receive larger threshold values, whereas coefficients that are more likely to be signal receive smaller ones.

3.4. Parameter Estimation

Image denoising via (12) requires estimates of the NIG distribution parameters α and δ and of the noise variance σ_n². The parameters used in the experiments can be estimated according to the actual distribution of the coefficients of the different subbands. For different noise levels, these parameters yield different degrees of threshold processing for the different decomposition coefficients.

Let κ₁, κ₂, κ₃, and κ₄ denote the first- to fourth-order cumulants of the noisy coefficients, respectively. From these, the skewness and kurtosis of the coefficients of the clean image can be obtained, and the parameters α and δ can then be estimated using the two moment-matching equations in (14).
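
The closed-form mapping from these cumulants to α and δ is the paper's (14) and is not reproduced here; the cumulant, skewness, and kurtosis step itself can be sketched as follows (biased sample versions, adequate for large subbands):

```python
import numpy as np

def sample_cumulants(y):
    """First four (biased) sample cumulants k1..k4 of the coefficients y."""
    y = np.asarray(y, dtype=float).ravel()
    k1 = y.mean()
    c = y - k1
    m2, m3, m4 = (np.mean(c ** p) for p in (2, 3, 4))
    return k1, m2, m3, m4 - 3.0 * m2 ** 2   # k2 = m2, k3 = m3, k4 = m4 - 3*m2^2

def skewness(y):
    _, k2, k3, _ = sample_cumulants(y)
    return k3 / k2 ** 1.5

def excess_kurtosis(y):
    _, k2, _, k4 = sample_cumulants(y)
    return k4 / k2 ** 2
```

For Gaussian data both statistics are near zero, whereas heavy-tailed data (e.g., Laplacian samples, whose excess kurtosis is 3) register the strongly positive kurtosis that motivates the NIG prior.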

3.5. Thresholding Algorithm

Denoising by wavelet is performed by the thresholding algorithm, in which coefficients smaller than a specific value, or threshold, are cancelled. It can also be applied in other transform domains, such as the NSCT domain. Hard and soft thresholding are two popular methods. In hard thresholding algorithms, any coefficient y whose magnitude is less than or equal to the threshold T is replaced with zero. That is,

ŷ = y if |y| > T, and ŷ = 0 otherwise. (15)

In soft thresholding algorithms, however, any coefficient y whose magnitude is less than the threshold is replaced with zero, and the others are modified by subtracting the threshold value in accordance with the following equation:

ŷ = sign(y) · max(|y| − T, 0). (16)
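
Both rules reduce to one-liners in numpy:

```python
import numpy as np

def hard_threshold(y, T):
    """Hard rule: keep coefficients with |y| > T, zero the rest."""
    return np.where(np.abs(y) > T, y, 0.0)

def soft_threshold(y, T):
    """Soft rule: zero small coefficients, shrink the others toward zero by T."""
    return np.sign(y) * np.maximum(np.abs(y) - T, 0.0)
```

Note that the soft rule preserves the sign of each surviving coefficient while reducing its magnitude by T, which is what removes the discontinuity at the threshold that the hard rule exhibits.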

Soft thresholding is more efficient and yields more visually pleasing images than hard thresholding, but it does not use the optimal value when modifying large coefficients. To overcome this limitation, Fathi and Naghsh-Nilchi [16] developed a new thresholding algorithm (OLI-Shrink) that modifies the dominant coefficients using an optimal linear interpolation, (17), between each coefficient and the mean value of the coefficients of the corresponding subband, with the threshold as in (12) and the interpolation weight computed from (18).

Here, the true value is estimated by a weighted linear interpolation between the unconditional mean of y and the observed value y. Even though this formula is derived under the assumption that the transform-domain coefficients in a subband of a natural image have Gaussian PDFs, it still functions efficiently.
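
The exact OLI-Shrink weighting is given in [16] and is not reproduced here. As an illustration of the idea the text describes — a weighted linear interpolation between the observed coefficient and the subband mean, optimal under a Gaussian assumption — the following sketch shows the classical linear-MMSE interpolation (an assumption-laden stand-in, not the authors' exact formula):

```python
import numpy as np

def lmmse_interpolate(y, subband_mean, signal_var, noise_var):
    """Weighted interpolation between an observed coefficient and the subband
    mean; the weight k -> 1 when the signal dominates and k -> 0 when the
    noise dominates, so noisy coefficients are pulled toward the mean."""
    k = signal_var / (signal_var + noise_var)
    return k * np.asarray(y, dtype=float) + (1.0 - k) * subband_mean
```

In the two limiting cases the estimator behaves as expected: with no noise it returns the observation unchanged, and with no signal variance it returns the subband mean.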

3.6. Monte-Carlo Estimates of NSCT Domain Noise Coefficient Variance

The nonorthogonal nature of the NSCT may lead to different noise variances in different directional subbands. Thus, the Monte-Carlo method is used to obtain the NSCT noise coefficient variance corresponding to each subband at each scale. In some image denoising applications, the value of the input noise variance is known or can be measured from information other than the corrupted image. In other cases, we can estimate the noise variance by applying the robust median estimator to the coefficients of the HH1 subband, as outlined by Donoho [3]:

σ_n = Median(|y_{i,j}|) / 0.6745, y_{i,j} ∈ HH1 subband. (19)

The specific steps are as follows.

Step 1. Perform the orthogonal wavelet transform on the noisy image and estimate the noise standard deviation σ_n using (19).

Step 2. Construct a white Gaussian noise image of the same size as the original image, with zero mean and the variance estimated in Step 1.

Step 3. Perform the NSCT on the noise image and obtain the variance of the noise NSCT coefficients in each high-frequency subband.

Step 4. Return to Step 2 and repeat the above steps five times. The final coefficient variance of each subband is obtained by averaging the five coefficient variances obtained.
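
Steps 1–4 can be sketched as follows; the robust median estimator is standard, while a simple difference operator stands in for the NSCT analysis (a pure illustration — any redundant transform returning a list of subbands could be slotted in):

```python
import numpy as np

rng = np.random.default_rng(1)

def robust_sigma(hh1):
    """Donoho's robust median estimator of the noise standard deviation."""
    return np.median(np.abs(hh1)) / 0.6745

def monte_carlo_subband_vars(shape, sigma, analysis, n_runs=5):
    """Average each subband's variance over n_runs pure-noise images."""
    acc = None
    for _ in range(n_runs):
        noise = rng.normal(0.0, sigma, size=shape)
        variances = np.array([np.var(s) for s in analysis(noise)])
        acc = variances if acc is None else acc + variances
    return acc / n_runs

def toy_analysis(img):
    """Stand-in 'subbands': horizontal and vertical first differences."""
    return [np.diff(img, axis=0), np.diff(img, axis=1)]
```

For i.i.d. noise with standard deviation σ, each first-difference "subband" of toy_analysis has variance 2σ², so the Monte-Carlo average converges to that value; with a real NSCT, the per-subband variances would instead reflect the transform's nonorthogonality.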

3.7. Detailed Implementation

In summary, we apply the NIG model as a prior model of the NSCT coefficients to estimate the denoised NSCT coefficients and employ OLI-Shrink to guarantee a gentler thresholding effect. The proposed method can be summarized as follows.

Step 1. Perform the NSCT decomposition to obtain a series of NSCT coefficients.

Step 2. Estimate the noise variance in each high-frequency subband using the method outlined in Section 3.6.

Step 3. For each subband at each level higher than two, compute the threshold value and the statistical parameters:

(a) the cumulant-based terms using (14);
(b) the threshold value using (13), where the noise variance used here is that of the corresponding subband obtained in Step 2;
(c) the subband's variance;
(d) the subband's mean;
(e) the interpolation weight using (18).

Step 4. Threshold the coefficients of all subbands using (17).

Step 5. Perform inverse NSCT to reconstruct the denoised image.

4. Experimental Results

To investigate the feasibility and effectiveness of the proposed image denoising method in removing additive white Gaussian noise, we first conducted experiments on the grayscale images Lena and Barbara. The images were obtained from http://www.cs.tut.fi/~foi/GCF-BM3D, and we verified that they were the same as the images used by Dabov et al. [15]; using these images, we redid Dabov et al.'s entire denoising experiment. The original images were corrupted with zero-mean white Gaussian noise, with the standard deviation σ of the noise varying over the range 5 to 100. For comparison, we also applied several existing methods to denoise the same images. BivShrink [10] was implemented using software obtained from http://eeweb.poly.edu/iselesni/WaveletSoftware/denoise2.html, which describes the implementation of this algorithm using both the separable DWT and the complex dual-tree DWT; we chose the complex dual-tree DWT because it gives better denoised results. The BLS-GSM denoising method [9] was implemented using software obtained from http://decsai.ugr.es/~javier/denoise/index.html. The SA-DCT denoising method [14] was implemented using software obtained from http://www.cs.tut.fi/~foi/SA-DCT. The K-SVD denoising method [13] was implemented using software obtained from http://www.cs.technion.ac.il/~elad/software/, and the image reconstruction algorithm [15] based on block matching and 3D filtering (BM3D) was implemented using software obtained from http://www.cs.tut.fi/~foi/GCF-BM3D. Ram et al. [28] recently proposed an image processing scheme based on the reordering of patches; they applied their approach to image denoising and showed that it generally performs better than the BM3D algorithm at high noise levels. We therefore implemented this denoising method as well, using software obtained from http://www.cs.technion.ac.il/~elad/software/.

The decomposition level of the proposed method was set at five and the decomposed direction from coarse scale to fine scale ranged over the values 4, 8, 8, 16, and 16. The local window size was 11 × 11 when estimating local parameters. The quantitative analysis results obtained are listed in Table 1. Each denoising method outlined above was quantitatively measured in terms of peak signal-to-noise ratio (PSNR). The details of the denoised images are given in Figures 3 and 5 to help the reader observe the qualitative differences between the different methods.

As shown in Table 1, the proposed method achieved the best results among all the denoising methods, except for some of the results from BM3D and Ram’s method, with an average increase in PSNR of approximately 0.29 dB compared to Ram’s method and 0.17 dB compared to BM3D. In particular, when compared with denoised results obtained by K-SVD method, the average PSNR increase for the proposed method was 1.09 dB. For image Barbara, Ram’s method actually performed better than the BM3D algorithm for high noise levels. Ram’s method also had a higher PSNR value than BM3D for the Lena image at high noise levels, but it was inferior to our proposed method for all noise levels.

The denoising performance of our proposed algorithm is illustrated in Figure 3, which depicts fragments of the noisy Barbara image and fragments of the various image denoising results.

In Figure 3, it can be seen that our proposed method exhibits outstanding performance that is superior to the other methods, especially in retaining image details such as the small stripes in the tablecloth. BM3D and the proposed method produce fewer artefacts and better preserve the edges and other details than the other methods: the homogeneous regions are smoother, and the texture lines on the tablecloth and trousers are clearer, than in the results from the other methods. The denoised images in Figures 3(g) and 3(h) are relatively mild, have fewer scratch effects, show most of the texture features clearly, and retain their original geometric features, such as the clear checkerboard tablecloth pattern. The image denoised by BLS-GSM (Figure 3(c)) appears to have a better overall effect; however, close inspection shows that it has been smoothed too much and, as a consequence, has lost much of the detailed information that is very important in image denoising. Figures 3(d) and 3(e), obtained from SA-DCT and K-SVD, show excessive smoothing and insufficient contrast, and some small details are missing. Ram's method appears to be efficient in homogeneous regions but is less effective in the detailed regions. The evaluation values in Table 1 show that the methods used to produce Figures 3(b)–3(f) perform significantly worse than BM3D and the proposed method. In conclusion, it is clear that our denoising method exhibits outstanding performance and preserves detailed information, especially texture information.

To better understand the numerical results, we further compared the gains of BivShrink, BLS-GSM, SA-DCT, K-SVD, Ram's method, and the proposed method, normalised with respect to BM3D, as shown in Figure 4. In the figure, the superiority of the proposed method is clear. In the case of image Lena, the proposed method outperforms the other methods and provides significant improvements over BM3D. Compared with BM3D, the performance of Ram's method is slightly inferior when the standard deviation is small. BLS-GSM and SA-DCT give similar results, which are better than those of BivShrink. The performance of the proposed method is as expected. Image Barbara contains many self-similar textures and can therefore provide more information to the patch-matching stages of BM3D and Ram's method; as a result, their output is slightly better than that of the proposed method at low noise levels (Figure 4(b)).

BM3D is based on block structure [29], which tends to oversmooth edge pixels after denoising. As a result, its reconstructed image shows obvious blocking artefacts under certain conditions, especially when the noise variance is relatively large. Figures 5(a) and 5(b) show fragments of the reconstructed Barbara images obtained using BM3D and the proposed scheme, respectively. When the noise standard deviation is 50 for the Barbara image, the denoised results with BM3D and the proposed scheme have virtually the same PSNR value. However, it is clear that the stripe information in Figure 5(a) has essentially been replaced by smooth areas, whereas the stripe is still clearly visible in Figure 5(b).

From the above results, it is clear that the most competitive denoising methods are Ram's method, BM3D, and the proposed method. To provide a more thorough comparison of these three methods, we assessed their performance on a test set containing noisy versions of seven images at eight different noise levels. For the 256 × 256 images House and Peppers, the proposed method used four-level decomposition with decomposed directions of 4, 8, 16, and 16 from coarse scale to fine scale. The PSNR values for the results obtained using the three schemes are listed in Table 2.

The results obtained by our scheme are inferior to those of Ram's method for the images House, Peppers, and Montage in most cases. The PSNR values for the images Fingerprint, Man, and Couple are significantly better than those obtained by BM3D and Ram's method. Our method also performed as expected for Boats and Hill, providing the highest PSNR values. Averaged over all entries in Table 2, our results exceed those of Ram's algorithm by approximately 0.31 dB; our scheme also outperforms BM3D by more than 0.85 dB for image Man and by 0.16 dB on average over all BM3D results listed in Table 2. To facilitate a more comprehensive comparison, the noisy and denoised images obtained with our proposed scheme are shown in Figures 6 and 7.

An image has a specific structure, and its pixels have strong affiliations that reflect the configuration information in the visual scene. To better reflect the structural similarity between two images, Wang et al. [30] developed an image quality assessment method based on structural distortion called the structural similarity (SSIM) index. There are also several similar image quality metrics (IQMs) based on structural information, including the low-level feature-similarity-induced full-reference metric FSIM [31] and sparse feature fidelity (SFF) [32]. The closer these IQMs are to one, the higher the structural similarity between the two images. In image denoising, the IQMs indicate the ability of the denoising algorithm to maintain the image structure. Table 3 shows the SSIM, FSIM, and SFF index values for image Barbara after denoising by BM3D and by the proposed method. The average PSNR value of the proposed method for image Barbara is smaller than that of BM3D by approximately 0.06 dB in Table 1. However, as the SSIM indices listed in Table 3 show, the proposed method is superior to BM3D in maintaining the original image structure when the value of σ is below 15 or above 30, and it outperforms BM3D on average for σ in the range 2–100. The FSIM and SFF indices in Table 3 show similar results: the two methods are sometimes equal and otherwise alternately outperform each other.
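
For reference, PSNR and a single-window ("global") simplification of the SSIM statistic can be computed as follows; note that the standard SSIM index averages this statistic over local windows, so this sketch only illustrates the formula's ingredients:

```python
import numpy as np

def psnr(f, g, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    return 10.0 * np.log10(peak ** 2 / np.mean((np.asarray(f, float) - g) ** 2))

def global_ssim(x, y, peak=255.0):
    """Single-window SSIM statistic (the standard index averages local windows)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An image compared with itself scores exactly 1, and a uniform offset of one grey level gives a PSNR of about 48.13 dB for 8-bit data, which is a convenient sanity check.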

5. Conclusion

In this paper, we proposed a new image denoising method based on the Bayesian framework, whose key facet is the correct estimation of the distribution model of the decomposition coefficients. The proposed method comprises a nonsubsampled contourlet transform denoising algorithm based on the normal inverse Gaussian prior model, whose parameters can be selected flexibly and changed adaptively. The model can accurately describe the sharp peak of the NSCT decomposition coefficients at zero and their heavy-tailed characteristics, distributed symmetrically on both sides; consequently, Bayesian denoising can be used to effectively remove noise from images. The results of the experiments conducted indicate that the proposed method can effectively eliminate white Gaussian noise. Further, visual and quantitative evaluations show that it performs significantly better than existing state-of-the-art image denoising methods.

The high redundancy of the NSCT leads to a heavy computational cost for the proposed method, which may limit its real-time application in practice. Although a quantitative comparison of the computational efficiency of various denoising methods is not the emphasis of this study, reducing the complexity of the algorithm and improving its operational efficiency is the next focus of our work. In the future, we will port the proposed method to other programming platforms, such as C++, and adopt parallel compilation techniques to improve running efficiency. The proposed method can also be naturally extended to other image processing fields, such as image inpainting, image restoration, and image fusion.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (nos. 61379010 and 61502219) and the Postdoctoral Science Foundation of China (no. 2015M582697). The authors would like to thank the editor and the anonymous reviewers for their valuable and insightful comments on this paper.