Abstract

We present a novel nonlocal means (NLM) algorithm that uses an anisotropic structure tensor to achieve higher image denoising accuracy and better preservation of fine image details. Instead of characterizing a pixel by its intensity alone, the proposed algorithm uses the structure tensor to describe the boundary information around the pixel more comprehensively. Moreover, the similarity of structure tensors is computed in a Riemannian space for a more rigorous comparison, and the similarity weight of a pixel (or patch) is determined by the intensity and the structure tensor simultaneously. The proposed algorithm is compared with the original NLM algorithm and with a modified NLM algorithm based on principal component analysis. Quantitative and qualitative comparisons of the three NLM algorithms are presented as well.

1. Introduction

Image denoising is a key preprocessing step for higher-level processes such as image segmentation and pattern recognition. The most straightforward denoising approach is the direct application of spatial coherence, which assumes that noisy samples in a local area around a given pixel follow the same distribution as that pixel [1]. Although many efforts have been dedicated to overcoming this limitation, such as anisotropic filtering [2] and total variation minimization [3], this class of algorithms shares a common drawback: image blurring caused by smoothing in both homogeneous regions and at object boundaries. Besides denoising methods in the spatial domain, removing noise in a transform domain, for example with the DCT [4] or wavelets [5], is also well developed.

In contrast to image smoothing based on spatial coherence, nonlocal means (NLM) denoising algorithms have recently been proposed, which average pixel intensities weighted by the similarity of the pixel gray levels in a certain neighborhood [6]. This pixel-selection scheme makes NLM significantly outperform traditional denoising methods such as anisotropic filtering [2], total variation [3], and bilateral filtering [7], and it has enabled NLM to be used in various applications such as computer vision and statistical nonparametric regression [8, 9]. Extensions of the original approach, including scale and rotation invariance of the data patches used to define the weights, have also been well studied [10–13].

However, as noted for earlier local denoising methods [14], the pixel intensity alone cannot fully characterize the information contained in the image. In addition, this kind of pointwise mean produces large flat zones and spurious contours, known as the "staircasing" effect. To overcome this, higher-order information, which is derived from image gradients and provides a better description of image structure, has been incorporated into NLM for more robust or more sensitive measures of pixel similarity. For instance, Buades et al. [15, 16] employ a nonlocal polynomial model to attenuate the staircasing effect. Chatterjee and Milanfar [17] resort to a similar higher-order NLM in which polynomial approximations up to second order are used. Other traditional techniques such as singular value decomposition [18] and principal component analysis [19–21] have also been used to characterize higher-order information and to project image neighborhood vectors onto a lower-dimensional subspace. One of these methods, referred to as principal neighborhood dictionary (PND) NLM, yields a significant computational saving together with increased estimation accuracy [21].

In this paper, a novel NLM denoising algorithm based on the concept of the anisotropic structure tensor is proposed. Inspired by anisotropic filtering [22], the structure tensor encapsulates the structure information of a pixel (or patch), and in this work it is used in conjunction with the image intensity to compute the similarity weights. Moreover, current algorithms (such as the PND) commonly treat a patch as a vector whose components correspond to the pixel intensities within it and define the similarity weight through a mean Euclidean distance between the individual components. The proposed algorithm improves upon this simplistic weighting scheme by computing the similarity distance between structure tensors in a Riemannian space, thereby comparing the structure information of each pixel as an ensemble. It is anticipated that this new measure will effectively increase the accuracy of the similarity comparison and thus significantly enhance the performance of image denoising.

The remainder of this letter is organized as follows. Section 2 describes the proposed algorithm in detail. Section 3 provides denoising experiments using the proposed algorithm, the PND, and the original NLM. Section 4 summarizes the key contributions of this work.

2. Method

The original NLM image denoising algorithm introduced by Buades et al. [6] smooths images by a weighted mean, with the weights defined by intensity similarity within a predetermined neighborhood. Specifically, for a position $i$, the filtered intensity $NL(v)(i)$ is computed as
$$NL(v)(i)=\sum_{j\in\Omega_i} w(i,j)\,v(j),\tag{1}$$
where $i$ and $j$ are positions of image pixels, $v$ is the noised image, and $\Omega_i$ is a neighborhood of $i$ with a reasonable size. The weighting factor $w(i,j)$ is computed as
$$w(i,j)=\frac{1}{Z(i)}\exp\!\left(-\frac{\lVert v(N_i)-v(N_j)\rVert_2^2}{h^2}\right),\tag{2}$$
where $\lVert v(N_i)-v(N_j)\rVert_2^2$ is the squared Euclidean distance between vectors whose elements are the gray levels of pixels in fixed-size patches $N_i$ and $N_j$ centered around $i$ and $j$, $h$ controls the rate of decay of the exponential function, and $Z(i)$ is the normalizing factor
$$Z(i)=\sum_{j\in\Omega_i}\exp\!\left(-\frac{\lVert v(N_i)-v(N_j)\rVert_2^2}{h^2}\right).\tag{3}$$

According to the equations above, the estimated value $NL(v)(i)$ is a weighted average of the pixels in the neighborhood $\Omega_i$, and pixels with a more similar gray level are assigned larger weights. However, as pointed out earlier, the gray level alone does not fully capture the structure information contained in a neighborhood of pixels.
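For illustration, the following minimal Python/NumPy sketch computes the classical NLM estimate of a single pixel according to (1)–(3). It is a sketch under simplifying assumptions: the patch radius, search radius, and decay parameter h are illustrative values rather than those prescribed in [6], and image-boundary handling is omitted.

```python
import numpy as np

def nlm_pixel(v, i, patch_radius=3, search_radius=10, h=10.0):
    """Classical NLM estimate of pixel i = (row, col) of a noisy image v, following (1)-(3)."""
    r, c = i
    p = patch_radius
    ref = v[r - p:r + p + 1, c - p:c + p + 1].astype(float)  # patch N_i around i

    weighted_sum, Z = 0.0, 0.0
    for rr in range(r - search_radius, r + search_radius + 1):
        for cc in range(c - search_radius, c + search_radius + 1):
            patch = v[rr - p:rr + p + 1, cc - p:cc + p + 1].astype(float)  # patch N_j
            d2 = np.sum((ref - patch) ** 2)        # squared gray-level distance
            w = np.exp(-d2 / h ** 2)               # unnormalized weight, cf. (2)
            weighted_sum += w * v[rr, cc]
            Z += w                                 # normalizing factor, cf. (3)
    return weighted_sum / Z                        # weighted mean, cf. (1)
```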

To capture the structure information, the concept of the structure tensor can be used. The structure tensor encapsulates the predominant direction of the intensity gradient in a given neighborhood and the degree to which those directions are spatially coherent [22]. The 2D structure tensor can be written as a $2\times 2$ matrix
$$S=\sum_{r} w_r\, S_0(r),\tag{4}$$
where $r$ is a summation index ranging over a finite set of offsets (the "window," typically $\{-m,\dots,m\}\times\{-m,\dots,m\}$ for a constant $m$), $w_r$ is a fixed weight depending on $r$ such that the sum of all weights is 1, and $S_0(r)$ is the matrix-valued array
$$S_0(r)=\begin{bmatrix} I_x^2(r) & I_x(r)\,I_y(r)\\ I_x(r)\,I_y(r) & I_y^2(r)\end{bmatrix},\tag{5}$$
where $I_x$ and $I_y$ are mean derivatives with respect to the $x$ and $y$ coordinates.

As seen in (4) and (5), the structure tensor characterizes the intensity variations of the pixels in a neighborhood, so comparing structure tensors at different pixel locations provides additional structure information that can be used to enhance the weighting scheme of NLM.
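A possible NumPy/SciPy realization of (4) and (5) is sketched below. The Sobel derivative filters and the Gaussian window (whose normalized coefficients play the role of the weights $w_r$) are our assumptions; the letter itself does not prescribe specific derivative or window filters.

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def structure_tensor(v, sigma=1.5):
    """Per-pixel 2x2 structure tensor components (Sxx, Sxy, Syy), following (4)-(5)."""
    v = v.astype(float)
    Ix = sobel(v, axis=1)  # derivative with respect to x (columns)
    Iy = sobel(v, axis=0)  # derivative with respect to y (rows)
    # Gaussian smoothing implements the weighted window sum of (4).
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    return Sxx, Sxy, Syy
```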

Similar to the Euclidean distance of intensities between pixels $i$ and $j$, the similarity of structure tensors $S_i$ and $S_j$ could be defined using the Euclidean distance between the individual elements of the tensors. However, since structure tensors reside in a non-Euclidean space, this Euclidean type of operation is problematic owing to the widely recognized swelling effect, which tends to blend the orientation and diffusivity features [23]. To circumvent this issue, affine-invariant Riemannian metrics have been proposed as more rigorous and general frameworks for tensor comparisons [24–26]. In this paper, a Riemannian metric called the Log-Euclidean metric is used for its nice theoretical properties along with simple and fast computations [26]. In this framework, the distance between structure tensors $S_i$ and $S_j$ is computed as
$$d(S_i,S_j)=\bigl\lVert \log(S_i)-\log(S_j)\bigr\rVert_F,\tag{6}$$
where $\log(\cdot)$ denotes the matrix logarithm and $\lVert\cdot\rVert_F$ the Frobenius norm.
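A minimal sketch of (6) for 2×2 structure tensors is given below; the small regularization added before taking the matrix logarithm is our assumption, since a structure tensor can be rank deficient in flat regions.

```python
import numpy as np

def _spd_log(S, eps=1e-6):
    """Matrix logarithm of a (regularized) symmetric positive semidefinite 2x2 tensor."""
    w, V = np.linalg.eigh(S + eps * np.eye(2))   # eigen-decomposition of the tensor
    return (V * np.log(w)) @ V.T                 # V diag(log w) V^T

def log_euclidean_distance(S1, S2):
    """Log-Euclidean distance (6): Frobenius norm of the difference of matrix logarithms."""
    return np.linalg.norm(_spd_log(S1) - _spd_log(S2), 'fro')
```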

As alluded to before, comparing pixels in terms of distances in both the gray level and its derivatives simultaneously provides a more accurate estimation of similarity in the local image structure. Therefore, we redefine the weighting factor in (2) to include both the gray-level similarity and the structure similarity:
$$w(i,j)=\frac{1}{Z(i)}\exp\!\left(-\frac{\lVert v(N_i)-v(N_j)\rVert_2^2+\alpha\, d(S_i,S_j)^2}{h^2}\right),\tag{7}$$
where the weighting parameter $\alpha$ regulates a trade-off between the image intensity and the structure tensor. This weighting scheme reduces to the original NLM scheme when $\alpha$ is set to 0. The corresponding normalizing factor is redefined accordingly as
$$Z(i)=\sum_{j\in\Omega_i}\exp\!\left(-\frac{\lVert v(N_i)-v(N_j)\rVert_2^2+\alpha\, d(S_i,S_j)^2}{h^2}\right).\tag{8}$$
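The sketch below combines the two distances into a single unnormalized weight. The additive combination inside the exponential is our reading of (7), and the values of h and alpha are illustrative (alpha = 20 follows the choice made in Section 3).

```python
import numpy as np

def combined_weight(d_gray_sq, d_tensor, h=10.0, alpha=20.0):
    """Unnormalized similarity weight mixing gray-level and structure-tensor distances, cf. (7)."""
    return np.exp(-(d_gray_sq + alpha * d_tensor ** 2) / h ** 2)
```

Normalizing these weights over the search window, as in (8), and averaging the pixel intensities with them then yields the denoised value.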

3. Results

In this section, we first present experimental findings on the effect of the weighting parameter $\alpha$ on the performance of the proposed algorithm and then quantitatively and qualitatively compare the performance of the proposed algorithm with that of the PND algorithm [21] and the original NLM algorithm [6]. It should be pointed out that the selection of the other parameters, including the smoothing parameter $h$, the subspace dimensionality, and the search-window size, is not the primary concern of this work; these parameters are therefore set according to literature reports [6, 21]. As suggested in [6], $h$ is chosen according to the noise level $\sigma$, and the subspace dimensionality and the search-window size follow the relatively optimal selections discussed in [21].

The weighting parameter $\alpha$ in (7) controls the relative contributions of the gray-level similarity and the structure tensor similarity. To examine the effect of $\alpha$, experiments using four different images (House, Coins, Lena, and Cameraman) corrupted by additive Gaussian noise of a fixed standard deviation were conducted. Figure 1 shows the variation of the peak signal-to-noise ratio (PSNR) with the value of this parameter. It can be seen that, in general, the PSNR of all four images increases as $\alpha$ increases from 0 to 20, beyond which the PSNR oscillates irregularly and tends to decline as $\alpha$ approaches 50. The trend of these curves reflects the different roles of the gray-level similarity and the structure tensor similarity: when there is a large intensity difference between the pixels under comparison, the overall similarity is mainly determined by the intensity difference; when the intensity difference between two pixels is small, the structure tensor similarity plays the dominant role. Based on the observations in Figure 1, the parameter $\alpha$ is set to 20 in all the following experiments.

To compare the performance of the proposed algorithm with that of the PND and the NLM, the same four test images corrupted by additive Gaussian noise at three standard deviations (a low level plus $\sigma$ = 25 and 50) were used, and each was denoised with the three algorithms above. Comparisons were made using three criteria: visual assessment, PSNR, and the mean structural similarity index (MSSIM) [27], each measuring a specific aspect of the denoising effect. Visual assessment (Figure 2) qualitatively demonstrates how well the denoised images can be visually interpreted, the PSNR (Table 1) quantitatively measures the extent to which noise has been suppressed, and the MSSIM (Table 2) gauges the clarity of detail and boundary definition after denoising; in addition, the distribution of pixel similarity weights is shown in Figure 3.
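Both quantitative criteria are available in scikit-image; a minimal sketch is shown below (the 8-bit data range is an assumption about how the test images are stored).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean, denoised):
    """Return PSNR (in dB) and mean SSIM between the clean image and its denoised estimate."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=255)
    mssim = structural_similarity(clean, denoised, data_range=255)
    return psnr, mssim
```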

Figure 2 presents the denoised images obtained with the proposed algorithm, the PND, and the NLM. Among the different noise levels, only the images with the highest noise standard deviation ($\sigma$ = 50) are shown, in order to emphasize the differences among the algorithms. It can be seen that even at this relatively high noise level the denoising effects of the three algorithms are quite reasonable, except for slight blurring in the output of the NLM (the last column). Careful observation of the images denoised by the proposed algorithm (the second column) and by the PND algorithm (the third column) reveals that the images denoised by the proposed algorithm appear clearer (e.g., the decoration of the hat in Lena) and their boundaries are much better defined (e.g., the upper edge of the roof in House). This result is reasonable because the proposed algorithm incorporates full structure information into the weighting scheme and makes the comparisons in a Riemannian space, which provides a more accurate evaluation of the tensors.

Figure 3 compares the distributions of the similarity weights produced by the NLM and by the proposed algorithm. It can be seen that the weighting distributions of the two algorithms are approximately the same for the uncorrupted image. For the noised image (Figure 3(d)), the NLM loses a great number of pixels with high similarity because of the gray-level variation caused by noise (Figure 3(e)). The proposed algorithm, on the other hand, uses both the gray level and the structure tensor to weight the similarity and keeps most of the high-similarity pixels (Figure 3(f)), which greatly benefits the subsequent denoising process.

To quantitatively evaluate the denoising effects, the PSNR and MSSIM of the four images corrupted by zero-mean Gaussian noise at the different levels and then denoised by the three algorithms were calculated, with the results shown in Tables 1 and 2, respectively. In general, the proposed algorithm and the PND outperform the original NLM, although the denoising effects of the three algorithms are almost identical when the noise level is low. Compared with the PND, the proposed algorithm achieves comparable PSNR (each algorithm gives the better value in 6 of the measures) and superior MSSIM (the proposed algorithm gives the better value in 8 of the measures). Similar to the qualitative assessment, the performance gain of the proposed algorithm is attributable to the newly defined similarity measure and the tensor computation in the Riemannian space.

The computational cost of the proposed algorithm depends mainly on the image size. Under the parameter setting in this letter and on a notebook computer with an Intel Core i7 CPU and 4 GB RAM, the proposed algorithm takes approximately 12 seconds to denoise one of the test images, compared with 9 seconds for the NLM and 5 seconds for the PND.

4. Conclusions

In this paper, a novel NLM denoising algorithm that simultaneously uses intensity and structure information was proposed. Following the concept of anisotropic filtering, the structure information around an image pixel is characterized more comprehensively by the structure tensor. Moreover, the similarity of the structure tensors is computed in a Riemannian space with a Log-Euclidean metric. This method evaluates the structure tensor as an ensemble, which is anticipated to yield better comparisons. The weighting scheme of the proposed method represents a reasonable trade-off between the pixel intensity and the structure tensor: when the pixels under comparison have a large difference in intensity, the weight is mainly determined by the gray level; when the pixels have similar gray levels, the structure information characterized by the structure tensor dominates the weight. In sum, the proposed method fully utilizes the intensity and its derivative information to weight the compared pixels (or patches). This improvement significantly increases the accuracy of pixel comparison and weighting, thus enhancing the performance of NLM denoising both qualitatively and quantitatively.

Acknowledgment

This study was supported by National Natural Science Foundation of China (NSFC) Grants 81201158 and 61271330.