Journal of Applied Mathematics
Volume 2012, Article ID 467412, 16 pages
http://dx.doi.org/10.1155/2012/467412
Research Article

Spatial Images Feature Extraction Based on Bayesian Nonlocal Means Filter and Improved Contourlet Transform

Beijing Key Laboratory of Intelligent Telecommunication Software and Multimedia, School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China

Received 1 March 2012; Accepted 6 April 2012

Academic Editor: Baocang Ding

Copyright © 2012 Pengcheng Han and Junping Du. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Spatial images are inevitably mixed with different levels of noise and distortion. The contourlet transform can provide multidimensional sparse representations of images in a discrete domain. Because of its filter structure, the contourlet transform is not translation-invariant. In this paper, we use a nonsubsampled pyramid structure and a nonsubsampled directional filter to achieve multidimensional and translation-invariant image decomposition for spatial images. A nonsubsampled contourlet transform is used as the basis for an improved Bayesian nonlocal means (NLM) filter for different frequencies. The Bayesian model adds a sigma range in image a priori operations, which can be more effective in protecting image details. The NLM filter retains the image edge content and assigns greater weight to similarities for edge pixels. Experimental results on both standard and spatial images confirm that the proposed algorithm yields significantly better performance than nonsubsampled wavelet transform, contourlet, and curvelet approaches.

1. Introduction

In spatial rendezvous and docking, spatial images are obtained by multisource remote sensors and are inevitably mixed with different levels of noise and distortion. Accurate image feature extraction is helpful for spatial object recognition and can directly influence the success of spatial rendezvous and docking [1, 2]. Image feature extraction for spatial images is based on the definition of image features; to some extent, it is based on the human eye's sensitivity to changes in image grayscale values. Multidimensional image representation can process images for the sparsest representation, especially for 2D image signals [3, 4]. This approach identifies an optimal high-dimensional function representation for an image and yields superior image-processing results. A nonlocal means (NLM) filter uses redundant image information: structural similarity is superimposed on random pixel noise, so noise can be effectively removed using weighted averages [5, 6]. Compared to traditional statistical filtering methods, NLM filtering overcomes the constraint of the local neighborhood and extends pixel similarity to block-based similarity, so it is well suited to spatial images.

In this paper, we use a nonsubsampled pyramid structure and a nonsubsampled directional filter to achieve multidimensional and translation-invariant image decomposition for spatial images. A nonsubsampled contourlet transform is used as the basis for an improved Bayesian nonlocal means (NLM) filter for different frequencies. The Bayesian model adds a sigma range in image a priori operations, which can be more effective in protecting image details. The NLM filter retains the image edge content and assigns greater weight to similarities for edge pixels. Experimental results on both standard and spatial images confirm that the proposed algorithm yields significantly better performance than nonsubsampled wavelet transform, contourlet, and curvelet approaches.

The rest of this paper is organized as follows. Section 2 describes multidimensional image decomposition, with a focus on contourlet and nonsubsampled contourlet transforms (NSCTs). Section 3 outlines application of an NLM filter and proposes an improved NLM algorithm based on a Bayesian model. Section 4 applies the improved NLM filter to NSCT, especially NSDFB, to process image features for further extraction. Section 5 compares feature extraction results for the proposed algorithm and other algorithms. Section 6 concludes the paper.

2. Contourlet Transform Decomposition

2.1. Multidimensional Image Decomposition

The target of image multidimensional representation is to provide a description of image with less characteristic information. The wavelet transform is a classic image multidimensional representation algorithm that has a good effect on image edge points [7, 8]. However, the wavelet transform can capture only limited direction information in the horizontal, vertical, and diagonal directions, as shown in the left side of Figure 1. It is difficult to express image smoothness contours; a better image representation is shown in the right side of Figure 1.

467412.fig.001
Figure 1: Multidimensional image decomposition.

Other well-known multidimensional image decomposition algorithms include bandlets, brushlets, edge multidimensional transforms, complex wavelets, and wedgelets. However, these algorithms require image edge detection and then summarize a representative adaptive coefficient. A decomposition algorithm that can transform an image into fixed decomposition coefficients is desirable. These coefficients can then be used in a broader context that does not rely on edge detection alone but also includes better directional image decomposition.

In 2004, Candès and Donoho proposed a curvelet transform that uses a value approximation algorithm for a continuous 2D spatial domain and adds a smooth signal on the basis of a 1D Fourier transform [9]. The best approximation deviation is $O((\log M)^3 M^{-2})$ for the curvelet transform and $O(M^{-1})$ for the wavelet transform. The curvelet transform is first applied to a continuous signal and then combines a multidimensional filter and a ridgelet transformation. A second curvelet transform is based on frequency segments and extreme judgment. The curvelet transform is universally applicable to continuous signals, but there will be parallel noise in discrete fields [10]. It is also biased in directional image decomposition: the typical rectangular sampling mode leads to an a priori geometric deviation in the decomposition of discrete image signals, especially in the horizontal and vertical directions. This limitation prompted researchers to develop a new multiscale decomposition algorithm that does not depend on edge detection and can decompose images in cross-scale multidimensions.

2.2. Contourlet Transform

The contourlet transform is a multidimensional decomposition algorithm proposed by Do and Vetterli in 2005 [11]. The transform can be directly used for multidimensional decomposition of discrete image signals. It has a dual filter for image decomposition and yields a smoother sparse representation of the original image. The two filters are a Laplacian pyramid (LP) filter [12, 13] and a directional filter bank (DFB) [14]. The LP yields nonconsecutive image points, and then the DFB connects consecutive points into a nonlinear structure. The process is shown in Figure 2.

467412.fig.002
Figure 2: Contourlet transform.
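The LP step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `gaussian_filter` and factor-2 sampling stand in for the actual LP filters, but the predict/residual structure, and its exact invertibility, are the same.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lp_decompose(img, sigma=1.0):
    """One Laplacian-pyramid level: coarse approximation + bandpass residual."""
    low = gaussian_filter(img, sigma)       # lowpass the image
    coarse = low[::2, ::2]                  # downsample by 2 in each axis
    up = np.zeros_like(img)                 # upsample coarse back ...
    up[::2, ::2] = coarse
    up = gaussian_filter(up, sigma) * 4     # ... and interpolate (x4 for zeros)
    residual = img - up                     # bandpass detail handed to the DFB
    return coarse, residual

def lp_reconstruct(coarse, residual, sigma=1.0):
    up = np.zeros_like(residual)
    up[::2, ::2] = coarse
    up = gaussian_filter(up, sigma) * 4
    return up + residual                    # exact by construction

img = np.random.default_rng(0).random((64, 64))
c, r = lp_decompose(img)
print(np.allclose(lp_reconstruct(c, r), img))  # True: LP is invertible
```

Perfect reconstruction holds regardless of the lowpass filter chosen, because the residual stores exactly what the coarse prediction misses.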

A subsampled contourlet transform uses a relevance factor $M$ for image subsampling at each decomposition level. A 2D filter is evolved from the 1D filter. For complete image reconstruction, the following relationship holds for the 1D filter:
$$M_0(z)N_0(z)+M_1(z)N_1(z)=2,\qquad S_m=\begin{cases}D_{2^{l-1},2}, & 0\le m<2^{l-1},\\ D_{2,2^{l-1}}, & 2^{l-1}\le m<2^{l},\end{cases}\tag{2.1}$$
where $M_0(z)$ and $M_1(z)$ represent low- and high-pass analysis filters, and $N_0(z)$ and $N_1(z)$ represent low- and high-pass synthesis filters, respectively. Downsampling matrices $S_m$ are shown above. For 2D complete decomposition,
$$M_0(M(z))N_0(M(z))+M_1(M(z))N_1(M(z))=2.\tag{2.2}$$
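The 1D perfect-reconstruction identity can be checked numerically. The Haar-like analysis/synthesis pair below is an illustrative choice (not the paper's filters); a polynomial product in $z^{-1}$ is a coefficient convolution.

```python
import numpy as np

# Illustrative analysis (M0, M1) and synthesis (N0, N1) filter pair.
M0 = np.array([1.0, 1.0])    # lowpass analysis
M1 = np.array([1.0, -1.0])   # highpass analysis
N0 = np.array([0.5, 0.5])    # lowpass synthesis
N1 = np.array([-0.5, 0.5])   # highpass synthesis

# M0(z)N0(z) + M1(z)N1(z) as polynomial coefficients in z^{-1}.
total = np.convolve(M0, N0) + np.convolve(M1, N1)
print(total)  # [0. 2. 0.]: the sum equals 2 z^{-1}
```

For causal FIR filters the identity holds up to an overall delay (here one sample): the product sums to a single tap of value 2, so the cascade reconstructs the input exactly, shifted by one sample.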

$M(z)$ represents discrete properties of the heterogeneous domain, which can help to reduce the filter complexity from $O(N^2)$ to $O(N)$. In two dimensions, the first DFB step is construction of the spectrum in the frequency domain using two-channel quincunx filter banks to decompose an image into horizontal and vertical directions [15]. The DFB is equivalent to the parallel family
$$\left\{D_k^{(l)}(n)\,S_m^{(l)}\;:\;0\le m<2^{l},\ n\in\mathbb{Z}^2\right\}.\tag{2.3}$$

Cutting operations in both directions on the decomposition spectrum provide 2D directional and segmental image decomposition. Like the discrete wavelet transform, the downsampled contourlet transform is not shift-invariant [16].

2.3. Nonsubsampled Contourlet Transform

NSCT is a fast implementation of the contourlet transform that provides a shift-invariant and multidimensional image representation [17]. Compared with subsampled contourlet transforms, NSCT is closer to the nonredundant wavelet transform [18]. NSCT uses a 2D nonsubsampled filter bank and can be expressed as
$$M_0(z)N_0(z)+M_1(z)N_1(z)=1,\tag{2.4}$$
where $M(z)$ represents a 2D filter of the $z$ transform, $M_0(z)$ and $M_1(z)$ represent 2D low- and high-pass analysis filters, and $N_0(z)$ and $N_1(z)$ represent 2D low- and high-pass synthesis filters, respectively. There are additional constraints on the filter design.

NSCT involves two steps: multidimensional representation and directional decomposition. Multidimensional representation is achieved by nonsubsampled pyramid decomposition. This step is similar to the 1D discrete nonsubsampled wavelet transform (NSWT), which uses the à trous method [19]. Compared to NSWT, NSCT uses a nonsubsampled 2D filter. The frame bound of an NSCT directional decomposition is
$$P_1\le\left|M_0\!\left(e^{j\varepsilon}\right)\right|^2+\left|M_1\!\left(e^{j\varepsilon}\right)\right|^2=t\!\left(e^{j\varepsilon}\right)\le P_2,\qquad P_1=\operatorname{ess\,inf}\,t\!\left(e^{j\varepsilon}\right),\quad P_2=\operatorname{ess\,sup}\,t\!\left(e^{j\varepsilon}\right),\quad \varepsilon\in[-\pi,\pi]^2,\tag{2.5}$$
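The frame bound $t(e^{j\varepsilon})$ can be evaluated on a frequency grid, with the grid minimum and maximum standing in for ess inf and ess sup. The 1D filter pair below is an illustrative choice, not the paper's filters.

```python
import numpy as np

M0 = np.array([1.0, 1.0])    # lowpass filter coefficients
M1 = np.array([1.0, -1.0])   # highpass filter coefficients

# Evaluate the frequency responses on a dense grid of the unit circle.
w = np.linspace(-np.pi, np.pi, 1024)
E = np.exp(-1j * np.outer(w, np.arange(len(M0))))  # [e^{-j w n}] matrix
t = np.abs(E @ M0) ** 2 + np.abs(E @ M1) ** 2      # t(e^{j w})

P1, P2 = t.min(), t.max()    # discrete stand-ins for ess inf / ess sup
print(P1, P2)                # both equal 4 here: a tight frame
```

For this pair $|M_0|^2 + |M_1|^2 = (2 + 2\cos\omega) + (2 - 2\cos\omega) = 4$ at every frequency, so $P_1 = P_2$ and the frame is tight.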

After decomposition of the first layer, the sampling filter banks provide multiscale decomposition of the underlying properties. The process for two-layer nonsubsampled pyramid decomposition is shown in Figure 3.

467412.fig.003
Figure 3: Non-subsampled pyramid decomposition.

The frequency domain for layer $j$ supported by the low-pass filter is $[-\pi/2^{j},\,\pi/2^{j}]^2$; the replacement domain, from $[-\pi/2^{j-1},\,\pi/2^{j-1}]^2$ to $[-\pi/2^{j},\,\pi/2^{j}]^2$, is supported by the high-pass filter.

Each step in NSWT image decomposition involves three directions. The total image redundancy is 3𝐽+1; in NSP, the result redundancy is 𝐽+1 [20]. The second NSCT step provides directional information via the nonsubsampled filter, which combines two-channel quincunx sampling filters and a resampling operation for 2D frequency division on directional edges [21]. More accurate directional details can be sampled discretely on a sample stage. Sampling uses a quincunx matrix 𝑄 and considers image direction alignment. The process is shown in Figure 4.

467412.fig.004
Figure 4: Nonsubsampled directional filter banks.

3. NLM Filter Based on a Bayesian Approach

3.1. NLM Filter

Different frequency components play different roles in an image structure. Low-frequency components account for most image energy, forming basic local gradation areas, but play a small role in image content or structure. High-frequency components form the main image edges and determine its basic content or structure and are thus the most important components. Changes in high-frequency information lead to changes in the basic image content or structure, and information extracted from the image by the human eye will thus be subject to major changes. Thus, high-frequency components play the most important role in image perception by the human eye.

At present, many image filters only consider adjacent pixels; some filters take into account information for neighboring pixels, such as Yaroslavsky neighborhood filters [22] and bilateral filters [23]. A nonlocal filter assumes additive white noise and can effectively exploit image redundancy [24].

The NLM algorithm takes advantage of grayscale and structural redundancy in an image, estimating the current pixel value through a weighted average of other pixel values. The weight of each pixel is calculated using the Gaussian-weighted Euclidean distance between subblocks, where each pixel is taken as the center of its corresponding subblock. This ensures that pixels with a similar structure are assigned greater weight. For an original image $v=\{v(i)\mid i\in I\}$, the image processed using the NLM filter, $NL(v)$, is
$$NL(v)(i)=\sum_{j\in I}w(i,j)\,v(j),\tag{3.1}$$
where $w(i,j)$ is derived from the Gaussian-weighted Euclidean distance between pixels $i$ and $j$ and represents the similarity of the image subblocks centered at $i$ and $j$:
$$w(i,j)=\frac{1}{Z(i)}\exp\!\left(-\frac{\left\|v(N_i)_{d\times d}-v(N_j)_{d\times d}\right\|_{2,a}^{2}}{h^{2}}\right),\qquad Z(i)=\sum_{j}\exp\!\left(-\frac{\left\|v(N_i)_{d\times d}-v(N_j)_{d\times d}\right\|_{2,a}^{2}}{h^{2}}\right),\tag{3.2}$$
where $Z(i)$ is a normalization factor, $v(N_i)$ is the $d\times d$ subblock with pixel $i$ as its center, $\|\cdot\|_{2,a}^{2}$ is a Gaussian-weighted Euclidean distance, $a$ is the Gaussian kernel standard deviation, and $h$ is a filter parameter that controls the degree of smoothing. The weight satisfies $0\le w(i,j)\le 1$: the more similar a neighborhood is to $v(N_i)$, the greater the weight assigned to its center pixel.
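A minimal sketch of the NLM average in (3.1)-(3.2) follows. For brevity it uses a uniform rather than Gaussian-weighted patch distance, and the patch size, search window, and smoothing parameter are illustrative values, not the paper's settings.

```python
import numpy as np

def nlm(v, d=3, search=7, h=0.5):
    """Minimal NLM: each output pixel is a weighted average of pixels
    whose d x d neighborhoods resemble the neighborhood of the target."""
    r, s = d // 2, search // 2
    pad = np.pad(v, r, mode="reflect")
    out = np.zeros_like(v)
    H, W = v.shape
    for i in range(H):
        for j in range(W):
            p = pad[i:i + d, j:j + d]                # patch centered at (i, j)
            wsum, acc = 0.0, 0.0
            for y in range(max(0, i - s), min(H, i + s + 1)):
                for x in range(max(0, j - s), min(W, j + s + 1)):
                    q = pad[y:y + d, x:x + d]        # candidate patch
                    w = np.exp(-np.sum((p - q) ** 2) / h ** 2)  # w(i, j)
                    wsum += w                        # accumulates Z(i)
                    acc += w * v[y, x]
            out[i, j] = acc / wsum                   # NL(v)(i)
    return out

noisy = np.random.default_rng(1).normal(0.5, 0.1, (16, 16))
den = nlm(noisy)
print(noisy.std() > den.std())  # True: averaging similar patches reduces variance
```

On a pure-noise field every patch is "similar", so the filter averages broadly; on structured images the exponential weight suppresses dissimilar patches and the average stays within the matching structure.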

3.2. NLM Filter Combined with a Bayesian Method

In smooth areas, neighborhoods are very similar in both grayscale and structural content, so the traditional NLM algorithm yields its best results in flat regions, where strong denoising can be obtained. At image edges and in texture-rich regions, the algorithm performs poorly: although these regions contain many repeated structures, the differences in grayscale content are greater, and the larger Euclidean distances make the weights very small, reducing denoising capability and especially the ability to retain image detail [25]. To improve the edge-retention capacity of the NLM algorithm, a Bayesian model is added that makes use of image edge information and adjusts the similarity of the neighborhood structure, so that center pixels with similar edge content can be given greater weight. This provides a more effective approach for protecting image detail.

The Bayesian NLM filter is expressed as
$$n(x)=\frac{\sum_{y\in\Delta(x)}p(v(x)\mid u(y))\,p(u(y))\,u(y)}{\sum_{y\in\Delta(x)}p(v(x)\mid u(y))\,p(u(y))},\tag{3.3}$$
where $v(x)$ represents noisy data, $u(x)$ represents noise-free image data, $n(x)$ is the precision-weighted average, over the neighborhood $\Delta(x)$, of the gray values $u(y)$, and $p(v(x)\mid u(y))\,p(u(y))$ measures the similarity between $v(x)$ and $u(y)$. Equation (3.2) can then be rewritten as
$$w(i,j)=\frac{1}{Z(i)}\exp\!\left(-\frac{\left\|v(N_i)_{d\times d}-v(N_j)_{d\times d}\right\|_{2,a}^{2}}{h^{2}}-\frac{\left\|n(x_i)_{d\times d}-n(x_j)_{d\times d}\right\|_{2,a}^{2}}{h^{2}}\right).\tag{3.4}$$

3.3. Improved Bayesian NLM Filter

We propose an improved Bayesian NLM filter in which a sigma range is added to the prior image operation for more effective protection of image details [26]. The first step is analysis of the probability density for pixel levels, which takes the average variance for the improved Bayesian filter. Considering the independence and integrity of an image, its conditional probability distribution can be expressed as
$$p(v(x)\mid u(y))=\prod_{m=1}^{M\times M}p\!\left(v_m(x)\mid u_m(y)\right),\tag{3.5}$$
where $v_m(x)$ and $u_m(y)$ are the image probability densities at pixel $m$ and $v_m(y)$ is a subset of $u_m(y)$. For $L$-level image decomposition, the conditional probability density function is
$$p\!\left(v_m(x)\mid u_m(y)\right)=\frac{v_m(x)^{L-1}}{F(L)}\left(\frac{L}{u_m(y)}\right)^{L}\exp\!\left(-\frac{L\,v_m(x)}{u_m(y)}\right),\tag{3.6}$$
$$F(L)=\Gamma(L)\,L^{l}\exp\!\left(u_m(y)^{\,l-1}v_m(x)^{\,l-1}\right).\tag{3.7}$$

Because of multiscale features, we assume that the prior probability $p(u(y))$ is continuous and uniform, $p(u(y))=1/|\Delta x|$. The proposed algorithm uses an iterative technique that takes the observed value $v(x)$ as the initial value of $u(y)$. This treatment can process data directly but takes longer, and details can become fuzzy. If the search window is too large, edge details and point targets become even more blurred. Experimental results confirmed that a 3 × 3 window is an appropriate choice. The algorithm uses an a priori estimated mean $\bar{u}(y)$ in place of $u(y)$ to reduce image noise bias, and $\Delta x$ is replaced by $N(x)$. The new Bayesian filter can be expressed as
$$n(x)=\frac{\sum_{y\in N(x)}p(v(x)\mid\bar{u}(y))\,p(\bar{u}(y))\,\bar{u}(y)}{\sum_{y\in N(x)}p(v(x)\mid\bar{u}(y))\,p(\bar{u}(y))}.\tag{3.8}$$

$N(x)$ can be expressed as $N(x)=\Delta x\cap N_1(x)\cap N_2(x)$, where $N_1(x)$ and $N_2(x)$ are a priori regional image characteristics and pixel features, respectively. The a priori regional characteristic is the image region $\Delta x$ with unrelated points removed using a region-similarity algorithm. The a priori pixel feature is the set obtained by comparing the similarity of adjacent pixels [27]. A priori pixel characteristics are generally overlooked in NLM filtering; in fact, they are good for excluding pixel noise [28].

The sigma range between pixel $x$ and the a priori mean $\bar{u}(x)$ can be defined as $(\bar{u}(x)I_1,\ \bar{u}(x)I_2)$, where the range $(I_1,I_2)$ satisfies $\xi=\int_{I_1}^{I_2}p(s)\,ds$ and $p(s)$ is the image probability density function. For different sigma values $\xi\in\{0.1,0.2,\ldots,0.9\}$, the range can be calculated by pixel search [29].

It is desirable to have a greater sigma weight; however, under conditional probability, the sigma range cannot exceed the maximum upper boundary, $\bar{u}(x)I_2<V_{\max}$, where $V_{\max}$ is the maximum image density. A priori pixel characteristics have been demonstrated to retain image edges well, but in some situations isolated pixels are ignored. To solve this problem, the proposed algorithm uses a threshold $T=V_{\max}/2$ to separate the two cases [30]. For a priori pixels, only those with $\bar{u}(x)<T$ are retained.
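The sigma-range preselection and the $T=V_{\max}/2$ rule can be sketched as follows. The multiplicative form of the range, the strict inequalities, and the parameter values are assumptions drawn from the description above, not a published implementation.

```python
import numpy as np

def sigma_candidates(u_bar, values, I1, I2, V_max):
    """Keep candidate intensities inside the sigma range (u_bar*I1, u_bar*I2),
    clipping the upper bound at V_max and applying the isolated-pixel rule
    T = V_max / 2 to the a priori mean."""
    T = V_max / 2.0
    if u_bar >= T:                 # a priori pixels above T are not retained
        return np.array([], dtype=values.dtype)
    lo, hi = u_bar * I1, min(u_bar * I2, V_max)
    return values[(values > lo) & (values < hi)]

vals = np.array([10, 40, 55, 60, 90, 200], dtype=float)
kept = sigma_candidates(u_bar=50.0, values=vals, I1=0.8, I2=1.3, V_max=255.0)
print(kept)  # [55. 60.]: only intensities strictly inside (40, 65) survive
```

Restricting the NLM search to this preselected set is what keeps the weights concentrated on plausible candidates and excludes outlier pixels before any patch comparison is done.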

4. Image Feature Extraction Based on the Contourlet Transform

The proposed algorithm can improve the accuracy and completeness of image feature extraction based on direct contourlet decomposition. An image is first processed by the contourlet transform to yield a multidimensional domain, with multiple-resolution decomposition coefficients for large-scale details (low-frequency signal) and finer image details (high-frequency signal). Next, the algorithm applies deeper decomposition to the large-scale approximation. The whole process can be repeated until the algorithm yields the detail required.

Figure 5 shows the two-layer decomposition, where $I$ is the observed image, $I'$ is the image processed using the contourlet transform, LP is a Laplacian pyramid decomposition filter, DFB is a directional filter bank, LF is the low-frequency signal, and HF is the high-frequency signal.

467412.fig.005
Figure 5: Image feature extraction based on the contourlet transform.

The decomposition coefficients for different frequencies are processed using the Bayesian-based NLM filter with a decomposition threshold. In particular, we use the wavelet threshold approach for the low-frequency part and the NLM approach for the high-frequency part. The specific steps in the algorithm are as follows.

Step 1. Decompose image I using the nonsubsampled contourlet transform.

Step 2. Apply the decomposition threshold method to the low-frequency part for noise suppression and feature extraction.

Step 3. Apply the improved Bayesian NLM filter to the high-frequency part for feature extraction.

Step 4. Reconstruct the high-frequency and low-frequency parts of the image processed using the contourlet transform.
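Steps 1-4 can be sketched as a skeleton pipeline. The components here are stand-ins, not the paper's transforms: a Gaussian low/high split plays the role of the NSCT (Step 1), hard thresholding handles the low band (Step 2), and a local-mean smoother substitutes for the Bayesian NLM filter on the high band (Step 3); Step 4 recombines the bands.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def extract_features(I, sigma=2.0):
    """Skeleton of Steps 1-4 with placeholder components."""
    low = gaussian_filter(I, sigma)           # Step 1: "low-frequency" part
    high = I - low                            #         "high-frequency" part
    T = np.median(np.abs(high)) / 0.6745      # Step 2: noise-scale threshold
    low_t = np.where(np.abs(low) < T, 0.0, low)
    high_f = uniform_filter(high, 3)          # Step 3: placeholder smoother
    return low_t + high_f                     # Step 4: reconstruction

img = np.random.default_rng(4).random((32, 32)) + 1.0
out = extract_features(img)
print(out.shape)  # (32, 32)
```

Swapping the Gaussian split for a real NSCT and the smoother for the improved Bayesian NLM filter recovers the proposed method; the control flow is unchanged.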

For the low-frequency part, threshold decomposition is used to remove image noise. First, the threshold value $T$ is set. Decomposition coefficients smaller than $T$ are considered to be noise and are set to zero; coefficients greater than $T$ are retained. The decomposition threshold is
$$T=\theta_n\sqrt{2k^{2}\log N},\qquad \theta_n=\frac{M(d_1)}{0.6745},\tag{4.1}$$
where $k$ is the number of layers for wavelet decomposition, $\theta_n$ is an estimate of the noise scale based on the mean absolute deviation $M(d_1)$, and $d_1$ is the high-frequency coefficient band of the first-layer contourlet decomposition. The high-frequency part of the first layer usually contains few signal components and comprises mainly noise. For the high-frequency part, $G[D_k(x)]$ and $G[D_k(y)]$ are two blocks of image $I$, where $G[D_k(p)]$ denotes the rectangular neighborhood centered at $p$. The proposed algorithm uses an improved Bayesian NLM image filter and a Euclidean distance to represent similarity between high-frequency image blocks. The similarity is represented by $w(x,y)$, defined as
$$w(x,y)=\frac{1}{Z(x)}e^{-\left\|G[D_k(x)]-G[D_k(y)]\right\|^{2}/h^{2}},\qquad Z(x)=\sum_{x,y\in I}e^{-\left\|G[D_k(x)]-G[D_k(y)]\right\|^{2}/h^{2}},\tag{4.2}$$
where $h$ is a constant used to control the exponential decay rate. Compared with the original NLM filter, the contourlet transform decomposes the image at different resolutions, and the proposed approach applies a different algorithm at each: threshold decomposition for the low-frequency part and the improved Bayesian NLM filter for the high-frequency part. The NLM filter involves time-consuming calculations; if an image is decomposed by the contourlet transform over $k$ levels, the improved algorithm only has to process $1/2^{k}$ of the original size. This not only reduces the computational complexity but also greatly improves the accuracy of feature extraction.
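The low-frequency thresholding rule can be sketched as follows. The universal-threshold form $T=\theta_n\sqrt{2\log N}$ (single-layer case) and the median-based estimate of $M(d_1)$ are standard choices assumed here; the paper's exact scaling in $k$ may differ.

```python
import numpy as np

def hard_threshold(coeffs, d1, N):
    """Hard-threshold coefficients, estimating the noise scale from the
    noise-dominated first-layer high-frequency band d1 via the robust
    0.6745 (MAD) rule."""
    theta_n = np.median(np.abs(d1)) / 0.6745   # noise standard deviation
    T = theta_n * np.sqrt(2.0 * np.log(N))     # universal threshold
    out = coeffs.copy()
    out[np.abs(out) < T] = 0.0                 # below T: treated as noise
    return out, T

rng = np.random.default_rng(2)
d1 = rng.normal(0.0, 0.5, 4096)               # mostly-noise detail band
coeffs = np.array([0.01, -0.02, 3.0, -4.0, 0.05])
clean, T = hard_threshold(coeffs, d1, N=4096)
print(clean)  # small coefficients zeroed; 3.0 and -4.0 retained
```

With a noise scale of about 0.5 the threshold lands near 2, so the three small coefficients are zeroed while the large ones pass through unchanged, which is the intended low-frequency noise suppression.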

5. Experimental Results

Spatial images can directly influence the success of spatial rendezvous and docking, so the most accurate image features must be extracted from them. The proposed algorithm can also be applied to general image processing. To verify its performance, we carried out experiments on both spatial and standard images.

5.1. Performance Evaluation Based on Spatial Images

Tests were carried out on image I and image II for an image size of 512 × 512. Multidimensional contourlet and curvelet decomposition algorithms were used for comparison. The same parameters were used for all algorithms.

Image feature extraction results were evaluated according to subjective and objective standards. Figure 6 shows the processing results for image I, and Figure 7 shows the results for image II. It is evident that the proposed algorithm yields a better subjective visual effect compared with the curvelet and contourlet algorithms.

fig6
Figure 6: Process results for image I.
fig7
Figure 7: Process results for image II.

The results show that feature extraction with the curvelet transform leads to confusion for some background information, and the contourlet transform yields a blurry image. By contrast, the proposed algorithm effectively suppresses noise and displays the main features of the image. Finer image details are shown in Figure 8.

fig8
Figure 8: Process results for image II.
5.2. Performance Evaluation Based on Standard Images

Figures 9 and 10 show the processing results for images III and IV, which are standard images of size 512 × 512. Our algorithm again provides better performance than the curvelet and contourlet transforms.

fig9
Figure 9: Process results for image III.
fig10
Figure 10: Process results for image IV.

Results are objectively evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), defined as follows:
$$\mathrm{PSNR}=10\log_{10}\frac{(L-1)^{2}}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[R(i,j)-F(i,j)\right]^{2}},$$
$$\mathrm{SSIM}(x,y)=\frac{\left(2u_xu_y+C_1\right)\left(2\sigma_{xy}+C_2\right)}{\left(u_x^{2}+u_y^{2}+C_1\right)\left(\sigma_x^{2}+\sigma_y^{2}+C_2\right)},\qquad \mathrm{MSSIM}(X,Y)=\frac{1}{W}\sum_{r=1}^{W}\mathrm{SSIM}\left(X_r,Y_r\right),\tag{5.1}$$
where $u_x$ and $u_y$ are the means and $\sigma_x$ and $\sigma_y$ the standard deviations of the original and processed images, respectively, $\sigma_{xy}$ is their covariance, and $C_1$ and $C_2$ are constants. MSSIM is the mean SSIM, and $W$ is the number of image subblocks. The greater the PSNR and MSSIM ($0\le\mathrm{MSSIM}\le1$), the closer the processed image is to the original.

Table 1 compares the feature extraction performance for images I and II for several algorithms and different noise levels. The PSNR results show that the contourlet transform is superior to the curvelet transform for image I by almost 0.5 dB. Both the contourlet and curvelet transforms provide good edge detection. The proposed CT+NLM algorithm showed even better performance (~1.1 dB) than the contourlet transform while retaining the latter's good edge detection (Table 1). The proposed algorithm uses an NLM filter for adaptive image expression. The MSSIM results show that the proposed algorithm yields the best performance for Gaussian, Poisson, salt-and-pepper, and speckle noise (Table 2).

tab1
Table 1: PSNR results.
tab2
Table 2: MSSIM results.

Our algorithm uses a nonsubsampled pyramid filter, for which $J+1$ redundancy is the most efficient. In pyramid decomposition, a small amount of image loss can be considered an effective means of reducing redundancy. The proposed algorithm, which uses a nonsubsampled pyramid filter and a directional filter, incurs some image loss in reducing redundancy. Search windows of 16 × 16, 32 × 32, and 64 × 64 were applied to images I and II; the size of the search window affects the computational complexity of the NLM filter.

Comparison of the experimental results for different window sizes reveals that the proposed algorithm delivers better noise suppression and feature extraction than the other algorithms. It provides the maximum PSNR and MSSIM values for all windows (Table 3).

tab3
Table 3: Experimental results for different search windows.

6. Conclusions

Focusing on the actual needs of spatial image analysis, an improved contourlet transform, consisting of a nonsubsampled pyramid transform and nonsubsampled directional filter banks, was used to simplify the filter design problem for spatial images. The improved contourlet transform uses a mapping approach to solve the 2D filter design problem. The algorithm applies a Bayesian NLM filter to high-frequency information to suppress noise and improve the accuracy of image feature extraction. Experimental results confirm that the NLM filter can effectively retain structural information and reduce residual structure. In the NSCT domain, the proposed algorithm showed better denoising and enhancement than the contourlet transform. Moreover, in comparison with NSWT, the algorithm is a more mature and sophisticated image-processing method.

Acknowledgments

This work was supported by the National Basic Research Program of China (973 Program) (2012CB821206), the National Natural Science Foundation of China (no. 91024001, no. 61070142), and the Beijing Natural Science Foundation (no. 4111002).

References

  1. P. Liu, F. Huang, G. Li, and Z. Liu, “Remote-sensing image denoising using partial differential equations and auxiliary images as priors,” IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 3, Article ID 6061940, pp. 358–362, 2012. View at Publisher · View at Google Scholar
  2. H. Demirel and G. Anbarjafari, “Discrete wavelet transform-based satellite image resolution enhancement,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 6, pp. 1997–2004, 2011. View at Publisher · View at Google Scholar · View at Scopus
  3. P. Pan and D. Schonfeld, “Image reconstruction and multidimensional field estimation from randomly scattered sensors,” IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 94–99, 2008. View at Publisher · View at Google Scholar
  4. F. Kamalabadi, “Multidimensional image reconstruction in astronomy,” IEEE Signal Processing Magazine, vol. 27, no. 1, pp. 86–96, 2010. View at Publisher · View at Google Scholar · View at Scopus
  5. M. Mahmoudi and G. Sapiro, “Fast image and video denoising via nonlocal means of similar neighborhoods,” IEEE Signal Processing Letters, vol. 12, no. 12, pp. 839–842, 2005. View at Publisher · View at Google Scholar · View at Scopus
  6. A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 490–530, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
  7. D. L. Donoho, M. Vetterli, R. A. DeVore, and I. Daubechies, “Data compression and harmonic analysis,” IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2435–2476, 1998. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
  8. S. Mallat, A Wavelet Tour of Signal Processing, Elsevier/Academic Press, Amsterdam, The Netherlands, 3rd edition, 2009.
  9. E. J. Candès and D. L. Donoho, “Curvelets—a surprisingly effective nonadaptive representation for objects with edges,” in Curve and Surface Fitting, A. Cohen, C. Rabut, and L.L. Schumaker, Eds., pp. 105–120, Vanderbilt University Press, Nashville, Tenn, USA, 1999. View at Google Scholar
  10. E. J. Candès and D. L. Donoho, “New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities,” Communications on Pure and Applied Mathematics, vol. 57, no. 2, pp. 219–266, 2004. View at Publisher · View at Google Scholar
  11. M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005. View at Publisher · View at Google Scholar · View at Scopus
  12. P. J. Burt and E. H. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983. View at Publisher · View at Google Scholar · View at Scopus
  13. M. N. Do and M. Vetterli, “Framing pyramids,” IEEE Transactions on Signal Processing, vol. 51, no. 9, pp. 2329–2342, 2003. View at Publisher · View at Google Scholar
  14. R. H. Bamberger and M. J. T. Smith, “A filter bank for the directional decomposition of images: theory and design,” IEEE Transactions on Signal Processing, vol. 40, no. 4, pp. 882–893, 1992. View at Publisher · View at Google Scholar · View at Scopus
  15. R. Eslami and H. Radha, “Translation-invariant contourlet transform and its application to image denoising,” IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3362–3374, 2006. View at Publisher · View at Google Scholar · View at Scopus
  16. D. D.-Y. Po and M. N. Do, “Directional multiscale modeling of images using the contourlet transform,” IEEE Transactions on Image Processing, vol. 15, no. 6, pp. 1610–1620, 2006. View at Publisher · View at Google Scholar
  17. A. L. da Cunha, J. Zhou, and M. N. Do, “The nonsubsampled contourlet transform: theory, design, and applications,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006. View at Publisher · View at Google Scholar · View at Scopus
  18. J. H. McClellan, “The design of two-dimensional digital filters by transformation,” in Proceedings of the 7th Annual Princeton Conference on Information Sciences and Systems, pp. 247–251, 1973.
  19. M. J. Shensa, “The discrete wavelet transform: wedding the atrous and Mallat algorithms,” IEEE Transactions on Signal Processing, vol. 40, no. 10, pp. 2464–2482, 1992. View at Publisher · View at Google Scholar · View at Scopus
  20. R. H. Bamberger and M. J. T. Smith, “A filter bank for the directional decomposition of images: theory and design,” IEEE Transactions on Signal Processing, vol. 40, no. 4, pp. 882–893, 1992. View at Publisher · View at Google Scholar · View at Scopus
  21. E. P. Simoncelli, W. T. Freeman, E. H. Adelson, and D. J. Heeger, “Shiftable multiscale transforms,” IEEE Transactions on Information Theory, vol. 38, no. 2, pp. 587–607, 1992. View at Publisher · View at Google Scholar
  22. T. Tasdizen, “Principal neighborhood dictionaries for nonlocal means image denoising,” IEEE Transactions on Image Processing, vol. 18, no. 12, pp. 2649–2660, 2009. View at Publisher · View at Google Scholar
  23. J. Orchard, M. Ebrahimi, and A. Wong, “Efficient nonlocal-means denoising using the SVD,” in IEEE International Conference on Image Processing (ICIP '08), pp. 1732–1735, October 2008. View at Publisher · View at Google Scholar · View at Scopus
  24. N. Dowson and O. Salvado, “Hashed nonlocal means for rapid image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 485–499, 2011. View at Publisher · View at Google Scholar · View at Scopus
  25. M. Protter, M. Elad, H. Takeda, and P. Milanfar, “Generalizing the nonlocal-means to super-resolution reconstruction,” IEEE Transactions on Image Processing, vol. 18, no. 1, pp. 36–51, 2009. View at Publisher · View at Google Scholar
  26. P. Coupé, P. Hellier, C. Kervrann, and C. Barillot, “Nonlocal means-based speckle filtering for ultrasound images,” IEEE Transactions on Image Processing, vol. 18, no. 10, pp. 2221–2229, 2009. View at Publisher · View at Google Scholar
  27. W. L. Zeng and X. B. Lu, “Region-based non-local means algorithm for noise removal,” Electronics Letters, vol. 47, no. 20, pp. 1125–1127, 2011. View at Publisher · View at Google Scholar
  28. H. Zhong, Y. Li, and L. Jiao, “SAR image despeckling using bayesian nonlocal means filter with sigma preselection,” IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 4, pp. 809–813, 2011. View at Publisher · View at Google Scholar · View at Scopus
  29. R. Lai and Y. T. Yang, “Accelerating non-local means algorithm with random projection,” Electronics Letters, vol. 47, no. 3, pp. 182–183, 2011. View at Publisher · View at Google Scholar · View at Scopus
  30. N. A. Thacker, J. V. Manjon, and P. A. Bromiley, “Statistical interpretation of non-local means,” IET Computer Vision, vol. 4, no. 3, pp. 162–172, 2010. View at Publisher · View at Google Scholar