Abstract

In this paper, we develop a novel linear singularity representation method using spatial K-neighbor block-extraction and Haar transform (BEH). Block-extraction provides a group of image blocks with similar (generally smooth) backgrounds but different image edge locations. An interblock Haar transform is then used to represent these differences, thus achieving a linear singularity representation. Next, we magnify the weak detailed coefficients of BEH to allow for image enhancement. Experimental results show that the proposed method achieves better image enhancement, compared to block-matching and 3D filtering (BM3D), nonsubsampled contourlet transform (NSCT), and guided image filtering.

1. Introduction

Image enhancement plays an important role in image processing and pattern recognition. Image enhancement techniques can generally be divided into two categories: spatial-domain methods and frequency-domain methods. Because frequency-domain methods can represent image details in high-frequency subbands via certain transformations, image enhancement can be achieved by magnifying the weak detail transform coefficients.

The contours and textures provide the most important information in most natural images and usually present linear singularities. Because a classical orthogonal wavelet transform can only effectively represent point singularities [1, 2], a series of beyond-wavelet transforms (e.g., Ridgelet [3, 4], Curvelet [5], Contourlet [6], and NSCT [7]) has been developed to better represent linear singularities. However, both orthogonal wavelets and beyond wavelets directly convolve an image with a certain convolution kernel. Due to this convolution operation in the filtering, artifacts are inevitably introduced after the inverse transform. In addition, to better represent linear singularities, more directional filter banks with larger supports have to be used, at the cost of increased computational complexity.

The convolution operation in traditional transforms is performed between a kernel and a local neighborhood of the image; therefore, these methods are called local methods. In recent years, some nonlocal image processing methods have been developed for image denoising, e.g., nonlocal means (NL-means) [8] and block-matching and 3D filtering (BM3D) [9–15]. These nonlocal methods attempt to find some of the most similar image blocks, e.g., by implementing block-matching and weighted means on the blocks [8], or by implementing a 3D transform on a 3D array stacked from similar blocks to obtain an enhanced sparse representation and then applying hard-threshold shrinkage to the transformed coefficients to achieve image denoising [9, 10, 16–19]. Because NL-means and BM3D were both initially developed for image denoising, finding sufficiently similar image blocks is crucial for them. However, overly strong similarity weakens the representation of the differences among blocks in the 3D transform or, equivalently, the linear singularity in each block. This is problematic for applications such as image enhancement.

Some new image enhancement methods have been proposed recently. For example, He et al. [20, 21] proposed a novel guided image filtering method for image enhancement. Wang et al. [22] proposed a color face image enhancement method using adaptive singular value decomposition in the Fourier domain for face recognition. Gao et al. [23] proposed an image enhancement method specifically for visual impairments. Li et al. [24] proposed an adaptive fractional calculus of small probability strategy-based method for achieving image denoising and enhancement. However, when used to enhance image details, these methods usually also enhance the background or introduce a halo phenomenon.

In this paper, we propose a novel linear singularity representation method that avoids finding sufficiently similar image blocks. The proposed method is based on the observation that, in natural images, a small neighborhood contains many image blocks (or patches) with similar smooth backgrounds. If two isometric image blocks extracted from the same small local neighborhood have slightly different detail (i.e., texture or contour) locations, then simply subtracting one block from the other effectively represents the image details: the differences have relatively large magnitudes only at the detail locations, while all other magnitudes are approximately equal to zero. Figure 1 illustrates this situation. Inspired by this fact, we propose a novel linear singularity representation method, called K-neighbor block-extraction and Haar transform (BEH). The Haar transform is chosen not only because of its low computational complexity but also because of its ability to represent sudden transitional signals. Although it lacks continuity and differentiability, this property is actually advantageous for analyzing signals with sudden transitions [25].
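
To make this observation concrete, the following minimal NumPy sketch (with synthetic, hypothetical block values, not taken from the paper) subtracts two blocks that share a smooth background but whose edge locations differ by one pixel; the difference is nonzero only where the edge locations disagree:

import numpy as np

# Two synthetic 8x8 blocks: a shared smooth ramp background plus a vertical
# edge that starts at column 3 in one block and at column 4 in the other.
background = np.tile(np.linspace(100.0, 110.0, 8), (8, 1))
block_a = background.copy()
block_a[:, 3:] += 40.0            # edge starts at column 3
block_b = background.copy()
block_b[:, 4:] += 40.0            # the same edge, shifted right by one pixel

# Subtracting the blocks cancels the shared smooth background; only column 3,
# where the edge locations differ, keeps a large magnitude (+40).
difference = block_a - block_b
print(np.round(difference, 1))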

In the proposed method, we first select an image block using a sliding window and then extract its spatial K-neighbor image blocks. All of the blocks extracted in this way have similar smooth backgrounds but subtle differences in their detail locations. Next, we implement a fast Haar transform by calculating weighted summations and subtractions among these blocks. Thus, the linear singularity can be effectively represented. To verify the effectiveness of the proposed method, we apply it to image enhancement in order to improve the visibility of images, which is crucial for image processing and computer vision [26–31]. Experimental results demonstrate that the proposed method achieves image enhancement performance superior to that of existing state-of-the-art linear singularity representation methods, including the nonsubsampled contourlet transform (NSCT).

2. NSCT and BM3D for Linear Singularity Representation in Image Enhancement

2.1. Linear Singularity Representation

Because orthogonal wavelets can only effectively represent point singularities, a series of beyond wavelets has been developed for linear singularity representation. Among them, the NSCT has some favorable properties, such as translation invariance and multidirectional filtering, which make it one of the state-of-the-art linear singularity representation methods. Although NSCT achieves the best linear singularity representation performance, it shares a drawback with other local transform methods: it introduces artifacts after certain operations on the transformed coefficients followed by the inverse transform. The artifacts originate from the convolution operation of the inverse transform,

\[ \hat{f} = \tilde{W} \ast c \tag{1} \]

where $c$ denotes the transformed coefficients, $\tilde{W}$ is the inverse-transform filter bank, $\ast$ denotes convolution, and $\hat{f}$ is the reconstructed image. For image denoising, if one implements a hard-thresholding operation, some isolated noisy points may remain after the inverse transform. Because these noisy points influence their surrounding pixels, ringing artifacts (for orthogonal wavelets) and strip artifacts (for beyond wavelets) are introduced. An illustrative example of this situation is shown in Figure 2. From this figure, we can see that some strip artifacts are introduced when enhancing some coefficients of a certain directional subband of NSCT.

Among recently developed nonlocal methods, BM3D is one of the state-of-the-art image denoising methods [32]. BM3D was created to achieve ideal image denoising performance rather than linear singularity representation, and this design causes problems when its block-matching technique is used for enhancement. In the 3D transform, only the 2D transform applied to each image block can be regarded as a singularity representation; this representation degenerates to the general 2D wavelet transform because the third-dimensional transform operates on considerably similar image blocks, which implies that the interblock singularity cannot be represented by the Haar transform. In the extreme case, if two image blocks are completely identical, their subtraction is zero, and the high-frequency subbands of the Haar transform carry no information. Although there are always some differences among the image blocks in a group formed by block-matching, the ideal linear singularity representation still cannot be achieved. Thus, to achieve better linear singularity representation performance, some local differences must exist among a group of blocks. Considering this, we propose to replace the block-matching in the BM3D method with block-extraction.

In the BM3D method, block-matching is implemented by computing the Euclidean distance between a given reference block and each block in its neighborhood,

\[ d(B_R, B) = \frac{\left\| \mathcal{T}_{2D}(B_R) - \mathcal{T}_{2D}(B) \right\|_2^2}{N^2} \tag{2} \]

where $B_R$ is the reference block, $B$ is the block to be matched, $\mathcal{T}_{2D}$ is a 2D transform (e.g., the DCT or an orthogonal wavelet transform), and $N$ is the size of each square image block. This 2D transform does not change the block-matching results relative to using the original image blocks to calculate the distance. Because $\mathcal{T}_{2D}$ does not affect block-matching, we simplify (2) to

\[ d(B_R, B) = \frac{\left\| B_R - B \right\|_2^2}{N^2} \tag{3} \]

Here, we show a block-matching example obtained by (3) and give the 3D transform results of the grouped image blocks in Figure 3. From the transformed results in Figure 3, we can see that there is scarcely any information in the high-frequency subbands. The main reason is that all of the blocks in a group are too similar; that is, there is not enough singularity among these blocks.
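
As an illustration of the distances in (2) and (3) as written above, the following sketch (function names are ours, not from any BM3D implementation) computes both forms with an orthonormal 2D DCT; because the transform is orthonormal, the two distances coincide:

import numpy as np
from scipy.fft import dctn

def block_distance_transform(ref, blk):
    # Distance of (2): per-pixel squared L2 norm between 2D-transformed blocks.
    n = ref.shape[0]
    return np.sum((dctn(ref, norm='ortho') - dctn(blk, norm='ortho')) ** 2) / n ** 2

def block_distance_pixel(ref, blk):
    # Simplified distance of (3): per-pixel squared L2 norm in the pixel domain.
    n = ref.shape[0]
    return np.sum((ref - blk) ** 2) / n ** 2

rng = np.random.default_rng(0)
ref, blk = rng.random((8, 8)), rng.random((8, 8))
# An orthonormal transform preserves L2 distances, so both values agree.
print(block_distance_transform(ref, blk), block_distance_pixel(ref, blk))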

2.2. Application to Image Enhancement
2.2.1. Image Enhancement by NSCT

NSCT has excellent linear singularity representation performance, which allows some relatively weak edges in images to be better represented by NSCT than by BM3D. Therefore, when using NSCT to achieve better image enhancement, strong edges and weak edges should be processed differently. In [7], to achieve better image enhancement results, the following definition is used to differentiate among strong edges, weak edges, and noise:

\[ \text{pixel } (i,j) \in \begin{cases} \text{strong edge}, & \operatorname{mean}_k \{ |y_k(i,j)| \} \geq c\sigma \\ \text{weak edge}, & \operatorname{mean}_k \{ |y_k(i,j)| \} < c\sigma \leq \operatorname{max}_k \{ |y_k(i,j)| \} \\ \text{noise}, & \operatorname{max}_k \{ |y_k(i,j)| \} < c\sigma \end{cases} \tag{4} \]

where $y_k(i,j)$ denotes the coefficient at location $(i,j)$ of the $k$-th directional subband, $c$ is a parameter ranging from 1 to 5, and $\sigma$ is the standard deviation of noise in the subbands at a specific pyramidal level. In this case, an NSCT-based image enhancement algorithm can be given as follows:

\[ \tilde{y}(i,j) = \begin{cases} y(i,j), & \text{strong edge} \\ g \, y(i,j), & \text{weak edge} \\ 0, & \text{noise} \end{cases} \tag{5} \]

where the input $y(i,j)$ is the original transform coefficient and $g$ is the amplifying gain. This function keeps the coefficients of strong edges, amplifies the coefficients of weak edges, and zeros the noise coefficients.
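
The sketch below illustrates the spirit of (4) and (5): keep strong-edge coefficients, amplify weak-edge coefficients by a gain, and zero the noise. For brevity it classifies a single subband by magnitude thresholds rather than by the per-level mean/max rule of (4); the thresholds and the gain value are hypothetical choices, not those of [7]:

import numpy as np

def nsct_like_enhance(coeffs, sigma, c=3.0, gain=5.0):
    # Keep strong-edge coefficients, amplify weak-edge ones, zero the noise,
    # following the piecewise rule of (4)-(5). Thresholding one subband by
    # c*sigma is a simplification of the per-level classification.
    out = np.zeros_like(coeffs)
    strong = np.abs(coeffs) >= 2 * c * sigma          # treated as strong edges (illustrative split)
    weak = (np.abs(coeffs) >= c * sigma) & ~strong
    out[strong] = coeffs[strong]                      # keep
    out[weak] = gain * coeffs[weak]                   # amplify
    return out                                        # everything below c*sigma stays zero

coeffs = np.array([0.5, 2.0, 8.0, -3.0, -20.0])
print(nsct_like_enhance(coeffs, sigma=1.0))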

2.2.2. Image Enhancement by BM3D

According to the analysis in Section 2.1, BM3D cannot obtain a linear singularity representation like that of NSCT. Therefore, BM3D cannot use the same algorithm as NSCT to achieve image enhancement. In [33], a joint image enhancement and denoising algorithm based on 3D transform-domain collaborative filtering was proposed. In this algorithm, block-matching and a 3D transform are implemented; a hard-thresholding operation on the 3D transformed coefficients is used to remove noise, and the alpha-rooting method is used to amplify coefficients and achieve image enhancement.

Given the transform spectrum $S(u)$ of a signal, with the DC coefficient termed $S(0)$, alpha-rooting is performed as

\[ \tilde{S}(u) = S(u) \left| \frac{S(u)}{S(0)} \right|^{\alpha - 1} \tag{6} \]

where $S(u)$ is the spectrum of the transformed signal and an exponent $0 < \alpha < 1$ leads to the enhancement of image details.
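
A minimal sketch of alpha-rooting as written in (6), applied here to the orthonormal 2D DCT spectrum of a block (the choice of transform, the value of alpha, and the small epsilon guard are illustrative assumptions on our part):

import numpy as np
from scipy.fft import dctn, idctn

def alpha_rooting(block, alpha=0.7, eps=1e-12):
    # Alpha-rooting as in (6): S'(u) = S(u) * |S(u)/S(0)|^(alpha-1).
    # With 0 < alpha < 1, detail coefficients are boosted relative to the DC term S(0).
    spectrum = dctn(block, norm='ortho')
    dc = np.abs(spectrum[0, 0]) + eps
    boosted = spectrum * (np.abs(spectrum) / dc + eps) ** (alpha - 1.0)
    return idctn(boosted, norm='ortho')

rng = np.random.default_rng(1)
block = rng.random((8, 8)) * 50 + 100
print(np.round(alpha_rooting(block), 2))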

In [33], two different approaches are given to perform image enhancement: BM3D-SH3D and BM3D-SH2D. The first implements the alpha-rooting algorithm on the 3D transformed spectrum, while the second implements the alpha-rooting algorithm only on the 2D transformed spectrum after the 1D inverse transform, i.e., on each block's transformed spectrum. In practice, however, both methods achieve image enhancement using essentially only one block in a certain sense: because of the extreme similarity among the blocks in a group, the magnitudes of the coefficients in the high-frequency subbands are all nearly zero.

Accordingly, to achieve better enhancement of image details, we propose using a K-neighbor block-extraction method to replace the block-matching procedure in BM3D and, further, discarding the 2D transform on each image block by implementing only an interblock 1D Haar transform.

3. K-Neighbor Block-Extraction and Haar Transform

For the proposed linear singularity representation method to be effective, the extracted image blocks should have similar smooth backgrounds as well as some differing local details. We first select an image block from the input image and then extract its spatial K-neighbor blocks. The extracted blocks thus satisfy the above-mentioned condition; i.e., all the extracted image blocks have similar smooth backgrounds but different local details. After a fast Haar transform on the group of these image blocks, the image details can be effectively represented in the detail subbands.

3.1. K-Neighbor Block-Extraction

A reference image block $B_{x_R}$ with top-left pixel coordinate $x_R \in X$ (where $X$ is the coordinate set of the input image $I$) is first selected by an $N \times N$ sliding window according to a given sliding step size $N_{\mathrm{step}}$, and then its spatial K-neighbor image blocks are extracted to form a vector $\mathbf{Z}_{x_R}$, where every block can be considered as an element of $\mathbf{Z}_{x_R}$, and their top-left pixel coordinates form a set $S_{x_R}$.

Because we want to implement a Haar transform on the vector $\mathbf{Z}_{x_R}$, the number of extracted blocks $K$ must be a power-of-two integer. In addition, a larger $K$ allows more directional details to be represented. For example, we can extract $K = 8$ image blocks and take all of their top-left pixel coordinates as the $K$-neighborhood top-left pixel coordinates of $x_R$. A block-extraction operation is illustrated in Figure 4. The pink block is an $8 \times 8$ reference block $B_{x_R}$, with the blue solid round as its top-left pixel coordinate. The green and red solid rounds are the top-left pixel coordinates of the extracted neighboring image blocks.
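
A sketch of the extraction step under one possible neighbor layout (the offsets and the clipping at image borders are our assumptions; the paper's exact pattern is the one shown in Figure 4):

import numpy as np

def extract_k_neighbor_blocks(image, x_r, y_r, n=8, k=8, step=1):
    # Extract the block at top-left (y_r, x_r) plus k-1 spatially adjacent
    # blocks. The offsets below are one possible neighbor layout.
    offsets = [(0, 0), (0, step), (step, 0), (step, step),
               (0, -step), (-step, 0), (-step, -step), (step, -step)][:k]
    h, w = image.shape
    blocks, coords = [], []
    for dy, dx in offsets:
        y = np.clip(y_r + dy, 0, h - n)      # keep blocks inside the image
        x = np.clip(x_r + dx, 0, w - n)
        blocks.append(image[y:y + n, x:x + n])
        coords.append((y, x))
    return np.stack(blocks), coords          # shape (k, n, n)

image = np.arange(64 * 64, dtype=float).reshape(64, 64)
group, coords = extract_k_neighbor_blocks(image, x_r=10, y_r=20)
print(group.shape, coords[:3])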

3.2. Haar Transform

In this section, we give a fast Haar transform method tailored to the characteristics of block-extraction. Because all extracted blocks are isometric, we can fully exploit the simplicity of the Haar wavelet to construct a fast Haar transform on each group of blocks.

Forward Transform. Here, the image blocks in a group are denoted as $B_k$, $k = 1, 2, \ldots, K$. For example, if $K = 8$, we can use the following formulation to realize a complete Haar transform with 3 levels of transform:

\[ \mathbf{C} = H \mathbf{Z} \tag{7} \]

where $H$ is an $8 \times 8$ Haar transform matrix, $\mathbf{Z} = (B_1, B_2, \ldots, B_8)^{\mathrm{T}}$ is a column vector in which every element denotes an image block, and $\mathbf{C} = (C_1, C_2, \ldots, C_8)^{\mathrm{T}}$ is also a column vector in which every element denotes a transformed subband. An $8 \times 8$ Haar transform matrix can be defined as follows:

\[ H = \begin{pmatrix} \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} \\ \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} & \tfrac{1}{\sqrt{8}} & -\tfrac{1}{\sqrt{8}} & -\tfrac{1}{\sqrt{8}} & -\tfrac{1}{\sqrt{8}} & -\tfrac{1}{\sqrt{8}} \\ \tfrac{1}{2} & \tfrac{1}{2} & -\tfrac{1}{2} & -\tfrac{1}{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \tfrac{1}{2} & \tfrac{1}{2} & -\tfrac{1}{2} & -\tfrac{1}{2} \\ \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \end{pmatrix} \tag{8} \]

By computing the matrix product, the group $\mathbf{Z}$ is decomposed into 8 subbands, with $C_1$ as the approximation subband and the rest as detail subbands; each row of $H$ realizes a weighted summation or subtraction of the blocks. Figure 5 shows a way of decomposing a group of image blocks with a Haar transform. Note that the contours can be effectively represented in the detail subbands.
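
The following sketch builds the orthonormal 8×8 Haar matrix of (8) as given above and applies the interblock transform (7) to a group of eight blocks; whether the paper uses this exact normalization is an assumption on our part:

import numpy as np

def haar_matrix_8():
    # Orthonormal 8x8 Haar matrix for a complete 3-level transform.
    # Row 1 gives the approximation subband; rows 2-8 give detail subbands.
    s8, s2 = 1 / np.sqrt(8), 1 / np.sqrt(2)
    return np.array([
        [ s8,  s8,  s8,  s8,  s8,  s8,  s8,  s8],
        [ s8,  s8,  s8,  s8, -s8, -s8, -s8, -s8],
        [0.5, 0.5, -0.5, -0.5, 0, 0, 0, 0],
        [0, 0, 0, 0, 0.5, 0.5, -0.5, -0.5],
        [ s2, -s2, 0, 0, 0, 0, 0, 0],
        [0, 0,  s2, -s2, 0, 0, 0, 0],
        [0, 0, 0, 0,  s2, -s2, 0, 0],
        [0, 0, 0, 0, 0, 0,  s2, -s2]])

def forward_beh(group):
    # group has shape (8, n, n): 8 extracted blocks. The interblock Haar
    # transform C = H Z combines corresponding pixels across the 8 blocks.
    return np.tensordot(haar_matrix_8(), group, axes=([1], [0]))   # shape (8, n, n)

rng = np.random.default_rng(2)
group = rng.random((8, 8, 8))
coeffs = forward_beh(group)
print(coeffs.shape)            # (8, 8, 8): subband 0 is the approximation

Applying H across the block dimension is exactly the weighted summation and subtraction among blocks described in Section 1, so no per-block 2D transform is needed.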

Inverse Transform. The Haar transform matrix $H$ defined in (8) is invertible. Thus, one can perfectly reconstruct all original image blocks using the following inverse transform:

\[ \mathbf{Z} = H^{-1} \mathbf{C} \tag{9} \]

where $H^{-1}$ denotes the inverse matrix of $H$.

After finishing the operations on a group of image blocks, we return them to their original locations in a zero matrix of the same size as the input image, averaging all pixels that fall at the same location. After finishing the operations on all reference blocks, we obtain the output image by the following aggregation equation:

\[ \hat{I}(x) = \frac{\sum_{x_R \in X} \sum_{x_m \in S_{x_R}} \hat{B}_{x_m}(x)}{\sum_{x_R \in X} \sum_{x_m \in S_{x_R}} \chi_{x_m}(x)}, \qquad x \in X \tag{10} \]

where $\hat{B}_{x_m}$ is the processed block located at $x_m$, $\chi_{x_m}$ is the characteristic function of the square support of a block located at $x_m$, and all of the image blocks are outside-padded by zeros to form an image of the same size as the input. Because the Haar transform is a perfect-reconstruction transform and the aggregation averages all pixels extracted at the same location, every pixel returns to its original value when the coefficients are left unmodified, so a perfectly reconstructed image is obtained after conducting all of the above operations. Figure 6 shows a Lena image and its reconstructed image.
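
A sketch of the inverse transform (9) and the aggregation (10): processed blocks are returned to their top-left coordinates, accumulated, and divided by how many blocks cover each pixel. Function names and the toy example are ours:

import numpy as np

def inverse_beh(coeffs, H):
    # Inverse interblock Haar transform Z = H^{-1} C (H is orthonormal, so H^{-1} = H^T).
    return np.tensordot(H.T, coeffs, axes=([1], [0]))

def aggregate(blocks_by_coord, image_shape):
    # Aggregation as in (10): accumulate every processed block at its top-left
    # coordinate and divide by how many blocks covered each pixel.
    acc = np.zeros(image_shape)
    cnt = np.zeros(image_shape)
    for (y, x), block in blocks_by_coord:
        n = block.shape[0]
        acc[y:y + n, x:x + n] += block
        cnt[y:y + n, x:x + n] += 1.0       # characteristic function of the block support
    cnt[cnt == 0] = 1.0                    # pixels never covered stay zero
    return acc / cnt

# Two overlapping 4x4 blocks of ones reconstruct a region of ones after averaging.
blocks = [((0, 0), np.ones((4, 4))), ((2, 2), np.ones((4, 4)))]
print(aggregate(blocks, (8, 8)))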

4. Application to Image Enhancement

We apply the proposed linear singularity representation method to image enhancement to verify its effectiveness in capturing image details among a group of blocks. Image enhancement can typically be achieved by amplifying certain transformed coefficients, i.e., amplifying only the transformation coefficients of image details while suppressing the transformation coefficients of background noise [7, 34, 35]. If image details cannot be well represented, the transformation coefficients of background noise may also be amplified along with them. In addition, when a traditional convolution-based transform is used, image details and their surrounding smooth background influence each other, because the transformation coefficients of the background near the image details are usually larger than those farther away; thus, a halo phenomenon is introduced after image enhancement. In contrast, owing to the effectiveness of our proposed linear singularity representation, our method can amplify the transformation coefficients of image details while simultaneously suppressing the transformation coefficients of the background and noise. Most importantly, without using large filter banks, the halo phenomenon can be effectively alleviated.

To suppress background noise, we first estimate the noise standard deviation $\sigma$ of the input image with the robust median operator [36].
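
The robust median operator is commonly computed from the finest-scale diagonal wavelet coefficients as median(|HH|)/0.6745; the sketch below implements that form with a plain Haar HH subband, which is our assumption about the operator used in [36]:

import numpy as np

def estimate_noise_sigma(image):
    # Robust median estimate: sigma ~ median(|HH|) / 0.6745, where HH is the
    # finest-scale diagonal (Haar) wavelet subband of the image.
    img = image[:image.shape[0] // 2 * 2, :image.shape[1] // 2 * 2]
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    hh = (a - b - c + d) / 2.0             # orthonormal Haar HH coefficients
    return np.median(np.abs(hh)) / 0.6745

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = clean + rng.normal(scale=0.05, size=clean.shape)
print(estimate_noise_sigma(noisy))         # close to the true sigma of 0.05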

As analyzed in Section 3.2, we implement the forward Haar transform on each group $\mathbf{Z}_{x_R}$ according to (7) to obtain the transformation coefficients $\mathbf{C}$. Then, we use a nonlinear mapping function

\[ \tilde{C} = f(C;\, g, a, b, \sigma) \tag{11} \]

to amplify the transformation coefficients of image details, where $C$ denotes a Haar-transformed coefficient, $g$ is the gain factor used to amplify the transformation coefficients of image details, $a$ and $b$ are two constant parameters, and $\tilde{C}$ is the enhanced coefficient. Next, we implement the inverse Haar transform to obtain the enhanced blocks by (9). Finally, we use (10) to obtain the enhanced image.
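
Because the exact form of (11) is not reproduced here, the sketch below is only one plausible stand-in with the same roles: coefficients below a noise-dependent threshold are suppressed, mid-range detail coefficients are amplified by the gain g, and very large coefficients are kept. The function form and the constants a and b are our assumptions, not the paper's definition:

import numpy as np

def enhance_detail_coefficients(detail, sigma, gain=4.0, a=2.0, b=20.0):
    # Illustrative stand-in for (11): suppress coefficients with magnitude below
    # a*sigma (background noise), amplify magnitudes between a*sigma and b*sigma
    # by the gain factor, and keep very large coefficients unchanged.
    mag = np.abs(detail)
    out = np.where(mag < a * sigma, 0.0, detail)
    boost = (mag >= a * sigma) & (mag < b * sigma)
    return np.where(boost, gain * out, out)

detail = np.array([0.005, -0.02, 0.08, -0.5])
print(enhance_detail_coefficients(detail, sigma=0.01))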

The image enhancement algorithm by BEH can be summarized as follows:
(1) Estimate the noise standard deviation of the input image.
(2) Extract image blocks by the method in Section 3.1.
(3) Implement the Haar transform according to (7).
(4) Amplify the transformation coefficients of image details and suppress the transformation coefficients of background noise by (11).
(5) Implement the inverse Haar transform according to (9).
(6) Aggregate all of the blocks to obtain the final enhanced image.

The proposed BEH method has two drawbacks for image denoising: the extracted image blocks are not sufficiently similar to each other, and the algorithm does not implement a 2D transform on each block. These two drawbacks limit the separation of noise from signal. Fortunately, because the BM3D method achieves outstanding denoising performance, we can use BM3D to denoise a noisy image and then use the proposed method to enhance the details of the denoised image. We call this overall method BEH-BM3D.

It is worth noting that the BM3D method uses two steps to achieve better image denoising. The first step can basically remove all the noise in the noisy image; however, the image details are also oversmoothed. To restore the oversmoothed image details, BM3D uses a second step, i.e., Wiener filtering. This two-step strategy is needed for the image denoising problem, but for image enhancement the Wiener filtering step is not suitable, since applying Wiener filtering to the enhanced image would restore it toward the original one. Our method implements only the one-step block-extraction Haar transform; thus, its computational complexity is also lower than that of the BM3D method.

5. Experimental Results

In this section, we provide two groups of experiments: one demonstrates the proposed linear singularity representation method, and the other evaluates image enhancement.

5.1. Linear Singularity Representation

To demonstrate the proposed linear singularity representation method, the Barbara image is decomposed via the proposed KN-BEH. Figure 7 shows all the transformed subbands of the proposed method. Image details can clearly be observed in the detail subbands. This is mainly because the KN-BEH can easily capture differences among the blocks of the same group, where neighboring image blocks have very similar backgrounds.

5.2. Image Enhancement

We have conducted image enhancement experiments to verify the effectiveness of the proposed method. When the gain factor $g > 1$, the output image details are enhanced, with a larger $g$ corresponding to a more strongly enhanced image. In all experiments, the two constant parameters $a$ and $b$ are fixed for the normalized input image, i.e., with pixel values ranging from 0 to 1. We adjust only $g$ to obtain different image enhancement results. The enhancement results on House and Barbara are shown in Figures 8 and 9.

To further demonstrate the advantage of the BEH method, we also compare our image enhancement results with those of the NSCT method [7] and the alpha-rooting method [33]. Due to its use of multidirectional filter banks and translation invariance, NSCT is a powerful linear singularity representation method. Figures 8 and 9 show the comparison of image enhancement results from our proposed method, NSCT, and alpha-rooting. We observe that the proposed method hardly produces any halo phenomenon around strong edges, whereas the NSCT method produces a strong halo phenomenon. From Figures 8 and 9, we also observe that our proposed method preserves weak details better than the NSCT method, e.g., the details on the table legs in the bottom-left part of the Barbara image. Because our method better represents linear singularity, as observed in Figures 8 and 9, it also yields better subjective visual sharpness than the alpha-rooting method. Additionally, many small, weak details in the image cannot be sharpened using the alpha-rooting method.

We have further conducted other comparison experiments. For example, we used both BEH-BM3D and alpha-rooting BM3D to enhance images with added Gaussian white noise. Specifically, we first added different levels of noise to the image and then used these two methods to enhance the image and remove the noise. For BEH-BM3D, we used a two-stage procedure, with the first stage performing image denoising by BM3D and the second stage performing enhancement by BEH. For the alpha-rooting BM3D method, exactly following the procedure in [33], we simultaneously enhanced the transformation coefficients of image details and removed the transformation coefficients of noise by hard thresholding in the BM3D transform domain. The experimental results are shown in Figures 10 and 11. From these two figures, we can see that BEH-BM3D better preserves image details and achieves better subjective visual results.

To quantitatively validate image enhancement performance, the Local Phase Coherence Sharpness Index (LPC-SI) [37], an effective image sharpness assessment method, is used. This method treats sharpness as strong local phase coherence near distinctive image features, evaluated in the complex wavelet transform domain, and it can assess sharpness without using the original image as a reference. In our experiments, we compare our results with both the NSCT method and the alpha-rooting method. For each method, we report its best LPC-SI results obtained by adjusting the respective parameters. For the alpha-rooting method, we fix the alpha value and then adjust the standard deviation of noise to obtain the best LPC-SI values. For the NSCT method, we use the enhancement algorithm in [7] and adjust the gain factor in (5) to obtain the best LPC-SI values. All of the results are shown in Table 1. We can see that our method achieves the best results for most images.

In addition, we further used the traditional background variation (BV) and detail variation (DV) [35] to evaluate our results. The BV and DV values represent the variance of background pixels and foreground (detail) pixels, respectively. A good image enhancement method should increase the DV of the original image while keeping, or even decreasing, the BV. The BV and DV values of the three methods are summarized in Table 2. Note that all of these results were obtained, for each image and each method, at the parameter setting that gave the best corresponding LPC-SI. From these results, we can see that the alpha-rooting method achieved the lowest BV, and our method achieved the highest DV; the BV values of our method were only slightly higher than those of the alpha-rooting method. If the ratio between DV and BV is used to assess the results, our method achieves the best outcome.
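
For reference, the sketch below computes BV and DV in the spirit described here, splitting pixels into background and detail sets with a simple gradient-magnitude threshold; the exact protocol of [35] (in particular, how the background/detail split is defined) may differ from this illustrative choice:

import numpy as np

def bv_dv(image, grad_threshold=10.0):
    # Split pixels into background / detail by gradient magnitude and return
    # the variance of each set (BV, DV). The threshold and the simple
    # finite-difference gradient are illustrative choices.
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    detail_mask = grad > grad_threshold
    bv = np.var(image[~detail_mask]) if np.any(~detail_mask) else 0.0
    dv = np.var(image[detail_mask]) if np.any(detail_mask) else 0.0
    return bv, dv

rng = np.random.default_rng(4)
flat = np.full((64, 64), 120.0) + rng.normal(scale=1.0, size=(64, 64))
flat[:, 32:] += 60.0                       # a strong vertical edge
print(bv_dv(flat))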

We also find that NSCT achieves the best LPC-SI value but the worst BV/DV result on the Fingerprint images in Tables 1 and 2, respectively. This is mainly because the estimated standard deviation of noise is too high, which results in most image edges being removed as noise. Therefore, the highest LPC-SI value cannot always be trusted. This is also the main reason why we use both LPC-SI and BV/DV to assess the experimental results.

It is worth pointing out another significant advantage of our proposed method, i.e., its lower time complexity compared with that of the NSCT method. For image enhancement on a grayscale image, our method requires significantly less running time than the NSCT method. In this comparison, our method uses the same parameter values (block size, group size, and sliding step size) as in the previous experiments, and the NSCT method uses its multilevel decomposition with the corresponding numbers of directional subbands at each level.

In the final experiment, we also used the proposed BEH method to enhance several color images and compared the results with the guided image filtering method [20, 21]; the results are shown in Figure 12. Although the image details are enhanced, the enhanced images still look visually natural when using the proposed method. The guided image filtering method can also effectively enhance the image details; however, it also enhances the image background, making the enhanced images look rather artificial.

6. Conclusions

We have proposed an effective linear singularity representation method and have applied it to image enhancement of both gray and color images. Similar to the BM3D method, our method is nonlocal; however, it extracts spatially adjacent blocks instead of using block-matching. With our method, image details can be effectively represented. Additionally, by using different parameters to amplify the transformation coefficients of image details, we can obtain different image enhancement results. Because our method uses no local convolution operation, it introduces fewer halo artifacts than the NSCT method. Furthermore, its computational cost is also significantly lower than that of the NSCT method.

Although our proposed BEH method resembles the original BM3D method, its purpose is to represent image details, which differs from that of the original BM3D. Note that edges in matched image blocks are usually located at the same positions; thus, edge information cannot be preserved in the high-frequency subbands after the third-dimensional transform. Therefore, our proposed method is different from the BM3D method.

In summary, we have presented a simple but effective image linear singularity representation method that achieves better objective and subjective results when applied to image enhancement than other state-of-the-art methods.

Data Availability

The images used in this article can be downloaded from the BM3D website.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Science Foundation of China [grant numbers 61379015 and 61866005]; the Natural Science Foundation of Shandong Province [grant number ZR2011FM004]; and the Talent Introduction Project of Taishan University [grant numbers Y-01-2013012 and Y-01-2014018]; Dr. S.-W. Lee was partially supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017-0-00451).