Mathematical Problems in Engineering

Research Article | Open Access

Volume 2019 |Article ID 6395147 | 14 pages | https://doi.org/10.1155/2019/6395147

Block-Extraction and Haar Transform Based Linear Singularity Representation for Image Enhancement

Academic Editor: Erik Cuevas
Received: 16 May 2019
Accepted: 11 Jul 2019
Published: 06 Aug 2019

Abstract

In this paper, we develop a novel linear singularity representation method using spatial K-neighbor block-extraction and Haar transform (BEH). Block-extraction provides a group of image blocks with similar (generally smooth) backgrounds but different image edge locations. An interblock Haar transform is then used to represent these differences, thus achieving a linear singularity representation. Next, we magnify the weak detailed coefficients of BEH to allow for image enhancement. Experimental results show that the proposed method achieves better image enhancement, compared to block-matching and 3D filtering (BM3D), nonsubsampled contourlet transform (NSCT), and guided image filtering.

1. Introduction

Image enhancement plays an important role in image processing and pattern recognition. Image enhancement techniques can generally be divided into two categories: spatial-domain methods and frequency-domain methods. Because frequency-domain methods can represent image details in high-frequency subbands via certain transforms, image enhancement can be achieved by magnifying the weak detail coefficients.

The contours and textures provide the most important information in most natural images and usually present linear singularities. Because a classical orthogonal wavelet transform can only effectively represent point singularities [1, 2], a series of beyond-wavelet transforms (i.e., Ridgelet [3, 4], Curvelet [5], Contourlet [6], and NSCT [7]) has been developed to better represent linear singularities. However, both orthogonal wavelets and beyond wavelets directly convolve an image with certain convolution kernels. Due to the convolution operation in filtering, artifacts are inevitably introduced after the inverse transform. In addition, to better represent linear singularities, directional filter banks with more directions and larger supports have to be used, at the cost of increased computational complexity.

The convolution operation in traditional transforms is performed between a kernel and a local neighborhood of the image; therefore, these methods are called local methods. In recent years, some nonlocal image processing methods have been developed for image denoising, i.e., nonlocal means (NL-means) [8] and block-matching and 3D filtering (BM3D) [9–15]. These nonlocal methods attempt to find some of the most similar image blocks, either by implementing block-matching and weighted means on the blocks [8], or by implementing a 3D transform on a 3D array stacked from similar blocks for an enhanced sparse representation, followed by hard-threshold shrinkage on the transformed coefficients to achieve image denoising [9, 10, 16–19]. Because NL-means and BM3D were both initially developed for image denoising, finding sufficiently similar image blocks is crucial. However, overly strong similarity weakens the representation of differences among blocks in the 3D transform, or, equivalently, the linear singularity in each block. This is problematic for applications such as image enhancement.

Some new methods of image enhancement have been proposed recently. For example, He et al. [20, 21] proposed a novel guided image filtering method for image enhancement. Wang et al. [22] proposed a color face image enhancement method using adaptive singular value decomposition in the Fourier domain for face recognition. Gao et al. [23] proposed an image enhancement method specifically for visual impairments. Li et al. [24] proposed an adaptive fractional calculus of small probability strategy-based method for achieving image denoising and enhancement. These methods, however, usually enhance the background as well, or introduce a halo phenomenon, when used to enhance image details.

In this paper, we propose a novel linear singularity representation method that avoids the need to find sufficiently similar image blocks. The proposed method is based on the observation that natural images contain many image blocks (or patches) with similar smooth backgrounds within a small neighborhood. Suppose that the detail (i.e., texture or contour) locations differ only slightly between two isometric image blocks extracted from the same small local neighborhood. Then, simply by subtracting one block from the other, the image details can be effectively represented: only the differences at detail locations have relatively large magnitudes, while all other magnitudes are approximately zero. Figure 1 illustrates this situation. Inspired by this fact, we propose a novel linear singularity representation method, called K-neighbor block-extraction and Haar transform (BEH). The Haar transform is chosen not only because of its low computational complexity but also because of its ability to represent sudden signal transitions. Although it lacks continuity and differentiability, this property is actually advantageous for analyzing signals with sudden transitions [25].
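This observation can be made concrete with a small numerical sketch (a hypothetical 8×8 example of our own, not from the paper): two blocks sharing a smooth background, with an edge shifted by one pixel, cancel everywhere except at the edge columns.

```python
import numpy as np

# Hypothetical illustration of the observation above: two blocks cut from
# the same smooth neighborhood share their background, so their difference
# is zero everywhere except where an edge changes position.
background = np.full((8, 8), 100.0)   # shared smooth background
block_a = background.copy()
block_b = background.copy()
block_a[:, 3] += 50.0                 # vertical edge in column 3
block_b[:, 4] += 50.0                 # same edge, shifted one pixel right

diff = block_a - block_b              # background cancels exactly

# Only the two edge columns survive: 2 columns x 8 rows = 16 pixels.
print(np.count_nonzero(diff))         # 16
```

Setting the remaining small magnitudes to zero would then isolate the detail locations, which is exactly the representation BEH exploits.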

In the proposed method, we first select an image block using a sliding window and then extract its spatial K-neighbor image blocks. All of the blocks extracted in this way have similar smooth backgrounds but subtle differences in their detail locations. Next, we implement a fast Haar transform by calculating either weighted summations or subtractions among these blocks. Thus, the linear singularity can be effectively represented. To verify the effectiveness of the proposed method, we apply it to image enhancement in order to improve the visibility of images, which is crucial for image processing and computer vision [26–31]. Experimental results demonstrate that the proposed method achieves image enhancement performance superior to that of existing state-of-the-art linear singularity representation methods, including the nonsubsampled contourlet transform (NSCT).

2. NSCT and BM3D for Linear Singularity Representation in Image Enhancement

2.1. Linear Singularity Representation

Because orthogonal wavelets can only effectively represent point singularities, a series of beyond wavelets has been developed for linear singularity representation. Among them, the NSCT has some favorable properties, such as translation invariance and multidirectional filtering, which make it one of the state-of-the-art linear singularity representation methods. Although NSCT achieves the best linear singularity representation performance, it has drawbacks similar to those of other local transform methods; i.e., it always introduces some artifacts after a certain type of operation on the transformed coefficients followed by the inverse transform. The introduction of artifacts comes from the convolution operation of the inverse transform,

Î = W⁻¹C, (1)

where C denotes the transformed coefficients, W⁻¹ the inverse transform filter bank, and Î the reconstructed image. For image denoising, if one implements a hard-thresholding operation, some isolated noisy points may remain after the inverse transform. As these noisy points influence their surrounding pixels, ringing artifacts (for orthogonal wavelets) and strip artifacts (for beyond wavelets) are introduced. An illustrative example of this situation is shown in Figure 2. From this figure, we can see that some strip artifacts are introduced when enhancing some coefficients of a certain directional subband of NSCT.

Among recently developed nonlocal methods, BM3D is one of the state-of-the-art image denoising methods [32]. BM3D was created by exploring nonlocal approaches to achieve ideal image denoising performance. In particular, BM3D does not consider linear singularity representation at all, which causes problems arising from its block-matching technique. In the 3D transform, only the 2D transform on each image block can be seen as a singularity representation. However, this representation degenerates to a general 2D wavelet transform, because the third-dimensional transform is applied to highly similar image blocks, which implies that little singularity can be represented by the interblock Haar transform. In the extreme case, if two image blocks are completely alike, the subtraction between the two blocks equals zero, and there is no information in the high-frequency subbands of the Haar transform. In practice, there are always some differences among the image blocks in a group formed by block-matching, but ideal linear singularity representation still cannot be achieved. To achieve better linear singularity representation, some local differences must exist among a group of blocks. Considering this, we propose to replace the block-matching in the BM3D method with block-extraction.

In the BM3D method, block-matching is implemented by computing the Euclidean distances between a given reference block and each block in its neighborhood,

d(B_{x_R}, B_x) = ‖T(B_{x_R}) − T(B_x)‖²₂ / N², (2)

where B_{x_R} is the reference block, B_x is the block to be matched, T is a 2D transform (i.e., either the DCT or an orthogonal wavelet transform), and N is the size of each square image block. Since T is orthogonal, it does not change the block-matching results relative to using the original image blocks to calculate distances. Because T does not affect block-matching, we simplify (2) to the following:

d(B_{x_R}, B_x) = ‖B_{x_R} − B_x‖²₂ / N². (3)

Here, we show a block-matching example using (3) and give the 3D transform results of the grouped image blocks in Figure 3. From the transformed results in Figure 3, we can see that there is scarcely any information in the high-frequency subbands. The main reason is that all of the blocks in a group are too similar; that is, there is not enough singularity among these blocks.
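The claim that an orthogonal 2D transform does not alter block-matching distances can be checked numerically. In the sketch below (illustrative; a random orthogonal matrix stands in for the DCT or wavelet transform), the transform-domain and spatial-domain distances coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
ref = rng.normal(size=(N, N))    # reference block
cand = rng.normal(size=(N, N))   # candidate block to be matched

# A random orthogonal matrix applied on both sides stands in for any
# orthogonal 2D transform (DCT, orthogonal wavelet); the Frobenius norm
# is invariant under it, so distances -- and matching ranks -- agree.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))
t2d = lambda Z: Q @ Z @ Q.T

d_transform = np.sum((t2d(ref) - t2d(cand)) ** 2) / N ** 2   # cf. (2)
d_spatial = np.sum((ref - cand) ** 2) / N ** 2               # cf. (3)

print(np.isclose(d_transform, d_spatial))   # True
```

Since the ranking of candidate blocks is determined entirely by these distances, dropping the transform leaves the matching result unchanged while saving computation.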

2.2. Application to Image Enhancement
2.2.1. Image Enhancement by NSCT

NSCT has excellent linear singularity representation performance, which allows some relatively weak edges in images to be better represented by NSCT than by BM3D. Therefore, when using NSCT to achieve better image enhancement, strong edges and weak edges should be processed differently. In [7], to achieve better image enhancement results, the classification below is used to differentiate among strong edges, weak edges, and noise:

    strong edge, if mean(x) ≥ cσ,
    weak edge,   if mean(x) < cσ and max(x) ≥ cσ,
    noise,       if mean(x) < cσ and max(x) < cσ, (4)

where mean(x) and max(x) are taken over the directional subbands at each location, c is a parameter ranging from 1 to 5, and σ is the standard deviation of noise in the subbands at a specific pyramidal level. In this case, an NSCT-based image enhancement mapping can be given as follows:

    ŷ = y for strong edges,  ŷ = g·y for weak edges,  ŷ = 0 for noise, (5)

where the input y is the original transform coefficient and g is the amplifying gain. This function keeps the coefficients of strong edges, amplifies the coefficients of weak edges, and zeros the noise coefficients.
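As a sketch of the three-way rule described above (with illustrative thresholds of our own; the classification in [7] pools statistics across directional subbands, which is simplified here to per-coefficient magnitudes):

```python
import numpy as np

def nsct_like_enhance(y, sigma, c=3.0, g=5.0):
    """Keep strong-edge coefficients, amplify weak-edge coefficients by g,
    and zero coefficients below the noise level (simplified sketch)."""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    strong = np.abs(y) >= 2.0 * c * sigma       # treated as strong edge
    weak = (np.abs(y) >= c * sigma) & ~strong   # weak edge: amplify
    out[strong] = y[strong]
    out[weak] = g * y[weak]
    return out                                  # noise entries stay zero

print(nsct_like_enhance([0.5, 4.0, 20.0], sigma=1.0))   # [ 0. 20. 20.]
```

With sigma = 1 and c = 3, the coefficient 0.5 is zeroed as noise, 4.0 is amplified fivefold as a weak edge, and 20.0 is kept unchanged as a strong edge.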

2.2.2. Image Enhancement by BM3D

According to the analysis in Section 2.1, BM3D cannot obtain a linear singularity representation like NSCT. Therefore, BM3D cannot use the same algorithm as NSCT to achieve image enhancement. In [33], a joint image enhancement and denoising algorithm based on 3D transform-domain collaborative filtering was proposed. In this algorithm, block-matching and a 3D transform are implemented; a hard-thresholding operation on the 3D transformed coefficients removes noise, and the alpha-rooting method amplifies coefficients to achieve image enhancement.

Given a transform spectrum θ of a signal, which contains a DC coefficient θ(0), alpha-rooting is performed as

θ̂(k) = sgn(θ(k)) |θ(0)| |θ(k)/θ(0)|^(1/α), (6)

where θ is the spectrum of the transformed signal and an exponent α > 1 leads to the enhancement of image details.
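A minimal sketch of this rule, assuming a sign-preserving, DC-normalized form (parameter names and the test values are ours):

```python
import numpy as np

def alpha_root(spec, alpha):
    """Alpha-rooting sketch: rescale each coefficient's magnitude
    relative to the DC term; alpha > 1 boosts weak coefficients."""
    spec = np.asarray(spec, dtype=float)
    dc = abs(spec[0])                    # magnitude of the DC coefficient
    out = np.zeros_like(spec)
    nz = spec != 0
    out[nz] = np.sign(spec[nz]) * dc * (np.abs(spec[nz]) / dc) ** (1.0 / alpha)
    return out

coeffs = np.array([100.0, 10.0, -1.0])   # DC followed by two details
boosted = alpha_root(coeffs, alpha=2.0)
# The DC term is unchanged; |10| grows to about 31.6 and |-1| to 10,
# so weak coefficients are enhanced relative to the DC term.
print(np.round(boosted, 1))
```

The same exponent applied uniformly to the whole spectrum is what makes alpha-rooting simple but also what limits its ability to target only image details.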

In [33], two different approaches to image enhancement are given: BM3D-SH3D and BM3D-SH2D. The first implements the alpha-rooting algorithm on the 3D transformed spectrum, while the second implements it only on the 2D transformed spectrum after the 1D inverse transform, i.e., on each block-transformed spectrum. In practice, however, both methods achieve image enhancement essentially via a single block, because the extreme similarity among the blocks in a group makes the magnitudes of the coefficients in the high-frequency subbands nearly zero.

Accordingly, to achieve better enhancement of image details, we propose using a K-neighbor block-extraction method to replace the block-matching procedure in BM3D, and we further discard the 2D transform on each image block, implementing only an interblock 1D Haar transform.

3. K-Neighbor Block-Extraction and Haar Transform

For the proposed linear singularity representation method to be effective, the extracted image blocks should have similar smooth backgrounds as well as some differing local details. We first select an image block from the input image and then extract its spatial K-neighbor blocks. The extracted blocks then satisfy the above condition; i.e., all extracted image blocks have similar smooth backgrounds but different local details. After a fast Haar transform on the group of these image blocks, the image details can be effectively represented in the detail subbands.

3.1. K-Neighbor Block-Extraction

A reference image block B_{x_R} with top-left pixel coordinate x_R ∈ X (where X is the coordinate set of the input image I) is first selected by an N×N sliding window according to a given sliding step size, and then its spatial K-neighbor image blocks are extracted to form a vector B, in which every block is an element; their top-left pixel coordinates form a set S_{x_R}.

Because we want to implement a Haar transform on the vector B, K must be a power-of-two integer. In addition, to represent more directional details, K should be at least 8. For example, we can extract K = 8 image blocks whose top-left pixel coordinates form the 8-neighborhood of the top-left pixel coordinate of B_{x_R}. A block-extraction operation is illustrated in Figure 4. The pink block is an 8×8 reference block B_{x_R}, with the blue solid dot as its top-left pixel coordinate. The green and red solid dots are the top-left pixel coordinates of the extracted image blocks.
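For K = 8, the extraction step can be sketched as follows (the neighborhood offsets are our assumption of how the layout in Figure 4 translates to code):

```python
import numpy as np

# Offsets of the 8-neighborhood of the reference block's top-left pixel.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def extract_k_neighbors(img, r, c, n=8):
    """Extract the K = 8 blocks whose top-left corners form the
    8-neighborhood of (r, c); each block is n x n."""
    return np.stack([img[r + dr:r + dr + n, c + dc:c + dc + n]
                     for dr, dc in OFFSETS])

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
group = extract_k_neighbors(img, 8, 8)
print(group.shape)   # (8, 8, 8): 8 blocks of size 8 x 8
```

Because all eight blocks overlap the reference block by all but one row or column, their backgrounds are nearly identical while their detail locations shift by one pixel.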

3.2. Haar Transform

In this section, we give a fast Haar transform method suited to the characteristics of block-extraction. Because all extracted blocks are isometric, we can fully exploit the simplicity of the Haar wavelet to construct a fast Haar transform on each group of blocks.

Forward Transform. Here, image blocks are denoted as B_k, k = 1, …, K. For example, if K = 8, we can use the following formulation to realize a complete Haar transform with 3 levels:

C = H B, (7)

where H is an 8×8 Haar transform matrix, B = (B_1, …, B_8)ᵀ is a column vector in which every element denotes an image block, and C = (C_1, …, C_8)ᵀ is also a column vector in which every element denotes a transformed subband. An 8×8 orthonormal Haar transform matrix H can be defined as follows:

    H = [ 1/√8   1/√8   1/√8   1/√8   1/√8   1/√8   1/√8   1/√8
          1/√8   1/√8   1/√8   1/√8  -1/√8  -1/√8  -1/√8  -1/√8
          1/2    1/2   -1/2   -1/2    0      0      0      0
          0      0      0      0      1/2    1/2   -1/2   -1/2
          1/√2  -1/√2   0      0      0      0      0      0
          0      0      1/√2  -1/√2   0      0      0      0
          0      0      0      0      1/√2  -1/√2   0      0
          0      0      0      0      0      0      1/√2  -1/√2 ]. (8)

By computing the matrix product, B can be decomposed into 8 subbands, with C_1 as the approximation subband and the rest as detail subbands. Figure 5 shows a way of decomposing a group of image blocks with a Haar transform. Note that the contours can be effectively represented in the detail subbands.

Inverse Transform. The Haar transform matrix H defined in (8) is invertible (indeed orthonormal). Thus, one can perfectly reconstruct all original image blocks using the following inverse transform:

B = H⁻¹C = HᵀC, (9)

where H⁻¹ denotes the inverse matrix of H.

After finishing operations on a group of image blocks, we return them to their original locations in a zero matrix of the same size as the input image, averaging all pixels that fall in the same location. After finishing operations on all reference blocks, we obtain the output image by the following aggregation equation:

Î(x) = Σ_{x_R} Σ_k B_k(x) χ_{x_k}(x) / Σ_{x_R} Σ_k χ_{x_k}(x), (10)

where χ_{x_k} is the characteristic function of the square support of a block located at x_k, and all of the image blocks are outside-padded by zeros to form an image. Because the Haar transform achieves perfect reconstruction, averaging all pixels extracted at the same location in the aggregation procedure returns every pixel to its original value, so a perfectly reconstructed image is obtained after conducting all of the above operations. Figure 6 shows the Lena image and its reconstructed image.
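The forward/inverse pair (7) and (9) can be verified numerically. The sketch below writes out the orthonormal 8-point Haar matrix, applies it along the block axis of a random group, and confirms perfect reconstruction:

```python
import numpy as np

s8, s2, h = 1 / np.sqrt(8), 1 / np.sqrt(2), 0.5
# Orthonormal 8-point Haar matrix: 1 approximation row + 7 detail rows.
H = np.array([
    [ s8,  s8,  s8,  s8,  s8,  s8,  s8,  s8],
    [ s8,  s8,  s8,  s8, -s8, -s8, -s8, -s8],
    [  h,   h,  -h,  -h,   0,   0,   0,   0],
    [  0,   0,   0,   0,   h,   h,  -h,  -h],
    [ s2, -s2,   0,   0,   0,   0,   0,   0],
    [  0,   0,  s2, -s2,   0,   0,   0,   0],
    [  0,   0,   0,   0,  s2, -s2,   0,   0],
    [  0,   0,   0,   0,   0,   0,  s2, -s2],
])
assert np.allclose(H @ H.T, np.eye(8))        # orthonormal, hence invertible

# 8 extracted blocks of size 8 x 8, stacked along axis 0.
blocks = np.random.default_rng(1).normal(size=(8, 8, 8))
coeffs = np.tensordot(H, blocks, axes=1)      # forward transform, cf. (7)
recon = np.tensordot(H.T, coeffs, axes=1)     # inverse transform, cf. (9)
print(np.allclose(recon, blocks))             # True: perfect reconstruction
```

Because H mixes whole blocks rather than pixels, the forward transform costs only a handful of block additions and subtractions per group.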

4. Application to Image Enhancement

We apply the proposed linear singularity representation method to image enhancement to verify its effectiveness in capturing image details among a group of blocks. Amplifying certain transformed coefficients can typically achieve image enhancement, i.e., amplifying only the transformation coefficients of image details but suppressing transformation coefficients of background noise [7, 34, 35]. If image details cannot be well represented, the transformation coefficients of background noises may also be amplified when amplifying transformation coefficients. In addition, when implementing the traditional convolution-based transform, image details and their surrounding smooth background can be influenced by each other because the transformation coefficients of the background near the image details are usually larger than those that are far away. Thus, the halo phenomenon would be introduced after image enhancement. However, due to the effectiveness of our proposed linear singularity representation, it can amplify the transformation coefficients of image details and also suppress the transformation coefficients of background and noise simultaneously. Most importantly, without using large filter banks, the halo phenomenon can be effectively alleviated.

To suppress background noise, we first estimate the noise standard deviation σ of the input image with the robust median operator [36].
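A sketch of this estimator, assuming the Donoho-style robust median rule σ ≈ median(|d|)/0.6745 over finest-scale detail coefficients (here horizontal Haar differences stand in for them):

```python
import numpy as np

def estimate_sigma(img):
    """Robust median noise estimate: median absolute finest-scale
    Haar detail coefficient divided by 0.6745 (sketch)."""
    d = (img[:, 1::2] - img[:, ::2]) / np.sqrt(2)   # finest-scale details
    return np.median(np.abs(d)) / 0.6745

rng = np.random.default_rng(3)
noisy = rng.normal(0.0, 5.0, size=(256, 256))       # pure noise, sigma = 5
print(estimate_sigma(noisy))                        # close to 5.0
```

The median of absolute values is insensitive to the sparse large coefficients produced by edges, which is why the estimate tracks the noise level rather than image content.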

As analyzed in Section 3.2, we implement the forward Haar transform on B according to (7) to obtain the transformation coefficients C. We then use a nonlinear mapping function to amplify the transformation coefficients of image details,

ŷ = f(y; g, a, b), (11)

where y is a Haar-transformed coefficient, g is the gain factor that amplifies the transformation coefficients of image details, a and b are two constant parameters, and ŷ is the enhanced coefficient. Next, we implement the inverse Haar transform by (9). Finally, we use (10) to obtain the enhanced image.

The image enhancement algorithm by BEH can be summarized as follows:
(1) Estimate the noise deviation of the input image.
(2) Extract image blocks by the method in Section 3.1.
(3) Implement the Haar transform according to (7).
(4) Amplify the transformation coefficients of image details and suppress those of background noise by (11).
(5) Implement the inverse Haar transform according to (9).
(6) Aggregate all of the blocks by (10) to obtain the final enhanced image.
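These steps can be put together in a compact sketch. The detail-amplification rule below is a hypothetical stand-in for (11), simply scaling detail-subband coefficients by a gain g; block size and sliding step are illustrative:

```python
import numpy as np

def haar_matrix(k):
    """Recursive orthonormal Haar matrix of size k x k (k a power of 2)."""
    if k == 1:
        return np.array([[1.0]])
    sub = haar_matrix(k // 2)
    m = np.vstack([np.kron(sub, [1.0, 1.0]),
                   np.kron(np.eye(k // 2), [1.0, -1.0])])
    return m / np.linalg.norm(m, axis=1, keepdims=True)

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def beh_enhance(img, n=8, step=4, g=2.0):
    H = haar_matrix(8)
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    rows, cols = img.shape
    for r in range(1, rows - n - 1, step):
        for c in range(1, cols - n - 1, step):
            # extract the 8-neighbor group of blocks
            group = np.stack([img[r + dr:r + dr + n, c + dc:c + dc + n]
                              for dr, dc in OFFSETS])
            coef = np.tensordot(H, group, axes=1)    # forward Haar, cf. (7)
            coef[1:] *= g                            # stand-in for (11)
            recon = np.tensordot(H.T, coef, axes=1)  # inverse Haar, cf. (9)
            for k, (dr, dc) in enumerate(OFFSETS):   # aggregate, cf. (10)
                out[r + dr:r + dr + n, c + dc:c + dc + n] += recon[k]
                weight[r + dr:r + dr + n, c + dc:c + dc + n] += 1.0
    return out / np.maximum(weight, 1.0)

# Sanity check: with g = 1 the pipeline reduces to perfect reconstruction
# on all covered pixels (Haar inverts exactly; aggregation averages
# identical values).
img = np.random.default_rng(2).normal(size=(32, 32))
flat = beh_enhance(img, g=1.0)
covered = np.abs(flat) > 1e-12
print(np.allclose(flat[covered], img[covered]))   # True
```

Choosing g > 1 amplifies the interblock detail subbands and hence sharpens edges; a production version would additionally zero coefficients below the estimated noise level.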

The proposed BEH method has two drawbacks for image denoising: the extracted image blocks are not sufficiently similar to each other, and the algorithm does not implement a 2D transform on each block. These two drawbacks limit its ability to separate noise from signal. Fortunately, because the BM3D method achieves outstanding denoising performance, we can use BM3D to denoise a noisy image and then use the proposed method to enhance the details of the denoised image. We call this overall method BEH-BM3D.

It is worth noting that the BM3D method consists of two steps for achieving better image denoising. The first step removes essentially all of the noise in the noisy image; however, the image details are also oversmoothed. To restore the oversmoothed image details, BM3D uses a second step, i.e., Wiener filtering. This two-step strategy is needed for the image denoising problem; for image enhancement, however, the Wiener filtering step is not suitable, since applying Wiener filtering to the enhanced image would restore it toward the original one. We implement only a one-step block-extraction Haar transform in our method; thus, the computational complexity is also lower than that of the BM3D method.

5. Experimental Results

In this section, we provide two groups of experiments: the first implements the proposed linear singularity representation method, and the second implements image enhancement.

5.1. Linear Singularity Representation

To demonstrate the proposed linear singularity representation method, the Barbara image is decomposed via the proposed BEH. Figure 7 shows all of the transformed subbands of the proposed method. Image details can clearly be observed in the detail subbands. This is mainly because BEH can easily capture differences among the blocks of the same group, where neighboring image blocks have very similar backgrounds.

5.2. Image Enhancement

We have conducted image enhancement experiments to verify the effectiveness of the proposed method. When the gain factor g > 1, the output image details are enhanced, with a larger g corresponding to a more strongly enhanced image. In all experiments, the two constant parameters in (11) are held fixed for the normalized input image, i.e., with pixel values ranging from 0 to 1; we adjust only g to obtain different image enhancement results. The enhancement results on House and Barbara are shown in Figures 8 and 9.

To further demonstrate the advantage of the BEH method, we also compare our image enhancement results with those of the NSCT method [7] and the alpha-rooting method [33]. Due to its multidirectional filter banks and translation invariance, NSCT is a powerful linear singularity representation method. Figures 8 and 9 show the comparison of image enhancement results by our proposed method, NSCT, and alpha-rooting. We observe that the proposed method produces hardly any halo phenomenon around strong edges, whereas the NSCT method produces a strong halo phenomenon. From Figures 8 and 9, we also observe that our proposed method preserves weak details better than the NSCT method, e.g., the details on the table legs at the bottom-left of the Barbara image. Because our method better represents linear singularity, as observed from Figures 8 and 9, it also yields better subjective visual sharpness than the alpha-rooting method. Additionally, many small, weak details in the image cannot be sharpened by the alpha-rooting method.

We have further conducted other comparison experiments. For example, we used both BEH-BM3D and alpha-rooting BM3D to enhance images with added Gaussian white noise. Specifically, we first added different levels of noise to the image and then used these two methods to enhance the image and remove noise. For BEH-BM3D, we used a two-stage method, with the first stage of image denoising by BM3D and the second stage of enhancement by BEH. For the alpha-rooting BM3D method, by exactly following the procedure in [33], we simultaneously enhanced the transformation coefficients of image details and removed the transformation coefficients of noise by hard thresholding in BM3D transform domain. The experimental results are shown in Figures 10 and 11. From these two figures, we can see that the BEH-BM3D can better preserve image details and achieve better subjective visual results.

To quantitatively validate image enhancement performance, we use the Local Phase Coherence Sharpness Index (LPC-SI) [37], a novel and effective image sharpness assessment method. This method considers sharpness as strong local phase coherence near distinctive image features, evaluated in the complex wavelet transform domain, and it can assess sharpness without using the original image as a reference. In our experiments, we compare our results with both the NSCT method and the alpha-rooting method, reporting the best LPC-SI result of each method obtained by adjusting its respective parameters. For the alpha-rooting method, we fix the alpha value and adjust the assumed standard deviation of noise to obtain the best LPC-SI values. For the NSCT method, we use the enhancement algorithm in [7], adjusting the gain factor in (5) to obtain the best LPC-SI values. All of the results are shown in Table 1. Our method achieves the best results for most images.


Table 1: LPC-SI values of the input images and the enhanced results.

Image         Input     NSCT      Alpha-rooting   Our proposed
Barbara       0.9246    0.9532    0.9685          0.9682
House         0.8760    0.9302    0.9395          0.9603
Boat          0.9341    0.9503    0.9529          0.9590
Lena          0.9060    0.9411    0.9549          0.9629
Peppers       0.9210    0.9533    0.9473          0.9592
Couple        0.9213    0.9535    0.9499          0.9565
Hill          0.8840    0.9438    0.9422          0.9566
Man           0.9104    0.9508    0.9499          0.9594
Cameraman     0.9364    0.9578    0.9534          0.9559
Fingerprint   0.7777    0.9401    0.9034          0.9204

In addition, we further used the traditional background variation (BV) and detail variation (DV) [35] to evaluate our results. The BV and DV values represent the variance of the background and foreground pixels, respectively. A good image enhancement method should increase the DV of the original image while keeping or even decreasing the BV. The BV and DV values of the three methods are summarized in Table 2. Note that all of these results were obtained, for each image and each method, at the settings where the corresponding LPC-SI was best. From these results, we can see that the alpha-rooting method achieved the lowest BV, and our method achieved the highest DV in most cases. However, the BV values of our method are only slightly higher than those of the alpha-rooting method. Using the ratio between DV and BV to assess the results, our method achieves the best outcome.


Table 2: BV and DV values for the input images and the three enhancement methods.

              Input           Alpha-rooting    NSCT             Our proposed
Image         BV     DV       BV     DV        BV     DV        BV     DV
Lena          0.55   6.60     0.34   13.38     0.40   16.85     0.36   18.38
House         0.62   7.05     0.12   24.14     0.26   20.77     0.12   32.33
Barbara       0.66   12.53    0.33   32.57     0.48   31.48     0.37   33.50
Peppers       0.68   10.43    0.31   19.74     0.47   13.04     0.42   31.02
Boat          0.66   9.47     0.33   15.43     0.47   24.40     0.34   23.07
Couple        0.63   9.95     0.34   24.20     0.45   25.65     0.40   30.47
Hill          0.58   8.42     0.30   18.52     0.41   18.78     0.34   21.81
Man           0.53   8.60     0.28   19.67     0.48   21.66     0.34   29.22
Cameraman     0.60   11.01    0.26   28.36     0.36   35.62     0.35   43.79
Fingerprint   0.53   18.95    0.39   61.97     0.57   10.47     0.52   50.54

We also find that NSCT achieves the best LPC-SI value but the worst BV/DV result on the Fingerprint image in Tables 1 and 2, respectively. This is mainly because the estimated standard deviation of noise is too high, which results in most image edges being removed as noise. Therefore, the highest LPC-SI value cannot always be trusted, which is the main reason we use both LPC-SI and BV/DV to assess the experimental results.

Another significant advantage of our proposed method is its lower time complexity compared to that of the NSCT method: for image enhancement on a grayscale image, our method takes only a small fraction of the time required by NSCT. In this comparison, our method uses the same parameter values as above, and the NSCT method uses several levels of decomposition with multiple directional subbands per level.

In the final experiment, we also used the proposed BEH method to enhance several color images and compared the results with the guided image filtering method [20, 21]; the results are shown in Figure 12. Although the image details are enhanced, the enhanced images remain visually natural when using the proposed method. The guided image filtering method can also effectively enhance the image details; however, it enhances the image background as well, making the enhanced images look rather artificial.

6. Conclusions

We have proposed an effective linear singularity representation method and have applied it to the enhancement of both gray and color images. Similar to the BM3D method, our method is nonlocal; however, it extracts spatially adjacent blocks instead of using block-matching. With our method, image details can be effectively represented. Additionally, by using different parameters to amplify the transformation coefficients of image details, we can obtain different image enhancement results. Because our method uses no local convolution operation, it introduces less of a halo phenomenon than the NSCT method. Furthermore, its computational cost is also significantly lower than that of the NSCT method.

Although our proposed BEH method resembles the original BM3D method, its purpose is to represent image details, which differs from that of the original BM3D. Note that edges in matched image blocks are usually located at the same positions; thus, edge information cannot be preserved in the high-frequency subbands after the third-dimensional transform. Therefore, our proposed method is different from the BM3D method.

In summary, we have presented a simple but effective image linear singularity representation method that achieves better objective and subjective results when applied to image enhancement than other state-of-the-art methods.

Data Availability

The images used in this article can be downloaded from the BM3D website.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Science Foundation of China [grant numbers 61379015 and 61866005]; the Natural Science Foundation of Shandong Province [grant number ZR2011FM004]; and the Talent Introduction Project of Taishan University [grant numbers Y-01-2013012 and Y-01-2014018]; Dr. S.-W. Lee was partially supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017-0-00451).

References

  1. S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, 2nd edition, 1999.
  2. S. Mallat and W. L. Hwang, “Singularity detection and processing with wavelets,” IEEE Transactions on Information Theory, vol. 38, no. 2, pp. 617–643, 1992.
  3. E. J. Candes, Ridgelets: Theory and Applications, Department of Statistics, Stanford University, 1998.
  4. M. N. Do and M. Vetterli, “The finite ridgelet transform for image representation,” IEEE Transactions on Image Processing, vol. 12, no. 1, pp. 16–28, 2003.
  5. E. J. Candès and D. L. Donoho, “Curvelets-a suprisingly effective nonadaptive representation for objects with edges,” in Curve and Surface Fitting, Vanderbilt University Press, Saint-Malo, France, 1999.
  6. M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
  7. A. L. da Cunha, J. Zhou, and M. N. Do, “The nonsubsampled contourlet transform: theory, design, and applications,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
  8. A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Modeling and Simulation, vol. 4, no. 2, pp. 490–530, 2005.
  9. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
  10. Y. K. Hou, C. X. Zhao, D. Y. Yang, and Y. Cheng, “Comments on image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 20, no. 1, pp. 268–270, 2011.
  11. X. Qu, Y. Hou, F. Lam, D. Guo, J. Zhong, and Z. Chen, “Magnetic resonance image reconstruction from undersampled measurements using a patch-based nonlocal operator,” Medical Image Analysis, vol. 18, no. 6, pp. 843–856, 2014.
  12. X. Qu, D. Guo, B. Ning et al., “Undersampled MRI reconstruction with patch-based directional wavelets,” Magnetic Resonance Imaging, vol. 30, no. 7, pp. 964–977, 2012.
  13. Y. Hou, S. H. Park, Q. Wang et al., “Enhancement of perivascular spaces in 7 T MR image using haar transform of non-local cubes and block-matching filtering,” Scientific Reports, vol. 7, no. 1, article no. 8569, 2017.
  14. Y. Hou, M. Liu, and D. Yang, “Multi-stage block-matching transform domain filtering for image denoising,” Journal of Computer-Aided Design and Computer Graphics, vol. 26, no. 2, pp. 225–231, 2014.
  15. F. Zhu, Y. Hou, and J. Yang, “Block-matching based multifocus image fusion,” Mathematical Problems in Engineering, vol. 2015, 7 pages, 2015.
  16. Y. Hou and D. Shen, “Image denoising with morphology- and size-adaptive block-matching transform domain filtering,” Eurasip Journal on Image and Video Processing, vol. 59, no. 1, 2018.
  17. Y. Hou, J. Xu, M. Liu et al., “NLH: a blind pixel-level non-local method for real-world image denoising,” https://arxiv.org/abs/1906.06834, 2019.
  18. J. Xu, L. Zhang, W. Zuo, D. Zhang, and X. Feng, “Patch group based nonlocal self-similarity prior learning for image denoising,” in Proceedings of the 15th IEEE International Conference on Computer Vision, ICCV 2015, pp. 244–252, Santiago, Chile, 2015.
  19. J. Xu, D. Ren, L. Zhang, and D. Zhang, “Patch group based bayesian learning for blind image denoising,” in Proceedings of the Asian Conference on Computer Vision Workshop (ACCVW), Taipei, Taiwan, 2016.
  20. K. He, J. Sun, and X. Tang, “Guided image filtering,” in Computer Vision—ECCV 2010, pp. 1–14, Springer, Heidelberg, Berlin, Germany, 2010.
  21. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013. View at: Publisher Site | Google Scholar
  22. J. Wang, N. T. Le, J. Lee, and C. Wang, “Color face image enhancement using adaptive singular value decomposition in fourier domain for face recognition,” Pattern Recognition, vol. 57, pp. 31–49, 2016. View at: Publisher Site | Google Scholar
  23. X. W. Gao and M. Loomes, “A new approach to image enhancement for the visually impaired,” Electronic Imaging, vol. 2016, no. 20, pp. 1–7, 2016. View at: Google Scholar
  24. B. Li and W. Xie, “Image denoising and enhancement based on adaptive fractional calculus of small probability strategy,” Neurocomputing, vol. 175, pp. 704–714, 2016. View at: Publisher Site | Google Scholar
  25. B. Y. Lee and Y. S. Tarng, “Application of the discrete wavelet transform to the monitoring of tool failure in end milling using the spindle motor current,” The International Journal of Advanced Manufacturing Technology, vol. 15, no. 4, pp. 238–243, 1999. View at: Publisher Site | Google Scholar
  26. L. Hong, Y. Wan, and A. Jain, “Fingerprint image enhancement: algorithm and performance evaluation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 777–789, 1998. View at: Publisher Site | Google Scholar
  27. G. Liu and J. Yang, “Exploiting color volume and color difference for salient region detection,” IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 6–16, 2019. View at: Publisher Site | Google Scholar
28. J. Xu, Y. Hou, M. Yu et al., “STAR: a structure and texture aware retinex model,” 2019, https://arxiv.org/abs/1906.06690.
29. J. Xu, Y. Huang, L. Liu et al., “Noisy-As-Clean: learning unsupervised denoising from the corrupted image,” 2019, https://arxiv.org/abs/1906.06878.
30. J. Xu, L. Zhang, and D. Zhang, “External prior guided internal prior learning for real-world noisy image denoising,” IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 2996–3010, 2018.
31. J. Xu, L. Zhang, and D. Zhang, “A trilateral weighted sparse coding scheme for real-world image denoising,” in Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 2018.
32. V. Katkovnik, A. Foi, K. Egiazarian, and J. Astola, “From local kernel to nonlocal multiple-model image denoising,” International Journal of Computer Vision, vol. 86, no. 1, pp. 1–32, 2010.
33. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Joint image sharpening and denoising by 3D transform-domain collaborative filtering,” in Proceedings of the 2007 International TICSP Workshop on Spectral Methods and Multirate Signal Processing (SMMSP), Moscow, Russia, 2007.
34. J.-L. Starck, F. Murtagh, E. J. Candès, and D. L. Donoho, “Gray and color image contrast enhancement by the curvelet transform,” IEEE Transactions on Image Processing, vol. 12, no. 6, pp. 706–717, 2003.
35. G. Ramponi, N. Strobel, S. K. Mitra, and T.-H. Yu, “Nonlinear unsharp masking methods for image contrast enhancement,” Journal of Electronic Imaging, vol. 5, no. 3, pp. 353–366, 1996.
36. S. G. Chang, B. Yu, and M. Vetterli, “Spatially adaptive wavelet thresholding with context modeling for image denoising,” IEEE Transactions on Image Processing, vol. 9, no. 9, pp. 1522–1531, 2000.
37. R. Hassen, Z. Wang, and M. M. Salama, “Image sharpness assessment based on local phase coherence,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2798–2810, 2013.

Copyright © 2019 Yingkun Hou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

