Research Article  Open Access
Yingkun Hou, Xiaobo Qu, Guanghai Liu, Seong-Whan Lee, Dinggang Shen, "Block-Extraction and Haar Transform Based Linear Singularity Representation for Image Enhancement", Mathematical Problems in Engineering, vol. 2019, Article ID 6395147, 14 pages, 2019. https://doi.org/10.1155/2019/6395147
Block-Extraction and Haar Transform Based Linear Singularity Representation for Image Enhancement
Abstract
In this paper, we develop a novel linear singularity representation method using spatial K-neighbor block-extraction and the Haar transform (BEH). Block-extraction provides a group of image blocks with similar (generally smooth) backgrounds but different image edge locations. An inter-block Haar transform is then used to represent these differences, thus achieving a linear singularity representation. Next, we magnify the weak detail coefficients of BEH to achieve image enhancement. Experimental results show that the proposed method achieves better image enhancement than block-matching and 3D filtering (BM3D), the nonsubsampled contourlet transform (NSCT), and guided image filtering.
1. Introduction
Image enhancement plays an important role in image processing and pattern recognition. Image enhancement techniques can generally be divided into two categories: spatial methods and frequency-domain methods. Because frequency-domain-based methods can represent image details in high-frequency subbands via certain transformations, image enhancement can be achieved by magnifying the weak detail transform coefficients.
The contours and textures provide the most important information in most natural images and usually present linear singularities. Because a classical orthogonal wavelet transform can only effectively represent point singularities [1, 2], a series of beyond wavelets (i.e., Ridgelet [3, 4], Curvelet [5], Contourlet [6], and NSCT [7]) has been developed to better represent linear singularities. However, both orthogonal wavelets and beyond wavelets always convolve the image directly with certain convolution kernels. Due to this convolution operation in filtering, artifacts are inevitably introduced after the inverse transform. In addition, to better represent linear singularities, more directional filter banks with larger support have to be used, at the cost of increased computational complexity.
The convolution operation in traditional transforms is performed between a kernel and a local neighborhood of the image; therefore, these methods are called local methods. In recent years, some nonlocal image processing methods have been developed for image denoising, e.g., nonlocal means (NL-means) [8] and block-matching and 3D filtering (BM3D) [9–15]. These nonlocal methods attempt to find some of the most similar image blocks, either by implementing block-matching and weighted means on the blocks [8], or by implementing a 3D transform on a 3D array stacked from similar blocks for an enhanced sparsity representation, followed by hard-threshold shrinkage of the transformed coefficients to achieve image denoising [9, 10, 16–19]. Because NL-means and BM3D were both initially developed for image denoising, finding sufficiently similar image blocks is crucial. However, overly strong similarities weaken the representation of the differences among blocks in the 3D transformation, or, equivalently, the linear singularity in each block. This is problematic for applications such as image enhancement.
Some new methods of image enhancement have been proposed recently. For example, He et al. [20, 21] proposed a novel guided image filtering method for image enhancement. Wang et al. [22] proposed a color face image enhancement method using adaptive singular value decomposition in the Fourier domain for face recognition. Gao et al. [23] proposed an image enhancement method specifically for visual impairments. Li et al. [24] proposed an adaptive fractional calculus of small probability strategy-based method for image denoising and enhancement. However, when used to enhance image details, these methods usually also enhance the background or introduce a halo phenomenon.
In this paper, we propose a novel linear singularity representation method that avoids the need to find sufficiently similar image blocks. The proposed method is based on the observation that natural images contain many image blocks (or patches) with similar smooth backgrounds within a small neighborhood. Suppose two equal-sized image blocks are extracted from the same small local neighborhood; the locations of their details (i.e., textures or contours) differ only slightly. By simply subtracting one block from the other, the image details can then be effectively represented: the details are preserved only at the relatively large magnitudes, with all other magnitudes approximately equal to zero. Figure 1 illustrates this situation. Inspired by this fact, we propose a novel linear singularity representation method, called K-neighbor block-extraction and Haar transform (BEH). The Haar transform is chosen not only for its low computational complexity but also for its ability to represent sudden transitional signals. Although it lacks continuity and differentiability, this property is actually advantageous for analyzing signals with sudden transitions [25].
In the proposed method, we first select an image block using a sliding window and then extract its spatial K-neighbor image blocks. All of the blocks extracted in this way have similar smooth backgrounds but subtle differences in their detail locations. Next, we implement a fast Haar transform by calculating weighted summations and subtractions among these blocks. Thus, the linear singularity can be effectively represented. To verify the effectiveness of our proposed method, we apply it to image enhancement in order to improve the visibility of images, which is crucial for image processing and computer vision [26–31]. Experimental results demonstrate that the proposed method achieves image enhancement performance superior to that of existing state-of-the-art linear singularity representation methods, including the nonsubsampled contourlet transform (NSCT).
2. NSCT and BM3D for Linear Singularity Representation in Image Enhancement
2.1. Linear Singularity Representation
Because orthogonal wavelets can only effectively represent point singularities, a series of beyond wavelets has been developed for linear singularity representation. Among them, the NSCT has some favorable properties, such as translation invariance and multidirectional filtering, which make it one of the state-of-the-art linear singularity representation methods. Although NSCT achieves the best linear singularity representation performance, it has drawbacks similar to those of other local transform methods; i.e., it always introduces some artifacts after a certain operation on the transformed coefficients followed by the inverse transform. The artifacts come from the convolution operation of the inverse transform, f = B ∗ c, where c denotes the transformed coefficients, B the inverse-transform filter bank, ∗ convolution, and f the reconstructed image. For image denoising, if one implements a hard-thresholding operation, some isolated noisy points may remain after the inverse transform. As these noisy points influence their surrounding pixels, ringing artifacts (for orthogonal wavelets) and strip artifacts (for beyond wavelets) are introduced. An illustrative example is shown in Figure 2, where strip artifacts appear after enhancing some coefficients of a certain directional subband of NSCT.
Among recently developed nonlocal methods, BM3D is one of the state-of-the-art image denoising methods [32]; it was created specifically to achieve ideal image denoising performance through a nonlocal approach. In particular, BM3D does not consider linear singularity representation at all, which causes problems when the block-matching technique is used for this purpose. In the 3D transform, only the 2D transform on each image block can be regarded as a singularity representation. However, this representation degenerates to that of a general 2D wavelet transform, because the third-dimensional transform is applied to highly similar image blocks, whose differences (i.e., singularities) can hardly be represented by the Haar transform. In the extreme case, if two image blocks are completely alike, the subtraction between the two blocks equals zero, and there is no information in the high-frequency subbands of the Haar transform. Although a group formed by block-matching always contains some differences among its blocks, ideal linear singularity representation cannot be achieved. To achieve better linear singularity representation performance, some local differences must exist among a group of blocks. Considering this, we propose to replace the block-matching in the BM3D method with block-extraction.
In the BM3D method, block-matching is implemented by computing the Euclidean distances between a given reference block and each block in its neighborhood, d(B_R, B) = ‖T(B_R) − T(B)‖² / N², where B_R is the reference block, B is the block to be matched, T is a 2D transform (i.e., either the DCT or an orthogonal wavelet transform), and N is the size of each square image block. However, this 2D transform does not change the block-matching results relative to using the original image blocks to calculate the distance. Because T does not affect block-matching, we simplify (2) to the following: d(B_R, B) = ‖B_R − B‖² / N². Here, we show a block-matching example by (3) and give the 3D transform results of the grouped image blocks in Figure 3. From the transformed results in Figure 3, we can see that there is scarcely any information in the high-frequency subbands. The main reason is that all of the blocks in a group are too similar; that is, there is not enough singularity among these blocks.
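The simplified distance of (3) can be sketched as follows. This is our own minimal illustration: the function name, search-window size, and parameters are choices of ours, not taken from the BM3D reference code.

```python
import numpy as np

def block_match(image, ref_xy, block=8, search=16, K=8):
    """Plain block-matching in the spirit of Eq. (3): normalized squared
    Euclidean distance between the reference block and every candidate
    block inside a search window, without any 2D pre-transform.

    Returns the top-left coordinates of the K most similar blocks;
    the reference block itself is always the closest (distance 0).
    """
    x0, y0 = ref_xy
    ref = image[x0:x0 + block, y0:y0 + block].astype(np.float64)
    H, W = image.shape
    cands = []
    for x in range(max(0, x0 - search), min(H - block, x0 + search) + 1):
        for y in range(max(0, y0 - search), min(W - block, y0 + search) + 1):
            blk = image[x:x + block, y:y + block].astype(np.float64)
            d = np.sum((ref - blk) ** 2) / block ** 2  # Eq. (3)
            cands.append((d, (x, y)))
    cands.sort(key=lambda t: t[0])  # stable sort: reference stays first on ties
    return [xy for _, xy in cands[:K]]
```

Running this on an image containing two identical patches returns both patch locations first, illustrating why groups formed this way are dominated by near-duplicate blocks.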
2.2. Application to Image Enhancement
2.2.1. Image Enhancement by NSCT
NSCT has excellent linear singularity representation performance, which allows some relatively weak edges in images to be better represented by NSCT than by BM3D. Therefore, when using NSCT for image enhancement, strong edges and weak edges should be processed differently. In [7], to achieve better image enhancement results, a coefficient x is classified as noise if |x| < cσ, as a weak edge if cσ ≤ |x| < 2cσ, and as a strong edge if |x| ≥ 2cσ, where c is a parameter ranging from 1 to 5 and σ is the standard deviation of noise in the subbands at a specific pyramidal level. With this classification, an NSCT-based image enhancement algorithm can be given as follows: y(x) = 0 if |x| < cσ; y(x) = g·x if cσ ≤ |x| < 2cσ; y(x) = x if |x| ≥ 2cσ, where the input x is the original transform coefficient and g is the amplifying gain. This function keeps the coefficients of strong edges, amplifies the coefficients of weak edges, and zeros the noise coefficients.
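The strong-edge/weak-edge/noise mapping can be sketched as below. This is a simplified stand-in: the thresholds and the exact piecewise form in [7] differ in detail, so treat the constants here as illustrative assumptions.

```python
import numpy as np

def enhance_map(coeffs, sigma, c=3.0, gain=5.0, strong=2.0):
    """Simplified piecewise enhancement map in the spirit of Eq. (5):
    coefficients with |x| < c*sigma are treated as noise and zeroed,
    those with c*sigma <= |x| < strong*c*sigma are weak edges and are
    amplified by `gain`, and larger coefficients (strong edges) are
    kept unchanged. `c`, `gain`, and `strong` are illustrative values.
    """
    a = np.abs(coeffs)
    return np.where(a < c * sigma, 0.0,
           np.where(a < strong * c * sigma, gain * coeffs, coeffs))
```

For sigma = 1, c = 3, gain = 5, a coefficient of 0.5 is zeroed, 4.0 becomes 20.0, and 10.0 is kept, matching the keep/amplify/zero behavior described in the text.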
2.2.2. Image Enhancement by BM3D
According to the analysis in Section 2.1, BM3D cannot achieve linear singularity representation as NSCT does. Therefore, BM3D cannot use the same algorithm as NSCT to achieve image enhancement. In [33], a joint image enhancement and denoising algorithm based on 3D transform-domain collaborative filtering was proposed. In this algorithm, block-matching and a 3D transform were implemented; hard thresholding of the 3D transformed coefficients was used to remove noise, and the alpha-rooting method was used to amplify coefficients and achieve image enhancement.
Given a transform spectrum T of a signal, which contains a DC coefficient termed T(0), alpha-rooting is performed as T′(k) = T(k) · (|T(k)| / |T(0)|)^(α−1), where T is the spectrum of the transformed signal and an exponent 0 < α < 1 leads to the enhancement of image details.
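The alpha-rooting rule can be sketched as follows; we assume the DC coefficient is the first element of the spectrum array, which is a convention of this sketch rather than of [33].

```python
import numpy as np

def alpha_root(spectrum, alpha=0.7):
    """Alpha-rooting of a real transform spectrum, Eq. (6) in spirit.

    Each coefficient keeps its sign while its magnitude is remapped to
    |T(0)|^(1-alpha) * |T(k)|^alpha; for 0 < alpha < 1 this boosts
    small (detail) coefficients relative to the DC term T(0), assumed
    here to be spectrum.flat[0].
    """
    s = np.asarray(spectrum, dtype=np.float64).copy()
    t0 = np.abs(s.flat[0])
    if t0 == 0:
        return s
    mag = np.abs(s)
    ratio = np.where(mag > 0, mag / t0, 0.0)      # |T(k)| / |T(0)|
    s *= np.where(mag > 0, ratio ** (alpha - 1.0), 1.0)
    return s
```

With alpha = 0.5 and a DC coefficient of 100, a detail coefficient of 10 grows to 10·√10 ≈ 31.6 while the DC term is unchanged, which is the detail-boosting behavior the text describes.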
In [33], two different approaches to image enhancement are given: BM3D-SH3D and BM3D-SH2D. The first implements the alpha-rooting algorithm on the 3D transformed spectrum, while the second implements it only on the 2D transformed spectrum after the 1D inverse transform, i.e., on each block-transformed spectrum. In effect, however, both methods achieve image enhancement through essentially one block per group, because the extreme similarity among the blocks in a group makes the magnitudes of the coefficients in the high-frequency subbands all nearly zero.
Accordingly, to achieve better enhancement of image details, we propose using a K-neighbor block-extraction method to replace the block-matching procedure in BM3D, and we further discard the 2D transform on each image block by implementing only an inter-block 1D Haar transform.
3. K-Neighbor Block-Extraction and Haar Transform
For the proposed linear singularity representation method to be effective, the extracted image blocks should have similar smooth backgrounds as well as some different local details. We first select an image block from the input image and then extract its spatial K-neighbor blocks. The extracted blocks thus satisfy the above condition; i.e., all the extracted image blocks have similar smooth backgrounds but different local details. After a fast Haar transform on the group of these image blocks, the image details can be effectively represented in the detail subbands.
3.1. K-Neighbor Block-Extraction
A reference image block B_{x,y} with top-left pixel coordinate (x, y) ∈ Ω (where Ω is the coordinate set of the input image f) is first selected by an N×N sliding window according to a given sliding step size, and then its spatial K-neighbor image blocks are extracted to form a vector V, where every block can be considered an element of V, and their top-left pixel coordinates form a set S.
Because we want to implement a Haar transform on vector V, K must be a power-of-two integer. In addition, to represent more directional details, K should be at least 8. For example, we can extract K image blocks and form their top-left pixel coordinates as the neighborhood top-left pixel coordinates of the reference block. A block-extraction operation is illustrated in Figure 4. The pink block is an 8×8 reference block, with the blue solid dot as its top-left pixel coordinate. The green and red solid dots are the top-left pixel coordinates of the extracted image blocks (for two different choices of K).
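The block-extraction step can be sketched as follows. The spiral-like neighbor ordering here is our own convention for illustration; the exact layout of the K neighbors in the paper (Figure 4) may differ.

```python
import numpy as np

def extract_k_neighbors(image, ref_xy, block=8, K=8):
    """K-neighbor block-extraction (Section 3.1), sketched under our own
    conventions: take the reference block plus K-1 spatially nearest
    blocks, obtained by small top-left shifts ordered by Chebyshev
    distance, so all blocks share a similar background while the
    detail locations shift slightly from block to block.
    """
    x0, y0 = ref_xy
    H, W = image.shape
    # candidate top-left offsets, nearest (including the reference) first
    offsets = sorted(
        ((dx, dy) for dx in range(-2, 3) for dy in range(-2, 3)),
        key=lambda o: (max(abs(o[0]), abs(o[1])), o))
    blocks, coords = [], []
    for dx, dy in offsets:
        x, y = x0 + dx, y0 + dy
        if 0 <= x <= H - block and 0 <= y <= W - block:
            blocks.append(image[x:x + block, y:y + block].astype(np.float64))
            coords.append((x, y))
        if len(blocks) == K:
            break
    return np.stack(blocks), coords
```

The first returned block is the reference itself, and the group size K is a power of two so that the Haar transform of Section 3.2 applies directly.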
3.2. Haar Transform
In this section, we give a fast Haar transform method tailored to the characteristics of block-extraction. Because all extracted blocks are isometric, we can fully exploit the simplicity of the Haar wavelet to construct a fast Haar transform on each group of blocks.
Forward Transform. Here, the image blocks are denoted as B_i, i = 1, 2, …, K. For example, if K = 8, we can use the following formulation to realize a complete Haar transform with 3 levels: C = H·V, where H is a Haar transform matrix, V is a column vector in which every element denotes an image block, and C is also a column vector in which every element denotes a transformed subband. An 8×8 Haar transform matrix H can be defined row by row as follows:
H = [ (1/(2√2))·(1, 1, 1, 1, 1, 1, 1, 1);
      (1/(2√2))·(1, 1, 1, 1, −1, −1, −1, −1);
      (1/2)·(1, 1, −1, −1, 0, 0, 0, 0);
      (1/2)·(0, 0, 0, 0, 1, 1, −1, −1);
      (1/√2)·(1, −1, 0, 0, 0, 0, 0, 0);
      (1/√2)·(0, 0, 1, −1, 0, 0, 0, 0);
      (1/√2)·(0, 0, 0, 0, 1, −1, 0, 0);
      (1/√2)·(0, 0, 0, 0, 0, 0, 1, −1) ].
By computing the matrix product, V can be decomposed into 8 subbands, with the first element of C as the approximation subband and the rest as detail subbands. Figure 5 shows a way of decomposing a group of image blocks with a Haar transform. Note that the contours can be effectively represented in the detail subbands.
Inverse Transform. The Haar transform matrix H defined in (8) is orthogonal and thus invertible, with H⁻¹ = Hᵀ. One can therefore perfectly reconstruct all original image blocks using the following inverse transform: V = H⁻¹·C, where H⁻¹ denotes the inverse matrix of H.
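The inter-block Haar transform and its inverse can be sketched with pairwise weighted sums and differences along the block index, avoiding an explicit matrix product; function names and the butterfly implementation are our own.

```python
import numpy as np

def haar_group_forward(blocks):
    """Full 1D Haar transform across the block index, Eq. (7) in spirit:
    `blocks` is a stack of 2^L equal-size blocks; L levels of pairwise
    weighted sums/differences along axis 0 yield one approximation
    subband followed by 2^L - 1 detail subbands.
    """
    out = np.asarray(blocks, dtype=np.float64).copy()
    n = out.shape[0]
    assert n & (n - 1) == 0, "number of blocks must be a power of two"
    length = n
    while length > 1:
        a = (out[0:length:2] + out[1:length:2]) / np.sqrt(2.0)  # averages
        d = (out[0:length:2] - out[1:length:2]) / np.sqrt(2.0)  # details
        out = np.concatenate([a, d, out[length:]], axis=0)
        length //= 2
    return out

def haar_group_inverse(coeffs):
    """Inverse of `haar_group_forward` (Eq. (9)); reconstruction is
    exact because the Haar transform matrix is orthogonal."""
    c = np.asarray(coeffs, dtype=np.float64).copy()
    n = c.shape[0]
    length = 2
    while length <= n:
        a, d = c[0:length // 2], c[length // 2:length]
        merged = np.empty((length,) + c.shape[1:])
        merged[0::2] = (a + d) / np.sqrt(2.0)
        merged[1::2] = (a - d) / np.sqrt(2.0)
        c = np.concatenate([merged, c[length:]], axis=0)
        length *= 2
    return c
```

For K = 8 the first output subband equals the block sum scaled by 1/(2√2), matching the approximation row of the 8×8 Haar matrix, and forward followed by inverse reproduces the input exactly.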
After finishing operations on a group of image blocks, we return them to their original locations in a zero matrix of the same size as the input image, averaging all pixels that fall at the same location. After finishing operations on all reference blocks, we obtain the output image by the following aggregation: f̂ = (Σ_{(x,y)∈S} B̂_{x,y}) / (Σ_{(x,y)∈S} χ_{x,y}), where χ_{x,y} is the characteristic function of the square support of the block located at (x, y), and all of the image blocks are zero-padded outside their supports to form an image. Because the Haar transform is a perfect-reconstruction transform, the aggregation always averages identical pixel values extracted at the same location; namely, the pixel value returns to its original value at every location, so a perfectly reconstructed image is obtained after conducting all of the above operations. Figure 6 shows a Lena image and its reconstructed image.
4. Application to Image Enhancement
We apply the proposed linear singularity representation method to image enhancement to verify its effectiveness in capturing image details among a group of blocks. Image enhancement can typically be achieved by amplifying certain transformed coefficients, i.e., amplifying only the transformation coefficients of image details while suppressing those of background noise [7, 34, 35]. If image details cannot be well represented, the transformation coefficients of background noise may also be amplified. In addition, with a traditional convolution-based transform, image details and their surrounding smooth background influence each other, because the transformation coefficients of the background near image details are usually larger than those farther away; thus, a halo phenomenon is introduced after image enhancement. However, thanks to its effective linear singularity representation, our method can amplify the transformation coefficients of image details while simultaneously suppressing those of the background and noise. Most importantly, without using large filter banks, the halo phenomenon can be effectively alleviated.
For the purpose of suppressing background noise, we first estimate the noise standard deviation σ of the input image with the robust median operator [36].
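The robust median estimator of [36] is commonly computed from the finest-scale diagonal wavelet coefficients; a sketch using the 2×2 Haar HH band (the implementation details are ours):

```python
import numpy as np

def estimate_noise_sigma(img):
    """Robust median estimate of the Gaussian noise standard deviation:
    sigma = median(|HH|) / 0.6745, where HH are the finest-scale
    diagonal Haar detail coefficients, computed here on 2x2 blocks.
    """
    img = np.asarray(img, dtype=np.float64)
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    # crop to a common shape in case of odd image dimensions
    h = min(a.shape[0], b.shape[0], c.shape[0], d.shape[0])
    w = min(a.shape[1], b.shape[1], c.shape[1], d.shape[1])
    hh = (a[:h, :w] - b[:h, :w] - c[:h, :w] + d[:h, :w]) / 2.0
    return np.median(np.abs(hh)) / 0.6745
```

For pure i.i.d. Gaussian noise the HH coefficients have the same standard deviation as the noise, so the estimator recovers sigma almost exactly; on natural images the median makes it robust to the few large edge coefficients.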
As analyzed in Section 3.2, we implement the forward Haar transform on V according to (7) to obtain the transformation coefficients C. Then, we use a nonlinear mapping function to amplify the transformation coefficients of image details, ĉ = g(c; G, c₁, c₂), where c is a Haar-transformed coefficient, G is the gain factor used to amplify the transformation coefficients of image details, c₁ and c₂ are two constant parameters, and ĉ is the enhanced coefficient. Next, we implement the inverse Haar transform by (9). Finally, we use (10) to obtain the enhanced image.
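The exact form of the mapping function in (11) is not reproduced here, so the following is a hypothetical stand-in that only follows the behavior described in the text: detail coefficients well above the noise level are amplified by G, coefficients at the noise level are suppressed, with a smooth ramp between the two constants to avoid a hard switch. The names G, c1, c2 and the ramp itself are assumptions of this sketch.

```python
import numpy as np

def beh_gain(coeffs, sigma, gain=2.0, c1=3.0, c2=6.0):
    """Hypothetical nonlinear gain map in the spirit of Eq. (11):
    coefficients with |c| below c1*sigma are zeroed (noise), those
    above c2*sigma are amplified by `gain` (details), and a linear
    ramp in between avoids a hard switching artifact.
    """
    a = np.abs(coeffs)
    lo, hi = c1 * sigma, c2 * sigma
    t = np.clip((a - lo) / max(hi - lo, 1e-12), 0.0, 1.0)  # 0..1 ramp
    return coeffs * (t * gain)  # factor is 0 below lo, `gain` above hi
```

With sigma = 1, gain = 2, c1 = 3, c2 = 6, a coefficient of 1 is suppressed to 0 while a coefficient of 10 becomes 20.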
The image enhancement algorithm by BEH can be summarized as follows:
(1) Estimate the noise deviation of the input image.
(2) Extract image blocks by the method in Section 3.1.
(3) Implement the Haar transform according to (7).
(4) Amplify the transformation coefficients of image details and suppress the transformation coefficients of background noise by (11).
(5) Implement the inverse Haar transform according to (9).
(6) Aggregate all of the blocks to obtain the final enhanced image.
The proposed BEH method has two drawbacks for image denoising: the extracted image blocks are not sufficiently similar to each other, and the algorithm does not implement a 2D transform on each block. These two drawbacks limit the separation of noise from signal. Fortunately, because the BM3D method achieves outstanding denoising performance, we can use BM3D to denoise a noisy image and then use the proposed method to enhance the details of the denoised image. This overall method is called BEH-BM3D.
It is worth noting that the BM3D method uses two steps to achieve better image denoising. The first step removes essentially all of the noise in the noisy image; however, the image details are also over-smoothed. To restore the over-smoothed image details, BM3D uses a second step, i.e., Wiener filtering. This two-step strategy is needed for the image denoising problem, but for image enhancement the Wiener filtering step is not suitable, since applying Wiener filtering to the enhanced image would restore it toward the original. Our method implements only a one-step block-extraction Haar transform; thus, its computational complexity is also lower than that of the BM3D method.
5. Experimental Results
In this section, we will provide two groups of experiments. One is to implement the proposed linear singularity representation method, and the other is to implement image enhancement.
5.1. Linear Singularity Representation
To demonstrate the proposed linear singularity representation method, the Barbara image is decomposed via the proposed BEH. Figure 7 shows all of the transformed subbands. Obviously, image details can be observed in the detail subbands. This is mainly because BEH can easily capture differences among the blocks of the same group, where neighboring image blocks have very similar backgrounds.
5.2. Image Enhancement
We have conducted image enhancement experiments to verify the effectiveness of our proposed method. When the gain factor G is greater than 1, the output image details are enhanced, with a larger G producing a more strongly enhanced image. In all experiments, the two constant parameters are fixed for the normalized input image, i.e., with pixel values ranging from 0 to 1, and we adjust only G to obtain different image enhancement results. The enhancement results on House and Barbara are shown in Figures 8 and 9.
To further demonstrate the advantage of the BEH method, we also compare our image enhancement results with those of the NSCT method [7] and the alpha-rooting method [33]. Owing to its multidirectional filter banks and translation invariance, NSCT is a powerful linear singularity representation method. Figures 8 and 9 show the image enhancement results of our proposed method, NSCT, and alpha-rooting. We observe that the proposed method hardly produces any halo phenomenon around strong edges, whereas the NSCT method results in a strong halo phenomenon. From Figures 8 and 9, we also observe that our proposed method preserves weak details better than the NSCT method, e.g., the details on the table legs in the bottom-left part of the Barbara image. Because our method better represents linear singularity, as observed from Figures 8 and 9, it yields better subjective visual sharpness than the alpha-rooting method. Additionally, many small, weak details in the image cannot be sharpened by the alpha-rooting method.
We have further conducted other comparison experiments. For example, we used both BEH-BM3D and alpha-rooting BM3D to enhance images with added Gaussian white noise. Specifically, we first added different levels of noise to the image and then used these two methods to enhance the image and remove noise. For BEH-BM3D, we used a two-stage method, with image denoising by BM3D in the first stage and enhancement by BEH in the second. For the alpha-rooting BM3D method, exactly following the procedure in [33], we simultaneously enhanced the transformation coefficients of image details and removed the noise coefficients by hard thresholding in the BM3D transform domain. The experimental results are shown in Figures 10 and 11. From these two figures, we can see that BEH-BM3D better preserves image details and achieves better subjective visual results.
To quantitatively validate image enhancement performance, the Local Phase Coherence Sharpness Index (LPCSI) [37], a novel and effective image sharpness assessment method, is used. This method regards sharpness as strong local phase coherence near distinctive image features, evaluated in the complex wavelet transform domain; furthermore, it can assess sharpness without using the original image as a reference. In our experiments, we compare our results with those of the NSCT method and the alpha-rooting method. For each method, we report its best LPCSI result, obtained by adjusting the respective parameters. For the alpha-rooting method, we fix the alpha value and adjust the assumed standard deviation of noise to obtain the best LPCSI values. For the NSCT method, we use the enhancement algorithm in [7] and adjust the gain factor in (5) to obtain the best LPCSI values. All of the results are shown in Table 1. We can see that our method achieves the best results for most images.

In addition, we further used the traditional background variation (BV) and detail variation (DV) [35] to evaluate our results. The BV and DV values represent the variance of background and foreground pixels, respectively. A good image enhancement method should increase the DV of the original image while keeping or even decreasing the BV. The BV and DV values of the three methods are summarized in Table 2. Note that all of these results were obtained, for each image and each method, at the setting where the corresponding LPCSI was best. From these results, we can see that the alpha-rooting method achieved the lowest BV, while our method achieved the highest DV. The BV values of our method were only slightly higher than those of the alpha-rooting method; if we use the ratio between DV and BV to assess the results, our method achieves the best outcome.
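A BV/DV-style measurement can be sketched as follows. This is an illustrative version in the spirit of [35]: the gradient-based background/detail split and its threshold are assumptions of ours, not the exact definition from that reference.

```python
import numpy as np

def bv_dv(image, grad_thresh=10.0):
    """Illustrative background variation (BV) and detail variation (DV):
    pixels are split into background and detail by gradient magnitude
    (the threshold is our own choice), and the variance of each set is
    reported. A good enhancement raises DV while keeping BV low.
    """
    img = np.asarray(image, dtype=np.float64)
    # forward differences, padded so shapes match the image
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    detail = np.hypot(gx, gy) >= grad_thresh
    bv = float(np.var(img[~detail])) if (~detail).any() else 0.0
    dv = float(np.var(img[detail])) if detail.any() else 0.0
    return bv, dv
```

A flat image yields BV = DV = 0, and an image whose only variation lies along strong edges contributes mostly to DV, matching the intended roles of the two quantities.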

We also find that NSCT achieves the best LPCSI value but the worst BV/DV result on the Fingerprint images in Tables 1 and 2, respectively. This is mainly because the estimated standard deviation of noise is too high, which results in most image edges being removed as noise. Therefore, the highest LPCSI value cannot always be trusted; this is also the main reason why we use both LPCSI and BV/DV to assess the experimental results.
It is worth noting another significant advantage of our proposed method, i.e., its lower time complexity compared to that of the NSCT method. For image enhancement on a grayscale image, our method takes only second, while the NSCT method takes seconds. Our method uses the same parameter values as above, and the NSCT method uses levels of decomposition, i.e., and directional subbands, respectively.
In the final experiment, we also used the proposed BEH method to enhance several color images and compared the results with the guided image filtering method [20, 21]; the results are shown in Figure 12. With the proposed method, the enhanced images still look visually natural even though the image details are enhanced. The guided image filtering method can also effectively enhance the image details; however, it also enhances the image background, making the enhanced images look rather artificial.
6. Conclusions
We have proposed an effective linear singularity representation method and have applied this method to image enhancement in both gray and color images. Similar to the BM3D method, our method is nonlocal. However, our method extracts some spatially adjacent blocks, instead of using blockmatching. By using our method, the image details can be effectively represented. Additionally, by using different parameters to amplify the transformation coefficients of image details, we can obtain different image enhancement results. Because our method uses no local convolution operation, it introduces fewer halo phenomena, compared to the NSCT method. Furthermore, the computational cost of our method is also significantly lower, compared to that of the NSCT method.
Although our proposed BEH method resembles the original BM3D method, its purpose is to represent image details, which differs from that of the original BM3D. Note that edges in matched image blocks are usually located at the same positions; thus, edge information cannot be preserved in the high-frequency subbands after the third-dimensional transform. Therefore, our proposed method is different from the BM3D method.
In summary, we have presented a simple but effective image linear singularity representation method that achieves better objective and subjective results when applied to image enhancement than other stateoftheart methods.
Data Availability
The images used in this article can be downloaded from the BM3D website.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Science Foundation of China [grant numbers 61379015 and 61866005]; the Natural Science Foundation of Shandong Province [grant number ZR2011FM004]; and the Talent Introduction Project of Taishan University [grant numbers Y012013012 and Y012014018]; Dr. S.W. Lee was partially supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017000451).
References
 S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, 2nd edition, 1999. View at: MathSciNet
 S. Mallat and W. L. Hwang, “Singularity detection and processing with wavelets,” IEEE Transactions on Information Theory, vol. 38, no. 2, pp. 617–643, 1992. View at: Publisher Site  Google Scholar  MathSciNet
 E. J. Candes, Ridgelets: Theory and Applications, Department of Statistics, Stanford University, 1998. View at: MathSciNet
 M. N. Do and M. Vetterli, “The finite ridgelet transform for image representation,” IEEE Transactions on Image Processing, vol. 12, no. 1, pp. 16–28, 2003. View at: Publisher Site  Google Scholar  MathSciNet
 E. J. Candès and D. L. Donoho, “Curveletsa suprisingly effective nonadaptive representation for objects with edges,” in Curve and Surface Fitting, Vanderbilt University Press, SaintMalo, France, 1999. View at: Google Scholar
 M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005. View at: Publisher Site  Google Scholar
 A. L. da Cunha, J. Zhou, and M. N. Do, “The nonsubsampled contourlet transform: theory, design, and applications,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006. View at: Publisher Site  Google Scholar
 A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Modeling and Simulation, vol. 4, no. 2, pp. 490–530, 2005. View at: Publisher Site  Google Scholar
 K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transformdomain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007. View at: Publisher Site  Google Scholar
 Y. K. Hou, C. X. Zhao, D. Y. Yang, and Y. Cheng, “Comments on image denoising by sparse 3D transformdomain collaborative filtering,” IEEE Transactions on Image Processing, vol. 20, no. 1, pp. 268–270, 2011. View at: Publisher Site  Google Scholar
 X. Qu, Y. Hou, F. Lam, D. Guo, J. Zhong, and Z. Chen, “Magnetic resonance image reconstruction from undersampled measurements using a patchbased nonlocal operator,” Medical Image Analysis, vol. 18, no. 6, pp. 843–856, 2014. View at: Publisher Site  Google Scholar
 X. Qu, D. Guo, B. Ning et al., “Undersampled MRI reconstruction with patchbased directional wavelets,” Magnetic Resonance Imaging, vol. 30, no. 7, pp. 964–977, 2012. View at: Publisher Site  Google Scholar
 Y. Hou, S. H. Park, Q. Wang et al., “Enhancement of perivascular spaces in 7 T MR image using haar transform of nonlocal cubes and blockmatching filtering,” Scientific Reports, vol. 7, no. 1, article no. 8569, 2017. View at: Publisher Site  Google Scholar
 Y. Hou, M. Liu, and D. Yang, “Multistage blockmatching transform domain filtering for image denoising,” Journal of ComputerAided Design and Computer Graphics, vol. 26, no. 2, pp. 225–231, 2014. View at: Google Scholar
 F. Zhu, Y. Hou, and J. Yang, “Blockmatching based multifocus image fusion,” Mathematical Problems in Engineering, vol. 2015, 7 pages, 2015. View at: Google Scholar
 Y. Hou and D. Shen, “Image denoising with morphology and sizeadaptive blockmatching transform domain filtering,” Eurasip Journal on Image and Video Processing, vol. 59, no. 1, 2018. View at: Google Scholar
Y. Hou, J. Xu, M. Liu et al., "NLH: a blind pixel-level non-local method for real-world image denoising," https://arxiv.org/abs/1906.06834, 2019.
J. Xu, L. Zhang, W. Zuo, D. Zhang, and X. Feng, "Patch group based nonlocal self-similarity prior learning for image denoising," in Proceedings of the 15th IEEE International Conference on Computer Vision (ICCV 2015), pp. 244–252, Santiago, Chile, 2015.
J. Xu, D. Ren, L. Zhang, and D. Zhang, "Patch group based Bayesian learning for blind image denoising," in Proceedings of the Asian Conference on Computer Vision Workshops (ACCVW), Taipei, Taiwan, 2016.
K. He, J. Sun, and X. Tang, "Guided image filtering," in Computer Vision—ECCV 2010, pp. 1–14, Springer, Heidelberg, Berlin, Germany, 2010.
K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
J. Wang, N. T. Le, J. Lee, and C. Wang, "Color face image enhancement using adaptive singular value decomposition in Fourier domain for face recognition," Pattern Recognition, vol. 57, pp. 31–49, 2016.
X. W. Gao and M. Loomes, "A new approach to image enhancement for the visually impaired," Electronic Imaging, vol. 2016, no. 20, pp. 1–7, 2016.
B. Li and W. Xie, "Image denoising and enhancement based on adaptive fractional calculus of small probability strategy," Neurocomputing, vol. 175, pp. 704–714, 2016.
B. Y. Lee and Y. S. Tarng, "Application of the discrete wavelet transform to the monitoring of tool failure in end milling using the spindle motor current," The International Journal of Advanced Manufacturing Technology, vol. 15, no. 4, pp. 238–243, 1999.
L. Hong, Y. Wan, and A. Jain, "Fingerprint image enhancement: algorithm and performance evaluation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 777–789, 1998.
G. Liu and J. Yang, "Exploiting color volume and color difference for salient region detection," IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 6–16, 2019.
J. Xu, Y. Hou, M. Yu et al., "STAR: a structure and texture aware retinex model," https://arxiv.org/abs/1906.06690, 2019.
J. Xu, Y. Huang, L. Liu et al., "Noisy-As-Clean: learning unsupervised denoising from the corrupted image," https://arxiv.org/abs/1906.06878, 2019.
J. Xu, L. Zhang, and D. Zhang, "External prior guided internal prior learning for real-world noisy image denoising," IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 2996–3010, 2018.
J. Xu, L. Zhang, and D. Zhang, "A trilateral weighted sparse coding scheme for real-world image denoising," in Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 2018.
V. Katkovnik, A. Foi, K. Egiazarian, and J. Astola, "From local kernel to nonlocal multiple-model image denoising," International Journal of Computer Vision, vol. 86, no. 1, pp. 1–32, 2010.
K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Joint image sharpening and denoising by 3D transform-domain collaborative filtering," in Proceedings of the 2007 International TICSP Workshop on Spectral Methods and Multirate Signal Processing (SMMSP), Moscow, Russia, 2007.
J.-L. Starck, F. Murtagh, E. J. Candès, and D. L. Donoho, "Gray and color image contrast enhancement by the curvelet transform," IEEE Transactions on Image Processing, vol. 12, no. 6, pp. 706–717, 2003.
G. Ramponi, N. Strobel, S. K. Mitra, and T.-H. Yu, "Nonlinear unsharp masking methods for image contrast enhancement," Journal of Electronic Imaging, vol. 5, no. 3, pp. 353–366, 1996.
S. G. Chang, B. Yu, and M. Vetterli, "Spatially adaptive wavelet thresholding with context modeling for image denoising," IEEE Transactions on Image Processing, vol. 9, no. 9, pp. 1522–1531, 2000.
R. Hassen, Z. Wang, and M. M. Salama, "Image sharpness assessment based on local phase coherence," IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2798–2810, 2013.
Copyright
Copyright © 2019 Yingkun Hou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.