Scanning
Volume 2019, Article ID 4271761, 15 pages
https://doi.org/10.1155/2019/4271761
Research Article

No-Reference Quality Assessment Method for Blurriness of SEM Micrographs with Multiple Texture

1School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China
2School of Physics, China University of Mining and Technology, Xuzhou, China
3School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, China
4Advanced Analysis and Computation Centre, China University of Mining and Technology, Xuzhou, China

Correspondence should be addressed to Zhaolin Lu; zhaolinlu@cumt.edu.cn

Received 5 August 2018; Revised 30 December 2018; Accepted 4 February 2019; Published 2 June 2019

Academic Editor: Antonio Checco

Copyright © 2019 Hui Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Scanning electron microscopy (SEM) plays an important role in the intuitive understanding of microstructures because it can provide ultrahigh magnification. Tens or hundreds of images are regularly generated and saved during a typical microscopy imaging process. Given the subjectivity of a microscopist’s focusing operation, blurriness is an important distortion that degrades the quality of micrographs. The selection of high-quality micrographs using subjective methods is expensive and time-consuming. This study proposes a new no-reference quality assessment method for evaluating the blurriness of SEM micrographs. According to the Gestalt perception psychology and the entropy masking property, the human visual system is more sensitive to the distortions of cartoon components than to those of redundant textured components. Micrographs are initially decomposed into cartoon and textured components. Then, the spectral and spatial sharpness maps of the cartoon components are extracted. One metric is calculated by combining the spatial and spectral sharpness maps of the cartoon components. The other metric is calculated on the basis of the edges of the maximum local variation map of the cartoon components. Finally, the two metrics are combined into the final metric. The objective scores generated using this method exhibit high correlation and consistency with the subjective scores.

1. Introduction

Scanning electron microscopy (SEM) helps researchers intuitively understand microstructures because it can provide ultrahigh magnification. SEM is playing an increasingly important role in various research areas, such as medical imaging, automated inspection, bioimaging, and ore detection. At present, microscopists must deal with a considerable number of images because tens or hundreds of images are regularly generated and saved during a typical microscopy imaging process [1–3]. Images obtained via SEM may be blurred because of the imaging equipment used or the operators who performed the process. Frequently, only a few images are useful for further analysis. Given the subjectivity of the SEM operation, blurring is a major distortion in SEM images [4–6]. Postek and Vladár qualitatively and quantitatively analysed the sharpness of micrographs in the Fourier domain [7, 8]. Their work provided valuable information to the SEM community, and their approach was simple and easy to interpret. However, Postek and Vladár disregarded the characteristics of the human visual system (HVS), which is the final receiver of the images. Thus, HVS characteristics should be considered.

Image quality assessment (IQA) is a useful method for finding clear images [9–11]. Human beings are the ultimate receivers of processed images, and they judge image quality. Automatic approaches that can assess image quality consistently with human subjective evaluation must be developed [10]. Subjective IQA methods require numerous observers to participate in experiments. These methods are not only expensive and time-consuming but they also cannot be incorporated into automatic or real-time image systems [9]. Hence, objective quality methods that can automatically and accurately assess image quality must be developed. Objective IQA methods can be classified into three categories on the basis of the availability of the reference images: full reference (FR) [9, 10], in which complete reference images are required; reduced reference (RR) [12, 13], in which partial information about the reference images is available; and no reference (NR) [14–16], in which no information about the reference images is necessary. Given that no reference images are available in microscopy applications, the use of the FR and RR methods is limited there.

NR assessment methods for blurriness or sharpness can be divided into three categories [17]. (1) Edge-based approaches assume that image edges spread when blurriness occurs. Marziliano et al. detected vertical edges by using the Sobel operator and then obtained a blurriness measure from the edge map; the average width of all edge pixels was regarded as a metric for blurriness [18]. Ferzli and Karam combined Weber’s law with Marziliano’s method and proposed the “just noticeable blur” method, which considers HVS properties [14]. (2) Transform-based approaches assume that blurriness manifests as measurable distortion in transform domains, and thus, many methods assess blurriness on the basis of transforms such as the discrete cosine transform [19], the discrete wavelet transform (DWT) [20], and sparse transforms [21]. Some methods utilise information from two or more domains. Chen and Bovik used the combined spatial and gradient information of the DWT as the final blurriness metric [16]. Vu et al. combined spatial features with spectral features as a sharpness measurement [22]. Li et al. proposed a robust method by learning multiscale features extracted in the spatial and spectral domains [23]. (3) Statistical pixel information-based approaches analyse the distribution laws of the pixels of original images or their maps. Considering that blurriness decreases the variance of differences in the intensity of adjacent pixels, Tsomko and Kim calculated the block difference variance and regarded it as a blurriness metric [24]. Bahrami and Kot developed a method based on the maximum local variation (MLV) distribution of each pixel, and the standard deviation of the distribution was used to measure sharpness [25]. Li et al. combined the sum of squared non-DC moment (SSM) values of a gradient map, computed on the basis of Tchebichef moments, with block variances and visual saliency to measure blurriness [26]. Edge-based methods rely excessively on the image content: if an image has only a few sharp edges, then edge-based methods may be inaccurate. Transform-based methods assume that distortion in certain domains can be easily extracted and computed; however, these methods occasionally overlook human visual perception, which plays a key role in assessment. Statistical pixel information-based methods are not robust because they are sensitive to noise.

Samples should be preprocessed before SEM imaging. This procedure leads to the difference between micrographs and natural images. Samples are typically ores, which are polished and placed on conductive tapes. Therefore, the micrograph content evidently has edge areas. Figure 1 shows that the final micrographs have many textures. The micrographs used in the database have more strong edges and textures than natural images. Motivated by the entropy masking property of human visual perception [27], images can be decomposed into cartoon and textured components. The cartoon components of images contain strong edges and flat areas, whereas the textured components contain middle- and high-frequency information, including noise and textures. In [28], Attneave indicated that image information is concentrated on contours. Certain areas and objects are described simply by HVS on the basis of the Gestalt perception psychology [29]. Moreover, HVS is more sensitive to changes in cartoon components than to changes in textured components.

Figure 1: Several samples in the micrograph database.

The experiments in the succeeding sections also prove that the cartoon components exhibit better blurriness features than the original micrographs. This study proposes a new NR assessment method that initially decomposes SEM micrographs into cartoon and textured components. The blurriness of the cartoon components is then assessed. The assessment method based on the combination of spatial-spectral information and spatial map edges is adopted to separately calculate two different metrics and to obtain the final metric via the weighted summation of the two metrics after normalization. The experiments demonstrate the good performance of the proposed method.

2. Cartoon+Textured Components with Isotropic Nonlinear Filters

Before assessing the quality of the SEM micrographs, the micrographs are initially decomposed into cartoon and textured components. An original SEM micrograph is denoted as f, which can be decomposed into the cartoon component u and the textured component v. The decomposition process is defined as f = u + v. The general variational framework for the decomposition model is provided in Meyer’s models [30] as an energy minimisation problem:

inf { E(u, v) = J(u) + λ‖v‖_G : f = u + v },

where u and v are functions, and BV and G are spaces of functions or distributions with u ∈ BV and v ∈ G; the infimum is taken over all pairs (u, v) such that f = u + v. λ is a tuning parameter. The cartoon component u contains the strong edges and low-frequency information of a micrograph, which can be described by the total variation term J(u). By contrast, the textured component v contains the noise and texture of a micrograph, which can be described by the norm ‖v‖_G.

A fast and approximate solution for the general variational problem was proposed in [31, 32] by applying a nonlinear low-pass/high-pass filter pair. For each point x of a micrograph, when the micrograph is filtered with a low-pass filter, if x lies in a cartoon region, the total variation (TV) around x does not decrease much. By contrast, if x lies in a textured region, then the TV decreases rapidly. A nonlinear filter was proposed in [31, 32] on the basis of this characteristic. The local total variation (LTV) at x is defined as follows:

LTV_σ(f)(x) = (L_σ ∗ |∇f|)(x),

where L_σ is a low-pass filter with a Gaussian kernel and a standard deviation of σ, |∇f| is the gradient magnitude of f, x ∈ Ω, Ω denotes the entire micrograph region, and ∗ is a convolution symbol.

λ(x) is the relative reduction rate of the LTV, which is defined as follows:

λ(x) = [LTV_σ(f)(x) − LTV_σ(L_σ ∗ f)(x)] / LTV_σ(f)(x).

This formula denotes the decrease rate of f’s LTV when f is filtered using the low-pass filter L_σ. If λ(x) is close to 0, then the LTV is slightly reduced and pixel x belongs to the cartoon region. If λ(x) is close to 1, then the reduction is large and x belongs to the textured region. The proposed fast nonlinear low-pass and high-pass filter pair is defined as follows:

u(x) = w(λ(x)) · (L_σ ∗ f)(x) + [1 − w(λ(x))] · f(x),  v(x) = f(x) − u(x),

where w(·) is the soft threshold function. The function is defined as follows:

w(λ) = 0 if λ ≤ a1; w(λ) = (λ − a1)/(a2 − a1) if a1 < λ < a2; w(λ) = 1 if λ ≥ a2,

and its parameters a1 and a2 are fixed in the experiments. The specific numbers are a1 = 0.25 and a2 = 0.5 [32].
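The nonlinear filter pair above can be sketched as follows. This is a minimal NumPy/SciPy illustration of the idea (Gaussian low-pass, LTV reduction rate, soft threshold), not the authors' exact implementation; the parameter defaults follow the values quoted in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def soft_threshold(lam, a1=0.25, a2=0.5):
    """Piecewise-linear weight: 0 below a1, 1 above a2, linear in between."""
    return np.clip((lam - a1) / (a2 - a1), 0.0, 1.0)

def cartoon_texture(f, sigma=3.0):
    """Approximate cartoon + texture split with the nonlinear
    low-pass/high-pass filter pair (a sketch, not the exact method)."""
    f = np.asarray(f, dtype=float)

    def ltv(img):
        # local total variation: Gaussian-smoothed gradient magnitude
        gy, gx = np.gradient(img)
        return gaussian_filter(np.hypot(gx, gy), sigma)

    lf = gaussian_filter(f, sigma)               # low-pass filtered micrograph
    lam = (ltv(f) - ltv(lf)) / (ltv(f) + 1e-8)   # relative LTV reduction rate
    w = soft_threshold(lam)
    u = w * lf + (1.0 - w) * f                   # cartoon component
    v = f - u                                    # textured component
    return u, v
```

Constant regions give λ near 0 and stay in the cartoon part, while oscillating regions give λ near 1 and end up in the textured part.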

The attenuation of high-frequency content is caused by the blurriness of an image. Therefore, the more blurred an image is, the more similar its cartoon component is to the original image. Figure 2 compares the cartoon and textured decompositions of micrographs with different blurriness extents but the same content. The parameter σ of the nonlinear filter is 3. Compared with that of the sharp micrograph, the cartoon component of the blurred micrograph looks more similar to the original micrograph because more of the pixels’ λ values in the blurred micrograph are close to 0. Thus, more regions of the micrograph belong to the cartoon component. Therefore, the textured part of the blurred micrograph is less evident than that of the sharp micrograph.

Figure 2: Sharp and blurred micrographs and their cartoon and textural parts.

Figure 3 presents the decomposition results for different values of the parameter σ. Compared with a smaller σ, more regions of the original micrograph belong to the textured component when σ is larger. Although more noise and texture are separated, the edges of the cartoon component exhibit a stronger zigzag effect. If σ is small, then the cartoon component still contains texture. If σ is large, then some edges are regarded as texture, and a zigzag effect also occurs on the cartoon component.

Figure 3: Cartoon and textural parts under different decomposition parameters.

In this study, σ = 3. The cartoon component is a simplified description of the original micrograph, and it contains the strong edges that human visual perception notices more easily when blurriness occurs. As previously mentioned, human visual perception is sensitive to distortion at edges. The cartoon component retains the original edges, simplifies the original micrograph, and does not lose distortion information. Although the textured component is also affected by blurriness, human visual perception is less sensitive to distortion in this part because of perceptual redundancy. Furthermore, noise and repetitive texture reduce the performance of assessment methods, as proven in the following sections. The experiments also prove that assessing the quality of the cartoon component works better than assessing that of the original micrograph.

3. NR IQA Method for Blurriness

Distortion at the edges should be given attention in accordance with the characteristics of human visual perception; thus, this method is primarily edge-based. Apart from strong edges, the cartoon component also contains other frequency information that is utilised to measure the attenuation of high-frequency information caused by blurriness. Attenuation is measured using a transform-based approach. The final metric is a weighted summation of the edge- and transform-based metrics.

A well-known property called the 1/f law exists in the spectral domain. This property states that the amplitude spectrum of a natural image is an approximately straight line on a log-log scale [33, 34]. If blurriness appears in an image, then the absolute value of the line’s slope increases, particularly in the high-frequency content [35]. For original images, if the tails of the curves overlap (red mark in Figure 4), then the accuracy of quality assessment decreases. The curves of the cartoon components retain the trend caused by blurriness and avoid overlapping tails (blue mark in Figure 4). The blue solid line in Figure 4 is the log-log spectrum curve of the image in Figure 2(a), the yellow line belongs to the image in Figure 2(b), the brown line to the image in Figure 2(c), and the purple curve to the image in Figure 2(d). The green dashed line is the straight line fitted to the brown curve at high frequencies, whereas the red dashed line is the straight line fitted to the purple curve at high frequencies.

Figure 4: Log-log spectrum of Figure 2.

The spectral map generated using the method in the spectral and spatial sharpness (S3) algorithm [22] intuitively proves that the cartoon component plays a more important role than its original micrograph in quality assessment. The spectral map is defined as S1 and is calculated using the previously mentioned slope. The absolute value of the slope is defined as α. To obtain α, the algorithm initially calculates the 2D discrete Fourier transform y[u, v] of micrograph x. In the polar form y[f, θ], the radial frequency f and the orientation θ are computed using the following:

f = √(u² + v²) and θ = arctan(v/u),

where u and v are the discrete Fourier indices. z(f) is the summed magnitude spectrum, as given by

z(f) = Σ_θ |y[f, θ]|.

The algorithm finds a line that best fits the magnitude spectrum on a log-log scale. α is the absolute value of the slope of the line and is calculated by

α = argmin_a Σ_f [ln z(f) − ln(β f^(−a))]²,

where the sum is taken over all radial frequencies f and β is the fitted intercept. Finally, the spectral map S1 is defined by

S1(x) = 1 − 1/(1 + e^(τ1(α − τ2))),

where τ1 = −3 and τ2 = 2. For additional details, refer to [22]. Figure 5 shows that the spectral map of the cartoon component contains no noise and efficiently distinguishes high- from low-frequency content. By contrast, the spectral map of the original micrograph misconstrues some low-frequency content as high-frequency content (marked by the blue ellipses). The blurriness of the high-frequency content still exists (marked by the red ellipses), thereby indicating that the blurriness in the cartoon component conforms well to HVS.
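The log-log slope fit behind S1 can be sketched as follows. This is an illustrative NumPy-only computation of the slope magnitude α for a single block (the summed magnitude spectrum z(f) and a least-squares line fit in log-log coordinates), not the S3 reference code.

```python
import numpy as np

def spectral_slope(block):
    """Slope magnitude alpha of the summed magnitude spectrum z(f) on a
    log-log scale; a larger alpha means faster high-frequency fall-off,
    i.e. a blurrier block (a sketch of the idea, not the S3 code)."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(np.asarray(block, float))))
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.indices(mag.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)
    # z(f): magnitude summed over all orientations at each radial frequency
    z = np.bincount(r.ravel(), weights=mag.ravel())
    f = np.arange(1, min(cy, cx))            # skip DC, stay inside Nyquist
    slope, _ = np.polyfit(np.log(f), np.log(z[f] + 1e-12), 1)
    return -slope                            # alpha = |slope| for a 1/f decay
```

A sharp, noise-like block yields a nearly flat spectrum (small α), while low-pass filtering the same block steepens the fall-off and increases α.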

Figure 5: Spectral maps of micrographs and their cartoon parts generated by S3.

The cartoon component extracts blurriness features well in the spectral and spatial domains. The spatial maps in spectral and spatial sharpness (S3) [22] and in the MLV [25] are generated from the local variation of an image. In S3, the spatial map S2 is generated on the basis of the TV. The TV of a block ξ of micrograph x is defined as

TV(ξ) = (1/255) Σ |x_i − x_j|,

where x_i and x_j are eight-connected neighbouring pixels in ξ. S2 is computed using

S2(x) = (1/4) max_{ξ ∈ β} TV(ξ),

where ξ is a 2 × 2 block of a larger block β. The final map is defined as follows:

S3(x) = S1(x)^γ × S2(x)^(1−γ),

where γ was set as 0.5 in [22]. To reduce the effect of noise, the average sharpness is computed over the highest 1% of the values of S3. Additional details are provided in [22]. The MLV also generates its own map. The MLV is defined as follows:

ψ(x_{i,j}) = max_{p,q} |x_{i,j} − x_{p,q}|,

where the x_{p,q} are the eight neighbours of x_{i,j}. Given a micrograph of size M × N, the MLV is calculated for each pixel at location (i, j) using this formula. The final MLV map is generated via

Ψ(x) = [ψ(x_{i,j})], i = 1, …, M, j = 1, …, N.
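The MLV map defined above translates directly into code. This is a straightforward NumPy sketch of the per-pixel maximum difference over the eight neighbours, with replicated borders as an assumption for edge handling.

```python
import numpy as np

def mlv_map(img):
    """Maximum local variation: for every pixel, the largest absolute
    difference between it and its eight neighbours (a direct sketch of
    the MLV definition; border pixels use replicated values)."""
    f = np.asarray(img, dtype=float)
    h, w = f.shape
    p = np.pad(f, 1, mode='edge')            # replicate borders
    out = np.zeros_like(f)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nb = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            out = np.maximum(out, np.abs(f - nb))
    return out
```

On a step edge, the map peaks at the step height along the edge and is zero in flat regions, which is why strong edges stand out in Figure 7.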

Additional details are available in [25]. Given the separation of texture, the spatial map of the cartoon components focuses on edges. Figures 6 and 7 provide the intuitive details. In Figure 6, the region marked by the blue ellipse in the blurred micrograph shows that the textures and edges are mixed and measuring distortions at the edges is difficult. In its cartoon component, we observe that more pixels are considered edges. The red-marked region indicates that the cartoon component still contains distortions caused by blurriness. The same condition appears in Figure 7. The yellow-marked region confirms that the MLV can extract blur distortions better than the sum of the local variations because of the clear edges of the MLV map.

Figure 6: Spatial maps of micrographs and their cartoon parts generated by S3.
Figure 7: Spatial maps of micrographs and their cartoon parts generated by the MLV.

Blurriness is known to spread edges. Figure 8 shows the edges of the cartoon component’s MLV spatial map. From the details of the micrographs in Figures 7(a) and 7(c), we observe that the edge detection of the blurry MLV map yields more edge pixels than that of the sharp one and that its edge widths are wider.

Figure 8: Edge detections of the MLV maps and their zoom-in.

In [25], the textured component and the edges exhibit high MLV, thereby indicating that high variations in pixel intensities are better indicators of sharpness than low variations. The cartoon component has edges and blank content, but blurriness does not change the blank content. Thus, we do not utilise the statistics of the MLV distribution as [25] did. In this research, we detect the edges of the MLV map and calculate the sparsity of edge pixels as a blurriness metric. We define the sparsity of edge pixels as the average distance between pixels at edges. For a pixel that corresponds to an edge location, the start and end positions of the edge are defined as the locations of the local luminance extrema closest to the edge. The edge width is defined as the length between the start and end positions [18]. The final blur metric, sparsity, is computed as the average distance between edge pixels over the detected edge map.

Blurriness causes the spread of edges. It also produces more edge pixels during edge detection. The experiments indicate that the sparsity of edge pixels is lower when the micrograph is more blurred.

On the basis of the preceding analysis, this study proposes a new assessment method. The flowchart of this method is presented in Figure 9. An original micrograph is initially decomposed into cartoon and textured components. Then, the spectral and spatial features are combined with the sparsity of edge pixels, and the final score is obtained via weighted summation. The method extracts spectral and spatial features using the S3 algorithm, and the resulting metric is denoted as S_ss. We separately calculate the sparsity of edge pixels in the vertical and horizontal directions; the vertical sparsity is Sp_v, whereas the horizontal sparsity is Sp_h. The final sparsity Sp is obtained by combining Sp_v and Sp_h, and the final score is obtained using

Score = ω · S_ss + (1 − ω) · Sp,

where ω is a weighting coefficient.
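The final combination step can be sketched in a few lines. Both the symmetric averaging of the vertical and horizontal sparsities and the default weight of 0.3 (taken from the parameter analysis in Section 4) are assumptions of this sketch; the inputs are expected to be normalised to [0, 1] beforehand.

```python
def final_score(spec_spatial, sparsity_v, sparsity_h, w=0.3):
    """Weighted combination of the two normalised metrics. The average of
    the two directional sparsities and w = 0.3 are assumptions of this
    sketch, not the paper's exact formulation."""
    sparsity = (sparsity_v + sparsity_h) / 2.0
    return w * spec_spatial + (1.0 - w) * sparsity
```

With w = 0.3, the edge-sparsity term dominates, reflecting the observation that HVS is more sensitive to blur distortion at edges.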

Figure 9: Flow chart of the proposed method.

4. Analysis and Discussion of Experiment Results

4.1. SEM Micrographs and Their Quality Assessment Results

The SEM micrographs used in this research were taken at the Modern Analysis and Computing Centre of China University of Mining and Technology. We selected 50 samples. For every sample, we obtained three micrographs with different blurriness extents by artificially adjusting the SEM focus parameter. After the 150 micrographs were obtained, 30 SEM users without image processing experience participated in the subjective experiment, so every micrograph received 30 scores. To reduce the error of the experimental results, we screened the 30 scores with a confidence interval and eliminated the 5 scores that fell outside it. The final mean opinion score (MOS) was the average of the remaining 25 scores. Apart from the two samples presented in Figures 1 and 2, three other samples are illustrated in Figure 10.

Figure 10: Three samples and their micrographs with different blurriness extent.

The blurriness extent increases from the first column to the last column. In Table 1, the higher the blurriness extent, the more blurred the micrograph is. The subjective and objective assessment scores are also provided in Table 1: the mean opinion score (MOS), the objective score obtained from the combination of spectral and spatial features, the sparsity of edge pixels, and the final objective score. For all of these scores, the lower the value, the more blurred the micrograph is. This matches the analysis in Section 3.

Table 1: Assessment result of the different extent of blurriness.
4.2. Performance Analysis of the Proposed Objective Method

In this study, three performance indexes are adopted to measure the proposed objective method.

(1) Pearson linear correlation coefficient (PLCC):

PLCC = Σ_i (s_i − s̄)(o_i − ō) / √[Σ_i (s_i − s̄)² · Σ_i (o_i − ō)²],

where the s_i are the subjective scores, the o_i are the objective scores, and s̄ and ō are their average scores. PLCC measures how well the objective scores correlate with the subjective scores; a higher PLCC indicates a better correlation.

(2) Root-mean-square error (RMSE):

RMSE = √[(1/N) Σ_i (s_i − o_i)²],

where N is the number of micrographs. RMSE measures the absolute error between the subjective and objective scores; a good algorithm is supposed to have a low RMSE.

(3) Spearman’s rank-ordered correlation coefficient (SROCC):

SROCC = 1 − 6 Σ_i d_i² / [N(N² − 1)],

where d_i = r_i − c_i, and r_i and c_i are the rank positions of s_i and o_i in their respective arrays. SROCC measures the relative monotonicity between the subjective and objective scores; a high SROCC indicates a good algorithm.
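The three indexes can be computed with a short NumPy-only helper. This sketch derives SROCC from rank positions and assumes no tied scores (a tie-aware implementation would use average ranks).

```python
import numpy as np

def performance(subj, obj):
    """PLCC, RMSE and SROCC between subjective and objective scores
    (numpy-only sketch; SROCC is the Pearson correlation of the rank
    positions, assuming no tied scores)."""
    s = np.asarray(subj, dtype=float)
    o = np.asarray(obj, dtype=float)
    plcc = float(np.corrcoef(s, o)[0, 1])
    rmse = float(np.sqrt(np.mean((s - o) ** 2)))
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    srocc = float(np.corrcoef(rank(s), rank(o))[0, 1])
    return plcc, rmse, srocc
```

Note that any strictly increasing mapping of the objective scores leaves SROCC at 1 while PLCC and RMSE change, which is why all three indexes are reported together.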

Figure 11 presents eight pairs of comparisons between the original micrographs and their cartoon components. We obtain the three performance indexes and the fitted curves of the subjective and objective scores using eight different methods [14, 18, 22, 25, 36–38]. The fitted curves of the cartoon components are evidently better than those of the original micrographs, and the performance indexes validate this finding. Therefore, when blurriness occurs, the distortion in the cartoon components of the micrographs conforms more closely to the distortion observed by HVS.

Figure 11: Fitted curves by different methods.

The last graph in Figure 11 shows the fitted curve of the subjective and objective scores obtained using the proposed method, with the three performance indexes appended at the top left corner. Figure 12 presents the analysis of the weighting coefficient ω. The values of PLCC, RMSE, and SROCC are the best when ω = 0.3. Therefore, in the proposed method, the weighting coefficient ω is set to 0.3. This also indicates that HVS is more sensitive to blur distortion at the edges. Table 2 provides a summary of the performance indexes generated using the different methods.

Figure 12: Performance indexes with different .
Table 2: Summary of the performance indexes generated by different methods.

The top two indexes are marked in boldface. We draw two conclusions from Table 2. (1) Cartoon components reflect blurriness characteristics better than original micrographs. (2) The indexes of the proposed method are the best among the compared methods. Thus, the proposed method conforms most closely to HVS perception characteristics.

5. Conclusion

This study proposes a new method for evaluating the blurriness of SEM micrographs. According to the Gestalt perception psychology and the entropy masking property, HVS is more sensitive to the distortion of cartoon components than to that of redundant textured components. The method initially decomposes original micrographs into cartoon and textured components. Then, blurriness features are extracted from the cartoon components. When assessing the quality of the cartoon components, the method combines the micrographs’ spectral-spatial features with the sparsity of edge pixels of the MLV spatial map. Finally, we obtain the final quality scores via the weighted summation of the two metrics. The experiments demonstrate that the proposed method matches human visual perception more closely than other state-of-the-art methods when assessing the quality of SEM micrographs.

Data Availability

The data used to support the findings of this study, including the database of blurred micrographs, the mean opinion scores (MOS), and the objective scores, are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant Nos. 51604271 and 61771474), the Natural Science Foundation of Jiangsu Province (Grant No. BK20170273), and the Fundamental Research Funds for the Central Universities (Grant No. 2015XKMS100).

References

  1. S. Koho, E. Fazeli, J. E. Eriksson, and P. E. Hänninen, “Image quality ranking method for microscopy,” Scientific Reports, vol. 6, no. 1, article 28962, 2016.
  2. M. D. Zotta, Y. Han, M. D. Bergkoetter, and E. Lifshin, “An evaluation of image quality metrics for scanning electron microscopy,” Microscopy and Microanalysis, vol. 22, no. S3, pp. 572–573, 2016.
  3. M. Zeder, E. Kohler, and J. Pernthaler, “Automated quality assessment of autonomously acquired microscopic images of fluorescently stained bacteria,” Cytometry Part A, vol. 77, no. 1, pp. 76–85, 2010.
  4. L. Firestone, K. Cook, K. Culp, N. Talsania, and K. Preston, “Comparison of autofocus methods for automated microscopy,” Cytometry, vol. 12, no. 3, pp. 195–206, 1991.
  5. J. F. Brenner, B. S. Dew, J. B. Horton, T. King, P. W. Neurath, and W. D. Selles, “An automated microscope for cytologic research: a preliminary evaluation,” Journal of Histochemistry & Cytochemistry, vol. 24, no. 1, pp. 100–111, 1976.
  6. S. L. Ellenberger, Influence of Defocus on Measurements in Microscope Images, M.S. thesis, Delft University of Technology, Delft, 2000.
  7. M. T. Postek and A. E. Vladár, “Image sharpness measurement in scanning electron microscopy—part I,” Scanning, vol. 20, no. 1, 9 pages, 1998.
  8. A. E. Vladár, M. T. Postek, and M. P. Davidson, “Image sharpness measurement in scanning electron microscopy—part II,” Scanning, vol. 20, no. 1, 34 pages, 1998.
  9. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  10. H. R. Sheikh, A. C. Bovik, and G. De Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2117–2128, 2005.
  11. H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430–444, 2006.
  12. Z. Wang and A. C. Bovik, “Reduced- and no-reference image quality assessment,” IEEE Signal Processing Magazine, vol. 28, no. 6, pp. 29–40, 2011.
  13. R. Soundararajan and A. C. Bovik, “RRED indices: reduced reference entropic differencing for image quality assessment,” IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 517–526, 2012.
  14. R. Ferzli and L. J. Karam, “A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB),” IEEE Transactions on Image Processing, vol. 18, no. 4, pp. 717–728, 2009.
  15. P. Ye, J. Kumar, L. Kang, and D. Doermann, “Unsupervised feature learning framework for no-reference image quality assessment,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1098–1105, Providence, RI, USA, June 2012.
  16. M.-J. Chen and A. C. Bovik, “No-reference image blur assessment using multiscale gradient,” EURASIP Journal on Image and Video Processing, vol. 3, 11 pages, 2011.
  17. Z.-M. Wang, “Review of no-reference image quality assessment,” Acta Automatica Sinica, vol. 41, no. 6, pp. 1062–1079, 2015.
  18. P. Marziliano, F. Dufaux, S. Winkler, and T. Ebrahimi, “Perceptual blur and ringing metrics: application to JPEG2000,” Signal Processing: Image Communication, vol. 19, no. 2, pp. 163–172, 2004.
  19. M. A. Saad, A. C. Bovik, and C. Charrier, “Blind image quality assessment: a natural scene statistics approach in the DCT domain,” IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3339–3352, 2012.
  20. R. Ferzli and L. J. Karam, “No-reference objective wavelet based noise immune image sharpness metric,” in IEEE International Conference on Image Processing 2005, pp. 1–405, Genova, Italy, September 2005.
  21. L. Li, D. Wu, J. Wu, H. Li, W. Lin, and A. C. Kot, “Image sharpness assessment by sparse representation,” IEEE Transactions on Multimedia, vol. 18, no. 6, pp. 1085–1097, 2016.
  22. C. T. Vu, T. D. Phan, and D. M. Chandler, “S3: a spectral and spatial measure of local perceived sharpness in natural images,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 934–945, 2012.
  23. L. Li, W. Xia, W. Lin, Y. Fang, and S. Wang, “No-reference and robust image sharpness evaluation based on multi-scale spatial and spectral features,” IEEE Transactions on Multimedia, vol. 19, no. 5, pp. 1030–1040, 2017.
  24. E. Tsomko and H. J. Kim, “Efficient method of detecting globally blurry or sharp images,” in 2008 Ninth International Workshop on Image Analysis for Multimedia Interactive Services, pp. 171–174, Klagenfurt, Austria, May 2008.
  25. K. Bahrami and A. C. Kot, “A fast approach for no-reference image sharpness assessment based on maximum local variation,” IEEE Signal Processing Letters, vol. 21, no. 6, pp. 751–755, 2014.
  26. L. Li, W. Lin, X. Wang, G. Yang, K. Bahrami, and A. C. Kot, “No-reference image blur assessment based on discrete orthogonal moments,” IEEE Transactions on Cybernetics, vol. 46, no. 1, pp. 39–50, 2016.
  27. A. B. Watson, R. Borthwick, and M. Taylor, “Image quality and entropy masking,” in Proceedings Volume 3016, Human Vision and Electronic Imaging II, pp. 2–12, San Jose, CA, USA, June 1997.
  28. F. Attneave, “Some informational aspects of visual perception,” Psychological Review, vol. 61, no. 3, pp. 183–193, 1954.
  29. K. Koffka, Principles of Gestalt Psychology, Routledge, 2013.
  30. Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations: The Fifteenth Dean Jacqueline B. Lewis Memorial Lectures, American Mathematical Society, 2001.
  31. A. Buades, T. M. Le, J. M. Morel, and L. A. Vese, “Fast cartoon + texture image filters,” IEEE Transactions on Image Processing, vol. 19, no. 8, pp. 1978–1986, 2010.
  32. A. Buades and J. L. Lisani, “Directional filters for cartoon + texture image decomposition,” Image Processing On Line, vol. 5, pp. 75–88, 2016.
  33. D. L. Ruderman, “The statistics of natural images,” Network: Computation in Neural Systems, vol. 5, no. 4, pp. 517–548, 1994.
  34. A. Srivastava, A. B. Lee, E. P. Simoncelli, and S. C. Zhu, “On advances in statistical modeling of natural images,” Journal of Mathematical Imaging and Vision, vol. 18, no. 1, pp. 17–33, 2003.
  35. D. J. Field and N. Brady, “Visual sensitivity, blur and the sources of variability in the amplitude spectra of natural scenes,” Vision Research, vol. 37, no. 23, pp. 3367–3383, 1997.
  36. N. D. Narvekar and L. J. Karam, “A no-reference image blur metric based on the cumulative probability of blur detection (CPBD),” IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2678–2683, 2011.
  37. R. Hassen, Z. Wang, and M. M. A. Salama, “Image sharpness assessment based on local phase coherence,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2798–2810, 2013.
  38. P. V. Vu and D. M. Chandler, “A fast wavelet-based algorithm for global and local image sharpness estimation,” IEEE Signal Processing Letters, vol. 19, no. 7, pp. 423–426, 2012.