Journal of Sensors
Volume 2016, Article ID 5803095, 16 pages
http://dx.doi.org/10.1155/2016/5803095
Research Article

Thumbnail with Integrated Blur Based on Edge Width Analysis

Boon Tatt Koik and Haidi Ibrahim

School of Electrical and Electronic Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Penang, Malaysia

Received 11 September 2015; Revised 28 January 2016; Accepted 31 January 2016

Academic Editor: Stephane Evoy

Copyright © 2016 Boon Tatt Koik and Haidi Ibrahim. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Thumbnail images are widely used on electronic devices to help users scan through original high-resolution images. Hence, it is essential that a thumbnail represents its original image faithfully. A blurred image should not appear sharp in thumbnail form, as this situation might mislead the perceptual analysis of the user. The main purpose of this research work is to develop a downsampling algorithm that creates a thumbnail image which includes blur information. The proposed method involves three stages: preliminary processes, blur detection, and, lastly, image downsampling. In the preliminary processes, the Sobel first-order derivatives, the gradient magnitude, and the gradient orientation are determined. In the blur detection stage, local maxima, local minima, and the gradient orientation are utilized to calculate the edge width. The thumbnail image with blur information is generated using the averaged edge width map as a weighting to integrate the blur information. The proposed method achieves satisfying results and has high potential to be applied as one of the thumbnail generation options for photo viewing.

1. Introduction

Generally, it is important that an image is clear and sharp. However, there are also occasions where blur is introduced purposely to make the captured image more meaningful, such as motion blur in a photo of a runner about to cross the finish line (i.e., motion blur on the body but a sharp facial expression). Be that as it may, blurred images are unwanted in most situations, and many researchers have produced significant work in this branch. This paper emphasizes blur analysis and thumbnail generation based on blur analysis.

A few fundamental thumbnail generation methods are available, such as Direct Pixel-based Downsampling (DPD) [1], Direct Sub-pixel-based Downsampling (DSD) [2], and Pixel-based Downsampling with Antialiasing Filter (PDAF) [3]. DPD generates a thumbnail image without any filtering: one corresponding pixel of the original image is taken to represent each single pixel of the thumbnail image [4]. DPD generally produces a thumbnail that appears sharper than the original image. DSD is an extension of the DPD downsampling method. It uses a similar concept to DPD, but instead of using information from just one pixel, each thumbnail pixel produced by DSD uses three pixel values from the original image [1]. These three pixels correspond to the red (R), green (G), and blue (B) color channels; therefore, DSD is only applicable to color images [1]. PDAF utilizes a filtering step, namely a low-pass filter, which band-limits the frequency components of the initial high-resolution original image; the resulting thumbnail is therefore unlikely to retain noise. As noise normally occupies the high-frequency components of an image, the thumbnail produced by the PDAF method appears smoother and cleaner [2].
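As a rough illustration (not the cited implementations), the following NumPy sketch contrasts DPD with PDAF, using a simple d × d box average as the antialiasing low-pass filter; the actual filter in [3] may differ.

```python
# Minimal sketch of two baseline thumbnail schemes for a grayscale image.
import numpy as np

def dpd(image: np.ndarray, d: int) -> np.ndarray:
    """Direct Pixel-based Downsampling: keep one pixel per d x d block."""
    return image[::d, ::d]

def pdaf(image: np.ndarray, d: int) -> np.ndarray:
    """Downsampling with a (here, box-average) antialiasing filter."""
    h, w = image.shape
    blocks = image[: h - h % d, : w - w % d].reshape(h // d, d, w // d, d)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
big = rng.random((400, 600))
print(dpd(big, 10).shape, pdaf(big, 10).shape)  # (40, 60) (40, 60)
```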

Blur analysis is the process in which an image is analyzed, with quantitative parameters, in terms of the blurriness of its edges or of certain regions. An example of blur analysis is edge width analysis, where the degree of blur is assumed to be proportional to the edge width [5]. Edge width data is mapped over the original image and can be used for further processing, such as deblurring and resampling of the image. Other examples of blur analysis include Bayes discriminant analysis [6], which studies the gradient statistics of blurred objects in an image, and research on calculating the local blur at boundaries and averaging the blur magnitude of an image using no-reference block-based analysis [7].

The works of other researchers have enriched the blur analysis research area. Samadani et al. [8, 9] proposed embedding blur and noise information into thumbnail images, which we refer to as the downsampled image with integrated blur (DIB). This is because, in a normal thumbnail image, the blur and noise information is lost due to filtering and subsampling. Their work consists of two stages: a blur detection stage and a noise detection stage. In the work of Trentacoste et al. [10], a perceptual model for downsampling an image is built to create a downsampled version that gives the same impression as the original. A study was conducted to find out how much blur must be present in the downsampled image for it to be perceived the same as the original. Trentacoste et al. used a modified version of Samadani's work [8, 9] to conduct their experiment; a new appearance-preserving algorithm was incorporated to alter the blur magnitude locally and create a smaller image corresponding to the original, with the blur magnitude analyzed as a function of spatial frequency. In another downsampling method, thumbnail with blur and noise information (TBNI) [4], two temporary thumbnail images are combined to generate a thumbnail that retains the original image's blurriness and noise. This proposed thumbnail scheme is actually an extension of the DPD method, with the DPD thumbnail used as the base thumbnail. The selection of the information to be embedded into the final thumbnail is based on a blur extent parameter, which controls the sensitivity of the algorithm towards noise and blur.

In this paper, a new downsampling algorithm based on edge width analysis is proposed, which embeds the blur information of the original high-resolution image into the thumbnail image. The methodology is described in the next section.

2. Preliminary Concept of Thumbnail with Integrated Blur Based on Edge Width Analysis

In this proposed method, similar to other image thumbnail algorithms, the user needs to input the downsampling factor d. If the original input image F is of size M × N pixels, its thumbnail version f is of size (M/d) × (N/d) pixels. Generally, the originally acquired high-resolution image F can be described by the following equation [11, 12]:

F(x, y) = B(x, y) ∗ C(x, y),    (1)

where B is the space-varying blur and C is the ideal clean and sharp image. The symbol ∗ in this equation represents a 2-dimensional convolution operation.

To clarify (1), an example is given in Figure 1. Image F, shown in Figure 1(c), is obtained by convolving image C (i.e., Figure 1(a)) with the blur function B (i.e., Figure 1(b)). In this example, B is a space-varying blur function, described by the 2-dimensional Gaussian function [13]:

B(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)),    (2)

where σ is the standard deviation of the distribution. Assuming that the Gaussian filter applied is a square filter with odd size, defined as (2r + 1) × (2r + 1), σ can be written as [14]

σ = r / √(−2 ln ε),    (3)

where, in this example, ε is set to the value 0.0001. Variable r in this formula stands for the radius of the blur kernel. Therefore, the Gaussian filter in (2) can be expressed in terms of the filter's size.
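To make the example concrete, the sketch below synthesizes a space-varying Gaussian blur whose kernel radius grows from the image center towards the border, using the truncation threshold 0.0001 from the text; blurring at several fixed sigmas and selecting per pixel is an implementation shortcut, not the paper's procedure.

```python
# Sketch of F = B * C with a space-varying Gaussian blur (sigma grows
# toward the border), following the sigma(r) relation assumed in (3).
import numpy as np
from scipy.ndimage import gaussian_filter

def space_varying_blur(clean: np.ndarray, max_radius: int = 7,
                       eps: float = 1e-4) -> np.ndarray:
    h, w = clean.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    dist /= dist.max()                               # 0 at center, 1 at corner
    radius = np.rint(dist * max_radius).astype(int)  # kernel radius r(x, y)
    # sigma = r / sqrt(-2 ln eps): the Gaussian falls to eps at radius r.
    sigmas = [r / np.sqrt(-2.0 * np.log(eps)) for r in range(max_radius + 1)]
    stack = np.stack([clean if s == 0 else gaussian_filter(clean, s)
                      for s in sigmas])
    return np.take_along_axis(stack, radius[None], axis=0)[0]
```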

Figure 1: (a) An ideal clean image C. (b) The blur B as a function of the radius of the blur kernel. (c) A blurred image F according to (1). (d) Image profiles of C and F along the x-axis.

Therefore, the blur function B in Figure 1(b) is defined as a function of the kernel radius r. As shown by this figure, the size of the blur filter kernel applied to the pixels located at the center of the image is small, and the filter's size gradually increases towards the border of the image. A blur filter of larger size blurs the image more than a filter of smaller size. As a consequence, in Figure 1(c), the image appears relatively sharper in the center region (i.e., where blur filters of small size are applied) than in the regions near the border (i.e., where blur filters of larger size are applied).

By inspecting the image profiles shown in Figure 1(d), the effect of the blur filter can be observed. When comparing the profile from the sharp image C with the blurred image F, it is shown that blur filter changes the step edges in C to become ramp edges in F. This figure also shows that the slope of the edges is becoming more gradual when the size of the blur filter applied is bigger (i.e., when the blur effect becomes more serious).

Therefore, the degree of blur can be observed by inspecting the edges of the image: sharp regions have narrow edge widths, while blurred regions have wider edge widths. This is the main approach used by the proposed method to detect blurred regions.

Figure 2 shows the general view of this proposed method. As shown by this figure, the proposed method has three main blocks: (1) preliminary processes, (2) blur detection, and (3) image downsampling. The proposed method takes the information from the blur detection stage and uses it in its image downsampling stage, so that more blur information from F can be embedded into f.

Figure 2: Overview of the process involved in this proposed method.
2.1. Preliminary Processes

As mentioned in the previous section, the degree of blur is determined by inspecting the edges of the image. It is well known that edges can be enhanced by calculating their gradient values. Therefore, the main purpose of this preliminary stage is to emphasize the edges of F. As shown in Figure 3, the input to this stage is the original image F, while the outputs from this stage are the gradient magnitude G and the two gradient orientations θ and θ₂.

Figure 3: The block diagram for the preliminary processes.
2.1.1. Determination of Gradient Components

At this stage, the gradient values (i.e., Gx and Gy) of the input image F at every spatial coordinate (x, y) within the matrix of size M × N pixels are calculated. These gradient values are obtained by applying 2-dimensional directional derivatives as given in the following equation:

Gx(x, y) = ∂F(x, y)/∂x,  Gy(x, y) = ∂F(x, y)/∂y,    (4)

where Gx is the first-order derivative in the x-direction (i.e., horizontal direction) and Gy is the first-order derivative in the y-direction (i.e., vertical direction).

The Sobel filter [15] is used to approximate the gradient values at each point in F. Sobel operators are chosen because they have lower sensitivity to image noise compared with the Roberts cross filter or the Laplacian filter [16]. This condition brings an advantage to the proposed algorithm, because Sobel operators enable the method to focus on the blur issue, which is the main concern of this work.

Calculation of Gradient Magnitude. The gradient magnitude G combines the information from Gx and Gy using the following equation:

G(x, y) = √(Gx(x, y)² + Gy(x, y)²).    (5)

Figure 4 shows the calculated G. As presented in this figure, G shows the locations of the edges in image F. The gradient magnitude is higher in sharper regions than in blurred regions. This figure also shows that the edges become wider when the blur is more serious.

Figure 4: Gradient magnitude G calculated from the gradient components Gx and Gy.
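A minimal sketch of this preliminary stage with SciPy, assuming a grayscale image held in a NumPy array:

```python
# Preliminary stage sketch: Sobel derivatives and gradient magnitude.
import numpy as np
from scipy.ndimage import sobel

def gradient_components(F: np.ndarray):
    Gx = sobel(F, axis=1)  # first-order derivative, x (horizontal) direction
    Gy = sobel(F, axis=0)  # first-order derivative, y (vertical) direction
    G = np.hypot(Gx, Gy)   # gradient magnitude, as in (5)
    return Gx, Gy, G
```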
2.1.2. Calculation of Gradient Orientations

Gradient orientation, or edge direction, indicates the direction of the vector normal to the edge point, with respect to the horizontal direction (i.e., the x-axis). In the implementation of this proposed method, the gradient orientation θ is defined using the function "atan2," which calculates the arctangent over all four quadrants, as shown in Figure 5.

Figure 5: Gradient direction returned by the “atan2” function.

This "atan2" function calculates the arctangent value by taking two arguments, Gy and Gx:

θ(x, y) = atan2(Gy(x, y), Gx(x, y)).    (6)

The "atan2" function returns the value of the angle in radians, in the range between −π and π; that is, −π < θ ≤ π.

In the implementation of this proposed method, the edge orientation defined by θ is considered bidirectional. This characteristic is important for the width measurement of the blurry edges used in the blur detection stage. It means that the direction defined by θ is taken to be the same as the angle defined in the opposite direction (i.e., θ + π or θ − π). This is shown by the example given in Figure 6.

Figure 6: Gradient orientation (in unit degrees) and its equivalent directions.

In order to fulfil the abovementioned requirement, in the implementation of this proposed method, an additional edge orientation θ₂ is defined using the following equation:

θ₂(x, y) = θ(x, y) + π.    (7)

An example of the gradient orientation is shown in Figure 7. In Figure 7(a), because θ is defined between −π and π radians, this gradient orientation has both positive and negative values. On the other hand, in Figure 7(b), the directions are turned against the original direction, and θ₂ is defined between 0 and 2π radians. Therefore, as shown by this subfigure, all values are positive.

Figure 7: Gradient orientation (in radians) obtained from the gradient components. (a) The original gradient orientation θ. (b) The modified gradient orientation θ₂.
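Under the reconstruction of (6) and (7) above (taking the turned orientation as θ + π, which is one plausible reading of the text), the two orientations can be computed as follows:

```python
# Sketch: the original orientation theta in (-pi, pi] and the "turned"
# orientation theta2 = theta + pi in (0, 2*pi], which is non-negative.
import numpy as np

def gradient_orientations(Gx: np.ndarray, Gy: np.ndarray):
    theta = np.arctan2(Gy, Gx)  # four-quadrant arctangent
    theta2 = theta + np.pi      # opposite direction, all values positive
    return theta, theta2
```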
2.2. Blur Detection

In this blur detection stage, the proposed method takes G, θ, and θ₂ from the preliminary processing stage as its input. The output from this stage is an edge width map W, which carries the information regarding the width of the edges in the image. A general block diagram for this blur detection stage is shown in Figure 8. The calculation of the edge width starts on an edge, which is indicated by a local maximum point of the gradient magnitude G. The traversal, which is guided by θ and θ₂, stops in the uniform regions, which are identified from the local minimum points of G. Therefore, in this blur detection stage, the proposed method needs to find the locations of the local maximum points and the local minimum points of G before it can proceed to the edge width calculation process.

Figure 8: The details of block diagram for the blur detection stage.
2.2.1. Local Maximum and Local Minimum

In order to find the local maximum points, definitions for the local maximum are needed; Figure 9 shows the definitions used in this research. Writing G(n) for the gradient magnitude at position n along a line, there are three definitions for the local maximum locations:
(1) at a gradient edge, from a region of constant value moving towards lower gradient values (i.e., G(n − 1) = G(n) > G(n + 1));
(2) at an actual peak, where the current gradient value is greater than the neighboring gradient values (i.e., G(n − 1) < G(n) > G(n + 1));
(3) at an edge from a region of increasing gradient value to a region of constant gradient value (i.e., G(n − 1) < G(n) = G(n + 1)).

Figure 9: The definitions for local maximum locations.

Similarly, definitions for the local minimum points are needed in order to search for the local minimum locations. Figure 10 gives the definitions used in this work. There are three definitions for the local minimum locations, as presented by this figure (a 1-dimensional sketch of both sets of definitions follows Figure 10):
(1) at an edge from a region of decreasing gradient value to a region of constant gradient value (i.e., G(n − 1) > G(n) = G(n + 1));
(2) at a gradient edge, from a region of constant value moving towards higher gradient values (i.e., G(n − 1) = G(n) < G(n + 1));
(3) at an actual valley, where the current gradient value is smaller than the neighboring gradient values (i.e., G(n − 1) > G(n) < G(n + 1)).

Figure 10: The definition for local minimum.
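The following 1-dimensional sketch applies the six conditions above to one row of the gradient magnitude; the plateau conditions are one reading of Figures 9 and 10, not the paper's exact code.

```python
# 1-D sketch of the local-maximum / local-minimum definitions above,
# applied along one row of the gradient magnitude G.
import numpy as np

def local_extrema_1d(g: np.ndarray):
    left, mid, right = g[:-2], g[1:-1], g[2:]
    is_max = (((left == mid) & (mid > right)) |  # constant, then falling
              ((left < mid) & (mid > right)) |   # actual peak
              ((left < mid) & (mid == right)))   # rising, then constant
    is_min = (((left > mid) & (mid == right)) |  # falling, then constant
              ((left == mid) & (mid < right)) |  # constant, then rising
              ((left > mid) & (mid < right)))    # actual valley
    # Border samples have no two-sided neighborhood; mark them False.
    pad = lambda m: np.pad(m, 1, constant_values=False)
    return pad(is_max), pad(is_min)
```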

For the local minimum, some of the locations fail to be detected, which creates "fragmented" regions in the local minimum map. As a consequence, this leads to inaccurate edge width calculation. In order to reduce this problem, the proposed method utilizes a binary mathematical dilation, selected because it can join "fragmented" areas. The binary mathematical dilation is defined as [14]

X ⊕ S = {x + s : x ∈ X, s ∈ S},    (8)

where S is the structuring element and X is a set of points.
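In SciPy terms, (8) corresponds to binary_dilation; the 3 × 3 square structuring element in this sketch is an assumption, as the paper does not state the shape of S.

```python
# Sketch: joining "fragmented" regions of the binary local-minimum map
# with a morphological dilation, X (+) S = {x + s : x in X, s in S}.
import numpy as np
from scipy.ndimage import binary_dilation

def dilate_minima(min_map: np.ndarray) -> np.ndarray:
    S = np.ones((3, 3), dtype=bool)  # assumed structuring element
    return binary_dilation(min_map, structure=S)
```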

2.2.2. Measure the Edge Width

In order to measure the edge width W, an idea based on the work in [5] is implemented at this stage of the proposed method. The idea is depicted in Figure 11. By inspecting the local maximum map of the gradient magnitude, the location of an edge is identified: coordinate (x, y) is defined as the location of an edge when the local maximum map has the value 1, or logical "true." The algorithm takes this location as the starting point for the edge width measurement. From this point, the algorithm traverses in search of a gradient magnitude local minimum, which is a location where the local minimum map has the value 1. First, the method traverses to the "left side" of the edge; the distance between the location of the local maximum and the location of the local minimum is defined as dL. Then, the same traversing process is applied again from the edge (i.e., the starting point) to a local minimum location, but now in the opposite direction, on the "right side" of the edge; the distance found on this side is defined as dR. The width of the blurred edge is the distance between the two local minima, defined using the following equation:

W = dL + dR + 1.    (9)

In this equation, the value 1 is added because the edge point itself is not included in the measurement of dL and dR.

Figure 11: The basic idea used in the calculation of the edge width. The green circle marks the location of the local maximum, whereas the red circles mark the locations of the local minima.
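A 1-dimensional sketch of the measurement in (9), assuming binary maps marking the local maxima and minima; counting dL and dR as the pixels strictly between the edge point and the stopping minimum on each side is one reading of the "+1" convention.

```python
# 1-D sketch of (9): from each local-maximum (edge) position, walk left
# and right until a local minimum is reached; W = dL + dR + 1.
import numpy as np

def edge_width_1d(is_max: np.ndarray, is_min: np.ndarray) -> np.ndarray:
    n = len(is_max)
    W = np.zeros(n)
    for p in np.flatnonzero(is_max):
        d_left, i = 0, p - 1
        while i >= 0 and not is_min[i]:  # traverse the "left side"
            d_left += 1
            i -= 1
        d_right, i = 0, p + 1
        while i < n and not is_min[i]:   # traverse the "right side"
            d_right += 1
            i += 1
        W[p] = d_left + d_right + 1      # +1 counts the edge point itself
    return W
```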

The idea presented in Figure 11 is a simplified concept, as it only shows how the method works for 1-dimensional data. Because an image is 2-dimensional, the edge width calculation process is not as simple as presented in the previous paragraph. For 1-dimensional data, it is easy to define the "left side" and the "right side," as only two neighboring elements need to be considered. For 2-dimensional data, each element has eight neighboring elements. Therefore, for 2-dimensional data, the definitions of "left side" and "right side" are guided by the gradient orientations θ and θ₂.

In order to ease the traversing process, two functions are defined based on the eight neighboring elements shown in Figure 12. These two functions take the angle as their input and return a step of −1, 0, or +1 along the x- and y-directions, respectively, thereby selecting one of the eight neighbors; they are given by (10) and (11), respectively.

Figure 12: (a) Definition of the step function in the x-direction. (b) Definition of the step function in the y-direction.
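As a reconstruction (the paper defines (10) and (11) only through Figure 12), the two helpers can be sketched by rounding the cosine and sine of the angle to the nearest integer, which selects one of the eight neighbors; the exact sector boundaries in the paper may differ.

```python
# Sketch of the traversal helpers: quantize an orientation theta into a
# unit step toward one of the eight neighboring pixels.
import numpy as np

def step_x(theta: float) -> int:
    return int(np.rint(np.cos(theta)))  # -1, 0, or +1 along the x-axis

def step_y(theta: float) -> int:
    return int(np.rint(np.sin(theta)))  # -1, 0, or +1 along the y-axis
```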

Figure 13 shows the flowchart used by this proposed method to determine the edge width W. It is worth noting that this process is executed only when the local maximum map equals 1 (i.e., at an edge element). As shown in this figure, the process of the edge width calculation can be divided into three main stages:
(1) finding the value of dL;
(2) finding the value of dR;
(3) assigning the values to W.
The process of finding the value of dL is almost the same as the process of finding dR, except that the latter gives more priority to the turned orientation θ₂.

Figure 13: Flowchart for the edge width calculation.

As shown by the flowchart in Figure 13, a 2-dimensional array, mask, is used in the process. If the input image F is of size M × N pixels, mask is also of size M × N pixels. This array is used to mark the locations that are involved in the traversing process. Other variables, namely left, right, top, and bottom, define the region of interest (ROI). This ROI is a rectangle identified by two coordinates, (top, left) and (bottom, right). Coordinate (x, y) represents the current location, while coordinate (x′, y′) is the temporary location. In addition to the conditions shown in this flowchart, the traversing process on each "side" is also terminated if the method points to an element located outside the area defined by the image.

Figure 14(a) shows the edge width calculated using the raw (undilated) local minimum map. As shown in this figure, the obtained edge width is not accurate, as the values are significantly higher in the uniform regions than in the regions near the edges. This is because the traversing process during the calculation "leaks" through the discontinuities between the local minimum points.

Figure 14: (a) Edge width calculated from the raw local minimum map. (b) Edge width obtained from the dilated local minimum map.

On the other hand, by improving the local minimum map through the binary morphological dilation operation, a repaired map is obtained. The edge width obtained from it is presented in Figure 14(b). As shown in this figure, the obtained values are more accurate, as they are proportional to the blur kernel function shown in Figure 1(b).

2.3. Image Downsampling

If a downsampling factor d is used, each element in the thumbnail corresponds to d × d elements in the original map:

f(m, n) corresponds to {F(md + i, nd + j) : 0 ≤ i, j ≤ d − 1}.    (12)

In this algorithm, the thumbnail of the edge width map is obtained by an averaging process. Yet, because the downsampled map represents the width of the blur in the thumbnail image, it is worth noting that, as the edge width map W is downsampled using the factor d, the edge widths themselves are also scaled by d. Therefore, the value of the downsampled edge width map w at coordinate (m, n) is defined by the following formula:

w(m, n) = (1 / d³) Σ_{i=0}^{d−1} Σ_{j=0}^{d−1} W(md + i, nd + j).    (13)
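A sketch of (12) and (13): block-average the edge width map over each d × d block and divide by d (the function name and the divisibility handling are mine).

```python
# Downsample the edge width map W by factor d: average each d x d block,
# then divide by d, since edge widths shrink with the image.
import numpy as np

def downsample_edge_width(W: np.ndarray, d: int) -> np.ndarray:
    h, w = W.shape
    blocks = W[: h - h % d, : w - w % d].reshape(h // d, d, w // d, d)
    return blocks.mean(axis=(1, 3)) / d
```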

Embed Blur. In the implementation of this proposed method, the blur is embedded into f by combining image f₁ with image f₂ using a weighted average approach. This process is shown in Figure 15 and can be expressed by the following equation:

f(m, n) = α(m, n) f₁(m, n) + β(m, n) f₂(m, n),    (14)

where α and β are the space-varying weight values, with 0 ≤ α ≤ 1 and 0 ≤ β ≤ 1. In order to make sure that the intensity values in f are in the correct intensity range, the following restriction is applied:

α(m, n) + β(m, n) = 1.    (15)

This restriction implies that when more emphasis is given to f₁, less emphasis is given to f₂, and vice versa.

Figure 15: Block diagram for blur embedding process.

Thumbnail image f gives higher values of α to the regions with blur, while it gives higher values of β to the sharp regions. Therefore, in this proposed method, the weight α is related to the downsampled edge width map w: the higher the value of w, the higher the value given to α, as the blur is expected to be more serious. Thus, α is defined as

α(m, n) = min(w(m, n), 1).    (16)

When w is greater than 1.0, this condition indicates a serious blur, because the blur is no longer at the subpixel level but larger than one pixel. Therefore, α takes only the value of 1 under this condition.
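Combining (14) through (16), a minimal sketch of the embedding step follows; taking f1 as the blurred thumbnail and f2 as the sharp one is my reading of Figure 15, not something the text states explicitly.

```python
# Sketch of the blur embedding: blend two thumbnails with space-varying
# weights derived from the downsampled edge width map w.
import numpy as np

def embed_blur(f1: np.ndarray, f2: np.ndarray, w: np.ndarray) -> np.ndarray:
    alpha = np.minimum(w, 1.0)  # edge width above one pixel => serious blur
    beta = 1.0 - alpha          # alpha + beta = 1 keeps intensities in range
    return alpha * f1 + beta * f2  # f1 assumed blurred, f2 assumed sharp
```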

An example of the final thumbnail image f is shown in Figure 16(a). This thumbnail is obtained from Figure 14(b), with the downsampling factor equal to 8. The differences between image f and images f₁ and f₂ are shown in Figures 16(b) and 16(c), respectively. These differences show that image f is unique.

Figure 16: (a) Thumbnail image f. (b) Array of f − f₁. (c) Array of f − f₂.

3. Results and Discussions

This section presents the evaluation of the experimental results obtained from the proposed thumbnail algorithm. The performance of the algorithm is evaluated both qualitatively and quantitatively. This section is divided into two subsections.

3.1. Qualitative Evaluation Based on Visual Inspection

In this section, in order to evaluate the performance of the proposed method, four other thumbnail image algorithms are implemented and used as benchmarks. The methods included for comparison are the DPD method [1], the DSD method [2], the PDAF method with an averaging filter [3], and the thumbnail with blur and noise information method (TBNI) [4]. Two test images of identical dimensions are used.

The result of downsampling the first test image is shown in Figure 17. Figure 17(a) shows the original high-resolution image. Notice that the lion in this image is blurred, while the tree trunks in front of it are sharp. Figures 17(b)–17(f) show the five thumbnail images, including the thumbnail from the proposed method. The downsampling factor used is set to 10, so the thumbnails are one-tenth of the original dimensions. As shown in Figure 17, the thumbnails from DPD, DSD, and TBNI are sharp but noisy. The thumbnails from PDAF and the proposed method (i.e., TBI) are smoother and represent the original image more accurately.

Figure 17: Comparison of the proposed method with other methods, with a downsampling factor of 10 (first test image).

The result of downsampling the second test image is shown in Figure 18. The original high-resolution image is shown in Figure 18(a); as shown in this figure, the object, a crocodile, is blurred. Figures 18(b)–18(f) show the five thumbnail versions. As in Figure 17, the downsampling factor used is set to 10. As shown in this figure, the TBI method produces the thumbnail that represents the original image most accurately.

Figure 18: Comparison of the proposed method with other methods, with a downsampling factor of 10 (second test image).
3.2. Quantitative Evaluation Based on Survey

In order to evaluate the performance of the proposed method properly, five other thumbnail image algorithms are implemented and used as benchmarks. The methods included for comparison are the DPD method [1], the DSD method [2], the PDAF method with an averaging filter [3], the thumbnail with blur and noise information method (TBNI) [4], and the downsampled image with integrated blur method (DIB) [8].

In this section, the analysis is based on a survey conducted using 12 test images obtained from [17]. The downsampling factor d is set to 4 and 8. The survey questions ask about the preference between:
(1) the TBI thumbnail image and the DPD thumbnail image;
(2) the TBI thumbnail image and the DSD thumbnail image;
(3) the TBI thumbnail image and the PDAF thumbnail image;
(4) the TBI thumbnail image and the TBNI thumbnail image;
(5) the TBI thumbnail image and the DIB thumbnail image.

For each question, only one answer may be chosen. Each question tests the volunteer's preference between two thumbnail images, which are judged against the original image. The volunteer chooses the preferred thumbnail (i.e., the left or the right one) and assigns a scale value (from 0 to 5) indicating how much the chosen thumbnail corresponds more closely to the original image than the other. A scale of "0" is selected if the two thumbnails being compared are of the same quality. The scale values are interpreted as follows:
(i) scale "5-left": the left thumbnail is totally different from the right one; almost 90 percent of the blurred or clear regions of the left thumbnail correspond better to the original image;
(ii) scale "4-left": the left thumbnail is different from the right one; almost 70 percent of the blurred or clear regions of the left thumbnail correspond better to the original image;
(iii) scale "3-left": the left thumbnail is different from the right one; almost 50 percent of the blurred or clear regions of the left thumbnail correspond better to the original image;
(iv) scale "2-left": the left thumbnail is almost similar to the right one; almost 30 percent of the blurred or clear regions of the left thumbnail correspond better to the original image;
(v) scale "1-left": the left thumbnail is almost similar to the right one; less than 10 percent of the blurred or clear regions of the left thumbnail correspond better to the original image;
(vi) scale "0": the right thumbnail corresponds to the original image as well as the left one;
(vii) scale "1-right": the right thumbnail is almost similar to the left one; less than 10 percent of the blurred or clear regions of the right thumbnail correspond better to the original image;
(viii) scale "2-right": the right thumbnail is almost similar to the left one; almost 30 percent of the blurred or clear regions of the right thumbnail correspond better to the original image;
(ix) scale "3-right": the right thumbnail is different from the left one; almost 50 percent of the blurred or clear regions of the right thumbnail correspond better to the original image;
(x) scale "4-right": the right thumbnail is different from the left one; almost 70 percent of the blurred or clear regions of the right thumbnail correspond better to the original image;
(xi) scale "5-right": the right thumbnail is totally different from the left one; almost 90 percent of the blurred or clear regions of the right thumbnail correspond better to the original image.

Each section of the questionnaire consists of five preference questions, and there are twelve sections per questionnaire set. Two sets of questionnaires are available. The total number of questions for each survey is therefore 120 (i.e., 5 × 12 × 2); the average completion time is 35 minutes, including a 5-minute rest between switching the values of d. The estimated answering time per question is 15 seconds. For data compilation, box-and-whisker graphs are plotted to compare the proposed TBI method with the other methods for each value of the parameter d. A briefing is given to every volunteer prior to the survey. The volunteer chooses one of the two thumbnails in each comparison question (e.g., TBI or DPD) and assigns a scale value depending on their preference.

The number of volunteers is 51, aged from 20 to 40 years and consisting of both males and females from various faculties across the campus. Volunteers are given a briefing prior to the start of the survey and are seated 60 cm in front of a 15.6-inch monitor. The survey is conducted in a closed, quiet room. After the survey, box-and-whisker plots are generated to show the overall tendency of the volunteers towards the thumbnail methods.

Box-and-whisker plots are ideal for comparing distributions because the center, spread, and overall range are immediately apparent. A box-and-whisker plot is a way of summarizing a set of data measured on an interval scale and is often used in exploratory statistical data analysis.

A box-and-whisker plot is useful for indicating whether a distribution is skewed and whether there are potential unusual observations (outliers) in the data set. It is also very useful when large numbers of observations are involved and when two or more data sets are being compared. This type of graph shows the shape of the distribution, its central value, and its variability. In box-and-whisker plots, the ends of the box are the upper and lower quartiles, so the box spans the interquartile range; the median is marked by a horizontal line inside the box, and the whiskers are the two lines outside the box that extend to the highest and lowest observations.
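As a quick illustration of this kind of plot (with made-up placeholder scores, not the survey data), one might write:

```python
# Box-and-whisker sketch comparing hypothetical preference scores for
# the two downsampling factors used in the survey.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
scores_d4 = rng.integers(-3, 4, size=51)  # placeholder scores, 51 volunteers
scores_d8 = rng.integers(-3, 4, size=51)
plt.boxplot([scores_d4, scores_d8])
plt.xticks([1, 2], ["d = 4", "d = 8"])
plt.ylabel("preference (negative = benchmark, positive = TBI)")
plt.show()
```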

Figure 19 shows the box-and-whisker plot for the TBI versus DPD thumbnail images. The median values in Figures 19(a) and 19(b) show that the overall volunteer tendency towards choosing TBI is slightly higher than towards DPD for d = 4, while the tendency is roughly the same for d = 8. Most of the interquartile range falls between "0" and "1" towards TBI, indicating a slight user preference for TBI; this holds for both d = 4 and d = 8. The extreme values are scattered around "2" and "3" for both factors. Generally, the tendency of the users in the comparison between TBI and DPD leans slightly towards the TBI thumbnail. Therefore, the proposed TBI is slightly better than DPD in this survey analysis.

Figure 19: Box-and-whisker plot for TBI versus DPD thumbnail images survey.

Figure 20 shows the box-and-whisker plot for the TBI versus DSD thumbnail images survey. The median values in Figures 20(a) and 20(b) show that the volunteers' tendency to choose the TBI and DSD methods is the same for d = 4, while it leans slightly towards the TBI thumbnail for d = 8.

Figure 20: Box-and-whisker plot for TBI versus DSD thumbnail images survey.

For d = 4, the preferences for the TBI and DSD thumbnails counteract each other in terms of tendency; for d = 8, the preference leans slightly towards the TBI thumbnail. Quartiles 1 and 3 also show roughly the same tendency of liking from the survey, with some preference for DSD counteracting that for TBI. The extreme error bars for both Figures 20(a) and 20(b) range mostly between "2" and "3", for both TBI and DSD. Overall, TBI and DSD show the same performance tendency from the survey's perspective for d = 4, but for d = 8 the tendency leans slightly towards TBI.

Figure 21 shows the box-and-whisker plot for the TBI versus PDAF thumbnail images survey. The overall median values show that TBI is chosen slightly over the PDAF thumbnail by the volunteers in both Figures 21(a) and 21(b).

Figure 21: Box-and-whisker plot for TBI versus PDAF thumbnail images survey.

Quartile 1 is mostly at the value "0", while quartile 3 is mostly at the value "1", for both d = 4 and d = 8. The extreme error bar values fluctuate within a low range (e.g., values of "1" and "2") for Figures 21(a) and 21(b), indicating that the preference in this comparison is quite stable. Overall, the tendency to choose the proposed TBI image in this survey is slightly higher than that for PDAF.

Figure 22 shows the box-and-whisker plot for the TBI versus TBNI thumbnail images survey. The medians of the plots show that TBI and TBNI receive the same tendency from the volunteers' choices for both d = 4 and d = 8, with the medians of the two factors counteracting each other in terms of preference.

Figure 22: Box-and-whisker plot for TBI versus TBNI thumbnail images survey.

From quartiles 1 and 3, the TBNI method shows roughly the same tendency as TBI in Figure 22(a), but the tendency is slightly higher for TBI in Figure 22(b). The error bar values in Figure 22 are high, indicating that there are extreme preferences from the volunteers for both TBI and TBNI. From this survey, TBI shows slightly higher results than TBNI for d = 8, while the tendency for the two methods is roughly the same for d = 4.

Figure 23 shows the box-and-whisker plot for the TBI versus DIB thumbnail images survey. Analyzing the medians, it is obvious that the TBI method is chosen over the DIB method with a very high tendency from the volunteers in both Figures 23(a) and 23(b).

Figure 23: Box-and-whisker plot for TBI versus DIB thumbnail images survey.

Quartiles 1 and 3 also show that the choice is favourable towards TBI. The extreme minority values also show a tendency towards the TBI method for both d = 4 and d = 8. Therefore, from this survey, the TBI method outperforms DIB with a high preference from the volunteers.

4. Conclusion

The results and discussion show that the proposed method obtains satisfactory thumbnail results. The method is successful in that it proposes a new way of generating thumbnail images based on edge analysis. Normal downsampling methods like DPD and DSD are direct downsampling methods, without preanalysis of the image. The PDAF method with an averaging filter produces smoother results that sometimes do not correspond to a sharp original image. The DIB method produces a more blurred thumbnail, which does not correspond to the original image perceptually, while TBNI shows roughly equal visual performance to the proposed method in this analysis. Overall, the proposed method yields a satisfactory downsampling result based on edge analysis; although its advantage is not always obvious, it is a new approach to downsampling.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported in part by the Universiti Sains Malaysia’s Research University Individual (RUI) Grant with account no. 1001/PELECT/814169.

References

  1. L. Fang, O. C. Au, K. Tang, X. Wen, and H. Wang, "Novel 2-D MMSE subpixel-based image down-sampling," IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 5, pp. 740–753, 2012.
  2. L. Fang and O. C. Au, "Subpixel-based image down-sampling with min-max directional error for stripe display," IEEE Journal on Selected Topics in Signal Processing, vol. 5, no. 2, pp. 240–251, 2011.
  3. J. R. Hernández, F. Pérez-González, J. M. Rodríguez, and G. Nieto, "Performance analysis of a 2-D-multipulse amplitude modulation scheme for data hiding and watermarking of still images," IEEE Journal on Selected Areas in Communications, vol. 16, no. 4, pp. 510–524, 1998.
  4. H. Ibrahim, "Image thumbnail with blur and noise information to improve browsing experience," Advances in Multimedia, vol. 2, no. 3, pp. 39–48, 2011.
  5. Y. Chung, J. Wang, R. Bailey, S. Chen, and S. Chang, "A non-parametric blur measure based on edge analysis for image processing applications," in Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, pp. 356–360, Singapore, December 2004.
  6. J. Ko and C. Kim, "Low cost blur image detection and estimation for mobile devices," in Proceedings of the 11th International Conference on Advanced Communication Technology (ICACT '09), vol. 3, pp. 1605–1610, IEEE, Phoenix Park, Republic of Korea, February 2009.
  7. D. Liu, Z. Chen, H. Ma, F. Xu, and X. Gu, "No reference block based blur detection," in Proceedings of the International Workshop on Quality of Multimedia Experience (QoMEx '09), pp. 75–80, IEEE, San Diego, Calif, USA, July 2009.
  8. R. Samadani, T. A. Mauer, D. M. Berfanger, and J. H. Clark, "Image thumbnails that represent blur and noise," IEEE Transactions on Image Processing, vol. 19, no. 2, pp. 363–373, 2010.
  9. R. Samadani, S. H. Lim, and D. Tretter, "Representative image thumbnails for good browsing," in Proceedings of the 14th IEEE International Conference on Image Processing (ICIP '07), vol. 2, pp. II-193–II-196, IEEE, San Antonio, Tex, USA, September 2007.
  10. M. Trentacoste, R. Mantiuk, and W. Heidrich, "Blur-aware image downsampling," in Computer Graphics Forum, vol. 30, pp. 573–582, Wiley Online Library, New York, NY, USA, 2011.
  11. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and depth from a conventional camera with a coded aperture," ACM Transactions on Graphics, vol. 26, no. 3, p. 70, 2007.
  12. G. Liu, S. Chang, and Y. Ma, "Blind image deblurring using spectral properties of convolution operators," IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5047–5056, 2014.
  13. V. Nguyen and M. Blumenstein, "An application of the 2D Gaussian filter for enhancing feature extraction in off-line signature verification," in Proceedings of the 11th International Conference on Document Analysis and Recognition (ICDAR '11), pp. 339–343, Beijing, China, September 2011.
  14. M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision, Cengage Learning, 2014.
  15. R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, vol. 5, McGraw-Hill, New York, NY, USA, 1995.
  16. G. T. Shrivakshan and C. Chandrasekar, "A comparison of various edge detection techniques used in image processing," International Journal of Computer Science Issues, vol. 9, no. 5, pp. 269–276, 2012.
  17. M. Trentacoste, R. Mantiuk, and W. Heidrich, "Blur aware image down-sampling," 2011, https://www.cs.ubc.ca/labs/imager/tr/2011/BlurAwareDownsize/ap-resizing-validation/index.html.