Mathematical Problems in Engineering
Volume 2017, Article ID 6089650, 14 pages
https://doi.org/10.1155/2017/6089650
Research Article

A Single Image Deblurring Algorithm for Nonuniform Motion Blur Using Uniform Defocus Map Estimation

Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402, Taiwan

Correspondence should be addressed to Jiunn-Lin Wu; jlwu@cs.nchu.edu.tw

Received 13 August 2016; Revised 23 December 2016; Accepted 15 January 2017; Published 13 February 2017

Academic Editor: Abdel-Ouahab Boudraa

Copyright © 2017 Chia-Feng Chang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

One of the most common artifacts in digital photography is motion blur. When an image is captured under dim light with a handheld camera, the shaking of the photographer's hand causes the image to blur. In response to this problem, image deblurring has become an active topic in computational photography and image processing in recent years. From the viewpoint of signal processing, image deblurring can be reduced to a deconvolution problem if the kernel function of the motion blur is assumed to be shift invariant. However, the kernel function is not always shift invariant in real cases; for example, in-plane rotation of a camera or a moving object can blur different parts of an image according to different kernel functions. An image that is degraded by multiple blur kernels is called a nonuniform blur image. In this paper, we propose a novel single image deblurring algorithm for nonuniform motion blur images blurred by moving objects. First, a uniform defocus map method is proposed to measure the amounts and directions of motion blur. The detected blurred regions are then used to estimate the point spread functions simultaneously. Finally, a fast deconvolution algorithm is used to restore the nonuniform blur image. The experimental results show that the proposed method achieves satisfactory deblurring of a single nonuniform blur image.

1. Introduction

Recently, image deblurring has become an essential issue in photography because of the popularity of handheld cameras and smartphones. When an image is captured under dim light, the captured image often suffers from motion blur artifacts. Handheld cameras often receive insufficient light and thus require long exposure times, which cause blurring. Numerous photographs capture unusual moments that cannot be repeated with different camera settings. Mathematically, a blurred image can be modeled as

B = I ⊗ K + N,

where B is the blurred image, I is the latent unblurred image, K is the blur kernel, ⊗ is the convolution operator, and N is the noise in the blurred image. During exposure, the movement of the camera can be viewed as a motion blur kernel called the point spread function (PSF). However, image deblurring is an ill-posed problem because both the PSF and the latent image are unknown. If the PSF is shift invariant, an image deblurring problem can be reduced to an image deconvolution problem. According to what is known regarding the PSF, image deconvolution problems can be divided into two categories: nonblind image deconvolution and blind image deconvolution. In nonblind image deconvolution, the PSF is already known or computed; therefore, the problem focuses on how to recover a blurred image by using a known PSF. The Wiener filter [1] and Richardson-Lucy deconvolution [2] are well-known nonblind deconvolution algorithms that are effective at image deblurring for cases in which the PSF is not complicated. However, in real cases, the PSF is complex. If a given PSF is not precisely estimated, ringing artifacts are generated in the deblurred result. To address this problem, several blind image deconvolution methods [3–13] have been proposed. However, the blind deconvolution problem is even more ill-posed than the nonblind deconvolution problem, because it entails recovering the unblurred image and estimating the PSF simultaneously. Image pair approaches have been proposed for image deblurring.
Leveraging additional images makes the blind deblurring problem more tractable. Rav-Acha and Peleg used two motion blur images [9]. Yuan et al. recovered a blurred image from a noise/motion pair that had been captured under low light conditions [10]. Zhuo et al. leveraged a flash/motion pair that provided clear structural information for an image deblurring problem [11]. Zhang et al. applied varying image pairs to image deblurring [12]. Nevertheless, image pair approaches require additional images that must be captured by extra hardware. Chang and Wu proposed a new deconvolution method for deblurring a blurred image with uniform motion blur by using hyper Laplacian priors [13].
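The degradation model B = I ⊗ K + N described above can be made concrete with a short numerical sketch. This is an illustrative simulation, not code from the paper; the toy image, the 1×5 horizontal box kernel, and the noise-free setting are assumptions chosen for clarity:

```python
import numpy as np

def blur(image, kernel, noise_sigma=0.0, seed=0):
    """Simulate B = I (x) K + N with circular boundaries via the FFT."""
    K = np.zeros_like(image)
    kh, kw = kernel.shape
    K[:kh, :kw] = kernel
    # Center the kernel so the convolution introduces no spatial shift.
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    B = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(K)))
    rng = np.random.default_rng(seed)
    return B + rng.normal(0.0, noise_sigma, image.shape)

I = np.zeros((32, 32)); I[8:24, 8:24] = 1.0      # toy latent image
K = np.ones((1, 5)) / 5.0                        # horizontal motion kernel
B = blur(I, K)
```

Because the kernel sums to one, blurring redistributes intensity without changing the total while reducing edge gradients.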

As shown in Figure 1, motion blur in a real-world image can be so complicated that the assumption of shift invariance does not always hold. Deblurring a nonuniform motion blur image is more difficult [14–23]. The causes of nonuniform motion blur can be divided into two categories: camera rotation [14, 15] and target motion [16–23]. To solve the target motion problem, nonuniform blur image deblurring methods have been proposed. In 2006, Levin deblurred a nonuniform motion blur image by segmenting nonuniform motion blur images into blurred regions and unblurred regions by using image statistics [16]. However, this method works only when the direction of blur is vertical or horizontal. In 2007, Cho et al. proposed using multiple blurred images that had been captured continuously [17]. An image registration method was then applied to estimate the PSF, but it required previous discernment of the blurred object and additional images. In 2011, Tai et al. proposed using a projective motion path that was recorded with additional hardware to replace the traditional planar blur kernel for image deblurring [14]. A projective motion path describes the movement of the camera in a three-dimensional (3D) model; therefore, it can restore nonuniform motion blur images for which blur had been caused by camera rotation. However, additional hardware was required to record the 3D blur kernel. Several nonuniform image deblurring methods with depth maps have been proposed to handle this problem. In 2012, Xu and Jia used a depth map to deblur blurred images with the same motion blur but different Gaussian blurs [22]. In 2014, Hu et al. proposed using depth estimation to deblur a nonuniform blur image [23]. The authors separated the blurred image into layers according to a depth map and then deblurred one layer while keeping the others fixed. This deblurring method treats every object at the same depth as having the same blur amount.
However, if there is an object with a different blur amount in the same layer, deblurring fails because the blur amount of objects at the same depth is not always the same.

Figure 1: An example of a real-world image. There are different kinds of blur in the image: the fallen ball is blurry, and the other regions are not. (a) Real-world image. (b) Magnification; the red rectangle is the blurred basketball, and the blue rectangles are unblurred regions.

In this paper, we propose a novel image deblurring algorithm for nonuniform motion blur images. To locate blurred parts, we measure the blur amount for each edge in the blurred image. The measurement of blur is then applied to propagate a uniform defocus map. We use the proposed uniform defocus map to segment the nonuniform blur image into multiple blurred and unblurred portions. Subsequently, we estimate the PSF for each blurred portion to obtain the PSF of each portion. All portions and the corresponding PSFs are entered as inputs to a fast deconvolution algorithm. Finally, an unblurred result is obtained.

The rest of this paper is organized as follows. Section 2 briefly reviews related works on nonuniform image deblurring. Section 3 presents the proposed method in detail. Experimental results verifying the proposed method are presented in Section 4. Finally, Section 5 concludes this paper.

2. Related Works

Nonuniform motion deblurring is an ill-posed problem, and many studies have been proposed to address it. In 2006, Levin removed nonuniform motion blur by separating a blurred image into unblurred regions and blurred regions by using image statistics [16]. The authors located blurred and unblurred regions depending on the observation that the statistics of derivative filters in images are substantially changed by partial motion blur. They assumed that the blur effect resulted from motion at a constant velocity and thus modeled the expected derivative distributions as functions of the width of the blur kernel. This was effective when the direction of motion blur was vertical or horizontal or, in other words, when the blur kernel was one-dimensional. In 2007, Cho et al. proposed using multiple blurred images that were captured continuously to remove nonuniform motion blur [17]. These blurred images were captured for the same scene. The authors applied an image registration method to compute the offsets of a moving object in two sequential images. Once the moving object was found in the images, they calculated the movement of the moving object from the tiny movement that was computed from the image pair. This movement was the estimated PSF of the moving object. The estimated PSF was applied to recover an image of the blurred object. Using multiple images to estimate the PSF made the nonuniform deblurring problem more tractable. However, multiple sequential images are not always available. In 2011, Tai et al. proposed using a projective motion path, estimated with additional hardware, to deblur a blurred image [14]. The projective motion path is a model that records every movement of a camera through planar projective transformation (i.e., homography) during the exposure time. Homography provides not only planar information but also rotational information for rotation blur, whereas the traditional 2D PSF provides only planar information.
Therefore, homography can successfully recover a motion blur image with rotational blur, but it requires additional hardware to record the projective motion path. In 2014, Hu et al. proposed a nonuniform image deblurring approach that uses a depth map, applying depth estimation to an image deblurring algorithm for a nonuniform blur image [23]. The approach separates the blurred image into layers according to estimated depth information. Every object with the same depth estimate is classified into the same depth layer. Hu et al. deblurred each layer while keeping the others fixed. The depth map provided crucial information for deblurring a nonuniform blur image. However, the deblurred result fails when there are objects with different blurs in the same depth layer.

Inspired by these nonuniform image deblurring methods, we sought to locate blurred objects precisely in nonuniform motion blur images. This is a vital task for deblurring spatially varying blurred images. We also sought to detect all types of motion blur objects in the blurred image automatically. In the next section, we demonstrate a noteworthy characteristic of blurred objects in nonuniform motion blur images, explain the dynamics of this characteristic, and then propose our method for solving this ill-posed image deblurring problem.

3. Proposed Method

As mentioned in the previous section, separating a nonuniform blur image into unblurred and blurred portions is an essential task for recovering nonuniform motion blur images. As shown in Figure 1, a nonuniform blur image contains partial motion blur: the fallen ball is blurry and the basket is not. We notice that the ball and the basket are rigid objects. If a rigid object suffers from blur, all pixels within it should suffer from the same blur. Once the blurred objects are found, we can perform the deblurring procedure for each blurred object; the nonuniform motion deblurring problem can therefore be reduced to a shift-invariant image deconvolution problem. Through the proposed method, we seek to measure the blur amount of each object in a blurred image and then apply that blur amount to find the blurred objects in the image. Inspired by Bae and Durand [18], we use the distance between the maximal value and the minimal value of the second derivative to define the blur amount for each pixel in a blurred object. Consider the blurred signals with varying sigma in Figure 2. A larger sigma yields a more blurred signal and a wider distance between the maximum and the minimum. Consider the two signals in Figure 3: the blue signal extracted from the car and the red signal extracted from the book. The blur amount of the blue signal in Figure 3(b) is greater than the blur amount of the red signal; the car is blurred and the book is not, as shown in Figure 3(a). That is, the distance between the local maximum and the local minimum of the second derivative can be viewed as the blur amount of an object. In fact, the blur amount is the same everywhere on a rigid object. From this fact, we unify the blur amount of an object by using a k-means clustering algorithm. The flowchart of the proposed method is shown in Figure 4.
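The second-derivative cue described above can be checked numerically: for a Gaussian-blurred step edge, the extrema of the second derivative lie near ±σ, so the max–min distance grows with the blur. The signal length and σ values in this sketch are arbitrary test choices:

```python
import numpy as np

def extrema_distance(sigma, n=200):
    """Distance between the max and min of the 2nd derivative of a blurred step."""
    x = np.arange(n, dtype=float)
    step = (x >= n // 2).astype(float)            # ideal step edge
    t = np.arange(-4 * sigma, 4 * sigma + 1.0)
    g = np.exp(-t**2 / (2 * sigma**2)); g /= g.sum()
    blurred = np.convolve(step, g, mode="same")
    d2 = np.gradient(np.gradient(blurred))        # second derivative
    return abs(int(np.argmax(d2)) - int(np.argmin(d2)))

# A more blurred edge yields a wider max-min distance (roughly 2*sigma).
d_small, d_large = extrema_distance(1.0), extrema_distance(3.0)
```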

Figure 2: An example of blur amount estimation.
Figure 3: An example of a blurred edge and an unblurred edge. (a) A nonuniform blurred image. (b) Signal of the blurred edge and signal of the sharp edge corresponding to the blue line and the red line in (a). The x-axis is position and the y-axis is intensity in gray level.
Figure 4: Flowchart of proposed method.
3.1. Uniform Defocus Map

According to Elder and Zucker [24], edge regions that have notable frequency content are suitable for blur amount measurement. Step edges are the main edge type in a natural image; therefore, we measure the blur amount at edge pixels in this paper. Mathematically, an ideal step edge can be formulated as follows:

f(x) = A·u(x) + B,

where u(x) is the step function, A is the amplitude, and B is the offset. A blurred step edge can then be defined as the result of convolving a step edge with a Gaussian function:

b(x) = f(x) ⊗ g(x, σ),

where b(x) is the blurred step edge, g(x, σ) is the Gaussian function, ⊗ is the convolution operator, and σ is the standard deviation of the Gaussian function.

3.1.1. Blur Amount Estimation

In the formulation of a blurred step edge, σ can be regarded as the blur amount. We can calculate σ by using a reblur method. A reblur of the blurred step edge is

b_r(x) = b(x) ⊗ g(x, σ₀),

where σ₀ is the standard deviation of the reblur Gaussian function. Then, dividing the gradient magnitude of the blurred step edge by that of the reblurred step edge, we obtain the following equation:

R(x) = |∇b(x)| / |∇b_r(x)| = sqrt((σ² + σ₀²) / σ²) · exp(−(x²/2)(1/σ² − 1/(σ² + σ₀²))).

Let R be this ratio of the blurred step edge to the reblurred step edge. At the edge location (x = 0), R has its maximum value. Then we have

R = sqrt((σ² + σ₀²) / σ²).

Given the ratio R, the unknown σ can be computed using

σ = σ₀ / sqrt(R² − 1).

Once σ is computed, we use it as the blur amount at the edge pixel. The blur measurement result is shown in Figure 5.

Figure 5: Intermediate result of applying the simple bilateral filter to the proposed blur amount measurement method. (a) Blur measurement. (b) Blur measurement with the simple bilateral filter. (c) Close-up views corresponding to the red rectangle in (a) and the same position in (b).
3.1.2. Blur Amount Refinement

After the blur amounts of edge pixels have been computed, we propagate the blur amount to nonedge pixels for which blur amounts have not been computed. However, phenomena such as shadows and highlights may cause outliers in the measurement results, and these outliers may propagate incorrect results. We should remove these outliers from the sparse measurement results. Pixels in a sparse matrix are often zero; therefore, instead of using a cross-bilateral filter, such as that applied in [18, 20], we propose using a simple bilateral filter for our sparse blur measurements. The main idea is that only nonzero pixels are considered in the filtering procedure. The simple bilateral filter, which is applied only to nonzero pixels, is defined as

m̂(x) = (1 / W(x)) · Σ_{y ∈ Ω(x)} G_r(|m(x) − m(y)|) · G_s(‖x − y‖) · m(y),

with the normalization factor

W(x) = Σ_{y ∈ Ω(x)} G_r(|m(x) − m(y)|) · G_s(‖x − y‖),

where m̂ is the filtered result, m is the sparse blur measurement to be filtered, x denotes the coordinates of the current pixel to be filtered, Ω(x) is the window centered at x restricted to nonzero values, G_r is the range kernel of the Gaussian function for smoothing differences in intensities, and G_s is the spatial kernel of the Gaussian function for smoothing differences in coordinates. The intermediate result is shown in Figure 5. The word on the roof is smoother after application of the simple bilateral filter.
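A direct loop-based sketch of this nonzero-only filtering idea follows; the window radius, the kernel widths, and the tiny 5×7 map are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sparse_bilateral(m, radius=2, sigma_r=2.0, sigma_s=2.0):
    """Bilateral-filter a sparse blur map, using only nonzero neighbors."""
    out = np.zeros_like(m)
    h, w = m.shape
    for i in range(h):
        for j in range(w):
            if m[i, j] == 0:          # only edge pixels carry a measurement
                continue
            num = den = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < h and 0 <= x < w and m[y, x] != 0:
                        wgt = (np.exp(-(m[i, j] - m[y, x])**2 / (2 * sigma_r**2))
                               * np.exp(-(di**2 + dj**2) / (2 * sigma_s**2)))
                        num += wgt * m[y, x]
                        den += wgt
            out[i, j] = num / den
    return out

# A sparse map: one row of consistent measurements plus a single outlier.
m = np.zeros((5, 7))
m[2, 1:6] = [1.0, 1.0, 3.0, 1.0, 1.0]   # 3.0 is an outlier
filtered = sparse_bilateral(m)
```

Zero (unmeasured) pixels stay zero, while the outlier is pulled toward its consistent nonzero neighbors.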

Figure 6: Effect of the k-means method. (a) Proposed defocus map with k-means clustering. (b) Close-up views corresponding to the red rectangle in (a) and the same position in Figure 5(a). From (b), the blur amount of the moving car in the proposed defocus map is more uniform than in Zhuo's defocus map.
3.1.3. Blur Amount Unification

The blur amounts of pixels that are located in the same rigid object are equal. Therefore, we use k-means clustering to unify the measurement results. The k-means clustering algorithm minimizes the within-cluster sum of squares; thus, it clusters pixels with similar blur amounts together. The label of each pixel in the edge region is calculated by minimizing the following cost function:

argmin_S Σ_{i=1}^{k} Σ_{m ∈ S_i} ‖m − μ_i‖²,

where S = {S₁, S₂, …, S_k} is a set of clusters, k is the desired number of clusters, and μ_i is the mean of the points in S_i. The blur measurement result with k-means clustering is shown in Figure 6. The close-up views in Figure 6(b) show that the blur amount of the car in our measurement result is uniform, but the amount in Zhuo's edge map is not. We apply morphological operations to the refinement result, and we observe that k-means clustering successfully unifies the blur amount in a rigid object, as shown in Figure 7.
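A minimal 1D Lloyd-style k-means over scalar blur amounts illustrates how measurements within one object collapse to a single value; the data and k = 2 are toy choices, not values from the paper:

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    """Cluster scalar blur amounts; return labels and cluster means."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each value to its nearest center, then update the centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

# Noisy blur amounts from a blurred object (~2.0) and a sharp one (~0.3).
vals = np.array([2.1, 1.9, 2.0, 2.2, 0.3, 0.35, 0.25, 0.3])
labels, centers = kmeans_1d(vals, k=2)
# Unify: replace each measurement with its cluster mean.
unified = centers[labels]
```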

Figure 7: Result of k-means clustering.
3.1.4. Blur Amount Propagation

To propagate the estimated blur amounts to the whole image, we apply the matting Laplacian interpolation method to our estimated blur measurement. The estimated blur amounts from the edge regions are propagated to the nonedge regions to form a defocus map. The cost function of the matting Laplacian interpolation method is formulated as follows:

E(d) = dᵀ L d + λ (d − d̂)ᵀ D (d − d̂),

where d is the full defocus map in vector form, d̂ is the vector form of the sparse defocus map, L is the matting Laplacian matrix, D is a diagonal matrix whose element is 1 at edge regions and 0 otherwise, and λ is a parameter which balances fidelity to the sparse defocus map against smoothness of the interpolation. The full defocus map can be obtained by solving the corresponding sparse linear system (L + λD) d = λD d̂ [25]. The proposed defocus map and Zhuo's defocus map are shown in Figure 8. In Zhuo's defocus map in Figure 8(a), the estimated blur amounts differ across different parts of the blurred car; therefore, it is difficult to segment the blurred car. By contrast, in the proposed defocus map, shown in Figure 8(b), the estimated blur amount is uniform for the car. The key benefit of our uniform defocus map is that the blurred car can be easily segmented.
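The full matting Laplacian is involved to build, so the sketch below substitutes a simple 4-neighbor grid Laplacian for L to show only the mechanics of solving (L + λD)d = λDd̂; the grid size, seed values, and λ are illustrative assumptions:

```python
import numpy as np

def propagate(sparse, lam=10.0):
    """Solve (L + lam*D) d = lam*D*d_hat with a 4-neighbor grid Laplacian."""
    h, w = sparse.shape
    n = h * w
    idx = lambda i, j: i * w + j
    L = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                y, x = i + di, j + dj
                if 0 <= y < h and 0 <= x < w:
                    L[idx(i, j), idx(i, j)] += 1.0
                    L[idx(i, j), idx(y, x)] -= 1.0
    # D marks the pixels that carry a (nonzero) sparse measurement.
    D = np.diag((sparse.ravel() != 0).astype(float))
    d = np.linalg.solve(L + lam * D, lam * D @ sparse.ravel())
    return d.reshape(h, w)

# Two seed measurements on a 6x6 grid; the solver fills in the rest smoothly.
sparse = np.zeros((6, 6))
sparse[1, 1] = 2.0   # blurred region seed
sparse[4, 4] = 0.5   # sharp region seed
dense = propagate(sparse, lam=10.0)
```

The smoothness term spreads the seed values across unmeasured pixels while the fidelity term keeps the seeds close to their measurements.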

Figure 8: Comparison of the proposed defocus map and Zhuo's defocus map. (a) Zhuo's defocus map. (b) Proposed defocus map. Comparing (a) and (b), the proposed defocus map is more uniform within rigid objects, such as the car and the book, than Zhuo's defocus map. Therefore, the blurred objects in the proposed defocus map can be easily segmented.
3.2. Blind Image Deconvolution

If the blurred object regions can be segmented accurately, the nonuniform blur problem can be reduced to a uniform blur problem. Hence, the blurred objects are used as inputs for uniform blur image deconvolution. We apply blind image deconvolution to recover the blurred object regions.

3.2.1. PSF Estimation

Mathematically, a blurred image can be modeled as

B = I ⊗ K + N,

where B is the blurred image, ⊗ is the convolution operator, I is the latent unblurred image, K is the PSF, and N is the noise in the image. According to Bayes' theorem, the posterior can be represented as follows:

P(I, K | B) ∝ P(B | I, K) P(I) P(K),

where P(B | I, K) represents the likelihood and P(I) and P(K) denote the priors on the latent image and the PSF. In the PSF estimation step, we consider only the gradients of the latent image and the blurred image. We rewrite the maximization of the posterior into the following energy functions:

E(K) = ‖∇B − K ⊗ ∇I‖² + λ₁‖K‖²,
E(I) = ‖∇B − K ⊗ ∇I‖² + λ₂‖∇I‖²,

where ∇ denotes the partial derivative operators in different directions, computed with the Sobel operator, and λ₁ and λ₂ are preset regularization parameters.

We iteratively solve the preceding equations to obtain an accurate PSF. To accelerate the PSF estimation process, we apply a shock filter before each PSF estimation step. The shock filter is defined as follows:

I_{t+1} = I_t − sign(ΔI_t) · ‖∇I_t‖ · dt,

where I_t is the image at the current iteration, I_{t+1} is the image at the next iteration, ΔI_t is the Laplacian of the image at iteration t, ∇I_t is the gradient of the image at the current iteration, and dt is the time step.
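A 1D sketch of this shock-filter iteration on a Gaussian-blurred step follows; the iteration count and time step are illustrative, and central differences stand in for the upwind schemes often used in practice:

```python
import numpy as np

def shock_filter(signal, dt=0.3, iters=15):
    """Sharpen a 1D signal by iterating I <- I - sign(laplacian)*|gradient|*dt."""
    I = signal.astype(float).copy()
    for _ in range(iters):
        grad = np.gradient(I)
        lap = np.gradient(np.gradient(I))
        I = I - np.sign(lap) * np.abs(grad) * dt
    return I

# A blurred step edge: the filter should steepen it back toward a sharp step.
x = np.arange(100, dtype=float)
t = np.arange(-8, 9, dtype=float)
g = np.exp(-t**2 / 8.0); g /= g.sum()          # Gaussian kernel, sigma = 2
blurred = np.convolve((x >= 50).astype(float), g, mode="same")
sharpened = shock_filter(blurred)
```

On each side of the edge the Laplacian has opposite sign, so intensities are pushed away from the inflection point and the edge steepens.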

3.2.2. Fast Adaptive Deconvolution

When a sufficient PSF is obtained, a fast adaptive deconvolution method is used for the final deconvolution [26]. The minimization problem of the fast adaptive deconvolution method is

min_I Σ_i [ (λ/2) (K ⊗ I − B)_i² + |(F₁ ⊗ I)_i|^α + |(F₂ ⊗ I)_i|^α ],

where i is an index running through all pixels. In this paper, we use the value α = 2/3, which is suggested by Krishnan and Fergus [26], and F₁ = [1, −1] and F₂ = [1, −1]ᵀ are the first-order derivative filters. We search for the I which minimizes the reconstruction error (K ⊗ I − B)², with the image prior |∇I|^α favoring the correct sharp explanation.

However, α < 1 makes the optimization problem nonconvex, and solving the approximation directly is slow. Using a half-quadratic splitting, Krishnan's fast algorithm introduces two auxiliary variables w₁ and w₂ at each pixel to move the (F ⊗ I)_i terms outside the nonconvex expression. Thus, the minimization can be converted to the following optimization problem:

min_{I,w} Σ_i [ (λ/2) (K ⊗ I − B)_i² + |w₁,ᵢ|^α + |w₂,ᵢ|^α + (β/2) ((F₁ ⊗ I − w₁)ᵢ² + (F₂ ⊗ I − w₂)ᵢ²) ],

where the (β/2)(F ⊗ I − w)² terms enforce the constraint w ≈ F ⊗ I and β is a control parameter that we vary during the iteration process. As the parameter β becomes large, the solution of this problem converges to that of the original minimization. This scheme, called alternating minimization [26], which we adopt here, is a common technique for image restoration. Minimizing the cost for a fixed β can be performed by alternating two steps: we solve the w subproblem and the I subproblem, respectively.

To solve the w subproblem, the input blurred image is first set as the initial I. Given a fixed I, finding the optimal w reduces to the following per-pixel optimization problem:

min_w |w|^α + (β/2) (w − v)²,

where v = (F ⊗ I)_i. For the case α = 2/3, the w satisfying the above problem is an analytical solution of the following quartic polynomial:

w⁴ − 3v·w³ + 3v²·w² − v³·w + 8/(27β³) = 0.

To find and select the correct roots of the quartic polynomial, we adopt Krishnan's approach, as detailed in [26].
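The per-pixel w-step for α = 2/3 can be sketched by rooting the quartic and comparing all real roots (plus w = 0) against the scalar objective, in the spirit of Krishnan and Fergus's root selection [26]; the values v = 1 and β = 10 are test choices:

```python
import numpy as np

def solve_w(v, beta, alpha=2.0 / 3.0):
    """Per-pixel shrinkage: argmin_w |w|^alpha + (beta/2)(w - v)^2 for alpha=2/3."""
    s, a = np.sign(v), abs(v)
    # Stationary points (for w, v >= 0) satisfy the quartic
    # w^4 - 3a w^3 + 3a^2 w^2 - a^3 w + 8/(27 beta^3) = 0.
    coeffs = [1.0, -3.0 * a, 3.0 * a**2, -(a**3), 8.0 / (27.0 * beta**3)]
    cands = [0.0] + [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9]
    obj = lambda w: abs(w)**alpha + 0.5 * beta * (w - a)**2
    return s * min(cands, key=obj)

w_star = solve_w(1.0, beta=10.0)
```

Comparing the candidates through the objective also handles the case where w = 0 beats every stationary point (small β, where the sparse prior dominates).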

We then solve the I subproblem to obtain the latent image. Given the fixed values of w from the previous iteration, we obtain the optimal I from the following quadratic optimization problem:

min_I Σ_i [ (λ/2) (K ⊗ I − B)_i² + (β/2) ((F₁ ⊗ I − w₁)ᵢ² + (F₂ ⊗ I − w₂)ᵢ²) ],

which has a closed-form solution in the Fourier domain. By solving the two subproblems iteratively, we obtain the deblurred result.
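Assuming circular boundary conditions, the quadratic I-step admits the closed-form Fourier solution sketched below. The toy image, box PSF, and parameter values are illustrative; here B, w₁, and w₂ are generated consistently from a known image so the recovery is exact, whereas in the real algorithm w comes from the shrinkage step:

```python
import numpy as np

def otf(kernel, shape):
    """Pad a PSF to `shape` and center it, then take its FFT."""
    K = np.zeros(shape)
    kh, kw = kernel.shape
    K[:kh, :kw] = kernel
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(K)

def solve_I(B, kernel, w1, w2, lam, beta):
    """Closed-form Fourier solution of the quadratic I-subproblem."""
    shape = B.shape
    FK = otf(kernel, shape)
    F1 = otf(np.array([[1.0, -1.0]]), shape)    # horizontal derivative filter
    F2 = otf(np.array([[1.0], [-1.0]]), shape)  # vertical derivative filter
    num = ((lam / beta) * np.conj(FK) * np.fft.fft2(B)
           + np.conj(F1) * np.fft.fft2(w1)
           + np.conj(F2) * np.fft.fft2(w2))
    den = (lam / beta) * np.abs(FK)**2 + np.abs(F1)**2 + np.abs(F2)**2
    return np.real(np.fft.ifft2(num / den))

# Consistency check: build B and (w1, w2) from a known image.
rng = np.random.default_rng(1)
I_true = rng.random((16, 16))
kernel = np.ones((3, 3)) / 9.0
FI = np.fft.fft2(I_true)
B = np.real(np.fft.ifft2(otf(kernel, I_true.shape) * FI))
w1 = np.real(np.fft.ifft2(otf(np.array([[1.0, -1.0]]), I_true.shape) * FI))
w2 = np.real(np.fft.ifft2(otf(np.array([[1.0], [-1.0]]), I_true.shape) * FI))
I_rec = solve_I(B, kernel, w1, w2, lam=2000.0, beta=1.0)
```

The denominator is strictly positive everywhere because the derivative filters vanish only at the zero frequency, where the normalized PSF contributes |FK|² = 1.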

In Figure 8, we show a comparison of the proposed method and Zhuo's method. The blur amount is the same across a rigid object, but the blur amount of the car is nonuniform in Zhuo's defocus map, which is shown in Figure 8(a). By contrast, the blur amount of the car in the proposed defocus map is uniform. Thus, the car can be successfully detected as a blurred object because of its uniform blur amount. The deblurred result is shown in Figure 9, and the estimated movement of the car is shown in Figure 9(d). In Figure 9(c), compared with the blurred car, the word on the roof of the car in the deblurred image can be clearly discerned. The deblurred result shows a satisfactory view of the nonuniform motion blur image. In the next section, we report experimental results that demonstrate the effectiveness of the proposed nonuniform deblurring algorithm.

Figure 9: Deblurred result of proposed method. (a) Nonuniform motion blurred image. (b) Deblurred result. (c) Close-up views of the roof of car. (d) PSF corresponding to blur object which is shown in Figure 7.

4. Experiments and Discussions

In this paper, we mainly focus on the nonuniform motion blur problem caused by moving objects. The proposed method was implemented in Visual C++ .NET. The testing environment was a personal computer running the 64-bit version of Windows 7 with an AMD Phenom II X4 945 3.4-GHz CPU and 8 GB of RAM. To show the effectiveness of the proposed method, we compared our results with those of four state-of-the-art single image deblurring algorithms. We used nonuniform motion blur images as inputs. These test images were taken with a high-end DSLR using a long exposure.

4.1. Comparison with Other Single Image Deblurring Methods

The first test image shows a basketball in the process of falling. We sought to deblur the blurred basketball in this image. This image was a regular testing image; it did not show a complicated scene. As shown in Figure 10, nonuniform motion blur occurred: the basketball was blurry, but the court was not. This nonuniformity makes the deblurring problem more difficult to solve. From Figures 10(b) and 10(e), the methods of Fergus et al. and Levin et al. favored a blurred result because of the nonuniform motion blur. The basketball in the deblurred results of Shan et al. and Cho and Lee is sharper, but ringing artifacts appear around the lines of the basketball court. The proposed method segments the blurred object and then deblurs it. As a result, the ringing artifacts around the court lines do not appear, and the blurred basketball can be recovered.

Figure 10: A comparison between the proposed method and state-of-the-art image deblurring algorithms. (a) Blurred image. (b) Fergus et al. (c) Shan et al. (d) Cho and Lee. (e) Levin et al. (f) Proposed method. (g) Close-up views of (a) to (f). The first row is the patch at the top right corner, and the second row is the patch containing details of the basketball at the center.
4.2. Result from a Complicated Scene

We next used a different nonuniform blur testing image that shows a complicated scene, namely, a hoopoe standing amid weeds and hay. The weeds are luxuriant, and it is not easy to discern the hoopoe, as shown in Figure 11(a). Figures 11(b) and 11(e) show the gray level results of k-means clustering with different values of k. A comparison of the two clustering results shows that the hoopoe in Figure 11(b) was not well described: the foot of the hoopoe was classified as part of the weeds because of the complicated texture. For small values of k, k-means clustering does not suffice to describe complicated scenes. By contrast, Figure 11(h) shows that the blurred bird was accurately described, despite the same amount of blur, when the algorithm used a larger value of k. Therefore, for an unusually complicated scene, we can increase the value of k to attain a superior result. In our experiments, we used the parameter k = 7 for the regular testing images and k = 12 for the test image of the complicated scene. Comparing Figures 11(h) and 11(i), the proposed uniform defocus map describes the blur amount precisely, but Zhuo's defocus map fails to describe the blur amount information.

Figure 11: Using different parameters for the k-means clustering algorithm in the proposed defocus map for a complicated scene. (a) Blurred image. (b) Clustering result (k = 7). (c) Close-up view of the hoopoe in (a). (d) Deblurred result. (e) Clustering result (k = 12). (f) Close-up view of the hoopoe in (e). (g) Proposed defocus map (k = 7). (h) Proposed defocus map (k = 12). (i) Zhuo's defocus map.

The parameter λ balances fidelity to the sparse defocus map against smoothness of the interpolation. When a larger λ is used, the propagation result fits the given blur amount estimation map more closely; when λ is smaller, the propagation result fits the original image more closely. The results are shown in Figure 12. In our experiments, we chose a fixed value, λ = 0.0001, for all experiments, so that a soft constraint is put on the estimated defocus map to further refine small errors in our blur estimation.

Figure 12: The results with different values of the balance parameter λ. (a)–(c) correspond to different values of λ.
4.3. Robustness

To verify the robustness of the proposed method for nonuniform motion blur images, we used a number of real images with various moving objects and various degrees of blur. All test images were taken with a high-end DSLR using a long exposure. For each blurred image, Zhuo's defocus map, the proposed uniform defocus map, and a k-means clustering result are presented for comparison. Figure 13 shows a man waving his hand; the waving hand is the blurred object. From Figure 13(e), the blur amount of the blurred object is uniform in the proposed map. By contrast, in Figure 13(d), the blur amount varies within the blurred object; thus, the blur amount in the hand is not uniform. This nonuniformity complicates the segmentation of the blurred hand. Once the blurred hand had been located, we estimated the blur kernel of the blurred hand. The estimated PSF in Figure 13(g) shows that the hand of the man was waving. Some ringing occurred around the wrist of the man because of the influence of the gradient intensity of the shadow. The next test image shows two students standing together at the same depth, as shown in Figure 14. The student on the left moved from right to left, while the other stood still. The proposed defocus map is shown in Figure 14(e). The blur amount of the moving student was brighter and uniform, and the blur amount of the motionless student was darker. Compared with the proposed defocus map, the blur amount in Zhuo's defocus map was affected by color and texture; therefore, it was not uniform for the moving object. As mentioned previously, a nonuniform defocus map cannot separate the blurring precisely. However, the blur amount in our proposed defocus map was uniform. Therefore, as shown in Figure 14(f), the graph on the T-shirt of the moving student can be accurately recovered. Next, we used the image "Ball," which shows a sloped direction of motion blur and a small moving object.
The deblurred result is shown in Figure 15. From Figure 15(a), it can be seen that the ball was falling from the top left to the bottom right, which caused the ball to appear blurred. A comparison of the proposed uniform defocus map with Zhuo's defocus map shows that the blur amount was uniform for the blurred object in the proposed map, but Zhuo's defocus map failed to provide uniformity. Figure 15(g) shows the estimated PSF corresponding to the falling ball. It precisely shows that the ball was falling from the top left to the bottom right.

Figure 13: Deblurred result of image “Man.” (a) Blurred image with nonuniform motion blur. (b) Deblurred result. (c) Clustering result. (d) Zhuo’s defocus map. (e) Proposed uniform defocus map. (f) Close-up views of (a) and (b). (g) PSF corresponding to waving hand as shown in (a).
Figure 14: Deblurred result of image “Students.” (a) Blurred image with nonuniform motion blur. (b) Deblurred result. (c) Clustering result. (d) Zhuo’s defocus map. (e) Proposed uniform defocus map. (f) Close-ups of (a) and (b). (g) PSF corresponding to moving student as shown in (a).
Figure 15: Deblurred result of image “Ball.” (a) Blurred image with nonuniform motion blur. (b) Deblurred result. (c) Clustering result. (d) Zhuo's defocus map. (e) Proposed uniform defocus map. The gray rectangle is the detected blurred object. (f) Close-up views of (a) and (b) on the basketball. (g) PSF corresponding to the basketball as shown in (a).
4.4. Limitation

However, we note that the shadow under the ball was detected as a blurred object, as shown in Figure 15(d). The reason for this phenomenon is that the border of the shadow region had a gradient of intensity. From Figure 16(c), we can see that the signal shown in Figure 16(b) was similar to the blurred signal shown in Figure 3. Therefore, the shadow region was recognized as a blurred object. In this case, a false deblurring result was generated.

Figure 16: Demonstration of the influence of shadow in an image with nonuniform motion blur. (a) Blurred image. (b) Shadow region corresponding to the red rectangle in (a). (c) A gray level signal corresponding to the red line in (b). The x-axis is the position, and the y-axis is intensity in gray level. We can observe that the shadow region was detected as a blurred object in Figures 15(d) and 15(e). This phenomenon is caused by the gradient intensity shown in (c).

5. Conclusions

In this paper, we propose a novel image deblurring algorithm for nonuniform motion blur. Because a rigid object has a consistent amount of blur, we propose a uniform defocus map for image segmentation. The blurred image is segmented into blurred and unblurred regions by using the proposed uniform defocus map, each blurred region is analyzed to estimate its PSF, and each region and its PSF are then passed to a uniform motion blur deconvolution algorithm to obtain the final unblurred image. Experiments showed that our deblurred results achieved satisfactory visual quality for various types of motion blur. However, optimal results required manual tuning of numerous parameters, and shadows tended to cause the algorithm to detect blurred objects incorrectly.
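The pipeline summarized above can be sketched at a high level as follows. The callables here are placeholders standing in for the paper’s segmentation, per-region PSF estimation, and fast non-blind deconvolution stages; they are not the authors’ API:

```python
import numpy as np

def deblur_nonuniform(image, segment, estimate_psf, deconvolve):
    """High-level sketch of the deblurring pipeline: segment the
    image into regions by blur amount, estimate one PSF per blurred
    region, deconvolve each region, and recombine the results."""
    regions = segment(image)               # via the uniform defocus map
    result = image.copy()                  # unblurred regions pass through
    for mask in regions["blurred"]:
        psf = estimate_psf(image, mask)    # per-region PSF estimation
        restored = deconvolve(image, psf)  # uniform-blur deconvolution
        result[mask] = restored[mask]      # paste back only this region
    return result
```

Treating each blurred region as uniformly blurred is what reduces the nonuniform problem to several independent uniform deconvolutions.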

A possible future research direction is the automatic deblurring of spatially varying motion blur images. In particular, correctly classifying blurred regions in images that contain shadows remains an open problem for fully automatic deblurring; addressing it is expected to require an effective classification method that selects genuinely blurred objects.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This research was supported by the Ministry of Science and Technology, Taiwan, under Grants MOST 103-2221-E-005-073 and MOST 104-2221-E-005-090.

References

  1. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, 2nd edition, 2002.
  2. W. H. Richardson, “Bayesian-based iterative method of image restoration,” Journal of the Optical Society of America, vol. 62, no. 1, pp. 55–59, 1972.
  3. R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, “Removing camera shake from a single photograph,” ACM Transactions on Graphics, vol. 25, no. 3, pp. 787–794, 2006.
  4. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Transactions on Graphics, vol. 26, no. 3, Article ID 1276464, 2007.
  5. Q. Shan, J. Jia, and A. Agarwala, “High-quality motion deblurring from a single image,” ACM Transactions on Graphics, vol. 27, no. 3, article 73, 2008.
  6. S. Cho and S. Lee, “Fast motion deblurring,” ACM Transactions on Graphics, vol. 28, no. 5, 2009.
  7. L. Xu and J. Jia, “Two-phase kernel estimation for robust motion deblurring,” in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 157–170, Heraklion, Greece, September 2010.
  8. A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding blind deconvolution algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2354–2367, 2011.
  9. A. Rav-Acha and S. Peleg, “Two motion-blurred images are better than one,” Pattern Recognition Letters, vol. 26, no. 3, pp. 311–317, 2005.
  10. L. Yuan, J. Sun, L. Quan, and H.-Y. Shum, “Image deblurring with blurred/noisy image pairs,” ACM Transactions on Graphics, vol. 26, no. 3, article 1, 2007.
  11. S. Zhuo, D. Guo, and T. Sim, “Robust flash deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2440–2447, San Francisco, Calif, USA, June 2010.
  12. H. Zhang, D. Wipf, and Y. Zhang, “Multi-observation blind deconvolution with an adaptive sparse prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 8, pp. 1628–1643, 2014.
  13. C.-F. Chang and J.-L. Wu, “A new single image deblurring algorithm using hyper Laplacian priors,” Frontiers in Artificial Intelligence and Applications, vol. 274, pp. 1015–1022, 2015.
  14. Y.-W. Tai, P. Tan, and M. S. Brown, “Richardson-Lucy deblurring for scenes under a projective motion path,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1603–1618, 2011.
  15. O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, “Non-uniform deblurring for shaken images,” International Journal of Computer Vision, vol. 98, no. 2, pp. 168–186, 2012.
  16. A. Levin, “Blind motion deblurring using image statistics,” in Advances in Neural Information Processing Systems 19 (NIPS '06), pp. 841–848, MIT Press, 2006.
  17. S. Cho, Y. Matsushita, and S. Lee, “Removing non-uniform motion blur from images,” in Proceedings of the IEEE 11th International Conference on Computer Vision (ICCV '07), Rio de Janeiro, Brazil, October 2007.
  18. S. Bae and F. Durand, “Defocus magnification,” in Proceedings of the Annual Conference of the European Association for Computer Graphics (EUROGRAPHICS '07), pp. 571–579, Prague, Czech Republic, September 2007.
  19. H.-Y. Lin, K.-J. Li, and C.-H. Chang, “Vehicle speed detection from a single motion blurred image,” Image and Vision Computing, vol. 26, no. 10, pp. 1327–1337, 2008.
  20. S. Zhuo and T. Sim, “Defocus map estimation from a single image,” Pattern Recognition, vol. 44, no. 9, pp. 1852–1858, 2011.
  21. M. Hirsch, C. J. Schuler, S. Harmeling, and B. Scholkopf, “Fast removal of non-uniform camera shake,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 463–470, Barcelona, Spain, November 2011.
  22. L. Xu and J. Jia, “Depth-aware motion deblurring,” in Proceedings of the IEEE International Conference on Computational Photography (ICCP '12), pp. 1–8, Seattle, Wash, USA, April 2012.
  23. Z. Hu, L. Xu, and M.-H. Yang, “Joint depth estimation and camera shake removal from single blurry image,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 2893–2900, Columbus, Ohio, USA, June 2014.
  24. J. H. Elder and S. W. Zucker, “Local scale control for edge detection and blur estimation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 7, pp. 699–716, 1998.
  25. A. Levin, D. Lischinski, and Y. Weiss, “Colorization using optimization,” in Proceedings of ACM SIGGRAPH '04, pp. 689–694, Los Angeles, Calif, USA, August 2004.
  26. D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-Laplacian priors,” in Proceedings of the 23rd Annual Conference on Neural Information Processing Systems (NIPS '09), pp. 1033–1041, Vancouver, Canada, December 2009.