Abstract

One of the most common artifacts in digital photography is motion blur. When an image is captured under dim light with a handheld camera, the tendency of the photographer's hand to shake causes the image to blur. In response to this problem, image deblurring has become an active topic in computational photography and image processing in recent years. From the viewpoint of signal processing, image deblurring can be reduced to a deconvolution problem if the kernel function of the motion blur is assumed to be shift invariant. However, the kernel function is not always shift invariant in real cases; for example, in-plane rotation of a camera or a moving object can blur different parts of an image according to different kernel functions. An image that is degraded by multiple blur kernels is called a nonuniform blur image. In this paper, we propose a novel single image deblurring algorithm for nonuniform motion blur images blurred by moving objects. First, a uniform defocus map method is proposed to measure the amounts and directions of motion blur and to locate the blurred regions. These blurred regions are then used to estimate the point spread functions simultaneously. Finally, a fast deconvolution algorithm is used to restore the nonuniform blur image. We expect that the proposed method can achieve satisfactory deblurring of a single nonuniform blur image.

1. Introduction

Recently, image deblurring has become an essential issue in photography because of the popularity of handheld cameras and smartphones. When an image is captured under dim light, it often suffers from motion blur artifacts: the scene provides insufficient light, so the camera requires a long exposure time, during which hand shake blurs the image. Numerous photographs capture unusual moments that cannot be repeated with different camera settings. Mathematically, a blurred image can be modeled as

$$B = I \otimes K + N,$$

where $B$ is the blurred image, $K$ is the blur kernel, $\otimes$ is the convolution operator, $I$ is the latent unblurred image, and $N$ is the noise in the blurred image. During exposure, the movement of the camera can be viewed as a motion blur kernel called the point spread function (PSF). However, image deblurring is an ill-posed problem because both the PSF and the latent image can be unknown. If the PSF is shift invariant, an image deblurring problem can be reduced to an image deconvolution problem. According to what is known regarding the PSF, image deconvolution problems can be divided into two categories: nonblind image deconvolution and blind image deconvolution. In nonblind image deconvolution, the PSF is already known or computed; therefore, the problem focuses on how to recover a blurred image by using a known PSF. The Wiener filter [1] and Richardson-Lucy deconvolution [2] are well-known nonblind deconvolution algorithms that are effective at image deblurring for cases in which the PSF is not complicated. However, in real cases, the PSF is complex, and if a given PSF is not precisely estimated, ringing artifacts are generated in the deblurred result. To address this problem, several blind image deconvolution methods [3–13] have been proposed. However, the blind deconvolution problem is even more ill-posed than the nonblind deconvolution problem, because it entails recovering the unblurred image and estimating the PSF simultaneously. Image pair approaches have been proposed for image deblurring; leveraging additional images makes the blind deblurring problem more tractable. Rav-Acha and Peleg used two motion blur images [9]. Yuan et al. recovered a blurred image from a noise/motion pair that had been captured under low light conditions [10]. Zhuo et al. leveraged a flash/motion pair that provided clear structural information for an image deblurring problem [11]. Zhang et al. applied varying image pairs to image deblurring [12]. Nevertheless, image pair approaches require additional images that must be captured by extra hardware. Chang and Wu proposed a new deconvolution method for deblurring a blurred image with uniform motion blur by using hyper-Laplacian priors [13].
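To make the model concrete, the following sketch simulates $B = I \otimes K + N$ for a grayscale image; the function name, the noise level, and the example kernel are our own illustrative choices, not part of any published implementation.

```python
# Minimal sketch of the blur model B = I (*) K + N (illustrative only).
import numpy as np
from scipy.signal import fftconvolve

def simulate_blur(I, K, noise_sigma=0.01):
    """Blur grayscale image I (float, range [0, 1]) with kernel K and add noise."""
    K = K / K.sum()                          # a PSF integrates to 1
    B = fftconvolve(I, K, mode="same")       # shift-invariant convolution
    N = np.random.normal(0.0, noise_sigma, I.shape)
    return np.clip(B + N, 0.0, 1.0)

# Example: a horizontal motion PSF of length 15
K = np.zeros((15, 15))
K[7, :] = 1.0
# B = simulate_blur(I, K) for some grayscale image I in [0, 1]
```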

As shown in Figure 1, motion blur in a real-world image can be so complicated that the assumption of shift invariance does not always hold. Deblurring a nonuniform motion blur image is more difficult [14–23]. The causes of nonuniform motion blur can be divided into two categories: camera rotation [14, 15] and target motion [16–23]. To solve the target motion problem, several nonuniform blur image deblurring methods have been proposed. In 2006, Levin deblurred nonuniform motion blur images by segmenting them into blurred regions and unblurred regions by using image statistics [16]. However, this method works only when the direction of blur is vertical or horizontal. In 2007, Cho et al. proposed using multiple blurred images that had been captured continuously [17]. An image registration method was then applied to estimate the PSF, but it required previous discernment of the blurred object and additional images. In 2011, Tai et al. proposed using a projective motion path that was recorded with additional hardware to replace the traditional planar blur kernel for image deblurring [14]. A projective motion path describes the movement of the camera in a three-dimensional (3D) model; therefore, it can restore nonuniform motion blur images for which the blur was caused by camera rotation. However, additional hardware was required to record the 3D blur kernel. Several nonuniform image deblurring methods with depth maps have been proposed to handle this problem. In 2012, Xu and Jia used a depth map to deblur blurred images with the same motion blur but different Gaussian blurs [22]. In 2014, Hu et al. proposed using depth estimation to deblur a nonuniform blur image [23]. The authors separated the blurred image into layers according to a depth map and then deblurred one layer while keeping the others fixed. This deblurring method treats every object at the same depth as having the same blur amount. However, if an object with a different blur amount lies in the same layer, deblurring fails, because the blur amounts of objects at the same depth are not always the same.

In this paper, we propose a novel image deblurring algorithm for nonuniform motion blur images. To locate the blurred parts, we measure the blur amount at each edge in the blurred image. The blur measurements are then propagated to form a uniform defocus map. We use the proposed uniform defocus map to segment the nonuniform blur image into multiple blurred and unblurred portions. Subsequently, we estimate the PSF of each blurred portion. All portions and their corresponding PSFs are entered as inputs to a fast deconvolution algorithm. Finally, an unblurred result is obtained.

The rest of this paper is organized as follows. Section 2 briefly reviews related work on nonuniform image deblurring. Section 3 presents the proposed method in detail. Section 4 reports experimental results that verify the proposed method. Finally, Section 5 concludes the paper.

2. Related Works

Nonuniform motion deblurring is an ill-posed problem, and many studies have been proposed to address it. In 2006, Levin removed nonuniform motion blur by separating a blurred image into unblurred regions and blurred regions by using image statistics [16]. The author located blurred and unblurred regions depending on the observation that the statistics of derivative filters in images are substantially changed by partial motion blur. The blur effect was assumed to result from motion at a constant velocity, so the expected derivative distributions were modeled as functions of the width of the blur kernel. This was effective when the direction of motion blur was vertical or horizontal or, in other words, when the blur kernel was one-dimensional. In 2007, Cho et al. proposed using multiple blurred images that were captured continuously to remove nonuniform motion blur [17]. These blurred images were captured for the same scene. The authors applied an image registration method to compute the offsets of a moving object in two sequential images. Once the moving object was found in the images, they calculated the movement of the moving object from the small displacement computed from the image pair. This movement was the estimated PSF of the moving object, and the estimated PSF was applied to recover an image of the blurred object. Using multiple images to estimate the PSF made the nonuniform deblurring problem more tractable. However, multiple sequential images are not always available. In 2011, Tai et al. proposed using a projective motion path, estimated using additional hardware, to deblur a blurred image [14]. The projective motion path is a model that records every movement of a camera through planar projective transformations (i.e., homographies) during the exposure time. A homography provides not only planar information but also rotational information for rotation blur, whereas the traditional 2D PSF provides only planar information. Therefore, homography-based models can successfully recover a motion blur image with rotational blur, but they require additional hardware to record the projective motion path. In 2014, Hu et al. proposed a nonuniform image deblurring approach that applies depth estimation to the deblurring of a nonuniform blur image [23]. The approach separates the blurred image into layers according to estimated depth information: every object with the same depth estimate is classified into the same depth layer. Hu et al. deblurred each layer while keeping the others fixed. The depth map provides crucial information for deblurring a nonuniform blur image. However, the deblurred result fails when objects with different blurs lie in the same depth layer.

Inspired by these nonuniform image deblurring methods, we sought to locate blurred objects precisely in nonuniform motion blur images. This is a vital task for deblurring spatially varying blurred images. We also sought to detect all types of motion blur objects in the blurred image automatically. In the next section, we demonstrate a noteworthy characteristic of blurred objects in nonuniform motion blur images, explain the dynamics of this characteristic, and then propose our method for solving this ill-posed image deblurring problem.

3. Proposed Method

As mentioned in the previous section, separating a nonuniform blur image into unblurred and blurred portions is an essential task for recovering nonuniform motion blur images. As shown in Figure 1, a nonuniform blur image contains partial motion blur: the falling ball is blurry, but the basket is not. We note that the ball and the basket are rigid objects. If a rigid object is blurred, all pixels within the object should suffer from the same blur. Once the blurred objects are found, we can perform the deblurring procedure for each blurred object; therefore, the nonuniform motion deblurring problem can be reduced to a shift-invariant image deconvolution problem. Through the proposed method, we seek to measure the blur amount of each object in a blurred image and then apply that blur amount to find the blurred objects in the image. Inspired by Bae and Durand [18], we use the distance between the maximal value and minimal value of the second derivative to define the blur amount for each pixel in the blurred object. Consider the blurred signals with varying sigma in Figure 2: a larger sigma yields a more blurred signal and a wider distance between the maximum and minimum. Consider also the two signals in Figure 3, the blue signal extracted from the car and the red signal extracted from the book. The blur amount of the blue signal in Figure 3(b) is greater than that of the red signal; correspondingly, the car is blurred and the book is not, as shown in Figure 3(a). That is, the distance between the maximum and minimum of the second derivative can be viewed as the blur amount of an object. Moreover, the blur amount is the same over an entire rigid object. From this fact, we unify the blur amount of an object by using a k-means clustering algorithm. The flowchart of the proposed method is shown in Figure 4.
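As a toy illustration of this blur measure (our own 1D construction, not code from the paper), the distance between the extrema of the second derivative of a blurred step edge grows with the blur sigma:

```python
# Blur amount as the distance between the maximum and minimum of the
# second derivative of a blurred step edge; the gap is roughly 2*sigma.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def blur_amount_1d(signal):
    d2 = np.gradient(np.gradient(signal))
    return abs(int(np.argmax(d2)) - int(np.argmin(d2)))

x = np.arange(200, dtype=float)
step = (x > 100).astype(float)              # ideal step edge
for sigma in (2.0, 6.0):
    blurred = gaussian_filter1d(step, sigma)
    print(sigma, blur_amount_1d(blurred))   # larger sigma -> wider distance
```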

3.1. Uniform Defocus Map

According to Elder and Zucker [24], edge regions that have notable frequency content are suitable for blur amount measurement. Step edges are the main edge type in natural images; therefore, we measure the blur amount at edge pixels in this paper. Mathematically, an ideal step edge can be formulated as follows:

$$f(x) = A u(x) + B,$$

where $u(x)$ is the step function, $A$ is the amplitude, and $B$ is the offset. A blurred step edge can then be defined as the result of convolving a step edge with the Gaussian function:

$$b(x) = f(x) \otimes g(x, \sigma),$$

where $b(x)$ is the blurred step edge, $g(x, \sigma)$ is the Gaussian function, $\otimes$ is the convolution operator, and $\sigma$ is the standard deviation of the Gaussian function.

3.1.1. Blur Amount Estimation

In the formulation of a blurred step edge, $\sigma$ represents the blur amount. We can calculate $\sigma$ by using a reblur method. A reblur of the blurred step edge is

$$b_r(x) = b(x) \otimes g(x, \sigma_0),$$

where $\sigma_0$ is the standard deviation of the reblur Gaussian function. Then, dividing the gradient magnitude of the blurred step edge by that of the reblurred step edge, we obtain

$$R(x) = \frac{\lvert \nabla b(x) \rvert}{\lvert \nabla b_r(x) \rvert} = \sqrt{\frac{\sigma^2 + \sigma_0^2}{\sigma^2}}\, \exp\!\left( \frac{x^2}{2(\sigma^2 + \sigma_0^2)} - \frac{x^2}{2\sigma^2} \right).$$

Let $R$ denote this ratio. At the edge location $x = 0$, $R$ attains its maximum value, and we have

$$R = \sqrt{\frac{\sigma^2 + \sigma_0^2}{\sigma^2}}.$$

Given the ratio $R$, the unknown $\sigma$ can be computed using

$$\sigma = \frac{\sigma_0}{\sqrt{R^2 - 1}}.$$

Once $\sigma$ is computed, we use it as the blur amount at the edge pixel. The blur measurement result is shown in Figure 5.
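A minimal sketch of this estimation step is given below; it assumes an edge mask computed elsewhere (e.g., by a Canny detector), and the function and parameter names are ours.

```python
# Reblur-based blur estimate at edge pixels: sigma = sigma0 / sqrt(R^2 - 1),
# where R is the ratio of gradient magnitudes before and after reblurring.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def sparse_blur_map(img, edges, sigma0=1.0, eps=1e-8):
    """img: float grayscale image; edges: boolean mask of edge pixels."""
    def grad_mag(a):
        return np.hypot(sobel(a, axis=0), sobel(a, axis=1))
    R = grad_mag(img) / (grad_mag(gaussian_filter(img, sigma0)) + eps)
    sigma = np.zeros_like(img)
    valid = edges & (R > 1.0)               # the ratio must exceed 1 at edges
    sigma[valid] = sigma0 / np.sqrt(R[valid] ** 2 - 1.0)
    return sigma                            # sparse: nonzero only at edges
```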

3.1.2. Blur Amount Refinement

After the blur amounts of edge pixels have been computed, we propagate the blur amount to nonedge pixels, for which blur amounts have not been computed. However, phenomena such as shadows and highlights may cause some outliers to appear in the measurement results, and these outliers may propagate incorrect results. We should remove these outliers from the sparse measurement results. Most pixels in the sparse measurement matrix are zero. Therefore, instead of using a cross-bilateral filter, such as that applied in [18, 20], we propose using a simple bilateral filter for our sparse blur measurements. The main idea is that only nonzero pixels are considered in the filtering procedure. The definition of the simple bilateral filter, which is applied only to nonzero pixels, is as follows:

$$\hat{d}(p) = \frac{1}{W(p)} \sum_{q \in \Omega(p),\, d(q) \neq 0} G_r\!\left( \lvert I(p) - I(q) \rvert \right) G_s\!\left( \lVert p - q \rVert \right) d(q),$$

with the normalization factor

$$W(p) = \sum_{q \in \Omega(p),\, d(q) \neq 0} G_r\!\left( \lvert I(p) - I(q) \rvert \right) G_s\!\left( \lVert p - q \rVert \right),$$

where $\hat{d}$ is the filtered result, $d$ is the sparse blur measurement to be filtered, $p$ denotes the coordinates of the current pixel to be filtered, $\Omega(p)$ is the window centered at $p$ whose member pixels must have nonzero values, $G_r$ is the range kernel of the Gaussian function for smoothing differences in intensities, and $G_s$ is the spatial kernel of the Gaussian function for smoothing differences in coordinates. The intermediate result is shown in Figure 6: the word on the roof is smoother after application of the simple bilateral filter.
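An illustrative, unoptimized implementation of this sparse bilateral filter follows; the window radius and kernel widths are placeholder values of ours.

```python
# Simple bilateral filter applied only to the nonzero pixels of the sparse
# blur map d, guided by the intensity image I.
import numpy as np

def sparse_bilateral(d, I, radius=5, sigma_r=0.1, sigma_s=3.0):
    out = np.zeros_like(d)
    H, W = d.shape
    ys, xs = np.nonzero(d)                  # filter only measured pixels
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        patch = d[y0:y1, x0:x1]
        yy, xx = np.mgrid[y0:y1, x0:x1]
        w = (np.exp(-(I[y0:y1, x0:x1] - I[y, x]) ** 2 / (2 * sigma_r ** 2))
             * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2)))
        w = w * (patch != 0)                # neighbors must be nonzero too
        out[y, x] = (w * patch).sum() / (w.sum() + 1e-8)
    return out
```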

3.1.3. Unify Blur Amount

The blur amounts of pixels that are located in the same rigid object are equal. Therefore, we use k-means clustering to unify the measurement results. The k-means algorithm minimizes the within-cluster sum of squares and thus clusters pixels with similar blur amounts together. Labels of the pixels in the edge regions are calculated by minimizing the following cost function:

$$\arg\min_{S} \sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert^2,$$

where $S = \{S_1, \ldots, S_k\}$ is the set of clusters, $k$ is the desired number of clusters, and $\mu_i$ is the mean of the points in $S_i$. The blur measurement result with k-means clustering is shown in Figure 6. Close-up views in Figure 6(b) show that the blur amount of the car in our measurement result is uniform, but the amount in Zhuo's edge map is not. We apply morphological operations to the refinement result, and we observe that k-means clustering successfully unifies the blur amount in a rigid object, as shown in Figure 7.
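A sketch of the unification step is shown below; the use of scikit-learn's KMeans is our implementation choice, since the paper does not name a library.

```python
# Unify blur amounts: cluster the nonzero (edge) estimates into k groups and
# replace each estimate by its cluster mean, so each rigid object gets one value.
import numpy as np
from sklearn.cluster import KMeans

def unify_blur(sigma_map, k=7):
    ys, xs = np.nonzero(sigma_map)
    vals = sigma_map[ys, xs].reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vals)
    unified = sigma_map.copy()
    for c in range(k):
        sel = labels == c
        unified[ys[sel], xs[sel]] = vals[sel].mean()
    return unified
```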

3.1.4. Blur Amount Propagation

To propagate the estimated blur amounts to the whole image, we apply the matting Laplacian interpolation method to our estimated blur measurements. The estimated blur amounts from the edge regions are propagated to the nonedge regions to form a defocus map. The cost function of the matting Laplacian interpolation method is formulated as follows:

$$E(d) = d^{T} L d + \lambda (d - \hat{d})^{T} D (d - \hat{d}), \quad (12)$$

where $d$ is the full defocus map in vector form, $\hat{d}$ is the vector form of the sparse defocus map, $L$ is the matting Laplacian matrix, $D$ is a diagonal matrix whose element is 1 at edge regions and 0 otherwise, and $\lambda$ is a parameter that balances fidelity to the sparse defocus map against smoothness of the interpolation. The defocus map can be obtained by minimizing (12), that is, by solving the sparse linear system $(L + \lambda D) d = \lambda D \hat{d}$ [25]. The proposed defocus map and Zhuo's defocus map are shown in Figure 8. In Zhuo's defocus map, shown in Figure 8(a), the estimated blur amounts differ across different parts of the blurred car; therefore, it is difficult to segment the blurred car. By contrast, in the proposed defocus map, shown in Figure 8(b), the estimated blur amount is uniform for the car. The key benefit of our uniform defocus map is that the blurred car can be easily segmented.
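Assuming the matting Laplacian $L$ has already been assembled as in [25], the propagation reduces to a single sparse linear solve, sketched here with our own function names.

```python
# Solve (L + lam * D) d = lam * D * d_hat for the full defocus map d,
# where D marks the pixels with known (nonzero) sparse estimates.
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def propagate(L, d_hat, lam=1e-4):
    """L: (n x n) matting Laplacian; d_hat: flattened sparse defocus map."""
    D = sp.diags((d_hat != 0).astype(float))   # 1 at edge pixels, 0 elsewhere
    return spsolve((L + lam * D).tocsc(), lam * (D @ d_hat))
```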

3.2. Blind Image Deconvolution

If the blurred object regions can be segmented accurately, the nonuniform blur problem can be reduced to a uniform blur problem. Hence, the blurred objects are used as inputs for uniform blur image deconvolution. We apply blind image deconvolution to recover the blurred object regions.

3.2.1. PSF Estimation

Mathematically, a blurred image can be modeled as $B = I \otimes K + N$, where $B$ is the blurred image, $\otimes$ is the convolution operator, $I$ is the latent unblurred image, $K$ is the PSF, and $N$ is the noise in the image. According to Bayes' theorem, this model can be represented as follows:

$$p(I, K \mid B) \propto p(B \mid I, K)\, p(I)\, p(K),$$

where $p(B \mid I, K)$ represents the likelihood and $p(I)$ and $p(K)$ denote the priors on the latent image and the PSF. In the PSF estimation step, we consider only the gradients of the latent image and the blurred image. The estimation can then be written as the following energy function:

$$E(K) = \sum_{\partial_{*}} \omega(\partial_{*}) \left\lVert \partial_{*} I \otimes K - \partial_{*} B \right\rVert^2 + \beta \lVert K \rVert^2,$$

where $\partial_{*}$ denotes the partial derivative operators in different directions, computed with the Sobel operator, and the weights $\omega(\partial_{*})$ and $\beta$ are preset parameters.
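One kernel-update step under this energy can be computed in closed form in the Fourier domain; the sketch below is our simplification (the cropping, nonnegativity, and normalization details vary across implementations).

```python
# Closed-form kernel update: K = F^-1( conj(F(dI)) F(dB) / (|F(dI)|^2 + beta) ),
# accumulated over Sobel derivatives in the x and y directions.
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import sobel

def estimate_psf(I_sharp, B, ksize=25, beta=5.0):
    num = np.zeros(B.shape, dtype=complex)
    den = np.zeros(B.shape)
    for axis in (0, 1):                     # derivatives in both directions
        FI, FB = fft2(sobel(I_sharp, axis=axis)), fft2(sobel(B, axis=axis))
        num += np.conj(FI) * FB
        den += np.abs(FI) ** 2
    K = np.fft.fftshift(np.real(ifft2(num / (den + beta))))
    cy, cx = K.shape[0] // 2, K.shape[1] // 2
    K = K[cy - ksize // 2: cy + ksize // 2 + 1,
          cx - ksize // 2: cx + ksize // 2 + 1]
    K = np.maximum(K, 0.0)                  # a PSF must be nonnegative
    return K / (K.sum() + 1e-8)             # ...and sum to 1
```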

We solve the preceding energy function iteratively to obtain an accurate PSF. To accelerate the PSF estimation process, we apply a shock filter before each PSF estimation step. The shock filter is formulated as follows:

$$I^{t+1} = I^{t} - \operatorname{sign}\!\left( \Delta I^{t} \right) \left\lVert \nabla I^{t} \right\rVert dt,$$

where $I^{t}$ is the image at the current iteration, $I^{t+1}$ is the image at the next iteration, $\Delta I^{t}$ is the map derived from the Laplacian operator at iteration $t$, $\nabla I^{t}$ is the gradient of $I^{t}$, and $dt$ is the time step.
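A compact sketch of this shock filter follows; the iteration count and time step are illustrative values.

```python
# Shock filter: I_{t+1} = I_t - sign(laplace(I_t)) * ||grad I_t|| * dt.
# It sharpens edges in the latent-image prediction before PSF estimation.
import numpy as np
from scipy.ndimage import laplace

def shock_filter(I, iters=10, dt=0.1):
    I = I.copy()
    for _ in range(iters):
        gy, gx = np.gradient(I)
        I = I - np.sign(laplace(I)) * np.hypot(gx, gy) * dt
    return I
```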

3.2.2. Fast Adaptive Deconvolution

When a sufficient PSF is obtained, a fast adaptive deconvolution method is used for the final deconvolution [26]. Equation (18) is the minimization problem of the fast adaptive deconvolution method:

$$\min_{x} \sum_{i} \left( \frac{\lambda}{2} \left( (x \otimes k)_i - b_i \right)^2 + \lvert (x \otimes f_1)_i \rvert^{\alpha} + \lvert (x \otimes f_2)_i \rvert^{\alpha} \right), \quad (18)$$

where $i$ is an index running through all pixels, $x$ is the latent image, $k$ is the PSF, and $b$ is the blurred image. In this paper, we use the value $\alpha = 2/3$, which is suggested by Krishnan and Fergus [26], and $f_1 = [1, -1]$ and $f_2 = [1, -1]^{T}$ are first-order derivative filters. We search for the $x$ that minimizes the reconstruction error $\lVert x \otimes k - b \rVert^2$, with the image prior $\lvert \cdot \rvert^{\alpha}$ preferring the correct sharp explanation.

However, the $\lvert \cdot \rvert^{\alpha}$ term with $\alpha < 1$ makes the optimization problem nonconvex and slow to solve directly. Using half-quadratic splitting, Krishnan's fast algorithm introduces two auxiliary variables $w_1$ and $w_2$ at each pixel to move the $(x \otimes f_j)_i$ terms outside the $\lvert \cdot \rvert^{\alpha}$ expression. Thus, (18) can be converted to the following optimization problem:

$$\min_{x, w} \sum_{i} \left( \frac{\lambda}{2} \left( (x \otimes k)_i - b_i \right)^2 + \frac{\beta}{2} \sum_{j=1,2} \left( (x \otimes f_j)_i - w_{j,i} \right)^2 + \sum_{j=1,2} \lvert w_{j,i} \rvert^{\alpha} \right), \quad (20)$$

where the quadratic term constrains $w_{j,i}$ to be close to $(x \otimes f_j)_i$ and $\beta$ is a control parameter that we vary during the iteration process. As $\beta$ becomes large, the solution of (20) converges to that of (18). This scheme, called alternating minimization [26], which we adopt here, is a common technique for image restoration. Minimizing (20) for a fixed $\beta$ is performed by alternating between two steps: solving for $w$ and for $x$, respectively.

To solve the $w$ subproblem, the input blurred image is first set as the initial $x$. Given a fixed $x$, finding the optimal $w$ reduces to the following per-pixel optimization problem:

$$w^{*} = \arg\min_{w} \left( \lvert w \rvert^{\alpha} + \frac{\beta}{2} (w - v)^2 \right),$$

where $v = (x \otimes f_j)_i$. For the case $\alpha = 2/3$, the $w^{*}$ satisfying the above equation is an analytical root of the following quartic polynomial:

$$w^4 - 3v w^3 + 3v^2 w^2 - v^3 w + \frac{8}{27 \beta^3} = 0.$$

To find and select the correct roots of the quartic polynomial, we adopt Krishnan's approach, as detailed in [26].

Then we solve the $x$ subproblem to obtain the latent image. Given the fixed $w$ from the previous step, dropping the $\lvert w \rvert^{\alpha}$ terms from (20) leaves the following quadratic optimization problem:

$$\min_{x} \sum_{i} \left( \frac{\lambda}{2} \left( (x \otimes k)_i - b_i \right)^2 + \frac{\beta}{2} \sum_{j=1,2} \left( (x \otimes f_j)_i - w_{j,i} \right)^2 \right),$$

which has a closed-form solution that can be computed in the Fourier domain [26]. By solving the two subproblems iteratively, we obtain the deblurred result.
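The sketch below assembles the two alternating steps. For brevity, its $w$ step uses the soft threshold that is exact for $\alpha = 1$; the $\alpha = 2/3$ case of [26] instead selects a root of the quartic above. The continuation schedule on $\beta$ and the default $\lambda$ are illustrative.

```python
# Half-quadratic splitting: alternate a per-pixel w step with an FFT x step,
# increasing beta so the relaxed problem approaches the original one.
import numpy as np
from numpy.fft import fft2, ifft2

def psf2otf(psf, shape):
    """Zero-pad a small filter to `shape` and circularly center it for the FFT."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for ax, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=ax)
    return fft2(pad)

def fast_deconv(B, K, lam=2000.0, beta=1.0, beta_rate=2.0, n_outer=8):
    f1, f2 = np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])
    FK, F1, F2 = psf2otf(K, B.shape), psf2otf(f1, B.shape), psf2otf(f2, B.shape)
    FB, den_prior = fft2(B), np.abs(F1) ** 2 + np.abs(F2) ** 2
    x = B.copy()                             # initialize x with the blurred image
    for _ in range(n_outer):
        v1 = np.real(ifft2(F1 * fft2(x)))    # v_j = x (*) f_j
        v2 = np.real(ifft2(F2 * fft2(x)))
        # w step (soft threshold; exact for alpha = 1)
        w1 = np.sign(v1) * np.maximum(np.abs(v1) - 1.0 / beta, 0.0)
        w2 = np.sign(v2) * np.maximum(np.abs(v2) - 1.0 / beta, 0.0)
        # x step: quadratic, solved in closed form in the Fourier domain
        num = (np.conj(F1) * fft2(w1) + np.conj(F2) * fft2(w2)
               + (lam / beta) * np.conj(FK) * FB)
        x = np.real(ifft2(num / (den_prior + (lam / beta) * np.abs(FK) ** 2)))
        beta *= beta_rate                    # continuation on beta
    return x
```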

In Figure 8, we show a comparison of the proposed method and Zhuo's method. The blur amount should be the same over a rigid object, but the blur amount of the car is nonuniform in Zhuo's defocus map, which is shown in Figure 8(a). By contrast, the blur amount of the car in the proposed defocus map is uniform. Thus, the car can be successfully detected as a blurred object because of its uniform blur amount. The deblurred result is shown in Figure 9, and the estimated movement of the car is shown in Figure 9(d). In Figure 9(c), compared with the blurred input, the word on the roof of the car in the deblurred image can be clearly discerned. The deblurred result shows a satisfactory view of the nonuniform motion blur image. In the next section, we report experimental results that demonstrate the effectiveness of the proposed nonuniform deblurring algorithm.

4. Experiments and Discussions

In this paper, we mainly focus on the nonuniform motion blur problem caused by moving objects. The proposed method was implemented in Visual C++.NET. The testing environment was a personal computer running the 64-bit version of Windows 7 with an AMD Phenom II X4 945 3.4-GHz CPU and 8 GB of RAM. To show the effectiveness of the proposed method, we compared our results with those of four state-of-the-art single image deblurring algorithms. We used nonuniform motion blur images as inputs. These test images were taken with a high-end DSLR using a long exposure.

4.1. Comparison with Other Single Image Deblurring Methods

The first test image shows a basketball in the process of falling. We sought to deblur the blurred basketball in this image. This image was a regular test image; it did not show a complicated scene. As shown in Figure 10, the motion blur is nonuniform: the basketball is blurry, but the court is not. This nonuniformity makes the deblurring problem more difficult to solve. From Figures 10(b) and 10(e), the deblurring methods of Fergus and Levin favored a blurred result because of the nonuniform motion blur. The basketball in the deblurred results of Shan's and Cho's methods is sharper, but ringing artifacts appear around the lines of the basketball court. The proposed method segments the blurred object and then deblurs it. As a result, the ringing artifacts around the court lines do not appear, and the blurred basketball can be recovered.

4.2. Result from a Complicated Scene

We next used a different nonuniform blur test image that shows a complicated scene, namely, a hoopoe standing amid weeds and hay. The weeds are luxuriant, and it is not easy to discern the hoopoe, as shown in Figure 11(a). Figures 11(b) and 11(e) show the gray level results of k-means clustering with different values of $k$. A comparison of the two clustering results shows that the hoopoe in Figure 11(b) was not well described: the foot of the hoopoe was classified as part of the weeds because of the complicated texture. For small values of $k$, k-means clustering does not suffice to describe complicated scenes. By contrast, Figure 11(h) shows that the blurred bird, which has a single blur amount throughout, was accurately described when the algorithm used a larger value of $k$. Therefore, for an unusually complicated scene, we can increase the value of $k$ to attain a superior result. In our experiments, we used $k = 7$ for the regular test images and $k = 12$ for the test image of the complicated scene. Comparing Figures 11(h) and 11(i), the proposed uniform defocus map describes the blur amount precisely, but Zhuo's defocus map fails to describe the blur amount information.

The parameter $\lambda$ balances fidelity to the sparse defocus map against smoothness of the interpolation. When a larger $\lambda$ is used, the propagation result fits the given blur amount estimation map more closely; when $\lambda$ is smaller, the propagation result fits the original image more closely. The results are shown in Figure 12. We chose the fixed value $\lambda = 0.0001$ for all experiments, so that a soft constraint is placed on the estimated defocus map to further refine small errors in our blur estimation.

4.3. Robustness

To verify the robustness of the proposed method for nonuniform motion blur images, we used a number of real images with various moving objects and various degrees of blur. All test images were taken with a high-end DSLR using a long exposure. For each blurred image, Zhuo's defocus map, the proposed uniform defocus map, and a k-means clustering result are presented for comparison. Figure 13 shows a man waving his hand; the waving hand is the blurred object. From Figure 13(e), the blur amount of the blurred object is uniform in the proposed map. By contrast, in Figure 13(d), the blur amount varies within the blurred object; thus, the blur amount in the hand is not uniform. This nonuniformity complicates the segmentation of the blurred hand. Once the blurred hand had been located, we estimated its blur kernel. The estimated PSF in Figure 13(f) shows that the hand of this man was waving. Some ringing occurred around the wrist of the man because of the influence of the gradient intensity of the shadow. The next test image shows two students standing together at the same depth, as shown in Figure 14. The student on the left moved from right to left, but the other stood still. The proposed defocus map is shown in Figure 14(e): the blur amount of the moving student is brighter and uniform, whereas the blur amount of the motionless student is darker. Compared with the proposed defocus map, the blur amount in Zhuo's defocus map was affected by color and texture; therefore, it was not uniform for the moving object. As mentioned previously, a nonuniform defocus map cannot separate the blurring precisely. However, the blur amount in our proposed defocus map was uniform. Therefore, as shown in Figure 14(f), the graph on the T-shirt of the moving student can be accurately recovered. Next, we used the image "Ball," which shows a sloped direction of motion blur and a small moving object. The deblurred result is shown in Figure 15. From Figure 15(a), it can be seen that the ball was falling from the top left to the bottom right; this motion caused the ball to appear blurred. A comparison of the proposed uniform defocus map with Zhuo's defocus map shows that the blur amount was uniform for the blurred object in the proposed map, but Zhuo's defocus map failed to provide uniformity. Figure 15(f) shows the estimated PSF corresponding to the falling ball; it precisely shows that the ball was falling from the top left to the bottom right.

4.4. Limitation

However, we note that the shadow under the ball was detected as a blurred object, as shown in Figure 15(d). The reason for this phenomenon is that the border of the shadow region has a smooth intensity gradient. From Figure 16(c), we can see that the signal shown in Figure 16(b) is similar to the blurred signal shown in Figure 3. Therefore, the shadow region was recognized as a blurred object, and in this case a false deblurring result was generated.

5. Conclusions

In this paper, we propose a novel image deblurring algorithm for nonuniform motion blur. Because a rigid object has a consistent blur amount, we propose a uniform defocus map for image segmentation. We segment the blurred image into blurred regions and unblurred regions by using the proposed uniform defocus map. Each blurred region is analyzed to estimate its PSF. Each blurred region and its PSF are then entered as inputs to a uniform motion blur image deconvolution algorithm. Finally, an unblurred image is obtained. The experiments showed that our deblurred results had satisfactory visual quality for various types of motion blur. However, for optimal results, manual settings were required for several parameters. Furthermore, shadows tended to cause the algorithm to detect blurred objects incorrectly.

A possible future research direction is the automatic deblurring of spatially varying motion blur images. In future work, for automatic image deblurring, it may be interesting to classify the blurred regions correctly for blurred images in which shadows exist. This is expected to require an effective classification method for selecting blurred objects correctly.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This research is supported by the Ministry of Science and Technology, Taiwan, under Grants MOST 103-2221-E-005-073 and MOST 104-2221-E-005-090.