Abstract

One of the most common defects in digital photography is motion blur caused by camera shake. Shift-invariant motion blur can be modeled as a convolution of the true latent image and a point spread function (PSF) with additive noise. The goal of image deconvolution is to reconstruct a latent image from a degraded image; however, ringing artifacts inevitably arise in the deconvolution stage. To suppress these undesirable artifacts, regularization-based methods using natural image priors have been proposed to overcome the ill-posedness of the deconvolution problem. When the estimated PSF is somewhat erroneous or the PSF size is large, conventional regularization strong enough to reduce ringing leads to a loss of image details. This paper focuses on nonblind deconvolution with adaptive regularization, which preserves image details while suppressing ringing artifacts. The idea is to control the regularization weight adaptively according to the image local characteristics. We adopt elaborated reference maps that indicate the edge strength so that textured and smooth regions can be distinguished, and we then impose an appropriate constraint on the optimization process. Experimental results on both synthesized and real images show that our method restores the latent image with much less ringing while preserving sharp edges.

1. Introduction

Image blurring is one of the prime causes of poor image quality in digital photography. One main cause of blurry images is motion blur caused by camera shake. If the motion blur is linear shift-invariant, the blurring process can be generally modeled as a convolution of the true latent image and a point spread function (PSF) with additive noise:

$b = k \otimes l + n,$   (1)

where $b$ is the degraded image, $l$ is the true latent image, $k$ is the PSF (a motion blur kernel describing the trace of the sensor), $n$ is the additive noise introduced during image acquisition, and $\otimes$ denotes the convolution operator. The goal of image deblurring is to reconstruct the latent image $l$ from the degraded image $b$.
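To make the degradation model concrete, the following Python sketch synthesizes a blurred observation from a sharp grayscale image; the box kernel and noise level are illustrative placeholders, not the camera-shake PSFs used later in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_blur(latent, psf, noise_sigma=0.01, seed=0):
    """Simulate b = k (*) l + n for a grayscale image with values in [0, 1]."""
    rng = np.random.default_rng(seed)
    blurred = fftconvolve(latent, psf, mode="same")              # k (*) l
    blurred += noise_sigma * rng.standard_normal(latent.shape)   # + n
    return np.clip(blurred, 0.0, 1.0)

# Example: a normalized 9x9 box kernel stands in for a real motion-blur PSF.
l = np.random.default_rng(1).random((128, 128))
k = np.ones((9, 9)) / 81.0
b = synthesize_blur(l, k)
```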

To remove motion blur, we need to estimate the PSF and restore a latent image through deconvolution. Existing single-image deblurring methods can be categorized into two classes. If both the PSF and the latent image are unknown, the challenging problem is called blind deconvolution. Although great progress has been achieved in recent years [1–4], the blind case is a severely ill-posed problem because the number of unknowns exceeds the number of observed data. In contrast, if the PSF is assumed to be known or computed in other ways, the problem reduces to estimating the latent image alone; this is called nonblind deconvolution. However, the nonblind case is still an ill-conditioned problem owing to the presence of image noise. Slight mismatches between the PSF used by the method and the true blurring PSF also lead to poor deblurring results.

Unfortunately, the deconvolved result usually contains unpleasant artifacts even if the PSF is exactly known or well estimated. The main visually disturbing artifact is ringing that appears around strong edges. Because the PSF is often band-limited with a sharp frequency cutoff, its frequency response contains zero or near-zero values; thus, direct inversion of the PSF causes large amplification of noise at those frequencies.

Since the estimated PSF is usually inaccurate and the real blurred image is also noisy, many underconstrained factors affect the amplification of ringing. To reduce these undesirable artifacts, various regularization techniques using image priors have been proposed to improve nonblind deconvolution [5–10]. The most commonly used priors encourage the image responses to a set of derivative filters to follow a heavy-tailed distribution. However, regularization strong enough to reduce severe artifacts destroys the image details in the deconvolved result, while weak regularization preserves image details well but does not remove artifacts effectively. The challenge of this work is how to balance detail recovery and ringing suppression.

In this paper, we focus on nonblind deconvolution with adaptive regularization that controls the regularization strength according to the image local characteristics. This strategy effectively reduces ringing artifacts in smooth regions while simultaneously preserving image details in textured regions. First, we estimate the reference maps in a scale space: at the coarsest scale we can reliably extract the main strong edges, and by comparing the texture information of the multiscale results we construct an elaborated reference map that distinguishes the edge properties of textured and smooth regions. Second, the regularization strength is controlled adaptively according to these maps. We then apply appropriate regularization with a hyper-Laplacian prior to the image deconvolution so that sharp edges can be recovered. To solve the resulting optimization problem, we adopt Krishnan's fast algorithm [7], which operates in the frequency domain using fast Fourier transforms (FFTs). The experimental results show that our nonblind deconvolution produces latent images with much less ringing while preserving sharp edges.

2. Regularization Formulation

Assuming that the PSF is known or computed in other ways, our method focuses on recovering the sharp image $l$ from the blurred image $b$. To reduce undesirable artifacts, most advanced regularization techniques employ natural image priors in the gradient domain. We now introduce the regularization formulation. From a probabilistic perspective, we seek the maximum a posteriori (MAP) estimate of the latent image in a Bayesian framework:

$l^* = \arg\max_{l} p(l \mid b) = \arg\max_{l} p(b \mid l)\, p(l),$   (2)

where $p(b \mid l)$ represents the likelihood and $p(l)$ denotes the prior on the latent image. Maximizing $p(l \mid b)$ is equivalent to minimizing the cost $E(l) = -\log p(l \mid b)$:

$E(l) = -\log p(b \mid l) - \log p(l).$   (3)

The MAP solution of $l$ can be obtained by minimizing the cost function above. We now define these two terms. The likelihood of the degraded image $b$ given the latent image $l$ is based on the common blur model $b = k \otimes l + n$. Assuming that the noise is modeled as a set of independent and identically distributed (i.i.d.) random variables over all pixels, each of which follows a Gaussian distribution with variance $\sigma^2$, we can express the likelihood as

$p(b \mid l) \propto \prod_{x \in \Omega} \exp\!\left( -\frac{\left( (k \otimes l)(x) - b(x) \right)^2}{2\sigma^2} \right).$   (4)

Furthermore, recent research in natural image statistics shows that image gradients follow a heavy-tailed distribution. Such distributions are well modeled by a hyper-Laplacian prior, which has proven effective for the deblurring problem. In this paper, we utilize the hyper-Laplacian prior to regularize the solution; it can be modeled as

$p(l) \propto \prod_{x \in \Omega} \exp\!\left( -\lambda \sum_{j=1}^{2} \left| (d_j \otimes l)(x) \right|^{\alpha} \right),$   (5)

where $\alpha$ is a positive exponent set in the range $[0.5, 0.8]$ as suggested by Krishnan and Fergus [7]; in this work we use the single value $\alpha = 2/3$ throughout. The filters $d_1$ and $d_2$ denote the simple horizontal and vertical first-order derivative filters. It can also be useful to include second-order derivative filters or more sophisticated filters. With the Gaussian noise likelihood and the hyper-Laplacian image prior (and the noise variance absorbed into the weight $\lambda$), (3) can be expressed as the following minimization problem:

$l^* = \arg\min_{l} \sum_{x \in \Omega} \left( \left( (k \otimes l)(x) - b(x) \right)^2 + \lambda \sum_{j=1}^{2} \left| (d_j \otimes l)(x) \right|^{\alpha} \right),$   (6)

where $x$ is an index running through all pixels, $\Omega$ is the set of all pixels in the image, and the weighting coefficient $\lambda$ controls the strength of the regularization term. For simplicity, $d_1$ and $d_2$ are the two first-order derivative filters. We search for the $l$ which minimizes the reconstruction error $k \otimes l - b$, with the image prior favoring the correct sharp explanation.
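As a minimal illustration of (6), the sketch below evaluates the regularized cost for a candidate latent image; the filter kernels, weight, and exponent are stand-in values rather than the settings used in our experiments.

```python
import numpy as np
from scipy.signal import fftconvolve

D1 = np.array([[1.0, -1.0]])    # horizontal first-order derivative filter d1
D2 = np.array([[1.0], [-1.0]])  # vertical first-order derivative filter d2

def deconv_cost(latent, blurred, psf, lam=2e-3, alpha=2.0 / 3.0):
    """Evaluate the cost of (6): data term plus hyper-Laplacian gradient prior."""
    residual = fftconvolve(latent, psf, mode="same") - blurred
    data_term = np.sum(residual ** 2)
    prior_term = sum(np.sum(np.abs(fftconvolve(latent, d, mode="same")) ** alpha)
                     for d in (D1, D2))
    return data_term + lam * prior_term
```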

However, analyzing the formulation in (6), the conventional regularization above applies the weighting coefficient $\lambda$ to all pixels with the same strength. When the estimated PSF is inaccurate or the PSF size is large, a strong regularization weight is usually required to suppress ringing around the edges, but it also destroys image details in the deconvolved result; this problem is unavoidable since perfect PSF estimation is impossible. Conversely, a weak regularization weight preserves major details well but does not remove artifacts effectively.

3. The Proposed Method

To balance detail recovery and ringing suppression, the main idea of our approach is to control the regularization weight adaptively according to the image local characteristics. This means that we need strong regularization for the smooth regions and weak regularization for the sharp edges. Our estimated maps further take into account the edge properties of the textured regions. In this section, we explain each step of our algorithm in detail, including how to estimate the reference map and how to perform adaptive regularization in the deconvolution stage. Finally, we describe an additional improvement that yields better results. Figure 1 shows the overall process of our nonblind deconvolution method.

3.1. Reference Map Estimation

Inspired by the local constraint idea [8, 9, 11], the reference map is designed to classify the smooth regions and the different edge regions correctly. The reference map is used for deconvolution with adaptive regularization: it supplies local image-characteristic information to control the local weight of the deblurring. In fact, the classified textured regions contain both the main strong edges and some smaller or fainter edges. Compared with the smaller or fainter edges, the extracted main edges are the important visible regions, and they need weaker regularization to preserve the sharp features. In addition, we should apply strong regularization to the classified smooth regions to reduce the most noticeable artifacts, since ringing artifacts usually propagate in smooth areas.

Based on these observations, we estimate the reference maps in a multiscale space. We first build a pyramid of the full-resolution input blurred image using bilinear downsampling. Our goal is to combine different edge information from coarse to fine layers, so we perform the map estimation at each scale and distinguish its different regions. At the coarsest scale, we can reliably extract the main strong edges of the image; at each finer scale, we gradually extract more and more small edge details. Finally, all maps estimated at the different scales are combined to form one elaborated reference map. The estimation is guided toward a good map by concentrating on the main edges of the input image and progressively dealing with smaller and smoother details. However, it is difficult to obtain correct edge information directly from the blurred image, so the map is renewed from the deconvolved result at each iteration.

We now describe how to produce and combine the estimated reference maps. The initial reference map is estimated from the input blurred image. Since a locally smooth region containing no edge information remains smooth after blurring, whereas the edge areas of the blurred image are visibly affected, we compute an edge strength measure and use a predefined threshold to classify the different regions. The edge strength at pixel $x$ is defined as

$r(x) = \frac{1}{N_W} \sum_{y \in W(x)} \left( b(y) - \mu_W(x) \right)^2,$   (7)

where $r(x)$ denotes the edge strength response at pixel $x$ on the observed image $b$, $W(x)$ is the window whose center is located on $x$, $\mu_W(x)$ is the mean intensity within $W(x)$, and $N_W$ is the total number of pixels in the default window. If the computed edge strength value is smaller than a predefined threshold $\tau$ (set empirically in our experiments), we regard the center pixel as belonging to the smooth region, that is, $M(x) = 1$:

$M(x) = \begin{cases} 1, & r(x) < \tau \ (\text{smooth region}), \\ 0, & \text{otherwise} \ (\text{edge or textured region}). \end{cases}$   (8)
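A minimal sketch of the edge-strength test in (7) and (8), assuming the local-variance form given above; the window size and threshold below are placeholders rather than our experimental settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def binary_smooth_map(image, window=7, tau=1e-3):
    """Return the binary map of (8): 1 for smooth pixels, 0 for edge/texture."""
    mean = uniform_filter(image, size=window)
    edge_strength = uniform_filter(image ** 2, size=window) - mean ** 2  # eq. (7)
    return (edge_strength < tau).astype(np.float64)
```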

Since the textured regions classified by (7) and (8) also contain both the main strong edges and some smaller edges, we aim to further refine the estimated map. The improved extraction method includes the following steps. First, we build a pyramid of the blurred image. At each scale $s$, the edge strength is computed and the resulting binary map is upsampled to full resolution; this step produces $S$ binary maps. Finally, we sum the map intensities at every pixel location and normalize. The new reference map obtained after this normalization is shown in Figure 2(b): the smooth region appears as the set of all white pixels, the strongest edges are indicated by the darkest pixels, and the remaining gray-level pixels correspond to the smaller or fainter edges. Hence, we define this multiscale extraction step on the observed image as

$M_{\mathrm{ref}}(x) = \frac{1}{S} \sum_{s=1}^{S} U\!\left(M_s\right)(x),$   (9)

where $M_s$ is the binary map at the $s$-th level of the image pyramid computed by (8), $U(\cdot)$ upsamples a map to the full resolution, and $S$ denotes the total number of scale layers used in our implementation. In (9), the final map is obtained by summing the intensity values of the upsampled maps and normalizing the result.
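The multiscale combination of (9) could be sketched as follows; the number of scales, window size, and threshold are illustrative assumptions, and the per-scale binary map comes from the previous sketch.

```python
import numpy as np
from scipy.ndimage import zoom

def multiscale_reference_map(image, scales=3, window=7, tau=1e-3):
    """Average the upsampled per-scale binary maps into one graded map, as in (9).

    Reuses binary_smooth_map() from the previous sketch.
    """
    h, w = image.shape
    accum = np.zeros((h, w))
    for s in range(scales):
        level = zoom(image, 0.5 ** s, order=1) if s > 0 else image
        binary = binary_smooth_map(level, window, tau)
        accum += zoom(binary, (h / binary.shape[0], w / binary.shape[1]), order=1)
    return accum / scales   # near 1: smooth (white); near 0: main edges (dark)
```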

We gradually recover more and more image details with an iterative deconvolution algorithm, which refines the result until convergence. Obviously, the image deconvolved with the initial adaptive regularization shows much better edge information than the input blurred image does. Thus, in the subsequent reference map estimations, we use the deconvolved result from the previous iteration to distinguish the different edge regions elaborately, adopting the same criterion to extract the map. The second estimated map is shown in Figure 2(d). The map is renewed from the latest deconvolved result after each iteration.

3.2. Image Deconvolution with Adaptive Regularization

Now we seek the MAP solution for the latent image in the deconvolution stage. The limitation of conventional regularization is that the same value of the weighting coefficient $\lambda$ is applied to all pixels. To overcome this fault and balance the image quality, our method applies an appropriate regularization weight according to the local characteristics, so the adaptive regularization is performed based on the estimated reference map during each iteration. Based on this idea, we simply modify (6) as follows:

$l^* = \arg\min_{l} \sum_{x \in \Omega} \left( \left( (k \otimes l)(x) - b(x) \right)^2 + \lambda(x) \sum_{j=1}^{2} \left| (d_j \otimes l)(x) \right|^{\alpha} \right),$   (10)

where the weighting coefficient $\lambda(x)$ is adjusted based on the estimated reference map. This provides a basis for adaptively suppressing ringing effects in different regions. The effect of adaptive regularization on the deconvolution result is shown in Figure 3; in comparison, our result exhibits sharper image detail and fewer artifacts.
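One simple way to realize the spatially varying weight $\lambda(x)$ in (10) is to quantize the graded reference map into a few edge classes and assign a geometrically decreasing weight to each, in the spirit of the parameter schedule of Section 4.1; the numeric values below are hypothetical placeholders.

```python
import numpy as np

def adaptive_lambda(ref_map, lam_max=4e-3, ratio=0.5, levels=4):
    """Map the graded reference map (1 = smooth, 0 = main edges) to lambda(x)."""
    # Bin pixels by edge strength: bin 0 = smooth, last bin = main edges.
    bins = np.clip(((1.0 - ref_map) * levels).astype(int), 0, levels - 1)
    # Smooth regions get lam_max; each stronger edge class gets lam_max * ratio^bin.
    return lam_max * (ratio ** bins)
```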

However, the use of sparse distributions with $\alpha < 1$ makes the optimization problem nonconvex, and solving it directly becomes slow. Using a half-quadratic splitting, Krishnan's fast algorithm [7] introduces two auxiliary variables $w_1(x)$ and $w_2(x)$ at each pixel to move the $d_j \otimes l$ terms outside the $|\cdot|^{\alpha}$ expression. Thus, (10) can be converted to the following optimization problem:

$\min_{l, w} \sum_{x \in \Omega} \left( \left( (k \otimes l)(x) - b(x) \right)^2 + \lambda(x) \sum_{j=1}^{2} \left| w_j(x) \right|^{\alpha} + \frac{\beta}{2} \sum_{j=1}^{2} \left( w_j(x) - (d_j \otimes l)(x) \right)^2 \right),$   (11)

where the $\frac{\beta}{2}$ term enforces the constraint $w_j \approx d_j \otimes l$ and $\beta$ is a control parameter that we vary during the iteration process. As the parameter $\beta$ becomes large, the solution of (11) converges to that of (10). This scheme, also called alternating minimization [12], is a common technique for image restoration. Minimizing (11) for a fixed $\beta$ can be performed by alternating two steps, in which we solve for $w$ and $l$, respectively.

One subproblem is to solve for $w$ given $l$, which is called the $w$-subproblem. First, the initial $l$ is set to the input blurred image $b$. Given a fixed $l$, finding the optimal $w$ reduces to the following per-pixel optimization problem:

$w^* = \arg\min_{w} \left( \lambda(x) \left| w \right|^{\alpha} + \frac{\beta}{2} \left( w - v \right)^2 \right),$   (12)

where $v = (d_j \otimes l)(x)$. For convenience of calculation, the factor $\lambda(x)$ is folded into $\beta$ (that is, $\beta$ is replaced with $\beta/\lambda(x)$), which does not affect the solution. In particular, for the case $\alpha = 2/3$, the $w^*$ satisfying (12) is a root of the following quartic polynomial:

$w^4 - 3v\,w^3 + 3v^2 w^2 - v^3 w + \frac{8}{27\beta^3} = 0.$   (13)

To find and select the correct roots of this quartic polynomial, we adopt Krishnan's approach, as detailed in [7]; they numerically solve (12) for all pixels using a lookup table to find $w^*$.
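A simplified numerical stand-in for this per-pixel step is shown below: instead of selecting quartic roots analytically, it builds a lookup table for $w^*(v)$ by brute-force minimization of (12) (with $\lambda$ folded into $\beta$) and applies it elementwise. The grid ranges and resolutions are arbitrary choices.

```python
import numpy as np

def w_lookup_table(beta, alpha=2.0 / 3.0, v_max=3.0, samples=4096, grid=2001):
    """Tabulate w*(v) = argmin_w |w|^alpha + (beta/2)(w - v)^2 for v >= 0."""
    v = np.linspace(0.0, v_max, samples)
    w = np.linspace(0.0, v_max, grid)[:, None]          # minimizer lies in [0, v]
    cost = np.abs(w) ** alpha + 0.5 * beta * (w - v[None, :]) ** 2
    return v, w[np.argmin(cost, axis=0), 0]

def solve_w(v_field, beta, alpha=2.0 / 3.0):
    """Apply the lookup table per pixel to v = (d_j (*) l)(x), using odd symmetry."""
    v_max = max(3.0, float(np.abs(v_field).max()))
    v_tab, w_tab = w_lookup_table(beta, alpha, v_max=v_max)
    return np.sign(v_field) * np.interp(np.abs(v_field), v_tab, w_tab)
```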

Second, we briefly describe the $l$-subproblem and its straightforward solution. Given a fixed value of $w$ from the previous iteration, we obtain the optimal $l$ from the following optimization problem, obtained from (11):

$l^* = \arg\min_{l} \sum_{x \in \Omega} \left( \left( (k \otimes l)(x) - b(x) \right)^2 + \frac{\beta}{2} \sum_{j=1}^{2} \left( w_j(x) - (d_j \otimes l)(x) \right)^2 \right).$   (14)

The $l$-subproblem can be optimized by setting the derivative of the cost function to zero; since (14) is quadratic in $l$, the optimal $l$ satisfies the linear system

$\left( K^{T} K + \frac{\beta}{2} \sum_{j=1}^{2} D_j^{T} D_j \right) l = K^{T} b + \frac{\beta}{2} \sum_{j=1}^{2} D_j^{T} w_j,$   (15)

where, for brevity, $K$, $D_1$, and $D_2$ denote the convolution matrices of $k$, $d_1$, and $d_2$. Assuming circular boundary conditions, (15) can be solved with 2D FFTs, which diagonalize the convolution matrices $K$, $D_1$, and $D_2$, giving the optimal $l$ directly:

$l^* = \mathcal{F}^{-1}\!\left( \frac{ \overline{\mathcal{F}(k)} \circ \mathcal{F}(b) + \frac{\beta}{2} \sum_{j=1}^{2} \overline{\mathcal{F}(d_j)} \circ \mathcal{F}(w_j) }{ \overline{\mathcal{F}(k)} \circ \mathcal{F}(k) + \frac{\beta}{2} \sum_{j=1}^{2} \overline{\mathcal{F}(d_j)} \circ \mathcal{F}(d_j) } \right),$   (16)

where $\overline{\,\cdot\,}$ denotes the complex conjugate, $\circ$ denotes component-wise multiplication, and the division in (16) is also component-wise. $\mathcal{F}$ and $\mathcal{F}^{-1}$ represent the Fourier and inverse Fourier transforms, respectively. Solving (16) requires only three FFTs at each iteration since the FFTs of $k$, $b$, $d_1$, and $d_2$ can be precomputed.
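A sketch of this Fourier-domain $l$-update is given below, assuming circular boundary conditions; psf_to_otf is a small helper defined here (not part of the paper) that pads and centers a small kernel so that its FFT matches the image grid.

```python
import numpy as np

def psf_to_otf(psf, shape):
    """Zero-pad a small kernel to the image size and shift its center to (0, 0)."""
    otf = np.zeros(shape)
    kh, kw = psf.shape
    otf[:kh, :kw] = psf
    otf = np.roll(otf, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(otf)

def update_l(blurred, psf, w1, w2, beta):
    """Solve the quadratic l-subproblem in the Fourier domain, cf. eq. (16)."""
    shape = blurred.shape
    K = psf_to_otf(psf, shape)
    D1 = psf_to_otf(np.array([[1.0, -1.0]]), shape)
    D2 = psf_to_otf(np.array([[1.0], [-1.0]]), shape)
    numer = (np.conj(K) * np.fft.fft2(blurred)
             + 0.5 * beta * (np.conj(D1) * np.fft.fft2(w1)
                             + np.conj(D2) * np.fft.fft2(w2)))
    denom = np.abs(K) ** 2 + 0.5 * beta * (np.abs(D1) ** 2 + np.abs(D2) ** 2)
    return np.real(np.fft.ifft2(numer / denom))
```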

The nonconvexity of the optimization problem arises from the use of a hyper-Laplacian prior with $\alpha < 1$; the splitting approach above makes the nonconvex part separable over pixels. We now summarize the alternating minimization scheme. As described above, we minimize (11) by alternating the $w$- and $l$-subproblems before increasing the value of $\beta$ and repeating. At the beginning of each iteration, we first compute $w$ given the initial or previously deconvolved $l$ by minimizing (12). Then, given this fixed value of $w$, we minimize (14) to find the optimal $l$. The iteration process stops when the parameter $\beta$ is sufficiently large. In practice, starting with a small value $\beta_0$, we scale it by an integer power at each iteration until it exceeds some fixed value $\beta_{\max}$.
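Putting the two subproblems together, the outer loop might look like the sketch below, which reuses solve_w() and update_l() from the earlier sketches; the $\beta$ schedule values are placeholders, not the settings of Section 4.1.

```python
import numpy as np
from scipy.signal import fftconvolve

def deconvolve(blurred, psf, lam_map, beta0=1.0, beta_rate=2.0, beta_max=256.0,
               alpha=2.0 / 3.0):
    """Alternating minimization of (11): w-step, l-step, then increase beta."""
    d1 = np.array([[1.0, -1.0]])
    d2 = np.array([[1.0], [-1.0]])
    latent = blurred.copy()                 # the initial l is the blurred image
    beta = beta0
    while beta <= beta_max:
        v1 = fftconvolve(latent, d1, mode="same")
        v2 = fftconvolve(latent, d2, mode="same")
        w1, w2 = np.empty_like(v1), np.empty_like(v2)
        for lam in np.unique(lam_map):      # fold lambda(x) into beta per level
            m = lam_map == lam
            w1[m] = solve_w(v1[m], beta / lam, alpha)
            w2[m] = solve_w(v2[m], beta / lam, alpha)
        latent = update_l(blurred, psf, w1, w2, beta)   # Fourier-domain l-step
        beta *= beta_rate
    return latent
```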

Finally, we mention one last improvement. In the blurring process, the convolution operator makes use not only of the image inside the field of view (FOV) of the given observation but also of part of the scene bordering it. The part outside the FOV is not available to the deconvolution process. When any algorithm performs deconvolution in the Fourier domain, this missing information causes artifacts at the image boundaries. Because the FFTs assume periodicity of the data, the missing pixels are effectively taken from the opposite side of the image when the FFTs are performed; since these wrapped-around values generally do not coincide with the missing ground-truth data, boundary artifacts arise, which poses a difficulty for various restoration methods.

To reduce the visual artifacts caused by this boundary value problem, we use Liu's approach, as detailed in [13]. The basic concept is to expand the blurred image such that the intensity and gradient are maintained at the border between the input image and the expanded block. This algorithm suppresses the ripples near the image border. It does not require extra assumptions on the PSF and can be adopted by any FFT-based restoration method.
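Liu's intensity- and gradient-preserving expansion [13] is not reproduced here; as a much simpler stand-in, one can pad the blurred image before the Fourier-domain steps and crop the restored result afterward, which only mitigates, rather than removes, wrap-around artifacts.

```python
import numpy as np

def pad_for_fft(blurred, psf):
    """Reflectively pad the observation before FFT deconvolution; return a cropper."""
    ph, pw = psf.shape
    padded = np.pad(blurred, ((ph, ph), (pw, pw)), mode="reflect")

    def crop(restored):
        return restored[ph:-ph, pw:-pw]

    return padded, crop
```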

4. Experimental Results

4.1. Parameter Settings

Since we want to suppress ringing in the extracted smooth regions while avoiding suppression of sharp edges, the regularization strength should be large in smooth regions and small elsewhere. We are also concerned with distinguishing the main edges of the textured regions from the other smaller or fainter features, so we control the regularization strength according to these different edge properties. This means that the main edge regions need the weakest regularization.

Throughout all the experiments, the weighting coefficient $\lambda$ is controlled adaptively according to the estimated maps. We use a geometric progression for the values of $\lambda$ over the different regions; that is, we decrease the regularization strength according to the local characteristics. In our experiments, $\lambda$ takes its largest value in the smooth regions; the value is then decreased for the smaller or fainter edges, decreased again for the strong edges, and so on, so that the main edges receive the smallest value of $\lambda$ in (10). The $\lambda$ values also decrease progressively as the number of iterations increases, which produces the best results. In addition, our method uses a hyper-Laplacian prior with $\alpha = 2/3$. To find the optimal solution, the parameter $\beta$ is varied during the iterative process, increasing from a small initial value by integer powers at each iteration; the iterations stop once $\beta$ exceeds the fixed maximum $\beta_{\max}$.

Besides, for all testing images in our experiments, we convert the RGB color image to the YCbCr color space, and only the luminance channel is taken into the computation. Furthermore, the alternating minimization and FFT operations accelerate the total computational time. In the next two parts we demonstrate the effectiveness of our approach.
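For reference, the luminance extraction can be written as a one-line conversion with the ITU-R BT.601 luma weights (assuming RGB values in [0, 1]).

```python
import numpy as np

def luminance(rgb):
    """Y (luma) channel of an RGB image, ITU-R BT.601 weights."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```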

4.2. Synthetic Images

The synthesized degradations are generated by convolving the sharp images with an artificial PSF. In the first part of the experiments, both subjective and objective quality are compared. To test subjective performance, we compare our result against those of three existing nonblind image deconvolution methods, as shown in Figure 4.

Figure 4 shows the comparison of visual quality for the simulated image. The standard RL method preserves edges well but produces severe ringing artifacts; more iterations introduce not only more image details but also more ringing. Besides, the Matlab RL function is performed in the frequency domain, so it gives rise to severe boundary artifacts. Levin's and Krishnan's methods reduce ringing effectively owing to their advanced image priors, but they suppress or blur some details since the regularization strength is applied to all pixels with the same value. The IRLS solution of Levin's algorithm also incurs expensive running times. Our approach recovers finer image details and thin image structures while successfully suppressing ringing artifacts.

In addition, to assess objective quality, the peak signal-to-noise ratio (PSNR) is used to evaluate quantitatively the quality of the restored results above. Given the true clear image $l$, the PSNR of its estimate $\hat{l}$ is defined as

$\mathrm{PSNR}(\hat{l}) = 10 \log_{10} \frac{255^2}{\frac{1}{MN} \sum_{i,j} \left( l(i,j) - \hat{l}(i,j) \right)^2},$   (17)

where $M \times N$ is the size of the input image, $l(i,j)$ is the intensity value of the true clear image at pixel location $(i,j)$, and $\hat{l}(i,j)$ is the intensity value of the restored image at the same location. The average PSNR values of the above results are listed in Table 1.
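For completeness, the PSNR of (17) can be computed as follows, assuming 8-bit intensities with a peak value of 255.

```python
import numpy as np

def psnr(true_img, restored, peak=255.0):
    """PSNR as in eq. (17); peak = 255 assumes 8-bit intensity values."""
    mse = np.mean((np.asarray(true_img, float) - np.asarray(restored, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```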

4.3. Real Blurred Images

Besides testing the method on synthetic degradations, we also apply it to real-life blurred images. We estimate the PSF of these images by Xu's method [4]. For the real blurred images, only subjective quality is assessed, as shown in Figures 5, 6, and 7.

Figures 5 and 6 show more results on the images from [3]. The PSF is estimated by [4], and the inferred PSF is then used as the input of all nonblind methods for comparison. With an estimated PSF, which is usually not accurate, the deconvolution methods need to perform robustly. The fine details of the thin branch are compared in Figure 5, which verifies that our approach shows superior edge-preserving ability compared with the other nonblind methods. In Figure 6, standard RL produces the most noticeable ringing even though it preserves sharp edges, whereas our restoration result balances detail recovery and ringing suppression. Finally, the degraded image in Figure 7 was captured with a large aperture, and the in-focus object is blurred by camera shake. In comparison, our deconvolution result exhibits richer and clearer image structures than those of the other methods.

Finally, the computational costs of our proposed method and of the other nonblind image deconvolution methods considered in the experiments are compared in Table 2. All codes are tested on a computer with an AMD Phenom II X4 965 3.40 GHz processor and 4.0 GB RAM. The Matlab RL method is performed in the frequency domain with 20 iterations. For the IRLS scheme, we use the implementation of [5] with default parameters. Krishnan's method and our proposed method both use a hyper-Laplacian prior to regularize the solution, and we set the same $\alpha$ value in the experiments. Experimental results show that the proposed adaptive scheme consumes acceptable processing time while preserving more details in the deblurred images.

5. Conclusions

Image deblurring is a long-standing problem for many applications. In this paper, a high-quality nonblind deconvolution method that removes camera motion blur from a single image has been presented. We follow the regularization-based framework using a natural image prior to constrain the optimal solution. Our main contribution is an effective scheme for balancing image detail recovery and ringing suppression. We first introduce a reference map indicating the smooth regions and the different textured edge regions. Then, according to the image local characteristics, we control the regularization weighting factor adaptively. In addition, the proposed method remains practical in computational complexity thanks to its FFT operations. Deconvolved results obtained by our approach show a noticeable improvement in recovering visually pleasing details with less ringing.

Image deblurring remains a challenging and exciting topic. We have found that severe noise, PSF estimation errors, or a large PSF may propagate and amplify unpleasant artifacts. Thus, one direction of future work is accurate PSF estimation and a robust blind deblurring model. Recently, spatially variant PSF models have also drawn attention for better modeling practical motion blurring operators [14, 15]; that is, the PSF is not uniform across the image, for example, under slight camera rotation or nonuniform object movement. Removing such shift-variant blur under a general kernel assumption remains to be explored. Moreover, we are also interested in applying the proposed framework to other restoration problems, such as image denoising, video deblurring, or surface reconstruction.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors would like to thank the anonymous reviewers, whose comments helped improve this paper. This research was supported by the National Science Council, Taiwan, under Grants NSC 101-2221-E-005-088 and NSC 102-2221-E-005-083.