International Journal of Optics
Volume 2014 (2014), Article ID 732937, 9 pages
http://dx.doi.org/10.1155/2014/732937
Research Article

Omnigradient Based Total Variation Minimization for Enhanced Defocus Deblurring of Omnidirectional Images

Department of Military Vehicle, Military Transportation University, Tianjin 300161, China

Received 22 July 2014; Accepted 19 November 2014; Published 8 December 2014

Academic Editor: Takashige Omatsu

Copyright © 2014 Yongle Li and Jingtao Lou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a new method of image restoration for catadioptric defocus blur using omnitotal variation (Omni-TV) minimization based on the omnigradient. Catadioptric omnidirectional imaging systems usually consist of conventional cameras and curved mirrors for capturing a 360° field of view. The problem of catadioptric omnidirectional imaging defocus blur, which is caused by the lens aperture and mirror curvature, becomes more severe when high resolution sensors and large apertures are used. In an omnidirectional image, two points near each other may not be close to one another in the 3D scene, so traditional gradient computation cannot be directly applied to omnidirectional image processing. Thus, an omnigradient computation method combined with the characteristics of catadioptric omnidirectional imaging is proposed. Following this, Omni-TV minimization is used as the constraint for deconvolution regularization, restoring the defocus-blurred omnidirectional image to obtain a fully sharp result. The proposed method is important for improving catadioptric omnidirectional imaging quality and promoting applications in related fields like omnidirectional video and image processing.

1. Introduction

Owing to the advantage of one-shot seamless panoramic imaging, catadioptric omnidirectional imaging systems, consisting of conventional cameras and curved mirrors for capturing a 360° field of view, are widely used in many vision applications, such as aerial photographic reconnaissance, intelligent transportation systems, robot navigation, surveillance, medical applications, and video conferencing [1–5]. The problem of catadioptric omnidirectional imaging defocus blur, which is caused by the lens aperture and mirror curvature, becomes more severe when high resolution sensors and large apertures are used. To solve the defocus problem, the classical Wiener filter and the Richardson-Lucy deconvolution method [6] can be applied to obtain clear images. In recent years, restoration methods based on regularization theory, like the Tikhonov model and the Total Variation (TV) minimization model [7, 8], have become more popular. TV minimization based regularization restoration aims to preserve more image details while suppressing noise and ringing artifacts. Nevertheless, the traditional TV computation is not appropriate for omnidirectional image processing: the classical operators cannot be directly applied to this type of image because of the catadioptric projection [9, 10]. Thus, a gradient computation targeted at omnidirectional images is needed. The remainder of this paper is structured as follows. In Section 2, to make this paper self-contained, the basic principles and a defocus blur analysis of single viewpoint catadioptric omnidirectional imaging are briefly summarized. In Section 3, an omnigradient computation method combined with the characteristics of catadioptric omnidirectional imaging is proposed, and Omni-TV minimization is introduced as the regularization constraint for deconvolution restoration.
Experimental results in Section 4 show that the proposed approach is effective for omnidirectional image defocus deblurring and has an important impact on improving catadioptric omnidirectional imaging quality and promoting applications in related fields.

2. Analysis of Catadioptric Omnidirectional Imaging Defocus Blur

The system of central catadioptric omnidirectional imaging with a hyperboloid mirror is considered because it satisfies the single viewpoint constraint [11]. Figure 1 shows an illustration of a Cartesian frame ROZ in which the effective viewpoint, located at the origin, is one focus of the hyperboloid mirror, and the effective pinhole (the centre of the lens) is located at the other focus of the hyperboloid.

Figure 1: Defocus sketch map of a catadioptric omnidirectional imaging system.

Consider a point on the mirror and a point in the world. According to the single viewpoint constraint, if the extended incident ray passes through the effective viewpoint, then the ray of light from the world point that is reflected at that mirror point passes directly through the centre of the lens. With the viewpoint at one focus and the pinhole at the other, the mirror can be expressed in the standard hyperboloid form (Z − c/2)²/a² − (X² + Y²)/b² = 1, where a and b are the parameters of the hyperboloid equation and c, the distance between the two foci, is a constant. The rays reflected at the mirror point form the principal incident and reflected rays; the image of the world point is formed where the principal reflected ray meets the image plane, which lies at a fixed distance behind the lens plane.

Now suppose that a second ray of light from the same world point is reflected by the mirror at a nearby point, passes through an off-centre point on the lens, and arrives at the image plane; a third ray, reflected at yet another mirror point, passes through a different point on the lens and likewise reaches the image plane. In general, these two rays will not be imaged at the same point on the image plane as the principal ray, which causes defocus blur [12]. The defocus simulation results shown in Figure 2 were produced with ZEMAX [13], an optical design and simulation tool.

Figure 2: Simulation results of catadioptric omnidirectional imaging defocus.

3. Omnitotal Variation Minimization Deconvolution Algorithm

3.1. Image Restoration Model

The image degradation model can be expressed as g = h * f + n, where g is the captured image, f is the original clear image, h is the point spread function (PSF), * is the convolution operator, and n is image noise. Image deconvolution is an inverse problem with applications in remote sensing, medical imaging, and, more generally, image restoration. The challenge in many inverse problems is that they are ill-posed even if the point spread function is known. To cope with the ill-posed nature of these problems, a large number of techniques have been developed, most of them using regularization. At the heart of regularization is a priori knowledge expressed by the prior, or regularization term.
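As an illustration, the degradation model can be simulated numerically. The sketch below (Python/NumPy/SciPy) blurs a toy image with a Gaussian PSF and adds Gaussian noise; the PSF shape, noise level, and function names are illustrative assumptions, not the actual spatially varying blur of a catadioptric system:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=2.0):
    """Illustrative Gaussian PSF (an assumption, not the real mirror blur),
    normalized to unit sum so that flat regions keep their intensity."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return h / h.sum()

def degrade(f, h, noise_sigma=0.01, seed=0):
    """Apply the degradation model g = h * f + n: blur by convolution with
    the PSF h, then add zero-mean Gaussian noise n."""
    rng = np.random.default_rng(seed)
    g = fftconvolve(f, h, mode="same")
    return g + noise_sigma * rng.standard_normal(f.shape)

# Toy "sharp" image: a bright square on a dark background.
f = np.zeros((64, 64))
f[24:40, 24:40] = 1.0
g = degrade(f, gaussian_psf())
```

The blurred image g spreads the square's edges over roughly the PSF support while leaving its interior close to the original intensity.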

TV regularization was introduced by Rudin et al. in [8] and has become popular in recent years [6]. Recently, the range of applications of TV-based methods has been successfully extended to inpainting, blind deconvolution, and processing of vector-valued images. Arguably, the success of TV regularization relies on a good balance between the ability to describe piecewise smooth images and the complexity of the resulting algorithms. The TV regularizer favors images of bounded variation, without penalizing possible discontinuities. Furthermore, the TV regularizer is convex, thus opening the door to research on efficient algorithms for computing optimal or nearly optimal solutions [7].

The TV can be defined as the sum of gradient amplitudes over the entire image, TV(f) = Σ_(i,j) |∇f(i, j)|, where |∇f(i, j)| = sqrt((∇_x f(i, j))² + (∇_y f(i, j))²) is the gradient amplitude at pixel position (i, j) of the image f. By the method of Lagrange multipliers, image restoration based on TV minimization can be expressed as the following minimization problem: f̂ = argmin_f ‖h * f − g‖² + λ · TV(f), where λ is a constant which controls the convergence of the deconvolution algorithm and TV(f) is the constraint, also called the image prior. The clear image f̂ is obtained by minimizing the reconstruction error ‖h * f − g‖² subject to this prior.
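A minimal sketch of the discrete TV computation, using isotropic TV with forward differences and replicated borders (one common discretization; the paper does not state which scheme it uses):

```python
import numpy as np

def total_variation(f):
    """Isotropic discrete TV: sum over pixels of the gradient magnitude,
    with forward differences and the last row/column replicated."""
    dx = np.diff(f, axis=1, append=f[:, -1:])   # horizontal differences
    dy = np.diff(f, axis=0, append=f[-1:, :])   # vertical differences
    return float(np.sqrt(dx ** 2 + dy ** 2).sum())

flat = np.full((32, 32), 0.5)   # constant image: TV = 0
step = np.zeros((32, 32))
step[:, 16:] = 1.0              # one unit jump per row: TV = 32
```

A constant image has zero TV, while a single vertical step contributes exactly one unit of variation per row; this is why TV regularization tolerates sharp discontinuities but penalizes oscillatory noise.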

3.2. Omnitotal Variation

We need to note in particular that, because of the distortions observed in catadioptric omnidirectional images such as the one shown in Figure 3, traditional image processing techniques are no longer appropriate and need to be adapted to the sensor geometry; classical operators cannot be directly applied to this type of image. The spatial resolution of catadioptric omnidirectional images decreases gradually from the periphery to the centre of the image. Because of the catadioptric projection, two distances of equal size in the real world project to different image distances when one is imaged at the periphery and the other at the centre of the image. That is, two points that are near each other in the omnidirectional image may not be close to one another in the real world.

Figure 3: Catadioptric camera and omnidirectional image.

Thus, traditional gradient computation cannot reflect the characteristics of catadioptric omnidirectional imaging or the intensity variation between nearby pixels, and we propose a new gradient computation method. First, a cylindrical panoramic image is obtained by the backward projection of the omnidirectional image. Then, the nearby pixels in the horizontal and vertical directions are established in the cylindrical panoramic image. Finally, the nearby pixels are forward projected onto the omnidirectional image to find the corresponding positions used to calculate the gradient.

As shown in Figure 4(a), in terms of the single viewpoint constraints of catadioptric omnidirectional imaging and the method of light ray tracing, a point p in the omnidirectional image is backward projected, through the corresponding point on the mirror, to a point P in the cylindrical panoramic image, where the two foci of the hyperbolic mirror and the radius of the cylinder determine the projection. As shown in Figure 4(b), for the point p backward projected to P, nearby pixels P_h and P_v of P in the horizontal and vertical directions are established in the panoramic image. P_h and P_v are then forward projected to p_h and p_v in the omnidirectional image. Thus, the omnigradient amplitude at the pixel p can be expressed as |∇_Ω f(p)| = sqrt((f(p_h) − f(p))² + (f(p_v) − f(p))²).
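The backward/forward projection steps can be sketched as follows. For brevity this sketch replaces the calibrated hyperboloid projection with a hypothetical linear mapping (image radius to cylinder row, polar angle to cylinder column); all function names and parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def omni_to_cyl(x, y, cx, cy, r_min, r_max, w, h):
    """Backward projection: omni pixel (x, y) -> cylinder coordinates (u, v),
    with an assumed linear radius-to-row mapping."""
    r = np.hypot(x - cx, y - cy)
    theta = np.arctan2(y - cy, x - cx) % (2 * np.pi)
    return theta / (2 * np.pi) * w, (r - r_min) / (r_max - r_min) * h

def cyl_to_omni(u, v, cx, cy, r_min, r_max, w, h):
    """Forward projection: cylinder (u, v) -> omni pixel (x, y)."""
    theta = u / w * 2 * np.pi
    r = r_min + v / h * (r_max - r_min)
    return cx + r * np.cos(theta), cy + r * np.sin(theta)

def sample(img, x, y):
    """Nearest-neighbour lookup with border clamping."""
    yi = int(np.clip(round(y), 0, img.shape[0] - 1))
    xi = int(np.clip(round(x), 0, img.shape[1] - 1))
    return float(img[yi, xi])

def omnigradient(img, x, y, cx, cy, r_min=20.0, r_max=120.0, w=720, h=100):
    """Gradient magnitude at (x, y) with the neighbours taken one step apart
    on the cylinder, then forward projected back into the omni image."""
    u, v = omni_to_cyl(x, y, cx, cy, r_min, r_max, w, h)
    f0 = sample(img, x, y)
    fh = sample(img, *cyl_to_omni(u + 1, v, cx, cy, r_min, r_max, w, h))
    fv = sample(img, *cyl_to_omni(u, v + 1, cx, cy, r_min, r_max, w, h))
    return float(np.hypot(fh - f0, fv - f0))
```

On a synthetic image whose intensity steps from 0 to 1 at a fixed radius, the omnigradient at a pixel just inside the step is 1 in the radial direction and 0 along the ring, as expected for an edge aligned with the cylinder grid.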

Figure 4: Sketch map of Omni-TV computation: (a) backward projection; (b) Omni-TV computation method.

The Omni-TV of an omnidirectional image f is defined as the sum over all pixels of the omnigradient amplitudes, Omni-TV(f) = Σ_p |∇_Ω f(p)|, where |∇_Ω f(p)| is the omnigradient amplitude at pixel p. Substituting Omni-TV for the traditional TV term in the minimization problem of Section 3.1, the deblurred omnidirectional image can be obtained.

4. Experimental Results

4.1. Omnigradient Computation Results

To verify the rationality and effectiveness of the proposed Omni-TV, the experiments compare the omnigradient and traditional gradient computation methods. Figure 5 is a simulation test image of catadioptric omnidirectional imaging rendered with 3ds Max. The simulation scene contains the cameraman image, part of an eye chart, lena, and peppers, pasted onto a cylindrical surface joined end to end. As shown in Figure 6, the results obtained by the proposed omnigradient computation show the image edges more clearly. Figure 7 is the omnigradient computation result for a real image. In particular, in the centre part of the omnidirectional image, the edges are clearer and more regular along the radial direction.

Figure 5: Simulation test image of catadioptric omnidirectional imaging.
Figure 6: Gradient computation results of test image: (a) and (b) are traditional gradient computation result and close-up views, respectively; (c) and (d) are omnigradient computation result and close-up views, respectively.
Figure 7: Gradient computation results of real image: (a) and (b) are traditional gradient computation result and close-up views, respectively; (c) and (d) are omnigradient computation result and close-up views, respectively.
4.2. Image Restoration Results

To verify the effectiveness of the deconvolution algorithm based on Omni-TV minimization, the TwIST algorithm [14] is applied, using both the Omni-TV and the traditional TV as image priors, to perform the deconvolution. The signal-to-noise ratio (SNR), defined as SNR = 10 · log10(Σ(f − f̄)² / Σ(f − f̂)²), is used to assess the deblurred results, where f and f̂ are the original and deblurred images and f̄ is the average value of the original image. Figure 8 shows the defocus deblurring results for the simulation test image. Figure 8(a) is the ground truth and Figure 8(b) is the blurry image. Figure 8(c) shows the deblurred result using the traditional TV, with SNR = 22.87; Figure 8(d) shows the deblurred result using Omni-TV, with SNR = 24.47. Figure 9 shows the SNR comparison when a local window (64 × 64 pixels) moves from the centre to the periphery of the image; the deconvolution results based on Omni-TV are better than those using the traditional TV. Figure 10 shows the defocus deblurring results for a real omnidirectional image captured by our experimental setup. The proposed deconvolution algorithm using Omni-TV as the image prior produces more visually appealing deblurred results.
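Assuming the common form of this SNR (energy of the original image about its mean, divided by the residual energy, in decibels), which matches the quantities named in the text, a minimal sketch:

```python
import numpy as np

def snr_db(f, f_hat):
    """SNR in dB: energy of the original image about its mean, divided by
    the energy of the reconstruction error f - f_hat."""
    num = np.sum((f - f.mean()) ** 2)
    den = np.sum((f - f_hat) ** 2)
    return float(10.0 * np.log10(num / den))

f = np.linspace(0.0, 1.0, 100).reshape(10, 10)   # toy "original" image
```

Doubling the residual amplitude lowers the SNR by 10 · log10(4) ≈ 6.02 dB, so even modest SNR gaps correspond to a noticeable reduction in residual energy.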

Figure 8: Defocus deblurring results of test image: (a) simulation omnidirectional image (ground truth); (b) blurry image; (c) deblurred results using TwIST algorithm, SNR = 22.87; (d) deblurred results using our proposed method (Omni-TV), SNR = 24.47.
Figure 9: SNR comparison when the local window moves from the centre to the periphery of the image: (a) local window locations; (b) SNR comparison of cameraman; (c) SNR comparison of lena; (d) SNR comparison of peppers.
Figure 10: Defocus deblurring results of real image captured by our experimental setup: (a) blurry omnidirectional image; (b) deblurred results using TwIST algorithm; (c) deblurred results using our proposed method (Omni-TV).

5. Conclusion

In this paper, a novel method of image restoration for catadioptric defocus blurred images based on omnitotal variation minimization has been proposed. Since traditional gradient computation does not fit the characteristics of catadioptric omnidirectional imaging, an omnigradient computation method is proposed. Omnitotal variation minimization is then used as the regularization constraint for deconvolution, restoring the defocus-blurred omnidirectional image to obtain a fully sharp result. Experimental results show that the proposed algorithm produces more visually appealing deblurred results. In particular, in the centre part of the omnidirectional image, the edges are clearer and more regular along the radial direction.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research was partially supported by the National Natural Science Foundation of China (NSFC) under Projects no. 61271438, no. 61275016, no. 91120306, and no. 51175290.

References

  1. K. Ikeuchi, M. Sakauchi, H. Kawasaki, and I. Sato, “Constructing virtual cities by using panoramic images,” International Journal of Computer Vision, vol. 58, no. 3, pp. 237–247, 2004.
  2. M. Fiala and A. Basu, “Robot navigation using panoramic tracking,” Pattern Recognition, vol. 37, no. 11, pp. 2195–2215, 2004.
  3. T. E. Boult, X. Gao, R. Micheals, and M. Eckmann, “Omni-directional visual surveillance,” Image and Vision Computing, vol. 22, no. 7, pp. 515–534, 2004.
  4. L.-D. Chen, M.-J. Zhang, and Z.-H. Xiong, “Series-parallel pipeline architecture for high-resolution catadioptric panoramic unwrapping,” IET Image Processing, vol. 4, no. 5, pp. 403–412, 2010.
  5. M. Bahrami and A. V. Goncharov, “All-spherical catadioptric telescope design for wide-field imaging,” Applied Optics, vol. 49, no. 30, pp. 5705–5712, 2010.
  6. S. Kuthirummal, Flexible Imaging for Capturing Depth and Controlling Field of View and Depth of Field, Columbia University, New York, NY, USA, 2009.
  7. J. M. Bioucas-Dias, M. A. T. Figueiredo, and J. P. Oliveira, “Total variation-based image deconvolution: a majorization-minimization approach,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '06), vol. 2, pp. 861–864, 2006.
  8. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
  9. C. Demonceaux, P. Vasseur, and Y. Fougerolle, “Central catadioptric image processing with geodesic metric,” Image and Vision Computing, vol. 29, no. 12, pp. 840–849, 2011.
  10. F. Jacquey, F. Comby, and O. Strauss, “Fuzzy edge detection for omnidirectional images,” Fuzzy Sets and Systems, vol. 159, no. 15, pp. 1991–2010, 2008.
  11. S. Baker and S. K. Nayar, “Theory of single-viewpoint catadioptric image formation,” International Journal of Computer Vision, vol. 35, no. 2, pp. 175–196, 1999.
  12. Y. Li, Y. Liu, W. Wang, J. Lou, A. Basu, and M. Zhang, “Defocus deblurring for catadioptric omnidirectional imaging based on spatially invariant point spread function,” Journal of Modern Optics, vol. 60, no. 6, pp. 458–466, 2013.
  13. Zemax Co. Ltd., 2013, http://www.radiantzemax.com/zemax/.
  14. J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2992–3004, 2007.