Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 470165, 11 pages
http://dx.doi.org/10.1155/2013/470165
Research Article

Combined First- and Second-Order Variational Model for Image Compressive Sensing

1School of Computer Science & Engineering, Nanjing University of Science & Technology, Nanjing, Jiangsu 210094, China
2North Information Control Group Co. Ltd., Nanjing, Jiangsu 210094, China

Received 16 June 2013; Accepted 2 September 2013

Academic Editor: Suh-Yuh Yang

Copyright © 2013 Can Feng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A hybrid variational model combining first- and second-order total variation is proposed in this paper for reconstructing an image from a finite number of noisy compressive samples. Inspired by the majorization-minimization scheme, we develop an efficient algorithm that seeks the optimal solution of the proposed model by successively minimizing a sequence of quadratic surrogate penalties. Both natural and magnetic resonance (MR) images are used to compare its numerical performance with four state-of-the-art algorithms. Experimental results demonstrate that the proposed algorithm achieves a significant improvement over related state-of-the-art algorithms in terms of reconstruction relative error (RE) and peak signal-to-noise ratio (PSNR).

1. Introduction

Traditional approaches to sampling signals or images follow the basic principle of the Nyquist-Shannon sampling theorem: the sampling rate must be at least twice the frequency bandwidth. In many practical applications, including image and video cameras, MRI scanners, and radar receivers, the requirements of high-resolution imaging lead to very high Nyquist sampling rates, which cause a series of practical difficulties in data conversion (e.g., analog-to-digital conversion), storage, and transmission. The technique called compressive sensing (CS), which goes against conventional wisdom in data acquisition, has recently been developed to overcome these problems.

CS theory designs nonadaptive sampling techniques that condense the information in sparse or compressible images into a small amount of data and yet reconstruct them accurately. The general framework of compressive sensing consists of two phases: encoding and decoding. In the encoding phase, a sparse or compressible image $x \in \mathbb{R}^N$ is encoded into noisy compressive samples by
$$y = \Phi x + n. \tag{1}$$
Here, $\Phi \in \mathbb{R}^{M \times N}$ is the sensing matrix ($M$ is the number of measurements and $N$ is the number of total pixels, $M \ll N$), $y \in \mathbb{R}^M$ is the vector of compressive samples of the original image $x$, and $n$ is Gaussian white noise with standard deviation $\sigma$. In the decoding phase, a nonlinear recovery algorithm is used to reconstruct the original image from the compressive samples [1–3]. One common approach is to formulate CS decoding as the following optimization problem [4]:
$$\min_{x}\ \frac{1}{2}\|\Phi x - y\|_2^2 + \lambda R(x), \tag{2}$$
where $R(x)$ is the regularization term and $\lambda > 0$ is a tunable regularization parameter that trades off the fidelity term $\frac{1}{2}\|\Phi x - y\|_2^2$ and the regularization term $R(x)$. One popular choice of the regularization term is the total variation (TV) regularizer, defined by the $\ell_1$ norm of the image gradient [5, 6]:
$$\mathrm{TV}(f) = \int_{\Omega} |\nabla f(\mathbf{r})|\, d\mathbf{r} = \int_{\Omega} \sqrt{f_h(\mathbf{r})^2 + f_v(\mathbf{r})^2}\, d\mathbf{r}, \tag{3}$$
where $f_h$ and $f_v$ denote the horizontal and vertical partial derivatives of the image $f$.
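The encoding phase above can be illustrated with a toy sketch (ours, not the authors' code; the Gaussian sensing matrix, sizes, and noise level are illustrative assumptions): a sparse signal is compressed into far fewer noisy measurements.

```python
import numpy as np

# Toy CS encoding y = Phi @ x + n with M << N (all sizes assumed).
rng = np.random.default_rng(0)
N, M = 256, 64                          # pixels vs. measurements
x = np.zeros(N)
support = rng.choice(N, size=8, replace=False)
x[support] = rng.standard_normal(8)     # sparse image/signal
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # sensing matrix
sigma = 0.01
y = Phi @ x + sigma * rng.standard_normal(M)     # noisy compressive samples
print(y.shape)                          # (64,)
```

The decoding phase then recovers `x` from `y` and `Phi` by solving a regularized problem such as (2).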

Total variation regularization was first introduced by Rudin et al. [7] to cope with the image denoising problem in 1992. Their approach had a significant impact in the area of inverse problems in image processing. Since then, various improvements have been proposed by the community, for example, adaptive TV regularization [8, 9], nonlocal TV regularization [10], and anisotropic and directional TV regularization [11–13]. TV regularization is widely used in image processing mainly due to its good properties, such as convexity, invariance to image shifts, high accuracy in recovering piecewise-constant images, and a desirable ability to preserve edges. However, it also has some limitations. Penalizing the $\ell_1$ norm of the gradient encourages the recovery of images in a piecewise-constant form, resulting in reconstructed images with patchy, painting-like staircase artifacts [14, 15]. To reduce the staircase effect and give better denoising performance, several higher-order TV regularization schemes were reported in the context of image denoising over the last few years [14–18]. However, higher-order regularizers are prone to blurring sharp edges, since they tend to suppress large partial gradients. For most natural images, it is thus desirable to further improve the restoration capability by combining first- and higher-order methods into one framework so that they complement each other.

In this paper, we propose a hybrid compressive sensing method using a new hybrid TV measure, a combined first- and second-order TV penalty, for recovering a piecewise-smooth image containing sharp edges from limited compressive samples. To seek the optimal solution of the proposed model, we develop an efficient image reconstruction algorithm using the majorization-minimization scheme. The novelty of this work is that our hybrid TV regularization method is able to exploit the best properties of first- and second-order TV regularization and manages to overcome the weaknesses of both.

The rest of the paper is organized as follows. Section 2 provides a brief review of second-order TV regularization method for CS image reconstruction problem. In Section 3, we give a detailed description of our hybrid TV-based CS construction model and corresponding optimization algorithm. Experimental results and performance analysis are given in Section 4. Finally, Section 5 concludes this paper.

2. Second-Order TV Penalty

In this section, we reinterpret the classic TV regularizer and extend it to a second-order TV regularizer. We denote the first-order directional derivative of $f$ along the unit vector $u_\theta = (\cos\theta, \sin\theta)$ as $f_\theta$, which can be expressed as a linear combination of the horizontal and vertical directional derivatives as follows:
$$f_\theta(\mathbf{r}) = \cos\theta\, f_h(\mathbf{r}) + \sin\theta\, f_v(\mathbf{r}). \tag{4}$$
Similarly, for the second-order directional derivative $f_{\theta\theta}$, we have
$$f_{\theta\theta}(\mathbf{r}) = \cos^2\theta\, f_{hh}(\mathbf{r}) + 2\sin\theta\cos\theta\, f_{hv}(\mathbf{r}) + \sin^2\theta\, f_{vv}(\mathbf{r}). \tag{5}$$

In order to extend the standard TV scheme to a higher-order TV regularizer, we reinterpret the TV regularizer as a group-separable mixed norm of directional derivatives of the image. To do this, we rewrite the gradient magnitude as follows:
$$|\nabla f(\mathbf{r})| = \left(\frac{1}{\pi}\int_0^{2\pi} f_\theta(\mathbf{r})^2\, d\theta\right)^{1/2}. \tag{6}$$

Hence, the TV regularizer defined in (3) can be expressed as a functional involving the directional derivatives of $f$ as follows:
$$\mathrm{TV}(f) = \int_\Omega \left(\frac{1}{\pi}\int_0^{2\pi} f_\theta(\mathbf{r})^2\, d\theta\right)^{1/2} d\mathbf{r}. \tag{7}$$
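The identity behind this reinterpretation, that averaging the squared directional derivative over all angles recovers the squared gradient magnitude, can be checked numerically (our sanity check, with arbitrary sample gradient values):

```python
import numpy as np

# Check: (1/pi) * int_0^{2pi} (cos(t)*fh + sin(t)*fv)^2 dt = fh^2 + fv^2.
fh, fv = 0.7, -1.3                                   # sample gradient values
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f_theta = np.cos(theta) * fh + np.sin(theta) * fv    # directional derivative
lhs = 2.0 * np.mean(f_theta ** 2)   # uniform average over [0,2pi) = (1/pi)*integral
rhs = fh ** 2 + fv ** 2
print(abs(lhs - rhs) < 1e-9)        # True
```

The uniform average integrates the trigonometric polynomial exactly, so the agreement is to machine precision.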

Based on the above reinterpretation, we introduce the second-order TV regularizer by replacing the first-order directional derivative $f_\theta$ in (7) with the second-order directional derivative $f_{\theta\theta}$; thus, we have
$$\mathrm{TV}_2(f) = \int_\Omega \left(\frac{1}{\pi}\int_0^{2\pi} f_{\theta\theta}(\mathbf{r})^2\, d\theta\right)^{1/2} d\mathbf{r}. \tag{8}$$

Substituting (5) into (8) and carrying out the integral over $\theta$, we can rewrite the second-order TV regularizer as follows:
$$\mathrm{TV}_2(f) = \int_\Omega \left(\tfrac{3}{4} f_{hh}^2 + f_{hv}^2 + \tfrac{3}{4} f_{vv}^2 + \tfrac{1}{2} f_{hh} f_{vv}\right)^{1/2} d\mathbf{r}. \tag{9}$$

Since the second-order TV regularizer sums the squared magnitudes of the directional derivatives of the image over all directions, this penalty is invariant to rotations and translations and is also convex. The second-order derivatives are steerable, which enables us to obtain analytical expressions for the new regularizer, just as for the standard TV regularizer. Furthermore, minimizing the second-order TV regularizer preserves the strong directional derivatives while attenuating the small ones in other directions; thus, it provides better preservation of line-like features.
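Substituting (5) into (8) and integrating over $\theta$ yields a closed form in the three second-order partials; the angular integral can be verified numerically (our check, with arbitrary sample values for $f_{hh}$, $f_{hv}$, $f_{vv}$):

```python
import numpy as np

# Check: (1/pi) * int_0^{2pi} f_thth^2 dt
#        = (3/4)fhh^2 + fhv^2 + (3/4)fvv^2 + (1/2)fhh*fvv.
fhh, fhv, fvv = 0.5, -0.2, 1.1
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
s, c = np.sin(theta), np.cos(theta)
f2 = c ** 2 * fhh + 2.0 * s * c * fhv + s ** 2 * fvv   # eq. (5)
lhs = 2.0 * np.mean(f2 ** 2)        # (1/pi) * integral over [0, 2pi)
rhs = 0.75 * fhh ** 2 + fhv ** 2 + 0.75 * fvv ** 2 + 0.5 * fhh * fvv
print(abs(lhs - rhs) < 1e-9)        # True
```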

3. The Proposed Variational Approach for Image Compressive Sensing

3.1. Hybrid TV-Based Reconstruction Model

Combining the first-order and second-order TV regularizers, we present a hybrid TV regularization model to reconstruct the original image from its noisy compressive samples. Given noisy compressive samples $y$ and sensing matrix $\Phi$, the proposed reconstruction model can be formulated as
$$\min_x\ \frac{1}{2}\|\Phi x - y\|_2^2 + \lambda_1 \mathrm{TV}(x) + \lambda_2 \mathrm{TV}_2(x), \tag{10}$$
where $\lambda_1$ and $\lambda_2$ are two positive parameters that control the contributions of the fidelity term and the two regularization terms.

In the first-order TV regularization method, images are often assumed to be piecewise constant, while the second-order TV regularization method implicitly adopts a piecewise-linear assumption. A piecewise-linear approximation of images usually yields a smaller approximation error than a piecewise-constant representation. Therefore, assuming that the images under consideration can be better decomposed into piecewise-constant and piecewise-linear functions, we expect the combination of first- and second-order regularization to be more accurate in terms of reconstruction quality.

3.2. Discretization of Hybrid TV Regularizer

Let us now consider the discrete formulation of the two regularization terms in (10). A discrete version can be obtained by considering that the image has been uniformly sampled. At this point, we switch notation and assume that $x \in \mathbb{R}^N$ denotes the vector containing all pixels arranged in column lexicographic ordering.

Using local first-order differences to approximate the two orthogonal components of the gradient, we obtain a discrete version of the first-order TV regularizer:
$$\mathrm{TV}(x) = \sum_{i=1}^{N} \sqrt{\big(D_h^i x\big)^2 + \big(D_v^i x\big)^2}, \tag{11}$$
where $D_h^i$ and $D_v^i$ are operators corresponding to the horizontal and vertical first-order differences at pixel $i$, respectively, and the summation is taken over all pixels. Similarly, we can write a discrete version of the second-order TV regularizer as
$$\mathrm{TV}_2(x) = \sum_{i=1}^{N} \sqrt{\tfrac{3}{4}\big(D_{hh}^i x\big)^2 + \big(D_{hv}^i x\big)^2 + \tfrac{3}{4}\big(D_{vv}^i x\big)^2 + \tfrac{1}{2}\big(D_{hh}^i x\big)\big(D_{vv}^i x\big)}. \tag{12}$$
Here $D_{hh}^i$, $D_{vv}^i$, and $D_{hv}^i$ are operators corresponding, respectively, to the horizontal-horizontal, vertical-vertical, and horizontal-vertical second-order differences at pixel $i$.
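The global difference matrices can be built by Kronecker products with a 1-D difference matrix; the sketch below is our assumed construction (forward differences, a zero row at the boundary), not the authors' code.

```python
import numpy as np
from scipy.sparse import diags, identity, kron

# Global finite-difference matrices for an n x n image stored in
# column lexicographic order.
def first_diff_1d(n):
    D = diags([-1.0, 1.0], [0, 1], shape=(n, n)).tolil()
    D[-1, :] = 0.0              # no difference past the last sample
    return D.tocsr()

n = 8
I = identity(n, format="csr")
D1 = first_diff_1d(n)
Dh = kron(D1, I)                # horizontal differences (across columns)
Dv = kron(I, D1)                # vertical differences (down rows)
Dhh, Dvv, Dhv = Dh @ Dh, Dv @ Dv, Dh @ Dv   # second-order operators

x = np.random.default_rng(5).standard_normal(n * n)
t = np.sqrt((Dh @ x) ** 2 + (Dv @ x) ** 2)  # per-pixel gradient magnitude
tv = t.sum()                    # discrete first-order TV, eq. (11)
print(Dh.shape, Dhh.shape)      # (64, 64) (64, 64)
```

A constant image gives zero differences under all five operators, as expected.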

For the sake of simplicity, we denote
$$t_i(x) = \sqrt{\big(D_h^i x\big)^2 + \big(D_v^i x\big)^2}, \qquad s_i(x) = \sqrt{\tfrac{3}{4}\big(D_{hh}^i x\big)^2 + \big(D_{hv}^i x\big)^2 + \tfrac{3}{4}\big(D_{vv}^i x\big)^2 + \tfrac{1}{2}\big(D_{hh}^i x\big)\big(D_{vv}^i x\big)}. \tag{13}$$
Thus, (10) can be rewritten as
$$\min_x\ J(x) = \frac{1}{2}\|\Phi x - y\|_2^2 + \lambda_1 \sum_{i=1}^{N} t_i(x) + \lambda_2 \sum_{i=1}^{N} s_i(x). \tag{14}$$

3.3. An Efficient Algorithm

In this subsection, we derive an efficient optimization algorithm, belonging to the class of majorization-minimization (MM) approaches [19, 20], to solve the proposed model (10). The MM approaches were introduced to solve optimization problems that are too difficult to solve directly. These schemes reformulate the original problem as the solution of a sequence of surrogate problems. Optimizing the surrogate functions drives the objective function downward until a local optimum is reached.

Let $J(x)$ be the cost function to be minimized. Instead of minimizing $J(x)$ directly, the MM approaches solve a sequence of surrogate problems that are easier to solve than $J(x)$. As a result, the sequence of iterates obtained by
$$x^{(k+1)} = \arg\min_x\ Q\big(x; x^{(k)}\big) \tag{15}$$
is used to estimate the solution of $\min_x J(x)$. In (15), $Q(x; x')$ is the majorizer of the function $J$ at a fixed point $x'$, which satisfies the following two properties:
$$Q(x; x') \ge J(x) \quad \text{for all } x, \tag{16}$$
$$Q(x'; x') = J(x'). \tag{17}$$

To develop an MM-based algorithm, we derive a quadratic majorizer for the proposed hybrid TV regularizer. The choice of a quadratic majorizer is motivated by the fact that minimizing a quadratic function amounts to solving a system of linear equations, a task for which excellent methods are available in the literature. We define the following majorizer:
$$Q\big(x; x'\big) = \frac{1}{2}\|\Phi x - y\|_2^2 + \lambda_1 \sum_{i=1}^{N} \frac{t_i(x)^2 + t_i(x')^2}{2\, t_i(x')} + \lambda_2 \sum_{i=1}^{N} \frac{s_i(x)^2 + s_i(x')^2}{2\, s_i(x')}. \tag{18}$$

Using the inequality $\sqrt{a} \le \frac{a + b}{2\sqrt{b}}$ (valid for $a \ge 0$, $b > 0$), we have
$$\frac{t_i(x)^2 + t_i(x')^2}{2\, t_i(x')} \ge t_i(x), \qquad \frac{s_i(x)^2 + s_i(x')^2}{2\, s_i(x')} \ge s_i(x) \tag{19}$$
for any $x$, with equality for $x = x'$.
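The scalar inequality driving the majorization follows from $(\sqrt{a} - \sqrt{b})^2 \ge 0$ and can be spot-checked numerically (our check, random sample values):

```python
import numpy as np

# Check sqrt(a) <= (a + b) / (2*sqrt(b)) for a >= 0, b > 0,
# with equality exactly at a = b.
rng = np.random.default_rng(1)
a = rng.uniform(0.0, 10.0, 1000)
b = rng.uniform(0.1, 10.0, 1000)
gap = (a + b) / (2.0 * np.sqrt(b)) - np.sqrt(a)
print(gap.min() >= -1e-9)           # True: the bound never dips below
# equality at a = b (up to rounding):
print(np.abs((b + b) / (2.0 * np.sqrt(b)) - np.sqrt(b)).max() < 1e-12)  # True
```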

Equation (19) means that the majorizer $Q(x; x')$ satisfies the two properties (16) and (17). So, the solution of optimization problem (14) can be obtained by finding the minimizer of $Q(x; x^{(k)})$, $k = 0, 1, 2, \ldots$:
$$x^{(k+1)} = \arg\min_x\ Q\big(x; x^{(k)}\big). \tag{20}$$

Notice that, for fixed $x^{(k)}$, the terms $t_i(x^{(k)})$ and $s_i(x^{(k)})$ in (18) are simply additive constants that can be ignored, since they do not affect the solution of the optimization problem. Thus, problem (20) is equivalent to
$$x^{(k+1)} = \arg\min_x\ \frac{1}{2}\|\Phi x - y\|_2^2 + \frac{\lambda_1}{2} \sum_{i=1}^{N} \frac{t_i(x)^2}{t_i\big(x^{(k)}\big)} + \frac{\lambda_2}{2} \sum_{i=1}^{N} \frac{s_i(x)^2}{s_i\big(x^{(k)}\big)}. \tag{21}$$

Let $D_h$ and $D_v$ denote the horizontal and vertical global finite-difference matrices of size $N \times N$ such that $D_h x$ and $D_v x$ are the vectors of all horizontal and vertical first-order differences of image $x$. We then have
$$\sum_{i=1}^{N} \frac{t_i(x)^2}{t_i\big(x^{(k)}\big)} = x^T\big(D_h^T W^{(k)} D_h + D_v^T W^{(k)} D_v\big)\, x, \tag{22}$$
where $W^{(k)} = \mathrm{diag}\big(w^{(k)}\big)$; here, $w^{(k)}$ is a column vector whose $i$th component is
$$w_i^{(k)} = \frac{1}{t_i\big(x^{(k)}\big)}. \tag{23}$$

Similarly, for the second term in (21), we have
$$\sum_{i=1}^{N} \frac{s_i(x)^2}{s_i\big(x^{(k)}\big)} = x^T B^T \overline{W}^{(k)} B\, x, \tag{24}$$
where $D_{hh}$, $D_{hv}$, and $D_{vv}$ are the three second-order difference matrices of size $N \times N$ such that
$$B = \begin{bmatrix} D_{hh} \\ D_{hv} \\ D_{vv} \end{bmatrix}, \tag{25}$$
and the weighting matrix $\overline{W}^{(k)}$ is a block matrix defined by
$$\overline{W}^{(k)} = \begin{bmatrix} \tfrac{3}{4} V^{(k)} & 0 & \tfrac{1}{4} V^{(k)} \\ 0 & V^{(k)} & 0 \\ \tfrac{1}{4} V^{(k)} & 0 & \tfrac{3}{4} V^{(k)} \end{bmatrix}, \qquad V^{(k)} = \mathrm{diag}\big(v^{(k)}\big); \tag{26}$$
here $v^{(k)}$ is a column vector whose $i$th component is
$$v_i^{(k)} = \frac{1}{s_i\big(x^{(k)}\big)}. \tag{27}$$

Substituting (22) and (24) into (21), we can rewrite (21) in the more compact form
$$x^{(k+1)} = \arg\min_x\ \frac{1}{2}\|\Phi x - y\|_2^2 + \frac{\lambda_1}{2}\, x^T\big(D_h^T W^{(k)} D_h + D_v^T W^{(k)} D_v\big)\, x + \frac{\lambda_2}{2}\, x^T B^T \overline{W}^{(k)} B\, x. \tag{28}$$

Since the objective function in optimization problem (28) is quadratic, according to the theory of variational methods, $x^{(k+1)}$ is the solution of the following Euler-Lagrange equation:
$$\big(\Phi^T \Phi + \lambda_1 D_h^T W^{(k)} D_h + \lambda_1 D_v^T W^{(k)} D_v + \lambda_2 B^T \overline{W}^{(k)} B\big)\, x = \Phi^T y. \tag{29}$$
System (29) can be solved efficiently using the conjugate gradient (CG) algorithm.
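A matrix-free CG solve of a system with this structure can be sketched as follows (our illustration, not the authors' implementation; assumed simplifications: a 1-D signal, only a first-order term with unit weights, i.e. one inner solve with the weights frozen):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Solve (Phi^T Phi + lam * D^T D) x = Phi^T y without forming the matrix.
rng = np.random.default_rng(2)
N, M, lam = 128, 48, 0.05
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.cumsum(0.1 * rng.standard_normal(N))   # smooth test signal
y = Phi @ x_true + 0.001 * rng.standard_normal(M)

D = np.eye(N, k=1) - np.eye(N)
D[-1, :] = 0.0                                     # boundary row
def apply_A(x):                                    # left-hand-side operator
    return Phi.T @ (Phi @ x) + lam * (D.T @ (D @ x))

A = LinearOperator((N, N), matvec=apply_A)
x_hat, info = cg(A, Phi.T @ y, maxiter=500)
print(info)                                        # 0 means CG converged
```

Only matrix-vector products with `Phi`, `D`, and their transposes are needed, which is what makes CG attractive at image scale.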

In summary, the proposed algorithm for our hybrid TV model (10) is composed of four steps.

Step 1. Input $y$, $\Phi$, $\lambda_1$, $\lambda_2$, and $\epsilon$ ($\epsilon$ is a tolerance). Set $k = 0$. Initialize $x^{(0)}$.

Step 2. (a) Compute $W^{(k)}$ according to (23) for fixed $x^{(k)}$. (b) Compute $\overline{W}^{(k)}$ according to (26) and (27) for fixed $x^{(k)}$.

Step 3. Solve Euler-Lagrange equation (29) using the CG algorithm to obtain $x^{(k+1)}$.

Step 4. Terminate if the relative change $\|x^{(k+1)} - x^{(k)}\|_2 / \|x^{(k)}\|_2 < \epsilon$. Otherwise, set $k = k + 1$ and go to Step 2.

Here, we make one comment about the implementation of this algorithm. In Step 3, the linear system (29) cannot be solved analytically due to its huge dimension (the system matrix is $N \times N$ for an image with $N$ pixels), so we treat the left-hand side of (29) as an operator and solve the system with the CG algorithm in our implementation.
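The four steps can be sketched end-to-end in 1-D (our illustration under stated assumptions: only the first-order term, a small smoothing constant in the weights to avoid division by zero, a dense solver standing in for CG, and toy problem sizes):

```python
import numpy as np

# MM loop: reweight (Step 2), solve the linear system (Step 3),
# check the relative change (Step 4).
rng = np.random.default_rng(3)
N, M, lam, eps = 100, 40, 0.02, 1e-5
x_true = np.repeat([0.0, 1.0, -0.5, 2.0], 25)      # piecewise-constant signal
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true + 0.001 * rng.standard_normal(M)

D = np.eye(N, k=1) - np.eye(N)
D[-1, :] = 0.0
x = Phi.T @ y                                      # Step 1: initialize
for k in range(200):
    w = 1.0 / np.sqrt((D @ x) ** 2 + 1e-8)         # Step 2: IRLS weights
    A = Phi.T @ Phi + lam * (D.T @ (w[:, None] * D))
    x_new = np.linalg.solve(A, Phi.T @ y)          # Step 3 (dense stand-in)
    done = np.linalg.norm(x_new - x) / np.linalg.norm(x) < eps
    x = x_new
    if done:
        break                                      # Step 4: terminate
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(round(rel_err, 3))
```

With a gradient-sparse signal and $M = 40$ measurements of $N = 100$ samples, the reweighted quadratic solves drive the reconstruction close to the true signal.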

4. Experimental Results

In this section, we present some numerical results to illustrate the performance of our method. We compare our results with those obtained by four state-of-the-art methods: (a) TVAL3 [21]; (b) RecPF [22]; (c) Hybrid TVL1 [23]; and (d) second-order TV [17]. MATLAB codes of those algorithms are available at the following web sites:
http://www.caam.rice.edu/~optimization/L1/TVAL3/
http://www.caam.rice.edu/~optimization/L1/RecPF/
http://www.dssz.com/495330.html
http://www.engineering.uiowa.edu/~jcb/Software/HDTV/Software.html

To compare performance fairly, all parameters of those algorithms are set to the values suggested by the authors in [17, 21–23], respectively. The reconstruction performance of each method is evaluated in terms of two indicators: the reconstruction relative error (RE) and the peak signal-to-noise ratio (PSNR). The relative error of a reconstructed image is defined as
$$\mathrm{RE} = \frac{\|\hat{x} - x\|_2}{\|x\|_2}, \tag{30}$$
where $x$ and $\hat{x}$ are the original and the reconstructed signal/image, respectively. Clearly, the lower the relative error, the better the reconstruction. The PSNR of a reconstructed image is defined by
$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\frac{1}{mn}\|\hat{x} - x\|_F^2}\right), \tag{31}$$
where $x$ and $\hat{x}$ are the original and the reconstructed image, respectively, and $m \times n$ is the size of the image.
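The two quality measures can be written out directly (standard definitions; the peak value 255 assumes 8-bit images):

```python
import numpy as np

def relative_error(x_rec, x_ref):
    # RE = ||x_rec - x_ref|| / ||x_ref||
    return np.linalg.norm(x_rec - x_ref) / np.linalg.norm(x_ref)

def psnr(x_rec, x_ref, peak=255.0):
    # PSNR = 10 log10(peak^2 / MSE)
    mse = np.mean((x_rec - x_ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

x = np.full((8, 8), 100.0)
x_noisy = x + 5.0                    # constant offset: MSE = 25
print(round(psnr(x_noisy, x), 2))    # 34.15 (dB)
print(relative_error(x_noisy, x))    # 0.05
```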

In our experiments, we consider the common task of reconstructing an image from its undersampled Fourier samples, which is an important problem in magnetic resonance imaging (MRI). We generate our test sets using four images (see Figure 1): three natural images and one MR image. In each test, we acquire compressive samples by first applying randomized partial discrete Fourier transform (DFT) encoding and then adding Gaussian white noise at different signal-to-noise ratio (SNR) levels (30 dB and 40 dB). The encoding procedure can be formulated as
$$y = S F x + n, \tag{32}$$
where $F$ represents the Fourier transform matrix, $S$ is a selection matrix containing $M$ rows of the identity matrix of order $N$, $n$ is Gaussian white noise, and $\Phi = SF$ serves as the sensing matrix.
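This encoding operator can be applied matrix-free through the FFT; the sketch below is our assumed realization (unitary 2-D DFT, uniformly random row selection, toy size):

```python
import numpy as np

# Randomized partial-DFT encoding y = S F x + n at 25% sampling ratio.
rng = np.random.default_rng(4)
n = 64                               # image is n x n
x = rng.standard_normal((n, n))
ratio = 0.25
idx = rng.choice(n * n, size=int(ratio * n * n), replace=False)

def Phi(img):
    # S F: keep the selected rows of the (unitary) 2-D DFT of the image
    return np.fft.fft2(img, norm="ortho").ravel()[idx]

sigma = 0.01
y = Phi(x) + sigma * (rng.standard_normal(idx.size)
                      + 1j * rng.standard_normal(idx.size))
print(y.shape)                       # (1024,)
```

The adjoint (used by CG) zero-fills the unselected frequencies and applies the inverse FFT.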

Figure 1: Test images.

The reconstructed results for the “Lena” image with sampling ratio 25% and noise level 30 dB are shown in Figure 2. We note that the reconstructed image of TVAL3 is contaminated by noise. The results of RecPF and Hybrid TVL1 are better than that of TVAL3 but show some loss of fine details, for example, the texture in the hair and hat regions. Comparing the results of the five methods, we observe that second-order TV and our method preserve fine details more effectively than the other three methods. From Figures 2(e) and 2(f), we can see that the result of our method has better image quality than that of second-order TV.

Figure 2: Reconstructed results of “Lena” image from noisy and undersampled Fourier data (noise level: 30 dB, sampling ratio: 25%). (a) Actual image; (b) TVAL3; (c) RecPF; (d) hybrid TVL1; (e) second-order TV; (f) our method.

In order to highlight the differences, we zoom into the region of the images in Figure 2 (marked by the red box in Figure 2(a)) and present the details in Figure 3. This further demonstrates that second-order TV and our method are superior to the other three methods. Furthermore, it can be seen from the regions marked by red boxes and green arrows in Figures 3(e) and 3(f) that second-order TV provides better preservation of smooth image regions and our method provides better preservation of fine details.

Figure 3: Zoom into a region of images shown in Figure 2 (the region is marked in Figure 2(a) by the red box).

Figure 4 compares the reconstructed results of the “Brain” image obtained by the different methods with sampling ratio 20% and SNR level 40 dB. We can see that the RecPF and Hybrid TVL1 reconstructions exhibit slight patchy artifacts and some loss of fine details. On the whole, however, all five methods provide very good reconstructions of the image features, and it is hard to tell which is better. To make the differences between the methods more evident, we zoom into the region marked by the red box in Figure 4(a) of each image and show them in Figure 5. The regions marked by green arrows clearly demonstrate that our method provides consistently improved reconstructions.

Figure 4: Reconstructed results of “Brain” image from noisy and undersampled Fourier data (noise level: 40 dB, sampling ratio: 20%). (a) Actual image; (b) TVAL3; (c) RecPF; (d) hybrid TVL1; (e) second-order TV; (f) our method.
Figure 5: Zoom into a region of images shown in Figure 4 (the region is marked in Figure 4(a) by the red box).

To characterize the performance quantitatively, we report the PSNRs of the reconstructed images at various sampling ratios and SNR levels in Table 1. We observe that our method provides the best overall PSNR in most cases. Our method obtains an improvement of around 0.35 dB on average over the second-order TV method for the “Lena” image reconstruction, and 0.33 dB, 0.30 dB, and 0.42 dB for the “Pepper”, “Barbara”, and “Brain” image reconstructions, respectively. Moreover, to show the influence of the sampling ratio on reconstruction performance, we plot the relative errors of the reconstructed images at various sampling ratios (from 15% to 50%, in steps of 5%) in Figure 6. We can see that the relative errors of our method remain the lowest among these methods and that our method obtains notable improvements when the sampling ratio is less than 35%.

Table 1: PSNRs of the reconstructed images.
Figure 6: Relative errors of various methods versus sampling ratio.

We compare the average central processing unit (CPU) times of the different methods in Table 2. All experiments are performed under Windows 7 and MATLAB V8.0 (R2012b) running on a Lenovo workstation with an Intel (R) Xeon (R) CPU W3520 at 2.67 GHz and 4 GB of memory.

Table 2: Average CPU times of different methods (s).

From Table 2, we see that RecPF is faster than the other four algorithms. This is primarily because RecPF solves the TV problem using shrinkage and fast Fourier transforms (FFTs). However, this technique can also be applied to accelerate our algorithm, which we plan to pursue in the future.

5. Conclusion

This paper presents a hybrid variational model for image compressive sensing. The model minimizes the sum of three terms corresponding to least-squares data fitting, first-order TV, and second-order TV. We propose an efficient majorization-minimization algorithm to determine the solution of our model. We test our method with natural images and an MR image. Comparisons of the proposed regularization method with four state-of-the-art algorithms demonstrate a significant improvement in the quality of the reconstructed images. Although it achieves better performance in terms of RE and PSNR, our algorithm is slower than FFT-based methods such as RecPF. How to accelerate the algorithm is an important problem which we will investigate in forthcoming research. We hope that our method will be useful in related areas of image compressive sensing such as SAR/ISAR imaging and MR image reconstruction.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous reviewers for their careful reading and valuable comments, which helped improve the paper. This research is supported by the National Natural Science Foundation of China (61171165, 60802039), the Natural Science Foundation of Jiangsu Province (BK2010488), and the Qinglan Outstanding Scholar Project of Jiangsu Province.

References

1. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
2. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
3. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling: a sensing/sampling paradigm that goes against the common knowledge in data acquisition,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
4. H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, vol. 375 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.
5. E. Candès and J. Romberg, “Signal recovery from random projections,” in Computational Imaging III, Proceedings of SPIE, pp. 76–86, San Jose, Calif, USA, January 2005.
6. S. Ma, W. Yin, Y. Zhang, and A. Chakraborty, “An efficient algorithm for compressed MR imaging using total variation and wavelets,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, Anchorage, Alaska, USA, June 2008.
7. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D, vol. 60, no. 1–4, pp. 259–268, 1992.
8. M. Grasmair, “Locally adaptive total variation regularization,” in Scale Space and Variational Methods in Computer Vision, pp. 331–342, Springer, Berlin, Germany, 2009.
9. Q. Chen, P. Montesinos, Q. S. Sun, P. A. Heng, and D. S. Xia, “Adaptive total variation denoising based on difference curvature,” Image and Vision Computing, vol. 28, no. 3, pp. 298–306, 2010.
10. S. Kindermann, S. Osher, and P. W. Jones, “Deblurring and denoising of images by nonlocal functionals,” Multiscale Modeling & Simulation, vol. 4, no. 4, pp. 1091–1115, 2005.
11. M. Grasmair and F. Lenzen, “Anisotropic total variation filtering,” Applied Mathematics and Optimization, vol. 62, no. 3, pp. 323–339, 2010.
12. G. Steidl and T. Teuber, “Anisotropic smoothing using double orientations,” in Scale Space and Variational Methods in Computer Vision, pp. 477–489, Springer, Berlin, Germany, 2009.
13. F. Lenzen, F. Becker, J. Lellmann, S. Petra, and C. Schnörr, “A class of quasi-variational inequalities for adaptive image denoising and decomposition,” Computational Optimization and Applications, vol. 54, no. 2, pp. 371–398, 2013.
14. G. Steidl, S. Didas, and J. Neumann, “Splines in higher order TV regularization,” International Journal of Computer Vision, vol. 70, no. 3, pp. 241–255, 2006.
15. Z. Dogan, S. Lefkimmiatis, A. Bourquard, and M. Unser, “A second-order extension of TV regularization for image deblurring,” in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 705–708, Brussels, Belgium, September 2011.
16. Y. Hu and M. Jacob, “Improved higher degree total variation (HDTV) regularization,” in Proceedings of the 9th IEEE International Symposium on Biomedical Imaging (ISBI '12), pp. 656–659, Barcelona, Spain, May 2012.
17. Y. Hu and M. Jacob, “Higher degree total variation (HDTV) regularization for image recovery,” IEEE Transactions on Image Processing, vol. 21, no. 5, pp. 2559–2571, 2012.
18. X.-G. Lv, J. Le, J. Huang, and L. Jun, “A fast high-order total variation minimization method for multiplicative noise removal,” Mathematical Problems in Engineering, vol. 2013, Article ID 834035, 13 pages, 2013.
19. D. R. Hunter and K. Lange, “A tutorial on MM algorithms,” The American Statistician, vol. 58, no. 1, pp. 30–37, 2004.
20. M. A. T. Figueiredo, J. B. Dias, J. P. Oliveira, and R. D. Nowak, “On total variation denoising: a new majorization-minimization algorithm and an experimental comparison with wavelet denoising,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 2633–2636, Atlanta, Ga, USA, October 2006.
21. C. Li, An Efficient Algorithm for Total Variation Regularization with Applications to the Single Pixel Camera and Compressive Sensing, Rice University, 2009.
22. J. Yang, Y. Zhang, and W. Yin, “A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data,” IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 288–297, 2010.
23. X. Shu and N. Ahuja, “Hybrid compressive sampling via a new total variation TVL1,” in Proceedings of the European Conference on Computer Vision (ECCV '10), pp. 393–404, Springer, Berlin, Germany, 2010.