Research Article  Open Access
Approximate Sparsity and Nonlocal Total Variation Based Compressive MR Image Reconstruction
Abstract
Recent developments in compressive sensing (CS) show that it is possible to accurately reconstruct the magnetic resonance (MR) image from undersampled k-space data by solving nonsmooth convex optimization problems, thereby significantly reducing the scanning time. In this paper, we propose a new MR image reconstruction method based on a compound regularization model that combines the nonlocal total variation (NLTV) and wavelet approximate sparsity. Nonlocal total variation restores periodic textures and local geometric information better than total variation. The wavelet approximate sparsity achieves more accurate sparse reconstruction than the fixed wavelet $\ell_0$ and $\ell_1$ norms. Furthermore, a variable splitting and augmented Lagrangian algorithm is presented to solve the proposed minimization problem. Experimental results on MR image reconstruction demonstrate that the proposed method outperforms many existing MR image reconstruction methods in both quantitative and visual quality assessment.
1. Introduction
Magnetic resonance imaging (MRI) is a noninvasive and nonionizing imaging technique. Due to its noninvasive manner and intuitive visualization of both anatomical structure and physiological function, MRI has been widely applied in clinical diagnosis. Imaging speed is important in many MRI applications. However, both the scanning and reconstruction speed of MRI affect the quality of the reconstructed image. In spite of advances in hardware and pulse sequences, the speed at which data can be collected in MRI is fundamentally limited by physical and physiological constraints. Therefore, many researchers are seeking methods to reduce the amount of acquired data without degrading the image quality [1–3].
In recent years, the compressive sensing (CS) framework has been successfully used to reconstruct MR images from highly undersampled k-space data [4–9]. According to CS theory [10, 11], signals/images can be accurately recovered from significantly fewer measurements than the number of unknowns, or than mandated by traditional Nyquist sampling. MR image acquisition can be viewed as a special case of CS in which the sampled linear combinations are simply individual Fourier coefficients (k-space samples). Therefore, CS makes it possible to obtain accurate reconstructions from a small subset of k-space data. In compressive sensing MRI (CSMRI), we can reconstruct an MR image with good quality from only a small number of measurements. The application of CS to MRI therefore has the potential for significant scan time reductions, with benefits for patients and health care economics.
Because of the ill-posed nature of the CSMRI reconstruction problem, regularization terms are required for a reasonable solution. In existing CSMRI models, the most popular regularizers are $\ell_0$/$\ell_1$ sparsity [4, 9, 12] and total variation (TV) [3, 13]. The $\ell_0$ sparsity regularized CSMRI model can be understood as a penalized least squares problem with an $\ell_0$ norm penalty. It is well known that the complexity of this model grows combinatorially with the number of variables; when the number is large, solving the model is generally intractable. The $\ell_1$ regularization problem, by contrast, can be transformed into an equivalent convex quadratic optimization problem and can therefore be solved very efficiently. Moreover, under some conditions, the resultant solution of $\ell_1$ regularization coincides with one of the solutions of $\ell_0$ regularization [14]. Nevertheless, while $\ell_1$ regularization provides the best convex approximation to $\ell_0$ regularization and is computationally efficient, it often introduces extra bias in estimation and cannot reconstruct an image with the least measurements when applied to CSMRI [15]. In recent years, $\ell_p$ $(0<p<1)$ regularization [16, 17] was introduced into CSMRI, since it can assuredly generate much sparser solutions than $\ell_1$ regularization. Although $\ell_p$ regularizations achieve better performance, they tend to fall into local minima. Moreover, which $p$ should yield the best result is also an open problem. Trzasko and Manduca [18] proposed a CSMRI paradigm based on homotopic approximation of the $\ell_0$ quasi-norm. Although this method has no guarantee of achieving a global minimum, it achieves accurate MR image reconstructions at higher undersampling rates than $\ell_1$ regularization, and it is faster than the $\ell_p$ regularization methods. Recently, Chen and Huang [19] accelerated MRI by introducing wavelet tree structural sparsity into CSMRI.
Despite their high effectiveness in CSMRI recovery, sparsity and TV regularizers often suffer from undesirable visual artifacts and staircase effects. To overcome these drawbacks, some hybrid sparsity and TV regularization methods [5–8] have been proposed. In [5], Huang et al. proposed a new optimization algorithm for MR image reconstruction, named the fast composite splitting algorithm (FCSA), which is based on the combination of variable and operator splitting techniques. In [8], Yang et al. proposed a variable splitting method (RecPF) to solve the hybrid sparsity and TV regularized MR image reconstruction problem. Ma et al. [20] proposed an operator splitting algorithm (TVCMRI) for MR reconstruction. In order to deal with the problem of measuring low and high frequency coefficients, Zhang et al. [6] proposed a new so-called TVW-L2-L1 model, which measures low frequency coefficients and high frequency coefficients with the $\ell_2$ norm and the $\ell_1$ norm, respectively. In [7], an experimental study on the choice of CSMRI regularizations was given. Although the classical TV regularization performs well in CSMRI reconstruction while preserving edges, especially for cartoon-like MR images, it is well known that TV regularization is not suitable for images with fine details and textures, which it often tends to oversmooth. Nonlocal TV regularization extends the classical TV regularization via the nonlocal means filter [21] and has been shown to outperform TV in several inverse problems such as image denoising [22], deconvolution [23], and compressive sensing [24, 25]. In order to improve the signal-to-noise ratio and preserve the fine details of MR images, Gopi et al. [26], Huang and Yang [27], and Liang et al. [28] have proposed nonlocal TV regularization based MR reconstruction and sensitivity encoding reconstruction.
In this paper, we propose a novel compound regularization based compressive MR image reconstruction method, which exploits the nonlocal total variation (NLTV) and the wavelet approximate sparsity prior. The approximate sparsity, which replaces the traditional $\ell_0$ and $\ell_1$ regularizers of the compressive MR image reconstruction model, yields sparser solutions and is much easier to solve. The NLTV is much better than TV at preserving sharp edges while recovering local structural details. To solve the compound regularization model, we develop an alternating iterative scheme using the variable splitting and augmented Lagrangian algorithm. Experimental results show that the proposed method can effectively improve the quality of MR image reconstruction. The rest of the paper is organized as follows. In Section 2 we review compressive sensing and MRI reconstruction. In Section 3 we propose our model and algorithm. The experimental results and conclusions are presented in Sections 4 and 5, respectively.
2. Compressive Sensing and MRI Reconstruction
Compressive sensing [10, 11], as a new sampling and compression theory, is able to reconstruct an unknown signal from a very limited number of samples. It provides a firm theoretical foundation for the accurate reconstruction of MR images from highly undersampled k-space measurements and significantly reduces the MRI scan duration.
Suppose $x \in \mathbb{C}^N$ is an MR image and $F_p$ is a partial Fourier transform; then the sampling measurement of the MR image in k-space can be defined as
$$y = F_p x. \quad (1)$$
The compressive MR image reconstruction problem is to recover $x$ given the measurement $y$ and the sampling matrix $F_p$. Undersampling occurs whenever the number of k-space samples $M$ is less than the number of unknowns $N$. In that case, the compressive MR image reconstruction is an underdetermined problem.
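As an illustration of this measurement model, the following sketch (in Python/NumPy rather than the MATLAB environment used later in the experiments) simulates the partial Fourier operator $F_p$ as a full 2D FFT followed by restriction to a sampling mask. The function name and the orthonormal FFT normalization are our illustrative choices, not from the paper.

```python
import numpy as np

def partial_fourier(x, mask):
    """Sample the k-space of image x at the locations where mask is True.

    Mimics the measurement model y = F_p x of equation (1): a full 2D DFT
    followed by restriction to the sampled k-space positions.
    """
    k_space = np.fft.fft2(x, norm="ortho")  # full 2D DFT of the image
    return k_space[mask]                    # keep only the sampled entries

# Toy example: a 64x64 image sampled at roughly 25% of k-space.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.25
y = partial_fourier(x, mask)

# Undersampling: fewer measurements than unknowns, hence underdetermined.
assert y.size == int(mask.sum()) and y.size < x.size
```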
In general, compressive sensing reconstructs the unknowns from the measurements by minimizing the $\ell_0$ norm of the sparsified image $\Psi x$, where $\Psi$ represents a sparsity transform for the image. In this paper, we choose the orthonormal wavelet transform as the sparsity transform. The typical compressive MR image reconstruction is then obtained by solving the following constrained optimization problem [4, 9, 12]:
$$\min_x \|\Psi x\|_0 \quad \text{subject to} \quad F_p x = y. \quad (2)$$
However, in terms of computational complexity, the $\ell_0$ norm optimization problem (2) is a typical NP-hard problem and is difficult to solve. Under a certain condition on the restricted isometry property, the $\ell_0$ norm can be replaced by the $\ell_1$ norm. Therefore, the optimization problem (2) is relaxed to the following alternative convex optimization problem:
$$\min_x \|\Psi x\|_1 \quad \text{subject to} \quad F_p x = y. \quad (3)$$
When the measurements are contaminated with noise, the typical compressive MR image reconstruction problem using the $\ell_1$ relaxation of the $\ell_0$ norm is formulated as the following unconstrained Lagrangian version:
$$\min_x \frac{1}{2}\|F_p x - y\|_2^2 + \lambda \|\Psi x\|_1, \quad (4)$$
where $\lambda$ is a positive parameter.
Despite the high effectiveness of sparsity regularized compressive MR image reconstruction methods, they often suffer from undesirable visual artifacts such as Gibbs ringing in the result. Due to its desirable ability to preserve edges, the total variation (TV) model has been successfully used in compressive MR image reconstruction [3, 13]. But the TV regularizer still has some limitations that restrict its performance: it cannot generate good enough results for images with many small structures and often suffers from staircase artifacts. In order to combine the advantages of the sparsity-based and TV models and avoid their main drawbacks, a TV regularizer, corresponding to a finite-difference sparsifying transform, is typically incorporated into the sparsity regularized compressive MR image reconstruction [5–8]. In this case the optimization problem is written as
$$\min_x \frac{1}{2}\|F_p x - y\|_2^2 + \lambda \|\Psi x\|_1 + \mu\, \mathrm{TV}(x), \quad (5)$$
where $\mu$ is a positive parameter. The TV is defined discretely as $\mathrm{TV}(x) = \sum_i \sqrt{(\nabla_1 x)_i^2 + (\nabla_2 x)_i^2}$, where $\nabla_1$ and $\nabla_2$ are the horizontal and the vertical gradient operators, respectively. The compound optimization model (5) is based on the fact that piecewise smooth MR images can be sparsely represented by the wavelet transform and should have small total variation.
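The discrete isotropic TV above is straightforward to compute; a minimal NumPy sketch follows. The zero-padded forward differences at the image boundary are an assumption (boundary handling conventions vary).

```python
import numpy as np

def total_variation(x):
    """Isotropic discrete TV: sum over pixels of sqrt(dh^2 + dv^2),
    where dh and dv are horizontal/vertical forward differences
    (last row/column difference set to zero by duplication)."""
    dh = np.diff(x, axis=1, append=x[:, -1:])  # horizontal gradient
    dv = np.diff(x, axis=0, append=x[-1:, :])  # vertical gradient
    return np.sqrt(dh**2 + dv**2).sum()

# A constant (piecewise smooth) image has zero TV; an edge raises it.
flat = np.ones((32, 32))
step = np.zeros((4, 4)); step[:, 2:] = 1.0  # one vertical edge
assert total_variation(flat) == 0.0
assert total_variation(step) == 4.0          # one unit jump per row
```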
3. Proposed Model and Algorithm
As mentioned above, the joint TV and $\ell_0$/$\ell_1$ norm minimization model is a useful way to reconstruct MR images. However, it still has some limitations that restrict its performance. The $\ell_0$ norm needs a combinatorial search for its minimization and is too sensitive to noise. $\ell_1$ problems can be solved very efficiently, but the solution is not as sparse, which influences the performance of MRI reconstruction. The TV model can preserve edges, but it tends to flatten inhomogeneous areas, such as textures. To overcome these shortcomings, a novel method is proposed for compressive MR imaging based on the wavelet approximate sparsity and nonlocal total variation (NLTV) regularization, named WasNLTV.
3.1. Approximate Sparsity
The problems of using the $\ell_0$ norm in compressive MR imaging (i.e., the need for a combinatorial search for its minimization and its high sensitivity to noise) are both due to the fact that the $\ell_0$ norm of a vector is a discontinuous function of that vector. As in [29, 30], our idea is to approximate this discontinuous function by a continuous one, named the approximate sparsity function, which provides a smooth measure of the $\ell_0$ norm and better sparsity than the $\ell_1$ regularizer.
The approximate sparsity function is defined as
$$f_\sigma(t) = \exp\left(-\frac{t^2}{2\sigma^2}\right). \quad (6)$$
The parameter $\sigma$ may be used to control the accuracy with which $f_\sigma$ approximates the Kronecker delta. In mathematical terms, we have
$$\lim_{\sigma \to 0} f_\sigma(t) = \begin{cases} 1, & t = 0, \\ 0, & t \neq 0. \end{cases} \quad (7)$$
Define the continuous multivariate approximate sparsity function as
$$F_\sigma(x) = \sum_{i=1}^{N} f_\sigma(x_i). \quad (8)$$
It is clear from (7) that $F_\sigma(x)$ is an indicator of the number of zero entries in $x$ for small values of $\sigma$. Therefore, the $\ell_0$ norm can be approximated by
$$\|x\|_0 \approx N - F_\sigma(x). \quad (9)$$
Note that the larger the value of $\sigma$, the smoother $F_\sigma$ and the worse the approximation to the $\ell_0$ norm; the smaller the value of $\sigma$, the closer the behavior of $F_\sigma$ to the $\ell_0$ norm.
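The trade-off governed by $\sigma$ can be seen numerically. The sketch below uses the Gaussian surrogate in the spirit of the smoothed-$\ell_0$ approach of [29]; the exact functional form used in the paper's implementation may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

def approx_l0(x, sigma):
    """Smooth surrogate for the l0 norm: with f_sigma(t) =
    exp(-t^2 / (2 sigma^2)) tending to the Kronecker delta as
    sigma -> 0, we get ||x||_0 ~= N - sum_i f_sigma(x_i)."""
    f = np.exp(-x**2 / (2.0 * sigma**2))
    return x.size - f.sum()

x = np.array([0.0, 0.0, 3.0, -2.0, 0.0])
# The true number of nonzeros is 2.
assert abs(approx_l0(x, sigma=0.01) - 2.0) < 1e-6  # small sigma: tight fit
assert approx_l0(x, sigma=10.0) < 2.0              # large sigma: smooth, loose
```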
3.2. Nonlocal Total Variation
Although the classical TV is surprisingly efficient at preserving edges, it is well known that TV is not suitable for images with fine structures, details, and textures, which are very important in MR images. The NLTV is a variational extension of the nonlocal means filter [21]. NLTV uses the whole image information, instead of only adjacent pixel information, to calculate the gradients in the regularization term. The NLTV has been proven to be more efficient than TV at improving the signal-to-noise ratio, preserving not only sharp edges but also fine details and repetitive patterns [26–28]. In this paper, we use the NLTV to replace the TV in compound regularization based compressive MR image reconstruction.
Let $u: \Omega \to \mathbb{R}$ be a real function on the discrete image domain $\Omega$, and let $w: \Omega \times \Omega \to \mathbb{R}^{+}$ be a weight function. For a given image $u$, the weighted graph gradient $\nabla_w u(p)$ is defined as the vector of all directional derivatives $\nabla_w u(p, \cdot)$ at $p$:
$$\nabla_w u(p, q) = \big(u(q) - u(p)\big)\sqrt{w(p, q)}, \quad q \in \Omega. \quad (10)$$
The directional derivatives apply to all the nodes since the weight $w$ is extended to the whole domain $\Omega$. Let us denote by $v$ vectors defined on pairs of pixels, $v: \Omega \times \Omega \to \mathbb{R}$; the nonlocal graph divergence $\operatorname{div}_w v$ is defined as the adjoint of the nonlocal gradient:
$$\operatorname{div}_w v(p) = \sum_{q \in \Omega} \big(v(p, q) - v(q, p)\big)\sqrt{w(p, q)}. \quad (11)$$
Analogously to the classical TV, the $\ell_1$ norm is in general more efficient than the $\ell_2$ norm for sparse reconstruction; in this paper, we are therefore interested in the $\ell_1$-based NLTV. Based on the above definitions, the NLTV is defined as follows:
$$\mathrm{NLTV}(u) = \|\nabla_w u\|_1 = \sum_{p \in \Omega} \sqrt{\sum_{q \in \Omega} w(p, q)\big(u(q) - u(p)\big)^2}. \quad (12)$$
The weight function $w(p, q)$ denotes how much the difference between pixels $p$ and $q$ is penalized in the image, and is calculated by
$$w(p, q) = \frac{1}{Z(p)} \exp\left(-\frac{\|u(B_p) - u(B_q)\|_2^2}{h^2}\right), \quad (13)$$
where $B_p$ and $B_q$ denote small patches in the image $u$ centered at the coordinates $p$ and $q$, respectively, $Z(p)$ is the normalizing factor, and $h$ is a filtering parameter.
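The patch-based weight in (13) can be sketched for a single pixel pair as below. Normalization over the search window is omitted, and the patch size and filtering parameter are illustrative defaults, not values from the paper.

```python
import numpy as np

def nl_weight(img, p, q, patch=3, h=0.5):
    """Unnormalized nonlocal weight between pixels p and q:
    a Gaussian of the squared distance between the patches
    centered at p and q, exp(-||u(B_p) - u(B_q)||^2 / h^2)."""
    r = patch // 2
    Pp = img[p[0]-r:p[0]+r+1, p[1]-r:p[1]+r+1]  # patch around p
    Pq = img[q[0]-r:q[0]+r+1, q[1]-r:q[1]+r+1]  # patch around q
    return np.exp(-np.sum((Pp - Pq) ** 2) / h**2)

img = np.zeros((16, 16)); img[:, 8:] = 1.0  # two flat regions
# Identical patches (same flat region) get weight 1; patches from
# different regions are strongly down-weighted. This is what lets
# NLTV average over repetitive structure anywhere in the image.
assert nl_weight(img, (4, 3), (10, 3)) == 1.0
assert nl_weight(img, (4, 3), (4, 12)) < 0.5
```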
3.3. The Description of Proposed Model and Algorithm
According to the compressive MR image reconstruction models described in Section 2, the proposed WasNLTV model for compressive MR image reconstruction is
$$\min_x \frac{1}{2}\|F_p x - y\|_2^2 + \lambda\big(N - F_\sigma(\Psi x)\big) + \mu\, \mathrm{NLTV}(x), \quad (14)$$
where $\lambda$ and $\mu$ are positive regularization parameters. It should be noted that the optimization problem in (14), although its data-fidelity term is convex, is very hard to solve owing to the nonsmooth regularization terms and its huge dimensionality. To solve the problem in (14), we use the variable splitting and augmented Lagrangian algorithm, following closely the methodology introduced in [31]. The core idea is to introduce a new variable per regularizer and then exploit the alternating direction method of multipliers (ADMM) to solve the resulting constrained optimization problem.
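To make the splitting-plus-multipliers idea concrete on a much simpler model than (14), the sketch below applies ADMM to a plain $\ell_1$-regularized least squares problem: introduce $z$ with $x = z$, then alternate a quadratic $x$-step, a soft-threshold $z$-step, and a dual update. This is a generic textbook illustration, not the paper's WasNLTV algorithm; all names and parameter values are ours.

```python
import numpy as np

def admm_lasso(A, y, lam=0.1, rho=1.0, iters=200):
    """Minimal ADMM sketch for min_x 0.5||Ax - y||^2 + lam*||x||_1,
    via the split x = z and scaled dual variable u."""
    n = A.shape[1]
    z = np.zeros(n); u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)   # x-step system matrix, fixed
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(M, Aty + rho * (z - u))            # quadratic step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # shrink
        u = u + x - z                                          # dual ascent
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20); x_true[[2, 7]] = [1.5, -2.0]   # sparse ground truth
y = A @ x_true
x_hat = admm_lasso(A, y, lam=0.05)
assert np.linalg.norm(x_hat - x_true) < 0.1  # sparse signal recovered
```

The same pattern, with more splitting variables and an NLTV proximal step in place of one quadratic step, underlies the algorithm developed below.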
By introducing intermediate variable vectors, one per regularizer, the problem (14) can be transformed into an equivalent constrained problem (15). The optimization problem (15) can be written in the compact form (16), with the notation defined in (17). The augmented Lagrangian of problem (16) is given in (18), where the penalty weight is a positive constant and the Lagrange multipliers are associated with the equality constraints. The basic idea of the augmented Lagrangian method is to seek a saddle point of the augmented Lagrangian, which is also the solution of problem (16). Using the ADMM algorithm, we solve problem (16) by iteratively solving the problems in (19).
It is evident that the minimization problem (19) is still hard to solve efficiently in a direct way, since it involves a nonseparable quadratic term and nondifferentiable terms. To solve it, the ADMM strategy is employed again, alternately minimizing with respect to one variable while fixing the others. In this way, problem (19) can be solved through the following four subproblems. (1) The first subproblem: fixing all the other variables, the optimization problem to be solved is given in (21).
It is clear that problem (21) has a quadratic objective. By direct computation, we obtain the Euler-Lagrange equation for (21). Therefore, the solution of problem (21) is given in (23).
Due to the computational complexity of NLTV, as in [27], the NLTV regularization is performed only once in every few iterations. (2) The second subproblem: fixing all the other variables, the optimization problem to be solved is given in (24). Clearly, problem (24) has a quadratic objective; its solution is simply the closed form in (25). (3) The third subproblem: fixing all the other variables, the optimization problem to be solved is given in (26). As with problem (24), problem (26) has a quadratic objective, and its gradient takes a simple form. The steepest descent method is used to solve (26) iteratively by applying (27). (4) The fourth subproblem: fixing all the other variables, the optimization problem to be solved is given in (28). Problem (28) is an $\ell_1$-norm regularized optimization problem. Its solution is the well-known soft threshold [32], given in (29), where the soft-threshold function is applied componentwise.
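The componentwise soft-threshold operator used in the fourth subproblem has the standard closed form $\operatorname{soft}(v, \tau) = \operatorname{sign}(v)\max(|v| - \tau, 0)$, sketched below.

```python
import numpy as np

def soft_threshold(v, tau):
    """Componentwise soft-threshold operator: the closed-form
    solution of min_z 0.5||z - v||^2 + tau*||z||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
out = soft_threshold(v, 0.5)
# Small entries are zeroed; large entries are shrunk toward zero.
assert np.allclose(out, [-1.5, 0.0, 0.0, 0.0, 2.5])
```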
In conclusion, the ADMM algorithm for optimization problem (16) is shown in Algorithm 1.
4. Experimental Results
In this section, a series of experiments on four 2D MR images (named brain, chest, artery, and cardiac) is carried out to evaluate the proposed and existing methods. Figure 1 shows the test images. All experiments are conducted on a PC with an Intel Core i7-3520M 2.90 GHz CPU, in the MATLAB environment. The proposed method (named WasNLTV) is compared with the existing methods TVCMRI [20], RecPF [8], and FCSA [5]. We evaluate the performance of the various methods both visually and quantitatively in terms of signal-to-noise ratio (SNR) and root-mean-square error (RMSE) values. The SNR and RMSE are defined as
$$\mathrm{SNR} = 10 \log_{10} \frac{\sum_{i}\big(x_i - \mu(x)\big)^2}{\sum_{i}\big(x_i - \hat{x}_i\big)^2}, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i}\big(x_i - \hat{x}_i\big)^2}, \quad (30)$$
where $x$ and $\hat{x}$ denote the original image and the reconstructed image, respectively, and $\mu(\cdot)$ is the mean function.
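These two metrics are simple to compute; the sketch below follows one common convention consistent with the description above (SNR as the ratio of signal variance to reconstruction MSE, in dB). The exact normalization used in the paper's experiments is an assumption.

```python
import numpy as np

def snr(x, x_hat):
    """SNR in dB: 10*log10(variance of x / MSE of the reconstruction)."""
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(np.var(x) / mse)

def rmse(x, x_hat):
    """Root-mean-square error between original and reconstruction."""
    return np.sqrt(np.mean((x - x_hat) ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0])
assert rmse(x, x) == 0.0                   # perfect reconstruction
assert snr(x, x + 0.01) > snr(x, x + 0.1)  # smaller error -> higher SNR
```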
(a) Chest
(b) Artery
(c) Cardiac
(d) Brain
For fair comparison, the experiments use the same observation method as TVCMRI. In k-space, we randomly obtain more samples at low frequencies and fewer samples at higher frequencies. This sampling scheme is widely used for compressed MR image reconstruction. Suppose an MR image has $N$ pixels and the partial Fourier transform in problem (1) consists of $M$ rows of the $N \times N$ matrix corresponding to the full 2D discrete Fourier transform. The chosen $M$ rows correspond to the sampling measurements $y$. Therefore, the sampling ratio is defined as $M/N$. In the experiments, zero-mean Gaussian white noise generated with the randn function in MATLAB is added to the measurements. The regularization and penalty parameters are set to 0.001, 0.035, and 1, respectively. To fairly compare the MR images reconstructed by the various algorithms, all methods run 50 iterations and the Rice wavelet toolbox is used for the wavelet transform.
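A variable-density random mask of the kind described above can be sketched as follows: sampling probability falls off with distance from the k-space center, and a fixed number of locations is drawn to hit the target ratio exactly. The power-law density profile and its parameters are illustrative choices, not the exact scheme of TVCMRI.

```python
import numpy as np

def variable_density_mask(shape, ratio, decay=3.0, seed=0):
    """Random k-space mask with more samples at low frequencies
    than at high ones, drawn to match the target sampling ratio."""
    rng = np.random.default_rng(seed)
    n1, n2 = shape
    f1 = np.fft.fftfreq(n1)[:, None]       # normalized frequencies
    f2 = np.fft.fftfreq(n2)[None, :]
    radius = np.sqrt(f1**2 + f2**2)        # distance from k-space center
    weight = 1.0 / (1.0 + (radius / 0.05) ** decay)
    n_keep = int(ratio * n1 * n2)
    idx = rng.choice(n1 * n2, size=n_keep, replace=False,
                     p=(weight / weight.sum()).ravel())
    mask = np.zeros(n1 * n2, dtype=bool)
    mask[idx] = True
    return mask.reshape(shape)

mask = variable_density_mask((128, 128), ratio=0.2)
assert abs(mask.mean() - 0.2) < 0.01                       # target ratio met
assert mask[:8, :8].mean() > mask[60:68, 60:68].mean()     # denser center
```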
Table 1 summarizes the average reconstruction accuracy obtained using the different methods at different sampling ratios on the set of test images. From Table 1, it can be seen that the proposed WasNLTV method attains the highest SNR (dB) in all cases. Figure 2 plots the SNR values against the sampling ratios for the different images. It can also be seen that the WasNLTV method achieves the largest improvement in SNR values.

(a) Chest
(b) Artery
(c) Cardiac
(d) Brain
Table 2 gives the RMSE results of the MR images reconstructed by the different algorithms. From Table 2, it can be seen that the WasNLTV method attains the lowest RMSE in all cases. As is well known, the lower the RMSE, the better the reconstructed image. That is to say, the MR images reconstructed by WasNLTV have the best visual quality.

To illustrate visual quality, the compressive MR images reconstructed by the different methods at a sampling ratio of 20% are shown in Figures 3, 4, 5, and 6. For better visual comparison, we zoom in on a small patch where edges and texture are abundant. From the figures, it can be observed that WasNLTV always obtains the best visual effect on all MR images. In particular, the edges of organs and tissues obtained by WasNLTV are much clearer and easier to identify.
(a) TVCMRI
(b) Cropped TVCMRI
(c) RecPF
(d) Cropped RecPF
(e) FCSA
(f) Cropped FCSA
(g) WasNLTV
(h) Cropped WasNLTV
(a) TVCMRI
(b) Cropped TVCMRI
(c) RecPF
(d) Cropped RecPF
(e) FCSA
(f) Cropped FCSA
(g) WasNLTV
(h) Cropped WasNLTV
(a) TVCMRI
(b) Cropped TVCMRI
(c) RecPF
(d) Cropped RecPF
(e) FCSA
(f) Cropped FCSA
(g) WasNLTV
(h) Cropped WasNLTV
(a) TVCMRI
(b) Cropped TVCMRI
(c) RecPF
(d) Cropped RecPF
(e) FCSA
(f) Cropped FCSA
(g) WasNLTV
(h) Cropped WasNLTV
Figure 7 compares the different methods at a sampling ratio of 20% in terms of CPU time versus SNR. In general, the computational complexity of NLTV is much higher than that of TV. To reduce the computational complexity of WasNLTV, in the experiments we perform the NLTV regularization only once in every few iterations. Despite the higher computational complexity of the NLTV step, WasNLTV obtains the best reconstruction results on all MR images, achieving the highest SNR in less CPU time.
(a) Chest
(b) Artery
(c) Cardiac
(d) Brain
5. Conclusions
In this paper, we propose a new compound regularization based compressive sensing MRI reconstruction model, which exploits the NLTV regularization and the wavelet approximate sparsity prior. The approximate sparsity prior is used in the compressive MR image reconstruction model instead of the $\ell_0$ or $\ell_1$ norm; it produces much sparser results, and the resulting optimization problem is much easier to solve. Because the NLTV takes advantage of the redundancy and self-similarity in an MR image, it can effectively avoid the blocky artifacts caused by traditional TV regularization and keep the fine edges of organs and tissues. As for the algorithm, we apply the variable splitting and augmented Lagrangian algorithm to solve the compound regularization minimization problem. Experiments on test images demonstrate that the proposed method leads to higher SNR values and, more importantly, preserves the details and edges of MR images.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank the anonymous referees for their valuable and helpful comments. The work was supported by the National Natural Science Foundation of China under Grants 61162022 and 61362036, the Natural Science Foundation of Jiangxi China under Grant 20132BAB201021, the Jiangxi Science and Technology Research Development Project of China under Grant KJLD12098, and the Jiangxi Science and Technology Research Project of Education Department of China under Grant GJJ12632.
References
[1] K. P. Pruessmann, "Encoding and reconstruction in parallel MRI," NMR in Biomedicine, vol. 19, no. 3, pp. 288–299, 2006.
[2] B. Sharif, J. A. Derbyshire, A. Z. Faranesh, and Y. Bresler, "Patient-adaptive reconstruction and acquisition in dynamic imaging with sensitivity encoding (PARADISE)," Magnetic Resonance in Medicine, vol. 64, no. 2, pp. 501–513, 2010.
[3] M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: the application of compressed sensing for rapid MR imaging," Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
[4] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, "Compressed sensing MRI," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 72–82, 2008.
[5] J. Huang, S. Zhang, and D. Metaxas, "Efficient MR image reconstruction for compressed MR imaging," Medical Image Analysis, vol. 15, no. 5, pp. 670–679, 2011.
[6] Z. Zhang, Y. Shi, W. P. Ding, and B. C. Yin, "MR images reconstruction based on TVW-L2-L1 model," Journal of Visual Communication and Image Representation, vol. 24, no. 2, pp. 187–195, 2013.
[7] A. Majumdar and R. K. Ward, "On the choice of compressed sensing priors and sparsifying transforms for MR image reconstruction: an experimental study," Signal Processing: Image Communication, vol. 27, no. 9, pp. 1035–1048, 2012.
[8] J. Yang, Y. Zhang, and W. Yin, "A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 288–297, 2010.
[9] S. Ravishankar and Y. Bresler, "MR image reconstruction from highly undersampled k-space data by dictionary learning," IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1028–1041, 2011.
[10] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[11] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[12] X. Qu, X. Cao, D. Guo, C. Hu, and Z. Chen, "Combined sparsifying transforms for compressed sensing MRI," Electronics Letters, vol. 46, no. 2, pp. 121–123, 2010.
[13] F. Knoll, K. Bredies, T. Pock, and R. Stollberger, "Second order total generalized variation (TGV) for MRI," Magnetic Resonance in Medicine, vol. 65, no. 2, pp. 480–491, 2011.
[14] D. Donoho and J. Tanner, "Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing," Philosophical Transactions of the Royal Society A, vol. 367, no. 1906, pp. 4273–4293, 2009.
[15] Z. B. Xu, X. Y. Chang, and F. M. Xu, "L1/2 regularization: a thresholding representation theory and a fast solver," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 7, pp. 1013–1027, 2012.
[16] R. Chartrand, "Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data," in Proceedings of the 6th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '09), pp. 262–265, July 2009.
[17] J. C. Ye, S. Tak, Y. Han, and H. W. Park, "Projection reconstruction MR imaging using FOCUSS," Magnetic Resonance in Medicine, vol. 57, no. 4, pp. 764–775, 2007.
[18] J. Trzasko and A. Manduca, "Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization," IEEE Transactions on Medical Imaging, vol. 28, no. 1, pp. 106–121, 2009.
[19] C. Chen and J. Z. Huang, "The benefit of tree sparsity in accelerated MRI," Medical Image Analysis, vol. 18, no. 6, pp. 834–842, 2014.
[20] S. Q. Ma, W. T. Yin, Y. Zhang, and A. Chakraborty, "An efficient algorithm for compressed MR imaging using total variation and wavelets," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, Anchorage, Alaska, USA, June 2008.
[21] A. Buades, B. Coll, and J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 490–530, 2005.
[22] F. F. Dong, H. L. Zhang, and D. X. Kong, "Nonlocal total variation models for multiplicative noise removal using split Bregman iteration," Mathematical and Computer Modelling, vol. 55, no. 3-4, pp. 939–954, 2012.
[23] S. Yun and H. Woo, "Linearized proximal alternating minimization algorithm for motion deblurring by nonlocal regularization," Pattern Recognition, vol. 44, no. 6, pp. 1312–1326, 2011.
[24] X. Zhang, M. Burger, X. Bresson, and S. Osher, "Bregmanized nonlocal regularization for deconvolution and sparse reconstruction," SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 253–276, 2010.
[25] W. Dong, X. Yang, and G. Shi, "Compressive sensing via reweighted TV and nonlocal sparsity regularisation," Electronics Letters, vol. 49, no. 3, pp. 184–186, 2013.
[26] V. P. Gopi, P. Palanisamy, and K. A. Wahid, "MR image reconstruction based on iterative split Bregman algorithm and nonlocal total variation," Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 985819, 16 pages, 2013.
[27] J. Huang and F. Yang, "Compressed magnetic resonance imaging based on wavelet sparsity and nonlocal total variation," in Proceedings of the 9th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '12), pp. 968–971, Barcelona, Spain, May 2012.
[28] D. Liang, H. F. Wang, Y. C. Chang, and L. L. Ying, "Sensitivity encoding reconstruction with nonlocal total variation regularization," Magnetic Resonance in Medicine, vol. 65, no. 5, pp. 1384–1392, 2011.
[29] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "A fast approach for overcomplete sparse decomposition based on smoothed L0 norm," IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 289–301, 2009.
[30] J.-H. Wang, Z.-T. Huang, Y.-Y. Zhou, and F.-H. Wang, "Robust sparse recovery based on approximate L0 norm," Acta Electronica Sinica, vol. 40, no. 6, pp. 1185–1189, 2012.
[31] M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.
[32] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," Multiscale Modeling & Simulation, vol. 4, no. 4, pp. 1168–1200, 2005.
Copyright
Copyright © 2014 Chengzhi Deng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.