BioMed Research International
Volume 2016, Article ID 2860643, 7 pages
http://dx.doi.org/10.1155/2016/2860643
Research Article

Two-Layer Tight Frame Sparsifying Model for Compressed Sensing Magnetic Resonance Imaging

1Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Shenzhen, Guangdong 518055, China
2The Beijing Center for Mathematics and Information Interdisciplinary Sciences, Beijing 100048, China
3Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, The University of Sydney, Sydney, NSW 2006, Australia
4Nanchang University, Nanchang, Jiangxi, China

Received 20 April 2016; Revised 5 August 2016; Accepted 18 August 2016

Academic Editor: Andrey Krylov

Copyright © 2016 Shanshan Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Compressed sensing magnetic resonance imaging (CSMRI) employs image sparsity to reconstruct MR images from incoherently undersampled k-space data. Existing CSMRI approaches have exploited analysis transforms, synthesis dictionaries, and their variants to trigger image sparsity. Nevertheless, the accuracy, efficiency, or acceleration rate of existing CSMRI methods can still be improved due to either lack of adaptability, high complexity of the training, or insufficient sparsity promotion. To properly balance the three factors, this paper proposes a two-layer tight frame sparsifying (TRIMS) model for CSMRI by sparsifying the image with a product of a fixed tight frame and an adaptively learned tight frame. The two-layer sparsifying and adaptive learning nature of TRIMS has enabled accurate and efficient MR reconstruction from highly undersampled data. To solve the reconstruction problem, a three-level Bregman numerical algorithm is developed. The proposed approach has been compared to three state-of-the-art methods over a scanned physical phantom and in vivo MR datasets and encouraging performances have been achieved.

1. Introduction

Compressed sensing magnetic resonance imaging (CSMRI) is a very popular signal-processing-based technique for accelerating MRI scans. Different from the classical fixed-rate sampling dogma of the Shannon-Nyquist sampling theorem, CS exploits the sparsity of an MR image and allows CSMRI to recover MR images from far fewer incoherently sampled k-space data [1]. The classical formulation of CSMRI can be written as

$$\min_{u}\ \|\Psi u\|_{1} + \frac{\lambda}{2}\|F_{p}u - f\|_{2}^{2}, \quad (1)$$

where $u \in \mathbb{C}^{N}$ and $f \in \mathbb{C}^{M}$, respectively, denote the MR image and its corresponding undersampled raw k-space data, $F_{p} \in \mathbb{C}^{M \times N}$ represents the undersampled Fourier encoding matrix with $M \ll N$, and $\|\Psi u\|_{1}$ is an analysis model which sparsifies the image with transform $\Psi$ under the $\ell_{1}$ norm constraint. $N$ and $M$ are the numbers of image pixels and measured data. The classical formulation is typically equipped with total variation and wavelet transforms and it can be solved very efficiently [1]. However, the efficiency comes at the expense of accuracy, especially with highly undersampled noisy measurements, due to lack of adaptability or insufficient sparsity promotion. To address this issue, diverse methods have been proposed [2, 3] and we focus on the following three representative directions.
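As a concrete sketch of this classical formulation (written in Python rather than the MATLAB used later in the experiments), the following minimal ISTA solver handles the special case where the sparsifying transform $\Psi$ is the identity; the function names, step size, and parameter values here are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def soft_threshold(x, t):
    """Complex soft-thresholding, the proximal operator of the l1 norm."""
    mag = np.abs(x)
    return np.maximum(mag - t, 0) / np.maximum(mag, 1e-12) * x

def csmri_ista(kspace, mask, lam=0.02, n_iter=100):
    """Minimise lam*||u||_1 + 0.5*||F_p u - f||_2^2 by ISTA, taking the
    identity as a stand-in for the sparsifying transform for brevity.

    kspace : undersampled k-space data f (zeros at unsampled locations)
    mask   : boolean sampling mask (so F_p is mask * orthonormal FFT)
    """
    u = np.fft.ifft2(kspace, norm="ortho")  # zero-filled starting image
    for _ in range(n_iter):
        # gradient step on the data term; F_p is a projection, so step size 1 works
        resid = mask * np.fft.fft2(u, norm="ortho") - kspace
        u = u - np.fft.ifft2(mask * resid, norm="ortho")
        # proximal (shrinkage) step on the sparsity term
        u = soft_threshold(u, lam)
    return u
```

Swapping the identity for a wavelet or tight frame analysis operator recovers the general formulation.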

One main endeavor is employing nonlocal operations or redundant transforms to analytically sparsify the MR image [4]. Typical examples include nonlocal total variation regularization [5], patch-based directional wavelets [6], and wavelet tree sparsity based CSMRI techniques [7]. These methods generally have straightforward models; nevertheless, their reconstruction accuracy is not fully satisfactory due to lack of adaptability. We previously proposed a one-layer data-driven tight frame (DDTF) model for undersampled image reconstruction [8]. It is generally very efficient, but its performance is still limited by its insufficient sparsity promotion and its reliance on the Bregman iteration technique for bringing back image details.

Another effort is training an adaptive dictionary to sparsely represent the MR image in a synthesis manner. For example, DLMRI [9], BPFA-triggered MR reconstruction [10], and our proposed TBMDU [3] employ dictionary learning to adaptively capture image structures while promoting sparsity. These methods can generally achieve accurate MR image reconstruction with strong noise suppression capability. Unfortunately, the complexity of these approaches is very high and the sparsity is still limited to a one-layer representation of the target image.

The third group of endeavors could be regarded as variants of the above two efforts, which aim to combine the advantages of both the analysis and synthesis sparse models. For example, the balanced tight frame model [11] introduces a penalty term to bridge the gap between the analysis and synthesis models. Unfortunately, although it possesses an appealing mathematical interpretation, its sparsity promotion is still limited to a single layer and therefore its performance is only comparable to that of the analysis model. To further promote sparsity, a wavelet driven dictionary learning technique (named WaveDLMRI) [12] and our total variation driven dictionary learning approach (named GradDLRec) [13] adaptively represent the sparse coefficients derived from the analysis transform rather than directly encoding the underlying image. Nevertheless, despite achieving encouraging performances, they still rely on the computationally expensive dictionary learning technique.

Recently, a double sparsity model and doubly sparse transforms have been proposed in the general image/signal processing community [14, 15]. The double sparsity model trains a sparse dictionary over a fixed base, while the doubly sparse transform learns an adaptive sparse matrix over an analytic transform. Their application to image denoising has produced promising results; however, in these works the two-layer sparsifying model mainly serves efficient learning, storage, and implementation by constraining the dictionary to be sparse, rather than further promoting the sparsity of the image itself.

Motivated by the above observations, we develop a two-layer tight frame sparsifying (TRIMS) model for CSMRI by sparsifying the image with the product of a fixed tight frame and an adaptively learned tight frame. The proposed TRIMS has several merits: (1) a tight frame satisfies the perfect reconstruction property, which ensures that a given signal can be perfectly represented by its canonical expansion [16]; (2) a tight frame $W$ can be implemented very efficiently since it satisfies $W^{T}W = I$; (3) adaptability is kept by the second-layer tight frame tailored for the target reconstruction task; (4) the two-layer tight frame enables the image sparsity to be explored more thoroughly than a one-layer frame does. Furthermore, the two-layer tight frame also has a convolutional interpretation, which extracts appropriate image characteristics to constrain MR image reconstruction [17]. We have compared our method with three state-of-the-art approaches from the above three directions, namely, DDTF-MRI, DLMRI, and GradDLRec, on in vivo complex-valued MR datasets. The results suggest that the proposed method can properly balance efficiency, accuracy, and acceleration factors.
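The perfect-reconstruction property in merit (2) can be checked numerically. The sketch below (our own illustration, not the paper's code) builds the analysis operator of an undecimated Haar filter pair, a shift-invariant tight frame of the kind used for the fixed layer in the experiments; the identity $W^{T}W = I$, and hence perfect reconstruction $u = W^{T}(Wu)$, holds for it:

```python
import numpy as np

def circ_conv_matrix(h, n):
    """n x n circulant matrix performing circular convolution with filter h."""
    C = np.zeros((n, n))
    for i in range(n):
        for k, hk in enumerate(h):
            C[i, (i + k) % n] += hk
    return C

# Undecimated (shift-invariant) Haar filter pair, scaled so the frame is tight
low = np.array([1.0, 1.0]) / 2
high = np.array([1.0, -1.0]) / 2

n = 8
# Stack the two convolution operators into a 2n x n analysis operator W;
# W^T W = I (Parseval tight frame), so u = W^T (W u) for any signal u.
W = np.vstack([circ_conv_matrix(low, n), circ_conv_matrix(high, n)])
```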

2. Theory

2.1. TRIMS Model

To reconstruct MR images from undersampled data, we propose a TRIMS model which can be implicitly described as

$$\min_{u}\ \|W_{a}W_{f}u\|_{1} + \frac{\lambda}{2}\|F_{p}u - f\|_{2}^{2}, \quad (2)$$

where $W_{f}$ is the fixed tight frame and $W_{a}$ denotes the data-driven tight frame. Each $W$ here means a tight frame system, since a tight frame can be formulated with a set of filters under the unitary extension principle (UEP) condition [16]. The proposed model also has another approximately equivalent convolutional expression, which we name the explicit model

$$\min_{u}\ \sum_{i,j}\|h_{j} * (g_{i} * u)\|_{1} + \frac{\lambda}{2}\|F_{p}u - f\|_{2}^{2}, \quad (3)$$

where $\{g_{i}\}$ are the fixed kernels and $\{h_{j}\}$ denote the to-be-learned adaptive kernels.
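To illustrate the explicit convolutional model, the following sketch (our own, restricted to real-valued images) computes the two-layer coefficients $h_{j} * (g_{i} * u)$ whose $\ell_{1}$ norm the model penalizes; the kernels used in any call are placeholders, not the learned ones:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def conv2_circ(u, k):
    """2-D circular convolution of a real image u with a small kernel k."""
    kp = np.zeros(u.shape)
    kh, kw = k.shape
    kp[:kh, :kw] = k            # zero-pad the kernel to the image size
    return np.real(ifft2(fft2(u) * fft2(kp)))

def two_layer_coeffs(u, fixed_kernels, learned_kernels):
    """Explicit-model coefficients h_j * (g_i * u) for every kernel pair."""
    return [conv2_circ(conv2_circ(u, g), h)
            for g in fixed_kernels for h in learned_kernels]
```

For instance, a constant image is annihilated by any pair whose second layer contains a high-pass (differencing) kernel, so its two-layer coefficients are exactly zero, the sparsest possible representation.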

2.2. TRIMS Algorithm

To solve the proposed model, we develop a three-level Bregman iteration numerical algorithm. Introducing a Bregman variable $f^{k}$, we have the first-level Bregman iteration

$$u^{k+1} = \arg\min_{u}\ \|W_{a}W_{f}u\|_{1} + \frac{\lambda}{2}\|F_{p}u - f^{k}\|_{2}^{2}, \qquad f^{k+1} = f^{k} + f - F_{p}u^{k+1}. \quad (4)$$

To attack the first subproblem in (4), we introduce an assistant variable $v \approx u$ with Bregman variable $b^{k}$ and obtain the second-level iteration

$$(u^{k+1}, v^{k+1}) = \arg\min_{u,v}\ \|W_{a}W_{f}v\|_{1} + \frac{\mu}{2}\|u - v + b^{k}\|_{2}^{2} + \frac{\lambda}{2}\|F_{p}u - f^{k}\|_{2}^{2}, \qquad b^{k+1} = b^{k} + u^{k+1} - v^{k+1}. \quad (5)$$

The subproblem regarding the update of $u$ is a simple least squares problem admitting an analytical solution. Its solution satisfies the following normal equation:

$$(\lambda F_{p}^{H}F_{p} + \mu I)\,u = \lambda F_{p}^{H}f^{k} + \mu\,(v^{k} - b^{k}). \quad (6)$$

Since $F_{p} = PF$, where $F$ denotes the full Fourier encoding matrix normalized such that $F^{H}F = I$ and $P$ denotes the operator selecting the sampled k-space subset, (6) can be solved entrywise in k-space:

$$u^{k+1} = F^{H}\,\frac{\lambda P^{T}f^{k} + \mu\,F(v^{k} - b^{k})}{\lambda P^{T}P + \mu I}. \quad (7)$$

In order to update $W_{a}$ and $v$, we introduce another assistant variable $w \approx W_{f}v$ to decompose the coupling between $W_{a}$ and $W_{f}$ and therefore obtain the third-level Bregman iteration

$$(v^{k+1}, w^{k+1}) = \arg\min_{v,w}\ \|W_{a}w\|_{1} + \frac{\nu}{2}\|W_{f}v - w + c^{k}\|_{2}^{2} + \frac{\mu}{2}\|u^{k+1} - v + b^{k}\|_{2}^{2}, \qquad c^{k+1} = c^{k} + W_{f}v^{k+1} - w^{k+1}. \quad (8)$$

Similar to the update of $u$, we can easily get the least squares solution for $v$.

As for the update of $w$, we temporarily fix the value of $W_{a}$ and can easily obtain its update rule with the iterative shrinkage/thresholding algorithm (ISTA), where the shrinkage operator is defined as $\mathrm{shrink}_{t}(x) = \mathrm{sign}(x)\max(|x| - t, 0)$. Now fixing $w$, we update $W_{a}$ by minimizing the remaining subproblem with respect to $W_{a}$. Instead of directly optimizing $W_{a}$ as a whole, we sequentially partition the coefficient vectors into patch-sized vectors and apply the technique of [16] to solve this subproblem using singular value decomposition (SVD), with the aim of learning the corresponding adaptive filters. To help readers grasp the overall picture, we summarize the proposed TRIMS in Algorithm 1.

Algorithm 1: Reconstructing MR images from undersampled k-space data with TRIMS.
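Two building blocks of the algorithm can be sketched compactly: the soft-shrinkage operator used in the ISTA update, and an SVD-based orthogonal filter update in the spirit of the data-driven tight frame construction of [16]. This is a simplified illustration; the exact constraint and scaling used in the paper may differ:

```python
import numpy as np

def shrink(x, t):
    """Soft-shrinkage operator shrink_t(x) = sign(x) * max(|x| - t, 0),
    the proximal map applied in the ISTA coefficient update."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def update_filters(patches, coeffs):
    """Orthogonal filter-bank update via SVD (an orthogonal Procrustes
    problem): find A with A A^T = I minimising ||coeffs - A^T patches||_F.
    This mimics the SVD step of the data-driven tight frame method."""
    U, _, Vt = np.linalg.svd(patches @ coeffs.T, full_matrices=False)
    return U @ Vt
```

Alternating `shrink` on the analysis coefficients with `update_filters` on a patch matrix reproduces the sparse-coding/filter-update loop in miniature.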

3. Experiments and Results

We evaluated the proposed method on three datasets, namely, a T1-weighted brain image obtained from a GE 3T commercial scanner with an eight-channel head coil (TE = 11 ms, TR = 700 ms, FOV = 22 cm, and matrix = 256 × 256), a PD-weighted brain image scanned on a 3T SIEMENS scanner with an eight-channel head coil and MPRAGE (3D flash with IR prep, TE = 3.45 ms, TR = 2530 ms, TI = 1100 ms, flip angle = 7 deg., slice = 1, matrix = 256 × 256, slice thickness = 1.33 mm, FOV = 256 mm, and measurement = 1), and a physical phantom scanned on a 3T commercial scanner (SIEMENS MAGNETOM TrioTim syngo) with a four-channel head coil (TE = 12 ms, TR = 800 ms, FOV = 24.2 cm, and matrix = 256 × 256). Informed consent was obtained from the imaging subject in compliance with the Institutional Review Board policy. The Walsh adaptive combination method was applied to combine the multichannel data into a single-channel dataset corresponding to a complex-valued image. We compared the proposed method to three state-of-the-art methods, namely, the representative analysis transform based DDTF-MRI, the synthesis dictionary based DLMRI, and the analysis-synthesis mixture based GradDLRec approach. TRIMS was implemented with shift-invariant Haar wavelet filters for the fixed tight frame and for initializing the second-level tight frame. The other three algorithms were implemented with their recommended parameter settings. To quantitatively evaluate the reconstruction accuracy of each method, we employed the peak signal-to-noise ratio (PSNR), the relative error, and the structural similarity (SSIM) index [18], defined as

$$\mathrm{PSNR} = 10\log_{10}\frac{\max(|u_{0}|)^{2}}{\mathrm{MSE}}, \qquad \text{Relative Error} = \frac{\|u - u_{0}\|_{2}}{\|u_{0}\|_{2}},$$

where $u_{0}$ denotes the fully sampled reference image, $u$ the reconstruction, and MSE their mean squared error, and where SSIM is a multiplicative combination of three terms, namely, the luminance term $l$, the contrast term $c$, and the structural term $s$.
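For reference, the PSNR and relative error metrics can be computed as follows (a standard implementation sketch; SSIM is omitted here since its windowed computation is considerably more involved, see [18]):

```python
import numpy as np

def psnr(ref, rec, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to max |ref|."""
    peak = np.abs(ref).max() if peak is None else peak
    mse = np.mean(np.abs(ref - rec) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def relative_error(ref, rec):
    """l2 relative error ||rec - ref||_2 / ||ref||_2."""
    return np.linalg.norm(rec - ref) / np.linalg.norm(ref)
```

Both metrics operate directly on complex-valued images via the modulus, matching the complex-valued reconstructions compared in this section.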

We first applied the four approaches to reconstruct the T1-weighted MR image under the radial sampling scheme (sampling ratio 25.16%). The reconstructed image obtained by each algorithm and the absolute difference between the reconstructed image and the ground truth image are displayed in Figure 1. We also present an enlarged area to reveal the fine details and structures each method has preserved. We can see that some blurring artifacts exist on the edges in the results of all four methods. However, TRIMS reconstructs an image closer to the one reconstructed from the full data. The absolute difference maps also indicate that TRIMS incurs fewer errors while reconstructing the MR image compared to the other three approaches.

Figure 1: Visual quality comparison on GE MR images reconstructed by the four approaches from radially undersampled k-space data (25.16%). (a) From left to right: ground truth image and images reconstructed by DDTF, DLMRI, GradDLRec, and the proposed TRIMS; each one has an enlarged region for a closer comparison. (b) From left to right: color axis and difference images of DDTF, DLMRI, GradDLRec, and TRIMS.

We further utilized the four approaches to reconstruct the PD-weighted brain image from 9.312% of 2D randomly sampled k-space data. Figure 2(a) displays the original image and the images reconstructed by the four approaches. For a close-up look, the white-box-enclosed part has been zoomed and presented at the right corner of each image. It can be observed that our method has produced an image closer to the original one. The four approaches were also evaluated on a scanned physical phantom which contains quite a few regular structures with fine details. Figure 2(b) provides the visual comparison of the phantoms reconstructed from 12.79% of 2D randomly sampled k-space data. An area with lines at different scales was enlarged in each image to visualize the reconstruction accuracy of each method. It can be observed that the enlarged parts in the reconstruction results suffer from blur. Nevertheless, the proposed method still produces an image with fewer blurry artifacts.

Figure 2: Visual quality comparison on PD-weighted and physical phantom MR images reconstructed by the four approaches from 2D randomly undersampled k-space data (9.312%). From left to right: ground truth image and images reconstructed by DDTF, DLMRI, GradDLRec, and the proposed TRIMS; each one has an enlarged region for a closer comparison.

To test the sensitivity of the four methods to acceleration factors, we retrospectively undersampled the full k-space data with the 2D variable density scheme at 2.5-, 4-, 6-, 8-, and 10-fold acceleration and employed the four methods to reconstruct MR images from the undersampled data. Figure 3 presents the average PSNR, relative error, and SSIM over all three images reconstructed by the four methods versus the acceleration factor. The PSNR and relative error plots demonstrate that the proposed method achieves better reconstruction results at all acceleration rates. Nevertheless, we should admit that the SSIM plot indicates that the proposed method does not produce the best results at all undersampling factors on average, since the current tight frame size is kept relatively small out of concern for computational complexity. Better results can be produced if the size of the tight frame is set somewhat larger.
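A 2D variable-density random mask of the kind used in this experiment can be generated as follows; the polynomial decay of the sampling density is a common heuristic and our own choice, as the paper does not specify its exact density function:

```python
import numpy as np

def variable_density_mask(shape, accel, decay=3.0, seed=0):
    """2-D variable-density random sampling mask keeping roughly 1/accel
    of k-space, with sampling probability decaying with distance from
    the centre (an illustrative heuristic density, not the paper's)."""
    rng = np.random.default_rng(seed)
    ky, kx = np.meshgrid(np.linspace(-1, 1, shape[0]),
                         np.linspace(-1, 1, shape[1]), indexing="ij")
    r = np.sqrt(ky ** 2 + kx ** 2)
    pdf = (1 - r / r.max()) ** decay
    pdf *= (np.prod(shape) / accel) / pdf.sum()  # scale towards the budget
    return rng.random(shape) < np.clip(pdf, 0, 1)
```

Clipping the density at 1 fully samples the low-frequency centre, which is the usual design choice for variable-density CSMRI masks; the realized sampling ratio is therefore only approximately 1/accel.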

Figure 3: The average reconstruction errors in PSNR, relative error, and SSIM over all images with respect to different acceleration rates.

We have also compared the convergence behavior of the four methods at acceleration rates 2.5 and 6 on the T1-weighted image in Figure 4. As can be seen, all four methods have approximately converged.

Figure 4: The convergence behavior of the four methods at acceleration rates 2.5 and 6 in PSNR, relative error, and SSIM while reconstructing the T1-weighted image.

Finally, we compare the computational time of the four methods, which were implemented in MATLAB 2015a on a Windows 7 (64-bit) operating system equipped with 8 GB RAM and an Intel® Core i7-4770 CPU @ 3.40 GHz. Table 1 lists the computational time for each method over the six acceleration rates. We can observe that TRIMS is more efficient than DLMRI and GradDLRec. It is even more efficient than DDTF, since DDTF needs to train 64 filters while TRIMS only needs to train 16 smaller ones. Furthermore, it is worth mentioning that although the to-be-learned tight frame of TRIMS is smaller than that of DDTF, the two-layer sparsifying nature enables TRIMS to achieve better reconstruction results in a shorter time than DDTF.

Table 1: Comparison of computational time (in seconds) over different acceleration rates.

4. Conclusions

This paper proposes a two-layer tight frame sparsifying model, namely, TRIMS, for compressed sensing magnetic resonance imaging. The approach exploits the strengths of adaptive learning techniques and tight frames for accurate reconstruction of MR images from undersampled k-space data. The experimental results demonstrate that the proposed TRIMS can accurately and efficiently reconstruct MR images from a variety of undersampled data.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This research was partly supported by the National Natural Science Foundation of China (61601450, 61471350, 11301508, 61671441, and 61401449), the Natural Science Foundation of Guangdong (2015A020214019, 2015A030310314, and 2015A030313740), the Basic Research Program of Shenzhen (JCYJ20160531183834938, JCYJ20140610152828678, JCYJ20150630114942318, and JCYJ20140610151856736), and the SIAT Innovation Program for Excellent Young Researchers (201403 and 201313).

References

  1. M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: the application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
  2. J. Trzasko and A. Manduca, “Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization,” IEEE Transactions on Medical Imaging, vol. 28, no. 1, pp. 106–121, 2009.
  3. Q. Liu, S. Wang, K. Yang, J. Luo, Y. Zhu, and D. Liang, “Highly undersampled magnetic resonance image reconstruction using two-level Bregman method with dictionary updating,” IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1290–1301, 2013.
  4. X. Qu, Y. Hou, F. Lam, D. Guo, J. Zhong, and Z. Chen, “Magnetic resonance image reconstruction from undersampled measurements using a patch-based nonlocal operator,” Medical Image Analysis, vol. 18, no. 6, pp. 843–856, 2014.
  5. D. Liang, H. Wang, Y. Chang, and L. Ying, “Sensitivity encoding reconstruction with nonlocal total variation regularization,” Magnetic Resonance in Medicine, vol. 65, no. 5, pp. 1384–1392, 2011.
  6. X. Qu, D. Guo, B. Ning et al., “Undersampled MRI reconstruction with patch-based directional wavelets,” Magnetic Resonance Imaging, vol. 30, no. 7, pp. 964–977, 2012.
  7. C. Chen and J. Huang, “The benefit of tree sparsity in accelerated MRI,” Medical Image Analysis, vol. 18, no. 6, pp. 834–842, 2014.
  8. J. Liu, S. Wang, X. Peng, and D. Liang, “Undersampled MR image reconstruction with data-driven tight frame,” Computational and Mathematical Methods in Medicine, vol. 2015, Article ID 424087, 10 pages, 2015.
  9. S. Ravishankar and Y. Bresler, “MR image reconstruction from highly undersampled k-space data by dictionary learning,” IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1028–1041, 2011.
  10. Y. Huang, J. Paisley, Q. Lin, X. Ding, X. Fu, and X.-P. Zhang, “Bayesian nonparametric dictionary learning for compressed sensing MRI,” IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5007–5019, 2014.
  11. Y. Liu, J.-F. Cai, Z. Zhan et al., “Balanced sparse model for tight frames in compressed sensing magnetic resonance imaging,” PLoS ONE, vol. 10, no. 4, Article ID e0119584, 2015.
  12. B. Ophir, M. Lustig, and M. Elad, “Multi-scale dictionary learning using wavelets,” IEEE Journal on Selected Topics in Signal Processing, vol. 5, no. 5, pp. 1014–1024, 2011.
  13. Q. Liu, S. Wang, L. Ying, X. Peng, Y. Zhu, and D. Liang, “Adaptive dictionary learning in sparse gradient domain for image recovery,” IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4652–4663, 2013.
  14. R. Rubinstein, M. Zibulevsky, and M. Elad, “Double sparsity: learning sparse dictionaries for sparse signal approximation,” IEEE Transactions on Signal Processing, vol. 58, no. 3, part 2, pp. 1553–1564, 2010.
  15. S. Ravishankar and Y. Bresler, “Learning doubly sparse transforms for images,” IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4598–4612, 2013.
  16. J.-F. Cai, H. Ji, Z. Shen, and G.-B. Ye, “Data-driven tight frame construction and image denoising,” Applied and Computational Harmonic Analysis, vol. 37, no. 1, pp. 89–105, 2014.
  17. X. Peng and D. Liang, “MR image reconstruction with convolutional characteristic constraint (CoCCo),” IEEE Signal Processing Letters, vol. 22, no. 8, pp. 1184–1188, 2015.
  18. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.