BioMed Research International

Volume 2016, Article ID 2860643, 7 pages

http://dx.doi.org/10.1155/2016/2860643

## Two-Layer Tight Frame Sparsifying Model for Compressed Sensing Magnetic Resonance Imaging

^{1}Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Shenzhen, Guangdong 518055, China
^{2}The Beijing Center for Mathematics and Information Interdisciplinary Sciences, Beijing 100048, China
^{3}Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, The University of Sydney, Sydney, NSW 2006, Australia
^{4}Nanchang University, Nanchang, Jiangxi, China

Received 20 April 2016; Revised 5 August 2016; Accepted 18 August 2016

Academic Editor: Andrey Krylov

Copyright © 2016 Shanshan Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Compressed sensing magnetic resonance imaging (CSMRI) employs image sparsity to reconstruct MR images from incoherently undersampled k-space data. Existing CSMRI approaches have exploited analysis transforms, synthesis dictionaries, and their variants to promote image sparsity. Nevertheless, the accuracy, efficiency, or acceleration rate of existing CSMRI methods can still be improved, due to either lack of adaptability, high training complexity, or insufficient sparsity promotion. To properly balance these three factors, this paper proposes a two-layer tight frame sparsifying (TRIMS) model for CSMRI, which sparsifies the image with the product of a fixed tight frame and an adaptively learned tight frame. The two-layer sparsifying and adaptive learning nature of TRIMS enables accurate and efficient MR reconstruction from highly undersampled data. To solve the reconstruction problem, a three-level Bregman numerical algorithm is developed. The proposed approach has been compared to three state-of-the-art methods on scanned physical phantom and in vivo MR datasets, and encouraging performance has been achieved.

#### 1. Introduction

Compressed sensing magnetic resonance imaging (CSMRI) is a popular signal processing based technique for accelerating MRI scans. Different from the classical fixed-rate sampling dogma of the Shannon-Nyquist sampling theorem, CS exploits the sparsity of an MR image and allows CSMRI to recover MR images from fewer, incoherently sampled k-space data [1]. The classical formulation of CSMRI can be written as

$$\min_{u}\; \lambda\|\Psi u\|_{1} + \frac{1}{2}\|F_{p}u - f\|_{2}^{2}, \tag{1}$$

where $u \in \mathbb{C}^{N}$ and $f \in \mathbb{C}^{M}$, respectively, denote the MR image and its corresponding undersampled raw k-space data, $F_{p} \in \mathbb{C}^{M\times N}$ represents the undersampled Fourier encoding matrix with $M \ll N$, and $\|\Psi u\|_{1}$ is an analysis model which sparsifies the image with the transform $\Psi$ under the $\ell_{1}$ norm constraint. $N$ and $M$ are the number of image pixels and measured data points, respectively. The classical formulation is typically equipped with total variation and wavelet transforms, and it can be solved very efficiently [1]. However, the efficiency comes at the expense of accuracy, especially with highly undersampled noisy measurements, due to lack of adaptability or insufficient sparsity promotion. To address this issue, diverse methods have been proposed [2, 3]; we focus on the following three representative directions.
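As a concrete illustration of this classical formulation (not the method proposed in this paper), the following minimal sketch solves the objective above with the iterative shrinkage/thresholding algorithm, taking the identity as the sparsifying transform $\Psi$ so the image itself is assumed sparse; the function names `soft_threshold` and `csmri_ista` are ours, introduced only for this example.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm for complex-valued arrays."""
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

def csmri_ista(f, mask, lam=0.02, n_iter=100):
    """ISTA for min_u lam*||u||_1 + 0.5*||F_p u - f||_2^2, with the
    identity as the sparsifying transform.  `f` is the measured k-space
    (zero outside `mask`); `mask` is the boolean sampling pattern."""
    u = np.fft.ifft2(f, norm="ortho")          # zero-filled initial estimate
    for _ in range(n_iter):
        # Gradient step on the data term with unit step size; in k-space
        # this overwrites the sampled locations with the measurements.
        k = np.fft.fft2(u, norm="ortho")
        k[mask] = f[mask]
        # Proximal step: soft-threshold in the (assumed sparse) image domain.
        u = soft_threshold(np.fft.ifft2(k, norm="ortho"), lam)
    return u
```

On a synthetic sparse image, this simple scheme already improves noticeably on the zero-filled reconstruction, which is the behavior the adaptive methods discussed below seek to push much further.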

One main endeavor is employing nonlocal operations or redundant transforms to analytically sparsify the MR image [4]. Typical examples include nonlocal total variation regularization [5], patch-based directional wavelets [6], and wavelet tree sparsity based CSMRI techniques [7]. These methods generally have straightforward models; nevertheless, their reconstruction accuracy is not fully satisfactory due to lack of adaptability. We previously proposed a one-layer data-driven tight frame (DDTF) method for undersampled image reconstruction [8]. It is generally very efficient, but its performance is still limited by insufficient sparsity promotion and by its reliance on the Bregman iteration technique to bring back image details.

Another effort is training adaptive dictionaries to sparsely represent the MR image in the synthesis manner. For example, DLMRI [9], BPFA-triggered MR reconstruction [10], and our proposed TBMDU [3] employ dictionary learning to adaptively capture image structures while promoting sparsity. These methods can generally achieve accurate MR image reconstruction with strong noise suppression capability. Unfortunately, the complexity of these approaches is very high, and the sparsity is still limited to a one-layer representation of the target image.

The third group of endeavors can be regarded as variants of the above two efforts, targeting the advantages of both the analysis and synthesis sparse models. For example, the balanced tight frame model [11] introduces a penalty term to bridge the gap between the analysis and synthesis models. Unfortunately, although it possesses an appealing mathematical interpretation, its sparsity promotion is still limited to a single layer, and therefore its performance is only comparable to that of the analysis model. To further promote sparsity, a wavelet driven dictionary learning technique (named WaveDLMRI) [12] and our proposed total variation driven dictionary learning approach (named GradDLRec) [13] adaptively represent the sparse coefficients derived from the analysis transform rather than directly encode the underlying image. Nevertheless, despite achieving encouraging performances, they still rely on the computationally expensive dictionary learning technique.

Recently, the double sparsity model and doubly sparse transforms have been proposed in the general image/signal processing community [14, 15]. The double sparsity model trains a sparse dictionary over a fixed base, while the doubly sparse transform learns an adaptive sparse matrix over an analytic transform. Their application to image denoising has undoubtedly produced promising results; however, these two-layer sparsifying models are designed mainly to enable efficient learning, storage, and implementation by constraining the dictionary to be sparse, rather than to further promote the sparsity of the image itself.

Motivated by the above observations, we develop a two-layer tight frame sparsifying (TRIMS) model for CSMRI, which sparsifies the image with the product of a fixed tight frame and an adaptively learned tight frame. The proposed TRIMS has several merits: (1) a tight frame satisfies the perfect reconstruction property, which ensures that a given signal can be perfectly represented by its canonical expansion [16]; (2) a tight frame can be implemented very efficiently since its analysis operator $W$ satisfies $W^{\top}W = I$, so synthesis is simply the adjoint of analysis; (3) adaptability is retained through the second-layer tight frame tailored to the target reconstruction task; (4) the two-layer tight frame allows the image sparsity to be explored more thoroughly than a one-layer one. Furthermore, the two-layer tight frame also has a convolutional interpretation, which extracts appropriate image characteristics to constrain MR image reconstruction [17]. We have compared our method with three state-of-the-art approaches from the above three directions, namely, DDTF-MRI, DLMRI, and GradDLRec, on an in vivo complex-valued MR dataset. The results suggest that the proposed method can properly balance efficiency, accuracy, and acceleration.
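Merits (1) and (2) can be checked numerically. The sketch below builds a one-layer tight frame from the undecimated Haar filter pair, which satisfies the unitary extension principle condition $\sum_i |\hat{a}_i(\omega)|^2 = 1$, so the adjoint of the analysis operator reconstructs the signal exactly; all function names here are illustrative, not from the paper.

```python
import numpy as np

# Undecimated Haar analysis filters; they satisfy the UEP condition
# |H0(w)|^2 + |H1(w)|^2 = 1, so the frame operator W^T W is the identity.
h0 = np.array([0.5, 0.5])    # lowpass
h1 = np.array([0.5, -0.5])   # highpass

def conv_circ(x, h):
    """Circular convolution (one analysis channel)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))

def corr_circ(x, h):
    """Circular correlation (adjoint of one analysis channel)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(h, len(x)))))

def analysis(x):
    """W x: stack of filtered channels (no decimation)."""
    return [conv_circ(x, h0), conv_circ(x, h1)]

def adjoint(coeffs):
    """W^T c: for a tight frame, this perfectly reconstructs x."""
    return corr_circ(coeffs[0], h0) + corr_circ(coeffs[1], h1)
```

Because $W^{\top}W = I$, reconstruction costs no more than a second pass of filtering, which is the efficiency merit claimed above.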

#### 2. Theory

##### 2.1. TRIMS Model

To reconstruct MR images from undersampled data, we propose a TRIMS model which can be implicitly described as

$$\min_{u,\,W}\; \lambda\|WDu\|_{1} + \frac{1}{2}\|F_{p}u - f\|_{2}^{2} \quad \text{s.t.} \quad W^{\top}W = I, \tag{2}$$

where $D$ is the fixed tight frame and $W$ denotes the data-driven tight frame. The constraint $W^{\top}W = I$ means $W$ is a tight frame system, since a tight frame can be formulated with a set of filters under the unitary extension principle (UEP) condition [16]. The proposed model also has another, approximately equivalent convolutional expression, which we name the explicit model:

$$\min_{u,\,\{w_{j}\}}\; \lambda\sum_{i}\sum_{j}\|w_{j} \ast (d_{i} \ast u)\|_{1} + \frac{1}{2}\|F_{p}u - f\|_{2}^{2}, \tag{3}$$

where $\{d_{i}\}$ are the fixed kernels and $\{w_{j}\}$ denote the to-be-learned adaptive kernels.
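The perfect reconstruction property of the two-layer construction can be illustrated numerically: if both layers are tight, the composed analysis operator $WD$ still satisfies $(WD)^{\top}WD = D^{\top}W^{\top}WD = I$. The sketch below uses a fixed undecimated Haar layer for $D$ and a simple orthogonal mixing of the two channels as a stand-in for the learned tight frame $W$; this mixing is an illustration only, not the frame learned by the paper's algorithm.

```python
import numpy as np

h0 = np.array([0.5, 0.5])    # fixed layer D: undecimated Haar pair (UEP)
h1 = np.array([0.5, -0.5])

def conv_circ(x, h):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))

def corr_circ(x, h):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(h, len(x)))))

# Stand-in for the learned layer W: an orthogonal mixing of the two
# first-layer channels (any matrix with W^T W = I preserves tightness).
theta = 0.3
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def two_layer_analysis(x):
    c = np.stack([conv_circ(x, h0), conv_circ(x, h1)])  # first layer: D u
    return W @ c                                        # second layer: W (D u)

def two_layer_adjoint(a):
    c = W.T @ a                                         # undo second layer
    return corr_circ(c[0], h0) + corr_circ(c[1], h1)    # undo first layer
```

The composition thus keeps merit (1) while the second layer remains free to adapt to the data.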

##### 2.2. TRIMS Algorithm

To solve the proposed model, we develop a three-level Bregman iteration numerical algorithm. Introducing a Bregman parameter $\mu$, we have the first-level Bregman iteration

$$\begin{aligned} (u^{k+1}, W^{k+1}) &= \operatorname*{arg\,min}_{u,\,W^{\top}W=I}\; \lambda\|WDu\|_{1} + \frac{\mu}{2}\|F_{p}u - f^{k}\|_{2}^{2},\\ f^{k+1} &= f^{k} + f - F_{p}u^{k+1}. \end{aligned} \tag{4}$$

To attack the first subproblem in (4), we introduce an auxiliary variable $\alpha$ for the coefficients $WDu$ and obtain the second-level iteration

$$\begin{aligned} u^{l+1} &= \operatorname*{arg\,min}_{u}\; \frac{\nu}{2}\|WDu - \alpha^{l} + b^{l}\|_{2}^{2} + \frac{\mu}{2}\|F_{p}u - f^{k}\|_{2}^{2},\\ (\alpha^{l+1}, W^{l+1}) &= \operatorname*{arg\,min}_{\alpha,\,W^{\top}W=I}\; \lambda\|\alpha\|_{1} + \frac{\nu}{2}\|WDu^{l+1} - \alpha + b^{l}\|_{2}^{2},\\ b^{l+1} &= b^{l} + WDu^{l+1} - \alpha^{l+1}. \end{aligned} \tag{5}$$

The subproblem regarding the update of $u$ is a simple least squares problem admitting an analytical solution. Its solution satisfies the following normal equation:

$$\bigl(\nu (WD)^{H}WD + \mu F_{p}^{H}F_{p}\bigr)u = \nu (WD)^{H}(\alpha^{l} - b^{l}) + \mu F_{p}^{H}f^{k}. \tag{6}$$

Since $WD$ is a tight frame satisfying $(WD)^{H}WD = I$, letting $F$ denote the full Fourier encoding matrix normalized such that $FF^{H} = I$ and writing $F_{p} = PF$, we have

$$\hat{u}(k) = \begin{cases} \hat{r}(k), & k \notin \Omega,\\ \dfrac{\nu\hat{r}(k) + \mu f^{k}(k)}{\nu + \mu}, & k \in \Omega, \end{cases} \tag{7}$$

where $r = (WD)^{H}(\alpha^{l} - b^{l})$, $\hat{r} = Fr$, $P$ restricts the full k-space to the sampled locations, and $\Omega$ denotes the sampled k-space subset. In order to update $\alpha$ and $W$, we introduce another auxiliary variable $\beta$ to decompose the coupling between $W$ and $\alpha$, and therefore obtain the third-level Bregman iteration. Similar to the update of $u$, we can easily get the least squares solution for $\beta$.
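The pointwise k-space solution of the $u$-subproblem can be sketched as follows. For simplicity the coefficient-fitting term is given unit weight (absorbing $\nu$) and the combined sparsifying transform is assumed tight, so the subproblem reduces to $\min_u \tfrac{1}{2}\|u - r\|_2^2 + \tfrac{\nu}{2}\|F_p u - f\|_2^2$; the name `u_update` is illustrative.

```python
import numpy as np

def u_update(r, f, mask, nu):
    """Closed-form minimizer of 0.5*||u - r||^2 + nu/2*||F_p u - f||^2,
    the shape the u-subproblem takes once the combined transform is tight.
    Solved pointwise in k-space:
        u_hat = (r_hat + nu * mask * f) / (1 + nu * mask),
    i.e. unsampled frequencies keep r_hat, sampled ones are a weighted
    average of r_hat and the measurements."""
    r_hat = np.fft.fft2(r, norm="ortho")
    u_hat = (r_hat + nu * np.where(mask, f, 0)) / (1 + nu * mask)
    return np.fft.ifft2(u_hat, norm="ortho")
```

Because the system matrix is diagonal in the Fourier domain, this update costs only two FFTs per iteration, which is one reason the tight frame constraint pays off computationally.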

As for the update of $\alpha$, we temporarily fix the value of $W$ and can easily obtain its update rule with the iterative shrinkage/thresholding algorithm (ISTA), where the shrinkage operator is $\operatorname{shrink}(x, t) = \frac{x}{|x|}\max(|x| - t, 0)$. Now fixing $\alpha$, we update $W$ by minimizing the remaining quadratic fitting term subject to the tight frame constraint $W^{\top}W = I$. Instead of directly optimizing over $W$ as a whole, we sequentially partition the coefficient vectors into short vectors and apply the technique of [16] to solve this subproblem using the singular value decomposition (SVD), learning the corresponding filters. To give the reader the overall picture, we summarize the proposed TRIMS in Algorithm 1.