Computational and Mathematical Methods in Medicine

Volume 2016, Article ID 1724630, 14 pages

http://dx.doi.org/10.1155/2016/1724630

## Sparse Parallel MRI Based on Accelerated Operator Splitting Schemes

^{1}School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China

^{2}Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Shenzhen, China

^{3}Shenzhen Key Laboratory for MRI, Shenzhen, Guangdong, China

Received 26 April 2016; Accepted 29 August 2016

Academic Editor: Po-Hsiang Tsui

Copyright © 2016 Nian Cai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Recently, the sparsity implicit in MR images has been successfully exploited for fast MR imaging from incomplete acquisitions. In this paper, two novel algorithms are proposed to solve the sparse parallel MR imaging problem, which consists of a regularization term and a fidelity term. The two algorithms combine forward-backward operator splitting with Barzilai-Borwein stepsize schemes. Theoretically, the presented algorithms overcome the nondifferentiability of the regularization term. Meanwhile, they are able to handle a general matrix operator that may not be diagonalized by the fast Fourier transform and to ensure that a well-conditioned system of equations is solved simply at each iteration. In addition, we build connections between the proposed algorithms and state-of-the-art existing methods and prove their convergence with a constant stepsize in the Appendix. Numerical results and comparisons with advanced methods demonstrate the efficiency of the proposed algorithms.

#### 1. Introduction

Reducing the number of encodings is one of the most important ways to accelerate magnetic resonance imaging (MRI). Partially parallel imaging (PPI) is a widely used reduced-encoding technique in the clinic owing to many desirable properties, such as linear reconstruction, ease of use, and the g-factor for clearly characterizing noise properties [1–6]. Specifically, PPI exploits the sensitivity prior in multichannel acquisitions to acquire fewer encodings than conventional methods [7]. Its acceleration factor is limited by the number of channels. Increasingly large coil arrays, such as 32-channel [8–11], 64-channel [12], and even 128-channel [13] arrays, have been used for faster imaging. However, the acceleration ability of PPI while maintaining a certain signal-to-noise ratio (SNR) is still limited, because the imaging system is highly ill-posed and amplifies sampling noise at higher acceleration factors. One solution is to introduce additional prior information as a regularization term in the imaging equation. The sparsity prior, which has become increasingly popular since the emergence of compressed sensing (CS) theory [14–16], has been extensively exploited to reconstruct a target image from a small amount of acquisition data (i.e., below the Nyquist sampling rate) in many MRI applications [17–20]. Because PPI and compressed sensing MRI (CSMRI) are based on different ancillary information (sensitivity for the former and sparseness for the latter), it is desirable to combine them to further accelerate imaging.

Recently, SparseSENSE and its equivalents [1, 3, 21–25] have been proposed as a straightforward method to combine PPI and CS. The formulation of this method is similar to that of SparseMRI, except that the Fourier encoding is replaced by the sensitivity encoding (comprising Fourier encoding and sensitivity weighting). Generally, SparseSENSE aims to solve the following optimization problem:

$$\min_{u}\; \alpha\|\Psi u\|_{1} + \frac{1}{2}\|Au - f\|_{2}^{2}, \tag{1}$$

where the first term is the regularization term and the second one is the data consistency term. $\|\cdot\|_{1}$ and $\|\cdot\|_{2}$ represent the 1-norm and 2-norm, respectively, and $u$ is the to-be-reconstructed image. $\Psi$ denotes a sparsifying transform (e.g., spatial finite difference or wavelet), and the regularization parameter $\alpha$ controls the solution sparsity. $A$ and $f$ are the encoding matrix and the measured data, respectively:

$$A = \begin{bmatrix} F_{p}S_{1} \\ \vdots \\ F_{p}S_{J} \end{bmatrix}, \qquad f = \begin{bmatrix} f_{1} \\ \vdots \\ f_{J} \end{bmatrix}, \tag{2}$$

where $F_{p}$ is the partial Fourier transform and $S_{j}$ is the diagonal sensitivity map for receiver $j$ $(j = 1, \ldots, J)$; $f_{j}$ is the measured $k$-space data at receiver $j$. In this paper, we mainly solve the popular total variation (or its improved version, total generalized variation) based SparseSENSE model; that is, the regularization term is $\mathrm{TV}(u)$ (or $\mathrm{TGV}(u)$).
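To make the model concrete, the following sketch evaluates the SparseSENSE objective with an anisotropic total-variation regularizer on synthetic data. The function names (`sense_forward`, `tv_l1`, `objective`) and the use of a binary sampling mask with an orthonormal 2-D FFT are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sense_forward(u, smaps, mask):
    """Apply the SENSE encoding A of (2): sensitivity weighting S_j
    followed by the undersampled Fourier transform F_p, per coil."""
    # smaps: (J, N, N) coil sensitivities; mask: (N, N) k-space sampling mask
    return np.stack([mask * np.fft.fft2(s * u, norm="ortho") for s in smaps])

def tv_l1(u):
    """Anisotropic total variation ||grad u||_1 via forward differences."""
    dx = np.diff(u, axis=0, append=u[-1:, :])
    dy = np.diff(u, axis=1, append=u[:, -1:])
    return np.abs(dx).sum() + np.abs(dy).sum()

def objective(u, smaps, mask, f, alpha):
    """alpha * TV(u) + 0.5 * ||A u - f||_2^2, as in model (1)."""
    r = sense_forward(u, smaps, mask) - f
    return alpha * tv_l1(u) + 0.5 * np.linalg.norm(r) ** 2
```

Stacking the $J$ coil outputs mirrors the block structure of $A$ in (2); any iterative solver for (1) only needs this forward operator and its adjoint, never the explicit matrix.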

For the minimization (1), computational challenges arise not only from the nondifferentiability of the 1-norm term but also from the ill-conditioning of the large-scale inversion matrix. Moreover, the computational burden grows rapidly if we try to improve the performance of SparseSENSE by using large coil arrays, high undersampling factors, or more powerful sparsifying transforms (which are usually nonorthogonal). Therefore, fast and efficient numerical algorithms are highly desirable, especially for large coil arrays, high undersampling factors, and general sparsifying transforms.

Several fast numerical algorithms can address these numerical difficulties, for example, the alternating direction method of multipliers (ADMM) [26], the augmented Lagrangian method (ALM) [27], the split Bregman algorithm (SBA) [28], splitting Barzilai-Borwein (SBB) [24], and Bregman operator splitting (BOS) [29]. The efficiency of these methods largely depends on the special structure of the regularization operator $\Psi$ (such as a Toeplitz or orthogonal matrix) and of the encoding kernel (without the sensitivity maps). However, they are not suitable for simultaneously handling a general regularization operator $\Psi$ and the parallel encoding matrix $A$. That is, these algorithms cannot solve problem (1) efficiently, because the inversion of a large complex matrix has to be computed if $\Psi$ and/or $A$ cannot be diagonalized directly by the fast Fourier transform (FFT). The alternating minimization (AM) algorithm can handle general $\Psi$ and $A$; it is a powerful optimization scheme that breaks a complex problem into simple subproblems [3]. However, the introduction of a new auxiliary variable may slow convergence. Our numerical results in Section 4 also demonstrate that the alternating minimization algorithm is not very effective for large coil data in terms of convergence speed. Table 1 illustrates the ability of these algorithms to work on general $\Psi$ and $A$ (without any preconditioning). We can see that only AM is able to deal with both general operators simultaneously.
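To illustrate the two ingredients the proposed algorithms combine, the following sketch applies forward-backward splitting with a Barzilai-Borwein stepsize to the simplest instance of (1), taking $\Psi = I$ so that the proximal (backward) step reduces to soft-thresholding. This is a generic didactic sketch, not the paper's algorithm; the function name `fbs_bb` and the dense test matrix are assumptions:

```python
import numpy as np

def fbs_bb(A, f, alpha, iters=200):
    """Forward-backward splitting with a Barzilai-Borwein stepsize for
    min_u alpha*||u||_1 + 0.5*||A u - f||_2^2  (Psi = identity)."""
    u = np.zeros(A.shape[1])
    grad = A.T @ (A @ u - f)
    # Safe initial stepsize: 1 / Lipschitz constant of the gradient
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        # Forward (gradient) step on the smooth fidelity term
        v = u - tau * grad
        # Backward (proximal) step: soft-thresholding solves the l1 prox
        u_new = np.sign(v) * np.maximum(np.abs(v) - tau * alpha, 0.0)
        grad_new = A.T @ (A @ u_new - f)
        # Barzilai-Borwein stepsize: tau = <s, s> / <s, y>
        s, y = u_new - u, grad_new - grad
        sy = s @ y
        if sy > 1e-12:
            tau = (s @ s) / sy
        u, grad = u_new, grad_new
    return u
```

The forward-backward structure avoids inverting any large matrix, and the BB stepsize adapts to the local curvature of the fidelity term; this is the sense in which such schemes sidestep the ill-conditioned inversions required by the methods listed above.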