Abstract

Recently, the sparsity implicit in MR images has been successfully exploited for fast MR imaging from incomplete acquisitions. In this paper, two novel algorithms are proposed to solve the sparse parallel MR imaging problem, which consists of a regularization term and a fidelity term. The two algorithms combine forward-backward operator splitting with Barzilai-Borwein stepsize schemes. Theoretically, the presented algorithms overcome the nondifferentiability of the regularization term. Meanwhile, they are able to treat a general matrix operator that may not be diagonalized by the fast Fourier transform and to ensure that only a well-conditioned system of equations needs to be solved. In addition, we build connections between the proposed algorithms and state-of-the-art existing methods and prove their convergence with a constant stepsize in the Appendix. Numerical results and comparisons with advanced methods demonstrate the efficiency of the proposed algorithms.

1. Introduction

Reducing the amount of encoding is one of the most important ways to accelerate magnetic resonance imaging (MRI). Partially parallel imaging (PPI) is a widely used reduced-encoding technique in the clinic due to many desirable properties such as linear reconstruction, ease of use, and the g-factor for clearly characterizing the noise property [16]. Specifically, PPI exploits the sensitivity prior in multichannel acquisitions to acquire fewer encodings than conventional methods [7]. Its acceleration factor is restricted by the number of channels. Increasingly large coil arrays, such as 32-channel [8–11], 64-channel [12], and even 128-channel [13], have been used for faster imaging. However, the acceleration ability of PPI while maintaining a certain signal-to-noise ratio (SNR) is still limited, because the imaging system is highly ill-posed and amplifies the sampling noise at higher acceleration factors. One solution is to introduce other prior information as a regularization term in the imaging equation. The sparsity prior, which has become increasingly popular with the emergence of compressed sensing (CS) theory [14–16], has been extensively exploited to reconstruct the target image from a small amount of acquisition data (i.e., below the Nyquist sampling rate) in many MRI applications [17–20]. Because PPI and compressed sensing MRI (CSMRI) are based on different ancillary information (sensitivity for the former and sparseness for the latter), it is desirable to combine them to further accelerate imaging.

Recently, SparseSENSE and its equivalents [1, 3, 21–25] have been proposed as a straightforward way to combine PPI and CS. The formulation of this method is similar to that of SparseMRI, except that the Fourier encoding is replaced by the sensitivity encoding (comprising Fourier encoding and sensitivity weighting). Generally, SparseSENSE aims to solve the following optimization problem:

\min_u \; \alpha \|\Psi u\|_1 + \frac{1}{2}\|E u - f\|_2^2,  (1)

where the first term is the regularization term and the second one is the data consistency term. \|\cdot\|_1 and \|\cdot\|_2 denote the 1-norm and 2-norm, respectively, u is the to-be-reconstructed image, and α is the regularization parameter. Ψ denotes a sparsifying transform (e.g., spatial finite difference or wavelet), and the term \|\Psi u\|_1 controls the solution sparsity. E and f are the encoding matrix and the measured data, respectively:

E = \begin{bmatrix} F_p S_1 \\ \vdots \\ F_p S_J \end{bmatrix}, \qquad f = \begin{bmatrix} f_1 \\ \vdots \\ f_J \end{bmatrix},  (2)

where F_p is the partial Fourier transform, S_j is the diagonal sensitivity map for receiver j, and f_j is the measured k-space data at receiver j (j = 1, …, J). In this paper, we mainly solve the popular total variation based SparseSENSE model, that is, Ψ = ∇ (or its improved version, the total generalized variation).
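To make the structure of the encoding matrix E in (2) concrete, the following Python sketch shows how the forward operator Eu and its adjoint E^H f can be applied with FFTs and pointwise sensitivity weighting. This is only an illustration under assumed conventions (orthonormal FFT, random placeholder sensitivities and mask); it is not the authors' implementation.

```python
import numpy as np

def sense_forward(u, sens, mask):
    """Apply E: weight the image by each coil sensitivity, Fourier transform,
    and keep only the sampled k-space locations given by the mask."""
    # u: (ny, nx) complex image; sens: (J, ny, nx); mask: (ny, nx) boolean
    coil_images = sens * u[None, :, :]
    kspace = np.fft.fft2(coil_images, axes=(-2, -1), norm="ortho")
    return kspace * mask[None, :, :]

def sense_adjoint(f, sens, mask):
    """Apply E^H: zero-fill, inverse Fourier transform, and combine the
    coil images with the conjugate sensitivities."""
    coil_images = np.fft.ifft2(f * mask[None, :, :], axes=(-2, -1), norm="ortho")
    return np.sum(np.conj(sens) * coil_images, axis=0)

# Toy usage with random placeholders (J = 4 coils, 64 x 64 image).
rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
sens = rng.standard_normal((4, 64, 64)) + 1j * rng.standard_normal((4, 64, 64))
mask = rng.random((64, 64)) < 0.3            # ~30% random sampling
f = sense_forward(u, sens, mask)             # simulated undersampled data
back = sense_adjoint(f, sens, mask)          # E^H f, the adjoint (SENSE-combined) image
```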

The minimization (1) is computationally challenging, not only because of the nondifferentiability of the ℓ1-norm term but also because of the ill-conditioning of the large inversion matrix associated with E. Moreover, the computational burden grows further when we try to improve the performance of SparseSENSE by using large coil arrays, high undersampling factors, or more powerful (usually nonorthogonal) transforms to squeeze out sparsity. Therefore, rapid and efficient numerical algorithms are highly desirable, especially for large coil arrays, high undersampling factors, and general sparsifying transforms.

Several rapid numerical algorithms can overcome these numerical difficulties, for example, the alternating direction method of multipliers (ADMM) [26], the augmented Lagrangian method (ALM) [27], the split Bregman algorithm (SBA) [28], splitting Barzilai-Borwein (SBB) [24], and Bregman operator splitting (BOS) [29]. The efficiency of these methods largely depends on the special structure of the matrix operator Ψ (such as a Toeplitz or orthogonal matrix) and of the encoding kernel F_p (without the sensitivity maps). However, they are not suitable for simultaneously dealing with a general regularization operator Ψ and the parallel encoding matrix E. That is, these algorithms cannot solve problem (1) efficiently, because the inversion of a large matrix has to be computed if Ψ and/or E cannot be diagonalized directly by the fast Fourier transform (FFT). The alternating minimization (AM) algorithm can address the issue of general Ψ and E; it is a powerful optimization scheme that breaks a complex problem into simple subproblems [3]. But the introduction of a new variable may slow the convergence. Our numerical results in Section 4 also show that the alternating minimization algorithm is not very effective for large coil data in terms of convergence speed. Table 1 illustrates the ability of these algorithms to work on general Ψ and E (without any preconditioning). We can see that only AM is able to deal with both general operators simultaneously.

To overcome the limitations of the algorithms mentioned above, this paper develops two fast numerical algorithms based on operator splitting and Barzilai-Borwein techniques. The proposed algorithms can be classified as forward-backward splitting (FBS) methods [30] or variations thereof. Different from some existing fast algorithms, the proposed algorithms can treat general matrix operators Ψ and E and avoid solving a partial differential equation, thereby saving substantial computational cost. The superiority of our algorithms lies in the fact that operator splitting is applied to both the regularization term and the data consistency term. Meanwhile, they ensure that only a well-conditioned system of equations needs to be solved. The Barzilai-Borwein (BB) stepsize selection scheme [31] is adopted for much faster computation.

This paper is organized as follows. In Section 2, we review some related numerical methods for solving the SparseSENSE model. In Section 3, two algorithms are proposed as variations of the forward-backward splitting scheme. We compare the proposed algorithms with popular algorithms for the SparseSENSE model in Section 4. In Section 5, we discuss the parameter selection of the proposed algorithms and the connections between the proposed algorithms and existing algorithms. Section 6 concludes the paper. The Appendix proves the convergence of the proposed algorithms with constant stepsizes.

2. Related Numerical Methods

In early works, gradient descent methods with explicit or semi-implicit schemes [19, 32] were usually used to solve problem (1), in which the nondifferentiable ℓ1 norm was approximated by a smooth term:

\|\nabla u\|_1 \approx \sum_i \sqrt{|(\nabla u)_i|^2 + \varepsilon},  (3)

where ∇u contains the forward finite differences of u. The selection of the regulating positive parameter ε is crucial for the reconstruction quality and convergence speed. A large parameter encourages a fast convergence rate but fails to preserve high-quality details. A small one preserves fine structures in the reconstruction at the expense of slow convergence.
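As an illustration of this smoothing (a sketch with an assumed isotropic smoothing parameter eps and Neumann boundary handling, not necessarily the exact discretization used in [19, 32]), the gradient of the smoothed TV term in (3) can be computed with forward differences and a discrete divergence:

```python
import numpy as np

def grad(u):
    """Forward finite differences with zero at the last row/column (Neumann boundary)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def smoothed_tv_grad(u, eps=1e-6):
    """Gradient of sum_i sqrt(|(grad u)_i|^2 + eps), i.e. -div(grad u / sqrt(|grad u|^2 + eps))."""
    gx, gy = grad(u)
    mag = np.sqrt(np.abs(gx) ** 2 + np.abs(gy) ** 2 + eps)
    return -div(gx / mag, gy / mag)
```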

The methods in [33, 34] and the split Bregman method [28], which is equivalent to the alternating direction method of multipliers [26], were presented for solving minimization (1). The efficiency of these algorithms benefits from the soft shrinkage operator and from the special structure of the encoding matrix and sparsifying transform. This requires that both E^H E and Ψ^H Ψ in the optimality equation for u can be diagonalized directly by FFT. These methods may therefore not be suitable for the parallel encoding matrix E and a more general transform Ψ. They are even ill-posed when the null spaces of E and Ψ have a nontrivial intersection. In addition, the augmented Lagrangian method in [27] preconditions the encoding matrix and inevitably computes the inversion of a matrix involving the general Ψ, so it is also unsatisfactory in terms of computational efficiency.

The Bregman operator splitting (BOS) method replaces the operator E^H E by a proximal-like term [29]. BOS is able to deal with an encoding matrix E of uncertain structure through the iteration

u^{k+1} = \arg\min_u \; \alpha\|\Psi u\|_1 + \frac{1}{2\delta}\big\|u - \big(u^k - \delta E^H(Eu^k - f)\big)\big\|_2^2.  (4)

However, solving the subproblem in (4) requires the solution of a partial differential equation involving Ψ, which may bring heavy computation for a general regularization operator Ψ. To reduce this computation, Ye et al. presented the SBB scheme by utilizing the BB stepsize [24]. However, these algorithms may still not be efficient for a general Ψ if the operator Ψ^H Ψ cannot be diagonalized by a fast transform.
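For reference, the Barzilai-Borwein stepsize used by SBB-type schemes can be computed from successive iterates and gradients. The sketch below uses the common BB1 formula and is only illustrative; the fallback value is a placeholder.

```python
import numpy as np

def bb_stepsize(u, u_prev, g, g_prev, fallback=1.0):
    """Barzilai-Borwein (BB1) stepsize: delta_k = <s, s> / <s, y>,
    where s = u^k - u^{k-1} and y = g^k - g^{k-1}."""
    s = (u - u_prev).ravel()
    y = (g - g_prev).ravel()
    denom = np.real(np.vdot(s, y))
    return float(np.real(np.vdot(s, s)) / denom) if denom > 0 else fallback
```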

Alternatively, minimization (1) can be written as a saddle-point problem:

\min_u \max_{p \in X} \; \alpha\langle \nabla u, p\rangle + \frac{1}{2}\|Eu - f\|_2^2,  (5)

where X = \{p : \|p\|_\infty \le 1\}. Although the primal-dual hybrid gradient (PDHG) method alternately updates the primal and dual variables u and p [35], its efficiency relies on the special structure of E; that is, E^H E must be diagonalizable directly by a fast transform. The alternating minimization (AM) algorithm [3] reduces the PPI reconstruction problem with TV regularization to a TV-based image denoising subproblem and a least squares (LS) subproblem:

\min_{u,v} \; \alpha\|\nabla v\|_1 + \frac{\beta}{2}\|v - u\|_2^2 + \frac{1}{2}\|Eu - f\|_2^2.  (6)

The AM algorithm alternates between the two subproblems:

v^{k+1} = \arg\min_v \; \alpha\|\nabla v\|_1 + \frac{\beta}{2}\|v - u^k\|_2^2, \qquad
u^{k+1} = \arg\min_u \; \frac{\beta}{2}\|v^{k+1} - u\|_2^2 + \frac{1}{2}\|Eu - f\|_2^2,  (7)

where the denoising subproblem is solved by primal-dual iterations whose stepsizes are updated by the rules of [35].

3. Algorithm Framework

In this section, we propose two algorithms for solving the SparseSENSE model (1); they are based on the operator splitting scheme and are connected by the Yosida approximation. We derive the algorithms with a fixed-point technique as follows.

For the convenience of derivation, we denote J(u) = α\|\Psi u\|_1 and rewrite the SparseSENSE model as

\min_u \; J(u) + \frac{1}{2}\|Eu - f\|_2^2.  (8)

By the classic arguments of convex analysis, a solution u* to (8) satisfies the first-order optimality condition

0 \in \partial J(u^*) + E^H(Eu^* - f),  (9)

where ∂J(u*) is the subdifferential of J at the point u*. According to the chain rule, this subdifferential is identified by

\partial J(u^*) = \alpha \Psi^H \partial\|\cdot\|_1(\Psi u^*).  (10)

By substituting it into (9) and splitting, we get the equivalent formulation

\big(u^* - \delta E^H(Eu^* - f)\big) - u^* \in \delta\, \partial J(u^*).  (11)

In addition, for any positive number δ, we have

u^* = \mathrm{prox}_{\delta J}\big(u^* - \delta E^H(Eu^* - f)\big),  (12)

where the proximal operator is defined as

\mathrm{prox}_{\delta J}(z) = \arg\min_x \; \delta J(x) + \frac{1}{2}\|x - z\|_2^2.  (13)

Therefore, when the Barzilai-Borwein technique is involved, the solution to the minimization problem (1) can be obtained quickly from the following update:

u^{k+1} = \mathrm{prox}_{\delta_k J}\big(u^k - \delta_k E^H(Eu^k - f)\big),  (14)

with δ_k the BB stepsize. The above iterations can be identified as a forward-backward operator splitting method [36]. Comparing (14) with (4), the partial differential equation in (4) is not involved. Since the regularization term in the SparseSENSE model (1) is an ℓ1 norm, the proximal step can be evaluated quickly through the shrinkage operator

\mathrm{shrink}(z, \tau)_i = \frac{z_i}{|z_i|}\max(|z_i| - \tau, 0).  (15)

The proposed algorithm (14) is referred to as the forward-backward operator splitting shrinkage (FBOSS) algorithm.
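A minimal sketch of one forward-backward update in the spirit of (14) is given below. It assumes, for simplicity, that Ψ is an orthonormal transform so that the proximal step reduces exactly to soft shrinkage in the transform domain; the operator callables E, EH, Psi, and PsiH are placeholders supplied by the user, and the toy usage takes all of them to be the identity (a denoising-like setting). The BB stepsize could be supplied by a routine such as the one sketched in Section 2.

```python
import numpy as np

def soft_shrink(z, tau):
    """Complex soft-thresholding: shrink the magnitude by tau, keep the phase."""
    mag = np.abs(z)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * z, 0)

def fb_step(u, f, E, EH, Psi, PsiH, alpha, delta):
    """One forward-backward update: a gradient (forward) step on the data term,
    then a proximal (backward) step on the regularizer, evaluated by shrinkage
    in the transform domain (exact when Psi is orthonormal)."""
    z = u - delta * EH(E(u) - f)                       # forward step
    return PsiH(soft_shrink(Psi(z), alpha * delta))    # backward (prox) step

# Toy usage: E = Psi = identity.
rng = np.random.default_rng(1)
f = rng.standard_normal((32, 32))
u = np.zeros_like(f)
for _ in range(50):
    u = fb_step(u, f, lambda x: x, lambda x: x, lambda x: x, lambda x: x,
                alpha=0.1, delta=1.0)
```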

Considering Moreau's decomposition [30], for the proximal operator there exists the following connection:

\mathrm{prox}_{g}(z) + \mathrm{prox}_{g^*}(z) = z,  (16)

where g* represents the conjugate function of g. Applying (16) to the update in (14), for all k we obtain the modified iterating sequence

z^k = u^k - \delta_k E^H(Eu^k - f), \qquad u^{k+1} = z^k - \mathrm{prox}_{(\delta_k J)^*}(z^k).  (17)

For minimization (1), the proximal operator \mathrm{prox}_{(\delta_k J)^*} is a projection operator onto the set \{p : \|p\|_\infty \le \delta_k\alpha\}. According to (15) and (16), we obtain the following computation through a simple derivation:

\Pi_\tau(z)_i = z_i \min\big(1, \tau/|z_i|\big), \qquad z - \mathrm{shrink}(z, \tau) = \Pi_\tau(z).  (18)

This shows that the modified iteration (17) is also a fast numerical algorithm for solving minimization (1), which we call the forward-backward operator splitting projection (FBOSP) method; the projection operator can be evaluated as fast as the shrinkage.
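The relation between the shrinkage used by FBOSS and the projection used by FBOSP is exactly Moreau's decomposition (16). The following snippet, illustrative only, verifies numerically that soft_shrink(z, tau) + project_linf(z, tau) = z for complex inputs, in line with (18).

```python
import numpy as np

def soft_shrink(z, tau):
    """Proximal operator of tau*||.||_1 (componentwise magnitude shrinkage)."""
    mag = np.abs(z)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * z, 0)

def project_linf(z, tau):
    """Projection onto {p : |p_i| <= tau}, i.e. the prox of the conjugate function."""
    mag = np.maximum(np.abs(z), 1e-12)
    return z * np.minimum(1.0, tau / mag)

rng = np.random.default_rng(2)
z = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
tau = 0.3
# Moreau's decomposition: prox_g(z) + prox_{g*}(z) = z
assert np.allclose(soft_shrink(z, tau) + project_linf(z, tau), z)
```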

4. Numerical Experiments

Three sets of MR data are used in the experiments. Data 1 was acquired on a GE 3T scanner (GE Healthcare, Waukesha, WI) with a 32-channel head coil and a 3D T1-weighted spoiled gradient echo sequence (TE = minimum full, TR = 7.5 ms,  cm, , and slice thickness = 1.7 mm). Data 2 was one frame of a dynamic MR data set, acquired on a 3T Siemens Verio scanner (Siemens Medical Solutions, Erlangen, Germany) (flip angle = 50 degrees,  ms, ,  mm × 287 mm, and slice  mm). Data 3 () was downloaded from http://www.ece.tamu.edu/~jimji/pulsarweb/ [37].

We could directly reconstruct sparsely acquired MRI data with the proposed schemes. However, to make quantitative comparisons between the proposed schemes and the other methods, we employ fully acquired data as the reference. Therefore, all data sets were fully acquired and then retrospectively undersampled using nonuniform random sampling masks corresponding to different undersampling factors. The ground truth (reference) image is the square root of the sum of squares of the coil images obtained from the fully sampled k-space data of all channels. Peak signal-to-noise ratio (PSNR) and relative error are employed as the performance metrics, defined as PSNR = 10 \log_{10}\big(\max(|u_{\mathrm{ref}}|)^2 / \mathrm{MSE}(u, u_{\mathrm{ref}})\big) and \|u - u_{\mathrm{ref}}\|_2 / \|u_{\mathrm{ref}}\|_2, respectively, where u_ref denotes the reference image. The sensitivity maps are simulated from the full k-space data to avoid the effect of inaccurate sensitivity estimation.
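For reproducibility, the two metrics and the reference image can be computed as below. This is a sketch assuming the standard definitions stated above; the exact PSNR scaling used by the authors may differ.

```python
import numpy as np

def relative_error(u, u_ref):
    """Relative l2 error ||u - u_ref||_2 / ||u_ref||_2."""
    return np.linalg.norm(u - u_ref) / np.linalg.norm(u_ref)

def psnr(u, u_ref):
    """Peak signal-to-noise ratio in dB, with the peak taken from the reference image."""
    mse = np.mean(np.abs(u - u_ref) ** 2)
    peak = np.max(np.abs(u_ref))
    return 10.0 * np.log10(peak ** 2 / mse)

def sos_reference(coil_images):
    """Square root of the sum of squares of the coil images (ground-truth reference)."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
```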

All the comparison algorithms were implemented in MATLAB (version R2013a) and run on a computer with an Intel(R) Xeon(R) CPU X5690 3.47 GHz, 64 GB of memory, and a Windows operating system. Empirically, the regularization parameter was set to the same value for all algorithms, the auxiliary parameters of BOS and AM were tuned individually, and the stepsize parameter for FBOSS and FBOSP was chosen within a fixed range. Each algorithm is terminated when the relative change of successive iterates falls below a predefined tolerance. This criterion guarantees that the iterative solutions of all algorithms are sufficiently close to the optimal solution.
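The stopping rule based on the relative change of successive iterates can be implemented as follows; the tolerance value is a placeholder.

```python
import numpy as np

def converged(u_new, u_old, tol=1e-4):
    """Stop when ||u^{k+1} - u^k||_2 / ||u^k||_2 falls below the tolerance."""
    return np.linalg.norm(u_new - u_old) / max(np.linalg.norm(u_old), 1e-12) < tol
```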

4.1. Comparisons on the TV-Based SparseSENSE Model

In this subsection, comparisons on the TV-based SparseSENSE model were carried out with BOS [29], AM [3], SBB [24], FBOSS, and FBOSP. We tested each data set with different undersampling factors. Table 2 reports the numerical comparison results. We did not use a 10-fold undersampling factor for data 2 because, at this undersampling rate, the reconstruction error of all algorithms is very large due to noise in the data acquisition. As shown in Table 2, the proposed algorithms FBOSS and FBOSP achieve better reconstruction performance with less computational time than the other three methods.

As shown in Figures 1, 2, and 3, images reconstructed by the five algorithms have similar reconstruction qualities. In Figures 4 and 5, we give the reconstructed results and error images based on FBOSS and FBOSP at different iterations to illustrate that the proposed algorithms can improve image quality iteratively.

Besides, we plot the relative error and PSNR as functions of CPU time to examine the efficiency of FBOSS and FBOSP compared to BOS, AM, and SBB. As indicated in Figures 6(a), 6(b), and 6(c), the relative error curves of FBOSS and FBOSP descend faster than those of BOS, AM, and SBB. The curve of BOS lies far above the others, which implies its lower efficiency. FBOSS and FBOSP have almost identical performance. Figures 6(d), 6(e), and 6(f) show that the PSNR curves of FBOSS and FBOSP ascend fastest among the five algorithms.

The above experiments illustrate that the proposed algorithms FBOSS and FBOSP outperform BOS, SBB, and AM in terms of computational efficiency, even though the operator Ψ^H Ψ generated from the TV-based SparseSENSE model can be diagonalized by FFT. Besides, as indicated in Table 2, the larger the undersampling factor of the test data, the bigger the CPU time gap between the proposed methods and the others; for small undersampling factors this gap is not obvious. This is because the initial solution for data with a small undersampling factor is already close to the optimal solution, so all algorithms need only a few iterations to meet the stopping criterion, whereas the opposite holds for data with a large undersampling factor. In addition, we find that the larger the number of coils in the test data, the more pronounced the advantage of the proposed methods. To demonstrate this point, we plot the CPU time required to reach a similar relative error as a function of the number of coils in Figures 7(a) and 7(b). The figures show that, as the coil number increases, the CPU time of the proposed algorithms grows more slowly than that of the others. Robustness with respect to the number of coils is thus another advantage of the presented algorithms. These observations indicate that the proposed algorithms are superior especially in the demanding situation where the data are large-scale and highly undersampled.

4.2. Comparisons on the Second-Order TGV-Based SparseSENSE Model

In this subsection, we compare only FBOSP with AM on a special second-order TGV reconstruction model, to demonstrate the performance of the proposed algorithms for a general Ψ. This is because BOS and SBB cannot work well for a general sparsifying transform, and FBOSP has an inexact connection to AM.

The second-order total generalized variation [38] is defined as

\mathrm{TGV}_\alpha^2(u) = \sup\Big\{\int_\Omega u \,\mathrm{div}^2 w \, dx : w \in C_c^2(\Omega, S^{d\times d}),\ \|w\|_\infty \le \alpha_0,\ \|\mathrm{div}\, w\|_\infty \le \alpha_1\Big\},  (19)

where Ω ⊂ R^d, C_c^2(\Omega, S^{d\times d}) is the space of compactly supported symmetric tensor fields, and S^{d×d} is the space of symmetric tensors on R^d. In a discrete setting with fixed weights, a special form of the second-order TGV can be written as

\|\Psi u\|_1 \quad \text{with} \quad \Psi u = (D_{11}u,\ D_{22}u,\ D_{12}u),  (20)

where D_{ij} is the second-order finite difference and the subscripts 1 and 2 represent the horizontal and vertical directions, respectively. Obviously, the operator Ψ in (20) cannot be diagonalized by a fast transform. Therefore, BOS and SBB are absent from the comparisons, as they are not efficient for this special TGV SparseSENSE model.
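A sketch of the second-order finite differences appearing in (20) is given below. It uses a forward-difference-of-forward-difference discretization; the authors' exact boundary handling is not specified, so zero padding at the boundary is assumed here.

```python
import numpy as np

def second_order_differences(u):
    """Return the second-order finite differences D11 u, D22 u, and D12 u of a
    2D image, where 1 and 2 denote the horizontal and vertical directions."""
    dx = np.zeros_like(u); dy = np.zeros_like(u)
    dx[:, :-1] = u[:, 1:] - u[:, :-1]        # first-order horizontal difference
    dy[:-1, :] = u[1:, :] - u[:-1, :]        # first-order vertical difference
    dxx = np.zeros_like(u); dyy = np.zeros_like(u); dxy = np.zeros_like(u)
    dxx[:, :-1] = dx[:, 1:] - dx[:, :-1]     # D11
    dyy[:-1, :] = dy[1:, :] - dy[:-1, :]     # D22
    dxy[:-1, :] = dx[1:, :] - dx[:-1, :]     # D12 (mixed difference)
    return dxx, dyy, dxy
```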

For the comparisons between FBOSP and AM, we test data 1, data 2, and data 3 corresponding to undersampling factors 10, 6, and 4, respectively. Table 3 demonstrates the superiority of FBOSP. The reconstructed and error images for AM and FBOSP on data 3 are shown in Figure 8. In addition, we give the reconstructed and error images at different iterations to demonstrate that FBOSP is able to improve image quality iteratively (shown in Figure 9). As shown in Figures 10(a), 10(b), and 10(c), the proposed algorithm FBOSP keeps its superiority in convergence speed.

5. Discussions

5.1. Parameters Selection

In this subsection, we discuss the effect of the involved parameters α and δ on the performance of the proposed algorithms for the TV-based SparseSENSE model. Data 1 is used for this test. We plot the relative error as a function of CPU time under different parameter settings for FBOSS and FBOSP. As shown in Figure 11(a), FBOSS with a smaller α or a larger δ seems to perform better than the other parameter combinations. The reason is that, for a small α, the information of the gradient image is less likely to be lost in the shrinkage procedure; meanwhile, the iterates approximate the optimal solution better when δ is sufficiently large. In contrast, FBOSP shows nearly stable performance under different parameters. Therefore, we selected the parameters of FBOSS to give moderate performance and fixed the parameters of FBOSP in the experiments.

5.2. Connection to the Existing Methods

It is known that the main computational steps of BOS (and SBB) are split Bregman iterations, which are equivalent to ADMM, while those of AM are PDHG iterations. In this subsection, we analyze two relations between the proposed algorithms and these popular methods: (1) FBOSS and ADMM and (2) FBOSP and PDHG. For simplicity, we simplify (1) by replacing the sparsifying transform Ψ with the identity operator Id.

(1) Relation between FBOSS and ADMM. FBOSS can be interpreted as an ADMM method with a semi-implicit scheme applied to the augmented Lagrangian of the original problem:

L_\mu(u, w, b) = \alpha\|w\|_1 + \frac{1}{2}\|Eu - f\|_2^2 + \frac{\mu}{2}\|u - w + b\|_2^2,  (21)

where b is the (scaled) Lagrangian multiplier. Applying ADMM to (21), the solution to problem (1) can be obtained from the following sequence:

w^{k+1} = \mathrm{shrink}(u^k + b^k, \alpha/\mu), \quad
u^{k+1} = \arg\min_u \frac{1}{2}\|Eu - f\|_2^2 + \frac{\mu}{2}\|u - w^{k+1} + b^k\|_2^2, \quad
b^{k+1} = b^k + u^{k+1} - w^{k+1}.  (22)

If μ = 1/δ, FBOSS is almost equivalent to ADMM, except that the former employs a semi-implicit scheme for the u-subproblem (the data term is linearized and a single gradient step is taken), while the latter employs an implicit scheme (the subproblem is solved exactly). That is why the proposed algorithm performs better than the SBB method although SBB also adopts the BB stepsize scheme.
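To make the contrast concrete, the toy sketch below (with Ψ = Id, a small dense random matrix E, and placeholder parameters; it illustrates the two update styles only and is not the authors' code) compares ADMM's exact u-subproblem solve with a single linearized, semi-implicit gradient step.

```python
import numpy as np

def soft_shrink(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(3)
E = rng.standard_normal((40, 60))
f = E @ np.concatenate([rng.standard_normal(5), np.zeros(55)])   # sparse ground truth
alpha, mu = 0.1, 1.0
delta = 1.0 / np.linalg.norm(E, 2) ** 2                          # small, safe stepsize

u_admm = np.zeros(60); w1 = np.zeros(60); b1 = np.zeros(60)
u_semi = np.zeros(60); w2 = np.zeros(60); b2 = np.zeros(60)
A = E.T @ E + mu * np.eye(60)                  # system solved exactly by ADMM
for _ in range(200):
    # ADMM: implicit (exact) u-subproblem
    w1 = soft_shrink(u_admm + b1, alpha / mu)
    u_admm = np.linalg.solve(A, E.T @ f + mu * (w1 - b1))
    b1 = b1 + u_admm - w1
    # Semi-implicit variant: one linearized gradient step instead of the exact solve
    w2 = soft_shrink(u_semi + b2, alpha / mu)
    u_semi = u_semi - delta * (E.T @ (E @ u_semi - f) + mu * (u_semi - w2 + b2))
    b2 = b2 + u_semi - w2
```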

(2) Relation between FBOSP and PDHG. FBOSP, based on the projected gradient method, can be regarded as a PDHG technique without the proximal term in the gradient descent step, applied to the primal-dual formulation

\min_u \max_{\|p\|_\infty \le \alpha} \; \langle u, p\rangle + \frac{1}{2}\|Eu - f\|_2^2,  (23)

where p is the dual variable. According to the PDHG iteration, the max-min problem (23) is solved by updating the sequence

p^{k+1} = \Pi_{\{\|p\|_\infty \le \alpha\}}(p^k + \tau_k u^k), \quad
u^{k+1} = \arg\min_u \; \langle u, p^{k+1}\rangle + \frac{1}{2}\|Eu - f\|_2^2 + \frac{1}{2\theta_k}\|u - u^k\|_2^2.  (24)

By dropping the proximal term \frac{1}{2\theta_k}\|u - u^k\|_2^2, replacing the exact u-minimization with a gradient step, and taking the stepsize to be δ_k, the above iteration becomes the main procedure of the FBOSP method for minimization (1).

6. Conclusions

In this paper, two fast numerical algorithms based on the forward-backward splitting scheme are proposed for solving the SparseSENSE model. Both of them effectively overcome the difficulty caused by the nondifferentiable ℓ1 norm by exploiting the properties of the proximity operator. They also avoid solving the computationally expensive partial differential equation that has to be solved in conventional gradient descent methods. The proposed algorithms can treat general matrix operators Ψ and E that may not be diagonalized by the fast Fourier transform. We compared them with the Bregman-type algorithms [24, 29] and the alternating minimization method [3]. The results show that the proposed algorithms solve the SparseSENSE model efficiently.

Appendix

To establish the convergence of the presented algorithms, in this section we show that the proposed FBOSS algorithm with a constant stepsize δ converges, by utilizing the nonexpansivity of the proximity operator. Based on the connection between FBOSS and FBOSP, the convergence of FBOSP with a constant stepsize δ follows easily, because the projection operator retains the nonexpansive property. First, we recall a related property of the proximity operator.

Lemma A.1. Let g be a given convex function with conjugate function g*. Then, for any x and y, \|\mathrm{prox}_g(x) - \mathrm{prox}_g(y)\| \le \|x - y\| and \|\mathrm{prox}_{g^*}(x) - \mathrm{prox}_{g^*}(y)\| \le \|x - y\|; that is, both proximity operators are nonexpansive.

Next, we give the main convergence result for FBOSS with a constant stepsize δ, based on the nonexpansivity of the proximity operator.

Theorem A.2. Assume that the constant stepsize satisfies 0 < δ < 2/\|E^H E\|. Then the sequence {u^k} generated by Algorithm 1 with constant stepsize δ converges to a point u*, where u* is a solution to the SparseSENSE model (1).

Input  , and . Set , and .
Repeat
   Given and , compute ;
   Given and , compute ;
   Given , and , compute ;
   Given , and , compute ;
   Given and , compute ;
   
until  
return

Proof. Suppose that the point u* satisfies (14). According to (16) and Lemma A.1, the forward-backward operator defining (14) is nonexpansive for the assumed stepsize. In addition, by combining the relations in (14), we obtain \|u^{k+1} - u^*\| \le \|u^k - u^*\| \le \cdots \le \|u^0 - u^*\| for all k. Hence, \lim_{k\to\infty}\|u^k - u^*\| exists, which implies that the sequence {u^k} is bounded and that \|u^{k+1} - u^k\| \to 0. Moreover, it follows from (14), (16), and Lemma A.1 that the residual of the fixed-point relation (12) at u^k is controlled by \|u^{k+1} - u^k\|. Therefore, by taking limits on both sides of this inequality, we conclude that every limit point of {u^k} satisfies (14), and hence the sequence converges to such a point u*. On the other hand, by combining (9), (10), and (12), it is easy to conclude that u* is a solution to the SparseSENSE model (1) if the point u* satisfies (14).

From the proof of Theorem A.2, we immediately obtain the following convergence result for Algorithm 2 with a constant stepsize δ.

Input  , and . Set , and .
Repeat
   Given and , compute ;
   Given and , compute ;
   Given , and compute ;
   Given and , compute ;
   
until  
return

Theorem A.3. Assume that the constant stepsize satisfies 0 < δ < 2/\|E^H E\|. Then the sequence {u^k} generated by Algorithm 2 with constant stepsize δ converges to a point u*, where u* is a solution to the SparseSENSE model (1).

Competing Interests

The authors declare that there are no competing interests regarding the publication of this article.

Acknowledgments

This research was partly supported by the National Natural Science Foundation of China (nos. 61471350, 11301508, and 61401449), the Natural Science Foundation of Guangdong (nos. 2015A020214019, 2015A030310314, and 2015A030313740), the Guangdong Science and Technology Plan (Grant nos. 2015B010124001 and 2015B090903017), the Basic Research Program of Shenzhen (nos. JCYJ20140610152828678, JCYJ20150630114942318, and JCYJ20140610151856736), and the SIAT Innovation Program for Excellent Young Researchers (no. 201403).