Abstract

The reconstruction of dynamic magnetic resonance imaging (dMRI) from partially sampled k-space data involves a trade-off between spatial and temporal resolution. In this paper, a low-rank and sparse decomposition model is introduced to address this issue; it is formulated as an inverse problem regularized by robust principal component analysis (RPCA). The inverse problem can be solved by convex optimization methods. We propose a scalable and fast algorithm based on the inexact augmented Lagrange multiplier (IALM) method to carry out the convex optimization. The experimental results demonstrate that the proposed algorithm achieves superior reconstruction quality and faster reconstruction speed on cardiac cine images compared with existing state-of-the-art reconstruction methods.

1. Introduction

Dynamic MRI (magnetic resonance imaging), an essential medical imaging technique, allows noninvasive, nonionizing visualization and analysis of anatomical and functional changes of internal body structures over time. However, MRI sampling speed is relatively slow due to physical and physiological constraints such as nuclear relaxation and peripheral nerve stimulation [1]. One way to accelerate MRI is to reconstruct high-resolution images from undersampled k-space data. However, such undersampling violates the Nyquist criterion and often results in aliasing artifacts if traditional linear reconstruction is applied directly.

To address this issue, considerable research effort has been devoted to accelerating the MRI acquisition process using both hardware and software [2-4]. Among these approaches, compressed sensing (CS) has been shown to increase imaging speed and efficiency in MRI applications [5-7]. The CS theory requires image sparsity and incoherence between the acquisition space and the representation space [8]. Fortunately, MR image sequences often provide redundant information in both the spatial and temporal domains, which presents favorable conditions for the application of CS. In addition, the idea extends naturally to the reconstruction of dynamic MRI (dMRI) images, because extensive spatiotemporal correlations lead to sparser representations. k-t FOCUSS is a successful method that imposes a sparsity constraint in the temporal transform domain by using the FOCUSS algorithm [9] and extends the FOCUSS technique with motion estimation and compensation within a compressed sensing framework for cardiac cine MRI. However, the limited ability of its prediction scheme to sparsify the residual signal restricts further improvement when the motion is aperiodic.

Recently, researchers have made great efforts to exploit the low-rank property of matrices rather than simply the sparsity of vectors. Lingala et al. proposed the k-t SLR algorithm, which exploits a low-rank prior together with global sparsity in the Karhunen-Loève transform (KLT) domain for MRI reconstruction [10]. However, the algorithm does not take the structural sparsity of the MRI image into account, and this limitation holds back further improvement. Some studies presented patch-based dictionary learning techniques for dMRI reconstruction [11, 12]. A major challenge, however, is that such patch-based learning cannot be employed effectively for dMRI reconstruction: because dMRI sequences are large, learning dictionaries for such datasets is inefficient [13]. Even setting aside computational limitations, it is impractical to acquire the huge dMRI training sequences needed to learn sparsifying dictionaries. Recently, robust principal component analysis (RPCA) has been used to recover dynamic images by exploring the low-rank structure of the data [14, 15]. RPCA decomposes the data into low-rank and sparse components, where the low-rank component models the temporally correlated background and the sparse component represents the dynamic information. k-t RPCA [16], a method developed for dMRI, uses the low-rank plus sparse decomposition as a prior for reconstructing dynamic MRI from partial k-space measurements. In this method [16], the image reconstruction is regularized by a low-rank plus sparse prior, where the Fourier transform along the temporal direction is used as the sparsifying transform and the alternating direction method of multipliers (ADMM) is applied to solve the minimization problem. A shortcoming of k-t RPCA is that the reconstructed images are easily affected by noise, since noise is generally represented by highly sparse coefficients under the sparsifying transform.

In this paper, to address this shortcoming of k-t RPCA, we propose an efficient numerical algorithm based on the inexact augmented Lagrange multiplier (IALM) method, instead of ADMM, to solve the optimization problem and accelerate dMRI reconstruction. The experimental results demonstrate that the proposed algorithm achieves more satisfactory reconstruction performance and faster reconstruction speed on the given cardiac cine datasets.

2. Theory Background

The dynamic MRI data acquisition in the k-t space can be expressed as follows:
$$d(\mathbf{k}, t) = \int x(\mathbf{r}, t)\, e^{-j 2\pi \mathbf{k} \cdot \mathbf{r}}\, d\mathbf{r} + n(\mathbf{k}, t), \tag{1}$$
where $d(\mathbf{k}, t)$ represents the measured k-t space signal, $x(\mathbf{r}, t)$ denotes the desired dynamic image series, and $n(\mathbf{k}, t)$ is the measurement noise, which can be reasonably modeled as additive white Gaussian noise [16, 17].

In this paper, the goal is to find the closest representation of the MR image series from the undersampled measurements. Since the k-t space is only partially sampled, (1) is converted into an inverse problem and can be rewritten in vectorized form [18]:
$$d = \Phi F x + n, \tag{2}$$
where $d$, $x$, and $n$ are the stacked (vectorized) k-space measurements, image frames, and noise over all temporal frames, $T$ is the total number of frames, $F$ is the Fourier transform operator, and the measurement matrix $\Phi$ is the undersampling mask applied in k-space.
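For concreteness, the following is a minimal NumPy sketch of this forward model under simplifying assumptions (single coil, the mask applied to a frame-wise 2D FFT); the names forward_model and adjoint_model are illustrative and not part of the original formulation.

```python
import numpy as np

def forward_model(x, mask):
    """Sketch of the undersampled acquisition d = Phi F x: mask the frame-wise 2D FFT."""
    return mask * np.fft.fft2(x, axes=(0, 1))

def adjoint_model(d, mask):
    """Adjoint operator: zero-filled inverse FFT of the masked k-space data."""
    return np.fft.ifft2(mask * d, axes=(0, 1))

# Toy usage: a 32x32 image series with 8 frames and roughly 25% random sampling.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32, 8)) + 1j * rng.standard_normal((32, 32, 8))
mask = (rng.random((32, 32, 8)) < 0.25).astype(float)
noise = 0.01 * (rng.standard_normal((32, 32, 8)) + 1j * rng.standard_normal((32, 32, 8)))
d = forward_model(x, mask) + mask * noise
x_zero_filled = adjoint_model(d, mask)
```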

2.1. CS-Based MR Image Reconstruction

The CS approach [5, 19] was proposed to reconstruct the MR image from partially sampled k-space data by exploiting a sparsifying transform and convex optimization algorithms. The problem is solved if we can find the sparsest vector satisfying (2),
$$\min_{x} \|\Psi x\|_0 \quad \text{s.t.} \quad \|d - \Phi F x\|_2 \le \varepsilon, \tag{3}$$
where $\|\cdot\|_0$ is the l0-norm, which counts the number of nonzero entries of a vector, $\Psi$ is the sparsifying transform or dictionary, and $\varepsilon$ is a small constant. Unfortunately, (3) is an NP-hard problem that would require a brute-force search. The CS theory [8] proves that the l0-norm in (3) can be replaced by its convex relaxation, referred to as l1 minimization,
$$\min_{x} \|\Psi x\|_1 \quad \text{s.t.} \quad \|d - \Phi F x\|_2 \le \varepsilon, \tag{4}$$
where $\|\cdot\|_1$ is the l1-norm, the sum of the absolute values of the vector entries.
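As an illustration of how such an l1-relaxed problem can be solved in practice, the following is a minimal NumPy sketch of iterative soft-thresholding (ISTA) applied to the unconstrained Lagrangian form min_x (1/2)||y - Ax||^2 + lam*||x||_1; the random matrix A, the step size, and lam are illustrative choices and not the operators or values used later in this paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Entrywise soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy usage: recover a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
x_hat = ista(A, A @ x_true, lam=0.05)
```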

2.2. Low-Rank and Sparse Decomposition Model for MR Image Reconstruction

CS-based techniques that exploit the sparsity of the image in a transform domain have been used successfully for MR image reconstruction. However, the performance of CS depends primarily on the specific dictionary or sparsifying operator, which limits the maximum achievable acceleration rate. Therefore, some researchers have investigated new approaches to MR image reconstruction [20-24]. Among these methods, low-rank matrix recovery is a popular technique in medical image processing.

The basic assumption is the same as in [18], namely that the image is simultaneously sparse (in a transform domain) and low rank. The problem is to recover the space-time matrix $X$, whose columns are the vectorized temporal frames, given fewer k-space samples than the number of elements in the matrix. We assume that the approximate rank of the matrix is $r$ and that a single frame image contains $N$ pixels, so that $X$ has size $N \times T$. When the matrix is low rank, it has only $r(N + T - r)$ degrees of freedom instead of $NT$, so it is possible to recover the matrix from a smaller number of samples by solving the rank minimization problem
$$\min_{X} \operatorname{rank}(X) \quad \text{s.t.} \quad \|d - \mathcal{A}(X)\|_2 \le \varepsilon. \tag{5}$$

However, the rank minimization problem, that is, solving (5), is combinatorial and known to be NP-hard [25]. Therefore, convex relaxation is often used to make the minimization tractable:
$$\min_{X} \|X\|_* \quad \text{s.t.} \quad \|d - \mathcal{A}(X)\|_2 \le \varepsilon, \tag{6}$$
where $\mathcal{A}$ denotes any linear operator and $\|X\|_*$ is the nuclear norm, which is defined as
$$\|X\|_* = \sum_{i=1}^{r} \sigma_i(X), \tag{7}$$
where $\sigma_i(X)$ are the singular values of $X$ and $r$ is the rank of $X$.
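As a small illustration of the nuclear norm in (7), the sketch below computes it via the SVD of a toy low-rank space-time matrix; the sizes and the rank of 3 are arbitrary.

```python
import numpy as np

def nuclear_norm(X):
    """Nuclear norm ||X||_*: the sum of the singular values of X."""
    return np.linalg.svd(X, compute_uv=False).sum()

# A rank-3 space-time (Casorati) matrix: 256-pixel frames spanned by 3 temporal basis functions.
rng = np.random.default_rng(2)
X = rng.standard_normal((256, 3)) @ rng.standard_normal((3, 10))
print(np.linalg.matrix_rank(X), nuclear_norm(X))
```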

To recover $X$ from the given measurements $d$, $X$ can be decomposed into a superposition of a low-rank matrix $L$ and a sparse matrix $S$:
$$X = L + S. \tag{8}$$

$X$ is then recovered as the solution of the following optimization problem:
$$\min_{L, S} \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad X = L + S, \quad \mathcal{A}(X) = d, \tag{9}$$
where the low-rank matrix $L$ has few nonzero singular values and represents the background component, the sparse matrix $S$ has few nonzero entries and corresponds to the dynamic changes, and $\lambda$ is a tuning parameter that balances the contribution of the l1-norm relative to the nuclear norm.
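The decomposition assumption behind (8) and (9) can be illustrated with a toy space-time matrix built as a static background plus a few changing pixels; the sizes and sparsity level below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_frames = 256, 24

# Low-rank component: a static background replicated over all frames (rank 1).
L = np.outer(rng.random(n_pix), np.ones(n_frames))

# Sparse component: only a few "dynamic" pixels change from frame to frame.
S = np.zeros((n_pix, n_frames))
dynamic_pixels = rng.choice(n_pix, 8, replace=False)
S[dynamic_pixels, :] = rng.standard_normal((8, n_frames))

X = L + S  # the space-time matrix that the model seeks to split back into L and S
print(np.linalg.matrix_rank(L), np.count_nonzero(S) / S.size)
```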

3. The Proposed Method

In the principal component pursuit (PCP) model [26], (9) can be posed as an optimization problem that uses regularization rather than strict constraints [15]. Hence, (9) can be converted into
$$\min_{L, S} \frac{1}{2} \|E(L + S) - d\|_2^2 + \lambda_L \|L\|_* + \lambda_S \|T S\|_1, \tag{10}$$
where $E$ is the space-time encoding (undersampled Fourier) operator, the parameters $\lambda_L$ and $\lambda_S$ trade off data consistency against the low-rank and sparse penalties, and $T$ is a sparse transform basis.

Equation (10) is an RPCA problem that involves minimizing a combination of the nuclear norm and the l1-norm. Otazo et al. [15] adopted an iterative thresholding scheme to solve (10); however, iterative thresholding converges slowly. We therefore present an inexact augmented Lagrange multiplier (IALM) algorithm to solve the RPCA problem [27]. According to the constraint conditions of (6),
$$E^*(d) = L + S + N, \tag{11}$$
where $E^*$ is the dual (adjoint) operator of $E$, $N$ contains the measurement noise, and $L$ and $S$ are the low-rank and sparse elements, respectively. We apply the IALM method to solve the following optimization problem:
$$\min_{L, S} \|L\|_* + \lambda \|T S\|_1 + \langle Y, E^*(d) - L - S \rangle + \frac{\mu}{2} \|E^*(d) - L - S\|_F^2, \tag{12}$$
where $Y$ is a Lagrange multiplier introduced to remove the equality constraint and $\mu$ is a small positive scalar. The condition $\sum_{k=1}^{\infty} \mu_k^{-1} = +\infty$ implies that $\mu_k$ cannot grow too fast. The IALM method for solving the RPCA problem is described in Algorithm 1.

Input: Multicoil undersampled k-t data $d$; space-time multicoil encoding operator $E$; sparsifying transform $T$; weight parameter $\lambda$.
Initialization: Set $M_0 = E^*(d)$; $L_0 = 0$; $S_0 = 0$; $Y_0 = M_0 / \max(\|M_0\|_2, \lambda^{-1}\|M_0\|_\infty)$; $\mu_0 > 0$; $\rho > 1$; $k = 0$.
Loop: Repeat until convergence
Update $L$: $(U, \Sigma, V) = \operatorname{svd}(M_k - S_k + \mu_k^{-1} Y_k)$;
     $L_{k+1} = U\, \mathcal{S}_{\mu_k^{-1}}[\Sigma]\, V^H$.
Update $S$: $S_{k+1} = T^{-1}\left(\mathcal{S}_{\lambda \mu_k^{-1}}\left[T\left(M_k - L_{k+1} + \mu_k^{-1} Y_k\right)\right]\right)$.
Update $M$: $M_{k+1} = L_{k+1} + S_{k+1} - E^*\left(E(L_{k+1} + S_{k+1}) - d\right)$.
Update $Y$: $Y_{k+1} = Y_k + \mu_k (M_{k+1} - L_{k+1} - S_{k+1})$.
Update $\mu$: set $\mu_{k+1} = \rho \mu_k$; $k = k + 1$.
End Loop.
Output: $L_k$, $S_k$.
Algorithm 1: The proposed algorithm.

For Algorithm 1, if $\{\mu_k\}$ is nondecreasing and $\sum_{k=1}^{\infty} \mu_k^{-1} = +\infty$, then $(L_k, S_k)$ converges to an optimal solution of the RPCA problem. The advantage of allowing $\mu_k$ to be unbounded is that the feasibility condition can be approached more quickly, because the sequences of Lagrange multipliers remain bounded. In Algorithm 1, the singular value thresholding (SVT) operator [28] is defined as
$$\mathcal{D}_\tau(X) = U \mathcal{S}_\tau(\Sigma) V^H,$$
where $X = U \Sigma V^H$ is any singular value decomposition of $X$. $\mathcal{S}_\tau(\cdot)$ is the soft-thresholding operator, which can be defined as
$$\mathcal{S}_\tau(x) = \operatorname{sgn}(x) \max(|x| - \tau, 0).$$
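For reference, the following is a minimal NumPy sketch of the IALM iteration for the basic RPCA subproblem min ||L||_* + lam*||S||_1 s.t. L + S = M, using the SVT and soft-thresholding operators defined above. It operates on a fixed space-time matrix M and omits the encoding operator E, the sparsifying transform T, and the data-consistency step of Algorithm 1, so it is an illustrative simplification rather than the full proposed method; the default lam, the dual initialization, and the values of rho and mu are common choices for IALM and are assumptions here.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding operator D_tau(M) = U S_tau(Sigma) V^H."""
    U, sig, Vh = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(sig - tau, 0.0)) @ Vh

def soft(M, tau):
    """Entrywise soft-thresholding S_tau(M), valid for complex-valued data."""
    mag = np.abs(M)
    return M / np.maximum(mag, 1e-12) * np.maximum(mag - tau, 0.0)

def ialm_rpca(M, lam=None, rho=1.5, tol=1e-7, max_iter=500):
    """IALM for min ||L||_* + lam*||S||_1 subject to L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(M, 2)
    Y = M / max(norm_two, np.abs(M).max() / lam)   # dual initialization
    S = np.zeros_like(M)
    mu = 1.25 / norm_two
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)          # low-rank update
        S = soft(M - L + Y / mu, lam / mu)         # sparse update
        R = M - L - S                              # residual of the equality constraint
        Y = Y + mu * R                             # multiplier update
        mu = rho * mu                              # mu is nondecreasing
        if np.linalg.norm(R) / np.linalg.norm(M) < tol:
            break
    return L, S
```

In a reconstruction setting, M would typically be the Casorati matrix (pixels by frames) of a zero-filled reconstruction, with an additional data-consistency projection inside the loop as in Algorithm 1.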

4. Experimental Results and Discussion

Experiments were run in MATLAB V7.14.0 (R2012a) on an Intel Core i7-2640M CPU with 4.0 GB of memory under a 64-bit Windows 7 operating system. The proposed algorithm was validated by experiments using two cardiac cine datasets. The first dataset was obtained from the Bio Imaging and Signal Processing Lab (http://bispl.weebly.com/); it contains temporal frames of size , with a field of view (FOV) of and a slice thickness of . The second dataset was acquired from the website of Dr. Caballero (http://www.doc.ic.ac.uk/~jc1006/index.html) and was introduced by Caballero et al. [12]; the relevant imaging parameters were as follows: image matrix size , number of temporal frames , FOV , and slice thickness . Two widely used sampling trajectories, Cartesian and radial undersampling, were used to acquire the MR datasets in the k-space domain. Figure 1 shows the sampling masks used in this study and their effect on the magnitude of a temporal frame.

We compared the proposed method with k-t SLR [10] and k-t RPCA [16] in terms of reconstruction accuracy and reconstruction speed. Quantitative image quality assessment was performed using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [29]. The PSNR evaluates the difference between the reconstructed image and the fully sampled image and can be defined as
$$\mathrm{PSNR} = 10 \log_{10} \frac{\max(x)^2}{\frac{1}{N} \sum_{i=1}^{N} (\hat{x}_i - x_i)^2},$$
where $\hat{x}$ is the reconstructed image and $x$ represents the fully sampled image.
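A direct NumPy implementation of this metric might look as follows; the choice of the peak value (the maximum magnitude of the fully sampled reference) is an assumption, since the exact normalization is implementation dependent.

```python
import numpy as np

def psnr(recon, reference):
    """PSNR (in dB) between a reconstruction and the fully sampled reference image."""
    mse = np.mean(np.abs(recon - reference) ** 2)
    peak = np.abs(reference).max()
    return 10.0 * np.log10(peak ** 2 / mse)
```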

The SSIM is a method for measuring the similarity between the reconstructed image and the fully sampled image. We adopted the SSIM to measure the difference between the reconstructed image and the fully sampled image at each time frame. The SSIM index between the reconstructed image $\hat{x}$ and the fully sampled image $x$ at the same frame is evaluated as
$$\mathrm{SSIM}(\hat{x}, x) = \frac{(2 \mu_{\hat{x}} \mu_{x} + C_1)(2 \sigma_{\hat{x} x} + C_2)}{(\mu_{\hat{x}}^2 + \mu_{x}^2 + C_1)(\sigma_{\hat{x}}^2 + \sigma_{x}^2 + C_2)},$$
where $\mu_{\hat{x}}$ and $\mu_{x}$ are the mean intensities of the reconstructed image $\hat{x}$ and the fully sampled image $x$, $\sigma_{\hat{x}}$ and $\sigma_{x}$ are the standard deviations of $\hat{x}$ and $x$, $\sigma_{\hat{x} x}$ is the covariance of $\hat{x}$ and $x$, and $C_1 = (K_1 R)^2$ and $C_2 = (K_2 R)^2$ are constants, where $R$ is the dynamic range (255 for 8-bit grayscale images) and $K_1$ and $K_2$ are the parameter values suggested by Wang et al. [29].
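The sketch below evaluates this index from global image statistics; note that the full SSIM of Wang et al. [29] is computed over local windows and averaged, so this window-free version and the default values K1 = 0.01 and K2 = 0.03 are a simplified illustration.

```python
import numpy as np

def ssim_global(x, y, dynamic_range=255.0, k1=0.01, k2=0.03):
    """Global-statistics SSIM between two magnitude images (no local windowing)."""
    x = np.abs(x).astype(float)
    y = np.abs(y).astype(float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    c1, c2 = (k1 * dynamic_range) ** 2, (k2 * dynamic_range) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```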

The k-t SLR method uses a combination of TV and nonconvex Schatten p-norms; its parameters were selected based on the values suggested in the public software package (penalty parameters for the Schatten and TV norms, and a maximum of 50 inner and 9 outer iterations). In k-t RPCA, two regularization parameters control the regularization and the decomposition, respectively.

Similarly, the proposed method requires the specification of three parameters: the initial penalty $\mu_0$, the update factor $\rho$, and the weight parameter $\lambda$. We set $\mu_0$ and $\rho$ to fixed values, adopted a small relative change of the iterates as the stopping criterion for Algorithm 1, and chose a fixed weight parameter $\lambda$ following the suggestion of the authors of [16]. The proposed algorithm was verified by experiments using the fully sampled cardiac cines (the two datasets mentioned above) with the two different sampling trajectories.

To simulate accelerated k-space acquisition, the fully sampled k-space data were retrospectively undersampled using variable-density random sampling at different sampling factors. To test the robustness of the proposed method, the k-space data of the two datasets were corrupted with additional complex white Gaussian noise of fixed standard deviation. First, the method was tested on the first cardiac dataset using different sampling models with variable sampling ratios. Figure 2 compares the visual quality of the reconstructions obtained by the proposed method (Algorithm 1) and the other methods; the acceleration factor is approximately 4 (about 25% of acquired samples) for the Cartesian sampling masks. Figure 3 shows the PSNR of the reconstructed results for Cartesian and pseudo-radial sampling as a function of the sampling factor. It can be seen that the proposed method outperforms the other two methods with pseudo-radial sampling, although its performance at lower sampling ratios is slightly below that of k-t SLR with Cartesian sampling. Additionally, the SSIM at each time frame is shown in Figure 4 for both Cartesian and radial sampling at the same sampling factor (an acceleration factor of approximately 6, about 16.4% of acquired samples). The experimental results reveal that the proposed method achieves superior reconstruction in terms of SSIM, and its advantage is more evident when pseudo-radial sampling is used instead of Cartesian sampling.
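To make the retrospective undersampling concrete, the following sketch draws a variable-density random Cartesian mask with a fully sampled k-space center and randomly selected outer phase-encode lines per frame; the density profile, the number of center lines, and the function name are assumptions, since the exact sampling scheme used in the experiments is not specified here.

```python
import numpy as np

def cartesian_vd_mask(ny, nx, n_frames, accel=4, center_lines=8, seed=0):
    """Variable-density random Cartesian mask: a fully sampled k-space center
    plus randomly chosen outer phase-encode lines, redrawn for each frame."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((ny, nx, n_frames))
    n_lines = ny // accel
    center = np.arange(ny // 2 - center_lines // 2, ny // 2 + center_lines // 2)
    for t in range(n_frames):
        # Sample outer lines with probability decaying away from the k-space center.
        prob = 1.0 / (1.0 + np.abs(np.arange(ny) - ny / 2))
        prob[center] = 0.0
        prob /= prob.sum()
        chosen = rng.choice(ny, size=max(n_lines - center_lines, 0), replace=False, p=prob)
        mask[center, :, t] = 1.0
        mask[chosen, :, t] = 1.0
    return mask

mask = cartesian_vd_mask(128, 128, 16)  # roughly 4-fold Cartesian undersampling
```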

Moreover, we tested the proposed method on the second cardiac dataset using the same experimental procedure. Figure 5 provides visual comparisons for radial sampling with an acceleration factor of 4 (about 25% of acquired samples). Quantitative results (PSNR) for Cartesian and pseudo-radial sampling are reported in Figure 6. The behavior of the reconstructions under the two sampling models is similar to that observed in Figure 3.

Figures 3 and 6 indicate that both the proposed method and the other two methods are effective with Cartesian sampling at higher sampling ratios, whereas pseudo-radial sampling yields better performance at lower sampling ratios. Moreover, the results suggest that the proposed method is more robust over a cardiac cycle.

We also evaluated the execution time of the three methods using different sampling models with variable sampling factors on the different datasets. Table 1 shows the average computational time for reconstructing the complete set of temporal frames of the cardiac MRI images. From Table 1, it can be seen that our method is faster than the other two methods, which makes it more suitable for online dMRI reconstruction.

5. Conclusion

In this paper, we proposed a scalable and fast algorithm (IALM) for solving the RPCA optimization problem to recover dMRI sequences from highly undersampled k-space data. The proposed algorithm has a generalized formulation capable of separating dynamic MR data into a low-rank component and a sparse component, and it reconstructs and separates dynamic MR data from partial measurements simultaneously. Experiments on cardiac datasets have validated its efficiency and effectiveness compared with state-of-the-art methods.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work was supported by the National Natural Science Foundation of China (81271659), the China Postdoctoral Science Foundation (2014M552346), and the Special Fund for Basic Scientific Research of Central College, South-Central University for Nationalities (nos. CZP17033 and CZP17014).