Abstract

This paper introduces an efficient algorithm for magnetic resonance (MR) image reconstruction. The proposed method minimizes a linear combination of a nonlocal total variation term and a least-squares data-fitting term to reconstruct the MR images from undersampled $k$-space data. The nonlocal total variation is taken as the $L_1$-regularization functional and solved using Split Bregman iteration. The proposed algorithm is compared with previous methods in terms of reconstruction accuracy and computational complexity. The comparison results demonstrate the superiority of the proposed algorithm for compressed MR image reconstruction.

1. Introduction

Magnetic resonance (MR) imaging has been widely utilized in diagnosis because of its excellent depiction of soft tissue and its noninvasive nature. As explained in [1, 2], it is possible to accurately reconstruct MR images from undersampled $k$-space data using compressed sensing and thereby considerably reduce the scanning time. Suppose $u$ is a sparse signal, and let $A$ be a measurement matrix such that $b = Au$, where $b$ is the observed data. Then, recovering $u$ from $b$ is equivalent to solving

$$\min_u J(u) \quad \text{subject to} \quad Au = b, \tag{1}$$

where $J(u)$ is a regularizing functional; usually it may be the bounded variation seminorm or a Besov norm. In the case of compressed sensing MRI, $A$ is a partial Fourier matrix ($A = PF$), where $P$ is a row selector matrix formed from rows of the identity matrix, $F$ is the discrete Fourier matrix, and $b$ is the observed $k$-space data contaminated with Gaussian noise of variance $\sigma^2$; the relaxed form of (1) is then given by

$$\min_u J(u) \quad \text{subject to} \quad \|Au - b\|_2^2 \le \sigma^2. \tag{2}$$

In order to make (1) simple to solve, it is first converted into an unconstrained optimization problem. One common method for doing this is to use a penalty function/continuation method, which approximates (1) by a problem of the form

$$\min_u J(u) + \frac{\mu}{2}\|Au - b\|_2^2, \tag{3}$$

where $\mu$ is a balancing parameter between the fidelity term and the sparsity term [3, 4]. This is the unconstrained problem we need to solve.
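For concreteness, the sampling model above can be sketched in a few lines of NumPy. This is only an illustrative sketch: the function names, the boolean $k$-space mask representation, and the orthonormal FFT normalization are assumptions made here, not details taken from the paper.

```python
import numpy as np

def partial_fourier(u, mask):
    """Apply A = P F: 2D FFT followed by selection of the sampled k-space locations."""
    return np.fft.fft2(u, norm="ortho")[mask]

def partial_fourier_adjoint(b, mask, shape):
    """Apply the adjoint of A: place the samples back on the k-space grid, then inverse FFT."""
    k = np.zeros(shape, dtype=complex)
    k[mask] = b
    return np.fft.ifft2(k, norm="ortho")

def objective(u, b, mask, J, mu):
    """Evaluate the unconstrained objective J(u) + (mu/2) * ||A u - b||_2^2 of (3)."""
    r = partial_fourier(u, mask) - b
    return J(u) + 0.5 * mu * np.sum(np.abs(r) ** 2)
```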

There are many existing iterative methods to reconstruct MR images from undersampled data, such as the conjugate gradient algorithm (CG) [4], the operator splitting algorithm (TVCMRI) [5], the variable splitting method (RecPF) [6], the composite splitting algorithm (CSA) and its accelerated version, the fast composite splitting algorithm (FCSA) [7], and the split Bregman algorithm combined with total variation (SB-TV) [8]. The Split Bregman method provides an efficient solution to a wide class of $L_1$-regularized problems. In SB-TV, the Split Bregman technique is applied to the Rudin–Osher–Fatemi functional for a compressed sensing problem that arises in magnetic resonance imaging. A detailed explanation of the Split Bregman technique is given in Section 2. Reconstruction using CG is very slow for MR images. TVCMRI and RecPF can replace iterative linear solvers with Fourier domain computations, which can provide a reduction in computation time. FCSA performs much better than TVCMRI, RecPF, and CSA. FCSA is based on wavelet sparsity and total variation (TV). However, despite the huge popularity of TV minimization, it also has some unwanted effects. In the presence of noise, it tends to produce piecewise constant solutions, the so-called staircasing effect, which was analyzed in detail in [9, 10]. Another deficiency of TV regularization is the systematic loss of contrast in the reconstructions, even if the given data are noise free. This effect is the well-known systematic error of total variation minimization and was studied extensively in [11, 12]. The major problem associated with TV-based compressed sensing methods is that they penalize the image gradient uniformly, irrespective of the underlying image structures, and thus low-contrast regions are sometimes oversmoothed [13]. To resolve this issue, we propose a new algorithm which jointly minimizes a linear combination of a nonlocal total variation term and a least-squares data-fitting term to reconstruct the MR images from undersampled data. In medical image reconstruction, fine structures, details, and texture should be preserved. Nonlocal total variation (NLTV) makes the recovered image sharper and preserves fine structures, details, and textures. NLTV is much better than TV for improving the signal-to-noise ratio in practical applications [14–16]. The authors of [17, 18] showed that, among the existing nonlocal regularization techniques, NLTV performs well in reconstruction. So, in the proposed method, the nonlocal total variation is taken as the $L_1$-regularization functional and solved using the Split Bregman algorithm. Numerous experiments have been done on real MR images to show the efficiency and advantages of the proposed work in terms of computational complexity and reconstruction accuracy.

2. Split Bregman Algorithm and Regularization Method

The $L_1$-regularized problems have many important applications in engineering and imaging science. The general form of such problems is

$$\min_u \|\Phi(u)\|_1 + H(u), \tag{4}$$

where $\|\cdot\|_1$ represents the $L_1$-norm and both $\|\Phi(\cdot)\|_1$ and $H(\cdot)$ are convex functions. Many important problems in image processing can be interpreted as $L_1$-regularized optimization problems; equation (3) is an example of such a problem. A constrained problem of the form $\min_u \|\Phi(u)\|_1$ subject to $Au = b$ is commonly approximated by the penalty formulation

$$\min_u \|\Phi(u)\|_1 + \frac{\lambda}{2}\|Au - b\|_2^2. \tag{5}$$

Unfortunately, for many problems, selecting a large value for $\lambda$ makes (5) extremely difficult to solve numerically [19, 20]. Also, for several applications, $\lambda$ should be increased in small steps, making the strategy less efficient. Bregman iteration can be used to replace (5) by a short sequence of unconstrained problems, each with a fixed, moderate value of $\lambda$.

2.1. Bregman Iteration

Bregman iteration is a method for finding extrema of convex functionals [21]. Bregman iteration has already been applied to solve the basis pursuit problem in [22] and medical imaging problems in [23]. The general formulation of this method is explained by using the Bregman distance. The Bregman distance associated with a convex function $E$ at the point $v$ is

$$D_E^p(u, v) = E(u) - E(v) - \langle p, u - v \rangle, \tag{6}$$

where $p \in \partial E(v)$ is in the subgradient of $E$ at $v$. Clearly, this is not a distance in the usual sense; it measures closeness in the sense that $D_E^p(u, v) \ge 0$ and $D_E^p(u, v) \ge D_E^p(w, v)$ for every $w$ on the line segment between $u$ and $v$. Again, consider two convex energy functionals, $E$ and $H$, defined over $\mathbb{R}^n$ with $\min_u H(u) = 0$. The associated unconstrained minimization problem is

$$\min_u E(u) + \lambda H(u). \tag{7}$$

Modify this problem by iteratively solving

$$u^{k+1} = \arg\min_u D_E^{p^k}(u, u^k) + \lambda H(u), \tag{8}$$

as suggested by Bregman in [21]. For simplicity, assume that $H$ is differentiable; in this case, the optimality condition of (8), with the subdifferential evaluated at $u^{k+1}$, contains zero, and therefore

$$p^{k+1} = p^k - \lambda \nabla H(u^{k+1}). \tag{9}$$

Apply the Bregman iteration (8) to (5), with $E(u) = \|\Phi(u)\|_1$ and $H(u) = \frac{1}{2}\|Au - b\|_2^2$:

$$u^{k+1} = \arg\min_u D_E^{p^k}(u, u^k) + \frac{\lambda}{2}\|Au - b\|_2^2, \qquad p^{k+1} = p^k - \lambda A^T(Au^{k+1} - b). \tag{10}$$

Bregman iterations of this form were considered in [12]. There, it is shown that, when the constraint is linear (as it is here, since $A$ is a linear operator), this seemingly complicated iteration is equivalent to the simplified method

$$u^{k+1} = \arg\min_u \|\Phi(u)\|_1 + \frac{\lambda}{2}\|Au - b^k\|_2^2, \tag{11}$$

$$b^{k+1} = b^k + b - Au^{k+1}. \tag{12}$$
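The simplified iteration (11)–(12) amounts to repeatedly solving the penalized problem and "adding the residual back" to the data. A minimal sketch of this outer loop is given below; the inner solver `solve_penalized`, the number of iterations, and the starting guess are placeholders supplied by the user, not quantities specified in the paper.

```python
import numpy as np

def bregman_iteration(A, At, b, solve_penalized, n_iter=10):
    """Simplified Bregman iteration (11)-(12).
    A, At           : forward operator and its adjoint (callables)
    solve_penalized : inner solver for u = argmin ||Phi(u)||_1 + (lam/2)||A u - bk||^2
    """
    bk = b.copy()
    u = At(b)                         # illustrative starting guess
    for _ in range(n_iter):
        u = solve_penalized(bk, u)    # u^{k+1} from (11)
        bk = bk + b - A(u)            # b^{k+1} = b^k + b - A u^{k+1}, i.e., (12)
    return u
```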

2.2. Split Bregman Method

Now apply the Bregman framework to solve the $L_1$-regularized problem (4). In this case, assume that both $\|\Phi(\cdot)\|_1$ and $H(\cdot)$ are convex functions and that $H$ is differentiable. In this method, the $L_1$ and $L_2$ parts of the energy in (4) are decoupled. Consider (4) in the equivalent constrained form

$$\min_{u, d} \|d\|_1 + H(u) \quad \text{subject to} \quad d = \Phi(u). \tag{13}$$

To solve this problem, first convert it into an unconstrained problem:

$$\min_{u, d} \|d\|_1 + H(u) + \frac{\lambda}{2}\|d - \Phi(u)\|_2^2. \tag{14}$$

Let $E(u, d) = \|d\|_1 + H(u)$, and write the constraint as $d - \Phi(u) = 0$; then we can see that (14) is simply an application of (5). Use (11) and (12) to obtain the solution of the above problem:

$$(u^{k+1}, d^{k+1}) = \arg\min_{u, d} \|d\|_1 + H(u) + \frac{\lambda}{2}\|d - \Phi(u) - b^k\|_2^2, \tag{15}$$

$$b^{k+1} = b^k + \Phi(u^{k+1}) - d^{k+1}. \tag{16}$$

In order to perform the minimization effectively, split the $u$ and $d$ components of (15) and minimize with respect to $u$ and $d$ separately. The two steps result in the following subproblems:

$$u^{k+1} = \arg\min_u H(u) + \frac{\lambda}{2}\|d^k - \Phi(u) - b^k\|_2^2, \tag{17}$$

$$d^{k+1} = \arg\min_d \|d\|_1 + \frac{\lambda}{2}\|d - \Phi(u^{k+1}) - b^k\|_2^2. \tag{18}$$

In (18), there is no coupling between the elements of $d$. We can use the shrinkage operator to find the optimal value of $d$ as follows:

$$d^{k+1} = \mathrm{shrink}\big(\Phi(u^{k+1}) + b^k, \tfrac{1}{\lambda}\big), \tag{19}$$

where

$$\mathrm{shrink}(x, \gamma) = \frac{x}{|x|}\,\max\big(|x| - \gamma, 0\big). \tag{20}$$

This shrinkage operation is extremely fast and requires only a few operations per element of $d$.
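A minimal sketch of the shrinkage operator (19)–(20) is shown below. It applies elementwise to real or complex arrays; the small constant guarding the division is an implementation detail introduced here, not part of the formulation above.

```python
import numpy as np

def shrink(x, gamma):
    """Elementwise soft-thresholding of (20): the magnitude is reduced by gamma,
    the sign/phase is kept, and values with magnitude below gamma become zero."""
    mag = np.abs(x)
    scale = np.maximum(mag - gamma, 0) / np.maximum(mag, 1e-12)  # guard x == 0
    return scale * x
```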

Based on the above equations, the generalized Split Bregman method is summarized in Algorithm 1.

while $\|u^{k} - u^{k-1}\|_2 > \mathrm{tol}$ do
 for $n = 1$ to $N$ do
  $u^{k+1} = \arg\min_u H(u) + \frac{\lambda}{2}\|d^k - \Phi(u) - b^k\|_2^2$
  $d^{k+1} = \mathrm{shrink}\big(\Phi(u^{k+1}) + b^k, \tfrac{1}{\lambda}\big)$
 end for
 $b^{k+1} = b^k + \Phi(u^{k+1}) - d^{k+1}$
end while
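The loop of Algorithm 1 can be sketched compactly as follows. The inner solver for the $u$-subproblem (17) is supplied by the caller, the $d$-subproblem is handled by the `shrink` helper sketched above, and all names, defaults, and the zero initialization of the splitting variables are illustrative assumptions.

```python
import numpy as np

def split_bregman(phi, solve_u, lam, u0, n_outer=50, n_inner=1):
    """Generalized Split Bregman iteration (sketch of Algorithm 1).
    phi     : the operator Phi (e.g., a discrete gradient)
    solve_u : solver for u = argmin H(u) + (lam/2)||d - Phi(u) - bk||^2
    """
    u = u0.copy()
    d = np.zeros_like(phi(u))
    bk = np.zeros_like(d)
    for _ in range(n_outer):
        for _ in range(n_inner):
            u = solve_u(d, bk, u)                 # u-subproblem (17)
            d = shrink(phi(u) + bk, 1.0 / lam)    # d-subproblem (18) via (19)
        bk = bk + phi(u) - d                      # Bregman update (16)
    return u
```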

2.3. Nonlocal Total Variation Norm

Total variation (TV) regularization [24, 25] makes the recovered image sharper, but it does not preserve the fine structures, details, and textures. This effect is caused by the regularity assumption of the TV formulation of the image model, namely, that the image has a simple geometric description consisting of a set of connected sets (objects) with smooth contours (edges). Additionally, the model assumes that the image is smooth inside individual objects and has discontinuous jumps across the boundaries. Therefore, TV regularization is optimal for reducing noise and reconstructing the main geometrical configuration of an image. However, it fails to preserve texture, details, and fine structures, because these behave in all aspects like noise and thus cannot be distinguished from noise. The nonlocal total variation (NLTV) extends the TV functional to a nonlocal variant using nonlocal derivative operators based on a nonlocal weight function (graph) [14–17, 26, 27]. NLTV is an effective tool, instead of TV, for improving the signal-to-noise ratio in practical applications [14–16]. Recently, it has been successfully used for 4D computed tomography reconstruction from few projection data [28]. The notion "nonlocal" means that any point can directly interact with any other point in the whole image domain, where the intensity of the interaction depends on the value of the weight function. This weight function should represent the similarity of the two points and should be significant if both points are similar in an appropriate measure. Therefore, the expectation is that such an approach is able to process both structures (geometrical parts) and texture within the same framework, due to the identification of recurring structures in the whole image. A brief review of the definitions of the nonlocal functionals is given below.

Let $\Omega \subset \mathbb{R}^2$, let $x, y \in \Omega$, and let $u : \Omega \to \mathbb{R}$ be a real function. Assume that $w(x, y)$ is a weight function which is symmetric and nonnegative. Then, the nonlocal gradient of an image $u$ is defined as

$$(\nabla_w u)(x, y) = \big(u(y) - u(x)\big)\sqrt{w(x, y)}, \quad x, y \in \Omega. \tag{21}$$

The norm of a nonlocal vector $v$ at point $x$ is given by

$$|v|(x) = \sqrt{\int_\Omega v(x, y)^2 \, dy}. \tag{22}$$

Hence, the norm of the nonlocal gradient of a function $u$ at point $x$ is represented as

$$|\nabla_w u|(x) = \sqrt{\int_\Omega \big(u(y) - u(x)\big)^2 w(x, y)\, dy}. \tag{23}$$

The nonlocal divergence operator can be defined using the standard adjoint relation with the nonlocal gradient as follows:

$$\langle \nabla_w u, v \rangle = -\langle u, \mathrm{div}_w v \rangle, \tag{24}$$

which leads to the definition of the nonlocal divergence of the nonlocal vector $v$:

$$(\mathrm{div}_w v)(x) = \int_\Omega \big(v(x, y) - v(y, x)\big)\sqrt{w(x, y)}\, dy. \tag{25}$$

Next, the nonlocal Laplacian operator is defined as

$$\Delta_w u(x) = \tfrac{1}{2}\,\mathrm{div}_w\big(\nabla_w u\big)(x) = \int_\Omega \big(u(y) - u(x)\big)\, w(x, y)\, dy. \tag{26}$$

The discrete forms of the nonlocal gradient and divergence operators can be represented as

$$(\nabla_w u)_{x,y} = (u_y - u_x)\sqrt{w_{x,y}}, \qquad (\mathrm{div}_w v)_x = \sum_{y}\big(v_{x,y} - v_{y,x}\big)\sqrt{w_{x,y}}, \tag{27}$$

and the discrete nonlocal Laplacian as

$$(\Delta_w u)_x = \sum_{y}\big(u_y - u_x\big)\, w_{x,y}, \tag{28}$$

where $u_x$ represents the value of a pixel $x$ in the image $u$ and $w_{x,y}$ is the discrete sparse version of $w(x, y)$, defined as

$$w_{x,y} = \exp\!\left(-\frac{\big(G_\sigma * |u(x + \cdot) - u(y + \cdot)|^2\big)(0)}{h^2}\right), \tag{29}$$

where $h$ is a filtering parameter; in general $h$ corresponds to the noise level, and usually we set it to be the standard deviation of the noise. $G_\sigma$ is a Gaussian of standard deviation $\sigma$, and $u(x + \cdot)$ and $u(y + \cdot)$ are the image values in patches centered at pixels $x$ and $y$. The weight function denotes how much the difference between pixels $x$ and $y$ is penalized in the images. The more similar the neighborhoods of $x$ and $y$ are, the more the difference between their values should be penalized, and vice versa. The weight $w_{x,y}$ is significant only if the window around $x$ looks like the corresponding window around $y$. Hence, the nonlocal algorithm is very efficient in reducing noise while preserving contrast in natural images and redundant structures such as texture. In our work, we used small pixel patches inside a larger square search neighborhood window. The observed noisy data is taken as the reference image to construct the weights, and by this weighted averaging, the structures, for example, boundaries, are reinforced, while the noise gets cancelled out [29]. The weights are computed by using either a distance between the noisy pixel values [30–32] or a distance between the patches around $x$ and $y$ [33–35]. The use of NLTV reduces the noise in the reconstructed image, and thus the difference between the reference and the reconstructed image is reduced. From the definition of the signal-to-noise ratio (SNR), it is clear that a reduction in this image difference increases the SNR.
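As an illustration, the nonlocal weights and the discrete operators (27) can be sketched in NumPy as below. This is a deliberately naive, slow sketch: the Gaussian kernel inside the patch distance is dropped, the patch and window radii and the filtering parameter are illustrative (not the paper's values), and boundaries are handled by reflection padding and periodic shifts for brevity.

```python
import numpy as np

def nl_weights(u, patch_radius=2, search_radius=5, h=0.1):
    """Nonlocal weights w(x, y) of (29) (without the Gaussian kernel), stored per pixel
    over the offsets of a square search window."""
    H, W = u.shape
    p, s = patch_radius, search_radius
    upad = np.pad(u, p, mode="reflect")
    win = 2 * s + 1
    w = np.zeros((H, W, win, win))
    for i in range(H):
        for j in range(W):
            P0 = upad[i:i + 2 * p + 1, j:j + 2 * p + 1]        # patch around x
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        P1 = upad[ii:ii + 2 * p + 1, jj:jj + 2 * p + 1]  # patch around y
                        dist = np.sum((P0 - P1) ** 2)
                        w[i, j, di + s, dj + s] = np.exp(-dist / h ** 2)
    return w

def nl_grad(u, w):
    """Discrete nonlocal gradient of (27): (grad_w u)(x, y) = (u(y) - u(x)) sqrt(w(x, y))."""
    s = w.shape[2] // 2
    g = np.zeros_like(w)
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            u_y = np.roll(np.roll(u, -di, axis=0), -dj, axis=1)   # u at y = x + (di, dj)
            g[:, :, s + di, s + dj] = (u_y - u) * np.sqrt(w[:, :, s + di, s + dj])
    return g

def nl_grad_adj(r, w):
    """Adjoint of nl_grad (equal to -div_w for symmetric weights):
    (grad_w^T r)(x) = sum_y [ sqrt(w(y, x)) r(y, x) - sqrt(w(x, y)) r(x, y) ]."""
    s = w.shape[2] // 2
    out = np.zeros(w.shape[:2])
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            give = np.sqrt(w[:, :, s + di, s + dj]) * r[:, :, s + di, s + dj]
            take = np.sqrt(w[:, :, s - di, s - dj]) * r[:, :, s - di, s - dj]
            # bring the contribution of pixel y = x + (di, dj) back to x (periodic shift)
            out += np.roll(np.roll(take, -di, axis=0), -dj, axis=1) - give
    return out
```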

3. Proposed Work

The proposed work jointly minimizes a linear combination of a nonlocal total variation term and a least-squares data-fitting term to reconstruct the MR image from undersampled data. The main aim is to solve the compressed sensing MRI problem (2) using the Split Bregman algorithm and the nonlocal total variation. In this work, the nonlocal total variation is taken as the $L_1$-regularization functional and solved using Split Bregman iteration. Recall (2):

$$\min_u J(u) \quad \text{subject to} \quad \|Au - b\|_2^2 \le \sigma^2. \tag{30}$$

Using the Bregman iteration method, (30) can be reduced to a sequence of unconstrained problems of the form

$$u^{k+1} = \arg\min_u J(u) + \frac{\mu}{2}\|Au - b^k\|_2^2, \tag{31}$$

$$b^{k+1} = b^k + b - Au^{k+1}, \tag{32}$$

where $J(u)$ represents the $L_1$-regularization term. In order to proceed further, the selection of the regularization method is important. Here, we choose the nonlocal total variation (NLTV) as the regularizer; that is,

$$J(u) = \|\nabla_w u\|_1 = \sum_{x} |\nabla_w u|(x). \tag{33}$$

Now, (31) becomes

$$u^{k+1} = \arg\min_u \|\nabla_w u\|_1 + \frac{\mu}{2}\|Au - b^k\|_2^2, \tag{34}$$

where $b^k$ is updated through (32). We can write (34) as follows by introducing an auxiliary variable $d$ in place of $\nabla_w u$:

$$\min_{u, d} \|d\|_1 + \frac{\mu}{2}\|Au - b^k\|_2^2 \quad \text{subject to} \quad d = \nabla_w u. \tag{35}$$

Equation (35) can be converted into unconstrained form by using the quadratic penalty method:

$$\min_{u, d} \|d\|_1 + \frac{\mu}{2}\|Au - b^k\|_2^2 + \frac{\lambda}{2}\|d - \nabla_w u\|_2^2. \tag{36}$$

Using the split Bregman method, (36) can be transformed into the following forms:

$$(u^{k+1}, d^{k+1}) = \arg\min_{u, d} \|d\|_1 + \frac{\mu}{2}\|Au - b^k\|_2^2 + \frac{\lambda}{2}\|d - \nabla_w u - v^k\|_2^2, \tag{37}$$

$$v^{k+1} = v^k + \nabla_w u^{k+1} - d^{k+1}. \tag{38}$$

Equation (37) is convex and can be minimized by alternately solving the following two minimization subproblems with respect to $u$ and $d$:

$$u^{k+1} = \arg\min_u \frac{\mu}{2}\|Au - b^k\|_2^2 + \frac{\lambda}{2}\|d^k - \nabla_w u - v^k\|_2^2, \tag{39}$$

$$d^{k+1} = \arg\min_d \|d\|_1 + \frac{\lambda}{2}\|d - \nabla_w u^{k+1} - v^k\|_2^2. \tag{40}$$

By direct computation, the optimality condition of (39) is

$$\big(\mu A^T A + \lambda \nabla_w^T \nabla_w\big)\, u^{k+1} = \mu A^T b^k + \lambda \nabla_w^T\big(d^k - v^k\big). \tag{41}$$

We use the Gauss–Seidel iteration to get a fast solution of (41); the discrete solution can be represented as

$$u^{k+1} = \big(\mu A^T A + \lambda \nabla_w^T \nabla_w\big)^{-1}\big(\mu A^T b^k + \lambda \nabla_w^T(d^k - v^k)\big). \tag{42}$$

As explained in the introduction, $A$ is a partial Fourier matrix ($A = PF$), where $P$ is a row selector matrix formed from rows of the identity matrix and $F$ is the (unitary) discrete Fourier matrix. Using the identity $A^T A = F^{-1} P^T P F$, (42) becomes

$$u^{k+1} = \big(\mu F^{-1} P^T P F + \lambda \nabla_w^T \nabla_w\big)^{-1}\big(\mu F^{-1} P^T b^k + \lambda \nabla_w^T(d^k - v^k)\big), \tag{43}$$

in which $P^T P$ is a diagonal $k$-space mask, so the data term can be evaluated with fast Fourier transforms. The discrete solution of (40) can be written, using the shrinkage operator (20), as

$$d^{k+1} = \mathrm{shrink}\big(\nabla_w u^{k+1} + v^k, \tfrac{1}{\lambda}\big). \tag{44}$$

Finally, the Bregman variable is updated as

$$v^{k+1} = v^k + \nabla_w u^{k+1} - d^{k+1}. \tag{45}$$

The proposed method is summarized in Algorithm 2.

Initialize: $u^0$ and $d^0 = v^0 = 0$
while $\|u^{k} - u^{k-1}\|_2 > \mathrm{tol}$ do
 Compute $u^{k+1}$ based on (43)
 Compute $d^{k+1}$ based on (44)
 Update $v^{k+1}$ based on (45)
end while
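A minimal sketch of one possible realization of Algorithm 2 is given below, reusing `partial_fourier`, `partial_fourier_adjoint`, `shrink`, `nl_weights`, `nl_grad`, and `nl_grad_adj` from the earlier sketches. Two loudly flagged assumptions: the $u$-subproblem (39) is handled here with a few explicit gradient steps rather than the Gauss–Seidel sweep used by the paper, and all parameter values, the zero-filled initialization, and the choice to build the weights from that initialization are illustrative.

```python
import numpy as np

def nltv_sb_recon(b, mask, mu=1.0, lam=1.0, n_outer=30, n_grad=5, step=0.5):
    """Illustrative NLTV split Bregman reconstruction loop (Algorithm 2, sketch)."""
    shape = mask.shape
    u = np.real(partial_fourier_adjoint(b, mask, shape))   # zero-filled starting image
    w = nl_weights(u)                                      # weights from the observed data
    d = np.zeros_like(nl_grad(u, w))
    v = np.zeros_like(d)
    for _ in range(n_outer):
        # u-subproblem (39): gradient of (mu/2)||Au-b||^2 + (lam/2)||d - grad_w u - v||^2
        for _ in range(n_grad):
            resid = partial_fourier(u, mask) - b
            grad = mu * np.real(partial_fourier_adjoint(resid, mask, shape)) \
                 + lam * nl_grad_adj(nl_grad(u, w) + v - d, w)
            u = u - step * grad
        # d-subproblem (40), solved in closed form by the shrinkage formula (44)
        d = shrink(nl_grad(u, w) + v, 1.0 / lam)
        # Bregman variable update (45)
        v = v + nl_grad(u, w) - d
    return u
```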

4. Evaluation of Image Quality

In this work, a detailed evaluation study has been carried out on the reconstruction of MR images that represent varying degrees of object structural complexity. Even though algorithms based on regularization techniques effectively remove streak artifacts, other aspects of image quality should also be analyzed. To address this, a number of image quality evaluations are performed at different levels, including qualitative visualization-based evaluation and quantitative metric-based evaluation.

4.1. Qualitative Visualization-Based Evaluation

In qualitative visualization-based evaluation, the reconstructed images obtained with the different algorithms are visually compared with the reference image.

4.2. Quantitative Metric-Based Evaluation

Besides the visualization-based evaluation, the similarity between the reconstructed and reference images is quantitatively assessed by means of four measures: signal-to-noise ratio (SNR), relative error (RE), structural similarity index (SSIM), and feature similarity index (FSIM). SNR and RE are widely used for measuring reconstruction accuracy, while SSIM and FSIM are used for evaluating the similarity between the reconstructed and reference images.

4.2.1. Signal-to-Noise Ratio (SNR)

The SNR is defined as

$$\mathrm{SNR} = 10 \log_{10} \frac{\|u_0 - \bar{u}_0\|_2^2}{\|u - u_0\|_2^2}, \tag{46}$$

where $u_0$ is the reference image, $\bar{u}_0$ is the mean intensity value of $u_0$, and $u$ is the reconstructed image.

4.2.2. Relative Error (RE)

The relative error is defined as

$$\mathrm{RE} = \frac{\|u - u_0\|_2}{\|u_0\|_2}. \tag{47}$$
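Both accuracy measures translate directly into code; the following sketch simply evaluates (46) and (47) for two arrays of equal shape.

```python
import numpy as np

def snr_db(ref, rec):
    """SNR in dB as in (46): reference variance over squared reconstruction error."""
    num = np.sum((ref - ref.mean()) ** 2)
    den = np.sum((rec - ref) ** 2)
    return 10.0 * np.log10(num / den)

def relative_error(ref, rec):
    """Relative error of (47): ||rec - ref||_2 / ||ref||_2."""
    return np.linalg.norm(rec - ref) / np.linalg.norm(ref)
```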

4.2.3. Structural Similarity Index (SSIM)

The SNR measurement gives a numerical value of the damage, but it does not describe its type. Moreover, as noted in [36, 37], it does not quite represent the quality perceived by human observers. For medical imaging applications, where degraded images must eventually be examined by experts, traditional evaluation remains insufficient. For this reason, objective approaches are needed to assess medical imaging quality. We therefore also use a paradigm for estimating the quality of medical images based on the assumption that the human visual system (HVS) is highly adapted to extract structural information. The similarity index compares the brightness $l(x, y)$, contrast $c(x, y)$, and structure $s(x, y)$ between each pair of vectors, where the SSIM index between two signals $x$ and $y$ is given by the following expression [38, 39]:

$$\mathrm{SSIM}(x, y) = f\big(l(x, y), c(x, y), s(x, y)\big). \tag{48}$$

The comparison of brightness is determined by the following expression:

$$l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \tag{49}$$

where the average intensity of signal $x$ is given by $\mu_x = \frac{1}{N}\sum_{i=1}^{N} x_i$, the constant $C_1 = (K_1 L)^2$ with $K_1 \ll 1$, and $L$ is the dynamic range of the pixel values (255 for an image coded on 8 bits). The function of contrast comparison takes the following form:

$$c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \tag{50}$$

where $\sigma_x$ is the standard deviation of the original signal $x$, $\sigma_x = \big(\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \mu_x)^2\big)^{1/2}$, and the constant $C_2 = (K_2 L)^2$ with $K_2 \ll 1$.

The function of structure comparison is defined as follows:

$$s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}, \tag{51}$$

where $\sigma_{xy} = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \mu_x)(y_i - \mu_y)$ and $C_3 = C_2 / 2$.

Then, the expression of the structural similarity index becomes

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}. \tag{52}$$
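A sketch of (52) evaluated globally over two images is shown below. Note that standard SSIM implementations average this quantity over local sliding windows, which is omitted here, and that $K_1 = 0.01$, $K_2 = 0.03$ are commonly used defaults rather than values stated in this paper.

```python
import numpy as np

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    """Single-window SSIM of (52); the usual sliding-window averaging is omitted."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)
    cov_xy = np.sum((x - mu_x) * (y - mu_y)) / (x.size - 1)
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```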

4.2.4. Feature Similarity Index (FSIM)

The SSIM index moved image quality assessment (IQA) from a pixel-based stage to a structure-based stage. The human visual system (HVS) understands an image mainly based on its low-level features: primarily the phase congruency (PC), which is a dimensionless measure of the significance of a local structure. PC is used as the primary feature in FSIM [40]. The secondary feature used in FSIM is the image gradient magnitude (GM). In order to find the feature similarity between two images $f_1$ and $f_2$, the above mentioned features PC and GM have to be calculated first.

Let $PC_1$, $PC_2$, $G_1$, and $G_2$ be the phase congruency and gradient magnitude maps of images $f_1$ and $f_2$, respectively. Initially, the feature similarity measurement between $f_1$ and $f_2$ is separated into two components. The first similarity measure is based on $PC_1$ and $PC_2$ and is defined as

$$S_{PC}(x) = \frac{2\,PC_1(x)\,PC_2(x) + T_1}{PC_1^2(x) + PC_2^2(x) + T_1}, \tag{53}$$

where $T_1$ is a positive constant which increases the stability of $S_{PC}$; the value of $T_1$ depends on the dynamic range of PC. Similarly, the similarity measure based on the GM values $G_1$ and $G_2$ is defined as

$$S_{G}(x) = \frac{2\,G_1(x)\,G_2(x) + T_2}{G_1^2(x) + G_2^2(x) + T_2}, \tag{54}$$

where $T_2$ is a positive constant which depends on the dynamic range of the GM values.

The next step is to combine $S_{PC}(x)$ and $S_{G}(x)$ to get the similarity measure $S_L(x)$ between $f_1$ and $f_2$, defined as

$$S_L(x) = \big[S_{PC}(x)\big]^{\alpha}\,\big[S_{G}(x)\big]^{\beta}. \tag{55}$$

The relative importance of the PC and GM features can be adjusted by means of the parameters $\alpha$ and $\beta$. For simplicity, set $\alpha = \beta = 1$; then (55) becomes

$$S_L(x) = S_{PC}(x)\,S_{G}(x), \tag{56}$$

where $S_L(x)$ represents the similarity at each location $x$, and the overall similarity should be found. For a given location $x$, if either $f_1$ or $f_2$ has a significant PC value, this position $x$ will have a high impact on the HVS in evaluating the similarity between $f_1$ and $f_2$. Therefore, a term $PC_m(x) = \max\big(PC_1(x), PC_2(x)\big)$ is introduced to weight the importance of $S_L(x)$ in the overall similarity between $f_1$ and $f_2$. Accordingly, the FSIM index can be defined as

$$\mathrm{FSIM} = \frac{\sum_{x \in \Omega} S_L(x)\,PC_m(x)}{\sum_{x \in \Omega} PC_m(x)}, \tag{57}$$

where $\Omega$ denotes the whole image spatial domain.
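Given precomputed PC and GM maps, the combination in (53)–(57) is a few lines of code; computing the phase congruency transform itself is outside this sketch. The constants $T_1$ and $T_2$ below are illustrative placeholders, not values taken from this paper.

```python
import numpy as np

def fsim_from_maps(pc1, pc2, g1, g2, T1=0.85, T2=160.0):
    """FSIM index of (57) from precomputed phase-congruency and gradient-magnitude maps."""
    s_pc = (2 * pc1 * pc2 + T1) / (pc1 ** 2 + pc2 ** 2 + T1)   # (53)
    s_g = (2 * g1 * g2 + T2) / (g1 ** 2 + g2 ** 2 + T2)        # (54)
    pc_m = np.maximum(pc1, pc2)                                # weighting term
    return np.sum(s_pc * s_g * pc_m) / np.sum(pc_m)            # (56)-(57)
```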

5. Experiments and Numerical Results

The experimental setup used in previous works [5–7] is explained here. Suppose that an MR image has $n$ pixels and that the partial Fourier transform in (3) consists of $m$ rows of an $n \times n$ matrix corresponding to the full 2D discrete Fourier transform. The selected rows correspond to the acquired data $b$. The sampling ratio is defined as $m/n$. The scanning time is shorter if the sampling ratio is smaller. In the $k$-space, we randomly choose more samples in low frequencies and fewer samples in higher frequencies. This sampling scheme has been widely used for compressed MR image reconstruction, and similar schemes are utilized in [4–7]. Practically, the sampling scheme and speed in MR imaging also depend on physical and physiological limitations [4]. In the proposed work, the compressive sensing matrix is $A = PF$, where $P$ is a row selector matrix and $F$ is the Fourier transform matrix. For an image with $n$ pixels, we randomly choose $m$ coefficients; then $P$ is a sampling matrix of size $m \times n$. All experiments are conducted on a PC with an Intel Core i7-2670, 2.2 GHz CPU in a MATLAB environment. The result of the proposed work is compared with existing methods such as TVCMRI [5], RecPF [6], CSA and FCSA [7], and SB-TV [8]. The observation measurement is synthesized as $b = Au + \eta$, where $\eta$ is the noise, $A$ and $b$ are given as inputs, and $u$ is the unknown target image. In this work, we considered two sets of observations: one contaminated with Gaussian noise and the other contaminated with Rician noise of noise level 5%.
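A variable-density random mask of the kind described above can be sketched as follows. The radial decay profile, the probability floor, and the parameter values are illustrative choices made here; the cited papers use their own variable-density schemes.

```python
import numpy as np

def variable_density_mask(shape, ratio=0.2, decay=3.0, seed=0):
    """Random k-space mask that samples low frequencies more densely than high ones."""
    rng = np.random.default_rng(seed)
    H, W = shape
    yy, xx = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    r = np.sqrt(xx ** 2 + yy ** 2)
    pdf = (1 - r / r.max()) ** decay + 1e-3        # higher probability near the centre
    pdf *= ratio * H * W / pdf.sum()               # scale so the expected ratio matches
    mask = rng.random(shape) < np.clip(pdf, 0, 1)
    return np.fft.ifftshift(mask)                  # put the k-space centre at index (0, 0)
```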

The proposed and existing algorithms are tested using four 2D MR images: brain, chest, artery, and cardiac images, respectively, as shown in Figure 1. They have been used for validation in [7]. For convenience, all the images have the same size. The sampling ratio is set to be approximately 20%. To perform the comparisons, all methods run 50 iterations, except SB-TV and the proposed method: SB-TV completes reconstruction in 35 iterations, while the proposed method takes only 30 iterations. The proposed method provides the best visual effects on all MR images. Figures 2, 3, 4, and 5 show the visual comparisons of the results reconstructed from the Gaussian noisy observations by the different methods for the brain, chest, artery, and cardiac images, respectively. Figure 10 gives the performance comparison between the different methods in terms of CPU time versus SNR. Table 1 shows the average values of CPU time, SNR, RE, SSIM, and FSIM of the different methods in the Gaussian noise case. Figures 6, 7, 8, and 9 show the visual comparisons of the results reconstructed from the Rician noisy observations by the different methods for the brain, chest, artery, and cardiac images, respectively. Figure 11 gives the performance comparison between the different methods in terms of CPU time versus SNR. Table 2 shows the average values of CPU time, SNR, RE, SSIM, and FSIM of the different methods in the Rician noise case. In both cases, the reconstructed results produced by the proposed method are better than those produced by TVCMRI, RecPF, CSA, FCSA, and SB-TV. The reconstruction performance of the proposed work is the best in terms of both computational complexity and reconstruction accuracy, which clearly demonstrates the efficiency of this method for compressed MR image reconstruction.

6. Conclusion

An efficient algorithm has been proposed for compressed MR image reconstruction based on the nonlocal total variation and the Split Bregman method. The algorithm effectively solves an NLTV-based $L_1$-norm term by using the Split Bregman method. NLTV can effectively avoid the blocky artifacts caused by traditional TV regularization. Numerous experiments were conducted to compare the different reconstruction methods. The results of our study indicate that the proposed method outperforms the classic methods in terms of both accuracy and complexity.

Acknowledgment

The authors would like to thank the Government of Canada for the financial support through the Commonwealth Scholarship 2011-12 for this research work.