#### Abstract

A sparse-representation-based direct minimum L^p-norm algorithm is proposed for two-dimensional MRI phase unwrapping. First, the algorithm converts the weighted L^p-norm-minimization-based phase unwrapping problem into a linear system whose system (coefficient) matrix is large and symmetric. Then, the coefficient matrix is represented in a sparse structure. Finally, standard direct solvers are employed to solve this linear system. Several wrapped phase datasets, including simulated and MR data, were used to evaluate the algorithm's performance. The results demonstrate that the proposed algorithm is reliable and robust for unwrapping MRI phase data.

#### 1. Introduction

From the MRI complex data, the phase can be extracted only within a restricted interval; that is, the phase value is wrapped. We call it the wrapped phase or the principal value. The relationship between the wrapped phase ψ and its corresponding true phase φ can be described by ψ = W{φ} = φ + 2kπ, where k is an integer and W{·} is the wrapping operator, forcing the value of its argument inside the curly braces into the range (−π, π] by adding or subtracting an integral multiple of 2π radians. However, what is needed is the true, unknown phase φ, because this relates to certain properties of interest, such as the velocity of moving spins, the main-field inhomogeneity, and magnetic susceptibility variations. Given the wrapped phase, phase unwrapping is applied to restore the true phase, obtaining the unwrapped estimate. This technique is an important tool in many MRI applications, for example, three-point Dixon water and fat separation [1], MR venography [2, 3], motion tracking in tagged cardiac MRI [4], and field mapping in EPI [5].
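As a concrete illustration of the wrapping relation above, here is a minimal NumPy sketch; the function name and the (−π, π] principal interval follow the text, but the specific formula is one standard way to implement W{·}, not the authors' code.

```python
import numpy as np

def wrap(phi):
    """Wrapping operator W{.}: add or subtract an integral multiple of 2*pi
    so the result lands in the principal interval (-pi, pi]."""
    return phi - 2.0 * np.pi * np.ceil((phi - np.pi) / (2.0 * np.pi))

true_phase = 3.5 * np.pi                 # hypothetical true phase
psi = wrap(true_phase)                   # wrapped (principal) value
k = (true_phase - psi) / (2.0 * np.pi)   # integer wrap count in phi = psi + 2*pi*k
```

Phase unwrapping is the inverse task: recovering the integer k at every pixel given only the wrapped values.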

Should the wrapped phases have no inconsistencies, phase unwrapping is merely a matter of integrating the phase gradients over a path that covers the whole domain of interest. This process is quite simple and path independent: no matter which path is followed, the results will be the same up to a constant offset. In practice, however, there are always inconsistencies owing to the presence of noise, undersampling, and/or object discontinuities. Consequently, phase unwrapping becomes intractable and path dependent. An inconsistency is usually called a “residue” and is detected by summing the wrapped phase gradients around each closed elementary loop in the two-dimensional (2D) array, as shown in Figure 1; a nonzero sum marks a residue.

In the literature, numerous two-dimensional (2D) phase unwrapping approaches have been developed. They can be classified into four categories: path-following [6–11], minimum-norm [12–18], Bayesian/regularization [19, 20], and parametric modeling [21] methods. Our method belongs to the second category; thus, we briefly introduce the minimum-norm methods.

The minimum-norm methods estimate the unwrapped phases by minimizing the L^p-norm of the differences between the gradients of the wrapped and the unwrapped phases. With p = 2, this results in least-squares algorithms, which employ FFT/DCT transforms [18, 22] and/or iterative techniques [22] to approximate the least-squares solution. The exact least-squares solution is obtained by applying network programming techniques in [15]. Nevertheless, least-squares minimization tends to smooth discontinuities, unless these discontinuities are supplied in advance as binary weights. The minimum L^p-norm algorithms [13, 23] with p < 2 have a better ability than the L^2-norm ones to preserve discontinuities. As p approaches zero, the minimum L^p-norm method tends to obtain a more reliable result [24, 25]; thus, the L^0-norm minimization is widely accepted as the most desirable in practice. However, L^0-norm minimization is a nondeterministic polynomial-time hard (NP-hard) problem [14], with only approximate, not exact, solutions developed in [12, 14]. The conventional minimum L^p-norm method [12] converts the L^p-norm minimization problem into a generalized matrix equation with a flexible choice of p. It yields more accurate solutions than other methods, except in cases where some residues lie closer to the periphery than to other residues of opposite polarity. Additionally, because it is implemented in a dual-iterative structure (an inner iteration embedded in an outer one), it can be computationally intensive.

In this work, a sparse-representation-based direct minimum L^p-norm algorithm is proposed. It introduces user-defined weights into the generalized matrix equation of the conventional minimum L^p-norm method to improve phase unwrapping performance, because, in this way, the discontinuities in the unwrapped phase surface can be confined to the low-quality or zero-weight regions [22]. In addition, the algorithm exploits sparse representations of the matrices in the modified matrix equation, together with direct solvers, to significantly reduce the computational time of the conventional minimum L^p-norm method for MRI phase unwrapping.

#### 2. Materials and Methods

##### 2.1. Mathematics Foundation

In general, for an M × N 2D wrapped phase array, the weighted minimum L^p-norm phase unwrapping problem is expressed as minimizing the weighted sum of the pth powers of the differences between the unwrapped and the wrapped phase gradients [14, 17], where the user-defined weights for the corresponding differences indicate where the phase values are more reliable than others. The weights are derived from the local statistics of the wrapped phase gradients: for each pixel, sums are taken over a window centered at that pixel, using the wrapped phase gradients within the window and their averages. The resulting weight is relatively higher in areas where the phase changes smoothly.
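The display equations were lost from this passage; for reference, the weighted minimum L^p-norm objective of [14, 17] has the following standard form (the symbols here are reconstructed and are not necessarily the authors' exact notation):

```latex
\min_{\phi}\;
\sum_{i=1}^{M-1}\sum_{j=1}^{N}
  w^{x}_{i,j}\,\bigl|\,\phi_{i+1,j}-\phi_{i,j}-\Delta^{x}_{i,j}\,\bigr|^{p}
\;+\;
\sum_{i=1}^{M}\sum_{j=1}^{N-1}
  w^{y}_{i,j}\,\bigl|\,\phi_{i,j+1}-\phi_{i,j}-\Delta^{y}_{i,j}\,\bigr|^{p}
```

where Δ^x and Δ^y denote the wrapped phase gradients along the two directions and w^x, w^y the user-defined weights.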

Through a derivation analogous to that in [12, 26], the minimum L^p-norm solution of (3) eventually entails the solution of the linear system of equations (6).

That is to say, when the weighted minimum L^p-norm phase unwrapping solution is desired, it can be simpler to find the solution of (6) instead.

Because the values of the data-dependent weights are set empirically, the weights are always normalized to the interval [0, 1] in practice. Hence, (7) and (8) are modified to incorporate this constraint. To prevent the weights from becoming extremely large or even infinite, the normalization in (10) includes a small threshold; setting this threshold to 0.01 radians is a compromise between accuracy and efficiency [12, 22].

##### 2.2. Sparse-Representation-Based Direct Minimum L^p-Norm Algorithm

###### 2.2.1. Sparse Representation of the Objective

The estimated phase distribution is shown in Figure 1. To simplify the algorithm, concatenating the columns of the M × N phase matrix yields a vector of length MN, as in (11). The superscript T refers to a matrix transpose.

The wrapped phase differences along the two directions are also matrices, of sizes (M − 1) × N and M × (N − 1), respectively. As above, we concatenate the columns of each matrix and then merge the two resulting vectors vertically into a single array, whose length is (M − 1)N + M(N − 1) = 2MN − M − N.
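The column stacking just described can be sketched in a few lines (illustrative names, not the paper's notation):

```python
import numpy as np

# Column-stack the M-by-N phase matrix into a length-MN vector, and stack the
# two gradient arrays into a single right-hand-side vector.
M, N = 3, 4
phi = np.arange(M * N, dtype=float).reshape(M, N)

phi_vec = phi.flatten(order="F")     # order="F" concatenates the columns

dx = np.diff(phi, axis=0)            # (M-1) x N differences within columns
dy = np.diff(phi, axis=1)            # M x (N-1) differences across columns
c = np.concatenate([dx.flatten(order="F"), dy.flatten(order="F")])
```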

Analogous to the matrix equation used in the weighted least-squares phase unwrapping algorithm [18], the system of equations defined by (6) can be represented in the matrix form (13), whose system matrix is given by (14).

The difference matrix consists of an upper partition, which forms the differences within each column of the phase array, and a lower partition, which forms the differences across columns; both partitions are built from a first-difference matrix and identity matrices.

To make the expansion of (13) equal to (6), the weight matrix is constructed as a diagonal matrix whose main diagonal holds the weights in order. It is easy to see that this weight matrix is square, with dimension 2MN − M − N.
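A hedged SciPy sketch of how such a sparse difference operator and the symmetric weighted system matrix can be assembled; the Kronecker-product construction is an assumption consistent with the partition structure described above, not the authors' exact definition.

```python
import numpy as np
from scipy import sparse

def first_difference_operator(M, N):
    """Sparse operator acting on the column-stacked M*N phase vector:
    the upper block differences within columns, the lower block across them."""
    # (n-1) x n bidiagonal first-difference matrix
    D = lambda n: sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1],
                               shape=(n - 1, n))
    A_vert = sparse.kron(sparse.eye(N), D(M))   # differences within each column
    A_horz = sparse.kron(D(N), sparse.eye(M))   # differences across columns
    return sparse.vstack([A_vert, A_horz]).tocsr()

M, N = 4, 5
A = first_difference_operator(M, N)
w = np.random.default_rng(0).random(A.shape[0]) + 0.01   # positive weights
Q = (A.T @ sparse.diags(w) @ A).tocsc()                  # symmetric system matrix
```

By construction the system matrix is symmetric, which is what the direct solvers in the next subsection exploit.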

If these foregoing matrices are stored and operated on using a standard dense matrix structure, they will consume a significant amount of memory and become infeasible on standard computers. For instance, for a typical phase image, the numbers of entries of these matrices are very large (exceeding 2^30), obviously too many to be stored and manipulated in a program. Fortunately, all these matrices are sparse and can be represented with a sparse structure. We store only the nonzero entries of a matrix together with their indexes. That is, a 3-tuple (r, c, v) is used to uniquely identify a nonzero entry of the sparse matrix [27, 28], where r and c are the row and column indexes, respectively, and v denotes the value of the nonzero entry located at (r, c). Moreover, to make subsequent computation more efficient, the nonzero entries are ordered first by column and then by row. Finally, the size of the original sparse matrix should also be stored.
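The 3-tuple scheme corresponds to the standard triplet (COO) sparse format; a toy SciPy illustration (the matrix here is a stand-in, not one of the paper's matrices):

```python
import numpy as np
from scipy import sparse

dense = np.array([[4.0, 0.0, 0.0],
                  [0.0, 0.0, 2.0],
                  [1.0, 0.0, 3.0]])

coo = sparse.coo_matrix(dense)   # stores only (row, col, value) triples
csc = coo.tocsc()                # entries ordered by column, then row,
                                 # which speeds up later arithmetic
```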

It should be noted that, henceforth, these matrices will still be referred to as the original sparse matrices but will be stored and operated on in the corresponding sparse structures.

###### 2.2.2. Direct Solver

Equation (14) implies that the coefficient matrix in (13) is a large, sparse, real, symmetric matrix; thus, (13) is a linear system of equations with a large sparse symmetric coefficient matrix. Considerable effort has been devoted to solving such systems over the past three decades [29]. Among the available algorithms, the direct solvers, which rely on an explicit factorization of the coefficient matrix, are widely used owing to their generality and robustness. For a symmetric coefficient matrix in particular, direct solvers prefer the LDL^T factorization [30], a variant of Gaussian elimination that can also be considered an alternative form of the Cholesky factorization without extracting square roots [31]. Under this factorization, the matrix is decomposed into a lower triangular matrix L, a block-diagonal matrix D with 1 × 1 and 2 × 2 blocks, and the transpose of L.

Then, given the right-hand side, the estimated phases in (13) can be obtained through a forward elimination followed by a backward substitution [28]. The time complexity is O((MN)^1.5) and the memory complexity is O(MN log(MN)). For more details of the direct solvers, please see [32].
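A small illustration of the factor-then-substitute workflow: SciPy's dense `scipy.linalg.ldl` and the SuperLU sparse direct solver stand in for production packages such as CHOLMOD or MA57, and the 3 × 3 matrix is a toy stand-in for the coefficient matrix.

```python
import numpy as np
from scipy import sparse
from scipy.linalg import ldl                 # dense LDL^T, for illustration
from scipy.sparse.linalg import splu         # sparse direct solver (SuperLU)

# Toy symmetric positive definite system.
Q = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
L, D, perm = ldl(Q)                          # Q = L @ D @ L.T

b = np.array([1.0, 2.0, 3.0])
x = splu(sparse.csc_matrix(Q)).solve(b)      # factorize, then forward/back substitution
```

Sparse LDL^T packages add fill-reducing orderings on top of this basic scheme, which is where the stated O((MN)^1.5) time and O(MN log(MN)) memory bounds come from.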

###### 2.2.3. Implementation of the Algorithm

During the last two decades, a number of software packages that implement direct solvers have been developed [33]. If the coefficient matrix is positive definite, CHOLMOD [34] is adopted because of its rapid computation and relatively small memory demand. If not, MA57 [35] is strongly recommended.

To accelerate convergence, the termination condition is that the distinctions contain no residues [12]. First, the distinction between the wrapped phase and the estimated phase is defined by (20).

Then, we treat these distinctions in the same way as phase data. Based on the theory that any wrapped phase image can be uniquely unwrapped (up to an arbitrary constant offset) if it contains no residues [36], we check whether the distinctions have residues. Given no residues, we unwrap the distinctions by a flood-fill algorithm [37, 38] with a centre start point and no branch cuts.

*Step 1. *Choose the start pixel at the centre location i = round(M/2) and j = round(N/2), where round(·) rounds its argument to the nearest integer. Its phase value is stored as an unwrapped phase value in the solution matrix. The four neighbouring pixels are unwrapped next, and their unwrapped phase values are placed in the solution matrix. These four pixels are inserted into the unwrapped list.

*Step 2. *Pick a pixel from the unwrapped list and then remove it from the list. Unwrap the phase values of its four neighbouring pixels, skipping pixels that have already been unwrapped. Insert the newly unwrapped pixels into the unwrapped list and put their unwrapped phase values in the solution matrix.

*Step 3. *Repeat Step 2 until the unwrapped list becomes empty.

This unwrapping yields the unwrapped distinction, which is added back to the estimated phase to obtain the final result, as in (21).

In summary, the detailed procedure of the phase unwrapping algorithm is described as follows.

*Step 1. *Set the iteration counter and the value of p, and initialize the estimated phases.

*Step 2. *Compute the weights from (10).

*Step 3. *Compute the matrices from (14) and (15); then solve (13) by the direct solver.

*Step 4. *Compute the distinction by (20) and check it for residues. If there are no residues, continue to Step 5. If the maximum number of iterations has been reached, end. Otherwise, increment the iteration counter and go to Step 2.

*Step 5. *Unwrap the distinction by the flood-fill algorithm, and form the final estimated phases by (21).

###### 2.2.4. Evaluation

The following weighted measure is used to evaluate the quality of an unwrapped solution: the weighted ratio of pixels at which the gradients of the unwrapped solution mismatch the wrapped phase gradients, which is also the quantity minimized by the algorithm. Therefore, a lower weighted measure indicates better performance.

#### 3. Results and Discussion

Several 2D datasets are used to evaluate the performance of the proposed algorithm. The actual number of iterations of the algorithm varies, depending on how many loops are needed before the distinctions for each example become residue free. The maximum number of iterations is set to 20.

The performance of the proposed algorithm is compared with the conventional minimum L^p-norm algorithm [12] and two other widely used 2D algorithms, PUMA [16] and PHUN [10]. Note that we choose the same value of p for the first two methods, because this norm minimization is well behaved and most desired in practice (explained in Section 1). For the PHUN method we display only the best result we could obtain in each example, because the behaviour of this algorithm is controlled by many parameters whose optimum values vary for each dataset. All these methods are implemented in MATLAB (MathWorks, Natick, MA) on a PC (Intel Core 2 Quad CPU, 2.39 GHz).

##### 3.1. Simulated Data

We begin with a phase dataset with one shear line located horizontally along the 13th row below the upper border. The wrapped phase image is shown in Figure 2(a). Figure 2(b), where the black border is added, shows its residues marked as black dots.

The proposed method made no unwrapping errors in this case, producing a solution (see Figure 2(c)) exactly the same as the true phase. While it obtained the smallest weighted measure for this example, it converged more slowly than the PUMA and PHUN methods (see Figure 6). The conventional minimum L^p-norm and PUMA methods both yielded extra short vertical shear lines, marked with red arrows, in the upper parts of their results (see Figures 2(d) and 2(e)). Figure 2(f) shows the solution achieved by the PHUN method, with five unexpected crooked shear lines and a black spot of very low-valued outliers (all marked with red arrows).

Figure 3(a) shows a wrapped phase shear image with Gaussian noise (the signal-to-noise ratio is 0.8379 dB). Its residue distribution is shown in Figure 3(b).

The result of the proposed algorithm in Figure 3(c) is composed of a two-planar surface tearing along the median shear line, as expected, except for some protrusions on this line where the phases are severely corrupted by both the noise and the object discontinuities. The conventional minimum L^p-norm method offers a slightly worse result, shown in Figure 3(d), where minor errors (marked with red arrows) occur near the top and bottom of the median shear line. Accordingly, the weighted measure values of the proposed and conventional minimum L^p-norm methods are almost the same (see Figure 6(a)); however, in this case the former converged much faster than the latter. The unwrapped phase image of the PUMA method in Figure 3(e) has some small-scale, anomalous “layer” artifacts around the median shear line. The PHUN method generated an incorrect unwrapped phase image, with the left half full of “layer” artifacts that propagated from the median line to the surrounding regions and even the image border.

In summary, these two simulation examples, both having object discontinuities along the shear line, demonstrate the discontinuity-preserving ability of the proposed algorithm. Additionally, the algorithm did not produce extra, undesired discontinuities appearing as shear lines or “layer” artifacts.

##### 3.2. MR Data

Figures 4(a) and 4(b) show the magnitudes and phases of a displacement-encoded MR heart dataset [39], respectively.

The results of the proposed, the conventional minimum L^p-norm, and the PUMA algorithms are shown in Figures 4(d), 4(e), and 4(f). There is no significant visible difference between these three images; in each, the rough shape of the heart (compare the magnitude picture in Figure 4(a)) is dimly visible. We then examine the corresponding discontinuity maps (with black borders added), which show the distribution of discontinuities: pixels (marked in black) that differ from a neighbouring pixel by more than π radians. These maps, in Figures 4(h), 4(i), and 4(j), show little difference, and the weighted measures of the three methods, depicted in Figure 6(a), are similar. The extremely fast PHUN algorithm offered a solution involving many black spots (see Figure 4(g)) that correspond roughly to the locations of the residues (see Figure 4(c)). It also created a large number of undesired discontinuities (see Figure 4(k)).

An MR head example [12] is shown in Figure 5. The proposed, the conventional minimum L^p-norm, and the PUMA algorithms all produced plausible unwrapped phase images, shown in Figures 5(c), 5(d), and 5(e). Moreover, the discontinuity maps (see Figures 5(g), 5(h), and 5(i)) and the weighted measures (see Figure 6(a)) of these three methods are almost the same. The PHUN method failed to unwrap this dataset correctly (see Figure 5(f)); its result has a large number of undesired discontinuities (see Figure 5(j)).

The weighted measures and execution times of all methods in the preceding four examples are compared in Figures 6(a) and 6(b), respectively. For better visual comparison, both ordinate axes use a logarithmic scale. The proposed algorithm achieved the smallest weighted measures in all four examples and significantly reduced the computational time compared with the conventional minimum L^p-norm method; however, it still converged more slowly than the PUMA and PHUN algorithms. From the comparison of execution times across examples, it can be concluded that the execution time of the algorithm depends partly on the size of the phase dataset. For example, the algorithm converged in 0.5 seconds for the heart dataset, while for the simulated dataset it took 31 seconds.

The final example comprises the transverse sections of a 5-slice MR knee dataset, as shown in Figure 7; all slices have the same size. The measurements were made with a 0.5 T MRI scanner (Ningbo Xingaoyi Co., Ltd., China).

Excellent results with no phase errors, shown in the third row of Figure 7, were obtained by the proposed algorithm. The conventional minimum L^p-norm algorithm worked well, except for generating two undesirable shear lines in the fourth slice. The PUMA algorithm yielded undesired results, with some shear lines, in the third and fourth slices but successfully unwrapped the other slices. The PHUN method made unwrapping errors in all slices: shear lines in the first, third, and fourth slices, and white patches of very high-valued outliers in the second and fifth slices. All the shear lines and outliers are marked with yellow and green arrows, respectively, in Figure 7.

As above, the weighted measures and execution times of all the methods for this multislice dataset are compared in Figures 8(a) and 8(b), respectively. The proposed method returned the smallest weighted measures for all slices. Additionally, it yielded these results within seconds, though still more slowly than the PUMA and PHUN methods.

#### 4. Conclusions

In this work, we developed a sparse-representation-based direct minimum L^p-norm algorithm for 2D phase unwrapping.

User-defined weights are introduced into the objective function to improve the discontinuity-preserving ability of the algorithm. Furthermore, sparse structures are used to represent the matrices involved in the objective function, accelerating the computation and decreasing the memory demand. Finally, the algorithm computes effectively and efficiently by employing direct solvers.

The proposed algorithm produces excellent, reliable results with a very small weighted measure; it even unwraps correctly phase images with large discontinuities running through the whole phase surface. Moreover, benefiting from the sparse representation and well-developed direct solvers, the method converges much faster than the conventional minimum L^p-norm method. However, it is still not fast enough, and further research can be devoted to reducing the execution time.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This work was supported in part by the 973 National Basic Research & Development Program of China (2010CB732502) and the National Science & Technology Pillar Program (2011BAI12B01).