Abstract

A pseudorandom, velocity-insensitive, volumetric $k$-space sampling trajectory is designed for use with balanced steady-state magnetic resonance imaging. Individual arcs are designed independently and do not fit together in the way that multishot spiral, radial, or echo-planar trajectories do. Previously, it was shown that a second-order cone optimization problem can be defined for each arc independently of the others, that nulling of zeroth and higher moments can be encoded as constraints, and that individual arcs can be optimized in seconds. For use in steady-state imaging, sampling duty cycles are predicted to exceed 95 percent. With such pseudorandom trajectories, aliasing caused by undersampling manifests as incoherent noise. In this paper, a genetic algorithm (GA) is formulated and numerically evaluated. A large set of arcs is designed using previous methods, and the GA chooses fit subsets of a given size, corresponding to a desired acquisition time. Numerical simulations of 1-second acquisitions show good detail and acceptable noise for large-volume imaging with 32 coils.

1. Introduction

Reconstruction of magnetic resonance images from data sampled along non-Cartesian trajectories has recently received increasingly sophisticated mathematical treatment, for example [1], with notable improvements in reconstruction speed and accuracy. The application of novel design techniques to non-Cartesian sampling trajectories, however, has received less attention.

In [2], a novel pseudorandom volumetric $k$-space trajectory design method was presented. This methodology, henceforth referred to as Durga, combines for the first time a number of ideas in trajectory design and general sampling design, including randomness [3, 4], constrained optimization [5], balancing trajectories for steady-state imaging [6–9], genetic algorithms [10], undersampling to trade acquisition time for (structured) noise [11–13], and target-oriented design rather than patterns of symmetric interleaving [14, 15]. By combining these ideas, Durga achieves significantly better efficiency, as measured by the sampling duty cycle of a balanced steady-state pulse sequence.

Flow-insensitive $k$-space trajectories are inherently more efficient than trajectories requiring a rewinder to balance first and possibly higher moments. Trajectories which sample three dimensions in $k$-space, like Durga, can further increase efficiency by not rewinding slice-select gradients, and by simply starting and stopping sampling offset from the center of $k$-space immediately after and before the excitation pulse. This is summarized in Table 1 by a comparison of spiral with a velocity-compensating rewinder [5], Teardrop [7], which incorporates in-plane velocity compensation into the readout, and Durga [2]. These numbers are relative to gradient peak/slew limits of 40 mT m$^{-1}$ and 150 T m$^{-1}$ s$^{-1}$ for spiral [5] and Durga [2], and 27 mT m$^{-1}$ and 72 T m$^{-1}$ s$^{-1}$ for Teardrop [7].

Volumetric imaging has several inherent advantages over thin-slice imaging, including isotropic resolution, reduced in-flow effects, and the ability to completely correct for distortions in gradients, main-field inhomogeneity, and eddy currents. It is also easier to completely compensate for velocity effects by nulling higher moments. Moment-nulled planar trajectories are often only effectively nulled in one or two dimensions, because they are used with slice-select and possibly phase-encoding gradients which are difficult to null.

In the case of rapid imaging, however, the most important property of sampling in more dimensions is the ability to undersample without introducing aliasing in the form of ghosts [11, 16]. Using Durga, one can essentially trade undersampling for unstructured noise on a continuous basis, breaking the dependence on Nyquist sampling. The trajectories used in the illustrations to reconstruct $256^3$ volumes are 33× undersampled in time relative to perfect Cartesian sampling (ignoring gradient limits, and considering only sampling bandwidth).
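The factor follows directly from the simulation parameters used below (a $256^3$ grid sampled at the 500 kHz receiver bandwidth of Section 2.3):
$$
\frac{256^3\ \text{samples}}{5\times10^{5}\ \text{samples/s}} \approx 33.6\ \text{s},
$$
so a 1-second acquisition is undersampled by roughly a factor of 33.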

This paper presents two innovations over [2]. (1) A genetic algorithm for choosing subsets of trajectory arcs (Figure 1) corresponding to a given $T_R$, each arc designed using one of the methods in [2]. This significantly increases the simplicity and flexibility of implementation and improves the quality of the solution, as measured by the point spread function. (2) Multicoil numerical simulations, including the effect of noise, showing low levels of aliasing artifacts with extreme undersampling. Although this sampling strategy is compatible with and benefits from multicoil imaging, the reduction in aliasing achievable with an iterative SENSE reconstruction [17] is much smaller than what could be expected based on experience with other trajectories.

2. Methods

2.1. Individual Trajectory Design

Sampling trajectories in 3D $k$-space are generated by solving a second-order cone optimization problem (SOCP), originally formulated in [2]. An SOCP is a minimization problem in which the variables are constrained to lie inside a set defined by quadratic functions of the variables. Putting a bound on the length of a vector is a common special case, and the one exploited in this trajectory design problem:

$$
\begin{aligned}
&\min_{k}\ \tau, && (1)\\
&k_0 = 0 = k_N, && (2)\\
&\left\| k_{i+1} - k_i \right\|_2 \le G_{\max}\,\delta_T, && (3)\\
&\left\| k_{i+2} - 2k_{i+1} + k_i \right\|_2 \le S_{\max}\,\delta_T^2, && (4)\\
&\left\| k_i \right\|_2 \le 1/R, && (5)\\
&\Bigl\| \textstyle\sum_{i=1}^{N} i\,(k_{i+1} - k_i) \Bigr\|_2 < \lambda_{\mathrm{flow}}\,\tau, && (6)\\
&\left\| k_{t_i} - G_i \right\|_2 < \begin{cases} \lambda_0\,\tau, & G_i = 0,\\ \lambda_{\mathrm{goal}}\,\tau, & \text{otherwise}, \end{cases} && (7)
\end{aligned}
$$

where the variables $k_i \in \mathbb{R}^3$, $i = 1, \ldots, N-1$, are the positions in $k$-space at time $i$; $\tau$ controls the deviation from the constraints (1); $\delta_T$ is the time step; $G_{\max}$ and $S_{\max}$ are peak (3) and slew (4) constraints on the gradients; $R$ is the maximum resolution used in the reconstruction (5); the sum in (6) models the phase error caused by constant flow (errors from nonconstant flow can be handled by adding norms of higher moments); $G_i$ are the targets near which the trajectory should pass (7) at times $t_i$, see Figure 2; and the $\lambda$'s determine the relative priorities placed on meeting the different goals.

In the trajectories designed for this paper, the goals $G_i$ are random points on the boundary sphere $|k| = 1/R$ or at $k = 0$. Multiple traversals of $k = 0$ are not optimal for sampling, but are included to facilitate calibration of gradient distortion and main-field inhomogeneity. This problem nulls the first moment at the center of the rf pulse. Higher moments, nulling at different points in the trajectory, or bounding the size of the moment with a tolerance could be encoded in the same way. The trajectories in this paper are designed to begin and end at $k = 0$, for steady-state imaging, but this is not a limitation of the method. See [2] for a detailed description of the software used to solve this optimization problem, and a discussion of variations in the design, including iterative designs in which goals are placed at low-density points of previous trajectories.
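For illustration, the arc design problem (1)–(7) can be prototyped with an off-the-shelf conic solver. The sketch below uses CVXPY rather than the SOCP C library used in [2]; the step counts, goal placement, and $\lambda$ weights are illustrative assumptions rather than the paper's values, and the strict inequalities in (6)–(7) are relaxed to non-strict ones.

```python
import numpy as np
import cvxpy as cp

# Illustrative parameters (assumptions, not the paper's exact values).
dT = 20e-6                      # time step delta_T [s]
N = 280                         # samples per arc: 280 * 20 us = 5.6 ms
gamma = 42.576e6                # gyromagnetic ratio [Hz/T]
Gmax = gamma * 40e-3 * dT       # peak-gradient bound per step [m^-1]   (3)
Smax = gamma * 150.0 * dT**2    # slew bound per step [m^-1]            (4)
R = 1e-3                        # resolution [m]; |k| <= 1/R            (5)
lam_flow, lam0, lam_goal = 1.0, 1.0, 1.0  # relative priorities (assumed)

# Goals: a random point on the boundary sphere, interleaved with k = 0.
rng = np.random.default_rng(0)
g = rng.standard_normal(3)
goals = [(N // 3, (g / np.linalg.norm(g)) / R), (2 * N // 3, np.zeros(3))]

k = cp.Variable((N + 1, 3))     # k_0 ... k_N
tau = cp.Variable(nonneg=True)  # slack controlling goal/flow violation

dk = k[1:] - k[:-1]                       # first differences ~ gradient
d2k = k[2:] - 2 * k[1:-1] + k[:-2]        # second differences ~ slew
w = np.arange(1, N + 1)                   # weights i for the first moment
constraints = [
    k[0] == 0, k[N] == 0,                 # balanced arc                 (2)
    cp.norm(dk, axis=1) <= Gmax,          # peak gradient                (3)
    cp.norm(d2k, axis=1) <= Smax,         # slew rate                    (4)
    cp.norm(k, axis=1) <= 1.0 / R,        # stay inside the sphere       (5)
    cp.norm(dk.T @ w) <= lam_flow * tau,  # first-moment (flow) nulling  (6)
]
for t, G in goals:                        # pass near the targets        (7)
    lam = lam0 if np.allclose(G, 0) else lam_goal
    constraints.append(cp.norm(k[t] - G) <= lam * tau)

cp.Problem(cp.Minimize(tau), constraints).solve()
print("tau =", tau.value)                 # small tau => goals nearly met
```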

Note that the constraint on the first moment will cause an optimal trajectory to traverse parts of $k$-space not near a shortest path between the goals $G_i$, which increases the coverage of $k$-space; however, there are no explicit objectives for coverage, intertrajectory spacing, or trajectory length. In [2], coverage is handled by pseudorandom assignment of goals from the vertices of a regular triangulation of the limit sphere in $k$-space.

2.2. Combining Sets of Trajectories

In this paper, coverage is assured by optimizing its effect on the point spread function (psf), which captures sampling artifacts (exactly for uniform coils, and approximately for surface coils). The ratio of the $L_2$ norm of the central point of the psf to the $L_2$ norm of the rest of the psf is maximized over subsets of a large fixed set of trajectories, each optimized as described above. Since the number of subsets of a fixed size grows factorially with the number of trajectories, only a small fraction can be tested. A genetic algorithm uses information about previous subsets to test only subsets likely to have better psfs.

This approach differs from formulations where the variables (referred to as chromosomes in genetic algorithms) are local or nonlocal control points on individual trajectories [10]. Such approaches lead to a much larger search space and a more expensive search step, since the solution of the SOCP is the most expensive part of the optimization. In the present method, one SOCP is solved per trajectory (and may be solved in parallel) before the GA search is applied.

Individual trajectories are 5.6 milliseconds long, including 0.1 millisecond for rf excitation, and are designed with gradient limits typical of a whole-body clinical imager (peak gradient of 40 mT m$^{-1}$ and gradient slew rate of 150 T m$^{-1}$ s$^{-1}$). For ease of comparison, we choose enough trajectories for 1 second of sampling (178 with the above parameters, since 178 × 5.6 ms ≈ 1.0 s).

As a first step, many trajectories are designed by randomly selecting goals on the boundary sphere (interleaved with fixed goals at $k = 0$). Of these, many are clearly less suitable than others. To filter out the least suitable, a small number are designed first to estimate approximate thresholds identifying the longest 20% of trajectories and the 20% that pass closest to their own goals. Only trajectories meeting these thresholds are used in the initial set.

This initial large set is then partitioned into 400 sets of the desired size (168) to seed the GA. These seed sets are then evolved through 45 generations, using chromosomes consisting of integer indices into the initial set. An additional 10 trajectories were added using "density threading" [2], whose aim is to increase sampling of voids along the $k_x$-$k_y$ plane by specifically creating trajectories that pass through areas of low sampling density. Computation times refer to a PowerMac G5 2.5, running a C program calling SOCP (http://www.stanford.edu/~boyd/socp) and the Genetic Algorithm Utility Library (http://gaul.sourceforge.net).
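A minimal sketch of this subset-selection GA is given below, assuming a fitness callable `psf_fitness(subset)` (see the next paragraphs) and simple truncation selection, union crossover, and point mutation; GAUL's actual operators and parameters differ.

```python
import numpy as np

def evolve_subsets(n_pool, subset_size, psf_fitness, rng,
                   pop_size=400, generations=45, p_mut=0.05):
    """Evolve fixed-size index subsets; a sketch, not GAUL's implementation."""
    # Seed population: random subsets of the filtered trajectory pool.
    pop = [rng.choice(n_pool, subset_size, replace=False)
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = np.array([psf_fitness(s) for s in pop])
        order = np.argsort(scores)[::-1]          # best first
        elite = [pop[i] for i in order[:pop_size // 2]]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.choice(len(elite), 2, replace=False)
            union = np.union1d(elite[a], elite[b])
            child = rng.choice(union, subset_size, replace=False)  # crossover
            if rng.random() < p_mut:              # point mutation: swap one
                out = rng.integers(subset_size)   # index for an unused one
                unused = np.setdiff1d(np.arange(n_pool), child)
                child[out] = rng.choice(unused)
            children.append(child)
        pop = elite + children
    return max(pop, key=psf_fitness)
```

With the sizes quoted above, this would be called as `evolve_subsets(4000, 168, psf_fitness, np.random.default_rng(0))`.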

The objective of the optimization is to improve the quality of the psf, as measured by Fourier transforming the (weighted) sampling density and dividing the central value of the psf by the total energy. The method of determining the density must be the same as that used in reconstructing the images, since different methods can produce different artifacts even with the same trajectories [18].
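A sketch of this figure of merit over a gridded (weighted) sampling density follows; in the GA sketch above, `psf_fitness(subset)` would accumulate the chosen arcs' samples into a density array and return `psf_metric(density)`. The ratio follows the formulation of Section 2.2 (central point of the psf versus the rest).

```python
import numpy as np

def psf_metric(density):
    """Central psf value over the L2 norm of the rest (higher is better).

    `density` is a (weighted) sampling-density array on a Cartesian
    k-space grid with DC at the center; the psf is its inverse FFT.
    """
    psf = np.fft.fftshift(np.fft.ifftn(np.fft.ifftshift(density)))
    center = tuple(n // 2 for n in psf.shape)
    peak = np.abs(psf[center])
    rest = np.sqrt(np.sum(np.abs(psf) ** 2) - peak ** 2)
    return peak / rest
```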

To calculate the fractional sampling, both in $k$-space and in time, nearest-neighbour resampling was used to compute a sampling density, and the number of nonzero elements in the $k$-space array was counted. For the 1-second trajectory set, 1.54% of the voxels in a $256^3$ $k$-space had nonzero values, giving one measure of undersampling. This corresponds to 51.7% of the number of samples that would be collected in 1 second with a receiver bandwidth of 500 kHz, giving a measure of efficiency relative to the (unrealizable) maximum sampling rate.
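Both percentages can be reproduced from a nearest-neighbour binning of the sample coordinates; a minimal sketch with illustrative names:

```python
import numpy as np

def undersampling_metrics(k_samples, R, n=256, t_acq=1.0, bw=500e3):
    """k_samples: (M, 3) k-space coordinates in m^-1, with |k| <= 1/R."""
    # Nearest-neighbour binning onto an n^3 Cartesian grid (DC at center).
    idx = np.clip(np.round((k_samples * R + 1.0) * (n / 2)),
                  0, n - 1).astype(int)
    density = np.zeros((n, n, n))
    np.add.at(density, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    occupied = np.count_nonzero(density)
    frac_kspace = occupied / n**3        # e.g., 1.54% of 256^3 voxels
    frac_time = occupied / (t_acq * bw)  # e.g., 51.7% of 500,000 samples
    return density, frac_kspace, frac_time
```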

2.3. Image Reconstruction

Numerical simulations were performed by simulating irregularly sampled data (500 kHz sampling) along the trajectories. Solid phantoms were approximated by cubes. Receiver coils were approximated by magnetic dipoles, with sensitivities evaluated at the center of each phantom cube. In this approximation, the signal in $k$-space corresponding to each cube is a product of sinc functions, scaled by the complex sensitivity value.
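A sketch of this forward model for a single cube and coil; the function name and normalization are assumptions, but the sinc-product form follows from the Fourier transform of a uniform cube.

```python
import numpy as np

def cube_signal(k, center, side, sensitivity):
    """Analytic k-space signal of a uniform cube seen by one coil.

    k: (M, 3) sample coordinates [m^-1]; center, side: cube geometry [m];
    sensitivity: complex coil sensitivity at the cube center (dipole model).
    The Fourier transform of a cube is a product of sinc functions times a
    phase ramp from the cube's offset; np.sinc(x) = sin(pi*x)/(pi*x).
    """
    envelope = side**3 * np.prod(np.sinc(side * k), axis=1)
    phase = np.exp(-2j * np.pi * (k @ center))
    return sensitivity * envelope * phase
```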

The irregular samples were then divided by the sampling density in $k$-space and resampled to a regular grid using convolution.

For multiple coils with sensitivities $S_i$, the transformed images $\rho_i$ were combined using two methods: weighting by sensitivities,

$$
\frac{\sum_{\text{coils}} \bar{S}_i \,\rho_i}{\sum_{\text{coils}} \bar{S}_i \, S_i}, \qquad (8)
$$

and conjugate-gradient reconstruction [17].

A great deal of analysis has been done on resampling irregular data with density variations; see [18–20] and references therein. Convolution with a fixed kernel [21] is the simplest resampling method to implement and to analyze. The optimal piecewise-linear kernel described in [22] was used. The density used for correction was estimated by resampling a constant set of data points using a width-three triangular kernel. The key property of this triangular function is that it is a partition of unity for samples spaced 1, 2, or 3 grid points apart.
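A one-dimensional sketch of the convolution gridding and density estimate ("width-three" is interpreted here as a half-width of three grid points, an assumption; the 3D version applies the kernel separably):

```python
import numpy as np

def grid_1d(positions, values, n_grid, half_width=3.0):
    """Convolution gridding onto a regular grid with a triangular kernel."""
    grid = np.zeros(n_grid, dtype=complex)
    for p, v in zip(positions, values):     # p in grid units, [0, n_grid)
        lo = max(int(np.ceil(p - half_width)), 0)
        hi = min(int(np.floor(p + half_width)), n_grid - 1)
        for g in range(lo, hi + 1):
            grid[g] += v * (1.0 - abs(g - p) / half_width)  # triangle weight
    return grid

# Density correction: grid an all-ones data set to estimate the density,
# then divide each sample by the density before the final gridding.
# density   = grid_1d(positions, np.ones_like(positions), n).real
# corrected = data / np.interp(positions, np.arange(n), density)
# image1d   = np.fft.ifft(grid_1d(positions, corrected, n))
```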

3. Results

When the described algorithm was used to generate the 168-shot volume representing 1 second of imaging time, 12 hours were required to create a working base of 4000 trajectories using randomly generated goals. An additional 13000 trajectories were rejected by the threshold test. Another 72 hours were required for the GA to select a fit subset.

Figure 3 compares the results of the GA to a histogram of 3000 randomly formed subsets. The GA set is 7.8 standard deviations better than the mean of the random sets, and is shown as a banded bar. Also shown is the result of adding 10 density-threaded trajectories [2].

The estimated sampling density used for density correction is shown in Figure 4. For display purposes, the corrected sampling density is shown, in which the uniformity of the bright pixels reflects the quality of the correction. Recall that only 1.54% of grid points in $k$-space contain a sample point at this reconstruction resolution.

A better measure of the quality of the set of trajectories, together with the density correction, is given by the psf (Figure 4), which predicts very little blurring and a low level of noise-like aliasing.

Because it is difficult to inspect more than a planar cross-section of the psf at a time, it is hard to appreciate the extent to which the noise-like aliasing accumulates in three dimensions. This is demonstrated using a numerical phantom consisting of four rings of varying sizes (Figure 5).

The phantom was chosen to be readily identifiable both in cross-section and in volume/surface rendering. The apparent noise will increase with the total signal-producing volume; the phantom represents a midway point between contrast-enhanced MR angiography (with a small blood volume producing signal) and anatomical imaging. Four different reconstructions of the central $x$-$y$ cross-section are shown in Figure 6.

The uniform-coil reconstruction shows significantly more alias noise in the periphery of the image than the 32 surface-coil reconstruction. For measurement noise, this is as expected, since the coils are more sensitive near the periphery. For the noise-like aliasing artifact, a similar phenomenon is at work. The true image is multiplied by the coil sensitivity, which is then corrected during multicoil combination. The artifacts are also modulated by the sensitivities, but since they are delocalized (by definition), they are not corrected during combination, and hence partially cancel one another. This is more apparent where the sensitivities vary (the periphery) than where they are relatively uniform (the center).

The noisy multiple and single surface-coil images (lower row) show the typical loss of sensitivity toward the center of the phantom for a single channel, which is reduced in the combined image. The 32-coil noisy image shows visually equal noise levels to the first two images, but with a different (higher) frequency composition.

Iterative methods have been successfully used in planar imaging to reduce spiral artifacts by using a priori information about multiple coil sensitivities, notably the conjugate gradient method [17]. This does not produce visible reductions in aliasing in the present case. What it can do is reduce the effect of blurring, as is visible in the reconstruction of a $16^3$-voxel uniform box. Figure 7 shows the result of the first step, which is common to steepest descent and the conjugate gradient method. Figure 8 shows the residual decreasing over the first step and starting to diverge thereafter, for two coil configurations and for simple and complex phantoms. This is not very surprising. Adapted methods have been proposed for ill-posedness [24] and round-off errors [25], but are beyond the scope of this paper.

A common cross-section is displayed of the original solution, the result after one step, and a scaled version of the gradient (using, in the electronic edition, different colors for positive and negative pixel values). What aliasing artifact is present is either unchanged or enhanced by the gradient step, while softening of the edges is clearly reduced.

4. Discussion

The psf and numerical phantoms presented here show that Durga is a very promising approach to designing trajectories for volumetric imaging. Pseudorandom trajectory design visibly eliminates coherent aliasing artifacts in numerical simulations.

Using the GA increases the flexibility of this method and the quality of the solutions. There is a marked difference in quality between randomly generated subsets and a population improved by the GA. After filtering out inferior individual trajectories, randomly selected subsets produced surprisingly little variation in psf quality. Initial attempts at genetic algorithms based on randomly chosen initial points and unfiltered trajectories were unable to beat the heuristic in [2], so the success of the GA depends very much on careful formulation, and even then it is unlikely that the global optimum will be reached. Unfortunately, the GA is also considerably more expensive. Additional work should focus on improving individual trajectories before starting the GA.

The modest improvement produced by steepest descent suggests that the sampling patterns are efficient enough that little additional information can be extracted by a parallel reconstruction taking advantage of geometric coil information, at least with the present coils. We conjecture that the randomness, optimized to make the psf flat, produces a large cluster of small eigenvalues in the operator iterated in the CG step, causing CG to begin diverging after one iteration.

We are planning modifications of the basic trajectory design to quantify the first effect, and are working with partners to collect data to evaluate the second. In any case, multiple-coil reconstruction using coil sensitivities does reduce the apparent noise, which is important to end users.

We will also investigate compressed sensing reconstructions [26], which require incoherent aliasing artifacts such as those presented in this work, because in such cases "randomness is too important to be left to chance" [26].