Abstract

With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper, we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.

1. Introduction

The analysis of three-dimensional images has gained more and more importance in recent years. Particularly in the medical and biological sciences, new acquisition techniques lead to an enormous amount of 3D data calling for automated analysis. In this paper, we show how the harmonic analysis of the 3D rotation group offers a convenient and computationally efficient framework for rotation covariant image processing and analysis. Most of the state-of-the-art techniques rely on “low”-order features such as intensities, gradients of intensities, or second-order tensors like the Hessian matrix or the structure tensor [1]. For example, consider a lesion detection/segmentation problem in a magnetic resonance image. A typical procedure for solving such a task would rely on a local image feature extraction step, such as the computation of a Laplacian or a Gaussian pyramid. Once the feature images are computed, a group of healthy volunteers is used to determine the distribution of such features for subjects in a healthy condition.

From such a distribution, we can estimate the probabilities for the absence or presence of lesions in a voxel-by-voxel manner. Instead of solely using zeroth-order features, such as the Laplacian pyramid, higher order tensor fields can be used to derive further scalar-valued quantities. Such features can be the smoothed intensity gradient magnitudes (first-order features) or the eigenstructures of a Hessian matrix field or a structure tensor field (second-order features). However, due to their mathematical and computational complexity, features of order three or higher are rarely used. This paper proposes a unified framework that can cope with high-order features in a systematic way. The proposed framework is based on the harmonic, irreducible representations of the 3D rotation group. This guarantees the sparsest tensor representation. Consequently, in comparison with ordinary Cartesian tensor analysis, the algorithms and their handling are operationally clearer and more efficient.

Given a Cartesian tensor $\mathbf{T}$ of rank $n$, such a tensor can result from a simple projection onto moment functions or from a differentiation process (for instance, a gradient is a tensor of order 1, and a Hessian matrix is a tensor of order 2). A tensor can be considered as a feature describing an object in a rotation covariant way; that is, if the original object is rotated by a rotation matrix $\mathbf{R}_g$, the tensor rotates in the following manner:
$$T'_{i_1 \cdots i_n} = \sum_{j_1, \ldots, j_n} (\mathbf{R}_g)_{i_1 j_1} \cdots (\mathbf{R}_g)_{i_n j_n} \, T_{j_1 \cdots j_n},$$
where $g$ denotes an element of the 3D rotation group. A tensor rotation is a common operation in many applications, for instance, steering a local image descriptor (a tensor) with respect to some data-dependent reference frame. From a computational point of view, a Cartesian tensor rotation is quite inconvenient. Typically, there are symmetries with respect to index permutations (for instance, the Hessian matrix is a symmetric tensor). These symmetries have to be taken into account to provide an efficient computation. Another problem is that the tensor rotation matrix is “full”; that is, all elements mix under rotations. Spherical tensor analysis, where tensors appear in their irreducible representations, solves these problems, and, even more, it offers further advantages regarding tensor operations. Suppose that we aim at extracting rotation invariant features from a Cartesian tensor $\mathbf{T}$. For Cartesian tensors, the basic operation is tensor contraction: the tensor is contracted down to a rank-0 tensor by repeatedly combining two indexes (with the Kronecker delta $\delta_{ij}$) or three indexes (with the $\epsilon$-tensor $\epsilon_{ijk}$). This can be done in several ways, for example, linearly, quadratically, or even cubically in $\mathbf{T}$; it is possible to combine different tensors as well. A problem that occurs is ambiguity; in the presence of symmetries, some ways might end up with the same result, and some may not. In contrast, spherical tensor analysis offers a systematic way of performing such operations. The $\delta$- and $\epsilon$-contractions are replaced by one single spherical product which allows for multiplying spherical tensors of arbitrary rank. The even-parity products are related to $\delta$-contractions, the odd-parity products to $\epsilon$-contractions.

In this paper, we review the basics of spherical tensor analysis and how it can be applied to image processing problems. In Section 2, we introduce the basic concepts, such as the notion of a spherical tensor. We define the spherical product and introduce its properties. We also show how spherical tensors are related to ordinary Cartesian tensors. In Section 3, the so-called spherical tensor derivative operators (in short, spherical derivatives) are introduced. The spherical derivative operators are able to connect spherical tensor fields of different ranks. We discuss several properties and derive their representation in polar coordinates. We focus on two types of basis systems evolving from the spherical differentiation process: the Gauss-Laguerre functions and the spherical Gabor functions. Both are known to be very important in pattern analysis. The differential relationships of these functions offer an efficient way to compute projections onto these types of functions. In Section 4, expansions in terms of tensorial harmonics are discussed, which are the straightforward generalization of ordinary scalar-valued spherical harmonic expansions. Finally, in Section 5, several biomedical applications are reviewed and discussed.

2. Spherical Tensor Analysis

Let $\mathbf{D}_g^j$ be the unitary irreducible representation of a $g \in SO(3)$ of order $j$ with $j \in \mathbb{N}_0$. These representations are also known as the Wigner D-matrices (e.g., see [3]). The representation $\mathbf{D}_g^j$ acts on a vector space $V_j$ which is represented by $\mathbb{C}^{2j+1}$. We write the elements of $V_j$ in bold face, for example, $\mathbf{u} \in V_j$, and write the $2j+1$ components in unbold face, $u_m \in \mathbb{C}$, where $m = -j, \ldots, j$. For the transposition of a vector/matrix, we write $\mathbf{u}^T$; the joint complex conjugation and transposition is denoted by $\mathbf{u}^\dagger = \overline{\mathbf{u}}^T$. In these terms, the unitarity of $\mathbf{D}_g^j$ is expressed by the formula $(\mathbf{D}_g^j)^\dagger \mathbf{D}_g^j = \mathbf{I}$.

Note that we treat the space $V_j$ as a real vector space of dimension $2j+1$, although the components of $\mathbf{u}$ may be complex. This means that the space $V_j$ is only closed under weighted superpositions with real numbers. As a consequence, the components are always interrelated by $\overline{u_m} = (-1)^m u_{-m}$. From a computational point of view, this is an important issue: although the vectors are elements of $\mathbb{C}^{2j+1}$, we only have to store $2j+1$ real numbers.
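To make the storage argument concrete, the following sketch reconstructs all $2j+1$ components of an element of $V_j$ from its $m \geq 0$ components using the interrelation $\overline{u_m} = (-1)^m u_{-m}$. It is a minimal illustration; the helper name `expand_from_nonnegative` and the sign convention chosen for the rank-1 embedding of $\mathbf{r} = (x, y, z)^T$ are our own assumptions, not part of any library.

```python
import numpy as np

def expand_from_nonnegative(v_pos):
    """Recover all 2j+1 components of a real spherical tensor from its
    m >= 0 components via the symmetry v_{-m} = (-1)^m conj(v_m).

    v_pos : complex array [v_0, v_1, ..., v_j]  (v_0 must be real).
    Returns the full component vector [v_{-j}, ..., v_0, ..., v_j].
    """
    j = len(v_pos) - 1
    v = np.zeros(2 * j + 1, dtype=complex)
    for m in range(0, j + 1):
        v[j + m] = v_pos[m]                       # component v_m
        v[j - m] = (-1) ** m * np.conj(v_pos[m])  # component v_{-m}
    return v

# A rank-1 example: the spherical coordinate vector of r = (x, y, z)
x, y, z = 1.0, 2.0, 3.0
u1 = -(x + 1j * y) / np.sqrt(2.0)   # m = +1 component (one common convention)
u = expand_from_nonnegative(np.array([z, u1]))
print(u)  # [conjugate-symmetric m = -1 component, z, u1]
```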

We denote the standard basis of $\mathbb{C}^{2j+1}$ by $\mathbf{e}_m^j$, where the $m$th component of $\mathbf{e}_m^j$ is 1 and all others are 0. In contrast, the standard basis of $V_j$ is written as $\mathbf{c}_m^j$. We denote the corresponding “imaginary” space by $iV_j$; that is, elements of $iV_j$ can be written as $i\mathbf{v}$, where $\mathbf{v} \in V_j$. So, elements $\mathbf{u} \in iV_j$ fulfill $\overline{u_m} = -(-1)^m u_{-m}$. Hence, we can write the space $\mathbb{C}^{2j+1}$ as the direct sum of the two spaces, $\mathbb{C}^{2j+1} = V_j \oplus iV_j$. The standard coordinate vector $\mathbf{r} = (x, y, z)^T \in \mathbb{R}^3$ has a natural relation to elements $\mathbf{u} \in V_1$ by $\mathbf{u} = \mathbf{S}\mathbf{r}$. Note that $\mathbf{S}$ is a unitary coordinate transformation. The representation $\mathbf{D}_g^1$ is directly related to the real-valued rotation matrix $\mathbf{R}_g \in SO(3)$ by $\mathbf{R}_g = \mathbf{S}^\dagger \mathbf{D}_g^1 \mathbf{S}$.

Definition 1. A function $\mathbf{f}: \mathbb{R}^3 \to V_j$ is called a spherical tensor field of rank $j$ if it transforms with respect to rotations as $(g\mathbf{f})(\mathbf{r}) := \mathbf{D}_g^j \mathbf{f}(\mathbf{R}_g^T \mathbf{r})$ for all $g \in SO(3)$. The space of all spherical tensor fields of rank $j$ is denoted by $\mathcal{T}_j$.

2.1. Spherical Tensor Coupling

Now, we define a family of bilinear forms that connect tensors of different ranks.

Definition 2. For every $j \geq 0$, we define a family of bilinear forms of type $\circ_j : V_{j_1} \times V_{j_2} \to V_j$, where $j_1, j_2 \in \mathbb{N}_0$ have to be chosen according to the triangle inequality $|j_1 - j_2| \leq j \leq j_1 + j_2$. It is defined by
$$(\mathbf{v} \circ_j \mathbf{w})_m := \sum_{m_1 + m_2 = m} \langle jm \mid j_1 m_1, j_2 m_2 \rangle \, v_{m_1} w_{m_2},$$
where $\langle jm \mid j_1 m_1, j_2 m_2 \rangle$ are the Clebsch-Gordan coefficients.
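The spherical product can be implemented directly from this definition. The sketch below uses the Clebsch-Gordan coefficients provided by sympy (`sympy.physics.quantum.cg.CG`); the function name `spherical_product` and the component layout (index `m + j` holding component $m$) are our own choices.

```python
import numpy as np
from sympy.physics.quantum.cg import CG

def spherical_product(v, w, j1, j2, j):
    """Clebsch-Gordan coupling (v o_j w)_m = sum_{m1+m2=m} <j m|j1 m1, j2 m2> v_m1 w_m2.

    v, w : component vectors indexed from m = -j1..j1 and m = -j2..j2.
    Requires the triangle inequality |j1 - j2| <= j <= j1 + j2.
    """
    assert abs(j1 - j2) <= j <= j1 + j2
    out = np.zeros(2 * j + 1, dtype=complex)
    for m in range(-j, j + 1):
        for m1 in range(-j1, j1 + 1):
            m2 = m - m1
            if abs(m2) > j2:
                continue
            cg = float(CG(j1, m1, j2, m2, j, m).doit())  # <j m | j1 m1, j2 m2>
            out[m + j] += cg * v[m1 + j1] * w[m2 + j2]
    return out
```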

The characterizing property of these products is that they respect the rotations of the arguments.

Proposition 3. Let $\mathbf{v} \in V_{j_1}$ and $\mathbf{w} \in V_{j_2}$; then, for any $g \in SO(3)$,
$$(\mathbf{D}_g^{j_1}\mathbf{v}) \circ_j (\mathbf{D}_g^{j_2}\mathbf{w}) = \mathbf{D}_g^j (\mathbf{v} \circ_j \mathbf{w})$$
holds.

Proof. Write out the components of the left-hand side in terms of the Clebsch-Gordan coefficients and the matrix elements of $\mathbf{D}_g^{j_1}$ and $\mathbf{D}_g^{j_2}$. First, one has to insert the identity by using the orthogonality relation of the Clebsch-Gordan coefficients with respect to $m_1$ and $m_2$. Then, we can use the coupling relation of the Wigner D-matrices and the definition of $\circ_j$ to prove the assertion.

Proposition 4. If $j_1 + j_2 + j$ is even, then $\circ_j$ is symmetric; otherwise, it is antisymmetric. The spaces $V_j$ are closed under the symmetric product, while for the antisymmetric product this is not the case: $\mathbf{v} \circ_j \mathbf{w} \in V_j$ for even $j_1 + j_2 + j$ and $\mathbf{v} \circ_j \mathbf{w} \in iV_j$ for odd $j_1 + j_2 + j$, where $\mathbf{v} \in V_{j_1}$ and $\mathbf{w} \in V_{j_2}$.

Proof. The (anti)symmetry follows from the symmetry property of the Clebsch-Gordan coefficients, $\langle jm \mid j_1 m_1, j_2 m_2 \rangle = (-1)^{j_1 + j_2 - j} \langle jm \mid j_2 m_2, j_1 m_1 \rangle$. To show the closure property, apply the component symmetry $\overline{u_m} = (-1)^m u_{-m}$ of the arguments to the components of $\mathbf{v} \circ_j \mathbf{w}$. Hence, we have for even $j_1 + j_2 + j$ the “realness” condition complying with $V_j$ and for odd $j_1 + j_2 + j$ the “imaginariness” condition for $iV_j$, which proves the statements.

We will later see that the symmetric product plays an important role, in particular, because we can normalize it in a special way such that it shows a gentler behavior with respect to the spherical harmonics.

Definition 5. For every $j$ with $|j_1 - j_2| \leq j \leq j_1 + j_2$ and even $j_1 + j_2 + j$, we define a family of symmetric bilinear forms by
$$\mathbf{v} \bullet_j \mathbf{w} := \frac{1}{\langle j0 \mid j_1 0, j_2 0 \rangle} \, \mathbf{v} \circ_j \mathbf{w}.$$

For the special case $j = 0$, the arguments have to be of the same rank due to the triangle inequality. Actually, in this case, the symmetric product coincides with the standard inner product,
$$\mathbf{v} \bullet_0 \mathbf{w} = \sum_{m=-j_1}^{j_1} v_m \overline{w_m} = \mathbf{w}^\dagger \mathbf{v},$$
where $j_1 = j_2$ is the rank of $\mathbf{v}$ and $\mathbf{w}$.
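Assuming the normalization of Definition 5 as reconstructed above, the identity $\mathbf{v} \bullet_0 \mathbf{w} = \mathbf{w}^\dagger \mathbf{v}$ can be checked numerically. This is a verification sketch, not production code; the helper `rand_real_tensor` is ours.

```python
import numpy as np
from sympy.physics.quantum.cg import CG

def rand_real_tensor(j, rng):
    """Random element of V_j: components obey conj(v_m) = (-1)^m v_{-m}."""
    v = np.zeros(2 * j + 1, dtype=complex)
    v[j] = rng.standard_normal()                     # v_0 is real
    for m in range(1, j + 1):
        v[j + m] = rng.standard_normal() + 1j * rng.standard_normal()
        v[j - m] = (-1) ** m * np.conj(v[j + m])
    return v

j, rng = 2, np.random.default_rng(0)
v, w = rand_real_tensor(j, rng), rand_real_tensor(j, rng)

# (v o_0 w) has a single component (m = 0)
coupled = sum(float(CG(j, m, j, -m, 0, 0).doit()) * v[j + m] * w[j - m]
              for m in range(-j, j + 1))
normalized = coupled / float(CG(j, 0, j, 0, 0, 0).doit())
assert np.isclose(normalized, np.vdot(w, v))  # equals <v, w> = sum_m v_m conj(w_m)
```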

The introduced product can also be used to combine tensor fields of different rank by point-wise multiplication.

Proposition 6. Let $\mathbf{v} \in \mathcal{T}_{j_1}$ and $\mathbf{w} \in \mathcal{T}_{j_2}$ and $j$ be chosen such that $|j_1 - j_2| \leq j \leq j_1 + j_2$; then, the point-wise product $(\mathbf{v} \circ_j \mathbf{w})(\mathbf{r}) := \mathbf{v}(\mathbf{r}) \circ_j \mathbf{w}(\mathbf{r})$ is in $\mathcal{T}_j$; that is, it is a tensor field of rank $j$.

In fact, there is another way to combine two tensor fields: by convolution. The advantage of the convolution is that the resulting product is also covariant with respect to translations; that is, the product is covariant to full 3D Euclidean motion.

Proposition 7. Let $\mathbf{v} \in \mathcal{T}_{j_1}$ and $\mathbf{w} \in \mathcal{T}_{j_2}$ and $j$ be chosen such that $|j_1 - j_2| \leq j \leq j_1 + j_2$; then, the convolution
$$(\mathbf{v} \,\tilde{\circ}_j\, \mathbf{w})(\mathbf{r}) := \int_{\mathbb{R}^3} \mathbf{v}(\mathbf{r}') \circ_j \mathbf{w}(\mathbf{r} - \mathbf{r}') \, d\mathbf{r}'$$
is in $\mathcal{T}_j$; that is, it is a tensor field of rank $j$.

Given a translation $\mathbf{t} \in \mathbb{R}^3$ with the translation operator $(\tau_{\mathbf{t}}\mathbf{f})(\mathbf{r}) := \mathbf{f}(\mathbf{r} - \mathbf{t})$, the following two relations hold:
$$(\tau_{\mathbf{t}}\mathbf{v}) \,\tilde{\circ}_j\, \mathbf{w} = \tau_{\mathbf{t}}(\mathbf{v} \,\tilde{\circ}_j\, \mathbf{w}), \qquad \mathbf{v} \,\tilde{\circ}_j\, (\tau_{\mathbf{t}}\mathbf{w}) = \tau_{\mathbf{t}}(\mathbf{v} \,\tilde{\circ}_j\, \mathbf{w}).$$
Further important properties of the products are their associativity rules.

Proposition 8. The spherical products are associative, $(\mathbf{v} \circ \mathbf{w}) \circ \mathbf{u} = \mathbf{v} \circ (\mathbf{w} \circ \mathbf{u})$, provided that the involved coupling ranks satisfy certain admissibility conditions; in particular, the rules hold for couplings of maximal rank. Analogous rules hold for the normalized product $\bullet$.

2.2. Spherical and Solid Harmonics

Due to their special properties, the spherical harmonics $\mathbf{Y}^j$ (see Appendix A for the definition) play the central role in spherical tensor analysis. One of the most important properties is that each $\mathbf{Y}^j$, interpreted as a tensor field of rank $j$, is a fixed point with respect to rotations; that is, $g\mathbf{Y}^j = \mathbf{Y}^j$. Consequently,
$$\mathbf{Y}^j(\mathbf{R}_g \mathbf{r}) = \mathbf{D}_g^j \mathbf{Y}^j(\mathbf{r}).$$
The components $Y_m^j$ form an orthogonal and complete basis of the functions defined on the 2-sphere. Hence, any real square-integrable scalar field $f$ can be written as
$$f(r\mathbf{n}) = \sum_{j=0}^{\infty} \sum_{m=-j}^{j} a_m^j(r) \, Y_m^j(\mathbf{n}).$$

A band-limited spherical harmonic representation of two images is illustrated in Figure 1.

The expansion coefficients of the rotated function $gf$ are simply $\mathbf{D}_g^j \mathbf{a}^j$, which can be concluded from the fixed-point property. In the following, we always use Racah’s normalization (also known as semi-Schmidt normalization); that is,
$$\int_{S^2} Y_m^j(\mathbf{n}) \, \overline{Y_{m'}^{j'}(\mathbf{n})} \, d\mathbf{n} = \frac{4\pi}{2j+1} \, \delta_{jj'} \delta_{mm'},$$
where the integral ranges over a sphere using the standard measure. With this, the coupling of two spherical harmonics gives, again, a spherical harmonic,
$$\mathbf{Y}^{j_1} \bullet_j \mathbf{Y}^{j_2} = \mathbf{Y}^j.$$
From a computational perspective, this property can be used to efficiently compute higher order harmonics from lower ones.
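For numerical work, Racah-normalized harmonics can be obtained by rescaling the orthonormal spherical harmonics, for instance those of scipy. A minimal sketch, assuming scipy's classic `sph_harm(m, l, theta, phi)` interface where `theta` is the azimuth (newer scipy versions provide `sph_harm_y` instead):

```python
import numpy as np
from scipy.special import sph_harm

def racah_sph_harm(m, l, theta, phi):
    """Racah (semi-Schmidt) normalized spherical harmonic.

    scipy's sph_harm is orthonormal over the sphere; Racah's normalization
    rescales it so that the band-l harmonics integrate to 4*pi/(2l+1).
    """
    return np.sqrt(4.0 * np.pi / (2 * l + 1)) * sph_harm(m, l, theta, phi)

print(racah_sph_harm(0, 0, 0.0, 0.0))  # constant Y^0_0 = 1 in this normalization
```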

Besides the spherical harmonics, the so-called solid harmonics $\mathbf{R}^j$ often appear in the context of harmonic analysis of the 3D rotation group. They are the homogeneous solutions of the Laplace equation, are related to the spherical harmonics by $\mathbf{R}^j(\mathbf{r}) = r^j \mathbf{Y}^j(\mathbf{n})$, and are homogeneous polynomials of degree $j$; that is, $\mathbf{R}^j(s\mathbf{r}) = s^j \mathbf{R}^j(\mathbf{r})$.

2.3. Relation to Cartesian Tensors

The correspondence of spherical and Cartesian tensors of rank 0 is trivial. For rank 1, it is just the matrix $\mathbf{S}$ that connects the real-valued vector $\mathbf{r} \in \mathbb{R}^3$ with the spherical coordinate vector $\mathbf{u} \in V_1$. For rank 2, the consideration gets more intricate. Consider a real-valued Cartesian rank-2 tensor $\mathbf{T} \in \mathbb{R}^{3 \times 3}$ and the following unique decomposition:
$$\mathbf{T} = \alpha \mathbf{I} + \mathbf{T}^{\mathrm{anti}} + \mathbf{T}^{\mathrm{sym}},$$
where $\alpha = \operatorname{trace}(\mathbf{T})/3 \in \mathbb{R}$, $\mathbf{T}^{\mathrm{anti}} = (\mathbf{T} - \mathbf{T}^T)/2$ is an antisymmetric matrix, and $\mathbf{T}^{\mathrm{sym}} = (\mathbf{T} + \mathbf{T}^T)/2 - \alpha\mathbf{I}$ is a traceless symmetric matrix. In fact, this decomposition follows the same manner as the spherical tensor decomposition: a rank-0 spherical tensor corresponds to the identity matrix in Cartesian notation, a rank-1 spherical tensor to an antisymmetric matrix or, equivalently, to a vector, and, finally, a rank-2 spherical tensor corresponds to a traceless, symmetric matrix. So, the spherical decomposition of $\mathbf{T}$ consists of three spherical tensors $\mathbf{v}^j \in V_j$ with $j = 0, 1, 2$, where $\mathbf{v}^0$ corresponds to $\alpha$, $\mathbf{v}^1$ to $\mathbf{T}^{\mathrm{anti}}$, and $\mathbf{v}^2$ to $\mathbf{T}^{\mathrm{sym}}$. The explicit component formulas of this “Cartesian to spherical” transformation and of its inverse follow from the matrix $\mathbf{S}$. Note that for Cartesian tensors of arbitrary rank, the relations are not that trivial.
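The rank-2 decomposition itself is easy to implement; the following sketch splits a real $3 \times 3$ matrix into its trace, antisymmetric, and traceless symmetric parts (the function name is ours):

```python
import numpy as np

def cartesian_rank2_decomposition(T):
    """Split a real 3x3 tensor into its three irreducible parts:
    rank 0 (trace), rank 1 (antisymmetric), rank 2 (traceless symmetric)."""
    alpha = np.trace(T) / 3.0                     # rank-0 part (1 dof)
    T_anti = 0.5 * (T - T.T)                      # rank-1 part (3 dof)
    T_sym = 0.5 * (T + T.T) - alpha * np.eye(3)   # rank-2 part (5 dof)
    return alpha, T_anti, T_sym

T = np.random.rand(3, 3)
a, A, S = cartesian_rank2_decomposition(T)
assert np.allclose(T, a * np.eye(3) + A + S)   # decomposition is exact
assert abs(np.trace(S)) < 1e-12                # symmetric part is traceless
```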

3. Spherical Derivatives

This section introduces the concept of differentiation in the context of spherical tensor analysis. First, we introduce the spherical derivative operators, which connect spherical tensor fields of different ranks by differentiation. The basic idea is simple: formally replace the coordinates $\mathbf{r}$ appearing within the solid harmonics $\mathbf{R}^j(\mathbf{r})$ by the gradient operator $(\partial_x, \partial_y, \partial_z)^T$.

Proposition 9 (spherical derivatives). Let $\mathbf{f} \in \mathcal{T}_j$ be a tensor field. The spherical up-derivative $\nabla^1 : \mathcal{T}_j \to \mathcal{T}_{j+1}$ and the down-derivative $\nabla_1 : \mathcal{T}_j \to \mathcal{T}_{j-1}$ are defined as
$$\nabla^1 \mathbf{f} := \nabla \bullet_{j+1} \mathbf{f}, \qquad \nabla_1 \mathbf{f} := \nabla \bullet_{j-1} \mathbf{f},$$
where $\nabla = \mathbf{S}(\partial_x, \partial_y, \partial_z)^T$ is the spherical gradient operator.

In fact, there are many more rotation covariant differential operators than the two defined previously. Given a tensor field $\mathbf{f} \in \mathcal{T}_j$, any field that we obtain from $\mathbf{f}$ via repeated spherical differentiation is a spherical tensor field, too. But the up- and down-derivatives are very attractive from a computational point of view because, as shown earlier, they allow an iterative computation of higher order differentials, which is computationally much more efficient than the direct way. For further discussion of the spherical tensor derivative operators, consider the spherical derivatives in the Fourier domain, where they act by point-wise spherical multiplications with solid harmonics evaluated at $i\mathbf{k}$, where $k = \|\mathbf{k}\|$ is the frequency magnitude.
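For illustration, the rank-0-to-rank-1 up-derivative (the spherical gradient of a scalar field) can be sketched with finite differences. The component convention $\nabla_0 = \partial_z$, $\nabla_{\pm 1} = \mp(\partial_x \pm i\partial_y)/\sqrt{2}$ is one common choice, and normalization factors from $\bullet_{j+1}$ are omitted; this is an assumption-laden sketch, not the paper's reference implementation.

```python
import numpy as np

def spherical_gradient(f, spacing=1.0):
    """Spherical up-derivative of a scalar field f (rank 0 -> rank 1).

    Partial derivatives are approximated by numpy's central differences;
    axes of f are assumed to be ordered (z, y, x).
    """
    dz, dy, dx = np.gradient(f, spacing)
    g = np.empty((3,) + f.shape, dtype=complex)   # components m = -1, 0, +1
    g[0] = (dx - 1j * dy) / np.sqrt(2.0)          # m = -1
    g[1] = dz                                     # m =  0
    g[2] = -(dx + 1j * dy) / np.sqrt(2.0)         # m = +1
    return g

g = spherical_gradient(np.random.rand(16, 16, 16))
# realness check of V_1: conj(g_m) = (-1)^m g_{-m}
assert np.allclose(np.conj(g[2]), -g[0])
```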

Proposition 10 (Fourier representation). Let $\tilde{\mathbf{f}}$ be the Fourier transformation of some $\mathbf{f} \in \mathcal{T}_j$ and let $\tilde{\nabla}^1, \tilde{\nabla}_1$ be representations of the spherical derivatives in the Fourier domain that are implicitly defined by $\widetilde{\nabla^1 \mathbf{f}} = \tilde{\nabla}^1 \tilde{\mathbf{f}}$ and $\widetilde{\nabla_1 \mathbf{f}} = \tilde{\nabla}_1 \tilde{\mathbf{f}}$; then,
$$\tilde{\nabla}^1 \tilde{\mathbf{f}} = i\mathbf{k} \bullet_{j+1} \tilde{\mathbf{f}}, \qquad \tilde{\nabla}_1 \tilde{\mathbf{f}} = i\mathbf{k} \bullet_{j-1} \tilde{\mathbf{f}},$$
where $\mathbf{k} \in V_1$ denotes the spherical representation of the frequency vector.

Proof. See [4].

Both statements are direct consequences of the Fourier correspondences for the ordinary partial derivatives. For scalar fields, we can generalize this statement to higher orders.

Proposition 11 (multiple spherical derivatives). For $f \in \mathcal{T}_0$, one defines the multiple spherical up-derivative by $\nabla^j f := \nabla^1 \cdots \nabla^1 f$ ($j$ times). In the Fourier domain, these multiple derivatives act by point-wise multiplication with the solid harmonic $\mathbf{R}^j(i\mathbf{k})$. Using this, one can show that $\nabla_1 \nabla^1 = \Delta$, where $\Delta$ is the Laplace operator.

Proof. See [5].

We want to emphasize that both statements only hold for scalar-valued fields; generalizations to tensor-valued fields do not hold in general due to the nontrivial associativity rules.

Proposition 12 (product rule). Let $\mathbf{v} \in \mathcal{T}_{j_1}$ and $\mathbf{w} \in \mathcal{T}_{j_2}$; then, the spherical derivatives of the point-wise product $\mathbf{v} \circ \mathbf{w}$ obey product rules analogous to the ordinary Leibniz rule.

It is well known that convolutions commute with differentiation, and actually there are generalized commutation rules for spherical tensor fields.

Proposition 13 (commuting property for convolutions). Let $\mathbf{v}$ and $\mathbf{w}$ be arbitrary spherical tensor fields; then, the spherical derivatives commute with the convolution and can be moved onto either factor:
$$\nabla^1 (\mathbf{v} \,\tilde{\circ}\, \mathbf{w}) = (\nabla^1 \mathbf{v}) \,\tilde{\circ}\, \mathbf{w} = \mathbf{v} \,\tilde{\circ}\, (\nabla^1 \mathbf{w}),$$
and analogously for the down-derivative $\nabla_1$, where the coupling ranks have to be chosen admissibly.

Proof. Both assertions are founded on the associativity of the spherical product. Consider the first statement in the Fourier domain by using (28) and then apply the associativity given in (17), abbreviating the Fourier-domain multiplier accordingly. A repeated application of this proves the first assertion. The second statement is shown similarly, but using the associativity as given in (15).

This proposition shows again the importance of the up- and down-derivatives. For general derivative operators, the previous commutation rules do not hold. The previous convolution property is of particular importance for the efficient covariant processing of 3D images. The major motivation is to compute convolutions with the spherical harmonic basis in an efficient way. Suppose that the goal is to convolve some arbitrary scalar image $f$ with a Gaussian-windowed solid harmonic. In fact, as we will show in the next section, the Gaussian-windowed solid harmonics are generated from the Gaussian itself by repeated spherical up-derivatives. Together with the convolution theorem, this enables us to compute the convolution by a repeated application of the spherical derivatives to the Gaussian-smoothed image, which is computationally much cheaper than a direct convolution (even with the use of the Fast Fourier Transform).
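The following fragment illustrates the computational pattern on the rank-1 level, under the assumption that the kernel of interest is a derivative of a Gaussian: differentiating the Gaussian-smoothed image replaces an explicit convolution with a sampled derivative kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Rank-1 illustration of the derivative/convolution commutation:
# differentiating the Gaussian-smoothed image equals convolving the image
# with the derivative of the Gaussian kernel.
f = np.random.rand(32, 32, 32)
smoothed = gaussian_filter(f, sigma=2.0)
dz, dy, dx = np.gradient(smoothed)   # three cheap 1D difference passes
# Higher-rank responses (j = 2, 3, ...) follow by repeating the spherical
# up-derivative on these rank-1 components instead of convolving with
# explicitly sampled harmonic kernels.
```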

3.1. Spherical Derivatives in Polar Representation

To get a better understanding of what happens during differentiation via spherical derivatives, we consider their properties in polar representation.

Lemma 14. Given a spherical tensor field $\mathbf{f} \in \mathcal{T}_j$ whose angular and radial components are separable such that $\mathbf{f}(\mathbf{r}) = f(r)\mathbf{Y}^j(\mathbf{n})$, where $f$ denotes the function representing the radial component of $\mathbf{f}$, the spherical up- and down-derivatives of $\mathbf{f}$ reduce to first-order ordinary differential operators acting on the radial function $f(r)$, yielding tensor fields proportional to $\mathbf{Y}^{j+1}$ and $\mathbf{Y}^{j-1}$, respectively.

Proof. See [7].

3.2. Gauss-Laguerre Functions

Previously, we already stated that repeated spherical up-derivatives of a Gaussian yield Gaussian-windowed solid harmonics; in fact, there is a more general statement involving the so-called Laguerre polynomials. This offers the possibility to compute convolutions with the evolving functions in an iterative and efficient way. We denote by $L_n^{(\alpha)}$ the associated Laguerre polynomial of order $n$, and we call the corresponding spherical tensor valued polynomials, Gaussian-windowed products of Laguerre polynomials and solid harmonics, the Gauss-Laguerre functions. These functions are widely known as Laguerre Gaussian-type functions in the field of theoretical chemistry (e.g., see [8] or [9]). In the image processing community, these functions are known as generic neighborhood operators [10] and are used, for example, for key-point detection [11].

Theorem 15. The Gaussian-windowed polynomials can be computed iteratively in terms of spherical up-derivatives, starting with an isotropic Gaussian.

Proof. See [7].

3.3. Gabor Functions

Gabor functions, that is, Gaussian-windowed plane waves, play an important role in image processing due to the fact that the different frequency components of signals can be studied locally. This information is, for example, used for tracking [12] or feature extraction [6]. Thus, it is of particular interest to provide efficient methods to apply Gabor filters. One way is to explicitly represent a finite number of Gabor kernels, each representing a certain orientation of the plane wave [13]. The problem is that the orientation space must be discretized. However, representing Gabor functions in terms of spherical derivatives offers a way to compute Gabor filter responses for the whole range of possible orientations. First, note that applying spherical derivatives to a plane wave gives a quite neat result: the plane wave simply gets modulated by a solid harmonic in the wave vector. Following the proof from Section 3.1, a similar result holds for the spherical Bessel functions, which constitute the radial part in the harmonic expansion of the plane wave.

In the following, we show that there exists a very similar way to represent the Gaussian-windowed wave in terms of the derivatives of Gaussian-windowed Bessel functions. Let the Gaussian-windowed 0-order Bessel function be given, where a parameter represents the size of the Gaussian window with respect to the wave. With (38) and the spherical up-derivative, we can derive the higher order Gaussian-windowed Bessel functions.

Theorem 16. The spherical derivatives of the Gaussian-windowed 0-order Bessel functions yield the Gaussian-windowed higher order Bessel functions in closed form.

The Gabor wave can now be represented by a superposition of these Bessel functions, each representing a certain angular frequency, with real-valued weighting factors.

Proof. See [7].

4. Tensorial Harmonic Expansions

In most image processing applications, the data to be processed is of scalar nature; that is, for each voxel, we observe one single intensity value. But there are acquisition techniques where the measurement itself is already a tensorial quantity. For example, in diffusion weighted magnetic resonance imaging (DW-MRI), rank-2 tensors are common, and, in phase-contrast MRI, velocity vectors are measured. Thus, there is a great interest to represent these measurements in an appropriate way. In [14], we proposed to expand a spherical tensor field of rank $l$ in terms of tensorial harmonics with tensor-valued expansion coefficients. For $l = 0$, the expansion coincides with the ordinary scalar spherical harmonic expansion, and we can observe very similar properties: a rotation of the tensor field affects the expansion coefficients to be multiplied from the left with a Wigner D-matrix. So, the expansion shows the same, very convenient, rotation behavior as an SH expansion, which can be used, for example, to extract invariant local descriptors in a simple way. And, in fact, the representation is orthogonal and complete, which allows us to identify the underlying functional basis.

Proposition 17 (tensorial harmonics). The functions obtained in this way provide a complete and orthogonal basis of the angular part of $\mathcal{T}_l$; they are called the tensorial harmonics.

4.1. Symmetric Tensor Fields

In this section, we discuss the properties of expansion coefficients of specific tensor fields expanded in terms of tensorial harmonics. We show that symmetries in a tensor field simplify the tensorial harmonic expansion coefficients. This is similar to the ordinary spherical harmonic expansion, where, for example, the point symmetry of a scalar field leads to vanishing spherical harmonic coefficients for odd $j$. In the following, we consider similar symmetries for tensorial harmonics.

The rotation symmetry of a spherical tensor field around the $z$-axis is expressed algebraically by the fact that $g\mathbf{f} = \mathbf{f}$ for all rotations $g$ around the $z$-axis. Such fields can easily be obtained by averaging a general tensor field over all these rotations. It is well known that the representation of such a rotation is diagonal; namely, $(\mathbf{D}_g^j)_{mm'} = e^{-im\phi}\delta_{mm'}$ for a rotation by $\phi$ around the $z$-axis. Hence, the expansion coefficients that pick up a nontrivial phase vanish, and we can write any rotation symmetric tensor field in terms of the remaining expansion terms.

We call such a rotation symmetric field torsion-free if it is additionally invariant under a reflection with respect to the $xz$-plane (or $yz$-plane). Similar to the rotational symmetry, we can obtain such fields by averaging over the symmetry operation. Note that the mirroring operation for a spherical harmonic is just a complex conjugation; that is, $\mathbf{Y}^j(\sigma\mathbf{r}) = \overline{\mathbf{Y}^j(\mathbf{r})}$, where $\sigma$ denotes the reflection. The consequence for (53) is that all terms with odd indices vanish. The reason for that is mainly Proposition 4, because with its help we can show the required cancellation.

Finally, consider the reflection symmetry with respect to the $xy$-plane. This symmetry is particularly important for fields of even rank. Again, the symmetry is expressed algebraically by requiring that the field is invariant under this reflection, whose action on spherical tensors is given by corresponding sign flips of the components. Averaging over this symmetry operation has the consequence that expansion terms with odd index vanish. For odd rank tensor fields, the reflection symmetry is not imperative. But there is typically an antisymmetry, where the field changes its sign under the reflection. This antisymmetry lets the expansion terms with even index vanish.

5. Applications

In the context of rotation covariant image processing, the applications of the proposed framework are manifold. The mathematical representation might appear unfamiliar, but the provided tools can be used quite easily. Basically, there are two types of operations: differentiation by spherical tensor derivatives and multiplication by spherical tensor products. The spherical derivatives can be used in two ways. On the one hand, the up-derivatives can be used to “create” new tensor fields out of existing fields by incorporating neighborhood relations. This can be regarded as a simple and efficient way to compute meaningful local image descriptors in a covariant way. On the other hand, the down-derivatives can be used to gather information from a local point neighborhood and form a lower ranked tensor field via superposition. Due to the tensorial nature, the information is able to interfere in a destructive or constructive way. The spherical products are the basic nonlinear ingredient in the framework. They can be used to combine tensor fields in a nonlinear, covariant manner.

Several methods in the image processing and pattern recognition literature [15–17] are based on the following principle: compute, in a first step, local descriptors at several image locations, make some inference based on this knowledge, and cast this information back by combining evidence from several locations. In fact, our framework is ideally suited to adopt this principle. First, local descriptors are densely computed by differentiation for all image locations. Then, the information is combined by using spherical products in a nonlinear and nontrivial way. Finally, we use again the spherical derivatives to form neighborhood descriptors. The descriptors are then used for object or feature detection.

In the following, we give examples of the proposed framework in several application domains.

5.1. Implementation

For implementing the discrete spherical derivatives, we propose to utilize central differences of 4th-order accuracy for computing the partial derivatives (see Figure 2(b)). We observed that this scheme is a good tradeoff between computational complexity and accuracy. We found that the standard Laplace operator (considering a six-voxel neighborhood) is numerically very unstable (even if double precision numbers are used!). Therefore, we propose the usage of the scheme depicted in Figure 2(a), which performed significantly better regarding numerical stability in our experiments. This is illustrated in Figure 3: as an example, we show the expansion images obtained via the proposed schemes together with the images obtained via a standard scheme. For comparison, we also show explicitly computed expansion images. The example illustrates that the ordinary Laplace operator leads to strong artifacts after a small number of applications.
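A minimal sketch of the 4th-order central difference along one axis; boundary handling here is periodic via `np.roll`, which is our simplification and not necessarily the paper's boundary treatment:

```python
import numpy as np

def central_diff_4th(f, axis, h=1.0):
    """Fourth-order accurate central difference along one axis:
    f'(x) ~ (-f(x+2h) + 8 f(x+h) - 8 f(x-h) + f(x-2h)) / (12 h)."""
    fp1 = np.roll(f, -1, axis=axis)  # f(x + h)
    fm1 = np.roll(f, 1, axis=axis)   # f(x - h)
    fp2 = np.roll(f, -2, axis=axis)  # f(x + 2h)
    fm2 = np.roll(f, 2, axis=axis)   # f(x - 2h)
    return (-fp2 + 8.0 * fp1 - 8.0 * fm1 + fm2) / (12.0 * h)

f = np.random.rand(32, 32, 32)
dfz = central_diff_4th(f, axis=0)
```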

5.2. Tensor Voting

The Tensor Voting framework was originally proposed by [15] and has found several applications in low-level vision in 2D and 3D. For example, it is used for perceptual grouping and extraction of lines, curves, and surfaces. The key idea is to make unreliable measurements more robust by incorporating neighborhood information in a consistent and coherent manner. Following [4], the key expression that has to be computed is an integral over all image locations in which a voting field, steered by the local orientation of the feature of interest, is weighted by a scalar-valued feature image giving evidence for the occurrence of the feature. In the following, we restrict ourselves to axially symmetric voting fields. Therefore, let the voting field be generated from an axially symmetric function whose symmetry axis is the $z$-axis; the steered voting field is then obtained by a rotation that maps the $z$-axis onto the axis defined by the local normalized orientation vector. In [4], we have shown that (56) simplifies to a sum of convolutions between combined tensor-valued evidence images and the tensorial harmonic expansion terms of the voting field steered in $z$-direction. The coefficients can be obtained by a projection onto the tensorial harmonics. Due to the symmetry of the voting field, only a few expansion terms are involved. Further information concerning a practical point of view can be found in [14].

5.3. Nonlinear Covariant Filters

In the following, we briefly show how to design trainable rotation covariant image filters which can be used for rotation invariant object or landmark detection. The idea is that the expansion coefficients of a spherically expanded voting function are learned in a data driven way. The filter is mainly based on two steps. First, rotation covariant image descriptors are densely computed in a voxel-by-voxel manner. Then, a weighted superposition of these image descriptors is used to form the expansion coefficients of a spherical voting function. The expansion coefficients are formed such that each voting function votes for the presence or absence of landmarks or objects. The weights are found by a least squares fit to a given training data set. For a fast implementation, we propose to use voting functions based on an expansion of spherical functions having a differential relationship in terms of spherical derivatives. In [18, 19], we used a spherical superposition of Gaussian-windowed solid harmonics for representing the voting function. However, we are not restricted to them. For instance, we can also use the spherical plane-wave expansion, leading to a voting function that is not only highly adaptable in the angular direction, but in the radial direction, too; see the paper by [20]. This Fourier-like voting function can be written as a weighted sum of spherical Fourier basis functions known as Bessel functions (see (43)), where the weights are the expansion coefficients of the filter. The filter response is a saliency map representing the evidence for the presence or absence of objects. The saliency map is computed by collecting all contributions (votes) utilizing simple scalar-valued convolutions. For the implementation, we use a band-limited expansion and take only a small set of frequencies into account. We further make use of Gabor waves (see Theorem 16) to gain a filter that adapts and votes locally; in this case, the filter expression simplifies. Trainable filters based on the Gabor waves have shown superior performance over the standard harmonic filters [20]. Figure 4 shows some qualitative results of an experiment where we detect the pores of airborne pollen. The database contains 3D recordings of airborne pollen acquired via a confocal laser scanning microscope. In Figure 4(a), we see the training image; the three porates are marked by red circles. In Figure 4(b), we show three exemplary datasets belonging to the test set together with the maximum intensity projection of the filter response.

5.4. Voxel-Wise Classification

Especially in the field of biomedical imaging, the third dimension becomes more and more important due to the fact that organisms can be studied in their natural constellation. Objects and organisms can be located in any number at any position and, much more challenging, in any orientation. The third dimension does not only lead to larger datasets, but the interrelation of neighboring intensity values also becomes more complex. With a fast voxel-wise transformation of volumetric images into the harmonic domain, we are capable of computing rotation invariant image descriptors in an analytical way. In [6, 7], we used a fast Gabor transform to locally analyze images by decomposing local image patches into basic frequency components. For the experiments, we used confocal recordings of Arabidopsis root tips. As an example, we aimed at detecting differentiated cells located in the root cap, which morphologically differ from the other cells by their nonroundish shape. For this experiment, two datasets were used: one dataset for training and one dataset for evaluation. All cells (about 3600 in each root) were manually labeled by an expert. We transformed the Gabor expansion coefficients into invariant features utilizing the spherical tensor product; we combine the expansion coefficients corresponding to the same angular frequency, but not necessarily the same radial frequencies, obtaining rotation invariant image descriptors. It is worth mentioning that the combination of an expansion coefficient with itself coincides with the power spectrum. In Figure 5(a), we depict the center slice of the training data together with the training samples. Based on the rotation invariant image descriptors representing the training samples, an SVM classifier is trained. We used the SVM to classify the test set in a voxel-by-voxel manner (Figures 5(b) and 5(c)), classifying each voxel into root-cap cell or non-root-cap cell. For further details regarding the experiment, we refer to [6, 7].
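The power-spectrum construction mentioned above reduces, for coefficients of the same angular frequency combined with themselves, to summing squared magnitudes per band; a minimal sketch:

```python
import numpy as np

def sh_power_spectrum(coeffs):
    """Rotation-invariant power spectrum of spherical harmonic coefficients.

    coeffs : list where coeffs[l] is the complex vector (a_{l,-l}, ..., a_{l,l}).
    Returns p_l = sum_m |a_{lm}|^2 per band l; a rotation mixes the m-components
    within a band unitarily, so each p_l is invariant.
    """
    return np.array([np.sum(np.abs(a) ** 2) for a in coeffs])

coeffs = [np.random.rand(2 * l + 1) + 1j * np.random.rand(2 * l + 1)
          for l in range(4)]
print(sh_power_spectrum(coeffs))
```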

5.5. DTI Processing

Diffusion weighted magnetic resonance imaging (DWI) plays a substantial role in neuroscience and clinical applications. One field of interest is the investigation of the neuronal fiber architecture located in the brain white matter connecting different regions of the brain. The fibers themselves cannot be recorded directly. However, the data is usually recorded using the high angular resolution diffusion imaging (HARDI) technique [21], a specific kind of diffusion tensor imaging (DTI) technique. The resulting signal is an angular dependent, volumetric image. From such an image representation, the fiber architecture can be estimated (e.g., see [22]). Due to the angular dependency of HARDI signals, spherical harmonics are a common tool for signal representation. Therefore, in the context of DTI, there exist several applications in which spherical tensor algebra is worth considering.

5.5.1. Tissue Classification

For the analysis of the fiber structure, a preprocessing step that identifies the brain white matter within the image is required. For group studies, the parcellation of the human brain into anatomical regions is of great interest. Preliminary results have been published in conference papers [23, 24].

We utilize the fact that the given recordings are tensor valued. We first transform the local measurements into the spherical harmonic domain (e.g., see [25]). Based on these rotation covariant image representations, we compute voxel-wise rotation invariant image features.

This is done by first compressing each voxel's surrounding using the spherical down-derivative operators. This can be seen as a kind of Taylor expansion of the given data. Then, we compute rotation invariant image features by computing the power spectrum of the resulting expansion coefficients. We finally use a random forest classifier [26] to learn the appearance of different kinds of brain regions and tissue types based on labeled training images. Such a parcellation might comprise, for example, gray brain matter, white brain matter, and background signal. Qualitative results showing the resulting decisions of the random forest on an unclassified image are shown for the gray matter/white matter scenario in Figure 7. Furthermore, the votes for a certain class can be used as a kind of evidence value in further processing steps. Examples for the three classes background, white matter, and gray matter are depicted in Figure 8.
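As an illustration of this classification step, a sketch with scikit-learn's `RandomForestClassifier`; the feature dimension, class labels, and random data below are placeholders, not the paper's actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical shapes: X_train holds one rotation-invariant feature vector per
# labeled voxel, y_train the tissue label (0 = background, 1 = white matter,
# 2 = gray matter).
X_train = np.random.rand(1000, 32)          # placeholder feature vectors
y_train = np.random.randint(0, 3, 1000)    # placeholder labels

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)

X_test = np.random.rand(500, 32)
proba = clf.predict_proba(X_test)   # per-class votes, usable as evidence maps
labels = clf.predict(X_test)        # hard voxel-wise decisions
```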

In Figures 9 and 10, we show the probability map of different kinds of brain regions that have been detected within unlabeled test images via a random forest classifier. Figure 6 shows final predictions for one of the test sets.

5.5.2. Unique Point-Landmark Detection

Group studies often require the coregistration of images or partial image structures of different individuals. In such applications, the detection of characteristic landmarks is often an indispensable prerequisite.

Similar to [27], where features are used to find correspondences in scalar-valued MR contrasts, we used tensor-based features in [28] offering a unique signature of a voxel's surrounding in tensor-valued HARDI signals. Thanks to these features, a large number of corresponding points can be reliably found in images of different individuals using a linear classifier. The features are computed in three steps. (1) We first fit the entire HARDI signal to spherical harmonics. (2) The resulting fields are then efficiently expanded in terms of tensorial harmonics (Section 4) via tensor derivatives (see Section 3). (3) We obtain new covariant feature images which we use to form a trainable filter (see Section 5.3). The filter is used for the landmark detection task.

Second-order features, which are sufficient for most applications, do not provide enough information to solve the detection task in a human brain: they are invariant against reflections about an axis and hence cannot distinguish the left and the right hemisphere. It is known that the spherical triple-correlation [29] yields complete rotation invariant features, which can solve this issue. Based on this idea, we designed new 3rd-order rotation invariant differential features fitting into our framework that are variant with respect to reflections about an axis. These features are additionally included in the harmonic filter framework. The triple product combines three of the local tensorial harmonic expansion coefficients via spherical tensor products. A proof can be found in [28].

The resulting filter has shown very promising results on a training set of 7 and a test set of 14 images. For the experiment, we placed about 20000 landmarks within the brain gray and white matter in an equidistant manner. For each dataset, the computation of the features and the detection of all landmarks took about 5 minutes. We show some detection results in Figures 11, 12, 13, and 14.

Appendices

A. Spherical Harmonic Functions

The Schmidt seminormalized spherical harmonics are defined by
$$Y_m^j(\theta, \phi) = \sqrt{\frac{(j - m)!}{(j + m)!}} \, P_j^m(\cos\theta) \, e^{im\phi},$$
where $P_j^m$ are the associated Legendre polynomials of degree $j$ and order $m$ [30]. The spherical harmonics build a complete orthogonal basis for functions on the 2-sphere.

B. Clebsch-Gordan Coefficients

Orthogonality:
$$\sum_{m_1, m_2} \langle jm \mid j_1 m_1, j_2 m_2 \rangle \langle j'm' \mid j_1 m_1, j_2 m_2 \rangle = \delta_{jj'} \delta_{mm'}, \qquad \sum_{j, m} \langle jm \mid j_1 m_1, j_2 m_2 \rangle \langle jm \mid j_1 m_1', j_2 m_2' \rangle = \delta_{m_1 m_1'} \delta_{m_2 m_2'}.$$
Special values:
$$\langle 00 \mid j_1 m_1, j_2 m_2 \rangle = \frac{(-1)^{j_1 - m_1}}{\sqrt{2j_1 + 1}} \, \delta_{j_1 j_2} \delta_{m_1, -m_2}, \qquad \langle jm \mid j_1 m_1, 00 \rangle = \delta_{jj_1} \delta_{mm_1}.$$
Symmetry:
$$\langle jm \mid j_1 m_1, j_2 m_2 \rangle = (-1)^{j_1 + j_2 - j} \langle jm \mid j_2 m_2, j_1 m_1 \rangle = (-1)^{j_1 + j_2 - j} \langle j, -m \mid j_1, -m_1, j_2, -m_2 \rangle.$$

C. Wigner D-Matrix

The components of $\mathbf{D}_g^j$ are written $D_{mm'}^j$ and are called the Wigner D-matrix elements. In Euler angles $(\phi, \theta, \psi)$ in ZYZ-convention, we have
$$D_{mm'}^j(\phi, \theta, \psi) = e^{-im\phi} \, d_{mm'}^j(\theta) \, e^{-im'\psi},$$
where $d_{mm'}^j$ is the Wigner d-matrix, which is real-valued. The relation to the Clebsch-Gordan coefficients is given by the coupling rule
$$D_{mm'}^j = \sum_{\substack{m_1 + m_2 = m \\ m_1' + m_2' = m'}} \langle jm \mid j_1 m_1, j_2 m_2 \rangle \langle jm' \mid j_1 m_1', j_2 m_2' \rangle \, D_{m_1 m_1'}^{j_1} D_{m_2 m_2'}^{j_2}.$$

D. Spherical Bessel Functions

The spherical Bessel functions $j_n$ are related to the Bessel functions $J_\nu$ of the first kind (e.g., see [30]) by
$$j_n(r) = \sqrt{\frac{\pi}{2r}} \, J_{n + 1/2}(r)$$
and are represented by the expansion
$$j_n(r) = 2^n r^n \sum_{s=0}^{\infty} \frac{(-1)^s (s + n)!}{s! \, (2s + 2n + 1)!} \, r^{2s}.$$
For the spherical Bessel functions, we have the following differential relations [30]:
$$\frac{d}{dr} j_n(r) = j_{n-1}(r) - \frac{n+1}{r} j_n(r) = \frac{n}{r} j_n(r) - j_{n+1}(r).$$
The Hankel transform [31] (also known as the Fourier-Bessel transform) of order $n$ in terms of the spherical Bessel functions is given by
$$\tilde{f}(k) = \sqrt{\frac{2}{\pi}} \int_0^\infty f(r) \, j_n(kr) \, r^2 \, dr,$$
and its corresponding inverse transformation is given by
$$f(r) = \sqrt{\frac{2}{\pi}} \int_0^\infty \tilde{f}(k) \, j_n(kr) \, k^2 \, dk,$$
which both are directly a result of (D.2).
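The differential relations can be checked numerically with scipy's `spherical_jn`; the recurrence used in the assertion is the standard one quoted above:

```python
import numpy as np
from scipy.special import spherical_jn

r = np.linspace(1e-6, 20.0, 200)
j2 = spherical_jn(2, r)                     # j_2(r)
dj2 = spherical_jn(2, r, derivative=True)   # d/dr j_2(r)

# Sanity check of j_n'(r) = j_{n-1}(r) - (n+1)/r * j_n(r) for n = 2
assert np.allclose(dj2, spherical_jn(1, r) - 3.0 / r * j2, atol=1e-10)
```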

E. Plane Wave

Using the addition theorem of the spherical harmonics, we can express the spherical expansion of the plane wave (e.g., see [3, page 136]) in terms of the tensor product, leading to
$$e^{i\mathbf{k}^T\mathbf{r}} = \sum_{j=0}^{\infty} i^j (2j+1) \, j_j(kr) \, P_j(\mathbf{n}_k^T \mathbf{n}_r) = \sum_{j=0}^{\infty} i^j (2j+1) \, j_j(kr) \, \mathbf{Y}^j(\mathbf{n}_k)^\dagger \mathbf{Y}^j(\mathbf{n}_r),$$
where $P_j$ are the Legendre polynomials [30] of order $j$, $\mathbf{n}_k$ and $\mathbf{n}_r$ are the unit vectors along $\mathbf{k}$ and $\mathbf{r}$, and $\mathbf{Y}^j$ are the semi-Schmidt normalized spherical harmonics written as a vector.

F. Associated Laguerre Polynomials

The associated Laguerre polynomials [30] are defined by
$$L_n^{(\alpha)}(x) = \frac{x^{-\alpha} e^x}{n!} \frac{d^n}{dx^n}\left(e^{-x} x^{n + \alpha}\right).$$
The following 3-point rule [30] is used in this work:
$$L_n^{(\alpha)}(x) = \left(2 + \frac{\alpha - 1 - x}{n}\right) L_{n-1}^{(\alpha)}(x) - \left(1 + \frac{\alpha - 1}{n}\right) L_{n-2}^{(\alpha)}(x).$$
We further need the following differential equation [30]:
$$x \frac{d^2}{dx^2} L_n^{(\alpha)}(x) + (\alpha + 1 - x) \frac{d}{dx} L_n^{(\alpha)}(x) + n L_n^{(\alpha)}(x) = 0.$$
The polynomials $L_n^{(\alpha)}$ and $L_{n'}^{(\alpha)}$ are orthogonal over $[0, \infty)$ with respect to the weighting function $x^\alpha e^{-x}$:
$$\int_0^\infty x^\alpha e^{-x} L_n^{(\alpha)}(x) L_{n'}^{(\alpha)}(x) \, dx = \frac{\Gamma(n + \alpha + 1)}{n!} \, \delta_{nn'}.$$
For positive integers $n$, we have the following relation between the Gamma function and the double factorial [32, 33]:
$$\Gamma\left(n + \frac{1}{2}\right) = \frac{(2n - 1)!!}{2^n} \sqrt{\pi}.$$

Acknowledgments

M. Reisert and H. Skibbe are indebted to the Baden-Württemberg Stiftung for the financial support of this research project by the Eliteprogramme for Postdocs. This work was partly supported by “Bioinformatics for Brain Sciences” under the Strategic Research Program for Brain Sciences of the Ministry of Education, Culture, Sports, Science, and Technology of Japan (MEXT).