
Computational and Mathematical Methods in Medicine

Volume 2013 (2013), Article ID 196259, 9 pages

http://dx.doi.org/10.1155/2013/196259

## Particle System Based Adaptive Sampling on Spherical Parameter Space to Improve the MDL Method for Construction of Statistical Shape Models

^{1}Graduate School of Medicine, Yamaguchi University, Tokiwadai 2-16-1, Ube, Yamaguchi 755-8611, Japan
^{2}Graduate School of Medicine, Gifu University, Yanagito 1-1, Gifu 501-1194, Japan
^{3}Information Science and Technology Department, Oshima National College of Maritime Technology, Komatsu 1091-1, Oshimagun Suooshimacho, Yamaguchi 742-2193, Japan

Received 16 January 2013; Revised 1 May 2013; Accepted 20 May 2013

Academic Editor: Yi-Hong Chou

Copyright © 2013 Rui Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Minimum description length (MDL) based group-wise registration is a state-of-the-art method for determining the corresponding points of 3D shapes in the construction of statistical shape models (SSMs). However, it suffers from the problem that the determined corresponding points do not spread uniformly on the original shapes, since they are obtained by uniformly sampling each aligned shape on the parameter space of the unit sphere. We propose a particle-system based method that finds adaptive sampling positions on the unit sphere to resolve this problem. A set of particles is placed on the unit sphere to construct a particle system whose energy is related to the distortions of the parameterized meshes. By minimizing this energy, the particles are moved on the unit sphere. When the system becomes steady, the particles are treated as vertices to build a spherical mesh, which is then relaxed to slightly adjust the vertices and obtain the optimal sampling positions. We used 47 cases of (left and right) lungs and 50 cases of livers, (left and right) kidneys, and spleens for evaluation. Experiments showed that the proposed method resolves the problem of the original MDL method and performs better in the generalization and specificity tests.

#### 1. Introduction

Statistical shape models (SSMs) are an efficient tool that exploits the statistical information of a set of training shapes to improve the robustness of medical image processing algorithms. They have been widely used in segmentation tasks and have achieved good results [1]. It is reported that the three best-ranked liver segmentation algorithms in the contest held in 2007 were based on SSMs [2]. To our knowledge, several kinds of SSMs have been proposed, including point distribution models (PDMs) [3], m-reps [4], and SPHARMs [5, 6]. We focus on PDMs in this paper since they are the most widely used SSMs.

The main problem in the construction of SSMs is how to determine corresponding points on each shape. For simple 2D shapes, such as faces, corresponding points can be determined manually by placing a set of landmarks on the shapes; the obvious disadvantages, however, are the heavy manual labor and the inevitable subjective bias. For the complex 3D shapes of clinical data in particular, manual placement of hundreds or thousands of landmarks is not practical. Therefore, much research has focused on automatic ways to find corresponding points [1]. The determination of shape correspondence can be formulated as a shape registration problem. A group-wise method was proposed to align a group of shapes according to an information-theoretic criterion, the minimum description length (MDL) [7], in order to obtain the simplest description of the constructed SSMs [8]. The MDL based method is reported to be the state of the art according to exhaustive evaluations [9]. At first, the MDL method could only find the corresponding points of 2D shapes [8, 10, 11]. More recently, it was generalized to handle 3D shapes [12–14]. We follow the pilot study [12], since its open source code helped us to understand its essence [15]. In the following, we call the pilot study [12] the original MDL method.

The original MDL method can be divided into three steps. First, each original surface mesh of genus-0 topology is mapped onto the parameter space of a unit sphere by a conformal mapping [16]. Each vertex of an original surface mesh is transformed onto the unit sphere, and the new spherical mesh built from the transformed vertices according to their original connectivity is called a parameterized mesh in this paper. Second, the parameterized meshes are aligned by manipulating their vertices simultaneously on the unit sphere so as to minimize the MDL measure. Finally, each aligned parameterized mesh is sampled at a set of uniformly distributed points on the unit sphere, and the sampled points are mapped back onto each original shape surface by the corresponding inverse conformal mapping to obtain corresponding points. Mapping surfaces onto a unit sphere makes shape registration easier, since the degrees of freedom of translation and scale are eliminated and only rotation and inter-subject differences remain. However, the manipulation of the parameterized meshes distorts them during MDL registration: some triangles are shrunk while others are expanded. This degrades the predetermined conformal mapping. If uniformly distributed positions on the unit sphere are sampled for each parameterized mesh, fewer sampling points fall in the shrunk regions and more in the expanded regions. Therefore, this method suffers from the problem that the determined corresponding points are densely located in some parts of the original surfaces and sparsely in others.

This problem may be resolved either by redetermining a new conformal mapping for each shape to replace the degraded one, or by finding adaptive sampling positions on the unit sphere instead of sampling uniformly. In this paper, we take the latter approach: we propose a method to obtain adaptive sampling positions on the unit sphere by considering the distortions of the parameterized meshes. The proposed method is based on a particle system originally adopted for modeling isosurfaces of shapes in [17].

#### 2. Materials and Methods

The proposed particle-system based method operates after the original MDL registration has finished; it can therefore be seen as a postprocessing step for the pilot study [12]. The flowchart of the proposed method is given in Figure 1. The input is a set of parameterized meshes aligned on the unit sphere by the original MDL method [12]. The vertices of these meshes are used to estimate a probability distribution describing how frequently vertices occur at each position on the unit sphere; this distribution reflects how the parameterized meshes were distorted during the MDL registration. Next, a set of particles is placed uniformly on the unit sphere. A particle can be seen as a dot or a sampling position. Each particle exerts repulsive forces that push its surrounding particles away. The magnitude of the force for a pair of particles depends both on their distance and on the vertex probability at the particles' locations. An energy function is obtained by summing the forces over all particles. By minimizing this energy, the particles are moved on the unit sphere until the energy becomes steady. The final particles are treated as vertices to build a spherical mesh. Finally, this mesh is relaxed to slightly adjust the vertex positions on the spherical surface, and the optimal sampling points are the vertices of the relaxed mesh. When the proposed method has finished, each parameterized mesh is sampled at these positions on the unit sphere, and the sampled points are transformed back to each organ surface by its inverse conformal mapping to obtain the corresponding points of each shape. The details are described in the following subsections.

##### 2.1. Probability Distribution for Vertices of Parameterized Meshes

We use the probability distribution of the vertices of the parameterized meshes on the unit sphere to describe how these meshes were distorted in the original MDL registration. If the triangles of the parameterized meshes are shrunk in a region, more vertices fall in that region, so vertices have a high probability of occurring there. Conversely, the probability is low in regions where triangles are expanded. We adopt a Parzen window [18] based method to estimate this probability distribution. Let $\mathbf{v}_{ij}$ denote the $j$th vertex of the $i$th parameterized mesh. The probability of vertices occurring at a location $\mathbf{x}$ on the unit sphere is calculated by

$$p(\mathbf{x}) = C \sum_{i} \sum_{j} G\bigl(d(\mathbf{x}, \mathbf{v}_{ij})\bigr), \tag{1}$$

where $C$ is a coefficient ensuring that $p$ sums to one over the sphere and $d(\mathbf{x}, \mathbf{v}_{ij})$ is the Euclidean distance of the two points $\mathbf{x}$ and $\mathbf{v}_{ij}$. $G(\cdot)$ is a truncated Gaussian kernel defined by

$$G(d) = \begin{cases} \exp\left(-\dfrac{d^2}{2\sigma^2}\right), & d \le 3\sigma, \\ 0, & \text{otherwise,} \end{cases} \tag{2}$$

where $\sigma$ is set to 0.033 in this paper.

In our implementation, we discretize the unit sphere by a 5-times recursively refined icosahedron mesh, which contains 10242 vertices uniformly distributed on the unit sphere. An example of the estimated probability distribution is shown in Figure 2.
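
As an illustration, the estimate above can be sketched in a few lines of NumPy. The function name and the brute-force pairwise evaluation are our own, and the 3σ truncation radius follows the kernel definition above:

```python
import numpy as np

def vertex_density(sphere_points, mesh_vertices, sigma=0.033):
    """Parzen-window estimate of vertex probability on a discretized unit sphere.

    sphere_points : (P, 3) unit vectors discretizing the sphere (e.g. the
                    10242 vertices of a 5-times refined icosahedron).
    mesh_vertices : (V, 3) stacked unit-vector vertices of all aligned
                    parameterized meshes.
    Returns a (P,) array of probabilities summing to 1.
    """
    # Pairwise Euclidean distances between sampling positions and vertices.
    d = np.linalg.norm(sphere_points[:, None, :] - mesh_vertices[None, :, :],
                       axis=2)
    # Truncated Gaussian kernel: zero beyond the 3-sigma cutoff.
    k = np.where(d <= 3.0 * sigma, np.exp(-(d ** 2) / (2.0 * sigma ** 2)), 0.0)
    p = k.sum(axis=1)
    return p / p.sum()  # the normalization plays the role of the coefficient C
```

The brute-force distance matrix is O(PV); for 10242 sampling positions a spatial index (e.g. a KD-tree over the truncated support) would reduce the cost.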

##### 2.2. Energy of the Particle-System

The energy of the particle system is defined by

$$E = \sum_{i=1}^{n} E_i, \qquad E_i = \sum_{j \ne i} w_{ij} E_{ij}, \tag{3}$$

where $n$ is the number of particles, $E_i$ is the energy of the $i$th particle, $E_{ij}$ is the force term of the $i$th and $j$th particles, and $w_{ij}$ is the weight associated with the pair. We adopt the force term suggested in [17], defined by

$$E_{ij} = \begin{cases} \cot\left(\dfrac{\pi r_{ij}}{2\sigma_E}\right) + \dfrac{\pi r_{ij}}{2\sigma_E} - \dfrac{\pi}{2}, & r_{ij} \le \sigma_E, \\ 0, & \text{otherwise,} \end{cases} \tag{4}$$

where $r_{ij}$ is the Euclidean distance of the pair of particles $i$ and $j$, and $\sigma_E$ is a parameter. A larger value of $\sigma_E$ means that a particle receives forces from more surrounding particles in a larger region, so more computation is required; however, the value must be large enough to make the particles move. In our system, we choose $\sigma_E = 0.25$ as a trade-off between performance and computation cost.
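
The pairwise energy can be sketched as follows, assuming the compact cotangent kernel of [17] (our reading of that paper); the function names and the O(n²) double loop are illustrative only:

```python
import numpy as np

def pair_energy(r, sigma_e=0.25):
    """Cotangent repulsion potential between two particles at Euclidean
    distance r: smooth, monotonically decreasing, and exactly zero for
    r >= sigma_e, so distant pairs contribute nothing."""
    if r >= sigma_e:
        return 0.0
    x = (np.pi / 2.0) * (r / sigma_e)  # maps (0, sigma_e) onto (0, pi/2)
    return 1.0 / np.tan(x) + x - np.pi / 2.0  # cot(x) + x - pi/2

def system_energy(particles, weights, sigma_e=0.25):
    """Total energy: weighted sum of pair terms over all ordered pairs."""
    n = len(particles)
    e = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(particles[i] - particles[j])
                e += weights[i, j] * pair_energy(r, sigma_e)
    return e
```

Because the kernel has compact support, in practice only neighbors within σ_E of each particle need to be visited, which is what makes larger σ_E values more expensive.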

It is important to design suitable weight terms to make the particle system work properly. If all weight terms are equal, the particles finally spread uniformly on the unit sphere when the energy is minimized. Here, we make use of the estimated probability distribution of the vertices to design weight terms that yield optimal sampling positions adaptive to the distortions of the parameterized meshes. The weight term

$$w_{ij} = w\bigl(p(\mathbf{s}_i), p(\mathbf{s}_j)\bigr) \tag{5}$$

is a function of the estimated vertex probabilities at the particles' locations, where $\mathbf{s}_i$ is the position of the $i$th particle and $p(\mathbf{s}_i)$ is the estimated probability of vertices at $\mathbf{s}_i$. The higher the vertex probability at a pair of particles, the smaller the weight, so that the repulsion weakens and the particles pack more densely where the parameterized meshes are shrunk. The weight function is controlled by three parameters, the third of which is set to 2.2 in our experiments.

##### 2.3. Minimization of the Energy of the Particle-System

The optimal particle positions are obtained by minimizing the energy defined in (3), which is an optimization problem. Considering that the number of particles is large, we adopt an iterative gradient descent method to converge faster to a local minimum. We update only one particle's position along the negative gradient direction at a time, and this procedure is applied to each particle in turn; one iteration is completed when all particles have been updated. The pseudocode is given in Algorithm 1. The initialization requires a set of uniformly distributed particles $\mathbf{s}_i$ on the unit sphere, the same maximal step length $\lambda_{\max}$ for each particle's gradient descent, the minimal step length $\lambda_{\min}$, and the total number of iterations $T$. When we update the $i$th particle's position in the $k$th iteration, we first compute a candidate position using the current step length $\lambda_i$ for the $i$th particle along the negative gradient direction. By normalizing the candidate position to unit length ($\|\mathbf{s}\| = 1$), we ensure that the new position lies on the surface of the unit sphere. Then, we compute the new energy obtained by replacing $\mathbf{s}_i$ with the candidate position. If the new energy is smaller than the former energy, we accept the candidate as the new position of the $i$th particle in the $k$th iteration, update the energy, and end the loop. If not, we halve the step length $\lambda_i$ and try a new candidate position to see whether it decreases the energy. However, the step length must not become smaller than $\lambda_{\min}$; in that case, we give up updating the position of the $i$th particle in the $k$th iteration and simply set its step length to $\lambda_{\min}$.

In our experiments, we set the number of iterations to 15. $\lambda_{\max}$ and $\lambda_{\min}$ are determined as follows. Before Algorithm 1 begins, we randomly choose 10 percent of the particles and calculate the mean value of the norms (lengths) of their gradients, denoted by $\bar{g}$. $\lambda_{\max}$ and $\lambda_{\min}$ are then set as fixed multiples of $\bar{g}$.
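
The per-particle descent with step halving can be sketched as follows. The finite-difference gradient and the default step lengths are our substitutes for details not fully recoverable from the text; the paper derives the maximal and minimal steps from the mean gradient norm of a 10% particle sample:

```python
import numpy as np

def minimize_on_sphere(particles, energy_fn, n_iter=15,
                       lam_max=0.1, lam_min=1e-4):
    """Per-particle gradient descent with step halving, keeping every
    particle on the unit sphere. energy_fn maps an (n, 3) array of
    particle positions to a scalar energy."""
    p = particles.copy()
    lam = np.full(len(p), lam_max)  # one step length per particle
    eps = 1e-6                      # finite-difference step

    def gradient(i):
        g = np.zeros(3)
        base = energy_fn(p)
        for a in range(3):
            q = p.copy()
            q[i, a] += eps
            q[i] /= np.linalg.norm(q[i])  # stay on the sphere
            g[a] = (energy_fn(q) - base) / eps
        return g

    for _ in range(n_iter):
        for i in range(len(p)):
            g = gradient(i)
            e_old = energy_fn(p)
            while lam[i] >= lam_min:
                cand = p[i] - lam[i] * g
                cand /= np.linalg.norm(cand)  # project back to the sphere
                trial = p.copy()
                trial[i] = cand
                if energy_fn(trial) < e_old:  # accept only if energy drops
                    p[i] = cand
                    break
                lam[i] *= 0.5                 # otherwise halve and retry
            else:
                lam[i] = lam_min              # give up for this iteration
    return p
```

Since a move is accepted only when the energy strictly decreases, the total energy is non-increasing across iterations by construction.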

##### 2.4. Construction of Sampling Mesh and Relaxation

When Algorithm 1 has finished, the obtained particle positions are treated as vertices and linked by Delaunay triangles to build a spherical mesh. This mesh specifies how the sampling points (particles) are connected, and this connectivity information is required by the SSMs. Additionally, we use it to slightly adjust the particles on the spherical surface to obtain the optimal sampling positions. According to [19], the Delaunay triangulation of a spherical surface can be obtained by constructing the convex hull of the set of 3D points. Constructing triangles directly in this way leads to a locally nonuniform mesh. This can be improved by mesh relaxation, which slightly adjusts vertex positions according to their connectivity. Here, we adopt the method suggested in [20] to optimize the constructed mesh. Finally, the vertex positions of the relaxed mesh are the optimal sampling positions on the unit sphere.
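
Given the triangulation (obtainable as the convex hull of the points, since for points on a sphere the hull triangulation is the Delaunay triangulation [19]), the relaxation step can be sketched as a simple Laplacian smoothing with reprojection onto the sphere. This is a stand-in for the method of [20], and all names are our own:

```python
import numpy as np

def relax(points, faces, n_iter=10, step=0.5):
    """Laplacian relaxation of a spherical mesh: move each vertex toward the
    centroid of its neighbors, then project back onto the unit sphere."""
    p = points.copy()
    # Build vertex adjacency from the triangle list.
    nbrs = [set() for _ in range(len(p))]
    for a, b, c in faces:
        nbrs[a] |= {b, c}
        nbrs[b] |= {a, c}
        nbrs[c] |= {a, b}
    for _ in range(n_iter):
        q = p.copy()
        for i, ns in enumerate(nbrs):
            if ns:
                centroid = p[list(ns)].mean(axis=0)
                q[i] = p[i] + step * (centroid - p[i])
                q[i] /= np.linalg.norm(q[i])  # back onto the sphere
        p = q
    return p
```

On a perfectly symmetric mesh (e.g. an octahedron) the relaxation is a fixed point, which makes a convenient sanity check; on a locally nonuniform mesh it evens out triangle sizes.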

#### 3. Results and Discussion

##### 3.1. Data

We evaluate the proposed method on six kinds of human organs: livers, left and right lungs, left and right kidneys, and spleens. We collected 47 cases of lungs and 50 cases of the other organs from different subjects under a practical clinical protocol using high-resolution CT (HRCT) with a slice thickness of 1 mm and an in-plane resolution of around 0.6 mm. The organ regions were manually labeled on the original scans and then resampled and saved as binary masks with a resolution of 1 mm × 1 mm × 1 mm. Finally, marching cubes followed by mesh decimation using the Visualization Toolkit (VTK) [21] was applied to obtain a triangular mesh for each organ. The number of vertices of each mesh depends on the organ's size: about 3000–5000 vertices for livers and lungs, and 1000–2000 vertices for kidneys and spleens. These meshes are used as the inputs for the evaluations.

##### 3.2. Optimal Sampling Positions Obtained by the Proposed Method

Algorithm 1 requires a given number of uniformly distributed points (particles) on the unit sphere for its initialization. This looks like a strict and difficult condition; however, we can obtain any given number of such points with a small change to Algorithm 1: setting all weight terms $w_{ij}$ in (3) to an equal value (e.g., $w_{ij} = 1$). For example, 1000 points uniformly distributed on the unit sphere can be obtained as follows. We make use of a 4-times iteratively refined icosahedron mesh, which contains 2562 uniformly distributed vertices on the unit sphere. First, we discard the mesh connectivity and randomly select 1000 vertices as the initial particle positions. Then we set the weight terms to 1 and run Algorithm 1. When the iterative procedure has finished, we obtain 1000 points uniformly distributed on the unit sphere. This procedure is illustrated in Figure 3. Other methods could also be used for the initialization, such as randomly selecting 1000 points on the unit sphere.
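
The recursively refined icosahedron used for initialization can be sketched as follows; this is the standard icosphere construction, with the usual vertex and face tables rather than anything taken from the paper:

```python
import numpy as np

def icosahedron():
    """Vertices (unit length) and faces of an icosahedron."""
    t = (1.0 + 5.0 ** 0.5) / 2.0
    v = np.array([[-1, t, 0], [1, t, 0], [-1, -t, 0], [1, -t, 0],
                  [0, -1, t], [0, 1, t], [0, -1, -t], [0, 1, -t],
                  [t, 0, -1], [t, 0, 1], [-t, 0, -1], [-t, 0, 1]], float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    f = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
         (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
         (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
         (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return v, f

def refine(v, f):
    """One level of midpoint subdivision; midpoints are pushed to the sphere."""
    v = list(map(tuple, v))
    cache = {}
    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in cache:
            m = (np.array(v[a]) + np.array(v[b])) / 2.0
            m /= np.linalg.norm(m)       # project midpoint onto the sphere
            cache[key] = len(v)
            v.append(tuple(m))
        return cache[key]
    nf = []
    for a, b, c in f:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        nf += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return np.array(v), nf
```

Starting from 12 vertices, successive refinements give 42, 162, 642, 2562, and 10242 vertices; sampling 1000 of the 2562 fourth-level vertices without replacement gives the initialization described above.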

The uniformly distributed points are treated as the initial positions for the particle system to obtain the optimal sampling positions. By setting the weight terms in (3) according to (5), Algorithm 1 makes the final particle locations adaptive to the vertex probability of the parameterized meshes. Figure 4 gives examples showing the iterative manipulation of 1000 particles on the unit sphere according to Algorithm 1.

When the iterative procedure has finished, the obtained particles are treated as vertices to build a spherical mesh by Delaunay triangulation. The left panel of Figure 5 shows the constructed Delaunay mesh on the unit sphere; it can be seen that the triangles in local regions are not uniform. The mesh improved by relaxation is shown in the middle panel of Figure 5. The final optimal sampling positions, overlaid on the probability distribution of vertices, are shown in the right panel of Figure 5.

##### 3.3. Corresponding Points and Evaluations on SSMs

We evaluate the performance of the proposed method and the original MDL method on the six kinds of human organs. Examples of the corresponding points determined by the two methods are given in Figure 6. For the original MDL method, the corresponding points gather in some regions of the organ surfaces; in particular, there are nearly no corresponding points in the regions indicated by the red circles. The proposed method corrects this problem: its corresponding points are almost uniformly located on the organ surfaces.

We build two SSMs based on the corresponding points determined by the two methods and compare their performance by the generalization and specificity tests, the most widely used evaluation methods for SSMs, as suggested in [22]. The generalization test is based on leave-one-out experiments, where all shapes but one are used to train the SSM and the remaining shape is used to test whether it can be represented by the trained model. The distance between the untrained shape and its reconstruction by the SSM is calculated to measure the difference. Low values indicate that the constructed SSM generalizes well to untrained shapes. The measure of generalization is defined by

$$G(M) = \frac{1}{N} \sum_{i=1}^{N} D\bigl(S_i, S_i'(M)\bigr), \tag{6}$$

where $N$ is the total number of shapes, $S_i$ is the $i$th untrained shape, $S_i'(M)$ is the corresponding shape reconstructed by the SSM using the $M$ largest eigenvectors, and $D(\cdot, \cdot)$ is the distance of the two shapes, defined in (8).
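
Under the usual PDM construction (PCA on stacked corresponding points), the leave-one-out generalization measure can be sketched as below. For brevity this sketch scores reconstructions with a point-to-point RMS error rather than the correspondence-free distance the paper uses, and all names are ours:

```python
import numpy as np

def generalization(shapes, M):
    """Leave-one-out generalization measure for a PCA shape model.

    shapes : (N, 3K) matrix, one row per training shape (K corresponding
             points, flattened).
    M      : number of principal modes retained.
    """
    N = len(shapes)
    errs = []
    for i in range(N):
        train = np.delete(shapes, i, axis=0)
        mean = train.mean(axis=0)
        X = train - mean
        # PCA via SVD; rows of Vt are the principal directions.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        P = Vt[:M].T                   # (3K, M) orthonormal basis
        b = P.T @ (shapes[i] - mean)   # project the left-out shape
        recon = mean + P @ b           # reconstruct with M modes
        errs.append(np.sqrt(np.mean((recon - shapes[i]) ** 2)))
    return float(np.mean(errs))
```

Because the retained bases are nested, the per-shape projection error cannot grow as M increases, so the measure is non-increasing in M.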

In the specificity test, the constructed SSMs are used to synthesize a large number of shapes. Each synthesized shape is compared with the training shapes to find the closest one. The mean distance between the synthesized shapes and their closest training shapes is used as the measure of specificity. Low values indicate that the trained SSM is specific to the training shapes. The measure of specificity is defined by

$$S(M) = \frac{1}{N_s} \sum_{j=1}^{N_s} \min_{S \in \mathcal{T}} D\bigl(\tilde{S}_j(M), S\bigr), \tag{7}$$

where $N_s$ is the number of shapes synthesized by the SSM using the $M$ largest eigenvectors, $\tilde{S}_j(M)$ is the $j$th synthesized shape, $\mathcal{T}$ is the set of training shapes used to build the SSM, and $D(\cdot, \cdot)$ is the distance of two shapes, defined in (8). In the experiments, we synthesize 10000 shapes for the specificity test.

The shape distances in the generalization and specificity tests are calculated by (8), so that the comparison of two shapes is not restricted to corresponding points [23]:

$$D(S_a, S_b) = \frac{1}{2} \left( \frac{1}{n_a} \sum_{i=1}^{n_a} \min_{j} d(\mathbf{a}_i, \mathbf{b}_j) + \frac{1}{n_b} \sum_{j=1}^{n_b} \min_{i} d(\mathbf{b}_j, \mathbf{a}_i) \right), \tag{8}$$

where $\mathbf{a}_i$ and $\mathbf{b}_j$ are vertices of the two shapes $S_a$ and $S_b$, respectively, $d(\cdot, \cdot)$ is the Euclidean distance of two vertices, and $n_a$ and $n_b$ are the numbers of vertices of the two shapes.
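
A direct reading of this distance is the symmetric mean nearest-neighbour distance between the two vertex sets; a brute-force sketch (our naming) is:

```python
import numpy as np

def shape_distance(A, B):
    """Symmetric mean nearest-neighbour distance between two vertex sets
    A (nA, 3) and B (nB, 3): each vertex is matched to the closest vertex
    of the other shape, so no point-to-point correspondence is assumed."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # (nA, nB)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

The full distance matrix makes this O(n_a n_b) per shape pair, which is the part of the evaluation that benefits from GPU acceleration.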

Since livers and lungs are larger than kidneys and spleens, we use 1000 corresponding points to build the SSMs of livers and lungs and 500 corresponding points to build the SSMs of kidneys and spleens. Additionally, we use general-purpose computing on graphics processing units (GPGPU) to handle the large computation cost of (8).

Results of the generalization and specificity tests are given in Figure 7. We calculate the mean values and standard deviations of the generalization and specificity measures to compare the two methods. The proposed method gives smaller generalization and specificity measures for all six kinds of organs, which means that it performs better than the original MDL method.

##### 3.4. Discussion

Although it is a good idea to conformally transform each shape onto a unit sphere to eliminate variations in translation and scale before the MDL registration is performed [12], the conformal mappings degrade because the manipulation of the parameterized meshes distorts them during shape alignment. Therefore, uniformly sampling the parameterized meshes after the MDL registration causes the determined corresponding points to be densely located in some parts and sparsely in others. An example of the uniform sampling adopted in the original MDL method is shown in the leftmost panel of Figure 4. It can be seen that there are few sampling positions in regions where the triangles of the parameterized meshes are shrunk (red regions) and relatively more sampling positions in regions where the triangles are expanded (blue regions). This causes the determined corresponding points to be nonuniformly distributed on the original organ surfaces.

In the proposed method, the obtained optimal sampling positions are adaptive to the distortions of the parameterized meshes. The rightmost panel of Figure 4 (with the subtitle “Final”) gives such an example. There are many sampling positions in the red regions, where triangles are shrunk, and relatively few in the blue regions, where triangles are expanded. Therefore, the determined corresponding points spread uniformly on the original organ surfaces even though the conformal mappings degrade after the MDL registration.

The generalization and specificity tests show that SSMs built from the corresponding points determined by the proposed method perform better. Here, we give an example to show the reason. Figure 8 shows the mean shapes of the liver SSMs built from the corresponding points determined by the original MDL method and by the proposed method. The mean shape built by the original MDL method loses detailed information of the liver shape; in particular, it decays in the part indicated by the red circle. In contrast, the mean shape obtained with the proposed method retains more detail. Therefore, SSMs built by the proposed method perform better than those built by the original MDL method.

There appears to be another work that resolves the same problem of the original MDL method [24], where a 2D-dithering based remeshing method was adopted [20, 25]. We reimplemented it but encountered a dense-line problem; an example is shown in Figure 9. Methodologically, the method of [24] must first obtain two adaptive half-spherical meshes separately and then unite them into a whole spherical mesh whose vertices are the optimal sampling positions. If the vertex densities on the margins of the two meshes (through which they are connected) differ, this artifact appears. We consulted all the related literature [20, 23, 24] and found no description of how to keep the two densities equal while satisfying the requirement of the MDL method that the number of sampling positions remain unchanged. Although no such line artifact appears in [24], we do not know how to avoid it. In the proposed method, since we manipulate a set of particles to find the optimal sampling positions directly on the whole unit sphere, no connection of two meshes is required, so the proposed method avoids the line artifact by construction. Additionally, we feel it would not be fair to compare the performance of the proposed method with that of a reimplementation suffering from dense-line artifacts; therefore, we only report what we encountered in the reimplementation of [24] and do not give comparative results.

Additionally, we note that there is a particle and entropy based method for SSMs [26]. The difference is that we apply our particle-based method as a postprocessing step of the MDL method in order to obtain optimal sampling positions on the unit sphere.

#### 4. Conclusion

The MDL based shape registration method proposed in [12] is a state-of-the-art method for determining corresponding points on the surfaces of 3D organs. Since uniformly distributed points on the unit sphere are used to sample the shapes registered in the parameter space, the obtained corresponding points are not uniformly distributed on the original organ surfaces. In this paper, we proposed a particle system based method to find the optimal sampling positions on the unit sphere to resolve this problem. In our method, a set of particles is manipulated on the unit sphere to find the optimal sampling positions by minimizing a carefully designed energy function. We evaluated the proposed method on six kinds of human abdominal and chest organs: livers, left and right lungs, left and right kidneys, and spleens, collecting 47 cases of lungs and 50 cases of the other organs from different subjects. Experiments showed that the proposed method finds optimal sampling positions on the unit sphere and resolves the problem of the original MDL method. Additionally, we compared the proposed method with the original MDL method in the generalization and specificity tests; the results showed that SSMs built by the proposed method perform better. In the future, we will apply the built SSMs to segmentation tasks of human organs.

#### Acknowledgment

This research was supported by MEXT KAKENHI Grant No. 21103008.

#### References

1. T. Heimann and H.-P. Meinzer, “Statistical shape models for 3D medical image segmentation: a review,” *Medical Image Analysis*, vol. 13, no. 4, pp. 543–563, 2009.
2. T. Heimann, B. van Ginneken, M. A. Styner, et al., “Comparison and evaluation of methods for liver segmentation from CT datasets,” *IEEE Transactions on Medical Imaging*, vol. 28, pp. 1251–1265, 2009.
3. T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active shape models: their training and application,” *Computer Vision and Image Understanding*, vol. 61, no. 1, pp. 38–59, 1995.
4. S. M. Pizer, P. T. Fletcher, et al., “Deformable m-reps for 3D medical image segmentation,” *International Journal of Computer Vision*, vol. 55, pp. 85–106, 2003.
5. G. Székely, A. Kelemen, C. Brechbühler, and G. Gerig, “Segmentation of 2-D and 3-D objects from MRI volume data using constrained elastic deformations of flexible Fourier contour and surface models,” *Medical Image Analysis*, vol. 1, no. 1, pp. 19–34, 1996.
6. A. Kelemen, G. Székely, et al., “Elastic model-based segmentation of 3D neuroradiological data sets,” *IEEE Transactions on Medical Imaging*, vol. 18, pp. 828–839, 1999.
7. J. Rissanen, *Stochastic Complexity in Statistical Inquiry*, World Scientific, Singapore, 1989.
8. R. H. Davies, C. J. Twining, T. F. Cootes, J. C. Waterton, and C. J. Taylor, “A minimum description length approach to statistical shape modeling,” *IEEE Transactions on Medical Imaging*, vol. 21, no. 5, pp. 525–537, 2002.
9. M. A. Styner, K. T. Rajamani, L.-P. Nolte, et al., “Evaluation of 3D correspondence methods for model building,” in *Information Processing in Medical Imaging*, vol. 2732 of *Lecture Notes in Computer Science*, pp. 63–75, 2003.
10. H. H. Thodberg, “Minimum description length shape and appearance models,” in *Information Processing in Medical Imaging*, vol. 2732 of *Lecture Notes in Computer Science*, pp. 51–62, 2003.
11. A. Ericsson and K. Åström, “Minimizing the description length using steepest descent,” in *Proceedings of the British Machine Vision Conference*, pp. 93–102, 2003.
12. T. Heimann, I. Wolf, T. Williams, and H.-P. Meinzer, “3D active shape models using gradient descent optimization of description length,” in *Information Processing in Medical Imaging*, Lecture Notes in Computer Science, pp. 566–577, 2005.
13. R. H. Davies, C. J. Twining, T. F. Cootes, and C. J. Taylor, “Building 3-D statistical shape models by direct optimization,” *IEEE Transactions on Medical Imaging*, vol. 29, no. 4, pp. 961–981, 2010.
14. R. H. Davies, C. J. Twining, T. F. Cootes, J. C. Waterton, and C. J. Taylor, “3D statistical shape models using direct optimisation of description length,” in *Proceedings of the 7th European Conference on Computer Vision (ECCV '02)*, pp. 3–21, 2002.
15. T. Heimann, I. Oguz, I. Wolf, M. Styner, and H.-P. Meinzer, “Implementing the automatic generation of 3D statistical shape models with ITK,” The Insight Journal, MICCAI Open Science Workshop, 2006, http://hdl.handle.net/1926/224.
16. X. F. Gu, Y. Wang, T. F. Chan, P. M. Thompson, and S.-T. Yau, “Genus zero surface conformal mapping and its application to brain surface mapping,” *IEEE Transactions on Medical Imaging*, vol. 23, no. 8, pp. 949–958, 2004.
17. M. D. Meyer, P. Georgel, and R. T. Whitaker, “Robust particle systems for curvature dependent sampling of implicit surfaces,” in *Proceedings of the International Conference on Shape Modeling and Applications (SMI '05)*, pp. 124–133, 2005.
18. R. O. Duda, P. E. Hart, and D. G. Stork, *Pattern Classification*, Wiley-Interscience, 2nd edition, 2000.
19. J. O'Rourke, *Computational Geometry in C*, Cambridge University Press, 2nd edition, 2001.
20. P. Alliez, M. Meyer, et al., “Interactive geometry remeshing,” *ACM Transactions on Graphics*, vol. 21, pp. 347–354, 2002.
21. The Visualization Toolkit (VTK), http://www.vtk.org/.
22. R. H. Davies, C. J. Twining, and C. J. Taylor, *Statistical Models of Shape: Optimisation and Evaluation*, Springer, 1st edition, 2008.
23. T. Heimann, *Statistical Shape Models for 3D Medical Image Segmentation*, VDM, 2008.
24. T. Heimann, I. Wolf, and H.-P. Meinzer, “Automatic generation of 3D statistical shape models with optimal landmark distributions,” *Methods of Information in Medicine*, vol. 46, no. 3, pp. 275–281, 2007.
25. V. Ostromoukhov, “A simple and efficient error-diffusion algorithm,” in *Proceedings of SIGGRAPH '01*, pp. 567–572, 2001.
26. J. Cates, P. T. Fletcher, M. Styner, M. Shenton, and R. Whitaker, “Shape modeling and analysis with entropy-based particle systems,” in *Information Processing in Medical Imaging*, vol. 4584 of *Lecture Notes in Computer Science*, pp. 333–345, 2007.