Journal of Electrical and Computer Engineering
Volume 2011, Article ID 410912, 11 pages
http://dx.doi.org/10.1155/2011/410912
Research Article

Unsupervised 3D Prostate Segmentation Based on Diffusion-Weighted Imaging MRI Using Active Contour Models with a Shape Prior

1Department of Electrical and Computer Engineering, Medical Imaging Research Center (MIRC), Illinois Institute of Technology, Chicago, IL 60616, USA
2Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital, Toronto, ON, Canada M5G 1X6

Received 29 March 2011; Revised 10 June 2011; Accepted 24 June 2011

Academic Editor: Tamal Bose

Copyright © 2011 Xin Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Accurate estimation of the prostate location and volume from in vivo images plays a crucial role in various clinical applications. Recently, magnetic resonance imaging (MRI) has been proposed as a promising modality to detect and monitor prostate-related diseases. In this paper, we propose an unsupervised algorithm to segment the prostate from 3D apparent diffusion coefficient (ADC) images derived from diffusion-weighted imaging (DWI) MRI; unlike previous methods for this purpose, it requires no training dataset. We first apply a coarse segmentation to extract the shape information. Then, the shape prior is incorporated into the active contour model. Finally, morphological operations are applied to refine the segmentation results. We apply our method to an MR dataset obtained from three patients and provide segmentation results obtained by our method and an expert. Our experimental results show that the proposed method performs well.

1. Introduction

Prostate cancer is the second leading cause of cancer-related deaths and the most frequently diagnosed cancer in American men [1]. Therefore, there is significant interest in improving prostate cancer diagnosis and treatment. Imaging methods that could provide reliable information about the location, size, and shape of the prostate gland would be greatly useful for localizing cancer foci and guiding biopsies and radiotherapy. To date, the most widely used modality for prostate cancer diagnosis is transrectal ultrasound (TRUS) because of its low cost and short acquisition time. However, its false negative rate is high [2], and prostate cancer visualization is poor. As an alternative, high-resolution MRI allows physicians to better evaluate prostate diseases that may not be assessed adequately with other imaging methods such as X-ray, TRUS, and computed tomography (CT). Recent studies have shown that MRI has higher accuracy in the detection of prostate cancer [3]. Because of advances in MRI technology, diffusion-weighted (DWI) MRI is also now commonly applied to the prostate along with other MRI techniques. Prostate volume is routinely requested as part of imaging evaluation, as it helps in clinical decision making when combined with serum prostate-specific antigen (PSA) to derive PSA density. Knowledge of the prostate boundaries is useful in the planning of conformal radiation therapy and in computer-aided prostate cancer localization. Although the identification of the prostate boundary is a crucial step in these clinical applications, manually segmenting prostate boundaries on 3D MR images slice by slice is tedious and laborious. Moreover, manual segmentation is subjective and produces different results among different observers. Therefore, accurate automated prostate segmentation based on 3D MR images is extremely useful.
However, this task is very challenging because of the noise and inhomogeneity of MR images and the complex anatomical structures of the prostate and surrounding organs.

In the literature, previous work on automated prostate segmentation has focused primarily on TRUS images. In [4], Pathak et al. proposed an edge-based boundary delineation scheme that detects prostate edges as a visual guide for the observer performing manual editing. Because of the visual guide, the accuracy of the detected prostate edges was as good as that of the human observers. Shape information of the prostate was also incorporated in the literature to improve segmentation performance. In [5, 6], ellipses were used to model the prostate shape. In [5], the prostate shape was modeled by parametric deformable superellipses, and a Bayesian segmentation algorithm was then applied to 2D TRUS images. In [6], an elliptical level set algorithm was proposed to segment the prostate in TRUS images. Owing to the shape information, accurate and consistent segmentation results could be obtained in 2D without any manual intervention. Statistical shape information extracted by Gabor filters from a training dataset was employed in [7]. Besides the global population-based shape statistics, Yan et al. proposed combining patient-specific local shape information to segment the prostate in TRUS video in [8] and further improved upon the results of [5, 7]. Shape-based prostate segmentation has also been applied to CT images. Freedman et al. presented a method based on matching probability distributions of photometric variables that incorporates learned shape and appearance models for the prostate and applied it to 3D CT images in [9].

Compared with research in automated prostate segmentation using TRUS images, attempts on MR images are limited. Most algorithms available in the literature for prostate segmentation with MRI make use of shape information. One way to incorporate shape information is to learn the shape statistics from a training dataset. Tsai et al. derived a model-based, implicit representation of the segmentation curve evolution by applying principal component analysis (PCA) to a set of signed distance representations of the training data. This method was applied to the segmentation of medical images containing known object types [10] and obtained visually satisfactory prostate volumes. In [11], the authors extracted not only the shape information but also the texture information of the prostate region by PCA to further improve segmentation performance. A statistical atlas was also used to incorporate shape information: Klein et al. developed an atlas matching method to segment the prostate from MR images based on prelabeled and registered atlas images [12]. In [13, 14], two semiautomatic prostate segmentation methods were proposed. In [13], a method using wavelet multiscale products to detect the prostate boundaries was developed; this method requires the user to specify four reference points around the prostate. In [14], the prostate contour of one slice was manually refined and used as the initial estimate in the neighboring slices. The contour was propagated in 3D through steps of refinement in each slice, and template matching was used to fuse the prostate shape information. Given reasonable initializations, these two algorithms could successfully segment the prostate. Although the majority of methods available in the literature for prostate segmentation with MRI are based on T2 MRI, which provides details of the prostate structures, there are a few attempts to consider other MR techniques for prostate segmentation.
In [15], T2 MRI and MR spectroscopy were combined, and an active shape model segmentation scheme was developed; an appropriate initialization is essential for accurate segmentation with this method. In [16], a framework based on maximum a posteriori (MAP) estimation was proposed to segment the prostate from dynamic contrast enhanced MRI (DCE MRI) by fusing appearance, spatial, and shape information of the prostate learned from training data.

To date, all available prostate segmentation methods for MRI are either supervised or semiautomatic, and the supervised methods have difficulty handling the large variety of prostate sizes, shapes, and textures across patients. Since the prostate appearance varies significantly between patients, we develop in this work an unsupervised segmentation algorithm based on the level set framework, introducing a shape prior into a region-based active contour model. The proposed method is fully automatic, and it segments the prostate from apparent diffusion coefficient (ADC) images derived from diffusion-weighted imaging (DWI) MRI. The major contribution of this paper is the development of a 3D automated prostate segmentation method that does not need training data; it is also the first attempt to make use of DWI MRI to differentiate the prostate region from other tissues.

Implicit level set-based representations of a contour have become a popular framework for medical image segmentation. The question of how to fuse higher-level shape prior information into level set-based contour evolutions has been addressed by a number of researchers. In many previous studies, either the shape prior is extracted from training data [5, 7, 8, 10, 11, 16–18], or the exact shape of the object is assumed to be known [19, 20]. In this paper, we propose a novel approach to obtain the shape information from the 3D MR images themselves. The proposed method is a level set-based active contour model which incorporates shape information by adding a shape penalty term. The idea of adding a shape penalty term is given in [19]. However, instead of having the exact shape of the object as in [19, 20], or learning the shape from training data, we use a three-step strategy [21]: (i) coarsely segment the prostate volume; (ii) based on the coarse segmentation result, model the prostate shape by a series of deformable ellipses slice by slice, constraining the level set evolution to stay as close to an ellipsoid as possible; (iii) estimate the prostate volume by a region-based contour model with the shape prior defined in the previous step. To incorporate the shape prior into the active contour model, we introduce a shape penalty term into the energy functional and propose a method to automatically select the shape penalty weight. Finally, a series of morphological operations is applied to further refine the prostate boundary. The proposed method is based on our previous study [21], where the prostate was segmented from ADC images in 2D.
In this paper, we improve that method in the following aspects: (i) we extend it from 2D to 3D, (ii) we propose a coarse segmentation step and use a stack of parametric deformable ellipses to extract the shape information, (iii) we develop a method to automatically select the weighting parameter of the shape term, and (iv) we apply morphological operations to refine the segmentation results. In our experiment, ADC images were calculated from the diffusion-weighted data on a pixel-wise basis according to ADC = ln(S_0 / S_b) / b, where S_0 is the signal intensity without diffusion weighting (b value of 0 s/mm²) and S_b is the signal intensity with the diffusion gradient (b value of 600 s/mm²). ADC images measure the diffusion coefficient and use diffusion as contrast.
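To make the pixel-wise computation concrete, here is a small sketch (not the authors' code; the function name and the clipping constant are illustrative) of deriving an ADC map from the two DWI acquisitions via ADC = ln(S_0 / S_b) / b:

```python
import numpy as np

def adc_map(s0, sb, b=600.0, eps=1e-6):
    """Pixel-wise ADC: ADC = ln(S0 / Sb) / b.

    s0: signal without diffusion weighting (b value of 0 s/mm^2)
    sb: signal with the diffusion gradient (here b = 600 s/mm^2)
    """
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    # Clip to eps to avoid taking the log of zero in background voxels.
    return np.log(np.maximum(s0, eps) / np.maximum(sb, eps)) / b
```

With a mono-exponential signal decay S_b = S_0 exp(-b · ADC), this recovers the diffusion coefficient exactly.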

This paper is organized as follows. In Section 2, the proposed prostate segmentation scheme for 3D MR images is explained in detail, including the basic concepts of region-based active contour model, the parametric deformable ellipsoid model, the proposed segmentation algorithm with shape information, and the automated shape penalty weight selection method. Section 3 provides the experimental results of applying the proposed method to 3D prostate MR images and comparison with the manual segmentation results. A summary and conclusion of our prostate segmentation method is given in Section 4.

2. Segmentation Method

In this section, we explain the proposed segmentation method in detail. The proposed method is based on a level set formulation of the Mumford–Shah functional developed by Chan and Vese. We extend this framework by introducing a shape penalty term to constrain the level set evolution. Our input data are 3D apparent diffusion coefficient (ADC) maps calculated from a diffusion-weighted (DWI) MR prostate dataset. As shown in Figure 1, our method consists of four main steps: (i) a coarse segmentation step to roughly obtain the prostate shape; (ii) a shape information extraction step to estimate the shape of the prostate; (iii) a segmentation step to estimate the prostate volume by a region-based active contour model combined with the shape prior; and (iv) a refining step to smooth the prostate surface and remove the isolated components in the segmentation result. Each of these steps is described in detail next.

Figure 1: Prostate segmentation method diagram.
2.1. Coarse Segmentation by Region-Based Active Contour Model

In the first step, we apply a region-based active contour model to the 3D ADC images to obtain a coarse prostate shape, from which the shape information is extracted in the next step. In medical images, including prostate MR images, the tissue of interest may not have complete boundaries or may have complex anatomical structures. Edge-based segmentation methods, such as the active contours with edges model [22], also named geodesic active contours, depend largely on nearby edges, are sensitive to local minima and noise, and cannot deal with topological changes. Because of these shortcomings, we consider a region-based active contour model in our application. Compared with edge-based models, region-based models consider the pixel intensities within the entire image dataset [10, 23, 24]. The image dataset is segmented into a certain number of regions based on the regional statistics (sample mean and variance) of each region. Therefore, region-based active contour models are more robust to noise and can handle topological changes. In [23], Chan and Vese proposed a pure region-based model that segments an image I by minimizing
E(c_1, c_2, C) = \int_{inside(C)} |I(x) - c_1|^2 dx + \int_{outside(C)} |I(x) - c_2|^2 dx, (1)
where I is the image to be segmented and c_1 and c_2 are the average intensities of the two regions partitioned by the curve C. During the minimization of (1), the image is divided into two regions: the inside and the outside of the curve. The level set framework is used to minimize the energy function shown in (1). The level set method, introduced by Osher and Sethian, is a numerical technique that can follow the evolution of interfaces. It has been applied to various image processing applications including image segmentation, reconstruction, and denoising. Chan and Vese's region-based active contour model combines the Mumford–Shah functional and the level set framework. The level set formulation of the variational active contour model is based on a higher-dimensional level set function φ, whose zero level set segments the image into several intensity-homogeneous regions.
The energy function of the Chan–Vese model in the level set framework in 3D is
E(φ, c_1, c_2) = \int_\Omega |I(x) - c_1|^2 H(φ(x)) dx + \int_\Omega |I(x) - c_2|^2 (1 - H(φ(x))) dx, (2)
where I is the image to be segmented, H is the Heaviside function, and c_1 and c_2 are the average intensities (sample means) of the two regions separated by the zero level set of φ:
c_1 = \int_\Omega I(x) H(φ(x)) dx / \int_\Omega H(φ(x)) dx, c_2 = \int_\Omega I(x)(1 - H(φ(x))) dx / \int_\Omega (1 - H(φ(x))) dx. (3)
Figure 2 shows the coarse segmentation results obtained by the region-based active contour model for one patient at different slices in the mid-gland, near the apex, and at the base. These results are poor and not acceptable: the intensity information alone is not sufficient to distinguish the prostate gland from the surrounding organs and tissues. Therefore, it is necessary to incorporate the shape information of the prostate to improve the performance of the automated segmentation method.
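A minimal numerical sketch of one gradient-descent step of the Chan–Vese energy above, with the region means recomputed from the current partition; the sharp Heaviside, the omission of a curvature term, and the function name are simplifications, not the authors' implementation:

```python
import numpy as np

def chan_vese_step(I, phi, dt=0.5):
    """One curvature-free descent step of the Chan-Vese energy.

    The region means c1 (inside, phi > 0) and c2 (outside) are recomputed
    from the current zero level set, then phi is moved by the force
    -(I - c1)^2 + (I - c2)^2.
    """
    H = (phi > 0).astype(float)                          # sharp Heaviside
    c1 = (I * H).sum() / max(H.sum(), 1.0)               # mean inside
    c2 = (I * (1 - H)).sum() / max((1 - H).sum(), 1.0)   # mean outside
    force = -(I - c1) ** 2 + (I - c2) ** 2
    return phi + dt * force, c1, c2
```

On a two-valued image initialized near the true partition, the update leaves the sign pattern of φ on the correct regions.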

Figure 2: Coarse segmentation results by the active contour model overlaid on the ADC images. The prostate was removed after surgery; the specimen weighs 45 g and measures 4.5 cm SI × 4.0 cm ML × 3.2 cm AP. Part (a) is 3 mm from the actual base, and part (d) is 3 mm from the actual apex.
2.2. Shape Information Extraction

Medical image segmentation in general faces difficulties because of noise, missing boundaries, and complex anatomical structures. Under such conditions, introducing prior information, such as the general shape, location, intensity, and curvature profile of the tissue of interest, can help the segmentation algorithm perform better. In prostate MR images, the prostate gland and surrounding tissues, such as the bladder, rectum, and muscles, have overlapping intensities and textures. For some patients, at certain slices, the prostate boundaries may be missing or blended with those of the surrounding tissues. However, the prostate has a walnut-like shape in general. Incorporating the information that the prostate is close to an ellipsoid in 3D can constrain the evolution of the segmentation algorithm and help it extract the prostate more accurately.

2.2.1. Parametric Deformable Ellipsoid Model

After the coarse segmentation step, we model the prostate shape by a stack of parametric ellipses. In the literature, several methods have been proposed for fitting superellipses [25]. In this study, to obtain the prostate shape information, we use a two-step scheme: first, we roughly model the prostate volume by a parametric ellipsoid; then, we model the apex as a stack of parametric ellipses. Since the prostate volume is not an ideal ellipsoid, the stack of ellipses lets us fit the apex more accurately. In the first step, the prostate volume is roughly fitted by the parametric ellipsoid
((x - x_0)/a)^2 + ((y - y_0)/b)^2 + ((z - z_0)/c)^2 = 1, (4)
where (x_0, y_0, z_0) is the center of the ellipsoid, (a, b, c) are the lengths of the semiaxes, and θ is the orientation. The shape parameters p = (x_0, y_0, z_0, a, b, c, θ) define an ellipsoid. In (4), only one rotation is considered; by observing the axial MR images, we can see that the rotation of the prostate in the axial plane is negligible, so we assume there is no rotation in the x–y plane and set θ = 0 in (4). To obtain the ellipsoid which best fits the prostate shape, we borrow the idea of least-squares minimization and the superquadric inside-outside function presented in [26]. Based on the implicit representation of the parametric ellipsoid, we define the inside-outside function
F(x, y, z; p) = ((x - x_0)/a)^2 + ((y - y_0)/b)^2 + ((z - z_0)/c)^2. (5)
When F(x, y, z; p) = 1, the corresponding voxel is on the surface of the ellipsoid; when F(x, y, z; p) < 1, the corresponding voxel is inside the ellipsoid, and vice versa. To find a function whose minimum corresponds to the ellipsoid that best fits the given prostate shape, we define a shape fitting function
E_s(p) = \int_\Omega (F(x, y, z; p) - 1) χ_P(x, y, z) dx dy dz, (6)
where Ω is the image domain and χ_P is the indicator function of the prostate region P obtained by the coarse segmentation step. For voxels inside the ellipsoid, we have F - 1 < 0; for voxels outside the ellipsoid, we have F - 1 > 0. The shape parameters are obtained as
p* = arg min_p E_s(p). (7)
If the prostate is perfectly segmented, that is, P is the ideal prostate mask, the deformable ellipsoid converges to the smallest ellipsoid which best fits the prostate volume.
In practice, we simplify the estimation of the ellipsoid by fixing c = NΔz/2, where N is the number of slices containing the prostate region, predefined by a radiologist, and Δz is the slice spacing. That is, a radiologist first selects the slices which belong to the prostate region; if 18 slices are selected, then c = 9Δz. In this way, the shape parameters of the ellipsoid which best fits the prostate shape in 3D can be approximately estimated in 2D by finding the ellipse which best fits the prostate shape in the central slice, where z = z_0. That is, for the ellipse F(x, y; p) = ((x - x_0)/a)^2 + ((y - y_0)/b)^2, we have
E_s(p) = \int_\Omega (F(x, y; p) - 1) χ_P(x, y) dx dy. (8)
The cost function is minimized by an iterative gradient descent method, and the gradient descent with respect to the unknown shape parameters is
∂p_i/∂t = -∂E_s/∂p_i = -\int_\Omega (∂F/∂p_i) χ_P(x, y) dx dy, (9)
where, for example, ∂F/∂x_0 = -2(x - x_0)/a^2 and ∂F/∂a = -2(x - x_0)^2/a^3.
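The inside-outside function and the ellipse fitting above can be sketched as follows; this illustration uses a squared (least-squares) fitting energy and a brute-force search over candidate semi-axes instead of gradient descent, so the helper names and the optimizer are assumptions:

```python
import numpy as np

def inside_outside(X, Y, x0, y0, a, b):
    # Ellipse inside-outside function: F = 1 on the boundary, F < 1 inside.
    return ((X - x0) / a) ** 2 + ((Y - y0) / b) ** 2

def shape_fit_energy(mask, x0, y0, a, b):
    # Least-squares fitting energy accumulated over the coarse prostate mask.
    X, Y = np.meshgrid(np.arange(mask.shape[1]), np.arange(mask.shape[0]))
    F = inside_outside(X, Y, x0, y0, a, b)
    return ((F - 1.0) ** 2 * mask).sum()

def fit_axes(mask, x0, y0, radii):
    # Brute-force search over candidate semi-axes; a crude stand-in for
    # the iterative gradient descent used in the paper.
    best = min((shape_fit_energy(mask, x0, y0, a, b), a, b)
               for a in radii for b in radii)
    return best[1], best[2]
```

Fitting a thin elliptical ring mask recovers the semi-axes that generated it, since only those make F ≈ 1 on all mask voxels.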

By observing the ellipsoid fitting results, we can see that the ellipsoid shown in Figure 3 (corresponding to the same prostate images as in Figure 2) is able to roughly model the prostate shape in 3D. To further improve the shape estimate, we apply the ellipse fitting method again to the slices of the apex, obtaining a stack of ellipses that fit the prostate shape more accurately, as shown in Figure 4 (corresponding to the same prostate images as in Figure 2). That is, we update the shape information by finding the ellipse that best fits each slice; for each slice, we obtain a set of shape parameters defining an ellipse, and those ellipses are combined with the active contour model. It is worth mentioning that at this step, the apex slices need to be predefined. Although the identification of the apex is difficult, it is not crucial in this step: if slices of the mid-gland are misidentified as apex, the results of the deformable ellipses will not change, because for the mid-gland slices the shape has already been fitted very well by the ellipsoid.

Figure 3: The prostate shape can be roughly fitted by an ellipsoid.
Figure 4: The prostate shape is fitted by a stack of ellipses. The first row shows the ellipses, and the second row shows the prostate outlined by a radiologist.
2.2.2. Initial Estimates of the Shape Parameters

It is worth mentioning that the gradient descent minimization may converge to a local minimum instead of the global minimum. Therefore, the initial estimates of the shape parameters determine to which local minimum the minimization procedure will converge. In the proposed method, we use a rough estimate of the prostate's true position, orientation, and size obtained from the profiles (pixel counts) across the centroid of the coarse segmentation result along the vertical and horizontal directions of the ADC images. These initial estimates suffice to ensure convergence to the minimum that corresponds to the actual shape of the prostate. After the coarse segmentation step described in Section 2.1, we count the pixels of the coarse segmentation result of the central slice along the horizontal and vertical directions. The resulting profiles have a roughly rectangular shape, and we can detect the rectangle edges, which correspond to the prostate boundary, by computing the first derivative of each profile. By detecting the left, right, top, and bottom boundaries of the prostate in the central slice, we calculate the center and radii of the prostate as initial estimates of the shape parameters, ensuring a more robust shape extraction.
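A sketch of this profile-based initialization, assuming a binary coarse mask of the central slice; the derivative peaks of the row/column pixel-count profiles give the boundary positions (the function name is illustrative):

```python
import numpy as np

def init_from_profiles(mask):
    """Initial ellipse estimate from the profiles of a binary mask.

    The column/row pixel counts are roughly rectangular; their first
    derivative peaks at the left/right and top/bottom boundaries.
    """
    cols = mask.sum(axis=0)                       # horizontal profile
    rows = mask.sum(axis=1)                       # vertical profile
    dc, dr = np.diff(cols), np.diff(rows)
    left, right = np.argmax(dc), np.argmin(dc)    # steepest rise / fall
    top, bottom = np.argmax(dr), np.argmin(dr)
    x0, y0 = (left + right) / 2.0, (top + bottom) / 2.0
    a, b = (right - left) / 2.0, (bottom - top) / 2.0
    return x0, y0, a, b
```

For a rectangular mask the recovered center and half-widths match the rectangle up to half-pixel boundary offsets, which is ample for an initial guess.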

2.3. Prostate Segmentation with Shape Information
2.3.1. Active Contour Model with Shape Prior

There are several ways to incorporate shape information into level set-based variational approaches. In [10], a number of training shapes are implicitly represented in the segmentation curve using signed distance functions. In [19, 20], the authors proposed two models that introduce shape priors into Chan–Vese models, but both models assume that the exact shape of the object is known and segment that known shape or object from a background containing several objects. However, in our application, the exact shape of the prostate is unknown, and considering the large variety of prostate shapes and sizes, we propose an unsupervised shape-based segmentation algorithm. We assume the prostate shape is close to an ellipsoid, which is estimated by the method discussed in Section 2.2. To combine the shape prior and constrain the level set evolution, we add the shape fitting function (6) to the level set energy function (2) as a penalty term:
E(φ, c_1, c_2) = \int_\Omega |I - c_1|^2 H(φ) dx + \int_\Omega |I - c_2|^2 (1 - H(φ)) dx + λ \int_\Omega (F(x) - 1) H(φ(x)) dx, (10)
where λ is a weighting parameter of the shape fitting term. The shape penalty term forces the segmented region to stay close to the shape prior. We update the level set function by the gradient descent method, and the gradient descent with respect to the segmentation function φ is
∂φ/∂t = δ(φ) [ -(I - c_1)^2 + (I - c_2)^2 - λ(F - 1) ], (11)
where δ is the Dirac delta function.
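One descent step of the shape-constrained evolution above might be sketched as below; as before, the sharp Heaviside, the missing curvature term, and the function name are simplifications, not the authors' implementation:

```python
import numpy as np

def shape_cv_step(I, phi, F, lam, dt=0.5):
    """Chan-Vese step with the shape penalty lam * (F - 1).

    Voxels outside the ellipsoid prior (F > 1) are pushed out of the
    segmented region; voxels inside (F < 1) are favored.
    """
    H = (phi > 0).astype(float)
    c1 = (I * H).sum() / max(H.sum(), 1.0)
    c2 = (I * (1 - H)).sum() / max((1 - H).sum(), 1.0)
    force = -(I - c1) ** 2 + (I - c2) ** 2 - lam * (F - 1.0)
    return phi + dt * force
```

On a constant image, where the data term carries no information, the shape term alone drives the segmentation toward the interior of the elliptical prior.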

2.3.2. Shape Weighting Parameter Selection

Usually, the shape weighting parameter is selected manually based on previous experience. In this paper, we present a method to select the shape weight automatically based on the correlation between the segmentation result and the shape prior, defined as
c(λ) = \sum_i S_λ(i) P_s(i) / ( \sqrt{\sum_i S_λ(i)^2} \sqrt{\sum_i P_s(i)^2} ), (12)
where S_λ is the binary segmentation result with shape weight λ, P_s is the binary shape prior, and i indexes the voxels. By varying the shape weighting parameter λ, we plot the correlation-versus-shape-weight curve shown in Figure 5. An appropriate λ should be small enough that the segmentation result can capture the real prostate boundary; meanwhile, it should be large enough that the shape prior constrains the level set evolution and the segmentation result stays close to the shape prior. Considering these two points, we compare all the correlation values and select the point closest to the upper left corner of the plot, which corresponds to the smallest shape weight with a high correlation between the segmentation result and the shape prior:
λ* = arg min_c \sqrt{λ_c^2 + (1 - c)^2}, (13)
where the square root is the Euclidean distance to the corner (λ = 0, c = 1) and λ_c is the smallest shape weight that achieves the correlation c.
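The weight-selection rule can be sketched as follows, assuming binary masks and a normalization of the weight axis so that both coordinates of the curve are comparable; the normalization and all function names are assumptions of this sketch:

```python
import numpy as np

def mask_correlation(seg, prior):
    # Normalised correlation between two binary volumes.
    seg = seg.astype(float).ravel()
    prior = prior.astype(float).ravel()
    return (seg * prior).sum() / np.sqrt((seg ** 2).sum() * (prior ** 2).sum())

def select_weight(weights, correlations):
    """Pick the weight closest to the upper-left corner (weight 0, corr 1)
    of the correlation-vs-weight curve: the smallest weight that still
    gives high agreement with the shape prior."""
    w = np.asarray(weights, dtype=float)
    c = np.asarray(correlations, dtype=float)
    wn = w / w.max()                        # put both axes on [0, 1]
    d = np.sqrt(wn ** 2 + (1.0 - c) ** 2)   # distance to the corner (0, 1)
    return w[np.argmin(d)]
```

A curve that jumps to high correlation at a small weight and then flattens, as in Figure 5, yields that small weight as the selection.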

Figure 5: Correlation versus shape weighting parameter.

Figure 6 (corresponding to the same prostate images as in Figure 2) shows the segmentation results obtained by the active contour model with different shape prior weights. This figure demonstrates the efficacy of our method for automatically selecting an appropriate shape weight.

Figure 6: Comparison of segmentation results with different shape weights. The first row shows the segmentation results obtained without the shape prior (λ = 0), the second row shows the results obtained with a very large shape weight, and the third row shows the results obtained with an automatically selected shape weight. The last column shows the segmentation results in 3D.
2.4. Prostate Volume Refinement

After the segmentation with the shape prior, the prostate volume is obtained. However, certain surrounding tissues are also labeled as prostate and appear as isolated components in the image data. To remove those tissues, a morphological opening operation is first applied; then, only the largest connected component in the image domain, which corresponds to the prostate volume, is retained. Finally, a morphological closing is used to restore the detected prostate boundaries. The results, shown in Figure 7 (corresponding to the same prostate images as in Figure 2), constitute our final segmentation.
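The refinement step maps directly onto standard morphology routines; a sketch using scipy.ndimage, where the structuring elements are the library defaults rather than the authors' stated choices:

```python
import numpy as np
from scipy import ndimage

def refine_volume(seg):
    """Refine a binary segmentation: opening removes small spurs,
    the largest connected component is kept as the prostate, and
    closing restores the boundary smoothed away by the opening."""
    opened = ndimage.binary_opening(seg)
    labels, n = ndimage.label(opened)
    if n == 0:
        return np.zeros_like(seg, dtype=bool)
    sizes = ndimage.sum(opened, labels, index=np.arange(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    return ndimage.binary_closing(largest)
```

Small isolated detections far from the main volume are discarded, while the main component survives the opening/closing pair.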

Figure 7: Segmentation results after refinement. The first four figures are various slices, and the fifth figure is the 3D segmentation result.

The main steps of the presented approach can be summarized as follows:
(1) Apply a region-based active contour model to the 3D ADC image data to obtain a coarse estimation of the prostate mid-gland and apex.
(2) Estimate the prostate shape by using a parametric deformable ellipse model based on the coarse segmentation of the prostate mid-gland and apex.
(3) Apply the region-based active contour model again with the shape prior obtained in the previous step and an automatically selected weighting term to further segment the prostate volume in 3D.
(4) Apply a morphological processing step to refine the prostate volume result.

3. Experimental Results

In this study, MR image data obtained from ten patients with biopsy-confirmed prostate cancer are used. After prostatectomy, the prostate was removed and weighed by a pathologist. Because ADC maps provide better anatomical shape and contrast between the prostate gland and other tissue, we apply our method to 3D ADC images. The segmentation typically takes about 7 minutes for each patient on an Intel Core 2 Quad PC running at 2.4 GHz. We use edge-based and volume-based metrics for quantitative analysis of the segmentation results: the Hausdorff distance, the mean absolute distance (MAD) [27], and the Dice similarity coefficient (DSC). We denote the manually delineated boundary as B_m and the automated segmentation boundary as B_a, where each element of B_m and B_a is a point on the corresponding contour. We find the distance of every point in B_a from all points in B_m. We define the distance of a point a_i to the closest point on the contour B_m as
d(a_i, B_m) = min_j || a_i - b_j ||, (14)
where || · || is the 3D Euclidean distance between any two points. The Hausdorff distance is defined as the maximum of d(a_i, B_m) over all i, and the MAD is the average of d(a_i, B_m) over all i. The Hausdorff distance measures the worst possible disagreement between the two boundaries, while the MAD estimates the disagreement averaged over the two outlines. On the other hand, the DSC value is defined as
DSC = 2 |A ∩ M| / (|A| + |M|), (15)
where A is the automatic segmentation result, M is the manual segmentation by an expert radiologist, and | · | denotes the number of voxels contained in the set.
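The three metrics can be sketched as below; the Hausdorff distance and MAD here use the symmetric form over both boundary point sets, which is a common convention rather than a detail stated in the text:

```python
import numpy as np

def dsc(A, B):
    # Dice similarity coefficient 2|A n B| / (|A| + |B|) for binary volumes.
    A, B = A.astype(bool), B.astype(bool)
    return 2.0 * np.logical_and(A, B).sum() / (A.sum() + B.sum())

def boundary_distances(A_pts, B_pts):
    # d(a_i, B) = min_j ||a_i - b_j|| for every boundary point of A.
    diff = A_pts[:, None, :] - B_pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

def hausdorff_and_mad(A_pts, B_pts):
    # Symmetric Hausdorff (worst disagreement) and MAD (mean disagreement).
    dab = boundary_distances(A_pts, B_pts)
    dba = boundary_distances(B_pts, A_pts)
    hd = max(dab.max(), dba.max())
    mad = 0.5 * (dab.mean() + dba.mean())
    return hd, mad
```

The all-pairs broadcast keeps the code short; for large boundary sets a KD-tree nearest-neighbor query would be the usual replacement.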

Figure 8 provides a comparison between the proposed method and the manual segmentation results, and Table 1 shows the DSC, MAD, and Hausdorff distance values for the ten patients. The weights of the prostates are also provided in Table 1; the size of the prostate varies significantly among patients. In Table 2, the segmentation results at the base, mid-gland, and apex are provided separately. The majority of mis-segmentation occurs at the base and apex, where the boundary between the prostate and surrounding tissues is very weak. Comparison with other prostate segmentation schemes in the literature shows that our system performs at least as well as, or better than, other systems: Klein et al. [12] reported a mean DSC value of 0.82, Zhu et al. [28] reported DSC values ranging from about 0.15 to about 0.85, and Toth et al. [15] reported DSC values ranging from 0.746 to 0.826, while our DSC values range from 0.738 to 0.871. Note that our method does not need a training dataset.

Table 1: DSC, MAD, and Hausdorff distance values of ten patients.
Table 2: The mean and standard deviation values of DSC, MAD, and Hausdorff distance of the whole prostate and at the base, mid-gland and apex of ten patients.
Figure 8: 3D segmentation results of three patients. The first row is the manual segmentation by an expert human reader, and the second row is the automated segmentation by the proposed method. The prostate specimen corresponding to the first column weighs 45 g and measures 4.5 cm superior to inferior (SI) × 4.0 cm medial to lateral (ML) × 3.2 cm anterior to posterior (AP); the specimen corresponding to the second column weighs 72.1 g and measures 5.5 cm SI × 5.0 cm ML × 3.6 cm AP; the third column's specimen weighs 56.3 g and measures 4.5 cm SI × 5.0 cm ML × 3.2 cm AP.

4. Conclusions

In this study, we have developed and applied an unsupervised automated segmentation method to the problem of prostate segmentation with 3D DWI MR image data. Accurate segmentation of the prostate from MR datasets is useful in many applications. Although many researchers have proposed algorithms for prostate segmentation, attempts on MR prostate segmentation are very limited, and they rely on supervised techniques that require a training dataset. Currently, the level set framework is a popular approach for medical image segmentation, and shape information is considered in most prostate segmentation methods presented in the literature. To date, in most shape-based prostate segmentation methods, whether for TRUS or MR images, the shape information is obtained from training data or by comparison with atlas images, even though the prostate shape, size, and texture vary widely between patients. Moreover, the majority of MR-based prostate segmentation algorithms in the literature are based on T2 MRI. In this paper, we have presented an unsupervised and automated method to segment the prostate volume from DWI MRI by a shape-based active contour model in the level set framework, without the need for a training dataset.

We extend the region-based active contour model proposed by Chan and Vese and apply it to MR images by fusing a shape penalty term into the cost function. We first apply a coarse segmentation step to the 3D ADC image data, and we model the prostate shape by a stack of parametric deformable ellipses to extract the shape prior information. Then, we introduce a shape fitting function to keep the active contour evolution close to the shape prior during further segmentation, and we select the shape weighting parameter automatically, as explained in Section 2. The experimental results on 3D MR prostate images show the effectiveness of the proposed method.

Because of the high variability of prostate appearance between patients, future work will include applying our method to a larger MR dataset. Because of the nonuniformity of the texture and the lack of clear edges at the prostate apex and base, our method performs poorly at certain slices for certain patients; future work will also attempt to overcome these limitations.

Acknowledgment

This study was approved by the patients’ institutional research ethics board, and all patients gave informed consent.

References

  1. American Cancer Society, Cancer Facts and Figures, American Cancer Society, Atlanta, Ga, USA, 2007.
  2. W. J. Catalona, D. S. Smith, T. L. Ratliff et al., “Measurement of prostate-specific antigen in serum as a screening test for prostate cancer,” New England Journal of Medicine, vol. 324, no. 17, pp. 1156–1161, 1991.
  3. J. C. Weinreb and F. V. Coakley, “MR imaging and MR spectroscopic imaging of prostate cancer prior to radical prostatectomy: a prospective multi-institutional clinicopathological study,” presented at RSNA, Chicago, Ill, USA, 2006.
  4. S. D. Pathak, V. Chalana, D. R. Haynor, and Y. Kim, “Edge-guided boundary delineation in prostate ultrasound images,” IEEE Transactions on Medical Imaging, vol. 19, no. 12, pp. 1211–1219, 2000.
  5. L. Gong, S. D. Pathak, D. R. Haynor, P. S. Cho, and Y. Kim, “Parametric shape modeling using deformable superellipses for prostate segmentation,” IEEE Transactions on Medical Imaging, vol. 23, no. 3, pp. 340–349, 2004.
  6. N. N. Kachouie, P. Fieguth, and S. Rahnamayan, “An elliptical level set method for automatic TRUS prostate image segmentation,” in Proceedings of the IEEE Symposium on Signal Processing and Information Technology, pp. 191–196, Vancouver, Canada, August 2006.
  7. D. Shen, Y. Zhan, and C. Davatzikos, “Segmentation of prostate boundaries from ultrasound images using statistical shape model,” IEEE Transactions on Medical Imaging, vol. 22, no. 4, pp. 539–551, 2003.
  8. P. Yan, S. Xu, B. Turkbey, and J. Kruecker, “Adaptively learning local shape statistics for prostate segmentation in ultrasound,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 3, pp. 633–641, 2011.
  9. D. Freedman, R. J. Radke, T. Zhang, Y. Jeong, D. M. Lovelock, and G. T. Y. Chen, “Model-based segmentation of medical imagery by matching distributions,” IEEE Transactions on Medical Imaging, vol. 24, no. 3, pp. 281–292, 2005.
  10. A. Tsai, A. Yezzi, W. Wells et al., “A shape-based approach to the segmentation of medical imagery using level sets,” IEEE Transactions on Medical Imaging, vol. 22, no. 2, pp. 137–154, 2003.
  11. S. Ghose, A. Oliver, R. Martí et al., “Prostate segmentation with texture enhanced active appearance model,” in Proceedings of the 6th International Conference on Signal Image Technology and Internet Based Systems (SITIS '10), pp. 18–22, 2010.
  12. S. Klein, U. A. van der Heide, B. W. Raaymakers, A. N. T. J. Kotte, M. Staring, and J. P. W. Pluim, “Segmentation of the prostate in MR images by atlas matching,” in Proceedings of the 4th IEEE International Symposium on Biomedical Imaging (ISBI '07), pp. 1300–1303, April 2007.
  13. D. Flores-Tapia, G. Thomas, N. Venugopal, B. McCurdy, and S. Pistorius, “Semi automatic MRI prostate segmentation based on wavelet multiscale products,” in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '08), pp. 3020–3023, August 2008.
  14. S. Vikal, S. Haker, C. Tempany, and G. Fichtinger, “Prostate contouring in MRI guided biopsy,” in Medical Imaging 2009: Image Processing, vol. 7259, article 72594A of Proceedings of SPIE, Lake Buena Vista, Fla, USA, February 2009.
  15. R. Toth, P. Tiwari, M. Rosen, A. Kalyanpur, S. Pungavkar, and A. Madabhushi, “A multi-modal prostate segmentation scheme by combining spectral clustering and active shape models,” in Medical Imaging 2008: Image Processing, vol. 6914, article 69144S of Proceedings of SPIE, San Diego, Calif, USA, February 2008.
  16. A. Firjani, A. Elnakib, F. Khalifa et al., “A novel 3D segmentation approach for segmenting the prostate from dynamic contrast enhanced MRI using current appearance and learned shape prior,” in Proceedings of the IEEE Symposium on Signal Processing and Information Technology (ISSPIT '10), pp. 137–143, 2010.
  17. R. Del Vera, E. Coiras, J. Groen, and B. Evans, “Automatic target recognition in synthetic aperture sonar images based on geometrical feature extraction,” EURASIP Journal on Advances in Signal Processing, vol. 2009, 9 pages, 2009.
  18. R. Del Vera, E. Coiras, J. Groen, and B. Evans, “Motion segmentation and retrieval for 3D video based on modified shape distribution,” EURASIP Journal on Advances in Signal Processing, vol. 2007, 11 pages, 2007.
  19. D. Cremers, N. Sochen, and C. Schnörr, “Towards recognition-based variational segmentation using shape priors and dynamic labeling,” in Proceedings of the International Conference on Scale Space Theories in Computer Vision, L. Griffith, Ed., vol. 2695, pp. 388–400, 2003.
  20. T. Chan and W. Zhu, “Level set based shape prior segmentation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 1164–1170, June 2005.
  21. X. Liu, D. L. Langer, M. A. Haider et al., “Unsupervised segmentation of the prostate using MR images based on level set with a shape prior,” in Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '09), pp. 3613–3616, September 2009.
  22. S. Osher and J. A. Sethian, “Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations,” Journal of Computational Physics, vol. 79, no. 1, pp. 12–49, 1988.
  23. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, 2001.
  24. R. Ronfard, “Region-based strategies for active contour models,” International Journal of Computer Vision, vol. 13, no. 2, pp. 229–251, 1994.
  25. P. L. Rosin, “Fitting superellipses,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 7, pp. 726–732, 2000.
  26. F. Solina and R. Bajcsy, “Recovery of parametric models from range images: the case for superquadrics with global deformations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 2, pp. 131–147, 1990.
  27. A. Madabhushi and D. N. Metaxas, “Combining low-, high-level and empirical domain knowledge for automated segmentation of ultrasonic breast lesions,” IEEE Transactions on Medical Imaging, vol. 22, no. 2, pp. 155–169, 2003.
  28. Y. Zhu, R. Zwiggelaar, and S. Williams, “Prostate segmentation: a comparative study,” Medical Image Understanding and Analysis, pp. 129–132, 2003.