International Journal of Biomedical Imaging

Special Issue: Mathematical Methods for Images and Surfaces 2011

Research Article | Open Access

Volume 2012 | Article ID 327198

Jie Wu, Pavani Davuluri, Kevin R. Ward, Charles Cockrell, Rosalyn Hobson, Kayvan Najarian, "Fracture Detection in Traumatic Pelvic CT Images", International Journal of Biomedical Imaging, vol. 2012, Article ID 327198, 10 pages, 2012.

Fracture Detection in Traumatic Pelvic CT Images

Academic Editor: Shan Zhao
Received: 02 Jul 2011
Revised: 30 Sep 2011
Accepted: 30 Sep 2011
Published: 12 Jan 2012


Fracture detection in pelvic bones is vital for patient diagnostic decisions and treatment planning in traumatic pelvic injuries. Manual detection of bone fractures from computed tomography (CT) images is very challenging due to the low resolution of the images and the complex pelvic structures. Automated fracture detection from segmented bones can significantly help physicians analyze pelvic CT images and assess the severity of injuries in a very short period. This paper presents an automated hierarchical algorithm for bone fracture detection in pelvic CT scans using adaptive windowing, boundary tracing, and the wavelet transform while incorporating anatomical information. Fracture detection is performed on the basis of the results of prior pelvic bone segmentation via our registered active shape model (RASM). The results are promising and show that the method is capable of detecting fractures accurately.

1. Introduction

Pelvic fractures are high-energy injuries that constitute a major cause of death in trauma patients. According to the Centers for Disease Control and Prevention (CDC), traumatic injury kills more people between the ages of 1 and 44 than any disease or illness. Among the different types of trauma with a high impact on the lives of Americans, traumatic pelvic injuries, caused mainly by sports, falls, and motor vehicle accidents, contribute to a large number of mortalities every year [1, 2]. Traumatic pelvic injuries and associated complications, such as severe hemorrhage and multiple organ dysfunction syndrome (MODS), result in mortality rates from 8.6% to 50% [3]. When combined with other injuries in the body, for instance, to the abdomen, the chance of mortality is even higher [4]. In general, a pelvic fracture can be associated with hemorrhage, neurologic injury, vascular injury, and organ damage, as many vital structures run through the pelvis. Pain and impaired mobility are normally the results of nerve and internal organ damage associated with the pelvic fracture [5–7].

Patient data, in particular medical images such as computed tomography (CT) scans, contain a significant amount of information, and it is crucial for physicians to make diagnostic and treatment-planning decisions on the basis of this information together with other patient data. Currently, a large portion of these data is not optimally and comprehensively utilized, because the information they hold is inaccessible through visual observation or simple traditional computational methods. The information contained in pelvic CT images is a very important resource for assessing the severity and prognosis of injuries. Each pelvic CT scan consists of several slices, and each slice contains a large amount of data that may not be thoroughly and accurately analyzed via visual inspection. In addition, in the field of trauma, physicians frequently need to make quick decisions based on a large amount of information. Hence, a computer-assisted pelvic trauma decision-making system is necessary for assisting physicians in making accurate diagnostic decisions and determining treatment plans in a short period.

Automated fracture detection from segmented bones can help physicians examine pelvic CT images and assess injury severity within a short period. Extracting features such as the presence and location of fractures, hemorrhage, and the displacement between fractured bones in an automated fashion is vital for such injuries. Identifying a fracture alone is not sufficient to assess injury severity; details of the fracture, such as the distance and angle between the fractured bones, must also be taken into account. However, the task of pelvic bone segmentation and fracture detection is very challenging due to the low resolution of CT images, complex pelvic structures, and variations in bone shape and size from patient to patient. Adding to these complexities, the presence of noise, partial volume effects, and inhomogeneities in the CT images further complicates fracture detection. The objective of this study is to design a computer-assisted system that helps radiologists better assess bone fractures in the pelvic region. It also illustrates the fractured bones in a clearer and more visible manner. In particular, mild and small fractures, while still partially visible in the CT images, are sometimes flagged in the first read as “irregularities” that need further investigation, as radiologists may not be able to reliably label them as fractures due to the quality of the CT scan as well as the volume of data to be processed. In these situations, it normally takes multiple reads to confirm the existence and/or details of a fracture. A machine-based analysis can consider and process detailed information from several neighboring slices to provide radiologists with clues as to whether a particular slice contains a fracture and, if so, extract details such as the separation among the pieces.

While there have been few studies directly focusing on fracture detection in pelvic CT images, there is a body of closely related work. Moghari and Abolmaesumi [8] utilized a global registration method for multifragment fracture fixation in the femur. However, the method suffers from initial alignment errors, and the dataset includes only femur fragments generated randomly from 3D data points. Moghari and Abolmaesumi [9] also proposed a technique to automatically register multiple bone fragments of a fracture using a global registration method guided by a statistical anatomical atlas model. Due to the limited number of bone models, the method is unable to capture all variations of the femur. Winkelbach et al. [10] presented an approach based on a modified version of the Hough transform and registration techniques for estimating the relative transformations between fragments of a broken cylindrical structure. This method is designed for computer-aided bone alignment, such as fractured long bones and fracture reduction in surgery. However, the approach is not fully automatic and requires a significant amount of human supervision. In another work, Ryder et al. [11] explored nonvisual methods to detect fractures. In addition, there are image processing methods for fracture detection applied to X-ray images [12–14]. Douglas et al. [12] focused on early detection of fractures with low-dose digital X-ray images in a pediatric trauma unit. Tian et al. [13] determined the presence of femoral fracture by measuring the neck-shaft angle of the femur. Lum et al. [14] used three texture features combined with a classifier to detect radius and femur fractures; this method may suffer from an imbalanced dataset. The majority of these X-ray image processing methods may not be applicable to fracture detection in CT images because of the variation in image intensities and resolution between X-ray and CT images.

Even though few studies have been conducted on fracture detection from pelvic CT scans, several segmentation techniques have been developed for medical images of various regions of the human body, for example, the brain and abdomen. These methods include threshold-based techniques, region growing, classifiers, clustering, Markov random field models, artificial neural networks, deformable models, atlas-guided methods, and knowledge-based methods. Thresholding techniques segment an image by creating a binary partition on the basis of the image intensities [15]; their drawback is that they cannot be effectively applied to multichannel images. Deformable model approaches start with an initial contour placed near the desired boundary, and the contour is then improved through an iterative relaxation process [16–18]. The disadvantage is that these methods require manual interaction to select the initial position and appropriate model parameters. Atlas-guided methods utilize a standard atlas or template for segmentation [19]; the atlas used as the reference frame is generated on the basis of previously known anatomical information. However, due to anatomical variability across individuals, accurate segmentation of complex structures remains a challenging task. Clustering algorithms, also referred to as unsupervised methods [20, 21], while successful in some applications, can be sensitive to noise and variations in intensity. In addition, the calculation can become computationally expensive when the clusters contain a large number of pixels.

This study develops an automated algorithm to detect fractures in pelvic bones using a hierarchical method that combines several of the above-mentioned techniques in different steps. Fracture detection is performed using the proposed automated segmentation method, called the registered active shape model (RASM), along with the wavelet transform, adaptive windowing, boundary tracing, and masking.

The rest of the paper is organized as follows. Section 2 describes the methods used for pelvic bone segmentation and fracture detection. Section 3 presents and discusses the results obtained using the proposed methods. Section 4 concludes the paper and outlines future work.

2. Methods

Automated fracture detection is important for making fast and accurate decisions and treatment planning. In order to successfully detect pelvic bone fractures, utilizing the bone information contained in pelvic CT images is crucial. Figure 1 illustrates the overall process of the proposed automated fracture detection. The proposed fracture detection method involves automated bone segmentation using registered active shape model (RASM), adaptive windowing, 2D stationary wavelet transform, masking, and boundary tracing. Each step in the process is explained in detail in the following subsections.

2.1. Multilevel Segmentation of Bone in Pelvic CT Scans

Segmentation is a vital step in analyzing pelvic bones in CT images and the first step in fracture detection. Specifically, bone segmentation extracts the bones from the images so that they can later be examined for fractures. Our previous work focused on the segmentation of pelvic bones in CT scans [22]. In this paper, a new multilevel segmentation algorithm for pelvic CT scans is developed, as shown in Figure 2. This new segmentation technique consists of four main parts: preprocessing, edge detection, shape matching, and the registered active shape model (RASM) with automatic initialization.

The presence of surrounding artifacts and noise in the original pelvic CT images makes bone segmentation a challenging task. Therefore, preprocessing is performed as the first step of segmentation to remove the surrounding artifacts (e.g., CT table, cables, hands, and lower extremities) present in the original image. The preprocessing is carried out using blob analysis. Next, high-frequency speckle noise is removed from the images using a 2D Gaussian filter. The image is then enhanced via brightness contrast stretching to emphasize the features of interest, that is, the pelvic bones. The bone edges are then detected using the Canny edge detection technique. Since some weak edges may remain unconnected, morphological operations are applied to connect them and to remove spurious edges.
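The preprocessing chain above (blob analysis, Gaussian denoising, contrast stretching, edge detection, and morphological cleanup) can be sketched as follows. This is an illustrative NumPy/SciPy implementation under assumed thresholds and filter sizes, with a simple gradient-magnitude detector standing in for Canny; it is not the paper's exact pipeline.

```python
import numpy as np
from scipy import ndimage

def preprocess_slice(ct_slice):
    """ct_slice: 2D float array, intensities normalized to [0, 1]."""
    # Blob analysis: keep only the largest connected bright region
    # (the body), discarding the CT table, cables, and other artifacts.
    fg = ct_slice > 0.2                      # assumed foreground threshold
    labels, n = ndimage.label(fg)
    if n > 0:
        sizes = ndimage.sum(fg, labels, range(1, n + 1))
        body_mask = labels == (int(np.argmax(sizes)) + 1)
    else:
        body_mask = fg
    body = ct_slice * body_mask

    # 2D Gaussian filter to suppress high-frequency speckle noise.
    smooth = ndimage.gaussian_filter(body, sigma=1.0)

    # Brightness/contrast stretching to the full [0, 1] range.
    lo, hi = smooth.min(), smooth.max()
    stretched = (smooth - lo) / (hi - lo) if hi > lo else smooth

    # Gradient-magnitude edges (a stand-in for Canny), then morphological
    # closing to connect weak edges that remain unconnected.
    grad = np.hypot(ndimage.sobel(stretched, 0), ndimage.sobel(stretched, 1))
    edges = ndimage.binary_closing(grad > 0.5)   # assumed edge threshold
    return stretched, edges
```

In practice, a true Canny detector (e.g., scikit-image's `feature.canny`) would replace the Sobel-based edge step.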

The obtained preliminary segmentation results are then used to detect the best-matching template using a shape matching algorithm [23]. This helps automate the segmentation process and therefore contributes to minimizing human errors during the diagnostic process. One hundred bone templates are manually created from the Visible Human Project dataset. These templates are compared to each CT slice in order to determine the best-matched template, which in turn enables the application of the corresponding training shape models to the preprocessed image during the bone segmentation phase.
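Best-template selection can be sketched as a normalized-correlation comparison against every template. The similarity measure here is purely an illustrative assumption; the shape matching algorithm of [23] is more elaborate.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two same-sized images."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_template(slice_img, templates):
    """Return the index of the template most similar to the slice."""
    return int(np.argmax([ncc(slice_img, t) for t in templates]))
```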

The last step in the segmentation process is the extraction of the pelvic bones. The standard active shape model (ASM) is a popular technique for bone segmentation. Standard ASM uses training images labeled with landmark points to generate statistical shape and intensity-level models of a desired object; the shape model can be iteratively deformed to locate the object in a test image [24]. The landmarks are points selected by an expert for the bone region in each registered image during the training phase. The pelvic bones in the original training images have different sizes, rotation angles, and locations, which may lead to unstable and unreliable shape models and hence inaccurate bone segmentation. In addition, standard ASM is highly sensitive to initialization and requires an initial position to be correctly assigned to the training model in order to detect a target object in the image. The algorithm then attempts to fit the shape model to the object; if the shape model is not accurately placed, the standard ASM may fail to detect the target object accurately.

In order to overcome these shortcomings, a new image registration algorithm, the registered active shape model (RASM), is developed using enhanced homogeneity feature extraction [15], the correlation coefficient as the similarity measure, affine transformation, and the Powell algorithm [25]. RASM creates a set of more robust training models, which results in more accurate segmentation. It includes two stages: a training stage, in which registered training models are created, and a testing stage, which includes automatic initialization. Figure 3 provides the flowchart of the RASM algorithm. After the creation of the training models, segmentation is performed on the test images. As mentioned earlier, manual initialization may fail to segment the targeted objects accurately; hence, an automated hierarchical initialization algorithm is used in this study. The proposed initialization process involves image registration, bone extraction, and edge detection to automatically and sequentially place the training models of each individual object on the test images and extract the bone from the background.
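The registration core of this stage — maximizing a correlation-coefficient similarity over a spatial transform with Powell's method — can be sketched as follows. For brevity the transform is restricted to translation plus rotation (the paper uses a full affine transform), and the cost function and parameterization are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage, optimize

def corr_coeff(a, b):
    """Correlation coefficient used as the similarity measure."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def register(fixed, moving):
    """Find (dy, dx, angle_deg) that best aligns `moving` onto `fixed`."""
    def cost(p):
        dy, dx, angle = p
        warped = ndimage.rotate(moving, angle, reshape=False, order=1)
        warped = ndimage.shift(warped, (dy, dx), order=1)
        return -corr_coeff(fixed, warped)    # minimize negative correlation
    res = optimize.minimize(cost, x0=[0.0, 0.0, 0.0], method='Powell')
    return res.x
```

Powell's method needs no gradients, which suits this image-driven cost; the trade-off is sensitivity to the starting point, which is why RASM pairs it with automatic initialization.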

2.2. Fracture Detection in Pelvic CT Images

After bone segmentation, a multistage process is used for fracture detection in pelvic CT scans. First, pelvic bone segmentation is conducted using the proposed RASM algorithm, as described in Section 2.1. The extracted bone boundaries are then utilized to create a series of adaptive windows. Next, the 2D stationary wavelet transform (SWT) is applied to each window, and the contour in each window is tested for discontinuities using boundary tracing. If there is a contour discontinuity in a window, it is considered a potential bone fracture.

2.2.1. Adaptive Window Creation

Discontinuities around the bone boundary help identify the presence of a fracture. Therefore, a detailed view of the bone boundary is required, obtained by forming windows around the bone whose sizes are adaptively adjusted to include the bone borders. Creating these adaptive windows around the bone boundary facilitates the process of identifying discontinuities. In this study, a systematic method is proposed to form adaptive windows around the bone boundary to include and detect possible discontinuities associated with fractures. The appearance of bone fractures in a pelvic CT scan depends on the injury severity. Major fractures are usually visible, while minor fractures may not severely distort the edge of the bone; instead, they may appear as dual edges or a single subedge that is slightly blurred compared to the neighboring edges. Therefore, it is important to refine the blurred boundary of each bone in order to achieve accurate fracture detection. The refinement is done using a wavelet transform, as described in the following subsections. However, due to local intensity variations, it may be difficult to achieve practical and desirable results by applying the wavelet transform to the entire bone structure. Hence, the detected bone boundary is divided into a series of windows, where the size and location of each window are determined by the area of the bone and the boundary detected using the RASM. This is called adaptive windowing, and the algorithm is explained in detail as follows.

On the basis of the segmentation formed by the RASM algorithm, the landmarks are placed on the boundary of each segmented bone. The windows are created starting from the first segmented pelvic bone region. The adaptive window is created on the basis of each landmark placed on the segmented bone boundary.

Let (x_ij, y_ij), for i = 1, …, M and j = 1, …, N, be the coordinates of the landmarks of each bone in the image, where M is the number of bones and N is the number of landmarks for each pelvic bone. Each landmark is located at the center position (x_ij, y_ij) of its window. The area of the window is determined as A_w = A_b/6, where A_b is the area of the corresponding piece of bone. The empirical constant 1/6 has been selected to ensure that the size of the window is appropriate. The side length of each square window is then L = sqrt(A_w). Since the area of each adaptive window is small, in order to obtain more suitable visualization, each window is scaled to a size of 256 × 256 by applying bilinear interpolation [14]. Sample adaptive windows are shown in Figure 4; each landmark is located at the center of its window.
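The windowing rule above can be sketched directly. `adaptive_window` is a hypothetical helper that extracts the square window of area A_b/6 centered at a landmark and rescales it to 256 × 256 with bilinear interpolation.

```python
import numpy as np
from scipy import ndimage

def adaptive_window(image, landmark, bone_area, out_size=256):
    """Extract the adaptive window centered at `landmark` = (row, col)."""
    area_w = bone_area / 6.0                 # A_w = A_b / 6
    half = int(round(np.sqrt(area_w) / 2))   # half of side length L = sqrt(A_w)
    r, c = landmark
    r0, r1 = max(r - half, 0), min(r + half, image.shape[0])
    c0, c1 = max(c - half, 0), min(c + half, image.shape[1])
    win = image[r0:r1, c0:c1]
    # Bilinear (order=1) rescaling to out_size x out_size.
    zoom = (out_size / win.shape[0], out_size / win.shape[1])
    return ndimage.zoom(win, zoom, order=1)
```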

2.2.2. The 2D Stationary Wavelet Transform

After adaptive windowing, the 2D stationary wavelet transform (SWT) is applied to each window in order to refine the blurred boundary of the pelvic bone. The classic discrete wavelet transform (DWT) suffers from the shortcoming that the DWT of a translated version of a signal or image is not, in general, the translated version of the DWT of that signal or image. To overcome this, the SWT, which is designed to be shift invariant [26], is applied in our work. The wavelet transform algorithm is explained as follows.

The wavelet transform decomposes an input signal into different frequency components using a series of filtering operations. A wavelet is a function ψ(t) with a zero average, ∫ ψ(t) dt = 0. Scaling ψ by s and translating it by u generates a family of wavelets ψ_{u,s}(t) = (1/√s) ψ((t − u)/s). The wavelet transform of a signal f at time u and scale s can then be represented as Wf(u, s) = ∫ f(t) ψ*_{u,s}(t) dt, which computes the wavelet transform of the input signal as a convolution with dilated band-pass filters. Two sets of coefficients are obtained through the wavelet transform: approximation coefficients cA_j and detail coefficients cD_j, where j is the level of decomposition and the detail coefficients include horizontal, vertical, and diagonal components. Decimation makes the classic wavelet transform a shift-variant process; to overcome this, the stationary discrete wavelet transform is used in this study.

The scaled window is first decomposed using the 2D stationary discrete wavelet transform. The classical discrete wavelet transform (DWT) is not a space-invariant transform; the SWT is an algorithm that does not decimate the coefficients at any level of decomposition [26]. Instead, the filters at level j are upsampled versions of those at level (j − 1). As with the 2D DWT, the decomposition outputs approximation, horizontal, vertical, and diagonal coefficients. In this application, three levels of decomposition are applied to each window using the Haar wavelet. The level-3 detail coefficients cD3(h), cD3(v), and cD3(d) are then extracted and used to reconstruct the detail arrays of horizontal, vertical, and diagonal coefficients. Figure 5 illustrates the 2D SWT decomposition.
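A minimal from-scratch illustration of the undecimated (à trous) Haar decomposition described above: the filters are upsampled between levels and no decimation occurs, so every coefficient array keeps the input size and the transform commutes with (circular) shifts. In practice a library such as PyWavelets (`pywt.swt2`) would be used; the periodic boundary handling below is an assumption of this sketch.

```python
import numpy as np
from scipy import ndimage

def swt2_haar_level(img, level):
    """One level of the undecimated 2D Haar transform. Filters are
    upsampled by 2**(level-1); no decimation, so output size == input size."""
    s = 2 ** (level - 1)
    lo = np.zeros(2 * s); lo[0], lo[s] = 0.5, 0.5    # upsampled lowpass
    hi = np.zeros(2 * s); hi[0], hi[s] = 0.5, -0.5   # upsampled highpass
    conv = lambda a, k, ax: ndimage.convolve1d(a, k, axis=ax, mode='wrap')
    ca = conv(conv(img, lo, 0), lo, 1)   # approximation
    ch = conv(conv(img, lo, 0), hi, 1)   # horizontal detail
    cv = conv(conv(img, hi, 0), lo, 1)   # vertical detail
    cd = conv(conv(img, hi, 0), hi, 1)   # diagonal detail
    return ca, ch, cv, cd

def swt2_haar(img, levels=3):
    """Cascade: each level refilters the previous approximation."""
    ca, details = np.asarray(img, dtype=float), []
    for j in range(1, levels + 1):
        ca, ch, cv, cd = swt2_haar_level(ca, j)
        details.append((ch, cv, cd))
    return ca, details
```

Because no coefficients are discarded, shifting the input simply shifts every coefficient array — the shift invariance the text relies on.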

The accuracy and running speed of the SWT algorithm were compared when extracting the upsampled coefficients separately at the 1st, 2nd, 3rd, and 4th levels. The algorithm runs on a computer with a 2.80 GHz Intel(R) Core(TM) i7 processor, a 64-bit operating system, and 6.0 GB of memory. For each CT slice, the 2nd level of stationary wavelet decomposition takes approximately 0.15 seconds more than the 1st level. While the 3rd level of decomposition is only 0.1 seconds slower than the 2nd level, more noise is filtered out and edges are clearer at the 3rd level than at the other two levels, which improves the accuracy of the fracture detection algorithm. Going to the 4th level adds another 0.15 seconds of delay while not adding much to the filtering performance. Hence, in order to achieve a suitable balance between running speed and accuracy, the 3rd level of the SWT is used in this work.

2.2.3. Masking

The next step in fracture detection is to create a binary version of the chosen detail array from the wavelet transform. This binary version contains not only the pelvic bone contour but also other redundant and unnecessary edges, so a mask is formed to filter these redundant edges out. The mask is formed by converting the smoothed window to a binary image using Otsu's threshold [27]. The threshold t is computed to minimize the intraclass variance, defined as a weighted sum of the variances of the two classes (black and white pixels): σ_w²(t) = ω₁(t)σ₁²(t) + ω₂(t)σ₂²(t), where the weights ω_i(t) are the probabilities of the two classes separated by the threshold t and σ_i²(t) are the variances of these classes. Minimizing the intraclass variance is equivalent to maximizing the interclass variance σ_b²(t) = ω₁(t)ω₂(t)[μ₁(t) − μ₂(t)]², where ω_i(t) are the class probabilities and μ_i(t) is the mean of class i.

The contour is then extracted from the binary image, and the unwanted edges are removed to create an edge window. A precise edge window is obtained by removing the extra edges in the image using the pelvic bone contour and the mask; the process can be viewed as a pixel-wise combination of the contour image and the mask. This edge window is used for the boundary tracing described in the next step.

2.2.4. Boundary Tracing

After masking, the final step in fracture detection is the detection of discontinuities, which is achieved by tracing the extracted bone edges. Small artifacts surrounding the extracted bone edges may interfere with the boundary tracing and must therefore be removed. This is done by applying morphological opening to remove all objects in the image with an area below a specific threshold, predefined as 1% of the window area in the testing step. The remaining edges are then traced using the 8-neighborhood of each pixel and are returned as a matrix of pixel positions. The traced edges represent the pelvic bone contours. If there is no fracture, the window contains a single continuous boundary; in the presence of a fracture, multiple boundaries are present in the window, depending on the type and severity of the fracture.
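The discontinuity test can be sketched as follows: components smaller than 1% of the window area are discarded, the remaining edge pixels are grouped with 8-connectivity, and more than one surviving boundary signals a potential fracture. Connected-component labeling stands in here for explicit pixel-by-pixel tracing.

```python
import numpy as np
from scipy import ndimage

def count_boundaries(edge_window):
    """Count connected edge components (8-connectivity) after removing
    artifacts smaller than 1% of the window area."""
    min_area = 0.01 * edge_window.size        # 1% of window area
    eight = np.ones((3, 3), dtype=bool)       # 8-neighborhood structure
    labels, n = ndimage.label(edge_window, structure=eight)
    sizes = ndimage.sum(edge_window, labels, range(1, n + 1))
    return int(np.sum(np.asarray(sizes) >= min_area))

def has_fracture(edge_window):
    """More than one surviving boundary suggests a discontinuity."""
    return count_boundaries(edge_window) > 1
```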

3. Results and Discussion

3.1. Dataset

The dataset has been obtained from the Virginia Commonwealth University Medical Center. Data have been collected from twelve patients with traumatic pelvic injuries. Forty-five to seventy-five images are collected from each patient. Axial CT images with five millimeter slice thickness are used for the study. Images collected from five patients are used for training, and the other seven patients’ images are used for testing. For fracture detection, a total of 12 patients are used, out of which 8 patients exhibit small to very severe bone fractures.

3.2. Results of Bone Segmentation

Figure 6 shows a sample segmentation of pelvic bones using RASM. For comparison, Figure 7 shows the results of pelvic bone segmentation via the standard ASM without initialization; the main reason for the inaccurate bone segmentation is that the initial positions of the training models are not correctly assigned. As reported in our previous work [22], the total segmentation accuracy over both the good and acceptable classes is 95.77%. These results were evaluated against ground truth provided by an expert radiologist.

3.3. Results of Fracture Detection

Figures 8 through 10 show the results obtained at various stages of fracture detection. In these figures, (a) is the original image, (b) is the extracted adaptive window after scaling, (c) is the enhanced window after brightness contrast stretching (for better visualization), and (d) shows the final fracture detection result. In Figure 8, the patient suffers from a minor fracture in the right iliac wing, which is detected in Figure 8(d). Figure 9 shows a case with no fracture; the result in Figure 9(d) shows that the bone appears smooth, with no fracture detected. Figure 10 illustrates a patient with a very severe fracture in the right ilium, and fractures are detected in the windows of this bone region; the example in Figure 10(d) indicates fractures in three different regions of the right ilium. These results were evaluated by an expert radiologist and are considered acceptable. For 8% of the cases, the method was unable to capture the fracture. The few cases in which the algorithm gave false alarms may be due either to the algorithm needing further refinement or to other factors such as the poor quality of these particular CT images.

The results show that the method can successfully detect bone fractures. Table 1 presents the performance of the method in detecting fractures. The proposed method is highly sensitive to the discontinuities present in the bone and is capable of detecting fractures.

Statistical results    Accuracy    Sensitivity    Specificity
Rate (%)               91.9821     93.3333        89.2617

3.4. Discussion

The results were validated on the basis of the assessment and evaluation made by radiologists on the CT scans in the above-mentioned database. As shown in the results, the designed algorithm is able to detect the fractures relatively accurately. Using the proposed algorithm, fractured bone may be further highlighted in the processed images; this could help radiologists better analyze the scans and increase the chances of capturing the fractures. Additionally, as can be seen in the results, our designed method may help quantify the fracture separation distance and the angle between the broken bone pieces, as well as other quantitative assessments of the fractures, which may not be easily accessible and measurable through visual inspection. The designed algorithm provides these clues and recommendations on fracture detection in an automated fashion and with relatively high speed (the processing time is less than one second per slice). This helps physicians reduce decision-making and diagnostic time, which is highly important for traumatic pelvic injuries.

4. Conclusion and Future Work

This paper presents a method for detecting fractures in pelvic bones using automated bone segmentation, adaptive windowing, boundary tracing, and the 2D stationary wavelet transform while incorporating anatomical information. The results show that the proposed method is capable of detecting fractures in pelvic bones accurately. Automated fracture detection, once verified with more data, will be an important component of a larger modular system that extracts features from CT images for a computer-assisted decision-making system. Future work will focus on quantitative measurement of fractures (e.g., horizontal displacement) on the basis of a larger dataset, as well as determination of fracture type.


Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant no. IIS0758410. The authors would like to thank the Virginia Commonwealth University Medical Center for providing the data for the study.


References

  1. M. A. Schiff, A. F. Tencer, and C. D. Mack, “Risk factors for pelvic fractures in lateral impact motor vehicle crashes,” Accident Analysis and Prevention, vol. 40, no. 1, pp. 387–391, 2008.
  2. A. Salim, P. G. R. Teixeira, J. DuBose et al., “Predictors of positive angiography in pelvic fractures: a prospective study,” Journal of the American College of Surgeons, vol. 207, no. 5, pp. 656–662, 2008.
  3. University of Maryland National Study Center for Trauma/EMS, “Lower extremity injuries among restrained vehicle occupants,” Tech. Rep., University of Maryland National Study Center for Trauma/EMS, 2001.
  4. G. S. Pajenda, H. Seitz, M. Mousavi, and V. Vecsei, “Concomitant intra-abdominal injuries in pelvic trauma,” Wiener Klinische Wochenschrift, vol. 110, no. 23, pp. 834–840, 1998.
  5. Z. Balogh, K. L. King, P. Mackay et al., “The epidemiology of pelvic ring fractures: a population-based study,” Journal of Trauma, vol. 63, no. 5, pp. 1066–1072, 2007.
  6. P. C. Ferrera and D. A. Hill, “Good outcomes of open pelvic fractures,” Injury, vol. 30, no. 3, pp. 187–190, 1999.
  7. F. D. Brenneman, D. Katyal, B. R. Boulanger, M. Tile, and D. A. Redelmeier, “Long-term outcomes in open pelvic fractures,” Journal of Trauma, vol. 42, no. 5, pp. 773–777, 1997.
  8. M. H. Moghari and P. Abolmaesumi, “Global registration of multiple bone fragments using statistical atlas models: feasibility experiments,” in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '08), pp. 5374–5377, August 2008.
  9. M. H. Moghari and P. Abolmaesumi, “Global registration of multiple point sets: feasibility and applications in multi-fragment fracture fixation,” in Proceedings of the 10th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '07), vol. 10, pp. 943–950, Brisbane, Australia, 2007.
  10. S. Winkelbach, R. Westphal, and T. Goesling, “Pose estimation of cylindrical fragments for semi-automatic bone fracture reduction,” in Proceedings of the 25th Annual Symposium of the German Association for Pattern Recognition (DAGM '03), vol. 2781 of Lecture Notes in Computer Science, pp. 566–573, Magdeburg, Germany, 2003.
  11. D. M. Ryder, S. L. King, C. J. Olliff, and E. Davies, “Possible method of monitoring bone fracture and bone characteristics using a non-invasive acoustic technique,” in Proceedings of the International Conference on Acoustic Sensing and Imaging, pp. 159–163, March 1993.
  12. T. S. Douglas, V. Sanders, R. Pitcher, and A. B. van As, “Early detection of fractures with low-dose digital X-ray images in a pediatric trauma unit,” Journal of Trauma, vol. 65, no. 1, pp. E4–E7, 2008.
  13. T. P. Tian, Y. Chen, W. K. Leow, W. Hsu, T. S. Howe, and M. A. Png, “Computing neck-shaft angle of femur for X-ray fracture detection,” in Proceedings of the International Conference on Computer Analysis of Images and Patterns, vol. 2756 of Lecture Notes in Computer Science, pp. 82–89, Springer, 2003.
  14. V. L. F. Lum, W. K. Leow, Y. Chen, T. S. Howe, and M. A. Png, “Combining classifiers for bone fracture detection in X-ray images,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), pp. 1149–1152, September 2005.
  15. C. Lee, S. Huh, T. A. Ketter, and M. Unser, “Unsupervised connectivity-based thresholding segmentation of midsagittal brain MR images,” Computers in Biology and Medicine, vol. 28, no. 3, pp. 309–338, 1998.
  16. J. Montagnat and H. Delingette, “4D deformable models with temporal constraints: application to 4D cardiac image segmentation,” Medical Image Analysis, vol. 9, no. 1, pp. 87–100, 2005.
  17. J. Schmid and N. Magnenat-Thalmann, “MRI bone segmentation using deformable models and shape priors,” in Proceedings of the 11th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '08), vol. 11, pp. 119–126, New York, NY, USA, 2008.
  18. P. C. T. Gonçalves, J. M. R. S. Tavares, and R. M. N. Jorge, “Segmentation and simulation of objects represented in images using physical principles,” Computer Modeling in Engineering and Sciences, vol. 32, no. 1, pp. 45–55, 2008.
  19. S. Sandor and R. Leahy, “Surface-based labeling of cortical anatomy using a deformable atlas,” IEEE Transactions on Medical Imaging, vol. 16, no. 1, pp. 41–54, 1997.
  20. W. Cai, S. Chen, and D. Zhang, “Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation,” Pattern Recognition, vol. 40, no. 3, pp. 825–838, 2007.
  21. H. A. Vrooman, C. A. Cocosco, R. Stokking et al., “KNN-based multi-spectral MRI brain tissue classification: manual training versus automated atlas-based training,” in Medical Imaging 2006: Image Processing, Proceedings of the SPIE, San Diego, Calif, USA, February 2006.
  22. J. Wu, P. Davuluri, K. Ward, C. Cockrell, R. Hobson, and K. Najarian, “A new hierarchical method for multi-level segmentation of bone in pelvic CT scans,” in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '11), 2011.
  23. S. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 509–522, 2002. View at: Publisher Site | Google Scholar
  24. T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active shape models-their training and application,” Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38–59, 1995. View at: Publisher Site | Google Scholar
  25. F. Maes, D. Vandermeulen, and P. Suetens, “Comparative evaluation of multiresolution optimization strategies for multimodality image registration by maximization of mutual information,” Medical Image Analysis, vol. 3, no. 4, pp. 373–386, 1999. View at: Google Scholar
  26. G. P. Nason and B. W. Silverman, “The stationary wavelet transform and some statistical applications,” in Wavelets and Statistics, vol. 103 of Lecture Notes in Statistics, pp. 281–299, Springer, 1995. View at: Google Scholar
  27. N. Otsu, “A threshold selection method from gray level histograms,” IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979. View at: Publisher Site | Google Scholar

Copyright © 2012 Jie Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
