Journal of Healthcare Engineering
Volume 2018, Article ID 2365178, 11 pages
https://doi.org/10.1155/2018/2365178
Research Article

Improved Surface-Based Registration of CT and Intraoperative 3D Ultrasound of Bones

1Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Circuito Escolar S/N, 04510 CDMX, Mexico
2Instituto de Ciencias Aplicadas y Tecnología, Universidad Nacional Autónoma de México, Ciudad Universitaria, Circuito Exterior S/N, 04510 CDMX, Mexico
3Instituto Nacional de Rehabilitación, Calzada México Xochimilco No. 289, Colonia Arenal de Guadalupe, 14389 CDMX, Mexico
4National Laboratory for Additive and Digital Manufacturing, Mexico

Correspondence should be addressed to F. Arámbula Cosío; fernando.arambula@iimas.unam.mx

Received 18 December 2017; Revised 9 March 2018; Accepted 1 April 2018; Published 3 June 2018

Academic Editor: Mehran Moazen

Copyright © 2018 Zian Fanti et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The intraoperative registration of preoperative CT volumes is a key process of most computer-assisted orthopedic surgery (CAOS) systems. This work reports a new method for the automatic registration of long bones, based on the segmentation of the bone cortical in intraoperative 3D ultrasound images. A bone classifier was developed based on features obtained from the principal component analysis of the Hessian matrix of every voxel in an intraoperative ultrasound volume. 3D freehand ultrasound was used for the acquisition of the intraoperative ultrasound volumes. Corresponding bone surface segmentations in ultrasound and preoperative CT imaging were used for the intraoperative registration. Validation on a phantom of the tibia produced encouraging results, with a maximum mean segmentation error of () and a registration accuracy error of ().

1. Introduction

The use of image-guided surgery (IGS) systems has expanded significantly over more than 20 years, both in the number of procedures performed and in the variety of clinical applications, in part due to the continuous advancement of computing power and medical imaging systems (as well as tracking systems, registration methods, software development frameworks, and visualization techniques). The main clinical applications of IGS systems in orthopedics are pedicle screw insertion, hip replacement, knee replacement, and fracture alignment [1].

Scheufler et al. [2] reported the results of image-guided instrumentation of the cervical, upper, and midthoracic pedicles. The study considers the insertion of 248 pedicle screws and reports safe and highly accurate results. Jeswani et al. [3] evaluated CT image-guided navigation of pedicle screws in small thoracic pedicles. The results showed that image-guided navigation allows for safe, effective, and accurate instrumentation of small () to very small () thoracic pedicles. Deep et al. [4] reviewed the literature on computer-assisted orthopedic surgery (CAOS) for knee and hip arthroplasty; the authors conclude that CAOS systems have gained a pivotal role in lower limb arthroplasty. The use of IGS systems in trauma has been explored for approximately the last eight years, and several methods and systems have been developed with various levels of success [5].

The recent development of minimally invasive arthroscopic procedures for fracture fixation in long bones has substantially improved the accuracy of the procedures, with shorter patient recovery times. The success of the surgery depends heavily on the skill and experience of the surgeon [6]; however, the use of CAOS systems in fracture fixation performed with arthroscopic procedures offers improved visualization of the surgical site with position feedback [7]. It has also been reported that the use of CAOS in arthroscopic fracture reduction procedures improves the accuracy of the surgery and achieves a shorter and more effective patient recovery [8]. Buschbaum et al. [9] reported the development of a virtual environment that helps the surgeon plan optimal reduction paths for femoral fractures. Weil et al. [10] reviewed the literature on the evolution of image-guided iliosacral screw placement for the reduction of pelvic ring fractures: 3D image-guided surgery is faster and more accurate and requires less X-ray exposure than conventional fluoroscopy-guided surgery. The authors stress the challenges of platform interoperability, learning times, and cost reduction.

The main stages of most CAOS systems are preoperative image acquisition (usually computed tomography) of the anatomic region of interest, computer-assisted modeling and surgery planning, intraoperative image registration, and finally surgery execution with some computer assistance based on the surgical plan. Intraoperative registration is a critical process of most CAOS systems, given that registration accuracy determines the precision of the surgical visualization, planning, and navigation with respect to the patient on the operating table [11].

Several methods have been developed for intraoperative registration of preoperative CT volumes for CAOS, with fiducial-based registration as the most widely used. Preoperatively, fiducial points are annotated in the medical images or graphic models of the bones, and intraoperatively, the same points are located in the bones of the patient using a navigated tool [12]. Usually only a few anatomical landmarks can be reliably selected; therefore, artificial fiducial markers have been used. The marker is surgically attached directly to the bone of the patient before the preoperative study is acquired. This is an accurate registration approach; however, it increases the complexity of the surgical procedure and the risk of complications due to the invasiveness required to reach each fiducial point.

In order to minimize the invasiveness of the registration process while maintaining high accuracy, image-based methods have been developed. Image-based registration has clear advantages: no artificial fiducials are needed, which limits bone exposure, and high accuracy can be obtained through the registration of large surface areas. Fluoroscopy-based registration has been extensively used [7, 13, 14]. However, fluoroscopes are large and expensive and expose the patient and the surgical staff to ionizing radiation. On the other hand, ultrasound is a convenient and safe intraoperative imaging modality; it is portable, low cost, and real time. Its main limitations are a low signal-to-noise ratio due to speckle, user-dependent acquisition and interpretation, and the inability to penetrate bone.

Previous studies have shown that intraoperative ultrasound can achieve errors that satisfy the accuracy requirements of surgery. Herring et al. [15] reported the use of spine phantoms submerged in water to locate fiducial points in ultrasound images. Submillimetric accuracy was reported for the registration of the fiducial points with the spine phantom. Ionescu et al. [16] reported one of the first registration methods based on the segmentation of the bone cortical in ultrasound images. Yan et al. [17] achieved registration errors of less than () in transpedicular screw insertion in porcine cadaver experiments, and of less than () in phantom experiments.

Most methods for image-based registration of CT and ultrasound can be classified as intensity-based or surface-based. Surface-based registration requires a feature extraction step in both modalities, so registration accuracy is affected by the accuracy of the feature extraction; intensity-based registration methods instead optimize a similarity (objective) function directly. Penney et al. [18], Winter et al. [19], and Gill et al. [20] developed different intensity-based registration methods and reported registration errors under (). The most important difference between the two registration approaches (intensity- or surface-based) is their suitability for a specific surgical procedure.

In surface-based registration methods, the manual or automatic segmentation of the bone surface in the intraoperative ultrasound images is necessary. Kowal et al. [21] developed an early automatic segmentation method for 2D ultrasound images of bones, based on pixel intensity and position, since bones usually appear bright and are located at the top of the ultrasound images. The method is sensitive to image contrast, with an average segmentation error of ().

Beek et al. [22] reported a method for surface-based registration during scaphoid fracture reduction surgery: 3D ultrasound images are segmented semiautomatically by annotating 10 seed points on the bone cortical, registration is performed using iterative closest points, and a segmentation error of () was reported. Hacihaliloglu et al. [23] developed a bone segmentation method for 3D ultrasound images, based on Log-Gabor filters and phase coherence; a segmentation error of () was reported for validation on bone phantoms.

In this work, we report a new method for the automatic segmentation of the bone cortical in 3D ultrasound imaging. The method is based on a voxel classifier trained with features obtained from the eigenanalysis of the Hessian matrix corresponding to every voxel in the ultrasound volume. The segmentation of the bone surface in an ultrasound volume is subsequently used for the intraoperative registration of the corresponding surface segmented in a CT volume. In the following sections, we report the acquisition of 3D freehand ultrasound volumes of long bones, the segmentation of the bone surface (cortical), and the registration of a preoperative CT volume. Experimental validation was performed on a phantom of the tibia.

2. Bone Segmentation and Registration Methods

2.1. Freehand Acquisition and Reconstruction of 3D Ultrasound Volumes

3D freehand ultrasound images were acquired with a conventional 2D scanner (Aloka 1000, Hitachi Aloka Medical America, Inc.) using a 7.5 MHz probe. An optical tracker (Polaris Spectra, Northern Digital, Inc.) was used to navigate the ultrasound probe. Ultrasound B-mode images were continuously acquired using a frame grabber (Epiphan Systems Inc.). For the reconstruction of the 3D ultrasound volumes, the probe was calibrated using the cross-wire methodology [24], and 3D volumes were reconstructed using the pixel-based method reported in [25]. Full details of the acquisition and reconstruction of 3D freehand ultrasound images have been published elsewhere [26, 27].
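As a concrete illustration of the reconstruction step, the sketch below bins every tracked B-mode pixel into its nearest voxel and averages overlapping contributions, in the spirit of the pixel-based (pixel nearest-neighbor) methods reviewed in [25]. It is a minimal approximation, not the authors' implementation; the input names (frames, transforms, voxel_size, volume_shape, origin) are assumptions, and the hole-filling pass that normally follows is omitted.

```python
# Minimal sketch of pixel-based (PNN) freehand 3D ultrasound reconstruction,
# in the spirit of the methods reviewed in [25]; input names are assumptions.
import numpy as np

def reconstruct_pnn(frames, transforms, voxel_size, volume_shape, origin):
    """Bin each tracked B-mode pixel into its nearest voxel and average."""
    acc = np.zeros(volume_shape, dtype=np.float64)   # intensity accumulator
    cnt = np.zeros(volume_shape, dtype=np.int64)     # number of hits per voxel
    for frame, T in zip(frames, transforms):         # T: 4x4 image-to-world
        h, w = frame.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Homogeneous pixel coordinates (u, v, 0, 1) of the whole frame.
        pix = np.stack([u.ravel(), v.ravel(),
                        np.zeros(u.size), np.ones(u.size)])
        world = T @ pix                              # pixels -> world space (mm)
        idx = np.round((world[:3].T - np.asarray(origin)) / voxel_size).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(volume_shape)), axis=1)
        idx, vals = idx[ok], frame.ravel()[ok]
        np.add.at(acc, tuple(idx.T), vals)           # accumulate intensities
        np.add.at(cnt, tuple(idx.T), 1)
    # Average overlapping contributions; empty voxels remain zero (holes).
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```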

2.2. Segmentation of the Bone Surface in 3D Ultrasound

The segmentation method has two main stages: bone surface enhancement, using feature extraction based on differential geometry, and surface segmentation using a Bayes classifier and a region growing algorithm. Bone surfaces in an ultrasound image are identified as bright regions above dark regions caused by the acoustic shadows produced by the bone. High-intensity echoes produced by the bone surface can be observed in ultrasound images as bright lines with a width of 2–4 mm. Jain and Taylor [28] have shown that a good estimate of the bone cortical in an ultrasound image lies at the center line of the bright echo line. The center line of the ultrasound echoes corresponds to the maximum principal curvature in the orthogonal (gradient) direction. This allows for the detection of the bone cortical using a method based on ridge detection, as described below.

2.2.1. Bone Surface Enhancement in 3D Ultrasound

The second derivatives in each direction around a voxel in an ultrasound volume provide information about the type of geometric shape to which the voxel belongs. The geometric shapes could be blobs, tubes, or surfaces [29]. Shape information was obtained from the principal component analysis of the Hessian matrix calculated at each voxel in the volume. The Hessian matrix (2) was constructed with the second partial directional derivatives, which are approximated by convolving every voxel in the volume with a Gaussian kernel (1) in each direction:

$$G(\mathbf{x};\sigma)=\frac{1}{(\sqrt{2\pi}\,\sigma)^{3}}\,e^{-\|\mathbf{x}\|^{2}/2\sigma^{2}},\qquad(1)$$

where $\sigma$ is the scale factor of the kernel. The size of the kernel is determined as three times $\sigma$. The Hessian matrix of a voxel is a symmetric matrix that contains the partial derivatives in all the possible directions, as shown in the following equation:

$$H=\begin{pmatrix}I_{xx}&I_{xy}&I_{xz}\\I_{yx}&I_{yy}&I_{yz}\\I_{zx}&I_{zy}&I_{zz}\end{pmatrix},\qquad(2)$$

where each $I_{uv}$ represents the second-order partial derivative of the voxel in the directions $u$ and $v$. The principal component analysis of the Hessian matrix results in three eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$ and the corresponding eigenvectors $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$, for each voxel in the ultrasound volume. With the eigenvalues sorted as $\lambda_1\geq\lambda_2\geq\lambda_3$, there is a set of conditions that determines the membership of a voxel to a certain type of geometric shape, which can be tubular, spherical, or a surface. The conditions are shown in Table 1.

Table 1: Basic conditions for geometric structure discrimination.

Surface: $\lambda_3\ll 0$; $\lambda_2\approx 0$; $\lambda_1\approx 0$
Tube: $\lambda_3\approx\lambda_2\ll 0$; $\lambda_1\approx 0$
Blob: $\lambda_3\approx\lambda_2\approx\lambda_1\ll 0$

Based on the conditions shown in Table 1, voxels that belong to a surface were chosen and assigned a new value given by (3). This highlights all the voxels that have a high probability of belonging to a surface [29]:

$$S(\mathbf{x})=\begin{cases}|\lambda_3|\,\omega(\lambda_2;\lambda_3)\,\omega(\lambda_1;\lambda_2),&\lambda_3<0,\\[4pt]0,&\text{otherwise},\end{cases}\qquad(3)$$

where

$$\omega(\lambda_s;\lambda_t)=\begin{cases}\left(1+\dfrac{\lambda_s}{|\lambda_t|}\right)^{\gamma},&\lambda_t\leq\lambda_s\leq 0,\\[6pt]\left(1-\alpha\dfrac{\lambda_s}{|\lambda_t|}\right)^{\gamma},&\dfrac{|\lambda_t|}{\alpha}>\lambda_s>0,\\[6pt]0,&\text{otherwise}.\end{cases}\qquad(4)$$

Equations (3) and (4) represent the condition that a voxel belongs to a surface. The condition was expressed as $\lambda_3\ll 0$ and $\lambda_1,\lambda_2\approx 0$; these inequalities are described by (4), where the value of $\omega$ decreases with the deviation from the condition $\lambda_s\approx 0$. We chose the values of $\gamma$ and $\alpha$ recommended by Sato et al. [29].
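The enhancement stage can be summarized in a short sketch: Gaussian second derivatives build the Hessian at every voxel, its eigenvalues are sorted, and the sheet measure of (3) and (4) is evaluated. This is a minimal rendering of the Sato et al. [29] filter under our reading of the equations; the default values of sigma, gamma, and alpha below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the Hessian-based surface (sheet) enhancement of
# Sato et al. [29]; sigma, gamma, and alpha defaults are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def sheet_measure(volume, sigma=1.0, gamma=0.5, alpha=0.25):
    """Per-voxel sheet likelihood from the eigenvalues of the Hessian."""
    v = volume.astype(np.float64)
    # Second partial derivatives via Gaussian derivative kernels, eq. (1).
    orders = [(2, 0, 0), (0, 2, 0), (0, 0, 2),
              (1, 1, 0), (1, 0, 1), (0, 1, 1)]
    Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = (gaussian_filter(v, sigma, order=o)
                                    for o in orders)
    # Symmetric Hessian matrix at every voxel, eq. (2).
    H = np.stack([np.stack([Dxx, Dxy, Dxz], -1),
                  np.stack([Dxy, Dyy, Dyz], -1),
                  np.stack([Dxz, Dyz, Dzz], -1)], -2)
    ev = np.linalg.eigvalsh(H)                        # ascending eigenvalues
    l1, l2, l3 = ev[..., 2], ev[..., 1], ev[..., 0]   # l1 >= l2 >= l3

    def omega(ls, lt):
        """Weight of eq. (4): decays as ls deviates from ls ~ 0."""
        at = np.abs(lt) + 1e-12
        neg = np.clip(1.0 + ls / at, 0.0, None) ** gamma          # ls <= 0 branch
        pos = np.clip(1.0 - alpha * ls / at, 0.0, None) ** gamma  # ls > 0 branch
        return np.where(ls <= 0, np.where(lt <= ls, neg, 0.0), pos)

    # Sheet condition of eq. (3): l3 << 0 with l1, l2 ~ 0.
    return np.where(l3 < 0, np.abs(l3) * omega(l2, l3) * omega(l1, l2), 0.0)
```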

2.2.2. Bone Surface Extraction in 3D Ultrasound

A bone segmentation algorithm based on a Bayes classifier was developed. The classifier was trained with features of the voxel intensity and the first, second, and third moments of the enhanced voxel values previously calculated. Three classes were considered for training: bone, soft tissue, and acoustic shadow produced by the bone. For each voxel, a feature vector was constructed as $\mathbf{v}=[I(\mathbf{x}),E(\mathbf{x}),\mu(\mathbf{x}),\sigma^{2}(\mathbf{x}),s(\mathbf{x})]$, where $I(\mathbf{x})$ and $E(\mathbf{x})$ are the original image and the enhanced image values at a specific position $\mathbf{x}$; $\mu(\mathbf{x})$, $\sigma^{2}(\mathbf{x})$, and $s(\mathbf{x})$ are the mean, variance, and skewness in a window centered at $\mathbf{x}$. For each vector $\mathbf{v}$, the probability of membership to each class $C$ (bone, tissue, and shadow) was estimated using the Bayes posterior probability defined in the following equation:

$$P(C\mid\mathbf{v})=\frac{P(\mathbf{v}\mid C)\,P(C)}{P(\mathbf{v})},\qquad(5)$$

where $P(C)$ is the a priori probability of each class obtained from the training sample proportions and $P(\mathbf{v})$ is the a priori probability of vector $\mathbf{v}$. $P(\mathbf{v}\mid C)$ is the conditional probability of $\mathbf{v}$ given $C$. To estimate $P(\mathbf{v}\mid C)$, we assumed a normal distribution for each class (bone, tissue, and shadow), as defined in the following equation [30]:

$$P(\mathbf{v}\mid C)=\frac{1}{(2\pi)^{5/2}|\Sigma|^{1/2}}\exp\!\left(-\frac{1}{2}(\mathbf{v}-\boldsymbol{\mu})^{T}\Sigma^{-1}(\mathbf{v}-\boldsymbol{\mu})\right),\qquad(6)$$

where $\Sigma$ is the covariance matrix and $\boldsymbol{\mu}$ is the mean of the set of vectors that belong to the class. Substituting (6) in (5) and discarding $P(\mathbf{v})$, we can calculate a discriminant function $g_{C}(\mathbf{v})$:

$$g_{C}(\mathbf{v})=-\frac{1}{2}(\mathbf{v}-\boldsymbol{\mu})^{T}\Sigma^{-1}(\mathbf{v}-\boldsymbol{\mu})-\frac{1}{2}\ln|\Sigma|+\ln P(C).\qquad(7)$$

Equation (7) allowed us to classify each voxel in an ultrasound volume as follows: for each voxel, we calculate the corresponding vector $\mathbf{v}$ and evaluate (7) for each of the classes: tissue, bone, and shadow. A voxel was labeled as bone (i.e., labeled as "1") if $g_{\text{bone}}(\mathbf{v})>g_{\text{tissue}}(\mathbf{v})$ and $g_{\text{bone}}(\mathbf{v})>g_{\text{shadow}}(\mathbf{v})$; otherwise, it was labeled as background (i.e., labeled as "0").

The mean and covariance of each class (bone, tissue, and shadow) were estimated on a small training volume, which should include voxels of all three classes. To produce one complete surface with a minimum of holes, a region growing process was also applied [31]. Seed points were selected as the 0.01 percent of the previously classified bone voxels with the highest probability. Starting from these seed points, the bone surface was segmented by considering all the neighboring voxels previously classified as bone.
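The classification step of (5)–(7) amounts to a Gaussian discriminant per class. The following sketch, assuming feature vectors have already been extracted into an (N, 5) array and that integer class labels are used, is one plausible rendering of this classifier rather than the authors' code:

```python
# Minimal sketch of the Gaussian (Bayes) voxel classifier of Section 2.2.2:
# each class (bone, tissue, shadow) is modeled as a multivariate normal, and
# a voxel is labeled bone when the bone discriminant (eq. (7)) dominates.
import numpy as np

class GaussianBayes:
    def fit(self, X, y):
        """Estimate mean, covariance, and prior of each class from training voxels."""
        self.classes = np.unique(y)
        self.params = {}
        for c in self.classes:
            Xc = X[y == c]
            self.params[c] = (Xc.mean(axis=0),
                              np.cov(Xc, rowvar=False),
                              len(Xc) / len(X))       # prior P(C)
        return self

    def discriminant(self, X, c):
        """g_C(v) = -0.5 (v-mu)^T S^-1 (v-mu) - 0.5 ln|S| + ln P(C), eq. (7)."""
        mu, S, prior = self.params[c]
        d = X - mu
        maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)
        return -0.5 * maha - 0.5 * np.log(np.linalg.det(S)) + np.log(prior)

    def is_bone(self, X, bone=0):
        """Label '1' where the bone discriminant exceeds both other classes;
        assumes label 0 denotes bone (classes are sorted by np.unique)."""
        g = np.stack([self.discriminant(X, c) for c in self.classes])
        return (np.argmax(g, axis=0) == bone).astype(np.uint8)
```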

2.3. Intraoperative Registration of CT Volumes

The bone surface (cortical) was segmented manually in the preoperative CT volume and automatically in the intraoperative ultrasound volume, using the method reported in the previous section. From each segmentation, the corresponding mesh was generated using Marching Cubes [32]; each mesh had around () vertices. Registration was performed in two stages. First, a few points (three or four) were selected manually, with coarse accuracy, in both meshes to obtain the initial rough alignment required by the iterative closest point (ICP) method. Then a subset of 2000 points was randomly sampled from both meshes, and mesh registration was performed using iterative closest points [33]. The software was developed using 3D Slicer [34], with specific modules developed for this research using ITK (http://www.itk.org) and VTK (http://www.vtk.org).
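Since the meshes are handled with VTK, the ICP stage can be sketched with VTK's built-in ICP transform as below. This is a minimal sketch, not the authors' module: `ct_mesh` and `us_mesh` are assumed vtkPolyData surfaces already coarsely aligned with the manually picked points, and VTK's landmark subsampling stands in for the explicit random 2000-point subset.

```python
# Minimal sketch of the ICP stage of Section 2.3 using VTK's ICP transform;
# the surfaces are assumed to be coarsely pre-aligned vtkPolyData meshes.
import vtk

def register_icp(ct_mesh, us_mesh, n_landmarks=2000):
    """Rigidly register the CT mesh onto the ultrasound mesh with ICP [33]."""
    icp = vtk.vtkIterativeClosestPointTransform()
    icp.SetSource(ct_mesh)                            # moving: preoperative CT
    icp.SetTarget(us_mesh)                            # fixed: intraoperative US
    icp.GetLandmarkTransform().SetModeToRigidBody()   # rotation + translation only
    icp.SetMaximumNumberOfLandmarks(n_landmarks)      # subsample the vertices
    icp.SetMaximumNumberOfIterations(100)
    icp.Update()

    # Apply the resulting transform to obtain the registered CT mesh.
    warp = vtk.vtkTransformPolyDataFilter()
    warp.SetInputData(ct_mesh)
    warp.SetTransform(icp)
    warp.Update()
    return warp.GetOutput(), icp.GetMatrix()
```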

3. Results and Discussion

The accuracy of the bone segmentation and registration was evaluated on a realistic phantom of the tibia, which was constructed with a synthetic tibial bone (ERP #1117-42, Sawbones Inc., Vashon, WA, USA) immersed in a hydrogel made of polyvinyl alcohol (PVA) diluted in 95% water. It has been shown that PVA resembles the appearance and mechanical properties of soft tissue in ultrasound images [35]. A passive tracking tool was firmly attached to the distal end of the tibia phantom. Figure 1(a) shows a photograph of the PVA phantom of the tibia. The dimensions and shape of the tibia phantom were accurately measured using microCT.

Figure 1: Polyvinyl alcohol (PVA) phantom used in this study. (a) PVA phantom with the tracking tool attached. (b) Three orthogonal views from a 3D ultrasound volume of the phantom acquired, with the 3D freehand ultrasound technique. (c) Three orthogonal views from the CT volume of the phantom.
3.1. CT Image Acquisition

A microCT scanner (Nikon Metrology XTH 225) with a () pixel matrix was used to scan the phantom of the tibia, including the tracking tool; the imaging settings were 220 kV, 61 µA, 3142 projections, and one image per projection, and no physical filter was used. The final volume dimensions were () voxels in the three axes, respectively, with an isometric voxel size of 0.115 mm. The acquired CT volume had a very high resolution, which helps improve the accuracy measurements due to a small resolution error () in the CT data. Figure 1(c) shows three orthogonal views of the acquired CT volume of the phantom.

3.2. 3D Freehand Ultrasound Image Acquisition

3D freehand ultrasound images were acquired from the tibia phantom, and the volume was reconstructed as described in Section 2.1. The origin of our coordinate frame was located at the center of one of the reflecting spheres of the tracking tool attached to the tibial bone, shown in Figure 1(a), and the voxel size for the reconstruction of the ultrasound volume was the same as that of the CT: 0.115 mm. The phantom was scanned with the ultrasound probe using the same water-based gel used for clinical ultrasound scans. The phantom was scanned at different sections, always including the tibial plateau. Figure 1(b) shows three orthogonal views of the acquired ultrasound volume of the phantom.

3.3. 3D Ultrasound Segmentation

A small ultrasound volume (() voxels) was acquired from the test phantom for training of the segmentation method. Bone, tissue, and shadow regions were manually annotated. The corresponding voxel feature values were stored as the training sample for the Bayes classifier described in Section 2.2. This training set was used for all the experiments reported below, and the training process was performed only once.

The accuracy of the bone segmentation of the intraoperative ultrasound volumes was measured using as a reference the manual segmentation of the bone in the high-resolution CT data. This segmentation was approved by an expert orthopedic surgeon. As previously mentioned, all the intraoperative ultrasound images were acquired with the origin of the navigation reference frame located at the tracking tool attached to the plastic bone. Since the position of the tracking spheres in the corresponding CT volume was determined, the CT and the ultrasound images were accurately registered. All the remaining errors between the bone surface in CT and ultrasound were mainly due to segmentation errors of the bone cortical in the ultrasound volume (as illustrated in Figure 2).

Figure 2: Results of the segmentation process. (a) One slice of the ultrasound volume of the tibia phantom. (b) Same slice as in (a) with the CT segmentation overlapped. (c) Same slice as in (a) with the result of the ultrasound segmentation overlapped. (d) Same slice as in (a) with the ultrasound and CT segmentations overlapped. (e) Ultrasound and CT reference segmentations overlapped. (f) 3D reconstruction of the resulting ultrasound segmentation of one experiment. (g) 3D reconstruction of the CT reference segmentation. (h) 3D reconstructions of the ultrasound and CT segmentations overlapped.

Twelve acquisitions of 3D freehand ultrasound volumes were performed at different sections of the phantom of the tibia, which was placed each time at a different position and orientation on the experiment table. The bone cortical in each volume was automatically segmented, and the corresponding mesh was constructed. Table 2 summarizes the mean distances from each node of the ultrasound mesh to the nearest node of the reference CT mesh. As can be observed in Table 2, the maximum mean segmentation error was () for experiment four; Figure 3 shows the segmentation errors graphically. The overall mean segmentation error for the twelve experiments performed was (). The average processing time per experiment, including volume reconstruction and volume segmentation, was about five minutes. Holes in the segmentation, due to shadows in the acquisition of bone ultrasound, were not taken into account.
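The error metric itself is a nearest-neighbor mesh-to-mesh distance, which can be sketched in a few lines; the (N, 3) vertex arrays in millimeters and the helper name `segmentation_errors` are assumptions:

```python
# Minimal sketch of the segmentation error metric of Section 3.3: distance
# from every ultrasound mesh vertex to the nearest reference CT mesh vertex.
import numpy as np
from scipy.spatial import cKDTree

def segmentation_errors(us_vertices, ct_vertices):
    """Return mean, standard deviation, and maximum nearest-node distance."""
    d, _ = cKDTree(ct_vertices).query(us_vertices)  # one distance per US vertex
    return d.mean(), d.std(), d.max()
```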

Table 2: Segmentation errors of the twelve experiments performed. For each experiment, the maximum, mean, and standard deviation of the errors, as well as the number of mesh vertices resulting from the segmented ultrasound volume, are shown.
Figure 3: Distribution of the segmentation errors. The height of each bar represents the mean distance error, and the black line represents the standard deviation for each experiment. The mean error values are shown on top of each bar.
3.4. Intraoperative Registration

Twelve registration experiments were performed. In each one, the mesh obtained from the manual segmentation of the bone surface in the CT images of the phantom was registered, using the method described in Section 2.3, with the tibia phantom located at a different position on the experiment table. The errors of the registration process were measured as follows.

For each position of the phantom on the table, a 3D freehand ultrasound volume of a different section of the tibia was acquired. The bone surface in the ultrasound volume was automatically segmented using the method described in Section 2.2.2, and from the resulting segmentation the ultrasound mesh was constructed. The mesh constructed from the manual segmentation of the bone surface in the CT images of the phantom (the preoperative CT mesh) was registered with the ultrasound mesh using iterative closest points, as described in Section 2.3; the result is the registered CT mesh (Figure 4(a)). In order to have a reference against which to measure the registration error, the preoperative CT mesh was also registered with the phantom at each position using the transformation given by the optical tracker; this accurately registered mesh is the reference CT mesh (Figure 4(b)). The accuracy of each registration experiment was then measured between the registered CT mesh and the reference CT mesh, as described below.

Figure 4: (a) Registration of the preoperative CT mesh and the intraoperative ultrasound mesh; the registered CT mesh is shown in green. (b) Registration of the preoperative CT mesh using the tracking tool to obtain the accurately registered reference CT mesh.

For each experiment, the same preoperative CT mesh was registered against the tibia phantom by two independent methods: registered with the method reported in Section 2.3 and exactly registered using the tracking tool. The target registration error (TRE) of each experiment was measured as the mean distance between all corresponding vertices in the two registrations. Figure 5 shows the results of one experiment before (Figure 5(a)) and after (Figure 5(b)) registration; the reference mesh is shown in yellow, the ultrasound-generated mesh is shown in blue, and both meshes are shown overlapping one slice of the ultrasound volume. Figure 6 shows the errors for six experiments, with the registered ultrasound mesh in red and the TRE in false color on the reference mesh. The TRE values for all the experiments are reported in Table 3 and are shown graphically in Figure 7. The average time taken by the registration process was approximately two minutes.
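Because the two registered copies of the CT mesh share vertex ordering, the TRE reduces to a mean distance between corresponding vertices. A minimal sketch, assuming (N, 3) vertex arrays in millimeters:

```python
# Minimal sketch of the TRE of Section 3.4: mean distance between corresponding
# vertices of the ICP-registered CT mesh and the tracker-registered reference.
import numpy as np

def target_registration_error(reg_vertices, ref_vertices):
    """Mean and standard deviation of per-vertex distances (in mm)."""
    d = np.linalg.norm(reg_vertices - ref_vertices, axis=1)
    return d.mean(), d.std()
```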

Figure 5: (a) Orthogonal views of the ultrasound volume with the unregistered ultrasound mesh (shown in blue) and registered reference CT mesh (shown in yellow). (b) Orthogonal views of the ultrasound volume with the registered ultrasound and CT meshes overlapped. (c) Ultrasound (blue) and CT (yellow) 3D meshes before (left) and after (right) registration.
Figure 6: Results of six different experiments. The result of the bone segmentation in the intraoperative ultrasound is shown in red. The registered CT mesh is shown with a false color scale illustrating the target registration error (in mm), for each point on the surface of the CT. The mean target registration error, of each experiment, is shown as TRE on the top-right corner of each subfigure.
Table 3: TRE of the twelve experiments performed.
Figure 7: Distribution of the registration errors. The height of each bar represents the mean error, and the black line represents the standard deviation for each experiment. The mean error values are shown on top of each bar.
3.5. Discussion

A fully automatic 3D method was developed for the segmentation of the cortical bone in 3D ultrasound images acquired with the 3D freehand ultrasound technique, which enables the acquisition of large volumes. The eigenvalues, corresponding to the principal component vectors of the 3D Hessian matrix of each voxel in an ultrasound volume, were used to enhance the bone surface, following the work of Sato et al. [29]. A Bayes classifier was trained with five features: the original and enhanced voxel intensity values, as well as the first, second, and third moments of the enhanced voxel values. Validation of the segmentation method on a realistic phantom of the tibia immersed in PVA allowed for accurate estimates of the segmentation errors, since PVA has mechanical properties similar to those of tissue in ultrasound imaging. The acquisition of ultrasound images was performed with the same water-based gel used clinically.

Segmentation errors were measured as the distances between all the points of the segmented ultrasound mesh and the nearest points in the reference CT mesh (Section 3.3). Table 2 summarizes the segmentation errors obtained for twelve different ultrasound volumes of the same phantom (approximate volume size of () voxels, corresponding to ()). A maximum mean error of () is shown for experiment four. Figure 3 shows the distribution of all the mean error values; the overall mean for the twelve segmentation experiments performed was (). These results show improvement over previously reported methods for the segmentation of bones in 3D ultrasound. Kowal et al. [21] reported the automatic segmentation of bone contours in 2D ultrasound images, with a mean segmentation error of the bone contour lines of (). Beek et al. [22] reported an IGS system for the fixation of scaphoid fractures based on preoperative CT and intraoperative ultrasound images. A semiautomatic method was developed for the segmentation of bone contours in 2D ultrasound. Validation was performed on plastic phantoms of the scaphoid bone immersed in water. The authors reported a mean segmentation error of (). Hacihaliloglu et al. [23] reported an automatic method for bone segmentation based on Log-Gabor filters and phase coherence. A mean segmentation error of () was reported for validation on a realistic bovine bone phantom immersed in solid gel.

The bone surfaces obtained from the automatic segmentation of the ultrasound volumes were used to register a high-resolution CT model of the tibia using iterative closest points. Twelve intraoperative registration experiments were performed using the same tibia phantom located at different positions on the experiment table. The target registration error (TRE) of each experiment was calculated as the distance between all corresponding vertices in the registered CT mesh and the reference CT mesh. Table 3 shows a maximum mean TRE of () for experiment two. The overall mean TRE for the twelve experiments was (), which is smaller than other previously reported TREs: Beek et al. [22] reported a mean TRE of () for three alumina beads of () diameter, which includes the uncertainty in locating the exact center of the alumina beads; Penney et al. [18] reported an RMS TRE of () or less for three registration experiments on the left and right femurs of cadavers.

Figure 6 shows the target registration errors (TREs) illustrated in a false color scale, for each point on the surface of the CT. As expected, smaller registration errors are obtained when large areas of the tibia phantom are scanned with the ultrasound probe. The lower row of Figure 6 shows three cases with mean target registration errors smaller than 0.24 mm.

The shape of the diaphysis (i.e., the middle section) in long bones is very similar along the bone. This makes it difficult to achieve an accurate registration if the scanned area contains only the diaphysis of the bone. In order to obtain better accuracy in the registration process, it is advisable to include part of the epiphysis (i.e., ends of the long bone) into the scan since the epiphysis contains features that can be captured with ultrasound images. Figure 6 shows that when the scanned areas do not contain part of the epiphysis, the registration error is larger.

The mean registration time including ultrasound volume reconstruction, surface segmentation, and ICP registration was approximately eight minutes. All computations were made on a Mac Pro with a 2.8 GHz Quad Core Intel Xeon processor and 16 GB of RAM.

4. Conclusions

Fast and accurate intraoperative registration is a critical stage of most computer-assisted orthopedic surgery (CAOS) systems. The use of intraoperative ultrasound for image-guided registration in CAOS has several advantages: low cost, no exposure to ionizing radiation, and compact size. However, the accurate registration of CT and ultrasound of bones is still a research challenge, due to the low signal-to-noise ratio of ultrasound imaging and its inability to penetrate bone.

A new method for the intraoperative registration of preoperative CT volumes of long bones was reported in this work. The method is based on the automatic 3D segmentation of the bone cortical in 3D freehand ultrasound imaging, which enables the acquisition of large bone sections. The method is able to segment the bone surface in large ultrasound volumes with an overall mean segmentation error of (). The approximate segmentation time per volume was 1 min (for volumes of () voxels). The corresponding preoperative CT was manually segmented, and both meshes (ultrasound and CT) were accurately registered using iterative closest points. Accurate registration was achieved with minimum user interaction: the maximum TRE was under (), and the mean maximum TRE was ().

It has been observed that long bones have very similar surface shapes at completely different locations. A conventional nonnavigated 3D ultrasound probe can only scan a small part of the bone. This makes 3D freehand ultrasound an optimal choice for the intraoperative registration of long bones, since large parts of the bone can be scanned and automatically segmented with the method reported. After the acquisition of the 3D ultrasound images, the total processing time was approximately 8 min: 5 min for volume reconstruction, 1 min for bone segmentation, and 2 min for CT registration. Ultrasound volume reconstruction is suitable for parallel implementation, which can significantly reduce the total reconstruction time. Atesok et al. [5] reported that, on average, 14 min of extra surgical time is added for 2D fluoroscopic navigation in fracture reduction surgery. This extra time is acceptable to surgeons given the localization advantages of navigated instruments during minimally invasive procedures.

The image segmentation and registration methods reported here are suitable for minimally invasive surgery of long bones, such as fracture reduction of the femur and tibia. Their applicability to other arthroscopic surgical procedures, such as total knee replacement, will depend on the feasibility of scanning the surgical site with ultrasound imaging.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

The authors would like to thank the National Council of Science and Technology of México (CONACYT) for the financial support of this work under grant SALUD-2006-45519. Fabian Torres and Zian Fanti gratefully acknowledge their postgraduate scholarships from CONACYT.

References

  1. K. Cleary and T. M. Peters, “Image-guided interventions: technology review and clinical applications,” Annual Review of Biomedical Engineering, vol. 12, no. 1, pp. 119–142, 2010.
  2. K.-M. Scheufler, J. Franke, A. Eckardt, and H. Dohmen, “Accuracy of image-guided pedicle screw placement using intraoperative computed tomography-based navigation with automated referencing, part I: cervicothoracic spine,” Neurosurgery, vol. 69, no. 4, pp. 782–795, 2011.
  3. S. Jeswani, D. Drazin, J. C. Hsieh et al., “Instrumenting the small thoracic pedicle: the role of intraoperative computed tomography image-guided surgery,” Neurosurgical Focus, vol. 36, no. 3, p. E6, 2014.
  4. K. Deep, S. Shankar, and A. Mahendra, “Computer assisted navigation in total knee and hip arthroplasty,” SICOT-J, vol. 3, p. 50, 2017.
  5. K. Atesok, J. Finkelstein, A. Khoury, M. Liebergall, and R. Mosheiff, “CT (ISO-C-3D) image based computer assisted navigation in trauma surgery: a preliminary report,” Injury Extra, vol. 39, no. 2, pp. 39–43, 2008.
  6. G. Burdin, “Arthroscopic management of tibial plateau fractures: surgical technique,” Orthopaedics and Traumatology: Surgery and Research, vol. 99, no. 1, pp. S208–S218, 2013.
  7. K. S. Leung, N. Tang, L. W. H. Cheung, and E. Ng, “Image-guided navigation in orthopaedic trauma,” Bone and Joint Journal, vol. 92-B, no. 10, pp. 1332–1337, 2010.
  8. D. M. Kahler, “Navigated long-bone fracture reduction,” Journal of Bone and Joint Surgery, vol. 91, no. S1, pp. 102–107, 2009.
  9. J. Buschbaum, R. Fremd, T. Pohlemann, and A. Kristen, “Computer-assisted fracture reduction: a new approach for repositioning femoral fractures and planning reduction paths,” International Journal of Computer Assisted Radiology and Surgery, vol. 10, no. 2, pp. 149–159, 2015.
  10. Y. Weil, M. Liebergall, and A. Khoury, “Computer assisted surgery for iliosacral screw placement-how far have we gone,” Journal of Trauma and Treatment, vol. 5, no. 345, 2016.
  11. A. M. DiGioia and L.-P. Nolte, “The challenges for CAOS: what is the role of CAOS in orthopaedics?” Computer Aided Surgery, vol. 7, no. 3, pp. 127–128, 2002.
  12. M. A. Audette, F. P. Ferrie, and T. M. Peters, “An algorithmic overview of surface registration techniques for medical imaging,” Medical Image Analysis, vol. 4, no. 3, pp. 201–217, 2000.
  13. M. Citak, M. Citak, E. M. Suero, P. F. O’Loughlin, T. Hüfner, and C. Krettek, “Navigated reconstruction of a tibial plateau compression fracture post-virtual reconstruction: a case report,” The Knee, vol. 18, no. 3, pp. 205–208, 2011.
  14. Y. A. Weil, A. Greenberg, A. Khoury, R. Mosheiff, and M. Liebergall, “Computerized navigation for length and rotation control in femoral fractures: a preliminary clinical study,” Journal of Orthopaedic Trauma, vol. 28, no. 2, pp. e27–e33, 2014.
  15. J. Herring, B. Dawant, J. Maurer et al., “Surface-based registration of CT images to physical space for image-guided surgery of the spine: a sensitivity study,” IEEE Transactions on Medical Imaging, vol. 17, no. 5, pp. 743–752, 1998.
  16. G. Ionescu, S. Lavallée, and J. Demongeot, “Automated registration of ultrasound with CT images: application to computer assisted prostate radiotherapy and orthopedics,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI’99, C. Taylor and A. Colchester, Eds., vol. 1679 of Lecture Notes in Computer Science, pp. 768–777, Springer, Berlin, Germany, 1999.
  17. C. X. B. Yan, B. Goulet, J. Pelletier, S.-S. Chen, D. Tampieri, and D. L. Collins, “Towards accurate, robust and practical ultrasound-CT registration of vertebrae for image-guided spine surgery,” International Journal of Computer Assisted Radiology and Surgery, vol. 6, no. 4, pp. 523–537, 2011.
  18. G. P. Penney, D. C. Barratt, C. S. K. Chan et al., “Cadaver validation of intensity-based ultrasound to CT registration,” Medical Image Analysis, vol. 10, no. 3, pp. 385–395, 2005.
  19. S. Winter, B. Brendel, I. Pechlivanis, K. Schmieder, and C. Igel, “Registration of CT and intraoperative 3-D ultrasound images of the spine using evolutionary and gradient-based methods,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 3, pp. 284–296, 2008.
  20. S. Gill, P. Abolmaesumi, G. Fichtinger et al., “Biomechanically constrained groupwise ultrasound to CT registration of the lumbar spine,” Medical Image Analysis, vol. 16, no. 3, pp. 662–674, 2012.
  21. J. Kowal, C. Amstutz, F. Langlotz, H. Talib, and M. G. Ballester, “Automated bone contour detection in ultrasound B-mode images for minimally invasive registration in computer-assisted surgery—an in vitro evaluation,” The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 3, no. 4, pp. 341–348, 2007.
  22. M. Beek, P. Abolmaesumi, S. Luenam, R. E. Ellis, R. W. Sellens, and D. R. Pichora, “Validation of a new surgical procedure for percutaneous scaphoid fixation using intra-operative ultrasound,” Medical Image Analysis, vol. 12, no. 2, pp. 152–162, 2008.
  23. I. Hacihaliloglu, P. Guy, A. J. Hodgson, and R. Abugharbieh, “Volume-specific parameter optimization of 3D local phase features for improved extraction of bone surfaces in ultrasound,” The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 10, no. 4, pp. 461–473, 2014.
  24. R. W. Prager, R. N. Rohling, A. H. Gee, and L. Berman, “Rapid calibration for 3-D freehand ultrasound,” Ultrasound in Medicine and Biology, vol. 24, no. 6, pp. 855–869, 1998.
  25. O. V. Solberg, F. Lindseth, H. Torp, R. Blake, and T. Nagelhus Hernes, “Freehand 3D ultrasound reconstruction algorithms—a review,” Ultrasound in Medicine and Biology, vol. 33, no. 7, pp. 991–1009, 2007.
  26. F. Torres, Z. Fanti, E. Lira et al., “Image tracking and volume reconstruction of medical ultrasound,” Revista Mexicana de Ingeniería Biomédica, vol. 33, no. 2, pp. 101–115, 2012.
  27. Z. Fanti, F. Torres, and F. Arámbula Cosío, “Preliminary results in large bone segmentation from 3D freehand ultrasound,” in Proceedings of the IX International Seminar on Medical Information Processing and Analysis, vol. 8922, Mexico City, Mexico, November 2013.
  28. A. K. Jain and R. H. Taylor, “Understanding bone responses in B-mode ultrasound images and automatic bone surface extraction using a Bayesian probabilistic framework,” in Proceedings of Medical Imaging 2004: Ultrasonic Imaging and Signal Processing, vol. 5373, pp. 131–142, San Diego, CA, USA, April 2004.
  29. Y. Sato, C. F. Westin, A. Bhalerao et al., “Tissue classification based on 3D local intensity structures for volume rendering,” IEEE Transactions on Visualization and Computer Graphics, vol. 6, no. 2, pp. 160–180, 2000.
  30. C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, New York, NY, USA, 1995.
  31. R. Adams and L. Bischof, “Seeded region growing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 6, pp. 641–647, 1994.
  32. W. E. Lorensen and H. E. Cline, “Marching cubes: a high resolution 3D surface construction algorithm,” ACM SIGGRAPH Computer Graphics, vol. 21, no. 4, pp. 163–169, 1987.
  33. P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
  34. A. Fedorov, R. Beichel, J. Kalpathy-Cramer et al., “3D Slicer as an image computing platform for the Quantitative Imaging Network,” Magnetic Resonance Imaging, vol. 30, no. 9, pp. 1323–1341, 2012.
  35. K. J. M. Surry, H. J. B. Austin, A. Fenster, and T. M. Peters, “Poly(vinyl alcohol) cryogel phantoms for use in ultrasound and MR imaging,” Physics in Medicine and Biology, vol. 49, no. 24, pp. 5529–5546, 2004.