Kinect-Based Correction of Overexposure Artifacts in Knee Imaging with C-Arm CT Systems
Abstract
Objective. To demonstrate a novel approach for compensating overexposure artifacts in CT scans of the knees without attaching any supporting appliances to the patient. C-arm CT systems offer the opportunity to perform weight-bearing knee scans on standing patients to diagnose diseases like osteoarthritis. However, one serious issue is overexposure of the detector in regions close to the patella, which cannot be tackled with common techniques. Methods. A Kinect camera is used to algorithmically remove overexposure artifacts close to the knee surface. Overexposed near-surface knee regions are corrected by extrapolating the absorption values from more reliable projection data. To achieve this, we develop a cross-calibration procedure to transform surface points from the Kinect to CT voxel coordinates. Results. Artifacts in both knee phantoms are reduced significantly in the reconstructed data and a major part of the truncated regions is restored. Conclusion. The results emphasize the feasibility of the proposed approach. The accuracy of the cross-calibration procedure can be increased to further improve the correction results. Significance. The correction method can be extended to a multi-Kinect setup for use in real-world scenarios. Using depth cameras does not require prior scans and offers the possibility of a temporally synchronized correction of overexposure artifacts.
1. Introduction
C-arm CT systems (Figure 1(a)), in contrast to conventional CT systems, have a high mechanical flexibility, which gives radiologists the opportunity to perform CT scans in a variety of spatial positions. In particular, it is possible to rotate the CT system around a vertical axis [1]. This enables imaging of patients with knee diseases such as osteoarthritis while they are standing in an upright position, that is, while the knee is bearing the weight of the patient [2].
One challenge of imaging relatively thin body parts like the knee is the limited dynamic range of the C-arm CT flat-panel detector, leading to overexposure of the exterior regions of the knee. If not avoided or compensated for, overexposure leads to artifacts in the reconstructed volume, as shown in Figure 1(b). The front and back of the knee appear blurry and lack clearly defined outer boundaries. The image quality of important parts of the knee image, such as the patella, is severely affected by these artifacts. This has a negative impact on the reliability of the diagnosis.
For a C-arm CT acquisition protocol with the patient lying in supine position, several approaches are available to avoid or compensate for overexposure artifacts. One way to avoid overexposure artifacts during acquisition is to cover the knees with an additional absorber, for example, a rubber belt [2, 3]. However, the extra weight of the belt can cause great discomfort for an upright patient with knee pain.
Different algorithmic methods for truncation correction in C-arm CT systems have been developed in recent years. Truncation artifacts that arise in scans with a small region of interest can be corrected effectively without any explicit extrapolation scheme [4]. If larger portions of the patient are of diagnostic interest, different correction methods have to be applied. In [5], additional knowledge from a prior low-intensity scan is exploited for artifact correction. In the case of imaging standing patients with knee diseases, however, the expected patient movement makes the use of a prior scan very difficult.
Other methods, which do not use a prior low-intensity scan, correct truncation artifacts through an appropriate extrapolation model, such as a water cylinder for the upper body [6, 7]. In [8], the model-based extrapolation is extended by an iterative truncation correction algorithm, which is able to handle cases where the water cylinder assumptions are not exactly fulfilled. These model-based methods are not applicable to knee imaging, as the anatomical structure is too complex to be approximated by a single cylindrical or elliptical object. Another approach, which uses a multi-cylinder extrapolation model [9], yields better results. Similar to the single water cylinder model, however, overexposure correction only works for objects that sufficiently fit the simplified cylindrical knee models.
Hence, in order to bring the novel diagnostic possibility of imaging knees of standing patients into clinical practice, it is highly desirable to develop an imaging solution that avoids these drawbacks.
In this paper, we present a method for correcting overexposure by combining information from a Kinect depth camera with a C-arm system. As a proof of concept, we demonstrate its feasibility for patients in supine position. However, there is no fundamental limitation to applying the same setup to patients in a weight-bearing standing position. In such scenarios, multiple Kinect depth cameras, observing the patient from different angles, could be used for artifact correction. The approach has the further advantage that the information used for correction can be acquired simultaneously with the CT scan. Thus, degradation of the correction through patient movement is low in comparison to methods relying on prior information.
The contributions of the paper are as follows:
(i) We introduce a specifically designed, easy-to-reproduce calibration target for cross-calibrating a C-arm CT system with a Kinect depth camera.
(ii) We propose a cross-calibration procedure between the depth camera and the C-arm CT.
(iii) We present a depth-based correction of overexposure artifacts.
Figure 2(a) shows a sketch of the cross-calibration procedure using a calibration phantom. The calibration target is detected by both imaging systems and enables the computation of a transformation of the coordinates from one modality to coordinates of the other modality.
Figure 2(b) shows a sketch of the imaging protocol. Once the system is calibrated, a patient is placed into the field of view of both modalities.
When imaging a patient, the Kinect depth data is used to find the points of intersection between the X-ray beam path and the object surface, that is, the points at which the X-rays enter and leave the knee tissue. For each pixel in each projection, the length of the beam path across the knee is calculated. Overexposed pixels are corrected by extrapolating the absorption along the corresponding line integrals.
In Section 2, we describe the phantom and the cross-calibration procedure for transforming points between both imaging modalities. In Section 3, we describe the proposed projection-based artifact correction. In Section 4, the reconstruction of the corrected projections is evaluated and compared with an uncorrected volume and the ground truth. In Section 5, we discuss the correction results and limitations of our proposed method. In Section 6, we discuss possible improvements and future work based on the current correction method.
2. Kinect to CT System Cross-Calibration
The Microsoft Kinect camera provides a color image and, for each pixel, the 3D distance of the depicted scene point to the camera. To use this distance information in a CT scan, we determine the parameters of a rigid transformation between both imaging systems through a cross-calibration procedure.
A cross-calibration phantom with known geometry is observed by both imaging modalities to determine the relative translation and rotation between both coordinate systems.
The cross-calibration phantom consists of the cylindrical PDS2 calibration phantom, which is commonly used for C-arm cone-beam CT calibration [10], and an attached depth calibration structure. Figure 3 shows the basic design and geometry of the phantom. The depth calibration structure is a scaffold of orthogonal plastic rods.
Three spheres are attached on each rod. The spheres are particularly suitable for detection and localization with the Kinect camera from a wide range of viewing angles. The goal of the calibration is to identify the three rods with the coordinate axes and their intersection with the coordinate origin.
From observing the cylinder surface alone, only the direction of the cylinder axis could be determined. The spheres allow the remaining axis directions to be determined purely from depth data. Painting the cylinder to indicate the axes directions, for example, would introduce inaccuracies from the Kinect-internal RGB-to-depth calibration into the cross-calibration procedure.
The use of the attachment could prove to be especially advantageous in weight-bearing scanning scenarios, where two or more Kinect cameras are observing the phantom from different angles for the calibration.
For processing the depth data, we use the Range Imaging Toolkit RITK [11]. A visualization of the acquired data can be seen in Figure 4. Raw depth images from the Kinect camera are relatively noisy [12]. To counter this noise, we apply spatial smoothing with a Gaussian kernel, temporal averaging over several frames, and edge-preserving smoothing with a guided filter [13].
2.1. Sphere Segmentation and Fitting
First, a user has to mark the spheres in the RGB data. We compute the estimated projected size of the sphere from the depth information at the marked point. Pixels of similar depth around the seed point are recursively added to the sphere area, as long as the distance of newly added pixels does not exceed the sphere size.
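The recursive region growing described above can be sketched as follows. This is an illustrative breadth-first variant under our own assumptions (4-connectivity, a depth-difference threshold tied to the sphere radius); the function and parameter names are ours, not those of the original implementation:

```python
import numpy as np
from collections import deque

def grow_sphere_region(depth, seed, radius_mm):
    """Collect pixels around `seed` whose depth stays within the sphere's
    extent, mimicking the seeded region growing described in the text.
    depth: 2D depth image in mm; seed: (row, col); radius_mm: sphere radius."""
    h, w = depth.shape
    seed_depth = depth[seed]
    visited = np.zeros_like(depth, dtype=bool)
    region, queue = [], deque([seed])
    visited[seed] = True
    while queue:
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                # accept neighbours whose depth deviates from the seed depth
                # by less than the sphere radius
                if abs(depth[nr, nc] - seed_depth) < radius_mm:
                    visited[nr, nc] = True
                    queue.append((nr, nc))
    return region
```

A queue-based traversal avoids the stack-depth issues a literal recursive implementation would have on large sphere regions.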
Spheres belonging to the same axis are fitted to the depth data. We estimate the sphere center for each connected set of sphere surface points (see Figure 5). An initial estimate of the sphere center is made using the lateral coordinates of the initially user-selected spheres. The depth value of the center is approximated by adding half of the sphere radius to the mean depth value of the respective surface points. The best-fitting center point is then determined using a least-squares error metric. Let $p_i$, $i = 1, \ldots, N$, be the $i$-th surface point in Kinect 3D coordinates, $r$ the known sphere radius, and $c$ the unknown Kinect 3D center coordinate of the sphere. Then $c$ is determined by solving the nonlinear least-squares problem
$$\hat{c} = \arg\min_{c} \sum_{i=1}^{N} \bigl( \lVert p_i - c \rVert_2 - r \bigr)^2,$$
where $N$ denotes the number of segmented sphere surface pixels.
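The sphere-centre fit above can be sketched with a generic nonlinear least-squares solver; a minimal illustration assuming a known radius and a rough initial estimate (names are ours):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_sphere_center(points, radius, c0):
    """Least-squares sphere-centre fit with known radius, minimising
    sum_i (||p_i - c|| - r)^2 as in the text.
    points: (N, 3) surface points; c0: initial centre estimate."""
    residuals = lambda c: np.linalg.norm(points - c, axis=1) - radius
    return least_squares(residuals, c0).x
```

In practice only a partial (roughly hemispherical) point set is visible to the depth camera, which the residual formulation handles without modification.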
2.2. Estimation of the Axes Directions
From the estimated center points, the position and direction of each axis are obtained as follows. Per axis we use at least two center points. Without loss of generality, we aim to recover one point on the first coordinate axis and its direction, denoted as $a$ and $v_1$, respectively. Let $n$ denote the number of segmented spheres on this axis and $c_i$, $i = 1, \ldots, n$, the center of the $i$-th segmented sphere. The axis point $a$ is the 3D mean coordinate of all $c_i$:
$$a = \frac{1}{n} \sum_{i=1}^{n} c_i.$$
The algorithm for finding $v_1$ is analogous to finding the best-fitting plane through the points. We solve this problem via orthogonal distance regression and singular value decomposition (SVD) [14].
Let
$$M = \begin{pmatrix} (c_1 - a)^T \\ \vdots \\ (c_n - a)^T \end{pmatrix}$$
be a zero-mean matrix containing the displacement vectors of the sphere centers from the mean center coordinate $a$. SVD yields a matrix factorization $M = U \Sigma V^T$, where $\Sigma$ is a diagonal matrix containing the singular values of $M$, and the columns of $U$ and $V$ are, respectively, left- and right-singular vectors corresponding to the singular values. Let $v_1$ be the column of $V$ associated with the largest singular value in $\Sigma$. Then $x(t) = a + t\,v_1$ is a least-squares estimate of the first coordinate axis in parametric form with scale parameter $t$. Accordingly, we estimate the other coordinate axes $v_2$ and $v_3$ from the two remaining sets of sphere centers.
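The SVD-based orthogonal-distance line fit can be sketched compactly; a short illustration with our own function name:

```python
import numpy as np

def fit_axis(centers):
    """Orthogonal-distance line fit through sphere centres via SVD.
    Returns a point on the axis (the centroid) and a unit direction."""
    centers = np.asarray(centers, dtype=float)
    a = centers.mean(axis=0)        # axis point: mean of the centres
    M = centers - a                 # zero-mean displacement matrix
    _, _, vt = np.linalg.svd(M)
    return a, vt[0]                 # right-singular vector of the largest singular value
```

`numpy.linalg.svd` returns the singular values in descending order, so the first row of `vt` (first column of V) is the sought direction.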
2.3. Estimation of the Kinect Coordinate Origin
We calculate the axis origin as the estimated point of intersection of the rod axes. Due to noise and estimation inaccuracies, the axes are unlikely to intersect in one single point. Therefore, we define the coordinate origin as the closest point to all three axes in a leastsquares sense [15].
The formula for calculating the point closest to multiple three-dimensional lines is the following (see Appendix A.1):
$$o = \left( \sum_{i} \bigl( I - v_i v_i^T \bigr) \right)^{-1} \sum_{i} \bigl( I - v_i v_i^T \bigr) a_i,$$
where $I$ is the $3 \times 3$ identity matrix, $v_i$ the unit direction vector of the $i$-th axis, and $a_i$ a point on the $i$-th axis.
The unit direction vectors $v_i$ and suspension points $a_i$ of the axes are already known from the previous estimation of the axes directions.
The solution $o$ is the fitted origin of the sphere mount in the Kinect coordinate system. All detected 3D points $p$ in the Kinect coordinate system are translated to the estimated origin:
$$p' = p - o.$$
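The closed-form least-squares intersection of the three rod axes (cf. Appendix A.1) can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def closest_point_to_lines(points, dirs):
    """Least-squares closest point to several 3D lines, each given by a
    suspension point and a unit direction (both arrays of shape (k, 3)).
    Solves (sum_i (I - v_i v_i^T)) x = sum_i (I - v_i v_i^T) a_i."""
    I = np.eye(3)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a_i, v_i in zip(points, dirs):
        P = I - np.outer(v_i, v_i)   # projector onto the line's normal space
        A += P
        b += P @ a_i
    return np.linalg.solve(A, b)
```

For exactly intersecting lines the result is their common point; for noisy, skew lines it is the point minimising the sum of squared orthogonal distances.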
2.4. Coordinate System Transformation
Knowing the position and rotation of the calibration structure relative to the phantom, coordinates can be transformed directly from the Kinect to the C-arm CT (see also Appendix A.2). The coordinate system origin of the C-arm CT lies in the center of the cylinder (cf. Figure 3(a)).
Let $R$ capture the rotation between the axes of both systems and $t$ the translation between the sphere-mount origin and the center of the cylinder. Then
$$x_{\mathrm{CT}} = R\, x_K + t$$
transforms a Kinect surface point $x_K$ into a C-arm CT coordinate $x_{\mathrm{CT}}$.
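Putting the pieces together, the Kinect-to-CT mapping can be sketched as below, assuming the unit axis directions of the sphere mount are known in both frames (as matrix columns) and the point has already been translated to the fitted mount origin. Names are illustrative:

```python
import numpy as np

def kinect_to_ct(x_k, K_axes, Z_axes, t):
    """Map a Kinect point (already translated to the fitted mount origin)
    into C-arm CT coordinates. K_axes/Z_axes hold the unit axis directions
    of the sphere mount as columns, in Kinect and CT frames respectively;
    t is the offset from the mount origin to the cylinder centre."""
    R = Z_axes @ np.linalg.inv(K_axes)   # rotate Kinect axes onto CT axes
    return R @ x_k + t
```

Since both axis matrices are (ideally) orthonormal, `np.linalg.inv(K_axes)` equals `K_axes.T`; the explicit inverse also tolerates small non-orthogonality from calibration noise.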
3. Overexposure Artifact Correction
The flat-panel detector used in C-arm CT imaging has a limited dynamic range. If both knees overlap in a projection, higher X-ray doses are necessary to penetrate both knees. In the exterior regions of the knees, the X-rays are only slightly attenuated, and the resulting high intensities at the detector cause saturation. Hence, information about these regions is lost and saturation artifacts arise.
3.1. Projection-Based Extrapolation
The correction of the saturation artifacts is performed for every detector line in each projection separately. Joint use of Kinect and CT data allows a straightforward correction of overexposure in three steps:
(1) If a detector line in a projection contains overexposed pixels, we determine the 3D points where the X-rays entered and exited the knee.
(2) From these points, the length of the beam path through the knee is computed.
(3) Overexposed pixels are corrected by extrapolating a smooth absorption falloff from non-overexposed pixels.
Note that the extrapolation does not automatically suppress tissue variations at knee boundaries: the angular range in C-arm CT scans usually amounts to about 200°. Upon tomographic reconstruction of the knee volume, for each boundary voxel there exist many projection angles at which a sufficiently thick portion of the knee is traversed, so that tissue variations at knee boundaries can in principle still be observed.
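The three steps above can be sketched for a single detector line. This is a simplified illustration under our own assumptions (a fixed saturation threshold on log-attenuation values, only trailing saturated runs, names of our choosing), not the pipeline's actual implementation:

```python
import numpy as np

def correct_detector_line(line_vals, lengths, sat_level, n_avg=5):
    """Replace saturated pixels with Kinect-derived intersection lengths,
    scaled so the extrapolation joins the last reliable pixels smoothly.
    line_vals: log-attenuation values of one detector line;
    lengths: beam-path lengths through the knee for the same pixels."""
    vals = line_vals.copy()
    sat = vals <= sat_level                      # overexposed: too little absorption
    if not sat.any() or sat.all():
        return vals
    # index of the last reliable pixel before each saturated run
    edges = np.flatnonzero(np.diff(sat.astype(int)) == 1)
    for e in edges:
        lo = max(0, e - n_avg + 1)
        anchor = vals[lo:e + 1].mean()           # average of last reliable pixels
        scale = anchor / lengths[e] if lengths[e] > 0 else 0.0
        run = e + 1
        while run < len(vals) and sat[run]:
            # normalise the intersection length to match the anchor value
            vals[run] = lengths[run] * scale
            run += 1
    return vals
```

Scaling by `anchor / lengths[e]` makes the extrapolated values continuous at the transition while inheriting the falloff shape of the geometric path lengths.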
3.2. Geometric Considerations of Correction
Figure 7(a) shows a side view of an X-ray beam hitting an exemplary detector line. We are interested in the length of the beam path through the knees. Figure 7(b) shows the same trajectory from an orthogonal viewpoint: we are looking at rays on a plane defined by the X-ray source and the currently considered detector line. For each ray, we seek the intersection length of the ray with the knees.
In our experiments, we simulate the knees with two plastic bottles filled with water (see Figure 6). To simulate the femurs in the CT images, two dense rods are placed between the bottles.
In principle, the intersection length can be computed directly from the nearest Kinect surface points at the entrance and exit of the knee. However, to make the results more robust to noise, we first fit a cubic B-spline curve to all points lying on the plane and determine the intersection length from the spline. Note that this computation can be performed in 2D, as all involved points are located on the same plane.
Examples of the resulting closed cubic B-spline curves are shown in Figure 8. Here, we observe the two plastic bottles that represent the knees. The line passing through the curve represents an example of an X-ray trajectory; in this case, detector and radiation source lie close to the plane of the curve. Note the slight inaccuracies on the right side and the truncated horizontal contours due to limitations in the edge detection of the depth camera. We extrapolated the surface points on the unobserved side of the knee phantoms by mirroring the visible points at a plane at an estimated height.
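The closed cubic B-spline fit to the in-plane surface points can be sketched with SciPy's parametric spline routines; a minimal illustration with parameter names of our choosing:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_closed_bspline(points_2d, smooth=0.0, n_samples=200):
    """Fit a closed cubic B-spline to in-plane surface points and resample
    it densely; intersection lengths can then be read off the resampled
    polygon. points_2d: (N, 2) ordered boundary points."""
    pts = np.vstack([points_2d, points_2d[:1]])       # close the loop
    x, y = pts[:, 0], pts[:, 1]
    tck, _ = splprep([x, y], s=smooth, per=True, k=3)  # periodic cubic spline
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])
```

A small positive `smooth` value trades interpolation accuracy for robustness against the depth noise discussed above.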
A schematic explanation of the proposed extrapolation method is shown in Figure 9. The objective is a smooth and plausible extrapolation of the line integrals at the transitions to the overexposed regions.
For a smooth transition, the intersection lengths are normalized to match the values of the line integrals at the transition points. To prevent noise-related inaccuracies, we use the average value of the last few non-overexposed pixels for normalization.
The result of the Kinect-based correction of a CT projection is demonstrated in Figure 10.
3.3. Reconstruction Setup
We use the CONRAD framework for reconstruction [16] after the artifact correction.
The reconstruction pipeline consists of a cosine weighting filter [17], a Parker redundancy weighting filter [18], a Shepp-Logan ramp filter [17], and a GPU-based back projection tool [19]. After the reconstruction, the data is normalized to the Hounsfield scale. In a final step, the reconstructed data is smoothed with a bilateral filter. We acquire 133 projections over a 200° rotation around the object.
4. Results
We evaluate and compare the reconstructions of the four projection data sets which are shown in Figure 10. After a brief description of the reconstruction setup we describe the results for one slice of the reconstructed volumes.
Afterwards, the results are compared quantitatively for five regions of interest in the exterior region of the knee phantoms.
4.1. Observations
We first inspect the reconstruction of the uncorrected projections (see Figure 11(a)). The saturation causes strong artifacts. High-intensity streaks are observed at the onset of the overexposure, and the original shape of the edges on the right side cannot be clearly recognized. The exterior regions on the right side lack a definite outer boundary and are blurred.
Figure 11(c) shows the reconstruction of the corrected projections. The overexposure artifacts are significantly reduced for both bottles and the boundaries on the right side of the phantoms are mostly restored. However, the contour of the phantom is still blurred at the outer regions of the bottles in the top right and bottom right.
The boundaries of the ground truth and the surface data do not align perfectly (see Figure 12(b)). This problem arises from inaccuracies in the cross-calibration procedure. Since these inaccuracies are sufficiently small, we still achieve good correction results. The outline of the bottles in the left half of the surface data slice lies outside the ground truth boundary. This inaccuracy results from the extrapolation of surface points to the back side of the knees, which was based on mirroring the surface points at a plane at an estimated height.
In Figures 11 and 12 we observe that, in principle, there is sufficient depth information to extrapolate the truncated boundary within the field of view. However, the boundary was not restored completely in the corrected volume. The reason for this is the nonlinear preprocessing by the C-arm CT system. As a result of this preprocessing, the values of the last non-overexposed pixels can be very low. If the intersection lengths are normalized to these very low values, the extrapolation has almost no effect. This can be countered by starting the extrapolation at an earlier point, where the pixel values have not yet been reduced by the preprocessing.
4.2. Quantitative Comparison of the Results
For quantitative comparison, five regions of interest (ROIs) are placed in the exterior regions of the bottles (Figure 13). The measurements are shown in Table 1. The table compares the measurements of the HU values within the ROIs for the reconstructions of the uncorrected projections, the corrected projections, and the ground truth.

For the corrected reconstruction, we can observe that the mean values of ROIs move closer to the corresponding ground truth values. This change occurs because the previously truncated parts of the phantom are now partially restored at the positions of the ROIs. Now, the material of the phantom is more consistently measured inside the ROIs instead of air in the truncated case.
Furthermore, the values of the standard deviation are reduced. This shows that the values within the ROIs in the corrected data are more homogeneous and that outliers, which would increase the standard deviation, have been eliminated. The saturation artifacts cause very high maximum values at the truncated edge of the lower bottle. These artifacts are removed by the Kinect-based correction.
The corrected data shows significantly improved reconstruction results. The visualization of the absolute differences between both uncorrected and corrected data and the ground truth (see Figures 11(e) and 11(f)) backs up our measurements. We observe that the differences are lower for almost all regions. Furthermore, the figures show that, apart from artifact correction inside the knee phantoms, artifacts caused by truncation between the two phantoms were also reduced.
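The ROI-based comparison can be sketched as a simple helper that gathers the statistics reported in Table 1; ROI bounds and names are illustrative:

```python
import numpy as np

def roi_stats(slice_hu, rois):
    """Mean, standard deviation, minimum, and maximum of HU values inside
    rectangular ROIs of a reconstructed slice.
    rois: list of (row0, row1, col0, col1) index bounds (exclusive ends)."""
    stats = []
    for r0, r1, c0, c1 in rois:
        vals = slice_hu[r0:r1, c0:c1]
        stats.append((vals.mean(), vals.std(), vals.min(), vals.max()))
    return stats
```

Running this on the uncorrected and corrected reconstructions with identical ROI bounds yields directly comparable rows, mirroring the comparison against the ground truth.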
5. Discussion
The results have shown that the Kinect-based correction of saturation in cone-beam CT is a feasible approach for reducing artifacts in saturated scans. Lost surface information, especially at the front side of the knee phantoms, was restored. Furthermore, noise and overexposure artifacts were reduced through the correction of the projections.
Overexposure does not occur exclusively in C-arm CT imaging but also in other systems such as multi-detector CT (MDCT). One factor that makes overexposure compensation easier in MDCT is the higher dynamic range of 20 bits [20], which generally leads to less severe artifacts. Furthermore, bowtie filters and tube current modulation can be utilized to reduce the radiation dose in the exterior regions of the scanned object [21–23]. In C-arm CT, overexposure artifacts are mostly tackled after image acquisition, as bowtie filters are linked to reduced detector efficiency [24], and overexposure of the detector is often even caused intentionally to work around image quality limitations due to the limited dynamic range [9, 25, 26].
Fully leveraging the 200° raw data acquisition of the C-arm CT around the knees might allow for better correction results in algorithmic approaches than the baseline considered in this paper. ROI reconstruction [27–29] or iterative reconstruction [30] could be utilized for this purpose. Severe truncation, however, is still unlikely to be fully corrected [30]. In this context, it should be noted that the proposed method can, in principle, be combined with any other correction method. The additional surface information could serve as a regularizer, which would likely lead to further performance improvements.
Additional considerations would have to be made if the overexposure occurred in bone, for example, the patella. In this case, the normalization factor would be based on the bone density: instead of the skin tissue, the bone would be extrapolated to the outer surface, which would cause correction errors. However, the first occurrence of overexposure is to be expected in the skin tissue right next to the bones. Extrapolating skin tissue based on the values of neighboring bone tissue can be avoided algorithmically: if the values of the last non-overexposed pixels are significantly higher than expected for skin tissue, the normalization factor can be adjusted toward nearby or typical skin-tissue values.
In the B-spline interpolation we observe inaccuracies caused by the limited edge detection of the Kinect camera. For a larger field of view, problems may arise in the correction of the outer edges. However, for patient scans, saturation artifacts are usually expected only at the front and back of the knees, and for these regions we can acquire reliable information with the Kinect camera.
Choi et al. [31] proposed an approach for motion correction in weight-bearing knee scans. However, it is still necessary to correct for overexposure artifacts. A depth camera-based solution offers the possibility of a temporally synchronized correction of overexposure artifacts, because the depth information is captured in real time and continuously throughout the complete scanning procedure.
6. Outlook
The experiments in our research aim to demonstrate the general feasibility of the correction method. For this, we focus on supine scans of human knees. However, the design of the method is not restricted to supine scans and could in principle also be used for weight-bearing scans of the knees in real-world scenarios.
For this, we propose using two Kinect sensors to gather surface information for all relevant angles. The design of the cross-calibration phantom allows the simultaneous cross-calibration of two Kinect sensors with the C-arm CT. By capturing the surface areas close to the patella and the popliteal fossa with two separate cameras, closed B-spline curves can be calculated directly from the merged surface data and used for saturation correction. With this approach, no further estimations for the back side of the object have to be made, and more accurate results are to be expected.
In this paper, we analyzed the new correction approach in isolation. The correction method could be combined with other recent algorithmic approaches to leverage their respective benefits. In future experiments, the performance improvements of the artifact correction for combined approaches could therefore be investigated in detail.
In order to use the proposed approach in real-world scenarios, the accuracy of the cross-calibration is of high importance and can be improved through more precise manufacturing. Design improvements could be achieved by evaluating the cross-calibration accuracy for different positions, sizes, and numbers of spheres. Transparent materials are usually not detectable by the depth camera and could therefore be used for the sphere-carrying rods to improve the segmentation accuracy.
Besides qualitative improvements in the phantom design, the procedure could be improved algorithmically. In the experiments, only depth features from the Kinect sensor are used for the calibration. By making use of the additional RGB data gathered by the Kinect, the accuracy of the crosscalibration could be further enhanced.
Substantial improvements in processing time are possible in the projection correction. The main computational cost derives from the B-spline curve interpolation and the calculation of line integrals along the X-rays through the object. This type of calculation is one of the basic routines on a GPU and could be performed by providing the graphics card with the 3D points and the projection geometry [32].
The sphere segmentation was performed semi-automatically by first clicking on the individual spheres in a predefined order. In the future, the spheres could be detected automatically in the RGB image based on their color.
7. Summary
When scanning knees, the limited dynamic range of the detector causes saturation artifacts in the reconstructed volumes. As these artifacts affect the surface regions of the scanned object, the idea of the correction method is to additionally use a Kinect camera to locate the surface of the object in 3D.
In order to use these surface points for the correction of CT images, we developed a procedure for cross-calibration between the camera and the C-arm CT. For the cross-calibration we use a PDS2 calibration phantom with an attached structure that is detectable with the Kinect camera.
After the cross-calibration, a projection-based saturation correction is performed in which all detector lines are corrected successively within each projection. Using the C-arm geometry, we determine the 3D points where the X-rays entered and exited the knee and calculate the length of the X-ray path through the knee from these points. Finally, we use these calculated lengths for a smooth extrapolation of the boundary of the object in the overexposed regions.
The reconstruction results show that the projection-based correction yields clear improvements over the uncorrected data. The boundaries of both knee phantoms are extrapolated to their correct positions and overexposure artifacts are significantly reduced.
Potentially arising problems due to limited edge detection and the different tissue densities in the knees are also considered.
Possible future work includes the use of a second Kinect camera for weight-bearing scans and a GPU-based calculation of the intersection lengths. The sphere segmentation could be automated by identifying the spheres based on their color. Furthermore, a temporally synchronized correction approach could be applied in current research projects.
Appendix
A. Mathematical Formulas
A.1. LineLine Intersection
All direction and normal vectors of lines in the following equations shall be considered as unit vectors. Two-dimensional lines can be represented by a point $a$ on the line and a normal vector $n$ perpendicular to that line. The distance between a point $x$ and a line defined by $a$ and $n$ is
$$d(x) = n^T (x - a). \tag{A.1}$$
The sum of squared distances to more than one line is
$$f(x) = \sum_{i} \bigl( n_i^T (x - a_i) \bigr)^2. \tag{A.2}$$
To find the closest mutual point, that is, the minimum of the function $f$, the equation has to be differentiated with respect to $x$ and the result has to be set equal to the zero vector. This leads to
$$\left( \sum_{i} n_i n_i^T \right) x = \sum_{i} n_i n_i^T a_i. \tag{A.3}$$
The equation
$$v v^T + n n^T = I \tag{A.4}$$
with the identity matrix $I$ and normalized direction vector $v$ is introduced and proved for the two-dimensional case. Multiplying both sides of (A.4) with any direction or normal vector $v$ or $n$ leads to
$$v = v \quad \text{or} \quad n = n, \tag{A.5}$$
which is always true, since $v^T v = n^T n = 1$ and $v^T n = 0$. Equation (A.4) is rearranged to
$$n n^T = I - v v^T \tag{A.6}$$
and used to modify (A.3), which leads to the final solution for $x$:
$$x = \left( \sum_{i} \bigl( I - v_i v_i^T \bigr) \right)^{-1} \sum_{i} \bigl( I - v_i v_i^T \bigr) a_i. \tag{A.7}$$
The calculation of $x$ in three dimensions is very similar. Equations (A.3) and (A.4) are sufficient in 2D cases, because the two orthogonal vectors $v$ and $n$ span the 2D space. The calculation of the mutually closest point in 3D has to take into account a second normal vector $m$, but is analogous to the 2D case apart from that. The vectors $v$, $n$, and $m$ are pairwise orthogonal and therefore span the 3D space. For three-dimensional lines, (A.3) and (A.4) contain the sum $n_i n_i^T + m_i m_i^T$ instead of $n_i n_i^T$, and $I$ is now the $3 \times 3$ identity matrix. The mutually closest point is still calculated as described in (A.7), because the sum is replaced in the same way as in the 2D case:
$$n n^T + m m^T = I - v v^T. \tag{A.8}$$
A.2. Coordinate System Transformation
The first step of the transformation between both coordinate systems is a coordinate system rotation. The rotation matrix is calculated using the matrix equation
$$R\, K = Z, \tag{A.9}$$
where $R$ is the rotation matrix that rotates the orthogonal axis vectors of the Kinect coordinate system onto the corresponding axis vectors of the Zeego C-arm coordinate system, $K$ is the matrix containing the unit direction vectors of the sphere-mount axes in the Kinect coordinate system as columns, and $Z$ is the matrix containing the unit direction vectors of the sphere-mount axes in the Zeego coordinate system. Using (A.9), the direction vectors of $K$ shall be rotated onto the direction vectors of $Z$. The rotation matrix can be obtained by calculating the matrix inverse $K^{-1}$ and rearranging the equation to
$$R = Z K^{-1}. \tag{A.10}$$
After rotating the points with the rotation matrix $R$, the final step of the coordinate system transformation is the translation $t$ of the coordinate system origin to the center of the PDS2 phantom, which amounts to a fixed offset along the cylinder axis:
$$x' = x + t. \tag{A.11}$$
Subsequently, the complete transformation of a Kinect point $x_K$ (after translation to the sphere-mount origin) is
$$x_{\mathrm{CT}} = R\, x_K + t. \tag{A.12}$$
Competing Interests
The authors declare that they have no competing interests.
References
1. A. Maier, J.-H. Choi, A. Keil et al., "Analysis of vertical and horizontal circular C-arm trajectories," in Medical Imaging 2011: Physics of Medical Imaging, vol. 7961 of Proceedings of SPIE, Article ID 796123, 8 pages, International Society for Optics and Photonics, March 2011.
2. J.-H. Choi, A. Maier, A. Keil et al., "Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. II. Experiment," Medical Physics, vol. 41, no. 6, Article ID 061902, 2014.
3. J.-H. Choi, R. Fahrig, A. Keil et al., "Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. Part I. Numerical model-based optimization," Medical Physics, vol. 40, no. 9, Article ID 091905, 2013.
4. Y. Xia, A. Maier, F. Dennerlein, H. G. Hofmann, and J. Hornegger, "Efficient 2D filtering for cone-beam VOI reconstruction," in Proceedings of the IEEE Nuclear Science Symposium and Medical Imaging Conference Record (NSS/MIC '12), pp. 2415–2420, November 2012.
5. D. Kolditz, Y. Kyriakou, and W. A. Kalender, "Volume-of-interest (VOI) imaging in C-arm flat-detector CT for high image quality at reduced dose," Medical Physics, vol. 37, no. 6, pp. 2719–2730, 2010.
6. J. Hsieh, E. Chao, J. Thibault et al., "A novel reconstruction algorithm to extend the CT scan field-of-view," Medical Physics, vol. 31, no. 9, pp. 2385–2391, 2004.
7. B. Scholz and J. Boese, "Correction of truncation artifacts in C-arm CT images by fan-beam extrapolation using Savitzky-Golay filtering," RSNA Annual Meeting, Session SSJ24-06, 2008.
8. A. Maier, B. Scholz, and F. Dennerlein, "Optimization-based extrapolation for truncation correction," in Proceedings of the 2nd International Conference on Image Formation in X-Ray Computed Tomography, F. Noo, Ed., pp. 390–394, Salt Lake City, Utah, USA, 2012.
9. A. Preuhs, M. Berger, Y. Xia, A. Maier, J. Hornegger, and R. Fahrig, "Overexposure correction in CT using optimization-based multiple cylinder fitting," in Bildverarbeitung für die Medizin 2015: Algorithmen - Systeme - Anwendungen, Informatik aktuell, pp. 35–40, Springer, Berlin, Germany, 2015.
10. N. K. Strobel, B. Heigl, T. M. Brunner et al., "Improving 3D image quality of x-ray C-arm imaging systems by using properly designed pose determination systems for calibrating the projection geometry," in Medical Imaging 2003: Physics of Medical Imaging, vol. 5030 of Proceedings of SPIE, pp. 943–954, International Society for Optics and Photonics, June 2003.
11. J. Wasza, S. Bauer, S. Haase, M. Schmid, S. Reichert, and J. Hornegger, "RITK: the range imaging toolkit, a framework for 3D range image stream processing," in Proceedings of the 16th International Workshop on Vision, Modeling and Visualization (VMV '11), pp. 57–64, October 2011.
12. K. Khoshelham, "Accuracy analysis of Kinect depth data," in International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS '11), vol. 38, pp. 133–138, 2011.
13. K. He, J. Sun, and X. Tang, "Guided image filtering," in Computer Vision, ECCV 2010, pp. 1–14, Springer, 2010.
14. I. Söderkvist, "Using SVD for some fitting problems," Tech. Rep., Luleå University of Technology, Department of Engineering Science and Mathematics, Luleå, Sweden, 2009.
15. E. W. Weisstein, "Line-Line Intersection," from MathWorld, a Wolfram Web Resource, 2015, http://mathworld.wolfram.com/LineLineIntersection.html.
16. A. Maier, H. G. Hofmann, M. Berger et al., "CONRAD, a software framework for cone-beam imaging in radiology," Medical Physics, vol. 40, no. 11, Article ID 111914, 2013.
17. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging, IEEE Service Center, Piscataway, NJ, USA, 1988.
18. D. L. Parker, "Optimal short scan convolution reconstruction for fan-beam CT," Medical Physics, vol. 9, no. 2, pp. 254–257, 1982.
19. H. Scherl, B. Keck, M. Kowarschik, and J. Hornegger, "Fast GPU-based CT reconstruction using the Common Unified Device Architecture (CUDA)," in Proceedings of the IEEE Nuclear Science Symposium, Medical Imaging Conference, vol. 6, pp. 4464–4466, Honolulu, Hawaii, USA, November 2007.
20. M. Spahn, "X-ray detectors in medical imaging," Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 731, pp. 57–63, 2013.
21. M. K. Kalra, M. M. Maher, T. L. Toth et al., "Techniques and applications of automatic tube current modulation for CT," Radiology, vol. 233, no. 3, pp. 649–657, 2004.
22. D. Tack, V. De Maertelaer, and P. A. Gevenois, "Dose reduction in multidetector CT using attenuation-based online tube current modulation," American Journal of Roentgenology, vol. 181, no. 2, pp. 331–334, 2003.
23. T. Toth, Z. Ge, and M. P. Daly, "The influence of patient centering on CT dose and image noise," Medical Physics, vol. 34, no. 7, pp. 3093–3101, 2007.
24. A. C. Miracle and S. K. Mukherji, "Cone-beam CT of the head and neck, part 1: physical principles," American Journal of Neuroradiology, vol. 30, no. 6, pp. 1088–1095, 2009.
25. M. Knaup, L. Ritschl, and M. Kachelrieß, "Digitization and visibility issues in flat detector CT: a simulation study," in Proceedings of the IEEE Nuclear Science Symposium and Medical Imaging Conference Record (NSS/MIC '12), pp. 2661–2666, IEEE, Anaheim, Calif, USA, November 2012.
26. L. Shi, M. Berger, B. Bier et al., "Analog non-linear transformation-based tone mapping for image enhancement in C-arm CT," in Proceedings of the IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC '16), October 2016.
27. Y. Xia, A. Maier, G. Hannes et al., "Reconstruction from truncated projections in cone-beam CT using an efficient 1D filtering," in Medical Imaging 2013: Physics of Medical Imaging, vol. 8668 of Proceedings of SPIE, Lake Buena Vista, Fla, USA, February 2013.
28. F. Dennerlein and A. Maier, "Region-of-interest reconstruction on medical C-arms with the ATRACT algorithm," in Medical Imaging 2012: Physics of Medical Imaging, vol. 8313 of Proceedings of SPIE, February 2012.
29. F. Dennerlein, "Cone-beam ROI reconstruction using the Laplace operator," in Proceedings of the 11th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, pp. 80–83, Potsdam, Germany, July 2011.
30. E. Y. Sidky, D. N. Kraemer, E. G. Roth, C. Ullberg, I. S. Reiser, and X. Pan, "Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography," Journal of Medical Imaging, vol. 1, no. 3, Article ID 031007, 2014.
31. J.-H. Choi, A. Maier, M. Berger, and R. Fahrig, "Effective one step-iterative fiducial marker-based compensation for involuntary motion in weight-bearing C-arm cone-beam CT scanning of knees," Proceedings of SPIE, vol. 9033, Article ID 903312, 6 pages, 2014.
32. A. Maier, H. G. Hofmann, C. Schwemmer, J. Hornegger, A. Keil, and R. Fahrig, "Fast simulation of x-ray projections of spline-based surfaces using an append buffer," Physics in Medicine and Biology, vol. 57, no. 19, pp. 6193–6210, 2012.
Copyright
Copyright © 2016 Johannes Rausch et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.