Research Article  Open Access
Thomas Weidinger, Thorsten M. Buzug, Thomas Flohr, Steffen Kappler, Karl Stierstorfer, "Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography", International Journal of Biomedical Imaging, vol. 2016, Article ID 5871604, 15 pages, 2016. https://doi.org/10.1155/2016/5871604
Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography
Abstract
This work proposes a dedicated statistical algorithm to perform a direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography. It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels. Parallel updates can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse pileup. For data from both an ideal and a realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization we were able to achieve image noise for the realistic PCD that is up to 90% lower than in material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD.
1. Introduction
The development of dual-energy (DE) CT scanners paved the road for new clinical applications, taking advantage of the energy dependence of X-ray attenuation in matter, a fact that formerly had been considered a drawback. This technology allows for exposing patients to two distinct spectra, acquiring information about the material composition of the interior of the body. Besides scanners with two X-ray sources and detectors [1] there also exist approaches that, for instance, apply a fast kVp switching technique [2] or dual-layer detectors [3, 4] to acquire spectrally resolved scan data. All those approaches have in common that they are hardly extendable to produce more than two spectrally well separated data sets. Additionally, spectrally resolved data from dual-source dual-energy and fast kVp switching devices do not match exactly due to a time shift between measured sinograms or projections, respectively, making a correct statistical treatment of the problem in image reconstruction difficult.
The continuous improvement of photon-counting detectors (PCDs) over the past few years promises a remedy for this limitation and holds new possibilities for multi-energy imaging. Recent prototype detectors already provide two to four [5, 6] discriminator thresholds. This allows the acquisition of two to four sinograms in a single scan, each sensitive to a different part of the applied tube spectrum. Apart from energy-resolved scan data, photon-counting technology can provide further benefits, like reduced image noise [7] and enhanced contrast [8, 9].
However, there are also issues with the new detector type concerning its capability of handling the high X-ray fluxes that may occur in clinical CT examinations [10]. The consequence of high photon flux is the occurrence of pulse pileup. The severity of pileup correlates with the detector pixel pitch. Additionally, K-escape events and crosstalk between adjacent pixels lead to a degradation in spectral resolution. Their influence is inversely correlated with the detector pixel pitch. So the challenge is to find a compromise between spectral sensitivity and high flux-rate capability to ensure the clinical usability of photon-counting detectors.
In established dualenergy applications, the spectral information is used, for instance, to classify different body materials, identify high contrast agent concentrations that indicate pathological tissue, or support an early diagnosis of gout by helping to distinguish urate from small calcifications [11, 12]. Other applications utilize the spectral information to remove osseous structures in the patient image, revealing previously covered structures [13].
Usually, a linear material decomposition is performed subsequent to image reconstruction, although raw-data-based approaches have already been investigated [14]. As a consequence of the material decomposition process, image noise increases significantly, especially if more than two materials are to be separated [6].
Statistical reconstruction methods offer the opportunity to achieve images with reduced noise compared to conventional filtered backprojection (FBP) reconstructions by including mature regularization techniques [15]. Statistical methods not only account for the physical effects governed by the absorption characteristics of matter; they also respect the Poisson nature of the emission and absorption processes in X-ray tubes and detectors, respectively. The correct statistical treatment becomes relevant in PCDs with high spectral resolution. Impinging photons are distributed into energy bins according to the energy assigned to them in the detection process. Therefore, an increase in energy resolution implies smaller energy bins with reduced photon statistics for a constant number of impinging photons and constant spatial sampling.
Statistical algorithms converge to the most likely image that best fits the measured sinogram data, according to the underlying statistical model. The canonical measure for how well image and measured data agree is given by the logarithmic likelihood function (log-likelihood), which must be maximized to find the best-fitting image. Since no analytic solution exists for the maximum of the Poisson log-likelihood function, it must be approximated by numerical methods. Some years ago, a new class of statistical, polychromatic algorithms was introduced [16, 17] that exploits the convexity of the negative log-likelihood by minimizing surrogate functions instead. Surrogate functions are successive local approximations of the negative log-likelihood function. With surrogates it is possible to formulate the optimization problem in a parallel manner, which allows quick computation on graphics processing units (GPUs). Under certain circumstances [18] the monotonic convergence of the algorithm is provable. The algorithm's polychromatic nature also implies a beam-hardening correction (BHC) for the chosen basis materials (cf. Section 2.1.3). With quite strong empirical assumptions the algorithm can be applied to spectrally unresolved data (e.g., to sinogram data taken with a conventional CT system with an energy-integrating detector) with good results [19, 20].
In our approach we drop these assumptions and reconstruct the material fractions directly from energy-resolved sinogram data. Compared to [21], our algorithm allows a simultaneous and parallel update of all material images and considers the detector response function of a photon-counting detector. The algorithm is a priori not limited to two spectrally well separated data sets but scales with the number of energy-resolved data sets. As the number of thresholds in future PCDs might increase, an extension of the material separation to the respective number of basis materials will be possible, as long as these materials are distinguishable by means of energy-resolved CT (cf. Section 2.1.3). Our intention is to evaluate the accuracy of the reconstructed material fractions and determine the anticipated gain in image quality due to image noise reduction from a correct statistical treatment of the problem.
In Section 2 we introduce the mathematical background and describe the setup of this study. Section 3 presents the results that are discussed subsequently in Section 4.
2. Materials and Methods
2.1. Theoretical Background
2.1.1. Statistical Model: Poisson Random Variables
The creation of X-ray photons and their absorption and registration in a counting detector pixel are statistical processes that can be described by a Poisson random variable. In computed tomography the joint probability for the measurement of a complete sinogram $\mathbf{y}$ is thus the product of the individual probabilities of all sinogram pixels
$$P(\mathbf{y} \mid \bar{\mathbf{y}}) = \prod_{i=1}^{I} \frac{\bar{y}_i^{\,y_i}}{y_i!}\, e^{-\bar{y}_i}. \tag{1}$$
Here $i$ indexes all $I$ projection values that constitute the complete sinogram. In photon-counting CT $y_i$ is the number of pulses registered in a single detector pixel and $\bar{y}_i$ the corresponding expectation value. The negative logarithm of $P$ yields the canonical negative log-likelihood function, which can be used as a measure for how well taken data agree with their expectation values:
$$L(\bar{\mathbf{y}}) = \sum_{i=1}^{I} \left( \bar{y}_i - y_i \ln \bar{y}_i \right). \tag{2}$$
We use bold letters to indicate vector quantities. Terms that are constant with respect to $\bar{\mathbf{y}}$ have been omitted, since the goal of statistical reconstruction is to find the expectation values that best fit the measured data, that is, the minimum of $L$, which is not altered by terms constant with respect to $\bar{\mathbf{y}}$.
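The negative log-likelihood in (2) is straightforward to evaluate numerically. A minimal sketch (the function name and array layout are our own, not from the paper) computes it for vectors of measured and expected counts, with the constant $\ln y_i!$ terms dropped as in the text:

```python
import numpy as np

def neg_log_likelihood(y, y_bar):
    """Negative Poisson log-likelihood, constant ln(y!) terms omitted.

    y     : measured counts per sinogram pixel
    y_bar : expected counts per sinogram pixel
    """
    y = np.asarray(y, dtype=float)
    y_bar = np.asarray(y_bar, dtype=float)
    return float(np.sum(y_bar - y * np.log(y_bar)))
```

As expected for a Poisson model, the function is minimized when the expectations match the measurements exactly.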
In multi-energy imaging one can formulate separate log-likelihood functions for all sinograms, each associated with one of the $B$ energy bins provided by a PCD. If all random variables are statistically independent, which we assume here, an appropriate objective function for minimization is the sum over all negative log-likelihood functions
$$L(\bar{\mathbf{y}}^1, \dots, \bar{\mathbf{y}}^B) = \sum_{b=1}^{B} \sum_{i=1}^{I} \left( \bar{y}_i^b - y_i^b \ln \bar{y}_i^b \right). \tag{3}$$
The statistical independence holds true only in the case of an ideal photon-counting detector. In realistic PCDs dependencies between energy bins are introduced by K-escape, pixel crosstalk, electronic noise, and pulse pileup. Our proposed algorithm accounts for correlations introduced by K-escape, pixel crosstalk, and electronic noise by considering the detector's response function, whereas pulse pileup is corrected in a preprocessing step directly on the sinogram raw data.
In general, a minimization of the log-likelihood function alone leads to very noisy images. To compensate for this, a regularization term (prior) is added to the objective function. It incorporates a priori knowledge about the material images that are to be reconstructed. Common priors suppress small differences between neighboring pixels but preserve contrast edges. They usually take the form
$$R^m(\mathbf{f}^m) = \sum_{j=1}^{J} \sum_{k \in N_j} \psi\!\left( f_j^m - f_k^m \right), \tag{4}$$
where $J$ is the number of image pixels and $k$ labels the neighboring pixels $N_j$ of a considered image pixel $j$. The index $m$ labels the material images that are to be reconstructed. We require the penalty functions $\psi$ to be strictly convex and twice continuously differentiable with respect to the material fractions. This requirement is necessary [16] to allow parallelization while assuring convergence of the final algorithm. We implemented a prior suggested by Green [22]. The computational effort for this prior is higher compared to frequently utilized quadratic priors, but since it is a differentiable approximation of the Huber prior [23] it can better preserve contrast edges. The addition of the prior term to the negative log-likelihood function yields
$$\Phi(\mathbf{f}) = L\big(\bar{\mathbf{y}}(\mathbf{f})\big) + \langle \boldsymbol{\beta}, \mathbf{R}(\mathbf{f}) \rangle \tag{5}$$
as new objective function, with the scalar product
$$\langle \boldsymbol{\beta}, \mathbf{R}(\mathbf{f}) \rangle = \sum_{m=1}^{M} \beta^m R^m(\mathbf{f}^m). \tag{6}$$
The parameter $\beta^m$ governs the strength of regularization. Moreover, we made use of the fact that the number of pulses registered in a sinogram pixel depends on the fractions $\mathbf{f}$ of materials constituting the scanned object; that is, $\bar{\mathbf{y}} = \bar{\mathbf{y}}(\mathbf{f})$. The dependence is given by Lambert-Beer's law in (7) and (9) as described below.
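Green's prior is commonly written as a log-cosh penalty, which behaves quadratically for small neighbor differences and approximately linearly (Huber-like) for large ones. The following sketch of such an edge-preserving regularizer over a 4-neighborhood is illustrative; the function names, the exact penalty form, and the parameter `delta` are our assumptions, not taken from the paper:

```python
import numpy as np

def green_penalty(t, delta=1.0):
    """Log-cosh penalty: ~t^2/(2*delta) for |t| << delta, ~|t| for |t| >> delta.
    Computed via logaddexp for numerical stability: log(cosh(x)) = logaddexp(x, -x) - log 2.
    """
    t = np.asarray(t, dtype=float)
    return delta * np.logaddexp(t / delta, -t / delta) - delta * np.log(2.0)

def regularizer(image, delta=1.0):
    """Sum of penalties over horizontal and vertical neighbor pairs
    of one 2-D material image (a minimal 4-neighborhood sketch)."""
    image = np.asarray(image, dtype=float)
    r = green_penalty(np.diff(image, axis=0), delta).sum()
    r += green_penalty(np.diff(image, axis=1), delta).sum()
    return float(r)
```

A flat image incurs zero penalty, while large isolated jumps are penalized only linearly, which is what preserves contrast edges.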
2.1.2. Physical Model: Polyenergetic Lambert-Beer's Law
The goal of computed tomography is to reconstruct an attenuation map describing the interior of a scanned object. The attenuation map cannot be measured directly, but only via projections of the object. The link between the attenuation map and its projections is given by Lambert-Beer's law. This model correctly reflects the physical behavior for a monoenergetic X-ray source. But in clinical practice X-rays are created by X-ray tubes which emit quanta distributed over a continuous spectrum of energies. Image reconstructions based on the monoenergetic Lambert-Beer law, like filtered backprojection (FBP), neglect the polychromatic nature of the X-rays and can therefore lead to serious beam-hardening artifacts that degrade image quality. The artifacts need to be corrected, which usually happens in a preprocessing step. To avoid their generation already in the reconstruction process, it is necessary to know the materials composing the object and integrate Lambert-Beer's law for polyenergetic X-rays (cf. [15])
$$\bar{y}_i^b = \int N^b(E)\, \exp\!\Big( -\sum_{m=1}^{M} \psi_m(E) \sum_{j=1}^{J} a_{ij} f_j^m \Big)\, \mathrm{d}E \tag{7}$$
into the reconstruction algorithm. In (7), $\psi_m(E)$ is the energy-dependent attenuation coefficient of the $m$-th basis material as defined in (9) below, and $N^b(E)$ is the spectrum the $b$-th energy bin is sensitive to. For realistic PCDs $N^b(E)$ can be calculated from the tube spectrum $N(E)$ via
$$N^b(E') = N(E') \int_{T_b}^{T_{b+1}} d(E, E')\, \mathrm{d}E, \tag{8}$$
where $d(E, E')$ is the normalized detector response function and $T_b$, $T_{b+1}$ are the thresholds bounding the $b$-th energy bin. It states the probability of the detector assigning energy $E$ to a measured photon with true energy $E'$. The system matrix element $a_{ij}$ governs the contribution of the $j$-th image pixel to the $i$-th projection, so it implicitly contains the scanner geometry.
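For a single ray and energy bin, the discretized version of (7) is simply a weighted sum over an energy grid. A minimal sketch (array shapes and names are our own illustration, not the paper's implementation):

```python
import numpy as np

def expected_counts(bin_spectrum, mu_basis, line_integrals, dE=1.0):
    """Discretized polychromatic Lambert-Beer forward model, one sinogram pixel.

    bin_spectrum   : N^b(E) sampled on an energy grid
    mu_basis       : (n_materials, n_energies) attenuation of the basis materials
    line_integrals : (n_materials,) forward-projected material fractions for this ray
    dE             : width of one energy step
    """
    t = line_integrals @ mu_basis          # total attenuation line integral per energy
    return float(np.sum(bin_spectrum * np.exp(-t)) * dE)
```

With zero line integrals the result is just the integrated bin spectrum; any attenuation reduces the expected counts.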
2.1.3. Material Composition Model
Neglecting Rayleigh scattering, there are two physical effects relevant for the attenuation in clinical CT examinations: Compton scattering and the photoelectric effect. This fact would limit the number $M$ of materials separable with energy-resolved CT to $M = 2$. Fortunately, due to the unique spectral lines of each chemical element more than two materials can be separated, if at least $M - 2$ of the materials exhibit one or more individual spectral lines in the range of the applied X-ray spectrum and all of them are mutually exclusive regarding their constituting chemical elements. Under that assumption, we can decompose each material into a linear combination of basis materials with their respective energy-dependent attenuation coefficients $\psi_m(E)$:
$$\mu(E) = \sum_{m=1}^{M} f^m \psi_m(E). \tag{9}$$
The number of materials forming the material basis may not exceed the number of available spectrally different data sets, to guarantee that the system of equations is not underdetermined. The choice of the material basis must depend on the composition of the scanned object and is crucial for the accuracy of the composition estimation. A suitable choice for clinical applications should include water, since it resembles well the attenuation behavior of the majority of human tissue [24–26]. Additional basis materials should be chosen depending on the imaging task. Calcium or a suitable mixture of materials could be included if osseous body parts are to be identified. If an injected iodine-based contrast agent is to be identified, which is frequently used in clinical CT examinations to enhance the contrast of cancerous tissue, iodine should be included.
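The decomposition in (9) can be illustrated by fitting a target attenuation curve onto the basis curves in a least-squares sense. Note that the coefficients may legitimately come out negative, which is exactly the situation discussed later for acrylic glass represented in an iodine/water basis (all numbers below are invented for illustration):

```python
import numpy as np

def basis_coefficients(mu_target, mu_basis):
    """Least-squares coefficients f with mu_target(E) ~ sum_m f_m * mu_basis_m(E).

    mu_target : (n_energies,) attenuation of the material to decompose
    mu_basis  : (n_materials, n_energies) attenuation of the basis materials
    """
    f, *_ = np.linalg.lstsq(mu_basis.T, mu_target, rcond=None)
    return f
```

If the target is an exact linear combination of the basis curves, the coefficients are recovered exactly.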
A correct and reliable separation of materials requires a different absorption behavior of each basis material in the range of the applied tube spectrum. If attenuation coefficients of different materials have the same energy dependency in the energy range covered by the spectrum, those materials cannot be distinguished by means of spectrally resolved CT.
2.1.4. Optimization Transfer Principle
The proposed algorithm applies the optimization transfer principle introduced by De Pierro [16, 27]. According to this principle, the objective function is locally approximated by surrogate functions that are easier to minimize (cf. Figure 1). In fact, we will approximate the objective function by parabolic surrogates whose minima are known analytically. A new surrogate function $S(\mathbf{f}; \mathbf{f}^{(n)})$ is generated in each iteration step $n$. The following conditions imposed on a surrogate are sufficient [18] to guarantee convergence of the algorithm:
$$S(\mathbf{f}^{(n)}; \mathbf{f}^{(n)}) = \Phi(\mathbf{f}^{(n)}), \qquad \nabla S(\mathbf{f}; \mathbf{f}^{(n)})\big|_{\mathbf{f}=\mathbf{f}^{(n)}} = \nabla \Phi(\mathbf{f})\big|_{\mathbf{f}=\mathbf{f}^{(n)}}, \qquad S(\mathbf{f}; \mathbf{f}^{(n)}) \ge \Phi(\mathbf{f}) \quad \forall\, \mathbf{f}. \tag{10}$$
Equations (10) ensure that each surrogate always lies completely above the objective function and is tangential to it at the current iteration step.
2.2. Algorithm Derivation
2.2.1. Surrogate Objective Functions
Our derivation of the separable surrogate objective function and the update equation follows the procedure presented in [19]. In a first step, the energy integral in the physical model, (7), is moved out of the log-likelihood term of the objective function (5). Therefore, we define the material line integrals
$$l_i^m = \sum_{j=1}^{J} a_{ij} f_j^m \tag{11}$$
and the total attenuation line integral
$$t_i(E) = \sum_{m=1}^{M} \psi_m(E)\, l_i^m. \tag{12}$$
With these definitions, the polychromatic Lambert-Beer law (7) can be rewritten as
$$\bar{y}_i^b = \int N^b(E)\, e^{-t_i(E)}\, \mathrm{d}E. \tag{13}$$
Since $h_i^b(x) = x - y_i^b \ln x$ is a convex function and the current-iterate weights $w_i^{b,(n)}(E) = N^b(E)\, e^{-t_i^{(n)}(E)} / \bar{y}_i^{b,(n)}$ satisfy $\int w_i^{b,(n)}(E)\,\mathrm{d}E = 1$, it holds that
$$h_i^b\big(\bar{y}_i^b\big) \le \int w_i^{b,(n)}(E)\, h_i^b\!\Big( \bar{y}_i^{b,(n)}\, e^{-\left(t_i(E) - t_i^{(n)}(E)\right)} \Big)\, \mathrm{d}E, \tag{14}$$
so we can replace $L$ by the surrogate
$$S_1(\mathbf{f}; \mathbf{f}^{(n)}) = \sum_{b=1}^{B} \sum_{i=1}^{I} \int w_i^{b,(n)}(E)\, h_i^b\!\Big( \bar{y}_i^{b,(n)}\, e^{-\left(t_i(E) - t_i^{(n)}(E)\right)} \Big)\, \mathrm{d}E \tag{15}$$
by applying Jensen's inequality for convex functions. It can be shown that $S_1$ fulfills conditions (10). Next, we want to approximate $S_1$ by another, parabolic surrogate whose minimum is known analytically. To this end, we expand the contribution $s_i$ of the $i$-th projection to $S_1$ into a Taylor series up to the second order about the current line integrals $\mathbf{l}_i^{(n)}$. We end up with the new surrogate function
$$S_2(\mathbf{f}; \mathbf{f}^{(n)}) = \sum_{i=1}^{I} \Big[ s_i(\mathbf{l}_i^{(n)}) + \nabla s_i(\mathbf{l}_i^{(n)})^{T} (\mathbf{l}_i - \mathbf{l}_i^{(n)}) + \tfrac{1}{2}\, (\mathbf{l}_i - \mathbf{l}_i^{(n)})^{T}\, \mathbf{C}_i\, (\mathbf{l}_i - \mathbf{l}_i^{(n)}) \Big]. \tag{17}$$
The curvature $\mathbf{C}_i$ in (17) must be chosen such that the optimization transfer principle (10) is satisfied. Practically, we will use the Hessian of $s_i$ instead. With this choice the monotonicity of the algorithm cannot be guaranteed mathematically anymore. If we forced the line integrals to be positive, there would exist curvatures that provably ensure monotonic convergence [18]. However, this constraint might affect the capability of the algorithm to model materials not incorporated in the material basis. Materials not part of the basis are represented as a linear combination of the basis materials (see (9)), which also requires negative coefficients. A weaker form of the positivity constraint merely postulates a nonnegative total attenuation, which would be compatible with the model. Whether this constraint still guarantees monotonic convergence and the existence of an optimal (in the sense of convergence rate, cf. [18]) curvature remains an open question.
Finally, we replace $S_2$ by a surrogate that allows independent updates of all image pixels, enabling parallel computation. To this end, we rewrite the $i$-th line integral as a convex combination by transforming
$$l_i^m = \sum_{j=1}^{J} \alpha_{ij} \left[ \frac{a_{ij}}{\alpha_{ij}} \big( f_j^m - f_j^{m,(n)} \big) + l_i^{m,(n)} \right] \tag{18}$$
with
$$\alpha_{ij} = \frac{a_{ij}}{\sum_{j'=1}^{J} a_{ij'}}. \tag{19}$$
Applying once again Jensen's inequality, we pull the sum over the image pixels out of $q_i$, the parabolic summand of (17), which yields our final surrogate function
$$S_3(\mathbf{f}; \mathbf{f}^{(n)}) = \sum_{i=1}^{I} \sum_{j=1}^{J} \alpha_{ij}\, q_i\!\left( \frac{a_{ij}}{\alpha_{ij}} \big( \mathbf{f}_j - \mathbf{f}_j^{(n)} \big) + \mathbf{l}_i^{(n)};\; \mathbf{l}_i^{(n)} \right). \tag{20}$$
For the deduction of the separable surrogate of the regularization term we followed De Pierro [16] and ended up with
$$S_R^m(\mathbf{f}^m; \mathbf{f}^{m,(n)}) = \sum_{j=1}^{J} \sum_{k \in N_j} \left[ \tfrac{1}{2}\, \psi\!\big( 2 f_j^m - f_j^{m,(n)} - f_k^{m,(n)} \big) + \tfrac{1}{2}\, \psi\!\big( f_j^{m,(n)} + f_k^{m,(n)} - 2 f_k^m \big) \right].$$
2.2.2. Minimization Method
To minimize the surrogate objective function we apply the Newton-Raphson method:
$$\mathbf{f}_j^{(n+1)} = \mathbf{f}_j^{(n)} - \Big[ \mathbf{H}_{S,j} + \beta^m \mathbf{H}_{R,j} \Big]^{-1} \Big[ \nabla_j S_3 + \beta^m \nabla_j S_R^m \Big]\bigg|_{\mathbf{f}^{(n)}}. \tag{22}$$
$\mathbf{H}_S$ and $\mathbf{H}_R$ are the Hessian matrices, that is, the second derivatives of $S_3$ and $S_R^m$, respectively. Hence, since the surrogates are separable in the image pixels, the $(n+1)$-st iteration step for the $m$-th material image depends only on the measured energy-bin data and the material images previously calculated in the $n$-th iteration step. This permits parallel updates of all material images.
Evaluating the gradient of $S_3$ at the current iteration yields
$$\frac{\partial S_3}{\partial f_j^m}\bigg|_{\mathbf{f}^{(n)}} = \sum_{b=1}^{B} \sum_{i=1}^{I} a_{ij} \left( \frac{y_i^b}{\bar{y}_i^{b,(n)}} - 1 \right) \int N^b(E)\, \psi_m(E)\, e^{-t_i^{(n)}(E)}\, \mathrm{d}E. \tag{26}$$
For the elements of the second derivative of $S_3$, using the Hessian of $s_i$ instead of a curvature that satisfies the conditions of the optimization transfer principle, we get
$$\big( \mathbf{H}_{S,j} \big)_{mm'} = \sum_{b=1}^{B} \sum_{i=1}^{I} a_{ij}\, a_i \int N^b(E)\, \psi_m(E)\, \psi_{m'}(E)\, e^{-t_i^{(n)}(E)}\, \mathrm{d}E, \qquad a_i = \sum_{j'=1}^{J} a_{ij'}. \tag{28}$$
In the last step the expected counts $\bar{y}_i^{b,(n)}$ were replaced by their definition (13). The derivatives of the surrogate of the regularization function are
$$\frac{\partial S_R^m}{\partial f_j^m}\bigg|_{\mathbf{f}^{(n)}} = \sum_{k \in N_j} \dot{\psi}\big( f_j^{m,(n)} - f_k^{m,(n)} \big), \qquad \frac{\partial^2 S_R^m}{\partial f_j^m\, \partial f_{j'}^m} = 2\, \delta_{jj'} \sum_{k \in N_j} \ddot{\psi}\big( 2 f_j^m - f_j^{m,(n)} - f_k^{m,(n)} \big),$$
with the Kronecker symbol $\delta_{jj'}$.
2.2.3. Summary of the Algorithm
In the following we give a brief step-by-step overview of the algorithm:
(1) First, compute the line integrals of the material images by forward-projecting them; see (11). Subsequently, determine the total attenuation line integrals as given by (12). For an adequate initialization of the algorithm, provide the initial material images from material-separated FBP images reconstructed from the raw data.
(2) Next, determine the expected counts $\bar{y}_i^b$ by multiplying $e^{-t_i(E)}$ with the mean number of photons emitted by the tube and the respective spectrum the $b$-th energy bin is sensitive to. Finally, discretize the integral in (7) and carry out the summation. The spectra can be calculated from the detector response function as stated by (8).
(3) Additionally, calculate the integrals $\int N^b(E)\, \psi_m(E)\, e^{-t_i(E)}\, \mathrm{d}E$, which enter both the gradient and the Hessian.
(4) Then one can determine the gradient following (26).
(5) The Hessian matrix can be obtained in a similar fashion, using (28). It should be mentioned that $a_i$ in (28) is a simple forward projection of an image containing all ones, so it can be precomputed.
(6) Multiplying the inverted Hessian matrix with the previously calculated gradient and subtracting the result from the current material images yields the updated material images.
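The six steps above can be condensed into a toy end-to-end sketch. Everything below (energy grid, spectra, attenuation curves, system matrix, phantom values) is invented illustration data, not the paper's setup; the update uses the gradient and per-pixel Hessian blocks in the form derived above, without regularization, and with noiseless data so the iteration should settle near the true material images:

```python
import numpy as np

rng = np.random.default_rng(0)

E = np.arange(20.0, 141.0, 5.0)                      # energy grid [keV]
psi = np.vstack([60.0 / E, 0.2 + 5.0e3 / E**3])      # invented basis attenuation curves
spectra = np.vstack([np.exp(-0.5 * ((E - 45.0) / 12.0) ** 2),
                     np.exp(-0.5 * ((E - 90.0) / 18.0) ** 2)])  # invented bin spectra
A = rng.uniform(0.0, 1.0, size=(8, 3))               # tiny system matrix: 8 rays, 3 pixels
a_row = A.sum(axis=1)                                # forward projection of an all-ones image
f_true = np.array([[0.02, 0.00, 0.01],               # contrast-agent-like fractions
                   [0.50, 1.00, 0.80]])              # water-like fractions

def forward(f):
    """Expected counts ybar[b, i] for material images f[m, j] (discretized eq. (7))."""
    t = np.einsum('me,mi->ie', psi, f @ A.T)         # total attenuation t_i(E)
    return np.einsum('be,ie->bi', spectra, np.exp(-t))

y = forward(f_true)                                  # noiseless "measurement"

f0 = np.array([[0.0, 0.0, 0.0],
               [0.6, 0.6, 0.6]])                     # rough initial material images
f = f0.copy()
for _ in range(300):
    att = np.exp(-np.einsum('me,mi->ie', psi, f @ A.T))       # e^{-t_i(E)}
    ybar = np.einsum('be,ie->bi', spectra, att)
    w = np.einsum('be,ie,me->bim', spectra, att, psi)         # ∫ N^b ψ_m e^{-t} dE
    grad = np.einsum('bi,bim,ij->mj', y / ybar - 1.0, w, A)   # gradient, cf. (26)
    curv = np.einsum('be,ie,me,ke->bimk', spectra, att, psi, psi)
    H = np.einsum('bimk,i,ij->jmk', curv, a_row, A)           # per-pixel blocks, cf. (28)
    for j in range(A.shape[1]):                               # independent per-pixel updates
        f[:, j] -= np.linalg.solve(H[j], grad[:, j])
```

The per-pixel loop is trivially parallelizable, which is the point of the separable surrogate: each pixel's small material-by-material system is solved independently.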
2.3. System Setup
2.3.1. Accuracy Analysis with Ideal PCD Data
We started evaluating the accuracy of the algorithm by reconstructing material images from ideal simulation data of a water-filled acrylic glass cylinder, generated with the DRASIM software [28]. In this context, ideal means that the detector perfectly separates the irradiating tube spectrum into disjoint energy bins. Effects like K-escape, charge sharing, electronic noise, and pulse pileup do not occur. For the evaluation of the algorithm we chose the number of energy thresholds, and by that the number of energy bins, to two, matching the number of material images to be reconstructed. The cylinder phantom has a diameter of 30 cm, containing five small cylinders with various concentrations of iodine. The iodine concentration increases counterclockwise in equidistant steps, with the upper contrast cylinder (cf. Figure 2) having the lowest concentration.
The scans were simulated in fanbeam geometry with a field of view (FOV) of 52 cm. The isocenter to detector distance was cm and the focus to isocenter distance cm. The virtual detector features 4 rows with 3680 quadratic subpixels, each with a pitch of 250 μm. Data simulated with that geometry are fused to macropixels previous to further processing. A macropixel contains the summed counts of small pixels. Each fifth column of small pixels is excluded to account for the covering due to a collimator grating.
Over an angular range of 540°, 1664 projections were taken, exposing the phantom to a 140 kVp tube spectrum. The additional angular range of 180° beyond a full rotation is required by the employed rebinning algorithm. Rebinning converts the sinogram data from fan-beam to parallel-beam geometry prior to reconstruction. In doing so, the detector quarter-shift is used to double the sampling.
The tube was driven with a current of 100 mA. The emitted spectrum, irradiated onto the cylinder phantom, was prefiltered with 0.9 mm of titanium and 3.5 mm of aluminum. We did not account for a bowtie filter in the simulations, although it could be accounted for in the detector response function (8) via an additional factor that depends on the sinogram pixel index. The simulated detector has two counter thresholds, providing two spectrally separated data sets.
We chose iodine and water as basis materials and initialized the algorithm with the ground truth, that is, the correct material images. Since acrylic glass is not part of the material basis, the borders of the cylinder phantom will be represented as a linear combination of iodine and water.
The energy resolution of the algorithm was set to 5 keV. This determines the precision of the approximation of the energy integral in (7). The bin spectra are provided with the same resolution. Bin spectra are the parts of the tube spectrum the energy bins of a PCD are sensitive to (see Figure 3(a)). The mass attenuation coefficients and the material densities for the calculation of attenuation values were taken from the EPDL library [26] and Kuchling [29], respectively. To evaluate the mass attenuation coefficients at the sample points of the bin spectra, the tabulated values from the EPDL library were interpolated. The interpolation was conducted piecewise if the considered material possesses one or more absorption edges within the energy range of the bin spectra.
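The piecewise interpolation around an absorption edge can be sketched as follows: the tabulated values are interpolated separately on each side of every edge so that the discontinuity is not averaged away. The function name, table values, and the edge position near the iodine K-edge used in the example are illustrative assumptions:

```python
import numpy as np

def interp_with_edge(E_query, E_tab, mu_tab, edges=()):
    """Piecewise-linear interpolation of tabulated attenuation values.
    Interpolation is carried out independently on each side of every
    absorption edge so the discontinuity is preserved."""
    E_query = np.atleast_1d(np.asarray(E_query, dtype=float))
    out = np.empty_like(E_query)
    bounds = [-np.inf, *sorted(edges), np.inf]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        q = (E_query >= lo) & (E_query < hi)      # queries in this edge-free segment
        s = (E_tab >= lo) & (E_tab < hi)          # table samples in the same segment
        out[q] = np.interp(E_query[q], E_tab[s], mu_tab[s])
    return out
```

Because `np.interp` clamps outside its sample range, a query just below the edge returns the below-edge value instead of a smoothed mixture of both sides.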
Figure 3: (a) Ideal 2-bin PCD; (b)–(d) realistic 2-bin PCD for the investigated threshold configurations.
2.3.2. Dependence of Convergence Speed on Energy Bin Selection
Real photon-counting detectors are not able to separate the registered photons into perfectly disjoint energy bins. This is mainly a consequence of signal sharing between neighboring pixels. Together with pulse pileup, it leads to a considerable overlap of the sensitive ranges of individual energy bins. Hence, the spectral resolution, and consequently the amount of spectral information gathered, is reduced compared to ideal PCDs. Since the material separation capability strongly depends on the basis materials' difference in absorption behavior among the energy bins, we expect the convergence rate of our algorithm to depend on the choice of energy bins.
The algorithm requires knowledge of the specific bin spectra $N^b(E)$, that is, the part of the tube spectrum an energy bin is sensitive to, to correctly reconstruct material fractions; see (7). The specific spectra for arbitrary, realistic bins can be calculated via the detector response function $d(E, E')$; see (8). We acquired the response function with a resolution of 1 keV by simulating monoenergetic scans without phantom, with X-ray energies up to 139.5 keV, using the SimSD simulation tool [30, 31]. The tool accounts for all relevant physical processes occurring in realistic PCDs. To keep the influence of pulse pileup on the detector response low (<1.5%), we chose a small X-ray flux and used a clocked readout to discriminate pulses. The calculation of the response function assumes a virtual CdTe detector with a bulk thickness of 1.6 mm, biased with a voltage of 1 kV, and the same pixel size as the detector described in Section 2.3.1. We sample the monoenergetic response functions with thresholds between 4 and 173 keV, again with a resolution of 1 keV. The detector response to each photon energy is approximated by averaging the response of 10k detector pixels. Figures 3(b)–3(d) show the sensitive ranges of the two energy bins calculated from the detector response function for the chosen prefiltered 140 kVp tube spectrum (cf. Section 2.3.1) for the investigated bin configurations. With the SimSD tool [30, 31] we also created realistic data sets of the cylindrical phantom for a 2-bin PCD, where realistic means that we consider pulse pileup, K-escape, charge sharing, and electronic noise. Figure 2 shows the normalized, scaled total density of the cylindrical phantom.
To investigate the dependence of convergence speed on the selected counter thresholds, we fixed the low energy counter at 20 keV and varied the high energy counter between 50 keV and 80 keV in steps of 15 keV. We created two data sets for each of the three energy bin configurations that only differ in their noise realization. This enables us to evaluate the image noise in difference images, avoiding systematic errors.
In contrast to the ideal PCD data (cf. Section 2.3.1), a pileup correction is applied to the realistic raw data before reconstructing them with the proposed algorithm. The correction is based on a high-order polynomial that we fit to the mean true count rate plotted versus the respective mean count rate measured by the detector. We initialized the algorithm with the material fractions estimated from material-separated FBP images. First, the FBP images are reconstructed from the simulated raw data after a combined water BHC and pulse pileup correction has been applied. Next, the FBP images on the Hounsfield scale are converted to attenuation values with
$$\mu_j^b = \mu_{\mathrm{water}}^b \left( 1 + \frac{\mathrm{HU}_j^b}{1000} \right).$$
Finally, the initial material fractions are calculated from the attenuation maps via
$$\mathbf{f}_j = \mathbf{M}^{-1} \boldsymbol{\mu}_j,$$
with $\boldsymbol{\mu}_j = (\mu_j^1, \dots, \mu_j^B)^{T}$ and $\mathbf{f}_j = (f_j^1, \dots, f_j^M)^{T}$. The matrix $\mathbf{M}$ contains the attenuation values of the pure basis materials for the respective bin spectra. $\mathbf{M}$ can be calculated using the tabulated attenuation values from [26].
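The initialization described above amounts to one Hounsfield-to-attenuation conversion per bin image followed by a small linear solve per pixel. A minimal sketch; the matrix entries and the per-bin water attenuation are invented values, whereas in practice they would be computed from the bin spectra and tabulated attenuation data:

```python
import numpy as np

def hu_to_mu(hu, mu_water):
    """Convert CT numbers on the Hounsfield scale to attenuation values."""
    return mu_water * (1.0 + np.asarray(hu, dtype=float) / 1000.0)

def initial_fractions(mu_pixel, basis_matrix):
    """Solve mu = M f for the material fractions of one image pixel.

    mu_pixel     : (B,) attenuation of the pixel in each energy bin
    basis_matrix : (B, M) effective attenuation of the pure basis materials per bin
    """
    return np.linalg.solve(basis_matrix, mu_pixel)
```

With as many bins as basis materials the system is square; more bins than materials would call for a least-squares solve instead.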
3. Results
3.1. Accuracy Analysis with Ideal PCD Data
We compared the results of two different reconstructions of the same raw data sets. In the first case we reconstructed the material images from the ideal PCD data without any regularization, that is, with the regularization parameters set to zero. For the second case we employed regularization. The regularization parameters were selected depending on the considered material by taking into account the image noise contained in the respective initial material images reconstructed from realistic data. After 1000 iterations the material fractions as well as their uncertainties were evaluated for both cases within the two ROIs that are depicted in Figures 4(a) and 4(b). The first ROI is indicated by a dashed circle located within the highest iodine contrast cylinder and at the respective position in the water image. The second ROI is indicated by a solid circle located at the lowest iodine contrast cylinder and at the respective position in the water image. The results of the ROI analysis are summarized in Table 1. The table also lists the mean deviations from the ground truth within the ROIs. Figure 4 shows the true material images for iodine (a) and water (b), as well as the difference between the respective images and the ground truth after 1000 iterations for both investigated cases ((c)–(f)).

(a) Iodine image, ground truth
(b) Water image, ground truth
(c) Iodine image, difference to ground truth after 1000 iterations, without regularization
(d) Water image, difference to ground truth after 1000 iterations, without regularization
(e) Iodine image, difference to ground truth after 1000 iterations, with regularization
(f) Water image, difference to ground truth after 1000 iterations, with regularization
3.2. Dependence of Convergence Speed on Energy Bin Selection
The initial material images as well as the final material images after 1000 iterations are shown in Figure 5, as an example, for the data set featuring the (20–65 keV, 65–140 keV) bin configuration. Compared to the ground truth images shown in Figures 4(a) and 4(b), significant beam-hardening artifacts are visible between the iodine contrast probes. They are caused by the FBP reconstruction based on the monoenergetic version of Lambert-Beer's law. Due to these artifacts, the iodine contrast is considerably underestimated and images of the iodine contrast probes also show up in the initial water material image.
(a) Initial iodine image
(b) Initial water image
(c) Final iodine image
(d) Final water image
Analogously to the ideal material images in Section 2.3.1, all material images have been evaluated within identically positioned ROIs. The image noise, that is, the standard deviation of the ROI data, is not estimated directly in the reconstructed images. Instead, the images reconstructed from the two data sets that only differ in their noise realization are subtracted, and the image noise is evaluated within the resulting difference image. Eventually, the measured image noise is scaled by a factor of $1/\sqrt{2}$ to account for Gaussian error propagation.
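This difference-image noise estimate can be sketched as follows; the function name is our own, and the check below uses synthetic Gaussian noise purely for illustration:

```python
import numpy as np

def roi_noise_from_difference(img_a, img_b, roi_mask):
    """Estimate single-image noise from two reconstructions that differ
    only in their noise realization: the standard deviation of the
    difference image within the ROI, divided by sqrt(2) (Gaussian
    error propagation, since Var(a - b) = Var(a) + Var(b))."""
    diff = np.asarray(img_a, dtype=float) - np.asarray(img_b, dtype=float)
    return float(diff[roi_mask].std(ddof=1) / np.sqrt(2.0))
```

Subtracting the two realizations cancels systematic structures (object contrast, discretization artifacts), so only the random noise contributes to the estimate.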
The deviation of the measured material fraction from the ground truth as well as the respective image noise is plotted against iteration number in Figure 6 for both basis materials and the various bin configurations.
4. Discussion
4.1. Accuracy Analysis with Ideal PCD Data
Considering the results of the reconstruction with deactivated regularization (see Figures 4(c) and 4(d)), the chosen internal energy resolution of 5 keV seems sufficient, since the deviations from the ground truth are very small after 1000 iterations. The final images from the nonregularized reconstruction mainly exhibit moiré pattern residuals that are typical discretization artifacts of the forward and backprojectors. Those artifacts are much more prominent than the high-frequency, random noise residuals expected from a statistical reconstruction algorithm. So we conclude that the standard deviations calculated from the ROI data are a rough measure of the discretization artifacts. The standard deviation is of the same magnitude as the deviations from the ground truth, which are mainly caused by the limited internal energy resolution. This confirms that for the utilized projection operator an internal energy resolution of 5 keV is adequate.
If regularization is employed, the standard deviation in both material images is reduced, especially in the contrast probes within the iodine image. In return though, the precision of the material fraction estimation is reduced slightly for data within the first ROI, that is, the probe with the highest iodine contrast. We suppose that this loss in precision comes from the way regularization is applied to the material images. Although the material images are linked by the measured spectral data sets, regularization is applied to each material image individually. If the regularization is not perfectly edge-preserving, this leads to a bias, affecting the precision of the material image estimate. Suppose that by regularization of the iodine material image the edges of a contrast probe are smoothed out, as illustrated in Figure 8(a). The requirement that the total attenuation be preserved, together with the by far stronger attenuation coefficient of iodine in the considered energy window, inevitably causes the smoothed-out edge to reappear in the water image with increased amplitude; see Figure 8(b). The regularization of the water image assures that the imprinted edges from the iodine image get smoothed and blurred and in turn also influence the iodine image, albeit to a much lesser extent. Thus, accumulated over the number of iterations, this effect slightly influences the mean contrast measured within the ROIs in both images. The more prominent the contrast edge, the stronger the resulting deviation caused by regularization (cf. Table 1). A strictly edge-preserving prior like the Huber prior might reduce this issue but was not tested, since it does not fulfill the convergence criteria according to [16] and a divergence of the algorithm seemed likely.
4.2. Dependence of Convergence Speed on Energy Bin Selection
Generally, the chosen second derivative of the surrogate function seems to be valid, since the algorithm converged in all investigated cases. As conjectured, the convergence speed indeed depends on the selected thresholds. While for thresholds located at 20∣65 keV and 20∣80 keV the algorithm shows a similar convergence rate (cf. Figure 6), it converges noticeably slower for the threshold combination 20∣50 keV. On the one hand, this is due to the reduced separation between the two bin spectra; see Figure 3(b). On the other hand, it is a consequence of comparably worse initial images, which are likewise affected by the large overlap of the bin spectra. Therefore, not only must the number of thresholds in a PCD be increased to allow a separation of more than two basis materials, but the respective data sets also have to be spectrally well separated. In addition, the absorption behavior of the basis materials must be sufficiently distinct. For instance, a separation of water and fat will hardly converge with the proposed algorithm because of the similar absorption behavior of these materials within the energy range of clinical CT.
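The effect of spectral separation can be made plausible with a small numerical sketch using made-up bin-averaged attenuation values (not the paper's spectra): the two-material decomposition amounts to inverting a 2x2 system, and overlapping bin spectra make its rows nearly proportional, worsening the conditioning and, with it, the convergence of iterative updates.

```python
import numpy as np

# Illustration with made-up bin-averaged attenuation values (not the
# paper's spectra). Rows of the matrix correspond to energy bins,
# columns to basis materials; overlapping bin spectra make the rows
# nearly proportional and the inversion ill-conditioned.

def decomposition_matrix(mu_water, mu_iodine):
    """Rows: energy bins; columns: basis materials (water, iodine)."""
    return np.column_stack([mu_water, mu_iodine])

# Well-separated bins (e.g. thresholds 20|80 keV): distinct bin averages.
A_sep = decomposition_matrix([0.35, 0.20], [8.0, 2.0])
# Strongly overlapping bins (e.g. thresholds 20|50 keV): similar averages.
A_ovl = decomposition_matrix([0.30, 0.28], [6.0, 5.5])

print(np.linalg.cond(A_sep))   # moderate condition number
print(np.linalg.cond(A_ovl))   # far larger: slower, noisier inversion
```

The same reasoning explains why a water/fat basis is problematic: their similar absorption behavior makes the columns, rather than the rows, nearly proportional, with the same ill-conditioning as a result.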
The dark shading between the contrast probes, caused by beam hardening of the polychromatic X-ray tube spectrum, is noticeably reduced after 1000 iterations.
Remarkably, the image noise measured within the highest-contrast rod of the iodine image increases beyond the respective image noise of the FBP starting image. This happens because within the first iterations mainly the mean iodine contrast is scaled to match the respective sinogram data, and scaling all image values by a certain factor scales the standard deviation as well. Since the initial estimate of the highest iodine contrast is not very accurate and regularization primarily prevents high-frequency noise from being added to the material images, it cannot prevent an initial increase of image noise in this case. For the smallest contrast, the initial image is a decent estimate of the actual iodine fraction and the changes made by the algorithm within the first few iterations are less severe, so regularization can prevent an increase of image noise.
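That rescaling an image region rescales its measured noise follows directly from the scaling property of the standard deviation (toy numbers, for illustration only):

```python
import numpy as np

# Toy demonstration that rescaling image values rescales the measured
# noise: std(c * x) = c * std(x) for c > 0.
rng = np.random.default_rng(0)
roi = 0.05 + 0.01 * rng.standard_normal(1000)  # toy ROI: mean 0.05, sigma 0.01
scaled = 3.0 * roi                             # contrast scaled up threefold
assert np.isclose(scaled.std(), 3.0 * roi.std())
```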
Summarizing the results of Figure 6, the algorithm performs well in correcting beam-hardening artifacts and quantitatively determining the material fractions for a two-bin PCD, provided the material basis is chosen properly.
With the utilized regularization function and parameters, a significant reduction of image noise was achieved in both material images; see Figure 7.
(a) Cut through the iodine image
(b) Cut through the water image
5. Conclusions
An iterative statistical reconstruction algorithm has been introduced that successively approximates the negative log-likelihood function by paraboloidal surrogate functions. With the proposed algorithm, a direct reconstruction of a set of material images from energy-resolved sinogram data is possible. Since the algorithm considers the polychromatic nature of the X-rays generated by typical clinical X-ray sources, it includes an implicit beam-hardening correction for the selected basis materials. Moreover, the algorithm has been tailored to the reconstruction of material images from data measured with photon-counting detectors by taking into account the correlations introduced between the energy-bin data sets in the detection process via the detector response function. It was shown that an internal energy resolution of 5 keV is sufficient to yield quantitative results if a proper material basis is selected. Compared to the reconstruction algorithms employed on current commercial scanners, the proposed algorithm converges rather slowly. This might make an immediate implementation in those scanners unlikely, despite the possibility of parallel computation.
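As a rough illustration of the surrogate principle only (not the paper's polychromatic multi-material algorithm), a simplified monochromatic, single-material version of such a parallel surrogate update might look as follows; the toy geometry and all variable names are our own:

```python
import numpy as np

# Simplified sketch (our own toy setup, not the paper's polychromatic
# multi-material algorithm): a separable paraboloidal-surrogate update for
# a monochromatic, single-material Poisson transmission model
# y_i ~ Poisson(b_i * exp(-[A x]_i)). The separable surrogate yields a
# fixed per-pixel denominator, so all pixels can be updated in parallel.

rng = np.random.default_rng(1)
n_rays, n_pix = 80, 40
A = 0.1 * rng.random((n_rays, n_pix))       # toy projection matrix
x_true = rng.random(n_pix)                  # toy material image
b = 1e4 * np.ones(n_rays)                   # unattenuated photon counts
y = rng.poisson(b * np.exp(-A @ x_true))    # noisy measured counts

# The maximum curvature of h_i(l) = b_i exp(-l) + y_i l on l >= 0 is b_i,
# giving a precomputable denominator d_j = sum_i a_ij b_i sum_k a_ik:
d = A.T @ (b * A.sum(axis=1))

x = np.zeros(n_pix)                         # nonnegative start image
for _ in range(500):
    ybar = b * np.exp(-A @ x)               # expected counts at current x
    grad = A.T @ (y - ybar)                 # gradient of the negative log-likelihood
    x = np.maximum(0.0, x - grad / d)       # simultaneous update of all pixels

print(np.abs(x - x_true).mean())            # error shrinks toward the ML estimate
```

The conservative curvature bound guarantees a monotone decrease of the surrogate objective, but it also explains the slow convergence noted above: the fixed denominator overestimates the local curvature, producing small steps.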
Competing Interests
The authors declare that they have no competing interests.
Acknowledgments
The authors would like to thank H. Bruder, J. Levakhina, J. Müller, F. Schöck, and J. Sunnegardh for their helpful comments and suggestions.
References
[1] T. G. Flohr, C. H. McCollough, H. Bruder et al., “First performance evaluation of a dual-source CT (DSCT) system,” European Radiology, vol. 16, no. 2, pp. 256–268, 2006.
[2] W. A. Kalender, W. H. Perman, J. R. Vetter, and E. Klotz, “Evaluation of a prototype dual-energy computed tomographic apparatus. I. Phantom studies,” Medical Physics, vol. 13, no. 3, pp. 334–339, 1986.
[3] F. Kelcz, P. M. Joseph, and S. K. Hilal, “Noise considerations in dual energy CT scanning,” Medical Physics, vol. 6, pp. 418–425, 1979; erratum in Medical Physics, vol. 7, no. 4, p. 388, 1980.
[4] R. Carmi, G. Naveh, and A. Altman, “Material separation with dual-layer CT,” in Proceedings of the IEEE Nuclear Science Symposium Conference Record (NSS/MIC '05), vol. 4, p. 367, Fajardo, Puerto Rico, October 2005.
[5] R. Ballabriga, M. Campbell, E. H. M. Heijne, X. Llopart, and L. Tlustos, “The Medipix3 prototype, a pixel readout chip working in single photon counting mode with improved spectrometric performance,” in Proceedings of the Nuclear Science Symposium Conference Record, vol. 6, pp. 3557–3561, October–November 2006.
[6] S. Kappler, A. Henning, B. Krauss et al., “Multi-energy performance of a research prototype CT scanner with small-pixel counting detector,” in Medical Imaging: Physics of Medical Imaging, vol. 8669 of Proceedings of SPIE, Orlando, Fla, USA, February 2013.
[7] T. Weidinger, T. M. Buzug, T. Flohr et al., “Investigation of ultra low-dose scans in the context of quantum-counting clinical CT,” in Medical Imaging 2012: Physics of Medical Imaging, vol. 8313 of Proceedings of SPIE, February 2012.
[8] S. Kappler, D. Niederlöhner, K. Stierstorfer, and T. Flohr, “Contrast-enhancement, image noise, and dual-energy simulations for quantum-counting clinical CT,” in Medical Imaging: Physics of Medical Imaging, vol. 7622 of Proceedings of SPIE, San Diego, Calif, USA, March 2010.
[9] S. Kappler, F. Glasser, S. Janssen, E. Kraft, and M. Reinwand, “A research prototype system for quantum-counting clinical CT,” in Medical Imaging 2010: Physics of Medical Imaging, vol. 7622 of Proceedings of SPIE, March 2010.
[10] C. Szeles, S. A. Soldner, S. Vydrin, J. Graves, and D. S. Bale, “CdZnTe semiconductor detectors for spectroscopic X-ray imaging,” IEEE Transactions on Nuclear Science, vol. 55, no. 1, pp. 572–582, 2008.
[11] T. R. C. Johnson, C. Fink, S. O. Schönberg, and M. F. Reiser, Eds., Dual Energy CT in Clinical Practice, Springer, Berlin, Germany, 2011.
[12] M. A. Desai, J. J. Peterson, H. W. Garner, and M. J. Kransdorf, “Clinical utility of dual-energy CT for evaluation of tophaceous gout,” Radiographics, vol. 31, no. 5, pp. 1365–1375, 2011.
[13] C. Thomas, A. Korn, B. Krauss et al., “Automatic bone and plaque removal using dual energy CT for head and neck angiography: feasibility and initial performance evaluation,” European Journal of Radiology, vol. 76, no. 1, pp. 61–67, 2010.
[14] W. H. Marshall Jr., R. E. Alvarez, and A. Macovski, “Initial results with prereconstruction dual-energy computed tomography (PREDECT),” Radiology, vol. 140, no. 2, pp. 421–430, 1981.
[15] T. M. Buzug, Computed Tomography, Springer, Berlin, Germany, 2008.
[16] A. R. De Pierro, “A modified expectation maximization algorithm for penalized likelihood estimation in emission tomography,” IEEE Transactions on Medical Imaging, vol. 14, no. 1, pp. 132–137, 1995.
[17] J. A. Fessler, I. A. Elbakri, P. Sukovic, and N. H. Clinthorne, “Maximum-likelihood dual-energy tomographic image reconstruction,” in Medical Imaging: Image Processing, vol. 4684 of Proceedings of SPIE, pp. 38–49, San Diego, Calif, USA, February 2002.
[18] H. Erdoğan and J. A. Fessler, “Monotonic algorithms for transmission tomography,” IEEE Transactions on Medical Imaging, vol. 18, no. 9, pp. 801–814, 1999.
[19] I. A. Elbakri and J. A. Fessler, “Statistical image reconstruction for polyenergetic X-ray computed tomography,” IEEE Transactions on Medical Imaging, vol. 21, no. 2, pp. 89–99, 2002.
[20] I. A. Elbakri and J. A. Fessler, “Segmentation-free statistical image reconstruction for polyenergetic X-ray computed tomography with experimental validation,” Physics in Medicine and Biology, vol. 48, no. 15, pp. 2453–2477, 2003.
[21] P. J. La Rivière and P. Vargas, “Penalized-likelihood sinogram decomposition for dual-energy computed tomography,” in Proceedings of the IEEE Nuclear Science Symposium Conference Record (NSS/MIC '08), pp. 5166–5169, Dresden, Germany, October 2008.
[22] P. J. Green, “Bayesian reconstructions from emission tomography data using a modified EM algorithm,” IEEE Transactions on Medical Imaging, vol. 9, no. 1, pp. 84–93, 1990.
[23] P. J. Huber, “Robust estimation of a location parameter,” Annals of Mathematical Statistics, vol. 35, pp. 73–101, 1964.
[24] J. F. Sutcliffe, “A review of in vivo experimental methods to determine the composition of the human body,” Physics in Medicine and Biology, vol. 41, no. 5, pp. 791–833, 1996.
[25] K. J. Ellis, “Human body composition: in vivo methods,” Physiological Reviews, vol. 80, no. 2, pp. 649–680, 2000.
[26] D. E. Cullen, J. H. Hubbell, and L. Kissel, “EPDL97: the Evaluated Photon Data Library, '97 version,” Tech. Rep. UCRL-50400, vol. 6, rev. 5, Lawrence Livermore National Laboratory, 1997.
[27] A. R. De Pierro, “On the relation between the ISRA and the EM algorithm for positron emission tomography,” IEEE Transactions on Medical Imaging, vol. 12, no. 2, pp. 328–333, 1993.
[28] K. Stierstorfer, “DRASIM: deterministic radiological simulation,” Internal Report, Siemens Medical Engineering, 2000.
[29] H. Kuchling, Taschenbuch der Physik, Fachbuchverlag Leipzig im C. Hanser, 2004.
[30] M. Balda, D. Niederlöhner, B. Kreisler, J. Durst, and B. Heismann, “Lookup table-based simulation of directly-converting counting X-ray detectors for computed tomography,” in Proceedings of the IEEE Nuclear Science Symposium Conference Record (NSS/MIC '09), pp. 2588–2593, Orlando, Fla, USA, October 2009.
[31] M. Balda, Quantitative Computed Tomography [dissertation], Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany, 2011, http://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2011/Balda11-QCT.pdf.
Copyright
Copyright © 2016 Thomas Weidinger et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.