A corrigendum for this article has been published.

International Journal of Biomedical Imaging

Volume 2016, Article ID 5871604, 15 pages

http://dx.doi.org/10.1155/2016/5871604

## Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography

^{1}Institute of Medical Engineering, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany

^{2}Siemens AG, Healthcare Sector, Imaging & Therapy Division, Siemensstraße 1, 91301 Forchheim, Germany

Received 29 November 2015; Revised 5 February 2016; Accepted 15 February 2016

Academic Editor: Jiang Hsieh

Copyright © 2016 Thomas Weidinger et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This work proposes a dedicated statistical algorithm to perform a direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography. It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels. Parallel updates can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse pileup. For data from both an ideal and a realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization we were able to reduce the image noise for the realistic PCD by up to 90% compared to material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD.

#### 1. Introduction

The development of dual-energy (DE) CT scanners paved the way for new clinical applications, taking advantage of the energy dependence of X-ray attenuation in matter, a fact that formerly had been considered a drawback. This technology allows for exposing patients to two distinct spectra, acquiring information about the material composition of the interior of the body. Besides scanners with two X-ray sources and detectors [1], there also exist approaches that, for instance, apply fast kVp switching [2] or dual-layer detectors [3, 4] to acquire spectrally resolved scan data. All these approaches share the limitation that they can hardly be extended to produce more than two spectrally well-separated data sets. Additionally, spectrally resolved data from dual-source dual-energy and fast kVp switching devices do not match exactly due to a time shift between the measured sinograms or projections, respectively, making a correct statistical treatment of the problem in image reconstruction difficult.

The continuous improvement of photon-counting detectors (PCDs) over the past few years promises a remedy for this limitation and holds new possibilities for multienergy imaging. Recent prototype detectors already provide two to four discriminator thresholds [5, 6]. This allows the acquisition of two to four sinograms in a single scan, each sensitive to a different part of the applied tube spectrum. Apart from energy-resolved scan data, photon-counting technology can provide further benefits, like reduced image noise [7] and enhanced contrast [8, 9].

However, there are also issues with the new detector type concerning its ability to handle the high X-ray fluxes that may occur in clinical CT examinations [10]. The consequence of high photon flux is the occurrence of pulse pileup. The severity of pileup correlates with the detector pixel pitch. Additionally, K-escape events and crosstalk between adjacent pixels lead to a degradation in spectral resolution. Their influence is inversely correlated with the detector pixel pitch. The challenge is thus to find a compromise between spectral sensitivity and high-flux-rate capability to ensure the clinical usability of photon-counting detectors.

In established dual-energy applications, the spectral information is used, for instance, to classify different body materials, identify high contrast agent concentrations that indicate pathological tissue, or support an early diagnosis of gout by helping to distinguish urate from small calcifications [11, 12]. Other applications utilize the spectral information to remove osseous structures in the patient image, revealing previously covered structures [13].

Usually, a linear material decomposition is performed subsequent to image reconstruction, although raw-data-based approaches have already been investigated [14]. As a consequence of the material decomposition process, image noise is increased significantly, especially if more than two materials are to be separated [6].

Statistical reconstruction methods offer the opportunity to achieve images with reduced noise compared to conventional filtered back-projection (FBP) reconstructions by including mature regularization techniques [15]. Statistical methods account not only for the physical effects governed by the absorption characteristics of matter; they also respect the Poisson nature of the emission and absorption processes in X-ray tubes and detectors, respectively. The correct statistical treatment becomes relevant in PCDs with high spectral resolution. Impinging photons are distributed into energy bins according to the energy assigned to them in the detection process. Therefore, an increase in energy resolution implies smaller energy bins with reduced photon statistics for a constant number of impinging photons and constant spatial sampling.

Statistical algorithms converge to the most likely image that best fits the measured sinogram data, according to the underlying statistical model. The canonical measure for how well image and measured data agree is given by the logarithmic likelihood function (log-likelihood), which must be maximized to find the best-fitting image. Since no analytic solution exists for the maximum of the Poisson log-likelihood function, it must be approximated by numerical methods. Some years ago, a new class of statistical, polychromatic algorithms was introduced [16, 17] that exploits the convexity of the *negative* log-likelihood by minimizing surrogate functions instead. Surrogate functions are successive local approximations of the negative log-likelihood function. With surrogates it is possible to formulate the optimization problem in a parallel manner, which allows quick computation on graphics processing units (GPUs). Under certain circumstances [18] the monotonic convergence of the algorithm is provable. The algorithm's polychromatic nature also implies a beam-hardening correction (BHC) for the chosen basis materials (cf. Section 2.1.3). With quite strong empirical assumptions the algorithm can be applied to spectrally unresolved data (e.g., to sinogram data taken with a conventional CT system with an energy-integrating detector) with good results [19, 20].

In our approach we drop these assumptions and reconstruct the material fractions directly from energy-resolved sinogram data. Compared to [21], our algorithm allows a simultaneous and parallel update of all material images and considers the detector response function of a photon-counting detector. The algorithm is not a priori limited to two spectrally well-separated data sets but scales with the number of energy-resolved data sets. As the number of thresholds in future PCDs might increase, an extension of the material separation to the respective number of basis materials will be possible, as long as these materials are distinguishable by means of energy-resolved CT (cf. Section 2.1.3). Our intention is to evaluate the accuracy of the reconstructed material fractions and to determine the anticipated gain in image quality due to image noise reduction from a correct statistical treatment of the problem.

In Section 2 we introduce the mathematical background and describe the setup of this study. Section 3 presents the results that are discussed subsequently in Section 4.

#### 2. Materials and Methods

##### 2.1. Theoretical Background

###### 2.1.1. Statistical Model: Poisson Random Variables

The creation of X-ray photons and their absorption and registration in a counting detector pixel are statistical processes that can be described by a Poisson random variable $Y_i$. In computed tomography the joint probability for the measurement of a complete sinogram $\mathbf{y}$ is thus the product over the individual random variables of all sinogram pixels:

$$P(\mathbf{Y} = \mathbf{y}) = \prod_{i} \frac{\bar{y}_i^{\,y_i}}{y_i!}\, e^{-\bar{y}_i}.$$

Here $i$ indexes all projection values that constitute the complete sinogram. In photon-counting CT, $y_i$ is the number of pulses registered in a single detector pixel and $\bar{y}_i$ the corresponding expectation value. The negative logarithm of $P$ yields the canonical negative log-likelihood function, which can be used as a measure for how well the taken data agree with their expectation values:

$$L(\bar{\mathbf{y}}) = \sum_{i} \left( \bar{y}_i - y_i \ln \bar{y}_i \right).$$

We use bold letters to indicate vector quantities. Terms that are constant with respect to $\bar{\mathbf{y}}$ have been omitted, since the goal of statistical reconstruction is to find the expectation values $\bar{\mathbf{y}}$ that best fit the measured data, that is, the minimum of $L$, which is not altered by such constant terms.
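As a minimal numerical sketch (our own helper, not the authors' implementation), the negative Poisson log-likelihood with constant terms dropped can be evaluated directly:

```python
import numpy as np

def neg_log_likelihood(y, y_bar):
    # Negative Poisson log-likelihood of measured counts y given their
    # expectation values y_bar, with terms constant in y_bar dropped:
    #   L(y_bar) = sum_i (y_bar_i - y_i * ln(y_bar_i))
    return np.sum(y_bar - y * np.log(y_bar))
```

The function attains its minimum exactly when the expectation values match the measured counts, which is the sense in which statistical reconstruction seeks the best-fitting image.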

In multienergy imaging one can formulate separate log-likelihood functions $L_b$ for all sinograms, each associated with one of the $B$ energy bins provided by a PCD. If all random variables are statistically independent, which we assume here, an appropriate objective function for minimization is the sum over all negative log-likelihood functions:

$$L(\bar{\mathbf{y}}) = \sum_{b=1}^{B} \sum_{i} \left( \bar{y}_{ib} - y_{ib} \ln \bar{y}_{ib} \right).$$

The statistical independence holds true only in the case of an ideal photon-counting detector. In realistic PCDs, dependencies between energy bins are introduced by K-escape, pixel crosstalk, electronic noise, and pulse pileup. Our proposed algorithm accounts for correlations introduced by K-escape, pixel crosstalk, and electronic noise by considering the detector's response function, whereas pulse pileup is corrected in a preprocessing step directly on the sinogram raw data.

In general, a minimization of the log-likelihood function alone leads to very noisy images. To compensate for this, a regularization term (prior) is added to the objective function. It incorporates a priori knowledge about the material images that are to be reconstructed. Commonly used priors suppress small differences between neighboring pixels but preserve contrast edges. They usually take the form

$$R_m(\mathbf{f}_m) = \sum_{j=1}^{N} \sum_{k \in \mathcal{N}_j} \psi_m\!\left( f_{mj} - f_{mk} \right),$$

where $N$ is the number of image pixels and $k$ labels the neighboring pixels $\mathcal{N}_j$ of a considered image pixel $j$. The index $m$ labels the material images that are to be reconstructed. We require the penalty functions $\psi_m$ to be strictly convex and twice continuously differentiable with respect to $\mathbf{f}$. This requirement is necessary [16] to allow parallelization while assuring convergence of the final algorithm. We implemented a prior suggested by Green [22]. The computational effort for this prior is higher compared to frequently utilized quadratic priors, but since it is a differentiable approximation of the Huber prior [23] it can better preserve contrast edges. The addition of the prior term to the negative log-likelihood function yields

$$\Phi(\mathbf{f}) = L\!\left(\bar{\mathbf{y}}(\mathbf{f})\right) + \boldsymbol{\beta} \cdot \mathbf{R}(\mathbf{f})$$

as the new objective function, with the scalar product

$$\boldsymbol{\beta} \cdot \mathbf{R}(\mathbf{f}) = \sum_{m} \beta_m R_m(\mathbf{f}_m).$$

The parameter $\beta_m$ governs the strength of regularization for the $m$th material image. Moreover, we made use of the fact that the number of pulses registered in a sinogram pixel depends on the fractions $\mathbf{f}$ of materials constituting the scanned object; that is, $\bar{\mathbf{y}} = \bar{\mathbf{y}}(\mathbf{f})$. The dependence is given by Lambert-Beer's law in (7) and (9) as described below.
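As an illustrative sketch (not the authors' code, and assuming the log-cosh form commonly attributed to Green), an edge-preserving penalty applied to neighboring-pixel differences of one 2D material image could look like:

```python
import numpy as np

def green_penalty(diff, delta=1.0):
    # Log-cosh penalty: quadratic for |diff| << delta, roughly linear for
    # |diff| >> delta, and twice continuously differentiable everywhere,
    # so it approximates the Huber prior while preserving contrast edges.
    return delta * np.log(np.cosh(diff / delta))

def regularization(f, beta=0.1, delta=1.0):
    # Sum the penalty over horizontal and vertical neighbor differences
    # of a single 2D material image f (one term per neighboring pair).
    r = green_penalty(np.diff(f, axis=0), delta).sum()
    r += green_penalty(np.diff(f, axis=1), delta).sum()
    return beta * r
```

A flat image incurs no penalty, while a sharp edge is penalized approximately linearly rather than quadratically, which is why it suppresses small pixel-to-pixel noise without washing out contrast edges.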

###### 2.1.2. Physical Model: Polyenergetic Lambert-Beer’s Law

The goal of computed tomography is to reconstruct an attenuation map describing the interior of a scanned object. The attenuation map cannot be measured directly, but only via projections of the object. The link between the attenuation map and its projections is given by Lambert-Beer's law. This model correctly reflects the physical behavior for a monoenergetic X-ray source. But in clinical practice X-rays are created by X-ray tubes which emit quanta distributed over a continuous spectrum of energies. Image reconstructions based on monoenergetic Lambert-Beer's law, like filtered back-projection (FBP), neglect the polychromatic nature of the X-rays and can therefore lead to serious beam-hardening artifacts that degrade image quality. The artifacts need to be corrected, which usually happens in a preprocessing step. To avoid their generation already in the reconstruction process, it is necessary to know the materials composing the object and to integrate Lambert-Beer's law for polyenergetic X-rays (cf. [15]),

$$\bar{y}_{ib} = \int \mathrm{d}E \, S_b(E) \exp\!\left( - \sum_{j} A_{ij}\, \mu_j(E) \right), \tag{7}$$

into the reconstruction algorithm. In (7), the exponent $\sum_j A_{ij}\,\mu_j(E)$ is the discretized line integral of the attenuation map along the $i$th ray, and $S_b(E)$ is the spectrum the $b$th energy bin is sensitive to. For realistic PCDs, $S_b(E)$ can be calculated from the tube spectrum $w(E)$ via

$$S_b(E) = w(E) \int_{E_b}^{E_{b+1}} \mathrm{d}E'\, R(E', E), \tag{8}$$

where $R(E', E)$ is the normalized detector response function. It states the probability of the detector assigning energy $E'$ to a measured photon with true energy $E$. The system matrix element $A_{ij}$ governs the contribution of the $j$th image pixel to the $i$th projection, so it implicitly contains the scanner geometry.
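A minimal numerical sketch of this polyenergetic forward model for a single ray and energy bin, using hypothetical array names and a simple Riemann sum over energy, might read:

```python
import numpy as np

def expected_counts(S_b, mu, path_lengths, dE=1.0):
    # Polyenergetic Lambert-Beer model for one ray i and one energy bin b:
    #   y_bar = integral dE S_b(E) * exp(-sum_j A_ij * mu_j(E))
    # S_b:          effective bin spectrum sampled on an energy grid (1D)
    # mu:           attenuation mu_j(E) of traversed pixels, (n_pixels, n_E)
    # path_lengths: intersection lengths A_ij of the ray with each pixel (1D)
    line_integral = path_lengths @ mu          # shape (n_E,)
    return float(np.sum(S_b * np.exp(-line_integral)) * dE)
```

With all path lengths zero the attenuation vanishes and the unattenuated spectrum is integrated, which is a quick sanity check for such a forward model.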

###### 2.1.3. Material Composition Model

Neglecting Rayleigh scattering, there are two physical effects relevant for the attenuation in clinical CT examinations: Compton scattering and the photoelectric effect. This fact would limit the number of materials separable with energy-resolved CT to two. Fortunately, due to the unique absorption edges of each chemical element, more than two materials can be separated, if at least $M - 2$ of the $M$ materials exhibit one or more individual absorption edges in the range of the applied X-ray spectrum and all of them are mutually exclusive regarding their constituent chemical elements. Under that assumption, we can decompose each material into a linear combination of $M$ basis materials with their respective energy-dependent attenuation coefficients $\mu_m(E)$:

$$\mu_j(E) = \sum_{m=1}^{M} f_{mj}\, \mu_m(E). \tag{9}$$

The number of materials forming the material basis may not exceed the number of available spectrally different data sets, to guarantee that the system of equations is not underdetermined. The choice of the material basis must depend on the composition of the scanned object and is crucial for the accuracy of the composition estimation. A suitable choice for clinical applications should include water, since it resembles well the attenuation behavior of the majority of human tissue [24–26]. Additional basis materials should be chosen depending on the imaging task. Calcium or a suitable mixture of materials could be included if osseous body parts are to be identified. If an injected iodine-based contrast agent is to be identified, which is frequently used in clinical CT examinations to enhance the contrast of cancerous tissue, iodine should be included.
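The basis decomposition itself is a linear system. As a toy sketch with illustrative placeholder attenuation values (not measured data; a real implementation would use tabulated coefficients for water and iodine), two materials can be recovered from two energy samples:

```python
import numpy as np

# Illustrative (not measured) attenuation coefficients of two basis
# materials, each sampled at the same two energies.
mu_water  = np.array([0.22, 0.18])   # 1/cm, placeholder values
mu_iodine = np.array([12.0, 4.5])    # 1/cm, placeholder values

# Basis matrix: column m holds mu_m(E) at the measurement energies.
B = np.column_stack([mu_water, mu_iodine])

def decompose(mu_measured):
    # Solve mu(E) = sum_m f_m * mu_m(E) for the material fractions f_m
    # (two materials, two energies: a square, well-posed linear system).
    return np.linalg.solve(B, mu_measured)
```

Mixing, say, 95% water-like tissue with 5% iodine and decomposing the resulting attenuation recovers the original fractions; with nearly parallel basis spectra the matrix becomes ill-conditioned, which is the numerical face of the indistinguishability discussed below.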

A correct and reliable separation of materials requires a different absorption behavior of each basis material in the range of the applied tube spectrum. If attenuation coefficients of different materials have the same energy dependency in the energy range covered by the spectrum, those materials cannot be distinguished by means of spectrally resolved CT.

###### 2.1.4. Optimization Transfer Principle

The proposed algorithm applies the optimization transfer principle introduced by De Pierro [16, 27]. According to this principle, the objective function is locally approximated by surrogate functions that are easier to minimize (cf. Figure 1). In fact, we will approximate the objective function by parabolic surrogates whose minima are known analytically. A new surrogate function $q$ is generated in each iteration step $n$. The following conditions imposed on a surrogate are sufficient [18] to guarantee convergence of the algorithm:

$$q\!\left(\mathbf{f}; \mathbf{f}^{(n)}\right) \ge \Phi(\mathbf{f}) \quad \forall \mathbf{f},$$
$$q\!\left(\mathbf{f}^{(n)}; \mathbf{f}^{(n)}\right) = \Phi\!\left(\mathbf{f}^{(n)}\right), \tag{10}$$
$$\nabla q\!\left(\mathbf{f}^{(n)}; \mathbf{f}^{(n)}\right) = \nabla \Phi\!\left(\mathbf{f}^{(n)}\right).$$

Equations (10) ensure that each surrogate always lies completely above the objective function and is tangential to it at the current iteration step.
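The principle can be demonstrated on a toy 1D objective (our own example, not from the paper): a parabolic surrogate whose curvature $c$ bounds the objective's second derivative satisfies the conditions above, and repeatedly minimizing it descends monotonically to the minimizer.

```python
import math

def phi(x):
    # Toy strictly convex objective standing in for the penalized negative
    # log-likelihood; its second derivative is bounded above by 1/4.
    return math.log(1.0 + math.exp(x)) - 0.3 * x

def phi_prime(x):
    return 1.0 / (1.0 + math.exp(-x)) - 0.3

def mm_minimize(x=0.0, c=0.25, steps=60):
    # Parabolic surrogate at the current iterate x:
    #   q(t; x) = phi(x) + phi'(x) (t - x) + (c/2) (t - x)^2.
    # Because c >= sup phi'', q lies above phi everywhere and touches it at
    # x, so minimizing q (closed form: t = x - phi'(x)/c) never increases
    # phi -- the essence of optimization transfer.
    for _ in range(steps):
        x = x - phi_prime(x) / c
    return x
```

For this objective the exact minimizer is $x^* = \ln(0.3/0.7)$, and each surrogate minimization moves the iterate toward it while the objective value decreases monotonically.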