Computational and Mathematical Methods in Medicine
Volume 2015, Article ID 638568, 12 pages
http://dx.doi.org/10.1155/2015/638568
Research Article

Adaptively Tuned Iterative Low Dose CT Image Denoising

1Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada M5S 3G9
2Joint Department of Medical Imaging, Toronto General Hospital, University Health Network, Toronto, ON, Canada M5G 2N2
3Department of Electrical and Computer Engineering, Ryerson University, Toronto, ON, Canada M5B 2K3

Received 23 January 2015; Revised 2 May 2015; Accepted 3 May 2015

Academic Editor: Hugo Palmans

Copyright © 2015 SayedMasoud Hashemi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these parameters should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction.

1. Introduction

While X-ray computed tomography (CT) enables ultrafast acquisition of patient images with excellent spatial resolution, the dose needed to achieve diagnostic image quality can result in a significant increase in the risk of developing cancer [1]. Consequently, low-dose CT imaging is clinically desired and has been under investigation for several years. Lowering the radiation dose may seriously degrade diagnostic performance or undermine physician confidence by producing noisier images [2, 3]. Several different algorithmic approaches have been proposed to reduce the effect of noise in CT images, including projection data denoising [4–6], optimizing the reconstruction algorithms to include the noise statistics [7–9], and CT image denoising [10–12]. The latter is the focus of this paper, where an adaptively tuned iterative CT image denoising algorithm is presented.

The main source of noise in X-ray projection data is quantum noise caused by statistical fluctuations of the X-ray quanta reaching the detectors, so that the CT projection noise follows the Poisson distribution [3]. However, because of the different reconstruction algorithms and signal processing steps used in CT reconstruction, the noise statistics of the processed CT images are usually unknown, hard to model, and spatially varying. Moreover, directional noise in the form of streak artifacts is present in many CT images. As a result, incorporating accurate noise statistics into image-based CT denoising can be very challenging. When denoising is based on the projection data and its statistics, other difficulties arise. Specifically, such denoising methods and the associated iterative reconstructions require access to the CT raw data, which is often unavailable. Furthermore, these methods have a high computational complexity, making it challenging to obtain a final image in a reasonable length of time with the available computational resources. On the other hand, image-based denoising methods are fast and can be applied directly to the CT images without changing the clinical workflow.

A simplified noise model is usually used in image based denoising algorithms, in which, following the Central Limit Theorem (CLT) [13], the final noise in each voxel follows a Gaussian distribution [11, 14–17]. The CLT can be used since each voxel in CT images is computed by adding values from many different projections. With this assumption, a noisy CT image can be modeled by
$$y = x + n, \quad (1)$$
where $x$ is the noiseless image and $n$ is a zero mean additive anisotropic Gaussian noise with variance $\sigma^2$, which varies with the pixel location and its value.

Different image based denoising algorithms have been used to estimate the noiseless CT images, such as anisotropic diffusion [10], total variation (TV) [18], bilateral filtering [19], or wavelet-based techniques [11, 12, 20]. These methods can usually be formulated as an unconstrained Lagrangian multiplier optimization problem [18, 21–23], that is,
$$\hat{x} = \arg\min_{x} \frac{1}{2}\|y - x\|_2^2 + \lambda R(x), \quad (2)$$
in which $\lambda$ is a regularization parameter that controls the tradeoff between the data fidelity term $\frac{1}{2}\|y - x\|_2^2$ and the regularization term $R(x)$. Different regularization terms, $R(x)$, lead to different denoising methods. For example, in TV-based methods [24–26] $R(x) = \sum_{i,j}\sqrt{(\nabla_h x)_{i,j}^2 + (\nabla_v x)_{i,j}^2}$, where $\nabla_h$ and $\nabla_v$ are the gradients in the horizontal and vertical directions, and in wavelet soft thresholding methods $R(x) = \|Wx\|_1$, where $W$ is the 2D wavelet transform [27].
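As a concrete illustration of the wavelet soft thresholding case, the following sketch applies the closed-form minimizer of (2) with $R(x) = \|Wx\|_1$, namely soft thresholding of the wavelet detail coefficients at level $\lambda$. It assumes an (approximately) orthonormal wavelet, here "db4" via the PyWavelets package, and is illustrative code rather than the implementation used in this paper.

```python
import numpy as np
import pywt

def wavelet_soft_denoise(y, lam, wavelet="db4", level=3):
    """Closed-form minimizer of 0.5*||y - x||^2 + lam*||W x||_1 for an
    orthonormal 2D wavelet W: soft-threshold the detail coefficients of y."""
    coeffs = pywt.wavedec2(np.asarray(y, dtype=float), wavelet, level=level)
    out = [coeffs[0]]  # keep the approximation band untouched
    for details in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, lam, mode="soft") for c in details))
    return pywt.waverec2(out, wavelet)
```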

There is a strong dependence of the quality of the result on the regularization parameter. It is a challenging task to find the regularization parameter that provides the best balance between signal smoothing and feature preservation [26]. Specifically, if $\lambda$ is not appropriately adjusted, the optimization is trapped in a local minimum; that is, if $\lambda$ is too small, noise is only partially removed and, if it is too large, the image may be oversmoothed [25]. Some methods have been proposed to update the regularization parameters iteratively, such as use of the discrepancy principle [28], generalized cross-validation [26], and the L-curve [29]. These methods fail in certain situations, are problem specific, and generally increase the computational complexity of the algorithms.

One straightforward approach used in many algorithms is to use a heuristic value combined with a criterion to stop the algorithm before the estimated signal is oversmoothed. Different stopping criteria have been proposed for iterative denoising problems. For instance, Akkoul et al. [30] used a switching median filter to stop the iterative process when the number of changed pixels between denoising iterations reaches a minimum. In [20, 31] the statistical properties of high frequency wavelet subbands were used to stop the TV iterations. However, such methods are unable to differentiate oversmoothed data from well-denoised data. As a result, to avoid oversmoothing, the updating steps are typically chosen to be small, which decreases the convergence speed.

In this paper, a noise confidence region evaluation (NCRE) method is used to address the regularization selection and the algorithm stopping problems. It adaptively updates the regularization parameters at the end of each iteration by validating the result of that iteration. The algorithm stops when the statistical properties of the denoising residual resemble those of the additive white Gaussian noise. Using NCRE, a new iterative block matching and 3D filtering (BM3D) method is proposed, which outperforms BM3D [32] itself. The proposed method is compared with anisotropic diffusion denoising, which is generally regarded as a standard denoising method in CT imaging [10], nonlocal mean [33–35], and BM3D [32]. In addition, we study the noise properties of CT images and show that the noise in small image blocks has an additive white Gaussian model, which justifies the success of the nonlocal based denoising algorithms in image based CT denoising [33, 36–39].

2. Problem Formulation and CT Noise Properties

Recently, it has been shown that nonlocal patch based algorithms outperform others in CT image denoising [33, 34, 36–41]. For example, in [41] a nonlocal means (NLM) based method, which takes advantage of the presence of repeating structures in a given image, was compared with a principal component analysis based denoising method and a highly constrained backprojection method. It was shown that the NLM method outperformed both methods in terms of the contrast to noise ratio, noise standard deviation, and squared error. Another class of algorithms looks for similar blocks in the whole 2D image and stacks them together in 3D arrays. Denoising is then performed through transform domain shrinkage of the 3D arrays. An algorithm called K-SVD [37, 38, 42] uses these 3D patches to train an optimum dictionary. This method, which assumes that each 3D block can be sparsely represented by the trained dictionary atoms, uses shrinkage algorithms to denoise the patches.

Our proposed algorithm makes use of the block matching and 3D filtering (BM3D) technique [32, 36]. This is a noniterative denoising method that currently outperforms many newer algorithms [40]. It is composed of two major filtering steps. In both steps collaborative filtering is utilized, which itself has four stages: grouping similar patches with a reference patch, calculation of the 3D wavelet coefficients of each stack of patches, denoising the wavelet coefficients (thresholding in step 1 or Wiener filtering in step 2), and recovering the denoised image by calculating the inverse 3D wavelet transform. The BM3D approach aims to denoise the patches by Wiener filtering, which is done in step 2. To find the best matches to similar patches in step 2 and to reliably estimate the Wiener coefficients, the method requires a reliable estimate of the noiseless image, which is the main purpose of step 1. The input to this step, which is a hard thresholding block, is the 3D noisy wavelet coefficients of similar patches located by block matching applied to the available noisy image. A hard threshold with a heuristically determined value is used in step 1. The resulting denoised coefficients are then transformed back to the spatial domain to be used as the initial estimate of the noiseless data for calculating the Wiener filter coefficients.
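For clarity, the grouping stage of collaborative filtering can be sketched as follows. This is a simplified, single-reference-patch illustration; the patch size, search radius, and group size are arbitrary example values, not the settings of [32] or of this paper.

```python
import numpy as np

def group_similar_patches(img, ref_yx, patch=8, search=19, max_blocks=16):
    """Collect the patches in a local search window that are closest (in L2
    distance) to the reference patch and stack them into a 3D array."""
    H, W = img.shape
    ry, rx = ref_yx
    ref = img[ry:ry + patch, rx:rx + patch]
    candidates = []
    for y in range(max(0, ry - search), min(H - patch, ry + search) + 1):
        for x in range(max(0, rx - search), min(W - patch, rx + search) + 1):
            blk = img[y:y + patch, x:x + patch]
            candidates.append((float(np.sum((blk - ref) ** 2)), y, x))
    candidates.sort(key=lambda t: t[0])            # most similar blocks first
    coords = [(y, x) for _, y, x in candidates[:max_blocks]]
    stack = np.stack([img[y:y + patch, x:x + patch] for y, x in coords])
    return stack, coords                           # 3D group and its grouping info
```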

To denoise the images, patch based methods generally use a model based on
$$y_{S_k} = x_{S_k} + n_{S_k}, \quad k = 1, \ldots, K, \quad (3)$$
in which $S_k$ is the patch grouping information, $K$ is the number of 3D patches, $y_{S_k}$ denotes the noisy patches, $x_{S_k}$ denotes the noiseless 3D patches, and $n_{S_k}$ is the noise in each 3D patch. Conventional BM3D uses an independent identical additive Gaussian noise model in which the noise variances are similar in all patches. Using this assumption, its regularization term is a nonlocal wavelet $\ell_1$-norm $R(x) = \sum_{k}\|W_{3D} x_{S_k}\|_1$ [43], in which $W_{3D}$ is the 3D wavelet transform. Using this regularization in (2), BM3D solves the following optimization problem in its first step:
$$\hat{x}_{S_k} = \arg\min_{x_{S_k}} \frac{1}{2}\|y_{S_k} - x_{S_k}\|_2^2 + \lambda \|W_{3D} x_{S_k}\|_1, \quad k = 1, \ldots, K. \quad (4)$$
In our approach, we modify the BM3D formulation for CT image denoising by incorporating a more realistic noise model, to include the nonstationarity of the noise and its dependence on the position and value of the pixels. Our proposed method uses the noise properties of the patches $n_{S_k}$, studied in the Appendix, to improve the performance of BM3D for CT image denoising.
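The first (hard thresholding) step acting on one 3D group can then be sketched as below. A separable 3D DCT is used here as a stand-in for the 3D wavelet transform $W_{3D}$, and the threshold multiplier is a hypothetical example value rather than the tuned one used in BM3D or in this paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def collaborative_hard_threshold(stack, sigma, thr_mult=2.7):
    """Step-1-style collaborative filtering of a 3D group: forward 3D
    transform, hard thresholding of the coefficients at thr_mult*sigma,
    inverse transform, plus a simple aggregation weight."""
    coef = dctn(stack, norm="ortho")
    kept = np.abs(coef) > thr_mult * sigma
    estimate = idctn(coef * kept, norm="ortho")
    weight = 1.0 / max(int(kept.sum()), 1)   # fewer retained coefficients -> larger weight
    return estimate, weight
```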

2.1. Noise in CT Images

Although a reasonable statistical model for the CT projection data is the independent Poisson distribution [3], it has been shown that the corrected polyenergetic X-ray projections can be modeled more accurately by a Gaussian distribution with the following relationship between its mean and variance:
$$\sigma^2(\theta_i, d_j) = f_j \exp\left(\frac{\bar{p}(\theta_i, d_j)}{\eta}\right) + \sigma_e^2, \quad (5)$$
where $\bar{p}(\theta_i, d_j)$ is the mean and $\sigma^2(\theta_i, d_j)$ is the variance of the projections at the $i$th projection angle ($\theta_i$) and the $j$th detector bin, whose distance from the detector center is $d_j$, $\eta$ is a scaling factor, $\sigma_e^2$ is the electronic noise variance, and $f_j$ is a parameter adaptive to different detector channels [6]. During the reconstruction process the noise distribution is changed by the reconstruction algorithm and filters. As a result, due to the complicated dependence of the noise on the scan parameters and on spatial position, the noise distribution in the reconstructed CT images is usually very difficult to determine (a detailed study of the noise model in CT can be found in [44, 45]). Using the discrete filtered backprojection relation
$$x(r, \varphi) = \frac{\pi \Delta d}{M} \sum_{i=1}^{M} \sum_{j} p(\theta_i, d_j)\, h\big(r\cos(\theta_i - \varphi) - d_j\big), \quad (7)$$
the noise variance in the reconstructed images can be described by [38]
$$\sigma_x^2(r, \varphi) = \left(\frac{\pi \Delta d}{M}\right)^2 \sum_{i=1}^{M} \sum_{j} \sigma^2(\theta_i, d_j)\, h^2\big(r\cos(\theta_i - \varphi) - d_j\big), \quad (8)$$
where $\Delta d$ is the distance between the centers of two adjacent detectors, $p(\theta_i, d_j)$ is the parallel projection at the $i$th angle and the $j$th detector bin, $M$ is the number of projection angles, and $h$ is the ramp filter in the spatial domain. Equation (8) can be interpreted as the backprojection of the projection noise variances, making the image noise nonstationary, object dependent, and correlated. Moreover, since the variance of each voxel is the summation of the variances from many angles, if the variance in one direction is significantly larger than that in another direction, then the variances along that direction will be more correlated than those in other directions [46], producing what is known as a streak artifact in the reconstructed images. It should be noted that this effect is not included in our model. In (8), the squared ramp filter term is symmetric about the center of rotation, while the projection variance depends on the attenuation of the media through which the X-ray beams pass. Therefore, we assert that the noise variances of small neighborhoods with similar attenuations and similar radial distances from the center of rotation should be very similar. The results of experimental tests of this assertion are presented in the Appendix.
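The nonstationary, object dependent character of the image noise implied by (8) can be checked numerically with a simple Monte Carlo experiment, sketched below for a parallel-beam geometry using scikit-image (version 0.19 or later API assumed). The signal-dependent sinogram noise level is an arbitrary illustrative choice, and the analytic propagation of variances is replaced by repeated noisy reconstructions.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon   # scikit-image >= 0.19 API assumed

# Monte Carlo check of the nonstationary image noise implied by eq. (8):
# noise injected in the sinogram domain turns into a spatially varying
# variance map after filtered backprojection.
rng = np.random.default_rng(0)
img = shepp_logan_phantom()[::4, ::4]                 # small phantom for speed
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = radon(img, theta=theta)
sigma_sino = 0.05 * (1.0 + sino)                      # toy signal-dependent noise level

recons = [iradon(sino + rng.normal(0.0, sigma_sino), theta=theta,
                 filter_name="ramp") for _ in range(50)]
var_map = np.var(np.stack(recons), axis=0)            # object-dependent, nonuniform
```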

2.2. Modified Formulation

Based on the above discussion and our experimental results presented in the Appendix, it can be assumed that the noise in the 3D stacks of similar patches of the CT images follows a white additive Gaussian distribution, with different variances $\sigma_k^2$ for different 3D patches. Using (3), the modified optimization problem used in this paper is given by
$$\hat{x}_{S_k} = \arg\min_{x_{S_k}} \frac{1}{2}\|y_{S_k} - x_{S_k}\|_2^2 + \lambda_k \|W_{3D} x_{S_k}\|_1, \quad k = 1, \ldots, K, \quad (9)$$
in which $\{\lambda_k\}$ is a set of regularization parameters that are functions of $\sigma_k$. To improve the regularization parameter selection, we use an adaptive updating method based on an evaluation of the noise statistics. It automatically avoids oversmoothing without lowering the convergence speed. The NCRE method validates the statistical properties of the error residuals at the end of each iteration and categorizes the result as well denoised, partially denoised, or oversmoothed. This information is then used to update the parameters in the next iteration or to stop the algorithm at the end of the iteration for which the similarity between the error residual and Gaussian noise is satisfied.

3. Proposed Denoising Method: BM3D-NCRE

If $\hat{x}^{(i)}$ denotes the signal recovered at the $i$th iteration, the denoising residual at the end of the $i$th iteration can be expressed by $r^{(i)} = y - \hat{x}^{(i)}$, which, ideally, should be the noise in (3), $n$. Here, we provide a quantitative measure that verifies the similarity between the structure of $r^{(i)}$ and that of $n$.

3.1. Noise Confidence Region Evaluation (NCRE)

In [47] it was shown that the following function of a zero mean white Gaussian noise $n$ with length $N$, evaluated for any given scalar value of $\tau$,
$$z(n, \tau) = \frac{1}{N} \sum_{m=1}^{N} \mathbf{1}\big(|n_m| \le \tau\big),$$
is equivalent to sorting the absolute values of the noise elements $|n_m|$. The expected value of this function is $G(\tau)$ and its variance is $G(\tau)\big(1 - G(\tau)\big)/N$, where $G(\tau) = 2\Phi(\tau/\sigma_n) - 1$, $\sigma_n$ is the noise standard deviation, and $\Phi$ is the cumulative distribution function (CDF) of the standard Gaussian distribution. Therefore, $z(n, \tau)$ is bounded by the following lower ($L$) and upper ($U$) values:
$$L(\tau) = G(\tau) - \kappa\sqrt{\frac{G(\tau)\big(1 - G(\tau)\big)}{N}}, \qquad U(\tau) = G(\tau) + \kappa\sqrt{\frac{G(\tau)\big(1 - G(\tau)\big)}{N}},$$
with a probability determined by the choice of $\kappa$. If the sorted absolute values of a signal lie between these two boundaries for a large enough $N$, that signal will follow a white Gaussian distribution with a confidence probability close to one.
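A minimal numerical sketch of this confidence-band test is given below; it compares the fraction of residual samples with magnitude below each threshold $\tau$ against the Gaussian expectation $G(\tau)$ and a band of $\pm\kappa$ standard deviations. The values of $\kappa$, the $\tau$ grid, and the 90% coverage rule are illustrative choices, not the exact settings of the paper.

```python
import numpy as np
from scipy.stats import norm

def ncre_region(residual, sigma, kappa=3.0, n_taus=200, coverage=0.9):
    """Compare the empirical fraction of |residual| values below each tau with
    the Gaussian expectation G(tau) and its +/- kappa-std confidence band.
    Returns 'II' (noise-like), 'I' (noise only partially removed) or
    'III' (over-smoothed: residual contains image structure)."""
    r = np.abs(np.ravel(residual)).astype(float)
    N = r.size
    taus = np.linspace(0.0, 4.0 * sigma, n_taus + 1)[1:]
    G = 2.0 * norm.cdf(taus / sigma) - 1.0            # P(|n| <= tau), n ~ N(0, sigma^2)
    band = kappa * np.sqrt(G * (1.0 - G) / N)
    lower, upper = G - band, G + band
    z = np.array([(r <= t).mean() for t in taus])     # empirical counterpart of G
    if np.mean((z >= lower) & (z <= upper)) >= coverage:
        return "II"
    # Below the band: fewer small residuals than pure noise would give, so the
    # residual still carries signal (over-smoothing). Above the band: residual
    # is smaller than the expected noise, so noise was only partially removed.
    return "III" if np.mean(z < lower) >= np.mean(z > upper) else "I"
```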

As shown in Figure 1, these boundaries divide the space into three regions. At the end of each iteration the sorted absolute values of the residual $r^{(i)}$ are calculated. If this sequence falls into Region II (in our proposed algorithm, being a subset of a region is evaluated by having a high fraction of $z(r^{(i)}, \tau)$ in that region; for example, in our simulations this fraction is 90%), it means that the residual has a Gaussian-like structure and denoising stops. On the other hand, if the denoising at the $i$th iteration has removed not only the noise but also parts of the noiseless data itself, $r^{(i)}$ will contain some of the image information, making its samples larger than Gaussian noise. Therefore, for a specific value of $\tau$, $z(r^{(i)}, \tau)$ (the average number of residual samples with absolute values smaller than $\tau$) is smaller than $L(\tau)$ and falls in Region III. This enforces continuation of the denoising to the $(i+1)$th step, with the regularization parameters changed such that the denoising algorithm extracts less noise in the next iteration, that is, by decreasing $\lambda_k$. If $z(r^{(i)}, \tau)$ falls in Region I, it indicates that the noise is only partially removed. In this case the algorithm continues to the $(i+1)$th step and changes the regularization parameters such that more noise is extracted by the denoising algorithm, that is, by increasing $\lambda_k$. In summary, at each iteration when $z(r^{(i)}, \tau)$ is in either Region I or Region III, the regularization parameter is updated such that it moves toward Region II. The size of the update can be tuned as a fixed or an adaptively changing variable based on the Euclidean distance between $z(r^{(i)}, \tau)$ and the boundaries $L(\tau)$ and $U(\tau)$. In our proposed method a global $\lambda$ is used (similar to [33, 41]), which is updated based on the placement of the denoising residuals of the patches in the different Regions I–III. The algorithm is stopped when the denoising residual of the soft tissue around the lung is placed in Region II. An example of a soft tissue mask for a thoracic phantom is shown in Figure 2. The pixels in this region have very similar CT# and have almost the same radial distances from the center of rotation. Therefore, it can be assumed that the noise in this region has a white Gaussian distribution.
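The resulting parameter update can be summarized by a rule of the following form; the multiplicative step size is a hypothetical choice, whereas the paper tunes the update adaptively.

```python
def update_lambda(lam, region, step=0.1):
    """Adjust the global regularization weight from the NCRE region label:
    Region I (noise left) -> increase lambda, Region III (over-smoothed) ->
    decrease lambda, Region II -> keep lambda and signal convergence."""
    if region == "I":
        return lam * (1.0 + step), False
    if region == "III":
        return lam * (1.0 - step), False
    return lam, True      # Region II: stop the iterations
```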

Figure 1: Three possible regions for the residual at the end of each iteration. If it lies in Region II (noise confidence region), denoising is stopped.
Figure 2: Noise statistics of the soft tissue region surrounding the lung. (a) Top: the thoracic phantom which is used to evaluate the noise characteristics; bottom: the soft tissue region of the phantom (the mask used in Algorithm 1). (b) The statistical distribution of the noise in the soft tissue region (blue experimental values) compared with a Gaussian distribution with the same variance (red dashed line).

Algorithm 1: Proposed Iterative Regularization Parameter Updating.

3.2. Summary of the Proposed Method: BM3D-NCRE

Algorithm 1 shows the proposed iterative updating scheme, in which $\odot$ denotes element-wise multiplication. The updating method uses a memory strategy for recovery of possible lost edges and fine details. This process is represented by
$$\hat{x}^{(i+1)} = (1 - \alpha)\,\hat{x}^{(i-1)} + (\alpha - \beta)\,\hat{x}^{(i)} + \beta\, \tilde{x}^{(i+1)},$$
in which $\tilde{x}^{(i+1)}$ is the output of the denoising step at the $(i+1)$th iteration, and $\alpha$ and $\beta$ are positive scalars chosen based on the conditions given in [48]. This stage of the algorithm was inspired by the second order iterative methods [48, 49] that improve the convergence rate of iterative methods.
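Under the assumption that the memory step takes the two-step (TwIST-like) form suggested by [48, 49], it can be sketched as follows; the values of alpha and beta are examples only, whereas the paper selects them according to the conditions in [48].

```python
def memory_update(x_prev, x_curr, x_denoised, alpha=1.8, beta=1.0):
    """Two-step memory combination: mix the latest denoiser output with the
    two previous iterates to recover fine details lost in a single pass."""
    return (1.0 - alpha) * x_prev + (alpha - beta) * x_curr + beta * x_denoised
```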

To denoise CT images, the fundamentals of BM3D were used in an iterative scheme: the output of the Wiener filter is a better estimate of the original image than the input to the Wiener filter from the first step. Therefore, this output can be fed back into the first step to provide better Wiener coefficients in the second iteration. The modified BM3D formulation (9) is used iteratively in Algorithm 1 to denoise the CT images, where NCRE adjusts the threshold values applied to the 3D wavelet coefficients of the first step. The parameters of the BM3D algorithm are chosen based on the ones that resulted in the best performance in [50]. The initial threshold value is set from the noise variances estimated independently for each 3D stack, using the median absolute deviation method described in [51]. In each iteration, if the sorted absolute values of the denoising residual of the soft tissues around the lung fall into Region I, the threshold value is increased and, if they fall into Region III, the threshold values are decreased, so that in the next iteration the residual moves toward Region II.
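Putting the pieces together, the overall iteration can be sketched at a high level as below. Here bm3d_step is a hypothetical callable standing in for the modified BM3D pass of (9), and ncre_region, update_lambda, and memory_update are the illustrative sketches given earlier; this is an outline of the flow, not the exact implementation.

```python
def bm3d_ncre(noisy, soft_tissue_mask, bm3d_step, sigma, lam0=1.0, max_iter=20):
    """High-level sketch of BM3D-NCRE: denoise, evaluate the residual over the
    soft-tissue mask with NCRE, adapt the regularization weight, and stop once
    the residual falls into Region II."""
    lam, x_prev, x_curr = lam0, noisy, noisy
    for _ in range(max_iter):
        x_new = bm3d_step(x_curr, lam)                        # BM3D-style pass
        region = ncre_region((noisy - x_new)[soft_tissue_mask], sigma)
        lam, converged = update_lambda(lam, region)
        if converged:
            return x_new
        x_prev, x_curr = x_curr, memory_update(x_prev, x_curr, x_new)
    return x_curr
```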

4. Results and Discussion

Three test methods were used to evaluate the performance of the proposed algorithm. The first method used a simulated Shepp-Logan phantom corrupted by adding Poisson noise to the fan beam X-ray projections. The images were reconstructed using the ifanbeam command in MATLAB. The number of unattenuated photons in each projection was set to a value that led to a noise variance similar to that of images reconstructed from a tube current of 50 mAs and a peak voltage of 120 kVp. White Gaussian noise with a standard deviation of 100 was added to all the projections to simulate the presence of electronic noise. The second method used a CATPHAN phantom (Phantom Laboratory, Greenwich, NY, USA). This is a standard phantom widely used for CT image quality evaluation and contains spheres of differing contrast as well as line pairs with differing spacing that can be used to test the spatial resolution. Ideally, the denoising algorithms should enable us to distinguish the smaller spheres with lower contrast in the low contrast slice and should keep the line pair resolution unchanged. The third method used axial chest CT images from a clinical patient. All three test methods used the same parameter settings in Algorithm 1. The parameters of BM3D were chosen to be similar to the ones proposed in [50], namely, the size of the blocks, the sliding step used to process every next reference block, the maximum number of similar blocks, and the size of the search neighborhood for full-search block matching.
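The projection noise simulation described here can be sketched as follows. The unattenuated photon count I0 is an illustrative placeholder, since the value used in the paper is not reproduced above; the electronic noise standard deviation of 100 matches the description.

```python
import numpy as np

def simulate_low_dose(line_integrals, I0=1.0e5, sigma_e=100.0, seed=0):
    """Add Poisson quantum noise to the transmitted counts plus Gaussian
    electronic noise, then convert back to noisy line integrals.
    I0 = 1e5 is an illustrative photon count, not the paper's value."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(I0 * np.exp(-line_integrals)).astype(float)
    counts += rng.normal(0.0, sigma_e, counts.shape)   # electronic noise
    counts = np.clip(counts, 1.0, None)                # guard the logarithm
    return -np.log(counts / I0)                        # noisy line integrals
```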

Figure 3 shows the mean square error (MSE) and the structural similarity index (SSIM) [52] resulting from successive iterations of BM3D-NCRE applied to the reconstructed noisy Shepp-Logan phantom images. The first iteration is equivalent to the result of BM3D. As shown, the MSE decreases and the SSIM increases in successive iterations to a point where the algorithm is stopped by falling into Region II. The results of denoising the Shepp-Logan phantom with BM3D and BM3D-NCRE are shown in Figure 4. As can be seen, the noise is removed more effectively by BM3D-NCRE. However, the streak artifacts are still visible in both denoised images.

Figure 3: Squared error (blue line) and the structural similarity index (SSIM) (green dashed line), showing the changes with each iteration when using BM3D-NCRE. The shading colors show the region in which the residuals are placed after each iteration. The algorithm stops when Region II is reached.
Figure 4: Shepp-Logan phantom simulations: (a) original phantom, (b) noisy reconstructed phantom, (c) denoised by BM3D, and (d) denoised by BM3D-NCRE.

In the second test, the CATPHAN phantom was scanned using a low dose (50 mAs, 120 kVp) and a high dose (300 mAs, 120 kVp) protocol. Image reconstructions were performed with a Toshiba Aquilion One CT using the proprietary lung kernel FC52 and the proprietary iterative reconstruction algorithm Adaptive Iterative Dose Reduction 3D (AIDR3D) [53]. The latter uses anisotropic diffusion denoising as its base to improve the image quality at each iteration. Our proposed denoising method was applied to the images reconstructed with the high spatial resolution filter algorithm, FC52. These are compared to the images reconstructed with AIDR3D and to the FC52 reconstructed images denoised by nonlocal mean and BM3D. The nonlocal mean package provided by Gabriel Peyre on the MathWorks File Exchange was used here (http://www.mathworks.com/matlabcentral/fileexchange/13619-toolbox-non-local-means). This package is based on the method described in [35]. The BM3D code is based on the package provided by Alessandro Foi on his homepage (http://www.cs.tut.fi/~foi/GCF-BM3D/). It should be noted that the parameters of nonlocal mean and BM3D were heuristically adjusted to achieve the best performance, based on visual inspection of the results. In addition, the parameters were adjusted to keep the spatial resolution in the line pair resolution slice the same. Figure 5 shows the line pair slice reconstructed by FC52, AIDR3D, and FC52 denoised by BM3D-NCRE, nonlocal mean, and BM3D. As can be seen, all these methods preserve the spatial resolution of the original image; the resolution was likewise not improved by the proprietary iterative reconstruction method used (AIDR3D).

Figure 5: Top: line pair slice of the CATPHAN phantom scanned with 50 mAs and 120 kVp (window width/window level = 400/60 HU). Bottom: red rectangular ROI of the images with the red line showing the cut-off line pair resolution (window width/window level = 400/500 HU). Left to right: image reconstructed with FC52 (STD = 64 HU) and reconstructed with AIDR3D (STD = 41 HU), FC52 denoised with the proposed method (STD = 22 HU), FC52 denoised with nonlocal mean (STD = 34 HU), and FC52 denoised with BM3D method (STD = 27 HU).

Figure 6 shows that the detectability of low contrast objects is improved with our method, outperforming that achieved by AIDR3D, nonlocal mean, and BM3D. The visibility of the spheres with higher contrast in the images denoised by BM3D is very close to that in the images denoised by BM3D-NCRE; however, the visibility is significantly lower for the spheres with lower contrast. Both BM3D and BM3D-NCRE outperform AIDR3D and nonlocal mean, while the number of visible spheres in the images denoised by nonlocal mean is almost the same as in the images reconstructed by AIDR3D.

Figure 6: Low contrast study using CATPHAN low contrast slice. Top: images scanned with 50 mAs/120 kVp and bottom: scanned with 300 mAs/120 kVp. Left to right: image reconstructed with FC52, reconstructed with AIDR3D, reconstructed with FC52 and denoised by BM3D-NCRE, reconstructed with FC52 and denoised by nonlocal mean, and reconstructed with FC52 and denoised by BM3D. In all images window width/window level = 100/70 HU.

The final comparison was made using a low dose (50 mAs, 120 kVp) lung CT of a patient reconstructed using FC52 and processed by anisotropic diffusion denoising, BM3D-NCRE, nonlocal mean, and BM3D. A single axial slice of the images is shown in Figure 7. As can be seen, anisotropic diffusion removes some fine details and reduces the contrast of the small features. Nonlocal mean, BM3D, and BM3D-NCRE outperform anisotropic diffusion denoising in terms of preserving the small structures. Comparing the results of nonlocal mean with BM3D-NCRE, it can be seen that the low contrast features are well preserved by BM3D-NCRE, while they are removed or their contrast is reduced in the image denoised by nonlocal mean; in addition, nonlocal mean does not remove the noise homogeneously across the image. Comparing BM3D with BM3D-NCRE, it can be seen that the image denoised by BM3D-NCRE has less noise and the noise removal is more homogeneous. It should be noted that the parameters of the anisotropic diffusion denoising were adjusted in such a way that the noise variance in the resulting images is the same as in the images denoised by BM3D-NCRE.

Figure 7: Comparison of the effects of anisotropic diffusion, nonlocal mean, BM3D, and BM3D-NCRE. (a) In the original image, the circular regions show the areas from which the noise variance is measured, the dashed rectangular region is the area shown in (f)–(j), and the average noise of the three regions is around 55 HU. (b) Denoised by anisotropic diffusion, noise is around 25 HU. (c) Denoised by BM3D-NCRE, noise is around 25 HU. (d) Denoised by nonlocal mean, noise is around 21 HU. (e) Denoised by BM3D, noise is around 28 HU. ((f)–(j)) The zoomed-in region shown by the dashed rectangle in image (a). ((k)–(n)) The difference between the original image and the image denoised by (k) anisotropic diffusion, (l) BM3D-NCRE, (m) nonlocal mean, and (n) BM3D. The window width/window level in (a)–(j) is 1600/−300 HU and is 100/0 HU in (k)–(n).

5. Conclusions

An iterative denoising scheme was proposed for low dose CT images, which adjusts the denoising parameters at each iteration based on the effect of the denoising in the previous iteration. Noise confidence region evaluation (NCRE) was used to compare the denoising residual with Gaussian noise in order to determine whether the denoising was effective, insufficient, or excessive. Based on this information the denoising parameters were adjusted for the next iteration. BM3D was used within the proposed iterative scheme. The phantom study showed that our proposed method improved low contrast detectability. The patient study demonstrated that the image was efficiently denoised and the visibility of small objects was preserved. However, it should be noted that the modified optimization model is not accurate when the electronic noise dominates the photon fluctuations, as can occur at very low doses. In addition, streak artifacts may still be present in images denoised by the proposed method.

Appendix

Phantom Study of the Noise Statistics in Reconstructed CT Images

The noise statistics in CT images were studied using a Toshiba phantom containing five circular regions of differing CT# (HU) and a Toshiba Aquilion One CT scanner (the procedure should be applicable to other scanners and can easily be repeated in a similar way). As shown in Figure 8, six regions are considered in the phantom: a region in the background (with a CT# of 0 HU) and the five regions inside the circles. Scanning was performed with eight different doses controlled by changing the tube current (mAs) at a fixed peak voltage of 120 kVp. The noise distribution in each region was compared with white Gaussian additive noise. As can be seen in Figure 9, the noise distribution matches white Gaussian noise with high accuracy, provided that the dose is sufficiently high, that is, greater than 50 mAs, as in our experiments. The noise distributions for four selected X-ray tube currents are shown in Figure 9. Figure 10 summarizes the noise variance changes for each region with different CT#'s as a function of the dose. When the tube current is less than 10 mAs, the electronic noise dominates the photon fluctuations, causing the measured CT# to be inaccurate. Moreover, the noise variances of the six regions differ, even for the same tube currents. More realistically, we also studied the noise statistics of the soft tissue surrounding the lung in a commercially available adult thoracic anthropomorphic phantom (Lungman, Kyoto Kagaku, Japan). These pixels have very close CT# values and almost the same radial locations. It was observed that the noise in these regions follows a Gaussian distribution with acceptable accuracy. Two examples are shown in Figure 11. Based on these results, we assume that the noise in small neighborhoods of the image with similar CT# follows a white Gaussian distribution. This assumption enables patch based methods, such as BM3D and K-SVD, to use a Gaussian noise model in the similar patches of the image and to perform the denoising on each 3D stack independently.
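A simple way to reproduce this kind of check on any scanner is sketched below: extract a uniform ROI, fit a Gaussian, and apply a normality test. The D'Agostino-Pearson test is used here as one reasonable choice; it is not necessarily the statistic used in the paper.

```python
import numpy as np
from scipy import stats

def roi_noise_check(image, mask):
    """Summarize the noise in a uniform ROI: mean, standard deviation, and a
    normality-test p-value (large p => no evidence against a Gaussian model)."""
    vals = np.asarray(image, dtype=float)[mask]
    mu, sd = vals.mean(), vals.std(ddof=1)
    _, p_value = stats.normaltest(vals)
    return mu, sd, p_value
```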

Figure 8: Phantom scanned with eight different X-ray source currents with the same peak voltage: top left to right 5, 10, 25, and 50 mAs and bottom left to right 100, 150, 200, and 250 mAs.
Figure 9: Comparison of the noise distribution in the six regions of the phantom shown in Figure 8 with white Gaussian noise. Blue dots are the measured values and the red dashed lines are the fitted Gaussian distributions.
Figure 10: Showing the noise variance changes for the background and the five circular regions of the phantom as the dose was increased from 5 mAs to 250 mAs.
Figure 11: Two small soft tissue regions in the lung of a thoracic anthropomorphic phantom are shown. The noise distribution of these regions (blue dots), which could be used in patch based denoising, is compared to that of the fitted Gaussian distribution (dashed red lines).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This study was partially supported by Toshiba Canada, Medical Systems Group, and NSERC Canada Grant no. 3247-2012.

References

  1. D. J. Brenner and E. J. Hall, “Computed tomography—an increasing source of radiation exposure,” The New England Journal of Medicine, vol. 357, no. 22, pp. 2277–2284, 2007.
  2. L. Yu, X. Liu, S. Leng et al., “Radiation dose reduction in computed tomography: techniques and future perspective,” Imaging in Medicine, vol. 1, no. 1, pp. 65–84, 2009.
  3. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances, vol. PM188, SPIE Press, Bellingham, Wash, USA, 2nd edition, 2009.
  4. Z. Liao, S. Hu, M. Li, and W. Chen, “Noise estimation for single-slice sinogram of low-dose X-ray computed tomography using homogenous patch,” Mathematical Problems in Engineering, vol. 2012, Article ID 696212, 16 pages, 2012.
  5. S. Hu, Z. Liao, and W. Chen, “Sinogram restoration for low-dosed X-ray computed tomography using fractional-order Perona-Malik diffusion,” Mathematical Problems in Engineering, vol. 2012, Article ID 391050, 13 pages, 2012.
  6. T. Li, X. Li, J. Wang et al., “Nonlinear sinogram smoothing for low-dose X-ray CT,” IEEE Transactions on Nuclear Science, vol. 51, no. 5, pp. 2505–2513, 2004.
  7. H. Lee, L. Xing, R. Davidi, R. Li, J. Qian, and R. Lee, “Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints,” Physics in Medicine and Biology, vol. 57, no. 8, pp. 2287–2307, 2012.
  8. J. Tang, B. E. Nett, and G.-H. Chen, “Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms,” Physics in Medicine and Biology, vol. 54, no. 19, pp. 5781–5804, 2009.
  9. K. Li, J. Tang, and G.-H. Chen, “Statistical model based iterative reconstruction (MBIR) in clinical CT systems: experimental assessment of noise performance,” Medical Physics, vol. 41, no. 4, Article ID 041906, 2014.
  10. A. M. Mendrik, E.-J. Vonken, A. Rutten, M. A. Viergever, and B. van Ginneken, “Noise reduction in computed tomography scans using 3-D anisotropic hybrid diffusion with continuous switch,” IEEE Transactions on Medical Imaging, vol. 28, no. 10, pp. 1585–1594, 2009.
  11. H. Rabbani, R. Nezafat, and S. Gazor, “Wavelet-domain medical image denoising using bivariate Laplacian mixture model,” IEEE Transactions on Biomedical Engineering, vol. 56, no. 12, pp. 2826–2837, 2009.
  12. A. Borsdorf, R. Raupach, T. Flohr, and J. Hornegger, “Wavelet based noise reduction in CT-images using correlation analysis,” IEEE Transactions on Medical Imaging, vol. 27, no. 12, pp. 1685–1703, 2008.
  13. L. Scharf and C. Demeure, Statistical Signal Processing: Detection, Estimation, and Time Series Analysis, Addison-Wesley, Reading, Mass, USA, 1991.
  14. F. Zhu, T. Carpenter, D. R. Gonzalez, M. Atkinson, and J. Wardlaw, “Computed tomography perfusion imaging denoising using Gaussian process regression,” Physics in Medicine and Biology, vol. 57, no. 12, pp. N183–N198, 2012.
  15. S. Hyder Ali and R. Sukanesh, “An efficient algorithm for denoising MR and CT images using digital curvelet transform,” Advances in Experimental Medicine and Biology, vol. 696, pp. 471–480, 2011.
  16. R. Sivakumar, “Denoising of computer tomography images using curvelet transform,” ARPN Journal of Engineering and Applied Sciences, vol. 2, no. 9, pp. 21–26, 2007.
  17. H. Rabbani, “Image denoising in steerable pyramid domain based on a local Laplace prior,” Pattern Recognition, vol. 42, no. 9, pp. 2181–2193, 2009.
  18. E. Y. Sidky, Y. Duchin, X. Pan, and C. Ullberg, “A constrained, total-variation minimization algorithm for low-intensity x-ray CT,” Medical Physics, vol. 38, no. 1, pp. S117–S125, 2011.
  19. A. R. Al-Hinnawi, M. Daear, and S. Huwaijah, “Assessment of bilateral filter on 1/2-dose chest-pelvis CT views,” Radiological Physics and Technology, vol. 6, no. 2, pp. 385–398, 2013.
  20. Y. Wang and H. Zhou, “Total variation wavelet-based medical image denoising,” International Journal of Biomedical Imaging, vol. 2006, Article ID 89095, 6 pages, 2006.
  21. K. Hämäläinen, L. Harhanen, A. Hauptmann, A. Kallonen, E. Niemi, and S. Siltanen, “Total variation regularization for large-scale X-ray tomography,” International Journal of Tomography & Simulation, vol. 25, no. 1, pp. 1–25, 2014.
  22. Z. Zhu, K. Wahid, P. Babyn, D. Cooper, I. Pratt, and Y. Carter, “Improved compressed sensing-based algorithm for sparse-view CT image reconstruction,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 185750, 15 pages, 2013.
  23. S. Hashemi, S. Beheshti, P. R. Gill, N. S. Paul, and R. S. C. Cobbold, “Fast fan/parallel beam CS-based low-dose CT reconstruction,” in Proceedings of the 38th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '13), pp. 1099–1103, IEEE, May 2013.
  24. K. Chen, E. Loli Piccolomini, and F. Zama, “An automatic regularization parameter selection algorithm in the total variation model for image deblurring,” Numerical Algorithms, vol. 67, no. 1, pp. 73–92, 2014.
  25. S. Osher, A. Sole, and L. Vese, “Image decomposition and restoration using total variation minimization and the H^{-1} norm,” Multiscale Modeling and Simulation, vol. 1, no. 3, pp. 349–370, 2003.
  26. H. Liao, F. Li, and M. K. Ng, “Selection of regularization parameter in total variation image restoration,” Journal of the Optical Society of America A, vol. 26, no. 11, pp. 2311–2320, 2009.
  27. D. L. Donoho, “De-noising by soft-thresholding,” IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613–627, 1995.
  28. G. M. Vainikko, “The discrepancy principle for a class of regularization methods,” USSR Computational Mathematics and Mathematical Physics, vol. 22, no. 3, pp. 1–19, 1982.
  29. P. C. Hansen, “Analysis of discrete ill-posed problems by means of the L-curve,” SIAM Review, vol. 34, no. 4, pp. 561–580, 1992.
  30. S. Akkoul, R. Harba, and R. Lédée, “An image dependent stopping method for iterative denoising procedures,” Multidimensional Systems and Signal Processing, vol. 25, no. 3, pp. 611–620, 2014.
  31. S. Hashemi, S. Beheshti, R. S. Cobbold, and N. S. Paul, “Non-local total variation based low-dose Computed Tomography denoising,” in Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '14), pp. 1083–1086, Chicago, Ill, USA, August 2014.
  32. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
  33. Z. Li, L. Yu, J. D. Trzasko et al., “Adaptive nonlocal means filtering based on local noise level for CT denoising,” Medical Physics, vol. 41, no. 1, Article ID 011908, 2014.
  34. K. Lu, N. He, and L. Li, “Nonlocal means-based denoising for medical images,” Computational and Mathematical Methods in Medicine, vol. 2012, Article ID 438617, 7 pages, 2012.
  35. A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” SIAM Journal on Multiscale Modeling and Simulation, vol. 4, no. 2, pp. 490–530, 2005.
  36. D. Kang, P. Slomka, R. Nakazato et al., “Image denoising of low-radiation dose coronary CT angiography by an adaptive block-matching 3D algorithm,” in Medical Imaging 2013: Image Processing, vol. 8669 of Proceedings of SPIE, February 2013.
  37. K. Abhari, M. Marsousi, J. Alirezaie, and P. Babyn, “Computed tomography image denoising utilizing an efficient sparse coding algorithm,” in Proceedings of the 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA '12), pp. 259–263, July 2012.
  38. D. Bartuschat, A. Borsdorf, H. Köstler, R. Rubinstein, and M. Stürmer, A Parallel K-SVD Implementation for CT Image Denoising, Friedrich-Alexander University, Erlangen, Germany, 2009.
  39. D. H. Trinh, M. Luong, J. M. Rocchisani, C. D. Pham, H. D. Pham, and F. Dibos, “An optimal weight method for CT image denoising,” Journal of Electronic Science and Technology, vol. 10, no. 2, pp. 124–129, 2012.
  40. P. Chatterjee and P. Milanfar, “Patch-based near-optimal image denoising,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1635–1649, 2012.
  41. J. Dutta, R. M. Leahy, and Q. Li, “Non-local means denoising of dynamic PET images,” PLoS ONE, vol. 8, no. 12, Article ID e81390, 2013.
  42. F. Yu, Y. Chen, and L. Luo, “CT image denoising based on sparse representation using global dictionary,” in Proceedings of the 7th ICME International Conference on Complex Medical Engineering (CME '13), pp. 408–411, May 2013.
  43. X. Shu, J. Yang, and N. Ahuja, “Non-local compressive sampling recovery,” in Proceedings of the IEEE International Conference on Computational Photography (ICCP '14), pp. 1–8, IEEE, Santa Clara, Calif, USA, May 2014.
  44. C. Won Kim and J. H. Kim, “Realistic simulation of reduced-dose CT with noise modeling and sinogram synthesis using DICOM CT images,” Medical Physics, vol. 41, no. 1, Article ID 011901, 2014.
  45. B. R. Whiting, P. Massoumzadeh, O. A. Earl, J. A. O'Sullivan, D. L. Snyder, and J. F. Williamson, “Properties of preprocessed sinogram data in x-ray computed tomography,” Medical Physics, vol. 33, no. 9, pp. 3290–3303, 2006.
  46. J. Hsieh, “Adaptive streak artifact reduction in computed tomography resulting from excessive x-ray photon noise,” Medical Physics, vol. 25, no. 11, pp. 2139–2147, 1998.
  47. S. Beheshti, M. Hashemi, X.-P. Zhang, and N. Nikvand, “Noise invalidation denoising,” IEEE Transactions on Signal Processing, vol. 58, no. 12, pp. 6007–6016, 2010.
  48. J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2992–3004, 2007.
  49. O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, UK, 1994.
  50. M. Lebrun, “An analysis and implementation of the BM3D image denoising method,” Image Processing On Line, vol. 2, pp. 175–213, 2012.
  51. S. G. Chang, B. Yu, and M. Vetterli, “Adaptive wavelet thresholding for image denoising and compression,” IEEE Transactions on Image Processing, vol. 9, no. 9, pp. 1532–1546, 2000.
  52. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  53. T. Yamashiro, T. Miyara, O. Honda et al., “Adaptive iterative dose reduction using three dimensional processing (AIDR3D) improves chest CT image quality and reduces radiation exposure,” PLoS ONE, vol. 9, no. 8, Article ID e105735, 2014.