Computational and Mathematical Methods in Medicine
Volume 2011, Article ID 173748, 9 pages
http://dx.doi.org/10.1155/2011/173748
Research Article

Efficient CT Metal Artifact Reduction Based on Fractional-Order Curvature Diffusion

1College of Computer Science, Sichuan University, Chengdu 610065, China
2School of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China

Received 13 April 2011; Revised 20 May 2011; Accepted 26 May 2011

Academic Editor: Dimos Baltas

Copyright © 2011 Yi Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a novel metal artifact reduction method based on a fractional-order curvature-driven diffusion model for X-ray computed tomography. Our method treats projection data with metal regions as a damaged image and uses the fractional-order curvature-driven diffusion model to recover the lost information caused by the metal region. The numerical scheme for our method is also analyzed. We use the peak signal-to-noise ratio as a reference measure. The simulation results demonstrate that our method achieves better performance than existing projection interpolation methods, including linear interpolation and total variation.

1. Introduction

Metal artifact reduction (MAR) is still one of the major challenges in X-ray computed tomography (CT) imaging [1–6]. For high-density objects such as metal implants, the severe attenuation of X-rays allows only a limited number of photons to reach the receiving CT sensor array. As a result, streak artifacts appear in the image reconstructed by filtered back projection (FBP). These artifacts spread through the whole image, thereby degrading image quality.

Since the projection data of the metal regions are much larger than those of ordinary tissue, we can assume that the segments of the sinogram containing metal projection data are dominated by the metal component alone. Based on this assumption, we can treat the projection data of metal objects as lost information and recover them with one of two main categories of methods: projection interpolation methods [7–12] or iterative reconstruction methods [13–17]. Theoretical analysis shows that MAR methods based on iterative reconstruction achieve better reconstruction performance than projection interpolation methods, but they often incur high computational costs and are difficult to implement in current CT imaging systems. In this paper, we concentrate on projection interpolation methods for MAR. In 1978, Lewitt and Bates used a Chebyshev polynomial to implement interpolation [7]. Kalender et al. employed linear interpolation (LI) [8], while Crawford added various assistant processes based on it [9]. Also based on linear interpolation, Gu et al. presented a more accurate metal region segmentation method using the differences between neighboring pixels [10]. Zhao et al. proposed interpolating the wavelet coefficients of the projection data [11]. To obtain a better visual effect, inpainting based on partial differential equations (PDEs) was introduced in [12, 13]. The inpainting method proposed by Gu et al. is based on Euler's elastica and curvature [12]. Duan et al. introduced a classical image inpainting method based on total variation (TV) for MAR [13]. However, the existing PDE inpainting methods cannot connect a wide inpainting region smoothly, and therefore, when wide regions exist, they do not achieve satisfactory results.

In this paper, we propose a fractional-order curvature-driven diffusion (FCDD) MAR model based on our generalized image regularization framework [18]. First, we introduce the FCDD inpainting model. Then, we give the numerical scheme for our method. After presenting our simulation, we conclude this paper.

2. Method

The main steps in our algorithm are similar to those in conventional projection interpolation algorithms as shown in Figure 1.

Figure 1: Flowchart of projection interpolation algorithms.

The main difference between our method and the conventional projection interpolation methods lies in the step “correction for metal projection data.” When the metal damages the original projection data in the form of a gap, conventional methods apply interpolation [7, 8] or PDE inpainting [12, 13] algorithms to restore the data gap. These conventional methods have two drawbacks. First, the gap boundaries after inpainting lack smoothness. Second, if the gap is wide, the inpainting results are visually unsatisfactory. The proposed CT MAR method, based on FCDD, fixes both flaws: it achieves smoother results and also copes with the artifacts caused by wide data gaps. In this section, we first review the well-known TV inpainting model, then present our FCDD model, and finally give the numerical algorithm for our model.

2.1. Review of Classic TV Inpainting Model

Assume a standard image model defined as
$$u(x) = u_0(x) + n(x), \tag{1}$$
where $u_0$ is the original image, $n$ is additive noise, and $u$ is the contaminated noisy image.

Let $\Omega$ be the (open) inpainting domain with boundary $\partial\Omega$, and let $E$ be an extended domain surrounding $\Omega$ such that $\partial\Omega$ lies within $E \cup \Omega$. The TV-based image inpainting model proposed by Chan and Shen [19] is
$$\min J_\lambda[u] = \int_{E \cup \Omega} |\nabla u|\, dx\, dy + \frac{\lambda}{2} \int_{E} \left| u - u_0 \right|^2 dx\, dy. \tag{2}$$
The first term is the regularizing term that inpaints the damaged domain, while the second term is the data fidelity term, which preserves important features and sharp edges in the presence of noise. $\lambda_E$ is a scale function that tunes the weights of the two terms. According to variational theory, the Euler-Lagrange equation corresponding to (2) is
$$-\nabla \cdot \left( \frac{\nabla u}{|\nabla u|} \right) + \lambda_E \left( u - u_0 \right) = 0 \tag{3}$$
with the Neumann boundary condition $\partial u / \partial n = 0$ on the boundary of $E \cup \Omega$, where
$$\lambda_E = \lambda\, \mathbf{1}_E(x, y) = \begin{cases} \lambda, & (x, y) \in E, \\ 0, & \text{otherwise}. \end{cases} \tag{4}$$
This model is inspired by the classic TV denoising model [20].
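As a minimal illustration of (2) and (3), the sketch below solves the TV inpainting model by explicit gradient descent: the curvature term $\nabla\cdot(\nabla u/|\nabla u|)$ diffuses information into the hole, while the fidelity term acts only on known pixels. The function name, parameter values, and periodic boundary handling are assumptions for the demonstration, not the paper's implementation.

```python
import numpy as np

def tv_inpaint(u0, mask, lam=10.0, dt=0.1, eps=1e-6, n_iter=500):
    """Gradient-descent sketch of the Chan-Shen TV inpainting model.

    u0   : observed image (pixels outside the hole are trusted)
    mask : True inside the inpainting domain Omega (unknown pixels)
    lam  : fidelity weight lambda, switched off inside the hole
    """
    u = u0.astype(float).copy()
    lam_map = np.where(mask, 0.0, lam)  # lambda_E: zero inside Omega
    for _ in range(n_iter):
        # central differences of the gradient (periodic boundaries via roll)
        ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
        uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # divergence of the unit gradient field: the curvature term of (3)
        px, py = ux / mag, uy / mag
        div = (np.roll(px, -1, axis=1) - np.roll(px, 1, axis=1)) / 2.0 \
            + (np.roll(py, -1, axis=0) - np.roll(py, 1, axis=0)) / 2.0
        u += dt * (div - lam_map * (u - u0))
    return u
```

On a smooth image with a small hole, the iteration fills the hole from its boundary; as the text notes, the filling is driven only by $|\nabla u|$, not by the geometry of the isophotes.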

2.2. Proposed FCDD Inpainting Model

In the classical TV inpainting model, the conductivity coefficient controlling the diffusion strength depends only on the magnitude of the isophotes:
$$F = \frac{1}{|\nabla u|}. \tag{5}$$
The geometric information of the isophotes is not considered, which is why a wide gap cannot be restored satisfactorily. Motivated by the methods in [21, 22], we remedy this situation by using a new fractional-order conductivity coefficient instead:
$$F = \frac{f\!\left( |\kappa^\alpha| \right)}{|\nabla^\alpha u|}, \tag{6}$$
where $f(\cdot)$ is a function with the following property:
$$f(s) = \begin{cases} 0, & s = 0, \\ \infty, & s = \infty, \\ \text{between } 0 \text{ and } \infty, & 0 < s < \infty. \end{cases} \tag{7}$$
If an isophote has a large fractional-order curvature, the diffusion strength is correspondingly large. In this paper, we choose $f(s) = s$. Thus, the evolution equation for the FCDD model is
$$\frac{\partial u}{\partial t} = \nabla \cdot \left( \frac{f\!\left( |\kappa^\alpha| \right)}{|\nabla^\alpha u|}\, \nabla u \right) \quad \text{in } D, \qquad u = u_0 \quad \text{in } D^c. \tag{8}$$
Here, the inpainting domain $D$ is a mathematical open set, $D^c$ denotes the outer area of $D$, and $u_0$ is the available part of the image. The fractional-order curvature $\kappa^\alpha$ is defined as
$$\kappa^\alpha = \nabla^\alpha \cdot \frac{\nabla^\alpha u}{|\nabla^\alpha u|}. \tag{9}$$

2.3. Numerical Scheme

In this section, we apply a time-marching scheme to our model. Assuming a time step $\Delta t$ and a space grid size $h$, we let
$$x_i = ih, \quad y_j = jh, \quad i, j = 0, 1, \ldots, N, \ \text{with } Nh = 1; \qquad t_n = n\,\Delta t, \quad n = 0, 1, \ldots \tag{10}$$

The explicit scheme iterates as
$$u^{n+1} = u^n + \Delta t\, \Delta u\, \frac{|\kappa^\alpha|}{|\nabla^\alpha u|}. \tag{11}$$

First, we describe the discretization of the fractional-order gradient operator $\nabla^\alpha$ and the fractional-order Laplacian operator $\Delta^\alpha$ using the Grünwald–Letnikov definition [23] from fractional calculus, together with the masks we proposed in [24, 25].

The $\alpha$-order fractional derivative based on the Grünwald–Letnikov definition can be expressed as
$${}^{\text{G-L}} D^\alpha s(x) = \frac{d^\alpha s(x)}{[d(x - a)]^\alpha} = \lim_{N \to \infty} \frac{\left( (x - a)/N \right)^{-\alpha}}{\Gamma(-\alpha)} \sum_{k=0}^{N-1} \frac{\Gamma(k - \alpha)}{\Gamma(k + 1)}\, s\!\left( x - k\,\frac{x - a}{N} \right), \tag{12}$$
where the duration of the signal $s(x)$ is $[a, x]$, $\alpha$ is any real number, and $s\!\left( x - k(x - a)/N \right)$ are the discrete samples.
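The Gamma-function weights in (12) can be generated without evaluating $\Gamma$ at large arguments, because consecutive weights satisfy a simple ratio. The sketch below (function name is ours, not from the paper) uses the recursion $w_0 = 1$, $w_k = w_{k-1}(k - 1 - \alpha)/k$, which follows directly from $\Gamma(z + 1) = z\,\Gamma(z)$:

```python
from math import gamma

def gl_weights(alpha, n):
    """First n Grunwald-Letnikov weights
    w_k = Gamma(k - alpha) / (Gamma(-alpha) * Gamma(k + 1)),
    computed by the stable recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w
```

For the integer order $\alpha = 1$ the weights reduce to $1, -1, 0, 0, \ldots$, i.e., the ordinary backward difference, which is a quick sanity check on the recursion.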

If $N$ is large enough and $a = 0$, the limit symbol can be dropped and (12) is rewritten as
$$\frac{d^\alpha s(x)}{dx^\alpha} \approx \frac{x^{-\alpha} N^\alpha}{\Gamma(-\alpha)} \sum_{k=0}^{N-1} \frac{\Gamma(k - \alpha)}{\Gamma(k + 1)}\, s\!\left( x + \frac{\alpha x}{2N} - \frac{kx}{N} \right). \tag{13}$$
To obtain the value of $s(x + \alpha x/2N - kx/N)$, we use Lagrange three-point interpolation on $s(x + x/N - kx/N)$, $s(x - kx/N)$, and $s(x - x/N - kx/N)$. Writing $s_k = s(x - kx/N)$, we obtain
$$\frac{d^\alpha s(x)}{dx^\alpha} \approx \frac{x^{-\alpha} N^\alpha}{\Gamma(-\alpha)} \sum_{k=0}^{N-1} \frac{\Gamma(k - \alpha)}{\Gamma(k + 1)} \left[ s_k + \frac{\alpha}{4}\left( s_{k-1} - s_{k+1} \right) + \frac{\alpha^2}{8}\left( s_{k-1} - 2 s_k + s_{k+1} \right) \right]. \tag{14}$$
Taking $k = n \le N - 1$, from (14), the anterior $n + 2$ approximate backward differences of the fractional partial derivatives along the negative $x$- and $y$-axes are
$$\begin{aligned} \frac{\partial^\alpha s(x, y)}{\partial x^\alpha} \approx{} & \left( \frac{\alpha}{4} + \frac{\alpha^2}{8} \right) s(x + 1, y) + \left( 1 - \frac{\alpha^2}{2} - \frac{\alpha^3}{8} \right) s(x, y) \\ &+ \sum_{k=1}^{n-2} \frac{1}{\Gamma(-\alpha)} \left[ \frac{\Gamma(k - \alpha + 1)}{(k + 1)!} \left( \frac{\alpha}{4} + \frac{\alpha^2}{8} \right) + \frac{\Gamma(k - \alpha)}{k!} \left( 1 - \frac{\alpha^2}{4} \right) + \frac{\Gamma(k - \alpha - 1)}{(k - 1)!} \left( -\frac{\alpha}{4} + \frac{\alpha^2}{8} \right) \right] s(x - k, y) \\ &+ \left[ \frac{\Gamma(n - \alpha - 1)}{(n - 1)!\,\Gamma(-\alpha)} \left( 1 - \frac{\alpha^2}{4} \right) + \frac{\Gamma(n - \alpha - 2)}{(n - 2)!\,\Gamma(-\alpha)} \left( -\frac{\alpha}{4} + \frac{\alpha^2}{8} \right) \right] s(x - n + 1, y) \\ &+ \frac{\Gamma(n - \alpha - 1)}{(n - 1)!\,\Gamma(-\alpha)} \left( -\frac{\alpha}{4} + \frac{\alpha^2}{8} \right) s(x - n, y), \end{aligned} \tag{15}$$
and analogously for $\partial^\alpha s(x, y) / \partial y^\alpha$, with the shifts applied to the $y$-coordinate.

For simplicity, we use fractional-order masks in only four directions, namely the positive and negative $x$- and $y$-directions. Let $D^\alpha_{x+}$, $D^\alpha_{x-}$, $D^\alpha_{y+}$, and $D^\alpha_{y-}$ denote the calculations in these four directions, as illustrated in Figure 2.

Figure 2: Masks of the four directions: (a) $D^\alpha_{x+}$, (b) $D^\alpha_{x-}$, (c) $D^\alpha_{y+}$, and (d) $D^\alpha_{y-}$.

The coefficients of the masks in Figure 2 are given below:
$$\begin{aligned} C_{-1} &= \frac{\alpha}{4} + \frac{\alpha^2}{8}, \\ C_0 &= 1 - \frac{\alpha^2}{2} - \frac{\alpha^3}{8}, \\ C_1 &= -\frac{5\alpha}{4} + \frac{5\alpha^3}{16} + \frac{\alpha^4}{16}, \\ C_k &= \frac{1}{\Gamma(-\alpha)} \left[ \frac{\Gamma(k - \alpha + 1)}{(k + 1)!} \left( \frac{\alpha}{4} + \frac{\alpha^2}{8} \right) + \frac{\Gamma(k - \alpha)}{k!} \left( 1 - \frac{\alpha^2}{4} \right) + \frac{\Gamma(k - \alpha - 1)}{(k - 1)!} \left( -\frac{\alpha}{4} + \frac{\alpha^2}{8} \right) \right], \\ C_{n-2} &= \frac{1}{\Gamma(-\alpha)} \left[ \frac{\Gamma(n - \alpha - 1)}{(n - 1)!} \left( \frac{\alpha}{4} + \frac{\alpha^2}{8} \right) + \frac{\Gamma(n - \alpha - 2)}{(n - 2)!} \left( 1 - \frac{\alpha^2}{4} \right) + \frac{\Gamma(n - \alpha - 3)}{(n - 3)!} \left( -\frac{\alpha}{4} + \frac{\alpha^2}{8} \right) \right], \\ C_{n-1} &= \frac{\Gamma(n - \alpha - 1)}{(n - 1)!\,\Gamma(-\alpha)} \left( 1 - \frac{\alpha^2}{4} \right) + \frac{\Gamma(n - \alpha - 2)}{(n - 2)!\,\Gamma(-\alpha)} \left( -\frac{\alpha}{4} + \frac{\alpha^2}{8} \right), \\ C_n &= \frac{\Gamma(n - \alpha - 1)}{(n - 1)!\,\Gamma(-\alpha)} \left( -\frac{\alpha}{4} + \frac{\alpha^2}{8} \right). \end{aligned} \tag{16}$$
Thus, we obtain a discrete representation of each term:
$$\begin{aligned} \Delta u &= u_{xx} + u_{yy}, \\ u_{xx} &= \operatorname{minmod}\!\left( u^c_{xx}, \operatorname{minmod}\!\left( 2 u^b_{xx}, 2 u^f_{xx} \right) \right), \\ u_{yy} &= \operatorname{minmod}\!\left( u^c_{yy}, \operatorname{minmod}\!\left( 2 u^b_{yy}, 2 u^f_{yy} \right) \right), \\ \frac{1}{|\nabla^\alpha u|} &= \frac{1}{\sqrt{ \left| D^\alpha_{x+} u \right|^2 + \left| D^\alpha_{y+} u \right|^2 + \varepsilon }}, \\ \kappa^\alpha = \nabla^\alpha \cdot \frac{\nabla^\alpha u}{|\nabla^\alpha u|} &= D^\alpha_{x-}\!\left( \frac{D^\alpha_{x+} u}{\sqrt{ \left| D^\alpha_{x+} u \right|^2 + \left| D^\alpha_{y+} u \right|^2 + \varepsilon }} \right) + D^\alpha_{y-}\!\left( \frac{D^\alpha_{y+} u}{\sqrt{ \left| D^\alpha_{x+} u \right|^2 + \left| D^\alpha_{y+} u \right|^2 + \varepsilon }} \right), \end{aligned} \tag{17}$$
where $\varepsilon$ is a small positive number that prevents division by zero, the superscripts $c$, $b$, and $f$ denote central, backward, and forward differences, respectively, and the minmod function is
$$\operatorname{minmod}(a, b) = \operatorname{sign}(a) \max\!\left( 0, \min\!\left( |a|, b \operatorname{sign}(a) \right) \right). \tag{18}$$
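The pieces above assemble into one explicit iteration of (11). The sketch below is illustrative only: it approximates the directional operators with truncated Grünwald–Letnikov weights rather than the full interpolated mask of (16), uses periodic boundaries, and the parameter values (α = 1.8, eight mask taps, Δt = 0.05) are assumptions for the demonstration.

```python
import numpy as np

def gl_weights(alpha, n):
    # Grunwald-Letnikov weights: w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return np.array(w)

def minmod(a, b):
    # minmod(a, b) = sign(a) * max(0, min(|a|, b * sign(a))), as in (18)
    return np.sign(a) * np.maximum(0.0, np.minimum(np.abs(a), b * np.sign(a)))

def frac_diff(u, w, axis, forward):
    # 1-D fractional difference along one axis; forward=True reads samples
    # ahead of the pixel (D_+), forward=False reads behind it (D_-).
    out = np.zeros_like(u, dtype=float)
    for k, wk in enumerate(w):
        out += wk * np.roll(u, -k if forward else k, axis=axis)
    return out

def limited_second_diff(u, axis):
    # minmod-limited second difference: minmod(central, minmod(2*backward, 2*forward))
    c = np.roll(u, -1, axis) - 2 * u + np.roll(u, 1, axis)
    b = u - 2 * np.roll(u, 1, axis) + np.roll(u, 2, axis)
    f = np.roll(u, -2, axis) - 2 * np.roll(u, -1, axis) + u
    return minmod(c, minmod(2 * b, 2 * f))

def fcdd_step(u, alpha=1.8, n_taps=8, dt=0.05, eps=1e-6):
    """One explicit iteration of (11): u <- u + dt * Lap(u) * |kappa^a| / |grad^a u|."""
    w = gl_weights(alpha, n_taps)
    dxp = frac_diff(u, w, axis=1, forward=True)
    dyp = frac_diff(u, w, axis=0, forward=True)
    mag = np.sqrt(dxp ** 2 + dyp ** 2 + eps)          # |grad^a u| with epsilon guard
    kappa = frac_diff(dxp / mag, w, axis=1, forward=False) \
          + frac_diff(dyp / mag, w, axis=0, forward=False)  # kappa^a as in (17)
    lap = limited_second_diff(u, 1) + limited_second_diff(u, 0)
    return u + dt * lap * np.abs(kappa) / mag
```

A constant image is a fixed point of the iteration (the limited Laplacian vanishes), which is the expected behavior of a pure diffusion step.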

3. Results

In this section, we present the experimental results of our FCDD model compared with the LI and TV models.

As there is no standard quantitative method to measure the performance of CT MAR [11], we apply the peak signal-to-noise ratio (PSNR), commonly used in image inpainting [26], as the evaluation criterion:
$$\mathrm{PSNR} = 10 \times \log_{10}\!\left( \frac{255^2}{\left\| u - u_0 \right\|_2^2} \right). \tag{19}$$

Here, $u$ is the image after inpainting and $u_0$ is the original image. The greater the PSNR, the better the performance. Because PSNR measures the similarity between an inpainted image and the true image, it is a suitable reference criterion.
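A small sketch of the criterion in (19) for 8-bit images; here the squared error is normalised per pixel (i.e., the MSE), which is the usual convention in image inpainting and is an assumption on our part about the intended normalisation:

```python
import numpy as np

def psnr(u, u0):
    """PSNR in dB for 8-bit images: 10 * log10(255^2 / MSE)."""
    diff = np.asarray(u, dtype=float) - np.asarray(u0, dtype=float)
    mse = np.mean(diff ** 2)  # per-pixel mean squared error
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

A uniform error of one grey level gives $10 \log_{10}(255^2) \approx 48.13$ dB, so the PSNR values around 26 dB reported below correspond to much larger residual errors.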

Our method follows several concrete steps. First, because this paper focuses on the inpainting algorithm, for simplicity we use only a threshold method to extract the metal regions; a more accurate segmentation algorithm would be expected to further enhance MAR performance. Second, we locate the corresponding metal traces in the projection data. Third, we employ the FCDD algorithm to inpaint those traces. Finally, we reconstruct the image from the inpainted sinogram and reinsert the metal regions.
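The second and third steps operate on the sinogram. The sketch below implements the LI baseline of Kalender et al. [8] used for comparison in the experiments: for each projection angle, the detector bins covered by the metal trace are replaced by 1-D linear interpolation from the nearest unaffected bins. The function name and array layout are our assumptions.

```python
import numpy as np

def li_correct_sinogram(sino, metal_trace):
    """Linear-interpolation (LI) sinogram correction.

    sino        : (n_angles, n_bins) sinogram
    metal_trace : boolean array of the same shape, True where metal projects
    """
    out = sino.astype(float).copy()
    bins = np.arange(sino.shape[1])
    for a in range(sino.shape[0]):
        bad = metal_trace[a]
        if bad.any() and not bad.all():
            # interpolate the gap from the surrounding clean detector bins
            out[a, bad] = np.interp(bins[bad], bins[~bad], sino[a, ~bad])
    return out
```

The FCDD method replaces this per-angle 1-D fill with the 2D diffusion of Section 2.2, which also exploits the correlation between neighbouring projection angles.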

In the first experiment, we compared the performance of FCDD against other methods in the presence of multiple metal objects. Five metal regions with much higher attenuation were added to the Shepp-Logan (S-L) phantom (256 × 256) to simulate metal artifacts. The parameters are given in Table 1.

Table 1: Parameters of the modified S-L phantom with five metal regions: 𝜏 is the attenuation coefficient; 𝐴, 𝐵 are the lengths of the semiaxes of each ellipse; 𝑥, 𝑦 are the central coordinates of each ellipse; 𝜃 is the rotation angle (in degrees).

Figure 3 shows the results of the different algorithms: Figure 3(a) gives the original phantom with metal regions, while Figure 3(b) shows the phantom reconstructed from the projection data, containing severe metal artifacts. Figures 3(c) to 3(e) illustrate the results of LI, TV, and FCDD, respectively. In these figures, we can see that the metal artifacts are suppressed to different degrees. Compared with LI and TV, FCDD achieves a better visual effect. The structural information (edges and shapes) in Figure 3(e) ($\alpha = 1.8$) is the clearest; in particular, the shapes of the virtual organs are almost fully preserved. The TV method, by contrast, introduces an artificial effect: near the metal regions, the staircase effect is pronounced. FCDD shows the best performance, with the highest PSNR. To explain why, we also give the sinograms after inpainting with the three methods in Figure 4.

Figure 3: Comparison of different sinogram inpainting methods with multiple metal regions: (a) phantom with five metal regions, (b) filtered back projection with metal artifacts, (c) filtered back projection after LI (PSNR = 26.4205), (d) filtered back projection after TV (PSNR = 25.9497), and (e) filtered back projection after FCDD (PSNR = 26.4399).
Figure 4: Sinogram inpainting results for different methods with multiple metal regions: (a) sinogram after LI, (b) sinogram after TV, and (c) sinogram after FCDD.

When there are multiple metal regions, the gaps to be inpainted are much wider than for a single metal region, and far more missing information needs to be interpolated. Classic interpolation methods use only the information in the same column; this information is too scant to obtain an accurate result, as shown in Figure 4(a). As TV and FCDD are 2D inpainting methods, they make use not only of the information in the columns but also of that in the rows, and thus obtain better results. However, TV is flawed when dealing with wide regions, and the staircase effect occurs. The reason is that the order of TV is 2, whereas to obtain an accurate result the order must be greater than 2 [27] but less than 4 [28]. In this experiment, we set $\alpha = 1.8$ in FCDD, so that the effective order approximates 3, and thus the visual quality of the FCDD-inpainted sinograms is superior.

Figure 5 shows how the PSNR varies with the order of the algorithm. When $\alpha = 1$, the model reduces to that of [22]. As the order increases, the PSNR also increases, peaking near $\alpha = 1.8$ and declining thereafter.

Figure 5: Variations in PSNR with fractional order.

Figure 6 shows a clinical case of a patient with metal implants in both femurs. In this experiment, the original artifact-free image is unknown, so we cannot choose $\alpha$ according to the PSNR. Based on the analysis in the previous experiment, we set $\alpha = 1.8$. Figure 6(a) shows the original image with dark, broad streaks radiating from the metal implants. Figures 6(b) to 6(d) illustrate the reduction of the most severe artifacts using LI, TV, and FCDD, respectively. The dark streaks are not adequately suppressed in Figures 6(b) and 6(c), and fictitious artifacts caused by TV are visible in Figure 6(c). These disturbing artifacts are largely removed in Figure 6(d), and structural information previously obscured by artifacts or incomplete correction becomes visible.

Figure 6: Comparison of different methods dealing with a clinical case: (a) original image with metal artifacts, (b) FBP after LI, (c) FBP after TV, and (d) FBP after FCDD.

4. Conclusion

A new MAR algorithm has been proposed based on the classic TV inpainting model. We replaced the conventional conductivity coefficient of TV with a new fractional-order curvature-driven one; in comparison with linear interpolation and TV, our method obtained better quantitative results and visual effects.

To achieve the best performance for different images, the fractional order also needs to be different. Thus, future work will focus on the adaptive selection of the order of the algorithm.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (60972131). The authors thank the anonymous reviewers for their comments, which have substantially improved the paper.

References

  1. J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances, SPIE Press, Bellingham, Wash, USA, 2003.
  2. P. Stradiotti, A. Curti, G. Castellazzi, and A. Zerbi, “Metal-related artifacts in instrumented spine. Techniques for reducing artifacts in CT and MRI: state of the art,” European Spine Journal, vol. 18, supplement 1, pp. S102–S108, 2009.
  3. A. Kondo, Y. Hayakawa, J. Dong, and A. Honda, “Iterative correction applied to streak artifact reduction in an X-ray computed tomography image of the dento-alveolar region,” Oral Radiology, vol. 26, no. 1, pp. 61–65, 2010.
  4. M. Yazdia, L. Gingras, and L. Beaulieu, “An adaptive approach to metal artifact reduction in helical computed tomography for radiation therapy treatment planning: experimental and clinical studies,” International Journal of Radiation Oncology Biology Physics, vol. 62, no. 4, pp. 1224–1231, 2005.
  5. N. A. Ebraheim, R. Coombs, J. J. Rusin, and W. T. Jackson, “Reduction of postoperative CT artifacts of pelvic fractures by use of titanium implants,” Orthopedics, vol. 13, no. 12, pp. 1357–1358, 1990.
  6. N. Haramati, R. B. Staron, K. Mazel-Sperling et al., “CT scans through metal scanning technique versus hardware composition,” Computerized Medical Imaging and Graphics, vol. 18, no. 6, pp. 429–434, 1994.
  7. R. M. Lewitt and R. H. Bates, “Image reconstruction from projections (III): projection completion methods,” Optik, vol. 50, no. 4, pp. 189–204, 1978.
  8. W. A. Kalender, R. Hebel, and J. Ebersberger, “Reduction of CT artifacts caused by metallic implants,” Radiology, vol. 164, no. 2, pp. 576–577, 1987.
  9. C. R. Crawford, “Reprojection using a parallel backprojector,” Medical Physics, vol. 13, no. 4, pp. 480–483, 1986.
  10. J. W. Gu, L. Zhang, Z. Q. Chen, Y. X. Xing, and Z. F. Huang, “A method based on interpolation for metal artifacts reduction in CT images,” Journal of X-Ray Science and Technology, vol. 14, no. 1, pp. 11–19, 2006.
  11. S. Zhao, D. D. Robertson, G. Wang, B. Whiting, and K. T. Bae, “X-ray CT metal artifact reduction using wavelets: an application for imaging total hip prostheses,” IEEE Transactions on Medical Imaging, vol. 19, no. 12, pp. 1238–1247, 2000.
  12. J. Gu, L. Zhang, G. Yu, Y. Xing, and Z. Chen, “X-ray CT metal artifacts reduction through curvature based sinogram inpainting,” Journal of X-Ray Science and Technology, vol. 14, no. 2, pp. 73–82, 2006.
  13. X. Duan, L. Zhang, Y. Xiao, J. Cheng, Z. Chen, and Y. Xing, “Metal artifact reduction in CT images sinogram TV inpainting,” in Proceedings of the IEEE Nuclear Science Symposium Conference Record (NSS/MIC '08), pp. 4175–4177, Dresden, Germany, October 2008.
  14. B. de Man, J. Nuyts, P. Dupont, G. Marchal, and P. Suetens, “An iterative maximum-likelihood polychromatic algorithm for CT,” IEEE Transactions on Medical Imaging, vol. 20, no. 10, pp. 999–1008, 2001.
  15. G. Wang, D. L. Snyder, J. A. O'Sullivan, and M. W. Vannier, “Iterative deblurring for metal artifact reduction,” IEEE Transactions on Medical Imaging, vol. 15, no. 5, pp. 657–667, 1996.
  16. C. Lemmens, D. Faul, and J. Nuyts, “Suppression of metal artifacts in CT using a reconstruction procedure that combines MAP and projection completion,” IEEE Transactions on Medical Imaging, vol. 28, no. 2, pp. 250–260, 2009.
  17. D. Prell, Y. Kyriakou, M. Beister, and W. A. Kalender, “A novel forward projection-based metal artifact reduction method for flat-detector computed tomography,” Physics in Medicine and Biology, vol. 54, no. 21, pp. 6575–6591, 2009.
  18. Y. Zhang, Y. Pu, J. Hu, and J. Zhou, “A fractional framework for image inpainting,” submitted to IET Image Processing.
  19. T. F. Chan and J. Shen, “Mathematical models for local nontexture inpaintings,” SIAM Journal on Applied Mathematics, vol. 62, no. 3, pp. 1019–1043, 2001.
  20. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D, vol. 60, no. 1–4, pp. 259–268, 1992.
  21. Z. Jun and W. Zhihui, “A class of fractional-order multi-scale variational models and alternating projection algorithm for image denoising,” Applied Mathematical Modelling, vol. 35, pp. 2516–2528, 2011.
  22. T. F. Chan and J. Shen, “Nontexture inpainting by curvature-driven diffusions,” Journal of Visual Communication and Image Representation, vol. 12, no. 4, pp. 436–449, 2001.
  23. K. B. Oldham and J. Spanier, The Fractional Calculus, Academic Press, New York, NY, USA, 1974.
  24. Y. Pu, W. Wang, J. Zhou, H. Jia, and Y. Wang, “Fractional derivative detection of digital image texture details and implementation of fractional derivative filter,” Science in China Series F, vol. 38, no. 12, pp. 2252–2272, 2008.
  25. Y. F. Pu, J. L. Zhou, and X. Yuan, “Fractional differential mask: a fractional differential-based approach for multiscale texture enhancement,” IEEE Transactions on Image Processing, vol. 19, no. 2, pp. 491–511, 2010.
  26. J. C. Russ, The Image Processing Handbook, CRC Press, Boca Raton, Fla, USA, 6th edition, 2011.
  27. M. Bertalmío, A. Bertozzi, and G. Sapiro, “Navier-Stokes, fluid dynamics, and image and video inpainting,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '01), pp. 355–362, December 2001.
  28. M. Bertalmío, “Strong-continuation, contrast-invariant inpainting with a third-order optimal PDE,” IEEE Transactions on Image Processing, vol. 15, no. 7, pp. 1934–1938, 2006.