Abstract
In view of the drawback of most image inpainting algorithms, in which texture is not prominent, an adaptive inpainting algorithm based on continued fractions was proposed in this paper. To restore each damaged point, the information of known pixel points around it was used to interpolate its intensity. The proposed method consisted of two steps: firstly, Thiele’s rational interpolation combined with the mask image was used to adaptively interpolate the intensities of damaged points to obtain an initial repaired image, and then Newton-Thiele’s rational interpolation was used to refine the initial repaired image to obtain the final result. To show the superiority of the proposed algorithm, extensive experiments were carried out on damaged images. Both subjective and objective evaluations were used to assess the quality of the repaired images, the objective evaluation being a comparison of Peak Signal to Noise Ratios (PSNRs). The experimental results showed that the proposed algorithm achieved better visual effects and higher PSNRs than state-of-the-art methods.
1. Introduction
Image inpainting is an important branch of image processing, which studies how to restore damaged regions in accordance with the human visual mechanism. Currently, there are many image inpainting methods, including methods based on partial differential equations (PDEs) [1–4], methods based on texture synthesis [5, 6], methods based on sparse representation [7–10], and other methods [11–15].
PDE-based methods propagate local structures from the exterior to the interior of the damaged regions. Zhou and Gao [16] studied the pricing problem of American options with a fractal transmission system under two-state regime-switching models, which could be formulated as a free boundary problem of a time-fractional PDE system. Su et al. [17] solved partial differential equations by using local polynomial regression. Bertalmio et al. [1] proposed an image inpainting method based on PDEs for the first time at the SIGGRAPH conference, where the damaged parts were repaired by iteration along the direction of the isophotes of surrounding pixel points; they subsequently proposed an inpainting model based on high-order partial differential equations from transmission theory. Chan and Shen [2, 3] proposed an image inpainting model based on total variation and a new model based on curvature-driven diffusion. Pascal et al. [4] proposed a new compression framework with homogeneous, biharmonic, and edge-enhancing diffusion, which supported different strategies for data selection and storage, and gave a detailed analysis of the advantages and disadvantages of the three partial differential equations (PDEs). These PDE-based methods are suitable for straight lines, curves, and small regions, but not for the texture details of large areas.
The texture synthesis scheme is another effective method for image inpainting [5, 6]. Efros and Leung [5] proposed a non-parametric sampling method based on single-pixel synthesis, adopting the Markov Random Field model in texture synthesis for the first time. Criminisi et al. [6] proposed an image inpainting method based on texture synthesis using a priority-driven patch match. Damaged images can be repaired well by these texture synthesis methods; nevertheless, the problem of blurred boundaries remains.
With the advent of sparse representation, sparse priors have also been used to solve the inpainting problem. Hu and Xiong [7] proposed an image inpainting scheme combining the Criminisi algorithm with sparse representation, where a sparse representation inpainting step replaced the best-matching-patch search in Criminisi et al.’s algorithm. Shen et al. [8] proposed an example-based image inpainting method using a sparse-representation-based patch inpainting model and an isophote-based priority. A patch-based image inpainting method for multiview images, executed in two phases, was proposed in [9]. Rao et al. [10] presented the concept of group-based sparse representation, using the group instead of the traditional patch as the basic unit. Sparse-representation-based methods are more suitable than the PDE-based and texture synthesis methods for large texture areas.
Huo et al. [11] proposed an automatic video scratch removal method based on Thiele-type continued fractions, which adopted Thiele’s rational interpolation to interpolate the intensity of every damaged pixel point. Bornemann and Marz [12] proposed a fast non-iterative method for image inpainting, which traverses the inpainting domain by the fast marching method just once while transporting, along the way, image values in a coherence direction robustly estimated by means of the structure tensor. In [13], the distribution of training data was learned to predict missing content in a damaged image. Pathak et al. [14] presented an unsupervised visual feature learning algorithm driven by context-based pixel prediction, where a context encoder captured both the appearance and the semantics of visual structures. In [15], an image inpainting algorithm based on the Kriging interpolation technique was proposed, where the Kriging interpolation could automatically fill the damaged and scratched regions.
Although the methods above achieve good restoration, they do not handle texture details well. Considering that images reconstructed by continued fractions have better visual effect and prominent texture [11, 18–20], in this paper we propose an adaptive image inpainting method based on continued fractions.
The main contributions of this paper are as follows: (1) an adaptive inpainting scheme based on Thiele’s rational interpolation is proposed; (2) a novel inpainting model combining Thiele’s rational interpolation with Newton-Thiele’s rational interpolation is proposed.
2. Overview of the Proposed Method
In this section, we summarize the proposed method. Our method consists of two phases, namely, an adaptive inpainting phase and a refining inpainting phase. In the adaptive inpainting phase, we adopt Thiele’s rational interpolation function and the corresponding mask image to interpolate the intensities of damaged pixel points. According to the mask image, we judge the overall direction of the scratches. If the overall direction is horizontal, the damaged pixel points are processed column by column; otherwise, they are processed row by row. For example, the overall direction of the scratches in Figure 1 is vertical, so the intensities of the damaged pixel points are interpolated row by row. We scan the mask image line by line to find the positions of damaged pixel points, and, for every damaged pixel point, we use the information of nearby known pixel points and Thiele’s rational interpolation function to interpolate its intensity. To obtain a correct intensity, we adaptively select interpolation sampling points that are close to the damaged point; that is, the known pixel points are selected as interpolation sampling points in a bilaterally symmetric way centered on the damaged point. When all damaged pixel points have been processed, we obtain an initial repaired image.
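The paper does not spell out how the overall scratch direction is judged from the mask; one plausible heuristic (a sketch of our own, with `scratch_direction` as a hypothetical helper and 255 as the assumed damaged-pixel value) is to compare how many rows versus columns of the mask contain damage:

```python
def scratch_direction(mask):
    """Guess the overall scratch direction from a mask given as a 2-D list,
    where 255 marks a damaged pixel (our assumed convention).

    A vertical scratch crosses many rows but only a few columns, so more
    damaged rows than damaged columns suggests a vertical scratch."""
    rows = sum(1 for row in mask if any(v == 255 for v in row))
    cols = sum(1 for j in range(len(mask[0]))
               if any(row[j] == 255 for row in mask))
    return "vertical" if rows >= cols else "horizontal"
```

A "vertical" result would then trigger row-by-row interpolation, as described above.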
In the refining inpainting phase, we adopt Newton-Thiele’s rational interpolation function to update the intensity of every damaged point. After the first phase, the damaged points have been repaired; in the second phase, we refine the previous result so that it is closer to the original image. We use the mask image again to find the position of every damaged point in the initial repaired image. Unlike the previous continued fractions inpainting method [11], except for the damaged point currently being interpolated, the other damaged points are treated as known pixel points whose intensities are those of the corresponding points in the initial repaired image. As shown in Figure 1, by using Newton-Thiele’s rational interpolation function and the information of 16 known pixel points around the damaged point being interpolated, the intensity of the damaged point is interpolated, and the intensity of this point in the initial repaired image is replaced by it. Finally, we obtain the final repaired result.
3. Adaptive Inpainting via Thiele’s Rational Interpolation
3.1. Thiele’s Interpolating Continued Fractions
Continued fractions are an ancient branch of mathematics, and their systematic analytic theory appeared in 1948 [21]. The most useful form for interpolation is the Thiele-type continued fraction, which can be defined as follows.
Suppose a set of real or complex points \(\{x_0, x_1, \ldots, x_n\}\) and a function \(f(x)\) defined on this domain, where the values \(f(x_i)\) do not have to be distinct from one another. One basic approach to approximating the function \(f(x)\) is Thiele’s interpolating continued fraction, which can be expressed as [22–25]
\[
R_n(x) = b_0 + \cfrac{x - x_0}{b_1 + \cfrac{x - x_1}{b_2 + \cfrac{x - x_2}{\ddots + \cfrac{x - x_{n-1}}{b_n}}}},
\]
where \(b_k = \varphi[x_0, x_1, \ldots, x_k]\) are the inverse differences of the function \(f(x)\) at the points \(x_0, \ldots, x_k\), which can be defined as
\[
\varphi[x_i] = f(x_i), \qquad
\varphi[x_0, \ldots, x_{k-1}, x_i] = \frac{x_i - x_{k-1}}{\varphi[x_0, \ldots, x_{k-2}, x_i] - \varphi[x_0, \ldots, x_{k-1}]}.
\]
It is not difficult to show that \(R_n(x)\) is a rational function whose numerator and denominator polynomials have degrees not exceeding \(\lceil n/2 \rceil\) and \(\lfloor n/2 \rfloor\), respectively, satisfying
\[
R_n(x_i) = f(x_i), \qquad i = 0, 1, \ldots, n.
\]
3.2. Newton-Type Approximation of Thiele-Type Continued Fractions
From the above section, we find that when a denominator is zero, the inverse differences cannot be calculated; in that case, Thiele’s interpolating continued fraction should be replaced by the following Newton-type polynomial:
\[
N_n(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \cdots + a_n (x - x_0)(x - x_1) \cdots (x - x_{n-1}),
\]
where \(a_k = f[x_0, x_1, \ldots, x_k]\) are the divided differences of \(f(x)\), defined as
\[
f[x_i] = f(x_i), \qquad
f[x_0, \ldots, x_k] = \frac{f[x_1, \ldots, x_k] - f[x_0, \ldots, x_{k-1}]}{x_k - x_0}.
\]
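As a concrete illustration, here is a minimal Python sketch (not the authors’ implementation) of the inverse-difference recursion, the bottom-up evaluation of the continued fraction, and the Newton-type fallback used when an inverse difference does not exist:

```python
def inverse_differences(xs, ys, eps=1e-12):
    """Compute the Thiele coefficients b_0..b_n by the inverse-difference
    recursion; return None if any denominator vanishes."""
    n = len(xs)
    r = list(ys)                      # level 0: r[i] = f(x_i)
    b = [r[0]]
    for k in range(1, n):
        nxt = r[:]
        for i in range(k, n):
            denom = r[i] - r[k - 1]
            if abs(denom) < eps:
                return None           # inverse difference undefined
            nxt[i] = (xs[i] - xs[k - 1]) / denom
        r = nxt
        b.append(r[k])
    return b

def divided_differences(xs, ys):
    """Newton divided differences a_0..a_n (the Newton-type fallback)."""
    a = list(ys)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            a[i] = (a[i] - a[i - 1]) / (xs[i] - xs[i - k])
    return a

def interpolate(xs, ys, x):
    """Evaluate Thiele's continued fraction at x; fall back to the
    Newton-type polynomial when the inverse differences do not exist."""
    b = inverse_differences(xs, ys)
    if b is not None:                 # continued fraction, evaluated bottom-up
        v = b[-1]
        for k in range(len(b) - 2, -1, -1):
            v = b[k] + (x - xs[k]) / v
        return v
    a = divided_differences(xs, ys)   # Horner-style Newton evaluation
    v = a[-1]
    for k in range(len(a) - 2, -1, -1):
        v = v * (x - xs[k]) + a[k]
    return v
```

For example, a four-point Thiele interpolant reproduces \(f(x) = x^2/(x+1)\) exactly, which is why rational interpolation is attractive for intensity profiles with sharp variation.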
3.3. Selection of Interpolation Sampling Points
For image inpainting by Thiele’s rational interpolation, the key is the selection of interpolation sampling points. Considering the computational complexity of continued fractions, we adaptively select 4 sampling points for interpolation. These sampling points are all known pixel points close to the damaged point being interpolated. The selection order of the sampling points is shown in Figure 2, where the black solid point is the damaged pixel point and the numbers give the selection order of the interpolation sampling points. The sampling points are selected in a bilaterally symmetric way, and the selection finishes when the number of sampling points reaches 4. Sometimes there are several damaged pixel points in one row (column); when we select the interpolation sampling points for one damaged pixel point, we skip the other damaged pixel points.
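The bilateral selection just described can be sketched as follows (a hypothetical helper of our own; `mask_row` marks the damaged pixels in the row being scanned):

```python
def select_samples(mask_row, col, num=4):
    """Return the column indices of `num` known pixels chosen alternately
    left and right of the damaged pixel at `col`, skipping damaged pixels.
    mask_row[c] is True where pixel c is damaged."""
    n = len(mask_row)
    samples, offset = [], 1
    while len(samples) < num and offset < n:
        for c in (col - offset, col + offset):       # bilateral symmetry
            if 0 <= c < n and not mask_row[c] and len(samples) < num:
                samples.append(c)
        offset += 1
    return sorted(samples)
```

When a neighbor on one side is itself damaged, the search simply continues outward, so the four samples stay as close to the damaged point as the mask allows.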
3.4. Inpainting Algorithm by Thiele’s Rational Interpolation
The proposed inpainting algorithm by Thiele’s rational interpolation works only when there is a consistent scratch direction in the damaged domain. Figure 3 shows a simulated scratch diagram, where the black solid points are the damaged pixel points. From Figure 3, we find that the overall scratch direction is vertical, so we select interpolation sampling points in the horizontal direction, using the method of the previous section. From left to right, we scan every pixel point of each row. If a pixel point is damaged, we use the information of the interpolation sampling points around it and Thiele’s rational interpolation function to obtain its intensity.
We now describe the adaptive inpainting algorithm in detail. From Figure 3, we select some pixel points in the dotted box, which are displayed in Figure 4. In Figure 4, the black solid points are damaged points, the dot points are the damaged points being interpolated, and the numbers give the selection order of the interpolation sampling points. To better describe the selection order for every damaged point, consecutive damaged points in the same row are displayed in separate rows. Damaged points near the point being interpolated lead to different selection orders of the interpolation sampling points. In general, the selection rule is the principle of proximity, and the interpolation sampling points are selected in a bilaterally symmetric way. If a pixel point encountered during the search is damaged, we skip it and continue to search for the next pixel point. After all 4 interpolation sampling points have been selected, we use the information of these pixel points and Thiele’s rational interpolation function to obtain the intensity of the damaged point. Once the intensities of all damaged points have been obtained, the initial repaired image is complete.
4. Refining Inpainting via Newton-Thiele’s Rational Interpolation
4.1. Newton-Thiele’s Rational Interpolation
Newton-Thiele’s rational interpolation is formed jointly by Newton’s polynomial in \(x\) and Thiele’s continued fraction in \(y\), and it can be defined as follows [26, 27].
Let
\[
\Pi_{n,m} = \{(x_i, y_j) : i = 0, 1, \ldots, n;\; j = 0, 1, \ldots, m\}
\]
be a set of interpolation nodes of a bivariate function \(f(x, y)\), where \(n\) and \(m\) are natural numbers. Newton-Thiele’s rational interpolant can be written as
\[
NT(x, y) = A_0(y) + A_1(y)(x - x_0) + \cdots + A_n(y)(x - x_0)(x - x_1) \cdots (x - x_{n-1}),
\tag{7}
\]
where \(A_i(y)\), \(i = 0, 1, \ldots, n\), are Thiele’s interpolants based on continued fractions defined as follows:
\[
A_i(y) = \varphi[x_0, \ldots, x_i; y_0] + \cfrac{y - y_0}{\varphi[x_0, \ldots, x_i; y_0, y_1] + \cfrac{y - y_1}{\ddots + \cfrac{y - y_{m-1}}{\varphi[x_0, \ldots, x_i; y_0, \ldots, y_m]}}},
\tag{8}
\]
where \(\varphi[x_0, \ldots, x_i; y_0, \ldots, y_j]\) (\(i = 0, 1, \ldots, n\); \(j = 0, 1, \ldots, m\)) are the blending differences, which can be calculated recursively by first taking divided differences with respect to \(x\) and then inverse differences with respect to \(y\).
Then it is not difficult to show that \(NT(x, y)\) determined by (7) and (8) satisfies
\[
NT(x_i, y_j) = f(x_i, y_j), \qquad i = 0, 1, \ldots, n;\; j = 0, 1, \ldots, m.
\]
4.2. Newton-Type Approximation Formula of Newton-Thiele’s Rational Interpolation
We find that if, in the above equation, a denominator is zero, the blending differences cannot be calculated. Similar to the approximation method for Thiele-type continued fractions in Section 3.2, \(A_i(y)\) defined in (8) should be replaced by the following Newton-type polynomial:
\[
A_i(y) = c_{i,0} + c_{i,1}(y - y_0) + \cdots + c_{i,m}(y - y_0)(y - y_1) \cdots (y - y_{m-1}),
\]
where \(c_{i,j}\) are the divided differences, with respect to \(y\), of the coefficients \(f[x_0, \ldots, x_i; y_j]\).
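To make the construction concrete, here is a self-contained Python sketch of the blending scheme (our reconstruction under the standard construction, not the authors’ code): divided differences are taken in \(x\) along every grid line \(y_j\), and each resulting coefficient sequence is interpolated in \(y\) by a Thiele continued fraction, falling back to the Newton-type polynomial above when an inverse difference vanishes:

```python
def _divided_diffs(ts, vs):
    """Newton divided differences of the values vs over the nodes ts."""
    a = list(vs)
    for k in range(1, len(ts)):
        for i in range(len(ts) - 1, k - 1, -1):
            a[i] = (a[i] - a[i - 1]) / (ts[i] - ts[i - k])
    return a

def _eval_1d(ts, vs, t, eps=1e-12):
    """Thiele continued fraction in t; Newton-type polynomial as fallback."""
    n, r = len(ts), list(vs)
    b, ok = [r[0]], True
    for k in range(1, n):                    # inverse-difference recursion
        nxt = r[:]
        for i in range(k, n):
            d = r[i] - r[k - 1]
            if abs(d) < eps:
                ok = False
                break
            nxt[i] = (ts[i] - ts[k - 1]) / d
        if not ok:
            break
        r = nxt
        b.append(r[k])
    if ok:                                   # continued fraction, bottom-up
        v = b[-1]
        for k in range(len(b) - 2, -1, -1):
            v = b[k] + (t - ts[k]) / v
        return v
    a = _divided_diffs(ts, vs)               # Newton-type fallback
    v = a[-1]
    for k in range(len(a) - 2, -1, -1):
        v = v * (t - ts[k]) + a[k]
    return v

def newton_thiele(xs, ys, F, x, y):
    """Evaluate the blending interpolant at (x, y), given F[i][j] = f(xs[i], ys[j])."""
    n, m = len(xs), len(ys)
    # divided differences in x along every grid line y_j: D[j][i] = f[x_0..x_i; y_j]
    D = [_divided_diffs(xs, [F[i][j] for i in range(n)]) for j in range(m)]
    result, basis = 0.0, 1.0
    for i in range(n):
        Ai = _eval_1d(ys, [D[j][i] for j in range(m)], y)   # A_i interpolated in y
        result += Ai * basis
        basis *= (x - xs[i])                                # Newton basis in x
    return result
```

On a 3 × 3 grid sampled from \(f(x, y) = xy + x + y\), the interpolant reproduces the function exactly, both at the nodes and between them.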
4.3. Newton-Thiele’s Rational Interpolation Window
Different from Thiele’s rational interpolation, a rectangular interpolation window is used in Newton-Thiele’s interpolation process. We use 16 pixel points around the interpolation point as sampling points, none of which lie in the same row (column) as the interpolation point. The details are shown in Figure 5, where the black dot is the point being interpolated and the 16 colored solid points around it are the sampling points.
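One plausible reading of this window (an assumption on our part, since Figure 5 is not reproduced here) is the 4 × 4 grid formed by the two nearest rows and columns on each side of the damaged pixel, excluding its own row and column:

```python
def window_coords(i, j):
    """Coordinates of the assumed 16-point sampling window around pixel (i, j):
    rows i-2, i-1, i+1, i+2 crossed with columns j-2, j-1, j+1, j+2."""
    rows = (i - 2, i - 1, i + 1, i + 2)
    cols = (j - 2, j - 1, j + 1, j + 2)
    return [(r, c) for r in rows for c in cols]
```

This yields exactly 16 points, none sharing a row or column with the pixel being interpolated, matching the description above.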
4.4. Refining Inpainting Algorithm by Newton-Thiele’s Rational Interpolation
By using Thiele’s interpolation inpainting algorithm, we obtain an initial repaired image; next, we refine the intensity of every damaged pixel point. Because every damaged point has already been repaired initially, in the refinement process, except for the damaged point being interpolated, the other damaged points are treated as known pixel points whose intensities are those of the initial repaired image. The mask image is used again to determine the position of every damaged pixel point, and the interpolation window in Figure 5 together with Newton-Thiele’s rational interpolation function is used to obtain the intensity of every damaged point. The interpolated intensity then replaces the corresponding value in the initial repaired image. When the intensities of all damaged points have been interpolated and updated, we obtain the final repaired image.
5. Implementation and Experimental Analysis
5.1. Algorithm Implementation
To obtain a better repaired image, we use Thiele’s rational interpolation to get an initial repaired image, and then Newton-Thiele’s rational interpolation is used to refine the initial repaired image to get the final result. The whole inpainting algorithm includes two steps: the adaptive inpainting and the refining inpainting. In the adaptive inpainting phase, Thiele’s rational interpolation function is used to interpolate the intensity of every damaged point; the detailed process is summarized in Algorithm 1.

In the refining inpainting phase, Newton-Thiele’s rational interpolation function is used to refine the intensity of every damaged point; the detailed process is summarized in Algorithm 2. The implementation of Algorithm 2 is based on the result of Algorithm 1.

Peak Signal to Noise Ratio (PSNR) is often used as a measure of signal reconstruction quality; it can be defined as follows:
\[
\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\dfrac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \bigl( g(i, j) - f(i, j) \bigr)^2},
\]
where \(M \times N\) is the size of the damaged image, \(g\) is the final repaired image, and \(f\) is the original image.
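A direct Python rendering of this definition (a minimal sketch assuming 8-bit intensities, so the peak value is 255):

```python
import math

def psnr(original, repaired):
    """PSNR in dB between two equally sized 2-D lists of 8-bit intensities."""
    M, N = len(original), len(original[0])
    mse = sum((original[i][j] - repaired[i][j]) ** 2
              for i in range(M) for j in range(N)) / (M * N)
    if mse == 0:
        return float("inf")      # identical images
    return 10.0 * math.log10(255.0 ** 2 / mse)
```

A larger PSNR means the repaired image is closer to the original; the worst case for 8-bit images (every pixel off by the full 255) gives 0 dB.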
5.2. Experimental Results and Analysis
In this section, we demonstrate the effectiveness and superiority of the proposed method through extensive experiments. The original images are selected from standard datasets (Set5, Set14, and B100), and the original images with scratches are used as damaged images. These standard datasets for image processing can be downloaded from the Internet (http://vllab.ucmerced.edu/wlai24/LapSRN/). The intensities of the repaired regions in the mask images are 255, and those of the other regions are 0. For color images, we apply our algorithm to each of the red, green, and blue channels separately. We use a large number of images for the experiments and choose five algorithms for comparison with the proposed method: the schemes of [1] (BSCB for short), [12] (CT), [6] (EB), [3] (TV), and [11] (Thiele). All experiments are run on a PC with an Intel(R) Core(TM) i3-4130 3.4 GHz CPU, 8 GB RAM, a 1 GB NVIDIA GPU, and MATLAB 2010b.
In Figures 6–11, all images are of size 256 × 256. The PSNRs of the images repaired by the different methods are listed in Table 1, where the maximal PSNRs are highlighted in bold.
To illustrate the prominent texture details produced by our method, we select repaired image patches from Figure 11, which are shown in Figure 12. To show the superiority of our method, we select one column from each repaired image patch in Figure 12 and compare the intensities obtained by the different methods with those of the original image patch; the comparisons are displayed in Figure 13. If the intensities and the frequency of the intensity distribution are closer to those of the original image patch, the repaired image is closer to the original image. From Figure 13, we find that the intensity distribution produced by our method is the closest to that of the original image patch.
Through visual comparison of Figures 6–11, we find that the results repaired by TV [3] are not good because some damaged domains are not repaired. When the damaged regions are large, the visual effects of the BSCB [1] and EB [6] methods are not very good. With the CT [12] and Thiele [11] methods, the visual effects of the repaired images are good; however, the details are not well repaired. The results of our method have better visual effects and prominent texture regions. From Figures 12 and 13, we find that the visual effect of our method is the best and that the intensity distribution of our method is closest to that of the original image patch. The objective comparisons in Table 1 show that our PSNRs are all higher than those of the other methods.
6. Discussions and Conclusions
A novel image inpainting method using nonlinear rational interpolation was presented. Our approach was based on the observation that the texture details of images repaired by most inpainting algorithms are not prominent. Inspired by the applications of continued fractions in image processing [11, 18–20], we proposed a novel image inpainting algorithm using continued fraction rational interpolation. To obtain better repaired results, Thiele’s rational interpolation was combined with Newton-Thiele’s rational interpolation: Thiele’s rational interpolation was used to adaptively interpolate an initial repaired image, and Newton-Thiele’s rational interpolation was used to refine the initial repaired image to obtain the final result. Our method has been tested on a series of images, and the results show that the proposed method performs significantly better than the other inpainting approaches. The proposed method is suitable for scratched images and images with small damaged regions; it is not suitable for images with large damaged areas or badly scratched images. We will address this limitation in future work.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grants nos. 61502141, 61070227, 61472466, and 11601115), the Anhui Provincial Natural Science Foundation (Grant no. 1508085QF128), and the Fundamental Research Funds for the Central Universities (Grants nos. JZ2015HGXJ0175 and JZ2016HGBZ1005).