Journal of Applied Mathematics
Volume 2014 (2014), Article ID 757318, 13 pages
http://dx.doi.org/10.1155/2014/757318
Research Article

A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

1Department of ECE, Anna University, Dindigul Campus, Dindigul, Tamil Nadu 624 622, India
2Department of Information Technology, PSNA College of Engineering and Technology, Dindigul, Tamil Nadu 624 622, India

Received 28 November 2013; Revised 31 January 2014; Accepted 3 February 2014; Published 25 March 2014

Academic Editor: Feng Gao

Copyright © 2014 R. Gomathi and A. Vincent Antony Kumar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to purposefully remove some of its portions. At the same time, edges are extracted from the input image and passed to the decoder in a compressed manner. The transmitted edges act as assistant information and help the inpainting process fill the missing regions at the decoder. Texture synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model does. In the proposed scheme, a neural network is employed to improve the compression ratio for image coding. Test results are compared with the JPEG 2000 and H.264 Intracoding algorithms and show that the proposed algorithm performs well.

1. Introduction

Image inpainting [1–5] is a method for recovering regions in images whose pixels are distorted or removed in some way. Inpainting methods are commonly based on Partial Differential Equations (PDEs) and Total Variation (TV) models. In the PDE technique [6], the pixel values around the region to be inpainted are taken as the boundary condition of a boundary value problem, and a suitable interpolation equation is then solved over that area. Image inpainting has a variety of applications such as text and object removal, denoising, superresolution, digital zooming, filling-in, and compression. Galić et al. [7] use an inpainting technique directly for compression, whereas previous studies treated image inpainting only as a preprocessing step for existing image compression standards. Bugeau et al. [8] offered a working algorithm for image inpainting that tries to approximate the global minimum of an energy functional combining the three fundamental concepts of self-similarity, coherence, and propagation. When the image does not contain enough patches to copy from, either because the mask is too spread out and the patch size is large or because the mask covers a singular location in the image, the results are poor; although the presence of a geometry term seems to help, it is clearly not enough. The authors neither solved these problems nor optimized the search in the patch space.

Chan et al. [6] have proposed TV wavelet inpainting models. The main benefit of the TV model is that it preserves edges very well, but it suffers from the drawback known as the staircase effect. To overcome this defect, we analyze the physical characteristics of the TV model [9] and the p-Laplacian operator [10] in local coordinates. At the same time, traditional wavelets [11] are not very effective in dealing with multidimensional signals containing distributed discontinuities such as edges. To overcome this limitation, one has to use basis elements with much higher directional sensitivity and of various shapes, able to capture the intrinsic geometrical features of multidimensional phenomena.

In this paper, a new discrete multiscale representation [12] called the Discrete Shearlet Transform (DST) is introduced to perform inpainting based on the p-Laplacian operator in the shearlet domain. This approach combines the power of multiscale methods with a unique ability to capture the geometry of multidimensional data and is optimally efficient in representing images containing edges. In the proposed algorithm, correlated portions are identified and removed at the encoder and filled by image inpainting at the decoder. A p-Laplacian based inpainting method employing the DST is presented, which can effectively reduce the staircase effect of the TV model and achieve shorter computing times. Gradient descent back propagation [13] with an adaptive learning rate is proposed for compression.

This paper is arranged as follows. Section 2 states the problem. Section 3 discusses the framework of the proposed coding scheme, specifically the ANN based compression in the shearlet domain and the shearlet image inpainting based on the p-Laplacian operator. Section 4 presents the experimental results, and Section 5 concludes the paper.

2. Statement of the Problem

A standard image model [6] is defined as f = u + n, where u is the original noise-free image and n is Gaussian white noise.

The standard wavelet transform of f is given by β_{j,k} = ⟨f, ψ_{j,k}⟩, (j,k) ∈ Z², where β_{j,k} are the wavelet coefficients and ψ is the mother wavelet function.

Damage in the wavelet domain causes the loss of wavelet coefficients of f on an index region I, with {β_{j,k} : (j,k) ∈ I} representing the missing or damaged wavelet components. The task of inpainting is to restore the missing coefficients in a proper manner, so that as much information as possible is restored in the image.
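To make the damaged-coefficient setup concrete, here is a minimal NumPy sketch using one level of an orthonormal 1-D Haar transform as a stand-in for the wavelet (or shearlet) transform; the toy signal, the lost index, and all names are illustrative assumptions, not the paper's data.

```python
import numpy as np

def haar_step(x):
    # one level of the orthonormal Haar transform: averages then details
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d])

def inv_haar_step(c):
    n = len(c) // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

u = np.array([4.0, 4.0, 5.0, 5.0, 8.0, 8.0, 9.0, 9.0])  # toy "image row"
beta = haar_step(u)                   # transform coefficients
missing = np.zeros(beta.shape, dtype=bool)
missing[1] = True                     # index region I of lost coefficients
beta_damaged = np.where(missing, 0.0, beta)
u_damaged = inv_haar_step(beta_damaged)
# inpainting must estimate beta[missing]; the known coefficients and the
# spatial-domain regularity of u act as the constraints
```

The known coefficients constrain the reconstruction exactly, while the entries indexed by `missing` are the unknowns the inpainting model solves for.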

The traditional wavelets [14–16] used for inpainting do not deal well with multidimensional discontinuities such as edges. Recently, a theory for multidimensional data called Multiscale Geometric Analysis (MGA) has been developed, and many new MGA tools have been proposed, such as the ridgelet, curvelet, bandlet, and contourlet, which provide higher directional sensitivity than wavelets. Shearlets [17–19], the approach adopted in this paper, not only possess all the above properties but are also equipped with a rich mathematical structure similar to wavelets, associated with a multiresolution analysis. Shearlets form a tight frame at various scales and directions and are optimally sparse in representing images with edges.

The discrete shearlet transform of f is given by s_{j,l,k} = ⟨f, ψ_{j,l,k}⟩, where j indexes scale, l direction, and k location, and the corresponding image inpainting model in the shearlet domain is

  min_u E(u) = (λ/2) Σ_{(j,l,k)∉I} (s_{j,l,k}(u) − s⁰_{j,l,k})² + ∫_Ω |∇u| dx,

where s⁰_{j,l,k} are the observed (undamaged) shearlet coefficients and I is the index region of the missing coefficients.

When this model is solved, the TV norm retains sharp edges while reducing noise and other oscillations. But the corresponding Euler-Lagrange equation is not trivial to compute, since it is highly nonlinear and ill-posed in the strong sense. Furthermore, this model suffers from the drawback called the staircase effect. To overcome these deficiencies, a p-Laplacian operator [20] is introduced, and the new shearlet inpainting model is

  min_u E(u) = (λ/2) Σ_{(j,l,k)∉I} (s_{j,l,k}(u) − s⁰_{j,l,k})² + ∫_Ω (1/p) |∇u|^p dx,  1 ≤ p ≤ 2,

where p can be adaptively selected based on the local gradient features of the image. That is, away from edges, p approaches 2 to overcome the staircase effect; on the contrary, near edges, p approaches 1 to preserve them. So this new model can effectively reduce the staircase effect of the TV model while still retaining sharp edges as the TV model does.
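The paper does not spell out the rule that drives p toward 2 in flat regions and toward 1 near edges, so the following NumPy sketch uses one common choice, p(x) = 1 + 1/(1 + k|∇u|²); the function name and the constant `k` are illustrative assumptions.

```python
import numpy as np

def adaptive_p(u, k=0.05):
    # p -> 2 in flat regions (suppresses the staircase effect),
    # p -> 1 where the gradient is large (preserves edges).
    # The form 1 + 1/(1 + k|grad u|^2) is one common choice; the paper
    # does not fix a formula, so this is an illustrative assumption.
    gy, gx = np.gradient(u.astype(float))   # row and column derivatives
    grad2 = gx**2 + gy**2
    return 1.0 + 1.0 / (1.0 + k * grad2)
```

On a constant image this returns p = 2 everywhere; across a strong step edge it drops close to 1, matching the behavior the model asks for.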

The Euler-Lagrange equation of the above inpainting model is

  λ Σ_{(j,l,k)∉I} (s_{j,l,k}(u) − s⁰_{j,l,k}) ψ_{j,l,k} − div(|∇u|^{p−2} ∇u) = 0.  (7)

The gradient descent flow of (7) is

  ∂u/∂t = div(|∇u|^{p−2} ∇u) − λ Σ_{(j,l,k)∉I} (s_{j,l,k}(u) − s⁰_{j,l,k}) ψ_{j,l,k}.  (8)

The above equation is solved by a simple explicit finite difference algorithm. To simplify the formulation, we introduce the standard finite difference notations, such as

the forward differences:

  Δ₊ˣ u_{i,j} = u_{i+1,j} − u_{i,j},  Δ₊ʸ u_{i,j} = u_{i,j+1} − u_{i,j},

and the backward differences:

  Δ₋ˣ u_{i,j} = u_{i,j} − u_{i−1,j},  Δ₋ʸ u_{i,j} = u_{i,j} − u_{i,j−1}.

We note that it is important to evaluate the nonlinear term in (8), which we denote as κ = div(|∇u|^{p−2} ∇u). However, the p-Laplace operator is defined in the pixel domain. In this paper, we compute it straightforwardly by transforming from the shearlet domain to the pixel domain, applying the p-Laplace operator there, and then transforming back to the shearlet domain. That is, we calculate the following.

For all i, j, we compute

  κ_{i,j} = Δ₋ˣ( Δ₊ˣ u_{i,j} / (|Δ₊ˣ u_{i,j}|² + |Δ₊ʸ u_{i,j}|² + ε)^{(2−p)/2} ) + Δ₋ʸ( Δ₊ʸ u_{i,j} / (|Δ₊ˣ u_{i,j}|² + |Δ₊ʸ u_{i,j}|² + ε)^{(2−p)/2} ),

where ε is a small positive number used to prevent numerical blow-up when |∇u| = 0.

Then we compute the projection of the curvature κ onto the shearlet basis by κ̃_{j,l,k} = ⟨κ, ψ_{j,l,k}⟩.
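The discretization above (forward differences for the gradient, backward differences for the divergence, with the ε regularizer) can be sketched in NumPy as follows; the periodic boundary handling via `np.roll` is a simplifying assumption, not the paper's stated choice.

```python
import numpy as np

def p_laplace_curvature(u, p, eps=1e-8):
    # Discrete kappa = div(|grad u|^(p-2) grad u): forward differences for
    # the gradient, backward differences for the divergence, with a small
    # eps preventing blow-up where |grad u| = 0 (as in the text).
    dxp = np.roll(u, -1, axis=1) - u       # forward difference in x
    dyp = np.roll(u, -1, axis=0) - u       # forward difference in y
    mag = (dxp**2 + dyp**2 + eps) ** ((2.0 - p) / 2.0)
    fx = dxp / mag                         # flux components |grad u|^(p-2) grad u
    fy = dyp / mag
    # backward differences give the (adjoint) divergence
    return (fx - np.roll(fx, 1, axis=1)) + (fy - np.roll(fy, 1, axis=0))
```

A useful sanity check: for p = 2 the exponent vanishes, so the operator reduces exactly to the five-point discrete Laplacian.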

3. Framework of Proposed Scheme

The method proposed in this section is based on removing redundancy at the encoder and restoring the removed information using an inpainting method at the decoder. Redundancy removal is performed by detecting texture regions with similar statistical characteristics and dividing the image into homogeneous regions. The overall system, with encoder and decoder diagrams, is depicted in Figures 1 and 2. In the following subsections, the encoder and decoder blocks are discussed separately.

Figure 1: Block diagram of proposed image compression scheme–encoder.
Figure 2: Block diagram of proposed image compression scheme–decoder.

3.1. Design of Encoder
3.1.1. Image Analysis

The input image is subjected to image analysis by extracting edges in order to identify both structural and textural regions. The input image is divided into blocks, which are then classified as textural or structural based on their distance from edges. A block is identified as structural when it contains a large number of pixels lying at a very small distance from edges; the remaining blocks are called textural blocks.
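The classification step can be sketched as follows; since the paper does not give the exact criterion, this NumPy sketch uses the fraction of edge pixels inside each block, and the threshold `ratio` is an illustrative assumption.

```python
import numpy as np

def classify_blocks(edge_map, block=8, ratio=0.1):
    # A block is labeled "structural" when enough of its pixels lie on or
    # very near an edge, otherwise "textural" -- an illustrative reading
    # of the criterion; edge_map is a binary edge image.
    h, w = edge_map.shape
    labels = np.empty((h // block, w // block), dtype=object)
    for bi in range(h // block):
        for bj in range(w // block):
            patch = edge_map[bi*block:(bi+1)*block, bj*block:(bj+1)*block]
            labels[bi, bj] = 'structural' if patch.mean() >= ratio else 'textural'
    return labels
```

A distance transform of the edge map could replace `patch.mean()` to implement the "distance from edges" wording more literally; the block-wise scan stays the same.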

The important blocks from both textural and structural regions are selected using different algorithms [21], and the remaining blocks are removed during encoding. After the removal of the specified blocks, the corresponding regions of the original image are filled with the block's DC value or, in the case of color images, with a pure red, green, or blue component.
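The removal-and-fill step at the encoder can be sketched in NumPy as below; the block-selection criteria themselves come from [21] and are assumed given here as a boolean `keep_mask`, and the DC-value fill (the block mean) follows the grayscale case described in the text.

```python
import numpy as np

def remove_blocks(img, block=8, keep_mask=None):
    # Drop the non-selected blocks and fill each removed block with its
    # DC value (the block mean). keep_mask[i, j] == True means block
    # (i, j) is preserved; keep_mask=None removes every block.
    h, w = img.shape
    out = img.astype(float).copy()
    for bi in range(h // block):
        for bj in range(w // block):
            if keep_mask is not None and keep_mask[bi, bj]:
                continue
            ys, xs = bi * block, bj * block
            patch = out[ys:ys + block, xs:xs + block]
            patch[...] = patch.mean()          # DC value fill
    return out
```

For color images the same scan would write a pure red, green, or blue value into the removed block instead of the mean.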

3.1.2. ANN Based Image Compression

The gradient descent back propagation algorithm [13] is widely used in Artificial Neural Networks (ANNs). The feed-forward neural network architecture is capable of approximating most problems with high accuracy and generalization ability. The algorithm is based on the error correction learning rule. Error propagation consists of two passes through the different layers of the network: a forward pass and a backward pass. In the forward pass, the input vector is applied to the sensory nodes of the network and its effect propagates through the network layer by layer.

Finally, a set of outputs is produced as the actual response of the network. During the forward pass, the synaptic weights of the network are all fixed. During the backward pass, the synaptic weights are all adjusted in accordance with an error correction rule: the actual response of the network is subtracted from the desired response to produce an error signal, which is then propagated through the network against the direction of the synaptic connections. The synaptic weights are adjusted to move the actual response of the network closer to the desired response.

Procedure for Image Compression. For the experiment, a feed-forward neural network with three layers is selected: an input layer, a hidden layer with 16 neurons, and an output layer with 64 neurons. The back propagation algorithm is used for training. The 256 × 256 Lena image is used to train the network (see Algorithm 1).

Algorithm 1: Image compression and decompression using ANNs.
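The training loop above can be sketched as a tiny 64-16-64 autoencoder trained by gradient descent backpropagation with a simple adaptive learning rate; the layer sizes follow the text, while the tanh activation, the rate-adaptation rule, and the single random training vector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid = 64, 16                      # 64-16-64 layer sizes from the text
W1 = rng.standard_normal((n_hid, n_in)) * 0.1
W2 = rng.standard_normal((n_in, n_hid)) * 0.1

def forward(x):
    h = np.tanh(W1 @ x)                   # forward pass: compressed code
    return h, W2 @ h                      # linear reconstruction

def train_step(x, lr):
    global W1, W2
    h, y = forward(x)
    err = y - x                           # error signal (actual - desired)
    gW2 = np.outer(err, h)                # backward pass: gradients
    gh = (W2.T @ err) * (1.0 - h**2)      # tanh derivative
    gW1 = np.outer(gh, x)
    W2 -= lr * gW2                        # weight adjustments
    W1 -= lr * gW1
    return 0.5 * np.sum(err**2)

x = rng.standard_normal(n_in)             # one flattened 8x8 block
lr, prev = 0.01, np.inf
for _ in range(200):
    loss = train_step(x, lr)
    # adaptive learning rate: grow (capped) while the loss falls, halve otherwise
    lr = min(lr * 1.05, 0.02) if loss < prev else lr * 0.5
    prev = loss
final_loss = 0.5 * np.sum((forward(x)[1] - x) ** 2)
```

The 16-neuron hidden activation is what would be transmitted, giving a 4:1 reduction per block before entropy coding.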

3.2. Design of Decoder

After the ANN based compression at the encoder, the decompression process is carried out at the decoder. The final reconstructed image is obtained through p-Laplacian inpainting based on the DST and texture synthesis. The theory of shearlets is discussed along with the two most important modules, namely, shearlet domain p-Laplacian inpainting and texture synthesis.

3.2.1. Continuous Shearlet Transform

The continuous shearlet transform [17, 18] is a nonisotropic version of the continuous wavelet transform with superior directional sensitivity. For scale a > 0, shear s ∈ R, and translation t ∈ R², it is defined as

  SH_ψ f(a, s, t) = ⟨f, ψ_{a,s,t}⟩,  ψ_{a,s,t}(x) = a^{−3/4} ψ(A_a^{−1} S_s^{−1} (x − t)),

where A_a = diag(a, √a) is an anisotropic dilation matrix and S_s = [1 s; 0 1] is a shear matrix.

Each analyzing element is termed a shearlet. The frequency tiling of shearlets is shown in Figure 3.

Figure 3: (a) The tiling of the frequency plane. The tiling of the horizontal cone is illustrated with solid lines; the tiling of the vertical cone with dashed lines. (b) The frequency support of a shearlet satisfies parabolic scaling at each decomposition level. The figure shows only the support for ξ₁ > 0; the other half of the support, for ξ₁ < 0, is symmetrical.
3.2.2. Discrete Shearlet Transform

By sampling the continuous shearlet transform, we obtain a discrete transform, which is shown in Figure 4 (see Algorithm 2).

Algorithm 2: Construction of discrete shearlet transform.

Figure 4: Succession of Laplacian pyramid and directional filtering.
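The first stage of the succession in Figure 4, the Laplacian pyramid that separates scales before directional (shear) filtering, can be sketched as below; the use of plain decimation and nearest-neighbour upsampling without prefiltering is a simplifying assumption (the classical pyramid [22] low-pass filters first).

```python
import numpy as np

def downsample(x):
    return x[::2, ::2]

def upsample(x):
    # nearest-neighbour interpolation back to double size
    up = np.empty((x.shape[0] * 2, x.shape[1] * 2))
    up[::2, ::2] = x
    up[1::2, ::2] = x
    up[::2, 1::2] = x
    up[1::2, 1::2] = x
    return up

def laplacian_pyramid(img, levels=3):
    # Each level stores a band-pass detail; directional filtering of these
    # details would follow to complete the DST of Figure 4.
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = downsample(cur)
        pyr.append(cur - upsample(low))   # band-pass detail
        cur = low
    pyr.append(cur)                       # coarsest approximation
    return pyr
```

Because each detail is stored as the exact residual, summing the details back up the pyramid reconstructs the image perfectly.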
3.2.3. Shearlet Domain p-Laplacian Inpainting

From the pure inpainting perspective, the problem may be stated as follows. Let f be the original image, composed of a source area Φ, whose pixel values are known, and a target area Ω, representing the damaged region to be repaired or filled in, that is, inpainted. As shown in Figure 5, these are nonoverlapping areas, that is, Φ ∩ Ω = ∅, with ∂Ω the boundary between the source and target regions. The simplified architecture for image coding with inpainting is shown in Figure 5, and the pseudocode of the proposed framework with the PDE inpainting algorithm based on the p-Laplacian operator is described in Algorithm 3.

Algorithm 3: Pseudocode of proposed framework with PDEs inpainting algorithm based on p-Laplacian operator.

Figure 5: (a) Illustration of the inpainting problem. (b) The simplified architecture for image coding with inpainting.
3.2.4. Texture Synthesis

The missing texture blocks are filled in with texture from their surroundings [2]. Let the region to be filled be denoted by Ω. The lost block is filled, pixel by pixel, in raster order. Let T be a representative template touching the left of a pixel p. We find an estimate of T in the available neighborhood such that a given distance d is minimized; as per [2], d is a normalized Sum of Squared Differences (SSD) metric. Once such an estimate is found, the pixel to its immediate right is chosen as the candidate for p. For stochastic textures, the algorithm instead selects at random one of the pixels neighboring the estimate. The template can be a simple seed block of 3 × 3 pixels. Then, of all possible 3 × 3 blocks in the 8-neighborhood, the one with the minimum normalized SSD is found and the pixel to its right is copied into the current pixel of the lost block. This algorithm becomes considerably faster when using the improvements in [21–25].
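The template-matching step can be sketched in NumPy as follows; this is a minimal, deterministic reading of [2] in which normalization, border handling, and the stochastic variant are omitted, and the function name is illustrative.

```python
import numpy as np

def best_match_pixel(texture, template):
    # Slide the template over the source texture, compute the SSD at each
    # position, and return the pixel immediately to the right of the best
    # match -- the candidate for the pixel being synthesized.
    th, tw = template.shape
    h, w = texture.shape
    best, best_pos = np.inf, None
    for i in range(h - th + 1):
        for j in range(w - tw):            # leave room for the pixel to the right
            patch = texture[i:i + th, j:j + tw]
            ssd = np.sum((patch - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (i + th // 2, j + tw)
    return texture[best_pos]
```

Repeating this query pixel by pixel in raster order over the lost block reproduces the fill procedure described above.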

4. Performance Evaluation

We illustrate the performance of the proposed algorithm for image compression with image inpainting in the shearlet domain using ANNs and compare it with the image inpainting method proposed by Liu et al. [3]. The codes are written in MATLAB 2008a.

4.1. Test Conditions and Parameters

The proposed algorithm is tested with color images from the USC-SIPI and Kodak image databases. In all tests, we use shearlet-based p-Laplacian inpainting, with parameter values set separately for noiseless and noisy images.

4.2. Overall Performance

Figure 6 shows the test image Lena and the corresponding results of the proposed system. In this test, the edges extracted from the input image are shown in Figure 6(b), and the image with removed blocks (25% removal) is shown in Figure 6(c).

Figure 6: Comparison with JPEG 2000 with QP = 75. (a) Input image. (b) Edges. (c) Image with 25% of portions removed. (d) Reconstructed image after inpainting and texture synthesis. (e) Reconstructed image by JPEG 2000.

Based on the preserved blocks, the shearlet p-Laplacian inpainting gives the results in Figure 6(d). Compared with the JPEG 2000 restored image in Figure 6(e), the proposed scheme saves 36.15% of bits with QP = 75. The comparison on standard images shows up to 50% bit saving by the proposed scheme over JPEG 2000 and up to 31.61% over the edge based image inpainting method proposed by Liu et al. [3]. The bit-saving results are shown in Table 1.

Table 1: Bit-rate saving of proposed scheme compared to JPEG 2000 and edge based image inpainting method (QP = 75).

Figure 7 shows the images reconstructed by the proposed scheme and by standard H.264/AVC Intracoding. The bit-rate saving, shown in Table 2, is also noticeable, though not as large as in the comparison with JPEG 2000. The proposed scheme achieves 48.07% bit-rate saving compared to state-of-the-art H.264 Intracoding with QP = 24 for the image Kodim19, and 33.01% bit-rate saving for the image Kodim11 compared with the edge based image inpainting method.

Table 2: Bit-rate savings of proposed scheme compared to H.264/AVC Intra and edge based image inpainting method (QP = 24).
Figure 7: Comparison with H.264/AVC with QP = 24. (a) Jet (25% removal); (b) peppers (32.5% removal); (c) kodim05 (23% removal); (d) kodim02 (51% removal). The top row shows the reconstructed images by proposed scheme and the bottom row shows the reconstructed images by H.264/AVC Intracoding.

Figure 8 shows the comparison results with JPEG 2000: the first row shows the results of the proposed method and the second row the JPEG 2000 results. The bit savings of the proposed system are given in Table 1; the proposed scheme saves on average 43.54% of bits with QP = 75 for the five images shown in Figure 8. Peak Signal-to-Noise Ratio (PSNR) is also used to measure the quality of the restored images. It is defined as PSNR = 10 log₁₀(MAX² / MSE), where MAX is the maximum possible pixel value of the image and MSE is the mean squared error between the original and restored images.
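The PSNR definition above translates directly into a few lines of NumPy; the function name is illustrative.

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE); identical images give infinity.
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)
```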

Figure 8: Comparison with JPEG 2000 with QP = 75. (a) Kodim07 (32.8% removal); (b) kodim23 (53% removal); (c) kodim03 (42.8% removal); (d) kodim20 (54.2% removal); (e) kodim11 (35% removal). The first row shows the restored images by proposed algorithm and the second row shows the restored images by JPEG 2000.

Figure 9 gives objective quality comparisons between the proposed scheme, JPEG 2000, and H.264/AVC Intracoding. The objective quality of the proposed scheme outperforms the two standard compression schemes (JPEG 2000 and H.264/AVC Intra) and the edge based inpainting algorithm at both low and high bit rates. The proposed algorithm is, however, tuned for subjective quality, and PSNR is not a good measure of subjective quality, especially for the images resulting from p-Laplacian inpainting in the shearlet domain.

Figure 9: Objective quality comparison between proposed scheme, H.264/AVC Intra, and JPEG 2000 on some typical color images.

Figure 10 shows reconstructed images with the same visual quality despite a large PSNR difference. Inpainting with edges produces results that are difficult to distinguish from H.264/AVC Intra, but the inpainting method reduces the bit rate because far fewer bits are required for coding edges. The proposed algorithm saves on average 21.07% of the bit rate for images with a high percentage of texture regions (mandrill and kodim13) at QP = 24, as shown in Table 2.

Figure 10: Subjective quality comparison between the proposed scheme and H.264/AVC Intra. QP is 24 for high quality. From top to bottom: mandrill (43% removal) and kodim13 (42% removal). From left to right: incomplete image with black blocks; reconstructed image by the proposed scheme; reconstructed image by H.264/AVC Intra. Note that the proposed scheme reconstructs both highly textured images with 21.07% bit-rate saving, and that in mandrill (the first row) the eyes are reconstructed on both sides.

The recent no-reference quality assessment method proposed in [26], which detects blocking artifacts within a compressed image, is adopted to evaluate the proposed scheme against JPEG 2000 and H.264/AVC Intra. Figure 11 shows that the images reconstructed by the proposed scheme contain less blocking than those produced by JPEG 2000 and H.264/AVC Intra at similar compression ratios. The result confirms the better bit-rate reduction of the proposed method as far as visual quality is concerned.

Figure 11: Quality assessment (by measuring the blocking artifacts by the method in [26]) results that compare the proposed scheme with JPEG 2000 and H.264/AVC Intra on some typical color images.
4.3. Computational Complexity

For the experiment, an Intel 2 GHz CPU is used. The shearlet domain p-Laplacian inpainting with the assistant information module is implemented in MATLAB 2008a. According to our empirical results, the shearlet domain p-Laplacian inpainting process at the decoder needs on average 3-4 iterations to converge, and the proposed scheme needs 1–3 s to decode a 768 × 512 image at 40% removal. The time complexity of inpainting is in general proportional to the size of the removed regions.

The proposed algorithm is also simpler because region removal and assistant information generation are based on the Discrete Shearlet Transform (DST).

5. Conclusion

In this paper, we develop an ANN based image compression framework that adopts a shearlet domain inpainting technique. In the proposed algorithm, the correlated regions are identified and removed automatically at the encoder and then restored at the decoder using the inpainting scheme. The key techniques used for coding are the gradient descent back propagation algorithm with adaptive learning rate and p-Laplacian image inpainting in the shearlet domain. Experimental results show that the proposed scheme produces good results, with up to 50% and 48.07% bit savings compared with JPEG 2000 and H.264/AVC Intra, respectively.

Edge extraction can be made flexible and adaptive for compression. Finding the regions that can be eliminated remains an open problem, and solving it should lead to further increases in compression ratio and output quality.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. V. Bastani, M. S. Helfroush, and K. Kasiri, “Image compression based on spatial redundancy removal and image inpainting,” Journal of Zhejiang University: Science C, vol. 11, no. 2, pp. 92–100, 2010.
  2. A. A. Efros and T. K. Leung, “Texture synthesis by non-parametric sampling,” in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), pp. 1033–1038, September 1999.
  3. D. Liu, X. Sun, F. Wu, S. Li, and Y.-Q. Zhang, “Image compression with edge-based inpainting,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 10, pp. 1273–1286, 2007.
  4. S. D. Rane, G. Sapiro, and M. Bertalmio, “Structure and texture filling-in of missing image blocks in wireless transmission and compression applications,” IEEE Transactions on Image Processing, vol. 12, no. 3, pp. 296–303, 2003.
  5. M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, “Image inpainting,” in Proceedings of the 27th ACM International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '00), pp. 417–424, July 2000.
  6. T. F. Chan, J. Shen, and H.-M. Zhou, “Total variation wavelet inpainting,” Journal of Mathematical Imaging and Vision, vol. 25, no. 1, pp. 107–125, 2006.
  7. I. Galić, J. Weickert, M. Welk, A. Bruhn, A. Belyaev, and H.-P. Seidel, “Image compression with anisotropic diffusion,” Journal of Mathematical Imaging and Vision, vol. 31, no. 2-3, pp. 255–269, 2008.
  8. A. Bugeau, M. Bertalmío, V. Caselles, and G. Sapiro, “A comprehensive framework for image inpainting,” IEEE Transactions on Image Processing, vol. 19, no. 10, pp. 2634–2645, 2010.
  9. M. Lysaker and X.-C. Tai, “Iterative image restoration combining total variation minimization and a second-order functional,” International Journal of Computer Vision, vol. 66, no. 1, pp. 5–18, 2006.
  10. H.-Y. Zhang, Q.-C. Peng, and Y.-D. Wu, “Wavelet inpainting based on p-Laplace operator,” Acta Automatica Sinica, vol. 33, no. 5, pp. 546–549, 2007.
  11. U. A. Ignácio and C. R. Jung, “Block-based image inpainting in the wavelet domain,” The Visual Computer, vol. 23, no. 9–11, pp. 733–741, 2007.
  12. G. R. Easley, D. Labate, and F. Colonna, “Shearlet-based total variation diffusion for denoising,” IEEE Transactions on Image Processing, vol. 18, no. 2, pp. 260–268, 2009.
  13. A. Shaik, C. R. K. Reddy, and S. A. Ali, “Empirical analysis of image compression through wave transforms and NN,” International Journal of Computer Science and Information Technologies, vol. 2, pp. 924–931, 2011.
  14. R. H. Chan, Y.-W. Wen, and A. M. Yip, “A fast optimization transfer algorithm for image inpainting in wavelet domains,” IEEE Transactions on Image Processing, vol. 18, no. 7, pp. 1467–1476, 2009.
  15. M. S. Joshi, R. R. Manthalkar, and Y. V. Joshi, “Image compression using curvelet, ridgelet and wavelet transform: a comparative study,” ICGST International Journal on Graphics, Vision and Image Processing (GVIP), vol. 8, pp. 25–31, 2008.
  16. L. Granai, F. Moschetti, and P. Vandergheynst, “Ridgelet transform applied to motion compensated images,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3, pp. 381–384, April 2003.
  17. G. Kutyniok and D. Labate, “Resolution of the wavefront set using continuous shearlets,” Transactions of the American Mathematical Society, vol. 361, no. 5, pp. 2719–2754, 2009.
  18. K. Guo, D. Labate, and W.-Q. Lim, “Edge analysis and identification using the continuous shearlet transform,” Applied and Computational Harmonic Analysis, vol. 27, no. 1, pp. 24–46, 2009.
  19. G. Easley, D. Labate, and W.-Q. Lim, “Sparse directional image representations using the discrete shearlet transform,” Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25–46, 2008.
  20. S. Li, “Existence of solutions to a superlinear p-Laplacian equation,” Electronic Journal of Differential Equations, vol. 2001, no. 66, pp. 1–6, 2001.
  21. L. Luo, F. Wu, S. Li, Z. Xiong, and Z. Zhuang, “Advanced motion threading for 3D wavelet video coding,” Signal Processing: Image Communication, vol. 19, no. 7, pp. 601–616, 2004.
  22. P. J. Burt and E. H. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Transactions on Communications, vol. 31, no. 4, pp. 532–540, 1983.
  23. Ö. N. Gerek and A. E. Çetin, “A 2-D orientation-adaptive prediction filter in lifting structures for image coding,” IEEE Transactions on Image Processing, vol. 15, no. 1, pp. 106–111, 2006.
  24. C. Zhu, X. Sun, F. Wu, and H. Li, “Video coding with spatio-temporal texture synthesis and edge-based inpainting,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '08), pp. 813–816, June 2008.
  25. Y. Lu, W. Gao, and F. Wu, “Efficient background video coding with static sprite generation and arbitrary-shape spatial prediction techniques,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 5, pp. 394–405, 2003.
  26. F. Pan, X. Lin, S. Rahardja, E. P. Ong, and W. S. Lin, “Measuring blocking artifacts using edge direction information,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '04), vol. 2, pp. 1491–1494, June 2004.