Mathematical Problems in Engineering
Volume 2015 | Article ID 965690 | 10 pages | https://doi.org/10.1155/2015/965690

Research Article | Open Access
Special Issue: Inverse Problems: Theory and Application to Science and Engineering 2015

Joint Motion Deblurring and Superresolution from Single Blurry Image

Academic Editor: Herb Kunze
Received: 03 Dec 2014
Revised: 25 Jul 2015
Accepted: 29 Jul 2015
Published: 19 Oct 2015

Abstract

Superresolution from a motion blurred image remains a challenging task. The conventional approach, which preprocesses the blurry low resolution (LR) image with a deblurring algorithm and then applies a superresolution algorithm, has a fundamental limitation: high frequency texture is unavoidably lost during deblurring, and this loss restricts the performance of the subsequent superresolution step. This paper presents a novel technique that performs motion deblurring and superresolution jointly from one single blurry image. The basic idea is to regularize the ill-posed reconstruction problem using an edge-preserving gradient prior and a sparse kernel prior. The method derives from an inverse problem formulation solved with an efficient optimization scheme that alternates between blur kernel estimation and superresolving until convergence. Furthermore, this paper proposes a simple and efficient refinement formulation to remove artifacts and render better deblurred high resolution (HR) images. The improvements brought by the proposed combined framework are demonstrated on both simulated and real-life images. Quantitative and qualitative results on challenging examples show that the proposed method outperforms existing state-of-the-art methods and effectively eliminates motion blur and artifacts in the superresolved image.

1. Introduction

Image superresolution is a technique designed to improve the resolution of images obtained by LR sensors while avoiding blurring and artifacts. It is generally employed to exceed the inherent limitations of LR imaging devices (e.g., mobile smartphones or portable digital cameras) and to make better use of the growing capability of HR displays (e.g., high-definition television (HDTV)), which are now relatively inexpensive [1–3].

Approaches to superresolution can be divided into three main categories: interpolation-based magnification methods; reconstruction-based methods, which use different observations of the same scene captured with subpixel displacements to produce a superresolved image; and learning-based methods [4]. Methods of the first type (such as New Edge Directed Interpolation (NEDI) [5] and Directional Cubic Convolution Interpolation (DCCI) [6]) are fast and simple; however, they are ineffective for blurry and noisy inputs. The second type (e.g., maximum a posteriori (MAP) [7], projection onto convex sets (POCS) [8]) relies on the accuracy of the required subpixel registration, which is even more difficult in the superresolution setting. The third type, known as single image superresolution, is based on machine learning techniques and attempts to overcome some of the limitations mentioned above. Yang et al. [9] use sparse representation to train a coupled dictionary of HR/LR patch pairs. The missing high-frequency information (i.e., edge details) is reconstructed from a combination of patches chosen from the pretrained prototype structures (i.e., dictionary atoms) [4], and the performance of this process depends heavily on the content of the training dataset.

Although much progress has been made in the past decade, superresolving real-life images remains an arduous task. The majority of existing superresolution algorithms assume the blur kernel to be fixed and known, typically modeled as a symmetric Gaussian function [10, 11]. However, in practice the motion of objects and cameras can be arbitrary, and the acquisition may be contaminated with noise and motion blurring. Only a limited number of works have been dedicated to superresolving motion blurred LR images. In general, the motion blurred LR image is first preprocessed with a deblurring algorithm, followed by a superresolving algorithm. This framework was shown to be effective by Lu and Wu [12]. While it works well for slight blurring, it has difficulty with complicated blur kernels and relies heavily on a heuristic overcomplete dictionary, so it is often unstable on real-life images.

This paper proposes a joint blind deblurring and superresolution algorithm that operates on a single image and combines a gradient prior and a motion blur kernel prior in a coherent framework. Multiple constraints are adopted to obtain an accurate motion blur kernel and a satisfactory HR image. A guided filter based refinement is then applied to further improve the HR image quality. Various results show that the proposed method effectively removes motion blur and preserves complex image structures and fine edge details while suppressing ringing artifacts. On average, the proposed method outperforms other state-of-the-art deblurring and superresolution methods.

The rest of the paper is organized as follows. Section 2 introduces an observation model for single image superresolution reconstruction and analyzes the statistical properties of real-life images. The proposed algorithm is described in detail in Section 3. Experimental results and comparisons are presented in Section 4. Section 5 concludes the paper.

2. The Proposed Framework

Motion blur caused by camera shake has been a prime cause of poor image quality in real applications, particularly in portable digital imaging. Generally, given a blurry input image $y$, the imaging model is expressed as

$y = D(k \otimes x) + n$,   (1)

where $x$ is the unknown image to be estimated, $D$ (downsampling) and $k$ (the blur kernel, applied through the convolution $\otimes$) are degrading operators, and $n$ is the noise effect. When $D$ is the identity and $k$ is a motion blurring operator, the restoration process becomes deblurring. The goal of motion deblurring is to deduce both $x$ and $k$ from the single blurry image $y$. The solution to (1) with an $\ell_2$-norm fidelity constraint, that is, $\min_{x,k}\|y - k \otimes x\|_2^2$, is generally adopted. When $D$ and $k$ represent downsampling and blurring, respectively, the restoration process becomes single image superresolution. The HR image and motion blur kernel are given by $(\hat{x}, \hat{k}) = \arg\min_{x,k}\|y - D(k \otimes x)\|_2^2$. The uniform downsampling operator $D$ is assumed to be known in advance.
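As a concrete illustration of (1), the following sketch simulates the degradation under simple assumptions (a shift-invariant kernel, integer-factor decimation as the downsampling operator $D$, and additive Gaussian noise); the function name and defaults are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(x, k, scale=2, noise_sigma=0.01):
    """Simulate the observation model y = D(k * x) + n of (1).

    x           : HR grayscale image, float array in [0, 1]
    k           : motion blur kernel, nonnegative, sums to 1
    scale       : integer downsampling factor (operator D)
    noise_sigma : standard deviation of the additive Gaussian noise n
    """
    blurred = fftconvolve(x, k, mode="same")          # k * x (convolution)
    decimated = blurred[::scale, ::scale]             # uniform downsampling D
    noise = noise_sigma * np.random.randn(*decimated.shape)
    return np.clip(decimated + noise, 0.0, 1.0)       # observed blurry LR image y
```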

Mathematically, both single image superresolution and deblurring are severely ill-posed inverse problems, and the two similar problems can be incorporated into a single optimization framework. This paper proposes a joint motion deblurring and superresolution method to estimate both the motion blur kernel and an HR image from a single blurry image. To solve this difficult inverse problem, the proposed method implements a regularized least-square optimization framework as follows:

$(\hat{x}, \hat{k}) = \arg\min_{x,k}\|y - D(k \otimes x)\|_2^2 + \lambda_1 \rho(\nabla x) + \lambda_2 \phi(k)$,   (2)

where $\|y - D(k \otimes x)\|_2^2$ is the data fitting term and $\rho(\nabla x)$ and $\phi(k)$ are regularization terms for the latent image gradients and the motion blur kernel, respectively; $\lambda_1$ and $\lambda_2$ are the weights controlling the extent of regularization.

Empirically, regularization has been described from both algebraic and statistical perspectives. Existing methods explore effective priors to regularize the ill-conditioned problem, including total variation (TV), image edge transparency, and sparse approximation. However, these prior models cannot preserve the sharp edges of real-life images. As shown in Figure 1, the logarithmic image gradient histogram of four natural images follows a heavy-tailed distribution: the image primarily contains small or zero gradients, while a few gradients have large magnitude. Chen et al. [13] and Cho et al. [14] use a parametric model to approximate this heavy-tailed prior of natural images and to estimate the blur kernel.
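The heavy-tailed behavior in Figure 1 can be reproduced with a few lines of code; the sketch below computes the empirical log-density of finite-difference gradients of a grayscale image in [0, 1] and is purely illustrative.

```python
import numpy as np

def log_gradient_histogram(x, bins=101):
    """Empirical log-histogram of image gradients, which for natural
    images exhibits the heavy-tailed shape shown in Figure 1."""
    gx = np.diff(x, axis=1).ravel()                    # horizontal gradients
    gy = np.diff(x, axis=0).ravel()                    # vertical gradients
    grads = np.concatenate([gx, gy])
    hist, edges = np.histogram(grads, bins=bins, range=(-1.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.log(hist + 1e-12)               # log-density vs. gradient value
```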

The heavy-tailed prior imposed on natural image gradients can be regarded as an approximation of the $\ell_0$-norm [15]. Thus the second term in (2) uses $\ell_0$-norm regularization on the image gradients, which preserves the sparsity of natural image gradients. The image gradient for each pixel is calculated along the horizontal and vertical directions, $\nabla x = (\partial_h x, \partial_v x)$. The regularizer is defined as

$\rho(\nabla x) = \|\nabla x\|_0$,   (3)

where $\|\cdot\|_0$ is the $\ell_0$-norm that counts the number of nonzero values of $\nabla x$. Moreover, the last term in (2) is a typical $\ell_2$-norm stable regularization for the motion blur kernel:

$\phi(k) = \|k\|_2^2$.   (4)
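As a concrete reading of (3) and (4), the following sketch counts nonzero gradient locations and evaluates the squared $\ell_2$-norm of the kernel; the use of forward differences and the tolerance `eps` are assumptions made for the illustration.

```python
import numpy as np

def grad_l0(x, eps=1e-6):
    """ell_0 gradient regularizer of (3): number of pixels with a nonzero
    horizontal or vertical gradient (eps tolerates floating-point round-off)."""
    gx = np.diff(x, axis=1, append=x[:, -1:])     # forward differences, horizontal
    gy = np.diff(x, axis=0, append=x[-1:, :])     # forward differences, vertical
    return int(np.count_nonzero(gx**2 + gy**2 > eps))

def kernel_l2(k):
    """ell_2 kernel regularizer of (4)."""
    return float(np.sum(k**2))
```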

3. Optimization

Figure 2 shows the flow diagram of the proposed joint motion deblurring and superresolution method, given an initial blurry image. First, the proposed method initializes the blur kernel using the initialization method presented in [16] and then iteratively estimates the latent image and refines the blur kernel until convergence. In the optimization process, the kernel estimation error converges to a constant value. Finally, the HR image is constructed by deconvolution and artifact removal with the estimated blur kernel.
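The alternation in Figure 2 can be summarized by the following sketch; the two update routines are passed in as callables and stand in for the subproblems of Sections 3.2 and 3.3, so this is an outline of the control flow rather than the authors' code.

```python
import numpy as np

def alternate(y, k_init, update_image, update_kernel, n_iter=7, tol=1e-4):
    """Outline of the alternating optimization in Figure 2 (control flow only).
    `update_image` and `update_kernel` are placeholder callables for the
    image and kernel subproblems."""
    k = k_init
    x = None
    for _ in range(n_iter):
        x = update_image(y, k)            # latent image update, cf. (5)
        k_new = update_kernel(y, x)       # blur kernel update, cf. (6)
        # stop once the kernel estimate stops changing (error levels off)
        if np.linalg.norm(k_new - k) < tol * np.linalg.norm(k_new):
            k = k_new
            break
        k = k_new
    return x, k
```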

3.1. Problem Formulation

Equation (2) is a nonconvex minimization problem because of the $\ell_0$-norm regularization, which means that many existing convex optimization methods are not directly applicable. A conjugate gradient (CG) method could instead be used; however, typical CG methods require expensive computation due to the many unknown quantities and converge poorly. In order to find the solution to (2) numerically, an alternating direction method (ADM) [16] is adopted, which can be carried out efficiently in the frequency domain using fast Fourier transforms (FFTs). The solution is found by alternating between the following two subproblems:

$x^{t+1} = \arg\min_{x}\|y - D(k^{t} \otimes x)\|_2^2 + \lambda_1 \|\nabla x\|_0$,   (5)

$k^{t+1} = \arg\min_{k}\|y - D(k \otimes x^{t+1})\|_2^2 + \lambda_2 \|k\|_2^2$.   (6)

To avoid local minima, this method implements a progressive coarse-to-fine scheme. First, an initial blur kernel at full resolution is specified. At the coarsest level, downsampled versions of the blurry image and of this kernel are used as an initialization. Then, the latent image is estimated at the coarse level while the kernel is held fixed. The image and kernel estimates are upsampled by bicubic interpolation and act as the input of the next finer level. In this paper, seven iterations were performed at each level.
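One possible construction of the coarse-to-fine pyramid is sketched below, assuming `scipy.ndimage.zoom` with cubic interpolation as a stand-in for the bicubic resampling; the function name and level count are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

def build_pyramid(y, k, n_levels=4, ratio=0.5):
    """Coarse-to-fine initialization: downsampled copies of the blurry image
    and of the full-resolution initial kernel, coarsest level first."""
    images, kernels = [], []
    for lvl in reversed(range(n_levels)):
        s = ratio ** lvl
        y_l = zoom(y, s, order=3)                     # cubic resampling of the image
        k_l = zoom(k, s, order=3)                     # matching resampling of the kernel
        k_l = np.clip(k_l, 0.0, None)
        k_l /= max(k_l.sum(), 1e-12)                  # keep the kernel normalized
        images.append(y_l)
        kernels.append(k_l)
    return images, kernels                            # processed from coarse to fine
```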

3.2. Estimating $x$ with Fixed $k$

Due to the incorporation of the $\ell_0$-norm term, (5) is commonly regarded as an intractable discrete optimization problem, and traditional discrete optimization methods cannot be used to solve it directly. Based on the idea of ADM, the proposed method adopts a half-quadratic splitting minimization method [17, 18]. Auxiliary variables $u$ and $v$ are introduced, corresponding to $\partial_h x$ and $\partial_v x$, respectively. Then, defining $\beta$ as an automatically adapting parameter that controls the similarity between the auxiliary variables and their corresponding gradients, (5) may be rewritten as

$\min_{x,u,v}\|y - D(k \otimes x)\|_2^2 + \lambda_1 \|(u, v)\|_0 + \beta\left(\|\partial_h x - u\|_2^2 + \|\partial_v x - v\|_2^2\right)$,   (7)

where

$\|(u, v)\|_0 = \#\{p : |u_p| + |v_p| \neq 0\}$   (8)

counts the pixels at which either auxiliary variable is nonzero.

The solution of (7) approaches that of (5) when $\beta$ is very large. With this formulation, the proposed method fixes the blur kernel and optimizes (7) over $x$, $u$, and $v$ using the half-quadratic minimization solver (see [18] for details). Equation (7) is solved by alternately minimizing over $(u, v)$ and $x$, as shown in (9) and (10). In each step, the solution of $(u, v)$ is obtained by solving

$\min_{u,v}\, \lambda_1 \|(u, v)\|_0 + \beta\left(\|\partial_h x - u\|_2^2 + \|\partial_v x - v\|_2^2\right)$.   (9)

The objective function in (10) is quadratic and thus has a global minimum that can be found even with gradient descent; it is solved by differentiating (10) and setting the partial derivative to zero. Given $(u, v)$, the objective function for $x$ is

$\min_{x}\|y - D(k \otimes x)\|_2^2 + \beta\left(\|\partial_h x - u\|_2^2 + \|\partial_v x - v\|_2^2\right)$.   (10)

The closed-form solutions for (9) and (10) are obtained based on the half-quadratic splitting minimization method and Parseval's theorem:

$x = \mathcal{F}^{-1}\left(\dfrac{\overline{\mathcal{F}(k)} \circ \mathcal{F}(y) + \beta\left(\overline{\mathcal{F}(\partial_h)} \circ \mathcal{F}(u) + \overline{\mathcal{F}(\partial_v)} \circ \mathcal{F}(v)\right)}{\overline{\mathcal{F}(k)} \circ \mathcal{F}(k) + \beta\left(\overline{\mathcal{F}(\partial_h)} \circ \mathcal{F}(\partial_h) + \overline{\mathcal{F}(\partial_v)} \circ \mathcal{F}(\partial_v)\right)}\right)$,   (11)

$(u, v) = \begin{cases} (\partial_h x, \partial_v x), & (\partial_h x)^2 + (\partial_v x)^2 \geq \lambda_1/\beta, \\ (0, 0), & \text{otherwise,} \end{cases}$   (12)

where $\mathcal{F}(\cdot)$ is the FFT operator and $\mathcal{F}^{-1}(\cdot)$ denotes the inverse FFT; $\overline{\mathcal{F}(\cdot)}$ is the complex conjugate; $\circ$ is element-wise multiplication; and $\partial_h$ and $\partial_v$ are the horizontal and vertical derivative filters. Element-wise operators are used to speed up the process in the Fourier domain.
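The two closed-form updates can be sketched as follows. The snippet assumes a single-image deblurring setting in which the downsampling operator is omitted, uses simple forward-difference filters for $\partial_h$ and $\partial_v$, and takes the observation itself as the current estimate on the first pass; the names `psf2otf` and `update_x_uv` are illustrative, not the authors' implementation.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a small filter to `shape` and shift its center to the origin,
    so multiplication in the Fourier domain equals circular convolution."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    otf = np.roll(otf, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(otf)

def update_x_uv(y, k, beta, lam1):
    """One half-quadratic pass: hard-threshold the gradients (closed form of (12))
    and solve the quadratic x-subproblem (closed form of (11)) with FFTs."""
    dh = np.array([[1.0, -1.0]])
    dv = np.array([[1.0], [-1.0]])
    Fk, Fdh, Fdv = psf2otf(k, y.shape), psf2otf(dh, y.shape), psf2otf(dv, y.shape)

    # gradients of the current estimate (the observation on the first pass)
    gh = np.real(np.fft.ifft2(Fdh * np.fft.fft2(y)))
    gv = np.real(np.fft.ifft2(Fdv * np.fft.fft2(y)))

    # (u, v) step: keep a gradient only where its energy exceeds lam1 / beta
    mask = (gh**2 + gv**2) >= lam1 / beta
    u, v = gh * mask, gv * mask

    # x step: Wiener-like closed form obtained via Parseval's theorem
    num = np.conj(Fk) * np.fft.fft2(y) + beta * (np.conj(Fdh) * np.fft.fft2(u)
                                                 + np.conj(Fdv) * np.fft.fft2(v))
    den = np.abs(Fk)**2 + beta * (np.abs(Fdh)**2 + np.abs(Fdv)**2)
    return np.real(np.fft.ifft2(num / den))
```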

3.3. Estimating $k$ with Fixed $x$

In the kernel estimation step, the proposed method fixes $x$ and updates $k$ using the CG method, accelerating the computation with FFTs. This is a least-square problem, and it can be solved by simply setting the partial derivative to zero. Since the convolution operators involved correspond to block circulant matrices, the estimation of the blur kernel can be represented by (13) according to Parseval's theorem:

$k = \mathcal{F}^{-1}\left(\dfrac{\overline{\mathcal{F}(x)} \circ \mathcal{F}(y)}{\overline{\mathcal{F}(x)} \circ \mathcal{F}(x) + \lambda_2}\right)$.   (13)
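A minimal sketch of a Fourier-domain kernel update in the spirit of (13) is given below; it assumes that $x$ and $y$ share the same size, that boundary conditions are circular, and that downsampling has already been accounted for, so it should be read as an illustration rather than the authors' implementation.

```python
import numpy as np

def estimate_kernel_fft(y, x, lam2, ksize):
    """Regularized least-squares kernel update in the Fourier domain, cf. (13)."""
    Fx, Fy = np.fft.fft2(x), np.fft.fft2(y)
    Fk = np.conj(Fx) * Fy / (np.abs(Fx)**2 + lam2)     # Wiener-like closed form
    k_full = np.real(np.fft.ifft2(Fk))
    # the physical kernel support is small: crop a ksize x ksize window at the origin
    k = np.roll(k_full, (ksize // 2, ksize // 2), axis=(0, 1))[:ksize, :ksize]
    return k
```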

The motion blur kernel, which describes the trace of the sensor's motion during exposure, contains only nonnegative components. Therefore, this paper sets all element values below 1/40 of the maximum value to 0 and normalizes the kernel so that its sum is 1. The sparsity of the refined blur kernel is thus preserved in its primary structures.
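The kernel refinement just described translates directly into a few lines of code; this is a sketch assuming the kernel is stored as a NumPy array.

```python
import numpy as np

def refine_kernel(k, frac=1.0 / 40.0):
    """Kernel refinement of Section 3.3: clamp negatives, zero out entries
    below 1/40 of the maximum, and renormalize so the kernel sums to 1."""
    k = np.clip(k, 0.0, None)                 # a motion trace has no negative weights
    k[k < frac * k.max()] = 0.0               # suppress weak, noisy responses
    s = k.sum()
    return k / s if s > 0 else k
```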

3.4. Artifact Removal

In the previous section, the proposed method obtains a satisfactory HR image through (11) and (13), as shown in Figure 3(b). Although the motion blur kernel can be estimated accurately, the proposed formulation with the $\ell_0$ gradient prior produces oversmoothed results and eliminates many image details. The fast deconvolution method [19] with hyper-Laplacian priors effectively preserves details; however, it is subject to ringing artifacts, as shown in Figure 3(c).

In view of these facts, an artifact removal method is proposed (see Figure 4). First, the motion blur kernel and the HR image are estimated using the proposed method. Second, another HR image is estimated using the method of Krishnan and Fergus [19] with the estimated kernel, and a difference map between these two HR estimates is computed. Ringing artifacts are then removed from the difference map with a guided filter (GF) [20]. Finally, the filtered detail components are added back to the HR estimate, and details are further enhanced using the geometric locally adaptive sharpening (GLAS) enhancement method [21]. This artifact removal step leads to superior visual quality (see Figure 3(d)): it not only removes ringing, but also reconstructs sharper image edges. The result in Figure 3(d) contains clearer details and demonstrates that the framework works well on images of natural scenes.
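As an illustration of this pipeline, the sketch below combines a plain guided filter implementation (following He et al. [20]) with the difference-map idea; the function names are illustrative, the inputs are assumed to be the two HR estimates described above, and the final GLAS sharpening step [21] is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Guided filter [20]: edge-preserving smoothing of p using guide image I."""
    box = lambda a: uniform_filter(a, size=2 * r + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                    # local linear coefficients
    b = mean_p - a * mean_I
    return box(a) * I + box(b)

def remove_artifacts(x_l0, x_hyperlap):
    """Artifact removal of Figure 4 (sketch): take the detail difference between
    the hyper-Laplacian result [19] and the L0 result, suppress ringing in it
    with the guided filter, and add the cleaned details back."""
    diff = x_hyperlap - x_l0                      # detail + ringing difference map
    details = guided_filter(x_l0, diff)           # ringing suppressed, edges kept
    return np.clip(x_l0 + details, 0.0, 1.0)
```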

4. Experimental Results

This section compares the proposed single image superresolution method with state-of-the-art algorithms: the DCCI method [6], the Fast Image Upsampling (FIU) method [22], and the Sparse Coding Superresolution (ScSR) method [9]. Since these three methods have no motion deblurring capability, to compare them fairly with the proposed method, the Alternating Maximum a Posteriori (AMAP) method [23] is first employed to estimate the blur kernel and deblur the LR images. The source code of the compared methods is available online (DCCI: http://www.mathworks.com/matlabcentral/fileexchange/38570-image-zooming-using-directional-cubic-convolution-interpolation; FIU: http://www.cse.cuhk.edu.hk/leojia/projects/upsampling/index.html; ScSR: http://www.ifp.illinois.edu/~jyang29/; AMAP: http://zoi.utia.cas.cz/deconv_sparsegrad). The effectiveness of the proposed framework is demonstrated on both synthetically blurred and real-life images.

4.1. Synthetically Blurred Images

Four well-known images, “Lena,” “Parrots,” “Fruits,” and “Lighthouse,” as shown in Figures 5(a)–5(d), are used for comparison. All original images have the same size of 512 × 512. To evaluate the feasibility of the proposed method, each degraded LR image is generated by first applying an arbitrary 21 × 21 motion blur kernel to the original image and then downsampling by a factor of 2. Three arbitrary 21 × 21 motion blur kernels (kernel-1, kernel-2, and kernel-3) are shown in Figures 5(e)–5(g). In the proposed superresolution method, the regularization weights $\lambda_1$ and $\lambda_2$ and the penalty parameter $\beta$ are fixed for all experiments. For the three compared methods, the default parameter settings from the original papers are used. To handle color images, this paper first estimates the motion blur kernel and the HR gray image from the luminance channel, as described above. Then, each color channel is superresolved separately, and the final superresolution result is obtained by combining all color channels.
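The color handling described above can be sketched as follows; `estimate_kernel` and `superresolve_channel` are placeholders for the procedures of Section 3 and are passed in as callables, so this is an outline rather than the authors' implementation.

```python
import numpy as np

def superresolve_color(img_rgb, estimate_kernel, superresolve_channel):
    """Estimate the blur kernel on the luminance channel, then superresolve
    each color channel separately with that kernel and recombine them."""
    luma = (0.299 * img_rgb[..., 0]
            + 0.587 * img_rgb[..., 1]
            + 0.114 * img_rgb[..., 2])                # BT.601 luminance
    k = estimate_kernel(luma)                         # kernel from luminance only
    channels = [superresolve_channel(img_rgb[..., c], k) for c in range(3)]
    return np.stack(channels, axis=-1)
```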

First, our comparison employed the Peak Signal to Noise Ratio (PSNR) and the structural similarity (SSIM) [24] index for image quality assessment (IQA) [25]. Higher PSNR and SSIM indicate better visual quality and greater structural similarity between the recovered image and the original image. For color images, this paper reports the PSNR and SSIM measures for the luminance channel only. In Figure 6 and Table 1, the algorithms are evaluated on four different images blurred by three different kernels. The PSNR results of the four test images with the different kernels are shown in Figures 6(a)–6(c), respectively. The proposed method achieves a higher PSNR in almost every case, with a 1 dB–5 dB improvement over the compared methods and an average gain of 3 dB over the benchmark AMAP + FIU method on all four images (up to 5 dB on the “Parrots” image). The SSIM results of the compared methods on the four test images are listed in Table 1, where the proposed method again achieves the highest SSIM index. The average gains in SSIM over the three competing methods are 0.075, 0.12, and 0.102 in the case of kernel-1, respectively. These results support the idea that the proposed method reduces motion deblurring error (i.e., error due to inaccurate motion blur kernel estimation) while restoring detail components in the reconstructed HR images. In general, our method gives the best performance in terms of both PSNR and SSIM.


Table 1: SSIM results of the compared methods on the four test images blurred by kernel-1, kernel-2, and kernel-3.

Kernels     Method          “Lena”   “Parrots”   “Fruits”   “Lighthouse”   Average

Kernel-1    AMAP + DCCI     0.783    0.731       0.779      0.642          0.734
            AMAP + FIU      0.719    0.704       0.730      0.603          0.689
            AMAP + ScSR     0.771    0.716       0.740      0.602          0.707
            Proposed        0.842    0.879       0.803      0.711          0.809

Kernel-2    AMAP + DCCI     0.685    0.729       0.765      0.639          0.705
            AMAP + FIU      0.668    0.708       0.715      0.619          0.678
            AMAP + ScSR     0.713    0.727       0.742      0.625          0.702
            Proposed        0.833    0.896       0.876      0.824          0.857

Kernel-3    AMAP + DCCI     0.753    0.736       0.692      0.697          0.720
            AMAP + FIU      0.740    0.734       0.627      0.629          0.683
            AMAP + ScSR     0.799    0.755       0.682      0.653          0.722
            Proposed        0.835    0.878       0.842      0.831          0.847
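For completeness, the sketch below shows how PSNR and SSIM can be computed on the luminance channel with off-the-shelf routines from scikit-image; these metric functions are standard library calls and are not part of the proposed method.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def luminance(img_rgb):
    """BT.601 luminance, since the IQA scores are reported on this channel."""
    return 0.299 * img_rgb[..., 0] + 0.587 * img_rgb[..., 1] + 0.114 * img_rgb[..., 2]

def evaluate(reference_rgb, reconstructed_rgb):
    """PSNR and SSIM of the reconstructed HR image against the original (both in [0, 1])."""
    ref, rec = luminance(reference_rgb), luminance(reconstructed_rgb)
    psnr = peak_signal_noise_ratio(ref, rec, data_range=1.0)
    ssim = structural_similarity(ref, rec, data_range=1.0)
    return psnr, ssim
```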

Figure 7 gives a qualitative evaluation on the “Lena” image. The synthesized blurred LR image with the corresponding blur kernel is shown in Figure 7(a). The results obtained with the AMAP + DCCI method, the AMAP + FIU method, the AMAP + ScSR method, and the proposed method are shown in Figures 7(b)–7(e), respectively. The bottom of each result offers a close-up of a portion of the “Lena” image. As shown, the competing methods do not perform well visually on this image. The AMAP + FIU result appears sharper, but at the cost of more severe ringing artifacts, while the AMAP + DCCI and AMAP + ScSR results are oversmoothed and also exhibit ringing. In contrast, the proposed method generates a more satisfactory superresolved image and preserves edge structures more effectively, giving the sharpest HR image with abundant, properly reconstructed details. The key to successful reconstruction lies in the accuracy of kernel estimation and the strategy for artifact removal. Overall, the proposed superresolution method produces less ringing and less blur than the other methods.

Furthermore, Figure 8 shows more motion deblurring and superresolution results of all methods on other test images with different motion blur kernels. The LR “Parrots” and “Lighthouse” images are degraded by kernel-2 and kernel-3, respectively, as shown in Figure 8(a). The results produced by the AMAP + DCCI method, the AMAP + FIU method, the AMAP + ScSR method, and the proposed method are presented in Figures 8(b)–8(e), respectively. The bottom of each result presents a close-up of a portion of the “Parrots” and “Lighthouse” images. The proposed method recovers almost all of the edge structures and produces less undesired blur and ringing; the improvements are particularly obvious around the parrot's eye in the “Parrots” image and the fence in the “Lighthouse” image. Although the other methods are also designed for edge preservation, they largely fail here because the preceding deblurring step destroys structures, introduces unnatural colors, and brings ringing artifacts (i.e., errors due to inaccurate motion blur kernel estimation and an unsuitable deconvolution strategy). As Figures 7 and 8 show, the images reconstructed by these two-stage methods lose a large amount of detail, whereas the proposed method, which performs deblurring and superresolution simultaneously from the single blurred image, still yields satisfactory results.

4.2. Real Data with Unknown Blur

In order to demonstrate that the single image superresolution method works well in real-life circumstances, photographs were taken with a 1080p digital smartphone camera, and the proposed superresolution method was applied to them. Figure 9(a) shows a real captured image that exhibits motion blur induced by camera shake. The proposed method is once again compared with the other three methods; a comparison of the reconstructed details in the highlighted area is shown in Figures 9(b)–9(e). The proposed method yields a visually satisfying HR result (e.g., around the pavilion) with negligible artifacts. Without requiring customized hardware, it achieves at least competitive results (clear texture, sharp edges, and natural color), being sharper and containing fewer artifacts, and is therefore suitable for real-life circumstances. In the context of portable digital imaging, clear texture and fast processing are required or desired; as mentioned in Section 3, the ADM implementation can be carried out with FFTs and accelerated by GPU processing.

5. Conclusion

When facing superresolution problems in motion blurred photographs, one of the most convenient solutions is to use a two-stage separation process. However, such processes have not been well addressed in the literature and remain complicated. This paper proposes a framework that combines motion deblurring and superresolution from a single motion blurred image. Starting from a precise model of the degradation process, the proposed method defines an optimization problem with natural image and motion blur regularization terms. It then employs an effective optimization strategy based on ADM, which uses FFTs and ensures that each subproblem has a closed-form solution in the frequency domain. In addition, this paper develops a simple artifact removal method that reduces ringing artifacts and effectively sharpens edge structures.

Experimental results show that the proposed method produces HR images of high visual quality on the test images, superior to state-of-the-art methods, and its effectiveness is further demonstrated using IQA indexes. These features make the proposed superresolution method considerably more applicable and productive than previous approaches, as shown in the experiments. Future work will focus on better optimization methods and extensions of the proposed superresolution method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

1. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21–36, 2003.
2. F. Xu, T. Fan, C. Huang, X. Wang, and L. Xu, “Block-based MAP superresolution using feature-driven prior model,” Mathematical Problems in Engineering, vol. 2014, Article ID 508357, 14 pages, 2014.
3. W. Dong, L. Zhang, G. Shi, and X. Wu, “Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization,” IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 1838–1857, 2011.
4. T. Goto, Y. Kawamoto, Y. Sakuta, A. Tsutsui, and M. Sakurai, “Learning-based super-resolution image reconstruction on multi-core processor,” IEEE Transactions on Consumer Electronics, vol. 58, no. 3, pp. 941–946, 2012.
5. X. Li and M. T. Orchard, “New edge-directed interpolation,” IEEE Transactions on Image Processing, vol. 10, no. 10, pp. 1521–1527, 2001.
6. D. Zhou, X. Shen, and W. Dong, “Image zooming using directional cubic convolution interpolation,” IET Image Processing, vol. 6, no. 6, pp. 627–644, 2012.
7. S. P. Belekos, N. P. Galatsanos, and A. K. Katsaggelos, “Maximum a posteriori video super-resolution using a new multichannel image prior,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1451–1464, 2010.
8. A. J. Patti and Y. Altunbasak, “Artifact reduction for POCS-based super resolution with edge adaptive regularization and higher-order interpolants,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '98), Chicago, Ill, USA, 1998.
9. J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
10. C. Liu and D. Sun, “On Bayesian adaptive video super resolution,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 2, pp. 346–360, 2014.
11. A. H. Yousef, J. Li, and M. Karim, “On the visual quality enhancement of super-resolution images,” in Applications of Digital Image Processing XXXIV, vol. 8135 of Proceedings of SPIE, 81350Z, September 2011.
12. J. Lu and B. Wu, “Single-image super-resolution with joint-optimization of TV regularization and sparse representation,” Optik—International Journal for Light and Electron Optics, vol. 125, no. 11, pp. 2497–2504, 2014.
13. X. Chen, X. He, J. Yang, and Q. Wu, “An effective document image deblurring algorithm,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 369–376, Providence, RI, USA, June 2011.
14. H. Cho, J. Wang, and S. Lee, “Text image deblurring using text-specific properties,” in Proceedings of the European Conference on Computer Vision (ECCV '12), Firenze, Italy, 2012.
15. L. Xu, S. Zheng, and J. Jia, “Unnatural L0 sparse representation for natural image deblurring,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 1107–1114, Portland, Ore, USA, June 2013.
16. S. Cho and S. Lee, “Fast motion deblurring,” ACM Transactions on Graphics, vol. 28, no. 5, pp. 145–153, 2009.
17. J. Pan and Z. Su, “Fast L0-regularized kernel estimation for robust motion deblurring,” IEEE Signal Processing Letters, vol. 20, no. 9, pp. 841–844, 2013.
18. L. Xu, C. Lu, Y. Xu, and J. Jia, “Image smoothing via L0 gradient minimization,” ACM Transactions on Graphics, vol. 30, no. 6, article 174, 2011.
19. D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-Laplacian priors,” in Proceedings of the 24th Annual Conference on Neural Information Processing Systems, pp. 1033–1041, Vancouver, Canada, 2009.
20. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
21. X. Zhu and P. Milanfar, “Restoration for weakly blurred and strongly noisy images,” in Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV '11), pp. 103–109, IEEE, Kona, Hawaii, USA, January 2011.
22. Q. Shan, Z. Li, J. Jia, and C.-K. Tang, “Fast image/video upsampling,” ACM Transactions on Graphics, vol. 27, no. 5, article 153, 2008.
23. J. Kotera, F. Sroubek, and P. Milanfar, “Blind deconvolution using alternating maximum a posteriori estimation with heavy-tailed priors,” in Proceedings of the 15th International Conference on Computer Analysis of Images and Patterns, pp. 59–66, York, UK, 2013.
24. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
25. X. Gao, W. Lu, D. Tao, and X. Li, “Image quality assessment based on multiscale geometric analysis,” IEEE Transactions on Image Processing, vol. 18, no. 7, pp. 1409–1423, 2009.

Copyright © 2015 Linyang He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

