Mathematical Problems in Engineering / 2021 / Article ID 9944385 / https://doi.org/10.1155/2021/9944385

Research Article | Open Access

Yingying Xu, Jianhua Li, Haifeng Song, Lei Du, "Single-Image Super-Resolution Using Panchromatic Gradient Prior and Variational Model", Mathematical Problems in Engineering, vol. 2021, Article ID 9944385, 11 pages, 2021.

Single-Image Super-Resolution Using Panchromatic Gradient Prior and Variational Model

Academic Editor: Taseer Muhammad
Received: 06 Mar 2021
Revised: 12 Apr 2021
Accepted: 25 Apr 2021
Published: 03 May 2021

Abstract

Single-image super-resolution (SISR) is a resolution enhancement technique and is known to be an ill-posed problem. Motivated by the idea of pan-sharpening, we propose a novel variational model for SISR. The structure tensor of the input low-resolution image is exploited to obtain the gradient of an imaginary panchromatic image. Then, by constraining the gradient consistency, the image edges and details can be better recovered during the restoration of the high-resolution image. In addition, we resort to the nonlocal sparse and low-rank regularization of image patches to further improve the super-resolution performance. The proposed variational model is efficiently solved by an ADMM-based algorithm. We conduct extensive experiments on natural images and remote sensing images with different magnifying factors and compare our method with three classical super-resolution methods. Both the subjective visual impression and the quantitative evaluation indexes show that our method obtains higher-quality results.

1. Introduction

Image super-resolution (SR) is one of the most fundamental problems in the field of image processing, which aims to reconstruct clear and accurate high-resolution images from degraded low-resolution images. In other words, SR technology can recover a high-resolution (HR) image from one or more low-resolution (LR) input images [1]. In recent years, image SR has attracted great attention from academia and industry. The quality of the reconstructed SR image also greatly affects the accuracy of other computer vision tasks (such as image classification, image segmentation, and target detection). At present, SR technology has made great progress; it has been widely used in high-definition digital television, remote sensing monitoring, medical image reconstruction, image restoration, military reconnaissance, and other fields [2]. However, as an ill-posed problem, the existing image SR technology cannot always obtain satisfactory reconstruction results. Therefore, the study of SR is still a challenging but significant research topic.

According to the number of input LR images, SR can be classified into multi-image super-resolution (MISR) [3–5] and single-image super-resolution (SISR) [6–8]. MISR uses the complementary information provided by multiple images of the same scene to enhance resolution. However, in practical applications, it is much more difficult to obtain image sequences of the same scene and achieve precise subpixel registration. In contrast, SISR is suitable for more scenarios, as it enhances the resolution of one input image based on some prior information [9, 10]. To date, SISR methods can be broadly classified into interpolation-based methods, reconstruction-based methods, and learning-based methods [11].

Interpolation methods such as nearest-neighbor interpolation, bilinear interpolation, and bicubic interpolation [12, 13] estimate the intensity at a point using the information of adjacent pixels. While these methods are computationally simple and fast, they tend to generate excessive smoothing and jagged artifacts. Reconstruction-based SISR methods [14–16] generally utilize image priors to restrict the possible solution space, with the advantage of recovering sharp details. However, these methods are usually time-consuming, and their performance degrades rapidly as the amplification factor increases.

Learning-based SISR methods include example-based methods, neighbor-embedding-based methods, sparse-representation-based methods, and deep-learning-based methods [17, 18]. The basic idea of these approaches is to learn the correspondence between LR and HR images from prior knowledge during training and then reconstruct an HR image from the input LR image using the learned correspondence. Freeman et al. [19] presented an example-based method with a Markov network. Chang et al. [20] proposed a neighbor-embedding-based method that requires a smaller training set. Yang et al. [21] proposed a sparse-representation-based algorithm to obtain more favorable results and further improved this work by jointly training coupled dictionaries for LR and HR image patch pairs [22]. More recently, deep-learning-based SISR methods, using convolutional networks [23, 24], residual networks [25, 26], generative adversarial networks [27], attention networks [28, 29], and so on, have become popular and demonstrated good performance with the rapid development of deep learning technology.

Pan-sharpening [30] is a multispectral image fusion and super-resolution technique: the panchromatic (PAN) image has higher spatial resolution than the corresponding multispectral image but only a single band. Pan-sharpening fuses the PAN image and the multispectral image together to enhance both the spatial and spectral resolutions of the data. Motivated by the idea of pan-sharpening, we present a novel variational approach for single-image super-resolution. In this paper, we assume that there is a PAN image corresponding to the input LR image. By utilizing the structure tensor [31] of the input image, we construct the gradient of this imaginary PAN image; we then build a variational model combined with low-rank and sparse representation of similar patches to effectively fuse the constructed PAN information with the LR image and obtain the HR image. The proposed approach addresses the super-resolution problem of both a single natural image and multispectral data without a panchromatic image.

The remainder of this paper is organized as follows. In Section 2, we present the proposed SISR model in detail. The numerical algorithm for solving our model is given in Section 3. The experimental results and analysis are discussed in Section 4. Finally, the conclusion is presented in Section 5. In addition, the descriptions of the acronyms used in this paper are listed in Table 1.


Table 1: Acronyms used in this paper.

Acronym | Description
SR | Super-resolution
SISR | Single-image super-resolution
MISR | Multi-image super-resolution
HR | High resolution
LR | Low resolution
PAN | Panchromatic
ADMM | Alternating direction method of multipliers
PSNR | Peak signal-to-noise ratio
RMSE | Root mean square error
SSIM | Structure similarity index
SC | Sparse-coding-based super-resolution method
SCBP | Iterative-back-projection-based SC method

2. Variational Model

Drawing on the idea of pan-sharpening, we construct a variational fusion model to realize single-image super-resolution. As the panchromatic image contains the edges and details needed for resolution enhancement, we first construct the information of a PAN image that does not actually exist. According to [31], the key information of the assumed PAN image can be extracted from the structure tensor of the input image. Then the gradient of the PAN image can be fused into the LR image to enhance the HR details. The similar image patch pairs extracted from the LR and HR images are constrained by low-rank and sparse regularization to improve the SR performance.

2.1. Image Degradation Model

We consider that the observed low-resolution image f is a downsampled version of the high-resolution image u, and the degradation model [32] can be written as

f = S u + n,  (1)

where S represents a downsampling operator and n is the additive Gaussian white noise.
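The degradation model (1) can be simulated directly. The following minimal sketch (the function name `degrade`, decimation as the operator S, and the noise level are illustrative assumptions, not the paper's exact operator) downsamples an HR array and adds Gaussian white noise:

```python
import numpy as np

def degrade(hr, factor=2, sigma=2.0, rng=None):
    """Simulate f = S u + n: decimate the HR image by 'factor' and add
    Gaussian white noise. A hedged sketch; the paper's downsampling
    operator S follows [32] and may differ from plain decimation."""
    rng = np.random.default_rng(rng)
    lr = hr[::factor, ::factor]            # simple decimation as the operator S
    noise = rng.normal(0.0, sigma, lr.shape)
    return lr + noise

hr = np.tile(np.arange(8.0), (8, 1))       # toy 8x8 "HR" ramp image
lr = degrade(hr, factor=2, sigma=0.0)      # noise-free case for illustration
print(lr.shape)                            # (4, 4)
```

With sigma = 0 the output is exactly the decimated HR image, which makes the role of S easy to verify.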

2.2. Constructing the Gradient of the PAN Image

We construct the gradient of the PAN image corresponding to the input LR image using the notion of the structure tensor [31]. Firstly, we apply quadratic interpolation to the given LR image to obtain the enlarged image u at the desired scale. Then, the matrix M known as the structure tensor [31] can be expressed as

M = Σ_{k=1}^{N} w_k ∇u_k ∇u_k^T,  (2)

where N denotes the number of spectral bands, ∇u_k is the gradient of the k-th band, and w_k denotes the weight of the k-th band, which can be determined as in [31].

As a symmetric matrix, M can be decomposed as

M = Q Λ Q^T,  (3)

where Q is an orthogonal matrix whose columns are the eigenvectors of M, and Λ is a diagonal matrix whose diagonal elements are the eigenvalues of M. Equation (3) can also be written as

M = λ+ θ+ θ+^T + λ− θ− θ−^T,  (4)

where λ+ is the maximum eigenvalue and gives the maximum rate of change of u; λ− is the minimum eigenvalue and gives the minimum rate of change. The corresponding eigenvectors θ+ and θ− give the directions of change.
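The per-pixel structure tensor and its eigen-decomposition can be sketched as follows (a minimal illustration of eqs. (2)–(4); the function name `structure_tensor` and the uniform default weights are assumptions):

```python
import numpy as np

def structure_tensor(bands, weights=None):
    """Per-pixel 2x2 structure tensor M = sum_k w_k * grad(u_k) grad(u_k)^T
    for a multiband image given as a list of 2-D arrays."""
    N = len(bands)
    w = weights if weights is not None else [1.0 / N] * N
    M = np.zeros(bands[0].shape + (2, 2))
    for wk, u in zip(w, bands):
        gy, gx = np.gradient(u)            # gradients along rows (y) and columns (x)
        M[..., 0, 0] += wk * gx * gx
        M[..., 0, 1] += wk * gx * gy
        M[..., 1, 0] += wk * gx * gy
        M[..., 1, 1] += wk * gy * gy
    return M

# single band with a pure horizontal gradient: lambda_+ should be 1, lambda_- = 0
M = structure_tensor([np.outer(np.ones(5), np.arange(5.0))])
lam, vec = np.linalg.eigh(M[2, 2])         # eigenvalues in ascending order
print(lam)                                 # lambda_- ≈ 0, lambda_+ ≈ 1
```

`numpy.linalg.eigh` returns the eigenvalues ascending, so the last eigenvalue/eigenvector pair corresponds to λ+ and θ+.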

The PAN image P can theoretically capture the basic geometry of the image u. Consider an image P whose structure tensor has maximum eigenvalue λ+ and minimum eigenvalue 0; this structure tensor should approximately be equal to M. Thus, we have the following equation:

∇P ∇P^T = λ+ θ+ θ+^T.  (5)

In order to solve for ∇P, equation (5) can be rewritten as follows:

∇P = ± sqrt(λ+) θ+.  (6)

To specify the sign of the eigenvectors, the sign is chosen so that ∇P is consistent with the gradient direction of the interpolated image u; the target gradient of the constructed PAN image is thus obtained as

∇P = sign(⟨θ+, ∇u⟩) sqrt(λ+) θ+.  (7)
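The construction of the target PAN gradient at one pixel can be sketched as follows: take the leading eigenpair of M, scale the eigenvector by sqrt(λ+), and fix the eigenvector's sign against a reference gradient. The function name `pan_gradient` and the particular sign convention are assumptions made for illustration:

```python
import numpy as np

def pan_gradient(M, ref_grad):
    """Target PAN gradient at one pixel: magnitude sqrt(lambda_+) along
    the leading eigenvector theta_+, sign-aligned with ref_grad."""
    lam, vec = np.linalg.eigh(M)           # ascending eigenvalues
    theta = vec[:, -1]                     # eigenvector of lambda_+
    g = np.sqrt(max(lam[-1], 0.0)) * theta
    if np.dot(g, ref_grad) < 0:            # resolve the +/- sign ambiguity of eq. (6)
        g = -g
    return g

M = np.array([[4.0, 0.0], [0.0, 0.0]])     # lambda_+ = 4, oriented along x
g = pan_gradient(M, ref_grad=np.array([1.0, 0.0]))
print(g)                                   # magnitude sqrt(4) = 2 along x
```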

2.3. Variational Model Based on Gradient Consistency

In this subsection, we build a variational model to fuse the low-resolution image f and the gradient ∇P of its corresponding PAN image; the model is then optimized together with the nonlocal sparse and low-rank regularization to obtain the high-resolution image u.

2.3.1. Consistency of Gradient

Bands of an ideal HR image u often share structural information with the PAN image P; namely, the bands of the HR image closely approximate the PAN image in gradient information. Therefore, we can use a constraint on the consistency of gradients to establish a relationship model. We use u_k to denote the k-th band of image u and then set up the gradient consistency regularization term as follows:

E_g(u) = Σ_{k=1}^{N} ω_k ||∇u_k − ∇P||^2,  (8)

where ω_k denotes the weight of the k-th band in u, and we generally set equal weights ω_k = 1/N in the absence of special requirements.
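Evaluating the gradient consistency regularizer on a discrete grid is straightforward; the following sketch (the function name and the uniform default weights are assumptions) sums the per-band squared gradient differences against a fixed PAN gradient field:

```python
import numpy as np

def gradient_consistency(bands, pan_grad, weights=None):
    """Discrete version of sum_k w_k * ||grad(u_k) - grad(P)||^2.
    'pan_grad' is a (gy, gx) pair of arrays for the PAN gradient field."""
    N = len(bands)
    w = weights if weights is not None else [1.0 / N] * N
    gyP, gxP = pan_grad
    total = 0.0
    for wk, u in zip(w, bands):
        gy, gx = np.gradient(u)
        total += wk * np.sum((gx - gxP) ** 2 + (gy - gyP) ** 2)
    return total

u = np.outer(np.ones(4), np.arange(4.0))   # gradient is (0, 1) everywhere
E = gradient_consistency([u], (np.zeros((4, 4)), np.ones((4, 4))))
print(E)                                   # 0.0: the band matches the PAN gradient
```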

2.3.2. Sparse and Low-Rank Decomposition

Motivated by the effective work on optical flow estimation with nonlocal sparse and low-rank regularization in [33], we adopt a joint sparse and low-rank decomposition term formulated as follows:

R_i u = L_i + S_i + n_i,  (9)

where R_i denotes the operator that extracts the i-th patch of image u and groups its nonlocal similar patches together. The grouped similar patches are then decomposed into the low-rank component L_i and the sparse component S_i, and n_i denotes the Gaussian white noise.
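The grouping operator R_i can be sketched as a block-matching step: extract a reference patch, score all candidate patches by Euclidean distance, and stack the closest ones as the columns of one matrix. Patch size, stride, and group size below are illustrative assumptions:

```python
import numpy as np

def group_similar_patches(img, i, j, size=4, n_similar=8, stride=2):
    """Sketch of R_i: stack the n_similar patches most similar to the
    reference patch at (i, j) as columns of one matrix (eq. (9))."""
    ref = img[i:i + size, j:j + size].ravel()
    candidates = []
    H, W = img.shape
    for r in range(0, H - size + 1, stride):
        for c in range(0, W - size + 1, stride):
            p = img[r:r + size, c:c + size].ravel()
            candidates.append((np.sum((p - ref) ** 2), p))
    candidates.sort(key=lambda t: t[0])    # smallest distance first (incl. ref itself)
    return np.stack([p for _, p in candidates[:n_similar]], axis=1)

img = np.tile(np.eye(4), (3, 3))           # self-similar toy image
G = group_similar_patches(img, 0, 0)
print(G.shape)                             # (16, 8): 8 vectorized 4x4 patches
```

Because nonlocal similar patches are nearly repetitions of each other, the resulting matrix is close to low-rank, which is what the decomposition in (9) exploits.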

2.3.3. Total Energy

Combining the fidelity term derived from (1), the gradient consistency term (8), and the decomposition constraint (9), the proposed variational SISR model can be formulated as the following optimization objective:

min_{u, L_i, S_i} Σ_{k=1}^{N} { ||S u_k − f_k||^2 + α ||∇u_k − ∇P||^2 + λ Σ_{i=1}^{m} [ μ1 rank(L_i) + μ2 ||S_i||_0 ] }  s.t. R_i u_k = L_i + S_i + n_i,  (10)

where m is the number of image patches extracted from the k-th band of image u, N is the number of spectral bands, α, λ, μ1, μ2 are parameters, rank(L_i) denotes the rank of the low-rank component, and the ℓ0 norm ||·||_0 imposes the sparse regularization. It is well known that rank minimization and the ℓ0 norm lead to discrete combinatorial optimization problems; both are NP-hard. Therefore, in order to make the minimization tractable, we resort to their convex relaxations, and model (10) can be reformulated as the following convex optimization problem:

min_{u, L_i, S_i} Σ_{k=1}^{N} { ||S u_k − f_k||^2 + α ||∇u_k − ∇P||^2 + λ Σ_{i=1}^{m} [ μ1 ||L_i||_* + μ2 ||S_i||_1 ] }  s.t. R_i u_k = L_i + S_i + n_i,  (11)

where ||·||_* denotes the nuclear norm, which is the convex envelope (tightest convex surrogate) of the rank, and the ℓ1 norm is the nearest convex norm to the ℓ0 norm.
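The relaxed objective combines a quadratic fidelity term, a quadratic gradient consistency term, and nuclear-/ℓ1-norm patch terms. The following sketch evaluates it for one band given precomputed residuals; the function name and argument layout are assumptions for illustration:

```python
import numpy as np

def convex_objective(residual, grad_diff, groups, alpha, lam, mu1, mu2):
    """Evaluate the relaxed objective for one band, given the data
    residual (S u - f), the gradient difference (grad(u) - grad(P)),
    and a list of (L_i, S_i) decomposition pairs."""
    fidelity = np.sum(residual ** 2)
    consistency = alpha * np.sum(grad_diff ** 2)
    patches = lam * sum(mu1 * np.linalg.norm(L, 'nuc') + mu2 * np.abs(S).sum()
                        for L, S in groups)
    return fidelity + consistency + patches

# zero residuals, one patch group with a rank-2 low-rank part and no sparse part
val = convex_objective(np.zeros((4, 4)), np.zeros((4, 4, 2)),
                       [(np.eye(2), np.zeros((2, 2)))],
                       alpha=0.1, lam=1.0, mu1=1.6, mu2=0.2)
print(val)                                 # 3.2: only the nuclear-norm term (1.6 * 2)
```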

3. Numerical Algorithm

In this section, the numerical procedure of the proposed model is presented. We adopt the alternating direction method of multipliers (ADMM) [34] to solve the minimization problem (11). The flexibility of ADMM lies in the fact that it splits the initial optimization problem into several subproblems.

3.1. Solution of the Sparse Matrix

For each band, we fix u_k and the low-rank matrix L_i; the optimization problem for the sparse matrix S_i is as follows:

min_{S_i} λ μ2 ||S_i||_1 + (1/2) ||R_i u_k − L_i − S_i||_F^2.  (12)

We can obtain the solution of equation (12) by the soft-threshold algorithm as

S_i = soft(R_i u_k − L_i, τ),  (13)

where soft(x, τ) = sign(x) · max(|x| − τ, 0) and τ = λ μ2.
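The element-wise soft-thresholding operator is a one-liner; the following sketch implements it as used in the sparse update:

```python
import numpy as np

def soft_threshold(X, tau):
    """Element-wise soft threshold: shrink each entry toward zero by tau.
    This is the proximal operator of the l1 norm used in eq. (13)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

print(soft_threshold(np.array([-2.0, 0.3, 1.5]), 0.5))  # shrinks to [-1.5, 0.0, 1.0]
```

Entries with magnitude below τ are zeroed, which is exactly what makes the recovered component S_i sparse.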

3.2. Solution of the Low-Rank Matrix

For each band, we fix u_k and the sparse matrix S_i; the optimization problem for the low-rank matrix L_i is as follows:

min_{L_i} λ μ1 ||L_i||_* + (1/2) ||R_i u_k − S_i − L_i||_F^2.  (14)

We can obtain the solution of equation (14) by the singular value thresholding algorithm as

L_i = U soft(Σ, λ μ1) V^T,  (15)

where R_i u_k − S_i = U Σ V^T is the singular value decomposition, the diagonal entries of Σ are the singular values, and the columns of U and V are the left and right singular vectors.
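Singular value thresholding, the proximal operator of the nuclear norm, can be sketched as follows: soft-threshold the singular values and rebuild the matrix.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding (eq. (15)): soft-threshold the
    singular values of X and reconstruct, truncating small modes."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)           # soft-threshold the spectrum
    return (U * s) @ Vt

X = np.diag([3.0, 1.0, 0.2])               # singular values 3, 1, 0.2
L = svt(X, 0.5)
print(np.round(np.diag(L), 2))             # diagonal becomes [2.5, 0.5, 0.0]
```

Singular values below τ are removed entirely, so the output has reduced rank, which is why this step yields the low-rank component L_i.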

3.3. Solution of the SR Image

In order to separately solve the fidelity term and the two regularization terms in (11), we introduce two intermediate variables v and w which are close to u. Thus, we need to iteratively solve the following three subproblems to obtain the solution of the SR image.

(1) We use the intermediate variable v to replace u in the fidelity term. The optimization problem for v is as follows:

min_{v_k} ||S v_k − f_k||^2 + ⟨Λ1, v_k − u_k⟩ + (ρ1/2) ||v_k − u_k||^2,  (16)

where ⟨·, ·⟩ denotes the inner product, Λ1 denotes the Lagrange multiplier matrix, and ρ1 is a positive penalty parameter. We set the derivative of (16) with respect to v_k to 0:

2 S^T (S v_k − f_k) + Λ1 + ρ1 (v_k − u_k) = 0,  (17)

which yields

v_k = (2 S^T S + ρ1 I)^{-1} (2 S^T f_k + ρ1 u_k − Λ1).  (18)

The Lagrange multiplier is updated as follows:

Λ1 ← Λ1 + ρ1 (v_k − u_k).  (19)

(2) We introduce an intermediate variable w to replace u in the gradient consistency term. The optimization problem for w is as follows:

min_{w_k} α ||∇w_k − ∇P||^2 + ⟨Λ2, w_k − u_k⟩ + (ρ2/2) ||w_k − u_k||^2,  (20)

where Λ2 denotes the Lagrange multiplier matrix and ρ2 is a positive penalty parameter. We set the derivative of (20) with respect to w_k to 0:

−2α div(∇w_k − ∇P) + Λ2 + ρ2 (w_k − u_k) = 0.  (21)

Using the fast Fourier transform (FFT), we can obtain the closed-form solution of w_k from (21) directly:

w_k = F^{-1} ( F(ρ2 u_k − Λ2 − 2α div(∇P)) / (ρ2 − 2α F(Δ)) ),  (22)

where F and F^{-1} denote the FFT and inverse FFT, respectively, and F(Δ) is the transfer function of the discrete Laplacian. The Lagrange multiplier is updated as follows:

Λ2 ← Λ2 + ρ2 (w_k − u_k).  (23)

(3) After v and w are substituted into equation (11), the optimization subproblem of u becomes

min_{u_k} ⟨Λ1, v_k − u_k⟩ + (ρ1/2) ||v_k − u_k||^2 + ⟨Λ2, w_k − u_k⟩ + (ρ2/2) ||w_k − u_k||^2 + λ Σ_i ||R_i u_k − L_i − S_i||^2.  (24)

We set the derivative of (24) with respect to u_k to 0, which yields

u_k = (ρ1 I + ρ2 I + 2λ Σ_i R_i^T R_i)^{-1} (ρ1 v_k + Λ1 + ρ2 w_k + Λ2 + 2λ Σ_i R_i^T (L_i + S_i)).  (25)
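The FFT step of the gradient-consistency subproblem amounts to solving a linear system with a shifted Laplacian, which is diagonal in the Fourier domain. The following sketch assumes periodic boundary conditions and a simplified right-hand side; the function name and parameter names are illustrative:

```python
import numpy as np

def solve_gradient_subproblem(rhs, alpha, rho):
    """Solve (rho*I - alpha*Laplacian) w = rhs with periodic boundaries
    by diagonalizing the 5-point Laplacian with the FFT (a sketch of
    the closed-form step; the full right-hand side also assembles the
    PAN gradient divergence and the Lagrange multiplier)."""
    H, W = rhs.shape
    fy = np.fft.fftfreq(H)
    fx = np.fft.fftfreq(W)
    # eigenvalues of the 5-point periodic Laplacian (all <= 0)
    lap = (2 * np.cos(2 * np.pi * fy)[:, None] - 2) \
        + (2 * np.cos(2 * np.pi * fx)[None, :] - 2)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (rho - alpha * lap)))

rhs = np.ones((8, 8))
w = solve_gradient_subproblem(rhs, alpha=0.1, rho=1.0)
print(np.allclose(w, 1.0))                 # True: constant rhs -> constant solution
```

Because the Laplacian eigenvalues are nonpositive, the denominator never vanishes for rho > 0, so the division is always well defined.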

Overall, taking all above analyses into account, we can summarize the complete numerical procedure for the proposed method. The detailed descriptions are shown in Algorithm 1.

Input:
 The LR image f;
 Magnification factor.
Output:
 The SR image u.
Initialization:
 Compute an initial SR image via bicubic interpolation;
 Calculate the target gradient ∇P via equation (7).
For each band:
 For each iteration:
  1. For each extracted image patch, search for a set of similar patches across the whole image by patch matching.
  2. Decompose the grouped patches into low-rank and sparse components:
   (a) Calculate the sparse matrix S_i via (13);
   (b) Calculate the low-rank matrix L_i via (15).
  3. Reconstruct the SR image:
   (a) Update the image u via (25);
   (b) Update the intermediate variables v and w via (18) and (22);
   (c) Update the Lagrange multipliers Λ1 and Λ2 via (19) and (23);
   (d) Proceed to the next iteration.
 End;
End;
Return the SR image u.

4. Experimental Results and Analysis

In this section, we first evaluate the sensitivity of the proposed algorithm with respect to the main parameters, that is, the gradient consistency weight α, the patch regularization weight λ, and the low-rank and sparse weights μ1 and μ2. Then, to demonstrate the SR effectiveness on both natural images and remote sensing images with different magnifying factors (2 and 4), we compare our method with the bicubic interpolation method, the sparse-coding-based method (SC) [22], and its back-projection-enhanced version (SCBP) [35]. We evaluate the outcomes of the various methods using the quantitative indexes peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and structure similarity index (SSIM) [36]; they are calculated on the RGB channels for natural images and on each band for multispectral images. Higher PSNR and SSIM values and a lower RMSE value represent better SR performance.
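Two of these indexes are simple to compute directly. The sketch below implements RMSE and PSNR (SSIM requires local windowed statistics [36] and is omitted here); the peak value 255 for 8-bit images is an assumption:

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming an 8-bit peak of 255."""
    return float(20.0 * np.log10(peak / rmse(x, y)))

a = np.zeros((4, 4))
b = np.full((4, 4), 25.5)                  # constant error of 25.5 gray levels
print(round(psnr(a, b), 2))                # 20.0  (255 / 25.5 = 10 -> 20 dB)
```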

4.1. Adjustment of Parameters

The parameter α is the weight of the gradient consistency term, λ controls the low-rank and sparse regularization of the similar patches, and μ1 and μ2 control the contributions of the low-rank and sparse regularization terms, respectively. We adjust the parameter values in proper ranges and plot the average quantitative metrics as functions of the parameters on the experimental dataset; the optimal parameter combination is then found through extensive numerical tests. Figure 1 shows the average PSNR, RMSE, and SSIM results when the parameter α varies from 0 to 0.5 with fixed μ1 = 1.6, μ2 = 0.2, and a fixed λ. From Figure 1, we can see that when α = 0, all quantitative metrics take their worst values, which illustrates the effectiveness of the gradient consistency constraint. Meanwhile, Figure 1 shows that our method achieves its best performance at a small positive α. Figure 2 shows the average PSNR, RMSE, and SSIM results under various combinations of μ1 and μ2 with α and λ fixed. The analysis of μ1 and μ2 shows that the overall performance first improves and then degrades. When the maximal PSNR is selected, we have μ1 = 1.8 and μ2 = 0.3; when the maximal SSIM is selected, we have μ1 = 1.4 and μ2 = 0.1. In order to balance the performance on these two indexes, we normalize PSNR and SSIM and introduce the combined score E = 0.8 PSNR + 0.2 SSIM; E attains its maximum when μ1 = 1.6 and μ2 = 0.2. Besides, we find that the performance of the proposed method is insensitive to the parameter λ, and we simply fix it.

4.2. SR Results and Analysis
4.2.1. Natural Images Super-Resolution

To evaluate the performance of the proposed super-resolution algorithm, we use bicubic interpolation and SC and SCBP algorithms for comparison. Experimental results of the images “Flower,” “Leaf,” and “Lena” with magnification factor of 2 are shown in Table 2 and Figure 3, while the results with magnification factor of 4 are shown in Table 3 and Figure 4.


Table 2: PSNR, RMSE, and SSIM results for natural images with a magnification factor of 2.

Images | Metric | Bicubic | SC | SCBP | Ours
Flower | PSNR | 32.7356 | 32.7032 | 34.8588 | 36.5869
Flower | RMSE | 5.8852 | 5.9072 | 4.6089 | 3.7774
Flower | SSIM | 0.9163 | 0.8998 | 0.9295 | 0.9351
Leaf | PSNR | 23.7807 | 23.2820 | 25.8536 | 28.6582
Leaf | RMSE | 16.5009 | 17.4760 | 12.9975 | 9.4108
Leaf | SSIM | 0.8650 | 0.8386 | 0.8997 | 0.9323
Lena | PSNR | 32.0533 | 31.7019 | 34.0844 | 36.1762
Lena | RMSE | 6.3661 | 6.6289 | 5.0387 | 3.9603
Lena | SSIM | 0.9143 | 0.8939 | 0.9289 | 0.9319


Table 3: PSNR, RMSE, and SSIM results for natural images with a magnification factor of 4.

Images | Metric | Bicubic | SC | SCBP | Ours
Flower | PSNR | 26.1123 | 28.6254 | 28.4408 | 28.8676
Flower | RMSE | 12.6160 | 9.4465 | 9.6494 | 9.1867
Flower | SSIM | 0.7416 | 0.7922 | 0.7856 | 0.7969
Leaf | PSNR | 17.2636 | 19.3226 | 19.0129 | 19.3259
Leaf | RMSE | 34.8430 | 27.5682 | 28.5689 | 27.5578
Leaf | SSIM | 0.5081 | 0.5919 | 0.5905 | 0.6295
Lena | PSNR | 25.4408 | 27.9062 | 27.4987 | 28.0027
Lena | RMSE | 13.6302 | 10.2619 | 10.7548 | 10.1486
Lena | SSIM | 0.7381 | 0.7833 | 0.7761 | 0.7996

As we can see from the visual comparison, our method obtains the desired results; it effectively reduces the zigzag artifacts and distortion in the recovered SR images and thus better preserves image details compared with the other competitors. From the close-ups of the regions in the red boxes, we can see clearly that the SR images obtained by the bicubic interpolation and SC methods are more blurred than ours, while the SCBP method restores clear edges but generates some obvious artifacts in the SR images. In summary, our algorithm is superior to the compared methods in reconstructing clearer SR images with sharper edges and without artifacts. The above observations are quantitatively confirmed by Tables 2 and 3, which record the PSNR, RMSE, and SSIM values of the SR results of the three images with magnification factors of 2 and 4, respectively. The proposed method obtains the best metric values compared with the other three methods.

4.2.2. Multispectral Images Super-Resolution

The adopted multispectral images contain four bands (red, blue, green, and infrared), and our algorithm is executed on each band. As in the natural image experiments, we compare the performance of our method with bicubic interpolation, SC, and SCBP. Experimental results for the multispectral datasets “Field,” “Tree,” and “Beach” with a magnification factor of 2 are shown in Figure 5 and Table 4, while the results for a magnification factor of 4 are shown in Table 5.


Table 4: PSNR, RMSE, and SSIM results for multispectral images with a magnification factor of 2.

Images | Metric | Bicubic | SC | SCBP | Ours
Field | PSNR | 24.4504 | 24.5058 | 22.7932 | 26.8614
Field | RMSE | 15.2764 | 15.1792 | 18.4876 | 11.5736
Field | SSIM | 0.7493 | 0.7475 | 0.6872 | 0.8255
Tree | PSNR | 23.5264 | 24.0390 | 22.1848 | 25.9000
Tree | RMSE | 16.9911 | 16.0166 | 19.8289 | 12.9283
Tree | SSIM | 0.7237 | 0.7248 | 0.6664 | 0.8139
Beach | PSNR | 22.9976 | 22.9150 | 20.9018 | 24.7080
Beach | RMSE | 18.0577 | 18.2300 | 22.9854 | 14.8300
Beach | SSIM | 0.7427 | 0.7401 | 0.6821 | 0.8166


Table 5: PSNR, RMSE, and SSIM results for multispectral images with a magnification factor of 4.

Images | Metric | Bicubic | SC | SCBP | Ours
Field | PSNR | 19.5392 | 19.5944 | 17.8861 | 21.0251
Field | RMSE | 26.8894 | 26.7189 | 32.8934 | 22.6614
Field | SSIM | 0.4536 | 0.4537 | 0.3825 | 0.5437
Tree | PSNR | 19.0398 | 19.5148 | 17.6925 | 20.5211
Tree | RMSE | 28.4806 | 26.9560 | 32.9877 | 24.0150
Tree | SSIM | 0.4018 | 0.4138 | 0.3510 | 0.5144
Beach | PSNR | 18.3940 | 18.2420 | 16.4463 | 19.5784
Beach | RMSE | 30.6790 | 31.2204 | 38.3908 | 26.7681
Beach | SSIM | 0.4049 | 0.3946 | 0.3396 | 0.5010

The observations from the visual comparison are close to those of the natural image experiments. The bicubic interpolation and SC algorithms tend to reconstruct overly smooth images that lack detailed information and have fuzzy image quality, while the SR images obtained by the SCBP method are overenhanced and exhibit obvious artifacts. Comparatively speaking, our algorithm better restores the detailed information of the images and produces a better texture effect. Comparing the numerical results in Tables 4 and 5, our algorithm also obtains the best results for all indicators. It is worth noting that, because of the lack of a training set, the results of the SCBP algorithm are worse than those of bicubic interpolation. In conclusion, our proposed algorithm is superior to the other compared algorithms in both subjective visual impression and objective indexes.

5. Conclusion

This paper presented an efficient variational method for single-image super-resolution. Motivated by image fusion and pan-sharpening, we exploit the gradient of an imaginary PAN image derived from the input image via the structure tensor and then construct a gradient consistency constraint that fuses HR edges and details into the optimization target image. Meanwhile, we adopt the regularization of sparse and low-rank decomposition for similar image patches to further improve the SR accuracy. The variables of our variational model are iteratively optimized by an ADMM-based algorithm. Extensive experimental results demonstrate the edge recovery and artifact suppression capabilities of our method.

Data Availability

The datasets used to support the findings of this study are publicly available and included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

Yingying Xu was supported by the School of Electronics and Information Engineering, Taizhou University, Zhejiang, China. This research was funded by the Zhejiang Provincial Natural Science Foundation of China under Grant no. LQ21F020001, the Research Project of the Education Department of Zhejiang Province under Grant no. Y201840245, and the Agricultural Science and Technology Project of Taizhou under Grant no. 20ny13.

References

1. D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in Proceedings of the IEEE 12th International Conference on Computer Vision, pp. 349–356, Kyoto, Japan, October 2009.
2. S. Yang, M. Wang, Y. Sun, F. Sun, and L. Jiao, “Compressive sampling based single-image super-resolution reconstruction by dual-sparsity and non-local similarity regularizer,” Pattern Recognition Letters, vol. 33, no. 9, pp. 1049–1059, 2012.
3. Y. Wang, L. Wang, J. Yang, W. An, J. Yu, and Y. Guo, “Spatial-angular interaction for light field image super-resolution,” in Computer Vision–ECCV 2020, Springer, Berlin, Germany, 2020.
4. X. Ying, L. Wang, Y. Wang, W. Sheng, W. An, and Y. Guo, “Deformable 3d convolution for video super-resolution,” IEEE Signal Processing Letters, vol. 27, pp. 1500–1504, 2020.
5. Y. Tian, Y. Zhang, Y. Fu, and C. Xu, “TDAN: temporally-deformable alignment network for video super-resolution,” in Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3357–3366, Seattle, WA, USA, June 2020.
6. H. Su, N. Jiang, Y. Wu, and J. Zhou, “Single image super-resolution based on space structure learning,” Pattern Recognition Letters, vol. 34, no. 16, pp. 2094–2101, 2013.
7. H. Liu, Z. Fu, J. Han, L. Shao, S. Hou, and Y. Chu, “Single image super-resolution using multi-scale deep encoder-decoder with phase congruency edge map guidance,” Information Sciences, vol. 473, pp. 44–58, 2019.
8. S. A. Hussein, T. Tirer, and R. Giryes, “Correction filter for single image super-resolution: robustifying off-the-shelf deep super-resolvers,” in Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1425–1434, Seattle, WA, USA, June 2020.
9. C. Ren, X. He, Y. Pu, and T. Q. Nguyen, “Enhanced non-local total variation model and multi-directional feature prediction prior for single image super resolution,” IEEE Transactions on Image Processing, vol. 28, no. 8, pp. 3778–3793, 2019.
10. C. Ma, Y. Rao, Y. Cheng, C. Chen, J. Lu, and J. Zhou, “Structure-preserving super resolution with gradient guidance,” in Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7766–7775, Seattle, WA, USA, June 2020.
11. W. Yang, X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao, “Deep learning for single image super-resolution: a brief review,” IEEE Transactions on Multimedia, vol. 21, no. 12, 2019.
12. R. Keys, “Cubic convolution interpolation for digital image processing,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29, no. 6, pp. 1153–1160, 1981.
13. H. S. Hou and H. Andrews, “Cubic splines for image interpolation and digital filtering,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 26, no. 6, pp. 508–517, 1978.
14. M. Irani and S. Peleg, “Improving resolution by image registration,” CVGIP: Graphical Models and Image Processing, vol. 53, no. 3, pp. 231–239, 1991.
15. A. J. Patti, M. I. Sezan, and A. Murat Tekalp, “Superresolution video reconstruction with arbitrary sampling lattices and nonzero aperture time,” IEEE Transactions on Image Processing, vol. 6, no. 8, pp. 1064–1076, 1997.
16. R. R. Schultz and R. L. Stevenson, “Extraction of high-resolution frames from video sequences,” IEEE Transactions on Image Processing, vol. 5, no. 6, pp. 996–1011, 1996.
17. C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.
18. S. Gao and X. Zhuang, “Multi-scale deep neural networks for real image super-resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, June 2019.
19. W. T. Freeman, E. Pasztor, and O. Carmichael, “Learning low-level vision,” International Journal of Computer Vision, vol. 40, no. 1, pp. 25–47, 2000.
20. H. Chang, D. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), pp. 275–282, Washington, DC, USA, July 2004.
21. J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution as sparse representation of raw image patches,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), pp. 1–8, Anchorage, AK, USA, June 2008.
22. J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
23. W. Shi, J. Caballero, F. Huszár et al., “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1874–1883, Las Vegas, NV, USA, June 2016.
24. J. Kim, J. Kwon Lee, and K. Mu Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1646–1654, Las Vegas, NV, USA, June 2016.
25. B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee, “Enhanced deep residual networks for single image super-resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 136–144, Honolulu, HI, USA, July 2017.
26. Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2472–2481, Salt Lake City, UT, USA, June 2018.
27. C. Ledig, L. Theis, F. Huszár et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690, Honolulu, HI, USA, July 2017.
28. Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in Computer Vision–ECCV 2018, pp. 286–301, Springer, Berlin, Germany, 2018.
29. T. Dai, J. Cai, Y. Zhang, S. Xia, and L. Zhang, “Second-order attention network for single image super-resolution,” in Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11057–11066, Long Beach, CA, USA, June 2019.
30. F. Fang, F. Li, C. Shen, and G. Zhang, “A variational approach for pan-sharpening,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2822–2834, 2013.
31. G. Piella, “Image fusion for enhanced visualization: a variational approach,” International Journal of Computer Vision, vol. 83, no. 1, pp. 1–11, 2009.
32. K. I. Kim and Y. Kwon, “Single-image super-resolution using sparse regression and natural image prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 6, pp. 1127–1133, 2010.
33. W. Dong, G. Shi, X. Hu, and Y. Ma, “Nonlocal sparse and low-rank regularization for optical flow estimation,” IEEE Transactions on Image Processing, vol. 23, no. 10, pp. 4527–4538, 2014.
34. E. Esser, “Primal dual algorithms for convex models and applications to image restoration, registration and nonlocal inpainting,” Ph.D. thesis, University of California Los Angeles, 2010.
35. C. Liu, F. Fang, Y. Xu, and C. Shen, “Single image super-resolution based on nonlocal sparse and low-rank regularization,” Springer, Berlin, Germany, 2016.
36. X. Gao, K. Zhang, D. Tao, and X. Li, “Joint learning for single-image super-resolution via a coupled constraint,” IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 469–480, 2011.

Copyright © 2021 Yingying Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
