Special Issue: AI-Enabled Big Data Processing for Real-World Applications of IoT

Research Article | Open Access

Volume 2021 | Article ID 9948811 | https://doi.org/10.1155/2021/9948811

Wenbing Yang, Feng Tong, Xiaoqi Gao, Chunlei Zhang, Guantian Chen, Zhijian Xiao, "Remote Sensing Image Compression Evaluation Method Based on Neural Network Prediction and Fusion Quality Fidelity", Mobile Information Systems, vol. 2021, Article ID 9948811, 9 pages, 2021. https://doi.org/10.1155/2021/9948811

Remote Sensing Image Compression Evaluation Method Based on Neural Network Prediction and Fusion Quality Fidelity

Academic Editor: Fazlullah Khan
Received: 20 Mar 2021
Revised: 09 Apr 2021
Accepted: 26 Apr 2021
Published: 12 May 2021

Abstract

Lossy compression can produce false information, such as blockiness, noise, ringing, ghosting, aliasing, and blurring. This paper provides a comprehensive model of optical remote sensing image characteristics based on the retention rate of the block standard variance (BSV). We first propose a compression evaluation method, CR_CI, that combines neural network prediction with remote sensing image quality and fidelity. Through compression evaluation and improvement experiments on multiple satellites (CBERS-02B, ZY-1-02C, CBERS-04, GF-1, GF-2, etc.), CR_CI stably and sensitively detects changes in the information extraction performance of optical remote sensing images and provides strong support for optimizing the design of compression schemes. In addition, a predictor of remote sensing image compression is constructed based on deep neural networks, which balances compression efficiency (compression ratio), image quality, and fidelity. Empirical results demonstrate that the method achieves the highest compression efficiency while satisfying the requirements of visual interpretation and quantitative application.

1. Introduction

With the continuous development of high-resolution remote sensing satellites, the data volume for satellite storage and transmission is exploding, putting tremendous pressure on real-time transmission systems. At present, lossy compression technology is primarily used in high-resolution remote sensing satellite transmission systems [1]. Such methods produce false information, such as blocking artifacts, noise, ringing, ghosting, aliasing, and blurring. SPIHT and JPEG2000 embedded coding are generally used in China; they lose high-frequency information (texture details) when the compression quality is very low [2]. Increasing the compression ratio damages high-frequency information and loses some low-frequency information, seriously affecting image information extraction.

The information extraction performance of remote sensing images reflects the whole imaging link’s ability to obtain surface feature information, which is critical for visual interpretation and quantitative applications. In existing general compression assessment methods, compression quality is evaluated using indexes such as peak signal-to-noise ratio (PSNR) and root-mean-square error (RMSE), based on the difference of image statistics [3]. PSNR and RMSE do not accord with the perceptual characteristics of image quality and fidelity, nor do they accurately reflect the degradation of image information extraction performance. They are global parameters that consider neither the target features nor the location information of the scene; they only calculate the correlation between pixels. In recent years, significant progress has been made in image compression quality evaluation. Sakuldee et al. [3] proposed the space-frequency domain measurement (SFM) method. Charrier et al. [4] proposed the maximum likelihood difference scaling (MLDS) method. Bailey et al. [5] proposed a measurement method for the strength of compression blocking artifacts, and Seghir and Hachouf [6] proposed a measurement method for information deformation and displacement in marginal areas. These compression assessment methods also have shortcomings, such as complexity, low stability, and low sensitivity. They address images in general but ignore the characteristics of remote sensing images.

NIIRS (National Imagery Interpretability Rating Scale) is a classical standard for evaluating image information extraction performance; it can evaluate the influence of each imaging link on that performance. It is mainly implemented in two ways: visual assessment by trained image analysts, and automatic assessment by forecasting the NIIRS rating with the image quality model (IQM) [4, 5]. The limitation of the NIIRS-based method is that it is insensitive to the effect of compression on image information extraction performance, because NIIRS was designed for visual interpretation (target identification) and is concerned with image quality rather than image fidelity [7]. Image fidelity reflects texture detail more fully than image quality does. BSV can represent the texture details of the image; that is, it can solve the problem of texture sensitivity and make up for the deficiency of NIIRS [6, 8].

This paper further proposes a comprehensive retention rate calculation method for the two factors (BSV and NIIRS), achieving a compression quality evaluation that combines remote sensing image quality and fidelity. Empirical results demonstrate the significant performance improvements of our model.

2. Calculation Method of CR_CI

2.1. BSV Retention Rate

Blocking standard variance (BSV) acutely reflects image texture detail, slight surface feature shapes, and slight changes in radiation energy. BSV divides the whole image into several image blocks of B × B pixels, and the average of the standard variances of all image blocks is taken as the BSV of the image [9]. As a result, the change rate of the BSV value can reflect the image fidelity to some extent [11].

2.2. Calculation Method of BSV Value

Suppose an image of M × N pixels is divided into blocks of B × B pixels, where n is the number of blocks, defined as follows:

$$n = \left\lfloor \frac{M}{B} \right\rfloor \times \left\lfloor \frac{N}{B} \right\rfloor,$$

and then we calculate the gray average of the kth image block as

$$\bar{g}_k = \frac{1}{B^2}\sum_{i=1}^{B}\sum_{j=1}^{B} g_k(i,j),$$

wherein $g_k(i,j)$ is the gray value of the pixel in the ith row and jth column of the image block, i = 1, 2, 3, …, B; j = 1, 2, 3, …, B.

We then calculate the grayscale standard variance of the kth image block as

$$\sigma_k = \sqrt{\frac{1}{B^2}\sum_{i=1}^{B}\sum_{j=1}^{B}\bigl(g_k(i,j)-\bar{g}_k\bigr)^2}.$$

Finally, the BSV value is calculated as

$$\mathrm{BSV} = \frac{1}{n}\sum_{k=1}^{n}\sigma_k.$$

2.3. Calculation Method of BSV Value Retention Rate

We calculate the image’s BSV value before processing (BSV1) and after processing (BSV2); the BSV retention rate is then

$$\mathrm{CR\_BSV} = \frac{\mathrm{BSV}_2}{\mathrm{BSV}_1}.$$
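As a concrete illustration of Sections 2.2 and 2.3, the following is a minimal NumPy sketch of the BSV and CR_BSV computations; the default block size B and the discarding of partial edge blocks are our assumptions, since the source does not fix them.

```python
import numpy as np

def bsv(image: np.ndarray, block: int = 16) -> float:
    """Block standard variance: mean of the per-block grayscale standard deviations.

    The image is tiled into non-overlapping block x block windows; partial
    blocks at the right/bottom edges are discarded (our assumption).
    """
    h, w = image.shape
    stds = [
        image[r:r + block, c:c + block].astype(np.float64).std()
        for r in range(0, h - block + 1, block)
        for c in range(0, w - block + 1, block)
    ]
    return float(np.mean(stds))

def cr_bsv(original: np.ndarray, processed: np.ndarray, block: int = 16) -> float:
    """BSV retention rate: BSV2 / BSV1 (after processing over before)."""
    return bsv(processed, block) / bsv(original, block)
```

For example, with the image-1 values in Tables 5 and 7 below, 12.96 / 13.21 ≈ 0.9811, matching the tabulated CR_BSV.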

2.4. NIIRS Value Retention Rate
2.4.1. Calculation Method of NIIRS Value

In this paper, the NIIRS value is calculated by using the IQM model to predict the NIIRS level. The IQM model is adjusted on the digital image’s power spectrum to calculate the amount of information in the image related to the NIIRS level. Adjustments include the addition of the human visual system MTF, a noise adjustment, and a proportionality coefficient. IQM is defined as

$$\mathrm{IQM} = \frac{1}{M^2}\sum_{\theta}\sum_{\rho} S \, W(\rho)\, \mathrm{HVS}^2(\rho)\, P(\rho,\theta),$$

where $M^2$ is the number of pixels in the image (M × M pixels), $S$ is the proportionality coefficient of the input image, $W(\rho)$ is the modified Wiener filter, $\mathrm{HVS}^2(\rho)$ is the square of the Human Visual System (HVS) MTF, and $P(\rho,\theta)$ is the normalized two-dimensional power spectrum.

The proportionality coefficient is defined as

$$S = \frac{f}{H\,p},$$

where $f$ is the focal length, $H$ is the distance from the remote sensor to the ground, and $p$ is the pixel pitch. The unit of $S$ is cycles/m.

The modified Wiener noise filter $W(\rho)$ is defined in terms of the reciprocal of the average pulse width, the landscape variance (bit/cycle), the radial spatial frequency $\rho$ (cycles/pixel), the variance of the Gaussian MTF whose MTF at the Nyquist frequency is 20%, the noise power spectrum, and two empirical constants $a$ and $b$: for noisy images, $a$ and $b$ are 51.2 and 1.5, respectively; for noise-filtered images, $a$ and $b$ are 19.2 and 1.5.

The MTF of the Human Visual System is defined in terms of a constant $T$ determined by the spatial frequency (5.11 cycles/degree) at which the HVS MTF peaks, the normalized spatial frequency $\tilde{\rho}$ (in cycles/pixel width), and the number of cycles per degree subtended at the human eye. The MTF peak is located at $\tilde{\rho} = 0.1$, that is, 20% of 0.5 cycles/pixel width (namely, the Nyquist frequency); therefore, the $T$ value is 51.1 (= 5.11/0.1).

The normalized 2D power spectrum $P(\rho,\theta)$ is obtained from the two-dimensional power spectrum in polar-coordinate form by normalizing with the DC power (the square of the average gray level of the image) and the number of pixels in the image, where $u$ and $v$ are the spatial frequency components of the input image.

Separate normalization constants are applied in the case of mist and in the case of no mist.
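The following NumPy sketch outlines the shape of this computation only; it is not the calibrated IQM model. The callables `wiener_w` and `hvs_mtf_sq` stand in for the modified Wiener filter and the squared HVS MTF, whose closed forms are not reproduced above, and the mist/no-mist normalization is folded into a single DC normalization.

```python
import numpy as np

def iqm(image, scale_s, wiener_w, hvs_mtf_sq):
    """Schematic IQM: weighted sum of the normalized 2D power spectrum.

    image      -- M x M grayscale array
    scale_s    -- proportionality coefficient S (scalar, cycles/m)
    wiener_w   -- vectorized callable rho -> W(rho), the Wiener filter weight
    hvs_mtf_sq -- vectorized callable rho -> squared HVS MTF
    rho is the radial spatial frequency in cycles/pixel (0 .. 0.5).
    """
    m = image.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))
    power = np.abs(spec) ** 2
    dc_power = power[m // 2, m // 2]          # DC term: (sum of gray levels)^2
    p_norm = power / (dc_power + 1e-12)       # normalize by DC power

    # Radial spatial frequency of every spectrum sample, in cycles/pixel.
    freqs = np.fft.fftshift(np.fft.fftfreq(m))
    u, v = np.meshgrid(freqs, freqs)
    rho = np.hypot(u, v)

    mask = (rho > 0) & (rho <= 0.5)           # skip DC, keep up to Nyquist
    weights = wiener_w(rho[mask]) * hvs_mtf_sq(rho[mask])
    return scale_s / m**2 * np.sum(weights * p_norm[mask])
```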

Subjective quantitative measures of image quality differences include the NIIRS difference (ΔNIIRS) method, the subjective quality scale (SQS) method, and the Just Noticeable Difference (JND). ΔNIIRS is the difference in NIIRS between two images, generally expressed at decimal levels such as 0.1 NIIRS and 0.2 NIIRS. Depending on specific requirements, SQS can formulate an evaluation of image quality difference on a 5-point or 100-point scale. JND is a psychological measure of image quality difference and its basic unit. Studies have established links between ΔNIIRS and JND: 10 JND is roughly equivalent to 2.5 NIIRS; that is, 0.1 NIIRS is equivalent to 0.4 JND. A difference between two images of less than 0.1 ΔNIIRS is not visually perceptible.

2.4.2. Calculation Method of NIIRS Retention Rate

We calculate the NIIRS values of the images before and after compression, that is, NIIRS1 and NIIRS2. The NIIRS retention rate, CR_NIIRS, is used in this paper:

$$\mathrm{CR\_NIIRS} = \frac{\mathrm{NIIRS}_2}{\mathrm{NIIRS}_1}.$$

3. CR_CI

Combining the advantages of the IQM model and BSV, this paper proposes a method that fuses the quality and fidelity of remote sensing images and can stably and sensitively evaluate the effect of compression on their information extraction performance.

We use the NIIRS value retention rate and the BSV value retention rate to calculate the comprehensive information retention rate CR_CI, which fuses the two factors into a single score.

The comprehensive information retention rate CR_CI characterizes the retention of information extraction performance before and after image processing. The domain of CR_CI is [0, 1]: if CR_CI = 1, the information extraction performance of the image is unchanged after processing; if CR_CI < 1, it has changed, and the smaller CR_CI is, the larger the loss of information extraction performance. Where existing measures show low applicability to remote sensing images and low sensitivity to changes in information extraction performance, the comprehensive information retention rate CR_CI combines image quality and fidelity, giving higher stability and sensitivity in detecting changes in information extraction performance, as shown in Figure 1.
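Both retention rates are plain ratios; only the fusion step differs. The sketch below therefore uses a placeholder `fuse` (a plain product) purely for illustration, since the paper’s exact fusion of CR_NIIRS and CR_BSV is not reproduced here.

```python
def cr_niirs(niirs_before: float, niirs_after: float) -> float:
    """NIIRS retention rate: CR_NIIRS = NIIRS2 / NIIRS1."""
    return niirs_after / niirs_before

def cr_ci(cr_niirs_val: float, cr_bsv_val: float,
          fuse=lambda q, f: q * f) -> float:
    """Comprehensive information retention rate.

    `fuse` is a placeholder for the paper's fusion of quality (CR_NIIRS)
    and fidelity (CR_BSV); a plain product is used here for illustration.
    """
    return fuse(cr_niirs_val, cr_bsv_val)

# With the image-1 values from Tables 4 and 7 below:
# cr_ci(0.9988, 0.9811) -> 0.9799 under the illustrative product fusion
# (the paper's own fusion yields 0.9740 in Table 8).
```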

4. Preprocessing Neural Network

Figure 2 shows the overall structure of the network. The input image A is first passed through the designed neural network [12–15] and transformed into a slightly changed image A′ that is more suitable for compression. This slightly changed image A′ is input to a traditional coding framework such as JPEG2000, and in the compression model, the restored image is obtained after encoding and decoding.

Following Fourure et al. [10], we adopt a GridNet architecture composed of a two-dimensional grid pattern, shown in Figure 3. Feature maps inside the model are connected via computational layers [12, 14]. Data enter the model in the first block of line 0 and exit from the last block. Between the input and output blocks, data can flow along various paths, either straight along line 0 or along longer paths through lines with higher indexes. Data are processed by the layers that connect the blocks X_{i,j}. We term each horizontal sequence of connections a stream; streams are fully convolutional and keep the feature map sizes constant. The model also includes residual connections, which predict differences relative to their input. Each green cell is a residual block within a stream and changes neither the resolution nor the number of feature maps of its input. The red block is the downsampling convolution unit, which downsamples the input to a quarter of the original size (half the length and width) and doubles the number of channels. The yellow unit is the corresponding upsampling deconvolution unit, which upsamples the feature map to four times its size (length and width are doubled) and halves the number of channels. The yellow and red blocks have no residual connections. The main idea of this design is an adaptive way of computing how data flow through the computation graph.

In Figure 3, the blue part only includes downsampling units and residual blocks, and the purple part only includes upsampling units and residual blocks. In our practice, there are three layers of networks vertically and six layers of networks horizontally.

Taking the purple right half as an example, the input of each block comes from the preceding block in its own stream and from the block in the stream below it (except for the bottom module). Through this structure, the network can fully exploit image information at different scales. To make data flow through the entire network rather than only along the top stream, the network is randomly pruned during training, forcing every layer to learn useful information. In this way, each branch learns the information of its corresponding scale for reconstruction.

Figure 4 shows the detailed schema of the GridBlock: green units are the residual units, which keep the feature map dimensions constant between input and output; red blocks are the convolution plus subsampling units, which increase the feature dimensions; and the yellow block is the deconvolution plus upsampling unit, which decreases the feature dimensions (back to the original, to allow the addition). The trapeziums denote the upsampling/subsampling operations, implemented with strided convolutions, and BN denotes batch normalization.
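To make the unit types concrete, here is a minimal TensorFlow/Keras sketch of the three GridBlock units described above; kernel sizes and the exact layer ordering are our assumptions, and the full grid wiring and the random-pruning training trick are omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_unit(x, filters):
    """Green unit: residual block; preserves resolution and channel count.

    `x` must already have `filters` channels so the skip addition is valid.
    """
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    return layers.ReLU()(layers.Add()([x, y]))

def downsample_unit(x, filters):
    """Red unit: strided conv halves height/width and doubles the channels."""
    y = layers.Conv2D(2 * filters, 3, strides=2, padding="same")(x)
    y = layers.BatchNormalization()(y)
    return layers.ReLU()(y)

def upsample_unit(x, filters):
    """Yellow unit: transposed conv doubles height/width, halves channels."""
    y = layers.Conv2DTranspose(filters // 2, 3, strides=2, padding="same")(x)
    y = layers.BatchNormalization()(y)
    return layers.ReLU()(y)

# Example wiring of one step along a stream and one step down the grid:
# x = tf.keras.Input(shape=(256, 256, 32))
# y = downsample_unit(residual_unit(x, 32), 32)   # -> (128, 128, 64)
```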

5. Experimental Verification

This algorithm has been applied in the “software of Spaceborne Remote Sensing Image Information Extraction Performance Evaluation and Detection” and in the compression quality evaluation and improvement of the CBERS-02B, ZY-1-02C, CBERS-04, GF-1, and GF-2 satellites. The algorithm’s effect is illustrated below by the compression quality evaluation and improvement experiment on the ZY-1-02C satellite.

We implemented the GridNet architecture in TensorFlow. We used the Adam optimizer [16], which is used extensively in the existing literature [10], to optimize our model. The batch size is 128, and the learning rate is selected from {0.001, 0.005, 0.01, 0.05}. The dropout rate, which turns off some neurons during training, is set to 0.2.
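A minimal Keras sketch of this training configuration, with a stand-in network and a placeholder reconstruction loss (the source does not specify the loss here):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Stand-in for the GridNet preprocessor of Section 4 (illustration only).
model = tf.keras.Sequential([
    layers.Conv2D(32, 3, padding="same", activation="relu",
                  input_shape=(None, None, 1)),
    layers.Dropout(0.2),                  # dropout rate 0.2, as in the text
    layers.Conv2D(1, 3, padding="same"),
])

model.compile(
    # Learning rate chosen from {0.001, 0.005, 0.01, 0.05}, as in the text.
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="mse",                           # placeholder loss (our assumption)
)
# model.fit(train_images, target_images, batch_size=128, epochs=...)
```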

5.1. Verification Data Description

When a satellite remote sensing camera captures an image, the scene’s content and texture are not known in advance. Images with rich texture carry less redundant information, and images with a single texture carry more. In the compression quality evaluation and improvement experiment, images with different texture richness over different surface features (such as desert, city, and waterbody) are selected. The 8 scene images in Figure 4 are the original images before compression processing of the XX series A satellite, numbered image 1 to image 8. The pixel resolution is 2.5 meters, and the number of quantization bits is 8. The surface features mainly cover cities, farmland, vegetation, water, and deserts.

5.2. Verification Method Description

The compression algorithm used on the ZY-1-02C satellite is applied to compress the 8 images at 4 : 1 and 8 : 1. The proposed algorithm, the existing “forecasting NIIRS using IQM” algorithm, and the PSNR algorithm are then used to evaluate the information extraction performance of the images compressed at 4 : 1 and 8 : 1, and the stability and sensitivity of the proposed algorithm are compared with those of the existing algorithms.
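For reference, the PSNR used in the comparison follows its standard definition for 8-bit imagery (this helper is ours, not code from the paper):

```python
import numpy as np

def psnr(original: np.ndarray, compressed: np.ndarray,
         peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit (peak = 255) images."""
    mse = np.mean((original.astype(np.float64)
                   - compressed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```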

5.3. Verification Results

Figures 5(a)–5(c) show local crops of original image 1 (Hefei) before compression and of the compressed images at 4 : 1 and 8 : 1. Displayed at 0.5 mm/pixel, they clearly show the effect of compression on fine points, linear surface features, and the fine texture of vegetation.

5.4. Verification Results of the Existing Algorithms

Table 1 shows the NIIRS values of the original and compressed images, Table 2 the change in NIIRS of the compressed images relative to the original images, and Table 3 the PSNR of the compressed images.


Table 1: NIIRS values of the original and compressed images.

NIIRS value               Image 1  Image 2  Image 3  Image 4  Image 5  Image 6  Image 7  Image 8  Average
Original image            4.28     3.07     4.31     3.86     3.39     3.47     4.23     4.34     3.87
4 : 1 compressed image    4.27     3.06     4.30     3.86     3.38     3.47     4.23     4.34     3.86
8 : 1 compressed image    4.23     3.04     4.28     3.82     3.36     3.45     4.19     4.31     3.84


Table 2: Changes of NIIRS value of the compressed images relative to the original images.

Changes of NIIRS value    Image 1   Image 2   Image 3   Image 4   Image 5   Image 6   Image 7   Image 8   Average
4 : 1 compressed image    0.00507   0.01172   0.00441   0.00289   0.0121    0.00252   0.00517   0.0021    0.00575
8 : 1 compressed image    0.04771   0.02518   0.03183   0.03569   0.035     0.0257    0.04525   0.03008   0.03456


Table 3: PSNR of the compressed images.

PSNR (dB)                 Image 1  Image 2  Image 3  Image 4  Image 5  Image 6  Image 7  Image 8  Average
4 : 1 compressed image    39.08    46.75    41.98    42.56    46.21    45.21    40.04    42.00    42.98
8 : 1 compressed image    34.02    42.96    37.07    37.69    42.19    40.40    35.12    36.90    38.29

5.5. Verification Results of the Algorithm

Table 4 shows the NIIRS retention rate (CR_NIIRS) of the compressed images relative to the original images, Table 5 the BSV values of the original and compressed images, Table 6 the change in BSV of the compressed images relative to the original images, Table 7 the BSV retention rate (CR_BSV) of the compressed images relative to the original images, and Table 8 the image CR_CI proposed for the first time in this paper.


Table 4: NIIRS retention rate (CR_NIIRS) of the compressed images.

CR_NIIRS                  Image 1  Image 2  Image 3  Image 4  Image 5  Image 6  Image 7  Image 8  Average
4 : 1 compressed image    0.9988   0.9962   0.9990   0.9993   0.9964   0.9993   0.9988   0.9995   0.9984
8 : 1 compressed image    0.9889   0.9918   0.9926   0.9908   0.9897   0.9926   0.9893   0.9931   0.9911


Table 5: BSV values of the original and compressed images.

BSV value                 Image 1  Image 2  Image 3  Image 4  Image 5  Image 6  Image 7  Image 8  Average
Original image            13.21    3.75     9.18     7.24     4.3      6.61     11.26    9.4      8.12
4 : 1 compressed image    12.96    3.66     8.97     7.05     4.21     6.49     10.99    9.19     7.94
8 : 1 compressed image    12.44    3.52     8.53     6.64     4.01     6.27     10.51    8.73     7.58


Table 6: Changes in BSV value of the compressed images relative to the original images.

Changes in BSV value      Image 1  Image 2  Image 3  Image 4  Image 5  Image 6  Image 7  Image 8  Average
4 : 1 compressed image    0.25     0.09     0.21     0.19     0.09     0.12     0.27     0.21     0.18
8 : 1 compressed image    0.77     0.23     0.65     0.6      0.29     0.34     0.75     0.67     0.54


Table 7: BSV retention rate (CR_BSV) of the compressed images.

CR_BSV                    Image 1  Image 2  Image 3  Image 4  Image 5  Image 6  Image 7  Image 8  Average
4 : 1 compressed image    0.9811   0.9760   0.9771   0.9738   0.9791   0.9818   0.9760   0.9777   0.9778
8 : 1 compressed image    0.9417   0.9387   0.9292   0.9171   0.9326   0.9486   0.9334   0.9287   0.9338


Table 8: Comprehensive information retention rate (CR_CI) of the compressed images.

CR_CI                     Image 1  Image 2  Image 3  Image 4  Image 5  Image 6  Image 7  Image 8  Average
4 : 1 compressed image    0.9740   0.9598   0.9710   0.9697   0.9623   0.9783   0.9688   0.9747   0.9698
8 : 1 compressed image    0.8796   0.9054   0.8878   0.8715   0.8870   0.9143   0.8749   0.8896   0.8888

5.6. Analysis of Verification Results

The numerical references in the following analysis are rounded to two decimal places:

(1) As can be seen from Figure 5, the fine texture of vegetation becomes blurred in the 4 : 1 compressed images. When the compression ratio increases to 8 : 1, the blurring of the fine vegetation texture in the compressed images is aggravated, and the texture even disappears.

(2) With the algorithm of “forecasting NIIRS using IQM,” the maximum change in NIIRS value is 0.01 at the 4 : 1 compression ratio and 0.05 at the 8 : 1 compression ratio. According to the civil NIIRS standard, a difference of 0.01 or 0.05 cannot describe a physical change in image information extraction performance, and when the NIIRS value drops by 0 to 0.2, there is no visual perception. This is inconsistent with the visual results shown in Figures 5 and 6. It shows that the NIIRS value is insensitive in evaluating the effect of compression on the information extraction performance of images, because NIIRS concerns image quality rather than image fidelity.

(3) With the PSNR algorithm, Table 3 shows that the mean PSNR of the 4 : 1 compressed images is 42.98 dB, with image 1 below 40 dB. The mean PSNR of the 8 : 1 compressed images is 38.29 dB, 4.69 dB lower than at 4 : 1, with image 1 below 35 dB. Generally, when the PSNR of a compressed image is above 40 dB, image quality and fidelity are acceptable; if PSNR is below 35 dB, image quality drops sharply. By this criterion, 4 : 1 compression is superior to 8 : 1 compression, and the quality of the compressed images is acceptable. However, as shown in Table 3 and Figure 7, the PSNR of some 8 : 1 compressed images (images 2, 5, and 6) is higher than that of a 4 : 1 compressed image (image 1); that is, the PSNR ranges at 4 : 1 and 8 : 1 compression overlap. This shows that PSNR is strongly affected by the scene content of the image, and its stability is poor when evaluating the effect of compression on image information extraction performance.

(4) The proposed algorithm rests on the concept of a retention rate: the retained percentage of a physical quantity, which represents the degree of change of the image clearly and intuitively, rather than the change value itself. The algorithm proposes the comprehensive information retention rate CR_CI for the first time. Combining image quality and fidelity, CR_CI can achieve the highest compression efficiency under the premises of visual interpretation and quantitative application. With this algorithm, CR_CI falls between 0.96 and 0.98 at 4 : 1 compression, with a mean of 0.97, and between 0.87 and 0.91 at 8 : 1 compression, with a mean of 0.89. Generally, when the CR_CI of a compressed image is above 90%, image quality and fidelity are acceptable. CR_CI has a narrow distribution at a given compression ratio and no overlap between different compression ratios: at a fixed compression ratio the range of CR_CI is small, and the CR_CI of 4 : 1 compressed images is higher than that of 8 : 1 compressed images. This suggests that the CR_CI value does not depend on the content of the image scene; for both desert images with a single texture (high information redundancy) and urban images with rich texture (low information redundancy), it stably detects the decline of information extraction performance as the compression ratio increases. Besides, there is a difference of 0.05 (5%) between the minimum CR_CI at 4 : 1 compression (0.96) and the maximum CR_CI at 8 : 1 compression (0.91), indicating that CR_CI has good sensitivity in evaluating the effect of compression on image information extraction performance. The evaluation results of this algorithm are consistent with the visual effect.

(5) Conclusion: the original compression ratio in this experiment is 8 : 1. With this algorithm, it is found that the 8 : 1 compressed images cannot meet quantitative application requirements (i.e., CR_CI is too low), and the compression ratio should be lowered to 4 : 1 or below. The 4 : 1 compression scheme was finally adopted in the project.

6. Conclusion

Lossy compression technology produces false information and affects the information extraction performance of remote sensing images. Aiming at this phenomenon, this paper presents a comprehensive model of the characteristics of optical remote sensing images based on the retention rate of BSV and proposes a compression assessment method, CR_CI, that combines remote sensing image quality and fidelity with good effect. Through compression assessment and improvement experiments on multiple satellites (CBERS-02B, ZY-1-02C, CBERS-04, GF-1, GF-2, etc.), the method stably and sensitively detects changes in the information extraction performance of optical remote sensing images. It provides strong support for optimal compression scheme design, balances compression efficiency (compression ratio), image quality, and fidelity, and achieves the highest compression efficiency of images while meeting the prerequisites for visual interpretation and quantitative applications.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. J. C. Leachtenauer and R. G. Driggers, Surveillance and Reconnaissance Imaging Systems: Modeling and Performance Prediction, Artech House, Boston, MA, USA, 2001.
  2. S. Hongwei and C. Shiping, “A remote sensing image quality standard orienting to user’s mission requirements-NIIRS,” Spacecraft Recovery & Remote Sensing, vol. 24, no. 3, pp. 30–35, 2003.
  3. R. Sakuldee, N. Yamsang, and S. Udomhunsakul, “Image quality assessment for JPEG and JPEG2000,” in Proceedings of the Third International Conference on Convergence and Hybrid Information Technology, pp. 320–325, Daejeon, Republic of Korea, August 2008.
  4. C. Charrier, K. Knoblauch, A. K. Moorthy et al., “Comparison of image quality assessment algorithms on compressed images,” in Image Quality and System Performance VI, Proceedings of SPIE-IS&T Electronic Imaging, SPIE, San Jose, CA, USA, January 2010.
  5. D. Bailey, M. Carli, M. Farias et al., “Quality assessment for block-based compressed images and videos with regard to blockiness artifacts,” in Proceedings of the Tyrrhenian International Workshop on Digital Communications, Capri, Italy, September 2002.
  6. A. Z. Seghir and F. Hachouf, “Edge-region information with distorted and displaced pixels measure for image quality evaluation,” in Proceedings of the 10th International Conference on Information Science, Signal Processing and Their Applications, Kuala Lumpur, Malaysia, May 2010.
  7. S. P. Chen and W. Jiang, “Optimal design and MTFC of space optical sampling imaging system MTF,” Spacecraft Recovery & Remote Sensing, vol. 28, no. 4, pp. 17–22, 2007.
  8. C. K. Qu, F. Li, X. Yang et al., “Using zero-phase Kaiser window filter to improve the orbital accuracy of MEX Doppler data,” Journal of Wuhan University (Information Science Edition), vol. 43, no. 7, pp. 1071–1077, 2018.
  9. H.-y. He, Y. Zeng, and W.-y. Wang, “Research on assessing compression quality taking into account the space-borne remote sensing images,” High Technology Letters, vol. 21, no. 1, pp. 109–117, 2015.
  10. D. Fourure, R. Emonet, E. Fromont, D. Muselet, A. Tremeau, and C. Wolf, “Residual conv-deconv grid network for semantic segmentation,” 2017, arXiv preprint arXiv:1707.07958.
  11. Y. Shao, F.-c. Sun, and H.-b. Li, “No-reference remote sensing image assessment method using visual properties,” Journal of Tsinghua University (Science and Technology), vol. 53, no. 4, pp. 550–555, 2013.
  12. X. Ning, Y. Wang, W. Tian, L. Liu, and W. Cai, “A biomimetic covering learning method based on principle of homology continuity,” ASP Transactions on Pattern Recognition and Intelligent Systems, vol. 1, no. 1, pp. 8–15, 2021.
  13. W. Cai and Z. Wei, “Remote sensing image classification based on a cross-attention mechanism and graph convolution,” IEEE Geoscience and Remote Sensing Letters, 2020, in press.
  14. X. Ning, W. Tian, W. Li et al., “BDARS_CapsNet: bi-directional attention routing sausage capsule network,” IEEE Access, vol. 8, pp. 59059–59068, 2020.
  15. W. Cai, B. Liu, Z. Wei, M. Li, and J. Kan, “TARDB-Net: triple-attention guided residual dense and BiLSTM networks for hyperspectral image classification,” Multimedia Tools and Applications, vol. 80, no. 7, pp. 11291–11312, 2021.
  16. S. T. U. Shah, J. Li, Z. Guo, G. Li, and Q. Zhou, “DDFL: a deep dual function learning-based model for recommender systems,” in Proceedings of the International Conference on Database Systems for Advanced Applications, pp. 590–606, Springer, Jeju, Korea, September 2020.

Copyright © 2021 Wenbing Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

