Mathematical Problems in Engineering, Volume 2019, Article ID 1384921, 7 pages
https://doi.org/10.1155/2019/1384921

Research Article | Open Access
Special Issue: Computational Intelligence in Image Processing 2020

Blind Stereo Image Quality Evaluation Based on Convolutional Network and Saliency Weighting

Academic Editor: Daniel Zaldivar
Received: 07 Jun 2019; Revised: 26 Jul 2019; Accepted: 23 Aug 2019; Published: 09 Sep 2019

Abstract

With the rapid development of stereo image applications, there is an increasing demand for a versatile tool to evaluate the perceived quality of stereo images. This study therefore proposes a blind stereo image quality evaluation (SIQE) algorithm based on a convolutional network and saliency weighting. The core of the algorithm is a quality map generation network, which is trained on a distorted image dataset with quality map labels to obtain an optimal network framework. The left view, right view, and cyclopean view of the stereo image are then fed to the trained network in turn, and the resulting quality maps are weighted and fused into the final stereo image quality score. The experimental results reveal that the proposed SIQE algorithm improves the accuracy of image quality prediction and has good generalization ability.

1. Introduction

With the rapid development of stereo image applications, many related stereo image technologies and services have been introduced into our daily lives as well as into many professional fields [1–9]. A variety of distortions can occur during the collection, transmission, processing, and display of stereo images [10–19]. Therefore, it is of immense practical significance to establish a high-performance stereo image quality evaluation method. Stereo image quality evaluation (SIQE) is classified into subjective and objective evaluation. In subjective evaluation, humans rate images directly; because the human visual system is the ultimate recipient of images, subjective evaluation is very persuasive. In practice, however, it is extremely time-consuming and laborious and is difficult to apply in real-time systems. Objective evaluation therefore plays the dominant role in SIQE. There are three main types of objective SIQE methods: full-reference (FR) evaluation [4–8], reduced-reference (RR) evaluation [9], and no-reference (NR) evaluation [10–22]. FR methods compare the undistorted original image with the distorted image to measure the difference between them. RR methods use only part of the undistorted original image information, and NR methods use no undistorted original image at all. Because the original undistorted image is difficult to obtain in practical applications, NR evaluation has the higher research value.

Many SIQE methods are designed based on typical 2D image quality evaluation methods [1–3]. A typical FR SIQE method was proposed by Chen et al. [4]; it synthesizes a cyclopean image from the stereo image pair, the disparity map, and Gabor filter responses, and predicts the 3D quality score by applying an FR 2D image quality assessment method to this cyclopean image. Zhou et al. [11] proposed a new NR SIQE method based on binocular self-similarity and deep neural networks. In [12], stereo image blocks are first input to a convolutional neural network and then pooled, and the final image quality score is obtained through a multilayer perceptron, where the initial parameters of the convolutional neural network are pretrained on a large number of natural images. Jiang et al. [13] performed SIQE by learning color visual characteristics based on nonnegative matrix factorization and considering binocular interaction. The study in [14] proposed NR stereo image quality assessment based on the combination of wavelet decomposition and statistical models. The method proposed in [15] is based on the binocular vision mechanism, and the method proposed in [16] learns gradient dictionary-based color visual features. Wu et al. [17] proposed an evaluation method based on depth edge information and color signals, which also uses segmented autoencoders. Liu et al. [18] proposed an SIQE method based on classification and prediction. Jiang et al. [19] proposed an SIQE algorithm that handles both singly and multiply distorted stereo images by learning monocular and binocular local visual primitives that characterize the local receptive field properties of the visual cortex. Bensalma and Larabi [20] proposed a perceptual quality metric for stereo images based on binocular energy. Zhou et al. [21] proposed a blind SIQE metric based on binocular combination and an extreme learning machine.

Although the above methods achieve certain effects on the stereo image evaluation problem, they consider neither versatility nor image saliency weighting. Moreover, few studies address the problem of assigning weights to the left and right views of a stereo image. Therefore, in this study, an NR SIQE method based on convolutional networks and saliency weighting is proposed. The main contributions of this work are as follows.

First, the training dataset used in the proposed method is a self-made distorted image dataset, and quality maps obtained with a high-performance FR image quality evaluation method serve as the corresponding labels.

Second, the main network framework used by the algorithm is the quality map generation network, which is trained on the distorted image dataset and quality map labels to obtain an optimal network framework.

Finally, the left view, the right view, and the cyclopean view of the stereo image are each used as input to the optimal quality map generation network, and the corresponding quality maps are predicted. Weighted fusion is then performed to obtain the final stereo image quality score.

2. Model Methods

Figure 1 illustrates the overall frame structure of the proposed method. The inputs to the network are a left-view distorted image, a right-view distorted image, and a cyclopean view synthesized from the left and right views. The output is the predicted quality score of the distorted stereo image. As can be seen from the figure, the overall framework can be divided into three submodules: the quality map generation network, the saliency weighting module, and the weighted summation module for the left, right, and cyclopean views. Each module is described in detail in the following sections.

2.1. Quality Map Generation Network

In the proposed method, the quality map generation network is the main component of the overall framework. The network is required to output a quality map of the same size as the input image. We construct an improved framework based on U-Net, an extension of fully convolutional networks, because U-Net integrates the hierarchical representations in its subsampling layers with the corresponding features in its upsampling layers, and these layers increase the resolution of the output. To localize, high-resolution features from the contracting path are combined with the upsampled output, so that successive convolution layers can learn to assemble a more precise output from this information.

A major component of the quality map generation network is the convolutional layer. In the encoder part, the basic module consists of two convolution layers followed by one pooling layer. The number of convolution kernels in each convolutional layer is depicted in Figure 2. The size of the input distorted image is $w \times h \times 3$, the size of the output quality map is $w \times h$, and the size of the convolution kernels is 3 × 3. The activation function is the rectified linear unit (ReLU). Let $W_i^l$ denote the parameters of the $l$th filter kernel in the $i$th convolution layer and $b_i^l$ denote its corresponding bias. Then, the $l$th feature map produced in the $i$th convolution layer is represented by

$$h_i^l = \sigma\left(W_i^l \ast h_{i-1} + b_i^l\right), \quad (1)$$

where $h_{i-1}$ denotes the feature maps output from the $(i-1)$th convolution layer, $h_0$ corresponds to the input image, and $\sigma(\cdot)$ is the ReLU function.
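To make the construction concrete, the following is a minimal sketch of such an encoder-decoder quality map network in Keras (the framework used in Section 3.2). The depth, filter counts, and output activation here are illustrative assumptions; the paper's exact configuration is the one depicted in Figure 2.

```python
# A minimal U-Net-style quality map generation network (sketch).
from tensorflow.keras import Input, Model, layers

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU: the basic two-layer module.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_quality_map_net(h, w):
    inp = Input(shape=(h, w, 3))              # w x h x 3 distorted image

    # Contracting path: conv block + 2x2 max pooling per encoder module.
    c1 = conv_block(inp, 32)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(c2)

    b = conv_block(p2, 128)                   # bottleneck

    # Expanding path: upsample and concatenate the skip features so that
    # successive convolutions can assemble a high-resolution output.
    u2 = layers.UpSampling2D(2)(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.UpSampling2D(2)(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)

    # Single-channel output: a w x h quality map the same size as the
    # input (sigmoid keeps it in [0, 1], an assumption for SSIM labels).
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inp, out)

model = build_quality_map_net(400, 528)       # 528 x 400 training images
```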

The supervisory label used in the network is the structural similarity (SSIM) quality map of the image; computing the SSIM map locally gives better results than computing it globally.

In this work, we first select 100 source images covering different scenes from the dataset in [23] as the training set (all reference images of the dataset are shown in Figure 2, resized to a fixed size of 528 × 400). Four commonly observed distortion types, namely, JPEG 2000 (JP2K) compression, JPEG compression, Gaussian blur (GB), and white noise (WN), are used to generate the distorted images. Finally, the SSIM metric is employed to generate the ground-truth objective quality/similarity maps used as training labels. In the proposed method, the SSIM quality map is a local mapping matrix. Let $x$ and $y$ be the original and distorted image signals, respectively; a quality map based on their similarity is then defined as

$$\mathrm{SSIM}(x, y) = \frac{\left(2\mu_x \mu_y + C_1\right)\left(2\sigma_{xy} + C_2\right)}{\left(\mu_x^2 + \mu_y^2 + C_1\right)\left(\sigma_x^2 + \sigma_y^2 + C_2\right)}, \quad (2)$$

where $\mu_x$ and $\mu_y$ are the means of $x$ and $y$, $\sigma_x$ and $\sigma_y$ are their standard deviations, and $\sigma_{xy}$ is the covariance of $x$ and $y$. $C_1$ and $C_2$ denote small positive constants that increase stability when the denominators approach zero. In this paper, we set $C_1 = C_2 = 0.085$ in our experiments.
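As an illustration, local SSIM quality map labels can be generated along the following lines; the 8 × 8 local window and the grayscale inputs are assumptions, while $C_1 = C_2 = 0.085$ follows the paper.

```python
# Generating a local SSIM quality map as a training label (sketch).
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_quality_map(x, y, win=8, C1=0.085, C2=0.085):
    # x: original image, y: distorted image (2D float grayscale arrays).
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x = uniform_filter(x, win)                      # local means
    mu_y = uniform_filter(y, win)
    # Local variances/covariance via E[ab] - E[a]E[b].
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den                                   # per-pixel SSIM map
```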

2.2. Weighted Fusion Module

The weighted fusion module is one of the important components of the proposed method, and its main structure is depicted in Figure 1.

In this study, the fusion of the quality scores of the left and right views is based on the widely used Bayesian theory [24], in which the binocular quality score can be obtained by

$$Q_{lr} = \frac{p(E_l)\, p(\vartheta \mid E_l) + p(E_r)\, p(\vartheta \mid E_r)}{p(E_l) + p(E_r)}. \quad (3)$$

In (3), $E_l$ and $E_r$ denote the left retinal view and the disparity/depth-compensated right retinal view, respectively; $\vartheta$ denotes the quality; $p(E_l)$ and $p(E_r)$ denote feature distributions that are utilized to balance the roles of the binocular visual mechanism in determining the overall visual quality; and the likelihoods $p(\vartheta \mid E_l)$ and $p(\vartheta \mid E_r)$ denote the quality scores of the left and right views.

The weights assigned to the left and right views of the distorted stereo image follow the weight assignment method of [24]:

$$w_l = \frac{g_l}{g_l + g_r}, \quad (4)$$

$$w_r = \frac{g_r}{g_l + g_r}, \quad (5)$$

where $g_l$ and $g_r$ are the significance-level estimates of the left and right views, respectively; equations (4) and (5) give the weights of the left and right views. The weighted quality score of the two views is then

$$Q_{lr} = w_l Q_l + w_r Q_r, \quad (6)$$

where $Q_l = \sum (S_{map,l} \odot Q_{map,l}) / \sum S_{map,l}$ and $Q_r = \sum (S_{map,r} \odot Q_{map,r}) / \sum S_{map,r}$ are the saliency-weighted pooled scores. $S_{map,l}$ and $S_{map,r}$ denote the saliency maps of the left and right views (we use the method of [25] to derive saliency maps from the distorted views), and $Q_{map,l}$ and $Q_{map,r}$ denote the predicted quality maps of the left and right views.

In addition to the weights used to combine the left and right view quality scores, one more weight balances the left-right weighted score $Q_{lr}$ against the cyclopean view quality score $Q_m = \sum (S_{map,m} \odot Q_{map,m}) / \sum S_{map,m}$, where $S_{map,m}$ denotes the saliency map of the cyclopean view and $Q_{map,m}$ denotes its predicted quality map. For a distorted stereo image, the overall quality score is obtained by

$$Q = \alpha Q_{lr} + (1 - \alpha) Q_m, \quad (7)$$

where $\alpha$ is the weight. The selection of $\alpha$ is discussed in the experimental section.
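A sketch of the whole fusion module under the definitions above is given below. Using the total saliency of each view as the significance-level estimates $g_l$ and $g_r$, and saliency-weighted averaging as the pooling step, are assumptions consistent with the text rather than the paper's verbatim formulas.

```python
# Saliency-weighted fusion of the three predicted quality maps (sketch).
import numpy as np

def pooled_score(q_map, s_map):
    # Saliency-weighted average of a predicted quality map.
    return np.sum(s_map * q_map) / np.sum(s_map)

def overall_score(q_l, s_l, q_r, s_r, q_m, s_m, alpha=0.2):
    # Significance-level estimates g_l, g_r (total saliency per view).
    g_l, g_r = np.sum(s_l), np.sum(s_r)
    w_l = g_l / (g_l + g_r)                     # eq. (4)
    w_r = g_r / (g_l + g_r)                     # eq. (5)
    Q_lr = (w_l * pooled_score(q_l, s_l)
            + w_r * pooled_score(q_r, s_r))     # eq. (6)
    Q_m = pooled_score(q_m, s_m)                # cyclopean view score
    return alpha * Q_lr + (1.0 - alpha) * Q_m   # eq. (7)
```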

3. Experimental Results and Analysis

3.1. Experimental Database

In the experiments, the performance of the proposed method was verified on two public 3D image quality evaluation databases: LIVE 3D Phase I [26] and LIVE 3D Phase II [27]. Basic information about the two databases is provided below.

LIVE 3D Phase I: this database includes 20 pairs of original undistorted stereo images and 365 pairs of distorted stereo images. The size of each view is 640 × 360. The distorted images cover five distortion types at different levels: JPEG compression, JPEG 2000 compression, additive white Gaussian noise (WN), fast fading (FF), and Gaussian blur (GB). In addition, each pair of distorted images has a differential mean opinion score (DMOS), a subjective human image quality score obtained through extensive experiments.

LIVE 3D Phase II (comprising LIVE 3D Phase II-Symmetric and LIVE 3D Phase II-Asymmetric): this database includes 8 pairs of original undistorted stereo images and 360 pairs of distorted stereo images. The size of each view is 640 × 360. Similar to LIVE 3D Phase I, the distorted images cover the same five distortion types at different levels. For each distortion type, three symmetrically distorted stereo image pairs and six asymmetrically distorted stereo image pairs are generated from each pair of original stereo images. "Asymmetric" means that the left and right views of the stereo image have different types or levels of distortion. Each pair of stereo images has a DMOS value.

3.2. Experimental Training Step

The experiment was implemented on a 64-bit computer with an NVIDIA GTX 1080 Ti GPU. The deep network was implemented using the Keras framework with TensorFlow as the backend. For training, the optimizer is Adam ("adaptive moment estimation"), a parameter updating method that computes an adaptive learning rate for each parameter. Many other updating methods adjust the learning rate globally, applying the same adjustment to all network parameters, which makes tuning difficult and requires very good initial network parameters; compared with these methods, Adam has a clear advantage. We use the Adam [28] optimizer with a learning rate of 1 × 10−4, decayed by 0.5 every 50 epochs, and a weight decay of 1 × 10−5 for regularization. In the iterative process, the batch size is 16, and the loss function is the mean square error (MSE).
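In Keras, this training configuration could look roughly as follows; `model`, `x_train`, and `y_train` are placeholders, the epoch count is an assumption, and the 1 × 10−5 weight decay would in practice be attached to the convolution layers as kernel regularizers.

```python
# Training setup sketch: Adam at 1e-4, halved every 50 epochs, MSE loss.
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import LearningRateScheduler

def halve_every_50(epoch, lr):
    # Decay the learning rate by 0.5 at every 50th epoch.
    return lr * 0.5 if epoch > 0 and epoch % 50 == 0 else lr

model.compile(optimizer=Adam(learning_rate=1e-4), loss="mse")
model.fit(x_train, y_train,
          batch_size=16,                                  # batch size 16
          epochs=200,                                     # assumed count
          callbacks=[LearningRateScheduler(halve_every_50)])
```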

3.3. Analysis of Experimental Results

Three classical performance indexes are used to evaluate the proposed SIQE method: the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC), and the root mean square error (RMSE). The PLCC index evaluates the prediction accuracy of an NR image quality evaluation algorithm: the higher the correlation between the objective prediction scores and the subjective scores, the closer the PLCC is to 1, indicating better performance of the image quality evaluation algorithm.
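For reference, the three indexes can be computed from predicted scores and subjective DMOS values as follows (a direct computation, without the nonlinear regression step some studies apply before PLCC/RMSE):

```python
# Computing PLCC, SROCC, and RMSE between predictions and DMOS values.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def performance_indexes(pred, dmos):
    pred, dmos = np.asarray(pred), np.asarray(dmos)
    plcc, _ = pearsonr(pred, dmos)     # prediction accuracy
    srocc, _ = spearmanr(pred, dmos)   # prediction monotonicity
    rmse = np.sqrt(np.mean((pred - dmos) ** 2))
    return plcc, srocc, rmse
```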

Since the 2D database we created contains only four distortion types, namely, JPEG, JPEG 2000, WN, and GB, only these four distortions are tested and analyzed in the experiment. The data shown in Table 1 cover all four distortions.


Table 1: Performance for different values of the weight α when the combined left-right score is fused with the cyclopean view score.

| Databases        | Indicator | α = 0.1 | α = 0.2 | α = 0.3 | α = 0.4 | α = 0.5 | α = 0.6 | α = 0.7 | α = 0.8 | α = 0.9 |
|------------------|-----------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| LIVE 3D Phase I  | PLCC      | 0.913   | 0.913   | 0.913   | 0.912   | 0.910   | 0.908   | 0.907   | 0.904   | 0.902   |
|                  | SROCC     | 0.885   | 0.887   | 0.889   | 0.890   | 0.889   | 0.888   | 0.887   | 0.885   | 0.883   |
|                  | RMSE      | 6.347   | 6.337   | 6.354   | 6.389   | 6.438   | 6.496   | 6.560   | 6.629   | 6.700   |
| LIVE 3D Phase II | PLCC      | 0.858   | 0.863   | 0.861   | 0.856   | 0.849   | 0.841   | 0.833   | 0.824   | 0.816   |
|                  | SROCC     | 0.849   | 0.863   | 0.849   | 0.842   | 0.833   | 0.825   | 0.818   | 0.812   | 0.802   |
|                  | RMSE      | 5.739   | 5.651   | 5.690   | 5.787   | 5.910   | 6.045   | 6.186   | 6.329   | 6.470   |

Table 1 shows the results of the weight distribution experiment in which the combined left and right view quality score is fused with the cyclopean view quality score. From the experiment, it can be concluded that, on the stereo distortion databases, the best performance is obtained at α = 0.2.

The overall test results are shown in Table 2, obtained with α = 0.2. The relevant performance indexes show that the image quality evaluation method proposed in this paper has good generalization ability.


Table 2: Overall test results of the proposed method (α = 0.2) for each distortion type.

| Databases        | Indicator | JPEG  | JPEG 2000 | WN    | GB    | All   |
|------------------|-----------|-------|-----------|-------|-------|-------|
| LIVE 3D Phase I  | PLCC      | 0.621 | 0.908     | 0.941 | 0.915 | 0.913 |
|                  | SROCC     | 0.592 | 0.881     | 0.931 | 0.857 | 0.887 |
|                  | RMSE      | 5.124 | 5.439     | 5.632 | 5.845 | 6.337 |
| LIVE 3D Phase II | PLCC      | 0.827 | 0.727     | 0.973 | 0.960 | 0.863 |
|                  | SROCC     | 0.790 | 0.715     | 0.962 | 0.852 | 0.863 |
|                  | RMSE      | 4.120 | 6.742     | 2.484 | 3.882 | 5.651 |

In Tables 3 and 4, two performance indicators, PLCC and SROCC, are used to compare the proposed SIQE method with existing methods, including three 2D FR image quality evaluation methods (SSIM [1], FSIM [2], and GMSD [3]) and three SIQE methods (Chen [4], Bensalma and Larabi [20], and Zhou [21]). Table 3 shows that the proposed stereo image evaluation method outperforms the other NR image evaluation methods. As can be seen from Table 4, the proposed method is particularly advantageous for evaluating asymmetrically distorted stereo images. Since the proposed method requires neither original undistorted images nor human subjective scores, its evaluation performance may be slightly lower than that of the FR evaluation methods, but it achieves the desired effect of no-reference SIQE.


Table 3: PLCC comparison of the proposed method with existing methods. SSIM, FSIM, and GMSD are FR methods; Chen, Bensalma, Shao, and the proposed method are blind methods.

| Databases                    | Distortion | SSIM   | FSIM   | GMSD  | Chen   | Bensalma | Shao  | Proposed |
|------------------------------|------------|--------|--------|-------|--------|----------|-------|----------|
| LIVE 3D Phase I              | JP2K       | 0.868  | 0.937  | 0.928 | 0.855  | 0.848    | 0.872 | 0.908    |
|                              | JPEG       | 0.496  | 0.601  | 0.652 | 0.476  | 0.376    | 0.597 | 0.622    |
|                              | WN         | 0.938  | 0.931  | 0.947 | 0.9533 | 0.914    | 0.916 | 0.941    |
|                              | GB         | 0.912  | 0.933  | 0.938 | 0.939  | 0.916    | 0.923 | 0.915    |
|                              | All        | 0.899  | 0.936  | 0.943 | 0.929  | 0.895    | 0.899 | 0.913    |
| LIVE 3D Phase II-Symmetric   | JP2K       | 0.8162 | 0.8183 | 0.875 | 0.670  | 0.690    | 0.903 | 0.898    |
|                              | JPEG       | 0.6770 | 0.8456 | 0.844 | 0.601  | 0.551    | 0.873 | 0.890    |
|                              | WN         | 0.9749 | 0.9630 | 0.961 | 0.946  | 0.936    | 0.917 | 0.982    |
|                              | GB         | 0.8325 | 0.8638 | 0.928 | 0.918  | 0.953    | 0.977 | 0.953    |
|                              | All        | 0.7326 | 0.8301 | 0.925 | 0.814  | 0.823    | 0.912 | 0.918    |
| LIVE 3D Phase II-Asymmetric  | JP2K       | 0.676  | 0.785  | 0.868 | 0.722  | 0.619    | 0.789 | 0.876    |
|                              | JPEG       | 0.685  | 0.796  | 0.869 | 0.564  | 0.631    | 0.705 | 0.699    |
|                              | WN         | 0.823  | 0.941  | 0.916 | 0.945  | 0.933    | 0.924 | 0.965    |
|                              | GB         | 0.840  | 0.888  | 0.741 | 0.692  | 0.862    | 0.855 | 0.951    |
|                              | All        | 0.750  | 0.678  | 0.653 | 0.634  | 0.743    | 0.565 | 0.766    |


Table 4: SROCC comparison of the proposed method with existing methods. SSIM, FSIM, and GMSD are FR methods; Chen, Bensalma, Shao, and the proposed method are blind methods.

| Databases                    | Distortion | SSIM  | FSIM  | GMSD  | Chen  | Bensalma | Shao  | Proposed |
|------------------------------|------------|-------|-------|-------|-------|----------|-------|----------|
| LIVE 3D Phase I              | JP2K       | 0.867 | 0.901 | 0.905 | 0.871 | 0.817    | 0.900 | 0.881    |
|                              | JPEG       | 0.456 | 0.563 | 0.610 | 0.435 | 0.328    | 0.607 | 0.594    |
|                              | WN         | 0.938 | 0.930 | 0.947 | 0.939 | 0.906    | 0.927 | 0.931    |
|                              | GB         | 0.899 | 0.925 | 0.937 | 0.921 | 0.918    | 0.924 | 0.858    |
|                              | All        | 0.882 | 0.913 | 0.922 | 0.882 | 0.840    | 0.894 | 0.887    |
| LIVE 3D Phase II-Symmetric   | JP2K       | 0.726 | 0.824 | 0.867 | 0.662 | 0.608    | 0.904 | 0.872    |
|                              | JPEG       | 0.718 | 0.841 | 0.838 | 0.630 | 0.548    | 0.910 | 0.884    |
|                              | WN         | 0.945 | 0.937 | 0.927 | 0.907 | 0.924    | 0.937 | 0.942    |
|                              | GB         | 0.770 | 0.850 | 0.836 | 0.845 | 0.846    | 0.911 | 0.652    |
|                              | All        | 0.700 | 0.909 | 0.910 | 0.837 | 0.805    | 0.897 | 0.893    |
| LIVE 3D Phase II-Asymmetric  | JP2K       | 0.724 | 0.805 | 0.854 | 0.722 | 0.619    | 0.789 | 0.872    |
|                              | JPEG       | 0.714 | 0.805 | 0.876 | 0.636 | 0.678    | 0.696 | 0.686    |
|                              | WN         | 0.882 | 0.952 | 0.937 | 0.929 | 0.941    | 0.924 | 0.955    |
|                              | GB         | 0.807 | 0.850 | 0.888 | 0.691 | 0.840    | 0.803 | 0.838    |
|                              | All        | 0.719 | 0.661 | 0.642 | 0.611 | 0.697    | 0.524 | 0.711    |

A good image quality evaluation method should provide not only high accuracy but also good computational efficiency: prediction speed is an important performance indicator because the practicality of the algorithm must be considered. Here, we report the computation time required for a pair of stereo images. On the LIVE 3D Phase I database, the prediction time for a pair of stereo images using the NVIDIA 1080 Ti GPU is approximately 0.034 s (more than 25 frames per second), while on the LIVE 3D Phase II database it is approximately 0.039 s (also more than 25 frames per second). In terms of speed, the method proposed in this paper can therefore achieve real-time prediction (taking a real-time system to run at 25 frames per second).
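A measurement of this kind can be reproduced along the following lines; `model` and the three view tensors are placeholders for the trained network and batched input images:

```python
# Timing one stereo pair: three forward passes through the network.
import time

start = time.perf_counter()
for view in (left_view, right_view, cyclopean_view):  # placeholder inputs
    _ = model.predict(view, verbose=0)                # one quality map each
elapsed = time.perf_counter() - start
print(f"prediction time for one stereo pair: {elapsed:.3f} s")
```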

4. Conclusions

In this paper, we propose an effective SIQE method that differs from existing deep learning-based image evaluation methods. First, the training dataset of the proposed method is a self-made distorted image dataset, with the corresponding quality maps obtained by a high-performance FR image quality evaluation method used as labels. Second, the main network framework of the algorithm is the quality map generation network, which is trained on the distorted image dataset and quality map labels to obtain an optimal network, so that the predicted results are highly consistent with the ground-truth quality maps. Finally, the left view, the right view, and the cyclopean view of the stereo image are each input into the trained quality map generation network, the corresponding quality maps are predicted, and the three saliency-weighted quality scores are fused to produce the final stereo image quality score. Experiments were performed on the two LIVE 3D databases to verify the effectiveness of the proposed algorithm. The experimental results show that the algorithm improves the accuracy of image quality prediction and has good generalization ability.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (grant no. 61502429), the Zhejiang Provincial Natural Science Foundation of China (grant no. LY18F020012), the Zhejiang Open Foundation of the MOST Important Subjects, and the China Postdoctoral Science Foundation (grant no. 2015M581932).

References

  1. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  2. L. Zhang, L. Zhang, X. Mou, and D. Zhang, “FSIM: a feature similarity index for image quality assessment,” IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.
  3. W. Zhou, L. Yu, W. Qiu, Y. Zhou, and M. Wu, “Local gradient patterns (LGP): an effective local-statistical-feature extraction scheme for no-reference image quality assessment,” Information Sciences, vol. 397-398, pp. 1–14, 2017.
  4. M.-J. Chen, C.-C. Su, D.-K. Kwon, L. K. Cormack, and A. C. Bovik, “Full-reference quality assessment of stereopairs accounting for rivalry,” Signal Processing: Image Communication, vol. 28, no. 9, pp. 1143–1155, 2013.
  5. Y. Zhang and D. M. Chandler, “3D-MAD: a full reference stereo image quality estimator based on binocular lightness and contrast perception,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3810–3825, 2015.
  6. W. Zhou, G. Jiang, M. Yu, F. Shao, and Z. Peng, “PMFS: a perceptual modulated feature similarity metric for stereoscopic image quality assessment,” IEEE Signal Processing Letters, vol. 21, no. 8, pp. 1003–1006, 2014.
  7. X. Geng, L. Shen, K. Li, and P. An, “A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property,” Signal Processing: Image Communication, vol. 52, pp. 54–63, 2017.
  8. F. Gao, Y. Wang, P. Li, M. Tan, J. Yu, and Y. Zhu, “DeepSim: deep similarity for image quality assessment,” Neurocomputing, vol. 257, pp. 104–114, 2017.
  9. W. Zhou, G. Jiang, M. Yu, F. Shao, and Z. Peng, “Reduced-reference stereoscopic image quality assessment based on view and disparity zero-watermarks,” Signal Processing: Image Communication, vol. 29, no. 1, pp. 167–176, 2014.
  10. W. Zhou, L. Yu, Y. Zhou, W. Qiu, M.-W. Wu, and T. Luo, “Local and global feature learning for blind quality evaluation of screen content and natural scene images,” IEEE Transactions on Image Processing, vol. 27, no. 5, pp. 2086–2095, 2018.
  11. W. Zhou, S. Zhang, T. Pan et al., “Blind 3D image quality assessment based on self-similarity of binocular features,” Neurocomputing, vol. 224, pp. 128–134, 2017.
  12. W. Zhang, C. Qu, L. Ma, J. Guan, and R. Huang, “Learning structure of stereoscopic image for no-reference quality assessment with convolutional neural network,” Pattern Recognition, vol. 59, pp. 176–187, 2016.
  13. G. Jiang, H. Xu, M. Yu, T. Luo, and Y. Zhang, “Stereoscopic image quality assessment by learning non-negative matrix factorization-based color visual characteristics and considering binocular interactions,” Journal of Visual Communication and Image Representation, vol. 46, pp. 269–279, 2017.
  14. W. Hachicha, M. Kaaniche, A. Beghdadi, and F. A. Cheikh, “No-reference stereo image quality assessment based on joint wavelet decomposition and statistical models,” Signal Processing: Image Communication, vol. 54, pp. 107–117, 2017.
  15. W. Zhou, W. Qiu, and M.-W. Wu, “Utilizing dictionary learning and machine learning for blind quality assessment of 3-D images,” IEEE Transactions on Broadcasting, vol. 63, no. 2, pp. 404–415, 2017.
  16. J. Yang, P. An, J. Ma, K. Li, and L. Shen, “No-reference stereo image quality assessment by learning gradient dictionary-based color visual characteristics,” in Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5, IEEE, Florence, Italy, May 2018.
  17. J. Wu, J. Zeng, W. Dong, G. Shi, and W. Lin, “Blind image quality assessment with hierarchy: degradation from local structure to deep semantics,” Journal of Visual Communication and Image Representation, vol. 58, pp. 353–362, 2019.
  18. T.-J. Liu, C.-T. Lin, H.-H. Liu, and S.-C. Pei, “Blind stereoscopic image quality assessment based on hierarchical learning,” IEEE Access, vol. 7, pp. 8058–8069, 2019.
  19. Q. Jiang, F. Shao, W. Gao, Z. Chen, G. Jiang, and Y.-S. Ho, “Unified no-reference quality assessment of singly and multiply distorted stereoscopic images,” IEEE Transactions on Image Processing, vol. 28, no. 4, pp. 1866–1881, 2019.
  20. R. Bensalma and M.-C. Larabi, “A perceptual metric for stereoscopic image quality assessment based on the binocular energy,” Multidimensional Systems and Signal Processing, vol. 24, no. 2, pp. 281–316, 2013.
  21. W. Zhou, L. Yu, Y. Zhou, W. Qiu, M.-W. Wu, and T. Luo, “Blind quality estimator for 3D images based on binocular combination and extreme learning machine,” Pattern Recognition, vol. 71, pp. 207–217, 2017.
  22. W. Zhou and L. Yu, “Binocular responses for no-reference 3D image quality assessment,” IEEE Transactions on Multimedia, vol. 18, no. 6, pp. 1077–1084, 2016.
  23. K. Ma, Z. Duanmu, Q. Wu et al., “Waterloo exploration database: new challenges for image quality assessment models,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 1004–1016, 2017.
  24. J. Wang, A. Rehman, K. Zeng, S. Wang, and Z. Wang, “Quality prediction of asymmetrically distorted stereoscopic 3D images,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3400–3414, 2015.
  25. L. Zhang, Y. Gu, and H. Li, “SDSP: a novel saliency detection method by combining simple priors,” in Proceedings of the 2013 IEEE International Conference on Image Processing (ICIP), pp. 171–175, IEEE, Melbourne, VIC, Australia, September 2013.
  26. A. K. Moorthy, C.-C. Su, A. Mittal, and A. C. Bovik, “Subjective evaluation of stereoscopic image quality,” Signal Processing: Image Communication, vol. 28, no. 8, pp. 870–883, 2013.
  27. M.-J. Chen, L. K. Cormack, and A. C. Bovik, “No-reference quality assessment of natural stereopairs,” IEEE Transactions on Image Processing, vol. 22, no. 9, pp. 3379–3391, 2013.
  28. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” 2014, https://arxiv.org/abs/1412.6980.

Copyright © 2019 Wujie Zhou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

