The Scientific World Journal
Volume 2014 (2014), Article ID 890562, 11 pages
http://dx.doi.org/10.1155/2014/890562
Research Article

A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude

Faculty of Information Science and Engineering, Ningbo University, Ningbo 315211, China

Received 21 February 2014; Revised 9 June 2014; Accepted 10 June 2014; Published 15 July 2014

Academic Editor: Antonio Fernández-Caballero

Copyright © 2014 Shanshan Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. More specifically, we construct a 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise 3D gradient magnitude similarity (3D-GMS) along the horizontal, vertical, and viewpoint directions. Then, the quality score is obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.

1. Introduction

In recent years, there has been great progress in developing objective image quality assessment (IQA) metrics [1]. However, the development of 3D image/video quality indices is still in its early stage. Assessing 3D image quality is a very challenging issue because it is affected by 2D image quality, depth perception, visual comfort, and other factors [2, 3]. It is particularly challenging when the stereoscopic image pair consists of two views with different quality levels. Moreover, understanding of binocular vision, for example, binocular rivalry in stereopsis [4], remains limited in 3D image quality assessment (3D-IQA).

Numerous approaches for full-reference 2D image quality assessment (2D-IQA) have been investigated over the last several decades, such as structural similarity (SSIM) [5], multiscale SSIM (MS-SSIM) [6], and the universal quality index (UQI) [7]. Among these 2D metrics, gradient information has been employed in various ways. Chen et al. [8] proposed a gradient SSIM (G-SSIM) metric based on the edge as the structure information. Liu et al. [9] devised an IQA approach by integrating gradient similarity and luminance similarity. Zhu and Wang [10] proposed a multiscale visual gradient similarity (VGS) model by adopting different properties of gradient. Xue et al. [11] proposed an effective gradient magnitude similarity deviation (GMSD) model to predict the overall image quality score. However, 3D-IQA is still a less investigated problem due to the lack of understanding of 3D visual perception. In this paper, we classify existing 3D-IQA methods into two categories: (1) those that evaluate stereoscopic images using 2D-IQA metrics and (2) those that evaluate stereoscopic images considering 3D perceptual properties.

The most direct way of applying state-of-the-art 2D-IQA methods to 3D-IQA is to evaluate the two views of the stereoscopic image and the disparity/depth image separately with 2D metrics and then combine the results into an overall score. Boev et al. [12] combined monoscopic and stereoscopic quality components from the "Cyclopean" image and disparity map, respectively, for stereo-video evaluation. Campisi et al. [13] computed quality scores of both the stereo-pair and the disparity map by 2D quality metrics and then combined them to produce a final score. You et al. [14] investigated various 2D quality evaluators on a stereo-pair and its disparity map and found the optimal combination yielding the best performance. Hewage et al. [15] investigated the effectiveness of three 2D metrics (PSNR, VQM, and SSIM) for predicting the perceived quality of compressed color-plus-depth 3D video. However, for effective 3D evaluation, we cannot assess the perceived quality directly using 2D-IQA metrics, because the factors that determine perceived quality differ in 3D.

For measuring the perceived quality of stereoscopic images, several metrics have been proposed that integrate 3D perceptual properties. Hwang and Wu [16] fused the impacts of visual attention, depth variation, and stereo distortion in stereo image quality assessment. Bensalma and Larabi [17] devised a binocular energy quality metric (BEQM) by modeling the complex cells responsible for the construction of the binocular energy. Chen et al. [18] constructed a "Cyclopean" image from the stereo-pair and evaluated its quality with 2D-IQA metrics. De Silva et al. [19] measured the quality of symmetrically and asymmetrically compressed stereoscopic video by quantifying structural distortion, asymmetric blur, and content complexity. In our previous work [20], we proposed a perceptual quality assessment metric considering binocular visual characteristics, in which the stereoscopic images are separated into noncorresponding, binocular fusion, and binocular suppression regions. Other relevant works can be found in [21–24].

In this paper, we propose a simple yet effective quality assessment index for stereoscopic images based on 3D gradient magnitude. The main contributions of this paper are as follows: (1) we construct 3D data from a stereoscopic image pair to account for depth perception under different disparity spaces; (2) we compute 3D gradients using different kernels along the horizontal, vertical, and viewpoint directions; (3) we demonstrate that the 3D gradient magnitude places more emphasis on distortions around edge regions in the proposed 3D-IQA scheme. The rest of the paper is organized as follows. Section 2 presents the 3D data construction. Section 3 presents the proposed IQA for stereoscopic images. The experimental results are given and discussed in Section 4, and, finally, conclusions are drawn in Section 5.

2. 3D Data Construction

As is known, the process of binocular visual perception can be regarded as the responses of a pair of simple cells receiving input from the left and right eyes [25]. The output of a simple receptive field at position $\mathbf{x}_0$ is formulated as the convolution of the input image with a filter function (e.g., a Gabor filter):
\[ r(\mathbf{x}_0) = \int I(\mathbf{x})\, f(\mathbf{x} - \mathbf{x}_0)\, d\mathbf{x}, \]
where $I(\mathbf{x})$ is the input image and $f(\cdot)$ is the (complex-valued) receptive field function.

Then, the binocular energy response combines the outputs of the receptive fields of the left and right images as [26]
\[ E = \left[\operatorname{Re}(r_l) + \operatorname{Re}(r_r)\right]^2 + \left[\operatorname{Im}(r_l) + \operatorname{Im}(r_r)\right]^2, \]
where $\operatorname{Re}(\cdot)$ and $\operatorname{Im}(\cdot)$ denote the real and imaginary parts of the response. With this understanding, the preferred disparity can be estimated by $d = \Delta\phi / \omega$, where $\Delta\phi = \phi_l - \phi_r$ is the phase difference between the left and right images, $\phi_l$ and $\phi_r$ are the response phases of the two views, and $\omega$ is the radial frequency of the cell.
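To make the energy model above concrete, the following minimal 1D numpy sketch (our own illustration, not part of the proposed method) filters a stereo signal pair with a complex Gabor kernel, computes the binocular energy, and recovers the disparity from the phase difference. The helper name gabor_response and all parameter values are assumptions chosen for the demonstration.

```python
import numpy as np

def gabor_response(signal, omega, sigma):
    """Complex response of a 1D Gabor receptive field centered at each sample."""
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)
    return np.convolve(signal, gabor, mode='same')

# Toy left/right signals: the right view is the left view shifted by 2 samples.
left = np.sin(2 * np.pi * np.arange(128) / 16)
right = np.roll(left, 2)

omega = 2 * np.pi / 16                    # radial frequency of the cell
r_l = gabor_response(left, omega, sigma=4.0)
r_r = gabor_response(right, omega, sigma=4.0)

# Binocular energy: squared magnitude of the summed left/right responses.
energy = np.abs(r_l + r_r) ** 2

# Phase-difference disparity estimate: d = delta_phi / omega.
delta_phi = np.angle(r_l * np.conj(r_r))
disparity = delta_phi / omega
print(disparity[40:50])  # approx. +2 samples in the interior of the signal
```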

Depth perception is the most important feature of stereoscopic images and arises from the horizontal separation between the left and right eyes [27]. The positional difference between the receptive fields of the two eyes is crucial for detecting variations in depth. Given two input images $I_l$ and $I_r$, the goal of disparity estimation is to find an optimal binocular disparity $d$ so that the two images match as closely as possible:
\[ d^{\ast}(x, y) = \arg\min_{d} \left[ I_l(x, y) - I_r(x - d, y) \right]^2. \]

An important issue in understanding binocular vision is how to characterize binocular disparity. However, it is usually not easy to assess the quality of an estimated disparity map, since ground truth disparity is generally not available, even though numerous disparity estimation algorithms have been proposed [28, 29]. Therefore, rather than committing to a single estimated disparity map, we define the disparity space image (DSI) as the squared difference between the shifted left and right images [30]:
\[ \mathrm{DSI}(x, y, d) = \left[ I_l(x, y) - I_r(x - d, y) \right]^2, \quad d \in \left[d_{\min}, d_{\max}\right]. \]

Thus, we obtain a 3D volume of intensity differences over the spatial positions and the disparity range. A disparity map can be obtained by searching for the optimal path through this 3D volume. In this paper, we advocate the 3D volume itself as the basic processing unit. The local structured features in the DSI effectively reflect the impact of distortion at different disparity ranges, so it is useful to examine how different types of distortion manifest across the disparity space. Figure 1 shows slices of the DSI under different types of distortion. Quality degradation in the left and right views is directly reflected in the computed DSI; that is, the disparity values with the minimum DSI values differ before and after degradation, and thus depth perception is affected (i.e., it can be measured by the DSI).
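The DSI construction can be sketched in a few lines of numpy (an illustration under our own conventions, assuming rectified views and a nonnegative disparity search range; the function name is ours, not from [30]):

```python
import numpy as np

def disparity_space_image(left, right, d_max):
    """DSI(d, y, x) = (I_l(y, x) - I_r(y, x - d))^2 for d = 0..d_max.
    Columns where x - d falls outside the right view are set to +inf."""
    H, W = left.shape
    dsi = np.full((d_max + 1, H, W), np.inf)
    for d in range(d_max + 1):
        diff = left[:, d:].astype(np.float64) - right[:, :W - d].astype(np.float64)
        dsi[d, :, d:] = diff ** 2
    return dsi

# Winner-take-all disparity: for each pixel, the d minimizing the DSI.
# d_map = disparity_space_image(left_img, right_img, 32).argmin(axis=0)
```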

Figure 1: x-d and y-d cross-sectional views of the DSI under different types of distortion.

3. Proposed Quality Assessment Index

3.1. Traditional SSIM Index

The SSIM index in [5] is defined as the combination of three components: luminance similarity, contrast similarity, and structural similarity, which are mathematically described as
\[ l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \quad c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \quad s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}, \]
where $\mu_x$, $\mu_y$, $\sigma_x^2$, $\sigma_y^2$, and $\sigma_{xy}$ are the mean of $x$, the mean of $y$, the variance of $x$, the variance of $y$, and the covariance of $x$ and $y$, respectively; $C_1$, $C_2$, and $C_3$ are constants that avoid the denominators being zero. Each component ranges in $[0, 1]$, where 0 indicates no similarity between the two signals and 1 implies perfect similarity. The SSIM index is given as
\[ \mathrm{SSIM}(x, y) = \left[l(x, y)\right]^{\alpha} \left[c(x, y)\right]^{\beta} \left[s(x, y)\right]^{\gamma}, \]
where $\alpha$, $\beta$, and $\gamma$ are parameters that adjust the relative importance of the three components. In this work, we generalize the single-image SSIM index to a new 3D image-pair quality index by incorporating 3D gradient magnitude information.
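For reference, a compact numpy sketch of the SSIM computation (using global image statistics and the common simplification $C_3 = C_2/2$, which merges the contrast and structure terms; the reference implementation of [5] instead uses an 11x11 Gaussian-weighted local window):

```python
import numpy as np

def ssim_index(x, y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    """Single-scale SSIM with global statistics and alpha = beta = gamma = 1."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    luminance = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
    # With C3 = C2 / 2, the contrast and structure terms collapse into one.
    contrast_structure = (2 * cov_xy + C2) / (var_x + var_y + C2)
    return luminance * contrast_structure
```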

3.2. 3D Gradient Computation

In a 2D image, the gradient is usually computed by convolving the image with a linear filter, such as the Roberts or Sobel operator. In this work, we use different kernels to compute the 3D gradient along three directions. For simplicity, we use the first-order-derivative kernels of [31], shown in Figure 2. Since the absolute values of the nonzero kernel elements are all 1, convolving the kernels with a 3D volume $V$ yields the horizontal, vertical, and viewpoint gradients, which can be computed quickly as
\[ G_h = V \ast k_h, \qquad G_v = V \ast k_v, \qquad G_p = V \ast k_p, \]
where $\ast$ denotes 3D convolution and $k_h$, $k_v$, and $k_p$ denote the kernels for the horizontal, vertical, and viewpoint directions, respectively.

Figure 2: Kernels used for 3D gradient computation in three directions.
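Since the exact 5x5x5 templates of [31] are given in Figure 2 and not reproduced here, the sketch below substitutes a simple central-difference kernel to show the mechanics of the three directional convolutions (an illustration, not the paper's exact filters):

```python
import numpy as np
from scipy import ndimage

def gradients_3d(volume):
    """Directional gradients of a DSI volume laid out as (disparity, height, width).
    A [1, 0, -1] kernel stands in for the 5x5x5 first-derivative templates of [31]."""
    volume = np.asarray(volume, dtype=np.float64)
    k = np.array([1.0, 0.0, -1.0])
    g_p = ndimage.convolve1d(volume, k, axis=0, mode='nearest')  # viewpoint direction
    g_v = ndimage.convolve1d(volume, k, axis=1, mode='nearest')  # vertical direction
    g_h = ndimage.convolve1d(volume, k, axis=2, mode='nearest')  # horizontal direction
    return g_h, g_v, g_p
```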
3.3. 3D Gradient Magnitude Similarity (3D-GMS) Based Quality Metric

With the 3D gradient magnitude values of the original and distorted 3D volumes, the pointwise 3D-GMS index is defined as
\[ \mathrm{GMS}(i) = \frac{2\, m_o(i)\, m_d(i) + C}{m_o^2(i) + m_d^2(i) + C}, \]
where the parameter $C$ is a constant to avoid the denominator being zero, and $m_o(i)$ and $m_d(i)$ are the 3D gradient magnitudes of the original and distorted 3D volumes at point $i$, defined as the root mean square of the directional gradients along the three directions:
\[ m(i) = \sqrt{\frac{G_h^2(i) + G_v^2(i) + G_p^2(i)}{3}}. \]
The final 3D-GMS score is the average of $\mathrm{GMS}(i)$ over all $N$ points in the 3D volume, $\mathrm{3D\text{-}GMS} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{GMS}(i)$.
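A minimal end-to-end sketch of the 3D-GMS pooling follows. Here np.gradient stands in for the derivative kernels of [31], and the value of the stabilizing constant C is our own placeholder rather than the paper's setting:

```python
import numpy as np

def gms_3d(vol_ref, vol_dst, C=1e-4):
    """Average 3D gradient magnitude similarity between reference and distorted volumes."""
    def grad_mag(v):
        g_p, g_v, g_h = np.gradient(np.asarray(v, dtype=np.float64))
        return np.sqrt((g_h ** 2 + g_v ** 2 + g_p ** 2) / 3.0)  # RMS over three directions
    m_o, m_d = grad_mag(vol_ref), grad_mag(vol_dst)
    gms = (2.0 * m_o * m_d + C) / (m_o ** 2 + m_d ** 2 + C)
    return gms.mean()  # final quality index: 1.0 for identical volumes
```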

The 3D-GMS value reflects the range of distortion degrees in an image: the higher the 3D-GMS value, the larger the distortion range and, thus, the lower the perceptual quality. Here, we present one example to illustrate this point. The first row of Figure 3 shows (a) a Gaussian blurred image of the "Balloons" test sequence from the NBU 3D IQA database and the corresponding horizontal, vertical, and viewpoint gradient maps in (b)~(d). The second row of Figure 3 shows the JPEG compressed image in (e) and the corresponding gradient maps in (f)~(h). The third row of Figure 3 shows the white noise (WN) distorted image in (i) and the corresponding gradient maps in (j)~(l). Note that only one viewpoint slice is shown for the viewpoint gradient maps in (d), (h), and (l). The difference mean opinion score (DMOS) values for the Gaussian blurred, JPEG compressed, and WN distorted stereoscopic images are 29.435, 30.609, and 30.130, respectively; that is, the subjective ratings of these distorted stereoscopic images are similar. The corresponding 3D-GMS scores are 0.9720, 0.9803, and 0.9793, which are likewise close, demonstrating that the objective quality scores are consistent with the DMOS values.

Figure 3: Examples of quality degraded left images and the corresponding gradient maps of the "Balloons" test sequence. (a)~(d): (a) Gaussian blurred image; (b) horizontal gradient map of (a); (c) vertical gradient map of (a); (d) viewpoint gradient map of (a); DMOS = 29.435, 3D-GMS = 0.9720. (e)~(h): (e) JPEG compressed image; (f) horizontal gradient map of (e); (g) vertical gradient map of (e); (h) viewpoint gradient map of (e); DMOS = 30.609, 3D-GMS = 0.9803. (i)~(l): (i) WN distorted image; (j) horizontal gradient map of (i); (k) vertical gradient map of (i); (l) viewpoint gradient map of (i); DMOS = 30.130, 3D-GMS = 0.9793.

4. Experimental Results and Analyses

4.1. Databases and Performance Measures

In the experiments, four publicly available 3D IQA databases are used to verify the performance of the proposed metric for stereoscopic images: the NBU 3D IQA Database [20], the LIVE 3D IQA Phase I Database [18], and the LIVE 3D IQA Phase II Databases (symmetric and asymmetric) [32]. The NBU 3D IQA Database consists of 312 distorted stereoscopic pairs generated from 12 reference stereoscopic images; five types of distortion (JPEG, JP2K, Gblur, WN, and H.264) are symmetrically applied to the left and right reference images at various levels. The LIVE 3D IQA Phase I Database consists of 365 distorted stereoscopic pairs generated from 20 reference stereoscopic images. The LIVE 3D IQA Phase II-Symmetric and Phase II-Asymmetric Databases consist of 210 and 240 distorted stereoscopic pairs, respectively, generated from 8 reference stereoscopic images. Five types of distortion (JPEG, JP2K, Gblur, WN, and FF) are applied at various levels: symmetrically for the LIVE 3D IQA Phase I and Phase II-Symmetric Databases and asymmetrically for the Phase II-Asymmetric Database.

In the paper, three commonly used performance indicators are used to benchmark the proposed metric against the relevant state-of-the-art techniques: the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SRCC), and the root mean squared error (RMSE) between the objective and subjective scores. For a perfect match between the objective and subjective scores, PLCC = SRCC = 1 and RMSE = 0. For the nonlinear regression, we use the following five-parameter logistic function [33]:
\[ Q(s) = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + \exp\left(\beta_2 \left(s - \beta_3\right)\right)} \right) + \beta_4 s + \beta_5, \]
where $s$ is the objective score and the parameters $\beta_1, \beta_2, \beta_3, \beta_4$, and $\beta_5$ are determined by fitting the objective scores to the subjective scores.
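This evaluation procedure can be reproduced with scipy as in the following sketch (the initial parameter guesses in p0 are our own heuristics, not values from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic5(s, b1, b2, b3, b4, b5):
    """Five-parameter logistic mapping from objective score s to predicted DMOS."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (s - b3)))) + b4 * s + b5

def benchmark(objective, dmos):
    """PLCC and RMSE after nonlinear regression; SRCC on the raw scores."""
    objective, dmos = np.asarray(objective), np.asarray(dmos)
    p0 = [np.max(dmos), 1.0, np.mean(objective), 1.0, np.mean(dmos)]  # heuristic start
    params, _ = curve_fit(logistic5, objective, dmos, p0=p0, maxfev=10000)
    predicted = logistic5(objective, *params)
    plcc = pearsonr(predicted, dmos)[0]
    srcc = spearmanr(objective, dmos)[0]  # rank-based, unaffected by the regression
    rmse = float(np.sqrt(np.mean((predicted - dmos) ** 2)))
    return plcc, srcc, rmse
```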

4.2. Overall Assessment Performance

In Table 1, we compare the performance of the competing 2D-IQA and 3D-IQA metrics on the four databases in terms of PLCC, SRCC, and RMSE. The three 2D-IQA metrics estimate the quality of each view separately and generate a weighted average score; the proposed scheme outperforms all three on the four databases. You et al.'s and Benoit et al.'s schemes combine 2D quality metrics computed on the stereoscopic images and on disparity maps, so their performance depends heavily on the estimated disparity maps (the stereo matching algorithm of [29] is used in this paper); the proposed scheme performs better than both on the three databases with symmetric distortions (the NBU 3D IQA Database, the LIVE 3D IQA Phase I Database, and the LIVE 3D IQA Phase II-Symmetric Database). The performances of Bensalma et al.'s, Chen et al.'s, and Shao et al.'s schemes are reasonably good on most of the databases, but the proposed scheme still achieves comparable performance. Figure 4 shows the scatter plots of predicted quality scores against subjective quality scores (in terms of DMOS) of the proposed scheme on the four databases. Overall, the proposed scheme has an impressive consistency with human perception.

Table 1: Performance of the proposed method and the other seven schemes in terms of PLCC, SRCC, and RMSE on the four databases (bold: best performance).
Figure 4: Scatter plots of predicted quality scores against the subjective scores (DMOS) of the proposed method on the four databases.
4.3. Performance Comparison on Individual Distortion Types

To evaluate the prediction performance of the proposed method more comprehensively, we compare the schemes on each type of distortion. The PLCC and SRCC results are listed in Tables 2 and 3, where the top two metrics are highlighted in boldface. The proposed scheme is among the top two metrics 13 times in terms of PLCC, followed by You et al.'s scheme (9 times) and Shao et al.'s scheme (6 times), even though the overall performance of You et al.'s and Shao et al.'s schemes is not the best on the four databases. Since the proposed scheme measures structural degradation, it is especially effective for the Gblur distortion type and is also an effective measure for the WN distortion type on the NBU 3D IQA Database, the LIVE 3D IQA Phase I Database, and the LIVE 3D IQA Phase II-Symmetric Database. Even though some 2D metrics achieve remarkable performance in evaluating the quality of 2D images, they may not be sufficient to predict the perceptual quality of stereoscopic images. In general, the proposed 3D gradient magnitude serves as an excellent feature for quality prediction.

Table 2: Performance comparison of the eight schemes on each individual distortion type in terms of PLCC.
Table 3: Performance comparison of the eight schemes on each individual distortion type in terms of SRCC.
4.4. Discussion of Computational Complexity

Computational complexity is another important factor in evaluating the proposed scheme. The DSIs are computed offline in advance. The main operations of the proposed 3D-GMS are the 3D gradient computations (convolutions with three different 5 × 5 × 5 templates), which produce the gradient magnitude maps. Overall, the proposed 3D-GMS provides a low-complexity solution for 3D-IQA compared with the other 3D-IQA metrics (e.g., You et al.'s, Benoit et al.'s, Bensalma et al.'s, Chen et al.'s, and Shao et al.'s schemes).

5. Conclusions

In this study, we devised a simple yet effective quality assessment index, called 3D gradient magnitude similarity (3D-GMS), for stereoscopic images. More specifically, we construct a 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise gradient magnitude similarity along three directions. Then, the average 3D-GMS score over all points in the 3D volume is computed as the final quality index. Compared with state-of-the-art 2D image quality assessment (2D-IQA) and 3D image quality assessment (3D-IQA) metrics, the proposed 3D-GMS metric performs better in terms of both accuracy and efficiency on four publicly available 3D IQA databases. In future work, we will explore how to incorporate 3D visual perceptual models, such as 3D visual attention, into the 3D-GMS metric.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of China (Grants 61271021, 61271270, and U130125). It was also sponsored by K. C. Wong Magna Fund in Ningbo University.

References

1. W. Lin and C.-C. Jay Kuo, “Perceptual visual quality metrics: a survey,” Journal of Visual Communication and Image Representation, vol. 22, no. 4, pp. 297–312, 2011.
2. A. K. Moorthy and A. C. Bovik, “A survey on 3D quality of experience and 3D quality assessment,” in Proceedings of the 18th Human Vision and Electronic Imaging (SPIE '13), vol. 8651 of Proceedings of SPIE, Burlingame, Calif, USA, February 2013.
3. R. Vlad, P. Ladret, and A. Guérin, “Three factors that influence the overall quality of the stereoscopic 3D content: image quality, comfort, and realism,” in Proceedings of the 18th Human Vision and Electronic Imaging (SPIE '13), vol. 8651 of Proceedings of SPIE, San Jose, Calif, USA, February 2013.
4. I. P. Howard and B. J. Rogers, Binocular Fusion and Rivalry in Binocular Vision and Stereopsis, Oxford University Press, New York, NY, USA, 1995.
5. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
6. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Proceedings of the 37th Asilomar Conference on Signals, Systems and Computers, vol. 2, pp. 1398–1402, November 2003.
7. Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81–84, 2002.
8. G. H. Chen, C. L. Yang, and S. L. Xie, “Gradient-based structural similarity for image quality assessment,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 2929–2932, Atlanta, Ga, USA, October 2006.
9. A. Liu, W. Lin, and M. Narwaria, “Image quality assessment based on gradient similarity,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1500–1512, 2012.
10. J. Zhu and N. Wang, “Image quality assessment by visual gradient similarity,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 919–933, 2012.
11. W. Xue, L. Zhang, X. Mou, and A. C. Bovik, “Gradient magnitude similarity deviation: a highly efficient perceptual image quality index,” IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 684–695, 2014.
12. A. Boev, A. Gotchev, K. Egiazarian, A. Aksay, and G. B. Akar, “Towards compound stereo-video quality metric: a specific encoder-based framework,” in Proceedings of the 7th IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 218–222, Denver, Colo, USA, March 2006.
13. P. Campisi, A. Benoit, P. Le Callet, and R. Cousseau, “Quality assessment of stereoscopic images,” EURASIP Journal on Image and Video Processing, vol. 2008, Article ID 659024, 2008.
14. J. You, L. Xing, A. Perkis, and X. Wang, “Perceptual quality assessment for stereoscopic images based on 2D image quality metrics and disparity analysis,” in Proceedings of the International Workshop on Video Processing and Quality Metrics for Consumer Electronics, Scottsdale, Ariz, USA, 2010.
15. C. T. E. R. Hewage, S. T. Worrall, S. Dogan, and A. M. Kondoz, “Prediction of stereoscopic video quality using objective quality models of 2-D video,” Electronics Letters, vol. 44, no. 16, pp. 963–965, 2008.
16. J. J. Hwang and H. R. Wu, “Stereo image quality assessment using visual attention and distortion predictors,” KSII Transactions on Internet and Information Systems, vol. 5, no. 9, pp. 1613–1631, 2011.
17. R. Bensalma and M. Larabi, “A perceptual metric for stereoscopic image quality assessment based on the binocular energy,” Multidimensional Systems and Signal Processing, vol. 24, no. 2, pp. 281–316, 2013.
18. M.-J. Chen, C.-C. Su, D.-K. Kwon, L. K. Cormack, and A. C. Bovik, “Full-reference quality assessment of stereopairs accounting for rivalry,” Signal Processing: Image Communication, vol. 28, no. 9, pp. 1143–1155, 2013.
19. V. de Silva, H. Kodikara Arachchi, and A. Kondoz, “Toward an impairment metric for stereoscopic video: a full-reference video quality metric to assess compressed stereoscopic video,” IEEE Transactions on Image Processing, vol. 22, no. 9, pp. 3392–3404, 2013.
20. F. Shao, W. Lin, S. Gu, and G. Jiang, “Perceptual full-reference quality assessment of stereoscopic images by considering binocular visual characteristics,” IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 1940–1953, 2013.
21. S. Ryu, D. H. Kim, and K. Sohn, “Stereoscopic image quality metric based on binocular perception model,” in Proceedings of the 19th IEEE International Conference on Image Processing (ICIP '12), pp. 609–612, Orlando, Fla, USA, October 2012.
22. W. Hachicha, A. Beghdadi, and F. A. Cheikh, “Stereo image quality assessment using a binocular just noticeable difference model,” in Proceedings of the 20th IEEE International Conference on Image Processing (ICIP '13), Melbourne, Australia, September 2013.
23. F. Qi, T. Jiang, X. Fan, S. Ma, and D. Zhao, “Stereoscopic video quality assessment based on stereo just-noticeable difference model,” in Proceedings of the 20th IEEE International Conference on Image Processing (ICIP '13), pp. 34–38, Melbourne, Australia, September 2013.
24. H. Ko, C.-S. Kim, S. Y. Choi, and C.-C. Jay Kuo, “3D image quality index using SDP-based binocular perception model,” in Proceedings of the IEEE 11th IVMSP Workshop, pp. 1–4, Seoul, South Korea, June 2013.
25. R. Blake and H. Wilson, “Binocular vision,” Vision Research, vol. 51, no. 7, pp. 754–770, 2011.
26. D. J. Fleet, H. Wagner, and D. J. Heeger, “Neural encoding of binocular disparity: energy models, position shifts and phase shifts,” Vision Research, vol. 36, no. 12, pp. 1839–1857, 1996.
27. F. da Faria, J. Batista, and H. Araujo, “Stereoscopic depth perception using a model based on the primary visual cortex,” PLoS ONE, vol. 8, no. 12, Article ID e80745, 2013.
28. W. Stürzl, U. Hoffmann, and H. A. Mallot, “Vergence control and disparity estimation with energy neurons: theory and implementation,” in Proceedings of the International Conference on Artificial Neural Networks, pp. 1255–1260, 2002.
29. V. Kolmogorov and R. Zabih, “Computing visual correspondence with occlusions using graph cuts,” in Proceedings of the 8th International Conference on Computer Vision (ICCV '01), vol. 2, pp. 508–515, Vancouver, Canada, July 2001.
30. R. Szeliski and D. Scharstein, “Sampling the disparity space image,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 3, pp. 419–425, 2004.
31. Y. Li, J. Zhao, J. Yin, and X. Zhao, “A fast simple optical flow computation approach based on 3D gradient,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 5, pp. 842–853, 2014.
32. M.-J. Chen, L. K. Cormack, and A. C. Bovik, “No-reference quality assessment of natural stereopairs,” IEEE Transactions on Image Processing, vol. 22, no. 9, pp. 3379–3391, 2013.
33. P. G. Gottschalk and J. R. Dunn, “The five-parameter logistic: a characterization and comparison with the four-parameter logistic,” Analytical Biochemistry, vol. 343, no. 1, pp. 54–65, 2005.