Abstract

We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. Specifically, we construct a 3D volume from the stereoscopic image pair across different disparity spaces and calculate the pointwise 3D gradient magnitude similarity (3D-GMS) along the horizontal, vertical, and viewpoint directions. The quality score is then obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most closely related existing methods, the devised algorithm achieves high consistency with subjective assessment.

1. Introduction

In recent years, there has been great progress in developing objective image quality assessment (IQA) metrics [1]. However, the development of 3D image/video quality indices is still at an early stage. Assessing 3D image quality is a challenging problem because it is affected by 2D image quality, depth perception, visual comfort, and other factors [2, 3]. It is particularly challenging when the stereoscopic image pair consists of two views with different quality levels. Moreover, understanding of binocular visual perception, for example, binocular rivalry in stereopsis [4], is still limited in 3D image quality assessment (3D-IQA).

Numerous approaches for full-reference 2D image quality assessment (2D-IQA) have been widely researched over the last several decades, such as structural similarity (SSIM) [5], multiscale SSIM (MS-SSIM) [6], and the universal quality index (UQI) [7]. Among these 2D metrics, gradient information has been employed in various ways. Chen et al. [8] proposed a gradient SSIM (G-SSIM) metric using edges as the structural information. Liu et al. [9] devised an IQA approach by integrating gradient similarity and luminance similarity. Zhu and Wang [10] proposed a multiscale visual gradient similarity (VGS) model by exploiting different properties of the gradient. Xue et al. [11] proposed an effective gradient magnitude similarity deviation (GMSD) model to predict the overall image quality score. However, 3D-IQA remains less investigated owing to a limited understanding of 3D visual perception. In this paper, we classify existing 3D-IQA methods into two categories: (1) methods that evaluate stereoscopic images using 2D-IQA metrics and (2) methods that evaluate stereoscopic images by considering 3D perceptual properties.

The most direct way of applying state-of-the-art 2D-IQA methods to 3D-IQA is to evaluate the two views of a stereoscopic image, and optionally the disparity/depth image, separately with 2D metrics, and then combine the results into an overall score. Boev et al. [12] combined monoscopic and stereoscopic quality components, derived from the “Cyclopean” image and the disparity map, respectively, for stereo-video evaluation. Campisi et al. [13] computed quality scores of both the stereo-pair and the disparity map by 2D quality metrics and then combined them to produce a final score. You et al. [14] investigated various 2D quality evaluators on a stereo-pair and its disparity map and found the optimal combination yielding the best performance. Hewage et al. [15] investigated the effectiveness of three 2D metrics (PSNR, VQM, and SSIM) in predicting the perceived quality of compressed color-plus-depth 3D video. However, 2D-IQA metrics alone cannot effectively assess perceived 3D quality, because the factors determining perceived quality differ in 3D.

For measuring the perceived quality of stereoscopic images, several metrics have been proposed that integrate 3D perceptual properties. Hwang and Wu [16] fused the impacts of visual attention, depth variation, and stereo distortion in stereo image quality assessment. Bensalma and Larabi [17] devised a binocular energy quality metric (BEQM) by modeling the complex cells responsible for the construction of binocular energy. Chen et al. [18] constructed a “Cyclopean” image from the stereo-pair and evaluated its quality by 2D-IQA metrics. De Silva et al. [19] measured the quality of symmetrically and asymmetrically compressed stereoscopic content by quantifying structural distortion, asymmetric blur, and content complexity. In our previous work [20], we proposed a perceptual quality assessment metric based on binocular visual characteristics, in which the stereoscopic images are separated into noncorresponding, binocular fusion, and binocular suppression regions. Other relevant works can be found in [21–24].

In this paper, we propose a simple yet effective quality assessment index for stereoscopic images based on 3D gradient magnitude. The main contributions of this paper are as follows: (1) we construct 3D data from a stereoscopic image pair to account for depth perception under different disparity spaces; (2) we compute the 3D gradient using different kernels along the horizontal, vertical, and viewpoint directions; (3) we demonstrate that the 3D gradient magnitude places more emphasis on distortions around edge regions in the proposed 3D-IQA scheme. The rest of the paper is organized as follows. Section 2 presents the 3D data construction. Section 3 presents the proposed IQA index for stereoscopic images. The experimental results are given and discussed in Section 4, and conclusions are drawn in Section 5.

2. 3D Data Construction

As is known, binocular visual perception can be regarded as the responses of a pair of simple cells receiving input from the left and right eyes [25]. The output of a simple receptive field at position $(x_0, y_0)$ is formulated as the convolution of the input image $I$ with a filter function $f$ (e.g., a Gabor filter):

$$r(x_0, y_0) = \iint f(x - x_0, y - y_0)\, I(x, y)\, \mathrm{d}x\, \mathrm{d}y.$$

Then, the binocular energy response combines the outputs of the receptive fields for the left and right images as [26]

$$E = \big(\operatorname{Re}(r_L) + \operatorname{Re}(r_R)\big)^2 + \big(\operatorname{Im}(r_L) + \operatorname{Im}(r_R)\big)^2,$$

where $\operatorname{Re}(\cdot)$ and $\operatorname{Im}(\cdot)$ are the real and imaginary parts of the response. With this understanding, the preferred disparity can be estimated by $d = \Delta\phi/\omega$, where $\Delta\phi = \phi_L - \phi_R$ is the phase difference between the left and right images, $\phi_L$ and $\phi_R$ are the local phases of the left and right responses, and $\omega$ is the radial frequency of the cell.
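For illustration, the following minimal Python sketch computes a binocular energy map and a phase-difference disparity estimate from complex Gabor responses. It is not the authors' implementation; the kernel construction, parameter values, and function names are our own assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size=15, omega=0.5, sigma=3.0):
    # Complex 1D horizontal Gabor: carrier exp(j*omega*x) times a Gaussian envelope.
    x = np.arange(size) - size // 2
    return np.exp(1j * omega * x) * np.exp(-x**2 / (2.0 * sigma**2))

def binocular_energy(left, right, omega=0.5):
    k = gabor_kernel(omega=omega)
    kr, ki = k.real[None, :], k.imag[None, :]   # row filters (real/imaginary parts)
    # Complex simple-cell responses of the left and right views.
    r_l = convolve(left.astype(float), kr) + 1j * convolve(left.astype(float), ki)
    r_r = convolve(right.astype(float), kr) + 1j * convolve(right.astype(float), ki)
    energy = np.abs(r_l + r_r) ** 2             # binocular energy response
    dphi = np.angle(r_l * np.conj(r_r))         # phase difference between the views
    return energy, dphi / omega                 # preferred disparity d = dphi / omega
```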

Depth perception is the most important feature of stereoscopic images; it arises from the horizontal separation between the left and right eyes [27]. The positional difference between the receptive fields of the two cells is crucial for detecting variations in depth. Given two input images $I_L$ and $I_R$, the goal of disparity estimation is to find the optimal binocular disparity $d$ so that the two images match as closely as possible:

$$d^{*}(x, y) = \arg\min_{d}\, \big(I_L(x, y) - I_R(x - d, y)\big)^2.$$

An important issue in understanding binocular vision is how to characterize binocular disparity. Numerous disparity estimation algorithms have been proposed [28, 29]. However, it is usually not easy to assess the quality of an estimated disparity map, since ground-truth disparity is generally not available. Therefore, rather than committing to a single estimated disparity map, we define the disparity space image (DSI) as the squared difference between the left image and the shifted right image as follows [30]:

$$\mathrm{DSI}(x, y, d) = \big(I_L(x, y) - I_R(x - d, y)\big)^2, \quad d_{\min} \le d \le d_{\max}.$$

Thus, we obtain a 3D volume of intensity differences over the spatial positions and the disparity range. The disparity map can be obtained by searching for the optimal path through this 3D volume. In this paper, we advocate the 3D volume as the basic processing unit, because the local structural features in the DSI effectively reflect the impact of distortion over different disparity ranges. It is therefore instructive to examine how different types of distortion manifest across different disparity spaces. Figure 1 shows slices of the DSI under different types of distortion. It is evident that quality degradation in the left and right views is directly reflected in the computed DSI; that is, the disparity values attaining the minimum DSI values differ before and after degradation, so depth perception is affected, and this effect can be measured from the DSI.
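As a concrete illustration, the DSI volume of the definition above can be built in a few lines of Python. This is a minimal sketch assuming nonnegative disparities and wrap-around border handling; the names `disparity_space_image` and `d_max` are ours.

```python
import numpy as np

def disparity_space_image(left, right, d_max=32):
    # DSI(x, y, d) = (I_L(x, y) - I_R(x - d, y))^2 for d = 0 .. d_max.
    h, w = left.shape
    dsi = np.zeros((h, w, d_max + 1))
    for d in range(d_max + 1):
        shifted = np.roll(right.astype(float), d, axis=1)  # shift right view by d pixels
        dsi[:, :, d] = (left.astype(float) - shifted) ** 2
    return dsi

# A simple winner-take-all disparity map: the d minimizing the DSI at each pixel.
# dsi = disparity_space_image(I_L, I_R)
# disparity = np.argmin(dsi, axis=2)
```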

3. Proposed Quality Assessment Index

3.1. Traditional SSIM Index

The SSIM index in [5] is defined through three components: luminance similarity, contrast similarity, and structural similarity, which are mathematically described as

$$l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \qquad c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \qquad s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3},$$

where $\mu_x$, $\mu_y$, $\sigma_x^2$, $\sigma_y^2$, and $\sigma_{xy}$ are the mean of $x$, the mean of $y$, the variance of $x$, the variance of $y$, and the covariance of $x$ and $y$, respectively; $C_1$, $C_2$, and $C_3$ are constants that prevent the denominators from being zero. The above components range in $[0, 1]$, where 0 indicates no similarity and 1 indicates perfect similarity. The SSIM index is given as

$$\mathrm{SSIM}(x, y) = \big[l(x, y)\big]^{\alpha}\, \big[c(x, y)\big]^{\beta}\, \big[s(x, y)\big]^{\gamma},$$

where $\alpha$, $\beta$, and $\gamma$ are parameters that adjust the relative importance of the three components. In this work, we generalize the single-image SSIM index to a new 3D image pair quality index by incorporating 3D gradient magnitude information.
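The following compact Python sketch evaluates the three SSIM components globally over a pair of images. The original SSIM uses local sliding windows, so this whole-image form is for illustration only, and the standard constants $C_1 = (0.01 \times 255)^2$ and $C_2 = (0.03 \times 255)^2$ are assumed.

```python
import numpy as np

def ssim_components(x, y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    x, y = x.astype(float), y.astype(float)
    mu_x, mu_y = x.mean(), y.mean()
    sig_x, sig_y = x.std(), y.std()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    C3 = C2 / 2.0  # common choice in the SSIM literature
    l = (2 * mu_x * mu_y + C1) / (mu_x**2 + mu_y**2 + C1)      # luminance similarity
    c = (2 * sig_x * sig_y + C2) / (sig_x**2 + sig_y**2 + C2)  # contrast similarity
    s = (cov_xy + C3) / (sig_x * sig_y + C3)                   # structural similarity
    return l, c, s  # SSIM = l**alpha * c**beta * s**gamma
```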

3.2. 3D Gradient Computation

In a 2D image, the gradient is usually computed by convolving the image with a linear filter, such as the Roberts or Sobel operator. In this work, we use different kernels to compute the 3D gradient along three directions. For simplicity, we use the first-order derivative kernels in [31], shown in Figure 2. Since the absolute values of the nonzero kernel elements are 1, convolving the kernels with a 3D volume yields horizontal, vertical, and viewpoint gradients that can be computed quickly:

$$g_h = V \otimes k_h, \qquad g_v = V \otimes k_v, \qquad g_p = V \otimes k_p,$$

where $V$ denotes the 3D volume, $\otimes$ denotes 3D convolution, and $k_h$, $k_v$, and $k_p$ are the kernels along the horizontal, vertical, and viewpoint directions, respectively.
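As a sketch of this step in Python: the paper convolves the volume with the 5 × 5 × 5 first-derivative templates of [31] (Figure 2), which we do not reproduce here; the version below substitutes simple central differences along each axis purely to illustrate the three gradient directions.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gradients_3d(volume):
    # Stand-in first-order derivative kernel; the paper uses 5x5x5 templates from [31].
    k = np.array([1.0, 0.0, -1.0])
    g_v = convolve1d(volume, k, axis=0)   # vertical direction (y)
    g_h = convolve1d(volume, k, axis=1)   # horizontal direction (x)
    g_p = convolve1d(volume, k, axis=2)   # viewpoint/disparity direction (d)
    return g_h, g_v, g_p
```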

3.3. 3D Gradient Magnitude Similarity (3D-GMS) Based Quality Metric

With the 3D gradient magnitude values of the original and distorted 3D volumes, the pointwise 3D-GMS index is defined as

$$\mathrm{GMS}(i) = \frac{2\, m_o(i)\, m_d(i) + C}{m_o^2(i) + m_d^2(i) + C},$$

where the parameter $C$ is a constant that prevents the denominator from being zero; $m_o$ and $m_d$ are the 3D gradient magnitudes of the original and distorted 3D volumes, defined as the root mean square of the directional gradients along the three directions:

$$m = \sqrt{\frac{g_h^2 + g_v^2 + g_p^2}{3}}.$$

The final quality score is the average of $\mathrm{GMS}(i)$ over all $N$ points in the volume:

$$\mathrm{3D\text{-}GMS} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{GMS}(i).$$
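A minimal Python sketch of these definitions follows, assuming the `gradients_3d` helper above and an ad hoc value for the constant $C$ (the paper's value is not reproduced here).

```python
import numpy as np

def gms_3d(vol_ref, vol_dst, C=1e-4):
    g_ref = gradients_3d(vol_ref)  # (g_h, g_v, g_p) of the original volume
    g_dst = gradients_3d(vol_dst)  # (g_h, g_v, g_p) of the distorted volume
    m_o = np.sqrt(sum(g**2 for g in g_ref) / 3.0)  # RMS gradient magnitude
    m_d = np.sqrt(sum(g**2 for g in g_dst) / 3.0)
    gms = (2 * m_o * m_d + C) / (m_o**2 + m_d**2 + C)  # pointwise similarity
    return gms.mean()                                   # average over all points
```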

The 3D-GMS value reflects the degree of distortion in an image: the lower the 3D-GMS value, the more severe the distortion and, thus, the lower the perceptual image quality. We present one example to illustrate this point. The first row of Figure 3 shows (a) a Gaussian blurred image of the “Balloons” test sequence from the NBU IQA database and the corresponding horizontal, vertical, and viewpoint gradient maps in (b)~(d). The second row shows the JPEG compressed image in (e) and the corresponding gradient maps in (f)~(h). The third row shows the white noise (WN) distorted image in (i) and the corresponding gradient maps in (j)~(l). Note that only one viewpoint slice is shown for the viewpoint gradient maps in (d), (h), and (l). The difference mean opinion score (DMOS) values for the Gaussian blurred, JPEG compressed, and WN distorted stereoscopic images are 29.435, 30.609, and 30.130, respectively; that is, the subjective measures for these distorted stereoscopic images are similar. The 3D-GMS scores for the same images are 0.9720, 0.9803, and 0.9793, respectively, which are likewise close to one another; this demonstrates that the quality scores are consistent with the DMOS values.

4. Experimental Results and Analyses

4.1. Databases and Performance Measures

In the experiments, four publicly available 3D IQA databases, the NBU 3D IQA Database [20], the LIVE 3D IQA Phase I Database [18], and the LIVE 3D IQA Phase II Databases (symmetric and asymmetric) [32], are used to verify the performance of the proposed metric for stereoscopic images. The NBU 3D IQA Database consists of 312 distorted stereoscopic pairs generated from 12 reference stereoscopic images; five types of distortion (JPEG, JP2K, Gblur, WN, and H.264) are symmetrically applied to the left and right reference images at various levels. The LIVE 3D IQA Phase I Database consists of 365 distorted stereoscopic pairs generated from 20 reference stereoscopic images. The LIVE 3D IQA Phase II-Symmetric and Phase II-Asymmetric Databases consist of 210 and 240 distorted stereoscopic pairs, respectively, generated from 8 reference stereoscopic images. Five types of distortion (JPEG, JP2K, Gblur, WN, and FF) are applied at various levels: symmetrically for the LIVE 3D IQA Phase I and Phase II-Symmetric Databases, and asymmetrically for the LIVE 3D IQA Phase II-Asymmetric Database.

In this paper, three commonly used performance indicators are adopted to benchmark the proposed metric against relevant state-of-the-art techniques: the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SRCC), and the root mean squared error (RMSE) between the objective and subjective scores. For a perfect match between the objective and subjective scores, PLCC = SRCC = 1 and RMSE = 0. For the nonlinear regression, we use the following five-parameter logistic function [33]:

$$q(x) = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + \exp\big(\beta_2 (x - \beta_3)\big)} \right) + \beta_4 x + \beta_5,$$

where $x$ is the raw objective score and the parameters $\beta_1$, $\beta_2$, $\beta_3$, $\beta_4$, and $\beta_5$ are determined by fitting the objective scores to the subjective scores.
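This fitting-and-evaluation step can be sketched in Python as follows; the initial parameter guesses are ad hoc assumptions, and `scipy.optimize.curve_fit` stands in for whatever solver the authors used.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic5(x, b1, b2, b3, b4, b5):
    # Five-parameter logistic mapping from objective to subjective scale.
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def evaluate(obj, dmos):
    obj, dmos = np.asarray(obj, float), np.asarray(dmos, float)
    p0 = [np.max(dmos), 0.1, np.mean(obj), 1.0, np.mean(dmos)]  # ad hoc initial guess
    params, _ = curve_fit(logistic5, obj, dmos, p0=p0, maxfev=20000)
    pred = logistic5(obj, *params)
    plcc = pearsonr(pred, dmos)[0]   # computed after the nonlinear mapping
    srcc = spearmanr(obj, dmos)[0]   # rank correlation, invariant to the mapping
    rmse = np.sqrt(np.mean((pred - dmos) ** 2))
    return plcc, srcc, rmse
```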

4.2. Overall Assessment Performance

In Table 1, we compare the performance of the competing 2D-IQA and 3D-IQA metrics on the four databases in terms of PLCC, SRCC, and RMSE. The three 2D-IQA metrics directly estimate the quality of each view separately and generate a weighted average score; the proposed scheme outperforms all three across the databases. You et al.'s and Benoit et al.'s schemes combine 2D quality metrics computed on the stereoscopic images and on disparity maps, so their performance depends heavily on the estimated disparity maps (the stereo matching algorithm of [29] is used in this paper); the proposed scheme performs better than both on the three databases with symmetric distortions (the NBU 3D IQA Database, the LIVE 3D IQA Phase I Database, and the LIVE 3D IQA Phase II-Symmetric Database). The performances of Bensalma et al.'s, Chen et al.'s, and Shao et al.'s schemes are reasonably good on most of the databases, but the proposed scheme still achieves comparable performance. Figure 4 shows scatter plots of predicted quality scores against subjective quality scores (in terms of DMOS) for the proposed scheme on the three databases. Overall, the proposed scheme shows impressive consistency with human perception.

4.3. Performance Comparison on Individual Distortion Types

To evaluate the prediction performance of the proposed method more comprehensively, we compare the nine schemes on each type of distortion. The PLCC and SRCC results are listed in Tables 2 and 3, where the top two metrics are highlighted in boldface. The proposed scheme is among the top two metrics 13 times in terms of PLCC, followed by You et al.'s scheme (9 times) and Shao et al.'s scheme (6 times); however, the overall performance of You et al.'s and Shao et al.'s schemes is not the best on the four databases. Since the proposed scheme measures structural degradation, it is especially effective for the Gblur distortion type, and it is also an effective measure for the WN distortion type on the NBU 3D IQA Database, the LIVE 3D IQA Phase I Database, and the LIVE 3D IQA Phase II-Symmetric Database. Even though some 2D metrics achieve remarkable performance in evaluating the quality of 2D images, they may not be sufficient to predict the perceptual quality of stereoscopic images. In general, the proposed 3D gradient magnitude serves as an excellent feature for quality prediction.

4.4. Discussion of Computational Complexity

Computational complexity is another important factor in evaluating the proposed scheme. The DSIs are computed offline in advance. The main operations in the proposed 3D-GMS are the 3D gradient calculations (convolutions with three different 5 × 5 × 5 templates), which produce the gradient magnitude maps. Overall, the proposed 3D-GMS provides a low-complexity solution for 3D-IQA compared with the other 3D-IQA metrics considered (e.g., You et al.'s, Benoit et al.'s, Bensalma et al.'s, Chen et al.'s, and Shao et al.'s schemes).

5. Conclusions

In this study, we devised a simple yet effective quality assessment index, called 3D gradient magnitude similarity (3D-GMS), for stereoscopic images. More specifically, we construct a 3D volume from the stereoscopic images across different disparity spaces and calculate the pointwise gradient magnitude similarity along three directions. The average 3D-GMS score over all points in the 3D volume is then taken as the final quality index. Compared with state-of-the-art 2D image quality assessment (2D-IQA) and 3D image quality assessment (3D-IQA) metrics, the proposed 3D-GMS metric performs better in terms of both accuracy and efficiency on four publicly available 3D IQA databases. In future work, we will explore how to incorporate 3D visual perceptual models, such as 3D visual attention, into the 3D-GMS metric.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of China (Grants 61271021, 61271270, and U130125). It was also sponsored by K. C. Wong Magna Fund in Ningbo University.