Special Issue: Scientific Programming Towards a Smart World 2020

Research Article | Open Access

Volume 2020 |Article ID 8856640 | https://doi.org/10.1155/2020/8856640

Di Wu, Fei Yuan, En Cheng, "Underwater No-Reference Image Quality Assessment for Display Module of ROV", Scientific Programming, vol. 2020, Article ID 8856640, 15 pages, 2020. https://doi.org/10.1155/2020/8856640

Underwater No-Reference Image Quality Assessment for Display Module of ROV

Academic Editor: Chao Huang
Received: 24 Apr 2020
Revised: 22 May 2020
Accepted: 03 Aug 2020
Published: 28 Aug 2020

Abstract

The optical images collected by remotely operated vehicles (ROV) contain a wealth of information about the underwater environment (such as the distribution of underwater creatures and minerals), which plays an important role in ocean exploration. However, due to the absorption and scattering characteristics of the water medium, some of these images suffer from serious color distortion. Such distorted images usually need to be enhanced before further analysis. At present, however, no image enhancement algorithm performs well in every scene. Therefore, in order to monitor image quality in the display module of an ROV, a no-reference image quality predictor (NIPQ) is proposed in this paper. A unique property that differentiates the proposed NIPQ metric from existing work is its consideration of the viewing behavior of the human visual system and the imaging characteristics of underwater images in different water types. Experimental results on the underwater optical image quality database (UOQ) show that the proposed metric provides an accurate prediction of the quality of enhanced images.

1. Introduction

In recent years, there have been a growing number of ocean-related activities, such as aquaculture, hydrological exploration, and underwater archaeology. The optical images collected by observational remotely operated vehicles (ROV) provide very convenient conditions for these activities, in which high-quality underwater images play an essential role. However, because the absorption and scattering effects of water limit the visibility of underwater objects, the images captured by the optical sensor of an ROV often suffer from diminished color (color distortion), which hinders our understanding of underwater conditions, so poor-quality underwater images need to be enhanced. It is worth noting that not all underwater images need to be enhanced, as shown in Figure 1, because bodies of water exhibit extreme differences in their optical properties. Some lakes are as clear as distilled water, and some change colors several times a year, among white, blue, green, and brown [1]. In the ocean, coastal harbors are often murky, while offshore waters are blue and clear. Simply put, whether an image needs to be enhanced depends on whether the visibility of the underwater objects in the image is good. However, the existing display module of an observational ROV either displays the captured image directly in the terminal or integrates an enhancement algorithm into the system and always displays the enhanced image; neither determines whether the image actually needs to be enhanced.
Also, since no existing underwater image enhancement algorithm suits every scene, we need reliable underwater image quality metrics to preassess whether a captured image needs enhancement and to monitor the quality of the enhanced image.

The most accurate method of image quality estimation is subjective image quality assessment (IQA). However, subjective IQA is expensive, time-consuming, and impractical for real-time implementation and system integration. In order to estimate image quality automatically and save labor and resources, a reliable objective underwater image quality metric needs to be designed. In underwater image processing scenarios, an ideal reference image is usually not available, so no-reference (NR) IQA is the best choice for evaluating underwater image quality.

Some generic image quality measures have been developed, such as histogram analysis [2], variance [3], image entropy [4], and the color image quality measure (CIQI) [5]. The popular BRISQUE [6] and LPSI [7] metrics summarize the statistical regularities of natural images and measure how far a distorted image deviates from them. However, these objective measures are not designed specifically for underwater images; they fail to consider the strong absorption and scattering effects of the water and are therefore not applicable to underwater images. There are also NR IQA methods based on deep learning, such as DIQA [8], Deep IQA [9], and RankIQA [10], which perform well on in-air images. Deep learning has strong learning ability and can extract image features automatically. However, it requires a large amount of data (usually more than 5000 images) for training, and acquiring subjective scores (the ground truth during training) is expensive and time-consuming. Currently, there is no suitable dataset for training in the underwater image field, so designing deep-learning-based NR IQA for underwater images is not yet feasible.

Prior work [11] pointed out that the overall quality of an image can be effectively obtained by combining image attribute measures. The most commonly used underwater image quality measures, UCIQE [12], UIQM [11], and CCF [13], are designed on this principle. UCIQE [12], proposed by Yang Miao, is a linear combination of the standard deviation of chroma, the contrast of luminance, and the average saturation. Karen Panetta’s UIQM [11] is a linear combination of colorfulness, sharpness, and contrast. CCF, proposed by Yan Wang et al. [13], starts from an imaging analysis of underwater absorption and scattering characteristics, calculates a fog density index, and evaluates underwater image quality by combining a color index, a contrast index, and the fog density index. All three determine their weighting coefficients by multivariate linear regression over a training image set. However, regardless of performance on the training set, the generalization ability of these methods is largely limited by the training samples. At the same time, the attention mechanism of the human visual system (HVS) [14] has not received enough attention in underwater image evaluation. In underwater scenes, the image quality of the target object has higher research value and practical significance than that of the ocean background, which does not belong to the region of interest (ROI). Moreover, the three commonly used underwater metrics measure image quality from the perspective of image statistics, and their robustness is limited; this results in an overemphasis on color richness. This paper holds that, in addition to color fidelity in the statistical sense, the color fidelity of objects is also very important from the perspective of pixels. It is worth noting that color fidelity here refers to whether the image color is reasonable, not to the difference between an object's color in the image and its real color.
Most natural underwater images are blue-green due to selective color attenuation: the colors are monotonous and the color richness is poor (as shown in Figures 2(a)–2(c)). After enhancement, an underwater image can generally eliminate the visible color attenuation, and the color richness is greatly improved, as shown in Figures 2(d)–2(f). However, the color fidelity of the enhanced image is questionable: the color of the fish in Figures 2(d) and 2(e) is not reasonable, and the artificial facilities in Figure 2(f) are obviously different from what we know. That is, Figures 2(a)–2(c) have high color fidelity (because they are real natural images; although the colors of objects in them differ from those of the real objects, the pixel colors are reasonable) but very low color richness, while Figures 2(d)–2(f) have low color fidelity and high color richness. In short, underwater absorption and scattering cause image color distortion (here, a large difference between an object's color in the image and its real color): the image tends toward blue-green, and its color richness is reduced. Overemphasizing color richness can also cause color distortion (here, unreasonable colors in the image), which degrades the viewing experience and the subsequent use of the image.

In view of the shortcomings of existing metrics, this paper proposes a no-reference image quality predictor, NIPQ, designed as a four-stage framework. The first stage focuses on the attention mechanism of the human visual system (HVS), which can also be interpreted as the ROI. In the IQA field, ROIs are not fixed: some are task-driven, some are data-driven, and targets such as fish, corals, divers, or even artifacts of unknown shape may all be of interest. To cover more applications, we interpret the foreground area (i.e., the nonocean background) as the ROI. This paper extracts the ROI based on background/foreground separation and focuses on the image quality of the ROI. The second stage considers the impact of color richness on image quality. As the horizontal distance between the camera and the object increases, the object's color in the underwater image keeps approaching blue-green [1]. At the same time, as the optical sensor goes deeper, the object is farther from the sunlight source, so its color becomes darker and its contrast lower. It follows that if the ROI of a natural underwater image has good color richness, its quality will be significantly better than that of an image with low color richness. The third stage considers color fidelity. As mentioned earlier, if an NR IQA overemphasizes color richness, the enhanced underwater image can become oversaturated, which is also a form of color distortion. Inspired by the underwater image formation model, we distinguish the water type (yellow, green, or blue water) from the ocean background area of the image and estimate, at the pixel level, the reasonable range of pixel intensities of the ROI in the enhanced underwater image.
In this stage, the difference between this reasonable range and the actual ROI pixel intensities of the enhanced image represents the rationality of the enhanced image, that is, its color fidelity. Finally, in the fourth stage, color richness and color fidelity are systematically integrated for quality prediction.

In order to measure the performance of NIPQ, an underwater optical image quality database (UOQ) is established. The database contains typical underwater images and their mean opinion scores (MOS). Based on a comprehensive analysis of all experimental results, the contributions of the NIPQ proposed in this paper are summarized as follows:
(a) It is a kind of NR IQA inspired by underwater imaging characteristics. By considering the color attenuation of images in different water bodies, color fidelity and color richness metrics are proposed.
(b) By adopting an ROI extraction method suited to the underwater IQA field, ROI and IQA are effectively combined through the block strategy in the ROI extraction method.
(c) It is superior to many commonly used IQA metrics, can effectively evaluate the performance of image enhancement algorithms, and can be used for quality supervision.
(d) We propose an NR IQA-based underwater smart image display module, which demonstrates the role of our IQA in application.

We arrange the remainder of this paper as follows. Section 2 describes the NR IQA-based underwater smart image display module, which is the application background of NR IQA. Section 3 describes our NIPQ metric in detail. Section 4 describes the establishment of our database UOQ for evaluating IQA performance, which consists of underwater optical images and their enhanced versions. In Section 5, the NIPQ metric is compared against selected existing NR IQA methods. We conclude this paper in Section 6.

2. NR IQA-Based Underwater Smart Image Display Module

Most of the underwater images captured by optical sensors have practical applications. For underwater images with severe color distortion, image enhancement is often needed before the terminal display. However, not all underwater images need to be enhanced. We believe that whether an image needs to be enhanced depends on the visibility of the underwater objects. Besides, because no image enhancement algorithm achieves good results in all scenes, NR IQA can be used as a guide for image enhancement, so that the system can automatically select a more appropriate image enhancement algorithm in real time. From the application point of view, the framework of the NR IQA-based display module is shown in Figure 3. The traditional image display module only provides a single image enhancement scheme or displays the image directly, and cannot flexibly cope with different water environments. The display module proposed in this paper builds various image enhancement algorithms into the Image Enhancement Algorithm Database, and the system can choose a more appropriate enhancement algorithm according to the NR IQA results. Firstly, the input underwater image is preassessed, and natural images with little color distortion are displayed directly. For natural images with severe color distortion, the default enhancement algorithm in the Image Enhancement Algorithm Database is applied. The Selector then automatically determines, according to the NR IQA results, whether to enable an alternative enhancement scheme and which one to choose.

According to the above analysis of NR IQA-based underwater smart image display module and the consideration of the characteristics of underwater image in Section 1, this paper uses the color richness of ROI and color fidelity of ROI to estimate image quality. In the display module proposed in our paper, the color richness of ROI is used as the metric of pre-NR IQA in the display module, and the NIPQ, which combines ROI, color richness, and color fidelity, will be used as the metric of NR IQA in the display module.

The ROI extraction method is based on background/foreground. The block strategy in the extraction method helps ROI and IQA better combine. The ROI extraction method is introduced in Section 3.1 in detail. The color richness represents the distribution of image color in a statistical sense, which is described by the spatial characteristics of image in CIE XYZ space and detailed in Section 3.2. The color fidelity is based on the underwater image formation model in the sense of pixel, which is used to describe whether the pixel intensity is within a reasonable range. It is introduced in Section 3.3 in detail.

3. Proposed NIPQ Metric

3.1. ROI Extraction Based on Background/Foreground

Considering that the final receiver of the display module of an ROV is usually a human, it is particularly important that IQA reflects human visual perception well. The human visual attention mechanism is an important feature of the HVS: it enables the brain to quickly grasp the overall information of an image, identify the regions that need attention, focus on the target information, and suppress other background information. Therefore, the human eye is usually sensitive to damage in the attended area. At the same time, compared with the ocean background, a high-quality ROI has better practical significance and value. Therefore, it is necessary to introduce the ROI into image quality assessment.

Researchers usually obtain the ROI by saliency detection or object detection [15, 16]. Different from in-air images, most underwater images have low contrast, and traditional in-air saliency detection methods do not work well underwater. At present, there is no robust saliency detection algorithm for underwater images. Some researchers combine image enhancement with saliency detection [17]; others combine fish localization with saliency detection [18]. In IQA, the purpose of the metric is to evaluate the enhancement algorithm, and the ROI of an underwater image is not always one or several fixed categories of targets with predictable shapes, so the above methods are not applicable. Considering that underwater background features are easier to recognize than the target, this paper extracts the ROI based on background/foreground separation; the process is shown in Figure 4. In order to better combine the ROI and IQA in the following steps, the preprocessed underwater image is divided into m × n image blocks (we call this the block strategy).

Then, we map the boundary connectivity (defined in [19]) of each block region through (1) to obtain its background region probability. Using thresholds on this probability, the blocks are initially divided into background blocks, target blocks, and uncertain blocks. The color feature and spatial position feature of each block are then used to correct the uncertain blocks and help judge their background probability, as expressed in (2), where the color similarity between two blocks is the Euclidean distance between their average colors and the spatial term is the Euclidean distance between the blocks, mapped according to (3). Finally, we use the maximum between-class variance method to obtain the final ROI.
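As a concrete illustration, the probability mapping and thresholding steps above can be sketched as follows. The exponential mapping, `sigma`, and the thresholds `t_bg`/`t_fg` are hypothetical placeholders in the spirit of [19], not the paper's actual equation (1) or threshold values:

```python
import numpy as np

def background_probability(boundary_connectivity, sigma=1.0):
    """Map a block's boundary connectivity to a background probability
    in [0, 1]. A common mapping from [19]; the paper's exact (1) may differ."""
    bc = np.asarray(boundary_connectivity, dtype=float)
    return 1.0 - np.exp(-(bc ** 2) / (2.0 * sigma ** 2))

def classify_blocks(p_bg, t_bg=0.8, t_fg=0.2):
    """Initial split into background / target / uncertain blocks
    by thresholding the background probability (thresholds assumed)."""
    labels = np.full(p_bg.shape, "uncertain", dtype=object)
    labels[p_bg >= t_bg] = "background"
    labels[p_bg <= t_fg] = "target"
    return labels
```

Uncertain blocks would then be reassigned using the color-similarity and spatial-distance correction of (2) and (3) before the final between-class-variance split.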

3.2. Color Richness

As color attenuation worsens, a natural underwater image retains fewer and fewer colors, and the visibility of objects deteriorates. Therefore, the color richness of the ROI is a simple and fast measure of whether the color distortion of a natural underwater image is serious, which makes it suitable for image quality evaluation.

In this paper, color richness is measured by the spatial characteristics of color in the CIE XYZ color space. Color richness should include not only color diversity but also the lightness distribution, so XYZ color space is a good choice: CIE XYZ can represent all colors, and the Y component measures lightness. According to the XYZ color space distributions of the two images shown in Figure 5, a wide color distribution along each of the three dimensions X, Y, and Z alone does not mean that the color richness is good, because the three components X, Y, and Z are correlated. The spatial characteristics of the color cloud therefore represent the color distribution better. According to (4), the image color divergence in XYZ color space is defined to determine the color richness of the image, where the distance term is the shortest distance between two points or between a point and a line, the section index ranges over the X-Y, Y-Z, and X-Z sections, and the two reference points are the points closest to and farthest from the origin, respectively.
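Since equation (4) itself is not reproduced here, the following sketch only illustrates the idea of measuring color spread across the three 2-D sections of CIE XYZ space; the `color_divergence` function and its radial-spread measure are assumptions, not the paper's exact definition:

```python
import numpy as np

# Standard linear sRGB -> CIE XYZ matrix (D65 white point)
_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                     [0.2126, 0.7152, 0.0722],
                     [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(rgb):
    """Convert linear sRGB pixels (shape (N, 3), values in 0..1) to CIE XYZ."""
    return np.asarray(rgb, dtype=float) @ _RGB2XYZ.T

def color_divergence(rgb):
    """Hypothetical stand-in for equation (4): measure how far the pixel
    cloud spreads in each 2-D section (X-Y, Y-Z, X-Z) of XYZ space,
    using the points nearest to and farthest from the origin."""
    xyz = rgb_to_xyz(rgb)
    total = 0.0
    for i, j in [(0, 1), (1, 2), (0, 2)]:      # X-Y, Y-Z, X-Z sections
        pts = xyz[:, [i, j]]
        r = np.linalg.norm(pts, axis=1)        # distance from the origin
        total += r.max() - r.min()             # radial spread of the cloud
    return total / 3.0
```

A monochromatic image collapses to a single point in every section and scores zero, while a colorful image spreads across all three sections, matching the intuition behind the metric.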

3.3. Color Fidelity

As mentioned in Section 1, the enhanced image may be oversaturated/pseudobright (as shown in Figure 2). If too much attention is paid to the color richness, the color of ROI in the image will deviate from the color of real objects. Therefore, we should not only consider the color richness of the enhanced underwater image but also consider the color fidelity of ROI, that is, whether the intensity of pixels is within a reasonable range.

It is necessary to understand the formation and degradation of underwater images if we want to estimate a reasonable range of pixel intensities. The formation of the underwater image is dominated by the following factors [1, 20, 21], as given in (5): for each color channel c ∈ {R, G, B}, the underwater image captured by the camera is the sum of a direct signal and a backscattered signal, where z is the distance between the camera and the photographed object and the backscatter tends to the veiling light at large distances. The unattenuated scene is the RGB intensity of the surface as captured by the sensor's spectral response at distance z0 = 0 (generally z0 is regarded as 0), as expressed in (6).

In (6), the camera's scaling constant appears, and the direct-signal and backscatter attenuation coefficients depend on the spectrum over the distance z, the reflectivity, the ambient light, the camera's spectral response, the scattering coefficient, and the beam attenuation coefficient, as shown in (7) and (8). z0 and (z0 + z) are the starting and ending points along the line of sight.

So, by inverting the model, we can calculate the unattenuated scene as in (9).

We need to estimate the reasonable range of RGB intensities of each pixel in the foreground (that is, the ROI). The color fidelity metric defined by (10) is then calculated from the out-of-range part of the enhanced underwater image. The process is shown in Figure 6.

Firstly, the veiling light of the underwater image is estimated. Backscatter increases exponentially with z and eventually saturates [1]; in other words, at infinity the observed color equals the veiling light. Referring to [22], we assume an area without objects is visible in the image, in which the pixel colors are determined by the veiling light alone. Such areas are smooth and textureless, and this assumption usually holds in the application scenarios of our IQA. First, the edge map of the image is generated. Then a threshold is set, and the largest connected pixel area is found. The veiling light is the average color of the pixels in this area.
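A rough sketch of this veiling-light search follows. It uses mean gradient magnitude per block instead of a full edge map plus connected-component search, so the threshold and block size are assumptions, not the paper's values:

```python
import numpy as np

def estimate_veiling_light(img, edge_thresh=0.05, block=16):
    """Approximate the veiling light as the average color of smooth
    (low-gradient) blocks, assumed to show only water. `img` is an
    (H, W, 3) float array."""
    img = np.asarray(img, dtype=float)
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    edge = np.hypot(gx, gy)                     # simple edge-strength map
    h, w = gray.shape
    smooth_pixels = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if edge[y:y + block, x:x + block].mean() < edge_thresh:
                smooth_pixels.append(img[y:y + block, x:x + block].reshape(-1, 3))
    if not smooth_pixels:                       # no smooth area: global mean
        return img.reshape(-1, 3).mean(axis=0)
    return np.concatenate(smooth_pixels).mean(axis=0)
```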

Next, we estimate the water body type of the underwater image from the veiling light. The reason for estimating the water type is that the common notion that water attenuates red faster than blue/green holds only for oceanic water types [1]. We simplified the Jerlov water types [23] into blue water (Jerlov I–III), green water (Jerlov 1C–3C), and yellow water (Jerlov 5C–9C) and simulated the average RGB value of a perfect white surface under these three water types (data from [23], using a D65 light source and a Canon 5D Mark II camera). We then calculate the Euclidean distance between the veiling light and each simulated average RGB value and assign the water type with the smallest distance.
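The nearest-neighbor classification can be sketched as follows. The three reference colors are made-up placeholders, since the simulated averages from [23] are not given numerically in the text:

```python
import numpy as np

# Hypothetical reference colors for the three simplified water classes;
# the paper derives these by simulation from Jerlov data [23].
WATER_REFS = {
    "blue":   np.array([60.0, 120.0, 180.0]),
    "green":  np.array([70.0, 150.0, 110.0]),
    "yellow": np.array([120.0, 130.0, 70.0]),
}

def classify_water(veiling_light):
    """Assign the water type whose reference color is closest
    (Euclidean distance) to the estimated veiling light."""
    v = np.asarray(veiling_light, dtype=float)
    return min(WATER_REFS, key=lambda k: np.linalg.norm(v - WATER_REFS[k]))
```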

Then, we calculate the reasonable intensity range of each pixel after enhancement of the underwater image. The direct-signal attenuation coefficient varies most strongly with the range z [1], so besides the water type, the most important quantity for computing the range is the distance z. Due to practical limitations, the true distance z of the object in the image cannot be obtained, so its possible range must be estimated roughly. For the foreground, the distance from the camera is approximately uniform; for the background, it tends to infinity. We assume that the distance z is the same for every part of the foreground and that white objects may be present. Therefore, any distance z for which the recovered scene intensities of the foreground pixels in all three RGB channels are not greater than 255 and not less than 0 is considered a possible distance. To simplify the calculation, the attenuation coefficients of white in the three color channels R, G, B are adopted for all colors (using the white patch of the Macbeth ColorChecker).
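The distance-range search described above can be brute-forced over a grid of candidate distances; the grid bounds are assumptions, and the per-channel coefficients would come from the estimated water type:

```python
import numpy as np

def feasible_distances(I_fg, B_inf, beta_D, beta_B,
                       z_grid=np.linspace(0.1, 20.0, 200)):
    """Keep every candidate distance z for which the recovered scene
    J_c = (I_c - B_c) * exp(beta_D_c * z) stays inside [0, 255] for all
    foreground pixels and channels. `I_fg` is an (N, 3) array of
    foreground pixels; beta values are per-channel."""
    I_fg = np.asarray(I_fg, dtype=float)
    feasible = []
    for z in z_grid:
        B = B_inf * (1.0 - np.exp(-beta_B * z))   # backscatter at distance z
        J = (I_fg - B) * np.exp(beta_D * z)       # recovered scene
        if np.all((J >= 0.0) & (J <= 255.0)):
            feasible.append(float(z))
    return feasible
```

Near distances are typically feasible while far distances blow the recovered intensities past 255, so the surviving grid points bound the plausible z.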

Finally, the color fidelity defined by (10) is calculated:

In (10), the denominator is the number of pixel intensities in the ROI block, and the numerator counts the pixel intensity deviations that fall outside the reasonable range.
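A hedged reading of equation (10), treating color fidelity as the in-range fraction of ROI channel intensities (the exact form of (10) is not visible in the text, so this is an assumption):

```python
import numpy as np

def color_fidelity(roi_pixels, lo, hi):
    """Fraction of ROI channel intensities inside the estimated reasonable
    range [lo, hi]: 1.0 means fully plausible colors, 0.0 means every
    value is out of range. `lo`/`hi` may be scalars or per-channel arrays."""
    p = np.asarray(roi_pixels, dtype=float)
    lo = np.asarray(lo, dtype=float)
    hi = np.asarray(hi, dtype=float)
    out_of_range = np.count_nonzero((p < lo) | (p > hi))
    return 1.0 - out_of_range / p.size
```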

We make some qualitative analysis of the influence of these simplifications on the attenuation coefficients. As shown in Figure 7(b), (8) is used to calculate the broadband (RGB) attenuation coefficients (using the reflectance of the color patches in the Macbeth ColorChecker, depth d = 1 m, distance z = 1 m) of seven common colors (red, orange, yellow, green, cyan, blue, and purple; Figure 7(a)) under all Jerlov water types. It can be seen that the coefficients of the different colors differ little in the same scene. Figure 7(c) shows the influence of different camera types in the three types of water bodies; the influence of the camera parameters on the attenuation coefficient is not significant. The experimental results in [1] also support this view.

3.4. NIPQ Metric

Sections 3.1, 3.2, and 3.3 above introduce, respectively, the ROI extraction method, color richness in the statistical sense, and color fidelity in the pixel sense. In this paper, the color richness of the ROI and the color fidelity of the ROI are combined by a multiplication model to obtain our NIPQ. The common multiparameter underwater image evaluation models UIQM [11], UCIQE [12], and CCF [13] use linear weighting to measure the comprehensive quality of the image. We consider that if one submetric takes a very low value (indicating low quality), the subjective impression of the whole image will be very poor regardless of the other metrics. Therefore, this paper uses a multiplication model to generate the overall underwater image quality assessment, as follows:

In (11), the submetrics are normalized, their product gives the color quality of each ROI block, and the larger the value, the higher the image quality.
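One plausible reading of the multiplication model, with min-max normalization assumed for the richness scores since the exact normalization in (11) is not reproduced here:

```python
import numpy as np

def nipq(richness, fidelity):
    """Multiplicative combination sketch: min-max normalise the per-block
    colour-richness scores to [0, 1] (an assumption), multiply by each
    block's colour fidelity, and average over ROI blocks."""
    r = np.asarray(richness, dtype=float)
    f = np.asarray(fidelity, dtype=float)
    span = r.max() - r.min()
    r_norm = (r - r.min()) / span if span > 0 else np.ones_like(r)
    return float(np.mean(r_norm * f))
```

The multiplicative form captures the design intent stated above: if either submetric of a block is near zero, that block's contribution collapses regardless of the other submetric, unlike a linear weighting.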

The overall process of NIPQ is shown in Figure 8 and is divided into four steps. Firstly, the ROI of the original (unenhanced) image is extracted based on background/foreground separation. Secondly, the color richness of the ROI of the enhanced underwater image is estimated. Thirdly, the ocean background information is extracted from the original image, from which the water body type and the reasonable range of pixel intensities are estimated; according to this range, the ROI color fidelity of the enhanced underwater image is computed. Finally, the color richness and color fidelity metrics are integrated to obtain the comprehensive NIPQ metric for the whole underwater image.

4. UOQ Database

In order to better evaluate the performance of NIPQ metric, we built an underwater optical image quality database UOQ.

Image Selection. In order to fully cover various underwater scenes, we selected 36 typical underwater optical images with a size of 512 × 512. These images include blue water, green water, yellow water, dark light, single object, multiobject, simple texture and complex texture, serious color distortion, and slight color distortion. Considering that there is no general ROI-related dataset in the underwater image field, we labeled their foreground regions (ROI) pixel by pixel to verify the reliability of the ROI in this paper. We then used five popular image enhancement algorithms (white balance [24], Fu's algorithm [25], multifusion [26], histogram equalization [27], and Retinex [28]) to process these 36 natural images, obtaining 180 enhanced images. Some images and their versions processed by the white balance algorithm [24] are shown in Figure 9.

Evaluation Methods and Evaluation Protocols. In this database, the single-stimulus evaluation method is used. Volunteers view only one image to be evaluated at a time, and each image appears only once in a round of evaluation. After each image was displayed, volunteers gave a subjective quality score to the corresponding image. Underwater optical images usually have practical applications, so volunteers were not influenced by aesthetic factors during subjective quality assessment; the evaluation protocol is shown in Table 1.


Score  Comprehensive feelings
5      The subjective feeling is excellent, foreground information is recognizable, and no color distortion is felt
4      The subjective feeling is good, the foreground information is visible and recognizable, there is a small amount of perceptual distortion, but it does not affect the extraction of important information
3      The subjective feeling is general, part of the information in the foreground is damaged, and a small amount of important information is lost due to distortion
2      The subjective perception is poor, and only the general outline of the foreground content can be distinguished; the distortion leads to the loss of some important information
1      The subjective feeling is very poor, it is difficult to recognize the foreground content, and it is almost impossible to extract any effective information from the image

Choosing Volunteers. In order to avoid evaluation bias caused by prior knowledge, none of the volunteers had experience with image quality assessment. Considering the strong application background of underwater images, all selected volunteers are graduate students with relevant work experience in underwater acoustic communication, underwater detection, and related fields.

All obtained subjective scores are used to calculate the mean opinion score (MOS). Denoting the subjective score given to an image by the i-th volunteer and the number of subjective scores obtained for that image, the MOS is calculated as their arithmetic mean, as follows:
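The MOS computation is simply the arithmetic mean of an image's subjective scores:

```python
def mean_opinion_score(scores):
    """MOS of one image: the arithmetic mean of its subjective scores."""
    return sum(scores) / len(scores)
```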

We draw a histogram of the MOS of all images in the database, as shown in Figure 10. It can be seen that our images cover a wide range of quality, which is conducive to the design of IQA. Many images score in the middle segment because volunteers tend to avoid giving extreme scores. It can also be seen that lower-quality images slightly outnumber higher-quality ones. This is because most underwater images are blue-green with poor contrast, and sometimes the quality of the enhanced image is still not ideal. In practical applications, more robust enhancement algorithms would be built into the underwater image enhancement algorithm database of the display module described in Section 2.

5. Experiment

In combination with the UOQ database, we mainly evaluate the performance of IQA through five criteria. The prediction monotonicity of IQA is measured by the Spearman rank order correlation coefficient (SROCC) and Kendall’s rank order correlation coefficient (KROCC). The prediction accuracy of IQA is measured by the Pearson linear correlation coefficient (PLCC). Root mean square error (RMSE) is used to measure the prediction consistency of IQA. The mean absolute error (MAE) is also used to evaluate the performance of IQA. The high values (close to 1) of SROCC, PLCC, and KROCC and the low values (close to 0) of RMSE and MAE indicate that IQA has a better correlation with subjective scores.
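The five criteria can be computed as follows. This is a plain sketch: in the IQA literature, PLCC and RMSE are often computed after a nonlinear logistic mapping of the objective scores, which is omitted here, and the Kendall variant (tau-a, no tie correction) is an assumption:

```python
import numpy as np

def _ranks(a):
    """1-based ranks with ties averaged, as Spearman correlation requires."""
    a = np.asarray(a, dtype=float)
    order = np.argsort(a)
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    for v in np.unique(a):                      # average ranks over ties
        mask = a == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def iqa_criteria(objective, mos):
    """SROCC, KROCC, PLCC, RMSE, and MAE between objective scores and MOS."""
    x = np.asarray(objective, dtype=float)
    y = np.asarray(mos, dtype=float)
    n = len(x)
    plcc = np.corrcoef(x, y)[0, 1]
    srocc = np.corrcoef(_ranks(x), _ranks(y))[0, 1]
    concordant = sum(np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
                     for i in range(n) for j in range(i + 1, n))
    krocc = concordant / (n * (n - 1) / 2)      # Kendall tau-a
    return {
        "PLCC": plcc, "SROCC": srocc, "KROCC": krocc,
        "RMSE": float(np.sqrt(np.mean((x - y) ** 2))),
        "MAE": float(np.mean(np.abs(x - y))),
    }
```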

The selected IQA metrics for performance comparison include the following:
(1) The popular underwater no-reference metrics: UIQM [11], UCIQE [12], and CCF [13]
(2) The popular in-air no-reference metrics: BRISQUE [6] and LPSI [7]
(3) Common color metrics for underwater images: UICM [11] and variance of chromaticity (Var Chr) [29]

For BRISQUE, a low score means high quality; for the other metrics, a higher score means better quality.

5.1. Effect Analysis of Introducing ROI into IQA

In order to observe the influence of introducing the ROI on underwater image quality evaluation, we combine the ROI with the popular underwater no-reference IQA metrics. The block strategy mentioned in Section 3.1 is necessary here because it lets us combine the ROI with IQA cleanly. According to the block fusion strategy represented by (13), we combine the image blocks with each IQA metric to obtain a comprehensive quality score and observe how the correlation between the objective metrics and MOS changes before and after combining with the ROI.

In (13), w_i represents the weight of the i-th image block, q_i represents its objective quality score under the metric, and w_i is either 0 or 1. In other words, the only difference between a metric before and after being combined with ROI is that the original metric computes the quality of the whole image, whereas the ROI-combined metric computes the quality of the ROI only. The results are shown in the first six rows of Table 2. They show that the ROI-combined metrics correlate more strongly with MOS than the original metrics, which indicates that combining ROI with IQA is helpful.
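One plausible reading of this fusion, sketched below under our own naming (the function and argument names are not from the paper): with binary weights, the fused score is simply the metric averaged over the ROI blocks.

```python
import numpy as np

def roi_fused_score(block_scores, roi_mask):
    """Fuse per-block metric scores with binary ROI weights.

    Sketch of the block fusion strategy: w_i = 1 for blocks inside
    the ROI and 0 otherwise, so the fused score is the metric
    averaged over ROI blocks only.
    """
    q = np.asarray(block_scores, dtype=float)  # q_i: score of the i-th block
    w = np.asarray(roi_mask, dtype=float)      # w_i in {0, 1}
    return float(np.sum(w * q) / np.sum(w))    # weighted mean over ROI blocks
```

With `roi_mask` all ones this reduces to the original whole-image average, which is exactly the before/after comparison reported in Table 2.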


Metric        PLCC     SROCC    KROCC    MAE     RMSE

UIQM         −0.173   −0.199   −0.132    0.751   0.903
ROI_UIQM      0.277    0.280    0.196    0.739   0.897
UCIQE         0.294    0.207    0.145    0.707   0.868
ROI_UCIQE     0.374    0.274    0.192    0.683   0.840
CCF           0.069    0.075    0.050    0.791   0.946
ROI_CCF       0.393    0.358    0.254    0.722   0.872
Var_Chr       0.158    0.180    0.125    0.674   0.841
UICM         −0.283   −0.338   −0.225    0.714   0.854
BRISQUE      −0.309   −0.265   −0.185    0.747   0.902
LPSI          0.323    0.245    0.169    0.734   0.898
(submetric)   0.481    0.465    0.335    0.635   0.789
(submetric)   0.478    0.432    0.303    0.658   0.806
Proposed      0.641    0.623    0.452    0.576   0.713

5.2. Performance Analysis of Proposed NIPQ

We calculated the correlation between the various metrics and MOS in the database; the results are shown in Table 2. It can be seen that the correlation between the NIPQ metric and the subjective scores is significantly higher than that of the other metrics.

In order to compare the various NR IQA metrics intuitively, scatter diagrams between MOS and the estimated objective scores are drawn for the six selected NR IQA metrics and the proposed NIPQ, as shown in Figure 11. On this basis, the experimental data are regressed by the least-squares method and the regression line is drawn; the more tightly the scatter points fit the line, the better the correlation between the metric and MOS. The regression lines show that the correlation between NIPQ and MOS is clearly better than that of the other metrics, which validates the results in Table 2. LPSI and BRISQUE are designed for images in the air and are not applicable to underwater images. UIQM, UCIQE, and CCF, which are specially designed for underwater images, perform better than the in-air metrics as a whole. The performance of UICM, a submetric indicating chromaticity in UIQM, is slightly worse than that of UIQM. Compared with the scatter plots of the other NR IQA metrics, our NIPQ shows the best correlation with MOS. Although there are still some aberrant data points, the proposed NIPQ is generally robust across the variety of typical underwater images contained in the database. Further analysis shows that some of these aberrant points arise because the submetric of the original (unenhanced) image is directly set to 1 in our experiment.
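The least-squares regression behind these scatter plots can be sketched as follows; the (objective score, MOS) pairs below are illustrative values, not data from the UOQ database:

```python
import numpy as np

# Illustrative (objective score, MOS) pairs for one metric.
objective = np.array([0.24, 0.39, 0.45, 0.58, 0.62])
mos = np.array([2.5, 3.0, 3.4, 4.1, 4.5])

# Degree-1 least-squares fit, as used for the regression lines in Figure 11.
slope, intercept = np.polyfit(objective, mos, 1)
predicted = slope * objective + intercept
residuals = mos - predicted  # small, unstructured residuals => good fit to MOS
```

A tight residual spread around the fitted line corresponds to the high PLCC / low RMSE behavior reported for NIPQ in Table 2.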

As shown in Figures 12 and 13, there are two natural underwater images and their enhanced versions in the UOQ database. Table 3 lists the corresponding MOS and objective scores, and Figure 14 shows the color distribution of their ROIs. The ROI of the original image in Figure 12 is dark, and that in Figure 13 is blue. The images enhanced by the histogram equalization algorithm are reddish; the color distribution of the ROI is wider, but its colors are clearly oversaturated/pseudobright. There is no significant difference between the images processed by the Retinex algorithm and the originals, and the colors of the images processed by Fu's algorithm are not vibrant. For Figure 12, the overall difference between the white balance and multifusion results is small; the local view (Figure 15) shows that the brightness distribution of the multifusion result is uneven and slightly oversaturated, whereas the white-balanced image has a better visual effect. For Figure 13, the image processed by the white balance algorithm is too dark and has a single color, and the multifusion result has a better visual effect.


Figure 12     Ori      Multifusion  Fu       White balance  Histogram equalization  Retinex

MOS            2.550    4.500        4.100    4.550          3.050                   2.500
CCF           13.265   22.292       23.887   16.437         30.794                  13.379
ROI_CCF       20.821   33.389       31.274   29.108         26.409                  21.604
UCIQE          0.554    0.664        0.591    0.652          0.684                   0.569
ROI_UCIQE      0.560    0.647        0.573    0.627          0.580                   0.575
UIQM           3.983    4.543        4.850    3.969          4.780                   4.085
ROI_UIQM       5.585    5.589        5.495    5.672          5.055                   5.620
BRISQUE       16.303   26.934       31.708   17.824         36.762                  16.744
LPSI           0.926    0.901        0.910    0.923          0.912                   0.926
(submetric)    0.243    0.715        0.578    0.698          0.846                   0.324
(submetric)    1.000    0.802        0.637    0.827          0.464                   0.994
Proposed       0.243    0.574        0.368    0.577          0.392                   0.322

Figure 13     Ori      Multifusion  Fu       White balance  Histogram equalization  Retinex

MOS            3.200    3.800        1.550    2.150          2.700                   3.250
CCF           31.443   31.465       37.069   18.688         36.928                  29.029
ROI_CCF       22.582   35.995       32.468   13.265         38.366                  23.097
UCIQE          0.519    0.628        0.623    0.476          0.693                   0.541
ROI_UCIQE      0.541    0.620        0.588    0.447          0.676                   0.564
UIQM           1.504    3.337        4.325    3.840          4.100                   2.182
ROI_UIQM       6.658    5.235        5.249    5.349          4.789                   5.160
BRISQUE        4.330   14.749       17.319    4.153         20.596                   4.441
LPSI           0.923    0.887        0.911    0.904          0.906                   0.926
(submetric)    0.475    0.730        0.317    0.029          0.640                   0.401
(submetric)    1.000    0.847        0.632    0.581          0.601                   0.972
Proposed       0.475    0.619        0.200    0.017          0.384                   0.390

Table 3 shows that the selected IQA metrics do not perform well on images in the UOQ database. They generally assign higher objective scores to images enhanced by the histogram equalization algorithm because those images have a wider color distribution. This is a disadvantage of statistics-based quality evaluation: color fidelity is not taken into account. It can also be seen that if the performance of the original metric is not ideal, combining it with ROI will not necessarily improve the situation, because the limitation lies in the original metric itself.

6. Conclusion

Because of the characteristics of the water medium, color is one of the most important concerns in underwater image quality assessment. Color carries important information: severe color-selective attenuation or pseudovividness can make it difficult to identify foreground content and extract key information from images. In this paper, a new underwater image evaluation metric, NIPQ, is proposed based on underwater environment characteristics and the HVS. The NIPQ is designed in a four-stage framework. The first stage focuses on the attention mechanism of the HVS. The second stage considers the influence of color richness in a statistical sense. The third stage is inspired by underwater image formation models and considers color fidelity from a pixel perspective. Finally, the fourth stage systematically integrates color richness and color fidelity for real-time quality monitoring. The underwater image database UOQ, with MOS, is also built to measure IQA performance. Experimental results show that, compared with other commonly used underwater metrics, the proposed NIPQ correlates better with MOS, demonstrating better performance.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61571377, 61771412, and 61871336) and the Fundamental Research Funds for the Central Universities (20720180068).

References

  1. D. Akkaynak and T. Treibitz, “A revised underwater image formation model,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6723–6732, Salt Lake City, UT, USA, March 2018. View at: Google Scholar
  2. S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, “Automatic underwater image pre-processing,” in Proceedings of CMM’06, Brest, France, 2006. View at: Google Scholar
  3. I. Avcibas, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” Journal of Electronic Imaging, vol. 11, no. 2, pp. 206–223, 2002. View at: Google Scholar
  4. D.-Y. Tsai, Y. Lee, and E. Matsuyama, “Information entropy measure for evaluation of image quality,” Journal of Digital Imaging, vol. 21, no. 3, pp. 338–347, 2008. View at: Publisher Site | Google Scholar
  5. Y. Y. Fu, “Color Image Quality Measures and Retrieval,” New Jersey Institute of Technology, Newark, NJ, USA, 2006. View at: Google Scholar
  6. A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012. View at: Publisher Site | Google Scholar
  7. Q. Wu, Z. Wang, and H. Li, “A highly efficient method for blind image quality assessment,” in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 339–343, IEEE, Quebec City, Canada, September 2015. View at: Publisher Site | Google Scholar
  8. J. Kim, A.-D. Nguyen, and S. Lee, “Deep cnn-based blind image quality predictor,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 11–24, 2018. View at: Publisher Site | Google Scholar
  9. S. Bosse, D. Maniry, K.-R. Muller, T. Wiegand, and W. Samek, “Deep neural networks for no-reference and full-reference image quality assessment,” IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206–219, 2017. View at: Publisher Site | Google Scholar
  10. X. Liu, J. Van De Weijer, and A. D. Bagdanov, “Rankiqa: learning from rankings for no-reference image quality assessment,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1040–1049, Venice, Italy, October 2017. View at: Google Scholar
  11. K. Panetta, C. Gao, and S. Agaian, “Human-visual-system-inspired underwater image quality measures,” IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–551, 2016. View at: Publisher Site | Google Scholar
  12. M. Yang and A. Sowmya, “An underwater color image quality evaluation metric,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 6062–6071, 2015. View at: Publisher Site | Google Scholar
  13. Y. Wang, N. Li, Z. Li et al., “An imaging-inspired no-reference underwater color image quality assessment metric,” Computers & Electrical Engineering, vol. 70, pp. 904–913, 2018. View at: Publisher Site | Google Scholar
  14. S. Kastner and L. G. Ungerleider, “Mechanisms of visual attention in the human cortex,” Annual Review of Neuroscience, vol. 23, no. 1, pp. 315–341, 2000. View at: Publisher Site | Google Scholar
  15. L. Zhang, J. Chen, and B. Qiu, “Region-of-interest coding based on saliency detection and directional wavelet for remote sensing images,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 1, pp. 23–27, 2016. View at: Publisher Site | Google Scholar
  16. C. Zhu, K. Huang, and G. Li, “An innovative saliency guided roi selection model for panoramic images compression,” in Proceedings of the 2018 Data Compression Conference, p. 436, IEEE, Snowbird, UT, USA, March 2018. View at: Google Scholar
  17. Z. Cui, J. Wu, H. Yu, Y. Zhou, and L. Liang, “Underwater image saliency detection based on improved histogram equalization,” in Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, Engineers and Educators, pp. 157–165, Springer, Singapore, 2019. View at: Google Scholar
  18. L. Xiu, H. Jing, S. Min, and Z. Yang, “Saliency segmentation and foreground extraction of underwater image based on localization,” in Proceedings of the OCEANS 2016, Shanghai, China, 2016. View at: Publisher Site | Google Scholar
  19. W. Zhu, S. Liang, Y. Wei, and J. Sun, “Saliency optimization from robust background detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2814–2821, Columbus, OH, USA, June 2014. View at: Google Scholar
  20. D. Akkaynak and T. Treibitz, “Sea-thru: a method for removing water from underwater images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1682–1691, Long Beach, CA, USA, April 2019. View at: Google Scholar
  21. D. Akkaynak, T. Treibitz, T. Shlesinger, Y. Loya, R. Tamir, and D. Iluz, “What is the space of attenuation coefficients in underwater computer vision?” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4931–4940, Honolulu, HI, USA, 2017. View at: Google Scholar
  22. D. Berman, T. Treibitz, and S. Avidan, “Diving into haze-lines: color restoration of underwater images,” in Proceedings of the British Machine Vision Conference (BMVC), vol. 1, London, UK, September 2017. View at: Google Scholar
  23. M. G. Solonenko and C. D. Mobley, “Inherent optical properties of jerlov water types,” Applied Optics, vol. 54, no. 17, pp. 5392–5401, 2015. View at: Publisher Site | Google Scholar
  24. E. Y. Lam, “Combining gray world and retinex theory for automatic white balance in digital photography,” in Proceedings of the Ninth International Symposium on Consumer Electronics 2005, pp. 134–139, IEEE, Melbourne, Australia, July 2005. View at: Google Scholar
  25. X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, “A retinex-based enhancing approach for single underwater image,” in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576, IEEE, Paris, France, October 2014. View at: Publisher Site | Google Scholar
  26. C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, “Enhancing underwater images and videos by fusion,” in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88, IEEE, Providence, RI, USA, June 2012. View at: Google Scholar
  27. B. Zhang, “Image enhancement based on equal area dualistic sub-image histogram equalization method,” IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, 1999. View at: Google Scholar
  28. E. H. Land, “The retinex theory of color vision,” Scientific American, vol. 237, no. 6, pp. 108–128, 1977. View at: Publisher Site | Google Scholar
  29. D. Hasler and S. E. Suesstrunk, “Measuring colorfulness in natural images,” in Human Vision and Electronic Imaging VIII, vol. 5007, pp. 87–95, International Society for Optics and Photonics, Bellingham, WA, USA, 2003. View at: Google Scholar

Copyright © 2020 Di Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

