Complexity
Special Issue

Control and Stability Analysis of Complex Dynamical Systems involved in Finance, Ecology and Engineering


Research Article | Open Access

Volume 2020 |Article ID 6549410 | 14 pages | https://doi.org/10.1155/2020/6549410

Learning-Based Dark and Blurred Underwater Image Restoration

Academic Editor: Hamid Reza Karimi
Received: 24 May 2020
Revised: 26 Jun 2020
Accepted: 07 Jul 2020
Published: 01 Aug 2020

Abstract

Underwater image processing is a difficult subtopic in the field of computer vision due to the complex underwater environment. Since light is absorbed and scattered, underwater images suffer from many distortions such as underexposure, blurriness, and color cast. The poor quality hinders subsequent processing such as image classification, object detection, and segmentation. In this paper, we propose a method to collect underwater image pairs by placing two tanks in front of the camera in turn. Thanks to the resulting high-quality training data, the proposed restoration algorithm based on deep learning achieves inspiring results on underwater images taken in low-light environments. The proposed method addresses two of the most challenging problems for underwater images: darkness and blurriness. The experimental results show that the proposed method surpasses most other methods.

1. Introduction

Recently, developing, exploring, and protecting the ocean's resources have received significant attention from the international community. Following the recent development of sea research, autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs) have been widely used as carriers of various sensing devices. Sonar and vision cameras are the two major kinds of perception equipment for detecting and recognizing objects in underwater environments. In general, sonar is suitable for long-range detection and generates low-resolution images, whereas vision sensors are used for short-range, high-resolution identification.

The underwater imaging model helps us better understand underwater optical propagation. The diagram of light transmission underwater is shown in Figure 1. The optical sensor receives three types of light, shown as three different arrow symbols. The first, the solid arrow, represents the direct transmission of light from the subject without obstruction or scattering by particles. The second is the forward scattering light, which is reflected from the objects and scattered by particles. The third is the background scattering light, which comes from the background light and is reflected by the suspended particles. According to the model, the imaging process of underwater images can be represented as the linear superposition of three components [1]:

Er = Ed + Ef + Eb, (1)

where Er represents the total received light, and Ed, Ef, and Eb represent the direct transmission light, the forward scattering light, and the background scattering light, respectively. The background scattering light comes from all light scattered by the suspended particles except the objects; it blurs the visual effect and reduces the clarity of underwater images. If the object is close to the camera, the forward scattering light has very little value in the direction of the camera. In this situation, the component Ef is often ignored for easier analysis of the model [2], so (1) can be shortened to

Er = Ed + Eb. (2)
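
As a quick numerical illustration of the superposition model and its close-range simplification, the components can be sketched in a few lines of NumPy; all component values below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
E_d = rng.uniform(0.0, 0.6, size=(4, 4))   # direct transmission component
E_f = rng.uniform(0.0, 0.1, size=(4, 4))   # forward scattering (small near the camera)
E_b = rng.uniform(0.0, 0.3, size=(4, 4))   # background scattering component

E_r_full = E_d + E_f + E_b     # full superposition model
E_r_short = E_d + E_b          # simplified model with E_f dropped

# Close to the camera, E_f is small, so the two models nearly agree.
approx_error = np.abs(E_r_full - E_r_short).max()
```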

Due to light absorption and scattering in an underwater environment, underwater images generally have the following problems: low contrast and brightness, blurry details, color distortion, and bright specks. In addition, dark environments are also encountered during deep-sea exploration and complex environmental detection. A camera flash can be used to add light to the environment; however, restoration algorithms must still cope with dark underwater environments when the flash is not available, for three primary reasons: (1) the flash is not allowed on certain occasions, such as the detection of certain sea creatures or passive detection of underwater intrusion; (2) the flash can be ineffective in complex surroundings, for example, when its light is blocked by the complicated structure of bridge bases, ports, or sea wrecks; (3) the battery's power limits the flash exposure range, so even when the flash is turned on, large areas remain outside the illuminated region. These issues, arising in special submerged scenes, urgently need to be addressed. In this paper, we mainly focus on restoring dark and blurred underwater images.

The methods related to image restoration can be divided into traditional methods and “modern” data-driven techniques. The former includes model-based and model-free methods, detailed in Section 2. The latter utilizes big data to learn the model, mainly with machine learning techniques. Deep learning, an important branch of machine learning, has made rapid progress since 2012 across various computer vision tasks. Three components have vastly improved deep learning methods: big data, improved networks, and powerful hardware. Big data provides not only adequate data for training but also a standard answer (Ground Truth) to the algorithms. In other words, deep learning methods “peek” at the ground truth, while traditional methods go without it.

The contributions of this paper are summarized as follows:

(1) Image pairs (dark and blurred underwater images together with the corresponding normally exposed, clear images) are collected and provided to a neural network. We use a new method to collect the pairs: the objects are placed in the air, and the light passes through both air and water to the camera to simulate the underwater environment. The collecting method is proved effective for generating underwater images in both theory and practice.

(2) The proposed algorithm restores images captured in extremely dark and blurred underwater environments. The restoration results surpass those of most underwater enhancement and restoration methods.

The rest of this paper is organized as follows. Related work on underwater restoration is reviewed in Section 2. The new framework of the neural network is presented in Section 3. Detailed experimental results are shown in Section 4. Several problems that need further study are discussed in Section 5. The paper concludes with Section 6.

2. Related Work

The restoration of dark underwater images has been extensively studied in a large body of literature. In this section, we provide a short review of related work. In our research, we only consider underwater image restoration that relies on a single image. The categories of single underwater image restoration are shown in Figure 2. The algorithms can be divided into two categories: traditional methods and machine learning methods [3].

Traditional methods include image enhancement and image restoration, and they aim at improving image quality in either the spatial or the frequency domain [4]. The former is often a subjective process, a heuristic procedure designed to improve low-quality images without a degradation model. The latter, on the other hand, formulates an objective criterion and attempts to reconstruct a degraded image by using prior knowledge of the degradation. In other words, it models the degradation procedure and applies the inverse process to recover the ideal image; in the underwater scene, this model is called the underwater image formation model (IFM). Image enhancement is thus equivalent to a blind operation, while image restoration tries to model the reverse procedure of the degradation. The differences between image enhancement and image restoration are listed in Table 1.


Criterion | Enhancement | Restoration

Result evaluation | Subjective | Objective
Modeling of degradation | No | Yes
Use of prior knowledge | No | Yes

Early studies of underwater IFM-free methods directly applied the corresponding methods used out of the water. Later methods were designed according to the distinguishing features found underwater, such as haze, color cast, and low contrast. The IFM-free methods can be divided into two categories: spatial-domain [5, 6] and transform-domain methods [7–9].

The spatial-domain methods redistribute the intensity histogram by expanding gray levels. They work in different color models such as Red-Green-Blue (RGB), Hue-Saturation-Intensity (HSI), and Hue-Saturation-Value (HSV). The methods can further be divided into single-color-model (SCM) and multiple-color-model (MCM) approaches according to the number of color models used. The typical SCM-based image enhancement methods, Histogram Equalization (HE) [10], Contrast Limited Adaptive Histogram Equalization (CLAHE) [11], and Generalized Unsharp Masking (GUM) [12], work in the RGB color model. Many researchers, such as Torres-Méndez et al. [13], Iqbal et al. [14], and Huang et al. [15], proposed MCM-based image enhancement. For example, Torres-Méndez et al. [13] and Iqbal et al. [14] used a Markov Random Field (MRF) and an Integrated Colour Model (ICM) to describe the correlation procedure of distortion. Huang et al. [15] proposed the relative global histogram stretching (RGHS) strategy in the RGB and CIE-Lab color models.
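
As an illustration of the SCM-based spatial-domain idea, here is a minimal NumPy sketch of plain histogram equalization (HE); it is a textbook version for intuition, not the implementation of any cited method:

```python
import numpy as np

def histogram_equalize(gray):
    # Plain histogram equalization on an 8-bit grayscale image.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map gray levels so the cumulative distribution becomes roughly linear.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

# A dark, low-contrast image with values squeezed into [10, 60].
dark = np.linspace(10, 60, 64, dtype=np.uint8).reshape(8, 8)
equalized = histogram_equalize(dark)
```

After equalization the gray levels spread over the full [0, 255] range, which is exactly the redistribution of the intensity histogram described above.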

An image can also be explained in the frequency domain. The high-frequency component of an image usually indicates edge regions, where the brightness or color values of the pixels change suddenly, whereas the low-frequency component indicates flat, large areas. To achieve a higher-quality image, the high-frequency component needs more data, while the low-frequency component does not require as much. The transform-domain image enhancement methods first convert the image from the spatial domain into the frequency domain through transforms such as the Fourier Transform [16]. Then, the quality of underwater images can be improved by amplifying the high-frequency component and suppressing the low-frequency component simultaneously. In 2010, Prabhakar et al. [17] used a homomorphic filter, an anisotropic filter, and an adaptive wavelet subband threshold to correct nonuniform illumination and to smooth and denoise the image. In 2016, Amjad et al. presented a wavelet-based fusion method [18] to improve low-quality underwater images. In 2017, Vasamsetti et al. proposed a wavelet-based perspective technique [19] for underwater images, which performed the discrete wavelet transform (DWT) [20] on the RGB channels to generate two decomposition levels and reconstruct the grayscale images.
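
The frequency-domain recipe above (amplify high frequencies, keep or suppress low ones) can be sketched with a plain FFT. This toy high-boost filter is a stand-in for the homomorphic and wavelet pipelines cited, not a reimplementation of them; the cutoff radius and gain are arbitrary:

```python
import numpy as np

def high_boost(image, gain=1.5):
    # Move to the frequency domain, centering the low frequencies.
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    y, x = np.ogrid[:h, :w]
    dist = np.sqrt((y - h / 2) ** 2 + (x - w / 2) ** 2)
    # Amplify frequencies beyond an (arbitrary) radius; keep the rest.
    mask = np.where(dist > min(h, w) / 8, gain, 1.0)
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real

img = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
sharp = high_boost(img)
```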

The image formation model-based (IFM-based) method is one of the traditional methods. It analyzes the underwater imaging mechanism and law of light propagation in water then constructs a physical model to restore high-quality images. Considering the optical properties, different prior-based methods are used for underwater image restoration. These methods include dark channel prior (DCP) [21], underwater dark channel prior (UDCP) [22], red channel prior (RCP) [23], and blurriness and light prior [24]. According to the priors, the background light (BL) and transmission map (TM) can be derived and entered into the IFM model for image restoration.
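
The dark channel prior underlying several of these methods is easy to sketch: for each pixel, take the minimum intensity over a local patch and over the color channels. In haze-free regions this value is near zero, while backscatter lifts it. The patch size and test images below are illustrative only:

```python
import numpy as np

def dark_channel(image, patch=3):
    # Per-pixel minimum over channels, then over a local patch.
    h, w, _ = image.shape
    chan_min = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

clear = np.zeros((8, 8, 3))
clear[..., 2] = 0.9               # saturated blue scene: dark channel ~ 0
hazy = clear * 0.5 + 0.5          # uniform backscatter lifts every channel
dc_clear = dark_channel(clear)
dc_hazy = dark_channel(hazy)
```

Because the hazy version has a uniformly large dark channel, prior-based methods can read the amount of backscatter off it and estimate the transmission map from there.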

In recent years, many researchers have explored machine learning technology to improve the quality of underwater images. Support Vector Machines (SVMs) [25], one kind of machine learning method, were mostly used in underwater image object detection [26]. The deterministic annealing algorithm [27, 28], developed based on Lyapunov’s functional method, can be used in the learning of network parameters. During the past few decades, deep learning has achieved rapid development.

Deep learning works well by using convolutional neural networks (CNNs) [29, 30] or generative adversarial networks (GANs) [31] trained by backpropagation [32]. The model in deep learning is unlike the physical model in image restoration; methods based on deep learning are neither image enhancement nor image restoration in the traditional sense. Depending on the model, deep learning approaches can be divided into several categories. Sun et al. [33] suggested the pixel-to-pixel (P2P) network to enhance underwater images; its encoder is composed of three convolutional layers, while the decoder is three deconvolutional layers. The underwater generative adversarial network (UGAN) [34] was proposed to improve underwater image quality; its discriminator is a Wasserstein GAN with gradient penalty (WGAN-GP) [35], which imposes a soft constraint on the output. To address the scarcity of real underwater training images, Anwar et al. [36] proposed an end-to-end model, UWCNN, trained on synthetic images. To take advantage of the popular dense connections, residual networks, and multiscale networks, the multiscale dense block (MSDB) algorithm [37] was proposed to enhance underwater images. Beyond single-branch networks, multibranch networks have been designed to learn different features of the same input; for example, UIE-Net [38] is composed of three subnetworks. In general, deep learning is divided into two classes: supervised learning and semisupervised/unsupervised learning. The mainstream technologies in the two classes are CNNs and GANs, respectively.

3. Method

3.1. Procedure Pipeline of Dark and Blurred Underwater Image

The pipelines based on the traditional methods and on deep learning can both be used to process dark and blurred underwater images, as shown in Figure 3. In our study, the dark environment is defined as exposure reduced to 1/100 of the normal amount. The blurred underwater environment is created by adding a little milk powder to the water. The traditional methods are divided into single algorithms and cascading algorithms, shown in the upper subimage surrounded by the dotted box.

The traditional pipeline using a single algorithm works well on normal-light underwater images, as shown in line A of Figure 3. However, it gives little consideration to low-light conditions and often performs poorly in low-light underwater environments.

The second type of traditional method cascades multiple low-level vision processing procedures, shown in line B of Figure 3. The first step is luminosity scaling of the dark images. The images taken by the Nikon D700 (made in Japan) are RAW-format images with 14 bits, so the maximum possible brightness value is 2^14 = 16384. Experiments show that the brightness values of the pixels are less than 50 in the 100-fold underexposed environment. The procedure of luminosity scaling can be written as v' = v/vmax, where v represents the brightness value of a pixel and vmax represents the maximum brightness value over all pixels. This simple luminosity scaling initially solves the issue of underexposure but amplifies the noise along with the information. To remove the amplified noise, a noise reduction step immediately follows luminosity scaling. Since BM3D [39] is a classic noise reduction algorithm, we select it as the baseline in the denoising step. After brightness enhancement and noise reduction, the last step is underwater image enhancement and restoration.
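
The first two steps of this cascade can be sketched as follows. The scaling rule (stretch the values so the observed maximum reaches full scale) is our reading of the formula, and a naive mean filter stands in for BM3D purely to show the pipeline order:

```python
import numpy as np

def luminosity_scale(raw):
    # Assumed scaling rule: stretch values so the observed maximum maps to 1.0.
    return raw.astype(np.float64) / raw.max()

def mean_denoise(img, k=3):
    # Naive mean filter standing in for BM3D, just to illustrate the cascade.
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# Simulated 100-fold underexposed raw values (all below 50, as in the text).
raw = np.random.default_rng(1).integers(1, 50, size=(16, 16))
bright = luminosity_scale(raw)
restored = mean_denoise(bright)
```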

The third type of traditional method is also a cascade, but it uses a single method to accomplish both brightness enhancement and denoising, shown in line C of Figure 3. Recently, many algorithms have been proposed to recover low-light images while keeping a high SNR, such as the Robust Retinex Model algorithm [40], LIME [41], etc. Because LIME is a simple yet effective low-light image enhancement method, we select it as the baseline in our study. The principle of LIME is as follows: first, the illumination of each pixel is estimated individually by taking the maximum value among the R, G, and B channels; then, the initial illumination map is refined by imposing a structure prior on it; finally, the enhancement is achieved using the constructed illumination map.
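
The first step of LIME, estimating each pixel's illumination as the channel-wise maximum, can be sketched directly. The structure-prior refinement is omitted here, so this is only a rough stand-in for the full method:

```python
import numpy as np

def initial_illumination(rgb):
    # LIME's first step: per-pixel illumination = max over the R, G, B channels.
    return rgb.max(axis=2)

def enhance(rgb, eps=1e-6):
    # Toy enhancement: divide each channel by the (unrefined) illumination map.
    L = initial_illumination(rgb)
    return rgb / (L[..., None] + eps)

dark_rgb = np.full((4, 4, 3), 0.1)
dark_rgb[..., 1] = 0.2                 # a dim, greenish image
out = enhance(dark_rgb)
```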

Unlike the traditional methods, which work without ground truth images, a general deep learning neural network must be trained on data before the test phase, as shown at the bottom of the pipeline figure. The deep learning method includes a training phase (line D_1) and a test phase (line D_2). Data in the training phase are taken in the atmosphere by placing tanks in front of the camera, while data in the test phase are taken in the water. The details of the data collection procedure are described in the next section. In our work, a deep convolutional neural network [42] is proposed for dark and blurred underwater images. Specifically, a network similar to U-net [43] is used for processing, inspired by recent algorithms [43, 44].

The structure of the proposed deep learning network is shown in Figure 4. The raw images are entered into the input of the network, and the restored images are output.

In the network, block 1 includes three layers: two convolutional layers, abbreviated as conv2d(32, [3, 3]), and a max pooling2d layer. The parameter “32, [3, 3]” of a convolutional layer indicates that the number of output channels is 32 and the convolutional kernel size is 3 × 3. Blocks 2 to 8, respectively, include two conv2d(64, [3, 3]) and max pooling2d; two conv2d(128, [3, 3]) and max pooling2d; two conv2d(256, [3, 3]) and max pooling2d; two conv2d(512, [3, 3]); two conv2d(256, [3, 3]); two conv2d(128, [3, 3]); and two conv2d(64, [3, 3]). Block 9 includes two conv2d(32, [3, 3]) layers and a conv2d(12, [1, 1]) layer.
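
Assuming 'same'-padded convolutions and 2 × 2 max pooling (the padding and pooling sizes are not stated above, so both are assumptions, as is the input resolution), the encoder's feature map sizes can be traced with a few lines of Python:

```python
def conv2d_shape(shape, channels):
    # 'same'-padded 3x3 convolution: spatial size preserved, channels change.
    h, w, _ = shape
    return (h, w, channels)

def max_pool2d_shape(shape):
    # 2x2 max pooling: spatial size halved, channels preserved.
    h, w, c = shape
    return (h // 2, w // 2, c)

shape = (512, 512, 3)                  # hypothetical input resolution
for ch in (32, 64, 128, 256):          # blocks 1-4: two convs, then pooling
    shape = conv2d_shape(conv2d_shape(shape, ch), ch)
    shape = max_pool2d_shape(shape)
bottleneck = conv2d_shape(conv2d_shape(shape, 512), 512)   # block 5
```

Four pooling stages shrink each spatial dimension by a factor of 16, which is why the decoder needs a matching number of upsampling stages to recover the input size.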

The U-net network belongs to the family of autoencoder neural networks [45, 46], which are trained to map the input to the output. The autoencoder network is divided into two parts: an encoder function h = F(x) and a decoder function G(h) that generates the reconstruction, shown as the left and right parts of Figure 4, respectively. The skip connections, shown as blue arrows in Figure 4, pass encoder feature maps directly to the decoder and help avoid vanishing gradients.

Since the algorithm is an offline one, it is necessary to provide a computational complexity analysis. Computational complexity is a concept that focuses on the amount of computing resources required for particular kinds of tasks, and each deep learning operation requires many such resources. For a convolutional layer, let the kernel size be Fw × Fh, let the output feature map size be Wout × Hout, and let the numbers of input and output feature maps be Nin and Nout, respectively. If one multiplication and one addition are required for each kernel element and output position, all operations of the convolutional layer can be calculated as 2NinNoutFwFhWoutHout. For a ReLU activation layer, there is only one compare operation per element, so the operation number of ReLU is NoutWoutHout. For a pooling layer with filters of size Pw × Ph producing Npout output maps of size Wpout × Hpout, the operation number can be calculated as WpoutHpoutNpout.
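
These operation counts can be turned into a small calculator. The notation follows our reconstruction of the formulas above, and the example input resolution is assumed, so treat the numbers as illustrative:

```python
def conv_ops(n_in, n_out, f_w, f_h, w_out, h_out):
    # One multiplication and one addition per kernel tap and output element.
    return 2 * n_in * n_out * f_w * f_h * w_out * h_out

def relu_ops(n_out, w_out, h_out):
    # One comparison per output element.
    return n_out * w_out * h_out

def pool_ops(n_out, w_out, h_out):
    # Roughly one operation per pooled output element.
    return n_out * w_out * h_out

# Example: the first conv2d(32, [3, 3]) layer applied to a 256x256 RGB input
# (the input resolution is an assumption for illustration).
ops = conv_ops(3, 32, 3, 3, 256, 256)
```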

3.2. Procedure of Collecting Data

Deep learning methods are divided into fully supervised, semisupervised, and unsupervised methods according to whether the data are labeled. In our research, we treat underwater image restoration as a supervised learning task, in which the data labels must be given in the training phase.

A label is not just the name of the image but has varied concepts in different computer vision areas. For example, a label is a number that represents the ID of a category in an image classification task. It is the location of the objects in the object detection task. It indicates whether each pixel belongs to a category in image segmentation. In the image restoration, the label is the high-quality image corresponding to the low-quality image.

The training and test phases are shown at the bottom of Figure 3. The training data in the training phase (line D_1) have two parts: the low-light and blurred images (left) and the Ground Truth images (right). The Ground Truth, represented by the clear Lena image, is the label of the dark images on the left. Because our method uses label data, it is a fully supervised method. The two kinds of images, before and after restoration, must be the same size, preferably aligned pixel by pixel. The learning-based model learns the mapping between the two kinds of training data using the BP algorithm [47]. The trained model is produced through the training phase, shown in line D_1. In the test phase (line D_2), the low-light images are input into the trained model, which then generates the normal-light images.

The test and training data should ideally be drawn from the same probability distribution to ensure the effectiveness of the algorithm. Specifically, since the test data are taken from an extremely dark and blurred underwater environment, the training data need to be collected from the same or a similar environment. However, it is well known that collecting real underwater images is costly, which hinders the application of deep learning methods in underwater image processing. In Section 2, several methods for collecting underwater images are discussed. Here, we propose a new method to collect underwater images, as shown in Figure 5.

In scene 1, the camera is fixed on a tripod, and a glass tank filled with clear water is placed on the table. The glass tank has a high grade of transparency, and the water is clear. The images taken in scene 1 are considered the ground truth in our research. Then, the first tank is moved away, and the second tank is placed in the same location as the first. In other words, the only difference between scene 1 and scene 2 is the replacement of the tank. The second tank is filled with water mixed with suspended particles; milk powder is added to the clear water to simulate the suspended particles. In scene 2, we capture dark images by adjusting the camera parameters, for example, reducing the exposure time, closing down the aperture, and lowering the ISO value. Thus, we can collect the low-quality underwater images in scene 2.

The bottom subimage in Figure 5 demonstrates the refraction of the transmitted light, which follows Snell's law: n1 sin θ1 = n2 sin θ2, where θ1 and θ2 are the angles of incidence and refraction and n1 and n2 are the indices of refraction of the two media. The law states that, for a given pair of media, the ratio of sin θ1 to sin θ2 equals the ratio n2/n1 of the indices of refraction of the respective media. The indices of refraction are about 1, 1.33, and 1.5 in the atmosphere, water, and glass, respectively; the index of refraction of water is larger than that of the atmosphere. Considering that the thickness of the glass is much smaller than the width of the water in the bottom of Figure 5, the refractive effect of the glass is ignored. According to the refraction law, the camera has a larger angle of view when the light passes through a tank filled with water. If we removed the tank in scene 1, the camera would take the picture directly through clear air; in that case we could actually get higher-quality label data (Ground Truth), but the angle of view would shrink, the images from scene 1 and scene 2 would cover different shooting regions, and the two images of an image pair could not be aligned pixel by pixel. In summary, we collect the image pairs from the two scenes.
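
Snell's law is easy to verify numerically; the indices below follow the values quoted in the text, and the incidence angle is arbitrary:

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    # Snell's law n1*sin(theta1) = n2*sin(theta2), solved for theta2.
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air (n ~ 1.0) bends toward the normal,
# so, tracing rays in reverse, the camera gains a wider angle of view when
# shooting through the water-filled tank, as argued above.
theta_water = refraction_angle(30.0, 1.0, 1.33)
```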

3.3. Scattering Models
3.3.1. Atmospheric Scattering Model

For an image shot in a scattering medium, only part of the light from the object reaches the camera due to absorption and scattering. Similar to (2), the atmospheric scattering model [48, 49] can be written as

U(x) = I(x)T(x) + B(1 − T(x)), (3)

where x denotes the pixel coordinates, U(x) is the captured image, I(x) is the clear image, B is the global atmospheric light, and T(x) is the proportion of light from the object that is transmitted to the camera. When the medium is homogeneous, T(x) can be written as an exponential decay term:

T(x) = exp(−βd(x)), (4)

where β represents the atmospheric attenuation coefficient and d(x) is the distance from the object to the camera. In the atmospheric scattering model, attenuation is independent of wavelength. Since underwater images usually have a blurred appearance, the atmospheric scattering model can be used to describe the degradation of underwater images.
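
A short NumPy sketch shows that the atmospheric model above inverts exactly when the atmospheric light and the transmission map are known, which is the premise of the model-based restoration methods; the depth, attenuation coefficient, and atmospheric light below are made-up values:

```python
import numpy as np

rng = np.random.default_rng(2)
I = rng.uniform(0.2, 0.9, size=(8, 8))   # clear scene radiance
d = np.full((8, 8), 5.0)                 # scene depth (uniform, for simplicity)
beta, B = 0.2, 0.8                       # attenuation coefficient and airlight

T = np.exp(-beta * d)                    # transmission: exponential decay with depth
U = I * T + B * (1.0 - T)                # degraded observation per the model

# Knowing B and T, the degradation inverts in closed form:
I_restored = (U - B * (1.0 - T)) / T
```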

3.3.2. Underwater Scattering Model [4, 50]

Similar to the atmospheric scattering model, the underwater imaging model can be written as

U(x) = I(x)T(x, λ) + B(λ)(1 − T(x, λ)), (5)

where λ denotes the wavelength of the light. The main difference between the underwater and atmospheric models is the effect of the wavelength parameter. The attenuation of each wavelength must be calculated separately in (5), because light of different wavelengths attenuates at different rates in an underwater environment. Experiments show that light of about 500 nm wavelength (blue-green) has the smallest attenuation coefficient [51]; thus, underwater images take on a blue-green cast. In our research, the object is close to the camera, so the effect of the wavelength is ignored.

Similar to the equation in the atmosphere, the transmission ratio in underwater environments can be written as

T(x, λ) = exp(−β(λ)d(x)). (6)

It can also be written as

T(x, λ) = E(x, λ)/E0(x, λ) = Nrer(λ)^d(x), (7)

where E(x, λ) is the strength of the light after transmission over the distance d(x), E0(x, λ) is the energy of the light at the original location before transmission, and Nrer(λ) is the normalized residual energy ratio.

3.3.3. Mixed Scattering Model in Underwater and Atmosphere

When two different media lie between the object and the camera, as shown in Figure 6, the scattering model is defined as a mixed scattering model. If the tank between the camera and the object is removed, the mixed scattering model reduces to the atmospheric scattering model. Similarly, if the atmospheric medium is removed, in other words, if the object is placed in the water, the model reduces to the underwater scattering model.

The mixed model is expressed as

U(x) = I(x)Ta(x)Tw(x, λ) + Bw(λ)(1 − Tw(x, λ)) + Ba(1 − Ta(x)), (8)

where Ta(x) is the light transmittance ratio in air, Tw(x, λ) is the transmittance ratio in water with suspended particles, Ba is the global background light in air, and Bw(λ) is the global background light in the tank. Because clear air can be approximated as containing few particles, Ta(x) is approximately set to 1, and I(x)Ta(x)Tw(x, λ) is close to I(x)Tw(x, λ). Additionally, because the particles in air are far fewer than those in water, the air scattering term Ba(1 − Ta(x)) can be ignored. By the above analysis, the mixed model can be expressed as

U(x) = I(x)Tw(x, λ) + Bw(λ)(1 − Tw(x, λ)). (9)
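
The reduction above can be checked numerically: with the air transmittance set to 1 and the air scattering term dropped, the mixed model coincides with the pure underwater model. All values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
I = rng.uniform(0.1, 0.9, size=(8, 8))   # scene radiance (illustrative)
T_w = np.full((8, 8), 0.6)               # transmittance through the turbid water
B_w = 0.7                                # global background light in the tank
T_a = 1.0                                # clear-air transmittance, approximated as 1

# Mixed model with the air scattering term dropped vs. pure underwater model:
U_mixed = I * T_a * T_w + B_w * (1.0 - T_w)
U_water = I * T_w + B_w * (1.0 - T_w)
```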

Comparing (9) with (5), we find that when the light passes through the two media (air and water), objects placed in the air are imaged as if they were placed in the water. Taking advantage of this property, in the experimental part of our research, the objects are placed in air to simulate objects in water.

3.4. Formulation as an Image Restoration Task

To recover the clear image I(x), the traditional underwater image restoration methods estimate not only I(x) but also the homogeneous global background light B and the medium energy ratio T(x) from an underwater image U(x), as in (3). The estimation proceeds in two main steps: after first estimating B and T(x), the latent image I is reconstructed by inverting the underwater formation model.

Unlike the previous conventional methods, the methods based on deep learning directly estimate the latent image I without calculating the global background light and the medium energy ratio. Instead of estimating parameters, deep learning methods compute the residual information between the target latent image and the underwater image in a data-driven, end-to-end manner. We use the maximum a posteriori (MAP) estimator to explain the recovery of I(x) from U(x), writing the degradation as a nonlinear function U(x) = f(I(x)), shortened as U = f(I).

According to the Bayes rule, the maximization over the posterior probability distribution can be written as

Î = arg max_I p(I | U) = arg max_I [p(U | I)p(I)/p(U)], (10)

where p(U | I) is the likelihood of observing U given I and p(I) is the prior on the latent image. The maximization of the posterior over the observations can then be written as

Î = arg max_I p(U | I)p(I). (11)

In (11), p(U) is omitted because U is a determined value, so p(U) is fixed. Further, (11) can be converted to minimizing the negative log-likelihood:

Î = arg min_I [−log p(U | I) − log p(I)], (12)

where the first term, −log p(U | I), is the log-probability term familiar from the maximum likelihood method; it enforces that the estimate remains faithful to the degraded observation. The second term, −log p(I), is the a priori probability influencing the results and acts as the regularization prior term.
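
As a toy instance of this MAP formulation, the sketch below minimizes a quadratic data-fidelity term plus a smoothness prior by gradient descent; the energy, boundary handling, and step size are our choices for illustration, not the paper's loss:

```python
import numpy as np

def energy(I, U, lam):
    # Negative log posterior (up to constants): data fidelity + smoothness prior.
    gx = np.roll(I, -1, axis=1) - I
    gy = np.roll(I, -1, axis=0) - I
    return np.sum((U - I) ** 2) + lam * (np.sum(gx ** 2) + np.sum(gy ** 2))

def grad_step(I, U, lam, lr=0.05):
    # Gradient of the prior term is a (periodic) negative Laplacian.
    lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
           np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
    g = 2 * (I - U) - 2 * lam * lap
    return I - lr * g

rng = np.random.default_rng(4)
U = 0.5 + 0.1 * rng.standard_normal((16, 16))   # noisy observation
I_hat = U.copy()                                 # start from the observation
e0 = energy(I_hat, U, lam=1.0)
for _ in range(20):
    I_hat = grad_step(I_hat, U, lam=1.0)
e1 = energy(I_hat, U, lam=1.0)
```

Each step trades fidelity to the noisy observation against smoothness of the estimate, which is exactly the balance between the two terms of (12).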

4. Experiments

Three factors, namely, big data, subtle algorithms, and hardware for parallel computing, have led to significant progress in the deep learning approach. The high cost of collecting underwater images causes a scarcity of data for underwater restoration algorithms. Based on the theoretical analysis in Section 3.2 (“Procedure of Collecting Data”), we designed the corresponding experimental scenes for collecting training and test data, shown in Figure 7.

In the training phase, two kinds of data are collected in scene 1 and scene 2 as shown in Figure 5. Figure 7(a) shows the scene 1 for collecting the Ground Truth images in the training phase. A variety of methods are adopted to make the quality of the images better. A Wi-Fi controller is used to take a photo remotely by the app “qDslrDashboard.” Polarized lenses are mounted in front of the lens to eliminate the reflection light of the glass. The tripods and tables are fixed steadily. After collecting the GT data, the first tank filled with clear water is removed and the second tank filled with the slightly muddy water is placed in the same location. We adjust the camera parameter to take the corresponding low-exposure images.

In the training phase, the distance between the tank and the camera is only about 2 cm, while the corresponding distance in the test phase is longer than 45 cm. The difference arises from the different focus targets in the two phases: in the training phase, the camera focuses on the object behind the tank, whereas in the test phase it focuses on the object inside the tank. The closest focusing distance of the lens is 45 cm, so the camera must be placed at least 45 cm away. Training pictures picked randomly from the training data of about 50 pictures are shown in Figure 8. We can see that the images taken in dark environments are noisy and blurry.

The scene of the test phase is shown in Figure 7(b); several black cloths are used to obtain better test images. Firstly, the top of the tank is covered by a black cloth to avoid direct sunlight on the top. Secondly, the back of the tank is sheltered by a black cloth to make sure the scene has a black background. Thirdly, a big black cloth is placed behind the camera to prevent the rear light from shining on the tank and reflecting back to the camera. In the test phase, only one tank with slightly muddy water is provided, and the camera is adjusted to take the low-exposure images.

Based on recent advances in underwater low-light image processing algorithms, the comparative experiments are divided into four categories, shown in lines A, B, C, and D of Figure 3, respectively.

Line A in Figure 3 describes the pipeline that the underwater low-light images are processed directly by underwater enhancement and restoration algorithms. The comparative results using different underwater algorithms are shown in Figure 9.

It can be seen from Figure 9 that single underwater enhancement and restoration algorithms cannot effectively process underwater images taken in a dark environment. The results of the single underwater algorithms are too dark, except for column E, whose result shows abnormal bright spots and rough texture on what should be a smooth surface. Our algorithm achieves the effect closest to the Ground Truth.

Objective IQA methods are used to measure the results. The classical full-reference (FR) methods, peak signal-to-noise ratio (PSNR) and the structural similarity index [56] (SSIM), are selected for our quantitative analysis; higher values of these two FR metrics indicate better image quality. The no-reference (NR) methods selected are IL-NIQE [57], a feature-enriched completely blind image quality evaluator, and NIQE [58], a completely blind image quality analyzer; for both NR metrics, lower values indicate better quality.
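As a reminder of how the full-reference scores arise, PSNR can be computed directly from the mean squared error between a restored image and the ground truth. The NumPy sketch below (the `psnr` helper and its `max_val` default are ours, not from the paper) also shows why the Ground Truth column in the tables reports "Inf": an image compared with itself has zero error.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio between two images (full-reference metric).
    Assumes 8-bit images by default (max_val = 255)."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        # Identical images: PSNR is infinite, as in the Ground Truth column.
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, two 8-bit images differing uniformly by 10 gray levels give an MSE of 100 and hence a PSNR of about 28.1 dB, in the same range as the best scores in the tables below.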

The objective IQA results for our algorithm and the underwater enhancement and restoration algorithms are shown in Table 2. The results show that our algorithm outperforms the other algorithms.


Table 2: Objective IQA results for the standalone underwater enhancement and restoration algorithms.

| Image ID | Assessment method | Original image | CLAHE | GC | ICM | Rayleigh Distribution | UCM | RoWS | UDCP | Ours | Ground Truth |
|----------|-------------------|----------------|-------|----|-----|-----------------------|-----|------|------|------|--------------|
| 100600 | PSNR | 15.39 | 16.826 | 18.110 | 16.059 | 9.813 | 19.071 | 14.924 | 14.925 | 27.036 | Inf |
| 100600 | SSIM | 0.121 | 0.448 | 0.574 | 0.250 | 0.183 | 0.426 | 0.012 | 0.011 | 0.840 | 1 |
| 100600 | IL-NIQE | 85.737 | 92.681 | 85.937 | 82.378 | 53.607 | 76.323 | 157.962 | 154.894 | 39.235 | 28.716 |
| 100600 | NIQE | 14.473 | 8.559 | 8.652 | 8.687 | 8.810 | 8.862 | 20.830 | 20.901 | 5.756 | 5.1267 |
| 100602 | PSNR | 15.839 | 18.815 | 19.633 | 16.740 | 10.495 | 17.191 | 15.169 | 14.727 | 24.579 | Inf |
| 100602 | SSIM | 0.280 | 0.624 | 0.646 | 0.412 | 0.246 | 0.358 | 0.117 | 0.030 | 0.842 | 1 |
| 100602 | IL-NIQE | 83.133 | 84.796 | 84.337 | 72.221 | 62.313 | 72.443 | 85.413 | 87.961 | 32.961 | 34.990 |
| 100602 | NIQE | 7.467 | 8.458 | 8.490 | 8.360 | 10.198 | 8.131 | 7.731 | 8.021 | 5.9664 | 6.849 |
| 100603 | PSNR | 11.717 | 13.872 | 14.803 | 12.363 | 12.292 | 12.952 | 11.353 | 11.290 | 23.613 | Inf |
| 100603 | SSIM | 0.205 | 0.505 | 0.605 | 0.349 | 0.375 | 0.277 | 0.089 | 0.077 | 0.867 | 1 |
| 100603 | IL-NIQE | 80.893 | 80.292 | 68.379 | 65.774 | 45.447 | 64.708 | 80.856 | 84.260 | 37.503 | 33.236 |
| 100603 | NIQE | 7.310 | 7.176 | 7.296 | 7.504 | 9.025 | 7.085 | 7.339 | 7.186 | 5.622 | 4.641 |
| 100605 | PSNR | 11.633 | 13.156 | 14.822 | 12.173 | 11.956 | 14.140 | 10.985 | 11.023 | 22.167 | Inf |
| 100605 | SSIM | 0.134 | 0.352 | 0.512 | 0.237 | 0.279 | 0.357 | 0.007 | 0.017 | 0.770 | 1.00 |
| 100605 | IL-NIQE | 113.811 | 96.737 | 106.721 | 100.244 | 62.865 | 85.374 | 112.699 | 105.880 | 51.919 | 28.02 |
| 100605 | NIQE | 9.979 | 9.278 | 9.248 | 9.533 | 9.931 | 9.300 | 12.789 | 10.492 | 5.4618 | 4.944 |

The second image processing pipeline is shown in line B of Figure 3. This pipeline cascades three steps: luminosity scaling, denoising, and an underwater processing algorithm. The first two steps are abbreviated as “S + D.” In the experiment, the classic BM3D [39] is used for the denoising step. The results of this pipeline are shown in Figure 10.
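The exact luminosity-scaling rule is not spelled out here, but a simple mean-brightness normalization is one plausible reading of the “S” step. The sketch below (the function name and the `target_mean` default are our assumptions) would precede BM3D denoising in the cascade:

```python
import numpy as np

def scale_luminosity(img, target_mean=0.5):
    """Linearly scale a low-exposure image so its mean intensity reaches
    target_mean; a stand-in for the 'S' step of the S + D cascade.
    img: float array with values in [0, 1]."""
    current = img.mean()
    if current == 0:
        return img  # fully black frame: nothing to scale
    return np.clip(img * (target_mean / current), 0.0, 1.0)
```

In the full pipeline, the scaled image would then be passed through BM3D and one of the underwater algorithms; the clipping keeps bright highlights from overflowing the valid range.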

All cascading pipelines produce results with normal exposure relative to the standalone underwater algorithms; we attribute this to the luminosity-scaling step. However, the images in column C are fuzzier than those of our algorithm, and other algorithms, such as those in columns E, F, and G, cannot restore dark areas well. The objective IQA results are shown in Table 3.


Table 3: Objective IQA results for the “S + D + underwater algorithm” pipelines.

| Image ID | Assessment method | (A) | (B) | (C) | (D) | (E) | (F) | (G) | (H) Ours |
|----------|-------------------|-----|-----|-----|-----|-----|-----|-----|----------|
| 100600 | PSNR | 13.271 | 13.360 | 15.058 | 12.287 | 11.253 | 11.273 | 11.265 | 27.036 |
| 100600 | SSIM | 0.371 | 0.434 | 0.556 | 0.297 | 0.094 | 0.095 | 0.094 | 0.840 |
| 100600 | IL-NIQE | 59.899 | 50.375 | 47.866 | 49.569 | 77.211 | 75.689 | 74.244 | 39.235 |
| 100600 | NIQE | 6.442 | 6.952 | 5.757 | 6.376 | 6.458 | 6.500 | 6.470 | 5.756 |
| 100602 | PSNR | 13.825 | 13.974 | 15.241 | 12.233 | 11.900 | 11.702 | 11.416 | 24.579 |
| 100602 | SSIM | 0.426 | 0.520 | 0.571 | 0.338 | 0.243 | 0.210 | 0.171 | 0.842 |
| 100602 | IL-NIQE | 63.870 | 50.368 | 50.735 | 47.969 | 56.586 | 65.474 | 68.118 | 32.961 |
| 100602 | NIQE | 6.271 | 7.019 | 5.766 | 6.312 | 6.453 | 6.175 | 6.727 | 5.9664 |
| 100603 | PSNR | 13.088 | 13.221 | 12.941 | 12.010 | 11.733 | 12.104 | 11.132 | 23.613 |
| 100603 | SSIM | 0.432 | 0.521 | 0.516 | 0.356 | 0.288 | 0.373 | 0.211 | 0.867 |
| 100603 | IL-NIQE | 63.646 | 39.312 | 39.850 | 39.882 | 43.502 | 40.647 | 42.096 | 37.503 |
| 100603 | NIQE | 5.988 | 6.281 | 5.621 | 5.679 | 5.867 | 5.693 | 5.998 | 5.622 |
| 100605 | PSNR | 21.213 | 22.223 | 17.194 | 17.472 | 16.744 | 16.845 | 15.717 | 22.167 |
| 100605 | SSIM | 0.617 | 0.744 | 0.690 | 0.577 | 0.524 | 0.537 | 0.452 | 0.770 |
| 100605 | IL-NIQE | 85.565 | 67.611 | 66.661 | 67.624 | 68.777 | 65.319 | 70.876 | 51.919 |
| 100605 | NIQE | 7.207 | 6.815 | 6.130 | 5.906 | 6.278 | 6.314 | 6.433 | 5.4618 |

The third image processing pipeline is shown in line C of Figure 3. This pipeline cascades the low-light enhancement algorithm LIME [41] with an underwater image processing algorithm. The restoration results and the objective IQA scores are shown in Figure 11 and Table 4. In Figure 11, our algorithm exceeds all LIME + underwater enhancement and restoration combinations, although column D (LIME + GBdehazingRCorrection) achieves a comparable effect in terms of color and brightness recovery. In terms of objective IQA, our algorithm again exceeds all of the LIME + underwater combinations.
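The core idea of LIME [41] is to estimate an illumination map, initially the per-pixel maximum over the RGB channels, refine it, and divide it out of the input. The rough sketch below substitutes a fixed gamma for LIME's structure-aware refinement (the `gamma` and `eps` values are our assumptions), so it only illustrates the front end of the method, not the full optimization:

```python
import numpy as np

def lime_initial_illumination(img, gamma=0.8, eps=1e-3):
    """Coarse LIME-style enhancement: estimate the illumination map as the
    per-pixel maximum over the RGB channels, apply a gamma in place of LIME's
    structure-aware refinement, and divide it out.
    img: float RGB array with values in [0, 1]."""
    illum = img.max(axis=2)                    # initial illumination estimate
    illum = np.clip(illum, eps, 1.0) ** gamma  # crude stand-in for refinement
    return np.clip(img / illum[..., None], 0.0, 1.0)
```

A dark input is brightened because its illumination estimate is small, which matches the role LIME plays as the first stage of the pipeline in line C.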


Table 4: Objective IQA results for the “LIME + underwater algorithm” pipelines.

| Image ID | Assessment method | (A) | (B) | (C) | (D) | (E) | (F) | (H) Ours |
|----------|-------------------|-----|-----|-----|-----|-----|-----|----------|
| 100600 | PSNR | 17.558 | 21.831 | 19.104 | 22.417 | 20.759 | 20.914 | 27.036 |
| 100600 | SSIM | 0.532 | 0.697 | 0.445 | 0.717 | 0.578 | 0.592 | 0.840 |
| 100600 | IL-NIQE | 68.850 | 61.838 | 68.343 | 66.967 | 61.455 | 60.936 | 39.235 |
| 100600 | NIQE | 7.200 | 6.909 | 6.814 | 7.166 | 6.131 | 6.135 | 5.756 |
| 100602 | PSNR | 12.693 | 20.232 | 18.472 | 20.502 | 18.872 | 18.844 | 24.579 |
| 100602 | SSIM | 0.423 | 0.671 | 0.541 | 0.622 | 0.551 | 0.544 | 0.842 |
| 100602 | IL-NIQE | 69.912 | 59.435 | 60.962 | 69.759 | 61.744 | 61.317 | 32.961 |
| 100602 | NIQE | 7.283 | 8.054 | 6.885 | 6.832 | 7.036 | 6.952 | 5.9664 |
| 100603 | PSNR | 14.584 | 18.765 | 17.148 | 20.580 | 18.396 | 18.012 | 23.613 |
| 100603 | SSIM | 0.562 | 0.690 | 0.584 | 0.701 | 0.628 | 0.617 | 0.867 |
| 100603 | IL-NIQE | 64.945 | 59.374 | 54.171 | 65.780 | 51.560 | 47.943 | 37.503 |
| 100603 | NIQE | 6.503 | 6.711 | 6.360 | 6.946 | 6.484 | 6.455 | 5.622 |
| 100605 | PSNR | 15.408 | 16.461 | 15.450 | 19.119 | 17.667 | 17.406 | 22.167 |
| 100605 | SSIM | 0.531 | 0.569 | 0.483 | 0.629 | 0.576 | 0.568 | 0.770 |
| 100605 | IL-NIQE | 91.358 | 81.761 | 81.389 | 85.474 | 78.275 | 73.778 | 51.919 |
| 100605 | NIQE | 7.819 | 7.652 | 6.863 | 6.792 | 8.183 | 7.738 | 5.4618 |

The pipeline based on deep learning is shown in line D of Figure 3. In the field of computer vision, most current deep learning methods are applied to images taken in air. Some deep learning methods have been used for underwater image recovery, but few can handle blurred and dark underwater images. Deep underwater image enhancement [36] is selected as a comparison algorithm. The restoration results and the objective IQA scores are shown in Figure 12 and Table 5. It can be seen that columns B, E, and F show whiteness, column A has low clarity and color cast, columns D, E, and F cannot restore the background, and column C has rough particles on smooth surfaces. Because underwater image characteristics are more complex than those of aerial images, many algorithms lack sufficient robustness. The objective IQA results show that our algorithm exceeds all of the compared deep learning methods.


Table 5: Objective IQA results for the deep-learning-based methods.

| Image ID | Assessment method | (A) | (B) | (C) | (D) | (E) | (F) | (G) Ours |
|----------|-------------------|-----|-----|-----|-----|-----|-----|----------|
| 100600 | PSNR | 20.947 | 22.403 | 23.485 | 22.829 | 19.190 | 18.724 | 27.036 |
| 100600 | SSIM | 0.646 | 0.781 | 0.780 | 0.783 | 0.744 | 0.741 | 0.840 |
| 100600 | IL-NIQE | 58.020 | 48.487 | 54.521 | 59.034 | 81.410 | 72.060 | 39.235 |
| 100600 | NIQE | 7.334 | 6.732 | 6.903 | 7.652 | 10.062 | 12.704 | 5.756 |
| 100602 | PSNR | 21.935 | 16.091 | 24.409 | 21.828 | 9.366 | 18.593 | 24.579 |
| 100602 | SSIM | 0.757 | 0.655 | 0.813 | 0.749 | 0.470 | 0.736 | 0.842 |
| 100602 | IL-NIQE | 66.381 | 50.995 | 62.061 | 62.318 | 68.022 | 51.325 | 32.961 |
| 100602 | NIQE | 7.331 | 7.222 | 6.996 | 7.698 | 6.899 | 6.638 | 5.9664 |
| 100603 | PSNR | 17.440 | 20.641 | 18.465 | 17.364 | 12.476 | 16.236 | 23.613 |
| 100603 | SSIM | 0.643 | 0.807 | 0.736 | 0.651 | 0.615 | 0.761 | 0.867 |
| 100603 | IL-NIQE | 47.975 | 44.817 | 45.509 | 53.039 | 94.663 | 68.420 | 37.503 |
| 100603 | NIQE | 6.812 | 7.003 | 6.410 | 7.104 | 7.584 | 5.852 | 5.622 |
| 100605 | PSNR | 16.170 | 20.454 | 18.611 | 16.791 | 17.272 | 16.095 | 22.167 |
| 100605 | SSIM | 0.551 | 0.696 | 0.671 | 0.607 | 0.638 | 0.640 | 0.770 |
| 100605 | IL-NIQE | 71.732 | 44.950 | 58.549 | 73.383 | 100.597 | 95.610 | 51.919 |
| 100605 | NIQE | 6.848 | 6.373 | 6.163 | 7.693 | 8.905 | 8.598 | 5.4618 |

4.1. Implementation Details

In all of our experiments, we used the L1 loss and the Adam optimizer [59]. The network was trained only on images taken with a Nikon D700 camera. The initial learning rate was set to 0.0001 and decayed according to a cosine schedule. The weight decay was set to 0.00001 with dampening set to 0. Based on the practical effect observed in the experiments, the number of training epochs was set between 3000 and 5000. Our implementation is based on the Torch deep learning framework.
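The cosine learning-rate schedule described above can be written down directly. The sketch below assumes the rate decays from 1e-4 to zero over the full run; the endpoints are our assumption, since only the cosine shape and the initial rate are stated:

```python
import math

def cosine_lr(step, total_steps, lr0=1e-4, lr_min=0.0):
    """Cosine-decayed learning rate: starts at lr0 (1e-4 above) and
    decays smoothly to lr_min as step approaches total_steps."""
    cos = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr0 - lr_min) * cos
```

At the midpoint of training this yields exactly half the initial rate, and the rate flattens out near both ends of the run, which is why cosine schedules are a common alternative to stepwise decay.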

5. Discussion

In this work, we presented a new method for collecting underwater image pairs that can be used for future machine learning research. With the help of a high-quality dataset, our algorithm achieves inspiring results in restoring extremely low-light underwater images.

In future work, we plan to improve on the following points: (1) improved U-Net architectures can be used to boost performance; (2) the generalization of the method, for example, to different depths and turbidities, still needs study; and (3) the experimental equipment for underwater image acquisition can be extended from a water tank to a pool.

Our research has important theoretical value for underwater robots, surveillance, and many other areas.

6. Conclusions

To see in dark and blurred underwater environments, we propose a new method for collecting underwater image pairs using two tanks filled with water of different turbidity under different environmental lighting. Experiments show that our approach to collecting underwater images is simple and highly effective. We demonstrate the efficacy of our algorithm for blurred and dark underwater image restoration via supervised learning, and the experiments show that it achieves inspiring results.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors’ Contributions

All authors contributed to the paper. Huigang Wang carried out project administration; Yifeng Xu carried out conceptualization and methodology, investigation, and writing; Garth Douglas Cooper carried out review and editing; Shaowei Rong carried out data curation; and Weitao Sun carried out software.

Acknowledgments

This research was funded by the National Science Foundation of China (Grant no. 61571369). It was also funded by Zhejiang Provincial Natural Science Foundation (ZJNSF) (Grant no. LY18F010018) and by the 111 Project under Grant no. B18041.

References

1. J. S. Jaffe, “Computer modeling and the design of optimal underwater imaging systems,” IEEE Journal of Oceanic Engineering, vol. 15, no. 2, pp. 101–111, 1990.
2. E. Trucco and A. T. Olmos-Antillon, “Self-tuning underwater image restoration,” IEEE Journal of Oceanic Engineering, vol. 31, no. 2, pp. 511–519, 2006.
3. Y. Wang, W. Song, G. Fortino, L. Z. Qi, W. Zhang, and A. Liotta, “An experimental-based review of image enhancement and image restoration methods for underwater imaging,” IEEE Access, vol. 7, 2019.
4. D. Akkaynak and T. Treibitz, “A revised underwater image formation model,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, June 2018.
5. C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Transactions on Image Processing, vol. 28, no. 2, 2013.
6. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and P. Bekaert, “Color balance and fusion for underwater image enhancement,” IEEE Transactions on Image Processing, vol. 27, no. 1, 2018.
7. S. Wang, J. Zheng, H. M. Hu, and B. Li, “Naturalness preserved enhancement algorithm for non-uniform illumination images,” IEEE Transactions on Image Processing, vol. 22, no. 9, 2013.
8. J. Y. Chiang, Y. C. Chen, and Y. F. Chen, “Underwater image enhancement: using wavelength compensation and image dehazing (WCID),” in Lecture Notes in Computer Science, Springer, Berlin, Germany, 2011.
9. K. Gu, G. Zhai, X. Yang, W. Zhang, and C. W. Chen, “Automatic contrast enhancement technology with saliency preservation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 9, 2015.
10. R. Hummel, “Image enhancement by histogram transformation,” Computer Graphics and Image Processing, vol. 6, no. 2, 1977.
11. K. Zuiderveld, “Contrast limited adaptive histogram equalization,” in Graphics Gems, Elsevier, Amsterdam, Netherlands, 1994.
12. G. Deng, “A generalized unsharp masking algorithm,” IEEE Transactions on Image Processing, vol. 20, no. 5, 2011.
13. L. A. Torres-Méndez and G. Dudek, “Color correction of underwater images for aquatic robot inspection,” in Lecture Notes in Computer Science, Springer, Berlin, Germany, 2005.
14. K. Iqbal, R. A. Salam, A. Osman, and A. Z. Talib, “Underwater image enhancement using an integrated colour model,” IAENG International Journal of Computer Science, vol. 34, no. 2, 2007.
15. D. Huang, Y. Wang, W. Song, J. Sequeira, and S. Mavromatis, “Shallow-water image enhancement using relative global histogram stretching based on adaptive parameter acquisition,” in Lecture Notes in Computer Science, Springer, Berlin, Germany, 2018.
16. R. N. Bracewell, The Fourier Transform and Its Applications, McGraw-Hill, New York, NY, USA, 1986.
17. C. J. Prabhakar and P. U. Praveen Kumar, “Underwater image denoising using adaptive wavelet subband thresholding,” in Proceedings of the 2010 International Conference on Signal and Image Processing (ICSIP 2010), Chennai, India, December 2010.
18. A. Khan, S. S. A. Ali, A. S. Malik, A. Anwer, and F. Meriaudeau, “Underwater image enhancement by wavelet based fusion,” in Proceedings of the 2016 IEEE 6th International Conference on Underwater System Technology: Theory and Applications (USYS 2016), Penang, Malaysia, December 2017.
19. S. Vasamsetti, N. Mittal, B. C. Neelapu, and H. K. Sardana, “Wavelet based perspective on variational enhancement technique for underwater imagery,” Ocean Engineering, vol. 141, 2017.
20. M. J. Shensa, “The discrete wavelet transform: wedding the à trous and Mallat algorithms,” IEEE Transactions on Signal Processing, vol. 40, no. 10, 1992.
21. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, 2011.
22. P. Drews Jr., E. Do Nascimento, F. Moraes, S. Botelho, and M. Campos, “Transmission estimation in underwater single images,” in Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, December 2013.
23. A. Galdran, D. Pardo, A. Picón, and A. Alvarez-Gila, “Automatic Red-Channel underwater image restoration,” Journal of Visual Communication and Image Representation, vol. 26, 2015.
24. Y. T. Peng and P. C. Cosman, “Underwater image restoration based on image blurriness and light absorption,” IEEE Transactions on Image Processing, vol. 26, no. 4, 2017.
25. M. D. Wilson, “Support vector machines,” in Encyclopedia of Ecology, Elsevier, Amsterdam, Netherlands, 2008.
26. V. Mitra, C. J. Wang, and S. Banerjee, “Lidar detection of underwater objects using a neuro-SVM-based architecture,” IEEE Transactions on Neural Networks, vol. 17, no. 3, 2006.
27. Z. Wu, H. R. Karimi, and C. Dang, “A deterministic annealing neural network algorithm for the minimum concave cost transportation problem,” IEEE Transactions on Neural Networks and Learning Systems, 2019.
28. Z. Wu, B. Jiang, and H. R. Karimi, “A logarithmic descent direction algorithm for the quadratic knapsack problem,” Applied Mathematics and Computation, vol. 369, 2020.
29. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, 2015.
30. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Advances in Neural Information Processing Systems, vol. 25, pp. 1–9, 2012.
31. I. Goodfellow, J. Pouget-Abadie, and M. Mirza, “Generative adversarial networks,” pp. 1–9, 2014.
32. A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, “Phoneme recognition using time-delay neural networks,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, 1989.
33. X. Sun, L. Liu, Q. Li, J. Dong, E. Lima, and R. Yin, “Deep pixel-to-pixel network for underwater image enhancement and restoration,” IET Image Processing, vol. 13, no. 2, 2019.
34. C. Fabbri, M. J. Islam, and J. Sattar, “Enhancing underwater imagery using generative adversarial networks,” in Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, Australia, May 2018.
35. I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, “Improved training of Wasserstein GANs,” pp. 1–19, 2017, http://arxiv.org/abs/1704.00028.
36. S. Anwar, C. Li, and F. Porikli, “Deep underwater image enhancement,” 2018, http://arxiv.org/abs/1807.03528.
37. Y. Guo, H. Li, and P. Zhuang, “Underwater image enhancement using a multiscale dense generative adversarial network,” IEEE Journal of Oceanic Engineering, vol. 35, no. 3, pp. 862–870, 2019.
38. Y. Wang, J. Zhang, Y. Cao, and Z. Wang, “A deep CNN method for underwater image enhancement,” in Proceedings of the International Conference on Image Processing (ICIP), Beijing, China, September 2018.
39. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, 2007.
40. M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo, “Structure-revealing low-light image enhancement via Robust Retinex model,” IEEE Transactions on Image Processing, vol. 27, no. 6, 2018.
41. X. Guo, Y. Li, and H. Ling, “LIME: low-light image enhancement via illumination map estimation,” IEEE Transactions on Image Processing, vol. 26, no. 2, 2017.
42. Y. LeCun, B. Boser, J. S. Denker et al., “Backpropagation applied to handwritten zip code recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551, 1989.
43. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, 2015.
44. C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to see in the dark,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, June 2018.
45. Y. Le Cun and F. Fogelman-Soulié, “Modèles connexionnistes de l’apprentissage,” Intellectica. Revue de l’Association pour la Recherche Cognitive, vol. 2, no. 1, 1987.
46. G. E. Hinton and R. S. Zemel, “Autoencoders, minimum description length and Helmholtz free energy,” Advances in Neural Information Processing Systems, vol. 6, no. 3, 1994.
47. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, 1986.
48. H. Israël and F. Kasten, “Koschmieders Theorie der horizontalen Sichtweite,” in Die Sichtweite im Nebel und die Möglichkeiten ihrer künstlichen Beeinflussung, Springer, Berlin, Germany, 1959.
49. S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Hilton Head Island, SC, USA, June 2000.
50. W. Hou, “A simple underwater imaging model,” Optics Letters, vol. 34, no. 17, 2009.
51. D. Berman, T. Treibitz, and S. Avidan, “Diving into haze-lines: color restoration of underwater images,” in Proceedings of the British Machine Vision Conference (BMVC), London, UK, 2017.
52. S. C. Huang, F. C. Cheng, and Y. S. Chiu, “Efficient contrast enhancement using adaptive gamma correction with weighting distribution,” IEEE Transactions on Image Processing, vol. 22, no. 3, 2013.
53. A. S. Abdul Ghani and N. A. Mat Isa, “Underwater image quality enhancement through composition of dual-intensity images and Rayleigh-stretching,” SpringerPlus, vol. 3, no. 1, 2014.
54. K. Iqbal, M. Odetayo, A. James, R. A. Salam, and A. Z. H. Talib, “Enhancing the low quality images using unsupervised colour correction method,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Istanbul, Turkey, October 2010.
55. L. Chao and M. Wang, “Removal of water scattering,” in Proceedings of the 2010 International Conference on Computer Engineering and Technology (ICCET 2010), Chengdu, China, April 2010.
56. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, 2004.
57. L. Zhang, L. Zhang, and A. C. Bovik, “A feature-enriched completely blind image quality evaluator,” IEEE Transactions on Image Processing, vol. 24, no. 8, 2015.
58. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “completely blind” image quality analyzer,” IEEE Signal Processing Letters, vol. 20, no. 3, 2013.
59. D. P. Kingma and J. L. Ba, “Adam: a method for stochastic optimization,” in Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, May 2015.

Copyright © 2020 Yifeng Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

