Mathematical Problems in Engineering
Volume 2016, Article ID 3141478, 15 pages
http://dx.doi.org/10.1155/2016/3141478
Research Article

Restoration and Enhancement of Underwater Images Based on Bright Channel Prior

School of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China

Received 2 June 2016; Revised 16 August 2016; Accepted 5 September 2016

Academic Editor: Jinyang Liang

Copyright © 2016 Yakun Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper proposes a new method of underwater image restoration and enhancement inspired by the dark channel prior from the image dehazing field. Firstly, we propose the bright channel prior for the underwater environment: by estimating and rectifying the bright channel image, estimating the atmospheric light, and estimating and refining the transmittance image, the underwater images are restored. Secondly, in order to rectify the color distortion, the restored images are equalized by using the deduced histogram equalization. The experimental results show that the proposed method can enhance the quality of underwater images effectively.

1. Introduction

For the past several years, more and more scholars' attention has been drawn to the field of underwater image enhancement and restoration. As a result of scattering and absorption, underwater images always suffer from low contrast, blur, and color distortion, so underwater image restoration and enhancement has been a challenging field. Figure 1(a) shows some pictures captured in the underwater environment, in which the quality decline is obvious.

Figure 1: (a) Degraded underwater image; (b) the improved result of our method.

High-quality images are needed in many fields that use underwater images to achieve specific goals, such as underwater object tracking, 3D reconstruction of underwater objects, underwater archaeology, underwater biological research, and sea floor exploration.

In order to obtain high-quality images, scholars have proposed different approaches, which can be sorted into two categories: image restoration and image enhancement. Image enhancement technology does not consider the physical model and improves image quality simply by image processing methods. Image restoration technology is based on the physical model of image formation, but it is not good at dealing with color distortion. Because the two technologies have their own advantages and disadvantages, in this paper we combine them and obtain satisfying results.

For the image dehazing problem in air, some scholars in [1, 2] proposed methods that needed several pictures taken under different weather conditions to obtain a fog-free image. Recently more and more researchers have begun to focus on single image dehazing [3-6]. Tan in [3] dehazed images by maximizing the local contrast of the restored images; the results were satisfying, but the saturation suffered from over-enhancement. Fattal in [4] used a single image to obtain a transmittance image and used it to dehaze the image. He et al. in [5] proposed the dark channel prior to acquire the transmittance image: they found that one of the three channels (R/G/B) of images without fog or sky areas normally has low intensity, and that once sky or foggy areas exist in an image, this phenomenon becomes invalid. From this statistical observation they proposed the dark channel prior. Ge et al. in [6] proposed a single image dehazing method based on a linear transformation. Li et al. [7] put forward a single image dehazing method which utilizes the change of detail prior.

For the underwater environment, by observing the relationship between the blurring degree and the imaging distance, Peng et al. [8] applied a blurriness-based estimate to the imaging formation model, estimated the distance between the scene and the camera, and then removed the fog. Ancuti et al. [9] utilized an image fusion method to remove the fog; up to now this method may be the best in terms of visual perception.

Considering the features of the underwater environment, Carlevaris-Bianco et al. in [10] used the notable attenuation differences among the color channels to estimate the depth of the scene. Galdran et al. in [11] put forward an automatic red channel underwater image restoration method, which can be regarded as a variant of the dark channel prior method. Wen et al. in [12] used the blue and green channels, without the red channel, to redefine a new dark channel that fits underwater images; this slightly modified dark channel prior was successfully applied to underwater images.

In this paper, we propose a new restoration and enhancement method for underwater images: underwater image restoration and enhancement based on the bright channel prior. Our method can be regarded as an improved version of the previously reported dark channel prior. Another bright channel prior based method, different from the one proposed in this manuscript, has been reported in [13]; the bright channel prior in [13] seems to be the opposite of the dark channel prior in [5]. The flow chart of our method is shown in Figure 2. Experimental results show that the proposed method is valid for images of different scenes and can correct the color distortion to a certain degree. In our experiments, all parameters were the same for the different images.

Figure 2: Flow chart of the proposed algorithm.

The structure of this paper is as follows. In Section 2, the proposed image restoration and enhancement method is described, including bright channel image acquisition, maximum color difference image acquisition, bright channel image rectification, atmospheric light estimation, initial transmittance image acquisition, image restoration, and the deduced histogram equalization. Section 3 mainly analyzes the validity of the proposed method and presents some contrast experiments. Section 4 is the conclusion.

2. Underwater Image Restoration and Enhancement Based on Bright Channel Prior

2.1. Underwater Imaging Formation

Many image dehazing methods use the model proposed by Duntley et al. [14]. The simplified underwater model proposed by Carlevaris-Bianco in [10] is the same as the standard Duntley model used to remove fog in air, so we use the Duntley model to restore underwater images:

I(x) = J(x)t(x) + A(1 − t(x)),  t(x) = e^(−βd(x)),  (1)

where I(x) is the observed intensity, that is, the input degraded color image; t(x) is the transmission, which describes the portion of the light that reaches the observer without being scattered or absorbed; β is the attenuation coefficient of the medium, and d(x) is the distance between the object and the camera; A is the atmospheric light, physically related to the color of the haze; and J(x) is the scene radiance, that is, the haze-free image. As we can see, if we know A and t(x), then we can solve J(x).
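As a quick numeric illustration of this model, the sketch below synthesizes a degraded image from a haze-free one; the values of β, the depth map, and the variable names are illustrative, not from the paper.

```python
# A minimal numeric sketch of the Duntley imaging model; values are illustrative.
import numpy as np

def transmission(beta, d):
    """t(x) = exp(-beta * d(x)): transmission along a path of length d."""
    return np.exp(-beta * d)

def duntley_model(J, t, A):
    """I(x) = J(x) t(x) + A (1 - t(x)): direct attenuation plus veiling light."""
    return J * t[..., None] + A * (1.0 - t[..., None])

d = np.array([[1.0, 2.0], [3.0, 4.0]])   # hypothetical distances (m)
t = transmission(0.5, d)
J = np.full((2, 2, 3), 0.8)              # haze-free scene radiance
A = np.array([0.7, 0.8, 0.9])            # veiling (atmospheric) light
I = duntley_model(J, t, A)
```

Note that as d grows, t tends to 0 and the observed color I tends to the atmospheric light A, which is exactly why distant scenes take on the haze color.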

2.2. Estimate the Transmittance Image through Dark Channel Prior

By observation, He et al. in [5] found that one of the three channels (R/G/B) of a local area in images without fog or sky areas had low intensity, which means the light intensity is a small value. Once sky or foggy areas exist in an image, this phenomenon becomes invalid. For one image I the dark channel is defined as follows in [5]:

I^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} I^c(y) ),  (2)

where c means the three channels of each image and Ω(x) denotes a window block centered at pixel x, and the dark channel theory is shown as follows:

J^dark(x) → 0.  (3)

There are three factors to explain why the dark channel prior is valid: (a) shadows of cars, buildings, trees, and other objects have low intensity; (b) objects with bright colors always have one low-intensity channel; (c) dark objects or surfaces naturally have low intensity. In one word, shadows and colors are ubiquitous in natural scenes, so the dark channel images of these scenes will be very dark.

By using the dark channel prior, the initial transmission can be solved by

t(x) = 1 − min_{y∈Ω(x)} min_{c∈{r,g,b}} ( I^c(y) / A^c ).  (4)
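The dark channel and the resulting transmission estimate can be sketched as follows; the patch size and test values are illustrative, assuming an RGB image with values in [0, 1].

```python
# Sketch of the dark channel and the transmission estimate of He et al. [5];
# patch size and variable names are illustrative, image values in [0, 1].
import numpy as np

def dark_channel(img, patch=3):
    """Min over channels, then min over a patch x patch neighborhood."""
    h, w, _ = img.shape
    pixel_min = img.min(axis=2)
    pad = patch // 2
    p = np.pad(pixel_min, pad, mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + patch, j:j + patch].min()
    return out

def transmission_dark(img, A, patch=3):
    """t(x) = 1 - min_patch min_c (I^c(y) / A^c)."""
    return 1.0 - dark_channel(img / A.reshape(1, 1, 3), patch)
```

For a haze-free patch the inner minimum is near 0, so t is near 1; for a hazy patch the normalized minimum approaches 1 and t approaches 0, matching the prior's reasoning.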

Figure 3 shows several degraded underwater images and their dark channel images. From Figure 3 we can see that the dark channel images of underwater images are not the same as those in air. In air, the sky areas or distant scenes always have a bright dark channel, but this is not the case for the dark channel images of underwater images. So we can come to the conclusion that the dark channel prior fails to work for degraded underwater images.

Figure 3: (a) Degraded underwater images; (b) dark channels of the degraded underwater images.

The reason why the dark channel prior fails to work for degraded underwater images is as follows. When the weather is foggy, the atmospheric particle size is larger than the wavelengths of visible light, so the scattering effects on visible light of different wavelengths are the same and the image tends to be white or gray. In water, scattering is likewise nearly the same for different wavelengths, but absorption behaves differently from that in air: the absorption effect is much stronger than in air, it is more obvious than the scattering effect, and it becomes more serious as the wavelength increases. So images captured in water are usually blue or green, and the red channel intensity is low over the whole picture. This means the dark channel image of a degraded underwater image does not change with the imaging distance, and the transmittance image is not related to the dark channel image. In this situation the dark channel prior no longer makes sense: whether the image is degraded or not, there is almost always one color channel with low intensity (usually the red one) [11]. On the contrary, degraded images in air do not suffer from color distortion, so the dark channel prior works for them. In one word, the dark channel prior fails on degraded underwater images because of the low intensity of the red channel.

2.3. Image Restoration Based on Bright Channel Prior
2.3.1. Obtain Bright Channel Image

Assume that the atmospheric light A is known (we will introduce how to estimate A in Section 2.3.4). Splitting (1) into color channels and deforming the red-channel equation, we can obtain the following:

1 − I_r(x) = (1 − J_r(x)) t(x) + (1 − A_r)(1 − t(x)),
I_g(x) = J_g(x) t(x) + A_g (1 − t(x)),
I_b(x) = J_b(x) t(x) + A_b (1 − t(x)),  (5)

where I_r, I_g, and I_b are the red, green, and blue channel images of the degraded underwater image, respectively; J_r, J_g, and J_b are the red, green, and blue channel images of the nondegraded underwater image, respectively; and A_r, A_g, and A_b are the atmospheric light values of the red, green, and blue channels of the degraded underwater image, respectively.

Equation (5) is completely equal to (1). We combine the three channel images (I_g, I_b, and the left term of (5), 1 − I_r) as the new degraded image I'(x), and we call it the half-revision image. Regard (1 − J_r, J_g, J_b) as the new nondegraded image J'(x); A' = (1 − A_r, A_g, A_b) is the new atmospheric light; then we can get the new imaging formation model:

I'(x) = J'(x) t(x) + A'(1 − t(x)).  (6)

Considering different color channels we deform (6) into

I'_c(x) = J'_c(x) t(x) + A'_c (1 − t(x)),  (7)

where c denotes the different color channels.
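One possible reading of the half-revision construction, assuming the deformation inverts only the (heavily attenuated) red channel against unit white while keeping the green and blue channels as they are; this reading, the RGB channel layout, and the names are assumptions, not the authors' code.

```python
# Sketch of forming the half-revision image I' and the new atmospheric light A',
# assuming Eq. (5) inverts only the red channel (I'_r = 1 - I_r); RGB layout
# and names are assumptions, values in [0, 1].
import numpy as np

def half_revision(img, A):
    I_half = img.copy()
    I_half[..., 0] = 1.0 - img[..., 0]   # invert the (weak) red channel
    A_half = A.copy()
    A_half[0] = 1.0 - A[0]
    return I_half, A_half

I = np.dstack([np.full((2, 2), 0.1),     # dim red, typical underwater
               np.full((2, 2), 0.6),
               np.full((2, 2), 0.7)])
I2, A2 = half_revision(I, np.array([0.1, 0.8, 0.9]))
```

With this construction, a scene whose red channel is dark gets a bright first channel in I', which is what lets a "bright channel" prior play the role the dark channel plays in air.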

Define the bright channel as follows:

I'^bright(x) = max_{c∈{r,g,b}} ( max_{y∈Ω(x)} I'_c(y) ).  (8)

Figures 4(a), 4(b), 4(c), and 4(d) show that the bright channel images of nondegraded underwater images always have high intensity, while in degraded underwater images the bright channel intensity of the near scene is high and that of the distant scene is low, especially when pure water areas exist in the scene. So we suppose that the bright channel intensity of underwater images without pure water areas and distant scenes is approximately 1. We call this the bright channel prior:

J'^bright(x) = max_{c∈{r,g,b}} ( max_{y∈Ω(x)} J'_c(y) ) ≈ 1.  (9)

Maximizing both sides of (7) in a local block gives

max_{c} max_{y∈Ω(x)} I'_c(y) = t(x) max_{c} max_{y∈Ω(x)} J'_c(y) + A'(1 − t(x)).  (10)

Depending on (8) and (9) we have the following:

max_{c} max_{y∈Ω(x)} I'_c(y) = I'^bright(x),  max_{c} max_{y∈Ω(x)} J'_c(y) = J'^bright(x) ≈ 1.  (11)

Bringing (11) into (10) we get

t(x) = ( I'^bright(x) − A' ) / ( 1 − A' ).  (12)

If we know A', we can compute the initial transmission by using (12). Analyzing (12), we find the following: A' is a constant less than 1, 1/(1 − A') is a constant greater than 1, and t(x) and I'^bright(x) are in a linear proportional relationship.
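A sketch of the bright channel and the transmission it yields via (12); the patch size and the scalar value used for the new atmospheric light are illustrative.

```python
# Sketch of the bright channel and the transmission from Eq. (12);
# patch size and the scalar A' are illustrative, values in [0, 1].
import numpy as np

def bright_channel(img, patch=3):
    """Max over channels, then max over a patch x patch neighborhood."""
    h, w, _ = img.shape
    pixel_max = img.max(axis=2)
    pad = patch // 2
    p = np.pad(pixel_max, pad, mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + patch, j:j + patch].max()
    return out

def transmission_bright(bright, A_prime):
    """t(x) = (I'^bright(x) - A') / (1 - A'), per Eq. (12)."""
    return (bright - A_prime) / (1.0 - A_prime)
```

A bright channel near 1 gives t near 1 (near scene); a bright channel near A' gives t near 0 (distant scene), mirroring the linear relationship noted above.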

Figure 4: (a) Clear underwater images; (b) the bright channel of clearly underwater images; (c) degraded underwater images; (d) the bright channels of degraded underwater images.
2.3.2. Generate the Maximum Color Difference Image

We know the red light attenuates fastest while the green and blue light attenuate more slowly, so the color distortion becomes more serious as the distance increases. We define the maximum color difference image as follows:

D(x) = 1 − max( I_{c_max}(x) − I_{c_mid}(x), I_{c_mid}(x) − I_{c_min}(x) ),  (13)

where D(x) is the maximum color difference image, x is each pixel, c_max is the channel whose intensity is the maximum among the three channels, c_mid is the channel whose intensity is the medium among the three channels, and c_min is the channel whose intensity is the minimum among the three channels; the max operation means choosing the maximum one from all candidates. Figure 5 shows the maximum color difference images obtained by using (13); we can see that the further the imaging distance, the more obvious the difference between the channels. The value of the maximum color difference image is therefore inversely proportional to the imaging distance.
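One plausible construction of the maximum color difference image can be sketched as follows, assuming per-pixel sorting of the channel values and an inversion so that larger channel differences (more distant scenes) give smaller values; this construction is an assumption about the formula in (13), not a verified transcription.

```python
# Sketch of a maximum color difference image: per pixel, sort the three
# channel values and subtract the larger adjacent difference from 1.
# The per-pixel sorting and the inversion are assumptions.
import numpy as np

def max_color_difference(img):
    v = np.sort(img, axis=2)   # v[...,0] <= v[...,1] <= v[...,2]
    diff = np.maximum(v[..., 2] - v[..., 1], v[..., 1] - v[..., 0])
    return 1.0 - diff
```

A nearby, well-balanced pixel has small channel gaps and a value near 1; a distant, strongly color-shifted pixel has a large gap and a small value, matching the inverse relation to imaging distance described above.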

Figure 5: The maximum color difference images.
2.3.3. Rectify the Bright Channel Image

From Section 2.3.1 we know the transmittance image is linearly proportional to the bright channel image, and from Section 2.3.2 we know the value of the maximum color difference image is inversely proportional to the imaging distance. Analyzing (12), we find that the transmission we get will be smaller than the real transmission, because the bright channel prior assumes that the bright channel of the nondegraded underwater image is approximately 1. In order to increase the stability, we rectify the bright channel image using the maximum color difference image. The rectifying equation is

I'^bright_rect(x) = λ I'^bright(x) + (1 − λ) D(x),  (14)

where I'^bright_rect denotes the rectified bright channel image, I'^bright (from Section 2.3.1) denotes the nonrectified bright channel image of the degraded underwater image, D denotes the maximum color difference image, and λ is the proportional coefficient. In our experiments, we found that the bright channel should be the main part of the rectified bright channel image, so λ should be larger than 0.5; at the same time, we found that λ in (15) satisfies this requirement, so λ is obtained as follows:

λ = max( max( S ) ),  (15)

where S is the saturation channel image of the degraded underwater image in HSV color space. The first max operation picks out the maximum value of each column of the saturation image, and the second max operation computes the maximum of these column maxima. Figure 6 shows the restoration images obtained with the rectified and nonrectified bright channel images, respectively. In Figure 6(b) the image is overrestored, especially in the red rectangle. We can see that the rectification of the bright channel image can restrain the overrestoration.
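The rectification step can be sketched as a saturation-weighted blend of the bright channel and the maximum color difference image; the blend form, the in-line saturation computation, and the names are assumptions.

```python
# Sketch of rectifying the bright channel with the maximum color difference
# image, weighted by the global maximum of the HSV saturation channel.
import numpy as np

def saturation(img):
    """HSV saturation: (max - min) / max per pixel (0 where max is 0)."""
    cmax = img.max(axis=2)
    cmin = img.min(axis=2)
    return np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-12), 0.0)

def rectify_bright(bright, mcd, img):
    # max of per-column maxima equals the global max of the saturation image
    lam = saturation(img).max()
    return lam * bright + (1.0 - lam) * mcd

img = np.dstack([np.full((2, 2), 1.0),
                 np.full((2, 2), 0.2),
                 np.full((2, 2), 0.2)])
rect = rectify_bright(np.full((2, 2), 0.9), np.full((2, 2), 0.4), img)
```

The paper's requirement that the coefficient exceed 0.5 is an empirical observation about underwater images (which tend to be strongly saturated); this sketch does not enforce it.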

Figure 6: (a) The restored result with the rectification of the bright channel; (b) the restored result without the rectification of the bright channel.
2.3.4. Estimate the Atmospheric Light

In the previous sections, we assumed the atmospheric light to be known, but in practice it must also be estimated. In Section 2.3.1, we obtained the bright channel image of the degraded underwater image; in this section we use it to estimate the atmospheric light.

Firstly, we use the gray image of the original degraded underwater image to produce the variance image (V). For each pixel in the gray image we compute its variance within a block which centers at this pixel point. The variance of each pixel in one block shows the evenness of this block.

Secondly, we pick out the top one percent darkest pixels in the bright channel. These pixels are usually most haze-opaque. Among these pixels, the pixel with the lowest value in the variance image V is selected as the atmospheric light. These pixels are in the red rectangle in Figure 7.
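The two steps above can be sketched as follows; the patch size for the variance image and the tie-breaking among candidates are illustrative choices.

```python
# Sketch of the atmospheric light estimate: among the top 1% darkest
# bright-channel pixels, pick the one with the lowest local variance in the
# gray image; patch size and tie-breaking are illustrative.
import numpy as np

def local_variance(gray, patch=3):
    h, w = gray.shape
    pad = patch // 2
    p = np.pad(gray, pad, mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + patch, j:j + patch].var()
    return out

def estimate_atmospheric_light(img, gray, bright, patch=3):
    var = local_variance(gray, patch).ravel()
    flat = bright.ravel()
    k = max(1, flat.size // 100)          # top 1% darkest bright-channel pixels
    candidates = np.argsort(flat)[:k]
    pick = candidates[np.argmin(var[candidates])]
    i, j = np.unravel_index(pick, bright.shape)
    return img[i, j]                       # per-channel atmospheric light
```

Preferring the candidate with the lowest local variance favors a smooth, haze-opaque region over a dark object with texture, which is the stated intent of the variance image.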

Figure 7: The estimated atmospheric light point is the white point in the red rectangle.
2.3.5. Compute and Refine Transmittance Image

After obtaining the rectified bright channel image and the atmospheric light, we can compute the initial transmittance image of each color channel. The computing equation is

t_c(x) = ( I'^bright_rect(x) − A'_c ) / ( 1 − A'_c ),  (16)

where c denotes the different color channels, I'^bright_rect denotes the rectified bright channel image, and A'_c denotes the atmospheric light of each channel. After computing the transmittance image of each channel, we take the average of the three transmittance images as the initial transmittance image.
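A minimal sketch of the per-channel transmission and the averaging step; the atmospheric light values used below are illustrative.

```python
# Per-channel transmission from the rectified bright channel, then the mean
# of the three as the initial transmittance image; A' values are illustrative.
import numpy as np

def initial_transmission(bright_rect, A_prime):
    t_c = (bright_rect[..., None] - A_prime) / (1.0 - A_prime)  # (h, w, 3)
    return t_c.mean(axis=2)
```

Averaging the three channel estimates trades per-channel accuracy for robustness, which is consistent with the later remark that using a single shared transmittance leaves some color distortion to be fixed by equalization.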

Figure 8(a) shows the initial transmission we obtained. The main problems are halos and block artifacts, the same as in the transmittance image obtained in [5]. So we use the gray image of the original degraded underwater image as the guide image and the initial transmittance image as the input image to perform the guided filter in [15]. We then obtain the final transmittance image; Figure 8(b) shows the refined transmittance image.
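For reference, a compact gray-guide guided filter in the spirit of He et al. [15]; the box mean uses plain loops for clarity, and the radius r and regularizer eps are illustrative.

```python
# Gray-guide guided filter: fit q = a*guide + b per window, then smooth a, b.
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window with edge padding."""
    h, w = x.shape
    p = np.pad(x, r, mode="edge")
    k = 2 * r + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def guided_filter(guide, src, r=2, eps=1e-3):
    mean_I = box_mean(guide, r)
    mean_p = box_mean(src, r)
    cov_Ip = box_mean(guide * src, r) - mean_I * mean_p
    var_I = box_mean(guide * guide, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_mean(a, r) * guide + box_mean(b, r)
```

Because the output is locally a linear function of the guide, edges of the gray image transfer into the refined transmittance image, which is what suppresses the halos and block artifacts.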

Figure 8: (a) The initial transmittance image; (b) the final transmittance image after guided image filter.
2.3.6. Restore and Enhance Underwater Image

After obtaining the transmittance image and the atmospheric light, we can obtain the restoration image:

J_c(x) = ( I_c(x) − A_c ) / t(x) + A_c,  (17)

where c_max is the channel whose mean intensity is the maximum among the three channels, c_mid is the channel whose mean intensity is the medium among the three channels, and c_min is the channel whose mean intensity is the minimum among the three channels; t(x) is the transmittance image, I is the degraded underwater image, A is the atmospheric light, and J is the restored image.

Equation (17) shows that if the intensity of a pixel in one of the three color channels differs from the atmospheric light in that channel, the difference becomes larger after restoration. This increases the contrast of the image, but it can also bring some problems. In the maximum color channel, if the intensity of a point is larger than the atmospheric light value, the intensity of this point becomes much larger, which makes the color distortion problem more serious. In the minimum color channel, if the intensity of a point is smaller than the atmospheric light value, the intensity of this point becomes much smaller, which leads to the loss of detail in low-intensity regions. So, in the maximum color channel, only the pixels whose intensity values are smaller than the atmospheric light are computed by using (17); in the minimum color channel, only the pixels whose intensity values are larger than the atmospheric light are computed by using (17). Figure 9(a) shows the four restoration images whose degraded images have been introduced in the previous sections.
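The selective application of (17) described above can be sketched as follows; leaving the remaining pixels unchanged and deciding the channel order by mean intensity are assumptions, and the lower bound on t(x) is a common numerical safeguard, not from the paper.

```python
# Sketch of restoration via Eq. (17) with the selective per-channel rule:
# middle channel everywhere; max channel only below A; min channel only above A.
# Untouched pixels fall back to the input (an assumption).
import numpy as np

def restore(img, t, A, t_min=0.1):
    full = (img - A) / np.maximum(t, t_min)[..., None] + A   # Eq. (17)
    order = np.argsort(img.reshape(-1, 3).mean(axis=0))      # ascending means
    c_min, c_mid, c_max = order
    J = img.copy()
    J[..., c_mid] = full[..., c_mid]
    use_max = img[..., c_max] < A[c_max]                     # avoid boosting the cast
    J[..., c_max] = np.where(use_max, full[..., c_max], img[..., c_max])
    use_min = img[..., c_min] > A[c_min]                     # avoid crushing dark detail
    J[..., c_min] = np.where(use_min, full[..., c_min], img[..., c_min])
    return np.clip(J, 0.0, 1.0)

img = np.dstack([np.full((2, 2), 0.2),
                 np.full((2, 2), 0.5),
                 np.full((2, 2), 0.8)])
J = restore(img, np.full((2, 2), 0.5), np.array([0.3, 0.4, 0.6]))
```

In this example the dominant (blue) channel is already above its atmospheric light and the weak (red) channel already below, so both are left alone; only the middle channel gets stretched, illustrating how the rule curbs further color distortion.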

Figure 9: (a) The result with the bright channel restoration; (b) the result with the bright channel restoration and histogram equalization.

Estimating different transmittance images for the different channels precisely is a challenging problem, because it is very difficult to estimate a precise transmittance t_c for each channel. So many methods use the same transmittance image for all channels, which is not good at rectifying the color distortion. In this paper we use the deduced histogram equalization method to rectify the color distortion: we do not equalize the image from 0 to 255; we equalize the restored image from 0 to a specific value.

Firstly, we compute the average intensity value of each channel; then we multiply the three means by three coefficients (shown in Figure 10(d); the three coefficients may or may not be equal). Next, we compare each of the three products with 255 and choose the smaller one as the specific value for that channel. Figures 9(b) and 10 show the result of the histogram equalization method on the restoration images. We can see that the effect of histogram equalization on rectifying the color distortion is obvious. Figure 10(d) is the flow chart of the deduced histogram equalization method.
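A sketch of the deduced equalization on one 8-bit channel; the coefficient k and the lookup-table construction are assumptions.

```python
# Histogram-equalize an 8-bit channel into [0, upper], where
# upper = min(255, k * mean) rather than the usual 255; k is hypothetical.
import numpy as np

def deduced_equalize(ch, k=2.0):
    upper = min(255, int(k * ch.mean()))
    hist = np.bincount(ch.ravel(), minlength=256)
    cdf = hist.cumsum() / ch.size              # cumulative distribution in [0, 1]
    lut = np.round(cdf * upper).astype(np.uint8)
    return lut[ch]
```

Capping the output range at a mean-dependent value keeps a channel that was dim in the restored image from being stretched all the way to 255, which is how this variant avoids reintroducing a color cast.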

Figure 10: (a) The original degraded underwater image; (b) the result with the bright channel restoration; (c) the result with the bright channel restoration and histogram equalization; (d) the flow chart of the deduced histogram equalization method.

3. Testing and Analyzing Our Method

3.1. Experiment Result

There are many different scenes for underwater images, so it is difficult to test all of them. It is also very difficult to assess the performance of an underwater image restoration algorithm, since there is no ground truth or uniform measurement standard available. To compare the different methods, we pick out four underwater images from the website “https://github.com/agaldran/UnderWater.” It is known that underwater images always appear blue or green; these four images represent four different underwater scenes. The four images are shown in Figure 11.

Figure 11: Four underwater images: (a) wreck; (b) coral; (c) fish; (d) diver.

The color shift changes gradually from blue to green from Figures 11(a)–11(d). The wreck image shifts toward blue the most seriously among the four images; the diver image shifts toward green the most seriously.

Figures 12, 15, 18, and 21 are the visual results of the algorithms in [5, 9–11, 16]. In order to compare these algorithms, we pick out two image features as the evaluation standard: one is the number of Canny edge points; the other is the number of SIFT feature points. Figures 13, 16, 19, and 22 are the Canny edge point results of the algorithms in [5, 9–11, 16]. Figures 14, 17, 20, and 23 are the SIFT feature point results of the algorithms in [5, 9–11, 16]. The numbers of Canny edge points are listed in Table 1, and the numbers of SIFT feature points are listed in Table 2. Figures 24 and 25 are the bar charts of the Canny edge point counts and SIFT feature point counts, respectively.

Table 1: The Canny edge point counts of different algorithms.
Table 2: The SIFT feature point counts of different algorithms.
Figure 12: The visual results of different algorithms.
Figure 13: The Canny edge results of different algorithms.
Figure 14: The SIFT feature results of different algorithms.
Figure 15: The visual results of different algorithms.
Figure 16: The Canny edge results of different algorithms.
Figure 17: The SIFT feature results of different algorithms.
Figure 18: The visual results of different algorithms.
Figure 19: The Canny edge results of different algorithms.
Figure 20: The SIFT feature results of different algorithms.
Figure 21: The visual results of different algorithms.
Figure 22: The Canny edge results of different algorithms.
Figure 23: The SIFT feature results of different algorithms.
Figure 24: The Canny edge point counts of different algorithms.
Figure 25: The SIFT feature point counts of different algorithms.

Tables 1 and 2 show that the algorithms in [9, 11] and the proposed algorithm are better than the algorithms in [5, 10, 16] at improving the quality of degraded underwater images and increasing the visual perception. The algorithm in [5] can increase the SIFT feature point count of the four images, but it cannot increase the Canny edge point count, and its improvement of the visual effect is not obvious. Ref. [16] is the best at increasing the SIFT feature point count, but it cannot increase the Canny edge point count obviously, and its visual effect looks unnatural compared with the original image. Ref. [10] can increase both feature point counts, but the visual improvement is not obvious. Ref. [11] can improve the visual effect obviously but is not superior to the proposed method in increasing both feature point counts. Considering all these factors, [9] and the proposed algorithm have the best performance in improving the quality of underwater images. Ref. [9] is better than our algorithm at dealing with green-shifted images, but the proposed algorithm is better than [9] at dealing with blue-shifted images. Considering the experimental results, we can come to the conclusion that the proposed method can enhance the quality of underwater images effectively.

4. Conclusion

In this paper, a new method for the restoration and enhancement of underwater images was proposed. Our algorithm was inspired by the dark channel prior in image dehazing.

Firstly, we proposed the bright channel prior of underwater environment. By estimating and rectifying the bright channel image, estimating the atmospheric light, and estimating and refining the transmittance image, finally the underwater images were restored. Secondly, in order to rectify the color distortion further, we utilized the deduced histogram equalization to equalize the restoration images.

We carried out our experiments on four underwater images which represent four different scenes of the underwater environment. We compared our algorithm with five other algorithms by using the counts of two kinds of feature points (Canny edge points and SIFT feature points). The experimental results showed that the proposed algorithm was effective in improving the quality of degraded underwater images.

There are still some open issues in our method and much work to do in the future.

(1) The transmission computed by the bright channel prior is smaller than the real transmission, which often leads to overrestoration. We used the maximum color difference image to rectify this phenomenon, but the rectifying coefficient may not be the best, and we need to find a better coefficient or a better method in the future.

(2) The proposed method uses the guided image filter to refine the transmittance image. Because of the properties of the guided image filter, some detailed information is lost in the refined transmittance image, which leads to the loss of detail in the restored image. Future work is to improve the filter so that the refined transmittance image retains more detailed information.

(3) In this paper we used the deduced histogram equalization method to address the color distortion problem. Its effectiveness can be seen in this paper, but this method may not be the best either, and it may over-equalize one channel (such as the red channel). So, in the future, we should continue working on the color distortion problem.

(4) Since there is no ground truth or uniform measurement standard available, we chose the counts of Canny edge points and SIFT feature points to evaluate the performance of the different methods, but this may not be the most suitable measure. We should adapt or define a more suitable measurement standard to evaluate different methods in the future.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work was partly supported by the Natural Science Foundation of Hebei Province of China under Project no. D2014203153 and the Natural Science Foundation of Hebei Province of China under Project no. D2015203310.

References

  1. S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), vol. 1, pp. 598–605, Hilton Head Island, SC, USA, June 2000.
  2. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” International Journal of Computer Vision, vol. 48, no. 3, pp. 233–254, 2002.
  3. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.
  4. R. Fattal, “Single image dehazing,” ACM Transactions on Graphics, vol. 27, no. 3, article 72, pp. 1–9, 2008.
  5. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
  6. G. Ge, Z. Wei, and J. Zhao, “Fast single-image dehazing using linear transformation,” Optik, vol. 126, no. 21, pp. 3245–3252, 2015.
  7. J. Li, H. Zhang, D. Yuan, and M. Sun, “Single image dehazing using the change of detail prior,” Neurocomputing, vol. 156, pp. 1–11, 2015.
  8. Y.-T. Peng, X. Zhao, and P. C. Cosman, “Single underwater image enhancement using depth estimation based on blurriness,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '15), pp. 4952–4956, September 2015.
  9. C. Ancuti, C. O. Ancuti, T. Haber et al., “Enhancing underwater images and videos by fusion,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 447–456, June 2012.
  10. N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, “Initial results in underwater single image dehazing,” Oceans, vol. 27, no. 3, pp. 1–8, 2010.
  11. A. Galdran, D. Pardo, A. Picón, and A. Alvarez-Gila, “Automatic Red-Channel underwater image restoration,” Journal of Visual Communication and Image Representation, vol. 26, pp. 132–145, 2015.
  12. H. Wen, Y. Tian, T. Huang, and W. Gao, “Single underwater image enhancement with a new optical model,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '13), pp. 753–756, May 2013.
  13. Z. Chen, H. Wang, J. Shen, X. Li, and L. Xu, “Region-specialized underwater image restoration in inhomogeneous optical environments,” Optik, vol. 125, no. 9, pp. 2090–2098, 2014.
  14. S. Q. Duntley, A. R. Boileau, and R. W. Preisendorfer, “Image transmission by the troposphere I,” Journal of the Optical Society of America, vol. 47, no. 6, pp. 499–506, 1957.
  15. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
  16. S. Bazeille, I. Quidu, L. Jaulin, and J.-P. Malkasse, “Automatic underwater image pre-processing,” in Proceedings of the Caracterisation du Milieu Marin (CMM '06), pp. 145–152, October 2006.