Abstract

Scattering and absorption of light are the main reasons for limited visibility in water. Water molecules, suspended particles, and dissolved chemical compounds are all responsible for this scattering and absorption. The limited visibility results in degraded underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but artificial light illuminates the scene in a nonuniform fashion: it produces a bright spot at the center surrounded by a dark region. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination is neglected in most image enhancement techniques for underwater images, and very few methods report results on color images. This paper proposes a method for nonuniform illumination correction of underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics: average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.

1. Introduction

When light travels through water it deviates from its path; this deviation is called scattering. Scattering is caused by the water molecules themselves, by dissolved chemical compounds, and by suspended particles, and it decreases visibility in water. As depth increases, the scattering of natural light increases, which results in poor visibility in deep water. This degrades the performance of underwater imaging systems, and the degradation becomes worse for deep-water imaging.

Propagation of light in water is discussed by Jaffe in [1]. His image formation model divides the total irradiance reaching the camera into three components:

$$E_T = E_d + E_f + E_b \tag{1}$$

where $E_T$ is the total irradiance, $E_d$ is the direct component, $E_f$ is the forward-scattered component, and $E_b$ is the backscattered component.

Forward and backward scattering of light degrade the quality of underwater images. The degradation includes reduced contrast, blurring, and diminished colors.

Image enhancement techniques such as contrast enhancement and color correction are used to compensate for this degradation. Underwater visibility can be increased by using an artificial light source, but artificial light adds nonuniform illumination to the image. This problem is ignored by many researchers, and very few have proposed a correction method for nonuniform illumination in underwater images.

The nonuniform illumination problem in underwater images is shown in Figure 1. An artificial light source is used in an underwater imaging system to increase visibility in water. Part of the light from the artificial source is reflected in the backward direction without reaching the objects in the scene; this is called backward scattering. This reflected light produces a bright spot in the center surrounded by a dark region [2], so the scene is illuminated in a nonuniform fashion. It is assumed that pixel intensity values of underwater images are dominated by Rayleigh scattering [3]. The scattering of light is wavelength dependent, so for correction of nonuniform illumination each color component (R, G, and B) should be processed separately. While providing a solution for nonuniform illumination, the method assumes that underwater images are Rayleigh distributed [4–6]. Consider

$$f(x;\alpha) = \frac{x}{\alpha^{2}}\exp\!\left(-\frac{x^{2}}{2\alpha^{2}}\right), \quad x \ge 0 \tag{2}$$

The Rayleigh distribution is bell shaped, with the majority of pixels concentrated in the middle intensity levels. The probability density function of a Rayleigh distributed random variable $x$ with scale parameter $\alpha$ is given by (2); the mean square value of $x$ is $2\alpha^{2}$.

The method maps the image to a Rayleigh distribution by estimating this parameter with the maximum likelihood method. The principle of maximum likelihood [7] states that, for given observations $x_1, x_2, \ldots, x_N$, the likelihood $L(\alpha) = \prod_{i=1}^{N} f(x_i;\alpha)$ is a function of $\alpha$ alone; the value of $\alpha$ that maximizes this likelihood is the most likely value for $\alpha$ and is chosen as its maximum likelihood estimate $\hat{\alpha}$.
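
A minimal numerical sketch of this principle, not taken from the paper: draw samples from a Rayleigh distribution with a known scale, evaluate the log likelihood (using the density in (2)) over a grid of candidate values, and pick the maximizer. The data, grid, and function names are illustrative assumptions; Section 3 derives the closed-form estimate actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.rayleigh(scale=40.0, size=50_000)   # stand-in for pixel values

def rayleigh_log_likelihood(x, alpha):
    # ln L(alpha) = sum_i ( ln x_i - 2 ln alpha - x_i^2 / (2 alpha^2) )
    return np.sum(np.log(x) - 2.0 * np.log(alpha) - x ** 2 / (2.0 * alpha ** 2))

candidates = np.linspace(1.0, 100.0, 1000)
scores = [rayleigh_log_likelihood(x, a) for a in candidates]
alpha_hat = candidates[int(np.argmax(scores))]
print(alpha_hat)   # close to the true scale of 40.0
```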

This paper proposes a method for nonuniform illumination correction of underwater images. The rest of the paper is organized as follows. Section 2 discusses the state of the art for the problem, Section 3 explains the proposed method for nonuniform illumination correction, Section 4 discusses the image quality metrics, Section 5 reports the results, and Section 6 gives the conclusion.

2. Literature Review

Arnold-Bos et al. [8] noted that global histogram equalization is not suitable when illumination in the image is unequal and that local methods are needed. They proposed clipping the histogram and then equalizing contrast by a division method.

Bazeille et al. [9] suggested homomorphic filtering to correct nonuniform illumination and fixed the maximum and minimum coefficient values of the filter.
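
For reference, a generic homomorphic filtering sketch for a single channel is given below. It is not the exact filter of Bazeille et al. [9]; the gain values `gamma_low`, `gamma_high` and the Gaussian cutoff are illustrative assumptions only.

```python
import numpy as np

def homomorphic_filter(channel, gamma_low=0.5, gamma_high=1.5, cutoff=30.0):
    """Generic homomorphic filtering sketch for one 2-D channel (0-255)."""
    img = np.asarray(channel, dtype=np.float64) + 1.0      # avoid log(0)
    log_img = np.log(img)

    # Gaussian high-frequency-emphasis filter in the frequency domain:
    # low frequencies (illumination) are attenuated toward gamma_low,
    # high frequencies (reflectance/detail) are boosted toward gamma_high.
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    U, V = np.meshgrid(u, v, indexing="ij")
    D2 = U ** 2 + V ** 2
    H = (gamma_high - gamma_low) * (1.0 - np.exp(-D2 / (2.0 * cutoff ** 2))) + gamma_low

    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * spectrum)))

    return np.clip(np.exp(filtered) - 1.0, 0, 255)
```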

Garcia et al. [10] compared four methods for nonuniform illumination correction. The first method uses the illumination-reflectance model, and illumination correction is obtained as

$$g(x,y) = c\,\frac{f(x,y)}{f_{s}(x,y)} \tag{3}$$

where $f$ is the original image, $f_{s}$ is a smoothed version of the image, and $c$ is a constant. Contrast Limited Adaptive Histogram Equalization (CLAHE) and homomorphic filtering are the second and third methods. In the fourth method an illumination field is subtracted from the original image for nonuniform illumination correction.
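
A minimal sketch of the first correction in (3), assuming a Gaussian low-pass filter for the smoothed image and the channel mean for the constant $c$ (both choices are assumptions, not taken from [10]):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_reflectance_correction(channel, sigma=25.0, c=None):
    """Divide the channel by a smoothed version and rescale by a constant, as in (3)."""
    f = np.asarray(channel, dtype=np.float64)
    f_smooth = gaussian_filter(f, sigma=sigma) + 1e-6   # avoid division by zero
    if c is None:
        c = f.mean()          # keep overall brightness roughly unchanged
    g = c * f / f_smooth
    return np.clip(g, 0, 255)
```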

Borgetto et al. [11] also discussed two methods for illumination correction: one uses a homomorphic filter and the other is based on the CCD camera radiometric correction required for mosaicking.

Prabhakar and Praveen Kumar [12] proposed an algorithm for enhancement of underwater images in which a homomorphic filter is used to correct nonuniform illumination.

3. Proposed Method

The proposed method improves the quality of underwater images by correcting nonuniform illumination. Like the traditional methods proposed by Iqbal et al. in [13, 14], the histogram of the image is modified for correction; the method in this paper applies histogram stretching. The detailed process is given in Figure 2.

The input image is first decomposed into three channels (red, green, and blue), and histogram stretching is then performed on each channel individually. Under the assumption that each of the R, G, and B channels is Rayleigh distributed, the histogram stretching is done with respect to the Rayleigh distribution. The scale parameter is estimated from the given image, so the histogram stretching is adaptive. The scale parameter is estimated using the maximum likelihood method as follows. First, the log likelihood function of the pixel values $x_1, x_2, \ldots, x_N$ of a channel is formed:

$$\ln L(\alpha) = \sum_{i=1}^{N} \ln f(x_i;\alpha) \tag{4}$$

Here $f(x_i;\alpha)$ is the probability density function of $x_i$ with scale parameter $\alpha$, given by (2). Substituting (2) into (4) gives

$$\ln L(\alpha) = \sum_{i=1}^{N}\left( \ln x_i - 2\ln\alpha - \frac{x_i^{2}}{2\alpha^{2}} \right) \tag{5}$$

Then the value of $\alpha$ at which the log likelihood function attains its maximum is found. It can be determined by taking the derivative of the log likelihood function with respect to $\alpha$ and equating it to zero:

$$\frac{\partial \ln L(\alpha)}{\partial \alpha} = -\frac{2N}{\alpha} + \frac{1}{\alpha^{3}}\sum_{i=1}^{N} x_i^{2} = 0 \tag{6}$$

Solving (6) for $\alpha$:

$$\frac{1}{\alpha^{3}}\sum_{i=1}^{N} x_i^{2} = \frac{2N}{\alpha} \tag{7}$$

$$\sum_{i=1}^{N} x_i^{2} = 2N\alpha^{2} \tag{8}$$

$$\alpha^{2} = \frac{1}{2N}\sum_{i=1}^{N} x_i^{2} \tag{9}$$

$$\alpha = \sqrt{\frac{1}{2N}\sum_{i=1}^{N} x_i^{2}} \tag{10}$$

The value of $\alpha$ obtained in (10) is the maximum likelihood estimate $\hat{\alpha}$. Maximum likelihood values are estimated for all three channels using the same process, and these estimates are used in histogram stretching of the R, G, and B components with respect to the Rayleigh distribution. As the image is nonuniformly illuminated there are dark and bright patches in the image, so stretching is performed locally on small patches of the image instead of globally. A limit of 1% is applied to all three color components at the minimum and maximum values; the limits are necessary to avoid under- and oversaturation [15]. The stretching process is therefore applied in the range of 1% to 99%. Histogram stretching performed on each color channel with respect to the Rayleigh distribution is given by

$$p_{out} = p_{min} + \sqrt{2\hat{\alpha}^{2}\,\ln\!\left(\frac{1}{1 - F(p_{in})}\right)} \tag{11}$$

where $p_{out}$ is the pixel value in the transformed image, $p_{min}$ is the minimum pixel value in the transformed image, $\hat{\alpha}$ is the estimated parameter value, and $F(p_{in})$ is the cumulative distribution function of the pixel values of the input image. The effect of histogram stretching is shown in Figure 3, which gives the histograms of the red, green, and blue components before and after stretching along with the corresponding images.
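
A simplified sketch of the channel-wise stretching is given below. It implements (10) and (11) globally per channel; the local patch-wise application described above is not reproduced, and the choice $p_{min}=0$, the empirical CDF, and the final rescaling to the 8-bit range are assumptions or simplifications relative to the paper.

```python
import numpy as np

def rayleigh_stretch_channel(channel, low_pct=1.0, high_pct=99.0, p_min=0.0):
    """Rayleigh histogram stretching of one color channel, following (10)-(11)."""
    x = np.asarray(channel, dtype=np.float64)

    # 1%-99% limits to avoid under- and oversaturation [15].
    lo, hi = np.percentile(x, [low_pct, high_pct])
    x = np.clip(x, lo, hi)

    # Maximum likelihood estimate of the Rayleigh scale parameter, eq. (10).
    alpha_hat = np.sqrt(np.sum(x ** 2) / (2.0 * x.size))

    # Empirical CDF of the input pixel values, kept strictly below 1 so the
    # logarithm in (11) stays finite.
    values, counts = np.unique(x, return_counts=True)
    cdf = np.cumsum(counts) / x.size
    F = np.clip(np.interp(x, values, cdf), 0.0, 1.0 - 1e-6)

    # Rayleigh stretching, eq. (11), then rescaling to the 8-bit display range.
    out = p_min + np.sqrt(2.0 * alpha_hat ** 2 * np.log(1.0 / (1.0 - F)))
    out = 255.0 * (out - out.min()) / (out.max() - out.min() + 1e-6)
    return out

def correct_nonuniform_illumination(rgb_image):
    """Apply the channel-wise stretching to an H x W x 3 uint8 image."""
    channels = [rayleigh_stretch_channel(rgb_image[..., k]) for k in range(3)]
    return np.stack(channels, axis=-1).astype(np.uint8)
```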

4. Image Quality Metrics

The image quality metrics used in this paper are no-reference color image quality metrics. These metrics are based on primary vision parameters perceived by the human vision system: luminance, contrast, information content, and added noise. According to human vision perception, a good quality image should have appropriate luminance, information content, and contrast. These vision parameters can be measured using the mathematical model given by Xie and Wang [16] in terms of the average contrast (AC), average information entropy (AIE), and average luminance (AL).

Average contrast is computed as

$$AC = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} \bigl|\overline{\nabla I}(i,j)\bigr| \tag{12}$$

where $M$ and $N$ are the numbers of rows and columns of the image and $|\overline{\nabla I}(i,j)|$ is the magnitude of the mean of the gradients of the three color components R, G, and B, given as

$$\overline{\nabla I}(i,j) = \frac{\nabla I_R(i,j) + \nabla I_G(i,j) + \nabla I_B(i,j)}{3} \tag{13}$$
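
A small sketch of (12)-(13), assuming `numpy.gradient` as the discrete gradient operator (the paper does not specify which operator is used):

```python
import numpy as np

def average_contrast(rgb):
    """Mean, over all pixels, of the magnitude of the average R, G, B gradient."""
    img = np.asarray(rgb, dtype=np.float64)
    gx = np.zeros(img.shape[:2])
    gy = np.zeros(img.shape[:2])
    for k in range(3):                       # mean of the per-channel gradients
        dgy, dgx = np.gradient(img[..., k])
        gx += dgx / 3.0
        gy += dgy / 3.0
    return np.mean(np.sqrt(gx ** 2 + gy ** 2))
```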

Information entropy for a single color channel is computed as

$$IE = -\sum_{l=0}^{255} p_l \log_{2} p_l \tag{14}$$

where $p_l$ is the probability of the $l$th color level.

Then, using (14), IE is calculated for the red, green, and blue channels as $IE_R$, $IE_G$, and $IE_B$, respectively. The total AIE is defined as

$$AIE = \frac{IE_R + IE_G + IE_B}{3} \tag{15}$$

The maximal value of AIE for a color image is 8 bits.
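
A sketch of (14)-(15) for an 8-bit RGB image, using a 256-bin histogram to estimate the level probabilities:

```python
import numpy as np

def information_entropy(channel, levels=256):
    """Information entropy of one 8-bit color channel, eq. (14)."""
    hist, _ = np.histogram(channel, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # empty levels contribute 0 * log 0 := 0
    return -np.sum(p * np.log2(p))

def average_information_entropy(rgb):
    """Average information entropy (AIE) over the R, G, B channels, eq. (15)."""
    return np.mean([information_entropy(rgb[..., k]) for k in range(3)])
```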

Average luminance is computed as follows:

$$AL = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} Y(i,j) \tag{16}$$

where $M$ and $N$ are the numbers of rows and columns of the image and $Y(i,j)$ is the luminance value at pixel $(i,j)$. When the histogram of the luminance component $Y$ of an image is equalized, the ideal value of AL is 127.5.
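
A sketch of (16), assuming the Rec. 601 weights for the luminance component $Y$ (the paper does not specify how $Y$ is computed):

```python
import numpy as np

def average_luminance(rgb):
    """Average luminance (AL), eq. (16), with Rec. 601 luma weights as an assumption."""
    img = np.asarray(rgb, dtype=np.float64)
    y = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return y.mean()
```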

So 127.5 is considered as an optimum value of luminance (OL) [16].

Comprehensive assessment function (CAF) is a general image quality assessment function given by Xie and Wang [16], computed as

$$CAF = AIE^{c_1} \cdot AC^{c_2} \cdot NNF^{c_3} \tag{17}$$

where $c_1$, $c_2$, and $c_3$ are weight parameters; through experiment and comparison with subjective assessment, Xie and Wang set their values to 1, 1/4, and 3, respectively [16]. In (17), AIE is the average information entropy computed using (15), AC is the average contrast computed using (12), and NNF is the normalized neighborhood function defined as

$$NNF = \frac{OL - \operatorname{dist}(AL, OL)}{OL} \tag{18}$$

where OL is the optimum luminance value, here taken as 127.5 [16], and $\operatorname{dist}(AL, OL) = |AL - OL|$ is the absolute difference between AL and OL. When AL is equal to OL, $\operatorname{dist}(AL, OL)$ is zero and NNF attains its optimum value of 1. The CAF computed by (17) is a convex function [16]; a larger CAF indicates better image quality.
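
A sketch of (17)-(18) as reconstructed above; the product-of-powers form of CAF and the default weights follow that reconstruction and are assumptions, and AIE, AC, and AL are assumed to have been computed as in the earlier sketches.

```python
def normalized_neighborhood_function(al, ol=127.5):
    """Normalized neighborhood function (NNF), eq. (18)."""
    return (ol - abs(al - ol)) / ol

def comprehensive_assessment_function(aie, ac, al, c1=1.0, c2=0.25, c3=3.0):
    """Comprehensive assessment function (CAF), eq. (17), assumed product form."""
    nnf = normalized_neighborhood_function(al)
    return (aie ** c1) * (ac ** c2) * (nnf ** c3)
```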

5. Results

The proposed method is applied to five nonuniformly illuminated images. The results are compared with those of traditional methods of nonuniform illumination correction: histogram equalization (HE), adaptive histogram equalization (AHE), and homomorphic filtering.

Comparison of the results is performed using the no-reference image quality metrics for color images given in the previous section.

The results are shown in Figure 4. Table 1 shows a quantitative comparison of the image quality metrics of the original and processed images. The superior results are indicated in bold font in the table.

The parameter values for the homomorphic filter are selected as given by Bazeille et al. [9].

6. Conclusion

The proposed method is used for nonuniform illumination correction of underwater images. Nonuniform illumination affects the overall contrast of the image. The proposed method shows improvement in all the quality metrics when compared with the original image. There is also improvement in the quality parameters average contrast, average information entropy, and comprehensive assessment function when the results are compared with those of traditional methods. There is a small degradation in average luminance and normalized neighborhood function compared to the results of the traditional methods; however, this degradation is very small compared to the improvements in the other quality parameters. Moreover, the general image quality assessment function (CAF), which combines the other quality parameters (AC, AIE, and NNF), shows improvement for the proposed method when compared with the other methods. Thus the proposed method improves the results overall.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.