Computational and Mathematical Methods in Medicine
Volume 2015, Article ID 607407, 19 pages
http://dx.doi.org/10.1155/2015/607407
Research Article

Color Enhancement in Endoscopic Images Using Adaptive Sigmoid Function and Space Variant Color Reproduction
Mohammad S. Imtiaz and Khan A. Wahid

Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada S7N 5A9

Received 10 September 2014; Accepted 25 December 2014

Academic Editor: Kevin Ward

Copyright © 2015 Mohammad S. Imtiaz and Khan A. Wahid. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Modern endoscopes play an important role in diagnosing various gastrointestinal (GI) tract related diseases. Improved visual quality of endoscopic images can provide better diagnosis. This paper presents an efficient color image enhancement method for endoscopic images. It is achieved in two stages: image enhancement at the gray level followed by space variant chrominance mapping color reproduction. Image enhancement is achieved by applying an adaptive sigmoid function and a uniform distribution of sigmoid pixels; a space variant chrominance mapping color reproduction is then used to generate new chrominance components. The proposed method is used on low contrast color white light images (WLI) to enhance and highlight the vascular and mucosa structures of the GI tract. The method is also used to colorize grayscale narrow band images (NBI) and video frames. The focus value and color enhancement factor show that the enhancement level of the processed image is greatly increased compared to the original endoscopic image, and the overall contrast level of the processed image is higher than that of the original. The color similarity test proves that the proposed method does not add any color that is not present in the original image. The algorithm has low complexity, with an execution speed faster than other related methods.

1. Introduction

Visual quality of color images plays an important role in medical image diagnosis. Wireless capsule endoscopy (WCE) is an established methodology that offers medical doctors the capability of examining the interior of the small intestine with a noninvasive procedure [1]. However, due to power and hardware limitations, the image quality in WCE is lower than in high definition wired endoscopy [2]. Some GI tract related diseases, such as stomach and colon cancers and ulcerative colitis, are now great threats to human health [1]. Many such GI diseases can be prevented and cured by means of early detection. Despite the several benefits of WCE, the images acquired by this technique are often not clear enough to show the mucosa structure and the tissue and vascular characteristics of the digestive tract compared with a traditional endoscope, which affects the detection accuracy and increases the miss rate during clinical diagnosis [1, 3–5]. This is why new techniques are constantly being pursued to enhance certain mucosal or vascular characteristics so that abnormal growths can be visualized better.

There are both in-chip and postprocessing systems that can enhance certain mucosal or vascular characteristics. Among the in-chip technologies, narrow band imaging (NBI) [6] and autofluorescence imaging (AFI) [7] are worth mentioning. There are two types of NBI systems: one is the RGB sequential illumination system, where narrow spectra of red, green, and blue lights centered on 415 nm, 445 nm, and 500 nm, respectively, are used for tissue illumination [8]. In the other type of NBI system, band-pass filters with bandwidths of 30 nm and central wavelengths of 415 nm (blue) and 540 nm (green) are used to generate NBI images [6]. In an AFI system, on the other hand, a special rotating color filter wheel is used in front of the xenon light source to sequentially generate blue light (390–470 nm) and green light (540–560 nm) for tissue illumination [7]. All of these techniques increase the hardware complexity and power consumption of the endoscopic system. Virtual chromoendoscopy (CE), in contrast, is a postprocessing system that decomposes images into various wavelengths and produces a reconstructed image with an enhanced mucosal surface [9]. Several researchers have concluded that NBI appears to be a less time-consuming and equally effective alternative to CE for the detection of neoplasia, but with a higher miss rate [3]. Additionally, neither NBI nor CE can improve adenoma detection or reduce miss rates during screening colonoscopy; no difference has been observed in diagnostic efficacy between these two types of systems [4, 10].

There are some other global and adaptive techniques to enhance the contrast and texture information of an image, such as adaptive histogram equalization (AHE) [11], contrast-limited adaptive histogram equalization (CLAHE) [12], high boost filtering (HBF) [13], and brightness preserving dynamic fuzzy histogram equalization (BPDFHE) [14]. AHE applies a locally varying grayscale transformation to each small block of the image, thus requiring the determination of the block size [15]. CLAHE operates on small regions in the image, often called tiles, instead of the entire image, based on user-assigned parameters. Finally, the neighboring tiles are combined using bilinear interpolation to eliminate artificially induced boundaries.

Two drawbacks of this technique are noise enhancement in smooth regions and image dependency of the contrast gain limit [15]. HBF emphasizes high frequency components without eliminating the low frequency ones, but it may introduce distortions in smooth regions due to overfiltering. BPDFHE is a modification of brightness preserving dynamic histogram equalization (BPDHE) [16] that preserves brightness and improves contrast enhancement while reducing computational complexity. However, it introduces additional artifacts depending on the variation of the gray level distribution [17], which may lead to inaccurate diagnosis.

In this paper, a versatile endoscopic image enhancement and color reproduction method is proposed that can improve the detection rate of anomalies present in GI images. The enhancement is achieved in two stages: image enhancement at the gray level followed by space variant chrominance mapping color reproduction. Image enhancement itself is achieved in two steps, using an adaptive sigmoid function and uniform distribution of sigmoid pixels. This is somewhat similar to our previous work [18], where enhancement is achieved by applying histogram equalization followed by an adaptive sigmoid function; that approach can enhance the desired mucosa and vascular features but cannot preserve the brightness of the image. In this work, therefore, a modified adaptive sigmoid function using precalculated gain and cutoff values is applied first to preserve the brightness of the gray image, and the contrast level is enhanced in the next stage using histogram equalization.

Secondly, space variant color reproduction is achieved by generating a real color map, transferring and modifying old chrominance values from either a theme image or the input image. The proposed method can be useful in the following scenarios.

(i) In white light imaging (WLI), white light is used for illuminating the GI tract and color images are generated by the endoscope. Using the proposed method, any low-contrast color WLI image can be enhanced at the grayscale level and then colorized with its original color, which can help gastroenterologists better inspect the vascular and mucosa structures.

(ii) It can be used to colorize a grayscale image using the tone of a different color theme image. This is useful when only a grayscale image is available (the corresponding color image is either not available or distorted). It is also useful for saving power and bandwidth during transmission in wireless capsule endoscopy (WCE): instead of transmitting all color images from the electronic capsule, the capsule can transmit one color image followed by 3 or 4 grayscale images. Using the proposed method, these grayscale images can later be colorized using the first color image as the theme image.

(iii) In narrow band imaging (NBI), lights of 415 nm and 540 nm wavelengths are used to illuminate the mucosa surface; the reflected light from the mucosa is captured by a monochromic CCD image sensor [19]. The grayscale images from the CCD image sensor are then passed to an image processor where a pseudocolor is added to the images [20]. Using the proposed method, the grayscale NBI images can be further enhanced for better visibility of the mucosa structure; pseudocolors can then be added using the tone of any color theme image.

2. Proposed Method

The proposed method consists of two stages: image enhancement and space-variant chrominance mapping based color reproduction. The method is shown in Figure 1. The stages are briefly discussed below.

Figure 1: Proposed color image enhancement method.
2.1. Image Enhancement

At first, the color endoscopic image is converted into the YCbCr color space using (1). Here, Y is the luminance (luma) component and Cb and Cr are the chrominance components. The color space conversion allows us to process the luma pixels to enhance vascular features and the chrominance pixels for color reproduction. Consider

Y  =  0.299R + 0.587G + 0.114B
Cb = -0.169R - 0.331G + 0.500B    (1)
Cr =  0.500R - 0.419G - 0.081B

Here, Y is treated as the grayscale image. After conversion, the proposed method normalizes the grayscale image and each chrominance plane between 0 and 1 using (2). Consider

In = (I - Imin) / (Imax - Imin)    (2)

Here, Imin and Imax are the minimum and maximum pixel values, respectively. The normalized grayscale image is then enhanced using the adaptive sigmoid function and uniform distribution of sigmoid pixels.
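For illustration, the luma extraction and the min-max normalization of (2) can be sketched in Python. The helper names are ours, and the BT.601 luma weights are an assumption about the conversion used; the sketch operates on nested lists rather than an image library:

```python
def rgb_to_luma(r, g, b):
    # BT.601 luma weights (assumed here as the Y channel of the conversion)
    return 0.299 * r + 0.587 * g + 0.114 * b

def normalize(plane):
    # Min-max normalization of a 2D plane to [0, 1], as in (2)
    lo = min(min(row) for row in plane)
    hi = max(max(row) for row in plane)
    rng = (hi - lo) or 1  # guard against a flat plane
    return [[(p - lo) / rng for p in row] for row in plane]

gray = [[rgb_to_luma(255, 0, 0), rgb_to_luma(0, 0, 0)],
        [rgb_to_luma(255, 255, 255), rgb_to_luma(128, 128, 128)]]
norm = normalize(gray)
```

After normalization the darkest pixel maps to 0 and the brightest to 1, which is the precondition for the cutoff and gain ranges discussed in the next subsection.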

2.1.1. Adaptive Sigmoid Function

The proposed method uses contrast manipulation techniques for image enhancement. Generally, contrast manipulation technique can be performed either globally or adaptively. Global techniques apply a transformation to all image pixels, while adaptive techniques use an input-output transformation that varies adaptively with local image characteristics. Our method transforms the pixel values adaptively using sigmoid function.

In general, a sigmoid function is real valued and differentiable, having either a nonnegative or nonpositive first derivative that is bell shaped. It has been used in several image processing studies [25–27]. Using x for the input, the sigmoid function is given below:

f(x) = 1 / (1 + e^(-x))    (3)

In the training mode, we have observed that at a certain exponent the image highlights some vascular characteristics and mucosa structure which are not clearly visible in the original image. To control the exponent, we have introduced two coefficients into the sigmoid function. Using x for the input, g for the gain, and c for the cutoff, the modified sigmoid function is expressed below:

f(x) = 1 / (1 + e^(g(c - x)))    (4)

The cutoff value determines the midpoint of the input curve and the gain controls the amount of bending. These two parameters allow us to train the proposed method to generate a certain exponent that highlights vascular characteristics. Let x be the normalized image pixel values to which the sigmoid function (4) is applied. Figure 2 presents the sigmoid curves of input pixel values for different cutoff and gain values.
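A minimal sketch of the gain/cutoff-controlled sigmoid of (4) is shown below, assuming the common parameterization in which the curve passes through 0.5 at the cutoff and the gain sets the steepness (the function name is ours):

```python
import math

def adaptive_sigmoid(x, gain, cutoff):
    # Modified sigmoid of (4): cutoff shifts the midpoint of the
    # curve, gain controls the amount of bending around it
    return 1.0 / (1.0 + math.exp(gain * (cutoff - x)))

# At the cutoff the curve passes through 0.5 regardless of gain
mid = adaptive_sigmoid(0.5, gain=8.0, cutoff=0.5)

# A higher gain pushes values above the cutoff harder toward 1
lo_gain = adaptive_sigmoid(0.7, gain=4.0, cutoff=0.5)
hi_gain = adaptive_sigmoid(0.7, gain=12.0, cutoff=0.5)
```

Applying this mapping pixel-wise stretches contrast around the cutoff, which is why the gain range reported later (7.5–8.5) trades off feature visibility against saturation of the extremes.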

Figure 2: Sigmoid effect on pixel for different gain values (a) with 0.5 cutoff; (b) with 0.2 cutoff.

These parameters (gain and cutoff) can also control the overall brightness and contrast level of the image. The cutoff value controls the amount of brightness and the gain controls the consecutive difference between pixels. To keep the exponent at the desired level, we have proposed algorithms to generate the cutoff and gain values. Based on the input pixel values, (5) generates specific cutoff and gain values, which are then used in (4) to generate the sigmoid image. In (5), the pixel value at each position and the total number of pixels are considered; its constants were heuristically collected from simulation. First, we processed endoscopic images with different combinations of gain and cutoff values. The images were collected from the Gastrolab [28] and Atlas [29] databases and carry comments from gastroenterologists; as a result, they can be subdivided into different disease categories. Figure 3 shows some examples of the original and corresponding sigmoid images. The abnormalities in the images can be identified, but not the tissue and vascular characterization (as marked with arrows in Figures 3(a) and 3(c)). It is noted that mucosa structure and tissue and vascular characteristics are important since, by analyzing them, the status of gastric glands and pits can be investigated [30–32].

Figure 3: (a) Original image with defected polyp and (c) Crohn’s disease; (b) and (d) adaptive sigmoid images of (a) and (c), respectively.

During simulation, we observed that in certain cases, with a gain in the range of 7.5–8.5 and a cutoff in the range of 0.4–0.5, the tissue and vascular characterization is highly visible. To keep the gain and cutoff in that desired range, we propose (5). For better illustration, we have presented sigmoid images processed with different combinations of gain and cutoff values in Table 1, where the effects on the images for different combinations can be observed. For example, Images #1 and #5 have low intensity; Image #2 has high brightness; Images #3 and #4 have highlighted tissue and vascular characterization.

Table 1: Sigmoid images with different combinations of gain (g) and cutoff (c) values.
2.1.2. Uniform Distribution of Sigmoid Pixels

In the next stage, the sigmoid pixels are uniformly distributed to increase the contrast level. This helps to visualize the vascular characteristics of the darker parts of an adaptive sigmoid image and is accomplished by effectively spreading out the most frequent intensities.

Let S be a given sigmoid image represented as an M × N matrix of integer pixel intensities ranging from 0 to 255, and let p denote the normalized histogram of S with a bin for each possible intensity. So,

p(k) = n_k / (M × N),  k = 0, 1, ..., 255    (6)

where n_k is the number of pixels with intensity k. The uniformly distributed sigmoid image T is defined as

T(i, j) = floor( 255 × sum from k = 0 to S(i, j) of p(k) )    (7)

where floor maps a number to the largest integer not greater than it. Normally, the pixel values of an image are not equally likely to occur, so its cumulative distribution function (CDF) does not form a straight line. In the proposed method, a uniform distribution of sigmoid pixels is achieved by applying (6) and (7); this technique is similar to global histogram equalization. Table 2 shows a visual comparison of the uniform distribution of sigmoid pixels. This uniformly distributed sigmoid image is later treated as the new enhanced grayscale image.
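The redistribution of (6) and (7) is standard global histogram equalization, which can be sketched as follows (function name and the toy 2 × 2 image are ours):

```python
def equalize(image, levels=256):
    # Global histogram equalization of (6)-(7): build the normalized
    # histogram, then map each pixel through the scaled CDF
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    # floor of the scaled CDF, as in (7)
    return [[int((levels - 1) * cdf[p]) for p in row] for row in image]

img = [[50, 50], [100, 200]]
eq = equalize(img)
```

On this toy input the most frequent intensity (50) is pushed to mid-gray and the brightest pixel to 255, spreading the occupied range across the full scale.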

Table 2: Comparison between the processed sigmoid image and the uniformly distributed sigmoid image.
2.2. Color Reproduction

In the second stage of the proposed method, we apply color reproduction. It is a computer-assisted process of adding color to a monochrome image [33, 34]. In the proposed method, it is possible to retrieve the original color with a better tone or add pseudocolor using a theme image. This choice is controlled by the user through the “color decision” module (see Figure 1) which selects the chrominance components.

Case 1. To retrieve the original color, we first create new Cb and Cr planes by matching the original Cb and Cr values for corresponding pixels from the original grayscale image. First, the positions of all pixels in the normalized grayscale image that share the value of the pixel being processed are identified, as expressed by (8); this search returns one or multiple positions, which allow us to generate the new chrominance planes. Two scenarios may occur: (a) if only one chrominance value is found, that value is placed in the corresponding position in the new Cb and Cr planes; (b) if multiple chrominance values are found, a new chrominance value is generated using (9) and placed in the corresponding positions of the new Cb and Cr planes. These steps continue until all pixels of the grayscale image are scanned. The new Cb and Cr planes have the same dimensions as the original grayscale image. Finally, the enhanced grayscale image and the new Cb and Cr planes are converted back to an RGB image using (10).

Case 2. To add pseudocolor, a theme image is required. This is applicable when only a grayscale image is available and no color information exists. As the color information in an endoscopic image dictates clinical decisions, the selection of the theme image is very important: it must be selected from a nearby location or region of the GI tract. After selecting a proper theme color image, it is converted into the YCbCr space. Then, we create new Cb and Cr planes by matching the chrominance values of the theme image against each enhanced grayscale pixel. A procedure similar to (8) is followed to find the new Cb and Cr planes, as given in (11). Here, the normalized theme grayscale image is searched for each pixel value of the enhanced grayscale image, which returns one or multiple locations. These locations allow us to generate the new chrominance planes with respect to the enhanced and theme grayscale images; the chrominance values are taken from the Cb and Cr planes of the theme image.

Three scenarios may occur: (a) if only one chrominance value is found, that value is placed in the corresponding position in the new Cb and Cr planes; (b) if multiple chrominance values are found, a new chrominance value is generated using (9) and placed in the corresponding positions of the new Cb and Cr planes; (c) if no chrominance value is found, the chrominance value at the same position in the theme Cb and Cr planes is read and placed in the corresponding position of the new Cb and Cr planes. These steps continue until all pixels of the enhanced grayscale image are scanned. The new Cb and Cr planes have the same dimensions as the original grayscale image. Finally, the enhanced grayscale image and the new Cb and Cr planes are converted back to an RGB image using (10).
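The matching procedure for Case 2 can be sketched as below. This is our reading of the text: identical luma values in the theme image are collected via a lookup table, multiple matches are averaged (a plausible form of (9)), and a miss falls back to the theme chrominance at the same position; the function and variable names are hypothetical, and the theme image is assumed to have the same dimensions as the enhanced image:

```python
from collections import defaultdict

def reproduce_chroma(enhanced_gray, theme_gray, theme_cb, theme_cr):
    # Space-variant chrominance mapping (Case 2 sketch): for each
    # enhanced luma value, find every theme position with the same
    # luma and transfer (or average) the chrominance found there
    lookup = defaultdict(list)
    for i, row in enumerate(theme_gray):
        for j, g in enumerate(row):
            lookup[g].append((i, j))
    new_cb, new_cr = [], []
    for i, row in enumerate(enhanced_gray):
        cb_row, cr_row = [], []
        for j, g in enumerate(row):
            pos = lookup.get(g)
            if pos:  # one or more matches: average their chrominance
                cb_row.append(sum(theme_cb[x][y] for x, y in pos) / len(pos))
                cr_row.append(sum(theme_cr[x][y] for x, y in pos) / len(pos))
            else:    # no match: fall back to the same position in the theme
                cb_row.append(theme_cb[i][j])
                cr_row.append(theme_cr[i][j])
        new_cb.append(cb_row)
        new_cr.append(cr_row)
    return new_cb, new_cr

gray = [[10, 20]]
theme = [[10, 30]]
cb, cr = reproduce_chroma(gray, theme, [[0.1, 0.3]], [[0.4, 0.6]])
```

The lookup table makes each luma query constant-time, which is consistent with the linear complexity claimed later for the full algorithm.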

In Figure 4, the flow chart of the color reproduction algorithm is presented. Some reconstructed images for the two cases are shown in Figures 5 and 6. It can be seen that the proposed method enhances color information in all reconstructed images.

Figure 4: Flow chart: (a) Case 1: to retrieve original color; (b) Case 2: to add pseudocolor from theme image.
Figure 5: (a) Original image and (b) enhanced color image (color reproduced from original image).
Figure 6: (a) Original grayscale image (no color information is available) (b) enhanced color image (color reproduced from theme image shown in (c)).

3. Results and Discussion

In order to evaluate the performance of the proposed algorithm, we have applied it to several endoscopic images collected from the Gastrolab [28] and Atlas [29] databases. The results are summarized below in four categories.

3.1. Category 1: Low-Contrast Color Images

In this case, the input image is first enhanced at the gray level and then color is added. The chrominance values of the original input image are used for color reproduction. As a result, the output image has a similar color tone with enhanced features, as shown in Table 3. It can be seen from the table that the vascular and other mucosa structures are better visible and highlighted in the output images, which can help gastroenterologists in better diagnosis.

Table 3: Category 1: Enhancement of colored WLI images (where input image is used as theme image).
3.2. Category 2: Low-Contrast Grayscale Images

In this case, we show examples where low-contrast grayscale images are used (i.e., color information is not available for these images). The grayscale images are first enhanced and then colorized using a theme image. The choice of the theme image is important, as it may add color distortion if not properly chosen; we therefore choose a theme image from the same or a similar physical location of the GI tract. The results of the enhanced color images are shown in Table 4 along with the corresponding theme images.

Table 4: Category 2: Enhancement of grayscale WLI images (here, no original color image is available, so color image from similar location of the GI tract is used as theme image).
3.3. Category 3: Raw NBI Images

In the next experiment, we applied our algorithm to several NBI images (grayscale in nature), as shown in Table 5. The raw NBI images are enhanced first and then a color theme image is used to generate pseudocolor. The theme images are chosen in the same way as described before. We can see from the table that the output images have much better visibility of the mucosa structure compared to the grayscale images.

Table 5: Category 3: Enhancement and color reproduction of grayscale NBI images.
3.4. Category 4: Image Transmission in WCE

The proposed color generation method is very useful in reducing power consumption during transmission in wireless capsule endoscopy (WCE). Instead of transmitting all color images from the electronic capsule (which takes 24 bits per pixel per image), it can transmit one color image at the beginning followed by a defined number of grayscale images (8 bits per pixel per image). Using the proposed method, these grayscale images are later colorized at the receiver using the first color image. In Table 6, we show the results of such a case, where the R, G, and B components of frame 1 are transmitted first and only the luminance components of frames 2, 3, 4, and 5 follow. At the receiver, frames 2–5 are reconstructed using the proposed color reproduction method, taking frame 1 as the theme image. The color reconstructed images are then compared with the original color video sequences. In the conventional case, the R, G, and B components of all frames are transmitted; for the given case of five frames, this requires a total of 120 bits per pixel (i.e., 24 × 5). The proposed method requires only 56 bits per pixel (i.e., 24 + 8 + 8 + 8 + 8), a saving of 53% during transmission. More savings can be achieved if the number of grayscale frames is increased. The original color video frames are also shown in Table 6 for comparison; the reconstructed output images have the same color as the original color video frames, with a power saving of 53%.
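The bit budget above follows from simple arithmetic, which can be checked with a short sketch (helper name ours):

```python
def bits_per_pixel(num_frames, color_bits=24, gray_bits=8):
    # One full-color key frame followed by grayscale-only frames
    return color_bits + (num_frames - 1) * gray_bits

conventional = 24 * 5          # all five frames transmitted in color
proposed = bits_per_pixel(5)   # one color frame + four grayscale frames
saving = 1 - proposed / conventional
```

For five frames this reproduces the 56 versus 120 bits per pixel comparison and the roughly 53% saving; the saving approaches 2/3 as the number of grayscale frames grows.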

Table 6: Category 4: Reproduction of color frames from grayscale frame in WCE video; no enhancement was applied.

It should be noted here that the previous work [18] was only applied to color images, whereas the proposed method can be applied to both color and grayscale images. As a result, low-contrast grayscale (Category 2) and raw NBI (Category 3) images can be colorized by the method using a theme image. This feature also makes the algorithm helpful in saving power during WCE image transmission (Category 4).

4. Performance Analysis

In this section, the performance of the proposed scheme is evaluated using the focus value, statistic of visual representation, measurement of uniform distribution, color similarity test, color enhancement factor (CEF), and time complexity. The results are discussed below.

4.1. Focus Value

In our method, image enhancement is achieved by the adaptive sigmoid function and uniform distribution of sigmoid pixels. As a result, the overall information of sharp contours and contrast is increased. These changes are evaluated using the focus value [35], a mathematical representation of the ratio of the AC and DC energy values of the Discrete Cosine Transform (DCT) of an image [36]. Let E_AC be the AC energy and E_DC the DC energy of a DCT image F(u, v). E_AC carries the information related to the high frequency components (i.e., changes of contrast level, sharp contours, and crisp edges) of an image, while E_DC carries only the information related to the low frequency component (i.e., luminance or brightness). The expressions are given below:

E_DC = F(0, 0)^2
E_AC = sum over all (u, v) ≠ (0, 0) of F(u, v)^2    (12)

Here, u and v represent the row and column of the DCT image; F(0, 0) is the DC part and the remaining coefficients form the AC part of the DCT image. The ratio of E_AC to E_DC is the focus value, as given by

FV = E_AC / E_DC    (13)

If the overall information of sharp contours, crisp edges, and contrast in the enhanced image is higher than in the original image, then the focus value of the enhanced image will be higher than that of the original image, and vice versa. We have compared our method in terms of focus value, using 60 sample images, with other methods such as AHE [11], CLAHE [12], HBF [13], and BPDFHE [14]. The results are presented in Table 7. Here, we see that the focus values of the proposed method are relatively higher compared to the other methods.
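The focus value can be sketched directly from its definition as the ratio of AC to DC energy of the DCT. The sketch below uses a naive O(n^2) DCT-II for clarity (function names ours; squared-coefficient energies are assumed as the energy measure):

```python
import math

def dct2(img):
    # Naive orthonormal 2D DCT-II, adequate for a small sketch
    m, n = len(img), len(img[0])
    def alpha(k, size):
        return math.sqrt(1 / size) if k == 0 else math.sqrt(2 / size)
    out = [[0.0] * n for _ in range(m)]
    for u in range(m):
        for v in range(n):
            s = 0.0
            for x in range(m):
                for y in range(n):
                    s += (img[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * m))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u, m) * alpha(v, n) * s
    return out

def focus_value(img):
    # Ratio of AC energy to DC energy of the DCT (the focus value)
    f = dct2(img)
    dc = f[0][0] ** 2
    total = sum(f[u][v] ** 2 for u in range(len(f)) for v in range(len(f[0])))
    return (total - dc) / dc

flat = [[100] * 4 for _ in range(4)]        # no edges, no contrast
edgy = [[0, 255] * 2, [255, 0] * 2] * 2     # checkerboard: strong contrast
```

A constant image has essentially zero AC energy, while a high-contrast checkerboard yields a much larger focus value, matching the interpretation in the text.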

Table 7: Comparisons of focus value with other related works.
4.2. Statistic of Visual Representation

Next, we used the statistic of visual representation [37] to measure the contrast and intensity distortion between two images. Equation (14) defines the statistic of visual representation. Consider

C = (σe² - σo²) / σo²,  B = (μe - μo) / μo    (14)

where σe² and μe are the variance and mean of the enhanced image, and σo² and μo are the variance and mean of the original image, respectively. Here, C defines the percentage of increment or decrement of the contrast level and B defines the percentage of increment or decrement of the intensity level. In our experiment, we used 60 grayscale images. The results are presented in Table 8. We can see that the C and B values of the first image using the proposed method are 1.0636 and 0.0716, which means that the contrast and intensity levels of the processed image are increased by 106.36% and 7.16%, respectively. A negative sign denotes a decrement. It is noticeable that the proposed method's contrast and intensity levels are higher compared to the other methods.
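As a sketch of (14), assuming the relative-difference form implied by the "percentage of increment or decrement" interpretation (function name and sample data ours):

```python
from statistics import pvariance, mean

def svr(original, enhanced):
    # Statistic of visual representation, (14): relative change in
    # contrast (variance) and intensity (mean) of the pixel values
    c = (pvariance(enhanced) - pvariance(original)) / pvariance(original)
    b = (mean(enhanced) - mean(original)) / mean(original)
    return c, b

orig = [40, 50, 60, 50]
enh = [20, 50, 80, 60]   # stretched: higher variance, slightly higher mean
c, b = svr(orig, enh)
```

On this toy data the contrast term is strongly positive (the stretch raised the variance) while the intensity term shows a 5% brightness increase, mirroring how C and B are read in Table 8.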

Table 8: Comparisons of statistic of visual representation with other related works.
4.3. Measurement of Uniform Distribution

Here, we evaluate the uniformity of distribution of the R, G, and B planes by calculating their entropy [38, 39]. The more uniform the distribution of the color planes, the better the color enhancement. The entropy of a distributed signal is defined by

H = - sum over i of p_i × log2(p_i)    (15)

where p_i is the probability of the ith intensity level.
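The entropy measure can be sketched as follows (function name ours); a perfectly flat 8-bit histogram attains the maximum of 8 bits, while a peaked histogram scores far lower:

```python
import math
from collections import Counter

def entropy(plane):
    # Shannon entropy of the normalized histogram of a color plane
    counts = Counter(plane)
    n = len(plane)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform = list(range(256))      # perfectly flat histogram
peaked = [128] * 255 + [0]      # almost all mass in one bin
```

This is why the entropy values reported for Table 9 (7.6237 versus 7.4961) are read as "closer to 8 means a more uniform histogram."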

First, we show the advantage of the proposed color reproduction in Table 9. In image (a), we used the proposed image enhancement algorithm on the luminance plane and left the chrominance planes unchanged; in image (b), we applied the proposed image enhancement on the luminance plane and color reproduction on the chrominance planes. It is noticeable that image (a), without color reproduction, does not preserve brightness and shows an imbalanced saturation level. On the contrary, image (b), with color reproduction, has much more balanced saturation and preserves the overall brightness. This happens because YCbCr is a nonuniform and nonorthogonal color space; both luminance and chrominance must therefore be manipulated in such a way that their correlation is not broken, preserving the brightness along with the color saturation level. Additionally, our method achieves a higher entropy value, which means that it produces a more uniform histogram: the entropy of image (b) is 7.6237, higher than that of image (a), which is 7.4961.

Table 9: Histograms of the R, G, and B planes in terms of uniform distribution (a) without and (b) with color reproduction.

Table 10 shows the performance comparison with other related methods. It shows that the proposed method produces images with enhanced and highlighted mucosa structures. The results are also summarized in Table 11.

Table 10: Comparison of the histograms of the R, G, and B planes in terms of uniform distribution.
Table 11: Comparisons of the measurement of uniform distribution based on entropy with other related works.
4.4. Color Similarity Test

To validate the results statistically, the color similarity between the original and color-reproduced images is evaluated using several performance metrics: the CIE94 delta-E (ΔE) color difference [40], the mean structure similarity index (MSSIM) [41], and structure and hue similarity (SHSIM) [42]. The purpose is to show that our color reproduction method does not add any additional color. CIE94 measures the color difference between the processed and original images in the LAB color space; in CIE94, ΔE = 0 indicates that there is no color difference between two images. MSSIM is used to measure color similarity in the chrominance planes of the YCbCr color space. SHSIM is used to measure the hue and structure similarity between the processed and original images in the HSV color space. We used 60 trial images to evaluate the color similarity indices. The results are compared with other color reproduction methods and presented in Table 12. It can be seen that the average MSSIM and SHSIM indices of our scheme are higher than the others, with a color difference close to 2.3. All these values indicate that the colorized images are very close to the original images.

Table 12: Color similarity assessment.
4.5. Color Enhancement Factor (CEF)

We have also evaluated our scheme in terms of color enhancement. Here, we have used a no-reference performance metric called the colorfulness metric (CM) [43]. The CM measurement is based on the means and standard deviations of a two-axis opponent color representation with α = R - G and β = (R + G)/2 - B. The metric is defined as

CM = sqrt(σα² + σβ²) + 0.3 × sqrt(μα² + μβ²)    (16)

where σα and σβ are the standard deviations of α and β, respectively, and μα and μβ are their means. In our comparison, we have used the ratio of CMs between the enhanced and original images as the color enhancement factor (CEF). If CEF < 1, then the original image is better than the enhanced image in terms of color enhancement; CEF = 1 indicates that there is no difference between the enhanced and original images in terms of color enhancement. The results are presented in Tables 13 and 14. Here we can see that the CEF values of the proposed method are the highest compared to the other enhancement methods, which indicates that our scheme performs better in terms of color enhancement. Figure 7 shows some reconstructed images.
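The colorfulness metric and the CEF ratio can be sketched as below, assuming the Hasler-Süsstrunk form of the metric with opponent axes α = R - G and β = (R + G)/2 - B (function names and sample data ours):

```python
from statistics import pstdev, mean
import math

def colorfulness(r, g, b):
    # Colorfulness over the opponent axes alpha = R - G and
    # beta = (R + G)/2 - B (Hasler-Susstrunk form, assumed here)
    alpha = [ri - gi for ri, gi in zip(r, g)]
    beta = [(ri + gi) / 2 - bi for ri, gi, bi in zip(r, g, b)]
    return (math.hypot(pstdev(alpha), pstdev(beta))
            + 0.3 * math.hypot(mean(alpha), mean(beta)))

def cef(original_rgb, enhanced_rgb):
    # Color enhancement factor: ratio of colorfulness values;
    # CEF > 1 means the enhanced image is more colorful
    return colorfulness(*enhanced_rgb) / colorfulness(*original_rgb)

dull = ([100, 110, 105], [100, 108, 104], [100, 109, 103])
vivid = ([200, 30, 150], [20, 180, 60], [90, 40, 220])
ratio = cef(dull, vivid)
```

A near-gray image yields a small CM and a saturated one a large CM, so their ratio exceeds 1, which is the direction of improvement reported in Tables 13 and 14.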

Table 13: Comparisons of CEF indices with other enhancement works.
Table 14: Comparisons of CEF indices with other color reproduction works.
Figure 7: Enhanced color images using different color reproduction algorithms. (a) References [23, 24]. (b) Reference [21]. (c) Reference [22]. (d) Proposed.
4.6. Algorithm Complexity

The time required to generate an enhanced color image for different image sizes using the proposed method and other related works [21–24] is shown in Table 15. The experiment was conducted on a PC with an Intel(R) Pentium(R) dual-core CPU @ 2.00 GHz and 6 GB of RAM. The proposed method is the fastest, including both image enhancement and color reproduction. For an image of n pixels, the proposed algorithm has linear computational time complexity, O(n). The average simulation time of the proposed method is approximately 22 seconds for the smaller test images and approximately 85 seconds for the larger ones. The works in [21, 23, 24] have significantly higher execution times than the proposed method. Although the execution time of [22] is lower than ours, the quality of its color reproduction is much worse, as shown in Figure 7.

Table 15: Comparison of simulation speed between proposed method and other related works.

5. Conclusion

In this paper, we have presented an image enhancement and color reproduction method for endoscopic images. The work focuses on enhancing the mucosa structures present in endoscopic images. The proposed color image enhancement is achieved in two stages: image enhancement at the gray level followed by space variant chrominance mapping color reproduction. Image enhancement is achieved in two steps: an adaptive sigmoid function and uniform distribution of sigmoid pixels. Space variant color reproduction is then performed by generating a real color map, transferring and modifying old chrominance values from either a theme image or the input image. The quality of the generated enhanced color images is evaluated using several standard performance metrics, which show that the desired features are highlighted in the processed images.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge Grand Challenges Canada Star in Global Health, Natural Science and Engineering Research Council of Canada (NSERC), Canada Foundation for Innovation (CFI), and Western Economic Diversification (WED) Canada for their support to this research work.

References

  1. A. Brownsey and J. Michalek, Wireless Capsule Endoscopy, The American Society for Gastrointestinal Endoscopy, 2010.
  2. A. Brownsey and J. Michalek, High Definition Scopes, Narrow Band Imaging, Chromoendoscopy, American Society for Gastrointestinal Endoscopy, 2010.
  3. H.-M. Chiu, C.-Y. Chang, C.-C. Chen et al., “A prospective comparative study of narrow-band imaging, chromoendoscopy, and conventional colonoscopy in the diagnosis of colorectal neoplasia,” Gut, vol. 56, no. 3, pp. 373–379, 2007.
  4. W. Curvers, L. Baak, R. Kiesslich et al., “Chromoendoscopy and narrow-band imaging compared with high-resolution magnification endoscopy in Barrett's Esophagus,” Gastroenterology, vol. 134, no. 3, pp. 670–679, 2008.
  5. S. Liangpunsakul, V. Chadalawada, D. K. Rex, D. Maglinte, and J. Lappas, “Wireless capsule endoscopy detects small bowel ulcers in patients with normal results from state of the art enteroclysis,” The American Journal of Gastroenterology, vol. 98, no. 6, pp. 1295–1298, 2003.
  6. K. Kuznetsov, R. Lambert, and J.-F. Rey, “Narrow-band imaging: potential and limitations,” Endoscopy, vol. 38, no. 1, pp. 76–81, 2006.
  7. S. Schmitz-Valckenberg, F. G. Holz, A. C. Bird, and R. F. Spaide, “Fundus autofluorescence imaging: review and perspectives,” Retina, vol. 28, no. 3, pp. 385–409, 2008.
  8. T. Kaltenbach, Y. Sano, S. Friedland, and R. Soetikno, “American Gastroenterological Association (AGA) Institute technology assessment on image-enhanced endoscopy,” Gastroenterology, vol. 134, no. 1, pp. 327–340, 2008.
  9. J. Pohl, A. May, T. Rabenstein, O. Pech, and C. Ell, “Computed virtual chromoendoscopy: a new tool for enhancing tissue surface structures,” Endoscopy, vol. 39, no. 1, pp. 80–83, 2007.
  10. S. J. Chung, D. Kim, J. H. Song et al., “Comparison of detection and miss rates of narrow band imaging, flexible spectral imaging chromoendoscopy and white light at screening colonoscopy: a randomised controlled back-to-back study,” Gut, vol. 63, no. 5, pp. 785–791, 2014.
  11. S. M. Pizer, E. P. Amburn, J. D. Austin et al., “Adaptive histogram equalization and its variations,” Computer Vision, Graphics, and Image Processing, vol. 39, no. 3, pp. 355–368, 1987.
  12. K. Zuiderveld, “Contrast limited adaptive histogram equalization,” in Graphics Gems IV, pp. 474–485, Academic Press Professional, 1994.
  13. R. Srivastava, J. R. P. Gupta, H. Parthasarthy, and S. Srivastava, “PDE based unsharp masking, crispening and high boost filtering of digital images,” Communications in Computer and Information Science, vol. 40, pp. 8–13, 2009.
  14. D. Sheet, H. Garud, A. Suveer, M. Mahadevappa, and J. Chatterjee, “Brightness preserving dynamic fuzzy histogram equalization,” IEEE Transactions on Consumer Electronics, vol. 56, no. 4, pp. 2475–2480, 2010.
  15. A. Carbonaro and P. Zingaretti, “A comprehensive approach to image-contrast enhancement,” in Proceedings of the 10th International Conference on Image Analysis and Processing (ICIAP '99), pp. 241–246, Venice, Italy, September 1999.
  16. H. Ibrahim and N. S. P. Kong, “Brightness preserving dynamic histogram equalization for image contrast enhancement,” IEEE Transactions on Consumer Electronics, vol. 53, no. 4, pp. 1752–1758, 2007.
  17. V. Magudeeswaran and C. G. Ravichandran, “Fuzzy logic-based histogram equalization for image contrast enhancement,” Mathematical Problems in Engineering, vol. 2013, Article ID 891864, 10 pages, 2013.
  18. M. S. Imtiaz and K. A. Wahid, “Image enhancement and space-variant color reproduction method for endoscopic images using adaptive sigmoid function,” in Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '14), pp. 3905–3908, August 2014.
  19. L.-R. Dung and Y.-Y. Wu, “A wireless narrowband imaging chip for capsule endoscope,” IEEE Transactions on Biomedical Circuits and Systems, vol. 4, no. 6, pp. 462–468, 2010.
  20. Olympus, “Evis Lucera Spectrum family brochure,” 2014, http://www.olympus.co.uk/medical/en/medical_systems/mediacentre/media_detail_7450.jsp.
  21. T. Welsh, M. Ashikhmin, and K. Mueller, “Transferring color to greyscale images,” ACM Transactions on Graphics, vol. 21, no. 3, pp. 277–280, 2002.
  22. V. Korostyshevskiy, Grayscale to RGB Converter, MATLAB Central File Exchange, 2006, http://www.mathworks.com/matlabcentral/fileexchange/13312-grayscale-to-rgb-converter.
  23. M. S. Imtiaz, T. H. Khan, and K. A. Wahid, “New color image enhancement method for endoscopic images,” in Proceedings of the 2nd International Conference on Advances in Electrical Engineering (ICAEE '13), pp. 263–266, December 2013.
  24. M. S. Imtiaz and K. Wahid, “A color reproduction method with image enhancement for endoscopic images,” in Proceedings of the 2nd Middle East Conference on Biomedical Engineering (MECBME '14), pp. 135–138, Doha, Qatar, February 2014.
  25. Saruchi, “Adaptive sigmoid function to enhance low contrast images,” International Journal of Computer Applications, vol. 55, no. 4, pp. 45–49, 2012.
  26. A. Plaza, J. A. Benediktsson, J. W. Boardman et al., “Recent advances in techniques for hyperspectral image processing,” Remote Sensing of Environment, vol. 113, no. 1, pp. S110–S122, 2009.
  27. N. Hassan and N. Akamatsu, “A new approach for contrast enhancement using sigmoid function,” The International Arab Journal of Information Technology, vol. 1, no. 2, pp. 221–225, 2004.
  28. Gastrolab—The Gastrointestinal Site, 1996, http://www.gastrolab.net/index.htm.
  29. Atlanta South Gastroenterology, Atlas of Gastrointestinal Endoscopy, Atlanta South Gastroenterology, 1996, http://www.endoatlas.com/index.html.
  30. A. Allen and D. Snary, “The structure and function of gastric mucus,” Gut, vol. 13, no. 8, pp. 666–672, 1972.
  31. R. Ishihara, T. Inoue, N. Hanaoka et al., “Autofluorescence imaging endoscopy for screening of esophageal squamous mucosal high-grade neoplasia: a phase II study,” Journal of Gastroenterology and Hepatology, vol. 27, no. 1, pp. 86–90, 2012.
  32. B. Lovisa, Endoscopic fluorescence imaging: spectral optimization and in vivo characterization of positive sites by magnifying vascular imaging [Ph.D. thesis], École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 2010.
  33. B. A. Wandell and L. D. Silverstein, “Digital color reproduction,” in The Science of Color, S. K. Shevell, Ed., Optical Society of America, Washington, DC, USA, 2nd edition, 2003.
  34. J. A. Yule, G. G. Field, and J. A. C. Yule, “Color reproduction,” in Color: An Introduction to Practice and Principles, pp. 167–186, John Wiley & Sons, New York, NY, USA, 2012.
  35. X. Xu, Y. Wang, J. Tang, X. Zhang, and X. Liu, “Robust automatic focus algorithm for low contrast images using a new contrast measure,” Sensors, vol. 11, no. 9, pp. 8281–8294, 2011.
  36. C.-H. Shen and H. H. Chen, “Robust focus measure for low-contrast images,” in Proceedings of the International Conference on Consumer Electronics (ICCE '06), pp. 69–70, January 2006.
  37. B. Balas, L. Nakano, and R. Rosenholtz, “A summary-statistic representation in peripheral vision explains visual crowding,” Journal of Vision, vol. 9, no. 12, pp. 1–18, 2009.
  38. C. D. Manning and H. Schütze, Foundations of Statistical Natural Language Processing, MIT Press, Cambridge, Mass, USA, 1st edition, 1999.
  39. H. Ney, S. Martin, and F. Wessel, “Statistical language modeling using leaving-one-out,” in Corpus-Based Methods in Language and Speech Processing, vol. 2 of Text, Speech and Language Technology, pp. 174–207, Springer, Berlin, Germany, 1997.
  40. A. R. Robertson, “Historical development of CIE recommended color difference equation,” Color Research & Application, vol. 15, no. 3, pp. 167–170, 1990.
  41. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  42. Y. Shi, Y. Ding, R. Zhang, and J. Li, “Structure and hue similarity for color image quality assessment,” in Proceedings of the International Conference on Electronic Computer Technology (ICECT '09), pp. 329–333, February 2009.
  43. S. E. Süsstrunk and S. Winkler, “Color image quality on the Internet,” in 5th IS&T/SPIE Electronic Imaging 2004: Internet Imaging, pp. 118–131, International Society for Optics and Photonics, San Jose, Calif, USA, January 2004.