Mathematical Problems in Engineering / 2018 / Research Article | Open Access

Thao Nguyen-Trang, "A New Efficient Approach to Detect Skin in Color Image Using Bayesian Classifier and Connected Component Algorithm", Mathematical Problems in Engineering, vol. 2018, Article ID 5754604, 10 pages, 2018. https://doi.org/10.1155/2018/5754604

A New Efficient Approach to Detect Skin in Color Image Using Bayesian Classifier and Connected Component Algorithm

Academic Editor: Alberto Olivares
Received: 10 Feb 2018
Revised: 08 Jul 2018
Accepted: 16 Jul 2018
Published: 06 Aug 2018

Abstract

Skin detection is an interesting problem in image processing and an important preprocessing step for further techniques such as face detection, objectionable image detection, etc. However, its performance has not been high because of the high degree of overlap between "skin" and "nonskin" pixels. This paper proposes a new approach to improve skin detection performance using the Bayesian classifier and the connected component algorithm. Specifically, the Bayesian classifier is utilized to identify "true skin" pixels using a first posterior probability threshold, which is close to 1, and to identify "skin candidate" pixels using a second posterior probability threshold. Subsequently, the connected component algorithm is used to find all the connected components containing the "skin candidate" pixels. Based on the fact that a skin pixel often connects with other skin pixels in an image, all pixels in a connected component are classified as "skin" if there is at least one "true skin" pixel in that connected component. This means that "nonskin" pixels whose color is similar to skin are correctly classified as "nonskin" when their posterior probabilities are lower than the first threshold and they do not connect with any "true skin" pixel. This idea helps to improve skin classification performance, especially the false positive rate.

1. Introduction

Skin detection identifies the presence of human skin in a digital image by converting the original image to a binary image in which "1" represents a "skin" pixel and "0" represents a "nonskin" pixel. It is an interesting problem in its own right as well as an important preprocessing step for further techniques such as face detection, hand gesture detection, semantic filtering of web contents, etc. [1–4].

So far, two major groups of methods have been developed for solving this problem, using either color or texture features [5]. In comparison to texture-based skin detection, color-based skin detection has been studied more widely, and most state-of-the-art skin detection algorithms are color-based [6]. The majority of color-based skin detection algorithms rest on two issues: (i) the color space and (ii) the classification method. For (i), many color spaces such as RGB, HSV, YCbCr, YIQ, YUV, etc. [6–13] have been successfully applied to the skin detection problem. Some studies concluded that skin detection performance can be improved by using two different color spaces together [14–16]. Following these studies, this paper also applies a combined color space, RGBUV, which was proven effective in [14], to the skin detection problem. For (ii), to classify whether a pixel is skin or not, most previous studies focused on two groups of methods: thresholding and machine learning. The thresholding method defines a fixed boundary between the "skin" and "nonskin" regions. If the color of a pixel falls into the "skin" region, it is classified as "skin" and vice versa. Studies that applied the thresholding method to skin detection include [14, 17–20]. In short, the thresholding method has the advantage of being simple and easy to understand; however, it is mainly based on subjective experience and performs poorly when the thresholds are incorrectly tuned [1, 21]. The machine learning method detects "skin" pixels by building a predictive model from the input data. Such models, including the Bayesian classifier, linear discriminant analysis, binary logistic regression, the adaptive neuro-fuzzy inference system, etc., have been successfully applied to skin detection [7, 22–26].
Among them, the Bayesian classifier is especially noteworthy, not only in the field of skin detection but also in other disciplines, because it provides the probability that an observation belongs to a class, thereby allowing the reliability of the result to be evaluated [27–29]. However, the Bayesian classifier, like other methods, still suffers from low performance, especially a high false detection rate (the percentage of nonskin pixels classified as skin). The main causes of this low performance and high false detection rate are confusing backgrounds, skin-like noise, and the variation of skin color with age, sex, race, and body part [14, 30]. Figure 1 shows the distribution of "skin" and "nonskin" pixels in a particular image, using the U and V color channels. In Figure 1, the green points, the red points, and the black region represent the skin pixels, the nonskin pixels, and the skin region established by the Bayesian classifier, respectively. It can be seen that the nonskin pixels are numerous and overlap with the skin pixels. Obviously, the skin region built by the Bayesian classifier is not robust enough to detect all skin pixels; this region even contains numerous nonskin pixels and yields a high false positive rate. Therefore, a new efficient method that can detect most of the skin pixels while reducing false positives is needed for the skin detection problem.

The main contribution of this paper is a new approach to skin detection using the Bayesian classifier and the connected component algorithm. First, the Bayesian classifier is used to compute the posterior probability that a pixel belongs to the skin class. Normally, the Bayesian classifier assigns a pixel to the skin class if its posterior probability is larger than 0.5. This leads to a high false positive rate because of the high degree of overlap between the two regions, as illustrated in Figure 1. In the proposed method, a high posterior probability threshold, denoted $T_1$, is utilized so that we can identify the "true skin" pixels and decrease the false positive rate as much as possible. The Bayesian classifier, in addition to finding the "true skin" pixels, also finds "skin candidate" pixels through a second posterior probability threshold, denoted $T_2$. Next, the connected component algorithm is utilized to find all connected components containing the "skin candidate" pixels. Under the assumption that a skin pixel connects to other skin pixels, the connected components that contain "true skin" pixels are classified as skin and vice versa. Obviously, this condition requires a skin candidate pixel to connect with at least one "true skin" pixel. Confusing background and skin-like noise pixels that do not satisfy the condition will therefore be classified as nonskin, improving the classification performance, especially in terms of false positive rate.

The remainder of this article is organized as follows. Section 2 presents the preliminary explanations of the Bayesian classifier and connected component algorithm. The proposed method is introduced in Section 3 and illustrated and applied in Section 4. Section 5 is the conclusion.

2. Preliminary Explanations

2.1. Bayesian Classifier

We consider $k$ classes, $w_1, w_2, \ldots, w_k$, with prior probabilities $q_i$, $i = 1, 2, \ldots, k$, and $n$-dimensional continuous data with $x$ being a specific sample. According to [31, 32], a new observation $x$ belongs to the class $w_i$ if and only if

$$P(w_i \mid x) = \max_{j = 1, \ldots, k} P(w_j \mid x).$$

In the continuous case, $P(w_j \mid x)$ is calculated by

$$P(w_j \mid x) = \frac{q_j f_j(x)}{\sum_{l=1}^{k} q_l f_l(x)}.$$

Because the denominator $\sum_{l=1}^{k} q_l f_l(x)$ is the same for all classes, the classification rule is

$$x \in w_i \iff q_i f_i(x) = \max_{j = 1, \ldots, k} q_j f_j(x).$$

Here $q_j$ is the prior probability of class $w_j$, and $f_j(x)$ is the probability density function of class $w_j$.

In the case of two classes, as in the skin detection problem, the new observation $x$ belongs to the class $w_1$ if and only if $q_1 f_1(x) \ge q_2 f_2(x)$, or equivalently $P(w_1 \mid x) \ge 0.5$, and vice versa.
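As a concrete illustration of the two-class rule above, the following Python sketch computes the posterior $P(w_1 \mid x)$ for a one-dimensional observation with Gaussian class-conditional densities. The priors and the Gaussian parameters here are illustrative assumptions, not the fitted skin model of this paper.

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of N(mean, std^2) at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def posterior_skin(x, q_skin=0.5, q_nonskin=0.5,
                   skin=(120.0, 15.0), nonskin=(80.0, 40.0)):
    """P(skin | x) = q1 f1(x) / (q1 f1(x) + q2 f2(x)).

    The priors and the (mean, std) pairs for each class are made-up
    illustration values, not parameters estimated from real data.
    """
    num = q_skin * gaussian_pdf(x, *skin)
    den = num + q_nonskin * gaussian_pdf(x, *nonskin)
    return num / den

# Classify as skin if the posterior exceeds 0.5 (the standard Bayes rule).
p = posterior_skin(115.0)
label = "skin" if p > 0.5 else "nonskin"
```

Because the denominator cancels in comparisons, classifying by the largest $q_j f_j(x)$ gives the same decision as comparing posteriors; computing the posterior explicitly is what later lets the method threshold at values other than 0.5.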

2.2. Connected Component Algorithm

When processing binary images, we often wish to group the pixels with value 1 into maximally connected regions. These regions are called the connected components of the binary image. Mathematically, two pixels $p$ and $q$ belong to the same connected component if there is a sequence of pixels $p_0, p_1, \ldots, p_m$, all with value 1, such that $p_0 = p$, $p_m = q$, and $p_i$ is a neighbor of $p_{i-1}$ for $i = 1, \ldots, m$, where the neighbors are defined using either 4-connectivity or 8-connectivity as shown in Figure 2.

This paper applies the connected component algorithm of [33], which consists of two stages with a left-to-right, top-to-bottom scan order. In the first stage, the algorithm assigns a new label to the first pixel of each component and attempts to propagate the label of a pixel to its neighbors to the right and below it. This process is illustrated in Figures 3(a), 3(b), and 3(c). Figure 3(a) presents the binary image under consideration. In the first row, two pixels with value 1 are separated by three pixels with value 0. Therefore, the first pixel is assigned label 1 and the second pixel is assigned label 2 (labels are shown in red to distinguish them from pixel values). In the second row, the first pixel with value 1 is labeled 1 because it has a neighbor labeled 1. In the same manner, the second pixel with value 1 is assigned label 2. This process is repeated until the last pixel is assigned a label. In the case of pixel A, the pixel has two neighbors with different labels; we assign the smaller label to pixel A (label 1) and record the remaining label as an "equivalent label". At the end of stage 1, we obtain Figure 3(c). In stage 2, the pixels carrying an equivalent label are resolved: each equivalent label is replaced by the representative label of its equivalence class. In the end, we obtain the final connected components as in Figure 3(d). For more details of the algorithm, please refer to [33].
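The two-stage procedure above can be sketched as follows. This is an illustrative two-pass labeling with 4-connectivity and a small union-find structure to record and resolve label equivalences; it follows the spirit of [33] but is not the paper's exact implementation.

```python
def label_components(image):
    """Label the 4-connected components of a binary image (list of 0/1 rows)."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}  # union-find forest over provisional labels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)  # keep the smaller label

    next_label = 1
    # Pass 1: left-to-right, top-to-bottom; propagate labels from the
    # neighbors above and to the left, recording equivalences at merges.
    for i in range(rows):
        for j in range(cols):
            if image[i][j] != 1:
                continue
            up = labels[i - 1][j] if i > 0 else 0
            left = labels[i][j - 1] if j > 0 else 0
            if up == 0 and left == 0:
                labels[i][j] = next_label
                parent[next_label] = next_label
                next_label += 1
            elif up and left:
                labels[i][j] = min(up, left)
                union(up, left)  # "equivalent label" recorded here
            else:
                labels[i][j] = up or left
    # Pass 2: replace each provisional label by its equivalence-class root.
    for i in range(rows):
        for j in range(cols):
            if labels[i][j]:
                labels[i][j] = find(labels[i][j])
    return labels
```

Running this on the two-row example described in the text merges the two provisional labels once the components meet, exactly as in the pixel-A case.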

3. The Proposed Method

3.1. Preprocessing

For building the Bayesian model and computing the posterior probability, the "Skin Segmentation" dataset, downloaded from https://archive.ics.uci.edu/ml/datasets/skin+segmentation, is used as the training set. The dataset comprises 50,859 skin and 194,198 nonskin samples. The available features are the pixel values in the B, G, and R channels. As mentioned earlier, the RGBUV color space is used in this paper; hence, for building the training set, we have to compute the U and V values using the following formula.
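A common form of this conversion is the standard RGB-to-YUV chrominance transform. The sketch below uses BT.601-style coefficients, which are an assumption here and may differ slightly from the exact formula of [14].

```python
def rgb_to_uv(r, g, b):
    """U and V chrominance components for 8-bit R, G, B values.

    Standard YUV coefficients (ITU-R BT.601 style); the paper's exact
    coefficients, taken from [14], may differ slightly.
    """
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return u, v

# Build one RGBUV feature vector for a training pixel.
r, g, b = 200, 150, 130
u, v = rgb_to_uv(r, g, b)
feature = [r, g, b, u, v]
```

Note that gray pixels (R = G = B) map to U = V = 0, since U and V encode only chrominance; this is what makes the combined RGBUV space informative beyond RGB alone.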

3.2. The Proposed Method

Let $x = (R, G, B, U, V)$ be a vector containing the pixel values in the R, G, B, U, and V channels. We need to classify whether $x$ is skin or not. For this purpose, the new method is proposed, involving the following steps.

Step 1. Compute the posterior probability that the pixel belongs to the skin class, $P(\text{skin} \mid x)$, using Bayes' theorem.
(i) If $P(\text{skin} \mid x) \ge T_1$, then the pixel is labeled as "true skin".
(ii) If $P(\text{skin} \mid x) \ge T_2$, then the pixel is labeled as "skin candidate".
Here $T_1$ and $T_2$ are the first and second posterior probability thresholds, with $T_1 > T_2$.

Step 2. Find all connected components containing “skin candidate” pixels.

Step 3. Classify the pixels with the following rule: if the connected component contains at least one “true skin” pixel, then all pixels belonging to that component are classified as “skin” and vice versa.

In the above algorithm, in order to keep the false positive rate low, we choose $T_1 = 0.997$. For $T_2$, the detection rate of the proposed method is equal to or less than that of the Bayesian classifier if $T_2 \ge 0.5$. Therefore, a value of $T_2$ slightly less than 0.5 will increase the detection rate of the algorithm. The effect of the thresholds $T_1$ and $T_2$ on classification performance is discussed in more detail in Section 4.1.
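Steps 1-3 can be sketched as follows, assuming the posterior map $P(\text{skin} \mid x)$ has already been computed for every pixel. A breadth-first flood fill stands in for the two-pass algorithm of [33], and the default thresholds are an assumption based on the values examined in Section 4.

```python
from collections import deque

def detect_skin(posterior, t1=0.997, t2=0.475):
    """Return a boolean skin mask from a 2-D list of posterior probabilities.

    t1, t2: the "true skin" and "skin candidate" thresholds (default values
    follow Section 4 of the paper; treat them as tunable).
    """
    rows, cols = len(posterior), len(posterior[0])
    true_skin = [[posterior[i][j] >= t1 for j in range(cols)] for i in range(rows)]
    candidate = [[posterior[i][j] >= t2 for j in range(cols)] for i in range(rows)]
    skin = [[False] * cols for _ in range(rows)]
    visited = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if not candidate[i][j] or visited[i][j]:
                continue
            # Step 2: gather one 8-connected component of candidate pixels.
            component, queue = [], deque([(i, j)])
            visited[i][j] = True
            while queue:
                y, x = queue.popleft()
                component.append((y, x))
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and candidate[ny][nx] and not visited[ny][nx]):
                            visited[ny][nx] = True
                            queue.append((ny, nx))
            # Step 3: keep the component only if it holds a "true skin" pixel.
            if any(true_skin[y][x] for y, x in component):
                for y, x in component:
                    skin[y][x] = True
    return skin
```

Candidate components with no "true skin" pixel (typically skin-like background noise) are discarded wholesale, which is exactly how the method suppresses false positives.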

4. Numerical Example

This section presents two examples to demonstrate the effectiveness of the proposed algorithm. Specifically, Example 1 describes in detail how the new method works on an image taken from the Pratheepan.FacePhoto dataset [30]. This example also surveys the effect of the threshold values $T_1$ and $T_2$. In Example 2, the output binary images and the performance, measured by accuracy, detection rate, and false positive rate, of the proposed method on the whole Pratheepan.FacePhoto dataset are presented and compared with those of other methods: the Bayesian classifier (BC), linear discriminant analysis (LDA), binary logistic regression (BLR), and the adaptive neuro-fuzzy inference system (ANFIS). The detailed results are as follows.

4.1. Example 1

To illustrate the proposed method and clarify the effect of the threshold values on classification performance, this subsection performs an experiment on an image downloaded from http://cs-chan.com/downloads_skin_dataset.html. We first use different thresholds to find "true skin" pixels. Figures 4(a) and 4(b) present the original image and the output binary image obtained at the posterior threshold of 0.5, respectively. This is also the posterior threshold used by the standard Bayesian classifier. It can be seen that this threshold detects most skin pixels within the face region but incorrectly classifies the "nonskin" pixels located in the hair and background as "skin", thereby incurring a high false positive rate. As can be observed from Figures 4(c) to 4(f), the false positive rate is reduced as the posterior probability threshold $T_1$ increases. At the threshold of 0.997, the false positive rate is very low, with few misclassified pixels occurring in the background. Even though the detection rate is also reduced, since the algorithm fails to detect the skin pixels in the nose region, below the eyes, and near the brows, we accept the current result and expect that such skin pixels will be restored later using the connected component algorithm.

Let us now consider another illustration in which the U and V channels of the current image are extracted. Figures 5(a) and 5(b) show the skin and nonskin pixels taken from the ground truth and the "skin region" built by the Bayesian classifier with the thresholds of 0.5 and 0.997, respectively. With the posterior probability threshold of 0.5, despite defining a larger skin region, which gives the Bayesian classifier a higher detection rate, the black circle (the skin region built by the Bayesian classifier) contains many red points, i.e., nonskin pixels, thereby increasing the false positive rate. With the posterior probability threshold of 0.997, the established skin region is smaller, but virtually all points that fall in this circle are skin pixels. As a result, the number of false positive pixels is reduced; we accept the skin region established with the posterior probability threshold of 0.997 and enlarge it in the next step using the connected components of skin candidate pixels.

In the next step, the threshold $T_2$ is utilized to find the "skin candidate" pixels. For the sake of clarity, we first use a fixed threshold $T_2$. Figure 6 illustrates the image after finding the "skin candidate" pixels. Note that a "true skin" pixel identified above is also a "skin candidate" pixel; hence, for purposes of distinction, a pixel that is both a "skin candidate" and "true skin" is shown in white, a pixel that is only a "skin candidate" is shown in gray, and a nonskin pixel is shown in black. As observed in Figure 6, the false negative pixels in the nose region, below the eyes, and near the brows, which were misclassified in the previous step, are now skin candidate pixels. These skin candidate pixels mostly connect to "true skin" pixels; as a result, they are classified as "skin" via the connected component algorithm. In contrast, most "skin candidate" pixels in the background do not connect to any "true skin" pixel and are classified as "nonskin".

The final results are presented in Figure 7. It can be seen that the proposed method, as well as the Bayesian classifier, can well detect the skin in the human face. However, the proposed method removes most of the pixels incorrectly detected by the Bayesian classifier in the hair and background, as shown in Figure 7(b). This is a reasonable output image which reduces the false positive rate and leads to a better accuracy, significantly.

Regarding the problem of threshold selection, the effects of the thresholds $T_1$ and $T_2$ on the performance, measured by accuracy, detection rate, and false positive rate, were investigated on a large number of images with known ground truth. The detailed results for the investigated thresholds are presented in Tables 1 and 2. The results are reasonable, since lower threshold values provide a better detection rate but a worse false positive rate. We therefore use the accuracy to balance the detection rate and the false positive rate. In that case, $T_1 = 0.997$ and $T_2 = 0.475$ can be considered suitable thresholds.


Threshold $T_1$    Accuracy    Detection rate    False positive
0.9                0.8186      0.8181            0.1812
0.99               0.8186      0.8080            0.1777
0.994              0.8196      0.8080            0.1762
0.997              0.8220      0.8027            0.1711


Threshold $T_2$    Accuracy    Detection rate    False positive
0.2                0.8215      0.8137            0.1757
0.25               0.8221      0.8088            0.1732
0.3                0.8220      0.8027            0.1711
0.35               0.8222      0.7984            0.1693
0.4                0.8224      0.7941            0.1676
0.425              0.8224      0.7920            0.1668
0.45               0.8225      0.7899            0.1659
0.475              0.8227      0.7878            0.1649
0.5                0.8224      0.7845            0.1641

4.2. Example 2

In this section, we examine whether the proposed method improves the classification performance. In particular, the results, including accuracy, detection rate, and false positive rate, of the proposed method on the whole Pratheepan.FacePhoto dataset are presented and compared with those of the Bayesian classifier (BC), linear discriminant analysis (LDA), binary logistic regression (BLR), and the adaptive neuro-fuzzy inference system (ANFIS). For illustration purposes, some selected original and output binary images of the compared methods are presented in Figure 8. The accuracy, detection rate, and false positive rate on the whole image dataset are summarized in Table 3.


Method                 Accuracy    Detection rate    False positive rate
The proposed method    0.8227      0.7878            0.1649
BC                     0.8191      0.7994            0.1739
LDA                    0.7507      0.7727            0.2571
BLR                    0.7710      0.6738            0.1944
ANFIS                  0.7881      0.8740            0.2424

Regarding the detection rate, the correctly detected skin pixels accounted for 78.78% of the true skin pixels. It can be seen from Table 3 that the proposed method is competitive, ranking third among the listed methods. The best method in terms of detection rate is ANFIS, with a detection rate of over 87%. However, ANFIS and the other methods still incorrectly detect many false positive pixels, whereas the proposed method removes most false positive pixels in the background, as observed in Figure 8. The proposed method therefore outperforms the others in terms of a lower false positive rate and a higher accuracy of approximately 82%.

5. Conclusion

This paper has proposed a new approach to detect skin in color image using the Bayesian classifier and connected component algorithm. The illustrative examples have also been presented in detail. The results have shown that the proposed method is competitive in terms of detection rate and outperforms the others in terms of false positive rate and accuracy. In the future, the proposed method can be further studied for other applications, like face detection, objectionable image detection, etc.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. S. Bianco, F. Gasparini, and R. Schettini, "Computational strategies for skin detection," in Computational Color Imaging, pp. 199–211, 2013.
  2. H. Khaled, S. G. Sayed, E. S. M. Saad, and H. Ali, "Hand gesture recognition using modified 1$ and background subtraction algorithms," Mathematical Problems in Engineering, pp. 1–8, 2015.
  3. S.-H. Kim, H.-S. Lee, and H.-H. Kim, "Robust extraction of face candidate through segmentation and conditional merging in skin area," in Proceedings of the 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, ICIS 2009, pp. 547–551, China, November 2009.
  4. H.-J. Lin, S.-Y. Wang, S.-H. Yen, and Y.-T. Kao, "Face detection based on skin color segmentation and neural network," in Proceedings of the 2005 International Conference on Neural Networks and Brain, ICNNB'05, pp. 1144–1149, China, October 2005.
  5. W. Kelly, A. Donnellan, and D. Molloy, "Screening for objectionable images: A review of skin detection techniques," in Proceedings of the 2008 International Machine Vision and Image Processing Conference, pp. 151–158, September 2008.
  6. E. Hassan, A. R. Hilal, and O. Basir, "Using GA to optimize the explicitly defined skin regions for human skin-color detection," in Proceedings of the 30th IEEE Canadian Conference on Electrical and Computer Engineering, CCECE 2017, pp. 1–4, 2017.
  7. B. Binias, M. Frąckiewicz, K. Jaskot, and H. Palus, "Pixel classification for skin detection in color images," in Advanced Technologies in Practical Applications for National Security, vol. 106, pp. 87–99, Springer International Publishing, Cham, Switzerland, 2018.
  8. E. Cuevas, D. Zaldivar, and R. Rojas, Fuzzy Segmentation Applied to Face Segmentation, 2004.
  9. P. Kakumanu, S. Makrogiannis, and N. Bourbakis, "A survey of skin-color modeling and detection methods," Pattern Recognition, vol. 40, no. 3, pp. 1106–1122, 2007.
  10. J. Kovac, P. Peer, and F. Solina, "Human skin color clustering for face detection," Computer as a Tool, pp. 144–148, 2003.
  11. C. N. R. Kumar and A. Bindu, "An efficient skin illumination compensation model for efficient face detection," in Proceedings of the IECON 2006 - 32nd Annual Conference on IEEE Industrial Electronics, pp. 3444–3449, France, November 2006.
  12. D. Lyon and N. Vincent, "Interactive embedded face recognition," Journal of Object Technology, vol. 8, pp. 1–32, 2009.
  13. C. Prema and D. Manimegalai, "Survey on skin tone detection using color spaces," International Journal of Applied Information Systems, vol. 2, pp. 18–26.
  14. Z. H. Al-Tairi, R. W. Rahma, M. I. Saripan, and P. S. Sulaiman, "Skin segmentation using YUV and RGB color spaces," Journal of Information Processing Systems, vol. 10, no. 2, pp. 283–299, 2014.
  15. G. Gomez, M. Sanchez, and L. Enrique Sucar, "On selecting an appropriate colour space for skin detection," in Proceedings of the Mexican International Conference on Artificial Intelligence, pp. 69–78, Springer, Berlin, Germany, 2002.
  16. F. H. Xiang and S. A. Suandi, "Fusion of multi color space for human skin region segmentation," International Journal of Information and Electronics Engineering, vol. 3, pp. 172–174, 2013.
  17. K. H. B. Ghazali, J. Ma, and R. Xiao, "An innovative face detection based on skin color segmentation," International Journal of Computer Applications, vol. 34, pp. 6–10, 2011.
  18. A. S. Ghotkar and G. K. Kharate, "Hand segmentation techniques to hand gesture recognition for natural human computer interaction," International Journal of Human Computer Interaction, vol. 3, pp. 15–25, 2012.
  19. R. M. Jusoh, N. Hamzah, H. Marhaban, and N. M. A. Alias, "Skin detection based on thresholding in RGB and hue component," in Proceedings of the 2010 IEEE Symposium on Industrial Electronics and Applications (ISIEA), pp. 515–517, 2010.
  20. K. Sobottka and I. Pitas, "A novel method for automatic face segmentation, facial feature extraction and tracking," Signal Processing: Image Communication, vol. 12, no. 3, pp. 263–281, 1998.
  21. P. Yogarajah, J. Condell, K. Curran, P. McKevitt, and A. Cheddad, "A dynamic threshold approach for skin tone detection in colour images," International Journal of Biometrics, vol. 4, no. 1, pp. 38–55, 2012.
  22. N. Friedman, D. Geiger, and M. Goldszmidt, "Bayesian network classifiers," Machine Learning, vol. 29, no. 2-3, pp. 131–163, 1997.
  23. M. J. Jones and J. M. Rehg, "Statistical color models with application to skin detection," International Journal of Computer Vision, vol. 46, no. 1, pp. 81–96, 2002.
  24. G. Osman and M. S. Hitam, "Skin colour classification using linear discriminant analysis and colour mapping co-occurrence matrix," in Proceedings of the 2013 International Conference on Computer Applications Technology, ICCAT 2013, pp. 1–5, Tunisia, January 2013.
  25. N. Sebe, I. Cohen, T. S. Huang, and T. Gevers, "Skin detection: A Bayesian network approach," in Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004, pp. 903–906, UK, August 2004.
  26. A. A. Zaidan, H. A. Karim, N. N. Ahmad, G. M. Alam, and B. B. Zaidan, "A new hybrid module for skin detector using fuzzy inference system structure and explicit rules," International Journal of Physical Sciences, vol. 5, no. 13, pp. 2084–2097, 2010.
  27. P. Addesso, F. Capodici, G. D'Urso et al., "Enhancing TIR image resolution via Bayesian smoothing for IRRISAT irrigation management project," in Remote Sensing for Agriculture, Ecosystems, and Hydrology XV, p. 888710, 2013.
  28. M. Castellaro, G. Rizzo, M. Tonietto et al., "A variational Bayesian inference method for parametric imaging of PET data," NeuroImage, vol. 150, pp. 136–149, 2017.
  29. T. Vovan, "Classifying by Bayesian method and some applications," in Bayesian Inference, pp. 39–61, InTech, 2017.
  30. W. R. Tan, C. S. Chan, P. Yogarajah, and J. Condell, "A fusion approach for efficient human skin detection," IEEE Transactions on Industrial Informatics, vol. 8, no. 1, pp. 138–147, 2012.
  31. T. Nguyen-Trang and T. Vo-Van, "A new approach for determining the prior probabilities in the classification problem by Bayesian method," Advances in Data Analysis and Classification, vol. 11, no. 3, pp. 629–643, 2017.
  32. T. Pham-Gia, N. Turkkan, and T. Vovan, "Statistical discrimination analysis using the maximum function," Communications in Statistics—Simulation and Computation, vol. 37, pp. 320–336, 2008.
  33. L. Shapiro and R. Haralick, Computer and Robot Vision, Addison-Wesley, Reading, MA, 1992.

Copyright © 2018 Thao Nguyen-Trang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

