International Journal of Optics

Special Issue: Photonics Applications in Biomedicine and Human Safety

Research Article | Open Access

Volume 2020 | Article ID 1519205 | https://doi.org/10.1155/2020/1519205

Chih-Huang Yen, Pin-Yuan Huang, Po-Kai Yang, "An Intelligent Model for Facial Skin Colour Detection", International Journal of Optics, vol. 2020, Article ID 1519205, 8 pages, 2020. https://doi.org/10.1155/2020/1519205

An Intelligent Model for Facial Skin Colour Detection

Guest Editor: Cheng-Mu Tsai
Received: 30 Oct 2019
Revised: 29 Jan 2020
Accepted: 03 Feb 2020
Published: 17 Mar 2020

Abstract

Research on facial colour is scarce; the choice of cosmetics, for example, is usually driven by fashion or impulse purchasing rather than by an informed assessment of one's facial colour. Facial colour can also serve as an indicator for health monitoring and disease prevention. This research proposes an intelligent skin-colour collection method based on facial recognition. First, colour photographs of faces are processed with FACE++ to locate the face in each image, and skin-colour sampling points are derived from the detected facial features. The authors developed a Skin Colour Extractor (SCE) program to collect facial colour from each photograph and hypothesized that a minimal set of sampling points suffices for efficient calculation. Second, this hypothesis was tested with the Taguchi method of quality improvement, which identified an optimized set of six sampling points whose average represents the facial skin colour. Using the standard deviation of a Gaussian distribution and the CIEDE2000 colour-difference formula, an optimized program, FaceRGB, was constructed. The approach can be applied to cosmetics purchasing and, once big data are incorporated, extended to group-level facial analysis. The intelligent model captures skin colour quickly and efficiently and lays the groundwork for future fashion applications driven by big data.

1. Introduction

Many studies on skin colour focus on face recognition or on determining the typology of people [1]. The cosmetics market, however, needs to help people find the right make-up colours for different conditions, and determining a person's skin colour remains a substantial problem in cosmetics research. The main purpose of this study is to use females as an example and determine colour modes with an innovative feature-extraction method.

1.1. Skin Color Collection

Much software can pick colours from the screen, from simple tools such as Just Color Picker, ColorPic, and ColorSPY to professional graphics software such as Photoshop; all can sample colours from images and web pages. Tools such as Just Color Picker and ColorPic support colour codes including HTML, RGB, HEX, HSB/HSV, HSL, HSL(255), and HSL(240) and even provide simple palette tools for composing desired colours manually. Photoshop conveys dye absorption graphically and offers colour-filling functions, so collected colours are digitized and users can quickly obtain the reference colour they want. Quick and convenient as these tools are, they do not necessarily collect colour precisely: most of them sample single pixels, so the chosen value does not reflect the large-scale colour impression a viewer gets from the image, and a picked colour is not necessarily representative of that image. Hsiao et al. proposed a fuzzy relation matrix calculation program that reduces the colours of an image and selects a representative colour for an area, a concept closely related to sampling image colour [1].

Soriano et al. showed that skin appears as different colours under different environments; they recorded the skin-colour locus with a digital camera and represented the range of skin colours in a colour space. The placement of skin-colour sampling points is affected by the distance at which the human eye views the skin in an image, since what the eye perceives is an averaged, even skin colour [4].

1.2. Outline of This Study

This research focuses on facial skin colour and extends work on face identification. It supposes that face detection in an image can use a minimum number of colour points as representative skin-colour samples; through the skin-colour model and the Taguchi method, the number of points can be reduced to six while remaining representative. The approach supports the accumulation and calculation of large amounts of data in the future and can serve as the basis for big-data analysis and for building an expert system for human skin colour. Figure 1 describes the research structure and process.

2. Literature Review

2.1. RGB and CIELAB Conversions

Since RGB colour models are device-dependent, there is no simple formula for converting between RGB values and CIELAB. The RGB values must first be transformed into a specific absolute colour space; this transformation is device-dependent, but the resulting values are device-independent. Once a device-dependent RGB colour space has been characterized in this way, it becomes device-independent. The conversion between sRGB and CIE XYZ involves a linear transformation, performed by a matrix multiplication; as equations (1) and (2) show, these linear RGB values are not the final result, since they have not yet been gamma-corrected. sRGB was designed to reflect a typical real-world monitor with a gamma of 2.2, and the following formula transforms linear RGB values into sRGB. Let Clinear be Rlinear, Glinear, or Blinear and Csrgb be Rsrgb, Gsrgb, or Bsrgb; the sRGB component values Rsrgb, Gsrgb, and Bsrgb are in the range 0 to 1 (values in the range 0 to 255 can simply be divided by 255.0):

Csrgb = 12.92 · Clinear,                       if Clinear ≤ 0.0031308,
Csrgb = 1.055 · Clinear^(1/2.4) − 0.055,       otherwise,

where C is R, G, or B.

It is followed by a matrix multiplication of the linear values to get XYZ:

[X]   [0.4124  0.3576  0.1805] [Rlinear]
[Y] = [0.2126  0.7152  0.0722] [Glinear]
[Z]   [0.0193  0.1192  0.9505] [Blinear]

These gamma-corrected values are in the range 0 to 1. If values in the range 0 to 255 are required, the values are usually clipped to the 0 to 1 range. This clipping can be done before or after this gamma calculation [5].
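As a concrete illustration, the piecewise gamma function and the linear-RGB-to-XYZ matrix above can be sketched in Python (a minimal sketch using the standard IEC 61966-2-1 sRGB/D65 coefficients; the function names are ours, not from the paper):

```python
# Sketch of the sRGB <-> linear RGB gamma transfer and the linear-RGB -> CIE XYZ
# matrix described above (IEC 61966-2-1 coefficients, D65 white point).

def srgb_to_linear(c):
    """Invert the sRGB gamma encoding; c is a component in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    """Convert 8-bit sRGB to CIE XYZ via linear RGB."""
    rl, gl, bl = (srgb_to_linear(v / 255.0) for v in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

# White (255, 255, 255) maps close to the D65 white point,
# approximately (0.9505, 1.0000, 1.0890).
print(srgb_to_xyz(255, 255, 255))
```

The forward (encode) direction of equation (2) is the piecewise inverse of `srgb_to_linear`.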

2.2. Taguchi Method

The Taguchi method is used to make the designed product to have stable quality and small fluctuation and makes the production process insensitive to every kind of noise. In the product design process, it uses relations of quality, cost, and profit to develop high-quality product under condition of low cost. The Taguchi method thinks the profit of product development can use internal profit of enterprise and social loss to measure, enterprise internal profit indicates low cost under condition with the same functions, and social profit uses effect on human after product entering consumption field as the measurement index. This research uses the Taguchi method, and its main aim is to find out the optimization of skin colour point because point distribution has many probabilities, and it can find out the optimal point model through calculation of the Taguchi method.

Taguchi’s designs aimed to allow greater understanding of variation than a lot of the traditional designs from the analysis of variance. Taguchi contended that conventional sampling is inadequate here as there is no way of obtaining a random sample of future conditions. In Fisher’s design of experiments and analysis of variance, experiments aim to reduce the influence of nuisance factors to allow comparisons of the mean treatment effects [6]. Variation becomes even more central in Taguchi’s thinking. The Taguchi approach provides more complete interaction information than typical fractional factorial designs that its adherents claim. Followers of Taguchi argue that the designs offer rapid results and that interactions can be eliminated by proper choice of quality characteristics. However, a “confirmation experiment” offers protection against any residual interactions. If the quality characteristic represents the energy transformation of the system, then the “likelihood” of control factor-by-control factor interactions is greatly reduced, since “energy” is “additive” [7].

2.3. Ellipsoid Skin-Colour Model

Zeng and Luo studied the luminance dependence of the human skin-colour cluster shape in the Lab colour space; the cluster of skin colours may be approximated by an elliptical shape [8]. Let X1, …, Xn be the distinctive colours (vectors with two or three coordinates) of a skin-colour training data set and f(Xi) = fi (i = 1, …, n) be the occurrence count of colour Xi. An elliptical boundary model is defined as

Φ(X) = (X − Ψ)^T Λ^(−1) (X − Ψ),

where Ψ and Λ are given by

Ψ = (1/n) Σ_{i=1}^{n} Xi,   Λ = (1/N) Σ_{i=1}^{n} fi (Xi − μ)(Xi − μ)^T,

where N = Σ_{i=1}^{n} fi is the total number of occurrences in the training data set and μ = (1/N) Σ_{i=1}^{n} fi Xi is the mean of the colour vectors. To account for the lightness dependency of the shape of the skin cluster, the cluster of skin colours in a lightness-chrominance colour space may be modeled with an ellipsoid. In a three-dimensional (3D) colour space, X is expressed as X = (x1, x2, x3)^T and Φ(X) is represented in matrix form.

Expanding Φ(X) in equation (3) with this matrix form and comparing terms reorganizes the model (equations (8)–(12) of the derivation) into a quadratic ellipsoid function in x1, x2, and x3 whose coefficients are combinations of the entries of Λ^(−1) and Ψ.
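The fitting of Ψ, Λ, and the evaluation of Φ(X) described in this section can be sketched with NumPy (the function names and the toy training colours below are ours for illustration, not data from the paper; a colour X is classified as skin when Φ(X) falls below a chosen threshold θ):

```python
# Sketch of the elliptical boundary model Phi(X) = (X - Psi)^T Lambda^{-1} (X - Psi).
import numpy as np

def fit_elliptical_model(colours, counts):
    """colours: (n, d) distinct colour vectors; counts: occurrence count f_i of each."""
    colours = np.asarray(colours, float)
    counts = np.asarray(counts, float)
    N = counts.sum()                                # total occurrences
    psi = colours.mean(axis=0)                      # unweighted mean of distinct colours
    mu = (counts[:, None] * colours).sum(0) / N     # count-weighted mean
    diff = colours - mu
    lam = (counts[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(0) / N
    return psi, np.linalg.inv(lam)

def phi(x, psi, lam_inv):
    """Evaluate the elliptical boundary function at colour x."""
    d = np.asarray(x, float) - psi
    return float(d @ lam_inv @ d)

# Toy 2-D example: four symmetric training colours, each seen once.
psi, lam_inv = fit_elliptical_model([[1, 0], [0, 1], [-1, 0], [0, -1]], [1, 1, 1, 1])
print(phi([0, 0], psi, lam_inv), phi([1, 0], psi, lam_inv))  # 0.0 at the centre
```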

3. Implementation Method

On the basis of face identification, the method sets relative positions from the detected feature points, applies the skin-colour ellipse model and a CNN-based face detector, and uses a Java program to implement a skin colour extractor, abbreviated SCE.

3.1. The Instruction Operation for Skin Colour Extractor (SCE)

Figure 2 shows the operating instructions for SCE. Click (1) to open and insert a file; the image then appears in the display area (2), with its file name shown at (3). Clicking the Detect Feature button (4) in the button area automatically draws blue lines on the image, indicating that colour points on the right cheek, left cheek, chin, and forehead have been detected. The required number of points can be entered at Need (6) in the corner, and finally Generate Result (7) is pressed. As Figure 3 shows, the red collection points (8) for the input value are marked, the skin-colour ellipse is displayed at (9), the L illumination can be entered at (10) to observe its changes, and the sampled skin-colour RGB and Lab values are stored in an Office Excel file.

3.2. Taguchi Method Finds Optimization

Figure 4(a) shows the completed detection and positioning of the face and the point-to-point connection lines. Divisions are made at the right eye corner, left eye corner, middle of the right eyebrow, left eyebrow, left mouth corner, and right mouth corner. Figure 4(b) marks the area in which skin-colour points can be generated, and Figure 4(c) shows the possible colour points. Every inserted photo in this research is 300 × 600 pixels; along a connection line, the maximum pull-up radian gives 50 points, the straight point-to-point connection gives 25 points, and the minimum pull-up radian gives 3 points. The radian directions are represented by −1, 0, and 1, respectively. For example, the minimum pull-up position between the right and left mouth corners would reach the chin and its shadows; since that radian direction cannot form an area that conforms to skin colour, it need not be included in the calculation.

The Taguchi method is applied to find the optimal distribution of input points for SCE. As Figure 5 shows, the study divides the face into 4 blocks: the forehead, left cheek, right cheek, and chin. Appropriate factors are chosen as design levels: each block has two factors (radian and number of points), giving 8 factors at 3 levels in total, so the L18 orthogonal array is chosen.

Based on the 4 defined areas, Table 1 shows the control-factor table that clarifies all parameters; the tests then follow the Taguchi method. Using the orthogonality of the array reduces the number of tests from 4,374 to 18, greatly simplifying the experiment. For each run the S/N ratio, standard deviation, and average are calculated and entered into the table. Through the Taguchi S/N transformation, which makes the test data conform to the additive model, the level effect of every factor is calculated; the experimental results are presented in Table 2 and the factor reaction table of the quality characteristic in Table 3.


Table 1: Levels of the control factors.

Area     | Factor     | Level 1 | Level 2 | Level 3
Chin     | A (radian) | −1      | 0       | -
Chin     | B (points) | 3       | 25      | 50
R-cheek  | C (radian) | −1      | 0       | +1
R-cheek  | D (points) | 3       | 25      | 50
L-cheek  | E (radian) | −1      | 0       | +1
L-cheek  | F (points) | 3       | 25      | 50
Forehead | G (radian) | −1      | 0       | +1
Forehead | H (points) | 3       | 25      | 50


Table 2: L18 experimental results (P1–P3 are the three replicates).

Exp | A B C D E F G H | P1   | P2   | P3   | Ave.  | S     | S/N
1   | 1 1 1 1 1 1 1 1 | 98   | 97.5 | 98   | 97.83 | 0.289 | 50.6
2   | 1 1 2 2 2 2 2 2 | 94   | 93.8 | 94.2 | 94    | 0.2   | 53.4
3   | 1 1 3 3 3 3 3 3 | 95   | 94.5 | 96.2 | 95.23 | 0.874 | 40.7
4   | 1 2 1 1 2 2 3 3 | 96.7 | 96   | 95   | 95.9  | 0.854 | 41
5   | 1 2 2 2 3 3 1 1 | 94   | 95   | 94   | 94.33 | 0.577 | 44.3
6   | 1 2 3 3 1 1 2 2 | 94.4 | 94   | 94.5 | 94.3  | 0.265 | 51
7   | 1 3 1 2 1 3 2 3 | 96.2 | 96   | 96.5 | 96.23 | 0.252 | 51.7
8   | 1 3 2 3 2 1 3 1 | 96   | 96.5 | 96   | 96.17 | 0.289 | 50.5
9   | 1 3 3 1 3 2 1 2 | 96   | 96   | 95   | 95.67 | 0.577 | 44.4
10  | 2 1 1 3 3 2 2 1 | 95.4 | 95.4 | 96.7 | 95.83 | 0.751 | 42.1
11  | 2 1 2 1 1 3 3 2 | 95   | 94.5 | 95.2 | 94.9  | 0.361 | 48.4
12  | 2 1 3 2 2 1 1 3 | 96   | 95.1 | 96   | 95.7  | 0.52  | 45.3
13  | 2 2 1 2 3 1 3 2 | 95.4 | 94   | 96.1 | 95.17 | 1.069 | 39
14  | 2 2 2 3 1 2 1 3 | 96   | 94   | 95   | 95    | 1     | 39.6
15  | 2 2 3 1 2 3 2 1 | 95.1 | 96.7 | 96   | 95.93 | 0.802 | 41.6
16  | 2 3 1 3 2 3 1 2 | 95   | 94.5 | 95.8 | 95.1  | 0.656 | 43.2
17  | 2 3 2 1 3 1 2 3 | 94.5 | 96.6 | 96.5 | 95.87 | 1.185 | 38.2
18  | 2 3 3 2 1 2 3 1 | 96.7 | 94.5 | 96   | 95.73 | 1.124 | 38.6
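The Ave., S, and S/N columns of Table 2 are consistent with the nominal-the-best signal-to-noise ratio S/N = 10·log10(ȳ²/s²). A short sketch (the function name is ours) reproduces experiment 1:

```python
# Reproduce the Ave., S, and S/N columns of Table 2 from the replicates P1-P3
# using the nominal-the-best S/N ratio: S/N = 10 * log10(ybar^2 / s^2).
import math
import statistics

def taguchi_sn_nominal(values):
    """Return (mean, sample std dev, nominal-the-best S/N in dB) of the replicates."""
    ybar = statistics.mean(values)
    s = statistics.stdev(values)          # sample standard deviation (n - 1)
    return ybar, s, 10 * math.log10(ybar ** 2 / s ** 2)

ybar, s, sn = taguchi_sn_nominal([98, 97.5, 98])    # experiment 1 in Table 2
print(round(ybar, 2), round(s, 3), round(sn, 1))    # 97.83 0.289 50.6
```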

From the reaction table the effect of each factor on the quality characteristic is clear: because the S/N ratio has an additive character, the optimal level of every factor can be read directly from the table. First, the Significant row of Table 3 marks B, D, F, and H as "yes", meaning these factors have a real effect; the optimal combination of collection factors is {B, F, H, D}, and the order of factor significance for the quality characteristic is B > F > H > D (see the factor reaction results in Tables 2 and 3). Factor B ranks first, with effects ranging from −4.0 to 1.7; the rounded width of this range is about 6, so the optimal number of collection points is 6.


Table 3: Factor reaction table (S/N).

            | A    | B    | C    | D    | E    | F    | G    | H
Level 1     | 44.0 | 46.8 | 44.6 | 44.0 | 46.6 | 46.0 | 44.6 | 46.0
Level 2     | 41.8 | 42.7 | 45.7 | 42.0 | 45.8 | 43.2 | 44.0 | 44.0
Level 3     | -    | 44.4 | 43.6 | 41.0 | 45.0 | 42.0 | 43.0 | 42.7
E1-2        | −2.2 | −4.0 | 1.1  | −2.0 | −0.8 | −2.8 | −0.6 | −2.0
E2-3        | -    | 1.7  | −2.1 | −1.0 | −0.8 | −1.2 | −1.0 | −1.3
Range       | 2.2  | 5.7  | 2.2  | 3.0  | 1.6  | 4.0  | 1.6  | 3.3
Rank        | 5    | 1    | 6    | 4    | 7    | 2    | 8    | 3
Significant | No   | Yes  | No   | Yes  | No   | Yes  | No   | Yes
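The level rows of Table 3 can be reproduced by averaging the S/N values of Table 2 over the six runs at each factor level. A sketch for factor B (the level assignments are taken from Table 2's B column; small differences from Table 3 stem from its rounded S/N values):

```python
# Recompute the Table 3 level means for factor B from the 18 S/N values of Table 2.
sn = [50.6, 53.4, 40.7, 41, 44.3, 51, 51.7, 50.5, 44.4,
      42.1, 48.4, 45.3, 39, 39.6, 41.6, 43.2, 38.2, 38.6]
b_level = [1, 1, 1, 2, 2, 2, 3, 3, 3,
           1, 1, 1, 2, 2, 2, 3, 3, 3]   # B column of the L18 array (Table 2)

# Mean S/N over the six runs at each level of B.
means = {lvl: sum(s for s, l in zip(sn, b_level) if l == lvl) / 6 for lvl in (1, 2, 3)}

# Level means are about 46.8, 42.8, 44.4; Table 3 lists 46.8, 42.7, 44.4,
# the small gap coming from the rounding of the published S/N column.
print({k: round(v, 2) for k, v in means.items()})
```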

4. Result and Discussion

The analysis shows that the order of factor importance changes with the quality characteristic, mainly because the Taguchi method optimizes a single quality characteristic at a time; this result is used as the basis for program correction, enabling the program developed in this research to calculate the optimized skin-colour collection quickly.

4.1. Verification for 6 Points to Detect Facial Color

In the course of the study, the workflow of typical image-processing software (e.g., Photoshop and CorelDraw) is assumed to follow the steps in Figure 6. In this step-by-step procedure, Figure 6(a) shows that the file has been read. Figure 6(b) shows the background cut away so that the face shape is fully isolated; the user could then capture the skin colour manually, with most positions chosen by intuition. Figure 6(c) therefore marks the 6 points on the face, and the detected colour values are shown beside each point in Figure 6(d). The average of the 6 values is then calculated as shown in Figure 6(e). This flow chart is the foundation of the FaceRGB program.
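The Figure 6 flow reduces to sampling six facial points and averaging their colour values. A minimal sketch (the pixel values below are illustrative, not measurements from the study):

```python
# Average the RGB values sampled at the six facial points (Figure 6(d) -> 6(e)).

def average_colour(samples):
    """samples: list of (R, G, B) tuples taken at the six facial points."""
    n = len(samples)
    return tuple(round(sum(c[i] for c in samples) / n) for i in range(3))

# Six illustrative skin-tone samples (one per detected point).
six_points = [(224, 172, 140), (220, 168, 138), (226, 175, 142),
              (218, 166, 136), (223, 171, 139), (221, 170, 137)]
print(average_colour(six_points))  # -> (222, 170, 139)
```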

Some of the six points may fall on hair or shadow: they lie within the range of identified colour values but differ in brightness. For efficient debugging, outliers are therefore removed not only by a limit value but also by the standard deviation under a Gaussian-distribution assumption. Figure 7 describes how an outlier among the six points is defined and found.

4.2. FaceRGB Program

The procedure of the FaceRGB program is as follows:
(1) Calculate the Faceskin data: the average distance of all points to FaceLABavg (ΔEavg) and the standard deviation σ.
(2) A point is an outlier when its distance to FaceLABavg exceeds ΔEavg + 2σ.
(3) Distances are computed with the CIEDE2000 colour-difference formula of equation (13) [9]:

ΔE00 = sqrt((ΔL′/(kL·SL))² + (ΔC′/(kC·SC))² + (ΔH′/(kH·SH))² + RT·(ΔC′/(kC·SC))·(ΔH′/(kH·SH)))

(4) Delete the outliers from the six points.
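The outlier rule above can be sketched as follows; for brevity this sketch measures distance with the Euclidean (CIE76) Lab difference rather than the full CIEDE2000 formula, and the sample points are invented for illustration:

```python
# Sketch of the FaceRGB outlier rule: drop a sampled point when its colour
# difference from the mean face colour exceeds the average distance plus 2 sigma.
# Uses Euclidean Lab distance (CIE76) instead of CIEDE2000 for brevity.
import math

def remove_outliers(lab_points):
    n = len(lab_points)
    mean = [sum(p[i] for p in lab_points) / n for i in range(3)]  # FaceLABavg
    dists = [math.dist(p, mean) for p in lab_points]              # distance of each point
    d_avg = sum(dists) / n                                        # average distance
    sigma = math.sqrt(sum((d - d_avg) ** 2 for d in dists) / n)   # std dev of distances
    limit = d_avg + 2 * sigma                                     # outlier threshold
    return [p for p, d in zip(lab_points, dists) if d <= limit]

# Five skin-like Lab samples plus one dark (hair/shadow) sample:
points = [(65, 18, 20), (66, 17, 21), (64, 18, 19),
          (65, 19, 20), (66, 18, 20), (30, 5, 8)]
print(len(remove_outliers(points)))  # -> 5 (the dark sample is removed)
```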

The FaceRGB program is described as follows:
(1) Open the program; the title indicates FaceRGB.
(2) When a file has been read, the image appears in the picture window; this covers both big-data reads and single operations, with the instant synchronization status presented in the window.
(3) Spreadsheet progress bars show how far a long-running job has progressed toward its result.
(4) For big data, the program creates four computation channels and processes the huge data set concurrently; Figure 8 shows the program at work.
(5) An option allows a single image to be read and input only once.
(6) The single-image processing result includes the colour and its RGB and LAB values; Figure 9 presents an example.
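The "four computation channels" of step (4) can be sketched with a worker pool; `process_image` below is a hypothetical stand-in for FaceRGB's per-image pipeline, since the paper does not publish its source:

```python
# Sketch of batch processing over four concurrent channels, as in step (4).
from concurrent.futures import ThreadPoolExecutor

def process_image(path):
    """Hypothetical stand-in: detect the face, sample six points, average,
    and remove outliers; returns (path, representative RGB)."""
    return path, (222, 170, 139)   # placeholder result for illustration

def run_batch(paths):
    # Four channels working on the data set at the same time.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_image, paths))

print(run_batch(["a.jpg", "b.jpg"]))
```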

4.3. Conclusion

This research created programs to detect facial colour. They can handle huge amounts of data and even complicated cases with the intelligent method, and the colour-selection method can accumulate data for large-scale calculation, so skin-colour trends can be derived from the collected data. The purpose of this study is to propose a model and procedure for such investigation; the process matters more than any single result. The study further anticipates that this expert system could be applied to big data and the IoT (Internet of Things) in the future.

Users will obtain their skin colour and its location on the face, which can help them select colours that match their skin and make it easier for females to find their skin-colour grouping. Furthermore, combined with colour harmony and applied aesthetics, each result can feed fashion trends in cosmetics. The expert system can be implemented to develop colour cosmetics in the future. Finally, if the system is applied in the make-up market, it will deliver considerable contribution and value.

Data Availability

The optimization data (through the Taguchi Method) used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to The Institute of Minnan Culture Innovation Design & Technology Research, Minnan Normal University, for supporting this research under Grant no. 2018ZDJ03010004.

References

  1. S.-W. Hsiao and C.-J. Tsai, “Transforming the natural colors of an image into product design: a computer-aided color planning system based on fuzzy pattern recognition,” Color Research & Application, vol. 40, no. 6, pp. 612–625, 2015.
  2. S.-W. Hsiao, “A systematic method for color planning in product design,” Color Research & Application, vol. 20, no. 3, pp. 191–205, 1995.
  3. S. W. Hsiao and M. S. Chang, “A semantic recognition-based approach for car’s concept design,” International Journal of Vehicle Design, vol. 18, pp. 53–82, 1997.
  4. M. Soriano, B. Martinkauppi, S. Huovinen, and M. Laaksonen, “Adaptive skin color modeling using the skin locus for selecting training pixels,” Pattern Recognition, vol. 36, no. 3, pp. 681–690, 2003.
  5. P. Green and L. MacDonald, Colour Engineering: Achieving Device Independent Colour, John Wiley & Sons, Hoboken, NJ, USA, 2002.
  6. R. H. Hardin and N. J. A. Sloane, “A new approach to the construction of optimal designs,” Journal of Statistical Planning and Inference, vol. 37, no. 3, pp. 339–369, 1993.
  7. F. Pukelsheim, Optimal Design of Experiments, SIAM, Philadelphia, PA, USA, 2006.
  8. H. Zeng and R. Luo, “Skin color modeling of digital photographic images,” Journal of Imaging Science and Technology, vol. 55, no. 3, Article ID 030201, 2011.
  9. G. Sharma, W. Wu, and E. N. Dalal, “The CIEDE2000 color-difference formula: implementation notes, supplementary test data, and mathematical observations,” Color Research & Application, vol. 30, no. 1, pp. 21–30, 2005.

Copyright © 2020 Chih-Huang Yen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

