International Journal of Photoenergy
Volume 2014, Article ID 309578, 12 pages
http://dx.doi.org/10.1155/2014/309578
Research Article

On-Road Driver Monitoring System Based on a Solar-Powered In-Vehicle Embedded Platform

1Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
2Department of Energy and Refrigerating Air-Conditioning Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
3Department of Electrical Engineering, Fu Jen Catholic University, New Taipei City 24205, Taiwan

Received 16 May 2014; Accepted 16 June 2014; Published 3 July 2014

Academic Editor: Ching-Song Jwo

Copyright © 2014 Yen-Lin Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This study presents an on-road driver monitoring system, which is implemented on a stand-alone in-vehicle embedded system and powered by solar cells. The driver monitoring function is performed by an efficient eye detection technique. Through the driver's eye movements captured by the camera, the attentive state of the driver can be determined and fatigue states can be avoided. This driver monitoring technique is implemented on a low-power embedded in-vehicle platform. In addition, this study also proposes a monitoring mechanism that detects the brightness around the car to effectively determine whether this in-vehicle system is driven by the solar cells or by the vehicle battery. On sunny days, the in-vehicle system can be powered by the solar cells without drawing on the vehicle battery. In the evenings or on rainy days, when the ambient solar brightness is insufficient, the system is powered by the vehicle battery. The proposed system was tested under solar irradiance of 10 to 113 W/m² and ambient brightness values of 10 to 170. The testing results show that when the outside solar radiation is high, the brightness inside the car increases, and the eye detection accuracy increases as well. Therefore, this solar-powered driver monitoring system can be efficiently applied to electric cars to save energy and promote driving safety.

1. Introduction

Economic development has caused large amounts of energy to be consumed and petrochemical reserves to be heavily depleted, resulting in the energy crisis. To reduce petrochemical energy consumption by vehicles, electric cars have arisen. To achieve a longer driving range, the car battery power must be managed carefully. The in-vehicle instruments in electric cars also consume electric energy and may reduce the energy available for driving. Moreover, many traffic accidents occur because of inattentive and fatigued driving. Thus, an effective monitoring system for drivers' attentive states is also very important for the development of electric cars. Therefore, solar energy can be adopted to drive the in-vehicle embedded computing platform that conducts the driver monitoring functions, so that both reduced electric energy consumption and improved driving safety can be effectively achieved.

Photovoltaic (PV) technology is one of the most important renewable sources of energy generation. Since the photovoltaic effect was first recognized in 1839 [1], there have been many research works on the performance of PV. However, efficiency improvement and cost reduction of PV technology still require considerable effort. Solar cells based on crystalline silicon (c-Si) are known as first-generation solar cell materials [2]. From the standpoints of cost, performance, and processability, new advanced materials such as amorphous silicon (a-Si), cadmium telluride (CdTe), and copper indium gallium diselenide (CIGS) are applied in the second and third generations of solar cells. Typical conversion efficiencies of first-generation technologies are currently 15% to 20%, whereas those of second-generation technologies are currently 7% to 15% [3].

Photovoltaic systems can be categorized as stand-alone photovoltaic systems and grid-connected photovoltaic systems [4]. Stand-alone systems do not supply power to the grid. Such systems vary widely in size and application, from consumer electronics to remote buildings. Chien et al. [5] presented experimental investigations on absorption refrigerators driven by solar cells. The system was tested under varying solar irradiance ranging from 550 to 700 W/m², with 500 mL of ambient-temperature water as the cooling load. After 160 minutes, the refrigerator could maintain the temperature at 5 to 8°C. Huang et al. [6] presented a solar LED street lighting system using constant-power and dimming control. The test results showed that the power of 18 W and 100 W LED luminaires can be controlled accurately with an error of 2–5%.

To achieve monitoring of the driver's fatigue state, human body features and actions should be efficiently analyzed. Human body feature sensing and detection currently play important roles in many application areas, such as human surveillance and human-computer interaction [7–9]. In recent research studies, many human action detection techniques aimed at detecting different human body parts have been developed for human-computer interaction applications, such as whole-body motions [10–12], faces, and eye gazes [13–23]. Among these body parts, the faces and eye gazes play the most important roles in interpreting and understanding a person's attentions, intentions, and needs in human communications and interactions, especially for driver state monitoring. In this sense, tracking of faces and eye gazes provides the information necessary to obtain the car driver's attentive states.

Moreover, the computational power and flexibility of in-vehicle embedded systems have recently increased significantly because of the development of system-on-chip technologies and the advanced computing power of newly released portable devices. Thus, performing real-time vision-based eye detection and tracking in driver assistance and monitoring applications has become feasible on modern embedded platforms. This paper presents an on-road driver monitoring system that uses a solar-cell-powered in-vehicle embedded platform. The system detects the driver's attentive state and uses the brightness of the environment to determine whether it is powered by the battery or by the solar cells. On sunny days, the system can be driven by the solar cells, conserving the vehicle's finite battery capacity.

This study presents an on-road driver monitoring system, which is implemented on an in-vehicle embedded system and driven by solar cells. The driver monitoring function is performed by an efficient eye detection technique. Through the driver's eye movements captured by the camera, the attentive state of the driver can be determined and fatigue states can be avoided. This driver monitoring technique is implemented on a low-power embedded in-vehicle platform. In addition, this study also proposes a monitoring mechanism that detects the brightness around the car to effectively determine whether this in-vehicle system is driven by the solar cells or by the vehicle batteries. On sunny days, the in-vehicle system can be powered by the solar cells without drawing on the vehicle battery. In the evenings or on rainy days, when the ambient solar brightness is insufficient, the system is powered by the vehicle battery. The experimental results demonstrate the efficiency of the proposed solar-powered driver monitoring system in energy savings and driving safety for electric cars.

2. Brightness Detection and Power Source Determination

In the proposed system, we adopt a camera, a pyranometer, and a power meter to obtain information on the solar irradiance and the power generation efficiency of the solar cells via analytical results of the images captured outside the car. First, the RGB (red, green, and blue) color model of the captured image sequences of the road environment is adopted to compute the brightness value, denoted as V. Then, the brightness value V can be adopted to determine the ambient brightness conditions with respect to the strength of the solar irradiance. The V value ranges from 0 to 255. The brightness value can be computed as follows:

V = max(R, G, B). (1)

Equation (1) indicates that the V value is the maximum of the R, G, and B color values. We can use the brightness value as a threshold to determine the strength of the solar irradiance. When the V value is lower than the threshold, the system conducts two operations: changing the power source from the solar cells to the vehicle battery, and turning on the infrared light sources to detect the fatigue state of the driver more accurately. When the V value is higher than the threshold, the system conducts two operations: changing the power source from the battery to the solar cells, and turning off the infrared light sources, because the ambient lighting condition is sufficient for detecting the driver's state. In the proposed system, the brightness threshold is experimentally determined as 130, as depicted in Figure 14.
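The brightness-based power source switching described above can be sketched in a few lines. This is an illustrative example rather than the authors' implementation: the function names, the averaging of the per-pixel V values over the frame, and the returned (source, infrared) pair are assumptions; only the max(R, G, B) computation and the threshold of 130 come from the text.

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 130  # experimentally determined value from the paper

def brightness(rgb_frame):
    """V value of a frame: per-pixel max of R, G, B (the HSV 'value'),
    averaged over the frame to rate the ambient brightness."""
    v = rgb_frame.max(axis=2)          # per-pixel max over the color channels
    return float(v.mean())

def select_power_source(v):
    """Return (power source, infrared lights on?) for a measured brightness V."""
    if v >= BRIGHTNESS_THRESHOLD:
        return "solar", False          # ample sunlight: solar cells, IR off
    return "battery", True             # dim scene: vehicle battery, IR on

bright = np.full((480, 640, 3), 200, dtype=np.uint8)   # a uniformly bright frame
dark = np.full((480, 640, 3), 40, dtype=np.uint8)      # a dim frame
```

In a deployed system the decision would be smoothed over several frames to avoid rapid switching near the threshold.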

3. Driver Monitoring by Eye Detection Methods

To monitor the driver's attentive state, the driver's eyesight provides important information. This study presents a classification-based approach to detect and analyze the driver's eyes to determine the attentive state. A fast face and eye detection process based on a boosted classification approach is presented to detect and locate the regions of faces and eyes in the image sequences captured by a video camera. In this detection process, we improve and optimize the boosted classification method to provide the computational efficiency necessary for embedded and portable devices.

3.1. Classification-Based Methods

Here each weak classifier stage selects a combination of features computed from the integral image, which stores integrals over the subregions contained in the region, as depicted in Figure 1. The features utilized in the proposed system are a variety of Haar-like features [24], which are reminiscent of Haar wavelets and reflect human visual responses, such as linear, center-surround, and diagonal directional responses, as shown in Figure 2.

Figure 1: The integral image.
Figure 2: The Haar-like feature prototypes.

In this manner, a boosted cascade of strong classifiers can be constructed, where each stage is a strong classifier with a suitable threshold for determining whether the input image contains target objects, as shown in Figure 3. Each stage classifier is trained and performed by analyzing a combination of features obtained from the integral image, thus detecting most target objects while screening out a portion of the uninteresting objects that were accepted by previous classifier stages.

Figure 3: The cascaded classifier.
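The integral image and a Haar-like feature of Figures 1 and 2 can be sketched as follows. This is a minimal illustration assuming a standard summed-area table and a simple horizontal two-rectangle feature; the function names are illustrative, and the paper's actual detectors are trained boosted cascades of many such features.

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[0:y, 0:x]; zero-padded so region sums need no bounds checks
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_sum(ii, x, y, w, h):
    # sum over the w-by-h rectangle with top-left corner (x, y), via 4 lookups
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    # horizontal two-rectangle Haar-like feature: left half minus right half
    half = w // 2
    return region_sum(ii, x, y, half, h) - region_sum(ii, x + half, y, half, h)

img = np.arange(16, dtype=np.int64).reshape(4, 4)  # tiny test image
ii = integral_image(img)
```

The four-lookup rectangle sum is what makes evaluating thousands of features per window cheap enough for real-time cascades.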
3.2. Face Detection

Using the boosted classification techniques, the faces in the grabbed image sequences can be detected and located via the trained boosted cascade detectors. The proposed system adopts a fixed pixel resolution for the face training patterns.

Because the proposed system is designed mainly for interactive user interface applications, the distances between the camera and the face are mostly within 25 cm to 60 cm. Therefore, to perform real-time face detection efficiently with the cascade detectors, the proposed system adopts scaling factors of the resolutions for the image pyramids [21] on the face images, as depicted in Table 1. The proposed system adopts scale factors of 2–7 on the face images to perform fast multiscale detection via the boosted cascade detectors.

Table 1: The scaling factors of the resolutions used by the image pyramids [21].

To run the system under different environments with varying ambient lighting conditions, histogram equalization is applied to the scaled face images to obtain uniform gray-level distributions, as shown in Figure 4. The boosted cascade detectors are then applied to the histogram-equalized images to detect faces using the Haar-like features, as depicted in Figure 5.

Figure 4: The histogram equalized facial images.
Figure 5: The example of applying the boosted cascade detectors with the Haar-like features on images.
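The histogram equalization step of Figure 4 can be sketched with the standard cumulative-distribution remapping; this is a textbook formulation rather than the paper's exact routine, and it assumes 8-bit gray images with at least two distinct gray levels.

```python
import numpy as np

def equalize_histogram(gray):
    """Map gray levels through the normalized CDF so that the output
    levels are spread (approximately uniformly) over 0..255."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # first nonzero CDF entry
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)    # lookup table: old level -> new level
    return lut[gray]

# a low-contrast image using only gray levels 100..109
gray = np.tile(np.arange(100, 110, dtype=np.uint8), (10, 1))
out = equalize_histogram(gray)                     # stretched to the full 0..255 range
```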

As a result, after the faces in an image are detected, their positions and sizes are recorded for the subsequent face and eye detection processes. Based on the positions and sizes of the currently detected faces, the search for faces in subsequent image frames is performed within areas twice the width and height of the currently detected faces, so the computational cost of face detection in the following image frames can be effectively reduced.
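The doubled search window might look like the following sketch; the centering of the enlarged window on the detected face and the clamping to the frame boundary are assumptions of this illustration.

```python
def tracking_roi(face, frame_w, frame_h):
    """Search window for the next frame: the detected face box grown to
    twice its width and height, centered on the face and clamped to the frame."""
    x, y, w, h = face
    nx = max(0, x - w // 2)            # shift left by half a face width
    ny = max(0, y - h // 2)            # shift up by half a face height
    nw = min(frame_w - nx, 2 * w)      # doubled width, clipped at the right edge
    nh = min(frame_h - ny, 2 * h)      # doubled height, clipped at the bottom edge
    return nx, ny, nw, nh
```

Searching only this window instead of the full frame cuts the per-frame detection cost roughly by the ratio of the window area to the frame area.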

3.3. Eye Detection

Given the face positions, the eyes can be effectively detected within the regions of interest specified by the facial areas. In addition, to save computational cost in the real-time detection process of the boosted classification techniques, the face detection process on the whole image is performed only when the eyes have not been detected in the previous frame. Otherwise, if the face or eyes have already been detected in the previous frame, the proposed system performs the eye detection process within adaptive regions of interest (ROIs) according to the locations and areas of the face and eyes detected in previous frames.

In this way, the proposed system adopts two types of regions of interest for eye detection, corresponding to two situations: the face has been detected, or the eyes have been detected, in the previous frames [20]. In the first situation, the face has just been detected in the previous frame, as depicted in Figure 6, and the ROIs for detecting the left eye and the right eye are determined as follows.

Figure 6: Illustration of the ROIs for detecting left and right eyes based on the detected face region [20].

The ROI starting position for detecting the left eye is obtained by

(x_L, y_L) = (x_F, y_F). (2)

The ROI starting position for detecting the right eye is determined by

(x_R, y_R) = (x_F + W_F/2, y_F), (3)

where x_F and y_F denote the x and y coordinates of the top-left position of the detected face region, respectively, and W_F and H_F represent the width and height of the detected face region, respectively.

Here the width and height of the eye detection ROIs are computed by

W_ROI = W_F/2, H_ROI = H_F/2. (4)

In the second situation, the eyes have already been detected in some previous frames, and then the eyes in the subsequent frames can be searched and detected based on their regions in previous frames.

Based on the detected eye region, which includes the eye and the eyebrow, and the displacement of the driver's eyes, the ROIs for detecting the left and right eyes in the current frame can be determined by extending the detected eye regions to twice their corresponding widths and heights. As illustrated in Figure 7, the red rectangular regions are the ROIs for detecting the left and right eyes in the current frame, while the green rectangular regions are the eye regions detected in the previous frame.

Figure 7: Illustration of the ROIs for detecting left and right eyes based on the detected eye regions.

Accordingly, the boosted classifier is applied to the eye ROIs of the two types described above to obtain the left and right eye regions in the image sequences. Given the eye regions, the proposed system then extracts the eyeballs and eyelids for further state determination. To obtain the positions of the eyeballs efficiently, the central regions of the detected eye regions, which mostly contain the eyeballs of interest, are first extracted and transformed into gray-intensity images.

Then, to separate the eyeball blob pixels from other uninteresting object pixels with different illumination features under various ambient lighting conditions, an effective automatic thresholding technique is needed to adaptively segment the salient eyeball blob pixels of interest. Using the properties of discriminant analysis, our previous research presented an effective automatic multilevel thresholding technique for image segmentation [25]. With this technique, the pixel regions of the eyeball blobs can be appropriately separated from the other uninteresting objects contained in the detected eye regions. Then, to locate the eyeball blobs in the extracted bright-object plane, a connected-component extraction process [26] is performed on the pixel regions of the eyeball blobs to label and locate the connected components. Locating the connected components reveals the meaningful features of position, area, and pattern associated with each eyeball blob. Figure 8 illustrates the extraction process of the eyeball regions in the detected left and right eye regions. As depicted in Figure 9, after the automatic thresholding and connected-component extraction processes are performed on the eyeball regions, the eyeball blobs and their features are obtained for further processing.

Figure 8: Illustration of extracting the eyeball regions in the detected eye regions.
Figure 9: The located eyeballs of the left and right eyes.
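As a stand-in for the multilevel discriminant-analysis thresholding of [25], the single-level Otsu criterion below illustrates the underlying idea: choose the threshold that maximizes the between-class variance of the gray-level histogram. The paper's actual method is recursive and multilevel; this sketch handles only the two-class case.

```python
import numpy as np

def otsu_threshold(gray):
    """Discriminant-analysis threshold for an 8-bit image:
    maximize the between-class variance w0*w1*(m0 - m1)^2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)   # sum of all gray levels
    best_t, best_var = 0, 0.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                        # pixels in class 0 (levels <= t)
        if w0 == 0:
            continue
        w1 = total - w0                      # pixels in class 1
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                       # class means
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2       # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# a bimodal "eye region": dark eyeball pixels (50) against bright sclera (200)
bimodal = np.array([50] * 50 + [200] * 50, dtype=np.uint8)
```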
3.4. Blinking Eyelid Detection

In addition to detecting normal eye movements, eye blinking events can also be adopted as a criterion for fatigue detection of the driver. To detect blinking events, we need to detect the blinking eyelids following the eyes detected in previous frames. Given the detected eye regions of the previous frames, the blinking eyelids in the current frame can be determined by referring to the previous regions of the eyeball blobs. Based on the blinking eye detection approach presented in [27], this study presents a simplified and improved method to detect blinking eyelids. The blinking eyelid detection process is illustrated in Figure 10. First, the automatic thresholding process is applied to the possible eyelid regions, which are obtained by referring to the previously detected eye regions, to obtain binary pixel regions. Then, the eyelid detection process vertically scans for the first occurring black object pixels along the x-coordinates at 1/4, 1/2, and 3/4 of the width of the eye region. We thereby obtain three turning points A, B, and C, which reflect the features of the eyelids, as depicted in Figure 10. Accordingly, the slopes of the feature lines AB and BC are computed, and if these two slopes have opposite signs, so that the three turning points form the arc of a closed eyelid, then a blinking eyelid is determined.

Figure 10: Illustration of blinking eyelid detection for fatigue monitoring.
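The three-point slope test for a blinking eyelid can be sketched as below. The sign convention (image y grows downward, and a closed eyelid yields slopes of opposite sign on the lines AB and BC) is an assumption of this sketch, as is the function name.

```python
def is_blinking(a, b, c):
    """Blink test on the three eyelid turning points A, B, C, sampled at
    1/4, 1/2, and 3/4 of the eye-region width.  Points are (x, y) in image
    coordinates.  A closed eyelid traces an arc, so the slopes of the
    segments AB and BC should have opposite signs."""
    (xa, ya), (xb, yb), (xc, yc) = a, b, c
    slope_ab = (yb - ya) / (xb - xa)
    slope_bc = (yc - yb) / (xc - xb)
    return slope_ab * slope_bc < 0      # opposite signs -> arc -> blink
```

A practical system would additionally require the arc to persist for a few frames before declaring a fatigue event, to reject single-frame noise.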

4. The Experimental Setup

The architecture of the proposed solar-powered on-road driver monitoring system is shown in Figure 11. In this study, we use a multicore embedded development platform to perform the tasks of image capturing, driver eye detection and monitoring, and brightness detection for power source switching. To improve the computational performance of the proposed system, we assign the computing tasks as shown in Figure 12. In this computing architecture, one CPU core focuses on monitoring the driver's state by detecting the eye states from the camera's video streams, while the other core is used for detecting the ambient brightness and selecting the suitable power source. Figure 13 gives an overview of the experimental car, and the hardware specifications of the proposed system are depicted in Table 2.

Table 2: Specifications of the driver monitoring system.
Figure 11: System architecture.
Figure 12: Task dispatches of the proposed system on the multicore platform.
Figure 13: Overview of the experimental car.
Figure 14: Software architecture.

The software architecture of the proposed solar-powered driver monitoring system consists of two parts. The first part is the face and eye detection module based on the classification-based methods. This software module detects the driver's face and then detects the eye positions and movements based on the face locations. The second part is the brightness detection module. This module analyzes the ambient brightness around the vehicle and determines whether the system is powered by the solar cells or by the vehicle batteries. When the ambient brightness is sufficiently high, the driver monitoring system is powered by the solar cells; otherwise, when the ambient brightness is insufficient, the system switches to being powered by the vehicle batteries. The software architecture is depicted in Figure 14.

5. Results and Discussion

This section presents the results of the proposed solar-powered driver monitoring system. The face and eye detection, fatigue detection, and brightness detection with power source switching techniques are implemented on a TI OMAP4430 dual-core embedded platform. This platform consists of a dual-core ARM Cortex-A9 general-purpose processor with a 1.0 GHz operating speed and 1 GB of DDR2 memory for executing the software modules. In this study, one CPU core focuses on monitoring the driver's state by detecting the eye states from the RGB color model of the camera's video streams, while the other core is used for detecting the ambient brightness and selecting the suitable power source. In this way, the computational performance of the proposed system can be effectively improved. By releasing the source code project associated with the software modules, the proposed techniques can be conveniently distributed and migrated onto different hardware platforms (such as different embedded platforms) with various operating systems (such as Linux, Android, and Windows Mobile). Thus, application developers can easily implement and design customized driver monitoring systems under different hardware and software environments.

Because the ambient brightness affects the eye detection rate of the driver monitoring system, this study evaluates the relation between the ambient brightness and the eye detection accuracy rate, as can be seen in Figure 15. When the brightness is greater than 60, the eye detection accuracy rate is greater than 90%. Therefore, if the ambient brightness is insufficient, the proposed system automatically activates the infrared lights to improve the eye detection performance and ensure a high detection rate for monitoring the driver's fatigue state.

Figure 15: The relationship between detection rate and brightness.

In our experimental platform, each solar cell is rated at 20 W but delivers only about 18% of its rated power under the tested conditions, whereas the proposed embedded platform requires about 18 W of power. Thus, this study uses six solar cells connected in parallel to supply the proposed system. As shown in Figure 16, when the solar irradiance exceeds 52 W/m², the total power supplied by the solar cells is greater than 18 W, which is sufficient for the proposed driver monitoring system. Under this condition, with the solar irradiance at about 52 W/m², the brightness becomes higher than 130, and the eye detection accuracy rate also reaches above 90%, as depicted in Figure 15.

Figure 16: Solar irradiance, brightness with power of solar cell analysis.
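The panel-count arithmetic implied above can be checked directly. Assuming each 20 W panel delivers roughly 18% of its rated power under the tested conditions, the minimum number of parallel panels for the 18 W load works out to five, so the six panels used here provide some headroom; the variable names below are illustrative.

```python
import math

PANEL_RATED_W = 20.0   # rated power of one solar cell panel
FRACTION = 0.18        # usable fraction of rated power under the tested conditions
LOAD_W = 18.0          # power draw of the embedded platform

usable_per_panel = PANEL_RATED_W * FRACTION        # about 3.6 W usable per panel
# number of parallel panels needed to cover the load; the small epsilon
# guards against floating-point round-off in the division before ceil()
panels = math.ceil(LOAD_W / usable_per_panel - 1e-9)
```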

Therefore, this study selects 130 as the brightness threshold to determine whether the system is driven by the solar cells or by the batteries. When the brightness is higher than 130, the system switches to being powered by the solar cells; otherwise, the system is driven by the vehicle batteries.

The frame rate of the vision system is approximately 30 frames per second, and the resolution of each frame of the RGB color model of the captured image sequences is 640 by 480 pixels. An experimental set of several driving videos captured under various illumination conditions and application environments was adopted to evaluate the system's face and eye detection for intelligent driver monitoring applications. Figure 17 shows the results of face and eye detection and fatigue state identification. In Figure 17, the yellow region marks the eye locations, and the green regions represent the detected left and right eyes. The red region indicates that a fatigue state has been detected. Figure 18 demonstrates the results of the proposed system under different face angles and illumination conditions.

Figure 17: The results of face and eye detection and fatigue detection.
Figure 18: The results of evaluating the proposed system's capability at different angles.

The proposed system was tested under solar irradiance of 10 to 113 W/m² and ambient brightness values of 10 to 170. The testing results show that when the outside solar radiation is high, the brightness inside the car increases, and the eye detection accuracy increases as well.

6. Conclusions

This study has presented an on-road driver monitoring system, which is implemented on a stand-alone in-vehicle embedded system and powered by solar cells. An efficient eye detection technique is developed to monitor the driver's fatigue state. This driver monitoring technique is implemented on a low-power embedded in-vehicle platform. To switch to the appropriate power source efficiently, this study also proposed a monitoring mechanism that detects the brightness around the car to effectively determine whether the system is driven by the solar cells or by the vehicle battery. On sunny days, the in-vehicle system can be powered by the solar cells without drawing on the vehicle battery. In the evenings or on rainy days, when the ambient solar brightness is insufficient, the system is powered by the vehicle battery. The proposed eye detection system achieves an accuracy rate above 90% with brightness values higher than 130, and the solar irradiance in this situation is above 52 W/m².

The proposed system has been tested under solar irradiance of 10 to 113 W/m² and ambient brightness values of 10 to 170. The experimental results show that when the outside solar radiation is high, the brightness inside the car increases, and the eye detection accuracy increases as well. Therefore, this solar-powered driver monitoring system can be efficiently applied to electric cars to save energy and promote driving safety.

Conflict of Interests

None of the authors of the present work has a direct or indirect financial relation that might lead to a conflict of interests of any kind.

Acknowledgment

This project is financially sponsored by the National Science Council under Grants nos. NSC-102-2219-E-027-006 and NSC-102-2622-E-027-027-CC2.

References

  1. D. Yang and H. Yin, “Energy conversion efficiency of a novel hybrid solar system for photovoltaic, thermoelectric, and heat utilization,” IEEE Transactions on Energy Conversion, vol. 26, no. 2, pp. 662–670, 2011.
  2. A. Gaur and G. N. Tiwari, “Performance of photovoltaic modules of different solar cells,” Journal of Solar Energy, vol. 2013, Article ID 734581, 13 pages, 2013.
  3. G. van de Kaa, J. Rezaei, L. Kamp, and A. de Winter, “Photovoltaic technology selection: a fuzzy MCDM approach,” Renewable and Sustainable Energy Reviews, vol. 32, pp. 662–670, 2014.
  4. G. K. Singh, “Solar power generation by PV (photovoltaic) technology: a review,” Energy, vol. 53, pp. 1–13, 2013.
  5. Z. J. Chien, H. P. Cho, C. S. Jwo, C. C. Chien, S. L. Chen, and Y. L. Chen, “Experimental investigation on an absorption refrigerator driven by solar cells,” International Journal of Photoenergy, vol. 2013, Article ID 490124, 6 pages, 2013.
  6. B. Huang, C. Chen, P. Hsu, W. Tseng, and M. Wu, “Direct battery-driven solar LED lighting using constant-power control,” Solar Energy, vol. 86, no. 11, pp. 3250–3259, 2012.
  7. M. Turk, “Computer vision in the interface,” Communications of the ACM, vol. 47, no. 1, pp. 61–67, 2004.
  8. T. B. Moeslund, A. Hilton, and V. Krüger, “A survey of advances in vision-based human motion capture and analysis,” Computer Vision and Image Understanding, vol. 104, no. 2-3, pp. 90–126, 2006.
  9. I. Haritaoglu, D. Harwood, and L. S. Davis, “W4: real-time surveillance of people and their activities,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 809–830, 2000.
  10. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 1, pp. 886–893, June 2005.
  11. H. Fujiyoshi and A. J. Lipton, “Real-time human motion analysis by image skeletonization,” in Proceedings of the 4th IEEE Workshop on Applications of Computer Vision (WACV '98), pp. 15–21, Princeton, NJ, USA, October 1998.
  12. P. Viola, M. J. Jones, and D. Snow, “Detecting pedestrians using patterns of motion and appearance,” International Journal of Computer Vision, vol. 63, no. 2, pp. 153–161, 2005.
  13. T. Ishikawa, S. Baker, I. Matthews, and T. Kanade, “Passive driver gaze tracking with active appearance models,” in Proceedings of the 11th World Congress on Intelligent Transportation Systems, pp. 1–18, October 2004.
  14. R. Herpers and G. Sommer, “An attentive processing strategy for the analysis of facial features,” in Face Recognition: From Theory to Applications, pp. 457–468, Springer, 1998.
  15. R.-L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, “Face detection in color images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 696–706, 2002.
  16. Z. Li and S. Kim, “A modification of Otsu's method for detecting eye positions,” in Proceedings of the 3rd International Congress on Image and Signal Processing (CISP '10), vol. 5, pp. 2454–2457, IEEE, Yantai, China, October 2010.
  17. D. W. Hansen and A. E. C. Pece, “Eye tracking in the wild,” Computer Vision and Image Understanding, vol. 98, no. 1, pp. 155–181, 2005.
  18. A. L. Yuille, P. W. Hallinan, and D. S. Cohen, “Feature extraction from faces using deformable templates,” International Journal of Computer Vision, vol. 8, no. 2, pp. 99–111, 1992.
  19. Y. Nara, J. Yang, and Y. Suematsu, “Face detection using the shape of face with both color and edge,” in Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, vol. 1, pp. 147–152, December 2004.
  20. A. Królak and P. Strumiłło, “Eye-blink controlled human-computer interface for the disabled,” in Human-Computer Systems Interaction, vol. 60 of Advances in Intelligent and Soft Computing, pp. 123–133, 2009.
  21. J. J. Magee, M. R. Scott, B. N. Waber, and M. Betke, “EyeKeys: a real-time vision interface based on gaze detection from a low-grade video camera,” in Computer Vision and Pattern Recognition Workshop, vol. 10, p. 159, 2004.
  22. C.-Y. Ho, Applying Connected Component Labeling in Real-Time Eye Tracking [M.S. thesis], National Taipei University of Technology, Taipei, Taiwan, 2010.
  23. M. Gopi Krishna and A. Srinivasulu, “Face detection system on AdaBoost algorithm using Haar classifiers,” International Journal of Modern Engineering Research, vol. 2, no. 5, pp. 3556–3560, 2012.
  24. P. Viola and M. J. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.
  25. B. F. Wu, Y. L. Chen, and C. C. Chiu, “A discriminant analysis based recursive automatic thresholding approach for image segmentation,” IEICE Transactions on Electronics, no. 8, pp. 1716–1723, 2005.
  26. K. Suzuki, I. Horiba, and N. Sugie, “Linear-time connected-component labeling based on sequential local operations,” Computer Vision and Image Understanding, vol. 89, no. 1, pp. 1–23, 2003.
  27. C.-Y. Ho, Applying Connected Component Labeling in Real-Time Eye Tracking [M.S. thesis], National Taipei University of Technology, Taipei, Taiwan, 2010.