Abstract

This study presents an on-road driver monitoring system, implemented on a stand-alone in-vehicle embedded system and driven by solar cells. The driver monitoring function is performed by an efficient eye detection technique. Through the driver's eye movements captured by the camera, the attentive states of the driver can be determined so that fatigued driving can be avoided. This driver monitoring technique is implemented on a low-power embedded in-vehicle platform. In addition, this study proposes a monitoring mechanism that detects the brightness around the car to determine whether the in-vehicle system is driven by the solar cells or by the vehicle battery. On sunny days, the in-vehicle system can be powered by the solar cells without drawing on the vehicle battery; in the evenings or on rainy days, when the ambient solar brightness is insufficient, the system is powered by the vehicle battery. The proposed system was tested under solar irradiance ranging from 10 to 113 W/m² and ambient brightness values ranging from 10 to 170. The test results show that when the outside solar radiation is high, the brightness inside the car increases, and the eye detection accuracy increases as well. Therefore, this solar-powered driver monitoring system can be efficiently applied to electric cars to reduce energy consumption and promote driving safety.

1. Introduction

Economic development has caused large amounts of energy to be consumed, depleting petrochemical reserves and contributing to the energy crisis. To reduce petrochemical energy consumption in transportation, electric cars have emerged. To achieve a longer driving range, the car battery power must be managed carefully. The in-vehicle instruments in electric cars also consume electric energy and may reduce the energy available for driving. Moreover, many traffic accidents occur due to inattentive and fatigued driving. Thus, an effective system for monitoring drivers' attentive states is also very important for the development of electric cars. Therefore, solar energy can be adopted to drive an in-vehicle embedded computing platform that performs the driver monitoring functions, so that both reduced electric energy consumption and improved driving safety can be achieved.

Photovoltaic (PV) technology is one of the most important renewable sources of energy generation. Since the photovoltaic effect was first recognized in 1839 [1], there have been many research works on the performance of PV. However, efficiency improvement and cost reduction of PV technology still require considerable effort. Solar cells based on crystalline silicon (c-Si) are known as first-generation solar cells [2]. From the viewpoints of cost, performance, and processability, new advanced materials such as amorphous silicon (a-Si), cadmium telluride (CdTe), and copper indium gallium diselenide (CIGS) are applied in the second and third generations of solar cells. Typical conversion efficiencies of first-generation technologies are currently 15% to 20%, whereas those of second-generation technologies are currently 7% to 15% [3].

Photovoltaic systems can be categorized into stand-alone photovoltaic systems and grid-connected photovoltaic systems [4]. Stand-alone systems do not supply power to the grid. Such systems vary widely in size and application, ranging from consumer electronics to remote buildings. Chien et al. [5] presented experimental investigations of absorption refrigerators driven by solar cells. Their system was tested under solar irradiance ranging from 550 to 700 W/m², with 500 mL of ambient-temperature water as the cooling load. After 160 minutes, the refrigerator could maintain the temperature at 5 to 8°C. Huang et al. [6] presented a solar LED street lighting system using constant-power and dimming control. The test results showed that the power of 18 W and 100 W LED luminaires can be controlled accurately with errors of 2–5%.

To monitor drivers' fatigue states, human body features and actions should be efficiently analyzed. Human body feature sensing and detection currently play important roles in many application areas, such as human surveillance and human-computer interaction [7–9]. In recent research, many human action detection techniques aimed at detecting different human body parts have been developed for human-computer interaction applications, such as whole-body motions [10–12], faces, and eye gazes [13–23]. Among these body parts, the face and eye gaze play the most important roles in interpreting and understanding a person's attention, intentions, and needs in human communications and interactions, especially for driver state monitoring. In this sense, tracking faces and eye gazes provides the information necessary to determine the car driver's attentive states.

Moreover, the computational power and flexibility of in-vehicle embedded systems have recently increased significantly because of the development of system-on-chip technologies and the advanced computing power of newly released portable devices. Thus, performing real-time vision-based eye detection and tracking for driver assistance and monitoring applications has become feasible on modern embedded platforms. This paper presents an on-road driver monitoring system that uses a solar-cell-powered in-vehicle embedded platform. The system detects the driver's attentive state and, according to the ambient brightness, determines whether it is powered by the vehicle battery or by the solar cells. On sunny days the system can be driven by the solar cells, conserving the finite battery capacity of the vehicle.

This study presents an on-road driver monitoring system, which is implemented on an in-vehicle embedded system and driven by solar cells. The driver monitoring function is performed by an efficient eye detection technique: through the driver's eye movements captured by the camera, the attentive states of the driver can be determined so that fatigued driving can be avoided. This technique is implemented on a low-power embedded in-vehicle platform. In addition, this study proposes a monitoring mechanism that detects the brightness around the car to determine whether the in-vehicle system is driven by the solar cells or by the vehicle batteries. On sunny days, the in-vehicle system can be powered by the solar cells without drawing on the vehicle battery; in the evenings or on rainy days, when the ambient solar brightness is insufficient, the system is powered by the vehicle battery. The experimental results demonstrate the efficiency of the proposed solar-powered driver monitoring system in energy savings and driving safety for electric cars.

2. Brightness Detection and Power Source Determination

In the proposed system, we adopt a camera, a pyranometer, and a power meter to obtain the solar irradiance and the power generation efficiency of the solar cells, together with analytical results of the images captured outside the car. First, the RGB (red, green, and blue) color model of the captured image sequences of the road environment is adopted to compute the brightness value, denoted as V. The brightness value V, which ranges from 0 to 255, is then used to determine the ambient brightness condition with respect to the strength of the solar irradiance. The brightness value is computed as follows:

V = max(R, G, B). (1)

Equation (1) indicates that the V value is the maximum of the R, G, and B color values. We compare the brightness value against a threshold to determine the strength of the solar irradiance. When the V value is lower than the threshold, the system conducts two operations: it changes the power source from the solar cells to the vehicle battery, and it turns on the infrared light sources so that the fatigue state of the driver can be detected more accurately. When the V value is higher than the threshold, the system conducts the opposite two operations: it changes the power source from the battery to the solar cells, and it turns off the infrared light sources, because the ambient lighting condition is sufficient for detecting the driver's states. In the proposed system, the brightness threshold is experimentally determined to be 130, as depicted in Figure 14.
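
A minimal sketch of this brightness-based switching logic is given below, assuming a BGR frame from OpenCV. The `set_power_source` and `set_infrared` callbacks are hypothetical placeholders for the platform's power-switching and illuminator controls, and averaging the per-pixel V values over the frame is one plausible aggregation, as the text does not specify one.

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 130  # experimentally determined threshold (Section 5)

def frame_brightness(frame_bgr):
    """Compute the V value of Equation (1): per-pixel max of R, G, and B,
    averaged over the frame to characterize the ambient brightness (0-255)."""
    v_channel = frame_bgr.max(axis=2)   # V = max(R, G, B) per pixel
    return float(v_channel.mean())      # assumed aggregation: frame average

def select_power_source(v_value, set_power_source, set_infrared):
    """Switch between solar cells and the vehicle battery and toggle the
    infrared illuminators according to the ambient brightness value."""
    if v_value >= BRIGHTNESS_THRESHOLD:
        set_power_source("solar")       # sufficient sunlight
        set_infrared(False)             # ambient light suffices for detection
    else:
        set_power_source("battery")     # evening / rainy conditions
        set_infrared(True)              # aid eye detection in low light
```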

3. Driver Monitoring by Eye Detection Methods

To monitor the driver's attentive states, the driver's eyes provide important information. This study presents a classification-based approach that detects and analyzes the driver's eyes to determine the attentive states. A fast face and eye detection process based on a boosted classification approach is presented to detect and locate the regions of faces and eyes in the image sequences captured by a video camera. In this detection process, we improve and optimize the boosted classification method to provide the computational efficiency required by embedded and portable devices.

3.1. Classification-Based Methods

Here each weak classifier stage is performed by selecting a combined set of features computed from the integral image, which stores the integrals over the subregions contained in the region, as depicted in Figure 1. The features utilized in the proposed system are a variety of Haar-like features [24], which are reminiscent of Haar wavelets and reflect human visual responses, such as linear, center-surround, and diagonal directional responses, as shown in Figure 2.
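
The following sketch illustrates the integral-image idea underlying these features: once the summed-area table is built, the sum over any rectangle requires only four lookups, so a two-rectangle Haar-like feature costs a handful of operations regardless of region size. This is a generic illustration of the technique, not the paper's implementation.

```python
import numpy as np

def integral_image(gray):
    """Summed-area table with a zero row/column prepended, so that
    ii[y, x] holds the sum of all pixels above and to the left of (y, x)."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = gray.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_sum(ii, x, y, w, h):
    """Sum over the w-by-h rectangle with top-left corner (x, y),
    obtained with only four table lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_vertical(ii, x, y, w, h):
    """A simple two-rectangle (edge-type) Haar-like feature: the difference
    between the pixel sums of the left and right halves of the region."""
    half = w // 2
    return region_sum(ii, x, y, half, h) - region_sum(ii, x + half, y, half, h)
```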

In this sense, a boosted cascade of strong classifiers can be constructed, where each stage is a strong classifier with a suitable threshold for determining whether the input image contains target objects, as shown in Figure 3. Each stage classifier is trained and performed by analyzing a combination of features obtained from the integral image, thus detecting most target objects while screening out a certain portion of uninteresting objects that might have been accepted by previous classifier stages.

3.2. Face Detection

With the boosting classification techniques, the faces in the grabbed image sequences can be detected and located via the trained boosted cascade detectors. The proposed system adopts a fixed pixel resolution for the face training patterns.

Because the proposed system is designed mainly for interactive in-vehicle applications, the distance between the camera and the face is mostly within 25 cm to 60 cm. Therefore, to perform real-time face detection with the cascade detectors in a computationally efficient manner, the proposed system adopts scaling factors for the resolutions of the image pyramids [21] on the face images, as depicted in Table 1. Specifically, scale factors of 2–7 are applied to the face images to perform fast multiscale detection via the boosted cascade detectors.

So that the system can run under different environments with varying ambient lighting conditions, histogram equalization is applied to the scaled face images to obtain uniform gray-level distributions, as shown in Figure 4. The boosted cascade detectors are then applied to the histogram-equalized images to detect faces using the Haar-like features, as depicted in Figure 5.
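
As an illustration of this detection pipeline, the sketch below equalizes the histogram and runs a boosted Haar cascade over downscaled copies of the frame at the scale factors of Table 1. It uses OpenCV's pretrained frontal-face cascade as a stand-in for the paper's own trained detector, and the explicit pyramid loop is one plausible reading of the multiscale scheme.

```python
import cv2

# Stand-in for the paper's trained boosted cascade: OpenCV ships a
# comparable Haar cascade trained on frontal faces.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr, scale_factors=(2, 3, 4, 5, 6, 7)):
    """Equalize the histogram for lighting invariance, then run the boosted
    cascade on a pyramid of downscaled images (scale factors per Table 1)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)       # uniform gray-level distribution
    faces = []
    for s in scale_factors:
        small = cv2.resize(gray, (gray.shape[1] // s, gray.shape[0] // s))
        for (x, y, w, h) in face_cascade.detectMultiScale(small):
            # map detections back to full-resolution coordinates
            faces.append((x * s, y * s, w * s, h * s))
    return faces
```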

After the faces in an image are detected, their positions and sizes are recorded for the subsequent face and eye detection processes. Based on the positions and sizes of the currently detected faces, the search for faces in subsequent image frames is restricted to areas of twice the widths and heights of the currently detected faces, so the computational cost of face detection in the following frames is effectively reduced.
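
A sketch of this restricted-search idea, assuming simple (x, y, w, h) rectangles: the helper centers a window of twice the detected size on the previous detection and clamps it to the frame. The same extension also serves the eye ROIs of Section 3.3.

```python
def expand_region(x, y, w, h, frame_w, frame_h, factor=2.0):
    """Center a search window of factor-times the detected region's width
    and height on the previous detection, clamped to the frame bounds."""
    new_w, new_h = int(w * factor), int(h * factor)
    new_x = max(0, x + w // 2 - new_w // 2)   # keep the window centered
    new_y = max(0, y + h // 2 - new_h // 2)
    new_w = min(new_w, frame_w - new_x)       # clamp to frame bounds
    new_h = min(new_h, frame_h - new_y)
    return new_x, new_y, new_w, new_h
```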

3.3. Eye Detection

Given the face positions, the eyes can be effectively detected within the regions of interest specified by the facial areas. In addition, to save computational cost and keep the boosting classification process real-time, face detection on the whole image is performed only when no eyes were detected in the previous frame. Otherwise, if the face or eyes have already been detected in the previous frame, the proposed system performs the eye detection process within adaptive regions of interest (ROIs) derived from the locations and areas of the face and eyes detected in previous frames.

Accordingly, the proposed system adopts two types of regions of interest for eye detection, corresponding to two situations: the face has been detected, or the eyes have been detected, in the previous frames [20]. In the first situation, the face has just been detected in the previous frame, as depicted in Figure 6, and the ROIs for detecting the left eye and the right eye are determined as follows.

The ROI starting positions for detecting the left eye and the right eye are computed from the detected face region, where x_f and y_f denote the x and y coordinates of the top-left position of the detected face region, respectively, and w_f and h_f represent its width and height, respectively. The width and height of the eye detection ROIs are likewise computed as fixed proportions of w_f and h_f.
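
Since the exact fractional offsets follow [20] and are not reproduced here, the sketch below uses illustrative anthropometric fractions (the eyes lie in the upper half of the face, one on each side of its vertical midline). The specific values are assumptions for illustration, not the paper's equations.

```python
def eye_rois(xf, yf, wf, hf):
    """Derive left- and right-eye ROIs from the detected face region
    (xf, yf, wf, hf). The fractions below are illustrative assumptions,
    NOT the paper's actual offsets: eyes are assumed to sit in the upper
    half of the face, one on each side of the vertical midline."""
    roi_w, roi_h = wf // 2, hf // 3                    # assumed ROI size
    left_eye = (xf, yf + hf // 5, roi_w, roi_h)        # image-left half
    right_eye = (xf + wf // 2, yf + hf // 5, roi_w, roi_h)
    return left_eye, right_eye
```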

In the second situation, the eyes have already been detected in previous frames, and the eyes in subsequent frames can then be searched for and detected based on their regions in the previous frames.

Based on the detected eye region, which includes the eye and the eyebrow, and the displacement of the driver's eyes, the ROIs for detecting the left and right eyes in the current frame are determined by extending the previously detected eye regions to twice their corresponding widths and heights. As illustrated in Figure 7, the red rectangular regions are the ROIs for detecting the left and right eyes in the current frame, while the green rectangular regions are the eye regions detected in the previous frame.

Accordingly, the boosting classifier is applied to the above-mentioned two types of eye ROIs to obtain the left and right eye regions in the image sequences. Given the eye regions, the proposed system then extracts the eyeballs and eyelids for further determination of the driver's states. To obtain the positions of the eyeballs efficiently, the central portions of the detected eye regions, which mostly contain the eyeballs of interest, are first extracted and transformed into gray-intensity images.

Then, to separate the eyeball blob pixels from other uninteresting object pixels with different illumination features under various ambient lighting conditions, an effective automatic thresholding technique is needed to adaptively segment the salient eyeball blob pixels. Using the properties of discriminant analysis, our previous research presented an effective automatic multilevel thresholding technique for image segmentation [25], by which the pixel regions of the eyeball blobs can be appropriately separated from the other objects contained in the detected eye regions. Then, to locate the eyeball blobs in the extracted object plane, a connected-component extraction process [26] is performed to label and locate the connected components of the eyeball blobs. Locating the connected components reveals the meaningful features of the position, area, and pattern associated with each eyeball blob. Figure 8 illustrates the extraction process of the eyeball regions in the detected left and right eye regions. As depicted in Figure 9, after the automatic thresholding and connected-component extraction processes are performed on the eyeball regions, the eyeball blobs and their features are obtained for the subsequent driver monitoring functions.
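
The sketch below illustrates this segmentation step. The paper uses its own discriminant-analysis-based multilevel thresholding [25]; as a simpler stand-in, standard Otsu thresholding (also discriminant-based) is used here, followed by OpenCV's connected-component labeling to recover the blob position, area, and centroid.

```python
import cv2

def extract_eyeball_blob(eye_region_bgr):
    """Segment the dark eyeball pixels and label connected components.
    Otsu's threshold stands in for the paper's multilevel technique [25]."""
    gray = cv2.cvtColor(eye_region_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    if n < 2:
        return None                               # no foreground blob found
    # keep the largest non-background component as the eyeball blob
    largest = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()
    x, y, w, h, area = stats[largest]
    return (x, y, w, h), tuple(centroids[largest])  # bounding box, centroid
```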

3.4. Blinking Eyelid Detection

In addition to normal eye movements, eye blinking events can also be adopted as a criterion for driver fatigue detection. To detect blinking events, we detect the blinking eyelids following the eyes detected in recent frames. Given the eye regions detected in the previous frames, the blinking eyelids in the current frame can be determined by referring to the previous regions of the eyeball blobs. Based on the blinking eye detection approach presented in [27], this study presents a simplified and improved method to detect blinking eyelids. The blinking eyelid detection process is illustrated in Figure 10. First, the automatic thresholding process is applied to the possible eyelid regions, which are obtained by referring to the previously detected eye regions, to obtain binary pixel regions. Then, the eyelid detection process vertically scans for the first occurring black object pixels along the columns at 1/4, 1/2, and 3/4 of the eye-region width, which yields three turning points A, B, and C reflecting the features of the eyelids, as depicted in Figure 10. Accordingly, the slopes of the feature lines AB and BC are computed, and if the two slopes satisfy the blinking criteria depicted in Figure 10, a blinking eyelid is determined.
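
A hedged sketch of this three-point scan follows. The binary-image convention (black object pixels) matches the text, but the final sign test on the two slopes is an assumption about the criteria of Figure 10, which are not fully specified here.

```python
import numpy as np

def detect_blink(binary_eye):
    """Scan down the columns at 1/4, 1/2, and 3/4 of the eye-region width
    for the first black (object) pixel, giving turning points A, B, C, and
    compare the slopes of AB and BC. The sign test below is an assumed
    criterion: a closed eyelid is taken to form an upward arch, so the two
    slopes should have opposite signs in image coordinates (Figure 10)."""
    h, w = binary_eye.shape
    points = []
    for x in (w // 4, w // 2, 3 * w // 4):
        col = np.where(binary_eye[:, x] == 0)[0]   # black object pixels
        if col.size == 0:
            return False                           # no eyelid pixel found
        points.append((x, int(col[0])))            # first occurrence top-down
    (xa, ya), (xb, yb), (xc, yc) = points
    slope_ab = (yb - ya) / (xb - xa)
    slope_bc = (yc - yb) / (xc - xb)
    return slope_ab < 0 < slope_bc   # assumed: B above A and C (upward arch)
```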

4. The Experimental Setup

The architecture of the proposed solar-powered on-road driver monitoring system is shown in Figure 11. In this study, we use a multicore embedded development platform to perform the tasks of image capturing, driver eye detection and monitoring, and brightness detection for power source switching. To improve the computational performance of the proposed system, we assign the computing tasks as shown in Figure 12. In this computing architecture, one CPU core focuses on monitoring the driver's state by detecting the eye states from the camera's video streams, while the other core detects the ambient brightness and selects the suitable power source. Figure 13 gives an overview of the experimental car, and the hardware specifications of the proposed system are listed in Table 2.

The software architecture of the proposed solar-powered driver monitoring system consists of two parts. The first part is the face and eye detection module based on the classification-based methods; this module detects the driver's face and then detects the eye positions and movements based on the face locations. The second part is the brightness detection module, which analyzes the ambient brightness around the vehicle and determines whether the system is powered by the solar cells or by the vehicle batteries. When the ambient brightness is sufficiently high, the driver monitoring system is powered by the solar cells; otherwise, the system switches to being powered by the vehicle batteries. The software architecture is depicted in Figure 14.

5. Results and Discussion

This section presents the results of the proposed solar-powered driver monitoring system. The face and eye detection, fatigue detection, and brightness detection with power source switching techniques are implemented on a TI OMAP4430 dual-core embedded platform. This platform consists of a dual-core Cortex-A9 ARM-based general-purpose processor with a 1.0 GHz operating speed and 1 GB of DDR2 memory for executing the software modules. In this study, one CPU core focuses on monitoring the driver's state by detecting the eye states from the RGB color model of the camera's video streams, while the other core detects the ambient brightness and selects the suitable power source. In this way, the computational performance of the proposed system is effectively improved. By releasing the source code project associated with the software modules, the proposed techniques can be conveniently distributed and migrated onto different hardware platforms (such as other embedded platforms) with various operating systems (such as Linux, Android, and Windows Mobile). Thus, application developers can easily implement and design customized driver monitoring systems under different hardware and software environments.
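
One way to realize this two-core task assignment is sketched below with Python's `multiprocessing`; the actual system's implementation is not specified, so this is only an illustration of the division of labor, with each worker mapping naturally onto one core and a shared value carrying the ambient brightness between them.

```python
import time
from multiprocessing import Process, Value

def driver_monitor_loop(shared_v):
    """Core 1: grab frames and run the eye-detection pipeline
    (Sections 3.2-3.4); reduced here to a placeholder loop."""
    while True:
        # frame = grab_frame(); shared_v.value = frame_brightness(frame)
        # then detect face/eyes and check for fatigue states
        time.sleep(1 / 30)          # ~30 fps video stream

def power_manager_loop(shared_v):
    """Core 2: read the shared brightness value and switch the power
    source and infrared lights (Section 2)."""
    while True:
        v = shared_v.value
        # select_power_source(v, ...)  -- see the sketch in Section 2
        time.sleep(1.0)             # ambient brightness changes slowly

if __name__ == "__main__":
    shared_v = Value("d", 0.0)      # shared ambient V value
    workers = [Process(target=driver_monitor_loop, args=(shared_v,)),
               Process(target=power_manager_loop, args=(shared_v,))]
    for p in workers:
        p.start()
```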

Because the ambient brightness affects the eye detection rate of the driver monitoring system, this study evaluates the relation between the ambient brightness and the eye detection accuracy rate, as shown in Figure 15. When the brightness is greater than 60, the eye detection accuracy rate exceeds 90%. Therefore, if the ambient brightness is insufficient, the proposed system automatically activates the infrared lights to maintain a high detection rate for monitoring the driver's fatigue states.

In our experimental platform, each solar cell is rated at 20 W, but under the test conditions it delivers only about 18% of its rated power, whereas the proposed embedded platform requires about 18 W. Thus, this study uses six solar cells connected in parallel to supply the proposed system. As shown in Figure 16, when the solar irradiance exceeds 52 W/m², the total power supply of the solar cells is greater than 18 W, which is sufficient for the proposed driver monitoring system. Under this condition, the brightness value is higher than 130, and the eye detection accuracy rate also exceeds 90%, as depicted in Figure 15. In other words, the eye detection system achieves an accuracy rate above 90% when the brightness value is higher than 130, which corresponds to a solar irradiance above 52 W/m².

Therefore, this study selects 130 as the brightness threshold for determining whether the system is driven by the solar cells or by the batteries. When the brightness is higher than 130, the system switches to being powered by the solar cells; otherwise, the system is driven by the vehicle batteries.

The frame rate of the vision system is approximately 30 frames per second, and the resolution of each frame of the RGB color model of the captured image sequences is 640 by 480 pixels. An experimental set of driving videos captured under various illumination conditions and application environments was adopted to evaluate the system's face and eye detection for intelligent driver monitoring applications. Figure 17 shows the results of face and eye detection and fatigue state identification: the yellow regions mark the eye locations, the green regions mark the detected left and right eyes, and the red regions indicate that a fatigue state has been detected. Figure 18 demonstrates the results of the proposed system under different face angles and illumination conditions.

The proposed system was tested under solar irradiance ranging from 10 to 113 W/m² and ambient brightness values ranging from 10 to 170. The test results show that when the outside solar radiation is high, the brightness inside the car increases, and the eye detection accuracy increases as well.

6. Conclusions

This study has presented an on-road driver monitoring system, implemented on a stand-alone in-vehicle embedded system and powered by solar cells. An efficient eye detection technique was developed to monitor the driver's fatigue states, and this driver monitoring technique was implemented on a low-power embedded in-vehicle platform. To switch between the appropriate power sources efficiently, this study also proposed a monitoring mechanism that detects the brightness around the car to determine whether the system is driven by the solar cells or by the vehicle battery. On sunny days, the in-vehicle system can be powered by the solar cells without drawing on the vehicle battery; in the evenings or on rainy days, when the ambient solar brightness is insufficient, the system is powered by the vehicle battery. The proposed eye detection system achieves an accuracy rate above 90% when the brightness value is higher than 130, which corresponds to a solar irradiance above 52 W/m².

The proposed system was tested under solar irradiance ranging from 10 to 113 W/m² and ambient brightness values ranging from 10 to 170. The experimental results show that when the outside solar radiation is high, the brightness inside the car increases, and the eye detection accuracy increases as well. Therefore, this solar-powered driver monitoring system can be efficiently applied to electric cars to reduce energy consumption and promote driving safety.

Conflict of Interests

None of the authors of the present work has any direct or indirect financial relation that might lead to a conflict of interests of any kind.

Acknowledgment

This project is financially sponsored by the National Science Council under Grant nos. NSC-102-2219-E-027-006 and NSC-102-2622-E-027-027-CC2.