Abstract

Intelligent robotic vehicles are increasingly fully automated, without steering wheels, gas/brake pedals, or gearshifts. However, allowing the human driver to step in and maneuver the robotic vehicle under specific driving requirements is a necessary issue to consider. To this end, we propose a wearable-sensing-based hands-free maneuver intention understanding approach to assist the human in naturally operating the robotic vehicle without physical contact. The human intentions are interpreted and modeled based on fuzzy control, using the forearm posture and muscle activity information detected by a wearable sensory system that incorporates electromyography (EMG) sensors and an inertial measurement unit (IMU). Based on the maneuver intention understanding model, the human can flexibly, intuitively, and conveniently control diverse vehicle maneuvers using only his intention expressions. This approach was implemented in a series of experiments in practical situations on a lab-based 1/10-scale robotic vehicle research platform. Experimental results and evaluations demonstrated that, by taking advantage of the nonphysical contact and natural handleability of this approach, the robotic vehicle was successfully and effectively maneuvered to finish the driving tasks with considerable accuracy and robustness in human-robotic vehicle interaction.

1. Introduction

With the improvement of computational resources and manufacturing capacities, intelligent robotic vehicle technology has developed rapidly and steadily in recent decades [1, 2]. These robotic vehicles have numerous advantages over traditional vehicles in solving traffic problems such as traffic jams and traffic accidents caused by driver error or negligence [3]. Besides, intelligent robotic vehicles are more and more likely to eliminate gas/brake pedals, gearshifts, and steering wheels. Google has built a two-seater prototype intelligent vehicle without a steering wheel or pedals. Even with no vehicle controls available to the human driver, this prototype is able to safely maneuver around obstacles via its built-in sensors and software system [4]. In addition, aiming at ride-hailing and ride-sharing fleets, Ford plans to build a fully autonomous robotic vehicle without a steering wheel or pedals by 2021 [5].

However, during the intelligent robotic vehicle driving process, the human driver or passenger often has specific driving requirements, such as accelerating on a straight road, stopping for an emergency, or turning in a temporary direction. Consequently, how to adjust the driving modes according to these specific human intentions is a necessary issue to consider in the vehicle design.

During the human-robotic vehicle interaction process, it is significant for the vehicle to understand the human intentions or behaviors in order to achieve different vehicle maneuvers. Several related works have been conducted in recent years.

By using human motions, the car's speed, and the distance between the car and the intersection, Ohashi et al. proposed a model using case-based learning to construct an experimental system for understanding the human driver's intentions [6]. In [7], the research team recognized a set of continuous driver intentions by observing easily accessible vehicle and environment signals such as pedals or global vehicle positions. Based on a playback system and machine learning, Oliver and Pentland presented a dynamical graphical framework to model and recognize driver behaviors at a tactical level, focusing on how contextual information impacts the driver's performance [8]. Researchers in [9] developed a driver behavior recognition approach by characterizing and detecting driving maneuvers and then modeled and recognized the driver's behaviors in different situations. From the accessible vehicle onboard sensors, Berndt and Dietmayer investigated a method to infer the driver's intention to leave the lane or perform other maneuvers; in this work they expected to help drivers predict trajectories or assess risks [10].

However, these recognition and understanding methods for human intentions are too complex to implement in practice. Additionally, we usually cannot obtain much recorded information from the vehicle's embedded system, since there are fewer traditional operation devices in future robotic vehicles.

Using gestures to represent human intentions for robotic vehicle maneuvers is a practical and interesting line of work that has attracted a lot of attention. Operating robotic vehicles via human gestures frees the human's hands from conventional operation habits and reduces the negligence and errors that may cause vehicle collisions [11]. Ionescu et al. developed efficient human-vehicle interaction through a smart, real-time depth camera operating in the near infrared spectrum; the acquired depth information was processed for human gesture detection and recognition to interpret the driving intentions used to control the vehicle [12]. Researchers in [13] established gesture-based communication between the human and an intelligent wheelchair through a webcam and sensors. By using an array of cameras that output instantaneous state information, Kramer and Underkoffler acquired images of human gestures and then designed a controller that automatically extracted and detected the gesture from the gesture data for the vehicle maneuver [14]. In [15], to enable the human and the vehicle to communicate and work together, Fong et al. used sensor fusion and computer vision to perceive the remote environment and improve situation awareness; they then created easy-to-use remote driving tools for the vehicles. Researchers in [16] employed a Leap Motion to detect gesture data and extracted seven independent instructions for autonomous vehicle maneuvering.

Although there are several vision-based approaches, vision-based driving intention recognition and understanding highly depends on the working surroundings. Its performance is easily disturbed by complex and dynamic backgrounds such as crowded urban settings. Furthermore, a vision system usually requires the human to remain within certain areas so that the motion information can be captured, which significantly constrains the activities and working ranges of the humans.

Therefore, with the extensive development and deployment of robotic vehicles, researchers expect humans and vehicles to collaborate seamlessly in different driving situations. Developing a simply configured, naturally operable, and highly robust human intention understanding approach for human-robotic vehicle collaboration is therefore a very necessary task.

To this end, different from existing approaches that use vehicle built-in devices or vision systems, we propose a wearable-sensing-based maneuver intention understanding approach using a wearable sensory system [17–19] to assist the human in maneuvering the robotic vehicle without physical contact. This interaction method does not require the human's hands to be physically involved in the driving task and can be applied in complicated human-robotic vehicle interactions.

The major contributions of this work include the following: (i) we propose a natural wearable sensing solution to assist human drivers in maneuvering robotic vehicles without traditional operation devices in specific driving situations, which is more robust than existing approaches; (ii) we develop a driving intention understanding approach using fuzzy control and human motion information, including forearm postures and muscle activities, captured by the wearable sensory system.

2. System Framework

The system framework, designed for the human to operate the robotic vehicle using the wearable-sensing-based maneuver intention understanding approach, is shown in Figure 1. The system contains three layers: the data layer, the decision layer, and the execution layer.

When the human intends to change the robotic vehicle's driving mode, his intentions, expressed by forearm postures and muscle activities, are detected and calculated via a wearable sensory system, as presented in Figure 2. After being collected, the expression information is preprocessed and fused to output useful information in the data layer. Then the processed information, including the human hand's rotation angles and the arm muscle electromyography (EMG) signals, is sent to the decision layer in real time by means of wireless communication devices.

After that, in the decision layer, the acquired information is further processed to generate intention instructions based on the intention understanding model. Simultaneously, the instruction outputs trigger the vehicle motion planning algorithms by calling the corresponding driving mode function. In order to ensure that the vehicle executes accurately, both the intention instructions and the algorithm outputs are utilized to make motion planning decisions.

In the execution layer, the vehicle driving commands are generated based on the decision layer outputs for the vehicle to plan motions in the real-world workspace. Meanwhile, the vehicle execution states are sent back to the decision layer to inform the motion planning algorithms whether the driving intention has been accepted. The motion planning algorithms will output the decision again if the driving intention fails to be accepted.
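To make this three-layer data flow concrete, the following minimal Python sketch outlines one possible control loop; all class and function names (SensorFrame, model.infer, planner.plan, vehicle.execute, and so on) are illustrative assumptions rather than the actual implementation of the platform.

```python
# Minimal sketch of the three-layer framework (all names are hypothetical).
from dataclasses import dataclass

@dataclass
class SensorFrame:
    euler: tuple   # (roll, yaw, pitch) angles fused from the IMU
    emg: list      # 8-channel EMG samples

def data_layer(wearable):
    """Data layer: collect and fuse the wearable sensing information."""
    return SensorFrame(euler=wearable.read_euler(), emg=wearable.read_emg())

def decision_layer(frame, model, planner):
    """Decision layer: understand the intention and plan the motion."""
    intention = model.infer(frame.euler, frame.emg)
    return planner.plan(intention)

def execution_layer(vehicle, command):
    """Execution layer: execute the command and report acceptance."""
    return vehicle.execute(command)   # True if the intention was accepted

def control_loop(wearable, model, planner, vehicle):
    while True:
        frame = data_layer(wearable)
        command = decision_layer(frame, model, planner)
        if not execution_layer(vehicle, command):
            # Feedback path: the decision layer outputs the decision again
            # when the driving intention fails to be accepted.
            execution_layer(vehicle, planner.replan())
```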

3. Maneuver Intention Representation and Data Acquisition

3.1. Maneuver Intention Representation

The human maneuver intentions [20], including brake, turn, and acceleration, are quite common in daily driving. These intentions can be reflected and represented in many ways, such as body movements and natural language. Because there are no traditional manual operation devices in future intelligent robotic vehicles, the normal driving manners are not available in these cases. In this research, in order to make the interaction practical and natural, we utilize the human forearm postures and muscle activities to represent these maneuver intentions. As shown in Figure 2, the intention information mainly contains forearm rotation angles and EMG signals. Therefore, the maneuver intentions can be described as

$$I = \{I_R, I_E\}, \quad (1)$$

where $I_R$ denotes the maneuver intention interpreted by forearm rotations and $I_E$ denotes the maneuver intention interpreted by EMG signals.

3.2. Wearable Sensory System

We employ a wearable sensory system for human-robotic vehicle interaction to acquire the human forearm posture and muscle activity information in the maneuver process. The sensory system that we choose is Myo [21], which is worn on the driver's forearm and integrates an inertial measurement unit (IMU) [22–24] and eight EMG sensors [25–27]. The IMU chip contains an onboard digital motion processor (DMP) and an MPU-9150 module, which consists of a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. The detected information from the IMU and EMG sensors is preprocessed by a microcontroller unit (MCU) with a 32-bit ARM Cortex-M4 core running at 72 MHz. All the raw and calculated data are made available through a first-in-first-out (FIFO) buffer that is read by the MCU over the communication bus. The Bluetooth Low Energy (BLE) module on the mainboard is used for external communication between Myo and the client controller [28].

The working principle of the information acquisition by this wearable sensory system is presented in Figure 2. The human forearm postures are tracked and recorded by the IMU. These data include acceleration and angular velocity information, which can be fused to describe the forearm motions and rotation angles. When the human expresses maneuver intentions, the electrical skeletal muscle activities of his forearm are measured by the EMG sensors. This EMG information can be used to estimate the human's finger motions such as wave-in, finger-spread, and fist.

3.3. Data Acquisition and Processing

When the human performs his maneuver intentions, as shown in Figure 2, his forearm postures can be quantified by the IMU outputs, which contain the 3-axis acceleration information and the 3-axis angular velocity information about forearm motions. Furthermore, these data can be fused together into quaternions

$$Q(t) = [q_0(t), q_1(t), q_2(t), q_3(t)]^T, \quad (2)$$

where $q_0^2(t) + q_1^2(t) + q_2^2(t) + q_3^2(t) = 1$. The sample frequency of the IMU is 50 Hz in our work.

In order to calculate the forearm postures, Euler angles [29] are utilized to parameterize the forearm spatial rotations in the 3D workspace. The Roll-Pitch-Yaw Euler angles can be represented by

$$\Theta(t) = [\phi(t), \theta(t), \psi(t)]^T, \quad (3)$$

where $t$ denotes the IMU sampling time, $\phi(t)$ is the Roll rotation about the $x$-axis, $\psi(t)$ is the Yaw rotation about the $z$-axis, and $\theta(t)$ is the Pitch rotation about the $y$-axis. As presented in Figure 3, Euler angles are able to visually describe the forearm rotation movements in the maneuver process.

Moreover, the Euler angles can be calculated from the quaternions as

$$\phi(t) = \arctan\frac{2(q_0 q_1 + q_2 q_3)}{1 - 2(q_1^2 + q_2^2)}, \quad \theta(t) = \arcsin\bigl(2(q_0 q_2 - q_3 q_1)\bigr), \quad \psi(t) = \arctan\frac{2(q_0 q_3 + q_1 q_2)}{1 - 2(q_2^2 + q_3^2)}. \quad (4)$$
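For reference, the standard quaternion-to-Euler conversion in (4) can be implemented as a short Python function; this is a minimal sketch following the notation above, and the firmware of the wearable device may differ in detail.

```python
import math

def quaternion_to_euler(q0, q1, q2, q3):
    """Convert a unit quaternion (q0 = scalar part) to Roll-Pitch-Yaw
    Euler angles in radians, using atan2 to keep the correct quadrant."""
    # Roll (rotation about the x-axis)
    phi = math.atan2(2.0 * (q0 * q1 + q2 * q3), 1.0 - 2.0 * (q1 * q1 + q2 * q2))
    # Pitch (rotation about the y-axis); clamp to avoid domain errors
    s = max(-1.0, min(1.0, 2.0 * (q0 * q2 - q3 * q1)))
    theta = math.asin(s)
    # Yaw (rotation about the z-axis)
    psi = math.atan2(2.0 * (q0 * q3 + q1 * q2), 1.0 - 2.0 * (q2 * q2 + q3 * q3))
    return phi, theta, psi
```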

Therefore, the driver's intentions interpreted by the arm rotation in (1) can be represented as

$$I_R(t) = [\phi(t), \theta(t), \psi(t)]. \quad (5)$$

Simultaneously, the human finger motions can be estimated based on the EMG signals collected from the forearm's muscle activities. The EMG data acquired by the wearable sensory system can be described as

$$E(k) = [e_1(k), e_2(k), \ldots, e_n(k)], \quad (6)$$

where $k$ is the sampling time of the EMG sensor, $e_i(k)$ is the output of the $i$th EMG sensor, and $n$ is the number of EMG channels on the wearable sensory system, which is 8 in our work. We sample these EMG signals at a frequency of 200 Hz.

The raw EMG signal is a set of discrete points with positive and negative components. Along with the finger activities, the electric potentials generated by the muscle cells have a distinct effect on the dispersion of the EMG signal. Therefore, to exploit the EMG data accurately, we adopt the standard deviation (SD) of the EMG data to extract the characteristics of the finger activities, since the standard deviation reflects the muscle activities observably. In the human-robotic vehicle interaction, the standard deviation can be calculated by

$$S_i(k) = \sqrt{\frac{1}{N-1}\sum_{j=k-N+1}^{k}\bigl(e_i(j) - \bar{e}_i(k)\bigr)^2}, \quad (7)$$

where $\{e_i(j)\}$ is a set of EMG signals, $\bar{e}_i(k)$ is their mean over the window, and $N$ is the window size determining the number of EMG samples employed to calculate the standard deviation. We select a fixed $N$ in this study.
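A minimal sketch of the windowed standard deviation in (7), together with the channel average that is later used as the muscle-activity input of the fuzzy controller (Section 4.1), is given below; the window length $N$ is left as a parameter because its exact value is not reproduced here.

```python
import numpy as np

def emg_window_sd(emg_window):
    """Per-channel standard deviation over a sliding window.

    emg_window: array of shape (N, n_channels), the last N samples of the
    n_channels = 8 EMG signals sampled at 200 Hz.
    Returns an array with one standard deviation per channel.
    """
    return np.std(emg_window, axis=0, ddof=1)

def emg_feature(emg_window):
    """Average of the per-channel standard deviations, used as the single
    muscle-activity input of the fuzzy controller."""
    return float(np.mean(emg_window_sd(emg_window)))
```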

Moreover, the maneuver intentions interpreted by the finger motions in (1) can be represented by

$$I_E(k) = [S_1(k), S_2(k), \ldots, S_n(k)]. \quad (8)$$

According to (5) and (8), the maneuver intention can be represented as

$$I = \{I_R(t), I_E(k)\}. \quad (9)$$

From the above, it can be concluded that, during the robotic vehicle maneuver process, $I_R(t)$ and $I_E(k)$ are dynamically computed and updated via the human forearm postures and muscle activities. Therefore, the maneuver intentions are interpreted and updated in real time.

4. Maneuver Intention Understanding Using Fuzzy Control

In this section, based on the wearable sensing information and the maneuver intention representation, we build an intention understanding model using the fuzzy control.

4.1. Maneuver Intention Fuzzification

Before developing the fuzzy controller [30], we should define the fuzzy sets and domains of discourse using the wearable sensing information, which contains forearm postures and muscle activities. In this work, we find it difficult to distinguish various intentions by directly employing the raw standard deviations of all EMG channels, whereas their average presents clear differences. Hence, we utilize the average $\bar{S}(k) = \frac{1}{n}\sum_{i=1}^{n} S_i(k)$ to denote the muscle activities in the driving intention. Additionally, in order to distinguish the steering modes in the robotic vehicle maneuver, the forearm rotation angles are also employed as inputs of the fuzzy controller. Therefore, combining the forearm rotation angles and EMG signals together, we deploy the fuzzy controller with four inputs ($\phi$, $\psi$, $\theta$, and $\bar{S}$) and one output (the driving intention $D$).

Moreover, the fuzzy sets of the inputs and output are defined for the following variables:
(i) the Roll angle $\phi$;
(ii) the Yaw angle $\psi$;
(iii) the Pitch angle $\theta$;
(iv) the EMG signal $\bar{S}$;
(v) the driving intention $D$,

where the linguistic labels of the output $D$ denote the designed maneuver intentions (such as acceleration, deceleration, turning left "TL," turning right "TR," and brake), and "NB," "NM," "NS," "ZO," "PB," "PM," and "PS" denote negative big, negative middle, negative small, zero, positive big, positive middle, and positive small, respectively, which are employed to represent the degrees of membership of the inputs in the maneuver intention understanding.

4.2. Membership Functions

According to the fused wearable sensing information and the driving operations in the vehicle maneuver, we define the domains of discourse of the inputs and output for:
(i) the Roll angle $\phi$;
(ii) the Yaw angle $\psi$;
(iii) the Pitch angle $\theta$;
(iv) the EMG signal $\bar{S}$;
(v) the driving intention $D$.

In this study, we ask 5 subjects, who differ in hand size and muscular tension when maneuvering the robotic vehicle, to record the forearm rotation angles and EMG signals. Each subject performs each maneuver intention 20 times. The triangular membership function [31] is employed for each input and output of the fuzzy controller. The membership functions we design are shown in Figures 4~8. It can be seen that, during the maneuver process, the degree of membership varies with the domain of discourse of each input or output correspondingly.
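For illustration, a triangular membership function such as those in Figures 4~8 can be written compactly as follows; the breakpoints a, b, and c are placeholders for values read off the designed membership functions, not the actual parameters used in this study.

```python
def triangular(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c.
    Returns the degree of membership of x in [0, 1] (assumes a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```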

4.3. Maneuver Intention Understanding

Since the fuzzy controller in this work is configured with four inputs and one output, the fuzzy rules cannot be presented in a traditional rule table. However, we can use "IF-THEN" statements [32] to describe the valid fuzzy rules utilized in the robotic vehicle maneuver. As shown in Table 1, the "IF" parts are antecedents and the "THEN" parts are consequents.

For the maneuver intention understanding, we employ "AND" as the fuzzy operator within each rule and "OR" as the fuzzy operator among different rules. As presented in Table 1, for the maneuver intention of "Acceleration," we utilize the aggregate degree of membership as its fuzzy decision, which can be calculated by

$$\mu_{AC} = \max_{r \in R_{AC}} \min\bigl(\mu_{\phi}^{r}(\phi),\, \mu_{\psi}^{r}(\psi),\, \mu_{\theta}^{r}(\theta),\, \mu_{\bar{S}}^{r}(\bar{S})\bigr), \quad (10)$$

where $R_{AC}$ is the set of rules in Table 1 whose consequent is "Acceleration," $\mu_{\phi}^{r}$, $\mu_{\psi}^{r}$, $\mu_{\theta}^{r}$, and $\mu_{\bar{S}}^{r}$ are the membership functions of the antecedents of rule $r$, the minimum implements the "AND" operator within a rule, and the maximum implements the "OR" operator among rules.

Similarly, the fuzzy decisions of the other maneuver intentions can be calculated based on Table 1. Afterwards, we employ the Middle of Maximum (MOM) [33] as the defuzzification approach to calculate the corresponding maneuver intention. Furthermore, the maneuver intention understanding result can be expressed as

$$I^{*} = \mathrm{round}(D^{*}), \quad (11)$$

where $I^{*}$ is the understood maneuver intention index, $D^{*}$ denotes the outputted fuzzy decision, and "round" means that the output is rounded to the nearest integer.
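A simplified sketch of this inference step is given below, assuming the common min/max interpretation of the "AND"/"OR" operators and a discrete Middle of Maximum step consistent with (11); the rule definitions themselves are placeholders standing in for Table 1.

```python
def rule_strength(memberships):
    """'AND' within a rule: minimum of the antecedent memberships."""
    return min(memberships)

def intention_decision(rules, inputs):
    """'OR' among the rules of one intention: maximum rule strength.

    rules: list of rules, each a list of (membership_function, input_name)
    inputs: dict mapping input_name -> crisp value (roll, yaw, pitch, emg)
    """
    return max(
        rule_strength([mf(inputs[name]) for mf, name in rule])
        for rule in rules
    )

def defuzzify_mom(output_values, output_memberships):
    """Middle of Maximum over a discretized output: average the output
    values whose aggregated membership is maximal, then round to the
    nearest integer to index a maneuver intention (cf. (11))."""
    top = max(output_memberships)
    maximal = [v for v, m in zip(output_values, output_memberships) if m == top]
    return round(sum(maximal) / len(maximal))
```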

Therefore, based on (11), the maneuver intention can be understood when the human operates the robotic vehicle under specific requirements.

5. Experimental Results and Analysis

5.1. Experimental Platform

The approach developed in this work is implemented on a lab research autonomous robotic vehicle of the 1/10-Scale Vehicle Research Platform (1/10-SVRP). The 1/10-SVRP consists of five 1/10-scale autonomous robotic vehicles, a human manual driving interface, and a 1/10-scale driving environment including an ultra-wideband-based indoor GPS system, traffic lights, road signs, and various lane setups. As shown in Figure 9, the wearable sensory system described above is worn on the human's forearm during the human-robotic vehicle interaction. The sensory information detected by the IMU and EMG sensors is sent to the control system in real time via a pair of Bluetooth devices. Once the controller generates new commands, these signals are sent to the vehicle motor drivers to execute the goal motions.

In the robotic vehicle maneuver process, the velocity employed for the robotic vehicle is expressed by

$$v = \lambda\, v_{\max}, \quad (12)$$

where $\lambda \in [0, 1]$ is the EMG control factor, determined by the clench and release of the human fist, and $v_{\max}$ is the maximum forward velocity of the vehicle.

The steering angles in the robotic vehicle turning operation are calculated with the following function:

$$\delta = k_s\, \psi + \delta_0, \quad (13)$$

where $\delta$ is the steering angle, $k_s$ is the steering angle coefficient, and $\delta_0$ works as the offset to adjust the initial angle.
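As a rough illustration only, a command mapping of this assumed linear form of (12) and (13) could look like the sketch below; lambda, k_s, v_max, and the offset are placeholders to be tuned on the platform rather than the values used in the experiments.

```python
def velocity_command(emg_factor, v_max):
    """Forward velocity scaled by the EMG control factor in [0, 1], which
    is driven by the clench and release of the fist (assumed linear form)."""
    return max(0.0, min(1.0, emg_factor)) * v_max

def steering_command(yaw_deg, k_s, offset_deg):
    """Steering angle as a scaled Yaw angle plus an offset that adjusts
    the initial angle (assumed linear form)."""
    return k_s * yaw_deg + offset_deg
```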

5.2. Maneuver Intention Understanding Verification

In this section, we test the maneuver intention understanding approach via the forward (acceleration and deceleration) and steering (turning left and turning right) driving modes in practical situations and present the verification results.

5.2.1. Forward Driving

When the human expects the robotic vehicle to speed up or slow down in forward driving, according to the fuzzy rules, he should present specific finger motions and keep the rotation angles constrained within the required intervals at the same time. As depicted in Figure 10, the human forearm rotations and finger activities properly meet the required rules. Meanwhile, the vehicle's movement procedure in Figure 10(a) shows that the vehicle accelerates and decelerates correspondingly along with the variation of the EMG signals. Consequently, it can be seen that the maneuver intention understanding approach correctly follows the human intentions in executing the accelerating and decelerating driving.

5.2.2. Steering

When the human wants the robotic vehicle to turn left or right, in accordance with the steering rules of "TL" and "TR," he should control the Yaw angle properly and simultaneously keep the Roll angle and Pitch angle within the designed constraints. The rotation angle information and EMG signals are presented in Figure 11. It can be seen that the robotic vehicle properly performs the steering in the intended directions along with the variation of the rotation angles, which indicates that the proposed approach correctly understands the maneuver intentions in the steering operation.

5.3. Accuracy Evaluation

In this section, we conduct an understanding accuracy evaluation and compare the results with the work in [16], which utilized a Leap Motion to acquire human behavior information for the vehicle maneuver.

We employ the wearable sensory system to perform all designed maneuver intentions to operate the robotic vehicle without following a fixed route. Each intention is performed 40 times based on the understanding model. The understanding accuracy results are presented in Table 2. It can be seen that the proposed understanding approach is able to effectively and sensitively identify all the maneuver intentions in the human-robotic vehicle interaction. However, some intentions present relatively lower accuracy than others. To address this, better fuzzy rules can be designed through practical trials to improve the understanding accuracy of these maneuver intentions.

In addition, the average understanding accuracy in this study is about 93.33%, which is higher than that of the work in [16]. Furthermore, from [16] we can calculate that the standard deviation of all errors (SD-E) is about 4.52, which is higher than the 3.76 of our work. Therefore, it can be concluded that our approach is more stable in the maneuver intention understanding. The result comparisons are shown in Table 3.

5.4. Robustness Evaluation

Based on the research platform, we design two tasks to evaluate the robustness of the maneuver intention understanding model in some common driving scenes, such as driving straight in the lane, turning at an intersection, and turning for obstacle avoidance.

5.4.1. Lane Tracking

When the human maneuvers a robotic vehicle in a straight lane, the straightness of driving is very significant to the traffic; meanwhile, serpentine driving usually results in a fine or even a serious accident. Therefore, the straight driving test is conducted based on the intention understanding model.

We ask 5 individuals with valid driving licenses and considerable driving experience to maneuver the vehicle one by one for two loops. Each individual operates one straight driving process in each loop, so we obtain 10 driving records from the experiment. As shown in Figure 12, the vehicle is driven forward from A to B.

According to the maneuver results, the occurrences of lane departure in the straight driving are shown in Figure 13. The numbers of lane departures of these ten driving records are "2," "1," "1," "0," "2," "1," "0," "1," "3," and "1," respectively. The average number of lane departures is 1.20, which suggests that the maneuver intention understanding approach presents robust stability and adaptability for different individuals in straight driving situations.

5.4.2. Obstacle Avoidance

To evaluate the flexibility of the maneuver intention understanding model, some hybrid driving modes for obstacle avoidance are allocated to the robotic vehicle in the second task. As presented in Figure 14, the vehicle is driven from A to B. During this process, the vehicle must cross the intersection and avoid colliding with obstacles on the road. The experiment is conducted by the same 5 individuals using the same method as in task 1.

Based on the driving records, the numbers of obstacle collisions are "1," "2," "1," "1," "3," "0," "1," "3," "2," and "1," respectively. As shown in Figure 15, the average number of obstacle collisions is 1.50. From the results, it can be observed that the maneuver intention understanding approach presents robust flexibility for the hybrid driving modes in the complex road setting. Compared to task 1, the standard deviation of the numbers of obstacle collisions (0.97) is higher than that of the numbers of lane departures (0.92), which reveals that the intention understanding approach shows relatively weaker robustness in the hybrid driving modes. One key reason is that the different fuzzy rules for the intentions present diverse understanding accuracies, some of which impact the overall robustness. Additionally, it is easy for drivers to feel nervous in complicated driving surroundings, which can result in obstacle collisions. However, these problems could be overcome by optimizing the fuzzy rules of the proposed approach and by giving the human more practice.
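The reported averages and standard deviations can be reproduced directly from the listed driving records, for example:

```python
import statistics

lane_departures = [2, 1, 1, 0, 2, 1, 0, 1, 3, 1]
obstacle_collisions = [1, 2, 1, 1, 3, 0, 1, 3, 2, 1]

print(statistics.mean(lane_departures))                  # 1.2
print(statistics.mean(obstacle_collisions))              # 1.5
print(round(statistics.stdev(lane_departures), 2))       # 0.92
print(round(statistics.stdev(obstacle_collisions), 2))   # 0.97
```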

From the above, it is shown that the vehicle maneuvers are successfully and effectively performed using the maneuver intention understanding approach. Notably, the experimental results and evaluations demonstrate that, by taking advantage of the natural wearable sensing information, the human driver can maneuver the vehicle using only forearm postures and muscle activities in a much easier and more stable manner, with considerable accuracy and robustness.

6. Conclusions

A novel and practical wearable-sensing-based maneuver intention understanding approach was proposed to assist the human driver in naturally operating robotic vehicles without physical contact. The wearable sensory device can be naturally applied in complicated human-vehicle interactions without requiring the human's hands to be physically involved in the driving task. First, when the human driver performed his maneuver intentions, the wearable sensory information, which included forearm postures and muscle activities, was recorded and updated in real time. Then, after obtaining the parameterized intention information, we developed a maneuver intention understanding approach using fuzzy control. Afterwards, based on the proposed approach, we conducted a set of experiments on our vehicle research platform. Experimental results and evaluations demonstrated that, by taking advantage of the nonphysical contact and natural handleability of this approach, the robotic vehicle was successfully and effectively maneuvered to accomplish the driving tasks with considerable accuracy and robustness in human-robotic vehicle interaction.

In human-vehicle interaction, the driver's unconscious gestures and involuntary movements may cause unstable detection and interpretation of the driver's intentions. Therefore, future work will be conducted on integrating multiple kinds of sensing information, such as human gaze information and natural language, as triggers to avoid false-positive intention understanding. Additionally, looking forward to extending the applications of our approach to more complicated situations, future work will also integrate radar sensing information as an input of the fuzzy control to improve the intention understanding accuracy and avoid potential collisions.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Supplementary Materials

A demo video recorded for the experimental verification is provided as supplementary material.