Abstract

Human-robot collaboration (HRC) is a key feature distinguishing the new generation of robots from conventional ones. HRC topics have recently been investigated extensively in academic institutes and companies to improve human-robot interactive performance. Generally, human motor control regulates human motion adaptively to the external environment with safety, compliance, stability, and efficiency. Inspired by this, we propose an augmented approach to make a robot understand human motion behaviours based on human kinematics and human postural impedance adaptation. Human kinematics is identified by a geometric kinematics approach in order to map the human arm configuration, as well as a stiffness index controlled by hand gesture, to an anthropomorphic robot arm. While human arm postural stiffness is estimated and calibrated within the robot's empirical stability region, human motion is captured by employing a geometry vector approach based on Kinect. A discrete-time biomimetic controller is employed to make the Baxter robot arm imitate human arm behaviours based on the Baxter robot dynamics. An object moving task is implemented to validate the performance of the proposed methods on the Baxter robot simulator. Results show that the proposed approach to HRC is intuitive, stable, efficient, and compliant, suggesting a variety of applications in human-robot collaboration scenarios.

1. Introduction

HRC represents the key characteristics of the next generation of robots, which work without cages and share the same workspace with humans [1, 2]. It allows humans to interact with robots physically with guaranteed safety and compliance. Therefore, HRC has a variety of applications in home care [3], offices, industry [4], agriculture [5], school education, rehabilitation, and various human motor skill training scenarios. Collaborative robots are being extensively investigated by researchers and technology companies and are expected to dominate the robotics field in the next few years [6].

Conventional industrial robots have many advantages, such as working with precision and speed, but they rely on high stiffness, high-precision parts and components, and complicated control systems. This makes them relatively expensive, inflexible, and dangerous for humans [7]. Moreover, due to these disadvantages, such robots are only suitable and efficient for simple, repetitive tasks in structured environments. However, factory production needs to become more flexible and smart to meet the diversity and personalization of customer demand, which tends to be the mainstream in the future [8]. Correspondingly, robots are required to fulfil more dexterous and compliant tasks to reduce human workload and improve product quality. Conventionally, this requires expert modelling and programming skills, so only a few people can implement such skill transfer, which discourages medium and small companies from employing robots on the production line. Therefore, an intuitive and augmented approach to transferring complicated skills from human to robot will make robot applications more practical and remove the virtual cage between human and robot completely [9]. For example, a service robot needs to fit customers' needs in home or office scenarios precisely, and such robots need to interact with people of different ages and with different cognitive and knowledge backgrounds. HRC with an intuitive human-robot interface is therefore a compliant and efficient way to extend the shared human-robot workspace and its applicable areas.

On the other hand, as the new generation of robots, namely, collaborative robots (e.g., Figure 1), is expected to share the same workspace with humans, safety is key for collaborative robotics [7]. Safety is often achieved through complex control systems [10], but robots are rarely intrinsically safe by design. The economical way is to make the robot arm human-like, as compliant as a human limb, with impedance that can be adapted to a dynamic environment [11]. Thus, studies on compliant control have received increasing attention [12, 13] in tasks involving uncertain and dynamic environments, for example, physical human-robot cooperation [14–16]. Most existing relevant research can be classified into three categories: (i) design of compliant mechanisms using active (e.g., KUKA LBR [17] and Barrett WAM Arm [18]), passive (e.g., AWAS [19]), or hybrid (e.g., DLR hand-arm system [20]) robot joints with intrinsic compliance, providing a safe and user-friendly interface for telecontrol by adapting robot joint impedance; (ii) online optimisation of impedance regulation for specific tasks in structured environments, where the impedance tuning algorithms rely on an accurate robot-environment interaction model [21]; (iii) biomimetic control, in which, instead of conventional automatic control, the robot imitates human compliance and adaptivity by learning human stiffness regulation [22–24]. Therefore, teleimpedance control, which enforces a target relation between force and displacement [25], is ideal for robots performing tasks with humans and the environment in a safer and more natural way [26, 27].

For effective HRC with safety, compliance, and stability, it is crucial to enable robots to integrate interactive cognition and manipulation with humans, similar to the mechanisms involved in human-human interaction [28]. In this paper we carry out preliminary research on single-human motor adaptation behaviour in specific tasks; for example, in object moving tasks, humans tend to exert force along the direction in which the stiffness amplitude is relatively larger [29]. Inspired by human mechanical impedance adaptivity to various environments [30], human motor skill extraction via EMG and biomimetic learning controllers [31, 32] have been developed to achieve more natural and efficient performance, as they directly reflect impedance variation without disturbing motor control during the task. EMG signals represent muscle activation regulated by the central nervous system (CNS) and can be used to estimate limb stiffness for compliant demonstration, although precise estimation remains a challenge due to muscle fatigue, noise perturbation, estimation model error, and so forth [33–35]. In [26], an sEMG-based human-machine interface was designed for teleimpedance control, though without force feedback or an autonomous controller after transferring human impedance regulation to the robot. Furthermore, humans can also adapt stiffness by changing arm posture.

Augmented HRC can be achieved through recognition and imitation of human arm motion [36] and stiffness profiles. According to the hardware employed, human motion extraction methods can be divided into two categories: (i) contact methods employing wearable sensors: in [37, 38], Inertial Measurement Units (IMUs) are used to collect human motion angles for human motion recognition and interaction with robots, but such methods require humans to wear specific devices, which may cause inconvenience in HRC; (ii) noncontact methods: Date et al. [39] proposed a vision-based inverse kinematics approach to estimate the full-body motion of a 23-DOF human model via visual cues. Noncontact methods do not require placing any physical devices on the human and provide a natural way to capture human motion, although they may suffer from occlusion. In addition, human gesture also plays a significant role in human-robot interaction.

Gesture information acquisition is similar to human motion extraction: it can be divided into vision-based and wearable-sensor-based methods. Starner et al. [40] and Shanableh et al. [41] achieved American and Arabic sign language recognition, respectively, based on vision information. Liang and Ouhyoung [42], Starner and Pentland [43], and Kim et al. [44] recognized sign languages using data collected by data gloves. Although data gloves have high precision, their high price and cumbersome wearing prevent extensive use. As for human motion analysis, most researchers employ inverse kinematics methods. The 7-DOF human arm is a redundant manipulator, so for a given hand position and orientation there are infinitely many inverse kinematics solutions. Moradi and Lee [45] defined a redundancy circle and used a position on it as the redundancy parameter to resolve this ambiguity. Fang and Ding [46] proposed a three-level task-motion planning framework and used a human arm triangle space to plan human arm motion. Based on the description of human arm posture in [47], we propose a novel approach, the geometry vector method, which calculates each joint angle of the 7-DOF human arm using only the 3D positions of the joints. We use the Kinect sensor [48] (version 2.0) to obtain the 3D coordinates of each joint. Kinect is a depth camera developed by Microsoft that is readily used for motion tracking and gesture recognition. Furthermore, compared with other sensors, Kinect is a humanized device and requires no physical contact with the human.

In this paper, a virtual Baxter robot setup is employed, with one Baxter robot arm used as the slave arm (see Figure 2). Motion estimation from Kinect is utilised to estimate the demonstrator's limb postural stiffness. Instead of estimating absolute human endpoint stiffness and motion, discrete-time stiffness estimation and motion extraction are employed to eliminate signal drift and residual errors. In Section 2, the methods for human arm postural stiffness estimation and motion extraction are described, together with the discrete-time biomimetic controller design. In Section 3, the experimental setup is presented. Results are presented in Section 4, followed by the conclusion in Section 5.

2. Methods

2.1. Human Arm Postural Stiffness Modelling

Relevant research indicates that limb impedance reflects human skill performance [49]. Therefore, extracting impedance regulation from humans is a promising way to transfer human skills to robots, compared with conventional mathematical modelling. Generally, human upper limb endpoint stiffness is determined mainly by muscle cocontraction and by geometric pose; that is, we can adapt our arm stiffness through muscle cocontraction as well as through arm pose changes. In this paper, we mainly investigate the effect of arm pose on interaction with external forces. According to [50], human endpoint stiffness in Cartesian space can be expressed in terms of joint (muscle) stiffness and postural stiffness as
$$K_c = J^{-T}(q)\left(K_q - \frac{\partial J^{T}(q)}{\partial q}F\right)J^{-1}(q),\tag{1}$$
where $q$ is the vector of the 7 joint angles of the arm, $J(q)$ is the arm Jacobian matrix, $K_c$ is the endpoint (Cartesian) stiffness, $K_q$ is the joint stiffness, and $F$ is the external force applied at the endpoint. Obviously, the human endpoint stiffness depends mainly on the Jacobian (posture), the joint stiffness, gravity, and motion. The arm joint angles of the human are estimated by Kinect; consequently, the Jacobian matrix can be calculated from the joint angles and the skeleton link lengths. According to [51, 52], we assume that the joint stiffness can be represented as a scaled intrinsic constant stiffness $K_{q0}$,
$$K_q = \alpha K_{q0},\tag{2}$$
and that the gravity contribution $K_g$ to the human endpoint stiffness enters additively,
$$K_c = \alpha\, J^{-T}(q)\, K_{q0}\, J^{-1}(q) + \beta K_g,\tag{3}$$
where $\alpha$ and $\beta$ are stiffness coefficients, respectively. Here, the stiffness index $\alpha$ can be adapted according to different scenarios. In this paper, we employ hand posture in discrete time to indicate variations of $\alpha$. For example, a fist usually suggests that a human increases arm stiffness to keep the arm stable against external disturbance or to exert a strong force. Thus we can employ hand gestures to control the robot stiffness index based on the empirically identified human arm stiffness.
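To make the mapping concrete, the following minimal sketch (Python with NumPy; all function and variable names are ours) applies the gesture-scaled joint stiffness of (2) and the quasi-static form of (1), dropping the external-force term; since the 7-DOF arm Jacobian is 6x7 and not square, the Moore-Penrose pseudoinverse stands in for the inverse:

import numpy as np

def joint_stiffness_index(K_q0, alpha):
    # Eq. (2): joint stiffness as a gesture-scaled intrinsic stiffness, K_q = alpha * K_q0.
    return alpha * K_q0

def cartesian_stiffness(J, K_q):
    # Quasi-static form of (1): K_c = J^{-T} K_q J^{-1}; the pseudoinverse
    # replaces J^{-1} because the redundant 6x7 Jacobian is not invertible.
    J_pinv = np.linalg.pinv(J)
    return J_pinv.T @ K_q @ J_pinv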

2.2. Human Arm Jacobian Estimation

Figure 3 shows the reference and link coordinate systems of the 7-DOF human arm using the Denavit-Hartenberg (D-H) method. The D-H parameters of the kinematic model of the human arm are listed in Table 1, where $l_1$, $l_2$, and $l_3$ are the lengths of the upper arm, forearm, and palm, respectively. According to the D-H method, the coordinate transformation from frame $i-1$ to frame $i$ is
$$^{i-1}T_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i\\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i\\ 0 & \sin\alpha_i & \cos\alpha_i & d_i\\ 0 & 0 & 0 & 1 \end{bmatrix}.\tag{5}$$
The human arm Jacobian matrix is a function of the joint angles and is given columnwise by
$$J(q) = \begin{bmatrix} J_1 & J_2 & \cdots & J_7 \end{bmatrix},\qquad J_i = \begin{bmatrix} z_{i-1}\times(p_e - p_{i-1})\\ z_{i-1}\end{bmatrix},\tag{6}$$
where $z_{i-1}$ and $p_{i-1}$ (the joint axis direction and origin of frame $i-1$) can be calculated from (5) and (6), and the end-effector position $p_e$ follows from the coordinate transformation from the base frame to the end-effector frame, which can be derived from
$$^{0}T_e = {}^{0}T_1\,{}^{1}T_2\cdots{}^{6}T_7.\tag{7}$$
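As an illustration, the sketch below (Python/NumPy; names are ours, and the D-H parameter triples are placeholders for the entries of Table 1) chains the transforms of (5) and stacks the Jacobian columns of (6):

import numpy as np

def dh_transform(theta, d, a, alpha):
    # Eq. (5): homogeneous transform from frame i-1 to frame i (standard D-H convention).
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def geometric_jacobian(dh_params, q):
    # Eq. (6): column i is [z_{i-1} x (p_e - p_{i-1}); z_{i-1}] for revolute joints.
    # dh_params is a list of 7 (d, a, alpha) triples taken from Table 1.
    T = np.eye(4)
    origins, z_axes = [T[:3, 3]], [T[:3, 2]]
    for (d, a, alpha), theta in zip(dh_params, q):
        T = T @ dh_transform(theta, d, a, alpha)
        origins.append(T[:3, 3])
        z_axes.append(T[:3, 2])
    p_e = origins[-1]                       # end-effector position, cf. (7)
    cols = [np.hstack((np.cross(z_axes[i], p_e - origins[i]), z_axes[i]))
            for i in range(len(q))]
    return np.array(cols).T                 # 6x7 Jacobian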

Now, the problem of obtaining the Jacobian matrix reduces to calculating all 7 joint angles. By using the Kinect sensor, we can obtain the skeleton data of the human body directly, which include the 3D positions of 25 joints relative to the camera coordinate frame, as shown on the left of Figure 4. Then, the geometry model of the human left arm is built as shown on the right of Figure 4. We take the point HipLeft as the origin of the base coordinate frame, with the $x$-axis and $y$-axis aligned with two skeleton reference vectors. It is then easy to obtain the unit normal vectors of the three axes of the base frame, $\hat{n}_x$, $\hat{n}_y$, and $\hat{n}_z$, with the third axis given by the right-hand rule:
$$\hat{n}_z = \hat{n}_x \times \hat{n}_y.\tag{8}$$

In order to calculate all of the joint angles correctly, two conditions should hold: (i) the human should stand straight and keep the body as vertical to the ground as possible, and (ii) the thumb must stay in the same plane as the palm during motion, because the plane defined by the thumb, wrist, and hand points is needed to calculate some of the other joint angles. The seven angles are all in joint space, and their initial positions are displayed in Figure 3. The geometry vector method we propose is based on the angle defined by the cosine of two vectors,
$$\theta = \arccos\frac{\vec{u}\cdot\vec{v}}{\|\vec{u}\|\,\|\vec{v}\|}.\tag{9}$$
Moreover, the angle between two planes can be obtained from their normal vectors. Since the angle between two vectors ranges from $0$ to $\pi$, this must be taken into account when solving for the 7 joint angles.
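The two geometric primitives the method relies on, the vector-vector angle of (9) and the dihedral angle between two planes, reduce to a few lines (Python/NumPy; a hypothetical sketch with our own names):

import numpy as np

def angle_between(u, v):
    # Eq. (9): angle from the cosine of two vectors; the cosine is clipped to
    # avoid arccos domain errors from floating-point noise. Result is in [0, pi].
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def plane_normal(p1, p2, p3):
    # Unit normal of the plane through three skeleton points (e.g., C, D, E).
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def angle_between_planes(n1, n2):
    # Dihedral angle obtained from the two plane normals.
    return angle_between(n1, n2)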

As shown in Figure 4, $\theta_1$ is the shoulder yaw angle; it is the supplement of the dihedral angle between plane COD and the base reference plane. $\theta_2$ is the shoulder pitch angle, namely, the angle between the vertical axis of the base frame and the upper arm vector $\vec{CD}$. $\theta_3$ and $\theta_4$ are the shoulder roll and elbow roll angles, taking into account the initial position shown in Figure 3: $\theta_3$ can be seen as the angle between planes COD and CDE and, by the same method, $\theta_4$ can be regarded as the angle between planes CDE and DEH. $\theta_5$ is the angle between the upper arm and the forearm, that is, the angle between vectors $\vec{DC}$ and $\vec{DE}$. $\theta_6$ is the angle between the forearm and the hand, so it is equal to the angle between the forearm vector $\vec{DE}$ and plane EFH.

Computing $\theta_7$, the wrist yaw angle, is more challenging; it can be viewed as the angle between an auxiliary vector $\vec{n}$ and a reference vector, so the problem becomes solving for $\vec{n}$. We know that $\vec{n}$ must lie in the plane EFH and, moreover, that it is perpendicular to $\vec{EF}$. Thus we suppose $\vec{n} = \vec{EH} - \lambda\,\vec{EF}$; imposing $\vec{n}\cdot\vec{EF} = 0$ yields
$$\lambda = \frac{\vec{EH}\cdot\vec{EF}}{\vec{EF}\cdot\vec{EF}},\qquad \vec{n} = \vec{EH} - \frac{\vec{EH}\cdot\vec{EF}}{\vec{EF}\cdot\vec{EF}}\,\vec{EF}.\tag{10}$$
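A sketch of this construction (Python/NumPy; reading the elided perpendicularity constraint as $\vec{n}\perp\vec{EF}$, which is our assumption):

import numpy as np

def wrist_axis(E, F, H):
    # Eq. (10): the auxiliary vector n lies in plane EFH and is perpendicular
    # to EF; it is obtained by removing from EH its projection onto EF.
    ef, eh = F - E, H - E
    n = eh - (np.dot(eh, ef) / np.dot(ef, ef)) * ef
    return n / np.linalg.norm(n)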

Up to now, all 7 joint angles have been calculated. Although the geometry vector approach can meet the demands of most cases, there are still some situations where it does not work well: for example, when the plane defined by the points ShoulderLeft, ElbowLeft, and WristLeft does not exist because the three points are collinear.

Then, two kinds of experiments were implemented to verify the performance of the approach. The experiments were conducted indoors under adequate illumination, with a single person standing facing the Kinect at a distance of about 2 meters. The first is the static position experiment shown in Figure 5. Six of the seven joint angle curves vary accurately and correctly: they are smooth, and their fluctuations are small enough to be ignored. However, the fluctuation of the angle computed from plane CDE is obvious. The reason is that, in this situation, the plane CDE defined by the points ShoulderLeft, ElbowLeft, and WristLeft is not stable, because the positions of these three points captured by Kinect are almost collinear; therefore, the normal vector of the plane jitters within the precision range of the sensor. Although there are indeed some fluctuations in the curves, their overall trends are correct and acceptable, which demonstrates that the approach for obtaining the 7 joint angle values works well in static situations.

Figures 6 and 7 show two dynamic actions which represent the variation of the upper arm and the forearm, respectively, through the corresponding joint angles. It is evident from Figures 6 and 7 that these angles vary periodically and regularly with the rotation of the upper arm and forearm. These trajectories are consistent with expectation, indicating that our method performs well.

Finally, a simulation experiment was carried out to illustrate the real-time performance of our approach. The simulation platform used in this paper is the Baxter robot model; the Baxter robot has two anthropomorphic arms, each with 7 DOFs. The experiment flow diagram is shown in Figure 8. From the results of this experiment we observed that all 7 angles of the robot arm changed smoothly along with the variation of the operator's arm; the motions of both are similar but not identical because of their different sizes, which shows that the proposed approach is feasible and effective. In addition, through the coordinate transformation between the human coordinate frame and the Baxter coordinate frame, together with the trial data, we can identify the transformation matrix $R$ and the offset vector $d$ from the human coordinate system to the robot coordinate system; that is,
$$p_r = R\,p_h + d,\tag{14}$$
where $p_h$ and $p_r$ are the positions of a point in the human and robot coordinate frames, respectively, and the entries of $R$ and $d$ are identified from the trial data.
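A minimal sketch of (14) (Python/NumPy): the placeholder matrices below stand in for the transformation and offset identified from the trial data, whose numerical values are not reproduced here:

import numpy as np

R = np.eye(3)      # placeholder for the identified transformation matrix
d = np.zeros(3)    # placeholder for the identified offset vector

def human_to_robot(p_h):
    # Eq. (14): map a point from the human (Kinect) frame to the Baxter frame.
    return R @ np.asarray(p_h) + d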

2.3. Discrete-Time Biomimetic Controller

In this section, a discrete-time biomimetic controller with visual feedback is presented for human-robot interaction. Flash and Hogan [53] conducted several experiments aimed at mathematically representing the kinematic part of the model. They found that, for a movement starting and ending with zero velocity, the velocity profile of the hand in a Cartesian coordinate system typically follows a bell-shaped curve. This velocity curve, once integrated, gives the general kinematic representation of the trajectory of the human hand during a typical pushing task:
$$x(t) = x_0 + \Delta x\left(10\tau^3 - 15\tau^4 + 6\tau^5\right),\qquad \tau = \frac{t}{t_f},\tag{16}$$
where $\Delta x$ is the displacement performed in the trajectory, $\tau$ is the normalized time, $x_0$ is the initial position vector, and $t_f$ is the total time required to perform the trajectory.
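A direct implementation of (16) (Python/NumPy; clamping the normalized time is our addition, to keep samples outside $[0, t_f]$ well behaved):

import numpy as np

def minimum_jerk(x0, dx, tf, t):
    # Eq. (16): x(t) = x0 + dx * (10 tau^3 - 15 tau^4 + 6 tau^5), tau = t / tf.
    tau = min(max(t / tf, 0.0), 1.0)        # normalized time, clamped to [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return np.asarray(x0) + s * np.asarray(dx)

Differentiating (16) once recovers the bell-shaped velocity profile observed by Flash and Hogan.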

Previous research [54–56] demonstrated the capability of impedance control to be used as a reactive model following this kinematic model for human-robot collaboration (HRC). Three impedance parameters can be applied to this model, namely, mass, damping, and stiffness. Kosuge and Kazamura [54] demonstrated experimentally that the effect of the stiffness parameter is negligible. Hence, this model can be expressed as
$$M\ddot{x} + D\dot{x} = F,\tag{17}$$
where $F$ is the force vector between the human and the robot, $M$ is a virtual mass parameter, $D$ is a virtual damping parameter, and $\dot{x}$, $\ddot{x}$ are, respectively, the velocity and acceleration vectors, which can be derived from (16).
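In discrete time, the reactive model (17) can be integrated step by step as an admittance; a minimal sketch (Python/NumPy; the forward-Euler scheme is our choice, not prescribed by the source):

import numpy as np

def admittance_step(F, v, M, D, dt):
    # Eq. (17): solve M a + D v = F for the acceleration, then integrate the
    # velocity one forward-Euler step; integrating v again yields the
    # reference position for the robot arm.
    a = np.linalg.solve(M, F - D @ v)
    return v + a * dt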

The torque input to the slave arm is produced by the PD controller defined below in (18), combined with a gravity compensator $G(q)$ applied in joint space in (19). Consider
$$F = K_s(x_d - x) + D_s(\dot{x}_d - \dot{x}),\tag{18}$$
where $D_s$ and $K_s$ are the slave arm endpoint damping and stiffness, respectively. The stiffness $K_s$ is set in real time according to the estimated stiffness at the endpoint of the human operator's limb, and $D_s$ is set proportional to $K_s$ with a fixed ratio, to be specified by the designer.

The endpoint force in Cartesian space needs to be transformed to joint space by (19), in order to be implemented on the robot arm joint torque controller. Consider
$$\tau = J_r^{T}(q)F + G(q),\tag{19}$$
where $\tau$ denotes the robot joint torques, $F$ denotes the robot endpoint force in Cartesian space, $J_r$ represents the robot arm Jacobian, and $G(q)$ is the gravity compensation term. Therefore, the stiffness in joint space of the slave robot can be derived from the stiffness in Cartesian space according to
$$K_q^{r} = J_r^{T}(q)\,K_c\,J_r(q),\tag{20}$$
where $K_q^{r}$ denotes the robot joint stiffness and $K_c$ is the human arm endpoint stiffness. The model (shown in Figure 9) of force and impedance enables simulation of a movement of the arm or leg along a planned trajectory in an environment characterized by force and impedance. First, the torque is computed using (18) and (19); then the impedance is used to compute the effect of the feedback control term on human-robot interaction.
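The control pipeline of (18)-(20) then amounts to three small functions (a Python/NumPy sketch with our own names; the gravity torque vector g is assumed to come from the robot model):

import numpy as np

def endpoint_force(x_d, x, v_d, v, K_s, D_s):
    # Eq. (18): endpoint PD law, F = K_s (x_d - x) + D_s (v_d - v).
    return K_s @ (x_d - x) + D_s @ (v_d - v)

def joint_torques(J_r, F, g):
    # Eq. (19): tau = J_r^T F + G(q); g is the gravity-compensation torque vector.
    return J_r.T @ F + g

def joint_stiffness(J_r, K_c):
    # Eq. (20): K_q = J_r^T K_c J_r maps the Cartesian stiffness to joint space.
    return J_r.T @ K_c @ J_r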

3. Experimental Setup

The experimental platform is a dual-arm Baxter robot, which consists of a torso, a 2-degree-of-freedom (2-DOF) head, two 7-DOF arms, integrated cameras, sonar, torque sensors, encoders, and direct programming access via the Robot Operating System (ROS) interface. Each of the 7 joints of a Baxter arm is driven by a series elastic actuator (SEA), providing intrinsic safety. Moreover, joint stiffness and damping can be modified under the Baxter robot torque control mode, which is simplified as a PD impedance control law in this paper.

To verify the efficiency of the proposed HRC method, human-robot telemanipulation for moving objects (shown in Figure 10) is implemented with comparative tests under two different scenarios, that is, a low-stiffness mode and a human-operated mode. The robot left arm serves as the slave arm, guided by the human operator's limb endpoint; the human left arm serves as the master arm. As for the Baxter robot simulator setup, the Baxter simulator is first launched according to the Baxter robot wiki guideline [57]. Then a book shelf and 4 bricks are inserted from the Gazebo environment library and configured as shown in Figure 10. The human operator moves the Baxter left arm via visual feedback and adapts arm pose to move the bricks, enabling the Baxter robot to imitate human motor adaptive behaviours by means of UDP communication between the host computer and the Baxter simulator, sketched below. In addition, the Baxter robot initial stiffness is set to small values (Left_S0: 30, Left_S1: 30, Left_E0: 30, Left_E1: 20, Left_W0: 20, Left_W1: 15, Left_W2: 10; unit: N·m/rad).
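A minimal sketch of the host-to-simulator link (Python standard library; the port, address, and message layout are illustrative assumptions, not the paper's actual protocol):

import json
import socket

SIM_ADDR = ("127.0.0.1", 9090)   # hypothetical Baxter-simulator UDP endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_command(joint_angles, stiffness_index):
    # Stream the 7 estimated joint angles and the gesture-driven stiffness
    # index from the Kinect host computer to the Baxter simulator.
    msg = {"q": list(joint_angles), "alpha": stiffness_index}
    sock.sendto(json.dumps(msg).encode("utf-8"), SIM_ADDR)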

4. Results

As indicated in Section 3, the experiment is based on two scenarios, as shown in Figure 11. Baxter's motion is regulated by the human operator in real time. The human operator's arm is simplified as a 7-DOF manipulator, corresponding to the Baxter robot joints. Meanwhile, the human hand posture is captured by Kinect to control the Baxter robot stiffness adaptation, which varies according to how long the posture is maintained.

Test I. The human operator teleoperates the Baxter robot using Kinect, which extracts the human motion in real time. First, the human operator guides Baxter to the book shelf and performs precise positioning and calibration by visual feedback. Second, the human operator employs pose 1, shown in Figure 11(a), to move the object, but the object does not move at all. Naturally, the human operator changes his arm configuration to pose 2 and tries again, but the object still does not move. As indicated in Figure 13, the Baxter robot can adapt its stiffness through different poses as well; however, the initial stiffness set on the Baxter robot is too small to move the object with either pose.

Test II. As shown in Figure 13(b), the human operator interacts with the Baxter robot with human motion intention detection based on Kinect. The human operator's hand gestures are employed to control the Baxter robot stiffness variations, as shown in Figure 12. Generally, a fist means the human operator strengthens the muscles to increase force to fulfil a specific task. A thumb-up denotes that the Baxter robot stiffness is sufficient to complete the task, for example, moving objects in this test. A finger-spread pose indicates that the Baxter robot should decrease its stiffness and become compliant again. In this way, Baxter can understand human motion intention simply, stably, and efficiently; a minimal sketch of this logic follows.
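The sketch below (Python; the gesture labels, ramp rate, and index bounds are illustrative assumptions) captures the gesture-to-stiffness mapping just described:

ALPHA_MIN, ALPHA_MAX, RATE = 1.0, 5.0, 0.5   # index bounds and per-second ramp rate

def update_stiffness_index(alpha, gesture, dt):
    # Fist ramps the stiffness index up, finger spread ramps it down, and
    # thumb-up (or any other gesture) holds it; the change grows with how
    # long the posture is maintained (dt), as described in Section 4.
    if gesture == "fist":
        alpha += RATE * dt
    elif gesture == "spread":
        alpha -= RATE * dt
    return min(max(alpha, ALPHA_MIN), ALPHA_MAX)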

In this test, the human operator replicates the motion performed in Test I. With the first pose for moving the object, Baxter cannot complete the task; even after the human operator increases Baxter's stiffness via hand gesture, Baxter still cannot accomplish it. With the second pose, the human operator controls Baxter's motion and stiffness in the same way, and Baxter finishes the task easily. When the task is completed, the human operator spreads the fingers to make the Baxter robot compliant again.

5. Conclusion

In this paper, we have developed a natural discrete-time interface for human-robot collaboration by making the robot understand human motion intention, especially for stiffness adaptation, instead of using EMG signals, which require electrodes to be attached to the skin. In addition, we used a geometry vector approach to extract the 7-DOF human arm Jacobian for stiffness transformation from Cartesian space to joint space in real time. A biomimetic controller is employed to transfer human motion regulation to the robot. Experiments validated that the proposed approach to HRC is intuitive, compliant, and efficient; it may find applications in various HRC scenarios.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported in part by the State Key Laboratory of Robotics and System (HIT) under Grant SKLRS-2014-MS-05/SKLRS201405B, by the National Natural Science Foundation of China (NSFC) under Grant 61273339, and by the Heilongjiang Provincial Natural Science Foundation of China under Grants QC2014C072 and F2015010. The authors would like to acknowledge the support of Mr. Chao Wang in the experiment implementation.