Discrete Dynamics in Nature and Society
Volume 2016, Article ID 9126056, 13 pages
Research Article

An Augmented Discrete-Time Approach for Human-Robot Collaboration

State Key Lab of Robotics Systems, Harbin Institute of Technology, Harbin 150001, China

Received 10 December 2015; Accepted 4 February 2016

Academic Editor: Paolo Renna

Copyright © 2016 Peidong Liang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Human-robot collaboration (HRC) is a key feature distinguishing the new generation of robots from conventional ones. HRC topics have recently been investigated extensively in academic institutes and companies to improve human-robot interactive performance. Generally, human motor control regulates human motion adaptively to the external environment with safety, compliance, stability, and efficiency. Inspired by this, we propose an augmented approach that enables a robot to understand human motion behaviours based on human kinematics and human postural impedance adaptation. Human kinematics is identified by a geometric kinematics approach that maps the human arm configuration, together with a stiffness index controlled by hand gesture, onto an anthropomorphic robot arm. Human arm postural stiffness is estimated and calibrated within the robot's empirical stability region, while human motion is captured by a geometry vector approach based on Kinect. A discrete-time biomimetic controller makes the Baxter robot arm imitate human arm behaviours based on the Baxter robot dynamics. An object-moving task implemented on the Baxter robot simulator validates the performance of the proposed methods. Results show that the proposed approach to HRC is intuitive, stable, efficient, and compliant, suggesting a variety of applications in human-robot collaboration scenarios.

1. Introduction

HRC captures the key characteristics of the next generation of robots, which work without cages and share the same workspace with humans [1, 2]. It allows humans to interact with robots physically, with safety and compliance guaranteed. HRC therefore has a wide range of applications in home care [3], offices, industry [4], agriculture [5], school education, rehabilitation, and various human motor skill training scenarios. Collaborative robots are being investigated extensively by researchers and technology companies and may be expected to dominate the robotics field in the next few years [6].

Conventional industrial robots have many advantages, such as precision and speed, but they rely on high stiffness, highly precise parts and components, and complicated control systems. This makes them relatively expensive, inflexible, and dangerous for humans [7]. Due to these disadvantages, such robots are only suitable and efficient for simple, repetitive tasks in structured environments. Factory production, however, needs to become more flexible and smart to meet the diversity and personalization of customer demand, which is likely to be mainstream in the future [8]. Correspondingly, robots are required to fulfil more dexterous and compliant tasks to reduce human workload and improve product quality. Conventionally, such skill transfer requires expert modelling and programming skills that only a few people possess, which discourages medium and small companies from employing robots on the production line. An intuitive and augmented approach to transferring complicated skills from human to robot would therefore make robot applications more practical and remove the virtual cage between human and robot completely [9]. For example, a service robot needs to fit customers' needs in home or office scenarios, and such robots must interact with people of different ages and with different cognitive and knowledge backgrounds. HRC with an intuitive human-robot interface is thus a compliant and efficient way to extend the workspace and applicable areas of both human and robot. On the other hand, as the new generation of robots, namely, collaborative robots (e.g., Figure 1), is expected to share the same workspace with humans, safety is key for collaborative robotics [7]. Safety is often achieved through complex control systems [10], but robots are rarely designed to be intrinsically safe. An economical alternative is to make the robot arm human-like and as compliant as a human limb, with robot impedance adapted to the dynamic environment [11].
Thus, studies on compliant control have received increasing attention [12, 13], particularly for tasks involving uncertain and dynamic environments, for example, physical human-robot cooperation [14–16]. Most existing relevant research can be classified into three categories: (i) design of compliant mechanisms using active (e.g., KUKA LBR [17] and Barrett WAM arm [18]), passive (e.g., AWAS [19]), or hybrid (e.g., DLR hand-arm system [20]) robot joints with intrinsic compliance, providing a safe and user-friendly interface for telecontrol by adapting robot joint impedance; (ii) online optimisation of impedance regulation for specific tasks in a structured environment, where the impedance tuning algorithms rely on an accurate robot-environment interaction model [21]; (iii) biomimetic control, in which, instead of relying on automatic control, the robot imitates human compliance and adaptivity by learning human stiffness regulation [22–24]. Tele-impedance control, which enforces a target relation between force and displacement [25], is therefore well suited for robots performing tasks with humans and the environment in a safer and more natural way [26, 27].

Figure 1: Typical human-robot safe collaboration system.

For effective HRC with safety, compliance, and stability, it is crucial that robots integrate interactive cognition and manipulation with humans, similar to the mechanisms involved in human-human interaction [28]. In this paper we carry out preliminary research on single-human motor adaptation behaviour in specific tasks; for example, in object-moving tasks, humans tend to exert force along the direction where the stiffness amplitude is relatively large [29]. Inspired by human mechanical impedance adaptivity to various environments [30], human motor skill extraction via EMG and biomimetic learning controllers [31, 32] has been developed to achieve more natural and efficient performance, as EMG signals directly reflect impedance variation without disturbing motor control during the task. EMG signals represent muscle activation regulated by the central nervous system (CNS) and can be used to estimate limb stiffness for compliant demonstration, although precise estimation remains a challenge due to muscle fatigue, noise perturbation, estimation model error, and so forth [33–35]. In [26], an sEMG-based human-machine interface was designed for tele-impedance control, though without force feedback or an autonomous controller after transferring human impedance regulation to the robot. Furthermore, humans can also change arm posture to adapt stiffness.

Augmented HRC can be exploited through recognition and imitation of human arm motion [36] and stiffness profiles. Depending on the hardware employed for human motion recognition, extraction methods can be divided into two categories: (i) contact methods employing wearable sensors: in [37, 38], Inertial Measurement Units (IMUs) are used to collect human motion angles for motion recognition and interaction with robots, but such methods require humans to wear specific devices, which may be inconvenient in HRC; (ii) noncontact methods: Date et al. [39] proposed a vision-based inverse kinematics approach to estimate the full-body motion of a 23-DOF human model via visual cues. Noncontact methods place no physical devices on humans and provide a natural way to capture human motion, although they may suffer from occlusion. In addition, human gesture also plays a significant role in human-robot interaction.

Gesture information acquisition is similar to human motion extraction and can likewise be divided into vision-based and wearable-sensor-based methods. Starner et al. [40] and Shanableh et al. [41] achieved American and Arabic sign language recognition, respectively, based on vision information. Liang and Ouhyoung [42], Starner and Pentland [43], and Kim et al. [44] recognized sign languages using data collected by data gloves. Although data gloves offer high precision, their high price and cumbersome wearing prevent extensive use. For human motion analysis, most researchers employ inverse kinematics methods. The 7-DOF human arm is a redundant manipulator, so for a given hand position and attitude there are infinitely many inverse kinematics solutions. Moradi and Lee [45] defined a redundancy circle and used a position on it as the redundancy parameter to solve this problem. Fang and Ding [46] proposed a three-level task-motion planning framework and used a human arm triangle space to plan human arm motion. Based on the description of human arm posture in [47], we propose a novel approach, the geometry vector method, which calculates each joint angle of the 7-DOF human arm using only the 3D position of each joint. We use a Kinect sensor [48] (version 2.0) to obtain the 3D coordinates of each joint. Kinect is a depth camera developed by Microsoft that can readily be used in action tracking and gesture recognition. Furthermore, compared with other sensors, Kinect is user-friendly and requires no physical contact with humans.

In this paper, a virtual Baxter robot setup is employed, with one Baxter robot arm used as the slave arm (see Figure 2). Motion estimation from Kinect is utilised to estimate the demonstrator's limb postural stiffness. Instead of estimating absolute human endpoint stiffness and motion, discrete-time stiffness estimation and motion extraction are employed to eliminate signal drift and residual errors. In Section 2, the methods for human arm postural stiffness estimation, motion extraction, and discrete-time biomimetic controller design are described. Section 3 presents the experimental setup. Results are given in Section 4, followed by the conclusion in Section 5.

Figure 2: Profile of HRC in discrete-time. Human arm endpoint stiffness, forward kinematics, and postural stiffness are estimated in discrete-time and transferred to the Baxter robot via a biomimetic controller.

2. Methods

2.1. Human Arm Postural Stiffness Modelling

Relevant research indicates that limb impedance reflects human skill performance [49]. Therefore, extracting impedance regulation from humans is a promising way to transfer human skills to robots, compared with conventional mathematical modelling. Generally, human upper limb endpoint stiffness is determined mainly by muscle cocontraction and geometric pose; that is, we can adapt arm stiffness by muscle cocontraction as well as by changing arm pose. In this paper, we mainly investigate the effect of arm pose on interaction with external forces. According to [50], human endpoint stiffness in Cartesian space can be related to muscle (joint) stiffness and postural stiffness by
$$K_q = J^{\mathsf{T}}(q)\,K_x\,J(q) + \frac{\partial J^{\mathsf{T}}(q)}{\partial q}\,F,$$
where $q$ is the vector of the 7 joint angles of the arm, $J(q)$ is the arm Jacobian matrix, $K_x$ is the endpoint (Cartesian) stiffness, $K_q$ is the joint stiffness, and $F$ is the external force applied at the endpoint. Evidently, human endpoint stiffness depends mainly on the Jacobian (posture), joint stiffness, gravity, and motion. The human arm joint angles are estimated by Kinect; consequently, the Jacobian matrix can be calculated from the joint angles and skeleton link lengths. According to [51, 52], we assume that joint stiffness can be represented as an intrinsic constant stiffness $K_c$ scaled by a stiffness index $\alpha$, so $K_q = \alpha K_c$, with the gravity contribution to human endpoint stiffness weighted by its own stiffness coefficients. The stiffness index $\alpha$ can be adapted to different scenarios; in this paper, we employ hand posture in discrete-time to indicate its variation. For example, a fist usually suggests that the human increases arm stiffness to keep the arm stable against external disturbance or to output a strong force. Thus hand gesture can be used to control the robot stiffness index based on empirically identified human arm stiffness.
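As a concrete sketch of the postural stiffness model just described, the snippet below scales an intrinsic joint stiffness (here called `K_c`) by a gesture-controlled index `alpha` and maps it to endpoint stiffness through the arm Jacobian via the standard compliance mapping, ignoring the force-dependent term. The numeric values and the random Jacobian are illustrative placeholders, not the paper's identified parameters:

```python
import numpy as np

def endpoint_stiffness(J, K_q):
    """Map joint stiffness K_q (7x7) to Cartesian endpoint stiffness.

    Standard compliance mapping: C_x = J C_q J^T, K_x = C_x^{-1},
    with the external-force-dependent term neglected.
    """
    C_x = J @ np.linalg.inv(K_q) @ J.T   # 6x6 endpoint compliance
    return np.linalg.inv(C_x)

# K_q = alpha * K_c: intrinsic constant joint stiffness scaled by a
# gesture-controlled stiffness index alpha (values here are placeholders).
K_c = np.diag([30.0, 30.0, 30.0, 20.0, 20.0, 15.0, 10.0])
alpha = 1.5                 # e.g. raised when the operator makes a fist
J = np.random.default_rng(0).standard_normal((6, 7))  # placeholder Jacobian
K_x = endpoint_stiffness(J, alpha * K_c)
print(K_x.shape)            # (6, 6)
```

Raising `alpha` stiffens every joint proportionally, which stiffens the endpoint in all directions, while changing the pose (and hence `J`) reshapes the endpoint stiffness ellipsoid.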

2.2. Human Arm Jacobian Estimation

Figure 3 shows the reference and link coordinate systems of the 7-DOF human arm using the Denavit-Hartenberg (D-H) method. The D-H parameters of the kinematic model of the human arm are listed in Table 1, where $l_1$, $l_2$, and $l_3$ are the lengths of the upper arm, forearm, and palm, respectively. According to the D-H method, the coordinate transformation from frame $i-1$ to frame $i$ is
$${}^{i-1}T_{i} = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
The human arm Jacobian matrix is a function of the arm joint angles; for revolute joints, its $i$th column is
$$J_i = \begin{bmatrix} z_{i-1} \times \left(p_e - p_{i-1}\right) \\ z_{i-1} \end{bmatrix},$$
where $z_{i-1}$ and $p_{i-1}$ are the axis direction and origin of frame $i-1$, obtained from the accumulated link transforms, and $p_e$ is the end-effector position, derived from the transformation from the base frame to the end-effector frame.
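The D-H transform and the geometric Jacobian construction described above can be sketched numerically as follows; the `(d, a, alpha)` rows below are placeholders rather than the actual Table 1 values:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform from frame i-1 to frame i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def geometric_jacobian(dh_rows, q):
    """6xN Jacobian of a revolute arm: column i is [z_{i-1} x (p_e - p_{i-1}); z_{i-1}]."""
    T = np.eye(4)
    origins, z_axes = [T[:3, 3].copy()], [T[:3, 2].copy()]
    for (d, a, alpha), theta in zip(dh_rows, q):
        T = T @ dh_transform(theta, d, a, alpha)   # accumulate link transforms
        origins.append(T[:3, 3].copy())
        z_axes.append(T[:3, 2].copy())
    p_e = origins[-1]                               # end-effector position
    cols = [np.concatenate([np.cross(z, p_e - p), z])
            for z, p in zip(z_axes[:-1], origins[:-1])]
    return np.column_stack(cols)

# Hypothetical (d, a, alpha) rows; the paper's Table 1 values are not reproduced here.
dh_rows = [(0.0, 0.30, np.pi / 2)] * 7
J = geometric_jacobian(dh_rows, np.zeros(7))
print(J.shape)  # (6, 7)
```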

Table 1: D-H parameters of the human arm.
Figure 3: Human arm model, its primary D-H coordinate system, and the initial position of each joint angle.

Now, obtaining the Jacobian matrix reduces to calculating all 7 joint angles. Using the Kinect sensor, we can obtain the skeleton data of the human body directly, including the 3D positions of 25 joints relative to the camera coordinate frame, as shown on the left of Figure 4. The geometry model of the human left arm is then built as shown on the right of Figure 4. We take the point HipLeft as the origin of the base coordinate frame, with two of its axes aligned with body-fixed vectors of the skeleton. It is then straightforward to obtain the unit vectors of the three axes of the base coordinate frame, $n_x$, $n_y$, and $n_z$.
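The base frame construction can be sketched as below. Exactly which skeleton vectors define the axes is only partly recoverable from the text, so the direction hints are left as parameters here (hypothetical names):

```python
import numpy as np

def base_frame(origin, x_hint, up_hint):
    """Orthonormal base frame at `origin` from two body-fixed direction hints.

    Returns unit axes (n_x, n_y, n_z) forming a right-handed frame;
    which skeleton joint pairs supply the hints is a modelling choice.
    """
    n_x = np.asarray(x_hint, dtype=float)
    n_x = n_x / np.linalg.norm(n_x)
    n_z = np.cross(n_x, up_hint)          # perpendicular to both hints
    n_z = n_z / np.linalg.norm(n_z)
    n_y = np.cross(n_z, n_x)              # completes the right-handed frame
    return n_x, n_y, n_z

def to_base(p, origin, axes):
    """Express a camera-frame point p in the body base frame."""
    R = np.vstack(axes)                   # rows are the unit axes
    return R @ (np.asarray(p, dtype=float) - origin)
```

With the base frame fixed at HipLeft, every Kinect joint position can be re-expressed in body coordinates before the joint angles are computed.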

Figure 4: Body skeleton map consisting of 25 joints captured by Kinect, and the human arm motion model in joint space.

In order to calculate all of the joint angles correctly, two conditions should be satisfied: (i) the human should stand straight and keep the body as vertical to the ground as possible and (ii) the thumb must stay in the same plane as the palm during motion, because the plane defined by the thumb, wrist, and hand points is needed to calculate some of the other joint angles. The seven angles are all in joint space, and their initial positions are displayed in Figure 3. The geometry vector method we propose is based on the cosine of the angle between two vectors, $\cos\theta = \frac{u \cdot v}{\|u\|\,\|v\|}$, and the angle between two planes is obtained from their normal vectors. Since the angle recovered by the inverse cosine lies between $0$ and $\pi$, this range must be taken into consideration when solving for the 7 joint angles.
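The cosine rule and plane-normal computations underlying the geometry vector method can be sketched as:

```python
import numpy as np

def angle_between(u, v):
    """Angle in [0, pi] between two vectors via the cosine rule."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))   # clip guards against rounding

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three joint positions."""
    n = np.cross(np.subtract(p2, p1), np.subtract(p3, p1))
    return n / np.linalg.norm(n)

def angle_between_planes(tri_a, tri_b):
    """Angle between two planes, each given as a triple of 3D points."""
    return angle_between(plane_normal(*tri_a), plane_normal(*tri_b))

print(round(float(angle_between([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])), 3))  # 1.571
```

The `np.clip` call matters in practice: Kinect noise can push the cosine marginally outside $[-1, 1]$, which would otherwise make `arccos` return NaN.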

As shown in Figure 4, the shoulder yaw angle is the supplement of the angle between plane COD and the reference plane of the body. The shoulder pitch angle is the angle between a base axis and the upper arm vector. The shoulder roll and elbow roll angles are obtained with the initial positions shown in Figure 3 taken into account: the shoulder roll can be seen as the angle between planes COD and CDE and, by the same method, the elbow roll can be regarded as the angle between planes CDE and DEH. The angle between the upper and lower arm is the angle between the two corresponding arm vectors, and the angle between the lower arm and the hand is equal to the angle between the hand vector and plane EFH.

Computing the yaw angle of the wrist is more challenging, but it can be viewed as the angle between an auxiliary vector and a reference vector, where the auxiliary vector must lie in plane EFH and be perpendicular to the forearm vector. Expressing these two constraints as linear equations and solving them yields the auxiliary vector and hence the wrist yaw angle.

At this point, all 7 joint angles have been calculated. Although the geometry vector approach meets the demands of most cases, there are still situations where it does not work well: for example, when the plane defined by the points ShoulderLeft, ElbowLeft, and WristLeft does not exist.

Two kinds of experiments were then implemented to verify the performance of the approach. The experiments were carried out indoors with adequate illumination, with a single person standing about 2 meters in front of the Kinect. The first is the static position experiment shown in Figure 5. Six of the seven joint angle curves vary accurately and correctly; they are smooth, and their fluctuation is so tiny that it can be ignored. The fluctuation of the remaining angle, however, is obvious. The reason is that, in this pose, the plane CDE defined by the points ShoulderLeft, ElbowLeft, and WristLeft is not stable, because the positions of these three points captured by Kinect are almost collinear; the normal vector of the plane therefore jitters within the precision range of the sensor. Although there are indeed some fluctuations in the curves, their overall trends are correct and acceptable, which demonstrates that the approach for obtaining the values of the 7 joint angles works well in static situations.

Figure 5: The initial positions of all joint angles. The operator stands statically in front of the Kinect and keeps his upper and lower arm in a horizontal line, as shown in the left picture. The right picture presents the variation of the 7 joint angles over successive frames.

Figures 6 and 7 show two dynamic actions representing the variation of the upper arm and forearm joint angles, respectively. It is evident from Figures 6 and 7 that the corresponding angles vary periodically and regularly with the rotation of the upper and lower arm. These trajectories are consistent with expectation, indicating that our method performs well.

Figure 6: The operator keeps his body still and only rotates his arm circularly, continuously, and at a low and smooth speed from the initial position to the limit position.
Figure 7: The operator keeps his body still and only rotates his elbow circularly, continuously, and at a low and smooth speed from the initial position to the limit position.

Finally, a simulation experiment was carried out to illustrate the real-time performance of our approach. The simulation platform used in this paper is the Baxter robot model; the Baxter robot has two anthropomorphic arms, each with 7 DOFs. The experiment flow diagram can be seen in Figure 8. From the results of this experiment, we observed that all 7 angles of the robot arm changed smoothly along with the variation of the operator's arm, and the motions of both are similar but not identical because of their different sizes, which shows that the proposed approach is feasible and effective. Moreover, through the coordinate transformation between the human and Baxter coordinate systems and the trial data, we obtain the transformation matrix $T$ and the offset $P_{\text{offset}}$ from the human coordinate system to the robot coordinate system, that is, $p_{\text{robot}} = T\,p_{\text{human}} + P_{\text{offset}}$.
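The human-to-robot coordinate mapping can be applied per frame as below; the transformation matrix and offset values here are placeholders, since the calibrated matrices come from the trial data:

```python
import numpy as np

def human_to_robot(p_human, T, p_offset):
    """Map a human-frame 3D point into the robot frame: p_r = T p_h + p_offset."""
    return np.asarray(T) @ np.asarray(p_human) + np.asarray(p_offset)

# Placeholder calibration: an axis permutation plus a fixed translation.
T = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
p_offset = np.array([0.1, -0.2, 0.3])
print(human_to_robot([1.0, 2.0, 3.0], T, p_offset))  # [3.1 0.8 2.3]
```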

Figure 8: The simulation experiment flow diagram.
2.3. Discrete-Time Biomimetic Controller

In this section, a discrete-time biomimetic controller with visual feedback is presented for human-robot interaction. Flash and Hogan [53] conducted several experiments aimed at mathematically representing the kinematic part of the model. They found that, for a movement starting and ending with zero velocity, the velocity profile of the hand in a Cartesian coordinate system typically follows a bell-shaped curve. This velocity curve, once integrated, gives the general kinematic representation of the trajectory of the human hand during a typical pushing task:
$$x(t) = x_0 + D\left(10\tau^{3} - 15\tau^{4} + 6\tau^{5}\right), \quad \tau = \frac{t}{T},$$
where $D$ is the displacement performed in the trajectory, $\tau$ is the normalized time, $x_0$ is the initial position vector, and $T$ is the total time required to perform the trajectory.
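A minimal sketch of this bell-shaped-velocity profile, using the standard Flash-Hogan minimum-jerk polynomial:

```python
import numpy as np

def min_jerk(x0, displacement, total_time, t):
    """Minimum-jerk position: the integral of the bell-shaped velocity profile.

    x0: initial position; displacement: total displacement of the move;
    total_time: movement duration; t: current time (scalar or array).
    """
    tau = np.clip(np.asarray(t, dtype=float) / total_time, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # smooth 0 -> 1 blend
    return x0 + displacement * s

print(min_jerk(0.0, 1.0, 2.0, 1.0))  # 0.5 (exactly halfway at mid-move)
```

The polynomial blend `s` has zero velocity and acceleration at both endpoints, which is what produces the bell-shaped velocity curve.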

Previous research [54–56] demonstrated the capability of impedance control to serve as a reactive model following this kinematic model for human-robot collaboration (HRC). Three impedance parameters can be applied to this model, namely, mass, damping, and stiffness. Kosuge and Kazamura [54] demonstrated experimentally that the effect of the stiffness parameter is negligible. Hence, this model can be expressed as
$$F = M\ddot{x} + C\dot{x},$$
where $F$ is the force vector between the human and robot, $M$ is a virtual mass parameter, $C$ is a virtual damping parameter, and $\dot{x}$, $\ddot{x}$ are, respectively, the velocity and acceleration vectors, which can be derived from (16).
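In discrete time, this mass-damper model can be integrated step by step; the explicit-Euler sketch below uses illustrative mass and damping values:

```python
def admittance_step(force, vel, mass, damping, dt):
    """One step of F = M*a + C*v: solve for the acceleration, integrate velocity."""
    acc = (force - damping * vel) / mass
    return vel + acc * dt, acc

# With zero applied force, the virtual mass coasts to rest under damping.
v = 1.0
for _ in range(10):
    v, _ = admittance_step(0.0, v, mass=1.0, damping=1.0, dt=0.1)
print(round(v, 4))  # 0.3487
```

Each control cycle, the measured interaction force drives the virtual mass-damper, and the resulting velocity serves as the motion reference for the arm.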

The torque input to the slave arm is produced by the PD controller defined in (18), which also employs a gravity compensator $G(q)$:
$$\tau = J^{\mathsf{T}}(q)\left[K_s\left(x_d - x\right) + C_s\left(\dot{x}_d - \dot{x}\right)\right] + G(q),$$
where $C_s$ and $K_s$ are the slave arm endpoint damping and stiffness, respectively. The stiffness $K_s$ is set in real time according to the stiffness estimated at the endpoint of the human operator's limb, and $C_s$ is set proportional to $K_s$ with a fixed ratio specified by the designer.

The endpoint force in Cartesian space needs to be transformed to joint space by (19) in order to be implemented by the robot arm joint torque controller:
$$\tau = J^{\mathsf{T}}(q)\,F,$$
where $\tau$ denotes the robot joint torques, $F$ denotes the robot endpoint force in Cartesian space, and $J(q)$ represents the robot arm Jacobian. Therefore, the stiffness in joint space of the slave robot can be derived from the stiffness in Cartesian space according to
$$K_q = J^{\mathsf{T}}(q)\,K_x\,J(q),$$
where $K_q$ denotes the robot joint stiffness and $K_x$ is the human arm endpoint stiffness. The model (shown in Figure 9) of force and impedance enables simulation of a movement of the arm along a planned trajectory in an environment characterized by force and impedance. First, the torque is computed using (19) and (18); then the impedance is used to compute the effect of the feedback control term on human-robot interaction.
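The two Jacobian-transpose mappings above, endpoint force to joint torque and Cartesian stiffness to joint stiffness, are one-liners in code; the Jacobian and stiffness values below are illustrative:

```python
import numpy as np

def joint_torque(J, F):
    """tau = J^T F: map an endpoint wrench to joint torques."""
    return J.T @ F

def joint_stiffness(J, K_x):
    """K_q = J^T K_x J: map Cartesian stiffness to joint stiffness."""
    return J.T @ K_x @ J

# Illustrative 6x7 Jacobian and a diagonal Cartesian stiffness
# (stiff in translation, soft in rotation).
J = np.random.default_rng(1).standard_normal((6, 7))
K_q = joint_stiffness(J, np.diag([400.0, 400.0, 400.0, 10.0, 10.0, 10.0]))
print(K_q.shape)  # (7, 7)
```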

Figure 9: Scheme of the HRC control model. The feedforward term corresponds to the human force, and the feedback term is force feedback based on the PD impedance controller, which is employed to regulate the robot joint or endpoint impedance.

3. Experimental Setup

The experimental platform is a dual-arm Baxter robot consisting of a torso, a 2-degree-of-freedom (2-DOF) head, dual 7-DOF arms, integrated cameras, sonar, torque sensors, encoders, and direct programming access via a Robot Operating System (ROS) interface. Each arm of the Baxter robot is driven by 7 series elastic actuators (SEAs) with intrinsic safety. Moreover, joint stiffness and damping can be modified under the Baxter robot torque control mode, which is simplified as a PD impedance control law in this paper.

To verify the efficiency of the proposed HRC method, human-robot telemanipulation for moving objects (shown in Figure 10) is implemented with comparative tests under two different scenarios, that is, a low stiffness mode and a human-operated mode. The robot left arm serves as the slave arm, guided by the endpoint of the human operator's left arm, which serves as the master arm. For the Baxter robot simulator setup, the Baxter simulator is first launched according to the Baxter robot wiki guideline [57]. Then a bookshelf and 4 bricks are inserted from the Gazebo environment library and configured as shown in Figure 10. The human operator moves the Baxter left arm via visual feedback and adapts arm pose to move the bricks, enabling the Baxter robot to imitate human motor adaptive behaviours by means of UDP communication between the host computer and the Baxter simulator. In addition, the Baxter robot initial stiffness is set to small values (Left_S0: 30, Left_S1: 30, Left_E0: 30, Left_E1: 20, Left_W0: 20, Left_W1: 15, Left_W2: 10, unit: N·m/rad).
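The host-to-simulator link is only described as UDP; a minimal sketch of sending the joint-stiffness values as JSON follows, where the port and payload format are assumptions, not the paper's protocol:

```python
import json
import socket

def send_stiffness(values, host="127.0.0.1", port=9000):
    """Send a dict of joint-name -> stiffness (N·m/rad) over UDP as JSON.

    Host, port, and the JSON encoding are illustrative choices.
    """
    payload = json.dumps(values).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Initial stiffness values from the experimental setup.
initial = {"Left_S0": 30, "Left_S1": 30, "Left_E0": 30, "Left_E1": 20,
           "Left_W0": 20, "Left_W1": 15, "Left_W2": 10}
```

A receiver on the simulator side would decode the JSON each cycle and write the values into the torque-mode impedance gains.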

Figure 10: Experimental setup for intuitive and compliant human-Baxter collaboration in a teleoperation scenario. The virtual Baxter robot environment is based on ROS and Gazebo [58]. Four bricks with a total weight set to 5 kg are placed on the shelf. The Baxter robot is supposed to move the four bricks with different arm configurations.

4. Results

As indicated in Section 3, the experiment is based on two scenarios, as shown in Figure 11. Baxter motion is regulated by the human operator in real time. The human operator's arm is simplified as a 7-DOF manipulator, corresponding to the Baxter robot joints. Meanwhile, the human hand posture is captured by Kinect to control the Baxter robot stiffness adaptation, which varies according to how long the posture is maintained.

Figure 11: Scheme of the Baxter robot moving an object with two different poses. The red arrows denote the force directions of the two poses, respectively.

Test I. The human operator teleoperates the Baxter robot using Kinect, which extracts human motion regulation in real time. First, the human operator guides Baxter to the bookshelf and performs precise positioning and calibration by visual feedback. Second, the human operator employs pose 1, shown in Figure 11(a), to move the object, but the object does not move at all. Naturally, the human operator changes his arm configuration to pose 2 and tries again, but still cannot move the object. As indicated in Figure 13, the Baxter robot can adapt its stiffness using different poses as well; however, the initial stiffness set on the Baxter robot is too small to move the object with either pose.

Test II. As shown in Figure 13(b), the human operator interacts with the Baxter robot with human motion intention detection based on Kinect. The human operator's hand gestures are employed to control the Baxter robot stiffness variation as shown in Figure 12. Generally, a fist means the human operator strengthens the muscles to increase force to fulfil a specific task. A thumb-up denotes that the Baxter robot stiffness is sufficient to complete the task, for example, moving objects in this test. A finger-spread pose indicates that the Baxter robot should decrease stiffness and return to compliance. In this way, Baxter can understand human motion intention simply but stably and efficiently.
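The gesture-to-stiffness logic amounts to a small state update; the step size and bounds below are illustrative, not the paper's calibrated values:

```python
def update_stiffness(gesture, k, k_min=10.0, k_max=60.0, step=5.0):
    """Update a stiffness index from a recognized hand gesture.

    fist     -> increase stiffness, capped at a threshold k_max
    thumb_up -> hold the current stiffness
    spread   -> decrease stiffness back toward the compliant minimum k_min
    """
    if gesture == "fist":
        return min(k + step, k_max)
    if gesture == "spread":
        return max(k - step, k_min)
    return k  # thumb_up or unrecognized gesture: hold

k = 30.0
for g in ["fist", "fist", "thumb_up", "spread"]:
    k = update_stiffness(g, k)
print(k)  # 35.0
```

Clamping at `k_max` and `k_min` mirrors the threshold behaviour in Figure 12: the operator can never push the robot beyond a safe stiffness ceiling or below a compliant floor.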

Figure 12: Baxter robot stiffness control by human hand gestures: fist: increasing stiffness up to a threshold; thumb-up: holding the current stiffness; and finger spread: decreasing stiffness.
Figure 13: Object-moving test results: (a) object moving without human postural stiffness index transfer; (b) object moving with human postural stiffness transfer. Both tests consist of four steps: start and move the Baxter robot endpoint to the target; move the object using pose 1; move the object with pose 2; and finally finish moving the object. The predominant Baxter robot joint S0 is selected to track the torque variations during the tests.

In this test, the human operator replicates the motion performed in Test I. During the first pose, Baxter cannot complete the task, so the human operator increases Baxter stiffness via hand gesture; however, Baxter still cannot complete it. In the second pose, the human controls Baxter motion and stiffness in the same way, and Baxter finishes the task easily. When the task is completed, the human operator spreads his fingers to make the Baxter robot return to compliance.

5. Conclusion

In this paper, we have developed a natural discrete-time interface for human-robot collaboration by making the robot understand human motion intention, especially for stiffness adaptation, instead of using EMG signals, which need to be detected by adhering electrodes to the skin. In addition, we used a geometry vector approach to extract the human arm 7-DOF Jacobian for stiffness transformation from Cartesian space to joint space in real time. A biomimetic controller is employed to transfer human motion regulation to the robot. Experiments validated that the proposed approach to HRC is intuitive, compliant, and efficient, and it may find applications in various HRC scenarios.

Competing Interests

The authors declare that they have no competing interests.


Acknowledgments

This work is supported in part by the State Key Laboratory of Robotics and System (HIT) under Grant SKLRS-2014-MS-05/SKLRS201405B, the National Natural Science Foundation of China (NSFC) under Grant 61273339, and the Heilongjiang Provincial Natural Science Foundation of China under Grants QC2014C072 and F2015010. The authors would like to acknowledge the support of Mr. Chao Wang in the experiment implementation.


References

1. S. A. Green, M. Billinghurst, X. Chen, and J. G. Chase, “Human-robot collaboration: a literature review and augmented reality approach in design,” International Journal of Advanced Robotic Systems, vol. 5, no. 1, pp. 1–18, 2008.
2. B. Chandrasekaran and J. M. Conrad, “Human-robot collaboration: a survey,” in Proceedings of the IEEE SoutheastCon 2015, pp. 1–8, April 2015.
3. J.-M. Lu and Y.-L. Hsu, “Telepresence robots for medical and homecare applications,” in Contemporary Issues in Systems Science and Engineering, M. Zhou, H.-X. Li, and M. Weijnen, Eds., chapter 21, pp. 725–735, John Wiley & Sons, Hoboken, NJ, USA, 2015.
4. T. Arai, R. Kato, and M. Fujita, “Assessment of operator stress induced by robot collaboration in assembly,” CIRP Annals–Manufacturing Technology, vol. 59, no. 1, pp. 5–8, 2010.
5. A. Bechar and Y. Edan, “Human-robot collaboration for improved target recognition of agricultural robots,” Industrial Robot, vol. 30, no. 5, pp. 432–436, 2003.
6. “Top 10 emerging technologies of 2015.”
7. B. Matthias, S. Kock, H. Jerregard, M. Källman, and I. Lundberg, “Safety of collaborative industrial robots: certification possibilities for a collaborative assembly robot concept,” in Proceedings of the IEEE International Symposium on Assembly and Manufacturing (ISAM '11), pp. 1–6, Tampere, Finland, May 2011.
8. V. Morabito, “Big data driven business models,” in Big Data and Analytics, pp. 65–80, Springer, 2015.
9. B. Scassellati and K. M. Tsui, “Co-robots: humans and robots operating as partners,” in Handbook of Science and Technology Convergence, pp. 1–10, Springer, 2015.
10. W. He, A. O. David, Z. Yin, and C. Sun, “Neural network control of a robotic manipulator with input deadzone and output constraint,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2015.
11. W. He, Y. Chen, and Z. Yin, “Adaptive neural network control of an uncertain robot with full-state constraints,” IEEE Transactions on Cybernetics, vol. 46, no. 3, pp. 620–629, 2016.
12. C. Reardon, H. Tan, B. Kannan, and L. DeRose, “Towards safe robot-human collaboration systems using human pose detection,” in Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications (TePRA '15), pp. 1–6, Woburn, Mass, USA, May 2015.
13. W. He, Y. Dong, and C. Sun, “Adaptive neural impedance control of a robotic manipulator with input saturation,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 3, pp. 334–344, 2016.
14. K. Kronander and A. Billard, “Learning compliant manipulation through kinesthetic and tactile human-robot interaction,” IEEE Transactions on Haptics, vol. 7, no. 3, pp. 367–380, 2014.
15. L. Peternel, T. Petrič, E. Oztop, and J. Babič, “Teaching robots to cooperate with humans in dynamic manipulation tasks based on multi-modal human-in-the-loop approach,” Autonomous Robots, vol. 36, no. 1-2, pp. 123–136, 2014.
16. L. D. Rozo, S. Calinon, D. Caldwell, P. Jimenez, and C. Torras, “Learning collaborative impedance-based robot behaviors,” in Proceedings of the 27th AAAI Conference on Artificial Intelligence, Bellevue, Wash, USA, July 2013.
17. “KUKA LBR robot profile,” March 2015.
18. “Barrett WAM arm profile,” 2015.
19. A. Jafari, N. G. Tsagarakis, and D. G. Caldwell, “AwAS-II: a new actuator with adjustable stiffness based on the novel principle of adaptable pivot point and variable lever ratio,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 4638–4643, Shanghai, China, May 2011.
20. M. Grebenstein, A. Albu-Schäffer, T. Bahls et al., “The DLR hand arm system,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3175–3182, Shanghai, China, May 2011.
21. S. P. Chan and H. C. Liaw, “Generalized impedance control of robot for assembly tasks requiring compliant manipulation,” IEEE Transactions on Industrial Electronics, vol. 43, no. 4, pp. 453–461, 1996.
22. J. Buchli, F. Stulp, E. Theodorou, and S. Schaal, “Learning variable impedance control,” International Journal of Robotics Research, vol. 30, no. 7, pp. 820–833, 2011.
23. A. Ajoudani, N. G. Tsagarakis, and A. Bicchi, “Tele-impedance: preliminary results on measuring and replicating human arm impedance in teleoperated robots,” in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '11), pp. 216–222, Phuket, Thailand, December 2011.
24. A. Ajoudani, M. Gabiccini, N. Tsagarakis, A. Albu-Schäffer, and A. Bicchi, “Tele-impedance: exploring the role of common-mode and configuration-dependent stiffness,” in Proceedings of the 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012), pp. 363–369, Osaka, Japan, December 2012.
25. N. Hogan, “Impedance control: an approach to manipulation: part II—implementation,” Journal of Dynamic Systems, Measurement, and Control, vol. 107, no. 1, pp. 8–16, 1985.
  26. A. Ajoudani, N. Tsagarakis, and A. Bicchi, “Tele-impedance: teleoperation with impedance regulation using a body-machine interface,” The International Journal of Robotics Research, vol. 31, no. 13, pp. 1642–1655, 2012. View at Publisher · View at Google Scholar · View at Scopus
  27. A. Ajoudani, S. B. Godfrey, M. Bianchi et al., “Exploring teleimpedance and tactile feedback for intuitive control of the Pisa/IIT SoftHand,” IEEE Transactions on Haptics, vol. 7, no. 2, pp. 203–215, 2014. View at Publisher · View at Google Scholar · View at Scopus
  28. G. Ganesh, A. Takagi, R. Osu, T. Yoshioka, M. Kawato, and E. Burdet, “Two is better than one: physical interactions improve motor performance in humans,” Scientific Reports, vol. 4, article 3824, 2014. View at Publisher · View at Google Scholar · View at Scopus
  29. T. E. Milner, “Contribution of geometry and joint stiffness to mechanical stability of the human arm,” Experimental Brain Research, vol. 143, no. 4, pp. 515–519, 2002. View at Publisher · View at Google Scholar · View at Scopus
  30. E. Burdet, R. Osu, D. W. Franklin, T. E. Milner, and M. Kawato, “The central nervous system stabilizes unstable dynamics by learning optimal impedance,” Nature, vol. 414, no. 6862, pp. 446–449, 2001. View at Publisher · View at Google Scholar · View at Scopus
  31. E. Burdet, G. Ganesh, C. Yang, and A. Albu-Schäffer, “Interaction force, impedance and trajectory adaptation: by humans, for robots,” in Experimental Robotics, vol. 79 of Springer Tracts in Advanced Robotics, pp. 331–345, Springer, Berlin, Germany, 2014. View at Publisher · View at Google Scholar
  32. C. Yang, G. Ganesh, S. Haddadin, S. Parusel, A. Albu-Schäeffer, and E. Burdet, “Human-like adaptation of force and impedance in stable and unstable interactions,” IEEE Transactions on Robotics, vol. 27, no. 5, pp. 918–930, 2011. View at Publisher · View at Google Scholar · View at Scopus
  33. A. Ajoudani, N. G. Tsagarakis, and A. Bicchi, “Tele-impedance: preliminary results on measuring and replicating human arm impedance in tele operated robots,” in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '11), pp. 216–222, IEEE, Phuket, Thailand, December 2011. View at Publisher · View at Google Scholar · View at Scopus
  34. D. Gradolewski, P. M. Tojza, J. Jaworski, D. Ambroziak, G. Redlarski, and M. Krawczuk, “Arm EMG wavelet-based denoising system,” in Mechatronics—Ideas for Industrial Application, J. Awrejcewicz, R. Szewczyk, M. Trojnacki, and M. Kaliczyńska, Eds., vol. 317 of Advances in Intelligent Systems and Computing, pp. 289–296, Springer, 2015. View at Publisher · View at Google Scholar
  35. M. Wehner, Man to Machine, Applications in Electromyography, InTech, 2012.
  36. A. D. Dragan, S. Bauman, J. Forlizzi, and S. S. Srinivasa, “Effects of robot motion on human-robot collaboration,” in Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15), pp. 51–58, ACM, March 2015. View at Publisher · View at Google Scholar · View at Scopus
  37. K. Saktaweekulkit and T. Maneewarn, “Motion classification using IMU for human-robot interaction,” in Proceedings of the International Conference on Control Automation and Systems (ICCAS '10), pp. 2295–2299, IEEE, Gyeonggi-do, Republic of Korea, October 2010.
  38. G. Shi, Y. Zou, W. J. Li, Y. Jin, and P. Guan, “Towards multi-classification of human motions using micro IMU and SVM training process,” in Advanced Materials Research, vol. 60, pp. 189–193, Trans Tech Publications, 2009. View at Google Scholar
  39. N. Date, H. Yoshimoto, D. Arita, and R.-I. Taniguchi, “Real-time human motion sensing based on vision-based inverse kinematics for interactive applications,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 3, pp. 318–321, IEEE, Cambridge, UK, August 2004. View at Publisher · View at Google Scholar · View at Scopus
  40. T. Starner, J. Weaver, and A. Pentland, “Real-time american sign language recognition using desk and wearable computer based video,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1371–1375, 1998. View at Publisher · View at Google Scholar · View at Scopus
  41. T. Shanableh, K. Assaleh, and M. Al-Rousan, “Spatio-temporal feature-extraction techniques for isolated gesture recognition in Arabic sign language,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 3, pp. 641–650, 2007. View at Publisher · View at Google Scholar · View at Scopus
  42. R. Liang and M. Ouhyoung, “A sign language recognition system using hidden markov model and context sensitive search,” in Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST '96), vol. 96, pp. 59–66, Hong Kong, June 1996.
  43. T. Starner and A. Pentland, “Real-time American sign language recognition from video using hidden Markov models,” in Motion-Based Recognition, vol. 9 of Computational Imaging and Vision, pp. 227–243, Springer, Amsterdam, The Netherlands, 1997. View at Publisher · View at Google Scholar
  44. J. Kim, S. Mastnik, and E. André, “Emg-based hand gesture recognition for realtime biosignal interfacing,” in Proceedings of the ACM 13th International Conference on Intelligent User Interfaces (IUI '08), pp. 30–39, Canary Islands, Spain, January 2008. View at Publisher · View at Google Scholar
  45. H. Moradi and S. Lee, “Joint limit analysis and elbow movement minimization for redundant manipulators using closed form method,” in Advances in Intelligent Computing, pp. 423–432, Springer, Berlin, Germany, 2005. View at Google Scholar
  46. C. Fang and X. Ding, “A set of basic movement primitives for anthropomorphic arms,” in Proceedings of the 10th IEEE International Conference on Mechatronics and Automation (ICMA '13), pp. 639–644, IEEE, Takamatsu, Japan, August 2013. View at Publisher · View at Google Scholar · View at Scopus
  47. H. Reddivari, C. Yang, Z. Ju, P. Liang, Z. Li, and B. Xu, “Teleoperation control of Baxter robot using body motion tracking,” in Proceedings of the International Conference on Multisensor Fusion and Information Integration for Intelligent Systems (MFI '14), pp. 1–6, Beijing, China, September 2014. View at Publisher · View at Google Scholar · View at Scopus
  48. Z. Zhang, “Microsoft kinect sensor and its effect,” IEEE Multimedia, vol. 19, no. 2, pp. 4–10, 2012. View at Publisher · View at Google Scholar · View at Scopus
  49. M. S. Erden and A. Billard, “End-point impedance measurements at human hand during interactive manual welding with robot,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '14), pp. 126–133, IEEE, Hong Kong, June 2014. View at Publisher · View at Google Scholar · View at Scopus
  50. A. Ajoudani, N. G. Tsagarakis, and A. Bicchi, “On the role of robot configuration in Cartesian stiffness control,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '15), pp. 1010–1016, Seattle, Wash, USA, May 2015. View at Publisher · View at Google Scholar · View at Scopus
  51. R. Osu, D. W. Franklin, H. Kato et al., “Short- and long-term changes in joint co-contraction associated with motor learning as revealed from surface EMG,” Journal of Neurophysiology, vol. 88, no. 2, pp. 991–1004, 2002. View at Google Scholar · View at Scopus
  52. A. Ajoudani, C. Fang, N. G. Tsagarakis, and A. Bicchi, “A reduced-complexity description of arm endpoint stiffness with applications to teleimpedance control,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), pp. 1017–1023, Hamburg, Germany, September 2015. View at Publisher · View at Google Scholar
  53. T. Flash and N. Hogan, “The coordination of arm movements: an experimentally confirmed mathematical model,” The Journal of Neuroscience, vol. 5, no. 7, pp. 1688–1703, 1985. View at Google Scholar · View at Scopus
  54. K. Kosuge and N. Kazamura, “Control of a robot handling an object in cooperation with a human,” in Proceedings of the 1997 6th IEEE International Workshop on Robot and Human Communication (RO-MAN '97), pp. 142–147, October 1997. View at Scopus
  55. R. Ikeura, H. Monden, and H. Inooka, “Cooperative motion control of a robot and a human,” in Proceedings of the 3rd IEEE International Workshop on Robot and Human Communication (RO-MAN '94), pp. 112–117, IEEE, Nagoya, Japan, July 1994. View at Scopus
  56. M. M. Rahman, R. Ikeura, and K. Mizutani, “Investigating the impedance characteristic of human arm for development of robots to co-operate with human operators,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC '99), vol. 2, pp. 676–681, Tokyo, Japan, October 1999. View at Publisher · View at Google Scholar
  57. Baxter simulator profile, March 2015,
  58. ROS, Ros,