Journal of Robotics
Volume 2019, Article ID 4141269, 8 pages
https://doi.org/10.1155/2019/4141269
Research Article

Intention Recognition in Physical Human-Robot Interaction Based on Radial Basis Function Neural Network
Zhiguang Liu and Jianhong Hao

1School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
2CATARC (Tianjin) Automotive Engineering Research Institute Co., Ltd., 300339, China

Correspondence should be addressed to Zhiguang Liu; zhiguang8@126.com

Received 12 November 2018; Revised 26 February 2019; Accepted 17 March 2019; Published 11 April 2019

Academic Editor: Yangmin Li

Copyright © 2019 Zhiguang Liu and Jianhong Hao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

To solve the synchronization problem in human-robot haptic collaboration, the robot is often required to recognize the intention of its cooperator. In this paper, a method based on a radial basis function neural network (RBFNN) model is presented to identify the motion intention of the collaborator. Here, the human intention is defined as the desired velocity in the human limb model, and its estimate is obtained in real time by the trained RBFNN model from the interaction force and the movement characteristics of the contact point (the current position and velocity of the robot). To obtain training samples, an adaptive impedance control method is used to control the robot during data acquisition, and data matching is then executed to compensate for the phase delay of the impedance function. The advantage of estimating the intention from the real-time system state is that the model avoids the difficulty of estimating the impedance parameters of the human body. The experimental results show that the proposed method improves the synchronization of human-robot collaboration and reduces the force exerted by the collaborator.

1. Introduction

Human-robot collaboration is used in many fields such as the military, space technology, industry, medical treatment and healing, helping old or disabled people, and entertainment [1, 2]. However, physical human-robot cooperation still suffers from poor synchronization, excessive interaction force, and poor motion compliance.

To make the robot track a prescribed trajectory, impedance control is widely acknowledged as a promising approach for interaction control. Consequently, in different applications of human-robot shared control systems, methods that change the assistance level by adjusting the impedance parameters are often used. Ikeura et al. proposed to adjust the damping parameter online in an optimal manner by minimizing a selected cost function when helping a human carry a large, heavy, or awkward object [3]. S. M. Mizanoor Rahman et al. modified power-assist control by using a novel control strategy based on weight perception and load force features; the modification reduced the excessive load forces applied by the subjects in each lifting scheme [4].

There are many methods that forecast human behavior from behavioral characteristics extracted from visual images [5], including facial recognition [6] and outline recognition [7]. However, these methods are not applicable when only the interaction force information is available in physical human-robot interaction.

Kazuhiro Kosuge and Norihide Kazamura proposed several algorithms to generate the robot motion based on the intentional force and compared them experimentally, including a force augmentation method, a position control method, and a velocity control method [8]. Dylan P. Losey et al. provided a unifying view of humans and robots sharing task execution. They defined three key themes that emerge in shared control scenarios, namely, intent detection, arbitration, and feedback [9]. As the first step in sharing tasks, it is necessary to explore how the coupled pHRI system can detect what the human is trying to do, and how the physical coupling itself can be leveraged to detect intent.

In many physical human-robot interaction (pHRI) applications, the intent can be defined in a binary way; it is often ascertained from a brain-machine interface (BMI), as seen in [10]. In other applications, user intent is defined in terms of a velocity or position trajectory: the predicted forward path of the user and/or robot. Human motor control is complex, involving activity in the central nervous system (CNS), the peripheral nervous system (PNS), the musculoskeletal system, and finally the environmental interactions being controlled. In each of these subsystems there are measurable physical quantities, called state variables, which are manifestations of the human intention. To manage the complexity of an upper limb robotic prosthesis, in much research a user is given predefined poses, grasps, or functions that the prosthesis can complete autonomously. In [11–14], upper limb prosthetics are controlled by the user selecting one of these predefined functions; the user intent can therefore be represented by a single categorical variable. There are two main approaches to recognizing human intention from myographic signals: pattern recognition and mapping to continuous effort. The pattern recognition approach maps patterns of any number of signal features to desired prosthesis poses, grasps, or functions. The way in which control algorithms learn this mapping varies, but common approaches are linear discriminant analysis (LDA), support vector machines (SVMs), and artificial neural networks (ANNs) [11, 12, 15]. Other researchers have developed control systems that explicitly learn the mapping between EMG signals and the user’s desired joint torque. An excellent example by Kiguchi et al. uses an adaptive neurofuzzy modifier to learn the relationship between the root-mean-square (RMS) of the measured EMG signals and the estimated user’s desired torques [16].

Drawing on human-human interaction experience, instead of measuring myographic signals to estimate human intention, two collaborators can easily complete a handshaking task or an object-carrying task based only on the interaction force between them. Moreover, the interaction force is easily and precisely measured compared to myographic signals, for example, by a 6-DoF force/torque sensor at the robot end effector that monitors the human-robot interaction. Therefore, the following discussion recognizes human intention from the interaction force.

Many examples of user intent detection revolve around estimating the user’s contribution to the interaction force. To obtain the interaction force, and to have the robot assist the user minimally in achieving the predefined trajectory, Pehlivan et al. use an inverse dynamics model of the robot to estimate the user force applied to the robot from the robot encoder measurements [17]. Some researchers define the motion intention of the human partner as the desired trajectory in an employed human limb model, which is estimated by a neural network method; however, the real human motion intention needed in the training phase is difficult to obtain in practice. In [18], a Hidden Markov Model-based high-level controller is used to estimate stochastic intention, but it is very complex and difficult to establish such a model of intention recognition. C. Lee uses the Baum-Welch algorithm to train a Hidden Markov Model that predicts the collaborator's intention by learning the meaning of the collaborator’s gestures; however, this method does not truly achieve the purpose of identifying partner intention [19]. In [20–22], human intentions are predicted by establishing a dynamic model of the collaborator. Passenberg C. uses human models and jerk models to recognize human intention with an extended Kalman filtering method [23]. These estimation methods based on a human body model are very difficult to apply because the impedance parameters of the human body are variable and can hardly be obtained [24]. In addition, the measured interaction force can be used to estimate other forms of motion intention; for example, the interaction forces measured at the handles are used to predict possible walking modes or the walker’s forward path in [25, 26].

In this paper, a method for identifying the intention of collaborators based on machine learning is proposed. Through the use of a radial basis function neural network, an intelligent forecasting model for human intention recognition is set up. It helps the robot acquire knowledge of the human motion intention from the force of the partner and the movement characteristics of the robot during cooperation, so that the robot behavior can be controlled accordingly. The advantage of this method lies in overcoming the difficulty of establishing a human-machine collaboration model and avoiding the difficulty of estimating the human body impedance parameters in traditional methods. The experimental results show that this method improves the synchronization of human-robot collaboration and reduces the force exerted by the collaborator.

2. Problem Statement

In this paper, we assume that the end effector of the robot physically interacts with the human hand (shown in Figure 1). In particular, it is assumed that there is only one interaction point.

Figure 1: Human-robot collaboration.

In this model, the human intention is defined as the human's expected acceleration, velocity, and position, represented by $\ddot{x}_{hd}$, $\dot{x}_{hd}$, $x_{hd}$ in Cartesian coordinates. $\ddot{x}_{rd}$, $\dot{x}_{rd}$, $x_{rd}$ are used to represent the expected acceleration, velocity, and position of the robot accordingly. $(\ddot{x}_h, \dot{x}_h, x_h)$ and $(\ddot{x}_r, \dot{x}_r, x_r)$, respectively, represent the current acceleration, velocity, and position of the human hand and the robot. $(M_h, C_h, K_h)$ and $(M_r, C_r, K_r)$, respectively, represent the inertia matrix, damping matrix, and stiffness matrix of the cooperator and the robot. $f$ is the interaction force.

The robot arm dynamics in the operational space is described as [27]

$$M_r \ddot{x}_r + C_r \dot{x}_r + K_r x_r = u + f \quad (1)$$

where

$$u = J^{-T}(q)\,\tau \quad (2)$$

where $J(q)$ is the Jacobian matrix; $M_r$, $C_r$, $K_r$, respectively, represent the inertia matrix, damping matrix, and stiffness matrix of the robot; $\tau$ is the joint torque vector; and $f$ is the vector of the constraint force exerted by the human limb. It is difficult to estimate the robot dynamics parameters, so an adaptive approach of the following form is often employed instead of (1):

$$u = Y(x_r, \dot{x}_r)\,\hat{\theta} - K e \quad (3)$$

where $K$ is a positive definite matrix, $e$ is the tracking error, and $\hat{\theta}$ is the estimation of the dynamics parameter vector $\theta$. $\hat{\theta}$ is updated as follows:

$$\dot{\hat{\theta}} = -\Gamma \left( Y^{T} e + \sigma \hat{\theta} \right) \quad (4)$$

where $\Gamma$ is a positive definite matrix and $\sigma$ is a value greater than 0.

To estimate the motion intention of the human limb, a mass-damper-spring model describing the human limb dynamics [28, 29] is acknowledged as

$$f = M_h \ddot{x}_h + C_h \dot{x}_h + K_h \left( x_h - x_{hd} \right) \quad (5)$$

where $M_h$, $C_h$, and $K_h$ are the inertia matrix, damping matrix, and stiffness matrix of the human limb, respectively. In this paper, recognizing the collaborator's intention means obtaining the desired motion $x_{hd}$; for short, the desired velocity $\dot{x}_{hd}$ is defined as the motion intention of the human limb. In the following, we aim to obtain $\dot{x}_{hd}$ in this model, and $\hat{\dot{x}}_{hd}$ is defined as the estimated motion intention.

If we can obtain this human intention and set $\dot{x}_{rd} = \dot{x}_{hd}$, the synchronization problem will be solved. However, it is difficult to estimate $\dot{x}_{hd}$ from (5) because the inertia, damping, and stiffness matrices of the collaborator are time-varying, depending on the human arm position and the task [30].

In a human-human interaction scenario, it is natural to recognize the partner's intention by exploring the interaction force, position, and velocity between the interacting parties. Following this idea, the robot should also predict the collaborator's intention by gathering the force, position, and velocity data at the interaction point. Therefore, we utilize a machine learning algorithm to solve this problem. In other words, we specify a mapping between $\dot{x}_{hd}$ and the current interaction state data $(f, x_r, \dot{x}_r)$, namely $\hat{\dot{x}}_{hd} = g(f, x_r, \dot{x}_r)$, where $\hat{\dot{x}}_{hd}$ is the estimated value of $\dot{x}_{hd}$.

3. Intention Recognition Based on RBFNN

3.1. Overall

As one of the popular machine learning algorithms, the radial basis function neural network (RBFNN) is employed in this paper. The overall framework is mainly composed of three parts: data collection, an offline learning part, and an online real-time estimation part, as shown in Figure 2.

Figure 2: Overall framework of the proposed intention recognition method.

In the data collection process, in order to obtain reliable sample information, the adaptive impedance control method is used to collect a large number of samples of the human intention and the system dynamics.

In the offline learning process, before training the radial basis function neural network (RBFNN) model, data matching is necessary because of the phase delay of the impedance function [31]. In order to obtain effective training data, we rematch the interaction force, position, and velocity to the intention velocity. During the training process, parameters such as the number of hidden nodes and the weights from the hidden layer to the output layer are obtained.
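The rematching step can be sketched as a simple shift of the logged time series. The function below is an illustrative assumption (the paper gives no code): it pairs each measured input sample with the intention velocity recorded a fixed number of samples later, so that a network trained on the pairs outputs the current, rather than lagged, intention.

```python
import numpy as np

def rematch(force, pos, vel, intent_vel, delay_samples):
    """Pair each measured (force, position, velocity) sample with the
    intention velocity recorded `delay_samples` later, compensating the
    phase delay introduced by the impedance controller during logging."""
    d = delay_samples
    X = np.column_stack([force[:-d], pos[:-d], vel[:-d]])  # network inputs
    y = intent_vel[d:]                                     # matched targets
    return X, y
```

With the roughly 70 ms lag reported in the experiments, the shift is about 3-4 samples at the 50 Hz logging rate used for data collection.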

In the process of estimating the intention online, the interaction force and the end position and velocity of the robot are collected, and the neural network trained offline predicts the intention information (the estimated velocity) of the person. This velocity is transformed from Cartesian space to the robot joint space, thus driving the robot motion. Finally, the results of the two control methods are compared.

3.2. Offline Training
3.2.1. Adaptive Impedance Control

In the offline training process, the adaptive impedance control method [32] is used to obtain the samples. The robot impedance model is

$$M_d \ddot{e} + B_d \dot{e} = f_d - f + \phi(t) \quad (6)$$

where $M_d$ and $B_d$ are the desired inertia matrix and damping matrix, respectively, $e$ is the position tracking error, and $f$ and $f_d$ are the real force and the desired force. The adaptive compensation term $\phi(t)$ is adopted to eliminate the interaction force error and is designed as

$$\phi(t) = \phi(t - T) + \eta \left( f_d(t) - f(t) \right) \quad (7)$$

where $T$ is the sampling period, $\eta$ is the update rate, and the initial value $\phi(0)$ is constant.
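The adaptive compensation term can be sketched as a discrete per-sample update, under the assumption that it is adjusted once per sampling period in proportion to the force-tracking error; the function name and gain value below are illustrative, not the paper's.

```python
def update_compensation(phi_prev, force_error_prev, eta=0.5):
    """One step of a simple adaptive compensation update: phi is adjusted
    each sampling period in proportion to the force-tracking error,
    driving the interaction-force error toward zero (eta is an assumed
    update rate)."""
    return phi_prev + eta * force_error_prev

# Demo: with a constant force error source of 2.0 N, phi converges to it,
# so the residual force error shrinks to zero.
phi = 0.0
for _ in range(60):
    phi = update_compensation(phi, 2.0 - phi, eta=0.5)
```

For a constant disturbance the error contracts by a factor (1 - eta) per step, so any update rate between 0 and 2 converges.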

3.2.2. Radial Basis Function Neural Network

The model of the RBFNN is expressed as follows:

$$y = \sum_{j=1}^{p} w_j \exp\!\left( -\frac{\lVert x - c_j \rVert^2}{2 \delta_j^2} \right) \quad (8)$$

where $y$ is the output of the neural network, $p$ is the number of hidden nodes, $w_j$ is the weight value from the hidden layer to the output layer, $\delta_j$ is the extension constant of the radial basis function, $x$ is the input of the neural network, and $c_j$ is the center of the neural network basis function.
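The RBFNN output described above, a weighted sum of Gaussian basis functions of the distance to each center, can be written as a minimal forward pass; the function below is a sketch, not the authors' implementation.

```python
import numpy as np

def rbfnn_forward(x, centers, widths, weights):
    """Output of a Gaussian RBF network:
    y = sum_j w_j * exp(-||x - c_j||^2 / (2 * delta_j^2))."""
    x = np.asarray(x, dtype=float)
    dist2 = np.sum((centers - x) ** 2, axis=1)      # squared distance to each center
    hidden = np.exp(-dist2 / (2.0 * widths ** 2))   # hidden-layer activations
    return hidden @ weights                          # weighted sum to the output
```

In the intention recognition setting, `x` would hold the interaction force and the robot position and velocity, and the scalar output would be the estimated intention velocity.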

In order to establish the network model, the network parameters are defined in conjunction with the variables of the contact human-machine collaboration system. The names and descriptions of the parameters are as follows:

$\{(v_i, \dot{x}_i, x_i, f_i)\}_{i=1}^{n}$ represents the $n$ groups of observation (sample) data after matching:

$v_i$ represents the $i$-th human intention (velocity) vector;

$\dot{x}_i$ represents the $i$-th velocity vector of the contact point;

$x_i$ represents the $i$-th position vector of the contact point;

$f_i$ represents the $i$-th interaction force vector. The force is collected by the force sensor in the direction of motion.

In order to define the number of hidden layer nodes, the K-means method is used. The K-means algorithm is generally used for data preprocessing or for labeling data to assist classification. In general, the cluster number must be a much smaller positive integer, and it can be determined through enumeration beginning with 1. According to the literature [33], the K value can be determined via enumeration. In this paper, we use the weighted mean value (Wmv) of the cluster radius, defined as follows, to determine the K value:

$$\mathrm{Wmv}(k) = \sum_{i=1}^{k} \frac{n_i}{N}\, r_i \quad (9)$$

where $k$ represents the cluster number, $r_i$ is the average radius of the $i$-th cluster, $n_i$ is the number of data points in the $i$-th cluster, and $N$ is the total amount of classified data.
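The Wmv criterion described above can be computed directly from any clustering result; a minimal sketch follows, where the function and variable names are ours, not the paper's.

```python
import numpy as np

def wmv(data, labels, centers):
    """Weighted mean value of cluster radii:
    Wmv = sum_i (n_i / N) * r_i, where r_i is the average distance of the
    points in cluster i to its center and n_i is the cluster size."""
    N = len(data)
    total = 0.0
    for i, c in enumerate(centers):
        members = data[labels == i]
        if len(members) == 0:
            continue
        r_i = np.mean(np.linalg.norm(members - c, axis=1))  # average radius
        total += (len(members) / N) * r_i
    return total
```

In practice one would run K-means for k = 1, 2, 3, ... and pick the k at which Wmv stops decreasing sharply, i.e., an elbow criterion on the weighted cluster radius.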

Gradient descent is an effective optimization algorithm that plays an important role in the iterative process of back-propagation algorithms. In this paper, the gradient descent method is used to solve for the weights of the neural network from the hidden layer to the output layer. Assuming there are $m$ samples $(x^{(i)}, y^{(i)})$, the cost function can be defined as

$$J(w) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_w(x^{(i)}) - y^{(i)} \right)^2 \quad (10)$$

where $h_w$ is the model function.

3.3. Online Prediction

During the online prediction, the inputs are the interaction force and the velocity and position of the robot, $(f, \dot{x}_r, x_r)$, and the output is $\hat{\dot{x}}_{hd}$. The trained neural network predicts the intention information of the person. This velocity is transformed from Cartesian space to the robot joint space, thus driving the robot motion.

4. Experimental Setup

4.1. Experimental Device

The schematic and experimental pictures of the interaction system are shown in Figure 3, and the parameters of the system are listed in Table 1. The interaction force comes from the six-dimensional force sensor installed at the end of the robot, from which only the one-dimensional force component along the direction of the slide motion is extracted. The operating frequency of the system is 1 kHz.

Table 1: Important parameters of human-machine collaboration system.
Figure 3: Schematic and experimental pictures of the human-robot collaboration system.
4.2. Sample Data Acquisition and RBFNN

In order to acquire the training data and test data of the radial basis function neural network, the adaptive impedance control method is adopted on the above single-degree-of-freedom human-robot collaboration system. The training data are used to train the radial basis function neural network, and the test data are used to test the accuracy of the trained model. In order to maximize the coverage of the collected data over the human-robot collaboration space, the collaborator operates the handle to cover the robot's movement space at multiple speeds as much as possible during the data collection, for example, moving the handle with sine or cosine speed profiles. The sampling frequency was 50 Hz, the entire acquisition process lasted 10 s, and the sampling process was repeated 3 times. In order to obtain effective training data and test data, data matching is executed according to the phase delay, with the mass matrix and damping matrix of the robot set to constant values.
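The collection protocol above (50 Hz sampling over 10 s runs with sinusoidal handle speeds) can be mimicked to generate illustrative trajectories; the amplitude and frequency below are assumed values, not the paper's.

```python
import numpy as np

def sine_velocity_profile(amplitude=0.1, freq_hz=0.5, fs=50, duration_s=10.0):
    """Sinusoidal handle velocity sampled at fs Hz for duration_s seconds,
    mimicking the sine/cosine speed motions used during data collection.
    Amplitude (m/s) and frequency (Hz) are illustrative assumptions."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    vel = amplitude * np.sin(2.0 * np.pi * freq_hz * t)
    pos = np.cumsum(vel) / fs   # simple numerical integration of velocity
    return t, pos, vel
```

Repeating such runs at several amplitudes and frequencies is one way to cover the collaboration workspace at multiple speeds, as the protocol requires.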

During this training process, parameters such as the number of hidden nodes and the weights from the hidden layer to the output layer are obtained. In the online intention estimation process, with the collected interaction force and the end position and velocity of the robot, the intention velocity of the collaborator is predicted by the offline-trained neural network. This velocity is transformed from Cartesian space to the robot joint space, thus driving the robot motion.

5. Experimental Results and Analysis

In order to verify the synchronization performance, the adaptive impedance control method [32] was compared with the RBFNN method. The tracking speed curves of the two methods are shown in Figures 4 and 5, respectively.

Figure 4: Velocity curve based on adaptive impedance control.
Figure 5: Velocity curve based on radial basis function neural network.

From Figure 4, we can see that the actual tracking speed of the robot under the adaptive impedance control method lags behind the partner’s expected speed, because the robot is always in a passive following state. In contrast, Figure 5 shows that the collaborator intention identification method based on the radial basis function neural network proposed in this paper eliminates about 70 ms of delay and achieves synchronization between the robot and the collaborator motion.

In order to verify that this RBFNN-based intention recognition method can save strength for the collaborator, the interaction forces obtained under the two control strategies are shown in Figure 6. During the two experiments, the running trajectory and speed were kept the same as much as possible. From Figure 6, compared with the adaptive impedance control method, the intention recognition method based on the radial basis function neural network greatly reduces the collaborator’s force. For detailed data, see Table 2.

Table 2: Mean values of interaction force.
Figure 6: Interaction forces curves based on adaptive impedance control method and radial basis function neural network method.

From Table 2, it can be seen that the mean interaction force based on the RBFNN is very small, only 3.9175 N. Compared to the method based on adaptive impedance control, the mean interaction force is reduced by 83.81%.

To further test the conclusion that the method proposed in this paper can ensure synchronization in human-robot cooperation with a small interaction force, a fragile item (an egg roll in the experiment) is placed between the cooperator's hand and the robot arm, as shown in Figure 7. The experimental results show that the cooperator can operate the handle rapidly and randomly without breaking the fragile item.

Figure 7: Human-robot collaboration experiment by using the fragile item to operate the robot.

6. Conclusion

In this paper, a machine learning method (a radial basis function neural network model) is adopted to estimate the cooperator's intention in contact human-robot collaboration. Firstly, in order to obtain the training data and test data for the neural network, the adaptive impedance control method is used to collect sample data. Secondly, the radial basis function neural network model is established after matching the sample data. Finally, the intention of the collaborator is identified through online prediction.

In order to verify the validity of the RBFNN method, the robot’s contact point speed and the interaction force are analyzed by comparison with the corresponding experimental values from the adaptive impedance control algorithm. The experimental results show that the proposed RBFNN-based method can accurately identify the collaborator’s intention in contact human-robot interaction, which not only improves motion synchronization but also effectively reduces the interaction force.

In this paper, the proposed method was verified by simulations and experiments on a single-degree-of-freedom physical human-robot interaction system. Extending the proposed method to multi-degree-of-freedom systems remains future work.

Data Availability

The algorithm and data in this manuscript used to support the findings of this study are currently under embargo while the research findings are commercialized. Requests for data, 12 months after publication of this article, will be considered by the corresponding author.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the Research Program of the CATARC (Tianjin) Automotive Engineering Research Institute Co., Ltd. (Study of the Evaluation Methods of Human-Machine Interaction Parameters, 18191221), China.

References

  1. B. Siciliano and O. Khatib, Handbook of Robotics, Springer-Verlag, Berlin, Heidelberg, Germany, 2008.
  2. B. D. Argall and A. G. Billard, “A survey of tactile human-robot interactions,” Robotics and Autonomous Systems, vol. 58, no. 10, pp. 1159–1176, 2010.
  3. R. Ikeura, T. Moriguchi, and K. Mizutani, “Optimal variable impedance control for a robot and its application to lifting an object with a human,” in Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2002), pp. 500–505, Germany, September 2002.
  4. S. M. M. Rahman, R. Ikeura, M. Nobe, and H. Sawai, “Controlling a power assist robot for lifting objects considering human's unimanual, bimanual and cooperative weight perception,” in Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), pp. 2356–2362, USA, May 2010.
  5. R. Poppe, “A survey on vision-based human action recognition,” Image and Vision Computing, vol. 28, no. 6, pp. 976–990, 2010.
  6. K. Yan, R. Sukthankar, and M. Hebert, “Spatio-temporal shape and flow correlation for action recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, Minneapolis, Minnesota, USA, 2007.
  7. L. Wang and D. Suter, “Learning and matching of dynamic shape manifolds for human action recognition,” IEEE Transactions on Image Processing, vol. 16, no. 6, pp. 1646–1661, 2007.
  8. K. Kosuge and N. Kazamura, “Control of a robot handling an object in cooperation with a human,” in Proceedings of the 6th IEEE International Workshop on Robot and Human Communication (RO-MAN '97), pp. 142–147, Sendai, Japan, October 1997.
  9. D. P. Losey, C. G. McDonald, E. Battaglia, and M. K. O’Malley, “A review of intent detection, arbitration, and communication aspects of shared control for physical human-robot interaction,” Applied Mechanics Reviews, vol. 70, no. 1, pp. 1–19, 2018.
  10. D. P. McMullen, G. Hotson, K. D. Katyal et al., “Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, no. 4, pp. 784–796, 2014.
  11. A. Radmand, E. Scheme, and K. Englehart, “A characterization of the effect of limb position on EMG features to guide the development of effective prosthetic control schemes,” in Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2014), pp. 662–667, Chicago, USA, August 2014.
  12. M. Rasouli, K. Chellamuthu, J.-J. Cabibihan, and S. L. Kukreja, “Towards enhanced control of upper prosthetic limbs: a force-myographic approach,” in Proceedings of the 6th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2016), pp. 232–236, Singapore, June 2016.
  13. H. K. Yap, A. Mao, J. C. H. Goh, and C.-H. Yeow, “Design of a wearable FMG sensing system for user intent detection during hand rehabilitation with a soft robotic glove,” in Proceedings of the 6th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2016), pp. 781–786, Singapore, June 2016.
  14. E. Cho, R. Chen, L.-K. Merhi, Z. Xiao, B. Pousett, and C. Menon, “Force myography to control robotic upper extremity prostheses: a feasibility study,” Frontiers in Bioengineering and Biotechnology, vol. 4, no. 18, pp. 1–12, 2016.
  15. S. Au, M. Berniker, and H. Herr, “Powered ankle-foot prosthesis to assist level-ground and stair-descent gaits,” Neural Networks, vol. 21, no. 4, pp. 654–666, 2008.
  16. K. Kiguchi and Y. Hayashi, “An EMG-based control for an upper-limb power-assist exoskeleton robot,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 4, pp. 1064–1071, 2012.
  17. A. U. Pehlivan, D. P. Losey, and M. K. O’Malley, “Minimal assist-as-needed controller for upper limb robotic rehabilitation,” IEEE Transactions on Robotics, vol. 32, no. 1, pp. 113–124, 2016.
  18. R. Kelley, A. Tavakkoli, C. King, M. Nicolescu, M. Nicolescu, and G. Bebis, “Understanding human intentions via Hidden Markov Models in autonomous mobile robots,” in Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI 2008), pp. 367–374, Amsterdam, Netherlands, March 2008.
  19. C. Lee and Y. Xu, “Online, interactive learning of gestures for human/robot interface,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 2982–2987, Minneapolis, Minnesota, USA, 1996.
  20. Y. Maeda, T. Hara, and T. Arai, “Human-robot cooperative manipulation with motion estimation,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2240–2245, Maui, USA, 2001.
  21. T. Flash and N. Hogan, “The coordination of arm movements: an experimentally confirmed mathematical model,” The Journal of Neuroscience, vol. 5, no. 7, pp. 1688–1703, 1985.
  22. N. Diolaiti, C. Melchiorri, and S. Stramigioli, “Contact impedance estimation for robotic systems,” IEEE Transactions on Robotics, vol. 21, no. 5, pp. 925–935, 2005.
  23. C. Passenberg, R. Groten, A. Peer, and M. Buss, “Towards real-time haptic assistance adaptation optimizing task performance and human effort,” in Proceedings of the 2011 IEEE World Haptics Conference (WHC 2011), pp. 155–160, Istanbul, Turkey, June 2011.
  24. M. Awais and D. Henrich, “Online intention learning for human-robot interaction by scene observation,” Advanced Robotics and Its Social Impacts, vol. 21, no. 5, pp. 13–18, 2012.
  25. B. Shen, J. Li, F. Bai, and C. Chew, “Motion intent recognition for control of a lower extremity assistive device (LEAD),” in Proceedings of the 2013 IEEE International Conference on Mechatronics and Automation (ICMA), pp. 926–931, Karlsruhe, Germany, August 2013.
  26. K. Wakita, J. Huang, P. Di, K. Sekiyama, and T. Fukuda, “Human-walking-intention-based motion control of an omnidirectional-type cane robot,” IEEE/ASME Transactions on Mechatronics, vol. 18, no. 1, pp. 285–296, 2013.
  27. S. S. Ge, Y. Li, and H. He, “Neural-network-based human intention estimation for physical human-robot interaction,” in Proceedings of the 2011 International Conference on Ubiquitous Robots and Ambient Intelligence, pp. 390–395, Incheon, South Korea, 2011.
  28. T. Tsumugiwa, R. Yokogawa, and K. Hara, “Variable impedance control based on estimation of human arm stiffness for human-robot cooperative calligraphic task,” in Proceedings of the 2002 IEEE International Conference on Robotics and Automation, pp. 1–10, Washington, USA, May 2002.
  29. Y. Chua, K. P. Tee, and R. Yan, “Human-robot motion synchronization using reactive and predictive controllers,” in Proceedings of the 2010 IEEE International Conference on Robotics and Biomimetics (ROBIO 2010), pp. 223–228, Tianjin, China, December 2010.
  30. M. Awais and D. Henrich, “Online intention learning for human-robot interaction by scene observation,” Advanced Robotics and Its Social Impacts, vol. 21, no. 5, pp. 13–18, 2012.
  31. S. Jung, Y. G. Bae, and M. Tomizuka, “A simplified time-delayed disturbance observer for position control of robot manipulators,” in Proceedings of the 2012 IEEE International Conference on Automation Science and Engineering (CASE 2012), pp. 555–560, Republic of Korea, August 2012.
  32. S. Jung, T. C. Hsia, and R. G. Bonitz, “Force tracking impedance control of robot manipulators for environment with damping,” IEEE Transactions on Control Systems Technology, vol. 12, no. 3, pp. 474–483, 2004.
  33. A. Rajaraman, J. Leskovec, and J. Ullman, Mining of Massive Datasets, Posts and Telecom Press, China, 2014.