Journal of Healthcare Engineering
Volume 2017, Article ID 8932938, 11 pages
https://doi.org/10.1155/2017/8932938
Research Article

Hierarchical Shared Control of Cane-Type Walking-Aid Robot

1Beijing Key Laboratory of Rehabilitation Technical Aids for Old-Age Disability, The National Research Centre for Rehabilitation Technical Aids, Beijing 100176, China
2Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
3The 28th Research Institute of China Electronics Technology Group Corporation, Nanjing 210007, China

Correspondence should be addressed to Qingyang Yan; yanqingyang@hust.edu.cn

Received 5 January 2017; Revised 28 May 2017; Accepted 27 June 2017; Published 13 August 2017

Academic Editor: Chengzhi Hu

Copyright © 2017 Chunjing Tao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A hierarchical shared-control method of the walking-aid robot, addressing both human motion intention recognition and obstacle emergency avoidance based on the artificial potential field (APF), is proposed in this paper. The human motion intention is obtained from the interaction force measurements of the sensory system, which is composed of four force-sensing resistors (FSRs) and a torque sensor. Meanwhile, a forward-facing laser range finder (LRF) is applied to detect obstacles and guide the operator according to the repulsion force calculated by the artificial potential field. An obstacle emergency-avoidance method comprising different control strategies is also proposed for the different states of obstacles and emergency cases. To ensure the user's safety, the hierarchical shared-control method combines the intention recognition method with the obstacle emergency-avoidance method according to the distance between the walking-aid robot and the obstacles. Finally, experiments validate the effectiveness of the proposed hierarchical shared-control method.

1. Introduction

With the development of society, the aging of the population has caused increasingly serious social problems [1]. The elderly face problems such as weakening physical ability and visual deterioration as they grow older. Thus, the elderly need to improve their walking ability to cope with the basic needs of daily life. Accordingly, many researchers have paid attention to applications of robotic and medical technology in recent years [2–10]. They have built many intelligent rehabilitation aids (e.g., walking-aid robots) to help the elderly regain the ability to walk normally and have produced a series of achievements. For example, Kawamoto et al. developed the exoskeleton “HAL” [5], a Wearable Power-Assist Locomotor (WPAL) was developed for the lower limb [6], Kikuchi et al. [7] proposed an intelligently controllable walker (i-walker), Hirata et al. developed a passive intelligent walker called “RT-Walker” [9], and Wakita et al. [10] developed a cane-type walking-aid robot, “i-cane,” to help the elderly walk and rehabilitate.

In addition, another active topic in robotics is obstacle avoidance. These investigations generally require various sensors, such as ultrasonic sensors, laser sensors, and cameras. Combined with specific algorithms, autonomous navigation and obstacle avoidance functions can be achieved. A great deal of research has been published in this area [11–14]. References [15, 16] proposed obstacle detection and avoidance methods for camera-equipped robots. However, image processing makes the computation more complicated and more time-consuming, which is not suitable for walking-aid robots.

From the aforementioned research, it can be seen that walking-aid robots have good human-machine interaction interfaces, and there are many human motion intention recognition methods that fully consider the human's subjective intention [17–24]. Admittance control performs well in walking-aid robots [10]. However, these walking-aid robots usually cannot recognize and rectify the operator's unreasonable or incorrect intentions, which may cause safety hazards. In comparison, obstacle avoidance robots have a path-planning function and can reach a target point safely. Regretfully, obstacle avoidance robots serve only this single function and lack a human-machine interface.

Considering the advantages and disadvantages of robot control and human control, many investigators have combined the advantages of these two control methods and proposed the concept of shared control. Shared control is defined as the sharing of a system's control between one or more human beings and one or more robotic controllers [25]. In this field, many researchers have made notable achievements [26–30]. Overall, however, research on shared-control robots is still in its infancy. References [31, 32] proposed a shared-control method for a wheeled robot that detects human intention through electromyography (EMG). However, devices that obtain bioelectric signals through pin electrodes are uncomfortable; they are also expensive and inconvenient to use in daily life without professional staff around.

In this study, a convenient and cost-effective hierarchical shared-control method of the walking-aid robot, based on human motion intention recognition and obstacle emergency-avoidance methods, is presented to handle obstacles encountered during normal walking. It saves effort during obstacle avoidance and preserves part of the operator's original walking intention. The walking-aid robot is introduced in Section 2. The hierarchical shared-control method, which considers both the walking-aid and obstacle avoidance functions of the walking-aid robot, is introduced in detail in Section 3. In particular, the intention recognition algorithm makes the walking-aid robot thoroughly consider the operator's subjective intention and enhances the quality of human-robot interaction, while the artificial potential field method plans the path of the walking-aid robot to avoid the risks caused by the operator's unreasonable intentions. Furthermore, different control strategies are adopted according to the distance between the walking-aid robot and the obstacles. Section 4 presents the experiments, conducted in a real environment, and their analysis, which validate the effectiveness of the proposed shared-control algorithm. Conclusions are drawn in Section 5.

2. Walking-Aid Robot System

2.1. Architectural Structure of the Walking-Aid Robot

In this work, the walking-aid robot system is composed of a set of solid support structures, an omni-directional platform, an industrial personal computer (IPC), a laser sensor, a force-sensing device with FSRs, and a torque sensor. A photograph of the walking-aid robot is shown in Figure 1. The omni-directional platform consists of three omni-directional mecanum wheels driven by DC motors. The laser sensor is adopted to detect the obstacles around the robot. The operating principle of the force-sensing device is described in the following section.

Figure 1: The walking-aid robot.

Figure 2 shows the control flowchart of the walking-aid robot. In this system, the interaction force from the operator, collected by the force-sensing device, and the environment information, collected by the laser sensor, are transmitted to the IPC. The IPC then sends control commands based on the collected data to the platform to control the movements of the walking-aid robot.

Figure 2: Architecture of control system applied in the walking-aid robot.
2.2. The Force-Sensing Device in Walking-Aid Robot

Figure 3 shows the force-sensing device of the walking-aid robot in detail. Figure 3(a) shows the structure of the force-sensing device, including the handle, the torque sensor, and the FSRs for the four directions. Figure 3(b) shows the distribution of the four FSR sensors, which are attached to the four sides of the metal rod of the handle so that the FSR sensors and the torque sensor can precisely detect the magnitude and direction of the force from the operator. Figure 3(c) shows the FSR sensor, which is a one-dimensional variable-resistance pressure sensor. By connecting these FSR sensors and the torque sensor to signal-conditioning circuits, the force from the operator is obtained as the intention force.

Figure 3: Force sensors.
2.3. The Establishment of Coordinate System

Figure 4 shows the top view of the walking-aid robot architecture. Because the shared-control algorithm is based only on the human motion intention recognition and the local obstructions around the walking-aid robot, a global coordinate system is not necessary in this study; only the coordinate systems of the omni-directional platform, the force-sensing device, and the laser sensor need to be established. In order to simplify the calculations, these three coordinate systems are unified into a fixed local coordinate system XOY. In the coordinate system XOY, the front direction of the walking-aid robot is set as the positive x-axis and the left side is set as the positive y-axis. Because the FSR sensors are one-dimensional pressure sensors, the data obtained by the force-sensing device are the components of the operator's force in the X or Y direction. The forces in the four directions acquired by the FSR sensors are denoted $f_1$, $f_2$, $f_3$, and $f_4$, respectively, where $f_1$ and $f_3$ act along the positive and negative x-axis and $f_2$ and $f_4$ act along the positive and negative y-axis. The force detected by the torque sensor along the clockwise direction is denoted $\tau$. The laser sensor continuously scans the front obstacles in the counterclockwise direction to acquire the environment information. In the local coordinate system XOY, the positive X direction is set as the initial angle 0°, and the angle increases along the counterclockwise direction. In this case, the scanning range of the laser sensor is [−90°, 90°].

Figure 4: Top view of the walking-aid robot system.

3. Shared-Control Algorithm

3.1. The Algorithm for Admittance Control

From the force-sensing device described in Section 2.2 and the forces defined in Section 2.3, we can obtain the components of the operator's intention force along the x-axis and the y-axis and in the rotational direction, respectively. The operator's intentions are

$$u = [f_x \ \ f_y \ \ \tau_z]^T. \tag{1}$$

Since the FSRs on opposite sides of the handle rod measure pushes in opposite directions, the force components can be obtained by

$$f_x = f_1 - f_3, \qquad f_y = f_2 - f_4, \qquad \tau_z = -\tau. \tag{2}$$

In this study, the operator's intention forces are represented by a five-dimensional vector and the operator's intentions by a three-dimensional vector. Then, the intention forces and the operator's intentions can be expressed as

$$F = [f_1 \ \ f_2 \ \ f_3 \ \ f_4 \ \ \tau]^T, \qquad u = [f_x \ \ f_y \ \ \tau_z]^T. \tag{3}$$

$f_x$ is the intention force along the direction of the x-axis, $f_y$ is the intention force along the direction of the y-axis, and $\tau_z$ is the torque exerted on the walking-aid robot. The positive direction of the torque is the counterclockwise direction in this article. Then, the intention forces can be rewritten as

$$u = TF. \tag{4}$$

$T$ is a 3 × 5 transformation matrix, consistent with (2), that maps the measured forces to the intention components, and $L$ is half the width of the walking-aid robot in Figure 4.
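For illustration, a minimal Python sketch of the transformation (4) is given below. The concrete entries of T are an assumption consistent with (2) (FSRs on the +x, +y, −x, and −y sides of the handle rod, with the clockwise torque reading negated to match the counterclockwise-positive convention); they are not the robot's exact calibration.

import numpy as np

# Assumed transformation matrix T (3 x 5): rows give f_x, f_y, tau_z.
T = np.array([
    [1.0, 0.0, -1.0,  0.0,  0.0],   # f_x   = f1 - f3
    [0.0, 1.0,  0.0, -1.0,  0.0],   # f_y   = f2 - f4
    [0.0, 0.0,  0.0,  0.0, -1.0],   # tau_z = -tau (sign convention flip)
])

def intention(F):
    """Map the sensor vector F = [f1, f2, f3, f4, tau] to u = [f_x, f_y, tau_z]."""
    return T @ np.asarray(F, dtype=float)

print(intention([3.0, 0.5, 1.0, 0.5, 0.2]))   # -> [ 2.   0.  -0.2]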

Once the vector of the operator's intention force is obtained, it can be converted into a velocity vector through the open-loop admittance control algorithm. According to the admittance control algorithm, the transfer function can be written as

$$\frac{v(s)}{f(s)} = \frac{K}{Ms + D}, \tag{5}$$

where $M$ is the virtual mass, $D$ is the virtual damping, and $K$ is the proportional coefficient. Further, (5) can be put into the form of a continuous time domain model:

$$M\dot{v}(t) + Dv(t) = Kf(t). \tag{6}$$

Discretizing (6) with sampling period $\Delta t$, we have

$$v(k) = \frac{M\,v(k-1) + K\Delta t\,f(k)}{M + D\Delta t}. \tag{7}$$

Finally, (7) can be rewritten in the simplified form

$$v(k) = \alpha\,v(k-1) + \beta\,f(k), \qquad \alpha = \frac{M}{M + D\Delta t}, \quad \beta = \frac{K\Delta t}{M + D\Delta t}. \tag{8}$$
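A minimal sketch of the control law (8) follows, assuming the first-order admittance in (5) discretized by the backward Euler method; the virtual mass M, damping D, gain K, and sampling period dt are illustrative values rather than the robot's tuned parameters.

# Discretized admittance: v(k) = alpha * v(k-1) + beta * f(k), applied per axis.
M, D, K, dt = 2.0, 5.0, 0.8, 0.05          # illustrative parameters
alpha = M / (M + D * dt)                    # memory of the previous velocity
beta = K * dt / (M + D * dt)                # gain on the current intention force

def admittance_step(v_prev, f):
    """One control cycle: intention force f (N) -> commanded velocity (m/s)."""
    return alpha * v_prev + beta * f

v = 0.0
for f in [10.0, 10.0, 10.0, 0.0, 0.0]:      # a short push, then release
    v = admittance_step(v, f)
    print(round(v, 4))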

3.2. The Artificial Potential Field Approaches

The repulsion force based on the artificial potential field approach is given by the formula

$$F_{rep} = \frac{a}{d^n} + b, \tag{9}$$

where $a$ and $b$ are constants, the constant $n$ is a positive integer, $d$ is the distance between an obstacle and the walking-aid robot, and $F_{rep}$ represents the repulsion force. An effective obstacle recognition algorithm for calculating the repulsion force has been proposed in [33]. A linked list of obstacles, which reflects the information of the surrounding obstructions, can be obtained by the simplified obstacle recognition algorithm.

Each obstacle in the environment can be expressed as a six-tuple structure $[d_r, \theta_r, d_l, \theta_l, F_{rep}, \theta_{rep}]$, as shown in Figure 5.

Figure 5: The structure of the obstacle (ob).

The laser sensor is located at the point O. The obstacle is denoted ob. $d_r$ represents the distance between the obstruction's right edge and the laser sensor, and $\theta_r$ is the angle between the x-axis and the line that connects the right edge of the obstruction to the laser sensor. Similarly, $d_l$ and $\theta_l$ represent the corresponding distance and angle for the left edge. $F_{rep}$ is the magnitude of the repulsion force, and $\theta_{rep}$ is the angle between the positive x-axis and the direction of the repulsion force. For each obstacle in this study, the direction of the repulsion force is defined by the angle bisector of the angle subtended at the laser sensor by the two edges of the obstacle, that is, $\theta_{rep} = (\theta_r + \theta_l)/2$. Substituting the distance $d$ measured along this angle bisector into (9), the repulsion force caused by a single obstacle can be obtained as

$$F_{rep} = \frac{a}{d^n} + b. \tag{10}$$
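The following sketch computes the repulsion of a single obstacle from its six-tuple under the assumed form of (9) and (10); the constants a, b, and n and the use of the mean edge range as the bisector distance are illustrative assumptions.

import math

a, b, n = 4.0, -0.16, 2                     # illustrative APF constants

def repulsion(d_r, th_r, d_l, th_l):
    """Return (magnitude, direction) of the repulsion from one obstacle."""
    th_rep = 0.5 * (th_r + th_l)            # angle bisector of the two edges
    d = 0.5 * (d_r + d_l)                   # assumed range along the bisector
    f_rep = a / d ** n + b
    return max(f_rep, 0.0), th_rep          # clamp so far obstacles exert no pull

print(repulsion(0.6, math.radians(-10), 0.7, math.radians(10)))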

It should be noted that the gap between two obstacles may be so small that the walking-aid robot cannot pass through it safely. Since the repulsion force caused by an obstacle is determined only by the relative distance between the obstacle and the walking-aid robot, the accumulated repulsion force caused by a group of obstacles at nearly the same location may be greater than the operator's intention force. In this case, the accumulated repulsion force will hinder the motion of the walking-aid robot. To avoid this, obstacles separated by a short distance should be merged. Figure 6 gives a diagram of the merging of obstacles.

Figure 6: Diagram of the merging obstacles ob1 and ob2.

For the two adjacent obstacles ob1 and ob2 shown in Figure 6(a), the value of the angle $\Delta\theta = \theta_{r2} - \theta_{l1}$ between the facing edges can easily be obtained from the given six-tuple structures. According to the law of cosines, the distance between ob1 and ob2 is found as

$$d_{12} = \sqrt{d_{l1}^2 + d_{r2}^2 - 2 d_{l1} d_{r2} \cos\Delta\theta}. \tag{11}$$

If the distance $d_{12}$ is smaller than the size of the walking-aid robot, the two obstacles need to be merged, as shown in Figure 6(b). The margin of the merged obstacle is determined by the right edge of obstacle ob1 and the left edge of obstacle ob2. Then, the distance $d$ along the angle bisector of the merged obstacle is obtained. Based on (9), the repulsion force caused by the merged obstacle can be calculated as

$$F_{rep} = \frac{a}{d^n} + b, \qquad \theta_{rep} = \frac{\theta_{r1} + \theta_{l2}}{2}. \tag{12}$$
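A sketch of the merge test based on (11) is given below, assuming ob1 precedes ob2 in the counterclockwise scan so that the gap runs from the left edge of ob1 (d_l1, theta_l1) to the right edge of ob2 (d_r2, theta_r2); the robot width is an illustrative value.

import math

ROBOT_WIDTH = 0.55   # metres; assumed size the gap must exceed for safe passage

def gap(d_l1, th_l1, d_r2, th_r2):
    """Distance between the facing edges of ob1 and ob2, by the law of cosines (11)."""
    dth = th_r2 - th_l1
    return math.sqrt(d_l1**2 + d_r2**2 - 2.0 * d_l1 * d_r2 * math.cos(dth))

def should_merge(d_l1, th_l1, d_r2, th_r2):
    return gap(d_l1, th_l1, d_r2, th_r2) < ROBOT_WIDTH

print(should_merge(1.0, math.radians(5), 1.1, math.radians(25)))   # True: gap ~ 0.38 m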

The distance $d$ of the merged obstacle cannot always be determined directly by the laser sensor, because the light beam from the laser sensor along the angle bisector might just cross the gap between the two obstacles; if this happens, the laser sensor detects no obstacle in that direction, and the repulsion force cannot be obtained from the raw reading. For this reason, the information of the merged obstacles is stored in the obstacle list, and the resultant of the repulsion forces is given by

$$\mathbf{F}_{rep,total} = \sum_{i} \mathbf{F}_{rep,i}. \tag{13}$$

In practical applications, the components of the repulsion forces along the X and Y directions are calculated separately in order to facilitate the fusion with the intention force. The positive directions of the repulsion-force components are consistent with the x-axis and the y-axis, respectively. The corresponding components can be obtained as

$$F_{rep,x} = \sum_i F_{rep,i}\cos\theta_{rep,i}, \qquad F_{rep,y} = \sum_i F_{rep,i}\sin\theta_{rep,i}. \tag{14}$$
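In code, (13) and (14) reduce to accumulating the per-obstacle magnitudes and bisector angles, as in the following sketch; the (magnitude, direction) pairs are assumed to come from the per-obstacle computation above.

import math

def total_repulsion(obstacles):
    """obstacles: iterable of (f_rep, th_rep) pairs -> (F_rep_x, F_rep_y)."""
    fx = sum(f * math.cos(th) for f, th in obstacles)
    fy = sum(f * math.sin(th) for f, th in obstacles)
    return fx, fy

print(total_repulsion([(9.3, math.radians(180)), (4.0, math.radians(140))]))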

3.3. The Algorithm of Obstacle Emergency Avoidance

A short distance between the walking-aid robot and an obstacle may cause a collision. Thus, an obstacle emergency-avoidance algorithm must be adopted to ensure the safety of both the operator and the robot. Considering the features of the walking-aid robot, an obstacle avoidance strategy is proposed in this study. When the walking-aid robot moves, the obstacle emergency-avoidance region is first determined based on the strategy. If the component of the intention force along the x-axis is negative when there are obstacles in the region, the walking-aid robot will move back to avoid the obstacles. If not, the walking-aid robot will move laterally to avoid the obstacles. In this study, the selected obstacle emergency-avoidance region is a rectangle with the laser sensor located at point O, as shown in Figure 7. If a scanning point of the laser sensor is detected inside the rectangular area, there is an obstacle in the area. The rectangle condition is described as follows:

$$0 < x \le D_s \quad \text{and} \quad -\frac{W}{2} \le y \le \frac{W}{2}, \tag{15}$$

where $D_s$ is the depth of the rectangle ahead of the laser sensor and $W$ is its width.

Figure 7: The obstacle emergency-avoidance region.

The point A(x, y) is a detection point of the laser sensor, with $x = d\cos\theta$ and $y = d\sin\theta$, where $d$ and $\theta$ are the range and angle returned by the laser sensor. When (15) holds, the rectangle condition is true.

In Figure 7, the “WallD” represents the distance between the walking-aid robot and the lateral wall.

As the first step, the algorithm makes the robot move parallel to the positive or negative direction of the y-axis to avoid obstacles. A voting algorithm is used to select the movement direction: the robot counts the scanning points satisfying the rectangle condition (15) on each side of the x-axis, and the side with fewer such points is considered to have enough space for obstacle avoidance, so the robot moves toward that side (a sketch is given below). The initial state of the walking-aid robot is also determined by the voting algorithm. Overall, the proposed obstacle emergency-avoidance method can be described by the state transition diagram shown in Figure 8. The numbers 0–3 represent the states of the walking-aid robot listed in Table 1, and the letters “a”–“h” represent the state-transition conditions in Table 2.
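The sketch below illustrates the rectangle condition (15) and the voting rule, assuming the emergency region extends RECT_DEPTH ahead of the laser sensor and RECT_HALF_WIDTH to each side; both dimensions and the (range, angle) scan format are illustrative assumptions.

import math

RECT_DEPTH, RECT_HALF_WIDTH = 0.5, 0.35     # metres, illustrative region size

def in_rectangle(d, theta):
    """True if the scan point A(x, y) falls inside the emergency region (15)."""
    x, y = d * math.cos(theta), d * math.sin(theta)
    return 0.0 < x <= RECT_DEPTH and abs(y) <= RECT_HALF_WIDTH

def avoidance_side(scans):
    """Vote: move toward the side (+y = 'left', -y = 'right') with fewer hits."""
    left = sum(1 for d, th in scans if in_rectangle(d, th) and th > 0)
    right = sum(1 for d, th in scans if in_rectangle(d, th) and th <= 0)
    return 'left' if left < right else 'right'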

Figure 8: The state transition diagram of obstacle emergency-avoidance method.
Table 1: The states of the walking-aid robot.
Table 2: The state-transition conditions.

During the obstacle avoidance, the robot monitors the distance “WallD” at all times. Once this distance is less than the safe distance, the robot changes its state of motion immediately. The walking-aid robot can thus switch among various motion states when there are obstacles in the obstacle avoidance region.

3.4. The Hierarchical Shared-Control Algorithm

From the above descriptions, the admittance control algorithm proposed in Section 3.1 can effectively obtain the corresponding velocity of the robot from the operator's intention force and allows the operator to control the robot smoothly. The artificial potential field method in Section 3.2 synthetically considers the obstacles around the robot and the operator's intention force, which helps the robot move in a reasonable direction. The obstacle emergency-avoidance method proposed in Section 3.3 can fully guarantee the safety of the robot while avoiding obstacles.

Integrating the above three algorithms, the shared-control algorithm can be designed easily. According to the different distances between the robot and the obstacles, organized in a hierarchical way as shown in Figure 9, the shared-control algorithm uses different control algorithms to control the robot, as shown in Figure 10 and in the Hierarchical Shared-Control Algorithm (Algorithm 1).

Figure 9: The hierarchical layers of the shared-control algorithm.
Figure 10: The hierarchical shared-control algorithm.

In the control system, the laser sensor is set as the center, and the outermost region is defined as the Free Layer. When there are no obstacles within the semicircular region shown in Figure 9, the robot fully complies with the operator's intention. The region between the semicircular region and the rectangular region is defined as the Repulsion Interference Layer. If any obstacle is detected in this layer, the repulsion force calculated by the artificial potential field method and the operator's intention force control the movement of the robot at the same time. The region inside the rectangle is defined as the Obstacle Emergency-Avoidance Layer. If any obstacle is detected in this layer, the robot accepts only the backward movement instruction from the operator; otherwise, it follows the obstacle emergency-avoidance algorithm proposed in Section 3.3 to avoid the obstacles. A sketch of this layer selection is given below.
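This compact sketch reuses the in_rectangle() test from the sketch in Section 3.3; the Free Layer radius FREE_RADIUS is an illustrative value.

FREE_RADIUS = 2.0   # metres; assumed boundary of the Repulsion Interference Layer

def control_mode(scans):
    """Pick the active layer from the raw laser scans [(d, theta), ...]."""
    if any(in_rectangle(d, th) for d, th in scans):
        return 'emergency'   # Obstacle Emergency-Avoidance Layer (Section 3.3)
    if any(d < FREE_RADIUS for d, th in scans):
        return 'shared'      # intention force fused with the APF repulsion
    return 'free'            # admittance control on the intention force alone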

Algorithm 1: Hierarchical Shared Control.
Figure 11: The human-robot interaction experiment.

4. Experiment and Analysis

In this work, the experiments are conducted in the specific indoor environment depicted in Figure 11. In the experiments, the constant $n$ in (9) is selected as 2 to ensure that the repulsion force depends strongly on the distance between the obstacles and the walking-aid robot. Meanwhile, the value of the repulsion force should match the magnitude of the intention force measured by the sensors. According to the AD conversion result of the IPC, the valid range of the intention force is approximately 1–25 N. When the distance $d$ is large, the obstacle is very far from the walking-aid robot and its repulsion force is so small that it cannot affect the movements of the walking-aid robot; at this time, the repulsion force $F_{rep} \approx 0$. When the distance $d$ is small, the obstacle is very close to the walking-aid robot; at this time, the repulsion force should roughly equal the maximum intention force, namely, $F_{rep} \approx 25$ N. Substituting these boundary conditions into (9), we can obtain the parameters $a$ and $b$. In the experiment, the walking-aid robot is pushed forward. Figure 11 shows the movement trajectory of the walking-aid robot.
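As a worked example, the two boundary conditions can be substituted into (9) with n = 2 and solved for a and b, as sketched below; the near and far distances d_min and d_max are assumed values for illustration.

# Solve a / d**2 + b = F for the two assumed boundary conditions:
# F ~ 25 N (max intention force) at d_min, and F ~ 0 at d_max.
n, d_min, d_max, F_max = 2, 0.4, 2.0, 25.0

a = F_max / (1.0 / d_min**n - 1.0 / d_max**n)
b = -a / d_max**n

print(a, b)   # a ~ 4.17, b ~ -1.04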

The experiment is conducted with a 24-year-old student and a 23-year-old student. The experimental results are shown in Figures 12 and 13. In the figures, the horizontal axis is the time coordinate. $f_x$ and $f_y$ represent the intention forces along the x-axis and y-axis, respectively. $F_{rep,x}$ and $F_{rep,y}$ represent the repulsion forces along the x-axis and y-axis, respectively. $v_x$ and $v_y$ represent the velocities of the walking-aid robot along the x-axis and y-axis, respectively.

Figure 12: The experimental results with subject 1.
Figure 13: The experimental results with subject 2.

From the above three figures, the experiments can be divided into the following three stages:

Stage 1. At the beginning, the walking-aid robot is very far from the obstacles, so the obstacles stay in the Free Layer. At this moment, the component of the operators' intention force along the x-axis is greater than the repulsion force, while the operators' intention force along the y-axis is close to zero. Thus, the robot is mainly controlled by the operators. Meanwhile, the repulsion force is very small and mainly along the y-axis, as shown in Figures 12(b) and 13(b), which indicates that the walking-aid robot moves at a certain speed in the negative direction of the y-axis. After 10 seconds, the repulsion force increases markedly as the walking-aid robot gradually approaches the obstacle ob2, which is now detected in the Repulsion Interference Layer. At the same time, the velocity of the robot along the x-axis decreases rapidly to zero, while the velocity along the y-axis increases as the repulsion force increases. During this stage, the state of the walking-aid robot is “state 0.”

Stage 2. At this point, the operators feel that the velocity of the walking-aid robot has decreased; thus, they increase the thrust along the x-axis in order to make the walking-aid robot move again. At this moment, the obstacle ob2 is detected in the Obstacle Emergency-Avoidance Layer. According to the obstacle emergency-avoidance method, the state-transition condition “a” is met, and the state of the walking-aid robot changes from “state 0” to “state 1,” so that the velocity of the robot along the x-axis drops rapidly to zero and the robot moves at a fixed velocity along the y-axis. From Figures 12(b) and 13(b), we can see that the repulsion force remains constant. This indicates that in “state 1” the operators' intention force cannot affect the velocity of the robot and that no obstacles are detected in the emergency obstacle-avoidance region.

Stage 3. Due to the influence of the obstacle ob3, the walking-aid robot moves slowly along the positive direction of the y-axis according to the obstacle emergency-avoidance method. Near the end of the trials in Figures 12 and 13, the operators stop applying the intention force to the walking-aid robot, and the walking-aid robot stops moving.

Figure 14 shows the experiment with only the admittance control applied, in which the control does not reflect the effect of the obstacles; when the operator walks towards the obstacles, he has to avoid them on his own. Compared with the intention force in Figures 12 and 13, the intention force in Figure 14 is greater during the obstacle avoidance, which indicates that the shared-control method can save effort.

Figure 14: The experiment without shared control.

Overall, the human-robot interaction experiments show the effectiveness of the hierarchical shared control for the walking-aid robot. Based on the hierarchical shared-control algorithm, the robot can successfully help the operator avoid obstacles and guide the operator to move in a feasible direction, which is convenient for the operator.

5. Conclusions

In this work, we proposed a hierarchical shared-control method for the walking-aid robot by combining human motion intention recognition and obstacle avoidance methods. It saves effort during obstacle avoidance and preserves part of the operators' original walking intention. Using this control strategy, the walking-aid robot can autonomously choose different control algorithms to avoid obstacles based on the distance between the walking-aid robot and the obstacles. Moreover, the emergency obstacle avoidance mechanism is designed to ensure the safety of the robot system. The experimental results show that the walking-aid robot can switch among the different control algorithms smoothly and guide operators to walk safely.

Conflicts of Interest

The authors declare no potential conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC) [Grant no. 61473130], the Applied Basic Research Program of Wuhan [Grant no. 2016010101010014], and in part by the Science Fund for Distinguished Young Scholars of Hubei Province (2015CFA047). The authors gratefully acknowledge this support.

References

1. K. A. Kline and D. M. Bowdish, “Infection in an aging population,” Current Opinion in Microbiology, vol. 29, pp. 63–67, 2016.
2. C. Hu, F. Aeschlimann, G. Chatzipirpiridis et al., “Spatiotemporally controlled electrodeposition of magnetically driven micromachines based on the inverse opal architecture,” Electrochemistry Communications, vol. 81, 2017.
3. C. Hu, H. Vogler, M. Aellen et al., “High precision, localized proton gradients and fluxes generated by a microelectrode device induce differential growth behaviors of pollen tubes,” Lab on a Chip, vol. 17, 2017.
4. J. Huang, X. Tu, and J. He, “Design and evaluation of the RUPERT wearable upper extremity exoskeleton robot for clinical and in-home therapies,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 7, pp. 926–935, 2016.
5. H. Kawamoto, K. Kamibayashi, Y. Nakata et al., “Pilot study of locomotion improvement using hybrid assistive limb in chronic stroke patients,” BMC Neurology, vol. 13, no. 1, pp. 93–98, 2013.
6. S. Tanabe, E. Saitoh, S. Hirano et al., “Design of the Wearable Power-Assist Locomotor (WPAL) for paraplegic gait reconstruction,” Disability and Rehabilitation: Assistive Technology, vol. 8, no. 1, pp. 84–91, 2013.
7. T. Kikuchi, T. Tanaka, K. Anzai, S. Kawakami, M. Hosaka, and K. Niino, “Evaluation of line-tracing controller of intelligently controllable walker,” Advanced Robotics, vol. 27, no. 7, pp. 493–502, 2013.
8. J. Huang, Z. H. Guan, T. Matsuno, T. Fukuda, and K. Sekiyama, “Sliding-mode velocity control of mobile-wheeled inverted-pendulum systems,” IEEE Transactions on Robotics, vol. 26, no. 4, pp. 750–758, 2010.
9. Y. Hirata, A. Hara, and K. Kosuge, “Motion control of passive intelligent walker using servo brakes,” IEEE Transactions on Robotics, vol. 23, no. 5, pp. 981–990, 2007.
10. K. Wakita, J. Huang, P. Di, K. Sekiyama, and T. Fukuda, “Human-walking-intention-based motion control of an omnidirectional-type cane robot,” IEEE/ASME Transactions on Mechatronics, vol. 18, no. 1, pp. 285–296, 2013.
11. M. Khatib and R. Chatila, “An extended potential field approach for mobile robot sensor-based motions,” in International Conference on Intelligent Autonomous Systems, pp. 490–496, 1995.
12. K. Iwatsuka, K. Yamamoto, and K. Kato, “Development of a guide dog system for the blind with character recognition ability,” in First Canadian Conference on Computer and Robot Vision, pp. 401–405, London, ON, Canada, 2004.
13. T. Gonnot and J. Saniie, “Integrated machine vision and communication system for blind navigation and guidance,” in 2016 IEEE International Conference on Electro Information Technology (EIT), pp. 0187–0191, Grand Forks, ND, 2016.
14. E. B. Kaiser and M. Lawo, “Wearable navigation system for the visually impaired and blind people,” in 2012 IEEE/ACIS 11th International Conference on Computer and Information Science, pp. 230–233, Shanghai, China, 2012.
15. C. J. Kim and D. Chwa, “Obstacle avoidance method for wheeled mobile robots using interval type-2 fuzzy neural network,” IEEE Transactions on Fuzzy Systems, vol. 23, no. 3, pp. 677–687, 2015.
16. F. Fabrizio and A. D. Luca, “Real-time computation of distance to dynamic obstacles with multiple depth sensors,” IEEE Robotics and Automation Letters, vol. 2, no. 1, pp. 56–63, 2017.
17. J. Huang, P. Di, and T. Fukuda, “Motion control of omni-directional type cane robot based on human intention,” in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2983–2988, Nice, France, 2008.
18. P. Di, Y. Hasegawa, S. Nakagawa et al., “Fall detection and prevention control using walking-aid cane robot,” IEEE/ASME Transactions on Mechatronics, vol. 21, no. 2, pp. 625–637, 2016.
19. Q. Y. Yan, W. X. Xu, J. Huang, and P. C. Su, “Laser and force sensors based human motion intent estimation algorithm for walking-aid robot,” in 2015 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), pp. 1858–1863, Shenyang, China, 2015.
20. W. Wang, Z. G. Hou, L. Cheng et al., “Toward patients' motion intention recognition: dynamics modeling and identification of iLeg, an LLRR under motion constraints,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 7, pp. 1–13, 2016.
21. K. Khokar, R. Alqasemi, S. Sarkar, K. Reed, and R. Dubey, “A novel telerobotic method for human-in-the-loop assisted grasping based on intention recognition,” in 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 4762–4769, Hong Kong, 2014.
22. J. H. Han, S. J. Lee, and J. H. Kim, “Behavior hierarchy-based affordance map for recognition of human intention and its application to human-robot interaction,” IEEE Transactions on Human-Machine Systems, vol. 46, no. 5, pp. 1–15, 2016.
23. J. Huang, W. Huo, W. Xu, S. Mohammed, and Y. Amirat, “Control of upper-limb power-assist exoskeleton using a human-robot interface based on motion intention recognition,” IEEE Transactions on Automation Science and Engineering, vol. 12, no. 4, pp. 1257–1270, 2015.
24. H. C. Sang, J. M. Lee, S. J. Kim, Y. Hwang, and J. An, “Intention recognition method for sit-to-stand and stand-to-sit from electromyogram signals for overground lower-limb rehabilitation robots,” in 2015 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), pp. 418–421, Busan, South Korea, 2015.
25. T. B. Sheridan, Telerobotics, Automation and Human Supervisory Control, The MIT Press, Cambridge, 1992.
26. S. S. Nudehi, R. Mukherjee, and M. Ghodoussi, “A shared-control approach to haptic interface design for minimally invasive telesurgical training,” IEEE Transactions on Control Systems Technology, vol. 13, no. 4, pp. 588–592, 2005.
27. F. Chen, P. Di, J. Huang, H. Sasaki, and T. Fukuda, “Evolutionary artificial potential field method based manipulator path planning,” in 2009 International Symposium on Micro-NanoMechatronics and Human Science, pp. 92–97, Nagoya, Japan, 2009.
28. H. T. Trieu, H. T. Nguyen, and K. Willey, “Shared control strategies for obstacle avoidance tasks in an intelligent wheelchair,” in 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4254–4257, Vancouver, BC, Canada, 2008.
29. W. G. Huh and S. B. Cho, “Optimal partial filters of EEG signals for shared control of vehicle,” in 2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR), pp. 290–293, Fukuoka, Japan, 2015.
30. W. X. Xu, J. Huang, Y. J. Wang, C. J. Tao, and L. Cheng, “Reinforcement learning-based shared control for walking-aid robot and its experimental verification,” Advanced Robotics, vol. 29, no. 22, pp. 1463–1481, 2015.
31. S. Sathish, K. Nithyakalyani, S. Vinurajkumar, C. Vijayalakshmi, and J. Sivaraman, “Control of robotic wheel chair using EMG signals for paralysed persons,” Indian Journal of Science and Technology, vol. 9, no. 1, 2016.
32. A. Accogli, L. Grazi, S. Crea et al., “EMG-based detection of user's intentions for human-machine shared control of an assistive upper-limb exoskeleton,” in Wearable Robotics: Challenges and Trends, pp. 181–185, Springer International Publishing, Cham, 2017.
33. W. Hong, Y. T. Tian, Z. Dong, and M. Zhou, “Extracting features from local environment for intelligent robot system,” Robot, vol. 25, no. 3, 2003.